Yocto 2.4

Move OpenBMC to Yocto 2.4 (rocko)

Tested: Built and verified Witherspoon and Palmetto images
Change-Id: I12057b18610d6fb0e6903c60213690301e9b0c67
Signed-off-by: Brad Bishop <bradleyb@fuzziesquirrel.com>
diff --git a/import-layers/yocto-poky/README b/import-layers/yocto-poky/README
deleted file mode 100644
index 9a52677..0000000
--- a/import-layers/yocto-poky/README
+++ /dev/null
@@ -1,58 +0,0 @@
-Poky
-====
-
-Poky is an integration of various components to form a complete prepackaged
-build system and development environment. It features support for building
-customised embedded device style images. There are reference demo images
-featuring a X11/Matchbox/GTK themed UI called Sato. The system supports
-cross-architecture application development using QEMU emulation and a
-standalone toolchain and SDK with IDE integration.
-
-Additional information on the specifics of hardware that Poky supports
-is available in README.hardware. Further hardware support can easily be added
-in the form of layers which extend the system's capabilities in a modular way.
-
-As an integration layer Poky consists of several upstream projects such as 
-BitBake, OpenEmbedded-Core, Yocto documentation and various sources of information 
-e.g. for the hardware support. Poky is in turn a component of the Yocto Project.
-
-The Yocto Project has extensive documentation about the system including a 
-reference manual which can be found at:
-    http://yoctoproject.org/documentation
-
-OpenEmbedded-Core is a layer containing the core metadata for current versions
-of OpenEmbedded. It is distro-less (can build a functional image with
-DISTRO = "nodistro") and contains only emulated machine support.
-
-For information about OpenEmbedded, see the OpenEmbedded website:
-    http://www.openembedded.org/
-
-Where to Send Patches
-=====================
-
-As Poky is an integration repository (built using a tool called combo-layer),
-patches against the various components should be sent to their respective
-upstreams:
-
-bitbake:
-    Git repository: http://git.openembedded.org/bitbake/
-    Mailing list: bitbake-devel@lists.openembedded.org
-
-documentation:
-    Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/yocto-docs/
-    Mailing list: yocto@yoctoproject.org
-
-meta-poky, meta-yocto-bsp:
-    Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/meta-yocto(-bsp)
-    Mailing list: poky@yoctoproject.org
-
-Everything else should be sent to the OpenEmbedded Core mailing list.  If in
-doubt, check the oe-core git repository for the content you intend to modify.
-Before sending, be sure the patches apply cleanly to the current oe-core git
-repository.
-
-    Git repository: http://git.openembedded.org/openembedded-core/
-    Mailing list: openembedded-core@lists.openembedded.org
-
-Note: The scripts directory should be treated with extra care as it is a mix of
-oe-core and poky-specific files.
diff --git a/import-layers/yocto-poky/README.LSB b/import-layers/yocto-poky/README.LSB
new file mode 100644
index 0000000..c9dca3f
--- /dev/null
+++ b/import-layers/yocto-poky/README.LSB
@@ -0,0 +1,25 @@
+OE-Core aims to be able to provide basic LSB compatible images. There
+are some challenges for OE as LSB isn't always 100% relevant to its
+target embedded and IoT audiences. 
+
+One challenge is that the LSB spec is no longer being actively
+developed [https://github.com/LinuxStandardBase/lsb] and has 
+components which are end of life or significantly dated. OE 
+therefore provides compatibility with the following caveats:
+
+* Qt4 is provided by the separate meta-qt4 layer. It's noted that Qt4
+  is end of life and this isn't something the core project regularly 
+  tests any longer. Users are recommended to group together to support
+  maintenance of that layer. [http://git.yoctoproject.org/cgit/cgit.cgi/meta-qt4/]
+
+* mailx has been dropped since it's no longer being developed upstream
+  and there are better, more modern replacements such as s-nail 
+  (http://sdaoden.eu/code.html) or mailutils (http://mailutils.org/).
+
+* A few perl modules that were required by LSB 4.x aren't provided:
+  libclass-isa, libenv, libdumpvalue, libfile-checktree,
+  libi18n-collate, libpod-plainer.
+
+* libpng 1.2 isn't provided; oe-core includes the latest release of libpng
+  instead.
+
diff --git a/import-layers/yocto-poky/README.hardware b/import-layers/yocto-poky/README.hardware
deleted file mode 100644
index e6ccf78..0000000
--- a/import-layers/yocto-poky/README.hardware
+++ /dev/null
@@ -1,429 +0,0 @@
-                          Poky Hardware README
-                          ====================
-
-This file gives details about using Poky with the reference machines
-supported out of the box. A full list of supported reference target machines
-can be found by looking in the following directories:
-
-   meta/conf/machine/
-   meta-yocto-bsp/conf/machine/
-
-If you are in doubt about using Poky/OpenEmbedded with your hardware, consult
-the documentation for your board/device.
-
-Support for additional devices is normally added by creating BSP layers - for
-more information please see the Yocto Board Support Package (BSP) Developer's
-Guide - documentation source is in documentation/bspguide or download the PDF
-from:
-
-   http://yoctoproject.org/documentation
-
-Support for physical reference hardware has now been split out into a
-meta-yocto-bsp layer which can be removed separately from other layers if not
-needed.
-
-
-QEMU Emulation Targets
-======================
-
-To simplify development, the build system supports building images to
-work with the QEMU emulator in system emulation mode. Several architectures
-are currently supported:
-
-  * ARM (qemuarm)
-  * x86 (qemux86)
-  * x86-64 (qemux86-64)
-  * PowerPC (qemuppc)
-  * MIPS (qemumips)
-
-Use of the QEMU images is covered in the Yocto Project Reference Manual.
-The appropriate MACHINE variable value corresponding to the target is given
-in brackets.
-
-
-Hardware Reference Boards
-=========================
-
-The following boards are supported by the meta-yocto-bsp layer:
-
-  * Texas Instruments Beaglebone (beaglebone)
-  * Freescale MPC8315E-RDB (mpc8315e-rdb)
-
-For more information see the board's section below. The appropriate MACHINE
-variable value corresponding to the board is given in brackets.
-
-Reference Board Maintenance
-===========================
-
-Send pull requests, patches, comments or questions about meta-yocto-bsps to poky@yoctoproject.org
-
-Maintainers: Kevin Hao <kexin.hao@windriver.com>
-             Bruce Ashfield <bruce.ashfield@windriver.com>
-
-Consumer Devices
-================
-
-The following consumer devices are supported by the meta-yocto-bsp layer:
-
-  * Intel x86 based PCs and devices (genericx86)
-  * Ubiquiti Networks EdgeRouter Lite (edgerouter)
-
-For more information see the device's section below. The appropriate MACHINE
-variable value corresponding to the device is given in brackets.
-
-
-
-                      Specific Hardware Documentation
-                      ===============================
-
-
-Intel x86 based PCs and devices (genericx86*)
-=============================================
-
-The genericx86 and genericx86-64 MACHINE are tested on the following platforms:
-
-Intel Xeon/Core i-Series:
-  + Intel NUC5 Series - ix-52xx Series SOC (Broadwell)
-  + Intel NUC6 Series - ix-62xx Series SOC (Skylake)
-  + Intel Shumway Xeon Server
-
-Intel Atom platforms:
-  + MinnowBoard MAX - E3825 SOC (Bay Trail)
-  + MinnowBoard MAX - Turbot (ADI Engineering) - E3826 SOC (Bay Trail)
-    - These boards can run in either 32-bit or 64-bit mode depending on firmware
-    - See minnowboard.org for details 
-  + Intel Braswell SOC
-
-and is likely to work on many unlisted Atom/Core/Xeon based devices. The MACHINE
-type supports ethernet, wifi, sound, and Intel/vesa graphics by default in
-addition to common PC input devices, busses, and so on.
-
-Depending on the device, it can boot from a traditional hard-disk, a USB device,
-or over the network. Writing generated images to physical media is
-straightforward with a caveat for USB devices. The following examples assume the
-target boot device is /dev/sdb, be sure to verify this and use the correct
-device, as the following commands are run as root and are not reversible.
-
-USB Device:
-  1. Build a live image. This image type consists of a simple filesystem
-     without a partition table, which is suitable for USB keys, and with the
-     default setup for the genericx86 machine, this image type is built
-     automatically for any image you build. For example:
-
-     $ bitbake core-image-minimal
-
-  2. Use the "dd" utility to write the image to the raw block device. For
-     example:
-
-     # dd if=core-image-minimal-genericx86.hddimg of=/dev/sdb
-
-  If the device fails to boot with "Boot error" displayed, or apparently
-  stops just after the SYSLINUX version banner, it is likely the BIOS cannot
-  understand the physical layout of the disk (or rather it expects a
-  particular layout and cannot handle anything else). There are two possible
-  solutions to this problem:
-
-  1. Change the BIOS USB Device setting to HDD mode. The label will vary by
-     device, but the idea is to force BIOS to read the Cylinder/Head/Sector
-     geometry from the device.
-
-  2. Use a ".wic" image with an EFI partition
-
-     a) With a default grub-efi bootloader:
-     # dd if=core-image-minimal-genericx86-64.wic of=/dev/sdb
-
-     b) Use systemd-boot instead
-     - Build an image with EFI_PROVIDER="systemd-boot" then use the above
-       dd command to write the image to a USB stick.
-
-
-Texas Instruments Beaglebone (beaglebone)
-=========================================
-
-The Beaglebone is an ARM Cortex-A8 development board with USB, Ethernet, 2D/3D
-accelerated graphics, audio, serial, JTAG, and SD/MMC. The Black adds a faster
-CPU, more RAM, eMMC flash and a micro HDMI port. The beaglebone MACHINE is
-tested on the following platforms:
-
-  o Beaglebone Black A6
-  o Beaglebone A6 (the original "White" model)
-
-The Beaglebone Black has eMMC, while the White does not. Pressing the USER/BOOT
-button when powering on will temporarily change the boot order. But for the sake
-of simplicity, these instructions assume you have erased the eMMC on the Black,
-so its boot behavior matches that of the White and boots off of SD card. To do
-this, issue the following commands from the u-boot prompt:
-
-    # mmc dev 1
-    # mmc erase 0 512
-
-To further tailor these instructions for your board, please refer to the
-documentation at http://www.beagleboard.org/bone and http://www.beagleboard.org/black
-
-From a Linux system with access to the image files perform the following steps:
-
-  1. Build an image. For example:
-
-     $ bitbake core-image-minimal
-
-  2. Use the "dd" utility to write the image to the SD card. For example:
-
-     # dd if=core-image-minimal-beaglebone.wic of=/dev/sdb
-
-  3. Insert the SD card into the Beaglebone and boot the board.
-
-Freescale MPC8315E-RDB (mpc8315e-rdb)
-=====================================
-
-The MPC8315 PowerPC reference platform (MPC8315E-RDB) is aimed at hardware and
-software development of network attached storage (NAS) and digital media server
-applications. The MPC8315E-RDB features the PowerQUICC II Pro processor, which
-includes a built-in security accelerator.
-
-(Note: you may find it easier to order MPC8315E-RDBA; this appears to be the
-same board in an enclosure with accessories. In any case it is fully
-compatible with the instructions given here.)
-
-Setup instructions
-------------------
-
-You will need the following:
-* NFS root setup on your workstation
-* TFTP server installed on your workstation
-* Straight-thru 9-conductor serial cable (DB9, M/F) connected from your 
-  PC to UART1
-* Ethernet connected to the first ethernet port on the board
-
---- Preparation ---
-
-Note: if you have altered your board's ethernet MAC address(es) from the
-defaults, or you need to do so because you want multiple boards on the same
-network, then you will need to change the values in the dts file (patch
-linux/arch/powerpc/boot/dts/mpc8315erdb.dts within the kernel source). If
-you have left them at the factory default then you shouldn't need to do
-anything here.
-
-Note: To boot from USB disk you need u-boot that supports 'ext2load usb'
-command. You need to setup TFTP server, load u-boot from there and
-flash it to NOR flash.
-
-Beware! Flashing the bootloader is a potentially dangerous operation that can
-brick your device if done incorrectly. Please make sure you understand
-what the commands below mean before executing them.
-
-Load the new u-boot.bin from TFTP server to memory address 200000
-=> tftp 200000 u-boot.bin
-
-Disable flash protection
-=> protect off all
-
-Erase the old u-boot from fe000000 to fe06ffff in NOR flash.
-The size is 0x70000 (458752 bytes)
-=> erase fe000000 fe06ffff
-
-Copy the new u-boot from address 200000 to fe000000
-the size is 0x70000. It has to be greater or equal to u-boot.bin size
-=> cp.b 200000 fe000000 70000
-
-Enable flash protection again
-=> protect on all
-
-Reset the board
-=> reset
-
---- Booting from USB disk ---
-
- 1. Flash partitioned image to the USB disk
-
-    # dd if=core-image-minimal-mpc8315e-rdb.wic of=/dev/sdb
-
- 2. Plug USB disk into the MPC8315 board
-
- 3. Connect the board's first serial port to your workstation and then start up
-    your favourite serial terminal so that you will be able to interact with
-    the serial console. If you don't have a favourite, picocom is suggested:
-
-  $ picocom /dev/ttyUSB0 -b 115200
-
- 4. Power up or reset the board and press a key on the terminal when prompted
-    to get to the U-Boot command line
-
- 5. Optional. Load the u-boot.bin from the USB disk:
-
- => usb start
- => ext2load usb 0:1 200000 u-boot.bin
-
-    and flash it to NOR flash as described above.
-
- 6. Set fdtaddr and loadaddr. This is not necessary if you set them before.
-
- => setenv fdtaddr a00000
- => setenv loadaddr 1000000
-
- 7. Load the kernel and dtb from first partition of the USB disk:
-
- => usb start
- => ext2load usb 0:1 $loadaddr uImage
- => ext2load usb 0:1 $fdtaddr dtb
-
- 8. Set bootargs and boot up the device
-
- => setenv bootargs root=/dev/sdb2 rw rootwait console=ttyS0,115200
- => bootm $loadaddr - $fdtaddr
-
-
---- Booting from NFS root ---
-
-Load the kernel and dtb (device tree blob), and boot the system as follows:
-
- 1. Get the kernel (uImage-mpc8315e-rdb.bin) and dtb (uImage-mpc8315e-rdb.dtb)
-    files from the tmp/deploy directory, and make them available on your TFTP
-    server.
-
- 2. Connect the board's first serial port to your workstation and then start up
-    your favourite serial terminal so that you will be able to interact with
-    the serial console. If you don't have a favourite, picocom is suggested:
-
-  $ picocom /dev/ttyUSB0 -b 115200
-
- 3. Power up or reset the board and press a key on the terminal when prompted
-    to get to the U-Boot command line
-
- 4. Set up the environment in U-Boot:
-
- => setenv ipaddr <board ip>
- => setenv serverip <tftp server ip>
- => setenv bootargs root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:255.255.255.0:mpc8315e:eth0:off console=ttyS0,115200
-
- 5. Download the kernel and dtb, and boot:
-
- => tftp 1000000 uImage-mpc8315e-rdb.bin
- => tftp 2000000 uImage-mpc8315e-rdb.dtb
- => bootm 1000000 - 2000000
-
---- Booting from JFFS2 root ---
-
- 1. First boot the board with NFS root.
-
- 2. Erase the MTD partition which will be used as root:
-
-    $ flash_eraseall  /dev/mtd3
-
- 3. Copy the JFFS2 image to the MTD partition:
-
-    $ flashcp core-image-minimal-mpc8315e-rdb.jffs2 /dev/mtd3
-
- 4. Then reboot the board and set up the environment in U-Boot:
-
-    => setenv bootargs root=/dev/mtdblock3 rootfstype=jffs2 console=ttyS0,115200
-
-
-Ubiquiti Networks EdgeRouter Lite (edgerouter)
-==============================================
-
-The EdgeRouter Lite is part of the EdgeMax series. It is a MIPS64 router
-(based on the Cavium Octeon processor) with 512MB of RAM, which uses an
-internal USB pendrive for storage.
-
-Setup instructions
-------------------
-
-You will need the following:
-* RJ45 -> serial ("rollover") cable connected from your PC to the CONSOLE
-  port on the device
-* Ethernet connected to the first ethernet port on the board
-
-If using NFS as part of the setup process, you will also need:
-* NFS root setup on your workstation
-* TFTP server installed on your workstation (if fetching the kernel from
-  TFTP, see below).
-
---- Preparation ---
-
-Build an image (e.g. core-image-minimal) using "edgerouter" as the MACHINE.
-The following instructions are based on core-image-minimal; other image
-targets should work similarly.
-
---- Booting from NFS root / kernel via TFTP ---
-
-Load the kernel, and boot the system as follows:
-
- 1. Get the kernel (vmlinux) file from the tmp/deploy/images/edgerouter
-    directory, and make them available on your TFTP server.
-
- 2. Connect the board's first serial port to your workstation and then start up
-    your favourite serial terminal so that you will be able to interact with
-    the serial console. If you don't have a favourite, picocom is suggested:
-
-  $ picocom /dev/ttyS0 -b 115200
-
- 3. Power up or reset the board and press a key on the terminal when prompted
-    to get to the U-Boot command line
-
- 4. Set up the environment in U-Boot:
-
- => setenv ipaddr <board ip>
- => setenv serverip <tftp server ip>
-
- 5. Download the kernel and boot:
-
- => tftp $loadaddr vmlinux
- => bootoctlinux $loadaddr coremask=0x3 root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:<netmask>:edgerouter:eth0:off mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)
-
---- Booting from USB disk ---
-
-To boot from the USB disk, you either need to remove it from the edgerouter
-box and populate it from another computer, or use a previously booted NFS
-image and populate from the edgerouter itself.
-
-Type 1: Use partitioned image
------------------------------
-
-Steps:
-
- 1. Remove the USB disk from the edgerouter and insert it into a computer
-    that has access to your build artifacts.
-
- 2. Flash the image.
-
-    # dd if=core-image-minimal-edgerouter.wic of=/dev/sdb
-
- 3. Insert USB disk into the edgerouter and boot it.
-
-Type 2: NFS
------------
-
-Note: If you place the kernel on the ext3 partition, you must re-create the
-      ext3 filesystem, since the factory u-boot can only handle 128 byte inodes and
-      cannot read the partition otherwise.
-
-      These boot instructions assume that you have recreated the ext3 filesystem
-      with 128 byte inodes, that you have an updated u-boot, or that you are
-      running an image capable of making the filesystem on the board itself.
-
-
- 1. Boot from NFS root
-
- 2. Mount the USB disk partition 2 and then extract the contents of
-    tmp/deploy/core-image-XXXX.tar.bz2 into it.
-
-    Before starting, copy core-image-minimal-xxx.tar.bz2 and vmlinux into
-    rootfs path on your workstation.
-
-    and then,
-  
-      # mount /dev/sda2 /media/sda2
-      # tar -xvjpf core-image-minimal-XXX.tar.bz2 -C /media/sda2
-      # cp vmlinux /media/sda2/boot/vmlinux
-      # umount /media/sda2
-      # reboot
-
- 3. Reboot the board and press a key on the terminal when prompted to get to the U-Boot
-    command line:
-
-    # reboot
-
- 4. Load the kernel and boot:
-
-      => ext2load usb 0:2 $loadaddr boot/vmlinux
-      => bootoctlinux $loadaddr coremask=0x3 root=/dev/sda2 rw rootwait mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)
diff --git a/import-layers/yocto-poky/README.hardware b/import-layers/yocto-poky/README.hardware
new file mode 120000
index 0000000..8b6258d
--- /dev/null
+++ b/import-layers/yocto-poky/README.hardware
@@ -0,0 +1 @@
+meta-yocto-bsp/README.hardware
\ No newline at end of file
diff --git a/import-layers/yocto-poky/README.poky b/import-layers/yocto-poky/README.poky
new file mode 120000
index 0000000..1877dca
--- /dev/null
+++ b/import-layers/yocto-poky/README.poky
@@ -0,0 +1 @@
+meta-poky/README.poky
\ No newline at end of file
diff --git a/import-layers/yocto-poky/README.qemu b/import-layers/yocto-poky/README.qemu
new file mode 100644
index 0000000..9f56b7d
--- /dev/null
+++ b/import-layers/yocto-poky/README.qemu
@@ -0,0 +1,15 @@
+QEMU Emulation Targets
+======================
+
+To simplify development, the build system supports building images to
+work with the QEMU emulator in system emulation mode. Several architectures
+are currently supported in 32 and 64 bit variants:
+
+  * ARM (qemuarm + qemuarm64)
+  * x86 (qemux86 + qemux86-64)
+  * PowerPC (qemuppc only)
+  * MIPS (qemumips + qemumips64)
+
+Use of the QEMU images is covered in the Yocto Project Reference Manual.
+The appropriate MACHINE variable value corresponding to the target is given
+in brackets.
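
For orientation, here is a minimal sketch of how one of these images is
typically built and booted, assuming MACHINE is set in conf/local.conf and
the standard runqemu helper is available in your environment:

    $ echo 'MACHINE = "qemux86-64"' >> conf/local.conf
    $ bitbake core-image-minimal
    $ runqemu qemux86-64
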
diff --git a/import-layers/yocto-poky/bitbake/README b/import-layers/yocto-poky/bitbake/README
new file mode 100644
index 0000000..479c376
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/README
@@ -0,0 +1,35 @@
+Bitbake
+=======
+
+BitBake is a generic task execution engine that allows shell and Python tasks to be run
+efficiently and in parallel while working within complex inter-task dependency constraints.
+One of BitBake's main users, OpenEmbedded, takes this core and builds embedded Linux software
+stacks using a task-oriented approach.
+
+For information about Bitbake, see the OpenEmbedded website:
+    http://www.openembedded.org/
+
+The BitBake documentation can be found under the doc directory, or in its
+integrated HTML form at the Yocto Project website:
+    http://yoctoproject.org/documentation
+
+Contributing
+------------
+
+Please refer to
+http://www.openembedded.org/wiki/How_to_submit_a_patch_to_OpenEmbedded
+for guidelines on how to submit patches. Note that the linked documentation is
+intended for OpenEmbedded (and its core) rather than for BitBake patches (which
+go to bitbake-devel@lists.openembedded.org), but the general guidelines still
+apply. Once the commit(s) have been created, send the patch with git-send-email.
+For example, to send the last commit (HEAD) on the current branch, type:
+
+    git send-email -M -1 --to bitbake-devel@lists.openembedded.org
+
+Mailing list:
+
+    http://lists.openembedded.org/mailman/listinfo/bitbake-devel
+
+Source code:
+
+    http://git.openembedded.org/bitbake/
diff --git a/import-layers/yocto-poky/bitbake/bin/bitbake b/import-layers/yocto-poky/bitbake/bin/bitbake
index 9f5c2d4..3acc23a 100755
--- a/import-layers/yocto-poky/bitbake/bin/bitbake
+++ b/import-layers/yocto-poky/bitbake/bin/bitbake
@@ -36,9 +36,9 @@
 from bb.main import bitbake_main, BitBakeConfigParameters, BBMainException
 
 if sys.getfilesystemencoding() != "utf-8":
-    sys.exit("Please use a locale setting which supports utf-8.\nPython can't change the filesystem locale after loading so we need a utf-8 when python starts or things won't work.")
+    sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
 
-__version__ = "1.34.0"
+__version__ = "1.36.0"
 
 if __name__ == "__main__":
     if __version__ != bb.__version__:
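
As the updated message suggests, the host needs a UTF-8 locale exported before
bitbake starts. A minimal sketch, assuming en_US.UTF-8 is installed on the
build host:

    $ locale -a | grep -i en_US.utf   # confirm the locale exists
    $ export LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8
    $ bitbake core-image-minimal
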
diff --git a/import-layers/yocto-poky/bitbake/bin/bitbake-diffsigs b/import-layers/yocto-poky/bitbake/bin/bitbake-diffsigs
index eb2f859..4e6bbdd 100755
--- a/import-layers/yocto-poky/bitbake/bin/bitbake-diffsigs
+++ b/import-layers/yocto-poky/bitbake/bin/bitbake-diffsigs
@@ -34,18 +34,39 @@
 
 logger = bb.msg.logger_create('bitbake-diffsigs')
 
+def find_siginfo(tinfoil, pn, taskname, sigs=None):
+    result = None
+    tinfoil.set_event_mask(['bb.event.FindSigInfoResult',
+                            'logging.LogRecord',
+                            'bb.command.CommandCompleted',
+                            'bb.command.CommandFailed'])
+    ret = tinfoil.run_command('findSigInfo', pn, taskname, sigs)
+    if ret:
+        while True:
+            event = tinfoil.wait_event(1)
+            if event:
+                if isinstance(event, bb.command.CommandCompleted):
+                    break
+                elif isinstance(event, bb.command.CommandFailed):
+                    logger.error(str(event))
+                    sys.exit(2)
+                elif isinstance(event, bb.event.FindSigInfoResult):
+                    result = event.result
+                elif isinstance(event, logging.LogRecord):
+                    logger.handle(event)
+    else:
+        logger.error('No result returned from findSigInfo command')
+        sys.exit(2)
+    return result
+
 def find_compare_task(bbhandler, pn, taskname, sig1=None, sig2=None, color=False):
     """ Find the most recent signature files for the specified PN/task and compare them """
 
-    if not hasattr(bb.siggen, 'find_siginfo'):
-        logger.error('Metadata does not support finding signature data files')
-        sys.exit(1)
-
     if not taskname.startswith('do_'):
         taskname = 'do_%s' % taskname
 
     if sig1 and sig2:
-        sigfiles = bb.siggen.find_siginfo(pn, taskname, [sig1, sig2], bbhandler.config_data)
+        sigfiles = find_siginfo(bbhandler, pn, taskname, [sig1, sig2])
         if len(sigfiles) == 0:
             logger.error('No sigdata files found matching %s %s matching either %s or %s' % (pn, taskname, sig1, sig2))
             sys.exit(1)
@@ -57,7 +78,7 @@
             sys.exit(1)
         latestfiles = [sigfiles[sig1], sigfiles[sig2]]
     else:
-        filedates = bb.siggen.find_siginfo(pn, taskname, None, bbhandler.config_data)
+        filedates = find_siginfo(bbhandler, pn, taskname)
         latestfiles = sorted(filedates.keys(), key=lambda f: filedates[f])[-3:]
         if not latestfiles:
             logger.error('No sigdata files found matching %s %s' % (pn, taskname))
@@ -69,7 +90,7 @@
     # Define recursion callback
     def recursecb(key, hash1, hash2):
         hashes = [hash1, hash2]
-        hashfiles = bb.siggen.find_siginfo(key, None, hashes, bbhandler.config_data)
+        hashfiles = find_siginfo(bbhandler, key, None, hashes)
 
         recout = []
         if len(hashfiles) == 0:
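
With this change bitbake-diffsigs asks the bitbake server for signature data
via the findSigInfo command instead of calling bb.siggen.find_siginfo()
directly. Command-line usage is unchanged; for example, to compare recent
signatures of a task from an initialized build directory (the recipe name is
illustrative, and this assumes the usual -t/--task option):

    $ bitbake-diffsigs -t busybox do_compile
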
diff --git a/import-layers/yocto-poky/bitbake/bin/bitbake-layers b/import-layers/yocto-poky/bitbake/bin/bitbake-layers
index 2b05d28..d184011 100755
--- a/import-layers/yocto-poky/bitbake/bin/bitbake-layers
+++ b/import-layers/yocto-poky/bitbake/bin/bitbake-layers
@@ -43,6 +43,7 @@
         add_help=False)
     parser.add_argument('-d', '--debug', help='Enable debug output', action='store_true')
     parser.add_argument('-q', '--quiet', help='Print only errors', action='store_true')
+    parser.add_argument('-F', '--force', help='Force add without recipe parse verification', action='store_true')
     parser.add_argument('--color', choices=['auto', 'always', 'never'], default='auto', help='Colorize output (where %(metavar)s is %(choices)s)', metavar='COLOR')
 
     global_args, unparsed_args = parser.parse_known_args()
@@ -89,7 +90,7 @@
 
         if getattr(args, 'parserecipes', False):
             tinfoil.config_data.disableTracking()
-            tinfoil.parseRecipes()
+            tinfoil.parse_recipes()
             tinfoil.config_data.enableTracking()
 
         return args.func(args)
diff --git a/import-layers/yocto-poky/bitbake/bin/bitbake-selftest b/import-layers/yocto-poky/bitbake/bin/bitbake-selftest
index 380e003..afe1603 100755
--- a/import-layers/yocto-poky/bitbake/bin/bitbake-selftest
+++ b/import-layers/yocto-poky/bitbake/bin/bitbake-selftest
@@ -28,6 +28,7 @@
 tests = ["bb.tests.codeparser",
          "bb.tests.cow",
          "bb.tests.data",
+         "bb.tests.event",
          "bb.tests.fetch",
          "bb.tests.parse",
          "bb.tests.utils"]
diff --git a/import-layers/yocto-poky/bitbake/bin/bitbake-worker b/import-layers/yocto-poky/bitbake/bin/bitbake-worker
index ee2d622..e925054 100755
--- a/import-layers/yocto-poky/bitbake/bin/bitbake-worker
+++ b/import-layers/yocto-poky/bitbake/bin/bitbake-worker
@@ -17,7 +17,7 @@
 from threading import Thread
 
 if sys.getfilesystemencoding() != "utf-8":
-    sys.exit("Please use a locale setting which supports utf-8.\nPython can't change the filesystem locale after loading so we need a utf-8 when python starts or things won't work.")
+    sys.exit("Please use a locale setting which supports UTF-8 (such as LANG=en_US.UTF-8).\nPython can't change the filesystem locale after loading so we need a UTF-8 when Python starts or things won't work.")
 
 # Users shouldn't be running this code directly
 if len(sys.argv) != 2 or not sys.argv[1].startswith("decafbad"):
@@ -499,4 +499,3 @@
 
 workerlog_write("exitting")
 sys.exit(0)
-
diff --git a/import-layers/yocto-poky/bitbake/bin/git-make-shallow b/import-layers/yocto-poky/bitbake/bin/git-make-shallow
new file mode 100755
index 0000000..296d3a3
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/bin/git-make-shallow
@@ -0,0 +1,165 @@
+#!/usr/bin/env python3
+"""git-make-shallow: make the current git repository shallow
+
+Remove the history of the specified revisions, then optionally filter the
+available refs to those specified.
+"""
+
+import argparse
+import collections
+import errno
+import itertools
+import os
+import subprocess
+import sys
+
+version = 1.0
+
+
+def main():
+    if sys.version_info < (3, 4, 0):
+        sys.exit('Python 3.4 or greater is required')
+
+    git_dir = check_output(['git', 'rev-parse', '--git-dir']).rstrip()
+    shallow_file = os.path.join(git_dir, 'shallow')
+    if os.path.exists(shallow_file):
+        try:
+            check_output(['git', 'fetch', '--unshallow'])
+        except subprocess.CalledProcessError:
+            try:
+                os.unlink(shallow_file)
+            except OSError as exc:
+                if exc.errno != errno.ENOENT:
+                    raise
+
+    args = process_args()
+    revs = check_output(['git', 'rev-list'] + args.revisions).splitlines()
+
+    make_shallow(shallow_file, args.revisions, args.refs)
+
+    ref_revs = check_output(['git', 'rev-list'] + args.refs).splitlines()
+    remaining_history = set(revs) & set(ref_revs)
+    for rev in remaining_history:
+        if check_output(['git', 'rev-parse', '{}^@'.format(rev)]):
+            sys.exit('Error: %s was not made shallow' % rev)
+
+    filter_refs(args.refs)
+
+    if args.shrink:
+        shrink_repo(git_dir)
+        subprocess.check_call(['git', 'fsck', '--unreachable'])
+
+
+def process_args():
+    # TODO: add argument to automatically keep local-only refs, since they
+    # can't be easily restored with a git fetch.
+    parser = argparse.ArgumentParser(description='Remove the history of the specified revisions, then optionally filter the available refs to those specified.')
+    parser.add_argument('--ref', '-r', metavar='REF', action='append', dest='refs', help='remove all but the specified refs (cumulative)')
+    parser.add_argument('--shrink', '-s', action='store_true', help='shrink the git repository by repacking and pruning')
+    parser.add_argument('revisions', metavar='REVISION', nargs='+', help='a git revision/commit')
+    if len(sys.argv) < 2:
+        parser.print_help()
+        sys.exit(2)
+
+    args = parser.parse_args()
+
+    if args.refs:
+        args.refs = check_output(['git', 'rev-parse', '--symbolic-full-name'] + args.refs).splitlines()
+    else:
+        args.refs = get_all_refs(lambda r, t, tt: t == 'commit' or tt == 'commit')
+
+    args.refs = list(filter(lambda r: not r.endswith('/HEAD'), args.refs))
+    args.revisions = check_output(['git', 'rev-parse'] + ['%s^{}' % i for i in args.revisions]).splitlines()
+    return args
+
+
+def check_output(cmd, input=None):
+    return subprocess.check_output(cmd, universal_newlines=True, input=input)
+
+
+def make_shallow(shallow_file, revisions, refs):
+    """Remove the history of the specified revisions."""
+    for rev in follow_history_intersections(revisions, refs):
+        print("Processing %s" % rev)
+        with open(shallow_file, 'a') as f:
+            f.write(rev + '\n')
+
+
+def get_all_refs(ref_filter=None):
+    """Return all the existing refs in this repository, optionally filtering the refs."""
+    ref_output = check_output(['git', 'for-each-ref', '--format=%(refname)\t%(objecttype)\t%(*objecttype)'])
+    ref_split = [tuple(iter_extend(l.rsplit('\t'), 3)) for l in ref_output.splitlines()]
+    if ref_filter:
+        ref_split = (e for e in ref_split if ref_filter(*e))
+    refs = [r[0] for r in ref_split]
+    return refs
+
+
+def iter_extend(iterable, length, obj=None):
+    """Ensure that iterable is the specified length by extending with obj."""
+    return itertools.islice(itertools.chain(iterable, itertools.repeat(obj)), length)
+
+
+def filter_refs(refs):
+    """Remove all but the specified refs from the git repository."""
+    all_refs = get_all_refs()
+    to_remove = set(all_refs) - set(refs)
+    if to_remove:
+        check_output(['xargs', '-0', '-n', '1', 'git', 'update-ref', '-d', '--no-deref'],
+                     input=''.join(l + '\0' for l in to_remove))
+
+
+def follow_history_intersections(revisions, refs):
+    """Determine all the points where the history of the specified revisions intersects the specified refs."""
+    queue = collections.deque(revisions)
+    seen = set()
+
+    for rev in iter_except(queue.popleft, IndexError):
+        if rev in seen:
+            continue
+
+        parents = check_output(['git', 'rev-parse', '%s^@' % rev]).splitlines()
+
+        yield rev
+        seen.add(rev)
+
+        if not parents:
+            continue
+
+        check_refs = check_output(['git', 'merge-base', '--independent'] + sorted(refs)).splitlines()
+        for parent in parents:
+            for ref in check_refs:
+                print("Checking %s vs %s" % (parent, ref))
+                try:
+                    merge_base = check_output(['git', 'merge-base', parent, ref]).rstrip()
+                except subprocess.CalledProcessError:
+                    continue
+                else:
+                    queue.append(merge_base)
+
+
+def iter_except(func, exception, start=None):
+    """Yield a function repeatedly until it raises an exception."""
+    try:
+        if start is not None:
+            yield start()
+        while True:
+            yield func()
+    except exception:
+        pass
+
+
+def shrink_repo(git_dir):
+    """Shrink the newly shallow repository, removing the unreachable objects."""
+    subprocess.check_call(['git', 'reflog', 'expire', '--expire-unreachable=now', '--all'])
+    subprocess.check_call(['git', 'repack', '-ad'])
+    try:
+        os.unlink(os.path.join(git_dir, 'objects', 'info', 'alternates'))
+    except OSError as exc:
+        if exc.errno != errno.ENOENT:
+            raise
+    subprocess.check_call(['git', 'prune', '--expire', 'now'])
+
+
+if __name__ == '__main__':
+    main()
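
A usage sketch for the new helper, based on the options it defines above; the
script lives in bitbake/bin/ and is assumed to be on PATH here, and the ref
and revision arguments are illustrative only:

    $ cd /path/to/clone
    $ git-make-shallow --shrink --ref refs/heads/master HEAD~20
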
diff --git a/import-layers/yocto-poky/bitbake/bin/toaster b/import-layers/yocto-poky/bitbake/bin/toaster
index 61a4a0f..4036f0a 100755
--- a/import-layers/yocto-poky/bitbake/bin/toaster
+++ b/import-layers/yocto-poky/bitbake/bin/toaster
@@ -18,12 +18,21 @@
 # along with this program. If not, see http://www.gnu.org/licenses/.
 
 HELP="
-Usage: source toaster start|stop [webport=<address:port>] [noweb]
+Usage: source toaster start|stop [webport=<address:port>] [noweb] [nobuild]
     Optional arguments:
-        [noweb] Setup the environment for building with toaster but don't start the development server
+        [nobuild] Setup the environment for capturing builds with toaster but disable managed builds
+        [noweb] Setup the environment for capturing builds with toaster but don't start the web server
         [webport] Set the development server (default: localhost:8000)
 "
 
+custom_extention()
+{
+    custom_extension=$BBBASEDIR/lib/toaster/orm/fixtures/custom_toaster_append.sh
+    if [ -f $custom_extension ] ; then
+        $custom_extension $*
+    fi
+}
+
 databaseCheck()
 {
     retval=0
@@ -50,6 +59,11 @@
 webserverKillAll()
 {
     local pidfile
+    if [ -f ${BUILDDIR}/.toastermain.pid ] ; then
+        custom_extention web_stop_postpend
+    else
+        custom_extention noweb_stop_postpend
+    fi
     for pidfile in ${BUILDDIR}/.toastermain.pid ${BUILDDIR}/.runbuilds.pid; do
         if [ -f ${pidfile} ]; then
             pid=`cat ${pidfile}`
@@ -89,6 +103,7 @@
     else
         echo "Toaster development webserver started at http://$ADDR_PORT"
         echo -e "\nYou can now run 'bitbake <target>' on the command line and monitor your build in Toaster.\nYou can also use a Toaster project to configure and run a build.\n"
+        custom_extention web_start_postpend $ADDR_PORT
     fi
 
     return $retval
@@ -116,8 +131,14 @@
     # Verify Django version
     reqfile=$(python3 -c "import os; print(os.path.realpath('$BBBASEDIR/toaster-requirements.txt'))")
     exp='s/Django\([><=]\+\)\([^,]\+\),\([><=]\+\)\(.\+\)/'
-    exp=$exp'import sys,django;version=django.get_version().split(".");'
-    exp=$exp'sys.exit(not (version \1 "\2".split(".") and version \3 "\4".split(".")))/p'
+    # expand version parts to 2 digits to support 1.10.x > 1.8
+    # (note:helper functions hard to insert in-line)
+    exp=$exp'import sys,django;'
+    exp=$exp'version=["%02d" % int(n) for n in django.get_version().split(".")];'
+    exp=$exp'vmin=["%02d" % int(n) for n in "\2".split(".")];'
+    exp=$exp'vmax=["%02d" % int(n) for n in "\4".split(".")];'
+    exp=$exp'sys.exit(not (version \1 vmin and version \3 vmax))'
+    exp=$exp'/p'
     if ! sed -n "$exp" $reqfile | python3 - ; then
         req=`grep ^Django $reqfile`
         echo "This program needs $req"
@@ -162,8 +183,8 @@
 unset OE_ROOT
 
 
-
 WEBSERVER=1
+export TOASTER_BUILDSERVER=1
 ADDR_PORT="localhost:8000"
 unset CMD
 for param in $*; do
@@ -171,6 +192,9 @@
     noweb )
             WEBSERVER=0
     ;;
+    nobuild )
+            TOASTER_BUILDSERVER=0
+    ;;
     start )
             CMD=$param
     ;;
@@ -235,6 +259,7 @@
 echo "The system will $CMD."
 
 # Execute the commands
+custom_extention toaster_prepend $CMD $ADDR_PORT
 
 case $CMD in
     start )
@@ -256,22 +281,28 @@
             if [ ! -f "$TOASTER_DIR/toaster.sqlite" ] ; then
                 if ! databaseCheck; then
                     echo "Failed ${CMD}."
-                  return 4
+                    return 4
                 fi
             fi
+            custom_extention noweb_start_postpend $ADDR_PORT
         fi
         if [ $WEBSERVER -gt 0 ] && ! webserverStartAll; then
             echo "Failed ${CMD}."
             return 4
         fi
         export BITBAKE_UI='toasterui'
-        $MANAGE runbuilds \
-           </dev/null >>${BUILDDIR}/toaster_runbuilds.log 2>&1 \
-           & echo $! >${BUILDDIR}/.runbuilds.pid
+        if [ $TOASTER_BUILDSERVER -eq 1 ] ; then
+            $MANAGE runbuilds \
+               </dev/null >>${BUILDDIR}/toaster_runbuilds.log 2>&1 \
+               & echo $! >${BUILDDIR}/.runbuilds.pid
+        else
+            echo "Toaster build server not started."
+        fi
 
         # set fail safe stop system on terminal exit
         trap stop_system SIGHUP
         echo "Successful ${CMD}."
+        custom_extention toaster_postpend $CMD $ADDR_PORT
         return 0
     ;;
     stop )
@@ -279,3 +310,5 @@
         echo "Successful ${CMD}."
     ;;
 esac
+custom_extention toaster_postpend $CMD $ADDR_PORT
+
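
The custom_extention helper added above runs
lib/toaster/orm/fixtures/custom_toaster_append.sh, if it exists, passing an
event name (toaster_prepend, web_start_postpend, noweb_stop_postpend, and so
on) plus any extra arguments such as the address:port. A purely illustrative
sketch of such a hook:

    #!/bin/sh
    # custom_toaster_append.sh - invoked by the toaster script with an event
    # name followed by optional arguments (command, address:port, ...).
    event="$1"
    shift
    case "$event" in
        web_start_postpend)
            echo "Toaster web UI started at http://$1"
            ;;
        toaster_prepend|toaster_postpend)
            echo "toaster event $event: $*"
            ;;
        *)
            # ignore events this hook does not care about
            ;;
    esac
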
diff --git a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml
index d1ce43e..c721e86 100644
--- a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml
+++ b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-fetching.xml
@@ -620,7 +620,9 @@
                         The Git Submodules fetcher is not a complete fetcher
                         implementation.
                         The fetcher has known issues where it does not use the
-                        normal source mirroring infrastructure properly.
+                        normal source mirroring infrastructure properly. Further,
+                        the submodule sources it fetches are not visible to the
+                        licensing and source archiving infrastructures.
                     </para>
                 </note>
             </para>
diff --git a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml
index 2685c0e..9253eaf 100644
--- a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml
+++ b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-hello.xml
@@ -128,15 +128,8 @@
         </para>
 
         <note>
-            This example was inspired by and drew heavily from these sources:
-            <itemizedlist>
-                <listitem><para>
-                    <ulink url="http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html">Mailing List post - The BitBake equivalent of "Hello, World!"</ulink>
-                    </para></listitem>
-                <listitem><para>
-                    <ulink url="https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/">Hambedded Linux blog post - From Bitbake Hello World to an Image</ulink>
-                    </para></listitem>
-            </itemizedlist>
+            This example was inspired by and drew heavily from
+            <ulink url="http://www.mail-archive.com/yocto@yoctoproject.org/msg09379.html">Mailing List post - The BitBake equivalent of "Hello, World!"</ulink>.
         </note>
 
         <para>
@@ -269,7 +262,7 @@
                 and define some key BitBake variables.
                 For more information on the <filename>bitbake.conf</filename>,
                 see
-                <ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#an-overview-of-bitbakeconf'></ulink>
+                <ulink url='http://git.openembedded.org/bitbake/tree/conf/bitbake.conf'></ulink>.
                 </para>
                 <para>Use the following commands to create the <filename>conf</filename>
                 directory in the project directory:
@@ -352,9 +345,6 @@
                 Of course, the <filename>base.bbclass</filename> can have much
                 more depending on which build environments BitBake is
                 supporting.
-                For more information on the <filename>base.bbclass</filename> file,
-                you can look at
-                <ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#tasks'></ulink>.
                 </para></listitem>
             <listitem><para><emphasis>Run Bitbake:</emphasis>
                 After making sure that the <filename>classes/base.bbclass</filename>
@@ -375,8 +365,8 @@
                 code separate from the general metadata used by BitBake.
                 Thus, this example creates and uses a layer called "mylayer".
                 <note>
-                    You can find additional information on adding a layer at
-                    <ulink url='https://web.archive.org/web/20150325165911/http://hambedded.org/blog/2012/11/24/from-bitbake-hello-world-to-an-image/#adding-an-example-layer'></ulink>.
+                    You can find additional information on layers at
+                    <ulink url='http://www.yoctoproject.org/docs/2.3/bitbake-user-manual/bitbake-user-manual.html#layers'></ulink>.
                 </note>
                 </para>
                 <para>Minimally, you need a recipe file and a layer configuration
diff --git a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml
index ca7f724..08d9afd 100644
--- a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml
+++ b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-intro.xml
@@ -440,7 +440,7 @@
                     Build Checkout:</emphasis>
                     A final possibility for getting a copy of BitBake is that it
                     already comes with your checkout of a larger Bitbake-based build
-                    system, such as Poky or Yocto Project.
+                    system, such as Poky.
                     Rather than manually checking out individual layers and
                     gluing them together yourself, you can check
                     out an entire build system.
diff --git a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml
index 1d1e5b3..0cfa53d 100644
--- a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml
+++ b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-metadata.xml
@@ -669,7 +669,7 @@
                         <literallayout class='monospaced'>
      DEPENDS = "glibc ncurses"
      OVERRIDES = "machine:local"
-     DEPENDS_append_machine = "libmad"
+     DEPENDS_append_machine = " libmad"
                         </literallayout>
                         In this example, <filename>DEPENDS</filename> becomes
                         "glibc ncurses libmad".
@@ -899,11 +899,12 @@
 
             <para>
                 The <filename>inherit</filename> directive is a rudimentary
-                means of specifying what classes of functionality your
-                recipes require.
+                means of specifying functionality contained in class files
+                that your recipes require.
                 For example, you can easily abstract out the tasks involved in
                 building a package that uses Autoconf and Automake and put
-                those tasks into a class file that can be used by your recipe.
+                those tasks into a class file and then have your recipe
+                inherit that class file.
             </para>
 
             <para>
@@ -922,13 +923,24 @@
                     inherited class within your recipe by doing so
                     after the "inherit" statement.
                 </note>
+                If you want to use the directive to inherit
+                multiple classes, separate them with spaces.
+                The following example shows how to inherit both the
+                <filename>buildhistory</filename> and <filename>rm_work</filename>
+                classes:
+                <literallayout class='monospaced'>
+     inherit buildhistory rm_work
+                </literallayout>
             </para>
 
             <para>
-                If necessary, it is possible to inherit a class
-                conditionally by using
-                a variable expression after the <filename>inherit</filename>
-                statement.
+                An advantage with the inherit directive as compared to both
+                the
+                <link linkend='include-directive'>include</link> and
+                <link linkend='require-inclusion'>require</link> directives
+                is that you can inherit class files conditionally.
+                You can accomplish this by using a variable expression
+                after the <filename>inherit</filename> statement.
                 Here is an example:
                 <literallayout class='monospaced'>
      inherit ${VARNAME}
@@ -985,6 +997,17 @@
             </para>
 
             <para>
+                The include directive is a more generic method of including
+                functionality as compared to the
+                <link linkend='inherit-directive'>inherit</link> directive,
+                which is restricted to class (i.e. <filename>.bbclass</filename>)
+                files.
+                The include directive is applicable for any other kind of
+                shared or encapsulated functionality or configuration that
+                does not suit a <filename>.bbclass</filename> file.
+            </para>
+
+            <para>
                 As an example, suppose you needed a recipe to include some
                 self-test definitions:
                 <literallayout class='monospaced'>
@@ -1018,6 +1041,18 @@
             </para>
 
             <para>
+                The require directive, like the include directive previously
+                described, is a more generic method of including
+                functionality as compared to the
+                <link linkend='inherit-directive'>inherit</link> directive,
+                which is restricted to class (i.e. <filename>.bbclass</filename>)
+                files.
+                The require directive is applicable for any other kind of
+                shared or encapsulated functionality or configuration that
+                does not suit a <filename>.bbclass</filename> file.
+            </para>
+
+            <para>
                 Similar to how BitBake handles
                 <link linkend='include-directive'><filename>include</filename></link>,
                 if the path specified
@@ -1049,8 +1084,9 @@
 
             <para>
                 When creating a configuration file (<filename>.conf</filename>),
-                you can use the <filename>INHERIT</filename> directive to
-                inherit a class.
+                you can use the
+                <link linkend='var-INHERIT'><filename>INHERIT</filename></link>
+                configuration directive to inherit a class.
                 BitBake only supports this directive when used within
                 a configuration file.
             </para>
@@ -1083,7 +1119,7 @@
                 <filename>autotools</filename> and <filename>pkgconfig</filename>
                 classes:
                 <literallayout class='monospaced'>
-     inherit autotools pkgconfig
+     INHERIT += "autotools pkgconfig"
                 </literallayout>
             </para>
         </section>
@@ -2029,7 +2065,7 @@
             before any tasks are executed so would be in the global
             configuration datastore namespace.
             No recipe-specific metadata exists in that namespace.
-            The "BuildStarted" and "buildCompleted" events also run in
+            The "BuildStarted" and "BuildCompleted" events also run in
             the main cooker/server process rather than any worker context.
             Thus, any changes made to the datastore would be seen by other
             cooker/server events within the current build but not seen
diff --git a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml
index 0e89bf2..d89e123 100644
--- a/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml
+++ b/import-layers/yocto-poky/bitbake/doc/bitbake-user-manual/bitbake-user-manual-ref-variables.xml
@@ -1143,8 +1143,6 @@
             <glossdef>
                 <para>
                     Sets the base location where layers are stored.
-                    By default, this location is set to
-                    <filename>${COREBASE}</filename>.
                     This setting is used in conjunction with
                     <filename>bitbake-layers layerindex-fetch</filename> and
                     tells <filename>bitbake-layers</filename> where to place
@@ -1596,9 +1594,19 @@
         <glossentry id='var-INHERIT'><glossterm>INHERIT</glossterm>
             <glossdef>
                 <para>
-                    Causes the named class to be inherited at
-                    this point during parsing.
-                    The variable is only valid in configuration files.
+                    Causes the named class or classes to be inherited globally.
+                    Anonymous functions in the class or classes
+                    are not executed for the
+                    base configuration and in each individual recipe.
+                    The OpenEmbedded build system ignores changes to
+                    <filename>INHERIT</filename> in individual recipes.
+                </para>
+
+                <para>
+                    For more information on <filename>INHERIT</filename>, see
+                    the
+                    "<link linkend="inherit-configuration-directive"><filename>INHERIT</filename> Configuration Directive</link>"
+                    section.
                 </para>
             </glossdef>
         </glossentry>
@@ -1893,7 +1901,7 @@
                     Here are two examples:
                     <literallayout class='monospaced'>
      PREFERRED_VERSION_python = "2.7.3"
-     PREFERRED_VERSION_linux-yocto = "3.10%"
+     PREFERRED_VERSION_linux-yocto = "4.12%"
                     </literallayout>
                 </para>
             </glossdef>
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/COW.py b/import-layers/yocto-poky/bitbake/lib/bb/COW.py
index 36ebbd9..bec6208 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/COW.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/COW.py
@@ -3,7 +3,7 @@
 #
 # This is a copy on write dictionary and set which abuses classes to try and be nice and fast.
 #
-# Copyright (C) 2006 Tim Amsell
+# Copyright (C) 2006 Tim Ansell
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 2 as
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/__init__.py b/import-layers/yocto-poky/bitbake/lib/bb/__init__.py
index bfe0ca5..5268831 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/__init__.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/__init__.py
@@ -21,7 +21,7 @@
 # with this program; if not, write to the Free Software Foundation, Inc.,
 # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 
-__version__ = "1.34.0"
+__version__ = "1.36.0"
 
 import sys
 if sys.version_info < (3, 4, 0):
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/cache.py b/import-layers/yocto-poky/bitbake/lib/bb/cache.py
index e7eeb4f..86ce0e7 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/cache.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/cache.py
@@ -86,9 +86,9 @@
 class CoreRecipeInfo(RecipeInfoCommon):
     __slots__ = ()
 
-    cachefile = "bb_cache.dat"   
+    cachefile = "bb_cache.dat"
 
-    def __init__(self, filename, metadata):      
+    def __init__(self, filename, metadata):
         self.file_depends = metadata.getVar('__depends', False)
         self.timestamp = bb.parse.cached_mtime(filename)
         self.variants = self.listvar('__VARIANTS', metadata) + ['']
@@ -107,7 +107,7 @@
 
         self.pn = self.getvar('PN', metadata)
         self.packages = self.listvar('PACKAGES', metadata)
-        if not self.pn in self.packages:
+        if not self.packages:
             self.packages.append(self.pn)
 
         self.basetaskhashes = self.taskvar('BB_BASEHASH', self.tasks, metadata)
@@ -122,7 +122,7 @@
         self.defaultpref = self.intvar('DEFAULT_PREFERENCE', metadata)
         self.not_world = self.getvar('EXCLUDE_FROM_WORLD', metadata)
         self.stamp = self.getvar('STAMP', metadata)
-        self.stampclean = self.getvar('STAMPCLEAN', metadata)        
+        self.stampclean = self.getvar('STAMPCLEAN', metadata)
         self.stamp_extrainfo = self.flaglist('stamp-extra-info', self.tasks, metadata)
         self.file_checksums = self.flaglist('file-checksums', self.tasks, metadata, True)
         self.packages_dynamic = self.listvar('PACKAGES_DYNAMIC', metadata)
@@ -217,7 +217,7 @@
             cachedata.packages_dynamic[package].append(fn)
 
         # Build hash of runtime depends and recommends
-        for package in self.packages + [self.pn]:
+        for package in self.packages:
             cachedata.rundeps[fn][package] = list(self.rdepends) + self.rdepends_pkg[package]
             cachedata.runrecs[fn][package] = list(self.rrecommends) + self.rrecommends_pkg[package]
 
@@ -375,8 +375,8 @@
         data = databuilder.data
 
         # Pass caches_array information into Cache Constructor
-        # It will be used later for deciding whether we 
-        # need extra cache file dump/load support 
+        # It will be used later for deciding whether we
+        # need extra cache file dump/load support
         self.caches_array = caches_array
         self.cachedir = data.getVar("CACHE")
         self.clean = set()
@@ -421,7 +421,7 @@
                 cachesize += os.fstat(cachefile.fileno()).st_size
 
         bb.event.fire(bb.event.CacheLoadStarted(cachesize), self.data)
-        
+
         for cache_class in self.caches_array:
             cachefile = getCacheFile(self.cachedir, cache_class.cachefile, self.data_hash)
             with open(cachefile, "rb") as cachefile:
@@ -438,8 +438,8 @@
                     logger.info('Cache version mismatch, rebuilding...')
                     return
                 elif bitbake_ver != bb.__version__:
-                     logger.info('Bitbake version mismatch, rebuilding...')
-                     return
+                    logger.info('Bitbake version mismatch, rebuilding...')
+                    return
 
                 # Load the rest of the cache file
                 current_progress = 0
@@ -616,13 +616,13 @@
                     a = fl.find(":True")
                     b = fl.find(":False")
                     if ((a < 0) and b) or ((b > 0) and (b < a)):
-                       f = fl[:b+6]
-                       fl = fl[b+7:]
+                        f = fl[:b+6]
+                        fl = fl[b+7:]
                     elif ((b < 0) and a) or ((a > 0) and (a < b)):
-                       f = fl[:a+5]
-                       fl = fl[a+6:]
+                        f = fl[:a+5]
+                        fl = fl[a+6:]
                     else:
-                       break
+                        break
                     fl = fl.strip()
                     if "*" in f:
                         continue
@@ -886,4 +886,3 @@
             p.dump([data, self.__class__.CACHE_VERSION])
 
         bb.utils.unlockfile(glf)
-
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/command.py b/import-layers/yocto-poky/bitbake/lib/bb/command.py
index a919f58..6c966e3 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/command.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/command.py
@@ -50,6 +50,8 @@
     def __init__(self, message):
         self.error = message
         CommandExit.__init__(self, 1)
+    def __str__(self):
+        return "Command execution failed: %s" % self.error
 
 class CommandError(Exception):
     pass
@@ -76,7 +78,8 @@
                 if not hasattr(command_method, 'readonly') or False == getattr(command_method, 'readonly'):
                     return None, "Not able to execute not readonly commands in readonly mode"
             try:
-                if getattr(command_method, 'needconfig', False):
+                self.cooker.process_inotify_updates()
+                if getattr(command_method, 'needconfig', True):
                     self.cooker.updateCacheSync()
                 result = command_method(self, commandline)
             except CommandError as exc:
@@ -96,6 +99,7 @@
 
     def runAsyncCommand(self):
         try:
+            self.cooker.process_inotify_updates()
             if self.cooker.state in (bb.cooker.state.error, bb.cooker.state.shutdown, bb.cooker.state.forceshutdown):
                 # updateCache will trigger a shutdown of the parser
                 # and then raise BBHandledException triggering an exit
@@ -141,6 +145,9 @@
         self.currentAsyncCommand = None
         self.cooker.finishcommand()
 
+    def reset(self):
+        self.remotedatastores = bb.remotedata.RemoteDatastores(self.cooker)
+
 def split_mc_pn(pn):
     if pn.startswith("multiconfig:"):
         _, mc, pn = pn.split(":", 2)
@@ -233,59 +240,15 @@
         command.cooker.configuration.postfile = postfiles
     setPrePostConfFiles.needconfig = False
 
-    def getCpuCount(self, command, params):
-        """
-        Get the CPU count on the bitbake server
-        """
-        return bb.utils.cpu_count()
-    getCpuCount.readonly = True
-    getCpuCount.needconfig = False
-
     def matchFile(self, command, params):
         fMatch = params[0]
         return command.cooker.matchFile(fMatch)
     matchFile.needconfig = False
 
-    def generateNewImage(self, command, params):
-        image = params[0]
-        base_image = params[1]
-        package_queue = params[2]
-        timestamp = params[3]
-        description = params[4]
-        return command.cooker.generateNewImage(image, base_image,
-                        package_queue, timestamp, description)
-
-    def ensureDir(self, command, params):
-        directory = params[0]
-        bb.utils.mkdirhier(directory)
-    ensureDir.needconfig = False
-
-    def setVarFile(self, command, params):
-        """
-        Save a variable in a file; used for saving in a configuration file
-        """
-        var = params[0]
-        val = params[1]
-        default_file = params[2]
-        op = params[3]
-        command.cooker.modifyConfigurationVar(var, val, default_file, op)
-    setVarFile.needconfig = False
-
-    def removeVarFile(self, command, params):
-        """
-        Remove a variable declaration from a file
-        """
-        var = params[0]
-        command.cooker.removeConfigurationVar(var)
-    removeVarFile.needconfig = False
-
-    def createConfigFile(self, command, params):
-        """
-        Create an extra configuration file
-        """
-        name = params[0]
-        command.cooker.createConfigFile(name)
-    createConfigFile.needconfig = False
+    def getUIHandlerNum(self, command, params):
+        return bb.event.get_uihandler()
+    getUIHandlerNum.needconfig = False
+    getUIHandlerNum.readonly = True
 
     def setEventMask(self, command, params):
         handlerNum = params[0]
@@ -323,6 +286,7 @@
     parseConfiguration.needconfig = False
 
     def getLayerPriorities(self, command, params):
+        command.cooker.parseConfiguration()
         ret = []
         # regex objects cannot be marshalled by xmlrpc
         for collection, pattern, regex, pri in command.cooker.bbfile_config_priorities:
@@ -354,6 +318,38 @@
         return command.cooker.recipecaches[mc].pkg_pepvpr
     getRecipeVersions.readonly = True
 
+    def getRecipeProvides(self, command, params):
+        try:
+            mc = params[0]
+        except IndexError:
+            mc = ''
+        return command.cooker.recipecaches[mc].fn_provides
+    getRecipeProvides.readonly = True
+
+    def getRecipePackages(self, command, params):
+        try:
+            mc = params[0]
+        except IndexError:
+            mc = ''
+        return command.cooker.recipecaches[mc].packages
+    getRecipePackages.readonly = True
+
+    def getRecipePackagesDynamic(self, command, params):
+        try:
+            mc = params[0]
+        except IndexError:
+            mc = ''
+        return command.cooker.recipecaches[mc].packages_dynamic
+    getRecipePackagesDynamic.readonly = True
+
+    def getRProviders(self, command, params):
+        try:
+            mc = params[0]
+        except IndexError:
+            mc = ''
+        return command.cooker.recipecaches[mc].rproviders
+    getRProviders.readonly = True
+
     def getRuntimeDepends(self, command, params):
         ret = []
         try:
@@ -592,11 +588,14 @@
         bfile = params[0]
         task = params[1]
         if len(params) > 2:
-            hidewarning = params[2]
+            internal = params[2]
         else:
-            hidewarning = False
+            internal = False
 
-        command.cooker.buildFile(bfile, task, hidewarning)
+        if internal:
+            command.cooker.buildFileInternal(bfile, task, fireevents=False, quietlog=True)
+        else:
+            command.cooker.buildFile(bfile, task)
     buildFile.needcache = False
 
     def buildTargets(self, command, params):
@@ -646,17 +645,6 @@
         command.finishAsyncCommand()
     generateTargetsTree.needcache = True
 
-    def findCoreBaseFiles(self, command, params):
-        """
-        Find certain files in COREBASE directory. i.e. Layers
-        """
-        subdir = params[0]
-        filename = params[1]
-
-        command.cooker.findCoreBaseFiles(subdir, filename)
-        command.finishAsyncCommand()
-    findCoreBaseFiles.needcache = False
-
     def findConfigFiles(self, command, params):
         """
         Find config files which provide appropriate values
@@ -764,3 +752,14 @@
         command.finishAsyncCommand()
     clientComplete.needcache = False
 
+    def findSigInfo(self, command, params):
+        """
+        Find signature info files via the signature generator
+        """
+        pn = params[0]
+        taskname = params[1]
+        sigs = params[2]
+        res = bb.siggen.find_siginfo(pn, taskname, sigs, command.cooker.data)
+        bb.event.fire(bb.event.FindSigInfoResult(res), command.cooker.data)
+        command.finishAsyncCommand()
+    findSigInfo.needcache = False
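The dispatcher earlier in this file gates each command on optional function attributes: 'readonly' (may run against a read-only server) and 'needconfig' (requires the configuration cache to be refreshed first), read back with getattr() defaults. A small self-contained sketch of that convention; getBuildId and dispatch are invented for illustration and are not part of bb.command.

    # Sketch of the command-flag convention (hypothetical command and dispatcher).
    def getBuildId(self, command, params):
        return "demo-build"
    getBuildId.readonly = True        # safe to run on a read-only server
    getBuildId.needconfig = False     # does not need a configuration refresh first

    def dispatch(method, readonly_server=False):
        if readonly_server and not getattr(method, 'readonly', False):
            return None, "Not able to execute non-readonly commands in readonly mode"
        if getattr(method, 'needconfig', True):
            pass  # a real server would refresh the base configuration here
        return method(None, None, []), None

    result, error = dispatch(getBuildId, readonly_server=True)
    assert result == "demo-build" and error is None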
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/cooker.py b/import-layers/yocto-poky/bitbake/lib/bb/cooker.py
index 3c9e88c..c7fdd72 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/cooker.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/cooker.py
@@ -181,15 +181,15 @@
         self.confignotifier = pyinotify.Notifier(self.configwatcher, self.config_notifications)
         self.watchmask = pyinotify.IN_CLOSE_WRITE | pyinotify.IN_CREATE | pyinotify.IN_DELETE | \
                          pyinotify.IN_DELETE_SELF | pyinotify.IN_MODIFY | pyinotify.IN_MOVE_SELF | \
-                         pyinotify.IN_MOVED_FROM | pyinotify.IN_MOVED_TO 
+                         pyinotify.IN_MOVED_FROM | pyinotify.IN_MOVED_TO
         self.watcher = pyinotify.WatchManager()
         self.watcher.bbseen = []
         self.watcher.bbwatchedfiles = []
         self.notifier = pyinotify.Notifier(self.watcher, self.notifications)
 
-        # If being called by something like tinfoil, we need to clean cached data 
+        # If being called by something like tinfoil, we need to clean cached data
         # which may now be invalid
-        bb.parse.__mtime_cache = {}
+        bb.parse.clear_cache()
         bb.parse.BBHandler.cached_statements = {}
 
         self.ui_cmdline = None
@@ -205,31 +205,11 @@
 
         self.inotify_modified_files = []
 
-        def _process_inotify_updates(server, notifier_list, abort):
-            for n in notifier_list:
-                if n.check_events(timeout=0):
-                    # read notified events and enqeue them
-                    n.read_events()
-                    n.process_events()
+        def _process_inotify_updates(server, cooker, abort):
+            cooker.process_inotify_updates()
             return 1.0
 
-        self.configuration.server_register_idlecallback(_process_inotify_updates, [self.confignotifier, self.notifier])
-
-        self.baseconfig_valid = True
-        self.parsecache_valid = False
-
-        # Take a lock so only one copy of bitbake can run against a given build
-        # directory at a time
-        if not self.lockBitbake():
-            bb.fatal("Only one copy of bitbake should be run against a build directory")
-        try:
-            self.lock.seek(0)
-            self.lock.truncate()
-            if len(configuration.interface) >= 2:
-                self.lock.write("%s:%s\n" % (configuration.interface[0], configuration.interface[1]));
-            self.lock.flush()
-        except:
-            pass
+        self.configuration.server_register_idlecallback(_process_inotify_updates, self)
 
         # TOSTOP must not be set or our children will hang when they output
         try:
@@ -253,10 +233,19 @@
         # Let SIGHUP exit as SIGTERM
         signal.signal(signal.SIGHUP, self.sigterm_exception)
 
+    def process_inotify_updates(self):
+        for n in [self.confignotifier, self.notifier]:
+            if n.check_events(timeout=0):
+                # read notified events and enqueue them
+                n.read_events()
+                n.process_events()
+
     def config_notifications(self, event):
         if event.maskname == "IN_Q_OVERFLOW":
             bb.warn("inotify event queue overflowed, invalidating caches.")
+            self.parsecache_valid = False
             self.baseconfig_valid = False
+            bb.parse.clear_cache()
             return
         if not event.pathname in self.configwatcher.bbwatchedfiles:
             return
@@ -268,6 +257,10 @@
         if event.maskname == "IN_Q_OVERFLOW":
             bb.warn("inotify event queue overflowed, invalidating caches.")
             self.parsecache_valid = False
+            bb.parse.clear_cache()
+            return
+        if event.pathname.endswith("bitbake-cookerdaemon.log") \
+                or event.pathname.endswith("bitbake.lock"):
             return
         if not event.pathname in self.inotify_modified_files:
             self.inotify_modified_files.append(event.pathname)
@@ -288,7 +281,7 @@
             watchtarget = None
             while True:
                 # We try and add watches for files that don't exist but if they did, would influence
-                # the parser. The parent directory of these files may not exist, in which case we need 
+                # the parser. The parent directory of these files may not exist, in which case we need
                 # to watch any parent that does exist for changes.
                 try:
                     watcher.add_watch(f, self.watchmask, quiet=False)
@@ -382,6 +375,15 @@
         self.data.renameVar("__depends", "__base_depends")
         self.add_filewatch(self.data.getVar("__base_depends", False), self.configwatcher)
 
+        self.baseconfig_valid = True
+        self.parsecache_valid = False
+
+    def handlePRServ(self):
+        # Setup a PR Server based on the new configuration
+        try:
+            self.prhost = prserv.serv.auto_start(self.data)
+        except prserv.serv.PRServiceConfigError as e:
+            bb.fatal("Unable to start PR Server, exiting")
 
     def enableDataTracking(self):
         self.configuration.tracking = True
@@ -393,138 +395,6 @@
         if hasattr(self, "data"):
             self.data.disableTracking()
 
-    def modifyConfigurationVar(self, var, val, default_file, op):
-        if op == "append":
-            self.appendConfigurationVar(var, val, default_file)
-        elif op == "set":
-            self.saveConfigurationVar(var, val, default_file, "=")
-        elif op == "earlyAssign":
-            self.saveConfigurationVar(var, val, default_file, "?=")
-
-
-    def appendConfigurationVar(self, var, val, default_file):
-        #add append var operation to the end of default_file
-        default_file = bb.cookerdata.findConfigFile(default_file, self.data)
-
-        total = "#added by hob"
-        total += "\n%s += \"%s\"\n" % (var, val)
-
-        with open(default_file, 'a') as f:
-            f.write(total)
-
-        #add to history
-        loginfo = {"op":"append", "file":default_file, "line":total.count("\n")}
-        self.data.appendVar(var, val, **loginfo)
-
-    def saveConfigurationVar(self, var, val, default_file, op):
-
-        replaced = False
-        #do not save if nothing changed
-        if str(val) == self.data.getVar(var, False):
-            return
-
-        conf_files = self.data.varhistory.get_variable_files(var)
-
-        #format the value when it is a list
-        if isinstance(val, list):
-            listval = ""
-            for value in val:
-                listval += "%s   " % value
-            val = listval
-
-        topdir = self.data.getVar("TOPDIR", False)
-
-        #comment or replace operations made on var
-        for conf_file in conf_files:
-            if topdir in conf_file:
-                with open(conf_file, 'r') as f:
-                    contents = f.readlines()
-
-                lines = self.data.varhistory.get_variable_lines(var, conf_file)
-                for line in lines:
-                    total = ""
-                    i = 0
-                    for c in contents:
-                        total += c
-                        i = i + 1
-                        if i==int(line):
-                            end_index = len(total)
-                    index = total.rfind(var, 0, end_index)
-
-                    begin_line = total.count("\n",0,index)
-                    end_line = int(line)
-
-                    #check if the variable was saved before in the same way
-                    #if true it replace the place where the variable was declared
-                    #else it comments it
-                    if contents[begin_line-1]== "#added by hob\n":
-                        contents[begin_line] = "%s %s \"%s\"\n" % (var, op, val)
-                        replaced = True
-                    else:
-                        for ii in range(begin_line, end_line):
-                            contents[ii] = "#" + contents[ii]
-
-                with open(conf_file, 'w') as f:
-                    f.writelines(contents)
-
-        if replaced == False:
-            #remove var from history
-            self.data.varhistory.del_var_history(var)
-
-            #add var to the end of default_file
-            default_file = bb.cookerdata.findConfigFile(default_file, self.data)
-
-            #add the variable on a single line, to be easy to replace the second time
-            total = "\n#added by hob"
-            total += "\n%s %s \"%s\"\n" % (var, op, val)
-
-            with open(default_file, 'a') as f:
-                f.write(total)
-
-            #add to history
-            loginfo = {"op":"set", "file":default_file, "line":total.count("\n")}
-            self.data.setVar(var, val, **loginfo)
-
-    def removeConfigurationVar(self, var):
-        conf_files = self.data.varhistory.get_variable_files(var)
-        topdir = self.data.getVar("TOPDIR", False)
-
-        for conf_file in conf_files:
-            if topdir in conf_file:
-                with open(conf_file, 'r') as f:
-                    contents = f.readlines()
-
-                lines = self.data.varhistory.get_variable_lines(var, conf_file)
-                for line in lines:
-                    total = ""
-                    i = 0
-                    for c in contents:
-                        total += c
-                        i = i + 1
-                        if i==int(line):
-                            end_index = len(total)
-                    index = total.rfind(var, 0, end_index)
-
-                    begin_line = total.count("\n",0,index)
-
-                    #check if the variable was saved before in the same way
-                    if contents[begin_line-1]== "#added by hob\n":
-                        contents[begin_line-1] = contents[begin_line] = "\n"
-                    else:
-                        contents[begin_line] = "\n"
-                    #remove var from history
-                    self.data.varhistory.del_var_history(var, conf_file, line)
-                    #remove variable
-                    self.data.delVar(var)
-
-                with open(conf_file, 'w') as f:
-                    f.writelines(contents)
-
-    def createConfigFile(self, name):
-        path = os.getcwd()
-        confpath = os.path.join(path, "conf", name)
-        open(confpath, 'w').close()
-
     def parseConfiguration(self):
         # Set log file verbosity
         verboselogs = bb.utils.to_boolean(self.data.getVar("BB_VERBOSE_LOGS", False))
@@ -547,21 +417,27 @@
 
         self.handleCollections(self.data.getVar("BBFILE_COLLECTIONS"))
 
+        self.parsecache_valid = False
+
     def updateConfigOpts(self, options, environment, cmdline):
         self.ui_cmdline = cmdline
         clean = True
         for o in options:
             if o in ['prefile', 'postfile']:
+                # Only these options may require a reparse
+                try:
+                    if getattr(self.configuration, o) == options[o]:
+                        # Value is the same, no need to mark dirty
+                        continue
+                except AttributeError:
+                    pass
+                logger.debug(1, "Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
+                print("Marking as dirty due to '%s' option change to '%s'" % (o, options[o]))
                 clean = False
-                server_val = getattr(self.configuration, "%s_server" % o)
-                if not options[o] and server_val:
-                    # restore value provided on server start
-                    setattr(self.configuration, o, server_val)
-                    continue
             setattr(self.configuration, o, options[o])
         for k in bb.utils.approved_variables():
             if k in environment and k not in self.configuration.env:
-                logger.debug(1, "Updating environment variable %s to %s" % (k, environment[k]))
+                logger.debug(1, "Updating new environment variable %s to %s" % (k, environment[k]))
                 self.configuration.env[k] = environment[k]
                 clean = False
             if k in self.configuration.env and k not in environment:
@@ -569,14 +445,13 @@
                 del self.configuration.env[k]
                 clean = False
             if k not in self.configuration.env and k not in environment:
-                 continue
+                continue
             if environment[k] != self.configuration.env[k]:
-                logger.debug(1, "Updating environment variable %s to %s" % (k, environment[k]))
+                logger.debug(1, "Updating environment variable %s from %s to %s" % (k, self.configuration.env[k], environment[k]))
                 self.configuration.env[k] = environment[k]
                 clean = False
         if not clean:
             logger.debug(1, "Base environment change, triggering reparse")
-            self.baseconfig_valid = False        
             self.reset()
 
     def runCommands(self, server, data, abort):
@@ -616,6 +491,12 @@
         if not pkgs_to_build:
             pkgs_to_build = []
 
+        orig_tracking = self.configuration.tracking
+        if not orig_tracking:
+            self.enableDataTracking()
+            self.reset()
+
+
         if buildfile:
             # Parse the configuration here. We need to do it explicitly here since
             # this showEnvironment() code path doesn't use the cache
@@ -660,6 +541,9 @@
             if envdata.getVarFlag(e, 'func', False) and envdata.getVarFlag(e, 'python', False):
                 logger.plain("\npython %s () {\n%s}\n", e, envdata.getVar(e, False))
 
+        if not orig_tracking:
+            self.disableDataTracking()
+            self.reset()
 
     def buildTaskData(self, pkgs_to_build, task, abort, allowincomplete=False):
         """
@@ -817,12 +701,12 @@
                     depend_tree["pn"][pn][ei] = vars(self.recipecaches[mc])[ei][taskfn]
 
 
+            dotname = "%s.%s" % (pn, bb.runqueue.taskname_from_tid(tid))
+            if not dotname in depend_tree["tdepends"]:
+                depend_tree["tdepends"][dotname] = []
             for dep in rq.rqdata.runtaskentries[tid].depends:
                 (depmc, depfn, deptaskname, deptaskfn) = bb.runqueue.split_tid_mcfn(dep)
                 deppn = self.recipecaches[mc].pkg_fn[deptaskfn]
-                dotname = "%s.%s" % (pn, bb.runqueue.taskname_from_tid(tid))
-                if not dotname in depend_tree["tdepends"]:
-                    depend_tree["tdepends"][dotname] = []
                 depend_tree["tdepends"][dotname].append("%s.%s" % (deppn, bb.runqueue.taskname_from_tid(dep)))
             if taskfn not in seen_fns:
                 seen_fns.append(taskfn)
@@ -913,13 +797,13 @@
                 seen_fns.append(taskfn)
 
                 depend_tree["depends"][pn] = []
-                for item in taskdata[mc].depids[taskfn]:
+                for dep in taskdata[mc].depids[taskfn]:
                     pn_provider = ""
                     if dep in taskdata[mc].build_targets and taskdata[mc].build_targets[dep]:
                         fn_provider = taskdata[mc].build_targets[dep][0]
                         pn_provider = self.recipecaches[mc].pkg_fn[fn_provider]
                     else:
-                        pn_provider = item
+                        pn_provider = dep
                     pn_provider = self.add_mc_prefix(mc, pn_provider)
                     depend_tree["depends"][pn].append(pn_provider)
 
@@ -1046,18 +930,6 @@
                     providerlog.error("conflicting preferences for %s: both %s and %s specified", providee, provider, self.recipecaches[mc].preferred[providee])
                 self.recipecaches[mc].preferred[providee] = provider
 
-    def findCoreBaseFiles(self, subdir, configfile):
-        corebase = self.data.getVar('COREBASE') or ""
-        paths = []
-        for root, dirs, files in os.walk(corebase + '/' + subdir):
-            for d in dirs:
-                configfilepath = os.path.join(root, d, configfile)
-                if os.path.exists(configfilepath):
-                    paths.append(os.path.join(root, d))
-
-        if paths:
-            bb.event.fire(bb.event.CoreBaseFilesFound(paths), self.data)
-
     def findConfigFilePath(self, configfile):
         """
         Find the location on disk of configfile and if it exists and was parsed by BitBake
@@ -1314,12 +1186,26 @@
         """
         Setup any variables needed before starting a build
         """
-        t = time.gmtime() 
-        if not self.data.getVar("BUILDNAME", False):
-            self.data.setVar("BUILDNAME", "${DATE}${TIME}")
-        self.data.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S', t))
-        self.data.setVar("DATE", time.strftime('%Y%m%d', t))
-        self.data.setVar("TIME", time.strftime('%H%M%S', t))
+        t = time.gmtime()
+        for mc in self.databuilder.mcdata:
+            ds = self.databuilder.mcdata[mc]
+            if not ds.getVar("BUILDNAME", False):
+                ds.setVar("BUILDNAME", "${DATE}${TIME}")
+            ds.setVar("BUILDSTART", time.strftime('%m/%d/%Y %H:%M:%S', t))
+            ds.setVar("DATE", time.strftime('%Y%m%d', t))
+            ds.setVar("TIME", time.strftime('%H%M%S', t))
+
+    def reset_mtime_caches(self):
+        """
+        Reset mtime caches - this is particularly important when memory resident as something
+        which is cached may well have changed since the last invocation (e.g. a
+        file associated with a recipe might have been modified by the user).
+        """
+        build.reset_cache()
+        bb.fetch._checksum_cache.mtime_cache.clear()
+        siggen_cache = getattr(bb.parse.siggen, 'checksum_cache', None)
+        if siggen_cache:
+            bb.parse.siggen.checksum_cache.mtime_cache.clear()
 
     def matchFiles(self, bf):
         """
@@ -1360,16 +1246,22 @@
             raise NoSpecificMatch
         return matches[0]
 
-    def buildFile(self, buildfile, task, hidewarning=False):
+    def buildFile(self, buildfile, task):
         """
         Build the file matching regexp buildfile
         """
         bb.event.fire(bb.event.BuildInit(), self.data)
 
-        if not hidewarning:
-            # Too many people use -b because they think it's how you normally
-            # specify a target to be built, so show a warning
-            bb.warn("Buildfile specified, dependencies will not be handled. If this is not what you want, do not use -b / --buildfile.")
+        # Too many people use -b because they think it's how you normally
+        # specify a target to be built, so show a warning
+        bb.warn("Buildfile specified, dependencies will not be handled. If this is not what you want, do not use -b / --buildfile.")
+
+        self.buildFileInternal(buildfile, task)
+
+    def buildFileInternal(self, buildfile, task, fireevents=True, quietlog=False):
+        """
+        Build the file matching regexp buildfile
+        """
 
         # Parse the configuration here. We need to do it explicitly here since
         # buildFile() doesn't use the cache
@@ -1385,6 +1277,7 @@
         fn = self.matchFile(fn)
 
         self.buildSetVars()
+        self.reset_mtime_caches()
 
         bb_cache = bb.cache.Cache(self.databuilder, self.data_hash, self.caches_array)
 
@@ -1411,8 +1304,8 @@
         # Remove external dependencies
         self.recipecaches[mc].task_deps[fn]['depends'] = {}
         self.recipecaches[mc].deps[fn] = []
-        self.recipecaches[mc].rundeps[fn] = []
-        self.recipecaches[mc].runrecs[fn] = []
+        self.recipecaches[mc].rundeps[fn] = defaultdict(list)
+        self.recipecaches[mc].runrecs[fn] = defaultdict(list)
 
         # Invalidate task for target if force mode active
         if self.configuration.force:
@@ -1422,10 +1315,15 @@
         # Setup taskdata structure
         taskdata = {}
         taskdata[mc] = bb.taskdata.TaskData(self.configuration.abort)
-        taskdata[mc].add_provider(self.data, self.recipecaches[mc], item)
+        taskdata[mc].add_provider(self.databuilder.mcdata[mc], self.recipecaches[mc], item)
 
-        buildname = self.data.getVar("BUILDNAME")
-        bb.event.fire(bb.event.BuildStarted(buildname, [item]), self.data)
+        if quietlog:
+            rqloglevel = bb.runqueue.logger.getEffectiveLevel()
+            bb.runqueue.logger.setLevel(logging.WARNING)
+
+        buildname = self.databuilder.mcdata[mc].getVar("BUILDNAME")
+        if fireevents:
+            bb.event.fire(bb.event.BuildStarted(buildname, [item]), self.databuilder.mcdata[mc])
 
         # Execute the runqueue
         runlist = [[mc, item, task, fn]]
@@ -1452,11 +1350,20 @@
                 retval = False
             except SystemExit as exc:
                 self.command.finishAsyncCommand(str(exc))
+                if quietlog:
+                    bb.runqueue.logger.setLevel(rqloglevel)
                 return False
 
             if not retval:
-                bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runtaskentries), buildname, item, failures, interrupted), self.data)
+                if fireevents:
+                    bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runtaskentries), buildname, item, failures, interrupted), self.databuilder.mcdata[mc])
                 self.command.finishAsyncCommand(msg)
+                # We trashed self.recipecaches above
+                self.parsecache_valid = False
+                self.configuration.limited_deps = False
+                bb.parse.siggen.reset(self.data)
+                if quietlog:
+                    bb.runqueue.logger.setLevel(rqloglevel)
                 return False
             if retval is True:
                 return True
@@ -1491,14 +1398,17 @@
                 return False
 
             if not retval:
-                bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runtaskentries), buildname, targets, failures, interrupted), self.data)
-                self.command.finishAsyncCommand(msg)
+                try:
+                    for mc in self.multiconfigs:
+                        bb.event.fire(bb.event.BuildCompleted(len(rq.rqdata.runtaskentries), buildname, targets, failures, interrupted), self.databuilder.mcdata[mc])
+                finally:
+                    self.command.finishAsyncCommand(msg)
                 return False
             if retval is True:
                 return True
             return retval
 
-        build.reset_cache()
+        self.reset_mtime_caches()
         self.buildSetVars()
 
         # If we are told to do the None task then query the default task
@@ -1523,7 +1433,8 @@
                 ntargets.append("multiconfig:%s:%s:%s" % (target[0], target[1], target[2]))
             ntargets.append("%s:%s" % (target[1], target[2]))
 
-        bb.event.fire(bb.event.BuildStarted(buildname, ntargets), self.data)
+        for mc in self.multiconfigs:
+            bb.event.fire(bb.event.BuildStarted(buildname, ntargets), self.databuilder.mcdata[mc])
 
         rq = bb.runqueue.RunQueue(self, self.data, self.recipecaches, taskdata, runlist)
         if 'universe' in targets:
@@ -1556,55 +1467,6 @@
         return dump
 
 
-    def generateNewImage(self, image, base_image, package_queue, timestamp, description):
-        '''
-        Create a new image with a "require"/"inherit" base_image statement
-        '''
-        if timestamp:
-            image_name = os.path.splitext(image)[0]
-            timestr = time.strftime("-%Y%m%d-%H%M%S")
-            dest = image_name + str(timestr) + ".bb"
-        else:
-            if not image.endswith(".bb"):
-                dest = image + ".bb"
-            else:
-                dest = image
-
-        basename = False
-        if base_image:
-            with open(base_image, 'r') as f:
-                require_line = f.readline()
-                p = re.compile("IMAGE_BASENAME *=")
-                for line in f:
-                    if p.search(line):
-                        basename = True
-
-        with open(dest, "w") as imagefile:
-            if base_image is None:
-                imagefile.write("inherit core-image\n")
-            else:
-                topdir = self.data.getVar("TOPDIR", False)
-                if topdir in base_image:
-                    base_image = require_line.split()[1]
-                imagefile.write("require " + base_image + "\n")
-            image_install = "IMAGE_INSTALL = \""
-            for package in package_queue:
-                image_install += str(package) + " "
-            image_install += "\"\n"
-            imagefile.write(image_install)
-
-            description_var = "DESCRIPTION = \"" + description + "\"\n"
-            imagefile.write(description_var)
-
-            if basename:
-                # If this is overwritten in a inherited image, reset it to default
-                image_basename = "IMAGE_BASENAME = \"${PN}\"\n"
-                imagefile.write(image_basename)
-
-        self.state = state.initial
-        if timestamp:
-            return timestr
-
     def updateCacheSync(self):
         if self.state == state.running:
             return
@@ -1619,8 +1481,7 @@
         if not self.baseconfig_valid:
             logger.debug(1, "Reloading base configuration data")
             self.initConfigurationData()
-            self.baseconfig_valid = True
-            self.parsecache_valid = False
+            self.handlePRServ()
 
     # This is called for all async commands when self.state != running
     def updateCache(self):
@@ -1636,6 +1497,7 @@
             self.updateCacheSync()
 
         if self.state != state.parsing and not self.parsecache_valid:
+            bb.parse.siggen.reset(self.data)
             self.parseConfiguration ()
             if CookerFeatures.SEND_SANITYEVENTS in self.featureset:
                 for mc in self.multiconfigs:
@@ -1723,46 +1585,14 @@
         return pkgs_to_build
 
     def pre_serve(self):
-        # Empty the environment. The environment will be populated as
-        # necessary from the data store.
-        #bb.utils.empty_environment()
-        try:
-            self.prhost = prserv.serv.auto_start(self.data)
-        except prserv.serv.PRServiceConfigError:
-            bb.event.fire(CookerExit(), self.data)
-            self.state = state.error
+        # We now are in our own process so we can call this here.
+        # PRServ exits if its parent process exits
+        self.handlePRServ()
         return
 
     def post_serve(self):
-        prserv.serv.auto_shutdown(self.data)
+        prserv.serv.auto_shutdown()
         bb.event.fire(CookerExit(), self.data)
-        lockfile = self.lock.name
-        self.lock.close()
-        self.lock = None
-
-        while not self.lock:
-            with bb.utils.timeout(3):
-                self.lock = bb.utils.lockfile(lockfile, shared=False, retry=False, block=True)
-                if not self.lock:
-                    # Some systems may not have lsof available
-                    procs = None
-                    try:
-                        procs = subprocess.check_output(["lsof", '-w', lockfile], stderr=subprocess.STDOUT)
-                    except OSError as e:
-                        if e.errno != errno.ENOENT:
-                            raise
-                    if procs is None:
-                        # Fall back to fuser if lsof is unavailable
-                        try:
-                            procs = subprocess.check_output(["fuser", '-v', lockfile], stderr=subprocess.STDOUT)
-                        except OSError as e:
-                            if e.errno != errno.ENOENT:
-                                raise
-
-                    msg = "Delaying shutdown due to active processes which appear to be holding bitbake.lock"
-                    if procs:
-                        msg += ":\n%s" % str(procs)
-                    print(msg)
 
 
     def shutdown(self, force = False):
@@ -1784,46 +1614,12 @@
 
     def clientComplete(self):
         """Called when the client is done using the server"""
-        if self.configuration.server_only:
-            self.finishcommand()
-        else:
-            self.shutdown(True)
+        self.finishcommand()
+        self.extraconfigdata = {}
+        self.command.reset()
+        self.databuilder.reset()
+        self.data = self.databuilder.data
 
-    def lockBitbake(self):
-        if not hasattr(self, 'lock'):
-            self.lock = None
-            if self.data:
-                lockfile = self.data.expand("${TOPDIR}/bitbake.lock")
-                if lockfile:
-                    self.lock = bb.utils.lockfile(lockfile, False, False)
-        return self.lock
-
-    def unlockBitbake(self):
-        if hasattr(self, 'lock') and self.lock:
-            bb.utils.unlockfile(self.lock)
-
-def server_main(cooker, func, *args):
-    cooker.pre_serve()
-
-    if cooker.configuration.profile:
-        try:
-            import cProfile as profile
-        except:
-            import profile
-        prof = profile.Profile()
-
-        ret = profile.Profile.runcall(prof, func, *args)
-
-        prof.dump_stats("profile.log")
-        bb.utils.process_profilelog("profile.log")
-        print("Raw profiling information saved to profile.log and processed statistics to profile.log.processed")
-
-    else:
-        ret = func(*args)
-
-    cooker.post_serve()
-
-    return ret
 
 class CookerExit(bb.event.Event):
     """
@@ -1890,15 +1686,23 @@
 
         # We need to track where we look so that we can add inotify watches. There
         # is no nice way to do this, this is horrid. We intercept the os.listdir()
-        # calls while we run glob().
+        # (or os.scandir() for python 3.6+) calls while we run glob().
         origlistdir = os.listdir
+        if hasattr(os, 'scandir'):
+            origscandir = os.scandir
         searchdirs = []
 
         def ourlistdir(d):
             searchdirs.append(d)
             return origlistdir(d)
 
+        def ourscandir(d):
+            searchdirs.append(d)
+            return origscandir(d)
+
         os.listdir = ourlistdir
+        if hasattr(os, 'scandir'):
+            os.scandir = ourscandir
         try:
             # Can't use set here as order is important
             newfiles = []
@@ -1918,6 +1722,8 @@
                             newfiles.append(g)
         finally:
             os.listdir = origlistdir
+            if hasattr(os, 'scandir'):
+                os.scandir = origscandir
 
         bbmask = config.getVar('BBMASK')
 
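The hunk above wraps os.listdir() (and os.scandir() on Python 3.6+) while glob() runs so that every directory consulted can later receive an inotify watch. A standalone sketch of that interception pattern; glob_with_searchdirs is an illustrative name, not a bitbake function.

    # Record every directory glob() visits by temporarily wrapping os.listdir/os.scandir.
    import glob
    import os

    def glob_with_searchdirs(pattern):
        searchdirs = []
        origlistdir = os.listdir
        origscandir = getattr(os, 'scandir', None)

        def ourlistdir(d):
            searchdirs.append(d)
            return origlistdir(d)

        def ourscandir(d):
            searchdirs.append(d)
            return origscandir(d)

        os.listdir = ourlistdir
        if origscandir:
            os.scandir = ourscandir
        try:
            matches = glob.glob(pattern)
        finally:
            # Always restore the real functions, exactly as the cooker does.
            os.listdir = origlistdir
            if origscandir:
                os.scandir = origscandir
        return matches, searchdirs

    files, visited = glob_with_searchdirs('conf/*.conf')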
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/cookerdata.py b/import-layers/yocto-poky/bitbake/lib/bb/cookerdata.py
index e408a35..fab47c7 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/cookerdata.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/cookerdata.py
@@ -41,10 +41,6 @@
 
         self.options.pkgs_to_build = targets or []
 
-        self.options.tracking = False
-        if hasattr(self.options, "show_environment") and self.options.show_environment:
-            self.options.tracking = True
-
         for key, val in self.options.__dict__.items():
             setattr(self, key, val)
 
@@ -73,15 +69,15 @@
 
     def updateToServer(self, server, environment):
         options = {}
-        for o in ["abort", "tryaltconfigs", "force", "invalidate_stamp", 
-                  "verbose", "debug", "dry_run", "dump_signatures", 
+        for o in ["abort", "force", "invalidate_stamp",
+                  "verbose", "debug", "dry_run", "dump_signatures",
                   "debug_domains", "extra_assume_provided", "profile",
-                  "prefile", "postfile"]:
+                  "prefile", "postfile", "server_timeout"]:
             options[o] = getattr(self.options, o)
 
         ret, error = server.runCommand(["updateConfig", options, environment, sys.argv])
         if error:
-                raise Exception("Unable to update the server configuration with local parameters: %s" % error)
+            raise Exception("Unable to update the server configuration with local parameters: %s" % error)
 
     def parseActions(self):
         # Parse any commandline into actions
@@ -131,8 +127,6 @@
         self.extra_assume_provided = []
         self.prefile = []
         self.postfile = []
-        self.prefile_server = []
-        self.postfile_server = []
         self.debug = 0
         self.cmd = None
         self.abort = True
@@ -144,7 +138,8 @@
         self.dump_signatures = []
         self.dry_run = False
         self.tracking = False
-        self.interface = []
+        self.xmlrpcinterface = []
+        self.server_timeout = None
         self.writeeventlog = False
         self.server_only = False
         self.limited_deps = False
@@ -157,7 +152,6 @@
             if key in parameters.options.__dict__:
                 setattr(self, key, parameters.options.__dict__[key])
         self.env = parameters.environment.copy()
-        self.tracking = parameters.tracking
 
     def setServerRegIdleCallback(self, srcb):
         self.server_register_idlecallback = srcb
@@ -173,7 +167,7 @@
 
     def __setstate__(self,state):
         for k in state:
-            setattr(self, k, state[k]) 
+            setattr(self, k, state[k])
 
 
 def catch_parse_error(func):
@@ -230,6 +224,27 @@
 
     return None
 
+#
+# We search for a conf/bblayers.conf under an entry in BBPATH or in cwd working
+# up to /. If that fails, we search for a conf/bitbake.conf in BBPATH.
+#
+
+def findTopdir():
+    d = bb.data.init()
+    bbpath = None
+    if 'BBPATH' in os.environ:
+        bbpath = os.environ['BBPATH']
+        d.setVar('BBPATH', bbpath)
+
+    layerconf = findConfigFile("bblayers.conf", d)
+    if layerconf:
+        return os.path.dirname(os.path.dirname(layerconf))
+    if bbpath:
+        bitbakeconf = bb.utils.which(bbpath, "conf/bitbake.conf")
+        if bitbakeconf:
+            return os.path.dirname(os.path.dirname(bitbakeconf))
+    return None
+
 class CookerDataBuilder(object):
 
     def __init__(self, cookercfg, worker = False):
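findConfigFile() itself is outside this hunk, so the sketch below spells out the lookup order the comment describes; find_topdir_sketch is an invented name, not a bitbake API.

    # Mirror the documented search order: BBPATH entries and the cwd (walking up to /)
    # are checked for conf/bblayers.conf, then BBPATH is checked for conf/bitbake.conf.
    import os

    def find_topdir_sketch(bbpath=None):
        entries = [e for e in (bbpath or "").split(":") if e]
        candidates = list(entries)
        d = os.getcwd()
        while True:
            candidates.append(d)
            parent = os.path.dirname(d)
            if parent == d:          # reached "/"
                break
            d = parent
        for c in candidates:
            if os.path.exists(os.path.join(c, "conf", "bblayers.conf")):
                return c             # this directory is the build directory (TOPDIR)
        for c in entries:
            if os.path.exists(os.path.join(c, "conf", "bitbake.conf")):
                return c
        return None

    print(find_topdir_sketch(os.environ.get("BBPATH")))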
@@ -255,7 +270,7 @@
         filtered_keys = bb.utils.approved_variables()
         bb.data.inheritFromOS(self.basedata, self.savedenv, filtered_keys)
         self.basedata.setVar("BB_ORIGENV", self.savedenv)
-        
+
         if worker:
             self.basedata.setVar("BB_WORKERCONTEXT", "1")
 
@@ -294,6 +309,8 @@
                 mcdata = self.parseConfigurationFiles(self.prefiles, self.postfiles, config)
                 bb.event.fire(bb.event.ConfigParsed(), mcdata)
                 self.mcdata[config] = mcdata
+            if multiconfig:
+                bb.event.fire(bb.event.MultiConfigParsed(self.mcdata), self.data)
 
         except (SyntaxError, bb.BBHandledException):
             raise bb.BBHandledException
@@ -304,6 +321,18 @@
             logger.exception("Error parsing configuration files")
             raise bb.BBHandledException
 
+        # Create a copy so we can reset at a later date when UIs disconnect
+        self.origdata = self.data
+        self.data = bb.data.createCopy(self.origdata)
+        self.mcdata[''] = self.data
+
+    def reset(self):
+        # We may not have run parseBaseConfiguration() yet
+        if not hasattr(self, 'origdata'):
+            return
+        self.data = bb.data.createCopy(self.origdata)
+        self.mcdata[''] = self.data
+
     def _findLayerConf(self, data):
         return findConfigFile("bblayers.conf", data)
 
@@ -346,6 +375,27 @@
             data.delVar('LAYERDIR_RE')
             data.delVar('LAYERDIR')
 
+            bbfiles_dynamic = (data.getVar('BBFILES_DYNAMIC') or "").split()
+            collections = (data.getVar('BBFILE_COLLECTIONS') or "").split()
+            invalid = []
+            for entry in bbfiles_dynamic:
+                parts = entry.split(":", 1)
+                if len(parts) != 2:
+                    invalid.append(entry)
+                    continue
+                l, f = parts
+                if l in collections:
+                    data.appendVar("BBFILES", " " + f)
+            if invalid:
+                bb.fatal("BBFILES_DYNAMIC entries must be of the form <collection name>:<filename pattern>, not:\n    %s" % "\n    ".join(invalid))
+
+            layerseries = set((data.getVar("LAYERSERIES_CORENAMES") or "").split())
+            for c in collections:
+                compat = set((data.getVar("LAYERSERIES_COMPAT_%s" % c) or "").split())
+                if compat and not (compat & layerseries):
+                    bb.fatal("Layer %s is not compatible with the core layer which only supports these series: %s (layer is compatible with %s)"
+                              % (c, " ".join(layerseries), " ".join(compat)))
+
         if not data.getVar("BBPATH"):
             msg = "The BBPATH variable is not set"
             if not layerconf:
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/daemonize.py b/import-layers/yocto-poky/bitbake/lib/bb/daemonize.py
index ab4a954..8300d1d 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/daemonize.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/daemonize.py
@@ -1,48 +1,14 @@
 """
 Python Daemonizing helper
 
-Configurable daemon behaviors:
-
-    1.) The current working directory set to the "/" directory.
-    2.) The current file creation mode mask set to 0.
-    3.) Close all open files (1024). 
-    4.) Redirect standard I/O streams to "/dev/null".
-
-A failed call to fork() now raises an exception.
-
-References:
-    1) Advanced Programming in the Unix Environment: W. Richard Stevens
-	http://www.apuebook.com/apue3e.html
-    2) The Linux Programming Interface: Michael Kerrisk
-	http://man7.org/tlpi/index.html
-    3) Unix Programming Frequently Asked Questions:
-	http://www.faqs.org/faqs/unix-faq/programmer/faq/
-
-Modified to allow a function to be daemonized and return for 
-bitbake use by Richard Purdie
+Originally based on code Copyright (C) 2005 Chad J. Schroeder but now heavily modified
+to allow a function to be daemonized and return for bitbake use by Richard Purdie
 """
 
-__author__ = "Chad J. Schroeder"
-__copyright__ = "Copyright (C) 2005 Chad J. Schroeder"
-__version__ = "0.2"
-
-# Standard Python modules.
-import os                    # Miscellaneous OS interfaces.
-import sys                   # System-specific parameters and functions.
-
-# Default daemon parameters.
-# File mode creation mask of the daemon.
-# For BitBake's children, we do want to inherit the parent umask.
-UMASK = None
-
-# Default maximum for the number of available file descriptors.
-MAXFD = 1024
-
-# The standard I/O file descriptors are redirected to /dev/null by default.
-if (hasattr(os, "devnull")):
-    REDIRECT_TO = os.devnull
-else:
-    REDIRECT_TO = "/dev/null"
+import os
+import sys
+import io
+import traceback
 
 def createDaemon(function, logfile):
     """
@@ -65,36 +31,6 @@
         # leader of the new process group, we call os.setsid().  The process is
         # also guaranteed not to have a controlling terminal.
         os.setsid()
-
-        # Is ignoring SIGHUP necessary?
-        #
-        # It's often suggested that the SIGHUP signal should be ignored before
-        # the second fork to avoid premature termination of the process.  The
-        # reason is that when the first child terminates, all processes, e.g.
-        # the second child, in the orphaned group will be sent a SIGHUP.
-        #
-        # "However, as part of the session management system, there are exactly
-        # two cases where SIGHUP is sent on the death of a process:
-        #
-        #    1) When the process that dies is the session leader of a session that
-        #        is attached to a terminal device, SIGHUP is sent to all processes
-        #        in the foreground process group of that terminal device.
-        #    2) When the death of a process causes a process group to become
-        #        orphaned, and one or more processes in the orphaned group are
-        #        stopped, then SIGHUP and SIGCONT are sent to all members of the
-        #        orphaned group." [2]
-        #
-        # The first case can be ignored since the child is guaranteed not to have
-        # a controlling terminal.  The second case isn't so easy to dismiss.
-        # The process group is orphaned when the first child terminates and
-        # POSIX.1 requires that every STOPPED process in an orphaned process
-        # group be sent a SIGHUP signal followed by a SIGCONT signal.  Since the
-        # second child is not STOPPED though, we can safely forego ignoring the
-        # SIGHUP signal.  In any case, there are no ill-effects if it is ignored.
-        #
-        # import signal              # Set handlers for asynchronous events.
-        # signal.signal(signal.SIGHUP, signal.SIG_IGN)
-
         try:
             # Fork a second child and exit immediately to prevent zombies.  This
             # causes the second child process to be orphaned, making the init
@@ -108,86 +44,39 @@
         except OSError as e:
             raise Exception("%s [%d]" % (e.strerror, e.errno))
 
-        if (pid == 0):  # The second child.
-            # We probably don't want the file mode creation mask inherited from
-            # the parent, so we give the child complete control over permissions.
-            if UMASK is not None:
-                os.umask(UMASK)
-        else:
+        if (pid != 0):
             # Parent (the first child) of the second child.
+            # exit() or _exit()?
+            # _exit is like exit(), but it doesn't call any functions registered
+            # with atexit (and on_exit) or any registered signal handlers.  It also
+            # closes any open file descriptors.  Using exit() may cause all stdio
+            # streams to be flushed twice and any temporary files may be unexpectedly
+            # removed.  It's therefore recommended that child branches of a fork()
+            # and the parent branch(es) of a daemon use _exit().
             os._exit(0)
     else:
-        # exit() or _exit()?
-        # _exit is like exit(), but it doesn't call any functions registered
-        # with atexit (and on_exit) or any registered signal handlers.  It also
-        # closes any open file descriptors.  Using exit() may cause all stdio
-        # streams to be flushed twice and any temporary files may be unexpectedly
-        # removed.  It's therefore recommended that child branches of a fork()
-        # and the parent branch(es) of a daemon use _exit().
+        os.waitpid(pid, 0)
         return
 
-    # Close all open file descriptors.  This prevents the child from keeping
-    # open any file descriptors inherited from the parent.  There is a variety
-    # of methods to accomplish this task.  Three are listed below.
-    #
-    # Try the system configuration variable, SC_OPEN_MAX, to obtain the maximum
-    # number of open file descriptors to close.  If it doesn't exist, use
-    # the default value (configurable).
-    #
-    # try:
-    #     maxfd = os.sysconf("SC_OPEN_MAX")
-    # except (AttributeError, ValueError):
-    #     maxfd = MAXFD
-    #
-    # OR
-    #
-    # if (os.sysconf_names.has_key("SC_OPEN_MAX")):
-    #     maxfd = os.sysconf("SC_OPEN_MAX")
-    # else:
-    #     maxfd = MAXFD
-    #
-    # OR
-    #
-    # Use the getrlimit method to retrieve the maximum file descriptor number
-    # that can be opened by this process.  If there is no limit on the
-    # resource, use the default value.
-    #
-    import resource             # Resource usage information.
-    maxfd = resource.getrlimit(resource.RLIMIT_NOFILE)[1]
-    if (maxfd == resource.RLIM_INFINITY):
-        maxfd = MAXFD
-  
-    # Iterate through and close all file descriptors.
-#    for fd in range(0, maxfd):
-#        try:
-#            os.close(fd)
-#        except OSError:        # ERROR, fd wasn't open to begin with (ignored)
-#            pass
+    # The second child.
 
-    # Redirect the standard I/O file descriptors to the specified file.  Since
-    # the daemon has no controlling terminal, most daemons redirect stdin,
-    # stdout, and stderr to /dev/null.  This is done to prevent side-effects
-    # from reads and writes to the standard I/O file descriptors.
-
-    # This call to open is guaranteed to return the lowest file descriptor,
-    # which will be 0 (stdin), since it was closed above.
-#    os.open(REDIRECT_TO, os.O_RDWR)    # standard input (0)
-
-    # Duplicate standard input to standard output and standard error.
-#    os.dup2(0, 1)                      # standard output (1)
-#    os.dup2(0, 2)                      # standard error (2)
-
-
+    # Replace standard fds with our own
     si = open('/dev/null', 'r')
-    so = open(logfile, 'w')
-    se = so
-
-
-    # Replace those fds with our own
     os.dup2(si.fileno(), sys.stdin.fileno())
-    os.dup2(so.fileno(), sys.stdout.fileno())
-    os.dup2(se.fileno(), sys.stderr.fileno())
 
-    function()
+    try:
+        so = open(logfile, 'a+')
+        se = so
+        os.dup2(so.fileno(), sys.stdout.fileno())
+        os.dup2(se.fileno(), sys.stderr.fileno())
+    except io.UnsupportedOperation:
+        sys.stdout = open(logfile, 'a+')
+        sys.stderr = sys.stdout
 
-    os._exit(0)
+    try:
+        function()
+    except Exception as e:
+        traceback.print_exc()
+    finally:
+        bb.event.print_ui_queue()
+        os._exit(0)
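The rewritten createDaemon() keeps the classic double-fork shape: setsid() in the first child, a second fork so the worker is re-parented to init, _exit() in every terminating branch, the original process reaping the first child with waitpid(), and stdio pointed at /dev/null and a log file. A condensed sketch of that structure for reference; daemonize() here is illustrative, not the bitbake helper itself.

    # Minimal double-fork daemonizer following the same structure as createDaemon().
    import os
    import sys
    import traceback

    def daemonize(function, logfile):
        pid = os.fork()
        if pid == 0:
            os.setsid()                      # new session, no controlling terminal
            if os.fork() != 0:
                os._exit(0)                  # first child exits; grandchild is orphaned
            si = open('/dev/null', 'r')
            so = open(logfile, 'a+')
            os.dup2(si.fileno(), sys.stdin.fileno())
            os.dup2(so.fileno(), sys.stdout.fileno())
            os.dup2(so.fileno(), sys.stderr.fileno())
            try:
                function()
            except Exception:
                traceback.print_exc()
            finally:
                os._exit(0)                  # never fall back into the caller's code
        else:
            os.waitpid(pid, 0)               # reap the first child; the caller carries on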
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/data.py b/import-layers/yocto-poky/bitbake/lib/bb/data.py
index 134afaa..80a7879 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/data.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/data.py
@@ -290,7 +290,7 @@
             return deps, value
         varflags = d.getVarFlags(key, ["vardeps", "vardepvalue", "vardepsexclude", "exports", "postfuncs", "prefuncs", "lineno", "filename"]) or {}
         vardeps = varflags.get("vardeps")
-        value = d.getVar(key, False)
+        value = d.getVarFlag(key, "_content", False)
 
         def handle_contains(value, contains, d):
             newvalue = ""
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/data_smart.py b/import-layers/yocto-poky/bitbake/lib/bb/data_smart.py
index 7dc1c68..7b09af5 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/data_smart.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/data_smart.py
@@ -39,7 +39,7 @@
 logger = logging.getLogger("BitBake.Data")
 
 __setvar_keyword__ = ["_append", "_prepend", "_remove"]
-__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>.*))?$')
+__setvar_regexp__ = re.compile('(?P<base>.*?)(?P<keyword>_append|_prepend|_remove)(_(?P<add>[^A-Z]*))?$')
 __expand_var_regexp__ = re.compile(r"\${[^{}@\n\t :]+}")
 __expand_python_regexp__ = re.compile(r"\${@.+?}")
 
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/event.py b/import-layers/yocto-poky/bitbake/lib/bb/event.py
index 6d8493b..52072b5 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/event.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/event.py
@@ -149,23 +149,34 @@
 
         # First check to see if we have any proper messages
         msgprint = False
+        msgerrs = False
+
+        # Should we print to stderr?
+        for event in ui_queue[:]:
+            if isinstance(event, logging.LogRecord) and event.levelno >= logging.WARNING:
+                msgerrs = True
+                break
+
+        if msgerrs:
+            logger.addHandler(stderr)
+        else:
+            logger.addHandler(stdout)
+
         for event in ui_queue[:]:
             if isinstance(event, logging.LogRecord):
                 if event.levelno > logging.DEBUG:
-                    if event.levelno >= logging.WARNING:
-                        logger.addHandler(stderr)
-                    else:
-                        logger.addHandler(stdout)
                     logger.handle(event)
                     msgprint = True
-        if msgprint:
-            return
 
         # Nope, so just print all of the messages we have (including debug messages)
-        logger.addHandler(stdout)
-        for event in ui_queue[:]:
-            if isinstance(event, logging.LogRecord):
-                logger.handle(event)
+        if not msgprint:
+            for event in ui_queue[:]:
+                if isinstance(event, logging.LogRecord):
+                    logger.handle(event)
+        if msgerrs:
+            logger.removeHandler(stderr)
+        else:
+            logger.removeHandler(stdout)
 
 def fire_ui_handlers(event, d):
     global _thread_lock
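
print_ui_queue() now decides once, up front, whether anything in the queue is WARNING or worse, attaches a single stderr or stdout handler for the whole drain, and removes it again afterwards, instead of attaching a handler per record. A standalone sketch of that shape using only the logging module (the function and logger names are illustrative, not BitBake's):

    import logging
    import sys

    def drain_queue(records):
        logger = logging.getLogger("drain-demo")
        logger.setLevel(logging.DEBUG)

        # Pick the destination once: stderr if any queued record is a
        # warning or worse, stdout otherwise.
        use_stderr = any(r.levelno >= logging.WARNING for r in records)
        handler = logging.StreamHandler(sys.stderr if use_stderr else sys.stdout)
        logger.addHandler(handler)
        try:
            printed = False
            for r in records:
                if r.levelno > logging.DEBUG:
                    logger.handle(r)
                    printed = True
            if not printed:
                # Nothing above DEBUG was queued: print the debug records
                # rather than staying silent.
                for r in records:
                    logger.handle(r)
        finally:
            logger.removeHandler(handler)
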
@@ -212,6 +223,12 @@
     if worker_fire:
         worker_fire(event, d)
     else:
+        # If messages have been queued up, clear the queue
+        global _uiready, ui_queue
+        if _uiready and ui_queue:
+            for queue_event in ui_queue:
+                fire_ui_handlers(queue_event, d)
+            ui_queue = []
         fire_ui_handlers(event, d)
 
 def fire_from_worker(event, d):
@@ -264,6 +281,11 @@
 def remove(name, handler):
     """Remove an Event handler"""
     _handlers.pop(name)
+    if name in _catchall_handlers:
+        _catchall_handlers.pop(name)
+    for event in _event_handler_map.keys():
+        if name in _event_handler_map[event]:
+            _event_handler_map[event].pop(name)
 
 def get_handlers():
     return _handlers
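
remove() now also drops the handler from the catch-all table and from every per-event map, so a removed handler cannot keep firing for events it had registered a mask for. The same idea in isolation, with plain dicts standing in for BitBake's module-level tables:

    # Plain dicts stand in for _handlers, _catchall_handlers and _event_handler_map.
    handlers = {"h1": object(), "h2": object()}
    catchall_handlers = {"h1": True}
    event_handler_map = {"bb.event.BuildStarted": {"h1": True, "h2": True}}

    def remove_handler(name):
        handlers.pop(name, None)
        catchall_handlers.pop(name, None)
        for per_event in event_handler_map.values():
            per_event.pop(name, None)

    remove_handler("h1")
    assert "h1" not in catchall_handlers
    assert "h1" not in event_handler_map["bb.event.BuildStarted"]
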
@@ -277,20 +299,28 @@
     _eventfilter = func
 
 def register_UIHhandler(handler, mainui=False):
-    if mainui:
-        global _uiready
-        _uiready = True
     bb.event._ui_handler_seq = bb.event._ui_handler_seq + 1
     _ui_handlers[_ui_handler_seq] = handler
     level, debug_domains = bb.msg.constructLogOptions()
     _ui_logfilters[_ui_handler_seq] = UIEventFilter(level, debug_domains)
+    if mainui:
+        global _uiready
+        _uiready = _ui_handler_seq
     return _ui_handler_seq
 
-def unregister_UIHhandler(handlerNum):
+def unregister_UIHhandler(handlerNum, mainui=False):
+    if mainui:
+        global _uiready
+        _uiready = False
     if handlerNum in _ui_handlers:
         del _ui_handlers[handlerNum]
     return
 
+def get_uihandler():
+    if _uiready is False:
+        return None
+    return _uiready
+
 # Class to allow filtering of events and specific filtering of LogRecords *before* we put them over the IPC
 class UIEventFilter(object):
     def __init__(self, level, debug_domains):
@@ -353,6 +383,12 @@
 class ConfigParsed(Event):
     """Configuration Parsing Complete"""
 
+class MultiConfigParsed(Event):
+    """Multi-Config Parsing Complete"""
+    def __init__(self, mcdata):
+        self.mcdata = mcdata
+        Event.__init__(self)
+
 class RecipeEvent(Event):
     def __init__(self, fn):
         self.fn = fn
@@ -496,6 +532,28 @@
     def isRuntime(self):
         return self._runtime
 
+    def __str__(self):
+        msg = ''
+        if self._runtime:
+            r = "R"
+        else:
+            r = ""
+
+        extra = ''
+        if not self._reasons:
+            if self._close_matches:
+                extra = ". Close matches:\n  %s" % '\n  '.join(self._close_matches)
+
+        if self._dependees:
+            msg = "Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)%s" % (r, self._item, ", ".join(self._dependees), r, extra)
+        else:
+            msg = "Nothing %sPROVIDES '%s'%s" % (r, self._item, extra)
+        if self._reasons:
+            for reason in self._reasons:
+                msg += '\n' + reason
+        return msg
+
+
 class MultipleProviders(Event):
     """Multiple Providers"""
 
@@ -523,6 +581,16 @@
         """
         return self._candidates
 
+    def __str__(self):
+        msg = "Multiple providers are available for %s%s (%s)" % (self._is_runtime and "runtime " or "",
+                            self._item,
+                            ", ".join(self._candidates))
+        rtime = ""
+        if self._is_runtime:
+            rtime = "R"
+        msg += "\nConsider defining a PREFERRED_%sPROVIDER entry to match %s" % (rtime, self._item)
+        return msg
+
 class ParseStarted(OperationStarted):
     """Recipe parsing for the runqueue has begun"""
     def __init__(self, total):
@@ -616,14 +684,6 @@
         self._pattern = pattern
         self._matches = matches
 
-class CoreBaseFilesFound(Event):
-    """
-    Event when a list of appropriate config files has been generated
-    """
-    def __init__(self, paths):
-        Event.__init__(self)
-        self._paths = paths
-
 class ConfigFilesFound(Event):
     """
     Event when a list of appropriate config files has been generated
@@ -694,19 +754,6 @@
         record.taskpid = worker_pid
         return True
 
-class RequestPackageInfo(Event):
-    """
-    Event to request package information
-    """
-
-class PackageInfo(Event):
-    """
-    Package information for GUI
-    """
-    def __init__(self, pkginfolist):
-        Event.__init__(self)
-        self._pkginfolist = pkginfolist
-
 class MetadataEvent(Event):
     """
     Generic event intended for OE-Core classes
@@ -784,3 +831,10 @@
     Event to indicate network test has failed
     """
 
+class FindSigInfoResult(Event):
+    """
+    Event to return results from findSigInfo command
+    """
+    def __init__(self, result):
+        Event.__init__(self)
+        self.result = result
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/__init__.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/__init__.py
index b853da3..f70f1b5 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/__init__.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/__init__.py
@@ -39,6 +39,7 @@
 import bb.persist_data, bb.utils
 import bb.checksum
 import bb.process
+import bb.event
 
 __version__ = "2"
 _checksum_cache = bb.checksum.FileChecksumCache()
@@ -48,11 +49,11 @@
 class BBFetchException(Exception):
     """Class all fetch exceptions inherit from"""
     def __init__(self, message):
-         self.msg = message
-         Exception.__init__(self, message)
+        self.msg = message
+        Exception.__init__(self, message)
 
     def __str__(self):
-         return self.msg
+        return self.msg
 
 class UntrustedUrl(BBFetchException):
     """Exception raised when encountering a host not listed in BB_ALLOWED_NETWORKS"""
@@ -68,24 +69,24 @@
 class MalformedUrl(BBFetchException):
     """Exception raised when encountering an invalid url"""
     def __init__(self, url, message=''):
-         if message:
-             msg = message
-         else:
-             msg = "The URL: '%s' is invalid and cannot be interpreted" % url
-         self.url = url
-         BBFetchException.__init__(self, msg)
-         self.args = (url,)
+        if message:
+            msg = message
+        else:
+            msg = "The URL: '%s' is invalid and cannot be interpreted" % url
+        self.url = url
+        BBFetchException.__init__(self, msg)
+        self.args = (url,)
 
 class FetchError(BBFetchException):
     """General fetcher exception when something happens incorrectly"""
     def __init__(self, message, url = None):
-         if url:
+        if url:
             msg = "Fetcher failure for URL: '%s'. %s" % (url, message)
-         else:
+        else:
             msg = "Fetcher failure: %s" % message
-         self.url = url
-         BBFetchException.__init__(self, msg)
-         self.args = (message, url)
+        self.url = url
+        BBFetchException.__init__(self, msg)
+        self.args = (message, url)
 
 class ChecksumError(FetchError):
     """Exception when mismatched checksum encountered"""
@@ -99,49 +100,56 @@
 class UnpackError(BBFetchException):
     """General fetcher exception when something happens incorrectly when unpacking"""
     def __init__(self, message, url):
-         msg = "Unpack failure for URL: '%s'. %s" % (url, message)
-         self.url = url
-         BBFetchException.__init__(self, msg)
-         self.args = (message, url)
+        msg = "Unpack failure for URL: '%s'. %s" % (url, message)
+        self.url = url
+        BBFetchException.__init__(self, msg)
+        self.args = (message, url)
 
 class NoMethodError(BBFetchException):
     """Exception raised when there is no method to obtain a supplied url or set of urls"""
     def __init__(self, url):
-         msg = "Could not find a fetcher which supports the URL: '%s'" % url
-         self.url = url
-         BBFetchException.__init__(self, msg)
-         self.args = (url,)
+        msg = "Could not find a fetcher which supports the URL: '%s'" % url
+        self.url = url
+        BBFetchException.__init__(self, msg)
+        self.args = (url,)
 
 class MissingParameterError(BBFetchException):
     """Exception raised when a fetch method is missing a critical parameter in the url"""
     def __init__(self, missing, url):
-         msg = "URL: '%s' is missing the required parameter '%s'" % (url, missing)
-         self.url = url
-         self.missing = missing
-         BBFetchException.__init__(self, msg)
-         self.args = (missing, url)
+        msg = "URL: '%s' is missing the required parameter '%s'" % (url, missing)
+        self.url = url
+        self.missing = missing
+        BBFetchException.__init__(self, msg)
+        self.args = (missing, url)
 
 class ParameterError(BBFetchException):
     """Exception raised when a url cannot be proccessed due to invalid parameters."""
     def __init__(self, message, url):
-         msg = "URL: '%s' has invalid parameters. %s" % (url, message)
-         self.url = url
-         BBFetchException.__init__(self, msg)
-         self.args = (message, url)
+        msg = "URL: '%s' has invalid parameters. %s" % (url, message)
+        self.url = url
+        BBFetchException.__init__(self, msg)
+        self.args = (message, url)
 
 class NetworkAccess(BBFetchException):
     """Exception raised when network access is disabled but it is required."""
     def __init__(self, url, cmd):
-         msg = "Network access disabled through BB_NO_NETWORK (or set indirectly due to use of BB_FETCH_PREMIRRORONLY) but access requested with command %s (for url %s)" % (cmd, url)
-         self.url = url
-         self.cmd = cmd
-         BBFetchException.__init__(self, msg)
-         self.args = (url, cmd)
+        msg = "Network access disabled through BB_NO_NETWORK (or set indirectly due to use of BB_FETCH_PREMIRRORONLY) but access requested with command %s (for url %s)" % (cmd, url)
+        self.url = url
+        self.cmd = cmd
+        BBFetchException.__init__(self, msg)
+        self.args = (url, cmd)
 
 class NonLocalMethod(Exception):
     def __init__(self):
         Exception.__init__(self)
 
+class MissingChecksumEvent(bb.event.Event):
+    def __init__(self, url, md5sum, sha256sum):
+        self.url = url
+        self.checksums = {'md5sum': md5sum,
+                          'sha256sum': sha256sum}
+        bb.event.Event.__init__(self)
+
 
 class URI(object):
     """
@@ -403,8 +411,6 @@
 
     type, host, path, user, pswd, p = decoded
 
-    if not path:
-        raise MissingParameterError('path', "encoded from the data %s" % str(decoded))
     if not type:
         raise MissingParameterError('type', "encoded from the data %s" % str(decoded))
     url = '%s://' % type
@@ -415,17 +421,18 @@
         url += "@"
     if host and type != "file":
         url += "%s" % host
-    # Standardise path to ensure comparisons work
-    while '//' in path:
-        path = path.replace("//", "/")
-    url += "%s" % urllib.parse.quote(path)
+    if path:
+        # Standardise path to ensure comparisons work
+        while '//' in path:
+            path = path.replace("//", "/")
+        url += "%s" % urllib.parse.quote(path)
     if p:
         for parm in p:
             url += ";%s=%s" % (parm, p[parm])
 
     return url
 
-def uri_replace(ud, uri_find, uri_replace, replacements, d):
+def uri_replace(ud, uri_find, uri_replace, replacements, d, mirrortarball=None):
     if not ud.url or not uri_find or not uri_replace:
         logger.error("uri_replace: passed an undefined value, not replacing")
         return None
@@ -455,7 +462,7 @@
                 result_decoded[loc][k] = uri_replace_decoded[loc][k]
         elif (re.match(regexp, uri_decoded[loc])):
             if not uri_replace_decoded[loc]:
-                result_decoded[loc] = ""    
+                result_decoded[loc] = ""
             else:
                 for k in replacements:
                     uri_replace_decoded[loc] = uri_replace_decoded[loc].replace(k, replacements[k])
@@ -464,9 +471,9 @@
             if loc == 2:
                 # Handle path manipulations
                 basename = None
-                if uri_decoded[0] != uri_replace_decoded[0] and ud.mirrortarball:
+                if uri_decoded[0] != uri_replace_decoded[0] and mirrortarball:
                     # If the source and destination url types differ, must be a mirrortarball mapping
-                    basename = os.path.basename(ud.mirrortarball)
+                    basename = os.path.basename(mirrortarball)
                     # Kill parameters, they make no sense for mirror tarballs
                     uri_decoded[5] = {}
                 elif ud.localpath and ud.method.supports_checksum(ud):
@@ -584,6 +591,14 @@
                               ud.sha256_name, sha256data))
             raise NoChecksumError('Missing SRC_URI checksum', ud.url)
 
+        bb.event.fire(MissingChecksumEvent(ud.url, md5data, sha256data), d)
+
+        if strict == "ignore":
+            return {
+                _MD5_KEY: md5data,
+                _SHA256_KEY: sha256data
+            }
+
         # Log missing sums so user can more easily add them
         logger.warning('Missing md5 SRC_URI checksum for %s, consider adding to the recipe:\n'
                        'SRC_URI[%s] = "%s"',
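
The added lines extend the missing-checksum handling: an event is fired with the computed values, and a strict setting of "ignore" returns them immediately instead of warning. A simplified, standalone model of that policy; it is not the real verify_checksum(), which also fires MissingChecksumEvent and formats recipe-ready SRC_URI lines:

    import hashlib
    import logging

    def verify(path, expected, strict="0"):
        # expected: {'md5': ..., 'sha256': ...} as declared in the recipe (may be empty);
        # strict mirrors the BB_STRICT_CHECKSUM-style setting read by the caller.
        with open(path, 'rb') as f:
            data = f.read()
        found = {'md5': hashlib.md5(data).hexdigest(),
                 'sha256': hashlib.sha256(data).hexdigest()}

        if not expected.get('md5') and not expected.get('sha256'):
            if strict == "1":
                raise ValueError("Missing checksum for %s" % path)
            if strict == "ignore":
                # New behaviour: accept silently and hand back what was computed.
                return found
            # Default: warn so the user can add the values to the recipe.
            logging.warning("Missing checksum for %s, computed %s", path, found)
            return found

        for name, value in expected.items():
            if value and value != found[name]:
                raise ValueError("%s checksum mismatch for %s" % (name, path))
        return found
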
@@ -733,7 +748,7 @@
     In the multi SCM case, we build a value based on SRCREV_FORMAT which must
     have been set.
 
-    The idea here is that we put the string "AUTOINC+" into return value if the revisions are not 
+    The idea here is that we put the string "AUTOINC+" into the return value if the revisions are not
     incremental, other code is then responsible for turning that into an increasing value (if needed)
 
     A method_name can be supplied to retrieve an alternatively formatted revision from a fetcher, if
@@ -785,7 +800,7 @@
     format = re.sub(name_to_rev_re, lambda match: name_to_rev[match.group(0)], format)
 
     if seenautoinc:
-       format = "AUTOINC+" + format
+        format = "AUTOINC+" + format
 
     return format
 
@@ -892,45 +907,47 @@
     replacements["BASENAME"] = origud.path.split("/")[-1]
     replacements["MIRRORNAME"] = origud.host.replace(':','.') + origud.path.replace('/', '.').replace('*', '.')
 
-    def adduri(ud, uris, uds, mirrors):
+    def adduri(ud, uris, uds, mirrors, tarballs):
         for line in mirrors:
             try:
                 (find, replace) = line
             except ValueError:
                 continue
-            newuri = uri_replace(ud, find, replace, replacements, ld)
-            if not newuri or newuri in uris or newuri == origud.url:
-                continue
 
-            if not trusted_network(ld, newuri):
-                logger.debug(1, "Mirror %s not in the list of trusted networks, skipping" %  (newuri))
-                continue
+            for tarball in tarballs:
+                newuri = uri_replace(ud, find, replace, replacements, ld, tarball)
+                if not newuri or newuri in uris or newuri == origud.url:
+                    continue
 
-            # Create a local copy of the mirrors minus the current line
-            # this will prevent us from recursively processing the same line
-            # as well as indirect recursion A -> B -> C -> A
-            localmirrors = list(mirrors)
-            localmirrors.remove(line)
+                if not trusted_network(ld, newuri):
+                    logger.debug(1, "Mirror %s not in the list of trusted networks, skipping" %  (newuri))
+                    continue
 
-            try:
-                newud = FetchData(newuri, ld)
-                newud.setup_localpath(ld)
-            except bb.fetch2.BBFetchException as e:
-                logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
-                logger.debug(1, str(e))
+                # Create a local copy of the mirrors minus the current line
+                # this will prevent us from recursively processing the same line
+                # as well as indirect recursion A -> B -> C -> A
+                localmirrors = list(mirrors)
+                localmirrors.remove(line)
+
                 try:
-                    # setup_localpath of file:// urls may fail, we should still see 
-                    # if mirrors of the url exist
-                    adduri(newud, uris, uds, localmirrors)
-                except UnboundLocalError:
-                    pass
-                continue   
-            uris.append(newuri)
-            uds.append(newud)
+                    newud = FetchData(newuri, ld)
+                    newud.setup_localpath(ld)
+                except bb.fetch2.BBFetchException as e:
+                    logger.debug(1, "Mirror fetch failure for url %s (original url: %s)" % (newuri, origud.url))
+                    logger.debug(1, str(e))
+                    try:
+                        # setup_localpath of file:// urls may fail, we should still see
+                        # if mirrors of the url exist
+                        adduri(newud, uris, uds, localmirrors, tarballs)
+                    except UnboundLocalError:
+                        pass
+                    continue
+                uris.append(newuri)
+                uds.append(newud)
 
-            adduri(newud, uris, uds, localmirrors)
+                adduri(newud, uris, uds, localmirrors, tarballs)
 
-    adduri(origud, uris, uds, mirrors)
+    adduri(origud, uris, uds, mirrors, origud.mirrortarballs or [None])
 
     return uris, uds
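
adduri() now takes the list of candidate mirror tarballs and tries every mirror line against every tarball name, still removing the current line before recursing so indirect chains such as A -> B -> C -> A cannot loop. A much-simplified standalone sketch of that expansion; rewrite() stands in for uri_replace(), and the FetchData/trusted-network handling is omitted:

    def expand_mirrors(url, mirrors, tarballs, rewrite):
        """Return candidate URIs: every mirror line crossed with every tarball name."""
        uris = []

        def add(current, remaining_mirrors):
            for line in remaining_mirrors:
                for tarball in tarballs:
                    newuri = rewrite(current, line, tarball)
                    if not newuri or newuri in uris or newuri == url:
                        continue
                    uris.append(newuri)
                    # Drop the line we just used before recursing, preventing
                    # direct and indirect recursion on the same mapping.
                    add(newuri, [m for m in remaining_mirrors if m != line])

        add(url, mirrors)
        return uris
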
 
@@ -975,8 +992,8 @@
         # We may be obtaining a mirror tarball which needs further processing by the real fetcher
         # If that tarball is a local file:// we need to provide a symlink to it
         dldir = ld.getVar("DL_DIR")
-        if origud.mirrortarball and os.path.basename(ud.localpath) == os.path.basename(origud.mirrortarball) \
-                and os.path.basename(ud.localpath) != os.path.basename(origud.localpath):
+
+        if origud.mirrortarballs and os.path.basename(ud.localpath) in origud.mirrortarballs and os.path.basename(ud.localpath) != os.path.basename(origud.localpath):
             # Create donestamp in old format to avoid triggering a re-download
             if ud.donestamp:
                 bb.utils.mkdirhier(os.path.dirname(ud.donestamp))
@@ -993,7 +1010,7 @@
                     pass
             if not verify_donestamp(origud, ld) or origud.method.need_update(origud, ld):
                 origud.method.download(origud, ld)
-                if hasattr(origud.method,"build_mirror_data"):
+                if hasattr(origud.method, "build_mirror_data"):
                     origud.method.build_mirror_data(origud, ld)
             return origud.localpath
         # Otherwise the result is a local file:// and we symlink to it
@@ -1015,7 +1032,7 @@
 
     except IOError as e:
         if e.errno in [os.errno.ESTALE]:
-            logger.warn("Stale Error Observed %s." % ud.url)
+            logger.warning("Stale Error Observed %s." % ud.url)
             return False
         raise
 
@@ -1115,7 +1132,7 @@
     attempts.append("SRCREV")
 
     for a in attempts:
-        srcrev = d.getVar(a)              
+        srcrev = d.getVar(a)
         if srcrev and srcrev != "INVALID":
             break
 
@@ -1130,7 +1147,7 @@
         if srcrev == "INVALID" or not srcrev:
             return parmrev
         if srcrev != parmrev:
-            raise FetchError("Conflicting revisions (%s from SRCREV and %s from the url) found, please spcify one valid value" % (srcrev, parmrev))
+            raise FetchError("Conflicting revisions (%s from SRCREV and %s from the url) found, please specify one valid value" % (srcrev, parmrev))
         return parmrev
 
     if srcrev == "INVALID" or not srcrev:
@@ -1190,7 +1207,7 @@
         self.localfile = ""
         self.localpath = None
         self.lockfile = None
-        self.mirrortarball = None
+        self.mirrortarballs = []
         self.basename = None
         self.basepath = None
         (self.type, self.host, self.path, self.user, self.pswd, self.parm) = decodeurl(d.expand(url))
@@ -1228,7 +1245,7 @@
         for m in methods:
             if m.supports(self, d):
                 self.method = m
-                break                
+                break
 
         if not self.method:
             raise NoMethodError(url)
@@ -1263,7 +1280,7 @@
         elif self.basepath or self.basename:
             basepath = dldir + os.sep + (self.basepath or self.basename)
         else:
-             bb.fatal("Can't determine lock path for url %s" % url)
+            bb.fatal("Can't determine lock path for url %s" % url)
 
         self.donestamp = basepath + '.done'
         self.lockfile = basepath + '.lock'
@@ -1326,13 +1343,13 @@
         if os.path.isdir(urldata.localpath) == True:
             return False
         if urldata.localpath.find("*") != -1:
-             return False
+            return False
 
         return True
 
     def recommends_checksum(self, urldata):
         """
-        Is the backend on where checksumming is recommended (should warnings 
+        Is this a backend where checksumming is recommended (should warnings
         be displayed if there is no checksum)?
         """
         return False
@@ -1542,6 +1559,14 @@
         key = self._revision_key(ud, d, name)
         return "%s-%s" % (key, d.getVar("PN") or "")
 
+    def latest_versionstring(self, ud, d):
+        """
+        Compute the latest release name like "x.y.z" in "x.y.z+gitHASH"
+        by searching through the tags output of ls-remote, comparing
+        versions and returning the highest match as a (version, revision) pair.
+        """
+        return ('', '')
+
 class Fetch(object):
     def __init__(self, urls, d, cache = True, localonly = False, connection_cache = None):
         if localonly and cache:
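
The new latest_versionstring() hook returns a (version, revision) pair, with an empty default; a concrete fetcher would derive it from something like the tag list of ls-remote. A rough, illustrative sketch of such a derivation, not the git fetcher's actual implementation:

    import re

    def latest_from_tags(tags):
        """tags: {tag_name: revision}. Return (highest x.y[.z] version, revision)."""
        best = ('', '')
        best_key = ()
        for tag, rev in tags.items():
            m = re.match(r'v?(\d+(?:\.\d+)+)$', tag)
            if not m:
                continue
            key = tuple(int(part) for part in m.group(1).split('.'))
            if key > best_key:
                best_key, best = key, (m.group(1), rev)
        return best

    print(latest_from_tags({'v1.2.10': 'aaaa', 'v1.9.0': 'bbbb', 'not-a-version': 'cccc'}))
    # -> ('1.9.0', 'bbbb')
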
@@ -1612,7 +1637,7 @@
 
             try:
                 self.d.setVar("BB_NO_NETWORK", network)
- 
+
                 if verify_donestamp(ud, self.d) and not m.need_update(ud, self.d):
                     localpath = ud.localpath
                 elif m.try_premirror(ud, self.d):
@@ -1708,9 +1733,8 @@
             ret = try_mirrors(self, self.d, ud, mirrors, True)
             if not ret:
                 # Next try checking from the original uri, u
-                try:
-                    ret = m.checkstatus(self, ud, self.d)
-                except:
+                ret = m.checkstatus(self, ud, self.d)
+                if not ret:
                     # Finally, try checking uri, u, from MIRRORS
                     mirrors = mirror_from_string(self.d.getVar('MIRRORS'))
                     ret = try_mirrors(self, self.d, ud, mirrors, True)
@@ -1720,7 +1744,7 @@
 
     def unpack(self, root, urls=None):
         """
-        Check all urls exist upstream
+        Unpack urls to root
         """
 
         if not urls:
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/git.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/git.py
index 7442f84..5ef8cd6 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/git.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/git.py
@@ -70,11 +70,14 @@
 # with this program; if not, write to the Free Software Foundation, Inc.,
 # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 
+import collections
 import errno
+import fnmatch
 import os
 import re
+import subprocess
+import tempfile
 import bb
-import errno
 import bb.progress
 from   bb.fetch2 import FetchMethod
 from   bb.fetch2 import runfetchcmd
@@ -172,18 +175,66 @@
         branches = ud.parm.get("branch", "master").split(',')
         if len(branches) != len(ud.names):
             raise bb.fetch2.ParameterError("The number of name and branch parameters is not balanced", ud.url)
+
+        ud.cloneflags = "-s -n"
+        if ud.bareclone:
+            ud.cloneflags += " --mirror"
+
+        ud.shallow = d.getVar("BB_GIT_SHALLOW") == "1"
+        ud.shallow_extra_refs = (d.getVar("BB_GIT_SHALLOW_EXTRA_REFS") or "").split()
+
+        depth_default = d.getVar("BB_GIT_SHALLOW_DEPTH")
+        if depth_default is not None:
+            try:
+                depth_default = int(depth_default or 0)
+            except ValueError:
+                raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH: %s" % depth_default)
+            else:
+                if depth_default < 0:
+                    raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH: %s" % depth_default)
+        else:
+            depth_default = 1
+        ud.shallow_depths = collections.defaultdict(lambda: depth_default)
+
+        revs_default = d.getVar("BB_GIT_SHALLOW_REVS", True)
+        ud.shallow_revs = []
         ud.branches = {}
         for pos, name in enumerate(ud.names):
             branch = branches[pos]
             ud.branches[name] = branch
             ud.unresolvedrev[name] = branch
 
+            shallow_depth = d.getVar("BB_GIT_SHALLOW_DEPTH_%s" % name)
+            if shallow_depth is not None:
+                try:
+                    shallow_depth = int(shallow_depth or 0)
+                except ValueError:
+                    raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH_%s: %s" % (name, shallow_depth))
+                else:
+                    if shallow_depth < 0:
+                        raise bb.fetch2.FetchError("Invalid depth for BB_GIT_SHALLOW_DEPTH_%s: %s" % (name, shallow_depth))
+                    ud.shallow_depths[name] = shallow_depth
+
+            revs = d.getVar("BB_GIT_SHALLOW_REVS_%s" % name)
+            if revs is not None:
+                ud.shallow_revs.extend(revs.split())
+            elif revs_default is not None:
+                ud.shallow_revs.extend(revs_default.split())
+
+        if (ud.shallow and
+                not ud.shallow_revs and
+                all(ud.shallow_depths[n] == 0 for n in ud.names)):
+            # Shallow disabled for this URL
+            ud.shallow = False
+
         if ud.usehead:
             ud.unresolvedrev['default'] = 'HEAD'
 
         ud.basecmd = d.getVar("FETCHCMD_git") or "git -c core.fsyncobjectfiles=0"
 
-        ud.write_tarballs = ((d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0") != "0") or ud.rebaseable
+        write_tarballs = d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0"
+        ud.write_tarballs = write_tarballs != "0" or ud.rebaseable
+        ud.write_shallow_tarballs = (d.getVar("BB_GENERATE_SHALLOW_TARBALLS") or write_tarballs) != "0"
 
         ud.setup_revisions(d)
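
In the shallow setup above, BB_GIT_SHALLOW_DEPTH supplies a global default depth and BB_GIT_SHALLOW_DEPTH_<name> per-name overrides, each validated as a non-negative integer, with a defaultdict providing the fallback. The parsing logic in isolation; getvar() is a stand-in for d.getVar() and ValueError for bb.fetch2.FetchError:

    import collections

    def parse_shallow_depths(getvar, names):
        def parse(value, what):
            try:
                depth = int(value or 0)
            except ValueError:
                raise ValueError("Invalid depth for %s: %s" % (what, value))
            if depth < 0:
                raise ValueError("Invalid depth for %s: %s" % (what, value))
            return depth

        default = getvar("BB_GIT_SHALLOW_DEPTH")
        default = parse(default, "BB_GIT_SHALLOW_DEPTH") if default is not None else 1
        depths = collections.defaultdict(lambda: default)
        for name in names:
            value = getvar("BB_GIT_SHALLOW_DEPTH_%s" % name)
            if value is not None:
                depths[name] = parse(value, "BB_GIT_SHALLOW_DEPTH_%s" % name)
        return depths

    settings = {"BB_GIT_SHALLOW_DEPTH": "2", "BB_GIT_SHALLOW_DEPTH_default": "0"}
    depths = parse_shallow_depths(settings.get, ["default", "extra"])
    print(depths["default"], depths["extra"])   # 0 2
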
 
@@ -205,13 +256,42 @@
         if ud.rebaseable:
             for name in ud.names:
                 gitsrcname = gitsrcname + '_' + ud.revisions[name]
-        ud.mirrortarball = 'git2_%s.tar.gz' % gitsrcname
-        ud.fullmirror = os.path.join(d.getVar("DL_DIR"), ud.mirrortarball)
-        gitdir = d.getVar("GITDIR") or (d.getVar("DL_DIR") + "/git2/")
-        ud.clonedir = os.path.join(gitdir, gitsrcname)
 
+        dl_dir = d.getVar("DL_DIR")
+        gitdir = d.getVar("GITDIR") or (dl_dir + "/git2/")
+        ud.clonedir = os.path.join(gitdir, gitsrcname)
         ud.localfile = ud.clonedir
 
+        mirrortarball = 'git2_%s.tar.gz' % gitsrcname
+        ud.fullmirror = os.path.join(dl_dir, mirrortarball)
+        ud.mirrortarballs = [mirrortarball]
+        if ud.shallow:
+            tarballname = gitsrcname
+            if ud.bareclone:
+                tarballname = "%s_bare" % tarballname
+
+            if ud.shallow_revs:
+                tarballname = "%s_%s" % (tarballname, "_".join(sorted(ud.shallow_revs)))
+
+            for name, revision in sorted(ud.revisions.items()):
+                tarballname = "%s_%s" % (tarballname, ud.revisions[name][:7])
+                depth = ud.shallow_depths[name]
+                if depth:
+                    tarballname = "%s-%s" % (tarballname, depth)
+
+            shallow_refs = []
+            if not ud.nobranch:
+                shallow_refs.extend(ud.branches.values())
+            if ud.shallow_extra_refs:
+                shallow_refs.extend(r.replace('refs/heads/', '').replace('*', 'ALL') for r in ud.shallow_extra_refs)
+            if shallow_refs:
+                tarballname = "%s_%s" % (tarballname, "_".join(sorted(shallow_refs)).replace('/', '.'))
+
+            fetcher = self.__class__.__name__.lower()
+            ud.shallowtarball = '%sshallow_%s.tar.gz' % (fetcher, tarballname)
+            ud.fullshallow = os.path.join(dl_dir, ud.shallowtarball)
+            ud.mirrortarballs.insert(0, ud.shallowtarball)
+
     def localpath(self, ud, d):
         return ud.clonedir
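
The shallow mirror tarball name above is built from the source name, an optional _bare marker, any shallow revisions, each srcrev (truncated to 7 characters) with its depth, and the sorted refs, and it is inserted ahead of the full-mirror name so mirrors prefer it. A pure-function sketch of that assembly; the field order follows the hunk, but the function is illustrative rather than the exact code:

    def shallow_tarball_name(srcname, revisions, depths, refs, bareclone=False):
        name = srcname + ("_bare" if bareclone else "")
        for rev_name, revision in sorted(revisions.items()):
            name += "_" + revision[:7]
            if depths.get(rev_name):
                name += "-%d" % depths[rev_name]
        if refs:
            name += "_" + "_".join(sorted(refs)).replace('/', '.')
        return "gitshallow_%s.tar.gz" % name

    print(shallow_tarball_name("git.example.com.repo.git",
                               {"default": "0123456789abcdef0123"},
                               {"default": 1},
                               ["master"]))
    # -> gitshallow_git.example.com.repo.git_0123456-1_master.tar.gz
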
 
@@ -221,6 +301,8 @@
         for name in ud.names:
             if not self._contains_ref(ud, d, name, ud.clonedir):
                 return True
+        if ud.shallow and ud.write_shallow_tarballs and not os.path.exists(ud.fullshallow):
+            return True
         if ud.write_tarballs and not os.path.exists(ud.fullmirror):
             return True
         return False
@@ -237,8 +319,16 @@
     def download(self, ud, d):
         """Fetch url"""
 
-        # If the checkout doesn't exist and the mirror tarball does, extract it
-        if not os.path.exists(ud.clonedir) and os.path.exists(ud.fullmirror):
+        no_clone = not os.path.exists(ud.clonedir)
+        need_update = no_clone or self.need_update(ud, d)
+
+        # A current clone is preferred to either tarball, a shallow tarball is
+        # preferred to an out of date clone, and a missing clone will use
+        # either tarball.
+        if ud.shallow and os.path.exists(ud.fullshallow) and need_update:
+            ud.localpath = ud.fullshallow
+            return
+        elif os.path.exists(ud.fullmirror) and no_clone:
             bb.utils.mkdirhier(ud.clonedir)
             runfetchcmd("tar -xzf %s" % ud.fullmirror, d, workdir=ud.clonedir)
 
@@ -284,9 +374,21 @@
                 raise bb.fetch2.FetchError("Unable to find revision %s in branch %s even from upstream" % (ud.revisions[name], ud.branches[name]))
 
     def build_mirror_data(self, ud, d):
-        # Generate a mirror tarball if needed
-        if ud.write_tarballs and not os.path.exists(ud.fullmirror):
-            # it's possible that this symlink points to read-only filesystem with PREMIRROR
+        if ud.shallow and ud.write_shallow_tarballs:
+            if not os.path.exists(ud.fullshallow):
+                if os.path.islink(ud.fullshallow):
+                    os.unlink(ud.fullshallow)
+                tempdir = tempfile.mkdtemp(dir=d.getVar('DL_DIR'))
+                shallowclone = os.path.join(tempdir, 'git')
+                try:
+                    self.clone_shallow_local(ud, shallowclone, d)
+
+                    logger.info("Creating tarball of git repository")
+                    runfetchcmd("tar -czf %s ." % ud.fullshallow, d, workdir=shallowclone)
+                    runfetchcmd("touch %s.done" % ud.fullshallow, d)
+                finally:
+                    bb.utils.remove(tempdir, recurse=True)
+        elif ud.write_tarballs and not os.path.exists(ud.fullmirror):
             if os.path.islink(ud.fullmirror):
                 os.unlink(ud.fullmirror)
 
@@ -294,6 +396,62 @@
             runfetchcmd("tar -czf %s ." % ud.fullmirror, d, workdir=ud.clonedir)
             runfetchcmd("touch %s.done" % ud.fullmirror, d)
 
+    def clone_shallow_local(self, ud, dest, d):
+        """Clone the repo and make it shallow.
+
+        The upstream url of the new clone isn't set at this time, as it'll be
+        set correctly when unpacked."""
+        runfetchcmd("%s clone %s %s %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, dest), d)
+
+        to_parse, shallow_branches = [], []
+        for name in ud.names:
+            revision = ud.revisions[name]
+            depth = ud.shallow_depths[name]
+            if depth:
+                to_parse.append('%s~%d^{}' % (revision, depth - 1))
+
+            # For nobranch, we need a ref, otherwise the commits will be
+            # removed, and for non-nobranch, we truncate the branch to our
+            # srcrev, to avoid keeping unnecessary history beyond that.
+            branch = ud.branches[name]
+            if ud.nobranch:
+                ref = "refs/shallow/%s" % name
+            elif ud.bareclone:
+                ref = "refs/heads/%s" % branch
+            else:
+                ref = "refs/remotes/origin/%s" % branch
+
+            shallow_branches.append(ref)
+            runfetchcmd("%s update-ref %s %s" % (ud.basecmd, ref, revision), d, workdir=dest)
+
+        # Map srcrev+depths to revisions
+        parsed_depths = runfetchcmd("%s rev-parse %s" % (ud.basecmd, " ".join(to_parse)), d, workdir=dest)
+
+        # Resolve specified revisions
+        parsed_revs = runfetchcmd("%s rev-parse %s" % (ud.basecmd, " ".join('"%s^{}"' % r for r in ud.shallow_revs)), d, workdir=dest)
+        shallow_revisions = parsed_depths.splitlines() + parsed_revs.splitlines()
+
+        # Apply extra ref wildcards
+        all_refs = runfetchcmd('%s for-each-ref "--format=%%(refname)"' % ud.basecmd,
+                               d, workdir=dest).splitlines()
+        for r in ud.shallow_extra_refs:
+            if not ud.bareclone:
+                r = r.replace('refs/heads/', 'refs/remotes/origin/')
+
+            if '*' in r:
+                matches = filter(lambda a: fnmatch.fnmatchcase(a, r), all_refs)
+                shallow_branches.extend(matches)
+            else:
+                shallow_branches.append(r)
+
+        # Make the repository shallow
+        shallow_cmd = ['git', 'make-shallow', '-s']
+        for b in shallow_branches:
+            shallow_cmd.append('-r')
+            shallow_cmd.append(b)
+        shallow_cmd.extend(shallow_revisions)
+        runfetchcmd(subprocess.list2cmdline(shallow_cmd), d, workdir=dest)
+
     def unpack(self, ud, destdir, d):
         """ unpack the downloaded src to destdir"""
 
@@ -310,11 +468,12 @@
         if os.path.exists(destdir):
             bb.utils.prunedir(destdir)
 
-        cloneflags = "-s -n"
-        if ud.bareclone:
-            cloneflags += " --mirror"
+        if ud.shallow and (not os.path.exists(ud.clonedir) or self.need_update(ud, d)):
+            bb.utils.mkdirhier(destdir)
+            runfetchcmd("tar -xzf %s" % ud.fullshallow, d, workdir=destdir)
+        else:
+            runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, ud.cloneflags, ud.clonedir, destdir), d)
 
-        runfetchcmd("%s clone %s %s/ %s" % (ud.basecmd, cloneflags, ud.clonedir, destdir), d)
         repourl = self._get_repo_url(ud)
         runfetchcmd("%s remote set-url origin %s" % (ud.basecmd, repourl), d, workdir=destdir)
         if not ud.nocheckout:
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitannex.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitannex.py
index c66c211..a9b69ca 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitannex.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitannex.py
@@ -33,6 +33,11 @@
         """
         return ud.type in ['gitannex']
 
+    def urldata_init(self, ud, d):
+        super(GitANNEX, self).urldata_init(ud, d)
+        if ud.shallow:
+            ud.shallow_extra_refs += ['refs/heads/git-annex', 'refs/heads/synced/*']
+
     def uses_annex(self, ud, d, wd):
         for name in ud.names:
             try:
@@ -55,9 +60,21 @@
     def download(self, ud, d):
         Git.download(self, ud, d)
 
-        annex = self.uses_annex(ud, d, ud.clonedir)
-        if annex:
-            self.update_annex(ud, d, ud.clonedir)
+        if not ud.shallow or ud.localpath != ud.fullshallow:
+            if self.uses_annex(ud, d, ud.clonedir):
+                self.update_annex(ud, d, ud.clonedir)
+
+    def clone_shallow_local(self, ud, dest, d):
+        super(GitANNEX, self).clone_shallow_local(ud, dest, d)
+
+        try:
+            runfetchcmd("%s annex init" % ud.basecmd, d, workdir=dest)
+        except bb.fetch.FetchError:
+            pass
+
+        if self.uses_annex(ud, d, dest):
+            runfetchcmd("%s annex get" % ud.basecmd, d, workdir=dest)
+            runfetchcmd("chmod u+w -R %s/.git/annex" % (dest), d, quiet=True, workdir=dest)
 
     def unpack(self, ud, destdir, d):
         Git.unpack(self, ud, destdir, d)
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitsm.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitsm.py
index a95584c..0aff100 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitsm.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/gitsm.py
@@ -117,14 +117,19 @@
     def download(self, ud, d):
         Git.download(self, ud, d)
 
-        submodules = self.uses_submodules(ud, d, ud.clonedir)
-        if submodules:
-            self.update_submodules(ud, d)
+        if not ud.shallow or ud.localpath != ud.fullshallow:
+            submodules = self.uses_submodules(ud, d, ud.clonedir)
+            if submodules:
+                self.update_submodules(ud, d)
+
+    def clone_shallow_local(self, ud, dest, d):
+        super(GitSM, self).clone_shallow_local(ud, dest, d)
+
+        runfetchcmd('cp -fpPRH "%s/modules" "%s/"' % (ud.clonedir, os.path.join(dest, '.git')), d)
 
     def unpack(self, ud, destdir, d):
         Git.unpack(self, ud, destdir, d)
-        
-        submodules = self.uses_submodules(ud, d, ud.destdir)
-        if submodules:
+
+        if self.uses_submodules(ud, d, ud.destdir):
             runfetchcmd(ud.basecmd + " checkout " + ud.revisions[ud.names[0]], d, workdir=ud.destdir)
             runfetchcmd(ud.basecmd + " submodule update --init --recursive", d, workdir=ud.destdir)
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/hg.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/hg.py
index b5f2686..d0857e6 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/hg.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/hg.py
@@ -76,8 +76,9 @@
         # Create paths to mercurial checkouts
         hgsrcname = '%s_%s_%s' % (ud.module.replace('/', '.'), \
                             ud.host, ud.path.replace('/', '.'))
-        ud.mirrortarball = 'hg_%s.tar.gz' % hgsrcname
-        ud.fullmirror = os.path.join(d.getVar("DL_DIR"), ud.mirrortarball)
+        mirrortarball = 'hg_%s.tar.gz' % hgsrcname
+        ud.fullmirror = os.path.join(d.getVar("DL_DIR"), mirrortarball)
+        ud.mirrortarballs = [mirrortarball]
 
         hgdir = d.getVar("HGDIR") or (d.getVar("DL_DIR") + "/hg/")
         ud.pkgdir = os.path.join(hgdir, hgsrcname)
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/npm.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/npm.py
index 73a75fe..b5f148c 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/npm.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/npm.py
@@ -91,9 +91,10 @@
         ud.prefixdir = prefixdir
 
         ud.write_tarballs = ((d.getVar("BB_GENERATE_MIRROR_TARBALLS") or "0") != "0")
-        ud.mirrortarball = 'npm_%s-%s.tar.xz' % (ud.pkgname, ud.version)
-        ud.mirrortarball = ud.mirrortarball.replace('/', '-')
-        ud.fullmirror = os.path.join(d.getVar("DL_DIR"), ud.mirrortarball)
+        mirrortarball = 'npm_%s-%s.tar.xz' % (ud.pkgname, ud.version)
+        mirrortarball = mirrortarball.replace('/', '-')
+        ud.fullmirror = os.path.join(d.getVar("DL_DIR"), mirrortarball)
+        ud.mirrortarballs = [mirrortarball]
 
     def need_update(self, ud, d):
         if os.path.exists(ud.localpath):
@@ -262,26 +263,27 @@
             runfetchcmd("tar -xJf %s" % (ud.fullmirror), d, workdir=dest)
             return
 
-        shwrf = d.getVar('NPM_SHRINKWRAP')
-        logger.debug(2, "NPM shrinkwrap file is %s" % shwrf)
-        if shwrf:
-            try:
-                with open(shwrf) as datafile:
-                    shrinkobj = json.load(datafile)
-            except Exception as e:
-                raise FetchError('Error loading NPM_SHRINKWRAP file "%s" for %s: %s' % (shwrf, ud.pkgname, str(e)))
-        elif not ud.ignore_checksums:
-            logger.warning('Missing shrinkwrap file in NPM_SHRINKWRAP for %s, this will lead to unreliable builds!' % ud.pkgname)
-        lckdf = d.getVar('NPM_LOCKDOWN')
-        logger.debug(2, "NPM lockdown file is %s" % lckdf)
-        if lckdf:
-            try:
-                with open(lckdf) as datafile:
-                    lockdown = json.load(datafile)
-            except Exception as e:
-                raise FetchError('Error loading NPM_LOCKDOWN file "%s" for %s: %s' % (lckdf, ud.pkgname, str(e)))
-        elif not ud.ignore_checksums:
-            logger.warning('Missing lockdown file in NPM_LOCKDOWN for %s, this will lead to unreproducible builds!' % ud.pkgname)
+        if ud.parm.get("noverify", None) != '1':
+            shwrf = d.getVar('NPM_SHRINKWRAP')
+            logger.debug(2, "NPM shrinkwrap file is %s" % shwrf)
+            if shwrf:
+                try:
+                    with open(shwrf) as datafile:
+                        shrinkobj = json.load(datafile)
+                except Exception as e:
+                    raise FetchError('Error loading NPM_SHRINKWRAP file "%s" for %s: %s' % (shwrf, ud.pkgname, str(e)))
+            elif not ud.ignore_checksums:
+                logger.warning('Missing shrinkwrap file in NPM_SHRINKWRAP for %s, this will lead to unreliable builds!' % ud.pkgname)
+            lckdf = d.getVar('NPM_LOCKDOWN')
+            logger.debug(2, "NPM lockdown file is %s" % lckdf)
+            if lckdf:
+                try:
+                    with open(lckdf) as datafile:
+                        lockdown = json.load(datafile)
+                except Exception as e:
+                    raise FetchError('Error loading NPM_LOCKDOWN file "%s" for %s: %s' % (lckdf, ud.pkgname, str(e)))
+            elif not ud.ignore_checksums:
+                logger.warning('Missing lockdown file in NPM_LOCKDOWN for %s, this will lead to unreproducible builds!' % ud.pkgname)
 
         if ('name' not in shrinkobj):
             self._getdependencies(ud.pkgname, jsondepobj, ud.version, d, ud)
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/repo.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/repo.py
index 1be91cc..c22d9b5 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/repo.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/repo.py
@@ -27,6 +27,7 @@
 import bb
 from   bb.fetch2 import FetchMethod
 from   bb.fetch2 import runfetchcmd
+from   bb.fetch2 import logger
 
 class Repo(FetchMethod):
     """Class to fetch a module or modules from repo (git) repositories"""
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/wget.py b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/wget.py
index ae0ffa8..7c49c2b 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/fetch2/wget.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/fetch2/wget.py
@@ -30,6 +30,7 @@
 import subprocess
 import os
 import logging
+import errno
 import bb
 import bb.progress
 import urllib.request, urllib.parse, urllib.error
@@ -89,13 +90,13 @@
 
         self.basecmd = d.getVar("FETCHCMD_wget") or "/usr/bin/env wget -t 2 -T 30 --passive-ftp --no-check-certificate"
 
-    def _runwget(self, ud, d, command, quiet):
+    def _runwget(self, ud, d, command, quiet, workdir=None):
 
         progresshandler = WgetProgressHandler(d)
 
         logger.debug(2, "Fetching %s using command '%s'" % (ud.url, command))
         bb.fetch2.check_network_access(d, command, ud.url)
-        runfetchcmd(command + ' --progress=dot -v', d, quiet, log=progresshandler)
+        runfetchcmd(command + ' --progress=dot -v', d, quiet, log=progresshandler, workdir=workdir)
 
     def download(self, ud, d):
         """Fetch urls"""
@@ -206,8 +207,21 @@
                     h.request(req.get_method(), req.selector, req.data, headers)
                 except socket.error as err: # XXX what error?
                     # Don't close connection when cache is enabled.
+                    # Instead, try to detect connections that are no longer
+                    # usable (for example, closed unexpectedly) and remove
+                    # them from the cache.
                     if fetch.connection_cache is None:
                         h.close()
+                    elif isinstance(err, OSError) and err.errno == errno.EBADF:
+                        # This happens when the server closes the connection despite the Keep-Alive.
+                        # Apparently urllib then uses the file descriptor, expecting it to be
+                        # connected, when in reality the connection is already gone.
+                        # We let the request fail and expect it to be
+                        # tried once more ("try_again" in check_status()),
+                        # with the dead connection removed from the cache.
+                        # If it still fails, we give up, which can happen with bad
+                        # HTTP proxy settings.
+                        fetch.connection_cache.remove_connection(h.host, h.port)
                     raise urllib.error.URLError(err)
                 else:
                     try:
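
The comment block above describes the recovery path: when a cached keep-alive connection has already been closed by the server, the next use fails with EBADF, so the connection is evicted from the cache and the request is retried on a fresh one. A stdlib-only sketch of that pattern, with a plain dict as the connection cache (the real code uses BitBake's connection cache and urllib opener chain):

    import errno
    import http.client

    def send_request(cache, host, port, method, path):
        key = (host, port)
        conn = cache.get(key)
        if conn is None:
            conn = cache[key] = http.client.HTTPConnection(host, port)
        try:
            conn.request(method, path)
            return conn.getresponse()
        except OSError as err:
            if err.errno == errno.EBADF:
                # Server dropped the keep-alive connection; evict it so the
                # caller's retry gets a fresh connection.
                cache.pop(key, None)
            raise
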
@@ -269,11 +283,6 @@
             """
             http_error_403 = http_error_405
 
-            """
-            Some servers (e.g. FusionForge) returns 406 Not Acceptable when they
-            actually mean 405 Method Not Allowed.
-            """
-            http_error_406 = http_error_405
 
         class FixedHTTPRedirectHandler(urllib.request.HTTPRedirectHandler):
             """
@@ -302,7 +311,9 @@
             uri = ud.url.split(";")[0]
             r = urllib.request.Request(uri)
             r.get_method = lambda: "HEAD"
-
+            # Some servers (FusionForge, as used on Alioth) require that the
+            # optional Accept header is set.
+            r.add_header("Accept", "*/*")
             def add_basic_auth(login_str, request):
                 '''Adds Basic auth to http request, pass in login:password as string'''
                 import base64
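
The status check now sends an explicit Accept header with its HEAD request because some servers (FusionForge on Alioth is the case cited) reject requests without one. A minimal standalone version of that request, without BitBake's custom opener, cache or auth handling; url_exists() is an illustrative name:

    import urllib.error
    import urllib.request

    def url_exists(uri, timeout=30):
        req = urllib.request.Request(uri, method="HEAD")
        # Some servers require the otherwise-optional Accept header.
        req.add_header("Accept", "*/*")
        try:
            with urllib.request.urlopen(req, timeout=timeout):
                return True
        except urllib.error.URLError:
            return False
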
@@ -408,17 +419,16 @@
         Run fetch checkstatus to get directory information
         """
         f = tempfile.NamedTemporaryFile()
+        with tempfile.TemporaryDirectory(prefix="wget-index-") as workdir, tempfile.NamedTemporaryFile(dir=workdir, prefix="wget-listing-") as f:
+            agent = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12"
+            fetchcmd = self.basecmd
+            fetchcmd += " -O " + f.name + " --user-agent='" + agent + "' '" + uri + "'"
+            try:
+                self._runwget(ud, d, fetchcmd, True, workdir=workdir)
+                fetchresult = f.read()
+            except bb.fetch2.BBFetchException:
+                fetchresult = ""
 
-        agent = "Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.2.12) Gecko/20101027 Ubuntu/9.10 (karmic) Firefox/3.6.12"
-        fetchcmd = self.basecmd
-        fetchcmd += " -O " + f.name + " --user-agent='" + agent + "' '" + uri + "'"
-        try:
-            self._runwget(ud, d, fetchcmd, True)
-            fetchresult = f.read()
-        except bb.fetch2.BBFetchException:
-            fetchresult = ""
-
-        f.close()
         return fetchresult
 
     def _check_latest_version(self, url, package, package_regex, current_version, ud, d):
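
The index fetch now runs inside a TemporaryDirectory, with the listing written to a NamedTemporaryFile in that directory, so both are cleaned up automatically even when the fetch fails. The same cleanup pattern in isolation; run_cmd() is a stand-in for runfetchcmd(), and the wget invocation is illustrative:

    import subprocess
    import tempfile

    def fetch_listing(url, run_cmd=subprocess.check_call):
        with tempfile.TemporaryDirectory(prefix="wget-index-") as workdir, \
             tempfile.NamedTemporaryFile(dir=workdir, prefix="wget-listing-") as f:
            try:
                # The command writes the listing to f.name; we read it back
                # before the with-block removes both file and directory.
                run_cmd(["wget", "-O", f.name, url], cwd=workdir)
                return f.read()
            except subprocess.CalledProcessError:
                return b""
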
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/main.py b/import-layers/yocto-poky/bitbake/lib/bb/main.py
index 8c948c2..7711b29 100755
--- a/import-layers/yocto-poky/bitbake/lib/bb/main.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/main.py
@@ -28,6 +28,8 @@
 import optparse
 import warnings
 import fcntl
+import time
+import traceback
 
 import bb
 from bb import event
@@ -37,11 +39,17 @@
 from bb import server
 from bb import cookerdata
 
+import bb.server.process
+import bb.server.xmlrpcclient
+
 logger = logging.getLogger("BitBake")
 
 class BBMainException(Exception):
     pass
 
+class BBMainFatal(bb.BBHandledException):
+    pass
+
 def present_options(optionlist):
     if len(optionlist) > 1:
         return ' or '.join([', '.join(optionlist[:-1]), optionlist[-1]])
@@ -58,9 +66,6 @@
         if option.dest == 'ui':
             valid_uis = list_extension_modules(bb.ui, 'main')
             option.help = option.help.replace('@CHOICES@', present_options(valid_uis))
-        elif option.dest == 'servertype':
-            valid_server_types = list_extension_modules(bb.server, 'BitBakeServer')
-            option.help = option.help.replace('@CHOICES@', present_options(valid_server_types))
 
         return optparse.IndentedHelpFormatter.format_option(self, option)
 
@@ -148,11 +153,6 @@
                                "failed and anything depending on it cannot be built, as much as "
                                "possible will be built before stopping.")
 
-        parser.add_option("-a", "--tryaltconfigs", action="store_true",
-                          dest="tryaltconfigs", default=False,
-                          help="Continue with builds by trying to use alternative providers "
-                               "where possible.")
-
         parser.add_option("-f", "--force", action="store_true", dest="force", default=False,
                           help="Force the specified targets/task to run (invalidating any "
                                "existing stamp file).")
@@ -238,11 +238,6 @@
                           default=os.environ.get('BITBAKE_UI', 'knotty'),
                           help="The user interface to use (@CHOICES@ - default %default).")
 
-        # @CHOICES@ is substituted out by BitbakeHelpFormatter above
-        parser.add_option("-t", "--servertype", action="store", dest="servertype",
-                          default=["process", "xmlrpc"]["BBSERVER" in os.environ],
-                          help="Choose which server type to use (@CHOICES@ - default %default).")
-
         parser.add_option("", "--token", action="store", dest="xmlrpctoken",
                           default=os.environ.get("BBTOKEN"),
                           help="Specify the connection token to be used when connecting "
@@ -258,15 +253,14 @@
                           help="Run bitbake without a UI, only starting a server "
                                "(cooker) process.")
 
-        parser.add_option("", "--foreground", action="store_true",
-                          help="Run bitbake server in foreground.")
-
         parser.add_option("-B", "--bind", action="store", dest="bind", default=False,
-                          help="The name/address for the bitbake server to bind to.")
+                          help="The name/address for the bitbake xmlrpc server to bind to.")
 
-        parser.add_option("-T", "--idle-timeout", type=int,
-                          default=int(os.environ.get("BBTIMEOUT", "0")),
-                          help="Set timeout to unload bitbake server due to inactivity")
+        parser.add_option("-T", "--idle-timeout", type=float, dest="server_timeout",
+                          default=os.getenv("BB_SERVER_TIMEOUT"),
+                          help="Set timeout to unload bitbake server due to inactivity, "
+                                "set to -1 means no unload, "
+                                "default: Environment variable BB_SERVER_TIMEOUT.")
 
         parser.add_option("", "--no-setscene", action="store_true",
                           dest="nosetscene", default=False,
@@ -283,7 +277,7 @@
 
         parser.add_option("-m", "--kill-server", action="store_true",
                           dest="kill_server", default=False,
-                          help="Terminate the remote server.")
+                          help="Terminate any running bitbake server.")
 
         parser.add_option("", "--observe-only", action="store_true",
                           dest="observe_only", default=False,
@@ -322,70 +316,20 @@
             eventlog = "bitbake_eventlog_%s.json" % datetime.now().strftime("%Y%m%d%H%M%S")
             options.writeeventlog = eventlog
 
-        # if BBSERVER says to autodetect, let's do that
-        if options.remote_server:
-            port = -1
-            if options.remote_server != 'autostart':
-                host, port = options.remote_server.split(":", 2)
+        if options.bind:
+            try:
+                # Check that bind is a ':' delimited host:port value with a numeric port
+                (host, port) = options.bind.split(':')
                 port = int(port)
-            # use automatic port if port set to -1, means read it from
-            # the bitbake.lock file; this is a bit tricky, but we always expect
-            # to be in the base of the build directory if we need to have a
-            # chance to start the server later, anyway
-            if port == -1:
-                lock_location = "./bitbake.lock"
-                # we try to read the address at all times; if the server is not started,
-                # we'll try to start it after the first connect fails, below
-                try:
-                    lf = open(lock_location, 'r')
-                    remotedef = lf.readline()
-                    [host, port] = remotedef.split(":")
-                    port = int(port)
-                    lf.close()
-                    options.remote_server = remotedef
-                except Exception as e:
-                    if options.remote_server != 'autostart':
-                        raise BBMainException("Failed to read bitbake.lock (%s), invalid port" % str(e))
+            except (ValueError,IndexError):
+                raise BBMainException("FATAL: Malformed host:port bind parameter")
+            options.xmlrpcinterface = (host, port)
+        else:
+            options.xmlrpcinterface = (None, 0)
 
         return options, targets[1:]
 
 
-def start_server(servermodule, configParams, configuration, features):
-    server = servermodule.BitBakeServer()
-    single_use = not configParams.server_only and os.getenv('BBSERVER') != 'autostart'
-    if configParams.bind:
-        (host, port) = configParams.bind.split(':')
-        server.initServer((host, int(port)), single_use=single_use,
-                          idle_timeout=configParams.idle_timeout)
-        configuration.interface = [server.serverImpl.host, server.serverImpl.port]
-    else:
-        server.initServer(single_use=single_use)
-        configuration.interface = []
-
-    try:
-        configuration.setServerRegIdleCallback(server.getServerIdleCB())
-
-        cooker = bb.cooker.BBCooker(configuration, features)
-
-        server.addcooker(cooker)
-        server.saveConnectionDetails()
-    except Exception as e:
-        while hasattr(server, "event_queue"):
-            import queue
-            try:
-                event = server.event_queue.get(block=False)
-            except (queue.Empty, IOError):
-                break
-            if isinstance(event, logging.LogRecord):
-                logger.handle(event)
-        raise
-    if not configParams.foreground:
-        server.detach()
-        cooker.shutdown()
-    cooker.lock.close()
-    return server
-
-
 def bitbake_main(configParams, configuration):
 
     # Python multiprocessing requires /dev/shm on Linux
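
With the server types gone, -B/--bind is now validated directly in the option handling: the value must be a single host:port pair with a numeric port, and anything else is rejected up front. The validation by itself; ValueError stands in for BBMainException:

    def parse_bind(bind):
        try:
            host, port = bind.split(':')
            port = int(port)
        except (ValueError, IndexError):
            raise ValueError("Malformed host:port bind parameter: %r" % bind)
        return (host, port)

    assert parse_bind("localhost:8765") == ("localhost", 8765)
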
@@ -406,45 +350,15 @@
 
     configuration.setConfigParameters(configParams)
 
-    if configParams.server_only:
-        if configParams.servertype != "xmlrpc":
-            raise BBMainException("FATAL: If '--server-only' is defined, we must set the "
-                                  "servertype as 'xmlrpc'.\n")
-        if not configParams.bind:
-            raise BBMainException("FATAL: The '--server-only' option requires a name/address "
-                                  "to bind to with the -B option.\n")
-        else:
-            try:
-                #Checking that the port is a number
-                int(configParams.bind.split(":")[1])
-            except (ValueError,IndexError):
-                raise BBMainException(
-                        "FATAL: Malformed host:port bind parameter")
-        if configParams.remote_server:
+    if configParams.server_only and configParams.remote_server:
             raise BBMainException("FATAL: The '--server-only' option conflicts with %s.\n" %
                                   ("the BBSERVER environment variable" if "BBSERVER" in os.environ \
                                    else "the '--remote-server' option"))
 
-    elif configParams.foreground:
-        raise BBMainException("FATAL: The '--foreground' option can only be used "
-                              "with --server-only.\n")
-
-    if configParams.bind and configParams.servertype != "xmlrpc":
-        raise BBMainException("FATAL: If '-B' or '--bind' is defined, we must "
-                              "set the servertype as 'xmlrpc'.\n")
-
-    if configParams.remote_server and configParams.servertype != "xmlrpc":
-        raise BBMainException("FATAL: If '--remote-server' is defined, we must "
-                              "set the servertype as 'xmlrpc'.\n")
-
-    if configParams.observe_only and (not configParams.remote_server or configParams.bind):
+    if configParams.observe_only and not (configParams.remote_server or configParams.bind):
         raise BBMainException("FATAL: '--observe-only' can only be used by UI clients "
                               "connecting to a server.\n")
 
-    if configParams.kill_server and not configParams.remote_server:
-        raise BBMainException("FATAL: '--kill-server' can only be used to "
-                              "terminate a remote server")
-
     if "BBDEBUG" in os.environ:
         level = int(os.environ["BBDEBUG"])
         if level > configuration.debug:
@@ -453,9 +367,13 @@
     bb.msg.init_msgconfig(configParams.verbose, configuration.debug,
                           configuration.debug_domains)
 
-    server, server_connection, ui_module = setup_bitbake(configParams, configuration)
-    if server_connection is None and configParams.kill_server:
-        return 0
+    server_connection, ui_module = setup_bitbake(configParams, configuration)
+    # No server connection
+    if server_connection is None:
+        if configParams.status_only:
+            return 1
+        if configParams.kill_server:
+            return 0
 
     if not configParams.server_only:
         if configParams.status_only:
@@ -463,16 +381,15 @@
             return 0
 
         try:
+            for event in bb.event.ui_queue:
+                server_connection.events.queue_event(event)
+            bb.event.ui_queue = []
+
             return ui_module.main(server_connection.connection, server_connection.events,
                                   configParams)
         finally:
-            bb.event.ui_queue = []
             server_connection.terminate()
     else:
-        print("Bitbake server address: %s, server port: %s" % (server.serverImpl.host,
-                                                               server.serverImpl.port))
-        if configParams.foreground:
-            server.serverImpl.serve_forever()
         return 0
 
     return 1
@@ -495,58 +412,93 @@
         # Collect the feature set for the UI
         featureset = getattr(ui_module, "featureSet", [])
 
-    if configParams.server_only:
-        for param in ('prefile', 'postfile'):
-            value = getattr(configParams, param)
-            if value:
-                setattr(configuration, "%s_server" % param, value)
-                param = "%s_server" % param
-
     if extrafeatures:
         for feature in extrafeatures:
             if not feature in featureset:
                 featureset.append(feature)
 
-    servermodule = import_extension_module(bb.server,
-                                            configParams.servertype,
-                                            'BitBakeServer')
+    server_connection = None
+
     if configParams.remote_server:
-        if os.getenv('BBSERVER') == 'autostart':
-            if configParams.remote_server == 'autostart' or \
-               not servermodule.check_connection(configParams.remote_server, timeout=2):
-                configParams.bind = 'localhost:0'
-                srv = start_server(servermodule, configParams, configuration, featureset)
-                configParams.remote_server = '%s:%d' % tuple(configuration.interface)
-                bb.event.ui_queue = []
-        # we start a stub server that is actually a XMLRPClient that connects to a real server
-        from bb.server.xmlrpc import BitBakeXMLRPCClient
-        server = servermodule.BitBakeXMLRPCClient(configParams.observe_only,
-                                                  configParams.xmlrpctoken)
-        server.saveConnectionDetails(configParams.remote_server)
+        # Connect to a remote XMLRPC server
+        server_connection = bb.server.xmlrpcclient.connectXMLRPC(configParams.remote_server, featureset,
+                                                                 configParams.observe_only, configParams.xmlrpctoken)
     else:
-        # we start a server with a given configuration
-        server = start_server(servermodule, configParams, configuration, featureset)
+        retries = 8
+        while retries:
+            try:
+                topdir, lock = lockBitbake()
+                sockname = topdir + "/bitbake.sock"
+                if lock:
+                    if configParams.status_only or configParams.kill_server:
+                        logger.info("bitbake server is not running.")
+                        lock.close()
+                        return None, None
+                    # we start a server with a given configuration
+                    logger.info("Starting bitbake server...")
+                    # Clear the event queue since we already displayed messages
+                    bb.event.ui_queue = []
+                    server = bb.server.process.BitBakeServer(lock, sockname, configuration, featureset)
+
+                else:
+                    logger.info("Reconnecting to bitbake server...")
+                    if not os.path.exists(sockname):
+                        print("Previous bitbake instance shutting down?, waiting to retry...")
+                        i = 0
+                        lock = None
+                        # Wait for 5s or until we can get the lock
+                        while not lock and i < 50:
+                            time.sleep(0.1)
+                            _, lock = lockBitbake()
+                            i += 1
+                        if lock:
+                            bb.utils.unlockfile(lock)
+                        raise bb.server.process.ProcessTimeout("Bitbake still shutting down as socket exists but no lock?")
+                if not configParams.server_only:
+                    try:
+                        server_connection = bb.server.process.connectProcessServer(sockname, featureset)
+                    except EOFError:
+                        # The server may have been shutting down but not closed the socket yet. If that happened,
+                        # ignore it.
+                        pass
+
+                if server_connection or configParams.server_only:
+                    break
+            except BBMainFatal:
+                raise
+            except (Exception, bb.server.process.ProcessTimeout) as e:
+                if not retries:
+                    raise
+                retries -= 1
+                if isinstance(e, (bb.server.process.ProcessTimeout, BrokenPipeError)):
+                    logger.info("Retrying server connection...")
+                else:
+                    logger.info("Retrying server connection... (%s)" % traceback.format_exc())
+            if not retries:
+                bb.fatal("Unable to connect to bitbake server, or start one")
+            if retries < 5:
+                time.sleep(5)
+
+    if configParams.kill_server:
+        server_connection.connection.terminateServer()
+        server_connection.terminate()
         bb.event.ui_queue = []
+        logger.info("Terminated bitbake server.")
+        return None, None
 
-    if configParams.server_only:
-        server_connection = None
-    else:
-        try:
-            server_connection = server.establishConnection(featureset)
-        except Exception as e:
-            bb.fatal("Could not connect to server %s: %s" % (configParams.remote_server, str(e)))
+    # Restore the environment in case the UI needs it
+    for k in cleanedvars:
+        os.environ[k] = cleanedvars[k]
 
-        if configParams.kill_server:
-            server_connection.connection.terminateServer()
-            bb.event.ui_queue = []
-            return None, None, None
+    logger.removeHandler(handler)
 
-        server_connection.setupEventQueue()
+    return server_connection, ui_module
 
-        # Restore the environment in case the UI needs it
-        for k in cleanedvars:
-            os.environ[k] = cleanedvars[k]
+def lockBitbake():
+    topdir = bb.cookerdata.findTopdir()
+    if not topdir:
+        bb.error("Unable to find conf/bblayers.conf or conf/bitbake.conf. BBAPTH is unset and/or not in a build directory?")
+        raise BBMainFatal
+    lockfile = topdir + "/bitbake.lock"
+    return topdir, bb.utils.lockfile(lockfile, False, False)
 
-        logger.removeHandler(handler)
-
-    return server, server_connection, ui_module
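The retry loop above decides between starting a fresh server and connecting to an existing one purely from whether the non-blocking bitbake.lock acquisition in lockBitbake() succeeds. A standalone sketch of that decision, using plain fcntl rather than bb.utils.lockfile and hypothetical names:

import fcntl

def try_lock(path):
    lf = open(path, "a+")
    try:
        fcntl.flock(lf, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return lf      # we own the lock: no server is running, start one
    except OSError:
        lf.close()
        return None    # lock held elsewhere: connect to the existing bitbake.sock

lock = try_lock("bitbake.lock")
print("start new server" if lock else "connect to existing server")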
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/msg.py b/import-layers/yocto-poky/bitbake/lib/bb/msg.py
index 90b1582..f1723be 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/msg.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/msg.py
@@ -216,3 +216,10 @@
         logger.handlers = [console]
     logger.setLevel(level)
     return logger
+
+def has_console_handler(logger):
+    for handler in logger.handlers:
+        if isinstance(handler, logging.StreamHandler):
+            if handler.stream in [sys.stderr, sys.stdout]:
+                return True
+    return False
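has_console_handler() reports whether a logger already has a StreamHandler writing to stdout or stderr. A minimal usage sketch (the calling code here is assumed, not part of this patch): attach a console handler only if one is not already present.

import logging
import sys

def has_console_handler(logger):
    return any(isinstance(h, logging.StreamHandler) and h.stream in (sys.stdout, sys.stderr)
               for h in logger.handlers)

log = logging.getLogger("example")
if not has_console_handler(log):
    console = logging.StreamHandler(sys.stdout)
    console.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
    log.addHandler(console)
log.warning("console handler attached exactly once")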
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/parse/__init__.py b/import-layers/yocto-poky/bitbake/lib/bb/parse/__init__.py
index a2952ec..2fc4002 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/parse/__init__.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/parse/__init__.py
@@ -84,6 +84,10 @@
         logger.debug(1, "Updating mtime cache for %s" % f)
         update_mtime(f)
 
+def clear_cache():
+    global __mtime_cache
+    __mtime_cache = {}
+
 def mark_dependency(d, f):
     if f.startswith('./'):
         f = "%s/%s" % (os.getcwd(), f[2:])
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/BBHandler.py b/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/BBHandler.py
index fe918a4..f89ad24 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/BBHandler.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/BBHandler.py
@@ -144,7 +144,7 @@
     try:
         statements.eval(d)
     except bb.parse.SkipRecipe:
-        bb.data.setVar("__SKIPPED", True, d)
+        d.setVar("__SKIPPED", True)
         if include == 0:
             return { "" : d }
 
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/ConfHandler.py b/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/ConfHandler.py
index f7d0cf7..97aa130 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/ConfHandler.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/parse/parse_py/ConfHandler.py
@@ -32,7 +32,7 @@
 
 __config_regexp__  = re.compile( r"""
     ^
-    (?P<exp>export\s*)?
+    (?P<exp>export\s+)?
     (?P<var>[a-zA-Z0-9\-_+.${}/~]+?)
     (\[(?P<flag>[a-zA-Z0-9\-_+.]+)\])?
 
@@ -69,18 +69,26 @@
 def supports(fn, d):
     return fn[-5:] == ".conf"
 
-def include(parentfn, fn, lineno, data, error_out):
+def include(parentfn, fns, lineno, data, error_out):
     """
     error_out: A string indicating the verb (e.g. "include", "inherit") to be
     used in a ParseError that will be raised if the file to be included could
     not be included. Specify False to avoid raising an error in this case.
     """
+    fns = data.expand(fns)
+    parentfn = data.expand(parentfn)
+
+    # "include" or "require" accept zero to n space-separated file names to include.
+    for fn in fns.split():
+        include_single_file(parentfn, fn, lineno, data, error_out)
+
+def include_single_file(parentfn, fn, lineno, data, error_out):
+    """
+    Helper function for include() which does not expand or split its parameters.
+    """
     if parentfn == fn: # prevent infinite recursion
         return None
 
-    fn = data.expand(fn)
-    parentfn = data.expand(parentfn)
-
     if not os.path.isabs(fn):
         dname = os.path.dirname(parentfn)
         bbpath = "%s:%s" % (dname, data.getVar("BBPATH"))
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/process.py b/import-layers/yocto-poky/bitbake/lib/bb/process.py
index a4a5599..e69697c 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/process.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/process.py
@@ -94,46 +94,53 @@
                 if data is not None:
                     func(data)
 
+    def read_all_pipes(log, rin, outdata, errdata):
+        rlist = rin
+        stdoutbuf = b""
+        stderrbuf = b""
+
+        try:
+            r,w,e = select.select (rlist, [], [], 1)
+        except OSError as e:
+            if e.errno != errno.EINTR:
+                raise
+
+        readextras(r)
+
+        if pipe.stdout in r:
+            data = stdoutbuf + pipe.stdout.read()
+            if data is not None and len(data) > 0:
+                try:
+                    data = data.decode("utf-8")
+                    outdata.append(data)
+                    log.write(data)
+                    log.flush()
+                    stdoutbuf = b""
+                except UnicodeDecodeError:
+                    stdoutbuf = data
+
+        if pipe.stderr in r:
+            data = stderrbuf + pipe.stderr.read()
+            if data is not None and len(data) > 0:
+                try:
+                    data = data.decode("utf-8")
+                    errdata.append(data)
+                    log.write(data)
+                    log.flush()
+                    stderrbuf = b""
+                except UnicodeDecodeError:
+                    stderrbuf = data
+
     try:
+        # Read all pipes while the process is open
         while pipe.poll() is None:
-            rlist = rin
-            stdoutbuf = b""
-            stderrbuf = b""
-            try:
-                r,w,e = select.select (rlist, [], [], 1)
-            except OSError as e:
-                if e.errno != errno.EINTR:
-                    raise
+            read_all_pipes(log, rin, outdata, errdata)
 
-            if pipe.stdout in r:
-                data = stdoutbuf + pipe.stdout.read()
-                if data is not None and len(data) > 0:
-                    try:
-                        data = data.decode("utf-8")
-                        outdata.append(data)
-                        log.write(data)
-                        stdoutbuf = b""
-                    except UnicodeDecodeError:
-                        stdoutbuf = data
-
-            if pipe.stderr in r:
-                data = stderrbuf + pipe.stderr.read()
-                if data is not None and len(data) > 0:
-                    try:
-                        data = data.decode("utf-8")
-                        errdata.append(data)
-                        log.write(data)
-                        stderrbuf = b""
-                    except UnicodeDecodeError:
-                        stderrbuf = data
-
-            readextras(r)
-
-    finally:    
+        # Process closed, drain all pipes...
+        read_all_pipes(log, rin, outdata, errdata)
+    finally:
         log.flush()
 
-    readextras([fobj for fobj, _ in extrafiles])
-
     if pipe.stdout is not None:
         pipe.stdout.close()
     if pipe.stderr is not None:
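The read_all_pipes() helper above carries undecodable bytes over to the next read so that a multi-byte UTF-8 sequence split across two reads is neither lost nor corrupted. A standalone sketch of that carry-buffer technique (names are illustrative); as in the helper, bytes that never become decodable simply stay in the buffer:

raw = "héllo wörld".encode("utf-8")
chunks = [raw[i:i + 3] for i in range(0, len(raw), 3)]   # simulate short reads

buf = b""
out = []
for chunk in chunks:
    data = buf + chunk
    try:
        out.append(data.decode("utf-8"))
        buf = b""
    except UnicodeDecodeError:
        buf = data      # incomplete sequence; retry once more bytes arrive

print("".join(out))     # héllo wörld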
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/runqueue.py b/import-layers/yocto-poky/bitbake/lib/bb/runqueue.py
index 7d2ff81..ae12c25 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/runqueue.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/runqueue.py
@@ -1355,12 +1355,7 @@
                 logger.info("Tasks Summary: Attempted %d tasks of which %d didn't need to be rerun and all succeeded.", self.rqexe.stats.completed, self.rqexe.stats.skipped)
 
         if self.state is runQueueFailed:
-            if not self.rqdata.taskData[''].tryaltconfigs:
-                raise bb.runqueue.TaskFailure(self.rqexe.failed_tids)
-            for tid in self.rqexe.failed_tids:
-                (mc, fn, tn, _) = split_tid_mcfn(tid)
-                self.rqdata.taskData[mc].fail_fn(fn)
-            self.rqdata.reset()
+            raise bb.runqueue.TaskFailure(self.rqexe.failed_tids)
 
         if self.state is runQueueComplete:
             # All done
@@ -1839,7 +1834,7 @@
         Run the tasks in a queue prepared by rqdata.prepare()
         """
 
-        if self.rqdata.setscenewhitelist and not self.rqdata.setscenewhitelist_checked:
+        if self.rqdata.setscenewhitelist is not None and not self.rqdata.setscenewhitelist_checked:
             self.rqdata.setscenewhitelist_checked = True
 
             # Check tasks that are going to run against the whitelist
@@ -1932,7 +1927,7 @@
                         self.rq.state = runQueueFailed
                         self.stats.taskFailed()
                         return True
-                self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, False, self.cooker.collection.get_file_appends(fn), taskdepdata, self.rqdata.setscene_enforce)) + b"</runtask>")
+                self.rq.fakeworker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, False, self.cooker.collection.get_file_appends(taskfn), taskdepdata, self.rqdata.setscene_enforce)) + b"</runtask>")
                 self.rq.fakeworker[mc].process.stdin.flush()
             else:
                 self.rq.worker[mc].process.stdin.write(b"<runtask>" + pickle.dumps((taskfn, task, taskname, False, self.cooker.collection.get_file_appends(taskfn), taskdepdata, self.rqdata.setscene_enforce)) + b"</runtask>")
@@ -2254,7 +2249,7 @@
         self.scenequeue_updatecounters(task)
 
     def check_taskfail(self, task):
-        if self.rqdata.setscenewhitelist:
+        if self.rqdata.setscenewhitelist is not None:
             realtask = task.split('_setscene')[0]
             (mc, fn, taskname, taskfn) = split_tid_mcfn(realtask)
             pn = self.rqdata.dataCaches[mc].pkg_fn[taskfn]
@@ -2372,7 +2367,7 @@
         self.rq.scenequeue_covered = self.scenequeue_covered
         self.rq.scenequeue_notcovered = self.scenequeue_notcovered
 
-        logger.debug(1, 'We can skip tasks %s', sorted(self.rq.scenequeue_covered))
+        logger.debug(1, 'We can skip tasks %s', "\n".join(sorted(self.rq.scenequeue_covered)))
 
         self.rq.state = runQueueRunInit
 
@@ -2488,6 +2483,9 @@
         runQueueEvent.__init__(self, task, stats, rq)
         self.exitcode = exitcode
 
+    def __str__(self):
+        return "Task (%s) failed with exit code '%s'" % (self.taskstring, self.exitcode)
+
 class sceneQueueTaskFailed(sceneQueueEvent):
     """
     Event notifying a setscene task failed
@@ -2496,6 +2494,9 @@
         sceneQueueEvent.__init__(self, task, stats, rq)
         self.exitcode = exitcode
 
+    def __str__(self):
+        return "Setscene task (%s) failed with exit code '%s' - real task will be run instead" % (self.taskstring, self.exitcode)
+
 class sceneQueueComplete(sceneQueueEvent):
     """
     Event when all the sceneQueue tasks are complete
@@ -2602,7 +2603,7 @@
 
 def check_setscene_enforce_whitelist(pn, taskname, whitelist):
     import fnmatch
-    if whitelist:
+    if whitelist is not None:
         item = '%s:%s' % (pn, taskname)
         for whitelist_item in whitelist:
             if fnmatch.fnmatch(item, whitelist_item):
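Several checks above now distinguish an unset setscene whitelist (None, so enforcement checking is disabled) from an empty one (nothing may fall back to a real task). A standalone sketch of that distinction, assuming the whitelist ultimately comes from BB_SETSCENE_ENFORCE_WHITELIST and using an illustrative function name:

import fnmatch

def setscene_enforce_allowed(pn, taskname, whitelist):
    if whitelist is None:           # unset: enforcement checking disabled
        return True
    item = "%s:%s" % (pn, taskname)
    return any(fnmatch.fnmatch(item, entry) for entry in whitelist)

print(setscene_enforce_allowed("busybox", "do_compile", None))            # True
print(setscene_enforce_allowed("busybox", "do_compile", []))              # False
print(setscene_enforce_allowed("busybox", "do_compile", ["busybox:*"]))   # True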
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/server/__init__.py b/import-layers/yocto-poky/bitbake/lib/bb/server/__init__.py
index 538a633..5a3fba9 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/server/__init__.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/server/__init__.py
@@ -18,82 +18,4 @@
 # with this program; if not, write to the Free Software Foundation, Inc.,
 # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 
-""" Base code for Bitbake server process
 
-Have a common base for that all Bitbake server classes ensures a consistent
-approach to the interface, and minimize risks associated with code duplication.
-
-"""
-
-"""  BaseImplServer() the base class for all XXServer() implementations.
-
-    These classes contain the actual code that runs the server side, i.e.
-    listens for the commands and executes them. Although these implementations
-    contain all the data of the original bitbake command, i.e the cooker instance,
-    they may well run on a different process or even machine.
-
-"""
-
-class BaseImplServer():
-    def __init__(self):
-        self._idlefuns = {}
-
-    def addcooker(self, cooker):
-        self.cooker = cooker
-
-    def register_idle_function(self, function, data):
-        """Register a function to be called while the server is idle"""
-        assert hasattr(function, '__call__')
-        self._idlefuns[function] = data
-
-
-
-""" BitBakeBaseServerConnection class is the common ancestor to all
-    BitBakeServerConnection classes.
-
-    These classes control the remote server. The only command currently
-    implemented is the terminate() command.
-
-"""
-
-class BitBakeBaseServerConnection():
-    def __init__(self, serverImpl):
-        pass
-
-    def terminate(self):
-        pass
-
-    def setupEventQueue(self):
-        pass
-
-
-""" BitBakeBaseServer class is the common ancestor to all Bitbake servers
-
-    Derive this class in order to implement a BitBakeServer which is the
-    controlling stub for the actual server implementation
-
-"""
-class BitBakeBaseServer(object):
-    def initServer(self):
-        self.serverImpl = None  # we ensure a runtime crash if not overloaded
-        self.connection = None
-        return
-
-    def addcooker(self, cooker):
-        self.cooker = cooker
-        self.serverImpl.addcooker(cooker)
-
-    def getServerIdleCB(self):
-        return self.serverImpl.register_idle_function
-
-    def saveConnectionDetails(self):
-        return
-
-    def detach(self):
-        return
-
-    def establishConnection(self, featureset):
-        raise   "Must redefine the %s.establishConnection()" % self.__class__.__name__
-
-    def endSession(self):
-        self.connection.terminate()
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/server/process.py b/import-layers/yocto-poky/bitbake/lib/bb/server/process.py
index c3c1450..3d31355 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/server/process.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/server/process.py
@@ -22,125 +22,245 @@
 
 import bb
 import bb.event
-import itertools
 import logging
 import multiprocessing
+import threading
+import array
 import os
-import signal
 import sys
 import time
 import select
-from queue import Empty
-from multiprocessing import Event, Process, util, Queue, Pipe, queues, Manager
-
-from . import BitBakeBaseServer, BitBakeBaseServerConnection, BaseImplServer
+import socket
+import subprocess
+import errno
+import re
+import datetime
+import bb.server.xmlrpcserver
+from bb import daemonize
+from multiprocessing import queues
 
 logger = logging.getLogger('BitBake')
 
-class ServerCommunicator():
-    def __init__(self, connection, event_handle, server):
-        self.connection = connection
-        self.event_handle = event_handle
-        self.server = server
+class ProcessTimeout(SystemExit):
+    pass
 
-    def runCommand(self, command):
-        # @todo try/except
-        self.connection.send(command)
-
-        if not self.server.is_alive():
-            raise SystemExit
-
-        while True:
-            # don't let the user ctrl-c while we're waiting for a response
-            try:
-                for idx in range(0,4): # 0, 1, 2, 3
-                    if self.connection.poll(5):
-                        return self.connection.recv()
-                    else:
-                        bb.warn("Timeout while attempting to communicate with bitbake server")
-                bb.fatal("Gave up; Too many tries: timeout while attempting to communicate with bitbake server")
-            except KeyboardInterrupt:
-                pass
-
-    def getEventHandle(self):
-        return self.event_handle.value
-
-class EventAdapter():
-    """
-    Adapter to wrap our event queue since the caller (bb.event) expects to
-    call a send() method, but our actual queue only has put()
-    """
-    def __init__(self, queue):
-        self.queue = queue
-
-    def send(self, event):
-        try:
-            self.queue.put(event)
-        except Exception as err:
-            print("EventAdapter puked: %s" % str(err))
-
-
-class ProcessServer(Process, BaseImplServer):
+class ProcessServer(multiprocessing.Process):
     profile_filename = "profile.log"
     profile_processed_filename = "profile.log.processed"
 
-    def __init__(self, command_channel, event_queue, featurelist):
-        BaseImplServer.__init__(self)
-        Process.__init__(self)
-        self.command_channel = command_channel
-        self.event_queue = event_queue
-        self.event = EventAdapter(event_queue)
-        self.featurelist = featurelist
+    def __init__(self, lock, sock, sockname):
+        multiprocessing.Process.__init__(self)
+        self.command_channel = False
+        self.command_channel_reply = False
         self.quit = False
         self.heartbeat_seconds = 1 # default, BB_HEARTBEAT_EVENT will be checked once we have a datastore.
         self.next_heartbeat = time.time()
 
-        self.quitin, self.quitout = Pipe()
-        self.event_handle = multiprocessing.Value("i")
+        self.event_handle = None
+        self.haveui = False
+        self.lastui = False
+        self.xmlrpc = False
+
+        self._idlefuns = {}
+
+        self.bitbake_lock = lock
+        self.sock = sock
+        self.sockname = sockname
+
+    def register_idle_function(self, function, data):
+        """Register a function to be called while the server is idle"""
+        assert hasattr(function, '__call__')
+        self._idlefuns[function] = data
 
     def run(self):
-        for event in bb.event.ui_queue:
-            self.event_queue.put(event)
-        self.event_handle.value = bb.event.register_UIHhandler(self, True)
+
+        if self.xmlrpcinterface[0]:
+            self.xmlrpc = bb.server.xmlrpcserver.BitBakeXMLRPCServer(self.xmlrpcinterface, self.cooker, self)
+
+            print("Bitbake XMLRPC server address: %s, server port: %s" % (self.xmlrpc.host, self.xmlrpc.port))
 
         heartbeat_event = self.cooker.data.getVar('BB_HEARTBEAT_EVENT')
         if heartbeat_event:
             try:
                 self.heartbeat_seconds = float(heartbeat_event)
             except:
-                # Throwing an exception here causes bitbake to hang.
-                # Just warn about the invalid setting and continue
                 bb.warn('Ignoring invalid BB_HEARTBEAT_EVENT=%s, must be a float specifying seconds.' % heartbeat_event)
-        bb.cooker.server_main(self.cooker, self.main)
+
+        self.timeout = self.server_timeout or self.cooker.data.getVar('BB_SERVER_TIMEOUT')
+        try:
+            if self.timeout:
+                self.timeout = float(self.timeout)
+        except:
+            bb.warn('Ignoring invalid BB_SERVER_TIMEOUT=%s, must be a float specifying seconds.' % self.timeout)
+
+
+        try:
+            self.bitbake_lock.seek(0)
+            self.bitbake_lock.truncate()
+            if self.xmlrpc:
+                self.bitbake_lock.write("%s %s:%s\n" % (os.getpid(), self.xmlrpc.host, self.xmlrpc.port))
+            else:
+                self.bitbake_lock.write("%s\n" % (os.getpid()))
+            self.bitbake_lock.flush()
+        except Exception as e:
+            print("Error writing to lock file: %s" % str(e))
+            pass
+
+        if self.cooker.configuration.profile:
+            try:
+                import cProfile as profile
+            except:
+                import profile
+            prof = profile.Profile()
+
+            ret = profile.Profile.runcall(prof, self.main)
+
+            prof.dump_stats("profile.log")
+            bb.utils.process_profilelog("profile.log")
+            print("Raw profiling information saved to profile.log and processed statistics to profile.log.processed")
+
+        else:
+            ret = self.main()
+
+        return ret
 
     def main(self):
-        # Ignore SIGINT within the server, as all SIGINT handling is done by
-        # the UI and communicated to us
-        self.quitin.close()
-        signal.signal(signal.SIGINT, signal.SIG_IGN)
+        self.cooker.pre_serve()
+
         bb.utils.set_process_name("Cooker")
+
+        ready = []
+
+        self.controllersock = False
+        fds = [self.sock]
+        if self.xmlrpc:
+            fds.append(self.xmlrpc)
+        print("Entering server connection loop")
+
+        def disconnect_client(self, fds):
+            if not self.haveui:
+                return
+            print("Disconnecting Client")
+            fds.remove(self.controllersock)
+            fds.remove(self.command_channel)
+            bb.event.unregister_UIHhandler(self.event_handle, True)
+            self.command_channel_reply.writer.close()
+            self.event_writer.writer.close()
+            del self.event_writer
+            self.controllersock.close()
+            self.controllersock = False
+            self.haveui = False
+            self.lastui = time.time()
+            self.cooker.clientComplete()
+            if self.timeout is None:
+                print("No timeout, exiting.")
+                self.quit = True
+
         while not self.quit:
-            try:
-                if self.command_channel.poll():
-                    command = self.command_channel.recv()
-                    self.runCommand(command)
-                if self.quitout.poll():
-                    self.quitout.recv()
+            if self.sock in ready:
+                self.controllersock, address = self.sock.accept()
+                if self.haveui:
+                    print("Dropping connection attempt as we have a UI %s" % (str(ready)))
+                    self.controllersock.close()
+                else:
+                    print("Accepting %s" % (str(ready)))
+                    fds.append(self.controllersock)
+            if self.controllersock in ready:
+                try:
+                    print("Connecting Client")
+                    ui_fds = recvfds(self.controllersock, 3)
+
+                    # Where to write events to
+                    writer = ConnectionWriter(ui_fds[0])
+                    self.event_handle = bb.event.register_UIHhandler(writer, True)
+                    self.event_writer = writer
+
+                    # Where to read commands from
+                    reader = ConnectionReader(ui_fds[1])
+                    fds.append(reader)
+                    self.command_channel = reader
+
+                    # Where to send command return values to
+                    writer = ConnectionWriter(ui_fds[2])
+                    self.command_channel_reply = writer
+
+                    self.haveui = True
+
+                except (EOFError, OSError):
+                    disconnect_client(self, fds)
+
+            if not self.timeout == -1.0 and not self.haveui and self.lastui and self.timeout and \
+                    (self.lastui + self.timeout) < time.time():
+                print("Server timeout, exiting.")
+                self.quit = True
+
+            if self.command_channel in ready:
+                try:
+                    command = self.command_channel.get()
+                except EOFError:
+                    # Client connection shutting down
+                    ready = []
+                    disconnect_client(self, fds)
+                    continue
+                if command[0] == "terminateServer":
                     self.quit = True
+                    continue
+                try:
+                    print("Running command %s" % command)
+                    self.command_channel_reply.send(self.cooker.command.runCommand(command))
+                except Exception as e:
+                   logger.exception('Exception in server main event loop running command %s (%s)' % (command, str(e)))
+
+            if self.xmlrpc in ready:
+                self.xmlrpc.handle_requests()
+
+            ready = self.idle_commands(.1, fds)
+
+        print("Exiting")
+        # Remove the socket file so we don't accept any more connections, avoiding races
+        os.unlink(self.sockname)
+        self.sock.close()
+
+        try: 
+            self.cooker.shutdown(True)
+        except:
+            pass
+
+        self.cooker.post_serve()
+
+        # Finally release the lockfile but warn about other processes holding it open
+        lock = self.bitbake_lock
+        lockfile = lock.name
+        lock.close()
+        lock = None
+
+        while not lock:
+            with bb.utils.timeout(3):
+                lock = bb.utils.lockfile(lockfile, shared=False, retry=False, block=True)
+                if not lock:
+                    # Some systems may not have lsof available
+                    procs = None
                     try:
-                        self.runCommand(["stateForceShutdown"])
-                    except:
-                        pass
+                        procs = subprocess.check_output(["lsof", '-w', lockfile], stderr=subprocess.STDOUT)
+                    except OSError as e:
+                        if e.errno != errno.ENOENT:
+                            raise
+                    if procs is None:
+                        # Fall back to fuser if lsof is unavailable
+                        try:
+                            procs = subprocess.check_output(["fuser", '-v', lockfile], stderr=subprocess.STDOUT)
+                        except OSError as e:
+                            if e.errno != errno.ENOENT:
+                                raise
 
-                self.idle_commands(.1, [self.command_channel, self.quitout])
-            except Exception:
-                logger.exception('Running command %s', command)
-
-        self.event_queue.close()
-        bb.event.unregister_UIHhandler(self.event_handle.value)
-        self.command_channel.close()
-        self.cooker.shutdown(True)
-        self.quitout.close()
+                    msg = "Delaying shutdown due to active processes which appear to be holding bitbake.lock"
+                    if procs:
+                        msg += ":\n%s" % str(procs)
+                    print(msg)
+                    return
+        # We hold the lock so we can remove the file (hide stale pid data)
+        bb.utils.remove(lockfile)
+        bb.utils.unlockfile(lock)
 
     def idle_commands(self, delay, fds=None):
         nextsleep = delay
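When releasing bitbake.lock, the server above reports which processes still hold the lock file open, preferring lsof and falling back to fuser where lsof is not installed. A standalone sketch of that diagnostic (the function name is mine; unlike the code above it also swallows the non-zero exit these tools use for "no holders"):

import errno
import subprocess

def describe_lock_holders(path):
    for cmd in (["lsof", "-w", path], ["fuser", "-v", path]):
        try:
            return subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        except OSError as e:
            if e.errno != errno.ENOENT:    # tool exists but failed unexpectedly
                raise
        except subprocess.CalledProcessError:
            return None                    # tool ran; nothing holds the file
    return None                            # neither lsof nor fuser is installed

print(describe_lock_holders("bitbake.lock"))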
@@ -186,109 +306,317 @@
             nextsleep = self.next_heartbeat - now
 
         if nextsleep is not None:
-            select.select(fds,[],[],nextsleep)
+            if self.xmlrpc:
+                nextsleep = self.xmlrpc.get_timeout(nextsleep)
+            try:
+                return select.select(fds,[],[],nextsleep)[0]
+            except InterruptedError:
+                # Ignore EINTR
+                return []
+        else:
+            return select.select(fds,[],[],0)[0]
+
+
+class ServerCommunicator():
+    def __init__(self, connection, recv):
+        self.connection = connection
+        self.recv = recv
 
     def runCommand(self, command):
-        """
-        Run a cooker command on the server
-        """
-        self.command_channel.send(self.cooker.command.runCommand(command))
+        self.connection.send(command)
+        if not self.recv.poll(30):
+            raise ProcessTimeout("Timeout while waiting for a reply from the bitbake server")
+        return self.recv.get()
 
-    def stop(self):
-        self.quitin.send("quit")
-        self.quitin.close()
-
-class BitBakeProcessServerConnection(BitBakeBaseServerConnection):
-    def __init__(self, serverImpl, ui_channel, event_queue):
-        self.procserver = serverImpl
-        self.ui_channel = ui_channel
-        self.event_queue = event_queue
-        self.connection = ServerCommunicator(self.ui_channel, self.procserver.event_handle, self.procserver)
-        self.events = self.event_queue
-        self.terminated = False
-
-    def sigterm_terminate(self):
-        bb.error("UI received SIGTERM")
-        self.terminate()
-
-    def terminate(self):
-        if self.terminated:
-            return
-        self.terminated = True
-        def flushevents():
-            while True:
-                try:
-                    event = self.event_queue.get(block=False)
-                except (Empty, IOError):
-                    break
-                if isinstance(event, logging.LogRecord):
-                    logger.handle(event)
-
-        self.procserver.stop()
-
-        while self.procserver.is_alive():
-            flushevents()
-            self.procserver.join(0.1)
-
-        self.ui_channel.close()
-        self.event_queue.close()
-        self.event_queue.setexit()
-        # XXX: Call explicity close in _writer to avoid
-        # fd leakage because isn't called on Queue.close()
-        self.event_queue._writer.close()
-
-# Wrap Queue to provide API which isn't server implementation specific
-class ProcessEventQueue(multiprocessing.queues.Queue):
-    def __init__(self, maxsize):
-        multiprocessing.queues.Queue.__init__(self, maxsize, ctx=multiprocessing.get_context())
-        self.exit = False
-        bb.utils.set_process_name("ProcessEQueue")
-
-    def setexit(self):
-        self.exit = True
-
-    def waitEvent(self, timeout):
-        if self.exit:
-            return self.getEvent()
-        try:
-            if not self.server.is_alive():
-                return self.getEvent()
-            return self.get(True, timeout)
-        except Empty:
-            return None
-
-    def getEvent(self):
-        try:
-            if not self.server.is_alive():
-                self.setexit()
-            return self.get(False)
-        except Empty:
-            if self.exit:
-                sys.exit(1)
-            return None
-
-class BitBakeServer(BitBakeBaseServer):
-    def initServer(self, single_use=True):
-        # establish communication channels.  We use bidirectional pipes for
-        # ui <--> server command/response pairs
-        # and a queue for server -> ui event notifications
-        #
-        self.ui_channel, self.server_channel = Pipe()
-        self.event_queue = ProcessEventQueue(0)
-        self.serverImpl = ProcessServer(self.server_channel, self.event_queue, None)
-        self.event_queue.server = self.serverImpl
-
-    def detach(self):
-        self.serverImpl.start()
-        return
-
-    def establishConnection(self, featureset):
-
-        self.connection = BitBakeProcessServerConnection(self.serverImpl, self.ui_channel, self.event_queue)
-
-        _, error = self.connection.connection.runCommand(["setFeatures", featureset])
+    def updateFeatureSet(self, featureset):
+        _, error = self.runCommand(["setFeatures", featureset])
         if error:
             logger.error("Unable to set the cooker to the correct featureset: %s" % error)
             raise BaseException(error)
-        signal.signal(signal.SIGTERM, lambda i, s: self.connection.sigterm_terminate())
-        return self.connection
+
+    def getEventHandle(self):
+        handle, error = self.runCommand(["getUIHandlerNum"])
+        if error:
+            logger.error("Unable to get UI Handler Number: %s" % error)
+            raise BaseException(error)
+
+        return handle
+
+    def terminateServer(self):
+        self.connection.send(['terminateServer'])
+        return
+
+class BitBakeProcessServerConnection(object):
+    def __init__(self, ui_channel, recv, eq, sock):
+        self.connection = ServerCommunicator(ui_channel, recv)
+        self.events = eq
+        # Save sock so it doesn't get gc'd for the life of our connection
+        self.socket_connection = sock
+
+    def terminate(self):
+        self.socket_connection.close()
+        self.connection.connection.close()
+        self.connection.recv.close()
+        return
+
+class BitBakeServer(object):
+    start_log_format = '--- Starting bitbake server pid %s at %s ---'
+    start_log_datetime_format = '%Y-%m-%d %H:%M:%S.%f'
+
+    def __init__(self, lock, sockname, configuration, featureset):
+
+        self.configuration = configuration
+        self.featureset = featureset
+        self.sockname = sockname
+        self.bitbake_lock = lock
+        self.readypipe, self.readypipein = os.pipe()
+
+        # Create server control socket
+        if os.path.exists(sockname):
+            os.unlink(sockname)
+
+        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
+        # AF_UNIX has path length issues so chdir here to work around them
+        cwd = os.getcwd()
+        logfile = os.path.join(cwd, "bitbake-cookerdaemon.log")
+
+        try:
+            os.chdir(os.path.dirname(sockname))
+            self.sock.bind(os.path.basename(sockname))
+        finally:
+            os.chdir(cwd)
+        self.sock.listen(1)
+
+        os.set_inheritable(self.sock.fileno(), True)
+        startdatetime = datetime.datetime.now()
+        bb.daemonize.createDaemon(self._startServer, logfile)
+        self.sock.close()
+        self.bitbake_lock.close()
+
+        ready = ConnectionReader(self.readypipe)
+        r = ready.poll(30)
+        if r:
+            r = ready.get()
+        if not r or r != "ready":
+            ready.close()
+            bb.error("Unable to start bitbake server")
+            if os.path.exists(logfile):
+                logstart_re = re.compile(self.start_log_format % ('([0-9]+)', '([0-9-]+ [0-9:.]+)'))
+                started = False
+                lines = []
+                with open(logfile, "r") as f:
+                    for line in f:
+                        if started:
+                            lines.append(line)
+                        else:
+                            res = logstart_re.match(line.rstrip())
+                            if res:
+                                ldatetime = datetime.datetime.strptime(res.group(2), self.start_log_datetime_format)
+                                if ldatetime >= startdatetime:
+                                    started = True
+                                    lines.append(line)
+                if lines:
+                    if len(lines) > 10:
+                        bb.error("Last 10 lines of server log for this session (%s):\n%s" % (logfile, "".join(lines[-10:])))
+                    else:
+                        bb.error("Server log for this session (%s):\n%s" % (logfile, "".join(lines)))
+            raise SystemExit(1)
+        ready.close()
+        os.close(self.readypipein)
+
+    def _startServer(self):
+        print(self.start_log_format % (os.getpid(), datetime.datetime.now().strftime(self.start_log_datetime_format)))
+        server = ProcessServer(self.bitbake_lock, self.sock, self.sockname)
+        self.configuration.setServerRegIdleCallback(server.register_idle_function)
+        writer = ConnectionWriter(self.readypipein)
+        try:
+            self.cooker = bb.cooker.BBCooker(self.configuration, self.featureset)
+            writer.send("ready")
+        except:
+            writer.send("fail")
+            raise
+        finally:
+            os.close(self.readypipein)
+        server.cooker = self.cooker
+        server.server_timeout = self.configuration.server_timeout
+        server.xmlrpcinterface = self.configuration.xmlrpcinterface
+        print("Started bitbake server pid %d" % os.getpid())
+        server.start()
+
+def connectProcessServer(sockname, featureset):
+    # Connect to socket
+    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
+    # AF_UNIX has path length issues so chdir here to work around them
+    cwd = os.getcwd()
+
+    try:
+        os.chdir(os.path.dirname(sockname))
+        sock.connect(os.path.basename(sockname))
+    finally:
+        os.chdir(cwd)
+
+    readfd = writefd = readfd1 = writefd1 = readfd2 = writefd2 = None
+    eq = command_chan_recv = command_chan = None
+
+    try:
+
+        # Send an fd for the remote to write events to
+        readfd, writefd = os.pipe()
+        eq = BBUIEventQueue(readfd)
+        # Send an fd for the remote to receive commands from
+        readfd1, writefd1 = os.pipe()
+        command_chan = ConnectionWriter(writefd1)
+        # Send an fd for the remote to write commands results to
+        readfd2, writefd2 = os.pipe()
+        command_chan_recv = ConnectionReader(readfd2)
+
+        sendfds(sock, [writefd, readfd1, writefd2])
+
+        server_connection = BitBakeProcessServerConnection(command_chan, command_chan_recv, eq, sock)
+
+        # Close the ends of the pipes we won't use
+        for i in [writefd, readfd1, writefd2]:
+            os.close(i)
+
+        server_connection.connection.updateFeatureSet(featureset)
+
+    except (Exception, SystemExit) as e:
+        if command_chan_recv:
+            command_chan_recv.close()
+        if command_chan:
+            command_chan.close()
+        for i in [writefd, readfd1, writefd2]:
+            try:
+                os.close(i)
+            except OSError:
+                pass
+        sock.close()
+        raise
+
+    return server_connection
+
+def sendfds(sock, fds):
+        '''Send an array of fds over an AF_UNIX socket.'''
+        fds = array.array('i', fds)
+        msg = bytes([len(fds) % 256])
+        sock.sendmsg([msg], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])
+
+def recvfds(sock, size):
+        '''Receive an array of fds over an AF_UNIX socket.'''
+        a = array.array('i')
+        bytes_size = a.itemsize * size
+        msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(bytes_size))
+        if not msg and not ancdata:
+            raise EOFError
+        try:
+            if len(ancdata) != 1:
+                raise RuntimeError('received %d items of ancdata' %
+                                   len(ancdata))
+            cmsg_level, cmsg_type, cmsg_data = ancdata[0]
+            if (cmsg_level == socket.SOL_SOCKET and
+                cmsg_type == socket.SCM_RIGHTS):
+                if len(cmsg_data) % a.itemsize != 0:
+                    raise ValueError
+                a.frombytes(cmsg_data)
+                assert len(a) % 256 == msg[0]
+                return list(a)
+        except (ValueError, IndexError):
+            pass
+        raise RuntimeError('Invalid data received')
+
+class BBUIEventQueue:
+    def __init__(self, readfd):
+
+        self.eventQueue = []
+        self.eventQueueLock = threading.Lock()
+        self.eventQueueNotify = threading.Event()
+
+        self.reader = ConnectionReader(readfd)
+
+        self.t = threading.Thread()
+        self.t.setDaemon(True)
+        self.t.run = self.startCallbackHandler
+        self.t.start()
+
+    def getEvent(self):
+        self.eventQueueLock.acquire()
+
+        if len(self.eventQueue) == 0:
+            self.eventQueueLock.release()
+            return None
+
+        item = self.eventQueue.pop(0)
+
+        if len(self.eventQueue) == 0:
+            self.eventQueueNotify.clear()
+
+        self.eventQueueLock.release()
+        return item
+
+    def waitEvent(self, delay):
+        self.eventQueueNotify.wait(delay)
+        return self.getEvent()
+
+    def queue_event(self, event):
+        self.eventQueueLock.acquire()
+        self.eventQueue.append(event)
+        self.eventQueueNotify.set()
+        self.eventQueueLock.release()
+
+    def send_event(self, event):
+        self.queue_event(pickle.loads(event))
+
+    def startCallbackHandler(self):
+        bb.utils.set_process_name("UIEventQueue")
+        while True:
+            try:
+                self.reader.wait()
+                event = self.reader.get()
+                self.queue_event(event)
+            except EOFError:
+                # Closing the file descriptor elsewhere raises EOFError here, which is the easiest way to stop this thread
+                break
+        self.reader.close()
+
+class ConnectionReader(object):
+
+    def __init__(self, fd):
+        self.reader = multiprocessing.connection.Connection(fd, writable=False)
+        self.rlock = multiprocessing.Lock()
+
+    def wait(self, timeout=None):
+        return multiprocessing.connection.wait([self.reader], timeout)
+
+    def poll(self, timeout=None):
+        return self.reader.poll(timeout)
+
+    def get(self):
+        with self.rlock:
+            res = self.reader.recv_bytes()
+        return multiprocessing.reduction.ForkingPickler.loads(res)
+
+    def fileno(self):
+        return self.reader.fileno()
+
+    def close(self):
+        return self.reader.close()
+
+
+class ConnectionWriter(object):
+
+    def __init__(self, fd):
+        self.writer = multiprocessing.connection.Connection(fd, readable=False)
+        self.wlock = multiprocessing.Lock()
+        # Why bb.event needs this I have no idea
+        self.event = self
+
+    def send(self, obj):
+        obj = multiprocessing.reduction.ForkingPickler.dumps(obj)
+        with self.wlock:
+            self.writer.send_bytes(obj)
+
+    def fileno(self):
+        return self.writer.fileno()
+
+    def close(self):
+        return self.writer.close()
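A standalone, single-process sketch (not bitbake code) of the SCM_RIGHTS mechanism that sendfds()/recvfds() above rely on: a file descriptor is duplicated into the peer via ancillary data on an AF_UNIX socket. In the patch the same trick hands the UI's event and command pipe fds to the daemonised server process.

import array
import os
import socket

left, right = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
rfd, wfd = os.pipe()

# Sender: attach the write end of the pipe as SCM_RIGHTS ancillary data.
fds = array.array('i', [wfd])
left.sendmsg([b'x'], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])

# Receiver: unpack the duplicated descriptor and write through it directly.
msg, ancdata, flags, addr = right.recvmsg(1, socket.CMSG_LEN(fds.itemsize * len(fds)))
received = array.array('i')
received.frombytes(ancdata[0][2])
os.write(received[0], b"hello via a passed fd\n")

print(os.read(rfd, 100))   # b'hello via a passed fd\n'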
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpc.py b/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpc.py
deleted file mode 100644
index a06007f..0000000
--- a/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpc.py
+++ /dev/null
@@ -1,422 +0,0 @@
-#
-# BitBake XMLRPC Server
-#
-# Copyright (C) 2006 - 2007  Michael 'Mickey' Lauer
-# Copyright (C) 2006 - 2008  Richard Purdie
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License version 2 as
-# published by the Free Software Foundation.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License along
-# with this program; if not, write to the Free Software Foundation, Inc.,
-# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-
-"""
-    This module implements an xmlrpc server for BitBake.
-
-    Use this by deriving a class from BitBakeXMLRPCServer and then adding
-    methods which you want to "export" via XMLRPC. If the methods have the
-    prefix xmlrpc_, then registering those function will happen automatically,
-    if not, you need to call register_function.
-
-    Use register_idle_function() to add a function which the xmlrpc server
-    calls from within server_forever when no requests are pending. Make sure
-    that those functions are non-blocking or else you will introduce latency
-    in the server's main loop.
-"""
-
-import os
-import sys
-
-import hashlib
-import time
-import socket
-import signal
-import threading
-import pickle
-import inspect
-import select
-import http.client
-import xmlrpc.client
-from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
-
-import bb
-from bb import daemonize
-from bb.ui import uievent
-from . import BitBakeBaseServer, BitBakeBaseServerConnection, BaseImplServer
-
-DEBUG = False
-
-class BBTransport(xmlrpc.client.Transport):
-    def __init__(self, timeout):
-        self.timeout = timeout
-        self.connection_token = None
-        xmlrpc.client.Transport.__init__(self)
-
-    # Modified from default to pass timeout to HTTPConnection
-    def make_connection(self, host):
-        #return an existing connection if possible.  This allows
-        #HTTP/1.1 keep-alive.
-        if self._connection and host == self._connection[0]:
-            return self._connection[1]
-
-        # create a HTTP connection object from a host descriptor
-        chost, self._extra_headers, x509 = self.get_host_info(host)
-        #store the host argument along with the connection object
-        self._connection = host, http.client.HTTPConnection(chost, timeout=self.timeout)
-        return self._connection[1]
-
-    def set_connection_token(self, token):
-        self.connection_token = token
-
-    def send_content(self, h, body):
-        if self.connection_token:
-            h.putheader("Bitbake-token", self.connection_token)
-        xmlrpc.client.Transport.send_content(self, h, body)
-
-def _create_server(host, port, timeout = 60):
-    t = BBTransport(timeout)
-    s = xmlrpc.client.ServerProxy("http://%s:%d/" % (host, port), transport=t, allow_none=True, use_builtin_types=True)
-    return s, t
-
-def check_connection(remote, timeout):
-    try:
-        host, port = remote.split(":")
-        port = int(port)
-    except Exception as e:
-        bb.warn("Failed to read remote definition (%s)" % str(e))
-        raise e
-
-    server, _transport = _create_server(host, port, timeout)
-    try:
-        ret, err =  server.runCommand(['getVariable', 'TOPDIR'])
-        if err or not ret:
-            return False
-    except ConnectionError:
-        return False
-    return True
-
-class BitBakeServerCommands():
-
-    def __init__(self, server):
-        self.server = server
-        self.has_client = False
-
-    def registerEventHandler(self, host, port):
-        """
-        Register a remote UI Event Handler
-        """
-        s, t = _create_server(host, port)
-
-        # we don't allow connections if the cooker is running
-        if (self.cooker.state in [bb.cooker.state.parsing, bb.cooker.state.running]):
-            return None, "Cooker is busy: %s" % bb.cooker.state.get_name(self.cooker.state)
-
-        self.event_handle = bb.event.register_UIHhandler(s, True)
-        return self.event_handle, 'OK'
-
-    def unregisterEventHandler(self, handlerNum):
-        """
-        Unregister a remote UI Event Handler
-        """
-        return bb.event.unregister_UIHhandler(handlerNum)
-
-    def runCommand(self, command):
-        """
-        Run a cooker command on the server
-        """
-        return self.cooker.command.runCommand(command, self.server.readonly)
-
-    def getEventHandle(self):
-        return self.event_handle
-
-    def terminateServer(self):
-        """
-        Trigger the server to quit
-        """
-        self.server.quit = True
-        print("Server (cooker) exiting")
-        return
-
-    def addClient(self):
-        if self.has_client:
-            return None
-        token = hashlib.md5(str(time.time()).encode("utf-8")).hexdigest()
-        self.server.set_connection_token(token)
-        self.has_client = True
-        return token
-
-    def removeClient(self):
-        if self.has_client:
-            self.server.set_connection_token(None)
-            self.has_client = False
-            if self.server.single_use:
-                self.server.quit = True
-
-# This request handler checks if the request has a "Bitbake-token" header
-# field (this comes from the client side) and compares it with its internal
-# "Bitbake-token" field (this comes from the server). If the two are not
-# equal, it is assumed that a client is trying to connect to the server
-# while another client is connected to the server. In this case, a 503 error
-# ("service unavailable") is returned to the client.
-class BitBakeXMLRPCRequestHandler(SimpleXMLRPCRequestHandler):
-    def __init__(self, request, client_address, server):
-        self.server = server
-        SimpleXMLRPCRequestHandler.__init__(self, request, client_address, server)
-
-    def do_POST(self):
-        try:
-            remote_token = self.headers["Bitbake-token"]
-        except:
-            remote_token = None
-        if remote_token != self.server.connection_token and remote_token != "observer":
-            self.report_503()
-        else:
-            if remote_token == "observer":
-                self.server.readonly = True
-            else:
-                self.server.readonly = False
-            SimpleXMLRPCRequestHandler.do_POST(self)
-
-    def report_503(self):
-        self.send_response(503)
-        response = 'No more client allowed'
-        self.send_header("Content-type", "text/plain")
-        self.send_header("Content-length", str(len(response)))
-        self.end_headers()
-        self.wfile.write(bytes(response, 'utf-8'))
-
-
-class XMLRPCProxyServer(BaseImplServer):
-    """ not a real working server, but a stub for a proxy server connection
-
-    """
-    def __init__(self, host, port, use_builtin_types=True):
-        self.host = host
-        self.port = port
-
-class XMLRPCServer(SimpleXMLRPCServer, BaseImplServer):
-    # remove this when you're done with debugging
-    # allow_reuse_address = True
-
-    def __init__(self, interface, single_use=False, idle_timeout=0):
-        """
-        Constructor
-        """
-        BaseImplServer.__init__(self)
-        self.single_use = single_use
-        # Use auto port configuration
-        if (interface[1] == -1):
-            interface = (interface[0], 0)
-        SimpleXMLRPCServer.__init__(self, interface,
-                                    requestHandler=BitBakeXMLRPCRequestHandler,
-                                    logRequests=False, allow_none=True)
-        self.host, self.port = self.socket.getsockname()
-        self.connection_token = None
-        #self.register_introspection_functions()
-        self.commands = BitBakeServerCommands(self)
-        self.autoregister_all_functions(self.commands, "")
-        self.interface = interface
-        self.time = time.time()
-        self.idle_timeout = idle_timeout
-        if idle_timeout:
-            self.register_idle_function(self.handle_idle_timeout, self)
-
-    def addcooker(self, cooker):
-        BaseImplServer.addcooker(self, cooker)
-        self.commands.cooker = cooker
-
-    def autoregister_all_functions(self, context, prefix):
-        """
-        Convenience method for registering all functions in the scope
-        of this class that start with a common prefix
-        """
-        methodlist = inspect.getmembers(context, inspect.ismethod)
-        for name, method in methodlist:
-            if name.startswith(prefix):
-                self.register_function(method, name[len(prefix):])
-
-    def handle_idle_timeout(self, server, data, abort):
-        if not abort:
-            if time.time() - server.time > server.idle_timeout:
-                server.quit = True
-                print("Server idle timeout expired")
-        return []
-
-    def serve_forever(self):
-        # Start the actual XMLRPC server
-        bb.cooker.server_main(self.cooker, self._serve_forever)
-
-    def _serve_forever(self):
-        """
-        Serve Requests. Overloaded to honor a quit command
-        """
-        self.quit = False
-        while not self.quit:
-            fds = [self]
-            nextsleep = 0.1
-            for function, data in list(self._idlefuns.items()):
-                retval = None
-                try:
-                    retval = function(self, data, False)
-                    if retval is False:
-                        del self._idlefuns[function]
-                    elif retval is True:
-                        nextsleep = 0
-                    elif isinstance(retval, float):
-                        if (retval < nextsleep):
-                            nextsleep = retval
-                    else:
-                        fds = fds + retval
-                except SystemExit:
-                    raise
-                except:
-                    import traceback
-                    traceback.print_exc()
-                    if retval == None:
-                        # the function execute failed; delete it
-                        del self._idlefuns[function]
-                    pass
-
-            socktimeout = self.socket.gettimeout() or nextsleep
-            socktimeout = min(socktimeout, nextsleep)
-            # Mirror what BaseServer handle_request would do
-            try:
-                fd_sets = select.select(fds, [], [], socktimeout)
-                if fd_sets[0] and self in fd_sets[0]:
-                    if self.idle_timeout:
-                        self.time = time.time()
-                    self._handle_request_noblock()
-            except IOError:
-                # we ignore interrupted calls
-                pass
-
-        # Tell idle functions we're exiting
-        for function, data in list(self._idlefuns.items()):
-            try:
-                retval = function(self, data, True)
-            except:
-                pass
-        self.server_close()
-        return
-
-    def set_connection_token(self, token):
-        self.connection_token = token
-
-class BitBakeXMLRPCServerConnection(BitBakeBaseServerConnection):
-    def __init__(self, serverImpl, clientinfo=("localhost", 0), observer_only = False, featureset = None):
-        self.connection, self.transport = _create_server(serverImpl.host, serverImpl.port)
-        self.clientinfo = clientinfo
-        self.serverImpl = serverImpl
-        self.observer_only = observer_only
-        if featureset:
-            self.featureset = featureset
-        else:
-            self.featureset = []
-
-    def connect(self, token = None):
-        if token is None:
-            if self.observer_only:
-                token = "observer"
-            else:
-                token = self.connection.addClient()
-
-        if token is None:
-            return None
-
-        self.transport.set_connection_token(token)
-        return self
-
-    def setupEventQueue(self):
-        self.events = uievent.BBUIEventQueue(self.connection, self.clientinfo)
-        for event in bb.event.ui_queue:
-            self.events.queue_event(event)
-
-        _, error = self.connection.runCommand(["setFeatures", self.featureset])
-        if error:
-            # disconnect the client, we can't make the setFeature work
-            self.connection.removeClient()
-            # no need to log it here, the error shall be sent to the client
-            raise BaseException(error)
-
-    def removeClient(self):
-        if not self.observer_only:
-            self.connection.removeClient()
-
-    def terminate(self):
-        # Don't wait for server indefinitely
-        import socket
-        socket.setdefaulttimeout(2)
-        try:
-            self.events.system_quit()
-        except:
-            pass
-        try:
-            self.connection.removeClient()
-        except:
-            pass
-
-class BitBakeServer(BitBakeBaseServer):
-    def initServer(self, interface = ("localhost", 0),
-                   single_use = False, idle_timeout=0):
-        self.interface = interface
-        self.serverImpl = XMLRPCServer(interface, single_use, idle_timeout)
-
-    def detach(self):
-        daemonize.createDaemon(self.serverImpl.serve_forever, "bitbake-cookerdaemon.log")
-        del self.cooker
-
-    def establishConnection(self, featureset):
-        self.connection = BitBakeXMLRPCServerConnection(self.serverImpl, self.interface, False, featureset)
-        return self.connection.connect()
-
-    def set_connection_token(self, token):
-        self.connection.transport.set_connection_token(token)
-
-class BitBakeXMLRPCClient(BitBakeBaseServer):
-
-    def __init__(self, observer_only = False, token = None):
-        self.token = token
-
-        self.observer_only = observer_only
-        # if we need extra caches, just tell the server to load them all
-        pass
-
-    def saveConnectionDetails(self, remote):
-        self.remote = remote
-
-    def establishConnection(self, featureset):
-        # The format of "remote" must be "server:port"
-        try:
-            [host, port] = self.remote.split(":")
-            port = int(port)
-        except Exception as e:
-            bb.warn("Failed to read remote definition (%s)" % str(e))
-            raise e
-
-        # We need our IP for the server connection. We get the IP
-        # by trying to connect with the server
-        try:
-            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
-            s.connect((host, port))
-            ip = s.getsockname()[0]
-            s.close()
-        except Exception as e:
-            bb.warn("Could not create socket for %s:%s (%s)" % (host, port, str(e)))
-            raise e
-        try:
-            self.serverImpl = XMLRPCProxyServer(host, port, use_builtin_types=True)
-            self.connection = BitBakeXMLRPCServerConnection(self.serverImpl, (ip, 0), self.observer_only, featureset)
-            return self.connection.connect(self.token)
-        except Exception as e:
-            bb.warn("Could not connect to server at %s:%s (%s)" % (host, port, str(e)))
-            raise e
-
-    def endSession(self):
-        self.connection.removeClient()
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpcclient.py b/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpcclient.py
new file mode 100644
index 0000000..4661a9e
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpcclient.py
@@ -0,0 +1,154 @@
+#
+# BitBake XMLRPC Client Interface
+#
+# Copyright (C) 2006 - 2007  Michael 'Mickey' Lauer
+# Copyright (C) 2006 - 2008  Richard Purdie
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+import os
+import sys
+
+import socket
+import http.client
+import xmlrpc.client
+
+import bb
+from bb.ui import uievent
+
+class BBTransport(xmlrpc.client.Transport):
+    def __init__(self, timeout):
+        self.timeout = timeout
+        self.connection_token = None
+        xmlrpc.client.Transport.__init__(self)
+
+    # Modified from default to pass timeout to HTTPConnection
+    def make_connection(self, host):
+        #return an existing connection if possible.  This allows
+        #HTTP/1.1 keep-alive.
+        if self._connection and host == self._connection[0]:
+            return self._connection[1]
+
+        # create a HTTP connection object from a host descriptor
+        chost, self._extra_headers, x509 = self.get_host_info(host)
+        #store the host argument along with the connection object
+        self._connection = host, http.client.HTTPConnection(chost, timeout=self.timeout)
+        return self._connection[1]
+
+    def set_connection_token(self, token):
+        self.connection_token = token
+
+    def send_content(self, h, body):
+        if self.connection_token:
+            h.putheader("Bitbake-token", self.connection_token)
+        xmlrpc.client.Transport.send_content(self, h, body)
+
+def _create_server(host, port, timeout = 60):
+    t = BBTransport(timeout)
+    s = xmlrpc.client.ServerProxy("http://%s:%d/" % (host, port), transport=t, allow_none=True, use_builtin_types=True)
+    return s, t
+
+def check_connection(remote, timeout):
+    try:
+        host, port = remote.split(":")
+        port = int(port)
+    except Exception as e:
+        bb.warn("Failed to read remote definition (%s)" % str(e))
+        raise e
+
+    server, _transport = _create_server(host, port, timeout)
+    try:
+        ret, err =  server.runCommand(['getVariable', 'TOPDIR'])
+        if err or not ret:
+            return False
+    except ConnectionError:
+        return False
+    return True
+
+class BitBakeXMLRPCServerConnection(object):
+    def __init__(self, host, port, clientinfo=("localhost", 0), observer_only = False, featureset = None):
+        self.connection, self.transport = _create_server(host, port)
+        self.clientinfo = clientinfo
+        self.observer_only = observer_only
+        if featureset:
+            self.featureset = featureset
+        else:
+            self.featureset = []
+
+        self.events = uievent.BBUIEventQueue(self.connection, self.clientinfo)
+
+        _, error = self.connection.runCommand(["setFeatures", self.featureset])
+        if error:
+            # disconnect the client, we can't make the setFeature work
+            self.connection.removeClient()
+            # no need to log it here, the error shall be sent to the client
+            raise BaseException(error)
+
+    def connect(self, token = None):
+        if token is None:
+            if self.observer_only:
+                token = "observer"
+            else:
+                token = self.connection.addClient()
+
+        if token is None:
+            return None
+
+        self.transport.set_connection_token(token)
+        return self
+
+    def removeClient(self):
+        if not self.observer_only:
+            self.connection.removeClient()
+
+    def terminate(self):
+        # Don't wait for server indefinitely
+        socket.setdefaulttimeout(2)
+        try:
+            self.events.system_quit()
+        except:
+            pass
+        try:
+            self.connection.removeClient()
+        except:
+            pass
+
+def connectXMLRPC(remote, featureset, observer_only = False, token = None):
+    # The format of "remote" must be "server:port"
+    try:
+        [host, port] = remote.split(":")
+        port = int(port)
+    except Exception as e:
+        bb.warn("Failed to parse remote definition %s (%s)" % (remote, str(e)))
+        raise e
+
+    # We need our IP for the server connection. We get the IP
+    # by trying to connect with the server
+    try:
+        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+        s.connect((host, port))
+        ip = s.getsockname()[0]
+        s.close()
+    except Exception as e:
+        bb.warn("Could not create socket for %s:%s (%s)" % (host, port, str(e)))
+        raise e
+    try:
+        connection = BitBakeXMLRPCServerConnection(host, port, (ip, 0), observer_only, featureset)
+        return connection.connect(token)
+    except Exception as e:
+        bb.warn("Could not connect to server at %s:%s (%s)" % (host, port, str(e)))
+        raise e
+
+
+
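
The new client module above centres on connectXMLRPC(), which parses the "server:port" string, discovers the local IP by opening a throwaway UDP socket towards the server, and returns a connected BitBakeXMLRPCServerConnection. A minimal usage sketch follows; the address "localhost:8200", the timeout value and the empty featureset are illustrative assumptions, not values taken from this patch.

    # Hypothetical usage of the client API defined above (not part of the patch).
    import bb.server.xmlrpcclient as xmlrpcclient

    remote = "localhost:8200"   # assumed "server:port" of an already-running bitbake server
    if xmlrpcclient.check_connection(remote, timeout=5):
        # observer_only=True makes connect() use the "observer" token,
        # so the server treats every command as read-only.
        conn = xmlrpcclient.connectXMLRPC(remote, featureset=[], observer_only=True)
        try:
            value, error = conn.connection.runCommand(["getVariable", "TOPDIR"])
            print("TOPDIR =", value)
        finally:
            conn.terminate()
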
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpcserver.py b/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpcserver.py
new file mode 100644
index 0000000..875b128
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/bb/server/xmlrpcserver.py
@@ -0,0 +1,158 @@
+#
+# BitBake XMLRPC Server Interface
+#
+# Copyright (C) 2006 - 2007  Michael 'Mickey' Lauer
+# Copyright (C) 2006 - 2008  Richard Purdie
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+import os
+import sys
+
+import hashlib
+import time
+import inspect
+from xmlrpc.server import SimpleXMLRPCServer, SimpleXMLRPCRequestHandler
+
+import bb
+
+# This request handler checks if the request has a "Bitbake-token" header
+# field (this comes from the client side) and compares it with its internal
+# "Bitbake-token" field (this comes from the server). If the two are not
+# equal, it is assumed that a client is trying to connect to the server
+# while another client is connected to the server. In this case, a 503 error
+# ("service unavailable") is returned to the client.
+class BitBakeXMLRPCRequestHandler(SimpleXMLRPCRequestHandler):
+    def __init__(self, request, client_address, server):
+        self.server = server
+        SimpleXMLRPCRequestHandler.__init__(self, request, client_address, server)
+
+    def do_POST(self):
+        try:
+            remote_token = self.headers["Bitbake-token"]
+        except:
+            remote_token = None
+        if 0 and remote_token != self.server.connection_token and remote_token != "observer":
+            self.report_503()
+        else:
+            if remote_token == "observer":
+                self.server.readonly = True
+            else:
+                self.server.readonly = False
+            SimpleXMLRPCRequestHandler.do_POST(self)
+
+    def report_503(self):
+        self.send_response(503)
+        response = 'No more client allowed'
+        self.send_header("Content-type", "text/plain")
+        self.send_header("Content-length", str(len(response)))
+        self.end_headers()
+        self.wfile.write(bytes(response, 'utf-8'))
+
+class BitBakeXMLRPCServer(SimpleXMLRPCServer):
+    # remove this when you're done with debugging
+    # allow_reuse_address = True
+
+    def __init__(self, interface, cooker, parent):
+        # Use auto port configuration
+        if (interface[1] == -1):
+            interface = (interface[0], 0)
+        SimpleXMLRPCServer.__init__(self, interface,
+                                    requestHandler=BitBakeXMLRPCRequestHandler,
+                                    logRequests=False, allow_none=True)
+        self.host, self.port = self.socket.getsockname()
+        self.interface = interface
+
+        self.connection_token = None
+        self.commands = BitBakeXMLRPCServerCommands(self)
+        self.register_functions(self.commands, "")
+
+        self.cooker = cooker
+        self.parent = parent
+
+
+    def register_functions(self, context, prefix):
+        """
+        Convenience method for registering all functions in the scope
+        of this class that start with a common prefix
+        """
+        methodlist = inspect.getmembers(context, inspect.ismethod)
+        for name, method in methodlist:
+            if name.startswith(prefix):
+                self.register_function(method, name[len(prefix):])
+
+    def get_timeout(self, delay):
+        socktimeout = self.socket.gettimeout() or delay
+        return min(socktimeout, delay)
+
+    def handle_requests(self):
+        self._handle_request_noblock()
+
+class BitBakeXMLRPCServerCommands():
+
+    def __init__(self, server):
+        self.server = server
+        self.has_client = False
+
+    def registerEventHandler(self, host, port):
+        """
+        Register a remote UI Event Handler
+        """
+        s, t = bb.server.xmlrpcclient._create_server(host, port)
+
+        # we don't allow connections if the cooker is running
+        if (self.server.cooker.state in [bb.cooker.state.parsing, bb.cooker.state.running]):
+            return None, "Cooker is busy: %s" % bb.cooker.state.get_name(self.server.cooker.state)
+
+        self.event_handle = bb.event.register_UIHhandler(s, True)
+        return self.event_handle, 'OK'
+
+    def unregisterEventHandler(self, handlerNum):
+        """
+        Unregister a remote UI Event Handler
+        """
+        ret = bb.event.unregister_UIHhandler(handlerNum, True)
+        self.event_handle = None
+        return ret
+
+    def runCommand(self, command):
+        """
+        Run a cooker command on the server
+        """
+        return self.server.cooker.command.runCommand(command, self.server.readonly)
+
+    def getEventHandle(self):
+        return self.event_handle
+
+    def terminateServer(self):
+        """
+        Trigger the server to quit
+        """
+        self.server.parent.quit = True
+        print("XMLRPC Server triggering exit")
+        return
+
+    def addClient(self):
+        if self.server.parent.haveui:
+            return None
+        token = hashlib.md5(str(time.time()).encode("utf-8")).hexdigest()
+        self.server.connection_token = token
+        self.server.parent.haveui = True
+        return token
+
+    def removeClient(self):
+        if self.server.parent.haveui:
+            self.server.connection_token = None
+            self.server.parent.haveui = False
+
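
The comment above the request handler describes a single-writer policy: the first UI to call addClient() receives an md5 token, every later POST is expected to carry that token (or the literal "observer" string for read-only access) in the Bitbake-token header, and anything else is answered with 503. The standalone sketch below restates that intended handshake outside bitbake; note that the do_POST() shipped here short-circuits the 503 branch with "if 0 and ...", and the TokenGate name is invented purely for illustration.

    # Illustrative restatement of the Bitbake-token policy (TokenGate is not a bitbake class).
    import hashlib
    import time

    class TokenGate:
        def __init__(self):
            self.connection_token = None

        def add_client(self):
            """Hand out the single write token, or None if a UI already owns the server."""
            if self.connection_token is not None:
                return None
            token = hashlib.md5(str(time.time()).encode("utf-8")).hexdigest()
            self.connection_token = token
            return token

        def check(self, remote_token):
            """Mirror the intended do_POST() decision: returns (allowed, readonly)."""
            if remote_token == "observer":
                return True, True           # always allowed, but read-only
            if remote_token == self.connection_token:
                return True, False          # the registered UI, read-write
            return False, None              # would trigger report_503()

    gate = TokenGate()
    token = gate.add_client()
    assert gate.check(token) == (True, False)
    assert gate.check("observer") == (True, True)
    assert gate.check("stale-token") == (False, None)
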
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/siggen.py b/import-layers/yocto-poky/bitbake/lib/bb/siggen.py
index f71190a..5ef82d7 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/siggen.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/siggen.py
@@ -69,6 +69,10 @@
     def set_taskdata(self, data):
         self.runtaskdeps, self.taskhash, self.file_checksum_values, self.taints, self.basehash = data
 
+    def reset(self, data):
+        self.__init__(data)
+
+
 class SignatureGeneratorBasic(SignatureGenerator):
     """
     """
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/taskdata.py b/import-layers/yocto-poky/bitbake/lib/bb/taskdata.py
index 8c96a56..0ea6c0b 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/taskdata.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/taskdata.py
@@ -47,7 +47,7 @@
     """
     BitBake Task Data implementation
     """
-    def __init__(self, abort = True, tryaltconfigs = False, skiplist = None, allowincomplete = False):
+    def __init__(self, abort = True, skiplist = None, allowincomplete = False):
         self.build_targets = {}
         self.run_targets = {}
 
@@ -66,7 +66,6 @@
         self.failed_fns = []
 
         self.abort = abort
-        self.tryaltconfigs = tryaltconfigs
         self.allowincomplete = allowincomplete
 
         self.skiplist = skiplist
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/tests/event.py b/import-layers/yocto-poky/bitbake/lib/bb/tests/event.py
new file mode 100644
index 0000000..c7eb1fe
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/bb/tests/event.py
@@ -0,0 +1,377 @@
+# ex:ts=4:sw=4:sts=4:et
+# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
+#
+# BitBake Tests for the Event implementation (event.py)
+#
+# Copyright (C) 2017 Intel Corporation
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+
+import unittest
+import bb
+import logging
+import bb.compat
+import bb.event
+import importlib
+import threading
+import time
+import pickle
+from unittest.mock import Mock
+from unittest.mock import call
+
+
+class EventQueueStub():
+    """ Class used as specification for UI event handler queue stub objects """
+    def __init__(self):
+        return
+
+    def send(self, event):
+        return
+
+
+class PickleEventQueueStub():
+    """ Class used as specification for UI event handler queue stub objects
+        with sendpickle method """
+    def __init__(self):
+        return
+
+    def sendpickle(self, pickled_event):
+        return
+
+
+class UIClientStub():
+    """ Class used as specification for UI event handler stub objects """
+    def __init__(self):
+        self.event = None
+
+
+class EventHandlingTest(unittest.TestCase):
+    """ Event handling test class """
+    _threadlock_test_calls = []
+
+    def setUp(self):
+        self._test_process = Mock()
+        ui_client1 = UIClientStub()
+        ui_client2 = UIClientStub()
+        self._test_ui1 = Mock(wraps=ui_client1)
+        self._test_ui2 = Mock(wraps=ui_client2)
+        importlib.reload(bb.event)
+
+    def _create_test_handlers(self):
+        """ Method used to create a test handler ordered dictionary """
+        test_handlers = bb.compat.OrderedDict()
+        test_handlers["handler1"] = self._test_process.handler1
+        test_handlers["handler2"] = self._test_process.handler2
+        return test_handlers
+
+    def test_class_handlers(self):
+        """ Test set_class_handlers and get_class_handlers methods """
+        test_handlers = self._create_test_handlers()
+        bb.event.set_class_handlers(test_handlers)
+        self.assertEqual(test_handlers,
+                         bb.event.get_class_handlers())
+
+    def test_handlers(self):
+        """ Test set_handlers and get_handlers """
+        test_handlers = self._create_test_handlers()
+        bb.event.set_handlers(test_handlers)
+        self.assertEqual(test_handlers,
+                         bb.event.get_handlers())
+
+    def test_clean_class_handlers(self):
+        """ Test clean_class_handlers method """
+        cleanDict = bb.compat.OrderedDict()
+        self.assertEqual(cleanDict,
+                         bb.event.clean_class_handlers())
+
+    def test_register(self):
+        """ Test register method for class handlers """
+        result = bb.event.register("handler", self._test_process.handler)
+        self.assertEqual(result, bb.event.Registered)
+        handlers_dict = bb.event.get_class_handlers()
+        self.assertIn("handler", handlers_dict)
+
+    def test_already_registered(self):
+        """ Test detection of an already registed class handler """
+        bb.event.register("handler", self._test_process.handler)
+        handlers_dict = bb.event.get_class_handlers()
+        self.assertIn("handler", handlers_dict)
+        result = bb.event.register("handler", self._test_process.handler)
+        self.assertEqual(result, bb.event.AlreadyRegistered)
+
+    def test_register_from_string(self):
+        """ Test register method receiving code in string """
+        result = bb.event.register("string_handler", "    return True")
+        self.assertEqual(result, bb.event.Registered)
+        handlers_dict = bb.event.get_class_handlers()
+        self.assertIn("string_handler", handlers_dict)
+
+    def test_register_with_mask(self):
+        """ Test register method with event masking """
+        mask = ["bb.event.OperationStarted",
+                "bb.event.OperationCompleted"]
+        result = bb.event.register("event_handler",
+                                   self._test_process.event_handler,
+                                   mask)
+        self.assertEqual(result, bb.event.Registered)
+        handlers_dict = bb.event.get_class_handlers()
+        self.assertIn("event_handler", handlers_dict)
+
+    def test_remove(self):
+        """ Test remove method for class handlers """
+        test_handlers = self._create_test_handlers()
+        bb.event.set_class_handlers(test_handlers)
+        count = len(test_handlers)
+        bb.event.remove("handler1", None)
+        test_handlers = bb.event.get_class_handlers()
+        self.assertEqual(len(test_handlers), count - 1)
+        with self.assertRaises(KeyError):
+            bb.event.remove("handler1", None)
+
+    def test_execute_handler(self):
+        """ Test execute_handler method for class handlers """
+        mask = ["bb.event.OperationProgress"]
+        result = bb.event.register("event_handler",
+                                   self._test_process.event_handler,
+                                   mask)
+        self.assertEqual(result, bb.event.Registered)
+        event = bb.event.OperationProgress(current=10, total=100)
+        bb.event.execute_handler("event_handler",
+                                 self._test_process.event_handler,
+                                 event,
+                                 None)
+        self._test_process.event_handler.assert_called_once_with(event)
+
+    def test_fire_class_handlers(self):
+        """ Test fire_class_handlers method """
+        mask = ["bb.event.OperationStarted"]
+        result = bb.event.register("event_handler1",
+                                   self._test_process.event_handler1,
+                                   mask)
+        self.assertEqual(result, bb.event.Registered)
+        result = bb.event.register("event_handler2",
+                                   self._test_process.event_handler2,
+                                   "*")
+        self.assertEqual(result, bb.event.Registered)
+        event1 = bb.event.OperationStarted()
+        event2 = bb.event.OperationCompleted(total=123)
+        bb.event.fire_class_handlers(event1, None)
+        bb.event.fire_class_handlers(event2, None)
+        bb.event.fire_class_handlers(event2, None)
+        expected_event_handler1 = [call(event1)]
+        expected_event_handler2 = [call(event1),
+                                   call(event2),
+                                   call(event2)]
+        self.assertEqual(self._test_process.event_handler1.call_args_list,
+                         expected_event_handler1)
+        self.assertEqual(self._test_process.event_handler2.call_args_list,
+                         expected_event_handler2)
+
+    def test_change_handler_event_mapping(self):
+        """ Test changing the event mapping for class handlers """
+        event1 = bb.event.OperationStarted()
+        event2 = bb.event.OperationCompleted(total=123)
+
+        # register handler for all events
+        result = bb.event.register("event_handler1",
+                                   self._test_process.event_handler1,
+                                   "*")
+        self.assertEqual(result, bb.event.Registered)
+        bb.event.fire_class_handlers(event1, None)
+        bb.event.fire_class_handlers(event2, None)
+        expected = [call(event1), call(event2)]
+        self.assertEqual(self._test_process.event_handler1.call_args_list,
+                         expected)
+
+        # unregister handler and register it only for OperationStarted
+        result = bb.event.remove("event_handler1",
+                                 self._test_process.event_handler1)
+        mask = ["bb.event.OperationStarted"]
+        result = bb.event.register("event_handler1",
+                                   self._test_process.event_handler1,
+                                   mask)
+        self.assertEqual(result, bb.event.Registered)
+        bb.event.fire_class_handlers(event1, None)
+        bb.event.fire_class_handlers(event2, None)
+        expected = [call(event1), call(event2), call(event1)]
+        self.assertEqual(self._test_process.event_handler1.call_args_list,
+                         expected)
+
+        # unregister handler and register it only for OperationCompleted
+        result = bb.event.remove("event_handler1",
+                                 self._test_process.event_handler1)
+        mask = ["bb.event.OperationCompleted"]
+        result = bb.event.register("event_handler1",
+                                   self._test_process.event_handler1,
+                                   mask)
+        self.assertEqual(result, bb.event.Registered)
+        bb.event.fire_class_handlers(event1, None)
+        bb.event.fire_class_handlers(event2, None)
+        expected = [call(event1), call(event2), call(event1), call(event2)]
+        self.assertEqual(self._test_process.event_handler1.call_args_list,
+                         expected)
+
+    def test_register_UIHhandler(self):
+        """ Test register_UIHhandler method """
+        result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+        self.assertEqual(result, 1)
+
+    def test_UIHhandler_already_registered(self):
+        """ Test registering an UIHhandler already existing """
+        result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+        self.assertEqual(result, 1)
+        result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+        self.assertEqual(result, 2)
+
+    def test_unregister_UIHhandler(self):
+        """ Test unregister_UIHhandler method """
+        result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+        self.assertEqual(result, 1)
+        result = bb.event.unregister_UIHhandler(1)
+        self.assertIs(result, None)
+
+    def test_fire_ui_handlers(self):
+        """ Test fire_ui_handlers method """
+        self._test_ui1.event = Mock(spec_set=EventQueueStub)
+        result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+        self.assertEqual(result, 1)
+        self._test_ui2.event = Mock(spec_set=PickleEventQueueStub)
+        result = bb.event.register_UIHhandler(self._test_ui2, mainui=True)
+        self.assertEqual(result, 2)
+        event1 = bb.event.OperationStarted()
+        bb.event.fire_ui_handlers(event1, None)
+        expected = [call(event1)]
+        self.assertEqual(self._test_ui1.event.send.call_args_list,
+                         expected)
+        expected = [call(pickle.dumps(event1))]
+        self.assertEqual(self._test_ui2.event.sendpickle.call_args_list,
+                         expected)
+
+    def test_fire(self):
+        """ Test fire method used to trigger class and ui event handlers """
+        mask = ["bb.event.ConfigParsed"]
+        result = bb.event.register("event_handler1",
+                                   self._test_process.event_handler1,
+                                   mask)
+
+        self._test_ui1.event = Mock(spec_set=EventQueueStub)
+        result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+        self.assertEqual(result, 1)
+
+        event1 = bb.event.ConfigParsed()
+        bb.event.fire(event1, None)
+        expected = [call(event1)]
+        self.assertEqual(self._test_process.event_handler1.call_args_list,
+                         expected)
+        self.assertEqual(self._test_ui1.event.send.call_args_list,
+                         expected)
+
+    def test_fire_from_worker(self):
+        """ Test fire_from_worker method """
+        self._test_ui1.event = Mock(spec_set=EventQueueStub)
+        result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+        self.assertEqual(result, 1)
+        event1 = bb.event.ConfigParsed()
+        bb.event.fire_from_worker(event1, None)
+        expected = [call(event1)]
+        self.assertEqual(self._test_ui1.event.send.call_args_list,
+                         expected)
+
+    def test_print_ui_queue(self):
+        """ Test print_ui_queue method """
+        event1 = bb.event.OperationStarted()
+        event2 = bb.event.OperationCompleted(total=123)
+        bb.event.fire(event1, None)
+        bb.event.fire(event2, None)
+        logger = logging.getLogger("BitBake")
+        logger.addHandler(bb.event.LogHandler())
+        logger.info("Test info LogRecord")
+        logger.warning("Test warning LogRecord")
+        with self.assertLogs("BitBake", level="INFO") as cm:
+            bb.event.print_ui_queue()
+        self.assertEqual(cm.output,
+                         ["INFO:BitBake:Test info LogRecord",
+                          "WARNING:BitBake:Test warning LogRecord"])
+
+    def _set_threadlock_test_mockups(self):
+        """ Create UI event handler mockups used in enable and disable
+            threadlock tests """
+        def ui1_event_send(event):
+            if type(event) is bb.event.ConfigParsed:
+                self._threadlock_test_calls.append("w1_ui1")
+            if type(event) is bb.event.OperationStarted:
+                self._threadlock_test_calls.append("w2_ui1")
+            time.sleep(2)
+
+        def ui2_event_send(event):
+            if type(event) is bb.event.ConfigParsed:
+                self._threadlock_test_calls.append("w1_ui2")
+            if type(event) is bb.event.OperationStarted:
+                self._threadlock_test_calls.append("w2_ui2")
+            time.sleep(2)
+
+        self._threadlock_test_calls = []
+        self._test_ui1.event = EventQueueStub()
+        self._test_ui1.event.send = ui1_event_send
+        result = bb.event.register_UIHhandler(self._test_ui1, mainui=True)
+        self.assertEqual(result, 1)
+        self._test_ui2.event = EventQueueStub()
+        self._test_ui2.event.send = ui2_event_send
+        result = bb.event.register_UIHhandler(self._test_ui2, mainui=True)
+        self.assertEqual(result, 2)
+
+    def _set_and_run_threadlock_test_workers(self):
+        """ Create and run the workers used to trigger events in enable and
+            disable threadlock tests """
+        worker1 = threading.Thread(target=self._thread_lock_test_worker1)
+        worker2 = threading.Thread(target=self._thread_lock_test_worker2)
+        worker1.start()
+        time.sleep(1)
+        worker2.start()
+        worker1.join()
+        worker2.join()
+
+    def _thread_lock_test_worker1(self):
+        """ First worker used to fire the ConfigParsed event for enable and
+            disable threadlocks tests """
+        bb.event.fire(bb.event.ConfigParsed(), None)
+
+    def _thread_lock_test_worker2(self):
+        """ Second worker used to fire the OperationStarted event for enable
+            and disable threadlocks tests """
+        bb.event.fire(bb.event.OperationStarted(), None)
+
+    def test_enable_threadlock(self):
+        """ Test enable_threadlock method """
+        self._set_threadlock_test_mockups()
+        bb.event.enable_threadlock()
+        self._set_and_run_threadlock_test_workers()
+        # Calls to UI handlers should be in order as all the registered
+        # handlers for the event coming from the first worker should be
+        # called before processing the event from the second worker.
+        self.assertEqual(self._threadlock_test_calls,
+                         ["w1_ui1", "w1_ui2", "w2_ui1", "w2_ui2"])
+
+    def test_disable_threadlock(self):
+        """ Test disable_threadlock method """
+        self._set_threadlock_test_mockups()
+        bb.event.disable_threadlock()
+        self._set_and_run_threadlock_test_workers()
+        # Calls to UI handlers should be intertwined together. Thanks to the
+        # delay in the registered handlers for the event coming from the first
+        # worker, the event coming from the second worker starts being
+        # processed before finishing handling the first worker event.
+        self.assertEqual(self._threadlock_test_calls,
+                         ["w1_ui1", "w2_ui1", "w1_ui2", "w2_ui2"])
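
test_enable_threadlock and test_disable_threadlock above fire events from two worker threads into two deliberately slow UI handlers; with the lock enabled, both handlers finish the first worker's event before the second worker's event is delivered. The self-contained sketch below reproduces that ordering with a plain threading.Lock; the handler names and sleep times are illustrative, and the real lock lives inside bb.event.

    # Self-contained illustration of the serialisation the threadlock tests check.
    import threading
    import time

    ui_lock = threading.Lock()
    calls = []

    def fire_ui_handlers(event, serialize):
        def deliver():
            for ui in ("ui1", "ui2"):
                calls.append("%s_%s" % (event, ui))
                time.sleep(0.2)            # slow handler, as in the tests
        if serialize:
            with ui_lock:                  # enable_threadlock() behaviour
                deliver()
        else:
            deliver()                      # disable_threadlock() behaviour

    def run(serialize):
        del calls[:]
        w1 = threading.Thread(target=fire_ui_handlers, args=("w1", serialize))
        w2 = threading.Thread(target=fire_ui_handlers, args=("w2", serialize))
        w1.start(); time.sleep(0.05); w2.start()
        w1.join(); w2.join()
        return list(calls)

    assert run(True) == ["w1_ui1", "w1_ui2", "w2_ui1", "w2_ui2"]
    # Without the lock the two workers interleave, e.g.:
    # ["w1_ui1", "w2_ui1", "w1_ui2", "w2_ui2"]
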
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/tests/fetch.py b/import-layers/yocto-poky/bitbake/lib/bb/tests/fetch.py
index 5a8d892..7d7c5d7 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/tests/fetch.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/tests/fetch.py
@@ -28,6 +28,11 @@
 from bb.fetch2 import FetchMethod
 import bb
 
+def skipIfNoNetwork():
+    if os.environ.get("BB_SKIP_NETTESTS") == "yes":
+        return unittest.skip("Network tests being skipped")
+    return lambda f: f
+
 class URITest(unittest.TestCase):
     test_uris = {
         "http://www.google.com/index.html" : {
@@ -518,141 +523,153 @@
             self.fetchUnpack(['file://a;subdir=/bin/sh'])
 
 class FetcherNetworkTest(FetcherTest):
+    @skipIfNoNetwork()
+    def test_fetch(self):
+        fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
+        fetcher.download()
+        self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+        self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.1.tar.gz"), 57892)
+        self.d.setVar("BB_NO_NETWORK", "1")
+        fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
+        fetcher.download()
+        fetcher.unpack(self.unpackdir)
+        self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.0/")), 9)
+        self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.1/")), 9)
 
-    if os.environ.get("BB_SKIP_NETTESTS") == "yes":
-        print("Unset BB_SKIP_NETTESTS to run network tests")
-    else:
-        def test_fetch(self):
-            fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
-            fetcher.download()
-            self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
-            self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.1.tar.gz"), 57892)
-            self.d.setVar("BB_NO_NETWORK", "1")
-            fetcher = bb.fetch.Fetch(["http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz", "http://downloads.yoctoproject.org/releases/bitbake/bitbake-1.1.tar.gz"], self.d)
-            fetcher.download()
+    @skipIfNoNetwork()
+    def test_fetch_mirror(self):
+        self.d.setVar("MIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
+        fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
+        fetcher.download()
+        self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+
+    @skipIfNoNetwork()
+    def test_fetch_mirror_of_mirror(self):
+        self.d.setVar("MIRRORS", "http://.*/.* http://invalid2.yoctoproject.org/ \n http://invalid2.yoctoproject.org/.* http://downloads.yoctoproject.org/releases/bitbake")
+        fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
+        fetcher.download()
+        self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+
+    @skipIfNoNetwork()
+    def test_fetch_file_mirror_of_mirror(self):
+        self.d.setVar("MIRRORS", "http://.*/.* file:///some1where/ \n file:///some1where/.* file://some2where/ \n file://some2where/.* http://downloads.yoctoproject.org/releases/bitbake")
+        fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
+        os.mkdir(self.dldir + "/some2where")
+        fetcher.download()
+        self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+
+    @skipIfNoNetwork()
+    def test_fetch_premirror(self):
+        self.d.setVar("PREMIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
+        fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
+        fetcher.download()
+        self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+
+    @skipIfNoNetwork()
+    def gitfetcher(self, url1, url2):
+        def checkrevision(self, fetcher):
             fetcher.unpack(self.unpackdir)
-            self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.0/")), 9)
-            self.assertEqual(len(os.listdir(self.unpackdir + "/bitbake-1.1/")), 9)
+            revision = bb.process.run("git rev-parse HEAD", shell=True, cwd=self.unpackdir + "/git")[0].strip()
+            self.assertEqual(revision, "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
 
-        def test_fetch_mirror(self):
-            self.d.setVar("MIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
-            fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
-            fetcher.download()
-            self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+        self.d.setVar("BB_GENERATE_MIRROR_TARBALLS", "1")
+        self.d.setVar("SRCREV", "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
+        fetcher = bb.fetch.Fetch([url1], self.d)
+        fetcher.download()
+        checkrevision(self, fetcher)
+        # Wipe out the dldir clone and the unpacked source, turn off the network and check mirror tarball works
+        bb.utils.prunedir(self.dldir + "/git2/")
+        bb.utils.prunedir(self.unpackdir)
+        self.d.setVar("BB_NO_NETWORK", "1")
+        fetcher = bb.fetch.Fetch([url2], self.d)
+        fetcher.download()
+        checkrevision(self, fetcher)
 
-        def test_fetch_mirror_of_mirror(self):
-            self.d.setVar("MIRRORS", "http://.*/.* http://invalid2.yoctoproject.org/ \n http://invalid2.yoctoproject.org/.* http://downloads.yoctoproject.org/releases/bitbake")
-            fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
-            fetcher.download()
-            self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+    @skipIfNoNetwork()
+    def test_gitfetch(self):
+        url1 = url2 = "git://git.openembedded.org/bitbake"
+        self.gitfetcher(url1, url2)
 
-        def test_fetch_file_mirror_of_mirror(self):
-            self.d.setVar("MIRRORS", "http://.*/.* file:///some1where/ \n file:///some1where/.* file://some2where/ \n file://some2where/.* http://downloads.yoctoproject.org/releases/bitbake")
-            fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
-            os.mkdir(self.dldir + "/some2where")
-            fetcher.download()
-            self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+    @skipIfNoNetwork()
+    def test_gitfetch_goodsrcrev(self):
+        # SRCREV is set but matches rev= parameter
+        url1 = url2 = "git://git.openembedded.org/bitbake;rev=270a05b0b4ba0959fe0624d2a4885d7b70426da5"
+        self.gitfetcher(url1, url2)
 
-        def test_fetch_premirror(self):
-            self.d.setVar("PREMIRRORS", "http://.*/.* http://downloads.yoctoproject.org/releases/bitbake")
-            fetcher = bb.fetch.Fetch(["http://invalid.yoctoproject.org/releases/bitbake/bitbake-1.0.tar.gz"], self.d)
-            fetcher.download()
-            self.assertEqual(os.path.getsize(self.dldir + "/bitbake-1.0.tar.gz"), 57749)
+    @skipIfNoNetwork()
+    def test_gitfetch_badsrcrev(self):
+        # SRCREV is set but does not match rev= parameter
+        url1 = url2 = "git://git.openembedded.org/bitbake;rev=dead05b0b4ba0959fe0624d2a4885d7b70426da5"
+        self.assertRaises(bb.fetch.FetchError, self.gitfetcher, url1, url2)
 
-        def gitfetcher(self, url1, url2):
-            def checkrevision(self, fetcher):
-                fetcher.unpack(self.unpackdir)
-                revision = bb.process.run("git rev-parse HEAD", shell=True, cwd=self.unpackdir + "/git")[0].strip()
-                self.assertEqual(revision, "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
+    @skipIfNoNetwork()
+    def test_gitfetch_tagandrev(self):
+        # SRCREV is set but does not match rev= parameter
+        url1 = url2 = "git://git.openembedded.org/bitbake;rev=270a05b0b4ba0959fe0624d2a4885d7b70426da5;tag=270a05b0b4ba0959fe0624d2a4885d7b70426da5"
+        self.assertRaises(bb.fetch.FetchError, self.gitfetcher, url1, url2)
 
-            self.d.setVar("BB_GENERATE_MIRROR_TARBALLS", "1")
-            self.d.setVar("SRCREV", "270a05b0b4ba0959fe0624d2a4885d7b70426da5")
-            fetcher = bb.fetch.Fetch([url1], self.d)
-            fetcher.download()
-            checkrevision(self, fetcher)
-            # Wipe out the dldir clone and the unpacked source, turn off the network and check mirror tarball works
-            bb.utils.prunedir(self.dldir + "/git2/")
-            bb.utils.prunedir(self.unpackdir)
-            self.d.setVar("BB_NO_NETWORK", "1")
-            fetcher = bb.fetch.Fetch([url2], self.d)
-            fetcher.download()
-            checkrevision(self, fetcher)
+    @skipIfNoNetwork()
+    def test_gitfetch_localusehead(self):
+        # Create dummy local Git repo
+        src_dir = tempfile.mkdtemp(dir=self.tempdir,
+                                   prefix='gitfetch_localusehead_')
+        src_dir = os.path.abspath(src_dir)
+        bb.process.run("git init", cwd=src_dir)
+        bb.process.run("git commit --allow-empty -m'Dummy commit'",
+                       cwd=src_dir)
+        # Use other branch than master
+        bb.process.run("git checkout -b my-devel", cwd=src_dir)
+        bb.process.run("git commit --allow-empty -m'Dummy commit 2'",
+                       cwd=src_dir)
+        stdout = bb.process.run("git rev-parse HEAD", cwd=src_dir)
+        orig_rev = stdout[0].strip()
 
-        def test_gitfetch(self):
-            url1 = url2 = "git://git.openembedded.org/bitbake"
-            self.gitfetcher(url1, url2)
+        # Fetch and check revision
+        self.d.setVar("SRCREV", "AUTOINC")
+        url = "git://" + src_dir + ";protocol=file;usehead=1"
+        fetcher = bb.fetch.Fetch([url], self.d)
+        fetcher.download()
+        fetcher.unpack(self.unpackdir)
+        stdout = bb.process.run("git rev-parse HEAD",
+                                cwd=os.path.join(self.unpackdir, 'git'))
+        unpack_rev = stdout[0].strip()
+        self.assertEqual(orig_rev, unpack_rev)
 
-        def test_gitfetch_goodsrcrev(self):
-            # SRCREV is set but matches rev= parameter
-            url1 = url2 = "git://git.openembedded.org/bitbake;rev=270a05b0b4ba0959fe0624d2a4885d7b70426da5"
-            self.gitfetcher(url1, url2)
+    @skipIfNoNetwork()
+    def test_gitfetch_remoteusehead(self):
+        url = "git://git.openembedded.org/bitbake;usehead=1"
+        self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)
 
-        def test_gitfetch_badsrcrev(self):
-            # SRCREV is set but does not match rev= parameter
-            url1 = url2 = "git://git.openembedded.org/bitbake;rev=dead05b0b4ba0959fe0624d2a4885d7b70426da5"
-            self.assertRaises(bb.fetch.FetchError, self.gitfetcher, url1, url2)
+    @skipIfNoNetwork()
+    def test_gitfetch_premirror(self):
+        url1 = "git://git.openembedded.org/bitbake"
+        url2 = "git://someserver.org/bitbake"
+        self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
+        self.gitfetcher(url1, url2)
 
-        def test_gitfetch_tagandrev(self):
-            # SRCREV is set but does not match rev= parameter
-            url1 = url2 = "git://git.openembedded.org/bitbake;rev=270a05b0b4ba0959fe0624d2a4885d7b70426da5;tag=270a05b0b4ba0959fe0624d2a4885d7b70426da5"
-            self.assertRaises(bb.fetch.FetchError, self.gitfetcher, url1, url2)
+    @skipIfNoNetwork()
+    def test_gitfetch_premirror2(self):
+        url1 = url2 = "git://someserver.org/bitbake"
+        self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
+        self.gitfetcher(url1, url2)
 
-        def test_gitfetch_localusehead(self):
-            # Create dummy local Git repo
-            src_dir = tempfile.mkdtemp(dir=self.tempdir,
-                                       prefix='gitfetch_localusehead_')
-            src_dir = os.path.abspath(src_dir)
-            bb.process.run("git init", cwd=src_dir)
-            bb.process.run("git commit --allow-empty -m'Dummy commit'",
-                           cwd=src_dir)
-            # Use other branch than master
-            bb.process.run("git checkout -b my-devel", cwd=src_dir)
-            bb.process.run("git commit --allow-empty -m'Dummy commit 2'",
-                           cwd=src_dir)
-            stdout = bb.process.run("git rev-parse HEAD", cwd=src_dir)
-            orig_rev = stdout[0].strip()
+    @skipIfNoNetwork()
+    def test_gitfetch_premirror3(self):
+        realurl = "git://git.openembedded.org/bitbake"
+        dummyurl = "git://someserver.org/bitbake"
+        self.sourcedir = self.unpackdir.replace("unpacked", "sourcemirror.git")
+        os.chdir(self.tempdir)
+        bb.process.run("git clone %s %s 2> /dev/null" % (realurl, self.sourcedir), shell=True)
+        self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (dummyurl, self.sourcedir))
+        self.gitfetcher(dummyurl, dummyurl)
 
-            # Fetch and check revision
-            self.d.setVar("SRCREV", "AUTOINC")
-            url = "git://" + src_dir + ";protocol=file;usehead=1"
-            fetcher = bb.fetch.Fetch([url], self.d)
-            fetcher.download()
-            fetcher.unpack(self.unpackdir)
-            stdout = bb.process.run("git rev-parse HEAD",
-                                    cwd=os.path.join(self.unpackdir, 'git'))
-            unpack_rev = stdout[0].strip()
-            self.assertEqual(orig_rev, unpack_rev)
-
-        def test_gitfetch_remoteusehead(self):
-            url = "git://git.openembedded.org/bitbake;usehead=1"
-            self.assertRaises(bb.fetch.ParameterError, self.gitfetcher, url, url)
-
-        def test_gitfetch_premirror(self):
-            url1 = "git://git.openembedded.org/bitbake"
-            url2 = "git://someserver.org/bitbake"
-            self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
-            self.gitfetcher(url1, url2)
-
-        def test_gitfetch_premirror2(self):
-            url1 = url2 = "git://someserver.org/bitbake"
-            self.d.setVar("PREMIRRORS", "git://someserver.org/bitbake git://git.openembedded.org/bitbake \n")
-            self.gitfetcher(url1, url2)
-
-        def test_gitfetch_premirror3(self):
-            realurl = "git://git.openembedded.org/bitbake"
-            dummyurl = "git://someserver.org/bitbake"
-            self.sourcedir = self.unpackdir.replace("unpacked", "sourcemirror.git")
-            os.chdir(self.tempdir)
-            bb.process.run("git clone %s %s 2> /dev/null" % (realurl, self.sourcedir), shell=True)
-            self.d.setVar("PREMIRRORS", "%s git://%s;protocol=file \n" % (dummyurl, self.sourcedir))
-            self.gitfetcher(dummyurl, dummyurl)
-
-        def test_git_submodule(self):
-            fetcher = bb.fetch.Fetch(["gitsm://git.yoctoproject.org/git-submodule-test;rev=f12e57f2edf0aa534cf1616fa983d165a92b0842"], self.d)
-            fetcher.download()
-            # Previous cwd has been deleted
-            os.chdir(os.path.dirname(self.unpackdir))
-            fetcher.unpack(self.unpackdir)
+    @skipIfNoNetwork()
+    def test_git_submodule(self):
+        fetcher = bb.fetch.Fetch(["gitsm://git.yoctoproject.org/git-submodule-test;rev=f12e57f2edf0aa534cf1616fa983d165a92b0842"], self.d)
+        fetcher.download()
+        # Previous cwd has been deleted
+        os.chdir(os.path.dirname(self.unpackdir))
+        fetcher.unpack(self.unpackdir)
 
 
 class TrustedNetworksTest(FetcherTest):
@@ -782,32 +799,32 @@
         ("db", "http://download.oracle.com/berkeley-db/db-5.3.21.tar.gz", "http://www.oracle.com/technetwork/products/berkeleydb/downloads/index-082944.html", "http://download.oracle.com/otn/berkeley-db/(?P<name>db-)(?P<pver>((\d+[\.\-_]*)+))\.tar\.gz")
             : "6.1.19",
     }
-    if os.environ.get("BB_SKIP_NETTESTS") == "yes":
-        print("Unset BB_SKIP_NETTESTS to run network tests")
-    else:
-        def test_git_latest_versionstring(self):
-            for k, v in self.test_git_uris.items():
-                self.d.setVar("PN", k[0])
-                self.d.setVar("SRCREV", k[2])
-                self.d.setVar("UPSTREAM_CHECK_GITTAGREGEX", k[3])
-                ud = bb.fetch2.FetchData(k[1], self.d)
-                pupver= ud.method.latest_versionstring(ud, self.d)
-                verstring = pupver[0]
-                self.assertTrue(verstring, msg="Could not find upstream version")
-                r = bb.utils.vercmp_string(v, verstring)
-                self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
 
-        def test_wget_latest_versionstring(self):
-            for k, v in self.test_wget_uris.items():
-                self.d.setVar("PN", k[0])
-                self.d.setVar("UPSTREAM_CHECK_URI", k[2])
-                self.d.setVar("UPSTREAM_CHECK_REGEX", k[3])
-                ud = bb.fetch2.FetchData(k[1], self.d)
-                pupver = ud.method.latest_versionstring(ud, self.d)
-                verstring = pupver[0]
-                self.assertTrue(verstring, msg="Could not find upstream version")
-                r = bb.utils.vercmp_string(v, verstring)
-                self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
+    @skipIfNoNetwork()
+    def test_git_latest_versionstring(self):
+        for k, v in self.test_git_uris.items():
+            self.d.setVar("PN", k[0])
+            self.d.setVar("SRCREV", k[2])
+            self.d.setVar("UPSTREAM_CHECK_GITTAGREGEX", k[3])
+            ud = bb.fetch2.FetchData(k[1], self.d)
+            pupver = ud.method.latest_versionstring(ud, self.d)
+            verstring = pupver[0]
+            self.assertTrue(verstring, msg="Could not find upstream version")
+            r = bb.utils.vercmp_string(v, verstring)
+            self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
+
+    @skipIfNoNetwork()
+    def test_wget_latest_versionstring(self):
+        for k, v in self.test_wget_uris.items():
+            self.d.setVar("PN", k[0])
+            self.d.setVar("UPSTREAM_CHECK_URI", k[2])
+            self.d.setVar("UPSTREAM_CHECK_REGEX", k[3])
+            ud = bb.fetch2.FetchData(k[1], self.d)
+            pupver = ud.method.latest_versionstring(ud, self.d)
+            verstring = pupver[0]
+            self.assertTrue(verstring, msg="Could not find upstream version")
+            r = bb.utils.vercmp_string(v, verstring)
+            self.assertTrue(r == -1 or r == 0, msg="Package %s, version: %s <= %s" % (k[0], v, verstring))
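These version-check tests drive the fetcher's upstream-version API directly. A minimal sketch of the same call sequence outside the test harness, assuming `d` is an already-initialised BitBake datastore and that the recipe/URI/regex values passed in are illustrative only:

import bb.fetch2

def latest_upstream_version(d, pn, uri, srcrev, tagregex):
    # Mirrors test_git_latest_versionstring: the git fetcher reads PN, SRCREV
    # and UPSTREAM_CHECK_GITTAGREGEX when resolving the newest upstream tag.
    d.setVar("PN", pn)
    d.setVar("SRCREV", srcrev)
    d.setVar("UPSTREAM_CHECK_GITTAGREGEX", tagregex)
    ud = bb.fetch2.FetchData(uri, d)
    pupver = ud.method.latest_versionstring(ud, d)
    return pupver[0]  # falsy when no upstream version could be found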
 
 
 class FetchCheckStatusTest(FetcherTest):
@@ -820,37 +837,636 @@
                       "https://yoctoproject.org/documentation",
                       "http://downloads.yoctoproject.org/releases/opkg/opkg-0.1.7.tar.gz",
                       "http://downloads.yoctoproject.org/releases/opkg/opkg-0.3.0.tar.gz",
-                      "ftp://ftp.gnu.org/gnu/autoconf/autoconf-2.60.tar.gz",
-                      "ftp://ftp.gnu.org/gnu/chess/gnuchess-5.08.tar.gz",
-                      "ftp://ftp.gnu.org/gnu/gmp/gmp-4.0.tar.gz",
+                      "ftp://sourceware.org/pub/libffi/libffi-1.20.tar.gz",
+                      "http://ftp.gnu.org/gnu/autoconf/autoconf-2.60.tar.gz",
+                      "https://ftp.gnu.org/gnu/chess/gnuchess-5.08.tar.gz",
+                      "https://ftp.gnu.org/gnu/gmp/gmp-4.0.tar.gz",
                       # GitHub releases are hosted on Amazon S3, which doesn't support HEAD
                       "https://github.com/kergoth/tslib/releases/download/1.1/tslib-1.1.tar.xz"
                       ]
 
-    if os.environ.get("BB_SKIP_NETTESTS") == "yes":
-        print("Unset BB_SKIP_NETTESTS to run network tests")
-    else:
-
-        def test_wget_checkstatus(self):
-            fetch = bb.fetch2.Fetch(self.test_wget_uris, self.d)
-            for u in self.test_wget_uris:
+    @skipIfNoNetwork()
+    def test_wget_checkstatus(self):
+        fetch = bb.fetch2.Fetch(self.test_wget_uris, self.d)
+        for u in self.test_wget_uris:
+            with self.subTest(url=u):
                 ud = fetch.ud[u]
                 m = ud.method
                 ret = m.checkstatus(fetch, ud, self.d)
                 self.assertTrue(ret, msg="URI %s, can't check status" % (u))
 
+    @skipIfNoNetwork()
+    def test_wget_checkstatus_connection_cache(self):
+        from bb.fetch2 import FetchConnectionCache
 
-        def test_wget_checkstatus_connection_cache(self):
-            from bb.fetch2 import FetchConnectionCache
+        connection_cache = FetchConnectionCache()
+        fetch = bb.fetch2.Fetch(self.test_wget_uris, self.d,
+                    connection_cache = connection_cache)
 
-            connection_cache = FetchConnectionCache()
-            fetch = bb.fetch2.Fetch(self.test_wget_uris, self.d,
-                        connection_cache = connection_cache)
-
-            for u in self.test_wget_uris:
+        for u in self.test_wget_uris:
+            with self.subTest(url=u):
                 ud = fetch.ud[u]
                 m = ud.method
                 ret = m.checkstatus(fetch, ud, self.d)
                 self.assertTrue(ret, msg="URI %s, can't check status" % (u))
 
-            connection_cache.close_connections()
+        connection_cache.close_connections()
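Throughout this file the old pattern of defining network tests only when BB_SKIP_NETTESTS is unset is replaced by the @skipIfNoNetwork() decorator. Its definition sits earlier in the file and is not shown in these hunks; a minimal sketch of the behaviour it presumably wraps, based on the conditionals it replaces:

import os
import unittest

def skipIfNoNetwork():
    # Skip the decorated test when network access has been opted out of,
    # mirroring the old "if os.environ.get('BB_SKIP_NETTESTS') == 'yes'" guards.
    if os.environ.get("BB_SKIP_NETTESTS") == "yes":
        return unittest.skip("Network tests disabled via BB_SKIP_NETTESTS")
    return lambda f: f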
+
+
+class GitMakeShallowTest(FetcherTest):
+    bitbake_dir = os.path.join(os.path.dirname(__file__), '..', '..', '..')
+    make_shallow_path = os.path.join(bitbake_dir, 'bin', 'git-make-shallow')
+
+    def setUp(self):
+        FetcherTest.setUp(self)
+        self.gitdir = os.path.join(self.tempdir, 'gitshallow')
+        bb.utils.mkdirhier(self.gitdir)
+        bb.process.run('git init', cwd=self.gitdir)
+
+    def assertRefs(self, expected_refs):
+        actual_refs = self.git(['for-each-ref', '--format=%(refname)']).splitlines()
+        full_expected = self.git(['rev-parse', '--symbolic-full-name'] + expected_refs).splitlines()
+        self.assertEqual(sorted(full_expected), sorted(actual_refs))
+
+    def assertRevCount(self, expected_count, args=None):
+        if args is None:
+            args = ['HEAD']
+        revs = self.git(['rev-list'] + args)
+        actual_count = len(revs.splitlines())
+        self.assertEqual(expected_count, actual_count, msg='Object count `%d` is not the expected `%d`' % (actual_count, expected_count))
+
+    def git(self, cmd):
+        if isinstance(cmd, str):
+            cmd = 'git ' + cmd
+        else:
+            cmd = ['git'] + cmd
+        return bb.process.run(cmd, cwd=self.gitdir)[0]
+
+    def make_shallow(self, args=None):
+        if args is None:
+            args = ['HEAD']
+        return bb.process.run([self.make_shallow_path] + args, cwd=self.gitdir)
+
+    def add_empty_file(self, path, msg=None):
+        if msg is None:
+            msg = path
+        open(os.path.join(self.gitdir, path), 'w').close()
+        self.git(['add', path])
+        self.git(['commit', '-m', msg, path])
+
+    def test_make_shallow_single_branch_no_merge(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.assertRevCount(2)
+        self.make_shallow()
+        self.assertRevCount(1)
+
+    def test_make_shallow_single_branch_one_merge(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.git('checkout -b a_branch')
+        self.add_empty_file('c')
+        self.git('checkout master')
+        self.add_empty_file('d')
+        self.git('merge --no-ff --no-edit a_branch')
+        self.git('branch -d a_branch')
+        self.add_empty_file('e')
+        self.assertRevCount(6)
+        self.make_shallow(['HEAD~2'])
+        self.assertRevCount(5)
+
+    def test_make_shallow_at_merge(self):
+        self.add_empty_file('a')
+        self.git('checkout -b a_branch')
+        self.add_empty_file('b')
+        self.git('checkout master')
+        self.git('merge --no-ff --no-edit a_branch')
+        self.git('branch -d a_branch')
+        self.assertRevCount(3)
+        self.make_shallow()
+        self.assertRevCount(1)
+
+    def test_make_shallow_annotated_tag(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.git('tag -a -m a_tag a_tag')
+        self.assertRevCount(2)
+        self.make_shallow(['a_tag'])
+        self.assertRevCount(1)
+
+    def test_make_shallow_multi_ref(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.git('checkout -b a_branch')
+        self.add_empty_file('c')
+        self.git('checkout master')
+        self.add_empty_file('d')
+        self.git('checkout -b a_branch_2')
+        self.add_empty_file('a_tag')
+        self.git('tag a_tag')
+        self.git('checkout master')
+        self.git('branch -D a_branch_2')
+        self.add_empty_file('e')
+        self.assertRevCount(6, ['--all'])
+        self.make_shallow()
+        self.assertRevCount(5, ['--all'])
+
+    def test_make_shallow_multi_ref_trim(self):
+        self.add_empty_file('a')
+        self.git('checkout -b a_branch')
+        self.add_empty_file('c')
+        self.git('checkout master')
+        self.assertRevCount(1)
+        self.assertRevCount(2, ['--all'])
+        self.assertRefs(['master', 'a_branch'])
+        self.make_shallow(['-r', 'master', 'HEAD'])
+        self.assertRevCount(1, ['--all'])
+        self.assertRefs(['master'])
+
+    def test_make_shallow_noop(self):
+        self.add_empty_file('a')
+        self.assertRevCount(1)
+        self.make_shallow()
+        self.assertRevCount(1)
+
+    @skipIfNoNetwork()
+    def test_make_shallow_bitbake(self):
+        self.git('remote add origin https://github.com/openembedded/bitbake')
+        self.git('fetch --tags origin')
+        orig_revs = len(self.git('rev-list --all').splitlines())
+        self.make_shallow(['refs/tags/1.10.0'])
+        self.assertRevCount(orig_revs - 1746, ['--all'])
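GitMakeShallowTest exercises the new bin/git-make-shallow helper by running it inside a scratch repository. A hedged sketch of invoking it the same way from Python (the script is assumed to live under the bitbake bin/ directory, as the test's make_shallow_path computes):

import os
import bb.process

def run_git_make_shallow(bitbake_dir, gitdir, args=None):
    # As in GitMakeShallowTest.make_shallow(): args defaults to ['HEAD'];
    # '-r <ref>' keeps only the named refs, and a revision/tag argument
    # (e.g. 'refs/tags/1.10.0') sets the new shallow boundary.
    if args is None:
        args = ['HEAD']
    script = os.path.join(bitbake_dir, 'bin', 'git-make-shallow')
    return bb.process.run([script] + args, cwd=gitdir)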
+
+class GitShallowTest(FetcherTest):
+    def setUp(self):
+        FetcherTest.setUp(self)
+        self.gitdir = os.path.join(self.tempdir, 'git')
+        self.srcdir = os.path.join(self.tempdir, 'gitsource')
+
+        bb.utils.mkdirhier(self.srcdir)
+        self.git('init', cwd=self.srcdir)
+        self.d.setVar('WORKDIR', self.tempdir)
+        self.d.setVar('S', self.gitdir)
+        self.d.delVar('PREMIRRORS')
+        self.d.delVar('MIRRORS')
+
+        uri = 'git://%s;protocol=file;subdir=${S}' % self.srcdir
+        self.d.setVar('SRC_URI', uri)
+        self.d.setVar('SRCREV', '${AUTOREV}')
+        self.d.setVar('AUTOREV', '${@bb.fetch2.get_autorev(d)}')
+
+        self.d.setVar('BB_GIT_SHALLOW', '1')
+        self.d.setVar('BB_GENERATE_MIRROR_TARBALLS', '0')
+        self.d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1')
+
+    def assertRefs(self, expected_refs, cwd=None):
+        if cwd is None:
+            cwd = self.gitdir
+        actual_refs = self.git(['for-each-ref', '--format=%(refname)'], cwd=cwd).splitlines()
+        full_expected = self.git(['rev-parse', '--symbolic-full-name'] + expected_refs, cwd=cwd).splitlines()
+        self.assertEqual(sorted(set(full_expected)), sorted(set(actual_refs)))
+
+    def assertRevCount(self, expected_count, args=None, cwd=None):
+        if args is None:
+            args = ['HEAD']
+        if cwd is None:
+            cwd = self.gitdir
+        revs = self.git(['rev-list'] + args, cwd=cwd)
+        actual_count = len(revs.splitlines())
+        self.assertEqual(expected_count, actual_count, msg='Object count `%d` is not the expected `%d`' % (actual_count, expected_count))
+
+    def git(self, cmd, cwd=None):
+        if isinstance(cmd, str):
+            cmd = 'git ' + cmd
+        else:
+            cmd = ['git'] + cmd
+        if cwd is None:
+            cwd = self.gitdir
+        return bb.process.run(cmd, cwd=cwd)[0]
+
+    def add_empty_file(self, path, cwd=None, msg=None):
+        if msg is None:
+            msg = path
+        if cwd is None:
+            cwd = self.srcdir
+        open(os.path.join(cwd, path), 'w').close()
+        self.git(['add', path], cwd)
+        self.git(['commit', '-m', msg, path], cwd)
+
+    def fetch(self, uri=None):
+        if uri is None:
+            uris = self.d.getVar('SRC_URI', True).split()
+            uri = uris[0]
+            d = self.d
+        else:
+            d = self.d.createCopy()
+            d.setVar('SRC_URI', uri)
+            uri = d.expand(uri)
+            uris = [uri]
+
+        fetcher = bb.fetch2.Fetch(uris, d)
+        fetcher.download()
+        ud = fetcher.ud[uri]
+        return fetcher, ud
+
+    def fetch_and_unpack(self, uri=None):
+        fetcher, ud = self.fetch(uri)
+        fetcher.unpack(self.d.getVar('WORKDIR'))
+        assert os.path.exists(self.d.getVar('S'))
+        return fetcher, ud
+
+    def fetch_shallow(self, uri=None, disabled=False, keepclone=False):
+        """Fetch a uri, generating a shallow tarball, then unpack using it"""
+        fetcher, ud = self.fetch_and_unpack(uri)
+        assert os.path.exists(ud.clonedir), 'Git clone in DLDIR (%s) does not exist for uri %s' % (ud.clonedir, uri)
+
+        # Confirm that the shallow mirror tarball was created, unless shallow support is disabled
+        if not disabled:
+            assert os.path.exists(os.path.join(self.dldir, ud.mirrortarballs[0]))
+
+        # fetch and unpack, from the shallow tarball
+        bb.utils.remove(self.gitdir, recurse=True)
+        bb.utils.remove(ud.clonedir, recurse=True)
+
+        # confirm that the shallow mirror tarball is used when no git clone
+        # or full git mirror tarball is available, and that the resulting
+        # unpacked repository is shallow (unless shallow support is disabled)
+        fetcher, ud = self.fetch_and_unpack(uri)
+        if not disabled:
+            assert os.path.exists(os.path.join(self.gitdir, '.git', 'shallow')), 'Unpacked git repository at %s is not shallow' % self.gitdir
+        else:
+            assert not os.path.exists(os.path.join(self.gitdir, '.git', 'shallow')), 'Unpacked git repository at %s is shallow' % self.gitdir
+        return fetcher, ud
+
+    def test_shallow_disabled(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.assertRevCount(2, cwd=self.srcdir)
+
+        self.d.setVar('BB_GIT_SHALLOW', '0')
+        self.fetch_shallow(disabled=True)
+        self.assertRevCount(2)
+
+    def test_shallow_nobranch(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.assertRevCount(2, cwd=self.srcdir)
+
+        srcrev = self.git('rev-parse HEAD', cwd=self.srcdir).strip()
+        self.d.setVar('SRCREV', srcrev)
+        uri = self.d.getVar('SRC_URI', True).split()[0]
+        uri = '%s;nobranch=1;bare=1' % uri
+
+        self.fetch_shallow(uri)
+        self.assertRevCount(1)
+
+        # shallow refs are used to ensure the srcrev sticks around when we
+        # have no other branches referencing it
+        self.assertRefs(['refs/shallow/default'])
+
+    def test_shallow_default_depth_1(self):
+        # Create initial git repo
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.assertRevCount(2, cwd=self.srcdir)
+
+        self.fetch_shallow()
+        self.assertRevCount(1)
+
+    def test_shallow_depth_0_disables(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.assertRevCount(2, cwd=self.srcdir)
+
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+        self.fetch_shallow(disabled=True)
+        self.assertRevCount(2)
+
+    def test_shallow_depth_default_override(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.assertRevCount(2, cwd=self.srcdir)
+
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH', '2')
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH_default', '1')
+        self.fetch_shallow()
+        self.assertRevCount(1)
+
+    def test_shallow_depth_default_override_disable(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.add_empty_file('c')
+        self.assertRevCount(3, cwd=self.srcdir)
+
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH_default', '2')
+        self.fetch_shallow()
+        self.assertRevCount(2)
+
+    def test_current_shallow_out_of_date_clone(self):
+        # Create initial git repo
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.add_empty_file('c')
+        self.assertRevCount(3, cwd=self.srcdir)
+
+        # Clone and generate mirror tarball
+        fetcher, ud = self.fetch()
+
+        # Ensure we have a current mirror tarball, but an out of date clone
+        self.git('update-ref refs/heads/master refs/heads/master~1', cwd=ud.clonedir)
+        self.assertRevCount(2, cwd=ud.clonedir)
+
+        # Fetch and unpack, from the current tarball, not the out of date clone
+        bb.utils.remove(self.gitdir, recurse=True)
+        fetcher, ud = self.fetch()
+        fetcher.unpack(self.d.getVar('WORKDIR'))
+        self.assertRevCount(1)
+
+    def test_shallow_single_branch_no_merge(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.assertRevCount(2, cwd=self.srcdir)
+
+        self.fetch_shallow()
+        self.assertRevCount(1)
+        assert os.path.exists(os.path.join(self.gitdir, 'a'))
+        assert os.path.exists(os.path.join(self.gitdir, 'b'))
+
+    def test_shallow_no_dangling(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.assertRevCount(2, cwd=self.srcdir)
+
+        self.fetch_shallow()
+        self.assertRevCount(1)
+        assert not self.git('fsck --dangling')
+
+    def test_shallow_srcrev_branch_truncation(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        b_commit = self.git('rev-parse HEAD', cwd=self.srcdir).rstrip()
+        self.add_empty_file('c')
+        self.assertRevCount(3, cwd=self.srcdir)
+
+        self.d.setVar('SRCREV', b_commit)
+        self.fetch_shallow()
+
+        # The 'c' commit was removed entirely, and 'a' was removed from history
+        self.assertRevCount(1, ['--all'])
+        self.assertEqual(self.git('rev-parse HEAD').strip(), b_commit)
+        assert os.path.exists(os.path.join(self.gitdir, 'a'))
+        assert os.path.exists(os.path.join(self.gitdir, 'b'))
+        assert not os.path.exists(os.path.join(self.gitdir, 'c'))
+
+    def test_shallow_ref_pruning(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.git('branch a_branch', cwd=self.srcdir)
+        self.assertRefs(['master', 'a_branch'], cwd=self.srcdir)
+        self.assertRevCount(2, cwd=self.srcdir)
+
+        self.fetch_shallow()
+
+        self.assertRefs(['master', 'origin/master'])
+        self.assertRevCount(1)
+
+    def test_shallow_submodules(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+
+        smdir = os.path.join(self.tempdir, 'gitsubmodule')
+        bb.utils.mkdirhier(smdir)
+        self.git('init', cwd=smdir)
+        self.add_empty_file('asub', cwd=smdir)
+
+        self.git('submodule init', cwd=self.srcdir)
+        self.git('submodule add file://%s' % smdir, cwd=self.srcdir)
+        self.git('submodule update', cwd=self.srcdir)
+        self.git('commit -m submodule -a', cwd=self.srcdir)
+
+        uri = 'gitsm://%s;protocol=file;subdir=${S}' % self.srcdir
+        fetcher, ud = self.fetch_shallow(uri)
+
+        self.assertRevCount(1)
+        assert './.git/modules/' in bb.process.run('tar -tzf %s' % os.path.join(self.dldir, ud.mirrortarballs[0]))[0]
+        assert os.listdir(os.path.join(self.gitdir, 'gitsubmodule'))
+
+    if any(os.path.exists(os.path.join(p, 'git-annex')) for p in os.environ.get('PATH').split(':')):
+        def test_shallow_annex(self):
+            self.add_empty_file('a')
+            self.add_empty_file('b')
+            self.git('annex init', cwd=self.srcdir)
+            open(os.path.join(self.srcdir, 'c'), 'w').close()
+            self.git('annex add c', cwd=self.srcdir)
+            self.git('commit -m annex-c -a', cwd=self.srcdir)
+            bb.process.run('chmod u+w -R %s' % os.path.join(self.srcdir, '.git', 'annex'))
+
+            uri = 'gitannex://%s;protocol=file;subdir=${S}' % self.srcdir
+            fetcher, ud = self.fetch_shallow(uri)
+
+            self.assertRevCount(1)
+            assert './.git/annex/' in bb.process.run('tar -tzf %s' % os.path.join(self.dldir, ud.mirrortarballs[0]))[0]
+            assert os.path.exists(os.path.join(self.gitdir, 'c'))
+
+    def test_shallow_multi_one_uri(self):
+        # Create initial git repo
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.git('checkout -b a_branch', cwd=self.srcdir)
+        self.add_empty_file('c')
+        self.add_empty_file('d')
+        self.git('checkout master', cwd=self.srcdir)
+        self.git('tag v0.0 a_branch', cwd=self.srcdir)
+        self.add_empty_file('e')
+        self.git('merge --no-ff --no-edit a_branch', cwd=self.srcdir)
+        self.add_empty_file('f')
+        self.assertRevCount(7, cwd=self.srcdir)
+
+        uri = self.d.getVar('SRC_URI', True).split()[0]
+        uri = '%s;branch=master,a_branch;name=master,a_branch' % uri
+
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+        self.d.setVar('BB_GIT_SHALLOW_REVS', 'v0.0')
+        self.d.setVar('SRCREV_master', '${AUTOREV}')
+        self.d.setVar('SRCREV_a_branch', '${AUTOREV}')
+
+        self.fetch_shallow(uri)
+
+        self.assertRevCount(5)
+        self.assertRefs(['master', 'origin/master', 'origin/a_branch'])
+
+    def test_shallow_multi_one_uri_depths(self):
+        # Create initial git repo
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.git('checkout -b a_branch', cwd=self.srcdir)
+        self.add_empty_file('c')
+        self.add_empty_file('d')
+        self.git('checkout master', cwd=self.srcdir)
+        self.add_empty_file('e')
+        self.git('merge --no-ff --no-edit a_branch', cwd=self.srcdir)
+        self.add_empty_file('f')
+        self.assertRevCount(7, cwd=self.srcdir)
+
+        uri = self.d.getVar('SRC_URI', True).split()[0]
+        uri = '%s;branch=master,a_branch;name=master,a_branch' % uri
+
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH_master', '3')
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH_a_branch', '1')
+        self.d.setVar('SRCREV_master', '${AUTOREV}')
+        self.d.setVar('SRCREV_a_branch', '${AUTOREV}')
+
+        self.fetch_shallow(uri)
+
+        self.assertRevCount(4, ['--all'])
+        self.assertRefs(['master', 'origin/master', 'origin/a_branch'])
+
+    def test_shallow_clone_preferred_over_shallow(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+
+        # Fetch once to generate the shallow tarball
+        fetcher, ud = self.fetch()
+        assert os.path.exists(os.path.join(self.dldir, ud.mirrortarballs[0]))
+
+        # Fetch and unpack with both the clonedir and shallow tarball available
+        bb.utils.remove(self.gitdir, recurse=True)
+        fetcher, ud = self.fetch_and_unpack()
+
+        # The unpacked tree should *not* be shallow
+        self.assertRevCount(2)
+        assert not os.path.exists(os.path.join(self.gitdir, '.git', 'shallow'))
+
+    def test_shallow_mirrors(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+
+        # Fetch once to generate the shallow tarball
+        fetcher, ud = self.fetch()
+        mirrortarball = ud.mirrortarballs[0]
+        assert os.path.exists(os.path.join(self.dldir, mirrortarball))
+
+        # Set up the mirror
+        mirrordir = os.path.join(self.tempdir, 'mirror')
+        bb.utils.mkdirhier(mirrordir)
+        self.d.setVar('PREMIRRORS', 'git://.*/.* file://%s/\n' % mirrordir)
+
+        os.rename(os.path.join(self.dldir, mirrortarball),
+                  os.path.join(mirrordir, mirrortarball))
+
+        # Fetch from the mirror
+        bb.utils.remove(self.dldir, recurse=True)
+        bb.utils.remove(self.gitdir, recurse=True)
+        self.fetch_and_unpack()
+        self.assertRevCount(1)
+
+    def test_shallow_invalid_depth(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH', '-12')
+        with self.assertRaises(bb.fetch2.FetchError):
+            self.fetch()
+
+    def test_shallow_invalid_depth_default(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH_default', '-12')
+        with self.assertRaises(bb.fetch2.FetchError):
+            self.fetch()
+
+    def test_shallow_extra_refs(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.git('branch a_branch', cwd=self.srcdir)
+        self.assertRefs(['master', 'a_branch'], cwd=self.srcdir)
+        self.assertRevCount(2, cwd=self.srcdir)
+
+        self.d.setVar('BB_GIT_SHALLOW_EXTRA_REFS', 'refs/heads/a_branch')
+        self.fetch_shallow()
+
+        self.assertRefs(['master', 'origin/master', 'origin/a_branch'])
+        self.assertRevCount(1)
+
+    def test_shallow_extra_refs_wildcard(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.git('branch a_branch', cwd=self.srcdir)
+        self.git('tag v1.0', cwd=self.srcdir)
+        self.assertRefs(['master', 'a_branch', 'v1.0'], cwd=self.srcdir)
+        self.assertRevCount(2, cwd=self.srcdir)
+
+        self.d.setVar('BB_GIT_SHALLOW_EXTRA_REFS', 'refs/tags/*')
+        self.fetch_shallow()
+
+        self.assertRefs(['master', 'origin/master', 'v1.0'])
+        self.assertRevCount(1)
+
+    def test_shallow_missing_extra_refs(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+
+        self.d.setVar('BB_GIT_SHALLOW_EXTRA_REFS', 'refs/heads/foo')
+        with self.assertRaises(bb.fetch2.FetchError):
+            self.fetch()
+
+    def test_shallow_missing_extra_refs_wildcard(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+
+        self.d.setVar('BB_GIT_SHALLOW_EXTRA_REFS', 'refs/tags/*')
+        self.fetch()
+
+    def test_shallow_remove_revs(self):
+        # Create initial git repo
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+        self.git('checkout -b a_branch', cwd=self.srcdir)
+        self.add_empty_file('c')
+        self.add_empty_file('d')
+        self.git('checkout master', cwd=self.srcdir)
+        self.git('tag v0.0 a_branch', cwd=self.srcdir)
+        self.add_empty_file('e')
+        self.git('merge --no-ff --no-edit a_branch', cwd=self.srcdir)
+        self.git('branch -d a_branch', cwd=self.srcdir)
+        self.add_empty_file('f')
+        self.assertRevCount(7, cwd=self.srcdir)
+
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+        self.d.setVar('BB_GIT_SHALLOW_REVS', 'v0.0')
+
+        self.fetch_shallow()
+
+        self.assertRevCount(5)
+
+    def test_shallow_invalid_revs(self):
+        self.add_empty_file('a')
+        self.add_empty_file('b')
+
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+        self.d.setVar('BB_GIT_SHALLOW_REVS', 'v0.0')
+
+        with self.assertRaises(bb.fetch2.FetchError):
+            self.fetch()
+
+    @skipIfNoNetwork()
+    def test_bitbake(self):
+        self.git('remote add --mirror=fetch origin git://github.com/openembedded/bitbake', cwd=self.srcdir)
+        self.git('config core.bare true', cwd=self.srcdir)
+        self.git('fetch', cwd=self.srcdir)
+
+        self.d.setVar('BB_GIT_SHALLOW_DEPTH', '0')
+        # Note that the 1.10.0 tag is annotated, so this also tests
+        # reference of an annotated vs unannotated tag
+        self.d.setVar('BB_GIT_SHALLOW_REVS', '1.10.0')
+
+        self.fetch_shallow()
+
+        # Confirm that the history of 1.10.0 was removed
+        orig_revs = len(self.git('rev-list master', cwd=self.srcdir).splitlines())
+        revs = len(self.git('rev-list master').splitlines())
+        self.assertNotEqual(orig_revs, revs)
+        self.assertRefs(['master', 'origin/master'])
+        self.assertRevCount(orig_revs - 1758)
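Taken together, the GitShallowTest cases above exercise a small set of new datastore variables. A condensed sketch of the configuration and fetch sequence they test, assuming `d` is an initialised BitBake datastore and that `uri` and `workdir` are illustrative:

import bb.fetch2

def fetch_git_shallow(d, uri, workdir):
    # Variables taken from GitShallowTest.setUp() and the individual test cases:
    d.setVar('BB_GIT_SHALLOW', '1')                # enable shallow support ('0' disables it)
    d.setVar('BB_GENERATE_SHALLOW_TARBALLS', '1')  # write a shallow mirror tarball into DL_DIR
    d.setVar('BB_GIT_SHALLOW_DEPTH', '1')          # commits of history to keep; '0' removes the limit
    # Also seen above: BB_GIT_SHALLOW_DEPTH_<name> (per-URI override, <name> being the
    # url's name= parameter or "default"), BB_GIT_SHALLOW_REVS (cut history below a
    # rev/tag) and BB_GIT_SHALLOW_EXTRA_REFS (extra refs to keep, wildcards allowed).
    fetcher = bb.fetch2.Fetch([uri], d)
    fetcher.download()        # generates and/or reuses the shallow tarball
    fetcher.unpack(workdir)   # the unpacked tree carries .git/shallow when shallow is in effect
    return fetcher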
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/tests/parse.py b/import-layers/yocto-poky/bitbake/lib/bb/tests/parse.py
index ab6ca90..8f16ba4 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/tests/parse.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/tests/parse.py
@@ -83,7 +83,28 @@
         self.assertEqual(d.getVar("A"), None)
         self.assertEqual(d.getVarFlag("A","flag"), None)
         self.assertEqual(d.getVar("B"), "2")
-        
+
+    exporttest = """
+A = "a"
+export B = "b"
+export C
+exportD = "d"
+"""
+
+    def test_parse_exports(self):
+        f = self.parsehelper(self.exporttest)
+        d = bb.parse.handle(f.name, self.d)['']
+        self.assertEqual(d.getVar("A"), "a")
+        self.assertIsNone(d.getVarFlag("A", "export"))
+        self.assertEqual(d.getVar("B"), "b")
+        self.assertEqual(d.getVarFlag("B", "export"), 1)
+        self.assertIsNone(d.getVar("C"))
+        self.assertEqual(d.getVarFlag("C", "export"), 1)
+        self.assertIsNone(d.getVar("D"))
+        self.assertIsNone(d.getVarFlag("D", "export"))
+        self.assertEqual(d.getVar("exportD"), "d")
+        self.assertIsNone(d.getVarFlag("exportD", "export"))
+
 
     overridetest = """
 RRECOMMENDS_${PN} = "a"
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/tinfoil.py b/import-layers/yocto-poky/bitbake/lib/bb/tinfoil.py
index 928333a..fa95f63 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/tinfoil.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/tinfoil.py
@@ -2,6 +2,7 @@
 #
 # Copyright (C) 2012-2017 Intel Corporation
 # Copyright (C) 2011 Mentor Graphics Corporation
+# Copyright (C) 2006-2012 Richard Purdie
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 2 as
@@ -54,6 +55,7 @@
     """Exception raised when run_command fails"""
 
 class TinfoilDataStoreConnector:
+    """Connector object used to enable access to datastore objects via tinfoil"""
 
     def __init__(self, tinfoil, dsindex):
         self.tinfoil = tinfoil
@@ -172,6 +174,14 @@
                 attrvalue = self.tinfoil.run_command('getBbFilePriority') or {}
             elif name == 'pkg_dp':
                 attrvalue = self.tinfoil.run_command('getDefaultPreference') or {}
+            elif name == 'fn_provides':
+                attrvalue = self.tinfoil.run_command('getRecipeProvides') or {}
+            elif name == 'packages':
+                attrvalue = self.tinfoil.run_command('getRecipePackages') or {}
+            elif name == 'packages_dynamic':
+                attrvalue = self.tinfoil.run_command('getRecipePackagesDynamic') or {}
+            elif name == 'rproviders':
+                attrvalue = self.tinfoil.run_command('getRProviders') or {}
             else:
                 raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))
 
@@ -208,19 +218,119 @@
         return self.tinfoil.find_best_provider(pn)
 
 
+class TinfoilRecipeInfo:
+    """
+    Provides a convenient representation of the cached information for a single recipe.
+    Some attributes are set on construction, others are read on-demand (which internally
+    may result in a remote procedure call to the bitbake server the first time).
+    Note that only information which is cached is available through this object - if
+    you need other variable values you will need to parse the recipe using
+    Tinfoil.parse_recipe().
+    """
+    def __init__(self, recipecache, d, pn, fn, fns):
+        self._recipecache = recipecache
+        self._d = d
+        self.pn = pn
+        self.fn = fn
+        self.fns = fns
+        self.inherit_files = recipecache.inherits[fn]
+        self.depends = recipecache.deps[fn]
+        (self.pe, self.pv, self.pr) = recipecache.pkg_pepvpr[fn]
+        self._cached_packages = None
+        self._cached_rprovides = None
+        self._cached_packages_dynamic = None
+
+    def __getattr__(self, name):
+        if name == 'alternates':
+            return [x for x in self.fns if x != self.fn]
+        elif name == 'rdepends':
+            return self._recipecache.rundeps[self.fn]
+        elif name == 'rrecommends':
+            return self._recipecache.runrecs[self.fn]
+        elif name == 'provides':
+            return self._recipecache.fn_provides[self.fn]
+        elif name == 'packages':
+            if self._cached_packages is None:
+                self._cached_packages = []
+                for pkg, fns in self._recipecache.packages.items():
+                    if self.fn in fns:
+                        self._cached_packages.append(pkg)
+            return self._cached_packages
+        elif name == 'packages_dynamic':
+            if self._cached_packages_dynamic is None:
+                self._cached_packages_dynamic = []
+                for pkg, fns in self._recipecache.packages_dynamic.items():
+                    if self.fn in fns:
+                        self._cached_packages_dynamic.append(pkg)
+            return self._cached_packages_dynamic
+        elif name == 'rprovides':
+            if self._cached_rprovides is None:
+                self._cached_rprovides = []
+                for pkg, fns in self._recipecache.rproviders.items():
+                    if self.fn in fns:
+                        self._cached_rprovides.append(pkg)
+            return self._cached_rprovides
+        else:
+            raise AttributeError("%s instance has no attribute '%s'" % (self.__class__.__name__, name))
+    def inherits(self, only_recipe=False):
+        """
+        Get the inherited classes for a recipe. Returns the class names only.
+        Parameters:
+            only_recipe: True to return only the classes inherited by the recipe
+                         itself, False to return all classes inherited within
+                         the context for the recipe (which includes globally
+                         inherited classes).
+        """
+        if only_recipe:
+            global_inherit = [x for x in (self._d.getVar('BBINCLUDED') or '').split() if x.endswith('.bbclass')]
+        else:
+            global_inherit = []
+        for clsfile in self.inherit_files:
+            if only_recipe and clsfile in global_inherit:
+                continue
+            clsname = os.path.splitext(os.path.basename(clsfile))[0]
+            yield clsname
+    def __str__(self):
+        return '%s' % self.pn
+
+
 class Tinfoil:
+    """
+    Tinfoil - an API for scripts and utilities to query
+    BitBake internals and perform build operations.
+    """
 
     def __init__(self, output=sys.stdout, tracking=False, setup_logging=True):
+        """
+        Create a new tinfoil object.
+        Parameters:
+            output: specifies where console output should be sent. Defaults
+                    to sys.stdout.
+            tracking: True to enable variable history tracking, False to
+                    disable it (default). Enabling this has a minor
+                    performance impact so typically it isn't enabled
+                    unless you need to query variable history.
+            setup_logging: True to set up a logger so that things like
+                    bb.warn() will work immediately and timeout warnings
+                    are visible; False to let BitBake do this itself.
+        """
         self.logger = logging.getLogger('BitBake')
         self.config_data = None
         self.cooker = None
         self.tracking = tracking
         self.ui_module = None
         self.server_connection = None
+        self.recipes_parsed = False
+        self.quiet = 0
+        self.oldhandlers = self.logger.handlers[:]
         if setup_logging:
             # This is the *client-side* logger, nothing to do with
             # logging messages from the server
             bb.msg.logger_create('BitBake', output)
+            self.localhandlers = []
+            for handler in self.logger.handlers:
+                if handler not in self.oldhandlers:
+                    self.localhandlers.append(handler)
 
     def __enter__(self):
         return self
@@ -228,19 +338,61 @@
     def __exit__(self, type, value, traceback):
         self.shutdown()
 
-    def prepare(self, config_only=False, config_params=None, quiet=0):
+    def prepare(self, config_only=False, config_params=None, quiet=0, extra_features=None):
+        """
+        Prepares the underlying BitBake system to be used via tinfoil.
+        This function must be called prior to calling any of the other
+        functions in the API.
+        NOTE: if you call prepare() you must absolutely call shutdown()
+        before your code terminates. You can use a "with" block to ensure
+        this happens e.g.
+
+            with bb.tinfoil.Tinfoil() as tinfoil:
+                tinfoil.prepare()
+                ...
+
+        Parameters:
+            config_only: True to read only the configuration and not load
+                        the cache / parse recipes. This is useful if you just
+                        want to query the value of a variable at the global
+                        level or you want to do anything else that doesn't
+                        involve knowing anything about the recipes in the
+                        current configuration. False loads the cache / parses
+                        recipes.
+            config_params: optionally specify your own configuration
+                        parameters. If not specified an instance of
+                        TinfoilConfigParameters will be created internally.
+            quiet:      quiet level controlling console output - equivalent
+                        to bitbake's -q/--quiet option. Default of 0 gives
+                        the same output level as normal bitbake execution.
+            extra_features: extra features to be added to the feature
+                        set requested from the server. See
+                        CookerFeatures._feature_list for possible
+                        features.
+        """
+        self.quiet = quiet
+
         if self.tracking:
             extrafeatures = [bb.cooker.CookerFeatures.BASEDATASTORE_TRACKING]
         else:
             extrafeatures = []
 
+        if extra_features:
+            extrafeatures += extra_features
+
         if not config_params:
             config_params = TinfoilConfigParameters(config_only=config_only, quiet=quiet)
 
         cookerconfig = CookerConfiguration()
         cookerconfig.setConfigParameters(config_params)
 
-        server, self.server_connection, ui_module = setup_bitbake(config_params,
+        if not config_only:
+            # Disable local loggers because the UI module is going to set up its own
+            for handler in self.localhandlers:
+                self.logger.handlers.remove(handler)
+            self.localhandlers = []
+
+        self.server_connection, ui_module = setup_bitbake(config_params,
                             cookerconfig,
                             extrafeatures)
 
@@ -266,6 +418,7 @@
                 self.run_command('parseConfiguration')
             else:
                 self.run_actions(config_params)
+                self.recipes_parsed = True
 
             self.config_data = bb.data.init()
             connector = TinfoilDataStoreConnector(self, None)
@@ -285,7 +438,13 @@
 
     def parseRecipes(self):
         """
-        Force a parse of all recipes. Normally you should specify
+        Legacy function - use parse_recipes() instead.
+        """
+        self.parse_recipes()
+
+    def parse_recipes(self):
+        """
+        Load information on all recipes. Normally you should specify
         config_only=False when calling prepare() instead of using this
         function; this function is designed for situations where you need
         to initialise Tinfoil and use it with config_only=True first and
@@ -293,6 +452,7 @@
         """
         config_params = TinfoilConfigParameters(config_only=False)
         self.run_actions(config_params)
+        self.recipes_parsed = True
 
     def run_command(self, command, *params):
         """
@@ -339,9 +499,16 @@
         return self.server_connection.events.waitEvent(timeout)
 
     def get_overlayed_recipes(self):
+        """
+        Find recipes which are overlayed (i.e. where recipes exist in multiple layers)
+        """
         return defaultdict(list, self.run_command('getOverlayedRecipes'))
 
     def get_skipped_recipes(self):
+        """
+        Find recipes which were skipped (i.e. SkipRecipe was raised
+        during parsing).
+        """
         return OrderedDict(self.run_command('getSkippedRecipes'))
 
     def get_all_providers(self):
@@ -374,8 +541,77 @@
         return best[3]
 
     def get_file_appends(self, fn):
+        """
+        Find the bbappends for a recipe file
+        """
         return self.run_command('getFileAppends', fn)
 
+    def all_recipes(self, mc='', sort=True):
+        """
+        Enable iterating over all recipes in the current configuration.
+        Returns an iterator over TinfoilRecipeInfo objects created on demand.
+        Parameters:
+            mc: The multiconfig, default of '' uses the main configuration.
+            sort: True to sort recipes alphabetically (default), False otherwise
+        """
+        recipecache = self.cooker.recipecaches[mc]
+        if sort:
+            recipes = sorted(recipecache.pkg_pn.items())
+        else:
+            recipes = recipecache.pkg_pn.items()
+        for pn, fns in recipes:
+            prov = self.find_best_provider(pn)
+            recipe = TinfoilRecipeInfo(recipecache,
+                                       self.config_data,
+                                       pn=pn,
+                                       fn=prov[3],
+                                       fns=fns)
+            yield recipe
+
+    def all_recipe_files(self, mc='', variants=True, preferred_only=False):
+        """
+        Enable iterating over all recipe files in the current configuration.
+        Returns an iterator over file paths.
+        Parameters:
+            mc: The multiconfig, default of '' uses the main configuration.
+            variants: True to include variants of recipes created through
+                      BBCLASSEXTEND (default) or False to exclude them
+            preferred_only: True to include only the preferred recipe where
+                      multiple exist providing the same PN, False to list
+                      all recipes
+        """
+        recipecache = self.cooker.recipecaches[mc]
+        if preferred_only:
+            files = []
+            for pn in recipecache.pkg_pn.keys():
+                prov = self.find_best_provider(pn)
+                files.append(prov[3])
+        else:
+            files = recipecache.pkg_fn.keys()
+        for fn in sorted(files):
+            if not variants and fn.startswith('virtual:'):
+                continue
+            yield fn
+
+
+    def get_recipe_info(self, pn, mc=''):
+        """
+        Get information on a specific recipe in the current configuration by name (PN).
+        Returns a TinfoilRecipeInfo object created on demand.
+        Parameters:
+            mc: The multiconfig, default of '' uses the main configuration.
+        """
+        recipecache = self.cooker.recipecaches[mc]
+        prov = self.find_best_provider(pn)
+        fn = prov[3]
+        actual_pn = recipecache.pkg_fn[fn]
+        recipe = TinfoilRecipeInfo(recipecache,
+                                    self.config_data,
+                                    pn=actual_pn,
+                                    fn=fn,
+                                    fns=recipecache.pkg_pn[actual_pn])
+        return recipe
+
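A short usage sketch for the recipe-query additions above (the recipe name is illustrative; prepare() must have parsed recipes, i.e. config_only=False):

import bb.tinfoil

with bb.tinfoil.Tinfoil() as tinfoil:
    tinfoil.prepare(config_only=False)
    # Iterate over every recipe in the current configuration...
    for recipe in tinfoil.all_recipes():
        print('%s %s' % (recipe.pn, recipe.pv))
    # ...or look one up by name and inspect its cached information
    info = tinfoil.get_recipe_info('busybox')   # 'busybox' is illustrative only
    print(info.fn)
    print(list(info.inherits(only_recipe=True)))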
     def parse_recipe(self, pn):
         """
         Parse the specified recipe and return a datastore object
@@ -399,26 +635,199 @@
                          specify config_data then you cannot use a virtual
                          specification for fn.
         """
-        if appends and appendlist == []:
-            appends = False
-        if config_data:
-            dctr = bb.remotedata.RemoteDatastores.transmit_datastore(config_data)
-            dscon = self.run_command('parseRecipeFile', fn, appends, appendlist, dctr)
-        else:
-            dscon = self.run_command('parseRecipeFile', fn, appends, appendlist)
-        if dscon:
-            return self._reconvert_type(dscon, 'DataStoreConnectionHandle')
-        else:
-            return None
+        if self.tracking:
+            # Enable history tracking just for the parse operation
+            self.run_command('enableDataTracking')
+        try:
+            if appends and appendlist == []:
+                appends = False
+            if config_data:
+                dctr = bb.remotedata.RemoteDatastores.transmit_datastore(config_data)
+                dscon = self.run_command('parseRecipeFile', fn, appends, appendlist, dctr)
+            else:
+                dscon = self.run_command('parseRecipeFile', fn, appends, appendlist)
+            if dscon:
+                return self._reconvert_type(dscon, 'DataStoreConnectionHandle')
+            else:
+                return None
+        finally:
+            if self.tracking:
+                self.run_command('disableDataTracking')
 
-    def build_file(self, buildfile, task):
+    def build_file(self, buildfile, task, internal=True):
         """
         Runs the specified task for just a single recipe (i.e. no dependencies).
-        This is equivalent to bitbake -b, except no warning will be printed.
+        This is equivalent to bitbake -b, except that with the default
+        internal=True no warning about dependencies will be produced, normal
+        info messages from the runqueue will be silenced, and BuildInit,
+        BuildStarted and BuildCompleted events will not be fired.
         """
-        return self.run_command('buildFile', buildfile, task, True)
+        return self.run_command('buildFile', buildfile, task, internal)
+
+    def build_targets(self, targets, task=None, handle_events=True, extra_events=None, event_callback=None):
+        """
+        Builds the specified targets. This is equivalent to a normal invocation
+        of bitbake. Has built-in event handling which is enabled by default and
+        can be extended if needed.
+        Parameters:
+            targets:
+                One or more targets to build. Can be a list or a
+                space-separated string.
+            task:
+                The task to run; if None then the value of BB_DEFAULT_TASK
+                will be used. Default None.
+            handle_events:
+                True to handle events in a similar way to normal bitbake
+                invocation with knotty; False to return immediately (on the
+                assumption that the caller will handle the events instead).
+                Default True.
+            extra_events:
+                An optional list of events to add to the event mask (if
+                handle_events=True). If you add events here you also need
+                to specify a callback function in event_callback that will
+                handle the additional events. Default None.
+            event_callback:
+                An optional function taking a single parameter which
+                will be called first upon receiving any event (if
+                handle_events=True) so that the caller can override or
+                extend the event handling. Default None.
+        """
+        if isinstance(targets, str):
+            targets = targets.split()
+        if not task:
+            task = self.config_data.getVar('BB_DEFAULT_TASK')
+
+        if handle_events:
+            # A reasonable set of default events matching up with those we handle below
+            eventmask = [
+                        'bb.event.BuildStarted',
+                        'bb.event.BuildCompleted',
+                        'logging.LogRecord',
+                        'bb.event.NoProvider',
+                        'bb.command.CommandCompleted',
+                        'bb.command.CommandFailed',
+                        'bb.build.TaskStarted',
+                        'bb.build.TaskFailed',
+                        'bb.build.TaskSucceeded',
+                        'bb.build.TaskFailedSilent',
+                        'bb.build.TaskProgress',
+                        'bb.runqueue.runQueueTaskStarted',
+                        'bb.runqueue.sceneQueueTaskStarted',
+                        'bb.event.ProcessStarted',
+                        'bb.event.ProcessProgress',
+                        'bb.event.ProcessFinished',
+                        ]
+            if extra_events:
+                eventmask.extend(extra_events)
+            ret = self.set_event_mask(eventmask)
+
+        includelogs = self.config_data.getVar('BBINCLUDELOGS')
+        loglines = self.config_data.getVar('BBINCLUDELOGS_LINES')
+
+        ret = self.run_command('buildTargets', targets, task)
+        if handle_events:
+            result = False
+            # Borrowed from knotty; somewhat hackily, we use the helper
+            # as the object to store "shutdown" on
+            helper = bb.ui.uihelper.BBUIHelper()
+            # We set up logging optionally in the constructor so now we need to
+            # grab the handlers to pass to TerminalFilter
+            console = None
+            errconsole = None
+            for handler in self.logger.handlers:
+                if isinstance(handler, logging.StreamHandler):
+                    if handler.stream == sys.stdout:
+                        console = handler
+                    elif handler.stream == sys.stderr:
+                        errconsole = handler
+            format_str = "%(levelname)s: %(message)s"
+            format = bb.msg.BBLogFormatter(format_str)
+            helper.shutdown = 0
+            parseprogress = None
+            termfilter = bb.ui.knotty.TerminalFilter(helper, helper, console, errconsole, format, quiet=self.quiet)
+            try:
+                while True:
+                    try:
+                        event = self.wait_event(0.25)
+                        if event:
+                            if event_callback and event_callback(event):
+                                continue
+                            if helper.eventHandler(event):
+                                if isinstance(event, bb.build.TaskFailedSilent):
+                                    self.logger.warning("Logfile for failed setscene task is %s" % event.logfile)
+                                elif isinstance(event, bb.build.TaskFailed):
+                                    bb.ui.knotty.print_event_log(event, includelogs, loglines, termfilter)
+                                continue
+                            if isinstance(event, bb.event.ProcessStarted):
+                                if self.quiet > 1:
+                                    continue
+                                parseprogress = bb.ui.knotty.new_progress(event.processname, event.total)
+                                parseprogress.start(False)
+                                continue
+                            if isinstance(event, bb.event.ProcessProgress):
+                                if self.quiet > 1:
+                                    continue
+                                if parseprogress:
+                                    parseprogress.update(event.progress)
+                                else:
+                                    bb.warn("Got ProcessProgress event for something that never started?")
+                                continue
+                            if isinstance(event, bb.event.ProcessFinished):
+                                if self.quiet > 1:
+                                    continue
+                                if parseprogress:
+                                    parseprogress.finish()
+                                parseprogress = None
+                                continue
+                            if isinstance(event, bb.command.CommandCompleted):
+                                result = True
+                                break
+                            if isinstance(event, bb.command.CommandFailed):
+                                self.logger.error(str(event))
+                                result = False
+                                break
+                            if isinstance(event, logging.LogRecord):
+                                if event.taskpid == 0 or event.levelno > logging.INFO:
+                                    self.logger.handle(event)
+                                continue
+                            if isinstance(event, bb.event.NoProvider):
+                                self.logger.error(str(event))
+                                result = False
+                                break
+
+                        elif helper.shutdown > 1:
+                            break
+                        termfilter.updateFooter()
+                    except KeyboardInterrupt:
+                        termfilter.clearFooter()
+                        if helper.shutdown == 1:
+                            print("\nSecond Keyboard Interrupt, stopping...\n")
+                            ret = self.run_command("stateForceShutdown")
+                            if ret and ret[2]:
+                                self.logger.error("Unable to cleanly stop: %s" % ret[2])
+                        elif helper.shutdown == 0:
+                            print("\nKeyboard Interrupt, closing down...\n")
+                            interrupted = True
+                            ret = self.run_command("stateShutdown")
+                            if ret and ret[2]:
+                                self.logger.error("Unable to cleanly shutdown: %s" % ret[2])
+                        helper.shutdown = helper.shutdown + 1
+                termfilter.clearFooter()
+            finally:
+                termfilter.finish()
+            if helper.failed_tasks:
+                result = False
+            return result
+        else:
+            return ret
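A hedged sketch of driving build_targets() above (the target name is illustrative; with the default handle_events=True the return value is a success boolean):

import bb.tinfoil

with bb.tinfoil.Tinfoil() as tinfoil:
    tinfoil.prepare(config_only=False)
    ok = tinfoil.build_targets('quilt-native')   # target name is illustrative only
    print('build %s' % ('succeeded' if ok else 'failed'))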
 
     def shutdown(self):
+        """
+        Shut down tinfoil. Disconnects from the server and gracefully
+        releases any associated resources. You must call this function if
+        prepare() has been called, or use a with... block when you create
+        the tinfoil object which will ensure that it gets called.
+        """
         if self.server_connection:
             self.run_command('clientComplete')
             _server_connections.remove(self.server_connection)
@@ -426,6 +835,12 @@
             self.server_connection.terminate()
             self.server_connection = None
 
+        # Restore logging handlers to how it looked when we started
+        if self.oldhandlers:
+            for handler in self.logger.handlers:
+                if handler not in self.oldhandlers:
+                    self.logger.handlers.remove(handler)
+
     def _reconvert_type(self, obj, origtypename):
         """
         Convert an object back to the right type, in the case
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/ui/buildinfohelper.py b/import-layers/yocto-poky/bitbake/lib/bb/ui/buildinfohelper.py
index e451c63..524a5b0 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/ui/buildinfohelper.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/ui/buildinfohelper.py
@@ -719,7 +719,11 @@
 
     def save_build_package_information(self, build_obj, package_info, recipes,
                                        built_package):
-       # assert isinstance(build_obj, Build)
+        # assert isinstance(build_obj, Build)
+
+        if 'PN' not in package_info:
+            # no package data to save (e.g. 'OPKGN'="lib64-*"|"lib32-*")
+            return None
 
         # create and save the object
         pname = package_info['PKG']
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/ui/knotty.py b/import-layers/yocto-poky/bitbake/lib/bb/ui/knotty.py
index 82aa7c4..fa88e6c 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/ui/knotty.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/ui/knotty.py
@@ -207,8 +207,10 @@
             self.interactive = False
             bb.note("Unable to use interactive mode for this terminal, using fallback")
             return
-        console.addFilter(InteractConsoleLogFilter(self, format))
-        errconsole.addFilter(InteractConsoleLogFilter(self, format))
+        if console:
+            console.addFilter(InteractConsoleLogFilter(self, format))
+        if errconsole:
+            errconsole.addFilter(InteractConsoleLogFilter(self, format))
 
         self.main_progress = None
 
@@ -310,6 +312,32 @@
             fd = sys.stdin.fileno()
             self.termios.tcsetattr(fd, self.termios.TCSADRAIN, self.stdinbackup)
 
+def print_event_log(event, includelogs, loglines, termfilter):
+    # FIXME refactor this out further
+    logfile = event.logfile
+    if logfile and os.path.exists(logfile):
+        termfilter.clearFooter()
+        bb.error("Logfile of failure stored in: %s" % logfile)
+        if includelogs and not event.errprinted:
+            print("Log data follows:")
+            f = open(logfile, "r")
+            lines = []
+            while True:
+                l = f.readline()
+                if l == '':
+                    break
+                l = l.rstrip()
+                if loglines:
+                    lines.append(' | %s' % l)
+                    if len(lines) > int(loglines):
+                        lines.pop(0)
+                else:
+                    print('| %s' % l)
+            f.close()
+            if lines:
+                for line in lines:
+                    print(line)
+
 def _log_settings_from_server(server, observe_only):
     # Get values of variables which control our output
     includelogs, error = server.runCommand(["getVariable", "BBINCLUDELOGS"])
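
The print_event_log() helper factored out above echoes a failed task's log file and, when a line limit (loglines) is configured, keeps only the last N lines. A standalone sketch of that tail behaviour, where tail_log is a hypothetical helper and not part of this patch:

    from collections import deque

    def tail_log(logfile, loglines):
        # Echo the whole log when no line limit is given, otherwise only the
        # last int(loglines) lines -- equivalent to the append/pop(0) loop above.
        with open(logfile, 'r') as f:
            if not loglines:
                for line in f:
                    print('| %s' % line.rstrip())
                return
            lines = deque((' | %s' % line.rstrip() for line in f),
                          maxlen=int(loglines))
        for line in lines:
            print(line)
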
@@ -342,6 +370,9 @@
 
 def main(server, eventHandler, params, tf = TerminalFilter):
 
+    if not params.observe_only:
+        params.updateToServer(server, os.environ.copy())
+
     includelogs, loglines, consolelogfile = _log_settings_from_server(server, params.observe_only)
 
     if sys.stdin.isatty() and sys.stdout.isatty():
@@ -365,8 +396,9 @@
     bb.msg.addDefaultlogFilter(errconsole, bb.msg.BBLogFilterStdErr)
     console.setFormatter(format)
     errconsole.setFormatter(format)
-    logger.addHandler(console)
-    logger.addHandler(errconsole)
+    if not bb.msg.has_console_handler(logger):
+        logger.addHandler(console)
+        logger.addHandler(errconsole)
 
     bb.utils.set_process_name("KnottyUI")
 
@@ -395,7 +427,6 @@
     universe = False
     if not params.observe_only:
         params.updateFromServer(server)
-        params.updateToServer(server, os.environ.copy())
         cmdline = params.parseActions()
         if not cmdline:
             print("Nothing to do.  Use 'bitbake world' to build everything, or run 'bitbake --help' for usage information.")
@@ -471,11 +502,11 @@
                         continue
 
                     # Prefix task messages with recipe/task
-                    if event.taskpid in helper.running_tasks:
+                    if event.taskpid in helper.running_tasks and event.levelno != format.PLAIN:
                         taskinfo = helper.running_tasks[event.taskpid]
                         event.msg = taskinfo['title'] + ': ' + event.msg
                 if hasattr(event, 'fn'):
-                        event.msg = event.fn + ': ' + event.msg
+                    event.msg = event.fn + ': ' + event.msg
                 logger.handle(event)
                 continue
 
@@ -484,29 +515,7 @@
                 continue
             if isinstance(event, bb.build.TaskFailed):
                 return_value = 1
-                logfile = event.logfile
-                if logfile and os.path.exists(logfile):
-                    termfilter.clearFooter()
-                    bb.error("Logfile of failure stored in: %s" % logfile)
-                    if includelogs and not event.errprinted:
-                        print("Log data follows:")
-                        f = open(logfile, "r")
-                        lines = []
-                        while True:
-                            l = f.readline()
-                            if l == '':
-                                break
-                            l = l.rstrip()
-                            if loglines:
-                                lines.append(' | %s' % l)
-                                if len(lines) > int(loglines):
-                                    lines.pop(0)
-                            else:
-                                print('| %s' % l)
-                        f.close()
-                        if lines:
-                            for line in lines:
-                                print(line)
+                print_event_log(event, includelogs, loglines, termfilter)
             if isinstance(event, bb.build.TaskBase):
                 logger.info(event._message)
                 continue
@@ -559,7 +568,7 @@
                 return_value = event.exitcode
                 if event.error:
                     errors = errors + 1
-                    logger.error("Command execution failed: %s", event.error)
+                    logger.error(str(event))
                 main.shutdown = 2
                 continue
             if isinstance(event, bb.command.CommandExit):
@@ -570,39 +579,16 @@
                 main.shutdown = 2
                 continue
             if isinstance(event, bb.event.MultipleProviders):
-                logger.info("multiple providers are available for %s%s (%s)", event._is_runtime and "runtime " or "",
-                            event._item,
-                            ", ".join(event._candidates))
-                rtime = ""
-                if event._is_runtime:
-                    rtime = "R"
-                logger.info("consider defining a PREFERRED_%sPROVIDER entry to match %s" % (rtime, event._item))
+                logger.info(str(event))
                 continue
             if isinstance(event, bb.event.NoProvider):
-                if event._runtime:
-                    r = "R"
-                else:
-                    r = ""
-
-                extra = ''
-                if not event._reasons:
-                    if event._close_matches:
-                        extra = ". Close matches:\n  %s" % '\n  '.join(event._close_matches)
-
                 # For universe builds, only show these as warnings, not errors
-                h = logger.warning
                 if not universe:
                     return_value = 1
                     errors = errors + 1
-                    h = logger.error
-
-                if event._dependees:
-                    h("Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)%s", r, event._item, ", ".join(event._dependees), r, extra)
+                    logger.error(str(event))
                 else:
-                    h("Nothing %sPROVIDES '%s'%s", r, event._item, extra)
-                if event._reasons:
-                    for reason in event._reasons:
-                        h("%s", reason)
+                    logger.warning(str(event))
                 continue
 
             if isinstance(event, bb.runqueue.sceneQueueTaskStarted):
@@ -624,13 +610,11 @@
             if isinstance(event, bb.runqueue.runQueueTaskFailed):
                 return_value = 1
                 taskfailures.append(event.taskstring)
-                logger.error("Task (%s) failed with exit code '%s'",
-                             event.taskstring, event.exitcode)
+                logger.error(str(event))
                 continue
 
             if isinstance(event, bb.runqueue.sceneQueueTaskFailed):
-                logger.warning("Setscene task (%s) failed with exit code '%s' - real task will be run instead",
-                               event.taskstring, event.exitcode)
+                logger.warning(str(event))
                 continue
 
             if isinstance(event, bb.event.DepTreeGenerated):
@@ -663,6 +647,7 @@
                                   bb.event.MetadataEvent,
                                   bb.event.StampUpdate,
                                   bb.event.ConfigParsed,
+                                  bb.event.MultiConfigParsed,
                                   bb.event.RecipeParsed,
                                   bb.event.RecipePreFinalise,
                                   bb.runqueue.runQueueEvent,
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/ui/ncurses.py b/import-layers/yocto-poky/bitbake/lib/bb/ui/ncurses.py
index ca845a3..8690c52 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/ui/ncurses.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/ui/ncurses.py
@@ -315,7 +315,7 @@
                     # also allow them to now exit with a single ^C
                     shutdown = 2
                 if isinstance(event, bb.command.CommandFailed):
-                    mw.appendText("Command execution failed: %s" % event.error)
+                    mw.appendText(str(event))
                     time.sleep(2)
                     exitflag = True
                 if isinstance(event, bb.command.CommandExit):
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/ui/taskexp.py b/import-layers/yocto-poky/bitbake/lib/bb/ui/taskexp.py
index 9d14ece..0e8e9d4 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/ui/taskexp.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/ui/taskexp.py
@@ -63,7 +63,9 @@
         self.current = None
         self.filter_model = model.filter_new()
         self.filter_model.set_visible_func(self._filter)
-        self.set_model(self.filter_model)
+        self.sort_model = self.filter_model.sort_new_with_model()
+        self.sort_model.set_sort_column_id(COL_DEP_PARENT, Gtk.SortType.ASCENDING)
+        self.set_model(self.sort_model)
         self.append_column(Gtk.TreeViewColumn(label, Gtk.CellRendererText(), text=COL_DEP_PARENT))
 
     def _filter(self, model, iter, data):
@@ -286,23 +288,7 @@
                 continue
 
             if isinstance(event, bb.event.NoProvider):
-                if event._runtime:
-                    r = "R"
-                else:
-                    r = ""
-
-                extra = ''
-                if not event._reasons:
-                    if event._close_matches:
-                        extra = ". Close matches:\n  %s" % '\n  '.join(event._close_matches)
-
-                if event._dependees:
-                    print("Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)%s" % (r, event._item, ", ".join(event._dependees), r, extra))
-                else:
-                    print("Nothing %sPROVIDES '%s'%s" % (r, event._item, extra))
-                if event._reasons:
-                    for reason in event._reasons:
-                        print(reason)
+                print(str(event))
 
                 _, error = server.runCommand(["stateShutdown"])
                 if error:
@@ -310,7 +296,7 @@
                 break
 
             if isinstance(event, bb.command.CommandFailed):
-                print("Command execution failed: %s" % event.error)
+                print(str(event))
                 return event.exitcode
 
             if isinstance(event, bb.command.CommandExit):
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/ui/toasterui.py b/import-layers/yocto-poky/bitbake/lib/bb/ui/toasterui.py
index 71f04fa..88cec37 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/ui/toasterui.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/ui/toasterui.py
@@ -320,29 +320,13 @@
             if isinstance(event, bb.event.CacheLoadCompleted):
                 continue
             if isinstance(event, bb.event.MultipleProviders):
-                logger.info("multiple providers are available for %s%s (%s)", event._is_runtime and "runtime " or "",
-                            event._item,
-                            ", ".join(event._candidates))
-                logger.info("consider defining a PREFERRED_PROVIDER entry to match %s", event._item)
+                logger.info(str(event))
                 continue
 
             if isinstance(event, bb.event.NoProvider):
                 errors = errors + 1
-                if event._runtime:
-                    r = "R"
-                else:
-                    r = ""
-
-                if event._dependees:
-                    text = "Nothing %sPROVIDES '%s' (but %s %sDEPENDS on or otherwise requires it)" % (r, event._item, ", ".join(event._dependees), r)
-                else:
-                    text = "Nothing %sPROVIDES '%s'" % (r, event._item)
-
+                text = str(event)
                 logger.error(text)
-                if event._reasons:
-                    for reason in event._reasons:
-                        logger.error("%s", reason)
-                        text += reason
                 buildinfohelper.store_log_error(text)
                 continue
 
@@ -364,8 +348,7 @@
             if isinstance(event, bb.runqueue.runQueueTaskFailed):
                 buildinfohelper.update_and_store_task(event)
                 taskfailures.append(event.taskstring)
-                logger.error("Task (%s) failed with exit code '%s'",
-                             event.taskstring, event.exitcode)
+                logger.error(str(event))
                 continue
 
             if isinstance(event, (bb.runqueue.sceneQueueTaskCompleted, bb.runqueue.sceneQueueTaskFailed)):
@@ -382,7 +365,7 @@
                 if isinstance(event, bb.command.CommandFailed):
                     errors += 1
                     errorcode = 1
-                    logger.error("Command execution failed: %s", event.error)
+                    logger.error(str(event))
                 elif isinstance(event, bb.event.BuildCompleted):
                     buildinfohelper.scan_image_artifacts()
                     buildinfohelper.clone_required_sdk_artifacts()
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/ui/uihelper.py b/import-layers/yocto-poky/bitbake/lib/bb/ui/uihelper.py
index 113fced..963c1ea 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/ui/uihelper.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/ui/uihelper.py
@@ -61,6 +61,9 @@
                 self.running_tasks[event.pid]['progress'] = event.progress
                 self.running_tasks[event.pid]['rate'] = event.rate
                 self.needUpdate = True
+        else:
+            return False
+        return True
 
     def getTasks(self):
         self.needUpdate = False
diff --git a/import-layers/yocto-poky/bitbake/lib/bb/utils.py b/import-layers/yocto-poky/bitbake/lib/bb/utils.py
index 6a44db5..c540b49 100644
--- a/import-layers/yocto-poky/bitbake/lib/bb/utils.py
+++ b/import-layers/yocto-poky/bitbake/lib/bb/utils.py
@@ -771,13 +771,14 @@
             return None
 
     renamefailed = 1
+    # os.rename needs to know the dest path ending with file name
+    # so append the file name to a path only if it's a dir specified
+    srcfname = os.path.basename(src)
+    destpath = os.path.join(dest, srcfname) if os.path.isdir(dest) \
+                else dest
+
     if sstat[stat.ST_DEV] == dstat[stat.ST_DEV]:
         try:
-            # os.rename needs to know the dest path ending with file name
-            # so append the file name to a path only if it's a dir specified
-            srcfname = os.path.basename(src)
-            destpath = os.path.join(dest, srcfname) if os.path.isdir(dest) \
-                        else dest
             os.rename(src, destpath)
             renamefailed = 0
         except Exception as e:
@@ -791,8 +792,8 @@
         didcopy = 0
         if stat.S_ISREG(sstat[stat.ST_MODE]):
             try: # For safety copy then move it over.
-                shutil.copyfile(src, dest + "#new")
-                os.rename(dest + "#new", dest)
+                shutil.copyfile(src, destpath + "#new")
+                os.rename(destpath + "#new", destpath)
                 didcopy = 1
             except Exception as e:
                 print('movefile: copy', src, '->', dest, 'failed.', e)
@@ -813,9 +814,9 @@
             return None
 
     if newmtime:
-        os.utime(dest, (newmtime, newmtime))
+        os.utime(destpath, (newmtime, newmtime))
     else:
-        os.utime(dest, (sstat[stat.ST_ATIME], sstat[stat.ST_MTIME]))
+        os.utime(destpath, (sstat[stat.ST_ATIME], sstat[stat.ST_MTIME]))
         newmtime = sstat[stat.ST_MTIME]
     return newmtime
 
@@ -1502,7 +1503,7 @@
 
 def load_plugins(logger, plugins, pluginpath):
     def load_plugin(name):
-        logger.debug('Loading plugin %s' % name)
+        logger.debug(1, 'Loading plugin %s' % name)
         fp, pathname, description = imp.find_module(name, [pluginpath])
         try:
             return imp.load_module(name, fp, pathname, description)
@@ -1510,7 +1511,7 @@
             if fp:
                 fp.close()
 
-    logger.debug('Loading plugins from %s...' % pluginpath)
+    logger.debug(1, 'Loading plugins from %s...' % pluginpath)
 
     expanded = (glob.glob(os.path.join(pluginpath, '*' + ext))
                 for ext in python_extensions)
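
The movefile() change above hoists the destpath computation out of the rename branch so that the copy fallback and the final utime() call operate on the same resolved target as os.rename(). A small usage sketch, with purely illustrative paths:

    import bb.utils

    # When dest is an existing directory, the file ends up at dest/<basename(src)>
    # and the timestamps are applied to that final path, not to the directory.
    bb.utils.movefile('/tmp/build.log', '/tmp/archive')            # -> /tmp/archive/build.log

    # When dest is a plain file path, it is used unchanged.
    bb.utils.movefile('/tmp/build.log.1', '/tmp/archive/old.log')
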
diff --git a/import-layers/yocto-poky/bitbake/lib/bblayers/action.py b/import-layers/yocto-poky/bitbake/lib/bblayers/action.py
index cf94704..b1326e5 100644
--- a/import-layers/yocto-poky/bitbake/lib/bblayers/action.py
+++ b/import-layers/yocto-poky/bitbake/lib/bblayers/action.py
@@ -1,7 +1,9 @@
 import fnmatch
 import logging
 import os
+import shutil
 import sys
+import tempfile
 
 import bb.utils
 
@@ -32,10 +34,26 @@
             sys.stderr.write("Unable to find bblayers.conf\n")
             return 1
 
-        notadded, _ = bb.utils.edit_bblayers_conf(bblayers_conf, layerdir, None)
-        if notadded:
-            for item in notadded:
-                sys.stderr.write("Specified layer %s is already in BBLAYERS\n" % item)
+        # Back up bblayers.conf to tempdir before we add layers
+        tempdir = tempfile.mkdtemp()
+        backup = tempdir + "/bblayers.conf.bak"
+        shutil.copy2(bblayers_conf, backup)
+
+        try:
+            notadded, _ = bb.utils.edit_bblayers_conf(bblayers_conf, layerdir, None)
+            if not (args.force or notadded):
+                try:
+                    self.tinfoil.parseRecipes()
+                except bb.tinfoil.TinfoilUIException:
+                    # Restore the back up copy of bblayers.conf
+                    shutil.copy2(backup, bblayers_conf)
+                    bb.fatal("Parse failure with the specified layer added")
+                else:
+                    for item in notadded:
+                        sys.stderr.write("Specified layer %s is already in BBLAYERS\n" % item)
+        finally:
+            # Remove the back up copy of bblayers.conf
+            shutil.rmtree(tempdir)
 
     def do_remove_layer(self, args):
         """Remove a layer from bblayers.conf."""
diff --git a/import-layers/yocto-poky/bitbake/lib/bblayers/layerindex.py b/import-layers/yocto-poky/bitbake/lib/bblayers/layerindex.py
index 506c110..9af385d 100644
--- a/import-layers/yocto-poky/bitbake/lib/bblayers/layerindex.py
+++ b/import-layers/yocto-poky/bitbake/lib/bblayers/layerindex.py
@@ -247,6 +247,7 @@
                         logger.plain("Adding layer \"%s\" to conf/bblayers.conf" % name)
                     localargs = argparse.Namespace()
                     localargs.layerdir = layerdir
+                    localargs.force = args.force
                     self.do_add_layer(localargs)
                 else:
                     break
diff --git a/import-layers/yocto-poky/bitbake/lib/prserv/serv.py b/import-layers/yocto-poky/bitbake/lib/prserv/serv.py
index a7efa58..6a99728 100644
--- a/import-layers/yocto-poky/bitbake/lib/prserv/serv.py
+++ b/import-layers/yocto-poky/bitbake/lib/prserv/serv.py
@@ -6,10 +6,11 @@
 import socket
 import io
 import sqlite3
-import bb.server.xmlrpc
+import bb.server.xmlrpcclient
 import prserv
 import prserv.db
 import errno
+import select
 
 logger = logging.getLogger("BitBake.PRserv")
 
@@ -59,6 +60,8 @@
         self.register_function(self.importone, "importone")
         self.register_introspection_functions()
 
+        self.quitpipein, self.quitpipeout = os.pipe()
+
         self.requestqueue = queue.Queue()
         self.handlerthread = threading.Thread(target = self.process_request_thread)
         self.handlerthread.daemon = False
@@ -75,12 +78,14 @@
 
         bb.utils.set_process_name("PRServ Handler")
 
-        while not self.quit:
+        while not self.quitflag:
             try:
                 (request, client_address) = self.requestqueue.get(True, 30)
             except queue.Empty:
                 self.table.sync_if_dirty()
                 continue
+            if request is None:
+                continue
             try:
                 self.finish_request(request, client_address)
                 self.shutdown_request(request)
@@ -100,7 +105,8 @@
     def sigterm_handler(self, signum, stack):
         if self.table:
             self.table.sync()
-        self.quit=True
+        self.quit()
+        self.requestqueue.put((None, None))
 
     def process_request(self, request, client_address):
         self.requestqueue.put((request, client_address))
@@ -136,7 +142,7 @@
         return self.table.importone(version, pkgarch, checksum, value)
 
     def ping(self):
-        return not self.quit
+        return not self.quitflag
 
     def getinfo(self):
         return (self.host, self.port)
@@ -152,12 +158,17 @@
             return None
 
     def quit(self):
-        self.quit=True
+        self.quitflag=True
+        os.write(self.quitpipeout, b"q")
+        os.close(self.quitpipeout)
         return
 
     def work_forever(self,):
-        self.quit = False
-        self.timeout = 0.5
+        self.quitflag = False
+        # This timeout applies to the poll in TCPServer; we need the select
+        # below to wake when our quit pipe closes. We only ever call into
+        # handle_request if there is data to read.
+        self.timeout = 0.01
 
         bb.utils.set_process_name("PRServ")
 
@@ -169,12 +180,17 @@
                      (self.dbfile, self.host, self.port, str(os.getpid())))
 
         self.handlerthread.start()
-        while not self.quit:
-            self.handle_request()
+        while not self.quitflag:
+            ready = select.select([self.fileno(), self.quitpipein], [], [], 30)
+            if self.quitflag:
+                break
+            if self.fileno() in ready[0]:
+                self.handle_request()
         self.handlerthread.join()
         self.db.disconnect()
         logger.info("PRServer: stopping...")
         self.server_close()
+        os.close(self.quitpipein)
         return
 
     def start(self):
@@ -182,6 +198,7 @@
             pid = self.daemonize()
         else:
             pid = self.fork()
+            self.pid = pid
 
         # Ensure both the parent sees this and the child from the work_forever log entry above
         logger.info("Started PRServer with DBfile: %s, IP: %s, PORT: %s, PID: %s" %
@@ -300,7 +317,7 @@
             host, port = singleton.getinfo()
         self.host = host
         self.port = port
-        self.connection, self.transport = bb.server.xmlrpc._create_server(self.host, self.port)
+        self.connection, self.transport = bb.server.xmlrpcclient._create_server(self.host, self.port)
 
     def terminate(self):
         try:
@@ -428,6 +445,9 @@
 def auto_start(d):
     global singleton
 
+    # Shutdown any existing PR Server
+    auto_shutdown()
+
     host_params = list(filter(None, (d.getVar('PRSERV_HOST') or '').split(':')))
     if not host_params:
         return None
@@ -464,7 +484,7 @@
         logger.critical("PRservice %s:%d not available" % (host, port))
         raise PRServiceConfigError
 
-def auto_shutdown(d=None):
+def auto_shutdown():
     global singleton
     if singleton:
         host, port = singleton.getinfo()
@@ -472,6 +492,11 @@
             PRServerConnection(host, port).terminate()
         except:
             logger.critical("Stop PRService %s:%d failed" % (host,port))
+
+        try:
+            os.waitpid(singleton.prserv.pid, 0)
+        except ChildProcessError:
+            pass
         singleton = None
 
 def ping(host, port):
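
The PRServer changes above replace the plain quit flag with a self-pipe: work_forever() selects on both the server socket and the read end of a pipe, and quit() writes to (and closes) the write end so the loop wakes immediately rather than waiting out the poll timeout. A standalone sketch of that wakeup pattern (the socket setup and names are illustrative):

    import os
    import select
    import socket

    quitpipein, quitpipeout = os.pipe()

    server = socket.socket()
    server.bind(('127.0.0.1', 0))
    server.listen(1)

    def request_quit():
        # Called from another thread or a signal handler: wakes the select below.
        os.write(quitpipeout, b'q')
        os.close(quitpipeout)

    finished = False
    while not finished:
        ready, _, _ = select.select([server.fileno(), quitpipein], [], [], 30)
        if quitpipein in ready:
            finished = True                # quit requested
        elif server.fileno() in ready:
            conn, _ = server.accept()      # stands in for handle_request()
            conn.close()
    os.close(quitpipein)
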
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/bldcollector/urls.py b/import-layers/yocto-poky/bitbake/lib/toaster/bldcollector/urls.py
index 64722f2..888175d 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/bldcollector/urls.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/bldcollector/urls.py
@@ -1,7 +1,7 @@
 #
 # BitBake Toaster Implementation
 #
-# Copyright (C) 2014        Intel Corporation
+# Copyright (C) 2014-2017   Intel Corporation
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 2 as
@@ -17,9 +17,11 @@
 # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 
 
-from django.conf.urls import patterns, include, url
+from django.conf.urls import include, url
 
-urlpatterns = patterns('bldcollector.views',
+import bldcollector.views
+
+urlpatterns = [
         # landing point for pushing a bitbake_eventlog.json file to this toaster instance
-        url(r'^eventfile$', 'eventfile', name='eventfile'),
-        )
+        url(r'^eventfile$', bldcollector.views.eventfile, name='eventfile'),
+]
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/bbcontroller.py b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/bbcontroller.py
index 912f67b..5195600 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/bbcontroller.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/bbcontroller.py
@@ -37,8 +37,8 @@
     """
 
     def __init__(self, be):
-        import bb.server.xmlrpc
-        self.connection = bb.server.xmlrpc._create_server(be.bbaddress,
+        import bb.server.xmlrpcclient
+        self.connection = bb.server.xmlrpcclient._create_server(be.bbaddress,
                                                           int(be.bbport))[0]
 
     def _runCommand(self, command):
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
index 1207102..4c17562 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/localhostbecontroller.py
@@ -24,6 +24,7 @@
 import sys
 import re
 import shutil
+import time
 from django.db import transaction
 from django.db.models import Q
 from bldcontrol.models import BuildEnvironment, BRLayer, BRVariable, BRTarget, BRBitbake
@@ -51,12 +52,14 @@
         self.pokydirname = None
         self.islayerset = False
 
-    def _shellcmd(self, command, cwd=None, nowait=False):
+    def _shellcmd(self, command, cwd=None, nowait=False,env=None):
         if cwd is None:
             cwd = self.be.sourcedir
+        if env is None:
+            env=os.environ.copy()
 
-        logger.debug("lbc_shellcmmd: (%s) %s" % (cwd, command))
-        p = subprocess.Popen(command, cwd = cwd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+        logger.debug("lbc_shellcmd: (%s) %s" % (cwd, command))
+        p = subprocess.Popen(command, cwd = cwd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env)
         if nowait:
             return
         (out,err) = p.communicate()
@@ -85,6 +88,11 @@
         return local_checkout_path
 
 
+    def setCloneStatus(self,bitbake,status,total,current):
+        bitbake.req.build.repos_cloned=current
+        bitbake.req.build.repos_to_clone=total
+        bitbake.req.build.save()
+
     def setLayers(self, bitbake, layers, targets):
         """ a word of attention: by convention, the first layer for any build will be poky! """
 
@@ -92,6 +100,8 @@
 
         layerlist = []
         nongitlayerlist = []
+        git_env = os.environ.copy()
+        # (note: add custom environment settings here)
 
         # set layers in the layersource
 
@@ -132,7 +142,7 @@
         cached_layers = {}
 
         try:
-            for remotes in self._shellcmd("git remote -v", self.be.sourcedir).split("\n"):
+            for remotes in self._shellcmd("git remote -v", self.be.sourcedir,env=git_env).split("\n"):
                 try:
                     remote = remotes.split("\t")[1].split(" ")[0]
                     if remote not in cached_layers:
@@ -147,7 +157,13 @@
         logger.info("Using pre-checked out source for layer %s", cached_layers)
 
         # 3. checkout the repositories
+        clone_count=0
+        clone_total=len(gitrepos.keys())
+        self.setCloneStatus(bitbake,'Started',clone_total,clone_count)
         for giturl, commit in gitrepos.keys():
+            self.setCloneStatus(bitbake,'progress',clone_total,clone_count)
+            clone_count += 1
+
             localdirname = os.path.join(self.be.sourcedir, self.getGitCloneDirectory(giturl, commit))
             logger.debug("localhostbecontroller: giturl %s:%s checking out in current directory %s" % (giturl, commit, localdirname))
 
@@ -155,7 +171,7 @@
             if os.path.exists(localdirname):
                 try:
                     localremotes = self._shellcmd("git remote -v",
-                                                  localdirname)
+                                                  localdirname,env=git_env)
                     if not giturl in localremotes and commit != 'HEAD':
                         raise BuildSetupException("Existing git repository at %s, but with different remotes ('%s', expected '%s'). Toaster will not continue out of fear of damaging something." % (localdirname, ", ".join(localremotes.split("\n")), giturl))
                 except ShellCmdException:
@@ -165,18 +181,18 @@
             else:
                 if giturl in cached_layers:
                     logger.debug("localhostbecontroller git-copying %s to %s" % (cached_layers[giturl], localdirname))
-                    self._shellcmd("git clone \"%s\" \"%s\"" % (cached_layers[giturl], localdirname))
-                    self._shellcmd("git remote remove origin", localdirname)
-                    self._shellcmd("git remote add origin \"%s\"" % giturl, localdirname)
+                    self._shellcmd("git clone \"%s\" \"%s\"" % (cached_layers[giturl], localdirname),env=git_env)
+                    self._shellcmd("git remote remove origin", localdirname,env=git_env)
+                    self._shellcmd("git remote add origin \"%s\"" % giturl, localdirname,env=git_env)
                 else:
                     logger.debug("localhostbecontroller: cloning %s in %s" % (giturl, localdirname))
-                    self._shellcmd('git clone "%s" "%s"' % (giturl, localdirname))
+                    self._shellcmd('git clone "%s" "%s"' % (giturl, localdirname),env=git_env)
 
             # branch magic name "HEAD" will inhibit checkout
             if commit != "HEAD":
                 logger.debug("localhostbecontroller: checking out commit %s to %s " % (commit, localdirname))
                 ref = commit if re.match('^[a-fA-F0-9]+$', commit) else 'origin/%s' % commit
-                self._shellcmd('git fetch --all && git reset --hard "%s"' % ref, localdirname)
+                self._shellcmd('git fetch --all && git reset --hard "%s"' % ref, localdirname,env=git_env)
 
             # take the localdirname as poky dir if we can find the oe-init-build-env
             if self.pokydirname is None and os.path.exists(os.path.join(localdirname, "oe-init-build-env")):
@@ -186,7 +202,7 @@
                 # make sure we have a working bitbake
                 if not os.path.exists(os.path.join(self.pokydirname, 'bitbake')):
                     logger.debug("localhostbecontroller: checking bitbake into the poky dirname %s " % self.pokydirname)
-                    self._shellcmd("git clone -b \"%s\" \"%s\" \"%s\" " % (bitbake.commit, bitbake.giturl, os.path.join(self.pokydirname, 'bitbake')))
+                    self._shellcmd("git clone -b \"%s\" \"%s\" \"%s\" " % (bitbake.commit, bitbake.giturl, os.path.join(self.pokydirname, 'bitbake')),env=git_env)
 
             # verify our repositories
             for name, dirpath in gitrepos[(giturl, commit)]:
@@ -198,6 +214,7 @@
                 if name != "bitbake":
                     layerlist.append(localdirpath.rstrip("/"))
 
+        self.setCloneStatus(bitbake,'complete',clone_total,clone_count)
         logger.debug("localhostbecontroller: current layer list %s " % pformat(layerlist))
 
         if self.pokydirname is None and os.path.exists(os.path.join(self.be.sourcedir, "oe-init-build-env")):
@@ -319,16 +336,32 @@
                 conf.write('%s="%s"\n' % (var.name, var.value))
             conf.write('INHERIT+="toaster buildhistory"')
 
+        # clean the Toaster to build environment
+        env_clean = 'unset BBPATH;' # clean BBPATH for <= YP-2.4.0
+
         # run bitbake server from the clone
         bitbake = os.path.join(self.pokydirname, 'bitbake', 'bin', 'bitbake')
         toasterlayers = os.path.join(builddir,"conf/toaster-bblayers.conf")
-        self._shellcmd('bash -c \"source %s %s; BITBAKE_UI="knotty" %s --read %s --read %s '
-                       '--server-only -t xmlrpc -B 0.0.0.0:0\"' % (oe_init,
+        self._shellcmd('%s bash -c \"source %s %s; BITBAKE_UI="knotty" %s --read %s --read %s '
+                       '--server-only -B 0.0.0.0:0\"' % (env_clean, oe_init,
                        builddir, bitbake, confpath, toasterlayers), self.be.sourcedir)
 
         # read port number from bitbake.lock
-        self.be.bbport = ""
+        self.be.bbport = -1
         bblock = os.path.join(builddir, 'bitbake.lock')
+        # allow 10 seconds for bb lock file to appear but also be populated
+        for lock_check in range(10):
+            if not os.path.exists(bblock):
+                logger.debug("localhostbecontroller: waiting for bblock file to appear")
+                time.sleep(1)
+                continue
+            if 10 < os.stat(bblock).st_size:
+                break
+            logger.debug("localhostbecontroller: waiting for bblock content to appear")
+            time.sleep(1)
+        else:
+            raise BuildSetupException("Cannot find bitbake server lock file '%s'. Aborting." % bblock)
+
         with open(bblock) as fplock:
             for line in fplock:
                 if ":" in line:
@@ -336,7 +369,7 @@
                     logger.debug("localhostbecontroller: bitbake port %s", self.be.bbport)
                     break
 
-        if not self.be.bbport:
+        if -1 == self.be.bbport:
             raise BuildSetupException("localhostbecontroller: can't read bitbake port from %s" % bblock)
 
         self.be.bbaddress = "localhost"
@@ -357,10 +390,11 @@
         log = os.path.join(builddir, 'toaster_ui.log')
         local_bitbake = os.path.join(os.path.dirname(os.getenv('BBBASEDIR')),
                                      'bitbake')
-        self._shellcmd(['bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:-1" '
-                        '%s %s -u toasterui --token="" >>%s 2>&1;'
-                        'BITBAKE_UI="knotty" BBSERVER=0.0.0.0:-1 %s -m)&\"' \
-                        % (brbe, local_bitbake, bbtargets, log, bitbake)],
+        self._shellcmd(['%s bash -c \"(TOASTER_BRBE="%s" BBSERVER="0.0.0.0:%s" '
+                        '%s %s -u toasterui  --read %s --read %s --token="" >>%s 2>&1;'
+                        'BITBAKE_UI="knotty" BBSERVER=0.0.0.0:%s %s -m)&\"' \
+                        % (env_clean, brbe, self.be.bbport, local_bitbake, bbtargets, confpath, toasterlayers, log,
+                        self.be.bbport, bitbake,)],
                         builddir, nowait=True)
 
         logger.debug('localhostbecontroller: Build launched, exiting. '
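
The lock-file wait added above polls for up to ten seconds until bitbake.lock both exists and has been populated with the server address, using a for/else so the timeout raises an error. A condensed sketch of the same loop, with RuntimeError standing in for BuildSetupException:

    import os
    import time

    def wait_for_lockfile(bblock, retries=10):
        for _ in range(retries):
            if not os.path.exists(bblock):
                time.sleep(1)                      # lock file not created yet
                continue
            if os.stat(bblock).st_size > 10:
                break                              # file holds the "host:port" line
            time.sleep(1)                          # file exists but is still empty
        else:
            raise RuntimeError("Cannot find bitbake server lock file '%s'" % bblock)
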
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py
index 2ed994f..582114a 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/checksettings.py
@@ -1,4 +1,4 @@
-from django.core.management.base import NoArgsCommand, CommandError
+from django.core.management.base import BaseCommand, CommandError
 from django.db import transaction
 
 from django.core.management import call_command
@@ -18,7 +18,7 @@
         return os.path.dirname(path)
 
 
-class Command(NoArgsCommand):
+class Command(BaseCommand):
     args = ""
     help = "Verifies that the configured settings are valid and usable, or prompts the user to fix the settings."
 
@@ -75,7 +75,10 @@
                         call_command("loaddata", "settings")
                         template_conf = os.environ.get("TEMPLATECONF", "")
 
-                        if "poky" in template_conf:
+                        if ToasterSetting.objects.filter(name='CUSTOM_XML_ONLY').count() > 0:
+                            # only use the custom settings
+                            pass
+                        elif "poky" in template_conf:
                             print("Loading poky configuration")
                             call_command("loaddata", "poky")
                         else:
@@ -152,7 +155,7 @@
 
 
 
-    def handle_noargs(self, **options):
+    def handle(self, **options):
         retval = 0
         retval += self._verify_build_environment()
         retval += self._verify_default_settings()
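
checksettings, runbuilds and lsupdates all receive the same Django update in this commit: NoArgsCommand, removed in newer Django releases, becomes BaseCommand, and handle_noargs() becomes handle(). A minimal command in the new style (the class body here is illustrative):

    from django.core.management.base import BaseCommand

    class Command(BaseCommand):
        help = 'Example management command that takes no arguments'

        def handle(self, *args, **options):
            # Equivalent of the old handle_noargs(); options still carries
            # the standard flags such as verbosity.
            self.stdout.write('running')
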
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
index df11f9d..791e53e 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/bldcontrol/management/commands/runbuilds.py
@@ -1,4 +1,4 @@
-from django.core.management.base import NoArgsCommand
+from django.core.management.base import BaseCommand
 from django.db import transaction
 from django.db.models import Q
 
@@ -16,7 +16,7 @@
 logger = logging.getLogger("toaster")
 
 
-class Command(NoArgsCommand):
+class Command(BaseCommand):
     args = ""
     help = "Schedules and executes build requests as possible. "\
            "Does not return (interrupt with Ctrl-C)"
@@ -79,6 +79,14 @@
             br.save()
             bec.be.lock = BuildEnvironment.LOCK_FREE
             bec.be.save()
+            # Cancel the pending build and report the exception to the UI
+            log_object = LogMessage.objects.create(
+                            build = br.build,
+                            level = LogMessage.EXCEPTION,
+                            message = errmsg)
+            log_object.save()
+            br.build.outcome = Build.FAILED
+            br.build.save()
 
     def archive(self):
         for br in BuildRequest.objects.filter(state=BuildRequest.REQ_ARCHIVE):
@@ -168,7 +176,7 @@
         except Exception as e:
             logger.warn("runbuilds: schedule exception %s" % str(e))
 
-    def handle_noargs(self, **options):
+    def handle(self, **options):
         pidfile_path = os.path.join(os.environ.get("BUILDDIR", "."),
                                     ".runbuilds.pid")
 
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/custom_toaster_append.sh_sample b/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/custom_toaster_append.sh_sample
new file mode 100755
index 0000000..8c4e163
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/custom_toaster_append.sh_sample
@@ -0,0 +1,49 @@
+#!/bin/bash
+
+# Copyright (C) 2017 Intel Corp.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+# See the GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+
+# This is sample software. Rename it to 'custom_toaster_append.sh' and
+# enable the respective custom sections.
+
+verbose=0
+if [ $verbose -ne 0 ] ; then
+    echo "custom_toaster_append.sh:$*"
+fi
+
+if [ "toaster_prepend" = "$1" ] ; then
+    echo "Add custom actions here when Toaster script is started"
+fi
+
+if [ "web_start_postpend" = "$1" ] ; then
+    echo "Add custom actions here after Toaster web service is started"
+fi
+
+if [ "web_stop_postpend" = "$1" ] ; then
+    echo "Add custom actions here after Toaster web service is stopped"
+fi
+
+if [ "noweb_start_postpend" = "$1" ] ; then
+    echo "Add custom actions here after Toaster (no web) service is started"
+fi
+
+if [ "noweb_stop_postpend" = "$1" ] ; then
+    echo "Add custom actions here after Toaster (no web) service is stopped"
+fi
+
+if [ "toaster_postpend" = "$1" ] ; then
+    echo "Add custom actions here after Toaster script is done"
+fi
+
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/oe-core.xml b/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/oe-core.xml
index 66c3595..00720c3 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/oe-core.xml
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/oe-core.xml
@@ -8,9 +8,9 @@
 
   <!-- Bitbake versions which correspond to the metadata release -->
   <object model="orm.bitbakeversion" pk="1">
-    <field type="CharField" name="name">pyro</field>
+    <field type="CharField" name="name">rocko</field>
     <field type="CharField" name="giturl">git://git.openembedded.org/bitbake</field>
-    <field type="CharField" name="branch">1.34</field>
+    <field type="CharField" name="branch">1.36</field>
   </object>
   <object model="orm.bitbakeversion" pk="2">
     <field type="CharField" name="name">HEAD</field>
@@ -25,11 +25,11 @@
 
   <!-- Releases available -->
   <object model="orm.release" pk="1">
-    <field type="CharField" name="name">pyro</field>
-    <field type="CharField" name="description">Openembedded Pyro</field>
+    <field type="CharField" name="name">rocko</field>
+    <field type="CharField" name="description">Openembedded Rocko</field>
     <field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">1</field>
-    <field type="CharField" name="branch_name">pyro</field>
-    <field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href=\"http://cgit.openembedded.org/openembedded-core/log/?h=pyro\"&gt;OpenEmbedded Pyro&lt;/a&gt; branch.</field>
+    <field type="CharField" name="branch_name">rocko</field>
+    <field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href=\"http://cgit.openembedded.org/openembedded-core/log/?h=rocko\"&gt;OpenEmbedded Rocko&lt;/a&gt; branch.</field>
   </object>
   <object model="orm.release" pk="2">
     <field type="CharField" name="name">local</field>
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/poky.xml b/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/poky.xml
index 7827aac..2f39d77 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/poky.xml
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/fixtures/poky.xml
@@ -8,9 +8,9 @@
 
   <!-- Bitbake versions which correspond to the metadata release -->
   <object model="orm.bitbakeversion" pk="1">
-    <field type="CharField" name="name">pyro</field>
+    <field type="CharField" name="name">rocko</field>
     <field type="CharField" name="giturl">git://git.yoctoproject.org/poky</field>
-    <field type="CharField" name="branch">pyro</field>
+    <field type="CharField" name="branch">rocko</field>
     <field type="CharField" name="dirpath">bitbake</field>
   </object>
   <object model="orm.bitbakeversion" pk="2">
@@ -29,11 +29,11 @@
 
   <!-- Releases available -->
   <object model="orm.release" pk="1">
-    <field type="CharField" name="name">pyro</field>
-    <field type="CharField" name="description">Yocto Project 2.3 "Pyro"</field>
+    <field type="CharField" name="name">rocko</field>
+    <field type="CharField" name="description">Yocto Project 2.4 "Rocko"</field>
     <field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version">1</field>
-    <field type="CharField" name="branch_name">pyro</field>
-    <field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=pyro"&gt;Yocto Project Pyro branch&lt;/a&gt;.</field>
+    <field type="CharField" name="branch_name">rocko</field>
+    <field type="TextField" name="helptext">Toaster will run your builds using the tip of the &lt;a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=rocko"&gt;Yocto Project Rocko branch&lt;/a&gt;.</field>
   </object>
   <object model="orm.release" pk="2">
     <field type="CharField" name="name">local</field>
@@ -105,7 +105,7 @@
     <field rel="ManyToOneRel" to="orm.layer" name="layer">1</field>
     <field type="IntegerField" name="layer_source">0</field>
     <field rel="ManyToOneRel" to="orm.release" name="release">1</field>
-    <field type="CharField" name="branch">pyro</field>
+    <field type="CharField" name="branch">rocko</field>
     <field type="CharField" name="dirpath">meta</field>
   </object>
   <object model="orm.layer_version" pk="2">
@@ -136,7 +136,7 @@
     <field rel="ManyToOneRel" to="orm.layer" name="layer">2</field>
     <field type="IntegerField" name="layer_source">0</field>
     <field rel="ManyToOneRel" to="orm.release" name="release">1</field>
-    <field type="CharField" name="branch">pyro</field>
+    <field type="CharField" name="branch">rocko</field>
     <field type="CharField" name="dirpath">meta-poky</field>
   </object>
   <object model="orm.layer_version" pk="5">
@@ -167,7 +167,7 @@
     <field rel="ManyToOneRel" to="orm.layer" name="layer">3</field>
     <field type="IntegerField" name="layer_source">0</field>
     <field rel="ManyToOneRel" to="orm.release" name="release">1</field>
-    <field type="CharField" name="branch">pyro</field>
+    <field type="CharField" name="branch">rocko</field>
     <field type="CharField" name="dirpath">meta-yocto-bsp</field>
   </object>
   <object model="orm.layer_version" pk="8">
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/management/commands/lsupdates.py b/import-layers/yocto-poky/bitbake/lib/toaster/orm/management/commands/lsupdates.py
index 482908d..efc6b3a 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/orm/management/commands/lsupdates.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/management/commands/lsupdates.py
@@ -4,7 +4,7 @@
 #
 # BitBake Toaster Implementation
 #
-# Copyright (C) 2016        Intel Corporation
+# Copyright (C) 2016-2017   Intel Corporation
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 2 as
@@ -19,10 +19,12 @@
 # with this program; if not, write to the Free Software Foundation, Inc.,
 # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 
-from django.core.management.base import NoArgsCommand
+from django.core.management.base import BaseCommand
 
 from orm.models import LayerSource, Layer, Release, Layer_Version
 from orm.models import LayerVersionDependency, Machine, Recipe
+from orm.models import Distro
+from orm.models import ToasterSetting
 
 import os
 import sys
@@ -56,7 +58,7 @@
         self.signal = False
 
 
-class Command(NoArgsCommand):
+class Command(BaseCommand):
     args = ""
     help = "Updates locally cached information from a layerindex server"
 
@@ -80,6 +82,8 @@
         os.system('setterm -cursor off')
 
         self.apiurl = DEFAULT_LAYERINDEX_SERVER
+        if ToasterSetting.objects.filter(name='CUSTOM_LAYERINDEX_SERVER').count() == 1:
+            self.apiurl = ToasterSetting.objects.get(name = 'CUSTOM_LAYERINDEX_SERVER').value
 
         assert self.apiurl is not None
         try:
@@ -91,7 +95,9 @@
 
         proxy_settings = os.environ.get("http_proxy", None)
 
-        def _get_json_response(apiurl=DEFAULT_LAYERINDEX_SERVER):
+        def _get_json_response(apiurl=None):
+            if None == apiurl:
+                apiurl=self.apiurl
             http_progress = Spinner()
             http_progress.start()
 
@@ -251,6 +257,24 @@
                                                              depends_on=lvd)
             self.mini_progress("Layer version dependencies", i, total)
 
+        # update Distros
+        logger.info("Fetching distro information")
+        distros_info = _get_json_response(
+            apilinks['distros'] + "?filter=layerbranch__branch__name:%s" %
+            "OR".join(whitelist_branch_names))
+
+        total = len(distros_info)
+        for i, di in enumerate(distros_info):
+            distro, created = Distro.objects.get_or_create(
+                name=di['name'],
+                layer_version=Layer_Version.objects.get(
+                    pk=li_layer_branch_id_to_toaster_lv_id[di['layerbranch']]))
+            distro.up_date = di['updated']
+            distro.name = di['name']
+            distro.description = di['description']
+            distro.save()
+            self.mini_progress("distros", i, total)
+
         # update machines
         logger.info("Fetching machine information")
         machines_info = _get_json_response(
@@ -309,5 +333,5 @@
 
         os.system('setterm -cursor on')
 
-    def handle_noargs(self, **options):
+    def handle(self, **options):
         self.update()
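
lsupdates now honours a CUSTOM_LAYERINDEX_SERVER ToasterSetting before falling back to the default layer index, and also pulls distro data into the new Distro model. One way to point it at a private index, sketched with a placeholder URL (normally this would come from a fixture rather than ad-hoc code):

    from orm.models import ToasterSetting

    ToasterSetting.objects.get_or_create(
        name='CUSTOM_LAYERINDEX_SERVER',
        defaults={'value': 'http://layers.example.com/layerindex/api/'})
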
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/migrations/0016_clone_progress.py b/import-layers/yocto-poky/bitbake/lib/toaster/orm/migrations/0016_clone_progress.py
new file mode 100644
index 0000000..cd4023b
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/migrations/0016_clone_progress.py
@@ -0,0 +1,24 @@
+# -*- coding: utf-8 -*-
+from __future__ import unicode_literals
+
+from django.db import migrations, models
+
+class Migration(migrations.Migration):
+
+    dependencies = [
+        ('orm', '0015_layer_local_source_dir'),
+    ]
+
+    operations = [
+        migrations.AddField(
+            model_name='build',
+            name='repos_cloned',
+            field=models.IntegerField(default=1),
+        ),
+        migrations.AddField(
+            model_name='build',
+            name='repos_to_clone',
+            field=models.IntegerField(default=1), # (default off)
+        ),
+    ]
+
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/migrations/0017_distro_clone.py b/import-layers/yocto-poky/bitbake/lib/toaster/orm/migrations/0017_distro_clone.py
new file mode 100644
index 0000000..d3c5901
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/migrations/0017_distro_clone.py
@@ -0,0 +1,25 @@
+# -*- coding: utf-8 -*-
+from __future__ import unicode_literals
+
+from django.db import migrations, models
+
+class Migration(migrations.Migration):
+
+    dependencies = [
+        ('orm', '0016_clone_progress'),
+    ]
+
+    operations = [
+        migrations.CreateModel(
+            name='Distro',
+            fields=[
+                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
+                ('up_id', models.IntegerField(default=None, null=True)),
+                ('up_date', models.DateTimeField(default=None, null=True)),
+                ('name', models.CharField(max_length=255)),
+                ('description', models.CharField(max_length=255)),
+                ('layer_version', models.ForeignKey(to='orm.Layer_Version')),
+            ],
+        ),
+    ]
+
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/orm/models.py b/import-layers/yocto-poky/bitbake/lib/toaster/orm/models.py
index a49f9a4..3a7dff8 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/orm/models.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/orm/models.py
@@ -321,6 +321,22 @@
 
         return queryset
 
+    def get_available_distros(self):
+        """ Returns QuerySet of all Distros which are provided by the
+        Layers currently added to the Project """
+        queryset = Distro.objects.filter(
+            layer_version__in=self.get_project_layer_versions())
+
+        return queryset
+
+    def get_all_compatible_distros(self):
+        """ Returns QuerySet of all the compatible Wind River distros available to the
+        project including ones from Layers not currently added """
+        queryset = Distro.objects.filter(
+            layer_version__in=self.get_all_compatible_layer_versions())
+
+        return queryset
+
     def get_available_recipes(self):
         """ Returns QuerySet of all the recipes that are provided by layers
         added to this project """
@@ -435,7 +451,13 @@
     recipes_to_parse = models.IntegerField(default=1)
 
     # number of recipes parsed so far for this build
-    recipes_parsed = models.IntegerField(default=0)
+    recipes_parsed = models.IntegerField(default=1)
+
+    # number of repos to clone for this build
+    repos_to_clone = models.IntegerField(default=1)
+
+    # number of repos cloned so far for this build (default off)
+    repos_cloned = models.IntegerField(default=1)
 
     @staticmethod
     def get_recent(project=None):
@@ -486,7 +508,7 @@
         tf = Task.objects.filter(build = self)
         tfc = tf.count()
         if tfc > 0:
-            completeper = tf.exclude(order__isnull=True).count()*100 // tfc
+            completeper = tf.exclude(outcome=Task.OUTCOME_NA).count()*100 // tfc
         else:
             completeper = 0
         return completeper
@@ -667,6 +689,13 @@
         else:
             return False
 
+    def is_cloning(self):
+        """
+        True if the build is still cloning repos
+        """
+        return self.outcome == Build.IN_PROGRESS and \
+            self.repos_cloned < self.repos_to_clone
+
     def is_parsing(self):
         """
         True if the build is still parsing recipes
@@ -680,10 +709,11 @@
         tasks.
 
         Note that the mechanism for testing whether a Task is "done" is whether
-        its order field is set, as per the completeper() method.
+        its outcome field is set, as per the completeper() method.
         """
         return self.outcome == Build.IN_PROGRESS and \
-            self.task_build.filter(order__isnull=False).count() == 0
+            self.task_build.exclude(outcome=Task.OUTCOME_NA).count() == 0
+
 
     def get_state(self):
         """
@@ -698,6 +728,8 @@
             return 'Cancelling';
         elif self.is_queued():
             return 'Queued'
+        elif self.is_cloning():
+            return 'Cloning'
         elif self.is_parsing():
             return 'Parsing'
         elif self.is_starting():
@@ -1485,12 +1517,12 @@
         return self._handle_url_path(self.layer.vcs_web_tree_base_url, '')
 
     def get_vcs_reference(self):
+        if self.commit is not None and len(self.commit) > 0:
+            return self.commit
         if self.branch is not None and len(self.branch) > 0:
             return self.branch
         if self.release is not None:
             return self.release.name
-        if self.commit is not None and len(self.commit) > 0:
-            return self.commit
         return 'N/A'
 
     def get_detailspage_url(self, project_id=None):
@@ -1626,7 +1658,7 @@
 
     def get_base_recipe_file(self):
         """Get the base recipe file path if it exists on the file system"""
-        path_schema_one = "%s/%s" % (self.base_recipe.layer_version.dirpath,
+        path_schema_one = "%s/%s" % (self.base_recipe.layer_version.local_path,
                                      self.base_recipe.file_path)
 
         path_schema_two = self.base_recipe.file_path
@@ -1780,6 +1812,21 @@
     except FileNotFoundError:
         logger.info("Stopping existing runbuilds: no current process found")
 
+class Distro(models.Model):
+    search_allowed_fields = ["name", "description", "layer_version__layer__name"]
+    up_date = models.DateTimeField(null = True, default = None)
+
+    layer_version = models.ForeignKey('Layer_Version')
+    name = models.CharField(max_length=255)
+    description = models.CharField(max_length=255)
+
+    def get_vcs_distro_file_link_url(self):
+        path = self.name+'.conf'
+        return self.layer_version.get_vcs_file_link_url(path)
+
+    def __unicode__(self):
+        return "Distro " + self.name + "(" + self.description + ")"
+
 django.db.models.signals.post_save.connect(invalidate_cache)
 django.db.models.signals.post_delete.connect(invalidate_cache)
 django.db.models.signals.m2m_changed.connect(invalidate_cache)
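
Together with the new repos_cloned/repos_to_clone counters, is_cloning() above lets get_state() report a distinct 'Cloning' phase before 'Parsing'. A small sketch of how a consumer might turn the counters into a progress value, where build is a hypothetical Build instance:

    def clone_progress_percent(build):
        # Mirrors the is_cloning() check: cloning is finished once
        # repos_cloned has caught up with repos_to_clone.
        if build.repos_to_clone <= 0:
            return 100
        return min(100, build.repos_cloned * 100 // build.repos_to_clone)
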
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/api.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/api.py
index 1a6507c..ab6ba69 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/api.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/api.py
@@ -18,6 +18,7 @@
 
 # Please run flake8 on this file before sending patches
 
+import os
 import re
 import logging
 import json
@@ -28,7 +29,7 @@
 from orm.models import Recipe, CustomImageRecipe, CustomImagePackage
 from orm.models import Layer, Target, Package, Package_Dependency
 from orm.models import ProjectVariable
-from bldcontrol.models import BuildRequest
+from bldcontrol.models import BuildRequest, BuildEnvironment
 from bldcontrol import bbcontroller
 
 from django.http import HttpResponse, JsonResponse
@@ -509,6 +510,27 @@
                                    (tpackage.package.name, e))
                     pass
 
+        # pre-create layer directory structure, so that other builds
+        # are not blocked by this new recipe dependency
+        # NOTE: this is parallel code to 'localhostbecontroller.py'
+        be = BuildEnvironment.objects.all()[0]
+        layerpath = os.path.join(be.builddir,
+                                 CustomImageRecipe.LAYER_NAME)
+        for name in ("conf", "recipes"):
+            path = os.path.join(layerpath, name)
+            if not os.path.isdir(path):
+                os.makedirs(path)
+        # pre-create layer.conf
+        config = os.path.join(layerpath, "conf", "layer.conf")
+        if not os.path.isfile(config):
+            with open(config, "w") as conf:
+                conf.write('BBPATH .= ":${LAYERDIR}"\nBBFILES += "${LAYERDIR}/recipes/*.bb"\n')
+        # pre-create new image's recipe file
+        recipe_path = os.path.join(layerpath, "recipes", "%s.bb" %
+                                   recipe.name)
+        with open(recipe_path, "w") as recipef:
+            recipef.write(recipe.generate_recipe_file_contents())
+
         return JsonResponse(
             {"error": "ok",
              "packages": recipe.get_all_packages().count(),
@@ -752,7 +774,6 @@
                 return error_response("Package %s not found in excludes"
                                       " but was in included list" %
                                       package.name)
-
         else:
             recipe.appends_set.add(package)
             # Make sure that package is not in the excludes set
@@ -760,26 +781,27 @@
                 recipe.excludes_set.remove(package)
             except:
                 pass
-            # Add the dependencies we think will be added to the recipe
-            # as a result of appending this package.
-            # TODO this should recurse down the entire deps tree
-            for dep in package.package_dependencies_source.all_depends():
-                try:
-                    cust_package = CustomImagePackage.objects.get(
-                        name=dep.depends_on.name)
 
-                    recipe.includes_set.add(cust_package)
-                    try:
-                        # When adding the pre-requisite package, make
-                        # sure it's not in the excluded list from a
-                        # prior removal.
-                        recipe.excludes_set.remove(cust_package)
-                    except package.DoesNotExist:
-                        # Don't care if the package had never been excluded
-                        pass
-                except:
-                    logger.warning("Could not add package's suggested"
-                                   "dependencies to the list")
+        # Add the dependencies we think will be added to the recipe
+        # as a result of appending this package.
+        # TODO this should recurse down the entire deps tree
+        for dep in package.package_dependencies_source.all_depends():
+            try:
+                cust_package = CustomImagePackage.objects.get(
+                    name=dep.depends_on.name)
+
+                recipe.includes_set.add(cust_package)
+                try:
+                    # When adding the pre-requisite package, make
+                    # sure it's not in the excluded list from a
+                    # prior removal.
+                    recipe.excludes_set.remove(cust_package)
+                except package.DoesNotExist:
+                    # Don't care if the package had never been excluded
+                    pass
+            except:
+                logger.warning("Could not add package's suggested "
+                               "dependencies to the list")
         return JsonResponse({"error": "ok"})
 
     def delete(self, request, *args, **kwargs):
@@ -797,22 +819,24 @@
                 recipe.excludes_set.add(package)
             else:
                 recipe.appends_set.remove(package)
-                all_current_packages = recipe.get_all_packages()
 
-                reverse_deps_dictlist = self._get_all_dependents(
-                    package.pk,
-                    all_current_packages)
+            # remove dependencies as well
+            all_current_packages = recipe.get_all_packages()
 
-                ids = [entry['pk'] for entry in reverse_deps_dictlist]
-                reverse_deps = CustomImagePackage.objects.filter(id__in=ids)
-                for r in reverse_deps:
-                    try:
-                        if r.id in included_packages:
-                            recipe.excludes_set.add(r)
-                        else:
-                            recipe.appends_set.remove(r)
-                    except:
-                        pass
+            reverse_deps_dictlist = self._get_all_dependents(
+                package.pk,
+                all_current_packages)
+
+            ids = [entry['pk'] for entry in reverse_deps_dictlist]
+            reverse_deps = CustomImagePackage.objects.filter(id__in=ids)
+            for r in reverse_deps:
+                try:
+                    if r.id in included_packages:
+                        recipe.excludes_set.add(r)
+                    else:
+                        recipe.appends_set.remove(r)
+                except:
+                    pass
 
             return JsonResponse({"error": "ok"})
         except CustomImageRecipe.DoesNotExist:
@@ -874,6 +898,12 @@
             machinevar.value = request.POST['machineName']
             machinevar.save()
 
+        # Distro name change
+        if 'distroName' in request.POST:
+            distrovar = prj.projectvariable_set.get(name="DISTRO")
+            distrovar.value = request.POST['distroName']
+            distrovar.save()
+
         return JsonResponse({"error": "ok"})
 
     def get(self, request, *args, **kwargs):
@@ -960,10 +990,11 @@
         except ProjectVariable.DoesNotExist:
             data["machine"] = None
         try:
-            data["distro"] = project.projectvariable_set.get(
-                name="DISTRO").value
+            data["distro"] = {"name":
+                               project.projectvariable_set.get(
+                                   name="DISTRO").value}
         except ProjectVariable.DoesNotExist:
-            data["distro"] = "-- not set yet"
+            data["distro"] = None
 
         data['error'] = "ok"
 
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/mrbsection.js b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/mrbsection.js
index 73d0935..c0c5fa9 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/mrbsection.js
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/mrbsection.js
@@ -61,6 +61,12 @@
     return (cached.recipes_parsed_percentage !== build.recipes_parsed_percentage);
   }
 
+  // returns true if the number of repos cloned/to clone changed
+  function cloneProgressChanged(build) {
+    var cached = getCached(build);
+    return (cached.repos_cloned_percentage !== build.repos_cloned_percentage);
+  }
+
   function refreshMostRecentBuilds(){
     libtoaster.getMostRecentBuilds(
       libtoaster.ctx.mostRecentBuildsUrl,
@@ -100,6 +106,15 @@
 
             container.html(html);
           }
+          else if (cloneProgressChanged(build)) {
+            // update the clone progress text
+            selector = '#repos-cloned-percentage-' + build.id;
+            $(selector).html(build.repos_cloned_percentage);
+
+            // update the clone progress bar
+            selector = '#repos-cloned-percentage-bar-' + build.id;
+            $(selector).width(build.repos_cloned_percentage + '%');
+          }
           else if (tasksProgressChanged(build)) {
             // update the task progress text
             selector = '#build-pc-done-' + build.id;
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projectpage.js b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projectpage.js
index 21adf81..506471e 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projectpage.js
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projectpage.js
@@ -15,6 +15,13 @@
   var machineInputForm = $("#machine-input-form");
   var invalidMachineNameHelp = $("#invalid-machine-name-help");
 
+  var distroChangeInput = $("#distro-change-input");
+  var distroChangeBtn = $("#distro-change-btn");
+  var distroForm = $("#select-distro-form");
+  var distroChangeFormToggle = $("#change-distro-toggle");
+  var distroNameTitle = $("#project-distro-name");
+  var distroChangeCancel = $("#cancel-distro-change");
+
   var freqBuildBtn =  $("#freq-build-btn");
   var freqBuildList = $("#freq-build-list");
 
@@ -26,6 +33,7 @@
 
   var currentLayerAddSelection;
   var currentMachineAddSelection = "";
+  var currentDistroAddSelection = "";
 
   var urlParams = libtoaster.parseUrlParams();
 
@@ -45,6 +53,17 @@
       updateMachineName(prjInfo.machine.name);
     }
 
+    /* If we're receiving a distro set from the url and it's different from
+     * our current distro then activate the set distro sequence.
+     */
+    if (urlParams.hasOwnProperty('setDistro') &&
+        urlParams.setDistro !== prjInfo.distro.name){
+      distroChangeInput.val(urlParams.setDistro);
+      distroChangeBtn.click();
+    } else {
+      updateDistroName(prjInfo.distro.name);
+    }
+
    /* Now we're really ready show the page */
     $("#project-page").show();
 
@@ -278,6 +297,60 @@
   });
 
 
+  /* Change distro functionality */
+
+  distroChangeFormToggle.click(function(){
+    distroForm.slideDown();
+    distroNameTitle.hide();
+    $(this).hide();
+  });
+
+  distroChangeCancel.click(function(){
+    distroForm.slideUp(function(){
+      distroNameTitle.show();
+      distroChangeFormToggle.show();
+    });
+  });
+
+  function updateDistroName(distroName){
+    distroChangeInput.val(distroName);
+    distroNameTitle.text(distroName);
+  }
+
+  libtoaster.makeTypeahead(distroChangeInput,
+                           libtoaster.ctx.distrosTypeAheadUrl,
+                           { }, function(item){
+    currentDistroAddSelection = item.name;
+    distroChangeBtn.removeAttr("disabled");
+  });
+
+  distroChangeBtn.click(function(e){
+    e.preventDefault();
+    /* We accept any value, whether or not it came from the typeahead */
+    if (distroChangeInput.val().length === 0)
+      return;
+
+    currentDistroAddSelection = distroChangeInput.val();
+
+    libtoaster.editCurrentProject(
+      { distroName : currentDistroAddSelection },
+      function(){
+        /* Distro changed successfully */
+        updateDistroName(currentDistroAddSelection);
+        distroChangeCancel.click();
+
+        /* Show the alert message */
+        var message = $('<span>You have changed the distro to: <strong><span id="notify-machine-name"></span></strong></span>');
+        message.find("#notify-machine-name").text(currentDistroAddSelection);
+        libtoaster.showChangeNotification(message);
+    },
+      function(){
+        /* Failed to change the distro */
+        console.warn("Failed to change distro");
+    });
+  });
+
+
   /* Change release functionality */
   function updateProjectRelease(release){
     releaseTitle.text(release.description);
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js
index 92ab2d6..69220aa 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/static/js/projecttopbar.js
@@ -73,14 +73,14 @@
 
   newBuildTargetBuildBtn.click(function (e) {
     e.preventDefault();
-    if (!newBuildTargetInput.val()) {
+    if (!newBuildTargetInput.val().trim()) {
       return;
     }
     /* We use the value of the input field so as to maintain any command also
      * added e.g. core-image-minimal:clean and because we can build targets
      * that toaster doesn't yet know about
      */
-    selectedTarget = { name: newBuildTargetInput.val() };
+    selectedTarget = { name: newBuildTargetInput.val().trim() };
 
     /* Fire off the build */
     libtoaster.startABuild(null, selectedTarget.name,
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/tables.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/tables.py
index e2d23c1..dca2fa2 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/tables.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/tables.py
@@ -23,6 +23,7 @@
 from orm.models import Recipe, ProjectLayer, Layer_Version, Machine, Project
 from orm.models import CustomImageRecipe, Package, Target, Build, LogMessage, Task
 from orm.models import CustomImagePackage, Package_DependencyManager
+from orm.models import Distro
 from django.db.models import Q, Max, Sum, Count, When, Case, Value, IntegerField
 from django.conf.urls import url
 from django.core.urlresolvers import reverse, resolve
@@ -1536,3 +1537,93 @@
             context['build_in_progress_none_completed'] = False
 
         return context
+
+
+class DistrosTable(ToasterTable):
+    """Table of Distros in Toaster"""
+
+    def __init__(self, *args, **kwargs):
+        super(DistrosTable, self).__init__(*args, **kwargs)
+        self.empty_state = "Toaster has no distro information for this project. Sadly, distro information cannot be obtained from builds, so this page will remain empty."
+        self.title = "Compatible Distros"
+        self.default_orderby = "name"
+
+    def get_context_data(self, **kwargs):
+        context = super(DistrosTable, self).get_context_data(**kwargs)
+        context['project'] = Project.objects.get(pk=kwargs['pid'])
+        return context
+
+    def setup_filters(self, *args, **kwargs):
+        project = Project.objects.get(pk=kwargs['pid'])
+
+        in_current_project_filter = TableFilter(
+            "in_current_project",
+            "Filter by project Distros"
+        )
+
+        in_project_action = TableFilterActionToggle(
+            "in_project",
+            "Distro provided by layers added to this project",
+            ProjectFilters.in_project(self.project_layers)
+        )
+
+        not_in_project_action = TableFilterActionToggle(
+            "not_in_project",
+            "Distros provided by layers not added to this project",
+            ProjectFilters.not_in_project(self.project_layers)
+        )
+
+        in_current_project_filter.add_action(in_project_action)
+        in_current_project_filter.add_action(not_in_project_action)
+        self.add_filter(in_current_project_filter)
+
+    def setup_queryset(self, *args, **kwargs):
+        prj = Project.objects.get(pk = kwargs['pid'])
+        self.queryset = prj.get_all_compatible_distros()
+        self.queryset = self.queryset.order_by(self.default_orderby)
+
+        self.static_context_extra['current_layers'] = \
+                self.project_layers = \
+                prj.get_project_layer_versions(pk=True)
+
+    def setup_columns(self, *args, **kwargs):
+
+        self.add_column(title="Distro",
+                        hideable=False,
+                        orderable=True,
+                        field_name="name")
+
+        self.add_column(title="Description",
+                        field_name="description")
+
+        layer_link_template = '''
+        <a href="{% url 'layerdetails' extra.pid data.layer_version.id %}">
+        {{data.layer_version.layer.name}}</a>
+        '''
+
+        self.add_column(title="Layer",
+                        static_data_name="layer_version__layer__name",
+                        static_data_template=layer_link_template,
+                        orderable=True)
+
+        self.add_column(title="Git revision",
+                        help_text="The Git branch, tag or commit. For the layers from the OpenEmbedded layer source, the revision is always the branch compatible with the Yocto Project version you selected for this project",
+                        hidden=True,
+                        field_name="layer_version__get_vcs_reference")
+
+        wrtemplate_file_template = '''<code>conf/distro/{{data.name}}.conf</code>
+        <a href="{{data.get_vcs_distro_file_link_url}}" target="_blank"><span class="glyphicon glyphicon-new-window"></span></a>'''
+
+        self.add_column(title="Distro file",
+                        hidden=True,
+                        static_data_name="templatefile",
+                        static_data_template=wrtemplate_file_template)
+
+
+        self.add_column(title="Select",
+                        help_text="Sets the selected distro to the project",
+                        hideable=False,
+                        filter_name="in_current_project",
+                        static_data_name="add-del-layers",
+                        static_data_template='{% include "distro_btn.html" %}')
+
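
The Select column above offers "Set distro" only when the layer providing the distro is already in the project, and an "Add layer" button otherwise (see distro_btn.html below). A framework-free sketch of that split; the dictionaries and key names are illustrative:

    def split_distros_by_project(distros, project_layer_ids):
        """Partition distros by whether the layer that provides them is in the project."""
        in_project, not_in_project = [], []
        for distro in distros:
            bucket = (in_project if distro["layer_version_id"] in project_layer_ids
                      else not_in_project)
            bucket.append(distro)
        return in_project, not_in_project

    # Only "poky" is directly selectable because layer 3 is already added:
    distros = [{"name": "poky", "layer_version_id": 3},
               {"name": "poky-tiny", "layer_version_id": 7}]
    print(split_distros_by_project(distros, {3}))
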
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/base.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/base.html
index 32b4979..4f72064 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/base.html
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/base.html
@@ -49,6 +49,7 @@
         recipesTypeAheadUrl: {% url 'xhr_recipestypeahead' project.id as paturl%}{{paturl|json}},
         layersTypeAheadUrl: {% url 'xhr_layerstypeahead' project.id as paturl%}{{paturl|json}},
         machinesTypeAheadUrl: {% url 'xhr_machinestypeahead' project.id as paturl%}{{paturl|json}},
+        distrosTypeAheadUrl: {% url 'xhr_distrostypeahead' project.id as paturl%}{{paturl|json}},
         projectBuildsUrl: {% url 'projectbuilds' project.id as pburl %}{{pburl|json}},
         xhrCustomRecipeUrl : "{% url 'xhr_customrecipe' %}",
         projectId : {{project.id}},
@@ -109,6 +110,7 @@
                 All builds
               </a>
               </li>
+              {% if project_enable %}
               <li id="navbar-all-projects"
               {% if request.resolver_match.url_name == 'all-projects'  %}
               class="active"
@@ -118,6 +120,7 @@
                 All projects
               </a>
               </li>
+              {% endif %}
             {% endif %}
               <li id="navbar-docs">
               <a target="_blank" href="http://www.yoctoproject.org/docs/latest/toaster-manual/toaster-manual.html">
@@ -126,7 +129,9 @@
               </a>
               </li>
             </ul>
+            {% if project_enable %}
             <a class="btn btn-default navbar-btn navbar-right" id="new-project-button" href="{% url 'newproject' %}">New project</a>
+            {% endif %}
           </div>
       </div>
     </nav>
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/baseprojectpage.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/baseprojectpage.html
index 8427d25..f2bb2eb 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/baseprojectpage.html
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/baseprojectpage.html
@@ -32,6 +32,7 @@
       <li><a href="{% url 'projectsoftwarerecipes' project.id %}">Software recipes</a></li>
       <li><a href="{% url 'projectmachines' project.id %}">Machines</a></li>
       <li><a href="{% url 'projectlayers' project.id %}">Layers</a></li>
+      <li><a href="{% url 'projectdistros' project.id %}">Distros</a></li>
       <li class="nav-header">Extra configuration</li>
       <li><a href="{% url 'projectconf' project.id %}">BitBake variables</a></li>
 
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/distro_btn.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/distro_btn.html
new file mode 100644
index 0000000..fac7947
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/distro_btn.html
@@ -0,0 +1,20 @@
+<a href="{% url 'project' extra.pid %}?setDistro={{data.name}}" class="btn btn-default btn-block layer-exists-{{data.layer_version.id}}"
+    {% if data.layer_version.pk not in extra.current_layers %}
+    style="display:none;"
+    {% endif %}>
+  Set distro</a>
+<a class="btn btn-default btn-block layerbtn layer-add-{{data.layer_version.id}}" data-layer='{
+    "id": {{data.layer_version.id}},
+    "name":  "{{data.layer_version.layer.name}}",
+    "xhrLayerUrl": "{% url "xhr_layer" extra.pid data.pk %}",
+    "layerdetailurl": "{%url 'layerdetails' extra.pid data.layer_version.id %}"
+    }' data-directive="add"
+    {% if data.layer_version.pk in extra.current_layers %}
+    style="display:none;"
+    {% endif %}
+>
+  <span class="glyphicon glyphicon-plus"></span>
+  Add layer
+  <span class="glyphicon glyphicon-question-sign get-help" title="To select this distro, you must first add the {{data.layer_version.layer.name}} layer to your project"></i>
+</a>
+
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/health.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/health.html
new file mode 100644
index 0000000..f17fdbc
--- /dev/null
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/health.html
@@ -0,0 +1,6 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head><title>Toaster Health</title></head>
+  <body>Ok</body>
+</html>
+
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/landing.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/landing.html
index 4986632..70c7359 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/landing.html
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/landing.html
@@ -14,12 +14,20 @@
 
               <p>A web interface to <a href="http://www.openembedded.org">OpenEmbedded</a> and <a href="http://www.yoctoproject.org/tools-resources/projects/bitbake">BitBake</a>, the <a href="http://www.yoctoproject.org">Yocto Project</a> build system.</p>
 
+		          <p class="top-air">
+		            <a class="btn btn-info btn-lg" href="http://www.yoctoproject.org/docs/latest/toaster-manual/toaster-manual.html#toaster-manual-setup-and-use">
+			            Toaster is ready to capture your command line builds
+		            </a>
+		          </p>
+
 		          {% if lvs_nos %}
+                    {% if project_enable %}
 		            <p class="top-air">
 		              <a class="btn btn-primary btn-lg" href="{% url 'newproject' %}">
-			              To start building, create your first Toaster project
+			              Create your first Toaster project to run managed builds
 		              </a>
 		            </p>
+                    {% endif %}
 		          {% else %}
                 <div class="alert alert-info lead top-air">
                   Toaster has no layer information. Without layer information, you cannot run builds. To generate layer information you can:
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/mrb_section.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/mrb_section.html
index b761ffe..c5b9fe9 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/mrb_section.html
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/mrb_section.html
@@ -64,7 +64,9 @@
   </div>
 
   <div data-build-state="<%:state%>">
-    <%if state == 'Parsing'%>
+    <%if state == 'Cloning'%>
+      <%include tmpl='#cloning-repos-build-template'/%>
+    <%else state == 'Parsing'%>
       <%include tmpl='#parsing-recipes-build-template'/%>
     <%else state == 'Queued'%>
       <%include tmpl='#queued-build-template'/%>
@@ -98,6 +100,31 @@
   </div>
 </script>
 
+<!-- cloning repos build -->
+<script id="cloning-repos-build-template" type="text/x-jsrender">
+  <!-- progress bar and clone completion percentage -->
+  <div data-role="build-status" class="col-md-4 col-md-offset-1 progress-info">
+    <!-- progress bar -->
+    <div class="progress">
+      <div id="repos-cloned-percentage-bar-<%:id%>"
+           style="width: <%:repos_cloned_percentage%>%;"
+           class="progress-bar">
+      </div>
+    </div>
+  </div>
+
+  <div class="col-md-4 progress-info">
+    <!-- clone completion percentage -->
+    <span class="glyphicon glyphicon-question-sign get-help get-help-blue"
+          title="Toaster is cloning the repos required for your build">
+    </span>
+
+    Cloning <span id="repos-cloned-percentage-<%:id%>"><%:repos_cloned_percentage%></span>% complete
+
+    <%include tmpl='#cancel-template'/%>
+  </div>
+</script>
+
 <!-- parsing recipes build -->
 <script id="parsing-recipes-build-template" type="text/x-jsrender">
   <!-- progress bar and parse completion percentage -->
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/project.html b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/project.html
index ab7e665..11603d1 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/project.html
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/templates/project.html
@@ -77,6 +77,22 @@
       </form>
     </div>
 
+    <div class="well well-transparent" id="distro-section">
+      <h3>Distro</h3>
+
+      <p class="lead"><span id="project-distro-name"></span> <span class="glyphicon glyphicon-edit" id="change-distro-toggle"></span></p>
+
+      <form id="select-distro-form" style="display:none;" class="form-inline">
+        <span class="help-block">Distro suggestions come from the Layer Index</a></span>
+        <div class="form-group">
+          <input class="form-control" id="distro-change-input" autocomplete="off" value="" data-provide="typeahead" data-minlength="1" data-autocomplete="off" type="text">
+        </div>
+        <button id="distro-change-btn" class="btn btn-default" type="button">Save</button>
+        <a href="#" id="cancel-distro-change" class="btn btn-link">Cancel</a>
+        <p class="form-link"><a href="{% url 'projectdistros' project.id %}">View compatible distros</a></p>
+      </form>
+    </div>
+
     <div class="well well-transparent">
       <h3>Most built recipes</h3>
 
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/typeaheads.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/typeaheads.py
index 58c650f..5aa0f8d 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/typeaheads.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/typeaheads.py
@@ -100,6 +100,36 @@
         return results
 
 
+class DistrosTypeAhead(ToasterTypeAhead):
+    """ Typeahead for all the distros available in the current project's
+    configuration """
+    def __init__(self):
+        super(DistrosTypeAhead, self).__init__()
+
+    def apply_search(self, search_term, prj, request):
+        distros = prj.get_available_distros()
+        distros = distros.order_by("name")
+
+        primary_results = distros.filter(name__istartswith=search_term)
+        secondary_results = distros.filter(name__icontains=search_term).exclude(pk__in=primary_results)
+        tertiary_results = distros.filter(layer_version__layer__name__icontains=search_term).exclude(pk__in=primary_results).exclude(pk__in=secondary_results)
+
+        results = []
+
+        for distro in list(primary_results) + list(secondary_results) + list(tertiary_results):
+
+            detail = "[ %s ]" % (distro.layer_version.layer.name)
+            needed_fields = {
+                'id' : distro.pk,
+                'name' : distro.name,
+                'detail' : detail,
+            }
+
+            results.append(needed_fields)
+
+        return results
+
+
 class RecipesTypeAhead(ToasterTypeAhead):
     """ Typeahead for all the recipes available in the current project's
     configuration """
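
DistrosTypeAhead above ranks suggestions in three tiers: distro names starting with the search term, names merely containing it, then matches on the providing layer's name, with earlier tiers excluded from later ones. A plain-Python sketch of the same ranking over an in-memory list; the data shape is illustrative:

    def rank_distro_matches(distros, term):
        """Order matches by name prefix, name substring, then layer-name substring."""
        term = term.lower()
        primary = [d for d in distros if d["name"].lower().startswith(term)]
        secondary = [d for d in distros
                     if term in d["name"].lower() and d not in primary]
        tertiary = [d for d in distros
                    if term in d["layer"].lower()
                    and d not in primary and d not in secondary]
        return [(d["name"], d["layer"]) for d in primary + secondary + tertiary]

    distros = [{"name": "poky", "layer": "meta-poky"},
               {"name": "nodistro", "layer": "openembedded-core"},
               {"name": "poky-tiny", "layer": "meta-poky"}]
    print(rank_distro_matches(distros, "poky"))
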
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/urls.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/urls.py
index d92f190..e07b0ef 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/urls.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/urls.py
@@ -1,7 +1,7 @@
 #
 # BitBake Toaster Implementation
 #
-# Copyright (C) 2013        Intel Corporation
+# Copyright (C) 2013-2017    Intel Corporation
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 2 as
@@ -16,7 +16,7 @@
 # with this program; if not, write to the Free Software Foundation, Inc.,
 # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 
-from django.conf.urls import patterns, include, url
+from django.conf.urls import include, url
 from django.views.generic import RedirectView, TemplateView
 
 from django.http import HttpResponseBadRequest
@@ -25,49 +25,50 @@
 from toastergui import typeaheads
 from toastergui import api
 from toastergui import widgets
+from toastergui import views
 
-urlpatterns = patterns('toastergui.views',
+urlpatterns = [
         # landing page
-        url(r'^landing/$', 'landing', name='landing'),
+        url(r'^landing/$', views.landing, name='landing'),
 
         url(r'^builds/$',
             tables.AllBuildsTable.as_view(template_name="builds-toastertable.html"),
             name='all-builds'),
 
         # build info navigation
-        url(r'^build/(?P<build_id>\d+)$', 'builddashboard', name="builddashboard"),
+        url(r'^build/(?P<build_id>\d+)$', views.builddashboard, name="builddashboard"),
         url(r'^build/(?P<build_id>\d+)/tasks/$',
             buildtables.BuildTasksTable.as_view(
                 template_name="buildinfo-toastertable.html"),
             name='tasks'),
 
-        url(r'^build/(?P<build_id>\d+)/task/(?P<task_id>\d+)$', 'task', name='task'),
+        url(r'^build/(?P<build_id>\d+)/task/(?P<task_id>\d+)$', views.task, name='task'),
 
         url(r'^build/(?P<build_id>\d+)/recipes/$',
             buildtables.BuiltRecipesTable.as_view(
                 template_name="buildinfo-toastertable.html"),
             name='recipes'),
 
-        url(r'^build/(?P<build_id>\d+)/recipe/(?P<recipe_id>\d+)/active_tab/(?P<active_tab>\d{1})$', 'recipe', name='recipe'),
+        url(r'^build/(?P<build_id>\d+)/recipe/(?P<recipe_id>\d+)/active_tab/(?P<active_tab>\d{1})$', views.recipe, name='recipe'),
 
-        url(r'^build/(?P<build_id>\d+)/recipe/(?P<recipe_id>\d+)$', 'recipe', name='recipe'),
-        url(r'^build/(?P<build_id>\d+)/recipe_packages/(?P<recipe_id>\d+)$', 'recipe_packages', name='recipe_packages'),
+        url(r'^build/(?P<build_id>\d+)/recipe/(?P<recipe_id>\d+)$', views.recipe, name='recipe'),
+        url(r'^build/(?P<build_id>\d+)/recipe_packages/(?P<recipe_id>\d+)$', views.recipe_packages, name='recipe_packages'),
 
         url(r'^build/(?P<build_id>\d+)/packages/$',
             buildtables.BuiltPackagesTable.as_view(
                 template_name="buildinfo-toastertable.html"),
             name='packages'),
 
-        url(r'^build/(?P<build_id>\d+)/package/(?P<package_id>\d+)$', 'package_built_detail',
+        url(r'^build/(?P<build_id>\d+)/package/(?P<package_id>\d+)$', views.package_built_detail,
                 name='package_built_detail'),
         url(r'^build/(?P<build_id>\d+)/package_built_dependencies/(?P<package_id>\d+)$',
-            'package_built_dependencies', name='package_built_dependencies'),
+            views.package_built_dependencies, name='package_built_dependencies'),
         url(r'^build/(?P<build_id>\d+)/package_included_detail/(?P<target_id>\d+)/(?P<package_id>\d+)$',
-            'package_included_detail', name='package_included_detail'),
+            views.package_included_detail, name='package_included_detail'),
         url(r'^build/(?P<build_id>\d+)/package_included_dependencies/(?P<target_id>\d+)/(?P<package_id>\d+)$',
-            'package_included_dependencies', name='package_included_dependencies'),
+            views.package_included_dependencies, name='package_included_dependencies'),
         url(r'^build/(?P<build_id>\d+)/package_included_reverse_dependencies/(?P<target_id>\d+)/(?P<package_id>\d+)$',
-            'package_included_reverse_dependencies', name='package_included_reverse_dependencies'),
+            views.package_included_reverse_dependencies, name='package_included_reverse_dependencies'),
 
         url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)$',
             buildtables.InstalledPackagesTable.as_view(
@@ -75,11 +76,11 @@
             name='target'),
 
 
-        url(r'^dentries/build/(?P<build_id>\d+)/target/(?P<target_id>\d+)$', 'xhr_dirinfo', name='dirinfo_ajax'),
-        url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)/dirinfo$', 'dirinfo', name='dirinfo'),
-        url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)/dirinfo_filepath/_(?P<file_path>(?:/[^/\n]+)*)$', 'dirinfo', name='dirinfo_filepath'),
-        url(r'^build/(?P<build_id>\d+)/configuration$', 'configuration', name='configuration'),
-        url(r'^build/(?P<build_id>\d+)/configvars$', 'configvars', name='configvars'),
+        url(r'^dentries/build/(?P<build_id>\d+)/target/(?P<target_id>\d+)$', views.xhr_dirinfo, name='dirinfo_ajax'),
+        url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)/dirinfo$', views.dirinfo, name='dirinfo'),
+        url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)/dirinfo_filepath/_(?P<file_path>(?:/[^/\n]+)*)$', views.dirinfo, name='dirinfo_filepath'),
+        url(r'^build/(?P<build_id>\d+)/configuration$', views.configuration, name='configuration'),
+        url(r'^build/(?P<build_id>\d+)/configvars$', views.configvars, name='configvars'),
         url(r'^build/(?P<build_id>\d+)/buildtime$',
             buildtables.BuildTimeTable.as_view(
                 template_name="buildinfo-toastertable.html"),
@@ -97,26 +98,26 @@
 
         # image information dir
         url(r'^build/(?P<build_id>\d+)/target/(?P<target_id>\d+)/packagefile/(?P<packagefile_id>\d+)$',
-             'image_information_dir', name='image_information_dir'),
+             views.image_information_dir, name='image_information_dir'),
 
         # build download artifact
-        url(r'^build/(?P<build_id>\d+)/artifact/(?P<artifact_type>\w+)/id/(?P<artifact_id>\w+)', 'build_artifact', name="build_artifact"),
+        url(r'^build/(?P<build_id>\d+)/artifact/(?P<artifact_type>\w+)/id/(?P<artifact_id>\w+)', views.build_artifact, name="build_artifact"),
 
         # project URLs
-        url(r'^newproject/$', 'newproject', name='newproject'),
+        url(r'^newproject/$', views.newproject, name='newproject'),
 
         url(r'^projects/$',
             tables.ProjectsTable.as_view(template_name="projects-toastertable.html"),
             name='all-projects'),
 
-        url(r'^project/(?P<pid>\d+)/$', 'project', name='project'),
-        url(r'^project/(?P<pid>\d+)/configuration$', 'projectconf', name='projectconf'),
+        url(r'^project/(?P<pid>\d+)/$', views.project, name='project'),
+        url(r'^project/(?P<pid>\d+)/configuration$', views.projectconf, name='projectconf'),
         url(r'^project/(?P<pid>\d+)/builds/$',
             tables.ProjectBuildsTable.as_view(template_name="projectbuilds-toastertable.html"),
             name='projectbuilds'),
 
         # the import layer is a project-specific functionality;
-        url(r'^project/(?P<pid>\d+)/importlayer$', 'importlayer', name='importlayer'),
+        url(r'^project/(?P<pid>\d+)/importlayer$', views.importlayer, name='importlayer'),
 
         # the table pages that have been converted to ToasterTable widget
         url(r'^project/(?P<pid>\d+)/machines/$',
@@ -142,7 +143,7 @@
             name="projectlayers"),
 
         url(r'^project/(?P<pid>\d+)/layer/(?P<layerid>\d+)$',
-            'layerdetails', name='layerdetails'),
+            views.layerdetails, name='layerdetails'),
 
         url(r'^project/(?P<pid>\d+)/layer/(?P<layerid>\d+)/recipes/$',
             tables.LayerRecipesTable.as_view(template_name="generic-toastertable-page.html"),
@@ -157,6 +158,11 @@
             name=tables.LayerMachinesTable.__name__.lower()),
 
 
+        url(r'^project/(?P<pid>\d+)/distros/$',
+            tables.DistrosTable.as_view(template_name="generic-toastertable-page.html"),
+            name="projectdistros"),
+
+
         url(r'^project/(?P<pid>\d+)/customrecipe/(?P<custrecipeid>\d+)/selectpackages/$',
             tables.SelectPackagesTable.as_view(), name="recipeselectpackages"),
 
@@ -166,7 +172,7 @@
             name="customrecipe"),
 
         url(r'^project/(?P<pid>\d+)/customrecipe/(?P<recipe_id>\d+)/download$',
-            'customrecipe_download',
+            views.customrecipe_download,
             name="customrecipedownload"),
 
         url(r'^project/(?P<pid>\d+)/recipe/(?P<recipe_id>\d+)$',
@@ -186,9 +192,12 @@
             typeaheads.GitRevisionTypeAhead.as_view(),
             name='xhr_gitrevtypeahead'),
 
-        url(r'^xhr_testreleasechange/(?P<pid>\d+)$', 'xhr_testreleasechange',
+        url(r'^xhr_typeahead/(?P<pid>\d+)/distros$',
+            typeaheads.DistrosTypeAhead.as_view(), name='xhr_distrostypeahead'),
+
+        url(r'^xhr_testreleasechange/(?P<pid>\d+)$', views.xhr_testreleasechange,
             name='xhr_testreleasechange'),
-        url(r'^xhr_configvaredit/(?P<pid>\d+)$', 'xhr_configvaredit',
+        url(r'^xhr_configvaredit/(?P<pid>\d+)$', views.xhr_configvaredit,
             name='xhr_configvaredit'),
 
         url(r'^xhr_layer/(?P<pid>\d+)/(?P<layerversion_id>\d+)$',
@@ -200,7 +209,7 @@
             name='xhr_layer'),
 
         # JS Unit tests
-        url(r'^js-unit-tests/$', 'jsunittests', name='js-unit-tests'),
+        url(r'^js-unit-tests/$', views.jsunittests, name='js-unit-tests'),
 
         # image customisation functionality
         url(r'^xhr_customrecipe/(?P<recipe_id>\d+)'
@@ -235,6 +244,11 @@
         url(r'^mostrecentbuilds$', widgets.MostRecentBuildsView.as_view(),
             name='most_recent_builds'),
 
-          # default redirection
+        # JSON data for aggregators
+        url(r'^api/builds$', views.json_builds, name='json_builds'),
+        url(r'^api/building$', views.json_building, name='json_building'),
+        url(r'^api/build/(?P<build_id>\d+)$', views.json_build, name='json_build'),
+
+        # default redirection
         url(r'^$', RedirectView.as_view(url='landing', permanent=True)),
-)
+]
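
The rewrite of urls.py above is the standard migration for Django 1.8+, where the patterns() helper was deprecated and later removed: urlpatterns becomes a plain list of url() entries and string view names become imported callables. A minimal sketch of the resulting shape; the module path is a placeholder, while the two view names mirror entries from the diff:

    # urls.py -- illustrative shape only
    from django.conf.urls import url
    from toastergui import views

    urlpatterns = [
        # Each entry maps a regex to an imported callable rather than a dotted string.
        url(r'^landing/$', views.landing, name='landing'),
        url(r'^build/(?P<build_id>\d+)$', views.builddashboard, name='builddashboard'),
    ]
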
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/views.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/views.py
index 75c5911..34ed2b2 100755
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/views.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/views.py
@@ -35,7 +35,7 @@
 from django.core.urlresolvers import reverse, resolve
 from django.core.exceptions import MultipleObjectsReturned, ObjectDoesNotExist
 from django.core.paginator import Paginator, EmptyPage, PageNotAnInteger
-from django.http import HttpResponseNotFound
+from django.http import HttpResponseNotFound, JsonResponse
 from django.utils import timezone
 from datetime import timedelta, datetime
 from toastergui.templatetags.projecttags import json as jsonfilter
@@ -49,6 +49,8 @@
 
 logger = logging.getLogger("toaster")
 
+# Project creation and managed build enable
+project_enable = ('1' == os.environ.get('TOASTER_BUILDSERVER'))
 
 class MimeTypeFinder(object):
     # setting this to False enables additional non-standard mimetypes
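
The project_enable flag above, exported to templates via toaster_render() and ToasterTable further down, gates project creation and the "All projects" navigation behind an environment variable, so an analysis-only Toaster instance hides its managed-build UI. A minimal sketch of the same check:

    import os

    def managed_builds_enabled():
        """True only when the Toaster process was started with TOASTER_BUILDSERVER=1."""
        return os.environ.get('TOASTER_BUILDSERVER') == '1'
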
@@ -65,6 +67,12 @@
             guessed_type = 'application/octet-stream'
         return guessed_type
 
+# single point to add global values into the context before rendering
+def toaster_render(request, page, context):
+    context['project_enable'] = project_enable
+    return render(request, page, context)
+
+
 # all new sessions should come through the landing page;
 # determine in which mode we are running in, and redirect appropriately
 def landing(request):
@@ -86,7 +94,7 @@
 
     context = {'lvs_nos' : Layer_Version.objects.all().count()}
 
-    return render(request, 'landing.html', context)
+    return toaster_render(request, 'landing.html', context)
 
 def objtojson(obj):
     from django.db.models.query import QuerySet
@@ -277,7 +285,7 @@
             return None, invalid + str(field_input_list)
 
         # Check we are looking for a valid field
-        valid_fields = model._meta.get_all_field_names()
+        valid_fields = [f.name for f in model._meta.get_fields()]
         for field in field_input_list[0].split(AND_VALUE_SEPARATOR):
             if True in [field.startswith(x) for x in valid_fields]:
                 break
@@ -457,10 +465,15 @@
         npkg = 0
         pkgsz = 0
         package = None
-        for package in Package.objects.filter(id__in = [x.package_id for x in t.target_installed_package_set.all()]):
-            pkgsz = pkgsz + package.size
-            if package.installed_name:
-                npkg = npkg + 1
+        # Chunk the query to avoid "too many SQL variables" error
+        package_set = t.target_installed_package_set.all()
+        package_set_len = len(package_set)
+        for ps_start in range(0,package_set_len,500):
+            ps_stop = min(ps_start+500,package_set_len)
+            for package in Package.objects.filter(id__in = [x.package_id for x in package_set[ps_start:ps_stop]]):
+                pkgsz = pkgsz + package.size
+                if package.installed_name:
+                    npkg = npkg + 1
         elem['npkg'] = npkg
         elem['pkgsz'] = pkgsz
         ti = Target_Image_File.objects.filter(target_id = t.id)
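
The loop above batches the id__in lookup into slices of 500 because SQLite caps the number of bound variables per statement, which previously surfaced as a "too many SQL variables" error on large images. The same pattern reduced to a generic helper; the batch size mirrors the patch and the fetch callable stands in for Package.objects.filter(id__in=...):

    def in_chunks(ids, fetch, chunk_size=500):
        """Yield the results of fetch(slice) for successive slices of ids."""
        for start in range(0, len(ids), chunk_size):
            for row in fetch(ids[start:start + chunk_size]):
                yield row

    # Example with a stand-in for the ORM call:
    fake_db = {i: "pkg%d" % i for i in range(1200)}
    rows = list(in_chunks(list(fake_db), lambda chunk: [fake_db[i] for i in chunk]))
    assert len(rows) == 1200
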
@@ -514,7 +527,7 @@
             'packagecount'    : packageCount,
             'logmessages'     : logmessages,
     }
-    return render( request, template, context )
+    return toaster_render( request, template, context )
 
 
 
@@ -586,7 +599,7 @@
             build__completed_on__lt=task_object.build.completed_on).exclude(
             order__isnull=True).exclude(outcome=Task.OUTCOME_NA).order_by('-build__completed_on')
 
-    return render( request, template, context )
+    return toaster_render( request, template, context )
 
 def recipe(request, build_id, recipe_id, active_tab="1"):
     template = "recipe.html"
@@ -613,7 +626,7 @@
             'package_count' : package_count,
             'tab_states' : tab_states,
     }
-    return render(request, template, context)
+    return toaster_render(request, template, context)
 
 def recipe_packages(request, build_id, recipe_id):
     template = "recipe_packages.html"
@@ -658,7 +671,7 @@
                 },
            ]
        }
-    response = render(request, template, context)
+    response = toaster_render(request, template, context)
     _set_parameters_values(pagesize, orderby, request)
     return response
 
@@ -780,7 +793,7 @@
                 'dir_list': dir_list,
                 'file_path': file_path,
               }
-    return render(request, template, context)
+    return toaster_render(request, template, context)
 
 def _find_task_dep(task_object):
     tdeps = Task_Dependency.objects.filter(task=task_object).filter(depends_on__order__gt=0)
@@ -832,7 +845,7 @@
                     'build': build,
                     'project': build.project,
                     'targets': Target.objects.filter(build=build_id)})
-    return render(request, template, context)
+    return toaster_render(request, template, context)
 
 
 def configvars(request, build_id):
@@ -921,7 +934,7 @@
                 ],
             }
 
-    response = render(request, template, context)
+    response = toaster_render(request, template, context)
     _set_parameters_values(pagesize, orderby, request)
     return response
 
@@ -934,7 +947,7 @@
         'project': build.project,
         'objects' : files
     }
-    return render(request, template, context)
+    return toaster_render(request, template, context)
 
 
 # A set of dependency types valid for both included and built package views
@@ -1087,7 +1100,7 @@
     if paths.all().count() < 2:
         context['disable_sort'] = True;
 
-    response = render(request, template, context)
+    response = toaster_render(request, template, context)
     _set_parameters_values(pagesize, orderby, request)
     return response
 
@@ -1106,7 +1119,7 @@
             'other_deps' :   dependencies['other_deps'],
             'dependency_count' : _get_package_dependency_count(package, -1,  False)
     }
-    return render(request, template, context)
+    return toaster_render(request, template, context)
 
 
 def package_included_detail(request, build_id, target_id, package_id):
@@ -1152,7 +1165,7 @@
     }
     if paths.all().count() < 2:
         context['disable_sort'] = True
-    response = render(request, template, context)
+    response = toaster_render(request, template, context)
     _set_parameters_values(pagesize, orderby, request)
     return response
 
@@ -1176,7 +1189,7 @@
             'reverse_count' : _get_package_reverse_dep_count(package, target_id),
             'dependency_count' : _get_package_dependency_count(package, target_id, True)
     }
-    return render(request, template, context)
+    return toaster_render(request, template, context)
 
 def package_included_reverse_dependencies(request, build_id, target_id, package_id):
     template = "package_included_reverse_dependencies.html"
@@ -1227,7 +1240,7 @@
     }
     if objects.all().count() < 2:
         context['disable_sort'] = True
-    response = render(request, template, context)
+    response = toaster_render(request, template, context)
     _set_parameters_values(pagesize, orderby, request)
     return response
 
@@ -1251,6 +1264,89 @@
     }
     return ret
 
+# REST-based API calls to return build/building status to external Toaster
+# managers and aggregators via JSON
+
+def _json_build_status(build_id,extend):
+    build_stat = None
+    try:
+        build = Build.objects.get( pk = build_id )
+        build_stat = {}
+        build_stat['id'] = build.id
+        build_stat['name'] = build.build_name
+        build_stat['machine'] = build.machine
+        build_stat['distro'] = build.distro
+        build_stat['start'] = build.started_on
+        # look up target name
+        target= Target.objects.get( build = build )
+        if target:
+            if target.task:
+                build_stat['target'] = '%s:%s' % (target.target,target.task)
+            else:
+                build_stat['target'] = '%s' % (target.target)
+        else:
+            build_stat['target'] = ''
+        # look up project name
+        project = Project.objects.get( build = build )
+        if project:
+            build_stat['project'] = project.name
+        else:
+            build_stat['project'] = ''
+        if Build.IN_PROGRESS == build.outcome:
+            now = timezone.now()
+            timediff = now - build.started_on
+            build_stat['seconds']='%.3f' % timediff.total_seconds()
+            build_stat['clone']='%d:%d' % (build.repos_cloned,build.repos_to_clone)
+            build_stat['parse']='%d:%d' % (build.recipes_parsed,build.recipes_to_parse)
+            tf = Task.objects.filter(build = build)
+            tfc = tf.count()
+            if tfc > 0:
+                tfd = tf.exclude(order__isnull=True).count()
+            else:
+                tfd = 0
+            build_stat['task']='%d:%d' % (tfd,tfc)
+        else:
+            build_stat['outcome'] = build.get_outcome_text()
+            timediff = build.completed_on - build.started_on
+            build_stat['seconds']='%.3f' % timediff.total_seconds()
+            build_stat['stop'] = build.completed_on
+            messages = LogMessage.objects.all().filter(build = build)
+            errors = len(messages.filter(level=LogMessage.ERROR) |
+                 messages.filter(level=LogMessage.EXCEPTION) |
+                 messages.filter(level=LogMessage.CRITICAL))
+            build_stat['errors'] = errors
+            warnings = len(messages.filter(level=LogMessage.WARNING))
+            build_stat['warnings'] = warnings
+        if extend:
+            build_stat['cooker_log'] = build.cooker_log_path
+    except Exception as e:
+        build_stat = str(e)
+    return build_stat
+
+def json_builds(request):
+    build_table = []
+    builds = []
+    try:
+        builds = Build.objects.exclude(outcome=Build.IN_PROGRESS).order_by("-started_on")
+        for build in builds:
+            build_table.append(_json_build_status(build.id,False))
+    except Exception as e:
+        build_table = str(e)
+    return JsonResponse({'builds' : build_table, 'count' : len(builds)})
+
+def json_building(request):
+    build_table = []
+    builds = []
+    try:
+        builds = Build.objects.filter(outcome=Build.IN_PROGRESS).order_by("-started_on")
+        for build in builds:
+            build_table.append(_json_build_status(build.id,False))
+    except Exception as e:
+        build_table = str(e)
+    return JsonResponse({'building' : build_table, 'count' : len(builds)})
+
+def json_build(request,build_id):
+    return JsonResponse({'build' : _json_build_status(build_id,True)})
 
 
 import toastermain.settings
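
The three endpoints above expose build status as plain JSON for external managers and aggregators: api/builds lists completed builds, api/building lists in-progress builds, and api/build/<id> returns one build including its cooker log path. A hedged client sketch, assuming the toastergui app is mounted at /toastergui/ and the default localhost:8000 address:

    import json
    from urllib.request import urlopen

    def toaster_builds(base="http://localhost:8000"):
        """Fetch the completed-build summary from a running Toaster instance."""
        with urlopen(base + "/toastergui/api/builds") as resp:
            return json.loads(resp.read().decode("utf-8"))

    # data = toaster_builds()
    # print(data["count"], "completed builds")
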
@@ -1277,6 +1373,9 @@
 
     # new project
     def newproject(request):
+        if not project_enable:
+            return redirect( landing )
+
         template = "newproject.html"
         context = {
             'email': request.user.email if request.user.is_authenticated() else '',
@@ -1291,7 +1390,7 @@
 
         if request.method == "GET":
             # render new project page
-            return render(request, template, context)
+            return toaster_render(request, template, context)
         elif request.method == "POST":
             mandatory_fields = ['projectname', 'ptype']
             try:
@@ -1331,7 +1430,7 @@
                     context['alert'] = "Your chosen username is already used"
                 else:
                     context['alert'] = str(e)
-                return render(request, template, context)
+                return toaster_render(request, template, context)
 
         raise Exception("Invalid HTTP method for this page")
 
@@ -1339,7 +1438,7 @@
     def project(request, pid):
         project = Project.objects.get(pk=pid)
         context = {"project": project}
-        return render(request, "project.html", context)
+        return toaster_render(request, "project.html", context)
 
     def jsunittests(request):
         """ Provides a page for the js unit tests """
@@ -1365,7 +1464,7 @@
                                               name="MACHINE",
                                               value="qemux86")
         context = {'project': new_project}
-        return render(request, "js-unit-tests.html", context)
+        return toaster_render(request, "js-unit-tests.html", context)
 
     from django.views.decorators.csrf import csrf_exempt
     @csrf_exempt
@@ -1500,7 +1599,7 @@
         context = {
             'project': Project.objects.get(id=pid),
         }
-        return render(request, template, context)
+        return toaster_render(request, template, context)
 
     def layerdetails(request, pid, layerid):
         project = Project.objects.get(pk=pid)
@@ -1529,7 +1628,7 @@
             'projectlayers': list(project_layers)
         }
 
-        return render(request, 'layerdetails.html', context)
+        return toaster_render(request, 'layerdetails.html', context)
 
 
     def get_project_configvars_context():
@@ -1619,7 +1718,7 @@
         except (ProjectVariable.DoesNotExist, BuildEnvironment.DoesNotExist):
             pass
 
-        return render(request, "projectconf.html", context)
+        return toaster_render(request, "projectconf.html", context)
 
     def _file_names_for_artifact(build, artifact_type, artifact_id):
         """
@@ -1686,6 +1785,7 @@
 
                 return response
             else:
-                return render(request, "unavailable_artifact.html")
+                return toaster_render(request, "unavailable_artifact.html")
         except (ObjectDoesNotExist, IOError):
-            return render(request, "unavailable_artifact.html")
+            return toaster_render(request, "unavailable_artifact.html")
+
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/widgets.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/widgets.py
index 6b7b981..a1792d9 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/widgets.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastergui/widgets.py
@@ -41,6 +41,7 @@
 import json
 import collections
 import re
+import os
 
 from toastergui.tablefilter import TableFilterMap
 
@@ -86,6 +87,9 @@
         context['table_name'] = type(self).__name__.lower()
         context['empty_state'] = self.empty_state
 
+        # global variables
+        context['project_enable'] = ('1' == os.environ.get('TOASTER_BUILDSERVER'))
+
         return context
 
     def get(self, request, *args, **kwargs):
@@ -511,6 +515,10 @@
                 int((build_obj.recipes_parsed /
                      build_obj.recipes_to_parse) * 100)
 
+            build['repos_cloned_percentage'] = \
+                int((build_obj.repos_cloned /
+                     build_obj.repos_to_clone) * 100)
+
             tasks_complete_percentage = 0
             if build_obj.outcome in (Build.SUCCEEDED, Build.FAILED):
                 tasks_complete_percentage = 100
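
widgets.py now reports repos_cloned_percentage alongside the parse and task percentages so the new "Cloning" state can draw a progress bar. A small sketch of the calculation with a guard for an empty clone list; the guard is an illustration-only addition, the patch divides directly:

    def clone_percentage(repos_cloned, repos_to_clone):
        """Integer percentage of repositories cloned, tolerating an empty clone list."""
        if not repos_to_clone:
            return 100  # nothing to clone counts as complete
        return int((repos_cloned / repos_to_clone) * 100)

    assert clone_percentage(3, 4) == 75
    assert clone_percentage(0, 0) == 100
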
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/management/commands/buildslist.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/management/commands/buildslist.py
index 8dfef0a..70b5812 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/management/commands/buildslist.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/management/commands/buildslist.py
@@ -1,13 +1,13 @@
-from django.core.management.base import NoArgsCommand, CommandError
+from django.core.management.base import BaseCommand, CommandError
 from orm.models import Build
 import os
 
 
 
-class Command(NoArgsCommand):
+class Command(BaseCommand):
     args    = ""
     help    = "Lists current builds"
 
-    def handle_noargs(self,**options):
+    def handle(self,**options):
         for b in Build.objects.all():
             print("%d: %s %s %s" % (b.pk, b.machine, b.distro, ",".join([x.target for x in b.target_set.all()])))
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/settings.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/settings.py
index 1fd649c..13541d3 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/settings.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/settings.py
@@ -24,7 +24,6 @@
 import os
 
 DEBUG = True
-TEMPLATE_DEBUG = DEBUG
 
 # Set to True to see the SQL queries in console
 SQL_DEBUG = False
@@ -161,12 +160,47 @@
 # Make this unique, and don't share it with anybody.
 SECRET_KEY = 'NOT_SUITABLE_FOR_HOSTED_DEPLOYMENT'
 
-# List of callables that know how to import templates from various sources.
-TEMPLATE_LOADERS = (
-    'django.template.loaders.filesystem.Loader',
-    'django.template.loaders.app_directories.Loader',
-#     'django.template.loaders.eggs.Loader',
-)
+class InvalidString(str):
+    def __mod__(self, other):
+        from django.template.base import TemplateSyntaxError
+        raise TemplateSyntaxError(
+            "Undefined variable or unknown value for: \"%s\"" % other)
+
+TEMPLATES = [
+    {
+        'BACKEND': 'django.template.backends.django.DjangoTemplates',
+        'DIRS': [
+            # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
+            # Always use forward slashes, even on Windows.
+            # Don't forget to use absolute paths, not relative paths.
+        ],
+        'OPTIONS': {
+            'context_processors': [
+                # Insert your TEMPLATE_CONTEXT_PROCESSORS here or use this
+                # list if you haven't customized them:
+                'django.contrib.auth.context_processors.auth',
+                'django.template.context_processors.debug',
+                'django.template.context_processors.i18n',
+                'django.template.context_processors.media',
+                'django.template.context_processors.static',
+                'django.template.context_processors.tz',
+                'django.contrib.messages.context_processors.messages',
+                # Custom
+                'django.template.context_processors.request',
+                'toastergui.views.managedcontextprocessor',
+
+            ],
+            'loaders': [
+                # List of callables that know how to import templates from various sources.
+                'django.template.loaders.filesystem.Loader',
+                'django.template.loaders.app_directories.Loader',
+                #'django.template.loaders.eggs.Loader',
+            ],
+            'string_if_invalid': InvalidString("%s"),
+            'debug': DEBUG,
+        },
+    },
+]
 
 MIDDLEWARE_CLASSES = (
     'django.middleware.common.CommonMiddleware',
@@ -203,22 +237,6 @@
 # Python dotted path to the WSGI application used by Django's runserver.
 WSGI_APPLICATION = 'toastermain.wsgi.application'
 
-TEMPLATE_DIRS = (
-    # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates".
-    # Always use forward slashes, even on Windows.
-    # Don't forget to use absolute paths, not relative paths.
-)
-
-TEMPLATE_CONTEXT_PROCESSORS = ('django.contrib.auth.context_processors.auth',
- 'django.core.context_processors.debug',
- 'django.core.context_processors.i18n',
- 'django.core.context_processors.media',
- 'django.core.context_processors.static',
- 'django.core.context_processors.tz',
- 'django.contrib.messages.context_processors.messages',
- "django.core.context_processors.request",
- 'toastergui.views.managedcontextprocessor',
- )
 
 INSTALLED_APPS = (
     'django.contrib.auth',
@@ -348,10 +366,4 @@
 #
 
 
-class InvalidString(str):
-    def __mod__(self, other):
-        from django.template.base import TemplateSyntaxError
-        raise TemplateSyntaxError(
-            "Undefined variable or unknown value for: \"%s\"" % other)
 
-TEMPLATE_STRING_IF_INVALID = InvalidString("%s")
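
The settings.py changes above replace the pre-1.8 TEMPLATE_* settings with the dictionary-style TEMPLATES setting and feed the relocated InvalidString helper to Django through OPTIONS['string_if_invalid']. A minimal standalone sketch, not part of the patch, of the behaviour this buys: a template that references an undefined variable raises instead of silently rendering an empty string.

import django
from django.conf import settings

settings.configure()      # bare settings are enough for a standalone Engine
django.setup()

from django.template import Context, Engine
from django.template.base import TemplateSyntaxError


class InvalidString(str):
    """Mirror of the helper kept in toastermain/settings.py."""
    def __mod__(self, other):
        raise TemplateSyntaxError(
            'Undefined variable or unknown value for: "%s"' % other)


engine = Engine(string_if_invalid=InvalidString("%s"))

try:
    engine.from_string("{{ not_defined }}").render(Context({}))
except TemplateSyntaxError as err:
    print(err)   # Undefined variable or unknown value for: "not_defined"
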
diff --git a/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/urls.py b/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/urls.py
index 1f8599e..e2fb0ae 100644
--- a/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/urls.py
+++ b/import-layers/yocto-poky/bitbake/lib/toaster/toastermain/urls.py
@@ -19,9 +19,10 @@
 # with this program; if not, write to the Free Software Foundation, Inc.,
 # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 
-from django.conf.urls import patterns, include, url
-from django.views.generic import RedirectView
+from django.conf.urls import include, url
+from django.views.generic import RedirectView, TemplateView
 from django.views.decorators.cache import never_cache
+import bldcollector.views
 
 import logging
 
@@ -31,7 +32,7 @@
 from django.contrib import admin
 admin.autodiscover()
 
-urlpatterns = patterns('',
+urlpatterns = [
 
     # Examples:
     # url(r'^toaster/', include('toaster.foo.urls')),
@@ -42,11 +43,13 @@
 
     # This is here to maintain backward compatibility and will be deprecated
     # in the future.
-    url(r'^orm/eventfile$', 'bldcollector.views.eventfile'),
+    url(r'^orm/eventfile$', bldcollector.views.eventfile),
+
+    url(r'^health$', TemplateView.as_view(template_name="health.html"), name='Toaster Health'),
 
     # if no application is selected, we have the magic toastergui app here
     url(r'^$', never_cache(RedirectView.as_view(url='/toastergui/', permanent=True))),
-)
+]
 
 import toastermain.settings
 
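
The urls.py hunk above tracks two Django API changes: django.conf.urls.patterns() and dotted-string view references were removed in Django 1.10, so urlpatterns becomes a plain list of url() entries with imported callables, and the new /health endpoint is served straight from a TemplateView. A minimal standalone URLconf in that style, using a hypothetical eventfile() stand-in rather than the real bldcollector view, is sketched below:

from django.conf.urls import url
from django.views.generic import TemplateView


def eventfile(request):
    # Hypothetical stand-in for bldcollector.views.eventfile, used only so the
    # sketch is importable on its own.
    raise NotImplementedError


urlpatterns = [
    # Views are passed as callables; dotted-string references such as
    # 'bldcollector.views.eventfile' stopped working in Django 1.10.
    url(r'^orm/eventfile$', eventfile),

    # Named route rendered directly from a template, as the patch does for /health.
    url(r'^health$', TemplateView.as_view(template_name="health.html"),
        name='Toaster Health'),
]
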
diff --git a/import-layers/yocto-poky/bitbake/toaster-requirements.txt b/import-layers/yocto-poky/bitbake/toaster-requirements.txt
index e61c8e2..c0ec368 100644
--- a/import-layers/yocto-poky/bitbake/toaster-requirements.txt
+++ b/import-layers/yocto-poky/bitbake/toaster-requirements.txt
@@ -1,3 +1,3 @@
-Django>1.8,<1.9
+Django>1.8,<1.11.9
 beautifulsoup4>=4.4.0
 pytz
diff --git a/import-layers/yocto-poky/documentation/Makefile b/import-layers/yocto-poky/documentation/Makefile
index 9077c81..9891095 100644
--- a/import-layers/yocto-poky/documentation/Makefile
+++ b/import-layers/yocto-poky/documentation/Makefile
@@ -8,7 +8,7 @@
 # system.  These manuals also include an "eclipse" sub-directory as part of
 # the make process.
 #
-# Note that the figures for the Yocto Project Development Manual
+# Note that the figures for the Yocto Project Development Tasks Manual
 # differ depending on the BRANCH being built.
 #
 # The Makefile has these targets:
@@ -42,7 +42,7 @@
 # To build a manual, you must invoke Makefile with the DOC argument.  If you
 # are going to publish the manual, then you must invoke Makefile with both the
 # DOC and the VER argument.  Furthermore, if you are building or publishing
-# the edison or denzil versions of the Yocto Project Development Manual or
+# the edison or denzil versions of the Yocto Project Development Tasks Manual or
 # the mega-manual, you must also use the BRANCH argument.
 #
 # Examples:
@@ -59,7 +59,7 @@
 # 'make DOC=yocto-project-qs' command would be equivalent. The third example
 # generates just the PDF version of the Yocto Project Reference Manual.
 # The fourth example generates the HTML 'edison' version and (if available)
-# the Eclipse help version of the YP Development Manual.  The last example
+# the Eclipse help version of the YP Development Tasks Manual.  The last example
 # generates the HTML version of the mega-manual and uses the 'denzil'
 # branch when choosing figures for the tarball of figures.  Any example that does
 # not use the BRANCH argument builds the current version of the manual set.
@@ -79,15 +79,16 @@
 # The first example publishes the 1.7 version of both the PDF and HTML versions of
 # the BSP Guide.  The second example publishes the 1.6 version of both the PDF and
 # HTML versions of the ADT Manual. The third example publishes the 1.1.1 version of
-# the PDF and HTML YP Development Manual for the 'edison' branch.  The fourth example
-# publishes the 1.2 version of the PDF and HTML YP Development Manual for the
-# 'denzil' branch.
+# the PDF and HTML YP Development Tasks Manual for the 'edison' branch.  The fourth
+# example publishes the 1.2 version of the PDF and HTML YP Development Tasks Manual
+# for the 'denzil' branch.
 #
 
 ifeq ($(DOC),bsp-guide)
 XSLTOPTS = --xinclude
 ALLPREQ = html eclipse tarball
 TARFILES = bsp-style.css bsp-guide.html figures/bsp-title.png \
+           figures/bsp-dev-flow.png \
            eclipse
 MANUALS = $(DOC)/$(DOC).html $(DOC)/eclipse
 FIGURES = figures
@@ -128,14 +129,8 @@
            figures/wip.png
         else
 TARFILES = dev-style.css dev-manual.html \
-           figures/bsp-dev-flow.png \
-           figures/dev-title.png figures/git-workflow.png \
-           figures/index-downloads.png figures/kernel-dev-flow.png \
-           figures/kernel-overview-1.png figures/kernel-overview-2-generic.png \
-           figures/source-repos.png figures/yp-download.png \
-           figures/recipe-workflow.png \
-           figures/devtool-add-flow.png figures/devtool-modify-flow.png \
-           figures/devtool-upgrade-flow.png \
+           figures/dev-title.png \
+           figures/recipe-workflow.png figures/bitbake-build-flow.png \
            eclipse
 	endif
 
@@ -148,7 +143,7 @@
 ifeq ($(DOC),yocto-project-qs)
 XSLTOPTS = --xinclude
 ALLPREQ = html eclipse tarball
-TARFILES = yocto-project-qs.html qs-style.css figures/yocto-environment.png \
+TARFILES = yocto-project-qs.html qs-style.css \
            figures/yocto-project-transp.png \
            eclipse
 MANUALS = $(DOC)/$(DOC).html $(DOC)/eclipse
@@ -195,8 +190,8 @@
 	figures/source-repos.png figures/yp-download.png \
 	figures/wip.png
         else
-TARFILES = mega-manual.html mega-style.css figures/yocto-environment.png \
-        figures/building-an-image.png  \
+TARFILES = mega-manual.html mega-style.css \
+        figures/building-an-image.png figures/YP-flow-diagram.png \
 	figures/using-a-pre-built-image.png \
 	figures/poky-title.png figures/buildhistory.png \
         figures/buildhistory-web.png \
@@ -206,7 +201,7 @@
         figures/dev-title.png \
 	figures/git-workflow.png figures/index-downloads.png \
         figures/kernel-dev-flow.png \
-	figures/kernel-overview-1.png figures/kernel-overview-2-generic.png \
+	figures/kernel-overview-2-generic.png \
 	figures/source-repos.png figures/yp-download.png \
         figures/profile-title.png figures/kernelshark-all.png \
         figures/kernelshark-choose-events.png \
@@ -244,13 +239,12 @@
 	figures/sdk-generation.png figures/recipe-workflow.png \
 	figures/build-workspace-directory.png figures/mega-title.png \
 	figures/toaster-title.png figures/hosted-service.png \
-	figures/simple-configuration.png figures/devtool-add-flow.png \
-	figures/devtool-modify-flow.png figures/devtool-upgrade-flow.png \
+	figures/simple-configuration.png \
 	figures/compatible-layers.png figures/import-layer.png figures/new-project.png \
 	figures/sdk-environment.png figures/sdk-installed-standard-sdk-directory.png \
 	figures/sdk-devtool-add-flow.png figures/sdk-installed-extensible-sdk-directory.png \
 	figures/sdk-devtool-modify-flow.png figures/sdk-eclipse-dev-flow.png \
-	figures/sdk-devtool-upgrade-flow.png
+	figures/sdk-devtool-upgrade-flow.png figures/bitbake-build-flow.png
 	endif
 
 MANUALS = $(DOC)/$(DOC).html
@@ -262,7 +256,7 @@
 ifeq ($(DOC),ref-manual)
 XSLTOPTS = --xinclude
 ALLPREQ = html eclipse tarball
-TARFILES = ref-manual.html ref-style.css figures/poky-title.png \
+TARFILES = ref-manual.html ref-style.css figures/poky-title.png figures/YP-flow-diagram.png \
 	figures/buildhistory.png figures/buildhistory-web.png eclipse \
         figures/cross-development-toolchains.png figures/layer-input.png \
 	figures/package-feeds.png figures/source-input.png \
@@ -271,7 +265,8 @@
 	figures/patching.png figures/configuration-compile-autoreconf.png \
 	figures/analysis-for-package-splitting.png figures/image-generation.png \
 	figures/sdk-generation.png figures/building-an-image.png \
-	figures/build-workspace-directory.png
+	figures/build-workspace-directory.png figures/source-repos.png \
+	figures/index-downloads.png figures/yp-download.png figures/git-workflow.png
 MANUALS = $(DOC)/$(DOC).html $(DOC)/eclipse
 FIGURES = figures
 STYLESHEET = $(DOC)/*.css
@@ -330,8 +325,8 @@
 XSLTOPTS = --xinclude
 ALLPREQ = html eclipse tarball
 TARFILES = kernel-dev.html kernel-dev-style.css \
-           figures/kernel-dev-title.png \
-           figures/kernel-architecture-overview.png \
+           figures/kernel-dev-title.png figures/kernel-overview-2-generic.png \
+           figures/kernel-architecture-overview.png figures/kernel-dev-flow.png \
            eclipse
 MANUALS = $(DOC)/$(DOC).html $(DOC)/eclipse
 FIGURES = figures
diff --git a/import-layers/yocto-poky/documentation/README b/import-layers/yocto-poky/documentation/README
index a4e70a8..d64f2fd 100644
--- a/import-layers/yocto-poky/documentation/README
+++ b/import-layers/yocto-poky/documentation/README
@@ -36,8 +36,8 @@
 
 * sdk-manual       - The Yocto Project Software Development Kit (SDK) Developer's Guide.
 * bsp-guide        - The Yocto Project Board Support Package (BSP) Developer's Guide
-* dev-manual       - The Yocto Project Development Manual
-* kernel-dev       - The Yocto Project Linux Kernel Development Manual
+* dev-manual       - The Yocto Project Development Tasks Manual
+* kernel-dev       - The Yocto Project Linux Kernel Development Manual
 * ref-manual       - The Yocto Project Reference Manual
 * yocto-project-qs - The Yocto Project Quick Start
 * mega-manual      - The Yocto Project Mega-Manual, which is an aggregated manual comprised
diff --git a/import-layers/yocto-poky/documentation/bsp-guide/bsp-guide.xml b/import-layers/yocto-poky/documentation/bsp-guide/bsp-guide.xml
index f0ee399..576ed18 100644
--- a/import-layers/yocto-poky/documentation/bsp-guide/bsp-guide.xml
+++ b/import-layers/yocto-poky/documentation/bsp-guide/bsp-guide.xml
@@ -22,18 +22,11 @@
 
         <authorgroup>
             <author>
-                <firstname>Saul</firstname> <surname>Wold</surname>
+                <firstname>Scott</firstname> <surname>Rifenbark</surname>
                 <affiliation>
-                    <orgname>Intel Corporation</orgname>
+                    <orgname>Scotty's Documentation Services, INC</orgname>
                 </affiliation>
-                <email>saul.wold@intel.com</email>
-            </author>
-            <author>
-                <firstname>Richard</firstname> <surname>Purdie</surname>
-                <affiliation>
-                    <orgname>Linux Foundation</orgname>
-                </affiliation>
-                <email>richard.purdie@linuxfoundation.org</email>
+                <email>srifenbark@gmail.com</email>
             </author>
         </authorgroup>
 
@@ -119,24 +112,19 @@
                 <revremark>Released with the Yocto Project 2.3 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.1</revnumber>
-                <date>June 2017</date>
-                <revremark>Released with the Yocto Project 2.3.1 Release.</revremark>
+                <revnumber>2.4</revnumber>
+                <date>October 2017</date>
+                <revremark>Released with the Yocto Project 2.4 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.2</revnumber>
-                <date>September 2017</date>
-                <revremark>Released with the Yocto Project 2.3.2 Release.</revremark>
-            </revision>
-            <revision>
-                <revnumber>2.3.3</revnumber>
+                <revnumber>2.4.1</revnumber>
                 <date>January 2018</date>
-                <revremark>Released with the Yocto Project 2.3.3 Release.</revremark>
+                <revremark>Released with the Yocto Project 2.4.1 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.4</revnumber>
-                <date>April 2018</date>
-                <revremark>Released with the Yocto Project 2.3.4 Release.</revremark>
+                <revnumber>2.4.2</revnumber>
+                <date>March 2018</date>
+                <revremark>Released with the Yocto Project 2.4.2 Release.</revremark>
             </revision>
         </revhistory>
 
@@ -150,34 +138,34 @@
         Permission is granted to copy, distribute and/or modify this document under
         the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-nc-sa/2.0/uk/">Creative Commons Attribution-Share Alike 2.0 UK: England &amp; Wales</ulink> as published by Creative Commons.
       </para>
-            <note><title>Manual Notes</title>
-                <itemizedlist>
-                    <listitem><para>
-                        For the latest version of the Yocto Project Board
-                        Support Package (BSP) Developer's Guide associated with
-                        this Yocto Project release (version
-                        &YOCTO_DOC_VERSION;),
-                        see the Yocto Project Board Support Package (BSP)
-                        Developer's Guide from the
-                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
+           <note><title>Manual Notes</title>
+               <itemizedlist>
+                   <listitem><para>
+                       This version of the
+                       <emphasis>Yocto Project Board Support Package Developer's Guide</emphasis>
+                       is for the &YOCTO_DOC_VERSION; release of the
+                       Yocto Project.
+                       To be sure you have the latest version of the manual
+                       for this release, use the manual from the
+                       <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
+                       </para></listitem>
+                   <listitem><para>
+                       For manuals associated with other releases of the Yocto
+                       Project, go to the
+                       <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
+                       and use the drop-down "Active Releases" button
+                       and choose the manual associated with the desired
+                       Yocto Project.
+                       </para></listitem>
+                   <listitem><para>
+                        To report any inaccuracies or problems with this
+                        manual, send an email to the Yocto Project
+                        discussion group at
+                        <filename>yocto@yoctoproject.org</filename> or log into
+                        the freenode <filename>#yocto</filename> channel.
                         </para></listitem>
-                    <listitem><para>
-                        This version of the manual is version
-                        &YOCTO_DOC_VERSION;.
-                        For later releases of the Yocto Project (if they exist),
-                        go to the
-                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
-                        and use the drop-down "Active Releases" button
-                        and choose the Yocto Project version for which you want
-                        the manual.
-                        </para></listitem>
-                    <listitem><para>
-                        For an in-development version of the Yocto Project
-                        Board Support Package (BSP) Developer's Guide, see
-                        <ulink url='&YOCTO_DOCS_URL;/latest/bsp-guide/bsp-guide.html'></ulink>.
-                        </para></listitem>
-                </itemizedlist>
-            </note>
+               </itemizedlist>
+           </note>
     </legalnotice>
 
     </bookinfo>
diff --git a/import-layers/yocto-poky/documentation/bsp-guide/bsp.xml b/import-layers/yocto-poky/documentation/bsp-guide/bsp.xml
index a92e611..d7b6f15 100644
--- a/import-layers/yocto-poky/documentation/bsp-guide/bsp.xml
+++ b/import-layers/yocto-poky/documentation/bsp-guide/bsp.xml
@@ -55,7 +55,7 @@
                 To help understand the BSP layer concept, consider the BSPs that the
                 Yocto Project supports and provides with each release.
                 You can see the layers in the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-repositories'>Yocto Project Source Repositories</ulink>
+                <ulink url='&YOCTO_DOCS_REF_URL;#yocto-project-repositories'>Yocto Project Source Repositories</ulink>
                 through a web interface at
                 <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi'></ulink>.
                 If you go to that interface, you will find near the bottom of the list
@@ -83,12 +83,12 @@
 
             <para>
                 For information on the BSP development workflow, see the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#developing-a-board-support-package-bsp'>Developing a Board Support Package (BSP)</ulink>"
-                section in the Yocto Project Development Manual.
+                "<link linkend='developing-a-board-support-package-bsp'>Developing a Board Support Package (BSP)</link>"
+                section.
                 For more information on how to set up a local copy of source files
                 from a Git repository, see the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#getting-setup'>Getting Set Up</ulink>"
-                section also in the Yocto Project Development Manual.
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#working-with-yocto-project-source-files'>Working With Yocto Project Source Files</ulink>"
+                section also in the Yocto Project Development Tasks Manual.
             </para>
 
             <para>
@@ -98,12 +98,10 @@
                 This root is what you add to the
                 <ulink url='&YOCTO_DOCS_REF_URL;#var-BBLAYERS'><filename>BBLAYERS</filename></ulink>
                 variable in the <filename>conf/bblayers.conf</filename> file found in the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>,
-                which is established after you run one of the OpenEmbedded build environment
-                setup scripts (i.e.
-                <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
-                and
-                <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>).
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
+                which is established after you run the OpenEmbedded build environment
+                setup script (i.e.
+                <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>).
                 Adding the root allows the OpenEmbedded build system to recognize the BSP
                 definition and from it build an image.
                 Here is an example:
@@ -141,7 +139,149 @@
             <para>
                 For more detailed information on layers, see the
                 "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
-                section of the Yocto Project Development Manual.
+                section of the Yocto Project Development Tasks Manual.
+            </para>
+        </section>
+
+        <section id='preparing-your-build-host-to-work-with-bsp-layers'>
+            <title>Preparing Your Build Host to Work With BSP Layers</title>
+
+            <para>
+                This section describes how to get your build host ready
+                to work with BSP layers.
+                Once you have the host set up, you can create the layer
+                as described in the
+                        "<link linkend='creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a New BSP Layer Using the yocto-bsp Script</link>"
+                section.
+                <note>
+                    For structural information on BSPs, see the
+                    <link linkend='bsp-filelayout'>Example Filesystem Layout</link>
+                    section.
+                </note>
+                <orderedlist>
+                    <listitem><para>
+                        <emphasis>Set Up the Build Environment:</emphasis>
+                        Be sure you are set up to use BitBake in a shell.
+                        See the
+                        "<ulink url='&YOCTO_DOCS_DEV_URL;#setting-up-the-development-host-to-use-the-yocto-project'>Setting Up the Development Host to Use the Yocto Project</ulink>"
+                        section in the Yocto Project Development Tasks Manual for information
+                        on how to get a build host ready that is either a native
+                        Linux machine or a machine that uses CROPS.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Clone the <filename>poky</filename> Repository:</emphasis>
+                        You need to have a local copy of the Yocto Project
+                        <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+                        (i.e. a local <filename>poky</filename> repository).
+                        See the
+                        "<ulink url='&YOCTO_DOCS_DEV_URL;#cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</ulink>"
+                        and possibly the
+                        "<ulink url='&YOCTO_DOCS_DEV_URL;#checking-out-by-branch-in-poky'>Checking Out by Branch in Poky</ulink>"
+                        and
+                        "<ulink url='&YOCTO_DOCS_DEV_URL;#checkout-out-by-tag-in-poky'>Checking Out by Tag in Poky</ulink>"
+                        sections all in the Yocto Project Development Tasks Manual for
+                        information on how to clone the <filename>poky</filename>
+                        repository and check out the appropriate branch for your work.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Determine the BSP Layer You Want:</emphasis>
+                        The Yocto Project supports many BSPs, which are maintained in
+                        their own layers or in layers designed to contain several
+                        BSPs.
+                        To get an idea of machine support through BSP layers, you can
+                        look at the
+                        <ulink url='&YOCTO_RELEASE_DL_URL;/machines'>index of machines</ulink>
+                        for the release.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Optionally Clone the
+                        <filename>meta-intel</filename> BSP Layer:</emphasis>
+                        If your hardware is based on current Intel CPUs and devices,
+                        you can leverage this BSP layer.
+                        For details on the <filename>meta-intel</filename> BSP layer,
+                        see the layer's
+                        <ulink url='http://git.yoctoproject.org/cgit/cgit.cgi/meta-intel/tree/README'><filename>README</filename></ulink>
+                        file.
+                        <orderedlist>
+                            <listitem><para>
+                                <emphasis>Navigate to Your Source Directory:</emphasis>
+                                Typically, you set up the
+                                <filename>meta-intel</filename> Git repository
+                                inside the
+                                <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+                                (e.g. <filename>poky</filename>).
+                                </para></listitem>
+                            <listitem><para>
+                                <emphasis>Clone the Layer:</emphasis>
+                                <literallayout class='monospaced'>
+     $ git clone git://git.yoctoproject.org/meta-intel.git
+     Cloning into 'meta-intel'...
+     remote: Counting objects: 14224, done.
+     remote: Compressing objects: 100% (4591/4591), done.
+     remote: Total 14224 (delta 8245), reused 13985 (delta 8006)
+     Receiving objects: 100% (14224/14224), 4.29 MiB | 2.90 MiB/s, done.
+     Resolving deltas: 100% (8245/8245), done.
+     Checking connectivity... done.
+                                </literallayout>
+                                </para></listitem>
+                            <listitem><para>
+                                <emphasis>Check Out the Proper Branch:</emphasis>
+                                The branch you check out for
+                                <filename>meta-intel</filename> must match the same
+                                branch you are using for the Yocto Project release
+                                (e.g. &DISTRO_NAME_NO_CAP;):
+                                <literallayout class='monospaced'>
+     $ git checkout <replaceable>branch_name</replaceable>
+                                </literallayout>
+                                For an example of how to discover branch names and
+                                check out a branch, see the
+                                "<ulink url='&YOCTO_DOCS_DEV_URL;#checking-out-by-branch-in-poky'>Checking Out By Branch in Poky</ulink>"
+                                section in the Yocto Project Development Tasks Manual.
+                                </para></listitem>
+                        </orderedlist>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Optionally Set Up an Alternative BSP Layer:</emphasis>
+                        If your hardware more closely matches an
+                        existing BSP that is not within the <filename>meta-intel</filename>
+                        BSP layer, you can clone that BSP layer.</para>
+
+                        <para>The process is identical to the process used for the
+                        <filename>meta-intel</filename> layer except for the layer's
+                        name.
+                        For example, if you determine that your hardware most
+                        closely matches the <filename>meta-minnow</filename> layer,
+                        clone that layer:
+                        <literallayout class='monospaced'>
+     $ git clone git://git.yoctoproject.org/meta-minnow
+     Cloning into 'meta-minnow'...
+     remote: Counting objects: 456, done.
+     remote: Compressing objects: 100% (283/283), done.
+     remote: Total 456 (delta 163), reused 384 (delta 91)
+     Receiving objects: 100% (456/456), 96.74 KiB | 0 bytes/s, done.
+     Resolving deltas: 100% (163/163), done.
+     Checking connectivity... done.
+                        </literallayout>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Initialize the Build Environment:</emphasis>
+                        While in the root directory of the Source Directory (i.e.
+                        <filename>poky</filename>), run the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
+                        environment setup script to define the OpenEmbedded
+                        build environment on your build host.
+                        <literallayout class='monospaced'>
+     $ source &OE_INIT_FILE;
+                        </literallayout>
+                        Among other things, the script creates the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
+                        which is <filename>build</filename> in this case
+                        and is located in the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
+                        After the script runs, your current working directory
+                        is set to the <filename>build</filename> directory.
+                        </para></listitem>
+                </orderedlist>
             </para>
         </section>
 
@@ -417,7 +557,7 @@
                     released with the BSP.
                     The information in the <filename>README.sources</filename>
                     file also helps you find the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink>
                     used to generate the images that ship with the BSP.
                     <note>
                         If the BSP's <filename>binary</filename> directory is
@@ -522,7 +662,7 @@
 
                 <para>
                     This file simply makes
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#bitbake-term'>BitBake</ulink>
                     aware of the recipes and configuration directories.
                     The file must exist so that the OpenEmbedded build system can recognize the BSP.
                 </para>
@@ -571,7 +711,7 @@
                 <para>
                     Tuning files are found in the <filename>meta/conf/machine/include</filename>
                     directory within the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
                     For example, the <filename>ia32-base.inc</filename> file resides in the
                     <filename>meta/conf/machine/include</filename> directory.
                 </para>
@@ -627,7 +767,7 @@
                     formfactor recipe
                     <filename>meta/recipes-bsp/formfactor/formfactor_0.0.bb</filename>,
                     which is found in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
                 </para></note>
             </section>
 
@@ -667,7 +807,7 @@
                 <para>
                     For your BSP, you typically want to use an existing Yocto
                     Project kernel recipe found in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
                     at <filename>meta/recipes-kernel/linux</filename>.
                     You can append machine-specific changes to the kernel recipe
                     by using a similarly named append file, which is located in
@@ -710,6 +850,204 @@
             </section>
         </section>
 
+        <section id='developing-a-board-support-package-bsp'>
+            <title>Developing a Board Support Package (BSP)</title>
+
+            <para>
+                This section contains the high-level procedure you can follow
+                to create a BSP using the Yocto Project's
+                <link linkend='using-the-yocto-projects-bsp-tools'>BSP Tools</link>.
+                Although not required for BSP creation, the
+                <filename>meta-intel</filename> repository, which contains
+                many BSPs supported by the Yocto Project, is part of the
+                example.
+            </para>
+
+            <para>
+                For an example that shows how to create a new layer using
+                the tools, see the
+                "<link linkend='creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a New BSP Layer Using the yocto-bsp Script</link>"
+                 section.
+            </para>
+
+            <para>
+                The following illustration and list summarize the BSP
+                creation general workflow.
+            </para>
+
+            <para>
+                <imagedata fileref="figures/bsp-dev-flow.png" width="7in" depth="5in" align="center" scalefit="1" />
+            </para>
+
+            <para>
+                <orderedlist>
+                    <listitem><para>
+                        <emphasis>Set up Your Host Development System to Support
+                        Development Using the Yocto Project</emphasis>:
+                        See the
+                        "<ulink url='&YOCTO_DOCS_QS_URL;#yp-resources'>Setting Up to Use the Yocto Project</ulink>"
+                        section in the Yocto Project Quick Start for options on how
+                        to get a build host ready to use the Yocto Project.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Establish the <filename>meta-intel</filename>
+                        Repository on Your System:</emphasis>
+                        Having local copies of these supported BSP layers on
+                        your system gives you access to layers you might be able
+                        to build on or modify to create your BSP.
+                        For information on how to get these files, see the
+                        "<link linkend='preparing-your-build-host-to-work-with-bsp-layers'>Preparing Your Build Host to Work with BSP Layers</link>"
+                        section.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Create Your Own BSP Layer Using the
+                        <link linkend='creating-a-new-bsp-layer-using-the-yocto-bsp-script'><filename>yocto-bsp</filename></link>
+                        script:</emphasis>
+                        Layers are ideal for isolating and storing work for a
+                        given piece of hardware.
+                        A layer is really just a location or area in which you
+                        place the recipes and configurations for your BSP.
+                        In fact, a BSP is, in itself, a special type of layer.
+                        The simplest way to create a new BSP layer that is
+                        compliant with the Yocto Project is to use the
+                        <filename>yocto-bsp</filename> script.
+                        For information about that script, see the
+                        "<link linkend='creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a New BSP Layer Using the yocto-bsp Script</link>"
+                        section.</para>
+
+                        <para>Another example that illustrates a layer
+                        is an application.
+                        Suppose you are creating an application that has
+                        library or other dependencies in order for it to
+                        compile and run.
+                        The layer, in this case, would be where all the
+                        recipes that define those dependencies are kept.
+                        The key point for a layer is that it is an isolated
+                        area that contains all the relevant information for
+                        the project that the OpenEmbedded build system knows
+                        about.
+                        For more information on layers, see the
+                        "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
+                        section in the Yocto Project Development Tasks Manual.
+                        For more information on BSP layers, see the
+                        "<link linkend='bsp-layers'>BSP Layers</link>"
+                        section.
+                        <note><title>Notes</title>
+                            <para>Three BSPs exist that are part of the Yocto
+                            Project release:
+                            <filename>beaglebone</filename> (ARM),
+                            <filename>mpc8315e</filename> (PowerPC),
+                            and <filename>edgerouter</filename> (MIPS).
+                            The recipes and configurations for these three BSPs
+                            are located and dispersed within the
+                            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
+                            </para>
+
+                            <para>Three core Intel BSPs exist as part of the Yocto
+                            Project release in the
+                            <filename>meta-intel</filename> layer:
+                            <itemizedlist>
+                                <listitem><para>
+                                    <filename>intel-core2-32</filename>,
+                                    which is a BSP optimized for the Core2 family of CPUs
+                                    as well as all CPUs prior to the Silvermont core.
+                                    </para></listitem>
+                                <listitem><para>
+                                    <filename>intel-corei7-64</filename>,
+                                    which is a BSP optimized for Nehalem and later
+                                    Core and Xeon CPUs as well as Silvermont and later
+                                    Atom CPUs, such as the Baytrail SoCs.
+                                    </para></listitem>
+                                <listitem><para>
+                                    <filename>intel-quark</filename>,
+                                    which is a BSP optimized for the Intel Galileo
+                                    gen1 &amp; gen2 development boards.
+                                    </para></listitem>
+                            </itemizedlist></para>
+                        </note></para>
+
+                        <para>When you set up a layer for a new BSP, you should
+                        follow a standard layout.
+                        This layout is described in the
+                        "<link linkend='bsp-filelayout'>Example Filesystem Layout</link>"
+                        section.
+                        In the standard layout, you will notice a suggested
+                        structure for recipes and configuration information.
+                        You can see the standard layout for a BSP by examining
+                        any supported BSP found in the
+                        <filename>meta-intel</filename> layer inside the Source
+                        Directory.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Make Configuration Changes to Your New BSP
+                        Layer:</emphasis>
+                        The standard BSP layer structure organizes the files
+                        you need to edit in <filename>conf</filename> and
+                        several <filename>recipes-*</filename>
+                        directories within the BSP layer.
+                        Configuration changes identify where your new layer
+                        is on the local system and identify which kernel you
+                        are going to use.
+                        When you run the <filename>yocto-bsp</filename> script,
+                        you are able to interactively configure many things for
+                        the BSP (e.g. keyboard, touchscreen, and so forth).
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Make Recipe Changes to Your New BSP
+                        Layer:</emphasis>
+                        Recipe changes include altering recipes
+                        (<filename>.bb</filename> files), removing recipes you
+                        do not use, and adding new recipes or append files
+                        (<filename>.bbappend</filename>) that you need to
+                        support your hardware.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Prepare for the Build:</emphasis>
+                        a few things remain that you need to do for the
+                        there remains a few things you need to do for the
+                        OpenEmbedded build system in order for it to create
+                        your image.
+                        You need to get the build environment ready by
+                        sourcing an environment setup script
+                        (i.e. <filename>oe-init-build-env</filename>)
+                        and you need to be sure two key configuration
+                        files are configured appropriately: the
+                        <filename>conf/local.conf</filename> and the
+                        <filename>conf/bblayers.conf</filename> file.
+                        You must make the OpenEmbedded build system aware
+                        of your new layer.
+                        See the
+                        "<ulink url='&YOCTO_DOCS_DEV_URL;#enabling-your-layer'>Enabling Your Layer</ulink>"
+                        section in the Yocto Project Development Tasks Manual
+                        for information on how to let the build system
+                        know about your new layer.</para>
+
+                        <para>The entire process for building an image is
+                        overviewed in the
+                        "<ulink url='&YOCTO_DOCS_QS_URL;#qs-building-images'>Building Images</ulink>" section
+                        of the Yocto Project Quick Start.
+                        You might want to reference this information.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Build the Image:</emphasis>
+                        The OpenEmbedded build system uses the BitBake tool
+                        to build images based on the type of image you want to
+                        create.
+                        You can find more information about BitBake in the
+                        <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual</ulink>.
+                        </para>
+
+                        <para>The build process supports several types of
+                        images to satisfy different needs.
+                        See the
+                        "<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>"
+                        chapter in the Yocto Project Reference Manual for
+                        information on supported images.
+                        </para></listitem>
+                </orderedlist>
+            </para>
+        </section>
+
         <section id='requirements-and-recommendations-for-released-bsps'>
             <title>Requirements and Recommendations for Released BSPs</title>
 
@@ -732,24 +1070,28 @@
                             For guidelines on creating a layer that meets these base requirements, see the
                             "<link linkend='bsp-layers'>BSP Layers</link>" and the
                             "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding
-                            and Creating Layers"</ulink> in the Yocto Project Development Manual.</para></listitem>
+                            and Creating Layers</ulink>" sections in the Yocto Project Development Tasks Manual.
+                            </para></listitem>
                         <listitem><para>The requirements in this section apply regardless of how you
                             package a BSP.
                             You should consult the packaging and distribution guidelines for your
                             specific release process.
                             For an example of packaging and distribution requirements, see the
                             "<ulink url='https://wiki.yoctoproject.org/wiki/Third_Party_BSP_Release_Process'>Third Party BSP Release Process</ulink>"
-                            wiki page.</para></listitem>
+                            wiki page.
+                            </para></listitem>
                         <listitem><para>The requirements for the BSP as it is made available to a developer
                             are completely independent of the released form of the BSP.
                             For example, the BSP Metadata can be contained within a Git repository
                             and could have a directory structure completely different from what appears
-                            in the officially released BSP layer.</para></listitem>
+                            in the officially released BSP layer.
+                            </para></listitem>
                         <listitem><para>It is not required that specific packages or package
                             modifications exist in the BSP layer, beyond the requirements for general
                             compliance with the Yocto Project.
                             For example, no requirement exists dictating that a specific kernel or
-                            kernel version be used in a given BSP.</para></listitem>
+                            kernel version be used in a given BSP.
+                            </para></listitem>
                     </itemizedlist>
                 </para>
 
@@ -776,7 +1118,7 @@
                             <filename>recipes-*</filename> subdirectory.
                             You can find <filename>recipes.txt</filename> in the
                             <filename>meta</filename> directory of the
-                            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
+                            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>,
                             or in the OpenEmbedded Core Layer
                             (<filename>openembedded-core</filename>) found at
                             <ulink url='http://git.openembedded.org/openembedded-core/tree/meta'></ulink>.
@@ -832,8 +1174,8 @@
                                     This is the person to whom patches and questions should
                                     be sent.
                                     For information on how to find the right person, see the
-                                    "<ulink url='&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change'>How to Submit a Change</ulink>"
-                                    section in the Yocto Project Development Manual.
+                                    "<ulink url='&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change'>Submitting a Change to the Yocto Project</ulink>"
+                                    section in the Yocto Project Development Tasks Manual.
                                     </para></listitem>
                                 <listitem><para>Instructions on how to build the BSP using the BSP
                                     layer.</para></listitem>
@@ -937,7 +1279,7 @@
                        file for the modified recipe.
                        For information on using append files, see the
                        "<ulink url='&YOCTO_DOCS_DEV_URL;#using-bbappend-files'>Using .bbappend Files in Your Layer</ulink>"
-                       section in the Yocto Project Development Manual.
+                       section in the Yocto Project Development Tasks Manual.
                        </para></listitem>
                    <listitem><para>
                        Ensure your directory structure in the BSP layer
@@ -1144,7 +1486,7 @@
 
                 <para>
                     Designed to have a  command interface somewhat like
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#git'>Git</ulink>, each
+                    <ulink url='&YOCTO_DOCS_REF_URL;#git'>Git</ulink>, each
                     tool is structured as a set of sub-commands under a
                     top-level command.
                     The top-level command (<filename>yocto-bsp</filename>
@@ -1155,7 +1497,7 @@
 
                 <para>
                     Both tools reside in the <filename>scripts/</filename> subdirectory
-                    of the <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    of the <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
                     Consequently, to use the scripts, you must <filename>source</filename> the
                     environment just as you would when invoking a build:
                     <literallayout class='monospaced'>
@@ -1251,11 +1593,11 @@
                     necessary to create a BSP and perform basic kernel maintenance on that BSP using
                     the tools.
                     <note>
-                        You can also use the <filename>yocto-layer</filename> tool to create
+                        You can also use the <filename>bitbake-layers</filename> script to create
                         a "generic" layer.
-                        For information on this tool, see the
-                        "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-general-layer-using-the-yocto-layer-script'>Creating a General Layer Using the yocto-layer Script</ulink>"
-                        section in the Yocto Project Development Guide.
+                        For information on using this script to create a layer, see the
+                        "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-general-layer-using-the-bitbake-layers-script'>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</ulink>"
+                        section in the Yocto Project Development Tasks Manual.
                     </note>
                 </para>
 
@@ -1384,8 +1726,7 @@
                             you do want to use.</para></listitem>
                         <listitem><para>Next, the script asks whether you would like to have a new
                             branch created especially for your BSP in the local
-                            <ulink url='&YOCTO_DOCS_DEV_URL;#local-kernel-files'>Linux Yocto Kernel</ulink>
-                            Git repository .
+                            Linux Yocto Kernel Git repository.
                             If not, then the script re-uses an existing branch.</para>
                             <para>In this example, the default (or "yes") is accepted.
                             Thus, a new branch is created for the BSP rather than using a common, shared
@@ -1406,7 +1747,7 @@
                             Defaults are accepted for each.</para></listitem>
                         <listitem><para>By default, the script creates the new BSP Layer in the
                             current working directory of the
-                            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
+                            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>,
                             (i.e. <filename>poky/build</filename>).
                             </para></listitem>
                     </orderedlist>
diff --git a/import-layers/yocto-poky/documentation/bsp-guide/figures/bsp-dev-flow.png b/import-layers/yocto-poky/documentation/bsp-guide/figures/bsp-dev-flow.png
new file mode 100644
index 0000000..0f82a1f
--- /dev/null
+++ b/import-layers/yocto-poky/documentation/bsp-guide/figures/bsp-dev-flow.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-common-tasks.xml b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-common-tasks.xml
index 598f877..0081738 100644
--- a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-common-tasks.xml
+++ b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-common-tasks.xml
@@ -18,7 +18,8 @@
 
         <para>
             The OpenEmbedded build system supports organizing
-            <link linkend='metadata'>Metadata</link> into multiple layers.
+            <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink> into
+            multiple layers.
             Layers allow you to isolate different types of customizations from
             each other.
             You might find it tempting to keep everything in one layer when
@@ -58,7 +59,7 @@
             <title>Layers</title>
 
             <para>
-                The <link linkend='source-directory'>Source Directory</link>
+                The <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
                 contains both general layers and BSP
                 layers right out of the box.
                 You can easily identify layers that ship with a
@@ -107,7 +108,7 @@
                 "<ulink url='&YOCTO_DOCS_BSP_URL;#creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a New BSP Layer Using the yocto-bsp Script</ulink>"
                 section in the Yocto Project Board Support Package (BSP)
                 Developer's Guide and the
-                "<link linkend='creating-a-general-layer-using-the-yocto-layer-script'>Creating a General Layer Using the yocto-layer Script</link>"
+                "<link linkend='creating-a-general-layer-using-the-bitbake-layers-script'>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</link>"
                 section further down in this manual.
             </para>
 
@@ -254,197 +255,182 @@
         </section>
 
         <section id='best-practices-to-follow-when-creating-layers'>
-            <title>Best Practices to Follow When Creating Layers</title>
+            <title>Following Best Practices When Creating Layers</title>
 
             <para>
                 To create layers that are easier to maintain and that will
                 not impact builds for other machines, you should consider the
-                information in the following sections.
-            </para>
-
-            <section id='avoid-overlaying-entire-recipes'>
-                <title>Avoid "Overlaying" Entire Recipes</title>
-
-                <para>
-                    Avoid "overlaying" entire recipes from other layers in your
-                    configuration.
-                    In other words, do not copy an entire recipe into your
-                    layer and then modify it.
-                    Rather, use an append file (<filename>.bbappend</filename>)
-                    to override
-                    only those parts of the original recipe you need to modify.
-                </para>
-            </section>
-
-            <section id='avoid-duplicating-include-files'>
-                <title>Avoid Duplicating Include Files</title>
-
-                <para>
-                    Avoid duplicating include files.
-                    Use append files (<filename>.bbappend</filename>)
-                    for each recipe
-                    that uses an include file.
-                    Or, if you are introducing a new recipe that requires
-                    the included file, use the path relative to the original
-                    layer directory to refer to the file.
-                    For example, use
-                    <filename>require recipes-core/</filename><replaceable>package</replaceable><filename>/</filename><replaceable>file</replaceable><filename>.inc</filename>
-                    instead of <filename>require </filename><replaceable>file</replaceable><filename>.inc</filename>.
-                    If you're finding you have to overlay the include file,
-                    it could indicate a deficiency in the include file in
-                    the layer to which it originally belongs.
-                    If this is the case, you should try to address that
-                    deficiency instead of overlaying the include file.
-                    For example, you could address this by getting the
-                    maintainer of the include file to add a variable or
-                    variables to make it easy to override the parts needing
-                    to be overridden.
-                </para>
-            </section>
-
-            <section id='structure-your-layers'>
-                <title>Structure Your Layers</title>
-
-                <para>
-                    Proper use of overrides within append files and placement
-                    of machine-specific files within your layer can ensure that
-                    a build is not using the wrong Metadata and negatively
-                    impacting a build for a different machine.
-                    Following are some examples:
-                    <itemizedlist>
-                        <listitem><para><emphasis>Modifying Variables to Support
-                            a Different Machine:</emphasis>
-                            Suppose you have a layer named
-                            <filename>meta-one</filename> that adds support
-                            for building machine "one".
-                            To do so, you use an append file named
-                            <filename>base-files.bbappend</filename> and
-                            create a dependency on "foo" by altering the
-                            <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>
-                            variable:
-                            <literallayout class='monospaced'>
+                information in the following list:
+                <itemizedlist>
+                    <listitem><para>
+                        <emphasis>Avoid "Overlaying" Entire Recipes from Other Layers in Your Configuration:</emphasis>
+                        In other words, do not copy an entire recipe into your
+                        layer and then modify it.
+                        Rather, use an append file
+                        (<filename>.bbappend</filename>) to override only those
+                        parts of the original recipe you need to modify
+                        (see the example append file following this list).
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Avoid Duplicating Include Files:</emphasis>
+                        Use append files (<filename>.bbappend</filename>)
+                        for each recipe that uses an include file.
+                        Or, if you are introducing a new recipe that requires
+                        the included file, use the path relative to the
+                        original layer directory to refer to the file.
+                        For example, use
+                        <filename>require recipes-core/</filename><replaceable>package</replaceable><filename>/</filename><replaceable>file</replaceable><filename>.inc</filename>
+                        instead of
+                        <filename>require </filename><replaceable>file</replaceable><filename>.inc</filename>.
+                        If you're finding you have to overlay the include file,
+                        it could indicate a deficiency in the include file in
+                        the layer to which it originally belongs.
+                        If this is the case, you should try to address that
+                        deficiency instead of overlaying the include file.
+                        For example, you could address this by getting the
+                        maintainer of the include file to add a variable or
+                        variables to make it easy to override the parts needing
+                        to be overridden.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Structure Your Layers:</emphasis>
+                        Proper use of overrides within append files and
+                        placement of machine-specific files within your layer
+                        can ensure that a build is not using the wrong Metadata
+                        and negatively impacting a build for a different
+                        machine.
+                        Following are some examples:
+                        <itemizedlist>
+                            <listitem><para>
+                                <emphasis>Modify Variables to Support a
+                                Different Machine:</emphasis>
+                                Suppose you have a layer named
+                                <filename>meta-one</filename> that adds support
+                                for building machine "one".
+                                To do so, you use an append file named
+                                <filename>base-files.bbappend</filename> and
+                                create a dependency on "foo" by altering the
+                                <ulink url='&YOCTO_DOCS_REF_URL;#var-DEPENDS'><filename>DEPENDS</filename></ulink>
+                                variable:
+                                <literallayout class='monospaced'>
      DEPENDS = "foo"
-                            </literallayout>
-                            The dependency is created during any build that
-                            includes the layer
-                            <filename>meta-one</filename>.
-                            However, you might not want this dependency
-                            for all machines.
-                            For example, suppose you are building for
-                            machine "two" but your
-                            <filename>bblayers.conf</filename> file has the
-                            <filename>meta-one</filename> layer included.
-                            During the build, the
-                            <filename>base-files</filename> for machine
-                            "two" will also have the dependency on
-                            <filename>foo</filename>.</para>
-                            <para>To make sure your changes apply only when
-                            building machine "one", use a machine override
-                            with the <filename>DEPENDS</filename> statement:
-                            <literallayout class='monospaced'>
+                                </literallayout>
+                                The dependency is created during any build that
+                                includes the layer
+                                <filename>meta-one</filename>.
+                                However, you might not want this dependency
+                                for all machines.
+                                For example, suppose you are building for
+                                machine "two" but your
+                                <filename>bblayers.conf</filename> file has the
+                                <filename>meta-one</filename> layer included.
+                                During the build, the
+                                <filename>base-files</filename> for machine
+                                "two" will also have the dependency on
+                                <filename>foo</filename>.</para>
+                                <para>To make sure your changes apply only when
+                                building machine "one", use a machine override
+                                with the <filename>DEPENDS</filename> statement:
+                                <literallayout class='monospaced'>
      DEPENDS_one = "foo"
-                            </literallayout>
-                            You should follow the same strategy when using
-                            <filename>_append</filename> and
-                            <filename>_prepend</filename> operations:
-                            <literallayout class='monospaced'>
+                                </literallayout>
+                                You should follow the same strategy when using
+                                <filename>_append</filename> and
+                                <filename>_prepend</filename> operations:
+                                <literallayout class='monospaced'>
      DEPENDS_append_one = " foo"
      DEPENDS_prepend_one = "foo "
-                            </literallayout>
-                            As an actual example, here's a line from the recipe
-                            for gnutls, which adds dependencies on
-                            "argp-standalone" when building with the musl C
-                            library:
-                            <literallayout class='monospaced'>
+                                </literallayout>
+                                As an actual example, here's a line from the recipe
+                                for gnutls, which adds dependencies on
+                                "argp-standalone" when building with the musl C
+                                library:
+                                <literallayout class='monospaced'>
      DEPENDS_append_libc-musl = " argp-standalone"
-                            </literallayout>
-                            <note>
-                                Avoiding "+=" and "=+" and using
-                                machine-specific
-                                <filename>_append</filename>
-                                and <filename>_prepend</filename> operations
-                                is recommended as well.
-                            </note></para></listitem>
-                        <listitem><para><emphasis>Place Machine-Specific Files
-                            in Machine-Specific Locations:</emphasis>
-                            When you have a base recipe, such as
-                            <filename>base-files.bb</filename>, that
-                            contains a
-                            <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
-                            statement to a file, you can use an append file
-                            to cause the build to use your own version of
-                            the file.
-                            For example, an append file in your layer at
-                            <filename>meta-one/recipes-core/base-files/base-files.bbappend</filename>
-                            could extend
-                            <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESPATH'><filename>FILESPATH</filename></ulink>
-                            using
-                            <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
-                            as follows:
-                            <literallayout class='monospaced'>
+                                </literallayout>
+                                <note>
+                                    Avoiding "+=" and "=+" and using
+                                    machine-specific
+                                    <filename>_append</filename>
+                                    and <filename>_prepend</filename> operations
+                                    is recommended as well.
+                                </note>
+                                </para></listitem>
+                            <listitem><para>
+                                <emphasis>Place Machine-Specific Files in
+                                Machine-Specific Locations:</emphasis>
+                                When you have a base recipe, such as
+                                <filename>base-files.bb</filename>, that
+                                contains a
+                                <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
+                                statement to a file, you can use an append file
+                                to cause the build to use your own version of
+                                the file.
+                                For example, an append file in your layer at
+                                <filename>meta-one/recipes-core/base-files/base-files.bbappend</filename>
+                                could extend
+                                <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESPATH'><filename>FILESPATH</filename></ulink>
+                                using
+                                <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
+                                as follows:
+                                <literallayout class='monospaced'>
      FILESEXTRAPATHS_prepend := "${THISDIR}/${BPN}:"
-                            </literallayout>
-                            The build for machine "one" will pick up your
-                            machine-specific file as long as you have the
-                            file in
-                            <filename>meta-one/recipes-core/base-files/base-files/</filename>.
-                            However, if you are building for a different
-                            machine and the
-                            <filename>bblayers.conf</filename> file includes
-                            the <filename>meta-one</filename> layer and
-                            the location of your machine-specific file is
-                            the first location where that file is found
-                            according to <filename>FILESPATH</filename>,
-                            builds for all machines will also use that
-                            machine-specific file.</para>
-                            <para>You can make sure that a machine-specific
-                            file is used for a particular machine by putting
-                            the file in a subdirectory specific to the
-                            machine.
-                            For example, rather than placing the file in
-                            <filename>meta-one/recipes-core/base-files/base-files/</filename>
-                            as shown above, put it in
-                            <filename>meta-one/recipes-core/base-files/base-files/one/</filename>.
-                            Not only does this make sure the file is used
-                            only when building for machine "one", but the
-                            build process locates the file more quickly.</para>
-                            <para>In summary, you need to place all files
-                            referenced from <filename>SRC_URI</filename>
-                            in a machine-specific subdirectory within the
-                            layer in order to restrict those files to
-                            machine-specific builds.</para></listitem>
-                    </itemizedlist>
-                </para>
-            </section>
-
-            <section id='other-recommendations'>
-                <title>Other Recommendations</title>
-
-                <para>
-                    We also recommend the following:
-                    <itemizedlist>
-                        <listitem><para>If you want permission to use the
-                            Yocto Project Compatibility logo with your layer
-                            or application that uses your layer, perform the
-                            steps to apply for compatibility.
-                            See the
-                            "<link linkend='making-sure-your-layer-is-compatible-with-yocto-project'>Making Sure Your Layer is Compatible With Yocto Project</link>"
-                            section for more information.
-                            </para></listitem>
-                        <listitem><para>Store custom layers in a Git repository
-                            that uses the
-                            <filename>meta-<replaceable>layer_name</replaceable></filename> format.
-                            </para></listitem>
-                        <listitem><para>Clone the repository alongside other
-                            <filename>meta</filename> directories in the
-                            <link linkend='source-directory'>Source Directory</link>.
-                            </para></listitem>
-                     </itemizedlist>
-                     Following these recommendations keeps your Source Directory and
-                     its configuration entirely inside the Yocto Project's core
-                     base.
-                </para>
-            </section>
+                                </literallayout>
+                                The build for machine "one" will pick up your
+                                machine-specific file as long as you have the
+                                file in
+                                <filename>meta-one/recipes-core/base-files/base-files/</filename>.
+                                However, if you are building for a different
+                                machine and the
+                                <filename>bblayers.conf</filename> file includes
+                                the <filename>meta-one</filename> layer and
+                                the location of your machine-specific file is
+                                the first location where that file is found
+                                according to <filename>FILESPATH</filename>,
+                                builds for all machines will also use that
+                                machine-specific file.</para>
+                                <para>You can make sure that a machine-specific
+                                file is used for a particular machine by putting
+                                the file in a subdirectory specific to the
+                                machine.
+                                For example, rather than placing the file in
+                                <filename>meta-one/recipes-core/base-files/base-files/</filename>
+                                as shown above, put it in
+                                <filename>meta-one/recipes-core/base-files/base-files/one/</filename>.
+                                Not only does this make sure the file is used
+                                only when building for machine "one", but the
+                                build process locates the file more quickly.</para>
+                                <para>In summary, you need to place all files
+                                referenced from <filename>SRC_URI</filename>
+                                in a machine-specific subdirectory within the
+                                layer in order to restrict those files to
+                                machine-specific builds.
+                                </para></listitem>
+                        </itemizedlist>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Perform Steps to Apply for Yocto Project Compatibility:</emphasis>
+                        If you want permission to use the
+                        Yocto Project Compatible Logo with your layer
+                        or application that uses your layer, perform the
+                        steps to apply for compatibility.
+                        See the
+                        "<link linkend='making-sure-your-layer-is-compatible-with-yocto-project'>Making Sure Your Layer is Compatible With Yocto Project</link>"
+                        section for more information.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Follow the Layer Naming Convention:</emphasis>
+                        Store custom layers in a Git repository that uses the
+                        <filename>meta-<replaceable>layer_name</replaceable></filename>
+                        format.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Group Your Layers Locally:</emphasis>
+                        Clone your repository alongside other cloned
+                        <filename>meta</filename> directories from the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
+                        </para></listitem>
+                </itemizedlist>
+            </para>
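+
+            <para>
+                As an example of the first item in the previous list, the
+                following sketch shows an append file that overrides only the
+                parts of a recipe that need to change rather than copying the
+                entire recipe.
+                The layer, recipe, and patch names used here are purely
+                illustrative.
+                An append file such as
+                <filename>meta-mylayer/recipes-example/example/example_%.bbappend</filename>
+                might add a patch on top of the original recipe as follows:
+                <literallayout class='monospaced'>
+     # Hypothetical append file - adjust the names and paths for your recipe
+     FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+     SRC_URI_append = " file://fix-build.patch"
+                </literallayout>
+            </para>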
         </section>
 
         <section id='making-sure-your-layer-is-compatible-with-yocto-project'>
@@ -456,65 +442,74 @@
                 existing Yocto Project layers (i.e. the layer is compatible
                 with the Yocto Project).
                 Ensuring compatibility makes the layer easy to be consumed
-                by others in the Yocto Project community and allows you
-                permission to use the Yocto Project Compatibility logo.
-            </para>
-
-            <para>
-                Version 1.0 of the Yocto Project Compatibility Program has
-                been in existence for a number of releases.
-                This version of the program consists of the layer application
-                process that requests permission to use the Yocto Project
-                Compatibility logo for your layer and application.
-                You can find version 1.0 of the form at
-                <ulink url='https://www.yoctoproject.org/webform/yocto-project-compatible-registration'></ulink>.
-                To be granted permission to use the logo, you need to be able
-                to answer "Yes" to the questions or have an acceptable
-                explanation for any questions answered "No".
-            </para>
-
-            <para>
-                A second version (2.0) of the Yocto Project Compatibility
-                Program is currently under development.
-                Included as part of version 2.0 (and currently available) is
-                the <filename>yocto-compat-layer.py</filename> script.
-                When run against a layer, this script tests the layer against
-                tighter constraints based on experiences of how layers have
-                worked in the real world and where pitfalls have been found.
-            </para>
-
-            <para>
-                Part of the 2.0 version of the program that is not currently
-                available but is in development is an updated compatibility
-                application form.
-                This updated form, among other questions, specifically
-                asks if your layer has passed the test using the
-                <filename>yocto-compat-layer.py</filename> script.
-                <note><title>Tip</title>
-                    Even though the updated application form is currently
-                    unavailable for version 2.0 of the Yocto Project
-                    Compatibility Program, the
-                    <filename>yocto-compat-layer.py</filename> script is
-                    available in OE-Core.
-                    You can use the script to assess the status of your
-                    layers in advance of the 2.0 release of the program.
+                by others in the Yocto Project community and could grant you
+                permission to use the Yocto Project Compatible Logo.
+                <note>
+                    Only Yocto Project member organizations are permitted to
+                    use the Yocto Project Compatible Logo.
+                    The logo is not available for general use.
+                    For information on how to become a Yocto Project member
+                    organization, see the
+                    <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>.
                 </note>
             </para>
 
             <para>
-                The remainder of this section presents information on the
-                version 1.0 registration form and on the
-                <filename>yocto-compat-layer.py</filename> script.
+                The Yocto Project Compatibility Program consists of a layer
+                application process that requests permission to use the Yocto
+                Project Compatible Logo for your layer and application.
+                The process consists of two parts:
+                <orderedlist>
+                    <listitem><para>
+                        Successfully passing a script
+                        (<filename>yocto-check-layer</filename>) that,
+                        when run against your layer, tests it against
+                        constraints based on experiences of how layers have
+                        worked in the real world and where pitfalls have been
+                        found.
+                        Getting a "PASS" result from the script is required for
+                        successful compatibility registration.
+                        </para></listitem>
+                    <listitem><para>
+                        Completion of an application acceptance form, which
+                        you can find at
+                        <ulink url='https://www.yoctoproject.org/webform/yocto-project-compatible-registration'></ulink>.
+                        </para></listitem>
+                </orderedlist>
             </para>
 
-            <section id='yocto-project-compatibility-program-application'>
-                <title>Yocto Project Compatibility Program Application</title>
+            <para>
+                To be granted permission to use the logo, you need to satisfy
+                the following:
+                <itemizedlist>
+                    <listitem><para>
+                        Be able to check the box indicating that you
+                        got a "PASS" when running the script against your
+                        layer.
+                        </para></listitem>
+                    <listitem><para>
+                        Answer "Yes" to the questions on the form or have an
+                        acceptable explanation for any questions answered "No".
+                        </para></listitem>
+                    <listitem><para>
+                        Be a Yocto Project Member Organization.
+                        </para></listitem>
+                </itemizedlist>
+            </para>
+
+            <para>
+                The remainder of this section presents information on the
+                registration form and on the
+                <filename>yocto-check-layer</filename> script.
+            </para>
+
+            <section id='yocto-project-compatible-program-application'>
+                <title>Yocto Project Compatible Program Application</title>
 
                 <para>
-                    Use the 1.0 version of the form to apply for your
-                    layer's compatibility approval.
+                    Use the form to apply for your layer's approval.
                     Upon successful application, you can use the Yocto
-                    Project Compatibility logo with your layer and the
+                    Project Compatible Logo with your layer and the
                     application that uses your layer.
                 </para>
 
@@ -552,26 +547,18 @@
                 </para>
             </section>
 
-            <section id='yocto-compat-layer-py-script'>
-                <title><filename>yocto-compat-layer.py</filename> Script</title>
+            <section id='yocto-check-layer-script'>
+                <title><filename>yocto-check-layer</filename> Script</title>
 
                 <para>
-                    The <filename>yocto-compat-layer.py</filename> script,
-                    which is currently available, provides you a way to
-                    assess how compatible your layer is with the Yocto
-                    Project.
+                    The <filename>yocto-check-layer</filename> script
+                    provides a way for you to assess how compatible your layer
+                    is with the Yocto Project.
                     You should run this script prior to using the form to
                     apply for compatibility as described in the previous
                     section.
-                    <note>
-                        Because the script is part of the 2.0 release of the
-                        Yocto Project Compatibility Program, you are not
-                        required to successfully run your layer against it
-                        in order to be granted compatibility status.
-                        However, it is a good idea as it promotes
-                        well-behaved layers and gives you an idea of where your
-                        layer stands regarding compatibility.
-                    </note>
+                    You need to achieve a "PASS" result in order to have
+                    your application form successfully processed.
                 </para>
 
                 <para>
@@ -588,7 +575,7 @@
                     your build directory:
                     <literallayout class='monospaced'>
      $ source oe-init-build-env
-     $ yocto-compat-layer.py <replaceable>your_layer_directory</replaceable>
+     $ yocto-check-layer <replaceable>your_layer_directory</replaceable>
                     </literallayout>
                     Be sure to provide the actual directory for your layer
                     as part of the command.
@@ -655,7 +642,7 @@
                 <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-BBLAYERS'>BBLAYERS</ulink></filename>
                 variable in your <filename>conf/bblayers.conf</filename> file,
                 which is found in the
-                <link linkend='build-directory'>Build Directory</link>.
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
                 The following example shows how to enable a layer named
                 <filename>meta-mylayer</filename>:
                 <literallayout class='monospaced'>
@@ -736,7 +723,7 @@
             <para>
                 As an example, consider the main formfactor recipe and a
                 corresponding formfactor append file both from the
-                <link linkend='source-directory'>Source Directory</link>.
+                <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
                 Here is the main formfactor recipe, which is named
                 <filename>formfactor_0.0.bb</filename> and located in the
                 "meta" layer at
@@ -975,128 +962,163 @@
      ...
      EXTRA_OECONF = "--enable-something --enable-somethingelse"
      ...
-                                </literallayout></para></listitem>
-                        </itemizedlist></para></listitem>
+                                </literallayout>
+                                </para></listitem>
+                        </itemizedlist>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis><filename>layerindex-fetch</filename>:</emphasis>
+                        Fetches a layer from a layer index, along with its
+                        dependent layers, and adds the layers to the
+                        <filename>conf/bblayers.conf</filename> file
+                        (see the example invocations following this list).
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis><filename>layerindex-show-depends</filename>:</emphasis>
+                        Finds layer dependencies from the layer index.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis><filename>create-layer</filename>:</emphasis>
+                        Creates a basic layer.
+                        </para></listitem>
                 </itemizedlist>
             </para>
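+
+            <para>
+                As a brief sketch of the layer index subcommands listed
+                above, and assuming your build environment has been
+                initialized, the following commands first show the
+                dependencies of a layer and then fetch that layer, along with
+                its dependencies, adding them to
+                <filename>conf/bblayers.conf</filename>.
+                The layer name <filename>meta-oe</filename> is simply one
+                example of a layer published in a public layer index; any
+                indexed layer name could be used:
+                <literallayout class='monospaced'>
+     $ bitbake-layers layerindex-show-depends meta-oe
+     $ bitbake-layers layerindex-fetch meta-oe
+                </literallayout>
+            </para>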
         </section>
 
-        <section id='creating-a-general-layer-using-the-yocto-layer-script'>
-            <title>Creating a General Layer Using the yocto-layer Script</title>
+        <section id='creating-a-general-layer-using-the-bitbake-layers-script'>
+            <title>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</title>
 
             <para>
-                The <filename>yocto-layer</filename> script simplifies
+                The <filename>bitbake-layers</filename> script with the
+                <filename>create-layer</filename> subcommand simplifies
                 creating a new general layer.
-                <note>
-                    For information on BSP layers, see the
-                    "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP Layers</ulink>"
-                    section in the Yocto Project Board Specific (BSP)
-                    Developer's Guide.
+                <note><title>Notes</title>
+                    <itemizedlist>
+                        <listitem><para>
+                            For information on BSP layers, see the
+                            "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP Layers</ulink>"
+                            section in the Yocto Project Board Support Package (BSP)
+                            Developer's Guide.
+                            </para></listitem>
+                        <listitem><para>
+                            The <filename>bitbake-layers</filename> script
+                            replaces the <filename>yocto-layer</filename>
+                            script, which is deprecated in the Yocto Project
+                            2.4 release.
+                            The <filename>yocto-layer</filename> script
+                            continues to function as part of the 2.4 release
+                            but will be removed post 2.4.
+                            </para></listitem>
+                    </itemizedlist>
                 </note>
-                The default mode of the script's operation is to prompt you for
-                information needed to generate the layer:
+                The default mode of the script's operation with this
+                subcommand is to create a layer with the following
+                (a sketch of the resulting layout follows this list):
                 <itemizedlist>
-                    <listitem><para>The layer priority.
+                    <listitem><para>A layer priority of 6.
                         </para></listitem>
-                    <listitem><para>Whether or not to create a sample recipe.
+                    <listitem><para>A <filename>conf</filename>
+                        subdirectory that contains a
+                        <filename>layer.conf</filename> file.
                         </para></listitem>
-                    <listitem><para>Whether or not to create a sample
-                        append file.
+                    <listitem><para>
+                        A <filename>recipes-example</filename> subdirectory
+                        that contains a further subdirectory named
+                        <filename>example</filename>, which contains
+                        an <filename>example.bb</filename> recipe file.
+                        </para></listitem>
+                    <listitem><para>A <filename>COPYING.MIT</filename> file,
+                        which is the license statement for the layer.
+                        The script assumes you want to use the MIT license,
+                        which is typical for most layers, for the contents of
+                        the layer itself.
+                        </para></listitem>
+                    <listitem><para>
+                        A <filename>README</filename> file, which describes
+                        the contents of your new layer.
                         </para></listitem>
                 </itemizedlist>
             </para>
 
             <para>
-                Use the <filename>yocto-layer create</filename> sub-command
-                to create a new general layer.
-                In its simplest form, you can create a layer as follows:
+                In its simplest form, you can use the following command form
+                to create a layer.
+                The command creates a layer whose name corresponds to
+                <replaceable>your_layer_name</replaceable> in the current
+                directory:
                 <literallayout class='monospaced'>
-     $ yocto-layer create mylayer
+     $ bitbake-layers create-layer <replaceable>your_layer_name</replaceable>
                 </literallayout>
-                The previous example creates a layer named
-                <filename>meta-mylayer</filename> in the current directory.
             </para>
 
             <para>
-                As the <filename>yocto-layer create</filename> command runs,
-                default values for the prompts appear in brackets.
-                Pressing enter without supplying anything for the prompts
-                or pressing enter and providing an invalid response causes the
-                script to accept the default value.
-                Once the script completes, the new layer
-                is created in the current working directory.
-                The script names the layer by prepending
-                <filename>meta-</filename> to the name you provide.
+                If you want to set the priority of the layer to something
+                other than the default value of "6", you can either use the
+                <filename>&dash;&dash;priority</filename> option or edit the
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-BBFILE_PRIORITY'><filename>BBFILE_PRIORITY</filename></ulink>
+                value in the <filename>conf/layer.conf</filename> file after
+                the script creates it.
+                Furthermore, if you want to give the example recipe file a
+                name other than the default, use the
+                <filename>&dash;&dash;example-recipe-name</filename> option.
             </para>
 
             <para>
-                Minimally, the script creates the following within the layer:
-                <itemizedlist>
-                    <listitem><para><emphasis>The <filename>conf</filename>
-                        directory:</emphasis>
-                        This directory contains the layer's configuration file.
-                        The root name for the file is the same as the root name
-                        your provided for the layer (e.g.
-                        <filename><replaceable>layer</replaceable>.conf</filename>).
-                        </para></listitem>
-                    <listitem><para><emphasis>The
-                        <filename>COPYING.MIT</filename> file:</emphasis>
-                        The copyright and use notice for the software.
-                        </para></listitem>
-                    <listitem><para><emphasis>The <filename>README</filename>
-                        file:</emphasis>
-                        A file describing the contents of your new layer.
-                        </para></listitem>
-                </itemizedlist>
-            </para>
-
-            <para>
-                If you choose to generate a sample recipe file, the script
-                prompts you for the name for the recipe and then creates it
-                in <filename><replaceable>layer</replaceable>/recipes-example/example/</filename>.
-                The script creates a <filename>.bb</filename> file and a
-                directory, which contains a sample
-                <filename>helloworld.c</filename> source file, along with
-                a sample patch file.
-                If you do not provide a recipe name, the script uses
-                "example".
-            </para>
-
-            <para>
-                If you choose to generate a sample append file, the script
-                prompts you for the name for the file and then creates it
-                in <filename><replaceable>layer</replaceable>/recipes-example-bbappend/example-bbappend/</filename>.
-                The script creates a <filename>.bbappend</filename> file and a
-                directory, which contains a sample patch file.
-                If you do not provide a recipe name, the script uses
-                "example".
-                The script also prompts you for the version of the append file.
-                The version should match the recipe to which the append file
-                is associated.
-            </para>
-
-            <para>
-                The easiest way to see how the <filename>yocto-layer</filename>
-                script works is to experiment with the script.
+                The easiest way to see how the
+                <filename>bitbake-layers create-layer</filename> command
+                works is to experiment with the script.
                 You can also read the usage information by entering the
                 following:
                 <literallayout class='monospaced'>
-     $ yocto-layer help
+     $ bitbake-layers create-layer --help
+     NOTE: Starting bitbake server...
+     usage: bitbake-layers create-layer [-h] [--priority PRIORITY]
+                                        [--example-recipe-name EXAMPLERECIPE]
+                                        layerdir
+
+     Create a basic layer
+
+     positional arguments:
+       layerdir              Layer directory to create
+
+     optional arguments:
+       -h, --help            show this help message and exit
+       --priority PRIORITY, -p PRIORITY
+                             Layer directory to create
+       --example-recipe-name EXAMPLERECIPE, -e EXAMPLERECIPE
+                             Filename of the example recipe
                 </literallayout>
             </para>
 
             <para>
                 Once you create your general layer, you must add it to your
                 <filename>bblayers.conf</filename> file.
-                Here is an example where a layer named
-                <filename>meta-mylayer</filename> is added:
+                You can add your layer by using the
+                <filename>bitbake-layers add-layer</filename> command:
                 <literallayout class='monospaced'>
-     BBLAYERS = ?" \
-        /usr/local/src/yocto/meta \
-        /usr/local/src/yocto/meta-poky \
-        /usr/local/src/yocto/meta-yocto-bsp \
-        /usr/local/src/yocto/meta-mylayer \
-        "
+     $ bitbake-layers add-layer <replaceable>your_layer_name</replaceable>
+                </literallayout>
+                Here is an example where a layer named
+                <filename>meta-scottrif</filename> is added and then the
+                layers are shown using the
+                <filename>bitbake-layers show-layers</filename> command:
+                <literallayout class='monospaced'>
+     $ bitbake-layers add-layer meta-scottrif
+     NOTE: Starting bitbake server...
+     Loading cache: 100% |############################################| Time: 0:00:00
+     Loaded 1275 entries from dependency cache.
+     Parsing recipes: 100% |##########################################| Time: 0:00:00
+     Parsing of 819 .bb files complete (817 cached, 2 parsed). 1276 targets, 44 skipped, 0 masked, 0 errors.
+     $ bitbake-layers show-layers
+     NOTE: Starting bitbake server...
+     layer                 path                                      priority
+     ==========================================================================
+     meta                  /home/scottrif/poky/meta                  5
+     meta-poky             /home/scottrif/poky/meta-poky             5
+     meta-yocto-bsp        /home/scottrif/poky/meta-yocto-bsp        5
+     meta-mylayer          /home/scottrif/meta-mylayer               6
+     workspace             /home/scottrif/poky/build/workspace       99
+     meta-scottrif         /home/scottrif/poky/build/meta-scottrif   6
                 </literallayout>
                 Adding the layer to this file enables the build system to
                 locate the layer during the build.
@@ -1193,7 +1215,7 @@
                 from within a recipe and using
                 <filename>EXTRA_IMAGE_FEATURES</filename> from within
                 your <filename>local.conf</filename> file, which is found in the
-                <link linkend='build-directory'>Build Directory</link>.
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
             </para>
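+
+            <para>
+                As a minimal sketch of the two approaches, you might add
+                something like the following.
+                The features chosen here are only examples; use whichever
+                features your image actually needs:
+                <literallayout class='monospaced'>
+     # In conf/local.conf - applies to every image in this build
+     EXTRA_IMAGE_FEATURES += "debug-tweaks ssh-server-openssh"
+
+     # In an image recipe - applies to that image only
+     IMAGE_FEATURES += "splash"
+                </literallayout>
+            </para>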
 
             <para>
@@ -1496,6 +1518,11 @@
                         similar in function to the recipe you need.
                         </para></listitem>
                 </itemizedlist>
+                <note>
+                    For information on recipe syntax, see the
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#recipe-syntax'>Recipe Syntax</ulink>"
+                    section in the Yocto Project Reference Manual.
+                </note>
             </para>
 
             <section id='new-recipe-creating-the-base-recipe-using-devtool'>
@@ -1516,8 +1543,9 @@
                 <para>
                     You can find a complete description of the
                     <filename>devtool add</filename> command in the
-                    "<link linkend='use-devtool-to-integrate-new-code'>Use <filename>devtool add</filename> to Add an Application</link>"
-                    section.
+                    "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-a-closer-look-at-devtool-add'>A Closer Look at <filename>devtool</filename> add</ulink>"
+                    section in the Yocto Project Application Development
+                    and the Extensible Software Development Kit (eSDK) manual.
                 </para>
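+
+                <para>
+                    As a quick, illustrative sketch, the following command
+                    creates a base recipe from an existing source tree.
+                    The recipe name <filename>mypackage</filename> and the
+                    source path are hypothetical placeholders:
+                    <literallayout class='monospaced'>
+     # "mypackage" and the source path below are hypothetical
+     $ devtool add mypackage /home/user/sources/mypackage
+                    </literallayout>
+                </para>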
             </section>
 
@@ -1540,12 +1568,10 @@
 
                 <para>
                     To run the tool, you just need to be in your
-                    <link linkend='build-directory'>Build Directory</link>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
                     and have sourced the build environment setup script
                     (i.e.
-                    <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>oe-init-build-env</filename></ulink>
-                    or
-                    <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>).
+                    <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>oe-init-build-env</filename></ulink>).
                     Here is the basic <filename>recipetool</filename> syntax:
                     <note>
                         Running <filename>recipetool -h</filename> or
@@ -1715,303 +1741,6 @@
             </itemizedlist>
         </section>
 
-        <section id='understanding-recipe-syntax'>
-            <title>Understanding Recipe Syntax</title>
-
-            <para>
-                Understanding recipe file syntax is important for
-                writing recipes.
-                The following list overviews the basic items that make up a
-                BitBake recipe file.
-                For more complete BitBake syntax descriptions, see the
-                "<ulink url='&YOCTO_DOCS_BB_URL;#bitbake-user-manual-metadata'>Syntax and Operators</ulink>"
-                chapter of the BitBake User Manual.
-                <itemizedlist>
-                    <listitem><para><emphasis>Variable Assignments and Manipulations:</emphasis>
-                        Variable assignments allow a value to be assigned to a
-                        variable.
-                        The assignment can be static text or might include
-                        the contents of other variables.
-                        In addition to the assignment, appending and prepending
-                        operations are also supported.</para>
-                        <para>The following example shows some of the ways
-                        you can use variables in recipes:
-                        <literallayout class='monospaced'>
-     S = "${WORKDIR}/postfix-${PV}"
-     CFLAGS += "-DNO_ASM"
-     SRC_URI_append = " file://fixup.patch"
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>Functions:</emphasis>
-                        Functions provide a series of actions to be performed.
-                        You usually use functions to override the default
-                        implementation of a task function or to complement
-                        a default function (i.e. append or prepend to an
-                        existing function).
-                        Standard functions use <filename>sh</filename> shell
-                        syntax, although access to OpenEmbedded variables and
-                        internal methods are also available.</para>
-                        <para>The following is an example function from the
-                        <filename>sed</filename> recipe:
-                        <literallayout class='monospaced'>
-     do_install () {
-         autotools_do_install
-         install -d ${D}${base_bindir}
-         mv ${D}${bindir}/sed ${D}${base_bindir}/sed
-         rmdir ${D}${bindir}/
-     }
-                        </literallayout>
-                        It is also possible to implement new functions that
-                        are called between existing tasks as long as the
-                        new functions are not replacing or complementing the
-                        default functions.
-                        You can implement functions in Python
-                        instead of shell.
-                        Both of these options are not seen in the majority of
-                        recipes.</para></listitem>
-                    <listitem><para><emphasis>Keywords:</emphasis>
-                        BitBake recipes use only a few keywords.
-                        You use keywords to include common
-                        functions (<filename>inherit</filename>), load parts
-                        of a recipe from other files
-                        (<filename>include</filename> and
-                        <filename>require</filename>) and export variables
-                        to the environment (<filename>export</filename>).</para>
-                        <para>The following example shows the use of some of
-                        these keywords:
-                        <literallayout class='monospaced'>
-     export POSTCONF = "${STAGING_BINDIR}/postconf"
-     inherit autoconf
-     require otherfile.inc
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>Comments:</emphasis>
-                        Any lines that begin with the hash character
-                        (<filename>#</filename>) are treated as comment lines
-                        and are ignored:
-                        <literallayout class='monospaced'>
-     # This is a comment
-                        </literallayout>
-                        </para></listitem>
-                </itemizedlist>
-            </para>
-
-            <para>
-                This next list summarizes the most important and most commonly
-                used parts of the recipe syntax.
-                For more information on these parts of the syntax, you can
-                reference the
-                <ulink url='&YOCTO_DOCS_BB_URL;#bitbake-user-manual-metadata'>Syntax and Operators</ulink>
-                chapter in the BitBake User Manual.
-                <itemizedlist>
-                    <listitem><para><emphasis>Line Continuation: <filename>\</filename></emphasis> -
-                        Use the backward slash (<filename>\</filename>)
-                        character to split a statement over multiple lines.
-                        Place the slash character at the end of the line that
-                        is to be continued on the next line:
-                        <literallayout class='monospaced'>
-     VAR = "A really long \
-            line"
-                        </literallayout>
-                        <note>
-                            You cannot have any characters including spaces
-                            or tabs after the slash character.
-                        </note>
-                        </para></listitem>
-                    <listitem><para>
-                        <emphasis>Using Variables: <filename>${...}</filename></emphasis> -
-                        Use the <filename>${<replaceable>VARNAME</replaceable>}</filename> syntax to
-                        access the contents of a variable:
-                        <literallayout class='monospaced'>
-     SRC_URI = "${SOURCEFORGE_MIRROR}/libpng/zlib-${PV}.tar.gz"
-                        </literallayout>
-                        <note>
-                            It is important to understand that the value of a
-                            variable expressed in this form does not get
-                            substituted automatically.
-                            The expansion of these expressions happens
-                            on-demand later (e.g. usually when a function that
-                            makes reference to the variable executes).
-                            This behavior ensures that the values are most
-                            appropriate for the context in which they are
-                            finally used.
-                            On the rare occasion that you do need the variable
-                            expression to be expanded immediately, you can use
-                            the <filename>:=</filename> operator instead of
-                            <filename>=</filename> when you make the
-                            assignment, but this is not generally needed.
-                        </note>
-                        </para></listitem>
-                    <listitem><para><emphasis>Quote All Assignments: <filename>"<replaceable>value</replaceable>"</filename></emphasis> -
-                        Use double quotes around the value in all variable
-                        assignments.
-                        <literallayout class='monospaced'>
-     VAR1 = "${OTHERVAR}"
-     VAR2 = "The version is ${PV}"
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>Conditional Assignment: <filename>?=</filename></emphasis> -
-                        Conditional assignment is used to assign a value to
-                        a variable, but only when the variable is currently
-                        unset.
-                        Use the question mark followed by the equal sign
-                        (<filename>?=</filename>) to make a "soft" assignment
-                        used for conditional assignment.
-                        Typically, "soft" assignments are used in the
-                        <filename>local.conf</filename> file for variables
-                        that are allowed to come through from the external
-                        environment.
-                        </para>
-                        <para>Here is an example where
-                        <filename>VAR1</filename> is set to "New value" if
-                        it is currently empty.
-                        However, if <filename>VAR1</filename> has already been
-                        set, it remains unchanged:
-                        <literallayout class='monospaced'>
-     VAR1 ?= "New value"
-                        </literallayout>
-                        In this next example, <filename>VAR1</filename>
-                        is left with the value "Original value":
-                        <literallayout class='monospaced'>
-     VAR1 = "Original value"
-     VAR1 ?= "New value"
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>Appending: <filename>+=</filename></emphasis> -
-                        Use the plus character followed by the equals sign
-                        (<filename>+=</filename>) to append values to existing
-                        variables.
-                        <note>
-                            This operator adds a space between the existing
-                            content of the variable and the new content.
-                        </note></para>
-                        <para>Here is an example:
-                        <literallayout class='monospaced'>
-     SRC_URI += "file://fix-makefile.patch"
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>Prepending: <filename>=+</filename></emphasis> -
-                        Use the equals sign followed by the plus character
-                        (<filename>=+</filename>) to prepend values to existing
-                        variables.
-                        <note>
-                            This operator adds a space between the new content
-                            and the existing content of the variable.
-                        </note></para>
-                        <para>Here is an example:
-                        <literallayout class='monospaced'>
-     VAR =+ "Starts"
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>Appending: <filename>_append</filename></emphasis> -
-                        Use the <filename>_append</filename> operator to
-                        append values to existing variables.
-                        This operator does not add any additional space.
-                        Also, the operator is applied after all the
-                        <filename>+=</filename>, and
-                        <filename>=+</filename> operators have been applied and
-                        after all <filename>=</filename> assignments have
-                        occurred.
-                        </para>
-                        <para>The following example shows the space being
-                        explicitly added to the start to ensure the appended
-                        value is not merged with the existing value:
-                        <literallayout class='monospaced'>
-     SRC_URI_append = " file://fix-makefile.patch"
-                        </literallayout>
-                        You can also use the <filename>_append</filename>
-                        operator with overrides, which results in the actions
-                        only being performed for the specified target or
-                        machine:
-                        <literallayout class='monospaced'>
-     SRC_URI_append_sh4 = " file://fix-makefile.patch"
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>Prepending: <filename>_prepend</filename></emphasis> -
-                        Use the <filename>_prepend</filename> operator to
-                        prepend values to existing variables.
-                        This operator does not add any additional space.
-                        Also, the operator is applied after all the
-                        <filename>+=</filename>, and
-                        <filename>=+</filename> operators have been applied and
-                        after all <filename>=</filename> assignments have
-                        occurred.
-                        </para>
-                        <para>The following example shows the space being
-                        explicitly added to the end to ensure the prepended
-                        value is not merged with the existing value:
-                        <literallayout class='monospaced'>
-     CFLAGS_prepend = "-I${S}/myincludes "
-                        </literallayout>
-                        You can also use the <filename>_prepend</filename>
-                        operator with overrides, which results in the actions
-                        only being performed for the specified target or
-                        machine:
-                        <literallayout class='monospaced'>
-     CFLAGS_prepend_sh4 = "-I${S}/myincludes "
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>Overrides:</emphasis> -
-                        You can use overrides to set a value conditionally,
-                        typically based on how the recipe is being built.
-                        For example, to set the
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-KBRANCH'><filename>KBRANCH</filename></ulink>
-                        variable's value to "standard/base" for any target
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>,
-                        except for qemuarm where it should be set to
-                        "standard/arm-versatile-926ejs", you would do the
-                        following:
-                        <literallayout class='monospaced'>
-     KBRANCH = "standard/base"
-     KBRANCH_qemuarm  = "standard/arm-versatile-926ejs"
-                        </literallayout>
-                        Overrides are also used to separate alternate values
-                        of a variable in other situations.
-                        For example, when setting variables such as
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-FILES'><filename>FILES</filename></ulink>
-                        and
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-RDEPENDS'><filename>RDEPENDS</filename></ulink>
-                        that are specific to individual packages produced by
-                        a recipe, you should always use an override that
-                        specifies the name of the package.
-                        </para></listitem>
-                    <listitem><para><emphasis>Indentation:</emphasis>
-                        Use spaces for indentation rather than than tabs.
-                        For shell functions, both currently work.
-                        However, it is a policy decision of the Yocto Project
-                        to use tabs in shell functions.
-                        Realize that some layers have a policy to use spaces
-                        for all indentation.
-                        </para></listitem>
-                    <listitem><para><emphasis>Using Python for Complex Operations: <filename>${@<replaceable>python_code</replaceable>}</filename></emphasis> -
-                        For more advanced processing, it is possible to use
-                        Python code during variable assignments (e.g.
-                        search and replacement on a variable).</para>
-                        <para>You indicate Python code using the
-                        <filename>${@<replaceable>python_code</replaceable>}</filename>
-                        syntax for the variable assignment:
-                        <literallayout class='monospaced'>
-     SRC_URI = "ftp://ftp.info-zip.org/pub/infozip/src/zip${@d.getVar('PV',1).replace('.', '')}.tgz
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>Shell Function Syntax:</emphasis>
-                        Write shell functions as if you were writing a shell
-                        script when you describe a list of actions to take.
-                        You should ensure that your script works with a generic
-                        <filename>sh</filename> and that it does not require
-                        any <filename>bash</filename> or other shell-specific
-                        functionality.
-                        The same considerations apply to various system
-                        utilities (e.g. <filename>sed</filename>,
-                        <filename>grep</filename>, <filename>awk</filename>,
-                        and so forth) that you might wish to use.
-                        If in doubt, you should check with multiple
-                        implementations - including those from BusyBox.
-                        </para></listitem>
-                </itemizedlist>
-            </para>
-        </section>
-
         <section id='new-recipe-running-a-build-on-the-recipe'>
             <title>Running a Build on the Recipe</title>
 
@@ -2023,12 +1752,10 @@
             </para>
 
             <para>
-                Assuming you have sourced a build environment setup script (i.e.
-                <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
-                or
-                <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>)
+                Assuming you have sourced the build environment setup script (i.e.
+                <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>)
                 and you are in the
-                <link linkend='build-directory'>Build Directory</link>,
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
                 use BitBake to process your recipe.
                 All you need to provide is the
                 <filename><replaceable>basename</replaceable></filename> of the recipe as described
@@ -2083,8 +1810,8 @@
             </para>
 
             <para>
-                You can find more information about the build process in the
-                "<ulink url='&YOCTO_DOCS_REF_URL;#closer-look'>A Closer Look at the Yocto Project Development Environment</ulink>"
+                You can find more information about the build process in
+                "<ulink url='&YOCTO_DOCS_REF_URL;#ref-development-environment'>The Yocto Project Development Environment</ulink>"
                 chapter of the Yocto Project Reference Manual.
             </para>
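+
+            <para>
+                As a minimal sketch, assuming a hypothetical recipe file named
+                <filename>foo_1.3.0.bb</filename> in a layer BitBake already
+                knows about, the invocation is simply:
+                <literallayout class='monospaced'>
+     $ bitbake foo
+                </literallayout>
+            </para>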
         </section>
@@ -3074,7 +2801,7 @@
                         class.
                         See the <filename>systemd.bbclass</filename> file
                         located in your
-                        <link linkend='source-directory'>Source Directory</link>.
+                        <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
-                        section for more information.
+                        for more information.
                         </para></listitem>
                 </itemizedlist>
@@ -3967,6 +3694,417 @@
         </section>
     </section>
 
+    <section id='finding-the-temporary-source-code'>
+        <title>Finding Temporary Source Code</title>
+
+        <para>
+            You might find it helpful during development to modify the
+            temporary source code used by recipes to build packages.
+            For example, suppose you are developing a patch and you need to
+            experiment a bit to figure out your solution.
+            After you have initially built the package, you can iteratively
+            tweak the source code, which is located in the
+            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
+            and then you can force a re-compile and quickly test your altered
+            code.
+            Once you settle on a solution, you can then preserve your changes
+            in the form of patches.
+        </para>
+
+        <para>
+            During a build, the unpacked temporary source code used by recipes
+            to build packages is available in the Build Directory as
+            defined by the
+            <ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink>
+            variable.
+            Below is the default value for the <filename>S</filename> variable
+            as defined in the
+            <filename>meta/conf/bitbake.conf</filename> configuration file
+            in the
+            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>:
+            <literallayout class='monospaced'>
+     S = "${WORKDIR}/${BP}"
+            </literallayout>
+            You should be aware that many recipes override the
+            <filename>S</filename> variable.
+            For example, recipes that fetch their source from Git usually set
+            <filename>S</filename> to <filename>${WORKDIR}/git</filename>.
+            <note>
+                The
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-BP'><filename>BP</filename></ulink>
+                variable represents the base recipe name, which consists of
+                the name and version:
+                <literallayout class='monospaced'>
+     BP = "${BPN}-${PV}"
+                </literallayout>
+            </note>
+        </para>
+
+        <para>
+            The path to the work directory for the recipe
+            (<ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink>)
+            is defined as follows:
+            <literallayout class='monospaced'>
+     ${TMPDIR}/work/${MULTIMACH_TARGET_SYS}/${PN}/${EXTENDPE}${PV}-${PR}
+            </literallayout>
+            The actual directory depends on several things:
+            <itemizedlist>
+                <listitem><para>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink>:
+                    The top-level build output directory.
+                    </para></listitem>
+                <listitem><para>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-MULTIMACH_TARGET_SYS'><filename>MULTIMACH_TARGET_SYS</filename></ulink>:
+                    The target system identifier.
+                    </para></listitem>
+                <listitem><para>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink>:
+                    The recipe name.
+                    </para></listitem>
+                <listitem><para>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTENDPE'><filename>EXTENDPE</filename></ulink>:
+                    The epoch. If
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-PE'><filename>PE</filename></ulink>
+                    is not specified, which is the case for most recipes,
+                    <filename>EXTENDPE</filename> is blank.
+                    </para></listitem>
+                <listitem><para>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>:
+                    The recipe version.
+                    </para></listitem>
+                <listitem><para>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink>:
+                    The recipe revision.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
+
+        <para>
+            As an example, assume a Source Directory top-level folder
+            named <filename>poky</filename>, a default Build Directory at
+            <filename>poky/build</filename>, and a
+            <filename>qemux86-poky-linux</filename> machine target
+            system.
+            Furthermore, suppose your recipe is named
+            <filename>foo_1.3.0.bb</filename>.
+            In this case, the work directory the build system uses to
+            build the package would be as follows:
+            <literallayout class='monospaced'>
+     poky/build/tmp/work/qemux86-poky-linux/foo/1.3.0-r0
+            </literallayout>
+        </para>
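+
+        <para>
+            Rather than assembling this path by hand, you can ask BitBake to
+            print the expanded values.
+            The following sketch assumes the hypothetical
+            <filename>foo</filename> recipe from the example above:
+            <literallayout class='monospaced'>
+     $ bitbake -e foo | grep "^WORKDIR="
+     $ bitbake -e foo | grep "^S="
+            </literallayout>
+        </para>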
+    </section>
+
+    <section id="using-a-quilt-workflow">
+        <title>Using Quilt in Your Workflow</title>
+
+        <para>
+            <ulink url='http://savannah.nongnu.org/projects/quilt'>Quilt</ulink>
+            is a powerful tool that allows you to capture source code changes
+            without having a clean source tree.
+            This section outlines the typical workflow you can use to modify
+            source code, test changes, and then preserve the changes in the
+            form of a patch, all using Quilt.
+            <note><title>Tip</title>
+                If you clean a recipe or have <filename>rm_work</filename>
+                enabled, the
+                <ulink url='&YOCTO_DOCS_SDK_URL;#using-devtool-in-your-sdk-workflow'><filename>devtool</filename> workflow</ulink>
+                described in the Yocto Project Application Development
+                and the Extensible Software Development Kit (eSDK) manual
+                preserves your source changes more safely than the flow
+                that uses Quilt.
+            </note>
+        </para>
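+
+        <para>
+            If you prefer the <filename>devtool</filename> flow mentioned in
+            the tip, a minimal sketch looks like the following, assuming a
+            hypothetical recipe named <filename>foo</filename> and a
+            destination layer at <filename>../meta-mylayer</filename>.
+            The <filename>devtool modify</filename> command sets up a Git
+            work tree for the recipe, and <filename>devtool finish</filename>
+            turns your commits into patches in the layer:
+            <literallayout class='monospaced'>
+     $ devtool modify foo
+     # edit, build and test with "devtool build foo", then commit your changes
+     $ devtool finish foo ../meta-mylayer
+            </literallayout>
+        </para>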
+
+        <para>
+            Follow these general steps:
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Find the Source Code:</emphasis>
+                    Temporary source code used by the OpenEmbedded build system
+                    is kept in the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
+                    See the
+                    "<link linkend='finding-the-temporary-source-code'>Finding Temporary Source Code</link>"
+                    section to learn how to locate the directory that has the
+                    temporary source code for a particular package.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Change Your Working Directory:</emphasis>
+                    You need to be in the directory that has the temporary
+                    source code.
+                    That directory is defined by the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink>
+                    variable.</para></listitem>
+                <listitem><para>
+                    <emphasis>Create a New Patch:</emphasis>
+                    Before modifying source code, you need to create a new
+                    patch.
+                    To create a new patch file, use
+                    <filename>quilt new</filename> as below:
+                    <literallayout class='monospaced'>
+     $ quilt new my_changes.patch
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Notify Quilt and Add Files:</emphasis>
+                    After creating the patch, you need to notify Quilt about
+                    the files you plan to edit.
+                    You notify Quilt by adding the files to the patch you
+                    just created:
+                    <literallayout class='monospaced'>
+     $ quilt add file1.c file2.c file3.c
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Edit the Files:</emphasis>
+                    Make your changes in the source code to the files you added
+                    to the patch.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Test Your Changes:</emphasis>
+                    Once you have modified the source code, the easiest way to
+                    test your changes is by calling the
+                    <filename>do_compile</filename> task as shown in the
+                    following example:
+                    <literallayout class='monospaced'>
+     $ bitbake -c compile -f <replaceable>package</replaceable>
+                    </literallayout>
+                    The <filename>-f</filename> or <filename>--force</filename>
+                    option forces the specified task to execute.
+                    If you find problems with your code, you can just keep
+                    editing and re-testing iteratively until things work
+                    as expected.
+                    <note>
+                        All the modifications you make to the temporary
+                        source code disappear once you run the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-clean'><filename>do_clean</filename></ulink>
+                        or
+                        <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-cleanall'><filename>do_cleanall</filename></ulink>
+                        tasks using BitBake (i.e.
+                        <filename>bitbake -c clean <replaceable>package</replaceable></filename>
+                        and
+                        <filename>bitbake -c cleanall <replaceable>package</replaceable></filename>).
+                        Modifications will also disappear if you use the
+                        <filename>rm_work</filename> feature as described
+                        in the
+                        "<ulink url='&YOCTO_DOCS_QS_URL;#qs-building-images'>Building Images</ulink>"
+                        section of the Yocto Project Quick Start.
+                    </note>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Generate the Patch:</emphasis>
+                    Once your changes work as expected, you need to use Quilt
+                    to generate the final patch that contains all your
+                    modifications.
+                    <literallayout class='monospaced'>
+     $ quilt refresh
+                    </literallayout>
+                    At this point, the <filename>my_changes.patch</filename>
+                    file has all your edits made to the
+                    <filename>file1.c</filename>, <filename>file2.c</filename>,
+                    and <filename>file3.c</filename> files.</para>
+
+                    <para>You can find the resulting patch file in the
+                    <filename>patches/</filename> subdirectory of the source
+                    (<filename>S</filename>) directory.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Copy the Patch File:</emphasis>
+                    For simplicity, copy the patch file into a directory
+                    named <filename>files</filename>, which you can create
+                    in the same directory that holds the recipe
+                    (<filename>.bb</filename>) file or the append
+                    (<filename>.bbappend</filename>) file.
+                    Placing the patch here guarantees that the OpenEmbedded
+                    build system will find the patch.
+                    Next, add the patch into the
+                    <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'>SRC_URI</ulink></filename>
+                    of the recipe.
+                    Here is an example:
+                    <literallayout class='monospaced'>
+     SRC_URI += "file://my_changes.patch"
+                    </literallayout>
+                    </para></listitem>
+            </orderedlist>
+        </para>
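+
+        <para>
+            Taken together, and assuming the hypothetical
+            <filename>foo</filename> recipe built for
+            <filename>qemux86</filename> from the earlier work-directory
+            example, the flow condenses to a session along these lines
+            (paths and file names are illustrative only):
+            <literallayout class='monospaced'>
+     $ cd tmp/work/qemux86-poky-linux/foo/1.3.0-r0/foo-1.3.0
+     $ quilt new my_changes.patch
+     $ quilt add file1.c
+     # edit file1.c, then force a rebuild to test the change
+     $ bitbake -c compile -f foo
+     $ quilt refresh
+     # copy the generated patch next to the recipe and add it to SRC_URI
+     $ cp patches/my_changes.patch /path/to/meta-mylayer/recipes-example/foo/files/
+            </literallayout>
+        </para>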
+    </section>
+
+    <section id="platdev-appdev-devshell">
+        <title>Using a Development Shell</title>
+
+        <para>
+            When debugging certain commands or even when just editing packages,
+            <filename>devshell</filename> can be a useful tool.
+            When you invoke <filename>devshell</filename>, all tasks up to and
+            including
+            <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-patch'><filename>do_patch</filename></ulink>
+            are run for the specified target.
+            Then, a new terminal is opened and you are placed in
+            <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink><filename>}</filename>,
+            the source directory.
+            In the new terminal, all the OpenEmbedded build-related environment variables are
+            still defined so you can use commands such as <filename>configure</filename> and
+            <filename>make</filename>.
+            The commands execute just as if the OpenEmbedded build system were executing them.
+            Consequently, working this way can be helpful when debugging a build or preparing
+            software to be used with the OpenEmbedded build system.
+        </para>
+
+        <para>
+            Following is an example that uses <filename>devshell</filename> on a target named
+            <filename>matchbox-desktop</filename>:
+            <literallayout class='monospaced'>
+     $ bitbake matchbox-desktop -c devshell
+            </literallayout>
+        </para>
+
+        <para>
+            This command spawns a terminal with a shell prompt within the OpenEmbedded build environment.
+            The <ulink url='&YOCTO_DOCS_REF_URL;#var-OE_TERMINAL'><filename>OE_TERMINAL</filename></ulink>
+            variable controls what type of shell is opened.
+        </para>
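+
+        <para>
+            For example, to have the shell open inside a
+            <filename>screen</filename> session rather than a new terminal
+            window, you could add the following to your
+            <filename>local.conf</filename>; "screen" is one of the accepted
+            values, along with "auto", "none", and several terminal emulators:
+            <literallayout class='monospaced'>
+     OE_TERMINAL = "screen"
+            </literallayout>
+        </para>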
+
+        <para>
+            For spawned terminals, the following occurs:
+            <itemizedlist>
+                <listitem><para>The <filename>PATH</filename> variable includes the
+                    cross-toolchain.</para></listitem>
+                <listitem><para>The <filename>pkgconfig</filename> variables find the correct
+                    <filename>.pc</filename> files.</para></listitem>
+                <listitem><para>The <filename>configure</filename> command finds the
+                    Yocto Project site files as well as any other necessary files.</para></listitem>
+            </itemizedlist>
+        </para>
+
+        <para>
+            Within this environment, you can run configure or compile
+            commands as if they were being run by
+            the OpenEmbedded build system itself.
+            As noted earlier, the working directory also automatically changes to the
+            source directory (<ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink>).
+        </para>
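+
+        <para>
+            As a quick sanity check of the cross-toolchain setup inside
+            <filename>devshell</filename>, you can compile a throw-away test
+            program (the <filename>hello.c</filename> file here is
+            hypothetical) using the exported variables rather than
+            hard-coding a compiler name:
+            <literallayout class='monospaced'>
+     $ echo 'int main(void) { return 0; }' > hello.c
+     $ ${CC} ${CFLAGS} ${LDFLAGS} -o hello hello.c
+     $ file hello
+            </literallayout>
+        </para>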
+
+        <para>
+            To manually run a specific task using <filename>devshell</filename>,
+            run the corresponding <filename>run.*</filename> script in
+            the
+            <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink><filename>}/temp</filename>
+            directory (e.g.,
+            <filename>run.do_configure.</filename><replaceable>pid</replaceable>).
+            If a task's script does not exist, which would be the case if the task was
+            skipped by way of the sstate cache, you can create the script by first
+            running the task outside of the <filename>devshell</filename>:
+            <literallayout class='monospaced'>
+     $ bitbake <replaceable>recipe</replaceable> -c <replaceable>task</replaceable>
+            </literallayout>
+            <note><title>Notes</title>
+                <itemizedlist>
+                    <listitem><para>Execution of a task's <filename>run.*</filename>
+                        script and BitBake's execution of a task are identical.
+                        In other words, running the script re-runs the task
+                        just as it would be run using the
+                        <filename>bitbake -c</filename> command.
+                        </para></listitem>
+                    <listitem><para>Any <filename>run.*</filename> file that does not
+                        have a <filename>.pid</filename> extension is a
+                        symbolic link (symlink) to the most recent version of that
+                        file.
+                        </para></listitem>
+                </itemizedlist>
+            </note>
+        </para>
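+
+        <para>
+            As a sketch, the following re-runs the
+            <filename>do_configure</filename> task from within a
+            <filename>devshell</filename>, relying on the fact that the
+            <filename>run.do_configure</filename> symlink always points at
+            the most recent version of the script:
+            <literallayout class='monospaced'>
+     # from ${S}; assumes S sits directly under WORKDIR, which is the default
+     $ cd ../temp
+     $ ./run.do_configure
+            </literallayout>
+        </para>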
+
+        <para>
+            Remember that <filename>devshell</filename> is a mechanism that allows
+            you to get into the BitBake task execution environment.
+            As such, all commands must be called just as BitBake would call them,
+            which means you need to provide the appropriate options for
+            cross-compilation and so forth as applicable.
+        </para>
+
+        <para>
+            When you are finished using <filename>devshell</filename>, exit the shell
+            or close the terminal window.
+        </para>
+
+        <note><title>Notes</title>
+            <itemizedlist>
+                <listitem><para>
+                    It is worth remembering that when using <filename>devshell</filename>
+                    you need to use the full compiler name such as <filename>arm-poky-linux-gnueabi-gcc</filename>
+                    instead of just using <filename>gcc</filename>.
+                    The same applies to other applications such as <filename>binutils</filename>,
+                    <filename>libtool</filename>, and so forth.
+                    BitBake sets up environment variables such as <filename>CC</filename>
+                    to assist applications, such as <filename>make</filename>, in finding the correct tools.
+                    </para></listitem>
+                <listitem><para>
+                    It is also worth noting that <filename>devshell</filename> still works over
+                    X11 forwarding and in similar situations.
+                    </para></listitem>
+            </itemizedlist>
+        </note>
+    </section>
+
+    <section id="platdev-appdev-devpyshell">
+        <title>Using a Development Python Shell</title>
+
+        <para>
+            Similar to working within a development shell as described in
+            the previous section, you can also spawn and work within an
+            interactive Python development shell.
+            When debugging certain commands or even when just editing packages,
+            <filename>devpyshell</filename> can be a useful tool.
+            When you invoke <filename>devpyshell</filename>, all tasks up to and
+            including
+            <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-patch'><filename>do_patch</filename></ulink>
+            are run for the specified target.
+            Then a new terminal is opened.
+            Additionally, key Python objects and code are available in the same
+            way they are to BitBake tasks; in particular, the data store 'd'
+            is available.
+            Commands such as the following are useful when exploring the data
+            store and running functions:
+            <literallayout class='monospaced'>
+     pydevshell> d.getVar("STAGING_DIR", True)
+     '/media/build1/poky/build/tmp/sysroots'
+     pydevshell> d.getVar("STAGING_DIR", False)
+     '${TMPDIR}/sysroots'
+     pydevshell> d.setVar("FOO", "bar")
+     pydevshell> d.getVar("FOO", True)
+     'bar'
+     pydevshell> d.delVar("FOO")
+     pydevshell> d.getVar("FOO", True)
+     pydevshell> bb.build.exec_func("do_unpack", d)
+     pydevshell>
+            </literallayout>
+            The commands execute just as if the OpenEmbedded build system were executing them.
+            Consequently, working this way can be helpful when debugging a build or preparing
+            software to be used with the OpenEmbedded build system.
+        </para>
+
+        <para>
+            Following is an example that uses <filename>devpyshell</filename> on a target named
+            <filename>matchbox-desktop</filename>:
+            <literallayout class='monospaced'>
+     $ bitbake matchbox-desktop -c devpyshell
+            </literallayout>
+        </para>
+
+        <para>
+            This command spawns a terminal and places you in an interactive
+            Python interpreter within the OpenEmbedded build environment.
+            The <ulink url='&YOCTO_DOCS_REF_URL;#var-OE_TERMINAL'><filename>OE_TERMINAL</filename></ulink>
+            variable controls what type of shell is opened.
+        </para>
+
+        <para>
+            When you are finished using <filename>devpyshell</filename>, you
+            can exit the shell either by using Ctrl+d or closing the terminal
+            window.
+        </para>
+    </section>
+
     <section id='platdev-building-targets-with-multiple-configurations'>
         <title>Building Targets with Multiple Configurations</title>
 
@@ -4177,7 +4315,7 @@
             <para>
                 Several examples exist in the
                 <filename>meta-skeleton</filename> layer found in the
-               <link linkend='source-directory'>Source Directory</link>:
+               <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>:
                 <itemizedlist>
                     <listitem><para><filename>conf/multilib-example.conf</filename>
                         configuration file</para></listitem>
@@ -4203,7 +4341,7 @@
                     Many standard recipes are already extended and support multiple libraries.
                     You can check in the <filename>meta/conf/multilib.conf</filename>
                     configuration file in the
-                    <link linkend='source-directory'>Source Directory</link> to see how this is
+                    <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink> to see how this is
                     done using the
                     <ulink url='&YOCTO_DOCS_REF_URL;#var-BBCLASSEXTEND'><filename>BBCLASSEXTEND</filename></ulink>
                     variable.
@@ -4239,7 +4377,7 @@
                     combination of multiple libraries you want to build.
                     You accomplish this through your <filename>local.conf</filename>
                     configuration file in the
-                    <link linkend='build-directory'>Build Directory</link>.
+                    <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
                     An example configuration would be as follows:
                     <literallayout class='monospaced'>
      MACHINE = "qemux86-64"
@@ -4315,7 +4453,7 @@
                         <listitem><para>A unique architecture is defined for the Multilib packages,
                             along with creating a unique deploy folder under
                             <filename>tmp/deploy/rpm</filename> in the
-                            <link linkend='build-directory'>Build Directory</link>.
+                            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
                             For example, consider <filename>lib32</filename> in a
                             <filename>qemux86-64</filename> image.
                             The possible architectures in the system are "all", "qemux86_64",
@@ -4675,8 +4813,8 @@
         </para>
     </section>
 
-    <section id='creating-partitioned-images'>
-        <title>Creating Partitioned Images</title>
+    <section id='creating-partitioned-images-using-wic'>
+        <title>Creating Partitioned Images Using Wic</title>
 
         <para>
             Creating an image for a particular hardware target using the
@@ -4692,1124 +4830,781 @@
         </para>
 
         <para>
-            You can generate partitioned images
-            (<replaceable>image</replaceable><filename>.wic</filename>)
-            two ways: using the OpenEmbedded build system and by running
-            the OpenEmbedded Image Creator Wic directly.
-            The former way is preferable as it is easier to use and understand.
+            The <filename>wic</filename> command generates partitioned
+            images from existing OpenEmbedded build artifacts.
+            Image generation is driven by partitioning commands
+            contained in an OpenEmbedded kickstart file
+            (<filename>.wks</filename>) specified either directly on
+            the command line or as one of a selection of canned
+            kickstart files as shown with the
+            <filename>wic list images</filename> command in the
+            "<link linkend='using-a-provided-kickstart-file'>Using an Existing Kickstart File</link>"
+            section.
+            When you apply the command to a given set of build
+            artifacts, the result is an image or set of images that
+            can be directly written onto media and used on a particular
+            system.
+            <note>
+                For a kickstart file reference, see the
+                "<ulink url='&YOCTO_DOCS_REF_URL;#openembedded-kickstart-wks-reference'>OpenEmbedded Kickstart (<filename>.wks</filename>) Reference</ulink>"
+                Chapter in the Yocto Project Reference Manual.
+            </note>
         </para>
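+
+        <para>
+            For illustration, a kickstart file is simply a short list of
+            <filename>part</filename> and <filename>bootloader</filename>
+            directives.
+            The following sketch is modeled on the canned directdisk-style
+            layouts; the file name is hypothetical and the exact options you
+            need depend on your machine and bootloader:
+            <literallayout class='monospaced'>
+     # mydisk.wks - hypothetical two-partition layout for a BIOS x86 target
+     part /boot --source bootimg-pcbios --ondisk sda --label boot --active --align 1024
+     part / --source rootfs --ondisk sda --fstype=ext4 --label platform --align 1024
+     bootloader --timeout=0 --append="rootwait console=tty0"
+            </literallayout>
+        </para>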
 
-        <section id='creating-wic-images-oe'>
-            <title>Creating Partitioned Images</title>
+        <para>
+            The <filename>wic</filename> command and the infrastructure
+            it is based on are by definition incomplete.
+            The purpose of the command is to allow the generation of
+            customized images; as such, it was designed to be
+            completely extensible through a plug-in interface.
+            See the
+            "<ulink url='&YOCTO_DOCS_REF_URL;#wic-plug-ins-interface'>Wic Plug-Ins Interface</ulink>"
+            section in the Yocto Project Reference Manual for information
+            on these plug-ins.
+        </para>
+
+        <para>
+            This section provides some background information on Wic,
+            describes what you need to have in place to run the tool,
+            explains how to use the Wic utility, and provides several
+            examples.
+        </para>
+
+        <section id='wic-background'>
+            <title>Background</title>
 
             <para>
-                The OpenEmbedded build system can generate
-                partitioned images the same way as it generates
-                any other image type.
-                To generate a partitioned image, you need to modify
-                two variables.
+                This section provides some background on the Wic utility.
+                While none of this information is required to use
+                Wic, you might find it interesting.
                 <itemizedlist>
                     <listitem><para>
+                        The name "Wic" is derived from OpenEmbedded
+                        Image Creator (oeic).
+                        The "oe" diphthong in "oeic" was promoted to the
+                        letter "w", because "oeic" is both difficult to
+                        remember and to pronounce.
+                        </para></listitem>
+                    <listitem><para>
+                        Wic is loosely based on the
+                        Meego Image Creator (<filename>mic</filename>)
+                        framework.
+                        The Wic implementation has been
+                        heavily modified to make direct use of OpenEmbedded
+                        build artifacts instead of package installation and
+                        configuration, which are already incorporated within
+                        the OpenEmbedded artifacts.
+                        </para></listitem>
+                    <listitem><para>
+                        Wic is a completely independent
+                        standalone utility that initially provides
+                        easier-to-use and more flexible replacements for
+                        existing functionality in OE Core's
+                        <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-image-live'><filename>image-live</filename></ulink>
+                        class and <filename>mkefidisk.sh</filename> script.
+                        The difference is that with Wic the functionality
+                        of those scripts is implemented by a general-purpose
+                        partitioning language based on Red Hat kickstart
+                        syntax.</para></listitem>
+                </itemizedlist>
+            </para>
+        </section>
+
+        <section id='wic-requirements'>
+            <title>Requirements</title>
+
+            <para>
+                To use the Wic utility with the OpenEmbedded build
+                system, your development host needs to meet the following
+                requirements:
+                <itemizedlist>
+                    <listitem><para>
+                        The Linux distribution on your development host must
+                        support the Yocto Project.
+                        See the
+                        "<ulink url='&YOCTO_DOCS_REF_URL;#detailed-supported-distros'>Supported Linux Distributions</ulink>"
+                        section in the Yocto Project Reference Manual for
+                        the list of distributions that support the
+                        Yocto Project.
+                        </para></listitem>
+                    <listitem><para>
+                        The standard system utilities, such as
+                        <filename>cp</filename>, must be installed on your
+                        development host system.
+                        </para></listitem>
+                    <listitem><para>
+                        You must have sourced the build environment
+                        setup script (i.e.
+                        <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>)
+                        found in the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
+                        </para></listitem>
+                    <listitem><para>
+                        You need to have the build artifacts already
+                        available, which typically means that you must
+                        have already created an image using the
+                        OpenEmbedded build system (e.g.
+                        <filename>core-image-minimal</filename>).
+                        While it might seem redundant to generate an image
+                        in order to create an image using
+                        Wic, the current version of
+                        Wic requires the artifacts
+                        in the form generated by the OpenEmbedded build
+                        system.
+                        </para></listitem>
+                    <listitem><para>
+                        You must build several native tools, which are
+                        built to run on the build system:
+                        <literallayout class='monospaced'>
+     $ bitbake parted-native dosfstools-native mtools-native
+                        </literallayout>
+                        </para></listitem>
+                    <listitem><para>
                         Include "wic" as part of the
                         <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></ulink>
                         variable.
                         </para></listitem>
                     <listitem><para>
                         Include the name of the
-                        <link linkend='openembedded-kickstart-wks-reference'>wic kickstart file</link>
+                        <ulink url='&YOCTO_DOCS_REF_URL;#openembedded-kickstart-wks-reference'>wic kickstart file</ulink>
                         as part of the
                         <ulink url='&YOCTO_DOCS_REF_URL;#var-WKS_FILE'><filename>WKS_FILE</filename></ulink>
                         variable
                         </para></listitem>
                 </itemizedlist>
-                Further steps to generate a partitioned image
-                are the same as for any other image type.
-                For information on image types, see the
-                "<link linkend='building-images'>Building Images</link>"
-                section.
             </para>
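+
+            <para>
+                For the last two requirements, the additions to your
+                <filename>local.conf</filename> might look like the
+                following; the kickstart file named here is just an example
+                of one of the canned files:
+                <literallayout class='monospaced'>
+     IMAGE_FSTYPES += "wic"
+     WKS_FILE = "directdisk.wks"
+                </literallayout>
+            </para>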
         </section>
 
-        <section id='create-wic-images-wic'>
-            <title>Using OpenEmbedded Image Creator Wic to Generate Partitioned Images</title>
+        <section id='wic-getting-help'>
+            <title>Getting Help</title>
 
             <para>
-                The <filename>wic</filename> command generates partitioned
-                images from existing OpenEmbedded build artifacts.
-                Image generation is driven by partitioning commands
-                contained in an Openembedded kickstart file
-                (<filename>.wks</filename>) specified either directly on
-                the command line or as one of a selection of canned
-                <filename>.wks</filename> files as shown with the
-                <filename>wic list images</filename> command in the
-                "<link linkend='using-a-provided-kickstart-file'>Using an Existing Kickstart File</link>"
-                section.
-                When you apply the command to a given set of build
-                artifacts, the result is an image or set of images that
-                can be directly written onto media and used on a particular
-                system.
-            </para>
-
-            <para>
-                The <filename>wic</filename> command and the infrastructure
-                it is based on is by definition incomplete.
-                The purpose of the command is to allow the generation of
-                customized images, and as such, was designed to be
-                completely extensible through a plug-in interface.
-                See the
-                "<link linkend='openembedded-kickstart-plugins'>Plug-ins</link>"
-                section for information on these plug-ins.
-            </para>
-
-            <para>
-                This section provides some background information on Wic,
-                describes what you need to have in
-                place to run the tool, provides instruction on how to use
-                the <filename>wic</filename> utility,
-                and provides several examples.
-            </para>
-
-            <section id='wic-background'>
-                <title>Background</title>
-
-                <para>
-                    This section provides some background on the
-                    <filename>wic</filename> utility.
-                    While none of this information is required to use
-                    Wic, you might find it interesting.
-                    <itemizedlist>
-                        <listitem><para>
-                            The name "Wic" is derived from OpenEmbedded
-                            Image Creator (oeic).
-                            The "oe" diphthong in "oeic" was promoted to the
-                            letter "w", because "oeic" is both difficult to
-                            remember and to pronounce.
-                            </para></listitem>
-                        <listitem><para>
-                            Wic is loosely based on the
-                            Meego Image Creator (<filename>mic</filename>)
-                            framework.
-                            The Wic implementation has been
-                            heavily modified to make direct use of OpenEmbedded
-                            build artifacts instead of package installation and
-                            configuration, which are already incorporated within
-                            the OpenEmbedded artifacts.
-                            </para></listitem>
-                        <listitem><para>
-                            Wic is a completely independent
-                            standalone utility that initially provides
-                            easier-to-use and more flexible replacements for an
-                            existing functionality in OE Core's
-                            <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-image-live'><filename>image-live</filename></ulink>
-                            class and <filename>mkefidisk.sh</filename> script.
-                            The difference between
-                            Wic and those examples is
-                            that with Wic the
-                            functionality of those scripts is implemented
-                            by a general-purpose partitioning language, which is
-                            based on Redhat kickstart syntax.</para></listitem>
-                    </itemizedlist>
-                </para>
-            </section>
-
-            <section id='wic-requirements'>
-                <title>Requirements</title>
-
-                <para>
-                    In order to use the <filename>wic</filename> utility
-                    with the OpenEmbedded Build system, your system needs
-                    to meet the following requirements:
-                    <itemizedlist>
-                        <listitem><para>The Linux distribution on your
-                            development host must support the Yocto Project.
-                            See the
-                            "<ulink url='&YOCTO_DOCS_REF_URL;#detailed-supported-distros'>Supported Linux Distributions</ulink>"
-                            section in the Yocto Project Reference Manual for
-                            the list of distributions that support the
-                            Yocto Project.
-                            </para></listitem>
-                        <listitem><para>
-                            The standard system utilities, such as
-                            <filename>cp</filename>, must be installed on your
-                            development host system.
-                            </para></listitem>
-                        <listitem><para>
-                            You need to have the build artifacts already
-                            available, which typically means that you must
-                            have already created an image using the
-                            Openembedded build system (e.g.
-                            <filename>core-image-minimal</filename>).
-                            While it might seem redundant to generate an image
-                            in order to create an image using
-                            Wic, the current version of
-                            Wic requires the artifacts
-                            in the form generated by the build system.
-                            </para></listitem>
-                        <listitem><para>
-                            You must build several native tools, which are tools
-                            built to run on the build system:
-                            <literallayout class='monospaced'>
-     $ bitbake parted-native dosfstools-native mtools-native
-                            </literallayout>
-                            </para></listitem>
-                        <listitem><para>
-                            You must have sourced one of the build environment
-                            setup scripts (i.e.
-                            <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
-                            or
-                            <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>)
-                            found in the
-                            <link linkend='build-directory'>Build Directory</link>.
-                            </para></listitem>
-                    </itemizedlist>
-                </para>
-            </section>
-
-            <section id='wic-getting-help'>
-                <title>Getting Help</title>
-
-                <para>
-                    You can get general help for the <filename>wic</filename>
-                    command by entering the <filename>wic</filename> command
-                    by itself or by entering the command with a help argument
-                    as follows:
-                    <literallayout class='monospaced'>
+                You can get general help for the <filename>wic</filename>
+                command by entering the <filename>wic</filename> command
+                by itself or by entering the command with a help argument
+                as follows:
+                <literallayout class='monospaced'>
      $ wic -h
      $ wic --help
-                    </literallayout>
-                </para>
+                </literallayout>
+            </para>
 
-                <para>
-                    Currently, Wic supports two commands:
-                    <filename>create</filename> and <filename>list</filename>.
-                    You can get help for these commands as follows:
-                    <literallayout class='monospaced'>
+            <para>
+                Currently, Wic supports seven commands:
+                <filename>cp</filename>, <filename>create</filename>,
+                <filename>help</filename>, <filename>list</filename>,
+                <filename>ls</filename>, <filename>rm</filename>, and
+                <filename>write</filename>.
+                You can get help for any of these commands as follows, where
+                <replaceable>command</replaceable> is one of the
+                supported commands:
+                <literallayout class='monospaced'>
      $ wic help <replaceable>command</replaceable>
-                    with <replaceable>command</replaceable> being either
-                    <filename>create</filename> or <filename>list</filename>.
-                    </literallayout>
-                </para>
+                </literallayout>
+            </para>
 
-                <para>
-                    You can also get detailed help on a number of topics
-                    from the help system.
-                    The output of <filename>wic --help</filename>
-                    displays a list of available help
-                    topics under a "Help topics" heading.
-                    You can have the help system display the help text for
-                    a given topic by prefacing the topic with
-                    <filename>wic help</filename>:
-                    <literallayout class='monospaced'>
+            <para>
+                You can also get detailed help on a number of topics
+                from the help system.
+                The output of <filename>wic --help</filename>
+                displays a list of available help
+                topics under a "Help topics" heading.
+                You can have the help system display the help text for
+                a given topic by prefacing the topic with
+                <filename>wic help</filename>:
+                <literallayout class='monospaced'>
      $ wic help <replaceable>help_topic</replaceable>
-                    </literallayout>
-                </para>
+                </literallayout>
+            </para>
 
-                <para>
-                    You can find out more about the images
-                    Wic creates using the existing
-                    kickstart files with the following form of the command:
-                    <literallayout class='monospaced'>
+            <para>
+                You can find out more about the images Wic creates using
+                the existing kickstart files with the following form of
+                the command:
+                <literallayout class='monospaced'>
      $ wic list <replaceable>image</replaceable> help
-                    </literallayout>
-                    with <filename><replaceable>image</replaceable></filename>
-                    being either <filename>directdisk</filename> or
-                    <filename>mkefidisk</filename>.
-                </para>
-            </section>
+                </literallayout>
+                For <replaceable>image</replaceable>, you can provide
+                any of the following:
+                <literallayout class='monospaced'>
+       beaglebone
+       mpc8315e-rdb
+       genericx86
+       edgerouter
+       qemux86-directdisk
+       directdisk-gpt
+       mkefidisk
+       directdisk
+       systemd-bootdisk
+       mkhybridiso
+       sdimage-bootpart
+       directdisk-multi-rootfs
+       directdisk-bootloader-config
+                </literallayout>
+            </para>
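+
+            <para>
+                For example, to see the additional help text for the
+                <filename>mkefidisk</filename> image description, you
+                would enter:
+                <literallayout class='monospaced'>
+     $ wic list mkefidisk help
+                </literallayout>
+            </para>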
+        </section>
 
-            <section id='operational-modes'>
-                <title>Operational Modes</title>
+        <section id='operational-modes'>
+            <title>Operational Modes</title>
+
+            <para>
+                You can use Wic in two different modes, depending on how
+                much control you need for specifying the OpenEmbedded
+                build artifacts that are used for creating the image:
+                Raw and Cooked:
+                <itemizedlist>
+                    <listitem><para>
+                        <emphasis>Raw Mode:</emphasis>
+                        You explicitly specify build artifacts through
+                        <filename>wic</filename> command-line arguments.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Cooked Mode:</emphasis>
+                        The current
+                        <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
+                        setting and image name are used to automatically
+                        locate and provide the build artifacts.
+                        You just supply a kickstart file and the name
+                        of the image from which to use artifacts.
+                        </para></listitem>
+                </itemizedlist>
+            </para>
+
+            <para>
+                Regardless of the mode you use, you need to have the build
+                artifacts ready and available.
+            </para>
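+
+            <para>
+                As a quick illustration of the difference, and assuming
+                the <filename>mkefidisk</filename> kickstart file and a
+                previously built <filename>core-image-minimal</filename>
+                image, a raw-mode invocation names the artifact
+                directories explicitly while the cooked-mode equivalent
+                only names the image (the exact command forms are
+                described in the following sections):
+                <literallayout class='monospaced'>
+     $ wic create mkefidisk -r <replaceable>rootfs_dir</replaceable> -b <replaceable>bootimg_dir</replaceable> \
+           -k <replaceable>kernel_dir</replaceable> -n <replaceable>native_sysroot</replaceable>
+     $ wic create mkefidisk -e core-image-minimal
+                </literallayout>
+            </para>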
+
+            <section id='raw-mode'>
+                <title>Raw Mode</title>
 
                 <para>
-                    You can use Wic in two different
-                    modes, depending on how much control you need for
-                    specifying the Openembedded build artifacts that are
-                    used for creating the image: Raw and Cooked:
-                    <itemizedlist>
-                        <listitem><para>
-                            <emphasis>Raw Mode:</emphasis>
-                            You explicitly specify build artifacts through
-                            command-line arguments.
-                            </para></listitem>
-                        <listitem><para>
-                            <emphasis>Cooked Mode:</emphasis>
-                            The current
-                            <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
-                            setting and image name are used to automatically
-                            locate and provide the build artifacts.
-                            </para></listitem>
-                    </itemizedlist>
+                    Running Wic in raw mode allows you to specify all the
+                    build artifacts used to populate the partitions
+                    directly through <filename>wic</filename>
+                    command-line arguments.
+                    The primary use for raw mode is when you have built
+                    your kernel outside of the Yocto Project
+                    <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
+                    In other words, you can point to arbitrary kernel and
+                    root filesystem locations, and so forth.
+                    Contrast this behavior with cooked mode where Wic
+                    looks in the Build Directory (e.g.
+                    <filename>tmp/deploy/images/</filename><replaceable>machine</replaceable>).
                 </para>
 
                 <para>
-                    Regardless of the mode you use, you need to have the build
-                    artifacts ready and available.
-                    Additionally, the environment must be set up using the
-                    <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
-                    or
-                    <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>
-                    script found in the
-                    <link linkend='build-directory'>Build Directory</link>.
-                </para>
-
-                <section id='raw-mode'>
-                    <title>Raw Mode</title>
-
-                    <para>
-                        The general form of the
-                        <filename>wic</filename> command in raw mode is:
-                        <literallayout class='monospaced'>
-     $ wic create <replaceable>image_name</replaceable>.wks [<replaceable>options</replaceable>] [...]
+                    The general form of the
+                    <filename>wic</filename> command in raw mode is:
+                    <literallayout class='monospaced'>
+     $ wic create <replaceable>wks_file</replaceable> <replaceable>options</replaceable> ...
 
        Where:
 
-          <replaceable>image_name</replaceable>.wks
+          <replaceable>wks_file</replaceable>:
              An OpenEmbedded kickstart file.  You can provide
              your own custom file or use a file from a set of
              existing files as described by further options.
 
-          -o <replaceable>OUTDIR</replaceable>, --outdir=<replaceable>OUTDIR</replaceable>
-             The name of a directory in which to create image.
+          optional arguments:
+            -h, --help            show this help message and exit
+            -o <replaceable>OUTDIR</replaceable>, --outdir <replaceable>OUTDIR</replaceable>
+                                  name of directory to create image in
+            -e <replaceable>IMAGE_NAME</replaceable>, --image-name <replaceable>IMAGE_NAME</replaceable>
+                                  name of the image to use the artifacts from e.g. core-
+                                  image-sato
+            -r <replaceable>ROOTFS_DIR</replaceable>, --rootfs-dir <replaceable>ROOTFS_DIR</replaceable>
+                                  path to the /rootfs dir to use as the .wks rootfs
+                                  source
+            -b <replaceable>BOOTIMG_DIR</replaceable>, --bootimg-dir <replaceable>BOOTIMG_DIR</replaceable>
+                                  path to the dir containing the boot artifacts (e.g.
+                                  /EFI or /syslinux dirs) to use as the .wks bootimg
+                                  source
+            -k <replaceable>KERNEL_DIR</replaceable>, --kernel-dir <replaceable>KERNEL_DIR</replaceable>
+                                  path to the dir containing the kernel to use in the
+                                  .wks bootimg
+            -n <replaceable>NATIVE_SYSROOT</replaceable>, --native-sysroot <replaceable>NATIVE_SYSROOT</replaceable>
+                                  path to the native sysroot containing the tools to use
+                                  to build the image
+            -s, --skip-build-check
+                                  skip the build check
+            -f, --build-rootfs    build rootfs
+            -c {gzip,bzip2,xz}, --compress-with {gzip,bzip2,xz}
+                                  compress image with specified compressor
+            -m, --bmap            generate .bmap
+            --no-fstab-update     Do not change fstab file.
+            -v <replaceable>VARS_DIR</replaceable>, --vars <replaceable>VARS_DIR</replaceable>
+                                  directory with &lt;image&gt;.env files that store bitbake
+                                  variables
+            -D, --debug           output debug information
+                    </literallayout>
+                    <note>
+                        You do not need root privileges to run
+                        Wic.
+                        In fact, you should not run as root when using the
+                        utility.
+                    </note>
+                </para>
+            </section>
 
-          -i <replaceable>PROPERTIES_FILE</replaceable>, --infile=<replaceable>PROPERTIES_FILE</replaceable>
-             The name of a file containing the values for image
-             properties as a JSON file.
+            <section id='cooked-mode'>
+                <title>Cooked Mode</title>
 
-          -e <replaceable>IMAGE_NAME</replaceable>, --image-name=<replaceable>IMAGE_NAME</replaceable>
-             The name of the image from which to use the artifacts
-             (e.g. <filename>core-image-sato</filename>).
+                <para>
+                    Running Wic in cooked mode uses the artifacts in the
+                    Build Directory.
+                    In other words, you do not have to specify kernel or
+                    root filesystem locations as part of the command.
+                    All you need to provide is a kickstart file and,
+                    through the "-e" option, the name of the image from
+                    which to use artifacts.
+                    Wic looks in the Build Directory (e.g.
+                    <filename>tmp/deploy/images/</filename><replaceable>machine</replaceable>)
+                    for artifacts.
+                </para>
 
-          -r <replaceable>ROOTFS_DIR</replaceable>, --rootfs-dir=<replaceable>ROOTFS_DIR</replaceable>
-             The path to the <filename>/rootfs</filename> directory to use as the
-             <filename>.wks</filename> rootfs source.
-
-          -b <replaceable>BOOTIMG_DIR</replaceable>, --bootimg-dir=<replaceable>BOOTIMG_DIR</replaceable>
-             The path to the directory containing the boot artifacts
-             (e.g. <filename>/EFI</filename> or <filename>/syslinux</filename>) to use as the <filename>.wks</filename> bootimg
-             source.
-
-          -k <replaceable>KERNEL_DIR</replaceable>, --kernel-dir=<replaceable>KERNEL_DIR</replaceable>
-             The path to the directory containing the kernel to use
-             in the <filename>.wks</filename> boot image.
-
-          -n <replaceable>NATIVE_SYSROOT</replaceable>, --native-sysroot=<replaceable>NATIVE_SYSROOT</replaceable>
-             The path to the native sysroot containing the tools to use
-             to build the image.
-
-          -s, --skip-build-check
-              Skips the build check.
-
-          -D, --debug
-              Output debug information.
-                        </literallayout>
-                        <note>
-                            You do not need root privileges to run
-                            Wic.
-                            In fact, you should not run as root when using the
-                            utility.
-                        </note>
-                    </para>
-                </section>
-
-                <section id='cooked-mode'>
-                    <title>Cooked Mode</title>
-
-                    <para>
-                        The general form of the <filename>wic</filename> command
-                        using Cooked Mode is:
-                        <literallayout class='monospaced'>
-     $ wic create <replaceable>kickstart_file</replaceable> -e <replaceable>image_name</replaceable>
+                <para>
+                    The general form of the <filename>wic</filename>
+                    command using Cooked Mode is as follows:
+                    <literallayout class='monospaced'>
+     $ wic create <replaceable>wks_file</replaceable> -e <replaceable>IMAGE_NAME</replaceable>
 
        Where:
 
-           <replaceable>kickstart_file</replaceable>
-               An OpenEmbedded kickstart file. You can provide your own
-               custom file or a supplied file.
+          <replaceable>wks_file</replaceable>:
+             An OpenEmbedded kickstart file.  You can provide
+             your own custom file or use a file from a set of
+             existing files provided with the Yocto Project
+             release.
 
-           <replaceable>image_name</replaceable>
-               Specifies the image built using the OpenEmbedded build
-               system.
-                        </literallayout>
-                        This form is the simplest and most user-friendly, as it
-                        does not require specifying all individual parameters.
-                        All you need to provide is your own
-                        <filename>.wks</filename> file or one provided with the
-                        release.
-                    </para>
-                </section>
-            </section>
-
-            <section id='using-a-provided-kickstart-file'>
-                <title>Using an Existing Kickstart File</title>
-
-                <para>
-                    If you do not want to create your own
-                    <filename>.wks</filename> file, you can use an existing
-                    file provided by the Wic installation.
-                    Use the following command to list the available files:
-                    <literallayout class='monospaced'>
-     $ wic list images
-     directdisk Create a 'pcbios' direct disk image
-     mkefidisk Create an EFI disk image
+          required argument:
+             -e <replaceable>IMAGE_NAME</replaceable>, --image-name <replaceable>IMAGE_NAME</replaceable>
+                                  name of the image to use the artifacts from e.g. core-
+                                  image-sato
                     </literallayout>
-                    When you use an existing file, you do not have to use the
-                    <filename>.wks</filename> extension.
-                    Here is an example in Raw Mode that uses the
-                    <filename>directdisk</filename> file:
-                    <literallayout class='monospaced'>
+                </para>
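+
+                <para>
+                    For example, assuming <filename>core-image-minimal</filename>
+                    has already been built for the current
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>,
+                    a cooked-mode invocation can be as simple as the
+                    following, which is walked through in detail in the
+                    "<link linkend='wic-usage-examples'>Examples</link>"
+                    section:
+                    <literallayout class='monospaced'>
+     $ wic create mkefidisk -e core-image-minimal
+                    </literallayout>
+                </para>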
+            </section>
+        </section>
+
+        <section id='using-a-provided-kickstart-file'>
+            <title>Using an Existing Kickstart File</title>
+
+            <para>
+                If you do not want to create your own kickstart file, you
+                can use an existing file provided by the Wic installation.
+                As shipped, kickstart files can be found in the
+                Yocto Project
+                <ulink url='&YOCTO_DOCS_REF_URL;#source-repositories'>Source Repositories</ulink>
+                in the following two locations:
+                <literallayout class='monospaced'>
+     poky/meta-yocto-bsp/wic
+     poky/scripts/lib/wic/canned-wks
+                </literallayout>
+                Use the following command to list the available kickstart
+                files:
+                <literallayout class='monospaced'>
+     $ wic list images
+       beaglebone                    		Create SD card image for Beaglebone
+       mpc8315e-rdb                  		Create SD card image for MPC8315E-RDB
+       genericx86                    		Create an EFI disk image for genericx86*
+       edgerouter                    		Create SD card image for Edgerouter
+       qemux86-directdisk            		Create a qemu machine 'pcbios' direct disk image
+       directdisk-gpt                		Create a 'pcbios' direct disk image
+       mkefidisk                     		Create an EFI disk image
+       directdisk                    		Create a 'pcbios' direct disk image
+       systemd-bootdisk              		Create an EFI disk image with systemd-boot
+       mkhybridiso                   		Create a hybrid ISO image
+       sdimage-bootpart              		Create SD card image with a boot partition
+       directdisk-multi-rootfs       		Create multi rootfs image using rootfs plugin
+       directdisk-bootloader-config  		Create a 'pcbios' direct disk image with custom bootloader config
+                </literallayout>
+                When you use an existing file, you do not have to use the
+                <filename>.wks</filename> extension.
+                Here is an example in Raw Mode that uses the
+                <filename>directdisk</filename> file:
+                <literallayout class='monospaced'>
      $ wic create directdisk -r <replaceable>rootfs_dir</replaceable> -b <replaceable>bootimg_dir</replaceable> \
            -k <replaceable>kernel_dir</replaceable> -n <replaceable>native_sysroot</replaceable>
-                    </literallayout>
-                </para>
+                </literallayout>
+            </para>
 
-                <para>
-                    Here are the actual partition language commands
-                    used in the <filename>mkefidisk.wks</filename> file to
-                    generate an image:
-                    <literallayout class='monospaced'>
-     # short-description: Create an EFI disk image
-     # long-description: Creates a partitioned EFI disk image that the user
-     # can directly dd to boot media.
-
-     part /boot --source bootimg-efi --ondisk sda --label msdos --active --align 1024
-
-     part / --source rootfs --ondisk sda --fstype=ext3 --label platform --align 1024
-
+            <para>
+                Here are the actual partition language commands
+                used in the <filename>genericx86.wks</filename> file to
+                generate an image:
+                <literallayout class='monospaced'>
+     # short-description: Create an EFI disk image for genericx86*
+     # long-description: Creates a partitioned EFI disk image for genericx86* machines
+     part /boot --source bootimg-efi --sourceparams="loader=grub-efi" --ondisk sda --label msdos --active --align 1024
+     part / --source rootfs --ondisk sda --fstype=ext4 --label platform --align 1024 --use-uuid
      part swap --ondisk sda --size 44 --label swap1 --fstype=swap
 
-     bootloader  --timeout=10  --append="rootwait rootfstype=ext3 console=ttyPCH0,115200 console=tty0 vmalloc=256MB snd-hda-intel.enable_msi=0"
-                    </literallayout>
-                </para>
-            </section>
+     bootloader --ptable gpt --timeout=5 --append="rootfstype=ext4 console=ttyS0,115200 console=tty0"
+                </literallayout>
+            </para>
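+
+            <para>
+                As a minimal sketch of the same partition language, a
+                hypothetical two-partition kickstart file for a legacy
+                BIOS ("pcbios") machine could look like the following.
+                Adjust the disk, filesystem type, and kernel command line
+                to suit your hardware:
+                <literallayout class='monospaced'>
+     # short-description: Minimal example 'pcbios' direct disk image
+     part /boot --source bootimg-pcbios --ondisk sda --label boot --active --align 1024
+     part / --source rootfs --ondisk sda --fstype=ext4 --label platform --align 1024
+
+     bootloader --timeout=5 --append="rootwait console=tty0"
+                </literallayout>
+            </para>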
+        </section>
 
-            <section id='wic-usage-examples'>
-                <title>Examples</title>
+        <section id='wic-usage-examples'>
+            <title>Examples</title>
+
+            <para>
+                This section provides several examples that show how to use
+                the Wic utility.
+                All the examples assume the requirements listed in the
+                "<link linkend='wic-requirements'>Requirements</link>"
+                section have been met.
+                The examples assume the previously generated image is
+                <filename>core-image-minimal</filename>.
+            </para>
+
+            <section id='generate-an-image-using-a-provided-kickstart-file'>
+                <title>Generate an Image using an Existing Kickstart File</title>
 
                 <para>
-                    This section provides several examples that show how to use
-                    the <filename>wic</filename> utility.
-                    All the examples assume the list of requirements in the
-                    "<link linkend='wic-requirements'>Requirements</link>"
-                    section have been met.
-                    The examples assume the previously generated image is
-                    <filename>core-image-minimal</filename>.
-                </para>
-
-                <section id='generate-an-image-using-a-provided-kickstart-file'>
-                    <title>Generate an Image using an Existing Kickstart File</title>
-
-                    <para>
-                        This example runs in Cooked Mode and uses the
-                        <filename>mkefidisk</filename> kickstart file:
-                        <literallayout class='monospaced'>
+                    This example runs in Cooked Mode and uses the
+                    <filename>mkefidisk</filename> kickstart file:
+                    <literallayout class='monospaced'>
      $ wic create mkefidisk -e core-image-minimal
-     Checking basic build environment...
-     Done.
-
-     Creating image(s)...
-
-     Info: The new image(s) can be found here:
-      <replaceable>current_directory</replaceable>/build/mkefidisk-201310230946-sda.direct
+     INFO: Building wic-tools...
+               .
+               .
+               .
+     INFO: The new image(s) can be found here:
+       ./mkefidisk-201710061409-sda.direct
 
      The following build artifacts were used to create the image(s):
-      ROOTFS_DIR: /home/trz/yocto/yocto-image/build/tmp/work/minnow-poky-linux/core-image-minimal/1.0-r0/rootfs
-      BOOTIMG_DIR: /home/trz/yocto/yocto-image/build/tmp/work/minnow-poky-linux/core-image-minimal/1.0-r0/core-image-minimal-1.0/hddimg
-      KERNEL_DIR: /home/trz/yocto/yocto-image/build/tmp/sysroots/minnow/usr/src/kernel
-      NATIVE_SYSROOT: /home/trz/yocto/yocto-image/build/tmp/sysroots/x86_64-linux
+       ROOTFS_DIR:                   /home/scottrif/poky/build/tmp.wic.r4hkds0b/rootfs_copy
+       BOOTIMG_DIR:                  /home/scottrif/poky/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share
+       KERNEL_DIR:                   /home/scottrif/poky/build/tmp/deploy/images/qemux86
+       NATIVE_SYSROOT:               /home/scottrif/poky/build/tmp/work/i586-poky-linux/wic-tools/1.0-r0/recipe-sysroot-native
 
-     The image(s) were created using OE kickstart file:
-      /home/trz/yocto/yocto-image/scripts/lib/image/canned-wks/mkefidisk.wks
-                        </literallayout>
-                        The previous example shows the easiest way to create
-                        an image by running in Cooked Mode and using the
-                        <filename>-e</filename> option with an existing
-                        kickstart file.
-                        All that is necessary is to specify the image used to
-                        generate the artifacts.
-                        Your <filename>local.conf</filename> needs to have the
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
-                        variable set to the machine you are using, which is
-                        "minnow" in this example.
-                    </para>
+     INFO: The image(s) were created using OE kickstart file:
+       /home/scottrif/poky/scripts/lib/wic/canned-wks/mkefidisk.wks
+                    </literallayout>
+                    The previous example shows the easiest way to create
+                    an image by running in cooked mode and supplying
+                    a kickstart file and the "-e" option to point to the
+                    existing build artifacts.
+                    Your <filename>local.conf</filename> file needs to have
+                    the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
+                    variable set to the machine you are using, which is
+                    "qemux86" in this example.
+                </para>
 
-                    <para>
-                        The output specifies the exact image created as well as
-                        where it was created, which is in the current
-                        directory by default.
-                        The output also names the artifacts used and the exact
-                        <filename>.wks</filename> script that was used to
-                        generate the image.
-                        <note>
-                            You should always verify the details provided in the
-                            output to make sure that the image was indeed
-                            created exactly as expected.
-                        </note>
-                    </para>
+                <para>
+                    Once the image builds, the output provides the image
+                    location, the build artifacts used, and the kickstart
+                    file used to generate the image.
+                    <note>
+                        You should always verify the details provided in the
+                        output to make sure that the image was indeed
+                        created exactly as expected.
+                    </note>
+                </para>
 
-                    <para>
-                        Continuing with the example, you can now write the
-                        image to a USB stick, or whatever media for which you
-                        built your image, and boot the resulting media.
-                        You can write the image by using
-                        <filename>bmaptool</filename> or
-                        <filename>dd</filename>:
-                        <literallayout class='monospaced'>
-     $ oe-run-native bmaptool copy build/mkefidisk-201310230946-sda.direct /dev/sd<replaceable>X</replaceable>
-                        </literallayout>
-                        or
-                        <literallayout class='monospaced'>
-     $ sudo dd if=build/mkefidisk-201310230946-sda.direct of=/dev/sd<replaceable>X</replaceable>
-                        </literallayout>
-                        <note>
-                            For more information on how to use the
-                            <filename>bmaptool</filename> to flash a device
-                            with an image, see the
-                            "<link linkend='flashing-images-using-bmaptool'>Flashing Images Using <filename>bmaptool</filename></link>"
-                            section.
-                        </note>
-                    </para>
-                </section>
+                <para>
+                    Continuing with the example, you can now write the
+                    image to a USB stick, or whatever media for which you
+                    built your image, and boot from the media.
+                    You can write the image by using
+                    <filename>bmaptool</filename> or
+                    <filename>dd</filename>:
+                    <literallayout class='monospaced'>
+     $ oe-run-native bmaptool copy build/mkefidisk-201710061409-sda.direct /dev/sd<replaceable>X</replaceable>
+                    </literallayout>
+                    or
+                    <literallayout class='monospaced'>
+     $ sudo dd if=build/mkefidisk-201710061409-sda.direct of=/dev/sd<replaceable>X</replaceable>
+                    </literallayout>
+                    <note>
+                        For more information on how to use the
+                        <filename>bmaptool</filename> to flash a device
+                        with an image, see the
+                        "<link linkend='flashing-images-using-bmaptool'>Flashing Images Using <filename>bmaptool</filename></link>"
+                        section.
+                    </note>
+                </para>
+            </section>
 
-                <section id='using-a-modified-kickstart-file'>
-                    <title>Using a Modified Kickstart File</title>
+            <section id='using-a-modified-kickstart-file'>
+                <title>Using a Modified Kickstart File</title>
 
-                    <para>
-                        Because partitioned image creation is
-                        driven by the kickstart file, it is easy to affect
-                        image creation by changing the parameters in the file.
-                        This next example demonstrates that through modification
-                        of the <filename>directdisk</filename> kickstart file.
-                    </para>
+                <para>
+                    Because partitioned image creation is driven by the
+                    kickstart file, it is easy to affect image creation by
+                    changing the parameters in the file.
+                    This next example demonstrates that through modification
+                    of the <filename>directdisk-gpt</filename> kickstart
+                    file.
+                </para>
 
-                    <para>
-                        As mentioned earlier, you can use the command
-                        <filename>wic list images</filename> to show the list
-                        of existing kickstart files.
-                        The directory in which these files reside is
-                        <filename>scripts/lib/image/canned-wks/</filename>
-                        located in the
-                        <link linkend='source-directory'>Source Directory</link>.
-                        Because the available files reside in this directory,
-                        you can create and add your own custom files to the
-                        directory.
-                        Subsequent use of the
-                        <filename>wic list images</filename> command would then
-                        include your kickstart files.
-                    </para>
+                <para>
+                    As mentioned earlier, you can use the command
+                    <filename>wic list images</filename> to show the list
+                    of existing kickstart files.
+                    The directory in which the
+                    <filename>directdisk-gpt.wks</filename> file resides is
+                    <filename>scripts/lib/wic/canned-wks/</filename>,
+                    which is located in the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+                    (e.g. <filename>poky</filename>).
+                    Because the available files reside in this directory,
+                    you can create and add your own custom files to the
+                    directory.
+                    Subsequent use of the
+                    <filename>wic list images</filename> command would then
+                    include your kickstart files.
+                </para>
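+
+                <para>
+                    For example, assuming a hypothetical custom kickstart
+                    file named <filename>mycustom.wks</filename>, copying
+                    it into that directory makes it visible to Wic:
+                    <literallayout class='monospaced'>
+     $ cp mycustom.wks poky/scripts/lib/wic/canned-wks/
+     $ wic list images | grep mycustom
+                    </literallayout>
+                </para>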
 
-                    <para>
-                        In this example, the existing
-                        <filename>directdisk</filename> file already does most
-                        of what is needed.
-                        However, for the hardware in this example, the image
-                        will need to boot from <filename>sdb</filename> instead
-                        of <filename>sda</filename>, which is what the
-                        <filename>directdisk</filename> kickstart file uses.
-                    </para>
+                <para>
+                    In this example, the existing
+                    <filename>directdisk-gpt</filename> file already does
+                    most of what is needed.
+                    However, for the hardware in this example, the image
+                    will need to boot from <filename>sdb</filename> instead
+                    of <filename>sda</filename>, which is what the
+                    <filename>directdisk-gpt</filename> kickstart file
+                    uses.
+                </para>
 
-                    <para>
-                        The example begins by making a copy of the
-                        <filename>directdisk.wks</filename> file in the
-                        <filename>scripts/lib/image/canned-wks</filename>
-                        directory and then by changing the lines that specify
-                        the target disk from which to boot.
-                        <literallayout class='monospaced'>
-     $ cp /home/trz/yocto/yocto-image/scripts/lib/image/canned-wks/directdisk.wks \
-          /home/trz/yocto/yocto-image/scripts/lib/image/canned-wks/directdisksdb.wks
-                        </literallayout>
-                        Next, the example modifies the
-                        <filename>directdisksdb.wks</filename> file and changes
-                        all instances of "<filename>--ondisk sda</filename>"
-                        to "<filename>--ondisk sdb</filename>".
-                        The example changes the following two lines and leaves
-                        the remaining lines untouched:
-                        <literallayout class='monospaced'>
+                <para>
+                    The example begins by making a copy of the
+                    <filename>directdisk-gpt.wks</filename> file in the
+                    <filename>scripts/lib/wic/canned-wks</filename>
+                    directory and then by changing the lines that specify
+                    the target disk from which to boot.
+                    <literallayout class='monospaced'>
+     $ cp /home/scottrif/poky/scripts/lib/wic/canned-wks/directdisk-gpt.wks \
+          /home/scottrif/poky/scripts/lib/wic/canned-wks/directdisksdb-gpt.wks
+                    </literallayout>
+                    Next, the example modifies the
+                    <filename>directdisksdb-gpt.wks</filename> file and
+                    changes all instances of
+                    "<filename>--ondisk sda</filename>" to
+                    "<filename>--ondisk sdb</filename>".
+                    The example changes the following two lines and leaves
+                    the remaining lines untouched:
+                    <literallayout class='monospaced'>
      part /boot --source bootimg-pcbios --ondisk sdb --label boot --active --align 1024
-     part / --source rootfs --ondisk sdb --fstype=ext3 --label platform --align 1024
-                        </literallayout>
-                        Once the lines are changed, the example generates the
-                        <filename>directdisksdb</filename> image.
-                        The command points the process at the
-                        <filename>core-image-minimal</filename> artifacts for
-                        the Next Unit of Computing (nuc)
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
-                        the <filename>local.conf</filename>.
-                        <literallayout class='monospaced'>
-     $ wic create directdisksdb -e core-image-minimal
-     Checking basic build environment...
-     Done.
+     part / --source rootfs --ondisk sdb --fstype=ext4 --label platform --align 1024 --use-uuid
+                    </literallayout>
+                    Once the lines are changed, the example generates the
+                    <filename>directdisksdb-gpt</filename> image.
+                    The command points the process at the
+                    <filename>core-image-minimal</filename> artifacts for
+                    the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
+                    set in the <filename>local.conf</filename> file,
+                    which is "qemux86" in this example.
+                    <literallayout class='monospaced'>
+     $ wic create directdisksdb-gpt -e core-image-minimal
+     INFO: Building wic-tools...
+                .
+                .
+                .
+     Initialising tasks: 100% |#######################################| Time: 0:00:01
+     NOTE: Executing SetScene Tasks
+     NOTE: Executing RunQueue Tasks
+     NOTE: Tasks Summary: Attempted 1161 tasks of which 1157 didn't need to be rerun and all succeeded.
+     INFO: Creating image(s)...
 
-     Creating image(s)...
-
-     Info: The new image(s) can be found here:
-      <replaceable>current_directory</replaceable>/build/directdisksdb-201310231131-sdb.direct
+     INFO: The new image(s) can be found here:
+       ./directdisksdb-gpt-201710090938-sdb.direct
 
      The following build artifacts were used to create the image(s):
+       ROOTFS_DIR:                   /home/scottrif/poky/build/tmp.wic.hk3wl6zn/rootfs_copy
+       BOOTIMG_DIR:                  /home/scottrif/poky/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share
+       KERNEL_DIR:                   /home/scottrif/poky/build/tmp/deploy/images/qemux86
+       NATIVE_SYSROOT:               /home/scottrif/poky/build/tmp/work/i586-poky-linux/wic-tools/1.0-r0/recipe-sysroot-native
 
-      ROOTFS_DIR: /home/trz/yocto/yocto-image/build/tmp/work/nuc-poky-linux/core-image-minimal/1.0-r0/rootfs
-      BOOTIMG_DIR: /home/trz/yocto/yocto-image/build/tmp/sysroots/nuc/usr/share
-      KERNEL_DIR: /home/trz/yocto/yocto-image/build/tmp/sysroots/nuc/usr/src/kernel
-      NATIVE_SYSROOT: /home/trz/yocto/yocto-image/build/tmp/sysroots/x86_64-linux
-
-     The image(s) were created using OE kickstart file:
-      /home/trz/yocto/yocto-image/scripts/lib/image/canned-wks/directdisksdb.wks
-                        </literallayout>
-                        Continuing with the example, you can now directly
-                        <filename>dd</filename> the image to a USB stick, or
-                        whatever media for which you built your image,
-                        and boot the resulting media:
-                        <literallayout class='monospaced'>
-     $ sudo dd if=build/directdisksdb-201310231131-sdb.direct of=/dev/sdb
-     86018+0 records in
-     86018+0 records out
-     44041216 bytes (44 MB) copied, 13.0734 s, 3.4 MB/s
-     [trz at empanada tmp]$ sudo eject /dev/sdb
-                        </literallayout>
-                    </para>
-                </section>
-
-                <section id='creating-an-image-based-on-core-image-minimal-and-crownbay-noemgd'>
-                    <title>Creating an Image Based on <filename>core-image-minimal</filename> and <filename>crownbay-noemgd</filename></title>
-
-                    <para>
-                        This example creates an image based on
-                        <filename>core-image-minimal</filename> and a
-                        <filename>crownbay-noemgd</filename>
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
-                        that works right out of the box.
-                        <literallayout class='monospaced'>
-     $ wic create directdisk -e core-image-minimal
-
-     Checking basic build environment...
-     Done.
-
-     Creating image(s)...
-
-     Info: The new image(s) can be found here:
-      <replaceable>current_directory</replaceable>/build/directdisk-201309252350-sda.direct
-
-     The following build artifacts were used to create the image(s):
-
-     ROOTFS_DIR: /home/trz/yocto/yocto-image/build/tmp/work/crownbay_noemgd-poky-linux/core-image-minimal/1.0-r0/rootfs
-     BOOTIMG_DIR: /home/trz/yocto/yocto-image/build/tmp/sysroots/crownbay-noemgd/usr/share
-     KERNEL_DIR: /home/trz/yocto/yocto-image/build/tmp/sysroots/crownbay-noemgd/usr/src/kernel
-     NATIVE_SYSROOT: /home/trz/yocto/yocto-image/build/tmp/sysroots/crownbay-noemgd/usr/src/kernel
-
-     The image(s) were created using OE kickstart file:
-      /home/trz/yocto/yocto-image/scripts/lib/image/canned-wks/directdisk.wks
-                        </literallayout>
-                    </para>
-                </section>
-
-                <section id='using-a-modified-kickstart-file-and-running-in-raw-mode'>
-                    <title>Using a Modified Kickstart File and Running in Raw Mode</title>
-
-                    <para>
-                        This next example manually specifies each build artifact
-                        (runs in Raw Mode) and uses a modified kickstart file.
-                        The example also uses the <filename>-o</filename> option
-                        to cause Wic to create the output
-                        somewhere other than the default output directory,
-                        which is the current directory:
-                        <literallayout class='monospaced'>
-     $ wic create ~/test.wks -o /home/trz/testwic --rootfs-dir \
-          /home/trz/yocto/yocto-image/build/tmp/work/crownbay_noemgd-poky-linux/core-image-minimal/1.0-r0/rootfs \
-          --bootimg-dir /home/trz/yocto/yocto-image/build/tmp/sysroots/crownbay-noemgd/usr/share \
-          --kernel-dir /home/trz/yocto/yocto-image/build/tmp/sysroots/crownbay-noemgd/usr/src/kernel \
-          --native-sysroot /home/trz/yocto/yocto-image/build/tmp/sysroots/x86_64-linux
-
-     Creating image(s)...
-
-     Info: The new image(s) can be found here:
-      /home/trz/testwic/build/test-201309260032-sda.direct
-
-     The following build artifacts were used to create the image(s):
-
-     ROOTFS_DIR: /home/trz/yocto/yocto-image/build/tmp/work/crownbay_noemgd-poky-linux/core-image-minimal/1.0-r0/rootfs
-     BOOTIMG_DIR: /home/trz/yocto/yocto-image/build/tmp/sysroots/crownbay-noemgd/usr/share
-     KERNEL_DIR: /home/trz/yocto/yocto-image/build/tmp/sysroots/crownbay-noemgd/usr/src/kernel
-     NATIVE_SYSROOT: /home/trz/yocto/yocto-image/build/tmp/sysroots/crownbay-noemgd/usr/src/kernel
-
-     The image(s) were created using OE kickstart file:
-      /home/trz/test.wks
-                        </literallayout>
-                        For this example,
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
-                        did not have to be specified in the
-                        <filename>local.conf</filename> file since the
-                        artifact is manually specified.
-                    </para>
-                </section>
+     INFO: The image(s) were created using OE kickstart file:
+       /home/scottrif/poky/scripts/lib/wic/canned-wks/directdisksdb-gpt.wks
+                    </literallayout>
+                    Continuing with the example, you can now directly
+                    <filename>dd</filename> the image to a USB stick, or
+                    whatever media for which you built your image,
+                    and boot the resulting media:
+                    <literallayout class='monospaced'>
+     $ sudo dd if=directdisksdb-gpt-201710090938-sdb.direct of=/dev/sdb
+     140966+0 records in
+     140966+0 records out
+     72174592 bytes (72 MB, 69 MiB) copied, 78.0282 s, 925 kB/s
+     $ sudo eject /dev/sdb
+                    </literallayout>
+                </para>
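+
+                <para>
+                    If you want to double-check the partition layout of
+                    the generated image before or after writing it, you
+                    could also inspect it with the
+                    <filename>wic ls</filename> command, which is
+                    described in the
+                    "<link linkend='using-wic-to-manipulate-an-image'>Using Wic to Manipulate an Image</link>"
+                    section:
+                    <literallayout class='monospaced'>
+     $ wic ls directdisksdb-gpt-201710090938-sdb.direct
+                    </literallayout>
+                </para>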
             </section>
 
-            <section id='openembedded-kickstart-plugins'>
-                <title>Plug-ins</title>
+            <section id='using-a-modified-kickstart-file-and-running-in-raw-mode'>
+                <title>Using a Modified Kickstart File and Running in Raw Mode</title>
 
                 <para>
-                    Plug-ins allow Wic functionality to
-                    be extended and specialized by users.
-                    This section documents the plug-in interface, which is
-                    currently restricted to source plug-ins.
+                    This next example manually specifies each build artifact
+                    (runs in Raw Mode) and uses a modified kickstart file.
+                    The example also uses the <filename>-o</filename> option
+                    to cause Wic to create the output
+                    somewhere other than the default output directory,
+                    which is the current directory:
+                    <literallayout class='monospaced'>
+     $ wic create /home/scottrif/my_yocto/test.wks -o /home/scottrif/testwic \
+          --rootfs-dir /home/scottrif/poky/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/rootfs \
+          --bootimg-dir /home/scottrif/poky/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share \
+          --kernel-dir /home/scottrif/poky/build/tmp/deploy/images/qemux86 \
+          --native-sysroot /home/scottrif/poky/build/tmp/work/i586-poky-linux/wic-tools/1.0-r0/recipe-sysroot-native
+
+     INFO: Creating image(s)...
+
+     INFO: The new image(s) can be found here:
+       /home/scottrif/testwic/test-201710091445-sdb.direct
+
+     The following build artifacts were used to create the image(s):
+       ROOTFS_DIR:                   /home/scottrif/testwic/tmp.wic.x4wipbmb/rootfs_copy
+       BOOTIMG_DIR:                  /home/scottrif/poky/build/tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/recipe-sysroot/usr/share
+       KERNEL_DIR:                   /home/scottrif/poky/build/tmp/deploy/images/qemux86
+       NATIVE_SYSROOT:               /home/scottrif/poky/build/tmp/work/i586-poky-linux/wic-tools/1.0-r0/recipe-sysroot-native
+
+     INFO: The image(s) were created using OE kickstart file:
+       /home/scottrif/my_yocto/test.wks
+                    </literallayout>
+                    For this example,
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
+                    did not have to be specified in the
+                    <filename>local.conf</filename> file since the
+                    artifact is manually specified.
                 </para>
+            </section>
+
+            <section id='using-wic-to-manipulate-an-image'>
+                <title>Using Wic to Manipulate an Image</title>
 
                 <para>
-                    Source plug-ins provide a mechanism to customize
-                    various aspects of the image generation process in
-                    Wic, mainly the contents of
-                    partitions.
-                    The plug-ins provide a mechanism for mapping values
-                    specified in <filename>.wks</filename> files using the
-                    <filename>--source</filename> keyword to a
-                    particular plug-in implementation that populates a
-                    corresponding partition.
+                    Wic image manipulation allows you to shorten turnaround
+                    time during image development.
+                    For example, you can use Wic to delete the kernel partition
+                    of a Wic image and then insert a newly built kernel.
+                    This saves you from having to rebuild the entire image
+                    each time you modify the kernel.
                     <note>
-                        If you use plug-ins that have build-time dependencies
-                        (e.g. native tools, bootloaders, and so forth)
-                        when building a Wic image, you need to specify those
-                        dependencies using the
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-WKS_FILE_DEPENDS'><filename>WKS_FILE_DEPENDS</filename></ulink>
-                        variable.
+                        In order to use Wic to manipulate a Wic image as in
+                        this example, your development machine must have the
+                        <filename>mtools</filename> package installed.
                     </note>
                 </para>
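+
+                <para>
+                    As a sketch of the host setup (the package name and
+                    install command assume a Debian or Ubuntu machine),
+                    you could install <filename>mtools</filename> and, if
+                    needed, relax its sector-count check as described in
+                    the note later in this section:
+                    <literallayout class='monospaced'>
+     $ sudo apt-get install mtools
+     $ echo "mtools_skip_check=1" >> ~/.mtoolsrc
+                    </literallayout>
+                </para>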
 
                 <para>
-                    A source plug-in is created as a subclass of
-                    <filename>SourcePlugin</filename>.
-                    The plug-in file containing it is added to
-                    <filename>scripts/lib/wic/plugins/source/</filename> to
-                    make the plug-in implementation available to the
-                    Wic implementation.
-                    For more information, see
-                    <filename>scripts/lib/wic/pluginbase.py</filename>.
-                </para>
-
-                <para>
-                    Source plug-ins can also be implemented and added by
-                    external layers.
-                    As such, any plug-ins found in a
-                    <filename>scripts/lib/wic/plugins/source/</filename>
-                    directory in an external layer are also made
-                    available.
-                </para>
-
-                <para>
-                    When the Wic implementation needs
-                    to invoke a partition-specific implementation, it looks
-                    for the plug-in that has the same name as the
-                    <filename>--source</filename> parameter given to
-                    that partition.
-                    For example, if the partition is set up as follows:
-                    <literallayout class='monospaced'>
-     part /boot --source bootimg-pcbios   ...
-                    </literallayout>
-                    The methods defined as class members of the plug-in
-                    having the matching <filename>bootimg-pcbios.name</filename>
-                    class member are used.
-                </para>
-
-                <para>
-                    To be more concrete, here is the plug-in definition that
-                    matches a
-                    <filename>--source bootimg-pcbios</filename> usage,
-                    along with an example
-                    method called by the Wic implementation
-                    when it needs to invoke an implementation-specific
-                    partition-preparation function:
-                    <literallayout class='monospaced'>
-    class BootimgPcbiosPlugin(SourcePlugin):
-        name = 'bootimg-pcbios'
-
-    @classmethod
-        def do_prepare_partition(self, part, ...)
-                    </literallayout>
-                    If the subclass itself does not implement a function, a
-                    default version in a superclass is located and
-                    used, which is why all plug-ins must be derived from
-                    <filename>SourcePlugin</filename>.
-                </para>
-
-                <para>
-                    The <filename>SourcePlugin</filename> class defines the
-                    following methods, which is the current set of methods
-                    that can be implemented or overridden by
-                    <filename>--source</filename> plug-ins.
-                    Any methods not implemented by a
-                    <filename>SourcePlugin</filename> subclass inherit the
-                    implementations present in the
-                    <filename>SourcePlugin</filename> class.
-                    For more information, see the
-                    <filename>SourcePlugin</filename> source for details:
-                </para>
-
-                <para>
-                    <itemizedlist>
+                    The following example examines the contents of the Wic
+                    image, deletes the existing kernel, and then inserts a
+                    new kernel:
+                    <orderedlist>
                         <listitem><para>
-                            <emphasis><filename>do_prepare_partition()</filename>:</emphasis>
-                            Called to do the actual content population for a
+                            <emphasis>List the Partitions:</emphasis>
+                            Use the <filename>wic ls</filename> command to list
+                            all the partitions in the Wic image:
+                            <literallayout class='monospaced'>
+     $ wic ls tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic
+     Num     Start        End          Size      Fstype
+      1       1048576     25041919     23993344  fat16
+      2      25165824     72157183     46991360  ext4
+                            </literallayout>
+                            The previous output shows two partitions in the
+                            <filename>core-image-minimal-qemux86.wic</filename>
+                            image.
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis>Examine a Particular Partition:</emphasis>
+                            Use the <filename>wic ls</filename> command again
+                            but in a different form to examine a particular
                             partition.
-                            In other words, the method prepares the final
-                            partition image that is incorporated into the
-                            disk image.
-                            </para></listitem>
-                        <listitem><para>
-                            <emphasis><filename>do_configure_partition()</filename>:</emphasis>
-                            Called before
-                            <filename>do_prepare_partition()</filename>.
-                            This method is typically used to create custom
-                            configuration files for a partition (e.g. syslinux
-                            or grub configuration files).
-                            </para></listitem>
-                        <listitem><para>
-                            <emphasis><filename>do_install_disk()</filename>:</emphasis>
-                            Called after all partitions have been prepared and
-                            assembled into a disk image.
-                            This method provides a hook to allow finalization
-                            of a disk image, (e.g. writing an MBR).
-                            </para></listitem>
-                        <listitem><para>
-                            <emphasis><filename>do_stage_partition()</filename>:</emphasis>
-                            Special content-staging hook called before
-                            <filename>do_prepare_partition()</filename>.
-                            This method is normally empty.</para>
-                            <para>Typically, a partition just uses the passed-in
-                            parameters (e.g. the unmodified value of
-                            <filename>bootimg_dir</filename>).
-                            However, in some cases things might need to be
-                            more tailored.
-                            As an example, certain files might additionally
-                            need to be taken from
-                            <filename>bootimg_dir + /boot</filename>.
-                            This hook allows those files to be staged in a
-                            customized fashion.
                             <note>
-                                <filename>get_bitbake_var()</filename>
-                                allows you to access non-standard variables
-                                that you might want to use for this.
+                                You can get command usage on any Wic command
+                                using the following form:
+                                <literallayout class='monospaced'>
+     $ wic help <replaceable>command</replaceable>
+                                </literallayout>
+                                For example, the following command shows you
+                                the various ways to use the
+                                <filename>wic ls</filename> command:
+                                <literallayout class='monospaced'>
+     $ wic help ls
+                                </literallayout>
                             </note>
-                            </para></listitem>
-                    </itemizedlist>
-                </para>
+                            The following command shows what is in
+                            partition one:
+                            <literallayout class='monospaced'>
+     $ wic ls tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic:1
+     Volume in drive : is boot
+      Volume Serial Number is E894-1809
+     Directory for ::/
 
-                <para>
-                    This scheme is extensible.
-                    Adding more hooks is a simple matter of adding more
-                    plug-in methods to <filename>SourcePlugin</filename> and
-                    derived classes.
-                    The code that then needs to call the plug-in methods uses
-                    <filename>plugin.get_source_plugin_methods()</filename>
-                    to find the method or methods needed by the call.
-                    Retrieval of those methods is accomplished
-                    by filling up a dict with keys
-                    containing the method names of interest.
-                    On success, these will be filled in with the actual
-                    methods.
-                    Please see the Wic
-                    implementation for examples and details.
-                </para>
-            </section>
+     libcom32 c32    186500 2017-10-09  16:06
+     libutil  c32     24148 2017-10-09  16:06
+     syslinux cfg       220 2017-10-09  16:06
+     vesamenu c32     27104 2017-10-09  16:06
+     vmlinuz        6904608 2017-10-09  16:06
+             5 files           7 142 580 bytes
+                              16 582 656 bytes free
+                            </literallayout>
+                            The previous output shows five files, with
+                            <filename>vmlinuz</filename> being the kernel.
+                            <note>
+                                If you see the following error, you need to
+                                update or create a
+                                <filename>~/.mtoolsrc</filename> file and
+                                be sure to have the line "mtools_skip_check=1"
+                                in the file.
+                                Then, run the Wic command again:
+                                <literallayout class='monospaced'>
+     ERROR: _exec_cmd: /usr/bin/mdir -i /tmp/wic-parttfokuwra ::/ returned '1' instead of 0
+      output: Total number of sectors (47824) not a multiple of sectors per track (32)!
+      Add mtools_skip_check=1 to your .mtoolsrc file to skip this test
+                                </literallayout>
+                             </note>
+                             </para></listitem>
+                         <listitem><para>
+                             <emphasis>Remove the Old Kernel:</emphasis>
+                             Use the <filename>wic rm</filename> command to
+                             remove the <filename>vmlinuz</filename> file
+                             (kernel):
+                             <literallayout class='monospaced'>
+     $ wic rm tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic:1/vmlinuz
+                             </literallayout>
+                             </para></listitem>
+                         <listitem><para>
+                             <emphasis>Add In the New Kernel:</emphasis>
+                             Use the <filename>wic cp</filename> command to
+                             add the updated kernel to the Wic image.
+                             Depending on how you built your kernel, it could
+                             be in different places.
+                             If you used <filename>devtool</filename> and
+                             an SDK to build your kernel, it resides in the
+                             <filename>tmp/work</filename> directory of the
+                             extensible SDK.
+                             If you used <filename>make</filename> to build the
+                             kernel, the kernel will be in the
+                             <filename>workspace/sources</filename> area.
+                             </para>
 
-            <section id='openembedded-kickstart-wks-reference'>
-                <title>OpenEmbedded Kickstart (<filename>.wks</filename>) Reference</title>
-
-                <para>
-                    The current Wic implementation supports
-                    only the basic kickstart partitioning commands:
-                    <filename>partition</filename> (or <filename>part</filename>
-                    for short) and <filename>bootloader</filename>.
-                    <note>
-                        Future updates will implement more commands and options.
-                        If you use anything that is not specifically
-                        supported, results can be unpredictable.
-                    </note>
-                </para>
-
-                <para>
-                    The following is a list of the commands, their syntax,
-                    and meanings.
-                    The commands are based on the Fedora
-                    kickstart versions but with modifications to
-                    reflect Wic capabilities.
-                    You can see the original documentation for those commands
-                    at the following links:
-                    <itemizedlist>
-                        <listitem><para>
-                            <ulink url='http://fedoraproject.org/wiki/Anaconda/Kickstart#part_or_partition'>http://fedoraproject.org/wiki/Anaconda/Kickstart#part_or_partition</ulink>
-                            </para></listitem>
-                        <listitem><para>
-                            <ulink url='http://fedoraproject.org/wiki/Anaconda/Kickstart#bootloader'>http://fedoraproject.org/wiki/Anaconda/Kickstart#bootloader</ulink>
-                            </para></listitem>
-                    </itemizedlist>
-                </para>
-
-                <section id='command-part-or-partition'>
-                    <title>Command: part or partition</title>
-
-                    <para>
-                    Either of these commands create a partition on the system
-                    and use the following syntax:
-                        <literallayout class='monospaced'>
-     part [<replaceable>mntpoint</replaceable>]
-     partition [<replaceable>mntpoint</replaceable>]
-                        </literallayout>
-                        If you do not provide
-                        <replaceable>mntpoint</replaceable>, Wic creates a
-                        partition but does not mount it.
-                    </para>
-
-                    <para>
-                        The
-                        <filename><replaceable>mntpoint</replaceable></filename>
-                        is where the partition will be mounted and must be of
-                        one of the following forms:
-                        <itemizedlist>
-                            <listitem><para>
-                                <filename>/<replaceable>path</replaceable></filename>:
-                                For example, "/", "/usr", or "/home"
-                                </para></listitem>
-                            <listitem><para>
-                                <filename>swap</filename>:
-                                The created partition is used as swap space.
-                                </para></listitem>
-                        </itemizedlist>
-                    </para>
-
-                    <para>
-                        Specifying a <replaceable>mntpoint</replaceable> causes
-                        the partition to automatically be mounted.
-                        Wic achieves this by adding entries to the filesystem
-                        table (fstab) during image generation.
-                        In order for wic to generate a valid fstab, you must
-                        also provide one of the <filename>--ondrive</filename>,
-                        <filename>--ondisk</filename>, or
-                        <filename>--use-uuid</filename> partition options as
-                        part of the command.
-                        Here is an example using "/" as the mountpoint.
-                        The command uses "--ondisk" to force the partition onto
-                        the <filename>sdb</filename> disk:
-                        <literallayout class='monospaced'>
-     part / --source rootfs --ondisk sdb --fstype=ext3 --label platform --align 1024
-                        </literallayout>
-                    </para>
-
-                    <para>
-                        Here is a list that describes other supported options
-                        you can use with the <filename>part</filename> and
-                        <filename>partition</filename> commands:
-                        <itemizedlist>
-                            <listitem><para>
-                                <emphasis><filename>--size</filename>:</emphasis>
-                                The minimum partition size in MBytes.
-                                Specify an integer value such as 500.
-                                Do not append the number with "MB".
-                                You do not need this option if you use
-                                <filename>--source</filename>.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--source</filename>:</emphasis>
-                                This option is a
-                                Wic-specific option that
-                                names the source of the data that populates
-                                the partition.
-                                The most common value for this option is
-                                "rootfs", but you can use any value that maps to
-                                a valid source plug-in.
-                                For information on the source plug-ins, see the
-                                "<link linkend='openembedded-kickstart-plugins'>Plug-ins</link>"
-                                section.</para>
-                                <para>If you use
-                                <filename>--source rootfs</filename>,
-                                Wic creates a partition as
-                                large as needed and to fill it with the contents
-                                of the root filesystem pointed to by the
-                                <filename>-r</filename> command-line option
-                                or the equivalent rootfs derived from the
-                                <filename>-e</filename> command-line
-                                option.
-                                The filesystem type used to create the
-                                partition is driven by the value of the
-                                <filename>--fstype</filename> option
-                                specified for the partition.
-                                See the entry on
-                                <filename>--fstype</filename> that
-                                follows for more information.
-                                </para>
-                                <para>If you use
-                                <filename>--source <replaceable>plugin-name</replaceable></filename>,
-                                Wic creates a partition as
-                                large as needed and fills it with the contents
-                                of the partition that is generated by the
-                                specified plug-in name using the data pointed
-                                to by the <filename>-r</filename> command-line
-                                option or the equivalent rootfs derived from the
-                                <filename>-e</filename> command-line
-                                option.
-                                Exactly what those contents and filesystem type
-                                end up being are dependent on the given plug-in
-                                implementation.
-                                </para>
-                                <para>If you do not use the
-                                <filename>--source</filename> option, the
-                                <filename>wic</filename> command creates an
-                                empty partition.
-                                Consequently, you must use the
-                                <filename>--size</filename> option to specify
-                                the size of the empty partition.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--ondisk</filename> or <filename>--ondrive</filename>:</emphasis>
-                                Forces the partition to be created on a
-                                particular disk.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--fstype</filename>:</emphasis>
-                                Sets the file system type for the partition.
-                                Valid values are:
-                                <itemizedlist>
-                                    <listitem><para><filename>ext4</filename>
-                                    </para></listitem>
-                                    <listitem><para><filename>ext3</filename>
-                                    </para></listitem>
-                                    <listitem><para><filename>ext2</filename>
-                                    </para></listitem>
-                                    <listitem><para><filename>btrfs</filename>
-                                    </para></listitem>
-                                    <listitem><para><filename>squashfs</filename>
-                                    </para></listitem>
-                                    <listitem><para><filename>swap</filename>
-                                    </para></listitem>
-                                </itemizedlist>
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--fsoptions</filename>:</emphasis>
-                                Specifies a free-form string of options to be
-                                used when mounting the filesystem.
-                                This string will be copied into the
-                                <filename>/etc/fstab</filename> file of the
-                                installed system and should be enclosed in
-                                quotes.
-                                If not specified, the default string
-                                is "defaults".
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--label label</filename>:</emphasis>
-                                Specifies the label to give to the filesystem to
-                                be made on the partition.
-                                If the given label is already in use by another
-                                filesystem, a new label is created for the
-                                partition.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--active</filename>:</emphasis>
-                                Marks the partition as active.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--align (in KBytes)</filename>:</emphasis>
-                                This option is a
-                                Wic-specific option that
-                                says to start a partition on an
-                                <replaceable>x</replaceable> KBytes
-                                boundary.</para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--no-table</filename>:</emphasis>
-                                This option is a
-                                Wic-specific option.
-                                Using the option reserves space for the
-                                partition and causes it to become populated.
-                                However, the partition is not added to the
-                                partition table.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--extra-space</filename>:</emphasis>
-                                This option is a
-                                Wic-specific option that
-                                adds extra space after the space filled by the
-                                content of the partition.
-                                The final size can go beyond the size specified
-                                by the <filename>--size</filename> option.
-                                The default value is 10 Mbytes.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--overhead-factor</filename>:</emphasis>
-                                This option is a
-                                Wic-specific option that
-                                multiplies the size of the partition by the
-                                option's value.
-                                You must supply a value greater than or equal to
-                                "1".
-                                The default value is "1.3".
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--part-type</filename>:</emphasis>
-                                This option is a
-                                Wic-specific option that
-                                specifies the partition type globally
-                                unique identifier (GUID) for GPT partitions.
-                                You can find the list of partition type GUIDs
-                                at
-                                <ulink url='http://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs'></ulink>.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--use-uuid</filename>:</emphasis>
-                                This option is a
-                                Wic-specific option that
-                                causes Wic to generate a
-                                random GUID for the partition.
-                                The generated identifier is used in the
-                                bootloader configuration to specify the root
-                                partition.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--uuid</filename>:</emphasis>
-                                This option is a
-                                Wic-specific
-                                option that specifies the partition UUID.
-                                </para></listitem>
-                        </itemizedlist>
-                    </para>
-                </section>
-
-                <section id='command-bootloader'>
-                    <title>Command: bootloader</title>
-
-                    <para>
-                        This command specifies how the bootloader should be
-                        configured and supports the following options:
-                        <note>
-                            Bootloader functionality and boot partitions are
-                            implemented by the various
-                            <filename>--source</filename>
-                            plug-ins that implement bootloader functionality.
-                            The bootloader command essentially provides a
-                            means of modifying bootloader configuration.
-                        </note>
-                        <itemizedlist>
-                            <listitem><para>
-                                <emphasis><filename>--timeout</filename>:</emphasis>
-                                Specifies the number of seconds before the
-                                bootloader times out and boots the default
-                                option.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--append</filename>:</emphasis>
-                                Specifies kernel parameters.
-                                These parameters will be added to the syslinux
-                                <filename>APPEND</filename> or
-                                <filename>grub</filename> kernel command line.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis><filename>--configfile</filename>:</emphasis>
-                                Specifies a user-defined configuration file for
-                                the bootloader.
-                                You can provide a full pathname for the file or
-                                a file that exists in the
-                                <filename>canned-wks</filename> folder.
-                                This option overrides all other bootloader
-                                options.
-                                </para></listitem>
-                        </itemizedlist>
-                    </para>
-                </section>
-            </section>
+                             <para>The following example assumes
+                             <filename>devtool</filename> was used to build
+                             the kernel:
+                             <literallayout class='monospaced'>
+     $ wic cp ~/poky_sdk/tmp/work/qemux86-poky-linux/linux-yocto/4.12.12+git999-r0/linux-yocto-4.12.12+git999/arch/x86/boot/bzImage \
+        ~/poky/build/tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic:1/vmlinuz
+                             </literallayout>
+                             Once the new kernel is added back into the image,
+                             you can use the <filename>dd</filename>
+                             command or
+                             <link linkend='flashing-images-using-bmaptool'><filename>bmaptool</filename></link>
+                             to flash your wic image onto an SD card
+                             or USB stick and test your target.
+                             <note>
+                                 Using <filename>bmaptool</filename> is
+                                 generally 10 to 20 times faster than using
+                                 <filename>dd</filename>.
+                             </note>
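+                             A quick verification example follows these steps.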
+                             </para></listitem>
+                     </orderedlist>
+                 </para>
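+
+                 <para>
+                     To confirm the new kernel is in place, you can list the
+                     partition again as a quick verification:
+                     <literallayout class='monospaced'>
+     $ wic ls tmp/deploy/images/qemux86/core-image-minimal-qemux86.wic:1
+                     </literallayout>
+                     The listing should now include the updated
+                     <filename>vmlinuz</filename> file.
+                 </para>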
+             </section>
         </section>
     </section>
 
@@ -5817,887 +5612,231 @@
         <title>Building an Initial RAM Filesystem (initramfs) Image</title>
 
         <para>
-            initramfs is the successor of Initial RAM Disk (initrd).
-            It is a "copy in and out" (cpio) archive of the initial file system
-            that gets loaded into memory during the Linux startup process.
-            Because Linux uses the contents of the archive during
-            initialization, the initramfs needs to contain all of the device
-            drivers and tools needed to mount the final root filesystem.
+            An initial RAM filesystem (initramfs) image provides a temporary
+            root filesystem used for early system initialization (e.g.
+            loading of modules needed to locate and mount the "real" root
+            filesystem).
+            <note>
+                The initramfs image is the successor of initial RAM disk
+                (initrd).
+                It is a "copy in and out" (cpio) archive of the initial
+                filesystem that gets loaded into memory during the Linux
+                startup process.
+                Because Linux uses the contents of the archive during
+                initialization, the initramfs image needs to contain all of the
+                device drivers and tools needed to mount the final root
+                filesystem.
+            </note>
         </para>
 
         <para>
-            To build an initramfs image and bundle it into the kernel, set the
-            <ulink url='&YOCTO_DOCS_REF_URL;#var-INITRAMFS_IMAGE_BUNDLE'><filename>INITRAMFS_IMAGE_BUNDLE</filename></ulink>
-            variable in your <filename>local.conf</filename> file, and set the
-            <ulink url='&YOCTO_DOCS_REF_URL;#var-INITRAMFS_IMAGE'><filename>INITRAMFS_IMAGE</filename></ulink>
-            variable in your <filename>machine.conf</filename> file:
+            Follow these steps to create an initramfs image:
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Create the initramfs Image Recipe:</emphasis>
+                    You can reference the
+                    <filename>core-image-minimal-initramfs.bb</filename>
+                    recipe found in the <filename>meta/recipes-core</filename>
+                    directory of the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+                    as an example from which to work.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Decide if You Need to Bundle the initramfs Image
+                    Into the Kernel Image:</emphasis>
+                    If you want the initramfs image that is built to be
+                    bundled in with the kernel image, set the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-INITRAMFS_IMAGE_BUNDLE'><filename>INITRAMFS_IMAGE_BUNDLE</filename></ulink>
+                    variable to "1" in your <filename>local.conf</filename>
+                    configuration file and set the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-INITRAMFS_IMAGE'><filename>INITRAMFS_IMAGE</filename></ulink>
+                    variable in the recipe that builds the kernel image.
+                    <note><title>Tip</title>
+                        It is recommended that you do bundle the initramfs
+                        image with the kernel image to avoid circular
+                        dependencies between the kernel recipe and the
+                        initramfs recipe should the initramfs image
+                        include kernel modules.
+                    </note>
+                    Setting the <filename>INITRAMFS_IMAGE_BUNDLE</filename>
+                    flag causes the initramfs image to be unpacked
+                    into the <filename>${B}/usr/</filename> directory.
+                    The unpacked initramfs image is then passed to the kernel's
+                    <filename>Makefile</filename> using the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-CONFIG_INITRAMFS_SOURCE'><filename>CONFIG_INITRAMFS_SOURCE</filename></ulink>
+                    variable, allowing the initramfs image to be built into
+                    the kernel normally.
+                    <note>
+                        If you choose to not bundle the initramfs image with
+                        the kernel image, you are essentially using an
+                        <ulink url='https://en.wikipedia.org/wiki/Initrd'>Initial RAM Disk (initrd)</ulink>.
+                        Creating an initrd is handled primarily through the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#var-INITRD_IMAGE'><filename>INITRD_IMAGE</filename></ulink>,
+                        <filename>INITRD_LIVE</filename>, and
+                        <filename>INITRD_IMAGE_LIVE</filename> variables.
+                        For more information, see the
+                        <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/meta/classes/image-live.bbclass'><filename>image-live.bbclass</filename></ulink>
+                        file.
+                    </note>
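+                    A minimal configuration sketch that ties these steps
+                    together follows this list.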
+                    </para></listitem>
+<!--
+Some notes from Cal:
+
+    A non-bundled initramfs is essentially an initrd, which I am discovering
+    to be rather confusingly supported in OE at the moment.
+
+    Its primarily handled through INITRD_IMAGE(_LIVE/_VM) and INITRD(_LIVE/_VM)
+    variables. INITRD_IMAGE* is the primary image target, which gets added to
+    INITRD*, which is a list of cpio filesystems. You can add more cpio
+    filesystems to the INITRD variable to add more to the initrd. For
+    instance, meta-intel adds intel-microcode via the following:
+
+        INITRD_LIVE_prepend = "${@bb.utils.contains('MACHINE_FEATURES', 'intel-ucode', '${DEPLOY_DIR_IMAGE}/microcode.cpio ', '', d)}"
+
+    If 'intel-ucode' is in MACHINE_FEATURES, this resolves to:
+
+        INITRD_LIVE_prepend = "${DEPLOY_DIR_IMAGE}/microcode.cpio "
+
+    Unfortunately you need the full path, and its up to you to sort out
+    dependencies as well. For instance, we have the following:
+
+        MACHINE_ESSENTIAL_EXTRA_RDEPENDS_append = "${@bb.utils.contains('MACHINE_FEATURES', 'intel-ucode', ' intel-microcode', '', d)}"
+
+    which resolves to:
+
+        MACHINE_ESSENTIAL_EXTRA_RDEPENDS_append = "intel-microcode"
+
+    However, the above is only true with the "live" IMAGE_FSTYPE. Wic is
+    another beast entirely, with current wic kickstart files not supporting
+    initrds, and only partial support in the source plugins. That being said,
+    I know the generic bootfs work Ed is working on will help immensely in this
+    aspect. He or Saul can provide more details here.
+
+    Anyhow, its rather fractured and confusing and could probably use a
+    rework honestly. I don't know how feasible it is to document all the
+    details and corner cases of this area.
+-->
+                <listitem><para>
+                    <emphasis>Optionally Add Items to the initramfs Image
+                    Through the initramfs Image Recipe:</emphasis>
+                    If you add items to the initramfs image by way of its
+                    recipe, you should use
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_INSTALL'><filename>PACKAGE_INSTALL</filename></ulink>
+                    rather than
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_INSTALL'><filename>IMAGE_INSTALL</filename></ulink>.
+                    <filename>PACKAGE_INSTALL</filename> gives more direct
+                    control over what is added to the image and avoids
+                    defaults you might not want that are set by the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-image'><filename>image</filename></ulink>
+                    or
+                    <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-core-image'><filename>core-image</filename></ulink>
+                    classes.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Build the Kernel Image and the initramfs
+                    Image:</emphasis>
+                    Build your kernel image using BitBake.
+                    Because the initramfs image recipe is a dependency of the
+                    kernel image, the initramfs image is built as well and
+                    bundled with the kernel image if you used the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-INITRAMFS_IMAGE_BUNDLE'><filename>INITRAMFS_IMAGE_BUNDLE</filename></ulink>
+                    variable described earlier.
+                    </para></listitem>
+            </orderedlist>
+        </para>
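+
+        <para>
+            The following is a minimal sketch that ties the previous steps
+            together.
+            The initramfs image recipe name and the added package are
+            placeholders, and the kernel recipe is assumed to be
+            <filename>linux-yocto</filename>:
+            <literallayout class='monospaced'>
+     # local.conf: bundle the initramfs image into the kernel image
+     INITRAMFS_IMAGE_BUNDLE = "1"
+
+     # linux-yocto .bbappend (i.e. the recipe that builds the kernel image):
+     # name the initramfs image recipe to bundle
+     INITRAMFS_IMAGE = "<replaceable>initramfs_image_recipe</replaceable>"
+
+     # the initramfs image recipe: add extra items with PACKAGE_INSTALL
+     PACKAGE_INSTALL_append = " <replaceable>package</replaceable>"
+            </literallayout>
+            Building the kernel image then builds and bundles the initramfs
+            image, for example:
+            <literallayout class='monospaced'>
+     $ bitbake virtual/kernel
+            </literallayout>
+        </para>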
+    </section>
+
+    <section id='flashing-images-using-bmaptool'>
+        <title>Flashing Images Using <filename>bmaptool</filename></title>
+
+        <para>
+            An easy way to flash an image to a bootable device is to use
+            <filename>bmaptool</filename>, which is integrated into the
+            OpenEmbedded build system.
+        </para>
+
+        <para>
+            The following example shows how to flash a Wic image.
+            <note>
+                You can use <filename>bmaptool</filename> to flash any
+                type of image.
+            </note>
+            Use these steps to flash an image using
+            <filename>bmaptool</filename>:
+            <note>
+                Unless you can install the
+                <filename>bmap-tools</filename> package on your host (see the
+                note in the second bullet of step 3 below), you need to build
+                <filename>bmaptool</filename> before using it.
+                Build the tool using the following command:
+                <literallayout class='monospaced'>
+     $ bitbake bmap-tools-native
+                </literallayout>
+            </note>
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Update the <filename>local.conf</filename> File:</emphasis>
+                    Add the following to your <filename>local.conf</filename>
+                    file:
+                    <literallayout class='monospaced'>
+     IMAGE_FSTYPES += "wic wic.bmap"
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Get Your Image:</emphasis>
+                    Either have your image ready (pre-built) or build it
+                    now:
+                    <literallayout class='monospaced'>
+     $ bitbake <replaceable>image</replaceable>
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Flash the Device:</emphasis>
+                    Flash the device with the image by using
+                    <filename>bmaptool</filename> depending on your particular
+                    setup:
+                    <itemizedlist>
+                        <listitem><para>
+                            If you have write access to the media,
+                            use this command form:
+                            <literallayout class='monospaced'>
+     $ oe-run-native bmap-tools-native bmaptool copy ./tmp/deploy/images/<replaceable>machine</replaceable>/core-image-minimal-<replaceable>machine</replaceable>.wic /dev/sd<replaceable>X</replaceable>
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            If you do not have write access to
+                            the media, use the following
+                            commands:
+                            <literallayout class='monospaced'>
+     $ sudo chmod 666 /dev/sd<replaceable>X</replaceable>
+     $ oe-run-native bmap-tools-native bmaptool copy ./tmp/deploy/images/<replaceable>machine</replaceable>/core-image-minimal-<replaceable>machine</replaceable>.wic /dev/sd<replaceable>X</replaceable>
+                            </literallayout>
+                            <note>
+                                If you are using an Ubuntu or Debian
+                                distribution, you can install the
+                                <filename>bmap-tools</filename> package using
+                                the following command and then use the tool
+                                directly, even from the root account, without
+                                adjusting <filename>PATH</filename>:
+                                <literallayout class='monospaced'>
+     $ sudo apt-get install bmap-tools
+                                </literallayout>
+                            </note>
+                            </para></listitem>
+                    </itemizedlist>
+                    </para></listitem>
+            </orderedlist>
+        </para>
+
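+        <para>
+            If you already have an image that was built without the
+            "wic.bmap" output, one possible approach is to generate the
+            block map separately and then flash the image, assuming
+            <filename>bmap-tools</filename> is installed on your host
+            (otherwise, prefix the commands with
+            <filename>oe-run-native bmap-tools-native</filename> as shown
+            above):
+            <literallayout class='monospaced'>
+     $ bmaptool create -o <replaceable>image</replaceable>.wic.bmap <replaceable>image</replaceable>.wic
+     $ bmaptool copy --bmap <replaceable>image</replaceable>.wic.bmap <replaceable>image</replaceable>.wic /dev/sd<replaceable>X</replaceable>
+            </literallayout>
+        </para>
+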
+        <para>
+            For help on the <filename>bmaptool</filename> command, use the
+            following command:
             <literallayout class='monospaced'>
-     INITRAMFS_IMAGE_BUNDLE = "1"
-     INITRAMFS_IMAGE = "<replaceable>image_recipe_name</replaceable>"
+     $ bmaptool --help
             </literallayout>
-            Setting the <filename>INITRAMFS_IMAGE_BUNDLE</filename>
-            flag causes the initramfs created by the recipe and defined by
-            <filename>INITRAMFS_IMAGE</filename> to be unpacked into the
-            <filename>${B}/usr/</filename> directory.
-            The unpacked initramfs is then passed to the kernel's
-            <filename>Makefile</filename> using the
-            <ulink url='&YOCTO_DOCS_REF_URL;#var-CONFIG_INITRAMFS_SOURCE'><filename>CONFIG_INITRAMFS_SOURCE</filename></ulink>
-            variable, allowing initramfs to be built in to the kernel
-            normally.
-            <note>
-                The preferred method is to use the
-                <filename>INITRAMFS_IMAGE</filename> variable rather than the
-                <filename>INITRAMFS_TASK</filename> variable.
-                Setting <filename>INITRAMFS_TASK</filename> is supported for
-                backward compatibility.
-                However, use of this variable has circular dependency
-                problems.
-                See the
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-INITRAMFS_IMAGE_BUNDLE'><filename>INITRAMFS_IMAGE_BUNDLE</filename></ulink>
-                variable for additional information on these dependency
-                problems.
-            </note>
         </para>
-
-        <para>
-            The recipe that <filename>INITRAMFS_IMAGE</filename>
-            points to must produce a <filename>.cpio.gz</filename>,
-            <filename>.cpio.tar</filename>, <filename>.cpio.lz4</filename>,
-            <filename>.cpio.lzma</filename>, or
-            <filename>.cpio.xz</filename> file.
-            You can ensure you produce one of these <filename>.cpio.*</filename>
-            files by setting the
-            <ulink url='&YOCTO_DOCS_REF_URL;#var-INITRAMFS_FSTYPES'><filename>INITRAMFS_FSTYPES</filename></ulink>
-            variable in your configuration file to one or more of the above
-            file types.
-            <note>
-                If you add items to the initramfs image by way of its recipe,
-                you should use
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_INSTALL'><filename>PACKAGE_INSTALL</filename></ulink>
-                rather than
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-IMAGE_INSTALL'><filename>IMAGE_INSTALL</filename></ulink>.
-                <filename>PACKAGE_INSTALL</filename> gives more direct control
-                of what is added to the image as compared to the defaults you
-                might not necessarily want that are set by the
-                <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-image'><filename>image</filename></ulink>
-                or
-                <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-core-image'><filename>core-image</filename></ulink>
-                classes.
-            </note>
-        </para>
-    </section>
-
-    <section id='configuring-the-kernel'>
-        <title>Configuring the Kernel</title>
-
-        <para>
-            Configuring the Yocto Project kernel consists of making sure the
-            <filename>.config</filename> file has all the right information
-            in it for the image you are building.
-            You can use the <filename>menuconfig</filename> tool and
-            configuration fragments to make sure your
-            <filename>.config</filename> file is just how you need it.
-            You can also save known configurations in a
-            <filename>defconfig</filename> file that the build system can use
-            for kernel configuration.
-        </para>
-
-        <para>
-            This section describes how to use <filename>menuconfig</filename>,
-            create and use configuration fragments, and how to interactively
-            modify your <filename>.config</filename> file to create the
-            leanest kernel configuration file possible.
-        </para>
-
-        <para>
-            For more information on kernel configuration, see the
-            "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#changing-the-configuration'>Changing the Configuration</ulink>"
-            section in the Yocto Project Linux Kernel Development Manual.
-        </para>
-
-        <section id='using-menuconfig'>
-            <title>Using&nbsp;&nbsp;<filename>menuconfig</filename></title>
-
-            <para>
-                The easiest way to define kernel configurations is to set them through the
-                <filename>menuconfig</filename> tool.
-                This tool provides an interactive method with which
-                to set kernel configurations.
-                For general information on <filename>menuconfig</filename>, see
-                <ulink url='http://en.wikipedia.org/wiki/Menuconfig'></ulink>.
-            </para>
-
-            <para>
-                To use the <filename>menuconfig</filename> tool in the Yocto Project development
-                environment, you must launch it using BitBake.
-                Thus, the environment must be set up using the
-                <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
-                or
-                <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>
-                script found in the
-                <link linkend='build-directory'>Build Directory</link>.
-                You must also be sure of the state of your build in the
-                <link linkend='source-directory'>Source Directory</link>.
-                The following commands run <filename>menuconfig</filename>
-                assuming the Source Directory's top-level folder is
-                <filename>~/poky</filename>:
-                <literallayout class='monospaced'>
-     $ cd poky
-     $ source oe-init-build-env
-     $ bitbake linux-yocto -c kernel_configme -f
-     $ bitbake linux-yocto -c menuconfig
-                </literallayout>
-                Once <filename>menuconfig</filename> comes up, its standard
-                interface allows you to interactively examine and configure
-                all the kernel configuration parameters.
-                After making your changes, simply exit the tool and save your
-                changes to create an updated version of the
-                <filename>.config</filename> configuration file.
-            </para>
-
-            <para>
-                Consider an example that configures the <filename>linux-yocto-3.14</filename>
-                kernel.
-                The OpenEmbedded build system recognizes this kernel as
-                <filename>linux-yocto</filename>.
-                Thus, the following commands from the shell in which you previously sourced the
-                environment initialization script cleans the shared state cache and the
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink>
-                directory and then runs <filename>menuconfig</filename>:
-                <literallayout class='monospaced'>
-     $ bitbake linux-yocto -c menuconfig
-                </literallayout>
-            </para>
-
-            <para>
-                Once <filename>menuconfig</filename> launches, use the interface
-                to navigate through the selections to find the configuration settings in
-                which you are interested.
-                For example, consider the <filename>CONFIG_SMP</filename> configuration setting.
-                You can find it at <filename>Processor Type and Features</filename> under
-                the configuration selection <filename>Symmetric Multi-processing Support</filename>.
-                After highlighting the selection, use the arrow keys to select or deselect
-                the setting.
-                When you are finished with all your selections, exit out and save them.
-            </para>
-
-            <para>
-                Saving the selections updates the <filename>.config</filename> configuration file.
-                This is the file that the OpenEmbedded build system uses to configure the
-                kernel during the build.
-                You can find and examine this file in the Build Directory in
-                <filename>tmp/work/</filename>.
-                The actual <filename>.config</filename> is located in the area where the
-                specific kernel is built.
-                For example, if you were building a Linux Yocto kernel based on the
-                Linux 3.14 kernel and you were building a QEMU image targeted for
-                <filename>x86</filename> architecture, the
-                <filename>.config</filename> file would be located here:
-                <literallayout class='monospaced'>
-     poky/build/tmp/work/qemux86-poky-linux/linux-yocto-3.14.11+git1+84f...
-        ...656ed30-r1/linux-qemux86-standard-build
-                </literallayout>
-                <note>
-                    The previous example directory is artificially split and many of the characters
-                    in the actual filename are omitted in order to make it more readable.
-                    Also, depending on the kernel you are using, the exact pathname
-                    for <filename>linux-yocto-3.14...</filename> might differ.
-                </note>
-            </para>
-
-            <para>
-                Within the <filename>.config</filename> file, you can see the kernel settings.
-                For example, the following entry shows that symmetric multi-processor support
-                is not set:
-                <literallayout class='monospaced'>
-     # CONFIG_SMP is not set
-                </literallayout>
-            </para>
-
-            <para>
-                A good method to isolate changed configurations is to use a combination of the
-                <filename>menuconfig</filename> tool and simple shell commands.
-                Before changing configurations with <filename>menuconfig</filename>, copy the
-                existing <filename>.config</filename> and rename it to something else,
-                use <filename>menuconfig</filename> to make
-                as many changes as you want and save them, then compare the renamed configuration
-                file against the newly created file.
-                You can use the resulting differences as your base to create configuration fragments
-                to permanently save in your kernel layer.
-                <note>
-                    Be sure to make a copy of the <filename>.config</filename> and don't just
-                    rename it.
-                    The build system needs an existing <filename>.config</filename>
-                    from which to work.
-                </note>
-            </para>
-        </section>
-
-        <section id='creating-a-defconfig-file'>
-            <title>Creating a&nbsp;&nbsp;<filename>defconfig</filename> File</title>
-
-            <para>
-                A <filename>defconfig</filename> file is simply a
-                <filename>.config</filename> renamed to "defconfig".
-                You can use a <filename>defconfig</filename> file
-                to retain a known set of kernel configurations from which the
-                OpenEmbedded build system can draw to create the final
-                <filename>.config</filename> file.
-                <note>
-                    Out-of-the-box, the Yocto Project never ships a
-                    <filename>defconfig</filename> or
-                    <filename>.config</filename> file.
-                    The OpenEmbedded build system creates the final
-                    <filename>.config</filename> file used to configure the
-                    kernel.
-                </note>
-            </para>
-
-            <para>
-                To create a <filename>defconfig</filename>, start with a
-                complete, working Linux kernel <filename>.config</filename>
-                file.
-                Copy that file to the appropriate
-                <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink><filename>}</filename>
-                directory in your layer's
-                <filename>recipes-kernel/linux</filename> directory, and rename
-                the copied file to "defconfig".
-                Then, add the following lines to the linux-yocto
-                <filename>.bbappend</filename> file in your layer:
-                <literallayout class='monospaced'>
-     FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
-     SRC_URI += "file://defconfig"
-                </literallayout>
-                The
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
-                tells the build system how to search for the file, while the
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
-                extends the
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESPATH'><filename>FILESPATH</filename></ulink>
-                variable (search directories) to include the
-                <filename>${PN}</filename> directory you created to hold the
-                configuration changes.
-                <note>
-                    The build system applies the configurations from the
-                    <filename>defconfig</filename> file before applying any
-                    subsequent configuration fragments.
-                    The final kernel configuration is a combination of the
-                    configurations in the <filename>defconfig</filename>
-                    file and any configuration fragments you provide.
-                    You need to realize that if you have any configuration
-                    fragments, the build system applies these on top of and
-                    after applying the existing defconfig file configurations.
-                </note>
-                For more information on configuring the kernel, see the
-                "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#changing-the-configuration'>Changing the Configuration</ulink>"
-                and
-                "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#generating-configuration-files'>Generating Configuration Files</ulink>"
-                sections, both in the Yocto Project Linux Kernel Development
-                Manual.
-            </para>
-        </section>
-
-        <section id='creating-config-fragments'>
-            <title>Creating Configuration Fragments</title>
-
-            <para>
-                Configuration fragments are simply kernel options that appear in a file
-                placed where the OpenEmbedded build system can find and apply them.
-                Syntactically, the configuration statement is identical to what would appear
-                in the <filename>.config</filename> file, which is in the
-                <link linkend='build-directory'>Build Directory</link>:
-                <literallayout class='monospaced'>
-     tmp/work/<replaceable>arch</replaceable>-poky-linux/linux-yocto-<replaceable>release_specific_string</replaceable>/linux-<replaceable>arch</replaceable>-<replaceable>build_type</replaceable>
-                </literallayout>
-            </para>
-
-            <para>
-                It is simple to create a configuration fragment.
-                For example, issuing the following from the shell creates a configuration fragment
-                file named <filename>my_smp.cfg</filename> that enables multi-processor support
-                within the kernel:
-                <literallayout class='monospaced'>
-     $ echo "CONFIG_SMP=y" >> my_smp.cfg
-                </literallayout>
-                <note>
-                    All configuration fragment files must use the
-                    <filename>.cfg</filename> extension in order for the
-                    OpenEmbedded build system to recognize them as a
-                    configuration fragment.
-                </note>
-            </para>
-
-            <para>
-                Where do you put your configuration fragment files?
-                You can place these files in the same area pointed to by
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>.
-                The OpenEmbedded build system picks up the configuration and
-                adds it to the kernel's configuration.
-                For example, suppose you had a set of configuration options
-                in a file called <filename>myconfig.cfg</filename>.
-                If you put that file inside a directory named
-                <filename>linux-yocto</filename> that resides in the same
-                directory as the kernel's append file and then add a
-                <filename>SRC_URI</filename> statement such as the following
-                to the kernel's append file, those configuration options
-                will be picked up and applied when the kernel is built.
-                <literallayout class='monospaced'>
-     SRC_URI += "file://myconfig.cfg"
-                </literallayout>
-            </para>
-
-            <para>
-                As mentioned earlier, you can group related configurations into multiple files and
-                name them all in the <filename>SRC_URI</filename> statement as well.
-                For example, you could group separate configurations specifically for Ethernet and graphics
-                into their own files and add those by using a <filename>SRC_URI</filename> statement like the
-                following in your append file:
-                <literallayout class='monospaced'>
-     SRC_URI += "file://myconfig.cfg \
-            file://eth.cfg \
-            file://gfx.cfg"
-                </literallayout>
-            </para>
-        </section>
-
-        <section id='fine-tuning-the-kernel-configuration-file'>
-            <title>Fine-Tuning the Kernel Configuration File</title>
-
-            <para>
-                You can make sure the <filename>.config</filename> file is as lean or efficient as
-                possible by reading the output of the kernel configuration fragment audit,
-                noting any issues, making changes to correct the issues, and then repeating.
-            </para>
-
-            <para>
-                As part of the kernel build process, the
-                <filename>do_kernel_configcheck</filename> task runs.
-                This task validates the kernel configuration by checking the final
-                <filename>.config</filename> file against the input files.
-                During the check, the task produces warning messages for the following
-                issues:
-                <itemizedlist>
-                    <listitem><para>Requested options that did not make the final
-                        <filename>.config</filename> file.</para></listitem>
-                    <listitem><para>Configuration items that appear twice in the same
-                        configuration fragment.</para></listitem>
-                    <listitem><para>Configuration items tagged as "required" that were overridden.
-                        </para></listitem>
-                    <listitem><para>A board overrides a non-board specific option.</para></listitem>
-                    <listitem><para>Listed options not valid for the kernel being processed.
-                        In other words, the option does not appear anywhere.</para></listitem>
-                </itemizedlist>
-                <note>
-                    The <filename>do_kernel_configcheck</filename> task can
-                    also optionally report if an option is overridden during
-                    processing.
-                </note>
-            </para>
-
-            <para>
-                For each output warning, a message points to the file
-                that contains a list of the options and a pointer to the
-                configuration fragment that defines them.
-                Collectively, the files are the key to streamlining the
-                configuration.
-            </para>
-
-            <para>
-                To streamline the configuration, do the following:
-                <orderedlist>
-                    <listitem><para>Start with a full configuration that you
-                        know works - it builds and boots successfully.
-                        This configuration file will be your baseline.
-                        </para></listitem>
-                    <listitem><para>Separately run the
-                        <filename>do_kernel_configme</filename> and
-                        <filename>do_kernel_configcheck</filename> tasks.
-                        </para></listitem>
-                    <listitem><para>Take the resulting list of files from the
-                        <filename>do_kernel_configcheck</filename> task
-                        warnings and do the following:
-                        <itemizedlist>
-                            <listitem><para>
-                                Drop values that are redefined in the fragment
-                                but do not change the final
-                                <filename>.config</filename> file.
-                                </para></listitem>
-                            <listitem><para>
-                                Analyze and potentially drop values from the
-                                <filename>.config</filename> file that override
-                                required configurations.
-                                </para></listitem>
-                            <listitem><para>
-                                Analyze and potentially remove non-board
-                                specific options.
-                                </para></listitem>
-                            <listitem><para>
-                                Remove repeated and invalid options.
-                                </para></listitem>
-                        </itemizedlist></para></listitem>
-                    <listitem><para>
-                        After you have worked through the output of the kernel
-                        configuration audit, you can re-run the
-                        <filename>do_kernel_configme</filename> and
-                        <filename>do_kernel_configcheck</filename> tasks to
-                        see the results of your changes.
-                        If you have more issues, you can deal with them as
-                        described in the previous step.
-                        </para></listitem>
-                </orderedlist>
-            </para>
-
-            <para>
-                Iteratively working through steps two through four eventually yields
-                a minimal, streamlined configuration file.
-                Once you have the best <filename>.config</filename>, you can build the Linux
-                Yocto kernel.
-            </para>
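-
-            <para>
-                As a point of reference only, one way to drive this
-                iteration from the Build Directory is to force the two audit
-                tasks and then rebuild the kernel.
-                The commands below assume your kernel recipe is
-                <filename>linux-yocto</filename>:
-                <literallayout class='monospaced'>
-     $ bitbake linux-yocto -c kernel_configme -f
-     $ bitbake linux-yocto -c kernel_configcheck -f
-     $ bitbake linux-yocto
-                </literallayout>
-            </para>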
-        </section>
-
-        <section id='determining-hardware-and-non-hardware-features-for-the-kernel-configuration-audit-phase'>
-            <title>Determining Hardware and Non-Hardware Features for the Kernel Configuration Audit Phase</title>
-
-            <para>
-                This section describes part of the kernel configuration audit
-                phase that most developers can ignore.
-                During this part of the audit phase, the contents of the final
-                <filename>.config</filename> file are compared against the
-                fragments specified by the system.
-                These fragments can be system fragments, distro fragments,
-                or user specified configuration elements.
-                Regardless of their origin, the OpenEmbedded build system
-                warns the user if a specific option is not included in the
-                final kernel configuration.
-            </para>
-
-            <para>
-                In order to not overwhelm the user with configuration warnings,
-                by default the system only reports on missing "hardware"
-                options because a missing hardware option could mean a boot
-                failure or that important hardware is not available.
-            </para>
-
-            <para>
-                To determine whether or not a given option is "hardware" or
-                "non-hardware", the kernel Metadata contains files that
-                classify individual options or groups of options as either
-                hardware or non-hardware.
-                To better show this, consider a situation where the
-                Yocto Project kernel cache contains the following files:
-                <literallayout class='monospaced'>
-     kernel-cache/features/drm-psb/hardware.cfg
-     kernel-cache/features/kgdb/hardware.cfg
-     kernel-cache/ktypes/base/hardware.cfg
-     kernel-cache/bsp/mti-malta32/hardware.cfg
-     kernel-cache/bsp/fsl-mpc8315e-rdb/hardware.cfg
-     kernel-cache/bsp/qemu-ppc32/hardware.cfg
-     kernel-cache/bsp/qemuarma9/hardware.cfg
-     kernel-cache/bsp/mti-malta64/hardware.cfg
-     kernel-cache/bsp/arm-versatile-926ejs/hardware.cfg
-     kernel-cache/bsp/common-pc/hardware.cfg
-     kernel-cache/bsp/common-pc-64/hardware.cfg
-     kernel-cache/features/rfkill/non-hardware.cfg
-     kernel-cache/ktypes/base/non-hardware.cfg
-     kernel-cache/features/aufs/non-hardware.kcf
-     kernel-cache/features/ocf/non-hardware.kcf
-     kernel-cache/ktypes/base/non-hardware.kcf
-     kernel-cache/ktypes/base/hardware.kcf
-     kernel-cache/bsp/qemu-ppc32/hardware.kcf
-                </literallayout>
-                The following list provides explanations for the various
-                files:
-                <itemizedlist>
-                    <listitem><para><filename>hardware.kcf</filename>:
-                        Specifies a list of kernel Kconfig files that contain
-                        hardware options only.
-                        </para></listitem>
-                    <listitem><para><filename>non-hardware.kcf</filename>:
-                        Specifies a list of kernel Kconfig files that contain
-                        non-hardware options only.
-                        </para></listitem>
-                    <listitem><para><filename>hardware.cfg</filename>:
-                        Specifies a list of kernel
-                        <filename>CONFIG_</filename> options that are hardware,
-                        regardless of whether or not they are within a Kconfig
-                        file specified by a hardware or non-hardware
-                        Kconfig file (i.e. <filename>hardware.kcf</filename> or
-                        <filename>non-hardware.kcf</filename>).
-                        </para></listitem>
-                    <listitem><para><filename>non-hardware.cfg</filename>:
-                        Specifies a list of kernel
-                        <filename>CONFIG_</filename> options that are
-                        not hardware, regardless of whether or not they are
-                        within a Kconfig file specified by a hardware or
-                        non-hardware Kconfig file (i.e.
-                        <filename>hardware.kcf</filename> or
-                        <filename>non-hardware.kcf</filename>).
-                        </para></listitem>
-                </itemizedlist>
-                Here is a specific example using the
-                <filename>kernel-cache/bsp/mti-malta32/hardware.cfg</filename>:
-                <literallayout class='monospaced'>
-     CONFIG_SERIAL_8250
-     CONFIG_SERIAL_8250_CONSOLE
-     CONFIG_SERIAL_8250_NR_UARTS
-     CONFIG_SERIAL_8250_PCI
-     CONFIG_SERIAL_CORE
-     CONFIG_SERIAL_CORE_CONSOLE
-     CONFIG_VGA_ARB
-                </literallayout>
-                The kernel configuration audit automatically detects these
-                files (hence the names must be exactly the ones discussed here),
-                and uses them as inputs when generating warnings about the
-                final <filename>.config</filename> file.
-            </para>
-
-            <para>
-                A user-specified kernel Metadata repository, or recipe space
-                feature, can use these same files to classify options that are
-                found within its <filename>.cfg</filename> files as hardware
-                or non-hardware, to prevent the OpenEmbedded build system from
-                producing an error or warning when an option is not in the
-                final <filename>.config</filename> file.
-            </para>
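-
-            <para>
-                For example, a recipe space feature could place a
-                <filename>non-hardware.cfg</filename> file next to its
-                configuration fragments in order to list purely software
-                options and silence warnings about them.
-                The option names below are illustrative only:
-                <literallayout class='monospaced'>
-     CONFIG_IKCONFIG
-     CONFIG_IKCONFIG_PROC
-     CONFIG_PRINTK_TIME
-                </literallayout>
-            </para>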
-        </section>
-    </section>
-
-    <section id="patching-the-kernel">
-        <title>Patching the Kernel</title>
-
-        <para>
-            Patching the kernel involves changing or adding configurations to an existing kernel,
-            changing or adding recipes to the kernel that are needed to support specific hardware features,
-            or even altering the source code itself.
-            <note>
-                You can use the <filename>yocto-kernel</filename> script
-                found in the <link linkend='source-directory'>Source Directory</link>
-                under <filename>scripts</filename> to manage kernel patches and configuration.
-                See the "<ulink url='&YOCTO_DOCS_BSP_URL;#managing-kernel-patches-and-config-items-with-yocto-kernel'>Managing kernel Patches and Config Items with yocto-kernel</ulink>"
-                section in the Yocto Project Board Support Packages (BSP) Developer's Guide for
-                more information.</note>
-        </para>
-
-        <para>
-            This example creates a simple patch by adding some QEMU emulator console
-            output at boot time through <filename>printk</filename> statements in the kernel's
-            <filename>calibrate.c</filename> source code file.
-            Applying the patch and booting the modified image causes the added
-            messages to appear on the emulator's console.
-        </para>
-
-        <para>
-            The example assumes a clean build exists for the <filename>qemux86</filename>
-            machine in a
-            <link linkend='source-directory'>Source Directory</link>
-            named <filename>poky</filename>.
-            Furthermore, the <link linkend='build-directory'>Build Directory</link> is
-            <filename>build</filename> and is located in <filename>poky</filename> and
-            the kernel is based on the Linux 3.4 kernel.
-        </para>
-
-        <para>
-            Also, for more information on patching the kernel, see the
-            "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#applying-patches'>Applying Patches</ulink>"
-            section in the Yocto Project Linux Kernel Development Manual.
-        </para>
-
-        <section id='create-a-layer-for-your-changes'>
-            <title>Create a Layer for your Changes</title>
-
-            <para>
-                The first step is to create a layer so you can isolate your
-                changes.
-                Rather than use the <filename>yocto-layer</filename> script
-                to create the layer, this example steps through the process
-                by hand.
-                If you want information on the script that creates a general
-                layer, see the
-                "<link linkend='creating-a-general-layer-using-the-yocto-layer-script'>Creating a General Layer Using the yocto-layer Script</link>"
-                section.
-            </para>
-
-            <para>
-                These two commands create a directory you can use for your
-                layer:
-                <literallayout class='monospaced'>
-     $ cd ~/poky
-     $ mkdir meta-mylayer
-                </literallayout>
-                Creating a directory that follows the Yocto Project layer naming
-                conventions sets up the layer for your changes.
-                The layer is where you place your configuration files, append
-                files, and patch files.
-                To learn more about creating a layer and filling it with the
-                files you need, see the "<link linkend='understanding-and-creating-layers'>Understanding
-                and Creating Layers</link>" section.
-            </para>
-        </section>
-
-        <section id='finding-the-kernel-source-code'>
-            <title>Finding the Kernel Source Code</title>
-
-            <para>
-                Each time you build a kernel image, the kernel source code is fetched
-                and unpacked into the following directory:
-                <literallayout class='monospaced'>
-     ${S}/linux
-                </literallayout>
-                See the "<link linkend='finding-the-temporary-source-code'>Finding Temporary Source Code</link>"
-                section and the
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink> variable
-                for more information about where source is kept during a build.
-            </para>
-
-            <para>
-                For this example, we are going to patch the
-                <filename>init/calibrate.c</filename> file
-                by adding some simple console <filename>printk</filename> statements that we can
-                see when we boot the image using QEMU.
-            </para>
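-
-            <para>
-                If you want to confirm the exact location for your build,
-                one common technique (shown here only as a sketch) is to ask
-                BitBake for the value of <filename>S</filename> from within
-                the Build Directory:
-                <literallayout class='monospaced'>
-     $ cd ~/poky/build
-     $ bitbake -e linux-yocto | grep "^S="
-                </literallayout>
-            </para>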
-        </section>
-
-        <section id='creating-the-patch'>
-            <title>Creating the Patch</title>
-
-            <para>
-                Two methods exist by which you can create the patch:
-                <link linkend='using-devtool-in-your-workflow'><filename>devtool</filename></link> and
-                <link linkend='using-a-quilt-workflow'>Quilt</link>.
-                For kernel patches, however, the Git workflow is more
-                appropriate because the kernel source code is already
-                maintained in a Git repository.
-                This section assumes the Git workflow and shows the steps
-                specific to this example.
-                <orderedlist>
-                    <listitem><para><emphasis>Change the working directory</emphasis>:
-                        Change to where the kernel source code is before making
-                        your edits to the <filename>calibrate.c</filename> file:
-                        <literallayout class='monospaced'>
-     $ cd ~/poky/build/tmp/work/qemux86-poky-linux/linux-yocto-${PV}-${PR}/linux
-                        </literallayout>
-                        Because you are working in an established Git repository,
-                        you must be in this directory in order to commit your changes
-                        and create the patch file.
-                        <note>The <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink> and
-                            <ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink> variables
-                            represent the version and revision for the
-                            <filename>linux-yocto</filename> recipe.
-                            The <filename>PV</filename> variable includes the Git meta and machine
-                            hashes, which make the directory name longer than you might
-                            expect.
-                        </note></para></listitem>
-                    <listitem><para><emphasis>Edit the source file</emphasis>:
-                        Edit the <filename>init/calibrate.c</filename> file to have the
-                        following changes:
-                        <literallayout class='monospaced'>
-     void calibrate_delay(void)
-     {
-         unsigned long lpj;
-         static bool printed;
-         int this_cpu = smp_processor_id();
-
-         printk("*************************************\n");
-         printk("*                                   *\n");
-         printk("*        HELLO YOCTO KERNEL         *\n");
-         printk("*                                   *\n");
-         printk("*************************************\n");
-
-         if (per_cpu(cpu_loops_per_jiffy, this_cpu)) {
-               .
-               .
-               .
-                        </literallayout></para></listitem>
-                    <listitem><para><emphasis>Stage and commit your changes</emphasis>:
-                        These Git commands display the modified file, stage it, and then
-                        commit the file:
-                        <literallayout class='monospaced'>
-     $ git status
-     $ git add init/calibrate.c
-     $ git commit -m "calibrate: Add printk example"
-                        </literallayout></para></listitem>
-                    <listitem><para><emphasis>Generate the patch file</emphasis>:
-                        This Git command creates a patch file named
-                        <filename>0001-calibrate-Add-printk-example.patch</filename>
-                        in the current directory.
-                        <literallayout class='monospaced'>
-     $ git format-patch -1
-                        </literallayout>
-                        </para></listitem>
-                </orderedlist>
-            </para>
-        </section>
-
-        <section id='set-up-your-layer-for-the-build'>
-            <title>Set Up Your Layer for the Build</title>
-
-            <para>These steps get your layer set up for the build:
-                <orderedlist>
-                    <listitem><para><emphasis>Create additional structure</emphasis>:
-                        Create the additional layer structure:
-                        <literallayout class='monospaced'>
-     $ cd ~/poky/meta-mylayer
-     $ mkdir conf
-     $ mkdir recipes-kernel
-     $ mkdir recipes-kernel/linux
-     $ mkdir recipes-kernel/linux/linux-yocto
-                         </literallayout>
-                         The <filename>conf</filename> directory holds your configuration files, while the
-                         <filename>recipes-kernel</filename> directory holds your append file and
-                         your patch file.</para></listitem>
-                    <listitem><para><emphasis>Create the layer configuration file</emphasis>:
-                        Move to the <filename>meta-mylayer/conf</filename> directory and create
-                        the <filename>layer.conf</filename> file as follows:
-                        <literallayout class='monospaced'>
-     # We have a conf and classes directory, add to BBPATH
-     BBPATH .= ":${LAYERDIR}"
-
-     # We have recipes-* directories, add to BBFILES
-     BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
-                 ${LAYERDIR}/recipes-*/*/*.bbappend"
-
-     BBFILE_COLLECTIONS += "mylayer"
-     BBFILE_PATTERN_mylayer = "^${LAYERDIR}/"
-     BBFILE_PRIORITY_mylayer = "5"
-                         </literallayout>
-                         Notice <filename>mylayer</filename> as part of the last three
-                         statements.</para></listitem>
-                    <listitem><para><emphasis>Create the kernel recipe append file</emphasis>:
-                        Move to the <filename>meta-mylayer/recipes-kernel/linux</filename> directory and create
-                        the <filename>linux-yocto_3.4.bbappend</filename> file as follows:
-                        <literallayout class='monospaced'>
-     FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
-
-     SRC_URI += "file://0001-calibrate-Add-printk-example.patch"
-                        </literallayout>
-                        The <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
-                        and <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
-                        statements enable the OpenEmbedded build system to find the patch file.
-                        For more information on using append files, see the
-                        "<link linkend='using-bbappend-files'>Using .bbappend Files in Your Layer</link>"
-                        section.
-                        </para></listitem>
-                    <listitem><para><emphasis>Put the patch file in your layer</emphasis>:
-                        Move the <filename>0001-calibrate-Add-printk-example.patch</filename> file to
-                        the <filename>meta-mylayer/recipes-kernel/linux/linux-yocto</filename>
-                        directory.</para></listitem>
-                </orderedlist>
-            </para>
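-
-            <para>
-                At this point, assuming you followed the previous steps, the
-                layer should contain roughly the following files:
-                <literallayout class='monospaced'>
-     meta-mylayer/conf/layer.conf
-     meta-mylayer/recipes-kernel/linux/linux-yocto_3.4.bbappend
-     meta-mylayer/recipes-kernel/linux/linux-yocto/0001-calibrate-Add-printk-example.patch
-                </literallayout>
-            </para>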
-        </section>
-
-        <section id='set-up-for-the-build'>
-            <title>Set Up for the Build</title>
-
-            <para>
-                Do the following to make sure the build parameters are set up for the example.
-                Once you set up these build parameters, they do not have to change unless you
-                change the target architecture of the machine you are building:
-                <itemizedlist>
-                    <listitem><para><emphasis>Build for the correct target architecture:</emphasis> Your
-                        selected <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
-                        definition within the <filename>local.conf</filename> file in the
-                        <link linkend='build-directory'>Build Directory</link>
-                        specifies the target architecture used when building the Linux kernel.
-                        By default, <filename>MACHINE</filename> is set to
-                        <filename>qemux86</filename>, which specifies a 32-bit
-                        <trademark class='registered'>Intel</trademark> Architecture
-                        target machine suitable for the QEMU emulator.</para></listitem>
-                    <listitem><para><emphasis>Identify your <filename>meta-mylayer</filename>
-                        layer:</emphasis> The
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-BBLAYERS'><filename>BBLAYERS</filename></ulink>
-                        variable in the
-                        <filename>bblayers.conf</filename> file found in the
-                        <filename>poky/build/conf</filename> directory needs to have the path to your local
-                        <filename>meta-mylayer</filename> layer.
-                        By default, the <filename>BBLAYERS</filename> variable contains paths to
-                        <filename>meta</filename>, <filename>meta-poky</filename>, and
-                        <filename>meta-yocto-bsp</filename> in the
-                        <filename>poky</filename> Git repository.
-                        Add the path to your <filename>meta-mylayer</filename> location:
-                        <literallayout class='monospaced'>
-     BBLAYERS ?= " \
-       $HOME/poky/meta \
-       $HOME/poky/meta-poky \
-       $HOME/poky/meta-yocto-bsp \
-       $HOME/poky/meta-mylayer \
-       "
-                        </literallayout></para></listitem>
-                </itemizedlist>
-            </para>
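-
-            <para>
-                After making these changes, an optional sanity check
-                (illustrative only) is to confirm from within the Build
-                Directory that BitBake sees your layer and machine:
-                <literallayout class='monospaced'>
-     $ cd ~/poky/build
-     $ bitbake-layers show-layers
-     $ grep "^MACHINE" conf/local.conf
-                </literallayout>
-            </para>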
-        </section>
-
-        <section id='build-the-modified-qemu-kernel-image'>
-            <title>Build the Modified QEMU Kernel Image</title>
-
-            <para>
-                The following steps build your modified kernel image:
-                <orderedlist>
-                    <listitem><para><emphasis>Be sure your build environment is initialized</emphasis>:
-                        Your environment should be set up since you previously sourced
-                        the
-                        <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
-                        script.
-                        If it is not, source the script again from <filename>poky</filename>.
-                        <literallayout class='monospaced'>
-     $ cd ~/poky
-     $ source &OE_INIT_FILE;
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>Clean up</emphasis>:
-                        Be sure to clean out the shared state by using
-                        BitBake to run the
-                        <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-cleansstate'><filename>do_cleansstate</filename></ulink>
-                        task from within the Build Directory as follows:
-                        <literallayout class='monospaced'>
-     $ bitbake -c cleansstate linux-yocto
-                        </literallayout></para>
-                        <para>
-                           <note>
-                               Never remove any files by hand from the
-                               <filename>tmp/deploy</filename>
-                               directory inside the
-                               <link linkend='build-directory'>Build Directory</link>.
-                               Always use the various BitBake clean tasks to
-                               clear out previous build artifacts.
-                               For information on the clean tasks, see the
-                               "<ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-clean'><filename>do_clean</filename></ulink>",
-                               "<ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-cleanall'><filename>do_cleanall</filename></ulink>",
-                               and
-                               "<ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-cleansstate'><filename>do_cleansstate</filename></ulink>"
-                               sections all in the Yocto Project Reference
-                               Manual.
-                           </note>
-                        </para></listitem>
-                    <listitem><para><emphasis>Build the image</emphasis>:
-                        Next, build the kernel image using this command:
-                        <literallayout class='monospaced'>
-     $ bitbake -k linux-yocto
-                        </literallayout></para></listitem>
-                </orderedlist>
-            </para>
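-
-            <para>
-                The next section boots the modified kernel as part of an
-                image under QEMU.
-                If you do not already have a filesystem image built for
-                <filename>qemux86</filename>, you can build one now.
-                The image name below is only an example:
-                <literallayout class='monospaced'>
-     $ bitbake core-image-minimal
-                </literallayout>
-            </para>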
-        </section>
-
-        <section id='boot-the-image-and-verify-your-changes'>
-            <title>Boot the Image and Verify Your Changes</title>
-
-            <para>
-                These steps boot the image and allow you to see the changes:
-                <orderedlist>
-                    <listitem><para><emphasis>Boot the image</emphasis>:
-                        Boot the modified image in the QEMU emulator
-                        using this command:
-                        <literallayout class='monospaced'>
-     $ runqemu qemux86
-                        </literallayout></para></listitem>
-                    <listitem><para><emphasis>Verify the changes</emphasis>:
-                        Log into the machine using <filename>root</filename> with no password and then
-                        use the following shell command to scroll through the console's boot output.
-                        <literallayout class='monospaced'>
-     # dmesg | less
-                        </literallayout>
-                        You should see the results of your <filename>printk</filename> statements
-                        as part of the output.</para></listitem>
-                </orderedlist>
-            </para>
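-
-            <para>
-                If everything worked, the banner added by the patch appears
-                in the boot output similar to the following (surrounding
-                messages and any timestamps vary):
-                <literallayout class='monospaced'>
-     *************************************
-     *                                   *
-     *        HELLO YOCTO KERNEL         *
-     *                                   *
-     *************************************
-                </literallayout>
-            </para>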
-        </section>
     </section>
 
     <section id='making-images-more-secure'>
@@ -6812,7 +5951,7 @@
                 The security flags are in the
                 <filename>meta/conf/distro/include/security_flags.inc</filename>
                 file in your
-                <link linkend='source-directory'>Source Directory</link>
+                <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
                 (e.g. <filename>poky</filename>).
                 <note>
                     Depending on the recipe, certain security flags are enabled
@@ -6932,8 +6071,8 @@
         <para>
             When you build an image using the Yocto Project and
             do not alter any distribution
-            <link linkend='metadata'>Metadata</link>, you are creating a
-            Poky distribution.
+            <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink>,
+            you are creating a Poky distribution.
             If you wish to gain more control over package alternative
             selections, compile-time options, and other low-level
             configurations, you can create your own distribution.
@@ -6956,7 +6095,7 @@
                     configuration file makes it easier to reproduce the same
                     build configuration when using multiple build machines.
                     See the
-                    "<link linkend='creating-a-general-layer-using-the-yocto-layer-script'>Creating a General Layer Using the yocto-layer Script</link>"
+                    "<link linkend='creating-a-general-layer-using-the-bitbake-layers-script'>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</link>"
                     section for information on how to quickly set up a layer.
                     </para></listitem>
                 <listitem><para><emphasis>Create the distribution configuration file:</emphasis>
@@ -7019,7 +6158,7 @@
                     previous bulleted item.</para></listitem>
                 <listitem><para><emphasis>Point to Your distribution configuration file:</emphasis>
                     In your <filename>local.conf</filename> file in the
-                    <link linkend='build-directory'>Build Directory</link>,
+                    <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
                     set your
                     <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO'><filename>DISTRO</filename></ulink>
                     variable to point to your distribution's configuration file.
@@ -7044,7 +6183,7 @@
                             on how to add recipes to your layer, see the
                             "<link linkend='creating-your-own-layer'>Creating Your Own Layer</link>"
                             and
-                            "<link linkend='best-practices-to-follow-when-creating-layers'>Best Practices to Follow When Creating Layers</link>"
+                            "<link linkend='best-practices-to-follow-when-creating-layers'>Following Best Practices When Creating Layers</link>"
                             sections.</para></listitem>
                         <listitem><para>Add any image recipes that are specific
                             to your distribution.</para></listitem>
@@ -7079,7 +6218,7 @@
             <filename>TEMPLATECONF</filename> to locate the directory
             from which it gathers configuration information that ultimately
             ends up in the
-            <link linkend='build-directory'>Build Directory's</link>
+            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
             <filename>conf</filename> directory.
             By default, <filename>TEMPLATECONF</filename> is set as
             follows in the <filename>poky</filename> repository:
@@ -7106,7 +6245,7 @@
             The <filename>TEMPLATECONF</filename> variable is set in the
             <filename>.templateconf</filename> file, which is in the
             top-level
-            <link linkend='source-directory'>Source Directory</link>
+            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
             folder (e.g. <filename>poky</filename>).
             Edit the <filename>.templateconf</filename> so that it can locate
             your directory.
@@ -7137,12 +6276,10 @@
             Aside from the <filename>*.sample</filename> configuration files,
             the <filename>conf-notes.txt</filename> also resides in the
             default <filename>meta-poky/conf</filename> directory.
-            The scripts that set up the build environment
+            The script that sets up the build environment
             (i.e.
-            <ulink url="&YOCTO_DOCS_REF_URL;#structure-core-script"><filename>&OE_INIT_FILE;</filename></ulink>
-            and
-            <ulink url="&YOCTO_DOCS_REF_URL;#structure-memres-core-script"><filename>oe-init-build-env-memres</filename></ulink>)
-            use this file to display BitBake targets as part of the script
+            <ulink url="&YOCTO_DOCS_REF_URL;#structure-core-script"><filename>&OE_INIT_FILE;</filename></ulink>)
+            uses this file to display BitBake targets as part of the script
             output.
             Customizing this <filename>conf-notes.txt</filename> file is a
             good way to make sure your list of custom targets appears
@@ -7296,7 +6433,7 @@
             <para>
                 To help you see where you currently are with kernel and root
                 filesystem sizes, you can use two tools found in the
-                <link linkend='source-directory'>Source Directory</link> in
+                <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink> in
                 the <filename>scripts/tiny/</filename> directory:
                 <itemizedlist>
                     <listitem><para><filename>ksize.py</filename>: Reports
@@ -7328,10 +6465,10 @@
                         <filename>scripts/kconfig</filename> directory.</para>
                         <para>For more information on configuration fragments,
                         see the
-                        "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#generating-configuration-files'>Generating Configuration Files</ulink>"
-                        section of the Yocto Project Linux Kernel Development
-                        Manual and the "<link linkend='creating-config-fragments'>Creating Configuration Fragments</link>"
-                        section, which is in this manual.</para></listitem>
+                        "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#creating-config-fragments'>Creating Configuration Fragments</ulink>"
+                        section in the Yocto Project Linux Kernel Development
+                        Manual.
+                        </para></listitem>
                     <listitem><para><filename>bitbake -u taskexp -g <replaceable>bitbake_target</replaceable></filename>:
                         Using the BitBake command with these options brings up
                         a Dependency Explorer from which you can view file
@@ -7957,7 +7094,7 @@
 
                 <para>
                     As mentioned, attempting to maintain revision numbers in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink>
                     is error prone and inaccurate, and it causes problems for people
                     submitting recipes.
                     Conversely, the PR Service automatically generates
@@ -8032,7 +7169,7 @@
                     setting
                     <ulink url='&YOCTO_DOCS_REF_URL;#var-PRSERV_HOST'><filename>PRSERV_HOST</filename></ulink>
                     in your <filename>local.conf</filename> file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>:
+                    <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>:
                     <literallayout class='monospaced'>
      PRSERV_HOST = "localhost:0"
                     </literallayout>
@@ -8333,7 +7470,7 @@
                     <filename>connman.inc</filename> file in the
                     <filename>meta/recipes-connectivity/connman/</filename>
                     directory of the <filename>poky</filename>
-                    <link linkend='yocto-project-repositories'>source repository</link>.
+                    <ulink url='&YOCTO_DOCS_REF_URL;#yocto-project-repositories'>source repository</ulink>.
                     You can also find examples in
                     <filename>meta/classes/kernel.bbclass</filename>.
                  </para>
@@ -8568,7 +7705,7 @@
                         <listitem><para>
                             Open the <filename>local.conf</filename> file
                             inside your
-                            <link linkend='build-directory'>Build Directory</link>
+                            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
                             (e.g. <filename>~/poky/build/conf/local.conf</filename>).
                             </para></listitem>
                         <listitem><para>
@@ -8738,7 +7875,7 @@
                         file with the following content:
                         <literallayout class='monospaced'>
      [oe-packages]
-     baseurl="http://my.server/rpm/i586 http://my.server/rpm/qemux86 http://my.server/rpm/all"
+     baseurl=http://my.server/rpm/i586 http://my.server/rpm/qemux86 http://my.server/rpm/all
                         </literallayout>
                         From the target machine, fetch the repository:
                         <literallayout class='monospaced'>
@@ -8920,13 +8057,10 @@
 
                 <para>
                     In addition to being able to sign RPM packages, you can
-                    also enable the OpenEmbedded build system to be able to
-                    handle previously signed package feeds for IPK
-                    packages.
-                    <note>
-                        The OpenEmbedded build system does not currently
-                        support signed DPKG or RPM package feeds.
-                    </note>
+                    also enable signed package feeds for IPK and RPM packages.
+                </para>
+
+                <para>
                     The steps you need to take to enable signed package feeds
                     are similar to the steps used to sign RPM packages.
                     You must define the following in your
@@ -9026,7 +8160,7 @@
                     and <ulink url='&YOCTO_DOCS_REF_URL;#var-EXTRA_IMAGE_FEATURES'><filename>EXTRA_IMAGE_FEATURES</filename></ulink>
                     variables to your <filename>local.conf</filename> file,
                     which is found in the
-                    <link linkend='build-directory'>Build Directory</link>:
+                    <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>:
                     <literallayout class='monospaced'>
      DISTRO_FEATURES_append = " ptest"
      EXTRA_IMAGE_FEATURES += "ptest-pkgs"
@@ -9262,8 +8396,8 @@
 
         <para>
             By default, the OpenEmbedded build system uses the
-            <link linkend='build-directory'>Build Directory</link> when
-            building source code.
+            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
+            when building source code.
             The build process involves fetching the source files, unpacking
             them, and then patching them if necessary before the build takes
             place.
@@ -9627,7 +8761,7 @@
                 Using either of the following statements in your
                 image recipe or from within the
                 <filename>local.conf</filename> file found in the
-                <link linkend='build-directory'>Build Directory</link>
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
                 causes the build system to create a read-only root filesystem:
                 <literallayout class='monospaced'>
      IMAGE_FEATURES = "read-only-rootfs"
@@ -10221,7 +9355,7 @@
                         <ulink url='&YOCTO_DOCS_REF_URL;#var-TEST_IMAGE'><filename>TEST_IMAGE</filename></ulink>
                         variable to "1" in your <filename>local.conf</filename>
                         file in the
-                        <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>:
+                        <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>:
                         <literallayout class='monospaced'>
      TEST_IMAGE = "1"
                         </literallayout>
@@ -10249,7 +9383,7 @@
             <para>
                 All test files reside in
                 <filename>meta/lib/oeqa/runtime</filename> in the
-                <link linkend='source-directory'>Source Directory</link>.
+                <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
                 A test name maps directly to a Python module.
                 Each test module may contain a number of individual tests.
                 Tests are usually grouped together by the area
@@ -10353,7 +9487,8 @@
      $ bitbake <replaceable>image</replaceable> -c testexport
                 </literallayout>
                 Exporting the tests places them in the
-                <link linkend='build-directory'>Build Directory</link> in
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
+                in
                 <filename>tmp/testexport/</filename><replaceable>image</replaceable>,
                 which is controlled by the
                 <filename>TEST_EXPORT_DIR</filename> variable.
@@ -10893,182 +10028,6 @@
         </para>
     </section>
 
-<!--
-        <section id='platdev-gdb-remotedebug-setup'>
-            <title>Set Up the Cross-Development Debugging Environment</title>
-
-            <para>
-                Before you can initiate a remote debugging session, you need
-                to be sure you have set up the cross-development environment,
-                toolchain, and sysroot.
-                The <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-intro'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>
-                describes this process.
-            </para>
-        </section>
-
-        <section id="platdev-gdb-remotedebug-launch-gdbserver">
-            <title>Launch gdbserver on the Target</title>
-
-            <para>
-                Make sure gdbserver is installed on the target.
-                If it is not, install the package
-                <filename>gdbserver</filename>, which needs the
-                <filename>libthread-db1</filename> package.
-            </para>
-
-            <para>
-                Here is an example that launches gdbserver on the target
-                (enter the command in a shell on the target) in order to
-                "debug" a binary named <filename>helloworld</filename>:
-                <literallayout class='monospaced'>
-     $ gdbserver localhost:2345 /usr/bin/helloworld
-                </literallayout>
-                gdbserver should now be listening on port 2345 for debugging
-                commands coming from a remote GDB process that is running on
-                the host computer.
-                Communication between gdbserver and the host GDB is done
-                using TCP.
-                To use other communication protocols, please refer to the
-                <ulink url='http://www.gnu.org/software/gdb/'>Gdbserver documentation</ulink>.
-            </para>
-        </section>
-
-        <section id="platdev-gdb-remotedebug-launch-gdb">
-            <title>Launch GDB on the Host Computer</title>
-
-            <para>
-                Running GDB on the host computer takes a number of stages, which
-                this section describes.
-            </para>
-
-            <section id="platdev-gdb-remotedebug-launch-gdb-buildcross">
-                <title>Build the Cross-GDB Package</title>
-                <para>
-                    A suitable GDB cross-binary is required that runs on your
-                    host computer but also knows about the ABI of the
-                    remote target.
-                    You can get this binary from the
-                    <link linkend='cross-development-toolchain'>Cross-Development Toolchain</link>.
-                    Here is an example where the toolchain has been installed
-                    in the default directory
-                    <filename>/opt/poky/&DISTRO;</filename>:
-                    <literallayout class='monospaced'>
-     /opt/poky/&DISTRO;/sysroots/i686-pokysdk-linux/usr/bin/armv7a-vfp-neon-poky-linux-gnueabi/arm-poky-linux-gnueabi-gdb
-                    </literallayout>
-                    where <filename>arm</filename> is the target architecture
-                    and <filename>linux-gnueabi</filename> is the target ABI.
-                </para>
-
-                <para>
-                    Alternatively, you can use BitBake to build the
-                    <filename>gdb-cross</filename> binary.
-                    Here is an example:
-                    <literallayout class='monospaced'>
-     $ bitbake gdb-cross
-                    </literallayout>
-                    Once the binary is built, you can find it here:
-                    <literallayout class='monospaced'>
-     tmp/sysroots/<replaceable>host-arch</replaceable>/usr/bin/<replaceable>target-platform</replaceable>/<replaceable>target-abi</replaceable>-gdb
-                    </literallayout>
-                </para>
-            </section>
-
-            <section id='create-the-gdb-initialization-file'>
-                <title>Create the GDB Initialization File and Point to Your Root Filesystem</title>
-
-                <para>
-                    Aside from the GDB cross-binary, you also need a GDB
-                    initialization file in the same top directory in which
-                    your binary resides.
-                    When you start GDB on your host development system, GDB
-                    finds this initialization file and executes all the
-                    commands within.
-                    For information on the <filename>.gdbinit</filename>, see
-                    "<ulink url='http://sourceware.org/gdb/onlinedocs/gdb/'>Debugging with GDB</ulink>",
-                    which is maintained by
-                    <ulink url='http://www.sourceware.org'>sourceware.org</ulink>.
-                </para>
-
-                <para>
-                    You need to add a statement in the
-                    <filename>~/.gdbinit</filename> file that points to your
-                    root filesystem.
-                    Here is an example that points to the root filesystem for
-                    an ARM-based target device:
-                    <literallayout class='monospaced'>
-     set sysroot ~/sysroot_arm
-                    </literallayout>
-                </para>
-            </section>
-
-            <section id="platdev-gdb-remotedebug-launch-gdb-launchhost">
-                <title>Launch the Host GDB</title>
-
-                <para>
-                    Before launching the host GDB, you need to be sure
-                    you have sourced the cross-debugging environment script,
-                    which, if you installed the toolchain in the default
-                    location, resides in <filename>/opt/poky/&DISTRO;</filename>
-                    and begins with the string "environment-setup".
-                    For more information, see the
-                    <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-manual'>Yocto Project Software Development Kit (SDK) Developer's
-                    Guide</ulink>.
-                </para>
-
-                <para>
-                    Finally, switch to the directory where the binary resides
-                    and run the <filename>cross-gdb</filename> binary.
-                    Provide the binary file you are going to debug.
-                    For example, the following command continues with the
-                    example used in the previous section by loading
-                    the <filename>helloworld</filename> binary as well as the
-                    debugging information:
-                    <literallayout class='monospaced'>
-     $ arm-poky-linux-gnueabi-gdb helloworld
-                    </literallayout>
-                    The commands in your <filename>.gdbinit</filename> execute
-                    and the GDB prompt appears.
-                </para>
-            </section>
-        </section>
-
-        <section id='platdev-gdb-connect-to-the-remote-gdb-server'>
-            <title>Connect to the Remote GDB Server</title>
-
-            <para>
-                From GDB running on the host, you need to connect to
-                gdbserver, which is running on the target.
-                You need to specify the target's address and port.
-                Here is the command continuing with the example:
-                <literallayout class='monospaced'>
-     target remote 192.168.7.2:2345
-                </literallayout>
-            </para>
-        </section>
-
-        <section id="platdev-gdb-remotedebug-launch-gdb-using">
-            <title>Use the Debugger</title>
-
-            <para>
-                You can now proceed with debugging as normal - as if you were debugging
-                on the local machine.
-                For example, to instruct GDB to break in the "main" function and then
-                continue with execution of the inferior binary, use the following commands
-                from within GDB:
-                <literallayout class='monospaced'>
-     (gdb) break main
-     (gdb) continue
-                </literallayout>
-            </para>
-
-            <para>
-                For more information about using GDB, see the project's online documentation at
-                <ulink url="http://sourceware.org/gdb/download/onlinedocs/"/>.
-            </para>
-        </section>
-    </section>
--->
-
     <section id='debugging-with-the-gnu-project-debugger-gdb-on-the-target'>
         <title>Debugging with the GNU Project Debugger (GDB) on the Target</title>
 
@@ -11347,7 +10306,7 @@
                 Once the patch file exists, you need to add it back to the
                 originating recipe folder.
                 Here is an example assuming a top-level
-                <link linkend='source-directory'>Source Directory</link>
+                <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
                 named <filename>poky</filename>:
                 <literallayout class='monospaced'>
      $ cp patches/parallelmake.patch poky/meta/recipes-connectivity/neard/neard
@@ -11407,7 +10366,7 @@
                 need to submit the fix for the recipe in OE-Core and upstream
                 so that the problem is taken care of at its source.
                 See the
-                "<link linkend='how-to-submit-a-change'>How to Submit a Change</link>"
+                "<link linkend='how-to-submit-a-change'>Submitting a Change to the Yocto Project</link>"
                 section for more information.
             </para>
         </section>
@@ -11510,7 +10469,7 @@
                 release just the source as a tarball.
                 You can do this by adding the following to the
                 <filename>local.conf</filename> file found in the
-                <link linkend='build-directory'>Build Directory</link>:
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>:
                 <literallayout class='monospaced'>
      INHERIT += "archiver"
      ARCHIVER_MODE[src] = "original"
@@ -11678,8 +10637,8 @@
        "
                 </literallayout>
                 Creating and providing an archive of the
-                <link linkend='metadata'>Metadata</link> layers
-                (recipes, configuration files, and so forth)
+                <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink>
+                layers (recipes, configuration files, and so forth)
                 enables you to meet your
                 requirements to include the scripts to control compilation
                 as well as any modifications to the original source.
@@ -11697,7 +10656,7 @@
             browse errors, view statistics, and query for errors.
             The tool works using a client-server system where the client
             portion is integrated with the installed Yocto Project
-            <link linkend='source-directory'>Source Directory</link>
+            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
             (e.g. <filename>poky</filename>).
             The server receives the information collected and saves it in a
             database.
@@ -11725,7 +10684,7 @@
                 <ulink url='&YOCTO_DOCS_REF_URL;#ref-classes-report-error'><filename>report-error</filename></ulink>
                 class by adding the following statement to the end of
                 your <filename>local.conf</filename> file in your
-                <link linkend='build-directory'>Build Directory</link>.
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
                 <literallayout class='monospaced'>
      INHERIT += "report-error"
                 </literallayout>
@@ -11784,7 +10743,7 @@
                 To disable the error reporting feature, simply remove or comment
                 out the following statement from the end of your
                 <filename>local.conf</filename> file in your
-                <link linkend='build-directory'>Build Directory</link>.
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
                 <literallayout class='monospaced'>
      INHERIT += "report-error"
                 </literallayout>
diff --git a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-intro.xml b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-intro.xml
index 49148ab..47c8006 100644
--- a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-intro.xml
+++ b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-intro.xml
@@ -4,102 +4,73 @@
 
 <chapter id='dev-manual-intro'>
 
-<title>The Yocto Project Development Manual</title>
-    <section id='dev-intro'>
-        <title>Introduction</title>
+<title>The Yocto Project Development Tasks Manual</title>
+    <section id='dev-welcome'>
+        <title>Welcome</title>
 
         <para>
-            Welcome to the Yocto Project Development Manual!
-            This manual provides information on how to use the Yocto Project to
-            develop embedded Linux images and user-space applications that
-            run on targeted devices.
-            The manual provides an overview of image, kernel, and
-            user-space application development using the Yocto Project.
-            Because much of the information in this manual is general, it
-            contains many references to other sources where you can find more
-            detail.
-            For example, you can find detailed information on Git, repositories,
-            and open source in general in many places on the Internet.
-            Another example specific to the Yocto Project is how to quickly
-            set up your host development system and build an image, which you
-            find in the
-            <ulink url='&YOCTO_DOCS_QS_URL;'>Yocto Project Quick Start</ulink>.
+            Welcome to the Yocto Project Development Tasks Manual!
+            This manual provides the procedures necessary for developing
+            in the Yocto Project environment (i.e. developing embedded Linux
+            images and user-space applications that run on targeted devices).
+            The manual groups related procedures into higher-level sections.
+            Procedures can consist of high-level steps or low-level steps
+            depending on the topic.
+            You can find conceptual information related to a procedure by
+            following appropriate links to the Yocto Project Reference
+            Manual.
         </para>
 
         <para>
-            The Yocto Project Development Manual does, however, provide
-            guidance and examples on how to change the kernel source code,
-            reconfigure the kernel, and develop an application using
-            <filename>devtool</filename>.
-        </para>
-
-        <note>
-            By default, using the Yocto Project creates a Poky distribution.
-            However, you can create your own distribution by providing key
-            <link linkend='metadata'>Metadata</link>.
-            A good example is Angstrom, which has had a distribution
-            based on the Yocto Project since its inception.
-            Other examples include commercial distributions like
-            <ulink url='https://www.yoctoproject.org/organization/wind-river-systems'>Wind River Linux</ulink>,
-            <ulink url='https://www.yoctoproject.org/organization/mentor-graphics'>Mentor Embedded Linux</ulink>,
-            <ulink url='https://www.yoctoproject.org/organization/enea-ab'>ENEA Linux</ulink>
-            and <ulink url='https://www.yoctoproject.org/ecosystem/member-organizations'>others</ulink>.
-            See the "<link linkend='creating-your-own-distribution'>Creating Your Own Distribution</link>"
-            section for more information.
-        </note>
-    </section>
-
-    <section id='what-this-manual-provides'>
-        <title>What This Manual Provides</title>
-
-        <para>
             The following list describes what you can get from this manual:
             <itemizedlist>
-                <listitem><para>Information that lets you get set
-                    up to develop using the Yocto Project.</para></listitem>
-                <listitem><para>Information to help developers who are new to
-                    the open source environment and to the distributed revision
-                    control system Git, which the Yocto Project uses.
+                <listitem><para>
+                    <emphasis>Setup Procedures:</emphasis>
+                    Procedures that show you how to set
+                    up a Yocto Project Development environment and how
+                    to accomplish the change workflow through logging
+                    defects and submitting changes.
                     </para></listitem>
-                <listitem><para>An understanding of common end-to-end
-                    development models and tasks.</para></listitem>
-                <listitem><para>Information about common development tasks
-                    generally used during image development for
-                    embedded devices.
+                <listitem><para>
+                    <emphasis>Emulation Procedures:</emphasis>
+                    Procedures that show you how to use the
+                    Yocto Project integrated QuickEMUlator (QEMU), which lets
+                    you simulate running an image you have built with the
+                    OpenEmbedded build system as if it were on real hardware.
                     </para></listitem>
-                <listitem><para>Information on using the Yocto Project
-                    integration of the QuickEMUlator (QEMU), which lets you
-                    simulate running on hardware an image you have built using
-                    the OpenEmbedded build system.
+                <listitem><para>
+                    <emphasis>Common Procedures:</emphasis>
+                    Procedures related to "everyday" tasks you perform while
+                    developing images and applications using the Yocto
+                    Project.
                     </para></listitem>
-                <listitem><para>Many references to other sources of related
-                    information.</para></listitem>
             </itemizedlist>
         </para>
-    </section>
-
-    <section id='what-this-manual-does-not-provide'>
-        <title>What this Manual Does Not Provide</title>
 
         <para>
             This manual will not give you the following:
             <itemizedlist>
-                <listitem><para><emphasis>Step-by-step instructions when those instructions exist in other Yocto
-                    Project documentation:</emphasis>
+                <listitem><para>
+                    <emphasis>Redundant Step-by-step Instructions:</emphasis>
                     For example, the
-                    <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>
+                    <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</ulink>
                     manual contains detailed instructions on how to install an
                     SDK, which is used to develop applications for target
                     hardware.
                     </para></listitem>
-                <listitem><para><emphasis>Reference material:</emphasis>
+                <listitem><para>
+                    <emphasis>Reference or Conceptual Material:</emphasis>
                     This type of material resides in an appropriate reference manual.
                     For example, system variables are documented in the
                     <ulink url='&YOCTO_DOCS_REF_URL;'>Yocto Project Reference Manual</ulink>.
                     </para></listitem>
-                <listitem><para><emphasis>Detailed public information that is not specific to the Yocto Project:</emphasis>
-                    For example, exhaustive information on how to use Git is covered better through the
-                    Internet than in this manual.
+                <listitem><para>
+                    <emphasis>Detailed Public Information Not Specific to the
+                    Yocto Project:</emphasis>
+                    For example, exhaustive information on how to use the
+                    Source Control Manager Git is better covered with Internet
+                    searches and official Git Documentation than through the
+                    Yocto Project documentation.
                     </para></listitem>
             </itemizedlist>
         </para>
@@ -109,144 +80,23 @@
         <title>Other Information</title>
 
         <para>
-            Because this manual presents overview information for many different
+            Because this manual presents information for many different
             topics, supplemental information is recommended for full
             comprehension.
-            The following list presents other sources of information you might find helpful:
-            <itemizedlist>
-                <listitem><para><emphasis><ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>:
-                    </emphasis> The home page for the Yocto Project provides lots of information on the project
-                    as well as links to software and documentation.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_DOCS_QS_URL;'>Yocto Project Quick Start</ulink>:</emphasis>
-                    This short document lets you get started
-                    with the Yocto Project and quickly begin building an image.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_DOCS_REF_URL;'>Yocto Project Reference Manual</ulink>:</emphasis>
-                    This manual is a reference
-                    guide to the OpenEmbedded build system, which is based on BitBake.
-                    The build system is sometimes referred to as "Poky".
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>:</emphasis>
-                    This guide provides information that lets you get going
-                    with the standard or extensible SDK.
-                    An SDK, with its cross-development toolchains, allows you
-                    to develop projects inside or outside of the Yocto Project
-                    environment.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_DOCS_BSP_URL;'>Yocto Project Board Support Package (BSP) Developer's Guide</ulink>:</emphasis>
-                    This guide defines the structure for BSP components.
-                    Having a commonly understood structure encourages standardization.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;'>Yocto Project Linux Kernel Development Manual</ulink>:</emphasis>
-                    This manual describes how to work with Linux Yocto kernels as well as provides a bit
-                    of conceptual information on the construction of the Yocto Linux kernel tree.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_DOCS_PROF_URL;'>Yocto Project Profiling and Tracing Manual</ulink>:</emphasis>
-                    This manual presents a set of common and generally useful tracing and
-                    profiling schemes along with their applications (as appropriate) to each tool.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_DOCS_TOAST_URL;'>Toaster User Manual</ulink>:</emphasis>
-                    This manual introduces and describes how to set up and use
-                    Toaster, which is a web interface to the Yocto Project's
-                    <link linkend='build-system-term'>OpenEmbedded Build System</link>.
-                    </para></listitem>
-<!--
-                <listitem><para><emphasis>
-                    <ulink url='http://www.youtube.com/watch?v=3ZlOu-gLsh0'>
-                    Eclipse IDE Yocto Plug-in</ulink>:</emphasis>
-                    A step-by-step instructional video that
-                    demonstrates how an application developer uses Yocto Plug-in features within
-                    the Eclipse IDE.
-                    </para></listitem>
--->
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-appendix-latest-yp-eclipse-plug-in'>Eclipse IDE Yocto Plug-in</ulink>:</emphasis>
-                    Instructions that demonstrate how an application developer
-                    uses the Eclipse Yocto Project Plug-in feature within
-                    the Eclipse IDE.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_WIKI_URL;/wiki/FAQ'>FAQ</ulink>:</emphasis>
-                    A list of commonly asked questions and their answers.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_RELEASE_NOTES;'>Release Notes</ulink>:</emphasis>
-                    Features, updates and known issues for the current
-                    release of the Yocto Project.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_HOME_URL;/tools-resources/projects/toaster'>Toaster</ulink>:</emphasis>
-                    An Application Programming Interface (API) and web-based
-                    interface to the OpenEmbedded build system, which uses
-                    BitBake, that reports build information.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_HOME_URL;/tools-resources/projects/build-appliance'>Build Appliance</ulink>:</emphasis>
-                    A virtual machine that
-                    enables you to build and boot a custom embedded Linux image
-                    with the Yocto Project using a non-Linux development system.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_BUGZILLA_URL;'>Bugzilla</ulink>:</emphasis>
-                    The bug tracking application the Yocto Project uses.
-                    If you find problems with the Yocto Project, you should report them using this
-                    application.
-                    </para></listitem>
-                <listitem><para><emphasis>Yocto Project Mailing Lists:</emphasis>
-                    To subscribe to the Yocto Project mailing
-                    lists, click on the following URLs and follow the instructions:
-                    <itemizedlist>
-                        <listitem><para><ulink url='&YOCTO_LISTS_URL;/listinfo/yocto'></ulink>
-                            for a Yocto Project Discussions mailing list.
-                            </para></listitem>
-                        <listitem><para><ulink url='&YOCTO_LISTS_URL;/listinfo/poky'></ulink>
-                            for a Yocto Project Discussions mailing list about the
-                            OpenEmbedded build system (Poky).
-                            </para></listitem>
-                        <listitem><para><ulink url='&YOCTO_LISTS_URL;/listinfo/yocto-announce'></ulink>
-                            for a mailing list to receive official Yocto Project announcements
-                            as well as Yocto Project milestones.
-                            </para></listitem>
-                        <listitem><para><ulink url='&YOCTO_LISTS_URL;/listinfo'></ulink>
-                            for a listing of all public mailing lists on
-                            <filename>lists.yoctoproject.org</filename>.
-                            </para></listitem>
-                    </itemizedlist></para></listitem>
-                <listitem><para><emphasis>Internet Relay Chat (IRC):</emphasis>
-                    Two IRC channels on freenode are available
-                    for Yocto Project and Poky discussions: <filename>#yocto</filename> and
-                    <filename>#poky</filename>, respectively.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&OE_HOME_URL;'>OpenEmbedded</ulink>:</emphasis>
-                    The build system used by the Yocto Project.
-                    This project is the upstream, generic, embedded distribution
-                    from which the Yocto Project derives its build system (Poky)
-                    and to which it contributes.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='http://www.openembedded.org/wiki/BitBake'>BitBake</ulink>:</emphasis>
-                    The tool used by the OpenEmbedded build system
-                    to process project metadata.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual:</ulink></emphasis>
-                    A comprehensive guide to the BitBake tool.
-                    If you want information on BitBake, see this manual.
-                    </para></listitem>
-                <listitem><para><emphasis>
-                    <ulink url='http://wiki.qemu.org/Index.html'>Quick EMUlator (QEMU)</ulink>:</emphasis>
-                    An open-source machine emulator and virtualizer.
-                    </para></listitem>
-            </itemizedlist>
+            For introductory information on the Yocto Project, see the
+            <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>.
+            You can find an introduction to using the Yocto Project by working
+            through the
+            <ulink url='&YOCTO_DOCS_QS_URL;'>Yocto Project Quick Start</ulink>.
+        </para>
+
+        <para>
+            For a comprehensive list of links and other documentation, see the
+            "<ulink url='&YOCTO_DOCS_REF_URL;#resources-links-and-related-documentation'>Links and Related Documentation</ulink>"
+            section in the Yocto Project Reference Manual.
+        </para>
+
+        <para>
         </para>
     </section>
 </chapter>
diff --git a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-model.xml b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-model.xml
deleted file mode 100644
index 1008e11..0000000
--- a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-model.xml
+++ /dev/null
@@ -1,1654 +0,0 @@
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
-"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
-[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
-
-<chapter id='dev-manual-model'>
-
-<title>Common Development Models</title>
-
-<para>
-    Many development models exist for which you can use the Yocto Project.
-    This chapter overviews simple methods that use tools provided by the
-    Yocto Project:
-    <itemizedlist>
-        <listitem><para><emphasis>System Development:</emphasis>
-             System Development covers Board Support Package (BSP) development
-             and kernel modification or configuration.
-             For an example on how to create a BSP, see the
-             "<ulink url='&YOCTO_DOCS_BSP_URL;#creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a New BSP Layer Using the yocto-bsp Script</ulink>"
-             section in the Yocto Project Board Support Package (BSP)
-             Developer's Guide.
-             For more complete information on how to work with the kernel,
-             see the
-             <ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;'>Yocto Project Linux Kernel Development Manual</ulink>.
-             </para></listitem>
-         <listitem><para><emphasis>User Application Development:</emphasis>
-             User Application Development covers development of applications
-             that you intend to run on target hardware.
-             For information on how to set up your host development system for
-             user-space application development, see the
-             <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
-             For a simple example of user-space application development using
-             the <trademark class='trade'>Eclipse</trademark> IDE, see the
-             "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-developing-applications-using-eclipse'>Developing Applications Using <trademark class='trade'>Eclipse</trademark></ulink>" section.
-             </para></listitem>
-         <listitem><para><emphasis>Temporary Source Code Modification:</emphasis>
-             Direct modification of temporary source code is a convenient
-             development model to quickly iterate and develop towards a
-             solution.
-             Once you implement the solution, you should of course take
-             steps to get the changes upstream and applied in the affected
-             recipes.
-             </para></listitem>
-         <listitem><para><emphasis>Image Development using Toaster:</emphasis>
-             You can use <ulink url='&YOCTO_HOME_URL;/Tools-resources/projects/toaster'>Toaster</ulink>
-             to build custom operating system images within the build
-             environment.
-             Toaster provides an efficient interface to the OpenEmbedded build
-             system that allows you to start builds and examine build statistics.
-             </para></listitem>
-         <listitem><para><emphasis>Using a Development Shell:</emphasis>
-             You can use a
-             <link linkend='platdev-appdev-devshell'><filename>devshell</filename></link>
-             to efficiently debug commands or simply edit packages.
-             Working inside a development shell is a quick way to set up the
-             OpenEmbedded build environment to work on parts of a project
-             (see the brief example following this list).
-             </para></listitem>
-     </itemizedlist>
-</para>
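-
-<para>
-    As a minimal sketch of the development shell model, assuming you have
-    already initialized the build environment, you can enter a
-    <filename>devshell</filename> for a recipe.
-    The recipe name <filename>busybox</filename> here is only an example:
-    <literallayout class='monospaced'>
-     $ bitbake busybox -c devshell
-    </literallayout>
-    The command opens a new terminal with the recipe's build environment set
-    up and places you in the recipe's source directory.
-</para>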
-
-<section id='system-development-model'>
-    <title>System Development Workflow</title>
-
-    <para>
-        System development involves modification or creation of an image that you want to run on
-        a specific hardware target.
-        Usually, when you want to create an image that runs on embedded hardware, the image does
-        not require the same number of features that a full-fledged Linux distribution provides.
-        Thus, you can create a much smaller image that is designed to use only the
-        features for your particular hardware.
-    </para>
-
-    <para>
-        To help you understand how system development works in the Yocto Project, this section
-        covers two types of image development:  BSP creation and kernel modification or
-        configuration.
-    </para>
-
-    <section id='developing-a-board-support-package-bsp'>
-        <title>Developing a Board Support Package (BSP)</title>
-
-        <para>
-            A BSP is a collection of recipes that, when applied during a build, results in
-            an image that you can run on a particular board.
-            Thus, the package, when compiled into the new image, supports the operation of the board.
-        </para>
-
-        <note>
-            For a brief list of terms used when describing the development process in the Yocto Project,
-            see the "<link linkend='yocto-project-terms'>Yocto Project Terms</link>" section.
-        </note>
-
-        <para>
-            The remainder of this section presents the basic
-            steps used to create a BSP using the Yocto Project's
-            <ulink url='&YOCTO_DOCS_BSP_URL;#using-the-yocto-projects-bsp-tools'>BSP Tools</ulink>.
-            Although not required for BSP creation, the
-            <filename>meta-intel</filename> repository, which contains
-            many BSPs supported by the Yocto Project, is part of the example.
-        </para>
-
-        <para>
-            For an example that shows how to create a new layer using the tools, see the
-            "<ulink url='&YOCTO_DOCS_BSP_URL;#creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a New BSP Layer Using the yocto-bsp Script</ulink>"
-             section in the Yocto Project Board Support Package (BSP) Developer's Guide.
-        </para>
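-
-        <para>
-            If you want local copies of the project files and the supported
-            Intel BSP layers to examine as you work through this section, a
-            typical approach is to clone them with Git.
-            The branch name here is a placeholder; use the branch that
-            matches your Yocto Project release:
-            <literallayout class='monospaced'>
-     $ git clone -b <replaceable>release-branch</replaceable> git://git.yoctoproject.org/poky
-     $ git clone -b <replaceable>release-branch</replaceable> git://git.yoctoproject.org/meta-intel
-            </literallayout>
-        </para>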
-
-        <para>
-            The following illustration and list summarize the general BSP creation workflow.
-        </para>
-
-        <para>
-            <imagedata fileref="figures/bsp-dev-flow.png" width="6in" depth="7in" align="center" scalefit="1" />
-        </para>
-
-        <para>
-            <orderedlist>
-                <listitem><para><emphasis>Set up your host development system to support
-                    development using the Yocto Project</emphasis>:  See the
-                    "<ulink url='&YOCTO_DOCS_QS_URL;#the-linux-distro'>The Linux Distribution</ulink>"
-                    and the
-                    "<ulink url='&YOCTO_DOCS_QS_URL;#packages'>The Build Host Packages</ulink>" sections both
-                    in the Yocto Project Quick Start for requirements.</para></listitem>
-                <listitem><para><emphasis>Establish a local copy of the project files on your
-                    system</emphasis>:  You need this <link linkend='source-directory'>Source
-                    Directory</link> available on your host system.
-                    Having these files on your system gives you access to the build
-                    process and to the tools you need.
-                    For information on how to set up the Source Directory,
-                    see the
-                    "<link linkend='getting-setup'>Getting Set Up</link>" section.</para></listitem>
-                <listitem><para><emphasis>Establish the <filename>meta-intel</filename>
-                    repository on your system</emphasis>:  Having local copies
-                    of these supported BSP layers on your system gives you
-                    access to layers you might be able to build on or modify
-                    to create your BSP.
-                    For information on how to get these files, see the
-                    "<link linkend='getting-setup'>Getting Set Up</link>" section.</para></listitem>
-                <listitem><para><emphasis>Create your own BSP layer using the
-                    <ulink url='&YOCTO_DOCS_BSP_URL;#creating-a-new-bsp-layer-using-the-yocto-bsp-script'><filename>yocto-bsp</filename></ulink> script</emphasis>:
-                    Layers are ideal for
-                    isolating and storing work for a given piece of hardware.
-                    A layer is really just a location or area in which you place
-                    the recipes and configurations for your BSP.
-                    In fact, a BSP is, in itself, a special type of layer.
-                    The simplest way to create a new BSP layer that is compliant with the
-                    Yocto Project is to use the <filename>yocto-bsp</filename> script.
-                    For information about that script, see the
-                    "<ulink url='&YOCTO_DOCS_BSP_URL;#creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a New BSP Layer Using the yocto-bsp Script</ulink>"
-                    section in the Yocto Project Board Support (BSP) Developer's Guide.
-                    </para>
-
-                    <para>
-                        Another example that illustrates a layer
-                        is an application.
-                        Suppose you are creating an application that has
-                        library or other dependencies in order for it to
-                        compile and run.
-                        The layer, in this case, would be where all the
-                        recipes that define those dependencies are kept.
-                        The key point for a layer is that it is an isolated
-                        area that contains all the relevant information for
-                        the project that the OpenEmbedded build system knows
-                        about.
-                        For more information on layers, see the
-                        "<link linkend='understanding-and-creating-layers'>Understanding and Creating Layers</link>"
-                        section.
-                        For more information on BSP layers, see the
-                        "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP Layers</ulink>"
-                        section in the Yocto Project Board Support Package (BSP)
-                        Developer's Guide.
-                        <note>
-                            <para>
-                                Three BSPs exist that are part of the Yocto Project release:
-                                <filename>beaglebone</filename> (ARM),
-                                <filename>mpc8315e</filename> (PowerPC),
-                                and <filename>edgerouter</filename> (MIPS).
-                                The recipes and configurations for these three BSPs
-                                are located and dispersed within the
-                                <link linkend='source-directory'>Source Directory</link>.
-                            </para>
-
-                            <para>
-                                Three core Intel BSPs exist as part of the Yocto
-                                Project release in the
-                                <filename>meta-intel</filename> layer:
-                                <itemizedlist>
-                                    <listitem><para><filename>intel-core2-32</filename>,
-                                        which is a BSP optimized for the Core2 family of CPUs
-                                        as well as all CPUs prior to the Silvermont core.
-                                        </para></listitem>
-                                    <listitem><para><filename>intel-corei7-64</filename>,
-                                        which is a BSP optimized for Nehalem and later
-                                        Core and Xeon CPUs as well as Silvermont and later
-                                        Atom CPUs, such as the Baytrail SoCs.
-                                        </para></listitem>
-                                    <listitem><para><filename>intel-quark</filename>,
-                                        which is a BSP optimized for the Intel Galileo
-                                        gen1 &amp; gen2 development boards.
-                                        </para></listitem>
-                                </itemizedlist>
-                            </para>
-                        </note>
-                    </para>
-
-                    <para>When you set up a layer for a new BSP, you should follow a standard layout.
-                    This layout is described in the
-                    "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-filelayout'>Example Filesystem Layout</ulink>"
-                    section of the Board Support Package (BSP) Development Guide.
-                    In the standard layout, you will notice a suggested structure for recipes and
-                    configuration information.
-                    You can see the standard layout for a BSP by examining
-                    any supported BSP found in the <filename>meta-intel</filename> layer inside
-                    the Source Directory.</para></listitem>
-                <listitem><para><emphasis>Make configuration changes to your new BSP
-                    layer</emphasis>:  The standard BSP layer structure organizes the files you need
-                    to edit in <filename>conf</filename> and several <filename>recipes-*</filename>
-                    directories within the BSP layer.
-                    Configuration changes identify where your new layer is on the local system
-                    and identify which kernel you are going to use.
-                    When you run the <filename>yocto-bsp</filename> script, you are able to interactively
-                    configure many things for the BSP (e.g. keyboard, touchscreen, and so forth).
-                    </para></listitem>
-                <listitem><para><emphasis>Make recipe changes to your new BSP layer</emphasis>:  Recipe
-                    changes include altering recipes (<filename>.bb</filename> files), removing
-                    recipes you do not use, and adding new recipes or append files
-                    (<filename>.bbappend</filename>) that you need to support your hardware.
-                    </para></listitem>
-                <listitem><para><emphasis>Prepare for the build</emphasis>:  Once you have made all the
-                    changes to your BSP layer, there remains a few things
-                    you need to do for the OpenEmbedded build system in order for it to create your image.
-                    You need to get the build environment ready by sourcing an environment setup script
-                    (i.e. <filename>oe-init-build-env</filename> or
-                    <filename>oe-init-build-env-memres</filename>)
-                    and you need to be sure two key configuration files are configured appropriately:
-                    the <filename>conf/local.conf</filename> and the
-                    <filename>conf/bblayers.conf</filename> file.
-                    You must make the OpenEmbedded build system aware of your new layer.
-                    See the
-                    "<link linkend='enabling-your-layer'>Enabling Your Layer</link>" section
-                    for information on how to let the build system know about your new layer.</para>
-                    <para>The entire process for building an image is overviewed in the
-                    "<ulink url='&YOCTO_DOCS_QS_URL;#qs-building-images'>Building Images</ulink>" section
-                    of the Yocto Project Quick Start.
-                    You might want to reference this information.
-                    The example following this list shows a typical command sequence.</para></listitem>
-                <listitem><para><emphasis>Build the image</emphasis>:  The OpenEmbedded build system
-                    uses the BitBake tool to build images based on the type of image you want to create.
-                    You can find more information about BitBake in the
-                    <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual</ulink>.
-                    </para>
-                    <para>The build process supports several types of images to satisfy different needs.
-                    See the
-                    "<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>" chapter
-                    in the Yocto Project Reference Manual for information on
-                    supported images.</para></listitem>
-            </orderedlist>
-        </para>
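-
-        <para>
-            The following is a minimal sketch of the tail end of this
-            workflow.
-            It assumes you have already cloned <filename>poky</filename>, and
-            the BSP and kernel architecture names are placeholders only;
-            normally you would also review <filename>conf/local.conf</filename>
-            rather than set <filename>MACHINE</filename> on the command line:
-            <literallayout class='monospaced'>
-     $ cd poky
-     $ source oe-init-build-env
-     $ yocto-bsp create <replaceable>bspname</replaceable> <replaceable>karch</replaceable>
-     $ bitbake-layers add-layer meta-<replaceable>bspname</replaceable>
-     $ MACHINE=<replaceable>bspname</replaceable> bitbake core-image-minimal
-            </literallayout>
-        </para>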
-
-        <para>
-            You can view a video presentation on "Building Custom Embedded Images with Yocto"
-            at <ulink url='http://free-electrons.com/blog/elc-2011-videos'>Free Electrons</ulink>.
-            After going to the page, just search for "Embedded".
-            You can also find supplemental information in the
-            <ulink url='&YOCTO_DOCS_BSP_URL;'>
-            Yocto Project Board Support Package (BSP) Developer's Guide</ulink>.
-            Finally, there is helpful material and links on this
-            <ulink url='&YOCTO_WIKI_URL;/wiki/Transcript:_creating_one_generic_Atom_BSP_from_another'>wiki page</ulink>.
-            Although a bit dated, you might find the information on the wiki
-            helpful.
-       </para>
-    </section>
-
-    <section id='modifying-the-kernel'>
-        <title><anchor id='kernel-spot' />Modifying the Kernel</title>
-
-        <para>
-            Kernel modification involves changing the Yocto Project kernel, which could involve changing
-            configuration options as well as adding new kernel recipes.
-            Configuration changes can be added in the form of configuration fragments, while recipe
-            modification comes through the kernel's <filename>recipes-kernel</filename> area
-            in a kernel layer you create.
-        </para>
-
-        <para>
-            The remainder of this section presents a high-level overview of the Yocto Project
-            kernel architecture and the steps to modify the kernel.
-            You can reference the
-            "<link linkend='patching-the-kernel'>Patching the Kernel</link>" section
-            for an example that changes the source code of the kernel.
-            For information on how to configure the kernel, see the
-            "<link linkend='configuring-the-kernel'>Configuring the Kernel</link>" section.
-            For more information on the kernel and on modifying the kernel, see the
-            <ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;'>Yocto Project Linux Kernel Development Manual</ulink>.
-        </para>
-
-        <section id='kernel-overview'>
-            <title>Kernel Overview</title>
-
-            <para>
-                Traditionally, when you think of a patched kernel, you think of a base kernel
-                source tree and a fixed structure that contains kernel patches.
-                The Yocto Project, however, employs mechanisms that, in a sense, result in a kernel source
-                generator.
-                By the end of this section, this analogy will become clearer.
-            </para>
-
-            <para>
-                You can find a web interface to the Yocto Project kernel source repositories at
-                <ulink url='&YOCTO_GIT_URL;'></ulink>.
-                If you look at the interface, you will see to the left a grouping of
-                Git repositories titled "Yocto Linux Kernel."
-                Within this group, you will find several kernels supported by
-                the Yocto Project:
-                <itemizedlist>
-                    <listitem><para><emphasis>
-                        <filename>linux-yocto-3.14</filename></emphasis> - The
-                        stable Yocto Project kernel to use with the Yocto
-                        Project Releases 1.6 and 1.7.
-                        This kernel is based on the Linux 3.14 released kernel.
-                        </para></listitem>
-                    <listitem><para><emphasis>
-                        <filename>linux-yocto-3.17</filename></emphasis> - An
-                        additional, unsupported Yocto Project kernel used with
-                        the Yocto Project Release 1.7.
-                        This kernel is based on the Linux 3.17 released kernel.
-                        </para></listitem>
-                    <listitem><para><emphasis>
-                        <filename>linux-yocto-3.19</filename></emphasis> - The
-                        stable Yocto Project kernel to use with the Yocto
-                        Project Release 1.8.
-                        This kernel is based on the Linux 3.19 released kernel.
-                        </para></listitem>
-                    <listitem><para><emphasis>
-                        <filename>linux-yocto-4.1</filename></emphasis> - The
-                        stable Yocto Project kernel to use with the Yocto
-                        Project Release 2.0.
-                        This kernel is based on the Linux 4.1 released kernel.
-                        </para></listitem>
-                    <listitem><para><emphasis>
-                        <filename>linux-yocto-4.4</filename></emphasis> - The
-                        stable Yocto Project kernel to use with the Yocto
-                        Project Release 2.1.
-                        This kernel is based on the Linux 4.4 released kernel.
-                        </para></listitem>
-                    <listitem><para><emphasis>
-                        <filename>linux-yocto-dev</filename></emphasis> - A
-                        development kernel based on the latest upstream release
-                        candidate available.
-                        </para></listitem>
-                </itemizedlist>
-                <note>
-                    Long Term Support Initiative (LTSI) for Yocto Project kernels
-                    is as follows:
-                    <itemizedlist>
-                        <listitem><para>For Yocto Project releases 1.7, 1.8, and 2.0,
-                            the LTSI kernel is <filename>linux-yocto-3.14</filename>.
-                            </para></listitem>
-                        <listitem><para>For Yocto Project release 2.1, the
-                            LTSI kernel is <filename>linux-yocto-4.1</filename>.
-                            </para></listitem>
-                    </itemizedlist>
-                </note>
-            </para>
-
-            <para>
-                The kernels are maintained using the Git revision control system
-                that structures them using the familiar "tree", "branch", and "leaf" scheme.
-                Branches represent diversions from general code to more specific code, while leaves
-                represent the end-points for a complete and unique kernel whose source files,
-                when gathered from the root of the tree to the leaf, accumulate to create the files
-                necessary for a specific piece of hardware and its features.
-                The following figure displays this concept:
-            </para>
-
-            <para>
-                <imagedata fileref="figures/kernel-overview-1.png"
-                    width="6in" depth="6in" align="center" scale="100" />
-            </para>
-
-            <para>
-                Within the figure, the "Kernel.org Branch Point" represents the point in the tree
-                where a supported base kernel is modified from the Linux kernel.
-                For example, this could be the branch point for the <filename>linux-yocto-3.19</filename>
-                kernel.
-                Thus, everything further to the right in the structure is based on the
-                <filename>linux-yocto-3.19</filename> kernel.
-                Branch points to the right in the figure represent where the
-                <filename>linux-yocto-3.19</filename> kernel is modified for specific hardware
-                or types of kernels, such as real-time kernels.
-                Each leaf thus represents the end-point for a kernel designed to run on a specific
-                targeted device.
-            </para>
-
-            <para>
-                The overall result is a Git-maintained repository from which all the supported
-                kernel types can be derived for all the supported devices.
-                A big advantage to this scheme is the sharing of common features by keeping them in
-                "larger" branches within the tree.
-                This practice eliminates redundant storage of similar features shared among kernels.
-            </para>
-
-            <note>
-                Keep in mind the figure does not take into account all the supported Yocto
-                Project kernel types, but rather shows a single generic kernel just for conceptual purposes.
-                Also keep in mind that this structure represents the Yocto Project source repositories
-                that are either pulled from during the build or established on the host development system
-                prior to the build by either cloning a particular kernel's Git repository or by
-                downloading and unpacking a tarball.
-            </note>
-
-            <para>
-                Upstream storage of all the available kernel source code is one thing, while
-                representing and using the code on your host development system is another.
-                Conceptually, you can think of the kernel source repositories as all the
-                source files necessary for all the supported kernels.
-                As a developer, you are just interested in the source files for the kernel on
-                which you are working.
-                And, furthermore, you need them available on your host system.
-            </para>
-
-            <para>
-                Kernel source code is available on your host system in a couple of different
-                ways.
-                If you are working in the kernel all the time, you probably would want
-                to set up your own local Git repository of the kernel tree.
-                If you just need to make some patches to the kernel, you can access
-                temporary kernel source files that were extracted and used
-                during a build.
-                We will just talk about working with the temporary source code.
-                For more information on how to get kernel source code onto your
-                host system, see the
-                "<link linkend='local-kernel-files'>Yocto Project Kernel</link>"
-                bulleted item earlier in the manual.
-            </para>
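-
-            <para>
-                As a brief sketch, assuming <filename>linux-yocto</filename>
-                provides the kernel for your machine and the build
-                environment is already initialized, you can establish and
-                then locate the temporary kernel source as follows:
-                <literallayout class='monospaced'>
-     $ bitbake linux-yocto -c patch
-     $ bitbake -e virtual/kernel | grep "^S="
-                </literallayout>
-                The second command prints the directory holding the
-                temporary kernel source files used during the build.
-            </para>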
-
-            <para>
-                What happens during the build?
-                When you build the kernel on your development system, all files needed for the build
-                are taken from the source repositories pointed to by the
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink> variable
-                and gathered in a temporary work area
-                where they are subsequently used to create the unique kernel.
-                Thus, in a sense, the process constructs a local source tree specific to your
-                kernel to generate the new kernel image - a source generator if you will.
-            </para>
-                The following figure shows the temporary file structure
-                created on your host system when the build occurs.
-                This
-                <link linkend='build-directory'>Build Directory</link> contains all the
-                source files used during the build.
-            </para>
-
-            <para>
-                <imagedata fileref="figures/kernel-overview-2-generic.png"
-                    width="6in" depth="5in" align="center" scale="100" />
-            </para>
-
-            <para>
-                Again, for additional information on the Yocto Project kernel's
-                architecture and its branching strategy, see the
-                <ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;'>Yocto Project Linux Kernel Development Manual</ulink>.
-                You can also reference the
-                "<link linkend='patching-the-kernel'>Patching the Kernel</link>"
-                section for a detailed example that modifies the kernel.
-            </para>
-        </section>
-
-        <section id='kernel-modification-workflow'>
-            <title>Kernel Modification Workflow</title>
-
-            <para>
-                This illustration and the following list summarize the general kernel modification workflow.
-            </para>
-
-            <para>
-                <imagedata fileref="figures/kernel-dev-flow.png"
-                    width="6in" depth="5in" align="center" scalefit="1" />
-            </para>
-
-            <para>
-                <orderedlist>
-                    <listitem><para><emphasis>Set up your host development system to support
-                        development using the Yocto Project</emphasis>:  See
-                        "<ulink url='&YOCTO_DOCS_QS_URL;#the-linux-distro'>The Linux Distribution</ulink>" and
-                        "<ulink url='&YOCTO_DOCS_QS_URL;#packages'>The Build Host Packages</ulink>" sections both
-                        in the Yocto Project Quick Start for requirements.</para></listitem>
-                    <listitem><para><emphasis>Establish a local copy of project files on your
-                        system</emphasis>:  Having the <link linkend='source-directory'>Source
-                        Directory</link> on your system gives you access to the build process and tools
-                        you need.
-                        For information on how to get these files, see the bulleted item
-                        "<link linkend='local-yp-release'>Yocto Project Release</link>" earlier in this manual.
-                        </para></listitem>
-                    <listitem><para><emphasis>Establish the temporary kernel source files</emphasis>:
-                        Temporary kernel source files are kept in the
-                        <link linkend='build-directory'>Build Directory</link>
-                        created by the
-                        OpenEmbedded build system when you run BitBake.
-                        If you have never built the kernel in which you are
-                        interested, you need to run an initial build to
-                        establish local kernel source files.</para>
-                        <para>If you are building an image for the first time, you need to get the build
-                        environment ready by sourcing an environment setup script
-                        (i.e. <filename>oe-init-build-env</filename> or
-                        <filename>oe-init-build-env-memres</filename>).
-                        You also need to be sure two key configuration files
-                        (<filename>local.conf</filename> and <filename>bblayers.conf</filename>)
-                        are configured appropriately.</para>
-                        <para>The entire process for building an image is overviewed in the
-                        "<ulink url='&YOCTO_DOCS_QS_URL;#qs-building-images'>Building Images</ulink>"
-                        section of the Yocto Project Quick Start.
-                        You might want to reference this information.
-                        You can find more information on BitBake in the
-                        <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual</ulink>.
-                        </para>
-                        <para>The build process supports several types of images to satisfy different needs.
-                        See the "<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>" chapter in
-                        the Yocto Project Reference Manual for information on supported images.
-                        </para></listitem>
-                    <listitem><para><emphasis>Make changes to the kernel source code if
-                        applicable</emphasis>:  Modifying the kernel does not always mean directly
-                        changing source files.
-                        However, if you have to do this, you make the changes to the files in the
-                        Build Directory.</para></listitem>
-                    <listitem><para><emphasis>Make kernel configuration changes if applicable</emphasis>:
-                        If your situation calls for changing the kernel's
-                        configuration, you can use
-                        <ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#generating-configuration-files'><filename>menuconfig</filename></ulink>,
-                        which allows you to interactively develop and test the
-                        configuration changes you are making to the kernel.
-                        Saving changes you make with
-                        <filename>menuconfig</filename> updates
-                        the kernel's <filename>.config</filename> file.
-                        <note><title>Warning</title>
-                            Try to resist the temptation to directly edit an
-                            existing <filename>.config</filename> file, which is
-                            found in the Build Directory at
-                            <filename>tmp/sysroots/<replaceable>machine-name</replaceable>/kernel</filename>.
-                            Doing so can produce unexpected results when the
-                            OpenEmbedded build system regenerates the configuration
-                            file.
-                        </note>
-                        Once you are satisfied with the configuration
-                        changes made using <filename>menuconfig</filename>
-                        and you have saved them, you can directly compare the
-                        resulting <filename>.config</filename> file against an
-                        existing original and gather those changes into a
-                        <link linkend='creating-config-fragments'>configuration fragment file</link>
-                        to be referenced from within the kernel's
-                        <filename>.bbappend</filename> file (see the example
-                        following this list).</para>
-
-                        <para>Additionally, if you are working in a BSP layer
-                        and need to modify the BSP's kernel's configuration,
-                        you can use the
-                        <ulink url='&YOCTO_DOCS_BSP_URL;#managing-kernel-patches-and-config-items-with-yocto-kernel'><filename>yocto-kernel</filename></ulink>
-                        script as well as <filename>menuconfig</filename>.
-                        The <filename>yocto-kernel</filename> script lets
-                        you interactively set up kernel configurations.
-                        </para></listitem>
-                    <listitem><para><emphasis>Rebuild the kernel image with your changes</emphasis>:
-                        Rebuilding the kernel image applies your changes.
-                        </para></listitem>
-                </orderedlist>
-            </para>
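-
-            <para>
-                As a minimal sketch of the configuration portion of this
-                workflow, assuming <filename>linux-yocto</filename> is your
-                kernel recipe, you could run:
-                <literallayout class='monospaced'>
-     $ bitbake linux-yocto -c menuconfig
-     $ bitbake linux-yocto -c diffconfig
-                </literallayout>
-                The <filename>diffconfig</filename> task, where available,
-                writes a <filename>fragment.cfg</filename> file that captures
-                the changes you made with <filename>menuconfig</filename>,
-                which you can then add to your layer and reference from a
-                <filename>.bbappend</filename> file.
-            </para>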
-        </section>
-    </section>
-</section>
-
-<section id='application-development-workflow-using-an-sdk'>
-    <title>Application Development Workflow Using an SDK</title>
-
-    <para>
-        Standard and extensible Software Development Kits (SDKs) make it easy
-        to develop applications inside or outside of the Yocto Project
-        development environment.
-        Tools exist to help the application developer during any phase
-        of development.
-        For information on how to install and use an SDK, see the
-        <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-intro'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
-    </para>
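-
-    <para>
-        As a brief, illustrative sketch, you can build a standard SDK
-        installer for an image you have already defined and then find the
-        resulting installer script under <filename>tmp/deploy/sdk/</filename>
-        in the Build Directory.
-        The image name here is only an example:
-        <literallayout class='monospaced'>
-     $ bitbake core-image-minimal -c populate_sdk
-        </literallayout>
-    </para>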
-</section>
-
-<section id="dev-modifying-source-code">
-    <title>Modifying Source Code</title>
-
-    <para>
-        A common development workflow consists of modifying project source
-        files that are external to the Yocto Project and then integrating
-        that project's build output into an image built using the
-        OpenEmbedded build system.
-        Given this scenario, development engineers typically want to stick
-        to their familiar project development tools and methods, which allows
-        them to just focus on the project.
-    </para>
-
-    <para>
-        Several workflows exist that allow you to develop, build, and test
-        code that is going to be integrated into an image built using the
-        OpenEmbedded build system.
-        This section describes two:
-        <itemizedlist>
-            <listitem><para><emphasis><filename>devtool</filename>:</emphasis>
-                A set of tools to aid in working on the source code built by
-                the OpenEmbedded build system.
-                Section
-                "<link linkend='using-devtool-in-your-workflow'>Using <filename>devtool</filename> in Your Workflow</link>"
-                describes this workflow.
-                If you want more information that showcases the workflow, click
-                <ulink url='https://drive.google.com/a/linaro.org/file/d/0B3KGzY5fW7laTDVxUXo3UDRvd2s/view'>here</ulink>
-                for a presentation by Trevor Woerner that, while somewhat dated,
-                provides detailed background information and a complete
-                working tutorial.
-                </para></listitem>
-            <listitem><para><emphasis><ulink url='http://savannah.nongnu.org/projects/quilt'>Quilt</ulink>:</emphasis>
-                A powerful tool that allows you to capture source
-                code changes without having a clean source tree.
-                While Quilt is not the preferred workflow of the two, this
-                section includes it for users that are committed to using
-                the tool.
-                See the
-                "<link linkend='using-a-quilt-workflow'>Using Quilt in Your Workflow</link>"
-                section for more information.
-                </para></listitem>
-        </itemizedlist>
-    </para>
-
-    <section id='using-devtool-in-your-workflow'>
-        <title>Using <filename>devtool</filename> in Your Workflow</title>
-
-        <para>
-            As mentioned earlier, <filename>devtool</filename> helps
-            you easily develop projects whose build output must be part of
-            an image built using the OpenEmbedded build system.
-        </para>
-
-        <para>
-            Three entry points exist that allow you to develop using
-            <filename>devtool</filename>:
-            <itemizedlist>
-                <listitem><para><emphasis><filename>devtool add</filename></emphasis>
-                    </para></listitem>
-                <listitem><para><emphasis><filename>devtool modify</filename></emphasis>
-                    </para></listitem>
-                <listitem><para><emphasis><filename>devtool upgrade</filename></emphasis>
-                    </para></listitem>
-            </itemizedlist>
-        </para>
-
-        <para>
-            The remainder of this section presents these workflows.
-            See the
-            "<ulink url='&YOCTO_DOCS_REF_URL;#ref-devtool-reference'><filename>devtool</filename>&nbsp;Quick Reference</ulink>"
-            in the Yocto Project Reference Manual for a
-            <filename>devtool</filename> quick reference.
-        </para>
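-
-        <para>
-            As a quick orientation, the following is a minimal sketch of the
-            <filename>devtool modify</filename> cycle.
-            The recipe and layer names are placeholders only:
-            <literallayout class='monospaced'>
-     $ devtool modify <replaceable>recipe</replaceable>
-     # edit the source under workspace/sources/<replaceable>recipe</replaceable>
-     $ devtool build <replaceable>recipe</replaceable>
-     $ devtool finish <replaceable>recipe</replaceable> <replaceable>layer</replaceable>
-            </literallayout>
-            The <filename>finish</filename> command moves the resulting
-            changes out of the workspace and into the layer you specify.
-        </para>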
-
-        <section id='use-devtool-to-integrate-new-code'>
-            <title>Use <filename>devtool add</filename> to Add an Application</title>
-
-            <para>
-                The <filename>devtool add</filename> command generates
-                a new recipe based on existing source code.
-                This command takes advantage of the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#devtool-the-workspace-layer-structure'>workspace</ulink>
-                layer that many <filename>devtool</filename> commands
-                use.
-                The command is flexible enough to allow you to extract source
-                code into either the workspace or a separate local Git repository
-                and to use existing code that does not need to be extracted.
-            </para>
-
-            <para>
-                Depending on your particular scenario, the arguments and options
-                you use with <filename>devtool add</filename> form different
-                combinations.
-                The following diagram shows common development flows
-                you would use with the <filename>devtool add</filename>
-                command:
-            </para>
-
-            <para>
-                <imagedata fileref="figures/devtool-add-flow.png" align="center" />
-            </para>
-
-            <para>
-                <orderedlist>
-                    <listitem><para><emphasis>Generating the New Recipe</emphasis>:
-                        The top part of the flow shows three scenarios by which
-                        you could use <filename>devtool add</filename> to
-                        generate a recipe based on existing source code.</para>
-
-                        <para>In a shared development environment, it is
-                        typical for other developers to be responsible for
-                        various areas of source code.
-                        As a developer, you are probably interested in using
-                        that source code as part of your development using
-                        the Yocto Project.
-                        All you need is access to the code, a recipe, and a
-                        controlled area in which to do your work.</para>
-
-                        <para>Within the diagram, three possible scenarios
-                        feed into the <filename>devtool add</filename> workflow:
-                        <itemizedlist>
-                            <listitem><para><emphasis>Left</emphasis>:
-                                The left scenario represents a common situation
-                                where the source code does not exist locally
-                                and needs to be extracted.
-                                In this situation, you just let it get
-                                extracted to the default workspace - you do not
-                                want it in some specific location outside of the
-                                workspace.
-                                Thus, everything you need will be located in the
-                                workspace:
-                                <literallayout class='monospaced'>
-     $ devtool add <replaceable>recipe fetchuri</replaceable>
-                                </literallayout>
-                                With this command, <filename>devtool</filename>
-                                creates a recipe and an append file in the
-                                workspace and extracts the upstream
-                                source files into a local Git repository
-                                within the workspace's <filename>sources</filename> folder.
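-                                For example, to create a recipe for a
-                                hypothetical "hello-world" project whose source
-                                tarball lives at an imaginary URL, the command
-                                could look like the following:
-                                <literallayout class='monospaced'>
-     $ devtool add hello-world http://example.com/hello-world-1.0.tar.gz
-                                </literallayout>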
-                                </para></listitem>
-                            <listitem><para><emphasis>Middle</emphasis>:
-                                The middle scenario also represents a situation where
-                                the source code does not exist locally.
-                                In this case, the code is again upstream
-                                and needs to be extracted to some
-                                local area - this time outside of the default
-                                workspace.
-                                During the extraction, <filename>devtool</filename>
-                                creates a local Git repository if one does
-                                not already exist.
-                                Furthermore, the first positional argument
-                                <replaceable>srctree</replaceable> in this case
-                                identifies where the
-                                <filename>devtool add</filename> command
-                                will locate the extracted code outside of the
-                                workspace:
-                                <literallayout class='monospaced'>
-     $ devtool add <replaceable>recipe srctree fetchuri</replaceable>
-                                </literallayout>
-                                In summary, the source code is pulled from
-                                <replaceable>fetchuri</replaceable> and extracted
-                                into the location defined by
-                                <replaceable>srctree</replaceable> as a local
-                                Git repository.</para>
-
-                                <para>Within the workspace, <filename>devtool</filename>
-                                creates both the recipe and an append file
-                                for the recipe.
-                                </para></listitem>
-                            <listitem><para><emphasis>Right</emphasis>:
-                                The right scenario represents a situation
-                                where the source tree (srctree) has been
-                                previously prepared outside of the
-                                <filename>devtool</filename> workspace.
-                                </para>
-
-                                <para>The following command names the recipe
-                                and identifies where the existing source tree
-                                is located:
-                                <literallayout class='monospaced'>
-     $ devtool add <replaceable>recipe srctree</replaceable>
-                                </literallayout>
-                                The command examines the source code and creates
-                                a recipe for it, placing the recipe into the
-                                workspace.</para>
-
-                                <para>Because the extracted source code already exists,
-                                <filename>devtool</filename> does not try to
-                                relocate it into the workspace. Only the new
-                                recipe is placed in the workspace.</para>
-
-                                <para>Aside from a recipe folder, the command
-                                also creates an append folder and places an initial
-                                <filename>*.bbappend</filename> within.
-                                </para></listitem>
-                        </itemizedlist>
-                        </para></listitem>
-                    <listitem><para><emphasis>Edit the Recipe</emphasis>:
-                        At this point, you can use <filename>devtool edit-recipe</filename>
-                        to open up the editor as defined by the
-                        <filename>$EDITOR</filename> environment variable
-                        and modify the file:
-                        <literallayout class='monospaced'>
-     $ devtool edit-recipe <replaceable>recipe</replaceable>
-                        </literallayout>
-                        From within the editor, you can make modifications to the
-                        recipe that take effect when you build it later.
-                        </para></listitem>
-                    <listitem><para><emphasis>Build the Recipe or Rebuild the Image</emphasis>:
-                        At this point in the flow, the next step you
-                        take depends on what you are going to do with
-                        the new code.</para>
-                        <para>If you need to take the build output and eventually
-                        move it to the target hardware, you would use
-                        <filename>devtool build</filename>:
-                        <literallayout class='monospaced'>
-     $ devtool build <replaceable>recipe</replaceable>
-                        </literallayout></para>
-                        <para>On the other hand, if you want an image to
-                        contain the recipe's packages for immediate deployment
-                        onto a device (e.g. for testing purposes), you can use
-                        the <filename>devtool build-image</filename> command:
-                        <literallayout class='monospaced'>
-     $ devtool build-image <replaceable>image</replaceable>
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>Deploy the Build Output</emphasis>:
-                        When you use the <filename>devtool build</filename>
-                        command to build out your recipe, you probably want to
-                        see if the resulting build output works as expected on target
-                        hardware.
-                        <note>
-                            This step assumes you have a previously built
-                            image that is already either running in QEMU or
-                            running on actual hardware.
-                            Also, it is assumed that SSH is installed in the
-                            image and that, if the image is running on real
-                            hardware, you have network access to and from
-                            your development machine.
-                        </note>
-                        You can deploy your build output to that target hardware by
-                        using the <filename>devtool deploy-target</filename> command:
-                        <literallayout class='monospaced'>
-     $ devtool deploy-target <replaceable>recipe target</replaceable>
-                        </literallayout>
-                        The <replaceable>target</replaceable> is a live target machine
-                        running as an SSH server.</para>
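-
-                        <para>For example, if your hypothetical target device
-                        is reachable at the address 192.168.7.2 and accepts
-                        SSH logins for the root user, the command might look
-                        like this:
-                        <literallayout class='monospaced'>
-     $ devtool deploy-target <replaceable>recipe</replaceable> root@192.168.7.2
-                        </literallayout>
-                        </para>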
-
-                        <para>You can, of course, also deploy the image you build
-                        using the <filename>devtool build-image</filename> command
-                        to actual hardware.
-                        However, <filename>devtool</filename> does not provide a
-                        specific command that allows you to do this.
-                        </para></listitem>
-                    <listitem><para>
-                        <emphasis>Finish Your Work With the Recipe</emphasis>:
-                        The <filename>devtool finish</filename> command creates
-                        any patches corresponding to commits in the local
-                        Git repository, moves the new recipe to a more permanent
-                        layer, and then resets the recipe so that the recipe is
-                        built normally rather than from the workspace.
-                        <literallayout class='monospaced'>
-     $ devtool finish <replaceable>recipe layer</replaceable>
-                        </literallayout>
-                        <note>
-                            Any changes you want to turn into patches must be
-                            committed to the Git repository in the source tree.
-                        </note></para>
-
-                        <para>As mentioned, the <filename>devtool finish</filename>
-                        command moves the final recipe to its permanent layer.
-                        </para>
-
-                        <para>As a final process of the
-                        <filename>devtool finish</filename> command, the state
-                        of the standard layers and the upstream source is
-                        restored so that you can build the recipe from those
-                        areas rather than the workspace.
-                        <note>
-                            You can use the <filename>devtool reset</filename>
-                            command to put things back should you decide you
-                            do not want to proceed with your work.
-                            If you do use this command, realize that the source
-                            tree is preserved.
-                        </note>
-                        </para></listitem>
-                </orderedlist>
-            </para>
-        </section>
-
-        <section id='devtool-use-devtool-modify-to-enable-work-on-code-associated-with-an-existing-recipe'>
-            <title>Use <filename>devtool modify</filename> to Modify the Source of an Existing Component</title>
-
-            <para>
-                The <filename>devtool modify</filename> command prepares the
-                way to work on existing code that already has a recipe in
-                place.
-                The command is flexible enough to allow you to extract code,
-                specify the existing recipe, and keep track of and gather any
-                patch files from other developers that are
-                associated with the code.
-            </para>
-
-            <para>
-                Depending on your particular scenario, the arguments and options
-                you use with <filename>devtool modify</filename> form different
-                combinations.
-                The following diagram shows common development flows
-                you would use with the <filename>devtool modify</filename>
-                command:
-            </para>
-
-            <para>
-                <imagedata fileref="figures/devtool-modify-flow.png" align="center" />
-            </para>
-
-            <para>
-                <orderedlist>
-                    <listitem><para><emphasis>Preparing to Modify the Code</emphasis>:
-                        The top part of the flow shows three scenarios by which
-                        you could use <filename>devtool modify</filename> to
-                        prepare to work on source files.
-                        Each scenario assumes the following:
-                        <itemizedlist>
-                            <listitem><para>The recipe exists in some layer external
-                                to the <filename>devtool</filename> workspace.
-                                </para></listitem>
-                            <listitem><para>The source files exist upstream in an
-                                un-extracted state or locally in a previously
-                                extracted state.
-                                </para></listitem>
-                        </itemizedlist>
-                        The typical situation is where another developer has
-                        created some layer for use with the Yocto Project and
-                        their recipe already resides in that layer.
-                        Furthermore, their source code is readily available
-                        either upstream or locally.
-                        <itemizedlist>
-                            <listitem><para><emphasis>Left</emphasis>:
-                                The left scenario represents a common situation
-                                where the source code does not exist locally
-                                and needs to be extracted.
-                                In this situation, the source is extracted
-                                into the default workspace location.
-                                The recipe, in this scenario, is in its own
-                                layer outside the workspace
-                                (i.e.
-                                <filename>meta-</filename><replaceable>layername</replaceable>).
-                                </para>
-
-                                <para>The following command identifies the recipe
-                                and by default extracts the source files:
-                                <literallayout class='monospaced'>
-     $ devtool modify <replaceable>recipe</replaceable>
-                                </literallayout>
-                                Once <filename>devtool</filename> locates the recipe,
-                                it uses the
-                                <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
-                                variable to locate both the source code and
-                                any local patch files from other developers.
-                                <note>
-                                    You cannot provide a URL for
-                                    <replaceable>srctree</replaceable> when using the
-                                    <filename>devtool modify</filename> command.
-                                </note>
-                                With this scenario, however, since no
-                                <replaceable>srctree</replaceable> argument exists, the
-                                <filename>devtool modify</filename> command by default
-                                extracts the source files to a Git structure.
-                                Furthermore, the location for the extracted source is the
-                                default area within the workspace.
-                                The result is that the command sets up both the source
-                                code and an append file within the workspace with the
-                                recipe remaining in its original location.
-                                </para></listitem>
-                            <listitem><para><emphasis>Middle</emphasis>:
-                                The middle scenario represents a situation where
-                                the source code also does not exist locally.
-                                In this case, the code is again upstream
-                                and needs to be extracted to some
-                                local area as a Git repository.
-                                The recipe, in this scenario, is again in its own
-                                layer outside the workspace.</para>
-
-                                <para>The following command tells
-                                <filename>devtool</filename> the recipe with
-                                which to work and, in this case, identifies a local
-                                area for the extracted source files that is outside
-                                of the default workspace:
-                                <literallayout class='monospaced'>
-     $ devtool modify <replaceable>recipe srctree</replaceable>
-                                </literallayout>
-                                As with all extractions, the command uses
-                                the recipe's <filename>SRC_URI</filename> to locate the
-                                source files.
-                                Once the files are located, the command by default
-                                extracts them.
-                                Providing the <replaceable>srctree</replaceable>
-                                argument instructs <filename>devtool</filename> where
-                                to place the extracted source.</para>
-
-                                <para>Within the workspace, <filename>devtool</filename>
-                                creates an append file for the recipe.
-                                The recipe remains in its original location but
-                                the source files are extracted to the location you
-                                provided with <replaceable>srctree</replaceable>.
-                                </para></listitem>
-                            <listitem><para><emphasis>Right</emphasis>:
-                                The right scenario represents a situation
-                                where the source tree
-                                (<replaceable>srctree</replaceable>) exists as a
-                                previously extracted Git structure outside of
-                                the <filename>devtool</filename> workspace.
-                                In this example, the recipe also exists
-                                elsewhere in its own layer.
-                                </para>
-
-                                <para>The following command tells
-                                <filename>devtool</filename> the recipe
-                                with which to work, uses the "-n" option to indicate
-                                source does not need to be extracted, and uses
-                                <replaceable>srctree</replaceable> to point to the
-                                previously extracted source files:
-                                <literallayout class='monospaced'>
-     $ devtool modify -n <replaceable>recipe srctree</replaceable>
-                                </literallayout>
-                                </para>
-
-                                <para>Once the command finishes, it creates only
-                                an append file for the recipe in the workspace.
-                                The recipe and the source code remain in their
-                                original locations.
-                                </para></listitem>
-                            </itemizedlist>
-                        </para></listitem>
-                    <listitem><para><emphasis>Edit the Source</emphasis>:
-                        Once you have used the <filename>devtool modify</filename>
-                        command, you are free to make changes to the source
-                        files.
-                        You can use any editor you like to make and save
-                        your source code modifications.
-                        </para></listitem>
-                    <listitem><para><emphasis>Build the Recipe</emphasis>:
-                        Once you have updated the source files, you can build
-                        the recipe.
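-                        For example, you could use the
-                        <filename>devtool build</filename> command shown
-                        earlier in this chapter:
-                        <literallayout class='monospaced'>
-     $ devtool build <replaceable>recipe</replaceable>
-                        </literallayout>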
-                        </para></listitem>
-                    <listitem><para><emphasis>Deploy the Build Output</emphasis>:
-                        When you use the <filename>devtool build</filename>
-                        command to build out your recipe, you probably want to see
-                        if the resulting build output works as expected on target
-                        hardware.
-                        <note>
-                            This step assumes you have a previously built
-                            image that is already either running in QEMU or
-                            running on actual hardware.
-                            Also, it is assumed that SSH is installed in the
-                            image and that, if the image is running on real
-                            hardware, you have network access to and from
-                            your development machine.
-                        </note>
-                        You can deploy your build output to that target hardware by
-                        using the <filename>devtool deploy-target</filename> command:
-                        <literallayout class='monospaced'>
-     $ devtool deploy-target <replaceable>recipe target</replaceable>
-                        </literallayout>
-                        The <replaceable>target</replaceable> is a live target machine
-                        running as an SSH server.</para>
-
-                        <para>You can, of course, also deploy the image you build
-                        using the <filename>devtool build-image</filename> command
-                        to actual hardware.
-                        However, <filename>devtool</filename> does not provide a
-                        specific command that allows you to do this.
-                        </para></listitem>
-                    <listitem><para>
-                        <emphasis>Finish Your Work With the Recipe</emphasis>:
-                        The <filename>devtool finish</filename> command creates
-                        any patches corresponding to commits in the local
-                        Git repository, updates the recipe to point to them
-                        (or creates a <filename>.bbappend</filename> file to do
-                        so, depending on the specified destination layer), and
-                        then resets the recipe so that the recipe is built normally
-                        rather than from the workspace.
-                        <literallayout class='monospaced'>
-     $ devtool finish <replaceable>recipe layer</replaceable>
-                        </literallayout>
-                        <note>
-                            Any changes you want to turn into patches must be
-                            committed to the Git repository in the source tree.
-                        </note></para>
-
-                        <para>Because there is no need to move the recipe,
-                        <filename>devtool finish</filename> either updates the
-                        original recipe in the original layer or creates a
-                        <filename>.bbappend</filename> in a different layer,
-                        as specified by <replaceable>layer</replaceable>.
-                        </para>
-
-                        <para>As a final process of the
-                        <filename>devtool finish</filename> command, the state
-                        of the standard layers and the upstream source is
-                        restored so that you can build the recipe from those
-                        areas rather than the workspace.
-                        <note>
-                            You can use the <filename>devtool reset</filename>
-                            command to put things back should you decide you
-                            do not want to proceed with your work.
-                            If you do use this command, realize that the source
-                            tree is preserved.
-                        </note>
-                        </para></listitem>
-                </orderedlist>
-            </para>
-        </section>
-
-        <section id='devtool-use-devtool-upgrade-to-create-a-version-of-the-recipe-that-supports-a-newer-version-of-the-software'>
-            <title>Use <filename>devtool upgrade</filename> to Create a Version of the Recipe that Supports a Newer Version of the Software</title>
-
-            <para>
-                The <filename>devtool upgrade</filename> command updates
-                an existing recipe so that you can build it for an updated
-                set of source files.
-                The command is flexible enough to allow you to specify
-                source code revision and versioning schemes, extract code into
-                or out of the <filename>devtool</filename> workspace, and
-                work with any source file forms that the fetchers support.
-            </para>
-
-            <para>
-                Depending on your particular scenario, the arguments and options
-                you use with <filename>devtool upgrade</filename> form different
-                combinations.
-                The following diagram shows a common development flow
-                you would use with the <filename>devtool upgrade</filename>
-                command:
-            </para>
-
-            <para>
-                <imagedata fileref="figures/devtool-upgrade-flow.png" align="center" />
-            </para>
-
-            <para>
-                <orderedlist>
-                    <listitem><para><emphasis>Initiate the Upgrade</emphasis>:
-                        The top part of the flow shows a typical scenario by which
-                        you could use <filename>devtool upgrade</filename>.
-                        The following conditions exist:
-                        <itemizedlist>
-                            <listitem><para>The recipe exists in some layer external
-                                to the <filename>devtool</filename> workspace.
-                                </para></listitem>
-                            <listitem><para>The source files for the new release
-                                exist adjacent to the same location pointed to by
-                                <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
-                                in the recipe (e.g. a tarball with the new version
-                                number in the name, or as a different revision in
-                                the upstream Git repository).
-                                </para></listitem>
-                        </itemizedlist>
-                        A common situation is where third-party software has
-                        been upgraded to a newer version.
-                        The recipe you have access to is likely in your own layer.
-                        Thus, you need to upgrade the recipe to use the
-                        newer version of the software:
-                        <literallayout class='monospaced'>
-     $ devtool upgrade -V <replaceable>version recipe</replaceable>
-                        </literallayout>
-                        By default, the <filename>devtool upgrade</filename> command
-                        extracts source code into the <filename>sources</filename>
-                        directory in the workspace.
-                        If you want the code extracted to any other location, you
-                        need to provide the <replaceable>srctree</replaceable>
-                        positional argument with the command as follows:
-                        <literallayout class='monospaced'>
-     $ devtool upgrade -V <replaceable>version recipe srctree</replaceable>
-                        </literallayout>
-                        Also, in this example, the "-V" option is used to specify
-                        the new version.
-                        If the source files pointed to by the
-                        <filename>SRC_URI</filename> statement in the recipe are
-                        in a Git repository, you must provide the "-S" option and
-                        specify a revision for the software.</para>
-
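-                        <para>For example, assuming the recipe's
-                        <filename>SRC_URI</filename> points at a Git
-                        repository, you could combine these options as
-                        follows, supplying your own values for the version
-                        and the revision:
-                        <literallayout class='monospaced'>
-     $ devtool upgrade -V <replaceable>version</replaceable> -S <replaceable>revision</replaceable> <replaceable>recipe</replaceable>
-                        </literallayout>
-                        </para>
-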
-                        <para>Once <filename>devtool</filename> locates the recipe,
-                        it uses the <filename>SRC_URI</filename> variable to locate
-                        both the source code and any local patch files from other
-                        developers.
-                        The result is that the command sets up the source
-                        code, the new version of the recipe, and an append file
-                        all within the workspace.
-                        </para></listitem>
-                    <listitem><para><emphasis>Resolve any Conflicts created by the Upgrade</emphasis>:
-                        At this point, there could be some conflicts due to the
-                        software being upgraded to a new version.
-                        This would occur if your recipe specifies some patch files in
-                        <filename>SRC_URI</filename> that conflict with changes
-                        made in the new version of the software.
-                        If this is the case, you need to resolve the conflicts
-                        by editing the source and following the normal
-                        <filename>git rebase</filename> conflict resolution
-                        process.</para>
-
-                        <para>Before moving on to the next step, be sure to resolve any
-                        such conflicts created through use of a newer or different
-                        version of the software.
-                        </para></listitem>
-                    <listitem><para><emphasis>Build the Recipe</emphasis>:
-                        Once you have your recipe in order, you can build it.
-                        You can either use <filename>devtool build</filename> or
-                        <filename>bitbake</filename>.
-                        Either method produces build output that is stored
-                        in
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink>.
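-                        For example, either of the following commands builds
-                        the upgraded recipe:
-                        <literallayout class='monospaced'>
-     $ devtool build <replaceable>recipe</replaceable>
-     $ bitbake <replaceable>recipe</replaceable>
-                        </literallayout>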
-                        </para></listitem>
-                    <listitem><para><emphasis>Deploy the Build Output</emphasis>:
-                        When you use the <filename>devtool build</filename>
-                        command or <filename>bitbake</filename> to build out your
-                        recipe, you probably want to see if the resulting build
-                        output works as expected on target hardware.
-                        <note>
-                            This step assumes you have a previously built
-                            image that is already either running in QEMU or
-                            running on actual hardware.
-                            Also, it is assumed that SSH is installed in the
-                            image and that, if the image is running on real
-                            hardware, you have network access to and from
-                            your development machine.
-                        </note>
-                        You can deploy your build output to that target hardware by
-                        using the <filename>devtool deploy-target</filename> command:
-                        <literallayout class='monospaced'>
-     $ devtool deploy-target <replaceable>recipe target</replaceable>
-                        </literallayout>
-                        The <replaceable>target</replaceable> is a live target machine
-                        running as an SSH server.</para>
-
-                        <para>You can, of course, also deploy the image you build
-                        using the <filename>devtool build-image</filename> command
-                        to actual hardware.
-                        However, <filename>devtool</filename> does not provide a
-                        specific command that allows you to do this.
-                        </para></listitem>
-                    <listitem><para>
-                        <emphasis>Finish Your Work With the Recipe</emphasis>:
-                        The <filename>devtool finish</filename> command creates
-                        any patches corresponding to commits in the local
-                        Git repository, moves the new recipe to a more permanent
-                        layer, and then resets the recipe so that the recipe is
-                        built normally rather than from the workspace.
-                        If you specify a destination layer that is the same as
-                        the original source, then the old version of the
-                        recipe and associated files will be removed prior to
-                        adding the new version.
-                        <literallayout class='monospaced'>
-     $ devtool finish <replaceable>recipe layer</replaceable>
-                        </literallayout>
-                        <note>
-                            Any changes you want to turn into patches must be
-                            committed to the Git repository in the source tree.
-                        </note></para>
-                        <para>As a final process of the
-                        <filename>devtool finish</filename> command, the state
-                        of the standard layers and the upstream source is
-                        restored so that you can build the recipe from those
-                        areas rather than the workspace.
-                        <note>
-                            You can use the <filename>devtool reset</filename>
-                            command to put things back should you decide you
-                            do not want to proceed with your work.
-                            If you do use this command, realize that the source
-                            tree is preserved.
-                        </note>
-                        </para></listitem>
-                </orderedlist>
-            </para>
-        </section>
-    </section>
-
-    <section id="using-a-quilt-workflow">
-        <title>Using Quilt in Your Workflow</title>
-
-        <para>
-            <ulink url='http://savannah.nongnu.org/projects/quilt'>Quilt</ulink>
-            is a powerful tool that allows you to capture source code changes
-            without having a clean source tree.
-            This section outlines the typical workflow you can use to modify
-            source code, test changes, and then preserve the changes in the
-            form of a patch all using Quilt.
-            <note><title>Tip</title>
-                With regard to preserving changes to source files if you
-                clean a recipe or have <filename>rm_work</filename> enabled,
-                the workflow described in the
-                "<link linkend='using-devtool-in-your-workflow'>Using <filename>devtool</filename> in Your Workflow</link>"
-                section is a safer development flow than the flow that
-                uses Quilt.
-            </note>
-        </para>
-
-        <para>
-            Follow these general steps:
-            <orderedlist>
-                <listitem><para><emphasis>Find the Source Code:</emphasis>
-                    Temporary source code used by the OpenEmbedded build system
-                    is kept in the
-                    <link linkend='build-directory'>Build Directory</link>.
-                    See the
-                    "<link linkend='finding-the-temporary-source-code'>Finding Temporary Source Code</link>"
-                    section to learn how to locate the directory that has the
-                    temporary source code for a particular package.
-                    </para></listitem>
-                <listitem><para><emphasis>Change Your Working Directory:</emphasis>
-                    You need to be in the directory that has the temporary source code.
-                    That directory is defined by the
-                    <ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink>
-                    variable.</para></listitem>
-                <listitem><para><emphasis>Create a New Patch:</emphasis>
-                    Before modifying source code, you need to create a new patch.
-                    To create a new patch file, use <filename>quilt new</filename> as below:
-                    <literallayout class='monospaced'>
-     $ quilt new my_changes.patch
-                    </literallayout></para></listitem>
-                <listitem><para><emphasis>Notify Quilt and Add Files:</emphasis>
-                    After creating the patch, you need to notify Quilt about the files
-                    you plan to edit.
-                    You notify Quilt by adding the files to the patch you just created:
-                    <literallayout class='monospaced'>
-     $ quilt add file1.c file2.c file3.c
-                    </literallayout>
-                    </para></listitem>
-                <listitem><para><emphasis>Edit the Files:</emphasis>
-                    Make your changes in the source code to the files you added
-                    to the patch.
-                    </para></listitem>
-                <listitem><para><emphasis>Test Your Changes:</emphasis>
-                    Once you have modified the source code, the easiest way to
-                    test your changes is to call the
-                    <filename>do_compile</filename> task as shown in the
-                    following example:
-                    <literallayout class='monospaced'>
-     $ bitbake -c compile -f <replaceable>package</replaceable>
-                    </literallayout>
-                    The <filename>-f</filename> or <filename>--force</filename>
-                    option forces the specified task to execute.
-                    If you find problems with your code, you can just keep editing and
-                    re-testing iteratively until things work as expected.
-                    <note>All the modifications you make to the temporary source code
-                    disappear once you run the
-                    <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-clean'><filename>do_clean</filename></ulink>
-                    or
-                    <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-cleanall'><filename>do_cleanall</filename></ulink>
-                    tasks using BitBake (i.e.
-                    <filename>bitbake -c clean <replaceable>package</replaceable></filename>
-                    and
-                    <filename>bitbake -c cleanall <replaceable>package</replaceable></filename>).
-                    Modifications will also disappear if you use the <filename>rm_work</filename>
-                    feature as described in the
-                    "<ulink url='&YOCTO_DOCS_QS_URL;#qs-building-images'>Building Images</ulink>"
-                    section of the Yocto Project Quick Start.
-                    </note></para></listitem>
-                <listitem><para><emphasis>Generate the Patch:</emphasis>
-                    Once your changes work as expected, you need to use Quilt to generate the final patch that
-                    contains all your modifications.
-                    <literallayout class='monospaced'>
-     $ quilt refresh
-                    </literallayout>
-                    At this point, the <filename>my_changes.patch</filename> file has all your edits made
-                    to the <filename>file1.c</filename>, <filename>file2.c</filename>, and
-                    <filename>file3.c</filename> files.</para>
-                    <para>You can find the resulting patch file in the <filename>patches/</filename>
-                    subdirectory of the source (<filename>S</filename>) directory.</para></listitem>
-                <listitem><para><emphasis>Copy the Patch File:</emphasis>
-                    For simplicity, copy the patch file into a directory named <filename>files</filename>,
-                    which you can create in the same directory that holds the recipe
-                    (<filename>.bb</filename>) file or the
-                    append (<filename>.bbappend</filename>) file.
-                    Placing the patch here guarantees that the OpenEmbedded build system will find
-                    the patch.
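-                    If your append file resides in a separate layer, also see
-                    the example following this list.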
-                    Next, add the patch into the
-                    <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'>SRC_URI</ulink></filename>
-                    of the recipe.
-                    Here is an example:
-                    <literallayout class='monospaced'>
-     SRC_URI += "file://my_changes.patch"
-                    </literallayout></para></listitem>
-            </orderedlist>
-        </para>
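-
-        <para>
-            A <filename>files</filename> directory that sits next to the
-            recipe is searched by default.
-            If, however, you add the patch by way of an append file that
-            resides in a separate layer, you typically also need to extend
-            the search path from within that append file.
-            The following is a common sketch, assuming the patch is placed in
-            a <filename>files</filename> directory next to the append file:
-            <literallayout class='monospaced'>
-     FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-     SRC_URI += "file://my_changes.patch"
-            </literallayout>
-        </para>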
-    </section>
-
-    <section id='finding-the-temporary-source-code'>
-        <title>Finding Temporary Source Code</title>
-
-        <para>
-            You might find it helpful during development to modify the
-            temporary source code used by recipes to build packages.
-            For example, suppose you are developing a patch and you need to
-            experiment a bit to figure out your solution.
-            After you have initially built the package, you can iteratively
-            tweak the source code, which is located in the
-            <link linkend='build-directory'>Build Directory</link>, and then
-            you can force a re-compile and quickly test your altered code.
-            Once you settle on a solution, you can then preserve your changes
-            in the form of patches.
-            If you are using Quilt for development, see the
-            "<link linkend='using-a-quilt-workflow'>Using Quilt in Your Workflow</link>"
-            section for more information.
-        </para>
-
-        <para>
-            During a build, the unpacked temporary source code used by recipes
-            to build packages is available in the Build Directory as
-            defined by the
-            <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-S'>S</ulink></filename> variable.
-            Below is the default value for the <filename>S</filename> variable as defined in the
-            <filename>meta/conf/bitbake.conf</filename> configuration file in the
-            <link linkend='source-directory'>Source Directory</link>:
-            <literallayout class='monospaced'>
-     S = "${WORKDIR}/${BP}"
-            </literallayout>
-            You should be aware that many recipes override the <filename>S</filename> variable.
-            For example, recipes that fetch their source from Git usually set
-            <filename>S</filename> to <filename>${WORKDIR}/git</filename>.
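-            In such recipes, the assignment typically looks like this:
-            <literallayout class='monospaced'>
-     S = "${WORKDIR}/git"
-            </literallayout>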
-            <note>
-                The
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-BP'><filename>BP</filename></ulink>
-                variable represents the base recipe name, which consists of the name and version:
-                <literallayout class='monospaced'>
-     BP = "${BPN}-${PV}"
-                </literallayout>
-            </note>
-        </para>
-
-        <para>
-            The path to the work directory for the recipe
-            (<ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink>)
-            is defined as follows:
-            <literallayout class='monospaced'>
-     ${TMPDIR}/work/${MULTIMACH_TARGET_SYS}/${PN}/${EXTENDPE}${PV}-${PR}
-            </literallayout>
-            The actual directory depends on several things:
-            <itemizedlist>
-                <listitem><ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink>:
-                    The top-level build output directory</listitem>
-                <listitem><ulink url='&YOCTO_DOCS_REF_URL;#var-MULTIMACH_TARGET_SYS'><filename>MULTIMACH_TARGET_SYS</filename></ulink>:
-                    The target system identifier</listitem>
-                <listitem><ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink>:
-                    The recipe name</listitem>
-                <listitem><ulink url='&YOCTO_DOCS_REF_URL;#var-EXTENDPE'><filename>EXTENDPE</filename></ulink>:
-                    The epoch (if
-                    <ulink url='&YOCTO_DOCS_REF_URL;#var-PE'><filename>PE</filename></ulink>
-                    is not specified, which is usually the case for most
-                    recipes, then <filename>EXTENDPE</filename> is blank)</listitem>
-                <listitem><ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>:
-                    The recipe version</listitem>
-                <listitem><ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink>:
-                    The recipe revision</listitem>
-            </itemizedlist>
-        </para>
-
-        <para>
-            As an example, assume a Source Directory top-level folder
-            named <filename>poky</filename>, a default Build Directory at
-            <filename>poky/build</filename>, and a
-            <filename>qemux86-poky-linux</filename> machine target
-            system.
-            Furthermore, suppose your recipe is named
-            <filename>foo_1.3.0.bb</filename>.
-            In this case, the work directory the build system uses to
-            build the package would be as follows:
-            <literallayout class='monospaced'>
-     poky/build/tmp/work/qemux86-poky-linux/foo/1.3.0-r0
-            </literallayout>
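-            Assuming the recipe uses the default <filename>S</filename> value
-            shown earlier, the temporary source code itself would then be
-            located in the <filename>foo-1.3.0</filename> subdirectory of
-            that work directory:
-            <literallayout class='monospaced'>
-     poky/build/tmp/work/qemux86-poky-linux/foo/1.3.0-r0/foo-1.3.0
-            </literallayout>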
-        </para>
-
-        <para>
-            Now that you know where to locate the directory that has the
-            temporary source code, you can use Quilt as described in the
-            "<link linkend='using-a-quilt-workflow'>Using Quilt in Your Workflow</link>"
-            section to make your edits, test the changes, and preserve the changes in
-            the form of patches.
-        </para>
-    </section>
-</section>
-
-<section id='image-development-using-toaster'>
-    <title>Image Development Using Toaster</title>
-
-    <para>
-        Toaster is a web interface to the Yocto Project's OpenEmbedded build
-        system.
-        You can initiate builds using Toaster as well as examine the results
-        and statistics of builds.
-        See the
-        <ulink url='&YOCTO_DOCS_TOAST_URL;#toaster-manual-intro'>Toaster User Manual</ulink>
-        for information on how to set up and use Toaster to build images.
-    </para>
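-
-    <para>
-        For quick orientation only, Toaster is normally started from within an
-        initialized build environment using a command of the following form;
-        the Toaster User Manual remains the authoritative reference for the
-        setup steps:
-        <literallayout class='monospaced'>
-     $ source toaster start
-        </literallayout>
-    </para>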
-</section>
-
-<section id="platdev-appdev-devshell">
-    <title>Using a Development Shell</title>
-
-    <para>
-        When debugging certain commands or even when just editing packages,
-        <filename>devshell</filename> can be a useful tool.
-        When you invoke <filename>devshell</filename>, all tasks up to and
-        including
-        <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-patch'><filename>do_patch</filename></ulink>
-        are run for the specified target.
-        Then, a new terminal is opened and you are placed in
-        <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink><filename>}</filename>,
-        the source directory.
-        In the new terminal, all the OpenEmbedded build-related environment variables are
-        still defined so you can use commands such as <filename>configure</filename> and
-        <filename>make</filename>.
-        The commands execute just as if the OpenEmbedded build system were executing them.
-        Consequently, working this way can be helpful when debugging a build or preparing
-        software to be used with the OpenEmbedded build system.
-    </para>
-
-    <para>
-        Following is an example that uses <filename>devshell</filename> on a target named
-        <filename>matchbox-desktop</filename>:
-        <literallayout class='monospaced'>
-     $ bitbake matchbox-desktop -c devshell
-        </literallayout>
-    </para>
-
-    <para>
-        This command spawns a terminal with a shell prompt within the OpenEmbedded build environment.
-        The <ulink url='&YOCTO_DOCS_REF_URL;#var-OE_TERMINAL'><filename>OE_TERMINAL</filename></ulink>
-        variable controls what type of shell is opened.
-    </para>
-
-    <para>
-        For spawned terminals, the following occurs:
-        <itemizedlist>
-            <listitem><para>The <filename>PATH</filename> variable includes the
-                cross-toolchain.</para></listitem>
-            <listitem><para>The <filename>pkgconfig</filename> variables find the correct
-                <filename>.pc</filename> files.</para></listitem>
-            <listitem><para>The <filename>configure</filename> command finds the
-                Yocto Project site files as well as any other necessary files.</para></listitem>
-        </itemizedlist>
-    </para>
-
-    <para>
-        Within this environment, you can run configure or compile
-        commands as if they were being run by
-        the OpenEmbedded build system itself.
-        As noted earlier, the working directory also automatically changes to the
-        Source Directory (<ulink url='&YOCTO_DOCS_REF_URL;#var-S'><filename>S</filename></ulink>).
-    </para>
-
-    <para>
-        To manually run a specific task using <filename>devshell</filename>,
-        run the corresponding <filename>run.*</filename> script in
-        the
-        <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink><filename>}/temp</filename>
-        directory (e.g.,
-        <filename>run.do_configure.</filename><replaceable>pid</replaceable>).
-        If a task's script does not exist, which would be the case if the task was
-        skipped by way of the sstate cache, you can create the script by first
-        running the task outside of the <filename>devshell</filename>:
-        <literallayout class='monospaced'>
-     $ bitbake <replaceable>recipe</replaceable> -c <replaceable>task</replaceable>
-        </literallayout>
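-        For example, assuming you have changed into the
-        <filename>temp</filename> directory, you could then re-run the
-        <filename>do_configure</filename> task by executing its script
-        directly:
-        <literallayout class='monospaced'>
-     $ ./run.do_configure.<replaceable>pid</replaceable>
-        </literallayout>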
-        <note><title>Notes</title>
-            <itemizedlist>
-                <listitem><para>Execution of a task's <filename>run.*</filename>
-                    script and BitBake's execution of a task are identical.
-                    In other words, running the script re-runs the task
-                    just as it would be run using the
-                    <filename>bitbake -c</filename> command.
-                    </para></listitem>
-                <listitem><para>Any <filename>run.*</filename> file that does not
-                    have a <filename>.pid</filename> extension is a
-                    symbolic link (symlink) to the most recent version of that
-                    file.
-                    </para></listitem>
-            </itemizedlist>
-        </note>
-    </para>
-
-    <para>
-        Remember that the <filename>devshell</filename> is a mechanism that allows
-        you to get into the BitBake task execution environment.
-        As such, all commands must be called just as BitBake would call them.
-        That means you need to provide the appropriate options for
-        cross-compilation and so forth as applicable.
-    </para>
-
-    <para>
-        When you are finished using <filename>devshell</filename>, exit the shell
-        or close the terminal window.
-    </para>
-
-    <note><title>Notes</title>
-        <itemizedlist>
-            <listitem><para>
-                It is worth remembering that when using <filename>devshell</filename>
-                you need to use the full compiler name such as <filename>arm-poky-linux-gnueabi-gcc</filename>
-                instead of just using <filename>gcc</filename>.
-                The same applies to other applications such as <filename>binutils</filename>,
-                <filename>libtool</filename> and so forth.
-                BitBake sets up environment variables such as <filename>CC</filename>
-                to assist applications, such as <filename>make</filename>, in finding the correct tools.
-                </para></listitem>
-            <listitem><para>
-                It is also worth noting that <filename>devshell</filename> still works over
-                X11 forwarding and similar situations.
-                </para></listitem>
-        </itemizedlist>
-    </note>
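-
-    <para>
-        As a simple illustration of these points, assuming a hypothetical
-        <filename>hello.c</filename> file in the source directory, you could
-        compile it by hand from within <filename>devshell</filename> using the
-        exported <filename>CC</filename> variable rather than a host
-        <filename>gcc</filename>:
-        <literallayout class='monospaced'>
-     $ $CC hello.c -o hello
-        </literallayout>
-    </para>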
-</section>
-
-<section id="platdev-appdev-devpyshell">
-    <title>Using a Development Python Shell</title>
-
-    <para>
-        Similar to working within a development shell as described in
-        the previous section, you can also spawn and work within an
-        interactive Python development shell.
-        When debugging certain commands or even when just editing packages,
-        <filename>devpyshell</filename> can be a useful tool.
-        When you invoke <filename>devpyshell</filename>, all tasks up to and
-        including
-        <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-patch'><filename>do_patch</filename></ulink>
-        are run for the specified target.
-        Then a new terminal is opened.
-        Additionally, key Python objects and code are available in the same
-        way they are to BitBake tasks, in particular, the data store 'd'.
-        So, commands such as the following are useful when exploring the data
-        store and running functions:
-        <literallayout class='monospaced'>
-     pydevshell> d.getVar("STAGING_DIR", True)
-     '/media/build1/poky/build/tmp/sysroots'
-     pydevshell> d.getVar("STAGING_DIR", False)
-     '${TMPDIR}/sysroots'
-     pydevshell> d.setVar("FOO", "bar")
-     pydevshell> d.getVar("FOO", True)
-     'bar'
-     pydevshell> d.delVar("FOO")
-     pydevshell> d.getVar("FOO", True)
-     pydevshell> bb.build.exec_func("do_unpack", d)
-     pydevshell>
-        </literallayout>
-        The commands execute just as if the OpenEmbedded build system were executing them.
-        Consequently, working this way can be helpful when debugging a build or preparing
-        software to be used with the OpenEmbedded build system.
-    </para>
-
-    <para>
-        Following is an example that uses <filename>devpyshell</filename> on a target named
-        <filename>matchbox-desktop</filename>:
-        <literallayout class='monospaced'>
-     $ bitbake matchbox-desktop -c devpyshell
-        </literallayout>
-    </para>
-
-    <para>
-        This command spawns a terminal and places you in an interactive
-        Python interpreter within the OpenEmbedded build environment.
-        The <ulink url='&YOCTO_DOCS_REF_URL;#var-OE_TERMINAL'><filename>OE_TERMINAL</filename></ulink>
-        variable controls what type of shell is opened.
-    </para>
-
-    <para>
-        When you are finished using <filename>devpyshell</filename>, you
-        can exit the shell either by using Ctrl+d or closing the terminal
-        window.
-    </para>
-</section>
-
-</chapter>
diff --git a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-newbie.xml b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-newbie.xml
index ad32ac6..a0fbb4b 100644
--- a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-newbie.xml
+++ b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-newbie.xml
@@ -6,63 +6,13 @@
 
 <title>The Yocto Project Open Source Development Environment</title>
 
-<para>
-    This chapter helps you understand the Yocto Project as an open source development project.
-    In general, working in an open source environment is very different from working in a
-    closed, proprietary environment.
-    Additionally, the Yocto Project uses specific tools and constructs as part of its development
-    environment.
-    This chapter specifically addresses open source philosophy, using the
-    Yocto Project in a team environment, source repositories, Yocto Project
-    terms, licensing, the open source distributed version control system Git,
-    workflows, bug tracking, and how to submit changes.
-</para>
-
-<section id='open-source-philosophy'>
-    <title>Open Source Philosophy</title>
-
-    <para>
-        Open source philosophy is characterized by software development directed by peer production
-        and collaboration through an active community of developers.
-        Contrast this to the more standard centralized development models used by commercial software
-        companies where a finite set of developers produces a product for sale using a defined set
-        of procedures that ultimately result in an end product whose architecture and source material
-        are closed to the public.
-    </para>
-
-    <para>
-        Open source projects conceptually have differing concurrent agendas, approaches, and production.
-        These facets of the development process can come from anyone in the public (community) that has a
-        stake in the software project.
-        The open source environment contains new copyright, licensing, domain, and consumer issues
-        that differ from the more traditional development environment.
-        In an open source environment, the end product, source material, and documentation are
-        all available to the public at no cost.
-    </para>
-
-    <para>
-        A benchmark example of an open source project is the Linux kernel, which was initially conceived
-        and created by Finnish computer science student Linus Torvalds in 1991.
-        Conversely, a good example of a non-open source project is the
-        <trademark class='registered'>Windows</trademark> family of operating
-        systems developed by <trademark class='registered'>Microsoft</trademark> Corporation.
-    </para>
-
-    <para>
-        Wikipedia has a good historical description of the Open Source Philosophy
-        <ulink url='http://en.wikipedia.org/wiki/Open_source'>here</ulink>.
-        You can also find helpful information on how to participate in the Linux Community
-        <ulink url='http://ldn.linuxfoundation.org/book/how-participate-linux-community'>here</ulink>.
-    </para>
-</section>
-
 <section id="usingpoky-changes-collaborate">
-    <title>Using the Yocto Project in a Team Environment</title>
+    <title>Setting Up a Team Yocto Project Development Environment</title>
 
     <para>
         It might not be immediately clear how you can use the Yocto
-        Project in a team environment, or scale it for a large team of
-        developers.
+        Project in a team development environment, or scale it for a large
+        team of developers.
         One of the strengths of the Yocto Project is that it is extremely
         flexible.
         Thus, you can adapt it to many different use cases and scenarios.
@@ -71,1505 +21,615 @@
     </para>
 
     <para>
-        To help with these types of situations, this section presents
-        some of the project's most successful experiences,
-        practices, solutions, and available technologies that work well.
-        Keep in mind, the information here is a starting point.
+        To help you set up this type of environment, this section
+        presents a high-level procedure based on some of the project's
+        most successful experiences, practices, solutions, and
+        technologies that have worked well.
+        Keep in mind, the procedure here is a starting point.
         You can build off it and customize it to fit any
         particular working environment and set of practices.
-    </para>
+        <orderedlist>
+            <listitem><para>
+                <emphasis>Determine Who is Going to be Developing:</emphasis>
+                You need to understand who will be working with the Yocto
+                Project and what their roles will be.
+                Making this determination is essential to completing steps
+                two and three, which are to get your equipment together and
+                set up your development environment's hardware topology.
+                </para>
 
-    <section id='best-practices-system-configurations'>
-        <title>System Configurations</title>
-
-        <para>
-            Systems across a large team should meet the needs of
-            two types of developers: those working on the contents of the
-            operating system image itself and those developing applications.
-            Regardless of the type of developer, their workstations must
-            be both reasonably powerful and run Linux.
-        </para>
-
-        <section id='best-practices-application-development'>
-            <title>Application Development</title>
-
-            <para>
-                For developers who mainly do application level work
-                on top of an existing software stack,
-                the following list shows practices that work best.
-                For information on using a Software Development Kit (SDK), see
-                the
-                <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-intro'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>:
+                <para>The following roles exist:
                 <itemizedlist>
-                    <listitem><para>Use a pre-built toolchain that
+                    <listitem><para>
+                        <emphasis>Application Development:</emphasis>
+                        These types of developers do application level work
+                        on top of an existing software stack.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Core System Development:</emphasis>
+                        These types of developers work on the contents of the
+                        operating system image itself.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Build Engineer:</emphasis>
+                        This type of developer manages Autobuilders and
+                        releases.
+                        Not all environments need a Build Engineer.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Test Engineer:</emphasis>
+                        This type of developer creates and manages automated
+                        tests needed to ensure all application and core
+                        system development meets desired quality standards.
+                        </para></listitem>
+                </itemizedlist>
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Gather the Hardware:</emphasis>
+                Based on the size and make-up of the team, get the hardware
+                together.
+                Any development, build, or test engineer should be using
+                a system that is running a supported Linux distribution.
+                Systems, in general, should be high performance (e.g. dual,
+                six-core Xeons with 24 Gbytes of RAM and plenty of disk space).
+                You can help ensure efficiency by making the machines used
+                for testing or for running Autobuilders as high performance
+                as possible.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Understand the Hardware Topology of the Environment:</emphasis>
+                Now that you know how many developers and support engineers
+                are required, you can understand the topology of the
+                hardware environment.
+                The following figure shows a moderately sized Yocto Project
+                development environment.
+
+                <para role="writernotes">
+                Need figure.</para>
+
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Use Git as Your Source Control Manager (SCM):</emphasis>
+                Keeping your
+                <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink>
+                and any software you are developing under the
+                control of an SCM system that is compatible
+                with the OpenEmbedded build system is advisable.
+                Of the SCMs BitBake supports, the
+                Yocto Project team strongly recommends using
+                <ulink url='&YOCTO_DOCS_REF_URL;#git'>Git</ulink>.
+                Git is a distributed system that is easy to back up,
+                allows you to work remotely, and then connects back to the
+                infrastructure.
+                <note>
+                    For information about BitBake, see the
+                    <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual</ulink>.
+                </note></para>
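+
+                <para>As a minimal sketch, assuming a hypothetical project
+                layer named <filename>meta-myproject</filename> and a Git
+                server at <filename>git.example.com</filename>, placing the
+                layer under Git control might look like the following:
+                <literallayout class='monospaced'>
+     $ cd meta-myproject
+     $ git init
+     $ git add .
+     $ git commit -m "Initial import of meta-myproject"
+     $ git remote add origin ssh://git@git.example.com/meta-myproject.git
+     $ git push -u origin master
+                </literallayout>
+                The exact remote URL and hosting arrangement depend on the
+                Git infrastructure you set up, as described next.</para>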
+
+                <para>It is relatively easy to set up Git services and create
+                infrastructure like
+                <ulink url='&YOCTO_GIT_URL;'>http://git.yoctoproject.org</ulink>,
+                which is based on server software called
+                <filename>gitolite</filename> with <filename>cgit</filename>
+                being used to generate the web interface that lets you view the
+                repositories.
+                The <filename>gitolite</filename> software identifies users
+                by their SSH keys and provides branch-based
+                access controls to repositories, which you can make as strict
+                or as permissive as necessary.
+
+                <note>
+                   The setup of these services is beyond the scope of this
+                   manual.
+                   However, the following sites describe how to perform the
+                   setup:
+                   <itemizedlist>
+                       <listitem><para>
+                           <ulink url='http://git-scm.com/book/ch4-8.html'>Git documentation</ulink>:
+                           Describes how to install <filename>gitolite</filename>
+                           on the server.
+                           </para></listitem>
+                       <listitem><para>
+                           <ulink url='http://sitaramc.github.com/gitolite/master-toc.html'>The <filename>gitolite</filename> master index</ulink>:
+                            All topics for <filename>gitolite</filename>.
+                            </para></listitem>
+                        <listitem><para>
+                            <ulink url='https://git.wiki.kernel.org/index.php/Interfaces,_frontends,_and_tools'>Interfaces, frontends, and tools</ulink>:
+                            Documentation on how to create interfaces and frontends
+                            for Git.
+                            </para></listitem>
+                    </itemizedlist>
+                </note>
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Set up the Application Development Machines:</emphasis>
+                As mentioned earlier, application developers are creating
+                applications on top of existing software stacks.
+                Following are some best practices for setting up machines
+                that do application development:
+                <itemizedlist>
+                    <listitem><para>
+                        Use a pre-built toolchain that
                         contains the software stack itself.
                         Then, develop the application code on top of the
                         stack.
                         This method works well for small numbers of relatively
-                        isolated applications.</para></listitem>
-                    <listitem><para>When possible, use the Yocto Project
-                        plug-in for the <trademark class='trade'>Eclipse</trademark> IDE
+                        isolated applications.
+                        </para></listitem>
+                    <listitem><para>
+                        When possible, use the Yocto Project
+                        plug-in for the
+                        <trademark class='trade'>Eclipse</trademark> IDE
                         and SDK development practices.
                         For more information, see the
-                        "<ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>".
+                        "<ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</ulink>"
+                        manual.
                         </para></listitem>
-                    <listitem><para>Keep your cross-development toolchains
-                        updated.
+                    <listitem><para>
+                        Keep your cross-development toolchains updated.
                         You can do this either by provisioning new toolchain
                         downloads or by delivering updates to an existing
                         toolchain through a package update mechanism such as
                         <filename>opkg</filename>.
                         The exact mechanics of how and when to do this are a
-                        question for local policy.</para></listitem>
-                    <listitem><para>Use multiple toolchains installed locally
+                        question for local policy.
+                        </para></listitem>
+                    <listitem><para>
+                        Use multiple toolchains installed locally
                         into different locations to allow development across
-                        versions.</para></listitem>
+                        versions.
+                        </para></listitem>
                 </itemizedlist>
-            </para>
-        </section>
-
-        <section id='best-practices-core-system-development'>
-            <title>Core System Development</title>
-
-            <para>
-                For core system development, it is often best to have the
-                build system itself available on the developer workstations
-                so developers can run their own builds and directly
-                rebuild the software stack.
-                You should keep the core system unchanged as much as
-                possible and do your work in layers on top of the core system.
-                Doing so gives you a greater level of portability when
-                upgrading to new versions of the core system or Board
-                Support Packages (BSPs).
-                You can share layers amongst the developers of a particular
-                project and contain the policy configuration that defines
-                the project.
-            </para>
-
-            <para>
-                Aside from the previous best practices, there exists a number
-                of tips and tricks that can help speed up core development
-                projects:
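+
+                <para>For example, to keep multiple cross-development
+                toolchains installed in parallel as suggested above, you can
+                give each SDK installer its own installation directory.
+                The installer filenames below are illustrations only; use the
+                installers that match your images and releases:
+                <literallayout class='monospaced'>
+     $ ./poky-glibc-x86_64-core-image-minimal-cortexa8hf-neon-toolchain-2.3.sh -y -d /opt/poky/2.3
+     $ ./poky-glibc-x86_64-core-image-minimal-cortexa8hf-neon-toolchain-2.4.sh -y -d /opt/poky/2.4
+                </literallayout>
+                Developers then source the environment setup script from
+                whichever installation matches the software stack they are
+                building against.</para>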
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Set up the Core Development Machines:</emphasis>
+                As mentioned earlier, these types of developers work on the
+                contents of the operating system itself.
+                Following are some best practices for setting up machines
+                used for developing images:
                 <itemizedlist>
-                    <listitem><para>Use a
-                        <ulink url='&YOCTO_DOCS_REF_URL;#shared-state-cache'>Shared State Cache</ulink>
-                        (sstate) among groups of developers who are on a
-                        fast network.
-                        The best way to share sstate is through a
-                        Network File System (NFS) share.
-                        The first user to build a given component for the
-                        first time contributes that object to the sstate,
-                        while subsequent builds from other developers then
-                        reuse the object rather than rebuild it themselves.
-                        </para>
-                        <para>Although it is possible to use other protocols for the
-                        sstate such as HTTP and FTP, you should avoid these.
-                        Using HTTP limits the sstate to read-only and
-                        FTP provides poor performance.
+                    <listitem><para>
+                        Have the Yocto Project build system itself available on
+                        the developer workstations so developers can run their own
+                        builds and directly rebuild the software stack.
                         </para></listitem>
-                    <listitem><para>Have autobuilders contribute to the sstate
-                        pool similarly to how the developer workstations
-                        contribute.
-                        For information, see the
-                        "<link linkend='best-practices-autobuilders'>Autobuilders</link>"
-                        section.</para></listitem>
-                    <listitem><para>Build stand-alone tarballs that contain
-                        "missing" system requirements if for some reason
-                        developer workstations do not meet minimum system
-                        requirements such as latest Python versions,
-                        <filename>chrpath</filename>, or other tools.
-                        You can install and relocate the tarball exactly as you
-                        would the usual cross-development toolchain so that
-                        all developers can meet minimum version requirements
-                        on most distributions.</para></listitem>
-                    <listitem><para>Use a small number of shared,
-                        high performance systems for testing purposes
-                        (e.g. dual, six-core Xeons with 24 Gbytes of RAM
-                        and plenty of disk space).
-                        Developers can use these systems for wider, more
-                        extensive testing while they continue to develop
-                        locally using their primary development system.
+                    <listitem><para>
+                        Keep the core system unchanged as much as
+                        possible and do your work in layers on top of the
+                        core system.
+                        Doing so gives you a greater level of portability when
+                        upgrading to new versions of the core system or Board
+                        Support Packages (BSPs).
                         </para></listitem>
-                    <listitem><para>Enable the PR Service when package feeds
-                        need to be incremental with continually increasing
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-PR'>PR</ulink>
-                        values.
-                        Typically, this situation occurs when you use or
-                        publish package feeds and use a shared state.
-                        You should enable the PR Service for all users who
-                        use the shared state pool.
-                        For more information on the PR Service, see the
-                        "<link linkend='working-with-a-pr-service'>Working With a PR Service</link>".
+                    <listitem><para>
+                        Share layers amongst the developers of a
+                        particular project and contain the policy configuration
+                        that defines the project.
                         </para></listitem>
                 </itemizedlist>
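+
+                <para>A minimal sketch of this layering approach, assuming a
+                hypothetical project layer named
+                <filename>meta-myproject</filename> checked out alongside
+                <filename>poky</filename>, is to list the layer in
+                <filename>conf/bblayers.conf</filename> in each developer's
+                Build Directory (adjust the paths to match your checkouts):
+                <literallayout class='monospaced'>
+     BBLAYERS ?= " \
+       /home/user/poky/meta \
+       /home/user/poky/meta-poky \
+       /home/user/poky/meta-yocto-bsp \
+       /home/user/meta-myproject \
+       "
+                </literallayout>
+                Project policy and customizations then live in the shared
+                layer rather than in modified core Metadata.</para>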
-            </para>
-        </section>
-    </section>
-
-    <section id='best-practices-source-control-management'>
-        <title>Source Control Management (SCM)</title>
-
-        <para>
-            Keeping your
-            <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
-            and any software you are developing under the
-            control of an SCM system that is compatible
-            with the OpenEmbedded build system is advisable.
-            Of the SCMs BitBake supports, the
-            Yocto Project team strongly recommends using
-            <link linkend='git'>Git</link>.
-            Git is a distributed system that is easy to backup,
-            allows you to work remotely, and then connects back to the
-            infrastructure.
-            <note>
-                For information about BitBake, see the
-                <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual</ulink>.
-            </note>
-        </para>
-
-        <para>
-            It is relatively easy to set up Git services and create
-            infrastructure like
-            <ulink url='&YOCTO_GIT_URL;'>http://git.yoctoproject.org</ulink>,
-            which is based on server software called
-            <filename>gitolite</filename> with <filename>cgit</filename>
-            being used to generate the web interface that lets you view the
-            repositories.
-            The <filename>gitolite</filename> software identifies users
-            using SSH keys and allows branch-based
-            access controls to repositories that you can control as little
-            or as much as necessary.
-        </para>
-
-        <note>
-            The setup of these services is beyond the scope of this manual.
-            However, sites such as these exist that describe how to perform
-            setup:
-            <itemizedlist>
-                <listitem><para><ulink url='http://git-scm.com/book/ch4-8.html'>Git documentation</ulink>:
-                    Describes how to install <filename>gitolite</filename>
-                    on the server.</para></listitem>
-                <listitem><para><ulink url='http://sitaramc.github.com/gitolite/master-toc.html'>The <filename>gitolite</filename> master index</ulink>:
-                    All topics for <filename>gitolite</filename>.
-                    </para></listitem>
-                <listitem><para><ulink url='https://git.wiki.kernel.org/index.php/Interfaces,_frontends,_and_tools'>Interfaces, frontends, and tools</ulink>:
-                    Documentation on how to create interfaces and frontends
-                    for Git.</para></listitem>
-            </itemizedlist>
-        </note>
-    </section>
-
-    <section id='best-practices-autobuilders'>
-        <title>Autobuilders</title>
-
-        <para>
-            Autobuilders are often the core of a development project.
-            It is here that changes from individual developers are brought
-            together and centrally tested and subsequent decisions about
-            releases can be made.
-            Autobuilders also allow for "continuous integration" style
-            testing of software components and regression identification
-            and tracking.
-        </para>
-
-        <para>
-            See "<ulink url='http://autobuilder.yoctoproject.org'>Yocto Project Autobuilder</ulink>"
-            for more information and links to buildbot.
-            The Yocto Project team has found this implementation
-            works well in this role.
-            A public example of this is the Yocto Project
-            Autobuilders, which we use to test the overall health of the
-            project.
-        </para>
-
-        <para>
-            The features of this system are:
-            <itemizedlist>
-                <listitem><para>Highlights when commits break the build.
-                    </para></listitem>
-                <listitem><para>Populates an sstate cache from which
-                    developers can pull rather than requiring local
-                    builds.</para></listitem>
-                <listitem><para>Allows commit hook triggers,
-                    which trigger builds when commits are made.
-                    </para></listitem>
-                <listitem><para>Allows triggering of automated image booting
-                    and testing under the QuickEMUlator (QEMU).
-                    </para></listitem>
-                <listitem><para>Supports incremental build testing and
-                    from-scratch builds.</para></listitem>
-                <listitem><para>Shares output that allows developer
-                    testing and historical regression investigation.
-                    </para></listitem>
-                <listitem><para>Creates output that can be used for releases.
-                    </para></listitem>
-                <listitem><para>Allows scheduling of builds so that resources
-                    can be used efficiently.</para></listitem>
-            </itemizedlist>
-        </para>
-    </section>
-
-    <section id='best-practices-policies-and-change-flow'>
-        <title>Policies and Change Flow</title>
-
-        <para>
-            The Yocto Project itself uses a hierarchical structure and a
-            pull model.
-            Scripts exist to create and send pull requests
-            (i.e. <filename>create-pull-request</filename> and
-            <filename>send-pull-request</filename>).
-            This model is in line with other open source projects where
-            maintainers are responsible for specific areas of the project
-            and a single maintainer handles the final "top-of-tree" merges.
-        </para>
-
-        <note>
-            You can also use a more collective push model.
-            The <filename>gitolite</filename> software supports both the
-            push and pull models quite easily.
-        </note>
-
-        <para>
-            As with any development environment, it is important
-            to document the policy used as well as any main project
-            guidelines so they are understood by everyone.
-            It is also a good idea to have well structured
-            commit messages, which are usually a part of a project's
-            guidelines.
-            Good commit messages are essential when looking back in time and
-            trying to understand why changes were made.
-        </para>
-
-        <para>
-            If you discover that changes are needed to the core layer of the
-            project, it is worth sharing those with the community as soon
-            as possible.
-            Chances are if you have discovered the need for changes, someone
-            else in the community needs them also.
-        </para>
-    </section>
-
-    <section id='best-practices-summary'>
-        <title>Summary</title>
-
-        <para>
-            This section summarizes the key recommendations described in the
-            previous sections:
-            <itemizedlist>
-                <listitem><para>Use <link linkend='git'>Git</link>
-                    as the source control system.</para></listitem>
-                <listitem><para>Maintain your Metadata in layers that make sense
-                    for your situation.
-                    See the "<link linkend='understanding-and-creating-layers'>Understanding
-                    and Creating Layers</link>" section for more information on
-                    layers.</para></listitem>
-                <listitem><para>
-                    Separate the project's Metadata and code by using
-                    separate Git repositories.
-                    See the
-                    "<link linkend='yocto-project-repositories'>Yocto Project Source Repositories</link>"
-                    section for information on these repositories.
-                    See the
-                    "<link linkend='getting-setup'>Getting Set Up</link>"
-                    section for information on how to set up local Git
-                    repositories for related upstream Yocto Project
-                    Git repositories.
-                    </para></listitem>
-                <listitem><para>Set up the directory for the shared state cache
-                    (<ulink url='&YOCTO_DOCS_REF_URL;#var-SSTATE_DIR'><filename>SSTATE_DIR</filename></ulink>)
-                    where it makes sense.
-                    For example, set up the sstate cache on a system used
-                    by developers in the same organization and share the
-                    same source directories on their machines.
-                    </para></listitem>
-                <listitem><para>Set up an Autobuilder and have it populate the
-                    sstate cache and source directories.</para></listitem>
-                <listitem><para>The Yocto Project community encourages you
-                    to send patches to the project to fix bugs or add features.
-                    If you do submit patches, follow the project commit
-                    guidelines for writing good commit messages.
-                    See the "<link linkend='how-to-submit-a-change'>How to Submit a Change</link>"
-                    section.</para></listitem>
-                <listitem><para>Send changes to the core sooner than later
-                    as others are likely to run into the same issues.
-                    For some guidance on mailing lists to use, see the list in the
-                    "<link linkend='how-to-submit-a-change'>How to Submit a Change</link>"
-                    section.
-                    For a description of the available mailing lists, see the
-                    "<ulink url='&YOCTO_DOCS_REF_URL;#resources-mailinglist'>Mailing Lists</ulink>"
-                    section in the Yocto Project Reference Manual.
-                    </para></listitem>
-            </itemizedlist>
-        </para>
-    </section>
-</section>
-
-<section id='yocto-project-repositories'>
-    <title>Yocto Project Source Repositories</title>
-
-    <para>
-        The Yocto Project team maintains complete source repositories for all
-        Yocto Project files at
-        <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi'></ulink>.
-        This web-based source code browser is organized into categories by
-        function such as IDE Plugins, Matchbox, Poky, Yocto Linux Kernel, and
-        so forth.
-        From the interface, you can click on any particular item in the "Name"
-        column and see the URL at the bottom of the page that you need to clone
-        a Git repository for that particular item.
-        Having a local Git repository of the
-        <link linkend='source-directory'>Source Directory</link>, which is
-        usually named "poky", allows
-        you to make changes, contribute to the history, and ultimately enhance
-        the Yocto Project's tools, Board Support Packages, and so forth.
-    </para>
-
-    <para>
-        For any supported release of Yocto Project, you can also go to the
-        <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink> and
-        select the "Downloads" tab and get a released tarball of the
-        <filename>poky</filename> repository or any supported BSP tarballs.
-        Unpacking these tarballs gives you a snapshot of the released
-        files.
-        <note><title>Notes</title>
-            <itemizedlist>
-                <listitem><para>
-                    The recommended method for setting up the Yocto Project
-                    <link linkend='source-directory'>Source Directory</link>
-                    and the files for supported BSPs
-                    (e.g., <filename>meta-intel</filename>) is to use
-                    <link linkend='git'>Git</link> to create a local copy of
-                    the upstream repositories.
-                    </para></listitem>
-                <listitem><para>
-                    Be sure to always work in matching branches for both
-                    the selected BSP repository and the
-                    <link linkend='source-directory'>Source Directory</link>
-                    (i.e. <filename>poky</filename>) repository.
-                    For example, if you have checked out the "master" branch
-                    of <filename>poky</filename> and you are going to use
-                    <filename>meta-intel</filename>, be sure to checkout the
-                    "master" branch of <filename>meta-intel</filename>.
-                    </para></listitem>
-            </itemizedlist>
-        </note>
-    </para>
-
-    <para>
-        In summary, here is where you can get the project files needed for development:
-        <itemizedlist>
-            <listitem><para id='source-repositories'><emphasis><ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi'>Source Repositories:</ulink></emphasis>
-                This area contains IDE Plugins, Matchbox, Poky, Poky Support, Tools, Yocto Linux Kernel, and Yocto
-                Metadata Layers.
-                You can create local copies of Git repositories for each of these areas.</para>
-                <para>
-                <imagedata fileref="figures/source-repos.png" align="center" width="6in" depth="4in" />
                 </para></listitem>
-            <listitem><para><anchor id='index-downloads' /><emphasis><ulink url='&YOCTO_DL_URL;/releases/'>Index of /releases:</ulink></emphasis>
-                This is an index of releases such as
-                the <trademark class='trade'>Eclipse</trademark>
-                Yocto Plug-in, miscellaneous support, Poky, Pseudo, installers for cross-development toolchains,
-                and all released versions of Yocto Project in the form of images or tarballs.
-                Downloading and extracting these files does not produce a local copy of the
-                Git repository but rather a snapshot of a particular release or image.</para>
-                <para>
-                <imagedata fileref="figures/index-downloads.png" align="center" width="6in" depth="3.5in" />
-                </para></listitem>
-            <listitem><para><emphasis>"Downloads" page for the
-                <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>:</emphasis>
-                Access this page by going to the website and then selecting
-                the "Downloads" tab.
-                This page allows you to download any Yocto Project
-                release or Board Support Package (BSP) in tarball form.
-                The tarballs are similar to those found in the
-                <ulink url='&YOCTO_DL_URL;/releases/'>Index of /releases:</ulink> area.</para>
-                <para>
-                <imagedata fileref="figures/yp-download.png" align="center" width="6in" depth="4in" />
-            </para></listitem>
-        </itemizedlist>
-    </para>
-</section>
+            <listitem><para>
+                <emphasis>Set up an Autobuilder:</emphasis>
+                Autobuilders are often the core of the development
+                environment.
+                It is here that changes from individual developers are brought
+                together and centrally tested, and where subsequent decisions
+                about releases are made.
+                Autobuilders also allow for "continuous integration" style
+                testing of software components and regression identification
+                and tracking.</para>
 
-<section id='yocto-project-terms'>
-    <title>Yocto Project Terms</title>
+                <para>See "<ulink url='http://autobuilder.yoctoproject.org'>Yocto Project Autobuilder</ulink>"
+                for more information and links to buildbot.
+                The Yocto Project team has found this implementation
+                works well in this role.
+                A public example of this is the Yocto Project
+                Autobuilders, which we use to test the overall health of the
+                project.</para>
 
-    <para>
-        Following is a list of terms and definitions users new to the Yocto Project development
-        environment might find helpful.
-        While some of these terms are universal, the list includes them just in case:
-        <itemizedlist>
-            <listitem><para><emphasis>Append Files:</emphasis> Files that append build information to
-                a recipe file.
-                Append files are known as BitBake append files and <filename>.bbappend</filename> files.
-                The OpenEmbedded build system expects every append file to have a corresponding
-                recipe (<filename>.bb</filename>) file.
-                Furthermore, the append file and corresponding recipe file
-                must use the same root filename.
-                The filenames can differ only in the file type suffix used (e.g.
-                <filename>formfactor_0.0.bb</filename> and <filename>formfactor_0.0.bbappend</filename>).
-                </para>
-                <para>Information in append files extends or overrides the
-                information in the similarly-named recipe file.
-                For an example of an append file in use, see the
-                "<link linkend='using-bbappend-files'>Using .bbappend Files</link>" section.
+                <para>The features of this system are:
+                <itemizedlist>
+                    <listitem><para>
+                         Highlights when commits break the build.
+                         </para></listitem>
+                    <listitem><para>
+                        Populates an sstate cache from which
+                        developers can pull rather than requiring local
+                        builds.
+                        </para></listitem>
+                    <listitem><para>
+                        Allows commit hook triggers,
+                        which trigger builds when commits are made.
+                        </para></listitem>
+                    <listitem><para>
+                        Allows triggering of automated image booting
+                        and testing under the QuickEMUlator (QEMU).
+                        </para></listitem>
+                    <listitem><para>
+                        Supports incremental build testing and
+                        from-scratch builds.
+                        </para></listitem>
+                    <listitem><para>
+                        Shares output that allows developer
+                        testing and historical regression investigation.
+                        </para></listitem>
+                    <listitem><para>
+                        Creates output that can be used for releases.
+                        </para></listitem>
+                    <listitem><para>
+                        Allows scheduling of builds so that resources
+                        can be used efficiently.
+                        </para></listitem>
+               </itemizedlist>
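+
+               <para>For example, a minimal sketch of the automated QEMU
+               image testing mentioned above is to enable the
+               <filename>testimage</filename> class in the builder's
+               <filename>local.conf</filename>:
+               <literallayout class='monospaced'>
+     INHERIT += "testimage"
+               </literallayout>
+               and then have the Autobuilder run the test task against a
+               built image (the image name here is only an example):
+               <literallayout class='monospaced'>
+     $ bitbake core-image-minimal -c testimage
+               </literallayout>
+               How these steps are triggered from buildbot is specific to
+               your Autobuilder configuration.</para>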
+               </para></listitem>
+           <listitem><para>
+               <emphasis>Set up Test Machines:</emphasis>
+               Use a small number of shared, high performance systems
+               for testing purposes.
+               Developers can use these systems for wider, more
+               extensive testing while they continue to develop
+               locally using their primary development system.
+               </para></listitem>
+           <listitem><para>
+               <emphasis>Document Policies and Change Flow:</emphasis>
+               The Yocto Project itself uses a hierarchical structure and a
+                pull model.
+                Scripts exist to create and send pull requests
+                (i.e. <filename>create-pull-request</filename> and
+                <filename>send-pull-request</filename>).
+                This model is in line with other open source projects where
+                maintainers are responsible for specific areas of the project
+                and a single maintainer handles the final "top-of-tree" merges.
                 <note>
-                    Append files can also use wildcard patterns in their version numbers
-                    so they can be applied to more than one version of the underlying recipe file.
-                </note>
-                </para></listitem>
-            <listitem><para id='bitbake-term'><emphasis>BitBake:</emphasis>
-                The task executor and scheduler used by the OpenEmbedded build
-                system to build images.
-                For more information on BitBake, see the
-                <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual</ulink>.
-                </para></listitem>
-            <listitem>
-                <para id='build-directory'><emphasis>Build Directory:</emphasis>
-                This term refers to the area used by the OpenEmbedded build
-                system for builds.
-                The area is created when you <filename>source</filename> the
-                setup environment script that is found in the Source Directory
-                (i.e. <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
-                or
-                <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>).
-                The <ulink url='&YOCTO_DOCS_REF_URL;#var-TOPDIR'><filename>TOPDIR</filename></ulink>
-                variable points to the Build Directory.</para>
-
-                <para>
-                    You have a lot of flexibility when creating the Build
-                    Directory.
-                    Following are some examples that show how to create the
-                    directory.
-                    The examples assume your
-                    <link linkend='source-directory'>Source Directory</link> is
-                    named <filename>poky</filename>:
-                   <itemizedlist>
-                        <listitem><para>Create the Build Directory inside your
-                            Source Directory and let the name of the Build
-                            Directory default to <filename>build</filename>:
-                            <literallayout class='monospaced'>
-     $ cd $HOME/poky
-     $ source &OE_INIT_FILE;
-                            </literallayout></para></listitem>
-                        <listitem><para>Create the Build Directory inside your
-                            home directory and specifically name it
-                            <filename>test-builds</filename>:
-                            <literallayout class='monospaced'>
-     $ cd $HOME
-     $ source poky/&OE_INIT_FILE; test-builds
-                            </literallayout></para></listitem>
-                        <listitem><para>
-                            Provide a directory path and
-                            specifically name the Build Directory.
-                            Any intermediate folders in the pathname must
-                            exist.
-                            This next example creates a Build Directory named
-                            <filename>YP-&POKYVERSION;</filename>
-                            in your home directory within the existing
-                            directory <filename>mybuilds</filename>:
-                            <literallayout class='monospaced'>
-     $ cd $HOME
-     $ source $HOME/poky/&OE_INIT_FILE; $HOME/mybuilds/YP-&POKYVERSION;
-                            </literallayout></para></listitem>
-                    </itemizedlist>
-                    <note>
-                        By default, the Build Directory contains
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-TMPDIR'><filename>TMPDIR</filename></ulink>,
-                        which is a temporary directory the build system uses for
-                        its work.
-                        <filename>TMPDIR</filename> cannot be under NFS.
-                        Thus, by default, the Build Directory cannot be under NFS.
-                        However, if you need the Build Directory to be under NFS,
-                        you can set this up by setting <filename>TMPDIR</filename>
-                        in your <filename>local.conf</filename> file
-                        to use a local drive.
-                        Doing so effectively separates <filename>TMPDIR</filename>
-                        from <filename>TOPDIR</filename>, which is the Build
-                        Directory.
-                    </note>
-                    </para></listitem>
-            <listitem><para><emphasis>Classes:</emphasis> Files that provide for logic encapsulation
-                and inheritance so that commonly used patterns can be defined once and then easily used
-                in multiple recipes.
-                For reference information on the Yocto Project classes, see the
-                "<ulink url='&YOCTO_DOCS_REF_URL;#ref-classes'>Classes</ulink>" chapter of the
-                Yocto Project Reference Manual.
-                Class files end with the <filename>.bbclass</filename> filename extension.
-                </para></listitem>
-            <listitem><para><emphasis>Configuration File:</emphasis>
-                Configuration information in various <filename>.conf</filename>
-                files provides global definitions of variables.
-                The <filename>conf/local.conf</filename> configuration file in
-                the
-                <link linkend='build-directory'>Build Directory</link>
-                contains user-defined variables that affect every build.
-                The <filename>meta-poky/conf/distro/poky.conf</filename>
-                configuration file defines Yocto "distro" configuration
-                variables used only when building with this policy.
-                Machine configuration files, which
-                are located throughout the
-                <link linkend='source-directory'>Source Directory</link>, define
-                variables for specific hardware and are only used when building
-                for that target (e.g. the
-                <filename>machine/beaglebone.conf</filename> configuration
-                file defines variables for the Texas Instruments ARM Cortex-A8
-                development board).
-                Configuration files end with a <filename>.conf</filename>
-                filename extension.
-                </para></listitem>
-            <listitem><para id='cross-development-toolchain'>
-                <emphasis>Cross-Development Toolchain:</emphasis>
-                    In general, a cross-development toolchain is a collection of
-                    software development tools and utilities that run on one
-                    architecture and allow you to develop software for a
-                    different, or targeted, architecture.
-                    These toolchains contain cross-compilers, linkers, and
-                    debuggers that are specific to the target architecture.
-                </para>
-
-                <para>The Yocto Project supports two different cross-development
-                    toolchains:
-                    <itemizedlist>
-                        <listitem><para>A toolchain only used by and within
-                            BitBake when building an image for a target
-                            architecture.</para></listitem>
-                        <listitem><para>A relocatable toolchain used outside of
-                            BitBake by developers when developing applications
-                            that will run on a targeted device.
-                            </para></listitem>
-                    </itemizedlist>
-                </para>
-
-                <para>
-                    Creation of these toolchains is simple and automated.
-                    For information on toolchain concepts as they apply to the
-                    Yocto Project, see the
-                    "<ulink url='&YOCTO_DOCS_REF_URL;#cross-development-toolchain-generation'>Cross-Development Toolchain Generation</ulink>"
-                    section in the Yocto Project Reference Manual.
-                    You can also find more information on using the
-                    relocatable toolchain in the
-                    <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
-                </para></listitem>
-            <listitem><para><emphasis>Image:</emphasis>
-                An image is an artifact of the BitBake build process given
-                a collection of recipes and related Metadata.
-                Images are the binary output that run on specific hardware or
-                QEMU and are used for specific use-cases.
-                For a list of the supported image types that the Yocto Project provides, see the
-                "<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>"
-                chapter in the Yocto Project Reference Manual.</para></listitem>
-            <listitem><para id='layer'><emphasis>Layer:</emphasis> A collection of recipes representing the core,
-                a BSP, or an application stack.
-                For a discussion specifically on BSP Layers, see the
-                "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP Layers</ulink>"
-                section in the Yocto Project Board Support Packages (BSP)
-                Developer's Guide.</para></listitem>
-            <listitem><para id='metadata'><emphasis>Metadata:</emphasis>
-                The files that BitBake parses when building an image.
-                In general, Metadata includes recipes, classes, and
-                configuration files.
-                In the context of the kernel ("kernel Metadata"),
-                it refers to Metadata in the <filename>meta</filename>
-                branches of the kernel source Git repositories.
-                </para></listitem>
-            <listitem><para id='oe-core'><emphasis>OE-Core:</emphasis> A core set of Metadata originating
-                with OpenEmbedded (OE) that is shared between OE and the Yocto Project.
-                This Metadata is found in the <filename>meta</filename> directory of the
-                <link linkend='source-directory'>Source Directory</link>.</para></listitem>
-            <listitem><para id='build-system-term'><emphasis>OpenEmbedded Build System:</emphasis>
-                The build system specific to the Yocto Project.
-                The OpenEmbedded build system is based on another project known
-                as "Poky", which uses
-                <link linkend='bitbake-term'>BitBake</link> as the task
-                executor.
-                Throughout the Yocto Project documentation set, the
-                OpenEmbedded build system is sometimes referred to simply
-                as "the build system".
-                If other build systems, such as a host or target build system
-                are referenced, the documentation clearly states the
-                difference.
-                <note>
-                    For some historical information about Poky, see the
-                    <link linkend='poky'>Poky</link> term.
-                </note>
-                </para></listitem>
-            <listitem><para><emphasis>Package:</emphasis>
-                In the context of the Yocto Project, this term refers to a
-                recipe's packaged output produced by BitBake (i.e. a
-                "baked recipe").
-                A package is generally the compiled binaries produced from the
-                recipe's sources.
-                You "bake" something by running it through BitBake.</para>
-                <para>It is worth noting that the term "package" can, in general, have subtle
-                meanings.  For example, the packages referred to in the
-                "<ulink url='&YOCTO_DOCS_QS_URL;#packages'>The Build Host Packages</ulink>" section are
-                compiled binaries that, when installed, add functionality to your Linux
-                distribution.</para>
-                <para>Another point worth noting is that historically within the Yocto Project,
-                recipes were referred to as packages - thus, the existence of several BitBake
-                variables that are seemingly mis-named,
-                (e.g. <ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink>,
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>, and
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-PE'><filename>PE</filename></ulink>).
-                </para></listitem>
-            <listitem><para><emphasis>Package Groups:</emphasis>
-                Arbitrary groups of software Recipes.
-                You use package groups to hold recipes that, when built,
-                usually accomplish a single task.
-                For example, a package group could contain the recipes for a
-                company’s proprietary or value-add software.
-                Or, the package group could contain the recipes that enable
-                graphics.
-                A package group is really just another recipe.
-                Because package group files are recipes, they end with the
-                <filename>.bb</filename> filename extension.</para></listitem>
-            <listitem><para id='poky'><emphasis>Poky:</emphasis>
-                The term "poky" can mean several things.
-                In its most general sense, it is an open-source
-                project that was initially developed by OpenedHand.
-                With OpenedHand, poky was developed off of the existing
-                OpenEmbedded build system becoming a commercially
-                supportable build system for embedded Linux.
-                After Intel Corporation acquired OpenedHand, the
-                project poky became the basis for the Yocto Project's
-                build system.</para>
-                <para>Within the Yocto Project source repositories,
-                <filename>poky</filename> exists as a separate Git
-                repository you can clone to yield a local copy on your
-                host system.
-                Thus, "poky" can refer to the local copy of the Source
-                Directory used for development within the Yocto
-                Project.</para>
-                <para>Finally, "poky" can refer to the default
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO'><filename>DISTRO</filename></ulink>
-                (i.e. distribution) created when you use the Yocto
-                Project in conjunction with the
-                <filename>poky</filename> repository to build an image.
-                </para></listitem>
-            <listitem><para><emphasis>Recipe:</emphasis>
-                A set of instructions for building packages.
-                A recipe describes where you get source code, which patches
-                to apply, how to configure the source, how to compile it and so on.
-                Recipes also describe dependencies for libraries or for other
-                recipes.
-                Recipes represent the logical unit of execution, the software
-                to build, the images to build, and use the
-                <filename>.bb</filename> file extension.
-                </para></listitem>
-            <listitem>
-                <para id='source-directory'><emphasis>Source Directory:</emphasis>
-                This term refers to the directory structure created as a result
-                of creating a local copy of the <filename>poky</filename> Git
-                repository <filename>git://git.yoctoproject.org/poky</filename>
-                or expanding a released <filename>poky</filename> tarball.
-                <note>
-                    Creating a local copy of the <filename>poky</filename>
-                    Git repository is the recommended method for setting up
-                    your Source Directory.
-                </note>
-                Sometimes you might hear the term "poky directory" used to refer
-                to this directory structure.
-                <note>
-                    The OpenEmbedded build system does not support file or
-                    directory names that contain spaces.
-                    Be sure that the Source Directory you use does not contain
-                    these types of names.
+                    You can also use a more collective push model.
+                    The <filename>gitolite</filename> software supports both the
+                    push and pull models quite easily.
                 </note></para>
 
-                <para>The Source Directory contains BitBake, Documentation,
-                Metadata and other files that all support the Yocto Project.
-                Consequently, you must have the Source Directory in place on
-                your development system in order to do any development using
-                the Yocto Project.</para>
+                <para>As with any development environment, it is important
+                to document the policy used as well as any main project
+                guidelines so they are understood by everyone.
+                It is also a good idea to have well-structured
+                commit messages, which are usually part of a project's
+                guidelines.
+                Good commit messages are essential when looking back in time and
+                trying to understand why changes were made.</para>
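+
+                <para>As an illustration only, a commit message that follows
+                these kinds of guidelines might look like the following
+                sketch, where the component name, description, and sign-off
+                line are all placeholders:
+                <literallayout class='monospaced'>
+     component: summarize the change in one short line
+
+     Explain what the change does and why it is needed, wrapping the
+     body at roughly 72 characters per line.
+
+     Signed-off-by: Your Name &lt;your.name@example.com&gt;
+                </literallayout></para>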
 
-                <para>When you create a local copy of the Git repository, you
-                can name the repository anything you like.
-                Throughout much of the documentation, "poky"
-                is used as the name of the top-level folder of the local copy of
-                the poky Git repository.
-                So, for example, cloning the <filename>poky</filename> Git
-                repository results in a local Git repository whose top-level
-                folder is also named "poky".</para>
-
-                <para>While it is not recommended that you use tarball expansion
-                to set up the Source Directory, if you do, the top-level
-                directory name of the Source Directory is derived from the
-                Yocto Project release tarball.
-                For example, downloading and unpacking
-                <filename>&YOCTO_POKY_TARBALL;</filename> results in a
-                Source Directory whose root folder is named
-                <filename>&YOCTO_POKY;</filename>.</para>
-
-                <para>It is important to understand the differences between the
-                Source Directory created by unpacking a released tarball as
-                compared to cloning
-                <filename>git://git.yoctoproject.org/poky</filename>.
-                When you unpack a tarball, you have an exact copy of the files
-                based on the time of release - a fixed release point.
-                Any changes you make to your local files in the Source Directory
-                are on top of the release and will remain local only.
-                On the other hand, when you clone the <filename>poky</filename>
-                Git repository, you have an active development repository with
-                access to the upstream repository's branches and tags.
-                In this case, any local changes you make to the local
-                Source Directory can be later applied to active development
-                branches of the upstream <filename>poky</filename> Git
-                repository.</para>
-
-                <para>For more information on concepts related to Git
-                repositories, branches, and tags, see the
-                "<link linkend='repositories-tags-and-branches'>Repositories, Tags, and Branches</link>"
-                section.</para></listitem>
-            <listitem><para><emphasis>Task:</emphasis>
-                A unit of execution for BitBake (e.g.
-                <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-compile'><filename>do_compile</filename></ulink>,
-                <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-fetch'><filename>do_fetch</filename></ulink>,
-                <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-patch'><filename>do_patch</filename></ulink>,
-                and so forth).
+                <para>If you discover that changes are needed to the core
+                layer of the project, it is worth sharing those with the
+                community as soon as possible.
+                Chances are that if you have discovered the need for changes,
+                someone else in the community needs them as well.
                 </para></listitem>
-            <listitem><para><emphasis>Upstream:</emphasis> A reference to source code or repositories
-                that are not local to the development system but located in a master area that is controlled
-                by the maintainer of the source code.
-                For example, in order for a developer to work on a particular piece of code, they need to
-                first get a copy of it from an "upstream" source.</para></listitem>
-        </itemizedlist>
-    </para>
-</section>
-
-<section id='licensing'>
-    <title>Licensing</title>
-
-    <para>
-        Because open source projects are open to the public, they have different licensing structures in place.
-        License evolution for both Open Source and Free Software has an interesting history.
-        If you are interested in this history, you can find basic information here:
-    <itemizedlist>
-        <listitem><para><ulink url='http://en.wikipedia.org/wiki/Open-source_license'>Open source license history</ulink>
-            </para></listitem>
-        <listitem><para><ulink url='http://en.wikipedia.org/wiki/Free_software_license'>Free software license
-            history</ulink></para></listitem>
-    </itemizedlist>
-    </para>
-
-    <para>
-        In general, the Yocto Project is broadly licensed under the Massachusetts Institute of Technology
-        (MIT) License.
-        MIT licensing permits the reuse of software within proprietary software as long as the
-        license is distributed with that software.
-        MIT is also compatible with the GNU General Public License (GPL).
-        Patches to the Yocto Project follow the upstream licensing scheme.
-        You can find information on the MIT license
-        <ulink url='http://www.opensource.org/licenses/mit-license.php'>here</ulink>.
-        You can find information on the GNU GPL <ulink url='http://www.opensource.org/licenses/LGPL-3.0'>
-        here</ulink>.
-    </para>
-
-    <para>
-        When you build an image using the Yocto Project, the build process uses a
-        known list of licenses to ensure compliance.
-        You can find this list in the
-        <link linkend='source-directory'>Source Directory</link> at
-        <filename>meta/files/common-licenses</filename>.
-        Once the build completes, the list of all licenses found and used during that build are
-        kept in the
-        <link linkend='build-directory'>Build Directory</link> at
-        <filename>tmp/deploy/licenses</filename>.
-    </para>
-
-    <para>
-        If a module requires a license that is not in the base list, the build process
-        generates a warning during the build.
-        These tools make it easier for a developer to be certain of the licenses with which
-        their shipped products must comply.
-        However, even with these tools it is still up to the developer to resolve potential licensing issues.
-    </para>
-
-    <para>
-        The base list of licenses used by the build process is a combination of the Software Package
-        Data Exchange (SPDX) list and the Open Source Initiative (OSI) projects.
-        <ulink url='http://spdx.org'>SPDX Group</ulink> is a working group of the Linux Foundation
-        that maintains a specification
-        for a standard format for communicating the components, licenses, and copyrights
-        associated with a software package.
-        <ulink url='http://opensource.org'>OSI</ulink> is a corporation dedicated to the Open Source
-        Definition and the effort for reviewing and approving licenses that
-        conform to the Open Source Definition (OSD).
-    </para>
-
-    <para>
-        You can find a list of the combined SPDX and OSI licenses that the
-        Yocto Project uses in the
-        <filename>meta/files/common-licenses</filename> directory in your
-        <link linkend='source-directory'>Source Directory</link>.
-    </para>
-
-    <para>
-        For information that can help you maintain compliance with various
-        open source licensing during the lifecycle of a product created using
-        the Yocto Project, see the
-        "<link linkend='maintaining-open-source-license-compliance-during-your-products-lifecycle'>Maintaining Open Source License Compliance During Your Product's Lifecycle</link>"
-        section.
-    </para>
-</section>
-
-<section id='git'>
-    <title>Git</title>
-
-    <para>
-        The Yocto Project makes extensive use of Git,
-        which is a free, open source distributed version control system.
-        Git supports distributed development, non-linear development, and can handle large projects.
-        It is best that you have some fundamental understanding of how Git tracks projects and
-        how to work with Git if you are going to use the Yocto Project for development.
-        This section provides a quick overview of how Git works and provides you with a summary
-        of some essential Git commands.
-    </para>
-
-    <para>
-        For more information on Git, see
-        <ulink url='http://git-scm.com/documentation'></ulink>.
-        If you need to download Git, go to <ulink url='http://git-scm.com/download'></ulink>.
-    </para>
-
-    <section id='repositories-tags-and-branches'>
-        <title>Repositories, Tags, and Branches</title>
-
-        <para>
-            As mentioned earlier in the section
-            "<link linkend='yocto-project-repositories'>Yocto Project Source Repositories</link>",
-            the Yocto Project maintains source repositories at
-            <ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink>.
-            If you look at this web-interface of the repositories, each item is a separate
-            Git repository.
-        </para>
-
-        <para>
-            Git repositories use branching techniques that track content change (not files)
-            within a project (e.g. a new feature or updated documentation).
-            Creating a tree-like structure based on project divergence allows for excellent historical
-            information over the life of a project.
-            This methodology also allows for an environment from which you can do lots of
-            local experimentation on projects as you develop changes or new features.
-        </para>
-
-        <para>
-            A Git repository represents all development efforts for a given project.
-            For example, the Git repository <filename>poky</filename> contains all changes
-            and developments for Poky over the course of its entire life.
-            That means that all changes that make up all releases are captured.
-            The repository maintains a complete history of changes.
-        </para>
-
-        <para>
-            You can create a local copy of any repository by "cloning" it with the Git
-            <filename>clone</filename> command.
-            When you clone a Git repository, you end up with an identical copy of the
-            repository on your development system.
-            Once you have a local copy of a repository, you can take steps to develop locally.
-            For examples on how to clone Git repositories, see the
-            "<link linkend='getting-setup'>Getting Set Up</link>" section.
-        </para>
-
-        <para>
-            It is important to understand that Git tracks content change and
-            not files.
-            Git uses "branches" to organize different development efforts.
-            For example, the <filename>poky</filename> repository has
-            several branches that include the current
-            <filename>&DISTRO_NAME_NO_CAP;</filename> branch, the
-            <filename>master</filename> branch, and many branches for past
-            Yocto Project releases.
-            You can see all the branches by going to
-            <ulink url='&YOCTO_GIT_URL;/cgit.cgi/poky/'></ulink> and
-            clicking on the
-            <filename><ulink url='&YOCTO_GIT_URL;/cgit.cgi/poky/refs/heads'>[...]</ulink></filename>
-            link beneath the "Branch" heading.
-        </para>
-
-        <para>
-            Each of these branches represents a specific area of development.
-            The <filename>master</filename> branch represents the current or most recent
-            development.
-            All other branches represent offshoots of the <filename>master</filename>
-            branch.
-        </para>
-
-        <para>
-            When you create a local copy of a Git repository, the copy has the same set
-            of branches as the original.
-            This means you can use Git to create a local working area (also called a branch)
-            that tracks a specific development branch from the source Git repository.
-            in other words, you can define your local Git environment to work on any development
-            branch in the repository.
-            To help illustrate, here is a set of commands that creates a local copy of the
-            <filename>poky</filename> Git repository and then creates and checks out a local
-            Git branch that tracks the Yocto Project &DISTRO; Release (&DISTRO_NAME;) development:
-            <literallayout class='monospaced'>
-     $ cd ~
-     $ git clone git://git.yoctoproject.org/poky
-     $ cd poky
-     $ git checkout -b &DISTRO_NAME_NO_CAP; origin/&DISTRO_NAME_NO_CAP;
-            </literallayout>
-            In this example, the name of the top-level directory of your local
-            <link linkend='source-directory'>Source Directory</link>
-            is "poky" and the name of that local working area (local branch)
-            you just created and checked out is "&DISTRO_NAME_NO_CAP;".
-            The files in your local repository now reflect the same files that
-            are in the "&DISTRO_NAME_NO_CAP;" development branch of the
-            Yocto Project's "poky" upstream repository.
-            It is important to understand that when you create and checkout a
-            local working branch based on a branch name,
-            your local environment matches the "tip" of that development branch
-            at the time you created your local branch, which could be
-            different from the files at the time of a similarly named release.
-            In other words, creating and checking out a local branch based on
-            the "&DISTRO_NAME_NO_CAP;" branch name is not the same as
-            cloning and checking out the "master" branch.
-            Keep reading to see how you create a local snapshot of a Yocto
-            Project Release.
-        </para>
-
-        <para>
-            Git uses "tags" to mark specific changes in a repository.
-            Typically, a tag is used to mark a special point such as the final
-            change before a project is released.
-            You can see the tags used with the <filename>poky</filename> Git
-            repository by going to
-            <ulink url='&YOCTO_GIT_URL;/cgit.cgi/poky/'></ulink> and
-            clicking on the
-            <filename><ulink url='&YOCTO_GIT_URL;/cgit.cgi/poky/refs/tags'>[...]</ulink></filename>
-            link beneath the "Tag" heading.
-        </para>
-
-        <para>
-            Some key tags are
-            <filename>dizzy-12.0.0</filename>,
-            <filename>fido-13.0.0</filename>,
-            <filename>jethro-14.0.0</filename>, and
-            <filename>&DISTRO_NAME_NO_CAP;-&POKYVERSION;</filename>.
-            These tags represent Yocto Project releases.
-        </para>
-
-        <para>
-            When you create a local copy of the Git repository, you also have access to all the
-            tags.
-            Similar to branches, you can create and checkout a local working Git branch based
-            on a tag name.
-            When you do this, you get a snapshot of the Git repository that reflects
-            the state of the files when the change was made associated with that tag.
-            The most common use is to checkout a working branch that matches a specific
-            Yocto Project release.
-            Here is an example:
-            <literallayout class='monospaced'>
-     $ cd ~
-     $ git clone git://git.yoctoproject.org/poky
-     $ cd poky
-     $ git checkout -b my-&DISTRO_NAME_NO_CAP;-&POKYVERSION; &DISTRO_NAME_NO_CAP;-&POKYVERSION;
-            </literallayout>
-            In this example, the name of the top-level directory of your local Yocto Project
-            Files Git repository is <filename>poky</filename>.
-            And, the name of the local branch you have created and checked out is
-            <filename>my-&DISTRO_NAME_NO_CAP;-&POKYVERSION;</filename>.
-            The files in your repository now exactly match the Yocto Project &DISTRO;
-            Release tag (<filename>&DISTRO_NAME_NO_CAP;-&POKYVERSION;</filename>).
-            It is important to understand that when you create and checkout a local
-            working branch based on a tag, your environment matches a specific point
-            in time and not the entire development branch.
-        </para>
-    </section>
-
-    <section id='basic-commands'>
-        <title>Basic Commands</title>
-
-        <para>
-            Git has an extensive set of commands that lets you manage changes and perform
-            collaboration over the life of a project.
-            Conveniently though, you can manage with a small set of basic operations and workflows
-            once you understand the basic philosophy behind Git.
-            You do not have to be an expert in Git to be functional.
-            A good place to look for instruction on a minimal set of Git commands is
-            <ulink url='http://git-scm.com/documentation'>here</ulink>.
-            If you need to download Git, you can do so
-            <ulink url='http://git-scm.com/download'>here</ulink>, although
-            any reasonably current Linux distribution should already have an
-            installable package for Git.
-        </para>
-
-        <para>
-            If you do not know much about Git, you should educate
-            yourself by visiting the links previously mentioned.
-        </para>
-
-        <para>
-            The following list briefly describes some basic Git operations as a way to get started.
-            As with any set of commands, this list (in most cases) simply shows the base command and
-            omits the many arguments they support.
-            See the Git documentation for complete descriptions and strategies on how to use these commands:
-            <itemizedlist>
-                <listitem><para><emphasis><filename>git init</filename>:</emphasis> Initializes an empty Git repository.
-                    You cannot use Git commands unless you have a <filename>.git</filename> repository.</para></listitem>
-                <listitem><para><emphasis><filename>git clone</filename>:</emphasis>
-                    Creates a local clone of a Git repository.
-                    During collaboration, this command allows you to create a
-                    local Git repository that is on equal footing with a fellow
-                    developer’s Git repository.
-                    </para></listitem>
-                <listitem><para><emphasis><filename>git add</filename>:</emphasis> Stages updated file contents
-                    to the index that
-                    Git uses to track changes.
-                    You must stage all files that have changed before you can commit them.</para></listitem>
-                <listitem><para><emphasis><filename>git commit</filename>:</emphasis> Creates a "commit" that documents
-                    the changes you made.
-                    Commits are used for historical purposes, for determining if a maintainer of a project
-                    will allow the change, and for ultimately pushing the change from your local Git repository
-                    into the project’s upstream (or master) repository.</para></listitem>
-                <listitem><para><emphasis><filename>git status</filename>:</emphasis> Reports any modified files that
-                    possibly need to be staged and committed.</para></listitem>
-                <listitem><para><emphasis><filename>git checkout</filename> <replaceable>branch-name</replaceable>:</emphasis> Changes
-                    your working branch.
-                    This command is analogous to "cd".</para></listitem>
-                <listitem><para><emphasis><filename>git checkout –b</filename> <replaceable>working-branch</replaceable>:</emphasis> Creates
-                    a working branch on your local machine where you can isolate work.
-                    It is a good idea to use local branches when adding specific features or changes.
-                    This way if you do not like what you have done you can easily get rid of the work.</para></listitem>
-                <listitem><para><emphasis><filename>git branch</filename>:</emphasis> Reports
-                    existing local branches and
-                    tells you the branch in which you are currently working.</para></listitem>
-                <listitem><para><emphasis><filename>git branch -D</filename> <replaceable>branch-name</replaceable>:</emphasis>
-                    Deletes an existing local branch.
-                    You need to be in a local branch other than the one you are deleting
-                    in order to delete <replaceable>branch-name</replaceable>.</para></listitem>
-                <listitem><para><emphasis><filename>git pull</filename>:</emphasis> Retrieves information
-                    from an upstream Git
-                    repository and places it in your local Git repository.
-                    You use this command to make sure you are synchronized with the repository
-                    from which you are basing changes (.e.g. the master branch).</para></listitem>
-                <listitem><para><emphasis><filename>git push</filename>:</emphasis>
-                    Sends all your committed local changes to an upstream Git
-                    repository (e.g. a contribution repository).
-                    The maintainer of the project draws from these repositories
-                    when adding changes to the project’s master repository or
-                    other development branch.
-                    </para></listitem>
-                <listitem><para><emphasis><filename>git merge</filename>:</emphasis> Combines or adds changes from one
-                    local branch of your repository with another branch.
-                    When you create a local Git repository, the default branch is named "master".
-                    A typical workflow is to create a temporary branch for isolated work, make and commit your
-                    changes, switch to your local master branch, merge the changes from the temporary branch into the
-                    local master branch, and then delete the temporary branch.</para></listitem>
-                <listitem><para><emphasis><filename>git cherry-pick</filename>:</emphasis> Choose and apply specific
-                    commits from one branch into another branch.
-                    There are times when you might not be able to merge all the changes in one branch with
-                    another but need to pick out certain ones.</para></listitem>
-                <listitem><para><emphasis><filename>gitk</filename>:</emphasis> Provides a GUI view of the branches
-                    and changes in your local Git repository.
-                    This command is a good way to graphically see where things have diverged in your
-                    local repository.</para></listitem>
-                <listitem><para><emphasis><filename>git log</filename>:</emphasis> Reports a history of your changes to the
-                    repository.</para></listitem>
-                <listitem><para><emphasis><filename>git diff</filename>:</emphasis> Displays line-by-line differences
-                    between your local working files and the same files in the upstream Git repository that your
-                    branch currently tracks.</para></listitem>
-            </itemizedlist>
-        </para>
-    </section>
-</section>
-
-<section id='workflows'>
-    <title>Workflows</title>
-
-    <para>
-        This section provides some overview on workflows using Git.
-        In particular, the information covers basic practices that describe roles and actions in a
-        collaborative development environment.
-        Again, if you are familiar with this type of development environment, you might want to just
-        skip this section.
-    </para>
-
-    <para>
-        The Yocto Project files are maintained using Git in a "master" branch whose Git history
-        tracks every change and whose structure provides branches for all diverging functionality.
-        Although there is no need to use Git, many open source projects do so.
-        For the Yocto Project, a key individual called the "maintainer" is responsible for the "master"
-        branch of a given Git repository.
-        The "master" branch is the “upstream” repository where the final builds of the project occur.
-        The maintainer is responsible for accepting changes from other developers and for
-        organizing the underlying branch structure to reflect release strategies and so forth.
-        <note>For information on finding out who is responsible for (maintains)
-            a particular area of code, see the
-            "<link linkend='how-to-submit-a-change'>How to Submit a Change</link>"
-            section.
-        </note>
-    </para>
-
-    <para>
-        The project also has an upstream contribution Git repository named
-        <filename>poky-contrib</filename>.
-        You can see all the branches in this repository using the web interface
-        of the
-        <ulink url='&YOCTO_GIT_URL;'>Source Repositories</ulink> organized
-        within the "Poky Support" area.
-        These branches temporarily hold changes to the project that have been
-        submitted or committed by the Yocto Project development team and by
-        community members who contribute to the project.
-        The maintainer determines if the changes are qualified to be moved
-        from the "contrib" branches into the "master" branch of the Git
-        repository.
-    </para>
-
-    <para>
-        Developers (including contributing community members) create and maintain cloned repositories
-        of the upstream "master" branch.
-        These repositories are local to their development platforms and are used to develop changes.
-        When a developer is satisfied with a particular feature or change, they "push" the changes
-        to the appropriate "contrib" repository.
-    </para>
-
-    <para>
-        Developers are responsible for keeping their local repository up-to-date with "master".
-        They are also responsible for straightening out any conflicts that might arise within files
-        that are being worked on simultaneously by more than one person.
-        All this work is done locally on the developer’s machines before anything is pushed to a
-        "contrib" area and examined at the maintainer’s level.
-    </para>
-
-    <para>
-        A somewhat formal method exists by which developers commit changes and push them into the
-        "contrib" area and subsequently request that the maintainer include them into "master"
-        This process is called “submitting a patch” or "submitting a change."
-        For information on submitting patches and changes, see the
-        "<link linkend='how-to-submit-a-change'>How to Submit a Change</link>" section.
-    </para>
-
-    <para>
-        To summarize the environment:  a single point of entry exists for
-        changes into the project’s "master" branch of the Git repository,
-        which is controlled by the project’s maintainer.
-        And, a set of developers exist who independently develop, test, and
-        submit changes to "contrib" areas for the maintainer to examine.
-        The maintainer then chooses which changes are going to become a
-        permanent part of the project.
-    </para>
-
-    <para>
-        <imagedata fileref="figures/git-workflow.png" width="6in" depth="3in" align="left" scalefit="1" />
-    </para>
-
-    <para>
-        While each development environment is unique, there are some best practices or methods
-        that help development run smoothly.
-        The following list describes some of these practices.
-        For more information about Git workflows, see the workflow topics in the
-        <ulink url='http://book.git-scm.com'>Git Community Book</ulink>.
-        <itemizedlist>
-            <listitem><para><emphasis>Make Small Changes:</emphasis> It is best to keep the changes you commit
-                small as compared to bundling many disparate changes into a single commit.
-                This practice not only keeps things manageable but also allows the maintainer
-                to more easily include or refuse changes.</para>
-                <para>It is also good practice to leave the repository in a state that allows you to
-                still successfully build your project.  In other words, do not commit half of a feature,
-                then add the other half as a separate, later commit.
-                Each commit should take you from one buildable project state to another
-                buildable state.</para></listitem>
-            <listitem><para><emphasis>Use Branches Liberally:</emphasis> It is very easy to create, use, and
-                delete local branches in your working Git repository.
-                You can name these branches anything you like.
-                It is helpful to give them names associated with the particular feature or change
-                on which you are working.
-                Once you are done with a feature or change and have merged it
-                into your local master branch, simply discard the temporary
-                branch.</para></listitem>
-            <listitem><para><emphasis>Merge Changes:</emphasis> The <filename>git merge</filename>
-                command allows you to take the
-                changes from one branch and fold them into another branch.
-                This process is especially helpful when more than a single developer might be working
-                on different parts of the same feature.
-                Merging changes also automatically identifies any collisions or "conflicts"
-                that might happen as a result of the same lines of code being altered by two different
-                developers.</para></listitem>
-            <listitem><para><emphasis>Manage Branches:</emphasis> Because branches are easy to use, you should
-                use a system where branches indicate varying levels of code readiness.
-                For example, you can have a "work" branch to develop in, a "test" branch where the code or
-                change is tested, a "stage" branch where changes are ready to be committed, and so forth.
-                As your project develops, you can merge code across the branches to reflect ever-increasing
-                stable states of the development.</para></listitem>
-            <listitem><para><emphasis>Use Push and Pull:</emphasis> The push-pull workflow is based on the
-                concept of developers "pushing" local commits to a remote repository, which is
-                usually a contribution repository.
-                This workflow is also based on developers "pulling" known states of the project down into their
-                local development repositories.
-                The workflow easily allows you to pull changes submitted by other developers from the
-                upstream repository into your work area ensuring that you have the most recent software
-                on which to develop.
-                The Yocto Project has two scripts named <filename>create-pull-request</filename> and
-                <filename>send-pull-request</filename> that ship with the release to facilitate this
-                workflow.
-                You can find these scripts in the <filename>scripts</filename>
-                folder of the
-                <link linkend='source-directory'>Source Directory</link>.
-                For information on how to use these scripts, see the
-                "<link linkend='pushing-a-change-upstream'>Using Scripts to Push a Change Upstream and Request a Pull</link>" section.
+            <listitem><para>
+                <emphasis>Development Environment Summary:</emphasis>
+                Aside from the previous steps, some best practices exist
+                within the Yocto Project development environment.
+                Consider the following:
+                <itemizedlist>
+                    <listitem><para>
+                        Use <ulink url='&YOCTO_DOCS_REF_URL;#git'>Git</ulink>
+                        as the source control system.
+                        </para></listitem>
+                    <listitem><para>
+                        Maintain your Metadata in layers that make sense
+                        for your situation.
+                        See the "<link linkend='understanding-and-creating-layers'>Understanding
+                        and Creating Layers</link>" section for more information on
+                        layers.
+                        </para></listitem>
+                    <listitem><para>
+                        Separate the project's Metadata and code by using
+                        separate Git repositories.
+                        See the
+                        "<ulink url='&YOCTO_DOCS_REF_URL;#yocto-project-repositories'>Yocto Project Source Repositories</ulink>"
+                        section for information on these repositories.
+                        See the
+                        "<link linkend='working-with-yocto-project-source-files'>Working With Yocto Project Source Files</link>"
+                        section for information on how to set up local Git
+                        repositories for related upstream Yocto Project
+                        Git repositories.
+                        </para></listitem>
+                    <listitem><para>
+                        Set up the directory for the shared state cache
+                        (<ulink url='&YOCTO_DOCS_REF_URL;#var-SSTATE_DIR'><filename>SSTATE_DIR</filename></ulink>)
+                        where it makes sense.
+                        For example, developers in the same organization can
+                        share a single sstate cache as well as the same source
+                        directories across their machines.
+                        An example configuration appears after these steps.
+                        </para></listitem>
+                    <listitem><para>
+                        Set up an Autobuilder and have it populate the
+                        sstate cache and source directories.
+                        </para></listitem>
+                    <listitem><para>
+                        The Yocto Project community encourages you
+                        to send patches to the project to fix bugs or add features.
+                        If you do submit patches, follow the project commit
+                        guidelines for writing good commit messages.
+                        See the "<link linkend='how-to-submit-a-change'>Submitting a Change to the Yocto Project</link>"
+                        section.
+                        </para></listitem>
+                    <listitem><para>
+                        Send changes to the core sooner rather than later,
+                        as others are likely to run into the same issues.
+                        For some guidance on mailing lists to use, see the list in the
+                        "<link linkend='how-to-submit-a-change'>Submitting a Change to the Yocto Project</link>"
+                        section.
+                        For a description of the available mailing lists, see the
+                        "<ulink url='&YOCTO_DOCS_REF_URL;#resources-mailinglist'>Mailing Lists</ulink>"
+                        section in the Yocto Project Reference Manual.
+                        </para></listitem>
+                </itemizedlist>
                 </para></listitem>
-            <listitem><para><emphasis>Patch Workflow:</emphasis> This workflow allows you to notify the
-                maintainer through an email that you have a change (or patch) you would like considered
-                for the "master" branch of the Git repository.
-                To send this type of change, you format the patch and then send the email using the Git commands
-                <filename>git format-patch</filename> and <filename>git send-email</filename>.
-                For information on how to use these scripts, see the
-                "<link linkend='how-to-submit-a-change'>How to Submit a Change</link>"
-                section.
-                </para></listitem>
-        </itemizedlist>
-    </para>
-</section>
-
-<section id='tracking-bugs'>
-    <title>Tracking Bugs</title>
-
-    <para>
-        The Yocto Project uses its own implementation of
-        <ulink url='http://www.bugzilla.org/about/'>Bugzilla</ulink> to track bugs.
-        Implementations of Bugzilla work well for group development because they track bugs and code
-        changes, can be used to communicate changes and problems with developers, can be used to
-        submit and review patches, and can be used to manage quality assurance.
-        The home page for the Yocto Project implementation of Bugzilla is
-        <ulink url='&YOCTO_BUGZILLA_URL;'>&YOCTO_BUGZILLA_URL;</ulink>.
-    </para>
-
-    <para>
-        Sometimes it is helpful to submit, investigate, or track a bug against the Yocto Project itself
-        such as when discovering an issue with some component of the build system that acts contrary
-        to the documentation or your expectations.
-        Following is the general procedure for submitting a new bug using the Yocto Project
-        Bugzilla.
-        You can find more information on defect management, bug tracking, and feature request
-        processes all accomplished through the Yocto Project Bugzilla on the
-        <ulink url='&YOCTO_WIKI_URL;/wiki/Bugzilla_Configuration_and_Bug_Tracking'>wiki page</ulink>.
-        <orderedlist>
-            <listitem><para>Always use the Yocto Project implementation of Bugzilla to submit
-                a bug.</para></listitem>
-            <listitem><para>When submitting a new bug, be sure to choose the appropriate
-                Classification, Product, and Component for which the issue was found.
-                Defects for the Yocto Project fall into one of seven classifications:
-                Yocto Project Components, Infrastructure, Build System &amp; Metadata,
-                Documentation, QA/Testing, Runtime and Hardware.
-                Each of these Classifications break down into multiple Products and, in some
-                cases, multiple Components.</para></listitem>
-            <listitem><para>Use the bug form to choose the correct Hardware and Architecture
-                for which the bug applies.</para></listitem>
-            <listitem><para>Indicate the Yocto Project version you were using when the issue
-                occurred.</para></listitem>
-            <listitem><para>Be sure to indicate the Severity of the bug.
-                Severity communicates how the bug impacted your work.</para></listitem>
-            <listitem><para>Select the appropriate "Documentation change" item
-                for the bug.
-                Fixing a bug may or may not affect the Yocto Project
-                documentation.</para></listitem>
-            <listitem><para>Provide a brief summary of the issue.
-                Try to limit your summary to just a line or two and be sure to capture the
-                essence of the issue.</para></listitem>
-            <listitem><para>Provide a detailed description of the issue.
-                You should provide as much detail as you can about the context, behavior, output,
-                and so forth that surrounds the issue.
-                You can even attach supporting files for output from logs by
-                using the "Add an attachment" button.</para></listitem>
-            <listitem><para>Be sure to copy the appropriate people in the
-                "CC List" for the bug.
-                See the "<link linkend='how-to-submit-a-change'>How to Submit a Change</link>"
-                section for information about finding out who is responsible
-                for code.</para></listitem>
-            <listitem><para>Submit the bug by clicking the "Submit Bug" button.</para></listitem>
         </orderedlist>
     </para>
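+
+    <para>
+        As an illustration of the shared state cache suggestion in the
+        steps above, the following minimal <filename>local.conf</filename>
+        sketch points the shared state cache and download directory at a
+        shared location.
+        The paths shown are hypothetical and depend entirely on your
+        organization's setup:
+        <literallayout class='monospaced'>
+     # Hypothetical shared locations, for example on an NFS mount
+     SSTATE_DIR = "/nfs/yocto/sstate-cache"
+     DL_DIR = "/nfs/yocto/downloads"
+        </literallayout>
+    </para>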
 </section>
 
+<section id='submitting-a-defect-against-the-yocto-project'>
+    <title>Submitting a Defect Against the Yocto Project</title>
+
+    <para>
+        Use the Yocto Project implementation of
+        <ulink url='http://www.bugzilla.org/about/'>Bugzilla</ulink>
+        to submit a defect (bug) against the Yocto Project.
+        For additional information on this implementation of Bugzilla, see the
+        "<ulink url='&YOCTO_DOCS_REF_URL;#resources-bugtracker'>Yocto Project Bugzilla</ulink>"
+        section in the Yocto Project Reference Manual.
+        For more detail on any of the following steps, see the Yocto Project
+        <ulink url='&YOCTO_WIKI_URL;/wiki/Bugzilla_Configuration_and_Bug_Tracking'>Bugzilla wiki page</ulink>.
+    </para>
+
+    <para>
+        Use the following general steps to submit a bug:
+
+        <orderedlist>
+            <listitem><para>
+                Open the Yocto Project implementation of
+                <ulink url='&YOCTO_BUGZILLA_URL;'>Bugzilla</ulink>.
+                </para></listitem>
+            <listitem><para>
+                Click "File a Bug" to enter a new bug.
+                </para></listitem>
+            <listitem><para>
+                Choose the appropriate "Classification", "Product", and
+                "Component" for which the bug was found.
+                Bugs for the Yocto Project fall into one of several
+                classifications, which in turn break down into several
+                products and components.
+                For example, for a bug against the
+                <filename>meta-intel</filename> layer, you would choose
+                "Build System, Metadata &amp; Runtime", "BSPs", and
+                "bsps-meta-intel", respectively.
+                </para></listitem>
+            <listitem><para>
+                Choose the "Version" of the Yocto Project for which you found
+                the bug (e.g. &DISTRO;).
+                </para></listitem>
+            <listitem><para>
+                Determine and select the "Severity" of the bug.
+                The severity indicates how the bug impacted your work.
+                </para></listitem>
+            <listitem><para>
+                Choose the "Hardware" that the bug impacts.
+                </para></listitem>
+            <listitem><para>
+                Choose the "Architecture" that the bug impacts.
+                </para></listitem>
+            <listitem><para>
+                Choose a "Documentation change" item for the bug.
+                Fixing a bug might or might not affect the Yocto Project
+                documentation.
+                If you are unsure of the impact to the documentation, select
+                "Don't Know".
+                </para></listitem>
+            <listitem><para>
+                Provide a brief "Summary" of the bug.
+                Try to limit your summary to just a line or two and be sure
+                to capture the essence of the bug.
+                </para></listitem>
+            <listitem><para>
+                Provide a detailed "Description" of the bug.
+                You should provide as much detail as you can about the context,
+                behavior, output, and so forth that surrounds the bug.
+                You can even attach supporting files for output from logs by
+                using the "Add an attachment" button.
+                </para></listitem>
+            <listitem><para>
+                Click the "Submit Bug" button to submit the bug.
+                A new Bugzilla number is assigned to the bug and the defect
+                is logged in the bug tracking system.
+                </para></listitem>
+        </orderedlist>
+        Once you file a bug, the bug is processed by the Yocto Project Bug
+        Triage Team and further details concerning the bug are assigned
+        (e.g. priority and owner).
+        You are the "Submitter" of the bug, and any further categorization,
+        progress, or comments on the bug result in Bugzilla sending you an
+        automated email about that change or progress.
+    </para>
+</section>
+
 <section id='how-to-submit-a-change'>
-    <title>How to Submit a Change</title>
+    <title>Submitting a Change to the Yocto Project</title>
 
     <para>
         Contributions to the Yocto Project and OpenEmbedded are very welcome.
-        Because the system is extremely configurable and flexible, we recognize that developers
-        will want to extend, configure or optimize it for their specific uses.
-        You should send patches to the appropriate mailing list so that they
-        can be reviewed and merged by the appropriate maintainer.
+        Because the system is extremely configurable and flexible, we recognize
+        that developers will want to extend, configure or optimize it for
+        their specific uses.
     </para>
 
-    <section id='submit-change-overview'>
-        <title>Overview</title>
+    <para>
+        The Yocto Project uses a mailing list and a patch-based workflow
+        that is similar to the Linux kernel but contains important
+        differences.
+        In general, a mailing list exists through which you can submit
+        patches.
+        You should send patches to the appropriate mailing list so that they
+        can be reviewed and merged by the appropriate maintainer.
+        The specific mailing list you need to use depends on the
+        location of the code you are changing.
+        Each component (e.g. layer) should have a
+        <filename>README</filename> file that indicates where to send
+        the changes and which process to follow.
+    </para>
 
-        <para>
-            The Yocto Project uses a mailing list and patch-based workflow
-            that is similar to the Linux kernel but contains important
-            differences.
-            In general, a mailing list exists through which you can submit
-            patches.
-            The specific mailing list you need to use depends on the
-            location of the code you are changing.
-            Each component (e.g. layer) should have a
-            <filename>README</filename> file that indicates where to send
-            the changes and which process to follow.
-        </para>
+    <para>
+        You can send the patch to the mailing list using whichever approach
+        you feel comfortable with to generate the patch.
+        Once sent, the patch is usually reviewed by the community at large.
+        If somebody has concerns with the patch, they will usually voice
+        their concern over the mailing list.
+        If a patch does not receive any negative reviews, the maintainer of
+        the affected layer typically takes the patch, tests it, and then
+        based on successful testing, merges the patch.
+    </para>
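+
+    <para>
+        As a sketch only, one common approach generates the patch with
+        <filename>git format-patch</filename> and sends it with
+        <filename>git send-email</filename>.
+        The mailing list address and patch file name below are examples only;
+        the correct list depends on the component you are changing, as
+        described below, and <filename>git send-email</filename> must first
+        be configured with your mail settings:
+        <literallayout class='monospaced'>
+     $ git format-patch -1
+     $ git send-email --to=openembedded-core@lists.openembedded.org 0001-*.patch
+        </literallayout>
+    </para>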
 
-        <para>
-            You can send the patch to the mailing list using whichever approach
-            you feel comfortable with to generate the patch.
-            Once sent, the patch is usually reviewed by the community at large.
-            If somebody has concerns with the patch, they will usually voice
-            their concern over the mailing list.
-            If a patch does not receive any negative reviews, the maintainer of
-            the affected layer typically takes the patch, tests it, and then
-            based on successful testing, merges the patch.
-        </para>
+    <para id='figuring-out-the-mailing-list-to-use'>
+        The "poky" repository, which is the Yocto Project's reference build
+        environment, is a hybrid repository that contains several
+        individual pieces (e.g. BitBake, Metadata, documentation,
+        and so forth) built using the combo-layer tool.
+        The upstream location used for submitting changes varies by
+        component:
+        <itemizedlist>
+            <listitem><para>
+                <emphasis>Core Metadata:</emphasis>
+                Send your patch to the
+                <ulink url='http://lists.openembedded.org/mailman/listinfo/openembedded-core'>openembedded-core</ulink>
+                mailing list.  For example, a change to anything under
+                the <filename>meta</filename> or
+                <filename>scripts</filename> directories should be sent
+                to this mailing list.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>BitBake:</emphasis>
+                For changes to BitBake (i.e. anything under the
+                <filename>bitbake</filename> directory), send your patch
+                to the
+                <ulink url='http://lists.openembedded.org/mailman/listinfo/bitbake-devel'>bitbake-devel</ulink>
+                mailing list.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>"meta-*" trees:</emphasis>
+                These trees contain Metadata.
+                Use the
+                <ulink url='https://lists.yoctoproject.org/listinfo/poky'>poky</ulink>
+                mailing list.
+                </para></listitem>
+        </itemizedlist>
+    </para>
 
-        <para>
-            Specific to OpenEmbedded-Core, two commonly used testing trees
-            exist:
-            <itemizedlist>
-                <listitem><para>
-                    <emphasis>"ross/mut" branch:</emphasis>
-                    The "mut" (master-under-test) tree
-                    exists in the <filename>poky-contrib</filename> repository
-                    in the
-                    <ulink url='&YOCTO_GIT_URL;'>Yocto Project source repositories</ulink>.
-                    </para></listitem>
-                <listitem><para>
-                    <emphasis>"master-next" branch:</emphasis>
-                    This branch is part of the main
-                    "poky" repository in the Yocto Project source repositories.
-                    </para></listitem>
-            </itemizedlist>
-            Maintainers use these branches to test submissions prior to merging
-            patches.
-            Thus, you can get an idea of the status of a patch based on
-            whether the patch has been merged into one of these branches.
-        </para>
+    <para>
+        For changes to other layers hosted in the Yocto Project source
+        repositories (i.e. <filename>yoctoproject.org</filename>), tools,
+        and the Yocto Project documentation, use the
+        <ulink url='https://lists.yoctoproject.org/listinfo/yocto'>Yocto Project</ulink>
+        general mailing list.
+        <note>
+            Sometimes a layer's documentation specifies to use a
+            particular mailing list.
+            If so, use that list.
+        </note>
+        For additional recipes that do not fit into the core Metadata, you
+        should determine which layer the recipe should go into and submit
+        the change in the manner recommended by the documentation (e.g.
+        the <filename>README</filename> file) supplied with the layer.
+        If in doubt, please ask on the Yocto general mailing list or on
+        the openembedded-devel mailing list.
+    </para>
 
-        <para>
-            This system is imperfect and patches can sometimes get lost in the
+    <para>
+        You can also push a change upstream and ask a maintainer to
+        pull the change into the component's upstream repository.
+        You do this by pushing to an upstream contribution repository.
+        See the
+        "<ulink url='&YOCTO_DOCS_REF_URL;#workflows'>Workflows</ulink>"
+        section in the Yocto Project Reference Manual for additional
+        concepts on working in the Yocto Project development environment.
+    </para>
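+
+    <para>
+        As a rough sketch only, pushing a local branch to a contribution
+        repository might look like the following.
+        The remote URL and branch naming convention shown here are
+        hypothetical; use the values documented for the actual contribution
+        repository you are targeting:
+        <literallayout class='monospaced'>
+     $ git remote add contrib ssh://git@git.example.com/project-contrib
+     $ git push contrib my-feature:refs/heads/yourname/my-feature
+        </literallayout>
+    </para>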
+
+    <para>
+        Two commonly used testing repositories exist for
+        OpenEmbedded-Core:
+        <itemizedlist>
+            <listitem><para>
+                <emphasis>"ross/mut" branch:</emphasis>
+                The "mut" (master-under-test) tree
+                exists in the <filename>poky-contrib</filename> repository
+                in the
+                <ulink url='&YOCTO_GIT_URL;'>Yocto Project source repositories</ulink>.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>"master-next" branch:</emphasis>
+                This branch is part of the main
+                "poky" repository in the Yocto Project source repositories.
+                </para></listitem>
+        </itemizedlist>
+        Maintainers use these branches to test submissions prior to merging
+        patches.
+        Thus, you can get an idea of the status of a patch based on
+        whether the patch has been merged into one of these branches.
+        <note>
+            This system is imperfect and changes can sometimes get lost in the
             flow.
-            Asking about the status of a patch is reasonable if the patch
-            has been idle for a while with no feedback.
+            Asking about the status of a patch or change is reasonable if the
+            change has been idle for a while with no feedback.
             The Yocto Project does have plans to use
             <ulink url='https://en.wikipedia.org/wiki/Patchwork_(software)'>Patchwork</ulink>
             to track the status of patches and also to automatically preview
             patches.
-        </para>
+        </note>
+    </para>
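+
+    <para>
+        For example, assuming you have cloned the "poky" repository and its
+        default remote is named "origin", the following sketch fetches the
+        latest branches and searches "master-next" for your commits:
+        <literallayout class='monospaced'>
+     $ git fetch origin
+     $ git log --oneline --author="Your Name" origin/master-next
+        </literallayout>
+    </para>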
+
+    <para>
+        The following sections provide procedures for submitting a change.
+    </para>
+
+    <section id='pushing-a-change-upstream'>
+        <title>Using Scripts to Push a Change Upstream and Request a Pull</title>
 
         <para>
-            The following sections provide general instructions for both
-            pushing changes upstream and for submitting changes as patches.
-        </para>
-    </section>
-
-    <section id='submit-change-submissions-to-poky'>
-        <title>Submissions to Poky</title>
-
-        <para>
-            The "poky" repository, which is the Yocto Project's reference build
-            environment, is a hybrid repository that contains several
-            individual pieces (e.g. BitBake, OpenEmbedded-Core, meta-yocto,
-            documentation, and so forth) built using the combo-layer tool.
-            The upstream location used for submitting changes varies by
-            component:
-            <itemizedlist>
-                <listitem><para>
-                    <emphasis>Core Metadata:</emphasis>
-                    Send your patch to the
-                    <ulink url='http://lists.openembedded.org/mailman/listinfo/openembedded-core'>openembedded-core</ulink>
-                    mailing list.  For example, a change to anything under
-                    the <filename>meta</filename> or
-                    <filename>scripts</filename> directories should be sent
-                    to this mailing list.
-                    </para></listitem>
-                <listitem><para>
-                    <emphasis>BitBake:</emphasis>
-                    For changes to BitBake (i.e. anything under the
-                    <filename>bitbake</filename> directory), send your patch
-                    to the
-                    <ulink url='http://lists.openembedded.org/mailman/listinfo/bitbake-devel'>bitbake-devel</ulink>
-                    mailing list.
-                    </para></listitem>
-                <listitem><para>
-                    <emphasis>"meta-yocto-bsp" and "meta-poky" trees:</emphasis>
-                    These trees are
-                    part of the "meta-yocto" repository in the Yocto Project
-                    source repositories.
-                    Use the
-                    <ulink url='https://lists.yoctoproject.org/listinfo/poky'>poky</ulink>
-                    mailing list.
-                    </para></listitem>
-            </itemizedlist>
-        </para>
-    </section>
-
-    <section id='submit-change-submissions-to-other-layers'>
-        <title>Submissions to Other Layers</title>
-
-        <para>
-            For changes to other layers hosted in the Yocto Project source
-            repositories (i.e. <filename>yoctoproject.org</filename>), tools,
-            and the Yocto Project documentation, use the
-            <ulink url='https://lists.yoctoproject.org/listinfo/yocto'>Yocto Project</ulink>
-            general mailing list.
+            Follow this procedure to push a change to an upstream "contrib"
+            Git repository:
             <note>
-                Sometimes a layer's documentation specifies to use a
-                particular mailing list.
-                If so, use that list.
+                You can find general Git information on how to push a change
+                upstream in the
+                <ulink url='http://git-scm.com/book/en/v2/Distributed-Git-Distributed-Workflows'>Git Community Book</ulink>.
             </note>
-            For additional recipes that do not fit into the core Metadata, you
-            should determine which layer the recipe should go into and submit
-            the change in the manner recommended by the documentation (e.g.
-            the <filename>README</filename> file) supplied with the layer.
-            If in doubt, please ask on the Yocto general mailing list or on
-            the openembedded-devel mailing list.
-        </para>
-    </section>
-
-    <section id='submit-change-patch-submission-details'>
-        <title>Patch Submission Details</title>
-
-        <para>
-            When submitting any change, you can check who you should be
-            notifying.
-            Use either of these methods to find out:
-            <itemizedlist>
+            <orderedlist>
                 <listitem><para>
-                    <emphasis>Maintenance File:</emphasis>
-                    Examine the <filename>maintainers.inc</filename> file, which is
-                    located in the
-                    <link linkend='source-directory'>Source Directory</link>
-                    at <filename>meta-poky/conf/distro/include</filename>, to
-                    see who is responsible for code.
+                    <emphasis>Make Your Changes Locally:</emphasis>
+                    Make your changes in your local Git repository.
+                    You should make small, controlled, isolated changes.
+                    Keeping changes small and isolated aids review,
+                    makes merging/rebasing easier and keeps the change
+                    history clean should anyone need to refer to it in
+                    future.
                     </para></listitem>
                 <listitem><para>
-                    <emphasis>Search by File:</emphasis>
-                    Using <link linkend='git'>Git</link>, you can enter the
-                    following command to bring up a short list of all commits
-                    against a specific file:
-                    <literallayout class='monospaced'>
-     git shortlog -- <replaceable>filename</replaceable>
-                    </literallayout>
-                    Just provide the name of the file for which you are interested.
-                    The information returned is not ordered by history but does
-                    include a list of all committers grouped by name.
-                    From the list, you can see who is responsible for the bulk of
-                    the changes against the file.
+                    <emphasis>Stage Your Changes:</emphasis>
+                    Stage your changes by using the <filename>git add</filename>
+                    command on each file you changed.
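+                    For example, the following hypothetical command stages
+                    two changed files at once:
+                    <literallayout class='monospaced'>
+     $ git add <replaceable>path/to/file1</replaceable> <replaceable>path/to/file2</replaceable>
+                    </literallayout>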
                     </para></listitem>
-            </itemizedlist>
-        </para>
-
-        <para>
-            For a list of the Yocto Project and related mailing lists, see the
-            "<ulink url='&YOCTO_DOCS_REF_URL;#resources-mailinglist'>Mailing lists</ulink>"
-            section in the Yocto Project Reference Manual.
-        </para>
-
-        <para>
-            When you send a patch, be sure to include a "Signed-off-by:"
-            line in the same style as required by the Linux kernel.
-            Adding this line signifies that you, the submitter, have agreed
-            to the Developer's Certificate of Origin 1.1 as follows:
-            <literallayout class='monospaced'>
+                <listitem><para id='making-sure-you-have-correct-commit-information'>
+                    <emphasis>Commit Your Changes:</emphasis>
+                    Commit the change by using the
+                    <filename>git commit</filename> command.
+                    Make sure your commit information follows these
+                    accepted conventions:
+                    <itemizedlist>
+                        <listitem><para>
+                            Be sure to include a "Signed-off-by:" line in the
+                            same style as required by the Linux kernel.
+                            Adding this line signifies that you, the submitter,
+                            have agreed to the Developer's Certificate of
+                            Origin 1.1 as follows:
+                            <literallayout class='monospaced'>
      Developer's Certificate of Origin 1.1
 
      By making a contribution to this project, I certify that:
@@ -1595,121 +655,183 @@
          personal information I submit with it, including my sign-off) is
          maintained indefinitely and may be redistributed consistent with
          this project or the open source license(s) involved.
-            </literallayout>
-        </para>
-
-        <para>
-            In a collaborative environment, it is necessary to have some sort
-            of standard or method through which you submit changes.
-            Otherwise, things could get quite chaotic.
-            One general practice to follow is to make small, controlled changes.
-            Keeping changes small and isolated aids review, makes
-            merging/rebasing easier and keeps the change history clean should
-            anyone need to refer to it in future.
-        </para>
-
-        <para>
-            When you make a commit, you must follow certain standards
-            established by the OpenEmbedded and Yocto Project development teams.
-            For each commit, you must provide a single-line summary of the
-            change and you should almost always provide a more detailed
-            description of what you did (i.e. the body of the commit message).
-            The only exceptions for not providing a detailed description would
-            be if your change is a simple, self-explanatory change that needs
-            no further description beyond the summary.
-            Here are the guidelines for composing a commit message:
-            <itemizedlist>
-                <listitem><para>
-                    Provide a single-line, short summary of the change.
-                    This summary is typically viewable in the "shortlist" of
-                    changes.
-                    Thus, providing something short and descriptive that
-                    gives the reader a summary of the change is useful when
-                    viewing a list of many commits.
-                    You should prefix this short description with the recipe
-                    name (if changing a recipe), or else with the short form
-                    path to the file being changed.
-                    </para></listitem>
-                <listitem><para>
-                    For the body of the commit message, provide detailed
-                    information that describes what you changed, why you made
-                    the change, and the approach you used.
-                    It might also be helpful if you mention how you tested
-                    the change.
-                    Provide as much detail as you can in the body of the
-                    commit message.
-                    </para></listitem>
-                <listitem><para>
-                    If the change addresses a specific bug or issue that is
-                    associated with a bug-tracking ID, include a reference
-                    to that ID in your detailed description.
-                    For example, the Yocto Project uses a specific convention
-                    for bug references - any commit that addresses a specific
-                    bug should use the following form for the detailed
-                    description.
-                    Be sure to use the actual bug-tracking ID from
-                    Bugzilla for
-                    <replaceable>bug-id</replaceable>:
-                    <literallayout class='monospaced'>
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            Provide a single-line summary of the change
+                            and, if more explanation is needed, provide more
+                            detail in the body of the commit.
+                            This summary is typically viewable in the
+                            "shortlist" of changes.
+                            Thus, providing something short and descriptive
+                            that gives the reader a summary of the change is
+                            useful when viewing a list of many commits.
+                            You should prefix this short description with the
+                            recipe name (if changing a recipe), or else with
+                            the short form path to the file being changed.
+                            </para></listitem>
+                        <listitem><para>
+                            For the body of the commit message, provide
+                            detailed information that describes what you
+                            changed, why you made the change, and the approach
+                            you used.
+                            It might also be helpful if you mention how you
+                            tested the change.
+                            Provide as much detail as you can in the body of
+                            the commit message.
+                            <note>
+                                You do not need to provide a more detailed
+                                explanation of a change if the change is
+                                minor enough that the single-line summary
+                                provides all the information.
+                            </note>
+                            </para></listitem>
+                        <listitem><para>
+                            If the change addresses a specific bug or issue
+                            that is associated with a bug-tracking ID,
+                            include a reference to that ID in your detailed
+                            description.
+                            For example, the Yocto Project uses a specific
+                            convention for bug references - any commit that
+                            addresses a specific bug should use the following
+                            form for the detailed description.
+                            Be sure to use the actual bug-tracking ID from
+                            Bugzilla for
+                            <replaceable>bug-id</replaceable>:
+                            <literallayout class='monospaced'>
      Fixes [YOCTO #<replaceable>bug-id</replaceable>]
 
      <replaceable>detailed description of change</replaceable>
+                            </literallayout>
+                            </para></listitem>
+                    </itemizedlist>
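+                    For illustration only, a commit message that follows
+                    these conventions might look like the following, where
+                    the recipe name and wording are hypothetical:
+                    <literallayout class='monospaced'>
+     <replaceable>recipe-name</replaceable>: Fix hypothetical build failure
+
+     Fixes [YOCTO #<replaceable>bug-id</replaceable>]
+
+     Describe what you changed, why you made the change, the approach
+     you used, and how you tested the change.
+
+     Signed-off-by: <replaceable>Your Name</replaceable> &lt;<replaceable>your_email</replaceable>&gt;
+                    </literallayout>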
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Push Your Commits to a "Contrib" Upstream:</emphasis>
+                    If you have arranged for permissions to push to an
+                    upstream contrib repository, push the change to that
+                    repository:
+                    <literallayout class='monospaced'>
+     $ git push <replaceable>upstream_remote_repo</replaceable> <replaceable>local_branch_name</replaceable>
+                    </literallayout>
+                    For example, suppose you have permissions to push into the
+                    upstream <filename>meta-intel-contrib</filename>
+                    repository and you are working in a local branch named
+                    <replaceable>your_name</replaceable><filename>/README</filename>.
+                    The following command pushes your local commits to the
+                    <filename>meta-intel-contrib</filename> upstream
+                    repository and puts the commit in a branch named
+                    <replaceable>your_name</replaceable><filename>/README</filename>:
+                    <literallayout class='monospaced'>
+     $ git push meta-intel-contrib <replaceable>your_name</replaceable>/README
                     </literallayout>
                     </para></listitem>
-            </itemizedlist>
-        </para>
+                <listitem><para id='push-determine-who-to-notify'>
+                    <emphasis>Determine Who to Notify:</emphasis>
+                    Determine the maintainer or the mailing list
+                    that you need to notify for the change.</para>
 
-        <para>
-            You can find more guidance on creating well-formed commit messages
-            at this OpenEmbedded wiki page:
-            <ulink url='&OE_HOME_URL;/wiki/Commit_Patch_Message_Guidelines'></ulink>.
-        </para>
-    </section>
-
-    <section id='pushing-a-change-upstream'>
-        <title>Using Scripts to Push a Change Upstream and Request a Pull</title>
-
-        <para>
-            The basic flow for pushing a change to an upstream "contrib" Git repository is as follows:
-            <itemizedlist>
-                <listitem><para>Make your changes in your local Git repository.</para></listitem>
-                <listitem><para>Stage your changes by using the <filename>git add</filename>
-                    command on each file you changed.</para></listitem>
-                <listitem><para>
-                    Commit the change by using the
-                    <filename>git commit</filename> command.
-                    Be sure to provide a commit message that follows the
-                    project’s commit message standards as described earlier.
+                    <para>Before submitting any change, you need to be sure
+                    who the maintainer is or which mailing list you need
+                    to notify.
+                    Use either of these methods to find out:
+                    <itemizedlist>
+                        <listitem><para>
+                            <emphasis>Maintenance File:</emphasis>
+                            Examine the <filename>maintainers.inc</filename>
+                            file, which is located in the
+                            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+                            at
+                            <filename>meta/conf/distro/include</filename>,
+                            to see who is responsible for code.
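+                            For example, assuming you run the following
+                            hypothetical search from the top of your Source
+                            Directory, it lists any maintainer entries that
+                            mention a given recipe name:
+                            <literallayout class='monospaced'>
+     $ grep "<replaceable>recipe_name</replaceable>" meta/conf/distro/include/maintainers.inc
+                            </literallayout>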
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis>Search by File:</emphasis>
+                            Using <ulink url='&YOCTO_DOCS_REF_URL;#git'>Git</ulink>,
+                            you can enter the following command to bring up a
+                            short list of all commits against a specific file:
+                            <literallayout class='monospaced'>
+     git shortlog -- <replaceable>filename</replaceable>
+                            </literallayout>
+                            Just provide the name of the file in which you
+                            are interested.
+                            The information returned is not ordered by history
+                            but does include a list of everyone who has
+                            committed grouped by name.
+                            From the list, you can see who is responsible for
+                            the bulk of the changes against the file.
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis>Examine the List of Mailing Lists:</emphasis>
+                            For a list of the Yocto Project and related mailing
+                            lists, see the
+                            "<ulink url='&YOCTO_DOCS_REF_URL;#resources-mailinglist'>Mailing lists</ulink>"
+                            section in the Yocto Project Reference Manual.
+                            </para></listitem>
+                    </itemizedlist>
                     </para></listitem>
                 <listitem><para>
-                    Push the change to the upstream "contrib" repository by
-                    using the <filename>git push</filename> command.
-                    </para></listitem>
-                <listitem><para>Notify the maintainer that you have pushed a change by making a pull
-                    request.
-                    The Yocto Project provides two scripts that conveniently let you generate and send
-                    pull requests to the Yocto Project.
-                    These scripts are <filename>create-pull-request</filename> and
-                    <filename>send-pull-request</filename>.
-                    You can find these scripts in the <filename>scripts</filename> directory
-                    within the <link linkend='source-directory'>Source Directory</link>.</para>
-                    <para>Using these scripts correctly formats the requests without introducing any
-                    whitespace or HTML formatting.
-                    The maintainer that receives your patches needs to be able to save and apply them
-                    directly from your emails.
-                    Using these scripts is the preferred method for sending patches.</para>
-                    <para>For help on using these scripts, simply provide the
-                    <filename>-h</filename> argument as follows:
+                    <emphasis>Make a Pull Request:</emphasis>
+                    Notify the maintainer or the mailing list that you have
+                    pushed a change by making a pull request.</para>
+
+                    <para>The Yocto Project provides two scripts that
+                    conveniently let you generate and send pull requests to the
+                    Yocto Project.
+                    These scripts are <filename>create-pull-request</filename>
+                    and <filename>send-pull-request</filename>.
+                    You can find these scripts in the
+                    <filename>scripts</filename> directory within the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+                    (e.g. <filename>~/poky/scripts</filename>).
+                    </para>
+
+                    <para>Using these scripts correctly formats the requests
+                    without introducing any whitespace or HTML formatting.
+                    The maintainer who receives your patches, either directly
+                    or through the mailing list, needs to be able to save and
+                    apply them directly from your emails.
+                    Using these scripts is the preferred method for sending
+                    patches.</para>
+
+                    <para>First, create the pull request.
+                    For example, the following command runs the script,
+                    specifies the upstream repository in the contrib directory
+                    into which you pushed the change, and provides a subject
+                    line in the created patch files:
                     <literallayout class='monospaced'>
+     $ ~/poky/scripts/create-pull-request -u meta-intel-contrib -s "Updated Manual Section Reference in README"
+                    </literallayout>
+                    Running this script creates
+                    <filename>*.patch</filename> files in a directory named
+                    <filename>pull-</filename><replaceable>PID</replaceable>
+                    within the current directory.
+                    One of the patch files is a cover letter.</para>
+
+                    <para>Before running the
+                    <filename>send-pull-request</filename> script, you must
+                    edit the cover letter patch to insert information about
+                    your change.
+                    After editing the cover letter, send the pull request.
+                    For example, the following command runs the script and
+                    specifies the patch directory and email address.
+                    In this example, the email address is a mailing list:
+                    <literallayout class='monospaced'>
+     $ ~/poky/scripts/send-pull-request -p ~/meta-intel/pull-10565 -t meta-intel@yoctoproject.org
+                    </literallayout>
+                    You need to follow the prompts as the script is
+                    interactive.
+                    <note>
+                        For help on using these scripts, simply provide the
+                        <filename>-h</filename> argument as follows:
+                        <literallayout class='monospaced'>
      $ poky/scripts/create-pull-request -h
      $ poky/scripts/send-pull-request -h
-                    </literallayout></para></listitem>
-            </itemizedlist>
-        </para>
-
-        <para>
-            You can find general Git information on how to push a change upstream in the
-            <ulink url='http://git-scm.com/book/en/v2/Distributed-Git-Distributed-Workflows'>Git Community Book</ulink>.
+                        </literallayout>
+                    </note>
+                    </para></listitem>
+            </orderedlist>
         </para>
     </section>
 
@@ -1717,43 +839,67 @@
         <title>Using Email to Submit a Patch</title>
 
         <para>
-            You can submit patches without using the <filename>create-pull-request</filename> and
-            <filename>send-pull-request</filename> scripts described in the previous section.
+            You can submit patches without using the
+            <filename>create-pull-request</filename> and
+            <filename>send-pull-request</filename> scripts described in the
+            previous section.
             However, keep in mind, the preferred method is to use the scripts.
         </para>
 
         <para>
-            Depending on the components changed, you need to submit the email to a specific
-            mailing list.
-            For some guidance on which mailing list to use, see the list in the
-            "<link linkend='how-to-submit-a-change'>How to Submit a Change</link>"
-            section.
-            For a description of the available mailing lists, see the
+            Depending on the components changed, you need to submit the email
+            to a specific mailing list.
+            For some guidance on which mailing list to use, see the
+            <link linkend='figuring-out-the-mailing-list-to-use'>beginning</link>
+            of this section.
+            For a description of all the available mailing lists, see the
             "<ulink url='&YOCTO_DOCS_REF_URL;#resources-mailinglist'>Mailing Lists</ulink>"
             section in the Yocto Project Reference Manual.
         </para>
 
         <para>
-            Here is the general procedure on how to submit a patch through email without using the
-            scripts:
-            <itemizedlist>
-                <listitem><para>Make your changes in your local Git repository.</para></listitem>
-                <listitem><para>Stage your changes by using the <filename>git add</filename>
-                    command on each file you changed.</para></listitem>
-                <listitem><para>Commit the change by using the
+            Here is the general procedure on how to submit a patch through
+            email without using the scripts:
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Make Your Changes Locally:</emphasis>
+                    Make your changes in your local Git repository.
+                    You should make small, controlled, isolated changes.
+                    Keeping changes small and isolated aids review,
+                    makes merging/rebasing easier and keeps the change
+                    history clean should anyone need to refer to it in
+                    future.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Stage Your Changes:</emphasis>
+                    Stage your changes by using the <filename>git add</filename>
+                    command on each file you changed.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Commit Your Changes:</emphasis>
+                    Commit the change by using the
                     <filename>git commit --signoff</filename> command.
-                    Using the <filename>--signoff</filename> option identifies you as the person
-                    making the change and also satisfies the Developer's Certificate of
-                    Origin (DCO) shown earlier.</para>
-                    <para>When you form a commit, you must follow certain standards established by the
-                    Yocto Project development team.
-                    See the earlier section
-                    "<link linkend='how-to-submit-a-change'>How to Submit a Change</link>"
-                    for Yocto Project commit message standards.</para></listitem>
-                <listitem><para>Format the commit into an email message.
-                    To format commits, use the <filename>git format-patch</filename> command.
-                    When you provide the command, you must include a revision list or a number of patches
-                    as part of the command.
+                    Using the <filename>--signoff</filename> option identifies
+                    you as the person making the change and also satisfies
+                    the Developer's Certificate of Origin (DCO) shown earlier.
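+                    For example, assuming you have already staged your
+                    changes, the following command opens your editor for the
+                    commit message and appends the "Signed-off-by:" line
+                    automatically:
+                    <literallayout class='monospaced'>
+     $ git commit --signoff
+                    </literallayout>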
+                    </para>
+
+                    <para>When you form a commit, you must follow certain
+                    standards established by the Yocto Project development
+                    team.
+                    See
+                    <link linkend='making-sure-you-have-correct-commit-information'>Step 3</link>
+                    in the previous section for information on how to
+                    provide commit information that meets Yocto Project
+                    commit message standards.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Format the Commit:</emphasis>
+                    Format the commit into an email message.
+                    To format commits, use the
+                    <filename>git format-patch</filename> command.
+                    When you provide the command, you must include a revision
+                    list or a number of patches as part of the command.
                     For example, either of these two commands takes your most
                     recent single commit and formats it as an email message in
                     the current directory:
@@ -1764,50 +910,76 @@
                     <literallayout class='monospaced'>
      $ git format-patch HEAD~
                     </literallayout></para>
-                    <para>After the command is run, the current directory contains a
-                    numbered <filename>.patch</filename> file for the commit.</para>
-                    <para>If you provide several commits as part of the command,
-                    the <filename>git format-patch</filename> command produces a
-                    series of numbered files in the current directory – one for each commit.
+
+                    <para>After the command is run, the current directory
+                    contains a numbered <filename>.patch</filename> file for
+                    the commit.</para>
+
+                    <para>If you provide several commits as part of the
+                    command, the <filename>git format-patch</filename> command
+                    produces a series of numbered files in the current
+                    directory – one for each commit.
                     If you have more than one patch, you should also use the
-                    <filename>--cover</filename> option with the command, which generates a
-                    cover letter as the first "patch" in the series.
-                    You can then edit the cover letter to provide a description for
-                    the series of patches.
-                    For information on the <filename>git format-patch</filename> command,
-                    see <filename>GIT_FORMAT_PATCH(1)</filename> displayed using the
-                    <filename>man git-format-patch</filename> command.</para>
-                    <note>If you are or will be a frequent contributor to the Yocto Project
-                    or to OpenEmbedded, you might consider requesting a contrib area and the
-                    necessary associated rights.</note></listitem>
-                <listitem><para>Import the files into your mail client by using the
+                    <filename>--cover-letter</filename> option with the command,
+                    which generates a cover letter as the first "patch" in
+                    the series.
+                    You can then edit the cover letter to provide a
+                    description for the series of patches.
+                    For information on the
+                    <filename>git format-patch</filename> command,
+                    see <filename>GIT_FORMAT_PATCH(1)</filename> displayed
+                    using the <filename>man git-format-patch</filename>
+                    command.
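+                    For example, the following command formats your three
+                    most recent commits as a patch series and generates a
+                    cover letter as the first "patch" in that series:
+                    <literallayout class='monospaced'>
+     $ git format-patch --cover-letter HEAD~3
+                    </literallayout>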
+                    <note>
+                        If you are or will be a frequent contributor to the
+                        Yocto Project or to OpenEmbedded, you might consider
+                        requesting a contrib area and the necessary associated
+                        rights.
+                    </note>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Import the Files Into Your Mail Client:</emphasis>
+                    Import the files into your mail client by using the
                     <filename>git send-email</filename> command.
-                    <note>In order to use <filename>git send-email</filename>, you must have the
-                    the proper Git packages installed.
-                    For Ubuntu, Debian, and Fedora the package is <filename>git-email</filename>.</note></para>
-                    <para>The <filename>git send-email</filename> command sends email by using a local
-                    or remote Mail Transport Agent (MTA) such as
-                    <filename>msmtp</filename>, <filename>sendmail</filename>, or through a direct
-                    <filename>smtp</filename> configuration in your Git <filename>config</filename>
-                    file.
-                    If you are submitting patches through email only, it is very important
-                    that you submit them without any whitespace or HTML formatting that
-                    either you or your mailer introduces.
-                    The maintainer that receives your patches needs to be able to save and
-                    apply them directly from your emails.
-                    A good way to verify that what you are sending will be applicable by the
-                    maintainer is to do a dry run and send them to yourself and then
-                    save and apply them as the maintainer would.</para>
-                    <para>The <filename>git send-email</filename> command is the preferred method
-                    for sending your patches since there is no risk of compromising whitespace
-                    in the body of the message, which can occur when you use your own mail client.
+                    <note>
+                        In order to use <filename>git send-email</filename>,
+                        you must have the proper Git packages installed on
+                        your host.
+                        For Ubuntu, Debian, and Fedora the package is
+                        <filename>git-email</filename>.
+                    </note></para>
+
+                    <para>The <filename>git send-email</filename> command
+                    sends email by using a local or remote Mail Transport Agent
+                    (MTA) such as <filename>msmtp</filename>,
+                    <filename>sendmail</filename>, or through a direct
+                    <filename>smtp</filename> configuration in your Git
+                    <filename>~/.gitconfig</filename> file.
+                    If you are submitting patches through email only, it is
+                    very important that you submit them without any whitespace
+                    or HTML formatting that either you or your mailer
+                    introduces.
+                    The maintainer that receives your patches needs to be able
+                    to save and apply them directly from your emails.
+                    A good way to verify that what you are sending can be
+                    applied by the maintainer is to do a dry run: send the
+                    patches to yourself and then save and apply them as the
+                    maintainer would.</para>
+
+                    <para>The <filename>git send-email</filename> command is
+                    the preferred method for sending your patches using
+                    email since there is no risk of compromising whitespace
+                    in the body of the message, which can occur when you use
+                    your own mail client.
                     The command also has several options that let you
-                    specify recipients and perform further editing of the email message.
-                    For information on how to use the <filename>git send-email</filename> command,
+                    specify recipients and perform further editing of the
+                    email message.
+                    For information on how to use the
+                    <filename>git send-email</filename> command,
                     see <filename>GIT-SEND-EMAIL(1)</filename> displayed using
                     the <filename>man git-send-email</filename> command.
                     </para></listitem>
-            </itemizedlist>
+            </orderedlist>
         </para>
     </section>
 </section>
diff --git a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-qemu.xml b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-qemu.xml
index 41c1829..85e7315 100644
--- a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-qemu.xml
+++ b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-qemu.xml
@@ -6,427 +6,303 @@
 
 <title>Using the Quick EMUlator (QEMU)</title>
 
-<para>
-    Quick EMUlator (QEMU) is an Open Source project the Yocto Project uses
-    as part of its development "tool set".
-    As such, the information in this chapter is limited to the
-    Yocto Project integration of QEMU and not QEMU in general.
-    For official information and documentation on QEMU, see the
-    following references:
-    <itemizedlist>
-        <listitem><para><emphasis><ulink url='http://wiki.qemu.org/Main_Page'>QEMU Website</ulink>:</emphasis>
-            The official website for the QEMU Open Source project.
-            </para></listitem>
-        <listitem><para><emphasis><ulink url='http://wiki.qemu.org/Manual'>Documentation</ulink>:</emphasis>
-            The QEMU user manual.
-            </para></listitem>
-    </itemizedlist>
-</para>
-
-<para>
-    This chapter provides an overview of the Yocto Project's integration of
-    QEMU, a description of how you use QEMU and its various options, running
-    under a Network File System (NFS) server, and a few tips and tricks you
-    might find helpful when using QEMU.
-</para>
-
-<section id='qemu-overview'>
-    <title>Overview</title>
-
     <para>
-        Within the context of the Yocto Project, QEMU is an
-        emulator and virtualization machine that allows you to run a complete
-        image you have built using the Yocto Project as just another task
-        on your build system.
-        QEMU is useful for running and testing images and applications on
-        supported Yocto Project architectures without having actual hardware.
-        Among other things, the Yocto Project uses QEMU to run automated
-        Quality Assurance (QA) tests on final images shipped with each
-        release.
+        This chapter provides procedures that show you how to use the
+        Quick EMUlator (QEMU), which is an Open Source project the Yocto
+        Project uses as part of its development "tool set".
+        For reference information on the Yocto Project implementation of QEMU,
+        see the
+        "<ulink url='&YOCTO_DOCS_REF_URL;#ref-quick-emulator-qemu'>Quick EMUlator (QEMU)</ulink>"
+        section in the Yocto Project Reference Manual.
     </para>
 
-    <para>
-        QEMU is made available with the Yocto Project a number of ways.
-        One method is to install a Software Development Kit (SDK).
-        For more information on how to make sure you have
-        QEMU available, see the
-        <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-intro'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
-    </para>
-</section>
-
-<section id='qemu-running-qemu'>
-    <title>Running QEMU</title>
-
-    <para>
-        Running QEMU involves having your build environment set up, having the
-        right artifacts available, and understanding how to use the many
-        options that are available to you when you start QEMU using the
-        <filename>runqemu</filename> command.
-    </para>
-
-    <section id='qemu-setting-up-the-environment'>
-        <title>Setting Up the Environment</title>
+    <section id='qemu-running-qemu'>
+        <title>Running QEMU</title>
 
         <para>
-            You run QEMU in the same environment from which you run BitBake.
-            This means you need to source a build environment script (i.e.
-            <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
-            or
-            <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>).
-        </para>
-    </section>
+            To use QEMU, you need to have QEMU installed and initialized as
+            well as have the proper artifacts (i.e. image files and root
+            filesystems) available.
+            Follow these general steps to run QEMU:
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Install QEMU:</emphasis>
+                    See
+                    "<ulink url='&YOCTO_DOCS_SDK_URL;#the-qemu-emulator'>The QEMU Emulator</ulink>"
+                    section in the Yocto Project Application Development and
+                    the Extensible Software Development Kit (eSDK) manual
+                    for information on how to install QEMU.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Set Up the Environment:</emphasis>
+                    How you set up the QEMU environment depends on how you
+                    installed QEMU:
+                    <itemizedlist>
+                        <listitem><para>
+                            If you cloned the <filename>poky</filename>
+                            repository or you downloaded and unpacked a
+                            Yocto Project release tarball, you can source
+                            the build environment script (i.e.
+                            <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>):
+                            <literallayout class='monospaced'>
+     $ cd ~/poky
+     $ source oe-init-build-env
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            If you installed a cross-toolchain, you can
+                            run the script that initializes the toolchain.
+                            For example, the following command runs the
+                            initialization script from the default
+                            <filename>poky_sdk</filename> directory:
+                            <literallayout class='monospaced'>
+     . ~/poky_sdk/environment-setup-core2-64-poky-linux
+                            </literallayout>
+                            </para></listitem>
+                    </itemizedlist>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Ensure the Artifacts are in Place:</emphasis>
+                    You need to be sure you have a pre-built kernel that
+                    will boot in QEMU.
+                    You also need the target root filesystem for your target
+                    machine’s architecture:
+                    <itemizedlist>
+                        <listitem><para>
+                            If you have previously built an image for QEMU
+                            (e.g. <filename>qemux86</filename>,
+                            <filename>qemuarm</filename>, and so forth),
+                            then the artifacts are in place in your
+                            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
+                            </para></listitem>
+                        <listitem><para>
+                            If you have not built an image, you can go to the
+                            <ulink url='&YOCTO_MACHINES_DL_URL;'>machines/qemu</ulink>
+                            area and download a pre-built image that matches
+                            your architecture and can be run on QEMU.
+                            </para></listitem>
+                    </itemizedlist></para>
 
-    <section id='qemu-using-the-runqemu-command'>
-        <title>Using the <filename>runqemu</filename> Command</title>
-
-        <para>
-            The basic <filename>runqemu</filename> command syntax is as
-            follows:
-            <literallayout class='monospaced'>
+                    <para>See the
+                    "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-extracting-the-root-filesystem'>Extracting the Root Filesystem</ulink>"
+                    section in the Yocto Project Application Development and
+                    the Extensible Software Development Kit (eSDK) manual
+                    for information on how to extract a root filesystem.
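+                    As a quick sanity check, assuming you built for the
+                    <filename>qemux86</filename> machine with a standard
+                    Build Directory, you can list the deployed artifacts,
+                    which typically reside under
+                    <filename>tmp/deploy/images/qemux86</filename>:
+                    <literallayout class='monospaced'>
+     $ ls tmp/deploy/images/qemux86
+                    </literallayout>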
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Run QEMU:</emphasis>
+                    The basic <filename>runqemu</filename> command syntax is as
+                    follows:
+                    <literallayout class='monospaced'>
      $ runqemu [<replaceable>option</replaceable> ]  [...]
-            </literallayout>
-            Based on what you provide on the command line,
-            <filename>runqemu</filename> does a good job of figuring out what
-            you are trying to do.
-            For example, by default, QEMU looks for the most recently built
-            image according to the timestamp when it needs to look for an
-            image.
-            Minimally, through the use of options, you must provide either
-            a machine name, a virtual machine image
-            (<filename>*.vmdk</filename>), or a kernel image
-            (<filename>*.bin</filename>).
-        </para>
+                    </literallayout>
+                    Based on what you provide on the command line,
+                    <filename>runqemu</filename> does a good job of figuring
+                    out what you are trying to do.
+                    For example, by default, when it needs an image,
+                    <filename>runqemu</filename> uses the most recently
+                    built image according to the timestamp.
+                    Minimally, through the use of options, you must provide
+                    either a machine name, a virtual machine image
+                    (<filename>*.wic.vmdk</filename>), or a kernel image
+                    (<filename>*.bin</filename>).</para>
 
-        <para>
-            Following is a description of <filename>runqemu</filename>
-            options you can provide on the command line:
-            <note><title>Tip</title>
-                If you do provide some "illegal" option combination or perhaps
-                you do not provide enough in the way of options,
-                <filename>runqemu</filename> provides appropriate error
-                messaging to help you correct the problem.
-            </note>
-            <itemizedlist>
-                <listitem><para><replaceable>QEMUARCH</replaceable>:
-                    The QEMU machine architecture, which must be "qemux86",
-                    "qemux86_64", "qemuarm", "qemumips", "qemumipsel",
-                    “qemumips64", "qemush4", "qemuppc", "qemumicroblaze",
-                    or "qemuzynq".
-                    </para></listitem>
-                <listitem><para><filename><replaceable>VM</replaceable></filename>:
-                    The virtual machine image, which must be a
-                    <filename>.vmdk</filename> file.
-                    Use this option when you want to boot a
-                    <filename>.vmdk</filename> image.
-                    The image filename you provide must contain one of the
-                    following strings: "qemux86-64", "qemux86", "qemuarm",
-                    "qemumips64", "qemumips", "qemuppc", or "qemush4".
-                    </para></listitem>
-                <listitem><para><replaceable>ROOTFS</replaceable>:
-                    A root filesystem that has one of the following
-                    filetype extensions: "ext2", "ext3", "ext4", "jffs2",
-                    "nfs", or "btrfs".
-                    If the filename you provide for this option uses “nfs”, it
-                    must provide an explicit root filesystem path.
-                    </para></listitem>
-                <listitem><para><replaceable>KERNEL</replaceable>:
-                    A kernel image, which is a <filename>.bin</filename> file.
-                    When you provide a <filename>.bin</filename> file,
-                    <filename>runqemu</filename> detects it and assumes the
-                    file is a kernel image.
-                    </para></listitem>
-                <listitem><para><replaceable>MACHINE</replaceable>:
-                    The architecture of the QEMU machine, which must be one
-                    of the following: "qemux86",
-                    "qemux86-64", "qemuarm", "qemumips", "qemumipsel",
-                    “qemumips64", "qemush4", "qemuppc", "qemumicroblaze",
-                    or "qemuzynq".
-                    The <replaceable>MACHINE</replaceable> and
-                    <replaceable>QEMUARCH</replaceable> options are basically
-                    identical.
-                    If you do not provide a <replaceable>MACHINE</replaceable>
-                    option, <filename>runqemu</filename> tries to determine
-                    it based on other options.
-                    </para></listitem>
-                <listitem><para><filename>ramfs</filename>:
-                    Indicates you are booting an initial RAM disk (initramfs)
-                    image, which means the <filename>FSTYPE</filename> is
-                    <filename>cpio.gz</filename>.
-                    </para></listitem>
-                <listitem><para><filename>iso</filename>:
-                    Indicates you are booting an ISO image, which means the
-                    <filename>FSTYPE</filename> is
-                    <filename>.iso</filename>.
-                    </para></listitem>
-                <listitem><para><filename>nographic</filename>:
-                    Disables the video console, which sets the console to
-                    "ttys0".
-                    </para></listitem>
-                <listitem><para><filename>serial</filename>:
-                    Enables a serial console on
-                    <filename>/dev/ttyS0</filename>.
-                    </para></listitem>
-                <listitem><para><filename>biosdir</filename>:
-                    Establishes a custom directory for BIOS, VGA BIOS and
-                    keymaps.
-                    </para></listitem>
-                <listitem><para><filename>biosfilename</filename>:
-                    Establishes a custom BIOS name.
-                    </para></listitem>
-                <listitem><para><filename>qemuparams=\"<replaceable>xyz</replaceable>\"</filename>:
-                    Specifies custom QEMU parameters.
-                    Use this option to pass options other than the simple
-                    "kvm" and "serial" options.
-                    </para></listitem>
-                <listitem><para><filename>bootparams=\"<replaceable>xyz</replaceable>\"</filename>:
-                    Specifies custom boot parameters for the kernel.
-                    </para></listitem>
-                <listitem><para><filename>audio</filename>:
-                    Enables audio in QEMU.
-                    The <replaceable>MACHINE</replaceable> option must be
-                    either "qemux86" or "qemux86-64" in order for audio to be
-                    enabled.
-                    Additionally, the <filename>snd_intel8x0</filename>
-                    or <filename>snd_ens1370</filename> driver must be
-                    installed in linux guest.
-                    </para></listitem>
-                <listitem><para><filename>slirp</filename>:
-                    Enables "slirp" networking, which is a different way
-                    of networking that does not need root access
-                    but also is not as easy to use or comprehensive
-                    as the default.
-                    </para></listitem>
-                <listitem><para id='kvm-cond'><filename>kvm</filename>:
-                    Enables KVM when running "qemux86" or "qemux86-64"
-                    QEMU architectures.
-                    For KVM to work, all the following conditions must be met:
+                    <para>Here are some additional examples to further
+                    illustrate using QEMU:
                     <itemizedlist>
                         <listitem><para>
-                            Your <replaceable>MACHINE</replaceable> must be either
-qemux86" or "qemux86-64".
+                            This example starts QEMU with
+                            <replaceable>MACHINE</replaceable> set to "qemux86".
+                            Assuming a standard
+                            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
+                            <filename>runqemu</filename> automatically finds the
+                            <filename>bzImage-qemux86.bin</filename> image file and
+                            the
+                            <filename>core-image-minimal-qemux86-20140707074611.rootfs.ext3</filename>
+                            (assuming the current build created a
+                            <filename>core-image-minimal</filename> image).
+                            <note>
+                            When more than one image with the same name exists, QEMU finds
+                            and uses the most recently built image according to the
+                            timestamp.
+                            </note>
+                            <literallayout class='monospaced'>
+     $ runqemu qemux86
+                            </literallayout>
                             </para></listitem>
                         <listitem><para>
-                            Your build host has to have the KVM modules
-                            installed, which are
-                            <filename>/dev/kvm</filename>.
-                            </para></listitem>
-                        <listitem><para>
-                            The  build host <filename>/dev/kvm</filename>
-                            directory has to be both writable and readable.
-                            </para></listitem>
-                    </itemizedlist>
-                    </para></listitem>
-                <listitem><para><filename>kvm-vhost</filename>:
-                    Enables KVM with VHOST support when running "qemux86" or "qemux86-64"
-                    QEMU architectures.
-                    For KVM with VHOST to work, the following conditions must
-                    be met:
-                    <itemizedlist>
-                        <listitem><para>
-                            <link linkend='kvm-cond'>kvm</link> option
-                            conditions must be met.
-                            </para></listitem>
-                        <listitem><para>
-                            Your build host has to have virtio net device, which
-                            are <filename>/dev/vhost-net</filename>.
-                            </para></listitem>
-                        <listitem><para>
-                            The build host <filename>/dev/vhost-net</filename>
-                            directory has to be either readable or writable
-                            and “slirp-enabled”.
-                            </para></listitem>
-                    </itemizedlist>
-                    </para></listitem>
-                <listitem><para><filename>publicvnc</filename>:
-                    Enables a VNC server open to all hosts.
-                    </para></listitem>
-            </itemizedlist>
-        </para>
-
-        <para>
-            For further understanding regarding option use with
-            <filename>runqemu</filename>, consider some examples.
-        </para>
-
-        <para>
-            This example starts QEMU with
-            <replaceable>MACHINE</replaceable> set to "qemux86".
-            Assuming a standard
-            <link linkend='build-directory'>Build Directory</link>,
-            <filename>runqemu</filename> automatically finds the
-            <filename>bzImage-qemux86.bin</filename> image file and
-            the
-            <filename>core-image-minimal-qemux86-20140707074611.rootfs.ext3</filename>
-            (assuming the current build created a
-            <filename>core-image-minimal</filename> image).
-            <note>
-                When more than one image with the same name exists, QEMU finds
-                and uses the most recently built image according to the
-                timestamp.
-            </note>
-            <literallayout class='monospaced'>
-    $ runqemu qemux86
-            </literallayout>
-            This example produces the exact same results as the
-            previous example.
-            This command, however, specifically provides the image
-            and root filesystem type.
-            <literallayout class='monospaced'>
+                            This example produces the exact same results as the
+                            previous example.
+                            This command, however, specifically provides the image
+                            and root filesystem type.
+                            <literallayout class='monospaced'>
      $ runqemu qemux86 core-image-minimal ext3
-            </literallayout>
-            This example specifies to boot an initial RAM disk image
-            and to enable audio in QEMU.
-            For this case, <filename>runqemu</filename> set the
-            internal variable <filename>FSTYPE</filename> to
-            "cpio.gz".
-            Also, for audio to be enabled, an appropriate driver must
-            be installed (see the previous description for the
-            <filename>audio</filename> option for more information).
-            <literallayout class='monospaced'>
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            This example boots an initial RAM disk image
+                            and enables audio in QEMU.
+                            For this case, <filename>runqemu</filename> sets
+                            the internal variable <filename>FSTYPE</filename>
+                            to "cpio.gz".
+                            Also, for audio to be enabled, an appropriate driver must
+                            be installed (see the previous description for the
+                            <filename>audio</filename> option for more information).
+                            <literallayout class='monospaced'>
      $ runqemu qemux86 ramfs audio
-            </literallayout>
-            This example does not provide enough information for
-            QEMU to launch.
-            While the command does provide a root filesystem type, it
-            must also minimally provide a
-            <replaceable>MACHINE</replaceable>,
-            <replaceable>KERNEL</replaceable>, or
-            <replaceable>VM</replaceable> option.
-            <literallayout class='monospaced'>
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            This example does not provide enough information for
+                            QEMU to launch.
+                            While the command does provide a root filesystem type, it
+                            must also minimally provide a
+                            <replaceable>MACHINE</replaceable>,
+                            <replaceable>KERNEL</replaceable>, or
+                            <replaceable>VM</replaceable> option.
+                            <literallayout class='monospaced'>
      $ runqemu ext3
-            </literallayout>
-            This example specifies to boot a virtual machine image
-            (<filename>.vmdk</filename> file).
-            From the <filename>.vmdk</filename>,
-            <filename>runqemu</filename> determines the QEMU
-            architecture (<replaceable>MACHINE</replaceable>) to be
-            "qemux86" and the root filesystem type to be "vmdk".
-            <literallayout class='monospaced'>
-     $ runqemu /home/scott-lenovo/vm/core-image-minimal-qemux86.vmdk
-            </literallayout>
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            This example boots a virtual machine image
+                            (a <filename>.wic.vmdk</filename> file).
+                            From the <filename>.wic.vmdk</filename>,
+                            <filename>runqemu</filename> determines the QEMU
+                            architecture (<replaceable>MACHINE</replaceable>) to be
+                            "qemux86" and the root filesystem type to be "vmdk".
+                            <literallayout class='monospaced'>
+     $ runqemu /home/scott-lenovo/vm/core-image-minimal-qemux86.wic.vmdk
+                            </literallayout>
+                            </para></listitem>
+                    </itemizedlist>
+                    </para></listitem>
+            </orderedlist>
         </para>
     </section>
-</section>
 
-<section id='qemu-running-under-a-network-file-system-nfs-server'>
-    <title>Running Under a Network File System (NFS) Server</title>
-
-    <para>
-        One method for running QEMU is to run it on an NFS server.
-        This is useful when you need to access the same file system from both
-        the build and the emulated system at the same time.
-        It is also worth noting that the system does not need root privileges
-        to run.
-        It uses a user space NFS server to avoid that.
-        This section describes how to set up for running QEMU using an NFS
-        server and then how you can start and stop the server.
-    </para>
-
-    <section id='qemu-setting-up-to-use-nfs'>
-        <title>Setting Up to Use NFS</title>
+    <section id='switching-between-consoles'>
+        <title>Switching Between Consoles</title>
 
         <para>
-            Once you are able to run QEMU in your environment, you can use the
-            <filename>runqemu-extract-sdk</filename> script, which is located
-            in the <filename>scripts</filename> directory along with
-            <filename>runqemu</filename> script.
-            The <filename>runqemu-extract-sdk</filename> takes a root
-            file system tarball and extracts it into a location that you
-            specify.
-            Then, when you run <filename>runqemu</filename>, you can specify
-            the location that has the file system to pass it to QEMU.
-            Here is an example that takes a file system and extracts it to
-            a directory named <filename>test-nfs</filename>:
-            <literallayout class='monospaced'>
+            When booting or running QEMU, you can switch between
+            supported consoles by using
+            Ctrl+Alt+<replaceable>number</replaceable>.
+            For example, Ctrl+Alt+3 switches you to the serial console
+            as long as that console is enabled.
+            Being able to switch consoles is helpful, for example, if
+            the main QEMU console breaks for some reason.
+            <note>
+                Usually, "2" gets you to the main console and "3"
+                gets you to the serial console.
+            </note>
+        </para>
+    </section>
+
+    <section id='removing-the-splash-screen'>
+        <title>Removing the Splash Screen</title>
+
+        <para>
+            You can remove the splash screen when QEMU is booting by
+            using Alt+left.
+            Removing the splash screen allows you to see what is
+            happening in the background.
+        </para>
+    </section>
+
+    <section id='disabling-the-cursor-grab'>
+        <title>Disabling the Cursor Grab</title>
+
+        <para>
+            The default QEMU integration captures the cursor within the
+            main window.
+            It does this since standard mouse devices only provide
+            relative input and not absolute coordinates.
+            You then have to break out of the grab using the "Ctrl+Alt"
+            key combination.
+            However, the Yocto Project's integration of QEMU enables
+            the wacom USB touch pad driver by default to allow input
+            of absolute coordinates.
+            This default means that the mouse can enter and leave the
+            main window without the grab taking effect, leading to a
+            better user experience.
+        </para>
+    </section>
+
+    <section id='qemu-running-under-a-network-file-system-nfs-server'>
+        <title>Running Under a Network File System (NFS) Server</title>
+
+        <para>
+            One method for running QEMU is to run it on an NFS server.
+            This is useful when you need to access the same file system
+            from both the build and the emulated system at the same time.
+            It is also worth noting that the system does not need root
+            privileges to run; it uses a user space NFS server to avoid
+            that requirement.
+            Follow these steps to set up for running QEMU using an NFS
+            server.
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Extract a Root Filesystem:</emphasis>
+                    Once you are able to run QEMU in your environment, you can
+                    use the <filename>runqemu-extract-sdk</filename> script,
+                    which is located in the <filename>scripts</filename>
+                    directory along with the <filename>runqemu</filename>
+                    script.</para>
+
+                    <para>The <filename>runqemu-extract-sdk</filename> script
+                    takes a root filesystem tarball and extracts it into a
+                    location that you specify.
+                    Here is an example that takes a file system and
+                    extracts it to a directory named
+                    <filename>test-nfs</filename>:
+                    <literallayout class='monospaced'>
      runqemu-extract-sdk ./tmp/deploy/images/qemux86/core-image-sato-qemux86.tar.bz2 test-nfs
-            </literallayout>
-            Once you have extracted the file system, you can run
-            <filename>runqemu</filename> normally with the additional
-            location of the file system.
-            You can then also make changes to the files within
-            <filename>./test-nfs</filename> and see those changes appear in the
-            image in real time.
-            Here is an example using the <filename>qemux86</filename> image:
-            <literallayout class='monospaced'>
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Start QEMU:</emphasis>
+                    Once you have extracted the file system, you can run
+                    <filename>runqemu</filename> normally with the additional
+                    location of the file system.
+                    You can then also make changes to the files within
+                    <filename>./test-nfs</filename> and see those changes
+                    appear in the image in real time.
+                    Here is an example using the <filename>qemux86</filename>
+                    image:
+                    <literallayout class='monospaced'>
      runqemu qemux86 ./test-nfs
-            </literallayout>
-        </para>
-    </section>
-
-    <section id='qemu-starting-and-stopping-nfs'>
-        <title>Starting and Stopping NFS</title>
-
-        <para>
-            You can manually start and stop the NFS share using these
-            commands:
-            <itemizedlist>
-                <listitem><para><emphasis><filename>start</filename>:</emphasis>
-                    Starts the NFS share:
-                    <literallayout class='monospaced'>
+                    </literallayout>
+                    </para></listitem>
+            </orderedlist>
+            <note>
+                <para>
+                    Should you need to start, stop, or restart the NFS share,
+                    you can use the following commands:
+                    <itemizedlist>
+                        <listitem><para>
+                            The following command starts the NFS share:
+                            <literallayout class='monospaced'>
      runqemu-export-rootfs start <replaceable>file-system-location</replaceable>
-                    </literallayout>
-                    </para></listitem>
-                <listitem><para><emphasis><filename>stop</filename>:</emphasis>
-                    Stops the NFS share:
-                    <literallayout class='monospaced'>
-     runqemu-export-rootfs stop <replaceable>file-system-location</replaceable>
-                    </literallayout>
-                    </para></listitem>
-                <listitem><para><emphasis><filename>restart</filename>:</emphasis>
-                    Restarts the NFS share:
-                    <literallayout class='monospaced'>
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            The following command stops the NFS share:
+                            <literallayout class='monospaced'>
+     runqemu-export-rootfs stop <replaceable>file-system-location</replaceable>
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            The following command restarts the NFS share:
+                            <literallayout class='monospaced'>
      runqemu-export-rootfs restart <replaceable>file-system-location</replaceable>
-                    </literallayout>
-                    </para></listitem>
-            </itemizedlist>
+                            </literallayout>
+                            </para></listitem>
+                    </itemizedlist>
+                </para>
+            </note>
         </para>
     </section>
-</section>
-
-<section id='qemu-tips-and-tricks'>
-    <title>Tips and Tricks</title>
-
-    <para>
-        The following list describes things you can do to make running QEMU
-        in the context of the Yocto Project a better experience:
-        <itemizedlist>
-            <listitem><para><emphasis>Switching Between Consoles:</emphasis>
-                When booting or running QEMU, you can switch between
-                supported consoles by using
-                Ctrl+Alt+<replaceable>number</replaceable>.
-                For example, Ctrl+Alt+3 switches you to the serial console as
-                long as that console is enabled.
-                Being able to switch consoles is helpful, for example, if the
-                main QEMU console breaks for some reason.
-                <note>
-                    Usually, "2" gets you to the main console and "3" gets you
-                    to the serial console.
-                </note>
-                </para></listitem>
-            <listitem><para><emphasis>Removing the Splash Screen:</emphasis>
-                You can remove the splash screen when QEMU is booting by
-                using Alt+left.
-                Removing the splash screen allows you to see what is happening
-                in the background.
-                </para></listitem>
-            <listitem><para><emphasis>Disabling the Cursor Grab:</emphasis>
-                The default QEMU integration captures the cursor within the
-                main window.
-                It does this since standard mouse devices only provide relative
-                input and not absolute coordinates.
-                You then have to break out of the grab using the "Ctrl+Alt" key
-                combination.
-                However, the Yocto Project's integration of QEMU enables the
-                wacom USB touch pad driver by default to allow input of absolute
-                coordinates.
-                This default means that the mouse can enter and leave the
-                main window without the grab taking effect leading to a better
-                user experience.
-                </para></listitem>
-        </itemizedlist>
-    </para>
-</section>
-
 </chapter>
 <!--
 vim: expandtab tw=80 ts=4
diff --git a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-start.xml b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-start.xml
index b8527f6..195b22d 100644
--- a/import-layers/yocto-poky/documentation/dev-manual/dev-manual-start.xml
+++ b/import-layers/yocto-poky/documentation/dev-manual/dev-manual-start.xml
@@ -7,523 +7,744 @@
 <title>Getting Started with the Yocto Project</title>
 
 <para>
-    This chapter introduces the Yocto Project and gives you an idea of what you need to get started.
-    You can find enough information to set up your development host and build or use images for
-    hardware supported by the Yocto Project by reading the
+    This chapter provides procedures related to getting set up to use the
+    Yocto Project.
+    For a more end-to-end process that takes you from minimally preparing
+    a build host through building an image, see the
     <ulink url='&YOCTO_DOCS_QS_URL;'>Yocto Project Quick Start</ulink>.
 </para>
 
-<para>
-    The remainder of this chapter summarizes what is in the Yocto Project Quick Start and provides
-    some higher-level concepts you might want to consider.
-</para>
-
-<section id='introducing-the-yocto-project'>
-    <title>Introducing the Yocto Project</title>
+<section id='setting-up-the-development-host-to-use-the-yocto-project'>
+    <title>Setting Up the Development Host to Use the Yocto Project</title>
 
     <para>
-        The Yocto Project is an open-source collaboration project focused on embedded Linux development.
-        The project currently provides a build system that is
-        referred to as the
-        <link linkend='build-system-term'>OpenEmbedded build system</link>
-        in the Yocto Project documentation.
-        The Yocto Project provides various ancillary tools for the embedded developer
-        and also features the Sato reference User Interface, which is optimized for
-        stylus-driven, low-resolution screens.
+        This section provides procedures to set up your development host to
+        use the Yocto Project.
+        You can use the Yocto Project on a native Linux development host or
+        you can use
+        <ulink url='https://git.yoctoproject.org/cgit/cgit.cgi/crops/about/'>CROPS</ulink>,
+        which leverages
+        <ulink url='https://www.docker.com/'>Docker Containers</ulink>,
+        to prepare any Linux, Mac, or Windows development host.
     </para>
 
     <para>
-        You can use the OpenEmbedded build system, which uses
-        <link linkend='bitbake-term'>BitBake</link>, to develop complete Linux
-        images and associated user-space applications for architectures based
-        on ARM, MIPS, PowerPC, x86 and x86-64.
-        <note>
-            By default, using the Yocto Project creates a Poky distribution.
-            However, you can create your own distribution by providing key
-            <link linkend='metadata'>Metadata</link>.
-            See the "<link linkend='creating-your-own-distribution'>Creating Your Own Distribution</link>"
-            section for more information.
-        </note>
-        While the Yocto Project does not provide a strict testing framework,
-        it does provide or generate for you artifacts that let you perform target-level and
-        emulated testing and debugging.
-        Additionally, if you are an <trademark class='trade'>Eclipse</trademark>
-        IDE user, you can install an Eclipse Yocto Plug-in to allow you to
-        develop within that familiar environment.
-    </para>
-</section>
-
-<section id='getting-setup'>
-    <title>Getting Set Up</title>
-
-    <para>
-        Here is what you need to use the Yocto Project:
+        Once your development host is set up to use the Yocto Project,
+        further steps are necessary depending on what you want to
+        accomplish.
+        See the following references for information on how to prepare for
+        Board Support Package (BSP) development, kernel development, and
+        development using the <trademark class='trade'>Eclipse</trademark> IDE:
         <itemizedlist>
-            <listitem><para><emphasis>Host System:</emphasis>  You should have a reasonably current
-                Linux-based host system.
-                You will have the best results with a recent release of Fedora,
-                openSUSE, Debian, Ubuntu, or CentOS as these releases are frequently tested against the Yocto Project
-                and officially supported.
-                For a list of the distributions under validation and their status, see the
-                "<ulink url='&YOCTO_DOCS_REF_URL;#detailed-supported-distros'>Supported Linux Distributions</ulink>" section
-                in the Yocto Project Reference Manual and the wiki page at
-                <ulink url='&YOCTO_WIKI_URL;/wiki/Distribution_Support'>Distribution Support</ulink>.</para>
-                <para>
-                You should also have about 50 Gbytes of free disk space for building images.
-                </para></listitem>
-            <listitem><para><emphasis>Packages:</emphasis>  The OpenEmbedded build system
-                requires that certain packages exist on your development system (e.g. Python 2.7).
-                See "<ulink url='&YOCTO_DOCS_QS_URL;#packages'>The Build Host Packages</ulink>"
-                section in the Yocto Project Quick Start and the
-                "<ulink url='&YOCTO_DOCS_REF_URL;#required-packages-for-the-host-development-system'>Required Packages for the Host Development System</ulink>"
-                section in the Yocto Project Reference Manual for the exact
-                package requirements and the installation commands to install
-                them for the supported distributions.
-                </para></listitem>
-            <listitem id='local-yp-release'><para><emphasis>Yocto Project Release:</emphasis>
-                You need a release of the Yocto Project locally installed on
-                your development system.
-                The documentation refers to this set of locally installed files
-                as the <link linkend='source-directory'>Source Directory</link>.
-                You create your Source Directory by using
-                <link linkend='git'>Git</link> to clone a local copy
-                of the upstream <filename>poky</filename> repository,
-                or by downloading and unpacking a tarball of an official
-                Yocto Project release.
-                The preferred method is to create a clone of the repository.
-                </para>
-                <para>Working from a copy of the upstream repository allows you
-                to contribute back into the Yocto Project or simply work with
-                the latest software on a development branch.
-                Because Git maintains and creates an upstream repository with
-                a complete history of changes and you are working with a local
-                clone of that repository, you have access to all the Yocto
-                Project development branches and tag names used in the upstream
-                repository.</para>
-                <note>You can view the Yocto Project Source Repositories at
-                    <ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink>
-                </note>
-                <para>The following transcript shows how to clone the
-                <filename>poky</filename> Git repository into the current
-                working directory.
-                The command creates the local repository in a directory
-                named <filename>poky</filename>.
-                For information on Git used within the Yocto Project, see
-                the "<link linkend='git'>Git</link>" section.
-                <literallayout class='monospaced'>
-     $ git clone git://git.yoctoproject.org/poky
-     Cloning into 'poky'...
-     remote: Counting objects: 226790, done.
-     remote: Compressing objects: 100% (57465/57465), done.
-     remote: Total 226790 (delta 165212), reused 225887 (delta 164327)
-     Receiving objects: 100% (226790/226790), 100.98 MiB | 263 KiB/s, done.
-     Resolving deltas: 100% (165212/165212), done.
-                </literallayout></para>
-                <para>For another example of how to set up your own local Git
-                repositories, see this
-                <ulink url='&YOCTO_WIKI_URL;/wiki/Transcript:_from_git_checkout_to_meta-intel_BSP'>
-                wiki page</ulink>, which describes how to create local
-                Git repositories for both
-                <filename>poky</filename> and <filename>meta-intel</filename>.
-                </para>
-                <para>
-                You can also get the Yocto Project Files by downloading
-                Yocto Project releases from the
-                <ulink url="&YOCTO_HOME_URL;">Yocto Project website</ulink>.
-                From the website, you just click "Downloads" in the navigation
-                pane to the left to display all Yocto Project downloads.
-                Current and archived releases are available for download.
-                Nightly and developmental builds are also maintained at
-                <ulink url="&YOCTO_AB_NIGHTLY_URL;"></ulink>.
-                One final site you can visit for information on Yocto Project
-                releases is the
-                <ulink url='&YOCTO_WIKI_URL;/wiki/Releases'>Releases</ulink>
-                wiki.
-                </para></listitem>
-            <listitem id='local-kernel-files'><para><emphasis>Yocto Project Kernel:</emphasis>
-                If you are going to be making modifications to a supported Yocto Project kernel, you
-                need to establish local copies of the source.
-                You can find Git repositories of supported Yocto Project kernels organized under
-                "Yocto Linux Kernel" in the Yocto Project Source Repositories at
-                <ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink>.</para>
-                <para>This setup can involve creating a bare clone of the Yocto Project kernel and then
-                copying that cloned repository.
-                You can create the bare clone and the copy of the bare clone anywhere you like.
-                For simplicity, it is recommended that you create these structures outside of the
-                Source Directory, which is usually named <filename>poky</filename>.</para>
-                <para>As an example, the following transcript shows how to create the bare clone
-                of the <filename>linux-yocto-3.19</filename> kernel and then create a copy of
-                that clone.
-                <note>When you have a local Yocto Project kernel Git repository, you can
-                reference that repository rather than the upstream Git repository as
-                part of the <filename>clone</filename> command.
-                Doing so can speed up the process.</note></para>
-                <para>In the following example, the bare clone is named
-                <filename>linux-yocto-3.19.git</filename>, while the
-                copy is named <filename>my-linux-yocto-3.19-work</filename>:
-                <literallayout class='monospaced'>
-     $ git clone --bare git://git.yoctoproject.org/linux-yocto-3.19 linux-yocto-3.19.git
-     Cloning into bare repository 'linux-yocto-3.19.git'...
-     remote: Counting objects: 3983256, done.
-     remote: Compressing objects: 100% (605006/605006), done.
-     remote: Total 3983256 (delta 3352832), reused 3974503 (delta 3344079)
-     Receiving objects: 100% (3983256/3983256), 843.66 MiB | 1.07 MiB/s, done.
-     Resolving deltas: 100% (3352832/3352832), done.
-     Checking connectivity... done.
-                </literallayout></para>
-                <para>Now create a clone of the bare clone just created:
-                <literallayout class='monospaced'>
-     $ git clone linux-yocto-3.19.git my-linux-yocto-3.19-work
-     Cloning into 'my-linux-yocto-3.19-work'...
-     done.
-     Checking out files: 100% (48440/48440), done.
-                </literallayout></para></listitem>
-            <listitem id='meta-yocto-kernel-extras-repo'><para><emphasis>
-                The <filename>meta-yocto-kernel-extras</filename> Git Repository</emphasis>:
-                The <filename>meta-yocto-kernel-extras</filename> Git repository contains Metadata needed
-                only if you are modifying and building the kernel image.
-                In particular, it contains the kernel BitBake append (<filename>.bbappend</filename>)
-                files that you
-                edit to point to your locally modified kernel source files and to build the kernel
-                image.
-                Pointing to these local files is much more efficient than requiring a download of the
-                kernel's source files from upstream each time you make changes to the kernel.</para>
-                <para>You can find the <filename>meta-yocto-kernel-extras</filename> Git Repository in the
-                "Yocto Metadata Layers" area of the Yocto Project Source Repositories at
-                <ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink>.
-                It is good practice to create this Git repository inside the Source Directory.</para>
-                <para>Following is an example that creates the <filename>meta-yocto-kernel-extras</filename> Git
-                repository inside the Source Directory, which is named <filename>poky</filename>
-                in this case:
-                <literallayout class='monospaced'>
-     $ cd ~/poky
-     $ git clone git://git.yoctoproject.org/meta-yocto-kernel-extras meta-yocto-kernel-extras
-     Cloning into 'meta-yocto-kernel-extras'...
-     remote: Counting objects: 727, done.
-     remote: Compressing objects: 100% (452/452), done.
-     remote: Total 727 (delta 260), reused 719 (delta 252)
-     Receiving objects: 100% (727/727), 536.36 KiB | 240 KiB/s, done.
-     Resolving deltas: 100% (260/260), done.
-               </literallayout></para></listitem>
-           <listitem><para id='supported-board-support-packages-(bsps)'><emphasis>Supported Board Support Packages (BSPs):</emphasis>
-                The Yocto Project supports many BSPs, which are maintained in
-                their own layers or in layers designed to contain several
-                BSPs.
-                To get an idea of machine support through BSP layers, you can
-                look at the
-                <ulink url='&YOCTO_RELEASE_DL_URL;/machines'>index of machines</ulink>
-                for the release.</para>
-
-                <para>The Yocto Project uses the following BSP layer naming
-                scheme:
-                <literallayout class='monospaced'>
-     meta-<replaceable>bsp_name</replaceable>
-                </literallayout>
-                where <replaceable>bsp_name</replaceable> is the recognized
-                BSP name.
-                Here is an example:
-                <literallayout class='monospaced'>
-     meta-raspberrypi
-                </literallayout>
+            <listitem><para>
+                <emphasis>BSP Development:</emphasis>
                 See the
-                "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP Layers</ulink>"
+                "<ulink url='&YOCTO_DOCS_BSP_URL;#preparing-your-build-host-to-work-with-bsp-layers'>Preparing Your Build Host to Work With BSP Layers</ulink>"
                 section in the Yocto Project Board Support Package (BSP)
-                Developer's Guide for more information on BSP Layers.</para>
-
-                <para>A useful Git repository released with the Yocto
-                Project is <filename>meta-intel</filename>, which is a
-                parent layer that contains many supported
-                <ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP Layers</ulink>.
-                You can locate the <filename>meta-intel</filename> Git
-                repository in the "Yocto Metadata Layers" area of the Yocto
-                Project Source Repositories at
-                <ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink>.</para>
-
-                <para>Using
-                <link linkend='git'>Git</link> to create a local clone of the
-                upstream repository can be helpful if you are working with
-                BSPs.
-                Typically, you set up the <filename>meta-intel</filename>
-                Git repository inside the Source Directory.
-                For example, the following transcript shows the steps to clone
-                <filename>meta-intel</filename>.
-                <note>
-                    Be sure to work in the <filename>meta-intel</filename>
-                    branch that matches your
-                    <link linkend='source-directory'>Source Directory</link>
-                    (i.e. <filename>poky</filename>) branch.
-                    For example, if you have checked out the "master" branch
-                    of <filename>poky</filename> and you are going to use
-                    <filename>meta-intel</filename>, be sure to checkout the
-                    "master" branch of <filename>meta-intel</filename>.
-                </note>
-                <literallayout class='monospaced'>
-     $ cd ~/poky
-     $ git clone git://git.yoctoproject.org/meta-intel.git
-     Cloning into 'meta-intel'...
-     remote: Counting objects: 11917, done.
-     remote: Compressing objects: 100% (3842/3842), done.
-     remote: Total 11917 (delta 6840), reused 11699 (delta 6622)
-     Receiving objects: 100% (11917/11917), 2.92 MiB | 2.88 MiB/s, done.
-     Resolving deltas: 100% (6840/6840), done.
-     Checking connectivity... done.
-                </literallayout></para>
-
-                <para>The same
-                <ulink url='&YOCTO_WIKI_URL;/wiki/Transcript:_from_git_checkout_to_meta-intel_BSP'>wiki page</ulink>
-                referenced earlier covers how to set up the
-                <filename>meta-intel</filename> Git repository.
+                Developer's Guide.
                 </para></listitem>
-            <listitem><para><emphasis>Eclipse Yocto Plug-in:</emphasis>  If you are developing
-                applications using the Eclipse Integrated Development Environment (IDE),
-                you will need this plug-in.
+            <listitem><para>
+                <emphasis>Kernel Development:</emphasis>
                 See the
-                "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-appendix-latest-yp-eclipse-plug-in'>Using Eclipse</ulink>"
-                section in the Yocto Project Software Development Kit (SDK)
-                Developer's Guide for more information.</para></listitem>
+                "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#preparing-the-build-host-to-work-on-the-kernel'>Preparing the Build Host to Work on the Kernel</ulink>"
+                section in the Yocto Project Linux Kernel Development Manual.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Eclipse Development:</emphasis>
+                See the
+                "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-eclipse-project'>Developing Applications Using <trademark class='trade'>Eclipse</trademark></ulink>"
+                Chapter in the Yocto Project Application Development and the
+                Extensible Software Development Kit (eSDK) manual.
+                </para></listitem>
         </itemizedlist>
     </para>
+
+    <section id='setting-up-a-native-linux-host'>
+        <title>Setting Up a Native Linux Host</title>
+
+        <para>
+            Follow these steps to prepare a native Linux machine as your
+            Yocto Project development host:
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Use a Supported Linux Distribution:</emphasis>
+                    You should have a reasonably current Linux-based host
+                    system.
+                    You will have the best results with a recent release of
+                    Fedora, openSUSE, Debian, Ubuntu, or CentOS as these
+                    releases are frequently tested against the Yocto Project
+                    and officially supported.
+                    For a list of the distributions under validation and their
+                    status, see the
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#detailed-supported-distros'>Supported Linux Distributions</ulink>" section
+                    in the Yocto Project Reference Manual and the wiki page at
+                    <ulink url='&YOCTO_WIKI_URL;/wiki/Distribution_Support'>Distribution Support</ulink>.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Have Enough Free Disk Space:</emphasis>
+                    You should have at least 50 Gbytes of free disk space
+                    for building images.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Meet Minimal Version Requirements:</emphasis>
+                    The OpenEmbedded build system should be able to run on any
+                    modern distribution that has at least the following
+                    versions of Git, tar, and Python:
+                    <itemizedlist>
+                        <listitem><para>
+                            Git 1.8.3.1 or greater
+                            </para></listitem>
+                        <listitem><para>
+                            tar 1.27 or greater
+                            </para></listitem>
+                        <listitem><para>
+                            Python 3.4.0 or greater.
+                            </para></listitem>
+                    </itemizedlist>
+                    If your build host does not meet one or more of these
+                    version requirements, you can take steps to prepare the
+                    system so that you can still use the Yocto Project.
+                    See the
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#required-git-tar-and-python-versions'>Required Git, tar, and Python Versions</ulink>"
+                    section in the Yocto Project Reference Manual for
+                    information.
+                    A quick way to check the versions already installed on
+                    your host is shown after this list.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Install Development Host Packages:</emphasis>
+                    Required development host packages vary depending on your
+                    build machine and what you want to do with the Yocto
+                    Project.
+                    Collectively, the number of required packages is large
+                    if you want to be able to cover all cases.</para>
+
+                    <para>For lists of required packages for all scenarios,
+                    see the
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#required-packages-for-the-host-development-system'>Required Packages for the Host Development System</ulink>"
+                    section in the Yocto Project Reference Manual.
+                    </para></listitem>
+            </orderedlist>
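+            As a quick check for the "Meet Minimal Version Requirements"
+            step above, you can query each tool directly (a minimal
+            sketch; the exact output varies by distribution, and
+            <filename>python3</filename> must point to the Python you
+            intend the build system to use):
+            <literallayout class='monospaced'>
+     $ git --version
+     $ tar --version
+     $ python3 --version
+            </literallayout>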
+            Once you have completed the previous steps, you are ready to
+            continue using a given development path on your native Linux
+            machine.
+            If you are going to use BitBake, see the
+            "<link linkend='cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</link>"
+            section.
+            If you are going to use the Extensible SDK, see the
+            "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-extensible'>Using the Extensible SDK</ulink>"
+            Chapter in the Yocto Project Application Development and the
+            Extensible Software Development Kit (eSDK) manual.
+            If you want to work on the kernel, see the
+            <ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;'>Yocto Project Linux Kernel Development Manual</ulink>.
+            If you are going to use Toaster, see the
+            "<ulink url='&YOCTO_DOCS_TOAST_URL;#toaster-manual-setup-and-use'>Setting Up and Using Toaster</ulink>"
+            section in the Toaster User Manual.
+        </para>
+    </section>
+
+    <section id='setting-up-to-use-crops'>
+        <title>Setting Up to Use CROss PlatformS (CROPS)</title>
+
+        <para>
+            With
+            <ulink url='https://git.yoctoproject.org/cgit/cgit.cgi/crops/about/'>CROPS</ulink>,
+            which leverages
+            <ulink url='https://www.docker.com/'>Docker Containers</ulink>,
+            you can create a Yocto Project development environment that
+            is operating system agnostic.
+            You can set up a container in which you can develop using the
+            Yocto Project on a Windows, Mac, or Linux machine.
+        </para>
+
+        <para>
+            Follow these general steps to prepare a Windows, Mac, or Linux
+            machine as your Yocto Project development host:
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Go to the Docker Installation Site:</emphasis>
+                    <ulink url='https://www.docker.com/what-docker'>Docker</ulink>
+                    is a software container platform that you need to install
+                    on the host development machine.
+                    To start the installation process, see the
+                    <ulink url='https://docs.docker.com/engine/installation/'>Docker Installation</ulink>
+                    site.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Choose Your Docker Edition:</emphasis>
+                    Docker comes in several editions.
+                    For the Yocto Project, the stable community edition
+                    (i.e. "Docker CE Stable") is adequate.
+                    You can learn more about the Docker editions from the
+                    site.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Go to the Install Site for Your Platform:</emphasis>
+                    Click the link for the Docker edition associated with
+                    your development host machine's native software.
+                    For example, if your machine is running Microsoft
+                    Windows Version 10 and you want the Docker CE Stable
+                    edition, click that link under "Supported Platforms".
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Understand What You Need:</emphasis>
+                    The install page lists the prerequisites your machine
+                    must meet.
+                    Be sure you read through this page and make sure your
+                    machine meets the requirements to run Docker.
+                    If your machine does not meet the requirements, the page
+                    has instructions to handle exceptions.
+                    For example, to run Docker on Windows 10, you must have
+                    the Pro version of the operating system.
+                    If you have the home version, you need to install the
+                    <ulink url='https://docs.docker.com/toolbox/overview/#ready-to-get-started'>Docker Toolbox</ulink>.
+                    </para>
+
+                    <para>Another example is that a Windows machine needs to
+                    have Microsoft Hyper-V.
+                    If you have a legacy version of the Microsoft
+                    operating system or for any other reason you do not have
+                    Microsoft Hyper-V, you would have to enter the BIOS and
+                    enable virtualization.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Install the Software:</emphasis>
+                    Once you have understood all the prerequisites, you can
+                    download and install the appropriate software.
+                    Follow the instructions for your specific machine and
+                    the type of software you need to install.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Optionally Orient Yourself With Docker:</emphasis>
+                    If you are unfamiliar with Docker and the container
+                    concept, you can learn more at
+                    <ulink url='https://docs.docker.com/get-started/'></ulink>.
+                    You should be able to launch Docker or the Docker Toolbox
+                    and have a terminal shell on your development host.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Set Up the Containers to Use the Yocto Project:</emphasis>
+                    Go to
+                    <ulink url='https://github.com/crops/docker-win-mac-docs/wiki'></ulink>
+                    and follow the directions for your particular
+                    development host (i.e. Linux, Mac, or Windows).</para>
+
+                    <para>Once you complete the setup instructions for your
+                    machine, you have the Poky, Extensible SDK, and Toaster
+                    containers available.
+                    You can click those links from the page and learn more
+                    about using each of those containers.
+                    </para></listitem>
+            </orderedlist>
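+            As an optional check after installation, you can confirm that
+            Docker runs and then try the Poky container (a minimal sketch;
+            the <filename>crops/poky</filename> image and its
+            <filename>--workdir</filename> option are the ones described on
+            the CROPS pages linked above, and the host directory
+            <filename>~/yocto-work</filename> is only an example path you
+            should replace with your own):
+            <literallayout class='monospaced'>
+     $ docker run hello-world
+     $ docker run --rm -it -v ~/yocto-work:/workdir crops/poky --workdir=/workdir
+            </literallayout>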
+            Once you have a container set up, everything is in place to
+            develop just as if you were running on a native Linux machine.
+            If you are going to use the Poky container, see the
+            "<link linkend='cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</link>"
+            section.
+            If you are going to use the Extensible SDK container, see the
+            "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-extensible'>Using the Extensible SDK</ulink>"
+            Chapter in the Yocto Project Application Development and the
+            Extensible Software Development Kit (eSDK) manual.
+            If you are going to use the Toaster container, see the
+            "<ulink url='&YOCTO_DOCS_TOAST_URL;#toaster-manual-setup-and-use'>Setting Up and Using Toaster</ulink>"
+            section in the Toaster User Manual.
+        </para>
+    </section>
 </section>
 
-<section id='building-images'>
-    <title>Building Images</title>
+<section id='working-with-yocto-project-source-files'>
+    <title>Working With Yocto Project Source Files</title>
 
     <para>
-        The build process creates an entire Linux distribution, including the toolchain, from source.
-        For more information on this topic, see the
+        This section contains procedures related to locating and securing
+        Yocto Project files.
+        You establish and use these local files to work on projects.
+        <note><title>Notes</title>
+            <itemizedlist>
+                <listitem><para>
+                    For concepts and introductory information about Git as it
+                    is used in the Yocto Project, see the
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#git'>Git</ulink>"
+                    section in the Yocto Project Reference Manual.
+                    </para></listitem>
+                <listitem><para>
+                    For concepts on Yocto Project source repositories, see the
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#yocto-project-repositories'>Yocto Project Source Repositories</ulink>"
+                    section in the Yocto Project Reference Manual.
+                    </para></listitem>
+            </itemizedlist>
+        </note>
+    </para>
+
+    <section id='accessing-source-repositories'>
+        <title>Accessing Source Repositories</title>
+
+        <para>
+            The Yocto Project maintains upstream Git
+            <ulink url='&YOCTO_DOCS_REF_URL;#source-repositories'>Source Repositories</ulink>
+            that you can examine and access using a browser-based UI:
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Access Repositories:</emphasis>
+                    Open a browser and go to
+                    <ulink url='&YOCTO_GIT_URL;'></ulink> to access the
+                    GUI-based interface into the Yocto Project source
+                    repositories.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Select a Repository:</emphasis>
+                    Click on any repository in which you are interested (e.g.
+                    <filename>poky</filename>).
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Find the URL Used to Clone the Repository:</emphasis>
+                    At the bottom of the page, note the URL used to
+                    <ulink url='&YOCTO_DOCS_REF_URL;#git-commands-clone'>clone</ulink>
+                    that repository (e.g.
+                    <filename>&YOCTO_GIT_URL;/poky</filename>).
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Examine Change History of the Repository:</emphasis>
+                    At the top of the page, click on any branch in which you
+                    might be interested (e.g.
+                    <filename>&DISTRO_NAME_NO_CAP;</filename>).
+                    You can then view the commit log or tree view for that
+                    development branch.
+                    </para></listitem>
+            </orderedlist>
+        </para>
+    </section>
+
+    <section id='accessing-index-of-releases'>
+        <title>Accessing Index of Releases</title>
+
+        <para>
+            The Yocto Project maintains an Index of Releases area that
+            contains files related to Yocto Project releases.
+            Rather than Git repositories, these files represent snapshot
+            tarballs.
+            <note><title>Tip</title>
+                The recommended method for accessing Yocto Project
+                components is to use Git to clone a repository and work from
+                within that local repository.
+                The procedure in this section exists should you desire a
+                tarball snapshot of any given component.
+            </note>
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Access the Index of Releases:</emphasis>
+                    Open a browser and go to
+                    <ulink url='&YOCTO_DL_URL;/releases'></ulink> to access the
+                    Index of Releases.
+                    The list represents released components (e.g.
+                    <filename>eclipse-plugin</filename>,
+                    <filename>sato</filename>, and so on).
+                    <note>
+                        The <filename>yocto</filename> directory contains the
+                        full array of released Poky tarballs.
+                        The <filename>poky</filename> directory in the
+                        Index of Releases was historically used for very
+                        early releases and exists for retroactive
+                        completeness only.
+                    </note>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Select a Component:</emphasis>
+                    Click on any released component in which you are interested
+                    (e.g. <filename>yocto</filename>).
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Find the Tarball:</emphasis>
+                    Drill down to find the associated tarball.
+                    For example, click on <filename>yocto-&DISTRO;</filename> to
+                    view files associated with the Yocto Project &DISTRO;
+                    release (e.g. <filename>poky-&DISTRO_NAME_NO_CAP;-&POKYVERSION;.tar.bz2</filename>,
+                    which is the released Poky tarball).
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Download the Tarball:</emphasis>
+                    Click a tarball to download and save a snapshot of a
+                    given component.
+                    </para></listitem>
+            </orderedlist>
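+            For example, assuming the navigation described in the previous
+            steps and a host with <filename>wget</filename> installed, you
+            could fetch and unpack the released Poky tarball from the
+            command line instead of through the browser (a sketch; confirm
+            the exact file name in the index):
+            <literallayout class='monospaced'>
+     $ wget &YOCTO_DL_URL;/releases/yocto/yocto-&DISTRO;/poky-&DISTRO_NAME_NO_CAP;-&POKYVERSION;.tar.bz2
+     $ tar xjf poky-&DISTRO_NAME_NO_CAP;-&POKYVERSION;.tar.bz2
+            </literallayout>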
+        </para>
+    </section>
+
+    <section id='using-the-downloads-page'>
+        <title>Using the Downloads Page</title>
+
+        <para>
+            The
+            <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>
+            has a "Downloads" area from which you can locate and download
+            tarballs of any Yocto Project release.
+            Rather than Git repositories, these files represent snapshot
+            tarballs.
+            <note><title>Tip</title>
+                The recommended method for accessing Yocto Project
+                components is to use Git to clone a repository and work from
+                within that local repository.
+                The procedure in this section exists should you desire a
+                tarball snapshot of any given component.
+            </note>
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Go to the Yocto Project Website:</emphasis>
+                    Open the
+                    <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>
+                    in your browser.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Get to the Downloads Area:</emphasis>
+                    Click the "Downloads" tab.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Select the Type of Files:</emphasis>
+                    Click the type of files you want (i.e. "Build System",
+                    "Tools", or "Board Support Packages (BSPs)").
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Locate and Download the Tarball:</emphasis>
+                    From the list of releases, locate the appropriate
+                    download link and download the files.
+                    </para></listitem>
+            </orderedlist>
+        </para>
+    </section>
+
+    <section id='cloning-the-poky-repository'>
+        <title>Cloning the <filename>poky</filename> Repository</title>
+
+        <para>
+            To use the Yocto Project, you need a release of the Yocto Project
+            locally installed on your development system.
+            The locally installed set of files is referred to as the
+            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+            in the Yocto Project documentation.
+        </para>
+
+        <para>
+            You create your Source Directory by using
+            <ulink url='&YOCTO_DOCS_REF_URL;#git'>Git</ulink> to clone a local
+            copy of the upstream <filename>poky</filename> repository.
+            <note><title>Tip</title>
+                The preferred method of getting the Yocto Project Source
+                Directory set up is to clone the repository.
+            </note>
+            Working from a copy of the upstream repository allows you
+            to contribute back into the Yocto Project or simply work with
+            the latest software on a development branch.
+            Because Git maintains and creates an upstream repository with
+            a complete history of changes and you are working with a local
+            clone of that repository, you have access to all the Yocto
+            Project development branches and tag names used in the upstream
+            repository.
+        </para>
+
+        <para>
+            Follow these steps to create a local version of the
+            upstream
+            <ulink url='&YOCTO_DOCS_REF_URL;#poky'><filename>poky</filename></ulink>
+            Git repository.
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Set Your Directory:</emphasis>
+                    Change to the directory in which you want to create
+                    your local copy of poky.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Clone the Repository:</emphasis>
+                    The following command clones the repository and uses
+                    the default name "poky" for your local repository:
+                    <literallayout class='monospaced'>
+     $ git clone git://git.yoctoproject.org/poky
+     Cloning into 'poky'...
+     remote: Counting objects: 367178, done.
+     remote: Compressing objects: 100% (88161/88161), done.
+     remote: Total 367178 (delta 272761), reused 366942 (delta 272525)
+     Receiving objects: 100% (367178/367178), 133.26 MiB | 6.40 MiB/s, done.
+     Resolving deltas: 100% (272761/272761), done.
+     Checking connectivity... done.
+                    </literallayout>
+                    Unless you specify a specific development branch or
+                    tag name, Git clones the "master" branch, which results
+                    in a snapshot of the latest development changes for
+                    "master".
+                    For information on how to check out a specific
+                    development branch or on how to check out a local
+                    branch based on a tag name, see the
+                    "<link linkend='checking-out-by-branch-in-poky'>Checking Out By Branch in Poky</link>"
+                    and
+                    "<link linkend='checkout-out-by-tag-in-poky'>Checking Out By Tag in Poky</link>"
+                    sections, respectively.</para>
+
+                    <para>Once the repository is created, you can change to
+                    that directory and check its status.
+                    Here, the single "master" branch exists on your system
+                    and by default, it is checked out:
+                    <literallayout class='monospaced'>
+     $ cd ~/poky
+     $ git status
+     On branch master
+     Your branch is up-to-date with 'origin/master'.
+     nothing to commit, working directory clean
+     $ git branch
+     * master
+                    </literallayout>
+                    Your local repository of poky is identical to the
+                    upstream poky repository at the time from which it was
+                    cloned.
+                    </para></listitem>
+            </orderedlist>
+        </para>
+    </section>
+
+    <section id='checking-out-by-branch-in-poky'>
+        <title>Checking Out by Branch in Poky</title>
+
+        <para>
+            When you clone the upstream poky repository, you have access to
+            all its development branches.
+            Each development branch in a repository is unique as it forks
+            off the "master" branch.
+            To see and use the files of a particular development branch
+            locally, you need to know the branch name and then specifically
+            check out that development branch.
+            <note>
+                Checking out an active development branch by branch name
+                gives you a snapshot of that particular branch at the time
+                you check it out.
+                Further development can occur on top of the branch in the
+                upstream repository after you check it out.
+            </note>
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Switch to the Poky Directory:</emphasis>
+                    If you have a local poky Git repository, switch to that
+                    directory.
+                    If you do not have the local copy of poky, see the
+                    "<link linkend='cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</link>"
+                    section.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Determine Existing Branch Names:</emphasis>
+                    <literallayout class='monospaced'>
+     $ git branch -a
+     * master
+       remotes/origin/1.1_M1
+       remotes/origin/1.1_M2
+       remotes/origin/1.1_M3
+       remotes/origin/1.1_M4
+       remotes/origin/1.2_M1
+       remotes/origin/1.2_M2
+       remotes/origin/1.2_M3
+           .
+           .
+           .
+       remotes/origin/master-next
+       remotes/origin/master-next2
+       remotes/origin/morty
+       remotes/origin/pinky
+       remotes/origin/purple
+       remotes/origin/pyro
+       remotes/origin/rocko
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Check Out the Branch:</emphasis>
+                    Check out the development branch in which you want to work.
+                    For example, to access the files for the Yocto Project
+                    &DISTRO; Release (&DISTRO_NAME;), use the following command:
+                    <literallayout class='monospaced'>
+     $ git checkout -b &DISTRO_NAME_NO_CAP; origin/&DISTRO_NAME_NO_CAP;
+     Branch &DISTRO_NAME_NO_CAP; set up to track remote branch &DISTRO_NAME_NO_CAP; from origin.
+     Switched to a new branch '&DISTRO_NAME_NO_CAP;'
+                    </literallayout>
+                    The previous command checks out the "&DISTRO_NAME_NO_CAP;"
+                    development branch and reports that the branch is tracking
+                    the upstream "origin/&DISTRO_NAME_NO_CAP;" branch.</para>
+
+                    <para>The following command displays the branches
+                    that are now part of your local poky repository.
+                    The asterisk character indicates the branch that is
+                    currently checked out for work:
+                    <literallayout class='monospaced'>
+     $ git branch
+       master
+     * &DISTRO_NAME_NO_CAP;
+                    </literallayout>
+                    </para></listitem>
+            </orderedlist>
+        </para>
+    </section>
+
+    <section id='checkout-out-by-tag-in-poky'>
+        <title>Checking Out by Tag in Poky</title>
+
+        <para>
+            Similar to branches, the upstream repository uses tags
+            to mark specific commits associated with significant points in
+            a development branch (i.e. a release point or stage of a
+            release).
+            You might want to set up a local branch based on one of those
+            points in the repository.
+            The process is similar to checking out by branch name except you
+            use tag names.
+            <note>
+                Checking out a branch based on a tag gives you a
+                stable set of files not affected by development on the
+                branch above the tag.
+            </note>
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Switch to the Poky Directory:</emphasis>
+                    If you have a local poky Git repository, switch to that
+                    directory.
+                    If you do not have the local copy of poky, see the
+                    "<link linkend='cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</link>"
+                    section.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Fetch the Tag Names:</emphasis>
+                    To check out a branch based on a tag name, you need to
+                    fetch the upstream tags into your local repository:
+                    <literallayout class='monospaced'>
+     $ git fetch --tags
+     $
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>List the Tag Names:</emphasis>
+                    You can list the tag names now:
+                    <literallayout class='monospaced'>
+     $ git tag
+     1.1_M1.final
+     1.1_M1.rc1
+     1.1_M1.rc2
+     1.1_M2.final
+     1.1_M2.rc1
+        .
+        .
+        .
+     yocto-2.2
+     yocto-2.2.1
+     yocto-2.3
+     yocto-2.3.1
+     yocto-2.4
+     yocto_1.5_M5.rc8
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Check Out the Branch:</emphasis>
+                    <literallayout class='monospaced'>
+     $ git checkout tags/&DISTRO; -b my_yocto_&DISTRO;
+     Switched to a new branch 'my_yocto_&DISTRO;'
+     $ git branch
+       master
+     * my_yocto_&DISTRO;
+                    </literallayout>
+                    The previous command creates and checks out a local
+                    branch named "my_yocto_&DISTRO;", which is based on
+                    the commit in the upstream poky repository that has
+                    the same tag.
+                    In this example, the files you have available locally
+                    as a result of the <filename>checkout</filename>
+                    command are a snapshot of the
+                    "&DISTRO_NAME_NO_CAP;" development branch at the point
+                    where Yocto Project &DISTRO; was released.
+                    </para></listitem>
+            </orderedlist>
+        </para>
+    </section>
+</section>
+
+<section id='performing-a-simple-build'>
+    <title>Performing a Simple Build</title>
+
+    <para>
+        Several methods exist that allow you to build an image within the
+        Yocto Project.
+        This procedure shows how to build an image using BitBake from a
+        Linux host.
+        <note><title>Notes</title>
+            <itemizedlist>
+                <listitem><para>
+                    For information on how to build an image using
+                    <ulink url='&YOCTO_DOCS_REF_URL;#toaster-term'>Toaster</ulink>,
+                    see the
+                    <ulink url='&YOCTO_DOCS_TOAST_URL;'>Yocto Project Toaster Manual</ulink>.
+                    </para></listitem>
+                <listitem><para>
+                    For information on how to use
+                    <filename>devtool</filename> to build images, see the
+                    "<ulink url='&YOCTO_DOCS_SDK_URL;#using-devtool-in-your-sdk-workflow'>Using <filename>devtool</filename> in Your SDK Workflow</ulink>"
+                    section in the Yocto Project Application Development and
+                    the Extensible Software Development Kit (eSDK) manual.
+                    </para></listitem>
+            </itemizedlist>
+        </note>
+    </para>
+
+    <para>
+        The build process creates an entire Linux distribution from source
+        and places it in your
+        <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
+        under <filename>tmp/deploy/images</filename>.
+        For detailed information on the build process using BitBake, see the
+        "<ulink url='&YOCTO_DOCS_REF_URL;#images-dev-environment'>Images</ulink>"
+        section in the Yocto Project Reference Manual.
+        You can also reference the
         "<ulink url='&YOCTO_DOCS_QS_URL;#qs-building-images'>Building Images</ulink>"
         section in the Yocto Project Quick Start.
     </para>
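+
+    <para>
+        For example, after building <filename>core-image-minimal</filename>
+        for the <filename>qemux86</filename> machine, you would find the
+        kernel and root filesystem artifacts under
+        <filename>tmp/deploy/images/qemux86/</filename>.
+        The exact file names vary by machine, image type, and release, so
+        treat the following listing only as an illustration:
+        <literallayout class='monospaced'>
+     $ ls tmp/deploy/images/qemux86/
+     bzImage-qemux86.bin
+     core-image-minimal-qemux86.ext4
+     core-image-minimal-qemux86.manifest
+          .
+          .
+          .
+        </literallayout>
+    </para>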
 
     <para>
-        The build process is as follows:
+        The following figure and list provide an overview of the build
+        process (a consolidated command sketch follows the list):
+        <imagedata fileref="figures/bitbake-build-flow.png" width="7in" depth="4in" align="center" scalefit="1" />
         <orderedlist>
-            <listitem><para>Make sure you have set up the Source Directory described in the
-                previous section.</para></listitem>
-            <listitem><para>Initialize the build environment by sourcing a build
-                environment script (i.e.
-                <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
-                or
-                <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>).
+            <listitem><para>
+                <emphasis>Set up Your Host Development System to Support
+                Development Using the Yocto Project</emphasis>:
+                See the
+                "<ulink url='&YOCTO_DOCS_QS_URL;#yp-resources'>Setting Up to Use the Yocto Project</ulink>"
+                section in the Yocto Project Quick Start for options on how
+                to get a build host ready to use the Yocto Project.
                 </para></listitem>
-            <listitem><para>Optionally ensure the <filename>conf/local.conf</filename> configuration file,
-                which is found in the
-                <link linkend='build-directory'>Build Directory</link>,
+            <listitem><para>
+                <emphasis>Initialize the Build Environment:</emphasis>
+                Initialize the build environment by sourcing the build
+                environment script (i.e.
+                <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>).
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Make Sure Your <filename>local.conf</filename>
+                File is Correct:</emphasis>
+                Ensure the <filename>conf/local.conf</filename> configuration
+                file, which is found in the
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
                 is set up how you want it.
-                This file defines many aspects of the build environment including
-                the target machine architecture through the
+                This file defines many aspects of the build environment
+                including the target machine architecture through the
                 <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'>MACHINE</ulink></filename> variable,
                 the packaging format used during the build
                 (<ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_CLASSES'><filename>PACKAGE_CLASSES</filename></ulink>),
                 and a centralized tarball download directory through the
-                <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-DL_DIR'>DL_DIR</ulink></filename> variable.</para></listitem>
+                <filename><ulink url='&YOCTO_DOCS_REF_URL;#var-DL_DIR'>DL_DIR</ulink></filename> variable.
+                </para></listitem>
             <listitem><para>
+                <emphasis>Build the Image:</emphasis>
                 Build the image using the <filename>bitbake</filename> command.
-                If you want information on BitBake, see the
+                For example, the following command builds the
+                <filename>core-image-minimal</filename> image:
+                <literallayout class='monospaced'>
+     $ bitbake core-image-minimal
+                </literallayout>
+                For information on BitBake, see the
                 <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual</ulink>.
                 </para></listitem>
-            <listitem><para>Run the image either on the actual hardware or using the QEMU
-                emulator.</para></listitem>
         </orderedlist>
     </para>
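+
+    <para>
+        The following is a consolidated sketch of the previous steps.
+        It assumes a <filename>poky</filename> Source Directory in your
+        home directory, the default Build Directory name of
+        <filename>build</filename>, and the
+        <filename>core-image-minimal</filename> image; adjust the paths,
+        the <filename>local.conf</filename> settings, and the image name
+        for your own setup:
+        <literallayout class='monospaced'>
+     $ cd ~/poky
+     $ source &OE_INIT_FILE;
+     # Review and edit conf/local.conf (MACHINE, PACKAGE_CLASSES, DL_DIR) as needed.
+     $ bitbake core-image-minimal
+        </literallayout>
+    </para>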
 </section>
 
-<section id='flashing-images-using-bmaptool'>
-    <title>Flashing Images Using <filename>bmaptool</filename></title>
-
-    <para>
-        An easy way to flash an image to a bootable device is to use
-        <filename>bmaptool</filename>, which is integrated into the
-        OpenEmbedded build system.
-    </para>
-
-    <para>
-        Following, is an example that shows how to flash a Wic image.
-        <note>
-            You can use <filename>bmaptool</filename> to flash any
-            type of image.
-        </note>
-        Use these steps to flash an image using
-        <filename>bmaptool</filename>:
-        <note>
-            Unless you are able to install the
-            <filename>bmap-tools</filename> package as mentioned in the note
-            in the second bullet of step 3 further down, you will need to build
-            <filename>bmaptool</filename> before using it.
-            Build the tool using the following command:
-            <literallayout class='monospaced'>
-     $ bitbake bmap-tools-native
-            </literallayout>
-        </note>
-        <orderedlist>
-            <listitem><para>
-                Add the following to your <filename>local.conf</filename>
-                file:
-                <literallayout class='monospaced'>
-     IMAGE_FSTYPES += "wic wic.bmap"
-                </literallayout>
-                </para></listitem>
-            <listitem><para>
-                Either have your image ready (pre-built) or take the step
-                build the image:
-                <literallayout class='monospaced'>
-     $ bitbake <replaceable>image</replaceable>
-                </literallayout>
-                </para></listitem>
-            <listitem><para>
-                Flash the image to the media by using
-                <filename>bmaptool</filename> depending on your particular
-                setup:
-                <itemizedlist>
-                    <listitem><para>
-                        If you have write access to the media,
-                        use this command form:
-                        <literallayout class='monospaced'>
-     $ oe-run-native bmaptool copy ./tmp/deploy/images/qemux86-64/core-image-minimal-<replaceable>machine</replaceable>.wic /dev/sd<replaceable>X</replaceable>
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para>
-                        If you do not have write access to
-                        the media, use the following
-                        commands:
-                        <literallayout class='monospaced'>
-     $ sudo bash
-     $ PATH=tmp/sysroots/x86_64-linux/usr/bin/ bmaptool copy ./tmp/deploy/images/qemux86-64/core-image-minimal-<replaceable>machine</replaceable>.wic /dev/sd<replaceable>X</replaceable>
-                        </literallayout>
-                        <note>
-                            If you are using Ubuntu or Debian distributions,
-                            you can install the
-                            <filename>bmap-tools</filename> package using the
-                            following command and then use the tool
-                            without specifying
-                            <filename>PATH</filename> even from the
-                            root account:
-                            <literallayout class='monospaced'>
-     $ sudo apt-get install bmap-tools
-                            </literallayout>
-                        </note>
-                        </para></listitem>
-                </itemizedlist>
-                </para></listitem>
-        </orderedlist>
-    </para>
-
-    <para>
-        For help on the <filename>bmaptool</filename> command, use either of
-        the following commands:
-        <literallayout class='monospaced'>
-     $ bmaptool --help
-     $ oe-run-native bmaptool --help
-        </literallayout>
-    </para>
-</section>
-
-<section id='using-pre-built-binaries-and-qemu'>
-    <title>Using Pre-Built Binaries and QEMU</title>
-
-    <para>
-        Another option you have to get started is to use pre-built binaries.
-        The Yocto Project provides many types of binaries with each release.
-        See the "<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>"
-        chapter in the Yocto Project Reference Manual
-        for descriptions of the types of binaries that ship with a Yocto Project
-        release.
-    </para>
-
-    <para>
-        Using a pre-built binary is ideal for developing software
-        applications to run on your target hardware.
-        To do this, you need to be able to access the appropriate
-        cross-toolchain tarball for the architecture on which you are
-        developing.
-        If you are using an SDK type image, the image ships with the complete
-        toolchain native to the architecture (i.e. a toolchain designed to
-        run on the
-        <ulink url='&YOCTO_DOCS_REF_URL;#var-SDKMACHINE'><filename>SDKMACHINE</filename></ulink>).
-        If you are not using an SDK type image, you need to separately download
-        and install the stand-alone Yocto Project cross-toolchain tarball.
-        See the
-        "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-appendix-obtain'>Obtaining the SDK</ulink>"
-        appendix in the Yocto Project Software Development Kit (SDK)
-        Developer's Guide for more information on locating and installing
-        cross-toolchains.
-    </para>
-
-    <para>
-        Regardless of the type of image you are using, you need to download the pre-built kernel
-        that you will boot in the QEMU emulator and then download and extract the target root
-        filesystem for your target machine’s architecture.
-        You can get architecture-specific binaries and file systems from
-        <ulink url='&YOCTO_MACHINES_DL_URL;'>machines</ulink>.
-        You can get installation scripts for stand-alone toolchains from
-        <ulink url='&YOCTO_TOOLCHAIN_DL_URL;'>toolchains</ulink>.
-        Once you have all your files, you set up the environment to emulate the hardware
-        by sourcing an environment setup script.
-        Finally, you start the QEMU emulator.
-        You can find details on all these steps in the
-        <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-manual'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
-        You can learn more about using QEMU with the Yocto Project in the
-        "<link linkend='dev-manual-qemu'>Using the Quick EMUlator (QEMU)</link>"
-        section.
-    </para>
-
-    <para>
-        Using QEMU to emulate your hardware can result in speed issues
-        depending on the target and host architecture mix.
-        For example, using the <filename>qemux86</filename> image in the emulator
-        on an Intel-based 32-bit (x86) host machine is fast because the target and
-        host architectures match.
-        On the other hand, using the <filename>qemuarm</filename> image on the same Intel-based
-        host can be slower.
-        But, you still achieve faithful emulation of ARM-specific issues.
-    </para>
-
-    <para>
-        To speed things up, the QEMU images support using <filename>distcc</filename>
-        to call a cross-compiler outside the emulated system.
-        If you used <filename>runqemu</filename> to start QEMU, and the
-        <filename>distccd</filename> application is present on the host system, any
-        BitBake cross-compiling toolchain available from the build system is automatically
-        used from within QEMU simply by calling <filename>distcc</filename>.
-        You can accomplish this by defining the cross-compiler variable
-        (e.g. <filename>export CC="distcc"</filename>).
-        Alternatively, if you are using a suitable SDK image or the appropriate
-        stand-alone toolchain is present,
-        the toolchain is also automatically used.
-    </para>
-
-    <note>
-        Several mechanisms exist that let you connect to the system running on the
-        QEMU emulator:
-        <itemizedlist>
-            <listitem><para>QEMU provides a framebuffer interface that makes standard
-                consoles available.</para></listitem>
-            <listitem><para>Generally, headless embedded devices have a serial port.
-                If so, you can configure the operating system of the running image
-                to use that port to run a console.
-                The connection uses standard IP networking.</para></listitem>
-            <listitem><para>
-                SSH servers exist in some QEMU images.
-                The <filename>core-image-sato</filename> QEMU image has a
-                Dropbear secure shell (SSH) server that runs with the root
-                password disabled.
-                The <filename>core-image-full-cmdline</filename> and
-                <filename>core-image-lsb</filename> QEMU images
-                have OpenSSH instead of Dropbear.
-                Including these SSH servers allow you to use standard
-                <filename>ssh</filename> and <filename>scp</filename> commands.
-                The <filename>core-image-minimal</filename> QEMU image,
-                however, contains no SSH server.
-                </para></listitem>
-            <listitem><para>You can use a provided, user-space NFS server to boot the QEMU session
-                using a local copy of the root filesystem on the host.
-                In order to make this connection, you must extract a root filesystem tarball by using the
-                <filename>runqemu-extract-sdk</filename> command.
-                After running the command, you must then point the <filename>runqemu</filename>
-                script to the extracted directory instead of a root filesystem image file.</para></listitem>
-        </itemizedlist>
-    </note>
-</section>
+-->
 </chapter>
 <!--
 vim: expandtab tw=80 ts=4
diff --git a/import-layers/yocto-poky/documentation/dev-manual/dev-manual.xml b/import-layers/yocto-poky/documentation/dev-manual/dev-manual.xml
index 26ee974..ed8011d 100644
--- a/import-layers/yocto-poky/documentation/dev-manual/dev-manual.xml
+++ b/import-layers/yocto-poky/documentation/dev-manual/dev-manual.xml
@@ -17,14 +17,14 @@
         </mediaobject>
 
         <title>
-            Yocto Project Development Manual
+            Yocto Project Development Tasks Manual
         </title>
 
         <authorgroup>
             <author>
                 <firstname>Scott</firstname> <surname>Rifenbark</surname>
                 <affiliation>
-                    <orgname>Intel Corporation</orgname>
+                    <orgname>Scotty's Documentation Services, INC</orgname>
                 </affiliation>
                 <email>srifenbark@gmail.com</email>
             </author>
@@ -97,24 +97,19 @@
                 <revremark>Released with the Yocto Project 2.3 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.1</revnumber>
-                <date>June 2017</date>
-                <revremark>Released with the Yocto Project 2.3.1 Release.</revremark>
+                <revnumber>2.4</revnumber>
+                <date>October 2017</date>
+                <revremark>Released with the Yocto Project 2.4 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.2</revnumber>
-                <date>September 2017</date>
-                <revremark>Released with the Yocto Project 2.3.2 Release.</revremark>
-            </revision>
-            <revision>
-                <revnumber>2.3.3</revnumber>
+                <revnumber>2.4.1</revnumber>
                 <date>January 2018</date>
-                <revremark>Released with the Yocto Project 2.3.3 Release.</revremark>
+                <revremark>Released with the Yocto Project 2.4.1 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.4</revnumber>
-                <date>April 2018</date>
-                <revremark>Released with the Yocto Project 2.3.4 Release.</revremark>
+                <revnumber>2.4.2</revnumber>
+                <date>March 2018</date>
+                <revremark>Released with the Yocto Project 2.4.2 Release.</revremark>
             </revision>
         </revhistory>
 
@@ -130,33 +125,34 @@
           Creative Commons Attribution-Share Alike 2.0 UK: England &amp; Wales</ulink> as published by
           Creative Commons.
       </para>
-
-            <note><title>Manual Notes</title>
-                <itemizedlist>
-                    <listitem><para>
-                        For the latest version of the Yocto Project Development
-                        Manual associated with this Yocto Project release
-                        (version &YOCTO_DOC_VERSION;),
-                        see the Yocto Project Development Manual from the
-                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
+           <note><title>Manual Notes</title>
+               <itemizedlist>
+                   <listitem><para>
+                       This version of the
+                       <emphasis>Yocto Project Development Tasks Manual</emphasis>
+                       is for the &YOCTO_DOC_VERSION; release of the
+                       Yocto Project.
+                       To be sure you have the latest version of the manual
+                       for this release, use the manual from the
+                       <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
+                       </para></listitem>
+                   <listitem><para>
+                       For manuals associated with other releases of the Yocto
+                       Project, go to the
+                       <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
+                       and use the drop-down "Active Releases" button
+                       and choose the manual associated with the desired
+                       Yocto Project release.
+                       </para></listitem>
+                   <listitem><para>
+                        To report any inaccuracies or problems with this
+                        manual, send an email to the Yocto Project
+                        discussion group at
+                        <filename>yocto@yoctoproject.org</filename> or log into
+                        the freenode <filename>#yocto</filename> channel.
                         </para></listitem>
-                    <listitem><para>
-                        This version of the manual is version
-                        &YOCTO_DOC_VERSION;.
-                        For later releases of the Yocto Project (if they exist),
-                        go to the
-                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
-                        and use the drop-down "Active Releases" button
-                        and choose the Yocto Project version for which you want
-                        the manual.
-                        </para></listitem>
-                    <listitem><para>
-                        For an in-development version of the Yocto Project
-                        Development Manual, see
-                        <ulink url='&YOCTO_DOCS_URL;/latest/dev-manual/dev-manual.html'></ulink>.
-                        </para></listitem>
-                </itemizedlist>
-            </note>
+               </itemizedlist>
+           </note>
     </legalnotice>
 
     </bookinfo>
@@ -167,8 +163,6 @@
 
     <xi:include href="dev-manual-newbie.xml"/>
 
-    <xi:include href="dev-manual-model.xml"/>
-
     <xi:include href="dev-manual-common-tasks.xml"/>
 
     <xi:include href="dev-manual-qemu.xml"/>
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/bitbake-build-flow.png b/import-layers/yocto-poky/documentation/dev-manual/figures/bitbake-build-flow.png
new file mode 100644
index 0000000..108265a
--- /dev/null
+++ b/import-layers/yocto-poky/documentation/dev-manual/figures/bitbake-build-flow.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/bsp-dev-flow.png b/import-layers/yocto-poky/documentation/dev-manual/figures/bsp-dev-flow.png
deleted file mode 100644
index 540b0ab..0000000
--- a/import-layers/yocto-poky/documentation/dev-manual/figures/bsp-dev-flow.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/dev-title.png b/import-layers/yocto-poky/documentation/dev-manual/figures/dev-title.png
index d3cac4a..15e67d0 100644
--- a/import-layers/yocto-poky/documentation/dev-manual/figures/dev-title.png
+++ b/import-layers/yocto-poky/documentation/dev-manual/figures/dev-title.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/devtool-add-flow.png b/import-layers/yocto-poky/documentation/dev-manual/figures/devtool-add-flow.png
deleted file mode 100644
index 985ac33..0000000
--- a/import-layers/yocto-poky/documentation/dev-manual/figures/devtool-add-flow.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/devtool-modify-flow.png b/import-layers/yocto-poky/documentation/dev-manual/figures/devtool-modify-flow.png
deleted file mode 100644
index fd684ff..0000000
--- a/import-layers/yocto-poky/documentation/dev-manual/figures/devtool-modify-flow.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/devtool-upgrade-flow.png b/import-layers/yocto-poky/documentation/dev-manual/figures/devtool-upgrade-flow.png
deleted file mode 100644
index 65474da..0000000
--- a/import-layers/yocto-poky/documentation/dev-manual/figures/devtool-upgrade-flow.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/index-downloads.png b/import-layers/yocto-poky/documentation/dev-manual/figures/index-downloads.png
deleted file mode 100644
index 41251d5d..0000000
--- a/import-layers/yocto-poky/documentation/dev-manual/figures/index-downloads.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/kernel-dev-flow.png b/import-layers/yocto-poky/documentation/dev-manual/figures/kernel-dev-flow.png
deleted file mode 100644
index 009105d..0000000
--- a/import-layers/yocto-poky/documentation/dev-manual/figures/kernel-dev-flow.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/kernel-overview-1.png b/import-layers/yocto-poky/documentation/dev-manual/figures/kernel-overview-1.png
deleted file mode 100644
index 116c0b9..0000000
--- a/import-layers/yocto-poky/documentation/dev-manual/figures/kernel-overview-1.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/kernel-overview-2-generic.png b/import-layers/yocto-poky/documentation/dev-manual/figures/kernel-overview-2-generic.png
deleted file mode 100644
index cb970ea..0000000
--- a/import-layers/yocto-poky/documentation/dev-manual/figures/kernel-overview-2-generic.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/source-repos.png b/import-layers/yocto-poky/documentation/dev-manual/figures/source-repos.png
deleted file mode 100644
index 65c5f29..0000000
--- a/import-layers/yocto-poky/documentation/dev-manual/figures/source-repos.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/kernel-dev/figures/kernel-dev-flow.png b/import-layers/yocto-poky/documentation/kernel-dev/figures/kernel-dev-flow.png
new file mode 100644
index 0000000..793a395
--- /dev/null
+++ b/import-layers/yocto-poky/documentation/kernel-dev/figures/kernel-dev-flow.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/kernel-dev/figures/kernel-overview-2-generic.png b/import-layers/yocto-poky/documentation/kernel-dev/figures/kernel-overview-2-generic.png
new file mode 100644
index 0000000..ee2cdb2
--- /dev/null
+++ b/import-layers/yocto-poky/documentation/kernel-dev/figures/kernel-overview-2-generic.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-advanced.xml b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-advanced.xml
index a5ccfdc..c3013b8 100644
--- a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-advanced.xml
+++ b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-advanced.xml
@@ -3,7 +3,7 @@
 [<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
 
 <chapter id='kernel-dev-advanced'>
-<title>Working with Advanced Metadata</title>
+<title>Working with Advanced Metadata (<filename>yocto-kernel-cache</filename>)</title>
 
 <section id='kernel-dev-advanced-overview'>
     <title>Overview</title>
@@ -11,33 +11,51 @@
     <para>
         In addition to supporting configuration fragments and patches, the
         Yocto Project kernel tools also support rich
-        <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink> that you can
+        <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink> that you can
         use to define complex policies and Board Support Package (BSP) support.
-        The purpose of the Metadata and the tools that manage it, known as
-        the kern-tools (<filename>kern-tools-native_git.bb</filename>), is
+        The purpose of the Metadata and the tools that manage it is
         to help you manage the complexity of the configuration and sources
         used to support multiple BSPs and Linux kernel types.
     </para>
+
+    <para>
+        Kernel Metadata exists in many places.
+        One area in the Yocto Project
+        <ulink url='&YOCTO_DOCS_REF_URL;#source-repositories'>Source Repositories</ulink>
+        is the <filename>yocto-kernel-cache</filename> Git repository.
+        You can find this repository grouped under the "Yocto Linux Kernel"
+        heading in the
+        <ulink url='&YOCTO_GIT_URL;'>Yocto Project Source Repositories</ulink>.
+    </para>
+
+    <para>
+        Kernel development tools ("kern-tools") also exist in the Yocto
+        Project Source Repositories under the "Yocto Linux Kernel" heading
+        in the <filename>yocto-kernel-tools</filename> Git repository.
+        The recipe that builds these tools is
+        <filename>meta/recipes-kernel/kern-tools/kern-tools-native_git.bb</filename>
+        in the
+        <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+        (e.g. <filename>poky</filename>).
+    </para>
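+
+    <para>
+        If you want to browse this Metadata locally, you can clone the
+        <filename>yocto-kernel-cache</filename> repository directly.
+        The following is only a sketch; check out the branch that matches
+        the kernel version your recipe uses (the
+        <filename>yocto-4.12</filename> branch is assumed here):
+        <literallayout class='monospaced'>
+     $ git clone git://git.yoctoproject.org/yocto-kernel-cache
+     $ cd yocto-kernel-cache
+     $ git checkout -b yocto-4.12 origin/yocto-4.12
+        </literallayout>
+    </para>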
 </section>
 
 <section id='using-kernel-metadata-in-a-recipe'>
     <title>Using Kernel Metadata in a Recipe</title>
 
     <para>
-        The kernel sources in the Yocto Project contain kernel Metadata, which
-        is located in the <filename>meta</filename> branches of the kernel
-        source Git repositories.
+        As mentioned in the introduction, the Yocto Project contains kernel
+        Metadata, which is located in the
+        <filename>yocto-kernel-cache</filename> Git repository.
         This Metadata defines Board Support Packages (BSPs) that
-        correspond to definitions in linux-yocto recipes for the same BSPs.
+        match definitions in the linux-yocto recipes for the corresponding BSPs.
         A BSP consists of an aggregation of kernel policy and enabled
         hardware-specific features.
         The BSP can be influenced from within the linux-yocto recipe.
         <note>
-            Linux kernel source that contains kernel Metadata is said to be
-            "linux-yocto style" kernel source.
-            A Linux kernel recipe that inherits from the
-            <filename>linux-yocto.inc</filename> include file is said to be a
-            "linux-yocto style" recipe.
+            A Linux kernel recipe that contains kernel Metadata (e.g.
+            inherits from the <filename>linux-yocto.inc</filename> file)
+            is said to be a "linux-yocto style" recipe.
         </note>
     </para>
 
@@ -48,7 +66,7 @@
         This variable is typically set to the same value as the
         <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
         variable, which is used by
-        <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>.
+        <ulink url='&YOCTO_DOCS_REF_URL;#bitbake-term'>BitBake</ulink>.
         However, in some cases, the variable might instead refer to the
         underlying platform of the <filename>MACHINE</filename>.
     </para>
@@ -56,12 +74,16 @@
     <para>
         Multiple BSPs can reuse the same <filename>KMACHINE</filename>
         name if they are built using the same BSP description.
-        The "ep108-zynqmp" and "qemuzynqmp" BSP combination
-        in the <filename>meta-xilinx</filename>
-        layer is a good example of two BSPs using the same
-        <filename>KMACHINE</filename> value (i.e. "zynqmp").
-        See the <link linkend='bsp-descriptions'>BSP Descriptions</link> section
-        for more information.
+        For example, multiple Corei7-based BSPs could share the same
+        "intel-corei7-64" value for <filename>KMACHINE</filename>.
+        It is important to realize that <filename>KMACHINE</filename> is
+        just for kernel mapping, while
+        <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
+        is the machine type within a BSP Layer.
+        Even with this distinction, however, these two variables can hold
+        the same value.
+        See the <link linkend='bsp-descriptions'>BSP Descriptions</link>
+        section for more information.
     </para>
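+
+    <para>
+        For illustration only, a BSP layer might map its machines to a
+        shared <filename>KMACHINE</filename> value in its kernel append
+        file using overrides similar to the following (the machine names
+        and values here are examples, loosely modeled on the
+        <filename>meta-yocto-bsp</filename> layer, and are not meant to be
+        copied verbatim):
+        <literallayout class='monospaced'>
+     KMACHINE_genericx86 ?= "common-pc"
+     KMACHINE_genericx86-64 ?= "common-pc-64"
+        </literallayout>
+    </para>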
 
     <para>
@@ -72,9 +94,9 @@
         <note>
             You can use the <filename>KBRANCH</filename> value to define an
             alternate branch typically with a machine override as shown here
-            from the <filename>meta-emenlow</filename> layer:
+            from the <filename>meta-yocto-bsp</filename> layer:
             <literallayout class='monospaced'>
-     KBRANCH_emenlow-noemgd = "standard/base"
+     KBRANCH_edgerouter = "standard/edgerouter"
             </literallayout>
         </note>
     </para>
@@ -113,16 +135,7 @@
         recipe.
         The tools use the first BSP description they find that matches
         both variables.
-        If the tools cannot find a match, they issue a warning such as
-        the following:
-        <literallayout class='monospaced'>
-     WARNING: Can't find any BSP hardware or required configuration fragments.
-     WARNING: Looked at meta/cfg/broken/emenlow-broken/hdw_frags.txt and
-              meta/cfg/broken/emenlow-broken/required_frags.txt in directory:
-              meta/cfg/broken/emenlow-broken
-        </literallayout>
-        In this example, <filename>KMACHINE</filename> was set to "emenlow-broken"
-        and <filename>LINUX_KERNEL_TYPE</filename> was set to "broken".
+        If the tools cannot find a match, they issue a warning.
     </para>
 
     <para>
@@ -154,19 +167,13 @@
         </literallayout>
         The value of the entries in <filename>KERNEL_FEATURES</filename>
         are dependent on their location within the kernel Metadata itself.
-        The examples here are taken from the <filename>meta</filename>
-        branch of the <filename>linux-yocto-3.19</filename> repository.
-        Within that branch, "features" and "cfg" are subdirectories of the
-        <filename>meta/cfg/kernel-cache</filename> directory.
+        The examples here are taken from the
+        <filename>yocto-kernel-cache</filename> repository.
+        Each branch of this repository contains "features" and "cfg"
+        subdirectories at the top level.
         For more information, see the
-        "<link linkend='kernel-metadata-syntax'>Kernel Metadata Syntax</link>" section.
-        <note>
-            The processing of the these variables has evolved some between the
-	        0.9 and 1.3 releases of the Yocto Project and associated
-	        kern-tools sources.
-            The descriptions in this section are accurate for 1.3 and later
-	        releases of the Yocto Project.
-        </note>
+        "<link linkend='kernel-metadata-syntax'>Kernel Metadata Syntax</link>"
+        section.
     </para>
 </section>
 
@@ -279,11 +286,13 @@
 
     <para>
         Paths used in kernel Metadata files are relative to
-        <filename>&lt;base&gt;</filename>, which is either
+        <replaceable>base</replaceable>, which is either
         <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
         if you are creating Metadata in
         <link linkend='recipe-space-metadata'>recipe-space</link>,
-        or <filename>meta/cfg/kernel-cache/</filename> if you are creating
+        or the top level of
+        <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/yocto-kernel-cache/tree/'><filename>yocto-kernel-cache</filename></ulink>
+        if you are creating
         <link linkend='metadata-outside-the-recipe-space'>Metadata outside of the recipe-space</link>.
     </para>
 
@@ -300,12 +309,18 @@
         </para>
 
         <para>
-            The Symmetric Multi-Processing (SMP) fragment included in the
-            <filename>linux-yocto-3.19</filename> Git repository
-            consists of the following two files:
+            As an example, consider the Symmetric Multi-Processing (SMP)
+            fragment used with the <filename>linux-yocto-4.12</filename>
+            kernel as defined outside of the recipe space (i.e.
+            <filename>yocto-kernel-cache</filename>).
+            This Metadata consists of two files: <filename>smp.scc</filename>
+            and <filename>smp.cfg</filename>.
+            You can find these files in the <filename>cfg</filename> directory
+            of the <filename>yocto-4.12</filename> branch in the
+            <filename>yocto-kernel-cache</filename> Git repository:
             <literallayout class='monospaced'>
      cfg/smp.scc:
-        define KFEATURE_DESCRIPTION "Enable SMP"
+        define KFEATURE_DESCRIPTION "Enable SMP for 32 bit builds"
         define KFEATURE_COMPATIBILITY all
 
         kconf hardware smp.cfg
@@ -316,22 +331,27 @@
         # Increase default NR_CPUS from 8 to 64 so that platform with
         # more than 8 processors can be all activated at boot time
         CONFIG_NR_CPUS=64
+        # The following is needed when setting NR_CPUS to something
+        # greater than 8 on x86 architectures, it should be automatically
+        # disregarded by Kconfig when using a different arch
+        CONFIG_X86_BIGSMP=y
             </literallayout>
-            You can find information on configuration fragment files in the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-config-fragments'>Creating Configuration Fragments</ulink>"
-            section of the Yocto Project Development Manual and in
-            the "<link linkend='generating-configuration-files'>Generating Configuration Files</link>"
-            section earlier in this manual.
+            You can find general information on configuration fragment files in
+            the
+            "<link linkend='creating-config-fragments'>Creating Configuration Fragments</link>"
+            section.
         </para>
 
         <para>
+            Within the <filename>smp.scc</filename> file, the
             <ulink url='&YOCTO_DOCS_REF_URL;#var-KFEATURE_DESCRIPTION'><filename>KFEATURE_DESCRIPTION</filename></ulink>
-            provides a short description of the fragment.
+            statement provides a short description of the fragment.
             Higher level kernel tools use this description.
         </para>
 
         <para>
-            The <filename>kconf</filename> command is used to include the
+            Also within the <filename>smp.scc</filename> file, the
+            <filename>kconf</filename> command includes the
             actual configuration fragment in an <filename>.scc</filename>
             file, and the "hardware" keyword identifies the fragment as
             being hardware enabling, as opposed to general policy,
@@ -347,7 +367,7 @@
 
         <para>
             As described in the
-            "<link linkend='generating-configuration-files'>Generating Configuration Files</link>"
+            "<link linkend='validating-configuration'>Validating Configuration</link>"
             section, you can use the following BitBake command to audit your
             configuration:
             <literallayout class='monospaced'>
@@ -363,26 +383,71 @@
             Patch descriptions are very similar to configuration fragment
             descriptions, which are described in the previous section.
             However, instead of a <filename>.cfg</filename> file, these
-            descriptions work with source patches.
+            descriptions work with source patches (i.e.
+            <filename>.patch</filename> files).
         </para>
 
         <para>
-            A typical patch includes a description file and the patch itself:
+            A typical patch includes a description file and the patch itself.
+            As an example, consider the build patches used with the
+            <filename>linux-yocto-4.12</filename> kernel as defined outside of
+            the recipe space (i.e. <filename>yocto-kernel-cache</filename>).
+            This Metadata consists of several files:
+            <filename>build.scc</filename> and a set of
+            <filename>*.patch</filename> files.
+            You can find these files in the <filename>patches/build</filename>
+            directory of the <filename>yocto-4.12</filename> branch in the
+            <filename>yocto-kernel-cache</filename> Git repository.
+        </para>
+
+        <para>
+            The following listings show the <filename>build.scc</filename>
+            file and part of the
+            <filename>modpost-mask-trivial-warnings.patch</filename> file:
             <literallayout class='monospaced'>
-     patches/mypatch.scc:
-        patch mypatch.patch
+     patches/build/build.scc:
+        patch arm-serialize-build-targets.patch
+        patch powerpc-serialize-image-targets.patch
+        patch kbuild-exclude-meta-directory-from-distclean-processi.patch
 
-     patches/mypatch.patch:
-        <replaceable>typical-patch</replaceable>
+        # applied by kgit
+        # patch kbuild-add-meta-files-to-the-ignore-li.patch
+
+        patch modpost-mask-trivial-warnings.patch
+        patch menuconfig-check-lxdiaglog.sh-Allow-specification-of.patch
+
+     patches/build/modpost-mask-trivial-warnings.patch:
+        From bd48931bc142bdd104668f3a062a1f22600aae61 Mon Sep 17 00:00:00 2001
+        From: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
+        Date: Sun, 25 Jan 2009 17:58:09 -0500
+        Subject: [PATCH] modpost: mask trivial warnings
+
+        Newer HOSTCC will complain about various stdio fcns because
+                          .
+                          .
+                          .
+ 	        char *dump_write = NULL, *files_source = NULL;
+ 	        int opt;
+        --
+        2.10.1
+
+        generated by cgit v0.10.2 at 2017-09-28 15:23:23 (GMT)
             </literallayout>
-            You can create the typical <filename>.patch</filename>
-            file using <filename>diff -Nurp</filename> or
-            <filename>git format-patch</filename>.
+            The description file can include multiple patch statements where
+            each statement handles a single patch.
+            In the example <filename>build.scc</filename> file, five patch
+            statements exist for the five patches in the directory.
         </para>
 
         <para>
-            The description file can include multiple patch statements,
-            one per patch.
+            You can create a typical <filename>.patch</filename> file using
+            the <filename>diff -Nurp</filename> or
+            <filename>git format-patch</filename> commands.
+            For information on how to create patches, see the
+            "<link linkend='using-devtool-to-patch-the-kernel'>Using <filename>devtool</filename> to Patch the Kernel</link>"
+            and
+            "<link linkend='using-traditional-kernel-development-to-patch-the-kernel'>Using Traditional Kernel Development to Patch the Kernel</link>"
+            sections.
         </para>
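+
+        <para>
+            For example, assuming the change you want to capture is the most
+            recent commit in your local kernel source tree, the following
+            command produces a conventionally named patch file that a
+            <filename>.scc</filename> description can then reference:
+            <literallayout class='monospaced'>
+     $ git format-patch -1 HEAD
+     0001-<replaceable>subject-line-of-the-commit</replaceable>.patch
+            </literallayout>
+        </para>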
     </section>
 
@@ -391,26 +456,23 @@
 
         <para>
             Features are complex kernel Metadata types that consist
-            of configuration fragments (<filename>kconf</filename>), patches
-            (<filename>patch</filename>), and possibly other feature
-            description files (<filename>include</filename>).
-        </para>
-
-        <para>
-            Here is an example that shows a feature description file:
+            of configuration fragments, patches, and possibly other feature
+            description files.
+            As an example, consider the following generic listing:
             <literallayout class='monospaced'>
-     features/myfeature.scc
-        define KFEATURE_DESCRIPTION "Enable myfeature"
+     features/<replaceable>myfeature</replaceable>.scc
+        define KFEATURE_DESCRIPTION "Enable <replaceable>myfeature</replaceable>"
 
-        patch 0001-myfeature-core.patch
-        patch 0002-myfeature-interface.patch
+        patch 0001-<replaceable>myfeature</replaceable>-core.patch
+        patch 0002-<replaceable>myfeature</replaceable>-interface.patch
 
-        include cfg/myfeature_dependency.scc
-        kconf non-hardware myfeature.cfg
+        include cfg/<replaceable>myfeature</replaceable>_dependency.scc
+        kconf non-hardware <replaceable>myfeature</replaceable>.cfg
             </literallayout>
             This example shows how the <filename>patch</filename> and
             <filename>kconf</filename> commands are used as well as
-            how an additional feature description file is included.
+            how an additional feature description file is included with
+            the <filename>include</filename> command.
         </para>
 
         <para>
@@ -430,21 +492,47 @@
         <para>
             A kernel type defines a high-level kernel policy by
             aggregating non-hardware configuration fragments with
-            patches you want to use when building a Linux kernels of a
-            specific type.
+            patches you want to use when building a Linux kernel of a
+            specific type (e.g. a real-time kernel).
             Syntactically, kernel types are no different than features
             as described in the "<link linkend='features'>Features</link>"
             section.
-            The <filename>LINUX_KERNEL_TYPE</filename> variable in the kernel
-            recipe selects the kernel type.
-            See the "<link linkend='using-kernel-metadata-in-a-recipe'>Using Kernel Metadata in a Recipe</link>"
-            section for more information.
+            The
+            <ulink url='&YOCTO_DOCS_REF_URL;#var-LINUX_KERNEL_TYPE'><filename>LINUX_KERNEL_TYPE</filename></ulink>
+            variable in the kernel recipe selects the kernel type.
+            For example, in the <filename>linux-yocto_4.12.bb</filename>
+            kernel recipe found in
+            <filename>poky/meta/recipes-kernel/linux</filename>, a
+            <ulink url='&YOCTO_DOCS_BB_URL;#require-inclusion'><filename>require</filename></ulink>
+            directive includes the
+            <filename>poky/meta/recipes-kernel/linux/linux-yocto.inc</filename>
+            file, which has the following statement that defines the default
+            kernel type:
+            <literallayout class='monospaced'>
+     LINUX_KERNEL_TYPE ??= "standard"
+            </literallayout>
         </para>
 
         <para>
-            As an example, the <filename>linux-yocto-3.19</filename>
-            tree defines three kernel types: "standard",
-            "tiny", and "preempt-rt":
+            Another example would be the real-time kernel (i.e.
+            <filename>linux-yocto-rt_4.12.bb</filename>).
+            This kernel recipe directly sets the kernel type as follows:
+            <literallayout class='monospaced'>
+     LINUX_KERNEL_TYPE = "preempt-rt"
+            </literallayout>
+            <note>
+                You can find kernel recipes in the
+                <filename>meta/recipes-kernel/linux</filename> directory of the
+                <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+                (e.g. <filename>poky/meta/recipes-kernel/linux/linux-yocto_4.12.bb</filename>).
+                See the "<link linkend='using-kernel-metadata-in-a-recipe'>Using Kernel Metadata in a Recipe</link>"
+                section for more information.
+            </note>
+        </para>
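+
+        <para>
+            As a minimal sketch (the append file name is only an example),
+            you could also select a different kernel type from your own layer
+            by overriding the variable in a <filename>linux-yocto</filename>
+            append file.
+            Depending on the kernel type, you might also need to select a
+            matching <filename>KBRANCH</filename>:
+            <literallayout class='monospaced'>
+     # hypothetical linux-yocto_4.12.bbappend in your own layer;
+     # overrides the weak (??=) default set in linux-yocto.inc
+     LINUX_KERNEL_TYPE = "tiny"
+            </literallayout>
+        </para>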
+
+        <para>
+            Three kernel types ("standard", "tiny", and "preempt-rt") are
+            supported for Linux Yocto kernels:
             <itemizedlist>
                 <listitem><para>"standard":
                     Includes the generic Linux kernel policy of the Yocto
@@ -471,29 +559,40 @@
         </para>
 
         <para>
-            The "standard" kernel type is defined by
-            <filename>standard.scc</filename>:
+            For any given kernel type, the Metadata is defined by the
+            <filename>.scc</filename> file (e.g. <filename>standard.scc</filename>).
+            Here is a partial listing for the <filename>standard.scc</filename>
+            file, which is found in the <filename>ktypes/standard</filename>
+            directory of the <filename>yocto-kernel-cache</filename> Git
+            repository:
             <literallayout class='monospaced'>
      # Include this kernel type fragment to get the standard features and
      # configuration values.
 
-     # Include all standard features
-     include standard-nocfg.scc
+     # Note: if only the features are desired, but not the configuration
+     #       then this should be included as:
+     #             include ktypes/standard/standard.scc nocfg
+     #       if no chained configuration is desired, include it as:
+     #             include ktypes/standard/standard.scc nocfg inherit
+
+
+
+     include ktypes/base/base.scc
+     branch standard
 
      kconf non-hardware standard.cfg
 
-     # individual cfg block section
-     include cfg/fs/devtmpfs.scc
-     include cfg/fs/debugfs.scc
-     include cfg/fs/btrfs.scc
-     include cfg/fs/ext2.scc
-     include cfg/fs/ext3.scc
-     include cfg/fs/ext4.scc
+     include features/kgdb/kgdb.scc
+                .
+                .
+                .
 
-     include cfg/net/ipv6.scc
-     include cfg/net/ip_nf.scc
      include cfg/net/ip6_nf.scc
      include cfg/net/bridge.scc
+
+     include cfg/systemd.scc
+
+     include features/rfkill/rfkill.scc
             </literallayout>
         </para>
 
@@ -539,9 +638,9 @@
         </para>
 
         <para>
-            This section provides a BSP description structural overview along
-            with aggregation concepts as well as a detailed example using
-            a BSP supported by the Yocto Project (i.e. Minnow Board).
+            This section provides an overview of the BSP description structure
+            and aggregation concepts, and presents a detailed example using
+            a BSP supported by the Yocto Project (i.e. BeagleBone Board).
         </para>
 
         <section id='bsp-description-file-overview'>
@@ -549,7 +648,7 @@
 
             <para>
                 For simplicity, consider the following top-level BSP
-                description file.
+                description files for the BeagleBone board.
                 Top-level BSP description files employ both a structure
                 and naming convention for consistency.
                 The naming convention for the file is as follows:
@@ -557,31 +656,30 @@
      <replaceable>bsp_name</replaceable>-<replaceable>kernel_type</replaceable>.scc
                 </literallayout>
                 Here are some example top-level BSP filenames for the
-                Minnow Board BSP, which is supported by the Yocto Project:
+                BeagleBone Board BSP, which is supported by the Yocto Project:
                 <literallayout class='monospaced'>
-     minnow-standard.scc
-     minnow-preempt-rt.scc
-     minnow-tiny.scc
+     beaglebone-standard.scc
+     beaglebone-preempt-rt.scc
                 </literallayout>
                 Each file uses the BSP name followed by the kernel type.
             </para>
 
             <para>
-                is simple BSP description file whose name has the
-                form
-                <replaceable>mybsp</replaceable><filename>-standard</filename>
-                and supports the <replaceable>mybsp</replaceable> machine using
-                a standard kernel:
+                Examine the <filename>beaglebone-standard.scc</filename>
+                file:
                 <literallayout class='monospaced'>
-     define KMACHINE <replaceable>mybsp</replaceable>
+     define KMACHINE beaglebone
      define KTYPE standard
-     define KARCH i386
+     define KARCH arm
 
-     include ktypes/standard
+     include ktypes/standard/standard.scc
+     branch beaglebone
 
-     include <replaceable>mybsp</replaceable>.scc
+     include beaglebone.scc
 
-     kconf hardware <replaceable>mybsp</replaceable>-<replaceable>extra</replaceable>.cfg
+     # default policy for standard kernels
+     include features/latencytop/latencytop.scc
+     include features/profiling/profiling.scc
                 </literallayout>
                 Every top-level BSP description file should define the
                 <ulink url='&YOCTO_DOCS_REF_URL;#var-KMACHINE'><filename>KMACHINE</filename></ulink>,
@@ -591,23 +689,20 @@
                 These variables allow the OpenEmbedded build system to identify
                 the description as meeting the criteria set by the recipe being
                 built.
-                This simple example supports the "mybsp" machine for the "standard"
-                kernel and the "i386" architecture.
+                This example supports the "beaglebone" machine for the
+                "standard" kernel and the "arm" architecture.
             </para>
 
             <para>
                 Be aware that a hard link between the
-                <filename>KTYPE</filename> variable and a kernel type description
-                file does not exist.
-                Thus, if you do not have kernel types defined in your kernel
-                Metadata, you only need to ensure that the kernel recipe's
+                <filename>KTYPE</filename> variable and a kernel type
+                description file does not exist.
+                Thus, if your kernel Metadata does not define the kernel type
+                as this example does, you only need to ensure that the
                 <ulink url='&YOCTO_DOCS_REF_URL;#var-LINUX_KERNEL_TYPE'><filename>LINUX_KERNEL_TYPE</filename></ulink>
-                variable and the <filename>KTYPE</filename> variable in the
-                BSP description file match.
-                <note>
-                    Future versions of the tooling make the specification of
-                    <filename>KTYPE</filename> in the BSP optional.
-                </note>
+                variable in the kernel recipe and the
+                <filename>KTYPE</filename> variable in the BSP description
+                file match.
             </para>
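+
+            <para>
+                As a brief sketch of that requirement, if the kernel recipe or
+                your append file sets:
+                <literallayout class='monospaced'>
+     LINUX_KERNEL_TYPE = "standard"
+                </literallayout>
+                then the matching top-level BSP description file would contain:
+                <literallayout class='monospaced'>
+     define KTYPE standard
+                </literallayout>
+            </para>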
 
             <para>
@@ -616,13 +711,12 @@
                 "standard".
                 In the previous example, this is done using the following:
                 <literallayout class='monospaced'>
-     include ktypes/standard
+     include ktypes/standard/standard.scc
                 </literallayout>
-                In the previous example, <filename>ktypes/standard.scc</filename>
-                aggregates all the configuration fragments, patches, and
-                features that make up your standard kernel policy.
-                See the "<link linkend='kernel-types'>Kernel Types</link>" section
-                for more information.
+                This file aggregates all the configuration fragments, patches,
+                and features that make up your standard kernel policy.
+                See the "<link linkend='kernel-types'>Kernel Types</link>"
+                section for more information.
             </para>
 
             <para>
@@ -631,10 +725,14 @@
                 <literallayout class='monospaced'>
      include <replaceable>mybsp</replaceable>.scc
                 </literallayout>
+                You can see that in the BeagleBone example with the following:
+                <literallayout class='monospaced'>
+     include beaglebone.scc
+                </literallayout>
                 For information on how to break a complete
                 <filename>.config</filename> file into the various
                 configuration fragments, see the
-                "<link linkend='generating-configuration-files'>Generating Configuration Files</link>"
+                "<link linkend='creating-config-fragments'>Creating Configuration Fragments</link>"
                 section.
             </para>
 
@@ -645,6 +743,23 @@
                 <literallayout class='monospaced'>
      kconf hardware <replaceable>mybsp</replaceable>-<replaceable>extra</replaceable>.cfg
                 </literallayout>
+                The BeagleBone example does not include these types of
+                configurations.
+                However, the Malta 32-bit board does ("mti-malta32").
+                Here is the <filename>mti-malta32-le-standard.scc</filename>
+                file:
+                <literallayout class='monospaced'>
+     define KMACHINE mti-malta32-le
+     define KMACHINE qemumipsel
+     define KTYPE standard
+     define KARCH mips
+
+     include ktypes/standard/standard.scc
+     branch mti-malta32
+
+     include mti-malta32.scc
+     kconf hardware mti-malta32-le.cfg
+                </literallayout>
             </para>
         </section>
 
@@ -655,14 +770,15 @@
                 Many real-world examples are more complex.
                 Like any other <filename>.scc</filename> file, BSP
                 descriptions can aggregate features.
-                Consider the Minnow BSP definition from the
-                <filename>linux-yocto-4.4</filename> in the
-                Yocto Project
-                <ulink url='&YOCTO_DOCS_DEV_URL;#source-repositories'>Source Repositories</ulink>
-                (i.e.
-                <filename>yocto-kernel-cache/bsp/minnow</filename>):
+                Consider the Minnow BSP definition from the
+                <filename>linux-yocto-4.4</filename> branch of the
+                <filename>yocto-kernel-cache</filename> (i.e.
+                <filename>yocto-kernel-cache/bsp/minnow/minnow.scc</filename>):
+                <note>
+                    Although the Minnow Board BSP is unused, the Metadata
+                    remains and is being used here just as an example.
+                </note>
                 <literallayout class='monospaced'>
-     minnow.scc:
          include cfg/x86.scc
          include features/eg20t/eg20t.scc
          include cfg/dmaengine.scc
@@ -698,9 +814,8 @@
                 "minnow" description files for the supported kernel types
                 (i.e. "standard", "preempt-rt", and "tiny").
                 Consider the "minnow" description for the "standard" kernel
-                type:
+                type (i.e. <filename>minnow-standard.scc</filename>):
                 <literallayout class='monospaced'>
-     minnow-standard.scc:
          define KMACHINE minnow
          define KTYPE standard
          define KARCH i386
@@ -735,9 +850,8 @@
 
             <para>
                 Now consider the "minnow" description for the "tiny" kernel
-                type:
+                type (i.e. <filename>minnow-tiny.scc</filename>):
                 <literallayout class='monospaced'>
-     minnow-tiny.scc:
         define KMACHINE minnow
         define KTYPE tiny
         define KARCH i386
@@ -757,10 +871,12 @@
 
             <para>
                 Notice again the three critical variables:
-                <filename>KMACHINE</filename>, <filename>KTYPE</filename>,
-                and <filename>KARCH</filename>.
-                Of these variables, only the <filename>KTYPE</filename> has changed.
-                It is now set to "tiny".
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-KMACHINE'><filename>KMACHINE</filename></ulink>,
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-KTYPE'><filename>KTYPE</filename></ulink>,
+                and
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-KARCH'><filename>KARCH</filename></ulink>.
+                Of these variables, only <filename>KTYPE</filename>
+                has changed to specify the "tiny" kernel type.
             </para>
         </section>
     </section>
@@ -867,15 +983,15 @@
             When stored outside of the recipe-space, the kernel Metadata
             files reside in a separate repository.
             The OpenEmbedded build system adds the Metadata to the build as
-            a "ktype=meta" repository through the
+            a "type=kmeta" repository through the
             <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
             variable.
             As an example, consider the following <filename>SRC_URI</filename>
-            statement from the <filename>linux-yocto_4.4.bb</filename>
+            statement from the <filename>linux-yocto_4.12.bb</filename>
             kernel recipe:
             <literallayout class='monospaced'>
-     SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.4.git;name=machine;branch=${KBRANCH}; \
-                git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.4;destsuffix=${KMETA}"
+     SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.12.git;name=machine;branch=${KBRANCH}; \
+                git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.12;destsuffix=${KMETA}"
             </literallayout>
             <filename>${KMETA}</filename>, in this context, is simply used to
             name the directory into which the Git fetcher places the Metadata.
@@ -894,46 +1010,6 @@
             configuration phase.
         </para>
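+
+        <para>
+            As a sketch only (the second repository URL and branch below are
+            hypothetical placeholders), pointing the "kmeta" entry at your own
+            Metadata repository follows the same pattern:
+            <literallayout class='monospaced'>
+     # the kmeta repository and branch here are hypothetical - substitute your own
+     SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.12.git;name=machine;branch=${KBRANCH}; \
+                git://git.example.com/my-kernel-cache;type=kmeta;name=meta;branch=my-meta-branch;destsuffix=${KMETA}"
+            </literallayout>
+        </para>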
 
-<!--
-
-
-        <para>
-            Following is an example that shows how a trivial tree of Metadata
-            is stored in a custom Linux kernel Git repository:
-            <literallayout class='monospaced'>
-     meta/
-     `&dash;&dash; cfg
-         `&dash;&dash; kernel-cache
-             |&dash;&dash; bsp-standard.scc
-             |&dash;&dash; bsp.cfg
-             `&dash;&dash; standard.cfg
-            </literallayout>
-        </para>
-
-        <para>
-            To use a branch different from where the sources reside,
-            specify the branch in the <filename>KMETA</filename> variable
-            in your Linux kernel recipe.
-            Here is an example:
-            <literallayout class='monospaced'>
-     KMETA = "meta"
-            </literallayout>
-            To use the same branch as the sources, set
-            <filename>KMETA</filename> to an empty string:
-            <literallayout class='monospaced'>
-     KMETA = ""
-            </literallayout>
-            If you are working with your own sources and want to create an
-            orphan <filename>meta</filename> branch, use these commands
-            from within your Linux kernel Git repository:
-            <literallayout class='monospaced'>
-     $ git checkout &dash;&dash;orphan meta
-     $ git rm -rf .
-     $ git commit &dash;&dash;allow-empty -m "Create orphan meta branch"
-            </literallayout>
-        </para>
--->
-
         <para>
             If you modify the Metadata, you must not forget to update the
             <ulink url='&YOCTO_DOCS_REF_URL;#var-SRCREV'><filename>SRCREV</filename></ulink>
@@ -1057,10 +1133,9 @@
         </para>
 
         <para>
-            If you find
-            yourself with numerous branches, you might consider using a
-            hierarchical branching system similar to what the linux-yocto Linux
-            kernel repositories use:
+            If you find yourself with numerous branches, you might consider
+            using a hierarchical branching system similar to what the
+            Yocto Linux Kernel Git repositories use:
             <literallayout class='monospaced'>
      <replaceable>common</replaceable>/<replaceable>kernel_type</replaceable>/<replaceable>machine</replaceable>
             </literallayout>
@@ -1090,7 +1165,8 @@
             The "standard" and "small" branches add sources specific to those
             kernel types that for whatever reason are not appropriate for the
             other branches.
-            <note>The "base" branches are an artifact of the way Git manages
+            <note>
+                The "base" branches are an artifact of the way Git manages
                 its data internally on the filesystem: Git will not allow you
                 to use <filename>mydir/standard</filename> and
                 <filename>mydir/standard/machine_a</filename> because it
@@ -1137,27 +1213,34 @@
         This section provides a brief reference for the commands you can use
         within an SCC description file (<filename>.scc</filename>):
         <itemizedlist>
-            <listitem><para><filename>branch [ref]</filename>:
+            <listitem><para>
+                <filename>branch [ref]</filename>:
                 Creates a new branch relative to the current branch
                 (typically <filename>${KTYPE}</filename>) using
                 the currently checked-out branch, or "ref" if specified.
                 </para></listitem>
-            <listitem><para><filename>define</filename>:
+            <listitem><para>
+                <filename>define</filename>:
                 Defines variables, such as <filename>KMACHINE</filename>,
                 <filename>KTYPE</filename>, <filename>KARCH</filename>,
                 and <filename>KFEATURE_DESCRIPTION</filename>.</para></listitem>
-            <listitem><para><filename>include SCC_FILE</filename>:
+            <listitem><para>
+                <filename>include SCC_FILE</filename>:
                 Includes an SCC file in the current file.
                 The file is parsed as if you had inserted it inline.
                 </para></listitem>
-            <listitem><para><filename>kconf [hardware|non-hardware] CFG_FILE</filename>:
+            <listitem><para>
+                <filename>kconf [hardware|non-hardware] CFG_FILE</filename>:
                 Queues a configuration fragment for merging into the final
                 Linux <filename>.config</filename> file.</para></listitem>
-            <listitem><para><filename>git merge GIT_BRANCH</filename>:
+            <listitem><para>
+                <filename>git merge GIT_BRANCH</filename>:
                 Merges the feature branch into the current branch.
                 </para></listitem>
-            <listitem><para><filename>patch PATCH_FILE</filename>:
-                Applies the patch to the current Git branch.</para></listitem>
+            <listitem><para>
+                <filename>patch PATCH_FILE</filename>:
+                Applies the patch to the current Git branch.
+                </para></listitem>
         </itemizedlist>
     </para>
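+
+    <para>
+        As a brief combined sketch (the machine, file, and patch names are
+        placeholders modeled on the BSP examples earlier in this chapter), a
+        top-level BSP description file might use most of these commands
+        together:
+        <literallayout class='monospaced'>
+     define KMACHINE <replaceable>mybsp</replaceable>
+     define KTYPE standard
+     define KARCH arm
+
+     include ktypes/standard/standard.scc
+     branch <replaceable>mybsp</replaceable>
+
+     include <replaceable>mybsp</replaceable>.scc
+
+     patch 0001-<replaceable>mybsp</replaceable>-fixup.patch
+     kconf hardware <replaceable>mybsp</replaceable>-<replaceable>extra</replaceable>.cfg
+        </literallayout>
+    </para>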
 </section>
diff --git a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-common.xml b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-common.xml
index aa40fc8..b8fd870 100644
--- a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-common.xml
+++ b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-common.xml
@@ -3,20 +3,466 @@
 [<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
 
 <chapter id='kernel-dev-common'>
-
 <title>Common Tasks</title>
 
-<para>
-    This chapter presents several common tasks you perform when you
-    work with the Yocto Project Linux kernel.
-    These tasks include preparing a layer, modifying an existing recipe,
-    iterative development, working with your own sources, and incorporating
-    out-of-tree modules.
-    <note>
-        The examples presented in this chapter work with the Yocto Project
-        1.2.2 Release and forward.
-    </note>
-</para>
+    <para>
+        This chapter presents several common tasks you perform when you
+        work with the Yocto Project Linux kernel.
+        These tasks include preparing your host development system for
+        kernel development, preparing a layer, modifying an existing recipe,
+        patching the kernel, configuring the kernel, iterative development,
+        working with your own sources, and incorporating out-of-tree modules.
+        <note>
+            The examples presented in this chapter work with the Yocto Project
+            2.4 Release and forward.
+        </note>
+    </para>
+
+    <section id='preparing-the-build-host-to-work-on-the-kernel'>
+        <title>Preparing the Build Host to Work on the Kernel</title>
+
+        <para>
+            Before you can do any kernel development, you need to be
+            sure your build host is set up to use the Yocto Project.
+            For information on how to get set up, see the
+            "<ulink url='&YOCTO_DOCS_DEV_URL;#setting-up-the-development-host-to-use-the-yocto-project'>Setting Up to Use the Yocto Project</ulink>"
+            section in the Yocto Project Development Tasks Manual.
+            Part of preparing the system is creating a local Git
+            repository of the
+            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+            (<filename>poky</filename>) on your system.
+            Follow the steps in the
+            "<ulink url='&YOCTO_DOCS_DEV_URL;#cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</ulink>"
+            section in the Yocto Project Development Tasks Manual to set up your
+            Source Directory.
+            <note>
+                Be sure you check out the appropriate development branch or
+                you create your local branch by checking out a specific tag
+                to get the desired version of Yocto Project.
+                See the
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#checking-out-by-branch-in-poky'>Checking Out by Branch in Poky</ulink>"
+                and
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#checkout-out-by-tag-in-poky'>Checking Out by Tag in Poky</ulink>"
+                sections in the Yocto Project Development Tasks Manual for more
+                information.
+            </note>
+        </para>
+
+        <para>
+            Kernel development is best accomplished using
+            <ulink url='&YOCTO_DOCS_SDK_URL;#using-devtool-in-your-sdk-workflow'><filename>devtool</filename></ulink>
+            and not through traditional kernel workflow methods.
+            The remainder of this section provides information for both
+            scenarios.
+        </para>
+
+        <section id='getting-ready-to-develop-using-devtool'>
+            <title>Getting Ready to Develop Using <filename>devtool</filename></title>
+
+            <para>
+                Follow these steps to prepare to update the kernel image using
+                <filename>devtool</filename>.
+                Completing this procedure leaves you with a clean kernel image,
+                ready for you to make modifications as described in the
+                "<link linkend='using-devtool-to-patch-the-kernel'>Using <filename>devtool</filename> to Patch the Kernel</link>"
+                section:
+                <orderedlist>
+                    <listitem><para>
+                        <emphasis>Initialize the BitBake Environment:</emphasis>
+                        Before building an extensible SDK, you need to
+                        initialize the BitBake build environment by sourcing the
+                        build environment script
+                        (i.e. <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>oe-init-build-env</filename></ulink>):
+                        <literallayout class='monospaced'>
+     $ cd ~/poky
+     $ source oe-init-build-env
+                        </literallayout>
+                        <note>
+                            The previous commands assume the
+                            <ulink url='&YOCTO_DOCS_REF_URL;#source-repositories'>Source Repositories</ulink>
+                            (i.e. <filename>poky</filename>) have been cloned
+                            using Git and the local repository is named
+                            "poky".
+                        </note>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Prepare Your <filename>local.conf</filename> File:</emphasis>
+                        By default, the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
+                        variable is set to "qemux86", which is fine if you are
+                        building for the QEMU emulator in 32-bit mode.
+                        However, if you are not, you need to set the
+                        <filename>MACHINE</filename> variable appropriately in
+                        your <filename>conf/local.conf</filename> file found in
+                        the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
+                        (i.e. <filename>~/poky/build</filename> in this
+                        example).</para>
+
+                        <para>Also, since you are preparing to work on the
+                        kernel image, you need to set the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS'><filename>MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS</filename></ulink>
+                        variable to include kernel modules.</para>
+
+                        <para>This example keeps the default "qemux86" for the
+                        <filename>MACHINE</filename> variable but still needs
+                        to add "kernel-modules" to the recommended packages:
+                        <literallayout class='monospaced'>
+     MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS += "kernel-modules"
+                        </literallayout>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Create a Layer for Patches:</emphasis>
+                        You need to create a layer to hold patches created
+                        for the kernel image.
+                        You can use the
+                        <filename>bitbake-layers create-layer</filename>
+                        command as follows:
+                        <literallayout class='monospaced'>
+     $ cd ~/poky/build
+     $ bitbake-layers create-layer ../../meta-mylayer
+     NOTE: Starting bitbake server...
+     Add your new layer with 'bitbake-layers add-layer ../../meta-mylayer'
+     $
+                        </literallayout>
+                        <note>
+                            For background information on working with
+                            common and BSP layers, see the
+                            "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
+                            section in the Yocto Project Development Tasks
+                            Manual and the
+                            "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP Layers</ulink>"
+                            section in the Yocto Project Board Support (BSP)
+                            Developer's Guide, respectively.
+                            For information on how to use the
+                            <filename>bitbake-layers create-layer</filename>
+                            command, see the
+                            "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-general-layer-using-the-bitbake-layers-script'>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</ulink>"
+                            section in the Yocto Project Development Tasks
+                            Manual.
+                        </note>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Inform the BitBake Build Environment About
+                        Your Layer:</emphasis>
+                        As directed when you created your layer, you need to
+                        add the layer to the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#var-BBLAYERS'><filename>BBLAYERS</filename></ulink>
+                        variable in the <filename>bblayers.conf</filename> file
+                        as follows:
+                        <literallayout class='monospaced'>
+     $ cd ~/poky/build
+     $ bitbake-layers add-layer ../../meta-mylayer
+     NOTE: Starting bitbake server...
+     $
+                        </literallayout>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Build the Extensible SDK:</emphasis>
+                        Use BitBake to build the extensible SDK specifically
+                        for use with images to be run using QEMU:
+                        <literallayout class='monospaced'>
+     $ cd ~/poky/build
+     $ bitbake core-image-minimal -c populate_sdk_ext
+                        </literallayout>
+                        Once the build finishes, you can find the SDK installer
+                        file (i.e. <filename>*.sh</filename> file) in the
+                        following directory:
+                        <literallayout class='monospaced'>
+     ~/poky/build/tmp/deploy/sdk
+                        </literallayout>
+                        For this example, the installer file is named
+                        <filename>poky-glibc-x86_64-core-image-minimal-i586-toolchain-ext-&DISTRO;.sh</filename>.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Install the Extensible SDK:</emphasis>
+                        Use the following command to install the SDK.
+                        For this example, install the SDK in the default
+                        <filename>~/poky_sdk</filename> directory:
+                        <literallayout class='monospaced'>
+     $ cd ~/poky/build/tmp/deploy/sdk
+     $ ./poky-glibc-x86_64-core-image-minimal-i586-toolchain-ext-&DISTRO;.sh
+     Poky (Yocto Project Reference Distro) Extensible SDK installer version &DISTRO;
+     ============================================================================
+     Enter target directory for SDK (default: ~/poky_sdk):
+     You are about to install the SDK to "/home/scottrif/poky_sdk". Proceed[Y/n]? Y
+     Extracting SDK......................................done
+     Setting it up...
+     Extracting buildtools...
+     Preparing build system...
+     Parsing recipes: 100% |#################################################################| Time: 0:00:52
+     Initializing tasks: 100% |############## ###############################################| Time: 0:00:04
+     Checking sstate mirror object availability: 100% |######################################| Time: 0:00:00
+     Parsing recipes: 100% |#################################################################| Time: 0:00:33
+     Initializing tasks: 100% |##############################################################| Time: 0:00:00
+     done
+     SDK has been successfully set up and is ready to be used.
+     Each time you wish to use the SDK in a new shell session, you need to source the environment setup script e.g.
+      $ . /home/scottrif/poky_sdk/environment-setup-i586-poky-linux
+                        </literallayout>
+                        </para></listitem>
+                    <listitem><para id='setting-up-the-esdk-terminal'>
+                        <emphasis>Set Up a New Terminal to Work With the
+                        Extensible SDK:</emphasis>
+                        You must set up a new terminal to work with the SDK.
+                        You cannot use the same BitBake shell used to build the
+                        installer.</para>
+
+                        <para>After opening a new shell, run the SDK environment
+                        setup script as directed by the output from installing
+                        the SDK:
+                        <literallayout class='monospaced'>
+     $ source ~/poky_sdk/environment-setup-i586-poky-linux
+     "SDK environment now set up; additionally you may now run devtool to perform development tasks.
+     Run devtool --help for further details.
+                        </literallayout>
+                        <note>
+                            If you get a warning about attempting to use the
+                            extensible SDK in an environment set up to run
+                            BitBake, you did not use a new shell.
+                        </note>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Build the Clean Image:</emphasis>
+                        The final step in preparing to work on the kernel is to
+                        build an initial image using
+                        <filename>devtool</filename> in the new terminal you
+                        just set up and initialized for SDK work:
+                        <literallayout class='monospaced'>
+     $ devtool build-image
+     Parsing recipes: 100% |##########################################| Time: 0:00:05
+     Parsing of 830 .bb files complete (0 cached, 830 parsed). 1299 targets, 47 skipped, 0 masked, 0 errors.
+     WARNING: No packages to add, building image core-image-minimal unmodified
+     Loading cache: 100% |############################################| Time: 0:00:00
+     Loaded 1299 entries from dependency cache.
+     NOTE: Resolving any missing task queue dependencies
+     Initializing tasks: 100% |#######################################| Time: 0:00:07
+     Checking sstate mirror object availability: 100% |###############| Time: 0:00:00
+     NOTE: Executing SetScene Tasks
+     NOTE: Executing RunQueue Tasks
+     NOTE: Tasks Summary: Attempted 2866 tasks of which 2604 didn't need to be rerun and all succeeded.
+     NOTE: Successfully built core-image-minimal. You can find output files in /home/scottrif/poky_sdk/tmp/deploy/images/qemux86
+                        </literallayout>
+                        If you were building for actual hardware and not for
+                        emulation, you could flash the image to a USB stick
+                        on <filename>/dev/sdd</filename> and boot your device.
+                        For an example that uses a Minnowboard, see the
+                        <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/KernelDevelopmentWithEsdk'>TipsAndTricks/KernelDevelopmentWithEsdk</ulink>
+                        Wiki page.
+                        </para></listitem>
+                </orderedlist>
+            </para>
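+
+            <para>
+                As a sketch, if you were building for actual hardware (for
+                example, the BeagleBone) rather than the default QEMU machine,
+                the <filename>local.conf</filename> additions from this
+                procedure might look like the following:
+                <literallayout class='monospaced'>
+     MACHINE ?= "beaglebone"
+     MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS += "kernel-modules"
+                </literallayout>
+            </para>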
+
+            <para>
+                At this point, you are set up to start making modifications to
+                the kernel by using the extensible SDK.
+                For a continued example, see the
+                "<link linkend='using-devtool-to-patch-the-kernel'>Using <filename>devtool</filename> to Patch the Kernel</link>"
+                section.
+            </para>
+        </section>
+
+        <section id='getting-ready-for-traditional-kernel-development'>
+            <title>Getting Ready for Traditional Kernel Development</title>
+
+            <para>
+                Getting ready for traditional kernel development using the Yocto
+                Project involves many of the same steps as described in the
+                previous section.
+                However, you need to establish a local copy of the kernel source
+                since you will be editing these files.
+            </para>
+
+            <para>
+                Follow these steps to prepare to update the kernel image using
+                traditional kernel development flow with the Yocto Project.
+                Completing this procedure leaves you ready to make modifications
+                to the kernel source as described in the
+                "<link linkend='using-traditional-kernel-development-to-patch-the-kernel'>Using Traditional Kernel Development to Patch the Kernel</link>"
+                section:
+                <orderedlist>
+                    <listitem><para>
+                        <emphasis>Initialize the BitBake Environment:</emphasis>
+                        Before you can do anything using BitBake, you need to
+                        initialize the BitBake build environment by sourcing the
+                        build environment script
+                        (i.e. <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>oe-init-build-env</filename></ulink>).
+                        Also, for this example, be sure that the local branch
+                        you have checked out for <filename>poky</filename> is
+                        the Yocto Project &DISTRO_NAME; branch.
+                        If you need to check out the &DISTRO_NAME; branch,
+                        see the
+                        "<ulink url='&YOCTO_DOCS_DEV_URL;#checking-out-by-branch-in-poky'>Checking out by Branch in Poky</ulink>"
+                        section in the Yocto Project Development Tasks Manual.
+                        <literallayout class='monospaced'>
+     $ cd ~/poky
+     $ git branch
+     master
+     * &DISTRO_NAME;
+     $ source oe-init-build-env
+                        </literallayout>
+                        <note>
+                            The previous commands assume the
+                            <ulink url='&YOCTO_DOCS_REF_URL;#source-repositories'>Source Repositories</ulink>
+                            (i.e. <filename>poky</filename>) have been cloned
+                            using Git and the local repository is named
+                            "poky".
+                        </note>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Prepare Your <filename>local.conf</filename>
+                        File:</emphasis>
+                        By default, the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
+                        variable is set to "qemux86", which is fine if you are
+                        building for the QEMU emulator in 32-bit mode.
+                        However, if you are not, you need to set the
+                        <filename>MACHINE</filename> variable appropriately in
+                        your <filename>conf/local.conf</filename> file found
+                        in the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
+                        (i.e. <filename>~/poky/build</filename> in this
+                        example).</para>
+
+                        <para>Also, since you are preparing to work on the
+                        kernel image, you need to set the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS'><filename>MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS</filename></ulink>
+                        variable to include kernel modules.</para>
+
+                        <para>This example keeps the default "qemux86" for the
+                        <filename>MACHINE</filename> variable but still needs
+                        to add "kernel-modules" to the recommended packages:
+                        <literallayout class='monospaced'>
+     MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS += "kernel-modules"
+                        </literallayout>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Create a Layer for Patches:</emphasis>
+                        You need to create a layer to hold patches created
+                        for the kernel image.
+                        You can use the
+                        <filename>bitbake-layers create-layer</filename>
+                        command as follows:
+                        <literallayout class='monospaced'>
+     $ cd ~/poky/build
+     $ bitbake-layers create-layer ../../meta-mylayer
+     NOTE: Starting bitbake server...
+     Add your new layer with 'bitbake-layers add-layer ../../meta-mylayer'
+                        </literallayout>
+                        <note>
+                            For background information on working with
+                            common and BSP layers, see the
+                            "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
+                            section in the Yocto Project Development Tasks
+                            Manual and the
+                            "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP Layers</ulink>"
+                            section in the Yocto Project Board Support (BSP)
+                            Developer's Guide, respectively.
+                            For information on how to use the
+                            <filename>bitbake-layers create-layer</filename>
+                            command, see the
+                            "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-general-layer-using-the-bitbake-layers-script'>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</ulink>"
+                            section in the Yocto Project Development Tasks
+                            Manual.
+                        </note>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Inform the BitBake Build Environment About
+                        Your Layer:</emphasis>
+                        As directed when you created your layer, you need to add
+                        the layer to the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#var-BBLAYERS'><filename>BBLAYERS</filename></ulink>
+                        variable in the <filename>bblayers.conf</filename> file
+                        as follows:
+                        <literallayout class='monospaced'>
+     $ cd ~/poky/build
+     $ bitbake-layers add-layer ../../meta-mylayer
+     NOTE: Starting bitbake server ...
+     $
+                        </literallayout>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Create a Local Copy of the Kernel Git
+                        Repository:</emphasis>
+                        You can find Git repositories of supported Yocto Project
+                        kernels organized under "Yocto Linux Kernel" in the
+                        Yocto Project Source Repositories at
+                        <ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink>.
+                        </para>
+
+                        <para>
+                        For simplicity, it is recommended that you create your
+                        copy of the kernel Git repository outside of the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>,
+                        which is usually named <filename>poky</filename>.
+                        Also, be sure you are in the
+                        <filename>standard/base</filename> branch.
+                        </para>
+
+                        <para>
+                        The following commands show how to create a local copy
+                        of the <filename>linux-yocto-4.12</filename> kernel and
+                        check out the <filename>standard/base</filename> branch.
+                        <note>
+                            The <filename>linux-yocto-4.12</filename> kernel
+                            can be used with the Yocto Project 2.4 release
+                            and forward.
+                            You cannot use the
+                            <filename>linux-yocto-4.12</filename> kernel with
+                            releases prior to Yocto Project 2.4.
+                        </note>
+                        <literallayout class='monospaced'>
+     $ cd ~
+     $ git clone git://git.yoctoproject.org/linux-yocto-4.12 --branch standard/base
+     Cloning into 'linux-yocto-4.12'...
+     remote: Counting objects: 6097195, done.
+     remote: Compressing objects: 100% (901026/901026), done.
+     remote: Total 6097195 (delta 5152604), reused 6096847 (delta 5152256)
+     Receiving objects: 100% (6097195/6097195), 1.24 GiB | 7.81 MiB/s, done.
+     Resolving deltas: 100% (5152604/5152604), done.
+     Checking connectivity... done.
+     Checking out files: 100% (59846/59846), done.
+                        </literallayout>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Create a Local Copy of the Kernel Cache Git
+                        Repository:</emphasis>
+                        For simplicity, it is recommended that you create your
+                        copy of the kernel cache Git repository outside of the
+                        <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>,
+                        which is usually named <filename>poky</filename>.
+                        Also, for this example, be sure you are in the
+                        <filename>yocto-4.12</filename> branch.
+                        </para>
+
+                        <para>
+                        The following commands show how to create a local copy
+                        of the <filename>yocto-kernel-cache</filename> and
+                        check out the <filename>yocto-4.12</filename> branch:
+                        <literallayout class='monospaced'>
+     $ cd ~
+     $ git clone git://git.yoctoproject.org/yocto-kernel-cache --branch yocto-4.12
+     Cloning into 'yocto-kernel-cache'...
+     remote: Counting objects: 22639, done.
+     remote: Compressing objects: 100% (9761/9761), done.
+     remote: Total 22639 (delta 12400), reused 22586 (delta 12347)
+     Receiving objects: 100% (22639/22639), 22.34 MiB | 6.27 MiB/s, done.
+     Resolving deltas: 100% (12400/12400), done.
+     Checking connectivity... done.
+                        </literallayout>
+                        </para></listitem>
+                </orderedlist>
+            </para>
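+
+            <para>
+                Before moving on, you might quickly confirm that both local
+                clones are on the expected branches (a sketch; the clone
+                locations match the commands above):
+                <literallayout class='monospaced'>
+     $ cd ~/linux-yocto-4.12
+     $ git branch
+     * standard/base
+     $ cd ~/yocto-kernel-cache
+     $ git branch
+     * yocto-4.12
+                </literallayout>
+            </para>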
+
+            <para>
+                At this point, you are ready to start making modifications to
+                the kernel using traditional kernel development steps.
+                For a continued example, see the
+                "<link linkend='using-traditional-kernel-development-to-patch-the-kernel'>Using Traditional Kernel Development to Patch the Kernel</link>"
+                section.
+            </para>
+        </section>
+    </section>
 
     <section id='creating-and-preparing-a-layer'>
         <title>Creating and Preparing a Layer</title>
@@ -26,25 +472,98 @@
             that you create and prepare your own layer in which to do your
             work.
             Your layer contains its own
-            <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>
-            append files
-            (<filename>.bbappend</filename>) and provides a convenient
-            mechanism to create your own recipe files
-            (<filename>.bb</filename>).
-            For details on how to create and work with layers, see the
+            <ulink url='&YOCTO_DOCS_REF_URL;#bitbake-term'>BitBake</ulink>
+            append files (<filename>.bbappend</filename>) and provides a
+            convenient mechanism to create your own recipe files
+            (<filename>.bb</filename>) as well as store and use kernel
+            patch files.
+            For background information on working with layers, see the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
-            section in the Yocto Project Development Manual.
+            section in the Yocto Project Development Tasks Manual.
             <note><title>Tip</title>
                 The Yocto Project comes with many tools that simplify
                 tasks you need to perform.
-                One such tool is the <filename>yocto-layer create</filename>
-                script, which simplifies creating a new layer.
+                One such tool is the
+                <filename>bitbake-layers create-layer</filename>
+                command, which simplifies creating a new layer.
                 See the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-general-layer-using-the-yocto-layer-script'>Creating a General Layer Using the yocto-layer Script</ulink>"
-                section in the Yocto Project Development Manual for more
-                information.
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-general-layer-using-the-bitbake-layers-script'>Creating a General Layer Using the <filename>bitbake-layers</filename> Script</ulink>"
+                section in the Yocto Project Development Tasks Manual for
+                information on how to use this script.
             </note>
         </para>
+
+        <para>
+            To better understand the layer you create for kernel development,
+            the following steps describe how to create a layer
+            without the aid of tools.
+            These steps assume creation of a layer named
+            <filename>mylayer</filename> in your home directory:
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Create Structure</emphasis>:
+                    Create the layer's structure:
+                    <literallayout class='monospaced'>
+     $ cd $HOME
+     $ mkdir meta-mylayer
+     $ mkdir meta-mylayer/conf
+     $ mkdir meta-mylayer/recipes-kernel
+     $ mkdir meta-mylayer/recipes-kernel/linux
+     $ mkdir meta-mylayer/recipes-kernel/linux/linux-yocto
+                    </literallayout>
+                    The <filename>conf</filename> directory holds your
+                    configuration files, while the
+                    <filename>recipes-kernel</filename> directory holds your
+                    append file and any patch files.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Create the Layer Configuration File</emphasis>:
+                    Move to the <filename>meta-mylayer/conf</filename>
+                    directory and create the <filename>layer.conf</filename>
+                    file as follows:
+                    <literallayout class='monospaced'>
+     # We have a conf and classes directory, add to BBPATH
+     BBPATH .= ":${LAYERDIR}"
+
+     # We have recipes-* directories, add to BBFILES
+     BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
+                 ${LAYERDIR}/recipes-*/*/*.bbappend"
+
+     BBFILE_COLLECTIONS += "mylayer"
+     BBFILE_PATTERN_mylayer = "^${LAYERDIR}/"
+     BBFILE_PRIORITY_mylayer = "5"
+                    </literallayout>
+                    Notice <filename>mylayer</filename> as part of the last
+                    three statements.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Create the Kernel Recipe Append File</emphasis>:
+                    Move to the
+                    <filename>meta-mylayer/recipes-kernel/linux</filename>
+                    directory and create the kernel's append file.
+                    This example uses the
+                    <filename>linux-yocto-4.12</filename> kernel.
+                    Thus, the name of the append file is
+                    <filename>linux-yocto_4.12.bbappend</filename>:
+                    <literallayout class='monospaced'>
+     FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
+
+     SRC_URI_append += "file://<replaceable>patch-file-one</replaceable>"
+     SRC_URI_append += "file://<replaceable>patch-file-two</replaceable>"
+     SRC_URI_append += "file://<replaceable>patch-file-three</replaceable>"
+                    </literallayout>
+                    The
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
+                    and
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
+                    statements enable the OpenEmbedded build system to find
+                    patch files.
+                    For more information on using append files, see the
+                    "<ulink url='&YOCTO_DOCS_DEV_URL;#using-bbappend-files'>Using .bbappend Files in Your Layer</ulink>"
+                    section in the Yocto Project Development Tasks Manual.
+                    </para></listitem>
+            </orderedlist>
+        </para>
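+
+        <para>
+            For the build system to use a layer created this way, you still
+            need to add it to the
+            <ulink url='&YOCTO_DOCS_REF_URL;#var-BBLAYERS'><filename>BBLAYERS</filename></ulink>
+            variable in your <filename>bblayers.conf</filename> file.
+            A brief sketch, assuming the layer was created in your home
+            directory as shown above and your Build Directory is
+            <filename>~/poky/build</filename>:
+            <literallayout class='monospaced'>
+     $ cd ~/poky/build
+     $ bitbake-layers add-layer $HOME/meta-mylayer
+            </literallayout>
+        </para>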
     </section>
 
     <section id='modifying-an-existing-recipe'>
@@ -56,7 +575,7 @@
             Each release of the Yocto Project provides a few Linux
             kernel recipes from which you can choose.
             These are located in the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
             in <filename>meta/recipes-kernel/linux</filename>.
         </para>
 
@@ -72,12 +591,9 @@
         <para>
             Before modifying an existing recipe, be sure that you have created
             a minimal, custom layer from which you can work.
-            See the "<link linkend='creating-and-preparing-a-layer'>Creating and Preparing a Layer</link>"
-            section for some general resources.
-            You can also see the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#set-up-your-layer-for-the-build'>Set Up Your Layer for the Build</ulink>" section
-            of the Yocto Project Development Manual for a detailed
-            example.
+            See the
+            "<link linkend='creating-and-preparing-a-layer'>Creating and Preparing a Layer</link>"
+            section for information.
         </para>
 
         <section id='creating-the-append-file'>
@@ -88,11 +604,11 @@
                 You also name it accordingly based on the linux-yocto recipe
                 you are using.
                 For example, if you are modifying the
-                <filename>meta/recipes-kernel/linux/linux-yocto_4.4.bb</filename>
+                <filename>meta/recipes-kernel/linux/linux-yocto_4.12.bb</filename>
                 recipe, the append file will typically be located as follows
                 within your custom layer:
                 <literallayout class='monospaced'>
-     <replaceable>your-layer</replaceable>/recipes-kernel/linux/linux-yocto_4.4.bbappend
+     <replaceable>your-layer</replaceable>/recipes-kernel/linux/linux-yocto_4.12.bbappend
                 </literallayout>
                 The append file should initially extend the
                 <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESPATH'><filename>FILESPATH</filename></ulink>
@@ -123,7 +639,7 @@
                 As an example, consider the following append file
                 used by the BSPs in <filename>meta-yocto-bsp</filename>:
                 <literallayout class='monospaced'>
-     meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.4.bbappend
+     meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.12.bbappend
                 </literallayout>
                 The following listing shows the file.
                 Be aware that the actual commit ID strings in this
@@ -140,11 +656,12 @@
      KBRANCH_beaglebone = "standard/beaglebone"
      KBRANCH_mpc8315e-rdb = "standard/fsl-mpc8315e-rdb"
 
-     SRCREV_machine_genericx86    ?= "ad8b1d659ddd2699ebf7d50ef9de8940b157bfc2"
-     SRCREV_machine_genericx86-64 ?= "ad8b1d659ddd2699ebf7d50ef9de8940b157bfc2"
-     SRCREV_machine_edgerouter ?= "cebe1ad56aebd89e0de29412e19433fb441bf13c"
-     SRCREV_machine_beaglebone ?= "cebe1ad56aebd89e0de29412e19433fb441bf13c"
-     SRCREV_machine_mpc8315e-rdb ?= "06c0dbdcba374ca7f92a53d69292d6bb7bc9b0f3"
+     SRCREV_machine_genericx86    ?= "d09f2ce584d60ecb7890550c22a80c48b83c2e19"
+     SRCREV_machine_genericx86-64 ?= "d09f2ce584d60ecb7890550c22a80c48b83c2e19"
+     SRCREV_machine_edgerouter ?= "b5c8cfda2dfe296410d51e131289fb09c69e1e7d"
+     SRCREV_machine_beaglebone ?= "b5c8cfda2dfe296410d51e131289fb09c69e1e7d"
+     SRCREV_machine_mpc8315e-rdb ?= "2d1d010240846d7bff15d1fcc0cb6eb8a22fc78a"
+
 
      COMPATIBLE_MACHINE_genericx86 = "genericx86"
      COMPATIBLE_MACHINE_genericx86-64 = "genericx86-64"
@@ -152,11 +669,11 @@
      COMPATIBLE_MACHINE_beaglebone = "beaglebone"
      COMPATIBLE_MACHINE_mpc8315e-rdb = "mpc8315e-rdb"
 
-     LINUX_VERSION_genericx86 = "4.4.41"
-     LINUX_VERSION_genericx86-64 = "4.4.41"
-     LINUX_VERSION_edgerouter = "4.4.53"
-     LINUX_VERSION_beaglebone = "4.4.53"
-     LINUX_VERSION_mpc8315e-rdb = "4.4.53"
+     LINUX_VERSION_genericx86 = "4.12.7"
+     LINUX_VERSION_genericx86-64 = "4.12.7"
+     LINUX_VERSION_edgerouter = "4.12.10"
+     LINUX_VERSION_beaglebone = "4.12.10"
+     LINUX_VERSION_mpc8315e-rdb = "4.12.10"
                 </literallayout>
                 This append file contains statements used to support
                 several BSPs that ship with the Yocto Project.
@@ -179,7 +696,7 @@
                 variable could be used to enable features specific to
                 the kernel.
                 The append file points to specific commits in the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
                 Git repository and the <filename>meta</filename> Git repository
                 branches to identify the exact kernel needed to build the
                 BSP.
@@ -187,8 +704,8 @@
 
             <para>
                 One thing missing in this particular BSP, which you will
-                typically need when developing a BSP, is the kernel configuration
-                file (<filename>.config</filename>) for your BSP.
+                typically need when developing a BSP, is the kernel
+                configuration file (<filename>.config</filename>) for your BSP.
                 When developing a BSP, you probably have a kernel configuration
                 file or a set of kernel configuration files that, when taken
                 together, define the kernel configuration for your BSP.
@@ -196,7 +713,8 @@
                 in a file or a set of files inside a directory located at the
                 same level as your kernel's append file and having the same
                 name as the kernel's main recipe file.
-                With all these conditions met, simply reference those files in the
+                With all these conditions met, simply reference those files in
+                the
                 <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
                 statement in the append file.
             </para>
@@ -242,25 +760,31 @@
 
             <note>
                 <para>
-                    Other methods exist to accomplish grouping and defining configuration options.
-                    For example, if you are working with a local clone of the kernel repository,
-                    you could checkout the kernel's <filename>meta</filename> branch, make your changes,
-                    and then push the changes to the local bare clone of the kernel.
-                    The result is that you directly add configuration options to the
-                    <filename>meta</filename> branch for your BSP.
-                    The configuration options will likely end up in that location anyway if the BSP gets
-                    added to the Yocto Project.
+                    Other methods exist to accomplish grouping and defining
+                    configuration options.
+                    For example, if you are working with a local clone of the
+                    kernel repository, you could check out the kernel's
+                    <filename>meta</filename> branch, make your changes, and
+                    then push the changes to the local bare clone of the
+                    kernel.
+                    The result is that you directly add configuration options
+                    to the <filename>meta</filename> branch for your BSP.
+                    The configuration options will likely end up in that
+                    location anyway if the BSP gets added to the Yocto Project.
                 </para>
 
                 <para>
-                    In general, however, the Yocto Project maintainers take care of moving the
-                    <filename>SRC_URI</filename>-specified
-                    configuration options to the kernel's <filename>meta</filename> branch.
-                    Not only is it easier for BSP developers to not have to worry about putting those
-                    configurations in the branch, but having the maintainers do it allows them to apply
-                    'global' knowledge about the kinds of common configuration options multiple BSPs in
-                    the tree are typically using.
-                    This allows for promotion of common configurations into common features.
+                    In general, however, the Yocto Project maintainers take
+                    care of moving the <filename>SRC_URI</filename>-specified
+                    configuration options to the kernel's
+                    <filename>meta</filename> branch.
+                    Not only is it easier for BSP developers to not have to
+                    worry about putting those configurations in the branch,
+                    but having the maintainers do it allows them to apply
+                    'global' knowledge about the kinds of common configuration
+                    options multiple BSPs in the tree are typically using.
+                    This allows for promotion of common configurations into
+                    common features.
                 </para>
             </note>
         </section>
@@ -295,9 +819,12 @@
             </para>
 
             <para>
-                For a detailed example showing how to patch the kernel, see the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#patching-the-kernel'>Patching the Kernel</ulink>"
-                section in the Yocto Project Development Manual.
+                For a detailed example showing how to patch the kernel using
+                <filename>devtool</filename>, see the
+                "<link linkend='using-devtool-to-patch-the-kernel'>Using <filename>devtool</filename> to Patch the Kernel</link>"
+                and
+                "<link linkend='using-traditional-kernel-development-to-patch-the-kernel'>Using Traditional Kernel Development to Patch the Kernel</link>"
+                sections.
             </para>
         </section>
 
@@ -383,8 +910,8 @@
             <para>
                 For a detailed example showing how to configure the kernel,
                 see the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#configuring-the-kernel'>Configuring the Kernel</ulink>"
-                section in the Yocto Project Development Manual.
+                "<link linkend='configuring-the-kernel'>Configuring the Kernel</link>"
+                section.
             </para>
         </section>
 
@@ -416,15 +943,17 @@
 
             <para>
                 To specify an "in-tree" <filename>defconfig</filename> file,
-                edit the recipe that builds your kernel so that it has the
-                following command form:
+                use the following statement form:
                 <literallayout class='monospaced'>
-     KBUILD_DEFCONFIG_KMACHINE ?= <replaceable>defconfig_file</replaceable>
+     KBUILD_DEFCONFIG_<replaceable>KMACHINE</replaceable> ?= <replaceable>defconfig_file</replaceable>
                 </literallayout>
-                You need to append the variable with
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-KMACHINE'><filename>KMACHINE</filename></ulink>
-                and then supply the path to your "in-tree"
-                <filename>defconfig</filename> file.
+                Here is an example that appends the
+                <filename>KBUILD_DEFCONFIG</filename> variable with
+                "common-pc" and names an "in-tree"
+                <filename>defconfig</filename> file, that is, one that ships
+                with the kernel source under
+                <filename>arch/<replaceable>arch</replaceable>/configs</filename>:
+                <literallayout class='monospaced'>
+     KBUILD_DEFCONFIG_common-pc ?= "i386_defconfig"
+                </literallayout>
             </para>
 
             <para>
@@ -432,7 +961,8 @@
                 <filename>defconfig</filename> file, you need to be sure no
                 files or statements set <filename>SRC_URI</filename> to use a
                 <filename>defconfig</filename> other than your "in-tree"
-                file (e.g. a kernel's <filename>linux-</filename><replaceable>machine</replaceable><filename>.inc</filename>
+                file (e.g. a kernel's
+                <filename>linux-</filename><replaceable>machine</replaceable><filename>.inc</filename>
                 file).
                 In other words, if the build system detects a statement
                 that identifies an "out-of-tree"
@@ -449,115 +979,750 @@
         </section>
     </section>
 
-    <section id='using-an-iterative-development-process'>
-        <title>Using an Iterative Development Process</title>
+    <section id="using-devtool-to-patch-the-kernel">
+        <title>Using <filename>devtool</filename> to Patch the Kernel</title>
 
         <para>
-            If you do not have existing patches or configuration files,
-            you can iteratively generate them from within the BitBake build
-            environment as described within this section.
-            During an iterative workflow, running a previously completed BitBake
-            task causes BitBake to invalidate the tasks that follow the
-            completed task in the build sequence.
-            Invalidated tasks rebuild the next time you run the build using
-            BitBake.
+            The steps in this procedure show you how you can patch the
+            kernel using the extensible SDK and <filename>devtool</filename>.
+            <note>
+                Before attempting this procedure, be sure you have performed
+                the steps to get ready for updating the kernel as described
+                in the
+                "<link linkend='getting-ready-to-develop-using-devtool'>Getting Ready to Develop Using <filename>devtool</filename></link>"
+                section.
+            </note>
         </para>
 
         <para>
-            As you read this section, be sure to substitute the name
-            of your Linux kernel recipe for the term
-            "linux-yocto".
+            Patching the kernel involves changing or adding configurations
+            to an existing kernel, changing or adding recipes to the kernel
+            that are needed to support specific hardware features, or even
+            altering the source code itself.
         </para>
 
-        <section id='tip-dirty-string'>
-            <title>"-dirty" String</title>
+        <para>
+            This example creates a simple patch by adding some QEMU emulator
+            console output at boot time through <filename>printk</filename>
+            statements in the kernel's <filename>calibrate.c</filename> source
+            code file.
+            Applying the patch and booting the modified image causes the added
+            messages to appear on the emulator's console.
+            The example is a continuation of the setup procedure found in
+            the
+            "<link linkend='getting-ready-to-develop-using-devtool'>Getting Ready to Develop Using <filename>devtool</filename></link>"
+            section.
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Check Out the Kernel Source Files:</emphasis>
+                    First, you must use <filename>devtool</filename> to
+                    check out the kernel source code into its workspace.
+                    Be sure you are in the terminal set up to do work
+                    with the extensible SDK.
+                    <note>
+                        See this
+                        <link linkend='setting-up-the-esdk-terminal'>step</link>
+                        in the
+                        "<link linkend='getting-ready-to-develop-using-devtool'>Getting Ready to Develop Using <filename>devtool</filename></link>"
+                        section for more information.
+                    </note>
+                    Use the following <filename>devtool</filename> command
+                    to check out the code:
+                    <literallayout class='monospaced'>
+     $ devtool modify linux-yocto
+                    </literallayout>
+                    <note>
+                        During the checkout operation, a bug exists that could
+                        cause errors such as the following to appear:
+                        <literallayout class='monospaced'>
+     ERROR: Taskhash mismatch 2c793438c2d9f8c3681fd5f7bc819efa versus
+            be3a89ce7c47178880ba7bf6293d7404 for
+            /path/to/esdk/layers/poky/meta/recipes-kernel/linux/linux-yocto_4.10.bb.do_unpack
+                        </literallayout>
+                        You can safely ignore these messages.
+                        The source code is correctly checked out.
+                    </note>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Edit the Source Files:</emphasis>
+                    Follow these steps to make some simple changes to the source
+                    files:
+                    <orderedlist>
+                        <listitem><para>
+                            <emphasis>Change the working directory</emphasis>:
+                            In the previous step, the output noted where you can find
+                            the source files (e.g.
+                            <filename>~/poky_sdk/workspace/sources/linux-yocto</filename>).
+                            Change to where the kernel source code is before making
+                            your edits to the <filename>calibrate.c</filename> file:
+                            <literallayout class='monospaced'>
+     $ cd ~/poky_sdk/workspace/sources/linux-yocto
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis>Edit the source file</emphasis>:
+                            Edit the <filename>init/calibrate.c</filename> file to have
+                            the following changes:
+                            <literallayout class='monospaced'>
+     void calibrate_delay(void)
+     {
+         unsigned long lpj;
+         static bool printed;
+         int this_cpu = smp_processor_id();
 
-<!--
-            <para>
-                <emphasis>AR - Darren Hart:</emphasis>  This section
-                originated from the old Yocto Project Kernel Architecture
-                and Use Manual.
-                It was decided we need to put it in this section here.
-                Darren needs to figure out where we want it and what part
-                of it we want (all, revision???)
-            </para>
--->
+         printk("*************************************\n");
+         printk("*                                   *\n");
+         printk("*        HELLO YOCTO KERNEL         *\n");
+         printk("*                                   *\n");
+         printk("*************************************\n");
 
-            <para>
-                If kernel images are being built with "-dirty" on the
-                end of the version string, this simply means that
-                modifications in the source directory have not been committed.
-                <literallayout class='monospaced'>
+     	if (per_cpu(cpu_loops_per_jiffy, this_cpu)) {
+               .
+               .
+               .
+                            </literallayout>
+                            </para></listitem>
+                    </orderedlist>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Build the Updated Kernel Source:</emphasis>
+                    To build the updated kernel source, use
+                    <filename>devtool</filename>:
+                    <literallayout class='monospaced'>
+     $ devtool build linux-yocto
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Create the Image With the New Kernel:</emphasis>
+                    Use the <filename>devtool build-image</filename> command
+                    to create a new image that has the new kernel.
+                    <note>
+                        If the image you originally created resulted in a Wic
+                        file, you can use an alternate method to create the new
+                        image with the updated kernel.
+                        For an example, see the steps in the
+                        <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/KernelDevelopmentWithEsdk'>TipsAndTricks/KernelDevelopmentWithEsdk</ulink>
+                        Wiki Page.
+                    </note>
+                    <literallayout class='monospaced'>
+     $ cd ~
+     $ devtool build-image core-image-minimal
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Test the New Image:</emphasis>
+                    For this example, you can run the new image using QEMU
+                    to verify your changes:
+                    <orderedlist>
+                        <listitem><para>
+                            <emphasis>Boot the image</emphasis>:
+                            Boot the modified image in the QEMU emulator
+                            using this command:
+                            <literallayout class='monospaced'>
+     $ runqemu qemux86
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis>Verify the changes</emphasis>:
+                            Log into the machine using <filename>root</filename>
+                            with no password and then use the following shell
+                            command to scroll through the console's boot output.
+                            <literallayout class='monospaced'>
+     # dmesg | less
+                            </literallayout>
+                            You should see the results of your
+                            <filename>printk</filename> statements
+                            as part of the output when you scroll down the
+                            console window.
+                            </para></listitem>
+                    </orderedlist>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Stage and commit your changes</emphasis>:
+                    Within your eSDK terminal, change your working directory to
+                    where you modified the <filename>calibrate.c</filename>
+                    file and use these Git commands to stage and commit your
+                    changes:
+                    <literallayout class='monospaced'>
+     $ cd ~/poky_sdk/workspace/sources/linux-yocto
      $ git status
-                </literallayout>
+     $ git add init/calibrate.c
+     $ git commit -m "calibrate: Add printk example"
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Export the Patches and Create an Append File:</emphasis>
+                    To export your commits as patches and create a
+                    <filename>.bbappend</filename> file, use the following
+                    command in the terminal used to work with the extensible
+                    SDK.
+                    This example uses the previously established layer named
+                    <filename>meta-mylayer</filename>.
+                    <note>
+                        See Step 3 of the
+                        "<link linkend='getting-ready-to-develop-using-devtool'>Getting Ready to Develop Using <filename>devtool</filename></link>"
+                        section for information on setting up this layer.
+                    </note>
+                    <literallayout class='monospaced'>
+     $ devtool finish linux-yocto ~/meta-mylayer
+                    </literallayout>
+                    Once the command finishes, the patches and the
+                    <filename>.bbappend</filename> file are located in the
+                    <filename>~/meta-mylayer/recipes-kernel/linux</filename>
+                    directory.
+                    A representative append file is shown following these
+                    steps.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Build the Image With Your Modified Kernel:</emphasis>
+                    You can now build an image that includes your kernel
+                    patches.
+                    Execute the following command from your
+                    <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
+                    in the terminal set up to run BitBake:
+                    <literallayout class='monospaced'>
+     $ cd ~/poky/build
+     $ bitbake core-image-minimal
+                    </literallayout>
+                    </para></listitem>
+            </orderedlist>
+        </para>
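+
+        <para>
+            For reference, the append file that
+            <filename>devtool finish</filename> generates for this example
+            typically looks similar to the following minimal sketch.
+            The exact patch file name is derived from your commit message,
+            so it differs if you used different wording:
+            <literallayout class='monospaced'>
+     FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
+
+     SRC_URI += "file://0001-calibrate-Add-printk-example.patch"
+            </literallayout>
+        </para>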
+    </section>
+
+    <section id="using-traditional-kernel-development-to-patch-the-kernel">
+        <title>Using Traditional Kernel Development to Patch the Kernel</title>
+
+        <para>
+            The steps in this procedure show you how you can patch the
+            kernel using traditional kernel development (i.e. not using
+            <filename>devtool</filename> and the extensible SDK as
+            described in the
+            "<link linkend='using-devtool-to-patch-the-kernel'>Using <filename>devtool</filename> to Patch the Kernel</link>"
+            section).
+            <note>
+                Before attempting this procedure, be sure you have performed
+                the steps to get ready for updating the kernel as described
+                in the
+                "<link linkend='getting-ready-for-traditional-kernel-development'>Getting Ready for Traditional Kernel Development</link>"
+                section.
+            </note>
+        </para>
+
+        <para>
+            Patching the kernel involves changing or adding configurations
+            to an existing kernel, changing or adding recipes to the kernel
+            that are needed to support specific hardware features, or even
+            altering the source code itself.
+        </para>
+
+        <para>
+            The example in this section creates a simple patch by adding some
+            QEMU emulator console output at boot time through
+            <filename>printk</filename> statements in the kernel's
+            <filename>calibrate.c</filename> source code file.
+            Applying the patch and booting the modified image causes the added
+            messages to appear on the emulator's console.
+            The example is a continuation of the setup procedure found in
+            the
+            "<link linkend='getting-ready-for-traditional-kernel-development'>Getting Ready for Traditional Kernel Development</link>"
+            section.
+        </para>
+
+        <para>
+            Although this example uses Git and shell commands to generate the
+            patch, you could use the <filename>yocto-kernel</filename> script
+            found in the <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+            under <filename>scripts</filename> to add and manage kernel
+            patches and configuration.
+            See the "<ulink url='&YOCTO_DOCS_BSP_URL;#managing-kernel-patches-and-config-items-with-yocto-kernel'>Managing kernel Patches and Config Items with yocto-kernel</ulink>"
+            section in the Yocto Project Board Support Packages (BSP)
+            Developer's Guide for more information on the
+            <filename>yocto-kernel</filename> script.
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Edit the Source Files:</emphasis>
+                    Prior to this step, you should have used Git to create a
+                    local copy of the repository for your kernel.
+                    Assuming you created the repository as directed in the
+                    "<link linkend='getting-ready-for-traditional-kernel-development'>Getting Ready for Traditional Kernel Development</link>"
+                    section, use the following commands to edit the
+                    <filename>calibrate.c</filename> file:
+                    <orderedlist>
+                        <listitem><para>
+                            <emphasis>Change the working directory</emphasis>:
+                            You need to locate the source files in the
+                            local copy of the kernel Git repository.
+                            Change to where the kernel source code is before
+                            making your edits to the
+                            <filename>calibrate.c</filename> file:
+                            <literallayout class='monospaced'>
+     $ cd ~/linux-yocto-4.12/init
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis>Edit the source file</emphasis>:
+                            Edit the <filename>calibrate.c</filename> file to have
+                            the following changes:
+                            <literallayout class='monospaced'>
+     void calibrate_delay(void)
+     {
+         unsigned long lpj;
+         static bool printed;
+         int this_cpu = smp_processor_id();
+
+         printk("*************************************\n");
+         printk("*                                   *\n");
+         printk("*        HELLO YOCTO KERNEL         *\n");
+         printk("*                                   *\n");
+         printk("*************************************\n");
+
+     	if (per_cpu(cpu_loops_per_jiffy, this_cpu)) {
+               .
+               .
+               .
+                            </literallayout>
+                            </para></listitem>
+                    </orderedlist>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Stage and Commit Your Changes:</emphasis>
+                    Use standard Git commands to stage and commit the changes
+                    you just made:
+                    <literallayout class='monospaced'>
+     $ git add calibrate.c
+     $ git commit -m "calibrate.c - Added some printk statements"
+                    </literallayout>
+                    If you do not stage and commit your changes, the
+                    OpenEmbedded build system will not pick up the changes.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Update Your <filename>local.conf</filename> File
+                    to Point to Your Source Files:</emphasis>
+                    In addition to specifying that the build use
+                    "kernel-modules" and the "qemux86" machine, your
+                    <filename>local.conf</filename> file must also point to
+                    the updated kernel source files.
+                    Add
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
+                    and
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-SRCREV'><filename>SRCREV</filename></ulink>
+                    statements similar to the following to your
+                    <filename>local.conf</filename> file.
+                    First, change to the directory that contains it:
+                    <literallayout class='monospaced'>
+     $ cd ~/poky/build/conf
+                    </literallayout>
+                    Add the following to the <filename>local.conf</filename>:
+                    <literallayout class='monospaced'>
+     SRC_URI_pn-linux-yocto = "git:///<replaceable>path-to</replaceable>/linux-yocto-4.12;protocol=file;name=machine;branch=standard/base; \
+                               git:///<replaceable>path-to</replaceable>/yocto-kernel-cache;protocol=file;type=kmeta;name=meta;branch=yocto-4.12;destsuffix=${KMETA}"
+     SRCREV_meta_qemux86 = "${AUTOREV}"
+     SRCREV_machine_qemux86 = "${AUTOREV}"
+                    </literallayout>
+                    <note>
+                        Be sure to replace
+                        <replaceable>path-to</replaceable> with the pathname
+                        to your local Git repositories.
+                        Also, you must be sure to specify the correct branch
+                        and machine types.
+                        For this example, the branch is
+                        <filename>standard/base</filename> and the machine is
+                        "qemux86".
+                    </note>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Build the Image:</emphasis>
+                    With the source modified, your changes staged and
+                    committed, and the <filename>local.conf</filename> file
+                    pointing to the kernel files, you can now use BitBake to
+                    build the image:
+                    <literallayout class='monospaced'>
+     $ cd ~/poky/build
+     $ bitbake core-image-minimal
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Boot the image</emphasis>:
+                    Boot the modified image in the QEMU emulator
+                    using this command.
+                    When prompted to log in to the QEMU console, use "root"
+                    with no password:
+                    <literallayout class='monospaced'>
+     $ cd ~/poky/build
+     $ runqemu qemux86
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Look for Your Changes:</emphasis>
+                    As QEMU booted, you might have seen your changes rapidly
+                    scroll by.
+                    If not, use the following command to see your changes:
+                    <literallayout class='monospaced'>
+     # dmesg | less
+                    </literallayout>
+                    You should see the results of your
+                    <filename>printk</filename> statements
+                    as part of the output when you scroll down the
+                    console window.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Generate the Patch File:</emphasis>
+                    Once you are sure that your patch works correctly, you
+                    can generate a <filename>*.patch</filename> file in the
+                    kernel source repository:
+                    <literallayout class='monospaced'>
+     $ cd ~/linux-yocto-4.12/init
+     $ git format-patch -1
+     0001-calibrate.c-Added-some-printk-statements.patch
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Move the Patch File to Your Layer:</emphasis>
+                    In order for subsequent builds to pick up patches, you
+                    need to move the patch file you created in the previous
+                    step to your layer <filename>meta-mylayer</filename>.
+                    For this example, the layer created earlier is located
+                    in your home directory as <filename>meta-mylayer</filename>.
+                    When the layer was created using the
+                    <filename>yocto-create</filename> script, no additional
+                    hierarchy was created to support patches.
+                    Before moving the patch file, you need to add additional
+                    structure to your layer using the following commands:
+                    <literallayout class='monospaced'>
+     $ cd ~/meta-mylayer
+     $ mkdir recipes-kernel
+     $ mkdir recipes-kernel/linux
+     $ mkdir recipes-kernel/linux/linux-yocto
+                    </literallayout>
+                    Once you have created this hierarchy in your layer, you can
+                    move the patch file using the following command:
+                    <literallayout class='monospaced'>
+     $ mv ~/linux-yocto-4.12/init/0001-calibrate.c-Added-some-printk-statements.patch ~/meta-mylayer/recipes-kernel/linux/linux-yocto
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Create the Append File:</emphasis>
+                    Finally, you need to create the
+                    <filename>linux-yocto_4.12.bbappend</filename> file and
+                    insert statements that allow the OpenEmbedded build
+                    system to find the patch.
+                    The append file needs to be in your layer's
+                    <filename>recipes-kernel/linux</filename>
+                    directory and it must be named
+                    <filename>linux-yocto_4.12.bbappend</filename> and have
+                    the following contents:
+                    <literallayout class='monospaced'>
+     FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
+
+     SRC_URI_append = " file://0001-calibrate.c-Added-some-printk-statements.patch"
+                    </literallayout>
+                    The
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
+                    and
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
+                    statements enable the OpenEmbedded build system to find
+                    the patch file.
+                    The resulting layer layout is summarized after these
+                    steps.</para>
+
+                    <para>For more information on append files and patches,
+                    see the
+                    "<link linkend='creating-the-append-file'>Creating the Append File</link>"
+                    and
+                    "<link linkend='applying-patches'>Applying Patches</link>"
+                    sections.
+                    You can also see the
+                    "<ulink url='&YOCTO_DOCS_DEV_URL;#using-bbappend-files'>Using .bbappend Files in Your Layer</ulink>"
+                    section in the Yocto Project Development Tasks Manual.
+                    <note>
+                        To build <filename>core-image-minimal</filename>
+                        again and see the effects of your patch, you can
+                        essentially eliminate the temporary source files
+                        saved in <filename>poky/build/tmp/work/...</filename>
+                        and residual effects of the build by entering the
+                        following sequence of commands:
+                        <literallayout class='monospaced'>
+     $ cd ~/poky/build
+     $ bitbake linux-yocto -c cleanall
+     $ bitbake core-image-minimal -c cleanall
+     $ bitbake core-image-minimal
+     $ runqemu qemux86
+                        </literallayout>
+                    </note>
+                    </para></listitem>
+            </orderedlist>
+        </para>
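+
+        <para>
+            Assuming the file and layer names used in this example, the
+            relevant portion of <filename>meta-mylayer</filename> now looks
+            like the following:
+            <literallayout class='monospaced'>
+     meta-mylayer/recipes-kernel/linux/linux-yocto_4.12.bbappend
+     meta-mylayer/recipes-kernel/linux/linux-yocto/0001-calibrate.c-Added-some-printk-statements.patch
+            </literallayout>
+        </para>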
+    </section>
+
+    <section id='configuring-the-kernel'>
+        <title>Configuring the Kernel</title>
+
+        <para>
+            Configuring the Yocto Project kernel consists of making sure the
+            <filename>.config</filename> file has all the right information
+            in it for the image you are building.
+            You can use the <filename>menuconfig</filename> tool and
+            configuration fragments to make sure your
+            <filename>.config</filename> file is just how you need it.
+            You can also save known configurations in a
+            <filename>defconfig</filename> file that the build system can use
+            for kernel configuration.
+        </para>
+
+        <para>
+            This section describes how to use <filename>menuconfig</filename>,
+            create and use configuration fragments, and how to interactively
+            modify your <filename>.config</filename> file to create the
+            leanest kernel configuration file possible.
+        </para>
+
+        <para>
+            For more information on kernel configuration, see the
+            "<link linkend='changing-the-configuration'>Changing the Configuration</link>"
+            section.
+        </para>
+
+        <section id='using-menuconfig'>
+            <title>Using&nbsp;&nbsp;<filename>menuconfig</filename></title>
+
+            <para>
+                The easiest way to define kernel configurations is to set
+                them through the <filename>menuconfig</filename> tool.
+                This tool provides an interactive method with which
+                to set kernel configurations.
+                For general information on <filename>menuconfig</filename>, see
+                <ulink url='http://en.wikipedia.org/wiki/Menuconfig'></ulink>.
             </para>
 
             <para>
-                You can use the above Git command to report modified,
-                removed, or added files.
-                You should commit those changes to the tree regardless of
-                whether they will be saved, exported, or used.
-                Once you commit the changes, you need to rebuild the kernel.
-            </para>
-
-            <para>
-                To force a pickup and commit of all such pending changes,
-                enter the following:
+                To use the <filename>menuconfig</filename> tool in the Yocto
+                Project development environment, you must launch it using
+                BitBake.
+                Thus, the environment must be set up using the
+                <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
+                script found in the
+                <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
+                You must also be sure of the state of your build's
+                configuration in the
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
+                The following commands initialize the BitBake environment,
+                run the
+                <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-kernel_configme'><filename>do_kernel_configme</filename></ulink>
+                task, and launch <filename>menuconfig</filename>.
+                These commands assume the Source Directory's top-level folder
+                is <filename>~/poky</filename>:
                 <literallayout class='monospaced'>
-     $ git add .
-     $ git commit -s -a -m "getting rid of -dirty"
-                </literallayout>
-            </para>
-
-            <para>
-                Next, rebuild the kernel.
-            </para>
-        </section>
-
-        <section id='generating-configuration-files'>
-            <title>Generating Configuration Files</title>
-
-            <para>
-                You can manipulate the <filename>.config</filename> file
-                used to build a linux-yocto recipe with the
-                <filename>menuconfig</filename> command as follows:
-                <literallayout class='monospaced'>
+     $ cd poky
+     $ source oe-init-build-env
+     $ bitbake linux-yocto -c kernel_configme -f
      $ bitbake linux-yocto -c menuconfig
                 </literallayout>
-                This command starts the Linux kernel configuration tool,
-                which allows you to prepare a new
-                <filename>.config</filename> file for the build.
-                When you exit the tool, be sure to save your changes
-                at the prompt.
-            </para>
-
-            <para>
-                The resulting <filename>.config</filename> file is
-                located in the build directory,
-                <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-B'><filename>B</filename></ulink><filename>}</filename>,
-                which expands to
-                <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink><filename>}</filename><filename>/linux-</filename><filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_ARCH'><filename>PACKAGE_ARCH</filename></ulink><filename>}-${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-LINUX_KERNEL_TYPE'><filename>LINUX_KERNEL_TYPE</filename></ulink><filename>}-build</filename>.
-                You can use the entire <filename>.config</filename> file as the
-                <filename>defconfig</filename> file as described in the
-                "<link linkend='changing-the-configuration'>Changing the Configuration</link>" section.
-                For more information on the <filename>.config</filename> file,
-                see the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#using-menuconfig'>Using <filename>menuconfig</filename></ulink>"
-                section in the Yocto Project Development Manual.
+                Once <filename>menuconfig</filename> comes up, its standard
+                interface allows you to interactively examine and configure
+                all the kernel configuration parameters.
+                After making your changes, simply exit the tool and save your
+                changes to create an updated version of the
+                <filename>.config</filename> configuration file.
                 <note>
-                    You can determine what a variable expands to by looking
-                    at the output of the <filename>bitbake -e</filename>
-                    command:
-                    <literallayout class='monospaced'>
-     $ bitbake -e virtual/kernel
-                    </literallayout>
-                    Search the output for the variable in which you are
-                    interested to see exactly how it is expanded and used.
+                    You can use the entire <filename>.config</filename> file
+                    as the <filename>defconfig</filename> file.
+                    For information on <filename>defconfig</filename> files,
+                    see the
+                    "<link linkend='changing-the-configuration'>Changing the Configuration</link>",
+                    "<link linkend='using-an-in-tree-defconfig-file'>Using an In-Tree <filename>defconfig</filename> File</link>",
+                    and
+                    "<link linkend='creating-a-defconfig-file'>Creating a <filename>defconfig</filename> File</link>"
+                    sections.
                 </note>
             </para>
 
             <para>
-                A better method is to create a configuration fragment using the
+                Consider an example that configures the "CONFIG_SMP" setting
+                for the <filename>linux-yocto-4.12</filename> kernel.
+                <note>
+                    The OpenEmbedded build system recognizes this kernel as
+                    <filename>linux-yocto</filename> through Metadata (e.g.
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-PREFERRED_VERSION'><filename>PREFERRED_VERSION</filename></ulink><filename>_linux-yocto ?= "4.12%"</filename>).
+                </note>
+                Once <filename>menuconfig</filename> launches, use the
+                interface to navigate through the selections to find the
+                configuration settings in which you are interested.
+                For this example, you deselect "CONFIG_SMP" by clearing the
+                "Symmetric Multi-Processing Support" option.
+                Using the interface, you can find the option under
+                "Processor Type and Features".
+                To deselect "CONFIG_SMP", use the arrow keys to
+                highlight "Symmetric Multi-Processing Support" and enter "N"
+                to clear the asterisk.
+                When you are finished, exit out and save the change.
+            </para>
+
+            <para>
+                Saving the selections updates the <filename>.config</filename>
+                configuration file.
+                This is the file that the OpenEmbedded build system uses to
+                configure the kernel during the build.
+                You can find and examine this file in the Build Directory in
+                <filename>tmp/work/</filename>.
+                The actual <filename>.config</filename> is located in the
+                area where the specific kernel is built.
+                For example, if you were building a Linux Yocto kernel based
+                on the <filename>linux-yocto-4.12</filename> kernel and you
+                were building a QEMU image targeted for
+                <filename>x86</filename> architecture, the
+                <filename>.config</filename> file would be:
+                <literallayout class='monospaced'>
+     poky/build/tmp/work/qemux86-poky-linux/linux-yocto/4.12.12+gitAUTOINC+eda4d18...
+     ...967-r0/linux-qemux86-standard-build/.config
+                </literallayout>
+                <note>
+                    The previous example directory is artificially split and
+                    many of the characters in the actual filename are omitted
+                    in order to make it more readable.
+                    Also, depending on the kernel you are using, the exact
+                    pathname might differ.
+                </note>
+            </para>
+
+            <para>
+                Within the <filename>.config</filename> file, you can see the
+                kernel settings.
+                For example, the following entry shows that symmetric
+                multi-processor support is not set:
+                <literallayout class='monospaced'>
+     # CONFIG_SMP is not set
+                </literallayout>
+            </para>
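+
+            <para>
+                If you only need to check a few options, you do not have to
+                open the file in an editor.
+                For example, assuming you are in the kernel build directory
+                described above, a quick search shows the state of the
+                option and produces output similar to the following:
+                <literallayout class='monospaced'>
+     $ grep CONFIG_SMP .config
+     # CONFIG_SMP is not set
+                </literallayout>
+            </para>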
+
+            <para>
+                A good method to isolate changed configurations is to use a
+                combination of the <filename>menuconfig</filename> tool and
+                simple shell commands.
+                Before changing configurations with
+                <filename>menuconfig</filename>, copy the existing
+                <filename>.config</filename> and rename it to something else,
+                use <filename>menuconfig</filename> to make as many changes as
+                you want and save them, then compare the renamed configuration
+                file against the newly created file.
+                You can use the resulting differences as your base to create
+                configuration fragments to permanently save in your kernel
+                layer.
+                <note>
+                    Be sure to make a copy of the <filename>.config</filename>
+                    file and do not just rename it.
+                    The build system needs an existing
+                    <filename>.config</filename> file from which to work.
+                </note>
+            </para>
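+
+            <para>
+                As a minimal sketch of that flow, copy and compare the
+                <filename>.config</filename> file from within the kernel
+                build directory, and run <filename>menuconfig</filename>
+                from your Build Directory as usual:
+                <literallayout class='monospaced'>
+     $ cp .config config.orig
+     $ bitbake linux-yocto -c menuconfig
+     $ diff config.orig .config
+                </literallayout>
+                The differences that <filename>diff</filename> reports are
+                your candidates for configuration fragments.
+            </para>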
+        </section>
+
+        <section id='creating-a-defconfig-file'>
+            <title>Creating a&nbsp;&nbsp;<filename>defconfig</filename> File</title>
+
+            <para>
+                A <filename>defconfig</filename> file is simply a
+                <filename>.config</filename> renamed to "defconfig".
+                You can use a <filename>defconfig</filename> file
+                to retain a known set of kernel configurations from which the
+                OpenEmbedded build system can draw to create the final
+                <filename>.config</filename> file.
+                <note>
+                    Out-of-the-box, the Yocto Project never ships a
+                    <filename>defconfig</filename> or
+                    <filename>.config</filename> file.
+                    The OpenEmbedded build system creates the final
+                    <filename>.config</filename> file used to configure the
+                    kernel.
+                </note>
+            </para>
+
+            <para>
+                To create a <filename>defconfig</filename>, start with a
+                complete, working Linux kernel <filename>.config</filename>
+                file.
+                Copy that file to the appropriate
+                <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PN'><filename>PN</filename></ulink><filename>}</filename>
+                directory in your layer's
+                <filename>recipes-kernel/linux</filename> directory, and rename
+                the copied file to "defconfig" (e.g.
+                <filename>~/meta-mylayer/recipes-kernel/linux/linux-yocto/defconfig</filename>).
+                Then, add the following lines to the linux-yocto
+                <filename>.bbappend</filename> file in your layer:
+                <literallayout class='monospaced'>
+     FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
+     SRC_URI += "file://defconfig"
+                </literallayout>
+                The
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
+                tells the build system how to search for the file, while the
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESEXTRAPATHS'><filename>FILESEXTRAPATHS</filename></ulink>
+                extends the
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-FILESPATH'><filename>FILESPATH</filename></ulink>
+                variable (search directories) to include the
+                <filename>${PN}</filename> directory you created to hold the
+                configuration changes.
+                <note>
+                    The build system applies the configurations from the
+                    <filename>defconfig</filename> file before applying any
+                    subsequent configuration fragments.
+                    The final kernel configuration is therefore a combination
+                    of the configurations in the
+                    <filename>defconfig</filename> file and any configuration
+                    fragments you provide, with the fragments applied on top.
+                </note>
+                For more information on configuring the kernel, see the
+                "<link linkend='changing-the-configuration'>Changing the Configuration</link>"
+                section.
+            </para>
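+
+            <para>
+                As a concrete sketch of the copy step, and assuming the
+                <filename>meta-mylayer</filename> layer used earlier in this
+                chapter, run the following from the kernel build directory
+                that contains your working <filename>.config</filename>
+                file:
+                <literallayout class='monospaced'>
+     $ mkdir -p ~/meta-mylayer/recipes-kernel/linux/linux-yocto
+     $ cp .config ~/meta-mylayer/recipes-kernel/linux/linux-yocto/defconfig
+                </literallayout>
+            </para>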
+        </section>
+
+        <section id='creating-config-fragments'>
+            <title>Creating Configuration Fragments</title>
+
+            <para>
+                Configuration fragments are simply kernel options that
+                appear in a file placed where the OpenEmbedded build system
+                can find and apply them.
+                The build system applies configuration fragments after
+                applying the configurations from a
+                <filename>defconfig</filename> file.
+                Thus, the final kernel configuration is a combination of the
+                configurations in the <filename>defconfig</filename> file and
+                any configuration fragments you provide, with the fragments
+                applied on top.
+            </para>
+
+            <para>
+                Syntactically, the configuration statement is identical to
+                what would appear in the <filename>.config</filename> file,
+                which is in the
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
+                <note>
+                    For more information about where the
+                    <filename>.config</filename> file is located, see the
+                    example in the
+                    "<link linkend='using-menuconfig'>Using <filename>menuconfig</filename></link>"
+                    section.
+                </note>
+            </para>
+
+            <para>
+                It is simple to create a configuration fragment.
+                One method is to use shell commands.
+                For example, issuing the following from the shell creates a
+                configuration fragment file named
+                <filename>my_smp.cfg</filename> that enables multi-processor
+                support within the kernel:
+                <literallayout class='monospaced'>
+     $ echo "CONFIG_SMP=y" >> my_smp.cfg
+                </literallayout>
+                <note>
+                    All configuration fragment files must use the
+                    <filename>.cfg</filename> extension in order for the
+                    OpenEmbedded build system to recognize them as a
+                    configuration fragment.
+                </note>
+            </para>
+
+            <para>
+                Another method is to create a configuration fragment using the
                 differences between two configuration files: one previously
                 created and saved, and one freshly created using the
                 <filename>menuconfig</filename> tool.
@@ -567,40 +1732,47 @@
                 To create a configuration fragment using this method, follow
                 these steps:
                 <orderedlist>
-                    <listitem><para>Complete a build at least through the kernel
+                    <listitem><para>
+                        <emphasis>Complete a Build Through Kernel Configuration:</emphasis>
+                        Complete a build at least through the kernel
                         configuration task as follows:
                         <literallayout class='monospaced'>
      $ bitbake linux-yocto -c kernel_configme -f
                         </literallayout>
-                        This step ensures that you will be creating a
+                        This step ensures that you create a
                         <filename>.config</filename> file from a known state.
                         Because situations exist where your build state might
-                        become unknown, it is best to run the previous
-                        command prior to starting up
-                        <filename>menuconfig</filename>.
+                        become unknown, it is best to run this task prior
+                        to starting <filename>menuconfig</filename>.
                         </para></listitem>
-                    <listitem><para>Run the <filename>menuconfig</filename>
-                        command:
+                    <listitem><para>
+                        <emphasis>Launch <filename>menuconfig</filename>:</emphasis>
+                        Run the <filename>menuconfig</filename> command:
                         <literallayout class='monospaced'>
      $ bitbake linux-yocto -c menuconfig
-                        </literallayout></para></listitem>
-                    <listitem><para>Run the <filename>diffconfig</filename>
+                        </literallayout>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Create the Configuration Fragment:</emphasis>
+                        Run the <filename>diffconfig</filename>
                         command to prepare a configuration fragment.
                         The resulting file <filename>fragment.cfg</filename>
-                        will be placed in the
+                        is placed in the
                         <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink><filename>}</filename> directory:
                         <literallayout class='monospaced'>
      $ bitbake linux-yocto -c diffconfig
-                        </literallayout></para></listitem>
+                        </literallayout>
+                        </para></listitem>
                 </orderedlist>
             </para>
 
             <para>
-                The <filename>diffconfig</filename> command creates a file that is a
-                list of Linux kernel <filename>CONFIG_</filename> assignments.
+                The <filename>diffconfig</filename> command creates a file
+                that is a list of Linux kernel <filename>CONFIG_</filename>
+                assignments.
                 See the "<link linkend='changing-the-configuration'>Changing the Configuration</link>"
-                section for information on how to use the output as a
-                configuration fragment.
+                section for additional information on how to use the output
+                as a configuration fragment.
                 <note>
                     You can also use this method to create configuration
                     fragments for a BSP.
@@ -610,34 +1782,126 @@
             </para>
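+
+            <para>
+                As a purely illustrative example, a
+                <filename>fragment.cfg</filename> file produced by
+                <filename>diffconfig</filename> might contain entries such
+                as the following.
+                The specific options shown here are hypothetical and depend
+                entirely on the changes you made with
+                <filename>menuconfig</filename>:
+                <literallayout class='monospaced'>
+     CONFIG_SMP=y
+     CONFIG_USB_SERIAL=m
+     # CONFIG_DEBUG_INFO is not set
+                </literallayout>
+            </para>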
 
             <para>
-                The kernel tools also provide configuration validation.
-                You can use these tools to produce warnings for when a
+                Where do you put your configuration fragment files?
+                You can place these files in an area of your layer that is
+                pointed to by
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>,
+                provided your layer is enabled in your build's
+                <filename>bblayers.conf</filename> file.
+                The OpenEmbedded build system picks up the configuration and
+                adds it to the kernel's configuration.
+                For example, suppose you had a set of configuration options
+                in a file called <filename>myconfig.cfg</filename>.
+                If you put that file inside a directory named
+                <filename>linux-yocto</filename> that resides in the same
+                directory as the kernel's append file within your layer
+                and then add the following statements to the kernel's append
+                file, those configuration options will be picked up and applied
+                when the kernel is built:
+                <literallayout class='monospaced'>
+     FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
+     SRC_URI += "file://myconfig.cfg"
+                </literallayout>
+            </para>
+
+            <para>
+                As mentioned earlier, you can group related configurations
+                into multiple files and name them all in the
+                <filename>SRC_URI</filename> statement as well.
+                For example, you could group separate configurations
+                specifically for Ethernet and graphics into their own files
+                and add those by using a <filename>SRC_URI</filename> statement
+                like the following in your append file:
+                <literallayout class='monospaced'>
+     SRC_URI += "file://myconfig.cfg \
+            file://eth.cfg \
+            file://gfx.cfg"
+                </literallayout>
+            </para>
+        </section>
+
+        <section id='validating-configuration'>
+            <title>Validating Configuration</title>
+
+            <para>
+                You can use the
+                <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-kernel_configcheck'><filename>do_kernel_configcheck</filename></ulink>
+                task to provide configuration validation:
+                <literallayout class='monospaced'>
+     $ bitbake linux-yocto -c kernel_configcheck -f
+                </literallayout>
+                Running this task produces warnings when a
                 requested configuration does not appear in the final
                 <filename>.config</filename> file or when you override a
                 policy configuration in a hardware configuration fragment.
-                Here is an example with some sample output of the command
-                that runs these tools:
+            </para>
+
+            <para>
+                In order to run this task, you must have an existing
+                <filename>.config</filename> file.
+                See the
+                "<link linkend='using-menuconfig'>Using <filename>menuconfig</filename></link>"
+                section for information on how to create a configuration file.
+            </para>
+
+            <para>
+                Following is sample output from the
+                <filename>do_kernel_configcheck</filename> task:
                 <literallayout class='monospaced'>
-     $ bitbake linux-yocto -c kernel_configcheck -f
+     Loading cache: 100% |########################################################| Time: 0:00:00
+     Loaded 1275 entries from dependency cache.
+     NOTE: Resolving any missing task queue dependencies
 
-     ...
+     Build Configuration:
+         .
+         .
+         .
 
-     NOTE: validating kernel configuration
-     This BSP sets 3 invalid/obsolete kernel options.
-     These config options are not offered anywhere within this kernel.
-     The full list can be found in your kernel src dir at:
-     meta/cfg/standard/mybsp/invalid.cfg
+     NOTE: Executing SetScene Tasks
+     NOTE: Executing RunQueue Tasks
+     WARNING: linux-yocto-4.12.12+gitAUTOINC+eda4d18ce4_16de014967-r0 do_kernel_configcheck:
+         [kernel config]: specified values did not make it into the kernel's final configuration:
 
-     This BSP sets 21 kernel options that are possibly non-hardware related.
-     The full list can be found in your kernel src dir at:
-     meta/cfg/standard/mybsp/specified_non_hdw.cfg
+     ---------- CONFIG_X86_TSC -----------------
+     Config: CONFIG_X86_TSC
+     From: /home/scottrif/poky/build/tmp/work-shared/qemux86/kernel-source/.kernel-meta/configs/standard/bsp/common-pc/common-pc-cpu.cfg
+     Requested value:  CONFIG_X86_TSC=y
+     Actual value:
 
-     WARNING: There were 2 hardware options requested that do not
-              have a corresponding value present in the final ".config" file.
-              This probably means you are not getting the config you wanted.
-              The full list can be found in your kernel src dir at:
-              meta/cfg/standard/mybsp/mismatch.cfg
+
+     ---------- CONFIG_X86_BIGSMP -----------------
+     Config: CONFIG_X86_BIGSMP
+     From: /home/scottrif/poky/build/tmp/work-shared/qemux86/kernel-source/.kernel-meta/configs/standard/cfg/smp.cfg
+           /home/scottrif/poky/build/tmp/work-shared/qemux86/kernel-source/.kernel-meta/configs/standard/defconfig
+     Requested value:  # CONFIG_X86_BIGSMP is not set
+     Actual value:
+
+
+     ---------- CONFIG_NR_CPUS -----------------
+     Config: CONFIG_NR_CPUS
+     From: /home/scottrif/poky/build/tmp/work-shared/qemux86/kernel-source/.kernel-meta/configs/standard/cfg/smp.cfg
+           /home/scottrif/poky/build/tmp/work-shared/qemux86/kernel-source/.kernel-meta/configs/standard/bsp/common-pc/common-pc.cfg
+           /home/scottrif/poky/build/tmp/work-shared/qemux86/kernel-source/.kernel-meta/configs/standard/defconfig
+     Requested value:  CONFIG_NR_CPUS=8
+     Actual value:     CONFIG_NR_CPUS=1
+
+
+     ---------- CONFIG_SCHED_SMT -----------------
+     Config: CONFIG_SCHED_SMT
+     From: /home/scottrif/poky/build/tmp/work-shared/qemux86/kernel-source/.kernel-meta/configs/standard/cfg/smp.cfg
+           /home/scottrif/poky/build/tmp/work-shared/qemux86/kernel-source/.kernel-meta/configs/standard/defconfig
+     Requested value:  CONFIG_SCHED_SMT=y
+     Actual value:
+
+
+
+     NOTE: Tasks Summary: Attempted 288 tasks of which 285 didn't need to be rerun and all succeeded.
+
+     Summary: There were 3 WARNING messages shown.
                 </literallayout>
+                <note>
+                    The previous output example has artificial line breaks
+                    to make it more readable.
+                </note>
             </para>
 
             <para>
@@ -646,111 +1910,208 @@
                 items.
                 You can use the information in the logs to adjust your
                 configuration files and then repeat the
-                <filename>kernel_configme</filename> and
-                <filename>kernel_configcheck</filename> commands until
-                they produce no warnings.
+                <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-kernel_configme'><filename>do_kernel_configme</filename></ulink>
+                and
+                <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-kernel_configcheck'><filename>do_kernel_configcheck</filename></ulink>
+                tasks until they produce no warnings.
             </para>
 
             <para>
                 For more information on how to use the
                 <filename>menuconfig</filename> tool, see the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#using-menuconfig'>Using <filename>menuconfig</filename></ulink>"
-                section in the Yocto Project Development Manual.
-            </para>
-        </section>
-
-        <section id='modifying-source-code'>
-            <title>Modifying Source Code</title>
-
-            <para>
-                You can experiment with source code changes and create a
-                simple patch without leaving the BitBake environment.
-                To get started, be sure to complete a build at
-                least through the kernel configuration task:
-                <literallayout class='monospaced'>
-     $ bitbake linux-yocto -c kernel_configme -f
-                </literallayout>
-                Taking this step ensures you have the sources prepared
-                and the configuration completed.
-                You can find the sources in the build directory within the
-                <filename>source/</filename> directory, which is a symlink
-                (i.e. <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-B'><filename>B</filename></ulink><filename>}/source</filename>).
-                The <filename>source/</filename> directory expands to
-                <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-WORKDIR'><filename>WORKDIR</filename></ulink><filename>}</filename><filename>/linux-</filename><filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-PACKAGE_ARCH'><filename>PACKAGE_ARCH</filename></ulink><filename>}-${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-LINUX_KERNEL_TYPE'><filename>LINUX_KERNEL_TYPE</filename></ulink><filename>}-build/source</filename>.
-                The directory pointed to by the
-                <filename>source/</filename> symlink is also known as
-                <filename>${</filename><ulink url='&YOCTO_DOCS_REF_URL;#var-STAGING_KERNEL_DIR'><filename>STAGING_KERNEL_DIR</filename></ulink><filename>}</filename>.
-            </para>
-
-            <para>
-                You can edit the sources as you would any other Linux source
-                tree.
-                However, keep in mind that you will lose changes if you
-                trigger the
-                <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-fetch'><filename>do_fetch</filename></ulink>
-                task for the recipe.
-                You can avoid triggering this task by not using BitBake to
-                run the
-                <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-cleanall'><filename>cleanall</filename></ulink>,
-                <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-cleansstate'><filename>cleansstate</filename></ulink>,
-                or forced
-                <ulink url='&YOCTO_DOCS_REF_URL;#ref-tasks-fetch'><filename>fetch</filename></ulink>
-                commands.
-                Also, do not modify the recipe itself while working
-                with temporary changes or BitBake might run the
-                <filename>fetch</filename> command depending on the
-                changes to the recipe.
-            </para>
-
-            <para>
-                To test your temporary changes, instruct BitBake to run the
-                <filename>compile</filename> again.
-                The <filename>-f</filename> option forces the command to run
-                even though BitBake might think it has already done so:
-                <literallayout class='monospaced'>
-     $ bitbake linux-yocto -c compile -f
-                </literallayout>
-                If the compile fails, you can update the sources and repeat
-                the <filename>compile</filename>.
-                Once compilation is successful, you can inspect and test
-                the resulting build (i.e. kernel, modules, and so forth) from
-                the following build directory:
-                <literallayout class='monospaced'>
-     ${WORKDIR}/linux-${PACKAGE_ARCH}-${LINUX_KERNEL_TYPE}-build
-                </literallayout>
-                Alternatively, you can run the <filename>deploy</filename>
-                command to place the kernel image in the
-                <filename>tmp/deploy/images</filename> directory:
-                <literallayout class='monospaced'>
-	$ bitbake linux-yocto -c deploy
-                </literallayout>
-                And, of course, you can perform the remaining installation and
-                packaging steps by issuing:
-                <literallayout class='monospaced'>
-	$ bitbake linux-yocto
-                </literallayout>
-            </para>
-
-            <para>
-                For rapid iterative development, the edit-compile-repeat loop
-                described in this section is preferable to rebuilding the
-                entire recipe because the installation and packaging tasks
-                are very time consuming.
-            </para>
-
-            <para>
-                Once you are satisfied with your source code modifications,
-                you can make them permanent by generating patches and
-                applying them to the
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
-                statement as described in the
-                "<link linkend='applying-patches'>Applying Patches</link>"
+                "<link linkend='using-menuconfig'>Using <filename>menuconfig</filename></link>"
                 section.
-                If you are not familiar with generating patches, refer to the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-the-patch'>Creating the Patch</ulink>"
-                section in the Yocto Project Development Manual.
             </para>
         </section>
+
+        <section id='fine-tuning-the-kernel-configuration-file'>
+            <title>Fine-Tuning the Kernel Configuration File</title>
+
+            <para>
+                You can make sure the <filename>.config</filename> file is as
+                lean or efficient as possible by reading the output of the
+                kernel configuration fragment audit, noting any issues, making
+                changes to correct the issues, and then repeating.
+            </para>
+
+            <para>
+                As part of the kernel build process, the
+                <filename>do_kernel_configcheck</filename> task runs.
+                This task validates the kernel configuration by checking the
+                final <filename>.config</filename> file against the input
+                files.
+                During the check, the task produces warning messages for the
+                following issues (a hypothetical fragment illustrating two of
+                these issues appears after the list):
+                <itemizedlist>
+                    <listitem><para>
+                        Requested options that did not make the final
+                        <filename>.config</filename> file.
+                        </para></listitem>
+                    <listitem><para>
+                        Configuration items that appear twice in the same
+                        configuration fragment.
+                        </para></listitem>
+                    <listitem><para>
+                        Configuration items tagged as "required" that were
+                        overridden.
+                        </para></listitem>
+                    <listitem><para>
+                        Board-specific fragments that override non-board
+                        specific options.
+                        </para></listitem>
+                    <listitem><para>
+                        Listed options not valid for the kernel being
+                        processed.
+                        In other words, the option does not appear anywhere.
+                        </para></listitem>
+                </itemizedlist>
+                <note>
+                    The <filename>do_kernel_configcheck</filename> task can
+                    also optionally report if an option is overridden during
+                    processing.
+                </note>
+            </para>
+
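+            <para>
+                As a purely hypothetical illustration, a fragment such as
+                the following would produce warnings for both a repeated
+                configuration item and an option that is not valid for the
+                kernel being processed (assuming
+                <filename>CONFIG_NO_SUCH_OPTION</filename> does not exist
+                in that kernel):
+                <literallayout class='monospaced'>
+     CONFIG_SMP=y
+     CONFIG_SMP=y
+     CONFIG_NO_SUCH_OPTION=y
+                </literallayout>
+            </para>
+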
+            <para>
+                For each output warning, a message points to the file
+                that contains a list of the options and a pointer to the
+                configuration fragment that defines them.
+                Collectively, the files are the key to streamlining the
+                configuration.
+            </para>
+
+            <para>
+                To streamline the configuration, do the following:
+                <orderedlist>
+                    <listitem><para>
+                        <emphasis>Use a Working Configuration:</emphasis>
+                        Start with a full configuration that you
+                        know works.
+                        Be sure the configuration builds and boots
+                        successfully.
+                        Use this configuration file as your baseline.
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Run Configure and Check Tasks:</emphasis>
+                        Separately run the
+                        <filename>do_kernel_configme</filename> and
+                        <filename>do_kernel_configcheck</filename> tasks:
+                        <literallayout class='monospaced'>
+     $ bitbake linux-yocto -c kernel_configme -f
+     $ bitbake linux-yocto -c kernel_configcheck -f
+                        </literallayout>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Process the Results:</emphasis>
+                        Take the resulting list of files from the
+                        <filename>do_kernel_configcheck</filename> task
+                        warnings and do the following:
+                        <itemizedlist>
+                            <listitem><para>
+                                Drop values that are redefined in the fragment
+                                but do not change the final
+                                <filename>.config</filename> file.
+                                </para></listitem>
+                            <listitem><para>
+                                Analyze and potentially drop values from the
+                                <filename>.config</filename> file that override
+                                required configurations.
+                                </para></listitem>
+                            <listitem><para>
+                                Analyze and potentially remove non-board
+                                specific options.
+                                </para></listitem>
+                            <listitem><para>
+                                Remove repeated and invalid options.
+                                </para></listitem>
+                        </itemizedlist>
+                        </para></listitem>
+                    <listitem><para>
+                        <emphasis>Re-Run Configure and Check Tasks:</emphasis>
+                        After you have worked through the output of the kernel
+                        configuration audit, you can re-run the
+                        <filename>do_kernel_configme</filename> and
+                        <filename>do_kernel_configcheck</filename> tasks to
+                        see the results of your changes.
+                        If you have more issues, you can deal with them as
+                        described in the previous step.
+                        </para></listitem>
+                </orderedlist>
+            </para>
+
+            <para>
+                Iteratively working through steps two through four eventually
+                yields a minimal, streamlined configuration file.
+                Once you have the best <filename>.config</filename>, you can
+                build the Yocto Linux kernel.
+            </para>
+        </section>
+    </section>
+
+    <section id='expanding-variables'>
+        <title>Expanding Variables</title>
+
+        <para>
+            Sometimes it is helpful to determine what a variable expands
+            to during a build.
+            You can examine the values of variables by looking at the
+            output of the <filename>bitbake -e</filename> command.
+            The output is long and is more easily managed in a text file,
+            which allows for easy searches:
+            <literallayout class='monospaced'>
+     $ bitbake -e virtual/kernel > <replaceable>some_text_file</replaceable>
+            </literallayout>
+            Within the text file, you can see exactly how each variable is
+            expanded and used by the OpenEmbedded build system.
+        </para>
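+
+        <para>
+            For example, once you have saved the output, you can search the
+            file for a particular variable.
+            The variable name used here is only an illustration:
+            <literallayout class='monospaced'>
+     $ grep "^STAGING_KERNEL_DIR=" <replaceable>some_text_file</replaceable>
+            </literallayout>
+        </para>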
+    </section>
+
+    <section id='working-with-a-dirty-kernel-version-string'>
+        <title>Working with a "Dirty" Kernel Version String</title>
+
+        <para>
+            If you build a kernel image and the version string has a
+            "+" or a "-dirty" at the end, uncommitted modifications exist
+            in the kernel's source directory.
+            Follow these steps to clean up the version string:
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Discover the Uncommitted Changes:</emphasis>
+                    Go to the kernel's locally cloned Git repository
+                    (source directory) and use the following Git command
+                    to list the files that have been changed, added, or
+                    removed:
+                    <literallayout class='monospaced'>
+     $ git status
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Commit the Changes:</emphasis>
+                    You should commit those changes to the kernel source
+                    tree regardless of whether or not you will save,
+                    export, or use the changes:
+                    <literallayout class='monospaced'>
+     $ git add .
+     $ git commit -s -a -m "getting rid of -dirty"
+                    </literallayout>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Rebuild the Kernel Image:</emphasis>
+                    Once you commit the changes, rebuild the kernel.</para>
+
+                    <para>Depending on your particular kernel development
+                    workflow, the commands you use to rebuild the
+                    kernel might differ.
+                    For information on building the kernel image when
+                    using <filename>devtool</filename>, see the
+                    "<link linkend='using-devtool-to-patch-the-kernel'>Using <filename>devtool</filename> to Patch the Kernel</link>"
+                    section.
+                    For information on building the kernel image when
+                    using BitBake, see the
+                    "<link linkend='using-traditional-kernel-development-to-patch-the-kernel'>Using Traditional Kernel Development to Patch the Kernel</link>"
+                    section.
+                    </para></listitem>
+            </orderedlist>
+        </para>
     </section>
 
     <section id='working-with-your-own-sources'>
@@ -763,7 +2124,7 @@
             working with your own sources.
             When you use your own sources, you will not be able to
             leverage the existing kernel
-            <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink> and
+            <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink> and
             stabilization work of the linux-yocto sources.
             However, you will be able to manage your own Metadata in the same
             format as the linux-yocto sources.
@@ -788,23 +2149,29 @@
         </para>
 
         <para>
-            Here are some basic steps you can use to work with your own sources:
+            Here are some basic steps you can use to work with your own
+            sources (a brief, hypothetical recipe sketch follows these
+            steps):
             <orderedlist>
-                <listitem><para>Copy the <filename>linux-yocto-custom.bb</filename>
+                <listitem><para>
+                    <emphasis>Create a Copy of the Kernel Recipe:</emphasis>
+                    Copy the <filename>linux-yocto-custom.bb</filename>
                     recipe to your layer and give it a meaningful name.
-                    The name should include the version of the Linux kernel you
-                    are using (e.g.
-                    <filename>linux-yocto-myproject_3.19.bb</filename>,
-                    where "3.19" is the base version of the Linux kernel
-                    with which you would be working).</para></listitem>
-                <listitem><para>In the same directory inside your layer,
-                    create a matching directory
-                    to store your patches and configuration files (e.g.
-                    <filename>linux-yocto-myproject</filename>).
+                    The name should include the version of the Yocto Linux
+                    kernel you are using (e.g.
+                    <filename>linux-yocto-myproject_4.12.bb</filename>,
+                    where "4.12" is the base version of the Linux kernel
+                    with which you would be working).
                     </para></listitem>
-                <listitem><para>Make sure you have either a
-                    <filename>defconfig</filename> file or configuration
-                    fragment files.
+                <listitem><para>
+                    <emphasis>Create a Directory for Your Patches:</emphasis>
+                    In the same directory inside your layer, create a matching
+                    directory to store your patches and configuration files
+                    (e.g. <filename>linux-yocto-myproject</filename>).
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Ensure You Have Configurations:</emphasis>
+                    Make sure you have either a <filename>defconfig</filename>
+                    file or configuration fragment files in your layer.
                     When you use the <filename>linux-yocto-custom.bb</filename>
                     recipe, you must specify a configuration.
                     If you do not have a <filename>defconfig</filename> file,
@@ -813,27 +2180,32 @@
      $ make defconfig
                     </literallayout>
                     After running the command, copy the resulting
-                    <filename>.config</filename> to the
-                    <filename>files</filename> directory as "defconfig" and
-                    then add it to the
+                    <filename>.config</filename> file to the
+                    <filename>files</filename> directory in your layer
+                    as "defconfig" and then add it to the
                     <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
                     variable in the recipe.</para>
+
                     <para>Running the <filename>make defconfig</filename>
                     command results in the default configuration for your
                     architecture as defined by your kernel.
                     However, no guarantee exists that this configuration is
                     valid for your use case, or that your board will even boot.
-                    This is particularly true for non-x86 architectures.
-                    To use non-x86 <filename>defconfig</filename> files, you
-                    need to be more specific and find one that matches your
+                    This is particularly true for non-x86 architectures.</para>
+
+                    <para>To use non-x86 <filename>defconfig</filename> files,
+                    you need to be more specific and find one that matches your
                     board (i.e. for arm, you look in
                     <filename>arch/arm/configs</filename> and use the one that
                     is the best starting point for your board).
                     </para></listitem>
-                <listitem><para>Edit the following variables in your recipe
-                    as appropriate for your project:
+                <listitem><para>
+                    <emphasis>Edit the Recipe:</emphasis>
+                    Edit the following variables in your recipe as appropriate
+                    for your project:
                     <itemizedlist>
-                        <listitem><para><ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>:
+                        <listitem><para>
+                            <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>:
                             The <filename>SRC_URI</filename> should specify
                             a Git repository that uses one of the supported Git
                             fetcher protocols (i.e. <filename>file</filename>,
@@ -845,24 +2217,32 @@
                             The skeleton recipe provides an example
                             <filename>SRC_URI</filename> as a syntax reference.
                             </para></listitem>
-                        <listitem><para><ulink url='&YOCTO_DOCS_REF_URL;#var-LINUX_VERSION'><filename>LINUX_VERSION</filename></ulink>:
+                        <listitem><para>
+                            <ulink url='&YOCTO_DOCS_REF_URL;#var-LINUX_VERSION'><filename>LINUX_VERSION</filename></ulink>:
                             The Linux kernel version you are using (e.g.
-                            "3.19").</para></listitem>
-                        <listitem><para><ulink url='&YOCTO_DOCS_REF_URL;#var-LINUX_VERSION_EXTENSION'><filename>LINUX_VERSION_EXTENSION</filename></ulink>:
-                            The Linux kernel <filename>CONFIG_LOCALVERSION</filename>
-                            that is compiled into the resulting kernel and visible
+                            "4.12").
+                            </para></listitem>
+                        <listitem><para>
+                            <ulink url='&YOCTO_DOCS_REF_URL;#var-LINUX_VERSION_EXTENSION'><filename>LINUX_VERSION_EXTENSION</filename></ulink>:
+                            The Linux kernel
+                            <filename>CONFIG_LOCALVERSION</filename> that is
+                            compiled into the resulting kernel and visible
                             through the <filename>uname</filename> command.
                             </para></listitem>
-                        <listitem><para><ulink url='&YOCTO_DOCS_REF_URL;#var-SRCREV'><filename>SRCREV</filename></ulink>:
+                        <listitem><para>
+                            <ulink url='&YOCTO_DOCS_REF_URL;#var-SRCREV'><filename>SRCREV</filename></ulink>:
                             The commit ID from which you want to build.
                             </para></listitem>
-                        <listitem><para><ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink>:
-                            Treat this variable the same as you would in any other
-                            recipe.
-                            Increment the variable to indicate to the OpenEmbedded
-                            build system that the recipe has changed.
+                        <listitem><para>
+                            <ulink url='&YOCTO_DOCS_REF_URL;#var-PR'><filename>PR</filename></ulink>:
+                            Treat this variable the same as you would in any
+                            other recipe.
+                            Increment the variable to indicate to the
+                            OpenEmbedded build system that the recipe has
+                            changed.
                             </para></listitem>
-                        <listitem><para><ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>:
+                        <listitem><para>
+                            <ulink url='&YOCTO_DOCS_REF_URL;#var-PV'><filename>PV</filename></ulink>:
                             The default <filename>PV</filename> assignment is
                             typically adequate.
                             It combines the <filename>LINUX_VERSION</filename>
@@ -870,16 +2250,17 @@
                             as derived from the
                             <ulink url='&YOCTO_DOCS_REF_URL;#var-SRCPV'><filename>SRCPV</filename></ulink>
                             variable.
-                            The combined results are a string with
-                            the following form:
+                            The combined results are a string with the
+                            following form:
                             <literallayout class='monospaced'>
      3.19.11+git1+68a635bf8dfb64b02263c1ac80c948647cc76d5f_1+218bd8d2022b9852c60d32f0d770931e3cf343e2
                             </literallayout>
-                            While lengthy, the extra verbosity in <filename>PV</filename>
-                            helps ensure you are using the exact
-                            sources from which you intend to build.
+                            While lengthy, the extra verbosity in
+                            <filename>PV</filename> helps ensure you are using
+                            the exact sources from which you intend to build.
                             </para></listitem>
-                        <listitem><para><ulink url='&YOCTO_DOCS_REF_URL;#var-COMPATIBLE_MACHINE'><filename>COMPATIBLE_MACHINE</filename></ulink>:
+                        <listitem><para>
+                            <ulink url='&YOCTO_DOCS_REF_URL;#var-COMPATIBLE_MACHINE'><filename>COMPATIBLE_MACHINE</filename></ulink>:
                             A list of the machines supported by your new recipe.
                             This variable in the example recipe is set
                             by default to a regular expression that matches
@@ -888,18 +2269,24 @@
                             failure.
                             You must change it to match a list of the machines
                             that your new recipe supports.
-                            For example, to support the <filename>qemux86</filename>
-                            and <filename>qemux86-64</filename> machines, use
+                            For example, to support the
+                            <filename>qemux86</filename> and
+                            <filename>qemux86-64</filename> machines, use
                             the following form:
                             <literallayout class='monospaced'>
      COMPATIBLE_MACHINE = "qemux86|qemux86-64"
-                            </literallayout></para></listitem>
-                    </itemizedlist></para></listitem>
-                <listitem><para>Provide further customizations to your recipe
+                            </literallayout>
+                            </para></listitem>
+                    </itemizedlist>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Customize Your Recipe as Needed:</emphasis>
+                    Provide further customizations to your recipe
                     as needed just as you would customize an existing
                     linux-yocto recipe.
-                    See the "<link linkend='modifying-an-existing-recipe'>Modifying
-                    an Existing Recipe</link>" section for information.
+                    See the
+                    "<link linkend='modifying-an-existing-recipe'>Modifying an Existing Recipe</link>"
+                    section for information.
                     </para></listitem>
             </orderedlist>
         </para>
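+
+        <para>
+            The following is a brief, hypothetical sketch of what such a
+            recipe might look like after you work through the previous
+            steps.
+            The repository URL, branch, commit ID, version, and extension
+            shown here are placeholders only; replace them with values
+            appropriate for your sources:
+            <literallayout class='monospaced'>
+     # linux-yocto-myproject_4.12.bb (hypothetical example)
+     # Based on a copy of the linux-yocto-custom.bb skeleton recipe;
+     # only the edited variables are shown here.
+
+     SRC_URI = "git://git.example.com/pub/linux.git;protocol=git;branch=master"
+     SRC_URI += "file://defconfig"
+
+     LINUX_VERSION = "4.12"
+     LINUX_VERSION_EXTENSION = "-myproject"
+
+     SRCREV = "<replaceable>commit_ID</replaceable>"
+
+     # The default PV assignment from the skeleton recipe is typically
+     # adequate, so it is not overridden here.
+
+     COMPATIBLE_MACHINE = "qemux86|qemux86-64"
+            </literallayout>
+        </para>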
@@ -1230,7 +2617,8 @@
             The OpenEmbedded build system searches all forms of kernel
             Metadata on the <filename>SRC_URI</filename> statement regardless
             of whether the Metadata is in the "kernel-cache", system kernel
-            Metadata, or a recipe-space Metadata.
+            Metadata, or recipe-space Metadata (i.e. part of the kernel
+            recipe).
             See the
             "<link linkend='kernel-metadata-location'>Kernel Metadata Location</link>"
             section for additional information.
@@ -1253,6 +2641,7 @@
             to the build.
             <orderedlist>
                 <listitem><para>
+                    <emphasis>Create the Feature File:</emphasis>
                     Create a <filename>.scc</filename> file and locate it
                     just as you would any other patch file,
                     <filename>.cfg</filename> file, or fetcher item
@@ -1295,6 +2684,7 @@
                     <filename>test.cfg</filename>.
                     </para></listitem>
                 <listitem><para>
+                    <emphasis>Add the Feature File to <filename>SRC_URI</filename>:</emphasis>
                     Add the <filename>.scc</filename> file to the
                     recipe's <filename>SRC_URI</filename> statement:
                     <literallayout class='monospaced'>
@@ -1304,7 +2694,9 @@
                     path is appended to the existing path.
                     </para></listitem>
                 <listitem><para>
-                    Specify the feature as a kernel feature:
+                    <emphasis>Specify the Feature as a Kernel Feature:</emphasis>
+                    Use the <filename>KERNEL_FEATURES</filename> statement
+                    to specify the feature as a kernel feature:
                     <literallayout class='monospaced'>
      KERNEL_FEATURES_append = " test.scc"
                     </literallayout>
diff --git a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-concepts-appx.xml b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-concepts-appx.xml
index ac91749..fbecc13 100644
--- a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-concepts-appx.xml
+++ b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-concepts-appx.xml
@@ -7,245 +7,613 @@
 
     <section id='kernel-big-picture'>
         <title>Yocto Project Kernel Development and Maintenance</title>
+
         <para>
-            Kernels available through the Yocto Project, like other kernels, are based off the Linux
-            kernel releases from <ulink url='http://www.kernel.org'></ulink>.
-            At the beginning of a major development cycle, the Yocto Project team
-            chooses its kernel based on factors such as release timing, the anticipated release
-            timing of final upstream <filename>kernel.org</filename> versions, and Yocto Project
+            Kernels available through the Yocto Project (Yocto Linux kernels),
+            like other kernels, are based off the Linux kernel releases from
+            <ulink url='http://www.kernel.org'></ulink>.
+            At the beginning of a major Linux kernel development cycle, the
+            Yocto Project team chooses a Linux kernel based on factors such as
+            release timing, the anticipated release timing of final upstream
+            <filename>kernel.org</filename> versions, and Yocto Project
             feature requirements.
-            Typically, the kernel chosen is in the
-            final stages of development by the community.
-            In other words, the kernel is in the release
-            candidate or "rc" phase and not yet a final release.
-            But, by being in the final stages of external development, the team knows that the
-            <filename>kernel.org</filename> final release will clearly be within the early stages of
-            the Yocto Project development window.
+            Typically, the Linux kernel chosen is in the final stages of
+            development by the Linux community.
+            In other words, the Linux kernel is in the release candidate
+            or "rc" phase and has yet to reach final release.
+            But, by being in the final stages of external development, the
+            team knows that the <filename>kernel.org</filename> final release
+            will clearly be within the early stages of the Yocto Project
+            development window.
         </para>
+
         <para>
-            This balance allows the team to deliver the most up-to-date kernel
-            possible, while still ensuring that the team has a stable official release for
-            the baseline Linux kernel version.
+            This balance allows the Yocto Project team to deliver the most
+            up-to-date Yocto Linux kernel possible, while still ensuring that
+            the team has a stable official release for the baseline Linux
+            kernel version.
         </para>
+
         <para>
-            The ultimate source for kernels available through the Yocto Project are released kernels
-            from <filename>kernel.org</filename>.
-            In addition to a foundational kernel from <filename>kernel.org</filename>, the
-            kernels available contain a mix of important new mainline
-            developments, non-mainline developments (when there is no alternative),
-            Board Support Package (BSP) developments,
-            and custom features.
-            These additions result in a commercially released Yocto Project Linux kernel that caters
-            to specific embedded designer needs for targeted hardware.
+            As implied earlier, the ultimate source for Yocto Linux kernels
+            are released kernels from <filename>kernel.org</filename>.
+            In addition to a foundational kernel from
+            <filename>kernel.org</filename>, the available Yocto Linux kernels
+            contain a mix of important new mainline developments, non-mainline
+            developments (when no alternative exists), Board Support Package
+            (BSP) developments, and custom features.
+            These additions result in a commercially released Yocto
+            Project Linux kernel that caters to specific embedded designer
+            needs for targeted hardware.
         </para>
+
         <para>
-            Once a kernel is officially released, the Yocto Project team goes into
-            their next development cycle, or upward revision (uprev) cycle, while still
-            continuing maintenance on the released kernel.
+            You can find a web interface to the Yocto Linux kernels in the
+            <ulink url='&YOCTO_DOCS_REF_URL;#source-repositories'>Source Repositories</ulink>
+            at
+            <ulink url='&YOCTO_GIT_URL;'></ulink>.
+            If you look at the interface, you will see to the left a
+            grouping of Git repositories titled "Yocto Linux Kernel".
+            Within this group, you will find several Yocto Linux kernels
+            developed and included with Yocto Project releases:
+            <itemizedlist>
+                <listitem><para>
+                    <emphasis><filename>linux-yocto-4.1</filename>:</emphasis>
+                    The stable Yocto Project kernel to use with the Yocto
+                    Project Release 2.0.
+                    This kernel is based on the Linux 4.1 released kernel.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>linux-yocto-4.4</filename>:</emphasis>
+                    The stable Yocto Project kernel to use with the Yocto
+                    Project Release 2.1.
+                    This kernel is based on the Linux 4.4 released kernel.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>linux-yocto-4.6</filename>:</emphasis>
+                    A temporary kernel that is not tied to any Yocto Project
+                    release.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>linux-yocto-4.8</filename>:</emphasis>
+                    The stable yocto Project kernel to use with the Yocto
+                    Project Release 2.2.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>linux-yocto-4.9</filename>:</emphasis>
+                    The stable Yocto Project kernel to use with the Yocto
+                    Project Release 2.3.
+                    This kernel is based on the Linux 4.9 released kernel.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>linux-yocto-4.10</filename>:</emphasis>
+                    The default stable Yocto Project kernel to use with the
+                    Yocto Project Release 2.3.
+                    This kernel is based on the Linux 4.10 released kernel.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>linux-yocto-4.12</filename>:</emphasis>
+                    The default stable Yocto Project kernel to use with the
+                    Yocto Project Release 2.4.
+                    This kernel is based on the Linux 4.12 released kernel.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>yocto-kernel-cache</filename>:</emphasis>
+                    The <filename>yocto-kernel-cache</filename> repository contains
+                    patches and configurations for the linux-yocto kernel
+                    tree.
+                    This repository is useful when working on the linux-yocto
+                    kernel.
+                    For more information on this "Advanced Kernel Metadata",
+                    see the
+                    "<link linkend='kernel-dev-advanced'>Working With Advanced Metadata (<filename>yocto-kernel-cache</filename>)</link>"
+                    Chapter.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>linux-yocto-dev</filename>:</emphasis>
+                    A development kernel based on the latest upstream release
+                    candidate available.
+                    </para></listitem>
+            </itemizedlist>
+            <note><title>Notes</title>
+                Long Term Support Initiative (LTSI) for Yocto Linux
+                kernels is as follows:
+                <itemizedlist>
+                    <listitem><para>
+                        For Yocto Project releases 1.7, 1.8, and 2.0,
+                        the LTSI kernel is
+                        <filename>linux-yocto-3.14</filename>.
+                        </para></listitem>
+                    <listitem><para>
+                        For Yocto Project releases 2.1, 2.2, and 2.3,
+                        the LTSI kernel is <filename>linux-yocto-4.1</filename>.
+                        </para></listitem>
+                    <listitem><para>
+                        For Yocto Project release 2.4, the LTSI kernel is
+                        <filename>linux-yocto-4.9</filename>.
+                        </para></listitem>
+                    <listitem><para>
+                        <filename>linux-yocto-4.4</filename> is an LTS
+                        kernel.
+                        </para></listitem>
+                </itemizedlist>
+            </note>
+        </para>
+
+        <para>
+            Once a Yocto Linux kernel is officially released, the Yocto
+            Project team goes into their next development cycle, or upward
+            revision (uprev) cycle, while still continuing maintenance on the
+            released kernel.
             It is important to note that the most sustainable and stable way
-            to include feature development upstream is through a kernel uprev process.
-            Back-porting hundreds of individual fixes and minor features from various
-            kernel versions is not sustainable and can easily compromise quality.
+            to include feature development upstream is through a kernel uprev
+            process.
+            Back-porting hundreds of individual fixes and minor features from
+            various kernel versions is not sustainable and can easily
+            compromise quality.
         </para>
+
         <para>
-            During the uprev cycle, the Yocto Project team uses an ongoing analysis of
-            kernel development, BSP support, and release timing to select the best
-            possible <filename>kernel.org</filename> version.
-            The team continually monitors community kernel
-            development to look for significant features of interest.
-            The team does consider back-porting large features if they have a significant advantage.
-            User or community demand can also trigger a back-port or creation of new
-            functionality in the Yocto Project baseline kernel during the uprev cycle.
+            During the uprev cycle, the Yocto Project team uses an ongoing
+            analysis of Linux kernel development, BSP support, and release
+            timing to select the best possible <filename>kernel.org</filename>
+            Linux kernel version on which to base subsequent Yocto Linux
+            kernel development.
+            The team continually monitors Linux community kernel development
+            to look for significant features of interest.
+            The team does consider back-porting large features if they have a
+            significant advantage.
+            User or community demand can also trigger a back-port or creation
+            of new functionality in the Yocto Project baseline kernel during
+            the uprev cycle.
         </para>
+
         <para>
-            Generally speaking, every new kernel both adds features and introduces new bugs.
-            These consequences are the basic properties of upstream kernel development and are
-            managed by the Yocto Project team's kernel strategy.
-            It is the Yocto Project team's policy to not back-port minor features to the released kernel.
-            They only consider back-porting significant technological jumps - and, that is done
-            after a complete gap analysis.
-            The reason for this policy is that back-porting any small to medium sized change
-            from an evolving kernel can easily create mismatches, incompatibilities and very
-            subtle errors.
+            Generally speaking, every new Linux kernel both adds features and
+            introduces new bugs.
+            These consequences are the basic properties of upstream
+            Linux kernel development and are managed by the Yocto Project
+            team's Yocto Linux kernel development strategy.
+            It is the Yocto Project team's policy to not back-port minor
+            features to the released Yocto Linux kernel.
+            They only consider back-porting significant technological
+            jumps &dash; and that is done after a complete gap analysis.
+            The reason for this policy is that back-porting any small to
+            medium sized change from an evolving Linux kernel can easily
+            create mismatches, incompatibilities and very subtle errors.
         </para>
+
         <para>
-            These policies result in both a stable and a cutting
-            edge kernel that mixes forward ports of existing features and significant and critical
-            new functionality.
-            Forward porting functionality in the kernels available through the Yocto Project kernel
-            can be thought of as a "micro uprev."
-            The many “micro uprevs” produce a kernel version with a mix of
-            important new mainline, non-mainline, BSP developments and feature integrations.
-            This kernel gives insight into new features and allows focused
-            amounts of testing to be done on the kernel, which prevents
-            surprises when selecting the next major uprev.
-            The quality of these cutting edge kernels is evolving and the kernels are used in leading edge
-            feature and BSP development.
+            The policies described in this section result in both a stable
+            and a cutting edge Yocto Linux kernel that mixes forward ports of
+            existing Linux kernel features and significant and critical new
+            functionality.
+            Forward porting Linux kernel functionality into the Yocto Linux
+            kernels available through the Yocto Project can be thought of as
+            a "micro uprev."
+            The many “micro uprevs” produce a Yocto Linux kernel version with
+            a mix of important new mainline, non-mainline, BSP developments
+            and feature integrations.
+            This Yocto Linux kernel gives insight into new features and
+            allows focused amounts of testing to be done on the kernel,
+            which prevents surprises when selecting the next major uprev.
+            The quality of these cutting edge Yocto Linux kernels is evolving
+            and the kernels are used in leading edge feature and BSP
+            development.
         </para>
     </section>
 
-    <section id='kernel-architecture'>
-        <title>Kernel Architecture</title>
+    <section id='yocto-linux-kernel-architecture-and-branching-strategies'>
+        <title>Yocto Linux Kernel Architecture and Branching Strategies</title>
+
         <para>
-            This section describes the architecture of the kernels available through the
-            Yocto Project and provides information
-            on the mechanisms used to achieve that architecture.
+            As mentioned earlier, a key goal of the Yocto Project is
+            to present the developer with a kernel that has a clear and
+            continuous history that is visible to the user.
+            The architecture and mechanisms used, in particular the
+            branching strategies, achieve that goal in a manner similar to
+            upstream Linux kernel development in
+            <filename>kernel.org</filename>.
         </para>
 
-        <section id='architecture-overview'>
-            <title>Overview</title>
-            <para>
-                As mentioned earlier, a key goal of the Yocto Project is to present the
-                developer with
-                a kernel that has a clear and continuous history that is visible to the user.
-                The architecture and mechanisms used achieve that goal in a manner similar to the
-                upstream <filename>kernel.org</filename>.
-            </para>
-            <para>
-                You can think of a Yocto Project kernel as consisting of a baseline Linux kernel with
-                added features logically structured on top of the baseline.
-                The features are tagged and organized by way of a branching strategy implemented by the
-                source code manager (SCM) Git.
-                For information on Git as applied to the Yocto Project, see the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#git'>Git</ulink>" section in the
-                Yocto Project Development Manual.
-            </para>
-            <para>
-                The result is that the user has the ability to see the added features and
-                the commits that make up those features.
-                In addition to being able to see added features, the user can also view the history of what
-                made up the baseline kernel.
-            </para>
-            <para>
-                The following illustration shows the conceptual Yocto Project kernel.
-            </para>
-            <para>
-                <imagedata fileref="figures/kernel-architecture-overview.png" width="6in" depth="7in" align="center" scale="100" />
-            </para>
-            <para>
-                In the illustration, the "Kernel.org Branch Point"
-                marks the specific spot (or release) from
-                which the Yocto Project kernel is created.
-                From this point "up" in the tree, features and differences are organized and tagged.
-            </para>
-            <para>
-                The "Yocto Project Baseline Kernel" contains functionality that is common to every kernel
-                type and BSP that is organized further up the tree.
-                Placing these common features in the
-                tree this way means features do not have to be duplicated along individual branches of the
-                structure.
-            </para>
-            <para>
-                From the Yocto Project Baseline Kernel, branch points represent specific functionality
-                for individual BSPs as well as real-time kernels.
-                The illustration represents this through three BSP-specific branches and a real-time
-                kernel branch.
-                Each branch represents some unique functionality for the BSP or a real-time kernel.
-            </para>
-            <para>
-                In this example structure, the real-time kernel branch has common features for all
-                real-time kernels and contains
-                more branches for individual BSP-specific real-time kernels.
-                The illustration shows three branches as an example.
-                Each branch points the way to specific, unique features for a respective real-time
-                kernel as they apply to a given BSP.
-            </para>
-            <para>
-                The resulting tree structure presents a clear path of markers (or branches) to the
-                developer that, for all practical purposes, is the kernel needed for any given set
-                of requirements.
-            </para>
-        </section>
+        <para>
+            You can think of a Yocto Linux kernel as consisting of a
+            baseline Linux kernel with added features logically structured
+            on top of the baseline.
+            The features are tagged and organized by way of a branching
+            strategy implemented by the Yocto Project team using the
+            Source Code Manager (SCM) Git.
+            <note><title>Notes</title>
+                <itemizedlist>
+                    <listitem><para>
+                        Git is the obvious SCM for meeting the Yocto Linux
+                        kernel organizational and structural goals described
+                        in this section.
+                        Not only is Git the SCM for Linux kernel development
+                        on <filename>kernel.org</filename>, but it also
+                        continues to grow in popularity and supports many
+                        different workflows, front-ends, and management
+                        techniques.
+                        </para></listitem>
+                    <listitem><para>
+                        You can find documentation on Git at
+                        <ulink url='http://git-scm.com/documentation'></ulink>.
+                        You can also get an introduction to Git as it
+                        applies to the Yocto Project in the
+                        "<ulink url='&YOCTO_DOCS_REF_URL;#git'>Git</ulink>"
+                        section in the Yocto Project Reference Manual.
+                        The latter reference provides an overview of
+                        Git and presents a minimal set of Git commands
+                        that allows you to be functional using Git.
+                        You can use as much, or as little, of what Git
+                        has to offer to accomplish what you need for your
+                        project.
+                        You do not have to be a "Git Expert" in order to
+                        use it with the Yocto Project.
+                        </para></listitem>
+                </itemizedlist>
+            </note>
+        </para>
 
-        <section id='branching-and-workflow'>
-            <title>Branching Strategy and Workflow</title>
-            <para>
-                The Yocto Project team creates kernel branches at points where functionality is
-                no longer shared and thus, needs to be isolated.
-                For example, board-specific incompatibilities would require different functionality
-                and would require a branch to separate the features.
-                Likewise, for specific kernel features, the same branching strategy is used.
-            </para>
-            <para>
-                This branching strategy results in a tree that has features organized to be specific
-                for particular functionality, single kernel types, or a subset of kernel types.
-                This strategy also results in not having to store the same feature twice
-                internally in the tree.
-                Rather, the kernel team stores the unique differences required to apply the
-                feature onto the kernel type in question.
-                <note>
-                    The Yocto Project team strives to place features in the tree such that they can be
-                    shared by all boards and kernel types where possible.
-                    However, during development cycles or when large features are merged,
-                    the team cannot always follow this practice.
-                    In those cases, the team uses isolated branches to merge features.
-                </note>
-            </para>
-            <para>
-                BSP-specific code additions are handled in a similar manner to kernel-specific additions.
-                Some BSPs only make sense given certain kernel types.
-                So, for these types, the team creates branches off the end of that kernel type for all
-                of the BSPs that are supported on that kernel type.
-                From the perspective of the tools that create the BSP branch, the BSP is really no
-                different than a feature.
-                Consequently, the same branching strategy applies to BSPs as it does to features.
-                So again, rather than store the BSP twice, the team only stores the unique
-                differences for the BSP across the supported multiple kernels.
-            </para>
-            <para>
-                While this strategy can result in a tree with a significant number of branches, it is
-                important to realize that from the developer's point of view, there is a linear
-                path that travels from the baseline <filename>kernel.org</filename>, through a select
-                group of features and ends with their BSP-specific commits.
-                In other words, the divisions of the kernel are transparent and are not relevant
-                to the developer on a day-to-day basis.
-                From the developer's perspective, this path is the "master" branch.
-                The developer does not need to be aware of the existence of any other branches at all.
-                Of course, there is value in the existence of these branches
-                in the tree, should a person decide to explore them.
-                For example, a comparison between two BSPs at either the commit level or at the line-by-line
-                code <filename>diff</filename> level is now a trivial operation.
-            </para>
-            <para>
-                Working with the kernel as a structured tree follows recognized community best practices.
-                In particular, the kernel as shipped with the product, should be
-                considered an "upstream source" and viewed as a series of
-                historical and documented modifications (commits).
-                These modifications represent the development and stabilization done
-                by the Yocto Project kernel development team.
-            </para>
-            <para>
-                Because commits only change at significant release points in the product life cycle,
-                developers can work on a branch created
-                from the last relevant commit in the shipped Yocto Project kernel.
-                As mentioned previously, the structure is transparent to the developer
-                because the kernel tree is left in this state after cloning and building the kernel.
-            </para>
-        </section>
+        <para>
+            Using Git's tagging and branching features, the Yocto Project
+            team creates kernel branches at points where functionality is
+            no longer shared and thus needs to be isolated.
+            For example, board-specific incompatibilities would require
+            different functionality and would require a branch to
+            separate the features.
+            Likewise, for specific kernel features, the same branching
+            strategy is used.
+        </para>
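+
+        <para>
+            For example, if you clone a Yocto Linux kernel repository, you
+            can see this branch structure directly with standard Git
+            commands.
+            The following is only a sketch; the exact branch names depend
+            on the kernel repository and release you clone, but they
+            generally follow a
+            <filename>&lt;kernel_type&gt;/&lt;bsp_name&gt;</filename> pattern:
+            <literallayout class='monospaced'>
+     $ git branch -a
+       remotes/origin/standard/base
+       remotes/origin/standard/&lt;bsp_name&gt;
+       remotes/origin/standard/preempt-rt/&lt;bsp_name&gt;
+        .
+        .
+        .
+            </literallayout>
+        </para>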
 
-        <section id='source-code-manager-git'>
-            <title>Source Code Manager - Git</title>
-            <para>
-                The Source Code Manager (SCM) is Git.
-                This SCM is the obvious mechanism for meeting the previously mentioned goals.
-                Not only is it the SCM for <filename>kernel.org</filename> but,
-                Git continues to grow in popularity and supports many different work flows,
-                front-ends and management techniques.
-            </para>
-            <para>
-                You can find documentation on Git at <ulink url='http://git-scm.com/documentation'></ulink>.
-                You can also get an introduction to Git as it applies to the Yocto Project in the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#git'>Git</ulink>"
-                section in the Yocto Project Development Manual.
-                These referenced sections overview Git and describe a minimal set of
-                commands that allows you to be functional using Git.
-                <note>
-                    You can use as much, or as little, of what Git has to offer to accomplish what
-                    you need for your project.
-                    You do not have to be a "Git Master" in order to use it with the Yocto Project.
-                </note>
-            </para>
-        </section>
+        <para>
+            This "tree-like" architecture results in a structure that has
+            features organized to be specific for particular functionality,
+            single kernel types, or a subset of kernel types.
+            Thus, the user has the ability to see the added features and the
+            commits that make up those features.
+            In addition to being able to see added features, the user
+            can also view the history of what made up the baseline
+            Linux kernel.
+        </para>
+
+        <para>
+            Another consequence of this strategy is that the same feature
+            never has to be stored twice internally in the tree.
+            Rather, the kernel team stores only the unique differences
+            required to apply the feature onto the kernel type in question.
+            <note>
+                The Yocto Project team strives to place features in the tree
+                such that they can be shared by all boards and kernel
+                types where possible.
+                However, during development cycles or when large features
+                are merged, the team cannot always follow this practice.
+                In those cases, the team uses isolated branches to merge
+                features.
+            </note>
+        </para>
+
+        <para>
+            BSP-specific code additions are handled in a similar manner to
+            kernel-specific additions.
+            Some BSPs only make sense given certain kernel types.
+            So, for these types, the team creates branches off the end
+            of that kernel type for all of the BSPs that are supported on
+            that kernel type.
+            From the perspective of the tools that create the BSP branch,
+            the BSP is really no different from a feature.
+            Consequently, the same branching strategy applies to BSPs as
+            it does to kernel features.
+            So again, rather than store the BSP twice, the team only
+            stores the unique differences for the BSP across the multiple
+            supported kernels.
+        </para>
+
+        <para>
+            While this strategy can result in a tree with a significant number
+            of branches, it is important to realize that from the developer's
+            point of view, there is a linear path that travels from the
+            baseline <filename>kernel.org</filename>, through a select
+            group of features and ends with their BSP-specific commits.
+            In other words, the divisions of the kernel are transparent and
+            are not relevant to the developer on a day-to-day basis.
+            From the developer's perspective, this path is the "master" branch
+            in Git terms.
+            The developer does not need to be aware of the existence of any
+            other branches at all.
+            Of course, value exists in having these branches in the tree,
+            should a person decide to explore them.
+            For example, a comparison between two BSPs at either the commit
+            level or at the line-by-line code <filename>diff</filename> level
+            is now a trivial operation.
+        </para>
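+
+        <para>
+            As a sketch of such a comparison, assuming your clone of the
+            kernel repository contains two BSP branches of interest (the
+            branch names below are placeholders), ordinary Git commands
+            reveal both the commits and the code differences between them:
+            <literallayout class='monospaced'>
+     # summary of commits present in one BSP branch but not the other
+     $ git log --oneline origin/standard/&lt;bsp_one&gt;..origin/standard/&lt;bsp_two&gt;
+
+     # full line-by-line code differences between the two BSP branches
+     $ git diff origin/standard/&lt;bsp_one&gt;..origin/standard/&lt;bsp_two&gt;
+            </literallayout>
+        </para>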
+
+        <para>
+            The following illustration shows the conceptual Yocto
+            Linux kernel.
+            <imagedata fileref="figures/kernel-architecture-overview.png" width="6in" depth="7in" align="center" scale="100" />
+        </para>
+
+        <para>
+            In the illustration, the "Kernel.org Branch Point" marks the
+            specific spot (or Linux kernel release) from which the
+            Yocto Linux kernel is created.
+            From this point forward in the tree, features and differences
+            are organized and tagged.
+        </para>
+
+        <para>
+            The "Yocto Project Baseline Kernel" contains functionality that
+            is common to every kernel type and BSP that is organized
+            further along in the tree.
+            Placing these common features in the tree this way means
+            features do not have to be duplicated along individual
+            branches of the tree structure.
+        </para>
+
+        <para>
+            From the "Yocto Project Baseline Kernel", branch points represent
+            specific functionality for individual Board Support Packages
+            (BSPs) as well as real-time kernels.
+            The illustration represents this through three BSP-specific
+            branches and a real-time kernel branch.
+            Each branch represents some unique functionality for the BSP
+            or for a real-time Yocto Linux kernel.
+        </para>
+
+        <para>
+            In this example structure, the "Real-time (rt) Kernel" branch has
+            common features for all real-time Yocto Linux kernels and
+            contains more branches for individual BSP-specific real-time
+            kernels.
+            The illustration shows three branches as an example.
+            Each branch points the way to specific, unique features for a
+            respective real-time kernel as they apply to a given BSP.
+        </para>
+
+        <para>
+            The resulting tree structure presents a clear path of markers
+            (or branches) to the developer that, for all practical
+            purposes, is the Yocto Linux kernel needed for any given set of
+            requirements.
+            <note>
+                Keep in mind the figure does not take into account all the
+                supported Yocto Linux kernels, but rather shows a single
+                generic kernel just for conceptual purposes.
+                Also keep in mind that this structure represents the Yocto
+                Project
+                <ulink url='&YOCTO_DOCS_REF_URL;#source-repositories'>Source Repositories</ulink>
+                that are either pulled from during the build or established
+                on the host development system prior to the build by either
+                cloning a particular kernel's Git repository or by
+                downloading and unpacking a tarball.
+            </note>
+        </para>
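+
+        <para>
+            If you choose to establish the kernel source tree on your host
+            yourself, cloning the appropriate repository is one option.
+            The following is an illustrative sketch only; substitute the
+            Yocto Linux kernel repository that matches your release:
+            <literallayout class='monospaced'>
+     $ git clone git://git.yoctoproject.org/linux-yocto-4.12
+            </literallayout>
+        </para>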
+
+        <para>
+            Working with the kernel as a structured tree follows recognized
+            community best practices.
+            In particular, the kernel as shipped with the product, should be
+            considered an "upstream source" and viewed as a series of
+            historical and documented modifications (commits).
+            These modifications represent the development and stabilization
+            done by the Yocto Project kernel development team.
+        </para>
+
+        <para>
+            Because commits only change at significant release points in the
+            product life cycle, developers can work on a branch created
+            from the last relevant commit in the shipped Yocto Project Linux
+            kernel.
+            As mentioned previously, the structure is transparent to the
+            developer because the kernel tree is left in this state after
+            cloning and building the kernel.
+        </para>
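+
+        <para>
+            For instance, once you have cloned and checked out a Yocto
+            Linux kernel tree, you might simply create a local branch on
+            which to do your work (the directory and branch names here are
+            arbitrary examples):
+            <literallayout class='monospaced'>
+     $ cd linux-yocto-4.12
+     $ git checkout -b my-kernel-dev
+            </literallayout>
+        </para>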
+    </section>
+
+    <section id='kernel-build-file-hierarchy'>
+        <title>Kernel Build File Hierarchy</title>
+
+        <para>
+            Upstream storage of all the available kernel source code is
+            one thing, while representing and using the code on your host
+            development system is another.
+            Conceptually, you can think of the kernel source repositories
+            as all the source files necessary for all the supported
+            Yocto Linux kernels.
+            As a developer, you are just interested in the source files
+            for the kernel on which you are working.
+            Furthermore, you need those files available on your host system.
+        </para>
+
+        <para>
+            Kernel source code is available on your host system in several
+            different ways:
+            <itemizedlist>
+                <listitem><para>
+                    <emphasis>Files Accessed While Using <filename>devtool</filename>:</emphasis>
+                    <filename>devtool</filename>, which is available with the
+                    Yocto Project, is the preferred method by which to
+                    modify the kernel.
+                    See the
+                    "<link linkend='kernel-modification-workflow'>Kernel Modification Workflow</link>"
+                    section.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Cloned Repository:</emphasis>
+                    If you are working in the kernel all the time, you probably
+                    would want to set up your own local Git repository of the
+                    Yocto Linux kernel tree.
+                    For information on how to clone a Yocto Linux kernel
+                    Git repository, see the
+                    "<link linkend='preparing-the-build-host-to-work-on-the-kernel'>Preparing the Build Host to Work on the Kernel</link>"
+                    section.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Temporary Source Files from a Build:</emphasis>
+                    If you just need to make some patches to the kernel using
+                    a traditional BitBake workflow (i.e. not using
+                    <filename>devtool</filename>), you can access temporary
+                    kernel source files that were extracted and used during
+                    a kernel build.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
+
+        <para>
+            The temporary kernel source files resulting from a build using
+            BitBake have a particular hierarchy.
+            When you build the kernel on your development system, all files
+            needed for the build are taken from the source repositories
+            pointed to by the
+            <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
+            variable and gathered in a temporary work area where they are
+            subsequently used to create the unique kernel.
+            Thus, in a sense, the process constructs a local source tree
+            specific to your kernel from which to generate the new kernel
+            image.
+        </para>
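+
+        <para>
+            If you need to locate these temporary kernel source files for a
+            particular build, one approach (a sketch, assuming your kernel
+            recipe provides <filename>virtual/kernel</filename>) is to ask
+            BitBake for the value of the source directory variable:
+            <literallayout class='monospaced'>
+     $ bitbake -e virtual/kernel | grep "^S="
+            </literallayout>
+            The reported path points into the Build Directory described
+            next.
+        </para>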
+
+        <para>
+            The following figure shows the temporary file structure
+            created on your host system when you build the kernel using
+            BitBake.
+            This
+            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
+            contains all the source files used during the build.
+            <imagedata fileref="figures/kernel-overview-2-generic.png"
+                width="6in" depth="5in" align="center" scale="100" />
+        </para>
+
+        <para>
+            Again, for additional information on the Yocto Project kernel's
+            architecture and its branching strategy, see the
+            "<link linkend='yocto-linux-kernel-architecture-and-branching-strategies'>Yocto Linux Kernel Architecture and Branching Strategies</link>"
+            section.
+            You can also reference the
+            "<link linkend='using-devtool-to-patch-the-kernel'>Using <filename>devtool</filename> to Patch the Kernel</link>"
+            and
+            "<link linkend='using-traditional-kernel-development-to-patch-the-kernel'>Using Traditional Kernel Development to Patch the Kernel</link>"
+            sections for detailed examples that modify the kernel.
+        </para>
+    </section>
+
+    <section id='determining-hardware-and-non-hardware-features-for-the-kernel-configuration-audit-phase'>
+        <title>Determining Hardware and Non-Hardware Features for the Kernel Configuration Audit Phase</title>
+
+        <para>
+            This section describes part of the kernel configuration audit
+            phase that most developers can ignore.
+            For general information on kernel configuration including
+            <filename>menuconfig</filename>, <filename>defconfig</filename>
+            files, and configuration fragments, see the
+            "<link linkend='configuring-the-kernel'>Configuring the Kernel</link>"
+            section.
+        </para>
+
+        <para>
+            During this part of the audit phase, the contents of the final
+            <filename>.config</filename> file are compared against the
+            fragments specified by the system.
+            These fragments can be system fragments, distro fragments,
+            or user-specified configuration elements.
+            Regardless of their origin, the OpenEmbedded build system
+            warns the user if a specific option is not included in the
+            final kernel configuration.
+        </para>
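+
+        <para>
+            As a brief sketch of what such a fragment contains, a
+            configuration fragment is simply a file of kernel
+            <filename>CONFIG_</filename> settings, for example:
+            <literallayout class='monospaced'>
+     CONFIG_SERIAL_8250=y
+     CONFIG_SERIAL_8250_CONSOLE=y
+            </literallayout>
+            If an option requested by a fragment such as this one does not
+            appear in the final <filename>.config</filename> file, the
+            build system can warn you about it, subject to the hardware
+            and non-hardware classification described below.
+        </para>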
+
+        <para>
+            By default, in order to not overwhelm the user with
+            configuration warnings, the system only reports missing
+            "hardware" options as they could result in a boot
+            failure or indicate that important hardware is not available.
+        </para>
+
+        <para>
+            To determine whether or not a given option is "hardware" or
+            "non-hardware", the kernel Metadata in
+            <filename>yocto-kernel-cache</filename> contains files that
+            classify individual options or groups of options as either
+            hardware or non-hardware.
+            To better show this, consider a situation where the
+            <filename>yocto-kernel-cache</filename> contains the following
+            files:
+            <literallayout class='monospaced'>
+     yocto-kernel-cache/features/drm-psb/hardware.cfg
+     yocto-kernel-cache/features/kgdb/hardware.cfg
+     yocto-kernel-cache/ktypes/base/hardware.cfg
+     yocto-kernel-cache/bsp/mti-malta32/hardware.cfg
+     yocto-kernel-cache/bsp/fsl-mpc8315e-rdb/hardware.cfg
+     yocto-kernel-cache/bsp/qemu-ppc32/hardware.cfg
+     yocto-kernel-cache/bsp/qemuarma9/hardware.cfg
+     yocto-kernel-cache/bsp/mti-malta64/hardware.cfg
+     yocto-kernel-cache/bsp/arm-versatile-926ejs/hardware.cfg
+     yocto-kernel-cache/bsp/common-pc/hardware.cfg
+     yocto-kernel-cache/bsp/common-pc-64/hardware.cfg
+     yocto-kernel-cache/features/rfkill/non-hardware.cfg
+     yocto-kernel-cache/ktypes/base/non-hardware.cfg
+     yocto-kernel-cache/features/aufs/non-hardware.kcf
+     yocto-kernel-cache/features/ocf/non-hardware.kcf
+     yocto-kernel-cache/ktypes/base/non-hardware.kcf
+     yocto-kernel-cache/ktypes/base/hardware.kcf
+     yocto-kernel-cache/bsp/qemu-ppc32/hardware.kcf
+            </literallayout>
+            The following list provides explanations for the various
+            files:
+            <itemizedlist>
+                <listitem><para>
+                    <filename>hardware.kcf</filename>:
+                    Specifies a list of kernel Kconfig files that contain
+                    hardware options only.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>non-hardware.kcf</filename>:
+                    Specifies a list of kernel Kconfig files that contain
+                    non-hardware options only.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>hardware.cfg</filename>:
+                    Specifies a list of kernel <filename>CONFIG_</filename>
+                    options that are hardware, regardless of whether or not
+                    they are within a Kconfig file specified by a hardware
+                    or non-hardware Kconfig file (i.e.
+                    <filename>hardware.kcf</filename> or
+                    <filename>non-hardware.kcf</filename>).
+                    </para></listitem>
+                <listitem><para>
+                    <filename>non-hardware.cfg</filename>:
+                    Specifies a list of kernel <filename>CONFIG_</filename>
+                    options that are not hardware, regardless of whether or
+                    not they are within a Kconfig file specified by a
+                    hardware or non-hardware Kconfig file (i.e.
+                    <filename>hardware.kcf</filename> or
+                    <filename>non-hardware.kcf</filename>).
+                    </para></listitem>
+            </itemizedlist>
+            Here is a specific example using the
+            <filename>yocto-kernel-cache/bsp/mti-malta32/hardware.cfg</filename> file:
+            <literallayout class='monospaced'>
+     CONFIG_SERIAL_8250
+     CONFIG_SERIAL_8250_CONSOLE
+     CONFIG_SERIAL_8250_NR_UARTS
+     CONFIG_SERIAL_8250_PCI
+     CONFIG_SERIAL_CORE
+     CONFIG_SERIAL_CORE_CONSOLE
+     CONFIG_VGA_ARB
+            </literallayout>
+            The kernel configuration audit automatically detects these
+            files (hence the names must be exactly the ones discussed here),
+            and uses them as inputs when generating warnings about the
+            final <filename>.config</filename> file.
+        </para>
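+
+        <para>
+            For completeness, the <filename>.kcf</filename> files list
+            kernel Kconfig files rather than individual options.
+            A hypothetical <filename>hardware.kcf</filename> might contain
+            entries such as the following (the paths shown are examples
+            only):
+            <literallayout class='monospaced'>
+     arch/x86/Kconfig
+     drivers/net/Kconfig
+     drivers/i2c/Kconfig
+            </literallayout>
+        </para>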
+
+        <para>
+            A user-specified kernel Metadata repository, or recipe space
+            feature, can use these same files to classify options that are
+            found within its <filename>.cfg</filename> files as hardware
+            or non-hardware, to prevent the OpenEmbedded build system from
+            producing an error or warning when an option is not in the
+            final <filename>.config</filename> file.
+        </para>
     </section>
 </appendix>
 <!--
diff --git a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-examples.xml b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-examples.xml
deleted file mode 100644
index 9d9aef6..0000000
--- a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-examples.xml
+++ /dev/null
@@ -1,918 +0,0 @@
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
-"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
-[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
-
-<chapter id='kernel-how-to'>
-
-<title>Working with the Yocto Project Kernel</title>
-
-
-<section id='actions-org'>
-    <title>Introduction</title>
-    <para>
-        This chapter describes how to accomplish tasks involving a kernel's tree structure.
-        The information is designed to help the developer that wants to modify the Yocto
-        Project kernel and contribute changes upstream to the Yocto Project.
-        The information covers the following:
-        <itemizedlist>
-            <listitem><para>Tree construction</para></listitem>
-            <listitem><para>Build strategies</para></listitem>
-            <listitem><para>Workflow examples</para></listitem>
-        </itemizedlist>
-    </para>
-</section>
-
-    <section id='tree-construction'>
-        <title>Tree Construction</title>
-        <para>
-            This section describes construction of the Yocto Project kernel source repositories
-            as accomplished by the Yocto Project team to create kernel repositories.
-            These kernel repositories are found under the heading "Yocto Linux Kernel" at
-            <ulink url='&YOCTO_GIT_URL;/cgit.cgi'>&YOCTO_GIT_URL;/cgit.cgi</ulink>
-            and can be shipped as part of a Yocto Project release.
-            The team creates these repositories by
-            compiling and executing the set of feature descriptions for every BSP/feature
-            in the product.
-            Those feature descriptions list all necessary patches,
-            configuration, branching, tagging and feature divisions found in a kernel.
-            Thus, the Yocto Project kernel repository (or tree) is built.
-        </para>
-        <para>
-            The existence of this tree allows you to access and clone a particular
-            Yocto Project kernel repository and use it to build images based on their configurations
-            and features.
-        </para>
-        <para>
-            You can find the files used to describe all the valid features and BSPs
-            in the Yocto Project kernel in any clone of the Yocto Project kernel source repository
-            Git tree.
-            For example, the following command clones the Yocto Project baseline kernel that
-            branched off of <filename>linux.org</filename> version 3.4:
-            <literallayout class='monospaced'>
-     $ git clone git://git.yoctoproject.org/linux-yocto-3.4
-            </literallayout>
-            For another example of how to set up a local Git repository of the Yocto Project
-            kernel files, see the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#local-kernel-files'>Yocto Project Kernel</ulink>" bulleted
-            item in the Yocto Project Development Manual.
-        </para>
-        <para>
-            Once you have cloned the kernel Git repository on your local machine, you can
-            switch to the <filename>meta</filename> branch within the repository.
-            Here is an example that assumes the local Git repository for the kernel is in
-            a top-level directory named <filename>linux-yocto-3.4</filename>:
-            <literallayout class='monospaced'>
-     $ cd ~/linux-yocto-3.4
-     $ git checkout -b meta origin/meta
-            </literallayout>
-            Once you have checked out and switched to the <filename>meta</filename> branch,
-            you can see a snapshot of all the kernel configuration and feature descriptions that are
-            used to build that particular kernel repository.
-            These descriptions are in the form of <filename>.scc</filename> files.
-        </para>
-        <para>
-            You should realize, however, that browsing your local kernel repository
-            for feature descriptions and patches is not an effective way to determine what is in a
-            particular kernel branch.
-            Instead, you should use Git directly to discover the changes in a branch.
-            Using Git is an efficient and flexible way to inspect changes to the kernel.
-            For examples showing how to use Git to inspect kernel commits, see the following sections
-            in this chapter.
-            <note>
-                Ground up reconstruction of the complete kernel tree is an action only taken by the
-                Yocto Project team during an active development cycle.
-                When you create a clone of the kernel Git repository, you are simply making it
-                efficiently available for building and development.
-            </note>
-        </para>
-        <para>
-            The following steps describe what happens when the Yocto Project Team constructs
-            the Yocto Project kernel source Git repository (or tree) found at
-            <ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink> given the
-            introduction of a new top-level kernel feature or BSP.
-            These are the actions that effectively create the tree
-            that includes the new feature, patch or BSP:
-            <orderedlist>
-                <listitem><para>A top-level kernel feature is passed to the kernel build subsystem.
-                    Normally, this feature is a BSP for a particular kernel type.</para></listitem>
-                <listitem><para>The file that describes the top-level feature is located by searching
-                    these system directories:
-                    <itemizedlist>
-                        <listitem><para>The in-tree kernel-cache directories, which are located
-                            in <filename>meta/cfg/kernel-cache</filename></para></listitem>
-                        <listitem><para>Areas pointed to by <filename>SRC_URI</filename> statements
-                            found in recipes</para></listitem>
-                    </itemizedlist>
-                    For a typical build, the target of the search is a
-                    feature description in an <filename>.scc</filename> file
-                    whose name follows this format:
-                    <literallayout class='monospaced'>
-     &lt;bsp_name&gt;-&lt;kernel_type&gt;.scc
-                    </literallayout>
-                </para></listitem>
-                <listitem><para>Once located, the feature description is either compiled into a simple script
-                    of actions, or into an existing equivalent script that is already part of the
-                    shipped kernel.</para></listitem>
-                <listitem><para>Extra features are appended to the top-level feature description.
-                    These features can come from the
-                    <ulink url='&YOCTO_DOCS_REF_URL;#var-KERNEL_FEATURES'><filename>KERNEL_FEATURES</filename></ulink>
-                    variable in recipes.</para></listitem>
-                <listitem><para>Each extra feature is located, compiled and appended to the script
-                    as described in step three.</para></listitem>
-                <listitem><para>The script is executed to produce a series of <filename>meta-*</filename>
-                    directories.
-                    These directories are descriptions of all the branches, tags, patches and configurations that
-                    need to be applied to the base Git repository to completely create the
-                    source (build) branch for the new BSP or feature.</para></listitem>
-                <listitem><para>The base repository is cloned, and the actions
-                    listed in the <filename>meta-*</filename> directories are applied to the
-                    tree.</para></listitem>
-                <listitem><para>The Git repository is left with the desired branch checked out and any
-                    required branching, patching and tagging has been performed.</para></listitem>
-            </orderedlist>
-        </para>
-        <para>
-            The kernel tree is now ready for developer consumption to be locally cloned,
-            configured, and built into a Yocto Project kernel specific to some target hardware.
-            <note><para>The generated <filename>meta-*</filename> directories add to the kernel
-                as shipped with the Yocto Project release.
-                Any add-ons and configuration data are applied to the end of an existing branch.
-                The full repository generation that is found in the
-                official Yocto Project kernel repositories at
-                <ulink url='&YOCTO_GIT_URL;/cgit.cgi'>http://git.yoctoproject.org/cgit.cgi</ulink>
-                is the combination of all supported boards and configurations.</para>
-                <para>The technique the Yocto Project team uses is flexible and allows for seamless
-                blending of an immutable history with additional patches specific to a
-                deployment.
-                Any additions to the kernel become an integrated part of the branches.</para>
-            </note>
-        </para>
-    </section>
-
-    <section id='build-strategy'>
-        <title>Build Strategy</title>
-        <para>
-            Once a local Git repository of the Yocto Project kernel exists on a development system,
-            you can consider the compilation phase of kernel development - building a kernel image.
-            Some prerequisites exist that are validated by the build process before compilation
-            starts:
-        </para>
-
-        <itemizedlist>
-            <listitem><para>The
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink> points
-                to the kernel Git repository.</para></listitem>
-            <listitem><para>A BSP build branch exists.
-                This branch has the following form:
-                <literallayout class='monospaced'>
-     &lt;kernel_type&gt;/&lt;bsp_name&gt;
-                </literallayout></para></listitem>
-        </itemizedlist>
-
-        <para>
-            The OpenEmbedded build system makes sure these conditions exist before attempting compilation.
-            Other means, however, do exist, such as as bootstrapping a BSP, see
-            the "<link linkend='workflow-examples'>Workflow Examples</link>".
-        </para>
-
-        <para>
-            Before building a kernel, the build process verifies the tree
-            and configures the kernel by processing all of the
-            configuration "fragments" specified by feature descriptions in the <filename>.scc</filename>
-            files.
-            As the features are compiled, associated kernel configuration fragments are noted
-            and recorded in the <filename>meta-*</filename> series of directories in their compilation order.
-            The fragments are migrated, pre-processed and passed to the Linux Kernel
-            Configuration subsystem (<filename>lkc</filename>) as raw input in the form
-            of a <filename>.config</filename> file.
-            The <filename>lkc</filename> uses its own internal dependency constraints to do the final
-            processing of that information and generates the final <filename>.config</filename> file
-            that is used during compilation.
-        </para>
-
-        <para>
-            Using the board's architecture and other relevant values from the board's template,
-            kernel compilation is started and a kernel image is produced.
-        </para>
-
-        <para>
-            The other thing that you notice once you configure a kernel is that
-            the build process generates a build tree that is separate from your kernel's local Git
-            source repository tree.
-            This build tree has a name that uses the following form, where
-            <filename>${MACHINE}</filename> is the metadata name of the machine (BSP) and "kernel_type" is one
-            of the Yocto Project supported kernel types (e.g. "standard"):
-        <literallayout class='monospaced'>
-     linux-${MACHINE}-&lt;kernel_type&gt;-build
-        </literallayout>
-        </para>
-
-        <para>
-            The existing support in the <filename>kernel.org</filename> tree achieves this
-            default functionality.
-        </para>
-
-        <para>
-            This behavior means that all the generated files for a particular machine or BSP are now in
-            the build tree directory.
-            The files include the final <filename>.config</filename> file, all the <filename>.o</filename>
-            files, the <filename>.a</filename> files, and so forth.
-            Since each machine or BSP has its own separate build directory in its own separate branch
-            of the Git repository, you can easily switch between different builds.
-        </para>
-    </section>
-
-    <section id='workflow-examples'>
-        <title>Workflow Examples</title>
-
-        <para>
-            As previously noted, the Yocto Project kernel has built-in Git integration.
-            However, these utilities are not the only way to work with the kernel repository.
-            The Yocto Project has not made changes to Git or to other tools that
-            would invalidate alternate workflows.
-            Additionally, the way the kernel repository is constructed results in using
-            only core Git functionality, thus allowing any number of tools or front ends to use the
-            resulting tree.
-        </para>
-
-        <para>
-            This section contains several workflow examples.
-            Many of the examples use Git commands.
-            You can find Git documentation at
-            <ulink url='http://git-scm.com/documentation'></ulink>.
-            You can find a simple overview of using Git with the Yocto Project in the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#git'>Git</ulink>"
-            section of the Yocto Project Development Manual.
-        </para>
-
-        <section id='change-inspection-kernel-changes-commits'>
-            <title>Change Inspection: Changes/Commits</title>
-
-            <para>
-                A common question when working with a kernel is:
-                "What changes have been applied to this tree?"
-            </para>
-
-            <para>
-                In projects that have a collection of directories that
-                contain patches to the kernel, it is possible to inspect or "grep" the contents
-                of the directories to get a general feel for the changes.
-                This sort of patch inspection is not an efficient way to determine what has been
-                done to the kernel.
-                The reason it is inefficient is because there are many optional patches that are
-                selected based on the kernel type and the feature description.
-                Additionally, patches could exist in directories that are not included in the search.
-            </para>
-
-            <para>
-                A more efficient way to determine what has changed in the branch is to use
-                Git and inspect or search the kernel tree.
-                This method gives you a full view of not only the source code modifications,
-                but also provides the reasons for the changes.
-            </para>
-
-            <section id='what-changed-in-a-kernel'>
-                <title>What Changed in a Kernel?</title>
-
-                <para>
-                    Following are a few examples that show how to use Git commands to examine changes.
-                    Because Git repositories in the Yocto Project do not break existing Git
-                    functionality, and because there exists many permutations of these types of
-                    Git commands, many methods exist by which you can discover changes.
-                    <note>
-                        In the following examples, unless you provide a commit range,
-                        <filename>kernel.org</filename> history is blended with Yocto Project
-                        kernel changes.
-                        You can form ranges by using branch names from the kernel tree as the
-                        upper and lower commit markers with the Git commands.
-                        You can see the branch names through the web interface to the
-                        Yocto Project source repositories at
-                        <ulink url='http://git.yoctoproject.org/cgit.cgi'></ulink>.
-                        For example, the branch names for the <filename>linux-yocto-3.4</filename>
-                        kernel repository can be seen at
-                        <ulink url='http://git.yoctoproject.org/cgit.cgi/linux-yocto-3.4/refs/heads'></ulink>.
-                    </note>
-                    To see a full range of the changes, use the
-                    <filename>git whatchanged</filename> command and specify a commit range
-                    for the branch (<filename>&lt;commit&gt;..&lt;commit&gt;</filename>).
-                </para>
-
-                <para>
-                    Here is an example that looks at what has changed in the
-                    <filename>emenlow</filename> branch of the
-                    <filename>linux-yocto-3.4</filename> kernel.
-                    The lower commit range is the commit associated with the
-                    <filename>standard/base</filename> branch, while
-                    the upper commit range is the commit associated with the
-                    <filename>standard/emenlow</filename> branch.
-                    <literallayout class='monospaced'>
-     $ git whatchanged origin/standard/base..origin/standard/emenlow
-                    </literallayout>
-                </para>
-
-                <para>
-                    To see a summary of changes use the <filename>git log</filename> command.
-                    Here is an example using the same branches:
-                    <literallayout class='monospaced'>
-     $ git log --oneline origin/standard/base..origin/standard/emenlow
-                    </literallayout>
-                    The <filename>git log</filename> output might be more useful than
-                    the <filename>git whatchanged</filename> as you get
-                    a short, one-line summary of each change and not the entire commit.
-                </para>
-
-                <para>
-                    If you want to see code differences associated with all the changes, use
-                    the <filename>git diff</filename> command.
-                    Here is an example:
-                    <literallayout class='monospaced'>
-     $ git diff origin/standard/base..origin/standard/emenlow
-                    </literallayout>
-                </para>
-
-                <para>
-                    You can see the commit log messages and the text differences using the
-                    <filename>git show</filename> command:
-                    Here is an example:
-                    <literallayout class='monospaced'>
-     $ git show origin/standard/base..origin/standard/emenlow
-                    </literallayout>
-                </para>
-
-                <para>
-                    You can create individual patches for each change by using the
-                    <filename>git format-patch</filename> command.
-                    Here is an example that that creates patch files for each commit and
-                    places them in your <filename>Documents</filename> directory:
-                    <literallayout class='monospaced'>
-     $ git format-patch -o $HOME/Documents origin/standard/base..origin/standard/emenlow
-                    </literallayout>
-                </para>
-            </section>
-
-            <section id='show-a-particular-feature-or-branch-change'>
-                <title>Show a Particular Feature or Branch Change</title>
-
-                <para>
-                    Developers use tags in the Yocto Project kernel tree to divide changes for significant
-                    features or branches.
-                    Once you know a particular tag, you can use Git commands
-                    to show changes associated with the tag and find the branches that contain
-                    the feature.
-                    <note>
-                        Because BSP branch, <filename>kernel.org</filename>, and feature tags are all
-                        present, there could be many tags.
-                    </note>
-                    The <filename>git show &lt;tag&gt;</filename> command shows changes that are tagged by
-                    a feature.
-                    Here is an example that shows changes tagged by the <filename>systemtap</filename>
-                    feature:
-                    <literallayout class='monospaced'>
-     $ git show systemtap
-                    </literallayout>
-                    You can use the <filename>git branch --contains &lt;tag&gt;</filename> command
-                    to show the branches that contain a particular feature.
-                    This command shows the branches that contain the <filename>systemtap</filename>
-                    feature:
-                    <literallayout class='monospaced'>
-     $ git branch --contains systemtap
-                    </literallayout>
-                </para>
-
-                <para>
-                    You can use many other comparisons to isolate BSP and kernel changes.
-                    For example, you can compare against <filename>kernel.org</filename> tags
-                    such as the <filename>v3.4</filename> tag.
-                </para>
-            </section>
-        </section>
-
-        <section id='development-saving-kernel-modifications'>
-            <title>Development: Saving Kernel Modifications</title>
-
-            <para>
-                Another common operation is to build a BSP supplied by the Yocto Project, make some
-                changes, rebuild, and then test.
-                Those local changes often need to be exported, shared or otherwise maintained.
-            </para>
-
-            <para>
-                Since the Yocto Project kernel source tree is backed by Git, this activity is
-                much easier as compared to with previous releases.
-                Because Git tracks file modifications, additions and deletions, it is easy
-                to modify the code and later realize that you need to save the changes.
-                It is also easy to determine what has changed.
-                This method also provides many tools to commit, undo and export those modifications.
-            </para>
-
-            <para>
-                This section and its sub-sections, describe general application of Git's
-                <filename>push</filename> and <filename>pull</filename> commands, which are used to
-                get your changes upstream or source your code from an upstream repository.
-                The Yocto Project provides scripts that help you work in a collaborative development
-                environment.
-                For information on these scripts, see the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#pushing-a-change-upstream'>Using Scripts to Push a Change
-                Upstream and Request a Pull</ulink>" and
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#submitting-a-patch'>Using Email to Submit a Patch</ulink>"
-                sections in the Yocto Project Development Manual.
-            </para>
-
-            <para>
-                There are many ways to save kernel modifications.
-                The technique employed
-                depends on the destination for the patches:
-
-                <itemizedlist>
-                    <listitem><para>Bulk storage</para></listitem>
-                    <listitem><para>Internal sharing either through patches or by using Git</para></listitem>
-                    <listitem><para>External submissions</para></listitem>
-                    <listitem><para>Exporting for integration into another Source Code
-                        Manager (SCM)</para></listitem>
-                </itemizedlist>
-            </para>
-
-            <para>
-                Because of the following list of issues, the destination of the patches also influences
-                the method for gathering them:
-
-                <itemizedlist>
-                    <listitem><para>Bisectability</para></listitem>
-                    <listitem><para>Commit headers</para></listitem>
-                    <listitem><para>Division of subsystems for separate submission or review</para></listitem>
-                </itemizedlist>
-            </para>
-
-            <section id='bulk-export'>
-                <title>Bulk Export</title>
-
-                <para>
-                    This section describes how you can "bulk" export changes that have not
-                    been separated or divided.
-                    This situation works well when you are simply storing patches outside of the kernel
-                    source repository, either permanently or temporarily, and you are not committing
-                    incremental changes during development.
-                    <note>
-                        This technique is not appropriate for full integration of upstream submission
-                        because changes are not properly divided and do not provide an avenue for per-change
-                        commit messages.
-                        Therefore, this example assumes that changes have not been committed incrementally
-                        during development and that you simply must gather and export them.
-                    </note>
-                    <literallayout class='monospaced'>
-     # bulk export of ALL modifications without separation or division
-     # of the changes
-
-     $ git add .
-     $ git commit -s -a -m &lt;msg&gt;
-        or
-     $ git commit -s -a # and interact with $EDITOR
-                    </literallayout>
-                </para>
-
-                <para>
-                    The previous operations capture all the local changes in the project source
-                    tree in a single Git commit.
-                    And, that commit is also stored in the project's source tree.
-                </para>
-
-                <para>
-                    Once the changes are exported, you can restore them manually using a template
-                    or through integration with the <filename>default_kernel</filename>.
-                </para>
-
-            </section>
-
-            <section id='incremental-planned-sharing'>
-                <title>Incremental/Planned Sharing</title>
-
-                <para>
-                    This section describes how to save modifications when you are making incremental
-                    commits or practicing planned sharing.
-                    The examples in this section assume that you have incrementally committed
-                    changes to the tree during development and now need to export them.
-                    The sections that follow
-                    describe how you can export your changes internally through either patches or by
-                    using Git commands.
-                </para>
-
-                <para>
-                    During development, the following commands are of interest.
-                    For full Git documentation, refer to the Git documentation at
-                    <ulink url='http://github.com'></ulink>.
-
-                    <literallayout class='monospaced'>
-     # edit a file
-     $ vi &lt;path&gt;/file
-     # stage the change
-     $ git add &lt;path&gt;/file
-     # commit the change
-     $ git commit -s
-     # remove a file
-     $ git rm &lt;path&gt;/file
-     # commit the change
-     $ git commit -s
-
-     ... etc.
-                    </literallayout>
-                </para>
-
-                <para>
-                    Distributed development with Git is possible when you use a universally
-                    agreed-upon unique commit identifier (set by the creator of the commit) that maps to a
-                    specific change set with a specific parent.
-                    This identifier is created for you when
-                    you create a commit, and is re-created when you amend, alter or re-apply
-                    a commit.
-                    If you are working in isolation, these identifiers are of little concern.
-                    However, if you
-                    intend to share your tree with normal Git <filename>push</filename> and
-                    <filename>pull</filename> operations for
-                    distributed development, you should consider the ramifications of changing a
-                    commit that you have already shared with others.
-                </para>
-
-                <para>
-                    Assuming that the changes have not been pushed upstream, or pulled into
-                    another repository, you can update both the commit content and commit messages
-                    associated with development by using the following commands:
-
-                    <literallayout class='monospaced'>
-     $ git add &lt;path&gt;/file
-     $ git commit --amend
-     $ git rebase
-        or
-     $ git rebase -i
-                    </literallayout>
-                </para>
-
-                <para>
-                    Again, assuming that the changes have not been pushed upstream, and that
-                    no pending works-in-progress exist (use <filename>git status</filename> to check), then
-                    you can revert (undo) commits by using the following commands:
-
-                    <literallayout class='monospaced'>
-     # remove the commit, update working tree and remove all
-     # traces of the change
-     $ git reset --hard HEAD^
-     # remove the commit, but leave the files changed and staged for re-commit
-     $ git reset --soft HEAD^
-     # remove the commit, leave file change, but not staged for commit
-     $ git reset --mixed HEAD^
-                    </literallayout>
-                </para>
-
-                <para>
-                    You can create branches, "cherry-pick" changes, or perform any number of Git
-                    operations until the commits are in good order for pushing upstream
-                    or for pull requests.
-                    After a <filename>push</filename> or <filename>pull</filename> command,
-                    commits are normally considered
-                    "permanent" and you should not modify them.
-                    If the commits need to be changed, you can incrementally do so with new commits.
-                    These practices follow the standard Git workflow and the <filename>kernel.org</filename> best
-                    practices, both of which are recommended.
-                    <note>
-                        It is recommended to tag or branch before adding changes to a Yocto Project
-                        BSP or before creating a new one.
-                        The reason for this recommendation is that the branch or tag provides a
-                        reference point that facilitates locating and exporting local changes.
-                    </note>
-                </para>
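-
-                <para>
-                    A minimal, hypothetical example of creating such a reference point
-                    before starting local changes follows; the tag and branch names
-                    used here are only placeholders:
-                    <literallayout class='monospaced'>
-     # tag the current state before local BSP work begins
-     $ git tag before-my-bsp-changes
-        or
-     # create a working branch for the local BSP work
-     $ git checkout -b my-bsp-changes
-                    </literallayout>
-                </para>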
-
-                <section id='export-internally-via-patches'>
-                    <title>Exporting Changes Internally by Using Patches</title>
-
-                    <para>
-                        This section describes how you can extract committed changes from a working directory
-                        by exporting them as patches.
-                        Once the changes have been extracted, you can use the patches for upstream submission,
-                        place them in a Yocto Project template for automatic kernel patching,
-                        or apply them in many other common uses.
-                    </para>
-
-                    <para>
-                        This example shows how to create a directory with sequentially numbered patches.
-                        Once the directory is created, you can apply the patches to a repository using the
-                        <filename>git am</filename> command, which reproduces the original commits and all
-                        the related information such as author, date, commit log, and so forth.
-                        <note>
-                            New commit identifiers (IDs) are generated upon re-application.
-                            This reflects that each commit is now applied to an underlying commit
-                            with a different ID.
-                        </note>
-                        <literallayout class='monospaced'>
-     # &lt;first-commit&gt; can be a tag if one was created before development
-     # began. It can also be the parent branch if a branch was created
-     # before development began.
-
-     $ git format-patch -o &lt;dir&gt; &lt;first commit&gt;..&lt;last commit&gt;
-                        </literallayout>
-                    </para>
-
-                    <para>
-                        In other words:
-                        <literallayout class='monospaced'>
-     # Identify commits of interest.
-
-     # If the tree was tagged before development
-     $ git format-patch -o &lt;save dir&gt; &lt;tag&gt;
-
-     # If no tags are available
-     $ git format-patch -o &lt;save dir&gt; HEAD^  # last commit
-     $ git format-patch -o &lt;save dir&gt; HEAD^^ # last 2 commits
-     $ git whatchanged # identify last commit
-     $ git format-patch -o &lt;save dir&gt; &lt;commit id&gt;
-     $ git format-patch -o &lt;save dir&gt; &lt;rev-list&gt;
-                        </literallayout>
-                    </para>
-                </section>
-
-                <section id='export-internally-via-git'>
-                    <title>Exporting Changes Internally by Using Git</title>
-
-                    <para>
-                        This section describes how you can export changes from a working directory
-                        by pushing the changes into a master repository or by making a pull request.
-                        Once you have pushed the changes to the master repository, you can then
-                        pull those same changes into a new kernel build at a later time.
-                    </para>
-
-                    <para>
-                        Use this command form to push the changes:
-                        <literallayout class='monospaced'>
-     $ git push ssh://&lt;master_server&gt;/&lt;path_to_repo&gt; \
-        &lt;local_branch&gt;:&lt;remote_branch&gt;
-                        </literallayout>
-                    </para>
-
-                    <para>
-                        For example, the following command pushes the changes from your local branch
-                        <filename>yocto/standard/common-pc/base</filename> to the remote branch with the same name
-                        in the master repository <filename>//git.mycompany.com/pub/git/kernel-3.4</filename>.
-                        <literallayout class='monospaced'>
-     $ git push ssh://git.mycompany.com/pub/git/kernel-3.4 \
-        yocto/standard/common-pc/base:yocto/standard/common-pc/base
-                        </literallayout>
-                    </para>
-
-                    <para>
-                        A pull request entails using the <filename>git request-pull</filename> command to compose
-                        an email to the
-                        maintainer requesting that a branch be pulled into the master repository.
-                        See <ulink url='http://github.com/guides/pull-requests'></ulink> for an example.
-                        <note>
-                            Other commands such as <filename>git stash</filename> or branching can also be used to save
-                            changes, but are not covered in this document.
-                        </note>
-                    </para>
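-
-                    <para>
-                        A minimal sketch of composing such a request, reusing the example
-                        repository above and assuming development began at a tag named
-                        <filename>before-my-changes</filename>, might look like the following:
-                        <literallayout class='monospaced'>
-     $ git request-pull before-my-changes \
-        ssh://git.mycompany.com/pub/git/kernel-3.4 \
-        yocto/standard/common-pc/base
-                        </literallayout>
-                    </para>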
-                </section>
-            </section>
-
-            <section id='export-for-external-upstream-submission'>
-                <title>Exporting Changes for External (Upstream) Submission</title>
-
-                <para>
-                    This section describes how to export changes for external upstream submission.
-                    If the patch series is large or the maintainer prefers to pull
-                    changes, you can submit these changes by using a pull request.
-                    However, it is common to send patches as an email series.
-                    This method allows easy review and integration of the changes.
-                    <note>
-                        Before sending patches for review, be sure you understand the
-                        community standards for submitting and documenting changes, and follow the community's best practices.
-                        For example, kernel patches should follow standards such as:
-                        <itemizedlist>
-                            <listitem><para>
-                                <ulink url='http://linux.yyz.us/patch-format.html'></ulink></para></listitem>
-                            <listitem><para>Documentation/SubmittingPatches (in any Linux
-                                kernel source tree)</para></listitem>
-                        </itemizedlist>
-                    </note>
-                </para>
-
-                <para>
-                    The messages used to commit changes are a large part of these standards.
-                    Consequently, be sure that the headers for each commit have the required information.
-                    For information on how to follow the Yocto Project commit message standards, see the
-                    "<ulink url='&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change'>How to Submit a
-                    Change</ulink>" section in the Yocto Project Development Manual.
-                </para>
-
-                <para>
-                    If the initial commits were not properly documented or do not meet those standards,
-                    you can rebase by using the <filename>git rebase -i</filename> command to
-                    manipulate the commits and
-                    get them into the required format.
-                    Other techniques such as branching and cherry-picking commits are also viable options.
-                </para>
-
-                <para>
-                    Once you complete the commits, you can generate the email that sends the patches
-                    to the maintainer(s) or lists that review and integrate changes.
-                    The command <filename>git send-email</filename> is commonly used to ensure
-                    that patches are properly
-                    formatted for easy application and to avoid mailer-induced patch damage.
-                </para>
-
-                <para>
-                    The following is an example of dumping patches for external submission:
-                    <literallayout class='monospaced'>
-     # dump the last 4 commits
-     $ git format-patch --thread -n -o ~/rr/ HEAD^^^^
-     $ git send-email --compose --subject '[RFC 0/N] &lt;patch series summary&gt;' \
-      --to foo@yoctoproject.org --to bar@yoctoproject.org \
-      --cc list@yoctoproject.org  ~/rr
-     # the editor is invoked for the 0/N patch, and when complete the entire
-     # series is sent via email for review
-                    </literallayout>
-                </para>
-            </section>
-
-            <section id='export-for-import-into-other-scm'>
-                <title>Exporting Changes for Import into Another SCM</title>
-
-                <para>
-                    When you want to export changes for import into another
-                    Source Code Manager (SCM), you can use any of the previously discussed
-                    techniques.
-                    However, if the patches are manually applied to a secondary tree and then
-                    that tree is checked into the SCM, you can lose change information such as
-                    commit logs.
-                    This process is not recommended.
-                </para>
-
-                <para>
-                    Many SCMs can directly import Git commits, or can translate Git patches so that
-                    information is not lost.
-                    Those facilities are SCM-dependent and you should use them whenever possible.
-                </para>
-            </section>
-        </section>
-
-        <section id='scm-working-with-the-yocto-project-kernel-in-another-scm'>
-            <title>Working with the Yocto Project Kernel in Another SCM</title>
-
-            <para>
-                This section describes kernel development in an SCM other than Git,
-                which is not the same as the previously described process of exporting changes to another SCM.
-                For this scenario, you use the OpenEmbedded build system to
-                develop the kernel in a different SCM.
-                The following must be true for you to accomplish this:
-                <itemizedlist>
-                    <listitem><para>The delivered Yocto Project kernel must be exported into the second
-                        SCM.</para></listitem>
-                    <listitem><para>Development must be exported from that secondary SCM into a
-                        format that can be used by the OpenEmbedded build system.</para></listitem>
-                </itemizedlist>
-            </para>
-
-            <section id='exporting-delivered-kernel-to-scm'>
-                <title>Exporting the Delivered Kernel to the SCM</title>
-
-                <para>
-                    Depending on the SCM, it might be possible to export the entire Yocto Project
-                    kernel Git repository, branches and all, into a new environment.
-                    This method is preferred because it offers the most flexibility and the best potential for maintaining
-                    the metadata associated with each commit.
-                </para>
-
-                <para>
-                    When a direct import mechanism is not available, it is still possible to
-                    export a branch (or series of branches) and check them into a new repository.
-                </para>
-
-                <para>
-                    The following commands illustrate some of the steps you could use to
-                    import the <filename>yocto/standard/common-pc/base</filename>
-                    kernel into a secondary SCM:
-                    <literallayout class='monospaced'>
-     $ git checkout yocto/standard/common-pc/base
-     $ cd .. ; echo linux/.git &gt; .cvsignore
-     $ cvs import -m "initial import" linux MY_COMPANY start
-                    </literallayout>
-                </para>
-
-                <para>
-                    You could now relocate the CVS repository and use it in a centralized manner.
-                </para>
-
-                <para>
-                    The following commands illustrate how you can condense and merge two BSPs into a
-                    second SCM:
-                    <literallayout class='monospaced'>
-     $ git checkout yocto/standard/common-pc/base
-     $ git merge yocto/standard/common-pc-64/base
-     # resolve any conflicts and commit them
-     $ cd .. ; echo linux/.git &gt; .cvsignore
-     $ cvs import -m "initial import" linux MY_COMPANY start
-                    </literallayout>
-                </para>
-            </section>
-
-            <section id='importing-changes-for-build'>
-                <title>Importing Changes for the Build</title>
-
-                <para>
-                    Once development has reached a suitable point in the second development
-                    environment, you need to export the changes as patches.
-                    Once exported, reference the patches in a recipe so that they are
-                    automatically applied to the kernel during patching.
-                </para>
-            </section>
-        </section>
-
-        <section id='bsp-creating'>
-            <title>Creating a BSP Based on an Existing Similar BSP</title>
-
-            <para>
-                This section provides an overview of the process of creating a BSP based on an
-                existing similar BSP.
-                The information is introductory in nature and does not provide step-by-step examples.
-                For detailed information on how to create a new BSP, see
-                the "<ulink url='&YOCTO_DOCS_BSP_URL;#creating-a-new-bsp-layer-using-the-yocto-bsp-script'>Creating a New BSP Layer Using the yocto-bsp Script</ulink>" section in the
-                Yocto Project Board Support Package (BSP) Developer's Guide, or see the
-                <ulink url='&YOCTO_WIKI_URL;/wiki/Transcript:_creating_one_generic_Atom_BSP_from_another'>Transcript:_creating_one_generic_Atom_BSP_from_another</ulink>
-                wiki page.
-            </para>
-
-            <para>
-                The basic steps you need to follow are:
-                <orderedlist>
-                    <listitem><para><emphasis>Make sure you have set up a local Source Directory:</emphasis>
-                        You must create a local
-                        <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
-                        by either creating a Git repository (recommended) or
-                        extracting a Yocto Project release tarball.</para></listitem>
-                    <listitem><para><emphasis>Choose an existing BSP available with the Yocto Project:</emphasis>
-                        Try to map your board features as closely as possible to the features of a BSP that
-                        already exists and is supported in the Yocto Project.
-                        Starting with something as close as possible to your board makes developing
-                        your BSP easier.
-                        You can find all the BSPs that are supported and ship with the Yocto Project
-                        on the Yocto Project's Download page at
-                        <ulink url='&YOCTO_HOME_URL;/download'></ulink>.</para></listitem>
-                    <listitem><para><emphasis>Be sure you have the Base BSP:</emphasis>
-                        You need to either have a local Git repository of the base BSP set up or
-                        have downloaded and extracted the files from a release BSP tarball.
-                        Either method gives you access to the BSP source files.</para></listitem>
-                    <listitem><para><emphasis>Make a copy of the existing BSP, thus isolating your new
-                        BSP work:</emphasis>
-                        Copying the existing BSP file structure gives you a new area in which to work.</para></listitem>
-                    <listitem><para><emphasis>Make configuration and recipe changes to your new BSP:</emphasis>
-                        Configuration changes involve the files in the BSP's <filename>conf</filename>
-                        directory.
-                        Changes include creating a machine-specific configuration file and editing the
-                        <filename>layer.conf</filename> file.
-                        The configuration changes identify the kernel you will be using.
-                        Recipe changes include removing, modifying, or adding new recipe files that
-                        instruct the build process on what features to include in the image.</para></listitem>
-                    <listitem><para><emphasis>Prepare for the build:</emphasis>
-                        Before you actually initiate the build, you need to set up the build environment
-                        by sourcing the environment initialization script.
-                        After setting up the environment, you need to make some build configuration
-                        changes to the <filename>local.conf</filename> and <filename>bblayers.conf</filename>
-                        files.</para></listitem>
-                    <listitem><para><emphasis>Build the image:</emphasis>
-                        The OpenEmbedded build system uses BitBake to create the image.
-                        You need to decide on the type of image you are going to build (e.g. minimal, base,
-                        core, sato, and so forth) and then start the build using the <filename>bitbake</filename>
-                        command. A brief example follows this list.</para></listitem>
-                </orderedlist>
-            </para>
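-
-            <para>
-                As a minimal, hypothetical example of the last two steps, the following
-                commands set up a build directory and build a small image; the build
-                directory and image names used here are only placeholders:
-                <literallayout class='monospaced'>
-     $ source oe-init-build-env build
-     # edit conf/local.conf and conf/bblayers.conf to select your machine
-     # and to add your new BSP layer
-     $ bitbake core-image-minimal
-                </literallayout>
-            </para>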
-        </section>
-
-        <section id='tip-dirty-string'>
-            <title>"-dirty" String</title>
-
-            <para>
-                If kernel images are being built with "-dirty" on the end of the version
-                string, this simply means that modifications in the source
-                directory have not been committed.
-                <literallayout class='monospaced'>
-     $ git status
-                </literallayout>
-            </para>
-
-            <para>
-                You can use the above Git command to report modified, removed, or added files.
-                You should commit those changes to the tree regardless of whether they will be saved,
-                exported, or used.
-                Once you commit the changes, you need to rebuild the kernel.
-            </para>
-
-            <para>
-                To force the pickup and commit of all such pending changes, enter the following:
-                <literallayout class='monospaced'>
-     $ git add .
-     $ git commit -s -a -m "getting rid of -dirty"
-                </literallayout>
-            </para>
-
-            <para>
-                Next, rebuild the kernel.
-            </para>
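-
-            <para>
-                A minimal sketch of the rebuild, assuming the kernel provided for
-                your machine, is the following:
-                <literallayout class='monospaced'>
-     # force the compile task so the new commit is picked up, then rebuild
-     $ bitbake -f -c compile virtual/kernel
-     $ bitbake virtual/kernel
-                </literallayout>
-            </para>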
-        </section>
-    </section>
-</chapter>
-<!--
-vim: expandtab tw=80 ts=4
--->
diff --git a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-faq.xml b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-faq.xml
index 9e0517d..c3a2046 100644
--- a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-faq.xml
+++ b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-faq.xml
@@ -36,7 +36,8 @@
                 </question>
                 <answer>
                     <para>
-                        Refer to the "<link linkend='generating-configuration-files'>Generating Configuration Files</link>"
+                        Refer to the
+                        "<link linkend='creating-config-fragments'>Creating Configuration Fragments</link>"
                         section for information.
                     </para>
                 </answer>
@@ -73,8 +74,9 @@
                         include "kernel-image".</para>
                         <para>See the
                         "<ulink url='&YOCTO_DOCS_DEV_URL;#using-bbappend-files'>Using .bbappend Files in Your Layer</ulink>"
-                        section in the Yocto Project Development Manual for information on
-                        how to use an append file to override metadata.
+                        section in the Yocto Project Development Tasks Manual
+                        for information on how to use an append file to
+                        override metadata.
                     </para>
                 </answer>
             </qandaentry>
diff --git a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-intro.xml b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-intro.xml
index 263e500..dba4549 100644
--- a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-intro.xml
+++ b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-intro.xml
@@ -5,144 +5,263 @@
 <chapter id='kernel-dev-intro'>
 <title>Introduction</title>
 
-<!--
-<para>
-    <emphasis>AR - Darren Hart:</emphasis>  See if the concepts in these
-    three bullets are adequately covered in somewhere in this manual:
-    <itemizedlist>
-        <listitem><para>Do we convey that our kernel Git repositories
-            have a clear and continuous history, similar to the way the
-            kernel Git repositories for <filename>kernel.org</filename>
-            do.
-            </para></listitem>
-        <listitem><para>Does the manual note that Yocto Project delivers
-            a key set of supported kernel types, where
-            each type is tailored to meet a specific use (e.g. networking,
-            consumer, devices, and so forth).</para></listitem>
-        <listitem><para>Do we convey that the Yocto Project uses a
-            Git branching strategy that, from a
-            developer's point of view, results in a linear path from the
-            baseline kernel.org, through a select group of features and
-            ends with their BSP-specific commits.</para></listitem>
-    </itemizedlist>
-</para>
--->
+<section id='kernel-dev-overview'>
+    <title>Overview</title>
 
-    <section id='kernel-dev-overview'>
-        <title>Overview</title>
+    <para>
+        Regardless of how you intend to make use of the Yocto Project,
+        chances are you will work with the Linux kernel.
+        This manual describes how to set up your build host to support
+        kernel development, introduces the kernel development process,
+        provides background information on the Yocto Linux kernel
+        <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink>,
+        describes common tasks you can perform using the kernel tools,
+        shows you how to use the kernel Metadata needed to work with
+        the kernel inside the Yocto Project, and provides insight into how
+        the Yocto Project team develops and maintains Yocto Linux kernel
+        Git repositories and Metadata.
+   </para>
 
-        <para>
-            Regardless of how you intend to make use of the Yocto Project,
-            chances are you will work with the Linux kernel.
-            This manual provides background information on the Yocto Linux kernel
-            <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>,
-            describes common tasks you can perform using the kernel tools,
-            and shows you how to use the kernel Metadata needed to work with
-            the kernel inside the Yocto Project.
-       </para>
+   <para>
+        Each Yocto Project release has a set of Yocto Linux kernel recipes,
+        whose Git repositories you can view in the Yocto
+        <ulink url='&YOCTO_GIT_URL;'>Source Repositories</ulink> under
+        the "Yocto Linux Kernel" heading.
+        New recipes for the release track the latest Linux kernel
+        upstream developments from
+        <ulink url='http://www.kernel.org'></ulink> and introduce
+        newly-supported platforms.
+        Previous recipes in the release are refreshed and supported for at
+        least one additional Yocto Project release.
+        As they align, these previous releases are updated to include the
+        latest from the Long Term Support Initiative (LTSI) project.
+        You can learn more about Yocto Linux kernels and LTSI in the
+        "<link linkend='kernel-big-picture'>Yocto Project Kernel Development and Maintenance</link>"
+        section.
+    </para>
 
-       <para>
-            Each Yocto Project release has a set of linux-yocto recipes, whose
-            Git repositories you can view in the Yocto
-            <ulink url='&YOCTO_GIT_URL;'>Source Repositories</ulink> under
-            the "Yocto Linux Kernel" heading.
-            New recipes for the release track the latest upstream developments
-            and introduce newly-supported platforms.
-            Previous recipes in the release are refreshed and supported for at
-            least one additional release.
-            As they align, these previous releases are updated to include the
-            latest from the
-            <ulink url='&YOCTO_HOME_URL;/organization/long-term-support-initiative-ltsi'>Long Term Support Initiative</ulink>
-            (LTSI) project.
-            Also included is a linux-yocto development recipe
-            (<filename>linux-yocto-dev.bb</filename>) should you want to work
-            with the very latest in upstream Linux kernel development and
-            kernel Metadata development.
-        </para>
+    <para>
+        Also included is a Yocto Linux kernel development recipe
+        (<filename>linux-yocto-dev.bb</filename>) should you want to work
+        with the very latest in upstream Yocto Linux kernel development and
+        kernel Metadata development.
+        <note>
+            For more on Yocto Linux kernels, see the
+            "<link linkend='kernel-big-picture'>Yocto Project Kernel Development and Maintenance</link>"
+            section.
+        </note>
+    </para>
 
-        <para>
-            The Yocto Project also provides a powerful set of kernel
-            tools for managing Linux kernel sources and configuration data.
-            You can use these tools to make a single configuration change,
-            apply multiple patches, or work with your own kernel sources.
-        </para>
+    <para>
+        The Yocto Project also provides a powerful set of kernel
+        tools for managing Yocto Linux kernel sources and configuration data.
+        You can use these tools to make a single configuration change,
+        apply multiple patches, or work with your own kernel sources.
+    </para>
 
-        <para>
-            In particular, the kernel tools allow you to generate configuration
-            fragments that specify only what you must, and nothing more.
-            Configuration fragments only need to contain the highest level
-            visible <filename>CONFIG</filename> options as presented by the Linux
-            kernel <filename>menuconfig</filename> system.
-            Contrast this against a complete Linux kernel
-            <filename>.config</filename>, which includes all the automatically
-            selected <filename>CONFIG</filename> options.
-            This efficiency reduces your maintenance effort and allows you
-            to further separate your configuration in ways that make sense for
-            your project.
-            A common split separates policy and hardware.
-            For example, all your kernels might support
-            the <filename>proc</filename> and <filename>sys</filename> filesystems,
-            but only specific boards require sound, USB, or specific drivers.
-            Specifying these configurations individually allows you to aggregate
-            them together as needed, but maintains them in only one place.
-            Similar logic applies to separating source changes.
-        </para>
+    <para>
+        In particular, the kernel tools allow you to generate configuration
+        fragments that specify only what you must, and nothing more.
+        Configuration fragments only need to contain the highest level
+        visible <filename>CONFIG</filename> options as presented by the
+        Yocto Linux kernel <filename>menuconfig</filename> system.
+        Contrast this against a complete Yocto Linux kernel
+        <filename>.config</filename> file, which includes all the automatically
+        selected <filename>CONFIG</filename> options.
+        This efficiency reduces your maintenance effort and allows you
+        to further separate your configuration in ways that make sense for
+        your project.
+        A common split separates policy and hardware.
+        For example, all your kernels might support the
+        <filename>proc</filename> and <filename>sys</filename> filesystems,
+        but only specific boards require sound, USB, or specific drivers.
+        Specifying these configurations individually allows you to aggregate
+        them together as needed, but maintains them in only one place.
+        Similar logic applies to separating source changes.
+    </para>
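+
+    <para>
+        As a minimal, hypothetical sketch, a configuration fragment that turns
+        on basic sound support might contain nothing more than the top-level
+        options; the fragment file name used here is only a placeholder:
+        <literallayout class='monospaced'>
+     # sound.cfg
+     CONFIG_SOUND=y
+     CONFIG_SND=y
+        </literallayout>
+    </para>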
 
-        <para>
-            If you do not maintain your own kernel sources and need to make
-            only minimal changes to the sources, the released recipes provide a
-            vetted base upon which to layer your changes.
-            Doing so allows you to benefit from the continual kernel
-            integration and testing performed during development of the
-            Yocto Project.
-        </para>
+    <para>
+        If you do not maintain your own kernel sources and need to make
+        only minimal changes to the sources, the released recipes provide a
+        vetted base upon which to layer your changes.
+        Doing so allows you to benefit from the continual kernel
+        integration and testing performed during development of the
+        Yocto Project.
+    </para>
 
-        <para>
-            If, instead, you have a very specific Linux kernel source tree
-            and are unable to align with one of the official linux-yocto
-            recipes, an alternative exists by which you can use the Yocto
-            Project Linux kernel tools with your own kernel sources.
-        </para>
-    </section>
+    <para>
+        If, instead, you have a very specific Linux kernel source tree
+        and are unable to align with one of the official Yocto Linux kernel
+        recipes, an alternative exists by which you can use the Yocto
+        Project Linux kernel tools with your own kernel sources.
+    </para>
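+
+    <para>
+        As a rough, hypothetical sketch of that alternative, a recipe for your
+        own kernel sources typically points <filename>SRC_URI</filename> at
+        your repository; the repository URL, branch, and version used here are
+        only placeholders:
+        <literallayout class='monospaced'>
+     # excerpt from a custom kernel recipe
+     SRC_URI = "git://git.mycompany.com/pub/git/my-linux;protocol=ssh;branch=master"
+     SRCREV = "${AUTOREV}"
+     LINUX_VERSION = "4.12"
+        </literallayout>
+    </para>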
 
-    <section id='kernel-dev-other-resources'>
-        <title>Other Resources</title>
+    <para>
+        The remainder of this manual provides instructions for completing
+        specific Linux kernel development tasks.
+        These instructions assume you are comfortable working with
+        <ulink url='http://openembedded.org/wiki/Bitbake'>BitBake</ulink>
+        recipes and basic open-source development tools.
+        Understanding these concepts will facilitate the process of working
+        with the kernel recipes.
+        If you find you need some additional background, please be sure to
+        review and understand the following documentation:
+        <itemizedlist>
+            <listitem><para>
+                <ulink url='&YOCTO_DOCS_QS_URL;'>Yocto Project Quick Start</ulink>
+                </para></listitem>
+            <listitem><para>
+                <ulink url='&YOCTO_DOCS_SDK_URL;#using-devtool-in-your-sdk-workflow'><filename>devtool</filename> workflow</ulink>
+                as described in the Yocto Project Application Development and
+                the Extensible Software Development Kit (eSDK) manual.
+                </para></listitem>
+            <listitem><para>
+                The
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
+                section in the Yocto Project Development Tasks Manual.
+                </para></listitem>
+            <listitem><para>
+                The
+                "<link linkend='kernel-modification-workflow'>Kernel Modification Workflow</link>"
+                section.
+                </para></listitem>
+        </itemizedlist>
+    </para>
 
-        <para>
-            The sections that follow provide instructions for completing
-            specific Linux kernel development tasks.
-            These instructions assume you are comfortable working with
-            <ulink url='http://openembedded.org/wiki/Bitbake'>BitBake</ulink>
-            recipes and basic open-source development tools.
-            Understanding these concepts will facilitate the process of working
-            with the kernel recipes.
-            If you find you need some additional background, please be sure to
-            review and understand the following documentation:
-            <itemizedlist>
-                <listitem><para><ulink url='&YOCTO_DOCS_QS_URL;'>Yocto Project Quick Start</ulink>
-                    </para></listitem>
-                <listitem><para>The "<ulink url='&YOCTO_DOCS_DEV_URL;#dev-modifying-source-code'>Modifying Source Code</ulink>"
-                    section in the Yocto Project Development Manual
-                    </para></listitem>
-                <listitem><para>The "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>" section
-                    in the Yocto Project Development Manual</para></listitem>
-                <listitem><para>The "<ulink url='&YOCTO_DOCS_DEV_URL;#modifying-the-kernel'>Modifying the Kernel</ulink>" section
-                    in the Yocto Project Development Manual.</para></listitem>
-            </itemizedlist>
-        </para>
+    <para>
+        Finally, while this document focuses on the manual creation of
+        recipes, patches, and configuration files, the Yocto Project
+        Board Support Package (BSP) tools are available to automate
+        this process with existing content and work well to create the
+        initial framework and boilerplate code.
+        For details on these tools, see the
+        "<ulink url='&YOCTO_DOCS_BSP_URL;#using-the-yocto-projects-bsp-tools'>Using the Yocto Project's BSP Tools</ulink>"
+        section in the Yocto Project Board Support Package (BSP) Developer's
+        Guide.
+    </para>
+</section>
 
-        <para>
-            Finally, while this document focuses on the manual creation of
-            recipes, patches, and configuration files, the Yocto Project
-            Board Support Package (BSP) tools are available to automate
-            this process with existing content and work well to create the
-            initial framework and boilerplate code.
-            For details on these tools, see the
-            "<ulink url='&YOCTO_DOCS_BSP_URL;#using-the-yocto-projects-bsp-tools'>Using the Yocto Project's BSP Tools</ulink>"
-            section in the Yocto Project Board Support Package (BSP) Developer's
-            Guide.
-        </para>
-    </section>
+<section id='kernel-modification-workflow'>
+    <title>Kernel Modification Workflow</title>
+
+    <para>
+        Kernel modification involves changing the Yocto Project kernel,
+        which could include changing configuration options as well as adding
+        new kernel recipes.
+        Configuration changes can be added in the form of configuration
+        fragments, while recipe modification comes through the kernel's
+        <filename>recipes-kernel</filename> area in a kernel layer you create.
+    </para>
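+
+    <para>
+        As a rough, hypothetical sketch of what such a layer might provide, the
+        following kernel append file adds one patch and one configuration
+        fragment; the layer, patch, and fragment names used here are only
+        placeholders:
+        <literallayout class='monospaced'>
+     # meta-mylayer/recipes-kernel/linux/linux-yocto_%.bbappend
+     FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
+     SRC_URI += "file://my-feature.patch \
+                 file://my-feature.cfg \
+                "
+        </literallayout>
+    </para>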
+
+    <para>
+        This section presents a high-level overview of the Yocto Project
+        kernel modification workflow.
+        The illustration and accompanying list provide general information
+        and references to additional documentation.
+        <imagedata fileref="figures/kernel-dev-flow.png"
+            width="9in" depth="5in" align="center" scalefit="1" />
+    </para>
+
+    <para>
+        <orderedlist>
+            <listitem><para>
+                <emphasis>Set Up Your Host Development System to Support
+                Development Using the Yocto Project:</emphasis>
+                See the
+                "<ulink url='&YOCTO_DOCS_QS_URL;#yp-resources'>Setting Up to Use the Yocto Project</ulink>"
+                section in the Yocto Project Quick Start for options on how
+                to get a build host ready to use the Yocto Project.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Set Up Your Host Development System for Kernel Development:</emphasis>
+                It is recommended that you use <filename>devtool</filename>
+                and an extensible SDK for kernel development.
+                Alternatively, you can use traditional kernel development
+                methods with the Yocto Project.
+                Either way, there are steps you need to take to get the
+                development environment ready.</para>
+
+                <para>Using <filename>devtool</filename> and the eSDK requires
+                that you have a clean build of the image and that you are
+                set up with the appropriate eSDK.
+                For more information, see the
+                "<link linkend='getting-ready-to-develop-using-devtool'>Getting Ready to Develop Using <filename>devtool</filename></link>"
+                section.</para>
+
+                <para>Using traditional kernel development requires that you
+                have the kernel source available in an isolated local Git
+                repository.
+                For more information, see the
+                "<link linkend='getting-ready-for-traditional-kernel-development'>Getting Ready for Traditional Kernel Development</link>"
+                section.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Make Changes to the Kernel Source Code if
+                applicable:</emphasis>
+                Modifying the kernel does not always mean directly
+                changing source files.
+                However, if you have to do this, you make the changes to the
+                files in the eSDK's Build Directory if you are using
+                <filename>devtool</filename>.
+                For more information, see the
+                "<link linkend='using-devtool-to-patch-the-kernel'>Using <filename>devtool</filename> to Patch the Kernel</link>"
+                section.</para>
+
+                <para>If you are using traditional kernel development, you
+                edit the source files in the kernel's local Git repository.
+                For more information, see the
+                "<link linkend='using-traditional-kernel-development-to-patch-the-kernel'>Using Traditional Kernel Development to Patch the Kernel</link>"
+                section.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Make Kernel Configuration Changes if
+                Applicable:</emphasis>
+                If your situation calls for changing the kernel's
+                configuration, you can use
+                <link linkend='using-menuconfig'><filename>menuconfig</filename></link>,
+                which allows you to interactively develop and test the
+                configuration changes you are making to the kernel.
+                Saving changes you make with <filename>menuconfig</filename>
+                updates the kernel's <filename>.config</filename> file.
+                <note><title>Warning</title>
+                    Try to resist the temptation to directly edit an
+                    existing <filename>.config</filename> file, which is
+                    found in the Build Directory among the source code
+                    used for the build.
+                    Doing so can produce unexpected results when the
+                    OpenEmbedded build system regenerates the configuration
+                    file.
+                </note>
+                Once you are satisfied with the configuration
+                changes made using <filename>menuconfig</filename>
+                and you have saved them, you can directly compare the
+                resulting <filename>.config</filename> file against an
+                existing original and gather those changes into a
+                <link linkend='creating-config-fragments'>configuration fragment file</link>
+                to be referenced from within the kernel's
+                <filename>.bbappend</filename> file (a brief sketch appears at the end of this section).</para>
+
+                <para>Additionally, if you are working in a BSP layer
+                and need to modify the BSP's kernel's configuration,
+                you can use the
+                <ulink url='&YOCTO_DOCS_BSP_URL;#managing-kernel-patches-and-config-items-with-yocto-kernel'><filename>yocto-kernel</filename></ulink>
+                script as well as <filename>menuconfig</filename>.
+                The <filename>yocto-kernel</filename> script lets
+                you interactively set up kernel configurations.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Rebuild the Kernel Image With Your Changes:</emphasis>
+                Rebuilding the kernel image applies your changes.
+                Depending on your target hardware, you can verify your changes
+                on actual hardware or perhaps QEMU.
+                </para></listitem>
+        </orderedlist>
+        The remainder of this developer's guide covers common tasks typically
+        used during kernel development, advanced Metadata usage, and Yocto Linux
+        kernel maintenance concepts.
+    </para>
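+
+    <para>
+        As a minimal, hypothetical illustration of step four above, the
+        following commands run <filename>menuconfig</filename> for the kernel
+        and then compare the updated configuration against a copy saved before
+        the changes were made; the recipe name and the name of the saved copy
+        are only placeholders:
+        <literallayout class='monospaced'>
+     $ bitbake linux-yocto -c menuconfig
+     # from the kernel build directory, assuming a copy was saved beforehand
+     $ diff .config.orig .config
+        </literallayout>
+    </para>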
+</section>
+
 </chapter>
 <!--
 vim: expandtab tw=80 ts=4
diff --git a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-maint-appx.xml b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-maint-appx.xml
index 6bb0cf6..f5fd183 100644
--- a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-maint-appx.xml
+++ b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-maint-appx.xml
@@ -7,82 +7,144 @@
 
     <section id='tree-construction'>
         <title>Tree Construction</title>
+
         <para>
-            This section describes construction of the Yocto Project kernel source repositories
-            as accomplished by the Yocto Project team to create kernel repositories.
-            These kernel repositories are found under the heading "Yocto Linux Kernel" at
+            This section describes construction of the Yocto Project kernel
+            source repositories as accomplished by the Yocto Project team to
+            create Yocto Linux kernel repositories.
+            These kernel repositories are found under the heading "Yocto Linux
+            Kernel" at
             <ulink url='&YOCTO_GIT_URL;/cgit.cgi'>&YOCTO_GIT_URL;/cgit.cgi</ulink>
-            and can be shipped as part of a Yocto Project release.
-            The team creates these repositories by
-            compiling and executing the set of feature descriptions for every BSP
-            and feature in the product.
+            and are shipped as part of a Yocto Project release.
+            The team creates these repositories by compiling and executing the
+            set of feature descriptions for every BSP and feature in the
+            product.
             Those feature descriptions list all necessary patches,
-            configuration, branching, tagging and feature divisions found in a kernel.
-            Thus, the Yocto Project kernel repository (or tree) is built.
+            configurations, branches, tags, and feature divisions found in a
+            Yocto Linux kernel.
+            Thus, the Yocto Project Linux kernel repository (or tree) and
+            accompanying Metadata in the
+            <filename>yocto-kernel-cache</filename> are built.
         </para>
+
         <para>
-            The existence of this tree allows you to access and clone a particular
-            Yocto Project kernel repository and use it to build images based on their configurations
-            and features.
+            The existence of these repositories allows you to access and clone a
+            particular Yocto Project Linux kernel repository and use it to
+            build images based on its configurations and features.
         </para>
+
         <para>
-            You can find the files used to describe all the valid features and BSPs
-            in the Yocto Project kernel in any clone of the Yocto Project kernel source repository
-            Git tree.
-            For example, the following command clones the Yocto Project baseline kernel that
-            branched off of <filename>linux.org</filename> version 3.19:
+            You can find the files used to describe all the valid features and
+            BSPs in the Yocto Project Linux kernel in any clone of the Yocto
+            Project Linux kernel source repository and
+            <filename>yocto-kernel-cache</filename> Git trees.
+            For example, the following commands clone the Yocto Project
+            baseline Linux kernel that branches off
+            <filename>kernel.org</filename> version 4.12 and the
+            <filename>yocto-kernel-cache</filename>, which contains stores of
+            kernel Metadata:
             <literallayout class='monospaced'>
-     $ git clone git://git.yoctoproject.org/linux-yocto-3.19
+     $ git clone git://git.yoctoproject.org/linux-yocto-4.12
+     $ git clone git://git.yoctoproject.org/yocto-kernel-cache
             </literallayout>
-            For another example of how to set up a local Git repository of the Yocto Project
-            kernel files, see the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#local-kernel-files'>Yocto Project Kernel</ulink>" bulleted
-            item in the Yocto Project Development Manual.
+            For more information on how to set up a local Git repository of
+            the Yocto Project Linux kernel files, see the
+            "<link linkend='preparing-the-build-host-to-work-on-the-kernel'>Preparing the Build Host to Work on the Kernel</link>"
+            section.
         </para>
+
         <para>
-            Once you have cloned the kernel Git repository on your local machine, you can
-            switch to the <filename>meta</filename> branch within the repository.
-            Here is an example that assumes the local Git repository for the kernel is in
-            a top-level directory named <filename>linux-yocto-3.19</filename>:
+            Once you have cloned the kernel Git repository and the
+            cache of Metadata on your local machine, you can discover the
+            branches that are available in the repository using the following
+            Git command:
             <literallayout class='monospaced'>
-     $ cd linux-yocto-3.19
-     $ git checkout -b meta origin/meta
+     $ git branch -a
             </literallayout>
-            Once you have checked out and switched to the <filename>meta</filename> branch,
-            you can see a snapshot of all the kernel configuration and feature descriptions that are
-            used to build that particular kernel repository.
-            These descriptions are in the form of <filename>.scc</filename> files.
-        </para>
-        <para>
-            You should realize, however, that browsing your local kernel repository
-            for feature descriptions and patches is not an effective way to determine what is in a
-            particular kernel branch.
-            Instead, you should use Git directly to discover the changes in a branch.
-            Using Git is an efficient and flexible way to inspect changes to the kernel.
+            Checking out a branch allows you to work with a particular
+            Yocto Linux kernel.
+            For example, the following commands check out the
+            "standard/beagleboard" branch of the Yocto Linux kernel repository
+            and the "yocto-4.12" branch of the
+            <filename>yocto-kernel-cache</filename> repository:
+            <literallayout class='monospaced'>
+     $ cd ~/linux-yocto-4.12
+     $ git checkout -b my-kernel-4.12 remotes/origin/standard/beagleboard
+     $ cd ~/yocto-kernel-cache
+     $ git checkout -b my-4.12-metadata remotes/origin/yocto-4.12
+            </literallayout>
             <note>
-                Ground up reconstruction of the complete kernel tree is an action only taken by the
-                Yocto Project team during an active development cycle.
-                When you create a clone of the kernel Git repository, you are simply making it
-                efficiently available for building and development.
+                Branches in the <filename>yocto-kernel-cache</filename>
+                repository correspond to Yocto Linux kernel versions
+                (e.g. "yocto-4.12", "yocto-4.10", "yocto-4.9", and so forth).
+            </note>
+            Once you have checked out and switched to the appropriate branches,
+            you can see a snapshot of all the kernel source files used to
+            build that particular Yocto Linux kernel for a
+            particular board.
+        </para>
+
+        <para>
+            To see the features and configurations for a particular Yocto
+            Linux kernel, you need to examine the
+            <filename>yocto-kernel-cache</filename> Git repository.
+            As mentioned, branches in the
+            <filename>yocto-kernel-cache</filename> repository correspond to
+            Yocto Linux kernel versions (e.g. <filename>yocto-4.12</filename>).
+            Branches contain descriptions in the form of
+            <filename>.scc</filename> and <filename>.cfg</filename> files.
+        </para>
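+
+        <para>
+            A minimal, hypothetical example of listing the feature descriptions
+            present on the Metadata branch you checked out, assuming the clone
+            location used above, is the following:
+            <literallayout class='monospaced'>
+     $ cd ~/yocto-kernel-cache
+     $ find . -name "*.scc"
+            </literallayout>
+        </para>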
+
+        <para>
+            You should realize, however, that browsing your local
+            <filename>yocto-kernel-cache</filename> repository for feature
+            descriptions and patches is not an effective way to determine what
+            is in a particular kernel branch.
+            Instead, you should use Git directly to discover the changes in
+            a branch.
+            Using Git is an efficient and flexible way to inspect changes to
+            the kernel.
+            <note>
+                Ground up reconstruction of the complete kernel tree is an
+                action only taken by the Yocto Project team during an active
+                development cycle.
+                When you create a clone of the kernel Git repository, you are
+                simply making it efficiently available for building and
+                development.
             </note>
         </para>
+
         <para>
-            The following steps describe what happens when the Yocto Project Team constructs
-            the Yocto Project kernel source Git repository (or tree) found at
+            The following steps describe what happens when the Yocto Project
+            Team constructs the Yocto Project kernel source Git repository
+            (or tree) found at
             <ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink> given the
             introduction of a new top-level kernel feature or BSP.
-            These are the actions that effectively create the tree
-            that includes the new feature, patch or BSP:
+            These are the actions that effectively provide the Metadata
+            and create the tree that includes the new feature, patch or BSP:
             <orderedlist>
-                <listitem><para>A top-level kernel feature is passed to the kernel build subsystem.
-                    Normally, this feature is a BSP for a particular kernel type.</para></listitem>
-                <listitem><para>The file that describes the top-level feature is located by searching
-                    these system directories:
+                <listitem><para>
+                    <emphasis>Pass Feature to Build Subsystem:</emphasis>
+                    A top-level kernel feature is passed to the kernel build
+                    subsystem.
+                    Normally, this feature is a BSP for a particular kernel
+                    type.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Locate Feature:</emphasis>
+                    The file that describes the top-level feature is located
+                    by searching these system directories:
                     <itemizedlist>
-                        <listitem><para>The in-tree kernel-cache directories, which are located
-                            in <filename>meta/cfg/kernel-cache</filename></para></listitem>
-                        <listitem><para>Areas pointed to by <filename>SRC_URI</filename> statements
-                            found in recipes</para></listitem>
+                        <listitem><para>
+                            The in-tree kernel-cache directories, which are
+                            located in the
+                            <filename>yocto-kernel-cache</filename>
+                            repository
+                            </para></listitem>
+                        <listitem><para>
+                            Areas pointed to by <filename>SRC_URI</filename>
+                            statements found in kernel recipes
+                            </para></listitem>
                     </itemizedlist>
                     For a typical build, the target of the search is a
                     feature description in an <filename>.scc</filename> file
@@ -91,41 +153,96 @@
      <replaceable>bsp_name</replaceable>-<replaceable>kernel_type</replaceable>.scc
                     </literallayout>
                 </para></listitem>
-                <listitem><para>Once located, the feature description is either compiled into a simple script
-                    of actions, or into an existing equivalent script that is already part of the
-                    shipped kernel.</para></listitem>
-                <listitem><para>Extra features are appended to the top-level feature description.
+                <listitem><para>
+                    <emphasis>Expand Feature:</emphasis>
+                    Once located, the feature description is either expanded
+                    into a simple script of actions, or into an existing
+                    equivalent script that is already part of the shipped
+                    kernel.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Append Extra Features:</emphasis>
+                    Extra features are appended to the top-level feature
+                    description.
                     These features can come from the
                     <ulink url='&YOCTO_DOCS_REF_URL;#var-KERNEL_FEATURES'><filename>KERNEL_FEATURES</filename></ulink>
-                    variable in recipes.</para></listitem>
-                <listitem><para>Each extra feature is located, compiled and appended to the script
-                    as described in step three.</para></listitem>
-                <listitem><para>The script is executed to produce a series of <filename>meta-*</filename>
-                    directories.
-                    These directories are descriptions of all the branches, tags, patches and configurations that
-                    need to be applied to the base Git repository to completely create the
-                    source (build) branch for the new BSP or feature.</para></listitem>
-                <listitem><para>The base repository is cloned, and the actions
-                    listed in the <filename>meta-*</filename> directories are applied to the
-                    tree.</para></listitem>
-                <listitem><para>The Git repository is left with the desired branch checked out and any
-                    required branching, patching and tagging has been performed.</para></listitem>
+                    variable in recipes.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Locate, Expand, and Append Each Feature:</emphasis>
+                    Each extra feature is located, expanded and appended to
+                    the script as described in step three.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Execute the Script:</emphasis>
+                    The script is executed to produce
+                    <filename>.scc</filename> and <filename>.cfg</filename>
+                    files in appropriate directories of the
+                    <filename>yocto-kernel-cache</filename> repository
+                    (a simplified example of a feature description follows
+                    this list).
+                    These files are descriptions of all the branches, tags,
+                    patches and configurations that need to be applied to the
+                    base Git repository to completely create the
+                    source (build) branch for the new BSP or feature.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Clone Base Repository:</emphasis>
+                    The base repository is cloned, and the actions
+                    listed in the <filename>yocto-kernel-cache</filename>
+                    directories are applied to the tree.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Perform Cleanup:</emphasis>
+                    The Git repositories are left with the desired branches
+                    checked out and any required branching, patching and
+                    tagging has been performed.
+                    </para></listitem>
             </orderedlist>
         </para>
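+
+        <para>
+            To give a sense of what the kernel build subsystem works with,
+            the following is a simplified, illustrative sketch of a BSP
+            feature description (<filename>.scc</filename> file).
+            The machine name and the files it references are placeholders
+            rather than entries from an actual BSP:
+            <literallayout class='monospaced'>
+     # Simplified, illustrative feature description (mybsp-standard.scc)
+     define KMACHINE mybsp
+     define KTYPE standard
+     define KARCH arm
+
+     include ktypes/standard/standard.scc
+     branch mybsp
+
+     kconf hardware mybsp.cfg
+            </literallayout>
+            Actual feature descriptions are grouped under directories such
+            as <filename>bsp</filename>, <filename>cfg</filename>,
+            <filename>features</filename>, and <filename>ktypes</filename>
+            within the <filename>yocto-kernel-cache</filename> repository.
+        </para>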
+
         <para>
-            The kernel tree is now ready for developer consumption to be locally cloned,
-            configured, and built into a Yocto Project kernel specific to some target hardware.
-            <note><para>The generated <filename>meta-*</filename> directories add to the kernel
-                as shipped with the Yocto Project release.
-                Any add-ons and configuration data are applied to the end of an existing branch.
-                The full repository generation that is found in the
-                official Yocto Project kernel repositories at
-                <ulink url='&YOCTO_GIT_URL;/cgit.cgi'>http://git.yoctoproject.org/cgit.cgi</ulink>
-                is the combination of all supported boards and configurations.</para>
-                <para>The technique the Yocto Project team uses is flexible and allows for seamless
-                blending of an immutable history with additional patches specific to a
-                deployment.
-                Any additions to the kernel become an integrated part of the branches.</para>
+            The kernel tree and cache are now ready for developers to clone
+            locally, configure, and build into a Yocto Project kernel
+            specific to some target hardware (example clone commands appear
+            at the end of this section).
+            <note><title>Notes</title>
+                <itemizedlist>
+                    <listitem><para>
+                        The generated <filename>yocto-kernel-cache</filename>
+                        repository adds to the kernel as shipped with the Yocto
+                        Project release.
+                        Any add-ons and configuration data are applied to the
+                        end of an existing branch.
+                        The full repository generation that is found in the
+                        official Yocto Project kernel repositories at
+                        <ulink url='&YOCTO_GIT_URL;/cgit.cgi'>http://git.yoctoproject.org/cgit.cgi</ulink>
+                        is the combination of all supported boards and
+                        configurations.
+                        </para></listitem>
+                    <listitem><para>
+                        The technique the Yocto Project team uses is flexible
+                        and allows for seamless blending of an immutable
+                        history with additional patches specific to a
+                        deployment.
+                        Any additions to the kernel become an integrated part
+                        of the branches.
+                        </para></listitem>
+                    <listitem><para>
+                        The full kernel tree that you see on
+                        <ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink> is
+                        generated through repeating the above steps for all
+                        valid BSPs.
+                        The end result is a branched, clean history tree that
+                        makes up the kernel for a given release.
+                        You can see the script (<filename>kgit-scc</filename>)
+                        responsible for this in the
+                        <ulink url='&YOCTO_GIT_URL;/cgit.cgi/yocto-kernel-tools/tree/tools'><filename>yocto-kernel-tools</filename></ulink>
+                        repository.
+                        </para></listitem>
+                    <listitem><para>
+                        The steps used to construct the full kernel tree are
+                        the same steps that BitBake uses when it builds a
+                        kernel image.
+                        </para></listitem>
+                </itemizedlist>
             </note>
         </para>
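+
+        <para>
+            As a point of reference, and assuming the
+            <filename>linux-yocto_4.12</filename> kernel used elsewhere in
+            this manual, cloning the kernel tree and its cache for local
+            work might look as follows:
+            <literallayout class='monospaced'>
+     # Repository names track the kernel version you are using
+     $ git clone git://git.yoctoproject.org/linux-yocto-4.12
+     $ git clone git://git.yoctoproject.org/yocto-kernel-cache
+            </literallayout>
+        </para>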
     </section>
@@ -133,85 +250,100 @@
     <section id='build-strategy'>
         <title>Build Strategy</title>
 
-<!--
         <para>
-            <emphasis>AR - Darren Hart:</emphasis>  Some parts of this section
-            need to be in the
-            "<link linkend='using-an-iterative-development-process'>Using an Iterative Development Process</link>"
-            section.
-            Darren needs to figure out which parts and identify them.
-        </para>
--->
-
-        <para>
-            Once a local Git repository of the Yocto Project kernel exists on a development system,
-            you can consider the compilation phase of kernel development - building a kernel image.
-            Some prerequisites exist that are validated by the build process before compilation
-            starts:
+            Once you have cloned a Yocto Linux kernel repository and the
+            cache repository (<filename>yocto-kernel-cache</filename>) onto
+            your development system, you can consider the compilation phase
+            of kernel development, which is building a kernel image.
+            Some prerequisites exist that are validated by the build process
+            before compilation starts:
         </para>
 
         <itemizedlist>
-            <listitem><para>The
-                <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink> points
-                to the kernel Git repository.</para></listitem>
-            <listitem><para>A BSP build branch exists.
-                This branch has the following form:
+            <listitem><para>
+                The
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-SRC_URI'><filename>SRC_URI</filename></ulink>
+                points to the kernel Git repository.
+                </para></listitem>
+            <listitem><para>
+                A BSP build branch with Metadata exists in the
+                <filename>yocto-kernel-cache</filename> repository.
+                The branch is based on the Yocto Linux kernel version and
+                has configurations and features grouped under the
+                <filename>yocto-kernel-cache/bsp</filename> directory.
+                For example, assuming a
+                <filename>linux-yocto_4.12</filename> kernel, features and
+                configurations for the BeagleBone board reside in the
+                following area of the <filename>yocto-kernel-cache</filename>
+                repository (a recipe sketch that pulls in both repositories
+                follows this list):
                 <literallayout class='monospaced'>
-     <replaceable>kernel_type</replaceable>/<replaceable>bsp_name</replaceable>
-                </literallayout></para></listitem>
+     yocto-kernel-cache/bsp/beaglebone
+                </literallayout>
+                <note>
+                    In the previous example, the "yocto-4.12" branch is
+                    checked out in the <filename>yocto-kernel-cache</filename>
+                    repository.
+                </note>
+                </para></listitem>
         </itemizedlist>
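+
+        <para>
+            The following is a simplified, illustrative sketch of how a
+            <filename>linux-yocto</filename> style kernel recipe typically
+            expresses these prerequisites.
+            The repository URLs, branch names, and kernel version are
+            placeholders patterned after the 4.12 example above, not taken
+            from an actual recipe:
+            <literallayout class='monospaced'>
+     # Simplified, illustrative recipe excerpt
+     KBRANCH ?= "standard/beaglebone"
+     KMETA = "kernel-meta"
+     SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.12.git;name=machine;branch=${KBRANCH} \
+                git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.12;destsuffix=${KMETA}"
+            </literallayout>
+            The first <filename>SRC_URI</filename> entry satisfies the
+            kernel repository requirement, while the second pulls in the
+            kernel-cache Metadata branch.
+        </para>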
 
         <para>
-            The OpenEmbedded build system makes sure these conditions exist before attempting compilation.
+            The OpenEmbedded build system makes sure these conditions exist
+            before attempting compilation.
             Other means, however, do exist, such as bootstrapping a BSP.
         </para>
 
         <para>
             Before building a kernel, the build process verifies the tree
             and configures the kernel by processing all of the
-            configuration "fragments" specified by feature descriptions in the <filename>.scc</filename>
-            files.
-            As the features are compiled, associated kernel configuration fragments are noted
-            and recorded in the <filename>meta-*</filename> series of directories in their compilation order.
-            The fragments are migrated, pre-processed and passed to the Linux Kernel
-            Configuration subsystem (<filename>lkc</filename>) as raw input in the form
-            of a <filename>.config</filename> file.
-            The <filename>lkc</filename> uses its own internal dependency constraints to do the final
-            processing of that information and generates the final <filename>.config</filename> file
-            that is used during compilation.
+            configuration "fragments" specified by feature descriptions
+            in the <filename>.scc</filename> files.
+            As the features are compiled, associated kernel configuration
+            fragments are noted and recorded, in their compilation order,
+            in a series of directories.
+            The fragments are migrated, pre-processed and passed to the
+            Linux Kernel Configuration subsystem (<filename>lkc</filename>) as
+            raw input in the form of a <filename>.config</filename> file.
+            The <filename>lkc</filename> uses its own internal dependency
+            constraints to do the final processing of that information and
+            generates the final <filename>.config</filename> file that is used
+            during compilation.
         </para>
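+
+        <para>
+            A configuration fragment itself is simply a list of Linux
+            kernel configuration options.
+            For example, a hypothetical fragment that enables SPI support
+            might contain:
+            <literallayout class='monospaced'>
+     # spi.cfg - hypothetical configuration fragment
+     CONFIG_SPI=y
+     CONFIG_SPI_SPIDEV=y
+            </literallayout>
+            Fragments such as this are merged, along with the others pulled
+            in by the feature descriptions, into the single
+            <filename>.config</filename> file described above.
+        </para>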
 
         <para>
-            Using the board's architecture and other relevant values from the board's template,
-            kernel compilation is started and a kernel image is produced.
+            Using the board's architecture and other relevant values from
+            the board's template, kernel compilation is started and a kernel
+            image is produced.
         </para>
 
         <para>
             Another thing you notice once you configure a kernel is that
-            the build process generates a build tree that is separate from your kernel's local Git
-            source repository tree.
+            the build process generates a build tree that is separate from
+            your kernel's local Git source repository tree.
             This build tree has a name that uses the following form, where
-            <filename>${MACHINE}</filename> is the metadata name of the machine (BSP) and "kernel_type" is one
-            of the Yocto Project supported kernel types (e.g. "standard"):
+            <filename>${MACHINE}</filename> is the metadata name of the
+            machine (BSP) and "kernel_type" is one of the Yocto Project
+            supported kernel types (e.g. "standard"):
         <literallayout class='monospaced'>
      linux-${MACHINE}-<replaceable>kernel_type</replaceable>-build
         </literallayout>
         </para>
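+
+        <para>
+            For example, assuming a <filename>MACHINE</filename> of
+            "beaglebone" and the "standard" kernel type, the build tree
+            would be named:
+            <literallayout class='monospaced'>
+     linux-beaglebone-standard-build
+            </literallayout>
+        </para>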
 
         <para>
-            The existing support in the <filename>kernel.org</filename> tree achieves this
-            default functionality.
+            The existing support in the <filename>kernel.org</filename> tree
+            achieves this default functionality.
         </para>
 
         <para>
-            This behavior means that all the generated files for a particular machine or BSP are now in
-            the build tree directory.
-            The files include the final <filename>.config</filename> file, all the <filename>.o</filename>
-            files, the <filename>.a</filename> files, and so forth.
+            This behavior means that all the generated files for a particular
+            machine or BSP are now in the build tree directory.
+            The files include the final <filename>.config</filename> file,
+            all the <filename>.o</filename> files, the <filename>.a</filename>
+            files, and so forth.
             Since each machine or BSP has its own separate
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
-            in its own separate branch
-            of the Git repository, you can easily switch between different builds.
+            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
+            in its own separate branch of the Git repository, you can easily
+            switch between different builds.
         </para>
     </section>
 </appendix>
diff --git a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-style.css b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-style.css
index 6e0c1c7..9c01aa7 100644
--- a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-style.css
+++ b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev-style.css
@@ -730,6 +730,10 @@
   border-color: black;
 }
 
+.writernotes {
+  color: red;
+}
+
 
   /*********** /
  /  graphics  /
diff --git a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev.xml b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev.xml
index 28a3364..ec36d24 100644
--- a/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev.xml
+++ b/import-layers/yocto-poky/documentation/kernel-dev/kernel-dev.xml
@@ -22,11 +22,11 @@
 
         <authorgroup>
             <author>
-                <firstname>Darren</firstname> <surname>Hart</surname>
+                <firstname>Scott</firstname> <surname>Rifenbark</surname>
                 <affiliation>
-                    <orgname>Intel Corporation</orgname>
+                    <orgname>Scotty's Documentation Services, INC</orgname>
                 </affiliation>
-                <email>darren.hart@intel.com</email>
+                <email>srifenbark@gmail.com</email>
             </author>
         </authorgroup>
 
@@ -82,24 +82,19 @@
                 <revremark>Released with the Yocto Project 2.3 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.1</revnumber>
-                <date>June 2017</date>
-                <revremark>Released with the Yocto Project 2.3.1 Release.</revremark>
+                <revnumber>2.4</revnumber>
+                <date>October 2017</date>
+                <revremark>Released with the Yocto Project 2.4 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.2</revnumber>
-                <date>September 2017</date>
-                <revremark>Released with the Yocto Project 2.3.2 Release.</revremark>
-            </revision>
-            <revision>
-                <revnumber>2.3.3</revnumber>
+                <revnumber>2.4.1</revnumber>
                 <date>January 2018</date>
-                <revremark>Released with the Yocto Project 2.3.3 Release.</revremark>
+                <revremark>Released with the Yocto Project 2.4.1 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.4</revnumber>
-                <date>April 2018</date>
-                <revremark>Released with the Yocto Project 2.3.4 Release.</revremark>
+                <revnumber>2.4.2</revnumber>
+                <date>March 2018</date>
+                <revremark>Released with the Yocto Project 2.4.2 Release.</revremark>
             </revision>
         </revhistory>
 
@@ -116,30 +111,31 @@
            <note><title>Manual Notes</title>
                <itemizedlist>
                    <listitem><para>
-                       For the latest version of the Yocto Project Linux
-                       Kernel Development Manual associated with this Yocto
-                       Project release (version &YOCTO_DOC_VERSION;),
-                       see the Yocto Project Linux Kernel Development
-                       Manual from the
+                       This version of the
+                       <emphasis>Yocto Project Linux Kernel Development Manual</emphasis>
+                       is for the &YOCTO_DOC_VERSION; release of the
+                       Yocto Project.
+                       To be sure you have the latest version of the manual
+                       for this release, use the manual from the
                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
                        </para></listitem>
                    <listitem><para>
-                       This version of the manual is version
-                       &YOCTO_DOC_VERSION;.
-                       For later releases of the Yocto Project (if they exist),
-                       go to the
+                       For manuals associated with other releases of the Yocto
+                       Project, go to the
                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
                        and use the drop-down "Active Releases" button
-                       and choose the Yocto Project version for which you want
-                       the manual.
+                       and choose the manual associated with the desired
+                       Yocto Project release.
                        </para></listitem>
                    <listitem><para>
-                        For an in-development version of the Yocto Project
-                        Linux Kernel Development Manual, see
-                        <ulink url='&YOCTO_DOCS_URL;/latest/kernel-dev/kernel-dev.html'></ulink>.
+                        To report any inaccuracies or problems with this
+                        manual, send an email to the Yocto Project
+                        discussion group at
+                        <filename>yocto@yoctoproject.org</filename> or log into
+                        the freenode <filename>#yocto</filename> channel.
                         </para></listitem>
-                </itemizedlist>
-            </note>
+               </itemizedlist>
+           </note>
     </legalnotice>
 
     </bookinfo>
@@ -154,10 +150,6 @@
 
     <xi:include href="kernel-dev-maint-appx.xml"/>
 
-<!--
-    <xi:include href="kernel-dev-examples.xml"/>
--->
-
     <xi:include href="kernel-dev-faq.xml"/>
 
 <!--    <index id='index'>
diff --git a/import-layers/yocto-poky/documentation/mega-manual/figures/yocto-environment.png b/import-layers/yocto-poky/documentation/mega-manual/figures/YP-flow-diagram.png
similarity index 100%
rename from import-layers/yocto-poky/documentation/mega-manual/figures/yocto-environment.png
rename to import-layers/yocto-poky/documentation/mega-manual/figures/YP-flow-diagram.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/mega-manual/figures/bitbake-build-flow.png b/import-layers/yocto-poky/documentation/mega-manual/figures/bitbake-build-flow.png
new file mode 100644
index 0000000..eb95eb3
--- /dev/null
+++ b/import-layers/yocto-poky/documentation/mega-manual/figures/bitbake-build-flow.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/mega-manual/figures/bsp-dev-flow.png b/import-layers/yocto-poky/documentation/mega-manual/figures/bsp-dev-flow.png
index 540b0ab..0f82a1f 100644
--- a/import-layers/yocto-poky/documentation/mega-manual/figures/bsp-dev-flow.png
+++ b/import-layers/yocto-poky/documentation/mega-manual/figures/bsp-dev-flow.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/mega-manual/figures/dev-title.png b/import-layers/yocto-poky/documentation/mega-manual/figures/dev-title.png
index d3cac4a..15e67d0 100644
--- a/import-layers/yocto-poky/documentation/mega-manual/figures/dev-title.png
+++ b/import-layers/yocto-poky/documentation/mega-manual/figures/dev-title.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/mega-manual/figures/devtool-add-flow.png b/import-layers/yocto-poky/documentation/mega-manual/figures/devtool-add-flow.png
deleted file mode 100644
index c09e60e..0000000
--- a/import-layers/yocto-poky/documentation/mega-manual/figures/devtool-add-flow.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/mega-manual/figures/devtool-modify-flow.png b/import-layers/yocto-poky/documentation/mega-manual/figures/devtool-modify-flow.png
deleted file mode 100644
index cd7f4d0..0000000
--- a/import-layers/yocto-poky/documentation/mega-manual/figures/devtool-modify-flow.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/mega-manual/figures/devtool-upgrade-flow.png b/import-layers/yocto-poky/documentation/mega-manual/figures/devtool-upgrade-flow.png
deleted file mode 100644
index d25168c..0000000
--- a/import-layers/yocto-poky/documentation/mega-manual/figures/devtool-upgrade-flow.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/mega-manual/figures/index-downloads.png b/import-layers/yocto-poky/documentation/mega-manual/figures/index-downloads.png
index c907997..96303b8 100644
--- a/import-layers/yocto-poky/documentation/mega-manual/figures/index-downloads.png
+++ b/import-layers/yocto-poky/documentation/mega-manual/figures/index-downloads.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/mega-manual/figures/kernel-dev-flow.png b/import-layers/yocto-poky/documentation/mega-manual/figures/kernel-dev-flow.png
index 009105d..793a395 100644
--- a/import-layers/yocto-poky/documentation/mega-manual/figures/kernel-dev-flow.png
+++ b/import-layers/yocto-poky/documentation/mega-manual/figures/kernel-dev-flow.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/mega-manual/figures/kernel-overview-2-generic.png b/import-layers/yocto-poky/documentation/mega-manual/figures/kernel-overview-2-generic.png
index cb970ea..ee2cdb2 100644
--- a/import-layers/yocto-poky/documentation/mega-manual/figures/kernel-overview-2-generic.png
+++ b/import-layers/yocto-poky/documentation/mega-manual/figures/kernel-overview-2-generic.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/mega-manual/figures/sdk-title.png b/import-layers/yocto-poky/documentation/mega-manual/figures/sdk-title.png
index e9d5b34..e69e039 100644
--- a/import-layers/yocto-poky/documentation/mega-manual/figures/sdk-title.png
+++ b/import-layers/yocto-poky/documentation/mega-manual/figures/sdk-title.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/mega-manual/figures/source-repos.png b/import-layers/yocto-poky/documentation/mega-manual/figures/source-repos.png
index 65c5f29..e9cff16 100644
--- a/import-layers/yocto-poky/documentation/mega-manual/figures/source-repos.png
+++ b/import-layers/yocto-poky/documentation/mega-manual/figures/source-repos.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/mega-manual/mega-manual.xml b/import-layers/yocto-poky/documentation/mega-manual/mega-manual.xml
index d06f851..a941d79 100644
--- a/import-layers/yocto-poky/documentation/mega-manual/mega-manual.xml
+++ b/import-layers/yocto-poky/documentation/mega-manual/mega-manual.xml
@@ -33,7 +33,7 @@
             <author>
                 <firstname>Scott</firstname> <surname>Rifenbark</surname>
                 <affiliation>
-                    <orgname>Intel Corporation</orgname>
+                    <orgname>Scotty's Documentation Services, INC</orgname>
                 </affiliation>
                 <email>srifenbark@gmail.com</email>
             </author>
@@ -66,24 +66,19 @@
                 <revremark>Released with the Yocto Project 2.3 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.1</revnumber>
-                <date>June 2017</date>
-                <revremark>Released with the Yocto Project 2.3.1 Release.</revremark>
+                <revnumber>2.4</revnumber>
+                <date>October 2017</date>
+                <revremark>Released with the Yocto Project 2.4 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.2</revnumber>
-                <date>September 2017</date>
-                <revremark>Released with the Yocto Project 2.3.2 Release.</revremark>
-            </revision>
-            <revision>
-                <revnumber>2.3.3</revnumber>
+                <revnumber>2.4.1</revnumber>
                 <date>January 2018</date>
-                <revremark>Released with the Yocto Project 2.3.3 Release.</revremark>
+                <revremark>Released with the Yocto Project 2.4.1 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.4</revnumber>
-                <date>April 2018</date>
-                <revremark>Released with the Yocto Project 2.3.4 Release.</revremark>
+                <revnumber>2.4.2</revnumber>
+                <date>March 2018</date>
+                <revremark>Released with the Yocto Project 2.4.2 Release.</revremark>
             </revision>
        </revhistory>
 
@@ -100,27 +95,29 @@
            <note><title>Manual Notes</title>
                <itemizedlist>
                    <listitem><para>
-                       For the latest version of the Yocto Project
-                       Mega-Manual associated with this Yocto Project release
-                       (version &YOCTO_DOC_VERSION;),
-                       see the Yocto Project Mega-Manual from the
+                       This version of the
+                       <emphasis>Yocto Project Mega-Manual</emphasis>
+                       is for the &YOCTO_DOC_VERSION; release of the
+                       Yocto Project.
+                       To be sure you have the latest version of the manual
+                       for this release, use the manual from the
                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
                        </para></listitem>
                    <listitem><para>
-                       This version of the manual is version
-                       &YOCTO_DOC_VERSION;.
-                       For later releases of the Yocto Project (if they exist),
-                       go to the
+                       For manuals associated with other releases of the Yocto
+                       Project, go to the
                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
                        and use the drop-down "Active Releases" button
-                       and choose the Yocto Project version for which you want
-                       the manual.
+                       and choose the manual associated with the desired
+                       Yocto Project release.
                        </para></listitem>
                    <listitem><para>
-                       For an in-development version of the Yocto Project
-                       Mega-Manual, see
-                       <ulink url='&YOCTO_DOCS_URL;/latest/mega-manual/mega-manual.html'></ulink>.
-                       </para></listitem>
+                        To report any inaccuracies or problems with this
+                        manual, send an email to the Yocto Project
+                        discussion group at
+                        <filename>yocto@yoctoproject.org</filename> or log into
+                        the freenode <filename>#yocto</filename> channel.
+                        </para></listitem>
                </itemizedlist>
            </note>
 
@@ -146,8 +143,6 @@
     <xi:include
         xmlns:xi="http://www.w3.org/2003/XInclude" href="../dev-manual/dev-manual-newbie.xml"/>
     <xi:include
-        xmlns:xi="http://www.w3.org/2003/XInclude" href="../dev-manual/dev-manual-model.xml"/>
-    <xi:include
         xmlns:xi="http://www.w3.org/2003/XInclude" href="../dev-manual/dev-manual-common-tasks.xml"/>
     <xi:include
         xmlns:xi="http://www.w3.org/2003/XInclude" href="../dev-manual/dev-manual-qemu.xml"/>
@@ -167,6 +162,8 @@
     <xi:include
         xmlns:xi="http://www.w3.org/2003/XInclude" href="../sdk-manual/sdk-working-projects.xml"/>
     <xi:include
+        xmlns:xi="http://www.w3.org/2003/XInclude" href="../sdk-manual/sdk-eclipse-project.xml"/>
+    <xi:include
         xmlns:xi="http://www.w3.org/2003/XInclude" href="../sdk-manual/sdk-appendix-obtain.xml"/>
     <xi:include
         xmlns:xi="http://www.w3.org/2003/XInclude" href="../sdk-manual/sdk-appendix-customizing.xml"/>
@@ -229,7 +226,7 @@
         xmlns:xi="http://www.w3.org/2003/XInclude" href="../ref-manual/usingpoky.xml"/>
 
     <xi:include
-        xmlns:xi="http://www.w3.org/2003/XInclude" href="../ref-manual/closer-look.xml"/>
+        xmlns:xi="http://www.w3.org/2003/XInclude" href="../ref-manual/ref-development-environment.xml"/>
 
     <xi:include
         xmlns:xi="http://www.w3.org/2003/XInclude" href="../ref-manual/technical-details.xml"/>
@@ -253,6 +250,9 @@
         xmlns:xi="http://www.w3.org/2003/XInclude" href="../ref-manual/ref-devtool-reference.xml"/>
 
     <xi:include
+        xmlns:xi="http://www.w3.org/2003/XInclude" href="../ref-manual/ref-kickstart.xml"/>
+
+    <xi:include
         xmlns:xi="http://www.w3.org/2003/XInclude" href="../ref-manual/ref-qa-checks.xml"/>
 
     <xi:include
diff --git a/import-layers/yocto-poky/documentation/mega-manual/mega-style.css b/import-layers/yocto-poky/documentation/mega-manual/mega-style.css
index df71a20..cd71eb6 100644
--- a/import-layers/yocto-poky/documentation/mega-manual/mega-style.css
+++ b/import-layers/yocto-poky/documentation/mega-manual/mega-style.css
@@ -731,6 +731,11 @@
 }
 
 
+.writernotes {
+  color: red;
+}
+
+
   /*********** /
  /  graphics  /
 / ***********/
diff --git a/import-layers/yocto-poky/documentation/poky.ent b/import-layers/yocto-poky/documentation/poky.ent
index 14d67e5..db3eef0 100644
--- a/import-layers/yocto-poky/documentation/poky.ent
+++ b/import-layers/yocto-poky/documentation/poky.ent
@@ -1,10 +1,13 @@
-<!ENTITY DISTRO "2.3.4">
-<!ENTITY DISTRO_COMPRESSED "234">
-<!ENTITY DISTRO_NAME_NO_CAP "pyro">
-<!ENTITY DISTRO_NAME "Pyro">
-<!ENTITY YOCTO_DOC_VERSION "2.3.4">
-<!ENTITY POKYVERSION "18.0.4">
-<!ENTITY POKYVERSION_COMPRESSED "1804">
+<!ENTITY DISTRO "2.4.2">
+<!ENTITY DISTRO_COMPRESSED "242">
+<!ENTITY DISTRO_NAME_NO_CAP "rocko">
+<!ENTITY DISTRO_NAME "Rocko">
+<!ENTITY YOCTO_DOC_VERSION "2.4.2">
+<!ENTITY DISTRO_REL_TAG "yocto-2.4.2">
+<!ENTITY METAINTELVERSION "8.0">
+<!ENTITY META_INTEL_REL_TAG "&METAINTELVERSION;-&DISTRO_NAME_NO_CAP;-&YOCTO_DOC_VERSION;">
+<!ENTITY POKYVERSION "19.0.2">
+<!ENTITY POKYVERSION_COMPRESSED "1902">
 <!ENTITY YOCTO_POKY "poky-&DISTRO_NAME_NO_CAP;-&POKYVERSION;">
 <!ENTITY COPYRIGHT_YEAR "2010-2018">
 <!ENTITY YOCTO_DL_URL "http://downloads.yoctoproject.org">
diff --git a/import-layers/yocto-poky/documentation/profile-manual/profile-manual.xml b/import-layers/yocto-poky/documentation/profile-manual/profile-manual.xml
index b668fd4..f742e88 100644
--- a/import-layers/yocto-poky/documentation/profile-manual/profile-manual.xml
+++ b/import-layers/yocto-poky/documentation/profile-manual/profile-manual.xml
@@ -22,11 +22,11 @@
 
         <authorgroup>
             <author>
-                <firstname>Tom</firstname> <surname>Zanussi</surname>
+                <firstname>Scott</firstname> <surname>Rifenbark</surname>
                 <affiliation>
-                    <orgname>Intel Corporation</orgname>
+                    <orgname>Scotty's Documentation Services, INC</orgname>
                 </affiliation>
-                <email>tom.zanussi@intel.com</email>
+                <email>srifenbark@gmail.com</email>
             </author>
         </authorgroup>
 
@@ -82,24 +82,19 @@
                 <revremark>Released with the Yocto Project 2.3 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.1</revnumber>
-                <date>June 2017</date>
-                <revremark>Released with the Yocto Project 2.3.1 Release.</revremark>
+                <revnumber>2.4</revnumber>
+                <date>October 2017</date>
+                <revremark>Released with the Yocto Project 2.4 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.2</revnumber>
-                <date>September 2017</date>
-                <revremark>Released with the Yocto Project 2.3.2 Release.</revremark>
-            </revision>
-            <revision>
-                <revnumber>2.3.3</revnumber>
+                <revnumber>2.4.1</revnumber>
                 <date>January 2018</date>
-                <revremark>Released with the Yocto Project 2.3.3 Release.</revremark>
+                <revremark>Released with the Yocto Project 2.4.1 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.4</revnumber>
-                <date>April 2018</date>
-                <revremark>Released with the Yocto Project 2.3.4 Release.</revremark>
+                <revnumber>2.4.2</revnumber>
+                <date>March 2018</date>
+                <revremark>Released with the Yocto Project 2.4.2 Release.</revremark>
             </revision>
         </revhistory>
 
@@ -115,34 +110,34 @@
           Creative Commons Attribution-Share Alike 2.0 UK: England &amp; Wales</ulink> as published by
           Creative Commons.
       </para>
-
-            <note><title>Manual Notes</title>
-                <itemizedlist>
-                    <listitem><para>
-                        For the latest version of the Yocto Project Profiling
-                        and Tracing Manual associated with this Yocto Project
-                        release (version &YOCTO_DOC_VERSION;),
-                        see the Yocto Project Profiling and Tracing Manual
-                        from the
-                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
+           <note><title>Manual Notes</title>
+               <itemizedlist>
+                   <listitem><para>
+                       This version of the
+                       <emphasis>Yocto Project Profiling and Tracing Manual</emphasis>
+                       is for the &YOCTO_DOC_VERSION; release of the
+                       Yocto Project.
+                       To be sure you have the latest version of the manual
+                       for this release, use the manual from the
+                       <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
+                       </para></listitem>
+                   <listitem><para>
+                       For manuals associated with other releases of the Yocto
+                       Project, go to the
+                       <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
+                       and use the drop-down "Active Releases" button
+                       and choose the manual associated with the desired
+                       Yocto Project release.
+                       </para></listitem>
+                   <listitem><para>
+                        To report any inaccuracies or problems with this
+                        manual, send an email to the Yocto Project
+                        discussion group at
+                        <filename>yocto@yoctoproject.org</filename> or log into
+                        the freenode <filename>#yocto</filename> channel.
                         </para></listitem>
-                    <listitem><para>
-                        This version of the manual is version
-                        &YOCTO_DOC_VERSION;.
-                        For later releases of the Yocto Project (if they exist),
-                        go to the
-                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
-                        and use the drop-down "Active Releases" button
-                        and choose the Yocto Project version for which you want
-                        the manual.
-                        </para></listitem>
-                    <listitem><para>
-                        For an in-development version of the Yocto Project
-                        Profiling and Tracing Manual, see
-                        <ulink url='&YOCTO_DOCS_URL;/latest/profile-manual/profile-manual.html'></ulink>.
-                        </para></listitem>
-                </itemizedlist>
-            </note>
+               </itemizedlist>
+           </note>
     </legalnotice>
 
     </bookinfo>
diff --git a/import-layers/yocto-poky/documentation/ref-manual/closer-look.xml b/import-layers/yocto-poky/documentation/ref-manual/closer-look.xml
deleted file mode 100644
index 923ed21..0000000
--- a/import-layers/yocto-poky/documentation/ref-manual/closer-look.xml
+++ /dev/null
@@ -1,1630 +0,0 @@
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
-"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
-[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
-
-<chapter id='closer-look'>
-<title>A Closer Look at the Yocto Project Development Environment</title>
-
-    <para>
-        This chapter takes a more detailed look at the Yocto Project
-        development environment.
-        The following diagram represents the development environment at a
-        high level.
-        The remainder of this chapter expands on the fundamental input, output,
-        process, and
-        <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>) blocks
-        in the Yocto Project development environment.
-    </para>
-
-    <para id='general-yocto-environment-figure'>
-        <imagedata fileref="figures/yocto-environment-ref.png" align="center" width="8in" depth="4.25in" />
-    </para>
-
-    <para>
-        The generalized Yocto Project Development Environment consists of
-        several functional areas:
-        <itemizedlist>
-            <listitem><para><emphasis>User Configuration:</emphasis>
-                Metadata you can use to control the build process.
-                </para></listitem>
-            <listitem><para><emphasis>Metadata Layers:</emphasis>
-                Various layers that provide software, machine, and
-                distro Metadata.</para></listitem>
-            <listitem><para><emphasis>Source Files:</emphasis>
-                Upstream releases, local projects, and SCMs.</para></listitem>
-            <listitem><para><emphasis>Build System:</emphasis>
-                Processes under the control of
-                <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>.
-                This block expands on how BitBake fetches source, applies
-                patches, completes compilation, analyzes output for package
-                generation, creates and tests packages, generates images, and
-                generates cross-development tools.</para></listitem>
-            <listitem><para><emphasis>Package Feeds:</emphasis>
-                Directories containing output packages (RPM, DEB or IPK),
-                which are subsequently used in the construction of an image or
-                SDK, produced by the build system.
-                These feeds can also be copied and shared using a web server or
-                other means to facilitate extending or updating existing
-                images on devices at runtime if runtime package management is
-                enabled.</para></listitem>
-            <listitem><para><emphasis>Images:</emphasis>
-                Images produced by the development process.
-                </para></listitem>
-            <listitem><para><emphasis>Application Development SDK:</emphasis>
-                Cross-development tools that are produced along with an image
-                or separately with BitBake.</para></listitem>
-        </itemizedlist>
-    </para>
-
-    <section id="user-configuration">
-        <title>User Configuration</title>
-
-        <para>
-            User configuration helps define the build.
-            Through user configuration, you can tell BitBake the
-            target architecture for which you are building the image,
-            where to store downloaded source, and other build properties.
-        </para>
-
-        <para>
-            The following figure shows an expanded representation of the
-            "User Configuration" box of the
-            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>:
-        </para>
-
-        <para>
-            <imagedata fileref="figures/user-configuration.png" align="center" />
-        </para>
-
-        <para>
-            BitBake needs some basic configuration files in order to complete
-            a build.
-            These files are <filename>*.conf</filename> files.
-            The minimally necessary ones reside as example files in the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
-            For simplicity, this section refers to the Source Directory as
-            the "Poky Directory."
-        </para>
-
-        <para>
-            When you clone the <filename>poky</filename> Git repository or you
-            download and unpack a Yocto Project release, you can set up the
-            Source Directory to be named anything you want.
-            For this discussion, the cloned repository uses the default
-            name <filename>poky</filename>.
-            <note>
-                The Poky repository is primarily an aggregation of existing
-                repositories.
-                It is not a canonical upstream source.
-            </note>
-        </para>
-
-        <para>
-            The <filename>meta-poky</filename> layer inside Poky contains
-            a <filename>conf</filename> directory that has example
-            configuration files.
-            These example files are used as a basis for creating actual
-            configuration files when you source the build environment
-            script
-            (i.e.
-            <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>
-            or
-            <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>).
-        </para>
-
-        <para>
-            Sourcing the build environment script creates a
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
-            if one does not already exist.
-            BitBake uses the Build Directory for all its work during builds.
-            The Build Directory has a <filename>conf</filename> directory that
-            contains default versions of your <filename>local.conf</filename>
-            and <filename>bblayers.conf</filename> configuration files.
-            These default configuration files are created only if versions
-            do not already exist in the Build Directory at the time you
-            source the build environment setup script.
-        </para>
-
-        <para>
-            Because the Poky repository is fundamentally an aggregation of
-            existing repositories, some users might be familiar with running
-            the <filename>&OE_INIT_FILE;</filename> or
-            <filename>oe-init-build-env-memres</filename> script in the context
-            of separate OpenEmbedded-Core and BitBake repositories rather than a
-            single Poky repository.
-            This discussion assumes the script is executed from within a cloned
-            or unpacked version of Poky.
-        </para>
-
-        <para>
-            Depending on where the script is sourced, different sub-scripts
-            are called to set up the Build Directory (Yocto or OpenEmbedded).
-            Specifically, the script
-            <filename>scripts/oe-setup-builddir</filename> inside the
-            poky directory sets up the Build Directory and seeds the directory
-            (if necessary) with configuration files appropriate for the
-            Yocto Project development environment.
-            <note>
-                The <filename>scripts/oe-setup-builddir</filename> script
-                uses the <filename>$TEMPLATECONF</filename> variable to
-                determine which sample configuration files to locate.
-            </note>
-        </para>
-
-        <para>
-            The <filename>local.conf</filename> file provides many
-            basic variables that define a build environment.
-            Here is a list of a few.
-            To see the default configurations in a <filename>local.conf</filename>
-            file created by the build environment script, see the
-            <filename>local.conf.sample</filename> in the
-            <filename>meta-poky</filename> layer:
-            <itemizedlist>
-                <listitem><para><emphasis>Parallelism Options:</emphasis>
-                    Controlled by the
-                    <link linkend='var-BB_NUMBER_THREADS'><filename>BB_NUMBER_THREADS</filename></link>,
-                    <link linkend='var-PARALLEL_MAKE'><filename>PARALLEL_MAKE</filename></link>,
-                    and
-                    <ulink url='&YOCTO_DOCS_BB_URL;#var-BB_NUMBER_PARSE_THREADS'><filename>BB_NUMBER_PARSE_THREADS</filename></ulink>
-                    variables.</para></listitem>
-                <listitem><para><emphasis>Target Machine Selection:</emphasis>
-                    Controlled by the
-                    <link linkend='var-MACHINE'><filename>MACHINE</filename></link>
-                    variable.</para></listitem>
-                <listitem><para><emphasis>Download Directory:</emphasis>
-                    Controlled by the
-                    <link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
-                    variable.</para></listitem>
-                <listitem><para><emphasis>Shared State Directory:</emphasis>
-                    Controlled by the
-                    <link linkend='var-SSTATE_DIR'><filename>SSTATE_DIR</filename></link>
-                    variable.</para></listitem>
-                <listitem><para><emphasis>Build Output:</emphasis>
-                    Controlled by the
-                    <link linkend='var-TMPDIR'><filename>TMPDIR</filename></link>
-                    variable.</para></listitem>
-            </itemizedlist>
-            <note>
-                Configurations set in the <filename>conf/local.conf</filename>
-                file can also be set in the
-                <filename>conf/site.conf</filename> and
-                <filename>conf/auto.conf</filename> configuration files.
-            </note>
-        </para>
-
-        <para>
-            The <filename>bblayers.conf</filename> file tells BitBake what
-            layers you want considered during the build.
-            By default, the layers listed in this file include layers
-            minimally needed by the build system.
-            However, you must manually add any custom layers you have created.
-            You can find more information on working with the
-            <filename>bblayers.conf</filename> file in the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#enabling-your-layer'>Enabling Your Layer</ulink>"
-            section in the Yocto Project Development Manual.
-        </para>
-
-        <para>
-            The files <filename>site.conf</filename> and
-            <filename>auto.conf</filename> are not created by the environment
-            initialization script.
-            If you want the <filename>site.conf</filename> file, you need to
-            create that yourself.
-            The <filename>auto.conf</filename> file is typically created by
-            an autobuilder:
-            <itemizedlist>
-                <listitem><para><emphasis><filename>site.conf</filename>:</emphasis>
-                    You can use the <filename>conf/site.conf</filename>
-                    configuration file to configure multiple build directories.
-                    For example, suppose you had several build environments and
-                    they shared some common features.
-                    You can set these default build properties here.
-                    A good example is perhaps the packaging format to use
-                    through the
-                    <link linkend='var-PACKAGE_CLASSES'><filename>PACKAGE_CLASSES</filename></link>
-                    variable.</para>
-                    <para>One useful scenario for using the
-                    <filename>conf/site.conf</filename> file is to extend your
-                    <link linkend='var-BBPATH'><filename>BBPATH</filename></link>
-                    variable to include the path to a
-                    <filename>conf/site.conf</filename>.
-                    Then, when BitBake looks for Metadata using
-                    <filename>BBPATH</filename>, it finds the
-                    <filename>conf/site.conf</filename> file and applies your
-                    common configurations found in the file.
-                    To override configurations in a particular build directory,
-                    alter the similar configurations within that build
-                    directory's <filename>conf/local.conf</filename> file.
-                    </para></listitem>
-                <listitem><para><emphasis><filename>auto.conf</filename>:</emphasis>
-                    The file is usually created and written to by
-                    an autobuilder.
-                    The settings put into the file are typically the same as
-                    you would find in the <filename>conf/local.conf</filename>
-                    or the <filename>conf/site.conf</filename> files.
-                    </para></listitem>
-            </itemizedlist>
-        </para>
-
-        <para>
-            You can edit all configuration files to further define
-            any particular build environment.
-            This process is represented by the "User Configuration Edits"
-            box in the figure.
-        </para>
-
-        <para>
-            When you launch your build with the
-            <filename>bitbake <replaceable>target</replaceable></filename>
-            command, BitBake sorts out the configurations to ultimately
-            define your build environment.
-            It is important to understand that the OpenEmbedded build system
-            reads the configuration files in a specific order:
-            <filename>site.conf</filename>, <filename>auto.conf</filename>,
-            and <filename>local.conf</filename>.
-            And, the build system applies the normal assignment statement
-            rules.
-            Because the files are parsed in a specific order, variable
-            assignments for the same variable could be affected.
-            For example, if the <filename>auto.conf</filename> file and
-            the <filename>local.conf</filename> set
-            <replaceable>variable1</replaceable> to different values, because
-            the build system parses <filename>local.conf</filename> after
-            <filename>auto.conf</filename>,
-            <replaceable>variable1</replaceable> is assigned the value from
-            the <filename>local.conf</filename> file.
-        </para>
-    </section>
-
-    <section id="metadata-machine-configuration-and-policy-configuration">
-        <title>Metadata, Machine Configuration, and Policy Configuration</title>
-
-        <para>
-            The previous section described the user configurations that
-            define BitBake's global behavior.
-            This section takes a closer look at the layers the build system
-            uses to further control the build.
-            These layers provide Metadata for the software, machine, and
-            policy.
-        </para>
-
-        <para>
-            In general, three types of layer input exist:
-            <itemizedlist>
-                <listitem><para><emphasis>Policy Configuration:</emphasis>
-                    Distribution Layers provide top-level or general
-                    policies for the image or SDK being built.
-                    For example, this layer would dictate whether BitBake
-                    produces RPM or IPK packages.</para></listitem>
-                <listitem><para><emphasis>Machine Configuration:</emphasis>
-                    Board Support Package (BSP) layers provide machine
-                    configurations.
-                    This type of information is specific to a particular
-                    target architecture.</para></listitem>
-                <listitem><para><emphasis>Metadata:</emphasis>
-                    Software layers contain user-supplied recipe files,
-                    patches, and append files.
-                    </para></listitem>
-            </itemizedlist>
-        </para>
-
-        <para>
-            The following figure shows an expanded representation of the
-            Metadata, Machine Configuration, and Policy Configuration input
-            (layers) boxes of the
-            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>:
-        </para>
-
-        <para>
-            <imagedata fileref="figures/layer-input.png" align="center" width="8in" depth="7.5in" />
-        </para>
-
-        <para>
-            In general, all layers have a similar structure.
-            They all contain a licensing file
-            (e.g. <filename>COPYING</filename>) if the layer is to be
-            distributed, a <filename>README</filename> file (good practice
-            in general and especially important if the layer is to be
-            distributed), a configuration directory, and recipe directories.
-        </para>
-
-        <para>
-            The Yocto Project has many layers that can be used.
-            You can see a web-interface listing of them on the
-            <ulink url="http://git.yoctoproject.org/">Source Repositories</ulink>
-            page.
-            The layers are shown at the bottom categorized under
-            "Yocto Metadata Layers."
-            These layers are fundamentally a subset of the
-            <ulink url="http://layers.openembedded.org/layerindex/layers/">OpenEmbedded Metadata Index</ulink>,
-            which lists all layers provided by the OpenEmbedded community.
-            <note>
-                Layers exist in the Yocto Project Source Repositories that
-                cannot be found in the OpenEmbedded Metadata Index.
-                These layers are either deprecated or experimental in nature.
-            </note>
-        </para>
-
-        <para>
-            BitBake uses the <filename>conf/bblayers.conf</filename> file,
-            which is part of the user configuration, to find what layers it
-            should be using as part of the build.
-        </para>
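-
-        <para>
-            As a brief sketch, a <filename>conf/bblayers.conf</filename> file
-            simply lists the layer directories in its
-            <filename>BBLAYERS</filename> variable (the paths shown, and the
-            <filename>meta-mylayer</filename> entry, are hypothetical):
-            <literallayout class='monospaced'>
-     BBLAYERS ?= " \
-       /home/user/poky/meta \
-       /home/user/poky/meta-poky \
-       /home/user/poky/meta-yocto-bsp \
-       /home/user/poky/meta-mylayer \
-       "
-            </literallayout>
-        </para>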
-
-        <para>
-            For more information on layers, see the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
-            section in the Yocto Project Development Manual.
-        </para>
-
-        <section id="distro-layer">
-            <title>Distro Layer</title>
-
-            <para>
-                The distribution layer provides policy configurations for your
-                distribution.
-                Best practices dictate that you isolate these types of
-                configurations into their own layer.
-                Settings you provide in
-                <filename>conf/distro/<replaceable>distro</replaceable>.conf</filename> override
-                similar
-                settings that BitBake finds in your
-                <filename>conf/local.conf</filename> file in the Build
-                Directory.
-            </para>
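-
-            <para>
-                As a minimal, illustrative sketch (the
-                <filename>mydistro</filename> name and the values shown are
-                hypothetical), a distribution configuration file might
-                contain entries such as the following:
-                <literallayout class='monospaced'>
-     # conf/distro/mydistro.conf
-     DISTRO = "mydistro"
-     DISTRO_NAME = "My Distribution"
-     DISTRO_VERSION = "1.0"
-
-     # Policy example: produce IPK packages
-     PACKAGE_CLASSES ?= "package_ipk"
-                </literallayout>
-            </para>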
-
-            <para>
-                The following list provides some explanation and references
-                for what you typically find in the distribution layer:
-                <itemizedlist>
-                    <listitem><para><emphasis>classes:</emphasis>
-                        Class files (<filename>.bbclass</filename>) hold
-                        common functionality that can be shared among
-                        recipes in the distribution.
-                        When your recipes inherit a class, they take on the
-                        settings and functions for that class.
-                        You can read more about class files in the
-                        "<link linkend='ref-classes'>Classes</link>" section.
-                        </para></listitem>
-                    <listitem><para><emphasis>conf:</emphasis>
-                        This area holds configuration files for the
-                        layer (<filename>conf/layer.conf</filename>),
-                        the distribution
-                        (<filename>conf/distro/<replaceable>distro</replaceable>.conf</filename>),
-                        and any distribution-wide include files.
-                        </para></listitem>
-                    <listitem><para><emphasis>recipes-*:</emphasis>
-                        Recipes and append files that affect common
-                        functionality across the distribution.
-                        This area could include recipes and append files
-                        to add distribution-specific configuration,
-                        initialization scripts, custom image recipes,
-                        and so forth.</para></listitem>
-                </itemizedlist>
-            </para>
-        </section>
-
-        <section id="bsp-layer">
-            <title>BSP Layer</title>
-
-            <para>
-                The BSP Layer provides machine configurations.
-                Everything in this layer is specific to the machine for which
-                you are building the image or the SDK.
-                A common structure or form is defined for BSP layers.
-                You can learn more about this structure in the
-                <ulink url='&YOCTO_DOCS_BSP_URL;'>Yocto Project Board Support Package (BSP) Developer's Guide</ulink>.
-                <note>
-                    In order for a BSP layer to be considered compliant with the
-                    Yocto Project, it must meet some structural requirements.
-                </note>
-            </para>
-
-            <para>
-                The BSP Layer's configuration directory contains
-                configuration files for the machine
-                (<filename>conf/machine/<replaceable>machine</replaceable>.conf</filename>) and,
-                of course, the layer (<filename>conf/layer.conf</filename>).
-            </para>
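-
-            <para>
-                As a hedged sketch (the machine name and settings are
-                hypothetical and not from a real BSP), a machine configuration
-                file typically sets items such as the following:
-                <literallayout class='monospaced'>
-     # conf/machine/mymachine.conf
-     #@TYPE: Machine
-     #@NAME: mymachine
-
-     require conf/machine/include/tune-cortexa9.inc
-
-     KERNEL_IMAGETYPE = "zImage"
-     SERIAL_CONSOLES = "115200;ttyS0"
-     IMAGE_FSTYPES += "tar.bz2 ext4"
-                </literallayout>
-            </para>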
-
-            <para>
-                The remainder of the layer is dedicated to specific recipes
-                by function: <filename>recipes-bsp</filename>,
-                <filename>recipes-core</filename>,
-                <filename>recipes-graphics</filename>, and
-                <filename>recipes-kernel</filename>.
-                Metadata can exist for multiple form factors, graphics
-                support systems, and so forth.
-                <note>
-                    While the figure shows several <filename>recipes-*</filename>
-                    directories, not all these directories appear in all
-                    BSP layers.
-                </note>
-            </para>
-        </section>
-
-        <section id="software-layer">
-            <title>Software Layer</title>
-
-            <para>
-                The software layer provides the Metadata for additional
-                software packages used during the build.
-                This layer does not include Metadata that is specific to the
-                distribution or the machine, which are found in their
-                respective layers.
-            </para>
-
-            <para>
-                This layer contains any new recipes that your project needs
-                in the form of recipe files.
-            </para>
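-
-            <para>
-                As a minimal, illustrative sketch (the recipe and file names
-                are hypothetical and follow the classic "Hello World"
-                pattern), a recipe in a software layer might look like the
-                following:
-                <literallayout class='monospaced'>
-     # recipes-example/helloworld/helloworld_1.0.bb
-     SUMMARY = "Simple hello world application"
-     LICENSE = "MIT"
-     LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
-
-     SRC_URI = "file://helloworld.c"
-
-     S = "${WORKDIR}"
-
-     do_compile() {
-         ${CC} ${CFLAGS} ${LDFLAGS} helloworld.c -o helloworld
-     }
-
-     do_install() {
-         install -d ${D}${bindir}
-         install -m 0755 helloworld ${D}${bindir}
-     }
-                </literallayout>
-            </para>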
-        </section>
-    </section>
-
-    <section id="sources-dev-environment">
-        <title>Sources</title>
-
-        <para>
-            In order for the OpenEmbedded build system to create an image or
-            any target, it must be able to access source files.
-            The
-            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>
-            represents source files using the "Upstream Project Releases",
-            "Local Projects", and "SCMs (optional)" boxes.
-            The figure represents mirrors, which also play a role in locating
-            source files, with the "Source Mirror(s)" box.
-        </para>
-
-        <para>
-            The method by which source files are ultimately organized is
-            a function of the project.
-            For example, for released software, projects tend to use tarballs
-            or other archived files that can capture the state of a release
-            guaranteeing that it is statically represented.
-            On the other hand, for a project that is more dynamic or
-            experimental in nature, a project might keep source files in a
-            repository controlled by a Source Control Manager (SCM) such as
-            Git.
-            Pulling source from a repository allows you to control
-            the point in the repository (the revision) from which you want to
-            build software.
-            Finally, a combination of the two might exist, which would give the
-            consumer a choice when deciding where to get source files.
-        </para>
-
-        <para>
-            BitBake uses the
-            <link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>
-            variable to point to source files regardless of their location.
-            Each recipe must have a <filename>SRC_URI</filename> variable
-            that points to the source.
-        </para>
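-
-        <para>
-            As a brief sketch (the project name, URL, and checksums are
-            hypothetical), a recipe's <filename>SRC_URI</filename> might
-            point to a released tarball:
-            <literallayout class='monospaced'>
-     SRC_URI = "http://example.com/releases/myproject-${PV}.tar.gz"
-     SRC_URI[md5sum] = "0123456789abcdef0123456789abcdef"
-     SRC_URI[sha256sum] = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef"
-            </literallayout>
-        </para>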
-
-        <para>
-            Another area that plays a significant role in where source files
-            come from is pointed to by the
-            <link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
-            variable.
-            This area is a cache that can hold previously downloaded source.
-            You can also instruct the OpenEmbedded build system to create
-            tarballs from Git repositories, which is not the default behavior,
-            and store them in the <filename>DL_DIR</filename> by using the
-            <link linkend='var-BB_GENERATE_MIRROR_TARBALLS'><filename>BB_GENERATE_MIRROR_TARBALLS</filename></link>
-            variable.
-        </para>
-
-        <para>
-            Judicious use of a <filename>DL_DIR</filename> directory can
-            save the build system a trip across the Internet when looking
-            for files.
-            A good method for using a download directory is to have
-            <filename>DL_DIR</filename> point to an area outside of your
-            Build Directory.
-            Doing so allows you to safely delete the Build Directory
-            if needed without fear of removing any downloaded source file.
-        </para>
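-
-        <para>
-            A brief, hedged sketch of the relevant
-            <filename>local.conf</filename> settings (the directory path is
-            hypothetical):
-            <literallayout class='monospaced'>
-     DL_DIR ?= "/home/user/yocto-downloads"
-     BB_GENERATE_MIRROR_TARBALLS = "1"
-            </literallayout>
-        </para>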
-
-        <para>
-            The remainder of this section provides a deeper look into the
-            source files and the mirrors.
-            Here is a more detailed look at the source file area of the
-            base figure:
-            <imagedata fileref="figures/source-input.png" align="center" width="7in" depth="7.5in" />
-        </para>
-
-        <section id='upstream-project-releases'>
-            <title>Upstream Project Releases</title>
-
-            <para>
-                Upstream project releases exist anywhere in the form of an
-                archived file (e.g. tarball or zip file).
-                These files correspond to individual recipes.
-                For example, the figure shows a specific release each for
-                BusyBox, Qt, and Dbus.
-                An archive file can be for any released product that can be
-                built using a recipe.
-            </para>
-        </section>
-
-        <section id='local-projects'>
-            <title>Local Projects</title>
-
-            <para>
-                Local projects are custom bits of software the user provides.
-                These bits reside somewhere local to a project - perhaps
-                a directory into which the user checks in items (e.g.
-                a local directory containing a development source tree
-                used by the group).
-            </para>
-
-            <para>
-                The canonical method for including a local project is to use
-                the
-                <link linkend='ref-classes-externalsrc'><filename>externalsrc</filename></link>
-                class.
-                You use either the <filename>local.conf</filename> file or a
-                recipe's append file to point the recipe at the local
-                directory on your disk and pull in the whole source tree.
-            </para>
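-
-            <para>
-                As a hedged sketch (the recipe name and path are
-                hypothetical), the following <filename>local.conf</filename>
-                lines point a recipe at a local source tree:
-                <literallayout class='monospaced'>
-     INHERIT += "externalsrc"
-     EXTERNALSRC_pn-myrecipe = "/path/to/my/source/tree"
-                </literallayout>
-            </para>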
-
-            <para>
-                For information on how to use the
-                <filename>externalsrc</filename> class, see the
-                "<link linkend='ref-classes-externalsrc'><filename>externalsrc.bbclass</filename></link>"
-                section.
-            </para>
-        </section>
-
-        <section id='scms'>
-            <title>Source Control Managers (Optional)</title>
-
-            <para>
-                Another place the build system can get source files from is
-                through an SCM such as Git or Subversion.
-                In this case, a repository is cloned or checked out.
-                The
-                <link linkend='ref-tasks-fetch'><filename>do_fetch</filename></link>
-                task inside BitBake uses
-                the <link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>
-                variable and the prefix (scheme) of each URI entry to
-                determine the correct fetcher module.
-            </para>
-
-            <note>
-                For information on how to have the OpenEmbedded build system
-                generate tarballs for Git repositories and place them in the
-                <link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
-                directory, see the
-                <link linkend='var-BB_GENERATE_MIRROR_TARBALLS'><filename>BB_GENERATE_MIRROR_TARBALLS</filename></link>
-                variable.
-            </note>
-
-            <para>
-                When fetching a repository, BitBake uses the
-                <link linkend='var-SRCREV'><filename>SRCREV</filename></link>
-                variable to determine the specific revision from which to
-                build.
-            </para>
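-
-            <para>
-                A brief sketch of a Git-based recipe fragment (the repository
-                URL, branch, and revision are hypothetical):
-                <literallayout class='monospaced'>
-     SRC_URI = "git://example.com/myproject.git;protocol=https;branch=master"
-     SRCREV = "3a1b5c7d9e0f1a2b3c4d5e6f7a8b9c0d1e2f3a4b"
-     S = "${WORKDIR}/git"
-                </literallayout>
-            </para>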
-        </section>
-
-        <section id='source-mirrors'>
-            <title>Source Mirror(s)</title>
-
-            <para>
-                Two kinds of mirrors exist: pre-mirrors and regular mirrors.
-                The <link linkend='var-PREMIRRORS'><filename>PREMIRRORS</filename></link>
-                and
-                <link linkend='var-MIRRORS'><filename>MIRRORS</filename></link>
-                variables point to these, respectively.
-                BitBake checks pre-mirrors before looking upstream for any
-                source files.
-                Pre-mirrors are appropriate when you have a shared directory
-                that is not a directory defined by the
-                <link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
-                variable.
-                A pre-mirror typically points to a shared directory that is
-                local to your organization.
-            </para>
-
-            <para>
-                Regular mirrors can be any site across the Internet that
-                serves as an alternative location for source code should the
-                primary site be unavailable for some reason.
-            </para>
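-
-            <para>
-                As a hedged sketch, a <filename>local.conf</filename> entry
-                that checks an organization-local mirror first (the URL is
-                hypothetical):
-                <literallayout class='monospaced'>
-     PREMIRRORS_prepend = "\
-          git://.*/.*   http://downloads.example.com/mirror/sources/ \n \
-          ftp://.*/.*   http://downloads.example.com/mirror/sources/ \n \
-          http://.*/.*  http://downloads.example.com/mirror/sources/ \n \
-          https://.*/.* http://downloads.example.com/mirror/sources/ \n"
-                </literallayout>
-            </para>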
-        </section>
-    </section>
-
-    <section id="package-feeds-dev-environment">
-        <title>Package Feeds</title>
-
-        <para>
-            When the OpenEmbedded build system generates an image or an SDK,
-            it gets the packages from a package feed area located in the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
-            The
-            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>
-            shows this package feeds area in the upper-right corner.
-        </para>
-
-        <para>
-            This section looks a little closer into the package feeds area used
-            by the build system.
-            Here is a more detailed look at the area:
-            <imagedata fileref="figures/package-feeds.png" align="center" width="7in" depth="6in" />
-        </para>
-
-        <para>
-            Package feeds are an intermediary step in the build process.
-            The OpenEmbedded build system provides classes to generate
-            different package types, and you specify which classes to enable
-            through the
-            <link linkend='var-PACKAGE_CLASSES'><filename>PACKAGE_CLASSES</filename></link>
-            variable.
-            Before placing the packages into package feeds,
-            the build process validates them with generated output quality
-            assurance checks through the
-            <link linkend='ref-classes-insane'><filename>insane</filename></link>
-            class.
-        </para>
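-
-        <para>
-            A short sketch of selecting the package backend in
-            <filename>local.conf</filename> (any one of the supported
-            packaging classes could be listed here):
-            <literallayout class='monospaced'>
-     PACKAGE_CLASSES ?= "package_rpm"
-            </literallayout>
-        </para>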
-
-        <para>
-            The package feed area resides in the Build Directory.
-            The directory the build system uses to temporarily store packages
-            is determined by a combination of variables and the particular
-            package manager in use.
-            See the "Package Feeds" box in the illustration and note the
-            information to the right of that area.
-            In particular, the following defines where package files are
-            kept:
-            <itemizedlist>
-                <listitem><para><link linkend='var-DEPLOY_DIR'><filename>DEPLOY_DIR</filename></link>:
-                    Defined as <filename>tmp/deploy</filename> in the Build
-                    Directory.
-                    </para></listitem>
-                <listitem><para><filename>DEPLOY_DIR_*</filename>:
-                    The package-type sub-folder, which depends on the
-                    package manager used.
-                    For RPM, IPK, or DEB packaging and tarball creation, the
-                    <link linkend='var-DEPLOY_DIR_RPM'><filename>DEPLOY_DIR_RPM</filename></link>,
-                    <link linkend='var-DEPLOY_DIR_IPK'><filename>DEPLOY_DIR_IPK</filename></link>,
-                    <link linkend='var-DEPLOY_DIR_DEB'><filename>DEPLOY_DIR_DEB</filename></link>,
-                    or
-                    <link linkend='var-DEPLOY_DIR_TAR'><filename>DEPLOY_DIR_TAR</filename></link>
-                    variables are used, respectively.
-                    </para></listitem>
-                <listitem><para><link linkend='var-PACKAGE_ARCH'><filename>PACKAGE_ARCH</filename></link>:
-                    Defines architecture-specific sub-folders.
-                    For example, packages could exist for the i586 or qemux86
-                    architectures.
-                    </para></listitem>
-            </itemizedlist>
-        </para>
-
-        <para>
-            BitBake uses the <filename>do_package_write_*</filename> tasks to
-            generate packages and place them into the package holding area (e.g.
-            <filename>do_package_write_ipk</filename> for IPK packages).
-            See the
-            "<link linkend='ref-tasks-package_write_deb'><filename>do_package_write_deb</filename></link>",
-            "<link linkend='ref-tasks-package_write_ipk'><filename>do_package_write_ipk</filename></link>",
-            "<link linkend='ref-tasks-package_write_rpm'><filename>do_package_write_rpm</filename></link>",
-            and
-            "<link linkend='ref-tasks-package_write_tar'><filename>do_package_write_tar</filename></link>"
-            sections for additional information.
-            As an example, consider a scenario where the IPK package manager
-            is being used and package architecture support for both i586
-            and qemux86 exists.
-            Packages for the i586 architecture are placed in
-            <filename>build/tmp/deploy/ipk/i586</filename>, while packages for
-            the qemux86 architecture are placed in
-            <filename>build/tmp/deploy/ipk/qemux86</filename>.
-        </para>
-    </section>
-
-    <section id='bitbake-dev-environment'>
-        <title>BitBake</title>
-
-        <para>
-            The OpenEmbedded build system uses
-            <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>
-            to produce images.
-            You can see from the
-            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>,
-            the BitBake area consists of several functional areas.
-            This section takes a closer look at each of those areas.
-        </para>
-
-        <para>
-            Separate documentation exists for the BitBake tool.
-            See the
-            <ulink url='&YOCTO_DOCS_BB_URL;#bitbake-user-manual'>BitBake User Manual</ulink>
-            for reference material on BitBake.
-        </para>
-
-        <section id='source-fetching-dev-environment'>
-            <title>Source Fetching</title>
-
-            <para>
-                The first stages of building a recipe are to fetch and unpack
-                the source code:
-                <imagedata fileref="figures/source-fetching.png" align="center" width="6.5in" depth="5in" />
-            </para>
-
-            <para>
-                The
-                <link linkend='ref-tasks-fetch'><filename>do_fetch</filename></link>
-                and
-                <link linkend='ref-tasks-unpack'><filename>do_unpack</filename></link>
-                tasks fetch the source files and unpack them into the work
-                directory.
-                <note>
-                    For every local file (e.g. <filename>file://</filename>)
-                    that is part of a recipe's
-                    <link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>
-                    statement, the OpenEmbedded build system takes a checksum
-                    of the file for the recipe and inserts the checksum into
-                    the signature for the <filename>do_fetch</filename> task.
-                    If any local file has been modified, the
-                    <filename>do_fetch</filename> task and all tasks that
-                    depend on it are re-executed.
-                </note>
-                By default, everything is accomplished in the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>,
-                which has a defined structure.
-                For additional general information on the Build Directory,
-                see the
-                "<link linkend='structure-core-build'><filename>build/</filename></link>"
-                section.
-            </para>
-
-            <para>
-                Unpacked source files are pointed to by the
-                <link linkend='var-S'><filename>S</filename></link> variable.
-                Each recipe has an area in the Build Directory where the
-                unpacked source code resides.
-                The name of that directory for any given recipe is defined from
-                several different variables.
-                You can see the variables that define these directories
-                by looking at the figure:
-                <itemizedlist>
-                    <listitem><para><link linkend='var-TMPDIR'><filename>TMPDIR</filename></link> -
-                        The base directory where the OpenEmbedded build system
-                        performs all its work during the build.
-                        </para></listitem>
-                    <listitem><para><link linkend='var-PACKAGE_ARCH'><filename>PACKAGE_ARCH</filename></link> -
-                        The architecture of the built package or packages.
-                        </para></listitem>
-                    <listitem><para><link linkend='var-TARGET_OS'><filename>TARGET_OS</filename></link> -
-                        The operating system of the target device.
-                        </para></listitem>
-                    <listitem><para><link linkend='var-PN'><filename>PN</filename></link> -
-                        The name of the built package.
-                        </para></listitem>
-                    <listitem><para><link linkend='var-PV'><filename>PV</filename></link> -
-                        The version of the recipe used to build the package.
-                        </para></listitem>
-                    <listitem><para><link linkend='var-PR'><filename>PR</filename></link> -
-                        The revision of the recipe used to build the package.
-                        </para></listitem>
-                    <listitem><para><link linkend='var-WORKDIR'><filename>WORKDIR</filename></link> -
-                        The location within <filename>TMPDIR</filename> where
-                        a specific package is built.
-                        </para></listitem>
-                    <listitem><para><link linkend='var-S'><filename>S</filename></link> -
-                        Contains the unpacked source files for a given recipe.
-                        </para></listitem>
-                </itemizedlist>
-            </para>
-        </section>
-
-        <section id='patching-dev-environment'>
-            <title>Patching</title>
-
-            <para>
-                Once source code is fetched and unpacked, BitBake locates
-                patch files and applies them to the source files:
-                <imagedata fileref="figures/patching.png" align="center" width="6in" depth="5in" />
-            </para>
-
-            <para>
-                The
-                <link linkend='ref-tasks-patch'><filename>do_patch</filename></link>
-                task processes recipes by
-                using the
-                <link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>
-                variable to locate applicable patch files, which by default
-                are <filename>*.patch</filename> or
-                <filename>*.diff</filename> files, or any file if
-                "apply=yes" is specified for the file in
-                <filename>SRC_URI</filename>.
-            </para>
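-
-            <para>
-                As a brief sketch (the file names are hypothetical), patches
-                are simply listed in <filename>SRC_URI</filename>, with
-                "apply=yes" forcing application of a file that does not have
-                a recognized patch extension:
-                <literallayout class='monospaced'>
-     SRC_URI += "file://fix-build.patch \
-                 file://extra-tweak.txt;apply=yes \
-                "
-                </literallayout>
-            </para>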
-
-            <para>
-                BitBake finds and applies multiple patches for a single recipe
-                in the order in which it finds the patches.
-                Patches are applied to the recipe's source files located in the
-                <link linkend='var-S'><filename>S</filename></link> directory.
-            </para>
-
-            <para>
-                For more information on how the source directories are
-                created, see the
-                "<link linkend='source-fetching-dev-environment'>Source Fetching</link>"
-                section.
-            </para>
-        </section>
-
-        <section id='configuration-and-compilation-dev-environment'>
-            <title>Configuration and Compilation</title>
-
-            <para>
-                After source code is patched, BitBake executes tasks that
-                configure and compile the source code:
-                <imagedata fileref="figures/configuration-compile-autoreconf.png" align="center" width="7in" depth="5in" />
-            </para>
-
-            <para>
-                This step in the build process consists of three tasks:
-                <itemizedlist>
-                    <listitem><para>
-                        <emphasis><link linkend='ref-tasks-prepare_recipe_sysroot'><filename>do_prepare_recipe_sysroot</filename></link>:</emphasis>
-                        This task sets up the two sysroots in
-                        <filename>${</filename><link linkend='var-WORKDIR'><filename>WORKDIR</filename></link><filename>}</filename>
-                        (i.e. <filename>recipe-sysroot</filename> and
-                        <filename>recipe-sysroot-native</filename>) so that
-                        the sysroots contain the contents of the
-                        <link linkend='ref-tasks-populate_sysroot'><filename>do_populate_sysroot</filename></link>
-                        tasks of the recipes on which the recipe
-                        containing the tasks depends.
-                        A sysroot exists for both the target and for the native
-                        binaries, which run on the host system.
-                        </para></listitem>
-                    <listitem><para><emphasis><filename>do_configure</filename>:</emphasis>
-                        This task configures the source by enabling and
-                        disabling any build-time and configuration options for
-                        the software being built.
-                        Configurations can come from the recipe itself as well
-                        as from an inherited class.
-                        Additionally, the software itself might configure itself
-                        depending on the target for which it is being built.
-                        </para>
-
-                        <para>The configurations handled by the
-                        <link linkend='ref-tasks-configure'><filename>do_configure</filename></link>
-                        task are specific
-                        to source code configuration for the source code
-                        being built by the recipe.</para>
-
-                        <para>If you are using the
-                        <link linkend='ref-classes-autotools'><filename>autotools</filename></link>
-                        class,
-                        you can add additional configuration options by using
-                        the <link linkend='var-EXTRA_OECONF'><filename>EXTRA_OECONF</filename></link>
-                        or
-                        <link linkend='var-PACKAGECONFIG_CONFARGS'><filename>PACKAGECONFIG_CONFARGS</filename></link>
-                        variables (a brief sketch follows this list).
-                        For information on how these variables work within
-                        that class, see the
-                        <filename>meta/classes/autotools.bbclass</filename> file.
-                        </para></listitem>
-                    <listitem><para><emphasis><filename>do_compile</filename>:</emphasis>
-                        Once a configuration task has been satisfied, BitBake
-                        compiles the source using the
-                        <link linkend='ref-tasks-compile'><filename>do_compile</filename></link>
-                        task.
-                        Compilation occurs in the directory pointed to by the
-                        <link linkend='var-B'><filename>B</filename></link>
-                        variable.
-                        Realize that the <filename>B</filename> directory is, by
-                        default, the same as the
-                        <link linkend='var-S'><filename>S</filename></link>
-                        directory.</para></listitem>
-                    <listitem><para><emphasis><filename>do_install</filename>:</emphasis>
-                        Once compilation is done, BitBake executes the
-                        <link linkend='ref-tasks-install'><filename>do_install</filename></link>
-                        task.
-                        This task copies files from the <filename>B</filename>
-                        directory and places them in a holding area pointed to
-                        by the
-                        <link linkend='var-D'><filename>D</filename></link>
-                        variable.</para></listitem>
-                </itemizedlist>
-            </para>
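-
-            <para>
-                As a hedged sketch for an autotools-based recipe (the option
-                shown is hypothetical and depends entirely on the software
-                being built), extra configure options can be passed as
-                follows:
-                <literallayout class='monospaced'>
-     EXTRA_OECONF += "--disable-static"
-                </literallayout>
-            </para>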
-        </section>
-
-        <section id='package-splitting-dev-environment'>
-            <title>Package Splitting</title>
-
-            <para>
-                After source code is configured and compiled, the
-                OpenEmbedded build system analyzes
-                the results and splits the output into packages:
-                <imagedata fileref="figures/analysis-for-package-splitting.png" align="center" width="7in" depth="7in" />
-            </para>
-
-            <para>
-                The
-                <link linkend='ref-tasks-package'><filename>do_package</filename></link>
-                and
-                <link linkend='ref-tasks-packagedata'><filename>do_packagedata</filename></link>
-                tasks combine to analyze
-                the files found in the
-                <link linkend='var-D'><filename>D</filename></link> directory
-                and split them into subsets based on available packages and
-                files.
-                Among other things, the analysis involves splitting out
-                debugging symbols, examining shared library dependencies
-                between packages, and examining package relationships.
-                The <filename>do_packagedata</filename> task creates package
-                metadata based on the analysis such that the
-                OpenEmbedded build system can generate the final packages.
-                Working, staged, and intermediate results of the analysis
-                and package splitting process use these areas:
-                <itemizedlist>
-                    <listitem><para><link linkend='var-PKGD'><filename>PKGD</filename></link> -
-                        The destination directory for packages before they are
-                        split.
-                        </para></listitem>
-                    <listitem><para><link linkend='var-PKGDATA_DIR'><filename>PKGDATA_DIR</filename></link> -
-                        A shared, global-state directory that holds data
-                        generated during the packaging process.
-                        </para></listitem>
-                    <listitem><para><link linkend='var-PKGDESTWORK'><filename>PKGDESTWORK</filename></link> -
-                        A temporary work area used by the
-                        <filename>do_package</filename> task.
-                        </para></listitem>
-                    <listitem><para><link linkend='var-PKGDEST'><filename>PKGDEST</filename></link> -
-                        The parent directory for packages after they have
-                        been split.
-                        </para></listitem>
-                </itemizedlist>
-                The <link linkend='var-FILES'><filename>FILES</filename></link>
-                variable defines the files that go into each package in
-                <link linkend='var-PACKAGES'><filename>PACKAGES</filename></link>.
-                If you want details on how this is accomplished, you can
-                look at the
-                <link linkend='ref-classes-package'><filename>package</filename></link>
-                class.
-            </para>
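-
-            <para>
-                A minimal sketch of steering a file into a specific package
-                (the path shown is hypothetical):
-                <literallayout class='monospaced'>
-     FILES_${PN} += "${datadir}/myapp/config.sample"
-                </literallayout>
-            </para>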
-
-            <para>
-                Depending on the type of packages being created (RPM, DEB, or
-                IPK), the <filename>do_package_write_*</filename> task
-                creates the actual packages and places them in the
-                Package Feed area, which is
-                <filename>${TMPDIR}/deploy</filename>.
-                You can see the
-                "<link linkend='package-feeds-dev-environment'>Package Feeds</link>"
-                section for more detail on that part of the build process.
-                <note>
-                    Support for creating feeds directly from the
-                    <filename>deploy/*</filename> directories does not exist.
-                    Creating such feeds usually requires some kind of feed
-                    maintenance mechanism that would upload the new packages
-                    into an official package feed (e.g. the
-                    Ångström distribution).
-                    This functionality is highly distribution-specific
-                    and thus is not provided out of the box.
-                </note>
-            </para>
-        </section>
-
-        <section id='image-generation-dev-environment'>
-            <title>Image Generation</title>
-
-            <para>
-                Once packages are split and stored in the Package Feeds area,
-                the OpenEmbedded build system uses BitBake to generate the
-                root filesystem image:
-                <imagedata fileref="figures/image-generation.png" align="center" width="6in" depth="7in" />
-            </para>
-
-            <para>
-                The image generation process consists of several stages and
-                depends on several tasks and variables.
-                The
-                <link linkend='ref-tasks-rootfs'><filename>do_rootfs</filename></link>
-                task creates the root filesystem (file and directory structure)
-                for an image.
-                This task uses several key variables to help create the list
-                of packages to actually install:
-                <itemizedlist>
-                    <listitem><para><link linkend='var-IMAGE_INSTALL'><filename>IMAGE_INSTALL</filename></link>:
-                        Lists out the base set of packages to install from
-                        the Package Feeds area.</para></listitem>
-                    <listitem><para><link linkend='var-PACKAGE_EXCLUDE'><filename>PACKAGE_EXCLUDE</filename></link>:
-                        Specifies packages that should not be installed.
-                        </para></listitem>
-                    <listitem><para><link linkend='var-IMAGE_FEATURES'><filename>IMAGE_FEATURES</filename></link>:
-                        Specifies features to include in the image.
-                        Most of these features map to additional packages for
-                        installation.</para></listitem>
-                    <listitem><para><link linkend='var-PACKAGE_CLASSES'><filename>PACKAGE_CLASSES</filename></link>:
-                        Specifies the package backend to use and consequently
-                        helps determine where to locate packages within the
-                        Package Feeds area.</para></listitem>
-                    <listitem><para><link linkend='var-IMAGE_LINGUAS'><filename>IMAGE_LINGUAS</filename></link>:
-                        Determines the language(s) for which additional
-                        language support packages are installed.
-                        </para></listitem>
-                    <listitem><para><link linkend='var-PACKAGE_INSTALL'><filename>PACKAGE_INSTALL</filename></link>:
-                        The final list of packages passed to the package manager
-                        for installation into the image.
-                        </para></listitem>
-                </itemizedlist>
-            </para>
-
-            <para>
-                With
-                <link linkend='var-IMAGE_ROOTFS'><filename>IMAGE_ROOTFS</filename></link>
-                pointing to the location of the filesystem under construction and
-                the <filename>PACKAGE_INSTALL</filename> variable providing the
-                final list of packages to install, the root file system is
-                created.
-            </para>
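-
-            <para>
-                As a brief, hedged sketch of influencing the package list
-                from <filename>local.conf</filename> or an image recipe:
-                <literallayout class='monospaced'>
-     IMAGE_INSTALL_append = " dropbear"
-     IMAGE_FEATURES += "debug-tweaks"
-                </literallayout>
-            </para>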
-
-            <para>
-                Package installation is under control of the package manager
-                (e.g. dnf/rpm, opkg, or apt/dpkg) regardless of whether or
-                not package management is enabled for the target.
-                At the end of the process, if package management is not
-                enabled for the target, the package manager's data files
-                are deleted from the root filesystem.
-                As part of the final stage of package installation, postinstall
-                scripts that are part of the packages are run.
-                Any scripts that fail to run
-                on the build host are run on the target when the target system
-                is first booted.
-                If you are using a
-                <ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-read-only-root-filesystem'>read-only root filesystem</ulink>,
-                all the post installation scripts must succeed during the
-                package installation phase since the root filesystem is
-                read-only.
-            </para>
-
-            <para>
-                The final stages of the <filename>do_rootfs</filename> task
-                handle post processing.
-                Post processing includes creation of a manifest file and
-                optimizations.
-            </para>
-
-            <para>
-                The manifest file (<filename>.manifest</filename>) resides
-                in the same directory as the root filesystem image.
-                This file lists out, line-by-line, the installed packages.
-                The manifest file is useful for the
-                <link linkend='ref-classes-testimage*'><filename>testimage</filename></link>
-                class, for example, to determine whether or not to run
-                specific tests.
-                See the
-                <link linkend='var-IMAGE_MANIFEST'><filename>IMAGE_MANIFEST</filename></link>
-                variable for additional information.
-            </para>
-
-            <para>
-                Optimizing processes run across the image include
-                <filename>mklibs</filename>, <filename>prelink</filename>,
-                and any other post-processing commands as defined by the
-                <link linkend='var-ROOTFS_POSTPROCESS_COMMAND'><filename>ROOTFS_POSTPROCESS_COMMAND</filename></link>
-                variable.
-                The <filename>mklibs</filename> process optimizes the size
-                of the libraries, while the
-                <filename>prelink</filename> process optimizes the dynamic
-                linking of shared libraries to reduce start up time of
-                executables.
-            </para>
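-
-            <para>
-                As a hedged sketch (the function name and its contents are
-                hypothetical), an additional post-processing command can be
-                appended from an image recipe or configuration file:
-                <literallayout class='monospaced'>
-     ROOTFS_POSTPROCESS_COMMAND += "remove_sample_configs; "
-
-     remove_sample_configs() {
-         rm -f ${IMAGE_ROOTFS}${sysconfdir}/*.sample
-     }
-                </literallayout>
-            </para>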
-
-            <para>
-                After the root filesystem is built, processing begins on
-                the image through the <filename>do_image</filename> task.
-                The build system runs any pre-processing commands as defined
-                by the
-                <link linkend='var-IMAGE_PREPROCESS_COMMAND'><filename>IMAGE_PREPROCESS_COMMAND</filename></link>
-                variable.
-                This variable specifies a list of functions to call before
-                the OpenEmbedded build system creates the final image output
-                files.
-            </para>
-
-            <para>
-                The <filename>do_image</filename> task dynamically creates
-                other <filename>do_image_*</filename> tasks as needed, which
-                include compressing the root filesystem image to reduce the
-                overall size of the image.
-                The process turns everything into an image file or a set of
-                image files.
-                The formats used for the root filesystem depend on the
-                <link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>
-                variable.
-            </para>
-
-            <para>
-                The final task involved in image creation is the
-                <filename>do_image_complete</filename> task.
-                This task completes the image by applying any image
-                post processing as defined through the
-                <link linkend='var-IMAGE_POSTPROCESS_COMMAND'><filename>IMAGE_POSTPROCESS_COMMAND</filename></link>
-                variable.
-                The variable specifies a list of functions to call once the
-                OpenEmbedded build system has created the final image output
-                files.
-            </para>
-
-            <note>
-                The entire image generation process is run under Pseudo.
-                Running under Pseudo ensures that the files in the root
-                filesystem have correct ownership.
-            </note>
-        </section>
-
-        <section id='sdk-generation-dev-environment'>
-            <title>SDK Generation</title>
-
-            <para>
-                The OpenEmbedded build system uses BitBake to generate the
-                Software Development Kit (SDK) installer script for both the
-                standard and extensible SDKs:
-                <imagedata fileref="figures/sdk-generation.png" align="center" />
-            </para>
-
-            <note>
-                For more information on the cross-development toolchain
-                generation, see the
-                "<link linkend='cross-development-toolchain-generation'>Cross-Development Toolchain Generation</link>"
-                section.
-                For information on advantages gained when building a
-                cross-development toolchain using the
-                <link linkend='ref-tasks-populate_sdk'><filename>do_populate_sdk</filename></link>
-                task, see the
-                "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-building-an-sdk-installer'>Building an SDK Installer</ulink>"
-                section in the Yocto Project Software Development Kit (SDK)
-                Developer's Guide.
-            </note>
-
-            <para>
-                Like image generation, the SDK script process consists of
-                several stages and depends on many variables.
-                The <filename>do_populate_sdk</filename> and
-                <filename>do_populate_sdk_ext</filename> tasks use these
-                key variables to help create the list of packages to actually
-                install.
-                For information on the variables listed in the figure, see the
-                "<link linkend='sdk-dev-environment'>Application Development SDK</link>"
-                section.
-            </para>
-
-            <para>
-                The <filename>do_populate_sdk</filename> task helps create
-                the standard SDK and handles two parts: a target part and a
-                host part.
-                The target part is the part built for the target hardware and
-                includes libraries and headers.
-                The host part is the part of the SDK that runs on the
-                <link linkend='var-SDKMACHINE'><filename>SDKMACHINE</filename></link>.
-            </para>
-
-            <para>
-                The <filename>do_populate_sdk_ext</filename> task helps create
-                the extensible SDK and handles host and target parts
-                differently than its counterpart does for the standard SDK.
-                For the extensible SDK, the task encapsulates the build system,
-                which includes everything needed (host and target) for the SDK.
-            </para>
-
-            <para>
-                Regardless of the type of SDK being constructed, the
-                tasks perform some cleanup after which a cross-development
-                environment setup script and any needed configuration files
-                are created.
-                The final output is the Cross-development
-                toolchain installation script (<filename>.sh</filename> file),
-                which includes the environment setup script.
-            </para>
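-
-            <para>
-                As a brief usage sketch, the installers are typically
-                generated with commands such as the following (the image
-                name is just an example):
-                <literallayout class='monospaced'>
-     $ bitbake core-image-minimal -c populate_sdk
-     $ bitbake core-image-minimal -c populate_sdk_ext
-                </literallayout>
-            </para>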
-        </section>
-
-        <section id='stamp-files-and-the-rerunning-of-tasks'>
-            <title>Stamp Files and the Rerunning of Tasks</title>
-
-            <para>
-                For each task that completes successfully, BitBake writes a
-                stamp file into the
-                <link linkend='var-STAMPS_DIR'><filename>STAMPS_DIR</filename></link>
-                directory.
-                The beginning of the stamp file's filename is determined by the
-                <link linkend='var-STAMP'><filename>STAMP</filename></link>
-                variable, and the end of the name consists of the task's name
-                and current
-                <ulink url='&YOCTO_DOCS_BB_URL;#checksums'>input checksum</ulink>.
-                <note>
-                    This naming scheme assumes that
-                    <ulink url='&YOCTO_DOCS_BB_URL;#var-BB_SIGNATURE_HANDLER'><filename>BB_SIGNATURE_HANDLER</filename></ulink>
-                    is "OEBasicHash", which is almost always the case in
-                    current OpenEmbedded.
-                </note>
-                To determine if a task needs to be rerun, BitBake checks if a
-                stamp file with a matching input checksum exists for the task.
-                If such a stamp file exists, the task's output is assumed to
-                exist and still be valid.
-                If the file does not exist, the task is rerun.
-                <note>
-                    <para>The stamp mechanism is more general than the shared
-                    state (sstate) cache mechanism described in the
-                    "<link linkend='setscene-tasks-and-shared-state'>Setscene Tasks and Shared State</link>"
-                    section.
-                    BitBake avoids rerunning any task that has a valid
-                    stamp file, not just tasks that can be accelerated through
-                    the sstate cache.</para>
-                    <para>However, you should realize that stamp files only
-                    serve as a marker that some work has been done and that
-                    these files do not record task output.
-                    The actual task output would usually be somewhere in
-                    <link linkend='var-TMPDIR'><filename>TMPDIR</filename></link>
-                    (e.g. in some recipe's
-                    <link linkend='var-WORKDIR'><filename>WORKDIR</filename></link>.)
-                    What the sstate cache mechanism adds is a way to cache task
-                    output that can then be shared between build machines.
-                    </para>
-                </note>
-                Since <filename>STAMPS_DIR</filename> is usually a subdirectory
-                of <filename>TMPDIR</filename>, removing
-                <filename>TMPDIR</filename> will also remove
-                <filename>STAMPS_DIR</filename>, which means tasks will
-                properly be rerun to repopulate <filename>TMPDIR</filename>.
-            </para>
-
-            <para>
-                If you want some task to always be considered "out of date",
-                you can mark it with the
-                <ulink url='&YOCTO_DOCS_BB_URL;#variable-flags'><filename>nostamp</filename></ulink>
-                varflag.
-                If some other task depends on such a task, then that task will
-                also always be considered out of date, which might not be what
-                you want.
-            </para>
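-
-            <para>
-                A minimal sketch of the varflag in a recipe (the task name is
-                hypothetical):
-                <literallayout class='monospaced'>
-     do_deploy_reports[nostamp] = "1"
-                </literallayout>
-            </para>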
-
-            <para>
-                For details on how to view information about a task's
-                signature, see the
-                "<link linkend='usingpoky-viewing-task-variable-dependencies'>Viewing Task Variable Dependencies</link>"
-                section.
-            </para>
-        </section>
-
-        <section id='setscene-tasks-and-shared-state'>
-            <title>Setscene Tasks and Shared State</title>
-
-            <para>
-                The description of tasks so far assumes that BitBake needs to
-                build everything and there are no prebuilt objects available.
-                BitBake does support skipping tasks if prebuilt objects are
-                available.
-                These objects are usually made available in the form of a
-                shared state (sstate) cache.
-                <note>
-                    For information on variables affecting sstate, see the
-                    <link linkend='var-SSTATE_DIR'><filename>SSTATE_DIR</filename></link>
-                    and
-                    <link linkend='var-SSTATE_MIRRORS'><filename>SSTATE_MIRRORS</filename></link>
-                    variables.
-                </note>
-            </para>
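-
-            <para>
-                As a hedged sketch, a shared sstate cache can be referenced
-                from <filename>local.conf</filename> (the URL is
-                hypothetical; "PATH" is substituted by BitBake):
-                <literallayout class='monospaced'>
-     SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/PATH"
-                </literallayout>
-            </para>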
-
-            <para>
-                The idea of a setscene task (i.e.
-                <filename>do_</filename><replaceable>taskname</replaceable><filename>_setscene</filename>)
-                is a version of the task where
-                instead of building something, BitBake can skip to the end
-                result and simply place a set of files into specific locations
-                as needed.
-                In some cases, it makes sense to have a setscene task variant
-                (e.g. generating package files in the
-                <filename>do_package_write_*</filename> task).
-                In other cases, it does not make sense (e.g. for a
-                <link linkend='ref-tasks-patch'><filename>do_patch</filename></link>
-                or
-                <link linkend='ref-tasks-unpack'><filename>do_unpack</filename></link>
-                task) because the work involved would be equal to or greater
-                than that of the underlying task.
-            </para>
-
-            <para>
-                In the OpenEmbedded build system, the common tasks that have
-                setscene variants are <link linkend='ref-tasks-package'><filename>do_package</filename></link>,
-                <filename>do_package_write_*</filename>,
-                <link linkend='ref-tasks-deploy'><filename>do_deploy</filename></link>,
-                <link linkend='ref-tasks-packagedata'><filename>do_packagedata</filename></link>,
-                and
-                <link linkend='ref-tasks-populate_sysroot'><filename>do_populate_sysroot</filename></link>.
-                Notice that these are most of the tasks whose output is an
-                end result.
-            </para>
-
-            <para>
-                The OpenEmbedded build system has knowledge of the relationship
-                between these tasks and other tasks that precede them.
-                For example, if BitBake runs
-                <filename>do_populate_sysroot_setscene</filename> for
-                something, there is little point in running any of the
-                <filename>do_fetch</filename>, <filename>do_unpack</filename>,
-                <filename>do_patch</filename>,
-                <filename>do_configure</filename>,
-                <filename>do_compile</filename>, and
-                <filename>do_install</filename> tasks.
-                However, if <filename>do_package</filename> needs to be run,
-                BitBake would need to run those other tasks.
-            </para>
-
-            <para>
-                It becomes more complicated if everything can come from an
-                sstate cache because some objects are simply not required at
-                all.
-                For example, you do not need a compiler or native tools, such
-                as quilt, if there is nothing to compile or patch.
-                If the <filename>do_package_write_*</filename> packages are
-                available from sstate, BitBake does not need the
-                <filename>do_package</filename> task data.
-            </para>
-
-            <para>
-                To handle all these complexities, BitBake runs in two phases.
-                The first is the "setscene" stage.
-                During this stage, BitBake first checks the sstate cache for
-                any targets it is planning to build.
-                BitBake does a fast check to see if the object exists rather
-                than a complete download.
-                If nothing exists, the setscene stage completes and the
-                second phase, the main build, proceeds.
-            </para>
-
-            <para>
-                If objects are found in the sstate cache, the OpenEmbedded
-                build system works backwards from the end targets specified
-                by the user.
-                For example, if an image is being built, the OpenEmbedded build
-                system first looks for the packages needed for that image and
-                the tools needed to construct an image.
-                If those are available, the compiler is not needed.
-                Thus, the compiler is not even downloaded.
-                If something was found to be unavailable, or the download or
-                setscene task fails, the OpenEmbedded build system then tries
-                to install dependencies, such as the compiler, from the cache.
-            </para>
-
-            <para>
-                The availability of objects in the sstate cache is checked by
-                the function specified by the
-                <ulink url='&YOCTO_DOCS_BB_URL;#var-BB_HASHCHECK_FUNCTION'><filename>BB_HASHCHECK_FUNCTION</filename></ulink>
-                variable, which returns a list of the objects that are
-                available.
-                The function specified by the
-                <ulink url='&YOCTO_DOCS_BB_URL;#var-BB_SETSCENE_DEPVALID'><filename>BB_SETSCENE_DEPVALID</filename></ulink>
-                variable determines whether a given dependency relationship
-                needs to be followed.
-                That function returns a True or False value.
-            </para>
-        </section>
-    </section>
-
-    <section id='images-dev-environment'>
-        <title>Images</title>
-
-        <para>
-            The images produced by the OpenEmbedded build system
-            are compressed forms of the
-            root filesystem that are ready to boot on a target device.
-            You can see from the
-            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>
-            that BitBake output, in part, consists of images.
-            This section is going to look more closely at this output:
-            <imagedata fileref="figures/images.png" align="center" width="5.5in" depth="5.5in" />
-        </para>
-
-        <para>
-            For a list of example images that the Yocto Project provides,
-            see the
-            "<link linkend='ref-images'>Images</link>" chapter.
-        </para>
-
-        <para>
-            Images are written out to the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
-            inside the <filename>tmp/deploy/images/<replaceable>machine</replaceable>/</filename>
-            folder as shown in the figure.
-            This folder contains any files expected to be loaded on the
-            target device.
-            The
-            <link linkend='var-DEPLOY_DIR'><filename>DEPLOY_DIR</filename></link>
-            variable points to the <filename>deploy</filename> directory,
-            while the
-            <link linkend='var-DEPLOY_DIR_IMAGE'><filename>DEPLOY_DIR_IMAGE</filename></link>
-            variable points to the appropriate directory containing images for
-            the current configuration.
-            <itemizedlist>
-                <listitem><para><filename><replaceable>kernel-image</replaceable></filename>:
-                    A kernel binary file.
-                    The <link linkend='var-KERNEL_IMAGETYPE'><filename>KERNEL_IMAGETYPE</filename></link>
-                    variable setting determines the naming scheme for the
-                    kernel image file.
-                    Depending on that variable, the file could begin with
-                    a variety of naming strings.
-                    The <filename>deploy/images/<replaceable>machine</replaceable></filename>
-                    directory can contain multiple image files for the
-                    machine.</para></listitem>
-                <listitem><para><filename><replaceable>root-filesystem-image</replaceable></filename>:
-                    Root filesystems for the target device (e.g.
-                    <filename>*.ext3</filename> or <filename>*.bz2</filename>
-                    files).
-                    The <link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>
-                    variable setting determines the root filesystem image
-                    type.
-                    The <filename>deploy/images/<replaceable>machine</replaceable></filename>
-                    directory can contain multiple root filesystems for the
-                    machine.</para></listitem>
-                <listitem><para><filename><replaceable>kernel-modules</replaceable></filename>:
-                    Tarballs that contain all the modules built for the kernel.
-                    Kernel module tarballs exist for legacy purposes and
-                    can be suppressed by setting the
-                    <link linkend='var-MODULE_TARBALL_DEPLOY'><filename>MODULE_TARBALL_DEPLOY</filename></link>
-                    variable to "0".
-                    The <filename>deploy/images/<replaceable>machine</replaceable></filename>
-                    directory can contain multiple kernel module tarballs
-                    for the machine.</para></listitem>
-                <listitem><para><filename><replaceable>bootloaders</replaceable></filename>:
-                    Bootloaders supporting the image, if applicable to the
-                    target machine.
-                    The <filename>deploy/images/<replaceable>machine</replaceable></filename>
-                    directory can contain multiple bootloaders for the
-                    machine.</para></listitem>
-                <listitem><para><filename><replaceable>symlinks</replaceable></filename>:
-                    The <filename>deploy/images/<replaceable>machine</replaceable></filename>
-                    folder contains
-                    a symbolic link that points to the most recently built file
-                    for each machine.
-                    These links might be useful for external scripts that
-                    need to obtain the latest version of each file.
-                    </para></listitem>
-            </itemizedlist>
-        </para>
-    </section>
-
-    <section id='sdk-dev-environment'>
-        <title>Application Development SDK</title>
-
-        <para>
-            In the
-            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>,
-            the output labeled "Application Development SDK" represents an
-            SDK.
-            The SDK generation process differs depending on whether you build
-            a standard SDK
-            (e.g. <filename>bitbake -c populate_sdk</filename> <replaceable>imagename</replaceable>)
-            or an extensible SDK
-            (e.g. <filename>bitbake -c populate_sdk_ext</filename> <replaceable>imagename</replaceable>).
-            This section is going to take a closer look at this output:
-            <imagedata fileref="figures/sdk.png" align="center" width="9in" depth="7.25in" />
-        </para>
-
-        <para>
-            The specific form of this output is a self-extracting
-            SDK installer (<filename>*.sh</filename>) that, when run,
-            installs the SDK, which consists of a cross-development
-            toolchain, a set of libraries and headers, and an SDK
-            environment setup script.
-            Running this installer essentially sets up your
-            cross-development environment.
-            You can think of the cross-toolchain as the "host"
-            part because it runs on the SDK machine.
-            You can think of the libraries and headers as the "target"
-            part because they are built for the target hardware.
-            The environment setup script is added so that you can initialize
-            the environment before using the tools.
-        </para>
-
-        <note><title>Notes</title>
-            <itemizedlist>
-                <listitem><para>
-                    The Yocto Project supports several methods by which you can
-                    set up this cross-development environment.
-                    These methods include downloading pre-built SDK installers
-                    or building and installing your own SDK installer.
-                    </para></listitem>
-                <listitem><para>
-                    For background information on cross-development toolchains
-                    in the Yocto Project development environment, see the
-                    "<link linkend='cross-development-toolchain-generation'>Cross-Development Toolchain Generation</link>"
-                    section.
-                    </para></listitem>
-                <listitem><para>
-                    For information on setting up a cross-development
-                    environment, see the
-                    <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
-                    </para></listitem>
-            </itemizedlist>
-        </note>
-
-        <para>
-            Once built, the SDK installers are written out to the
-            <filename>deploy/sdk</filename> folder inside the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
-            as shown in the figure at the beginning of this section.
-            Depending on the type of SDK, several variables exist that help
-            configure these files.
-            The following list shows the variables associated with a standard
-            SDK:
-            <itemizedlist>
-                <listitem><para><link linkend='var-DEPLOY_DIR'><filename>DEPLOY_DIR</filename></link>:
-                    Points to the <filename>deploy</filename>
-                    directory.</para></listitem>
-                <listitem><para><link linkend='var-SDKMACHINE'><filename>SDKMACHINE</filename></link>:
-                    Specifies the architecture of the machine
-                    on which the cross-development tools are run to
-                    create packages for the target hardware.
-                    </para></listitem>
-                <listitem><para><link linkend='var-SDKIMAGE_FEATURES'><filename>SDKIMAGE_FEATURES</filename></link>:
-                    Lists the features to include in the "target" part
-                    of the SDK.
-                    </para></listitem>
-                <listitem><para><link linkend='var-TOOLCHAIN_HOST_TASK'><filename>TOOLCHAIN_HOST_TASK</filename></link>:
-                    Lists packages that make up the host
-                    part of the SDK (i.e. the part that runs on
-                    the <filename>SDKMACHINE</filename>).
-                    When you use
-                    <filename>bitbake -c populate_sdk <replaceable>imagename</replaceable></filename>
-                    to create the SDK, a set of default packages
-                    applies.
-                    This variable allows you to add more packages.
-                    </para></listitem>
-                <listitem><para><link linkend='var-TOOLCHAIN_TARGET_TASK'><filename>TOOLCHAIN_TARGET_TASK</filename></link>:
-                    Lists packages that make up the target part
-                    of the SDK (i.e. the part built for the
-                    target hardware).
-                    </para></listitem>
-                <listitem><para><link linkend='var-SDKPATH'><filename>SDKPATH</filename></link>:
-                    Defines the default SDK installation path offered by the
-                    installation script.
-                    </para></listitem>
-            </itemizedlist>
-            This next list shows the variables associated with an extensible
-            SDK:
-            <itemizedlist>
-                <listitem><para><link linkend='var-DEPLOY_DIR'><filename>DEPLOY_DIR</filename></link>:
-                    Points to the <filename>deploy</filename> directory.
-                    </para></listitem>
-                <listitem><para><link linkend='var-SDK_EXT_TYPE'><filename>SDK_EXT_TYPE</filename></link>:
-                    Controls whether or not shared state artifacts are copied
-                    into the extensible SDK.
-                    By default, all required shared state artifacts are copied
-                    into the SDK.
-                    </para></listitem>
-                <listitem><para><link linkend='var-SDK_INCLUDE_PKGDATA'><filename>SDK_INCLUDE_PKGDATA</filename></link>:
-                    Specifies whether or not packagedata will be included in
-                    the extensible SDK for all recipes in the "world" target.
-                    </para></listitem>
-                <listitem><para><link linkend='var-SDK_INCLUDE_TOOLCHAIN'><filename>SDK_INCLUDE_TOOLCHAIN</filename></link>:
-                    Specifies whether or not the toolchain will be included
-                    when building the extensible SDK.
-                    </para></listitem>
-                <listitem><para><link linkend='var-SDK_LOCAL_CONF_WHITELIST'><filename>SDK_LOCAL_CONF_WHITELIST</filename></link>:
-                    A list of variables allowed through from the build system
-                    configuration into the extensible SDK configuration.
-                    </para></listitem>
-                <listitem><para><link linkend='var-SDK_LOCAL_CONF_BLACKLIST'><filename>SDK_LOCAL_CONF_BLACKLIST</filename></link>:
-                    A list of variables not allowed through from the build
-                    system configuration into the extensible SDK configuration.
-                    </para></listitem>
-                <listitem><para><link linkend='var-SDK_INHERIT_BLACKLIST'><filename>SDK_INHERIT_BLACKLIST</filename></link>:
-                    A list of classes to remove from the
-                    <link linkend='var-INHERIT'><filename>INHERIT</filename></link>
-                    value globally within the extensible SDK configuration.
-                    </para></listitem>
-            </itemizedlist>
-        </para>
-    </section>
-
-</chapter>
-<!--
-vim: expandtab tw=80 ts=4
--->
diff --git a/import-layers/yocto-poky/documentation/ref-manual/faq.xml b/import-layers/yocto-poky/documentation/ref-manual/faq.xml
index 5f3f173..11dfc5b 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/faq.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/faq.xml
@@ -13,11 +13,11 @@
         </question>
         <answer>
             <para>
-                The term "<ulink url='&YOCTO_DOCS_DEV_URL;#poky'>Poky</ulink>"
+                The term "<link link='poky'>Poky</link>"
                 refers to the specific reference build system that
                 the Yocto Project provides.
-                Poky is based on <ulink url='&YOCTO_DOCS_DEV_URL;#oe-core'>OE-Core</ulink>
-                and <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>.
+                Poky is based on <link linkend='oe-core'>OE-Core</link>
+                and <link linkend='bitbake-term'>BitBake</link>.
                 Thus, the generic term used here for the build system is
                 the "OpenEmbedded build system."
                 Development in the Yocto Project using Poky is closely tied to OpenEmbedded, with
@@ -60,7 +60,7 @@
                 There are three areas that help with stability;
                 <itemizedlist>
                     <listitem><para>The Yocto Project team keeps
-                        <ulink url='&YOCTO_DOCS_DEV_URL;#oe-core'>OE-Core</ulink> small
+                        <link linkend='oe-core'>OE-Core</link> small
                         and focused, containing around 830 recipes as opposed to the thousands
                         available in other OpenEmbedded community layers.
                         Keeping it small makes it easy to test and maintain.</para></listitem>
@@ -86,7 +86,7 @@
                 Board Support Package (BSP) layer for it.
                 For more information on how to create a BSP layer, see the
                 "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
-                section in the Yocto Project Development Manual and the
+                section in the Yocto Project Development Tasks Manual and the
                 <ulink url='&YOCTO_DOCS_BSP_URL;'>Yocto Project Board Support Package (BSP) Developer's Guide</ulink>.
             </para>
             <para>
@@ -143,7 +143,7 @@
                 To add a package, you need to create a BitBake recipe.
                 For information on how to create a BitBake recipe, see the
                 "<ulink url='&YOCTO_DOCS_DEV_URL;#new-recipe-writing-a-new-recipe'>Writing a New Recipe</ulink>"
-                in the Yocto Project Development Manual.
+                in the Yocto Project Development Tasks Manual.
             </para>
         </answer>
     </qandaentry>
@@ -235,8 +235,9 @@
     <qandaentry>
         <question>
             <para>
-                I see lots of 404 responses for files on
-                <filename>&YOCTO_HOME_URL;/sources/*</filename>. Is something wrong?
+                I see lots of 404 responses for files when the OpenEmbedded
+                build system is trying to download sources.
+                Is something wrong?
             </para>
         </question>
         <answer>
@@ -416,9 +417,9 @@
 
             <para>
                 You can find more information on licensing in the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#licensing'>Licensing</ulink>"
-                and "<ulink url='&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle'>Maintaining Open Source License Compliance During Your Product's Lifecycle</ulink>"
-                sections, both of which are in the Yocto Project Development
+                "<link linkend='licensing'>Licensing</link>" section and in the
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle'>Maintaining Open Source License Compliance During Your Product's Lifecycle</ulink>"
+                section, which is in the Yocto Project Development Tasks
                 Manual.
             </para>
         </answer>
@@ -547,7 +548,7 @@
                 file to include from the
                 <filename>meta/conf/distro/include</filename> directory within
                 the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                <link linkend='source-directory'>Source Directory</link>.
             </para>
 
             <para>
@@ -699,10 +700,9 @@
                 When you use BitBake to build an image, all the build output
                 goes into the directory created when you run the
                 build environment setup script (i.e.
-                <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>
-                or
-                <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>).
-                By default, this <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>).
+                By default, this
+                <link linkend='build-directory'>Build Directory</link>
                 is named <filename>build</filename> but can be named
                 anything you want.
             </para>
@@ -765,7 +765,7 @@
 
             <para>
                 Meanwhile, <filename>DESTDIR</filename> is a path within the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                <link linkend='build-directory'>Build Directory</link>.
                 However, when the recipe builds a native program (i.e. one
                 that is intended to run on the build machine), that program
                 is never installed directly to the build machine's root
@@ -810,7 +810,7 @@
             <para>
                 This situation results when a build system does
                 not recognize the environment variables supplied to it by
-                <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>.
+                <link linkend='bitbake-term'>BitBake</link>.
                 The incident that prompted this FAQ entry involved a Makefile
                 that used an environment variable named
                 <filename>BINDIR</filename> instead of the more standard
diff --git a/import-layers/yocto-poky/documentation/ref-manual/figures/YP-flow-diagram.png b/import-layers/yocto-poky/documentation/ref-manual/figures/YP-flow-diagram.png
new file mode 100644
index 0000000..8264410
--- /dev/null
+++ b/import-layers/yocto-poky/documentation/ref-manual/figures/YP-flow-diagram.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/git-workflow.png b/import-layers/yocto-poky/documentation/ref-manual/figures/git-workflow.png
similarity index 100%
rename from import-layers/yocto-poky/documentation/dev-manual/figures/git-workflow.png
rename to import-layers/yocto-poky/documentation/ref-manual/figures/git-workflow.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/ref-manual/figures/index-downloads.png b/import-layers/yocto-poky/documentation/ref-manual/figures/index-downloads.png
new file mode 100644
index 0000000..96303b8
--- /dev/null
+++ b/import-layers/yocto-poky/documentation/ref-manual/figures/index-downloads.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/ref-manual/figures/source-repos.png b/import-layers/yocto-poky/documentation/ref-manual/figures/source-repos.png
new file mode 100644
index 0000000..e9cff16
--- /dev/null
+++ b/import-layers/yocto-poky/documentation/ref-manual/figures/source-repos.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/dev-manual/figures/yp-download.png b/import-layers/yocto-poky/documentation/ref-manual/figures/yp-download.png
similarity index 100%
rename from import-layers/yocto-poky/documentation/dev-manual/figures/yp-download.png
rename to import-layers/yocto-poky/documentation/ref-manual/figures/yp-download.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/ref-manual/introduction.xml b/import-layers/yocto-poky/documentation/ref-manual/introduction.xml
index eec6cb3..29ef2d5 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/introduction.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/introduction.xml
@@ -5,119 +5,160 @@
 <chapter id='ref-manual-intro'>
 <title>Introduction</title>
 
-<section id='intro-welcome'>
-    <title>Introduction</title>
+<section id='ref-welcome'>
+    <title>Welcome</title>
 
     <para>
+        Welcome to the Yocto Project Reference Manual.
         This manual provides reference information for the current release
         of the Yocto Project.
-        The Yocto Project is an open-source collaboration project focused
-        on embedded Linux developers.
-        Amongst other things, the Yocto Project uses the OpenEmbedded build
-        system, which is based on the Poky project, to construct complete
-        Linux images.
-        You can find complete introductory and getting started information
-        on the Yocto Project by reading the
+        This manual is best used after you have an understanding
+        of the basics of the Yocto Project.
+        The manual is neither meant to be read as a starting point for the
+        Yocto Project nor read from start to finish.
+        Use this manual to find concepts, variable definitions, class
+        descriptions, and so forth as needed during the course of using
+        the Yocto Project.
+    </para>
+
+    <para>
+        For introductory information on the Yocto Project, see the
+        <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink> and the
+        "<link linkend='yp-intro'>Introducing the Yocto Project Development Environment</link>"
+        section.
+    </para>
+
+    <para>
+        If you want to use the Yocto Project to test run building an image
+        without having to understand concepts, work through the
         <ulink url='&YOCTO_DOCS_QS_URL;'>Yocto Project Quick Start</ulink>.
-    </para>
-
-    <para>
-        For task-based information using the Yocto Project, see the
-        <ulink url='&YOCTO_DOCS_DEV_URL;'>Yocto Project Development Manual</ulink>
-        and the <ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;'>Yocto Project Linux Kernel Development Manual</ulink>.
-        For Board Support Package (BSP) structure information, see the
-        <ulink url='&YOCTO_DOCS_BSP_URL;'>Yocto Project Board Support Package (BSP) Developer's Guide</ulink>.
-        For information on how to use a Software Development Kit, (SDK), see the
-        <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
-        You can find information on tracing and profiling in the
-        <ulink url='&YOCTO_DOCS_PROF_URL;'>Yocto Project Profiling and Tracing Manual</ulink>.
-        For information on BitBake, which is the task execution tool the
-        OpenEmbedded build system is based on, see the
-        <ulink url='&YOCTO_DOCS_BB_URL;#bitbake-user-manual'>BitBake User Manual</ulink>.
-        Finally, you can also find lots of Yocto Project information on the
-        <ulink url="&YOCTO_HOME_URL;">Yocto Project website</ulink>.
+        You can find "how-to" information in the
+        <ulink url='&YOCTO_DOCS_DEV_URL;'>Yocto Project Development Tasks Manual</ulink>.
+        <note><title>Tip</title>
+            For more information about the Yocto Project Documentation set,
+            see the
+            "<link linkend='resources-links-and-related-documentation'>Links and Related Documentation</link>"
+            section.
+        </note>
     </para>
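+
+    <para>
+        As an illustration only, such a test run, assuming an already cloned
+        <filename>poky</filename> repository and using the example
+        <filename>core-image-minimal</filename> image, consists of
+        initializing the build environment and running BitBake:
+        <literallayout class='monospaced'>
+     $ cd poky
+     $ source &OE_INIT_FILE;
+     $ bitbake core-image-minimal
+        </literallayout>
+        The Yocto Project Quick Start explains these steps in detail.
+    </para>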
 </section>
 
-<section id='intro-manualoverview'>
-    <title>Documentation Overview</title>
+<section id='yp-intro'>
+    <title>Introducing the Yocto Project Development Environment</title>
+
     <para>
-        This reference manual consists of the following:
-        <itemizedlist>
-            <listitem><para><emphasis>
-                <link linkend='usingpoky'>Using the Yocto Project</link>:</emphasis>
-                Provides an overview of the components that make up the Yocto Project
-                followed by information about debugging images created in the Yocto Project.
-                </para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='closer-look'>A Closer Look at the Yocto Project Development Environment</link>:</emphasis>
-                Provides a more detailed look at the Yocto Project development
-                environment within the context of development.
-                </para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='technical-details'>Technical Details</link>:</emphasis>
-                Describes fundamental Yocto Project components as well as an explanation
-                behind how the Yocto Project uses shared state (sstate) cache to speed build time.
-                </para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='migration'>Migrating to a Newer Yocto Project Release</link>:</emphasis>
-                Describes release-specific information that helps you move from
-                one Yocto Project Release to another.
-                </para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='ref-structure'>Directory Structure</link>:</emphasis>
-                Describes the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink> created
-                either by unpacking a released Yocto Project tarball on your host development system,
-                or by cloning the upstream
-                <ulink url='&YOCTO_DOCS_DEV_URL;#poky'>Poky</ulink> Git repository.
-                </para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='ref-classes'>Classes</link>:</emphasis>
-                Describes the classes used in the Yocto Project.</para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='ref-tasks'>Tasks</link>:</emphasis>
-                Describes the tasks defined by the OpenEmbedded build system.
-                </para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='ref-devtool-reference'><filename>devtool</filename> Quick Reference</link>:</emphasis>
-                Provides a quick reference for the <filename>devtool</filename>
-                command.
-                </para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='ref-qa-checks'>QA Error and Warning Messages</link>:</emphasis>
-                Lists and describes QA warning and error messages.
-                </para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='ref-images'>Images</link>:</emphasis>
-                Describes the standard images that the Yocto Project supports.
-                </para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='ref-features'>Features</link>:</emphasis>
-                Describes mechanisms for creating distribution, machine, and image
-                features during the build process using the OpenEmbedded build system.</para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='ref-variables-glos'>Variables Glossary</link>:</emphasis>
-                Presents most variables used by the OpenEmbedded build system, which
-                uses BitBake.
-                Entries describe the function of the variable and how to apply them.
-                </para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='ref-varlocality'>Variable Context</link>:</emphasis>
-                Provides variable locality or context.</para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='faq'>FAQ</link>:</emphasis>
-                Provides answers for commonly asked questions in the Yocto Project
-                development environment.</para></listitem>
-            <listitem><para><emphasis>
-                <link linkend='resources'>Contributing to the Yocto Project</link>:</emphasis>
-                Provides guidance on how you can contribute back to the Yocto
-                Project.</para></listitem>
-        </itemizedlist>
+        The Yocto Project is an open-source collaboration project whose
+        focus is on developers of embedded Linux systems.
+        Among other things, the Yocto Project uses an
+        <link linkend='build-system-term'>OpenEmbedded build system</link>.
+        The build system, which is based on the OpenEmbedded (OE) project and
+        uses the
+        <link linkend='bitbake-term'>BitBake</link> tool, constructs complete
+        Linux images for architectures based on ARM, MIPS, PowerPC, x86 and
+        x86-64.
+        <note>
+            Historically, the OpenEmbedded build system, which is the
+            combination of BitBake and OE components, formed a reference
+            build host that was known as
+            "<link linkend='poky'>Poky</link>" (<emphasis>Pah</emphasis>-kee).
+            The term "Poky", as used throughout the Yocto Project Documentation
+            set, can have different meanings.
+        </note>
+        The Yocto Project provides various ancillary tools for the embedded
+        developer and also features the Sato reference User Interface, which
+        is optimized for stylus-driven, low-resolution screens.
+    </para>
+
+    <mediaobject>
+        <imageobject>
+            <imagedata fileref="figures/YP-flow-diagram.png"
+                format="PNG" align='center' width="8in"/>
+        </imageobject>
+    </mediaobject>
+
+    <para>
+        Here are some highlights for the Yocto Project:
+    </para>
+
+    <itemizedlist>
+        <listitem><para>
+            Provides a recent Linux kernel along with a set of system
+            commands and libraries suitable for the embedded
+            environment.
+            </para></listitem>
+        <listitem><para>
+            Makes available system components such as X11, GTK+, Qt,
+            Clutter, and SDL (among others) so you can create a rich user
+            experience on devices that have display hardware.
+            For devices that do not have a display or where you wish to
+            use alternative UI frameworks, these components need not be
+            installed.
+            </para></listitem>
+        <listitem><para>
+            Creates a focused and stable core compatible with the
+            OpenEmbedded project with which you can easily and reliably
+            build and develop.
+            </para></listitem>
+        <listitem><para>
+            Fully supports a wide range of hardware and device emulation
+            through the Quick EMUlator (QEMU).
+            </para></listitem>
+        <listitem><para>
+            Provides a layer mechanism that allows you to easily extend
+            the system, make customizations, and keep them organized.
+            </para></listitem>
+    </itemizedlist>
+
+    <para>
+        You can use the Yocto Project to generate images for many kinds
+        of devices.
+        As mentioned earlier, the Yocto Project supports creation of
+        reference images that you can boot and emulate using QEMU.
+        The standard example machines target QEMU full-system
+        emulation for 32-bit and 64-bit variants of x86, ARM, MIPS, and
+        PowerPC architectures.
+        Beyond emulation, you can use the layer mechanism to extend
+        support to just about any platform that Linux can run on and that
+        a toolchain can target.
+    </para>
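+
+    <para>
+        As a minimal sketch only, and assuming a build environment that has
+        already been initialized, emulated testing of a reference image for
+        the example <filename>qemux86</filename> machine might look like
+        the following:
+        <literallayout class='monospaced'>
+     $ MACHINE=qemux86 bitbake core-image-minimal
+     $ runqemu qemux86
+        </literallayout>
+        The machine and image names are illustrative; you can also set
+        <link linkend='var-MACHINE'><filename>MACHINE</filename></link> in
+        your <filename>local.conf</filename> file instead of on the command
+        line.
+    </para>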
+
+    <para>
+        Another Yocto Project feature is the Sato reference User
+        Interface.
+        This optional UI, which is based on GTK+, is intended for devices with
+        restricted screen sizes and is included as part of the
+        OpenEmbedded Core layer so that developers can test parts of the
+        software stack.
+    </para>
+
+    <para>
+        While the Yocto Project does not provide a strict testing framework,
+        it does provide or generate artifacts that let you perform
+        target-level and emulated testing and debugging.
+        Additionally, if you are an
+        <trademark class='trade'>Eclipse</trademark> IDE user, you can
+        install an Eclipse Yocto Plug-in to allow you to develop within that
+        familiar environment.
+    </para>
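+
+    <para>
+        For example, as a sketch only, if you enable the
+        <filename>testimage</filename> class in your
+        <filename>local.conf</filename> file, you can run automated runtime
+        tests against an image you have already built (the image name below
+        is just an example):
+        <literallayout class='monospaced'>
+     # Added to conf/local.conf
+     INHERIT += "testimage"
+
+     $ bitbake core-image-minimal -c testimage
+        </literallayout>
+    </para>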
+
+    <para>
+        By default, using the Yocto Project to build an image creates a Poky
+        distribution.
+        However, you can create your own distribution by providing key
+        <link linkend='metadata'>Metadata</link>.
+        A good example is Angstrom, which has had a distribution
+        based on the Yocto Project since its inception.
+        Other examples include commercial distributions like
+        <ulink url='https://www.yoctoproject.org/organization/wind-river-systems'>Wind River Linux</ulink>,
+        <ulink url='https://www.yoctoproject.org/organization/mentor-graphics'>Mentor Embedded Linux</ulink>,
+        <ulink url='https://www.yoctoproject.org/organization/enea-ab'>ENEA Linux</ulink>
+        and <ulink url='https://www.yoctoproject.org/ecosystem/member-organizations'>others</ulink>.
+        See the "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-your-own-distribution'>Creating Your Own Distribution</ulink>"
+        section in the Yocto Project Development Tasks Manual for more
+        information.
     </para>
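+
+    <para>
+        As a rough sketch of what providing key Metadata involves, a custom
+        distribution usually begins with a small distribution configuration
+        file in your own layer that is then selected in
+        <filename>local.conf</filename>.
+        The layer and distribution names below are hypothetical:
+        <literallayout class='monospaced'>
+     # meta-mylayer/conf/distro/mydistro.conf (hypothetical)
+     require conf/distro/poky.conf
+     DISTRO = "mydistro"
+     DISTRO_NAME = "My Distro"
+
+     # Added to conf/local.conf
+     DISTRO = "mydistro"
+        </literallayout>
+        See the "Creating Your Own Distribution" section referenced above for
+        the complete procedure.
+    </para>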
 </section>
 
-
 <section id='intro-requirements'>
 <title>System Requirements</title>
     <para>
@@ -163,12 +204,12 @@
                 <listitem><para>Ubuntu 10.04</para></listitem>
                 <listitem><para>Ubuntu 11.10</para></listitem>
                 <listitem><para>Ubuntu 12.04 (LTS)</para></listitem>
-                <listitem><para>Ubuntu 13.10</para></listitem> -->
-                <listitem><para>Ubuntu 14.04 (LTS)</para></listitem>
+                <listitem><para>Ubuntu 13.10</para></listitem>
+                <listitem><para>Ubuntu 14.04 (LTS)</para></listitem> -->
                 <listitem><para>Ubuntu 14.10</para></listitem>
                 <listitem><para>Ubuntu 15.04</para></listitem>
                 <listitem><para>Ubuntu 15.10</para></listitem>
-                <listitem><para>Ubuntu 16.04</para></listitem>
+                <listitem><para>Ubuntu 16.04 (LTS)</para></listitem>
 <!--                <listitem><para>Fedora 16 (Verne)</para></listitem>
                 <listitem><para>Fedora 17 (Spherical)</para></listitem>
                 <listitem><para>Fedora release 19 (Schrödinger's Cat)</para></listitem>
@@ -185,6 +226,7 @@
 <!--                <listitem><para>Debian GNU/Linux 6.0 (Squeeze)</para></listitem>
                 <listitem><para>Debian GNU/Linux 7.x (Wheezy)</para></listitem> -->
                 <listitem><para>Debian GNU/Linux 8.x (Jessie)</para></listitem>
+                <listitem><para>Debian GNU/Linux 9.x (Stretch)</para></listitem>
 <!--                <listitem><para>Debian GNU/Linux 7.1 (Wheezy)</para></listitem>
                 <listitem><para>Debian GNU/Linux 7.2 (Wheezy)</para></listitem>
                 <listitem><para>Debian GNU/Linux 7.3 (Wheezy)</para></listitem>
@@ -413,7 +455,7 @@
             Python:
             <itemizedlist>
                 <listitem><para>Git 1.8.3.1 or greater</para></listitem>
-                <listitem><para>tar 1.24 or greater</para></listitem>
+                <listitem><para>tar 1.27 or greater</para></listitem>
                 <listitem><para>Python 3.4.0 or greater</para></listitem>
             </itemizedlist>
         </para>
@@ -492,9 +534,7 @@
                         On the machine that is able to run BitBake,
                         be sure you have set up your build environment with
                         the setup script
-                        (<link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>
-                        or
-                        <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>).
+                        (<link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>).
                         </para></listitem>
                     <listitem><para>
                         Run the BitBake command to build the tarball:
@@ -512,7 +552,7 @@
                        <filename>.sh</filename> file that installs
                        the tools in the <filename>tmp/deploy/sdk</filename>
                        subdirectory of the
-                       <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                       <link linkend='build-directory'>Build Directory</link>.
                        The installer file has the string "buildtools"
                        in the name.
                        </para></listitem>
@@ -600,12 +640,441 @@
     <title>Development Checkouts</title>
     <para>
         Development using the Yocto Project requires a local
-        <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+        <link linkend='source-directory'>Source Directory</link>.
         You can set up the Source Directory by cloning a copy of the upstream
-        <ulink url='&YOCTO_DOCS_DEV_URL;#poky'>poky</ulink> Git repository.
+        <link linkend='poky'>poky</link> Git repository.
         For information on how to do this, see the
-        "<ulink url='&YOCTO_DOCS_DEV_URL;#getting-setup'>Getting Set Up</ulink>"
-        section in the Yocto Project Development Manual.
+        "<ulink url='&YOCTO_DOCS_DEV_URL;#working-with-yocto-project-source-files'>Working With Yocto Project Source Files</ulink>"
+        section in the Yocto Project Development Tasks Manual.
+    </para>
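+
+    <para>
+        For illustration, setting up such a checkout is typically a single
+        clone of the upstream repository:
+        <literallayout class='monospaced'>
+     $ git clone git://git.yoctoproject.org/poky
+     $ cd poky
+        </literallayout>
+        See the section referenced above for the complete procedure.
+    </para>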
+</section>
+
+<section id='yocto-project-terms'>
+    <title>Yocto Project Terms</title>
+
+    <para>
+        Following is a list of terms and definitions users new to the Yocto
+        Project development environment might find helpful.
+        While some of these terms are universal, the list includes them
+        just in case:
+        <itemizedlist>
+            <listitem><para>
+                <emphasis>Append Files:</emphasis>
+                Files that append build information to a recipe file.
+                Append files are known as BitBake append files and
+                <filename>.bbappend</filename> files.
+                The OpenEmbedded build system expects every append file to have
+                a corresponding recipe (<filename>.bb</filename>) file.
+                Furthermore, the append file and corresponding recipe file
+                must use the same root filename.
+                The filenames can differ only in the file type suffix used
+                (e.g.
+                <filename>formfactor_0.0.bb</filename> and
+                <filename>formfactor_0.0.bbappend</filename>).</para>
+
+                <para>Information in append files extends or overrides the
+                information in the similarly-named recipe file.
+                For an example of an append file in use, see the
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#using-bbappend-files'>Using .bbappend Files in Your Layer</ulink>"
+                section in the Yocto Project Development Tasks Manual.
+                <note>
+                    Append files can also use wildcard patterns in their
+                    version numbers so they can be applied to more than one
+                    version of the underlying recipe file.
+                </note>
+                </para></listitem>
+            <listitem><para id='bitbake-term'>
+                <emphasis>BitBake:</emphasis>
+                The task executor and scheduler used by the OpenEmbedded build
+                system to build images.
+                For more information on BitBake, see the
+                <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual</ulink>.
+                </para></listitem>
+            <listitem><para id='board-support-package-bsp-term'>
+                <emphasis>Board Support Package (BSP):</emphasis>
+                A group of drivers, definitions, and other components that
+                provide support for a specific hardware configuration.
+                For more information on BSPs, see the
+                <ulink url='&YOCTO_DOCS_BSP_URL;'>Yocto Project Board Support Package (BSP) Developer's Guide</ulink>.
+                </para></listitem>
+            <listitem>
+                <para id='build-directory'>
+                <emphasis>Build Directory:</emphasis>
+                This term refers to the area used by the OpenEmbedded build
+                system for builds.
+                The area is created when you <filename>source</filename> the
+                setup environment script that is found in the Source Directory
+                (i.e. <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>).
+                The
+                <link linkend='var-TOPDIR'><filename>TOPDIR</filename></link>
+                variable points to the Build Directory.</para>
+
+                <para>You have a lot of flexibility when creating the Build
+                Directory.
+                Following are some examples that show how to create the
+                directory.
+                The examples assume your
+                <link linkend='source-directory'>Source Directory</link> is
+                named <filename>poky</filename>:
+                <itemizedlist>
+                    <listitem><para>Create the Build Directory inside your
+                        Source Directory and let the name of the Build
+                        Directory default to <filename>build</filename>:
+                        <literallayout class='monospaced'>
+     $ cd $HOME/poky
+     $ source &OE_INIT_FILE;
+                        </literallayout>
+                        </para></listitem>
+                    <listitem><para>Create the Build Directory inside your
+                        home directory and specifically name it
+                        <filename>test-builds</filename>:
+                        <literallayout class='monospaced'>
+     $ cd $HOME
+     $ source poky/&OE_INIT_FILE; test-builds
+                        </literallayout>
+                        </para></listitem>
+                    <listitem><para>
+                        Provide a directory path and specifically name the
+                        Build Directory.
+                        Any intermediate folders in the pathname must exist.
+                        This next example creates a Build Directory named
+                        <filename>YP-&POKYVERSION;</filename>
+                        in your home directory within the existing
+                        directory <filename>mybuilds</filename>:
+                        <literallayout class='monospaced'>
+     $ cd $HOME
+     $ source $HOME/poky/&OE_INIT_FILE; $HOME/mybuilds/YP-&POKYVERSION;
+                        </literallayout>
+                        </para></listitem>
+                </itemizedlist>
+                <note>
+                    By default, the Build Directory contains
+                    <link linkend='var-TMPDIR'><filename>TMPDIR</filename></link>,
+                    which is a temporary directory the build system uses for
+                    its work.
+                    <filename>TMPDIR</filename> cannot be under NFS.
+                    Thus, by default, the Build Directory cannot be under NFS.
+                    However, if you need the Build Directory to be under NFS,
+                    you can set this up by setting <filename>TMPDIR</filename>
+                    in your <filename>local.conf</filename> file
+                    to use a local drive.
+                    Doing so effectively separates <filename>TMPDIR</filename>
+                    from <filename>TOPDIR</filename>, which is the Build
+                    Directory.
+                </note>
+                </para></listitem>
+            <listitem><para id='hardware-build-system-term'>
+                <emphasis>Build System:</emphasis>
+                The system used to build images in a Yocto Project
+                development environment.
+                The build system is sometimes referred to as the
+                development host.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Classes:</emphasis>
+                Files that provide for logic encapsulation and inheritance so
+                that commonly used patterns can be defined once and then
+                easily used in multiple recipes.
+                For reference information on the Yocto Project classes, see the
+                "<link linkend='ref-classes'>Classes</link>" chapter.
+                Class files end with the <filename>.bbclass</filename>
+                filename extension.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Configuration File:</emphasis>
+                Configuration information in various <filename>.conf</filename>
+                files provides global definitions of variables.
+                The <filename>conf/local.conf</filename> configuration file in
+                the
+                <link linkend='build-directory'>Build Directory</link>
+                contains user-defined variables that affect every build.
+                The <filename>meta-poky/conf/distro/poky.conf</filename>
+                configuration file defines Yocto "distro" configuration
+                variables used only when building with this policy.
+                Machine configuration files, which
+                are located throughout the
+                <link linkend='source-directory'>Source Directory</link>, define
+                variables for specific hardware and are only used when building
+                for that target (e.g. the
+                <filename>machine/beaglebone.conf</filename> configuration
+                file defines variables for the Texas Instruments ARM Cortex-A8
+                development board).
+                Configuration files end with a <filename>.conf</filename>
+                filename extension.
+                </para></listitem>
+            <listitem><para id='cross-development-toolchain'>
+                <emphasis>Cross-Development Toolchain:</emphasis>
+                In general, a cross-development toolchain is a collection of
+                software development tools and utilities that run on one
+                architecture and allow you to develop software for a
+                different, or targeted, architecture.
+                These toolchains contain cross-compilers, linkers, and
+                debuggers that are specific to the target architecture.</para>
+
+                <para>The Yocto Project supports two different cross-development
+                toolchains:
+                <itemizedlist>
+                    <listitem><para>
+                        A toolchain only used by and within
+                        BitBake when building an image for a target
+                        architecture.
+                        </para></listitem>
+                    <listitem><para>A relocatable toolchain used outside of
+                        BitBake by developers when developing applications
+                        that will run on a targeted device.
+                        </para></listitem>
+                </itemizedlist></para>
+
+                <para>Creation of these toolchains is simple and automated.
+                For information on toolchain concepts as they apply to the
+                Yocto Project, see the
+                "<link linkend='cross-development-toolchain-generation'>Cross-Development Toolchain Generation</link>"
+                section.
+                You can also find more information on using the
+                relocatable toolchain in the
+                <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</ulink>
+                manual.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Image:</emphasis>
+                An image is an artifact of the BitBake build process given
+                a collection of recipes and related Metadata.
+                Images are the binary output that runs on specific hardware
+                or in QEMU and is used for specific use cases.
+                For a list of the supported image types that the Yocto Project
+                provides, see the
+                "<link linkend='ref-images'>Images</link>"
+                chapter.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Layer:</emphasis>
+                A collection of recipes representing the core,
+                a BSP, or an application stack.
+                For a discussion specifically on BSP Layers, see the
+                "<ulink url='&YOCTO_DOCS_BSP_URL;#bsp-layers'>BSP Layers</ulink>"
+                section in the Yocto Project Board Support Packages (BSP)
+                Developer's Guide.
+                </para></listitem>
+            <listitem><para id='metadata'>
+                <emphasis>Metadata:</emphasis>
+                The files that BitBake parses when building an image.
+                In general, Metadata includes recipes, classes, and
+                configuration files.
+                In the context of the kernel ("kernel Metadata"), the
+                term refers to the kernel config fragments and features
+                contained in the
+                <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/yocto-kernel-cache'><filename>yocto-kernel-cache</filename></ulink>
+                Git repository.
+                </para></listitem>
+            <listitem><para id='oe-core'>
+                <emphasis>OE-Core:</emphasis>
+                A core set of Metadata originating with OpenEmbedded (OE)
+                that is shared between OE and the Yocto Project.
+                This Metadata is found in the <filename>meta</filename>
+                directory of the
+                <link linkend='source-directory'>Source Directory</link>.
+                </para></listitem>
+            <listitem><para id='build-system-term'>
+                <emphasis>OpenEmbedded Build System:</emphasis>
+                The build system specific to the Yocto Project.
+                The OpenEmbedded build system is based on another project known
+                as "Poky", which uses
+                <link linkend='bitbake-term'>BitBake</link> as the task
+                executor.
+                Throughout the Yocto Project documentation set, the
+                OpenEmbedded build system is sometimes referred to simply
+                as "the build system".
+                If other build systems, such as a host or target build system
+                are referenced, the documentation clearly states the
+                difference.
+                <note>
+                    For some historical information about Poky, see the
+                    <link linkend='poky'>Poky</link> term.
+                </note>
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Package:</emphasis>
+                In the context of the Yocto Project, this term refers to a
+                recipe's packaged output produced by BitBake (i.e. a
+                "baked recipe").
+                A package is generally the compiled binaries produced from the
+                recipe's sources.
+                You "bake" something by running it through BitBake.</para>
+
+                <para>It is worth noting that the term "package" can,
+                in general, have subtle meanings.
+                For example, the packages referred to in the
+                "<ulink url='&YOCTO_DOCS_QS_URL;#packages'>The Build Host Packages</ulink>"
+                section in the Yocto Project Quick Start are compiled binaries
+                that, when installed, add functionality to your Linux
+                distribution.</para>
+
+                <para>Another point worth noting is that historically within
+                the Yocto Project, recipes were referred to as packages - thus,
+                the existence of several BitBake variables that are seemingly
+                mis-named,
+                (e.g. <link linkend='var-PR'><filename>PR</filename></link>,
+                <link linkend='var-PV'><filename>PV</filename></link>, and
+                <link linkend='var-PE'><filename>PE</filename></link>).
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Package Groups:</emphasis>
+                Arbitrary groups of software Recipes.
+                You use package groups to hold recipes that, when built,
+                usually accomplish a single task.
+                For example, a package group could contain the recipes for a
+                company’s proprietary or value-add software.
+                Or, the package group could contain the recipes that enable
+                graphics.
+                A package group is really just another recipe.
+                Because package group files are recipes, they end with the
+                <filename>.bb</filename> filename extension.
+                </para></listitem>
+            <listitem><para id='poky'>
+                <emphasis>Poky:</emphasis>
+                The term "poky", which is pronounced
+                <emphasis>Pah</emphasis>-kee, can mean several things:
+                <itemizedlist>
+                    <listitem><para>
+                        In its most general sense, poky is an open-source
+                        project that was initially developed by OpenedHand.
+                        OpenedHand developed poky off of the existing
+                        OpenEmbedded build system to create a commercially
+                        supportable build system for embedded Linux.
+                        After Intel Corporation acquired OpenedHand, the
+                        poky project became the basis for the Yocto Project's
+                        build system.
+                        </para></listitem>
+                    <listitem><para>
+                        Within the Yocto Project
+                        <ulink url='&YOCTO_GIT_URL;'>Source Repositories</ulink>,
+                        "poky" exists as a separate Git
+                        repository from which you can clone to yield a local
+                        Git repository that is a copy on your host system.
+                        Thus, "poky" can refer to the upstream or
+                        local copy of the files used for development within
+                        the Yocto Project.
+                        </para></listitem>
+                    <listitem><para>
+                        Finally, "poky" can refer to the default
+                        <link linkend='var-DISTRO'><filename>DISTRO</filename></link>
+                        (i.e. distribution) created when you use the Yocto
+                        Project in conjunction with the
+                        <filename>poky</filename> repository to build an image.
+                        </para></listitem>
+                </itemizedlist>
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Recipe:</emphasis>
+                A set of instructions for building packages.
+                A recipe describes where you get source code, which patches
+                to apply, how to configure the source, how to compile it and so on.
+                Recipes also describe dependencies for libraries or for other
+                recipes.
+                Recipes represent the logical unit of execution and describe
+                the software or images to build.
+                Recipe files use the <filename>.bb</filename> file extension.
+                </para></listitem>
+            <listitem><para id='reference-kit-term'>
+                <emphasis>Reference Kit:</emphasis>
+                A working example of a system, which includes a
+                <link linkend='board-support-package-bsp-term'>BSP</link>
+                as well as a
+                <link linkend='hardware-build-system-term'>build system</link>
+                and other components, that can work on specific hardware.
+                </para></listitem>
+            <listitem>
+                <para id='source-directory'>
+                <emphasis>Source Directory:</emphasis>
+                This term refers to the directory structure created as a result
+                of creating a local copy of the <filename>poky</filename> Git
+                repository <filename>git://git.yoctoproject.org/poky</filename>
+                or expanding a released <filename>poky</filename> tarball.
+                <note>
+                    Creating a local copy of the <filename>poky</filename>
+                    Git repository is the recommended method for setting up
+                    your Source Directory.
+                </note>
+                Sometimes you might hear the term "poky directory" used to refer
+                to this directory structure.
+                <note>
+                    The OpenEmbedded build system does not support file or
+                    directory names that contain spaces.
+                    Be sure that the Source Directory you use does not contain
+                    these types of names.
+                </note></para>
+
+                <para>The Source Directory contains BitBake, Documentation,
+                Metadata and other files that all support the Yocto Project.
+                Consequently, you must have the Source Directory in place on
+                your development system in order to do any development using
+                the Yocto Project.</para>
+
+                <para>When you create a local copy of the Git repository, you
+                can name the repository anything you like.
+                Throughout much of the documentation, "poky"
+                is used as the name of the top-level folder of the local copy of
+                the poky Git repository.
+                So, for example, cloning the <filename>poky</filename> Git
+                repository results in a local Git repository whose top-level
+                folder is also named "poky".</para>
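+
+                <para>As a minimal sketch, setting up the Source Directory by
+                cloning the upstream repository might look like the following,
+                where the resulting top-level directory takes the default
+                name "poky":
+                <literallayout class='monospaced'>
+     $ git clone git://git.yoctoproject.org/poky
+     $ cd poky
+                </literallayout></para>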
+
+                <para>While it is not recommended that you use tarball expansion
+                to set up the Source Directory, if you do, the top-level
+                directory name of the Source Directory is derived from the
+                Yocto Project release tarball.
+                For example, downloading and unpacking
+                <filename>&YOCTO_POKY_TARBALL;</filename> results in a
+                Source Directory whose root folder is named
+                <filename>&YOCTO_POKY;</filename>.</para>
+
+                <para>It is important to understand the differences between the
+                Source Directory created by unpacking a released tarball as
+                compared to cloning
+                <filename>git://git.yoctoproject.org/poky</filename>.
+                When you unpack a tarball, you have an exact copy of the files
+                based on the time of release - a fixed release point.
+                Any changes you make to your local files in the Source Directory
+                are on top of the release and will remain local only.
+                On the other hand, when you clone the <filename>poky</filename>
+                Git repository, you have an active development repository with
+                access to the upstream repository's branches and tags.
+                In this case, any local changes you make to the local
+                Source Directory can be later applied to active development
+                branches of the upstream <filename>poky</filename> Git
+                repository.</para>
+
+                <para>For more information on concepts related to Git
+                repositories, branches, and tags, see the
+                "<link linkend='repositories-tags-and-branches'>Repositories, Tags, and Branches</link>"
+                section.
+                </para></listitem>
+            <listitem><para><emphasis>Task:</emphasis>
+                A unit of execution for BitBake (e.g.
+                <link linkend='ref-tasks-compile'><filename>do_compile</filename></link>,
+                <link linkend='ref-tasks-fetch'><filename>do_fetch</filename></link>,
+                <link linkend='ref-tasks-patch'><filename>do_patch</filename></link>,
+                and so forth).
+                </para></listitem>
+            <listitem><para id='toaster-term'><emphasis>Toaster:</emphasis>
+                A web interface to the Yocto Project's
+                <link linkend='build-system-term'>OpenEmbedded Build System</link>.
+                The interface enables you to configure and run your builds.
+                Information about builds is collected and stored in a database.
+                For information on Toaster, see the
+                <ulink url='&YOCTO_DOCS_TOAST_URL;'>Yocto Project Toaster Manual</ulink>.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Upstream:</emphasis>
+                A reference to source code or repositories
+                that are not local to the development system but located in a
+                master area that is controlled by the maintainer of the source
+                code.
+                For example, in order for a developer to work on a particular
+                piece of code, they need to first get a copy of it from an
+                "upstream" source.
+                </para></listitem>
+        </itemizedlist>
     </para>
 </section>
 
diff --git a/import-layers/yocto-poky/documentation/ref-manual/migration.xml b/import-layers/yocto-poky/documentation/ref-manual/migration.xml
index 7ca929c..811577b 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/migration.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/migration.xml
@@ -251,7 +251,7 @@
                 The following recipes have been removed.
                 For most of them, it is unlikely that you would have any
                 references to them in your own
-                <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>.
+                <link linkend='metadata'>Metadata</link>.
                 However, you should check your metadata against this list to be sure:
                 <itemizedlist>
                     <listitem><para><emphasis><filename>libx11-trim</filename></emphasis>:
@@ -293,7 +293,7 @@
                 For the remainder, you can now find them in the
                 <filename>meta-extras</filename> repository, which is in the
                 Yocto Project
-                <ulink url='&YOCTO_DOCS_DEV_URL;#source-repositories'>Source Repositories</ulink>.
+                <link linkend='source-repositories'>Source Repositories</link>.
             </para>
         </section>
     </section>
@@ -438,11 +438,11 @@
             you now need to create an append file for the
             <filename>init-ifupdown</filename> recipe instead, which you can
             find in the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+            <link linkend='source-directory'>Source Directory</link>
             at <filename>meta/recipes-core/init-ifupdown</filename>.
             For information on how to use append files, see the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#using-bbappend-files'>Using .bbappend Files</ulink>"
-            in the Yocto Project Development Manual.
+            in the Yocto Project Development Tasks Manual.
         </para>
     </section>
 
@@ -821,7 +821,7 @@
                 <listitem><para>
                     When buildhistory is enabled, its output is now written
                     under the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                    <link linkend='build-directory'>Build Directory</link>
                     rather than
                     <link linkend='var-TMPDIR'><filename>TMPDIR</filename></link>.
                     Doing so makes it easier to delete
@@ -992,7 +992,7 @@
         <para>
             You can learn more about performing automated image tests in the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing'>Performing Automated Runtime Testing</ulink>"
-            section.
+            section in the Yocto Project Development Tasks Manual.
         </para>
     </section>
 
@@ -1173,7 +1173,7 @@
             class has been rewritten and its configuration has been simplified.
             For more details on the source archiver, see the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle'>Maintaining Open Source License Compliance During Your Product's Lifecycle</ulink>"
-            section in the Yocto Project Development Manual.
+            section in the Yocto Project Development Tasks Manual.
         </para>
     </section>
 
@@ -1218,7 +1218,7 @@
 
         <para>
             The following changes have been made to
-            <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>.
+            <link linkend='bitbake-term'>BitBake</link>.
         </para>
 
         <section id='migration-1.6-matching-branch-requirement-for-git-fetching'>
@@ -1362,7 +1362,7 @@
                 increments on changes, use the PR service instead.
                 You can find out more about this service in the
                 "<ulink url='&YOCTO_DOCS_DEV_URL;#working-with-a-pr-service'>Working With a PR Service</ulink>"
-                section in the Yocto Project Development Manual.
+                section in the Yocto Project Development Tasks Manual.
             </para>
         </section>
 
@@ -1453,7 +1453,7 @@
             Package Tests (ptest) are built but not installed by default.
             For information on using Package Tests, see the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#testing-packages-with-ptest'>Setting up and running package test (ptest)</ulink>"
-            section in the Yocto Project Development Manual.
+            section in the Yocto Project Development Tasks Manual.
             For information on the <filename>ptest</filename> class, see the
             "<link linkend='ref-classes-ptest'><filename>ptest.bbclass</filename></link>"
             section.
@@ -1525,7 +1525,7 @@
         <para>
             The top-level <filename>LICENSE</filename> file has been changed
             to better describe the license of the various components of
-            OE-Core.
+            <link linkend='oe-core'>OE-Core</link>.
             However, the licensing itself remains unchanged.
         </para>
 
@@ -1748,7 +1748,7 @@
 
         <para>
             The minimum
-            <ulink url='&YOCTO_DOCS_DEV_URL;#git'>Git</ulink> version required
+            <link linkend='git'>Git</link> version required
             on the build host is now 1.7.8 because the
             <filename>--list</filename> option is now required by
             BitBake's Git fetcher.
@@ -2572,7 +2572,8 @@
 
         <para>
             Maintenance tracking data for recipes that was previously part
-            of <filename>meta-yocto</filename> has been moved to OE-Core.
+            of <filename>meta-yocto</filename> has been moved to
+            <link linkend='oe-core'>OE-Core</link>.
             The change includes <filename>package_regex.inc</filename> and
             <filename>distro_alias.inc</filename>, which are typically enabled
             when using the
@@ -3000,8 +3001,7 @@
                     from <filename>autotools</filename> instead.
                     </para></listitem>
                 <listitem><para><filename>boot-directdisk</filename>:
-                    Merged into the
-                    <link linkend='ref-classes-image-vm'><filename>image-vm</filename></link>
+                    Merged into the <filename>image-vm</filename>
                     class.
                     The <filename>boot-directdisk</filename> class was rarely
                     directly used.
@@ -3210,7 +3210,8 @@
             You can enable, disable, and test the generation of this data.
             See the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#enabling-gobject-introspection-support'>Enabling GObject Introspection Support</ulink>"
-            section for more information.
+            section in the Yocto Project Development Tasks Manual
+            for more information.
         </para>
     </section>
 
@@ -3419,7 +3420,8 @@
                 For help preparing metadata, see any of the many Python 3 porting
                 guides available.
                 Alternatively, you can reference the conversion commits for Bitbake
-                and you can use OE-Core as a guide for changes.
+                and you can use
+                <link linkend='oe-core'>OE-Core</link> as a guide for changes.
                 Following are particular areas of interest:
                 <literallayout class='monospaced'>
      * subprocess command-line pipes needing locale decoding
@@ -3508,7 +3510,8 @@
             hard-coding any knowledge about different machines.
             Using a configuration file is particularly convenient when trying
             to use QEMU with machines other than the
-            <filename>qemu*</filename> machines in OE-Core.
+            <filename>qemu*</filename> machines in
+            <link linkend='oe-core'>OE-Core</link>.
             The <filename>qemuboot.conf</filename> file is generated by the
             <filename>qemuboot</filename>
             class when the root filesystem is being build (i.e.
@@ -4035,7 +4038,7 @@
                     <para>For an example, see the
                     <filename>pixbufcache</filename> class in
                     <filename>meta/classes/</filename> in the Yocto Project
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-repositories'>Source Repositories</ulink>.
+                    <link linkend='source-repositories'>Source Repositories</link>.
                     <note>
                         The <filename>SSTATEPOSTINSTFUNCS</filename> variable
                         itself is now deprecated in favor of the
@@ -4478,7 +4481,8 @@
                     needed by current hardware.
                     Thus, GStreamer's ivorbis plugin has been disabled
                     by default eliminating the need for the
-                    <filename>tremor</filename> recipe in OE-Core.
+                    <filename>tremor</filename> recipe in
+                    <link linkend='oe-core'>OE-Core</link>.
                     </para></listitem>
                  <listitem><para>
                     <emphasis><filename>gummiboot:</filename></emphasis>
@@ -4495,8 +4499,8 @@
             The following changes have been made to Wic:
             <note>
                 For more information on Wic, see the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-partitioned-images'>Creating Partitioned Images</ulink>"
-                section in the Yocto Project Development Manual.
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-partitioned-images-using-wic'>Creating Partitioned Images Using Wic</ulink>"
+                section in the Yocto Project Development Tasks Manual.
             </note>
             <itemizedlist>
                 <listitem><para>
@@ -4717,6 +4721,523 @@
         </para>
     </section>
 </section>
+
+<section id='moving-to-the-yocto-project-2.4-release'>
+    <title>Moving to the Yocto Project 2.4 Release</title>
+
+    <para>
+        This section provides migration information for moving to the
+        Yocto Project 2.4 Release from the prior release.
+    </para>
+
+    <section id='migration-2.4-memory-resident-mode'>
+        <title>Memory Resident Mode</title>
+
+        <para>
+            A persistent mode is now available in BitBake's default operation,
+            replacing its previous "memory resident mode" (i.e.
+            <filename>oe-init-build-env-memres</filename>).
+            Now you only need to set
+            <filename>BB_SERVER_TIMEOUT</filename> to a timeout
+            (in seconds) and BitBake's server stays resident for that
+            amount of time between invocations.
+            The <filename>oe-init-build-env-memres</filename> script has been
+            removed since a separate environment setup script is no longer
+            needed.
+        </para>
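+
+        <para>
+            As a minimal sketch, assuming you set the timeout in your
+            <filename>conf/local.conf</filename> file, you could use:
+            <literallayout class='monospaced'>
+     BB_SERVER_TIMEOUT = "60"
+            </literallayout>
+            With this setting, the BitBake server stays resident for 60
+            seconds after each invocation before shutting down.
+        </para>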
+    </section>
+
+    <section id='migration-2.4-packaging-changes'>
+        <title>Packaging Changes</title>
+
+        <para>
+            This section provides information about packaging changes that have
+            occurred:
+            <itemizedlist>
+                <listitem><para>
+                    <emphasis><filename>python3</filename> Changes:</emphasis>
+                    <itemizedlist>
+                        <listitem><para>
+                            The main "python3" package now brings in all of the
+                            standard Python 3 distribution rather than a subset.
+                            This behavior matches what is expected based on
+                            traditional Linux distributions.
+                            If you wish to install a subset of Python 3, specify
+                            <filename>python-core</filename> plus one or more of
+                            the individual packages that are still produced.
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis><filename>python3</filename>:</emphasis>
+                            The <filename>bz2.py</filename>,
+                            <filename>lzma.py</filename>, and
+                            <filename>_compression.py</filename> scripts have
+                            been moved from the
+                            <filename>python3-misc</filename> package to
+                            the <filename>python3-compression</filename> package.
+                            </para></listitem>
+                    </itemizedlist>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>binutils</filename>:</emphasis>
+                    The <filename>libbfd</filename> library is now packaged in
+                    a separate "libbfd" package.
+                    This packaging saves space when certain tools
+                    (e.g. <filename>perf</filename>) are installed.
+                    In such cases, the tools only need
+                    <filename>libbfd</filename> rather than all the packages in
+                    <filename>binutils</filename>.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>util-linux</filename> Changes:</emphasis>
+                    <itemizedlist>
+                        <listitem><para>
+                            The <filename>su</filename> program is now packaged
+                            in a separate "util-linux-su" package, which is only
+                            built when "pam" is listed in the
+                            <link linkend='var-DISTRO_FEATURES'><filename>DISTRO_FEATURES</filename></link>
+                            variable.
+                            <filename>util-linux</filename> should not be
+                            installed unless it is needed because
+                            <filename>su</filename> is normally provided through
+                            the shadow file format.
+                            The main <filename>util-linux</filename> package has
+                            runtime dependencies (i.e.
+                            <link linkend='var-RDEPENDS'><filename>RDEPENDS</filename></link>)
+                            on the <filename>util-linux-su</filename> package
+                            when "pam" is in
+                            <link linkend='var-DISTRO_FEATURES'><filename>DISTRO_FEATURES</filename></link>.
+                            </para></listitem>
+                        <listitem><para>
+                            The <filename>switch_root</filename> program is now
+                            packaged in a separate "util-linux-switch-root"
+                            package for small initramfs images that do not need
+                            the whole <filename>util-linux</filename> package or
+                            the busybox binary, which are both much larger than
+                            <filename>switch_root</filename>.
+                            The main <filename>util-linux</filename> package has
+                            a recommended runtime dependency (i.e.
+                            <link linkend='var-RRECOMMENDS'><filename>RRECOMMENDS</filename></link>)
+                            on the <filename>util-linux-switch-root</filename> package.
+                            </para></listitem>
+                        <listitem><para>
+                            The <filename>ionice</filename> program is now
+                            packaged in a separate "util-linux-ionice" package.
+                            The main <filename>util-linux</filename> package has
+                            a recommended runtime dependency (i.e.
+                            <link linkend='var-RRECOMMENDS'><filename>RRECOMMENDS</filename></link>)
+                            on the <filename>util-linux-ionice</filename> package.
+                            </para></listitem>
+                    </itemizedlist>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>initscripts</filename>:</emphasis>
+                    The <filename>sushell</filename> program is now packaged in
+                    a separate "initscripts-sushell" package.
+                    This packaging change allows systems to pull
+                    <filename>sushell</filename> in when
+                    <filename>selinux</filename> is enabled.
+                    The change also eliminates needing to pull in the entire
+                    <filename>initscripts</filename> package.
+                    The main <filename>initscripts</filename> package has a
+                    runtime dependency (i.e.
+                    <link linkend='var-RDEPENDS'><filename>RDEPENDS</filename></link>)
+                    on the <filename>sushell</filename> package when
+                    "selinux" is in
+                    <link linkend='var-DISTRO_FEATURES'><filename>DISTRO_FEATURES</filename></link>.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>glib-2.0</filename>:</emphasis>
+                    The <filename>glib-2.0</filename> package now has a
+                    recommended runtime dependency (i.e.
+                    <link linkend='var-RRECOMMENDS'><filename>RRECOMMENDS</filename></link>)
+                    on the
+                    <filename>shared-mime-info</filename> package, since large
+                    portions of GIO are not useful without the MIME database.
+                    You can remove the dependency by using the
+                    <link linkend='var-BAD_RECOMMENDATIONS'><filename>BAD_RECOMMENDATIONS</filename></link>
+                    variable if <filename>shared-mime-info</filename> is too
+                    large and is not required (see the sketch following this
+                    list).
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Go Standard Runtime:</emphasis>
+                    The Go standard runtime has been split out from the main
+                    <filename>go</filename> recipe into a separate
+                    <filename>go-runtime</filename> recipe.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
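+
+        <para>
+            For the <filename>glib-2.0</filename> change above, a minimal
+            sketch of dropping the recommendation, assuming you do so in your
+            <filename>conf/local.conf</filename> file, might be:
+            <literallayout class='monospaced'>
+     BAD_RECOMMENDATIONS += "shared-mime-info"
+            </literallayout>
+        </para>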
+    </section>
+
+    <section id='migration-2.4-removed-recipes'>
+        <title>Removed Recipes</title>
+
+        <para>
+            The following recipes have been removed:
+            <itemizedlist>
+                <listitem><para>
+                    <emphasis><filename>acpitests</filename>:</emphasis>
+                    This recipe is not maintained.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>autogen-native</filename>:</emphasis>
+                    No longer required by Grub, oe-core, or meta-oe.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>bdwgc</filename>:</emphasis>
+                    Nothing in OpenEmbedded-Core requires this recipe.
+                    It has moved to meta-oe.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>byacc</filename>:</emphasis>
+                    This recipe was only needed by rpm 5.x and has moved to
+                    meta-oe.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>gcc (5.4)</filename>:</emphasis>
+                    The 5.4 series recipes have been dropped in favor of
+                    6.3 / 7.2.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>gnome-common</filename>:</emphasis>
+                    Deprecated upstream and no longer needed.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>go-bootstrap-native</filename>:</emphasis>
+                    Go 1.9 does its own bootstrapping, so this recipe has been
+                    removed.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>guile</filename>:</emphasis>
+                    This recipe was only needed by
+                    <filename>autogen-native</filename> and
+                    <filename>remake</filename>.
+                    The recipe is no longer needed by either of these programs.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>libclass-isa-perl</filename>:</emphasis>
+                    This recipe was previously needed for LSB 4 and is no
+                    longer needed.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>libdumpvalue-perl</filename>:</emphasis>
+                    This recipe was previously needed for LSB 4 and is no
+                    longer needed.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>libenv-perl</filename>:</emphasis>
+                    This recipe was previously needed for LSB 4 and is no
+                    longer needed.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>libfile-checktree-perl</filename>:</emphasis>
+                    This recipe was previously needed for LSB 4 and is no
+                    longer needed.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>libi18n-collate-perl</filename>:</emphasis>
+                    This recipe was previously needed for LSB 4 and is no
+                    longer needed.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>libiconv</filename>:</emphasis>
+                    This recipe was only needed for <filename>uclibc</filename>,
+                    which was removed in the previous release.
+                    <filename>glibc</filename> and <filename>musl</filename>
+                    have their own implementations.
+                    <filename>meta-mingw</filename> still needs
+                    <filename>libiconv</filename>, so it has
+                    been moved to <filename>meta-mingw</filename>.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>libpng12</filename>:</emphasis>
+                    This recipe was previously needed for LSB. The current
+                    <filename>libpng</filename> is 1.6.x.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>libpod-plainer-perl</filename>:</emphasis>
+                    This recipe was previously needed for LSB 4 and is no
+                    longer needed.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>linux-yocto (4.1)</filename>:</emphasis>
+                    This recipe was removed in favor of 4.4, 4.9, 4.10, and 4.12.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>mailx</filename>:</emphasis>
+                    This recipe was previously only needed for LSB
+                    compatibility, and upstream is defunct.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>mesa (git version only)</filename>:</emphasis>
+                    The git version recipe was stale with respect to the release
+                    version.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>ofono (git version only)</filename>:</emphasis>
+                    The git version recipe was stale with respect to the release
+                    version.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>portmap</filename>:</emphasis>
+                    This recipe is obsolete and is superseded by
+                    <filename>rpcbind</filename>.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>python3-pygpgme</filename>:</emphasis>
+                    This recipe is old and unmaintained. It was previously
+                    required by <filename>dnf</filename>, which has switched
+                    to official <filename>gpgme</filename> Python bindings.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>python-async</filename>:</emphasis>
+                    This recipe has been removed in favor of the Python 3
+                    version.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>python-gitdb</filename>:</emphasis>
+                    This recipe has been removed in favor of the Python 3
+                    version.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>python-git</filename>:</emphasis>
+                    This recipe was removed in favor of the Python 3
+                    version.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>python-mako</filename>:</emphasis>
+                    This recipe was removed in favor of the Python 3
+                    version.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>python-pexpect</filename>:</emphasis>
+                    This recipe was removed in favor of the Python 3 version.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>python-ptyprocess</filename>:</emphasis>
+                    This recipe was removed in favor of the Python 3 version.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>python-pycurl</filename>:</emphasis>
+                    Nothing is using this recipe in OpenEmbedded-Core
+                    (i.e. <filename>meta-oe</filename>).
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>python-six</filename>:</emphasis>
+                    This recipe was removed in favor of the Python 3 version.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>python-smmap</filename>:</emphasis>
+                    This recipe was removed in favor of the Python 3 version.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>remake</filename>:</emphasis>
+                    Using <filename>remake</filename> as the provider of
+                    <filename>virtual/make</filename> is broken.
+                    Consequently, this recipe is not needed in OpenEmbedded-Core.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
+    </section>
+
+    <section id='migration-2.4-kernel-device-tree-move'>
+        <title>Kernel Device Tree Move</title>
+
+        <para>
+            Kernel Device Tree support is now easier to enable in a kernel
+            recipe.
+            The Device Tree code has moved to a
+            <filename>kernel-devicetree</filename> class.
+            Functionality is automatically enabled for any recipe that inherits
+            the
+            <link linkend='ref-classes-kernel'><filename>kernel</filename></link>
+            class and sets the
+            <link linkend='var-KERNEL_DEVICETREE'><filename>KERNEL_DEVICETREE</filename></link>
+            variable.
+            The previous mechanism for doing this,
+            <filename>meta/recipes-kernel/linux/linux-dtb.inc</filename>,
+            is still available to avoid breakage, but triggers a
+            deprecation warning.
+            Future releases of the Yocto Project will remove
+            <filename>meta/recipes-kernel/linux/linux-dtb.inc</filename>.
+            It is advisable to remove any <filename>require</filename>
+            statements that request
+            <filename>meta/recipes-kernel/linux/linux-dtb.inc</filename>
+            from any custom kernel recipes you might have.
+            This will avoid breakage in post-2.4 releases.
+        </para>
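+
+        <para>
+            As a minimal sketch, a custom kernel recipe now only needs
+            something such as the following (the device tree file name shown
+            is hypothetical), with any
+            "<filename>require recipes-kernel/linux/linux-dtb.inc</filename>"
+            line removed:
+            <literallayout class='monospaced'>
+     inherit kernel
+     KERNEL_DEVICETREE = "my-board.dtb"
+            </literallayout>
+        </para>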
+    </section>
+
+    <section id='migration-2.4-package-qa-changes'>
+        <title>Package QA Changes</title>
+
+        <para>
+            The following package QA changes took place:
+            <itemizedlist>
+                <listitem><para>
+                    The "unsafe-references-in-scripts" QA check has been
+                    removed.
+                    </para></listitem>
+                <listitem><para>
+                    If you refer to <filename>${COREBASE}/LICENSE</filename>
+                    within
+                    <link linkend='var-LIC_FILES_CHKSUM'><filename>LIC_FILES_CHKSUM</filename></link>
+                    you receive a warning because this file is a description of
+                    the license for OE-Core.
+                    Use <filename>${COMMON_LICENSE_DIR}/MIT</filename>
+                    if your recipe is MIT-licensed and you cannot use the
+                    preferred method of referring to a file within the source
+                    tree (see the sketch following this list).
+                    </para></listitem>
+            </itemizedlist>
+        </para>
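+
+        <para>
+            As a minimal sketch for an MIT-licensed recipe that cannot point
+            at a license file within its own source tree, the reference might
+            look like the following (the checksum shown is the commonly used
+            value for the shipped MIT license text; verify it against the
+            actual file):
+            <literallayout class='monospaced'>
+     LICENSE = "MIT"
+     LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
+            </literallayout>
+        </para>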
+    </section>
+
+    <section id='migration-2.4-readme-changes'>
+        <title><filename>README</filename> File Changes</title>
+
+        <para>
+            The following are changes to <filename>README</filename> files:
+            <itemizedlist>
+                <listitem><para>
+                    The main Poky <filename>README</filename> file has been
+                    moved to the <filename>meta-poky</filename> layer and
+                    has been renamed <filename>README.poky</filename>.
+                    A symlink has been created so that references to the old
+                    location work.
+                    </para></listitem>
+                <listitem><para>
+                    The <filename>README.hardware</filename> file has been moved
+                    to <filename>meta-yocto-bsp</filename>.
+                    A symlink has been created so that references to the old
+                    location work.
+                    </para></listitem>
+                <listitem><para>
+                    A <filename>README.qemu</filename> file has been created
+                    with coverage of the <filename>qemu*</filename> machines.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
+    </section>
+
+    <section id='migration-2.4-miscellaneous-changes'>
+        <title>Miscellaneous Changes</title>
+
+        <para>
+            The following are additional changes:
+            <itemizedlist>
+                <listitem><para>
+                    The <filename>ROOTFS_PKGMANAGE_BOOTSTRAP</filename>
+                    variable and any references to it have been removed.
+                    You should remove this variable from any custom recipes.
+                    </para></listitem>
+                <listitem><para>
+                    The <filename>meta-yocto</filename> directory has been
+                    removed.
+                    <note>
+                        In the Yocto Project 2.1 release,
+                        <filename>meta-yocto</filename> was renamed to
+                        <filename>meta-poky</filename> and the
+                        <filename>meta-yocto</filename> subdirectory remained
+                        to avoid breaking existing configurations.
+                    </note>
+                    </para></listitem>
+                <listitem><para>
+                    The <filename>maintainers.inc</filename> file, which tracks
+                    maintainers by listing a primary person responsible for each
+                    recipe in OE-Core, has been moved from
+                    <filename>meta-poky</filename> to OE-Core (i.e. from
+                    <filename>meta-poky/conf/distro/include</filename> to
+                    <filename>meta/conf/distro/include</filename>).
+                    </para></listitem>
+                <listitem><para>
+                    The
+                    <link linkend='ref-classes-buildhistory'><filename>buildhistory</filename></link>
+                    class now makes a single commit per build rather than one
+                    commit per subdirectory in the repository.
+                    This behavior assumes the commits are enabled with
+                    <link linkend='var-BUILDHISTORY_COMMIT'><filename>BUILDHISTORY_COMMIT</filename></link>
+                    = "1", which is typical.
+                    Previously, the <filename>buildhistory</filename> class made
+                    one commit per subdirectory in the repository in order to
+                    make it easier to see the changes for a particular
+                    subdirectory.
+                    To view a particular change, specify that subdirectory as
+                    the last parameter on the <filename>git show</filename>
+                    or <filename>git diff</filename> commands.
+                    </para></listitem>
+                <listitem><para>
+                    The <filename>x86-base.inc</filename> file, which is
+                    included by all x86-based machine configurations, now sets
+                    <link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>
+                    to "live" using the <filename>?=</filename> operator
+                    rather than appending with <filename>+=</filename>.
+                    This change makes the default easier to override.
+                    </para></listitem>
+                <listitem><para>
+                    BitBake fires multiple "BuildStarted" events when
+                    multiconfig is enabled (one per configuration).
+                    For more information, see the
+                    "<ulink url='&YOCTO_DOCS_BB_URL;#events'>Events</ulink>"
+                    section in the BitBake User Manual.
+                    </para></listitem>
+                <listitem><para>
+                    By default, the <filename>security_flags.inc</filename> file
+                    sets a <filename>GCCPIE</filename> variable with an option
+                    to enable Position Independent Executables (PIE) within
+                    <filename>gcc</filename>.
+                    Enabling PIE in the GNU C Compiler (GCC) makes Return
+                    Oriented Programming (ROP) attacks much more difficult to
+                    execute.
+                    </para></listitem>
+                <listitem><para>
+                    OE-Core now provides a
+                    <filename>bitbake-layers</filename> plugin that implements
+                    a "create-layer" subcommand.
+                    The implementation of this subcommand has resulted in the
+                    <filename>yocto-layer</filename> script being deprecated.
+                    The script will likely be removed in the next Yocto
+                    Project release.
+                    </para></listitem>
+                <listitem><para>
+                      The <filename>vmdk</filename>, <filename>vdi</filename>,
+                      and <filename>qcow2</filename> image file types are now
+                      used in conjunction with the "wic" image type through
+                      <filename>CONVERSION_CMD</filename>.
+                      Consequently, the equivalent image types are now
+                      <filename>wic.vmdk</filename>, <filename>wic.vdi</filename>,
+                      and <filename>wic.qcow2</filename>, respectively.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>do_image_&lt;type&gt;[depends]</filename> has
+                    replaced <filename>IMAGE_DEPENDS_&lt;type&gt;</filename>.
+                    If you have your own classes that implement custom image
+                    types, then you need to update them.
+                    </para></listitem>
+                <listitem><para>
+                    OpenSSL 1.1 has been introduced.
+                    However, the default is still 1.0.x through the
+                    <link linkend='var-PREFERRED_VERSION'><filename>PREFERRED_VERSION</filename></link>
+                    variable.
+                    This preference is due to remaining compatibility issues
+                    with other software.
+                    The
+                    <link linkend='var-PROVIDES'><filename>PROVIDES</filename></link>
+                    variable in the openssl 1.0 recipe now includes "openssl10"
+                    as a marker that can be used in
+                    <link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>
+                    within recipes that build software that still depends on
+                    OpenSSL 1.0 (see the sketch following this list).
+                    </para></listitem>
+                <listitem><para>
+                    To ensure consistent behavior, BitBake's "-r" and "-R"
+                    options (i.e. prefile and postfile), which are used to
+                    read or post-read additional configuration files from the
+                    command line, now only affect the current BitBake command.
+                    Before these BitBake changes, these options would "stick"
+                    for future executions.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
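+
+        <para>
+            For the OpenSSL change above, a minimal sketch of a hypothetical
+            recipe that still requires the 1.0 series might simply add:
+            <literallayout class='monospaced'>
+     DEPENDS += "openssl10"
+            </literallayout>
+        </para>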
+    </section>
+</section>
 </chapter>
 <!--
 vim: expandtab tw=80 ts=4
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-bitbake.xml b/import-layers/yocto-poky/documentation/ref-manual/ref-bitbake.xml
index 2f36e16..17f4c54 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/ref-bitbake.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-bitbake.xml
@@ -8,7 +8,7 @@
 
     <para>
         BitBake is a program written in Python that interprets the
-        <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink> used by
+        <link linkend='metadata'>Metadata</link> used by
         the OpenEmbedded build system.
         At some point, developers wonder what actually happens when you enter:
         <literallayout class='monospaced'>
@@ -24,7 +24,7 @@
         BitBake strives to be a generic "task" executor that is capable of handling complex dependency relationships.
         As such, it has no real knowledge of what the tasks being executed actually do.
         BitBake just considers a list of tasks with dependencies and handles
-        <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+        <link linkend='metadata'>Metadata</link>
         consisting of variables in a certain format that get passed to the tasks.
     </note>
 
@@ -36,9 +36,10 @@
         </para>
 
         <para>
-            The first thing BitBake does is look for the <filename>bitbake.conf</filename> file.
+            The first thing BitBake does is look for the
+            <filename>bitbake.conf</filename> file.
             This file resides in the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+            <link linkend='source-directory'>Source Directory</link>
             within the <filename>meta/conf/</filename> directory.
             BitBake finds it by examining its
             <link linkend='var-BBPATH'><filename>BBPATH</filename></link> environment
@@ -92,8 +93,8 @@
             <filename>meta/recipes-*/</filename> directory within Poky.
             Adding extra content to <filename>BBFILES</filename> is best achieved through the use of
             BitBake layers as described in the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and
-            Creating Layers</ulink>" section of the Yocto Project Development Manual.
+            "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
+            section of the Yocto Project Development Tasks Manual.
         </para>
 
         <para>
@@ -227,13 +228,14 @@
 
         <para>
             Dependencies are defined through several variables.
-            You can find information about variables BitBake uses in the BitBake documentation,
-            which is found in the <filename>bitbake/doc/manual</filename> directory within the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+            You can find information about variables BitBake uses in the
+            BitBake documentation, which is found in the
+            <filename>bitbake/doc/manual</filename> directory within the
+            <link linkend='source-directory'>Source Directory</link>.
             At a basic level, it is sufficient to know that BitBake uses the
             <filename><link linkend='var-DEPENDS'>DEPENDS</link></filename> and
-            <filename><link linkend='var-RDEPENDS'>RDEPENDS</link></filename> variables when
-            calculating dependencies.
+            <filename><link linkend='var-RDEPENDS'>RDEPENDS</link></filename>
+            variables when calculating dependencies.
         </para>
     </section>
 
@@ -448,20 +450,23 @@
             You can find information about the options and formats of entries for specific
             fetchers in the BitBake manual located in the
             <filename>bitbake/doc/manual</filename> directory of the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+            <link linkend='source-directory'>Source Directory</link>.
         </para>
 
         <para>
-            One useful feature for certain Source Code Manager (SCM) fetchers is the ability to
-            "auto-update" when the upstream SCM changes version.
-            Since this ability requires certain functionality from the SCM, not all
-            systems support it.
-            Currently Subversion, Bazaar and to a limited extent, Git support the ability to "auto-update".
+            One useful feature for certain Source Code Manager (SCM) fetchers
+            is the ability to "auto-update" when the upstream SCM changes
+            version.
+            Since this ability requires certain functionality from the SCM,
+            not all systems support it.
+            Currently, Subversion, Bazaar, and, to a limited extent, Git
+            support the ability to "auto-update".
             This feature works using the <filename><link linkend='var-SRCREV'>SRCREV</link></filename>
             variable.
             See the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#platdev-appdev-srcrev'>Using an External SCM</ulink>" section
-            in the Yocto Project Development Manual for more information.
+            "<ulink url='&YOCTO_DOCS_DEV_URL;#platdev-appdev-srcrev'>Using an External SCM</ulink>"
+            section in the Yocto Project Development Tasks Manual for more
+            information.
         </para>
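+
+        <para>
+            As a minimal sketch, assuming a recipe that fetches from Git,
+            one common way to enable this behavior is to set
+            <filename>SRCREV</filename> to the
+            <filename>AUTOREV</filename> value in the recipe:
+            <literallayout class='monospaced'>
+     SRCREV = "${AUTOREV}"
+            </literallayout>
+            Depending on the recipe, you might also need to include
+            <filename>SRCPV</filename> in
+            <link linkend='var-PV'><filename>PV</filename></link> so that
+            package versions change as new revisions are fetched.
+        </para>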
 
     </section>
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-classes.xml b/import-layers/yocto-poky/documentation/ref-manual/ref-classes.xml
index c88162b..5961d3e 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/ref-classes.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-classes.xml
@@ -16,12 +16,12 @@
 </para>
 
 <para>
-    Any <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink> usually
+    Any <link linkend='metadata'>Metadata</link> usually
     found in a recipe can also be placed in a class file.
     Class files are identified by the extension <filename>.bbclass</filename>
     and are usually placed in a <filename>classes/</filename> directory beneath
     the <filename>meta*/</filename> directory found in the
-    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+    <link linkend='source-directory'>Source Directory</link>.
     Class files can also be pointed to by
     <link linkend='var-BUILDDIR'><filename>BUILDDIR</filename></link>
     (e.g. <filename>build/</filename>) in the same way as
@@ -36,7 +36,7 @@
     This chapter discusses only the most useful and important classes.
     Other classes do exist within the <filename>meta/classes</filename>
     directory in the
-    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+    <link linkend='source-directory'>Source Directory</link>.
     You can reference the <filename>.bbclass</filename> files directly
     for more information.
 </para>
@@ -94,7 +94,7 @@
     <para>
         For more details on the source archiver, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle'>Maintaining Open Source License Compliance During Your Product's Lifecycle</ulink>"
-        section in the Yocto Project Development Manual.
+        section in the Yocto Project Development Tasks Manual.
         You can also see the
         <link linkend='var-ARCHIVER_MODE'><filename>ARCHIVER_MODE</filename></link>
         variable for information about the variable flags (varflags)
@@ -122,7 +122,7 @@
         These classes can also work with software that emulates Autotools.
         For more information, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#new-recipe-autotooled-package'>Autotooled Package</ulink>"
-        section in the Yocto Project Development Manual.
+        section in the Yocto Project Development Tasks Manual.
     </para>
 
     <para>
@@ -333,7 +333,7 @@
         For details on how the class works, see the
         <filename>meta/classes/bluetooth.bbclass</filename> file in the Yocto
         Project
-        <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+        <link linkend='source-directory'>Source Directory</link>.
     </para>
 </section>
 
@@ -641,7 +641,7 @@
         Distribution policy dictates whether to include this class.
         See the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#platdev-appdev-devshell'>Using a Development Shell</ulink>" section
-        in the Yocto Project Development Manual for more information about
+        in the Yocto Project Development Tasks Manual for more information about
         using <filename>devshell</filename>.
     </para>
 </section>
@@ -816,11 +816,11 @@
         For more information on the
         <filename>externalsrc</filename> class, see the comments in
         <filename>meta/classes/externalsrc.bbclass</filename> in the
-        <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+        <link linkend='source-directory'>Source Directory</link>.
         For information on how to use the <filename>externalsrc</filename>
         class, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#building-software-from-an-external-source'>Building Software from an External Source</ulink>"
-        section in the Yocto Project Development Manual.
+        section in the Yocto Project Development Tasks Manual.
     </para>
 </section>
 
@@ -1247,7 +1247,7 @@
         </itemizedlist>
         For information on customizing images, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#usingpoky-extend-customimage'>Customizing Images</ulink>"
-        section in the Yocto Project Development Manual.
+        section in the Yocto Project Development Tasks Manual.
         For information on how images are created, see the
         "<link linkend='images-dev-environment'>Images</link>" section elsewhere
         in this manual.
@@ -1286,14 +1286,15 @@
         <filename>image_types</filename> must also appear in
         <filename>IMAGE_CLASSES</filename>.
     </para>
-</section>
-
-<section id='ref-classes-image_types_uboot'>
-    <title><filename>image_types_uboot.bbclass</filename></title>
 
     <para>
-        The <filename>image_types_uboot</filename> class
-        defines additional image types specifically for the U-Boot bootloader.
+        This class also handles conversion and compression of images.
+        <note>
+            To build a VMware VMDK image, you need to add "wic.vmdk" to
+            <link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>.
+            The same applies to VirtualBox Virtual Disk Image
+            ("vdi") and QEMU Copy On Write Version 2 ("qcow2") images.
+        </note>
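+        As a minimal sketch, assuming you set this in your
+        <filename>local.conf</filename> file, enabling one of these
+        conversions might look like the following:
+        <literallayout class='monospaced'>
+     IMAGE_FSTYPES += "wic.vmdk"
+        </literallayout>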
     </para>
 </section>
 
@@ -1370,27 +1371,6 @@
     </para>
 </section>
 
-<section id='ref-classes-image-vm'>
-    <title><filename>image-vm.bbclass</filename></title>
-
-    <para>
-        The <filename>image-vm</filename> class supports building VM
-        images.
-    </para>
-</section>
-
-<section id='ref-classes-image-vmdk'>
-    <title><filename>image-vmdk.bbclass</filename></title>
-
-    <para>
-        The <filename>image-vmdk</filename> class supports building VMware
-        VMDK images.
-        Normally, you do not use this class directly.
-        Instead, you add "vmdk" to
-        <link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>.
-    </para>
-</section>
-
 <section id='ref-classes-insane'>
     <title><filename>insane.bbclass</filename></title>
 
@@ -1788,27 +1768,6 @@
                 </note>
                 </para></listitem>
 -->
-            <listitem><para><emphasis><filename>unsafe-references-in-scripts:</filename></emphasis>
-                Reports when a script file installed in
-                <filename>${base_libdir}</filename>,
-                <filename>${base_bindir}</filename>, or
-                <filename>${base_sbindir}</filename>, depends on files
-                installed under <filename>${exec_prefix}</filename>.
-                This dependency is a concern if you want the system to remain
-                basically operable if <filename>/usr</filename> is mounted
-                separately and is not mounted.
-                <note>
-                    Defaults for binaries installed in
-                    <filename>${base_libdir}</filename>,
-                    <filename>${base_bindir}</filename>, and
-                    <filename>${base_sbindir}</filename> are
-                    <filename>/lib</filename>, <filename>/bin</filename>, and
-                    <filename>/sbin</filename>, respectively.
-                    The default for a binary installed
-                    under <filename>${exec_prefix}</filename> is
-                    <filename>/usr</filename>.
-                </note>
-                </para></listitem>
             <listitem><para><emphasis><filename>useless-rpaths:</filename></emphasis>
                 Checks for dynamic library load paths (rpaths) in the binaries that
                 by default on a standard system are searched by the linker (e.g.
@@ -1900,7 +1859,7 @@
         you build the kernel image.
         For information on how to build an initramfs, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#building-an-initramfs-image'>Building an Initial RAM Filesystem (initramfs) Image</ulink>"
-        section in the Yocto Project Development Manual.
+        section in the Yocto Project Development Tasks Manual.
     </para>
 
     <para>
@@ -2194,7 +2153,7 @@
     <para>
         For more information on using the Multilib feature, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#combining-multiple-versions-library-files-into-one-image'>Combining Multiple Versions of Library Files into One Image</ulink>"
-        section in the Yocto Project Development Manual.
+        section in the Yocto Project Development Tasks Manual.
     </para>
 </section>
 
@@ -2328,7 +2287,7 @@
         The <filename>oelint</filename> class is an
         obsolete lint checking tool that exists in
         <filename>meta/classes</filename> in the
-        <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+        <link linkend='source-directory'>Source Directory</link>.
     </para>
 
     <para>
@@ -2393,7 +2352,7 @@
         <filename><link linkend='var-PACKAGE_CLASSES'>PACKAGE_CLASSES</link></filename>
         variable defined in your <filename>conf/local.conf</filename>
         configuration file, which is located in the
-        <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+        <link linkend='build-directory'>Build Directory</link>.
         When defining the variable, you can specify one or more package types.
         Since images are generated from packages, a packaging class is
         needed to enable image generation.
@@ -2407,7 +2366,7 @@
         on the target (i.e. runtime installation of packages).
         For more information, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#using-runtime-package-management'>Using Runtime Package Management</ulink>"
-         section in the Yocto Project Development Manual.
+         section in the Yocto Project Development Tasks Manual.
     </para>
 
     <para>
@@ -2419,7 +2378,7 @@
         all dependencies previously built.
         The reason for this discrepancy is because the RPM package manager
         creates and processes more
-        <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink> than the
+        <link linkend='metadata'>Metadata</link> than the
         IPK package manager.
         Consequently, you might consider setting
         <filename>PACKAGE_CLASSES</filename> to "package_ipk" if you are
@@ -2587,7 +2546,7 @@
     <para>
         For information on how to use this class, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#usingpoky-extend-customimage-customtasks'>Customizing Images Using Custom Package Groups</ulink>"
-        section in the Yocto Project Development Manual.
+        section in the Yocto Project Development Tasks Manual.
     </para>
 
     <para>
@@ -2676,7 +2635,8 @@
         <link linkend='ref-tasks-populate_sdk'><filename>do_populate_sdk</filename></link>
         task, see the
         "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-building-an-sdk-installer'>Building an SDK Installer</ulink>"
-        section in the Yocto Project Software Development Kit (SDK) Developer's Guide.
+        section in the Yocto Project Application Development and the
+        Extensible Software Development Kit (eSDK) manual.
     </para>
 </section>
 
@@ -2756,8 +2716,8 @@
         <link linkend='ref-tasks-populate_sdk'><filename>do_populate_sdk</filename></link>
         task, see the
         "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-building-an-sdk-installer'>Building an SDK Installer</ulink>"
-        section in the Yocto Project Software Development Kit (SDK) Developer's
-        Guide.
+        section in the Yocto Project Application Development and the
+        Extensible Software Development Kit (eSDK) manual.
     </para>
 </section>
 
@@ -2830,8 +2790,8 @@
         <link linkend='var-DISTRO_FEATURES'><filename>DISTRO_FEATURES</filename></link>.
         See the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#testing-packages-with-ptest'>Testing Packages With ptest</ulink>"
-        section in the Yocto Project Development Manual for more information
-        on ptest.
+        section in the Yocto Project Development Tasks Manual for more
+        information on ptest.
     </para>
 </section>
 
@@ -2847,7 +2807,7 @@
     <para>
         For information on setting up and running ptests, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#testing-packages-with-ptest'>Testing Packages With ptest</ulink>"
-        section in the Yocto Project Development Manual.
+        section in the Yocto Project Development Tasks Manual.
     </para>
 </section>
 
@@ -2988,7 +2948,7 @@
         as the build progresses, you can enable <filename>rm_work</filename>
         by adding the following to your <filename>local.conf</filename> file,
         which is found in the
-        <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+        <link linkend='build-directory'>Build Directory</link>.
         <literallayout class='monospaced'>
     INHERIT += "rm_work"
         </literallayout>
@@ -3455,7 +3415,7 @@
     <para>
         For more information on <filename>systemd</filename>, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#selecting-an-initialization-manager'>Selecting an Initialization Manager</ulink>"
-        section in the Yocto Project Development Manual.
+        section in the Yocto Project Development Tasks Manual.
     </para>
 </section>
 
@@ -3555,7 +3515,7 @@
     <para>
         For information on how to enable, run, and create new tests, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing'>Performing Automated Runtime Testing</ulink>"
-        section in the Yocto Project Development Manual.
+        section in the Yocto Project Development Tasks Manual.
     </para>
 </section>
 
@@ -3737,7 +3697,8 @@
         provide pathnames for links, default links for targets, and
         so forth.
         For details on how to use this class, see the comments in the
-        <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/meta/classes/update-alternatives.bbclass'><filename>update-alternatives.bbclass</filename></ulink>.
+        <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/meta/classes/update-alternatives.bbclass'><filename>update-alternatives.bbclass</filename></ulink>
+        file.
     </para>
 
     <note>
@@ -3777,8 +3738,9 @@
         For example, if you have packages that contain system services that
         should be run under their own user or group, you can use these classes
         to enable creation of the user or group.
-        The <filename>meta-skeleton/recipes-skeleton/useradd/useradd-example.bb</filename>
-        recipe in the <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+        The
+        <filename>meta-skeleton/recipes-skeleton/useradd/useradd-example.bb</filename>
+        recipe in the <link linkend='source-directory'>Source Directory</link>
         provides a simple example that shows how to add three
         users and groups to two packages.
         See the <filename>useradd-example.bb</filename> recipe for more
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-development-environment.xml b/import-layers/yocto-poky/documentation/ref-manual/ref-development-environment.xml
new file mode 100644
index 0000000..52197d1
--- /dev/null
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-development-environment.xml
@@ -0,0 +1,2761 @@
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
+[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
+
+<chapter id='ref-development-environment'>
+<title>The Yocto Project Development Environment</title>
+
+<para>
+    This chapter takes a look at the Yocto Project development
+    environment and also provides a detailed look at what goes on during
+    development in that environment.
+    The chapter provides Yocto Project development environment concepts that
+    help you understand how work is accomplished in an open source environment,
+    which is very different from work accomplished in a closed,
+    proprietary environment.
+</para>
+
+<para>
+    Specifically, this chapter addresses open source philosophy, workflows,
+    Git, source repositories, licensing, recipe syntax, and development
+    syntax.
+</para>
+
+<section id='open-source-philosophy'>
+    <title>Open Source Philosophy</title>
+
+    <para>
+        Open source philosophy is characterized by software development
+        directed by peer production and collaboration through an active
+        community of developers.
+        Contrast this to the more standard centralized development models
+        used by commercial software companies where a finite set of developers
+        produces a product for sale using a defined set of procedures that
+        ultimately result in an end product whose architecture and source
+        material are closed to the public.
+    </para>
+
+    <para>
+        Open source projects conceptually have differing concurrent agendas,
+        approaches, and production methods.
+        These facets of the development process can come from anyone in the
+        public (community) that has a stake in the software project.
+        The open source environment contains new copyright, licensing, domain,
+        and consumer issues that differ from the more traditional development
+        environment.
+        In an open source environment, the end product, source material,
+        and documentation are all available to the public at no cost.
+    </para>
+
+    <para>
+        A benchmark example of an open source project is the Linux kernel,
+        which was initially conceived and created by Finnish computer science
+        student Linus Torvalds in 1991.
+        Conversely, a good example of a non-open source project is the
+        <trademark class='registered'>Windows</trademark> family of operating
+        systems developed by
+        <trademark class='registered'>Microsoft</trademark> Corporation.
+    </para>
+
+    <para>
+        Wikipedia has a good historical description of the Open Source
+        Philosophy
+        <ulink url='http://en.wikipedia.org/wiki/Open_source'>here</ulink>.
+        You can also find helpful information on how to participate in the
+        Linux Community
+        <ulink url='http://ldn.linuxfoundation.org/book/how-participate-linux-community'>here</ulink>.
+    </para>
+</section>
+
+<section id='workflows'>
+    <title>Workflows</title>
+
+    <para>
+        This section provides workflow concepts using the Yocto Project and
+        Git.
+        In particular, the information covers basic practices that describe
+        roles and actions in a collaborative development environment.
+        <note>
+            If you are familiar with this type of development environment, you
+            might not want to read this section.
+        </note>
+    </para>
+
+    <para>
+        The Yocto Project files are maintained using Git in "master"
+        branches whose Git histories track every change and whose structure
+        provides branches for all diverging functionality.
+        Although there is no need to use Git, many open source projects do so.
+    </para>
+
+    <para>
+        For the Yocto Project, a key individual called the "maintainer" is
+        responsible for the "master" branch of a given Git repository.
+        The "master" branch is the “upstream” repository from which final or
+        most recent builds of the project occur.
+        The maintainer is responsible for accepting changes from other
+        developers and for organizing the underlying branch structure to
+        reflect release strategies and so forth.
+        <note>For information on finding out who is responsible for (maintains)
+            a particular area of code, see the
+            "<ulink url='&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change'>Submitting a Change to the Yocto Project</ulink>"
+            section of the Yocto Project Development Tasks Manual.
+        </note>
+    </para>
+
+    <para>
+        The Yocto Project <filename>poky</filename> Git repository also has an
+        upstream contribution Git repository named
+        <filename>poky-contrib</filename>.
+        You can see all the branches in this repository using the web interface
+        of the
+        <ulink url='&YOCTO_GIT_URL;'>Source Repositories</ulink> organized
+        within the "Poky Support" area.
+        These branches temporarily hold changes to the project that have been
+        submitted or committed by the Yocto Project development team and by
+        community members who contribute to the project.
+        The maintainer determines if the changes are qualified to be moved
+        from the "contrib" branches into the "master" branch of the Git
+        repository.
+    </para>
+
+    <para>
+        Developers (including contributing community members) create and
+        maintain cloned repositories of the upstream "master" branch.
+        The cloned repositories are local to their development platforms and
+        are used to develop changes.
+        When a developer is satisfied with a particular feature or change,
+        they "push" the changes to the appropriate "contrib" repository.
+    </para>
+
+    <para>
+        Developers are responsible for keeping their local repository
+        up-to-date with "master".
+        They are also responsible for straightening out any conflicts that
+        might arise within files that are being worked on simultaneously by
+        more than one person.
+        All this work is done locally on the developer’s machine before
+        anything is pushed to a "contrib" area and examined at the maintainer’s
+        level.
+    </para>
+
+    <para>
+        A somewhat formal method exists by which developers commit changes
+        and push them into the "contrib" area and subsequently request that
+        the maintainer include them into "master".
+        This process is called “submitting a patch” or "submitting a change."
+        For information on submitting patches and changes, see the
+        "<ulink url='&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change'>Submitting a Change to the Yocto Project</ulink>"
+        section in the Yocto Project Development Tasks Manual.
+    </para>
+
+    <para>
+        To summarize the development workflow:  a single point of entry
+        exists for changes into the project’s "master" branch of the
+        Git repository, which is controlled by the project’s maintainer.
+        And, a set of developers exist who independently develop, test, and
+        submit changes to "contrib" areas for the maintainer to examine.
+        The maintainer then chooses which changes are going to become a
+        permanent part of the project.
+    </para>
+
+    <para>
+        <imagedata fileref="figures/git-workflow.png" width="6in" depth="3in" align="left" scalefit="1" />
+    </para>
+
+    <para>
+        While each development environment is unique, there are some best
+        practices or methods that help development run smoothly.
+        The following list describes some of these practices.
+        For more information about Git workflows, see the workflow topics in
+        the
+        <ulink url='http://book.git-scm.com'>Git Community Book</ulink>.
+        <itemizedlist>
+            <listitem><para>
+                <emphasis>Make Small Changes:</emphasis>
+                It is best to keep the changes you commit small as compared to
+                bundling many disparate changes into a single commit.
+                This practice not only keeps things manageable but also allows
+                the maintainer to more easily include or refuse changes.</para>
+
+                <para>It is also good practice to leave the repository in a
+                state that allows you to still successfully build your project.
+                In other words, do not commit half of a feature,
+                then add the other half as a separate, later commit.
+                Each commit should take you from one buildable project state
+                to another buildable state.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Use Branches Liberally:</emphasis>
+                It is very easy to create, use, and delete local branches in
+                your working Git repository.
+                You can name these branches anything you like.
+                It is helpful to give them names associated with the particular
+                feature or change on which you are working.
+                Once you are done with a feature or change and have merged it
+                into your local master branch, simply discard the temporary
+                branch.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Merge Changes:</emphasis>
+                The <filename>git merge</filename> command allows you to take
+                the changes from one branch and fold them into another branch.
+                This process is especially helpful when more than a single
+                developer might be working on different parts of the same
+                feature.
+                Merging changes also automatically identifies any collisions
+                or "conflicts" that might happen as a result of the same lines
+                of code being altered by two different developers.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Manage Branches:</emphasis>
+                Because branches are easy to use, you should use a system
+                where branches indicate varying levels of code readiness.
+                For example, you can have a "work" branch to develop in, a
+                "test" branch where the code or change is tested, a "stage"
+                branch where changes are ready to be committed, and so forth.
+                As your project develops, you can merge code across the
+                branches to reflect ever-increasing stable states of the
+                development.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Use Push and Pull:</emphasis>
+                The push-pull workflow is based on the concept of developers
+                "pushing" local commits to a remote repository, which is
+                usually a contribution repository.
+                This workflow is also based on developers "pulling" known
+                states of the project down into their local development
+                repositories.
+                The workflow easily allows you to pull changes submitted by
+                other developers from the upstream repository into your
+                work area ensuring that you have the most recent software
+                on which to develop.
+                The Yocto Project has two scripts named
+                <filename>create-pull-request</filename> and
+                <filename>send-pull-request</filename> that ship with the
+                release to facilitate this workflow.
+                You can find these scripts in the <filename>scripts</filename>
+                folder of the
+                <link linkend='source-directory'>Source Directory</link>.
+                For information on how to use these scripts, see the
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#pushing-a-change-upstream'>Using Scripts to Push a Change Upstream and Request a Pull</ulink>"
+                section in the Yocto Project Development Tasks Manual.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Patch Workflow:</emphasis>
+                This workflow allows you to notify the maintainer through an
+                email that you have a change (or patch) you would like
+                considered for the "master" branch of the Git repository.
+                To send this type of change, you format the patch and then
+                send the email using the Git commands
+                <filename>git format-patch</filename> and
+                <filename>git send-email</filename>.
+                A brief sketch of this workflow follows this list.
+                For information on how to use these commands, see the
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change'>Submitting a Change to the Yocto Project</ulink>"
+                section in the Yocto Project Development Tasks Manual.
+                </para></listitem>
+        </itemizedlist>
+    </para>
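+
+    <para>
+        The following example briefly sketches the patch workflow described
+        above.
+        The branch name, file, commit message, and mailing list address are
+        purely illustrative:
+        <literallayout class='monospaced'>
+     $ git checkout -b my-fix origin/master
+     # edit files, then stage and commit the change
+     $ git add meta/recipes-example/example/example_1.0.bb
+     $ git commit -s -m "example: fix typo in SUMMARY"
+     # generate a patch for the most recent commit and email it for review
+     $ git format-patch -1
+     $ git send-email --to=<replaceable>mailing-list-address</replaceable> 0001-*.patch
+        </literallayout>
+    </para>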
+</section>
+
+<section id='git'>
+    <title>Git</title>
+
+    <para>
+        The Yocto Project makes extensive use of Git, which is a
+        free, open source distributed version control system.
+        Git supports distributed development, non-linear development,
+        and can handle large projects.
+        It is best that you have some fundamental understanding
+        of how Git tracks projects and how to work with Git if
+        you are going to use the Yocto Project for development.
+        This section provides a quick overview of how Git works and
+        provides you with a summary of some essential Git commands.
+        <note><title>Notes</title>
+            <itemizedlist>
+                <listitem><para>
+                    For more information on Git, see
+                    <ulink url='http://git-scm.com/documentation'></ulink>.
+                    </para></listitem>
+                <listitem><para>
+                    If you need to download Git, it is recommended that you add
+                    Git to your system through your distribution's "software
+                    store" (e.g. for Ubuntu, use the Ubuntu Software feature).
+                    For the Git download page, see
+                    <ulink url='http://git-scm.com/download'></ulink>.
+                    </para></listitem>
+                <listitem><para>
+                    For examples beyond the limited few in this section on how
+                    to use Git with the Yocto Project, see the
+                    "<ulink url='&YOCTO_DOCS_DEV_URL;#working-with-yocto-project-source-files'>Working With Yocto Project Source Files</ulink>"
+                    section in the Yocto Project Development Tasks Manual.
+                    </para></listitem>
+            </itemizedlist>
+        </note>
+    </para>
+
+    <section id='repositories-tags-and-branches'>
+        <title>Repositories, Tags, and Branches</title>
+
+        <para>
+            As mentioned briefly in the previous section and also in the
+            "<link linkend='workflows'>Workflows</link>" section,
+            the Yocto Project maintains source repositories at
+            <ulink url='&YOCTO_GIT_URL;/cgit.cgi'></ulink>.
+            If you look at the web interface of the repositories, each item
+            is a separate Git repository.
+        </para>
+
+        <para>
+            Git repositories use branching techniques that track content
+            change (not files) within a project (e.g. a new feature or updated
+            documentation).
+            Creating a tree-like structure based on project divergence allows
+            for excellent historical information over the life of a project.
+            This methodology also allows for an environment from which you can
+            do lots of local experimentation on projects as you develop
+            changes or new features.
+        </para>
+
+        <para>
+            A Git repository represents all development efforts for a given
+            project.
+            For example, the Git repository <filename>poky</filename> contains
+            all changes and developments for Poky over the course of its
+            entire life.
+            That means that all changes that make up all releases are captured.
+            The repository maintains a complete history of changes.
+        </para>
+
+        <para>
+            You can create a local copy of any repository by "cloning" it
+            with the <filename>git clone</filename> command.
+            When you clone a Git repository, you end up with an identical
+            copy of the repository on your development system.
+            Once you have a local copy of a repository, you can take steps to
+            develop locally.
+            For examples on how to clone Git repositories, see the
+            "<ulink url='&YOCTO_DOCS_DEV_URL;#working-with-yocto-project-source-files'>Working With Yocto Project Source Files</ulink>"
+            section in the Yocto Project Development Tasks Manual.
+        </para>
+
+        <para>
+            It is important to understand that Git tracks content change and
+            not files.
+            Git uses "branches" to organize different development efforts.
+            For example, the <filename>poky</filename> repository has
+            several branches that include the current "&DISTRO_NAME_NO_CAP;"
+            branch, the "master" branch, and many branches for past
+            Yocto Project releases.
+            You can see all the branches by going to
+            <ulink url='&YOCTO_GIT_URL;/cgit.cgi/poky/'></ulink> and
+            clicking on the
+            <filename><ulink url='&YOCTO_GIT_URL;/cgit.cgi/poky/refs/heads'>[...]</ulink></filename>
+            link beneath the "Branch" heading.
+        </para>
+
+        <para>
+            Each of these branches represents a specific area of development.
+            The "master" branch represents the current or most recent
+            development.
+            All other branches represent offshoots of the "master" branch.
+        </para>
+
+        <para>
+            When you create a local copy of a Git repository, the copy has
+            the same set of branches as the original.
+            This means you can use Git to create a local working area
+            (also called a branch) that tracks a specific development branch
+            from the upstream source Git repository.
+            In other words, you can define your local Git environment to
+            work on any development branch in the repository.
+            To help illustrate, consider the following example Git commands:
+            <literallayout class='monospaced'>
+     $ cd ~
+     $ git clone git://git.yoctoproject.org/poky
+     $ cd poky
+     $ git checkout -b &DISTRO_NAME_NO_CAP; origin/&DISTRO_NAME_NO_CAP;
+            </literallayout>
+            In the previous example after moving to the home directory, the
+            <filename>git clone</filename> command creates a
+            local copy of the upstream <filename>poky</filename> Git repository.
+            By default, Git checks out the "master" branch for your work.
+            After changing the working directory to the new local repository
+            (i.e. <filename>poky</filename>), the
+            <filename>git checkout</filename> command creates
+            and checks out a local branch named "&DISTRO_NAME_NO_CAP;", which
+            tracks the upstream "origin/&DISTRO_NAME_NO_CAP;" branch.
+            Changes you make while in this branch would ultimately affect
+            the upstream "&DISTRO_NAME_NO_CAP;" branch of the
+            <filename>poky</filename> repository.
+        </para>
+
+        <para>
+            It is important to understand that when you create and checkout a
+            local working branch based on a branch name,
+            your local environment matches the "tip" of that particular
+            development branch at the time you created your local branch,
+            which could be different from the files in the "master" branch
+            of the upstream repository.
+            In other words, creating and checking out a local branch based on
+            the "&DISTRO_NAME_NO_CAP;" branch name is not the same as
+            cloning and checking out the "master" branch of the repository.
+            Keep reading to see how you create a local snapshot of a Yocto
+            Project Release.
+        </para>
+
+        <para>
+            Git uses "tags" to mark specific changes in a repository.
+            Typically, a tag is used to mark a special point such as the final
+            change before a project is released.
+            You can see the tags used with the <filename>poky</filename> Git
+            repository by going to
+            <ulink url='&YOCTO_GIT_URL;/cgit.cgi/poky/'></ulink> and
+            clicking on the
+            <filename><ulink url='&YOCTO_GIT_URL;/cgit.cgi/poky/refs/tags'>[...]</ulink></filename>
+            link beneath the "Tag" heading.
+        </para>
+
+        <para>
+            Some key tags for the <filename>poky</filename> repository are
+            <filename>jethro-14.0.3</filename>,
+            <filename>morty-16.0.1</filename>,
+            <filename>pyro-17.0.0</filename>, and
+            <filename>&DISTRO_NAME_NO_CAP;-&POKYVERSION;</filename>.
+            These tags represent Yocto Project releases.
+        </para>
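+
+        <para>
+            If you have already cloned the repository into your home directory
+            as in the earlier example, a quick sketch for listing these release
+            tags locally (the pattern shown is only an example) is:
+            <literallayout class='monospaced'>
+     $ cd ~/poky
+     $ git tag -l 'pyro-*'
+            </literallayout>
+        </para>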
+
+        <para>
+            When you create a local copy of the Git repository, you also
+            have access to all the tags in the upstream repository.
+            Similar to branches, you can create and checkout a local working
+            Git branch based on a tag name.
+            When you do this, you get a snapshot of the Git repository that
+            reflects the state of the files when the change associated with
+            that tag was made.
+            The most common use is to checkout a working branch that matches
+            a specific Yocto Project release.
+            Here is an example:
+            <literallayout class='monospaced'>
+     $ cd ~
+     $ git clone git://git.yoctoproject.org/poky
+     $ cd poky
+     $ git fetch --all --tags --prune
+     $ git checkout tags/pyro-17.0.0 -b my-pyro-17.0.0
+            </literallayout>
+            In this example, the name of the top-level directory of your
+            local Yocto Project repository is <filename>poky</filename>.
+            After moving to the <filename>poky</filename> directory, the
+            <filename>git fetch</filename> command makes all the upstream
+            tags available locally in your repository.
+            Finally, the <filename>git checkout</filename> command
+            creates and checks out a branch named "my-pyro-17.0.0" that is
+            based on the specific change upstream in the repository
+            associated with the "pyro-17.0.0" tag.
+            The files in your repository now exactly match that particular
+            Yocto Project release as it is tagged in the upstream Git
+            repository.
+            It is important to understand that when you create and
+            checkout a local working branch based on a tag, your environment
+            matches a specific point in time and not the entire development
+            branch (i.e. the "tip" of the branch).
+        </para>
+    </section>
+
+    <section id='basic-commands'>
+        <title>Basic Commands</title>
+
+        <para>
+            Git has an extensive set of commands that lets you manage changes
+            and perform collaboration over the life of a project.
+            Conveniently though, you can manage with a small set of basic
+            operations and workflows once you understand the basic
+            philosophy behind Git.
+            You do not have to be an expert in Git to be functional.
+            A good place to look for instruction on a minimal set of Git
+            commands is
+            <ulink url='http://git-scm.com/documentation'>here</ulink>.
+        </para>
+
+        <para>
+            If you do not know much about Git, you should educate
+            yourself by visiting the links previously mentioned.
+        </para>
+
+        <para>
+            The following list of Git commands briefly describes some basic
+            Git operations as a way to get started.
+            As with any set of commands, this list (in most cases) simply shows
+            the base commands and omits the many arguments they support.
+            A short sketch that combines several of these commands follows
+            the list.
+            See the Git documentation for complete descriptions and strategies
+            on how to use these commands:
+            <itemizedlist>
+                <listitem><para>
+                    <emphasis><filename>git init</filename>:</emphasis>
+                    Initializes an empty Git repository.
+                    You cannot use Git commands unless you have a
+                    <filename>.git</filename> repository.
+                    </para></listitem>
+                <listitem><para id='git-commands-clone'>
+                    <emphasis><filename>git clone</filename>:</emphasis>
+                    Creates a local clone of a Git repository that is on
+                    equal footing with a fellow developer’s Git repository
+                    or an upstream repository.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>git add</filename>:</emphasis>
+                    Locally stages updated file contents to the index that
+                    Git uses to track changes.
+                    You must stage all files that have changed before you
+                    can commit them.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>git commit</filename>:</emphasis>
+                    Creates a local "commit" that documents the changes you
+                    made.
+                    Only changes that have been staged can be committed.
+                    Commits are used for historical purposes, for determining
+                    if a maintainer of a project will allow the change,
+                    and for ultimately pushing the change from your local
+                    Git repository into the project’s upstream repository.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>git status</filename>:</emphasis>
+                    Reports any modified files that possibly need to be
+                    staged and gives you a status of where you stand regarding
+                    local commits as compared to the upstream repository.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>git checkout</filename> <replaceable>branch-name</replaceable>:</emphasis>
+                    Changes your working branch.
+                    This command is analogous to "cd".
+                    </para></listitem>
+                <listitem><para><emphasis><filename>git checkout -b</filename> <replaceable>working-branch</replaceable>:</emphasis>
+                    Creates and checks out a working branch on your local
+                    machine that you can use to isolate your work.
+                    It is a good idea to use local branches when adding
+                    specific features or changes.
+                    Using isolated branches facilitates easy removal of
+                    changes if they do not work out.
+                    </para></listitem>
+                <listitem><para><emphasis><filename>git branch</filename>:</emphasis>
+                    Displays the existing local branches associated with your
+                    local repository.
+                    The branch that you have currently checked out is noted
+                    with an asterisk character.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>git branch -D</filename> <replaceable>branch-name</replaceable>:</emphasis>
+                    Deletes an existing local branch.
+                    You need to be in a local branch other than the one you
+                    are deleting in order to delete
+                    <replaceable>branch-name</replaceable>.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>git pull</filename>:</emphasis>
+                    Retrieves information from an upstream Git repository
+                    and places it in your local Git repository.
+                    You use this command to make sure you are synchronized with
+                    the repository from which you are basing changes
+                    (e.g. the "master" branch).
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>git push</filename>:</emphasis>
+                    Sends all your committed local changes to the upstream Git
+                    repository that your local repository is tracking
+                    (e.g. a contribution repository).
+                    The maintainer of the project draws from these repositories
+                    to merge changes (commits) into the appropriate branch
+                    of the project's upstream repository.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>git merge</filename>:</emphasis>
+                    Combines or adds changes from one
+                    local branch of your repository with another branch.
+                    When you create a local Git repository, the default branch
+                    is named "master".
+                    A typical workflow is to create a temporary branch that is
+                    based off "master" that you would use for isolated work.
+                    You would make your changes in that isolated branch,
+                    stage and commit them locally, switch to the "master"
+                    branch, and then use the <filename>git merge</filename>
+                    command to apply the changes from your isolated branch
+                    into the currently checked out branch (e.g. "master").
+                    After the merge is complete and if you are done with
+                    working in that isolated branch, you can safely delete
+                    the isolated branch.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>git cherry-pick</filename>:</emphasis>
+                    Choose and apply specific commits from one branch
+                    into another branch.
+                    There are times when you might not be able to merge
+                    all the changes in one branch with
+                    another but need to pick out certain ones.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>gitk</filename>:</emphasis>
+                    Provides a GUI view of the branches and changes in your
+                    local Git repository.
+                    This command is a good way to graphically see where things
+                    have diverged in your local repository.
+                    <note>
+                        You need to install the <filename>gitk</filename>
+                        package on your development system to use this
+                        command.
+                    </note>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>git log</filename>:</emphasis>
+                    Reports a history of your commits to the repository.
+                    This report lists all commits regardless of whether you
+                    have pushed them upstream or not.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>git diff</filename>:</emphasis>
+                    Displays line-by-line differences between a local
+                    working file and the same file as understood by Git.
+                    This command is useful to see what you have changed
+                    in any given file.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
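+
+        <para>
+            The following short sketch strings several of these commands
+            together into a typical local sequence.
+            The branch name, file, and commit message are purely
+            illustrative:
+            <literallayout class='monospaced'>
+     $ git checkout -b my-feature        # create and check out a working branch
+     $ git status                        # report which files have changed
+     $ git diff                          # review the changes line by line
+     $ git add README                    # stage the changed file
+     $ git commit -m "README: describe the new feature"
+     $ git log --oneline -5              # confirm the new commit is recorded
+     $ git checkout master               # return to the "master" branch
+     $ git merge my-feature              # fold the work into "master"
+     $ git branch -D my-feature          # discard the temporary branch
+            </literallayout>
+        </para>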
+    </section>
+</section>
+
+<section id='yocto-project-repositories'>
+    <title>Yocto Project Source Repositories</title>
+
+    <para>
+        The Yocto Project team maintains complete source repositories for all
+        Yocto Project files at
+        <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi'></ulink>.
+        This web-based source code browser is organized into categories by
+        function such as IDE Plugins, Matchbox, Poky, Yocto Linux Kernel, and
+        so forth.
+        From the interface, you can click on any particular item in the "Name"
+        column and see the URL at the bottom of the page that you need to clone
+        a Git repository for that particular item.
+        Having a local Git repository of the
+        <link linkend='source-directory'>Source Directory</link>, which is
+        usually named "poky", allows
+        you to make changes, contribute to the history, and ultimately enhance
+        the Yocto Project's tools, Board Support Packages, and so forth.
+    </para>
+
+    <para>
+        For any supported release of Yocto Project, you can also go to the
+        <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink> and
+        select the "Downloads" tab and get a released tarball of the
+        <filename>poky</filename> repository or any supported BSP tarballs.
+        Unpacking these tarballs gives you a snapshot of the released
+        files.
+        <note><title>Notes</title>
+            <itemizedlist>
+                <listitem><para>
+                    The recommended method for setting up the Yocto Project
+                    <link linkend='source-directory'>Source Directory</link>
+                    and the files for supported BSPs
+                    (e.g., <filename>meta-intel</filename>) is to use
+                    <link linkend='git'>Git</link> to create a local copy of
+                    the upstream repositories.
+                    </para></listitem>
+                <listitem><para>
+                    Be sure to always work in matching branches for both
+                    the selected BSP repository and the
+                    <link linkend='source-directory'>Source Directory</link>
+                    (i.e. <filename>poky</filename>) repository.
+                    For example, if you have checked out the "master" branch
+                    of <filename>poky</filename> and you are going to use
+                    <filename>meta-intel</filename>, be sure to checkout the
+                    "master" branch of <filename>meta-intel</filename>.
+                    </para></listitem>
+            </itemizedlist>
+        </note>
+    </para>
+
+    <para>
+        In summary, here is where you can get the project files needed for
+        development:
+        <itemizedlist>
+            <listitem><para id='source-repositories'>
+                <emphasis>
+                <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi'>Source Repositories:</ulink>
+                </emphasis>
+                This area contains IDE Plugins, Matchbox, Poky, Poky Support,
+                Tools, Yocto Linux Kernel, and Yocto Metadata Layers.
+                You can create local copies of Git repositories for each of
+                these areas.</para>
+
+                <para>
+                <imagedata fileref="figures/source-repos.png" align="center" width="6in" depth="4in" />
+                For steps on how to view and access these upstream Git
+                repositories, see the
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#accessing-source-repositories'>Accessing Source Repositories</ulink>"
+                Section in the Yocto Project Development Tasks Manual.
+                </para></listitem>
+            <listitem><para><anchor id='index-downloads' />
+                <emphasis>
+                <ulink url='&YOCTO_DL_URL;/releases/'>Index of /releases:</ulink>
+                </emphasis>
+                This is an index of releases such as
+                the <trademark class='trade'>Eclipse</trademark>
+                Yocto Plug-in, miscellaneous support, Poky, Pseudo, installers
+                for cross-development toolchains, and all released versions of
+                Yocto Project in the form of images or tarballs.
+                Downloading and extracting these files does not produce a local
+                copy of the Git repository but rather a snapshot of a
+                particular release or image.</para>
+
+                <para>
+                <imagedata fileref="figures/index-downloads.png" align="center" width="6in" depth="3.5in" />
+                For steps on how to view and access these files, see the
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#accessing-index-of-releases'>Accessing Index of Releases</ulink>"
+                section in the Yocto Project Development Tasks Manual.
+                </para></listitem>
+            <listitem><para id='downloads-page'>
+                <emphasis>"Downloads" page for the
+                <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>:
+                </emphasis></para>
+
+                <para role="writernotes">This section will change due to
+                reworking of the YP Website.</para>
+
+                <para>The Yocto Project website includes a "Downloads" tab
+                that allows you to download any Yocto Project
+                release and Board Support Package (BSP) in tarball form.
+                The tarballs are similar to those found in the
+                <ulink url='&YOCTO_DL_URL;/releases/'>Index of /releases:</ulink> area.</para>
+
+                <para>
+                <imagedata fileref="figures/yp-download.png" align="center" width="6in" depth="4in" />
+                For steps on how to use the "Downloads" page, see the
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#using-the-downloads-page'>Using the Downloads Page</ulink>"
+                section in the Yocto Project Development Tasks Manual.
+                </para></listitem>
+        </itemizedlist>
+    </para>
+</section>
+
+<section id='licensing'>
+    <title>Licensing</title>
+
+    <para>
+        Because open source projects are open to the public, they have
+        different licensing structures in place.
+        License evolution for both Open Source and Free Software has an
+        interesting history.
+        If you are interested in this history, you can find basic information
+        here:
+        <itemizedlist>
+            <listitem><para>
+                <ulink url='http://en.wikipedia.org/wiki/Open-source_license'>Open source license history</ulink>
+                </para></listitem>
+            <listitem><para>
+                <ulink url='http://en.wikipedia.org/wiki/Free_software_license'>Free software license history</ulink>
+                </para></listitem>
+        </itemizedlist>
+    </para>
+
+    <para>
+        In general, the Yocto Project is broadly licensed under the
+        Massachusetts Institute of Technology (MIT) License.
+        MIT licensing permits the reuse of software within proprietary
+        software as long as the license is distributed with that software.
+        MIT is also compatible with the GNU General Public License (GPL).
+        Patches to the Yocto Project follow the upstream licensing scheme.
+        You can find information on the MIT license
+        <ulink url='http://www.opensource.org/licenses/mit-license.php'>here</ulink>.
+        You can find information on the GNU GPL
+        <ulink url='http://www.opensource.org/licenses/LGPL-3.0'>here</ulink>.
+    </para>
+
+    <para>
+        When you build an image using the Yocto Project, the build process
+        uses a known list of licenses to ensure compliance.
+        You can find this list in the
+        <link linkend='source-directory'>Source Directory</link> at
+        <filename>meta/files/common-licenses</filename>.
+        Once the build completes, the list of all licenses found and used
+        during that build are kept in the
+        <link linkend='build-directory'>Build Directory</link>
+        at <filename>tmp/deploy/licenses</filename>.
+    </para>
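+
+    <para>
+        As a quick sketch, assuming the Source Directory is
+        <filename>~/poky</filename> and the Build Directory is
+        <filename>~/poky/build</filename>, you can inspect both locations
+        from the command line once a build completes.
+        The recipe name shown is only an example:
+        <literallayout class='monospaced'>
+     $ ls ~/poky/meta/files/common-licenses
+     $ ls ~/poky/build/tmp/deploy/licenses
+     $ ls ~/poky/build/tmp/deploy/licenses/busybox
+        </literallayout>
+    </para>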
+
+    <para>
+        If a module requires a license that is not in the base list, the
+        build process generates a warning during the build.
+        These tools make it easier for a developer to be certain of the
+        licenses with which their shipped products must comply.
+        However, even with these tools it is still up to the developer to
+        resolve potential licensing issues.
+    </para>
+
+    <para>
+        The base list of licenses used by the build process is a combination
+        of the Software Package Data Exchange (SPDX) list and the Open
+        Source Initiative (OSI) projects.
+        <ulink url='http://spdx.org'>SPDX Group</ulink> is a working group of
+        the Linux Foundation that maintains a specification for a standard
+        format for communicating the components, licenses, and copyrights
+        associated with a software package.
+        <ulink url='http://opensource.org'>OSI</ulink> is a corporation
+        dedicated to the Open Source Definition and the effort for reviewing
+        and approving licenses that conform to the Open Source Definition
+        (OSD).
+    </para>
+
+    <para>
+        You can find a list of the combined SPDX and OSI licenses that the
+        Yocto Project uses in the
+        <filename>meta/files/common-licenses</filename> directory in your
+        <link linkend='source-directory'>Source Directory</link>.
+    </para>
+
+    <para>
+        For information that can help you maintain compliance with various
+        open source licensing during the lifecycle of a product created using
+        the Yocto Project, see the
+        "<ulink url='&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle'>Maintaining Open Source License Compliance During Your Product's Lifecycle</ulink>"
+        section in the Yocto Project Development Tasks Manual.
+    </para>
+</section>
+
+<section id='recipe-syntax'>
+    <title>Recipe Syntax</title>
+
+    <para>
+        Understanding recipe file syntax is important for
+        writing recipes.
+        The following list overviews the basic items that make up a
+        BitBake recipe file.
+        For more complete BitBake syntax descriptions, see the
+        "<ulink url='&YOCTO_DOCS_BB_URL;#bitbake-user-manual-metadata'>Syntax and Operators</ulink>"
+        chapter of the BitBake User Manual.
+        <itemizedlist>
+            <listitem><para><emphasis>Variable Assignments and Manipulations:</emphasis>
+                Variable assignments allow a value to be assigned to a
+                variable.
+                The assignment can be static text or might include
+                the contents of other variables.
+                In addition to the assignment, appending and prepending
+                operations are also supported.</para>
+                <para>The following example shows some of the ways
+                you can use variables in recipes:
+                <literallayout class='monospaced'>
+     S = "${WORKDIR}/postfix-${PV}"
+     CFLAGS += "-DNO_ASM"
+     SRC_URI_append = " file://fixup.patch"
+                </literallayout>
+                </para></listitem>
+            <listitem><para><emphasis>Functions:</emphasis>
+                Functions provide a series of actions to be performed.
+                You usually use functions to override the default
+                implementation of a task function or to complement
+                a default function (i.e. append or prepend to an
+                existing function).
+                Standard functions use <filename>sh</filename> shell
+                syntax, although access to OpenEmbedded variables and
+                internal methods are also available.</para>
+                <para>The following is an example function from the
+                <filename>sed</filename> recipe:
+                <literallayout class='monospaced'>
+     do_install () {
+         autotools_do_install
+         install -d ${D}${base_bindir}
+         mv ${D}${bindir}/sed ${D}${base_bindir}/sed
+         rmdir ${D}${bindir}/
+     }
+                </literallayout>
+                It is also possible to implement new functions that
+                are called between existing tasks as long as the
+                new functions are not replacing or complementing the
+                default functions.
+                You can implement functions in Python
+                instead of shell.
+                Neither of these options is used in the majority of
+                recipes.</para></listitem>
+            <listitem><para><emphasis>Keywords:</emphasis>
+                BitBake recipes use only a few keywords.
+                You use keywords to include common
+                functions (<filename>inherit</filename>), load parts
+                of a recipe from other files
+                (<filename>include</filename> and
+                <filename>require</filename>) and export variables
+                to the environment (<filename>export</filename>).</para>
+                <para>The following example shows the use of some of
+                these keywords:
+                <literallayout class='monospaced'>
+     export POSTCONF = "${STAGING_BINDIR}/postconf"
+     inherit autoconf
+     require otherfile.inc
+                </literallayout>
+                </para></listitem>
+            <listitem><para><emphasis>Comments:</emphasis>
+                Any lines that begin with the hash character
+                (<filename>#</filename>) are treated as comment lines
+                and are ignored:
+                <literallayout class='monospaced'>
+     # This is a comment
+                </literallayout>
+                </para></listitem>
+        </itemizedlist>
+    </para>
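+
+    <para>
+        The following minimal recipe sketch pulls several of these elements
+        together.
+        The recipe, file names, and checksum are hypothetical and are shown
+        only to illustrate the syntax:
+        <literallayout class='monospaced'>
+     # example_0.1.bb - a hypothetical recipe illustrating basic syntax
+     SUMMARY = "Simple example program"
+     LICENSE = "MIT"
+     LIC_FILES_CHKSUM = "file://COPYING;md5=<replaceable>checksum</replaceable>"
+
+     SRC_URI = "file://example.c"
+
+     S = "${WORKDIR}"
+     CFLAGS += "-Wall"
+
+     do_compile () {
+         ${CC} ${CFLAGS} ${LDFLAGS} example.c -o example
+     }
+
+     do_install () {
+         install -d ${D}${bindir}
+         install -m 0755 example ${D}${bindir}
+     }
+        </literallayout>
+    </para>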
+
+    <para>
+        This next list summarizes the most important and most commonly
+        used parts of the recipe syntax.
+        For more information on these parts of the syntax, you can
+        reference the
+        <ulink url='&YOCTO_DOCS_BB_URL;#bitbake-user-manual-metadata'>Syntax and Operators</ulink>
+        chapter in the BitBake User Manual.
+        <itemizedlist>
+            <listitem><para><emphasis>Line Continuation: <filename>\</filename></emphasis> -
+                Use the backslash (<filename>\</filename>)
+                character to split a statement over multiple lines.
+                Place the slash character at the end of the line that
+                is to be continued on the next line:
+                <literallayout class='monospaced'>
+     VAR = "A really long \
+            line"
+                </literallayout>
+                <note>
+                    You cannot have any characters, including spaces
+                    or tabs, after the slash character.
+                </note>
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Using Variables: <filename>${...}</filename></emphasis> -
+                Use the <filename>${<replaceable>VARNAME</replaceable>}</filename> syntax to
+                access the contents of a variable:
+                <literallayout class='monospaced'>
+     SRC_URI = "${SOURCEFORGE_MIRROR}/libpng/zlib-${PV}.tar.gz"
+                </literallayout>
+                <note>
+                    It is important to understand that the value of a
+                    variable expressed in this form does not get
+                    substituted automatically.
+                    The expansion of these expressions happens
+                    on-demand later (e.g. usually when a function that
+                    makes reference to the variable executes).
+                    This behavior ensures that the values are most
+                    appropriate for the context in which they are
+                    finally used.
+                    On the rare occasion that you do need the variable
+                    expression to be expanded immediately, you can use
+                    the <filename>:=</filename> operator instead of
+                    <filename>=</filename> when you make the
+                    assignment, but this is not generally needed.
+                </note>
+                </para></listitem>
+            <listitem><para><emphasis>Quote All Assignments: <filename>"<replaceable>value</replaceable>"</filename></emphasis> -
+                Use double quotes around the value in all variable
+                assignments.
+                <literallayout class='monospaced'>
+     VAR1 = "${OTHERVAR}"
+     VAR2 = "The version is ${PV}"
+                </literallayout>
+                </para></listitem>
+            <listitem><para><emphasis>Conditional Assignment: <filename>?=</filename></emphasis> -
+                Conditional assignment is used to assign a value to
+                a variable, but only when the variable is currently
+                unset.
+                Use the question mark followed by the equal sign
+                (<filename>?=</filename>) to make this kind of "soft"
+                assignment.
+                Typically, "soft" assignments are used in the
+                <filename>local.conf</filename> file for variables
+                that are allowed to come through from the external
+                environment.
+                </para>
+                <para>Here is an example where
+                <filename>VAR1</filename> is set to "New value" if
+                it is currently unset.
+                However, if <filename>VAR1</filename> has already been
+                set, it remains unchanged:
+                <literallayout class='monospaced'>
+     VAR1 ?= "New value"
+                </literallayout>
+                In this next example, <filename>VAR1</filename>
+                is left with the value "Original value":
+                <literallayout class='monospaced'>
+     VAR1 = "Original value"
+     VAR1 ?= "New value"
+                </literallayout>
+                </para></listitem>
+            <listitem><para><emphasis>Appending: <filename>+=</filename></emphasis> -
+                Use the plus character followed by the equals sign
+                (<filename>+=</filename>) to append values to existing
+                variables.
+                <note>
+                    This operator adds a space between the existing
+                    content of the variable and the new content.
+                </note></para>
+                <para>Here is an example:
+                <literallayout class='monospaced'>
+     SRC_URI += "file://fix-makefile.patch"
+                </literallayout>
+                </para></listitem>
+            <listitem><para><emphasis>Prepending: <filename>=+</filename></emphasis> -
+                Use the equals sign followed by the plus character
+                (<filename>=+</filename>) to prepend values to existing
+                variables.
+                <note>
+                    This operator adds a space between the new content
+                    and the existing content of the variable.
+                </note></para>
+                <para>Here is an example:
+                <literallayout class='monospaced'>
+     VAR =+ "Starts"
+                </literallayout>
+                </para></listitem>
+            <listitem><para><emphasis>Appending: <filename>_append</filename></emphasis> -
+                Use the <filename>_append</filename> operator to
+                append values to existing variables.
+                This operator does not add any additional space.
+                Also, the operator is applied after all the
+                <filename>+=</filename> and
+                <filename>=+</filename> operators have been applied and
+                after all <filename>=</filename> assignments have
+                occurred.
+                </para>
+                <para>The following example shows the space being
+                explicitly added to the start to ensure the appended
+                value is not merged with the existing value:
+                <literallayout class='monospaced'>
+     SRC_URI_append = " file://fix-makefile.patch"
+                </literallayout>
+                You can also use the <filename>_append</filename>
+                operator with overrides, which results in the actions
+                only being performed for the specified target or
+                machine:
+                <literallayout class='monospaced'>
+     SRC_URI_append_sh4 = " file://fix-makefile.patch"
+                </literallayout>
+                </para></listitem>
+            <listitem><para><emphasis>Prepending: <filename>_prepend</filename></emphasis> -
+                Use the <filename>_prepend</filename> operator to
+                prepend values to existing variables.
+                This operator does not add any additional space.
+                Also, the operator is applied after all the
+                <filename>+=</filename> and
+                <filename>=+</filename> operators have been applied and
+                after all <filename>=</filename> assignments have
+                occurred.
+                </para>
+                <para>The following example shows the space being
+                explicitly added to the end to ensure the prepended
+                value is not merged with the existing value:
+                <literallayout class='monospaced'>
+     CFLAGS_prepend = "-I${S}/myincludes "
+                </literallayout>
+                You can also use the <filename>_prepend</filename>
+                operator with overrides, which results in the actions
+                only being performed for the specified target or
+                machine:
+                <literallayout class='monospaced'>
+     CFLAGS_prepend_sh4 = "-I${S}/myincludes "
+                </literallayout>
+                </para></listitem>
+            <listitem><para><emphasis>Overrides:</emphasis> -
+                You can use overrides to set a value conditionally,
+                typically based on how the recipe is being built.
+                For example, to set the
+                <link linkend='var-KBRANCH'><filename>KBRANCH</filename></link>
+                variable's value to "standard/base" for any target
+                <link linkend='var-MACHINE'><filename>MACHINE</filename></link>,
+                except for qemuarm where it should be set to
+                "standard/arm-versatile-926ejs", you would do the
+                following:
+                <literallayout class='monospaced'>
+     KBRANCH = "standard/base"
+     KBRANCH_qemuarm  = "standard/arm-versatile-926ejs"
+                </literallayout>
+                Overrides are also used to separate alternate values
+                of a variable in other situations.
+                For example, when setting variables such as
+                <link linkend='var-FILES'><filename>FILES</filename></link>
+                and
+                <link linkend='var-RDEPENDS'><filename>RDEPENDS</filename></link>
+                that are specific to individual packages produced by
+                a recipe, you should always use an override that
+                specifies the name of the package.
+                </para></listitem>
+            <listitem><para><emphasis>Indentation:</emphasis>
+                Use spaces for indentation rather than tabs.
+                For shell functions, both currently work.
+                However, it is a policy decision of the Yocto Project
+                to use tabs in shell functions.
+                Realize that some layers have a policy to use spaces
+                for all indentation.
+                See the brief sketch following this list for an
+                example shell function.
+                </para></listitem>
+            <listitem><para><emphasis>Using Python for Complex Operations: <filename>${@<replaceable>python_code</replaceable>}</filename></emphasis> -
+                For more advanced processing, it is possible to use
+                Python code during variable assignments (e.g.
+                search and replacement on a variable).</para>
+                <para>You indicate Python code using the
+                <filename>${@<replaceable>python_code</replaceable>}</filename>
+                syntax for the variable assignment:
+                <literallayout class='monospaced'>
+     SRC_URI = "ftp://ftp.info-zip.org/pub/infozip/src/zip${@d.getVar('PV',1).replace('.', '')}.tgz
+                </literallayout>
+                </para></listitem>
+            <listitem><para><emphasis>Shell Function Syntax:</emphasis>
+                Write shell functions as if you were writing a shell
+                script when you describe a list of actions to take.
+                You should ensure that your script works with a generic
+                <filename>sh</filename> and that it does not require
+                any <filename>bash</filename> or other shell-specific
+                functionality.
+                The same considerations apply to various system
+                utilities (e.g. <filename>sed</filename>,
+                <filename>grep</filename>, <filename>awk</filename>,
+                and so forth) that you might wish to use.
+                If in doubt, you should check with multiple
+                implementations - including those from BusyBox.
+                </para></listitem>
+        </itemizedlist>
+    </para>
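+
+    <para>
+        The following fragment is a minimal, purely illustrative
+        sketch that pulls together several of the conventions above.
+        The task append and the <filename>example.conf</filename> file
+        are hypothetical and assume the file has been added to
+        <filename>SRC_URI</filename>:
+        <literallayout class='monospaced'>
+     # Hypothetical append to an existing task using only generic sh syntax
+     do_install_append () {
+         install -d ${D}${sysconfdir}
+         install -m 0644 ${WORKDIR}/example.conf ${D}${sysconfdir}/example.conf
+     }
+        </literallayout>
+        In keeping with Yocto Project policy, the body of a shell
+        function such as this one would normally be indented with tabs.
+    </para>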
+</section>
+
+<section id="development-concepts">
+    <title>Development Concepts</title>
+
+    <para>
+        This section takes a more detailed look inside the development
+        process.
+        The following diagram represents development at a high level.
+        The remainder of this chapter expands on the fundamental input, output,
+        process, and
+        <link linkend='metadata'>Metadata</link> blocks
+        that make up development in the Yocto Project environment.
+    </para>
+
+    <para id='general-yocto-environment-figure'>
+        <imagedata fileref="figures/yocto-environment-ref.png" align="center" width="8in" depth="4.25in" />
+    </para>
+
+    <para>
+        In general, development consists of several functional areas:
+        <itemizedlist>
+            <listitem><para><emphasis>User Configuration:</emphasis>
+                Metadata you can use to control the build process.
+                </para></listitem>
+            <listitem><para><emphasis>Metadata Layers:</emphasis>
+                Various layers that provide software, machine, and
+                distro Metadata.</para></listitem>
+            <listitem><para><emphasis>Source Files:</emphasis>
+                Upstream releases, local projects, and SCMs.</para></listitem>
+            <listitem><para><emphasis>Build System:</emphasis>
+                Processes under the control of
+                <link linkend='bitbake-term'>BitBake</link>.
+                This block expands on how BitBake fetches source, applies
+                patches, completes compilation, analyzes output for package
+                generation, creates and tests packages, generates images, and
+                generates cross-development tools.</para></listitem>
+            <listitem><para><emphasis>Package Feeds:</emphasis>
+                Directories containing output packages (RPM, DEB, or IPK)
+                produced by the build system, which are subsequently used
+                in the construction of an image or SDK.
+                These feeds can also be copied and shared using a web server or
+                other means to facilitate extending or updating existing
+                images on devices at runtime if runtime package management is
+                enabled.</para></listitem>
+            <listitem><para><emphasis>Images:</emphasis>
+                Images produced by the development process.
+                </para></listitem>
+            <listitem><para><emphasis>Application Development SDK:</emphasis>
+                Cross-development tools that are produced along with an image
+                or separately with BitBake.</para></listitem>
+        </itemizedlist>
+    </para>
+
+    <section id="user-configuration">
+        <title>User Configuration</title>
+
+        <para>
+            User configuration helps define the build.
+            Through user configuration, you can tell BitBake the
+            target architecture for which you are building the image,
+            where to store downloaded source, and other build properties.
+        </para>
+
+        <para>
+            The following figure shows an expanded representation of the
+            "User Configuration" box of the
+            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>:
+        </para>
+
+        <para>
+            <imagedata fileref="figures/user-configuration.png" align="center" />
+        </para>
+
+        <para>
+            BitBake needs some basic configuration files in order to complete
+            a build.
+            These files are <filename>*.conf</filename> files.
+            The minimally necessary ones reside as example files in the
+            <link linkend='source-directory'>Source Directory</link>.
+            For simplicity, this section refers to the Source Directory as
+            the "Poky Directory."
+        </para>
+
+        <para>
+            When you clone the <filename>poky</filename> Git repository or you
+            download and unpack a Yocto Project release, you can set up the
+            Source Directory to be named anything you want.
+            For this discussion, the cloned repository uses the default
+            name <filename>poky</filename>.
+            <note>
+                The Poky repository is primarily an aggregation of existing
+                repositories.
+                It is not a canonical upstream source.
+            </note>
+        </para>
+
+        <para>
+            The <filename>meta-poky</filename> layer inside Poky contains
+            a <filename>conf</filename> directory that has example
+            configuration files.
+            These example files are used as a basis for creating actual
+            configuration files when you source the build environment
+            script
+            (i.e.
+            <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>).
+        </para>
+
+        <para>
+            Sourcing the build environment script creates a
+            <link linkend='build-directory'>Build Directory</link>
+            if one does not already exist.
+            BitBake uses the Build Directory for all its work during builds.
+            The Build Directory has a <filename>conf</filename> directory that
+            contains default versions of your <filename>local.conf</filename>
+            and <filename>bblayers.conf</filename> configuration files.
+            These default configuration files are created only if versions
+            do not already exist in the Build Directory at the time you
+            source the build environment setup script.
+        </para>
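+
+        <para>
+            For example, assuming you are at the top of a cloned
+            <filename>poky</filename> directory and want a Build
+            Directory named <filename>build</filename> (a hypothetical
+            but typical name), the following command sets up the
+            environment and creates the directory if it does not
+            already exist:
+            <literallayout class='monospaced'>
+     $ source &OE_INIT_FILE; build
+            </literallayout>
+        </para>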
+
+        <para>
+            Because the Poky repository is fundamentally an aggregation of
+            existing repositories, some users might be familiar with running
+            the <filename>&OE_INIT_FILE;</filename> script in the context
+            of separate OpenEmbedded-Core and BitBake repositories rather than a
+            single Poky repository.
+            This discussion assumes the script is executed from within a cloned
+            or unpacked version of Poky.
+        </para>
+
+        <para>
+            Depending on where the script is sourced, different sub-scripts
+            are called to set up the Build Directory (Yocto or OpenEmbedded).
+            Specifically, the script
+            <filename>scripts/oe-setup-builddir</filename> inside the
+            poky directory sets up the Build Directory and seeds the directory
+            (if necessary) with configuration files appropriate for the
+            Yocto Project development environment.
+            <note>
+                The <filename>scripts/oe-setup-builddir</filename> script
+                uses the <filename>$TEMPLATECONF</filename> variable to
+                determine which sample configuration files to locate.
+            </note>
+        </para>
+
+        <para>
+            The <filename>local.conf</filename> file provides many
+            basic variables that define a build environment.
+            The following list describes a few of them; a short
+            example fragment follows the list.
+            To see the default configurations in a <filename>local.conf</filename>
+            file created by the build environment script, see the
+            <filename>local.conf.sample</filename> in the
+            <filename>meta-poky</filename> layer:
+            <itemizedlist>
+                <listitem><para><emphasis>Parallelism Options:</emphasis>
+                    Controlled by the
+                    <link linkend='var-BB_NUMBER_THREADS'><filename>BB_NUMBER_THREADS</filename></link>,
+                    <link linkend='var-PARALLEL_MAKE'><filename>PARALLEL_MAKE</filename></link>,
+                    and
+                    <ulink url='&YOCTO_DOCS_BB_URL;#var-BB_NUMBER_PARSE_THREADS'><filename>BB_NUMBER_PARSE_THREADS</filename></ulink>
+                    variables.</para></listitem>
+                <listitem><para><emphasis>Target Machine Selection:</emphasis>
+                    Controlled by the
+                    <link linkend='var-MACHINE'><filename>MACHINE</filename></link>
+                    variable.</para></listitem>
+                <listitem><para><emphasis>Download Directory:</emphasis>
+                    Controlled by the
+                    <link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
+                    variable.</para></listitem>
+                <listitem><para><emphasis>Shared State Directory:</emphasis>
+                    Controlled by the
+                    <link linkend='var-SSTATE_DIR'><filename>SSTATE_DIR</filename></link>
+                    variable.</para></listitem>
+                <listitem><para><emphasis>Build Output:</emphasis>
+                    Controlled by the
+                    <link linkend='var-TMPDIR'><filename>TMPDIR</filename></link>
+                    variable.</para></listitem>
+            </itemizedlist>
+            <note>
+                Configurations set in the <filename>conf/local.conf</filename>
+                file can also be set in the
+                <filename>conf/site.conf</filename> and
+                <filename>conf/auto.conf</filename> configuration files.
+            </note>
+        </para>
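+
+        <para>
+            The following <filename>local.conf</filename> fragment is
+            only a sketch that touches each of the areas in the
+            preceding list.
+            The values shown are examples, not recommendations:
+            <literallayout class='monospaced'>
+     # Example values only; adjust for your machine and filesystem layout
+     BB_NUMBER_THREADS ?= "4"
+     PARALLEL_MAKE ?= "-j 4"
+     MACHINE ??= "qemux86"
+     DL_DIR ?= "${TOPDIR}/downloads"
+     SSTATE_DIR ?= "${TOPDIR}/sstate-cache"
+     TMPDIR = "${TOPDIR}/tmp"
+            </literallayout>
+        </para>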
+
+        <para>
+            The <filename>bblayers.conf</filename> file tells BitBake what
+            layers you want considered during the build.
+            By default, the layers listed in this file include layers
+            minimally needed by the build system.
+            However, you must manually add any custom layers you have created.
+            You can find more information on working with the
+            <filename>bblayers.conf</filename> file in the
+            "<ulink url='&YOCTO_DOCS_DEV_URL;#enabling-your-layer'>Enabling Your Layer</ulink>"
+            section in the Yocto Project Development Tasks Manual.
+        </para>
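+
+        <para>
+            For example, a <filename>bblayers.conf</filename> file that
+            adds a hypothetical custom layer named
+            <filename>meta-mylayer</filename> alongside the default
+            layers might contain lines similar to the following.
+            The paths are illustrative only:
+            <literallayout class='monospaced'>
+     BBLAYERS ?= " \
+       /home/user/poky/meta \
+       /home/user/poky/meta-poky \
+       /home/user/poky/meta-yocto-bsp \
+       /home/user/poky/meta-mylayer \
+       "
+            </literallayout>
+        </para>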
+
+        <para>
+            The files <filename>site.conf</filename> and
+            <filename>auto.conf</filename> are not created by the environment
+            initialization script.
+            If you want the <filename>site.conf</filename> file, you need to
+            create that yourself.
+            The <filename>auto.conf</filename> file is typically created by
+            an autobuilder:
+            <itemizedlist>
+                <listitem><para><emphasis><filename>site.conf</filename>:</emphasis>
+                    You can use the <filename>conf/site.conf</filename>
+                    configuration file to configure multiple build directories.
+                    For example, suppose you had several build environments and
+                    they shared some common features.
+                    You can set these default build properties here.
+                    A good example is perhaps the packaging format to use
+                    through the
+                    <link linkend='var-PACKAGE_CLASSES'><filename>PACKAGE_CLASSES</filename></link>
+                    variable.</para>
+                    <para>One useful scenario for using the
+                    <filename>conf/site.conf</filename> file is to extend your
+                    <link linkend='var-BBPATH'><filename>BBPATH</filename></link>
+                    variable to include the path to a
+                    <filename>conf/site.conf</filename>.
+                    Then, when BitBake looks for Metadata using
+                    <filename>BBPATH</filename>, it finds the
+                    <filename>conf/site.conf</filename> file and applies your
+                    common configurations found in the file.
+                    To override configurations in a particular build directory,
+                    alter the similar configurations within that build
+                    directory's <filename>conf/local.conf</filename> file.
+                    </para></listitem>
+                <listitem><para><emphasis><filename>auto.conf</filename>:</emphasis>
+                    The file is usually created and written to by
+                    an autobuilder.
+                    The settings put into the file are typically the same as
+                    you would find in the <filename>conf/local.conf</filename>
+                    or the <filename>conf/site.conf</filename> files.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
+
+        <para>
+            You can edit all configuration files to further define
+            any particular build environment.
+            This process is represented by the "User Configuration Edits"
+            box in the figure.
+        </para>
+
+        <para>
+            When you launch your build with the
+            <filename>bitbake <replaceable>target</replaceable></filename>
+            command, BitBake sorts out the configurations to ultimately
+            define your build environment.
+            It is important to understand that the OpenEmbedded build system
+            reads the configuration files in a specific order:
+            <filename>site.conf</filename>, <filename>auto.conf</filename>,
+            and <filename>local.conf</filename>.
+            The build system applies the normal assignment statement
+            rules as it parses each file.
+            Because the files are parsed in a specific order,
+            assignments to the same variable in different files can
+            override one another.
+            For example, if the <filename>auto.conf</filename> file and
+            the <filename>local.conf</filename> set
+            <replaceable>variable1</replaceable> to different values, because
+            the build system parses <filename>local.conf</filename> after
+            <filename>auto.conf</filename>,
+            <replaceable>variable1</replaceable> is assigned the value from
+            the <filename>local.conf</filename> file.
+        </para>
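+
+        <para>
+            The following sketch, which uses a hypothetical variable
+            name, illustrates the parsing order.
+            Because <filename>local.conf</filename> is parsed last,
+            its assignment is the one that takes effect:
+            <literallayout class='monospaced'>
+     # conf/auto.conf (parsed before local.conf)
+     VARIABLE1 = "value-from-auto"
+
+     # conf/local.conf (parsed last, so this assignment wins)
+     VARIABLE1 = "value-from-local"
+            </literallayout>
+        </para>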
+    </section>
+
+    <section id="metadata-machine-configuration-and-policy-configuration">
+        <title>Metadata, Machine Configuration, and Policy Configuration</title>
+
+        <para>
+            The previous section described the user configurations that
+            define BitBake's global behavior.
+            This section takes a closer look at the layers the build system
+            uses to further control the build.
+            These layers provide Metadata for the software, machine, and
+            policy.
+        </para>
+
+        <para>
+            In general, three types of layer input exist:
+            <itemizedlist>
+                <listitem><para><emphasis>Policy Configuration:</emphasis>
+                    Distribution Layers provide top-level or general
+                    policies for the image or SDK being built.
+                    For example, this layer would dictate whether BitBake
+                    produces RPM or IPK packages.</para></listitem>
+                <listitem><para><emphasis>Machine Configuration:</emphasis>
+                    Board Support Package (BSP) layers provide machine
+                    configurations.
+                    This type of information is specific to a particular
+                    target architecture.</para></listitem>
+                <listitem><para><emphasis>Metadata:</emphasis>
+                    Software layers contain user-supplied recipe files,
+                    patches, and append files.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
+
+        <para>
+            The following figure shows an expanded representation of the
+            Metadata, Machine Configuration, and Policy Configuration input
+            (layers) boxes of the
+            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>:
+        </para>
+
+        <para>
+            <imagedata fileref="figures/layer-input.png" align="center" width="8in" depth="7.5in" />
+        </para>
+
+        <para>
+            In general, all layers have a similar structure.
+            They all contain a licensing file
+            (e.g. <filename>COPYING</filename>) if the layer is to be
+            distributed, a <filename>README</filename> file (good
+            practice, especially if the layer is to be distributed), a
+            configuration directory, and recipe directories.
+        </para>
+
+        <para>
+            The Yocto Project has many layers that can be used.
+            You can see a web-interface listing of them on the
+            <ulink url="http://git.yoctoproject.org/">Source Repositories</ulink>
+            page.
+            The layers are shown at the bottom categorized under
+            "Yocto Metadata Layers."
+            These layers are fundamentally a subset of the
+            <ulink url="http://layers.openembedded.org/layerindex/layers/">OpenEmbedded Metadata Index</ulink>,
+            which lists all layers provided by the OpenEmbedded community.
+            <note>
+                Layers exist in the Yocto Project Source Repositories that
+                cannot be found in the OpenEmbedded Metadata Index.
+                These layers are either deprecated or experimental in nature.
+            </note>
+        </para>
+
+        <para>
+            BitBake uses the <filename>conf/bblayers.conf</filename> file,
+            which is part of the user configuration, to find what layers it
+            should be using as part of the build.
+        </para>
+
+        <para>
+            For more information on layers, see the
+            "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and Creating Layers</ulink>"
+            section in the Yocto Project Development Tasks Manual.
+        </para>
+
+        <section id="distro-layer">
+            <title>Distro Layer</title>
+
+            <para>
+                The distribution layer provides policy configurations for your
+                distribution.
+                Best practices dictate that you isolate these types of
+                configurations into their own layer.
+                Settings you provide in
+                <filename>conf/distro/<replaceable>distro</replaceable>.conf</filename> override
+                similar
+                settings that BitBake finds in your
+                <filename>conf/local.conf</filename> file in the Build
+                Directory.
+            </para>
+
+            <para>
+                The following list provides some explanation and references
+                for what you typically find in the distribution layer;
+                a sketch of a typical layout follows the list:
+                <itemizedlist>
+                    <listitem><para><emphasis>classes:</emphasis>
+                        Class files (<filename>.bbclass</filename>) hold
+                        common functionality that can be shared among
+                        recipes in the distribution.
+                        When your recipes inherit a class, they take on the
+                        settings and functions for that class.
+                        You can read more about class files in the
+                        "<link linkend='ref-classes'>Classes</link>" section.
+                        </para></listitem>
+                    <listitem><para><emphasis>conf:</emphasis>
+                        This area holds configuration files for the
+                        layer (<filename>conf/layer.conf</filename>),
+                        the distribution
+                        (<filename>conf/distro/<replaceable>distro</replaceable>.conf</filename>),
+                        and any distribution-wide include files.
+                        </para></listitem>
+                    <listitem><para><emphasis>recipes-*:</emphasis>
+                        Recipes and append files that affect common
+                        functionality across the distribution.
+                        This area could include recipes and append files
+                        to add distribution-specific configuration,
+                        initialization scripts, custom image recipes,
+                        and so forth.</para></listitem>
+                </itemizedlist>
+            </para>
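+
+            <para>
+                A hypothetical distribution layer might therefore be
+                laid out as follows.
+                The layer and file names are illustrative only:
+                <literallayout class='monospaced'>
+     meta-mydistro/
+         classes/
+             mydistro-extras.bbclass
+         conf/
+             layer.conf
+             distro/
+                 mydistro.conf
+         recipes-core/
+             images/
+                 mydistro-image.bb
+                </literallayout>
+            </para>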
+        </section>
+
+        <section id="bsp-layer">
+            <title>BSP Layer</title>
+
+            <para>
+                The BSP Layer provides machine configurations.
+                Everything in this layer is specific to the machine for which
+                you are building the image or the SDK.
+                A common structure or form is defined for BSP layers.
+                You can learn more about this structure in the
+                <ulink url='&YOCTO_DOCS_BSP_URL;'>Yocto Project Board Support Package (BSP) Developer's Guide</ulink>.
+                <note>
+                    In order for a BSP layer to be considered compliant with the
+                    Yocto Project, it must meet some structural requirements.
+                </note>
+            </para>
+
+            <para>
+                The BSP Layer's configuration directory contains
+                configuration files for the machine
+                (<filename>conf/machine/<replaceable>machine</replaceable>.conf</filename>) and,
+                of course, the layer (<filename>conf/layer.conf</filename>).
+            </para>
+
+            <para>
+                The remainder of the layer is dedicated to specific recipes
+                by function: <filename>recipes-bsp</filename>,
+                <filename>recipes-core</filename>,
+                <filename>recipes-graphics</filename>, and
+                <filename>recipes-kernel</filename>.
+                Metadata can exist for multiple form factors, graphics
+                support systems, and so forth.
+                <note>
+                    While the figure shows several <filename>recipes-*</filename>
+                    directories, not all these directories appear in all
+                    BSP layers.
+                </note>
+            </para>
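+
+            <para>
+                A hypothetical BSP layer might therefore be laid out
+                as follows.
+                The layer and machine names are illustrative, and a
+                real BSP layer might include only some of these
+                directories:
+                <literallayout class='monospaced'>
+     meta-myboard/
+         conf/
+             layer.conf
+             machine/
+                 myboard.conf
+         recipes-bsp/
+         recipes-core/
+         recipes-graphics/
+         recipes-kernel/
+             linux/
+                 linux-yocto_%.bbappend
+                </literallayout>
+            </para>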
+        </section>
+
+        <section id="software-layer">
+            <title>Software Layer</title>
+
+            <para>
+                The software layer provides the Metadata for additional
+                software packages used during the build.
+                This layer does not include Metadata that is specific to the
+                distribution or the machine, which are found in their
+                respective layers.
+            </para>
+
+            <para>
+                This layer contains any new recipes that your project needs
+                in the form of recipe files.
+            </para>
+        </section>
+    </section>
+
+    <section id="sources-dev-environment">
+        <title>Sources</title>
+
+        <para>
+            In order for the OpenEmbedded build system to create an image or
+            any target, it must be able to access source files.
+            The
+            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>
+            represents source files using the "Upstream Project Releases",
+            "Local Projects", and "SCMs (optional)" boxes.
+            The figure represents mirrors, which also play a role in locating
+            source files, with the "Source Mirror(s)" box.
+        </para>
+
+        <para>
+            The method by which source files are ultimately organized is
+            a function of the project.
+            For example, for released software, projects tend to use tarballs
+            or other archived files that can capture the state of a release,
+            guaranteeing that it is statically represented.
+            On the other hand, for a project that is more dynamic or
+            experimental in nature, a project might keep source files in a
+            repository controlled by a Source Control Manager (SCM) such as
+            Git.
+            Pulling source from a repository allows you to control
+            the point in the repository (the revision) from which you want to
+            build software.
+            Finally, a combination of the two might exist, which would give the
+            consumer a choice when deciding where to get source files.
+        </para>
+
+        <para>
+            BitBake uses the
+            <link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>
+            variable to point to source files regardless of their location.
+            Each recipe must have a <filename>SRC_URI</filename> variable
+            that points to the source.
+        </para>
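+
+        <para>
+            For example, a recipe that fetches a released tarball and
+            applies a local patch might set
+            <filename>SRC_URI</filename> as follows.
+            The URL and file names are hypothetical:
+            <literallayout class='monospaced'>
+     SRC_URI = "http://example.com/releases/foo-${PV}.tar.gz \
+                file://fix-build.patch"
+            </literallayout>
+        </para>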
+
+        <para>
+            Another area that plays a significant role in where source files
+            come from is pointed to by the
+            <link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
+            variable.
+            This area is a cache that can hold previously downloaded source.
+            You can also instruct the OpenEmbedded build system to create
+            tarballs from Git repositories, which is not the default behavior,
+            and store them in the <filename>DL_DIR</filename> by using the
+            <link linkend='var-BB_GENERATE_MIRROR_TARBALLS'><filename>BB_GENERATE_MIRROR_TARBALLS</filename></link>
+            variable.
+        </para>
+
+        <para>
+            Judicious use of a <filename>DL_DIR</filename> directory can
+            save the build system a trip across the Internet when looking
+            for files.
+            A good method for using a download directory is to have
+            <filename>DL_DIR</filename> point to an area outside of your
+            Build Directory.
+            Doing so allows you to safely delete the Build Directory
+            if needed without fear of removing any downloaded source file.
+        </para>
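+
+        <para>
+            For example, the following <filename>local.conf</filename>
+            fragment (the path is illustrative) points
+            <filename>DL_DIR</filename> outside of the Build Directory
+            and enables tarball generation for Git repositories:
+            <literallayout class='monospaced'>
+     DL_DIR ?= "/home/user/yocto-downloads"
+     BB_GENERATE_MIRROR_TARBALLS = "1"
+            </literallayout>
+        </para>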
+
+        <para>
+            The remainder of this section provides a deeper look into the
+            source files and the mirrors.
+            Here is a more detailed look at the source file area of the
+            base figure:
+            <imagedata fileref="figures/source-input.png" align="center" width="7in" depth="7.5in" />
+        </para>
+
+        <section id='upstream-project-releases'>
+            <title>Upstream Project Releases</title>
+
+            <para>
+                Upstream project releases exist anywhere in the form of an
+                archived file (e.g. tarball or zip file).
+                These files correspond to individual recipes.
+                For example, the figure uses specific releases each for
+                BusyBox, Qt, and Dbus.
+                An archive file can be for any released product that can be
+                built using a recipe.
+            </para>
+        </section>
+
+        <section id='local-projects'>
+            <title>Local Projects</title>
+
+            <para>
+                Local projects are custom bits of software the user provides.
+                These bits reside somewhere local to a project - perhaps
+                a directory into which the user checks in items (e.g.
+                a local directory containing a development source tree
+                used by the group).
+            </para>
+
+            <para>
+                The canonical method through which to include a local project
+                is to use the
+                <link linkend='ref-classes-externalsrc'><filename>externalsrc</filename></link>
+                class to include that local project.
+                You use either the <filename>local.conf</filename> or a
+                recipe's append file to override or set the
+                recipe to point to the local directory on your disk to pull
+                in the whole source tree.
+            </para>
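+
+            <para>
+                For example, assuming a hypothetical recipe named
+                <filename>myrecipe</filename> and a local source tree
+                under <filename>/home/user/src/myrecipe</filename>, a
+                <filename>local.conf</filename> fragment might look
+                like the following sketch:
+                <literallayout class='monospaced'>
+     INHERIT += "externalsrc"
+     EXTERNALSRC_pn-myrecipe = "/home/user/src/myrecipe"
+                </literallayout>
+            </para>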
+
+            <para>
+                For information on how to use the
+                <filename>externalsrc</filename> class, see the
+                "<link linkend='ref-classes-externalsrc'><filename>externalsrc.bbclass</filename></link>"
+                section.
+            </para>
+        </section>
+
+        <section id='scms'>
+            <title>Source Control Managers (Optional)</title>
+
+            <para>
+                Another place the build system can get source files from is
+                through an SCM such as Git or Subversion.
+                In this case, a repository is cloned or checked out.
+                The
+                <link linkend='ref-tasks-fetch'><filename>do_fetch</filename></link>
+                task inside BitBake uses
+                the <link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>
+                variable and the argument's prefix to determine the correct
+                fetcher module.
+            </para>
+
+            <note>
+                For information on how to have the OpenEmbedded build system
+                generate tarballs for Git repositories and place them in the
+                <link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
+                directory, see the
+                <link linkend='var-BB_GENERATE_MIRROR_TARBALLS'><filename>BB_GENERATE_MIRROR_TARBALLS</filename></link>
+                variable.
+            </note>
+
+            <para>
+                When fetching a repository, BitBake uses the
+                <link linkend='var-SRCREV'><filename>SRCREV</filename></link>
+                variable to determine the specific revision from which to
+                build.
+            </para>
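+
+            <para>
+                For example, a recipe that builds from a hypothetical
+                Git repository might combine these variables as
+                follows.
+                Using <filename>AUTOREV</filename> tracks the latest
+                revision; a fixed revision is normally preferred for
+                reproducible builds:
+                <literallayout class='monospaced'>
+     SRC_URI = "git://example.com/myproject.git;protocol=https"
+     SRCREV = "${AUTOREV}"
+                </literallayout>
+            </para>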
+        </section>
+
+        <section id='source-mirrors'>
+            <title>Source Mirror(s)</title>
+
+            <para>
+                Two kinds of mirrors exist: pre-mirrors and regular mirrors.
+                The <link linkend='var-PREMIRRORS'><filename>PREMIRRORS</filename></link>
+                and
+                <link linkend='var-MIRRORS'><filename>MIRRORS</filename></link>
+                variables point to these, respectively.
+                BitBake checks pre-mirrors before looking upstream for any
+                source files.
+                Pre-mirrors are appropriate when you have a shared directory
+                that is not a directory defined by the
+                <link linkend='var-DL_DIR'><filename>DL_DIR</filename></link>
+                variable.
+                A pre-mirror typically points to a shared directory that is
+                local to your organization.
+            </para>
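+
+            <para>
+                For example, a <filename>local.conf</filename> fragment
+                similar to the following (the server name is
+                hypothetical) causes BitBake to consult an
+                organization-local mirror before going upstream:
+                <literallayout class='monospaced'>
+     PREMIRRORS_prepend = "\
+          git://.*/.* http://downloads.example.com/mirror/ \n \
+          ftp://.*/.* http://downloads.example.com/mirror/ \n \
+          http://.*/.* http://downloads.example.com/mirror/ \n \
+          https://.*/.* http://downloads.example.com/mirror/ \n"
+                </literallayout>
+            </para>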
+
+            <para>
+                Regular mirrors can be any site across the Internet that is
+                used as an alternative location for source code should the
+                primary site not be functioning for some reason or another.
+            </para>
+        </section>
+    </section>
+
+    <section id="package-feeds-dev-environment">
+        <title>Package Feeds</title>
+
+        <para>
+            When the OpenEmbedded build system generates an image or an SDK,
+            it gets the packages from a package feed area located in the
+            <link linkend='build-directory'>Build Directory</link>.
+            The
+            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>
+            shows this package feeds area in the upper-right corner.
+        </para>
+
+        <para>
+            This section looks a little closer into the package feeds area used
+            by the build system.
+            Here is a more detailed look at the area:
+            <imagedata fileref="figures/package-feeds.png" align="center" width="7in" depth="6in" />
+        </para>
+
+        <para>
+            Package feeds are an intermediary step in the build process.
+            The OpenEmbedded build system provides classes to generate
+            different package types, and you specify which classes to enable
+            through the
+            <link linkend='var-PACKAGE_CLASSES'><filename>PACKAGE_CLASSES</filename></link>
+            variable.
+            Before placing the packages into package feeds,
+            the build process validates them with generated output quality
+            assurance checks through the
+            <link linkend='ref-classes-insane'><filename>insane</filename></link>
+            class.
+        </para>
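+
+        <para>
+            For example, to generate IPK packages, you could place a
+            setting such as the following in your
+            <filename>local.conf</filename> file:
+            <literallayout class='monospaced'>
+     PACKAGE_CLASSES ?= "package_ipk"
+            </literallayout>
+        </para>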
+
+        <para>
+            The package feed area resides in the Build Directory.
+            The directory the build system uses to temporarily store packages
+            is determined by a combination of variables and the particular
+            package manager in use.
+            See the "Package Feeds" box in the illustration and note the
+            information to the right of that area.
+            In particular, the following defines where package files are
+            kept:
+            <itemizedlist>
+                <listitem><para><link linkend='var-DEPLOY_DIR'><filename>DEPLOY_DIR</filename></link>:
+                    Defined as <filename>tmp/deploy</filename> in the Build
+                    Directory.
+                    </para></listitem>
+                <listitem><para><filename>DEPLOY_DIR_*</filename>:
+                    The package-type sub-folder, which depends on the
+                    package manager being used.
+                    Given RPM, IPK, or DEB packaging and tarball creation, the
+                    <link linkend='var-DEPLOY_DIR_RPM'><filename>DEPLOY_DIR_RPM</filename></link>,
+                    <link linkend='var-DEPLOY_DIR_IPK'><filename>DEPLOY_DIR_IPK</filename></link>,
+                    <link linkend='var-DEPLOY_DIR_DEB'><filename>DEPLOY_DIR_DEB</filename></link>,
+                    or
+                    <link linkend='var-DEPLOY_DIR_TAR'><filename>DEPLOY_DIR_TAR</filename></link>
+                    variables are used, respectively.
+                    </para></listitem>
+                <listitem><para><link linkend='var-PACKAGE_ARCH'><filename>PACKAGE_ARCH</filename></link>:
+                    Defines architecture-specific sub-folders.
+                    For example, packages could exist for the i586 or qemux86
+                    architectures.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
+
+        <para>
+            BitBake uses the <filename>do_package_write_*</filename> tasks to
+            generate packages and place them into the package holding area (e.g.
+            <filename>do_package_write_ipk</filename> for IPK packages).
+            See the
+            "<link linkend='ref-tasks-package_write_deb'><filename>do_package_write_deb</filename></link>",
+            "<link linkend='ref-tasks-package_write_ipk'><filename>do_package_write_ipk</filename></link>",
+            "<link linkend='ref-tasks-package_write_rpm'><filename>do_package_write_rpm</filename></link>",
+            and
+            "<link linkend='ref-tasks-package_write_tar'><filename>do_package_write_tar</filename></link>"
+            sections for additional information.
+            As an example, consider a scenario where the IPK package
+            manager is being used and package architecture support for
+            both i586 and qemux86 exists.
+            Packages for the i586 architecture are placed in
+            <filename>build/tmp/deploy/ipk/i586</filename>, while packages for
+            the qemux86 architecture are placed in
+            <filename>build/tmp/deploy/ipk/qemux86</filename>.
+        </para>
+    </section>
+
+    <section id='bitbake-dev-environment'>
+        <title>BitBake</title>
+
+        <para>
+            The OpenEmbedded build system uses
+            <link linkend='bitbake-term'>BitBake</link>
+            to produce images.
+            As you can see from the
+            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>,
+            the BitBake area consists of several functional areas.
+            This section takes a closer look at each of those areas.
+        </para>
+
+        <para>
+            Separate documentation exists for the BitBake tool.
+            See the
+            <ulink url='&YOCTO_DOCS_BB_URL;#bitbake-user-manual'>BitBake User Manual</ulink>
+            for reference material on BitBake.
+        </para>
+
+        <section id='source-fetching-dev-environment'>
+            <title>Source Fetching</title>
+
+            <para>
+                The first stages of building a recipe are to fetch and unpack
+                the source code:
+                <imagedata fileref="figures/source-fetching.png" align="center" width="6.5in" depth="5in" />
+            </para>
+
+            <para>
+                The
+                <link linkend='ref-tasks-fetch'><filename>do_fetch</filename></link>
+                and
+                <link linkend='ref-tasks-unpack'><filename>do_unpack</filename></link>
+                tasks fetch the source files and unpack them into the work
+                directory.
+                <note>
+                    For every local file (e.g. <filename>file://</filename>)
+                    that is part of a recipe's
+                    <link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>
+                    statement, the OpenEmbedded build system takes a checksum
+                    of the file for the recipe and inserts the checksum into
+                    the signature for the <filename>do_fetch</filename> task.
+                    If any local file has been modified, the
+                    <filename>do_fetch</filename> task and all tasks that
+                    depend on it are re-executed.
+                </note>
+                By default, everything is accomplished in the
+                <link linkend='build-directory'>Build Directory</link>,
+                which has a defined structure.
+                For additional general information on the Build Directory,
+                see the
+                "<link linkend='structure-core-build'><filename>build/</filename></link>"
+                section.
+            </para>
+
+            <para>
+                Unpacked source files are pointed to by the
+                <link linkend='var-S'><filename>S</filename></link> variable.
+                Each recipe has an area in the Build Directory where the
+                unpacked source code resides.
+                The name of that directory for any given recipe is
+                constructed from several different variables.
+                You can see the variables that define these directories
+                by looking at the figure; an illustrative composition
+                of the directory name appears after the list:
+                <itemizedlist>
+                    <listitem><para><link linkend='var-TMPDIR'><filename>TMPDIR</filename></link> -
+                        The base directory where the OpenEmbedded build system
+                        performs all its work during the build.
+                        </para></listitem>
+                    <listitem><para><link linkend='var-PACKAGE_ARCH'><filename>PACKAGE_ARCH</filename></link> -
+                        The architecture of the built package or packages.
+                        </para></listitem>
+                    <listitem><para><link linkend='var-TARGET_OS'><filename>TARGET_OS</filename></link> -
+                        The operating system of the target device.
+                        </para></listitem>
+                    <listitem><para><link linkend='var-PN'><filename>PN</filename></link> -
+                        The name of the built package.
+                        </para></listitem>
+                    <listitem><para><link linkend='var-PV'><filename>PV</filename></link> -
+                        The version of the recipe used to build the package.
+                        </para></listitem>
+                    <listitem><para><link linkend='var-PR'><filename>PR</filename></link> -
+                        The revision of the recipe used to build the package.
+                        </para></listitem>
+                    <listitem><para><link linkend='var-WORKDIR'><filename>WORKDIR</filename></link> -
+                        The location within <filename>TMPDIR</filename> where
+                        a specific package is built.
+                        </para></listitem>
+                    <listitem><para><link linkend='var-S'><filename>S</filename></link> -
+                        Contains the unpacked source files for a given recipe.
+                        </para></listitem>
+                </itemizedlist>
+            </para>
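+
+            <para>
+                The following composition is only an illustration of
+                how these variables typically combine; see
+                <filename>meta/conf/bitbake.conf</filename> for the
+                actual default assignments:
+                <literallayout class='monospaced'>
+     # Illustrative composition only; not the literal defaults
+     WORKDIR = "${TMPDIR}/work/${PACKAGE_ARCH}-${TARGET_OS}/${PN}/${PV}-${PR}"
+     S = "${WORKDIR}/${PN}-${PV}"
+                </literallayout>
+            </para>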
+        </section>
+
+        <section id='patching-dev-environment'>
+            <title>Patching</title>
+
+            <para>
+                Once source code is fetched and unpacked, BitBake locates
+                patch files and applies them to the source files:
+                <imagedata fileref="figures/patching.png" align="center" width="6in" depth="5in" />
+            </para>
+
+            <para>
+                The
+                <link linkend='ref-tasks-patch'><filename>do_patch</filename></link>
+                task processes recipes by
+                using the
+                <link linkend='var-SRC_URI'><filename>SRC_URI</filename></link>
+                variable to locate applicable patch files, which by default
+                are <filename>*.patch</filename> or
+                <filename>*.diff</filename> files, or any file if
+                "apply=yes" is specified for the file in
+                <filename>SRC_URI</filename>.
+            </para>
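+
+            <para>
+                For example, with the following hypothetical
+                <filename>SRC_URI</filename> entries, the first two
+                files are applied as patches automatically because of
+                their extensions, while the third is applied only
+                because of its "apply=yes" parameter:
+                <literallayout class='monospaced'>
+     SRC_URI += "file://0001-fix-build.patch \
+                 file://portability.diff \
+                 file://local-fix.txt;apply=yes"
+                </literallayout>
+            </para>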
+
+            <para>
+                BitBake finds and applies multiple patches for a single recipe
+                in the order in which it finds the patches.
+                Patches are applied to the recipe's source files located in the
+                <link linkend='var-S'><filename>S</filename></link> directory.
+            </para>
+
+            <para>
+                For more information on how the source directories are
+                created, see the
+                "<link linkend='source-fetching-dev-environment'>Source Fetching</link>"
+                section.
+            </para>
+        </section>
+
+        <section id='configuration-and-compilation-dev-environment'>
+            <title>Configuration and Compilation</title>
+
+            <para>
+                After source code is patched, BitBake executes tasks that
+                configure and compile the source code:
+                <imagedata fileref="figures/configuration-compile-autoreconf.png" align="center" width="7in" depth="5in" />
+            </para>
+
+            <para>
+                This step in the build process consists of three tasks:
+                <itemizedlist>
+                    <listitem><para>
+                        <emphasis><link linkend='ref-tasks-prepare_recipe_sysroot'><filename>do_prepare_recipe_sysroot</filename></link>:</emphasis>
+                        This task sets up the two sysroots in
+                        <filename>${</filename><link linkend='var-WORKDIR'><filename>WORKDIR</filename></link><filename>}</filename>
+                        (i.e. <filename>recipe-sysroot</filename> and
+                        <filename>recipe-sysroot-native</filename>) so that
+                        the sysroots contain the contents of the
+                        <link linkend='ref-tasks-populate_sysroot'><filename>do_populate_sysroot</filename></link>
+                        tasks of the recipes on which the recipe
+                        containing the tasks depends.
+                        A sysroot exists for both the target and for the native
+                        binaries, which run on the host system.
+                        </para></listitem>
+                    <listitem><para><emphasis><filename>do_configure</filename>:</emphasis>
+                        This task configures the source by enabling and
+                        disabling any build-time and configuration options for
+                        the software being built.
+                        Configurations can come from the recipe itself as well
+                        as from an inherited class.
+                        Additionally, the software itself might configure itself
+                        depending on the target for which it is being built.
+                        </para>
+
+                        <para>The configurations handled by the
+                        <link linkend='ref-tasks-configure'><filename>do_configure</filename></link>
+                        task are specific
+                        to source code configuration for the source code
+                        being built by the recipe.</para>
+
+                        <para>If you are using the
+                        <link linkend='ref-classes-autotools'><filename>autotools</filename></link>
+                        class,
+                        you can add additional configuration options by using
+                        the <link linkend='var-EXTRA_OECONF'><filename>EXTRA_OECONF</filename></link>
+                        or
+                        <link linkend='var-PACKAGECONFIG_CONFARGS'><filename>PACKAGECONFIG_CONFARGS</filename></link>
+                        variables.
+                        For information on how these variables work within
+                        that class, see the
+                        <filename>meta/classes/autotools.bbclass</filename>
+                        file (a brief example follows this list).
+                        </para></listitem>
+                    <listitem><para><emphasis><filename>do_compile</filename>:</emphasis>
+                        Once a configuration task has been satisfied, BitBake
+                        compiles the source using the
+                        <link linkend='ref-tasks-compile'><filename>do_compile</filename></link>
+                        task.
+                        Compilation occurs in the directory pointed to by the
+                        <link linkend='var-B'><filename>B</filename></link>
+                        variable.
+                        Realize that the <filename>B</filename> directory is, by
+                        default, the same as the
+                        <link linkend='var-S'><filename>S</filename></link>
+                        directory.</para></listitem>
+                    <listitem><para><emphasis><filename>do_install</filename>:</emphasis>
+                        Once compilation is done, BitBake executes the
+                        <link linkend='ref-tasks-install'><filename>do_install</filename></link>
+                        task.
+                        This task copies files from the <filename>B</filename>
+                        directory and places them in a holding area pointed to
+                        by the
+                        <link linkend='var-D'><filename>D</filename></link>
+                        variable.</para></listitem>
+                </itemizedlist>
+            </para>
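+
+            <para>
+                As a brief sketch only (the configure switch shown is
+                hypothetical and depends on the software being built), a
+                recipe that inherits the
+                <filename>autotools</filename> class might pass an extra
+                option to the configure script as follows:
+                <literallayout class='monospaced'>
+     EXTRA_OECONF += "--disable-examples"
+                </literallayout>
+            </para>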
+        </section>
+
+        <section id='package-splitting-dev-environment'>
+            <title>Package Splitting</title>
+
+            <para>
+                After source code is configured and compiled, the
+                OpenEmbedded build system analyzes
+                the results and splits the output into packages:
+                <imagedata fileref="figures/analysis-for-package-splitting.png" align="center" width="7in" depth="7in" />
+            </para>
+
+            <para>
+                The
+                <link linkend='ref-tasks-package'><filename>do_package</filename></link>
+                and
+                <link linkend='ref-tasks-packagedata'><filename>do_packagedata</filename></link>
+                tasks combine to analyze
+                the files found in the
+                <link linkend='var-D'><filename>D</filename></link> directory
+                and split them into subsets based on available packages and
+                files.
+                Among other things, the analysis involves splitting out
+                debugging symbols, examining shared library dependencies
+                between packages, and determining package relationships.
+                The <filename>do_packagedata</filename> task creates package
+                metadata based on the analysis such that the
+                OpenEmbedded build system can generate the final packages.
+                Working, staged, and intermediate results of the analysis
+                and package splitting process use these areas:
+                <itemizedlist>
+                    <listitem><para><link linkend='var-PKGD'><filename>PKGD</filename></link> -
+                        The destination directory for packages before they are
+                        split.
+                        </para></listitem>
+                    <listitem><para><link linkend='var-PKGDATA_DIR'><filename>PKGDATA_DIR</filename></link> -
+                        A shared, global-state directory that holds data
+                        generated during the packaging process.
+                        </para></listitem>
+                    <listitem><para><link linkend='var-PKGDESTWORK'><filename>PKGDESTWORK</filename></link> -
+                        A temporary work area used by the
+                        <filename>do_package</filename> task.
+                        </para></listitem>
+                    <listitem><para><link linkend='var-PKGDEST'><filename>PKGDEST</filename></link> -
+                        The parent directory for packages after they have
+                        been split.
+                        </para></listitem>
+                </itemizedlist>
+                The <link linkend='var-FILES'><filename>FILES</filename></link>
+                variable defines the files that go into each package in
+                <link linkend='var-PACKAGES'><filename>PACKAGES</filename></link>.
+                If you want details on how this is accomplished, you can
+                look at the
+                <link linkend='ref-classes-package'><filename>package</filename></link>
+                class.
+            </para>
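+
+            <para>
+                As a minimal sketch (the package name and path are
+                hypothetical), a recipe can route additional files into a
+                package of its own by extending these variables:
+                <literallayout class='monospaced'>
+     PACKAGES =+ "${PN}-examples"
+     FILES_${PN}-examples = "${datadir}/${BPN}/examples"
+                </literallayout>
+            </para>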
+
+            <para>
+                Depending on the type of packages being created (RPM, DEB, or
+                IPK), the <filename>do_package_write_*</filename> task
+                creates the actual packages and places them in the
+                Package Feed area, which is
+                <filename>${TMPDIR}/deploy</filename>.
+                See the
+                "<link linkend='package-feeds-dev-environment'>Package Feeds</link>"
+                section for more detail on that part of the build process.
+                <note>
+                    Support for creating feeds directly from the
+                    <filename>deploy/*</filename> directories does not exist.
+                    Creating such feeds usually requires some kind of feed
+                    maintenance mechanism that would upload the new packages
+                    into an official package feed (e.g. the
+                    Ångström distribution).
+                    This functionality is highly distribution-specific
+                    and thus is not provided out of the box.
+                </note>
+            </para>
+        </section>
+
+        <section id='image-generation-dev-environment'>
+            <title>Image Generation</title>
+
+            <para>
+                Once packages are split and stored in the Package Feeds area,
+                the OpenEmbedded build system uses BitBake to generate the
+                root filesystem image:
+                <imagedata fileref="figures/image-generation.png" align="center" width="6in" depth="7in" />
+            </para>
+
+            <para>
+                The image generation process consists of several stages and
+                depends on several tasks and variables.
+                The
+                <link linkend='ref-tasks-rootfs'><filename>do_rootfs</filename></link>
+                task creates the root filesystem (file and directory structure)
+                for an image.
+                This task uses several key variables to help create the list
+                of packages to actually install (a brief configuration sketch
+                follows the list):
+                <itemizedlist>
+                    <listitem><para><link linkend='var-IMAGE_INSTALL'><filename>IMAGE_INSTALL</filename></link>:
+                        Lists out the base set of packages to install from
+                        the Package Feeds area.</para></listitem>
+                    <listitem><para><link linkend='var-PACKAGE_EXCLUDE'><filename>PACKAGE_EXCLUDE</filename></link>:
+                        Specifies packages that should not be installed.
+                        </para></listitem>
+                    <listitem><para><link linkend='var-IMAGE_FEATURES'><filename>IMAGE_FEATURES</filename></link>:
+                        Specifies features to include in the image.
+                        Most of these features map to additional packages for
+                        installation.</para></listitem>
+                    <listitem><para><link linkend='var-PACKAGE_CLASSES'><filename>PACKAGE_CLASSES</filename></link>:
+                        Specifies the package backend to use and consequently
+                        helps determine where to locate packages within the
+                        Package Feeds area.</para></listitem>
+                    <listitem><para><link linkend='var-IMAGE_LINGUAS'><filename>IMAGE_LINGUAS</filename></link>:
+                        Determines the language(s) for which additional
+                        language support packages are installed.
+                        </para></listitem>
+                    <listitem><para><link linkend='var-PACKAGE_INSTALL'><filename>PACKAGE_INSTALL</filename></link>:
+                        The final list of packages passed to the package manager
+                        for installation into the image.
+                        </para></listitem>
+                </itemizedlist>
+            </para>
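+
+            <para>
+                As an illustrative sketch only (the values are examples, not
+                recommendations), some of these variables are commonly set in
+                <filename>local.conf</filename> or in an image recipe:
+                <literallayout class='monospaced'>
+     PACKAGE_CLASSES = "package_ipk"
+     IMAGE_INSTALL_append = " strace"
+     IMAGE_LINGUAS = "en-us"
+                </literallayout>
+            </para>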
+
+            <para>
+                With
+                <link linkend='var-IMAGE_ROOTFS'><filename>IMAGE_ROOTFS</filename></link>
+                pointing to the location of the filesystem under construction and
+                the <filename>PACKAGE_INSTALL</filename> variable providing the
+                final list of packages to install, the root file system is
+                created.
+            </para>
+
+            <para>
+                Package installation is under control of the package manager
+                (e.g. dnf/rpm, opkg, or apt/dpkg) regardless of whether or
+                not package management is enabled for the target.
+                At the end of the process, if package management is not
+                enabled for the target, the package manager's data files
+                are deleted from the root filesystem.
+                As part of the final stage of package installation, postinstall
+                scripts that are part of the packages are run.
+                Any scripts that fail to run
+                on the build host are run on the target when the target system
+                is first booted.
+                If you are using a
+                <ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-read-only-root-filesystem'>read-only root filesystem</ulink>,
+                all the post installation scripts must succeed during the
+                package installation phase since the root filesystem is
+                read-only.
+            </para>
+
+            <para>
+                The final stages of the <filename>do_rootfs</filename> task
+                handle post processing.
+                Post processing includes creation of a manifest file and
+                optimizations.
+            </para>
+
+            <para>
+                The manifest file (<filename>.manifest</filename>) resides
+                in the same directory as the root filesystem image.
+                This file lists out, line-by-line, the installed packages.
+                The manifest file is useful for the
+                <link linkend='ref-classes-testimage*'><filename>testimage</filename></link>
+                class, for example, to determine whether or not to run
+                specific tests.
+                See the
+                <link linkend='var-IMAGE_MANIFEST'><filename>IMAGE_MANIFEST</filename></link>
+                variable for additional information.
+            </para>
+
+            <para>
+                Optimizing processes that run across the image include
+                <filename>mklibs</filename>, <filename>prelink</filename>,
+                and any other post-processing commands as defined by the
+                <link linkend='var-ROOTFS_POSTPROCESS_COMMAND'><filename>ROOTFS_POSTPROCESS_COMMAND</filename></link>
+                variable.
+                The <filename>mklibs</filename> process optimizes the size
+                of the libraries, while the
+                <filename>prelink</filename> process optimizes the dynamic
+                linking of shared libraries to reduce the start-up time of
+                executables.
+            </para>
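+
+            <para>
+                As a sketch, a custom post-processing step can be appended
+                through this variable; the function name shown is
+                hypothetical and would need to be defined in your own image
+                recipe or class:
+                <literallayout class='monospaced'>
+     ROOTFS_POSTPROCESS_COMMAND += "remove_stale_locale_data; "
+                </literallayout>
+            </para>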
+
+            <para>
+                After the root filesystem is built, processing begins on
+                the image through the
+                <link linkend='ref-tasks-image'><filename>do_image</filename></link>
+                task.
+                The build system runs any pre-processing commands as defined
+                by the
+                <link linkend='var-IMAGE_PREPROCESS_COMMAND'><filename>IMAGE_PREPROCESS_COMMAND</filename></link>
+                variable.
+                This variable specifies a list of functions to call before
+                the OpenEmbedded build system creates the final image output
+                files.
+            </para>
+
+            <para>
+                The OpenEmbedded build system dynamically creates
+                <filename>do_image_*</filename> tasks as needed, based
+                on the image types specified in the
+                <link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>
+                variable.
+                The process turns everything into an image file or a set of
+                image files and compresses the root filesystem image to reduce
+                the overall size of the image.
+                The formats used for the root filesystem depend on the
+                <filename>IMAGE_FSTYPES</filename> variable.
+            </para>
+
+            <para>
+                As an example, a dynamically created task when creating a
+                particular image <replaceable>type</replaceable> would take the
+                following form:
+                <literallayout class='monospaced'>
+     do_image_<replaceable>type</replaceable>[depends]
+                </literallayout>
+                So, if the <replaceable>type</replaceable> as specified by the
+                <filename>IMAGE_FSTYPES</filename> were
+                <filename>ext4</filename>, the dynamically generated task
+                would be as follows:
+                <literallayout class='monospaced'>
+     do_image_ext4[depends]
+                </literallayout>
+            </para>
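+
+            <para>
+                For instance (the types listed are examples only), setting
+                the following in <filename>local.conf</filename> causes the
+                build system to generate a corresponding
+                <filename>do_image_*</filename> task for each base image
+                type:
+                <literallayout class='monospaced'>
+     IMAGE_FSTYPES = "ext4 tar.bz2"
+                </literallayout>
+            </para>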
+
+            <para>
+                The final task involved in image creation is the
+                <link linkend='ref-tasks-image-complete'><filename>do_image_complete</filename></link>
+                task.
+                This task completes the image by applying any image
+                post processing as defined through the
+                <link linkend='var-IMAGE_POSTPROCESS_COMMAND'><filename>IMAGE_POSTPROCESS_COMMAND</filename></link>
+                variable.
+                The variable specifies a list of functions to call once the
+                OpenEmbedded build system has created the final image output
+                files.
+            </para>
+
+            <note>
+                The entire image generation process is run under Pseudo.
+                Running under Pseudo ensures that the files in the root
+                filesystem have correct ownership.
+            </note>
+        </section>
+
+        <section id='sdk-generation-dev-environment'>
+            <title>SDK Generation</title>
+
+            <para>
+                The OpenEmbedded build system uses BitBake to generate the
+                Software Development Kit (SDK) installer script for both the
+                standard and extensible SDKs:
+                <imagedata fileref="figures/sdk-generation.png" align="center" />
+            </para>
+
+            <note>
+                For more information on the cross-development toolchain
+                generation, see the
+                "<link linkend='cross-development-toolchain-generation'>Cross-Development Toolchain Generation</link>"
+                section.
+                For information on advantages gained when building a
+                cross-development toolchain using the
+                <link linkend='ref-tasks-populate_sdk'><filename>do_populate_sdk</filename></link>
+                task, see the
+                "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-building-an-sdk-installer'>Building an SDK Installer</ulink>"
+                section in the Yocto Project Application Development and the
+                Extensible Software Development Kit (eSDK) manual.
+            </note>
+
+            <para>
+                Like image generation, the SDK script process consists of
+                several stages and depends on many variables.
+                The <filename>do_populate_sdk</filename> and
+                <filename>do_populate_sdk_ext</filename> tasks use these
+                key variables to help create the list of packages to actually
+                install.
+                For information on the variables listed in the figure, see the
+                "<link linkend='sdk-dev-environment'>Application Development SDK</link>"
+                section.
+            </para>
+
+            <para>
+                The <filename>do_populate_sdk</filename> task helps create
+                the standard SDK and handles two parts: a target part and a
+                host part.
+                The target part is the part built for the target hardware and
+                includes libraries and headers.
+                The host part is the part of the SDK that runs on the
+                <link linkend='var-SDKMACHINE'><filename>SDKMACHINE</filename></link>.
+            </para>
+
+            <para>
+                The <filename>do_populate_sdk_ext</filename> task helps create
+                the extensible SDK and handles host and target parts
+                differently than its counterpart does for the standard SDK.
+                For the extensible SDK, the task encapsulates the build system,
+                which includes everything needed (host and target) for the SDK.
+            </para>
+
+            <para>
+                Regardless of the type of SDK being constructed, the
+                tasks perform some cleanup after which a cross-development
+                environment setup script and any needed configuration files
+                are created.
+                The final output is the cross-development
+                toolchain installation script (<filename>.sh</filename> file),
+                which includes the environment setup script.
+            </para>
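+
+            <para>
+                As an example (<filename>core-image-minimal</filename> is
+                simply one possible image), the following commands trigger
+                these tasks for the standard and extensible SDKs,
+                respectively:
+                <literallayout class='monospaced'>
+     $ bitbake -c populate_sdk core-image-minimal
+     $ bitbake -c populate_sdk_ext core-image-minimal
+                </literallayout>
+            </para>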
+        </section>
+
+        <section id='stamp-files-and-the-rerunning-of-tasks'>
+            <title>Stamp Files and the Rerunning of Tasks</title>
+
+            <para>
+                For each task that completes successfully, BitBake writes a
+                stamp file into the
+                <link linkend='var-STAMPS_DIR'><filename>STAMPS_DIR</filename></link>
+                directory.
+                The beginning of the stamp file's filename is determined by the
+                <link linkend='var-STAMP'><filename>STAMP</filename></link>
+                variable, and the end of the name consists of the task's name
+                and current
+                <ulink url='&YOCTO_DOCS_BB_URL;#checksums'>input checksum</ulink>.
+                <note>
+                    This naming scheme assumes that
+                    <ulink url='&YOCTO_DOCS_BB_URL;#var-BB_SIGNATURE_HANDLER'><filename>BB_SIGNATURE_HANDLER</filename></ulink>
+                    is "OEBasicHash", which is almost always the case in
+                    current OpenEmbedded.
+                </note>
+                To determine if a task needs to be rerun, BitBake checks if a
+                stamp file with a matching input checksum exists for the task.
+                If such a stamp file exists, the task's output is assumed to
+                exist and still be valid.
+                If the file does not exist, the task is rerun.
+                <note>
+                    <para>The stamp mechanism is more general than the shared
+                    state (sstate) cache mechanism described in the
+                    "<link linkend='setscene-tasks-and-shared-state'>Setscene Tasks and Shared State</link>"
+                    section.
+                    BitBake avoids rerunning any task that has a valid
+                    stamp file, not just tasks that can be accelerated through
+                    the sstate cache.</para>
+                    <para>However, you should realize that stamp files only
+                    serve as a marker that some work has been done and that
+                    these files do not record task output.
+                    The actual task output would usually be somewhere in
+                    <link linkend='var-TMPDIR'><filename>TMPDIR</filename></link>
+                    (e.g. in some recipe's
+                    <link linkend='var-WORKDIR'><filename>WORKDIR</filename></link>).
+                    What the sstate cache mechanism adds is a way to cache task
+                    output that can then be shared between build machines.
+                    </para>
+                </note>
+                Since <filename>STAMPS_DIR</filename> is usually a subdirectory
+                of <filename>TMPDIR</filename>, removing
+                <filename>TMPDIR</filename> will also remove
+                <filename>STAMPS_DIR</filename>, which means tasks will
+                properly be rerun to repopulate <filename>TMPDIR</filename>.
+            </para>
+
+            <para>
+                If you want some task to always be considered "out of date",
+                you can mark it with the
+                <ulink url='&YOCTO_DOCS_BB_URL;#variable-flags'><filename>nostamp</filename></ulink>
+                varflag.
+                If some other task depends on such a task, then that task will
+                also always be considered out of date, which might not be what
+                you want.
+            </para>
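+
+            <para>
+                As a sketch (the task name is hypothetical), you mark a task
+                with the varflag in the recipe or class that defines it:
+                <literallayout class='monospaced'>
+     do_deploy_to_lab[nostamp] = "1"
+                </literallayout>
+            </para>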
+
+            <para>
+                For details on how to view information about a task's
+                signature, see the
+                "<link linkend='usingpoky-viewing-task-variable-dependencies'>Viewing Task Variable Dependencies</link>"
+                section.
+            </para>
+        </section>
+
+        <section id='setscene-tasks-and-shared-state'>
+            <title>Setscene Tasks and Shared State</title>
+
+            <para>
+                The description of tasks so far assumes that BitBake needs to
+                build everything and there are no prebuilt objects available.
+                BitBake does support skipping tasks if prebuilt objects are
+                available.
+                These objects are usually made available in the form of a
+                shared state (sstate) cache.
+                <note>
+                    For information on variables affecting sstate, see the
+                    <link linkend='var-SSTATE_DIR'><filename>SSTATE_DIR</filename></link>
+                    and
+                    <link linkend='var-SSTATE_MIRRORS'><filename>SSTATE_MIRRORS</filename></link>
+                    variables.
+                </note>
+            </para>
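+
+            <para>
+                As a sketch (the path and server are placeholders), these
+                variables are typically set in
+                <filename>local.conf</filename>:
+                <literallayout class='monospaced'>
+     SSTATE_DIR ?= "/srv/shared/sstate-cache"
+     SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/PATH"
+                </literallayout>
+            </para>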
+
+            <para>
+                The idea of a setscene task (i.e.
+                <filename>do_</filename><replaceable>taskname</replaceable><filename>_setscene</filename>)
+                is a version of the task where
+                instead of building something, BitBake can skip to the end
+                result and simply place a set of files into specific locations
+                as needed.
+                In some cases, it makes sense to have a setscene task variant
+                (e.g. generating package files in the
+                <filename>do_package_write_*</filename> task).
+                In other cases, it does not make sense (e.g. a
+                <link linkend='ref-tasks-patch'><filename>do_patch</filename></link>
+                task or
+                <link linkend='ref-tasks-unpack'><filename>do_unpack</filename></link>
+                task) since the work involved would be equal to or greater
+                than that of the underlying task.
+            </para>
+
+            <para>
+                In the OpenEmbedded build system, the common tasks that have
+                setscene variants are <link linkend='ref-tasks-package'><filename>do_package</filename></link>,
+                <filename>do_package_write_*</filename>,
+                <link linkend='ref-tasks-deploy'><filename>do_deploy</filename></link>,
+                <link linkend='ref-tasks-packagedata'><filename>do_packagedata</filename></link>,
+                and
+                <link linkend='ref-tasks-populate_sysroot'><filename>do_populate_sysroot</filename></link>.
+                Notice that these are most of the tasks whose output is an
+                end result.
+            </para>
+
+            <para>
+                The OpenEmbedded build system has knowledge of the relationship
+                between these tasks and other tasks that precede them.
+                For example, if BitBake runs
+                <filename>do_populate_sysroot_setscene</filename> for
+                something, there is little point in running any of the
+                <filename>do_fetch</filename>, <filename>do_unpack</filename>,
+                <filename>do_patch</filename>,
+                <filename>do_configure</filename>,
+                <filename>do_compile</filename>, and
+                <filename>do_install</filename> tasks.
+                However, if <filename>do_package</filename> needs to be run,
+                BitBake would need to run those other tasks.
+            </para>
+
+            <para>
+                It becomes more complicated if everything can come from an
+                sstate cache because some objects are simply not required at
+                all.
+                For example, you do not need a compiler or native tools, such
+                as quilt, if there is nothing to compile or patch.
+                If the <filename>do_package_write_*</filename> packages are
+                available from sstate, BitBake does not need the
+                <filename>do_package</filename> task data.
+            </para>
+
+            <para>
+                To handle all these complexities, BitBake runs in two phases.
+                The first is the "setscene" stage.
+                During this stage, BitBake first checks the sstate cache for
+                any targets it is planning to build.
+                BitBake does a fast check to see if each object exists rather
+                than performing a complete download.
+                If nothing exists in the cache, the setscene stage completes
+                and the second phase, the main build, proceeds.
+            </para>
+
+            <para>
+                If objects are found in the sstate cache, the OpenEmbedded
+                build system works backwards from the end targets specified
+                by the user.
+                For example, if an image is being built, the OpenEmbedded build
+                system first looks for the packages needed for that image and
+                the tools needed to construct an image.
+                If those are available, the compiler is not needed.
+                Thus, the compiler is not even downloaded.
+                If something was found to be unavailable, or the download or
+                setscene task fails, the OpenEmbedded build system then tries
+                to install dependencies, such as the compiler, from the cache.
+            </para>
+
+            <para>
+                The function specified by the
+                <ulink url='&YOCTO_DOCS_BB_URL;#var-BB_HASHCHECK_FUNCTION'><filename>BB_HASHCHECK_FUNCTION</filename></ulink>
+                variable checks for the availability of objects in the sstate
+                cache and returns a list of the objects that are available.
+                The function specified by the
+                <ulink url='&YOCTO_DOCS_BB_URL;#var-BB_SETSCENE_DEPVALID'><filename>BB_SETSCENE_DEPVALID</filename></ulink>
+                variable determines, for any given dependency relationship,
+                whether that dependency needs to be followed.
+                The function returns a True or False value.
+            </para>
+        </section>
+    </section>
+
+    <section id='images-dev-environment'>
+        <title>Images</title>
+
+        <para>
+            The images produced by the OpenEmbedded build system
+            are compressed forms of the
+            root filesystem that are ready to boot on a target device.
+            You can see from the
+            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>
+            that BitBake output, in part, consists of images.
+            This section looks more closely at this output:
+            <imagedata fileref="figures/images.png" align="center" width="5.5in" depth="5.5in" />
+        </para>
+
+        <para>
+            For a list of example images that the Yocto Project provides,
+            see the
+            "<link linkend='ref-images'>Images</link>" chapter.
+        </para>
+
+        <para>
+            Images are written out to the
+            <link linkend='build-directory'>Build Directory</link>
+            inside the <filename>tmp/deploy/images/<replaceable>machine</replaceable>/</filename>
+            folder as shown in the figure.
+            This folder contains any files expected to be loaded on the
+            target device.
+            The
+            <link linkend='var-DEPLOY_DIR'><filename>DEPLOY_DIR</filename></link>
+            variable points to the <filename>deploy</filename> directory,
+            while the
+            <link linkend='var-DEPLOY_DIR_IMAGE'><filename>DEPLOY_DIR_IMAGE</filename></link>
+            variable points to the appropriate directory containing images for
+            the current configuration.
+            <itemizedlist>
+                <listitem><para><filename><replaceable>kernel-image</replaceable></filename>:
+                    A kernel binary file.
+                    The <link linkend='var-KERNEL_IMAGETYPE'><filename>KERNEL_IMAGETYPE</filename></link>
+                    variable setting determines the naming scheme for the
+                    kernel image file.
+                    Depending on that variable, the file could begin with
+                    a variety of naming strings.
+                    The <filename>deploy/images/<replaceable>machine</replaceable></filename>
+                    directory can contain multiple image files for the
+                    machine.</para></listitem>
+                <listitem><para><filename><replaceable>root-filesystem-image</replaceable></filename>:
+                    Root filesystems for the target device (e.g.
+                    <filename>*.ext3</filename> or <filename>*.bz2</filename>
+                    files).
+                    The <link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>
+                    variable setting determines the root filesystem image
+                    type.
+                    The <filename>deploy/images/<replaceable>machine</replaceable></filename>
+                    directory can contain multiple root filesystems for the
+                    machine.</para></listitem>
+                <listitem><para><filename><replaceable>kernel-modules</replaceable></filename>:
+                    Tarballs that contain all the modules built for the kernel.
+                    Kernel module tarballs exist for legacy purposes and
+                    can be suppressed by setting the
+                    <link linkend='var-MODULE_TARBALL_DEPLOY'><filename>MODULE_TARBALL_DEPLOY</filename></link>
+                    variable to "0" (a one-line example follows this list).
+                    The <filename>deploy/images/<replaceable>machine</replaceable></filename>
+                    directory can contain multiple kernel module tarballs
+                    for the machine.</para></listitem>
+                <listitem><para><filename><replaceable>bootloaders</replaceable></filename>:
+                    Bootloaders supporting the image, if applicable to the
+                    target machine.
+                    The <filename>deploy/images/<replaceable>machine</replaceable></filename>
+                    directory can contain multiple bootloaders for the
+                    machine.</para></listitem>
+                <listitem><para><filename><replaceable>symlinks</replaceable></filename>:
+                    The <filename>deploy/images/<replaceable>machine</replaceable></filename>
+                    folder contains
+                    a symbolic link that points to the most recently built file
+                    for each machine.
+                    These links might be useful for external scripts that
+                    need to obtain the latest version of each file.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
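+
+        <para>
+            For example, to suppress the kernel module tarball mentioned in
+            the list above, you could add the following to your machine
+            configuration or <filename>local.conf</filename>:
+            <literallayout class='monospaced'>
+     MODULE_TARBALL_DEPLOY = "0"
+            </literallayout>
+        </para>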
+    </section>
+
+    <section id='sdk-dev-environment'>
+        <title>Application Development SDK</title>
+
+        <para>
+            In the
+            <link linkend='general-yocto-environment-figure'>general Yocto Project Development Environment figure</link>,
+            the output labeled "Application Development SDK" represents an
+            SDK.
+            The SDK generation process differs depending on whether you build
+            a standard SDK
+            (e.g. <filename>bitbake -c populate_sdk</filename> <replaceable>imagename</replaceable>)
+            or an extensible SDK
+            (e.g. <filename>bitbake -c populate_sdk_ext</filename> <replaceable>imagename</replaceable>).
+            This section takes a closer look at this output:
+            <imagedata fileref="figures/sdk.png" align="center" width="9in" depth="7.25in" />
+        </para>
+
+        <para>
+            The specific form of this output is a self-extracting
+            SDK installer (<filename>*.sh</filename>) that, when run,
+            installs the SDK, which consists of a cross-development
+            toolchain, a set of libraries and headers, and an SDK
+            environment setup script.
+            Running this installer essentially sets up your
+            cross-development environment.
+            You can think of the cross-toolchain as the "host"
+            part because it runs on the SDK machine.
+            You can think of the libraries and headers as the "target"
+            part because they are built for the target hardware.
+            The environment setup script is added so that you can initialize
+            the environment before using the tools.
+        </para>
+
+        <note><title>Notes</title>
+            <itemizedlist>
+                <listitem><para>
+                    The Yocto Project supports several methods by which you can
+                    set up this cross-development environment.
+                    These methods include downloading pre-built SDK installers
+                    or building and installing your own SDK installer.
+                    </para></listitem>
+                <listitem><para>
+                    For background information on cross-development toolchains
+                    in the Yocto Project development environment, see the
+                    "<link linkend='cross-development-toolchain-generation'>Cross-Development Toolchain Generation</link>"
+                    section.
+                    </para></listitem>
+                <listitem><para>
+                    For information on setting up a cross-development
+                    environment, see the
+                    <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
+                    </para></listitem>
+            </itemizedlist>
+        </note>
+        <para>
+            Once built, the SDK installers are written out to the
+            <filename>deploy/sdk</filename> folder inside the
+            <link linkend='build-directory'>Build Directory</link>
+            as shown in the figure at the beginning of this section.
+            Depending on the type of SDK, several variables exist that help
+            configure these files.
+            The following list shows the variables associated with a standard
+            SDK:
+            <itemizedlist>
+                <listitem><para><link linkend='var-DEPLOY_DIR'><filename>DEPLOY_DIR</filename></link>:
+                    Points to the <filename>deploy</filename>
+                    directory.</para></listitem>
+                <listitem><para><link linkend='var-SDKMACHINE'><filename>SDKMACHINE</filename></link>:
+                    Specifies the architecture of the machine
+                    on which the cross-development tools are run to
+                    create packages for the target hardware.
+                    </para></listitem>
+                <listitem><para><link linkend='var-SDKIMAGE_FEATURES'><filename>SDKIMAGE_FEATURES</filename></link>:
+                    Lists the features to include in the "target" part
+                    of the SDK.
+                    </para></listitem>
+                <listitem><para><link linkend='var-TOOLCHAIN_HOST_TASK'><filename>TOOLCHAIN_HOST_TASK</filename></link>:
+                    Lists packages that make up the host
+                    part of the SDK (i.e. the part that runs on
+                    the <filename>SDKMACHINE</filename>).
+                    When you use
+                    <filename>bitbake -c populate_sdk <replaceable>imagename</replaceable></filename>
+                    to create the SDK, a default set of packages
+                    is used.
+                    This variable allows you to add more packages.
+                    </para></listitem>
+                <listitem><para><link linkend='var-TOOLCHAIN_TARGET_TASK'><filename>TOOLCHAIN_TARGET_TASK</filename></link>:
+                    Lists packages that make up the target part
+                    of the SDK (i.e. the part built for the
+                    target hardware).
+                    </para></listitem>
+                <listitem><para><link linkend='var-SDKPATH'><filename>SDKPATH</filename></link>:
+                    Defines the default SDK installation path offered by the
+                    installation script.
+                    </para></listitem>
+            </itemizedlist>
+            This next list shows the variables associated with an extensible
+            SDK (a brief configuration sketch follows the lists):
+            <itemizedlist>
+                <listitem><para><link linkend='var-DEPLOY_DIR'><filename>DEPLOY_DIR</filename></link>:
+                    Points to the <filename>deploy</filename> directory.
+                    </para></listitem>
+                <listitem><para><link linkend='var-SDK_EXT_TYPE'><filename>SDK_EXT_TYPE</filename></link>:
+                    Controls whether or not shared state artifacts are copied
+                    into the extensible SDK.
+                    By default, all required shared state artifacts are copied
+                    into the SDK.
+                    </para></listitem>
+                <listitem><para><link linkend='var-SDK_INCLUDE_PKGDATA'><filename>SDK_INCLUDE_PKGDATA</filename></link>:
+                    Specifies whether or not packagedata will be included in
+                    the extensible SDK for all recipes in the "world" target.
+                    </para></listitem>
+                <listitem><para><link linkend='var-SDK_INCLUDE_TOOLCHAIN'><filename>SDK_INCLUDE_TOOLCHAIN</filename></link>:
+                    Specifies whether or not the toolchain will be included
+                    when building the extensible SDK.
+                    </para></listitem>
+                <listitem><para><link linkend='var-SDK_LOCAL_CONF_WHITELIST'><filename>SDK_LOCAL_CONF_WHITELIST</filename></link>:
+                    A list of variables allowed through from the build system
+                    configuration into the extensible SDK configuration.
+                    </para></listitem>
+                <listitem><para><link linkend='var-SDK_LOCAL_CONF_BLACKLIST'><filename>SDK_LOCAL_CONF_BLACKLIST</filename></link>:
+                    A list of variables not allowed through from the build
+                    system configuration into the extensible SDK configuration.
+                    </para></listitem>
+                <listitem><para><link linkend='var-SDK_INHERIT_BLACKLIST'><filename>SDK_INHERIT_BLACKLIST</filename></link>:
+                    A list of classes to remove from the
+                    <link linkend='var-INHERIT'><filename>INHERIT</filename></link>
+                    value globally within the extensible SDK configuration.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
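+
+        <para>
+            As an illustrative sketch only (the values and the package name
+            are examples), some of these variables might be set in
+            <filename>local.conf</filename> as follows:
+            <literallayout class='monospaced'>
+     # standard SDK
+     SDKMACHINE = "x86_64"
+     TOOLCHAIN_TARGET_TASK_append = " libpng-dev"
+     # extensible SDK
+     SDK_EXT_TYPE = "minimal"
+            </literallayout>
+        </para>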
+    </section>
+</section>
+
+</chapter>
+<!--
+vim: expandtab tw=80 ts=4
+-->
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-devtool-reference.xml b/import-layers/yocto-poky/documentation/ref-manual/ref-devtool-reference.xml
index 99d5a52..e29bf89 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/ref-devtool-reference.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-devtool-reference.xml
@@ -20,8 +20,8 @@
         For more information on how to apply the command when using the
         extensible SDK, see the
         "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-extensible'>Using the Extensible SDK</ulink>"
-        section in the Yocto Project Software Development Kit (SDK) Developer's
-        Guide.
+        section in the Yocto Project Application Development and the
+        Extensible Software Development Kit (eSDK) manual.
     </para>
 
     <section id='devtool-getting-help'>
@@ -35,45 +35,59 @@
             the commands:
             <literallayout class='monospaced'>
      $ devtool --help
-     usage: devtool [--basepath BASEPATH] [--bbpath BBPATH] [-d] [-q]
-                    [--color COLOR] [-h]
-                    &lt;subcommand&gt; ...
+     usage: devtool add [-h] [--same-dir | --no-same-dir] [--fetch URI]
+                        [--fetch-dev] [--version VERSION] [--no-git]
+                        [--srcrev SRCREV | --autorev] [--srcbranch SRCBRANCH]
+                        [--binary] [--also-native] [--src-subdir SUBDIR]
+                        [--mirrors] [--provides PROVIDES]
+                        [recipename] [srctree] [fetchuri]
 
-     OpenEmbedded development tool
+     Adds a new recipe to the workspace to build a specified source tree. Can
+     optionally fetch a remote URI and unpack it to create the source tree.
+
+     arguments:
+       recipename            Name for new recipe to add (just name - no version,
+                             path or extension). If not specified, will attempt
+                             to auto-detect it.
+       srctree               Path to external source tree. If not specified, a
+                             subdirectory of
+                             /home/<replaceable>user</replaceable>/poky/build/workspace/sources will be
+                             used.
+       fetchuri              Fetch the specified URI and extract it to create
+                             the source tree
 
      options:
-       --basepath BASEPATH  Base directory of SDK / build directory
-       --bbpath BBPATH      Explicitly specify the BBPATH, rather than getting it
-                            from the metadata
-       -d, --debug          Enable debug output
-       -q, --quiet          Print only errors
-       --color COLOR        Colorize output (where COLOR is auto, always, never)
-       -h, --help           show this help message and exit
-
-     subcommands:
-       Beginning work on a recipe:
-         add                  Add a new recipe
-         modify               Modify the source for an existing recipe
-         upgrade              Upgrade an existing recipe
-       Getting information:
-         status               Show workspace status
-         search               Search available recipes
-       Working on a recipe in the workspace:
-         edit-recipe          Edit a recipe file in your workspace
-         configure-help       Get help on configure script options
-         build                Build a recipe
-         update-recipe        Apply changes from external source tree to recipe
-         reset                Remove a recipe from your workspace
-         finish               Finish working on a recipe in your workspace
-       Testing changes on target:
-         deploy-target        Deploy recipe output files to live target machine
-         undeploy-target      Undeploy recipe output files in live target machine
-         build-image          Build image including workspace recipe packages
-       Advanced:
-         create-workspace     Set up workspace in an alternative location
-         extract              Extract the source for an existing recipe
-         sync                 Synchronize the source tree for an existing recipe
-     Use devtool &lt;subcommand&gt; --help to get help on a specific command
+       -h, --help            show this help message and exit
+       --same-dir, -s        Build in same directory as source
+       --no-same-dir         Force build in a separate build directory
+       --fetch URI, -f URI   Fetch the specified URI and extract it to create
+                             the source tree (deprecated - pass as positional
+                             argument instead)
+       --fetch-dev           For npm, also fetch devDependencies
+       --version VERSION, -V VERSION
+                             Version to use within recipe (PV)
+       --no-git, -g          If fetching source, do not set up source tree as a
+                             git repository
+       --srcrev SRCREV, -S SRCREV
+                             Source revision to fetch if fetching from an SCM
+                             such as git (default latest)
+       --autorev, -a         When fetching from a git repository, set SRCREV in
+                             the recipe to a floating revision instead of fixed
+       --srcbranch SRCBRANCH, -B SRCBRANCH
+                             Branch in source repository if fetching from an SCM
+                             such as git (default master)
+       --binary, -b          Treat the source tree as something that should be
+                             installed verbatim (no compilation, same directory
+                             structure). Useful with binary packages e.g. RPMs.
+       --also-native         Also add native variant (i.e. support building
+                             recipe for the build host as well as the target
+                             machine)
+       --src-subdir SUBDIR   Specify subdirectory within source tree to use
+       --mirrors             Enable PREMIRRORS and MIRRORS for source tree
+                             fetching (disable by default).
+       --provides PROVIDES, -p PROVIDES
+                             Specify an alias for the item provided by the
+                             recipe. E.g. virtual/libgl
             </literallayout>
         </para>
 
@@ -194,9 +208,9 @@
             The following example creates and adds a new recipe named
             <filename>jackson</filename> to a workspace layer the tool creates.
             The source code built by the recipes resides in
-            <filename>/home/scottrif/sources/jackson</filename>:
+            <filename>/home/<replaceable>user</replaceable>/sources/jackson</filename>:
             <literallayout class='monospaced'>
-     $ devtool add jackson /home/scottrif/sources/jackson
+     $ devtool add jackson /home/<replaceable>user</replaceable>/sources/jackson
             </literallayout>
         </para>
 
@@ -214,18 +228,52 @@
             append files, and source files into the existing workspace layer.
             The <filename>.bbappend</filename> file is created to point
             to the external source tree.
+            <note>
+                If your recipe has runtime dependencies defined, you must be sure
+                that these packages exist on the target hardware before attempting
+                to run your application.
+                If dependent packages (e.g. libraries) do not exist on the target,
+                your application, when run, will fail to find those functions.
+                For more information, see the
+                "<link linkend='devtool-deploying-your-software-on-the-target-machine'>Deploying Your Software on the Target Machine</link>"
+                section.
+            </note>
         </para>
 
-        <note>
-            If your recipe has runtime dependencies defined, you must be sure
-            that these packages exist on the target hardware before attempting
-            to run your application.
-            If dependent packages (e.g. libraries) do not exist on the target,
-            your application, when run, will fail to find those functions.
-            For more information, see the
-            "<link linkend='devtool-deploying-your-software-on-the-target-machine'>Deploying Your Software on the Target Machine</link>"
-            section.
-        </note>
+        <para>
+            By default, <filename>devtool add</filename> uses the latest
+            revision (i.e. master) when unpacking files from a remote URI.
+            In some cases, you might want to specify a source revision by
+            branch, tag, or commit hash. You can specify these options when
+            using the <filename>devtool add</filename> command:
+            <itemizedlist>
+                <listitem><para>
+                    To specify a source branch, use the
+                    <filename>--srcbranch</filename> option:
+                    <literallayout class='monospaced'>
+     $ devtool add --srcbranch &DISTRO_NAME_NO_CAP; jackson /home/<replaceable>user</replaceable>/sources/jackson
+                    </literallayout>
+                    In the previous example, you are checking out the
+                    &DISTRO_NAME_NO_CAP; branch.
+                    </para></listitem>
+                <listitem><para>
+                    To specify a specific tag or commit hash, use the
+                    <filename>--srcrev</filename> option:
+                    <literallayout class='monospaced'>
+     $ devtool add --srcrev &DISTRO_REL_TAG; jackson /home/<replaceable>user</replaceable>/sources/jackson
+     $ devtool add --srcrev <replaceable>some_commit_hash</replaceable> /home/<replaceable>user</replaceable>/sources/jackson
+                    </literallayout>
+                    The previous examples check out the &DISTRO_REL_TAG; tag
+                    and the commit associated with the
+                    <replaceable>some_commit_hash</replaceable> hash.
+                    </para></listitem>
+            </itemizedlist>
+            <note>
+                If you prefer to use the latest revision every time the recipe is
+                built, use the options <filename>--autorev</filename>
+                or <filename>-a</filename>.
+            </note>
+        </para>
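+
+        <para>
+            As an illustration (the Git URI is a placeholder), the following
+            command keeps the recipe tracking the latest upstream revision:
+            <literallayout class='monospaced'>
+     $ devtool add --autorev jackson /home/<replaceable>user</replaceable>/sources/jackson git://<replaceable>gitserver</replaceable>/jackson.git
+            </literallayout>
+        </para>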
     </section>
 
     <section id='devtool-extracting-the-source-for-an-existing-recipe'>
@@ -276,7 +324,7 @@
             Use the <filename>devtool modify</filename> command to begin
             modifying the source of an existing recipe.
             This command is very similar to the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#devtool-adding-a-new-recipe-to-the-workspace'><filename>add</filename></ulink>
+            <link linkend='devtool-adding-a-new-recipe-to-the-workspace'><filename>add</filename></link>
             command except that it does not physically create the
             recipe in the workspace layer because the recipe already
             exists in another layer.
@@ -334,7 +382,7 @@
             to the source files.
             For example, if you know you are going to work on some
             code, you could first use the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#devtool-modifying-a-recipe'><filename>devtool modify</filename></ulink>
+            <link linkend='devtool-modifying-a-recipe'><filename>devtool modify</filename></link>
             command to extract the code and set up the workspace.
             After which, you could modify, compile, and test the code.
         </para>
@@ -556,7 +604,7 @@
             remove deployed build output from the target machine.
             For the <filename>devtool undeploy-target</filename> command to
             work, you must have previously used the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#devtool-deploying-your-software-on-the-target-machine'><filename>devtool deploy-target</filename></ulink>
+            <link linkend='devtool-deploying-your-software-on-the-target-machine'><filename>devtool deploy-target</filename></link>
             command.
             <literallayout class='monospaced'>
      $ devtool undeploy-target <replaceable>recipe</replaceable>&nbsp;<replaceable>target</replaceable>
@@ -573,7 +621,7 @@
         <para>
             Use the <filename>devtool create-workspace</filename> command to
             create a new workspace layer in your
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+            <link linkend='build-directory'>Build Directory</link>.
             When you create a new workspace layer, it is populated with the
             <filename>README</filename> file and the
             <filename>conf</filename> directory only.
@@ -616,7 +664,7 @@
      $ devtool status
             </literallayout>
             Following is sample output after using
-            <ulink url='&YOCTO_DOCS_DEV_URL;#devtool-adding-a-new-recipe-to-the-workspace'><filename>devtool add</filename></ulink>
+            <link linkend='devtool-adding-a-new-recipe-to-the-workspace'><filename>devtool add</filename></link>
             to create and add the <filename>mtr_0.86.bb</filename> recipe
             to the <filename>workspace</filename> directory:
             <literallayout class='monospaced'>
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-features.xml b/import-layers/yocto-poky/documentation/ref-manual/ref-features.xml
index 7e1c5ef..02857dc 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/ref-features.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-features.xml
@@ -36,7 +36,7 @@
     <para>
         One method you can use to determine which recipes are checking to see if a
         particular feature is contained or not is to <filename>grep</filename> through
-        the <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+        the <link linkend='metadata'>Metadata</link>
         for the feature.
         Here is an example that discovers the recipes whose build is potentially
         changed based on a given feature:
@@ -132,7 +132,7 @@
             These select features make sense to be controlled both at
             the machine and distribution configuration level.
             See the
-            <ulink url='&YOCTO_DOCS_REF_URL;#var-COMBINED_FEATURES'><filename>COMBINED_FEATURES</filename></ulink>
+            <link linkend='var-COMBINED_FEATURES'><filename>COMBINED_FEATURES</filename></link>
             variable for more information.
         </para>
 
@@ -151,8 +151,8 @@
                     is used.
                     See the
                     "<ulink url='&YOCTO_DOCS_SDK_URL;#adding-api-documentation-to-the-standard-sdk'>Adding API Documentation to the Standard SDK</ulink>"
-                    section in the Yocto Project Software Development Kit (SDK)
-                    Developer's Guide for more information.
+                    section in the Yocto Project Application Development and
+                    the Extensible Software Development Kit (eSDK) manual.
                     </para></listitem>
                 <listitem><para><emphasis>bluetooth:</emphasis> Include
                     bluetooth support (integrated BT only).</para></listitem>
@@ -217,7 +217,7 @@
                     the package tests where supported by individual recipes.
                     For more information on package tests, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#testing-packages-with-ptest'>Testing Packages With ptest</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                     </para></listitem>
                 <listitem><para><emphasis>smbfs:</emphasis> Include SMB networks
                     client support (for mounting Samba/Microsoft Windows shares
@@ -304,7 +304,7 @@
                     <note>
                         To make the <filename>/var/log</filename> directory
                         on the target persistent, use the
-                        <ulink url='&YOCTO_DOCS_REF_URL;#var-VOLATILE_LOG_DIR'><filename>VOLATILE_LOG_DIR</filename></ulink>
+                        <link linkend='var-VOLATILE_LOG_DIR'><filename>VOLATILE_LOG_DIR</filename></link>
                         variable by setting it to "no".
                     </note>
                     </para></listitem>
@@ -315,8 +315,8 @@
                     Creates an image whose root filesystem is read-only.
                     See the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-read-only-root-filesystem'>Creating a Read-Only Root Filesystem</ulink>"
-                    section in the Yocto Project Development Manual for more
-                    information.
+                    section in the Yocto Project Development Tasks Manual for
+                    more information.
                     </para></listitem>
                 <listitem><para><emphasis>splash:</emphasis>
                     Enables showing a splash screen during boot.
@@ -356,7 +356,8 @@
                     <filename>perf</filename>, <filename>systemtap</filename>,
                     and <filename>LTTng</filename>.
                     For general information on user-space tools, see the
-                    <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-manual'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
+                    <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</ulink>
+                    manual.
                     </para></listitem>
                 <listitem><para><emphasis>ssh-server-dropbear:</emphasis>
                     Installs the Dropbear minimal SSH server.
@@ -374,7 +375,7 @@
                     <filename>strace</filename> and <filename>gdb</filename>.
                     For information on GDB, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#platdev-gdb-remotedebug'>Debugging With the GNU Project Debugger (GDB) Remotely</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                     For information on tracing and profiling, see the
                     <ulink url='&YOCTO_DOCS_PROF_URL;'>Yocto Project Profiling and Tracing Manual</ulink>.
                     </para></listitem>
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-images.xml b/import-layers/yocto-poky/documentation/ref-manual/ref-images.xml
index f220968..c752f94 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/ref-images.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-images.xml
@@ -30,8 +30,8 @@
     <para>
         From within the <filename>poky</filename> Git repository, you can use
         the following command to display the list of directories within the
-        <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
-        that containe image recipe files:
+        <link linkend='source-directory'>Source Directory</link>
+        that contain image recipe files:
         <literallayout class='monospaced'>
      $ ls meta*/recipes*/images/*.bb
         </literallayout>
@@ -140,7 +140,7 @@
                 second image to be tested.
                 You can find more information about runtime testing in the
                 "<ulink url='&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing'>Performing Automated Runtime Testing</ulink>"
-                section in the Yocto Project Development Manual.
+                section in the Yocto Project Development Tasks Manual.
                 </para></listitem>
             <listitem><para><filename>core-image-testmaster-initramfs</filename>:
                 A RAM-based Initial Root Filesystem (initramfs) image tailored for
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-kickstart.xml b/import-layers/yocto-poky/documentation/ref-manual/ref-kickstart.xml
new file mode 100644
index 0000000..1dd36b2
--- /dev/null
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-kickstart.xml
@@ -0,0 +1,284 @@
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
+[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
+
+<chapter id='ref-kickstart'>
+<title>OpenEmbedded Kickstart (<filename>.wks</filename>) Reference</title>
+
+    <section id='openembedded-kickstart-wks-reference'>
+        <title>Introduction</title>
+
+        <para>
+            The current Wic implementation supports only the basic kickstart
+            partitioning commands:
+            <filename>partition</filename> (or <filename>part</filename>
+            for short) and <filename>bootloader</filename>.
+            <note>
+                Future updates will implement more commands and options.
+                If you use anything that is not specifically supported, results
+                can be unpredictable.
+            </note>
+        </para>
+
+        <para>
+            This chapter provides a reference on the available kickstart
+            commands.
+            The information lists the commands, their syntax, and meanings.
+            Kickstart commands are based on the Fedora kickstart versions but
+            with modifications to reflect Wic capabilities.
+            You can see the original documentation for those commands at the
+            following links:
+            <itemizedlist>
+                <listitem><para>
+                    <ulink url='http://fedoraproject.org/wiki/Anaconda/Kickstart#part_or_partition'>http://fedoraproject.org/wiki/Anaconda/Kickstart#part_or_partition</ulink>
+                    </para></listitem>
+                <listitem><para>
+                    <ulink url='http://fedoraproject.org/wiki/Anaconda/Kickstart#bootloader'>http://fedoraproject.org/wiki/Anaconda/Kickstart#bootloader</ulink>
+                    </para></listitem>
+            </itemizedlist>
+        </para>
+    </section>
+
+    <section id='command-part-or-partition'>
+        <title>Command: part or partition</title>
+
+        <para>
+            Either of these commands creates a partition on the system and uses
+            the following syntax:
+            <literallayout class='monospaced'>
+     part [<replaceable>mntpoint</replaceable>]
+     partition [<replaceable>mntpoint</replaceable>]
+            </literallayout>
+            If you do not provide <replaceable>mntpoint</replaceable>, Wic
+            creates a partition but does not mount it.
+        </para>
+
+        <para>
+            The <filename><replaceable>mntpoint</replaceable></filename> is
+            where the partition will be mounted and must be of one of the
+            following forms:
+            <itemizedlist>
+                <listitem><para>
+                    <filename>/<replaceable>path</replaceable></filename>:
+                    For example, "/", "/usr", or "/home"
+                    </para></listitem>
+                <listitem><para>
+                    <filename>swap</filename>:
+                    The created partition is used as swap space.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
+
+        <para>
+            Specifying a <replaceable>mntpoint</replaceable> causes the
+            partition to automatically be mounted.
+            Wic achieves this by adding entries to the filesystem table (fstab)
+            during image generation.
+            For Wic to generate a valid fstab, you must also provide
+            one of the <filename>--ondrive</filename>,
+            <filename>--ondisk</filename>, or
+            <filename>--use-uuid</filename> partition options as part of the
+            command.
+            Here is an example using "/" as the mountpoint.
+            The command uses "--ondisk" to force the partition onto the
+            <filename>sdb</filename> disk:
+            <literallayout class='monospaced'>
+     part / --source rootfs --ondisk sdb --fstype=ext3 --label platform --align 1024
+            </literallayout>
+        </para>
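+
+        <para>
+            As a variation on the previous example, the following line
+            (patterned after the canned <filename>.wks</filename> files
+            provided with Wic) uses <filename>--use-uuid</filename> instead
+            of naming a specific disk, which also satisfies the fstab
+            requirement described above:
+            <literallayout class='monospaced'>
+     part / --source rootfs --fstype=ext4 --label platform --align 1024 --use-uuid
+            </literallayout>
+        </para>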
+
+        <para>
+            Here is a list that describes other supported options you can use
+            with the <filename>part</filename> and
+            <filename>partition</filename> commands.
+            A brief example that combines several of these options follows
+            the list:
+            <itemizedlist>
+                <listitem><para>
+                    <emphasis><filename>--size</filename>:</emphasis>
+                    The minimum partition size in MBytes.
+                    Specify an integer value such as 500.
+                    Do not append the number with "MB".
+                    You do not need this option if you use
+                    <filename>--source</filename>.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--source</filename>:</emphasis>
+                    This option is a Wic-specific option that names the source
+                    of the data that populates the partition.
+                    The most common value for this option is "rootfs", but you
+                    can use any value that maps to a valid source plug-in.
+                    For information on the source plug-ins, see the
+                    "<link linkend='wic-plug-ins-interface'>Wic Plug-Ins Interface</link>"
+                    section.</para>
+
+                    <para>If you use <filename>--source rootfs</filename>, Wic
+                    creates a partition as large as needed and fills it with
+                    the contents of the root filesystem pointed to by the
+                    <filename>-r</filename> command-line option or the
+                    equivalent rootfs derived from the <filename>-e</filename>
+                    command-line option.
+                    The filesystem type used to create the partition is driven
+                    by the value of the <filename>--fstype</filename> option
+                    specified for the partition.
+                    See the entry on <filename>--fstype</filename> that follows
+                    for more information.</para>
+
+                    <para>If you use
+                    <filename>--source <replaceable>plugin-name</replaceable></filename>,
+                    Wic creates a partition as large as needed and fills it
+                    with the contents of the partition that is generated by the
+                    specified plug-in name using the data pointed to by the
+                    <filename>-r</filename> command-line option or the
+                    equivalent rootfs derived from the <filename>-e</filename>
+                    command-line option.
+                    Exactly what those contents and filesystem type end up
+                    being are dependent on the given plug-in implementation.
+                    </para>
+
+                    <para>If you do not use the <filename>--source</filename>
+                    option, the <filename>wic</filename> command creates an
+                    empty partition.
+                    Consequently, you must use the <filename>--size</filename>
+                    option to specify the size of the empty partition.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--ondisk</filename> or <filename>--ondrive</filename>:</emphasis>
+                    Forces the partition to be created on a particular disk.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--fstype</filename>:</emphasis>
+                    Sets the file system type for the partition.
+                    Valid values are:
+                    <itemizedlist>
+                        <listitem><para>
+                            <filename>ext4</filename>
+                            </para></listitem>
+                        <listitem><para>
+                            <filename>ext3</filename>
+                            </para></listitem>
+                        <listitem><para>
+                            <filename>ext2</filename>
+                            </para></listitem>
+                        <listitem><para>
+                            <filename>btrfs</filename>
+                            </para></listitem>
+                        <listitem><para>
+                            <filename>squashfs</filename>
+                            </para></listitem>
+                        <listitem><para>
+                            <filename>swap</filename>
+                            </para></listitem>
+                    </itemizedlist>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--fsoptions</filename>:</emphasis>
+                    Specifies a free-form string of options to be used when
+                    mounting the filesystem.
+                    This string will be copied into the
+                    <filename>/etc/fstab</filename> file of the installed
+                    system and should be enclosed in quotes.
+                    If not specified, the default string is "defaults".
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--label label</filename>:</emphasis>
+                    Specifies the label to give to the filesystem to be made on
+                    the partition.
+                    If the given label is already in use by another filesystem,
+                    a new label is created for the partition.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--active</filename>:</emphasis>
+                    Marks the partition as active.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--align (in KBytes)</filename>:</emphasis>
+                    This option is a Wic-specific option that says to start a
+                    partition on an <replaceable>x</replaceable> KBytes
+                    boundary.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--no-table</filename>:</emphasis>
+                    This option is a Wic-specific option.
+                    Using the option reserves space for the partition and
+                    populates it, but does not add the partition to the
+                    partition table.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--extra-space</filename>:</emphasis>
+                    This option is a Wic-specific option that adds extra space
+                    after the space filled by the content of the partition.
+                    The final size can go beyond the size specified by the
+                    <filename>--size</filename> option.
+                    The default value is 10 MBytes.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--overhead-factor</filename>:</emphasis>
+                    This option is a Wic-specific option that multiplies the
+                    size of the partition by the option's value.
+                    You must supply a value greater than or equal to "1".
+                    The default value is "1.3".
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--part-type</filename>:</emphasis>
+                    This option is a Wic-specific option that specifies the
+                    partition type globally unique identifier (GUID) for GPT
+                    partitions.
+                    You can find the list of partition type GUIDs at
+                    <ulink url='http://en.wikipedia.org/wiki/GUID_Partition_Table#Partition_type_GUIDs'></ulink>.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--use-uuid</filename>:</emphasis>
+                    This option is a Wic-specific option that causes Wic to
+                    generate a random GUID for the partition.
+                    The generated identifier is used in the bootloader
+                    configuration to specify the root partition.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--uuid</filename>:</emphasis>
+                    This option is a Wic-specific option that specifies the
+                    partition UUID.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
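+
+        <para>
+            The options above can be combined as needed.
+            The following lines, adapted from the canned
+            <filename>.wks</filename> files provided with Wic, sketch a root
+            partition and a swap partition together; the disk name, size,
+            and labels are illustrative only:
+            <literallayout class='monospaced'>
+     part / --source rootfs --ondisk sda --fstype=ext4 --label platform --align 1024
+     part swap --ondisk sda --size 44 --label swap1 --fstype=swap
+            </literallayout>
+        </para>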
+    </section>
+
+    <section id='command-bootloader'>
+    <title>Command: bootloader</title>
+
+        <para>
+            This command specifies how the bootloader should be configured and
+            supports the following options; a short example follows the
+            list:
+            <note>
+                Bootloader functionality and boot partitions are implemented by
+                the various <filename>--source</filename> plug-ins that
+                implement bootloader functionality.
+                The bootloader command essentially provides a means of
+                modifying bootloader configuration.
+            </note>
+            <itemizedlist>
+                <listitem><para>
+                    <emphasis><filename>--timeout</filename>:</emphasis>
+                    Specifies the number of seconds before the bootloader times
+                    out and boots the default option.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--append</filename>:</emphasis>
+                    Specifies kernel parameters.
+                    These parameters will be added to the syslinux
+                    <filename>APPEND</filename> or <filename>grub</filename>
+                    kernel command line.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis><filename>--configfile</filename>:</emphasis>
+                    Specifies a user-defined configuration file for the
+                    bootloader.
+                    You can provide a full pathname for the file or a file that
+                    exists in the <filename>canned-wks</filename> folder.
+                    This option overrides all other bootloader options.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
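+
+        <para>
+            As a simple illustration (the values shown are examples only),
+            the following line sets a ten-second timeout and appends two
+            kernel parameters:
+            <literallayout class='monospaced'>
+     bootloader --timeout=10 --append="rootwait console=ttyS0,115200"
+            </literallayout>
+        </para>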
+    </section>
+</chapter>
+<!--
+vim: expandtab tw=80 ts=4
+-->
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-manual.xml b/import-layers/yocto-poky/documentation/ref-manual/ref-manual.xml
index 752b210..d4b7bee 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/ref-manual.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-manual.xml
@@ -22,11 +22,11 @@
 
         <authorgroup>
             <author>
-                <firstname>Richard</firstname> <surname>Purdie</surname>
+                <firstname>Scott</firstname> <surname>Rifenbark</surname>
                 <affiliation>
-                    <orgname>Linux Foundation</orgname>
+                    <orgname>Scotty's Documentation Services, INC</orgname>
                 </affiliation>
-                <email>richard.purdie@linuxfoundation.org</email>
+                <email>srifenbark@gmail.com</email>
             </author>
 
         </authorgroup>
@@ -113,24 +113,19 @@
                 <revremark>Released with the Yocto Project 2.3 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.1</revnumber>
-                <date>June 2017</date>
-                <revremark>Released with the Yocto Project 2.3.1 Release.</revremark>
+                <revnumber>2.4</revnumber>
+                <date>October 2017</date>
+                <revremark>Released with the Yocto Project 2.4 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.2</revnumber>
-                <date>September 2017</date>
-                <revremark>Released with the Yocto Project 2.3.2 Release.</revremark>
-            </revision>
-            <revision>
-                <revnumber>2.3.3</revnumber>
+                <revnumber>2.4.1</revnumber>
                 <date>January 2018</date>
-                <revremark>Released with the Yocto Project 2.3.3 Release.</revremark>
+                <revremark>Released with the Yocto Project 2.4.1 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.4</revnumber>
-                <date>April 2018</date>
-                <revremark>Released with the Yocto Project 2.3.4 Release.</revremark>
+                <revnumber>2.4.2</revnumber>
+                <date>March 2018</date>
+                <revremark>Released with the Yocto Project 2.4.2 Release.</revremark>
             </revision>
         </revhistory>
 
@@ -147,29 +142,31 @@
            <note><title>Manual Notes</title>
                <itemizedlist>
                    <listitem><para>
-                       For the latest version of the Yocto Project Reference
-                       Manual associated with this Yocto Project release
-                       (version &YOCTO_DOC_VERSION;),
-                       see the Yocto Project Reference Manual from the
+                       This version of the
+                       <emphasis>Yocto Project Reference Manual</emphasis>
+                       is for the &YOCTO_DOC_VERSION; release of the
+                       Yocto Project.
+                       To be sure you have the latest version of the manual
+                       for this release, use the manual from the
                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
                        </para></listitem>
                    <listitem><para>
-                       This version of the manual is version
-                       &YOCTO_DOC_VERSION;.
-                       For later releases of the Yocto Project (if they exist),
-                       go to the
+                       For manuals associated with other releases of the Yocto
+                       Project, go to the
                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
                        and use the drop-down "Active Releases" button
-                       and choose the Yocto Project version for which you want
-                       the manual.
+                       and choose the manual associated with the desired
+                       Yocto Project.
                        </para></listitem>
                    <listitem><para>
-                       For an in-development version of the Yocto Project
-                       Reference Manual, see
-                        <ulink url='&YOCTO_DOCS_URL;/latest/ref-manual/ref-manual.html'></ulink>.
+                        To report any inaccuracies or problems with this
+                        manual, send an email to the Yocto Project
+                        discussion group at
+                        <filename>yocto@yoctoproject.org</filename> or log into
+                        the freenode <filename>#yocto</filename> channel.
                         </para></listitem>
-                </itemizedlist>
-            </note>
+               </itemizedlist>
+           </note>
     </legalnotice>
 
     </bookinfo>
@@ -178,7 +175,7 @@
 
     <xi:include href="usingpoky.xml"/>
 
-    <xi:include href="closer-look.xml"/>
+    <xi:include href="ref-development-environment.xml"/>
 
     <xi:include href="technical-details.xml"/>
 
@@ -194,6 +191,8 @@
 
     <xi:include href="ref-devtool-reference.xml"/>
 
+    <xi:include href="ref-kickstart.xml"/>
+
     <xi:include href="ref-qa-checks.xml"/>
 
     <xi:include href="ref-images.xml"/>
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-release-process.xml b/import-layers/yocto-poky/documentation/ref-manual/ref-release-process.xml
index fe3ba09..e2902eb 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/ref-release-process.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-release-process.xml
@@ -61,9 +61,9 @@
     <para>
         Each major release receives a codename that identifies the release in
         the
-        <ulink url='&YOCTO_DOCS_DEV_URL;#yocto-project-repositories'>Yocto Project Source Repositories</ulink>.
+        <link linkend='yocto-project-repositories'>Yocto Project Source Repositories</link>.
         The concept is that branches of
-        <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+        <link linkend='metadata'>Metadata</link>
         with the same codename are likely to be compatible and thus
         work together.
         <note>
@@ -131,7 +131,7 @@
         Yocto Project.
         For information on how to run available tests on your projects, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing'>Performing Automated Runtime Testing</ulink>"
-        section in the Yocto Project Development Manual.
+        section in the Yocto Project Development Tasks Manual.
     </para>
 
     <para>
@@ -206,7 +206,8 @@
     <para>
         The Yocto Project's main Autobuilder
         (<filename>autobuilder.yoctoproject.org</filename>) publicly tests
-        each Yocto Project release's code in the OE-Core, Poky, and BitBake
+        each Yocto Project release's code in the
+        <link linkend='oe-core'>OE-Core</link>, Poky, and BitBake
         repositories.
         The testing occurs for both the current state of the
         "master" branch and also for submitted patches.
@@ -216,7 +217,7 @@
         in the <filename>poky</filename> repository.
         <note>
             You can find all these branches in the Yocto Project
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-repositories'>Source Repositories</ulink>.
+            <link linkend='source-repositories'>Source Repositories</link>.
         </note>
         Testing within these public branches ensures in a publicly visible way
         that all of the main supported architectures and recipes in OE-Core
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-structure.xml b/import-layers/yocto-poky/documentation/ref-manual/ref-structure.xml
index 9b2701c..4bddc59 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/ref-structure.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-structure.xml
@@ -7,16 +7,19 @@
 <title>Source Directory Structure</title>
 
 <para>
-    The <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink> consists of several components.
-    Understanding them and knowing where they are located is key to using the Yocto Project well.
-    This chapter describes the Source Directory and gives information about the various
-    files and directories.
+    The <link linkend='source-directory'>Source Directory</link>
+    consists of several components.
+    Understanding them and knowing where they are located is key to using the
+    Yocto Project well.
+    This chapter describes the Source Directory and gives information about
+    the various files and directories.
 </para>
 
 <para>
-    For information on how to establish a local Source Directory on your development system, see the
-    "<ulink url='&YOCTO_DOCS_DEV_URL;#getting-setup'>Getting Set Up</ulink>"
-    section in the Yocto Project Development Manual.
+    For information on how to establish a local Source Directory on your
+    development system, see the
+    "<ulink url='&YOCTO_DOCS_DEV_URL;#working-with-yocto-project-source-files'>Working With Yocto Project Source Files</ulink>"
+    section in the Yocto Project Development Tasks Manual.
 </para>
 
 <note>
@@ -31,7 +34,7 @@
 
     <para>
         This section describes the top-level components of the
-        <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+        <link linkend='source-directory'>Source Directory</link>.
     </para>
 
     <section id='structure-core-bitbake'>
@@ -42,7 +45,7 @@
             The copy usually matches the current stable BitBake release from
             the BitBake project.
             BitBake, a
-            <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+            <link linkend='metadata'>Metadata</link>
             interpreter, reads the Yocto Project Metadata and runs the tasks
             defined by that data.
             Failures are usually from the Metadata and not from BitBake itself.
@@ -53,10 +56,8 @@
             When you run the <filename>bitbake</filename> command, the
             main BitBake executable, which resides in the
             <filename>bitbake/bin/</filename> directory, starts.
-            Sourcing an environment setup script (e.g.
-            <link linkend="structure-core-script"><filename>&OE_INIT_FILE;</filename></link>
-            or
-            <link linkend="structure-memres-core-script"><filename>oe-init-build-env-memres</filename></link>)
+            Sourcing the environment setup script (i.e.
+            <link linkend="structure-core-script"><filename>&OE_INIT_FILE;</filename></link>)
             places the <filename>scripts</filename> and
             <filename>bitbake/bin</filename> directories (in that order) into
             the shell's <filename>PATH</filename> environment variable.
@@ -75,27 +76,24 @@
             This directory contains user configuration files and the output
             generated by the OpenEmbedded build system in its standard configuration where
             the source tree is combined with the output.
-            The <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+            The
+            <link linkend='build-directory'>Build Directory</link>
             is created initially when you <filename>source</filename>
             the OpenEmbedded build environment setup script
             (i.e.
-            <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>
-            or
-            <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>).
+            <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>).
         </para>
 
         <para>
             It is also possible to place output and configuration
             files in a directory separate from the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+            <link linkend='source-directory'>Source Directory</link>
             by providing a directory name when you <filename>source</filename>
             the setup script.
             For information on separating output from your local
             Source Directory files, see the
-            "<link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>
-            and
-            "<link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>"
-            sections.
+            "<link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>"
+            section.
         </para>
     </section>
 
@@ -175,9 +173,7 @@
             This directory contains various integration scripts that implement
             extra functionality in the Yocto Project environment (e.g. QEMU scripts).
             The <link linkend="structure-core-script"><filename>&OE_INIT_FILE;</filename></link>
-            and
-            <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>
-            scripts append this directory to the shell's
+            script appends this directory to the shell's
             <filename>PATH</filename> environment variable.
         </para>
 
@@ -192,14 +188,7 @@
         <title><filename>&OE_INIT_FILE;</filename></title>
 
         <para>
-            This script is one of two scripts that set up the OpenEmbedded build
-            environment.
-            For information on the other script, see the
-            "<link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>"
-            section.
-        </para>
-
-        <para>
+            This script sets up the OpenEmbedded build environment.
             Running this script with the <filename>source</filename> command in
             a shell makes changes to <filename>PATH</filename> and sets other
             core BitBake variables based on the current working directory.
@@ -212,7 +201,7 @@
         <para>
             When you run this script, your Yocto Project environment is set
             up, a
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+            <link linkend='build-directory'>Build Directory</link>
             is created, your working directory becomes the Build Directory,
             and you are presented with a list of common BitBake targets.
             Here is an example:
@@ -234,19 +223,19 @@
             The script gets its default list of common targets from the
             <filename>conf-notes.txt</filename> file, which is found in the
             <filename>meta-poky</filename> directory within the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+            <link linkend='source-directory'>Source Directory</link>.
             Should you have custom distributions, it is very easy to modify
             this configuration file to include your targets for your
             distribution.
             See the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-custom-template-configuration-directory'>Creating a Custom Template Configuration Directory</ulink>"
-            section in the Yocto Project Development Manual for more
+            section in the Yocto Project Development Tasks Manual for more
             information.
         </para>
 
         <para>
             By default, running this script without a
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+            <link linkend='build-directory'>Build Directory</link>
             argument creates the <filename>build</filename> directory
             in your current working directory.
             If you provide a Build Directory argument when you
@@ -254,17 +243,17 @@
             build system to create a Build Directory of your choice.
             For example, the following command creates a Build Directory named
             <filename>mybuilds</filename> that is outside of the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>:
+            <link linkend='source-directory'>Source Directory</link>:
             <literallayout class='monospaced'>
      $ source &OE_INIT_FILE; ~/mybuilds
             </literallayout>
             The OpenEmbedded build system uses the template configuration
             files, which are found by default in the
             <filename>meta-poky/conf</filename> directory in the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+            Source Directory.
             See the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-custom-template-configuration-directory'>Creating a Custom Template Configuration Directory</ulink>"
-            section in the Yocto Project Development Manual for more
+            section in the Yocto Project Development Tasks Manual for more
             information.
             <note>
                 The OpenEmbedded build system does not support file or directory names that
@@ -278,157 +267,6 @@
         </para>
     </section>
 
-    <section id='structure-memres-core-script'>
-        <title><filename>oe-init-build-env-memres</filename></title>
-
-        <para>
-            This script is one of two scripts that set up the OpenEmbedded
-            build environment.
-            Aside from setting up the environment, this script starts a
-            memory-resident BitBake server.
-            For information on the other setup script, see the
-            "<link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>"
-            section.
-        </para>
-
-        <para>
-            Memory-resident BitBake resides in memory until you specifically
-            remove it using the following BitBake command:
-            <literallayout class='monospaced'>
-     $ bitbake -m
-            </literallayout>
-        </para>
-
-        <para>
-            Running this script with the <filename>source</filename> command in
-            a shell makes changes to <filename>PATH</filename> and sets other
-            core BitBake variables based on the current working directory.
-            One of these variables is the
-            <link linkend='var-BBSERVER'><filename>BBSERVER</filename></link>
-            variable, which allows the OpenEmbedded build system to locate
-            the server that is running BitBake.
-        </para>
-
-        <para>
-            You need to run an environment setup script before using BitBake
-            commands.
-            Following is the script syntax:
-            <literallayout class='monospaced'>
-     $ source oe-init-build-env-memres <replaceable>port_number</replaceable> <replaceable>build_dir</replaceable>
-            </literallayout>
-            Following are some considerations when sourcing this script:
-            <itemizedlist>
-                <listitem><para>
-                    The script uses other scripts within the
-                    <filename>scripts</filename> directory to do the bulk of
-                    the work.
-                    </para></listitem>
-                <listitem><para>
-                    If you do not provide a port number with the script, the
-                    BitBake server starts at a randomly selected port.
-                    </para></listitem>
-                <listitem><para>
-                    The script's parameters are positionally dependent.
-                    Consequently, you cannot run the script and provide a
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
-                    name without also providing a port number.
-                    In other words, the following syntax is illegal:
-                    <literallayout class='monospaced'>
-     $ source oe-initbuild-env-memres <replaceable>build_dir</replaceable>
-                    </literallayout>
-                    <note>
-                        The previous restriction might be resolved in the
-                        future.
-                        See
-                        <ulink url='https://bugzilla.yoctoproject.org/show_bug.cgi?id=7555'>Bug 7555</ulink>
-                        for more information.
-                    </note>
-                    </para></listitem>
-            </itemizedlist>
-        </para>
-
-        <para>
-            When you run this script, your Yocto Project environment is set
-            up, a Build Directory is created, your working directory becomes
-            the Build Directory, and you are presented with a list of common
-            BitBake targets.
-            Here is an example:
-            <literallayout class='monospaced'>
-     $ source oe-init-build-env-memres
-     No port specified, using dynamically selected port
-
-     ### Shell environment set up for builds. ###
-
-     You can now run 'bitbake &lt;target&gt;'
-
-     Common targets are:
-         core-image-minimal
-         core-image-sato
-         meta-toolchain
-         meta-ide-support
-
-     You can also run generated qemu images with a command like 'runqemu qemux86'
-     Bitbake server address: 127.0.0.1, server port: 53995
-     Bitbake server started on demand as needed, use bitbake -m to shut it down
-            </literallayout>
-            The script gets its default list of common targets from the
-            <filename>conf-notes.txt</filename> file, which is found in the
-            <filename>meta-poky</filename> directory within the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
-            Should you have custom distributions, it is very easy to modify
-            this configuration file to include your targets for your
-            distribution.
-            See the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-custom-template-configuration-directory'>Creating a Custom Template Configuration Directory</ulink>"
-            section in the Yocto Project Development Manual for more
-            information.
-        </para>
-
-        <para>
-            By default, running this script without a
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
-            argument creates a build directory named
-            <filename>build</filename>.
-            If you provide a Build Directory argument and port number when you
-            <filename>source</filename> the script, the Build Directory is
-            created using that name.
-            For example, the following command starts the BitBake server using
-            port 53995 and creates a Build Directory named
-            <filename>mybuilds</filename> that is outside of the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>:
-            <literallayout class='monospaced'>
-     $ source oe-init-build-env-memres 53995 ~/mybuilds
-            </literallayout>
-            The <filename>oe-init-build-env-memres</filename> script starts a
-            memory resident BitBake server.
-            This BitBake instance uses the
-            <filename>bitbake-cookerdaemon.log</filename> file, which is
-            located in the Build Directory.
-        </para>
-
-        <para>
-            The OpenEmbedded build system uses the template configuration
-            files, which are found by default in the
-            <filename>meta-poky/conf</filename> directory in the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
-            See the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-custom-template-configuration-directory'>Creating a Custom Template Configuration Directory</ulink>"
-            section in the Yocto Project Development Manual for more
-            information.
-            <note>
-                The OpenEmbedded build system does not support file or
-                directory names that contain spaces.
-                If you attempt to run the
-                <filename>oe-init-build-env-memres</filename> script
-                from a Source Directory that contains spaces in either the
-                filenames or directory names, the script returns an error
-                indicating no such file or directory.
-                Be sure to use a Source Directory free of names containing
-                spaces.
-            </note>
-        </para>
-    </section>
-
     <section id='structure-basic-top-level'>
         <title><filename>LICENSE, README, and README.hardware</filename></title>
 
@@ -443,11 +281,9 @@
 
     <para>
         The OpenEmbedded build system creates the
-        <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
-        when you run one of the build environment setup scripts (i.e.
-        <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>
-        or
-        <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>).
+        <link linkend='build-directory'>Build Directory</link>
+        when you run the build environment setup script (i.e.
+        <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>).
     </para>
 
     <para>
@@ -505,9 +341,7 @@
             <filename>local.conf.sample</filename> when
             you <filename>source</filename> the top-level build environment
             setup script (i.e.
-            <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>
-            or
-            <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>).
+            <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>).
         </para>
 
         <para>
@@ -533,7 +367,7 @@
                 You can see how the <filename>TEMPLATECONF</filename> variable
                 is used by looking at the
                 <filename>scripts/oe-setup-builddir</filename> script in the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                <link linkend='source-directory'>Source Directory</link>.
                 You can find the Yocto Project version of the
                 <filename>local.conf.sample</filename> file in the
                 <filename>meta-poky/conf</filename> directory.
@@ -559,9 +393,7 @@
             <filename>bblayers.conf.sample</filename> when
             you <filename>source</filename> the top-level build environment
             setup script (i.e.
-            <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>
-            or
-            <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>).
+            <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>).
         </para>
 
         <para>
@@ -585,7 +417,7 @@
             <note>
                You can see how the <filename>TEMPLATECONF</filename> variable
                is used by looking at the
                <filename>scripts/oe-setup-builddir</filename> script in the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                <link linkend='source-directory'>Source Directory</link>.
                 You can find the Yocto Project version of the
                 <filename>bblayers.conf.sample</filename> file in the
                 <filename>meta-poky/conf</filename> directory.
@@ -733,7 +565,7 @@
             contain appropriate <filename>COPYING</filename> license files with other licensing information.
             For information on licensing, see the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle'>Maintaining Open Source License Compliance During Your Product's Lifecycle</ulink>"
-            section.
+            section in the Yocto Project Development Tasks Manual.
         </para>
     </section>
 
@@ -777,7 +609,8 @@
             sysroot that matches your target hardware.
             You can find out more about these installers in the
             "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-building-an-sdk-installer'>Building an SDK Installer</ulink>"
-            section in the Yocto Project Software Development Kit (SDK) Developer's Guide.
+            section in the Yocto Project Application Development and the
+            Extensible Software Development Kit (eSDK) manual.
         </para>
     </section>
 
@@ -808,7 +641,8 @@
             recipe listed in
             <link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>.
             Population of this directory is handled through shared state, while
-            the path is specified by the <filename>COMPONENTS_DIR</filename>
+            the path is specified by the
+            <link linkend='var-COMPONENTS_DIR'><filename>COMPONENTS_DIR</filename></link>
             variable. Apart from a few unusual circumstances, handling of the
             <filename>sysroots-components</filename> directory should be
             automatic, and recipes should not directly reference
@@ -907,7 +741,8 @@
             <filename>linux-qemux86-standard-build</filename> and then patched by Quilt.
             (See the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#using-a-quilt-workflow'>Using Quilt in Your Workflow</ulink>"
-            section in the Yocto Project Development Manual for more information.)
+            section in the Yocto Project Development Tasks Manual for more
+            information.)
             Within the <filename>linux-qemux86-standard-build</filename> directory,
             standard Quilt directories <filename>linux-3.0/patches</filename>
             and <filename>linux-3.0/.pc</filename> are created,
@@ -1041,7 +876,7 @@
 
     <para>
         As mentioned previously,
-        <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink> is the core
+        <link linkend='metadata'>Metadata</link> is the core
         of the Yocto Project.
         Metadata has several important subdivisions:
     </para>
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-style.css b/import-layers/yocto-poky/documentation/ref-manual/ref-style.css
index 8ea8dac..7077e4b 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/ref-style.css
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-style.css
@@ -773,6 +773,10 @@
   border-color: black;
 }
 
+.writernotes {
+  color: red;
+}
+
 
   /*********** /
  /  graphics  /
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-tasks.xml b/import-layers/yocto-poky/documentation/ref-manual/ref-tasks.xml
index 87ddb98..c726cb9 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/ref-tasks.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-tasks.xml
@@ -302,7 +302,7 @@
                             See the <filename>bin_package.bbclass</filename>
                             file in the <filename>meta/classes</filename>
                             directory of the
-                            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                            <link linkend='source-directory'>Source Directory</link>
                             for an example.
                             </para></listitem>
                     </itemizedlist>
@@ -495,10 +495,11 @@
 
         <para>
             Installs the files into the individual recipe specific sysroots
-            (i.e.
+            (i.e. <filename>recipe-sysroot</filename> and
+            <filename>recipe-sysroot-native</filename> under
             <filename>${</filename><link linkend='var-WORKDIR'><filename>WORKDIR</filename></link><filename>}</filename>
             based upon the dependencies specified by
-            <link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>.
+            <link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>).
             See the
             "<link linkend='ref-classes-staging'><filename>staging</filename></link>"
             class for more information.
@@ -717,7 +718,7 @@
             the BitBake environment.
             See the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#platdev-appdev-devpyshell'>Using a Development Python Shell</ulink>"
-            section in the Yocto Project Development Manual for more
+            section in the Yocto Project Development Tasks Manual for more
             information about using <filename>devpyshell</filename>.
         </para>
     </section>
@@ -730,7 +731,7 @@
             development, debugging, or both.
             See the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#platdev-appdev-devshell'>Using a Development Shell</ulink>"
-            section in the Yocto Project Development Manual for more
+            section in the Yocto Project Development Tasks Manual for more
             information about using <filename>devshell</filename>.
         </para>
     </section>
@@ -820,7 +821,7 @@
             Boots an image and performs runtime tests within the image.
             For information on automatically testing images, see the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing'>Performing Automated Runtime Testing</ulink>"
-            section in the Yocto Project Development Manual.
+            section in the Yocto Project Development Tasks Manual.
         </para>
     </section>
 
@@ -838,17 +839,7 @@
         <para>
             For information on automatically testing images, see the
             "<ulink url='&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing'>Performing Automated Runtime Testing</ulink>"
-            section in the Yocto Project Development Manual.
-        </para>
-    </section>
-
-    <section id='ref-tasks-vmdkimg'>
-        <title><filename>do_vmdkimg</filename></title>
-
-        <para>
-            Creates a <filename>.vmdk</filename> image for use with
-            <ulink url='http://www.vmware.com/'>VMware</ulink>
-            and compatible virtual machine hosts.
+            section in the Yocto Project Development Tasks Manual.
         </para>
     </section>
 </section>
@@ -892,7 +883,7 @@
      $ bitbake linux-yocto -c diffconfig
             </literallayout>
             For more information, see the
-            "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#generating-configuration-files'>Generating Configuration Files</ulink>"
+            "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#creating-config-fragments'>Creating Configuration Fragments</ulink>"
             section in the Yocto Project Linux Kernel Development Manual.
         </para>
     </section>
@@ -927,7 +918,7 @@
      $ bitbake linux-yocto -c kernel_configcheck -f
             </literallayout>
             For more information, see the
-            "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#generating-configuration-files'>Generating Configuration Files</ulink>"
+            "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#validating-configuration'>Validating Configuration</ulink>"
             section in the Yocto Project Linux Kernel Development Manual.
         </para>
     </section>
@@ -965,12 +956,9 @@
                 </literallayout>
             </note>
             See the
-            "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#generating-configuration-files'>Generating Configuration Files</ulink>"
+            "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#using-menuconfig'>Using <filename>menuconfig</filename></ulink>"
             section in the Yocto Project Linux Kernel Development Manual
             for more information on this configuration tool.
-            You can also reference the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#using-menuconfig'>Using <filename>menuconfig</filename></ulink>"
-            section in the Yocto Project Development Manual.
         </para>
     </section>
 
@@ -997,8 +985,8 @@
         <para>
             Runs <filename>make menuconfig</filename> for the kernel.
             For information on <filename>menuconfig</filename>, see the
-            "<ulink url='&YOCTO_DOCS_DEV_URL;#using-menuconfig'>Using&nbsp;&nbsp;<filename>menuconfig</filename></ulink>"
-            section in the Yocto Project Development Manual.
+            "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#using-menuconfig'>Using&nbsp;&nbsp;<filename>menuconfig</filename></ulink>"
+            section in the Yocto Project Linux Kernel Development Manual.
         </para>
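+
+        <para>
+            For example, to bring up the configuration menu for the
+            <filename>linux-yocto</filename> kernel recipe, you could run:
+            <literallayout class='monospaced'>
+     $ bitbake linux-yocto -c menuconfig
+            </literallayout>
+        </para>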
     </section>
 
diff --git a/import-layers/yocto-poky/documentation/ref-manual/ref-variables.xml b/import-layers/yocto-poky/documentation/ref-manual/ref-variables.xml
index ad10139..2b01723 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/ref-variables.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/ref-variables.xml
@@ -29,7 +29,7 @@
        <link linkend='var-KARCH'>K</link>
        <link linkend='var-LABELS'>L</link>
        <link linkend='var-MACHINE'>M</link>
-<!--               <link linkend='var-glossary-n'>N</link> -->
+       <link linkend='var-NATIVELSBSTRING'>N</link>
        <link linkend='var-OBJCOPY'>O</link>
        <link linkend='var-P'>P</link>
 <!--               <link linkend='var-glossary-q'>Q</link> -->
@@ -321,7 +321,7 @@
                     For information on how the variable works, see the
                     <filename>meta/classes/archiver.bbclass</filename> file
                     in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <link linkend='source-directory'>Source Directory</link>.
                 </para>
             </glossdef>
         </glossentry>
@@ -488,7 +488,7 @@
                 <para>
                     For more information see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#automatically-incrementing-a-binary-package-revision-number'>Automatically Incrementing a Binary Package Revision Number</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                     </para>
             </glossdef>
         </glossentry>
@@ -536,7 +536,7 @@
                 <para role="glossdeffirst">
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
                     The directory within the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                    <link linkend='build-directory'>Build Directory</link>
                     in which the OpenEmbedded build system places generated
                     objects during a recipe's build process.
                     By default, this directory is the same as the <link linkend='var-S'><filename>S</filename></link>
@@ -626,14 +626,14 @@
                     Multilib context.
                     See the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#combining-multiple-versions-library-files-into-one-image'>Combining Multiple Versions of Library Files into One Image</ulink>"
-                    section in the Yocto Project Development Manual for
+                    section in the Yocto Project Development Tasks Manual for
                     information on Multilib.
                 </para>
 
                 <para>
                     The <filename>BASE_LIB</filename> variable is defined in
                     the machine include files in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <link linkend='source-directory'>Source Directory</link>.
                     If Multilib is not being used, the value defaults to "lib".
                 </para>
             </glossdef>
@@ -734,7 +734,7 @@
                     variable to "1", "yes", or "true"
                     in your <filename>local.conf</filename> file, which is
                     located in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>:
+                    <link linkend='build-directory'>Build Directory</link>.
                     Here is an example:
                     <literallayout class='monospaced'>
      BB_DANGLINGAPPENDS_WARNONLY = "1"
@@ -759,7 +759,7 @@
                     Disk space monitoring is disabled by default.
                     To enable monitoring, add the <filename>BB_DISKMON_DIRS</filename>
                     variable to your <filename>conf/local.conf</filename> file found in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <link linkend='build-directory'>Build Directory</link>.
                     Use the following form:
                     <literallayout class='monospaced'>
      BB_DISKMON_DIRS = "<replaceable>action</replaceable>,<replaceable>dir</replaceable>,<replaceable>threshold</replaceable> [...]"
@@ -852,7 +852,7 @@
                     Defines the disk space and free inode warning intervals.
                     To set these intervals, define the variable in your
                     <filename>conf/local.conf</filename> file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <link linkend='build-directory'>Build Directory</link>.
                 </para>
 
                 <para>
@@ -936,7 +936,7 @@
                     </literallayout>
                     Set this variable in your <filename>local.conf</filename>
                     file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <link linkend='build-directory'>Build Directory</link>.
                 </para>
             </glossdef>
         </glossentry>
@@ -1154,7 +1154,8 @@
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
                     Lists the layers to enable during the build.
                     This variable is defined in the <filename>bblayers.conf</filename> configuration
-                    file in the <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    file in the
+                    <link linkend='build-directory'>Build Directory</link>.
                     Here is an example:
                     <literallayout class='monospaced'>
      BBLAYERS = " \
@@ -1250,7 +1251,7 @@
      BBMULTICONFIG = "configA configB configC"
                     </literallayout>
                     Each configuration file you use must reside in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory's</ulink>
+                    <link linkend='build-directory'>Build Directory</link>
                     <filename>conf/multiconfig</filename> directory
                     (e.g.
                     <replaceable>build_directory</replaceable><filename>/conf/multiconfig/configA.conf</filename>).
@@ -1262,7 +1263,7 @@
                     supports building targets with multiple configurations,
                     see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#platdev-building-targets-with-multiple-configurations'>Building Targets with Multiple Configurations</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -1280,7 +1281,7 @@
                     <filename>PATH</filename> variable.
                     <note>
                         If you run BitBake from a directory outside of the
-                        <ulink url='&YOCTO_DOCS_DEV_URL;build-directory'>Build Directory</ulink>,
+                        <link linkend='build-directory'>Build Directory</link>,
                         you must be sure to set
                         <filename>BBPATH</filename> to point to the
                         Build Directory.
@@ -1298,29 +1299,29 @@
 
         <glossentry id='var-BBSERVER'><glossterm>BBSERVER</glossterm>
             <info>
-                BBSERVER[doc] = "Points to the server that runs memory-resident BitBake."
+                BBSERVER[doc] = "Points to the BitBake remote server."
             </info>
             <glossdef>
                 <para role="glossdeffirst">
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
-                    Points to the server that runs memory-resident BitBake.
-                    This variable is set by the
-                    <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>
-                    setup script and should not be hand-edited.
-                    The variable is only used when you employ memory-resident
-                    BitBake.
-                    The setup script exports the value as follows:
+                    If defined in the BitBake environment,
+                    <filename>BBSERVER</filename> points to the BitBake
+                    remote server.
+                </para>
+
+                <para>
+                    Use the following format to export the variable to the
+                    BitBake environment:
                     <literallayout class='monospaced'>
-     export BBSERVER=localhost:$port
+     export BBSERVER=localhost:$port
                     </literallayout>
                 </para>
 
                 <para>
-                    For more information on how the
-                    <filename>BBSERVER</filename> is used, see the
-                    <filename>oe-init-build-env-memres</filename> script, which
-                    is located in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    By default, <filename>BBSERVER</filename> also appears in
+                    <ulink url='&YOCTO_DOCS_BB_URL;#var-BB_HASHBASE_WHITELIST'><filename>BB_HASHBASE_WHITELIST</filename></ulink>.
+                    Consequently, <filename>BBSERVER</filename> is excluded
+                    from checksum and dependency data.
                 </para>
             </glossdef>
         </glossentry>
@@ -1373,7 +1374,7 @@
                 <para>
                     For more information on how this variable works, see
                     <filename>meta/classes/binconfig.bbclass</filename> in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <link linkend='source-directory'>Source Directory</link>.
                     You can also find general information on the class in the
                     "<link linkend='ref-classes-binconfig'><filename>binconfig.bbclass</filename></link>"
                     section.
@@ -1456,6 +1457,51 @@
             </glossdef>
         </glossentry>
 
+        <glossentry id='var-BUILD_AS_ARCH'><glossterm>BUILD_AS_ARCH</glossterm>
+            <info>
+                BUILD_AS_ARCH[doc] = "Specifies the architecture-specific assembler flags for the build host."
+            </info>
+            <glossdef>
+                <para role="glossdeffirst">
+<!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
+                    Specifies the architecture-specific assembler flags for
+                    the build host. By default, the value of
+                    <filename>BUILD_AS_ARCH</filename> is empty.
+                </para>
+            </glossdef>
+        </glossentry>
+
+        <glossentry id='var-BUILD_CC_ARCH'><glossterm>BUILD_CC_ARCH</glossterm>
+            <info>
+                BUILD_CC_ARCH[doc] = "Specifies the architecture-specific C compiler flags for the build host."
+            </info>
+            <glossdef>
+                <para role="glossdeffirst">
+<!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
+                    Specifies the architecture-specific C compiler flags for
+                    the build host. By default, the value of
+                    <filename>BUILD_CC_ARCH</filename> is empty.
+                </para>
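+
+                <para>
+                    For example, a build host configuration could pass
+                    host-specific compiler flags through this variable
+                    (a hypothetical setting, shown only for illustration):
+                    <literallayout class='monospaced'>
+     BUILD_CC_ARCH = "-m64 -march=x86-64"
+                    </literallayout>
+                </para>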
+            </glossdef>
+        </glossentry>
+
+        <glossentry id='var-BUILD_CCLD'><glossterm>BUILD_CCLD</glossterm>
+            <info>
+                BUILD_CCLD[doc] = "Specifies the linker command to be used for the build host when the C compiler is being used as the linker."
+            </info>
+            <glossdef>
+                <para role="glossdeffirst">
+<!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
+                    Specifies the linker command to be used for the build host
+                    when the C compiler is being used as the linker. By default,
+                    <filename>BUILD_CCLD</filename> points to GCC and passes as
+                    arguments the value of
+                    <link linkend='var-BUILD_CC_ARCH'><filename>BUILD_CC_ARCH</filename></link>,
+                    assuming <filename>BUILD_CC_ARCH</filename> is set.
+                </para>
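+
+                <para>
+                    Although the exact upstream assignment is not quoted
+                    here, the default follows this general pattern (an
+                    approximation of the <filename>bitbake.conf</filename>
+                    setting, not the verbatim definition):
+                    <literallayout class='monospaced'>
+     BUILD_CCLD = "${BUILD_PREFIX}gcc ${BUILD_CC_ARCH}"
+                    </literallayout>
+                </para>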
+            </glossdef>
+        </glossentry>
+
         <glossentry id='var-BUILD_CFLAGS'><glossterm>BUILD_CFLAGS</glossterm>
             <info>
                 BUILD_CFLAGS[doc] = "Specifies the flags to pass to the C compiler when building for the build host."
@@ -1505,6 +1551,52 @@
             </glossdef>
         </glossentry>
 
+        <glossentry id='var-BUILD_FC'><glossterm>BUILD_FC</glossterm>
+            <info>
+                BUILD_FC[doc] = "Specifies the Fortran compiler command for the build host."
+            </info>
+            <glossdef>
+                <para role="glossdeffirst">
+<!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
+                    Specifies the Fortran compiler command for the build host.
+                    By default, <filename>BUILD_FC</filename> points to
+                    Gfortran and passes as arguments the value of
+                    <link linkend='var-BUILD_CC_ARCH'><filename>BUILD_CC_ARCH</filename></link>,
+                    assuming <filename>BUILD_CC_ARCH</filename> is set.
+               </para>
+            </glossdef>
+        </glossentry>
+
+        <glossentry id='var-BUILD_LD'><glossterm>BUILD_LD</glossterm>
+            <info>
+                BUILD_LD[doc] = "Specifies the linker command for the build host."
+            </info>
+            <glossdef>
+                <para role="glossdeffirst">
+<!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
+                    Specifies the linker command for the build host. By default,
+                    <filename>BUILD_LD</filename> points to the GNU linker (ld)
+                    and passes as arguments the value of
+                    <link linkend='var-BUILD_LD_ARCH'><filename>BUILD_LD_ARCH</filename></link>,
+                    assuming <filename>BUILD_LD_ARCH</filename> is set.
+               </para>
+            </glossdef>
+        </glossentry>
+
+        <glossentry id='var-BUILD_LD_ARCH'><glossterm>BUILD_LD_ARCH</glossterm>
+            <info>
+                BUILD_LD_ARCH[doc] = "Specifies architecture-specific linker flags for the build host."
+            </info>
+            <glossdef>
+                <para role="glossdeffirst">
+<!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
+                    Specifies architecture-specific linker flags for the build
+                    host. By default, the value of
+                    <filename>BUILD_LD_ARCH</filename> is empty.
+               </para>
+            </glossdef>
+        </glossentry>
+
         <glossentry id='var-BUILD_LDFLAGS'><glossterm>BUILD_LDFLAGS</glossterm>
             <info>
                 BUILD_LDFLAGS[doc] = "Specifies the flags to pass to the linker when building for the build host."
@@ -1578,6 +1670,21 @@
             </glossdef>
         </glossentry>
 
+        <glossentry id='var-BUILD_STRIP'><glossterm>BUILD_STRIP</glossterm>
+            <info>
+                BUILD_STRIP[doc] = "Specifies the command to be used to strip debugging symbols from binaries produced for the build host."
+            </info>
+            <glossdef>
+                <para role="glossdeffirst">
+<!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
+                    Specifies the command to be used to strip debugging symbols
+                    from binaries produced for the build host. By default,
+                    <filename>BUILD_STRIP</filename> points to
+                    <filename>${</filename><link linkend='var-BUILD_PREFIX'><filename>BUILD_PREFIX</filename></link><filename>}strip</filename>.
+                </para>
+            </glossdef>
+        </glossentry>
+
         <glossentry id='var-BUILD_SYS'><glossterm>BUILD_SYS</glossterm>
             <info>
                 BUILD_SYS[doc] = "The toolchain binary prefix used for native recipes."
@@ -1626,14 +1733,12 @@
                 <para role="glossdeffirst">
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
                     Points to the location of the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <link linkend='build-directory'>Build Directory</link>.
                     You can define this directory indirectly through the
                     <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>
-                    and
-                    <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>
-                    scripts by passing in a Build Directory path when you run
-                    the scripts.
-                    If you run the scripts and do not provide a Build Directory
+                    script by passing in a Build Directory path when you run
+                    the script.
+                    If you run the script and do not provide a Build Directory
                     path, the <filename>BUILDDIR</filename> defaults to
                     <filename>build</filename> in the current directory.
                 </para>
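+
+                <para>
+                    For example, sourcing the setup script with an explicit
+                    path (an illustrative invocation with a made-up directory
+                    name) results in <filename>BUILDDIR</filename> pointing to
+                    that directory:
+                    <literallayout class='monospaced'>
+     $ source &OE_INIT_FILE; my-qemux86-build
+                    </literallayout>
+                </para>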
@@ -1968,7 +2073,7 @@
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
                     Specifies the directory BitBake uses to store a cache
                     of the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+                    <link linkend='metadata'>Metadata</link>
                     so it does not need to be parsed every time BitBake is
                     started.
                 </para>
@@ -2126,7 +2231,7 @@
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
                     Points to <filename>meta/files/common-licenses</filename>
                     in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
+                    <link linkend='source-directory'>Source Directory</link>,
                     which is where generic license files reside.
                 </para>
             </glossdef>
@@ -2207,6 +2312,27 @@
             </glossdef>
         </glossentry>
 
+        <glossentry id='var-COMPONENTS_DIR'><glossterm>COMPONENTS_DIR</glossterm>
+            <info>
+                COMPONENTS_DIR[doc] = "Stores sysroot components for each recipe."
+            </info>
+            <glossdef>
+                <para role="glossdeffirst">
+<!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
+                    Stores sysroot components for each recipe.
+                    The OpenEmbedded build system uses
+                    <filename>COMPONENTS_DIR</filename> when constructing
+                    recipe-specific sysroots for other recipes.
+                </para>
+
+                <para>
+                    The default is
+                    "<filename>${</filename><link linkend='var-STAGING_DIR'><filename>STAGING_DIR</filename></link><filename>}-components</filename>"
+                    (i.e. "<filename>${</filename><link linkend='var-TMPDIR'><filename>TMPDIR</filename></link><filename>}/sysroots-components</filename>").
+                </para>
+            </glossdef>
+        </glossentry>
+
         <glossentry id='var-CONF_VERSION'><glossterm>CONF_VERSION</glossterm>
             <info>
                 CONF_VERSION[doc] = "Tracks the version of local.conf.  Increased each time build/conf/ changes incompatibly."
@@ -2272,7 +2398,7 @@
                     than <filename>/usr/bin</filename>.
                     You can find a list of these variables at the top of the
                     <filename>meta/conf/bitbake.conf</filename> file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <link linkend='source-directory'>Source Directory</link>.
                 </note>
             </glossdef>
         </glossentry>
@@ -2315,7 +2441,7 @@
                 <para>
                     For information on creating an initramfs, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#building-an-initramfs-image'>Building an Initial RAM Filesystem (initramfs) Image</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -2549,8 +2675,8 @@
                         variable for additional information.
                         You can also reference the
                         "<ulink url='&YOCTO_DOCS_DEV_URL;#providing-license-text'>Providing License Text</ulink>"
-                        section in the Yocto Project Development Manual for
-                        information on providing license text.
+                        section in the Yocto Project Development Tasks Manual
+                        for information on providing license text.
                     </note>
                 </para>
             </glossdef>
@@ -2577,8 +2703,8 @@
                         variable for additional information.
                         You can also reference the
                         "<ulink url='&YOCTO_DOCS_DEV_URL;#providing-license-text'>Providing License Text</ulink>"
-                        section in the Yocto Project Development Manual for
-                        information on providing license text.
+                        section in the Yocto Project Development Tasks Manual
+                        for information on providing license text.
                     </note>
                 </para>
             </glossdef>
@@ -2595,7 +2721,7 @@
                     You should only set this variable in the
                     <filename>local.conf</filename> configuration file found
                     in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <link linkend='build-directory'>Build Directory</link>.
                 </para>
 
                 <para>
@@ -2805,7 +2931,8 @@
                 <para role="glossdeffirst">
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
                     The destination directory.
-                    The location in the <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                    The location in the
+                    <link linkend='build-directory'>Build Directory</link>
                     where components are installed by the
                     <link linkend='ref-tasks-install'><filename>do_install</filename></link>
                     task.
@@ -3116,7 +3243,7 @@
                     system uses to place images, packages, SDKs and other output
                     files that are ready to be used outside of the build system.
                     By default, this directory resides within the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                    <link linkend='build-directory'>Build Directory</link>
                     as <filename>${TMPDIR}/deploy</filename>.
                 </para>
 
@@ -3189,7 +3316,7 @@
                     The directory is machine-specific as it contains the
                     <filename>${MACHINE}</filename> name.
                     By default, this directory resides within the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                    <link linkend='build-directory'>Build Directory</link>
                     as <filename>${DEPLOY_DIR}/images/${MACHINE}/</filename>.
                 </para>
 
@@ -3439,7 +3566,7 @@
                     and resides in the
                     <filename>meta-poky/conf/distro</filename> directory of
                     the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <link linkend='source-directory'>Source Directory</link>.
                 </para>
 
                 <para>
@@ -3453,7 +3580,7 @@
                 <para>
                     Distribution configuration files are located in a
                     <filename>conf/distro</filename> directory within the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+                    <link linkend='metadata'>Metadata</link>
                     that contains the distribution configuration.
                     The value for <filename>DISTRO</filename> must not contain
                     spaces, and is typically all lower-case.
@@ -3794,7 +3921,7 @@
                     to touch it.
                     By default, the directory is <filename>downloads</filename>
                     in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <link linkend='build-directory'>Build Directory</link>.
                     <literallayout class='monospaced'>
      #DL_DIR ?= "${TOPDIR}/downloads"
                     </literallayout>
@@ -3862,14 +3989,14 @@
 
         <glossentry id='var-EFI_PROVIDER'><glossterm>EFI_PROVIDER</glossterm>
             <info>
-                EFI_PROVIDER[doc] = "When building bootable images (i.e. where hddimg or vmdk is in IMAGE_FSTYPES), the EFI_PROVIDER variable specifies the EFI bootloader to use."
+                EFI_PROVIDER[doc] = "When building bootable images (i.e. where hddimg, iso, or wic.vmdk is in IMAGE_FSTYPES), the EFI_PROVIDER variable specifies the EFI bootloader to use."
             </info>
             <glossdef>
                 <para role="glossdeffirst">
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
                     When building bootable images (i.e. where
-                    <filename>hddimg</filename> or <filename>vmdk</filename>
-                    is in
+                    <filename>hddimg</filename>, <filename>iso</filename>,
+                    or <filename>wic.vmdk</filename> is in
                     <link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>),
                     the <filename>EFI_PROVIDER</filename> variable specifies
                     the EFI bootloader to use.
@@ -4117,7 +4244,7 @@
                     You can also find information on how to use this variable
                     in the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#building-software-from-an-external-source'>Building Software from an External Source</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -4148,7 +4275,7 @@
                     You can also find information on how to use this variable
                     in the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#building-software-from-an-external-source'>Building Software from an External Source</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -4191,7 +4318,7 @@
                 <para>
                     Typically, you configure this variable in your
                     <filename>local.conf</filename> file, which is found in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <link linkend='build-directory'>Build Directory</link>.
                     Although you can use this variable from within a recipe,
                     best practices dictate that you do not.
                     <note>
@@ -4225,8 +4352,8 @@
                      filesystem is read-only. See the
                      "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-a-read-only-root-filesystem'>Creating a Read-Only Root Filesystem</ulink>"
                      section in the Yocto Project
-                     Development Manual for more
-                     information
+                     Development Tasks Manual for
+                     more information
 
 "tools-debug" - Adds debugging tools such as gdb and
                 strace.
@@ -4252,7 +4379,7 @@
                     For an example that shows how to customize your image by
                     using this variable, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#usingpoky-extend-customimage-imagefeatures'>Customizing Images Using Custom <filename>IMAGE_FEATURES</filename> and <filename>EXTRA_IMAGE_FEATURES</filename></ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
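+
+                <para>
+                    As a brief illustration (a hypothetical
+                    <filename>local.conf</filename> snippet rather than a
+                    recommended default), the following enables two of the
+                    features described above for every image built in the
+                    Build Directory:
+                    <literallayout class='monospaced'>
+     EXTRA_IMAGE_FEATURES = "debug-tweaks tools-debug"
+                    </literallayout>
+                </para>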
             </glossdef>
         </glossentry>
@@ -4541,7 +4668,7 @@
                     <filename>/usr/bin</filename>.
                     You can find a list of these variables at the top of the
                     <filename>meta/conf/bitbake.conf</filename> file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <link linkend='source-directory'>Source Directory</link>.
                     You will also find the default values of the various
                     <filename>FILES_*</filename> variables in this file.
                 </note>
@@ -4615,7 +4742,8 @@
                     in a directory that has the same name as the corresponding
                     append file.
                     <note>
-                        <para>When extending <filename>FILESEXTRAPATHS</filename>,
+                        <para>When extending
+                        <filename>FILESEXTRAPATHS</filename>,
                         be sure to use the immediate expansion
                         (<filename>:=</filename>) operator.
                         Immediate expansion makes sure that BitBake evaluates
@@ -4624,6 +4752,7 @@
                         some later time when expansion might result in a
                         directory that does not contain the files you need.
                         </para>
+
                         <para>Also, include the trailing separating colon
                         character if you are prepending.
                         The trailing colon character is necessary because you
@@ -4641,13 +4770,37 @@
                 </para>
 
                 <para>
-                    Here is a final example that specifically adds three paths:
+                    This next example specifically adds three paths:
                     <literallayout class='monospaced'>
      FILESEXTRAPATHS_prepend := "path_1:path_2:path_3:"
                     </literallayout>
                 </para>
 
                 <para>
+                    A final example shows how you can extend the search path
+                    and include a
+                    <link linkend='var-MACHINE'><filename>MACHINE</filename></link>-specific
+                    override, which is useful in a BSP layer:
+                    <literallayout class='monospaced'>
+     FILESEXTRAPATHS_prepend_intel-x86-common := "${THISDIR}/${PN}:"
+                    </literallayout>
+                    The previous statement appears in the
+                    <filename>linux-yocto-dev.bbappend</filename> file, which
+                    is found in the Yocto Project
+                    <link linkend='source-repositories'>Source Repositories</link>
+                    in
+                    <filename>meta-intel/common/recipes-kernel/linux</filename>.
+                    Here, the machine override is a special
+                    <link linkend='var-PACKAGE_ARCH'><filename>PACKAGE_ARCH</filename></link>
+                    definition for multiple <filename>meta-intel</filename>
+                    machines.
+                    <note>
+                        For a layer that supports a single BSP, the override
+                        could just be the value of <filename>MACHINE</filename>.
+                    </note>
+                </para>
+
+                <para>
                     By prepending paths in <filename>.bbappend</filename>
                     files, you allow multiple append files that reside in
                     different layers but are used for the same recipe to
@@ -4707,7 +4860,7 @@
                     The default value for the <filename>FILESPATH</filename>
                     variable is defined in the <filename>base.bbclass</filename>
                     class found in <filename>meta/classes</filename> in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>:
+                    <link linkend='source-directory'>Source Directory</link>:
                     <literallayout class='monospaced'>
      FILESPATH = "${@base_set_filespath(["${FILE_DIRNAME}/${BP}", \
         "${FILE_DIRNAME}/${BPN}", "${FILE_DIRNAME}/files"], d)}"
@@ -4753,7 +4906,7 @@
                 <para>
                     By default, the OpenEmbedded build system uses the <filename>fs-perms.txt</filename>, which
                     is located in the <filename>meta/files</filename> folder in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <link linkend='source-directory'>Source Directory</link>.
                     If you create your own file permissions setting table, you should place it in your
                     layer or the distro's layer.
                 </para>
@@ -4761,7 +4914,7 @@
                 <para>
                     You define the <filename>FILESYSTEM_PERMS_TABLES</filename> variable in the
                     <filename>conf/local.conf</filename> file, which is found in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>, to
+                    <link linkend='build-directory'>Build Directory</link>, to
                     point to your custom <filename>fs-perms.txt</filename>.
                     You can specify more than a single file permissions setting table.
                     The paths you specify to these files must be defined within the
@@ -5539,7 +5692,7 @@
                 <para>
                     For more information, see
                     <filename>meta/classes/image_types.bbclass</filename> in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <link linkend='source-directory'>Source Directory</link>.
                 </para>
             </glossdef>
         </glossentry>
@@ -5613,7 +5766,7 @@
                     Typically, you configure this variable in an image recipe.
                     Although you can use this variable from your
                     <filename>local.conf</filename> file, which is found in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>,
+                    <link linkend='build-directory'>Build Directory</link>,
                     best practices dictate that you do not.
                     <note>
                         To enable extra features from outside the image recipe,
@@ -5633,7 +5786,7 @@
                     For an example that shows how to customize your image by
                     using this variable, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#usingpoky-extend-customimage-imagefeatures'>Customizing Images Using Custom <filename>IMAGE_FEATURES</filename> and <filename>EXTRA_IMAGE_FEATURES</filename></ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -5662,20 +5815,25 @@
                     <link linkend='var-IMAGE_TYPES'><filename>IMAGE_TYPES</filename></link>.
                 </para>
 
-                <note>
-                    If you add "live" to <filename>IMAGE_FSTYPES</filename>
-                    inside an image recipe, be sure that you do so prior to the
-                    "inherit image" line of the recipe or the live image will
-                    not build.
-                </note>
-
-                <note>
-                    Due to the way this variable is processed, it is not
-                    possible to update its contents using
-                    <filename>_append</filename> or
-                    <filename>_prepend</filename>. To add one or more
-                    additional options to this variable the
-                    <filename>+=</filename> operator must be used.
+                <note><title>Notes</title>
+                    <itemizedlist>
+                        <listitem><para>
+                            If you add "live" to
+                            <filename>IMAGE_FSTYPES</filename> inside an image
+                            recipe, be sure that you do so prior to the
+                            "inherit image" line of the recipe or the live
+                            image will not build.
+                            </para></listitem>
+                        <listitem><para>
+                            Due to the way the OpenEmbedded build system
+                            processes this variable, you cannot update its
+                            contents by using <filename>_append</filename> or
+                            <filename>_prepend</filename>.
+                            You must use the <filename>+=</filename>
+                            operator to add one or more options to the
+                            <filename>IMAGE_FSTYPES</filename> variable, as
+                            shown in the example following this note.
+                            </para></listitem>
+                    </itemizedlist>
                 </note>
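+
+                <para>
+                    For example, to produce Wic and compressed tar images in
+                    addition to a recipe's existing types, you could use the
+                    following (an illustrative setting, not a requirement of
+                    any particular machine):
+                    <literallayout class='monospaced'>
+     IMAGE_FSTYPES += "wic tar.xz"
+                    </literallayout>
+                </para>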
             </glossdef>
         </glossentry>
@@ -5703,7 +5861,7 @@
                         not be affected by <filename>IMAGE_INSTALL</filename>.
                         For information on creating an initramfs, see the
                         "<ulink url='&YOCTO_DOCS_DEV_URL;#building-an-initramfs-image'>Building an Initial RAM Filesystem (initramfs) Image</ulink>"
-                        section in the Yocto Project Development Manual.
+                        section in the Yocto Project Development Tasks Manual.
                     </note>
                 </para>
 
@@ -6188,7 +6346,6 @@
      jffs2
      jffs2.sum
      multiubi
-     qcow2
      squashfs
      squashfs-lzo
      squashfs-xz
@@ -6199,8 +6356,6 @@
      tar.xz
      ubi
      ubifs
-     vdi
-     vmdk
      wic
      wic.bz2
      wic.gz
@@ -6212,7 +6367,7 @@
                     For more information about these types of images, see
                     <filename>meta/classes/image_types*.bbclass</filename>
                     in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <link linkend='source-directory'>Source Directory</link>.
                 </para>
             </glossdef>
         </glossentry>
@@ -6455,7 +6610,7 @@
                     The default value of this variable, which is set in the
                     <filename>meta/conf/bitbake.conf</filename> configuration
                     file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
+                    <link linkend='source-directory'>Source Directory</link>,
                     is "cpio.gz".
                     The Linux kernel's initramfs mechanism, as opposed to the
                     initial RAM filesystem
@@ -6477,32 +6632,38 @@
                     <link linkend='var-PROVIDES'><filename>PROVIDES</filename></link>
                     name of an image recipe that is used to build an initial
                     RAM filesystem (initramfs) image.
-                    An initramfs provides a temporary root filesystem used for
-                    early system initialization (e.g. loading of modules
-                    needed to locate and mount the "real" root filesystem).
-                    The specified recipe is added as a dependency of the root
-                    filesystem recipe (e.g.
-                    <filename>core-image-sato</filename>).
-                    See the <filename>meta/recipes-core/images/core-image-minimal-initramfs.bb</filename>
-                    recipe in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
-                    for an example initramfs recipe.
-                    To select this recipe to provide the initramfs,
-                    set <filename>INITRAMFS_IMAGE</filename> to
-                    "core-image-minimal-initramfs".
+                    In other words, the <filename>INITRAMFS_IMAGE</filename>
+                    variable causes an additional recipe to be built as
+                    a dependency of whatever root filesystem recipe you
+                    might be using (e.g. <filename>core-image-sato</filename>).
+                    The initramfs image recipe you provide should set
+                    <link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>
+                    to
+                    <link linkend='var-INITRAMFS_FSTYPES'><filename>INITRAMFS_FSTYPES</filename></link>.
+                </para>
+
+                <para>
+                    An initramfs image provides a temporary root filesystem
+                    used for early system initialization (e.g. loading of
+                    modules needed to locate and mount the "real" root
+                    filesystem).
                     <note>
-                        The initramfs image recipe should set
-                        <link linkend='var-IMAGE_FSTYPES'><filename>IMAGE_FSTYPES</filename></link>
-                        to
-                        <link linkend='var-INITRAMFS_FSTYPES'><filename>INITRAMFS_FSTYPES</filename></link>.
+                        See the <filename>meta/recipes-core/images/core-image-minimal-initramfs.bb</filename>
+                        recipe in the
+                        <link linkend='source-directory'>Source Directory</link>
+                        for an example initramfs recipe.
+                        To select this sample recipe as the one built
+                        to provide the initramfs image,
+                        set <filename>INITRAMFS_IMAGE</filename> to
+                        "core-image-minimal-initramfs".
                     </note>
                 </para>
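+
+                <para>
+                    For example, the following settings (a hypothetical
+                    <filename>local.conf</filename> fragment based on the
+                    sample recipe mentioned above) select that recipe and
+                    bundle the resulting initramfs image into the kernel
+                    image:
+                    <literallayout class='monospaced'>
+     INITRAMFS_IMAGE = "core-image-minimal-initramfs"
+     INITRAMFS_IMAGE_BUNDLE = "1"
+                    </literallayout>
+                </para>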
 
                 <para>
                     You can also find more information by referencing the
-                    <filename>meta/poky/conf/local.conf.sample.extended</filename>
+                    <filename>meta-poky/conf/local.conf.sample.extended</filename>
                     configuration file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
+                    <link linkend='source-directory'>Source Directory</link>,
                     the
                     <link linkend='ref-classes-image'><filename>image</filename></link>
                     class, and the
@@ -6513,7 +6674,7 @@
 
                 <para>
                     If <filename>INITRAMFS_IMAGE</filename> is empty, which is
-                    the default, then no initramfs is built.
+                    the default, then no initramfs image is built.
                 </para>
 
                 <para>
@@ -6521,10 +6682,10 @@
                     <link linkend='var-INITRAMFS_IMAGE_BUNDLE'><filename>INITRAMFS_IMAGE_BUNDLE</filename></link>
                     variable, which allows the generated image to be bundled
                     inside the kernel image.
-                    Additionally, for information on creating an initramfs, see
-                    the
+                    Additionally, for information on creating an initramfs
+                    image, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#building-an-initramfs-image'>Building an Initial RAM Filesystem (initramfs) Image</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -6562,7 +6723,7 @@
                     The combined binary is deposited into the
                     <filename>tmp/deploy</filename> directory, which is part
                     of the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <link linkend='build-directory'>Build Directory</link>.
                 </para>
 
                 <para>
@@ -6591,7 +6752,7 @@
                     file for additional information.
                     Also, for information on creating an initramfs, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#building-an-initramfs-image'>Building an Initial RAM Filesystem (initramfs) Image</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -6862,12 +7023,13 @@
                 <para>
                     Values for this variable are set in the kernel's recipe
                     file and the kernel's append file.
-                    For example, if you are using the Yocto Project kernel that
-                    is based on the Linux 3.14 kernel, the kernel recipe file
-                    is the
-                    <filename>meta/recipes-kernel/linux/linux-yocto_3.14.bb</filename>
+                    For example, if you are using the
+                    <filename>linux-yocto_4.12</filename> kernel, the kernel
+                    recipe file is the
+                    <filename>meta/recipes-kernel/linux/linux-yocto_4.12.bb</filename>
                     file.
-                    Following is an example for a kernel recipe file:
+                    <filename>KBRANCH</filename> is set as follows in that
+                    kernel recipe file:
                     <literallayout class='monospaced'>
      KBRANCH ?= "standard/base"
                     </literallayout>
@@ -6877,21 +7039,25 @@
                     This variable is also used from the kernel's append file
                     to identify the kernel branch specific to a particular
                     machine or target hardware.
-                    The kernel's append file is located in the BSP layer for
-                    a given machine.
-                    For example, the kernel append file for the Emenlow BSP is in the
-                    <filename>meta-intel</filename> Git repository and is named
-                    <filename>meta-emenlow/recipes-kernel/linux/linux-yocto_3.14.bbappend</filename>.
-                    Here are the related statements from the append file:
+                    Continuing with the previous kernel example, the kernel's
+                    append file (i.e.
+                    <filename>linux-yocto_4.12.bbappend</filename>) is located
+                    in the BSP layer for a given machine.
+                    For example, the append file for the Beaglebone,
+                    EdgeRouter, and generic versions of both 32- and 64-bit IA
+                    machines (<filename>meta-yocto-bsp</filename>) is named
+                    <filename>meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.12.bbappend</filename>.
+                    Here are the related statements from that append file:
                     <literallayout class='monospaced'>
-     COMPATIBLE_MACHINE_emenlow-noemgd = "emenlow-noemgd"
-     KMACHINE_emenlow-noemgd = "emenlow"
-     KBRANCH_emenlow-noemgd = "standard/base"
-     KERNEL_FEATURES_append_emenlow-noemgd = " features/drm-gma500/drm-gma500.scc"
+     KBRANCH_genericx86  = "standard/base"
+     KBRANCH_genericx86-64  = "standard/base"
+     KBRANCH_edgerouter = "standard/edgerouter"
+     KBRANCH_beaglebone = "standard/beaglebone"
+     KBRANCH_mpc8315e-rdb = "standard/fsl-mpc8315e-rdb"
                     </literallayout>
-                        The <filename>KBRANCH</filename> statement identifies
-                        the kernel branch to use when building for the Emenlow
-                        BSP.
+                        The <filename>KBRANCH</filename> statements identify
+                        the kernel branch to use when building for each
+                        supported BSP.
                 </para>
             </glossdef>
         </glossentry>
@@ -6913,20 +7079,23 @@
                     Typically, when using a <filename>defconfig</filename> to
                     configure a kernel during a build, you place the
                     file in your layer in the same manner as you would
-                    patch files and configuration fragment files (i.e.
+                    place patch files and configuration fragment files (i.e.
                     "out-of-tree").
                     However, if you want to use a <filename>defconfig</filename>
                     file that is part of the kernel tree (i.e. "in-tree"),
                     you can use the
-                    <filename>KBUILD_DEFCONFIG</filename> variable to point
-                    to the <filename>defconfig</filename> file.
+                    <filename>KBUILD_DEFCONFIG</filename> variable, appended
+                    with the
+                    <link linkend='var-KMACHINE'><filename>KMACHINE</filename></link>
+                    value, to point to the <filename>defconfig</filename>
+                    file.
                 </para>
 
                 <para>
                     To use the variable, set it in the append file for your
                     kernel recipe using the following form:
                     <literallayout class='monospaced'>
-     KBUILD_DEFCONFIG_<link linkend='var-KMACHINE'>KMACHINE</link> ?= <replaceable>defconfig_file</replaceable>
+     KBUILD_DEFCONFIG_<replaceable>KMACHINE</replaceable> ?= <replaceable>defconfig_file</replaceable>
                     </literallayout>
                     Here is an example from a "raspberrypi2"
                     <filename>KMACHINE</filename> build that uses a
@@ -7028,41 +7197,50 @@
 
         <glossentry id='var-KERNEL_FEATURES'><glossterm>KERNEL_FEATURES</glossterm>
             <info>
-                KERNEL_FEATURES[doc] = "Includes additional metadata from the Yocto Project kernel Git repository. The metadata you add through this variable includes config fragments and features descriptions."
+                KERNEL_FEATURES[doc] = "Includes additional kernel metadata. The metadata you add through this variable includes config fragments and features descriptions."
             </info>
             <glossdef>
                 <para role="glossdeffirst">
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
-                    Includes additional metadata from the Yocto Project kernel Git repository.
-                    In the OpenEmbedded build system, the default Board Support Packages (BSPs)
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+                    Includes additional kernel metadata.
+                    In the OpenEmbedded build system, the default Board Support
+                    Packages (BSPs)
+                    <link linkend='metadata'>Metadata</link>
                     is provided through
                     the <link linkend='var-KMACHINE'><filename>KMACHINE</filename></link>
-                    and <link linkend='var-KBRANCH'><filename>KBRANCH</filename></link> variables.
-                    You can use the <filename>KERNEL_FEATURES</filename> variable to further
-                    add metadata for all BSPs.
+                    and
+                    <link linkend='var-KBRANCH'><filename>KBRANCH</filename></link>
+                    variables.
+                    You can use the <filename>KERNEL_FEATURES</filename>
+                    variable from within the kernel recipe or kernel append
+                    file to further add metadata for all BSPs or specific
+                    BSPs.
                 </para>
 
                 <para>
-                    The metadata you add through this variable includes config fragments and
-                    features descriptions,
+                    The metadata you add through this variable includes config
+                    fragments and features descriptions,
                     which usually includes patches as well as config fragments.
-                    You typically override the <filename>KERNEL_FEATURES</filename> variable
-                    for a specific machine.
-                    In this way, you can provide validated, but optional, sets of kernel
-                    configurations and features.
+                    You typically override the
+                    <filename>KERNEL_FEATURES</filename> variable for a
+                    specific machine.
+                    In this way, you can provide validated, but optional,
+                    sets of kernel configurations and features.
                 </para>
 
                 <para>
-                    For example, the following adds <filename>netfilter</filename> to all
-                    the Yocto Project kernels and adds sound support to the <filename>qemux86</filename>
-                    machine:
+                    For example, the following excerpt from the
+                    <filename>linux-yocto-rt_4.12</filename> kernel recipe
+                    adds "netfilter" and "taskstats" features to all BSPs
+                    as well as "virtio" configurations to all QEMU machines.
+                    The last two statements add specific configurations to
+                    targeted machine types:
                     <literallayout class='monospaced'>
-     # Add netfilter to all linux-yocto kernels
-     KERNEL_FEATURES="features/netfilter/netfilter.scc"
-
-     # Add sound support to the qemux86 machine
-     KERNEL_FEATURES_append_qemux86=" cfg/sound.scc"
+     KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc features/taskstats/taskstats.scc"
+     KERNEL_FEATURES_append = " ${KERNEL_EXTRA_FEATURES}"
+     KERNEL_FEATURES_append_qemuall=" cfg/virtio.scc"
+     KERNEL_FEATURES_append_qemux86=" cfg/sound.scc cfg/paravirt_kvm.scc"
+     KERNEL_FEATURES_append_qemux86-64=" cfg/sound.scc"
                     </literallayout></para>
             </glossdef>
         </glossentry>
@@ -7739,7 +7917,7 @@
                     <link linkend='var-COPY_LIC_MANIFEST'><filename>COPY_LIC_MANIFEST</filename></link>
                     variable, and the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#providing-license-text'>Providing License Text</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -7838,7 +8016,7 @@
                     defines the search
                     arguments used by the kernel tools to find the appropriate
                     description within the kernel
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+                    <link linkend='metadata'>Metadata</link>
                     with which to build out the sources and configuration.
                     </para>
             </glossdef>
@@ -7942,7 +8120,7 @@
                     Specifies the target device for which the image is built.
                     You define <filename>MACHINE</filename> in the
                     <filename>local.conf</filename> file found in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <link linkend='build-directory'>Build Directory</link>.
                     By default, <filename>MACHINE</filename> is set to
                     "qemux86", which is an x86-based architecture machine to
                     be emulated using QEMU:
@@ -7957,7 +8135,7 @@
                     Thus, when <filename>MACHINE</filename> is set to "qemux86" there
                     exists the corresponding <filename>qemux86.conf</filename> machine
                     configuration file, which can be found in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                    <link linkend='source-directory'>Source Directory</link>
                     in <filename>meta/conf/machine</filename>.
                 </para>
 
@@ -8708,6 +8886,29 @@
             </glossdef>
         </glossentry>
 
+        <glossentry id='var-NOAUTOPACKAGEDEBUG'><glossterm>NOAUTOPACKAGEDEBUG</glossterm>
+            <info>
+                NOAUTOPACKAGEDEBUG[doc] = "Disables auto package from splitting .debug files."
+            </info>
+            <glossdef>
+                <para role="glossdeffirst">
+<!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
+                    Disables auto package from splitting
+                    <filename>.debug</filename> files. If a recipe requires
+                    <filename>FILES_${PN}-dbg</filename> to be set manually,
+                    the <filename>NOAUTOPACKAGEDEBUG</filename> variable can
+                    be defined, allowing you to define the content of the
+                    debug package.
+                    For example:
+                    <literallayout class='monospaced'>
+     NOAUTOPACKAGEDEBUG = "1"
+     FILES_${PN}-dev = "${includedir}/${QT_DIR_NAME}/Qt/*"
+     FILES_${PN}-dbg = "/usr/src/debug/"
+     FILES_${QT_BASE_NAME}-demos-doc = "${docdir}/${QT_DIR_NAME}/qch/qt.qch"
+                    </literallayout>
+                </para>
+            </glossdef>
+        </glossentry>
+
         <glossentry id='var-NOHDD'><glossterm>NOHDD</glossterm>
             <info>
                 NOHDD[doc] = "Causes the OpenEmbedded build system to skip building the .hddimg image."
@@ -8798,7 +8999,7 @@
                 <para>
                     See the <filename>meta/classes/binconfig.bbclass</filename>
                     in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                    <link linkend='source-directory'>Source Directory</link>
                     for details on how this class applies these additional
                     sed command arguments.
                     For general information on the
@@ -8863,7 +9064,7 @@
                     <filename>-c devshell</filename> command-line option).
                     For more information, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#platdev-appdev-devshell'>Using a Development Shell</ulink>" section
-                    in the Yocto Project Development Manual.
+                    in the Yocto Project Development Tasks Manual.
                  </para>
 
                  <para>
@@ -8891,19 +9092,17 @@
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
                     The directory from which the top-level build environment
                     setup script is sourced.
-                    The Yocto Project makes two top-level build environment
-                    setup scripts available:
-                    <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>
-                    and
-                    <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>.
-                    When you run one of these scripts, the
+                    The Yocto Project provides a top-level build environment
+                    setup script:
+                    <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>.
+                    When you run this script, the
                     <filename>OEROOT</filename> variable resolves to the
                     directory that contains the script.
                 </para>
 
                 <para>
                     For additional information on how this variable is used,
-                    see the initialization scripts.
+                    see the initialization script.
                 </para>
             </glossdef>
         </glossentry>
@@ -9086,7 +9285,7 @@
                     This variable, which is set in the
                     <filename>local.conf</filename> configuration file found in
                     the <filename>conf</filename> folder of the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>,
+                    <link linkend='build-directory'>Build Directory</link>,
                     specifies the package manager the OpenEmbedded build system
                     uses when packaging data.
                 </para>
@@ -9179,7 +9378,7 @@
                     You can find out more about debugging using GDB by reading
                     the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#platdev-gdb-remotedebug'>Debugging With the GNU Project Debugger (GDB) Remotely</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -9466,7 +9665,7 @@
                     variable.
                     For information on creating an initramfs, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#building-an-initramfs-image'>Building an Initial RAM Filesystem (initramfs) Image</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -9522,7 +9721,7 @@
                     For information on running post-installation scripts, see
                     the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#new-recipe-post-installation-scripts'>Post-Installation Scripts</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -9654,7 +9853,7 @@
 
         <glossentry id='var-PACKAGECONFIG_CONFARGS'><glossterm>PACKAGECONFIG_CONFARGS</glossterm>
             <info>
-                PACKAGECONFIG_CONFARGS[doc] = "A space-separated list of configuration options generated from PACKAGECONFIG."
+                PACKAGECONFIG_CONFARGS[doc] = "A space-separated list of configuration options generated from the PACKAGECONFIG setting."
             </info>
             <glossdef>
                 <para role="glossdeffirst">
@@ -9789,7 +9988,7 @@
                     For an example of how to use the <filename>PACKAGES_DYNAMIC</filename>
                     variable when you are splitting packages, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#handling-optional-module-packaging'>Handling Optional Module Packaging</ulink>" section
-                    in the Yocto Project Development Manual.
+                    in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -9854,7 +10053,7 @@
                         the recipe as a workaround.
                         For information on addressing race conditions, see the
                         "<ulink url='&YOCTO_DOCS_DEV_URL;#debugging-parallel-make-races'>Debugging Parallel Make Races</ulink>"
-                        section in the Yocto Project Development Manual.
+                        section in the Yocto Project Development Tasks Manual.
                     </note>
                     For single socket systems (i.e. one CPU), you should not
                     have to override this variable to gain optimal parallelism
@@ -9903,7 +10102,8 @@
                         the recipe as a workaround.
                         For information on addressing race conditions, see the
                         "<ulink url='&YOCTO_DOCS_DEV_URL;#debugging-parallel-make-races'>Debugging Parallel Make Races</ulink>"
-                        section in the Yocto Project Development Manual.</para>
+                        section in the Yocto Project Development Tasks Manual.
+                        </para>
                     </note>
                 </para>
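+
+                <para>
+                    For example, one possible workaround (an illustrative
+                    sketch only; other settings can be appropriate) is for
+                    the affected recipe to disable parallelism entirely:
+                    <literallayout class='monospaced'>
+     # Illustrative only: clear the parallel options in the affected recipe
+     PARALLEL_MAKE = ""
+     PARALLEL_MAKEINST = ""
+                    </literallayout>
+                </para>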
             </glossdef>
@@ -10385,7 +10585,8 @@
                     cumbersome and error-prone, an automated solution exists.
                     See the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#working-with-a-pr-service'>Working With a PR Service</ulink>"
-                    section for more information.
+                    section in the Yocto Project Development Tasks Manual
+                    for more information.
                 </para>
             </glossdef>
         </glossentry>
@@ -10409,6 +10610,8 @@
      PREFERRED_PROVIDER_virtual/xserver = "xserver-xf86"
      PREFERRED_PROVIDER_virtual/libgl ?= "mesa"
                     </literallayout>
+                    For more information see:
+                    <link linkend='metadata-virtual-providers'>Metadata (Virtual Providers)</link>
                     <note>
                         If you set <filename>PREFERRED_PROVIDER</filename>
                         for a <filename>virtual/*</filename> item, then any
@@ -10445,7 +10648,7 @@
                     Here are two examples:
                     <literallayout class='monospaced'>
      PREFERRED_VERSION_python = "3.4.0"
-     PREFERRED_VERSION_linux-yocto = "3.19%"
+     PREFERRED_VERSION_linux-yocto = "4.12%"
                     </literallayout>
                     <note>
                         The specified version is matched against
@@ -10480,14 +10683,14 @@
                     to set a machine-specific override.
                     Here is an example:
                     <literallayout class='monospaced'>
-     PREFERRED_VERSION_linux-yocto_qemux86 = "3.4%"
+     PREFERRED_VERSION_linux-yocto_qemux86 = "4.12%"
                     </literallayout>
                     Although not recommended, worst case, you can also use the
                     "forcevariable" override, which is the strongest override
                     possible.
                     Here is an example:
                     <literallayout class='monospaced'>
-     PREFERRED_VERSION_linux-yocto_forcevariable = "3.4%"
+     PREFERRED_VERSION_linux-yocto_forcevariable = "4.12%"
                     </literallayout>
                     <note>
                         The <filename>_forcevariable</filename> override is
@@ -10532,7 +10735,7 @@
                     build system to attempt before any others by adding
                     something like the following to the
                     <filename>local.conf</filename> configuration file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>:
+                    <link linkend='build-directory'>Build Directory</link>:
                     <literallayout class='monospaced'>
      PREMIRRORS_prepend = "\
      git://.*/.* http://www.yoctoproject.org/sources/ \n \
@@ -10710,7 +10913,7 @@
                 <para>
                     The <filename>conf/local.conf.sample.extended</filename>
                     configuration file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                    <link linkend='source-directory'>Source Directory</link>
                     shows how the <filename>PRSERV_HOST</filename> variable is
                     set:
                     <literallayout class='monospaced'>
@@ -11494,7 +11697,7 @@
                 <para role="glossdeffirst">
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
                     The location in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                    <link linkend='build-directory'>Build Directory</link>
                     where unpacked recipe source code resides.
                     By default, this directory is
                     <filename>${</filename><link linkend='var-WORKDIR'><filename>WORKDIR</filename></link><filename>}/${</filename><link linkend='var-BPN'><filename>BPN</filename></link><filename>}-${</filename><link linkend='var-PV'><filename>PV</filename></link><filename>}</filename>,
@@ -11502,7 +11705,7 @@
                     and <filename>${PV}</filename> is the recipe version.
                     If the source tarball extracts the code to a directory
                     named anything other than <filename>${BPN}-${PV}</filename>,
-                    or if the source code if fetched from an SCM such as
+                    or if the source code is fetched from an SCM such as
                     Git or Subversion, then you must set <filename>S</filename>
                     in the recipe so that the OpenEmbedded build system
                     knows where to find the unpacked source.
@@ -11510,7 +11713,7 @@
 
                 <para>
                     As an example, assume a
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                    <link linkend='source-directory'>Source Directory</link>
                     top-level folder named <filename>poky</filename> and a
                     default Build Directory at <filename>poky/build</filename>.
                     In this case, the work directory the build system uses
@@ -12412,7 +12615,7 @@
                     To enable file removal, set the variable to "1" in your
                     <filename>conf/local.conf</filename> configuration file
                     in your:
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <link linkend='build-directory'>Build Directory</link>.
                     <literallayout class='monospaced'>
      SKIP_FILEDEPS = "1"
                     </literallayout>
@@ -12619,7 +12822,7 @@
                         <listitem><para><emphasis><filename>file://</filename> -</emphasis>
                             Fetches files, which are usually files shipped with
                             the
-                            <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>,
+                            <link linkend='metadata'>Metadata</link>,
                             from the local machine.
                             The path is relative to the
                             <link linkend='var-FILESPATH'><filename>FILESPATH</filename></link>
@@ -12819,7 +13022,7 @@
                     The <filename>SRCPV</filename> variable is defined in the
                     <filename>meta/conf/bitbake.conf</filename> configuration
                     file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                    <link linkend='source-directory'>Source Directory</link>
                     as follows:
                     <literallayout class='monospaced'>
      SRCPV = "${@bb.fetch2.get_srcrev(d)}"
@@ -12863,7 +13066,8 @@
                         <link linkend='var-AUTOREV'><filename>AUTOREV</filename></link>
                         variable description and the
                         "<ulink url='&YOCTO_DOCS_DEV_URL;#automatically-incrementing-a-binary-package-revision-number'>Automatically Incrementing a Binary Package Revision Number</ulink>"
-                        section, which is in the Yocto Project Development Manual.
+                        section, which is in the Yocto Project Development
+                        Tasks Manual.
                     </note>
                 </para>
 
@@ -13116,7 +13320,8 @@
                     <link linkend='var-SYSROOT_DIRS'><filename>SYSROOT_DIRS</filename></link>
                     variable and the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#new-sharing-files-between-recipes'>Sharing Files Between Recipes</ulink>"
-                    section for more information.
+                    section in the Yocto Project Development Tasks Manual
+                    for more information.
                     <note>
                         Recipes should never write files directly under
                         the <filename>STAGING_DIR</filename> directory because
@@ -13583,15 +13788,12 @@
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
                     Points to the temporary directory under the work directory
                     (default
-                    <filename>${</filename><link linkend='var-WORKDIR'><filename>WORKDIR</filename></link><filename>}/sysroot-destidir</filename>)
+                    "<filename>${</filename><link linkend='var-WORKDIR'><filename>WORKDIR</filename></link><filename>}/sysroot-destdir</filename>")
                     where the files
                     that will be populated into the sysroot are assembled
                     during the
                     <link linkend='ref-tasks-populate_sysroot'><filename>do_populate_sysroot</filename></link>
                     task.
-                    <literallayout class='monospaced'>
-     SYSROOT_DESTDIR ?= "console=ttyS0,115200"
-                    </literallayout>
                 </para>
             </glossdef>
         </glossentry>
@@ -14026,7 +14228,7 @@
                     See the
                     <filename>meta/conf/machine/include/arm/feature-arm-thumb.inc</filename>
                     file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                    <link linkend='source-directory'>Source Directory</link>
                     for an example.
                 </para>
             </glossdef>
@@ -14292,7 +14494,7 @@
                     The suffix identifies the <filename>libc</filename> variant
                     for building.
                     When you are building for multiple variants with the same
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>,
+                    <link linkend='build-directory'>Build Directory</link>,
                     this mechanism ensures that output for different
                     <filename>libc</filename> variants is kept separate to
                     avoid potential conflicts.
@@ -14450,7 +14652,7 @@
                     <filename>ssh</filename>.
                     You can set this variable to "1" in your
                     <filename>local.conf</filename> file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                    <link linkend='build-directory'>Build Directory</link>
                     to have the OpenEmbedded build system automatically run
                     these tests after an image successfully builds:
                     <literallayout class='monospaced'>
@@ -14459,7 +14661,8 @@
                     For more information on enabling, running, and writing
                     these tests, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing'>Performing Automated Runtime Testing</ulink>"
-                    section in the Yocto Project Development Manual and the
+                    section in the Yocto Project Development Tasks Manual and
+                    the
                     "<link linkend='ref-classes-testimage*'><filename>testimage*.bbclass</filename></link>"
                     section.
                 </para>
@@ -14543,7 +14746,7 @@
                 <para>
                     For more information on testing images, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing'>Performing Automated Runtime Testing</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -14650,8 +14853,8 @@
                             Boots a QEMU image and runs the tests.
                             See the
                             "<ulink url='&YOCTO_DOCS_DEV_URL;#qemu-image-enabling-tests'>Enabling Runtime Tests on QEMU</ulink>"
-                            section in the Yocto Project Development Manual for
-                            more information.
+                            section in the Yocto Project Development Tasks
+                            Manual for more information.
                             </para></listitem>
                         <listitem><para><emphasis>"simpleremote" and "SimpleRemoteTarget":</emphasis>
                             Runs the tests on target hardware that is already
@@ -14683,7 +14886,7 @@
                 <para>
                     For information on running tests on hardware, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#hardware-image-enabling-tests'>Enabling Runtime Tests on Hardware</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -14772,7 +14975,7 @@
                 <para>
                     For more information on testing images, see the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#performing-automated-runtime-testing'>Performing Automated Runtime Testing</ulink>"
-                    section in the Yocto Project Development Manual.
+                    section in the Yocto Project Development Tasks Manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -14818,7 +15021,7 @@
                     files (other than the shared state cache).
                     By default, the <filename>TMPDIR</filename> variable points
                     to <filename>tmp</filename> within the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <link linkend='build-directory'>Build Directory</link>.
                 </para>
 
                 <para>
@@ -14826,7 +15029,7 @@
                     than the default, you can uncomment and edit the following
                     statement in the
                     <filename>conf/local.conf</filename> file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>:
+                    <link linkend='source-directory'>Source Directory</link>:
                     <literallayout class='monospaced'>
      #TMPDIR = "${TOPDIR}/tmp"
                     </literallayout>
@@ -14872,8 +15075,9 @@
                     variable, but you can add additional packages to the list.
                     See the
                     "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-adding-individual-packages'>Adding Individual Packages to the Standard SDK</ulink>"
-                    section in the Yocto Project Software Development Kit (SDK)
-                    Developer's Guide for more information.
+                    section in the Yocto Project Application Development and
+                    the Extensible Software Development Kit (eSDK) manual
+                    for more information.
                 </para>
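+
+                <para>
+                    As an illustrative sketch (assuming the
+                    <filename>TOOLCHAIN_HOST_TASK</filename> variable and a
+                    hypothetical <filename>nativesdk-strace</filename>
+                    package), such an addition in
+                    <filename>local.conf</filename> might look like the
+                    following:
+                    <literallayout class='monospaced'>
+     # Hypothetical package; appends to the host portion of the SDK
+     TOOLCHAIN_HOST_TASK_append = " nativesdk-strace"
+                    </literallayout>
+                </para>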
 
                 <para>
@@ -14883,7 +15087,8 @@
                     section.
                     For information on setting up a cross-development
                     environment, see the
-                    <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-manual'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
+                    <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</ulink>
+                    manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -14929,8 +15134,9 @@
                     part of the SDK that runs on the target.
                     See the
                     "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-adding-individual-packages'>Adding Individual Packages to the Standard SDK</ulink>"
-                    section in the Yocto Project Software Development Kit (SDK)
-                    Developer's Guide for more information.
+                    section in the Yocto Project Application Development and
+                    the Extensible Software Development Kit (eSDK) manual for
+                    more information.
                 </para>
 
                 <para>
@@ -14940,7 +15146,8 @@
                     section.
                     For information on setting up a cross-development
                     environment, see the
-                    <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-manual'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
+                    <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</ulink>
+                    manual.
                 </para>
             </glossdef>
         </glossentry>
@@ -14953,12 +15160,10 @@
                 <para role="glossdeffirst">
 <!--                <para role="glossdeffirst"><imagedata fileref="figures/define-generic.png" /> -->
                     The top-level
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <link linkend='build-directory'>Build Directory</link>.
                     BitBake automatically sets this variable when you
-                    initialize your build environment using either
-                    <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>
-                    or
-                    <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>.
+                    initialize your build environment using
+                    <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>.
                 </para>
             </glossdef>
         </glossentry>
@@ -15010,7 +15215,7 @@
                     For example, the
                     <filename>meta/conf/machine/include/mips/README</filename>
                     file in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                    <link linkend='source-directory'>Source Directory</link>
                     provides information for <filename>TUNE_ARCH</filename>
                     specific to the <filename>mips</filename> architecture.
                 </para>
@@ -15293,7 +15498,7 @@
                 <para>
                     Known tuning conflicts are specified in the machine include
                     files in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <link linkend='source-directory'>Source Directory</link>.
                     Here is an example from the
                     <filename>meta/conf/machine/include/mips/arch-mips.inc</filename>
                     include file that lists the "o32" and "n64" features as
@@ -15325,7 +15530,7 @@
 
                 <para>
                     See the machine include files in the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                    <link linkend='source-directory'>Source Directory</link>
                     for these features.
                 </para>
             </glossdef>
@@ -15662,7 +15867,7 @@
                 <para>
                     See the
                     "<ulink url='&YOCTO_DOCS_DEV_URL;#selecting-dev-manager'>Selecting a Device Manager</ulink>"
-                    section in the Yocto Project Development Manual for
+                    section in the Yocto Project Development Tasks Manual for
                     information on how to use this variable.
                 </para>
             </glossdef>
@@ -15717,7 +15922,7 @@
                     For more information, see
                     <filename>meta-poky/conf/local.conf.sample</filename> in
                     the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                    <link linkend='source-directory'>Source Directory</link>.
                 </para>
             </glossdef>
         </glossentry>
@@ -16056,12 +16261,13 @@
                     kickstart file that is used by the OpenEmbedded build
                     system to create a partitioned image
                     (<replaceable>image</replaceable><filename>.wic</filename>).
-                    For information on how to create a
-                    partitioned image, see the
-                    "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-wic-images-oe'>Creating Partitioned Images</ulink>"
-                    section.
+                    For information on how to create a partitioned image, see
+                    the
+                    "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-partitioned-images-using-wic'>Creating Partitioned Images Using Wic</ulink>"
+                    section in the Yocto Project Development Tasks Manual.
                     For details on the kickstart file format, see the
-                    "<ulink url='&YOCTO_DOCS_DEV_URL;#openembedded-kickstart-wks-reference'>OpenEmbedded Kickstart (<filename>.wks</filename>) Reference</ulink>.
+                    "<link linkend='openembedded-kickstart-wks-reference'>OpenEmbedded Kickstart (<filename>.wks</filename>) Reference</link>
+                    Chapter.
                 </para>
             </glossdef>
         </glossentry>
diff --git a/import-layers/yocto-poky/documentation/ref-manual/resources.xml b/import-layers/yocto-poky/documentation/ref-manual/resources.xml
index 8299f9f..d59bea2 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/resources.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/resources.xml
@@ -3,25 +3,71 @@
 [<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
 
 <chapter id='resources'>
-<title>Contributing to the Yocto Project</title>
+<title>Contributions and Additional Information</title>
 
 <section id='resources-intro'>
     <title>Introduction</title>
     <para>
-        The Yocto Project team is happy for people to experiment with the Yocto Project.
-        A number of places exist to find help if you run into difficulties or find bugs.
-        To find out how to download source code,
-        see the "<ulink url='&YOCTO_DOCS_DEV_URL;#local-yp-release'>Yocto Project Release</ulink>"
-        section in the Yocto Project Development Manual.
+        The Yocto Project team is happy for people to experiment with the
+        Yocto Project.
+        A number of places exist to find help if you run into difficulties
+        or find bugs.
+        This chapter presents information about contributing to and
+        participating in the Yocto Project.
+    </para>
+</section>
+
+<section id='resources-contributions'>
+    <title>Contributions</title>
+
+    <para>
+        The Yocto Project gladly accepts contributions.
+        You can submit changes to the project either by creating and sending
+        pull requests,
+        or by submitting patches through email.
+        For information on how to do both as well as information on how
+        to identify the maintainer for each area of code, see the
+        "<ulink url='&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change'>Submitting a Change to the Yocto Project</ulink>"
+        section in the Yocto Project Development Tasks Manual.
     </para>
 </section>
 
 <section id='resources-bugtracker'>
-    <title>Tracking Bugs</title>
+    <title>Yocto Project Bugzilla</title>
 
     <para>
-        If you find problems with the Yocto Project, you should report them using the
-        Bugzilla application at <ulink url='&YOCTO_BUGZILLA_URL;'></ulink>.
+        The Yocto Project uses its own implementation of
+        <ulink url='http://www.bugzilla.org/about/'>Bugzilla</ulink> to track
+        defects (bugs).
+        Implementations of Bugzilla work well for group development because
+        they track bugs and code changes, can be used to communicate changes
+        and problems with developers, can be used to submit and review patches,
+        and can be used to manage quality assurance.
+    </para>
+
+    <para>
+        Sometimes it is helpful to submit, investigate, or track a bug against
+        the Yocto Project itself (e.g. when discovering an issue with some
+        component of the build system that acts contrary to the documentation
+        or your expectations).
+    </para>
+
+    <para>
+        A general procedure and guidelines exist for when you use Bugzilla to
+        submit a bug.
+        For information on how to use Bugzilla to submit a bug against the
+        Yocto Project, see the following:
+        <itemizedlist>
+            <listitem><para>
+                The
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#submitting-a-defect-against-the-yocto-project'>Submitting a Defect Against the Yocto Project</ulink>"
+                section in the Yocto Project Development Tasks Manual.
+                </para></listitem>
+            <listitem><para>
+                The Yocto Project
+                <ulink url='&YOCTO_WIKI_URL;/wiki/Bugzilla_Configuration_and_Bug_Tracking'>Bugzilla wiki page</ulink>
+                </para></listitem>
+        </itemizedlist>
     </para>
 </section>
 
@@ -43,19 +89,19 @@
                 Discussion mailing list about OpenEmbedded.</para></listitem>
             <listitem><para><ulink url='&OE_LISTS_URL;/listinfo/bitbake-devel'></ulink> -
                 Discussion mailing list about the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>
+                <link linkend='bitbake-term'>BitBake</link>
                 build tool.</para></listitem>
             <listitem><para><ulink url='&YOCTO_LISTS_URL;/listinfo/poky'></ulink> -
                 Discussion mailing list about
-                <ulink url='&YOCTO_DOCS_DEV_URL;#poky'>Poky</ulink>.
+                <link linkend='poky'>Poky</link>.
                 </para></listitem>
             <listitem><para><ulink url='&YOCTO_LISTS_URL;/listinfo/yocto-announce'></ulink> -
                 Mailing list to receive official Yocto Project release and milestone
                 announcements.</para></listitem>
         </itemizedlist>
     </para>
-    For more Yocto Project-related mailing lists, see the Yocto Project community mailing lists page
-    <ulink url='&YOCTO_HOME_URL;/tools-resources/community/mailing-lists'>here</ulink>.
+    For more Yocto Project-related mailing lists, see the
+    <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>.
 </section>
 
 <section id='resources-irc'>
@@ -70,53 +116,160 @@
     </para>
 </section>
 
-<section id='resources-links'>
-    <title>Links</title>
+<section id='resources-links-and-related-documentation'>
+    <title>Links and Related Documentation</title>
 
     <para>
-        Here is a list of resources you will find helpful:
+        Here is a list of resources you might find helpful:
         <itemizedlist>
-            <listitem><para><emphasis>
+            <listitem><para>
+                <emphasis>
                 <ulink url='&YOCTO_HOME_URL;'>The Yocto Project website</ulink>:
-                </emphasis> The home site for the Yocto
-                Project.</para></listitem>
-<!--
-            <listitem><para><emphasis>
-                <ulink url='http://www.intel.com/'>Intel Corporation</ulink>:</emphasis>
-                The company that acquired OpenedHand in 2008 and began
-                development on the Yocto Project.</para></listitem>
--->
-            <listitem><para><emphasis>
-                <ulink url='&OE_HOME_URL;'>OpenEmbedded</ulink>:</emphasis>
-                The upstream, generic, embedded distribution used as the basis
-                for the build system in the Yocto Project.
-                Poky derives from and contributes back to the OpenEmbedded
-                project.</para></listitem>
-            <listitem><para><emphasis>
+                </emphasis> The home site for the Yocto Project.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_WIKI_URL;/wiki/Main_Page'>The Yocto Project Main Wiki Page</ulink>:
+                </emphasis>
+                The main wiki page for the Yocto Project.
+                This page contains information about project planning,
+                release engineering, QA &amp; automation, a reference
+                site map, and other resources related to the Yocto Project.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&OE_HOME_URL;'>OpenEmbedded</ulink>:
+                </emphasis>
+                The build system used by the Yocto Project.
+                This project is the upstream, generic, embedded distribution
+                from which the Yocto Project derives its build system (Poky)
+                and to which it contributes.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
                 <ulink url='http://www.openembedded.org/wiki/BitBake'>
-                BitBake</ulink>:</emphasis> The tool used to process metadata.</para></listitem>
+                BitBake</ulink>:
+                </emphasis> The tool used to process metadata.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_DOCS_BB_URL;'>BitBake User Manual:</ulink>
+                </emphasis>
+                A comprehensive guide to the BitBake tool.
+                If you want information on BitBake, see this manual.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_DOCS_QS_URL;'>Yocto Project Quick Start</ulink>:
+                </emphasis>
+                This short document lets you get started
+                with the Yocto Project and quickly begin building an image.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_DOCS_DEV_URL;'>Yocto Project Development Tasks Manual</ulink>:
+                </emphasis>
+                This manual is a "how-to" guide that presents procedures
+                useful to both application and system developers who use the
+                Yocto Project.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</ulink>
+                manual:</emphasis>
+                This guide provides information that lets you get going
+                with the standard or extensible SDK.
+                An SDK, with its cross-development toolchains, allows you
+                to develop projects inside or outside of the Yocto Project
+                environment.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_DOCS_BSP_URL;'>Yocto Project Board Support Package (BSP) Developer's Guide</ulink>:
+                </emphasis>
+                This guide defines the structure for BSP components.
+                Having a commonly understood structure encourages
+                standardization.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;'>Yocto Project Linux Kernel Development Manual</ulink>:
+                </emphasis>
+                This manual describes how to work with Linux Yocto kernels as
+                well as provides a bit of conceptual information on the
+                construction of the Yocto Linux kernel tree.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_DOCS_PROF_URL;'>Yocto Project Profiling and Tracing Manual</ulink>:
+                </emphasis>
+                This manual presents a set of common and generally useful
+                tracing and profiling schemes along with their applications
+                (as appropriate) to each tool.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-appendix-latest-yp-eclipse-plug-in'>Eclipse IDE Yocto Plug-in</ulink>:
+                </emphasis>
+                Instructions that demonstrate how an application developer
+                uses the Eclipse Yocto Project Plug-in feature within
+                the Eclipse IDE.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_WIKI_URL;/wiki/FAQ'>FAQ</ulink>:
+                </emphasis>
+                A list of commonly asked questions and their answers.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_RELEASE_NOTES;'>Release Notes</ulink>:
+                </emphasis>
+                Features, updates and known issues for the current
+                release of the Yocto Project.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_DOCS_TOAST_URL;'>Toaster User Manual</ulink>:
+                </emphasis>
+                This manual introduces and describes how to set up and use
+                Toaster.
+                Toaster is an Application Programming Interface (API) and
+                web-based interface to the
+                <link linkend='build-system-term'>OpenEmbedded Build System</link>
+                (which uses BitBake) that reports build information.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_BUGZILLA_URL;'>Bugzilla</ulink>:
+                </emphasis>
+                The bug tracking application the Yocto Project uses.
+                If you find problems with the Yocto Project, you should report
+                them using this application.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='&YOCTO_WIKI_URL;/wiki/Bugzilla_Configuration_and_Bug_Tracking'>Bugzilla Configuration and Bug Tracking Wiki Page</ulink>:
+                </emphasis>
+                Information on how to get set up and use the Yocto Project
+                implementation of Bugzilla for logging and tracking Yocto
+                Project defects.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Internet Relay Chat (IRC):</emphasis>
+                Two IRC channels on freenode are available
+                for Yocto Project and Poky discussions: <filename>#yocto</filename> and
+                <filename>#poky</filename>, respectively.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>
+                <ulink url='http://wiki.qemu.org/Index.html'>Quick EMUlator (QEMU)</ulink>:
+                </emphasis>
+                An open-source machine emulator and virtualizer.
+                </para></listitem>
         </itemizedlist>
-        For more links, see the
-        "<ulink url='&YOCTO_DOCS_DEV_URL;#other-information'>Other Information</ulink>"
-        section in the Yocto Project Development Manual.
     </para>
 </section>
-
-<section id='resources-contributions'>
-    <title>Contributions</title>
-
-    <para>
-        The Yocto Project gladly accepts contributions.
-        You can submit changes to the project either by creating and sending
-        pull requests,
-        or by submitting patches through email.
-        For information on how to do both as well as information on how
-        to identify the maintainer for each area of code, see the
-        "<ulink url='&YOCTO_DOCS_DEV_URL;#how-to-submit-a-change'>How to Submit a Change</ulink>"
-        section in the Yocto Project Development Manual.
-    </para>
-</section>
-
 </chapter>
 <!--
 vim: expandtab tw=80 ts=4
diff --git a/import-layers/yocto-poky/documentation/ref-manual/technical-details.xml b/import-layers/yocto-poky/documentation/ref-manual/technical-details.xml
index 1964a9a..e9e76e4 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/technical-details.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/technical-details.xml
@@ -18,7 +18,7 @@
 
     <para>
         The
-        <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>
+        <link linkend='bitbake-term'>BitBake</link>
         task executor together with various types of configuration files form
         the OpenEmbedded Core.
         This section overviews these components by describing their use and
@@ -48,15 +48,16 @@
         to each data source as a layer.
         For information on layers, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#understanding-and-creating-layers'>Understanding and
-        Creating Layers</ulink>" section of the Yocto Project Development Manual.
+        Creating Layers</ulink>" section of the Yocto Project Development
+        Tasks Manual.
     </para>
 
     <para>
         Following are some brief details on these core components.
         For additional information on how these components interact during
         a build, see the
-        "<link linkend='closer-look'>A Closer Look at the Yocto Project Development Environment</link>"
-        Chapter.
+        "<link linkend='development-concepts'>Development Concepts</link>"
+        section.
     </para>
 
     <section id='usingpoky-components-bitbake'>
@@ -65,7 +66,7 @@
         <para>
             BitBake is the tool at the heart of the OpenEmbedded build system
             and is responsible for parsing the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>,
+            <link linkend='metadata'>Metadata</link>,
             generating a list of tasks from it, and then executing those tasks.
         </para>
 
@@ -146,13 +147,70 @@
         </para>
     </section>
 
+    <section id='metadata-virtual-providers'>
+        <title>Metadata (Virtual Providers)</title>
+
+        <para>
+            Prior to the build, if you know that several different recipes
+            provide the same functionality, you can use a virtual provider
+            (i.e. <filename>virtual/*</filename>) as a placeholder for the
+            actual provider.
+            The actual provider would be determined at build
+            time.
+            In this case, you should add <filename>virtual/*</filename>
+            to <link linkend='var-DEPENDS'><filename>DEPENDS</filename></link>,
+            rather than listing the specified provider.
+            You would select the actual provider by setting the
+            <link linkend='var-PREFERRED_PROVIDER'><filename>PREFERRED_PROVIDER</filename></link>
+            variable (i.e. <filename>PREFERRED_PROVIDER_virtual/*</filename>)
+            in the build's configuration file (e.g.
+            <filename>poky/build/conf/local.conf</filename>).
+            <note>
+                Any recipe that PROVIDES a <filename>virtual/*</filename> item
+                that is ultimately not selected through
+                <filename>PREFERRED_PROVIDER</filename> does not get built.
+                Preventing these recipes from building is usually the desired
+                behavior since this mechanism's purpose is to select between
+                mutually exclusive alternative providers.
+            </note>
+        </para>
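+
+        <para>
+            For example, here is a minimal sketch of this mechanism using
+            <filename>virtual/libgl</filename> (the "mesa" selection matches
+            the example shown for the
+            <link linkend='var-PREFERRED_PROVIDER'><filename>PREFERRED_PROVIDER</filename></link>
+            variable):
+            <literallayout class='monospaced'>
+     # In a recipe that needs a GL implementation:
+     DEPENDS += "virtual/libgl"
+
+     # In the build's local.conf, select the actual provider:
+     PREFERRED_PROVIDER_virtual/libgl ?= "mesa"
+            </literallayout>
+        </para>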
+
+        <para>
+            The following lists specific examples of virtual providers:
+            <itemizedlist>
+                <listitem><para>
+                    <filename>virtual/mesa</filename>:
+                    Provides <filename>gbm.pc</filename>.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>virtual/egl</filename>:
+                    Provides <filename>egl.pc</filename> and possibly
+                    <filename>wayland-egl.pc</filename>.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>virtual/libgl</filename>:
+                    Provides <filename>gl.pc</filename> (i.e. libGL).
+                    </para></listitem>
+                <listitem><para>
+                    <filename>virtual/libgles1</filename>:
+                    Provides <filename>glesv1_cm.pc</filename>
+                    (i.e. libGLESv1_CM).
+                    </para></listitem>
+                <listitem><para>
+                    <filename>virtual/libgles2</filename>:
+                    Provides <filename>glesv2.pc</filename> (i.e. libGLESv2).
+                    </para></listitem>
+            </itemizedlist>
+        </para>
+    </section>
+
     <section id='usingpoky-components-classes'>
         <title>Classes</title>
 
         <para>
             Class files (<filename>.bbclass</filename>) contain information that
             is useful to share between
-            <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink> files.
+            <link linkend='metadata'>Metadata</link> files.
             An example is the
             <link linkend='ref-classes-autotools'><filename>autotools</filename></link>
             class, which contains common settings for any application that
@@ -172,7 +230,7 @@
             distribution configuration options, compiler tuning options, general common configuration
             options, and user configuration options in <filename>local.conf</filename>, which is found
             in the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+            <link linkend='build-directory'>Build Directory</link>.
         </para>
     </section>
 </section>
@@ -183,11 +241,12 @@
     <para>
         The Yocto Project does most of the work for you when it comes to
         creating
-        <ulink url='&YOCTO_DOCS_DEV_URL;#cross-development-toolchain'>cross-development toolchains</ulink>.
+        <link linkend='cross-development-toolchain'>cross-development toolchains</link>.
         This section provides some technical background on how
         cross-development toolchains are created and used.
         For more information on toolchains, you can also see the
-        <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
+        <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</ulink>
+        manual.
     </para>
 
     <para>
@@ -372,8 +431,8 @@
         For information on advantages gained when building a
         cross-development toolchain installer, see the
         "<ulink url='&YOCTO_DOCS_SDK_URL;#sdk-building-an-sdk-installer'>Building an SDK Installer</ulink>"
-        section in the Yocto Project Software Development Kit (SDK) Developer's
-        Guide.
+        section in the Yocto Project Application Development and the
+        Extensible Software Development Kit (eSDK) manual.
     </note>
 </section>
 
@@ -433,7 +492,7 @@
         works with packages and can
         track incrementing <filename>PR</filename> information, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#automatically-incrementing-a-binary-package-revision-number'>Automatically Incrementing a Binary Package Revision Number</ulink>"
-        section.
+        section in the Yocto Project Development Tasks Manual.
     </note>
 
     <para>
@@ -562,7 +621,7 @@
             code.
             However, there is still the question of a task's indirect inputs - the
             things that were already built and present in the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+            <link linkend='build-directory'>Build Directory</link>.
             The checksum (or signature) for a particular task needs to add the hashes
             of all the tasks on which the particular task depends.
             Choosing which dependencies to add is a policy decision.
@@ -598,11 +657,12 @@
             The code in <filename>meta/lib/oe/sstatesig.py</filename> shows two examples
             of this and also illustrates how you can insert your own policy into the system
             if so desired.
-            This file defines the two basic signature generators <filename>OE-Core</filename>
-            uses:  "OEBasic" and "OEBasicHash".
+            This file defines the two basic signature generators
+            <link linkend='oe-core'>OE-Core</link> uses:  "OEBasic" and
+            "OEBasicHash".
             By default, there is a dummy "noop" signature handler enabled in BitBake.
             This means that behavior is unchanged from previous versions.
-            <filename>OE-Core</filename> uses the "OEBasicHash" signature handler by default
+            OE-Core uses the "OEBasicHash" signature handler by default
             through this setting in the <filename>bitbake.conf</filename> file:
             <literallayout class='monospaced'>
      BB_SIGNATURE_HANDLER ?= "OEBasicHash"
@@ -610,7 +670,7 @@
             The "OEBasicHash" <filename>BB_SIGNATURE_HANDLER</filename> is the same as the
             "OEBasic" version but adds the task hash to the stamp files.
             This results in any
-            <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+            <link linkend='metadata'>Metadata</link>
             change that changes the task hash, automatically
             causing the task to be run again.
             This removes the need to bump <link linkend='var-PR'><filename>PR</filename></link>
@@ -1195,6 +1255,213 @@
     </para>
 </section>
 
+<section id='wic-plug-ins-interface'>
+    <title>Wic Plug-Ins Interface</title>
+
+    <para>
+        You can extend and specialize Wic functionality by using
+        Wic plug-ins.
+        This section explains the Wic plug-in interface.
+        For information on using Wic in general, see the
+        "<ulink url='&YOCTO_DOCS_DEV_URL;#creating-partitioned-images-using-wic'>Creating Partitioned Images Using Wic</ulink>"
+        section in the Yocto Project Development Tasks Manual.
+        <note>
+            Wic plug-ins consist of "source" and "imager" plug-ins.
+            Imager plug-ins are beyond the scope of this section.
+        </note>
+    </para>
+
+    <para>
+        Source plug-ins provide a mechanism to customize partition
+        content during the Wic image generation process.
+        You can use source plug-ins to map values that you specify
+        using <filename>--source</filename> commands in kickstart
+        files (i.e. <filename>*.wks</filename>) to a plug-in
+        implementation used to populate a given partition.
+        <note>
+            If you use plug-ins that have build-time dependencies
+            (e.g. native tools, bootloaders, and so forth)
+            when building a Wic image, you need to specify those
+            dependencies using the
+            <link linkend='var-WKS_FILE_DEPENDS'><filename>WKS_FILE_DEPENDS</filename></link>
+            variable.
+        </note>
+    </para>
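+
+    <para>
+        For example, an image whose kickstart file needs native tools at
+        image-creation time might declare the dependencies as follows
+        (the package names here are purely illustrative):
+        <literallayout class='monospaced'>
+     # Illustrative native-tool dependencies for the .wks file
+     WKS_FILE_DEPENDS = "dosfstools-native mtools-native"
+        </literallayout>
+    </para>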
+
+    <para>
+        Source plug-ins are subclasses defined in plug-in files.
+        As shipped, the Yocto Project provides several plug-in
+        files.
+        You can see the source plug-in files that ship with the
+        Yocto Project
+        <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/tree/scripts/lib/wic/plugins/source'>here</ulink>.
+        Each of these plug-in files contains source plug-ins that
+        are designed to populate a specific Wic image partition.
+    </para>
+
+    <para>
+        Source plug-ins are subclasses of the
+        <filename>SourcePlugin</filename> class, which is
+        defined in the
+        <filename>poky/scripts/lib/wic/pluginbase.py</filename>
+        file.
+        For example, the <filename>BootimgEFIPlugin</filename>
+        source plug-in found in the
+        <filename>bootimg-efi.py</filename> file is a subclass of
+        the <filename>SourcePlugin</filename> class, which is found
+        in the <filename>pluginbase.py</filename> file.
+    </para>
+
+    <para>
+        You can also implement source plug-ins in a layer outside
+        of the Source Repositories (external layer).
+        To do so, be sure that your plug-in files are located in
+        a directory whose path is
+        <filename>scripts/lib/wic/plugins/source/</filename>
+        within your external layer.
+        When the plug-in files are located there, the source
+        plug-ins they contain are made available to Wic.
+    </para>
+
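+    <para>
+        For illustration, an external layer that provides a custom source
+        plug-in might be laid out as follows.
+        The layer name <filename>meta-mylayer</filename> and the plug-in
+        file name <filename>my-plugin.py</filename> are hypothetical:
+        <literallayout class='monospaced'>
+     meta-mylayer/
+         scripts/
+             lib/
+                 wic/
+                     plugins/
+                         source/
+                             my-plugin.py
+        </literallayout>
+    </para>
+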
+    <para>
+        When the Wic implementation needs to invoke a
+        partition-specific implementation, it looks for the plug-in
+        with the same name as the <filename>--source</filename>
+        parameter used in the kickstart file given to that
+        partition.
+        For example, if the partition is set up using the following
+        command in a kickstart file:
+        <literallayout class='monospaced'>
+     part /boot --source bootimg-pcbios --ondisk sda --label boot --active --align 1024
+        </literallayout>
+        The methods defined as class members of the matching
+        source plug-in (i.e. <filename>bootimg-pcbios</filename>)
+        in the <filename>bootimg-pcbios.py</filename> plug-in file
+        are used.
+    </para>
+
+    <para>
+        To be more concrete, here is the corresponding plug-in
+        definition from the <filename>bootimg-pcbios.py</filename>
+        file for the previous command along with an example
+        method called by the Wic implementation when it needs to
+        prepare a partition using an implementation-specific
+        function:
+        <literallayout class='monospaced'>
+     bootimg-pcbios.py
+                  .
+                  .
+                  .
+        class BootimgPcbiosPlugin(SourcePlugin):
+        """
+        Create MBR boot partition and install syslinux on it.
+        """
+
+        name = 'bootimg-pcbios'
+                  .
+                  .
+                  .
+        @classmethod
+        def do_prepare_partition(cls, part, source_params, creator, cr_workdir,
+                                 oe_builddir, bootimg_dir, kernel_dir,
+                                 rootfs_dir, native_sysroot):
+            """
+            Called to do the actual content population for a partition i.e. it
+            'prepares' the partition to be incorporated into the image.
+            In this case, prepare content for legacy bios boot partition.
+            """
+                  .
+                  .
+                  .
+        </literallayout>
+        If a subclass (plug-in) itself does not implement a
+        particular function, Wic locates and uses the default
+        version in the superclass.
+        It is for this reason that all source plug-ins are derived
+        from the <filename>SourcePlugin</filename> class.
+    </para>
+
+    <para>
+        The <filename>SourcePlugin</filename> class defined in
+        the <filename>pluginbase.py</filename> file defines
+        a set of methods that source plug-ins can implement or
+        override.
+        Any plug-ins (subclass of
+        <filename>SourcePlugin</filename>) that do not implement
+        a particular method inherit the implementation of the
+        method from the <filename>SourcePlugin</filename> class.
+        For details, see the
+        <filename>SourcePlugin</filename> class in the
+        <filename>pluginbase.py</filename> file.
+    </para>
+
+    <para>
+        The following list describes the methods implemented in the
+        <filename>SourcePlugin</filename> class:
+        <itemizedlist>
+            <listitem><para>
+                <emphasis><filename>do_prepare_partition()</filename>:</emphasis>
+                Called to populate a partition with actual content.
+                In other words, the method prepares the final
+                partition image that is incorporated into the
+                disk image.
+                </para></listitem>
+            <listitem><para>
+                <emphasis><filename>do_configure_partition()</filename>:</emphasis>
+                Called before
+                <filename>do_prepare_partition()</filename> to
+                create custom configuration files for a partition
+                (e.g. syslinux or grub configuration files).
+                </para></listitem>
+            <listitem><para>
+                <emphasis><filename>do_install_disk()</filename>:</emphasis>
+                Called after all partitions have been prepared and
+                assembled into a disk image.
+                This method provides a hook to allow finalization
+                of a disk image (e.g. writing an MBR).
+                </para></listitem>
+            <listitem><para>
+                <emphasis><filename>do_stage_partition()</filename>:</emphasis>
+                Special content-staging hook called before
+                <filename>do_prepare_partition()</filename>.
+                This method is normally empty.</para>
+
+                <para>Typically, a partition just uses the passed-in
+                parameters (e.g. the unmodified value of
+                <filename>bootimg_dir</filename>).
+                However, in some cases, things might need to be
+                more tailored.
+                As an example, certain files might additionally
+                need to be taken from
+                <filename>bootimg_dir + /boot</filename>.
+                This hook allows those files to be staged in a
+                customized fashion.
+                <note>
+                    <filename>get_bitbake_var()</filename>
+                    allows you to access non-standard variables
+                    that you might want to use for this
+                    behavior.
+                </note>
+                </para></listitem>
+        </itemizedlist>
+    </para>
+
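+    <para>
+        The following skeleton is an illustrative sketch only.
+        The plug-in name "my-part" and the file name
+        <filename>my-part.py</filename> are hypothetical and do not ship
+        with the Yocto Project.
+        The skeleton simply shows how the methods in the previous list fit
+        together; see the shipped plug-in files for complete, working
+        implementations:
+        <literallayout class='monospaced'>
+     # my-part.py - hypothetical skeleton of a Wic source plug-in
+     from wic.pluginbase import SourcePlugin
+
+     class MyPartPlugin(SourcePlugin):
+         """Populate a partition with custom content (illustration only)."""
+
+         # Must match the value given to --source in the .wks file.
+         name = 'my-part'
+
+         @classmethod
+         def do_prepare_partition(cls, part, source_params, creator, cr_workdir,
+                                  oe_builddir, bootimg_dir, kernel_dir,
+                                  rootfs_dir, native_sysroot):
+             # Stage or generate the partition content here.  Any
+             # --sourceparams values from the kickstart file arrive in the
+             # source_params dictionary.  A real plug-in creates the final
+             # partition image from the staged content at this point.
+             pass
+        </literallayout>
+        Because this skeleton does not override the other methods, the
+        default implementations from <filename>SourcePlugin</filename>
+        are used.
+    </para>
+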
+    <para>
+        You can extend the source plug-in mechanism.
+        To add more hooks, create more source plug-in methods
+        within <filename>SourcePlugin</filename> and the
+        corresponding derived subclasses.
+        The code that calls the plug-in methods uses the
+        <filename>plugin.get_source_plugin_methods()</filename>
+        function to find the method or methods needed by the call.
+        Retrieval of those methods is accomplished by filling a dict
+        with keys that contain the method names of interest.
+        On success, these keys are filled in with the actual
+        methods.
+        See the Wic implementation for examples and details.
+    </para>
+</section>
+
 <section id='x32'>
     <title>x32</title>
 
@@ -1310,7 +1577,7 @@
             The Wayland protocol libraries and the reference Weston compositor
             ship as integrated packages in the <filename>meta</filename> layer
             of the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+            <link linkend='source-directory'>Source Directory</link>.
             Specifically, you can find the recipes that build both Wayland
             and Weston at <filename>meta/recipes-graphics/wayland</filename>.
         </para>
@@ -1425,8 +1692,8 @@
     <para>
         For information that can help you maintain compliance with various open
         source licensing during the lifecycle of the product, see the
-        "<ulink url='&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle'>Maintaining Open Source License Compliance During Your Project's Lifecycle</ulink>" section
-        in the Yocto Project Development Manual.
+        "<ulink url='&YOCTO_DOCS_DEV_URL;#maintaining-open-source-license-compliance-during-your-products-lifecycle'>Maintaining Open Source License Compliance During Your Project's Lifecycle</ulink>"
+        section in the Yocto Project Development Tasks Manual.
     </para>
 
     <section id="usingpoky-configuring-LIC_FILES_CHKSUM">
diff --git a/import-layers/yocto-poky/documentation/ref-manual/usingpoky.xml b/import-layers/yocto-poky/documentation/ref-manual/usingpoky.xml
index d080316..13d9ad6 100644
--- a/import-layers/yocto-poky/documentation/ref-manual/usingpoky.xml
+++ b/import-layers/yocto-poky/documentation/ref-manual/usingpoky.xml
@@ -42,11 +42,9 @@
 
         <para>
             The first thing you need to do is set up the OpenEmbedded build
-            environment by sourcing an environment setup script
+            environment by sourcing the environment setup script
             (i.e.
-            <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>
-            or
-            <link linkend='structure-memres-core-script'><filename>oe-init-build-env-memres</filename></link>).
+            <link linkend='structure-core-script'><filename>&OE_INIT_FILE;</filename></link>).
             Here is an example:
             <literallayout class='monospaced'>
      $ source &OE_INIT_FILE; [<replaceable>build_dir</replaceable>]
@@ -56,7 +54,8 @@
         <para>
             The <replaceable>build_dir</replaceable> argument is optional and specifies the directory the
             OpenEmbedded build system uses for the build -
-            the <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+            the
+            <link linkend='build-directory'>Build Directory</link>.
             If you do not specify a Build Directory, it defaults to a directory
             named <filename>build</filename> in your current working directory.
             A common practice is to use a different Build Directory for different targets.
@@ -108,7 +107,7 @@
             The <replaceable>target</replaceable> is the name of the recipe you want to build.
             Common targets are the images in <filename>meta/recipes-core/images</filename>,
             <filename>meta/recipes-sato/images</filename>, etc. all found in the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+            <link linkend='source-directory'>Source Directory</link>.
             Or, the target can be the name of a recipe for a specific piece of software such as
             BusyBox.
             For more details about the images the OpenEmbedded build system supports, see the
@@ -142,11 +141,12 @@
     <para>
         Once an image has been built, it often needs to be installed.
         The images and kernels built by the OpenEmbedded build system are placed in the
-        <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink> in
+        <link linkend='build-directory'>Build Directory</link> in
         <filename class="directory">tmp/deploy/images</filename>.
         For information on how to run pre-built images such as <filename>qemux86</filename>
         and <filename>qemuarm</filename>, see the
-        <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-manual'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>.
+        <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</ulink>
+        manual.
         For information about how to install these images, see the documentation for your
         particular board or machine.
     </para>
@@ -177,16 +177,17 @@
         going wrong.
         For information on how to enable and use this feature, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#using-the-error-reporting-tool'>Using the Error Reporting Tool</ulink>"
-        section in the Yocto Project Development Manual.
+        section in the Yocto Project Development Tasks Manual.
     </para>
 
     <para>
         For discussions on debugging, see the
         "<ulink url='&YOCTO_DOCS_DEV_URL;#platdev-gdb-remotedebug'>Debugging With the GNU Project Debugger (GDB) Remotely</ulink>" section
-        in the Yocto Project Developer's Manual
+        in the Yocto Project Development Tasks Manual
         and the
         "<ulink url='&YOCTO_DOCS_SDK_URL;#adt-eclipse'>Working within Eclipse</ulink>"
-        section in the Yocto Project Software Development Kit (SDK) Developer's Guide.
+        section in the Yocto Project Application Development and the
+        Extensible Software Development Kit (eSDK) manual.
     </para>
 
     <note>
@@ -208,7 +209,7 @@
             (<filename>qemux86</filename>) might be in
             <filename>tmp/work/qemux86-poky-linux/core-image-minimal/1.0-r0/temp/log.do_compile</filename>.
             To see the commands
-            <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink> ran
+            <link linkend='bitbake-term'>BitBake</link> ran
             to generate a log, look at the corresponding
             <filename>run.do_</filename><replaceable>taskname</replaceable>
             file in the same directory.
@@ -874,7 +875,7 @@
             class implements these functions.
             See that class in the
             <filename>meta/classes</filename> folder of the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+            <link linkend='source-directory'>Source Directory</link>
             for information.
         </para>
 
@@ -978,7 +979,7 @@
                     Removing
                     <link linkend='var-TMPDIR'><filename>TMPDIR</filename></link>
                     (usually <filename>tmp/</filename>, within the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>)
+                    <link linkend='build-directory'>Build Directory</link>)
                     can often fix temporary build issues.
                     Removing <filename>TMPDIR</filename> is usually a
                     relatively cheap operation, because task output will be
@@ -1031,9 +1032,12 @@
                     the Yocto Project implementation of
                     <ulink url='https://bugzilla.yoctoproject.org/'>Bugzilla</ulink>.
                     For general information on how to submit a bug against
-                    the Yocto Project, see the
-                    "<ulink url='&YOCTO_DOCS_DEV_URL;#tracking-bugs'>Tracking Bugs</ulink>"
-                    section in the Yocto Project Development Manual.
+                    the Yocto Project, see the Yocto Project Bugzilla
+                    <ulink url='&YOCTO_WIKI_URL;/wiki/Bugzilla_Configuration_and_Bug_Tracking'>wiki page</ulink>
+                    or the
+                    "<ulink url='&YOCTO_DOCS_DEV_URL;#submitting-a-defect-against-the-yocto-project'>Submitting a Defect Against the Yocto Project</ulink>"
+                    section in the Yocto Project Development Tasks
+                    Manual.
                     <note>
                         The manuals might not be the right place to document
                         variables that are purely internal and have a limited
@@ -1046,6 +1050,376 @@
     </section>
 </section>
 
+<section id='ref-quick-emulator-qemu'>
+    <title>Quick EMUlator (QEMU)</title>
+
+    <para>
+        The Yocto Project uses an implementation of the Quick EMUlator (QEMU)
+        Open Source project as part of the Yocto Project development "tool
+        set".
+    </para>
+
+    <para>
+        Within the context of the Yocto Project, QEMU is an
+        emulator and virtualization machine that allows you to run a complete
+        image you have built using the Yocto Project as just another task
+        on your build system.
+        QEMU is useful for running and testing images and applications on
+        supported Yocto Project architectures without having actual hardware.
+        Among other things, the Yocto Project uses QEMU to run automated
+        Quality Assurance (QA) tests on final images shipped with each
+        release.
+        <note>
+            This implementation is not the same as QEMU in general.
+        </note>
+        This section provides a brief reference for the Yocto Project
+        implementation of QEMU.
+    </para>
+
+    <para>
+        For official information and documentation on QEMU in general, see the
+        following references:
+        <itemizedlist>
+            <listitem><para>
+                <emphasis><ulink url='http://wiki.qemu.org/Main_Page'>QEMU Website</ulink>:</emphasis>
+                The official website for the QEMU Open Source project.
+                </para></listitem>
+            <listitem><para>
+                <emphasis><ulink url='http://wiki.qemu.org/Manual'>Documentation</ulink>:</emphasis>
+                The QEMU user manual.
+                </para></listitem>
+        </itemizedlist>
+    </para>
+
+    <para>
+        For information on how to use the Yocto Project implementation of
+        QEMU, see the
+        "<ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-qemu'>Using the Quick EMUlator (QEMU)</ulink>"
+        chapter in the Yocto Project Development Tasks Manual.
+    </para>
+
+    <section id='qemu-availability'>
+        <title>QEMU Availability</title>
+
+        <para>
+            QEMU is made available with the Yocto Project in a number of ways.
+            One method is to install a Software Development Kit (SDK).
+            For more information on how to make sure you have
+            QEMU available, see
+            "<ulink url='&YOCTO_DOCS_SDK_URL;#the-qemu-emulator'>The QEMU Emulator</ulink>"
+            section in the Yocto Project Application Development and the
+            Extensible Software Development Kit (eSDK) manual.
+        </para>
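+
+        <para>
+            As a minimal sketch of that approach, the following commands
+            install an SDK and then source its environment setup script so
+            that the QEMU binaries it contains become available in your
+            shell.
+            The installer file name, installation path, and setup script
+            name are examples only and depend on the SDK you install:
+            <literallayout class='monospaced'>
+     $ ~/Downloads/poky-glibc-x86_64-core-image-sato-core2-64-toolchain-&DISTRO;.sh
+     $ source /opt/poky/&DISTRO;/environment-setup-core2-64-poky-linux
+            </literallayout>
+        </para>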
+    </section>
+
+    <section id='qemu-performance'>
+        <title>QEMU Performance</title>
+
+        <para>
+            Using QEMU to emulate your hardware can result in speed issues
+            depending on the target and host architecture mix.
+            For example, using the <filename>qemux86</filename> image in the
+            emulator on an Intel-based 32-bit (x86) host machine is fast
+            because the target and host architectures match.
+            On the other hand, using the <filename>qemuarm</filename> image
+            on the same Intel-based host can be slower.
+            However, you still achieve faithful emulation of ARM-specific behavior.
+        </para>
+
+        <para>
+            To speed things up, the QEMU images support using
+            <filename>distcc</filename> to call a cross-compiler outside the
+            emulated system.
+            If you used <filename>runqemu</filename> to start QEMU, and the
+            <filename>distccd</filename> application is present on the host
+            system, any BitBake cross-compiling toolchain available from the
+            build system is automatically used from within QEMU simply by
+            calling <filename>distcc</filename>.
+            You can accomplish this by defining the cross-compiler variable
+            (e.g. <filename>export CC="distcc"</filename>).
+            Alternatively, if you are using a suitable SDK image or the
+            appropriate stand-alone toolchain is present, the toolchain is
+            also automatically used.
+        </para>
+
+        <note>
+            Several mechanisms exist that let you connect to the system
+            running on the QEMU emulator:
+            <itemizedlist>
+                <listitem><para>
+                    QEMU provides a framebuffer interface that makes standard
+                    consoles available.
+                    </para></listitem>
+                <listitem><para>
+                    Generally, headless embedded devices have a serial port.
+                    If so, you can configure the operating system of the
+                    running image to use that port to run a console.
+                    The connection uses standard IP networking.
+                    </para></listitem>
+                <listitem><para>
+                    SSH servers exist in some QEMU images.
+                    The <filename>core-image-sato</filename> QEMU image has a
+                    Dropbear secure shell (SSH) server that runs with the root
+                    password disabled.
+                    The <filename>core-image-full-cmdline</filename> and
+                    <filename>core-image-lsb</filename> QEMU images
+                    have OpenSSH instead of Dropbear.
+                    Including these SSH servers allows you to use standard
+                    <filename>ssh</filename> and <filename>scp</filename>
+                    commands.
+                    The <filename>core-image-minimal</filename> QEMU image,
+                    however, contains no SSH server.
+                    </para></listitem>
+                <listitem><para>
+                    You can use a provided, user-space NFS server to boot
+                    the QEMU session using a local copy of the root
+                    filesystem on the host.
+                    In order to make this connection, you must extract a
+                    root filesystem tarball by using the
+                    <filename>runqemu-extract-sdk</filename> command.
+                    After running the command, you must then point the
+                    <filename>runqemu</filename>
+                    script to the extracted directory instead of a root
+                    filesystem image file.
+                    See the
+                    "<ulink url='&YOCTO_DOCS_DEV_URL;#qemu-running-under-a-network-file-system-nfs-server'>Running Under a Network File System (NFS) Server</ulink>"
+                    section in the Yocto Project Development Tasks Manual for
+                    more information.
+                    </para></listitem>
+            </itemizedlist>
+        </note>
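+
+        <para>
+            As a brief sketch of the NFS approach mentioned in the note
+            above (the image and directory names are examples only), the
+            commands might look like the following:
+            <literallayout class='monospaced'>
+     $ runqemu-extract-sdk tmp/deploy/images/qemux86/core-image-sato-qemux86.tar.bz2 test-nfs
+     $ runqemu qemux86 test-nfs
+            </literallayout>
+            The first command extracts the root filesystem tarball into the
+            <filename>test-nfs</filename> directory.
+            The second command points <filename>runqemu</filename> at that
+            directory rather than at a root filesystem image file.
+        </para>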
+    </section>
+
+    <section id='qemu-command-line-syntax'>
+        <title>QEMU Command-Line Syntax</title>
+
+        <para>
+            The basic <filename>runqemu</filename> command syntax is as
+            follows:
+            <literallayout class='monospaced'>
+     $ runqemu [<replaceable>option</replaceable>] [...]
+            </literallayout>
+            Based on what you provide on the command line,
+            <filename>runqemu</filename> does a good job of figuring out what
+            you are trying to do.
+            For example, by default, QEMU looks for the most recently built
+            image according to the timestamp when it needs to look for an
+            image.
+            Minimally, through the use of options, you must provide either
+            a machine name, a virtual machine image
+            (<filename>*.wic.vmdk</filename>), or a kernel image
+            (<filename>*.bin</filename>).
+        </para>
+
+        <para>
+            Following is the command-line help output for the
+            <filename>runqemu</filename> command:
+            <literallayout class='monospaced'>
+     $ runqemu --help
+
+     Usage: you can run this script with any valid combination
+     of the following environment variables (in any order):
+       KERNEL - the kernel image file to use
+       ROOTFS - the rootfs image file or nfsroot directory to use
+       MACHINE - the machine name (optional, autodetected from KERNEL filename if unspecified)
+       Simplified QEMU command-line options can be passed with:
+         nographic - disable video console
+         serial - enable a serial console on /dev/ttyS0
+         slirp - enable user networking, no root privileges is required
+         kvm - enable KVM when running x86/x86_64 (VT-capable CPU required)
+         kvm-vhost - enable KVM with vhost when running x86/x86_64 (VT-capable CPU required)
+         publicvnc - enable a VNC server open to all hosts
+         audio - enable audio
+         [*/]ovmf* - OVMF firmware file or base name for booting with UEFI
+       tcpserial=&lt;port&gt; - specify tcp serial port number
+       biosdir=&lt;dir&gt; - specify custom bios dir
+       biosfilename=&lt;filename&gt; - specify bios filename
+       qemuparams=&lt;xyz&gt; - specify custom parameters to QEMU
+       bootparams=&lt;xyz&gt; - specify custom kernel parameters during boot
+       help, -h, --help: print this text
+
+     Examples:
+       runqemu
+       runqemu qemuarm
+       runqemu tmp/deploy/images/qemuarm
+       runqemu tmp/deploy/images/qemux86/&lt;qemuboot.conf&gt;
+       runqemu qemux86-64 core-image-sato ext4
+       runqemu qemux86-64 wic-image-minimal wic
+       runqemu path/to/bzImage-qemux86.bin path/to/nfsrootdir/ serial
+       runqemu qemux86 iso/hddimg/wic.vmdk/wic.qcow2/wic.vdi/ramfs/cpio.gz...
+       runqemu qemux86 qemuparams="-m 256"
+       runqemu qemux86 bootparams="psplash=false"
+       runqemu path/to/&lt;image&gt;-&lt;machine&gt;.wic
+       runqemu path/to/&lt;image&gt;-&lt;machine&gt;.wic.vmdk
+            </literallayout>
+        </para>
+    </section>
+
+    <section id='runqemu-command-line-options'>
+        <title><filename>runqemu</filename> Command-Line Options</title>
+
+        <para>
+            Following is a description of <filename>runqemu</filename>
+            options you can provide on the command line:
+            <note><title>Tip</title>
+                If you provide an invalid option combination or do not
+                provide enough options,
+                <filename>runqemu</filename> provides appropriate error
+                messages to help you correct the problem.
+            </note>
+            <itemizedlist>
+                <listitem><para>
+                    <replaceable>QEMUARCH</replaceable>:
+                    The QEMU machine architecture, which must be "qemuarm",
+                    "qemuarm64", "qemumips", "qemumips64", "qemuppc",
+                    "qemux86", or "qemux86-64".
+                    </para></listitem>
+                <listitem><para>
+                    <filename><replaceable>VM</replaceable></filename>:
+                    The virtual machine image, which must be a
+                    <filename>.wic.vmdk</filename> file.
+                    Use this option when you want to boot a
+                    <filename>.wic.vmdk</filename> image.
+                    The image filename you provide must contain one of the
+                    following strings: "qemux86-64", "qemux86", "qemuarm",
+                    "qemumips64", "qemumips", "qemuppc", or "qemush4".
+                    </para></listitem>
+                <listitem><para>
+                    <replaceable>ROOTFS</replaceable>:
+                    A root filesystem that has one of the following
+                    filetype extensions: "ext2", "ext3", "ext4", "jffs2",
+                    "nfs", or "btrfs".
+                    If the filename you provide for this option uses "nfs", it
+                    must provide an explicit root filesystem path.
+                    </para></listitem>
+                <listitem><para>
+                    <replaceable>KERNEL</replaceable>:
+                    A kernel image, which is a <filename>.bin</filename> file.
+                    When you provide a <filename>.bin</filename> file,
+                    <filename>runqemu</filename> detects it and assumes the
+                    file is a kernel image.
+                    </para></listitem>
+                <listitem><para>
+                    <replaceable>MACHINE</replaceable>:
+                    The architecture of the QEMU machine, which must be one
+                    of the following: "qemux86", "qemux86-64", "qemuarm",
+                    "qemuarm64", "qemumips", "qemumips64", or "qemuppc".
+                    The <replaceable>MACHINE</replaceable> and
+                    <replaceable>QEMUARCH</replaceable> options are basically
+                    identical.
+                    If you do not provide a <replaceable>MACHINE</replaceable>
+                    option, <filename>runqemu</filename> tries to determine
+                    it based on other options.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>ramfs</filename>:
+                    Indicates you are booting an initial RAM disk (initramfs)
+                    image, which means the <filename>FSTYPE</filename> is
+                    <filename>cpio.gz</filename>.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>iso</filename>:
+                    Indicates you are booting an ISO image, which means the
+                    <filename>FSTYPE</filename> is
+                    <filename>.iso</filename>.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>nographic</filename>:
+                    Disables the video console, which sets the console to
+                    "ttyS0".
+                    </para></listitem>
+                <listitem><para>
+                    <filename>serial</filename>:
+                    Enables a serial console on
+                    <filename>/dev/ttyS0</filename>.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>biosdir</filename>:
+                    Establishes a custom directory for BIOS, VGA BIOS and
+                    keymaps.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>biosfilename</filename>:
+                    Establishes a custom BIOS name.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>qemuparams=\"<replaceable>xyz</replaceable>\"</filename>:
+                    Specifies custom QEMU parameters.
+                    Use this option to pass options other than the simple
+                    "kvm" and "serial" options.
+                    </para></listitem>
+                <listitem><para><filename>bootparams=\"<replaceable>xyz</replaceable>\"</filename>:
+                    Specifies custom boot parameters for the kernel.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>audio</filename>:
+                    Enables audio in QEMU.
+                    The <replaceable>MACHINE</replaceable> option must be
+                    either "qemux86" or "qemux86-64" in order for audio to be
+                    enabled.
+                    Additionally, the <filename>snd_intel8x0</filename>
+                    or <filename>snd_ens1370</filename> driver must be
+                    installed in the Linux guest.
+                    </para></listitem>
+                <listitem><para>
+                    <filename>slirp</filename>:
+                    Enables "slirp" networking, which is a different way
+                    of networking that does not need root access
+                    but also is not as easy to use or comprehensive
+                    as the default.
+                    </para></listitem>
+                <listitem><para id='kvm-cond'>
+                    <filename>kvm</filename>:
+                    Enables KVM when running "qemux86" or "qemux86-64"
+                    QEMU architectures.
+                    For KVM to work, all the following conditions must be met:
+                    <itemizedlist>
+                        <listitem><para>
+                            Your <replaceable>MACHINE</replaceable> must be either
+                            "qemux86" or "qemux86-64".
+                            </para></listitem>
+                        <listitem><para>
+                            Your build host has to have the KVM modules
+                            installed, which provide the
+                            <filename>/dev/kvm</filename> device.
+                            </para></listitem>
+                        <listitem><para>
+                            The build host <filename>/dev/kvm</filename>
+                            device has to be both readable and writable.
+                            </para></listitem>
+                    </itemizedlist>
+                    </para></listitem>
+                <listitem><para>
+                    <filename>kvm-vhost</filename>:
+                    Enables KVM with VHOST support when running "qemux86"
+                    or "qemux86-64" QEMU architectures.
+                    For KVM with VHOST to work, the following conditions must
+                    be met:
+                    <itemizedlist>
+                        <listitem><para>
+                            <link linkend='kvm-cond'>kvm</link> option
+                            conditions must be met.
+                            </para></listitem>
+                        <listitem><para>
+                            Your build host has to have the virtio net device,
+                            which is <filename>/dev/vhost-net</filename>.
+                            </para></listitem>
+                        <listitem><para>
+                            The build host <filename>/dev/vhost-net</filename>
+                            device has to be both readable and writable,
+                            and "slirp" must be enabled.
+                            </para></listitem>
+                    </itemizedlist>
+                    </para></listitem>
+                <listitem><para>
+                    <filename>publicvnc</filename>:
+                    Enables a VNC server open to all hosts.
+                    </para></listitem>
+            </itemizedlist>
+        </para>
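+
+        <para>
+            As an illustration of how these options can be combined (the
+            image names are examples only), invocations such as the
+            following are possible:
+            <literallayout class='monospaced'>
+     $ runqemu qemux86-64 core-image-minimal nographic
+     $ runqemu qemux86 kvm serial
+     $ runqemu qemuarm core-image-sato qemuparams="-m 512"
+            </literallayout>
+            The first command boots the most recently built
+            <filename>core-image-minimal</filename> image for the 64-bit
+            x86 QEMU machine with the video console disabled, the second
+            boots the 32-bit x86 machine with KVM acceleration and a serial
+            console, and the third boots the ARM machine while passing a
+            custom memory size directly to QEMU.
+        </para>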
+    </section>
+</section>
+
 <section id='maintaining-build-output-quality'>
     <title>Maintaining Build Output Quality</title>
 
@@ -1099,15 +1473,15 @@
             <link linkend='var-BUILDHISTORY_COMMIT'><filename>BUILDHISTORY_COMMIT</filename></link>
             variable to "1" at the end of your
             <filename>conf/local.conf</filename> file found in the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>:
+            <link linkend='build-directory'>Build Directory</link>:
             <literallayout class='monospaced'>
      INHERIT += "buildhistory"
      BUILDHISTORY_COMMIT = "1"
             </literallayout>
-            Enabling build history as previously described
-            causes the build process to collect build
-            output information and commit it to a local
-            <ulink url='&YOCTO_DOCS_DEV_URL;#git'>Git</ulink> repository.
+            Enabling build history as previously described causes the
+            OpenEmbedded build system to collect build output information and
+            commit it as a single commit to a local
+            <link linkend='git'>Git</link> repository.
             <note>
                 Enabling build history increases your build times slightly,
                 particularly for images, and increases the amount of disk
@@ -1357,7 +1731,7 @@
                 you can enable writing only image information without
                 any history by adding the following to your
                 <filename>conf/local.conf</filename> file found in the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>:
+                <link linkend='build-directory'>Build Directory</link>:
                 <literallayout class='monospaced'>
      INHERIT += "buildhistory"
      BUILDHISTORY_COMMIT = "0"
@@ -1643,7 +2017,7 @@
                 <filename>sync()</filename> calls into the
                 file system on the principle that if there was a significant
                 failure, the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                <link linkend='build-directory'>Build Directory</link>
                 contents could easily be rebuilt.
                 </para></listitem>
             <listitem><para>
diff --git a/import-layers/yocto-poky/documentation/sdk-manual/figures/sdk-title.png b/import-layers/yocto-poky/documentation/sdk-manual/figures/sdk-title.png
index e9d5b34..e69e039 100644
--- a/import-layers/yocto-poky/documentation/sdk-manual/figures/sdk-title.png
+++ b/import-layers/yocto-poky/documentation/sdk-manual/figures/sdk-title.png
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/sdk-manual/sdk-appendix-customizing.xml b/import-layers/yocto-poky/documentation/sdk-manual/sdk-appendix-customizing.xml
index 965cccc..587526f 100644
--- a/import-layers/yocto-poky/documentation/sdk-manual/sdk-appendix-customizing.xml
+++ b/import-layers/yocto-poky/documentation/sdk-manual/sdk-appendix-customizing.xml
@@ -148,9 +148,7 @@
             <listitem><para>
                 If your OpenEmbedded build system setup uses a different
                 environment setup script other than
-                <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
-                or
-                <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>,
+                <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>,
                 then you must set
                 <ulink url='&YOCTO_DOCS_REF_URL;#var-OE_INIT_ENV_SCRIPT'><filename>OE_INIT_ENV_SCRIPT</filename></ulink>
                 to point to the environment setup script you use.
@@ -289,7 +287,7 @@
                         for the SDK alone, create a
                         <filename>conf/sdk-extra.conf</filename> either in
                         your
-                        <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                        <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
                         or within any layer and put your
                         <filename>SSTATE_MIRRORS</filename> setting within
                         that file.
diff --git a/import-layers/yocto-poky/documentation/sdk-manual/sdk-appendix-mars.xml b/import-layers/yocto-poky/documentation/sdk-manual/sdk-appendix-mars.xml
index 9957057..2d80f64 100644
--- a/import-layers/yocto-poky/documentation/sdk-manual/sdk-appendix-mars.xml
+++ b/import-layers/yocto-poky/documentation/sdk-manual/sdk-appendix-mars.xml
@@ -14,8 +14,8 @@
         from start to finish.
         For general information on using the Eclipse IDE and the Yocto
         Project Eclipse Plug-In, see the
-        "<link linkend='sdk-developing-applications-using-eclipse'>Developing Applications Using <trademark class='trade'>Eclipse</trademark></link>"
-        section.
+        "<link linkend='sdk-eclipse-project'>Developing Applications Using <trademark class='trade'>Eclipse</trademark></link>"
+        Chapter.
     </para>
 
     <section id='mars-setting-up-the-eclipse-ide'>
@@ -392,7 +392,7 @@
                                         <filename>Build System Derived Toolchain:</filename></emphasis>
                                         Select this type if you built the
                                         toolchain as part of the
-                                        <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                                        <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
                                         When you select
                                         <filename>Build system derived toolchain</filename>,
                                         you are using the toolchain built and
@@ -418,7 +418,7 @@
                             toolchain, the path you provide for the
                             <filename>Toolchain Root Location</filename>
                             field is the
-                            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
                             from which you run the
                             <filename>bitbake</filename> command (e.g
                             <filename>/home/scottrif/poky/build</filename>).</para>
@@ -431,14 +431,22 @@
                             the target hardware resides.
                             </para>
                             <para>This location depends on where you
-                            separately extracted and installed the target
-                            filesystem.
+                            separately extracted and installed the
+                            target filesystem when you either built
+                            it or downloaded it.
+                            <note>
+                                If you downloaded the root filesystem
+                                for the target hardware rather than
+                                built it, you must download the
+                                <filename>sato-sdk</filename> image
+                                in order to build any C/C++ projects.
+                            </note>
                             As an example, suppose you prepared an image
                             using the steps in the
                             <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
                             If so, the <filename>MY_QEMU_ROOTFS</filename>
                             directory is found in the
-                            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
                             and you would browse to and select that directory
                             (e.g. <filename>/home/scottrif/build/MY_QEMU_ROOTFS</filename>).
                             </para>
@@ -487,7 +495,7 @@
                             <filename>Build system derived toolchain</filename>,
                             the target kernel you built will be located in
                             the
-                            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+                            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
                             in
                             <filename>tmp/deploy/images/<replaceable>machine</replaceable></filename>
                             directory.
@@ -692,7 +700,7 @@
             <note>
                 See the
                 "<ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-qemu'>Using the Quick EMUlator (QEMU)</ulink>"
-                chapter in the Yocto Project Development Manual
+                chapter in the Yocto Project Development Tasks Manual
                 for more information on using QEMU.
             </note>
             <orderedlist>
@@ -804,7 +812,7 @@
                     by clicking on "new".</para></listitem>
                 <listitem><para>Select <filename>SSH</filename>, which means
                     Secure Socket Shell.
-                    Optionally, you can select an TCF connection instead.
+                    Optionally, you can select a TCF connection instead.
                     </para></listitem>
                 <listitem><para>Click "Next".
                     </para></listitem>
@@ -848,11 +856,30 @@
                     launch.
                     Eclipse is helpful in that it auto fills your application
                     name for you assuming you browsed to a directory.
-                    <note>
-                        If you are prompted to provide a username and to
-                        optionally set a password, be sure you provide
-                        "root" as the username and you leave the password
-                        field blank.
+                    <note><title>Tips</title>
+                        <itemizedlist>
+                            <listitem><para>
+                                If you are prompted to provide a username
+                                and to optionally set a password, be sure
+                                you provide "root" as the username and you
+                                leave the password field blank.
+                                </para></listitem>
+                            <listitem><para>
+                                If browsing to a directory fails or times
+                                out, but you can
+                                <filename>ssh</filename> into your QEMU
+                                or target from the command line and you
+                                have proxies set up, it is likely that
+                                Eclipse is sending the SSH traffic to a
+                                proxy.
+                                In this case, either use TCF, or click on
+                                "Configure proxy settings" in the
+                                connection dialog and add the target IP
+                                address to the "bypass proxy" section.
+                                You might also need to change
+                                "Active Provider" from Native to Manual.
+                                </para></listitem>
+                        </itemizedlist>
                     </note>
                     </para></listitem>
                 <listitem><para>
diff --git a/import-layers/yocto-poky/documentation/sdk-manual/sdk-appendix-obtain.xml b/import-layers/yocto-poky/documentation/sdk-manual/sdk-appendix-obtain.xml
index d0cbf9c..ab9055e 100644
--- a/import-layers/yocto-poky/documentation/sdk-manual/sdk-appendix-obtain.xml
+++ b/import-layers/yocto-poky/documentation/sdk-manual/sdk-appendix-obtain.xml
@@ -18,37 +18,78 @@
     </para>
 
     <para>
-        You can find SDK installers here:
-        <itemizedlist>
-            <listitem><para><emphasis>Standard SDK Installers:</emphasis>
+        Follow these steps to locate and hand-install the toolchain:
+        <orderedlist>
+            <listitem><para>
+                <emphasis>Go to the Installers Directory:</emphasis>
                 Go to <ulink url='&YOCTO_TOOLCHAIN_DL_URL;'></ulink>
-                and find the folder that matches your host development system
-                (i.e. <filename>i686</filename> for 32-bit machines or
-                <filename>x86_64</filename> for 64-bit machines).</para>
-
-                <para>Go into that folder and download the SDK installer
-                whose name includes the appropriate target architecture.
-                The toolchains provided by the Yocto Project are based off of
-                the <filename>core-image-sato</filename> image and contain
-                libraries appropriate for developing against that image.
-                For example, if your host development system is a 64-bit x86
-                system and you are going to use your cross-toolchain for a
-                32-bit x86 target, go into the <filename>x86_64</filename>
-                folder and download the following installer:
-                <literallayout class='monospaced'>
-     poky-glibc-x86_64-core-image-sato-i586-toolchain-&DISTRO;.sh
-                </literallayout>
                 </para></listitem>
-            <listitem><para><emphasis>Extensible SDK Installers:</emphasis>
-                Installers for the extensible SDK are also located in
-                <ulink url='&YOCTO_TOOLCHAIN_DL_URL;'></ulink>.
-                These installers have the string
-                <filename>ext</filename> as part of their names:
+            <listitem><para>
+                <emphasis>Open the Folder for Your Development System:</emphasis>
+                Open the folder that matches your host development system
+                (i.e. <filename>i686</filename> for 32-bit machines or
+                <filename>x86_64</filename> for 64-bit machines).
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Locate and Download the SDK Installer:</emphasis>
+                You need to find and download the installer appropriate for
+                your development system, target hardware, and image type.
+                </para>
+
+                <para>The installer files (<filename>*.sh</filename>) follow
+                this naming convention:
+                <literallayout class='monospaced'>
+     poky-glibc-<replaceable>host_system</replaceable>-core-image-<replaceable>type</replaceable>-<replaceable>arch</replaceable>-toolchain-ext-<replaceable>release</replaceable>.sh
+
+     Where:
+         <replaceable>host_system</replaceable> is a string representing your development system:
+                i686 or x86_64.
+
+         <replaceable>type</replaceable> is a string representing either a "sato" or "minimal"
+                image.
+
+         <replaceable>arch</replaceable> is a string representing the target architecture:
+                aarch64, armv5e, core2-64, cortexa8hf-neon, i586, mips32r2,
+                mips64, or ppc7400.
+
+         <replaceable>release</replaceable> is the version of Yocto Project.
+
+         NOTE:
+            The standard SDK installer does not have the "-ext" string as
+            part of the filename.
+
+                </literallayout>
+                The toolchains provided by the Yocto Project are based off of
+                the <filename>core-image-sato</filename> and
+                <filename>core-image-minimal</filename> images and contain
+                libraries appropriate for developing against those images.
+                </para>
+
+                <para>For example, if your host development system is a
+                64-bit x86 system and you need an extensible SDK for a
+                64-bit core2 target, go into the <filename>x86_64</filename>
+                folder and download the following installer:
                 <literallayout class='monospaced'>
      poky-glibc-x86_64-core-image-sato-core2-64-toolchain-ext-&DISTRO;.sh
                 </literallayout>
                 </para></listitem>
-        </itemizedlist>
+            <listitem><para>
+                <emphasis>Run the Installer:</emphasis>
+                Be sure you have execution privileges and run the installer.
+                Following is an example from the <filename>Downloads</filename>
+                directory:
+                <literallayout class='monospaced'>
+     $ ~/Downloads/poky-glibc-x86_64-core-image-sato-core2-64-toolchain-ext-&DISTRO;.sh
+                </literallayout>
+                During execution of the script, you choose the root location
+                for the toolchain.
+                See the
+                "<link linkend='sdk-installed-standard-sdk-directory-structure'>Installed Standard SDK Directory Structure</link>"
+                section and the
+                "<link linkend='sdk-installed-extensible-sdk-directory-structure'>Installed Extensible SDK Directory Structure</link>"
+                section for more information.
+                </para></listitem>
+        </orderedlist>
     </para>
 </section>
 
@@ -57,65 +98,135 @@
 
     <para>
         As an alternative to locating and downloading a SDK installer,
-        you can build the SDK installer assuming you have first sourced
-        the environment setup script.
-        See the
-        "<ulink url='&YOCTO_DOCS_QS_URL;#qs-building-images'>Building Images</ulink>"
-        section in the Yocto Project Quick Start for steps that show you
-        how to set up the Yocto Project environment.
-        In particular, you need to be sure the
-        <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
-        variable matches the architecture for which you are building and that
-        the
-        <ulink url='&YOCTO_DOCS_REF_URL;#var-SDKMACHINE'><filename>SDKMACHINE</filename></ulink>
-        variable is correctly set if you are building a toolchain designed to
-        run on an architecture that differs from your current development host
-        machine (i.e. the build machine).
-    </para>
-
-    <para>
-        To build the SDK installer for a standard SDK and populate
-        the SDK image, use the following command:
-        <literallayout class='monospaced'>
+        you can build the SDK installer.
+        Follow these steps:
+        <orderedlist>
+            <listitem><para>
+                <emphasis>Set Up the Build Environment:</emphasis>
+                Be sure you are set up to use BitBake in a shell.
+                See the
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#setting-up-the-development-host-to-use-the-yocto-project'>Setting Up the Development Host to Use the Yocto Project</ulink>"
+                section in the Yocto Project Development Tasks Manual for
+                information on how to get a build host ready that is either a
+                native Linux machine or a machine that uses CROPS.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Clone the <filename>poky</filename> Repository:</emphasis>
+                You need to have a local copy of the Yocto Project
+                <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
+                (i.e. a local <filename>poky</filename> repository).
+                See the
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#cloning-the-poky-repository'>Cloning the <filename>poky</filename> Repository</ulink>"
+                and possibly the
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#checking-out-by-branch-in-poky'>Checking Out by Branch in Poky</ulink>"
+                and
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#checkout-out-by-tag-in-poky'>Checking Out by Tag in Poky</ulink>"
+                sections all in the Yocto Project Development Tasks Manual for
+                information on how to clone the <filename>poky</filename>
+                repository and check out the appropriate branch for your work.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Initialize the Build Environment:</emphasis>
+                While in the root directory of the Source Directory (i.e.
+                <filename>poky</filename>), run the
+                <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
+                environment setup script to define the OpenEmbedded
+                build environment on your build host.
+                <literallayout class='monospaced'>
+     $ source &OE_INIT_FILE;
+                </literallayout>
+                Among other things, the script creates the
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
+                which is <filename>build</filename> in this case
+                and is located in the
+                <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
+                After the script runs, your current working directory
+                is set to the <filename>build</filename> directory.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Make Sure You Are Building an Installer for the Correct Machine:</emphasis>
+                Check to be sure that your
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
+                variable in the <filename>local.conf</filename> file in your
+                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
+                matches the architecture for which you are building.
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Make Sure Your SDK Machine is Correctly Set:</emphasis>
+                If you are building a toolchain designed to run on an
+                architecture that differs from your current development host
+                machine (i.e. the build machine), be sure that the
+                <ulink url='&YOCTO_DOCS_REF_URL;#var-SDKMACHINE'><filename>SDKMACHINE</filename></ulink>
+                variable in the <filename>local.conf</filename> file in your
+                Build Directory is correctly set.
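+                For example, to produce an SDK installer that runs on a
+                64-bit x86 build host, you might use the following
+                setting:
+                <literallayout class='monospaced'>
+     SDKMACHINE ?= "x86_64"
+                </literallayout>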
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Build the SDK Installer:</emphasis>
+                To build the SDK installer for a standard SDK and populate
+                the SDK image, use the following command form.
+                Be sure to replace <replaceable>image</replaceable> with
+                an image (e.g. "core-image-sato"):
+                <literallayout class='monospaced'>
      $ bitbake <replaceable>image</replaceable> -c populate_sdk
-        </literallayout>
-        You can do the same for the extensible SDK using this command:
-        <literallayout class='monospaced'>
+                </literallayout>
+                You can do the same for the extensible SDK using this command
+                form:
+                <literallayout class='monospaced'>
      $ bitbake <replaceable>image</replaceable> -c populate_sdk_ext
-        </literallayout>
-        These commands result in a SDK installer that contains the sysroot
-        that matches your target root filesystem.
-    </para>
+                </literallayout>
+                These commands result in an SDK installer that contains the
+                sysroot that matches your target root filesystem.</para>
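+
+                <para>For example, assuming "core-image-sato" as the
+                image, the following command builds the standard SDK
+                installer:
+                <literallayout class='monospaced'>
+     $ bitbake core-image-sato -c populate_sdk
+                </literallayout>
+                </para>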
 
-    <para>
-        When the <filename>bitbake</filename> command completes, the SDK
-        installer will be in
-        <filename>tmp/deploy/sdk</filename> in the Build Directory.
-        <note><title>Notes</title>
-            <itemizedlist>
-                <listitem><para>
-                    By default, this toolchain does not build static binaries.
-                    If you want to use the toolchain to build these types of
-                    libraries, you need to be sure your SDK has the
-                    appropriate static development libraries.
-                    Use the
-                    <ulink url='&YOCTO_DOCS_REF_URL;#var-TOOLCHAIN_TARGET_TASK'><filename>TOOLCHAIN_TARGET_TASK</filename></ulink>
-                    variable inside your <filename>local.conf</filename> file
-                    to install the appropriate library packages in the SDK.
-                    Following is an example using <filename>libc</filename>
-                    static development libraries:
-                    <literallayout class='monospaced'>
+                <para>When the <filename>bitbake</filename> command completes,
+                the SDK installer will be in
+                <filename>tmp/deploy/sdk</filename> in the Build Directory.
+                <note><title>Notes</title>
+                    <itemizedlist>
+                        <listitem><para>
+                            By default, this toolchain does not build static
+                            binaries.
+                            If you want to use the toolchain to build these
+                            types of libraries, you need to be sure your SDK
+                            has the appropriate static development libraries.
+                            Use the
+                            <ulink url='&YOCTO_DOCS_REF_URL;#var-TOOLCHAIN_TARGET_TASK'><filename>TOOLCHAIN_TARGET_TASK</filename></ulink>
+                            variable inside your <filename>local.conf</filename>
+                            file to install the appropriate library packages
+                            in the SDK.
+                            Following is an example using
+                            <filename>libc</filename> static development
+                            libraries:
+                            <literallayout class='monospaced'>
      TOOLCHAIN_TARGET_TASK_append = " libc-staticdev"
-                    </literallayout>
-                    </para></listitem>
-                <listitem><para>
-                    For additional information on building the installer,
-                    see the
-                    <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>Cookbook guide to Making an Eclipse Debug Capable Image</ulink>
-                    wiki page.
-                    </para></listitem>
-            </itemizedlist>
-        </note>
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            For additional information on building the
+                            installer, see the
+                            <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>Cookbook guide to Making an Eclipse Debug Capable Image</ulink>
+                            wiki page.
+                            </para></listitem>
+                    </itemizedlist>
+                </note>
+            </para></listitem>
+            <listitem><para>
+                <emphasis>Run the Installer:</emphasis>
+                You can now run the SDK installer from
+                <filename>tmp/deploy/sdk</filename> in the Build Directory.
+                Following is an example:
+                <literallayout class='monospaced'>
+     $ cd ~/poky/build/tmp/deploy/sdk
+     $ ./poky-glibc-x86_64-core-image-sato-core2-64-toolchain-ext-&DISTRO;.sh
+                </literallayout>
+                During execution of the script, you choose the root location
+                for the toolchain.
+                See the
+                "<link linkend='sdk-installed-standard-sdk-directory-structure'>Installed Standard SDK Directory Structure</link>"
+                section and the
+                "<link linkend='sdk-installed-extensible-sdk-directory-structure'>Installed Extensible SDK Directory Structure</link>"
+                section for more information.
+                </para></listitem>
+        </orderedlist>
     </para>
 </section>
 
@@ -126,55 +237,106 @@
         After installing the toolchain, for some use cases you
         might need to separately extract a root filesystem:
         <itemizedlist>
-            <listitem><para>You want to boot the image using NFS.
+            <listitem><para>
+                You want to boot the image using NFS.
                 </para></listitem>
-            <listitem><para>You want to use the root filesystem as the
+            <listitem><para>
+                You want to use the root filesystem as the
                 target sysroot.
                 For example, the Eclipse IDE environment with the Eclipse
                 Yocto Plug-in installed allows you to use QEMU to boot
-                under NFS.</para></listitem>
-            <listitem><para>You want to develop your target application
+                under NFS.
+                </para></listitem>
+            <listitem><para>
+                You want to develop your target application
                 using the root filesystem as the target sysroot.
                 </para></listitem>
         </itemizedlist>
     </para>
 
     <para>
-        To extract the root filesystem, first <filename>source</filename>
-        the cross-development environment setup script to establish
-        necessary environment variables.
-        If you built the toolchain in the Build Directory, you will find
-        the toolchain environment script in the
-        <filename>tmp</filename> directory.
-        If you installed the toolchain by hand, the environment setup
-        script is located in <filename>/opt/poky/&DISTRO;</filename>.
-    </para>
+        Follow these steps to extract the root filesystem:
+        <orderedlist>
+            <listitem><para>
+                <emphasis>Locate and Download the Tarball for the Pre-Built
+                Root Filesystem Image File:</emphasis>
+                You need to find and download the root filesystem image
+                file that is appropriate for your target system.
+                These files are kept in the
+                <ulink url='&YOCTO_DL_URL;/releases/yocto/yocto-&DISTRO;/machines/'>Index of Releases</ulink>
+                in the "machines" directory.</para>
 
-    <para>
-        After sourcing the environment script, use the
-        <filename>runqemu-extract-sdk</filename> command and provide the
-        filesystem image.
-    </para>
+                <para>The "machines" directory contains tarballs
+                (<filename>*.tar.bz2</filename>) for supported machines.
+                The directory also contains flattened root filesystem
+                image files (<filename>*.ext4</filename>), which you can use
+                with QEMU directly.</para>
 
-    <para>
-        Following is an example.
-        The second command sets up the environment.
-        In this case, the setup script is located in the
-        <filename>/opt/poky/&DISTRO;</filename> directory.
-        The third command extracts the root filesystem from a previously
-        built filesystem that is located in the
-        <filename>~/Downloads</filename> directory.
-        Furthermore, this command extracts the root filesystem into the
-        <filename>qemux86-sato</filename> directory:
-        <literallayout class='monospaced'>
-     $ cd ~
-     $ source /opt/poky/&DISTRO;/environment-setup-i586-poky-linux
-     $ runqemu-extract-sdk \
-        ~/Downloads/core-image-sato-sdk-qemux86-2011091411831.rootfs.tar.bz2 \
-        $HOME/qemux86-sato
-        </literallayout>
-        You could now point to the target sysroot at
-        <filename>qemux86-sato</filename>.
+                <para>The pre-built root filesystem image files
+                follow these naming conventions:
+                <literallayout class='monospaced'>
+     core-image-<replaceable>profile</replaceable>-<replaceable>arch</replaceable>.tar.bz2
+
+     Where:
+         <replaceable>profile</replaceable> is the filesystem image's profile:
+                   lsb, lsb-dev, lsb-sdk, lsb-qt3, minimal, minimal-dev, sato,
+                   sato-dev, sato-sdk, minimal-initramfs, or sdk-ptest. For
+                   information on these types of image profiles, see the
+                   "Images" chapter in the Yocto Project Reference Manual.
+
+         <replaceable>arch</replaceable> is a string representing the target architecture:
+                beaglebone, edgerouter, genericx86, genericx86-64, mpc8315e-rdb,
+                qemuarm, qemuarm64, qemumips, qemumips64, qemuppc, qemux86, or
+                qemux86-64.
+
+                </literallayout>
+                The root filesystems provided by the Yocto Project are based
+                on the <filename>core-image-sato</filename> and
+                <filename>core-image-minimal</filename> images.
+                </para>
+
+                <para>For example, if your target hardware system is a
+                BeagleBone board and your image is a
+                <filename>core-image-minimal</filename> image, you need
+                to download the following root filesystem image file:
+                <literallayout class='monospaced'>
+     core-image-minimal-beaglebone.tar.bz2
+                </literallayout>
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Initialize the Cross-Development Environment:</emphasis>
+                You must <filename>source</filename>
+                the cross-development environment setup script to establish
+                necessary environment variables.</para>
+
+                <para>This script is located in the top-level directory in
+                which you installed the toolchain (e.g.
+                <filename>poky_sdk</filename>).</para>
+
+                <para>Following is an example for the Core2 64-bit
+                architecture:
+                <literallayout class='monospaced'>
+     $ source ~/poky_sdk/environment-setup-core2-64-poky-linux
+                </literallayout>
+                </para></listitem>
+            <listitem><para>
+                <emphasis>Extract the Root Filesystem:</emphasis>
+                Use the <filename>runqemu-extract-sdk</filename> command
+                and provide the root filesystem image.</para>
+
+                <para>Following is an example command that extracts the root
+                filesystem from a previously built root filesystem image that
+                was downloaded from the
+                <ulink url='&YOCTO_DOCS_REF_URL;#index-downloads'>Index of Releases</ulink>.
+                This command extracts the root filesystem into the
+                <filename>core2-64-sato</filename> directory:
+                <literallayout class='monospaced'>
+     $ runqemu-extract-sdk ~/Downloads/core-image-sato-core2-64.tar.bz2 ~/core2-64-sato
+                </literallayout>
+                You could now point to the target sysroot at
+                <filename>core2-64-sato</filename>.
+                </para></listitem>
+        </orderedlist>
     </para>
 </section>
 
diff --git a/import-layers/yocto-poky/documentation/sdk-manual/sdk-eclipse-project.xml b/import-layers/yocto-poky/documentation/sdk-manual/sdk-eclipse-project.xml
new file mode 100644
index 0000000..bdb8344
--- /dev/null
+++ b/import-layers/yocto-poky/documentation/sdk-manual/sdk-eclipse-project.xml
@@ -0,0 +1,1211 @@
+<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.2//EN"
+"http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"
+[<!ENTITY % poky SYSTEM "../poky.ent"> %poky; ] >
+
+<chapter id='sdk-eclipse-project'>
+
+    <title>Developing Applications Using <trademark class='trade'>Eclipse</trademark></title>
+
+    <para>
+        If you are familiar with the popular Eclipse IDE, you can use an
+        Eclipse Yocto Plug-in to allow you to develop, deploy, and test your
+        application all from within Eclipse.
+        This chapter describes general workflow using the SDK and Eclipse
+        and how to configure and set up Eclipse.
+    </para>
+
+    <section id='workflow-using-eclipse'>
+        <title>Workflow Using <trademark class='trade'>Eclipse</trademark></title>
+
+        <para>
+            The following figure and supporting list summarize the
+            general application development workflow that employs both the
+            SDK and Eclipse.
+        </para>
+
+        <para>
+            <imagedata fileref="figures/sdk-eclipse-dev-flow.png"
+                width="7in" depth="7in" align="center" scale="100" />
+        </para>
+
+        <para>
+            <orderedlist>
+                <listitem><para>
+                    <emphasis>Prepare the host system for the Yocto
+                    Project</emphasis>:
+                    See
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#detailed-supported-distros'>Supported Linux Distributions</ulink>"
+                    and
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#required-packages-for-the-host-development-system'>Required Packages for the Host Development System</ulink>"
+                    sections both in the Yocto Project Reference Manual for
+                    requirements.
+                    In particular, be sure your host system has the
+                    <filename>xterm</filename> package installed.
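+                    For example, on a Debian-based distribution such as
+                    Ubuntu, you could install the package as follows:
+                    <literallayout class='monospaced'>
+     $ sudo apt-get install xterm
+                    </literallayout>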
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Secure the Yocto Project kernel target
+                    image</emphasis>:
+                    You must have a target kernel image that has been built
+                    using the OpenEmbedded build system.</para>
+                    <para>Depending on whether the Yocto Project has a
+                    pre-built image that matches your target architecture
+                    and where you are going to run the image while you
+                    develop your application (QEMU or real hardware), the
+                    area from which you get the image differs.
+                    <itemizedlist>
+                        <listitem><para>
+                            Download the image from
+                            <ulink url='&YOCTO_MACHINES_DL_URL;'><filename>machines</filename></ulink>
+                            if your target architecture is supported and
+                            you are going to develop and test your
+                            application on actual hardware.
+                            </para></listitem>
+                        <listitem><para>
+                            Download the image from
+                            <ulink url='&YOCTO_QEMU_DL_URL;'>
+                            <filename>machines/qemu</filename></ulink> if
+                            your target architecture is supported and you
+                            are going to develop and test your application
+                            using the QEMU emulator.
+                            </para></listitem>
+                        <listitem><para>
+                            Build your image if you cannot find a pre-built
+                            image that matches your target architecture.
+                            If your target architecture is similar to a
+                            supported architecture, you can modify the
+                            kernel image before you build it.
+                            See the
+                            "<ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#using-devtool-to-patch-the-kernel'>Using <filename>devtool</filename> to Patch the Kernel</ulink>"
+                            section in the Yocto Project Linux Kernel
+                            Development Manual for an example.
+                            </para></listitem>
+                    </itemizedlist>
+                    </para></listitem>
+                <listitem>
+                    <para><emphasis>Install the SDK</emphasis>:
+                    The SDK provides a target-specific cross-development
+                    toolchain, the root filesystem, the QEMU emulator, and
+                    other tools that can help you develop your application.
+                    For information on how to install the SDK, see the
+                    "<link linkend='sdk-installing-the-sdk'>Installing the SDK</link>"
+                    section.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Secure the target root filesystem
+                    and the Cross-development toolchain</emphasis>:
+                    You need to find and download the appropriate root
+                    filesystem and the cross-development toolchain.</para>
+                    <para>You can find the tarballs for the root filesystem
+                    in the same area used for the kernel image.
+                    Depending on the type of image you are running, the
+                    root filesystem you need differs.
+                    For example, if you are developing an application that
+                    runs on an image that supports Sato, you need to get a
+                    root filesystem that supports Sato.</para>
+                    <para>You can find the cross-development toolchains at
+                    <ulink url='&YOCTO_TOOLCHAIN_DL_URL;'><filename>toolchains</filename></ulink>.
+                    Be sure to get the correct toolchain for your
+                    development host and your target architecture.
+                    See the "<link linkend='sdk-locating-pre-built-sdk-installers'>Locating Pre-Built SDK Installers</link>"
+                    section for information and the
+                    "<link linkend='sdk-installing-the-sdk'>Installing the SDK</link>"
+                    section for installation information.
+                    <note>
+                        As an alternative to downloading an SDK, you can
+                        build the SDK installer.
+                        For information on building the installer, see the
+                        "<link linkend='sdk-building-an-sdk-installer'>Building an SDK Installer</link>"
+                        section.
+                        Another helpful resource for building an installer
+                        is the
+                        <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>Cookbook guide to Making an Eclipse Debug Capable Image</ulink>
+                        wiki page.
+                    </note>
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Create and build your application</emphasis>:
+                    At this point, you need to have source files for your
+                    application.
+                    Once you have the files, you can use the Eclipse IDE
+                    to import them and build the project.
+                    If you are not using Eclipse, you need to use the
+                    cross-development tools you have installed to create
+                    the image.</para></listitem>
+                <listitem><para>
+                    <emphasis>Deploy the image with the
+                    application</emphasis>:
+                    Using the Eclipse IDE, you can deploy your image to the
+                    hardware or to QEMU through the project's preferences.
+                    You can also use Eclipse to load and test your image
+                    under QEMU.
+                    See the
+                    "<ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-qemu'>Using the Quick EMUlator (QEMU)</ulink>"
+                    chapter in the Yocto Project Development Tasks Manual
+                    for information on using QEMU.
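+                    For example, assuming you have built a
+                    <filename>qemux86</filename> image and your build
+                    environment is initialized, you could boot the image
+                    outside of Eclipse with the following command:
+                    <literallayout class='monospaced'>
+     $ runqemu qemux86
+                    </literallayout>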
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Test and debug the application</emphasis>:
+                    Once your application is deployed, you need to test it.
+                    Within the Eclipse IDE, you can use the debugging
+                    environment along with supported performance enhancing
+                    <ulink url='http://www.eclipse.org/linuxtools/'>Linux Tools</ulink>.
+                    </para></listitem>
+            </orderedlist>
+        </para>
+    </section>
+
+    <section id='adt-eclipse'>
+        <title>Working Within Eclipse</title>
+
+        <para>
+            The Eclipse IDE is a popular development environment and it
+            fully supports development using the Yocto Project.
+        </para>
+
+        <para>
+            When you install and configure the Eclipse Yocto Project
+            Plug-in into the Eclipse IDE, you maximize your Yocto
+            Project experience.
+            Installing and configuring the Plug-in results in an
+            environment that has extensions specifically designed to let
+            you more easily develop software.
+            These extensions allow for cross-compilation, deployment, and
+            execution of your output in a QEMU emulation session as well
+            as on actual target hardware.
+            You can also perform cross-debugging and profiling.
+            The environment also supports performance enhancing
+            <ulink url='http://www.eclipse.org/linuxtools/'>tools</ulink>
+            that allow you to perform remote profiling, tracing,
+            collection of power data, collection of latency data, and
+            collection of performance data.
+            <note>
+                This release of the Yocto Project supports both the Neon
+                and Mars versions of the Eclipse IDE.
+                This section provides information on how to use the Neon
+                release with the Yocto Project.
+                For information on how to use the Mars version of Eclipse
+                with the Yocto Project, see
+                "<link linkend='sdk-appendix-latest-yp-eclipse-plug-in'>Appendix C</link>".
+            </note>
+        </para>
+
+        <section id='neon-setting-up-the-eclipse-ide'>
+            <title>Setting Up the Neon Version of the Eclipse IDE</title>
+
+            <para>
+                To develop within the Eclipse IDE, you need to do the
+                following:
+                <orderedlist>
+                    <listitem><para>
+                        Install the Neon version of the Eclipse IDE.
+                        </para></listitem>
+                    <listitem><para>
+                        Configure the Eclipse IDE.
+                        </para></listitem>
+                    <listitem><para>
+                        Install the Eclipse Yocto Plug-in.
+                        </para></listitem>
+                    <listitem><para>
+                        Configure the Eclipse Yocto Plug-in.
+                        </para></listitem>
+                </orderedlist>
+                <note>
+                    Do not install Eclipse from your distribution's package
+                    repository.
+                    Be sure to install Eclipse from the official Eclipse
+                    download site as directed in the next section.
+                </note>
+            </para>
+
+            <section id='neon-installing-eclipse-ide'>
+                <title>Installing the Neon Eclipse IDE</title>
+
+                <para>
+                    Follow these steps to locate, install, and configure
+                    Neon Eclipse:
+                    <orderedlist>
+                        <listitem><para>
+                            <emphasis>Locate the Neon Download:</emphasis>
+                            Open a browser and go to
+                            <ulink url='http://www.eclipse.org/neon/'>http://www.eclipse.org/neon/</ulink>.
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis>Download the Tarball:</emphasis>
+                            Click through the "Download" buttons to
+                            download the file.
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis>Unpack the Tarball:</emphasis>
+                            Move to a clean directory and unpack the
+                            tarball.
+                            Here is an example:
+                            <literallayout class='monospaced'>
+     $ cd ~
+     $ tar -xzvf ~/Downloads/eclipse-inst-linux64.tar.gz
+                            </literallayout>
+                            Everything unpacks into a folder named
+                            "eclipse-installer".
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis>Launch the Installer:</emphasis>
+                            Use the following commands to launch the
+                            installer:
+                            <literallayout class='monospaced'>
+     $ cd ~/eclipse-installer
+     $ ./eclipse-inst
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis>Select Your IDE:</emphasis>
+                            From the list, select the "Eclipse IDE for
+                            C/C++ Developers".
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis>Install the Software:</emphasis>
+                            Accept the default "cpp-neon" directory and
+                            click "Install".
+                            Accept any license agreements and approve any
+                            certificates.
+                            </para></listitem>
+                        <listitem><para>
+                            <emphasis>Launch Neon:</emphasis>
+                            Click the "Launch" button and accept the
+                            default "workspace".
+                            </para></listitem>
+                    </orderedlist>
+                </para>
+            </section>
+
+            <section id='neon-configuring-the-mars-eclipse-ide'>
+                <title>Configuring the Neon Eclipse IDE</title>
+
+                <para>
+                    Follow these steps to configure the Neon Eclipse IDE.
+                    <note>
+                        Depending on how you installed Eclipse and what
+                        you have already done, some of the options will
+                        not appear.
+                        If you cannot find an option as directed by the
+                        manual, it has already been installed.
+                    </note>
+                    <orderedlist>
+                        <listitem><para>
+                            Be sure Eclipse is running and you are in your
+                            workbench.
+                            </para></listitem>
+                        <listitem><para>
+                            Select "Install New Software" from the "Help"
+                            pull-down menu.
+                            </para></listitem>
+                        <listitem><para>
+                            Select
+                            "Neon - http://download.eclipse.org/releases/neon"
+                            from the "Work with:" pull-down menu.
+                            </para></listitem>
+                        <listitem><para>
+                            Expand the box next to "Linux Tools" and select
+                            the following:
+                            <literallayout class='monospaced'>
+     C/C++ Remote (Over TCF/TE) Run/Debug Launcher
+     TM Terminal
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            Expand the box next to "Mobile and Device
+                            Development" and select the following
+                            boxes:
+                            <literallayout class='monospaced'>
+     C/C++ Remote (Over TCF/TE) Run/Debug Launcher
+     Remote System Explorer User Actions
+     TM Terminal
+     TCF Remote System Explorer add-in
+     TCF Target Explorer
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            Expand the box next to "Programming Languages"
+                            and select the following box:
+                            <literallayout class='monospaced'>
+     C/C++ Development Tools SDK
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para>
+                            Complete the installation by clicking through
+                            appropriate "Next" and "Finish" buttons.
+                            </para></listitem>
+                    </orderedlist>
+                </para>
+            </section>
+
+            <section id='neon-installing-the-eclipse-yocto-plug-in'>
+                <title>Installing or Accessing the Neon Eclipse Yocto Plug-in</title>
+
+                <para>
+                    You can install the Eclipse Yocto Plug-in into the
+                    Eclipse IDE in one of two ways: use the Yocto Project's
+                    Eclipse Update site to install the pre-built plug-in
+                    or build and install the plug-in from the latest
+                    source code.
+                </para>
+
+                <section id='neon-new-software'>
+                    <title>Installing the Pre-built Plug-in from the Yocto Project Eclipse Update Site</title>
+
+                    <para>
+                        To install the Neon Eclipse Yocto Plug-in from the
+                        update site, follow these steps:
+                        <orderedlist>
+                            <listitem><para>
+                                Start up the Eclipse IDE.
+                                </para></listitem>
+                            <listitem><para>
+                                In Eclipse, select "Install New
+                                Software" from the "Help" menu.
+                                </para></listitem>
+                            <listitem><para>
+                                Click "Add..." in the "Work with:" area.
+                                </para></listitem>
+                            <listitem><para>
+                                Enter
+                                <filename>&ECLIPSE_DL_PLUGIN_URL;/neon</filename>
+                                in the URL field and provide a meaningful
+                                name in the "Name" field.
+                                </para></listitem>
+                            <listitem><para>
+                                Click "OK" to have the entry added
+                                to the "Work with:" drop-down list.
+                                </para></listitem>
+                            <listitem><para>
+                                Select the entry for the plug-in
+                                from the "Work with:" drop-down list.
+                                </para></listitem>
+                            <listitem><para>
+                                Check the boxes next to the following:
+                                <literallayout class='monospaced'>
+     Yocto Project SDK Plug-in
+     Yocto Project Documentation plug-in
+                                </literallayout>
+                                </para></listitem>
+                            <listitem><para>
+                                Complete the remaining software
+                                installation steps and then restart the
+                                Eclipse IDE to finish the installation of
+                                the plug-in.
+                                <note>
+                                    You can click "OK" when prompted about
+                                    installing software that contains
+                                    unsigned content.
+                                </note>
+                                </para></listitem>
+                        </orderedlist>
+                    </para>
+                </section>
+
+                <section id='neon-zip-file-method'>
+                    <title>Installing the Plug-in Using the Latest Source Code</title>
+
+                    <para>
+                        To install the Neon Eclipse Yocto Plug-in from the
+                        latest source code, follow these steps:
+                        <orderedlist>
+                            <listitem><para>
+                                Be sure your development system
+                                has JDK 1.8 or newer installed.
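+                                You can check the installed version
+                                with, for example:
+                                <literallayout class='monospaced'>
+     $ java -version
+                                </literallayout>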
+                                </para></listitem>
+                            <listitem><para>
+                                Install X11-related packages:
+                                <literallayout class='monospaced'>
+     $ sudo apt-get install xauth
+                                </literallayout>
+                                </para></listitem>
+                            <listitem><para>
+                                In a new terminal shell, create a
+                                Git repository with:
+                                <literallayout class='monospaced'>
+     $ cd ~
+     $ git clone git://git.yoctoproject.org/eclipse-poky
+                                </literallayout>
+                                </para></listitem>
+                            <listitem><para>
+                                Use Git to check out the correct tag:
+                                <literallayout class='monospaced'>
+     $ cd ~/eclipse-poky
+     $ git checkout neon/yocto-&DISTRO;
+                                </literallayout>
+                                This checks out the tag named
+                                <filename>neon/yocto-&DISTRO;</filename>,
+                                which is based on the
+                                <filename>origin/neon-master</filename> branch.
+                                You are put into a detached HEAD state,
+                                which is fine since you are only going to
+                                be building and not developing.
+                                </para></listitem>
+                            <listitem><para>
+                                Change to the <filename>scripts</filename>
+                                directory within the Git repository:
+                                <literallayout class='monospaced'>
+     $ cd scripts
+                                </literallayout>
+                                </para></listitem>
+                            <listitem><para>
+                                Set up the local build environment
+                                by running the setup script:
+                                <literallayout class='monospaced'>
+     $ ./setup.sh
+                                </literallayout>
+                                When the script finishes execution,
+                                it prompts you with instructions on how to
+                                run the <filename>build.sh</filename>
+                                script, which is also in the
+                                <filename>scripts</filename> directory of
+                                the Git repository created earlier.
+                                </para></listitem>
+                            <listitem><para>
+                                Run the <filename>build.sh</filename>
+                                script as directed.
+                                Be sure to provide the tag name,
+                                documentation branch, and a release name.
+                                </para>
+                                <para>
+                                Following is an example:
+                                <literallayout class='monospaced'>
+     $ ECLIPSE_HOME=/home/scottrif/eclipse-poky/scripts/eclipse ./build.sh -l neon/yocto-&DISTRO; master yocto-&DISTRO; 2>&amp;1 | tee build.log
+                                </literallayout>
+                                The previous example command adds the tag
+                                you need for
+                                <filename>neon/yocto-&DISTRO;</filename>
+                                to <filename>HEAD</filename>, then tells
+                                the build script to use the local (-l) Git
+                                checkout for the build.
+                                After running the script, the file
+                                <filename>org.yocto.sdk-</filename><replaceable>release</replaceable><filename>-</filename><replaceable>date</replaceable><filename>-archive.zip</filename>
+                                is in the current directory.
+                                </para></listitem>
+                            <listitem><para>
+                                If necessary, start the Eclipse IDE
+                                and be sure you are in the Workbench.
+                                </para></listitem>
+                            <listitem><para>
+                                Select "Install New Software" from
+                                the "Help" pull-down menu.
+                                </para></listitem>
+                            <listitem><para>
+                                Click "Add".
+                                </para></listitem>
+                            <listitem><para>
+                                Provide anything you want in the
+                                "Name" field.
+                                </para></listitem>
+                            <listitem><para>
+                                Click "Archive" and browse to the
+                                ZIP file you built earlier.
+                                This ZIP file should not be "unzipped", and
+                                must be the
+                                <filename>*archive.zip</filename> file
+                                created by running the
+                                <filename>build.sh</filename> script.
+                                </para></listitem>
+                            <listitem><para>
+                                Click the "OK" button.
+                                </para></listitem>
+                            <listitem><para>
+                                Check the boxes that appear in
+                                the installation window to install the
+                                following:
+                                <literallayout class='monospaced'>
+     Yocto Project SDK Plug-in
+     Yocto Project Documentation plug-in
+                                </literallayout>
+                                </para></listitem>
+                            <listitem><para>
+                                Finish the installation by clicking
+                                through the appropriate buttons.
+                                You can click "OK" when prompted about
+                                installing software that contains unsigned
+                                content.
+                                </para></listitem>
+                            <listitem><para>
+                                Restart the Eclipse IDE if necessary.
+                                </para></listitem>
+                        </orderedlist>
+                    </para>
+
+                    <para>
+                        At this point you should be able to configure the
+                        Eclipse Yocto Plug-in as described in the
+                        "<link linkend='neon-configuring-the-eclipse-yocto-plug-in'>Configuring the Neon Eclipse Yocto Plug-in</link>"
+                        section.
+                    </para>
+                </section>
+            </section>
+
+            <section id='neon-configuring-the-eclipse-yocto-plug-in'>
+                <title>Configuring the Neon Eclipse Yocto Plug-in</title>
+
+                <para>
+                    Configuring the Neon Eclipse Yocto Plug-in involves
+                    setting the Cross Compiler options and the Target
+                    options.
+                    The configurations you choose become the default
+                    settings for all projects.
+                    You do have opportunities to change them later when
+                    you configure the project (see the following section).
+                </para>
+
+                <para>
+                    To start, you need to do the following from within the
+                    Eclipse IDE:
+                    <itemizedlist>
+                        <listitem><para>
+                            Choose "Preferences" from the "Window" menu to
+                            display the Preferences Dialog.
+                            </para></listitem>
+                        <listitem><para>
+                            Click "Yocto Project SDK" to display
+                            the configuration screen.
+                            </para></listitem>
+                    </itemizedlist>
+                    The following sub-sections describe how to configure
+                    the plug-in.
+                    <note>
+                        Throughout the descriptions, a start-to-finish
+                        example for preparing a QEMU image for use with
+                        Eclipse is referenced as the "wiki" and is linked
+                        to the example on the
+                        <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'> Cookbook guide to Making an Eclipse Debug Capable Image</ulink>
+                        wiki page.
+                    </note>
+                </para>
+
+                <section id='neon-configuring-the-cross-compiler-options'>
+                    <title>Configuring the Cross-Compiler Options</title>
+
+                    <para>
+                        Cross Compiler options enable Eclipse to use your
+                        specific cross compiler toolchain.
+                        To configure these options, you must select
+                        the type of toolchain, point to the toolchain,
+                        specify the sysroot location, and select the target
+                        architecture.
+                        <itemizedlist>
+                            <listitem><para>
+                                <emphasis>Selecting the Toolchain
+                                Type:</emphasis>
+                                Choose between
+                                <filename>Standalone pre-built toolchain</filename>
+                                and
+                                <filename>Build system derived toolchain</filename>
+                                for Cross Compiler Options.
+                                <itemizedlist>
+                                    <listitem><para>
+                                        <emphasis>
+                                        <filename>Standalone Pre-built Toolchain:</filename>
+                                        </emphasis>
+                                        Select this type when you are using
+                                        a stand-alone cross-toolchain.
+                                        For example, suppose you are an
+                                        application developer and do not
+                                        need to build a target image.
+                                        Instead, you just want to use an
+                                        architecture-specific toolchain on
+                                        an existing kernel and target root
+                                        filesystem.
+                                        In other words, you have downloaded
+                                        and installed a pre-built toolchain
+                                        for an existing image.
+                                        </para></listitem>
+                                    <listitem><para>
+                                        <emphasis>
+                                        <filename>Build System Derived Toolchain:</filename>
+                                        </emphasis>
+                                        Select this type if you built the
+                                        toolchain as part of the
+                                        <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
+                                        When you select
+                                        <filename>Build system derived toolchain</filename>,
+                                        you are using the toolchain built
+                                        and bundled inside the Build
+                                        Directory.
+                                        For example, suppose you created a
+                                        suitable image using the steps in the
+                                        <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
+                                        In this situation, you would select
+                                        the
+                                        <filename>Build system derived toolchain</filename>.
+                                        </para></listitem>
+                                </itemizedlist>
+                                </para></listitem>
+                            <listitem><para>
+                                <emphasis>Specify the Toolchain Root
+                                Location:</emphasis>
+                                If you are using a stand-alone pre-built
+                                toolchain, you should be pointing to where
+                                it is installed (e.g.
+                                <filename>/opt/poky/&DISTRO;</filename>).
+                                See the
+                                "<link linkend='sdk-installing-the-sdk'>Installing the SDK</link>"
+                                section for information about how the SDK is
+                                installed.</para>
+                                <para>If you are using a build system
+                                derived toolchain, the path you provide for
+                                the
+                                <filename>Toolchain Root Location</filename>
+                                field is the
+                                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
+                                from which you run the
+                                <filename>bitbake</filename> command (e.g.
+                                <filename>/home/scottrif/poky/build</filename>).
+                                </para>
+                                <para>For more information, see the
+                                "<link linkend='sdk-building-an-sdk-installer'>Building an SDK Installer</link>"
+                                section.
+                                </para></listitem>
+                            <listitem><para>
+                                <emphasis>Specify Sysroot Location:
+                                </emphasis>
+                                This location is where the root filesystem
+                                for the target hardware resides.
+                                </para>
+                                <para>This location depends on where you
+                                separately extracted and installed the
+                                target filesystem when you either built
+                                it or downloaded it.
+                                <note>
+                                    If you downloaded the root filesystem
+                                    for the target hardware rather than
+                                    built it, you must download the
+                                    <filename>sato-sdk</filename> image
+                                    in order to build any C/C++ projects.
+                                </note>
+                                As an example, suppose you prepared an
+                                image using the steps in the
+                                <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
+                                If so, the
+                                <filename>MY_QEMU_ROOTFS</filename>
+                                directory is found in the
+                                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
+                                and you would browse to and select that
+                                directory (e.g.
+                                <filename>/home/scottrif/poky/build/MY_QEMU_ROOTFS</filename>).
+                                </para>
+                                <para>For more information on how to
+                                install the toolchain and on how to extract
+                                and install the sysroot filesystem, see the
+                                "<link linkend='sdk-building-an-sdk-installer'>Building an SDK Installer</link>"
+                                section.
+                                </para></listitem>
+                            <listitem><para>
+                                <emphasis>Select the Target Architecture:
+                                </emphasis>
+                                The target architecture is the type of
+                                hardware you are going to use or emulate.
+                                Use the pull-down
+                                <filename>Target Architecture</filename>
+                                menu to make your selection.
+                                The pull-down menu should have the
+                                supported architectures.
+                                If the architecture you need is not listed
+                                in the menu, you will need to build the
+                                image.
+                                See the
+                                "<ulink url='&YOCTO_DOCS_QS_URL;#qs-building-images'>Building Images</ulink>"
+                                section of the Yocto Project Quick Start
+                                for more information.
+                                You can also see the
+                                <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
+                                </para></listitem>
+                        </itemizedlist>
+                    </para>
+                </section>
+
+                <section id='neon-configuring-the-target-options'>
+                    <title>Configuring the Target Options</title>
+
+                    <para>
+                        You can choose to emulate hardware using the QEMU
+                        emulator, or you can choose to run your image on
+                        actual hardware.
+                        <itemizedlist>
+                            <listitem><para>
+                                <emphasis>QEMU:</emphasis>
+                                Select this option if you will be using the
+                                QEMU emulator.
+                                If you are using the emulator, you also
+                                need to locate the kernel and specify any
+                                custom options.</para>
+                                <para>If you selected the
+                                <filename>Build system derived toolchain</filename>,
+                                the target kernel you built will be located
+                                in the
+                                <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
+                                in the
+                                <filename>tmp/deploy/images/<replaceable>machine</replaceable></filename>
+                                directory.
+                                As an example, suppose you performed the
+                                steps in the
+                                <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
+                                In this case, you specify your Build
+                                Directory path followed by the image (e.g.
+                                <filename>/home/scottrif/poky/build/tmp/deploy/images/qemux86/bzImage-qemux86.bin</filename>).
+                                </para>
+                                <para>If you selected the standalone
+                                pre-built toolchain, the pre-built image
+                                you downloaded is located in the directory
+                                you specified when you downloaded the
+                                image.</para>
+                                <para>Most custom options are for advanced
+                                QEMU users to further customize their QEMU
+                                instance.
+                                These options are specified between paired
+                                angled brackets.
+                                Some options must be specified outside the
+                                brackets.
+                                In particular, the options
+                                <filename>serial</filename>,
+                                <filename>nographic</filename>, and
+                                <filename>kvm</filename> must all be
+                                outside the brackets.
+                                Use the <filename>man qemu</filename>
+                                command to get help on all the options and
+                                their use.
+                                The following is an example:
+                               <literallayout class='monospaced'>
+    serial ‘&lt;-m 256 -full-screen&gt;’
+                                </literallayout></para>
+                                <para>
+                                Regardless of the mode, Sysroot is already
+                                defined as part of the Cross-Compiler
+                                Options configuration in the
+                                <filename>Sysroot Location:</filename>
+                                field.
+                                </para></listitem>
+                            <listitem><para>
+                                <emphasis>External HW:</emphasis>
+                                Select this option if you will be using
+                                actual hardware.</para></listitem>
+                        </itemizedlist>
+                    </para>
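+
+                    <para>
+                        The following is a minimal terminal sketch of
+                        locating the kernel image discussed above.
+                        The Build Directory path and machine name are
+                        examples only and will differ on your system:
+                        <literallayout class='monospaced'>
+     $ cd /home/scottrif/poky/build
+     $ ls tmp/deploy/images/qemux86/bzImage-qemux86.bin
+     tmp/deploy/images/qemux86/bzImage-qemux86.bin
+                        </literallayout>
+                        If the listing succeeds, prepend your Build
+                        Directory path to the file shown and enter the
+                        result in the kernel field.
+                    </para>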
+
+                    <para>
+                        Click "Apply" and then "OK" to save your plug-in
+                        configurations.
+                    </para>
+                </section>
+            </section>
+        </section>
+
+        <section id='neon-creating-the-project'>
+            <title>Creating the Project</title>
+
+            <para>
+                You can create two types of projects: Autotools-based or
+                Makefile-based.
+                This section describes how to create Autotools-based
+                projects from within the Eclipse IDE.
+                For information on creating Makefile-based projects in a
+                terminal window, see the
+                "<link linkend='makefile-based-projects'>Makefile-Based Projects</link>"
+                section.
+                <note>
+                    Do not use special characters in project names
+                    (e.g. spaces, underscores, etc.).  Doing so can
+                    cause configuration to fail.
+                </note>
+            </para>
+
+            <para>
+                To create a project based on a Yocto template and then
+                display the source code, follow these steps:
+                <orderedlist>
+                    <listitem><para>
+                        Select "C Project" from the "File -> New" menu.
+                        </para></listitem>
+                    <listitem><para>
+                        Expand
+                        <filename>Yocto Project SDK Autotools Project</filename>.
+                        </para></listitem>
+                    <listitem><para>
+                        Select <filename>Hello World ANSI C Autotools Projects</filename>.
+                        This is an Autotools-based project based on a Yocto
+                        template.
+                        </para></listitem>
+                    <listitem><para>
+                        Put a name in the
+                        <filename>Project name:</filename> field.
+                        Do not use hyphens as part of the name;
+                        a simple name such as <filename>hello</filename>
+                        works well.
+                        </para></listitem>
+                    <listitem><para>
+                        Click "Next".
+                        </para></listitem>
+                    <listitem><para>
+                        Add appropriate information in the various fields.
+                        </para></listitem>
+                    <listitem><para>
+                        Click "Finish".
+                        </para></listitem>
+                    <listitem><para>
+                        If the "open perspective" prompt appears,
+                        click "Yes" so that you are in the C/C++ perspective.
+                        </para></listitem>
+                    <listitem><para>The left-hand navigation pane shows
+                        your project.
+                        You can display your source by double clicking the
+                        project's source file.
+                        </para></listitem>
+                </orderedlist>
+            </para>
+        </section>
+
+        <section id='neon-configuring-the-cross-toolchains'>
+            <title>Configuring the Cross-Toolchains</title>
+
+            <para>
+                The earlier section,
+                "<link linkend='neon-configuring-the-eclipse-yocto-plug-in'>Configuring the Neon Eclipse Yocto Plug-in</link>",
+                sets up the default project configurations.
+                You can override these settings for a given project by
+                following these steps:
+                <orderedlist>
+                    <listitem><para>
+                        Select "Yocto Project Settings" from
+                        the "Project -> Properties" menu.
+                        This selection brings up the Yocto Project Settings
+                        Dialog and allows you to make changes specific to
+                        an individual project.</para>
+                        <para>By default, the Cross Compiler Options and
+                        Target Options for a project are inherited from
+                        settings you provided using the Preferences Dialog
+                        as described earlier in the
+                        "<link linkend='neon-configuring-the-eclipse-yocto-plug-in'>Configuring the Neon Eclipse Yocto Plug-in</link>"
+                        section.
+                        The Yocto Project Settings Dialog allows you to
+                        override those default settings for a given
+                        project.
+                        </para></listitem>
+                    <listitem><para>
+                        Make or verify your configurations for the
+                        project and click "OK".
+                        </para></listitem>
+                    <listitem><para>
+                        Right-click in the navigation pane and
+                        select "Reconfigure Project" from the pop-up menu.
+                        This selection reconfigures the project by running
+                        <filename>autogen.sh</filename> in the workspace
+                        for your project.
+                        The script also runs
+                        <filename>libtoolize</filename>,
+                        <filename>aclocal</filename>,
+                        <filename>autoconf</filename>,
+                        <filename>autoheader</filename>,
+                        <filename>automake --add-missing</filename>, and
+                        <filename>./configure</filename>.
+                        Click on the "Console" tab beneath your source code
+                        to see the results of reconfiguring your project.
+                        For reference, a sketch of the equivalent terminal
+                        sequence appears after this list.
+                        </para></listitem>
+                </orderedlist>
+            </para>
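+
+            <para>
+                For reference, the following is a minimal sketch of the
+                equivalent terminal sequence.
+                It assumes you run the commands from the project directory
+                in your workspace after sourcing the SDK environment setup
+                script, which typically defines
+                <filename>CONFIGURE_FLAGS</filename>; the options Eclipse
+                actually passes may differ:
+                <literallayout class='monospaced'>
+     $ libtoolize
+     $ aclocal
+     $ autoconf
+     $ autoheader
+     $ automake --add-missing
+     $ ./configure ${CONFIGURE_FLAGS}   # set by the environment setup script
+                </literallayout>
+            </para>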
+         </section>
+
+         <section id='neon-building-the-project'>
+            <title>Building the Project</title>
+             <para>
+                To build the project, select "Build All" from the
+                "Project" menu.
+                The console updates so you can note the
+                cross-compiler you are using; a terminal sketch for
+                confirming the cross-compiler appears at the end of
+                this section.
+                <note>
+                    When building "Yocto Project SDK Autotools" projects,
+                    the Eclipse IDE might display error messages for
+                    Functions/Symbols/Types that cannot be "resolved",
+                    even when the related include file is listed in the
+                    project navigator and the project builds
+                    successfully.
+                    For these cases only, it is recommended to add a new
+                    linked folder to the appropriate sysroot.
+                    Use these steps to add the linked folder:
+                    <orderedlist>
+                        <listitem><para>
+                            Select the project.
+                            </para></listitem>
+                        <listitem><para>
+                            Select "Folder" from the
+                            "File -> New" menu.
+                            </para></listitem>
+                        <listitem><para>
+                            In the "New Folder" Dialog, select "Link to
+                            alternate location (linked folder)".
+                            </para></listitem>
+                        <listitem><para>
+                            Click "Browse" to navigate to the include
+                            folder inside the same sysroot location
+                            selected in the Yocto Project
+                            configuration preferences.
+                            </para></listitem>
+                        <listitem><para>
+                            Click "OK".
+                            </para></listitem>
+                        <listitem><para>
+                            Click "Finish" to save the linked folder.
+                            </para></listitem>
+                    </orderedlist>
+                </note>
+            </para>
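+
+            <para>
+                If you want to confirm which cross-compiler the underlying
+                SDK provides, one approach is to open a terminal, source
+                the SDK environment setup script, and inspect the
+                <filename>CC</filename> variable.
+                The installation path, setup script name, and output shown
+                below are examples only and depend on your SDK and target:
+                <literallayout class='monospaced'>
+     $ . /opt/poky/2.4/environment-setup-i586-poky-linux   # example install path
+     $ echo $CC
+     i586-poky-linux-gcc -m32 -march=i586 --sysroot=/opt/poky/2.4/sysroots/i586-poky-linux
+                </literallayout>
+            </para>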
+        </section>
+
+        <section id='neon-starting-qemu-in-user-space-nfs-mode'>
+            <title>Starting QEMU in User-Space NFS Mode</title>
+
+            <para>
+                To start the QEMU emulator from within Eclipse, follow
+                these steps:
+                <note>
+                    See the
+                    "<ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-qemu'>Using the Quick EMUlator (QEMU)</ulink>"
+                    chapter in the Yocto Project Development Tasks Manual
+                    for more information on using QEMU.
+                </note>
+                <orderedlist>
+                    <listitem><para>Open the "Run -> External Tools" menu
+                        and select "External Tools Configurations ...".
+                        </para></listitem>
+                    <listitem><para>
+                        Locate and select your image in the navigation
+                        panel to the left
+                        (e.g. <filename>qemu_i586-poky-linux</filename>).
+                        </para></listitem>
+                    <listitem><para>
+                        Click "Run" to launch QEMU.
+                        <note>
+                            The host on which you are running QEMU must
+                            have the <filename>rpcbind</filename> utility
+                            running to be able to make RPC calls on a
+                            server on that machine.
+                            If QEMU does not launch and you receive error
+                            messages involving
+                            <filename>rpcbind</filename>, follow these
+                            suggestions to get the service running.
+                            As an example, on a new Ubuntu 16.04 LTS
+                            installation, you must do the following in
+                            order to get QEMU to launch:
+                            <literallayout class='monospaced'>
+     $ sudo apt-get install rpcbind
+                            </literallayout>
+                            After installing <filename>rpcbind</filename>,
+                            you need to edit the
+                            <filename>/etc/init.d/rpcbind</filename> file
+                            to include the following line:
+                            <literallayout class='monospaced'>
+     OPTIONS="-i -w"
+                            </literallayout>
+                            After modifying the file, you need to start the
+                            service:
+                            <literallayout class='monospaced'>
+     $ sudo service portmap restart
+                            </literallayout>
+                        </note>
+                        </para></listitem>
+                    <listitem><para>
+                        If needed, enter your host root password in
+                        the shell window at the prompt.
+                        This sets up a <filename>Tap 0</filename>
+                        connection needed for running in user-space NFS
+                        mode.
+                        </para></listitem>
+                    <listitem><para>
+                        Wait for QEMU to launch.
+                        </para></listitem>
+                    <listitem><para>
+                        Once QEMU launches, you can begin operating
+                        within that environment.
+                        One useful first task is to determine the
+                        IP address the QEMU instance is using for the
+                        user-space NFS by running the
+                        <filename>ifconfig</filename> command.
+                        The address also appears in the xterm window
+                        that opens when QEMU launches; see the example
+                        at the end of this section.
+                        </para></listitem>
+                </orderedlist>
+            </para>
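+
+            <para>
+                As an illustration, the following sketch shows the kind of
+                command you might run inside QEMU to find the address.
+                The output is abbreviated and illustrative; the address
+                shown is simply the usual QEMU default:
+                <literallayout class='monospaced'>
+     # ifconfig eth0
+     eth0      Link encap:Ethernet  HWaddr ...
+               inet addr:192.168.7.2  Bcast:192.168.7.255  Mask:255.255.255.0
+     ...
+                </literallayout>
+            </para>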
+        </section>
+
+        <section id='neon-deploying-and-debugging-the-application'>
+            <title>Deploying and Debugging the Application</title>
+
+            <para>
+                Once the QEMU emulator is running the image, you can deploy
+                your application using the Eclipse IDE and then use
+                the emulator to perform debugging.
+                Follow these steps to deploy the application.
+                <note>
+                    Currently, Eclipse does not support SSH port
+                    forwarding.
+                    Consequently, if you need to run or debug a remote
+                    application using the host display, you must create a
+                    tunneling connection from outside Eclipse and keep
+                    that connection alive during your work.
+                    For example, in a new terminal, run the following:
+                    <literallayout class='monospaced'>
+     $ ssh -XY <replaceable>user_name</replaceable>@<replaceable>remote_host_ip</replaceable>
+                    </literallayout>
+                    Using the above form, here is an example:
+                    <literallayout class='monospaced'>
+     $ ssh -XY root@192.168.7.2
+                    </literallayout>
+                    After running the command, add the following command
+                    to Eclipse's run configuration so that it executes
+                    before the application:
+                    <literallayout class='monospaced'>
+     export DISPLAY=:10.0
+                    </literallayout>
+                    Keep the connection alive for your entire QEMU
+                    session (i.e. do not
+                    exit out of or close that shell).
+                </note>
+                <orderedlist>
+                    <listitem><para>
+                        Select "Debug Configurations..." from the
+                        "Run" menu.
+                        </para></listitem>
+                    <listitem><para>
+                        In the left area, expand
+                        <filename>C/C++ Remote Application</filename>.
+                        </para></listitem>
+                    <listitem><para>
+                        Locate your project and select it to bring
+                        up a new tabbed view in the Debug Configurations
+                        Dialog.
+                        </para></listitem>
+                    <listitem><para>
+                        Click on the "Debugger" tab to see the
+                        cross-tool debugger you are using.
+                        Be sure to change to the debugger perspective in
+                        Eclipse.
+                        </para></listitem>
+                    <listitem><para>
+                        Click on the "Main" tab.
+                        </para></listitem>
+                    <listitem><para>
+                        Create a new connection to the QEMU instance
+                        by clicking on "new".</para></listitem>
+                    <listitem><para>Select <filename>SSH</filename>, which
+                        stands for Secure Shell, and then click "OK".
+                        Optionally, you can select a TCF connection
+                        instead.
+                        </para></listitem>
+                    <listitem><para>
+                        Clear out the "Connection name" field and
+                        enter any name you want for the connection.
+                        </para></listitem>
+                    <listitem><para>
+                        Put the IP address for the connection in
+                        the "Host" field.
+                        For QEMU, the default is
+                        <filename>192.168.7.2</filename>.
+                        However, if a previous QEMU session did not exit
+                        cleanly, the IP address increments (e.g.
+                        <filename>192.168.7.3</filename>).
+                        <note>
+                            You can find the IP address for the current
+                            QEMU session by looking in the xterm that
+                            opens when you launch QEMU.
+                        </note>
+                        </para></listitem>
+                    <listitem><para>
+                        Enter <filename>root</filename>, which
+                        is the default for QEMU, for the "User" field.
+                        Be sure to leave the password field empty.
+                        </para></listitem>
+                    <listitem><para>
+                        Click "Finish" to close the New Connections Dialog.
+                        </para></listitem>
+                    <listitem><para>
+                        If necessary, use the drop-down menu now in the
+                        "Connection" field and pick the IP Address you
+                        entered.
+                        </para></listitem>
+                    <listitem><para>
+                        Assuming you are connecting as the root
+                        user, which is the default for QEMU x86-64 SDK
+                        images provided by the Yocto Project, in the
+                        "Remote Absolute File Path for C/C++ Application"
+                        field, browse to
+                        <filename>/home/root/</filename><replaceable>ProjectName</replaceable>
+                        (e.g. <filename>/home/root/hello</filename>).
+                        You could also browse to any other path you have
+                        write access to on the target such as
+                        <filename>/usr/bin</filename>.
+                        This location is where your application will be
+                        located on the QEMU system.
+                        If you fail to browse to and specify an appropriate
+                        location, QEMU does not know what to launch
+                        remotely.
+                        Eclipse helpfully auto-fills your application name
+                        for you, assuming you browsed to a directory.
+                        <note><title>Tips</title>
+                            <itemizedlist>
+                                <listitem><para>
+                                    If you are prompted to provide a username
+                                    and to optionally set a password, be sure
+                                    you provide "root" as the username and you
+                                    leave the password field blank.
+                                    </para></listitem>
+                                <listitem><para>
+                                    If browsing to a directory fails or times
+                                    out, but you can
+                                    <filename>ssh</filename> into your QEMU
+                                    or target from the command line (a quick
+                                    check is shown at the end of this
+                                    section) and you have proxies set up,
+                                    it is likely that Eclipse is sending the
+                                    SSH traffic to a proxy.
+                                    In this case, either use TCF, or click on
+                                    "Configure proxy settings" in the
+                                    connection dialog and add the target IP
+                                    address to the "bypass proxy" section.
+                                    You might also need to change
+                                    "Active Provider" from Native to Manual.
+                                    </para></listitem>
+                            </itemizedlist>
+                        </note>
+                        </para></listitem>
+                    <listitem><para>
+                        Be sure you change to the "Debug" perspective in
+                        Eclipse.
+                        </para></listitem>
+                    <listitem><para>
+                        Click "Debug".
+                        </para></listitem>
+                    <listitem><para>
+                        Accept the debug perspective.
+                        </para></listitem>
+                </orderedlist>
+            </para>
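+
+            <para>
+                Before or while setting up the connection, it can help to
+                confirm from a terminal that the target accepts an SSH
+                login with the same settings you plan to use in Eclipse.
+                The address below is the usual QEMU default mentioned
+                above; adjust it if your session reports a different one:
+                <literallayout class='monospaced'>
+     $ ssh root@192.168.7.2
+                </literallayout>
+                If this login succeeds without a password, the same host
+                and user values should work for the Eclipse connection.
+            </para>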
+        </section>
+
+        <section id='neon-using-Linuxtools'>
+            <title>Using Linuxtools</title>
+
+            <para>
+                As mentioned earlier in the manual, the Linuxtools
+                performance tools enhance your development experience.
+                These tools aid in developing and debugging
+                applications and images.
+                You can run the tools from within the Eclipse IDE through
+                the "Linuxtools" menu.
+            </para>
+
+            <para>
+                For information on how to configure and use these tools,
+                see
+                <ulink url='http://www.eclipse.org/linuxtools/'>http://www.eclipse.org/linuxtools/</ulink>.
+            </para>
+        </section>
+    </section>
+</chapter>
+<!--
+vim: expandtab tw=80 ts=4
+-->
diff --git a/import-layers/yocto-poky/documentation/sdk-manual/sdk-extensible.xml b/import-layers/yocto-poky/documentation/sdk-manual/sdk-extensible.xml
index 1496476..444d816 100644
--- a/import-layers/yocto-poky/documentation/sdk-manual/sdk-extensible.xml
+++ b/import-layers/yocto-poky/documentation/sdk-manual/sdk-extensible.xml
@@ -15,7 +15,7 @@
         to an image, modify the source for an existing component, test
         changes on the target hardware, and ease integration into the rest of
         the
-        <ulink url='&YOCTO_DOCS_DEV_URL;#build-system-term'>OpenEmbedded build system</ulink>.
+        <ulink url='&YOCTO_DOCS_REF_URL;#build-system-term'>OpenEmbedded build system</ulink>.
         <note>
             For a side-by-side comparison of main features supported for an
             extensible SDK as compared to a standard SDK, see the
@@ -235,15 +235,28 @@
             you build, test and package software within the extensible SDK, and
             optionally integrate it into an image built by the OpenEmbedded
             build system.
+            <note><title>Tip</title>
+                The use of <filename>devtool</filename> is not limited to
+                the extensible SDK.
+                You can use <filename>devtool</filename> to help you easily
+                develop any project whose build output must be part of an
+                image built using the OpenEmbedded build system.
+            </note>
         </para>
 
         <para>
             The <filename>devtool</filename> command line is organized
             similarly to
-            <ulink url='&YOCTO_DOCS_DEV_URL;#git'>Git</ulink> in that it has a
+            <ulink url='&YOCTO_DOCS_REF_URL;#git'>Git</ulink> in that it has a
             number of sub-commands for each function.
             You can run <filename>devtool --help</filename> to see all the
             commands.
+            <note>
+                See the
+                "<ulink url='&YOCTO_DOCS_REF_URL;#ref-devtool-reference'><filename>devtool</filename>&nbsp;Quick Reference</ulink>"
+                in the Yocto Project Reference Manual for a summary of
+                <filename>devtool</filename> commands.
+            </note>
         </para>
 
         <para>
@@ -293,7 +306,7 @@
                 The <filename>devtool add</filename> command generates
                 a new recipe based on existing source code.
                 This command takes advantage of the
-                <ulink url='&YOCTO_DOCS_DEV_URL;#devtool-the-workspace-layer-structure'>workspace</ulink>
+                <ulink url='&YOCTO_DOCS_REF_URL;#devtool-the-workspace-layer-structure'>workspace</ulink>
                 layer that many <filename>devtool</filename> commands
                 use.
                 The command is flexible enough to allow you to extract source
@@ -612,7 +625,7 @@
                                 extracts them.
                                 Providing the <replaceable>srctree</replaceable>
                                 argument instructs <filename>devtool</filename> where
-                                place the extracted source.</para>
+                                to place the extracted source.</para>
 
                                 <para>Within workspace, <filename>devtool</filename>
                                 creates an append file for the recipe.
@@ -1262,7 +1275,7 @@
         <title>Working With Recipes</title>
 
         <para>
-            When building a recipe with <filename>devtool build</filename> the
+            When building a recipe with <filename>devtool build</filename>, the
             typical build progression is as follows:
             <orderedlist>
                 <listitem><para>
diff --git a/import-layers/yocto-poky/documentation/sdk-manual/sdk-intro.xml b/import-layers/yocto-poky/documentation/sdk-manual/sdk-intro.xml
index 6840169..b6925fa 100644
--- a/import-layers/yocto-poky/documentation/sdk-manual/sdk-intro.xml
+++ b/import-layers/yocto-poky/documentation/sdk-manual/sdk-intro.xml
@@ -9,8 +9,8 @@
     <title>Introduction</title>
 
     <para>
-        Welcome to the Yocto Project Software Development Kit (SDK)
-        Developer's Guide.
+        Welcome to the Yocto Project Application Development and the
+        Extensible Software Development Kit (eSDK) manual.
         This manual provides information that explains how to use both the
         Yocto Project extensible and standard SDKs to develop
         applications and images.
@@ -52,7 +52,7 @@
         new applications and libraries to an image, modify the source of an
         existing component, test changes on the target hardware, and easily
         integrate an application into the
-        <ulink url='&YOCTO_DOCS_DEV_URL;#build-system-term'>OpenEmbedded build system</ulink>.
+        <ulink url='&YOCTO_DOCS_REF_URL;#build-system-term'>OpenEmbedded build system</ulink>.
     </para>
 
     <para>
@@ -93,7 +93,7 @@
                 matching sysroots (target and native) all built by the
                 OpenEmbedded build system (e.g. the SDK).
                 The toolchain and sysroots are based on a
-                <ulink url='&YOCTO_DOCS_DEV_URL;#metadata'>Metadata</ulink>
+                <ulink url='&YOCTO_DOCS_REF_URL;#metadata'>Metadata</ulink>
                 configuration and extensions,
                 which allows you to cross-develop on the host machine for the
                 target hardware.
@@ -207,7 +207,7 @@
 
         <para>
             The
-            <ulink url='&YOCTO_DOCS_DEV_URL;#cross-development-toolchain'>Cross-Development Toolchain</ulink>
+            <ulink url='&YOCTO_DOCS_REF_URL;#cross-development-toolchain'>Cross-Development Toolchain</ulink>
             consists of a cross-compiler, cross-linker, and cross-debugger
             that are used to develop user-space applications for targeted
             hardware.
@@ -215,7 +215,7 @@
             built-in <filename>devtool</filename> functionality.
             This toolchain is created by running a SDK installer script
             or through a
-            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
+            <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>
             that is based on your Metadata configuration or extension for
             your targeted device.
             The cross-toolchain works with a matching target sysroot.
@@ -245,16 +245,15 @@
                 <listitem><para>
                     If you have cloned the <filename>poky</filename> Git
                     repository to create a
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
                     and you have sourced the environment setup script, QEMU is
                     installed and automatically available.
                     </para></listitem>
                 <listitem><para>
                     If you have downloaded a Yocto Project release and unpacked
-                    it to create a
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
-                    and you have sourced the environment setup script, QEMU is
-                    installed and automatically available.
+                    it to create a Source Directory and you have sourced the
+                    environment setup script, QEMU is installed and
+                    automatically available.
                     </para></listitem>
                 <listitem><para>
                     If you have installed the cross-toolchain tarball and you
@@ -295,8 +294,8 @@
             For information about the application development workflow that
             uses the Eclipse IDE and for a detailed example of how to install
             and configure the Eclipse Yocto Project Plug-in, see the
-            "<link linkend='sdk-developing-applications-using-eclipse'>Developing Applications Using <trademark class='trade'>Eclipse</trademark></link>"
-            section.
+            "<link linkend='sdk-eclipse-project'>Developing Applications Using <trademark class='trade'>Eclipse</trademark></link>"
+            Chapter.
         </para>
     </section>
 
@@ -385,7 +384,7 @@
                 to download and learn about the emulator.
                 See the
                 "<ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-qemu'>Using the Quick EMUlator (QEMU)</ulink>"
-                chapter in the Yocto Project Development Manual
+                chapter in the Yocto Project Development Tasks Manual
                 for information on using QEMU within the Yocto
                 Project.</para></listitem>
         </orderedlist>
diff --git a/import-layers/yocto-poky/documentation/sdk-manual/sdk-manual.xml b/import-layers/yocto-poky/documentation/sdk-manual/sdk-manual.xml
index 5c28e34..7fc0472 100644
--- a/import-layers/yocto-poky/documentation/sdk-manual/sdk-manual.xml
+++ b/import-layers/yocto-poky/documentation/sdk-manual/sdk-manual.xml
@@ -17,14 +17,14 @@
         </mediaobject>
 
         <title>
-            Yocto Project Software Development Kit (SDK) Developer's Guide
+            Yocto Project Application Development and the Extensible Software Development Kit (eSDK)
         </title>
 
         <authorgroup>
             <author>
                 <firstname>Scott</firstname> <surname>Rifenbark</surname>
                 <affiliation>
-                    <orgname>Scotty's Documentation Services, LLC</orgname>
+                    <orgname>Scotty's Documentation Services, INC</orgname>
                 </affiliation>
                 <email>srifenbark@gmail.com</email>
             </author>
@@ -47,24 +47,19 @@
                 <revremark>Released with the Yocto Project 2.3 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.1</revnumber>
-                <date>June 2017</date>
-                <revremark>Released with the Yocto Project 2.3.1 Release.</revremark>
+                <revnumber>2.4</revnumber>
+                <date>October 2017</date>
+                <revremark>Released with the Yocto Project 2.4 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.2</revnumber>
-                <date>September 2017</date>
-                <revremark>Released with the Yocto Project 2.3.2 Release.</revremark>
-            </revision>
-            <revision>
-                <revnumber>2.3.3</revnumber>
+                <revnumber>2.4.1</revnumber>
                 <date>January 2018</date>
-                <revremark>Released with the Yocto Project 2.3.3 Release.</revremark>
+                <revremark>Released with the Yocto Project 2.4.1 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.4</revnumber>
-                <date>April 2018</date>
-                <revremark>Released with the Yocto Project 2.3.4 Release.</revremark>
+                <revnumber>2.4.2</revnumber>
+                <date>March 2018</date>
+                <revremark>Released with the Yocto Project 2.4.2 Release.</revremark>
             </revision>
        </revhistory>
 
@@ -81,28 +76,29 @@
            <note><title>Manual Notes</title>
                <itemizedlist>
                    <listitem><para>
-                       For the latest version of the Yocto Project Software
-                       Development Kit (SDK) Developer's Guide associated with
-                       this Yocto Project release (version &YOCTO_DOC_VERSION;),
-                       see the Yocto Project Software Development Kit (SDK)
-                       Developer's Guide from the
+                       This version of the
+                       <emphasis>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</emphasis>
+                       manual is for the &YOCTO_DOC_VERSION; release of the
+                       Yocto Project.
+                       To be sure you have the latest version of the manual
+                       for this release, use the manual from the
                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
                        </para></listitem>
                    <listitem><para>
-                       This version of the manual is version
-                       &YOCTO_DOC_VERSION;.
-                       For later releases of the Yocto Project (if they exist),
-                       go to the
+                       For manuals associated with other releases of the Yocto
+                       Project, go to the
                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
                        and use the drop-down "Active Releases" button
-                       and choose the Yocto Project version for which you want
-                       the manual.
+                       and choose the manual associated with the desired
+                       Yocto Project release.
                        </para></listitem>
                    <listitem><para>
-                       For an in-development version of the Yocto Project
-                       Software Development Kit (SDK) Developer's Guide, see
-                       <ulink url='&YOCTO_DOCS_URL;/latest/sdk-manual/sdk-manual.html'></ulink>.
-                       </para></listitem>
+                        To report any inaccuracies or problems with this
+                        manual, send an email to the Yocto Project
+                        discussion group at
+                        <filename>yocto@yoctoproject.org</filename> or log into
+                        the freenode <filename>#yocto</filename> channel.
+                        </para></listitem>
                </itemizedlist>
            </note>
 
@@ -118,6 +114,8 @@
 
     <xi:include href="sdk-working-projects.xml"/>
 
+    <xi:include href="sdk-eclipse-project.xml"/>
+
     <xi:include href="sdk-appendix-obtain.xml"/>
 
     <xi:include href="sdk-appendix-customizing.xml"/>
diff --git a/import-layers/yocto-poky/documentation/sdk-manual/sdk-working-projects.xml b/import-layers/yocto-poky/documentation/sdk-manual/sdk-working-projects.xml
index 54bc4d7..6965e3f 100644
--- a/import-layers/yocto-poky/documentation/sdk-manual/sdk-working-projects.xml
+++ b/import-layers/yocto-poky/documentation/sdk-manual/sdk-working-projects.xml
@@ -10,8 +10,9 @@
         You can use the SDK toolchain directly with Makefile,
         Autotools, and <trademark class='trade'>Eclipse</trademark> based
         projects.
-        This chapter covers information specific to each of these types of
-        projects.
+        This chapter covers the first two, while the
+        "<link linkend='sdk-eclipse-project'>Developing Applications Using <trademark class='trade'>Eclipse</trademark></link>"
+        Chapter covers the third.
     </para>
 
     <section id='autotools-based-projects'>
@@ -276,1184 +277,6 @@
             </note>
         </para>
     </section>
-
-    <section id='sdk-developing-applications-using-eclipse'>
-        <title>Developing Applications Using <trademark class='trade'>Eclipse</trademark></title>
-
-        <para>
-            If you are familiar with the popular Eclipse IDE, you can use an
-            Eclipse Yocto Plug-in to allow you to develop, deploy, and test your
-            application all from within Eclipse.
-            This section describes general workflow using the SDK and Eclipse
-            and how to configure and set up Eclipse.
-        </para>
-
-        <section id='workflow-using-eclipse'>
-            <title>Workflow Using <trademark class='trade'>Eclipse</trademark></title>
-
-            <para>
-                The following figure and supporting list summarize the
-                application development general workflow that employs both the
-                SDK Eclipse.
-            </para>
-
-            <para>
-                <imagedata fileref="figures/sdk-eclipse-dev-flow.png"
-                    width="7in" depth="7in" align="center" scale="100" />
-            </para>
-
-            <para>
-                <orderedlist>
-                    <listitem><para>
-                        <emphasis>Prepare the host system for the Yocto
-                        Project</emphasis>:
-                        See
-                        "<ulink url='&YOCTO_DOCS_REF_URL;#detailed-supported-distros'>Supported Linux Distributions</ulink>"
-                        and
-                        "<ulink url='&YOCTO_DOCS_REF_URL;#required-packages-for-the-host-development-system'>Required Packages for the Host Development System</ulink>"
-                        sections both in the Yocto Project Reference Manual for
-                        requirements.
-                        In particular, be sure your host system has the
-                        <filename>xterm</filename> package installed.
-                        </para></listitem>
-                    <listitem><para>
-                        <emphasis>Secure the Yocto Project kernel target
-                        image</emphasis>:
-                        You must have a target kernel image that has been built
-                        using the OpenEmbedded build system.</para>
-                        <para>Depending on whether the Yocto Project has a
-                        pre-built image that matches your target architecture
-                        and where you are going to run the image while you
-                        develop your application (QEMU or real hardware), the
-                        area from which you get the image differs.
-                        <itemizedlist>
-                            <listitem><para>
-                                Download the image from
-                                <ulink url='&YOCTO_MACHINES_DL_URL;'><filename>machines</filename></ulink>
-                                if your target architecture is supported and
-                                you are going to develop and test your
-                                application on actual hardware.
-                                </para></listitem>
-                            <listitem><para>
-                                Download the image from
-                                <ulink url='&YOCTO_QEMU_DL_URL;'>
-                                <filename>machines/qemu</filename></ulink> if
-                                your target architecture is supported and you
-                                are going to develop and test your application
-                                using the QEMU emulator.
-                                </para></listitem>
-                            <listitem><para>
-                                Build your image if you cannot find a pre-built
-                                image that matches your target architecture.
-                                If your target architecture is similar to a
-                                supported architecture, you can modify the
-                                kernel image before you build it.
-                                See the
-                                "<ulink url='&YOCTO_DOCS_DEV_URL;#patching-the-kernel'>Patching the Kernel</ulink>"
-                                section in the Yocto Project Development
-                                manual for an example.
-                                </para></listitem>
-                        </itemizedlist>
-                        </para></listitem>
-                    <listitem>
-                        <para><emphasis>Install the SDK</emphasis>:
-                        The SDK provides a target-specific cross-development
-                        toolchain, the root filesystem, the QEMU emulator, and
-                        other tools that can help you develop your application.
-                        For information on how to install the SDK, see the
-                        "<link linkend='sdk-installing-the-sdk'>Installing the SDK</link>"
-                        section.
-                        </para></listitem>
-                    <listitem><para>
-                        <emphasis>Secure the target root filesystem
-                        and the Cross-development toolchain</emphasis>:
-                        You need to find and download the appropriate root
-                        filesystem and the cross-development toolchain.</para>
-                        <para>You can find the tarballs for the root filesystem
-                        in the same area used for the kernel image.
-                        Depending on the type of image you are running, the
-                        root filesystem you need differs.
-                        For example, if you are developing an application that
-                        runs on an image that supports Sato, you need to get a
-                        root filesystem that supports Sato.</para>
-                        <para>You can find the cross-development toolchains at
-                        <ulink url='&YOCTO_TOOLCHAIN_DL_URL;'><filename>toolchains</filename></ulink>.
-                        Be sure to get the correct toolchain for your
-                        development host and your target architecture.
-                        See the "<link linkend='sdk-locating-pre-built-sdk-installers'>Locating Pre-Built SDK Installers</link>"
-                        section for information and the
-                        "<link linkend='sdk-installing-the-sdk'>Installing the SDK</link>"
-                        section for installation information.
-                        <note>
-                            As an alternative to downloading an SDK, you can
-                            build the SDK installer.
-                            For information on building the installer, see the
-                            "<link linkend='sdk-building-an-sdk-installer'>Building an SDK Installer</link>"
-                            section.
-                            Another helpful resource for building an installer
-                            is the
-                            <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>Cookbook guide to Making an Eclipse Debug Capable Image</ulink>
-                            wiki page.
-                        </note>
-                        </para></listitem>
-                    <listitem><para>
-                        <emphasis>Create and build your application</emphasis>:
-                        At this point, you need to have source files for your
-                        application.
-                        Once you have the files, you can use the Eclipse IDE
-                        to import them and build the project.
-                        If you are not using Eclipse, you need to use the
-                        cross-development tools you have installed to create
-                        the image.</para></listitem>
-                    <listitem><para>
-                        <emphasis>Deploy the image with the
-                        application</emphasis>:
-                        Using the Eclipse IDE, you can deploy your image to the
-                        hardware or to QEMU through the project's preferences.
-                        You can also use Eclipse to load and test your image
-                        under QEMU.
-                        See the
-                        "<ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-qemu'>Using the Quick EMUlator (QEMU)</ulink>"
-                        chapter in the Yocto Project Development Manual
-                        for information on using QEMU.
-                        </para></listitem>
-                    <listitem><para>
-                        <emphasis>Test and debug the application</emphasis>:
-                        Once your application is deployed, you need to test it.
-                        Within the Eclipse IDE, you can use the debugging
-                        environment along with supported performance enhancing
-                        <ulink url='http://www.eclipse.org/linuxtools/'>Linux Tools</ulink>.
-                        </para></listitem>
-                </orderedlist>
-            </para>
-        </section>
-
-        <section id='adt-eclipse'>
-            <title>Working Within Eclipse</title>
-
-            <para>
-                The Eclipse IDE is a popular development environment and it
-                fully supports development using the Yocto Project.
-            </para>
-
-            <para>
-                When you install and configure the Eclipse Yocto Project
-                Plug-in into the Eclipse IDE, you maximize your Yocto
-                Project experience.
-                Installing and configuring the Plug-in results in an
-                environment that has extensions specifically designed to let
-                you more easily develop software.
-                These extensions allow for cross-compilation, deployment, and
-                execution of your output into a QEMU emulation session as well
-                as actual target hardware.
-                You can also perform cross-debugging and profiling.
-                The environment also supports performance enhancing
-                <ulink url='http://www.eclipse.org/linuxtools/'>tools</ulink>
-                that allow you to perform remote profiling, tracing,
-                collection of power data, collection of latency data, and
-                collection of performance data.
-                <note>
-                    This release of the Yocto Project supports both the Neon
-                    and Mars versions of the Eclipse IDE.
-                    This section provides information on how to use the Neon
-                    release with the Yocto Project.
-                    For information on how to use the Mars version of Eclipse
-                    with the Yocto Project, see
-                    "<link linkend='sdk-appendix-latest-yp-eclipse-plug-in'>Appendix C</link>.
-                </note>
-            </para>
-
-            <section id='neon-setting-up-the-eclipse-ide'>
-                <title>Setting Up the Neon Version of the Eclipse IDE</title>
-
-                <para>
-                    To develop within the Eclipse IDE, you need to do the
-                    following:
-                    <orderedlist>
-                        <listitem><para>
-                            Install the Neon version of the Eclipse IDE.
-                            </para></listitem>
-                        <listitem><para>
-                            Configure the Eclipse IDE.
-                            </para></listitem>
-                        <listitem><para>
-                            Install the Eclipse Yocto Plug-in.
-                            </para></listitem>
-                        <listitem><para>
-                            Configure the Eclipse Yocto Plug-in.
-                            </para></listitem>
-                    </orderedlist>
-                    <note>
-                        Do not install Eclipse from your distribution's package
-                        repository.
-                        Be sure to install Eclipse from the official Eclipse
-                        download site as directed in the next section.
-                    </note>
-                </para>
-
-                <section id='neon-installing-eclipse-ide'>
-                    <title>Installing the Neon Eclipse IDE</title>
-
-                    <para>
-                        Follow these steps to locate, install, and configure
-                        Neon Eclipse:
-                        <orderedlist>
-                            <listitem><para>
-                                <emphasis>Locate the Neon Download:</emphasis>
-                                Open a browser and go to
-                                <ulink url='http://www.eclipse.org/neon/'>http://www.eclipse.org/neon/</ulink>.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis>Download the Tarball:</emphasis>
-                                Click through the "Download" buttons to
-                                download the file.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis>Unpack the Tarball:</emphasis>
-                                Move to a clean directory and unpack the
-                                tarball.
-                                Here is an example:
-                                <literallayout class='monospaced'>
-     $ cd ~
-     $ tar -xzvf ~/Downloads/eclipse-inst-linux64.tar.gz
-                                </literallayout>
-                                Everything unpacks into a folder named
-                                "eclipse-installer".
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis>Launch the Installer:</emphasis>
-                                Use the following commands to launch the
-                                installer:
-                                <literallayout class='monospaced'>
-     $ cd ~/eclipse-installer
-     $ ./eclipse-inst
-                                </literallayout>
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis>Select Your IDE:</emphasis>
-                                From the list, select the "Eclipse IDE for
-                                C/C++ Developers".
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis>Install the Software:</emphasis>
-                                Accept the default "cpp-neon" directory and
-                                click "Install".
-                                Accept any license agreements and approve any
-                                certificates.
-                                </para></listitem>
-                            <listitem><para>
-                                <emphasis>Launch Neon:</emphasis>
-                                Click the "Launch" button and accept the
-                                default "workspace".
-                                </para></listitem>
-                        </orderedlist>
-                    </para>
-                </section>
-
-                <section id='neon-configuring-the-mars-eclipse-ide'>
-                    <title>Configuring the Neon Eclipse IDE</title>
-
-                    <para>
-                        Follow these steps to configure the Neon Eclipse IDE.
-                        <note>
-                            Depending on how you installed Eclipse and what
-                            you have already done, some of the options will
-                            not appear.
-                            If you cannot find an option as directed by the
-                            manual, it has already been installed.
-                        </note>
-                        <orderedlist>
-                            <listitem><para>
-                                Be sure Eclipse is running and you are in your
-                                workbench.
-                                </para></listitem>
-                            <listitem><para>
-                                Select "Install New Software" from the "Help"
-                                pull-down menu.
-                                </para></listitem>
-                            <listitem><para>
-                                Select
-                                "Neon - http://download.eclipse.org/releases/neon"
-                                from the "Work with:" pull-down menu.
-                                </para></listitem>
-                            <listitem><para>
-                                Expand the box next to "Linux Tools" and select
-                                the following:
-                                <literallayout class='monospaced'>
-     C/C++ Remote (Over TCF/TE) Run/Debug Launcher
-     TM Terminal
-                                </literallayout>
-                                </para></listitem>
-                            <listitem><para>
-                                Expand the box next to "Mobile and Device
-                                Development" and select the following
-                                boxes:
-                                <literallayout class='monospaced'>
-     C/C++ Remote (Over TCF/TE) Run/Debug Launcher
-     Remote System Explorer User Actions
-     TM Terminal
-     TCF Remote System Explorer add-in
-     TCF Target Explorer
-                                </literallayout>
-                                </para></listitem>
-                            <listitem><para>
-                                Expand the box next to "Programming Languages"
-                                and select the following box:
-                                <literallayout class='monospaced'>
-     C/C++ Development Tools SDK
-                                </literallayout>
-                                </para></listitem>
-                            <listitem><para>
-                                Complete the installation by clicking through
-                                appropriate "Next" and "Finish" buttons.
-                                </para></listitem>
-                        </orderedlist>
-                    </para>
-                </section>
-
-                <section id='neon-installing-the-eclipse-yocto-plug-in'>
-                    <title>Installing or Accessing the Neon Eclipse Yocto Plug-in</title>
-
-                    <para>
-                        You can install the Eclipse Yocto Plug-in into the
-                        Eclipse IDE one of two ways:  use the Yocto Project's
-                        Eclipse Update site to install the pre-built plug-in
-                        or build and install the plug-in from the latest
-                        source code.
-                    </para>
-
-                    <section id='neon-new-software'>
-                        <title>Installing the Pre-built Plug-in from the Yocto Project Eclipse Update Site</title>
-
-                        <para>
-                            To install the Neon Eclipse Yocto Plug-in from the
-                            update site, follow these steps:
-                            <orderedlist>
-                                <listitem><para>
-                                    Start up the Eclipse IDE.
-                                    </para></listitem>
-                                <listitem><para>
-                                    In Eclipse, select "Install New
-                                    Software" from the "Help" menu.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Click "Add..." in the "Work with:" area.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Enter
-                                    <filename>&ECLIPSE_DL_PLUGIN_URL;/neon</filename>
-                                    in the URL field and provide a meaningful
-                                    name in the "Name" field.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Click "OK" to have the entry added
-                                    to the "Work with:" drop-down list.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Select the entry for the plug-in
-                                    from the "Work with:" drop-down list.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Check the boxes next to the following:
-                                    <literallayout class='monospaced'>
-     Yocto Project SDK Plug-in
-     Yocto Project Documentation plug-in
-                                    </literallayout>
-                                    </para></listitem>
-                                <listitem><para>
-                                    Complete the remaining software
-                                    installation steps and then restart the
-                                    Eclipse IDE to finish the installation of
-                                    the plug-in.
-                                    <note>
-                                        You can click "OK" when prompted about
-                                        installing software that contains
-                                        unsigned content.
-                                    </note>
-                                    </para></listitem>
-                            </orderedlist>
-                        </para>
-                    </section>
-
-                    <section id='neon-zip-file-method'>
-                        <title>Installing the Plug-in Using the Latest Source Code</title>
-
-                        <para>
-                            To install the Neon Eclipse Yocto Plug-in from the
-                            latest source code, follow these steps:
-                            <orderedlist>
-                                <listitem><para>
-                                    Be sure your development system
-                                    has JDK 1.8 or greater installed.
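-                                    A quick way to check the installed Java
-                                    version (any JDK 1.8 or newer is
-                                    sufficient) is:
-                                    <literallayout class='monospaced'>
-     $ java -version
-                                    </literallayout>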
-                                    </para></listitem>
-                                <listitem><para>
-                                    Install X11-related packages:
-                                    <literallayout class='monospaced'>
-     $ sudo apt-get install xauth
-                                    </literallayout>
-                                    </para></listitem>
-                                <listitem><para>
-                                    In a new terminal shell, create a
-                                    Git repository with:
-                                    <literallayout class='monospaced'>
-     $ cd ~
-     $ git clone git://git.yoctoproject.org/eclipse-poky
-                                    </literallayout>
-                                    </para></listitem>
-                                <listitem><para>
-                                    Use Git to create the correct tag:
-                                    <literallayout class='monospaced'>
-     $ cd ~/eclipse-poky
-     $ git checkout neon/yocto-&DISTRO;
-                                    </literallayout>
-                                    This creates a local tag named
-                                    <filename>neon/yocto-&DISTRO;</filename>
-                                    based on the branch
-                                    <filename>origin/neon-master</filename>.
-                                    You are put into a detached HEAD state,
-                                    which is fine since you are only going to
-                                    be building and not developing.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Change to the <filename>scripts</filename>
-                                    directory within the Git repository:
-                                    <literallayout class='monospaced'>
-     $ cd scripts
-                                    </literallayout>
-                                    </para></listitem>
-                                <listitem><para>
-                                    Set up the local build environment
-                                    by running the setup script:
-                                    <literallayout class='monospaced'>
-     $ ./setup.sh
-                                    </literallayout>
-                                    When the script finishes execution,
-                                    it prompts you with instructions on how to
-                                    run the <filename>build.sh</filename>
-                                    script, which is also in the
-                                    <filename>scripts</filename> directory of
-                                    the Git repository created earlier.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Run the <filename>build.sh</filename>
-                                    script as directed.
-                                    Be sure to provide the tag name,
-                                    documentation branch, and a release name.
-                                    </para>
-                                    <para>
-                                    Following is an example:
-                                    <literallayout class='monospaced'>
-     $ ECLIPSE_HOME=/home/scottrif/eclipse-poky/scripts/eclipse ./build.sh -l neon/yocto-&DISTRO; master yocto-&DISTRO; 2>&amp;1 | tee build.log
-                                    </literallayout>
-                                    The previous example command adds the tag
-                                    you need for
-                                    <filename>neon/yocto-&DISTRO;</filename>
-                                    to <filename>HEAD</filename>, then tells
-                                    the build script to use the local (-l) Git
-                                    checkout for the build.
-                                    After running the script, the file
-                                    <filename>org.yocto.sdk-</filename><replaceable>release</replaceable><filename>-</filename><replaceable>date</replaceable><filename>-archive.zip</filename>
-                                    is in the current directory.
-                                    </para></listitem>
-                                <listitem><para>
-                                    If necessary, start the Eclipse IDE
-                                    and be sure you are in the Workbench.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Select "Install New Software" from
-                                    the "Help" pull-down menu.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Click "Add".
-                                    </para></listitem>
-                                <listitem><para>
-                                    Provide anything you want in the
-                                    "Name" field.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Click "Archive" and browse to the
-                                    ZIP file you built earlier.
-                                    This ZIP file should not be "unzipped", and
-                                    must be the
-                                    <filename>*archive.zip</filename> file
-                                    created by running the
-                                    <filename>build.sh</filename> script.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Click the "OK" button.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Check the boxes that appear in
-                                    the installation window to install the
-                                    following:
-                                    <literallayout class='monospaced'>
-     Yocto Project SDK Plug-in
-     Yocto Project Documentation plug-in
-                                    </literallayout>
-                                    </para></listitem>
-                                <listitem><para>
-                                    Finish the installation by clicking
-                                    through the appropriate buttons.
-                                    You can click "OK" when prompted about
-                                    installing software that contains unsigned
-                                    content.
-                                    </para></listitem>
-                                <listitem><para>
-                                    Restart the Eclipse IDE if necessary.
-                                    </para></listitem>
-                            </orderedlist>
-                        </para>
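-
-                        <para>
-                            For convenience, the complete build sequence from
-                            the previous steps looks roughly like the
-                            following (the <filename>ECLIPSE_HOME</filename>
-                            value and the release names are examples only, so
-                            adjust them for your setup):
-                            <literallayout class='monospaced'>
-     $ cd ~
-     $ git clone git://git.yoctoproject.org/eclipse-poky
-     $ cd eclipse-poky
-     $ git checkout neon/yocto-&DISTRO;
-     $ cd scripts
-     $ ./setup.sh
-     $ ECLIPSE_HOME=$PWD/eclipse ./build.sh -l neon/yocto-&DISTRO; master yocto-&DISTRO; 2>&amp;1 | tee build.log
-                            </literallayout>
-                        </para>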
-
-                        <para>
-                            At this point you should be able to configure the
-                            Eclipse Yocto Plug-in as described in the
-                            "<link linkend='neon-configuring-the-eclipse-yocto-plug-in'>Configuring the Neon Eclipse Yocto Plug-in</link>"
-                            section.
-                        </para>
-                    </section>
-                </section>
-
-                <section id='neon-configuring-the-eclipse-yocto-plug-in'>
-                    <title>Configuring the Neon Eclipse Yocto Plug-in</title>
-
-                    <para>
-                        Configuring the Neon Eclipse Yocto Plug-in involves
-                        setting the Cross Compiler options and the Target
-                        options.
-                        The configurations you choose become the default
-                        settings for all projects.
-                        You do have opportunities to change them later when
-                        you configure the project (see the following section).
-                    </para>
-
-                    <para>
-                        To start, you need to do the following from within the
-                        Eclipse IDE:
-                        <itemizedlist>
-                            <listitem><para>
-                                Choose "Preferences" from the "Window" menu to
-                                display the Preferences Dialog.
-                                </para></listitem>
-                            <listitem><para>
-                                Click "Yocto Project SDK" to display
-                                the configuration screen.
-                                </para></listitem>
-                        </itemizedlist>
-                        The following sub-sections describe how to configure
-                        the plug-in.
-                        <note>
-                            Throughout the descriptions, a start-to-finish
-                            example for preparing a QEMU image for use with
-                            Eclipse is referenced as the "wiki" and is linked
-                            to the example on the
-                            <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'> Cookbook guide to Making an Eclipse Debug Capable Image</ulink>
-                            wiki page.
-                        </note>
-                    </para>
-
-                    <section id='neon-configuring-the-cross-compiler-options'>
-                        <title>Configuring the Cross-Compiler Options</title>
-
-                        <para>
-                            Cross Compiler options enable Eclipse to use your
-                            specific cross compiler toolchain.
-                            To configure these options, you must select
-                            the type of toolchain, point to the toolchain,
-                            specify the sysroot location, and select the target
-                            architecture.
-                            <itemizedlist>
-                                <listitem><para>
-                                    <emphasis>Selecting the Toolchain
-                                    Type:</emphasis>
-                                    Choose between
-                                    <filename>Standalone pre-built toolchain</filename>
-                                    and
-                                    <filename>Build system derived toolchain</filename>
-                                    for Cross Compiler Options.
-                                    <itemizedlist>
-                                        <listitem><para>
-                                            <emphasis>
-                                            <filename>Standalone Pre-built Toolchain:</filename>
-                                            </emphasis>
-                                            Select this type when you are using
-                                            a stand-alone cross-toolchain.
-                                            For example, suppose you are an
-                                            application developer and do not
-                                            need to build a target image.
-                                            Instead, you just want to use an
-                                            architecture-specific toolchain on
-                                            an existing kernel and target root
-                                            filesystem.
-                                            In other words, you have downloaded
-                                            and installed a pre-built toolchain
-                                            for an existing image.
-                                            </para></listitem>
-                                        <listitem><para>
-                                            <emphasis>
-                                            <filename>Build System Derived Toolchain:</filename>
-                                            </emphasis>
-                                            Select this type if you built the
-                                            toolchain as part of the
-                                            <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
-                                            When you select
-                                            <filename>Build system derived toolchain</filename>,
-                                            you are using the toolchain built
-                                            and bundled inside the Build
-                                            Directory.
-                                            For example, suppose you created a
-                                            suitable image using the steps in the
-                                            <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
-                                            In this situation, you would select
-                                            the
-                                            <filename>Build system derived toolchain</filename>.
-                                            </para></listitem>
-                                    </itemizedlist>
-                                    </para></listitem>
-                                <listitem><para>
-                                    <emphasis>Specify the Toolchain Root
-                                    Location:</emphasis>
-                                    If you are using a stand-alone pre-built
-                                    toolchain, you should be pointing to where
-                                    it is installed (e.g.
-                                    <filename>/opt/poky/&DISTRO;</filename>).
-                                    See the
-                                    "<link linkend='sdk-installing-the-sdk'>Installing the SDK</link>"
-                                    section for information about how the SDK is
-                                    installed.</para>
-                                    <para>If you are using a build system
-                                    derived toolchain, the path you provide for
-                                    the
-                                    <filename>Toolchain Root Location</filename>
-                                    field is the
-                                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
-                                    from which you run the
-                                    <filename>bitbake</filename> command (e.g.
-                                    <filename>/home/scottrif/poky/build</filename>).
-                                    </para>
-                                    <para>For more information, see the
-                                    "<link linkend='sdk-building-an-sdk-installer'>Building an SDK Installer</link>"
-                                    section.
-                                    </para></listitem>
-                                <listitem><para>
-                                    <emphasis>Specify Sysroot Location:
-                                    </emphasis>
-                                    This location is where the root filesystem
-                                    for the target hardware resides.
-                                    </para>
-                                    <para>This location depends on where you
-                                    separately extracted and installed the
-                                    target filesystem.
-                                    As an example, suppose you prepared an
-                                    image using the steps in the
-                                    <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
-                                    If so, the
-                                    <filename>MY_QEMU_ROOTFS</filename>
-                                    directory is found in the
-                                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>
-                                    and you would browse to and select that
-                                    directory (e.g.
-                                    <filename>/home/scottrif/poky/build/MY_QEMU_ROOTFS</filename>).
-                                    </para>
-                                    <para>For more information on how to
-                                    install the toolchain and on how to extract
-                                    and install the sysroot filesystem, see the
-                                    "<link linkend='sdk-building-an-sdk-installer'>Building an SDK Installer</link>"
-                                    section.
-                                    </para></listitem>
-                                <listitem><para>
-                                    <emphasis>Select the Target Architecture:
-                                    </emphasis>
-                                    The target architecture is the type of
-                                    hardware you are going to use or emulate.
-                                    Use the pull-down
-                                    <filename>Target Architecture</filename>
-                                    menu to make your selection.
-                                    The pull-down menu shows the supported
-                                    architectures.
-                                    If the architecture you need is not listed
-                                    in the menu, you will need to build the
-                                    image.
-                                    See the
-                                    "<ulink url='&YOCTO_DOCS_QS_URL;#qs-building-images'>Building Images</ulink>"
-                                    section of the Yocto Project Quick Start
-                                    for more information.
-                                    You can also see the
-                                    <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
-                                    </para></listitem>
-                            </itemizedlist>
-                        </para>
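-
-                        <para>
-                            To make these fields concrete, here is a sketch of
-                            what the values might look like for the wiki
-                            example referenced above (the paths are
-                            illustrative only and the exact field labels can
-                            vary slightly between plug-in versions):
-                            <literallayout class='monospaced'>
-     Cross Compiler Options:   Build system derived toolchain
-     Toolchain Root Location:  /home/scottrif/poky/build
-     Sysroot Location:         /home/scottrif/poky/build/MY_QEMU_ROOTFS
-     Target Architecture:      i586
-                            </literallayout>
-                        </para>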
-                    </section>
-
-                    <section id='neon-configuring-the-target-options'>
-                        <title>Configuring the Target Options</title>
-
-                        <para>
-                            You can choose to emulate hardware using the QEMU
-                            emulator, or you can choose to run your image on
-                            actual hardware.
-                            <itemizedlist>
-                                <listitem><para>
-                                    <emphasis>QEMU:</emphasis>
-                                    Select this option if you will be using the
-                                    QEMU emulator.
-                                    If you are using the emulator, you also
-                                    need to locate the kernel and specify any
-                                    custom options.</para>
-                                    <para>If you selected the
-                                    <filename>Build system derived toolchain</filename>,
-                                    the target kernel you built will be located
-                                    in the
-                                    <filename>tmp/deploy/images/<replaceable>machine</replaceable></filename>
-                                    directory within the
-                                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
-                                    As an example, suppose you performed the
-                                    steps in the
-                                    <ulink url='https://wiki.yoctoproject.org/wiki/TipsAndTricks/RunningEclipseAgainstBuiltImage'>wiki</ulink>.
-                                    In this case, you specify your Build
-                                    Directory path followed by the image (e.g.
-                                    <filename>/home/scottrif/poky/build/tmp/deploy/images/qemux86/bzImage-qemux86.bin</filename>).
-                                    </para>
-                                    <para>If you selected the standalone
-                                    pre-built toolchain, the pre-built image
-                                    you downloaded is located in the directory
-                                    you specified when you downloaded the
-                                    image.</para>
-                                    <para>Most custom options are for advanced
-                                    QEMU users to further customize their QEMU
-                                    instance.
-                                    These options are specified between paired
-                                    angled brackets.
-                                    Some options must be specified outside the
-                                    brackets.
-                                    In particular, the options
-                                    <filename>serial</filename>,
-                                    <filename>nographic</filename>, and
-                                    <filename>kvm</filename> must all be
-                                    outside the brackets.
-                                    Use the <filename>man qemu</filename>
-                                    command to get help on all the options and
-                                    their use.
-                                    The following is an example:
-                                   <literallayout class='monospaced'>
-    serial '&lt;-m 256 -full-screen&gt;'
-                                    </literallayout></para>
-                                    <para>
-                                    Regardless of the mode, Sysroot is already
-                                    defined as part of the Cross-Compiler
-                                    Options configuration in the
-                                    <filename>Sysroot Location:</filename>
-                                    field.
-                                    </para></listitem>
-                                <listitem><para>
-                                    <emphasis>External HW:</emphasis>
-                                    Select this option if you will be using
-                                    actual hardware.</para></listitem>
-                            </itemizedlist>
-                        </para>
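-
-                        <para>
-                            As a sketch, using the wiki example referenced
-                            above, the QEMU settings might look like the
-                            following (the kernel path and custom options are
-                            illustrative only, so adjust them for your own
-                            build):
-                            <literallayout class='monospaced'>
-     Kernel:           /home/scottrif/poky/build/tmp/deploy/images/qemux86/bzImage-qemux86.bin
-     Custom Options:   serial '&lt;-m 256 -full-screen&gt;'
-                            </literallayout>
-                        </para>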
-
-                        <para>
-                            Click the "Apply" and "OK" to save your plug-in
-                            configurations.
-                        </para>
-                    </section>
-                </section>
-            </section>
-
-            <section id='neon-creating-the-project'>
-                <title>Creating the Project</title>
-
-                <para>
-                    You can create two types of projects: Autotools-based or
-                    Makefile-based.
-                    This section describes how to create Autotools-based
-                    projects from within the Eclipse IDE.
-                    For information on creating Makefile-based projects in a
-                    terminal window, see the
-                    "<link linkend='makefile-based-projects'>Makefile-Based Projects</link>"
-                    section.
-                    <note>
-                        Do not use special characters in project names
-                        (e.g. spaces, underscores, etc.).  Doing so can
-                        cause configuration to fail.
-                    </note>
-                </para>
-
-                <para>
-                    To create a project based on a Yocto template and then
-                    display the source code, follow these steps:
-                    <orderedlist>
-                        <listitem><para>
-                            Select "C Project" from the "File -> New" menu.
-                            </para></listitem>
-                        <listitem><para>
-                            Expand
-                            <filename>Yocto Project SDK Autotools Project</filename>.
-                            </para></listitem>
-                        <listitem><para>
-                            Select <filename>Hello World ANSI C Autotools Projects</filename>.
-                            This is an Autotools-based project based on a Yocto
-                            template.
-                            </para></listitem>
-                        <listitem><para>
-                            Put a name in the
-                            <filename>Project name:</filename> field.
-                            Do not use hyphens as part of the name
-                            (e.g. <filename>hello</filename>).
-                            </para></listitem>
-                        <listitem><para>
-                            Click "Next".
-                            </para></listitem>
-                        <listitem><para>
-                            Add appropriate information in the various fields.
-                            </para></listitem>
-                        <listitem><para>
-                            Click "Finish".
-                            </para></listitem>
-                        <listitem><para>
-                            If the "open perspective" prompt appears,
-                            click "Yes" so that you are in the C/C++ perspective.
-                            </para></listitem>
-                        <listitem><para>The left-hand navigation pane shows
-                            your project.
-                            You can display your source by double-clicking the
-                            project's source file.
-                            </para></listitem>
-                    </orderedlist>
-                </para>
-            </section>
-
-            <section id='neon-configuring-the-cross-toolchains'>
-                <title>Configuring the Cross-Toolchains</title>
-
-                <para>
-                    The earlier section,
-                    "<link linkend='neon-configuring-the-eclipse-yocto-plug-in'>Configuring the Neon Eclipse Yocto Plug-in</link>",
-                    sets up the default project configurations.
-                    You can override these settings for a given project by
-                    following these steps:
-                    <orderedlist>
-                        <listitem><para>
-                            Select "Yocto Project Settings" from
-                            the "Project -> Properties" menu.
-                            This selection brings up the Yocto Project Settings
-                            Dialog and allows you to make changes specific to
-                            an individual project.</para>
-                            <para>By default, the Cross Compiler Options and
-                            Target Options for a project are inherited from
-                            settings you provided using the Preferences Dialog
-                            as described earlier in the
-                            "<link linkend='neon-configuring-the-eclipse-yocto-plug-in'>Configuring the Neon Eclipse Yocto Plug-in</link>"
-                            section.
-                            The Yocto Project Settings Dialog allows you to
-                            override those default settings for a given
-                            project.
-                            </para></listitem>
-                        <listitem><para>
-                            Make or verify your configurations for the
-                            project and click "OK".
-                            </para></listitem>
-                        <listitem><para>
-                            Right-click in the navigation pane and
-                            select "Reconfigure Project" from the pop-up menu.
-                            This selection reconfigures the project by running
-                            <filename>autogen.sh</filename> in the workspace
-                            for your project.
-                            The script also runs
-                            <filename>libtoolize</filename>,
-                            <filename>aclocal</filename>,
-                            <filename>autoconf</filename>,
-                            <filename>autoheader</filename>,
-                            <filename>automake --add-missing</filename>, and
-                            <filename>./configure</filename>.
-                            Click on the "Console" tab beneath your source code
-                            to see the results of reconfiguring your project.
-                            </para></listitem>
-                    </orderedlist>
-                </para>
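-
-                <para>
-                    If you are curious what "Reconfigure Project" runs behind
-                    the scenes, the rough terminal equivalent, executed from
-                    the project directory, is the following (a sketch only,
-                    since <filename>autogen.sh</filename> handles these
-                    details for you):
-                    <literallayout class='monospaced'>
-     $ libtoolize
-     $ aclocal
-     $ autoconf
-     $ autoheader
-     $ automake --add-missing
-     $ ./configure
-                    </literallayout>
-                </para>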
-            </section>
-
-            <section id='neon-building-the-project'>
-                <title>Building the Project</title>
-
-                <para>
-                    To build the project select "Build All" from the
-                    "Project" menu.
-                    The console should update and you can note the
-                    cross-compiler you are using.
-                    <note>
-                        When building "Yocto Project SDK Autotools" projects,
-                        the Eclipse IDE might display error messages for
-                        Functions/Symbols/Types that cannot be "resolved",
-                        even when the related include file is listed in the
-                        project navigator and the project builds
-                        successfully.
-                        For these cases only, it is recommended to add a new
-                        linked folder to the appropriate sysroot.
-                        Use these steps to add the linked folder:
-                        <orderedlist>
-                            <listitem><para>
-                                Select the project.
-                                </para></listitem>
-                            <listitem><para>
-                                Select "Folder" from the
-                                <filename>File > New</filename> menu.
-                                </para></listitem>
-                            <listitem><para>
-                                In the "New Folder" Dialog, select "Link to
-                                alternate location (linked folder)".
-                                </para></listitem>
-                            <listitem><para>
-                                Click "Browse" to navigate to the include
-                                folder inside the same sysroot location
-                                selected in the Yocto Project
-                                configuration preferences.
-                                </para></listitem>
-                            <listitem><para>
-                                Click "OK".
-                                </para></listitem>
-                            <listitem><para>
-                                Click "Finish" to save the linked folder.
-                                </para></listitem>
-                        </orderedlist>
-                    </note>
-                </para>
-            </section>
-
-            <section id='neon-starting-qemu-in-user-space-nfs-mode'>
-                <title>Starting QEMU in User-Space NFS Mode</title>
-
-                <para>
-                    To start the QEMU emulator from within Eclipse, follow
-                    these steps:
-                    <note>
-                        See the
-                        "<ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-qemu'>Using the Quick EMUlator (QEMU)</ulink>"
-                        chapter in the Yocto Project Development Manual
-                        for more information on using QEMU.
-                    </note>
-                    <orderedlist>
-                        <listitem><para>Select "External Tools
-                            Configurations..." from the "Run -> External
-                            Tools" menu.
-                            </para></listitem>
-                        <listitem><para>
-                            Locate and select your image in the navigation
-                            panel to the left
-                            (e.g. <filename>qemu_i586-poky-linux</filename>).
-                            </para></listitem>
-                        <listitem><para>
-                            Click "Run" to launch QEMU.
-                            <note>
-                                The host on which you are running QEMU must
-                                have the <filename>rpcbind</filename> utility
-                                running to be able to make RPC calls on a
-                                server on that machine.
-                                If QEMU does not launch and you receive error
-                                messages involving
-                                <filename>rpcbind</filename>, follow the
-                                suggestions to get the service running.
-                                As an example, on a new Ubuntu 16.04 LTS
-                                installation, you must do the following in
-                                order to get QEMU to launch:
-                                <literallayout class='monospaced'>
-     $ sudo apt-get install rpcbind
-                                </literallayout>
-                                After installing <filename>rpcbind</filename>,
-                                you need to edit the
-                                <filename>/etc/init.d/rpcbind</filename> file
-                                to include the following line:
-                                <literallayout class='monospaced'>
-     OPTIONS="-i -w"
-                                </literallayout>
-                                After modifying the file, you need to start the
-                                service:
-                                <literallayout class='monospaced'>
-     $ sudo service rpcbind restart
-                                </literallayout>
-                            </note>
-                            </para></listitem>
-                        <listitem><para>
-                            If needed, enter your host root password in
-                            the shell window at the prompt.
-                            This sets up a <filename>Tap 0</filename>
-                            connection needed for running in user-space NFS
-                            mode.
-                            </para></listitem>
-                        <listitem><para>
-                            Wait for QEMU to launch.
-                            </para></listitem>
-                        <listitem><para>
-                            Once QEMU launches, you can begin operating
-                            within that environment.
-                            One useful task at this point is to determine the
-                            IP address the QEMU machine is using for the
-                            user-space NFS by running the
-                            <filename>ifconfig</filename> command.
-                            The same address also appears in the xterm window
-                            that opens when you launch QEMU.
-                            </para></listitem>
-                    </orderedlist>
-                </para>
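-
-                <para>
-                    For example, from within the QEMU console you can run the
-                    following to see the address assigned to the emulated
-                    machine (the interface name in the output can vary):
-                    <literallayout class='monospaced'>
-     # ifconfig
-                    </literallayout>
-                </para>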
-            </section>
-
-            <section id='neon-deploying-and-debugging-the-application'>
-                <title>Deploying and Debugging the Application</title>
-
-                <para>
-                    Once the QEMU emulator is running the image, you can deploy
-                    your application using the Eclipse IDE and then use
-                    the emulator to perform debugging.
-                    Follow these steps to deploy the application.
-                    <note>
-                        Currently, Eclipse does not support SSH port
-                        forwarding.
-                        Consequently, if you need to run or debug a remote
-                        application using the host display, you must create a
-                        tunneling connection from outside Eclipse and keep
-                        that connection alive during your work.
-                        For example, in a new terminal, run the following:
-                        <literallayout class='monospaced'>
-     $ ssh -XY <replaceable>user_name</replaceable>@<replaceable>remote_host_ip</replaceable>
-                        </literallayout>
-                        Using the above form, here is an example:
-                        <literallayout class='monospaced'>
-     $ ssh -XY root@192.168.7.2
-                        </literallayout>
-                        After running the command, add the command to be
-                        executed in Eclipse's run configuration before the
-                        application as follows:
-                        <literallayout class='monospaced'>
-     export DISPLAY=:10.0
-                        </literallayout>
-                        Be sure not to destroy the connection during your QEMU
-                        session (i.e. do not
-                        exit out of or close that shell).
-                    </note>
-                    <orderedlist>
-                        <listitem><para>
-                            Select "Debug Configurations..." from the
-                            "Run" menu.
-                            </para></listitem>
-                        <listitem><para>
-                            In the left area, expand
-                            <filename>C/C++Remote Application</filename>.
-                            </para></listitem>
-                        <listitem><para>
-                            Locate your project and select it to bring
-                            up a new tabbed view in the Debug Configurations
-                            Dialog.
-                            </para></listitem>
-                        <listitem><para>
-                            Click on the "Debugger" tab to see the
-                            cross-tool debugger you are using.
-                            Be sure to change to the debugger perspective in
-                            Eclipse.
-                            </para></listitem>
-                        <listitem><para>
-                            Click on the "Main" tab.
-                            </para></listitem>
-                        <listitem><para>
-                            Create a new connection to the QEMU instance
-                            by clicking on "new".</para></listitem>
-                        <listitem><para>Select <filename>SSH</filename>, which
-                            stands for Secure Shell, and then click "OK".
-                            Optionally, you can select a TCF connection
-                            instead.
-                            </para></listitem>
-                        <listitem><para>
-                            Clear out the "Connection name" field and
-                            enter any name you want for the connection.
-                            </para></listitem>
-                        <listitem><para>
-                            Put the IP address for the connection in
-                            the "Host" field.
-                            For QEMU, the default is
-                            <filename>192.168.7.2</filename>.
-                            However, if a previous QEMU session did not exit
-                            cleanly, the IP address increments (e.g.
-                            <filename>192.168.7.3</filename>).
-                            <note>
-                                You can find the IP address for the current
-                                QEMU session by looking in the xterm that
-                                opens when you launch QEMU.
-                            </note>
-                            </para></listitem>
-                        <listitem><para>
-                            Enter <filename>root</filename>, which
-                            is the default for QEMU, for the "User" field.
-                            Be sure to leave the password field empty.
-                            </para></listitem>
-                        <listitem><para>
-                            Click "Finish" to close the New Connections Dialog.
-                            </para></listitem>
-                        <listitem><para>
-                            If necessary, use the drop-down menu now in the
-                            "Connection" field and pick the IP Address you
-                            entered.
-                             </para></listitem>
-                        <listitem><para>
-                            Assuming you are connecting as the root
-                            user, which is the default for QEMU x86-64 SDK
-                            images provided by the Yocto Project, in the
-                            "Remote Absolute File Path for C/C++ Application"
-                            field, browse to
-                            <filename>/home/root/</filename><replaceable>ProjectName</replaceable>
-                            (e.g. <filename>/home/root/hello</filename>).
-                            You could also browse to any other path you have
-                            write access to on the target such as
-                            <filename>/usr/bin</filename>.
-                            This location is where your application will be
-                            located on the QEMU system.
-                            If you fail to browse to and specify an appropriate
-                            location, QEMU will not understand what to remotely
-                            launch.
-                            Eclipse auto-fills the application name for you,
-                            assuming you browsed to a directory.
-                            <note>
-                                If you are prompted to provide a username and
-                                to optionally set a password, be sure you
-                                provide "root" as the username and you leave
-                                the password field blank.
-                            </note>
-                            </para></listitem>
-                        <listitem><para>
-                            Be sure you change to the "Debug" perspective in
-                            Eclipse.
-                            </para></listitem>
-                        <listitem><para>
-                            Click "Debug".
-                            </para></listitem>
-                        <listitem><para>
-                            Accept the debug perspective.
-                            </para></listitem>
-                    </orderedlist>
-                </para>
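-
-                <para>
-                    Before creating the debug configuration, it can help to
-                    confirm from a host terminal that the QEMU target is
-                    reachable over SSH.
-                    The following is a quick sanity check that uses the
-                    default QEMU address mentioned above:
-                    <literallayout class='monospaced'>
-     $ ssh root@192.168.7.2
-                    </literallayout>
-                </para>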
-            </section>
-
-            <section id='neon-using-Linuxtools'>
-                <title>Using Linuxtools</title>
-
-                <para>
-                    As mentioned earlier in the manual, the Linuxtools
-                    performance tools can enhance your development experience.
-                    These tools are aids in developing and debugging
-                    applications and images.
-                    You can run these tools from within the Eclipse IDE through
-                    the "Linuxtools" menu.
-                </para>
-
-                <para>
-                    For information on how to configure and use these tools,
-                    see
-                    <ulink url='http://www.eclipse.org/linuxtools/'>http://www.eclipse.org/linuxtools/</ulink>.
-                </para>
-            </section>
-        </section>
-    </section>
 </chapter>
 <!--
 vim: expandtab tw=80 ts=4
diff --git a/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-intro.xml b/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-intro.xml
index a408024..4e59e00 100644
--- a/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-intro.xml
+++ b/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-intro.xml
@@ -7,7 +7,7 @@
 
     <para>
         Toaster is a web interface to the Yocto Project's
-        <ulink url='&YOCTO_DOCS_DEV_URL;#build-system-term'>OpenEmbedded build system</ulink>.
+        <ulink url='&YOCTO_DOCS_REF_URL;#build-system-term'>OpenEmbedded build system</ulink>.
         The interface enables you to configure and run your builds.
         Information about builds is collected and stored in a database.
         You can use Toaster to configure and start builds on multiple
diff --git a/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-reference.xml b/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-reference.xml
index 3a46b61..e984391 100644
--- a/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-reference.xml
+++ b/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-reference.xml
@@ -148,177 +148,46 @@
                     </para>
                 </section>
 
-                <section id='select-the-toasterconf-json-file'>
-                    <title>Use the <filename>toasterconf.json</filename> File</title>
+                <section id='use-the-fixture-feature'>
+                    <title>Use the Fixture Feature</title>
 
                     <para>
-                        If you do not want to use the Administration
-                        Interface, you can edit the
-                        <link linkend='toaster-json-files'><filename>toasterconf.json</filename></link>
-                        file and reload it to Toaster.
-                    </para>
-
-                    <para>
-                        The Toaster startup script in
-                        <filename>/bitbake/bin/toaster</filename> specifies
-                        the location of a Toaster configuration file
-                        <filename>toasterconf.json</filename> as the value of
-                        the <filename>TOASTER_CONF</filename> variable.
-                        This configuration file is used to set up the initial
-                        configuration values within the Toaster database
-                        including the layer sources.
-                        Two versions of the configuration file exist:
-                        <itemizedlist>
-                            <listitem><para>
-                                The first version of the file is found in the
-                                <filename>conf</filename> directory of the
-                                <filename>meta-poky</filename> layer
-                                (i.e.
-                                <filename>meta-poky/conf/toasterconf.json</filename>).
-                                This version contains the default Yocto Project
-                                configuration for Toaster.
-                                </para></listitem>
-                            <listitem><para>
-                                The second version of the file is in the
-                                <filename>conf</filename> directory of the
-                                <filename>openembedded-core</filename> layer
-                                (i.e. <filename>meta/conf/toasterconf.json</filename>).
-                                This version contains the default OpenEmbedded
-                                configuration for Toaster.
-                                </para></listitem>
-                        </itemizedlist>
-                    </para>
-                </section>
-
-                <section id='edit-the-configuration-file'>
-                    <title>Edit the Configuration File</title>
-
-                    <para>
-                        Edit the version of the
-                        <filename>toasterconf.json</filename> file you
-                        used to set up your Toaster instance.
-                        In the file, you will find a section for layer sources
-                        such as the following:
+                        The Django fixture feature lets you override the
+                        default layer index server with a custom URL. To use
+                        the fixture feature, create (or edit) the file
+                        <filename>bitbake/lib/toaster/orm/fixtures/custom.xml</filename>,
+                        and then set the following Toaster setting to your
+                        custom URL:
                         <literallayout class='monospaced'>
-    "layersources": [
-        {
-            "name": "Local Yocto Project",
-            "sourcetype": "local",
-            "apiurl": "../../",
-            "branches": ["HEAD" ],
-            "layers": [
-                {
-                    "name": "openembedded-core",
-                    "local_path": "meta",
-                    "vcs_url": "remote:origin",
-                    "dirpath": "meta"
-                },
-                {
-                    "name": "meta-poky",
-                    "local_path": "meta-poky",
-                    "vcs_url": "remote:origin",
-                    "dirpath": "meta-poky"
-                },
-                {
-                    "name": "meta-yocto-bsp",
-                    "local_path": "meta-yocto-bsp",
-                    "vcs_url": "remote:origin",
-                    "dirpath": "meta-yocto-bsp"
-                }
-
-            ]
-        },
-        {
-            "name": "OpenEmbedded",
-            "sourcetype": "layerindex",
-            "apiurl": "http://layers.openembedded.org/layerindex/api/",
-            "branches": ["master", "jethro" ,"fido"]
-        },
-        {
-            "name": "Imported layers",
-            "sourcetype": "imported",
-            "apiurl": "",
-            "branches": ["master", "jethro","fido", "HEAD"]
-
-        }
-    ],
+     &lt;?xml version="1.0" ?&gt;
+     &lt;django-objects version="1.0"&gt;
+       &lt;object model="orm.toastersetting" pk="100"&gt;
+         &lt;field name="name" type="CharField"&gt;CUSTOM_LAYERINDEX_SERVER&lt;/field&gt;
+         &lt;field name="value" type="CharField"&gt;https://layers.my_organization.org/layerindex/branch/master/layers/&lt;/field&gt;
+       &lt;/object&gt;
+     &lt;/django-objects&gt;
                         </literallayout>
-                        You should add your own layer source to this section by
-                        following the same format used for the "OpenEmbedded"
-                        layer source shown above.
-                    </para>
-
-                    <para>
-                        Give your layer source a name, provide the URL of your
-                        layer source API, use the source type "layerindex", and
-                        indicate which branches from your layer source you want
-                        to make available through Toaster.
-                        For example, the OpenEmbedded layer source makes
-                        available only its "master", "fido", and "jethro"
-                        branches.
-                    </para>
-
-                    <para>
-                        The branches must match the branch you
-                        set when configuring your releases.
-                        For example, if you configure one release in Toaster
-                        by setting its branch to "branch-one" and you configure
-                        another release in Toaster by setting its branch to
-                        "branch-two", the branches in your layer source should
-                        be "branch-one" and "branch-two" as well.
-                        Doing so creates a connection between the releases
-                        and the layer information from your layer source.
-                        Thus, when users create a project with a given
-                        release, they will see the appropriate layers from
-                        your layer source.
-                        This connection ensures that only layers that are
-                        compatible with the selected project release can be
-                        selected for building.
-                    </para>
-
-                    <para>
-                        Once you have added this information to the
-                        <filename>toasterconf.json</filename> file, save your
-                        changes.
-                    </para>
-
-                    <para>
-                        In a terminal window, navigate to the directory that
-                        contains the Toaster database, which by default is the
-                        root of the Yocto Project
-                        <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
-                        Once you are located in that directory, run the
-                        "<filename>loadconf</filename>" command, which takes as
-                        an argument the full path to the
-                        <filename>toasterconf.json</filename> file you just edited.
-                        For example, if you cloned the
-                        <filename>poky</filename> repository and you edited the
-                        <filename>meta-poky/conf/toasterconf.json</filename> file,
-                        you would type something like the following:
-                        <literallayout class='monospaced'>
-     $ bitbake/lib/toaster/manage.py loadconf /home/scottrif/poky/meta-poky/conf/toasterconf.json
-                        </literallayout>
-                        After entering this command, you need to update the
-                        Toaster database with the information coming from your
-                        new layer source.
-                        To do that, you should run the
-                        "<filename>lsupdates</filename>" command from the directory
-                        that contains the Toaster database.
-                        Here is an example:
-                        <literallayout class='monospaced'>
-     $ bitbake/lib/toaster/manage.py lsupdates
-                        </literallayout>
-                        If Toaster can reach the API URL, you should see a message
-                        telling you that Toaster is updating the layer source
-                        information.
+                        When you start Toaster for the first time, or if you
+                        delete the file <filename>toaster.sqlite</filename> and restart,
+                        the database will populate cleanly from this layer index server.
                     </para>
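+
+                    <para>
+                        As a minimal sketch (assuming you start Toaster from
+                        the Build Directory, which is where the
+                        <filename>toaster.sqlite</filename> file lives),
+                        forcing a clean repopulation might look like the
+                        following:
+                        <literallayout class='monospaced'>
+     $ source toaster stop
+     $ rm toaster.sqlite
+     $ source toaster start
+                        </literallayout>
+                    </para>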
 
                     <para>
                         Once the information has been updated, verify the new layer
                         information is available by using the Toaster web interface.
                         To do that, visit the "All compatible layers" page inside a
-                        Toaster project.
-                        The layers from your layer source should be listed there.
+                        Toaster project. The layers from your layer source should be
+                        listed there.
+                    </para>
+
+                    <para>
+                        If you change the information in your layer index server,
+                        refresh the Toaster database by running the following command:
+                        <literallayout class='monospaced'>
+     $ bitbake/lib/toaster/manage.py lsupdates
+                        </literallayout>
+                        If Toaster can reach the API URL, you should see a message
+                        telling you that Toaster is updating the layer source information.
                     </para>
                 </section>
             </section>
@@ -359,19 +228,12 @@
                 As shipped, Toaster is configured to work with the following
                 releases:
                 <itemizedlist>
-                    <listitem><para><emphasis>Yocto Project 2.0 "Jethro" or OpenEmbedded "Jethro":</emphasis>
-                        This release causes your Toaster projects to
-                        build against the head of the jethro branch at
-                        <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/log/?h=jethro'></ulink>
-                        or
-                        <ulink url='http://git.openembedded.org/openembedded-core/commit/?h=jethro'></ulink>.
-                        </para></listitem>
-                    <listitem><para><emphasis>Yocto Project 1.8 "Fido" or OpenEmbedded "Fido":</emphasis>
-                        This release causes your Toaster projects to
-                        build against the head of the fido branch at
-                        <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/log/?h=fido'></ulink>
-                        or
-                        <ulink url='http://git.openembedded.org/openembedded-core/commit/?h=fido'></ulink>.
+                    <listitem><para><emphasis>
+                        Yocto Project &DISTRO; "&DISTRO_NAME;" or OpenEmbedded "&DISTRO_NAME;":</emphasis>
+                        This release causes your Toaster projects to build
+                        against the head of the &DISTRO_NAME_NO_CAP; branch at
+                        <ulink url='&YOCTO_GIT_URL;/cgit/cgit.cgi/poky/log/?h=rocko'></ulink>
+                        or <ulink url='http://git.openembedded.org/openembedded-core/commit/?h=rocko'></ulink>.
                         </para></listitem>
                     <listitem><para><emphasis>Yocto Project "Master" or OpenEmbedded "Master":</emphasis>
                         This release causes your Toaster Projects to
@@ -390,468 +252,413 @@
                 </itemizedlist>
             </para>
         </section>
+    </section>
 
-        <section id='toaster-releases-comprised-of'>
-            <title>What Makes Up a Release?</title>
+    <section id='configuring-toaster'>
+        <title>Configuring Toaster</title>
+
+        <para>
+            In order to use Toaster, you must configure the database with the
+            default content. The following subsections describe various aspects
+            of Toaster configuration.
+        </para>
+
+        <section id='configuring-the-workflow'>
+            <title>Configuring the Workflow</title>
 
             <para>
-                A release consists of the following:
-                <itemizedlist>
-                    <listitem><para><emphasis>Name:</emphasis>
-                        The name of the release (<filename>name</filename>).
-                        This release name never appears in the the Toaster
-                        web interface.
-                        Consequently, a user never sees the release name.
+                The
+                <filename>bldcontrol/management/commands/checksettings.py</filename>
+                file controls workflow configuration.
+                The following steps outline the process to initially populate
+                this database.
+                <orderedlist>
+                    <listitem><para>
+                        The default project settings are set from
+                        <filename>orm/fixtures/settings.xml</filename>.
                         </para></listitem>
-                    <listitem><para><emphasis>Description:</emphasis>
-                        The textual description of the release
-                        (<filename>description</filename>).
-                        This description is what users encounter when creating
-                        projects with the Toaster web interface.
-                        When you configure your release, be sure to use
-                        a description that sufficiently describes and is
-                        understandable.
-                        If Toaster has more than one release configured, the
-                        release descriptions appear listed in a drop down menu
-                        when a user creates a new project.
-                        If Toaster has only one release configured, all
-                        projects created using the web interface take that
-                        release and the drop down menu does not display in the
-                        Toaster web interface.
+                    <listitem><para>
+                        The default project distro and layers are added
+                        from <filename>orm/fixtures/poky.xml</filename> if poky
+                        is installed.
+                        If poky is not installed, they are added
+                        from <filename>orm/fixtures/oe-core.xml</filename>.
                         </para></listitem>
-                    <listitem><para><emphasis>BitBake:</emphasis>
-                        The Bitbake version (<filename>bitbake</filename>)
-                        used to build layers set in the current release.
-                        This version is described by a name, a Git URL, a
-                        branch in the Git URL, and a directory path in the
-                        Git repository.
-                        As an example, consider the following snippet from
-                        a Toaster JSON configuration file.
-                        This BitBake version uses the master branch from the
-                        OpenEmbedded repository:
-                        <literallayout class='monospaced'>
-     "bitbake" : [
-         {
-             "name": "master",
-             "giturl": "git://git.openembedded.org/bitbake",
-             "branch": "master",
-             "dirpath": ""
-         }
-     ]
-                        </literallayout>
-                        Here is more detail on each of the items that comprise
-                        the BitBake version:
-                        <itemizedlist>
-                            <listitem><para><emphasis>Name:</emphasis>
-                                A string
-                                (<filename>name</filename>) used to refer to
-                                the version of BitBake you are using with
-                                Toaster.
-                                This name is never exposed through Toaster.
-                                </para></listitem>
-                            <listitem><para><emphasis>Git URL:</emphasis>
-                                The URL (<filename>giturl</filename>)
-                                for the BitBake Git repository cloned
-                                for Toaster projects.
-                                </para></listitem>
-                            <listitem><para><emphasis>Branch:</emphasis>
-                                The Git branch, or revision,
-                                (<filename>branch</filename>) of the BitBake
-                                repository used with Toaster.
-                                </para></listitem>
-                            <listitem><para><emphasis>Directory Path:</emphasis>
-                                The sub-directory of the BitBake repository
-                                (<filename>dirpath</filename>).
-                                If the Git URL includes more than one
-                                repository, you need to set this directory.
-                                If the URL does not include more than a single
-                                repository, you can set
-                                <filename>dirpath</filename> to a null string
-                                (i.e. "").
-                                </para></listitem>
-                        </itemizedlist>
+                    <listitem><para>
+                        If the <filename>orm/fixtures/custom.xml</filename> file
+                        exists, then its values are added.
                         </para></listitem>
-                    <listitem><para><emphasis>Branch:</emphasis>
-                        The branch for the layer source
-                        (<filename>branch</filename>) used with the release.
-                        For example, for the OpenEmbedded layer source, the
-                        "master", "fido", and "jethro" branches are available.
+                    <listitem><para>
+                        The layer index is then scanned and added to the database.
                         </para></listitem>
-                    <listitem><para><emphasis>Default Layers:</emphasis>
-                        The set of default layers
-                        (<filename>defaultlayers</filename>) automatically
-                        added to the project configuration when a project is
-                        created.
-                        </para></listitem>
-                    <listitem><para><emphasis>Layer Source Priorities</emphasis>
-                        A specification of
-                        <link linkend='layer-source'>layer source</link>
-                        priorities (<filename>layersourcepriority</filename>).
-                        In order for Toaster to work as intended, the
-                        "Imported layers" layer source should have the highest
-                        priority, which means that layers manually imported by
-                        users with the "Import layer" functionality will
-                        always be visible and available for selection.
-                        </para></listitem>
-                    <listitem><para><emphasis>Help Text:</emphasis>
-                        Help text (<filename>helptext</filename>) that explains
-                        what the release does when selected.
-                        This help text appears below the release drop-down
-                        menu when you create a Toaster project.
-                        The help text should assist users in making the correct
-                        decision regarding the release to use for a given
-                        project.
-                        </para></listitem>
-                </itemizedlist>
+                </orderedlist>
+                Once these steps complete, Toaster is set up and ready to use.
+            </para>
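+
+            <para>
+                These steps normally run automatically when you start Toaster.
+                As a sketch, assuming your working directory allows Toaster to
+                find the <filename>toaster.sqlite</filename> database, you
+                could also trigger this population step yourself through the
+                <filename>checksettings</filename> command:
+                <literallayout class='monospaced'>
+     $ bitbake/lib/toaster/manage.py checksettings
+                </literallayout>
+            </para>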
+        </section>
+
+        <section id='customizing-pre-set-data'>
+            <title>Customizing Pre-Set Data</title>
+
+            <para>
+                The pre-set data for Toaster is easily customizable. You can
+                create the <filename>orm/fixtures/custom.xml</filename> file
+                to customize the values that go into the database.
+                Customization is additive: the values you provide can either
+                extend or completely replace the existing values.
             </para>
 
             <para>
-                To summarize what comprises a release, consider the following
-                example from a Toaster JSON file.
-                The configuration names the release "master" and uses the
-                "master" branch provided by the layer source of type
-                "layerindex", which is called "OpenEmbedded", and sets
-                the <filename>openembedded-core</filename> layer as the one
-                to be added by default to any projects created in Toaster.
-                The BitBake version used would be defined as shown earlier
-                in the previous list:
+                You use the <filename>orm/fixtures/custom.xml</filename> file
+                to change the default project settings for the machine, distro,
+                file images, and layers.
+                You can also use the file to define the alternate project
+                release selections that are offered when a user creates a
+                new project.
+                For example, you can add one or more selections that present
+                custom layer sets, custom distros, or other local or
+                proprietary content.
+            </para>
+
+            <para>
+                Additionally, you can completely disable the content from the
+                <filename>oe-core.xml</filename> and <filename>poky.xml</filename>
+                files by defining the section shown below in the
+                <filename>settings.xml</filename> file.
+                This option is particularly useful if your custom
+                configuration defines fewer releases or layers than the default
+                fixture files.
+            </para>
+
+            <para>
+                The following example sets "name" to "CUSTOM_XML_ONLY" and its value
+                to "True".
                 <literallayout class='monospaced'>
-     "releases": [
-         {
-             "name": "master",
-             "description": "OpenEmbedded master",
-             "bitbake": "master",
-             "branch": "master",
-             "defaultlayers": [ "openembedded-core" ],
-             "layersourcepriority": { "Imported layers": 99, "Local OpenEmbedded" : 10, "OpenEmbedded" :  0 },
-             "helptext": "Toaster will run your builds using the tip of the &lt;a href=\"http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/\"&gt;Yocto Project master branch&lt;/a&gt;, where active development takes place. This is not a stable branch, so your builds might not work as expected."
-         }
-     ]
+     &lt;object model="orm.toastersetting" pk="99"&gt;
+       &lt;field type="CharField" name="name"&gt;CUSTOM_XML_ONLY&lt;/field&gt;
+       &lt;field type="CharField" name="value"&gt;True&lt;/field&gt;
+     &lt;/object&gt;
                 </literallayout>
             </para>
         </section>
-    </section>
 
-    <section id='toaster-json-files'>
-        <title>JSON Files</title>
-
-        <para>
-            You must configure Toaster before using it.
-            Configuration customizes layer source settings and Toaster defaults
-            for all users and is performed by the person responsible for
-            Toaster Configuration (i.e  the Toaster Administrator).
-            The Toaster Administrator performs this configuration through the
-            Django administration interface.
-        </para>
-
-        <para>
-            To make it easier to initially start Toaster, you can import a
-            pre-defined configuration file using the
-            <link linkend='toaster-command-loadconf'><filename>loadconf</filename></link>
-            command.
-            <note>
-                The configuration file is a JSON-formatted text file with
-                specific fields that Toaster recognizes.
-                It is not a data dump from the database, so it cannot be
-                loaded directly in the database.
-            </note>
-        </para>
-
-        <para>
-            By convention, the supplied configuration files are named
-            <filename>toasterconf.json</filename>.
-            The Toaster Administrator can customize the file prior to loading
-            it into Toaster.
-            The <filename>TOASTER_CONF</filename> variable in the
-            Toaster startup script at <filename>bitbake/bin/toaster</filename>
-            specifies the location of the <filename>toasterconf.json</filename> file.
-        </para>
-
-        <section id='json-file-choices'>
-            <title>Configuration File Choices</title>
+        <section id='understanding-fixture-file-format'>
+            <title>Understanding Fixture File Format</title>
 
             <para>
-                Two versions of the configuration file exist:
-                <itemizedlist>
-                    <listitem><para>
-                        The
-                        <filename>meta-poky/conf/toasterconf.json</filename>
-                        in the <filename>conf</filename> directory of the
-                        Yocto Project's <filename>meta-poky</filename> layer.
-                        This version contains the default Yocto Project
-                        configuration for Toaster.
-                        You are prompted to select this file during the Toaster
-                        set up process if you cloned the
-                        <filename>poky</filename> repository (i.e.
-                        <filename>&YOCTO_GIT_URL;/poky</filename>).
-                        </para></listitem>
-                    <listitem><para>
-                        The <filename>meta/conf/toasterconf.json</filename>
-                        in the <filename>conf</filename> directory of the
-                        OpenEmbedded's <filename>openembedded-core</filename>
-                        layer.
-                        This version contains the default OpenEmbedded
-                        configuration for Toaster.
-                        You are prompted to select this file during the Toaster
-                        set up process if you had cloned the
-                        <filename>openembedded-core</filename> repository (i.e.
-                        <filename>git://git.openembedded.org/openembedded-core</filename>).
-                        </para></listitem>
-                </itemizedlist>
+                The following is an overview of the file format used by the
+                <filename>oe-core.xml</filename>, <filename>poky.xml</filename>,
+                and <filename>custom.xml</filename> files.
+            </para>
+
+            <para>
+                The following subsections describe each of the sections in the
+                fixture files and provide example XML code that you can use
+                to understand the format and to create a local
+                <filename>custom.xml</filename> file.
+            </para>
+
+            <section id='defining-the-default-distro-and-other-values'>
+                <title>Defining the Default Distro and Other Values</title>
+
+                <para>
+                    This section defines the default distro value for new projects.
+                    By default, it reserves the first Toaster Setting record "1".
+                    The following demonstrates how to set the project default value
+                    for
+                    <ulink url='&YOCTO_DOCS_REF_URL;#var-DISTRO'><filename>DISTRO</filename></ulink>:
+                    <literallayout class='monospaced'>
+     &lt;!-- Set the project default value for DISTRO --&gt;
+     &lt;object model="orm.toastersetting" pk="1"&gt;
+       &lt;field type="CharField" name="name"&gt;DEFCONF_DISTRO&lt;/field&gt;
+       &lt;field type="CharField" name="value"&gt;poky&lt;/field&gt;
+     &lt;/object&gt;
+                    </literallayout>
+                       You can override other default project values by adding
+                       additional Toaster Setting sections such as any of the
+                       settings coming from the <filename>settings.xml</filename>
+                       file.
+                       Also, you can add custom values that are included in the
+                       BitBake environment.
+                       The "pk" values must be unique.
+                       By convention, values that set default project values have a
+                       "DEFCONF" prefix.
+                </para>
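+
+                <para>
+                    As an illustrative sketch, an additional default project
+                    value might look like the following.
+                    The "pk" value and the DEFCONF_MACHINE setting shown here
+                    are assumptions patterned after the
+                    <filename>settings.xml</filename> defaults, so check that
+                    file for the names and "pk" values actually in use:
+                    <literallayout class='monospaced'>
+     &lt;!-- Set the project default value for MACHINE --&gt;
+     &lt;object model="orm.toastersetting" pk="2"&gt;
+       &lt;field type="CharField" name="name"&gt;DEFCONF_MACHINE&lt;/field&gt;
+       &lt;field type="CharField" name="value"&gt;qemux86&lt;/field&gt;
+     &lt;/object&gt;
+                    </literallayout>
+                </para>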
+            </section>
+
+            <section id='defining-bitbake-version'>
+                <title>Defining BitBake Version</title>
+
+                <para>
+                    The following defines the version of BitBake that is used
+                    for the corresponding release selection (defined in the next section):
+                    <literallayout class='monospaced'>
+     &lt;!-- Bitbake versions which correspond to the metadata release --&gt;
+     &lt;object model="orm.bitbakeversion" pk="1"&gt;
+       &lt;field type="CharField" name="name"&gt;rocko&lt;/field&gt;
+       &lt;field type="CharField" name="giturl"&gt;git://git.yoctoproject.org/poky&lt;/field&gt;
+       &lt;field type="CharField" name="branch"&gt;rocko&lt;/field&gt;
+       &lt;field type="CharField" name="dirpath"&gt;bitbake&lt;/field&gt;
+     &lt;/object&gt;
+                    </literallayout>
+                </para>
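+
+                <para>
+                    If you offer more than one release, define one
+                    <filename>orm.bitbakeversion</filename> record per
+                    release, each with a unique "pk" value.
+                    The following is a sketch of a second record for builds
+                    against a local HEAD checkout; the field values are
+                    illustrative assumptions:
+                    <literallayout class='monospaced'>
+     &lt;object model="orm.bitbakeversion" pk="2"&gt;
+       &lt;field type="CharField" name="name"&gt;HEAD&lt;/field&gt;
+       &lt;field type="CharField" name="giturl"&gt;git://git.yoctoproject.org/poky&lt;/field&gt;
+       &lt;field type="CharField" name="branch"&gt;HEAD&lt;/field&gt;
+       &lt;field type="CharField" name="dirpath"&gt;bitbake&lt;/field&gt;
+     &lt;/object&gt;
+                    </literallayout>
+                </para>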
+            </section>
+
+            <section id='defining-releases'>
+                <title>Defining Release</title>
+
+                <para>
+                    The following defines the releases that are available when
+                    you create a new project.
+                    <literallayout class='monospaced'>
+     &lt;!-- Releases available --&gt;
+     &lt;object model="orm.release" pk="1"&gt;
+       &lt;field type="CharField" name="name"&gt;rocko&lt;/field&gt;
+       &lt;field type="CharField" name="description"&gt;Yocto Project 2.4 "Rocko"&lt;/field&gt;
+       &lt;field rel="ManyToOneRel" to="orm.bitbakeversion" name="bitbake_version"&gt;1&lt;/field&gt;
+       &lt;field type="CharField" name="branch_name"&gt;rocko&lt;/field&gt;
+       &lt;field type="TextField" name="helptext"&gt;Toaster will run your builds using the tip of the &lt;a href="http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=rocko"&gt;Yocto Project Rocko branch&lt;/a&gt;.&lt;/field&gt;
+     &lt;/object&gt;
+                    </literallayout>
+                    The "pk" value must match the corresponding BitBake
+                    version record defined in the previous section.
+                </para>
+            </section>
+
+            <section id='defining-the-release-default-layer-names'>
+                <title>Defining the Release Default Layer Names</title>
+
+                <para>
+                    The following defines the default layers for each release:
+                    <literallayout class='monospaced'>
+     &lt;!-- Default project layers for each release --&gt;
+     &lt;object model="orm.releasedefaultlayer" pk="1"&gt;
+       &lt;field rel="ManyToOneRel" to="orm.release" name="release"&gt;1&lt;/field&gt;
+       &lt;field type="CharField" name="layer_name"&gt;openembedded-core&lt;/field&gt;
+     &lt;/object&gt;
+                    </literallayout>
+                    The "pk" values in the example above should start at "1"
+                    and increment uniquely.
+                    You can use the same layer name in multiple releases.
+                </para>
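+
+                <para>
+                    For example, the following sketch adds the same layer as a
+                    default for a second release; the "pk" and release values
+                    are illustrative and must match your own release records:
+                    <literallayout class='monospaced'>
+     &lt;object model="orm.releasedefaultlayer" pk="2"&gt;
+       &lt;field rel="ManyToOneRel" to="orm.release" name="release"&gt;2&lt;/field&gt;
+       &lt;field type="CharField" name="layer_name"&gt;openembedded-core&lt;/field&gt;
+     &lt;/object&gt;
+                    </literallayout>
+                </para>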
+            </section>
+
+            <section id='defining-layer-definitions'>
+                <title>Defining Layer Definitions</title>
+
+                <para>
+                    Layer definitions are the most complex.
+                    The following defines each layer, and then defines the
+                    exact layer version used for each respective release.
+                    You must have one <filename>orm.layer</filename>
+                    entry for each layer.
+                    Then, for each entry, you need a set of
+                    <filename>orm.layer_version</filename> entries that connect
+                    the layer with each release that includes the layer.
+                    In general, all releases include the layer.
+                    <literallayout class='monospaced'>
+     &lt;object model="orm.layer" pk="1"&gt;
+       &lt;field type="CharField" name="name"&gt;openembedded-core&lt;/field&gt;
+       &lt;field type="CharField" name="layer_index_url"&gt;&lt;/field&gt;
+       &lt;field type="CharField" name="vcs_url"&gt;git://git.yoctoproject.org/poky&lt;/field&gt;
+       &lt;field type="CharField" name="vcs_web_url"&gt;http://git.yoctoproject.org/cgit/cgit.cgi/poky&lt;/field&gt;
+       &lt;field type="CharField" name="vcs_web_tree_base_url"&gt;http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%&lt;/field&gt;
+       &lt;field type="CharField" name="vcs_web_file_base_url"&gt;http://git.yoctoproject.org/cgit/cgit.cgi/poky/tree/%path%?h=%branch%&lt;/field&gt;
+     &lt;/object&gt;
+     &lt;object model="orm.layer_version" pk="1"&gt;
+       &lt;field rel="ManyToOneRel" to="orm.layer" name="layer"&gt;1&lt;/field&gt;
+       &lt;field type="IntegerField" name="layer_source"&gt;0&lt;/field&gt;
+       &lt;field rel="ManyToOneRel" to="orm.release" name="release"&gt;1&lt;/field&gt;
+       &lt;field type="CharField" name="branch"&gt;rocko&lt;/field&gt;
+       &lt;field type="CharField" name="dirpath"&gt;meta&lt;/field&gt;
+     &lt;/object&gt;
+     &lt;object model="orm.layer_version" pk="2"&gt;
+       &lt;field rel="ManyToOneRel" to="orm.layer" name="layer"&gt;1&lt;/field&gt;
+       &lt;field type="IntegerField" name="layer_source"&gt;0&lt;/field&gt;
+       &lt;field rel="ManyToOneRel" to="orm.release" name="release"&gt;2&lt;/field&gt;
+       &lt;field type="CharField" name="branch"&gt;HEAD&lt;/field&gt;
+       &lt;field type="CharField" name="commit"&gt;HEAD&lt;/field&gt;
+       &lt;field type="CharField" name="dirpath"&gt;meta&lt;/field&gt;
+     &lt;/object&gt;
+     &lt;object model="orm.layer_version" pk="3"&gt;
+       &lt;field rel="ManyToOneRel" to="orm.layer" name="layer"&gt;1&lt;/field&gt;
+       &lt;field type="IntegerField" name="layer_source"&gt;0&lt;/field&gt;
+       &lt;field rel="ManyToOneRel" to="orm.release" name="release"&gt;3&lt;/field&gt;
+       &lt;field type="CharField" name="branch"&gt;master&lt;/field&gt;
+       &lt;field type="CharField" name="dirpath"&gt;meta&lt;/field&gt;
+     &lt;/object&gt;
+                    </literallayout>
+                    The layer "pk" values above must be unique, and typically start at "1".
+                    The layer version "pk" values must also be unique across all layers,
+                    and typically start at "1".
+                </para>
+            </section>
+        </section>
+    </section>
+
+    <section id='remote-toaster-monitoring'>
+        <title>Remote Toaster Monitoring</title>
+
+        <para>
+            Toaster has an API that allows remote management applications to
+            directly query the state of the Toaster server and its builds
+            in a machine-to-machine manner.
+            This API uses the
+            <ulink url='http://en.wikipedia.org/wiki/Representational_state_transfer'>REST</ulink>
+            interface and transfers JSON files.
+            For example, you might monitor a build inside a container
+            through well-known HTTP ports in order to easily access a
+            Toaster server running inside the container.
+            In this case, using the direct JSON API avoids having to parse
+            the web pages that are intended for human users.
+        </para>
+
+        <section id='checking-health'>
+            <title>Checking Health</title>
+
+            <para>
+                Before you use remote Toaster monitoring, you should do
+                a health check.
+                To do this, ping the Toaster server using the following call
+                to see if it is still alive:
+                <literallayout class='monospaced'>
+     http://<replaceable>host</replaceable>:<replaceable>port</replaceable>/health
+                </literallayout>
+                Be sure to provide values for <replaceable>host</replaceable>
+                and <replaceable>port</replaceable>.
+                If the server is alive, you will receive the following HTML
+                response:
+                <literallayout class='monospaced'>
+     &lt;!DOCTYPE html&gt;
+     &lt;html lang="en"&gt;
+       &lt;head&gt;&lt;title&gt;Toaster Health&lt;/title&gt;&lt;/head&gt;
+       &lt;body&gt;Ok&lt;/body&gt;
+     &lt;/html&gt;
+                </literallayout>
             </para>
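+
+            <para>
+                As a usage sketch, assuming <filename>curl</filename> is
+                installed and Toaster is listening on its default local
+                address and port (localhost:8000), you could run the health
+                check from the command line as follows:
+                <literallayout class='monospaced'>
+     $ curl http://localhost:8000/health
+                </literallayout>
+            </para>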
         </section>
 
-        <section id='json-structure'>
-            <title>File Structure</title>
+        <section id='determining-status-of-builds-in-progress'>
+            <title>Determining Status of Builds in Progress</title>
 
             <para>
-                The <filename>toasterconf.json</filename> file consists of
-                easily readable areas: configuration, layer sources, BitBake,
-                default release, and releases.
+                Sometimes it is useful to determine the status of a build
+                in progress.
+                To get the status of pending builds, use the following call:
+                <literallayout class='monospaced'>
+     http://<replaceable>host</replaceable>:<replaceable>port</replaceable>/toastergui/api/building
+                </literallayout>
+                Be sure to provide values for <replaceable>host</replaceable>
+                and <replaceable>port</replaceable>.
+                The output is a JSON file that itemizes all builds in
+                progress.
+                This file includes the time in seconds since each
+                respective build started as well as the progress of the
+                cloning, parsing, and task execution.
+                The following is sample output for a build in progress:
+                <literallayout class='monospaced'>
+     {"count": 1,
+      "building": [
+        {"machine": "beaglebone",
+           "seconds": "463.869",
+           "task": "927:2384",
+           "distro": "poky",
+           "clone": "1:1",
+           "id": 2,
+           "start": "2017-09-22T09:31:44.887Z",
+           "name": "20170922093200",
+           "parse": "818:818",
+           "project": "my_rocko",
+           "target": "core-image-minimal"
+           }]
+     }
+                </literallayout>
+                The JSON data for this query is returned in a single line.
+                In the previous example the line has been artificially split for readability.
+            </para>
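+
+            <para>
+                Because the data arrives on a single line, you might pipe it
+                through a pretty-printer when inspecting it by hand.
+                The following sketch assumes <filename>curl</filename> and
+                Python 3 are installed and that Toaster is listening on
+                localhost:8000:
+                <literallayout class='monospaced'>
+     $ curl -s http://localhost:8000/toastergui/api/building | python3 -m json.tool
+                </literallayout>
+            </para>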
+        </section>
+
+        <section id='checking-status-of-builds-completed'>
+            <title>Checking Status of Builds Completed</title>
+
+            <para>
+                Once a build completes, you can get its status by using
+                the following call:
+                <literallayout class='monospaced'>
+     http://<replaceable>host</replaceable>:<replaceable>port</replaceable>/toastergui/api/builds
+                </literallayout>
+                Be sure to provide values for <replaceable>host</replaceable>
+                and <replaceable>port</replaceable>.
+                The output is a JSON file that itemizes all complete builds,
+                and includes build summary information.
+                The following is sample output for a completed build:
+                <literallayout class='monospaced'>
+     {"count": 1,
+      "builds": [
+        {"distro": "poky",
+           "errors": 0,
+           "machine": "beaglebone",
+           "project": "my_rocko",
+           "stop": "2017-09-22T09:26:36.017Z",
+           "target": "quilt-native",
+           "seconds": "78.193",
+           "outcome": "Succeeded",
+           "id": 1,
+           "start": "2017-09-22T09:25:17.824Z",
+           "warnings": 1,
+           "name": "20170922092618"
+           }]
+     }
+                </literallayout>
+                The JSON data for this query is returned in a single line.
+                In the previous example the line has been artificially split for readability.
+            </para>
+        </section>
+
+        <section id='determining-status-of-a-specific-build'>
+            <title>Determining Status of a Specific Build</title>
+
+            <para>
+                Sometimes it is useful to determine the status of a specific
+                build.
+                To get the status of a specific build, use the following
+                call:
+                <literallayout class='monospaced'>
+     http://<replaceable>host</replaceable>:<replaceable>port</replaceable>/toastergui/api/build/<replaceable>ID</replaceable>
+                </literallayout>
+                Be sure to provide values for <replaceable>host</replaceable>,
+                <replaceable>port</replaceable>, and <replaceable>ID</replaceable>.
+                You can find the value for <replaceable>ID</replaceable> from the
+                Builds Completed query. See the
+                "<link linkend='checking-status-of-builds-completed'>Checking Status of Builds Completed</link>"
+                section for more information.
             </para>
 
-            <section id='json-config-area'>
-                <title>Configuration Area</title>
-
-                <para>
-                    This area of the JSON file sets which variables are exposed
-                    to users through the Toaster web interface.
-                    Users can easily edit these variables.
-                </para>
-
-                <para>
-                    The variables you set here are displayed in the
-                    "Configuration variables" page in Toaster.
-                    Minimally, you should set the
-                    <ulink url='&YOCTO_DOCS_REF_URL;#var-MACHINE'><filename>MACHINE</filename></ulink>
-                    variable, which appears to users as part of the project
-                    page in Toaster.
-                </para>
-
-                <para>
-                    Here is the default <filename>config</filename> area:
-                    <literallayout class='monospaced'>
-     "config": {
-         "MACHINE"      : "qemux86",
-         "DISTRO"       : "poky",
-         "IMAGE_FSTYPES": "ext3 jffs2 tar.bz2",
-         "IMAGE_INSTALL_append": "",
-         "PACKAGE_CLASSES": "package_rpm",
-     },
-                    </literallayout>
-                </para>
-            </section>
-
-            <section id='json-layersources-area'>
-                <title>Layer Sources Area</title>
-
-                <para>
-                    This area of the JSON file defines the
-                    <link linkend='layer-source'>layer sources</link>
-                    Toaster uses.
-                    Toaster reads layer information from layer sources.
-                    Three types of layer sources exist that Toaster
-                    recognizes: Local, LayerIndex, and Imported.
-                </para>
-
-                <para>
-                    The Local layer source reads layers from Git clones
-                    available on your local drive.
-                    Using a local layer source enables you to easily test
-                    Toaster.
-                    <note>
-                        If you are setting up a hosted version of Toaster,
-                        it does not make sense to have a local layer source.
-                    </note>
-                </para>
-
-                <para>
-                    The LayerIndex layer source uses a REST API exposed by
-                    instances of the Layer Index application (e.g the public
-                    <ulink url='http://layers.openembedded.org/'></ulink>)
-                    to read layer data.
-                </para>
-
-                <para>
-                    The Imported layer source is reserved for layer data
-                    manually introduced by the user or Toaster Administrator
-                    through the GUI.
-                    This layer source lets users import their own layers
-                    and build them with Toaster.
-                    You should not remove the imported layer source.
-                </para>
-
-                <para>
-                    Here is the default <filename>layersources</filename> area:
-                    <literallayout class='monospaced'>
-    "layersources": [
-        {
-            "name": "Local Yocto Project",
-            "sourcetype": "local",
-            "apiurl": "../../",
-            "branches": ["HEAD" ],
-            "layers": [
-                {
-                    "name": "openembedded-core",
-                    "local_path": "meta",
-                    "vcs_url": "remote:origin",
-                    "dirpath": "meta"
-                },
-                {
-                    "name": "meta-poky",
-                    "local_path": "meta-poky",
-                    "vcs_url": "remote:origin",
-                    "dirpath": "meta-poky"
-                },
-                {
-                    "name": "meta-yocto-bsp",
-                    "local_path": "meta-yocto-bsp",
-                    "vcs_url": "remote:origin",
-                    "dirpath": "meta-yocto-bsp"
-                }
-
-            ]
-        },
-        {
-            "name": "OpenEmbedded",
-            "sourcetype": "layerindex",
-            "apiurl": "http://layers.openembedded.org/layerindex/api/",
-            "branches": ["master", "jethro" ,"fido"]
-        },
-        {
-            "name": "Imported layers",
-            "sourcetype": "imported",
-            "apiurl": "",
-            "branches": ["master", "jethro","fido", "HEAD"]
-
+            <para>
+                The output is a JSON file that itemizes the specific build
+                and includes build summary information.
+                The following is sample output for a specific build:
+                <literallayout class='monospaced'>
+     {"build":
+       {"distro": "poky",
+        "errors": 0,
+        "machine": "beaglebone",
+        "project": "my_rocko",
+        "stop": "2017-09-22T09:26:36.017Z",
+        "target": "quilt-native",
+        "seconds": "78.193",
+        "outcome": "Succeeded",
+        "id": 1,
+        "start": "2017-09-22T09:25:17.824Z",
+        "warnings": 1,
+        "name": "20170922092618",
+        "cooker_log": "/opt/user/poky/build-toaster-2/tmp/log/cooker/beaglebone/build_20170922_022607.991.log"
         }
-    ],
-                    </literallayout>
-                </para>
-            </section>
-
-            <section id='json-bitbake-area'>
-                <title>BitBake Area</title>
-
-                <para>
-                    This area of the JSON file defines the version of
-                    BitBake Toaster uses.
-                    As shipped, Toaster is configured to recognize four
-                    versions of BitBake: master, fido, jethro, and HEAD.
-                    <note>
-                        HEAD is a special option that builds whatever is
-                        available on disk, without checking out any remote
-                        Git repositories.
-                    </note>
-                </para>
-
-                <para>
-                    Here is the default <filename>bitbake</filename> area:
-                    <literallayout class='monospaced'>
-     "bitbake" : [
-         {
-             "name": "master",
-             "giturl": "remote:origin",
-             "branch": "master",
-             "dirpath": "bitbake"
-         },
-        {
-             "name": "jethro",
-             "giturl": "remote:origin",
-             "branch": "jethro",
-             "dirpath": "bitbake"
-         },
-         {
-             "name": "fido",
-             "giturl": "remote:origin",
-             "branch": "fido",
-            "dirpath": "bitbake"
-        },
-         {
-             "name": "HEAD",
-             "giturl": "remote:origin",
-             "branch": "HEAD",
-             "dirpath": "bitbake"
-         }
-     ],
-                    </literallayout>
-                </para>
-            </section>
-
-            <section id='json-default-area'>
-                <title>Default Area</title>
-
-                <para>
-                    This area of the JSON file establishes a default
-                    release used by Toaster.
-                    As shipped, Toaster uses the "master" release.
-                </para>
-
-                <para>
-                    Here is the statement in the JSON file that establishes
-                    the default release:
-                    <literallayout class='monospaced'>
-     "defaultrelease": "master",
-                    </literallayout>
-                </para>
-            </section>
-
-            <section id='json-releases-area'>
-                <title>Releases Area</title>
-
-                <para>
-                    This area of the JSON file defines the versions of the
-                    OpenEmbedded build system Toaster recognizes.
-                    As shipped, Toaster is configured to work with the four
-                    releases described in the
-                    "<link linkend='toaster-releases-supported'>Pre-Configured Releases</link>"
-                    section.
-                </para>
-
-                <para>
-                    Here is the default <filename>releases</filename> area:
-                    <literallayout class='monospaced'>
-     "releases": [
-         {
-             "name": "master",
-             "description": "Yocto Project master",
-             "bitbake": "master",
-             "branch": "master",
-             "defaultlayers": [ "openembedded-core", "meta-poky", "meta-yocto-bsp"],
-             "layersourcepriority": { "Imported layers": 99, "Local Yocto Project" : 10, "OpenEmbedded" :  0 },
-             "helptext": "Toaster will run your builds using the tip of the &lt;a href=\"http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/\"&gt;Yocto Project master branch&lt;/a&gt;, where active development takes place. This is not a stable branch, so your builds might not work as expected."
-         },
-         {
-             "name": "jethro",
-             "description": "Yocto Project 2.0 Jethro",
-             "bitbake": "jethro",
-             "branch": "jethro",
-             "defaultlayers": [ "openembedded-core", "meta-poky", "meta-yocto-bsp"],
-             "layersourcepriority": { "Imported layers": 99, "Local Yocto Project" : 10, "OpenEmbedded" :  0 },
-             "helptext": "Toaster will run your builds with the tip of the &lt;a href=\"http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=jethro\"&gt;Yocto Project 2.0 \"Jethro\"&lt;/a&gt; branch."
-         },
-         {
-             "name": "fido",
-             "description": "Yocto Project 1.8 Fido",
-             "bitbake": "fido",
-             "branch": "fido",
-             "defaultlayers": [ "openembedded-core", "meta-poky", "meta-yocto-bsp"],
-             "layersourcepriority": { "Imported layers": 99, "Local Yocto Project" : 10, "OpenEmbedded" :  0 },
-             "helptext": "Toaster will run your builds with the tip of the &lt;a href=\"http://git.yoctoproject.org/cgit/cgit.cgi/poky/log/?h=fido\"&gt;Yocto Project 1.8 \"Fido\"&lt;/a&gt; branch."
-         },
-         {
-             "name": "local",
-             "description": "Local Yocto Project",
-             "bitbake": "HEAD",
-             "branch": "HEAD",
-             "defaultlayers": [ "openembedded-core", "meta-poky", "meta-yocto-bsp"],
-             "layersourcepriority": { "Imported layers": 99, "Local Yocto Project" : 10, "OpenEmbedded" :  0 },
-             "helptext": "Toaster will run your builds with the version of the Yocto Project you have cloned or downloaded to your computer."
-         }
-     ]
-                    </literallayout>
-                </para>
-            </section>
+     }
+                </literallayout>
+                The JSON data for this query is returned in a single line.
+                In the previous example the line has been artificially split for readability.
+            </para>
         </section>
     </section>
 
@@ -870,7 +677,7 @@
             created that are specific to Toaster and are used to control
             configuration and back-end tasks.
             You can locate these commands in the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
             (e.g. <filename>poky</filename>) at
             <filename>bitbake/lib/toaster/manage.py</filename>.
             This section documents those commands.
@@ -879,7 +686,7 @@
                     When using <filename>manage.py</filename> commands given
                     a default configuration, you must be sure that your
                     working directory is set to the
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>.
+                    <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>.
                     Using <filename>manage.py</filename> commands from the
                     Build Directory allows Toaster to find the
                     <filename>toaster.sqlite</filename> file, which is located
@@ -1005,30 +812,6 @@
             </para>
         </section>
 
-        <section id='toaster-command-loadconf'>
-            <title><filename>loadconf</filename></title>
-
-            <para>
-                The <filename>loadconf</filename> command loads an
-                existing Toaster configuration file (JSON file).
-                You must run this on a new database that does not have any
-                data.
-                Running this command on an existing database that has data
-                results in errors.
-                Access the command as follows:
-                <literallayout class='monospaced'>
-     $ bitbake/lib/toaster/manage.py loadconf <replaceable>filepath</replaceable>
-                </literallayout>
-                The <filename>loadconf</filename> command configures a database
-                based on the supplied existing
-                <filename>toasterconf.json</filename> file.
-                For information on the <filename>toasterconf.json</filename>,
-                see the
-                "<link linkend='toaster-json-files'>JSON Files</link>"
-                section.
-            </para>
-        </section>
-
         <section id='toaster-command-runbuilds'>
             <title><filename>runbuilds</filename></title>
 
diff --git a/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-setup-and-use.xml b/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-setup-and-use.xml
index 966c35d..c26a32a 100644
--- a/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-setup-and-use.xml
+++ b/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-setup-and-use.xml
@@ -18,7 +18,7 @@
 
         <para>
             Navigate to the root of your
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
             (e.g. <filename>poky</filename>):
             <literallayout class='monospaced'>
      $ cd poky
@@ -61,19 +61,32 @@
         </para>
     </section>
 
-    <section id='setting-a-different-address'>
-        <title>Setting a Different Address</title>
+    <section id='setting-up-external-access'>
+        <title>Setting up External Access</title>
 
         <para>
             By default, Toaster binds to the loopback address
-            (i.e. localhost).
-            You can use the <filename>WEBPORT</filename> parameter to
-            set a different host.
-            For example, the following command sets the host and port
-            to "0.0.0.0:8400":
+            (i.e. localhost), which does not allow access from
+            external hosts. To allow external access, use the
+            <filename>WEBPORT</filename> parameter to bind to an
+            externally reachable address, typically the IP address
+            that your NIC uses to connect to the network.
+            You can also bind to all IP addresses the computer
+            supports by using the shortcut
+            "0.0.0.0:<replaceable>port</replaceable>".
+        </para>
+
+        <para>
+            The following example binds to all IP addresses on the
+            host:
             <literallayout class='monospaced'>
      $ source toaster start webport=0.0.0.0:8400
             </literallayout>
+            This example binds to a specific IP address on the host's
+            NIC:
+            <literallayout class='monospaced'>
+     $ source toaster start webport=192.168.1.1:8400
+            </literallayout>
         </para>
     </section>
 
@@ -145,7 +158,7 @@
               <listitem><para>
                   From the directory containing the Toaster database,
                   which by default is the
-                  <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>,
+                  <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
                   invoke the <filename>createsuperuser</filename> command
                   from <filename>manage.py</filename>:
                   <literallayout class='monospaced'>
@@ -369,16 +382,16 @@
                           <filename>TOASTER_CONF</filename>, which is
                           relative to the Toaster root directory
                           <filename>TOASTER_DIR</filename>.
-                          For more information on the Toaster configuration file
-                          <filename>TOASTER_CONF</filename>, see the
-                          <link linkend='toaster-json-files'>JSON Files</link>
-                          section of this manual.
+                          For more information on the Toaster configuration file,
+                          see the
+                          <link linkend='configuring-toaster'>Configuring Toaster</link>
+                          chapter.
                       </para>
 
                       <para>
                           This line also runs the <filename>checksettings</filename>
                           command, which configures the location of the Toaster
-                          <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build directory</ulink>.
+                          <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build directory</ulink>.
                           The Toaster root directory <filename>TOASTER_DIR</filename>
                           determines where the Toaster build directory
                           is created on the file system.
diff --git a/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-start.xml b/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-start.xml
index c5c6795..65e057a 100644
--- a/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-start.xml
+++ b/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual-start.xml
@@ -41,7 +41,7 @@
             The requirements file is located in the
             <filename>bitbake</filename> directory, which is located in the
             root directory of the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+            <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
             (e.g. <filename>poky/bitbake/toaster-requirements.txt</filename>).
             The dependencies appear in a <filename>pip</filename>,
             install-compatible format.
@@ -54,7 +54,7 @@
                 You need to install the packages that Toaster requires.
                 Use this command:
                 <literallayout class='monospaced'>
-     $ $ pip3 install --user -r bitbake/toaster-requirements.txt
+     $ pip3 install --user -r bitbake/toaster-requirements.txt
                 </literallayout>
                 The previous command installs the necessary Toaster modules
                 into a local python 3 cache in your
diff --git a/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual.xml b/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual.xml
index c7a7fcd..5a1b60e 100644
--- a/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual.xml
+++ b/import-layers/yocto-poky/documentation/toaster-manual/toaster-manual.xml
@@ -22,11 +22,11 @@
 
         <authorgroup>
             <author>
-                <firstname>Scott</firstname> <surname>Rifenbark</surname>
+                <firstname>Kristi</firstname> <surname>Rifenbark</surname>
                 <affiliation>
-                    <orgname>Intel Corporation</orgname>
+                    <orgname>Scotty's Documentation Services, INC</orgname>
                 </affiliation>
-                <email>srifenbark@gmail.com</email>
+                <email>kristi@buzzcollectivemarketing.com</email>
             </author>
         </authorgroup>
 
@@ -57,24 +57,19 @@
                 <revremark>Released with the Yocto Project 2.3 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.1</revnumber>
-                <date>June 2017</date>
-                <revremark>Released with the Yocto Project 2.3.1 Release.</revremark>
+                <revnumber>2.4</revnumber>
+                <date>October 2017</date>
+                <revremark>Released with the Yocto Project 2.4 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.2</revnumber>
-                <date>September 2017</date>
-                <revremark>Released with the Yocto Project 2.3.2 Release.</revremark>
-            </revision>
-            <revision>
-                <revnumber>2.3.3</revnumber>
+                <revnumber>2.4.1</revnumber>
                 <date>January 2018</date>
-                <revremark>Released with the Yocto Project 2.3.3 Release.</revremark>
+                <revremark>Released with the Yocto Project 2.4.1 Release.</revremark>
             </revision>
             <revision>
-                <revnumber>2.3.4</revnumber>
-                <date>April 2018</date>
-                <revremark>Released with the Yocto Project 2.3.4 Release.</revremark>
+                <revnumber>2.4.2</revnumber>
+                <date>March 2018</date>
+                <revremark>Released with the Yocto Project 2.4.2 Release.</revremark>
             </revision>
        </revhistory>
 
@@ -91,27 +86,29 @@
            <note><title>Manual Notes</title>
                <itemizedlist>
                    <listitem><para>
-                       For the latest version of the Yocto Project Toaster
-                       User Manual associated with this Yocto Project release
-                       (version &YOCTO_DOC_VERSION;),
-                       see the Yocto Project Toaster User Manual from the
+                       This version of the
+                       <emphasis>Toaster User Manual</emphasis>
+                       is for the &YOCTO_DOC_VERSION; release of the
+                       Yocto Project.
+                       To be sure you have the latest version of the manual
+                       for this release, use the manual from the
                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
                        </para></listitem>
                    <listitem><para>
-                       This version of the manual is version
-                       &YOCTO_DOC_VERSION;.
-                       For later releases of the Yocto Project (if they exist),
-                       go to the
+                       For manuals associated with other releases of the Yocto
+                       Project, go to the
                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
                        and use the drop-down "Active Releases" button
-                       and choose the Yocto Project version for which you want
-                       the manual.
+                       and choose the manual associated with the desired
+                       Yocto Project release.
                        </para></listitem>
                    <listitem><para>
-                       For an in-development version of the Yocto Project
-                       Toaster User Manual, see
-                       <ulink url='&YOCTO_DOCS_URL;/latest/toaster-manual/toaster-manual.html'></ulink>.
-                       </para></listitem>
+                        To report any inaccuracies or problems with this
+                        manual, send an email to the Yocto Project
+                        discussion group at
+                        <filename>yocto@yoctoproject.org</filename> or log into
+                        the freenode <filename>#yocto</filename> channel.
+                        </para></listitem>
                </itemizedlist>
            </note>
 
diff --git a/import-layers/yocto-poky/documentation/tools/mega-manual.sed b/import-layers/yocto-poky/documentation/tools/mega-manual.sed
index 936b109..ae2800c 100644
--- a/import-layers/yocto-poky/documentation/tools/mega-manual.sed
+++ b/import-layers/yocto-poky/documentation/tools/mega-manual.sed
@@ -2,32 +2,32 @@
 # This style is for manual folders like "yocto-project-qs" and "poky-ref-manual".
 # This is the old way that did it.  Can't do that now that we have "bitbake-user-manual" strings
 # in the mega-manual.
-# s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/[a-z]*-[a-z]*-[a-z]*\/[a-z]*-[a-z]*-[a-z]*.html#/\"link\" href=\"#/g
-s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/yocto-project-qs\/yocto-project-qs.html#/\"link\" href=\"#/g
-s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/poky-ref-manual\/poky-ref-manual.html#/\"link\" href=\"#/g
+# s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/[a-z]*-[a-z]*-[a-z]*\/[a-z]*-[a-z]*-[a-z]*.html#/\"link\" href=\"#/g
+s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/yocto-project-qs\/yocto-project-qs.html#/\"link\" href=\"#/g
+s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/poky-ref-manual\/poky-ref-manual.html#/\"link\" href=\"#/g
 
 # Processes all other manuals (<word>-<word> style) except for the BitBake User Manual because
 # it is not included in the mega-manual.
 # This style is for manual folders that use two word, which is the standard now (e.g. "ref-manual").
 # This was the one-liner that worked before we introduced the BitBake User Manual, which is
 # not in the mega-manual.
-# s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/[a-z]*-[a-z]*\/[a-z]*-[a-z]*.html#/\"link\" href=\"#/g
+# s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/[a-z]*-[a-z]*\/[a-z]*-[a-z]*.html#/\"link\" href=\"#/g
 
-s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/sdk-manual\/sdk-manual.html#/\"link\" href=\"#/g
-s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/bsp-guide\/bsp-guide.html#/\"link\" href=\"#/g
-s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/dev-manual\/dev-manual.html#/\"link\" href=\"#/g
-s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/kernel-dev\/kernel-dev.html#/\"link\" href=\"#/g
-s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/profile-manual\/profile-manual.html#/\"link\" href=\"#/g
-s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/ref-manual\/ref-manual.html#/\"link\" href=\"#/g
-s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/toaster-manual\/toaster-manual.html#/\"link\" href=\"#/g
-s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/yocto-project-qs\/yocto-project-qs.html#/\"link\" href=\"#/g
+s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/sdk-manual\/sdk-manual.html#/\"link\" href=\"#/g
+s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/bsp-guide\/bsp-guide.html#/\"link\" href=\"#/g
+s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/dev-manual\/dev-manual.html#/\"link\" href=\"#/g
+s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/kernel-dev\/kernel-dev.html#/\"link\" href=\"#/g
+s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/profile-manual\/profile-manual.html#/\"link\" href=\"#/g
+s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/ref-manual\/ref-manual.html#/\"link\" href=\"#/g
+s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/toaster-manual\/toaster-manual.html#/\"link\" href=\"#/g
+s/\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/yocto-project-qs\/yocto-project-qs.html#/\"link\" href=\"#/g
 
 # Process cases where just an external manual is referenced without an id anchor
-s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/yocto-project-qs\/yocto-project-qs.html\" target=\"_top\">Yocto Project Quick Start<\/a>/Yocto Project Quick Start/g
-s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/dev-manual\/dev-manual.html\" target=\"_top\">Yocto Project Development Manual<\/a>/Yocto Project Development Manual/g
-s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/sdk-manual\/sdk-manual.html\" target=\"_top\">Yocto Project Software Development Kit (SDK) Developer's Guide<\/a>/Yocto Project Software Development Kit (SDK) Developer's Guide/g
-s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/bsp-guide\/bsp-guide.html\" target=\"_top\">Yocto Project Board Support Package (BSP) Developer's Guide<\/a>/Yocto Project Board Support Package (BSP) Developer's Guide/g
-s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/profile-manual\/profile-manual.html\" target=\"_top\">Yocto Project Profiling and Tracing Manual<\/a>/Yocto Project Profiling and Tracing Manual/g
-s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/kernel-dev\/kernel-dev.html\" target=\"_top\">Yocto Project Linux Kernel Development Manual<\/a>/Yocto Project Linux Kernel Development Manual/g
-s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/ref-manual\/ref-manual.html\" target=\"_top\">Yocto Project Reference Manual<\/a>/Yocto Project Reference Manual/g
-s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.3.4\/toaster-manual\/toaster-manual.html\" target=\"_top\">Toaster User Manual<\/a>/Toaster User Manual/g
+s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/yocto-project-qs\/yocto-project-qs.html\" target=\"_top\">Yocto Project Quick Start<\/a>/Yocto Project Quick Start/g
+s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/dev-manual\/dev-manual.html\" target=\"_top\">Yocto Project Development Tasks Manual<\/a>/Yocto Project Development Tasks Manual/g
+s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/sdk-manual\/sdk-manual.html\" target=\"_top\">Yocto Project Application Development and the Extensible Software Development Kit (eSDK)<\/a>/Yocto Project Application Development and the Extensible Software Development Kit (eSDK)/g
+s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/bsp-guide\/bsp-guide.html\" target=\"_top\">Yocto Project Board Support Package (BSP) Developer's Guide<\/a>/Yocto Project Board Support Package (BSP) Developer's Guide/g
+s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/profile-manual\/profile-manual.html\" target=\"_top\">Yocto Project Profiling and Tracing Manual<\/a>/Yocto Project Profiling and Tracing Manual/g
+s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/kernel-dev\/kernel-dev.html\" target=\"_top\">Yocto Project Linux Kernel Development Manual<\/a>/Yocto Project Linux Kernel Development Manual/g
+s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/ref-manual\/ref-manual.html\" target=\"_top\">Yocto Project Reference Manual<\/a>/Yocto Project Reference Manual/g
+s/<a class=\"ulink\" href=\"http:\/\/www.yoctoproject.org\/docs\/2.4.2\/toaster-manual\/toaster-manual.html\" target=\"_top\">Toaster User Manual<\/a>/Toaster User Manual/g
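The effect of these rules can be spot-checked by piping a sample external cross-reference through the script (assuming GNU sed, run from the documentation/tools directory); the external ulink collapses into an in-document link:

     $ echo '<a class="ulink" href="http://www.yoctoproject.org/docs/2.4.2/ref-manual/ref-manual.html#faq">FAQ</a>' | sed -f mega-manual.sed
     <a class="link" href="#faq">FAQ</a>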
diff --git a/import-layers/yocto-poky/documentation/yocto-project-qs/figures/yocto-environment.png b/import-layers/yocto-poky/documentation/yocto-project-qs/figures/yocto-environment.png
deleted file mode 100644
index 3596903..0000000
--- a/import-layers/yocto-poky/documentation/yocto-project-qs/figures/yocto-environment.png
+++ /dev/null
Binary files differ
diff --git a/import-layers/yocto-poky/documentation/yocto-project-qs/qs-style.css b/import-layers/yocto-poky/documentation/yocto-project-qs/qs-style.css
index 235c85a..948f1be 100644
--- a/import-layers/yocto-poky/documentation/yocto-project-qs/qs-style.css
+++ b/import-layers/yocto-poky/documentation/yocto-project-qs/qs-style.css
@@ -730,6 +730,11 @@
 }
 
 
+.writernotes {
+  color: red;
+}
+
+
   /*********** /
  /  graphics  /
 / ***********/
diff --git a/import-layers/yocto-poky/documentation/yocto-project-qs/yocto-project-qs.xml b/import-layers/yocto-poky/documentation/yocto-project-qs/yocto-project-qs.xml
index b4b3f4b..e6dae7f 100644
--- a/import-layers/yocto-poky/documentation/yocto-project-qs/yocto-project-qs.xml
+++ b/import-layers/yocto-poky/documentation/yocto-project-qs/yocto-project-qs.xml
@@ -16,35 +16,36 @@
                 Permission is granted to copy, distribute and/or modify this document under
                 the terms of the <ulink type="http" url="http://creativecommons.org/licenses/by-sa/2.0/uk/">Creative Commons Attribution-Share Alike 2.0 UK: England &amp; Wales</ulink> as published by Creative Commons.
             </para>
-            <note><title>Manual Notes</title>
-                <itemizedlist>
-                    <listitem><para>
-                        For the latest version of the Yocto Project Quick
-                        Start associated with this Yocto Project release
-                        (version &YOCTO_DOC_VERSION;),
-                        see the Yocto Project Quick Start from the
-                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
+           <note><title>Manual Notes</title>
+               <itemizedlist>
+                   <listitem><para>
+                       This version of the
+                       <emphasis>Yocto Project Quick Start</emphasis>
+                       is for the &YOCTO_DOC_VERSION; release of the
+                       Yocto Project.
+                       To be sure you have the latest version of the manual
+                       for this release, use the manual from the
+                       <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>.
+                       </para></listitem>
+                   <listitem><para>
+                       For manuals associated with other releases of the Yocto
+                       Project, go to the
+                       <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
+                       and use the drop-down "Active Releases" button
+                       and choose the manual associated with the desired
+                       Yocto Project release.
+                       </para></listitem>
+                   <listitem><para>
+                        To report any inaccuracies or problems with this
+                        manual, send an email to the Yocto Project
+                        discussion group at
+                        <filename>yocto@yoctoproject.org</filename> or log into
+                        the freenode <filename>#yocto</filename> channel.
                         </para></listitem>
-                    <listitem><para>
-                        This version of the manual is version
-                        &YOCTO_DOC_VERSION;.
-                        For later releases of the Yocto Project (if they exist),
-                        go to the
-                        <ulink url='&YOCTO_HOME_URL;/documentation'>Yocto Project documentation page</ulink>
-                        and use the drop-down "Active Releases" button
-                        and choose the Yocto Project version for which you want
-                        the manual.
-                        </para></listitem>
-                    <listitem><para>
-                        For an in-development version of the Yocto Project
-                        Quick Start, see
-                        <ulink url='&YOCTO_DOCS_URL;/latest/yocto-project-qs/yocto-project-qs.html'></ulink>.
-                        </para></listitem>
-                </itemizedlist>
-            </note>
+               </itemizedlist>
+           </note>
         </legalnotice>
 
-
         <abstract>
             <imagedata fileref="figures/yocto-project-transp.png"
                         width="6in" depth="1in"
@@ -60,24 +61,12 @@
             focus is developers of embedded Linux systems.
             Among other things, the Yocto Project uses a build host based
             on the OpenEmbedded (OE) project, which uses the
-            <ulink url='&YOCTO_DOCS_DEV_URL;#bitbake-term'>BitBake</ulink>
+            <ulink url='&YOCTO_DOCS_REF_URL;#bitbake-term'>BitBake</ulink>
             tool, to construct complete Linux images.
-            The BitBake and OE components are combined together to form
+            The BitBake and OE components combine together to form
             a reference build host, historically known as
-            <ulink url='&YOCTO_DOCS_DEV_URL;#poky'>Poky</ulink>
-            (<emphasis>Pah</emphasis>-key).
-        </para>
-
-        <para>
-            If you do not have a system that runs Linux and you want to give
-            the Yocto Project a test run, you might consider using the Yocto
-            Project Build Appliance.
-            The Build Appliance allows you to build and boot a custom embedded
-            Linux image with the Yocto Project using a non-Linux development
-            system.
-            See the
-            <ulink url='https://www.yoctoproject.org/tools-resources/projects/build-appliance'>Yocto Project Build Appliance</ulink>
-            for more information.
+            <ulink url='&YOCTO_DOCS_REF_URL;#poky'>Poky</ulink>
+            (<emphasis>Pah</emphasis>-kee).
         </para>
 
         <para>
@@ -86,30 +75,74 @@
             Linux images.
             Rather than go into great detail about the Yocto Project and its
             many capabilities, this quick start provides the minimal
-            information you need to try out the Yocto Project using a
-            supported Linux build host.
+            information you need to try out the Yocto Project using either a
+            supported Linux build host or a build host set up to use
+            <ulink url='https://git.yoctoproject.org/cgit/cgit.cgi/crops/about/'>CROPS</ulink>,
+            which leverages
+            <ulink url='https://www.docker.com/'>Docker Containers</ulink>.
+        </para>
+
+        <para>
             Reading and using the quick start should result in you having a
             basic understanding of what the Yocto Project is and how to use
             some of its core components.
             You will also have worked through steps to produce two images:
-            one that is suitable for emulation and one that boots on actual
-            hardware.
+            one that runs on the emulator (QEMU) and one that boots on actual
+            hardware (i.e. MinnowBoard Turbot).
             The examples highlight the ease with which you can use the
             Yocto Project to create images for multiple types of hardware.
         </para>
 
         <para>
+            The following list directs you to key sections of this
+            quick start:
+            <itemizedlist>
+                <listitem><para>
+                    <ulink url='http://www.yoctoproject.org/docs/2.4/yocto-project-qs/yocto-project-qs.html#yp-resources'>Setting Up to Use the Yocto Project</ulink>
+                    </para></listitem>
+                <listitem><para>
+                    <ulink url='http://www.yoctoproject.org/docs/2.4/yocto-project-qs/yocto-project-qs.html#building-an-image-for-emulation'>Building an Image for Emulation</ulink>
+                    </para></listitem>
+                <listitem><para>
+                    <ulink url='http://www.yoctoproject.org/docs/2.4/yocto-project-qs/yocto-project-qs.html#building-an-image-for-hardware'>Building an Image for Hardware</ulink>
+                    </para></listitem>
+            </itemizedlist>
+<!--
+            <note>
+                If you do not have a system that runs Linux and you want to give
+                the Yocto Project a test run, you might consider using the Yocto
+                Project Build Appliance.
+                The Build Appliance allows you to build and boot a custom
+                embedded Linux image with the Yocto Project using a non-Linux
+                development system.
+                See the
+                <ulink url='https://www.yoctoproject.org/tools-resources/projects/build-appliance'>Yocto Project Build Appliance</ulink>
+                for more information.
+            </note>
+-->
+        </para>
+
+        <para>
             For more detailed information on the Yocto Project, you can
             reference these resources:
             <itemizedlist>
-                <listitem><para><emphasis>Website:</emphasis>
+                <listitem><para>
+                    <emphasis>Website:</emphasis>
                     The
                     <ulink url='&YOCTO_HOME_URL;'>Yocto Project Website</ulink>
-                    provides the latest builds, breaking news, full development
-                    documentation, and access to a rich Yocto Project
-                    Development Community into which you can tap.
+                    provides background information, the latest builds, breaking
+                    news, full development documentation, and access to a rich
+                    Yocto Project Development Community into which you can tap.
                     </para></listitem>
-                <listitem><para><emphasis>FAQs:</emphasis>
+                <listitem><para>
+                    <emphasis>Yocto Project Development Environment Overview:</emphasis>
+                    The
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#yp-intro'>Introducing the Yocto Project Development Environment</ulink>"
+                    section presents an overview of the Yocto Project
+                    development environment.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>FAQs:</emphasis>
                     Lists commonly asked Yocto Project questions and answers.
                     You can find two FAQs:
                     <ulink url='&YOCTO_WIKI_URL;/wiki/FAQ'>Yocto Project FAQ</ulink>
@@ -117,7 +150,8 @@
                     "<ulink url='&YOCTO_DOCS_REF_URL;#faq'>FAQ</ulink>"
                     chapter in the Yocto Project Reference Manual.
                     </para></listitem>
-                <listitem><para><emphasis>Developer Screencast:</emphasis>
+                <listitem><para>
+                    <emphasis>Developer Screencast:</emphasis>
                     The
                     <ulink url='http://vimeo.com/36450321'>Getting Started with the Yocto Project - New Developer Screencast Tutorial</ulink>
                     provides a 30-minute video created for users unfamiliar
@@ -126,238 +160,240 @@
                     While this screencast is somewhat dated, the introductory
                     and fundamental concepts are useful for the beginner.
                     </para></listitem>
+                <listitem><para>
+                    <emphasis>Comprehensive List of Links and Other Documentation:</emphasis>
+                    The
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#resources-links-and-related-documentation'>Links and Related Documentation</ulink>"
+                    section in the Yocto Project Reference Manual provides a
+                    comprehensive list of related links and documentation.
+                    </para></listitem>
             </itemizedlist>
         </para>
     </section>
 
-    <section id='yp-intro'>
-        <title>Introducing the Yocto Project Development Environment</title>
-
-        <para>
-            The Yocto Project through the OpenEmbedded build system provides an
-            open source development environment targeting the ARM, MIPS,
-            PowerPC, and x86 architectures for a variety of platforms
-            including x86-64 and emulated ones.
-            You can use components from the Yocto Project to design, develop,
-            build, debug, simulate, and test the complete software stack using
-            Linux, the X Window System, GTK+ frameworks, and Qt frameworks.
-        </para>
-
-        <mediaobject>
-            <imageobject>
-                <imagedata fileref="figures/yocto-environment.png"
-                    format="PNG" align='center' width="8in"/>
-            </imageobject>
-         </mediaobject>
-
-        <para>
-            Here are some highlights for the Yocto Project:
-        </para>
-
-        <itemizedlist>
-            <listitem><para>
-                Provides a recent Linux kernel along with a set of system
-                commands and libraries suitable for the embedded
-                environment.
-                </para></listitem>
-            <listitem><para>
-                Makes available system components such as X11, GTK+, Qt,
-                Clutter, and SDL (among others) so you can create a rich user
-                experience on devices that have display hardware.
-                For devices that do not have a display or where you wish to
-                use alternative UI frameworks, these components need not be
-                installed.
-                </para></listitem>
-            <listitem><para>
-                Creates a focused and stable core compatible with the
-                OpenEmbedded project with which you can easily and reliably
-                build and develop.
-                </para></listitem>
-            <listitem><para>
-                Fully supports a wide range of hardware and device emulation
-                through the Quick EMUlator (QEMU).
-                </para></listitem>
-            <listitem><para>
-                Provides a layer mechanism that allows you to easily extend
-                the system, make customizations, and keep them organized.
-                </para></listitem>
-        </itemizedlist>
-
-        <para>
-            You can use the Yocto Project to generate images for many kinds
-            of devices.
-            As mentioned earlier, the Yocto Project supports creation of
-            reference images that you can boot within and emulate using QEMU.
-            The standard example machines target QEMU full-system
-            emulation for 32-bit and 64-bit variants of x86, ARM, MIPS, and
-            PowerPC architectures.
-            Beyond emulation, you can use the layer mechanism to extend
-            support to just about any platform that Linux can run on and that
-            a toolchain can target.
-        </para>
-
-        <para>
-            Another Yocto Project feature is the Sato reference User
-            Interface.
-            This optional UI that is based on GTK+ is intended for devices with
-            restricted screen sizes and is included as part of the
-            OpenEmbedded Core layer so that developers can test parts of the
-            software stack.
-        </para>
-    </section>
-
     <section id='yp-resources'>
         <title>Setting Up to Use the Yocto Project</title>
 
         <para>
-            The following list shows what you need in order to use a
-            Linux-based build host to use the Yocto Project to build images:
+            Setting up to use the Yocto Project involves getting your build
+            host ready.
+            If you have a native Linux machine that runs a Yocto Project
+            supported distribution as described by the
+            "<ulink url='&YOCTO_DOCS_REF_URL;#detailed-supported-distros'>Supported Linux Distributions</ulink>"
+            section in the Yocto Project Reference Manual, you can prepare
+            that machine as your build host.
+            See the
+            "<link linkend='qs-native-linux-build-host'>Using a Native Linux Machine</link>"
+            section for more information.
         </para>
 
-        <itemizedlist>
-            <listitem><para><emphasis>Build Host</emphasis>
-                A build host with a minimum of 50 Gbytes of free disk
-                space that is running a supported Linux distribution (i.e.
-                recent releases of Fedora, openSUSE, CentOS, Debian, or
-                Ubuntu).
-                </para></listitem>
-            <listitem><para><emphasis>Build Host Packages</emphasis>
-                Appropriate packages installed on the build host.
-                </para></listitem>
-            <listitem><para><emphasis>The Yocto Project</emphasis>
-                A release of the Yocto Project.
-                </para></listitem>
-        </itemizedlist>
+        <para>
+            If you do not want to use the Yocto Project on a native Linux
+            machine, you can prepare your build host to use
+            <ulink url='https://git.yoctoproject.org/cgit/cgit.cgi/crops/about/'>CROPS</ulink>,
+            which leverages
+            <ulink url='https://www.docker.com/'>Docker Containers</ulink>.
+            You can set up a build host for Windows, Mac, and Linux
+            machines.
+            See the
+            "<link linkend='qs-crops-build-host'>Using CROPS and Containers</link>"
+            section for more information.
+        </para>
 
-        <section id='the-linux-distro'>
-            <title>The Linux Distribution</title>
+        <section id='qs-crops-build-host'>
+            <title>Using CROPS and Containers</title>
 
             <para>
-                The Yocto Project team verifies each release against recent
-                versions of the most popular Linux distributions that
-                provide stable releases.
-                In general, if you have the current release minus one of the
-                following distributions, you should have no problems.
-                <itemizedlist>
+                Follow these steps to get your build host set up with a
+                Poky container that you can use to complete the build
+                examples further down in the Quick Start:
+                <orderedlist>
                     <listitem><para>
-                        Ubuntu
+                        <emphasis>Set Up to use CROss PlatformS (CROPS):</emphasis>
+                        Work through the first six steps of the procedure
+                        in the
+                        "<ulink url='&YOCTO_DOCS_DEV_URL;#setting-up-to-use-crops'>Setting Up to Use CROss PlatformS (CROPS)</ulink>"
+                        section of the Yocto Project Development Tasks Manual.
                         </para></listitem>
                     <listitem><para>
-                        Fedora
-                        </para></listitem>
-                    <listitem><para>
-                        openSUSE
-                        </para></listitem>
-                    <listitem><para>
-                        CentOS
-                        </para></listitem>
-                    <listitem><para>
-                        Debian
-                        </para></listitem>
-                </itemizedlist>
-                For a more detailed list of distributions that support the
-                Yocto Project, see the
-                "<ulink url='&YOCTO_DOCS_REF_URL;#detailed-supported-distros'>Supported Linux Distributions</ulink>"
-                section in the Yocto Project Reference Manual.
-            </para>
+                        <emphasis>Set Up the Poky Container to Use the Yocto Project:</emphasis>
+                        Go to
+                        <ulink url='https://github.com/crops/poky-container/blob/master/README.md'></ulink>
+                        and follow the directions to set up the Poky container
+                        on your build host.</para>
 
-            <para>
-                The OpenEmbedded build system should be able to run on any
-                modern distribution that has the following versions for
-                Git, tar, and Python.
-                <itemizedlist>
-                    <listitem><para>
-                        Git 1.8.3.1 or greater
+                        <para>Once you complete the setup instructions for your
+                        machine, you need to get a copy of the
+                        <filename>poky</filename> repository on your build
+                        host.
+                        See the
+                        "<link linkend='releases'>Yocto Project Release</link>"
+                        section to continue.
                         </para></listitem>
-                    <listitem><para>
-                        tar 1.24 or greater
-                        </para></listitem>
-                    <listitem><para>
-                        Python 3.4.0 or greater.
-                        </para></listitem>
-                </itemizedlist>
-                If your build host does not meet any of these three listed
-                version requirements, you can take steps to prepare the
-                system so that you can still use the Yocto Project.
-                See the
-                "<ulink url='&YOCTO_DOCS_REF_URL;#required-git-tar-and-python-versions'>Required Git, tar, and Python Versions</ulink>"
-                section in the Yocto Project Reference Manual for information.
+                </orderedlist>
             </para>
         </section>
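Once the Poky container is set up, launching it typically looks like the following sketch, based on the crops/poky README; the bind-mounted host directory is an assumption:

     $ docker run --rm -it -v /home/myuser/workdir:/workdir crops/poky --workdir=/workdir
     # Drops you into a shell inside the container with /workdir mounted
     # from the host, ready for the poky clone described later.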
 
-        <section id='packages'>
-            <title>The Build Host Packages</title>
+        <section id='qs-native-linux-build-host'>
+            <title>Using a Native Linux Machine</title>
 
             <para>
-                Required build host packages vary depending on your
-                build machine and what you want to do with the Yocto Project.
-                For example, if you want to build an image that can run
-                on QEMU in graphical mode (a minimal, basic build
-                requirement), then the build host package requirements
-                are different than if you want to build an image on a headless
-                system or build out the Yocto Project documentation set.
+                The following list shows what you need in order to use a
+                Linux-based build host to use the Yocto Project to build images:
             </para>
 
-            <para>
-                Collectively, the number of required packages is large
-                if you want to be able to cover all cases.
-                <note>
-                    In general, you need to have root access and then install
-                    the required packages.
-                    Thus, the commands in the following section may or may
-                    not work depending on whether or not your Linux
-                    distribution has <filename>sudo</filename> installed.
-                </note>
-            </para>
+            <itemizedlist>
+                <listitem><para><emphasis>Build Host</emphasis>
+                    A build host with a minimum of 50 Gbytes of free disk
+                    space that is running a supported Linux distribution (i.e.
+                    recent releases of Fedora, openSUSE, CentOS, Debian, or
+                    Ubuntu).
+                    </para></listitem>
+                <listitem><para><emphasis>Build Host Packages</emphasis>
+                    Appropriate packages installed on the build host.
+                    </para></listitem>
+            </itemizedlist>
 
-            <para>
-                The following list shows the required packages needed to build
-                an image that runs on QEMU in graphical mode (e.g. essential
-                plus graphics support).
-                For lists of required packages for other scenarios, see the
-                "<ulink url='&YOCTO_DOCS_REF_URL;#required-packages-for-the-host-development-system'>Required Packages for the Host Development System</ulink>"
-                section in the Yocto Project Reference Manual.
-                <itemizedlist>
-                    <listitem><para><emphasis>Ubuntu and Debian</emphasis>
-                        <literallayout class='monospaced'>
+            <section id='the-linux-distro'>
+                <title>The Linux Distribution</title>
+
+                <para>
+                    The Yocto Project team verifies each release against recent
+                    versions of the most popular Linux distributions that
+                    provide stable releases.
+                    In general, if you have the current release minus one of the
+                    following distributions, you should have no problems.
+                    <itemizedlist>
+                        <listitem><para>
+                            Ubuntu
+                            </para></listitem>
+                        <listitem><para>
+                            Fedora
+                            </para></listitem>
+                        <listitem><para>
+                            openSUSE
+                            </para></listitem>
+                        <listitem><para>
+                            CentOS
+                            </para></listitem>
+                        <listitem><para>
+                            Debian
+                            </para></listitem>
+                    </itemizedlist>
+                    For a more detailed list of distributions that support the
+                    Yocto Project, see the
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#detailed-supported-distros'>Supported Linux Distributions</ulink>"
+                    section in the Yocto Project Reference Manual.
+                </para>
+
+                <para>
+                    The OpenEmbedded build system should be able to run on any
+                    modern distribution that has the following versions for
+                    Git, tar, and Python.
+                    <itemizedlist>
+                        <listitem><para>
+                            Git 1.8.3.1 or greater
+                            </para></listitem>
+                        <listitem><para>
+                            tar 1.27 or greater
+                            </para></listitem>
+                        <listitem><para>
+                            Python 3.4.0 or greater.
+                            </para></listitem>
+                    </itemizedlist>
+                    If your build host does not meet any of these three listed
+                    version requirements, you can take steps to prepare the
+                    system so that you can still use the Yocto Project.
+                    See the
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#required-git-tar-and-python-versions'>Required Git, tar, and Python Versions</ulink>"
+                    section in the Yocto Project Reference Manual for information.
+                </para>
+            </section>
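A quick way to confirm a build host meets these minimum versions (any equivalent check works):

     $ git --version
     $ tar --version
     $ python3 --version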
+
+            <section id='packages'>
+                <title>The Build Host Packages</title>
+
+                <para>
+                    Required build host packages vary depending on your
+                    build machine and what you want to do with the Yocto Project.
+                    For example, if you want to build an image that can run
+                    on QEMU in graphical mode (a minimal, basic build
+                    requirement), then the build host package requirements
+                    are different than if you want to build an image on a headless
+                    system or build out the Yocto Project documentation set.
+                </para>
+
+                <para>
+                    Collectively, the number of required packages is large
+                    if you want to be able to cover all cases.
+                    <note>
+                        In general, you need to have root access and then install
+                        the required packages.
+                        Thus, the commands in the following section may or may
+                        not work depending on whether or not your Linux
+                        distribution has <filename>sudo</filename> installed.
+                    </note>
+                </para>
+
+                <para>
+                    The following list shows the required packages needed to build
+                    an image that runs on QEMU in graphical mode (e.g. essential
+                    plus graphics support).
+                    For lists of required packages for other scenarios, see the
+                    "<ulink url='&YOCTO_DOCS_REF_URL;#required-packages-for-the-host-development-system'>Required Packages for the Host Development System</ulink>"
+                    section in the Yocto Project Reference Manual.
+                    <itemizedlist>
+                        <listitem><para><emphasis>Ubuntu and Debian</emphasis>
+                            <literallayout class='monospaced'>
      $ sudo apt-get install &UBUNTU_HOST_PACKAGES_ESSENTIAL; libsdl1.2-dev xterm
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>Fedora</emphasis>
-                        <literallayout class='monospaced'>
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para><emphasis>Fedora</emphasis>
+                            <literallayout class='monospaced'>
      $ sudo dnf install &FEDORA_HOST_PACKAGES_ESSENTIAL; SDL-devel xterm
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>OpenSUSE</emphasis>
-                        <literallayout class='monospaced'>
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para><emphasis>OpenSUSE</emphasis>
+                            <literallayout class='monospaced'>
      $ sudo zypper install &OPENSUSE_HOST_PACKAGES_ESSENTIAL; libSDL-devel xterm
-                        </literallayout>
-                        </para></listitem>
-                    <listitem><para><emphasis>CentOS</emphasis>
-                        <literallayout class='monospaced'>
+                            </literallayout>
+                            </para></listitem>
+                        <listitem><para><emphasis>CentOS</emphasis>
+                            <literallayout class='monospaced'>
      $ sudo yum install &CENTOS_HOST_PACKAGES_ESSENTIAL; SDL-devel xterm
-                        </literallayout>
-                        <note><title>Notes</title>
-                            <itemizedlist>
-                                <listitem><para>
-                                    Extra Packages for Enterprise Linux
-                                    (i.e. <filename>epel-release</filename>)
-                                    is a collection of packages from Fedora
-                                    built on RHEL/CentOS for easy installation
-                                    of packages not included in enterprise
-                                    Linux by default.
-                                    You need to install these packages
-                                    separately.
-                                    </para></listitem>
-                                <listitem><para>
-                                    The <filename>makecache</filename> command
-                                    consumes additional Metadata from
-                                    <filename>epel-release</filename>.
-                                    </para></listitem>
-                            </itemizedlist>
-                        </note>
-                        </para></listitem>
-                </itemizedlist>
+                            </literallayout>
+                            <note><title>Notes</title>
+                                <itemizedlist>
+                                    <listitem><para>
+                                        Extra Packages for Enterprise Linux
+                                        (i.e. <filename>epel-release</filename>)
+                                        is a collection of packages from Fedora
+                                        built on RHEL/CentOS for easy installation
+                                        of packages not included in enterprise
+                                        Linux by default.
+                                        You need to install these packages
+                                        separately.
+                                        </para></listitem>
+                                    <listitem><para>
+                                        The <filename>makecache</filename> command
+                                        consumes additional Metadata from
+                                        <filename>epel-release</filename>.
+                                        </para></listitem>
+                                </itemizedlist>
+                            </note>
+                            </para></listitem>
+                    </itemizedlist>
+                </para>
+            </section>
+
+            <para>
+                Once you complete the setup instructions for your
+                machine, you need to get a copy of the
+                <filename>poky</filename> repository on your build
+                host.
+                Continue with the
+                "<link linkend='releases'>Yocto Project Release</link>"
+                section.
             </para>
         </section>
 
@@ -365,11 +401,12 @@
             <title>Yocto Project Release</title>
 
             <para>
-                The last requirement you need to meet before using the
-                Yocto Project is getting a Yocto Project release.
+                Now that your build host has the right packages (native
+                Linux machine) or you have the Poky container set up
+                (CROPS), you need to get a copy of the Yocto Project.
                 It is recommended that you get the latest Yocto Project release
                 by setting up (cloning in
-                <ulink url='&YOCTO_DOCS_DEV_URL;#git'>Git</ulink> terms) a
+                <ulink url='&YOCTO_DOCS_REF_URL;#git'>Git</ulink> terms) a
                 local copy of the <filename>poky</filename> Git repository on
                 your build host and then checking out the latest release.
                 Doing so allows you to easily update to newer Yocto Project
@@ -377,9 +414,15 @@
             </para>
 
             <para>
-                Here is an example from an Ubuntu build host that clones the
-                <filename>poky</filename> repository and then checks out the
-                latest Yocto Project Release (i.e. &DISTRO;):
+                Here is an example from a native Linux machine that is
+                running Ubuntu.
+                <note>
+                    If your build host is using a Poky container, you can
+                    use the same Git commands.
+                </note>
+                The following example clones the <filename>poky</filename>
+                repository and then checks out the latest Yocto Project Release
+                by tag (i.e. <filename>&DISTRO_REL_TAG;</filename>):
                 <literallayout class='monospaced'>
      $ git clone git://git.yoctoproject.org/poky
      Cloning into 'poky'...
@@ -389,18 +432,35 @@
      Receiving objects: 100% (361782/361782), 131.94 MiB | 6.88 MiB/s, done.
      Resolving deltas: 100% (268619/268619), done.
      Checking connectivity... done.
-     $ git checkout &DISTRO_NAME_NO_CAP;
+     $ git checkout tags/&DISTRO_REL_TAG; -b poky_&DISTRO;
                 </literallayout>
-                You can also get the Yocto Project Files by downloading
-                Yocto Project releases from the
-                <ulink url="&YOCTO_HOME_URL;">Yocto Project website</ulink>.
             </para>
 
             <para>
-                For more information on getting set up with the Yocto Project
-                release, see the
-                "<ulink url='&YOCTO_DOCS_DEV_URL;#local-yp-release'>Yocto Project Release</ulink>"
-                item in the Yocto Project Development Manual.
+                The previous Git <filename>checkout</filename> command
+                creates a local branch named
+                <filename>poky_&DISTRO;</filename>.
+                The files available to you in that branch exactly match the
+                repository's files in the
+                <filename>&DISTRO_NAME_NO_CAP;</filename>
+                development branch at the time of the Yocto Project &DISTRO;
+                release.
+                <note>
+                    Rather than checking out the entire development branch
+                    of a release (i.e. the tip), which could be continuously
+                    changing while you are doing your development, you would
+                    check out a branch based on a release tag as shown in
+                    the previous example.
+                    Doing so provides you with an unchanging, stable set of
+                    files.
+                </note>
+            </para>
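To see which release tags are available before checking one out (illustrative; Yocto Project release tags follow the yocto-<version> naming convention):

     $ cd poky
     $ git tag -l 'yocto-2.4*'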
+
+            <para>
+                For more options and information about accessing Yocto
+                Project related repositories, see the
+                "<ulink url='&YOCTO_DOCS_DEV_URL;#working-with-yocto-project-source-files'>Working With Yocto Project Source Files</ulink>"
+                section in the Yocto Project Development Tasks Manual.
             </para>
         </section>
     </section>
@@ -409,20 +469,22 @@
         <title>Building Images</title>
 
         <para>
-            Now that you have your system requirements in order, you can give
-            Yocto Project a try.
-            You can try out Yocto Project using either the command-line
-            interface or using Toaster, which uses a graphical user
-            interface.
-            If you want to try out the Yocto Project using a GUI, see the
-            <ulink url='&YOCTO_DOCS_TOAST_URL;'>Toaster User Manual</ulink>
-            for information on how to install and set up Toaster.
+            You are now ready to give the Yocto Project a try.
+            For this example, you will be using the command line to build
+            your images.
+            <note>
+                A graphical user interface to the Yocto Project is available
+                through
+                <ulink url='&YOCTO_DOCS_REF_URL;#toaster-term'>Toaster</ulink>.
+                See the
+                <ulink url='&YOCTO_DOCS_TOAST_URL;'>Toaster User Manual</ulink>
+                for more information.
+            </note>
         </para>
 
         <para>
-            To use the Yocto Project through the command-line interface,
-            finish this quick start, which presents steps that let you
-            do the following:
+            The remainder of this quick start steps you through the
+            following:
             <itemizedlist>
                 <listitem><para>
                     Build a <filename>qemux86</filename> reference image
@@ -433,7 +495,7 @@
                     create a second image that you can load onto bootable
                     media and actually boot target hardware.
                     This example uses the MinnowBoard
-                    MAX-compatible boards.
+                    Turbot-compatible boards.
                     </para></listitem>
             </itemizedlist>
             <note>
@@ -452,37 +514,39 @@
                 Use the following commands to build your image.
                 The OpenEmbedded build system creates an entire Linux
                 distribution, including the toolchain, from source.
-                <note><title>Note about Network Proxies</title>
-                    <para>
-                        By default, the build process searches for source code
-                        using a pre-determined order through a set of
-                        locations.
-                        If you are working behind a firewall and your build
-                        host is not set up for proxies, you could encounter
-                        problems with the build process when fetching source
-                        code (e.g. fetcher failures or Git failures).
-                    </para>
-
-                    <para>
-                        If you do not know your proxy settings, consult your
-                        local network infrastructure resources and get that
-                        information.
-                        A good starting point could also be to check your web
-                        browser settings.
-                        Finally, you can find more information on using the
-                        Yocto Project behind a firewall in the Yocto Project
-                        Reference Manual
-                        <ulink url='&YOCTO_DOCS_REF_URL;#how-does-the-yocto-project-obtain-source-code-and-will-it-work-behind-my-firewall-or-proxy-server'>FAQ</ulink>
-                        and on the
-                        "<ulink url='https://wiki.yoctoproject.org/wiki/Working_Behind_a_Network_Proxy'>Working Behind a Network Proxy</ulink>"
-                        wiki page.
-                    </para>
+                <note><title>Notes about Network Proxies</title>
+                    <itemizedlist>
+                        <listitem><para>
+                            By default, the build process searches for source
+                            code using a pre-determined order through a set of
+                            locations.
+                            If you are working behind a firewall and your build
+                            host is not set up for proxies, you could encounter
+                            problems with the build process when fetching source
+                            code (e.g. fetcher failures or Git failures).
+                            </para></listitem>
+                        <listitem><para>
+                            If you do not know your proxy settings, consult your
+                            local network infrastructure resources and get that
+                            information.
+                            A good starting point could also be to check your
+                            web browser settings.
+                            Finally, you can find more information on using the
+                            Yocto Project behind a firewall in the Yocto Project
+                            Reference Manual
+                            <ulink url='&YOCTO_DOCS_REF_URL;#how-does-the-yocto-project-obtain-source-code-and-will-it-work-behind-my-firewall-or-proxy-server'>FAQ</ulink>
+                            and on the
+                            "<ulink url='https://wiki.yoctoproject.org/wiki/Working_Behind_a_Network_Proxy'>Working Behind a Network Proxy</ulink>"
+                            wiki page.
+                            </para></listitem>
+                    </itemizedlist>
                 </note>
             </para>
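Where a proxy is required, a common starting point before running the build is to export the standard proxy variables; the host and port here are placeholders, not values from this change:

     $ export http_proxy="http://proxy.example.com:8080"
     $ export https_proxy="http://proxy.example.com:8080"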
 
             <para>
                 <orderedlist>
-                    <listitem><para><emphasis>Be Sure Your Build Host is Set Up:</emphasis>
+                    <listitem><para>
+                        <emphasis>Be Sure Your Build Host is Set Up:</emphasis>
                         The steps to build an image in this section depend on
                         your build host being properly set up.
                         Be sure you have worked through the requirements
@@ -490,9 +554,10 @@
                         "<link linkend='yp-resources'>Setting Up to Use the Yocto Project</link>"
                         section.
                         </para></listitem>
-                    <listitem><para><emphasis>Check Out Your Branch:</emphasis>
+                    <listitem><para>
+                        <emphasis>Check Out Your Branch:</emphasis>
                         Be sure you are in the
-                        <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>
+                        <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>
                         (e.g. <filename>poky</filename>) and then check out
                         the branch associated with the latest Yocto Project
                         Release:
@@ -510,7 +575,8 @@
                         branch ensures you are using the latest files for
                         that release.
                         </para></listitem>
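For orientation only, checking a release out by tag into a working branch commonly takes the following form; the tag shown corresponds to the 2.4 (rocko) release this change targets, and the local branch name "my-yocto-2.4" is purely illustrative:

     $ cd ~/poky
     $ git checkout tags/yocto-2.4 -b my-yocto-2.4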
-                    <listitem><para><emphasis>Initialize the Build Environment:</emphasis>
+                    <listitem><para>
+                        <emphasis>Initialize the Build Environment:</emphasis>
                         Run the
                         <ulink url='&YOCTO_DOCS_REF_URL;#structure-core-script'><filename>&OE_INIT_FILE;</filename></ulink>
                         environment setup script to define the OpenEmbedded
@@ -519,23 +585,17 @@
      $ source &OE_INIT_FILE;
                         </literallayout>
                         Among other things, the script creates the
-                        <ulink url='&YOCTO_DOCS_DEV_URL;#build-directory'>Build Directory</ulink>,
+                        <ulink url='&YOCTO_DOCS_REF_URL;#build-directory'>Build Directory</ulink>,
                         which is <filename>build</filename> in this case
                         and is located in the
-                        <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>.
+                        <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>.
                         After the script runs, your current working directory
                         is set to the Build Directory.
                         Later, when the build completes, the Build Directory
                         contains all the files created during the build.
-                        <note>
-                            For information on running a memory-resident
-                            <ulink url='&YOCTO_DOCS_REF_URL;#usingpoky-components-bitbake'>BitBake</ulink>,
-                            see the
-                            <ulink url='&YOCTO_DOCS_REF_URL;#structure-memres-core-script'><filename>oe-init-build-env-memres</filename></ulink>
-                            setup script.
-                        </note>
                         </para></listitem>
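One variation worth knowing about, not shown in the step above: the oe-init-build-env setup script accepts an optional Build Directory argument, which lets you keep several independent builds side by side. The directory name here is just an example:

     $ cd ~/poky
     $ source oe-init-build-env build-qemux86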
-                    <listitem><para><emphasis>Examine Your Local Configuration File:</emphasis>
+                    <listitem><para>
+                        <emphasis>Examine Your Local Configuration File:</emphasis>
                         When you set up the build environment, a local
                         configuration file named
                         <filename>local.conf</filename> becomes available in
@@ -589,7 +649,8 @@
                                 </para></listitem>
                         </itemizedlist>
                         </para></listitem>
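As a hedged illustration of the kinds of settings this step asks you to review, a local.conf might carry assignments such as the following; every value is an example rather than a default prescribed by this manual:

     # Target machine for the build
     MACHINE ?= "qemux86"
     # Package format used when generating the image
     PACKAGE_CLASSES ?= "package_rpm"
     # Build parallelism; tune both values to the cores available on the build host
     BB_NUMBER_THREADS ?= "4"
     PARALLEL_MAKE ?= "-j 4"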
-                    <listitem><para><emphasis>Start the Build:</emphasis>
+                    <listitem><para>
+                        <emphasis>Start the Build:</emphasis>
                         Continue with the following command to build an OS image
                         for the target, which is
                         <filename>core-image-sato</filename> in this example:
@@ -647,7 +708,8 @@
                         "<ulink url='&YOCTO_DOCS_REF_URL;#ref-images'>Images</ulink>"
                         chapter in the Yocto Project Reference Manual.
                         </para></listitem>
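For quick reference, the build described in this step amounts to a single BitBake invocation against the example image named above:

     $ bitbake core-image-sato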
-                    <listitem><para><emphasis>Simulate Your Image Using QEMU:</emphasis>
+                    <listitem><para>
+                        <emphasis>Simulate Your Image Using QEMU:</emphasis>
                         Once this particular image is built, you can start QEMU
                         and run the image:
                         <literallayout class='monospaced'>
@@ -655,9 +717,10 @@
                         </literallayout>
                         If you want to learn more about running QEMU, see the
                         "<ulink url="&YOCTO_DOCS_DEV_URL;#dev-manual-qemu">Using the Quick EMUlator (QEMU)</ulink>"
-                        chapter in the Yocto Project Development Manual.
+                        chapter in the Yocto Project Development Tasks Manual.
                         </para></listitem>
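As a reminder of the form the QEMU launch takes, assuming the qemux86 machine used in this example (substitute your own MACHINE setting):

     $ runqemu qemux86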
-                    <listitem><para><emphasis>Exit QEMU:</emphasis>
+                    <listitem><para>
+                        <emphasis>Exit QEMU:</emphasis>
                         Exit QEMU by either clicking on the shutdown icon or by
                         typing <filename>Ctrl-C</filename> in the QEMU
                        transcript window from which you invoked QEMU.
@@ -672,13 +735,13 @@
             <para id='qs-minnowboard-example'>
                 The following steps show how easy it is to set up to build an
                 image for a new machine.
-                These steps build an image for the MinnowBoard MAX, which is
+                These steps build an image for the MinnowBoard Turbot, which is
                 supported by the Yocto Project and the
                 <filename>meta-intel</filename> <filename>intel-corei7-64</filename>
                 and <filename>intel-core2-32</filename> Board Support Packages
                 (BSPs).
                 <note>
-                    The MinnowBoard MAX ships with 64-bit firmware.
+                    The MinnowBoard Turbot ships with 64-bit firmware.
                     If you want to use the board in 32-bit mode, you must
                     download the
                     <ulink url='http://firmware.intel.com/projects/minnowboard-max'>32-bit firmware</ulink>.
@@ -687,13 +750,15 @@
 
             <para>
                 <orderedlist>
-                    <listitem><para><emphasis>Create a Local Copy of the
+                    <listitem><para>
+                        <emphasis>Create a Local Copy of the
                         <filename>meta-intel</filename> Repository:</emphasis>
-                        Building an image for the MinnowBoard MAX requires the
+                        Building an image for the MinnowBoard Turbot requires
+                        the
                         <filename>meta-intel</filename> layer.
                         Use the <filename>git clone</filename> command to create
                         a local copy of the repository inside your
-                        <ulink url='&YOCTO_DOCS_DEV_URL;#source-directory'>Source Directory</ulink>,
+                        <ulink url='&YOCTO_DOCS_REF_URL;#source-directory'>Source Directory</ulink>,
                         which is <filename>poky</filename> in this example:
                         <literallayout class='monospaced'>
      $ cd $HOME/poky
@@ -713,17 +778,30 @@
                         sure that both repositories
                         (<filename>meta-intel</filename> and
                         <filename>poky</filename>) are using the same releases.
+                        Because you checked out the <filename>poky</filename>
+                        repository using the <filename>&DISTRO_REL_TAG;</filename>
+                        tag, you should use a <filename>meta-intel</filename>
+                        tag that corresponds to the same release.
                        Consequently, you need to check out the
-                        "<filename>&DISTRO_NAME_NO_CAP;</filename>" release after
-                        cloning <filename>meta-intel</filename>:
+                        "<filename>&METAINTELVERSION;-&DISTRO_NAME_NO_CAP;-&YOCTO_DOC_VERSION;</filename>"
+                        tag after cloning <filename>meta-intel</filename>:
                         <literallayout class='monospaced'>
      $ cd $HOME/poky/meta-intel
-     $ git checkout &DISTRO_NAME_NO_CAP;
-     Branch &DISTRO_NAME_NO_CAP; set up to track remote branch &DISTRO_NAME_NO_CAP; from origin.
-     Switched to a new branch '&DISTRO_NAME_NO_CAP;'
+     $ git checkout tags/&METAINTELVERSION;-&DISTRO_NAME_NO_CAP;-&YOCTO_DOC_VERSION; -b meta-intel-&DISTRO_NAME_NO_CAP;-&YOCTO_DOC_VERSION;
+     Switched to a new branch 'meta-intel-&DISTRO_NAME_NO_CAP;-&YOCTO_DOC_VERSION;'
                         </literallayout>
+                        The previous Git <filename>checkout</filename> command
+                        creates a local branch named
+                        <filename>meta-intel-&DISTRO_NAME_NO_CAP;-&YOCTO_DOC_VERSION;</filename>.
+                        You can name the local branch anything you like by
+                        substituting your own name for
+                        "meta-intel-&DISTRO_NAME_NO_CAP;-&YOCTO_DOC_VERSION;"
+                        in the above command.
                         </para></listitem>
-                    <listitem><para><emphasis>Configure the Build:</emphasis>
+                    <listitem><para>
+                        <emphasis>Configure the Build:</emphasis>
                         To configure the build, you edit the
                         <filename>bblayers.conf</filename> and
                         <filename>local.conf</filename> files, both of which are
@@ -760,13 +838,15 @@
                         </para>
                         </note>
                         </para></listitem>
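A minimal sketch of the two edits this step describes, assuming the meta-intel clone location used earlier and the 64-bit Intel BSP; adjust the layer path and machine to match your setup:

     # conf/bblayers.conf: append the meta-intel layer (use your actual clone path)
     BBLAYERS += "/home/you/poky/meta-intel"

     # conf/local.conf: select a machine provided by the BSP
     MACHINE ?= "intel-corei7-64"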
-                    <listitem><para><emphasis>Build an Image for MinnowBoard MAX:</emphasis>
+                    <listitem><para>
+                        <emphasis>Build an Image for MinnowBoard
+                        Turbot:</emphasis>
                         The type of image you build depends on your goals.
                         For example, the previous build created a
                         <filename>core-image-sato</filename> image, which is an
                         image with Sato support.
                         It is possible to build many image types for the
-                        MinnowBoard MAX.
+                        MinnowBoard Turbot.
                        One possibility is <filename>core-image-base</filename>,
                        which is a console-only image.
                         Another choice could be a
@@ -826,7 +906,8 @@
      tmp/deploy/images/intel-corei7-64/core-image-base-intel-corei7-64.wic
                         </literallayout>
                         </para></listitem>
-                    <listitem><para><emphasis>Write the Image:</emphasis>
+                    <listitem><para>
+                        <emphasis>Write the Image:</emphasis>
                        You can write the image just built to bootable media
                         (e.g. a USB key, SATA drive, SD card, etc.) using the
                         <filename>dd</filename> utility:
@@ -840,9 +921,10 @@
                         <filename>/dev/mmcblk0</filename>, which is most likely an
                         SD card).
                         </para></listitem>
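A concrete but hedged form of the dd invocation, using the image path shown above and a placeholder device node; double-check that the of= device really is your removable media, because dd overwrites it without confirmation, and note that status=progress requires a reasonably recent GNU coreutils:

     $ sudo dd if=tmp/deploy/images/intel-corei7-64/core-image-base-intel-corei7-64.wic of=/dev/sdX bs=4M status=progress
     $ sync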
-                    <listitem><para><emphasis>Boot the Hardware:</emphasis>
+                    <listitem><para>
+                        <emphasis>Boot the Hardware:</emphasis>
                         With the boot device provisioned, you can insert the
-                        media into the MinnowBoard MAX and boot the hardware.
+                        media into the MinnowBoard Turbot and boot the hardware.
                         The board should automatically detect the media and boot to
                         the bootloader and subsequently the operating system.
                         </para>
@@ -880,66 +962,76 @@
            Depending on what your primary interests are with the Yocto Project,
             you could consider any of the following:
             <itemizedlist>
-                <listitem><para><emphasis>Visit the Yocto Project Web Site:</emphasis>
+                <listitem><para>
+                    <emphasis>Visit the Yocto Project Web Site:</emphasis>
                     The official
                     <ulink url='&YOCTO_HOME_URL;'>Yocto Project</ulink>
                     web site contains information on the entire project.
                     Visiting this site is a good way to familiarize yourself
                     with the overall project.
                     </para></listitem>
-                <listitem><para><emphasis>Look Through the Yocto Project Development Manual:</emphasis>
-                    The
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-intro'>Yocto Project Development Manual</ulink>
-                    is a great place to get a feel for how to use the Yocto
-                    Project.
-                    The manual contains conceptual and procedural information
-                    that covers
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-model'>common development models</ulink>
-                    and introduces
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-newbie'>the Yocto Project open source development environment</ulink>.
-                    The manual also contains several targeted sections that
-                    cover specific
-                    <ulink url='&YOCTO_DOCS_DEV_URL;#extendpoky'>common tasks</ulink>
-                    such as understanding and creating layers, customizing
-                    images, writing new recipes, working with libraries, and
-                    configuring and patching the kernel.
+                <listitem><para>
+                    <emphasis>Look Through the
+                    <ulink url='&YOCTO_DOCS_DEV_URL;#dev-manual-intro'>Yocto Project Development Tasks Manual</ulink>:</emphasis>
+                    This manual contains procedural information grouped to
+                    help you get set up, work with layers, customize images,
+                    write new recipes, work with libraries, and use QEMU.
+                    The information is task-based and spans the breadth of the
+                    Yocto Project.
                     </para></listitem>
-                <listitem><para><emphasis>Look Through the Yocto Project Software Development Kit (SDK) Developer's Guide:</emphasis>
-                    The
-                    <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-intro'>Yocto Project Software Development Kit (SDK) Developer's Guide</ulink>
-                    describes how to use both the
+                <listitem><para>
+                    <emphasis>Look Through the
+                    <ulink url='&YOCTO_DOCS_SDK_URL;'>Yocto Project Application Development and the Extensible Software Development Kit (eSDK)</ulink>
+                    manual:</emphasis>
+                    This manual describes how to use both the
                     <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-using-the-standard-sdk'>standard SDK</ulink>
                     and the
                     <ulink url='&YOCTO_DOCS_SDK_URL;#sdk-extensible'>extensible SDK</ulink>,
                     which are used primarily for application development.
-                    This manual also provides an example workflow that uses
-                    the popular <trademark class='trade'>Eclipse</trademark>
-                    development environment.
+                    This manual also provides example workflows
+                    that use the popular <trademark class='trade'>Eclipse</trademark>
+                    development environment and that use <filename>devtool</filename>.
                     See the
                     "<ulink url='&YOCTO_DOCS_SDK_URL;#workflow-using-eclipse'>Workflow using Eclipse™</ulink>"
-                    section.
+                    and
+                    "<ulink url='&YOCTO_DOCS_SDK_URL;#using-devtool-in-your-sdk-workflow'>Using <filename>devtool</filename> in your SDK Workflow</ulink>"
+                    sections for more information.
                     </para></listitem>
-                <listitem><para><emphasis>Learn About Board Support Packages (BSPs):</emphasis>
+                <listitem><para>
+                    <emphasis>Learn About Kernel Development:</emphasis>
+                    If you want to see how to work with the kernel and
+                    understand Yocto Linux kernels, see the
+                    <ulink url='&YOCTO_DOCS_KERNEL_DEV_URL;#kernel-dev-intro'>Yocto Project Linux Kernel Development Manual</ulink>.
+                    This manual provides information on how to patch the
+                    kernel, modify kernel recipes, and configure the kernel.
+                    </para></listitem>
+                <listitem><para>
+                    <emphasis>Learn About Board Support Packages (BSPs):</emphasis>
                     If you want to learn about BSPs, see the
                     <ulink url='&YOCTO_DOCS_BSP_URL;#bsp'>Yocto Project Board Support Packages (BSP) Developer's Guide</ulink>.
+                    This manual also provides an example BSP creation workflow.
+                    See the
+                    "<ulink url='&YOCTO_DOCS_BSP_URL;#developing-a-board-support-package-bsp'>Developing a Board Support Package (BSP)</ulink>"
+                    section.
                     </para></listitem>
-                <listitem><para><emphasis>Learn About Toaster:</emphasis>
+                <listitem><para>
+                    <emphasis>Learn About Toaster:</emphasis>
                     Toaster is a web interface to the Yocto Project's
                     OpenEmbedded build system.
                     If you are interested in using this type of interface to
                     create images, see the
                     <ulink url='&YOCTO_DOCS_TOAST_URL;#toaster-manual-intro'>Toaster User Manual</ulink>.
                     </para></listitem>
-                <listitem><para><emphasis>Have Available the Yocto Project Reference Manual</emphasis>
-                    The
-                    <ulink url='&YOCTO_DOCS_REF_URL;#ref-manual-intro'>Yocto Project Reference Manual</ulink>,
-                    unlike the rest of the Yocto Project manual set, is
-                    comprised of material suited for reference rather than
+                <listitem><para>
+                    <emphasis>Have Available the
+                    <ulink url='&YOCTO_DOCS_REF_URL;#ref-manual-intro'>Yocto Project Reference Manual:</ulink></emphasis>
+                    Unlike the rest of the Yocto Project manual set, this manual
+                    is composed of material suited for reference rather than
                     procedures.
                     You can get
                     <ulink url='&YOCTO_DOCS_REF_URL;#usingpoky'>build details</ulink>,
                     a
-                    <ulink url='&YOCTO_DOCS_REF_URL;#closer-look'>closer look</ulink>
+                    <ulink url='&YOCTO_DOCS_REF_URL;#development-concepts'>closer look</ulink>
                     at how the pieces of the Yocto Project development
                     environment work together, information on various
                     <ulink url='&YOCTO_DOCS_REF_URL;#technical-details'>technical details</ulink>,
diff --git a/import-layers/yocto-poky/meta-poky/README.poky b/import-layers/yocto-poky/meta-poky/README.poky
new file mode 100644
index 0000000..0a42843
--- /dev/null
+++ b/import-layers/yocto-poky/meta-poky/README.poky
@@ -0,0 +1,58 @@
+Poky
+====
+
+Poky is an integration of various components to form a complete prepackaged
+build system and development environment. It features support for building
+customised embedded device style images. There are reference demo images
+featuring a X11/Matchbox/GTK themed UI called Sato. The system supports
+cross-architecture application development using QEMU emulation and a
+standalone toolchain and SDK with IDE integration.
+
+Additional information on the specifics of hardware that Poky supports
+is available in README.hardware. Further hardware support can easily be added
+in the form of layers which extend the systems capabilities in a modular way.
+
+As an integration layer Poky consists of several upstream projects such as 
+BitBake, OpenEmbedded-Core, Yocto documentation and various sources of information 
+e.g. for the hardware support. Poky is in turn a component of the Yocto Project.
+
+The Yocto Project has extensive documentation about the system including a 
+reference manual which can be found at:
+    http://yoctoproject.org/documentation
+
+OpenEmbedded-Core is a layer containing the core metadata for current versions
+of OpenEmbedded. It is distro-less (can build a functional image with
+DISTRO = "nodistro") and contains only emulated machine support.
+
+For information about OpenEmbedded, see the OpenEmbedded website:
+    http://www.openembedded.org/
+
+Where to Send Patches
+=====================
+
+As Poky is an integration repository (built using a tool called combo-layer),
+patches against the various components should be sent to their respective
+upstreams:
+
+bitbake:
+    Git repository: http://git.openembedded.org/bitbake/
+    Mailing list: bitbake-devel@lists.openembedded.org
+
+documentation:
+    Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/yocto-docs/
+    Mailing list: yocto@yoctoproject.org
+
+meta-poky, meta-yocto-bsp:
+    Git repository: http://git.yoctoproject.org/cgit/cgit.cgi/meta-yocto(-bsp)
+    Mailing list: poky@yoctoproject.org
+
+Everything else should be sent to the OpenEmbedded Core mailing list.  If in
+doubt, check the oe-core git repository for the content you intend to modify.
+Before sending, be sure the patches apply cleanly to the current oe-core git
+repository.
+
+    Git repository: http://git.openembedded.org/openembedded-core/
+    Mailing list: openembedded-core@lists.openembedded.org
+
+Note: The scripts directory should be treated with extra care as it is a mix of
+oe-core and poky-specific files from meta-poky.
diff --git a/import-layers/yocto-poky/meta-poky/conf/conf-notes.txt b/import-layers/yocto-poky/meta-poky/conf/conf-notes.txt
index 2f2932b..f1a4f4d 100644
--- a/import-layers/yocto-poky/meta-poky/conf/conf-notes.txt
+++ b/import-layers/yocto-poky/meta-poky/conf/conf-notes.txt
@@ -1,3 +1,8 @@
+
+### Shell environment set up for builds. ###
+
+You can now run 'bitbake <target>'
+
 Common targets are:
     core-image-minimal
     core-image-sato
diff --git a/import-layers/yocto-poky/meta-poky/conf/distro/include/maintainers.inc b/import-layers/yocto-poky/meta-poky/conf/distro/include/maintainers.inc
deleted file mode 100644
index 261c8f3..0000000
--- a/import-layers/yocto-poky/meta-poky/conf/distro/include/maintainers.inc
+++ /dev/null
@@ -1,840 +0,0 @@
-# Yocto Project / OpenEmbedded-Core (OE-Core) Maintainers File
-#
-# This file contains a list of recipe maintainers.
-#
-# Please submit any patches against recipes in meta to the 
-# OE-Core mail list (openembedded-core@lists.openembedded.org)
-# For recipes in meta-yocto please use the Poky list (poky@yoctoproject.org)
-#
-# If you have problems with or questions about a particular recipe, feel
-# free to contact the maintainer directly (cc:ing the appropriate mailing list
-# puts it in the archive and helps other people who might have the same
-# questions in the future), but please try to do the following first:
-#
-#  - look in the Yocto Project Bugzilla
-#    (http://bugzilla.yoctoproject.org/) to see if a problem has
-#    already been reported
-#
-# - look through recent entries of the appropriate mailing list archives
-#   (http://lists.linuxtogo.org/pipermail/openembedded-core or
-#    https://lists.yoctoproject.org/pipermail/poky/) to see if other
-#   people have run into similar problems or had similar questions
-#   answered.
-#
-# The format is as a bitbake variable override for each recipe
-#
-#	RECIPE_MAINTAINER_pn-<recipe name> = "Full Name <address@domain>"
-#
-# Please keep this list in alphabetical order.
-#
-RECIPE_MAINTAINER_pn-acl = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-acpid = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-adwaita-icon-theme = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-alsa-lib = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-alsa-plugins = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-alsa-state = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-alsa-tools = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-alsa-utils = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-alsa-utils-scripts = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-apmd = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-apr = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-apr-util = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-apt = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-apt-native = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-argp-standalone = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-asciidoc = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-aspell = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-at = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-at-spi2-atk = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-at-spi2-core = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-atk = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-attr = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-autoconf = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-autogen-native = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-automake = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-avahi = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-avahi-ui = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-babeltrace = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-base-files = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-base-passwd = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-bash = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-bash-completion = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-bc = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-bdwgc = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-beecrypt = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-bigreqsproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-bind = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-binutils = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-binutils-cross = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-binutils-cross-canadian = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-binutils-crosssdk = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-bison = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-bjam-native = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-blktool = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-blktrace = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-bluez5 = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-bmap-tools = "Ed Bartosh <ed.bartosh@linux.intel.com>"
-RECIPE_MAINTAINER_pn-boost = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-bootchart2 = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-bsd-headers = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-btrfs-tools = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-build-appliance-image = "Cristian Iorga <cristian.iorga@intel.com>"
-RECIPE_MAINTAINER_pn-build-compare = "Randy Witt <randy.e.witt@linux.intel.com>"
-RECIPE_MAINTAINER_pn-builder = "Cristian Iorga <cristian.iorga@intel.com>"
-RECIPE_MAINTAINER_pn-buildtools-tarball = "Cristian Iorga <cristian.iorga@intel.com>"
-RECIPE_MAINTAINER_pn-busybox = "Armin Kuster <akuster808@gmail.com>"
-RECIPE_MAINTAINER_pn-byacc = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-bzip2 = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-ca-certificates = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-cairo = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-calibrateproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-cantarell-fonts = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-ccache = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-cdrtools-native = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-chkconfig = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-chkconfig-alternatives-native = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-chrpath = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-clutter-1.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-clutter-gst-3.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-clutter-gtk-1.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-cmake = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
-RECIPE_MAINTAINER_pn-cmake-native = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
-RECIPE_MAINTAINER_pn-cogl-1.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-compositeproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-connman = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-connman-conf = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-connman-gnome = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-console-tools = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-consolekit = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-core-image-base = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-clutter = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-full-cmdline = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-kernel-dev = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-lsb = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-lsb-dev = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-lsb-sdk = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-minimal = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-minimal-dev = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-minimal-initramfs = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-minimal-mtdutils = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-rt = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-rt-sdk = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-sato = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-sato-dev = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-sato-sdk = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-sato-sdk-ptest = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-testmaster = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-testmaster-initramfs = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-weston = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-core-image-x11 = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-coreutils = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-cpio = "Denys Dmytriyenko <denis@denix.org>"
-RECIPE_MAINTAINER_pn-cracklib = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-createrepo = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-cronie = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-cross-localedef-native = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-cryptodev-linux = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-cryptodev-module = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-cryptodev-tests = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-cups = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-curl = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-cve-check-tool = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-cwautomacros = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-damageproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-db = "Mark Hatle <mark.hatle@windriver.com>"
-RECIPE_MAINTAINER_pn-dbus = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-dbus-glib = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-dbus-test = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-dbus-wait = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-debianutils = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-depmodwrapper-cross = "Mark Hatle <mark.hatle@windriver.com>"
-RECIPE_MAINTAINER_pn-desktop-file-utils-native = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-dhcp = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-diffstat = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-diffutils = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-directfb = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-directfb-examples = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-distcc = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-distcc-config = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-dmidecode = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-dmxproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-docbook-xml-dtd4 = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-docbook-xsl-stylesheets = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-dosfstools = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-dpkg = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-dri2proto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-dri3proto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-dropbear = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-dtc = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-e2fsprogs = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-ed = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-eee-acpi-scripts = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-efilinux = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-eglinfo-fb = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-eglinfo-x11 = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-efilinux = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-elfutils = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-enchant = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-encodings = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-epiphany = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-ethtool = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-eudev = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-expat = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-expect = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-ffmpeg = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-file = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-findutils = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-fixesproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-flac = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-flex = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-font-alias = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-font-util = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-fontconfig = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-fontsproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-foomatic-filters = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-formfactor = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-freetype = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-fstests = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-fts = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gawk = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-gcc = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gcc-cross = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gcc-cross-canadian = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gcc-cross-initial = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gcc-crosssdk = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gcc-crosssdk-initial = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gcc-runtime = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gcc-sanitizers = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gcc-source = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gccmakedep = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gconf = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gcr = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-gdb = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gdb-cross = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gdb-cross-canadian = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gdbm = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-gdk-pixbuf = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gettext = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-gettext-minimal-native = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-ghostscript = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-git = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-glew = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-glib-2.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-glib-networking = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-glibc = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-glibc-initial = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-glibc-locale = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-glibc-mtrace = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-glibc-scripts = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-glproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gmp = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-gnome-common = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gnome-desktop-testing = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gnome-desktop3 = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-gnome-doc-utils = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gnome-themes-standard = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gnu-config = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-gnu-efi = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-gnupg = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-gnutls = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-gobject-introspection = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-go-native = "Armin Kuster <akuster808@gmail.com>"
-RECIPE_MAINTAINER_pn-gperf = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-gpgme = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-gptfdisk = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-grep = "Denys Dmytriyenko <denis@denix.org>"
-RECIPE_MAINTAINER_pn-groff = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-grub = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-grub-efi = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-gsettings-desktop-schemas = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gst-player = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gstreamer1.0 = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-gstreamer1.0-libav = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-gstreamer1.0-omx = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-gstreamer1.0-meta-base = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-gstreamer1.0-plugins-bad = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-gstreamer1.0-plugins-base = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-gstreamer1.0-plugins-good = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-gstreamer1.0-plugins-ugly = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-gstreamer1.0-rtsp-server = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-gtk+ = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gtk+3 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gtk-doc = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-gtk-engines = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gtk-icon-utils-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-gtk-sato-engine = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-guile = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-guilt-native = "Bruce Ashfield <bruce.ashfield@windriver.com>"
-RECIPE_MAINTAINER_pn-gummiboot = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-gzip = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-harfbuzz = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-hdparm = "Denys Dmytriyenko <denis@denix.org>"
-RECIPE_MAINTAINER_pn-help2man-native = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-hicolor-icon-theme = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-hostap-conf = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-hostap-utils = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-hwlatdetect = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-i2c-tools = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-icecc-create-env-native = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-icon-naming-utils = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-icu = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-ifupdown = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-init-ifupdown = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-initramfs-boot = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
-RECIPE_MAINTAINER_pn-initramfs-framework = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
-RECIPE_MAINTAINER_pn-initramfs-live-boot = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-initramfs-live-install = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-initramfs-live-install-efi = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-initramfs-live-install-efi-testfs = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-initramfs-live-install-testfs = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-initscripts = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-inputproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-intltool = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-iproute2 = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-iptables = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-iputils = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-irda-utils = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-iso-codes = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-iw = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libjpeg-turbo = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-json-c = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-json-glib = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-kbd = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-kbproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-kconfig-frontends = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-kern-tools-native = "Bruce Ashfield <bruce.ashfield@windriver.com>"
-RECIPE_MAINTAINER_pn-kernelshark = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-kernel-devsrc = "Bruce Ashfield <bruce.ashfield@windriver.com>"
-RECIPE_MAINTAINER_pn-kexec-tools = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-keymaps = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-kmod = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-kmod-native = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-l3afpad = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-lame = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-latencytop = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-ldconfig-native = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-less = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-liba52 = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-libacpi = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libaio = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libarchive = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
-RECIPE_MAINTAINER_pn-libart-lgpl = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libassuan = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-libatomic-ops = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libav = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libbsd = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-libcap = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-libcap-ng = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-libcgroup = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libcheck = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-libclass-isa-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libconvert-asn1-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libcroco = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libdaemon = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libdmx = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libdrm = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
-RECIPE_MAINTAINER_pn-libdumpvalue-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libenv-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libepoxy = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-liberation-fonts = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-liberror-perl = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-libevdev = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libevent = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libexif = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libfakekey = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libffi = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libfile-checktree-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libfm = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libfm-extra = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libfontenc = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libgcc = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-libgcc-initial = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-libgcrypt = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-libgfortran = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-libglade = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libglu = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libgpg-error = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-libgudev = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libi18n-collate-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libical = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libice = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libiconv = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libid3tag = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-libidn = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libinput = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libjson = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libksba = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libmatchbox = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libmpc = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-libnewt = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-libnewt-python = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-libnfsidmap = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libnl = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libnotify = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libnss-mdns = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libogg = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libomxil = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libowl = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libpam = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libpcap = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libpciaccess = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libpcre = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-libpcre2 = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-libpfm4 = "Matthew McClintock <msm@freescale.com>"
-RECIPE_MAINTAINER_pn-libpng = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libpng12 = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libpod-plainer-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libproxy = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libpthread-stubs = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-librsvg = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libsamplerate0 = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-libsdl = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-libsdl2 = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-libsecret = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libsm = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libsndfile1 = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-libsolv = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libsoup-2.4 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libtasn1 = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libtheora = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libtimedate-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libtirpc = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libtool = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-libtool-cross = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-libtool-native = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-libunistring = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-libunwind = "Bruce Ashfield <bruce.ashfield@windriver.com>"
-RECIPE_MAINTAINER_pn-liburcu = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-liburi-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libusb-compat = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libusb1 = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libuser = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-libvorbis = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-libwebp = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libx11 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libx11-diet = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxau = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxcalibrate = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxcb = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxcomposite = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxcursor = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxdamage = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxdmcp = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxext = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxfixes = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxfont = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxft = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxi = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxinerama = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxkbcommon = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxkbfile = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxml-namespacesupport-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libxml-parser-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libxml-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libxml-sax-base-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libxml-sax-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libxml-simple-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-libxml2 = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-libxmu = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxpm = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxrandr = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxrender = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxres = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxscrnsaver = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxsettings-client = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxshmfence = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxslt = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-libxt = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxtst = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxv = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxvmc = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxxf86dga = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxxf86misc = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libxxf86vm = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-libyaml = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-lighttpd = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-linux-dummy = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-linux-firmware = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
-RECIPE_MAINTAINER_pn-linux-libc-headers = "Bruce Ashfield <bruce.ashfield@windriver.com>"
-RECIPE_MAINTAINER_pn-linux-yocto = "Bruce Ashfield <bruce.ashfield@windriver.com>"
-RECIPE_MAINTAINER_pn-linux-yocto-dev = "Bruce Ashfield <bruce.ashfield@windriver.com>"
-RECIPE_MAINTAINER_pn-linux-yocto-rt = "Bruce Ashfield <bruce.ashfield@windriver.com>"
-RECIPE_MAINTAINER_pn-linux-yocto-tiny = "Bruce Ashfield <bruce.ashfield@windriver.com>"
-RECIPE_MAINTAINER_pn-logrotate = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-lrzsz = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-lsb = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-lsbinitscripts = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-lsbtest = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-lsof = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-ltp = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-lttng-modules = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-lttng-tools = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-lttng-ust = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-lz4 = "Armin Kuster <akuster808@gmail.com>"
-RECIPE_MAINTAINER_pn-lzo = "Armin Kuster <akuster808@gmail.com>"
-RECIPE_MAINTAINER_pn-lzop = "Armin Kuster <akuster808@gmail.com>"
-RECIPE_MAINTAINER_pn-m4 = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-m4-native = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-mailx = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-make = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-makedepend = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-makedevs = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-man = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-man-pages = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-matchbox-config-gtk = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-matchbox-desktop = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-matchbox-desktop-sato = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-matchbox-keyboard = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-matchbox-panel-2 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-matchbox-session = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-matchbox-session-sato = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-matchbox-terminal = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-matchbox-theme-sato = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-matchbox-wm = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-mc = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-mdadm = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-menu-cache = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-mesa = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
-RECIPE_MAINTAINER_pn-mesa-demos = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
-RECIPE_MAINTAINER_pn-mesa-gl = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
-RECIPE_MAINTAINER_pn-meta-environment = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-meta-environment-extsdk = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-meta-extsdk-toolchain = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-meta-ide-support = "Cristian Iorga <cristian.iorga@intel.com>"
-RECIPE_MAINTAINER_pn-meta-toolchain = "Cristian Iorga <cristian.iorga@intel.com>"
-RECIPE_MAINTAINER_pn-meta-world-pkgdata = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-mingetty = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-mini-x-session = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-minicom = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-mkelfimage = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-mkfontdir = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-mkfontscale = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-mklibs-native = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-mktemp = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-mmc-utils = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-mobile-broadband-provider-info = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-modutils-initscripts = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-mpeg2dec = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-mpfr = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-mpg123 = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-msmtp = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-mtd-utils = "Denys Dmytriyenko <denis@denix.org>"
-RECIPE_MAINTAINER_pn-mtdev = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-mtools = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-musl = "Khem Raj <raj.khem@gmail.com>"
-RECIPE_MAINTAINER_pn-mx-1.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-nasm = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-nativesdk-buildtools-perl-dummy = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-nativesdk-libtool = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-nativesdk-packagegroup-sdk-host = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-nativesdk-qemu-helper = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-nativesdk-postinst-intercept = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-ncurses = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-neard = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-neon = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-net-tools = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-netbase = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-nettle = "Armin Kuster <akuster808@gmail.com>"
-RECIPE_MAINTAINER_pn-nfs-export-root = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-nfs-utils = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-npth = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-nspr = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-nss = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-nss-myhostname = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-ofono = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-oh-puzzles = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-openssh = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-openssl = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-opkg = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
-RECIPE_MAINTAINER_pn-opkg-arch-config = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
-RECIPE_MAINTAINER_pn-opkg-keyrings = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
-RECIPE_MAINTAINER_pn-opkg-utils = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
-RECIPE_MAINTAINER_pn-oprofile = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-orc = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-os-release = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-ossp-uuid = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-p11-kit = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-package-index = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-base = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-boot = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-buildessential = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-clutter = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-device-devel = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-directfb = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-eclipse-debug = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-full-cmdline = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-lsb = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-nfs = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-sdk = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-ssh-dropbear = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-ssh-openssh = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-standalone-sdk-target = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-tools-debug = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-tools-profile = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-tools-testapps = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-x11 = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-x11-base = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-x11-sato = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-core-x11-xserver = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-cross-canadian = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-packagegroup-self-hosted = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-pango = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-parted = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-patch = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-patchelf = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-pax = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-pax-utils = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-pbzip2 = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-pciutils = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-pcmanfm = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-pcmciautils = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-perf = "Bruce Ashfield <bruce.ashfield@windriver.com>"
-RECIPE_MAINTAINER_pn-perl = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-perl-native = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-piglit = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-pigz = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-pinentry = "Armin Kuster <akuster808@gmail.com>"
-RECIPE_MAINTAINER_pn-pixman = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-pixz = "Armin Kuster <akuster808@gmail.com>"
-RECIPE_MAINTAINER_pn-pkgconfig = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-pm-utils = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-pointercal = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-pointercal-xinput = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-pong-clock = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-popt = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-portmap = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-powertop = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-ppp = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-ppp-dialin = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-prelink = "Mark Hatle <mark.hatle@windriver.com>"
-RECIPE_MAINTAINER_pn-presentproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-procps = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-pseudo = "Mark Hatle <mark.hatle@windriver.com>"
-RECIPE_MAINTAINER_pn-psmisc = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-psplash = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-ptest-runner = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-pulseaudio = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-pulseaudio-client-conf-sato = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-puzzles = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-python = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-native = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-async = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-distribute = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-git = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-gitdb = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-imaging = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-mako = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-native = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-nose = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-numpy = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-pexpect = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-ptyprocess = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-pycairo = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-pycurl = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-pygtk = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-pyrex = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-scons = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-scons-native = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-setuptools = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-six = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-smartpm = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python-smmap = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3 = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-async = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-dbus = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-distribute = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-docutils = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-git = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-gitdb = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-mako = "Jose Lamego <jose.a.lamego@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-native = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-nose = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-numpy = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-pip = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-pygobject = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-setuptools = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-six = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-python3-smmap = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-qemu = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-qemu-helper-native = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-qemuwrapper-cross = "Aníbal Limón <anibal.limon@linux.intel.com>"
-RECIPE_MAINTAINER_pn-quilt = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-quilt-native = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-quota = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-randrproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-readline = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-recordproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-remake = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-renderproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-resolvconf = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-resourceproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-rgb = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-rpcbind = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-rng-tools = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-rpm = "Mark Hatle <mark.hatle@windriver.com>"
-RECIPE_MAINTAINER_pn-rpmresolve = "Mark Hatle <mark.hatle@windriver.com>"
-RECIPE_MAINTAINER_pn-rsync = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-rt-tests = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-ruby = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-run-postinsts = "Cristian Iorga <cristian.iorga@intel.com>"
-RECIPE_MAINTAINER_pn-rxvt-unicode = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-sato-icon-theme = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-sato-screenshot = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-sbc = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-screen = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-scrnsaverproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-sed = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-serf = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-setserial = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-settings-daemon = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-shadow = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-shadow-securetty = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-shadow-sysroot = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-shared-mime-info = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-shutdown-desktop = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-signing-keys = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-slang = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-socat = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-source-highlight = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-speex = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-speexdsp = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-sqlite3 = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-squashfs-tools = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-startup-notification = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-stat = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-strace = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-stress = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-subversion = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-sudo = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-swabber-native = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-swig = "Edwin Plauchu <edwin.plauchu.camacho@linux.intel.com>"
-RECIPE_MAINTAINER_pn-sysfsutils = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-sysklogd = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-syslinux = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-sysprof = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-sysstat = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-systemd = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-systemd-boot = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-systemd-bootchart = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-systemd-compat-units = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-systemd-serialgetty = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-systemd-systemctl-native = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-systemtap = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-systemtap-uprobes = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-sysvinit = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-sysvinit-inittab = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-taglib = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-tar = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-tcf-agent = "Randy Witt <randy.e.witt@linux.intel.com>"
-RECIPE_MAINTAINER_pn-tcl = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-tcp-wrappers = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-testexport-tarball = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-texi2html = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-texinfo = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-texinfo-dummy-native = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-tiff = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-time = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-tiny-init = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-trace-cmd = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-tremor = "Tanu Kaskinen <tanuk@iki.fi>"
-RECIPE_MAINTAINER_pn-tslib = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-ttf-bitstream-vera = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-tzcode-native = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-tzdata = "Armin Kuster <akuster@mvista.com>"
-RECIPE_MAINTAINER_pn-u-boot = "Marek Vasut <marek.vasut@gmail.com>"
-RECIPE_MAINTAINER_pn-u-boot-fw-utils = "Marek Vasut <marek.vasut@gmail.com>"
-RECIPE_MAINTAINER_pn-u-boot-mkimage = "Marek Vasut <marek.vasut@gmail.com>"
-RECIPE_MAINTAINER_pn-ubootchart = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-udev-extraconf = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-unfs3 = "Randy Witt <randy.e.witt@linux.intel.com>"
-RECIPE_MAINTAINER_pn-unifdef = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-uninative-tarball = "Richard Purdie <richard.purdie@linuxfoundation.org>"
-RECIPE_MAINTAINER_pn-unzip = "Armin Kuster <akuster808@gmail.com>"
-RECIPE_MAINTAINER_pn-update-rc.d = "Ross Burton <ross.burton@intel.com>"
-RECIPE_MAINTAINER_pn-usbinit = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-usbutils = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-util-linux = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-util-macros = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-v86d = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-vala = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-valgrind = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-videoproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-volatile-binds = "Chen Qi <Qi.Chen@windriver.com>"
-RECIPE_MAINTAINER_pn-vte = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-vulkan = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-waffle = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-watchdog = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-watchdog-config = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-wayland = "Denys Dmytriyenko <denis@denix.org>"
-RECIPE_MAINTAINER_pn-wayland-protocols = "Denys Dmytriyenko <denis@denix.org>"
-RECIPE_MAINTAINER_pn-webkitgtk = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-weston = "Denys Dmytriyenko <denis@denix.org>"
-RECIPE_MAINTAINER_pn-weston-init = "Denys Dmytriyenko <denis@denix.org>"
-RECIPE_MAINTAINER_pn-wget = "Robert Yang <liezhi.yang@windriver.com>"
-RECIPE_MAINTAINER_pn-which = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
-RECIPE_MAINTAINER_pn-wireless-tools = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-wpa-supplicant = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-x11-common = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-x11perf = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-x264 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xauth = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xcb-proto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xcb-util = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xcb-util-image = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xcb-util-keysyms = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xcb-util-renderutil = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xcb-util-wm = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xcmiscproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xcursor-transparent-theme = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xdg-utils = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xdpyinfo = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xev = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xextproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xeyes = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-input-evdev = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-input-keyboard = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-input-libinput = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-input-mouse = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-input-synaptics = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-input-vmmouse = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-video-cirrus = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-video-fbdev = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-video-intel = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-video-omap = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-video-omapfb = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-video-vesa = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86-video-vmware = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86dgaproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86driproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86miscproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xf86vidmodeproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xhost = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xineramaproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xinetd = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-xinit = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xinput = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xinput-calibrator = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xkbcomp = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xkeyboard-config = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xmlto = "Hongxu Jia <hongxu.jia@windriver.com>"
-RECIPE_MAINTAINER_pn-xmodmap = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xorg-minimal-fonts = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xprop = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xrandr = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xrestop = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xserver-nodm-init = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xserver-xf86-config = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xserver-xorg = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xset = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xtrans = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xtscal = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xuser-account = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xvideo-tests = "Maxin B. John <maxin.john@intel.com>"
-RECIPE_MAINTAINER_pn-xvinfo = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xwininfo = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
-RECIPE_MAINTAINER_pn-xz = "Armin Kuster <akuster808@gmail.com>"
-RECIPE_MAINTAINER_pn-yasm = "Dengke Du <dengke.du@windriver.com>"
-RECIPE_MAINTAINER_pn-zip = "Armin Kuster <akuster808@gmail.com>"
-RECIPE_MAINTAINER_pn-zisofs-tools-native = "Alexander Kanavin <alexander.kanavin@intel.com>"
-RECIPE_MAINTAINER_pn-zlib = "Armin Kuster <akuster808@gmail.com>"
diff --git a/import-layers/yocto-poky/meta-poky/conf/distro/include/poky-world-exclude.inc b/import-layers/yocto-poky/meta-poky/conf/distro/include/poky-world-exclude.inc
index a6635b6..1a2dea5 100644
--- a/import-layers/yocto-poky/meta-poky/conf/distro/include/poky-world-exclude.inc
+++ b/import-layers/yocto-poky/meta-poky/conf/distro/include/poky-world-exclude.inc
@@ -2,7 +2,3 @@
 # Things we exclude from world testing within the reference distro
 #
 
-# qwt from meta-qt4, has poky-lsb QA warnings, qt4 for lsb only
-EXCLUDE_FROM_WORLD_pn-qwt = "1"
-# python-pyqt from meta-qt4 requires sip from meta-oe
-EXCLUDE_FROM_WORLD_pn-python-pyqt = "1"
diff --git a/import-layers/yocto-poky/meta-poky/conf/distro/poky-lsb.conf b/import-layers/yocto-poky/meta-poky/conf/distro/poky-lsb.conf
index e7d6995..24cfb08 100644
--- a/import-layers/yocto-poky/meta-poky/conf/distro/poky-lsb.conf
+++ b/import-layers/yocto-poky/meta-poky/conf/distro/poky-lsb.conf
@@ -12,4 +12,4 @@
 KERNEL_FEATURES_append_pn-linux-yocto = " features/nfsd/nfsd-enable.scc"
 
 # Use the LTSI Kernel for LSB Testing
-PREFERRED_VERSION_linux-yocto_linuxstdbase ?= "4.1%"
+PREFERRED_VERSION_linux-yocto_linuxstdbase ?= "4.9%"
diff --git a/import-layers/yocto-poky/meta-poky/conf/distro/poky-tiny.conf b/import-layers/yocto-poky/meta-poky/conf/distro/poky-tiny.conf
index 561566e..2032bfd 100644
--- a/import-layers/yocto-poky/meta-poky/conf/distro/poky-tiny.conf
+++ b/import-layers/yocto-poky/meta-poky/conf/distro/poky-tiny.conf
@@ -38,7 +38,7 @@
 # Distro config is evaluated after the machine config, so we have to explicitly
 # set the kernel provider to override a machine config.
 PREFERRED_PROVIDER_virtual/kernel = "linux-yocto-tiny"
-PREFERRED_VERSION_linux-yocto-tiny ?= "4.9%"
+PREFERRED_VERSION_linux-yocto-tiny ?= "4.12%"
 
 # We can use packagegroup-core-boot, but in the future we may need a new packagegroup-core-tiny
 #POKY_DEFAULT_EXTRA_RDEPENDS += "packagegroup-core-boot"
diff --git a/import-layers/yocto-poky/meta-poky/conf/distro/poky.conf b/import-layers/yocto-poky/meta-poky/conf/distro/poky.conf
index 621d845..57d00bc 100644
--- a/import-layers/yocto-poky/meta-poky/conf/distro/poky.conf
+++ b/import-layers/yocto-poky/meta-poky/conf/distro/poky.conf
@@ -1,7 +1,7 @@
 DISTRO = "poky"
 DISTRO_NAME = "Poky (Yocto Project Reference Distro)"
-DISTRO_VERSION = "2.3.3"
-DISTRO_CODENAME = "pyro"
+DISTRO_VERSION = "2.4.2"
+DISTRO_CODENAME = "rocko"
 SDK_VENDOR = "-pokysdk"
 SDK_VERSION := "${@'${DISTRO_VERSION}'.replace('snapshot-${DATE}','snapshot')}"
 
@@ -15,19 +15,13 @@
 SDK_VERSION[vardepsexclude] = "DATE"
 
 # Override these in poky based distros
-POKY_DEFAULT_DISTRO_FEATURES = "largefile opengl ptest multiarch wayland"
+POKY_DEFAULT_DISTRO_FEATURES = "largefile opengl ptest multiarch wayland vulkan"
 POKY_DEFAULT_EXTRA_RDEPENDS = "packagegroup-core-boot"
 POKY_DEFAULT_EXTRA_RRECOMMENDS = "kernel-module-af-packet"
 
 DISTRO_FEATURES ?= "${DISTRO_FEATURES_DEFAULT} ${DISTRO_FEATURES_LIBC} ${POKY_DEFAULT_DISTRO_FEATURES}"
 
-PREFERRED_VERSION_linux-yocto ?= "4.10%"
-PREFERRED_VERSION_linux-yocto_qemux86 ?= "4.10%"
-PREFERRED_VERSION_linux-yocto_qemux86-64 ?= "4.10%"
-PREFERRED_VERSION_linux-yocto_qemuarm ?= "4.10%"
-PREFERRED_VERSION_linux-yocto_qemumips ?= "4.10%"
-PREFERRED_VERSION_linux-yocto_qemumips64 ?= "4.10%"
-PREFERRED_VERSION_linux-yocto_qemuppc ?= "4.10%"
+PREFERRED_VERSION_linux-yocto ?= "4.12%"
 
 SDK_NAME = "${DISTRO}-${TCLIBC}-${SDK_ARCH}-${IMAGE_BASENAME}-${TUNE_PKGARCH}"
 SDKPATH = "/opt/${DISTRO}/${SDK_VERSION}"
@@ -45,7 +39,7 @@
 
 TCLIBCAPPEND = ""
 
-QEMU_TARGETS ?= "arm aarch64 i386 mips mipsel mips64 mips64el ppc x86_64"
+QEMU_TARGETS ?= "arm aarch64 i386 mips mipsel mips64 mips64el nios2 ppc x86_64"
 # Other QEMU_TARGETS "sh4"
 
 PREMIRRORS ??= "\
@@ -72,20 +66,24 @@
 SANITY_TESTED_DISTROS ?= " \
             poky-2.2 \n \
             poky-2.3 \n \
+            poky-2.4 \n \
             ubuntu-15.04 \n \
             ubuntu-16.04 \n \
             ubuntu-16.10 \n \
+            ubuntu-17.04 \n \
             fedora-24 \n \
             fedora-25 \n \
+            fedora-26 \n \
             centos-7 \n \
             debian-8 \n \
+            debian-9 \n \
             opensuse-42.1 \n \
             opensuse-42.2 \n \
             "
 #
-# OELAYOUT_ABI allows us to notify users when the format of TMPDIR changes in 
+# OELAYOUT_ABI allows us to notify users when the format of TMPDIR changes in
 # an incompatible way. Such changes should usually be detailed in the commit
-# that breaks the format and have been previously discussed on the mailing list 
+# that breaks the format and have been previously discussed on the mailing list
 # with general agreement from the core team.
 #
 OELAYOUT_ABI = "12"
diff --git a/import-layers/yocto-poky/meta-poky/conf/layer.conf b/import-layers/yocto-poky/meta-poky/conf/layer.conf
index 8b7b33d..26d7c3b 100644
--- a/import-layers/yocto-poky/meta-poky/conf/layer.conf
+++ b/import-layers/yocto-poky/meta-poky/conf/layer.conf
@@ -9,6 +9,8 @@
 BBFILE_PATTERN_yocto = "^${LAYERDIR}/"
 BBFILE_PRIORITY_yocto = "5"
 
+LAYERSERIES_COMPAT_yocto = "rocko"
+
 # This should only be incremented on significant changes that will
 # cause compatibility issues with other layers
 LAYERVERSION_yocto = "3"
diff --git a/import-layers/yocto-poky/meta-poky/conf/local.conf.sample b/import-layers/yocto-poky/meta-poky/conf/local.conf.sample
index 304ee01..9a560df 100644
--- a/import-layers/yocto-poky/meta-poky/conf/local.conf.sample
+++ b/import-layers/yocto-poky/meta-poky/conf/local.conf.sample
@@ -149,7 +149,6 @@
 #   - 'buildstats' collect build statistics
 #   - 'image-mklibs' to reduce shared library files size for an image
 #   - 'image-prelink' in order to prelink the filesystem image
-#   - 'image-swab' to perform host system intrusion detection
 # NOTE: if listing mklibs & prelink both, then make sure mklibs is before prelink
 # NOTE: mklibs also needs to be explicitly enabled for a given image, see local.conf.extended
 USER_CLASSES ?= "buildstats image-mklibs image-prelink"
diff --git a/import-layers/yocto-poky/meta-selftest/conf/layer.conf b/import-layers/yocto-poky/meta-selftest/conf/layer.conf
index a847b78..2a71895 100644
--- a/import-layers/yocto-poky/meta-selftest/conf/layer.conf
+++ b/import-layers/yocto-poky/meta-selftest/conf/layer.conf
@@ -8,3 +8,5 @@
 BBFILE_COLLECTIONS += "selftest"
 BBFILE_PATTERN_selftest = "^${LAYERDIR}/"
 BBFILE_PRIORITY_selftest = "5"
+
+LAYERSERIES_COMPAT_selftest = "rocko"
diff --git a/import-layers/yocto-poky/meta-selftest/files/signing/key.passphrase b/import-layers/yocto-poky/meta-selftest/files/signing/key.passphrase
new file mode 100644
index 0000000..5271a52
--- /dev/null
+++ b/import-layers/yocto-poky/meta-selftest/files/signing/key.passphrase
@@ -0,0 +1 @@
+test123
diff --git a/import-layers/yocto-poky/meta-selftest/files/static-group b/import-layers/yocto-poky/meta-selftest/files/static-group
new file mode 100644
index 0000000..9213b8e
--- /dev/null
+++ b/import-layers/yocto-poky/meta-selftest/files/static-group
@@ -0,0 +1,14 @@
+messagebus:x:500:
+systemd-bus-proxy:x:501:
+systemd-network:x:502:
+systemd-resolve:x:503:
+systemd-timesync:x:504:
+polkitd:x:505:
+lock:x:506:
+systemd-journal:x:507:
+netdev:x:508:
+avahi:x:509:
+avahi-autoipd:x:510:
+rpc:x:511:
+rpcuser:x:513:
+
diff --git a/import-layers/yocto-poky/meta-selftest/files/static-passwd b/import-layers/yocto-poky/meta-selftest/files/static-passwd
new file mode 100644
index 0000000..412f85d
--- /dev/null
+++ b/import-layers/yocto-poky/meta-selftest/files/static-passwd
@@ -0,0 +1,11 @@
+messagebus:x:500:500::/var/lib/dbus:/bin/false
+systemd-bus-proxy:x:501:501::/:/bin/nologin
+systemd-network:x:502:502::/:/bin/nologin
+systemd-resolve:x:503:503::/:/bin/nologin
+systemd-timesync:x:504:504::/:/bin/nologin
+polkitd:x:505:505::/:/bin/nologin
+avahi:x:509:509::/:/bin/nologin
+avahi-autoipd:x:510:510::/:/bin/nologin
+rpc:x:511:511::/:/bin/nologin
+distcc:x:512:nogroup::/:/bin/nologin
+rpcuser:x:513:513::/var/lib/nfs:/bin/nologin
diff --git a/import-layers/yocto-poky/meta-selftest/lib/oeqa/runtime/cases/dnf_runtime.py b/import-layers/yocto-poky/meta-selftest/lib/oeqa/runtime/cases/dnf_runtime.py
new file mode 100644
index 0000000..81c50ed
--- /dev/null
+++ b/import-layers/yocto-poky/meta-selftest/lib/oeqa/runtime/cases/dnf_runtime.py
@@ -0,0 +1,49 @@
+import os
+
+from oeqa.core.decorator.depends import OETestDepends
+from oeqa.runtime.cases.dnf import DnfTest
+from oeqa.utils.httpserver import HTTPService
+
+class DnfSelftest(DnfTest):
+
+    @classmethod
+    def setUpClass(cls):
+        import tempfile
+        cls.temp_dir = tempfile.TemporaryDirectory(prefix="oeqa-remotefeeds-")
+        cls.repo_server = HTTPService(os.path.join(cls.tc.td['WORKDIR'], 'oe-rootfs-repo'),
+                                      cls.tc.target.server_ip)
+        cls.repo_server.start()
+
+    @classmethod
+    def tearDownClass(cls):
+        cls.repo_server.stop()
+        cls.temp_dir.cleanup()
+
+    @OETestDepends(['dnf.DnfBasicTest.test_dnf_help'])
+    def test_verify_package_feeds(self):
+        """
+        Summary: Check correct setting of PACKAGE_FEED_URIS var
+        Expected: 1. Feeds were correctly set for dnf
+                  2. Update recovers packages from host's repo
+        Author: Humberto Ibarra <humberto.ibarra.lopez@intel.com>
+        Author: Alexander Kanavin <alexander.kanavin@intel.com>
+        """
+        # When we created an image, we had to supply fake ip and port
+        # for the feeds. Now we can patch the real ones into the config file.
+        temp_file = os.path.join(self.temp_dir.name, 'tmp.repo')
+        self.tc.target.copyFrom("/etc/yum.repos.d/oe-remote-repo.repo", temp_file)
+        fixed_config = open(temp_file, "r").read().replace("bogus_ip", self.tc.target.server_ip).replace("bogus_port", str(self.repo_server.port))
+        with open(temp_file, "w") as f:
+            f.write(fixed_config)
+        self.tc.target.copyTo(temp_file, "/etc/yum.repos.d/oe-remote-repo.repo")
+
+        import re
+        # Use '-y' for non-interactive mode: automatically import the feed signing key
+        output_makecache = self.dnf('-vy makecache')
+        self.assertTrue(re.match(r".*Failed to synchronize cache", output_makecache, re.DOTALL) is None, msg = "dnf makecache failed to synchronize repo: %s" %(output_makecache))
+        self.assertTrue(re.match(r".*Metadata cache created", output_makecache, re.DOTALL) is not None, msg = "dnf makecache failed: %s" %(output_makecache))
+
+        output_repoinfo = self.dnf('-v repoinfo')
+        matchobj = re.match(r".*Repo-pkgs\s*:\s*(?P<n_pkgs>[0-9]+)", output_repoinfo, re.DOTALL)
+        self.assertTrue(matchobj is not None, msg = "Could not find the amount of packages in dnf repoinfo output: %s" %(output_repoinfo))
+        self.assertTrue(int(matchobj.group('n_pkgs')) > 0, msg = "Amount of remote packages is not more than zero: %s\n" %(output_repoinfo))
diff --git a/import-layers/yocto-poky/meta-selftest/lib/oeqa/runtime/cases/selftest.py b/import-layers/yocto-poky/meta-selftest/lib/oeqa/runtime/cases/selftest.py
index e4985a6..19de740 100644
--- a/import-layers/yocto-poky/meta-selftest/lib/oeqa/runtime/cases/selftest.py
+++ b/import-layers/yocto-poky/meta-selftest/lib/oeqa/runtime/cases/selftest.py
@@ -1,7 +1,5 @@
 from oeqa.runtime.case import OERuntimeTestCase
 from oeqa.core.decorator.depends import OETestDepends
-from oeqa.runtime.cases.dnf import DnfTest
-from oeqa.utils.httpserver import HTTPService
 
 class Selftest(OERuntimeTestCase):
 
@@ -31,43 +29,3 @@
 
         (status, output) = self.target.run("socat -V")
         self.assertNotEqual(status, 0, msg="socat is still installed")
-
-
-class DnfSelftest(DnfTest):
-
-    @classmethod
-    def setUpClass(cls):
-        cls.repo_server = HTTPService(os.path.join(cls.tc.td['WORKDIR'], 'oe-rootfs-repo'),
-                                      cls.tc.target.server_ip)
-        cls.repo_server.start()
-
-    @classmethod
-    def tearDownClass(cls):
-        cls.repo_server.stop()
-
-    @OETestDepends(['ssh.SSHTest.test_ssh'])
-    def test_verify_package_feeds(self):
-        """
-        Summary: Check correct setting of PACKAGE_FEED_URIS var
-        Expected: 1. Feeds were correctly set for dnf
-                  2. Update recovers packages from host's repo
-        Author: Humberto Ibarra <humberto.ibarra.lopez@intel.com>
-        Author: Alexander Kanavin <alexander.kanavin@intel.com>
-        """
-        # When we created an image, we had to supply fake ip and port
-        # for the feeds. Now we can patch the real ones into the config file.
-        import tempfile
-        temp_file = tempfile.TemporaryDirectory(prefix="oeqa-remotefeeds-").name
-        self.tc.target.copyFrom("/etc/yum.repos.d/oe-remote-repo.repo", temp_file)
-        fixed_config = open(temp_file, "r").read().replace("bogus_ip", self.tc.target.server_ip).replace("bogus_port", str(self.repo_server.port))
-        open(temp_file, "w").write(fixed_config)
-        self.tc.target.copyTo(temp_file, "/etc/yum.repos.d/oe-remote-repo.repo")
-
-        import re
-        output_makecache = self.dnf('makecache')
-        self.assertTrue(re.match(r".*Metadata cache created", output_makecache, re.DOTALL) is not None, msg = "dnf makecache failed: %s" %(output_makecache))
-
-        output_repoinfo = self.dnf('repoinfo')
-        matchobj = re.match(r".*Repo-pkgs\s*:\s*(?P<n_pkgs>[0-9]+)", output_repoinfo, re.DOTALL)
-        self.assertTrue(matchobj is not None, msg = "Could not find the amount of packages in dnf repoinfo output: %s" %(output_repoinfo))
-        self.assertTrue(int(matchobj.group('n_pkgs')) > 0, msg = "Amount of remote packages is not more than zero: %s\n" %(output_repoinfo))
diff --git a/import-layers/yocto-poky/meta-selftest/lib/oeqa/selftest/cases/external-layer.py b/import-layers/yocto-poky/meta-selftest/lib/oeqa/selftest/cases/external-layer.py
new file mode 100644
index 0000000..59b1afa
--- /dev/null
+++ b/import-layers/yocto-poky/meta-selftest/lib/oeqa/selftest/cases/external-layer.py
@@ -0,0 +1,16 @@
+#from oeqa.selftest.base import oeSelfTest
+from oeqa.selftest.case import OESelftestTestCase
+#from oeqa.utils.decorators import testcase
+
+
+class ImportedTests(OESelftestTestCase):
+
+    def test_unconditional_pass(self):
+        """
+        Summary: Doesn't check anything, used to check import test from other layers.
+        Expected: 1. Pass unconditionally
+        Product: oe-core
+        Author: Mariano Lopez <mariano.lopez@intel.com>
+        """
+
+        self.assertEqual(True, True, msg = "Impossible to fail this test")
diff --git a/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-test-patch-gz.bb b/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-test-patch-gz.bb
index e45ee9f..fc37995 100644
--- a/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-test-patch-gz.bb
+++ b/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-test-patch-gz.bb
@@ -6,6 +6,7 @@
 SRC_URI = "http://downloads.yoctoproject.org/releases/xrestop/xrestop-0.4.tar.gz \
            file://readme.patch.gz \
            "
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 S = "${WORKDIR}/xrestop-0.4"
 
diff --git a/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test1_1.5.3.bb b/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test1_1.5.3.bb
index 4049be2..333ecac 100644
--- a/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test1_1.5.3.bb
+++ b/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test1_1.5.3.bb
@@ -4,6 +4,8 @@
 
 SRC_URI = "http://www.ivarch.com/programs/sources/pv-${PV}.tar.gz \
            file://0001-Add-a-note-line-to-the-quick-reference.patch"
+UPSTREAM_CHECK_URI = "http://www.ivarch.com/programs/pv.shtml"
+RECIPE_NO_UPDATE_REASON = "This recipe is used to test devtool upgrade feature"
 
 SRC_URI[md5sum] = "9365d86bd884222b4bf1039b5a9ed1bd"
 SRC_URI[sha256sum] = "681bcca9784bf3cb2207e68236d1f68e2aa7b80f999b5750dc77dcd756e81fbc"
diff --git a/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test1_1.5.3.bb.upgraded b/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test1_1.5.3.bb.upgraded
index 42c0705..9d94f67 100644
--- a/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test1_1.5.3.bb.upgraded
+++ b/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test1_1.5.3.bb.upgraded
@@ -4,6 +4,8 @@
 
 SRC_URI = "http://www.ivarch.com/programs/sources/pv-${PV}.tar.gz \
            file://0001-Add-a-note-line-to-the-quick-reference.patch"
+UPSTREAM_CHECK_URI = "http://www.ivarch.com/programs/pv.shtml"
+RECIPE_NO_UPDATE_REASON = "This recipe is used to test devtool upgrade feature"
 
 SRC_URI[md5sum] = "062bca5ff33df1dd09472e7fc3bbe332"
 SRC_URI[sha256sum] = "9dd45391806b0ed215abee4c5ac1597d018c386fe9c1f5afd2f6bc3b07fd82c3"
diff --git a/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test2_git.bb b/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test2_git.bb
index 450636e..07b8327 100644
--- a/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test2_git.bb
+++ b/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test2_git.bb
@@ -12,6 +12,8 @@
 PR = "r2"
 
 SRC_URI = "git://git.yoctoproject.org/dbus-wait"
+UPSTREAM_CHECK_COMMITS = "1"
+RECIPE_NO_UPDATE_REASON = "This recipe is used to test devtool upgrade feature"
 
 S = "${WORKDIR}/git"
 
diff --git a/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test2_git.bb.upgraded b/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test2_git.bb.upgraded
index 0d2e19e..32ec4b1 100644
--- a/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test2_git.bb.upgraded
+++ b/import-layers/yocto-poky/meta-selftest/recipes-test/devtool/devtool-upgrade-test2_git.bb.upgraded
@@ -11,6 +11,8 @@
 PV = "0.1+git${SRCPV}"
 
 SRC_URI = "git://git.yoctoproject.org/dbus-wait"
+UPSTREAM_CHECK_COMMITS = "1"
+RECIPE_NO_UPDATE_REASON = "This recipe is used to test devtool upgrade feature"
 
 S = "${WORKDIR}/git"
 
diff --git a/import-layers/yocto-poky/meta-skeleton/conf/layer.conf b/import-layers/yocto-poky/meta-skeleton/conf/layer.conf
index aca1633..a15516a 100644
--- a/import-layers/yocto-poky/meta-skeleton/conf/layer.conf
+++ b/import-layers/yocto-poky/meta-skeleton/conf/layer.conf
@@ -13,3 +13,5 @@
 LAYERVERSION_skeleton = "1"
 
 LAYERDEPENDS_skeleton = "core"
+
+LAYERSERIES_COMPAT_skeleton = "rocko"
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/README.hardware b/import-layers/yocto-poky/meta-yocto-bsp/README.hardware
new file mode 100644
index 0000000..84c5afa
--- /dev/null
+++ b/import-layers/yocto-poky/meta-yocto-bsp/README.hardware
@@ -0,0 +1,407 @@
+                  Yocto Project Hardware Reference BSPs README
+                  ============================================
+
+This file gives details about using the Yocto Project hardware reference BSPs.
+The machines supported can be seen in the conf/machine/ directory and are listed 
+below. There is one per supported hardware architecture and these are primarily
+used to validate that the Yocto Project works on the hardware architectures of
+those machines.
+
+If you are in doubt about using Poky/OpenEmbedded/Yocto Project with your hardware, 
+consult the documentation for your board/device.
+
+Support for additional devices is normally added by adding BSP layers to your 
+configuration. For more information please see the Yocto Board Support Package 
+(BSP) Developer's Guide - documentation source is in documentation/bspguide or 
+download the PDF from:
+
+   http://yoctoproject.org/documentation
+
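+As an illustrative sketch (the layer path below is only a placeholder), a BSP
+layer is typically enabled by adding it to BBLAYERS in conf/bblayers.conf, for
+example via bitbake-layers from an initialised build directory:
+
+   $ bitbake-layers add-layer /path/to/meta-yourbsp
+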
+Note that these reference BSPs use the linux-yocto kernel and in general don't
+pull in binary module support for the platforms. This means some device functionality
+may be limited compared to a 'full' BSP which may be available.
+
+
+Hardware Reference Boards
+=========================
+
+The following boards are supported by the meta-yocto-bsp layer:
+
+  * Texas Instruments Beaglebone (beaglebone)
+  * Freescale MPC8315E-RDB (mpc8315e-rdb)
+  * Ubiquiti Networks EdgeRouter Lite (edgerouter)
+  * General IA platforms (genericx86 and genericx86-64)
+
+For more information see the board's section below. The appropriate MACHINE
+variable value corresponding to the board is given in brackets.
+
+Reference Board Maintenance
+===========================
+
+Send pull requests, patches, comments or questions about meta-yocto-bsps to poky@yoctoproject.org
+
+Maintainers: Kevin Hao <kexin.hao@windriver.com>
+             Bruce Ashfield <bruce.ashfield@windriver.com>
+
+Consumer Devices
+================
+
+The following consumer devices are supported by the meta-yocto-bsp layer:
+
+  * Intel x86 based PCs and devices (genericx86)
+  * Ubiquiti Networks EdgeRouter Lite (edgerouter)
+
+For more information see the device's section below. The appropriate MACHINE
+variable value corresponding to the device is given in brackets.
+
+
+
+                      Specific Hardware Documentation
+                      ===============================
+
+
+Intel x86 based PCs and devices (genericx86*)
+=============================================
+
+The genericx86 and genericx86-64 MACHINEs are tested on the following platforms:
+
+Intel Xeon/Core i-Series:
+  + Intel NUC5 Series - ix-52xx Series SOC (Broadwell)
+  + Intel NUC6 Series - ix-62xx Series SOC (Skylake)
+  + Intel Shumway Xeon Server
+
+Intel Atom platforms:
+  + MinnowBoard MAX - E3825 SOC (Bay Trail)
+  + MinnowBoard MAX - Turbot (ADI Engineering) - E3826 SOC (Bay Trail)
+    - These boards can run in either 32-bit or 64-bit mode depending on firmware
+    - See minnowboard.org for details 
+  + Intel Braswell SOC
+
+and are likely to work on many unlisted Atom/Core/Xeon based devices. The MACHINE
+type supports ethernet, wifi, sound, and Intel/vesa graphics by default in
+addition to common PC input devices, busses, and so on.
+
+Depending on the device, it can boot from a traditional hard-disk, a USB device,
+or over the network. Writing generated images to physical media is
+straightforward with a caveat for USB devices. The following examples assume the
+target boot device is /dev/sdb; be sure to verify this and use the correct
+device, as the following commands are run as root and are not reversible.
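+
+As an illustrative check on a typical Linux host, comparing "lsblk" output
+before and after plugging in the USB key is one way to confirm which /dev/sdX
+node it was assigned, for example:
+
+     $ lsblk -d -o NAME,SIZE,MODEL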
+
+USB Device:
+  1. Build a live image. This image type consists of a simple filesystem
+     without a partition table, which is suitable for USB keys. With the
+     default setup for the genericx86 machine, this image type is built
+     automatically for any image you build. For example:
+
+     $ bitbake core-image-minimal
+
+  2. Use the "dd" utility to write the image to the raw block device. For
+     example:
+
+     # dd if=core-image-minimal-genericx86.hddimg of=/dev/sdb
+
+  If the device fails to boot with "Boot error" displayed, or apparently
+  stops just after the SYSLINUX version banner, it is likely the BIOS cannot
+  understand the physical layout of the disk (or rather it expects a
+  particular layout and cannot handle anything else). There are two possible
+  solutions to this problem:
+
+  1. Change the BIOS USB Device setting to HDD mode. The label will vary by
+     device, but the idea is to force BIOS to read the Cylinder/Head/Sector
+     geometry from the device.
+
+  2. Use a ".wic" image with an EFI partition
+
+     a) With a default grub-efi bootloader:
+     # dd if=core-image-minimal-genericx86-64.wic of=/dev/sdb
+
+     b) Use systemd-boot instead
+     - Build an image with EFI_PROVIDER="systemd-boot" then use the above
+       dd command to write the image to a USB stick, as sketched below.
+
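+     A minimal illustrative sketch (assuming the variable is set in your build's
+     conf/local.conf, and reusing the image and device names from the examples
+     above):
+
+       EFI_PROVIDER = "systemd-boot"
+
+       $ bitbake core-image-minimal
+       # dd if=core-image-minimal-genericx86-64.wic of=/dev/sdb
+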
+
+Texas Instruments Beaglebone (beaglebone)
+=========================================
+
+The Beaglebone is an ARM Cortex-A8 development board with USB, Ethernet, 2D/3D
+accelerated graphics, audio, serial, JTAG, and SD/MMC. The Black adds a faster
+CPU, more RAM, eMMC flash and a micro HDMI port. The beaglebone MACHINE is
+tested on the following platforms:
+
+  o Beaglebone Black A6
+  o Beaglebone A6 (the original "White" model)
+
+The Beaglebone Black has eMMC, while the White does not. Pressing the USER/BOOT
+button when powering on will temporarily change the boot order. But for the sake
+of simplicity, these instructions assume you have erased the eMMC on the Black,
+so its boot behavior matches that of the White and it boots from the SD card. To do
+this, issue the following commands from the u-boot prompt:
+
+    # mmc dev 1
+    # mmc erase 0 512
+
+To further tailor these instructions for your board, please refer to the
+documentation at http://www.beagleboard.org/bone and http://www.beagleboard.org/black
+
+From a Linux system with access to the image files perform the following steps:
+
+  1. Build an image. For example:
+
+     $ bitbake core-image-minimal
+
+  2. Use the "dd" utility to write the image to the SD card. For example:
+
+     # dd if=core-image-minimal-beaglebone.wic of=/dev/sdb
+
+  3. Insert the SD card into the Beaglebone and boot the board.
+
+Freescale MPC8315E-RDB (mpc8315e-rdb)
+=====================================
+
+The MPC8315 PowerPC reference platform (MPC8315E-RDB) is aimed at hardware and
+software development of network attached storage (NAS) and digital media server
+applications. The MPC8315E-RDB features the PowerQUICC II Pro processor, which
+includes a built-in security accelerator.
+
+(Note: you may find it easier to order MPC8315E-RDBA; this appears to be the
+same board in an enclosure with accessories. In any case it is fully
+compatible with the instructions given here.)
+
+Setup instructions
+------------------
+
+You will need the following:
+* NFS root setup on your workstation
+* TFTP server installed on your workstation
+* Straight-through 9-conductor serial cable (DB9, M/F) connected from your
+  PC to UART1
+* Ethernet connected to the first ethernet port on the board
+
+--- Preparation ---
+
+Note: if you have altered your board's ethernet MAC address(es) from the
+defaults, or you need to do so because you want multiple boards on the same
+network, then you will need to change the values in the dts file (patch
+linux/arch/powerpc/boot/dts/mpc8315erdb.dts within the kernel source). If
+you have left them at the factory default then you shouldn't need to do
+anything here.
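+
+As a rough sketch (the exact node names vary with the kernel version), each
+ethernet controller node in that dts carries a "local-mac-address" property,
+which is the value to edit, for example:
+
+    local-mac-address = [ 00 04 9f 01 02 03 ];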
+
+Note: To boot from a USB disk you need a u-boot that supports the
+'ext2load usb' command. Set up a TFTP server, load the new u-boot from it,
+and flash it to NOR flash as described below.
+
+Beware! Flashing the bootloader is a potentially dangerous operation that can
+brick your device if done incorrectly. Please make sure you understand
+what the commands below do before executing them.
+
+Load the new u-boot.bin from the TFTP server to memory address 200000
+=> tftp 200000 u-boot.bin
+
+Disable flash protection
+=> protect off all
+
+Erase the old u-boot from fe000000 to fe06ffff in NOR flash.
+The size is 0x70000 (458752 bytes)
+=> erase fe000000 fe06ffff
+
+Copy the new u-boot from address 200000 to fe000000.
+The size is 0x70000; it must be greater than or equal to the size of u-boot.bin
+=> cp.b 200000 fe000000 70000
+
+Enable flash protection again
+=> protect on all
+
+Reset the board
+=> reset
+
+--- Booting from USB disk ---
+
+ 1. Flash partitioned image to the USB disk
+
+    # dd if=core-image-minimal-mpc8315e-rdb.wic of=/dev/sdb
+
+ 2. Plug USB disk into the MPC8315 board
+
+ 3. Connect the board's first serial port to your workstation and then start up
+    your favourite serial terminal so that you will be able to interact with
+    the serial console. If you don't have a favourite, picocom is suggested:
+
+  $ picocom /dev/ttyUSB0 -b 115200
+
+ 4. Power up or reset the board and press a key on the terminal when prompted
+    to get to the U-Boot command line
+
+ 5. Optional. Load the u-boot.bin from the USB disk:
+
+ => usb start
+ => ext2load usb 0:1 200000 u-boot.bin
+
+    and flash it to NOR flash as described above.
+
+ 6. Load the kernel and dtb from the first partition of the USB disk:
+
+ => usb start
+ => ext2load usb 0:1 1000000 uImage
+ => ext2load usb 0:1 2000000 dtb
+
+ 7. Set bootargs and boot up the device
+
+ => setenv bootargs root=/dev/sdb2 rw rootwait console=ttyS0,115200
+ => bootm 1000000 - 2000000
+
+
+--- Booting from NFS root ---
+
+Load the kernel and dtb (device tree blob), and boot the system as follows:
+
+ 1. Get the kernel (uImage-mpc8315e-rdb.bin) and dtb (uImage-mpc8315e-rdb.dtb)
+    files from the tmp/deploy directory, and make them available on your TFTP
+    server.
+
+ 2. Connect the board's first serial port to your workstation and then start up
+    your favourite serial terminal so that you will be able to interact with
+    the serial console. If you don't have a favourite, picocom is suggested:
+
+  $ picocom /dev/ttyUSB0 -b 115200
+
+ 3. Power up or reset the board and press a key on the terminal when prompted
+    to get to the U-Boot command line
+
+ 4. Set up the environment in U-Boot:
+
+ => setenv ipaddr <board ip>
+ => setenv serverip <tftp server ip>
+ => setenv bootargs root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:255.255.255.0:mpc8315e:eth0:off console=ttyS0,115200
+
+ 5. Download the kernel and dtb, and boot:
+
+ => tftp 1000000 uImage-mpc8315e-rdb.bin
+ => tftp 2000000 uImage-mpc8315e-rdb.dtb
+ => bootm 1000000 - 2000000
+
+--- Booting from JFFS2 root ---
+
+ 1. First boot the board with NFS root.
+
+ 2. Erase the MTD partition which will be used as root:
+
+    $ flash_eraseall  /dev/mtd3
+
+ 3. Copy the JFFS2 image to the MTD partition:
+
+    $ flashcp core-image-minimal-mpc8315e-rdb.jffs2 /dev/mtd3
+
+ 4. Then reboot the board and set up the environment in U-Boot:
+
+    => setenv bootargs root=/dev/mtdblock3 rootfstype=jffs2 console=ttyS0,115200
+
+
+Ubiquiti Networks EdgeRouter Lite (edgerouter)
+==============================================
+
+The EdgeRouter Lite is part of the EdgeMax series. It is a MIPS64 router
+(based on the Cavium Octeon processor) with 512MB of RAM, which uses an
+internal USB pendrive for storage.
+
+Setup instructions
+------------------
+
+You will need the following:
+* RJ45 -> serial ("rollover") cable connected from your PC to the CONSOLE
+  port on the device
+* Ethernet connected to the first ethernet port on the board
+
+If using NFS as part of the setup process, you will also need:
+* NFS root setup on your workstation
+* TFTP server installed on your workstation (if fetching the kernel from
+  TFTP, see below).
+
+--- Preparation ---
+
+Build an image (e.g. core-image-minimal) using "edgerouter" as the MACHINE.
+The following instructions are based on core-image-minimal; other image
+targets work in a similar way, as sketched below.
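+
+As a minimal sketch (assuming an already-initialised build directory), this
+amounts to setting the machine in conf/local.conf and building:
+
+    MACHINE = "edgerouter"
+
+    $ bitbake core-image-minimal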
+
+--- Booting from NFS root / kernel via TFTP ---
+
+Load the kernel, and boot the system as follows:
+
+ 1. Get the kernel (vmlinux) file from the tmp/deploy/images/edgerouter
+    directory, and make it available on your TFTP server.
+
+ 2. Connect the board's first serial port to your workstation and then start up
+    your favourite serial terminal so that you will be able to interact with
+    the serial console. If you don't have a favourite, picocom is suggested:
+
+  $ picocom /dev/ttyS0 -b 115200
+
+ 3. Power up or reset the board and press a key on the terminal when prompted
+    to get to the U-Boot command line
+
+ 4. Set up the environment in U-Boot:
+
+ => setenv ipaddr <board ip>
+ => setenv serverip <tftp server ip>
+
+ 5. Download the kernel and boot:
+
+ => tftp $loadaddr vmlinux
+ => bootoctlinux $loadaddr coremask=0x3 root=/dev/nfs rw nfsroot=<nfsroot ip>:<rootfs path> ip=<board ip>:<server ip>:<gateway ip>:<netmask>:edgerouter:eth0:off mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)
+
+--- Booting from USB disk ---
+
+To boot from the USB disk, you either need to remove it from the edgerouter
+box and populate it from another computer, or use a previously booted NFS
+image and populate it from the edgerouter itself.
+
+Type 1: Use partitioned image
+-----------------------------
+
+Steps:
+
+ 1. Remove the USB disk from the edgerouter and insert it into a computer
+    that has access to your build artifacts.
+
+ 2. Flash the image.
+
+    # dd if=core-image-minimal-edgerouter.wic of=/dev/sdb
+
+ 3. Insert USB disk into the edgerouter and boot it.
+
+Type 2: NFS
+-----------
+
+Note: If you place the kernel on the ext3 partition, you must re-create the
+      ext3 filesystem, since the factory u-boot can only handle 128 byte inodes
+      and cannot read the partition otherwise.
+
+      These boot instructions assume that you have re-created the ext3 filesystem
+      with 128 byte inodes, that you have an updated u-boot, or that you are
+      running an image capable of making the filesystem on the board itself.
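+
+      For example, re-creating the filesystem with 128 byte inodes from another
+      Linux machine could look like the following (a sketch, assuming the USB
+      disk's second partition shows up there as /dev/sdb2; verify the device
+      first):
+
+        # mkfs.ext3 -I 128 /dev/sdb2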
+
+
+ 1. Boot from NFS root
+
+ 2. Mount partition 2 of the USB disk and then extract the contents of
+    tmp/deploy/core-image-XXXX.tar.bz2 into it.
+
+    Before starting, copy core-image-minimal-XXX.tar.bz2 and vmlinux into the
+    rootfs path on your workstation, and then:
+
+      # mount /dev/sda2 /media/sda2
+      # tar -xvjpf core-image-minimal-XXX.tar.bz2 -C /media/sda2
+      # cp vmlinux /media/sda2/boot/vmlinux
+      # umount /media/sda2
+      # reboot
+
+ 3. Reboot the board and press a key on the terminal when prompted to get to the U-Boot
+    command line:
+
+    # reboot
+
+ 4. Load the kernel and boot:
+
+      => ext2load usb 0:2 $loadaddr boot/vmlinux
+      => bootoctlinux $loadaddr coremask=0x3 root=/dev/sda2 rw rootwait mtdparts=phys_mapped_flash:512k(boot0),512k(boot1),64k@3072k(eeprom)
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/conf/layer.conf b/import-layers/yocto-poky/meta-yocto-bsp/conf/layer.conf
index 44dbca6..7b565f2 100644
--- a/import-layers/yocto-poky/meta-yocto-bsp/conf/layer.conf
+++ b/import-layers/yocto-poky/meta-yocto-bsp/conf/layer.conf
@@ -9,3 +9,4 @@
 BBFILE_PATTERN_yoctobsp = "^${LAYERDIR}/"
 BBFILE_PRIORITY_yoctobsp = "5"
 LAYERVERSION_yoctobsp = "3"
+LAYERSERIES_COMPAT_yoctobsp = "rocko"
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/beaglebone.conf b/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/beaglebone.conf
index 4a90ba5..1bdd4df 100644
--- a/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/beaglebone.conf
+++ b/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/beaglebone.conf
@@ -26,7 +26,7 @@
 PREFERRED_VERSION_linux-yocto ?= "4.12%"
 
 KERNEL_IMAGETYPE = "zImage"
-KERNEL_DEVICETREE = "am335x-bone.dtb am335x-boneblack.dtb"
+KERNEL_DEVICETREE = "am335x-bone.dtb am335x-boneblack.dtb am335x-bonegreen.dtb"
 KERNEL_EXTRA_ARGS += "LOADADDR=${UBOOT_ENTRYPOINT}"
 
 SPL_BINARY = "MLO"
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/genericx86-64.conf b/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/genericx86-64.conf
index bfedd84..12f7c0d 100644
--- a/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/genericx86-64.conf
+++ b/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/genericx86-64.conf
@@ -7,6 +7,4 @@
 require conf/machine/include/tune-core2.inc
 require conf/machine/include/genericx86-common.inc
 
-PREFERRED_VERSION_linux-yocto ?= "4.10%"
-
 SERIAL_CONSOLES_CHECK = "ttyS0"
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/genericx86.conf b/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/genericx86.conf
index af03b86..798b62e 100644
--- a/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/genericx86.conf
+++ b/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/genericx86.conf
@@ -8,5 +8,3 @@
 require conf/machine/include/genericx86-common.inc
 
 MACHINE_ESSENTIAL_EXTRA_RRECOMMENDS += "gma500-gfx-check"
-
-PREFERRED_VERSION_linux-yocto ?= "4.10%"
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/mpc8315e-rdb.conf b/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/mpc8315e-rdb.conf
index 6e2d316..371a3db 100644
--- a/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/mpc8315e-rdb.conf
+++ b/import-layers/yocto-poky/meta-yocto-bsp/conf/machine/mpc8315e-rdb.conf
@@ -30,6 +30,6 @@
 IMAGE_FSTYPES ?= "jffs2 tar.bz2"
 JFFS2_ERASEBLOCK = "0x4000"
 
-IMAGE_FSTYPES += "wic"
+IMAGE_FSTYPES += "wic wic.bmap"
 WKS_FILE ?= 'mpc8315e-rdb.wks'
 IMAGE_BOOT_FILES ?= "u-boot.bin uImage uImage-mpc8315erdb.dtb;dtb"
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/lib/oeqa/selftest/__init__.py b/import-layers/yocto-poky/meta-yocto-bsp/lib/oeqa/selftest/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/import-layers/yocto-poky/meta-yocto-bsp/lib/oeqa/selftest/__init__.py
+++ /dev/null
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/lib/oeqa/selftest/cases/systemd_boot.py b/import-layers/yocto-poky/meta-yocto-bsp/lib/oeqa/selftest/cases/systemd_boot.py
new file mode 100644
index 0000000..dd5eeec
--- /dev/null
+++ b/import-layers/yocto-poky/meta-yocto-bsp/lib/oeqa/selftest/cases/systemd_boot.py
@@ -0,0 +1,98 @@
+import os
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.core.decorator.oeid import OETestID
+from oeqa.core.decorator.depends import OETestDepends
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, runqemu
+
+class Systemdboot(OESelftestTestCase):
+    def _common_setup(self):
+        """
+        Common setup for test cases: 1445, 1528
+        """
+
+        # Set EFI_PROVIDER = "systemd-boot" and MACHINE = "genericx86-64" in conf/local.conf
+        features = 'EFI_PROVIDER = "systemd-boot"\n'
+        features += 'MACHINE = "genericx86-64"'
+        self.append_config(features)
+
+    def _common_build(self):
+        """
+        Common build for test cases: 1445, 1528
+        """
+
+        # Build a genericx86-64/efi systemd-boot image
+        bitbake('mtools-native core-image-minimal')
+
+
+    @OETestID(1445)
+    def test_efi_systemdboot_images_can_be_built(self):
+        """
+        Summary:     Check if systemd-boot images can be built correctly
+        Expected:    1. File systemd-boot.efi should be available in $poky/build/tmp/deploy/images/genericx86-64
+                     2. 'systemd-boot' can be built correctly
+        Product:     oe-core
+        Author:      Jose Perez Carranza <jose.perez.carranza@intel.com>
+        AutomatedBy: Jose Perez Carranza <jose.perez.carranza@intel.com>
+        """
+
+        # We'd use DEPLOY_DIR_IMAGE here, except that we need its value for
+        # MACHINE = "genericx86-64", which is probably not the one configured
+        systemdbootfile = os.path.join(get_bb_var('DEPLOY_DIR'), 'images', 'genericx86-64', 'systemd-bootx64.efi')
+
+        self._common_setup()
+
+        # Ensure we're actually testing that this gets built and not that
+        # it was around from an earlier build
+        bitbake('-c cleansstate systemd-boot')
+        runCmd('rm -f %s' % systemdbootfile)
+
+        self._common_build()
+
+        found = os.path.isfile(systemdbootfile)
+        self.assertTrue(found, 'Systemd-Boot file %s not found' % systemdbootfile)
+
+    @OETestID(1528)
+    @OETestDepends(['systemd_boot.Systemdboot.test_efi_systemdboot_images_can_be_built'])
+    def test_image_efi_file(self):
+
+        """
+        Summary:      Check if the EFI bootloader for systemd is correctly built
+        Dependencies: Image was built correctly in testcase 1445
+        Steps:        1. Copy the bootx64.efi file from the hddimg created
+                      under build/tmp/deploy/images/genericx86-64
+                      2. Check that bootx64.efi was copied from the hddimg
+                      3. Verify that the checksums of the copied and previously
+                      created files are equal.
+        Expected:     systemd-bootx64.efi and bootx64.efi should be the same,
+                      hence the checksums should be equal.
+        Product:      oe-core
+        Author:       Jose Perez Carranza <jose.perez.carranza at linux-intel.com>
+        AutomatedBy:  Jose Perez Carranza <jose.perez.carranza at linux-intel.com>
+        """
+
+        systemdbootfile = os.path.join(get_bb_var('DEPLOY_DIR'), 'images', 'genericx86-64',
+                                           'systemd-bootx64.efi')
+        systemdbootimage = os.path.join(get_bb_var('DEPLOY_DIR'), 'images', 'genericx86-64',
+                                                'core-image-minimal-genericx86-64.hddimg')
+        imagebootfile = os.path.join(get_bb_var('DEPLOY_DIR'), 'images', 'genericx86-64',
+                                                            'bootx64.efi')
+        mcopynative = os.path.join(get_bb_var('STAGING_BINDIR_NATIVE'), 'mcopy')
+
+        # Clean the environment before starting the test
+        if os.path.isfile(imagebootfile):
+            runCmd('rm -f %s' % imagebootfile)
+
+        # Step 1: extract bootx64.efi from the hddimg
+        runCmd('%s -i %s ::EFI/BOOT/bootx64.efi %s' % (mcopynative, systemdbootimage,
+                                                       imagebootfile))
+
+        # Step 2: check that bootx64.efi was copied out of the hddimg
+        found = os.path.isfile(imagebootfile)
+        self.assertTrue(found, 'bootx64.efi file %s was not copied from image'
+                        % imagebootfile)
+
+        # Step 3: verify that both files have the same checksum
+        result = runCmd('md5sum %s %s' % (systemdbootfile, imagebootfile))
+        self.assertEqual(result.output.split()[0], result.output.split()[2],
+                         '%s was not correctly generated' % imagebootfile)
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/lib/oeqa/selftest/systemd_boot.py b/import-layers/yocto-poky/meta-yocto-bsp/lib/oeqa/selftest/systemd_boot.py
deleted file mode 100644
index f7f74db..0000000
--- a/import-layers/yocto-poky/meta-yocto-bsp/lib/oeqa/selftest/systemd_boot.py
+++ /dev/null
@@ -1,56 +0,0 @@
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, runqemu
-from oeqa.utils.decorators import testcase
-import re
-import os
-import sys
-import logging
-
-
-class Systemdboot(oeSelfTest):
-
-    def _common_setup(self):
-        """
-        Common setup for test cases: 1445, XXXX
-        """
-
-        # Set EFI_PROVIDER = "gummiboot" and MACHINE = "genericx86-64" in conf/local.conf
-        features = 'EFI_PROVIDER = "systemd-boot"\n'
-        features += 'MACHINE = "genericx86-64"'
-        self.append_config(features)
-
-    def _common_build(self):
-        """
-        Common build for test cases: 1445 , XXXX
-        """
-
-        # Build a genericx86-64/efi gummiboot image
-        bitbake('mtools-native core-image-minimal')
-
-
-    @testcase(1445)
-    def test_efi_systemdboot_images_can_be_built(self):
-        """
-        Summary:     Check if systemd-boot images can be built correctly
-        Expected:    1. File systemd-boot.efi should be available in $poky/build/tmp/deploy/images/genericx86-64
-                     2. 'systemd-boot" can be built correctly
-        Product:     oe-core
-        Author:      Jose Perez Carranza <jose.perez.carranza@intel.com>
-        AutomatedBy: Jose Perez Carranza <jose.perez.carranza@intel.com>
-        """
-
-        # We'd use DEPLOY_DIR_IMAGE here, except that we need its value for
-        # MACHINE="genericx86-64 which is probably not the one configured
-        systemdbootfile = os.path.join(get_bb_var('DEPLOY_DIR'), 'images', 'genericx86-64', 'systemd-bootx64.efi')
-
-        self._common_setup()
-
-        # Ensure we're actually testing that this gets built and not that
-        # it was around from an earlier build
-        bitbake('-c cleansstate systemd-boot')
-        runCmd('rm -f %s' % systemdbootfile)
-
-        self._common_build()
-
-        found = os.path.isfile(systemdbootfile)
-        self.assertTrue(found, 'Systemd-Boot file %s not found' % systemdbootfile)
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/recipes-core/packagegroups/packagegroup-core-tools-profile.bbappend b/import-layers/yocto-poky/meta-yocto-bsp/recipes-core/packagegroups/packagegroup-core-tools-profile.bbappend
deleted file mode 100644
index f86595c..0000000
--- a/import-layers/yocto-poky/meta-yocto-bsp/recipes-core/packagegroups/packagegroup-core-tools-profile.bbappend
+++ /dev/null
@@ -1,2 +0,0 @@
-RDEPENDS_${PN}_append_genericx86 = " lttng-ust systemtap"
-
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.1.bbappend b/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.1.bbappend
deleted file mode 100644
index c55f925..0000000
--- a/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.1.bbappend
+++ /dev/null
@@ -1,26 +0,0 @@
-KBRANCH_genericx86  = "standard/base"
-KBRANCH_genericx86-64  = "standard/base"
-KBRANCH_edgerouter = "standard/edgerouter"
-KBRANCH_beaglebone = "standard/beaglebone"
-KBRANCH_mpc8315e-rdb = "standard/fsl-mpc8315e-rdb"
-
-KMACHINE_genericx86 ?= "common-pc"
-KMACHINE_genericx86-64 ?= "common-pc-64"
-
-SRCREV_machine_genericx86    ?= "782133d8166ac71ef1ffaba58b7cf81ec9e532a1"
-SRCREV_machine_genericx86-64 ?= "782133d8166ac71ef1ffaba58b7cf81ec9e532a1"
-SRCREV_machine_edgerouter ?= "782133d8166ac71ef1ffaba58b7cf81ec9e532a1"
-SRCREV_machine_beaglebone ?= "ce38fdb820476e496579f2481be977c0f35509f4"
-SRCREV_machine_mpc8315e-rdb ?= "782133d8166ac71ef1ffaba58b7cf81ec9e532a1"
-
-COMPATIBLE_MACHINE_genericx86 = "genericx86"
-COMPATIBLE_MACHINE_genericx86-64 = "genericx86-64"
-COMPATIBLE_MACHINE_edgerouter = "edgerouter"
-COMPATIBLE_MACHINE_beaglebone = "beaglebone"
-COMPATIBLE_MACHINE_mpc8315e-rdb = "mpc8315e-rdb"
-
-LINUX_VERSION_genericx86 = "4.1.43"
-LINUX_VERSION_genericx86-64 = "4.1.43"
-LINUX_VERSION_edgerouter = "4.1.43"
-LINUX_VERSION_beaglebone = "4.1.43"
-LINUX_VERSION_mpc8315e-rdb = "4.1.43"
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.12.bbappend b/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.12.bbappend
new file mode 100644
index 0000000..fad8ebc
--- /dev/null
+++ b/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.12.bbappend
@@ -0,0 +1,27 @@
+KBRANCH_genericx86  = "standard/base"
+KBRANCH_genericx86-64  = "standard/base"
+
+KMACHINE_genericx86 ?= "common-pc"
+KMACHINE_genericx86-64 ?= "common-pc-64"
+KBRANCH_edgerouter = "standard/edgerouter"
+KBRANCH_beaglebone = "standard/beaglebone"
+KBRANCH_mpc8315e-rdb = "standard/fsl-mpc8315e-rdb"
+
+SRCREV_machine_genericx86    ?= "1c4ad569af3e23a77994235435040e322908687f"
+SRCREV_machine_genericx86-64 ?= "1c4ad569af3e23a77994235435040e322908687f"
+SRCREV_machine_edgerouter ?= "257f843ea367744620f1d92910afd2f454e31483"
+SRCREV_machine_beaglebone-yocto ?= "257f843ea367744620f1d92910afd2f454e31483"
+SRCREV_machine_mpc8315e-rdb ?= "014560874f9eb2a86138c9cc35046ff1720485e1"
+
+
+COMPATIBLE_MACHINE_genericx86 = "genericx86"
+COMPATIBLE_MACHINE_genericx86-64 = "genericx86-64"
+COMPATIBLE_MACHINE_edgerouter = "edgerouter"
+COMPATIBLE_MACHINE_beaglebone = "beaglebone"
+COMPATIBLE_MACHINE_mpc8315e-rdb = "mpc8315e-rdb"
+
+LINUX_VERSION_genericx86 = "4.12.20"
+LINUX_VERSION_genericx86-64 = "4.12.20"
+LINUX_VERSION_edgerouter = "4.12.19"
+LINUX_VERSION_beaglebone-yocto = "4.12.19"
+LINUX_VERSION_mpc8315e-rdb = "4.12.19"
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.4.bbappend b/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.4.bbappend
index 427af4c..35cce1b 100644
--- a/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.4.bbappend
+++ b/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.4.bbappend
@@ -7,11 +7,11 @@
 KBRANCH_beaglebone = "standard/beaglebone"
 KBRANCH_mpc8315e-rdb = "standard/fsl-mpc8315e-rdb"
 
-SRCREV_machine_genericx86    ?= "b71c7b786aed26c0a1e4eca66f1d874ec017d699"
-SRCREV_machine_genericx86-64 ?= "b71c7b786aed26c0a1e4eca66f1d874ec017d699"
-SRCREV_machine_edgerouter ?= "b71c7b786aed26c0a1e4eca66f1d874ec017d699"
-SRCREV_machine_beaglebone ?= "b71c7b786aed26c0a1e4eca66f1d874ec017d699"
-SRCREV_machine_mpc8315e-rdb ?= "b4daa4e9d68862e559d726b0b66b7be605889b9e"
+SRCREV_machine_genericx86    ?= "4d31a8b7661509ff1044abcf9050750cc2478e20"
+SRCREV_machine_genericx86-64 ?= "4d31a8b7661509ff1044abcf9050750cc2478e20"
+SRCREV_machine_edgerouter ?= "4d31a8b7661509ff1044abcf9050750cc2478e20"
+SRCREV_machine_beaglebone-yocto ?= "4d31a8b7661509ff1044abcf9050750cc2478e20"
+SRCREV_machine_mpc8315e-rdb ?= "70120c0a6b0f88122799705114a99a15bf0113ce"
 
 COMPATIBLE_MACHINE_genericx86 = "genericx86"
 COMPATIBLE_MACHINE_genericx86-64 = "genericx86-64"
@@ -19,8 +19,8 @@
 COMPATIBLE_MACHINE_beaglebone = "beaglebone"
 COMPATIBLE_MACHINE_mpc8315e-rdb = "mpc8315e-rdb"
 
-LINUX_VERSION_genericx86 = "4.4.87"
-LINUX_VERSION_genericx86-64 = "4.4.87"
-LINUX_VERSION_edgerouter = "4.4.87"
-LINUX_VERSION_beaglebone = "4.4.87"
-LINUX_VERSION_mpc8315e-rdb = "4.4.87"
+LINUX_VERSION_genericx86 = "4.4.113"
+LINUX_VERSION_genericx86-64 = "4.4.113"
+LINUX_VERSION_edgerouter = "4.4.113"
+LINUX_VERSION_beaglebone-yocto = "4.4.113"
+LINUX_VERSION_mpc8315e-rdb = "4.4.113"
diff --git a/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.9.bbappend b/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.9.bbappend
index b3b5cd5..c0b2eea 100644
--- a/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.9.bbappend
+++ b/import-layers/yocto-poky/meta-yocto-bsp/recipes-kernel/linux/linux-yocto_4.9.bbappend
@@ -7,11 +7,11 @@
 KBRANCH_beaglebone = "standard/beaglebone"
 KBRANCH_mpc8315e-rdb = "standard/fsl-mpc8315e-rdb"
 
-SRCREV_machine_genericx86    ?= "480ee599fb8df712c10dcf4b7aa6398b79f7d404"
-SRCREV_machine_genericx86-64 ?= "480ee599fb8df712c10dcf4b7aa6398b79f7d404"
-SRCREV_machine_edgerouter ?= "480ee599fb8df712c10dcf4b7aa6398b79f7d404"
-SRCREV_machine_beaglebone ?= "480ee599fb8df712c10dcf4b7aa6398b79f7d404"
-SRCREV_machine_mpc8315e-rdb ?= "88a703b15a7564704c3dc5d3c1237e0859897655"
+SRCREV_machine_genericx86    ?= "f7a6d45fff853173bfbf61706aeffcd1d1e99467"
+SRCREV_machine_genericx86-64 ?= "f7a6d45fff853173bfbf61706aeffcd1d1e99467"
+SRCREV_machine_edgerouter ?= "f7a6d45fff853173bfbf61706aeffcd1d1e99467"
+SRCREV_machine_beaglebone-yocto ?= "f7a6d45fff853173bfbf61706aeffcd1d1e99467"
+SRCREV_machine_mpc8315e-rdb ?= "195a857a0e42150b940dfcbdf857be66e05dd003"
 
 COMPATIBLE_MACHINE_genericx86 = "genericx86"
 COMPATIBLE_MACHINE_genericx86-64 = "genericx86-64"
@@ -19,8 +19,8 @@
 COMPATIBLE_MACHINE_beaglebone = "beaglebone"
 COMPATIBLE_MACHINE_mpc8315e-rdb = "mpc8315e-rdb"
 
-LINUX_VERSION_genericx86 = "4.9.49"
-LINUX_VERSION_genericx86-64 = "4.9.49"
-LINUX_VERSION_edgerouter = "4.9.49"
-LINUX_VERSION_beaglebone = "4.9.49"
-LINUX_VERSION_mpc8315e-rdb = "4.9.49"
+LINUX_VERSION_genericx86 = "4.9.78"
+LINUX_VERSION_genericx86-64 = "4.9.78"
+LINUX_VERSION_edgerouter = "4.9.78"
+LINUX_VERSION_beaglebone-yocto = "4.9.78"
+LINUX_VERSION_mpc8315e-rdb = "4.9.78"
diff --git a/import-layers/yocto-poky/meta-yocto/conf/layer.conf b/import-layers/yocto-poky/meta-yocto/conf/layer.conf
deleted file mode 100644
index 9ed30ed..0000000
--- a/import-layers/yocto-poky/meta-yocto/conf/layer.conf
+++ /dev/null
@@ -1,2 +0,0 @@
-# Dummy file to allow for meta-yocto -> meta-poky transition
-BBPATH =. "${LAYERDIR}/../meta-poky:"
diff --git a/import-layers/yocto-poky/meta/classes/allarch.bbclass b/import-layers/yocto-poky/meta/classes/allarch.bbclass
index a7ce024..51ba509 100644
--- a/import-layers/yocto-poky/meta/classes/allarch.bbclass
+++ b/import-layers/yocto-poky/meta/classes/allarch.bbclass
@@ -43,8 +43,8 @@
         d.setVar("INHIBIT_PACKAGE_STRIP", "1")
 
         # These multilib values shouldn't change allarch packages so exclude them
-        d.setVarFlag("emit_pkgdata", "vardepsexclude", "MULTILIB_VARIANTS")
-        d.setVarFlag("write_specfile", "vardepsexclude", "MULTILIBS")
+        d.appendVarFlag("emit_pkgdata", "vardepsexclude", " MULTILIB_VARIANTS")
+        d.appendVarFlag("write_specfile", "vardepsexclude", " MULTILIBS")
     elif bb.data.inherits_class('packagegroup', d) and not bb.data.inherits_class('nativesdk', d):
         bb.error("Please ensure recipe %s sets PACKAGE_ARCH before inherit packagegroup" % d.getVar("FILE"))
 }
diff --git a/import-layers/yocto-poky/meta/classes/archiver.bbclass b/import-layers/yocto-poky/meta/classes/archiver.bbclass
index 18c5b96..ec80ad4 100644
--- a/import-layers/yocto-poky/meta/classes/archiver.bbclass
+++ b/import-layers/yocto-poky/meta/classes/archiver.bbclass
@@ -223,6 +223,8 @@
     import shutil
 
     # Forcibly expand the sysroot paths as we're about to change WORKDIR
+    d.setVar('STAGING_DIR_HOST', d.getVar('STAGING_DIR_HOST'))
+    d.setVar('STAGING_DIR_TARGET', d.getVar('STAGING_DIR_TARGET'))
     d.setVar('RECIPE_SYSROOT', d.getVar('RECIPE_SYSROOT'))
     d.setVar('RECIPE_SYSROOT_NATIVE', d.getVar('RECIPE_SYSROOT_NATIVE'))
 
diff --git a/import-layers/yocto-poky/meta/classes/autotools.bbclass b/import-layers/yocto-poky/meta/classes/autotools.bbclass
index ac04a07..efa4098 100644
--- a/import-layers/yocto-poky/meta/classes/autotools.bbclass
+++ b/import-layers/yocto-poky/meta/classes/autotools.bbclass
@@ -141,7 +141,7 @@
 
 python autotools_aclocals () {
     # Refresh variable with cache files
-    d.setVar("CONFIG_SITE", siteinfo_get_files(d, aclocalcache=True))
+    d.setVar("CONFIG_SITE", siteinfo_get_files(d, sysrootcache=True))
 }
 
 CONFIGURE_FILES = "${S}/configure.in ${S}/configure.ac ${S}/config.h.in ${S}/acinclude.m4 Makefile.am"
diff --git a/import-layers/yocto-poky/meta/classes/base.bbclass b/import-layers/yocto-poky/meta/classes/base.bbclass
index d95afb7..bd0d6e3 100644
--- a/import-layers/yocto-poky/meta/classes/base.bbclass
+++ b/import-layers/yocto-poky/meta/classes/base.bbclass
@@ -61,22 +61,15 @@
 
 
 def base_dep_prepend(d):
-    #
-    # Ideally this will check a flag so we will operate properly in
-    # the case where host == build == target, for now we don't work in
-    # that case though.
-    #
+    if d.getVar('INHIBIT_DEFAULT_DEPS', False):
+        return ""
+    return "${BASE_DEFAULT_DEPS}"
 
-    deps = ""
-    # INHIBIT_DEFAULT_DEPS doesn't apply to the patch command.  Whether or  not
-    # we need that built is the responsibility of the patch function / class, not
-    # the application.
-    if not d.getVar('INHIBIT_DEFAULT_DEPS', False):
-        if (d.getVar('HOST_SYS') != d.getVar('BUILD_SYS')):
-            deps += " virtual/${TARGET_PREFIX}gcc virtual/${TARGET_PREFIX}compilerlibs virtual/libc "
-    return deps
+BASE_DEFAULT_DEPS = "virtual/${TARGET_PREFIX}gcc virtual/${TARGET_PREFIX}compilerlibs virtual/libc"
 
-BASEDEPENDS = "${@base_dep_prepend(d)}"
+BASEDEPENDS = ""
+BASEDEPENDS_class-target = "${@base_dep_prepend(d)}"
+BASEDEPENDS_class-nativesdk = "${@base_dep_prepend(d)}"
 
 DEPENDS_prepend="${BASEDEPENDS} "
 
@@ -185,7 +178,7 @@
 
 def get_layers_branch_rev(d):
     layers = (d.getVar("BBLAYERS") or "").split()
-    layers_branch_rev = ["%-17s = \"%s:%s\"" % (os.path.basename(i), \
+    layers_branch_rev = ["%-20s = \"%s:%s\"" % (os.path.basename(i), \
         base_get_metadata_git_branch(i, None).strip(), \
         base_get_metadata_git_revision(i, None)) \
             for i in layers]
@@ -213,7 +206,7 @@
     for var in statusvars:
         value = d.getVar(var)
         if value is not None:
-            yield '%-17s = "%s"' % (var, value)
+            yield '%-20s = "%s"' % (var, value)
 
 def buildcfg_neededvars(d):
     needed_vars = oe.data.typed_value("BUILDCFG_NEEDEDVARS", d)
@@ -227,7 +220,7 @@
         bb.fatal('The following variable(s) were not set: %s\nPlease set them directly, or choose a MACHINE or DISTRO that sets them.' % ', '.join(pesteruser))
 
 addhandler base_eventhandler
-base_eventhandler[eventmask] = "bb.event.ConfigParsed bb.event.BuildStarted bb.event.RecipePreFinalise bb.runqueue.sceneQueueComplete bb.event.RecipeParsed"
+base_eventhandler[eventmask] = "bb.event.ConfigParsed bb.event.MultiConfigParsed bb.event.BuildStarted bb.event.RecipePreFinalise bb.runqueue.sceneQueueComplete bb.event.RecipeParsed"
 python base_eventhandler() {
     import bb.runqueue
 
@@ -242,6 +235,16 @@
         setup_hosttools_dir(d.getVar('HOSTTOOLS_DIR'), 'HOSTTOOLS', d)
         setup_hosttools_dir(d.getVar('HOSTTOOLS_DIR'), 'HOSTTOOLS_NONFATAL', d, fatal=False)
 
+    if isinstance(e, bb.event.MultiConfigParsed):
+        # We need to expand SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS in each of the multiconfig data stores
+        # own contexts so the variables get expanded correctly for that arch, then inject back into
+        # the main data store.
+        deps = []
+        for config in e.mcdata:
+            deps.append(e.mcdata[config].getVar("SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS"))
+        deps = " ".join(deps)
+        e.mcdata[''].setVar("SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS", deps)
+
     if isinstance(e, bb.event.BuildStarted):
         localdata = bb.data.createCopy(e.data)
         statuslines = []
@@ -391,7 +394,7 @@
     # These take the form:
     #
     # PACKAGECONFIG ??= "<default options>"
-    # PACKAGECONFIG[foo] = "--enable-foo,--disable-foo,foo_depends,foo_runtime_depends"
+    # PACKAGECONFIG[foo] = "--enable-foo,--disable-foo,foo_depends,foo_runtime_depends,foo_runtime_recommends"
     pkgconfigflags = d.getVarFlags("PACKAGECONFIG") or {}
     if pkgconfigflags:
         pkgconfig = (d.getVar('PACKAGECONFIG') or "").split()
@@ -433,12 +436,13 @@
 
         extradeps = []
         extrardeps = []
+        extrarrecs = []
         extraconf = []
         for flag, flagval in sorted(pkgconfigflags.items()):
             items = flagval.split(",")
             num = len(items)
-            if num > 4:
-                bb.error("%s: PACKAGECONFIG[%s] Only enable,disable,depend,rdepend can be specified!"
+            if num > 5:
+                bb.error("%s: PACKAGECONFIG[%s] Only enable,disable,depend,rdepend,rrecommend can be specified!"
                     % (d.getVar('PN'), flag))
 
             if flag in pkgconfig:
@@ -446,12 +450,15 @@
                     extradeps.append(items[2])
                 if num >= 4 and items[3]:
                     extrardeps.append(items[3])
+                if num >= 5 and items[4]:
+                    extrarrecs.append(items[4])
                 if num >= 1 and items[0]:
                     extraconf.append(items[0])
             elif num >= 2 and items[1]:
                     extraconf.append(items[1])
         appendVar('DEPENDS', extradeps)
         appendVar('RDEPENDS_${PN}', extrardeps)
+        appendVar('RRECOMMENDS_${PN}', extrarrecs)
         appendVar('PACKAGECONFIG_CONFARGS', extraconf)
 
     pn = d.getVar('PN')
@@ -617,16 +624,16 @@
             d.appendVarFlag('do_unpack', 'depends', ' lzip-native:do_populate_sysroot')
 
         # *.xz should DEPEND on xz-native for unpacking
-        elif path.endswith('.xz'):
+        elif path.endswith('.xz') or path.endswith('.txz'):
             d.appendVarFlag('do_unpack', 'depends', ' xz-native:do_populate_sysroot')
 
         # .zip should DEPEND on unzip-native for unpacking
-        elif path.endswith('.zip'):
+        elif path.endswith('.zip') or path.endswith('.jar'):
             d.appendVarFlag('do_unpack', 'depends', ' unzip-native:do_populate_sysroot')
 
         # file is needed by rpm2cpio.sh
-        elif path.endswith('.src.rpm'):
-            d.appendVarFlag('do_unpack', 'depends', ' file-native:do_populate_sysroot')
+        elif path.endswith('.rpm'):
+            d.appendVarFlag('do_unpack', 'depends', ' xz-native:do_populate_sysroot')
 
     if needsrcrev:
         d.setVar("SRCPV", "${@bb.fetch2.get_srcrev(d)}")
diff --git a/import-layers/yocto-poky/meta/classes/buildhistory.bbclass b/import-layers/yocto-poky/meta/classes/buildhistory.bbclass
index 3823c66..7a5534e 100644
--- a/import-layers/yocto-poky/meta/classes/buildhistory.bbclass
+++ b/import-layers/yocto-poky/meta/classes/buildhistory.bbclass
@@ -192,7 +192,7 @@
     pe = d.getVar('PE') or "0"
     pv = d.getVar('PV')
     pr = d.getVar('PR')
-    layer = bb.utils.get_file_layer(d.getVar('FILE', True), d)
+    layer = bb.utils.get_file_layer(d.getVar('FILE'), d)
 
     pkgdata_dir = d.getVar('PKGDATA_DIR')
     packages = ""
@@ -348,6 +348,7 @@
         f.write(u"PACKAGES = %s\n" %  rcpinfo.packages)
         f.write(u"LAYER = %s\n" %  rcpinfo.layer)
 
+    write_latest_srcrev(d, pkghistdir)
 
 def write_pkghistory(pkginfo, d):
     bb.debug(2, "Writing package history for package %s" % pkginfo.name)
@@ -600,26 +601,19 @@
 
 python buildhistory_get_extra_sdkinfo() {
     import operator
-    import math
+    from oe.sdk import get_extra_sdkinfo
+
+    sstate_dir = d.expand('${SDK_OUTPUT}/${SDKPATH}/sstate-cache')
+    extra_info = get_extra_sdkinfo(sstate_dir)
 
     if d.getVar('BB_CURRENTTASK') == 'populate_sdk_ext' and \
             "sdk" in (d.getVar('BUILDHISTORY_FEATURES') or "").split():
-        tasksizes = {}
-        filesizes = {}
-        for root, _, files in os.walk(d.expand('${SDK_OUTPUT}/${SDKPATH}/sstate-cache')):
-            for fn in files:
-                if fn.endswith('.tgz'):
-                    fsize = int(math.ceil(float(os.path.getsize(os.path.join(root, fn))) / 1024))
-                    task = fn.rsplit(':', 1)[1].split('_', 1)[1].split('.')[0]
-                    origtotal = tasksizes.get(task, 0)
-                    tasksizes[task] = origtotal + fsize
-                    filesizes[fn] = fsize
         with open(d.expand('${BUILDHISTORY_DIR_SDK}/sstate-package-sizes.txt'), 'w') as f:
-            filesizes_sorted = sorted(filesizes.items(), key=operator.itemgetter(1, 0), reverse=True)
+            filesizes_sorted = sorted(extra_info['filesizes'].items(), key=operator.itemgetter(1, 0), reverse=True)
             for fn, size in filesizes_sorted:
                 f.write('%10d KiB %s\n' % (size, fn))
         with open(d.expand('${BUILDHISTORY_DIR_SDK}/sstate-task-sizes.txt'), 'w') as f:
-            tasksizes_sorted = sorted(tasksizes.items(), key=operator.itemgetter(1, 0), reverse=True)
+            tasksizes_sorted = sorted(extra_info['tasksizes'].items(), key=operator.itemgetter(1, 0), reverse=True)
             for task, size in tasksizes_sorted:
                 f.write('%10d KiB %s\n' % (size, task))
 }
@@ -715,20 +709,23 @@
 
 
 def buildhistory_get_cmdline(d):
-    if sys.argv[0].endswith('bin/bitbake'):
-        bincmd = 'bitbake'
-    else:
-        bincmd = sys.argv[0]
-    return '%s %s' % (bincmd, ' '.join(sys.argv[1:]))
+    argv = d.getVar('BB_CMDLINE', False)
+    if argv:
+        if argv[0].endswith('bin/bitbake'):
+            bincmd = 'bitbake'
+        else:
+            bincmd = argv[0]
+        return '%s %s' % (bincmd, ' '.join(argv[1:]))
+    return ''
 
 
 buildhistory_single_commit() {
 	if [ "$3" = "" ] ; then
 		commitopts="${BUILDHISTORY_DIR}/ --allow-empty"
-		item="No changes"
+		shortlogprefix="No changes: "
 	else
-		commitopts="$3 metadata-revs"
-		item="$3"
+		commitopts=""
+		shortlogprefix=""
 	fi
 	if [ "${BUILDHISTORY_BUILD_FAILURES}" = "0" ] ; then
 		result="succeeded"
@@ -745,7 +742,7 @@
 	esac
 	commitmsgfile=`mktemp`
 	cat > $commitmsgfile << END
-$item: Build ${BUILDNAME} of ${DISTRO} ${DISTRO_VERSION} for machine ${MACHINE} on $2
+${shortlogprefix}Build ${BUILDNAME} of ${DISTRO} ${DISTRO_VERSION} for machine ${MACHINE} on $2
 
 cmd: $1
 
@@ -789,9 +786,7 @@
 			git add -A .
 			# porcelain output looks like "?? packages/foo/bar"
 			# Ensure we commit metadata-revs with the first commit
-			for entry in `echo "$repostatus" | awk '{print $2}' | awk -F/ '{print $1}' | sort | uniq` ; do
-				buildhistory_single_commit "$CMDLINE" "$HOSTNAME" "$entry"
-			done
+			buildhistory_single_commit "$CMDLINE" "$HOSTNAME" dummy
 			git gc --auto --quiet
 		else
 			buildhistory_single_commit "$CMDLINE" "$HOSTNAME"
@@ -829,6 +824,8 @@
                 interrupted = getattr(e, '_interrupted', 0)
                 localdata.setVar('BUILDHISTORY_BUILD_INTERRUPTED', str(interrupted))
                 bb.build.exec_func("buildhistory_commit", localdata)
+            else:
+                bb.note("No commit since BUILDHISTORY_COMMIT != '1'")
 }
 
 addhandler buildhistory_eventhandler
@@ -874,7 +871,10 @@
 do_fetch[postfuncs] += "write_srcrev"
 do_fetch[vardepsexclude] += "write_srcrev"
 python write_srcrev() {
-    pkghistdir = d.getVar('BUILDHISTORY_DIR_PACKAGE')
+    write_latest_srcrev(d, d.getVar('BUILDHISTORY_DIR_PACKAGE'))
+}
+
+def write_latest_srcrev(d, pkghistdir):
     srcrevfile = os.path.join(pkghistdir, 'latest_srcrev')
 
     srcrevs, tag_srcrevs = _get_srcrev_values(d)
@@ -912,4 +912,33 @@
     else:
         if os.path.exists(srcrevfile):
             os.remove(srcrevfile)
+
+do_testimage[postfuncs] += "write_ptest_result"
+do_testimage[vardepsexclude] += "write_ptest_result"
+
+python write_ptest_result() {
+    write_latest_ptest_result(d, d.getVar('BUILDHISTORY_DIR'))
 }
+
+def write_latest_ptest_result(d, histdir):
+    import glob
+    import subprocess
+    test_log_dir = d.getVar('TEST_LOG_DIR')
+    input_ptest = os.path.join(test_log_dir, 'ptest_log')
+    output_ptest = os.path.join(histdir, 'ptest')
+    if os.path.exists(input_ptest):
+        try:
+            # Lock it avoid race issue
+            lock = bb.utils.lockfile(output_ptest + "/ptest.lock")
+            bb.utils.mkdirhier(output_ptest)
+            oe.path.copytree(input_ptest, output_ptest)
+            # Sort test result
+            for result in glob.glob('%s/pass.fail.*' % output_ptest):
+                bb.debug(1, 'Processing %s' % result)
+                cmd = ['sort', result, '-o', result]
+                bb.debug(1, 'Running %s' % cmd)
+                ret = subprocess.call(cmd)
+                if ret != 0:
+                    bb.error('Failed to run %s!' % cmd)
+        finally:
+            bb.utils.unlockfile(lock)
diff --git a/import-layers/yocto-poky/meta/classes/ccache.bbclass b/import-layers/yocto-poky/meta/classes/ccache.bbclass
index d58c8f6..9609020 100644
--- a/import-layers/yocto-poky/meta/classes/ccache.bbclass
+++ b/import-layers/yocto-poky/meta/classes/ccache.bbclass
@@ -1,6 +1,5 @@
 CCACHE = "${@bb.utils.which(d.getVar('PATH'), 'ccache') and 'ccache '}"
 export CCACHE_DIR ?= "${TMPDIR}/ccache/${MULTIMACH_TARGET_SYS}/${PN}"
-CCACHE_DISABLE[unexport] = "1"
 
 # We need to stop ccache considering the current directory or the
 # debug-prefix-map target directory to be significant when calculating
@@ -10,6 +9,3 @@
 
 DEPENDS_append_class-target = " ccache-native"
 DEPENDS[vardepvalueexclude] = " ccache-native"
-
-do_configure[dirs] =+ "${CCACHE_DIR}"
-do_kernel_configme[dirs] =+ "${CCACHE_DIR}"
diff --git a/import-layers/yocto-poky/meta/classes/cmake.bbclass b/import-layers/yocto-poky/meta/classes/cmake.bbclass
index 12df617..ac2c151 100644
--- a/import-layers/yocto-poky/meta/classes/cmake.bbclass
+++ b/import-layers/yocto-poky/meta/classes/cmake.bbclass
@@ -31,6 +31,9 @@
 
 EXTRA_OECMAKE_append = " ${PACKAGECONFIG_CONFARGS}"
 
+EXTRA_OECMAKE_BUILD_prepend_task-compile = "${PARALLEL_MAKE} "
+EXTRA_OECMAKE_BUILD_prepend_task-install = "${PARALLEL_MAKEINST} "
+
 # CMake expects target architectures in the format of uname(2),
 # which do not always match TARGET_ARCH, so all the necessary
 # conversions should happen here.
@@ -135,13 +138,13 @@
 
 do_compile[progress] = "percent"
 cmake_do_compile()  {
-	cd ${B}
-	base_do_compile VERBOSE=1
+	bbnote VERBOSE=1 cmake --build '${B}' -- ${EXTRA_OECMAKE_BUILD}
+	VERBOSE=1 cmake --build '${B}' -- ${EXTRA_OECMAKE_BUILD}
 }
 
 cmake_do_install() {
-	cd ${B}
-	oe_runmake 'DESTDIR=${D}' install
+	bbnote DESTDIR='${D}' cmake --build '${B}' --target install -- ${EXTRA_OECMAKE_BUILD}
+	DESTDIR='${D}' cmake --build '${B}' --target install -- ${EXTRA_OECMAKE_BUILD}
 }
 
 EXPORT_FUNCTIONS do_configure do_compile do_install do_generate_toolchain_file
diff --git a/import-layers/yocto-poky/meta/classes/cml1.bbclass b/import-layers/yocto-poky/meta/classes/cml1.bbclass
index 38e6613..926747f 100644
--- a/import-layers/yocto-poky/meta/classes/cml1.bbclass
+++ b/import-layers/yocto-poky/meta/classes/cml1.bbclass
@@ -64,7 +64,8 @@
     if isdiff:
         statement = 'diff --unchanged-line-format= --old-line-format= --new-line-format="%L" ' + configorig + ' ' + config + '>' + fragment
         subprocess.call(statement, shell=True)
-
+        # No need to check the exit code as we know it's going to be
+        # non-zero, but that's what we expect.
         shutil.copy(configorig, config)
 
         bb.plain("Config fragment has been dumped into:\n %s" % fragment)
diff --git a/import-layers/yocto-poky/meta/classes/cross-canadian.bbclass b/import-layers/yocto-poky/meta/classes/cross-canadian.bbclass
index 49388d4..1928455 100644
--- a/import-layers/yocto-poky/meta/classes/cross-canadian.bbclass
+++ b/import-layers/yocto-poky/meta/classes/cross-canadian.bbclass
@@ -15,7 +15,7 @@
 # Update BASE_PACKAGE_ARCH and PACKAGE_ARCHS
 #
 PACKAGE_ARCH = "${SDK_ARCH}-${SDKPKGSUFFIX}"
-BASECANADIANEXTRAOS ?= "linux-uclibc linux-musl"
+BASECANADIANEXTRAOS ?= "linux-musl"
 CANADIANEXTRAOS = "${BASECANADIANEXTRAOS}"
 CANADIANEXTRAVENDOR = ""
 MODIFYTOS ??= "1"
@@ -36,11 +36,9 @@
     tos = d.getVar("TARGET_OS")
     whitelist = []
     extralibcs = [""]
-    if "uclibc" in d.getVar("BASECANADIANEXTRAOS"):
-        extralibcs.append("uclibc")
     if "musl" in d.getVar("BASECANADIANEXTRAOS"):
         extralibcs.append("musl")
-    for variant in ["", "spe", "x32", "eabi", "n32"]:
+    for variant in ["", "spe", "x32", "eabi", "n32", "ilp32"]:
         for libc in extralibcs:
             entry = "linux"
             if variant and libc:
@@ -80,7 +78,7 @@
         for extraos in d.getVar("BASECANADIANEXTRAOS").split():
             d.appendVar("CANADIANEXTRAOS", " " + extraos + "n32")
     if tarch == "arm" or tarch == "armeb":
-        d.appendVar("CANADIANEXTRAOS", " linux-gnueabi linux-musleabi linux-uclibceabi")
+        d.appendVar("CANADIANEXTRAOS", " linux-gnueabi linux-musleabi")
         d.setVar("TARGET_OS", "linux-gnueabi")
     else:
         d.setVar("TARGET_OS", "linux")
@@ -115,11 +113,6 @@
 HOST_LD_ARCH = "${SDK_LD_ARCH}"
 HOST_AS_ARCH = "${SDK_AS_ARCH}"
 
-TARGET_CPPFLAGS = "${BUILDSDK_CPPFLAGS}"
-TARGET_CFLAGS = "${BUILDSDK_CFLAGS}"
-TARGET_CXXFLAGS = "${BUILDSDK_CXXFLAGS}"
-TARGET_LDFLAGS = "${BUILDSDK_LDFLAGS}"
-
 #assign DPKG_ARCH
 DPKG_ARCH = "${@debian_arch_map(d.getVar('SDK_ARCH'), '')}"
 
diff --git a/import-layers/yocto-poky/meta/classes/cross.bbclass b/import-layers/yocto-poky/meta/classes/cross.bbclass
index 4feb01e..d217717 100644
--- a/import-layers/yocto-poky/meta/classes/cross.bbclass
+++ b/import-layers/yocto-poky/meta/classes/cross.bbclass
@@ -50,7 +50,7 @@
 # Path mangling needed by the cross packaging
 # Note that we use := here to ensure that libdir and includedir are
 # target paths.
-target_base_prefix := "${base_prefix}"
+target_base_prefix := "${root_prefix}"
 target_prefix := "${prefix}"
 target_exec_prefix := "${exec_prefix}"
 target_base_libdir = "${target_base_prefix}/${baselib}"
diff --git a/import-layers/yocto-poky/meta/classes/cve-check.bbclass b/import-layers/yocto-poky/meta/classes/cve-check.bbclass
index 13ec62e..bc2f03f 100644
--- a/import-layers/yocto-poky/meta/classes/cve-check.bbclass
+++ b/import-layers/yocto-poky/meta/classes/cve-check.bbclass
@@ -83,6 +83,11 @@
 
     import shutil
 
+    if d.getVar("CVE_CHECK_COPY_FILES") == "1":
+        deploy_file = os.path.join(d.getVar("CVE_CHECK_DIR"), d.getVar("PN"))
+        if os.path.exists(deploy_file):
+            bb.utils.remove(deploy_file)
+
     if os.path.exists(d.getVar("CVE_CHECK_TMP_FILE")):
         bb.note("Writing rootfs CVE manifest")
         deploy_dir = d.getVar("DEPLOY_DIR_IMAGE")
@@ -102,6 +107,7 @@
 }
 
 ROOTFS_POSTPROCESS_COMMAND_prepend = "${@'cve_check_write_rootfs_manifest; ' if d.getVar('CVE_CHECK_CREATE_MANIFEST') == '1' else ''}"
+do_rootfs[recrdeptask] += "${@'do_cve_check' if d.getVar('CVE_CHECK_CREATE_MANIFEST') == '1' else ''}"
 
 def get_patches_cves(d):
     """
@@ -112,10 +118,24 @@
 
     pn = d.getVar("PN")
     cve_match = re.compile("CVE:( CVE\-\d{4}\-\d+)+")
+
+    # Matches last CVE-1234-211432 in the file name, also if written
+    # with small letters. Not supporting multiple CVE id's in a single
+    # file name.
+    cve_file_name_match = re.compile(".*([Cc][Vv][Ee]\-\d{4}\-\d+)")
+
     patched_cves = set()
     bb.debug(2, "Looking for patches that solves CVEs for %s" % pn)
     for url in src_patches(d):
         patch_file = bb.fetch.decodeurl(url)[2]
+
+        # Check patch file name for CVE ID
+        fname_match = cve_file_name_match.search(patch_file)
+        if fname_match:
+            cve = fname_match.group(1).upper()
+            patched_cves.add(cve)
+            bb.debug(2, "Found CVE %s from patch file name %s" % (cve, patch_file))
+
         with open(patch_file, "r", encoding="utf-8") as f:
             try:
                 patch_text = f.read()
@@ -134,7 +154,7 @@
             for cve in cves.split():
                 bb.debug(2, "Patch %s solves %s" % (patch_file, cve))
                 patched_cves.add(cve)
-        else:
+        elif not fname_match:
             bb.debug(2, "Patch %s doesn't solve CVEs" % patch_file)
 
     return patched_cves
@@ -149,7 +169,7 @@
     cves_patched = []
     cves_unpatched = []
     bpn = d.getVar("CVE_PRODUCT")
-    pv = d.getVar("PV").split("git+")[0]
+    pv = d.getVar("PV").split("+git")[0]
     cves = " ".join(patched_cves)
     cve_db_dir = d.getVar("CVE_CHECK_DB_DIR")
     cve_whitelist = ast.literal_eval(d.getVar("CVE_CHECK_CVE_WHITELIST"))
@@ -171,7 +191,7 @@
             f.write("%s,%s,%s," % (bpn, pv, cves))
         cmd.append(faux)
 
-        output = subprocess.check_output(cmd, stderr=subprocess.STDOUT).decode("utf-8")
+        output = subprocess.check_output(cmd).decode("utf-8")
         bb.debug(2, "Output of command %s:\n%s" % ("\n".join(cmd), output))
     except subprocess.CalledProcessError as e:
         bb.warn("Couldn't check for CVEs: %s (output %s)" % (e, e.output))
diff --git a/import-layers/yocto-poky/meta/classes/devshell.bbclass b/import-layers/yocto-poky/meta/classes/devshell.bbclass
index 4de7ea6..fdf7dc1 100644
--- a/import-layers/yocto-poky/meta/classes/devshell.bbclass
+++ b/import-layers/yocto-poky/meta/classes/devshell.bbclass
@@ -8,14 +8,14 @@
        fakeenv = d.getVar("FAKEROOTENV").split()
        for f in fakeenv:
             k = f.split("=")
-            d.setVar(k[0], k[1])           
+            d.setVar(k[0], k[1])
             d.appendVar("OE_TERMINAL_EXPORTS", " " + k[0])
        d.delVarFlag("do_devshell", "fakeroot")
 
     oe_terminal(d.getVar('DEVSHELL'), 'OpenEmbedded Developer Shell', d)
 }
 
-addtask devshell after do_patch
+addtask devshell after do_patch do_prepare_recipe_sysroot
 
 # The directory that the terminal starts in
 DEVSHELL_STARTDIR ?= "${S}"
@@ -49,7 +49,7 @@
         old[3] = old[3] &~ termios.ECHO &~ termios.ICANON
         # &~ termios.ISIG
         termios.tcsetattr(fd, termios.TCSADRAIN, old)
-    
+
     # No echo or buffering over the pty
     noechoicanon(s)
 
@@ -145,7 +145,7 @@
     try:
         devpyshell(d)
     except SystemExit:
-        # Stop the SIGTERM above causing an error exit code    
+        # Stop the SIGTERM above causing an error exit code
         return
     finally:
         return
diff --git a/import-layers/yocto-poky/meta/classes/devtool-source.bbclass b/import-layers/yocto-poky/meta/classes/devtool-source.bbclass
new file mode 100644
index 0000000..8f5bc86
--- /dev/null
+++ b/import-layers/yocto-poky/meta/classes/devtool-source.bbclass
@@ -0,0 +1,165 @@
+# Development tool - source extraction helper class
+#
+# NOTE: this class is intended for use by devtool and should not be
+# inherited manually.
+#
+# Copyright (C) 2014-2017 Intel Corporation
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+
+
+DEVTOOL_TEMPDIR ?= ""
+DEVTOOL_PATCH_SRCDIR = "${DEVTOOL_TEMPDIR}/patchworkdir"
+
+
+python() {
+    tempdir = d.getVar('DEVTOOL_TEMPDIR')
+
+    if not tempdir:
+        bb.fatal('devtool-source class is for internal use by devtool only')
+
+    # Make a subdir so we guard against WORKDIR==S
+    workdir = os.path.join(tempdir, 'workdir')
+    d.setVar('WORKDIR', workdir)
+    if not d.getVar('S').startswith(workdir):
+        # Usually a shared workdir recipe (kernel, gcc)
+        # Try to set a reasonable default
+        if bb.data.inherits_class('kernel', d):
+            d.setVar('S', '${WORKDIR}/source')
+        else:
+            d.setVar('S', '${WORKDIR}/%s' % os.path.basename(d.getVar('S')))
+    if bb.data.inherits_class('kernel', d):
+        # We don't want to move the source to STAGING_KERNEL_DIR here
+        d.setVar('STAGING_KERNEL_DIR', '${S}')
+
+    d.setVar('STAMPS_DIR', os.path.join(tempdir, 'stamps'))
+    d.setVar('T', os.path.join(tempdir, 'temp'))
+
+    # Hook in pre/postfuncs
+    is_kernel_yocto = bb.data.inherits_class('kernel-yocto', d)
+    if is_kernel_yocto:
+        unpacktask = 'do_kernel_checkout'
+        d.appendVarFlag('do_configure', 'postfuncs', ' devtool_post_configure')
+    else:
+        unpacktask = 'do_unpack'
+    d.appendVarFlag(unpacktask, 'postfuncs', ' devtool_post_unpack')
+    d.prependVarFlag('do_patch', 'prefuncs', ' devtool_pre_patch')
+    d.appendVarFlag('do_patch', 'postfuncs', ' devtool_post_patch')
+
+    # NOTE: in order for the patch stuff to be fully functional,
+    # PATCHTOOL and PATCH_COMMIT_FUNCTIONS need to be set; we can't
+    # do that here because we can't guarantee the order of the anonymous
+    # functions, so it gets done in the bbappend we create.
+}
+
+
+python devtool_post_unpack() {
+    import oe.recipeutils
+    import shutil
+    sys.path.insert(0, os.path.join(d.getVar('COREBASE'), 'scripts', 'lib'))
+    import scriptutils
+    from devtool import setup_git_repo
+
+    tempdir = d.getVar('DEVTOOL_TEMPDIR')
+    workdir = d.getVar('WORKDIR')
+    srcsubdir = d.getVar('S')
+
+    def _move_file(src, dst):
+        """Move a file. Creates all the directory components of destination path."""
+        dst_d = os.path.dirname(dst)
+        if dst_d:
+            bb.utils.mkdirhier(dst_d)
+        shutil.move(src, dst)
+
+    def _ls_tree(directory):
+        """Recursive listing of files in a directory"""
+        ret = []
+        for root, dirs, files in os.walk(directory):
+            ret.extend([os.path.relpath(os.path.join(root, fname), directory) for
+                        fname in files])
+        return ret
+
+    # Move local source files into separate subdir
+    recipe_patches = [os.path.basename(patch) for patch in
+                        oe.recipeutils.get_recipe_patches(d)]
+    local_files = oe.recipeutils.get_recipe_local_files(d)
+
+    # Ignore local files with subdir={BP}
+    srcabspath = os.path.abspath(srcsubdir)
+    local_files = [fname for fname in local_files if
+                    os.path.exists(os.path.join(workdir, fname)) and
+                    (srcabspath == workdir or not
+                    os.path.join(workdir, fname).startswith(srcabspath +
+                        os.sep))]
+    if local_files:
+        for fname in local_files:
+            _move_file(os.path.join(workdir, fname),
+                        os.path.join(tempdir, 'oe-local-files', fname))
+        with open(os.path.join(tempdir, 'oe-local-files', '.gitignore'),
+                    'w') as f:
+            f.write('# Ignore local files, by default. Remove this file '
+                    'if you want to commit the directory to Git\n*\n')
+
+    if srcsubdir == workdir:
+        # Find non-patch non-local sources that were "unpacked" to srctree
+        # directory
+        src_files = [fname for fname in _ls_tree(workdir) if
+                        os.path.basename(fname) not in recipe_patches]
+        srcsubdir = d.getVar('DEVTOOL_PATCH_SRCDIR')
+        # Move source files to S
+        for path in src_files:
+            _move_file(os.path.join(workdir, path),
+                        os.path.join(srcsubdir, path))
+    elif os.path.dirname(srcsubdir) != workdir:
+        # Handle if S is set to a subdirectory of the source
+        srcsubdir = os.path.join(workdir, os.path.relpath(srcsubdir, workdir).split(os.sep)[0])
+
+    scriptutils.git_convert_standalone_clone(srcsubdir)
+
+    # Make sure that srcsubdir exists
+    bb.utils.mkdirhier(srcsubdir)
+    if not os.listdir(srcsubdir):
+        bb.warn("No source unpacked to S - either the %s recipe "
+                "doesn't use any source or the correct source "
+                "directory could not be determined" % d.getVar('PN'))
+
+    devbranch = d.getVar('DEVTOOL_DEVBRANCH')
+    setup_git_repo(srcsubdir, d.getVar('PV'), devbranch, d=d)
+
+    (stdout, _) = bb.process.run('git rev-parse HEAD', cwd=srcsubdir)
+    initial_rev = stdout.rstrip()
+    with open(os.path.join(tempdir, 'initial_rev'), 'w') as f:
+        f.write(initial_rev)
+
+    with open(os.path.join(tempdir, 'srcsubdir'), 'w') as f:
+        f.write(srcsubdir)
+}
+
+python devtool_pre_patch() {
+    if d.getVar('S') == d.getVar('WORKDIR'):
+        d.setVar('S', '${DEVTOOL_PATCH_SRCDIR}')
+}
+
+python devtool_post_patch() {
+    tempdir = d.getVar('DEVTOOL_TEMPDIR')
+    with open(os.path.join(tempdir, 'srcsubdir'), 'r') as f:
+        srcsubdir = f.read()
+    bb.process.run('git tag -f devtool-patched', cwd=srcsubdir)
+}
+
+python devtool_post_configure() {
+    import shutil
+    tempdir = d.getVar('DEVTOOL_TEMPDIR')
+    shutil.copy2(os.path.join(d.getVar('B'), '.config'), tempdir)
+}
diff --git a/import-layers/yocto-poky/meta/classes/distrodata.bbclass b/import-layers/yocto-poky/meta/classes/distrodata.bbclass
index 5e34441..c85f7b3 100644
--- a/import-layers/yocto-poky/meta/classes/distrodata.bbclass
+++ b/import-layers/yocto-poky/meta/classes/distrodata.bbclass
@@ -261,12 +261,44 @@
         from bb.utils import vercmp_string
         from bb.fetch2 import FetchError, NoMethodError, decodeurl
 
-        """first check whether a uri is provided"""
-        src_uri = (d.getVar('SRC_URI') or '').split()
-        if src_uri:
-            uri_type, _, _, _, _, _ = decodeurl(src_uri[0])
-        else:
-            uri_type = "none"
+        def get_upstream_version_and_status():
+
+            # Set if the upstream check reliably fails, e.g. due to absent git tags or a weird version format on our side or upstream.
+            upstream_version_unknown = localdata.getVar('UPSTREAM_VERSION_UNKNOWN')
+            # Set if the upstream check cannot be reliably performed due to transient network failures or a server behaving weirdly.
+            # This one should be used sparingly, as it completely excludes a recipe from upstream checking.
+            upstream_check_unreliable = localdata.getVar('UPSTREAM_CHECK_UNRELIABLE')
+
+            if upstream_check_unreliable == "1":
+                return "N/A", "CHECK_IS_UNRELIABLE"
+
+            try:
+                uv = oe.recipeutils.get_recipe_upstream_version(localdata)
+                pupver = uv['version'] if uv['version'] else "N/A"
+            except Exception as e:
+                pupver = "N/A"
+
+            if pupver == "N/A":
+                pstatus = "UNKNOWN" if upstream_version_unknown else "UNKNOWN_BROKEN"
+            else:
+                src_uri = (localdata.getVar('SRC_URI') or '').split()
+                if src_uri:
+                    uri_type, _, _, _, _, _ = decodeurl(src_uri[0])
+                else:
+                    uri_type = "none"
+                pv, _, _ = oe.recipeutils.get_recipe_pv_without_srcpv(pversion, uri_type)
+                upv, _, _ = oe.recipeutils.get_recipe_pv_without_srcpv(pupver, uri_type)
+
+                cmp = vercmp_string(pv, upv)
+                if cmp == -1:
+                    pstatus = "UPDATE" if not upstream_version_unknown else "KNOWN_BROKEN"
+                elif cmp == 0:
+                    pstatus = "MATCH" if not upstream_version_unknown else "KNOWN_BROKEN"
+                else:
+                    pstatus = "UNKNOWN" if upstream_version_unknown else "UNKNOWN_BROKEN"
+
+            return pupver, pstatus
+
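For context, a hedged sketch of how a recipe opts into these code paths; the two variables are exactly the ones read above, while the choice of which (if either) applies is the recipe maintainer's:

    # Version comparison cannot work for this recipe (e.g. no usable upstream tags)
    UPSTREAM_VERSION_UNKNOWN = "1"
    # Or: skip the check entirely because the upstream server misbehaves
    UPSTREAM_CHECK_UNRELIABLE = "1"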
 
         """initialize log files."""
         logpath = d.getVar('LOG_DIR')
@@ -313,34 +345,7 @@
         psrcuri = localdata.getVar('SRC_URI')
         maintainer = localdata.getVar('RECIPE_MAINTAINER')
 
-        """ Get upstream version version """
-        pupver = ""
-        pstatus = ""
-
-        try:
-            uv = oe.recipeutils.get_recipe_upstream_version(localdata)
-
-            pupver = uv['version']
-        except Exception as e:
-            if e is FetchError:
-                pstatus = "ErrAccess"
-            elif e is NoMethodError:
-                pstatus = "ErrUnsupportedProto"
-            else:
-                pstatus = "ErrUnknown"
-
-        """Set upstream version status"""
-        if not pupver:
-            pupver = "N/A"
-        else:
-            pv, _, _ = oe.recipeutils.get_recipe_pv_without_srcpv(pversion, uri_type)
-            upv, _, _ = oe.recipeutils.get_recipe_pv_without_srcpv(pupver, uri_type)
-
-            cmp = vercmp_string(pv, upv)
-            if cmp == -1:
-                pstatus = "UPDATE"
-            elif cmp == 0:
-                pstatus = "MATCH"
+        pupver, pstatus = get_upstream_version_and_status()
 
         if psrcuri:
             psrcuri = psrcuri.split()[0]
diff --git a/import-layers/yocto-poky/meta/classes/distrooverrides.bbclass b/import-layers/yocto-poky/meta/classes/distrooverrides.bbclass
new file mode 100644
index 0000000..9f4db0d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/classes/distrooverrides.bbclass
@@ -0,0 +1,32 @@
+# Turns certain DISTRO_FEATURES into overrides with the same
+# name plus a df- prefix. Ensures that these special
+# distro features remain set also for native and nativesdk
+# recipes, so that these overrides can also be used there.
+#
+# This makes it simpler to write .bbappends that only change the
+# task signatures of the recipe if the change is really enabled,
+# for example with:
+#   do_install_append_df-my-feature () { ... }
+# where "my-feature" is a DISTRO_FEATURE.
+#
+# The class is meant to be used in a layer.conf or distro
+# .inc file with:
+# INHERIT += "distrooverrides"
+# DISTRO_FEATURES_OVERRIDES += "my-feature"
+#
+# Beware that this part of OVERRIDES changes during parsing, so usage
+# of these overrides should be limited to .bb and .bbappend files,
+# because then DISTRO_FEATURES is final.
+
+DISTRO_FEATURES_OVERRIDES ?= ""
+DISTRO_FEATURES_OVERRIDES[doc] = "A space-separated list of <feature> entries. \
+Each entry is added to OVERRIDES as df-<feature> if <feature> is in DISTRO_FEATURES."
+
+DISTRO_FEATURES_FILTER_NATIVE_append = " ${DISTRO_FEATURES_OVERRIDES}"
+DISTRO_FEATURES_FILTER_NATIVESDK_append = " ${DISTRO_FEATURES_OVERRIDES}"
+
+# If DISTRO_FEATURES_OVERRIDES or DISTRO_FEATURES show up in a task
+# signature because of this line, then the task dependency on
+# OVERRIDES itself should be fixed. Excluding these two variables
+# with DISTROOVERRIDES[vardepsexclude] would just work around the problem.
+DISTROOVERRIDES .= "${@ ''.join([':df-' + x for x in sorted(set(d.getVar('DISTRO_FEATURES_OVERRIDES').split()) & set((d.getVar('DISTRO_FEATURES') or '').split()))]) }"
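Pulling the pieces of the header comment together, a minimal hedged sketch of how a layer might wire this up (the feature name and installed file are illustrative):

    # distro.conf / layer.conf
    INHERIT += "distrooverrides"
    DISTRO_FEATURES_OVERRIDES += "my-feature"

    # some-recipe.bbappend: only changes the task signature when the feature is enabled
    do_install_append_df-my-feature () {
        install -d ${D}${sysconfdir}
        echo "my-feature enabled" > ${D}${sysconfdir}/my-feature.conf
    }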
diff --git a/import-layers/yocto-poky/meta/classes/externalsrc.bbclass b/import-layers/yocto-poky/meta/classes/externalsrc.bbclass
index d64af6a..65dd13d 100644
--- a/import-layers/yocto-poky/meta/classes/externalsrc.bbclass
+++ b/import-layers/yocto-poky/meta/classes/externalsrc.bbclass
@@ -29,6 +29,12 @@
 
 python () {
     externalsrc = d.getVar('EXTERNALSRC')
+    externalsrcbuild = d.getVar('EXTERNALSRC_BUILD')
+
+    if externalsrc and not externalsrc.startswith("/"):
+        bb.error("EXTERNALSRC must be an absolute path")
+    if externalsrcbuild and not externalsrcbuild.startswith("/"):
+        bb.error("EXTERNALSRC_BUILD must be an absolute path")
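Given the new absolute-path check, a hedged local.conf sketch of values that would pass it (the recipe name and paths are illustrative):

    INHERIT += "externalsrc"
    EXTERNALSRC_pn-hello = "/home/user/src/hello"
    EXTERNALSRC_BUILD_pn-hello = "/home/user/build/hello"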
 
     # If this is the base recipe and EXTERNALSRC is set for it or any of its
     # derivatives, then enable BB_DONT_CACHE to force the recipe to always be
@@ -48,7 +54,6 @@
 
     if externalsrc:
         d.setVar('S', externalsrc)
-        externalsrcbuild = d.getVar('EXTERNALSRC_BUILD')
         if externalsrcbuild:
             d.setVar('B', externalsrcbuild)
         else:
@@ -167,6 +172,7 @@
 do_buildclean[doc] = "Call 'make clean' or equivalent in ${B}"
 externalsrc_do_buildclean() {
 	if [ -e Makefile -o -e makefile -o -e GNUmakefile ]; then
+		rm -f ${@' '.join([x.split(':')[0] for x in (d.getVar('EXTERNALSRC_SYMLINKS') or '').split()])}
 		oe_runmake clean || die "make failed"
 	else
 		bbnote "nothing to do - no makefile found"
@@ -179,14 +185,20 @@
     import tempfile
 
     s_dir = srcdir or d.getVar('EXTERNALSRC')
-    git_dir = os.path.join(s_dir, '.git')
-    oe_hash_file = os.path.join(git_dir, 'oe-devtool-tree-sha1')
+    git_dir = None
+
+    try:
+        git_dir = os.path.join(s_dir,
+            subprocess.check_output(['git', '-C', s_dir, 'rev-parse', '--git-dir']).decode("utf-8").rstrip())
+    except subprocess.CalledProcessError:
+        pass
 
     ret = " "
-    if os.path.exists(git_dir):
-        with tempfile.NamedTemporaryFile(dir=git_dir, prefix='oe-devtool-index') as tmp_index:
+    if git_dir is not None:
+        oe_hash_file = os.path.join(git_dir, 'oe-devtool-tree-sha1')
+        with tempfile.NamedTemporaryFile(prefix='oe-devtool-index') as tmp_index:
             # Clone index
-            shutil.copy2(os.path.join(git_dir, 'index'), tmp_index.name)
+            shutil.copyfile(os.path.join(git_dir, 'index'), tmp_index.name)
             # Update our custom index
             env = os.environ.copy()
             env['GIT_INDEX_FILE'] = tmp_index.name
diff --git a/import-layers/yocto-poky/meta/classes/gettext.bbclass b/import-layers/yocto-poky/meta/classes/gettext.bbclass
index 0be1424..da68e63 100644
--- a/import-layers/yocto-poky/meta/classes/gettext.bbclass
+++ b/import-layers/yocto-poky/meta/classes/gettext.bbclass
@@ -13,7 +13,12 @@
         return '--disable-nls'
     return "--enable-nls"
 
-DEPENDS_GETTEXT ??= "virtual/gettext gettext-native"
+DEPENDS_GETTEXT ??= "gettext-native"
 
-BASEDEPENDS =+ "${@gettext_dependencies(d)}"
+BASEDEPENDS_append = " ${@gettext_dependencies(d)}"
 EXTRA_OECONF_append = " ${@gettext_oeconf(d)}"
+
+# Without this, msgfmt from gettext-native will not find ITS files
+# provided by target recipes (for example, polkit.its).
+GETTEXTDATADIRS_append_class-target = ":${STAGING_DATADIR}/gettext"
+export GETTEXTDATADIRS
diff --git a/import-layers/yocto-poky/meta/classes/gnomebase.bbclass b/import-layers/yocto-poky/meta/classes/gnomebase.bbclass
index 54aa45f..4ccc8e0 100644
--- a/import-layers/yocto-poky/meta/classes/gnomebase.bbclass
+++ b/import-layers/yocto-poky/meta/classes/gnomebase.bbclass
@@ -14,6 +14,8 @@
                 ${datadir}/polkit* \
                 ${datadir}/GConf \
                 ${datadir}/glib-2.0/schemas \
+                ${datadir}/appdata \
+                ${datadir}/icons \
 "
 
 FILES_${PN}-doc += "${datadir}/devhelp"
diff --git a/import-layers/yocto-poky/meta/classes/go.bbclass b/import-layers/yocto-poky/meta/classes/go.bbclass
index 85f71a2..09b01a8 100644
--- a/import-layers/yocto-poky/meta/classes/go.bbclass
+++ b/import-layers/yocto-poky/meta/classes/go.bbclass
@@ -1,77 +1,182 @@
-inherit goarch
+inherit goarch ptest
 
-# x32 ABI is not supported on go compiler so far
-COMPATIBLE_HOST_linux-gnux32 = "null"
-# ppc32 is not supported in go compilers
-COMPATIBLE_HOST_powerpc = "null"
+def get_go_parallel_make(d):
+    pm = (d.getVar('PARALLEL_MAKE') or '').split()
+    # look for '-j' and throw other options (e.g. '-l') away
+    # because they might have a different meaning in golang
+    while pm:
+        opt = pm.pop(0)
+        if opt == '-j':
+            v = pm.pop(0)
+        elif opt.startswith('-j'):
+            v = opt[2:].strip()
+        else:
+            continue
+
+        return '-p %d' % int(v)
+
+    return ""
+
+GO_PARALLEL_BUILD ?= "${@get_go_parallel_make(d)}"
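A worked example of the mapping performed by get_go_parallel_make (the PARALLEL_MAKE values are illustrative):

    # PARALLEL_MAKE = "-j 8 -l 16"  ->  GO_PARALLEL_BUILD = "-p 8"
    # PARALLEL_MAKE = "-j16"        ->  GO_PARALLEL_BUILD = "-p 16"
    # PARALLEL_MAKE = ""            ->  GO_PARALLEL_BUILD = ""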
 
 GOROOT_class-native = "${STAGING_LIBDIR_NATIVE}/go"
-GOROOT = "${STAGING_LIBDIR_NATIVE}/${TARGET_SYS}/go"
-GOBIN_FINAL_class-native = "${GOROOT_FINAL}/bin"
-GOBIN_FINAL = "${GOROOT_FINAL}/bin/${GOOS}_${GOARCH}"
-
-export GOOS = "${TARGET_GOOS}"
-export GOARCH = "${TARGET_GOARCH}"
-export GOARM = "${TARGET_GOARM}"
-export CGO_ENABLED = "1"
+GOROOT_class-nativesdk = "${STAGING_DIR_TARGET}${libdir}/go"
+GOROOT = "${STAGING_LIBDIR}/go"
 export GOROOT
-export GOROOT_FINAL = "${libdir}/${TARGET_SYS}/go"
-export GOBIN_FINAL
-export GOPKG_FINAL = "${GOROOT_FINAL}/pkg/${GOOS}_${GOARCH}"
-export GOSRC_FINAL = "${GOROOT_FINAL}/src"
-export GO_GCFLAGS = "${TARGET_CFLAGS}"
-export GO_LDFLAGS = "${TARGET_LDFLAGS}"
-export CGO_CFLAGS = "${TARGET_CC_ARCH}${TOOLCHAIN_OPTIONS} ${TARGET_CFLAGS}"
-export CGO_CPPFLAGS = "${TARGET_CPPFLAGS}"
-export CGO_CXXFLAGS = "${TARGET_CC_ARCH}${TOOLCHAIN_OPTIONS} ${TARGET_CXXFLAGS}"
-export CGO_LDFLAGS = "${TARGET_CC_ARCH}${TOOLCHAIN_OPTIONS} ${TARGET_LDFLAGS}"
+export GOROOT_FINAL = "${libdir}/go"
 
-DEPENDS += "go-cross-${TARGET_ARCH}"
-DEPENDS_class-native += "go-native"
+DEPENDS_GOLANG_class-target = "virtual/${TARGET_PREFIX}go virtual/${TARGET_PREFIX}go-runtime"
+DEPENDS_GOLANG_class-native = "go-native"
+DEPENDS_GOLANG_class-nativesdk = "virtual/${TARGET_PREFIX}go-crosssdk virtual/${TARGET_PREFIX}go-runtime"
 
-FILES_${PN}-staticdev += "${GOSRC_FINAL}/${GO_IMPORT}"
-FILES_${PN}-staticdev += "${GOPKG_FINAL}/${GO_IMPORT}*"
+DEPENDS_append = " ${DEPENDS_GOLANG}"
+
+GO_LINKSHARED ?= "${@'-linkshared' if d.getVar('GO_DYNLINK') else ''}"
+GO_RPATH_LINK = "${@'-Wl,-rpath-link=${STAGING_DIR_TARGET}${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink' if d.getVar('GO_DYNLINK') else ''}"
+GO_RPATH = "${@'-r ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink' if d.getVar('GO_DYNLINK') else ''}"
+GO_RPATH_class-native = "${@'-r ${STAGING_LIBDIR_NATIVE}/go/pkg/${TARGET_GOTUPLE}_dynlink' if d.getVar('GO_DYNLINK') else ''}"
+GO_RPATH_LINK_class-native = "${@'-Wl,-rpath-link=${STAGING_LIBDIR_NATIVE}/go/pkg/${TARGET_GOTUPLE}_dynlink' if d.getVar('GO_DYNLINK') else ''}"
+GO_EXTLDFLAGS ?= "${HOST_CC_ARCH}${TOOLCHAIN_OPTIONS} ${GO_RPATH_LINK} ${LDFLAGS}"
+GO_LINKMODE ?= ""
+GO_LINKMODE_class-nativesdk = "--linkmode=external"
+GO_LDFLAGS ?= '-ldflags="${GO_RPATH} ${GO_LINKMODE} -extldflags '${GO_EXTLDFLAGS}'"'
+export GOBUILDFLAGS ?= "-v ${GO_LDFLAGS}"
+export GOPTESTBUILDFLAGS ?= "${GOBUILDFLAGS} -c"
+export GOPTESTFLAGS ?= "-test.v"
+GOBUILDFLAGS_prepend_task-compile = "${GO_PARALLEL_BUILD} "
+
+export GO = "${HOST_PREFIX}go"
+GOTOOLDIR = "${STAGING_LIBDIR_NATIVE}/${TARGET_SYS}/go/pkg/tool/${BUILD_GOTUPLE}"
+GOTOOLDIR_class-native = "${STAGING_LIBDIR_NATIVE}/go/pkg/tool/${BUILD_GOTUPLE}"
+export GOTOOLDIR
+
+SECURITY_CFLAGS = "${SECURITY_NOPIE_CFLAGS}"
+SECURITY_LDFLAGS = ""
+
+export CGO_ENABLED ?= "1"
+export CGO_CFLAGS ?= "${CFLAGS}"
+export CGO_CPPFLAGS ?= "${CPPFLAGS}"
+export CGO_CXXFLAGS ?= "${CXXFLAGS}"
+export CGO_LDFLAGS ?= "${LDFLAGS}"
 
 GO_INSTALL ?= "${GO_IMPORT}/..."
+GO_INSTALL_FILTEROUT ?= "${GO_IMPORT}/vendor/"
 
-do_go_compile() {
-	GOPATH=${S}:${STAGING_LIBDIR}/${TARGET_SYS}/go go env
+B = "${WORKDIR}/build"
+export GOPATH = "${B}"
+GO_TMPDIR ?= "${WORKDIR}/go-tmp"
+GO_TMPDIR[vardepvalue] = ""
+
+python go_do_unpack() {
+    src_uri = (d.getVar('SRC_URI') or "").split()
+    if len(src_uri) == 0:
+        return
+
+    try:
+        fetcher = bb.fetch2.Fetch(src_uri, d)
+        for url in fetcher.urls:
+            if fetcher.ud[url].type == 'git':
+                if fetcher.ud[url].parm.get('destsuffix') is None:
+                    s_dirname = os.path.basename(d.getVar('S'))
+                    fetcher.ud[url].parm['destsuffix'] = os.path.join(s_dirname, 'src',
+                                                                      d.getVar('GO_IMPORT')) + '/'
+        fetcher.unpack(d.getVar('WORKDIR'))
+    except bb.fetch2.BBFetchException as e:
+        raise bb.build.FuncFailed(e)
+}
+
+go_list_packages() {
+	${GO} list -f '{{.ImportPath}}' ${GOBUILDFLAGS} ${GO_INSTALL} | \
+		egrep -v '${GO_INSTALL_FILTEROUT}'
+}
+
+go_list_package_tests() {
+    ${GO} list -f '{{.ImportPath}} {{.TestGoFiles}}' ${GOBUILDFLAGS} ${GO_INSTALL} | \
+		grep -v '\[\]$' | \
+		egrep -v '${GO_INSTALL_FILTEROUT}' | \
+		awk '{ print $1 }'
+}
+
+go_do_configure() {
+	ln -snf ${S}/src ${B}/
+}
+
+go_do_compile() {
+	export TMPDIR="${GO_TMPDIR}"
+	${GO} env
 	if [ -n "${GO_INSTALL}" ]; then
-		GOPATH=${S}:${STAGING_LIBDIR}/${TARGET_SYS}/go go install -v ${GO_INSTALL}
+		${GO} install ${GO_LINKSHARED} ${GOBUILDFLAGS} `go_list_packages`
+	fi
+}
+do_compile[dirs] =+ "${GO_TMPDIR}"
+do_compile[cleandirs] = "${B}/bin ${B}/pkg"
+
+do_compile_ptest() {
+    export TMPDIR="${GO_TMPDIR}"
+    rm -f ${B}/.go_compiled_tests.list
+	go_list_package_tests | while read pkg; do
+		cd ${B}/src/$pkg
+		${GO} test ${GOPTESTBUILDFLAGS} $pkg
+		find . -mindepth 1 -maxdepth 1 -type f -name '*.test' -exec echo $pkg/{} \; | \
+			sed -e's,/\./,/,'>> ${B}/.go_compiled_tests.list
+	done
+}
+do_compile_ptest_base[dirs] =+ "${GO_TMPDIR}"
+
+go_do_install() {
+	install -d ${D}${libdir}/go/src/${GO_IMPORT}
+	tar -C ${S}/src/${GO_IMPORT} -cf - --exclude-vcs --exclude '*.test' . | \
+		tar -C ${D}${libdir}/go/src/${GO_IMPORT} --no-same-owner -xf -
+	tar -C ${B} -cf - pkg | tar -C ${D}${libdir}/go --no-same-owner -xf -
+
+	if [ -n "`ls ${B}/${GO_BUILD_BINDIR}/`" ]; then
+		install -d ${D}${bindir}
+		install -m 0755 ${B}/${GO_BUILD_BINDIR}/* ${D}${bindir}/
 	fi
 }
 
-do_go_install() {
-	rm -rf ${WORKDIR}/staging
-	install -d ${WORKDIR}/staging${GOROOT_FINAL} ${D}${GOROOT_FINAL}
-	tar -C ${S} -cf - . | tar -C ${WORKDIR}/staging${GOROOT_FINAL} -xpvf -
-
-	find ${WORKDIR}/staging${GOROOT_FINAL} \( \
-		-name \*.indirectionsymlink -o \
-		-name .git\* -o                \
-		-name .hg -o                   \
-		-name .svn -o                  \
-		-name .pc\* -o                 \
-		-name patches\*                \
-		\) -print0 | \
-	xargs -r0 rm -rf
-
-	tar -C ${WORKDIR}/staging${GOROOT_FINAL} -cf - . | \
-	tar -C ${D}${GOROOT_FINAL} -xpvf -
-
-	chown -R root:root "${D}${GOROOT_FINAL}"
-
-	if [ -e "${D}${GOBIN_FINAL}" ]; then
-		install -d -m 0755 "${D}${bindir}"
-		find "${D}${GOBIN_FINAL}" ! -type d -print0 | xargs -r0 mv --target-directory="${D}${bindir}"
-		rmdir -p "${D}${GOBIN_FINAL}" || true
-	fi
+do_install_ptest_base() {
+set -x
+    test -f "${B}/.go_compiled_tests.list" || exit 0
+    tests=""
+    while read test; do
+        tests="$tests${tests:+ }${test%.test}"
+        testdir=`dirname $test`
+        install -d ${D}${PTEST_PATH}/$testdir
+        install -m 0755 ${B}/src/$test ${D}${PTEST_PATH}/$test
+        if [ -d "${B}/src/$testdir/testdata" ]; then
+            cp --preserve=mode,timestamps -R "${B}/src/$testdir/testdata" ${D}${PTEST_PATH}/$testdir
+        fi
+    done < ${B}/.go_compiled_tests.list
+    if [ -n "$tests" ]; then
+        install -d ${D}${PTEST_PATH}
+        cat >${D}${PTEST_PATH}/run-ptest <<EOF
+#!/bin/sh
+ANYFAILED=0
+for t in $tests; do
+    testdir=\`dirname \$t.test\`
+    if ( cd "${PTEST_PATH}/\$testdir"; "${PTEST_PATH}/\$t.test" ${GOPTESTFLAGS} | tee /dev/fd/9 | grep -q "^FAIL" ) 9>&1; then
+        ANYFAILED=1
+    fi
+done
+if [ \$ANYFAILED -ne 0 ]; then
+    echo "FAIL: ${PN}"
+    exit 1
+fi
+echo "PASS: ${PN}"
+exit 0
+EOF
+        chmod +x ${D}${PTEST_PATH}/run-ptest
+    else
+        rm -rf ${D}${PTEST_PATH}
+    fi
+set +x
 }
 
-do_compile() {
-	do_go_compile
-}
+EXPORT_FUNCTIONS do_unpack do_configure do_compile do_install
 
-do_install() {
-	do_go_install
-}
+FILES_${PN}-dev = "${libdir}/go/src"
+FILES_${PN}-staticdev = "${libdir}/go/pkg"
+
+INSANE_SKIP_${PN} += "ldflags"
+INSANE_SKIP_${PN}-ptest += "ldflags"
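For orientation, a minimal hedged sketch of a recipe consuming this class (import path, SRC_URI and revision are placeholders, not taken from this change):

    GO_IMPORT = "github.com/example/hello"
    SRC_URI = "git://${GO_IMPORT};protocol=https"
    SRCREV = "${AUTOREV}"
    S = "${WORKDIR}/git"

    inherit go

    # GO_INSTALL defaults to "${GO_IMPORT}/..." so everything under the import
    # path is built, with vendor/ filtered out via GO_INSTALL_FILTEROUT.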
diff --git a/import-layers/yocto-poky/meta/classes/goarch.bbclass b/import-layers/yocto-poky/meta/classes/goarch.bbclass
index 12df88f..663c9ff 100644
--- a/import-layers/yocto-poky/meta/classes/goarch.bbclass
+++ b/import-layers/yocto-poky/meta/classes/goarch.bbclass
@@ -1,15 +1,37 @@
-BUILD_GOOS = "${@go_map_os(d.getVar('BUILD_OS', True), d)}"
-BUILD_GOARCH = "${@go_map_arch(d.getVar('BUILD_ARCH', True), d)}"
+BUILD_GOOS = "${@go_map_os(d.getVar('BUILD_OS'), d)}"
+BUILD_GOARCH = "${@go_map_arch(d.getVar('BUILD_ARCH'), d)}"
 BUILD_GOTUPLE = "${BUILD_GOOS}_${BUILD_GOARCH}"
-HOST_GOOS = "${@go_map_os(d.getVar('HOST_OS', True), d)}"
-HOST_GOARCH = "${@go_map_arch(d.getVar('HOST_ARCH', True), d)}"
-HOST_GOARM = "${@go_map_arm(d.getVar('HOST_ARCH', True), d.getVar('TUNE_FEATURES', True), d)}"
+HOST_GOOS = "${@go_map_os(d.getVar('HOST_OS'), d)}"
+HOST_GOARCH = "${@go_map_arch(d.getVar('HOST_ARCH'), d)}"
+HOST_GOARM = "${@go_map_arm(d.getVar('HOST_ARCH'), d.getVar('TUNE_FEATURES'), d)}"
+HOST_GO386 = "${@go_map_386(d.getVar('HOST_ARCH'), d.getVar('TUNE_FEATURES'), d)}"
 HOST_GOTUPLE = "${HOST_GOOS}_${HOST_GOARCH}"
-TARGET_GOOS = "${@go_map_os(d.getVar('TARGET_OS', True), d)}"
-TARGET_GOARCH = "${@go_map_arch(d.getVar('TARGET_ARCH', True), d)}"
-TARGET_GOARM = "${@go_map_arm(d.getVar('TARGET_ARCH', True), d.getVar('TUNE_FEATURES', True), d)}"
+TARGET_GOOS = "${@go_map_os(d.getVar('TARGET_OS'), d)}"
+TARGET_GOARCH = "${@go_map_arch(d.getVar('TARGET_ARCH'), d)}"
+TARGET_GOARM = "${@go_map_arm(d.getVar('TARGET_ARCH'), d.getVar('TUNE_FEATURES'), d)}"
+TARGET_GO386 = "${@go_map_386(d.getVar('TARGET_ARCH'), d.getVar('TUNE_FEATURES'), d)}"
 TARGET_GOTUPLE = "${TARGET_GOOS}_${TARGET_GOARCH}"
-GO_BUILD_BINDIR = "${@['bin/${HOST_GOTUPLE}','bin'][d.getVar('BUILD_GOTUPLE',True) == d.getVar('HOST_GOTUPLE',True)]}"
+GO_BUILD_BINDIR = "${@['bin/${HOST_GOTUPLE}','bin'][d.getVar('BUILD_GOTUPLE') == d.getVar('HOST_GOTUPLE')]}"
+
+# Go supports dynamic linking on a limited set of architectures.
+# See the supportsDynlink function in go/src/cmd/compile/internal/gc/main.go
+GO_DYNLINK = ""
+GO_DYNLINK_arm = "1"
+GO_DYNLINK_aarch64 = "1"
+GO_DYNLINK_x86 = "1"
+GO_DYNLINK_x86-64 = "1"
+GO_DYNLINK_powerpc64 = "1"
+GO_DYNLINK_class-native = ""
+
+# define here because everybody inherits this class
+#
+COMPATIBLE_HOST_linux-gnux32 = "null"
+COMPATIBLE_HOST_linux-muslx32 = "null"
+COMPATIBLE_HOST_powerpc = "null"
+COMPATIBLE_HOST_powerpc64 = "null"
+COMPATIBLE_HOST_mipsarchn32 = "null"
+ARM_INSTRUCTION_SET = "arm"
+TUNE_CCARGS_remove = "-march=mips32r2"
 
 def go_map_arch(a, d):
     import re
@@ -21,14 +43,14 @@
         return 'arm'
     elif re.match('aarch64.*', a):
         return 'arm64'
-    elif re.match('mips64el*', a):
+    elif re.match('mips64el.*', a):
         return 'mips64le'
-    elif re.match('mips64*', a):
+    elif re.match('mips64.*', a):
         return 'mips64'
-    elif re.match('mipsel*', a):
-        return 'mipsle'
-    elif re.match('mips*', a):
+    elif a == 'mips':
         return 'mips'
+    elif a == 'mipsel':
+        return 'mipsle'
     elif re.match('p(pc|owerpc)(64)', a):
         return 'ppc64'
     elif re.match('p(pc|owerpc)(64el)', a):
@@ -43,6 +65,17 @@
             return '7'
         elif 'armv6' in f:
             return '6'
+        elif 'armv5' in f:
+            return '5'
+    return ''
+
+def go_map_386(a, f, d):
+    import re
+    if re.match('i.86', a):
+        if ('core2' in f) or ('corei7' in f):
+            return 'sse2'
+        else:
+            return '387'
     return ''
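A worked example of the go_map_386 mapping above (the arch and tune values are illustrative):

    # TARGET_ARCH = "i586", TUNE_FEATURES = "m32 i586"   ->  TARGET_GO386 = "387"
    # TARGET_ARCH = "i686", TUNE_FEATURES = "m32 core2"  ->  TARGET_GO386 = "sse2"
    # TARGET_ARCH = "x86_64"                             ->  TARGET_GO386 = ""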
 
 def go_map_os(o, d):
diff --git a/import-layers/yocto-poky/meta/classes/grub-efi.bbclass b/import-layers/yocto-poky/meta/classes/grub-efi.bbclass
index df7fe18..610479b 100644
--- a/import-layers/yocto-poky/meta/classes/grub-efi.bbclass
+++ b/import-layers/yocto-poky/meta/classes/grub-efi.bbclass
@@ -17,7 +17,6 @@
 # ${GRUB_ROOT} - grub's root device.
 
 do_bootimg[depends] += "${MLPREFIX}grub-efi:do_deploy"
-do_bootdirectdisk[depends] += "${MLPREFIX}grub-efi:do_deploy"
 
 GRUB_SERIAL ?= "console=ttyS0,115200"
 GRUB_CFG_VM = "${S}/grub_vm.cfg"
diff --git a/import-layers/yocto-poky/meta/classes/gtk-doc.bbclass b/import-layers/yocto-poky/meta/classes/gtk-doc.bbclass
index 0ae2729..5201c71 100644
--- a/import-layers/yocto-poky/meta/classes/gtk-doc.bbclass
+++ b/import-layers/yocto-poky/meta/classes/gtk-doc.bbclass
@@ -48,6 +48,7 @@
 # which may then get deleted (or their dependencies) and potentially segfault
 export GIO_MODULE_DIR=${STAGING_LIBDIR}/gio/modules-dummy
 
+GIR_EXTRA_LIBS_PATH=\`find ${B} -name "*.so" -printf "%h\n"|sort|uniq| tr '\n' ':'\`\$GIR_EXTRA_LIBS_PATH
 GIR_EXTRA_LIBS_PATH=\`find ${B} -name .libs| tr '\n' ':'\`\$GIR_EXTRA_LIBS_PATH
 
 if [ -d ".libs" ]; then
diff --git a/import-layers/yocto-poky/meta/classes/icecc.bbclass b/import-layers/yocto-poky/meta/classes/icecc.bbclass
index 77bf611..1cc1c4d 100644
--- a/import-layers/yocto-poky/meta/classes/icecc.bbclass
+++ b/import-layers/yocto-poky/meta/classes/icecc.bbclass
@@ -116,7 +116,7 @@
     # for one reason or the other
     # this is the old list (which doesn't seem to be valid anymore, because I was able to build
     # all these with icecc enabled)
-    # system_package_blacklist = [ "uclibc", "glibc", "gcc", "bind", "u-boot", "dhcp-forwarder", "enchant", "connman", "orbit2" ]
+    # system_package_blacklist = [ "glibc", "gcc", "bind", "u-boot", "dhcp-forwarder", "enchant", "connman", "orbit2" ]
     # when adding new entry, please document why (how it failed) so that we can re-evaluate it later
     # e.g. when there is new version
     # building libgcc-initial with icecc fails with CPP sanity check error if host sysroot contains cross gcc built for another target tune/variant
diff --git a/import-layers/yocto-poky/meta/classes/image-live.bbclass b/import-layers/yocto-poky/meta/classes/image-live.bbclass
index a3d1b4e..1623c15 100644
--- a/import-layers/yocto-poky/meta/classes/image-live.bbclass
+++ b/import-layers/yocto-poky/meta/classes/image-live.bbclass
@@ -34,20 +34,21 @@
                         ${MLPREFIX}syslinux:do_populate_sysroot \
                         syslinux-native:do_populate_sysroot \
                         ${@oe.utils.ifelse(d.getVar('COMPRESSISO', False),'zisofs-tools-native:do_populate_sysroot','')} \
-                        ${PN}:do_image_ext4 \
+                        ${PN}:do_image_${@d.getVar('LIVE_ROOTFS_TYPE').replace('-', '_')} \
                         "
 
 
 LABELS_LIVE ?= "boot install"
 ROOT_LIVE ?= "root=/dev/ram0"
-INITRD_IMAGE_LIVE ?= "core-image-minimal-initramfs"
+INITRD_IMAGE_LIVE ?= "${MLPREFIX}core-image-minimal-initramfs"
 INITRD_LIVE ?= "${DEPLOY_DIR_IMAGE}/${INITRD_IMAGE_LIVE}-${MACHINE}.cpio.gz"
 
-ROOTFS ?= "${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.ext4"
+LIVE_ROOTFS_TYPE ?= "ext4"
+ROOTFS ?= "${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.${LIVE_ROOTFS_TYPE}"
 
-IMAGE_TYPEDEP_live = "ext4"
-IMAGE_TYPEDEP_iso = "ext4"
-IMAGE_TYPEDEP_hddimg = "ext4"
+IMAGE_TYPEDEP_live = "${LIVE_ROOTFS_TYPE}"
+IMAGE_TYPEDEP_iso = "${LIVE_ROOTFS_TYPE}"
+IMAGE_TYPEDEP_hddimg = "${LIVE_ROOTFS_TYPE}"
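With the live rootfs type now configurable, a hedged local.conf sketch (the value is illustrative):

    # Build the live/iso/hddimg payload from an ext2 rootfs instead of the ext4 default
    LIVE_ROOTFS_TYPE = "ext2"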
 IMAGE_TYPES_MASKED += "live hddimg iso"
 
 python() {
@@ -91,7 +92,7 @@
 	for fs in ${INITRD}
 	do
 		if [ ! -s "$fs" ]; then
-			bbnote "ISO image will not be created. $fs is invalid."
+			bbwarn "ISO image will not be created. $fs is invalid."
 			return
 		fi
 	done
@@ -216,10 +217,10 @@
 	fi
 
 	if [ -z "${HDDIMG_ID}" ]; then
-		mkdosfs ${FATSIZE} -n ${BOOTIMG_VOLUME_ID} -S 512 -C ${FATIMG} \
+		mkdosfs ${FATSIZE} -n ${BOOTIMG_VOLUME_ID} ${MKDOSFS_EXTRAOPTS} -C ${FATIMG} \
 			${BLOCKS}
 	else
-		mkdosfs ${FATSIZE} -n ${BOOTIMG_VOLUME_ID} -S 512 -C ${FATIMG} \
+		mkdosfs ${FATSIZE} -n ${BOOTIMG_VOLUME_ID} ${MKDOSFS_EXTRAOPTS} -C ${FATIMG} \
 		${BLOCKS} -i ${HDDIMG_ID}
 	fi
 
diff --git a/import-layers/yocto-poky/meta/classes/image-prelink.bbclass b/import-layers/yocto-poky/meta/classes/image-prelink.bbclass
index 4157df0..f3bb68b 100644
--- a/import-layers/yocto-poky/meta/classes/image-prelink.bbclass
+++ b/import-layers/yocto-poky/meta/classes/image-prelink.bbclass
@@ -1,6 +1,6 @@
 do_rootfs[depends] += "prelink-native:do_populate_sysroot"
 
-IMAGE_PREPROCESS_COMMAND += "prelink_setup; prelink_image; "
+IMAGE_PREPROCESS_COMMAND_append_libc-glibc = " prelink_setup; prelink_image; "
 
 python prelink_setup () {
     oe.utils.write_ld_so_conf(d)
@@ -36,7 +36,17 @@
 	dynamic_loader=$(linuxloader)
 
 	# prelink!
-	${STAGING_SBINDIR_NATIVE}/prelink --root ${IMAGE_ROOTFS} -amR -N -c ${sysconfdir}/prelink.conf --dynamic-linker $dynamic_loader
+	if [ "$BUILD_REPRODUCIBLE_BINARIES" = "1" ]; then
+		bbnote " prelink: BUILD_REPRODUCIBLE_BINARIES..."
+		if [ "$REPRODUCIBLE_TIMESTAMP_ROOTFS" = "" ]; then
+			export PRELINK_TIMESTAMP=`git log -1 --pretty=%ct `
+		else
+			export PRELINK_TIMESTAMP=$REPRODUCIBLE_TIMESTAMP_ROOTFS
+		fi
+		${STAGING_SBINDIR_NATIVE}/prelink --root ${IMAGE_ROOTFS} -am -N -c ${sysconfdir}/prelink.conf --dynamic-linker $dynamic_loader
+	else
+		${STAGING_SBINDIR_NATIVE}/prelink --root ${IMAGE_ROOTFS} -amR -N -c ${sysconfdir}/prelink.conf --dynamic-linker $dynamic_loader
+	fi
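A hedged sketch of the knobs the branch above reads (how and where they get set and exported lies outside this hunk; the timestamp is an arbitrary example epoch):

    BUILD_REPRODUCIBLE_BINARIES = "1"
    # Optional pin; otherwise the code above falls back to `git log -1 --pretty=%ct`
    REPRODUCIBLE_TIMESTAMP_ROOTFS = "1506960000"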
 
 	# Remove the prelink.conf if we had to add it.
 	if [ "$dummy_prelink_conf" = "true" ]; then
diff --git a/import-layers/yocto-poky/meta/classes/image-vm.bbclass b/import-layers/yocto-poky/meta/classes/image-vm.bbclass
deleted file mode 100644
index 98bd920..0000000
--- a/import-layers/yocto-poky/meta/classes/image-vm.bbclass
+++ /dev/null
@@ -1,171 +0,0 @@
-# image-vm.bbclass
-# (loosly based off image-live.bbclass Copyright (C) 2004, Advanced Micro Devices, Inc.)
-#
-# Create an image which can be placed directly onto a harddisk using dd and then
-# booted.
-#
-# This uses syslinux. extlinux would have been nice but required the ext2/3
-# partition to be mounted. grub requires to run itself as part of the install
-# process.
-#
-# The end result is a 512 boot sector populated with an MBR and partition table
-# followed by an msdos fat16 partition containing syslinux and a linux kernel
-# completed by the ext2/3 rootfs.
-#
-# We have to push the msdos parition table size > 16MB so fat 16 is used as parted
-# won't touch fat12 partitions.
-
-inherit live-vm-common
-
-do_bootdirectdisk[depends] += "dosfstools-native:do_populate_sysroot \
-                               virtual/kernel:do_deploy \
-                               syslinux:do_populate_sysroot \
-                               syslinux-native:do_populate_sysroot \
-                               parted-native:do_populate_sysroot \
-                               mtools-native:do_populate_sysroot \
-                               ${PN}:do_image_${VM_ROOTFS_TYPE} \
-                               "
-
-IMAGE_TYPEDEP_vmdk = "${VM_ROOTFS_TYPE}"
-IMAGE_TYPEDEP_vdi = "${VM_ROOTFS_TYPE}"
-IMAGE_TYPEDEP_qcow2 = "${VM_ROOTFS_TYPE}"
-IMAGE_TYPEDEP_hdddirect = "${VM_ROOTFS_TYPE}"
-IMAGE_TYPES_MASKED += "vmdk vdi qcow2 hdddirect"
-
-VM_ROOTFS_TYPE ?= "ext4"
-ROOTFS ?= "${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.${VM_ROOTFS_TYPE}"
-
-# Used by bootloader
-LABELS_VM ?= "boot"
-ROOT_VM ?= "root=/dev/sda2"
-# Using an initramfs is optional. Enable it by setting INITRD_IMAGE_VM.
-INITRD_IMAGE_VM ?= ""
-INITRD_VM ?= "${@'${IMGDEPLOYDIR}/${INITRD_IMAGE_VM}-${MACHINE}.cpio.gz' if '${INITRD_IMAGE_VM}' else ''}"
-do_bootdirectdisk[depends] += "${@'${INITRD_IMAGE_VM}:do_image_complete' if '${INITRD_IMAGE_VM}' else ''}"
-
-BOOTDD_VOLUME_ID   ?= "boot"
-BOOTDD_EXTRA_SPACE ?= "16384"
-
-DISK_SIGNATURE ?= "${DISK_SIGNATURE_GENERATED}"
-DISK_SIGNATURE[vardepsexclude] = "DISK_SIGNATURE_GENERATED"
-
-build_boot_dd() {
-	HDDDIR="${S}/hdd/boot"
-	HDDIMG="${S}/hdd.image"
-	IMAGE=${IMGDEPLOYDIR}/${IMAGE_NAME}.hdddirect
-
-	populate_kernel $HDDDIR
-
-	if [ "${PCBIOS}" = "1" ]; then
-		syslinux_hddimg_populate $HDDDIR
-	fi
-	if [ "${EFI}" = "1" ]; then
-		efi_hddimg_populate $HDDDIR
-	fi
-
-	BLOCKS=`du -bks $HDDDIR | cut -f 1`
-	BLOCKS=`expr $BLOCKS + ${BOOTDD_EXTRA_SPACE}`
-
-	# Remove it since mkdosfs would fail when it exists
-	rm -f $HDDIMG
-	mkdosfs -n ${BOOTDD_VOLUME_ID} -S 512 -C $HDDIMG $BLOCKS 
-	mcopy -i $HDDIMG -s $HDDDIR/* ::/
-
-	if [ "${PCBIOS}" = "1" ]; then
-		syslinux_hdddirect_install $HDDIMG
-	fi	
-	chmod 644 $HDDIMG
-
-	ROOTFSBLOCKS=`du -Lbks ${ROOTFS} | cut -f 1`
-	TOTALSIZE=`expr $BLOCKS + $ROOTFSBLOCKS`
-	END1=`expr $BLOCKS \* 1024`
-	END2=`expr $END1 + 512`
-	END3=`expr \( $ROOTFSBLOCKS \* 1024 \) + $END1`
-
-	echo $ROOTFSBLOCKS $TOTALSIZE $END1 $END2 $END3
-	rm -rf $IMAGE
-	dd if=/dev/zero of=$IMAGE bs=1024 seek=$TOTALSIZE count=1
-
-	parted $IMAGE mklabel msdos
-	parted $IMAGE mkpart primary fat16 0 ${END1}B
-	parted $IMAGE unit B mkpart primary ext2 ${END2}B ${END3}B
-	parted $IMAGE set 1 boot on 
-
-	parted $IMAGE print
-
-	awk "BEGIN { printf \"$(echo ${DISK_SIGNATURE} | sed 's/\(..\)\(..\)\(..\)\(..\)/\\x\4\\x\3\\x\2\\x\1/')\" }" | \
-		dd of=$IMAGE bs=1 seek=440 conv=notrunc
-
-	OFFSET=`expr $END2 / 512`
-	if [ "${PCBIOS}" = "1" ]; then
-		dd if=${STAGING_DATADIR}/syslinux/mbr.bin of=$IMAGE conv=notrunc
-	fi
-
-	dd if=$HDDIMG of=$IMAGE conv=notrunc seek=1 bs=512
-	dd if=${ROOTFS} of=$IMAGE conv=notrunc seek=$OFFSET bs=512
-
-	cd ${IMGDEPLOYDIR}
-
-	ln -sf ${IMAGE_NAME}.hdddirect ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.hdddirect
-} 
-
-python do_bootdirectdisk() {
-    validate_disk_signature(d)
-    set_live_vm_vars(d, 'VM')
-    if d.getVar("PCBIOS") == "1":
-        bb.build.exec_func('build_syslinux_cfg', d)
-    if d.getVar("EFI") == "1":
-        bb.build.exec_func('build_efi_cfg', d)
-    bb.build.exec_func('build_boot_dd', d)
-}
-
-def generate_disk_signature():
-    import uuid
-
-    signature = str(uuid.uuid4())[:8]
-
-    if signature != '00000000':
-        return signature
-    else:
-        return 'ffffffff'
-
-def validate_disk_signature(d):
-    import re
-
-    disk_signature = d.getVar("DISK_SIGNATURE")
-
-    if not re.match(r'^[0-9a-fA-F]{8}$', disk_signature):
-        bb.fatal("DISK_SIGNATURE '%s' must be an 8 digit hex string" % disk_signature)
-
-DISK_SIGNATURE_GENERATED := "${@generate_disk_signature()}"
-
-run_qemu_img (){
-    type="$1"
-    qemu-img convert -O $type ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.hdddirect ${IMGDEPLOYDIR}/${IMAGE_NAME}.$type
-
-    ln -sf ${IMAGE_NAME}.$type ${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.$type
-}
-create_vmdk_image () {
-    run_qemu_img vmdk
-}
-
-create_vdi_image () {
-    run_qemu_img vdi
-}
-
-create_qcow2_image () {
-    run_qemu_img qcow2
-}
-
-python do_vmimg() {
-    if 'vmdk' in d.getVar('IMAGE_FSTYPES'):
-        bb.build.exec_func('create_vmdk_image', d)
-    if 'vdi' in d.getVar('IMAGE_FSTYPES'):
-        bb.build.exec_func('create_vdi_image', d)
-    if 'qcow2' in d.getVar('IMAGE_FSTYPES'):
-        bb.build.exec_func('create_qcow2_image', d)
-}
-
-addtask bootdirectdisk before do_vmimg
-addtask vmimg after do_bootdirectdisk before do_image_complete
-do_vmimg[depends] += "qemu-native:do_populate_sysroot"
diff --git a/import-layers/yocto-poky/meta/classes/image.bbclass b/import-layers/yocto-poky/meta/classes/image.bbclass
index 4bcfb87..d88ce5c 100644
--- a/import-layers/yocto-poky/meta/classes/image.bbclass
+++ b/import-layers/yocto-poky/meta/classes/image.bbclass
@@ -9,7 +9,7 @@
 TOOLCHAIN_TARGET_TASK_ATTEMPTONLY += "${PACKAGE_INSTALL_ATTEMPTONLY}"
 POPULATE_SDK_POST_TARGET_COMMAND += "rootfs_sysroot_relativelinks; "
 
-LICENSE = "MIT"
+LICENSE ?= "MIT"
 PACKAGES = ""
 DEPENDS += "${MLPREFIX}qemuwrapper-cross depmodwrapper-cross"
 RDEPENDS += "${PACKAGE_INSTALL} ${LINGUAS_INSTALL}"
@@ -33,7 +33,7 @@
 
 # These packages will be removed from a read-only rootfs after all other
 # packages have been installed
-ROOTFS_RO_UNNEEDED = "update-rc.d base-passwd shadow ${VIRTUAL-RUNTIME_update-alternatives} ${ROOTFS_BOOTSTRAP_INSTALL}"
+ROOTFS_RO_UNNEEDED ??= "update-rc.d base-passwd shadow ${VIRTUAL-RUNTIME_update-alternatives} ${ROOTFS_BOOTSTRAP_INSTALL}"
 
 # packages to install from features
 FEATURE_INSTALL = "${@' '.join(oe.packagegroup.required_packages(oe.data.typed_value('IMAGE_FEATURES', d), d))}"
@@ -85,7 +85,6 @@
 PACKAGE_ARCH = "${MACHINE_ARCH}"
 
 LDCONFIGDEPEND ?= "ldconfig-native:do_populate_sysroot"
-LDCONFIGDEPEND_libc-uclibc = ""
 LDCONFIGDEPEND_libc-musl = ""
 
 # This is needed to have depmod data in PKGDATA_DIR,
@@ -118,7 +117,7 @@
                  'IMAGE_ROOTFS_MAXSIZE','IMAGE_NAME','IMAGE_LINK_NAME','IMAGE_MANIFEST','DEPLOY_DIR_IMAGE','IMAGE_FSTYPES','IMAGE_INSTALL_COMPLEMENTARY','IMAGE_LINGUAS',
                  'MULTILIBRE_ALLOW_REP','MULTILIB_TEMP_ROOTFS','MULTILIB_VARIANTS','MULTILIBS','ALL_MULTILIB_PACKAGE_ARCHS','MULTILIB_GLOBAL_VARIANTS','BAD_RECOMMENDATIONS','NO_RECOMMENDATIONS',
                  'PACKAGE_ARCHS','PACKAGE_CLASSES','TARGET_VENDOR','TARGET_ARCH','TARGET_OS','OVERRIDES','BBEXTENDVARIANT','FEED_DEPLOYDIR_BASE_URI','INTERCEPT_DIR','USE_DEVFS',
-                 'CONVERSIONTYPES', 'IMAGE_GEN_DEBUGFS', 'ROOTFS_RO_UNNEEDED', 'IMGDEPLOYDIR', 'PACKAGE_EXCLUDE_COMPLEMENTARY']
+                 'CONVERSIONTYPES', 'IMAGE_GEN_DEBUGFS', 'ROOTFS_RO_UNNEEDED', 'IMGDEPLOYDIR', 'PACKAGE_EXCLUDE_COMPLEMENTARY', 'REPRODUCIBLE_TIMESTAMP_ROOTFS']
     variables.extend(rootfs_command_variables(d))
     variables.extend(variable_depends(d))
     return " ".join(variables)
@@ -139,9 +138,6 @@
 IMAGE_TYPE_live = "${@build_live(d)}"
 inherit ${IMAGE_TYPE_live}
 
-IMAGE_TYPE_vm = '${@bb.utils.contains_any("IMAGE_FSTYPES", ["vmdk", "vdi", "qcow2", "hdddirect"], "image-vm", "", d)}'
-inherit ${IMAGE_TYPE_vm}
-
 IMAGE_TYPE_container = '${@bb.utils.contains("IMAGE_FSTYPES", "container", "image-container", "", d)}'
 inherit ${IMAGE_TYPE_container}
 
@@ -149,14 +145,18 @@
 inherit ${IMAGE_TYPE_wic}
 
 python () {
+    def extraimage_getdepends(task):
+        deps = ""
+        for dep in (d.getVar('EXTRA_IMAGEDEPENDS') or "").split():
+            deps += " %s:%s" % (dep, task)
+        return deps
+
+    d.appendVarFlag('do_image', 'depends', extraimage_getdepends('do_populate_lic'))
+    d.appendVarFlag('do_image_complete', 'depends', extraimage_getdepends('do_populate_sysroot'))
+
     deps = " " + imagetypes_getdepends(d)
     d.appendVarFlag('do_rootfs', 'depends', deps)
 
-    deps = ""
-    for dep in (d.getVar('EXTRA_IMAGEDEPENDS') or "").split():
-        deps += " %s:do_populate_sysroot" % dep
-    d.appendVarFlag('do_image_complete', 'depends', deps)
-
     #process IMAGE_FEATURES, we must do this before runtime_mapping_rename
     #Check for replaces image features
     features = set(oe.data.typed_value('IMAGE_FEATURES', d))
@@ -254,6 +254,7 @@
     progress_reporter.next_stage()
 
     # generate rootfs
+    d.setVarFlag('REPRODUCIBLE_TIMESTAMP_ROOTFS', 'export', '1')
     create_rootfs(d, progress_reporter=progress_reporter, logcatcher=logcatcher)
 
     progress_reporter.finish()
@@ -261,18 +262,19 @@
 do_rootfs[dirs] = "${TOPDIR}"
 do_rootfs[cleandirs] += "${S} ${IMGDEPLOYDIR}"
 do_rootfs[umask] = "022"
-addtask rootfs before do_build after do_prepare_recipe_sysroot
+addtask rootfs after do_prepare_recipe_sysroot
 
 fakeroot python do_image () {
     from oe.utils import execute_pre_post_process
 
+    d.setVarFlag('REPRODUCIBLE_TIMESTAMP_ROOTFS', 'export', '1')
     pre_process_cmds = d.getVar("IMAGE_PREPROCESS_COMMAND")
 
     execute_pre_post_process(d, pre_process_cmds)
 }
 do_image[dirs] = "${TOPDIR}"
 do_image[umask] = "022"
-addtask do_image after do_rootfs before do_build
+addtask do_image after do_rootfs
 
 fakeroot python do_image_complete () {
     from oe.utils import execute_pre_post_process
@@ -289,14 +291,21 @@
 do_image_complete[sstate-outputdirs] = "${DEPLOY_DIR_IMAGE}"
 do_image_complete[stamp-extra-info] = "${MACHINE}"
 addtask do_image_complete after do_image before do_build
+python do_image_complete_setscene () {
+    sstate_setscene(d)
+}
+addtask do_image_complete_setscene
 
 # Add image-level QA/sanity checks to IMAGE_QA_COMMANDS
 #
 # IMAGE_QA_COMMANDS += " \
 #     image_check_everything_ok \
 # "
-# This task runs all functions in IMAGE_QA_COMMANDS after the image
+# This task runs all functions in IMAGE_QA_COMMANDS after the rootfs
 # construction has completed in order to validate the resulting image.
+#
+# The functions should use ${IMAGE_ROOTFS} to find the unpacked rootfs
+# directory, which if QA passes will be the basis for the images.
 fakeroot python do_image_qa () {
     from oe.utils import ImageQAFailed
 
@@ -318,7 +327,16 @@
         imgname = d.getVar('IMAGE_NAME')
         bb.fatal("QA errors found whilst validating image: %s\n%s" % (imgname, qamsg))
 }
-addtask do_image_qa after do_image_complete before do_build
+addtask do_image_qa after do_rootfs before do_image
+
+SSTATETASKS += "do_image_qa"
+SSTATE_SKIP_CREATION_task-image-qa = '1'
+do_image_qa[sstate-inputdirs] = ""
+do_image_qa[sstate-outputdirs] = ""
+python do_image_qa_setscene () {
+    sstate_setscene(d)
+}
+addtask do_image_qa_setscene
 
 def setup_debugfs_variables(d):
     d.appendVar('IMAGE_ROOTFS', '-dbg')
@@ -426,7 +444,11 @@
         # Expand PV else it can trigger get_srcrev which can fail due to these variables being unset
         localdata.setVar('PV', d.getVar('PV'))
         localdata.delVar('DATETIME')
+        localdata.delVar('DATE')
         localdata.delVar('TMPDIR')
+        vardepsexclude = (d.getVarFlag('IMAGE_CMD_' + realt, 'vardepsexclude', True) or '').split()
+        for dep in vardepsexclude:
+            localdata.delVar(dep)
 
         image_cmd = localdata.getVar("IMAGE_CMD")
         vardeps.add('IMAGE_CMD_' + realt)
@@ -480,19 +502,20 @@
         for dep in typedeps[t]:
             after += ' do_image_%s' % dep.replace("-", "_").replace(".", "_")
 
-        t = t.replace("-", "_").replace(".", "_")
+        task = "do_image_%s" % t.replace("-", "_").replace(".", "_")
 
-        d.setVar('do_image_%s' % t, '\n'.join(cmds))
-        d.setVarFlag('do_image_%s' % t, 'func', '1')
-        d.setVarFlag('do_image_%s' % t, 'fakeroot', '1')
-        d.setVarFlag('do_image_%s' % t, 'prefuncs', debug + 'set_image_size')
-        d.setVarFlag('do_image_%s' % t, 'postfuncs', 'create_symlinks')
-        d.setVarFlag('do_image_%s' % t, 'subimages', ' '.join(subimages))
-        d.appendVarFlag('do_image_%s' % t, 'vardeps', ' '.join(vardeps))
-        d.appendVarFlag('do_image_%s' % t, 'vardepsexclude', 'DATETIME')
+        d.setVar(task, '\n'.join(cmds))
+        d.setVarFlag(task, 'func', '1')
+        d.setVarFlag(task, 'fakeroot', '1')
 
-        bb.debug(2, "Adding type %s before %s, after %s" % (t, 'do_image_complete', after))
-        bb.build.addtask('do_image_%s' % t, 'do_image_complete', after, d)
+        d.appendVarFlag(task, 'prefuncs', ' ' + debug + ' set_image_size')
+        d.prependVarFlag(task, 'postfuncs', ' create_symlinks')
+        d.appendVarFlag(task, 'subimages', ' ' + ' '.join(subimages))
+        d.appendVarFlag(task, 'vardeps', ' ' + ' '.join(vardeps))
+        d.appendVarFlag(task, 'vardepsexclude', 'DATETIME DATE ' + ' '.join(vardepsexclude))
+
+        bb.debug(2, "Adding task %s before %s, after %s" % (task, 'do_image_complete', after))
+        bb.build.addtask(task, 'do_image_complete', after, d)
 }
 
 #
@@ -598,3 +621,46 @@
 do_package_write_deb[noexec] = "1"
 do_package_write_rpm[noexec] = "1"
 
+# Prepare the root links to point to the /usr counterparts.
+create_merged_usr_symlinks() {
+    root="$1"
+    install -d $root${base_bindir} $root${base_sbindir} $root${base_libdir}
+    lnr $root${base_bindir} $root/bin
+    lnr $root${base_sbindir} $root/sbin
+    lnr $root${base_libdir} $root/${baselib}
+
+    if [ "${nonarch_base_libdir}" != "${base_libdir}" ]; then
+       install -d $root${nonarch_base_libdir}
+       lnr $root${nonarch_base_libdir} $root/lib
+    fi
+
+    # create base links for multilibs
+    multi_libdirs="${@d.getVar('MULTILIB_VARIANTS')}"
+    for d in $multi_libdirs; do
+        install -d $root${exec_prefix}/$d
+        lnr $root${exec_prefix}/$d $root/$d
+    done
+}
+
+create_merged_usr_symlinks_rootfs() {
+    create_merged_usr_symlinks ${IMAGE_ROOTFS}
+}
+
+create_merged_usr_symlinks_sdk() {
+    create_merged_usr_symlinks ${SDK_OUTPUT}${SDKTARGETSYSROOT}
+}
+
+ROOTFS_PREPROCESS_COMMAND += "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', 'create_merged_usr_symlinks_rootfs; ', '',d)}"
+POPULATE_SDK_PRE_TARGET_COMMAND += "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', 'create_merged_usr_symlinks_sdk; ', '',d)}"
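These hooks are driven purely by DISTRO_FEATURES; a minimal sketch of enabling them from a distro configuration:

    # distro.conf: opt into the merged-/usr layout
    DISTRO_FEATURES_append = " usrmerge"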
+
+reproducible_final_image_task () {
+    if [ "$BUILD_REPRODUCIBLE_BINARIES" = "1" ]; then
+        if [ "$REPRODUCIBLE_TIMESTAMP_ROOTFS" = "" ]; then
+            REPRODUCIBLE_TIMESTAMP_ROOTFS=`git log -1 --pretty=%ct`
+        fi
+        # Set mtime of all files to a reproducible value
+        bbnote "reproducible_final_image_task: mtime set to $REPRODUCIBLE_TIMESTAMP_ROOTFS"
+        find  ${IMAGE_ROOTFS} -exec touch -h  --date=@$REPRODUCIBLE_TIMESTAMP_ROOTFS {} \;
+    fi
+}
+IMAGE_PREPROCESS_COMMAND_append = " reproducible_final_image_task; "
diff --git a/import-layers/yocto-poky/meta/classes/image_types.bbclass b/import-layers/yocto-poky/meta/classes/image_types.bbclass
index 8db18ac..e881d0c 100644
--- a/import-layers/yocto-poky/meta/classes/image_types.bbclass
+++ b/import-layers/yocto-poky/meta/classes/image_types.bbclass
@@ -26,20 +26,31 @@
     fstypes = set((d.getVar('IMAGE_FSTYPES') or "").split())
     fstypes |= set((d.getVar('IMAGE_FSTYPES_DEBUGFS') or "").split())
 
+    deprecated = set()
     deps = set()
     for typestring in fstypes:
         basetype, resttypes = split_types(typestring)
-        adddep(d.getVar('IMAGE_DEPENDS_%s' % basetype) , deps)
+
+        var = "IMAGE_DEPENDS_%s" % basetype
+        if d.getVar(var) is not None:
+            deprecated.add(var)
 
         for typedepends in (d.getVar("IMAGE_TYPEDEP_%s" % basetype) or "").split():
             base, rest = split_types(typedepends)
-            adddep(d.getVar('IMAGE_DEPENDS_%s' % base) , deps)
             resttypes += rest
 
+            var = "IMAGE_DEPENDS_%s" % base
+            if d.getVar(var) is not None:
+                deprecated.add(var)
+
         for ctype in resttypes:
             adddep(d.getVar("CONVERSION_DEPENDS_%s" % ctype), deps)
             adddep(d.getVar("COMPRESS_DEPENDS_%s" % ctype), deps)
 
+    if deprecated:
+        bb.fatal('Deprecated variable(s) found: "%s". '
+                 'Use do_image_<type>[depends] += "<recipe>:<task>" instead' % ', '.join(deprecated))
+
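A hedged migration sketch matching the fatal message above (the image type and recipe names are placeholders):

    # Deprecated form, now rejected:
    #   IMAGE_DEPENDS_myfs = "mytool-native"
    # Replacement:
    do_image_myfs[depends] += "mytool-native:do_populate_sysroot"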
     # Sort the set so that ordering is consistent
     return " ".join(sorted(deps))
 
@@ -72,7 +83,11 @@
 		eval COUNT=\"$MIN_COUNT\"
 	fi
 	# Create a sparse image block
+	bbdebug 1 Executing "dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype seek=$ROOTFS_SIZE count=$COUNT bs=1024"
 	dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype seek=$ROOTFS_SIZE count=$COUNT bs=1024
+	bbdebug 1 "Actual Rootfs size:  `du -s ${IMAGE_ROOTFS}`"
+	bbdebug 1 "Actual Partition size: `ls -s ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype`"
+	bbdebug 1 Executing "mkfs.$fstype -F $extra_imagecmd ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype -d ${IMAGE_ROOTFS}"
 	mkfs.$fstype -F $extra_imagecmd ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype -d ${IMAGE_ROOTFS}
 	# Error codes 0-3 indicate successful operation of fsck (no errors or errors corrected)
 	fsck.$fstype -pvfD ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.$fstype || [ $? -le 3 ]
@@ -89,26 +104,28 @@
 		size=${MIN_BTRFS_SIZE}
 		bbwarn "Rootfs size is too small for BTRFS. Filesystem will be extended to ${size}K"
 	fi
-	dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.btrfs count=${size} bs=1024
+	dd if=/dev/zero of=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.btrfs seek=${size} count=0 bs=1024
 	mkfs.btrfs ${EXTRA_IMAGECMD} -r ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.btrfs
 }
 
 IMAGE_CMD_squashfs = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs ${EXTRA_IMAGECMD} -noappend"
 IMAGE_CMD_squashfs-xz = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-xz ${EXTRA_IMAGECMD} -noappend -comp xz"
 IMAGE_CMD_squashfs-lzo = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-lzo ${EXTRA_IMAGECMD} -noappend -comp lzo"
+IMAGE_CMD_squashfs-lz4 = "mksquashfs ${IMAGE_ROOTFS} ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.squashfs-lz4 ${EXTRA_IMAGECMD} -noappend -comp lz4"
 
 # By default, tar from the host is used, which can be quite old. If
 # you need special parameters (like --xattrs) which are only supported
 # by GNU tar upstream >= 1.27, then override that default:
 # IMAGE_CMD_TAR = "tar --xattrs --xattrs-include=*"
-# IMAGE_DEPENDS_tar_append = " tar-replacement-native"
+# do_image_tar[depends] += "tar-replacement-native:do_populate_sysroot"
 # EXTRANATIVEPATH += "tar-native"
 #
 # The GNU documentation does not specify whether --xattrs-include is necessary.
 # In practice, it turned out to be not needed when creating archives and
 # required when extracting, but it seems prudent to use it in both cases.
 IMAGE_CMD_TAR ?= "tar"
-IMAGE_CMD_tar = "${IMAGE_CMD_TAR} -cvf ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.tar -C ${IMAGE_ROOTFS} ."
+# ignore return code 1 "file changed as we read it" as other tasks(e.g. do_image_wic) may be hardlinking rootfs
+IMAGE_CMD_tar = "${IMAGE_CMD_TAR} -cf ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.tar -C ${IMAGE_ROOTFS} . || [ $? -eq 1 ]"
 
 do_image_cpio[cleandirs] += "${WORKDIR}/cpio_append"
 IMAGE_CMD_cpio () {
@@ -135,7 +152,7 @@
 
 IMAGE_CMD_elf () {
 	test -f ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.elf && rm -f ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.elf
-	mkelfImage --kernel=${ELF_KERNEL} --initrd=${DEPLOY_DIR_IMAGE}/${IMAGE_LINK_NAME}.cpio.gz --output=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.elf --append='${ELF_APPEND}' ${EXTRA_IMAGECMD}
+	mkelfImage --kernel=${ELF_KERNEL} --initrd=${IMGDEPLOYDIR}/${IMAGE_LINK_NAME}.cpio.gz --output=${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.elf --append='${ELF_APPEND}' ${EXTRA_IMAGECMD}
 }
 
 IMAGE_TYPEDEP_elf = "cpio.gz"
@@ -145,6 +162,12 @@
 multiubi_mkfs() {
 	local mkubifs_args="$1"
 	local ubinize_args="$2"
+
+	# Fail early with a clear message if the required arguments were not provided.
+	if [ -z "$mkubifs_args" ] || [ -z "$ubinize_args" ]; then
+		bbfatal "MKUBIFS_ARGS and UBINIZE_ARGS have to be set, see http://www.linux-mtd.infradead.org/faq/ubifs.html for details"
+	fi
+
 	if [ -z "$3" ]; then
 		local vname=""
 	else
@@ -209,21 +232,20 @@
 EXTRA_IMAGECMD_btrfs ?= "-n 4096"
 EXTRA_IMAGECMD_elf ?= ""
 
-IMAGE_DEPENDS = ""
-IMAGE_DEPENDS_jffs2 = "mtd-utils-native"
-IMAGE_DEPENDS_cramfs = "util-linux-native"
-IMAGE_DEPENDS_ext2 = "e2fsprogs-native"
-IMAGE_DEPENDS_ext3 = "e2fsprogs-native"
-IMAGE_DEPENDS_ext4 = "e2fsprogs-native"
-IMAGE_DEPENDS_btrfs = "btrfs-tools-native"
-IMAGE_DEPENDS_squashfs = "squashfs-tools-native"
-IMAGE_DEPENDS_squashfs-xz = "squashfs-tools-native"
-IMAGE_DEPENDS_squashfs-lzo = "squashfs-tools-native"
-IMAGE_DEPENDS_elf = "virtual/kernel mkelfimage-native"
-IMAGE_DEPENDS_ubi = "mtd-utils-native"
-IMAGE_DEPENDS_ubifs = "mtd-utils-native"
-IMAGE_DEPENDS_multiubi = "mtd-utils-native"
-IMAGE_DEPENDS_wic = "parted-native"
+do_image_jffs2[depends] += "mtd-utils-native:do_populate_sysroot"
+do_image_cramfs[depends] += "util-linux-native:do_populate_sysroot"
+do_image_ext2[depends] += "e2fsprogs-native:do_populate_sysroot"
+do_image_ext3[depends] += "e2fsprogs-native:do_populate_sysroot"
+do_image_ext4[depends] += "e2fsprogs-native:do_populate_sysroot"
+do_image_btrfs[depends] += "btrfs-tools-native:do_populate_sysroot"
+do_image_squashfs[depends] += "squashfs-tools-native:do_populate_sysroot"
+do_image_squashfs_xz[depends] += "squashfs-tools-native:do_populate_sysroot"
+do_image_squashfs_lzo[depends] += "squashfs-tools-native:do_populate_sysroot"
+do_image_squashfs_lz4[depends] += "squashfs-tools-native:do_populate_sysroot"
+do_image_elf[depends] += "virtual/kernel:do_populate_sysroot mkelfimage-native:do_populate_sysroot"
+do_image_ubi[depends] += "mtd-utils-native:do_populate_sysroot"
+do_image_ubifs[depends] += "mtd-utils-native:do_populate_sysroot"
+do_image_multiubi[depends] += "mtd-utils-native:do_populate_sysroot"
 
 # This variable is available to request which values are suitable for IMAGE_FSTYPES
 IMAGE_TYPES = " \
@@ -235,14 +257,10 @@
     btrfs \
     iso \
     hddimg \
-    squashfs squashfs-xz squashfs-lzo \
+    squashfs squashfs-xz squashfs-lzo squashfs-lz4 \
     ubi ubifs multiubi \
     tar tar.gz tar.bz2 tar.xz tar.lz4 \
     cpio cpio.gz cpio.xz cpio.lzma cpio.lz4 \
-    vmdk \
-    vdi \
-    qcow2 \
-    hdddirect \
     elf \
     wic wic.gz wic.bz2 wic.lzma \
     container \
@@ -254,9 +272,9 @@
 # CONVERSION_CMD/DEPENDS.
 COMPRESSIONTYPES ?= ""
 
-CONVERSIONTYPES = "gz bz2 lzma xz lz4 lzo zip sum md5sum sha1sum sha224sum sha256sum sha384sum sha512sum bmap u-boot ${COMPRESSIONTYPES}"
+CONVERSIONTYPES = "gz bz2 lzma xz lz4 lzo zip sum md5sum sha1sum sha224sum sha256sum sha384sum sha512sum bmap u-boot vmdk vdi qcow2 ${COMPRESSIONTYPES}"
 CONVERSION_CMD_lzma = "lzma -k -f -7 ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
-CONVERSION_CMD_gz = "gzip -f -9 -c ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.gz"
+CONVERSION_CMD_gz = "gzip -f -9 -n -c ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.gz"
 CONVERSION_CMD_bz2 = "pbzip2 -f -k ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}"
 CONVERSION_CMD_xz = "xz -f -k -c ${XZ_COMPRESSION_LEVEL} ${XZ_THREADS} --check=${XZ_INTEGRITY_CHECK} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.xz"
 CONVERSION_CMD_lz4 = "lz4 -9 -z ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.lz4"
@@ -272,6 +290,9 @@
 CONVERSION_CMD_sha512sum = "sha512sum ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} > ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.sha512sum"
 CONVERSION_CMD_bmap = "bmaptool create ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} -o ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.bmap"
 CONVERSION_CMD_u-boot = "mkimage -A ${UBOOT_ARCH} -O linux -T ramdisk -C none -n ${IMAGE_NAME} -d ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.u-boot"
+CONVERSION_CMD_vmdk = "qemu-img convert -O vmdk ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.vmdk"
+CONVERSION_CMD_vdi = "qemu-img convert -O vdi ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.vdi"
+CONVERSION_CMD_qcow2 = "qemu-img convert -O qcow2 ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type} ${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.${type}.qcow2"
 CONVERSION_DEPENDS_lzma = "xz-native"
 CONVERSION_DEPENDS_gz = "pigz-native"
 CONVERSION_DEPENDS_bz2 = "pbzip2-native"
@@ -282,15 +303,18 @@
 CONVERSION_DEPENDS_sum = "mtd-utils-native"
 CONVERSION_DEPENDS_bmap = "bmap-tools-native"
 CONVERSION_DEPENDS_u-boot = "u-boot-mkimage-native"
+CONVERSION_DEPENDS_vmdk = "qemu-native"
+CONVERSION_DEPENDS_vdi = "qemu-native"
+CONVERSION_DEPENDS_qcow2 = "qemu-native"
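Since vmdk/vdi/qcow2 are now conversion types rather than standalone image types, a hedged local.conf sketch of requesting one (the base type is illustrative):

    # Produces <image>.ext4.qcow2 via CONVERSION_CMD_qcow2
    IMAGE_FSTYPES += "ext4.qcow2"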
 
 RUNNABLE_IMAGE_TYPES ?= "ext2 ext3 ext4"
 RUNNABLE_MACHINE_PATTERNS ?= "qemu"
 
 DEPLOYABLE_IMAGE_TYPES ?= "hddimg iso" 
 
-# Use IMAGE_EXTENSION_xxx to map image type 'xxx' with real image file extension name(s) for Hob
-IMAGE_EXTENSION_live = "hddimg iso"
-
 # The IMAGE_TYPES_MASKED variable will be used to mask out from the IMAGE_FSTYPES,
 # images that will not be built at do_rootfs time: vmdk, vdi, qcow2, hdddirect, hddimg, iso, etc.
 IMAGE_TYPES_MASKED ?= ""
+
+# bmap requires python3 to be in the PATH
+EXTRANATIVEPATH += "${@'python3-native' if '.bmap' in d.getVar('IMAGE_FSTYPES') else ''}"
diff --git a/import-layers/yocto-poky/meta/classes/image_types_wic.bbclass b/import-layers/yocto-poky/meta/classes/image_types_wic.bbclass
index 68f251c..dcf620c 100644
--- a/import-layers/yocto-poky/meta/classes/image_types_wic.bbclass
+++ b/import-layers/yocto-poky/meta/classes/image_types_wic.bbclass
@@ -3,7 +3,7 @@
 WICVARS ?= "\
            BBLAYERS IMGDEPLOYDIR DEPLOY_DIR_IMAGE FAKEROOTCMD IMAGE_BASENAME IMAGE_BOOT_FILES \
            IMAGE_LINK_NAME IMAGE_ROOTFS INITRAMFS_FSTYPES INITRD INITRD_LIVE ISODIR RECIPE_SYSROOT_NATIVE \
-           ROOTFS_SIZE STAGING_DATADIR STAGING_DIR STAGING_LIBDIR TARGET_SYS TRANSLATED_TARGET_ARCH"
+           ROOTFS_SIZE STAGING_DATADIR STAGING_DIR STAGING_LIBDIR TARGET_SYS"
 
 WKS_FILE ??= "${IMAGE_BASENAME}.${MACHINE}.wks"
 WKS_FILES ?= "${WKS_FILE} ${IMAGE_BASENAME}.wks"
@@ -39,8 +39,19 @@
 USING_WIC = "${@bb.utils.contains_any('IMAGE_FSTYPES', 'wic ' + ' '.join('wic.%s' % c for c in '${CONVERSIONTYPES}'.split()), '1', '', d)}"
 WKS_FILE_CHECKSUM = "${@'${WKS_FULL_PATH}:%s' % os.path.exists('${WKS_FULL_PATH}') if '${USING_WIC}' else ''}"
 do_image_wic[file-checksums] += "${WKS_FILE_CHECKSUM}"
-do_image_wic[depends] += "wic-tools:do_populate_sysroot"
-WKS_FILE_DEPENDS ??= ''
+do_image_wic[depends] += "${@' '.join('%s-native:do_populate_sysroot' % r for r in ('parted', 'gptfdisk', 'dosfstools', 'mtools'))}"
+
+# We ensure all artifacts are deployed (e.g. virtual/bootloader)
+do_image_wic[recrdeptask] += "do_deploy"
+
+WKS_FILE_DEPENDS_DEFAULT = "syslinux-native bmap-tools-native cdrtools-native btrfs-tools-native squashfs-tools-native e2fsprogs-native"
+WKS_FILE_DEPENDS_BOOTLOADERS = ""
+WKS_FILE_DEPENDS_BOOTLOADERS_x86 = "syslinux grub-efi systemd-boot"
+WKS_FILE_DEPENDS_BOOTLOADERS_x86-64 = "syslinux grub-efi systemd-boot"
+WKS_FILE_DEPENDS_BOOTLOADERS_x86-x32 = "syslinux grub-efi"
+
+WKS_FILE_DEPENDS ??= "${WKS_FILE_DEPENDS_DEFAULT} ${WKS_FILE_DEPENDS_BOOTLOADERS}"
+
 DEPENDS += "${@ '${WKS_FILE_DEPENDS}' if d.getVar('USING_WIC') else '' }"
 
 python do_write_wks_template () {
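
With the wic-tools dependency replaced by explicit native tools, extra kickstart tooling is now declared through WKS_FILE_DEPENDS. A hedged configuration sketch (the .wks name and the extra dependency are hypothetical):

    IMAGE_FSTYPES += "wic"
    WKS_FILE = "my-board.wks"
    WKS_FILE_DEPENDS_append = " my-bootloader"
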
diff --git a/import-layers/yocto-poky/meta/classes/insane.bbclass b/import-layers/yocto-poky/meta/classes/insane.bbclass
index 0c11c36..0a3b528 100644
--- a/import-layers/yocto-poky/meta/classes/insane.bbclass
+++ b/import-layers/yocto-poky/meta/classes/insane.bbclass
@@ -16,13 +16,8 @@
 #   into exec_prefix
 #  -Check that scripts in base_[bindir|sbindir|libdir] do not reference
 #   files under exec_prefix
+#  -Check if the package name is upper case
 
-
-# unsafe-references-in-binaries requires prelink-rtld from
-# prelink-native, but we don't want this DEPENDS for -native builds
-QADEPENDS = "prelink-native"
-QADEPENDS_class-native = ""
-QADEPENDS_class-nativesdk = ""
 QA_SANE = "True"
 
 # Elect whether a given type of error is a warning or error, they may
@@ -32,7 +27,7 @@
             installed-vs-shipped compile-host-path install-host-path \
             pn-overrides infodir build-deps \
             unknown-configure-option symlink-to-sysroot multilib \
-            invalid-packageconfig host-user-contaminated \
+            invalid-packageconfig host-user-contaminated uppercase-pn \
             "
 ERROR_QA ?= "dev-so debug-deps dev-deps debug-files arch pkgconfig la \
             perms dep-cmp pkgvarcheck perm-config perm-line perm-link \
@@ -40,6 +35,9 @@
             version-going-backwards expanded-d invalid-chars \
             license-checksum dev-elf file-rdeps \
             "
+# Add usrmerge QA check based on distro feature
+ERROR_QA_append = "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', ' usrmerge', '', d)}"
+
 FAKEROOT_QA = "host-user-contaminated"
 FAKEROOT_QA[doc] = "QA tests which need to run under fakeroot. If any \
 enabled tests are listed here, the do_package_qa task will run under fakeroot."
@@ -103,23 +101,6 @@
                         "microblazeeb":(189,   0,    0,          False,         32),
                         "microblazeel":(189,   0,    0,          True,          32),
                       },
-            "linux-uclibc" : { 
-                        "arm" :       (  40,    97,    0,          True,          32),
-                        "armeb":      (  40,    97,    0,          False,         32),
-                        "powerpc":    (  20,     0,    0,          False,         32),
-                        "i386":       (   3,     0,    0,          True,          32),
-                        "i486":       (   3,     0,    0,          True,          32),
-                        "i586":       (   3,     0,    0,          True,          32),
-                        "i686":       (   3,     0,    0,          True,          32),
-                        "x86_64":     (  62,     0,    0,          True,          64),
-                        "mips":       (   8,     0,    0,          False,         32),
-                        "mipsel":     (   8,     0,    0,          True,          32),
-                        "mips64":     (   8,     0,    0,          False,         64),
-                        "mips64el":   (   8,     0,    0,          True,          64),
-                        "avr32":      (6317,     0,    0,          False,         32),
-                        "sh4":        (42,       0,    0,          True,          32),
-
-                      },
             "linux-musl" : { 
                         "aarch64" :   (183,    0,    0,            True,          64),
                         "aarch64_be" :(183,    0,    0,            False,         64),
@@ -151,19 +132,12 @@
                         "arm" :       (40,     0,    0,          True,          32),
                         "armeb" :     (40,     0,    0,          False,         32),
                       },
-            "linux-uclibceabi" : {
-                        "arm" :       (40,     0,    0,          True,          32),
-                        "armeb" :     (40,     0,    0,          False,         32),
-                      },
             "linux-gnuspe" : {
                         "powerpc":    (20,     0,    0,          False,         32),
                       },
             "linux-muslspe" : {
                         "powerpc":    (20,     0,    0,          False,         32),
                       },
-            "linux-uclibcspe" : {
-                        "powerpc":    (20,     0,    0,          False,         32),
-                      },
             "linux-gnu" :       {
                         "powerpc":    (20,     0,    0,          False,         32),
                         "sh4":        (42,     0,    0,          True,          32),
@@ -171,6 +145,9 @@
             "linux-gnux32" :       {
                         "x86_64":     (62,     0,    0,          True,          32),
                       },
+            "linux-muslx32" :       {
+                        "x86_64":     (62,     0,    0,          True,          32),
+                      },
             "linux-gnun32" :       {
                         "mips64":       ( 8,     0,    0,          False,         32),
                         "mips64el":     ( 8,     0,    0,          True,          32),
@@ -207,12 +184,13 @@
             f.write("%s: %s [%s]\n" % (p, error, type))
 
 def package_qa_handle_error(error_class, error_msg, d):
-    package_qa_write_error(error_class, error_msg, d)
     if error_class in (d.getVar("ERROR_QA") or "").split():
+        package_qa_write_error(error_class, error_msg, d)
         bb.error("QA Issue: %s [%s]" % (error_msg, error_class))
         d.setVar("QA_SANE", False)
         return False
     elif error_class in (d.getVar("WARN_QA") or "").split():
+        package_qa_write_error(error_class, error_msg, d)
         bb.warn("QA Issue: %s [%s]" % (error_msg, error_class))
     else:
         bb.note("QA Issue: %s [%s]" % (error_msg, error_class))
@@ -408,71 +386,6 @@
     """
     return
 
-QAPATHTEST[unsafe-references-in-scripts] = "package_qa_check_unsafe_references_in_scripts"
-def package_qa_check_unsafe_references_in_scripts(path, name, d, elf, messages):
-    """
-    Warn if scripts in base_[bindir|sbindir|libdir] reference files under exec_prefix
-    """
-    if unsafe_references_skippable(path, name, d):
-        return
-
-    if not elf:
-        import stat
-        import subprocess
-        pn = d.getVar('PN')
-
-        # Ensure we're checking an executable script
-        statinfo = os.stat(path)
-        if bool(statinfo.st_mode & stat.S_IXUSR):
-            # grep shell scripts for possible references to /exec_prefix/
-            exec_prefix = d.getVar('exec_prefix')
-            statement = "grep -e '%s/[^ :]\{1,\}/[^ :]\{1,\}' %s > /dev/null" % (exec_prefix, path)
-            if subprocess.call(statement, shell=True) == 0:
-                error_msg = pn + ": Found a reference to %s/ in %s" % (exec_prefix, path)
-                package_qa_handle_error("unsafe-references-in-scripts", error_msg, d)
-                error_msg = "Shell scripts in base_bindir and base_sbindir should not reference anything in exec_prefix"
-                package_qa_handle_error("unsafe-references-in-scripts", error_msg, d)
-
-def unsafe_references_skippable(path, name, d):
-    if bb.data.inherits_class('native', d) or bb.data.inherits_class('nativesdk', d):
-        return True
-
-    if "-dbg" in name or "-dev" in name:
-        return True
-
-    # Other package names to skip:
-    if name.startswith("kernel-module-"):
-        return True
-
-    # Skip symlinks
-    if os.path.islink(path):
-        return True
-
-    # Skip unusual rootfs layouts which make these tests irrelevant
-    exec_prefix = d.getVar('exec_prefix')
-    if exec_prefix == "":
-        return True
-
-    pkgdest = d.getVar('PKGDEST')
-    pkgdest = pkgdest + "/" + name
-    pkgdest = os.path.abspath(pkgdest)
-    base_bindir = pkgdest + d.getVar('base_bindir')
-    base_sbindir = pkgdest + d.getVar('base_sbindir')
-    base_libdir = pkgdest + d.getVar('base_libdir')
-    bindir = pkgdest + d.getVar('bindir')
-    sbindir = pkgdest + d.getVar('sbindir')
-    libdir = pkgdest + d.getVar('libdir')
-
-    if base_bindir == bindir and base_sbindir == sbindir and base_libdir == libdir:
-        return True
-
-    # Skip files not in base_[bindir|sbindir|libdir]
-    path = os.path.abspath(path)
-    if not (base_bindir in path or base_sbindir in path or base_libdir in path):
-        return True
-
-    return False
-
 QAPATHTEST[arch] = "package_qa_check_arch"
 def package_qa_check_arch(path,name,d, elf, messages):
     """
@@ -509,7 +422,7 @@
 
     # Check the architecture and endiannes of the binary
     is_32 = (("virtual/kernel" in provides) or bb.data.inherits_class("module", d)) and \
-            (target_os == "linux-gnux32" or re.match('mips64.*32', d.getVar('DEFAULTTUNE')))
+            (target_os == "linux-gnux32" or target_os == "linux-muslx32"  or re.match('mips64.*32', d.getVar('DEFAULTTUNE')))
     if not ((machine == elf.machine()) or is_32):
         package_qa_add_message(messages, "arch", "Architecture did not match (%s, expected %s) on %s" % \
                  (oe.qa.elf_machine_to_string(elf.machine()), oe.qa.elf_machine_to_string(machine), package_qa_clean_path(path,d)))
@@ -677,7 +590,7 @@
         sane = package_qa_handle_error("license-checksum", pn + ": Recipe file fetches files and does not have license file information (LIC_FILES_CHKSUM)", d)
 
     srcdir = d.getVar('S')
-
+    corebase_licensefile = d.getVar('COREBASE') + "/LICENSE"
     for url in lic_files.split():
         try:
             (type, host, path, user, pswd, parm) = bb.fetch.decodeurl(url)
@@ -689,6 +602,9 @@
             package_qa_handle_error("license-checksum", pn + ": LIC_FILES_CHKSUM points to an invalid file: " + srclicfile, d)
             continue
 
+        if (srclicfile == corebase_licensefile):
+            bb.warn("${COREBASE}/LICENSE is not a valid license file, please use '${COMMON_LICENSE_DIR}/MIT' for a MIT License file in LIC_FILES_CHKSUM. This will become an error in the future")
+
         recipemd5 = parm.get('md5', '')
         beginline, endline = 0, 0
         if 'beginline' in parm:
@@ -816,7 +732,7 @@
     return sane
 
 # Run all package-wide warnfuncs and errorfuncs
-def package_qa_package(warnfuncs, errorfuncs, skip, package, d):
+def package_qa_package(warnfuncs, errorfuncs, package, d):
     warnings = {}
     errors = {}
 
@@ -832,8 +748,25 @@
 
     return len(errors) == 0
 
+# Run all recipe-wide warnfuncs and errorfuncs
+def package_qa_recipe(warnfuncs, errorfuncs, pn, d):
+    warnings = {}
+    errors = {}
+
+    for func in warnfuncs:
+        func(pn, d, warnings)
+    for func in errorfuncs:
+        func(pn, d, errors)
+
+    for w in warnings:
+        package_qa_handle_error(w, warnings[w], d)
+    for e in errors:
+        package_qa_handle_error(e, errors[e], d)
+
+    return len(errors) == 0
+
 # Walk over all files in a directory and call func
-def package_qa_walk(warnfuncs, errorfuncs, skip, package, d):
+def package_qa_walk(warnfuncs, errorfuncs, package, d):
     import oe.qa
 
     #if this will throw an exception, then fix the dict above
@@ -973,8 +906,9 @@
                     error_msg = "%s contained in package %s requires %s, but no providers found in RDEPENDS_%s?" % \
                             (filerdepends[key].replace("_%s" % pkg, "").replace("@underscore@", "_"), pkg, key, pkg)
                     package_qa_handle_error("file-rdeps", error_msg, d)
+package_qa_check_rdepends[vardepsexclude] = "OVERRIDES"
 
-def package_qa_check_deps(pkg, pkgdest, skip, d):
+def package_qa_check_deps(pkg, pkgdest, d):
 
     localdata = bb.data.createCopy(d)
     localdata.setVar('OVERRIDES', pkg)
@@ -997,6 +931,18 @@
     check_valid_deps('RREPLACES')
     check_valid_deps('RCONFLICTS')
 
+QAPKGTEST[usrmerge] = "package_qa_check_usrmerge"
+def package_qa_check_usrmerge(pkg, d, messages):
+    pkgdest = d.getVar('PKGDEST')
+    pkg_dir = pkgdest + os.sep + pkg + os.sep
+    merged_dirs = ['bin', 'sbin', 'lib'] + d.getVar('MULTILIB_VARIANTS').split()
+    for f in merged_dirs:
+        if os.path.exists(pkg_dir + f) and not os.path.islink(pkg_dir + f):
+            msg = "%s package is not obeying usrmerge distro feature. /%s should be relocated to /usr." % (pkg, f)
+            package_qa_add_message(messages, "usrmerge", msg)
+            return False
+    return True
+
 QAPKGTEST[expanded-d] = "package_qa_check_expanded_d"
 def package_qa_check_expanded_d(package, d, messages):
     """
@@ -1070,6 +1016,7 @@
             return False
     return True
 
+
 # The PACKAGE FUNC to scan each package
 python do_package_qa () {
     import subprocess
@@ -1083,7 +1030,7 @@
     package_qa_check_encoding(['DESCRIPTION', 'SUMMARY', 'LICENSE', 'SECTION'], 'utf-8', d)
 
     logdir = d.getVar('T')
-    pkg = d.getVar('PN')
+    pn = d.getVar('PN')
 
     # Check the compile log for host contamination
     compilelog = os.path.join(logdir,"log.do_compile")
@@ -1092,7 +1039,7 @@
         statement = "grep -e 'CROSS COMPILE Badness:' -e 'is unsafe for cross-compilation' %s > /dev/null" % compilelog
         if subprocess.call(statement, shell=True) == 0:
             msg = "%s: The compile log indicates that host include and/or library paths were used.\n \
-        Please check the log '%s' for more information." % (pkg, compilelog)
+        Please check the log '%s' for more information." % (pn, compilelog)
             package_qa_handle_error("compile-host-path", msg, d)
 
     # Check the install log for host contamination
@@ -1102,7 +1049,7 @@
         statement = "grep -e 'CROSS COMPILE Badness:' -e 'is unsafe for cross-compilation' %s > /dev/null" % installlog
         if subprocess.call(statement, shell=True) == 0:
             msg = "%s: The install log indicates that host include and/or library paths were used.\n \
-        Please check the log '%s' for more information." % (pkg, installlog)
+        Please check the log '%s' for more information." % (pn, installlog)
             package_qa_handle_error("install-host-path", msg, d)
 
     # Scan the packages...
@@ -1131,35 +1078,30 @@
     for dep in taskdepdata:
         taskdeps.add(taskdepdata[dep][0])
 
+    def parse_test_matrix(matrix_name):
+        testmatrix = d.getVarFlags(matrix_name) or {}
+        g = globals()
+        warnchecks = []
+        for w in (d.getVar("WARN_QA") or "").split():
+            if w in skip:
+               continue
+            if w in testmatrix and testmatrix[w] in g:
+                warnchecks.append(g[testmatrix[w]])
+
+        errorchecks = []
+        for e in (d.getVar("ERROR_QA") or "").split():
+            if e in skip:
+               continue
+            if e in testmatrix and testmatrix[e] in g:
+                errorchecks.append(g[testmatrix[e]])
+        return warnchecks, errorchecks
+
     for package in packages:
-        def parse_test_matrix(matrix_name):
-            testmatrix = d.getVarFlags(matrix_name) or {}
-            g = globals()
-            warnchecks = []
-            for w in (d.getVar("WARN_QA") or "").split():
-                if w in skip:
-                   continue
-                if w in testmatrix and testmatrix[w] in g:
-                    warnchecks.append(g[testmatrix[w]])
-                if w == 'unsafe-references-in-binaries':
-                    oe.utils.write_ld_so_conf(d)
-
-            errorchecks = []
-            for e in (d.getVar("ERROR_QA") or "").split():
-                if e in skip:
-                   continue
-                if e in testmatrix and testmatrix[e] in g:
-                    errorchecks.append(g[testmatrix[e]])
-                if e == 'unsafe-references-in-binaries':
-                    oe.utils.write_ld_so_conf(d)
-            return warnchecks, errorchecks
-
         skip = set((d.getVar('INSANE_SKIP') or "").split() +
                    (d.getVar('INSANE_SKIP_' + package) or "").split())
         if skip:
             bb.note("Package %s skipping QA tests: %s" % (package, str(skip)))
 
-
         bb.note("Checking Package: %s" % package)
         # Check package name
         if not pkgname_pattern.match(package):
@@ -1167,13 +1109,16 @@
                     "%s doesn't match the [a-z0-9.+-]+ regex" % package, d)
 
         warn_checks, error_checks = parse_test_matrix("QAPATHTEST")
-        package_qa_walk(warn_checks, error_checks, skip, package, d)
+        package_qa_walk(warn_checks, error_checks, package, d)
 
         warn_checks, error_checks = parse_test_matrix("QAPKGTEST")
-        package_qa_package(warn_checks, error_checks, skip, package, d)
+        package_qa_package(warn_checks, error_checks, package, d)
 
         package_qa_check_rdepends(package, pkgdest, skip, taskdeps, packages, d)
-        package_qa_check_deps(package, pkgdest, skip, d)
+        package_qa_check_deps(package, pkgdest, d)
+
+    warn_checks, error_checks = parse_test_matrix("QARECIPETEST")
+    package_qa_recipe(warn_checks, error_checks, pn, d)
 
     if 'libdir' in d.getVar("ALL_QA").split():
         package_qa_check_libdir(d)
@@ -1238,12 +1183,10 @@
     cnf = d.getVar('EXTRA_OECONF') or ""
     if "gettext" not in d.getVar('P') and "gcc-runtime" not in d.getVar('P') and "--disable-nls" not in cnf:
         ml = d.getVar("MLPREFIX") or ""
-        if bb.data.inherits_class('native', d) or bb.data.inherits_class('cross', d) or bb.data.inherits_class('crosssdk', d) or bb.data.inherits_class('nativesdk', d):
-            gt = "gettext-native"
-        elif bb.data.inherits_class('cross-canadian', d):
+        if bb.data.inherits_class('cross-canadian', d):
             gt = "nativesdk-gettext"
         else:
-            gt = "virtual/" + ml + "gettext"
+            gt = "gettext-native"
         deps = bb.utils.explode_deps(d.getVar('DEPENDS') or "")
         if gt not in deps:
             for config in configs:
@@ -1308,6 +1251,8 @@
 do_unpack[postfuncs] += "do_qa_unpack"
 
 python () {
+    import re
+    
     tests = d.getVar('ALL_QA').split()
     if "desktop" in tests:
         d.appendVar("PACKAGE_DEPENDS", " desktop-file-utils-native")
@@ -1334,6 +1279,9 @@
     if pn in overrides:
         msg = 'Recipe %s has PN of "%s" which is in OVERRIDES, this can result in unexpected behaviour.' % (d.getVar("FILE"), pn)
         package_qa_handle_error("pn-overrides", msg, d)
+    prog = re.compile('[A-Z]')
+    if prog.search(pn):
+        package_qa_handle_error("uppercase-pn", 'PN: %s is upper case, this can result in unexpected behavior.' % pn, d)
 
     issues = []
     if (d.getVar('PACKAGES') or "").split():
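
The new usrmerge and uppercase-pn tests follow the usual QA plumbing: usrmerge only becomes an error when the distro feature is enabled, and either test can be skipped per recipe. A sketch, with the recipe-level skip purely illustrative:

    # distro or local.conf: enables the usrmerge ERROR_QA entry appended above
    DISTRO_FEATURES_append = " usrmerge"
    # hypothetical recipe that legitimately ships /bin as a real directory
    INSANE_SKIP_${PN} += "usrmerge"
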
diff --git a/import-layers/yocto-poky/meta/classes/kernel-devicetree.bbclass b/import-layers/yocto-poky/meta/classes/kernel-devicetree.bbclass
new file mode 100644
index 0000000..6e08be4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/classes/kernel-devicetree.bbclass
@@ -0,0 +1,112 @@
+# Support for device tree generation
+PACKAGES_append = " \
+    kernel-devicetree \
+    ${@['kernel-image-zimage-bundle', ''][d.getVar('KERNEL_DEVICETREE_BUNDLE') != '1']} \
+"
+FILES_kernel-devicetree = "/${KERNEL_IMAGEDEST}/*.dtb /${KERNEL_IMAGEDEST}/*.dtbo"
+FILES_kernel-image-zimage-bundle = "/${KERNEL_IMAGEDEST}/zImage-*.dtb.bin"
+
+# Generate kernel+devicetree bundle
+KERNEL_DEVICETREE_BUNDLE ?= "0"
+
+normalize_dtb () {
+	DTB="$1"
+	if echo ${DTB} | grep -q '/dts/'; then
+		bbwarn "${DTB} contains the full path to the dts file, but only the dtb name should be used."
+		DTB=`basename ${DTB} | sed 's,\.dts$,.dtb,g'`
+	fi
+	echo "${DTB}"
+}
+
+get_real_dtb_path_in_kernel () {
+	DTB="$1"
+	DTB_PATH="${B}/arch/${ARCH}/boot/dts/${DTB}"
+	if [ ! -e "${DTB_PATH}" ]; then
+		DTB_PATH="${B}/arch/${ARCH}/boot/${DTB}"
+	fi
+	echo "${DTB_PATH}"
+}
+
+do_configure_append() {
+	if [ "${KERNEL_DEVICETREE_BUNDLE}" = "1" ]; then
+		if echo ${KERNEL_IMAGETYPE_FOR_MAKE} | grep -q 'zImage'; then
+			case "${ARCH}" in
+				"arm")
+				config="${B}/.config"
+				if ! grep -q 'CONFIG_ARM_APPENDED_DTB=y' $config; then
+					bbwarn 'CONFIG_ARM_APPENDED_DTB is NOT enabled in the kernel. Enabling it to allow the kernel to boot with the Device Tree appended!'
+					sed -i "/CONFIG_ARM_APPENDED_DTB[ =]/d" $config
+					echo "CONFIG_ARM_APPENDED_DTB=y" >> $config
+					echo "# CONFIG_ARM_ATAG_DTB_COMPAT is not set" >> $config
+				fi
+				;;
+				*)
+				bberror "KERNEL_DEVICETREE_BUNDLE is not supported for ${ARCH}. Currently it is only supported for 'ARM'."
+			esac
+		else
+			bberror 'The KERNEL_DEVICETREE_BUNDLE requires the KERNEL_IMAGETYPE to contain zImage.'
+		fi
+	fi
+}
+
+do_compile_append() {
+	for DTB in ${KERNEL_DEVICETREE}; do
+		DTB=`normalize_dtb "${DTB}"`
+		oe_runmake ${DTB}
+	done
+}
+
+do_install_append() {
+	for DTB in ${KERNEL_DEVICETREE}; do
+		DTB=`normalize_dtb "${DTB}"`
+		DTB_EXT=${DTB##*.}
+		DTB_PATH=`get_real_dtb_path_in_kernel "${DTB}"`
+		DTB_BASE_NAME=`basename ${DTB} ."${DTB_EXT}"`
+		install -m 0644 ${DTB_PATH} ${D}/${KERNEL_IMAGEDEST}/${DTB_BASE_NAME}.${DTB_EXT}
+		for type in ${KERNEL_IMAGETYPE_FOR_MAKE}; do
+			symlink_name=${type}"-"${KERNEL_IMAGE_SYMLINK_NAME}
+			DTB_SYMLINK_NAME=`echo ${symlink_name} | sed "s/${MACHINE}/${DTB_BASE_NAME}/g"`
+			ln -sf ${DTB_BASE_NAME}.${DTB_EXT} ${D}/${KERNEL_IMAGEDEST}/devicetree-${DTB_SYMLINK_NAME}.${DTB_EXT}
+
+			if [ "$type" = "zImage" ] && [ "${KERNEL_DEVICETREE_BUNDLE}" = "1" ]; then
+				cat ${D}/${KERNEL_IMAGEDEST}/$type \
+					${D}/${KERNEL_IMAGEDEST}/${DTB_BASE_NAME}.${DTB_EXT} \
+					> ${D}/${KERNEL_IMAGEDEST}/$type-${DTB_BASE_NAME}.${DTB_EXT}.bin
+			fi
+		done
+	done
+}
+
+do_deploy_append() {
+	for DTB in ${KERNEL_DEVICETREE}; do
+		DTB=`normalize_dtb "${DTB}"`
+		DTB_EXT=${DTB##*.}
+		DTB_BASE_NAME=`basename ${DTB} ."${DTB_EXT}"`
+		for type in ${KERNEL_IMAGETYPE_FOR_MAKE}; do
+			base_name=${type}"-"${KERNEL_IMAGE_BASE_NAME}
+			symlink_name=${type}"-"${KERNEL_IMAGE_SYMLINK_NAME}
+			DTB_NAME=`echo ${base_name} | sed "s/${MACHINE}/${DTB_BASE_NAME}/g"`
+			DTB_SYMLINK_NAME=`echo ${symlink_name} | sed "s/${MACHINE}/${DTB_BASE_NAME}/g"`
+			DTB_PATH=`get_real_dtb_path_in_kernel "${DTB}"`
+			install -d ${DEPLOYDIR}
+			install -m 0644 ${DTB_PATH} ${DEPLOYDIR}/${DTB_NAME}.${DTB_EXT}
+			ln -sf ${DTB_NAME}.${DTB_EXT} ${DEPLOYDIR}/${DTB_SYMLINK_NAME}.${DTB_EXT}
+			ln -sf ${DTB_NAME}.${DTB_EXT} ${DEPLOYDIR}/${DTB_BASE_NAME}.${DTB_EXT}
+
+			if [ "$type" = "zImage" ] && [ "${KERNEL_DEVICETREE_BUNDLE}" = "1" ]; then
+				cat ${DEPLOYDIR}/$type \
+					${DEPLOYDIR}/${DTB_NAME}.${DTB_EXT} \
+					> ${DEPLOYDIR}/${DTB_NAME}.${DTB_EXT}.bin
+				ln -sf ${DTB_NAME}.${DTB_EXT}.bin ${DEPLOYDIR}/$type-${DTB_BASE_NAME}.${DTB_EXT}.bin
+
+				if [ -e "${KERNEL_OUTPUT_DIR}/${type}.initramfs" ]; then
+					cat ${KERNEL_OUTPUT_DIR}/${type}.initramfs \
+						${DEPLOYDIR}/${DTB_NAME}.${DTB_EXT} \
+						> ${DEPLOYDIR}/${type}-${INITRAMFS_BASE_NAME}-${DTB_BASE_NAME}.${DTB_EXT}.bin
+					ln -sf ${type}-${INITRAMFS_BASE_NAME}-${DTB_BASE_NAME}.${DTB_EXT}.bin \
+					       ${DEPLOYDIR}/${type}-initramfs-${DTB_BASE_NAME}.${DTB_EXT}-${MACHINE}.bin
+				fi
+			fi
+		done
+	done
+}
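
A machine configuration sketch for the new class (board name, dtb and image type are placeholders); note that normalize_dtb() expects plain .dtb names and the bundle path is only supported for ARM zImage builds:

    KERNEL_IMAGETYPE = "zImage"
    KERNEL_DEVICETREE = "my-board.dtb"
    KERNEL_DEVICETREE_BUNDLE = "1"
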
diff --git a/import-layers/yocto-poky/meta/classes/kernel-fitimage.bbclass b/import-layers/yocto-poky/meta/classes/kernel-fitimage.bbclass
index 179185b..9baf399 100644
--- a/import-layers/yocto-poky/meta/classes/kernel-fitimage.bbclass
+++ b/import-layers/yocto-poky/meta/classes/kernel-fitimage.bbclass
@@ -7,9 +7,12 @@
         depends = "%s u-boot-mkimage-native dtc-native" % depends
         d.setVar("DEPENDS", depends)
 
-        if d.getVar("UBOOT_ARCH") == "mips":
+        uarch = d.getVar("UBOOT_ARCH")
+        if uarch == "arm64":
+            replacementtype = "Image"
+        elif uarch == "mips":
             replacementtype = "vmlinuz.bin"
-        elif d.getVar("UBOOT_ARCH") == "x86":
+        elif uarch == "x86":
             replacementtype = "bzImage"
         else:
             replacementtype = "zImage"
diff --git a/import-layers/yocto-poky/meta/classes/kernel-module-split.bbclass b/import-layers/yocto-poky/meta/classes/kernel-module-split.bbclass
index 5e10dcf..1035525 100644
--- a/import-layers/yocto-poky/meta/classes/kernel-module-split.bbclass
+++ b/import-layers/yocto-poky/meta/classes/kernel-module-split.bbclass
@@ -47,7 +47,7 @@
         tf = tempfile.mkstemp()
         tmpfile = tf[1]
         cmd = "%sobjcopy -j .modinfo -O binary %s %s" % (d.getVar("HOST_PREFIX") or "", file, tmpfile)
-        subprocess.call(cmd, shell=True)
+        subprocess.check_call(cmd, shell=True)
         f = open(tmpfile)
         l = f.read().split("\000")
         f.close()
diff --git a/import-layers/yocto-poky/meta/classes/kernel-uboot.bbclass b/import-layers/yocto-poky/meta/classes/kernel-uboot.bbclass
index 87f0265..2364053 100644
--- a/import-layers/yocto-poky/meta/classes/kernel-uboot.bbclass
+++ b/import-layers/yocto-poky/meta/classes/kernel-uboot.bbclass
@@ -3,6 +3,10 @@
 		vmlinux_path="arch/${ARCH}/boot/compressed/vmlinux"
 		linux_suffix=""
 		linux_comp="none"
+	elif [ -e arch/${ARCH}/boot/Image ] ; then
+		vmlinux_path="vmlinux"
+		linux_suffix=""
+		linux_comp="none"
 	elif [ -e arch/${ARCH}/boot/vmlinuz.bin ]; then
 		rm -f linux.bin
 		cp -l arch/${ARCH}/boot/vmlinuz.bin linux.bin
diff --git a/import-layers/yocto-poky/meta/classes/kernel-yocto.bbclass b/import-layers/yocto-poky/meta/classes/kernel-yocto.bbclass
index 1ca0756..663c655 100644
--- a/import-layers/yocto-poky/meta/classes/kernel-yocto.bbclass
+++ b/import-layers/yocto-poky/meta/classes/kernel-yocto.bbclass
@@ -107,20 +107,31 @@
 				cmp "${WORKDIR}/defconfig" "${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG}"
 				if [ $? -ne 0 ]; then
 					bbwarn "defconfig detected in WORKDIR. ${KBUILD_DEFCONFIG} skipped"
+				else
+					cp -f ${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG} ${WORKDIR}/defconfig
 				fi
 			else
 				cp -f ${S}/arch/${ARCH}/configs/${KBUILD_DEFCONFIG} ${WORKDIR}/defconfig
-				sccs="${WORKDIR}/defconfig"
 			fi
+			sccs="${WORKDIR}/defconfig"
 		else
-			bbfatal "A KBUILD_DECONFIG '${KBUILD_DEFCONFIG}' was specified, but not present in the source tree"
+			bbfatal "A KBUILD_DEFCONFIG '${KBUILD_DEFCONFIG}' was specified, but not present in the source tree"
 		fi
 	fi
 
-	sccs="$sccs ${@" ".join(find_sccs(d))}"
+	sccs_from_src_uri="${@" ".join(find_sccs(d))}"
 	patches="${@" ".join(find_patches(d))}"
 	feat_dirs="${@" ".join(find_kernel_feature_dirs(d))}"
 
+	# a quick check to make sure we don't have duplicate defconfigs
+	# If there's a defconfig in the SRC_URI, did we also have one from
+	# the KBUILD_DEFCONFIG processing above?
+	if [ -n "$sccs" ]; then
+	    # we did have a defconfig from above. remove any that might be in the src_uri
+	    sccs_from_src_uri=$(echo $sccs_from_src_uri | awk '{ if ($0!="defconfig") { print $0 } }' RS=' ')
+	fi
+	sccs="$sccs $sccs_from_src_uri"
+
 	# check for feature directories/repos/branches that were part of the
 	# SRC_URI. If they were supplied, we convert them into include directives
 	# for the update part of the process
@@ -143,6 +154,12 @@
 
 	# expand kernel features into their full path equivalents
 	bsp_definition=$(spp ${includes} --find -DKMACHINE=${KMACHINE} -DKTYPE=${LINUX_KERNEL_TYPE})
+	if [ -z "$bsp_definition" ]; then
+		echo "$sccs" | grep -q defconfig
+		if [ $? -ne 0 ]; then
+			bbfatal_log "Could not locate BSP definition for ${KMACHINE}/${LINUX_KERNEL_TYPE} and no defconfig was provided"
+		fi
+	fi
 	meta_dir=$(kgit --meta)
 
 	# run1: pull all the configuration fragments, no matter where they come from
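
Either defconfig source now ends up as "defconfig" in $sccs, which is what the missing-BSP-definition fallback above keys on. A recipe-space sketch (the machine override and defconfig name are assumptions):

    # linux-yocto bbappend for a hypothetical machine
    KBUILD_DEFCONFIG_my-machine = "multi_v7_defconfig"
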
diff --git a/import-layers/yocto-poky/meta/classes/kernel.bbclass b/import-layers/yocto-poky/meta/classes/kernel.bbclass
index ce2cab6..14f41e9 100644
--- a/import-layers/yocto-poky/meta/classes/kernel.bbclass
+++ b/import-layers/yocto-poky/meta/classes/kernel.bbclass
@@ -2,7 +2,7 @@
 
 PROVIDES += "virtual/kernel"
 DEPENDS += "virtual/${TARGET_PREFIX}binutils virtual/${TARGET_PREFIX}gcc kmod-native bc-native lzop-native"
-PACKAGE_WRITE_DEPS += "depmodwrapper-cross virtual/update-alternatives-native"
+PACKAGE_WRITE_DEPS += "depmodwrapper-cross"
 
 do_deploy[depends] += "depmodwrapper-cross:do_populate_sysroot"
 
@@ -57,7 +57,7 @@
 
         d.appendVar('PACKAGES', ' ' + 'kernel-image-' + typelower)
 
-        d.setVar('FILES_kernel-image-' + typelower, '/' + imagedest + '/' + type + '-${KERNEL_VERSION_NAME}')
+        d.setVar('FILES_kernel-image-' + typelower, '/' + imagedest + '/' + type + '-${KERNEL_VERSION_NAME}' + ' /' + imagedest + '/' + type)
 
         d.appendVar('RDEPENDS_kernel-image', ' ' + 'kernel-image-' + typelower)
 
@@ -65,13 +65,6 @@
 
         d.setVar('ALLOW_EMPTY_kernel-image-' + typelower, '1')
 
-        priority = d.getVar('KERNEL_PRIORITY')
-        postinst = '#!/bin/sh\n' + 'update-alternatives --install /' + imagedest + '/' + type + ' ' + type + ' ' + type + '-${KERNEL_VERSION_NAME} ' + priority + ' || true' + '\n'
-        d.setVar('pkg_postinst_kernel-image-' + typelower, postinst)
-
-        postrm = '#!/bin/sh\n' + 'update-alternatives --remove' + ' ' + type + ' ' + type + '-${KERNEL_VERSION_NAME} || true' + '\n'
-        d.setVar('pkg_postrm_kernel-image-' + typelower, postrm)
-
     image = d.getVar('INITRAMFS_IMAGE')
     if image:
         d.appendVarFlag('do_bundle_initramfs', 'depends', ' ${INITRAMFS_IMAGE}:do_image_complete')
@@ -137,10 +130,6 @@
 export KBUILD_BUILD_USER = "oe-user"
 export KBUILD_BUILD_HOST = "oe-host"
 
-KERNEL_PRIORITY ?= "${@int(d.getVar('PV').split('-')[0].split('+')[0].split('.')[0]) * 10000 + \
-                       int(d.getVar('PV').split('-')[0].split('+')[0].split('.')[1]) * 100 + \
-                       int(d.getVar('PV').split('-')[0].split('+')[0].split('.')[-1])}"
-
 KERNEL_RELEASE ?= "${KERNEL_VERSION}"
 
 # The directory where built kernel lies in the kernel tree
@@ -166,7 +155,7 @@
 # Some Linux kernel configurations need additional parameters on the command line
 KERNEL_EXTRA_ARGS ?= ""
 
-EXTRA_OEMAKE = " HOSTCC="${BUILD_CC}" HOSTCPP="${BUILD_CPP}""
+EXTRA_OEMAKE = " HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" HOSTCPP="${BUILD_CPP}""
 KERNEL_ALT_IMAGETYPE ??= ""
 
 copy_initramfs() {
@@ -255,8 +244,36 @@
 
 addtask bundle_initramfs after do_install before do_deploy
 
+get_cc_option () {
+		# Check if KERNEL_CC supports the option "file-prefix-map".
+		# This option allows us to build images with __FILE__ values that do not
+		# contain the host build path.
+		if ${KERNEL_CC} -Q --help=joined | grep -q "\-ffile-prefix-map=<old=new>"; then
+			echo "-ffile-prefix-map=${S}=/kernel-source/"
+		fi
+}
+
 kernel_do_compile() {
 	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS MACHINE
+	if [ "$BUILD_REPRODUCIBLE_BINARIES" = "1" ]; then
+		# kernel sources do not use do_unpack, so SOURCE_DATE_EPOCH may not
+		# be set....
+		if [ "$SOURCE_DATE_EPOCH" = "0" ]; then
+			olddir=`pwd`
+			cd ${S}
+			SOURCE_DATE_EPOCH=`git log  -1 --pretty=%ct`
+			# git repo not guaranteed, so fall back to REPRODUCIBLE_TIMESTAMP_ROOTFS
+			if [ $? -ne 0 ]; then
+				SOURCE_DATE_EPOCH=${REPRODUCIBLE_TIMESTAMP_ROOTFS}
+			fi
+			cd $olddir
+		fi
+
+		ts=`LC_ALL=C date -d @$SOURCE_DATE_EPOCH`
+		export KBUILD_BUILD_TIMESTAMP="$ts"
+		export KCONFIG_NOTIMESTAMP=1
+		bbnote "KBUILD_BUILD_TIMESTAMP: $ts"
+	fi
 	# The $use_alternate_initrd is only set from
 	# do_bundle_initramfs() This variable is specifically for the
 	# case where we are making a second pass at the kernel
@@ -270,20 +287,22 @@
 		copy_initramfs
 		use_alternate_initrd=CONFIG_INITRAMFS_SOURCE=${B}/usr/${INITRAMFS_IMAGE_NAME}.cpio
 	fi
+	cc_extra=$(get_cc_option)
 	for typeformake in ${KERNEL_IMAGETYPE_FOR_MAKE} ; do
-		oe_runmake ${typeformake} CC="${KERNEL_CC}" LD="${KERNEL_LD}" ${KERNEL_EXTRA_ARGS} $use_alternate_initrd
+		oe_runmake ${typeformake} CC="${KERNEL_CC} $cc_extra " LD="${KERNEL_LD}" ${KERNEL_EXTRA_ARGS} $use_alternate_initrd
 	done
 	# vmlinux.gz is not built by kernel
 	if (echo "${KERNEL_IMAGETYPES}" | grep -wq "vmlinux\.gz"); then
 		mkdir -p "${KERNEL_OUTPUT_DIR}"
-		gzip -9c < ${B}/vmlinux > "${KERNEL_OUTPUT_DIR}/vmlinux.gz"
+		gzip -9cn < ${B}/vmlinux > "${KERNEL_OUTPUT_DIR}/vmlinux.gz"
 	fi
 }
 
 do_compile_kernelmodules() {
 	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS MACHINE
 	if (grep -q -i -e '^CONFIG_MODULES=y$' ${B}/.config); then
-		oe_runmake -C ${B} ${PARALLEL_MAKE} modules CC="${KERNEL_CC}" LD="${KERNEL_LD}" ${KERNEL_EXTRA_ARGS}
+		cc_extra=$(get_cc_option)
+		oe_runmake -C ${B} ${PARALLEL_MAKE} modules CC="${KERNEL_CC} $cc_extra " LD="${KERNEL_LD}" ${KERNEL_EXTRA_ARGS}
 
 		# Module.symvers gets updated during the 
 		# building of the kernel modules. We need to
@@ -320,6 +339,7 @@
 	install -d ${D}/boot
 	for type in ${KERNEL_IMAGETYPES} ; do
 		install -m 0644 ${KERNEL_OUTPUT_DIR}/${type} ${D}/${KERNEL_IMAGEDEST}/${type}-${KERNEL_VERSION}
+		ln -sf ${type}-${KERNEL_VERSION} ${D}/${KERNEL_IMAGEDEST}/${type}
 	done
 	install -m 0644 System.map ${D}/boot/System.map-${KERNEL_VERSION}
 	install -m 0644 .config ${D}/boot/config-${KERNEL_VERSION}
@@ -575,19 +595,27 @@
 addtask strip before do_sizecheck after do_kernel_link_images
 
 # Support checking the kernel size since some kernels need to reside in partitions
-# with a fixed length or there is a limit in transferring the kernel to memory
+# with a fixed length or there is a limit in transferring the kernel to memory.
+# If more than one image type is enabled, warn on any that don't fit but only fail
+# if none fit.
 do_sizecheck() {
 	if [ ! -z "${KERNEL_IMAGE_MAXSIZE}" ]; then
 		invalid=`echo ${KERNEL_IMAGE_MAXSIZE} | sed 's/[0-9]//g'`
 		if [ -n "$invalid" ]; then
-			die "Invalid KERNEL_IMAGE_MAXSIZE: ${KERNEL_IMAGE_MAXSIZE}, should be an integerx (The unit is Kbytes)"
+			die "Invalid KERNEL_IMAGE_MAXSIZE: ${KERNEL_IMAGE_MAXSIZE}, should be an integer (The unit is Kbytes)"
 		fi
+		at_least_one_fits=
 		for type in ${KERNEL_IMAGETYPES} ; do
 			size=`du -ks ${B}/${KERNEL_OUTPUT_DIR}/$type | awk '{print $1}'`
 			if [ $size -ge ${KERNEL_IMAGE_MAXSIZE} ]; then
-				warn "This kernel $type (size=$size(K) > ${KERNEL_IMAGE_MAXSIZE}(K)) is too big for your device. Please reduce the size of the kernel by making more of it modular."
+				bbwarn "This kernel $type (size=$size(K) > ${KERNEL_IMAGE_MAXSIZE}(K)) is too big for your device."
+			else
+				at_least_one_fits=y
 			fi
 		done
+		if [ -z "$at_least_one_fits" ]; then
+			die "All kernel images are too big for your device. Please reduce the size of the kernel by making more of it modular."
+		fi
 	fi
 }
 do_sizecheck[dirs] = "${B}"
@@ -642,3 +670,6 @@
 addtask deploy after do_populate_sysroot do_packagedata
 
 EXPORT_FUNCTIONS do_deploy
+
+# Add device tree support
+inherit kernel-devicetree
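
The relaxed size check only fails when every enabled image type exceeds the limit. A machine configuration sketch (values are illustrative):

    # limit is expressed in kilobytes
    KERNEL_IMAGETYPES = "zImage fitImage"
    KERNEL_IMAGE_MAXSIZE = "8192"
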
diff --git a/import-layers/yocto-poky/meta/classes/license.bbclass b/import-layers/yocto-poky/meta/classes/license.bbclass
index b1fffe7..d353110 100644
--- a/import-layers/yocto-poky/meta/classes/license.bbclass
+++ b/import-layers/yocto-poky/meta/classes/license.bbclass
@@ -255,14 +255,9 @@
     """
 
     depends = []
-    boot_depends_string = ""
     taskdepdata = d.getVar("BB_TASKDEPDATA", False)
-    # Only bootimg and bootdirectdisk include the depends flag
-    boot_tasks = ["do_bootimg", "do_bootdirectdisk",]
-
-    for task in boot_tasks:
-        boot_depends_string = "%s %s" % (boot_depends_string,
-                d.getVarFlag(task, "depends") or "")
+    # Only bootimg includes the depends flag
+    boot_depends_string = d.getVarFlag("do_bootimg", "depends") or ""
     boot_depends = [dep.split(":")[0] for dep
                 in boot_depends_string.split()
                 if not dep.split(":")[0].endswith("-native")]
diff --git a/import-layers/yocto-poky/meta/classes/linuxloader.bbclass b/import-layers/yocto-poky/meta/classes/linuxloader.bbclass
index 117b030..8f30eb3 100644
--- a/import-layers/yocto-poky/meta/classes/linuxloader.bbclass
+++ b/import-layers/yocto-poky/meta/classes/linuxloader.bbclass
@@ -1,5 +1,8 @@
+LDSO_TCLIBC = "glibc"
+LDSO_TCLIBC_libc-musl = "musl"
+LDSO_TCLIBC_libc-baremetal = "musl"
 
-linuxloader () {
+linuxloader_glibc () {
 	case ${TARGET_ARCH} in
 		powerpc | microblaze )
 			dynamic_loader="${base_libdir}/ld.so.1"
@@ -28,3 +31,40 @@
 	esac
 	echo $dynamic_loader
 }
+
+linuxloader_musl () {
+	case ${TARGET_ARCH} in
+		microblaze* )
+			dynamic_loader="${base_libdir}/ld-musl-microblaze${@bb.utils.contains('TUNE_FEATURES', 'bigendian', '', 'el' ,d)}.so.1"
+			;;
+		mips* )
+			dynamic_loader="${base_libdir}/ld-musl-mips${ABIEXTENSION}${MIPSPKGSFX_BYTE}${MIPSPKGSFX_R6}${MIPSPKGSFX_ENDIAN}${@['', '-sf'][d.getVar('TARGET_FPU') == 'soft']}.so.1"
+			;;
+		powerpc )
+			dynamic_loader="${base_libdir}/ld-musl-powerpc${@['', '-sf'][d.getVar('TARGET_FPU') == 'soft']}.so.1"
+			;;
+		powerpc64 )
+			dynamic_loader="${base_libdir}/ld-musl-powerpc64.so.1"
+			;;
+		x86_64 )
+			dynamic_loader="${base_libdir}/ld-musl-x86_64.so.1"
+			;;
+		i*86 )
+			dynamic_loader="${base_libdir}/ld-musl-i386.so.1"
+			;;
+		arm* )
+			dynamic_loader="${base_libdir}/ld-musl-arm${ARMPKGSFX_ENDIAN}${ARMPKGSFX_EABI}.so.1"
+			;;
+		aarch64* )
+			dynamic_loader="${base_libdir}/ld-musl-aarch64${ARMPKGSFX_ENDIAN_64}.so.1"
+			;;
+		* )
+			dynamic_loader="/unknown_dynamic_linker"
+			;;
+	esac
+	echo $dynamic_loader
+}
+
+linuxloader () {
+	linuxloader_${LDSO_TCLIBC}
+}
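
linuxloader() now dispatches on LDSO_TCLIBC, which the libc-musl override flips, so selecting the C library in local.conf is enough to switch the reported dynamic loader:

    TCLIBC = "musl"
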
diff --git a/import-layers/yocto-poky/meta/classes/live-vm-common.bbclass b/import-layers/yocto-poky/meta/classes/live-vm-common.bbclass
index 27b137d..e1d8b18 100644
--- a/import-layers/yocto-poky/meta/classes/live-vm-common.bbclass
+++ b/import-layers/yocto-poky/meta/classes/live-vm-common.bbclass
@@ -15,6 +15,8 @@
 EFI_PROVIDER ?= "grub-efi"
 EFI_CLASS = "${@bb.utils.contains("MACHINE_FEATURES", "efi", "${EFI_PROVIDER}", "", d)}"
 
+MKDOSFS_EXTRAOPTS ??= "-S 512"
+
 # Include legacy boot if MACHINE_FEATURES includes "pcbios" or if it does not
 # contain "efi". This way legacy is supported by default if neither is
 # specified, maintaining the original behavior.
diff --git a/import-layers/yocto-poky/meta/classes/mirrors.bbclass b/import-layers/yocto-poky/meta/classes/mirrors.bbclass
index 4ad814f..766f1cb 100644
--- a/import-layers/yocto-poky/meta/classes/mirrors.bbclass
+++ b/import-layers/yocto-poky/meta/classes/mirrors.bbclass
@@ -30,21 +30,15 @@
 ftp://ftp.gnutls.org/gcrypt/gnutls ${GNUPG_MIRROR}/gnutls \n \
 http://ftp.info-zip.org/pub/infozip/src/ http://mirror.switch.ch/ftp/mirror/infozip/src/ \n \
 http://ftp.info-zip.org/pub/infozip/src/ ftp://sunsite.icm.edu.pl/pub/unix/archiving/info-zip/src/ \n \
-ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/  ftp://ftp.cerias.purdue.edu/pub/tools/unix/sysutils/lsof/ \n \
-ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/  ftp://ftp.tau.ac.il/pub/unix/admin/ \n \
-ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/  ftp://ftp.cert.dfn.de/pub/tools/admin/lsof/ \n \
-ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/  ftp://ftp.fu-berlin.de/pub/unix/tools/lsof/ \n \
-ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/  ftp://ftp.kaizo.org/pub/lsof/ \n \
-ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/  ftp://ftp.tu-darmstadt.de/pub/sysadmin/lsof/ \n \
-ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/  ftp://ftp.tux.org/pub/sites/vic.cc.purdue.edu/tools/unix/lsof/ \n \
-ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/  ftp://gd.tuwien.ac.at/utils/admin-tools/lsof/ \n \
-ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/  ftp://sunsite.ualberta.ca/pub/Mirror/lsof/ \n \
-ftp://lsof.itap.purdue.edu/pub/tools/unix/lsof/  ftp://the.wiretapped.net/pub/security/host-security/lsof/ \n \
+http://www.mirrorservice.org/sites/lsof.itap.purdue.edu/pub/tools/unix/lsof/ http://www.mirrorservice.org/sites/lsof.itap.purdue.edu/pub/tools/unix/lsof/OLD/ \n \
 ${APACHE_MIRROR}  http://www.us.apache.org/dist \n \
 ${APACHE_MIRROR}  http://archive.apache.org/dist \n \
 http://downloads.sourceforge.net/watchdog/ http://fossies.org/linux/misc/ \n \
 ${SAVANNAH_GNU_MIRROR} http://download-mirror.savannah.gnu.org/releases \n \
 ${SAVANNAH_NONGNU_MIRROR} http://download-mirror.savannah.nongnu.org/releases \n \
+ftp://sourceware.org/pub http://mirrors.kernel.org/sourceware \n \
+ftp://sourceware.org/pub http://gd.tuwien.ac.at/gnu/sourceware \n \
+ftp://sourceware.org/pub http://ftp.gwdg.de/pub/linux/sources.redhat.com/sourceware \n \
 cvs://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \n \
 svn://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \n \
 git://.*/.*     http://downloads.yoctoproject.org/mirror/sources/ \n \
diff --git a/import-layers/yocto-poky/meta/classes/module.bbclass b/import-layers/yocto-poky/meta/classes/module.bbclass
index 802476b..78d1b21 100644
--- a/import-layers/yocto-poky/meta/classes/module.bbclass
+++ b/import-layers/yocto-poky/meta/classes/module.bbclass
@@ -1,6 +1,6 @@
 inherit module-base kernel-module-split pkgconfig
 
-addtask make_scripts after do_prepare_recipe_sysroot before do_compile
+addtask make_scripts after do_prepare_recipe_sysroot before do_configure
 do_make_scripts[lockfiles] = "${TMPDIR}/kernel-scripts.lock"
 do_make_scripts[depends] += "virtual/kernel:do_shared_workdir"
 
@@ -18,6 +18,26 @@
     d.setVar('KBUILD_EXTRA_SYMBOLS', " ".join(extra_symbols))
 }
 
+python do_devshell_prepend () {
+    os.environ['CFLAGS'] = ''
+    os.environ['CPPFLAGS'] = ''
+    os.environ['CXXFLAGS'] = ''
+    os.environ['LDFLAGS'] = ''
+
+    os.environ['KERNEL_PATH'] = d.getVar('STAGING_KERNEL_DIR')
+    os.environ['KERNEL_SRC'] = d.getVar('STAGING_KERNEL_DIR')
+    os.environ['KERNEL_VERSION'] = d.getVar('KERNEL_VERSION')
+    os.environ['CC'] = d.getVar('KERNEL_CC')
+    os.environ['LD'] = d.getVar('KERNEL_LD')
+    os.environ['AR'] = d.getVar('KERNEL_AR')
+    os.environ['O'] = d.getVar('STAGING_KERNEL_BUILDDIR')
+    kbuild_extra_symbols = d.getVar('KBUILD_EXTRA_SYMBOLS')
+    if kbuild_extra_symbols:
+        os.environ['KBUILD_EXTRA_SYMBOLS'] = kbuild_extra_symbols
+    else:
+        os.environ['KBUILD_EXTRA_SYMBOLS'] = ''
+}
+
 module_do_compile() {
 	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
 	oe_runmake KERNEL_PATH=${STAGING_KERNEL_DIR}   \
diff --git a/import-layers/yocto-poky/meta/classes/multilib.bbclass b/import-layers/yocto-poky/meta/classes/multilib.bbclass
index ab04597..816f54e 100644
--- a/import-layers/yocto-poky/meta/classes/multilib.bbclass
+++ b/import-layers/yocto-poky/meta/classes/multilib.bbclass
@@ -4,7 +4,9 @@
     if cls != "multilib" or not variant:
         return
 
-    e.data.setVar('STAGING_KERNEL_DIR', e.data.getVar('STAGING_KERNEL_DIR'))
+    localdata = bb.data.createCopy(e.data)
+    localdata.delVar('TMPDIR')
+    e.data.setVar('STAGING_KERNEL_DIR', localdata.getVar('STAGING_KERNEL_DIR'))
 
     # There should only be one kernel in multilib configs
     # We also skip multilib setup for module packages.
diff --git a/import-layers/yocto-poky/meta/classes/native.bbclass b/import-layers/yocto-poky/meta/classes/native.bbclass
index 6b7f3dd..9c434dc 100644
--- a/import-layers/yocto-poky/meta/classes/native.bbclass
+++ b/import-layers/yocto-poky/meta/classes/native.bbclass
@@ -108,7 +108,7 @@
 PKG_CONFIG_SYSTEM_LIBRARY_PATH[unexport] = "1"
 PKG_CONFIG_SYSTEM_INCLUDE_PATH[unexport] = "1"
 
-# we dont want libc-uclibc or libc-glibc to kick in for native recipes
+# we don't want libc-*libc to kick in for native recipes
 LIBCOVERRIDE = ""
 CLASSOVERRIDE = "class-native"
 MACHINEOVERRIDES = ""
diff --git a/import-layers/yocto-poky/meta/classes/own-mirrors.bbclass b/import-layers/yocto-poky/meta/classes/own-mirrors.bbclass
index 0296d54..a777835 100644
--- a/import-layers/yocto-poky/meta/classes/own-mirrors.bbclass
+++ b/import-layers/yocto-poky/meta/classes/own-mirrors.bbclass
@@ -1,13 +1,13 @@
-PREMIRRORS() {
-cvs://.*/.*     ${SOURCE_MIRROR_URL}
-svn://.*/.*     ${SOURCE_MIRROR_URL}
-git://.*/.*     ${SOURCE_MIRROR_URL}
-gitsm://.*/.*   ${SOURCE_MIRROR_URL}
-hg://.*/.*      ${SOURCE_MIRROR_URL}
-bzr://.*/.*     ${SOURCE_MIRROR_URL}
-p4://.*/.*      ${SOURCE_MIRROR_URL}
-osc://.*/.*     ${SOURCE_MIRROR_URL}
-https?$://.*/.* ${SOURCE_MIRROR_URL}
-ftp://.*/.*     ${SOURCE_MIRROR_URL}
-npm://.*/?.*    ${SOURCE_MIRROR_URL}
-}
+PREMIRRORS_prepend = " \
+cvs://.*/.*     ${SOURCE_MIRROR_URL} \n \
+svn://.*/.*     ${SOURCE_MIRROR_URL} \n \
+git://.*/.*     ${SOURCE_MIRROR_URL} \n \
+gitsm://.*/.*   ${SOURCE_MIRROR_URL} \n \
+hg://.*/.*      ${SOURCE_MIRROR_URL} \n \
+bzr://.*/.*     ${SOURCE_MIRROR_URL} \n \
+p4://.*/.*      ${SOURCE_MIRROR_URL} \n \
+osc://.*/.*     ${SOURCE_MIRROR_URL} \n \
+https?$://.*/.* ${SOURCE_MIRROR_URL} \n \
+ftp://.*/.*     ${SOURCE_MIRROR_URL} \n \
+npm://.*/?.*    ${SOURCE_MIRROR_URL} \n \
+"
diff --git a/import-layers/yocto-poky/meta/classes/package.bbclass b/import-layers/yocto-poky/meta/classes/package.bbclass
index a03c05b..2053d46 100644
--- a/import-layers/yocto-poky/meta/classes/package.bbclass
+++ b/import-layers/yocto-poky/meta/classes/package.bbclass
@@ -737,9 +737,7 @@
     def get_fs_perms_list(d):
         str = ""
         bbpath = d.getVar('BBPATH')
-        fs_perms_tables = d.getVar('FILESYSTEM_PERMS_TABLES')
-        if not fs_perms_tables:
-            fs_perms_tables = 'files/fs-perms.txt'
+        fs_perms_tables = d.getVar('FILESYSTEM_PERMS_TABLES') or ""
         for conf_file in fs_perms_tables.split():
             str += " %s" % bb.utils.which(bbpath, conf_file)
         return str
@@ -879,6 +877,11 @@
         debugdir = "/.debug"
         debuglibdir = ""
         debugsrcdir = ""
+    elif d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-with-srcpkg':
+        debugappend = ""
+        debugdir = "/.debug"
+        debuglibdir = ""
+        debugsrcdir = "/usr/src/debug"
     else:
         # Original OE-core, a.k.a. ".debug", style debug info
         debugappend = ""
@@ -1092,6 +1095,15 @@
     
     autodebug = not (d.getVar("NOAUTOPACKAGEDEBUG") or False)
 
+    split_source_package = (d.getVar('PACKAGE_DEBUG_SPLIT_STYLE') == 'debug-with-srcpkg')
+
+    # If debug-with-srcpkg mode is enabled then the src package is added
+    # into the package list and the source directory as its main content
+    if split_source_package:
+        src_package_name = ('%s-src' % d.getVar('PN'))
+        packages += (' ' + src_package_name)
+        d.setVar('FILES_%s' % src_package_name, '/usr/src/debug')
+
     # Sanity check PACKAGES for duplicates
     # Sanity should be moved to sanity.bbclass once we have the infrastucture
     package_list = []
@@ -1100,7 +1112,12 @@
         if pkg in package_list:
             msg = "%s is listed in PACKAGES multiple times, this leads to packaging errors." % pkg
             package_qa_handle_error("packages-list", msg, d)
-        elif autodebug and pkg.endswith("-dbg"):
+        # If debug-with-srcpkg mode is enabled then the src package will have
+        # priority over dbg package when assigning the files.
+        # This allows src package to include source files and remove them from dbg.
+        elif split_source_package and pkg.endswith("-src"):
+            package_list.insert(0, pkg)
+        elif autodebug and pkg.endswith("-dbg") and not split_source_package:
             package_list.insert(0, pkg)
         else:
             package_list.append(pkg)
@@ -1434,13 +1451,7 @@
 fi
 }
 
-# In Morty and earlier releases, and on master (Rocko), the RPM file
-# dependencies are always enabled. However, since they were broken with the
-# release of Pyro and enabling them may cause build problems for some packages,
-# they are not enabled by default in Pyro. Setting ENABLE_RPM_FILEDEPS_FOR_PYRO
-# to "1" will enable them again.
-ENABLE_RPM_FILEDEPS_FOR_PYRO ??= "0"
-RPMDEPS = "${STAGING_LIBDIR_NATIVE}/rpm/rpmdeps${@' --alldeps' if d.getVar('ENABLE_RPM_FILEDEPS_FOR_PYRO') == '1' else ''}"
+RPMDEPS = "${STAGING_LIBDIR_NATIVE}/rpm/rpmdeps --alldeps"
 
 # Collect perfile run-time dependency metadata
 # Output:
@@ -1465,7 +1476,7 @@
     for pkg in packages.split():
         if d.getVar('SKIP_FILEDEPS_' + pkg) == '1':
             continue
-        if pkg.endswith('-dbg') or pkg.endswith('-doc') or pkg.find('-locale-') != -1 or pkg.find('-localedata-') != -1 or pkg.find('-gconv-') != -1 or pkg.find('-charmap-') != -1 or pkg.startswith('kernel-module-'):
+        if pkg.endswith('-dbg') or pkg.endswith('-doc') or pkg.find('-locale-') != -1 or pkg.find('-localedata-') != -1 or pkg.find('-gconv-') != -1 or pkg.find('-charmap-') != -1 or pkg.startswith('kernel-module-') or pkg.endswith('-src'):
             continue
         for files in chunks(pkgfiles[pkg], 100):
             pkglist.append((pkg, files, rpmdeps, pkgdest))
@@ -1583,7 +1594,7 @@
                 combos.append("-".join(options[0:i]))
             return combos
 
-        if (file.endswith('.dylib') or file.endswith('.so')) and not pkg.endswith('-dev') and not pkg.endswith('-dbg'):
+        if (file.endswith('.dylib') or file.endswith('.so')) and not pkg.endswith('-dev') and not pkg.endswith('-dbg') and not pkg.endswith('-src'):
             # Drop suffix
             name = os.path.basename(file).rsplit(".",1)[0]
             # Find all combinations
@@ -2060,7 +2071,7 @@
     # cache.  This is useful if an item this class depends on changes in a
     # way that the output of this class changes.  rpmdeps is a good example
     # as any change to rpmdeps requires this to be rerun.
-    # PACKAGE_BBCLASS_VERSION = "1"
+    # PACKAGE_BBCLASS_VERSION = "2"
 
     # Init cachedpath
     global cpath
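
The new split style is opt-in; a local.conf sketch, which moves the /usr/src/debug contents out of ${PN}-dbg and into the generated ${PN}-src package described above:

    PACKAGE_DEBUG_SPLIT_STYLE = "debug-with-srcpkg"
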
diff --git a/import-layers/yocto-poky/meta/classes/package_deb.bbclass b/import-layers/yocto-poky/meta/classes/package_deb.bbclass
index eacabcd..5d29793 100644
--- a/import-layers/yocto-poky/meta/classes/package_deb.bbclass
+++ b/import-layers/yocto-poky/meta/classes/package_deb.bbclass
@@ -39,50 +39,79 @@
     if arch == "arm":
         return arch + ["el", "hf"]["callconvention-hard" in tune_features]
     return arch
-#
-# install a bunch of packages using apt
-# the following shell variables needs to be set before calling this func:
-# INSTALL_ROOTFS_DEB - install root dir
-# INSTALL_BASEARCH_DEB - install base architecutre
-# INSTALL_ARCHS_DEB - list of available archs
-# INSTALL_PACKAGES_NORMAL_DEB - packages to be installed
-# INSTALL_PACKAGES_ATTEMPTONLY_DEB - packages attempted to be installed only
-# INSTALL_PACKAGES_LINGUAS_DEB - additional packages for uclibc
-# INSTALL_TASK_DEB - task name
 
 python do_package_deb () {
-    import re, copy
-    import textwrap
-    import subprocess
-    import collections
-    import codecs
+
+    import multiprocessing
+    import traceback
+
+    class DebianWritePkgProcess(multiprocessing.Process):
+        def __init__(self, *args, **kwargs):
+            multiprocessing.Process.__init__(self, *args, **kwargs)
+            self._pconn, self._cconn = multiprocessing.Pipe()
+            self._exception = None
+
+        def run(self):
+            try:
+                multiprocessing.Process.run(self)
+                self._cconn.send(None)
+            except Exception as e:
+                tb = traceback.format_exc()
+                self._cconn.send((e, tb))
+
+        @property
+        def exception(self):
+            if self._pconn.poll():
+                self._exception = self._pconn.recv()
+            return self._exception
 
     oldcwd = os.getcwd()
 
-    workdir = d.getVar('WORKDIR')
-    if not workdir:
-        bb.error("WORKDIR not defined, unable to package")
-        return
-
-    outdir = d.getVar('PKGWRITEDIRDEB')
-    if not outdir:
-        bb.error("PKGWRITEDIRDEB not defined, unable to package")
-        return
-
     packages = d.getVar('PACKAGES')
     if not packages:
         bb.debug(1, "PACKAGES not defined, nothing to package")
         return
 
     tmpdir = d.getVar('TMPDIR')
-
     if os.access(os.path.join(tmpdir, "stamps", "DEB_PACKAGE_INDEX_CLEAN"),os.R_OK):
         os.unlink(os.path.join(tmpdir, "stamps", "DEB_PACKAGE_INDEX_CLEAN"))
 
-    if packages == []:
-        bb.debug(1, "No packages; nothing to do")
-        return
+    max_process = int(d.getVar("BB_NUMBER_THREADS") or os.cpu_count() or 1)
+    launched = []
+    error = None
+    pkgs = packages.split()
+    while not error and pkgs:
+        if len(launched) < max_process:
+            p = DebianWritePkgProcess(target=deb_write_pkg, args=(pkgs.pop(), d))
+            p.start()
+            launched.append(p)
+        for q in launched:
+            # The finished processes are joined when calling is_alive()
+            if not q.is_alive():
+                launched.remove(q)
+            if q.exception:
+                error, traceback = q.exception
+                break
 
+    for p in launched:
+        p.join()
+
+    os.chdir(oldcwd)
+
+    if error:
+        raise error
+}
+do_package_deb[vardeps] += "deb_write_pkg"
+do_package_deb[vardepsexclude] = "BB_NUMBER_THREADS"
+
+def deb_write_pkg(pkg, d):
+    import re, copy
+    import textwrap
+    import subprocess
+    import collections
+    import codecs
+
+    outdir = d.getVar('PKGWRITEDIRDEB')
     pkgdest = d.getVar('PKGDEST')
 
     def cleanupcontrol(root):
@@ -91,11 +120,11 @@
             if os.path.exists(p):
                 bb.utils.prunedir(p)
 
-    for pkg in packages.split():
-        localdata = bb.data.createCopy(d)
-        root = "%s/%s" % (pkgdest, pkg)
+    localdata = bb.data.createCopy(d)
+    root = "%s/%s" % (pkgdest, pkg)
 
-        lf = bb.utils.lockfile(root + ".lock")
+    lf = bb.utils.lockfile(root + ".lock")
+    try:
 
         localdata.setVar('ROOT', '')
         localdata.setVar('ROOT_%s' % pkg, root)
@@ -117,8 +146,7 @@
         g = glob('*')
         if not g and localdata.getVar('ALLOW_EMPTY', False) != "1":
             bb.note("Not creating empty archive for %s-%s-%s" % (pkg, localdata.getVar('PKGV'), localdata.getVar('PKGR')))
-            bb.utils.unlockfile(lf)
-            continue
+            return
 
         controldir = os.path.join(root, 'DEBIAN')
         bb.utils.mkdirhier(controldir)
@@ -194,8 +222,8 @@
         mapping_rename_hook(localdata)
 
         def debian_cmp_remap(var):
-            # dpkg does not allow for '(' or ')' in a dependency name
-            # replace these instances with '__' and '__'
+            # dpkg does not allow for '(', ')' or ':' in a dependency name
+            # Replace any instances of them with '__'
             #
             # In debian '>' and '<' do not mean what it appears they mean
             #   '<' = less or equal
@@ -204,8 +232,7 @@
             #
             for dep in var:
                 if '(' in dep:
-                    newdep = dep.replace('(', '__')
-                    newdep = newdep.replace(')', '__')
+                    newdep = re.sub(r'[(:)]', '__', dep)
                     if newdep != dep:
                         var[newdep] = var[dep]
                         del var[dep]
@@ -289,17 +316,19 @@
             conffiles.close()
 
         os.chdir(basedir)
-        subprocess.check_output("PATH=\"%s\" dpkg-deb -b %s %s" % (localdata.getVar("PATH"), root, pkgoutdir), shell=True)
+        subprocess.check_output("PATH=\"%s\" dpkg-deb -b %s %s" % (localdata.getVar("PATH"), root, pkgoutdir),
+                                stderr=subprocess.STDOUT,
+                                shell=True)
 
+    finally:
         cleanupcontrol(root)
         bb.utils.unlockfile(lf)
-    os.chdir(oldcwd)
-}
+
+# Otherwise allarch packages may change depending on override configuration
+deb_write_pkg[vardepsexclude] = "OVERRIDES"
+
 # Indirect references to these vars
 do_package_write_deb[vardeps] += "PKGV PKGR PKGV DESCRIPTION SECTION PRIORITY MAINTAINER DPKG_ARCH PN HOMEPAGE"
-# Otherwise allarch packages may change depending on override configuration
-do_package_deb[vardepsexclude] = "OVERRIDES"
-
 
 SSTATETASKS += "do_package_write_deb"
 do_package_write_deb[sstate-inputdirs] = "${PKGWRITEDIRDEB}"
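
Per-package .deb creation now fans out across worker processes, capped by BB_NUMBER_THREADS (deliberately excluded from the task signature above). A local.conf sketch with an illustrative thread count:

    BB_NUMBER_THREADS = "8"
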
diff --git a/import-layers/yocto-poky/meta/classes/package_ipk.bbclass b/import-layers/yocto-poky/meta/classes/package_ipk.bbclass
index a1e51ee..6c1fdaa 100644
--- a/import-layers/yocto-poky/meta/classes/package_ipk.bbclass
+++ b/import-layers/yocto-poky/meta/classes/package_ipk.bbclass
@@ -12,15 +12,34 @@
 
 OPKG_ARGS += "--force_postinstall --prefer-arch-to-version"
 OPKG_ARGS += "${@['', '--no-install-recommends'][d.getVar("NO_RECOMMENDATIONS") == "1"]}"
-OPKG_ARGS += "${@['', '--add-exclude ' + ' --add-exclude '.join((d.getVar('PACKAGE_EXCLUDE') or "").split())][(d.getVar("PACKAGE_EXCLUDE") or "") != ""]}"
+OPKG_ARGS += "${@['', '--add-exclude ' + ' --add-exclude '.join((d.getVar('PACKAGE_EXCLUDE') or "").split())][(d.getVar("PACKAGE_EXCLUDE") or "").strip() != ""]}"
 
 OPKGLIBDIR = "${localstatedir}/lib"
 
 python do_package_ipk () {
-    import re, copy
-    import textwrap
-    import subprocess
-    import collections
+    import multiprocessing
+    import traceback
+
+    class IPKWritePkgProcess(multiprocessing.Process):
+        def __init__(self, *args, **kwargs):
+            multiprocessing.Process.__init__(self, *args, **kwargs)
+            self._pconn, self._cconn = multiprocessing.Pipe()
+            self._exception = None
+
+        def run(self):
+            try:
+                multiprocessing.Process.run(self)
+                self._cconn.send(None)
+            except Exception as e:
+                tb = traceback.format_exc()
+                self._cconn.send((e, tb))
+
+        @property
+        def exception(self):
+            if self._pconn.poll():
+                self._exception = self._pconn.recv()
+            return self._exception
+
 
     oldcwd = os.getcwd()
 
@@ -42,20 +61,55 @@
     if os.access(os.path.join(tmpdir, "stamps", "IPK_PACKAGE_INDEX_CLEAN"), os.R_OK):
         os.unlink(os.path.join(tmpdir, "stamps", "IPK_PACKAGE_INDEX_CLEAN"))
 
+    max_process = int(d.getVar("BB_NUMBER_THREADS") or os.cpu_count() or 1)
+    launched = []
+    error = None
+    pkgs = packages.split()
+    while not error and pkgs:
+        if len(launched) < max_process:
+            p = IPKWritePkgProcess(target=ipk_write_pkg, args=(pkgs.pop(), d))
+            p.start()
+            launched.append(p)
+        for q in launched:
+            # The finished processes are joined when calling is_alive()
+            if not q.is_alive():
+                launched.remove(q)
+            if q.exception:
+                error, traceback = q.exception
+                break
+
+    for p in launched:
+        p.join()
+
+    os.chdir(oldcwd)
+
+    if error:
+        raise error
+}
+do_package_ipk[vardeps] += "ipk_write_pkg"
+do_package_ipk[vardepsexclude] = "BB_NUMBER_THREADS"
+
+def ipk_write_pkg(pkg, d):
+    import re, copy
+    import subprocess
+    import textwrap
+    import collections
+
     def cleanupcontrol(root):
         for p in ['CONTROL', 'DEBIAN']:
             p = os.path.join(root, p)
             if os.path.exists(p):
                 bb.utils.prunedir(p)
 
+    outdir = d.getVar('PKGWRITEDIRIPK')
+    pkgdest = d.getVar('PKGDEST')
     recipesource = os.path.basename(d.getVar('FILE'))
 
-    for pkg in packages.split():
-        localdata = bb.data.createCopy(d)
-        root = "%s/%s" % (pkgdest, pkg)
+    localdata = bb.data.createCopy(d)
+    root = "%s/%s" % (pkgdest, pkg)
 
-        lf = bb.utils.lockfile(root + ".lock")
-
+    lf = bb.utils.lockfile(root + ".lock")
+    try:
         localdata.setVar('ROOT', '')
         localdata.setVar('ROOT_%s' % pkg, root)
         pkgname = localdata.getVar('PKG_%s' % pkg)
@@ -100,8 +154,7 @@
         g = glob('*')
         if not g and localdata.getVar('ALLOW_EMPTY', False) != "1":
             bb.note("Not creating empty archive for %s-%s-%s" % (pkg, localdata.getVar('PKGV'), localdata.getVar('PKGR')))
-            bb.utils.unlockfile(lf)
-            continue
+            return
 
         controldir = os.path.join(root, 'CONTROL')
         bb.utils.mkdirhier(controldir)
@@ -142,16 +195,9 @@
                 description = localdata.getVar('DESCRIPTION') or "."
                 description = textwrap.dedent(description).strip()
                 if '\\n' in description:
-                    # Manually indent
+                    # Manually indent: multiline description includes a leading space
                     for t in description.split('\\n'):
-                        # We don't limit the width when manually indent, but we do
-                        # need the textwrap.fill() to set the initial_indent and
-                        # subsequent_indent, so set a large width
-                        line = textwrap.fill(t.strip(),
-                                             width=100000,
-                                             initial_indent=' ',
-                                             subsequent_indent=' ') or '.'
-                        ctrlfile.write('%s\n' % line)
+                        ctrlfile.write(' %s\n' % (t.strip() or ' .'))
                 else:
                     # Auto indent
                     ctrlfile.write('%s\n' % textwrap.fill(description, width=74, initial_indent=' ', subsequent_indent=' '))
@@ -228,20 +274,22 @@
 
         os.chdir(basedir)
         subprocess.check_output("PATH=\"%s\" %s %s %s" % (localdata.getVar("PATH"),
-                                                          d.getVar("OPKGBUILDCMD"), pkg, pkgoutdir), shell=True)
+                                                          d.getVar("OPKGBUILDCMD"), pkg, pkgoutdir),
+                                stderr=subprocess.STDOUT,
+                                shell=True)
 
         if d.getVar('IPK_SIGN_PACKAGES') == '1':
             ipkver = "%s-%s" % (d.getVar('PKGV'), d.getVar('PKGR'))
             ipk_to_sign = "%s/%s_%s_%s.ipk" % (pkgoutdir, pkgname, ipkver, d.getVar('PACKAGE_ARCH'))
             sign_ipk(d, ipk_to_sign)
 
+    finally:
         cleanupcontrol(root)
         bb.utils.unlockfile(lf)
 
-    os.chdir(oldcwd)
-}
 # Otherwise allarch packages may change depending on override configuration
-do_package_ipk[vardepsexclude] = "OVERRIDES"
+ipk_write_pkg[vardepsexclude] = "OVERRIDES"
+
 
 SSTATETASKS += "do_package_write_ipk"
 do_package_write_ipk[sstate-inputdirs] = "${PKGWRITEDIRIPK}"
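
do_package_ipk now hands each package to a multiprocessing worker and pipes any exception (plus its traceback) back to the parent, so a failure in one worker fails the whole task with useful context. A self-contained sketch of that worker-pool pattern, with a hypothetical write_pkg() standing in for ipk_write_pkg():

    import multiprocessing
    import traceback

    class WritePkgProcess(multiprocessing.Process):
        # Worker that reports exceptions back to the parent via a Pipe.
        def __init__(self, *args, **kwargs):
            multiprocessing.Process.__init__(self, *args, **kwargs)
            self._pconn, self._cconn = multiprocessing.Pipe()
            self._exception = None

        def run(self):
            try:
                multiprocessing.Process.run(self)
                self._cconn.send(None)
            except Exception as e:
                self._cconn.send((e, traceback.format_exc()))

        @property
        def exception(self):
            if self._pconn.poll():
                self._exception = self._pconn.recv()
            return self._exception

    def write_pkg(pkg):
        # Hypothetical stand-in for ipk_write_pkg(pkg, d)
        if pkg == "broken":
            raise RuntimeError("packaging failed for %s" % pkg)

    if __name__ == "__main__":
        max_procs = 4
        pkgs = ["a", "b", "broken", "c"]
        launched, error = [], None
        while not error and pkgs:
            if len(launched) < max_procs:
                p = WritePkgProcess(target=write_pkg, args=(pkgs.pop(),))
                p.start()
                launched.append(p)
            for q in launched[:]:
                # Finished workers are joined as a side effect of is_alive()
                if not q.is_alive():
                    launched.remove(q)
                if q.exception:
                    error, tb = q.exception
                    break
        for p in launched:
            p.join()
        if error:
            raise error
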
diff --git a/import-layers/yocto-poky/meta/classes/package_rpm.bbclass b/import-layers/yocto-poky/meta/classes/package_rpm.bbclass
index 1deaf83..a428d30 100644
--- a/import-layers/yocto-poky/meta/classes/package_rpm.bbclass
+++ b/import-layers/yocto-poky/meta/classes/package_rpm.bbclass
@@ -646,9 +646,13 @@
     rpmbuild = d.getVar('RPMBUILD')
     targetsys = d.getVar('TARGET_SYS')
     targetvendor = d.getVar('HOST_VENDOR')
+
     # Too many places in dnf stack assume that arch-independent packages are "noarch".
     # Let's not fight against this.
-    package_arch = (d.getVar('PACKAGE_ARCH') or "").replace("-", "_").replace("all", "noarch")
+    package_arch = (d.getVar('PACKAGE_ARCH') or "").replace("-", "_")
+    if package_arch == "all":
+        package_arch = "noarch"
+
     sdkpkgsuffix = (d.getVar('SDKPKGSUFFIX') or "nativesdk").replace("-", "_")
     d.setVar('PACKAGE_ARCH_EXTEND', package_arch)
     pkgwritedir = d.expand('${PKGWRITEDIRRPM}/${PACKAGE_ARCH_EXTEND}')
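
The PACKAGE_ARCH mapping above now converts only the literal arch "all" to "noarch"; the old substring replace() could corrupt any arch string that merely contained "all". A small sketch of the difference (the "allwinner" name is purely illustrative):

    def to_rpm_arch(package_arch):
        # Map OE's 'all' arch to rpm/dnf's 'noarch', leaving real arches intact.
        arch = (package_arch or "").replace("-", "_")
        if arch == "all":
            arch = "noarch"
        return arch

    print(to_rpm_arch("all"))          # noarch
    print(to_rpm_arch("core2-64"))     # core2_64
    # The old substring replace would have mangled a hypothetical arch like this:
    print("allwinner".replace("all", "noarch"))   # noarchwinner -- the bug being avoided
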
diff --git a/import-layers/yocto-poky/meta/classes/packagefeed-stability.bbclass b/import-layers/yocto-poky/meta/classes/packagefeed-stability.bbclass
index c0e9be5..5648602 100644
--- a/import-layers/yocto-poky/meta/classes/packagefeed-stability.bbclass
+++ b/import-layers/yocto-poky/meta/classes/packagefeed-stability.bbclass
@@ -189,7 +189,7 @@
 
     # Remove all the old files and copy again if docopy
     if docopy:
-        bb.plain('Copying packages for recipe %s' % pn)
+        bb.note('Copying packages for recipe %s' % pn)
         pcmanifest = os.path.join(prepath, d.expand('pkg-compare-manifest-${MULTIMACH_TARGET_SYS}-${PN}'))
         try:
             with open(pcmanifest, 'r') as f:
@@ -224,7 +224,7 @@
                     shutil.copyfile(srcpath, destpath)
                 f.write('%s\n' % destpath)
     else:
-        bb.plain('Not copying packages for recipe %s' % pn)
+        bb.note('Not copying packages for recipe %s' % pn)
 
 do_cleansstate[postfuncs] += "pfs_cleanpkgs"
 python pfs_cleanpkgs () {
diff --git a/import-layers/yocto-poky/meta/classes/populate_sdk_base.bbclass b/import-layers/yocto-poky/meta/classes/populate_sdk_base.bbclass
index 563582e..424c63c 100644
--- a/import-layers/yocto-poky/meta/classes/populate_sdk_base.bbclass
+++ b/import-layers/yocto-poky/meta/classes/populate_sdk_base.bbclass
@@ -59,6 +59,9 @@
 
 SDK_TARGET_MANIFEST = "${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.target.manifest"
 SDK_HOST_MANIFEST = "${SDKDEPLOYDIR}/${TOOLCHAIN_OUTPUTNAME}.host.manifest"
+SDK_EXT_TARGET_MANIFEST = "${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.target.manifest"
+SDK_EXT_HOST_MANIFEST = "${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.host.manifest"
+
 python write_target_sdk_manifest () {
     from oe.sdk import sdk_list_installed_packages
     from oe.utils import format_pkg_list
@@ -88,8 +91,9 @@
         output.write(format_pkg_list(pkgs, 'ver'))
 }
 
-POPULATE_SDK_POST_TARGET_COMMAND_append = " write_target_sdk_manifest ; write_sdk_test_data ; "
-POPULATE_SDK_POST_HOST_COMMAND_append = " write_host_sdk_manifest; "
+POPULATE_SDK_POST_TARGET_COMMAND_append = " write_sdk_test_data ; "
+POPULATE_SDK_POST_TARGET_COMMAND_append_task-populate-sdk  = " write_target_sdk_manifest ; "
+POPULATE_SDK_POST_HOST_COMMAND_append_task-populate-sdk = " write_host_sdk_manifest; "
 SDK_PACKAGING_COMMAND = "${@'${SDK_PACKAGING_FUNC};' if '${SDK_PACKAGING_FUNC}' else ''}"
 SDK_POSTPROCESS_COMMAND = " create_sdk_files; check_sdk_sysroots; tar_sdk; ${SDK_PACKAGING_COMMAND} "
 
@@ -97,6 +101,26 @@
     from oe.sdk import populate_sdk
     from oe.manifest import create_manifest, Manifest
 
+    # Handle package exclusions
+    excl_pkgs = (d.getVar("PACKAGE_EXCLUDE") or "").split()
+    inst_pkgs = (d.getVar("PACKAGE_INSTALL") or "").split()
+    inst_attempt_pkgs = (d.getVar("PACKAGE_INSTALL_ATTEMPTONLY") or "").split()
+
+    d.setVar('PACKAGE_INSTALL_ORIG', ' '.join(inst_pkgs))
+    d.setVar('PACKAGE_INSTALL_ATTEMPTONLY', ' '.join(inst_attempt_pkgs))
+
+    for pkg in excl_pkgs:
+        if pkg in inst_pkgs:
+            bb.warn("Package %s, set to be excluded, is in %s PACKAGE_INSTALL (%s).  It will be removed from the list." % (pkg, d.getVar('PN'), inst_pkgs))
+            inst_pkgs.remove(pkg)
+
+        if pkg in inst_attempt_pkgs:
+            bb.warn("Package %s, set to be excluded, is in %s PACKAGE_INSTALL_ATTEMPTONLY (%s).  It will be removed from the list." % (pkg, d.getVar('PN'), inst_attempt_pkgs))
+            inst_attempt_pkgs.remove(pkg)
+
+    d.setVar("PACKAGE_INSTALL", ' '.join(inst_pkgs))
+    d.setVar("PACKAGE_INSTALL_ATTEMPTONLY", ' '.join(inst_attempt_pkgs))
+
     pn = d.getVar('PN')
     runtime_mapping_rename("TOOLCHAIN_TARGET_TASK", pn, d)
     runtime_mapping_rename("TOOLCHAIN_TARGET_TASK_ATTEMPTONLY", pn, d)
@@ -256,8 +280,7 @@
 }
 
 def sdk_command_variables(d):
-    return ['OPKG_PREPROCESS_COMMANDS','OPKG_POSTPROCESS_COMMANDS','POPULATE_SDK_POST_HOST_COMMAND','POPULATE_SDK_POST_TARGET_COMMAND','SDK_POSTPROCESS_COMMAND','RPM_PREPROCESS_COMMANDS',
-            'RPM_POSTPROCESS_COMMANDS']
+    return ['OPKG_PREPROCESS_COMMANDS','OPKG_POSTPROCESS_COMMANDS','POPULATE_SDK_POST_HOST_COMMAND','POPULATE_SDK_PRE_TARGET_COMMAND','POPULATE_SDK_POST_TARGET_COMMAND','SDK_POSTPROCESS_COMMAND','RPM_PREPROCESS_COMMANDS','RPM_POSTPROCESS_COMMANDS']
 
 def sdk_variables(d):
     variables = ['BUILD_IMAGES_FROM_FEEDS','SDK_OS','SDK_OUTPUT','SDKPATHNATIVE','SDKTARGETSYSROOT','SDK_DIR','SDK_VENDOR','SDKIMAGE_INSTALL_COMPLEMENTARY','SDK_PACKAGE_ARCHS','SDK_OUTPUT',
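
The new exclusion handling removes every PACKAGE_EXCLUDE entry from PACKAGE_INSTALL and PACKAGE_INSTALL_ATTEMPTONLY before the SDK is populated, warning for each removal. A plain-Python sketch of that filtering over hypothetical package lists:

    import warnings

    def filter_excluded(install, attempt_only, excluded):
        # Drop excluded packages from both install lists, warning per removal.
        install = list(install)
        attempt_only = list(attempt_only)
        for pkg in excluded:
            if pkg in install:
                warnings.warn("excluded package %s removed from PACKAGE_INSTALL" % pkg)
                install.remove(pkg)
            if pkg in attempt_only:
                warnings.warn("excluded package %s removed from PACKAGE_INSTALL_ATTEMPTONLY" % pkg)
                attempt_only.remove(pkg)
        return install, attempt_only

    # Hypothetical package lists
    inst, attempt = filter_excluded(["busybox", "perl"], ["strace"], ["perl", "strace"])
    print(inst, attempt)   # ['busybox'] []
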
diff --git a/import-layers/yocto-poky/meta/classes/populate_sdk_ext.bbclass b/import-layers/yocto-poky/meta/classes/populate_sdk_ext.bbclass
index 8b8a341..c79ddbb 100644
--- a/import-layers/yocto-poky/meta/classes/populate_sdk_ext.bbclass
+++ b/import-layers/yocto-poky/meta/classes/populate_sdk_ext.bbclass
@@ -33,6 +33,7 @@
                              DL_DIR \
                              SSTATE_DIR \
                              TMPDIR \
+                             BB_SERVER_TIMEOUT \
                             "
 SDK_INHERIT_BLACKLIST ?= "buildhistory icecc"
 SDK_UPDATE_URL ?= ""
@@ -69,7 +70,6 @@
 # COREBASE be preserved as well as untracked files.
 COREBASE_FILES ?= " \
     oe-init-build-env \
-    oe-init-build-env-memres \
     scripts \
     LICENSE \
     .templateconf \
@@ -83,6 +83,39 @@
 SDK_EXT_TARGET_MANIFEST = "${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.target.manifest"
 SDK_EXT_HOST_MANIFEST = "${SDK_DEPLOY}/${TOOLCHAINEXT_OUTPUTNAME}.host.manifest"
 
+python write_target_sdk_ext_manifest () {
+    from oe.sdk import get_extra_sdkinfo
+    sstate_dir = d.expand('${SDK_OUTPUT}/${SDKPATH}/sstate-cache')
+    extra_info = get_extra_sdkinfo(sstate_dir)
+
+    target = d.getVar('TARGET_SYS')
+    target_multimach = d.getVar('MULTIMACH_TARGET_SYS')
+    real_target_multimach = d.getVar('REAL_MULTIMACH_TARGET_SYS')
+
+    pkgs = {}
+    with open(d.getVar('SDK_EXT_TARGET_MANIFEST'), 'w') as f:
+        for fn in extra_info['filesizes']:
+            info = fn.split(':')
+            if info[2] in (target, target_multimach, real_target_multimach) \
+                    or info[5] == 'allarch':
+                if not info[1] in pkgs:
+                    f.write("%s %s %s\n" % (info[1], info[2], info[3]))
+                    pkgs[info[1]] = {}
+}
+python write_host_sdk_ext_manifest () {
+    from oe.sdk import get_extra_sdkinfo
+    sstate_dir = d.expand('${SDK_OUTPUT}/${SDKPATH}/sstate-cache')
+    extra_info = get_extra_sdkinfo(sstate_dir)
+    host = d.getVar('BUILD_SYS')
+    with open(d.getVar('SDK_EXT_HOST_MANIFEST'), 'w') as f:
+        for fn in extra_info['filesizes']:
+            info = fn.split(':')
+            if info[2] == host:
+                f.write("%s %s %s\n" % (info[1], info[2], info[3]))
+}
+
+SDK_POSTPROCESS_COMMAND_append_task-populate-sdk-ext = "write_target_sdk_ext_manifest; write_host_sdk_ext_manifest; "    
+
 SDK_TITLE_task-populate-sdk-ext = "${@d.getVar('DISTRO_NAME') or d.getVar('DISTRO')} Extensible SDK"
 
 def clean_esdk_builddir(d, sdkbasepath):
@@ -111,7 +144,7 @@
         with open(sdkbasepath + '/conf/local.conf', 'a') as f:
             # Force the use of sstate from the build system
             f.write('\nSSTATE_DIR_forcevariable = "%s"\n' % d.getVar('SSTATE_DIR'))
-            f.write('SSTATE_MIRRORS_forcevariable = ""\n')
+            f.write('SSTATE_MIRRORS_forcevariable = "file://universal/(.*) file://universal-4.9/\\1 file://universal-4.9/(.*) file://universal-4.8/\\1"\n')
             # Ensure TMPDIR is the default so that clean_esdk_builddir() can delete it
             f.write('TMPDIR_forcevariable = "${TOPDIR}/tmp"\n')
             f.write('TCLIBCAPPEND_forcevariable = ""\n')
@@ -314,12 +347,18 @@
             # the sig computed from the metadata.
             f.write('SIGGEN_LOCKEDSIGS_TASKSIG_CHECK = "warn"\n\n')
 
+            # We want to be able to set this without a full reparse
+            f.write('BB_HASHCONFIG_WHITELIST_append = " SIGGEN_UNLOCKED_RECIPES"\n\n')
+
             # Set up whitelist for run on install
             f.write('BB_SETSCENE_ENFORCE_WHITELIST = "%:* *:do_shared_workdir *:do_rm_work wic-tools:* *:do_addto_recipe_sysroot"\n\n')
 
             # Hide the config information from bitbake output (since it's fixed within the SDK)
             f.write('BUILDCFG_HEADER = ""\n\n')
 
+            f.write('# Provide a flag to indicate we are in the EXT_SDK Context\n')
+            f.write('WITHIN_EXT_SDK = "1"\n\n')
+
             # Map gcc-dependent uninative sstate cache for installer usage
             f.write('SSTATE_MIRRORS += " file://universal/(.*) file://universal-4.9/\\1 file://universal-4.9/(.*) file://universal-4.8/\\1"\n\n')
 
diff --git a/import-layers/yocto-poky/meta/classes/python3native.bbclass b/import-layers/yocto-poky/meta/classes/python3native.bbclass
index ef468b3..89665ef 100644
--- a/import-layers/yocto-poky/meta/classes/python3native.bbclass
+++ b/import-layers/yocto-poky/meta/classes/python3native.bbclass
@@ -9,5 +9,8 @@
 export STAGING_INCDIR
 export STAGING_LIBDIR
 
+# suppress host user's site-packages dirs.
+export PYTHONNOUSERSITE = "1"
+
 # autoconf macros will use their internal default preference otherwise
 export PYTHON
diff --git a/import-layers/yocto-poky/meta/classes/pythonnative.bbclass b/import-layers/yocto-poky/meta/classes/pythonnative.bbclass
index 4e0381b..4cc8b27 100644
--- a/import-layers/yocto-poky/meta/classes/pythonnative.bbclass
+++ b/import-layers/yocto-poky/meta/classes/pythonnative.bbclass
@@ -12,5 +12,8 @@
 export STAGING_INCDIR
 export STAGING_LIBDIR
 
+# suppress host user's site-packages dirs.
+export PYTHONNOUSERSITE = "1"
+
 # autoconf macros will use their internal default preference otherwise
 export PYTHON
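
Exporting PYTHONNOUSERSITE keeps modules from the host user's per-user site-packages (~/.local) out of native Python invocations. A quick way to see the effect of the variable (this is standard CPython behaviour, independent of BitBake):

    import os
    import subprocess
    import sys

    # With PYTHONNOUSERSITE set, CPython disables the per-user site-packages
    # directory, so host-local modules cannot leak into native builds.
    env = dict(os.environ, PYTHONNOUSERSITE="1")
    out = subprocess.check_output(
        [sys.executable, "-c", "import site; print(site.ENABLE_USER_SITE)"],
        env=env)
    print(out.decode().strip())   # False
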
diff --git a/import-layers/yocto-poky/meta/classes/qemuboot.bbclass b/import-layers/yocto-poky/meta/classes/qemuboot.bbclass
index 3468d1c..15a9e63 100644
--- a/import-layers/yocto-poky/meta/classes/qemuboot.bbclass
+++ b/import-layers/yocto-poky/meta/classes/qemuboot.bbclass
@@ -85,10 +85,11 @@
 
     qemuboot = "%s/%s.qemuboot.conf" % (d.getVar('IMGDEPLOYDIR'), d.getVar('IMAGE_NAME'))
     qemuboot_link = "%s/%s.qemuboot.conf" % (d.getVar('IMGDEPLOYDIR'), d.getVar('IMAGE_LINK_NAME'))
-    topdir="%s/"%(d.getVar('TOPDIR')).replace("//","/")
+    finalpath = d.getVar("DEPLOY_DIR_IMAGE")
+    topdir = d.getVar('TOPDIR')
     cf = configparser.ConfigParser()
     cf.add_section('config_bsp')
-    for k in qemuboot_vars(d):
+    for k in sorted(qemuboot_vars(d)):
         # qemu-helper-native sysroot is not removed by rm_work and
         # contains all tools required by runqemu
         if k == 'STAGING_BINDIR_NATIVE':
@@ -98,7 +99,8 @@
             val = d.getVar(k)
         # we only want to write out relative paths so that we can relocate images
         # and still run them
-        val=val.replace(topdir,"")
+        if val.startswith(topdir):
+            val = os.path.relpath(val, finalpath)
         cf.set('config_bsp', k, '%s' % val)
 
     # QB_DEFAULT_KERNEL's value of KERNEL_IMAGETYPE is the name of a symlink
@@ -108,14 +110,15 @@
     kernel = os.path.realpath(kernel_link)
     # we only want to write out relative paths so that we can relocate images
     # and still run them
-    kernel=kernel.replace(topdir,"")
+    kernel = os.path.relpath(kernel, finalpath)
     cf.set('config_bsp', 'QB_DEFAULT_KERNEL', kernel)
 
     bb.utils.mkdirhier(os.path.dirname(qemuboot))
     with open(qemuboot, 'w') as f:
         cf.write(f)
 
-    if os.path.lexists(qemuboot_link):
-       os.remove(qemuboot_link)
-    os.symlink(os.path.basename(qemuboot), qemuboot_link)
+    if qemuboot_link != qemuboot:
+        if os.path.lexists(qemuboot_link):
+           os.remove(qemuboot_link)
+        os.symlink(os.path.basename(qemuboot), qemuboot_link)
 }
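
qemuboot.conf values under TOPDIR are now rewritten relative to DEPLOY_DIR_IMAGE with os.path.relpath(), so a relocated deploy directory can still be booted with runqemu. A minimal sketch of that rewrite, using hypothetical build-tree paths:

    import os

    # Hypothetical build-tree locations
    topdir = "/home/builder/build"
    deploy_dir_image = "/home/builder/build/tmp/deploy/images/qemux86-64"

    def relocate(val):
        # Only rewrite values that live under TOPDIR; leave host paths alone.
        if val.startswith(topdir):
            return os.path.relpath(val, deploy_dir_image)
        return val

    print(relocate(deploy_dir_image + "/bzImage"))
    # bzImage
    print(relocate(topdir + "/tmp/work/recipe-sysroot-native/usr/bin"))
    # ../../../work/recipe-sysroot-native/usr/bin
    print(relocate("/usr/bin/qemu-system-x86_64"))
    # /usr/bin/qemu-system-x86_64 (unchanged)
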
diff --git a/import-layers/yocto-poky/meta/classes/report-error.bbclass b/import-layers/yocto-poky/meta/classes/report-error.bbclass
index d6fdd36..1c55abf 100644
--- a/import-layers/yocto-poky/meta/classes/report-error.bbclass
+++ b/import-layers/yocto-poky/meta/classes/report-error.bbclass
@@ -29,6 +29,13 @@
         import json
         import codecs
 
+        def nativelsb():
+            nativelsbstr = e.data.getVar("NATIVELSBSTRING")
+            # provide a bit more host info in case of uninative build
+            if e.data.getVar('UNINATIVE_URL') != 'unset':
+                return '/'.join([nativelsbstr, lsb_distro_identifier(e.data)])
+            return nativelsbstr
+
         logpath = e.data.getVar('ERR_REPORT_DIR')
         datafile = os.path.join(logpath, "error-report.txt")
 
@@ -38,7 +45,7 @@
             machine = e.data.getVar("MACHINE")
             data['machine'] = machine
             data['build_sys'] = e.data.getVar("BUILD_SYS")
-            data['nativelsb'] = e.data.getVar("NATIVELSBSTRING")
+            data['nativelsb'] = nativelsb()
             data['distro'] = e.data.getVar("DISTRO")
             data['target_sys'] = e.data.getVar("TARGET_SYS")
             data['failures'] = []
diff --git a/import-layers/yocto-poky/meta/classes/rm_work.bbclass b/import-layers/yocto-poky/meta/classes/rm_work.bbclass
index badeaeb..31d99e4 100644
--- a/import-layers/yocto-poky/meta/classes/rm_work.bbclass
+++ b/import-layers/yocto-poky/meta/classes/rm_work.bbclass
@@ -35,22 +35,12 @@
         fi
     done
 
-    cd ${WORKDIR}
-    for dir in *
-    do
-        # Retain only logs and other files in temp, safely ignore
-        # failures of removing pseudo folers on NFS2/3 server.
-        if [ $dir = 'pseudo' ]; then
-            rm -rf $dir 2> /dev/null || true
-        elif ! echo '${RM_WORK_EXCLUDE_ITEMS}' | grep -q -w "$dir"; then
-            rm -rf $dir
-        fi
-    done
-
     # Need to add pseudo back or subsequent work in this workdir
     # might fail since setscene may not rerun to recreate it
     mkdir -p ${WORKDIR}/pseudo/
 
+    excludes='${RM_WORK_EXCLUDE_ITEMS}'
+
     # Change normal stamps into setscene stamps as they better reflect the
     # fact WORKDIR is now empty
     # Also leave noexec stamps since setscene stamps don't cover them
@@ -71,7 +61,12 @@
                 i=dummy
                 break
                 ;;
-            *do_rootfs*|*do_image*|*do_bootimg*|*do_bootdirectdisk*|*do_vmimg*|*do_write_qemuboot_conf*)
+            *do_image_complete*)
+                mv $i `echo $i | sed -e "s#do_image_complete#do_image_complete_setscene#"`
+                i=dummy
+                break
+                ;;
+            *do_rootfs*|*do_image*|*do_bootimg*|*do_write_qemuboot_conf*)
                 i=dummy
                 break
                 ;;
@@ -79,6 +74,12 @@
                 i=dummy
                 break
                 ;;
+            *do_addto_recipe_sysroot*)
+                # Preserve recipe-sysroot-native if do_addto_recipe_sysroot has been used
+                excludes="$excludes recipe-sysroot-native"
+                i=dummy
+                break
+                ;;
             # We remove do_package entirely, including any
             # sstate version since otherwise we'd need to leave 'plaindirs' around
             # such as 'packages' and 'packages-split' and these can be large. No end
@@ -101,6 +102,18 @@
         done
         rm -f $i
     done
+
+    cd ${WORKDIR}
+    for dir in *
+    do
+        # Retain only logs and other files in temp, safely ignore
+        # failures of removing pseudo folders on NFS2/3 servers.
+        if [ $dir = 'pseudo' ]; then
+            rm -rf $dir 2> /dev/null || true
+        elif ! echo "$excludes" | grep -q -w "$dir"; then
+            rm -rf $dir
+        fi
+    done
 }
 do_rm_work_all () {
     :
@@ -153,6 +166,10 @@
     deps = set(bb.build.preceedtask('do_build', True, d))
     deps.difference_update(('do_build', 'do_rm_work_all'))
 
+    # deps can be empty if do_build doesn't exist, e.g. *-initial recipes
+    if not deps:
+        deps = ["do_populate_sysroot", "do_populate_lic"]
+
     if pn in excludes:
         d.delVarFlag('rm_work_rootfs', 'cleandirs')
         d.delVarFlag('rm_work_populatesdk', 'cleandirs')
diff --git a/import-layers/yocto-poky/meta/classes/rootfs-postcommands.bbclass b/import-layers/yocto-poky/meta/classes/rootfs-postcommands.bbclass
index c19ff87..a4e627f 100644
--- a/import-layers/yocto-poky/meta/classes/rootfs-postcommands.bbclass
+++ b/import-layers/yocto-poky/meta/classes/rootfs-postcommands.bbclass
@@ -14,6 +14,14 @@
 # Tweak the mount options for rootfs in /etc/fstab if read-only-rootfs is enabled
 ROOTFS_POSTPROCESS_COMMAND += '${@bb.utils.contains("IMAGE_FEATURES", "read-only-rootfs", "read_only_rootfs_hook; ", "",d)}'
 
+# We also need to do the same for the kernel boot parameters,
+# otherwise kernel or initramfs end up mounting the rootfs read/write
+# (the default) if supported by the underlying storage.
+#
+# We do this with _append because the default value might get set later with ?=
+# and we don't want to disable such a default by setting a value here.
+APPEND_append = '${@bb.utils.contains("IMAGE_FEATURES", "read-only-rootfs", " ro", "", d)}'
+
 # Generates test data file with data store variables expanded in json format
 ROOTFS_POSTPROCESS_COMMAND += "write_image_test_data ; "
 
@@ -84,7 +92,9 @@
 #
 read_only_rootfs_hook () {
 	# Tweak the mount option and fs_passno for rootfs in fstab
-	sed -i -e '/^[#[:space:]]*\/dev\/root/{s/defaults/ro/;s/\([[:space:]]*[[:digit:]]\)\([[:space:]]*\)[[:digit:]]$/\1\20/}' ${IMAGE_ROOTFS}/etc/fstab
+	if [ -f ${IMAGE_ROOTFS}/etc/fstab ]; then
+		sed -i -e '/^[#[:space:]]*\/dev\/root/{s/defaults/ro/;s/\([[:space:]]*[[:digit:]]\)\([[:space:]]*\)[[:digit:]]$/\1\20/}' ${IMAGE_ROOTFS}/etc/fstab
+	fi
 
 	# If we're using openssh and the /etc/ssh directory has no pre-generated keys,
 	# we should configure openssh to use the configuration file /etc/ssh/sshd_config_readonly
@@ -316,5 +326,5 @@
                 continue
 
             if 'unsatisfied recommendation for' in line:
-                bb.warn('[log_check] %s: %s' % (d.getVar('PN', True), line))
+                bb.warn('[log_check] %s: %s' % (d.getVar('PN'), line))
 }
diff --git a/import-layers/yocto-poky/meta/classes/rootfs_deb.bbclass b/import-layers/yocto-poky/meta/classes/rootfs_deb.bbclass
index 262e3d5..9ee1dfc 100644
--- a/import-layers/yocto-poky/meta/classes/rootfs_deb.bbclass
+++ b/import-layers/yocto-poky/meta/classes/rootfs_deb.bbclass
@@ -33,6 +33,3 @@
     elif darch == "arm":
          d.setVar('DEB_SDK_ARCH', 'armel')
 }
-
-# This will of course only work after rootfs_deb_do_rootfs or populate_sdk_deb has been called
-DPKG_QUERY_COMMAND = "${STAGING_BINDIR_NATIVE}/dpkg-query --admindir=$INSTALL_ROOTFS_DEB/var/lib/dpkg"
diff --git a/import-layers/yocto-poky/meta/classes/rootfsdebugfiles.bbclass b/import-layers/yocto-poky/meta/classes/rootfsdebugfiles.bbclass
index a558871..e2ba4e3 100644
--- a/import-layers/yocto-poky/meta/classes/rootfsdebugfiles.bbclass
+++ b/import-layers/yocto-poky/meta/classes/rootfsdebugfiles.bbclass
@@ -15,6 +15,10 @@
 #    ROOTFS_DEBUG_FILES += "${TOPDIR}/conf/dropbear_rsa_host_key ${IMAGE_ROOTFS}/etc/dropbear/dropbear_rsa_host_key ;"
 # 2. Boot the image once, copy the dropbear_rsa_host_key from
 #    the device into your build conf directory.
+# 3. An optional parameter can be used to set the file mode
+#    of the copied target, for instance:
+#    ROOTFS_DEBUG_FILES += "${TOPDIR}/conf/dropbear_rsa_host_key ${IMAGE_ROOTFS}/etc/dropbear/dropbear_rsa_host_key 0600;"
+#    in case the copied files need a specific mode (it shouldn't be too permissive, for example).
 #
 # Do not use for production images! It bypasses several
 # core build mechanisms (updating the image when one
@@ -27,10 +31,11 @@
 ROOTFS_POSTPROCESS_COMMAND += "rootfs_debug_files ;"
 rootfs_debug_files () {
    #!/bin/sh -e
-   echo "${ROOTFS_DEBUG_FILES}" | sed -e 's/;/\n/g' | while read source target; do
+   echo "${ROOTFS_DEBUG_FILES}" | sed -e 's/;/\n/g' | while read source target mode; do
       if [ -e "$source" ]; then
          mkdir -p $(dirname $target)
          cp -a $source $target
+         [ -n "$mode" ] && chmod $mode $target
       fi
    done
 }
diff --git a/import-layers/yocto-poky/meta/classes/sanity.bbclass b/import-layers/yocto-poky/meta/classes/sanity.bbclass
index e8064ac..1feb794 100644
--- a/import-layers/yocto-poky/meta/classes/sanity.bbclass
+++ b/import-layers/yocto-poky/meta/classes/sanity.bbclass
@@ -350,6 +350,14 @@
         return "The %s: %s can't be located on nfs.\n" % (name, path)
     return ""
 
+# Check that the path is on a case-sensitive file system
+def check_case_sensitive(path, name):
+    import tempfile
+    with tempfile.NamedTemporaryFile(prefix='TmP', dir=path) as tmp_file:
+        if os.path.exists(tmp_file.name.lower()):
+            return "The %s (%s) can't be on a case-insensitive file system.\n" % (name, path)
+        return ""
+
 # Check that path isn't a broken symlink
 def check_symlink(lnk, data):
     if os.path.islink(lnk) and not os.path.exists(lnk):
@@ -448,45 +456,6 @@
 
     return messages
 
-# Checks if necessary to add option march to host gcc
-def check_gcc_march(sanity_data):
-    result = True
-    message = ""
-
-    # Check if -march not in BUILD_CFLAGS
-    if sanity_data.getVar("BUILD_CFLAGS").find("-march") < 0:
-        result = False
-
-        # Construct a test file
-        f = open("gcc_test.c", "w")
-        f.write("int main (){ volatile int atomic = 2; __sync_bool_compare_and_swap (&atomic, 2, 3); return 0; }\n")
-        f.close()
-
-        # Check if GCC could work without march
-        if not result:
-            status,res = oe.utils.getstatusoutput(sanity_data.expand("${BUILD_CC} gcc_test.c -o gcc_test"))
-            if status == 0:
-                result = True;
-
-        if not result:
-            status,res = oe.utils.getstatusoutput(sanity_data.expand("${BUILD_CC} -march=native gcc_test.c -o gcc_test"))
-            if status == 0:
-                message = "BUILD_CFLAGS_append = \" -march=native\""
-                result = True;
-
-        if not result:
-            build_arch = sanity_data.getVar('BUILD_ARCH')
-            status,res = oe.utils.getstatusoutput(sanity_data.expand("${BUILD_CC} -march=%s gcc_test.c -o gcc_test" % build_arch))
-            if status == 0:
-                message = "BUILD_CFLAGS_append = \" -march=%s\"" % build_arch
-                result = True;
-
-        os.remove("gcc_test.c")
-        if os.path.exists("gcc_test"):
-            os.remove("gcc_test")
-
-    return (result, message)
-
 # Unpatched versions of make 3.82 are known to be broken.  See GNU Savannah Bug 30612.
 # Use a modified reproducer from http://savannah.gnu.org/bugs/?30612 to validate.
 def check_make_version(sanity_data):
@@ -612,7 +581,7 @@
         except IndexError:
             pass
     return testmsg
-       
+
 def check_sanity_version_change(status, d):
     # Sanity checks to be done when SANITY_VERSION or NATIVELSBSTRING changes
     # In other words, these tests run once in a given build directory and then 
@@ -657,23 +626,6 @@
     if "diffstat-native" not in assume_provided:
         status.addresult('Please use ASSUME_PROVIDED +=, not ASSUME_PROVIDED = in your local.conf\n')
 
-    if "qemu-native" in assume_provided:
-        if not check_app_exists("qemu-arm", d):
-            status.addresult("qemu-native was in ASSUME_PROVIDED but the QEMU binaries (qemu-arm) can't be found in PATH")
-
-    if "libsdl-native" in assume_provided:
-        if not check_app_exists("sdl-config", d):
-            status.addresult("libsdl-native is set to be ASSUME_PROVIDED but sdl-config can't be found in PATH. Please either install it, or configure qemu not to require sdl.")
-
-    (result, message) = check_gcc_march(d)
-    if result and message:
-        status.addresult("Your gcc version is older than 4.5, please add the following param to local.conf\n \
-        %s\n" % message)
-    if not result:
-        status.addresult("Your gcc version is older than 4.5 or is not working properly.  Please verify you can build")
-        status.addresult(" and link something that uses atomic operations, such as: \n")
-        status.addresult("        __sync_bool_compare_and_swap (&atomic, 2, 3);\n")
-
     # Check that TMPDIR isn't on a filesystem with limited filename length (eg. eCryptFS)
     tmpdir = d.getVar('TMPDIR')
     status.addresult(check_create_long_filename(tmpdir, "TMPDIR"))
@@ -728,6 +680,10 @@
     # Check that TMPDIR isn't located on nfs
     status.addresult(check_not_nfs(tmpdir, "TMPDIR"))
 
+    # Check for case-insensitive file systems (such as Linux in Docker on
+    # macOS with default HFS+ file system)
+    status.addresult(check_case_sensitive(tmpdir, "TMPDIR"))
+
 def sanity_check_locale(d):
     """
     Currently bitbake switches locale to en_US.UTF-8 so check that this locale actually exists.
@@ -746,10 +702,10 @@
     if 0 == os.getuid():
         raise_sanity_error("Do not use Bitbake as root.", d)
 
-    # Check the Python version, we now have a minimum of Python 2.7.3
+    # Check the Python version, we now have a minimum of Python 3.4
     import sys
-    if sys.hexversion < 0x020703F0:
-        status.addresult('The system requires at least Python 2.7.3 to run. Please update your Python interpreter.\n')
+    if sys.hexversion < 0x03040000:
+        status.addresult('The system requires at least Python 3.4 to run. Please update your Python interpreter.\n')
 
     # Check the bitbake version meets minimum requirements
     from distutils.version import LooseVersion
@@ -770,6 +726,11 @@
         if not ( check_conf_exists("conf/distro/${DISTRO}.conf", d) or check_conf_exists("conf/distro/include/${DISTRO}.inc", d) ):
             status.addresult("DISTRO '%s' not found. Please set a valid DISTRO in your local.conf\n" % d.getVar("DISTRO"))
 
+    # Check that these variables don't use tilde-expansion, as BitBake does not expand it
+    for v in ("TMPDIR", "DL_DIR", "SSTATE_DIR"):
+        if d.getVar(v).startswith("~"):
+            status.addresult("%s uses ~ but Bitbake will not expand this, use an absolute path or variables." % v)
+
     # Check that DL_DIR is set, exists and is writable. In theory, we should never even hit the check if DL_DIR isn't 
     # set, since so much relies on it being set.
     dldir = d.getVar('DL_DIR')
@@ -839,7 +800,7 @@
 
         # Split into pairs
         if len(mirrors) % 2 != 0:
-            bb.warn('Invalid mirror variable value for %s: %s, should contain paired members.' % (mirror_var, mirrors.strip()))
+            bb.warn('Invalid mirror variable value for %s: %s, should contain paired members.' % (mirror_var, str(mirrors)))
             continue
         mirrors = list(zip(*[iter(mirrors)]*2))
 
diff --git a/import-layers/yocto-poky/meta/classes/sign_package_feed.bbclass b/import-layers/yocto-poky/meta/classes/sign_package_feed.bbclass
index 71df03b..f03c480 100644
--- a/import-layers/yocto-poky/meta/classes/sign_package_feed.bbclass
+++ b/import-layers/yocto-poky/meta/classes/sign_package_feed.bbclass
@@ -28,6 +28,9 @@
 PACKAGE_FEED_GPG_BACKEND ?= 'local'
 PACKAGE_FEED_GPG_SIGNATURE_TYPE ?= 'ASC'
 
+# Make feed signing key to be present in rootfs
+FEATURE_PACKAGES_package-management_append = " signing-keys-packagefeed"
+
 python () {
     # Check sanity of configuration
     for var in ('PACKAGE_FEED_GPG_NAME', 'PACKAGE_FEED_GPG_PASSPHRASE_FILE'):
diff --git a/import-layers/yocto-poky/meta/classes/sign_rpm.bbclass b/import-layers/yocto-poky/meta/classes/sign_rpm.bbclass
index bc2e947..4961b03 100644
--- a/import-layers/yocto-poky/meta/classes/sign_rpm.bbclass
+++ b/import-layers/yocto-poky/meta/classes/sign_rpm.bbclass
@@ -9,16 +9,30 @@
 #           Optional variable for specifying the backend to use for signing.
 #           Currently the only available option is 'local', i.e. local signing
 #           on the build host.
+# RPM_FILE_CHECKSUM_DIGEST
+#           Optional variable for specifying the algorithm for generating file
+#           checksum digest.
+# RPM_FSK_PATH
+#           Optional variable for the file signing key.
+# RPM_FSK_PASSWORD
+#           Optional variable for the file signing key password.
 # GPG_BIN
 #           Optional variable for specifying the gpg binary/wrapper to use for
 #           signing.
+# RPM_GPG_SIGN_CHUNK
+#           Optional variable indicating the number of packages used per gpg
+#           invocation
 # GPG_PATH
 #           Optional variable for specifying the gnupg "home" directory:
-#
+
 inherit sanity
 
 RPM_SIGN_PACKAGES='1'
+RPM_SIGN_FILES ?= '0'
 RPM_GPG_BACKEND ?= 'local'
+# SHA-256 is used by default
+RPM_FILE_CHECKSUM_DIGEST ?= '8'
+RPM_GPG_SIGN_CHUNK ?= "${BB_NUMBER_THREADS}"
 
 
 python () {
@@ -28,6 +42,11 @@
     for var in ('RPM_GPG_NAME', 'RPM_GPG_PASSPHRASE'):
         if not d.getVar(var):
             raise_sanity_error("You need to define %s in the config" % var, d)
+
+    if d.getVar('RPM_SIGN_FILES') == '1':
+        for var in ('RPM_FSK_PATH', 'RPM_FSK_PASSWORD'):
+            if not d.getVar(var):
+                raise_sanity_error("You need to define %s in the config" % var, d)
 }
 
 python sign_rpm () {
@@ -39,8 +58,18 @@
 
     signer.sign_rpms(rpms,
                      d.getVar('RPM_GPG_NAME'),
-                     d.getVar('RPM_GPG_PASSPHRASE'))
+                     d.getVar('RPM_GPG_PASSPHRASE'),
+                     d.getVar('RPM_FILE_CHECKSUM_DIGEST'),
+                     int(d.getVar('RPM_GPG_SIGN_CHUNK')),
+                     d.getVar('RPM_FSK_PATH'),
+                     d.getVar('RPM_FSK_PASSWORD'))
 }
 
 do_package_index[depends] += "signing-keys:do_deploy"
 do_rootfs[depends] += "signing-keys:do_populate_sysroot"
+
+# Newer versions of gpg (at least 2.1.5 and 2.2.1) have issues when signing occurs in parallel,
+# so unfortunately the signing must be done serially. Once the upstream problem is fixed,
+# the following line must be removed, otherwise we lose all the intrinsic parallelism from
+# bitbake.  For more information, check https://bugzilla.yoctoproject.org/show_bug.cgi?id=12022.
+do_package_write_rpm[lockfiles] += "${TMPDIR}/gpg.lock"
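
RPM_GPG_SIGN_CHUNK bounds how many packages are handed to each gpg invocation during signing. A minimal sketch of that chunking idea (the helpers below are hypothetical; the real work happens in oe.gpg_sign):

    def chunks(items, chunk_size):
        # Yield successive chunk_size-sized slices from items.
        for i in range(0, len(items), chunk_size):
            yield items[i:i + chunk_size]

    def sign_rpms(rpms, chunk_size):
        for batch in chunks(rpms, chunk_size):
            # One gpg/rpmsign invocation would handle this whole batch.
            print("signing %d rpms in one invocation: %s" % (len(batch), batch))

    # Hypothetical package list, chunks of 2
    sign_rpms(["a.rpm", "b.rpm", "c.rpm", "d.rpm", "e.rpm"], 2)
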
diff --git a/import-layers/yocto-poky/meta/classes/siteinfo.bbclass b/import-layers/yocto-poky/meta/classes/siteinfo.bbclass
index 2c33732..1aada40 100644
--- a/import-layers/yocto-poky/meta/classes/siteinfo.bbclass
+++ b/import-layers/yocto-poky/meta/classes/siteinfo.bbclass
@@ -62,10 +62,8 @@
         "linux-gnun32": "common-linux common-glibc",
         "linux-gnueabi": "common-linux common-glibc",
         "linux-gnuspe": "common-linux common-glibc",
-        "linux-uclibc": "common-linux common-uclibc",
-        "linux-uclibceabi": "common-linux common-uclibc",
-        "linux-uclibcspe": "common-linux common-uclibc",
         "linux-musl": "common-linux common-musl",
+        "linux-muslx32": "common-linux common-musl",
         "linux-musleabi": "common-linux common-musl",
         "linux-muslspe": "common-linux common-musl",
         "uclinux-uclibc": "common-uclibc",
@@ -79,9 +77,7 @@
         "aarch64_be-linux-musl": "aarch64_be-linux",
         "arm-linux-gnueabi": "arm-linux",
         "arm-linux-musleabi": "arm-linux",
-        "arm-linux-uclibceabi": "arm-linux-uclibc",
         "armeb-linux-gnueabi": "armeb-linux",
-        "armeb-linux-uclibceabi": "armeb-linux-uclibc",
         "armeb-linux-musleabi": "armeb-linux",
         "mips-linux-musl": "mips-linux",
         "mipsel-linux-musl": "mipsel-linux",
@@ -93,10 +89,8 @@
         "mipsisa64r6el-linux-gnun32": "mipsisa32r6el-linux bit-32",
         "powerpc-linux": "powerpc32-linux",
         "powerpc-linux-musl": "powerpc-linux powerpc32-linux",
-        "powerpc-linux-uclibc": "powerpc-linux powerpc32-linux",
         "powerpc-linux-gnuspe": "powerpc-linux powerpc32-linux",
         "powerpc-linux-muslspe": "powerpc-linux powerpc32-linux",
-        "powerpc-linux-uclibcspe": "powerpc-linux powerpc32-linux powerpc-linux-uclibc",
         "powerpc64-linux-gnuspe": "powerpc-linux powerpc64-linux",
         "powerpc64-linux-muslspe": "powerpc-linux powerpc64-linux",
         "powerpc64-linux": "powerpc-linux",
@@ -106,7 +100,7 @@
         "x86_64-darwin9": "bit-64",
         "x86_64-linux": "bit-64",
         "x86_64-linux-musl": "x86_64-linux bit-64",
-        "x86_64-linux-uclibc": "bit-64",
+        "x86_64-linux-muslx32": "bit-32 ix86-common x32-linux",
         "x86_64-elf": "bit-64",
         "x86_64-linux-gnu": "bit-64 x86_64-linux",
         "x86_64-linux-gnux32": "bit-32 ix86-common x32-linux",
@@ -159,7 +153,7 @@
         bb.fatal("Please add your architecture to siteinfo.bbclass")
 }
 
-def siteinfo_get_files(d, aclocalcache = False):
+def siteinfo_get_files(d, sysrootcache = False):
     sitedata = siteinfo_data(d)
     sitefiles = ""
     for path in d.getVar("BBPATH").split(":"):
@@ -168,18 +162,11 @@
             if os.path.exists(filename):
                 sitefiles += filename + " "
 
-    if not aclocalcache:
+    if not sysrootcache:
         return sitefiles
 
-    # Now check for siteconfig cache files in the directory setup by autotools.bbclass to
-    # avoid races.
-    #
-    # ACLOCALDIR may or may not exist so cache should only be set to True from autotools.bbclass
-    # after files have been copied into this location. To do otherwise risks parsing/signature
-    # issues and the directory being created/removed whilst this code executes. This can happen
-    # when a multilib recipe is parsed along with its base variant which may be running at the time
-    # causing rare but nasty failures
-    path_siteconfig = d.getVar('ACLOCALDIR')
+    # Now check for siteconfig cache files in sysroots
+    path_siteconfig = d.getVar('SITECONFIG_SYSROOTCACHE')
     if path_siteconfig and os.path.isdir(path_siteconfig):
         for i in os.listdir(path_siteconfig):
             if not i.endswith("_config"):
diff --git a/import-layers/yocto-poky/meta/classes/sstate.bbclass b/import-layers/yocto-poky/meta/classes/sstate.bbclass
index 0a12935..e30fbe1 100644
--- a/import-layers/yocto-poky/meta/classes/sstate.bbclass
+++ b/import-layers/yocto-poky/meta/classes/sstate.bbclass
@@ -346,8 +346,6 @@
         oe.path.remove(dir)
 
     for state in ss['dirs']:
-        if d.getVar('SSTATE_SKIP_CREATION') == '1':
-            continue
         prepdir(state[1])
         os.rename(sstateinst + state[0], state[1])
     sstate_install(ss, d)
@@ -404,7 +402,7 @@
             return
 
         bb.note("Replacing fixme paths in sstate package: %s" % (sstate_hardcode_cmd))
-        subprocess.call(sstate_hardcode_cmd, shell=True)
+        subprocess.check_call(sstate_hardcode_cmd, shell=True)
 
         # Need to remove this or we'd copy it into the target directory and may 
         # conflict with another writer
@@ -453,7 +451,7 @@
     if os.path.exists(manifest + ".postrm"):
         import subprocess
         os.chmod(postrm, 0o755)
-        subprocess.call(postrm, shell=True)
+        subprocess.check_call(postrm, shell=True)
         oe.path.remove(postrm)
 
     oe.path.remove(manifest)
@@ -596,8 +594,6 @@
     for state in ss['dirs']:
         if not os.path.exists(state[1]):
             continue
-        if d.getVar('SSTATE_SKIP_CREATION') == '1':
-            continue
         srcbase = state[0].rstrip("/").rsplit('/', 1)[0]
         # Find and error for absolute symlinks. We could attempt to relocate but its not
         # clear where the symlink is relative to in this context. We could add that markup
@@ -625,6 +621,10 @@
 
     d.setVar('SSTATE_BUILDDIR', sstatebuild)
     d.setVar('SSTATE_PKG', sstatepkg)
+    d.setVar('SSTATE_INSTDIR', sstatebuild)
+
+    if d.getVar('SSTATE_SKIP_CREATION') == '1':
+        return
 
     for f in (d.getVar('SSTATECREATEFUNCS') or '').split() + \
              ['sstate_create_package', 'sstate_sign_package'] + \
@@ -634,8 +634,6 @@
 
     bb.siggen.dump_this_task(sstatepkg + ".siginfo", d)
 
-    d.setVar('SSTATE_INSTDIR', sstatebuild)
-
     return
 
 def pstaging_fetch(sstatefetch, sstatepkg, d):
@@ -969,7 +967,8 @@
             if isNativeCross(taskdependees[dep][0]):
                 return False
             # Native/cross tools depended upon by target sysroot are not needed
-            if isNativeCross(taskdependees[task][0]):
+            # Add an exception for shadow-native as required by useradd.bbclass
+            if isNativeCross(taskdependees[task][0]) and taskdependees[task][0] != 'shadow-native':
                 continue
             # Target populate_sysroot need their dependencies
             return False
@@ -1017,6 +1016,11 @@
     d = e.data
     stamps = e.stamps.values()
     removeworkdir = (d.getVar("SSTATE_PRUNE_OBSOLETEWORKDIR", False) == "1")
+    preservestampfile = d.expand('${SSTATE_MANIFESTS}/preserve-stamps')
+    preservestamps = []
+    if os.path.exists(preservestampfile):
+        with open(preservestampfile, 'r') as f:
+            preservestamps = f.readlines()
     seen = []
     for a in d.getVar("SSTATE_ARCHS").split():
         toremove = []
@@ -1027,7 +1031,7 @@
             lines = f.readlines()
             for l in lines:
                 (stamp, manifest, workdir) = l.split()
-                if stamp not in stamps:
+                if stamp not in stamps and stamp not in preservestamps:
                     toremove.append(l)
                     if stamp not in seen:
                         bb.debug(2, "Stamp %s is not reachable, removing related manifests" % stamp)
@@ -1049,4 +1053,6 @@
         with open(i, "w") as f:
             for l in lines:
                 f.write(l)
+    if preservestamps:
+        os.remove(preservestampfile)
 }
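
The sstate cleanup handler now treats any stamp listed in the preserve-stamps file as reachable, so its manifests survive pruning. A plain-Python sketch of that filtering over hypothetical manifest lines (one "stamp manifest workdir" triple per line, as in the real manifest files):

    def prune_manifest_lines(lines, live_stamps, preserve_stamps):
        # Return (kept, removed) manifest lines; a line survives if its stamp
        # is still live or is explicitly preserved.
        keep_set = set(live_stamps) | set(preserve_stamps)
        kept, removed = [], []
        for line in lines:
            stamp, manifest, workdir = line.split()
            (kept if stamp in keep_set else removed).append(line)
        return kept, removed

    # Hypothetical manifest content
    lines = [
        "stamp-A manifest-A.populate_sysroot /work/A",
        "stamp-B manifest-B.populate_sysroot /work/B",
    ]
    kept, removed = prune_manifest_lines(lines, live_stamps=["stamp-A"],
                                         preserve_stamps=["stamp-B"])
    print(kept)     # both lines kept: A is live, B is preserved
    print(removed)  # []
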
diff --git a/import-layers/yocto-poky/meta/classes/staging.bbclass b/import-layers/yocto-poky/meta/classes/staging.bbclass
index 984051d..c479bd9 100644
--- a/import-layers/yocto-poky/meta/classes/staging.bbclass
+++ b/import-layers/yocto-poky/meta/classes/staging.bbclass
@@ -68,101 +68,19 @@
 }
 
 python sysroot_strip () {
-    import stat, errno
+    inhibit_sysroot = d.getVar('INHIBIT_SYSROOT_STRIP')
+    if inhibit_sysroot and oe.types.boolean(inhibit_sysroot):
+        return 0
 
-    dvar = d.getVar('SYSROOT_DESTDIR')
+    dstdir = d.getVar('SYSROOT_DESTDIR')
     pn = d.getVar('PN')
+    libdir = os.path.abspath(dstdir + os.sep + d.getVar("libdir"))
+    base_libdir = os.path.abspath(dstdir + os.sep + d.getVar("base_libdir"))
+    qa_already_stripped = 'already-stripped' in (d.getVar('INSANE_SKIP_' + pn) or "").split()
+    strip_cmd = d.getVar("STRIP")
 
-    os.chdir(dvar)
-
-    # Return type (bits):
-    # 0 - not elf
-    # 1 - ELF
-    # 2 - stripped
-    # 4 - executable
-    # 8 - shared library
-    # 16 - kernel module
-    def isELF(path):
-        type = 0
-        ret, result = oe.utils.getstatusoutput("file \"%s\"" % path.replace("\"", "\\\""))
-
-        if ret:
-            bb.error("split_and_strip_files: 'file %s' failed" % path)
-            return type
-
-        # Not stripped
-        if "ELF" in result:
-            type |= 1
-            if "not stripped" not in result:
-                type |= 2
-            if "executable" in result:
-                type |= 4
-            if "shared" in result:
-                type |= 8
-        return type
-
-
-    elffiles = {}
-    inodes = {}
-    libdir = os.path.abspath(dvar + os.sep + d.getVar("libdir"))
-    baselibdir = os.path.abspath(dvar + os.sep + d.getVar("base_libdir"))
-    if (d.getVar('INHIBIT_SYSROOT_STRIP') != '1'):
-        #
-        # First lets figure out all of the files we may have to process
-        #
-        for root, dirs, files in os.walk(dvar):
-            for f in files:
-                file = os.path.join(root, f)
-
-                try:
-                    ltarget = oe.path.realpath(file, dvar, False)
-                    s = os.lstat(ltarget)
-                except OSError as e:
-                    (err, strerror) = e.args
-                    if err != errno.ENOENT:
-                        raise
-                    # Skip broken symlinks
-                    continue
-                if not s:
-                    continue
-                # Check its an excutable
-                if (s[stat.ST_MODE] & stat.S_IXUSR) or (s[stat.ST_MODE] & stat.S_IXGRP) or (s[stat.ST_MODE] & stat.S_IXOTH) \
-                        or ((file.startswith(libdir) or file.startswith(baselibdir)) and ".so" in f):
-                    # If it's a symlink, and points to an ELF file, we capture the readlink target
-                    if os.path.islink(file):
-                        continue
-
-                    # It's a file (or hardlink), not a link
-                    # ...but is it ELF, and is it already stripped?
-                    elf_file = isELF(file)
-                    if elf_file & 1:
-                        if elf_file & 2:
-                            if 'already-stripped' in (d.getVar('INSANE_SKIP_' + pn) or "").split():
-                                bb.note("Skipping file %s from %s for already-stripped QA test" % (file[len(dvar):], pn))
-                            else:
-                                bb.warn("File '%s' from %s was already stripped, this will prevent future debugging!" % (file[len(dvar):], pn))
-                            continue
-
-                        if s.st_ino in inodes:
-                            os.unlink(file)
-                            os.link(inodes[s.st_ino], file)
-                        else:
-                            inodes[s.st_ino] = file
-                            # break hardlink
-                            bb.utils.copyfile(file, file)
-                            elffiles[file] = elf_file
-
-        #
-        # Now strip them (in parallel)
-        #
-        strip = d.getVar("STRIP")
-        sfiles = []
-        for file in elffiles:
-            elf_file = int(elffiles[file])
-            #bb.note("Strip %s" % file)
-            sfiles.append((file, elf_file, strip))
-
-        oe.utils.multiprocess_exec(sfiles, oe.package.runstrip)
+    oe.package.strip_execs(pn, dstdir, strip_cmd, libdir, base_libdir,
+                           qa_already_stripped=qa_already_stripped)
 }
 
 do_populate_sysroot[dirs] = "${SYSROOT_DESTDIR}"
@@ -259,6 +177,7 @@
 def staging_populate_sysroot_dir(targetsysroot, nativesysroot, native, d):
     import glob
     import subprocess
+    import errno
 
     fixme = []
     postinsts = []
@@ -454,7 +373,8 @@
                     msgbuf.append("Following dependency on %s" % setscenedeps[datadep][0])
         next = new
 
-    bb.note("\n".join(msgbuf))
+    # This logging is too verbose for day-to-day use, sadly
+    #bb.debug(2, "\n".join(msgbuf))
 
     depdir = recipesysrootnative + "/installeddeps"
     bb.utils.mkdirhier(depdir)
@@ -469,6 +389,8 @@
     postinsts = []
     multilibs = {}
     manifests = {}
+    # All files that we're going to be installing, to find conflicts.
+    fileset = {}
 
     for f in os.listdir(depdir):
         if not f.endswith(".complete"):
@@ -521,6 +443,8 @@
             os.unlink(fl)
             os.unlink(fl + ".complete")
 
+    msg_exists = []
+    msg_adding = []
     for dep in configuredeps:
         c = setscenedeps[dep][0]
         if c not in installed:
@@ -531,7 +455,7 @@
         if os.path.exists(depdir + "/" + c):
             lnk = os.readlink(depdir + "/" + c)
             if lnk == c + "." + taskhash and os.path.exists(depdir + "/" + c + ".complete"):
-                bb.note("%s exists in sysroot, skipping" % c)
+                msg_exists.append(c)
                 continue
             else:
                 bb.note("%s exists in sysroot, but is stale (%s vs. %s), removing." % (c, lnk, c + "." + taskhash))
@@ -542,6 +466,8 @@
         elif os.path.lexists(depdir + "/" + c):
             os.unlink(depdir + "/" + c)
 
+        msg_adding.append(c)
+
         os.symlink(c + "." + taskhash, depdir + "/" + c)
 
         d2 = d
@@ -595,8 +521,19 @@
                     if l.endswith("/fixmepath.cmd"):
                         continue
                     dest = l.replace(stagingdir, "")
-                    dest = targetdir + "/" + "/".join(dest.split("/")[3:])
-                    newmanifest[l] = dest
+                    dest = "/" + "/".join(dest.split("/")[3:])
+                    newmanifest[l] = targetdir + dest
+
+                    # Check if files have already been installed by another
+                    # recipe and abort if they have, explaining what recipes are
+                    # conflicting.
+                    hashname = targetdir + dest
+                    if not hashname.endswith("/"):
+                        if hashname in fileset:
+                            bb.fatal("The file %s is installed by both %s and %s, aborting" % (dest, c, fileset[hashname]))
+                        else:
+                            fileset[hashname] = c
+
             # Having multiple identical manifests in each sysroot eats diskspace so
             # create a shared pool of them and hardlink if we can.
             # We create the manifest in advance so that if something fails during installation,
@@ -627,6 +564,9 @@
                         continue
                     staging_copyfile(l, targetdir, dest, postinsts, seendirs)
 
+    bb.note("Installed into sysroot: %s" % str(msg_adding))
+    bb.note("Skipping as already exists in sysroot: %s" % str(msg_exists))
+
     for f in fixme:
         if f == '':
             staging_processfixme(fixme[f], recipesysroot, recipesysroot, recipesysrootnative, d)
@@ -658,6 +598,9 @@
 # Clean out the recipe specific sysroots before do_fetch
 # (use a prefunc so we can order before extend_recipe_sysroot if it gets added)
 python clean_recipe_sysroot() {
+    # We remove these stamps since we're removing any content they'd have added with
+    # cleandirs. This removes the sigdata too, likely not a big deal,
+    oe.path.remove(d.getVar("STAMP") + "*addto_recipe_sysroot*")
     return
 }
 clean_recipe_sysroot[cleandirs] += "${RECIPE_SYSROOT} ${RECIPE_SYSROOT_NATIVE}"
@@ -672,4 +615,3 @@
 }
 staging_taskhandler[eventmask] = "bb.event.RecipeTaskPreProcess"
 addhandler staging_taskhandler
-
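
extend_recipe_sysroot now records the destination of every file it is about to install and aborts when two recipes claim the same path, naming both offenders. A simplified sketch of that conflict check with hypothetical recipe manifests:

    def check_conflicts(manifests):
        # manifests: dict of recipe name -> list of destination paths.
        # Raises RuntimeError on the first path claimed by two recipes.
        owner = {}
        for recipe, paths in manifests.items():
            for dest in paths:
                if dest.endswith("/"):
                    continue              # directories may be shared
                if dest in owner:
                    raise RuntimeError("The file %s is installed by both %s and %s"
                                       % (dest, recipe, owner[dest]))
                owner[dest] = recipe
        return owner

    # Hypothetical conflicting manifests: the second recipe triggers the error
    check_conflicts({
        "zlib-native": ["/usr/include/zlib.h", "/usr/lib/libz.so"],
        "zlib-intercept-native": ["/usr/include/zlib.h"],
    })
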
diff --git a/import-layers/yocto-poky/meta/classes/systemd-boot.bbclass b/import-layers/yocto-poky/meta/classes/systemd-boot.bbclass
index 9597759..9373070 100644
--- a/import-layers/yocto-poky/meta/classes/systemd-boot.bbclass
+++ b/import-layers/yocto-poky/meta/classes/systemd-boot.bbclass
@@ -7,10 +7,9 @@
 #                        maintenance.
 #
 # Set EFI_PROVIDER = "systemd-boot" to use systemd-boot on your live images instead of grub-efi
-# (images built by image-live.bbclass or image-vm.bbclass)
+# (images built by image-live.bbclass)
 
 do_bootimg[depends] += "${MLPREFIX}systemd-boot:do_deploy"
-do_bootdirectdisk[depends] += "${MLPREFIX}systemd-boot:do_deploy"
 
 EFIDIR = "/EFI/BOOT"
 
@@ -100,6 +99,8 @@
             bb.fatal('OVERRIDES not defined')
 
         entryfile = "%s/%s.conf" % (s, label)
+        if not os.path.exists(s):
+            os.makedirs(s)
         d.appendVar("SYSTEMD_BOOT_ENTRIES", " " + entryfile)
         try:
             entrycfg = open(entryfile, "w")
diff --git a/import-layers/yocto-poky/meta/classes/systemd.bbclass b/import-layers/yocto-poky/meta/classes/systemd.bbclass
index c4b4bb9..1b13432 100644
--- a/import-layers/yocto-poky/meta/classes/systemd.bbclass
+++ b/import-layers/yocto-poky/meta/classes/systemd.bbclass
@@ -154,8 +154,10 @@
                 # Deal with adding, for example, 'ifplugd@eth0.service' from
                 # 'ifplugd@.service'
                 base = None
-                if service.find('@') != -1:
-                    base = re.sub('@[^.]+.', '@.', service)
+                at = service.find('@')
+                if at != -1:
+                    ext = service.rfind('.')
+                    base = service[:at] + '@' + service[ext:]
 
                 for path in searchpaths:
                     if os.path.exists(oe.path.join(d.getVar("D"), path, service)):
diff --git a/import-layers/yocto-poky/meta/classes/testexport.bbclass b/import-layers/yocto-poky/meta/classes/testexport.bbclass
index 56edda9..d070f07 100644
--- a/import-layers/yocto-poky/meta/classes/testexport.bbclass
+++ b/import-layers/yocto-poky/meta/classes/testexport.bbclass
@@ -113,7 +113,7 @@
     oe.path.remove(cases_path)
     bb.utils.mkdirhier(cases_path)
     test_paths = get_runtime_paths(d)
-    test_modules = d.getVar('TEST_SUITES')
+    test_modules = d.getVar('TEST_SUITES').split()
     tc.loadTests(test_paths, modules=test_modules)
     for f in getSuiteCasesFiles(tc.suites):
         shutil.copy2(f, cases_path)
diff --git a/import-layers/yocto-poky/meta/classes/testimage.bbclass b/import-layers/yocto-poky/meta/classes/testimage.bbclass
index fb21460..45bb2bd 100644
--- a/import-layers/yocto-poky/meta/classes/testimage.bbclass
+++ b/import-layers/yocto-poky/meta/classes/testimage.bbclass
@@ -7,13 +7,13 @@
 # Most of the tests are commands run on target image over ssh.
 # To use it add testimage to global inherit and call your target image with -c testimage
 # You can try it out like this:
-# - first build a qemu core-image-sato
-# - add IMAGE_CLASSES += "testimage" in local.conf
+# - first add IMAGE_CLASSES += "testimage" in local.conf
+# - build a qemu core-image-sato
 # - then bitbake core-image-sato -c testimage. That will run a standard suite of tests.
 
 # You can set (or append to) TEST_SUITES in local.conf to select the tests
 # which you want to run for your target.
-# The test names are the module names in meta/lib/oeqa/runtime.
+# The test names are the module names in meta/lib/oeqa/runtime/cases.
 # Each name in TEST_SUITES represents a required test for the image. (no skipping allowed)
 # Appending "auto" means that it will try to run all tests that are suitable for the image (each test decides that on it's own).
 # Note that order in TEST_SUITES is relevant: tests are run in an order such that
@@ -49,10 +49,10 @@
 DEFAULT_TEST_SUITES_pn-core-image-lsb = "${NETTESTSUITE} pam parselogs ${RPMTESTSUITE}"
 DEFAULT_TEST_SUITES_pn-core-image-sato = "${NETTESTSUITE} connman xorg parselogs ${RPMTESTSUITE} \
     ${@bb.utils.contains('IMAGE_PKGTYPE', 'rpm', 'python', '', d)}"
-DEFAULT_TEST_SUITES_pn-core-image-sato-sdk = "${NETTESTSUITE} buildcpio buildiptables buildgalculator \
+DEFAULT_TEST_SUITES_pn-core-image-sato-sdk = "${NETTESTSUITE} buildcpio buildlzip buildgalculator \
     connman ${DEVTESTSUITE} logrotate perl parselogs python ${RPMTESTSUITE} xorg"
 DEFAULT_TEST_SUITES_pn-core-image-lsb-dev = "${NETTESTSUITE} pam perl python parselogs ${RPMTESTSUITE}"
-DEFAULT_TEST_SUITES_pn-core-image-lsb-sdk = "${NETTESTSUITE} buildcpio buildiptables buildgalculator \
+DEFAULT_TEST_SUITES_pn-core-image-lsb-sdk = "${NETTESTSUITE} buildcpio buildlzip buildgalculator \
     connman ${DEVTESTSUITE} logrotate pam parselogs perl python ${RPMTESTSUITE}"
 DEFAULT_TEST_SUITES_pn-meta-toolchain = "auto"
 
@@ -61,7 +61,7 @@
 
 # qemumips is quite slow and has reached the timeout limit several times on the YP build cluster,
 # mitigate this by removing build tests for qemumips machines.
-MIPSREMOVE ??= "buildcpio buildiptables buildgalculator"
+MIPSREMOVE ??= "buildcpio buildlzip buildgalculator"
 DEFAULT_TEST_SUITES_remove_qemumips = "${MIPSREMOVE}"
 DEFAULT_TEST_SUITES_remove_qemumips64 = "${MIPSREMOVE}"
 
@@ -248,7 +248,7 @@
 
     # the robot dance
     target = OERuntimeTestContextExecutor.getTarget(
-        d.getVar("TEST_TARGET"), None, d.getVar("TEST_TARGET_IP"),
+        d.getVar("TEST_TARGET"), logger, d.getVar("TEST_TARGET_IP"),
         d.getVar("TEST_SERVER_IP"), **target_kwargs)
 
     # test context
@@ -257,7 +257,7 @@
 
     # Load tests before starting the target
     test_paths = get_runtime_paths(d)
-    test_modules = d.getVar('TEST_SUITES')
+    test_modules = d.getVar('TEST_SUITES').split()
     tc.loadTests(test_paths, modules=test_modules)
 
     if not getSuiteCases(tc.suites):
@@ -291,11 +291,11 @@
 
     # Show results (if we have them)
     if not results:
-        bb.fatal('%s - FAILED - tests were interrupted during execution' % pn)
-    tc.logSummary(results, pn)
-    tc.logDetails()
+        bb.fatal('%s - FAILED - tests were interrupted during execution' % pn, forcelog=True)
+    results.logDetails()
+    results.logSummary(pn)
     if not results.wasSuccessful():
-        bb.fatal('%s - FAILED - check the task log and the ssh log' % pn)
+        bb.fatal('%s - FAILED - check the task log and the ssh log' % pn, forcelog=True)
 
 def get_runtime_paths(d):
     """
diff --git a/import-layers/yocto-poky/meta/classes/testsdk.bbclass b/import-layers/yocto-poky/meta/classes/testsdk.bbclass
index 6a201aa..6b51a33 100644
--- a/import-layers/yocto-poky/meta/classes/testsdk.bbclass
+++ b/import-layers/yocto-poky/meta/classes/testsdk.bbclass
@@ -21,10 +21,11 @@
     import logging
 
     from bb.utils import export_proxies
-    from oeqa.core.runner import OEStreamLogger
     from oeqa.sdk.context import OESDKTestContext, OESDKTestContextExecutor
     from oeqa.utils import make_logger_bitbake_compatible
 
+    bb.event.enable_threadlock()
+
     pn = d.getVar("PN")
     logger = make_logger_bitbake_compatible(logging.getLogger("BitBake"))
 
@@ -71,8 +72,8 @@
         component = "%s %s" % (pn, OESDKTestContextExecutor.name)
         context_msg = "%s:%s" % (os.path.basename(tcname), os.path.basename(sdk_env))
 
-        tc.logSummary(result, component, context_msg)
-        tc.logDetails()
+        result.logDetails()
+        result.logSummary(component, context_msg)
 
         if not result.wasSuccessful():
             fail = True
@@ -98,6 +99,8 @@
     from oeqa.utils import avoid_paths_in_environ, make_logger_bitbake_compatible, subprocesstweak
     from oeqa.sdkext.context import OESDKExtTestContext, OESDKExtTestContextExecutor
 
+    bb.event.enable_threadlock()
+
     pn = d.getVar("PN")
     logger = make_logger_bitbake_compatible(logging.getLogger("BitBake"))
 
@@ -173,8 +176,8 @@
         component = "%s %s" % (pn, OESDKExtTestContextExecutor.name)
         context_msg = "%s:%s" % (os.path.basename(tcname), os.path.basename(sdk_env))
 
-        tc.logSummary(result, component, context_msg)
-        tc.logDetails()
+        result.logDetails()
+        result.logSummary(component, context_msg)
 
         if not result.wasSuccessful():
             fail = True
diff --git a/import-layers/yocto-poky/meta/classes/toolchain-scripts.bbclass b/import-layers/yocto-poky/meta/classes/toolchain-scripts.bbclass
index 260ece9..9bcfe70 100644
--- a/import-layers/yocto-poky/meta/classes/toolchain-scripts.bbclass
+++ b/import-layers/yocto-poky/meta/classes/toolchain-scripts.bbclass
@@ -3,7 +3,6 @@
 # We want to be able to change the value of MULTIMACH_TARGET_SYS, because it
 # doesn't always match our expectations... but we default to the stock value
 REAL_MULTIMACH_TARGET_SYS ?= "${MULTIMACH_TARGET_SYS}"
-TARGET_CC_ARCH_append_libc-uclibc = " -muclibc"
 TARGET_CC_ARCH_append_libc-musl = " -mmusl"
 
 # default debug prefix map isn't valid in the SDK
@@ -25,6 +24,21 @@
 	script=${1:-${SDK_OUTPUT}/${SDKPATH}/environment-setup-$multimach_target_sys}
 	rm -f $script
 	touch $script
+
+	echo '# Check for LD_LIBRARY_PATH being set, which can break the SDK and generally is a bad practice' >> $script
+	echo '# http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html#AEN80' >> $script
+	echo '# http://xahlee.info/UnixResource_dir/_/ldpath.html' >> $script
+	echo '# Only disable this check if you absolutely know what you are doing!' >> $script
+	echo 'if [ ! -z "$LD_LIBRARY_PATH" ]; then' >> $script
+	echo "    echo \"Your environment is misconfigured, you probably need to 'unset LD_LIBRARY_PATH'\"" >> $script
+	echo "    echo \"but please check why this was set in the first place and that it's safe to unset.\"" >> $script
+	echo '    echo "The SDK will not operate correctly in most cases when LD_LIBRARY_PATH is set."' >> $script
+	echo '    echo "For more references see:"' >> $script
+	echo '    echo "  http://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html#AEN80"' >> $script
+	echo '    echo "  http://xahlee.info/UnixResource_dir/_/ldpath.html"' >> $script
+	echo '    return 1' >> $script
+	echo 'fi' >> $script
+
 	echo 'export SDKTARGETSYSROOT='"$sysroot" >> $script
 	EXTRAPATH=""
 	for i in ${CANADIANEXTRAOS}; do
diff --git a/import-layers/yocto-poky/meta/classes/uboot-config.bbclass b/import-layers/yocto-poky/meta/classes/uboot-config.bbclass
index 10013b7..533e175 100644
--- a/import-layers/yocto-poky/meta/classes/uboot-config.bbclass
+++ b/import-layers/yocto-poky/meta/classes/uboot-config.bbclass
@@ -20,24 +20,21 @@
     ubootbinaries = d.getVar('UBOOT_BINARIES')
     # The "doc" varflag is special, we don't want to see it here
     ubootconfigflags.pop('doc', None)
+    ubootconfig = (d.getVar('UBOOT_CONFIG') or "").split()
 
-    if not ubootmachine and not ubootconfigflags:
+    if not ubootmachine and not ubootconfig:
         PN = d.getVar("PN")
         FILE = os.path.basename(d.getVar("FILE"))
         bb.debug(1, "To build %s, see %s for instructions on \
                  setting up your machine config" % (PN, FILE))
         raise bb.parse.SkipPackage("Either UBOOT_MACHINE or UBOOT_CONFIG must be set in the %s machine configuration." % d.getVar("MACHINE"))
 
-    if ubootmachine and ubootconfigflags:
+    if ubootmachine and ubootconfig:
         raise bb.parse.SkipPackage("You cannot use UBOOT_MACHINE and UBOOT_CONFIG at the same time.")
 
     if ubootconfigflags and ubootbinaries:
         raise bb.parse.SkipPackage("You cannot use UBOOT_BINARIES as it is internal to uboot_config.bbclass.")
 
-    if not ubootconfigflags:
-        return
-
-    ubootconfig = (d.getVar('UBOOT_CONFIG') or "").split()
     if len(ubootconfig) > 0:
         for config in ubootconfig:
             for f, v in ubootconfigflags.items():
@@ -57,6 +54,4 @@
                         bb.debug(1, "Appending '%s' to UBOOT_BINARIES." % ubootbinary)
                         d.appendVar('UBOOT_BINARIES', ' ' + ubootbinary)
                     break
-    elif len(ubootconfig) == 0:
-       raise bb.parse.SkipPackage('You must set a default in UBOOT_CONFIG.')
 }
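
After this change the class validates against the UBOOT_CONFIG value itself instead of its varflags. A rough standalone sketch of the resulting rule, with plain exceptions standing in for bb.parse.SkipPackage:

    def check_uboot_settings(uboot_machine, uboot_config_value):
        uboot_config = (uboot_config_value or "").split()
        if not uboot_machine and not uboot_config:
            raise ValueError("Either UBOOT_MACHINE or UBOOT_CONFIG must be set")
        if uboot_machine and uboot_config:
            raise ValueError("You cannot use UBOOT_MACHINE and UBOOT_CONFIG at the same time")
        return uboot_config   # list of named configurations to expand
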
diff --git a/import-layers/yocto-poky/meta/classes/uboot-extlinux-config.bbclass b/import-layers/yocto-poky/meta/classes/uboot-extlinux-config.bbclass
index 8447a04..61dff14 100644
--- a/import-layers/yocto-poky/meta/classes/uboot-extlinux-config.bbclass
+++ b/import-layers/yocto-poky/meta/classes/uboot-extlinux-config.bbclass
@@ -68,7 +68,7 @@
 
 UBOOT_EXTLINUX_CONFIG = "${B}/extlinux.conf"
 
-python create_extlinux_config() {
+python do_create_extlinux_config() {
     if d.getVar("UBOOT_EXTLINUX") != "1":
       return
 
@@ -149,4 +149,4 @@
         bb.fatal('Unable to open %s' % (cfile))
 }
 
-do_install[prefuncs] += "create_extlinux_config"
+addtask create_extlinux_config before do_install do_deploy after do_compile
diff --git a/import-layers/yocto-poky/meta/classes/uninative.bbclass b/import-layers/yocto-poky/meta/classes/uninative.bbclass
index 8f34483..670efa9 100644
--- a/import-layers/yocto-poky/meta/classes/uninative.bbclass
+++ b/import-layers/yocto-poky/meta/classes/uninative.bbclass
@@ -49,6 +49,12 @@
             localdata = bb.data.createCopy(d)
             localdata.setVar('FILESPATH', "")
             localdata.setVar('DL_DIR', tarballdir)
+            # Our games with path manipulation of DL_DIR mean standard PREMIRRORS don't work
+            # and we can't easily put 'chksum' into the url path from a url parameter with
+            # the current fetcher url handling
+            ownmirror = d.getVar('SOURCE_MIRROR_URL')
+            if ownmirror:
+                localdata.appendVar("PREMIRRORS", " ${UNINATIVE_URL}${UNINATIVE_TARBALL} ${SOURCE_MIRROR_URL}/uninative/%s/${UNINATIVE_TARBALL}" % chksum)
 
             srcuri = d.expand("${UNINATIVE_URL}${UNINATIVE_TARBALL};sha256sum=%s" % chksum)
             bb.note("Fetching uninative binary shim from %s" % srcuri)
@@ -57,7 +63,19 @@
             fetcher.download()
             localpath = fetcher.localpath(srcuri)
             if localpath != tarballpath and os.path.exists(localpath) and not os.path.exists(tarballpath):
+                # Follow the symlink handling from bitbake's fetch2.
+                # This covers the case where an existing symlink is broken,
+                # as well as two processes trying to create the link at
+                # the same time.
+                if os.path.islink(tarballpath):
+                    # Broken symbolic link
+                    os.unlink(tarballpath)
+
+                # Deal with two processes trying to make symlink at once
+                try:
                     os.symlink(localpath, tarballpath)
+                except FileExistsError:
+                    pass
 
         cmd = d.expand("\
 mkdir -p ${UNINATIVE_STAGING_DIR}-uninative; \
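
The symlink handling added above is a small race-tolerant pattern: drop a stale or broken link first, then ignore the failure if another process created the link in the meantime. A standalone sketch under the same assumptions:

    import os

    def link_tarball(localpath, tarballpath):
        if os.path.islink(tarballpath) and not os.path.exists(tarballpath):
            os.unlink(tarballpath)        # remove a broken symlink
        try:
            os.symlink(localpath, tarballpath)
        except FileExistsError:
            pass                          # another process won the race
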
diff --git a/import-layers/yocto-poky/meta/classes/update-alternatives.bbclass b/import-layers/yocto-poky/meta/classes/update-alternatives.bbclass
index 4bba76c..aa01058 100644
--- a/import-layers/yocto-poky/meta/classes/update-alternatives.bbclass
+++ b/import-layers/yocto-poky/meta/classes/update-alternatives.bbclass
@@ -143,6 +143,10 @@
             if not alt_link:
                 alt_link = "%s/%s" % (d.getVar('bindir'), alt_name)
                 d.setVarFlag('ALTERNATIVE_LINK_NAME', alt_name, alt_link)
+            if alt_link.startswith(os.path.join(d.getVar('sysconfdir', True), 'init.d')):
+                # Managing init scripts does not work (bug #10433), foremost
+                # because of a race with update-rc.d
+                bb.fatal("Using update-alternatives for managing SysV init scripts is not supported")
 
             alt_target   = d.getVarFlag('ALTERNATIVE_TARGET_%s' % pkg, alt_name) or d.getVarFlag('ALTERNATIVE_TARGET', alt_name)
             alt_target   = alt_target or d.getVar('ALTERNATIVE_TARGET_%s' % pkg) or d.getVar('ALTERNATIVE_TARGET') or alt_link
@@ -201,8 +205,8 @@
     pkgdest = d.getVar('PKGD')
     for pkg in (d.getVar('PACKAGES') or "").split():
         # Create post install/removal scripts
-        alt_setup_links = "# Begin section update-alternatives\n"
-        alt_remove_links = "# Begin section update-alternatives\n"
+        alt_setup_links = ""
+        alt_remove_links = ""
         for alt_name in (d.getVar('ALTERNATIVE_%s' % pkg) or "").split():
             alt_link     = d.getVarFlag('ALTERNATIVE_LINK_NAME', alt_name)
             alt_target   = d.getVarFlag('ALTERNATIVE_TARGET_%s' % pkg, alt_name) or d.getVarFlag('ALTERNATIVE_TARGET', alt_name)
@@ -225,13 +229,10 @@
             # Default to generate shell script.. eventually we may want to change this...
             alt_target = os.path.normpath(alt_target)
 
-            alt_setup_links  += 'update-alternatives --install %s %s %s %s\n' % (alt_link, alt_name, alt_target, alt_priority)
-            alt_remove_links += 'update-alternatives --remove  %s %s\n' % (alt_name, alt_target)
+            alt_setup_links  += '\tupdate-alternatives --install %s %s %s %s\n' % (alt_link, alt_name, alt_target, alt_priority)
+            alt_remove_links += '\tupdate-alternatives --remove  %s %s\n' % (alt_name, alt_target)
 
-        alt_setup_links += "# End section update-alternatives\n"
-        alt_remove_links += "# End section update-alternatives\n"
-
-        if len(alt_setup_links.splitlines()) > 2:
+        if alt_setup_links:
             # RDEPENDS setup
             provider = d.getVar('VIRTUAL-RUNTIME_update-alternatives')
             if provider:
@@ -241,24 +242,12 @@
             bb.note('adding update-alternatives calls to postinst/prerm for %s' % pkg)
             bb.note('%s' % alt_setup_links)
             postinst = d.getVar('pkg_postinst_%s' % pkg) or '#!/bin/sh\n'
-            postinst = postinst.splitlines(True)
-            try:
-                index = postinst.index('# Begin section update-rc.d\n')
-                postinst.insert(index, alt_setup_links)
-            except ValueError:
-                postinst.append(alt_setup_links)
-            postinst = ''.join(postinst)
+            postinst += alt_setup_links
             d.setVar('pkg_postinst_%s' % pkg, postinst)
 
             bb.note('%s' % alt_remove_links)
             prerm = d.getVar('pkg_prerm_%s' % pkg) or '#!/bin/sh\n'
-            prerm = prerm.splitlines(True)
-            try:
-                index = prerm.index('# End section update-rc.d\n')
-                prerm.insert(index + 1, alt_remove_links)
-            except ValueError:
-                prerm.append(alt_remove_links)
-            prerm = ''.join(prerm)
+            prerm += alt_remove_links
             d.setVar('pkg_prerm_%s' % pkg, prerm)
 }
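
With the marker sections gone, the class now simply appends the generated update-alternatives calls to whatever pkg_postinst already contains. A simplified sketch with one hypothetical alternative:

    # hypothetical alternative: name, link, target, priority
    alternatives = [("vi", "/usr/bin/vi", "/usr/bin/vim.tiny", "100")]

    alt_setup_links = ""
    for alt_name, alt_link, alt_target, alt_priority in alternatives:
        alt_setup_links += '\tupdate-alternatives --install %s %s %s %s\n' % (
            alt_link, alt_name, alt_target, alt_priority)

    postinst = '#!/bin/sh\n' + alt_setup_links   # appended directly, no marker comments
    print(postinst)
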
 
diff --git a/import-layers/yocto-poky/meta/classes/update-rc.d.bbclass b/import-layers/yocto-poky/meta/classes/update-rc.d.bbclass
index 9ba3dac..e1e0e04 100644
--- a/import-layers/yocto-poky/meta/classes/update-rc.d.bbclass
+++ b/import-layers/yocto-poky/meta/classes/update-rc.d.bbclass
@@ -37,7 +37,6 @@
 PACKAGE_WRITE_DEPS += "update-rc.d-native"
 
 updatercd_postinst() {
-# Begin section update-rc.d
 if ${@use_updatercd(d)} && type update-rc.d >/dev/null 2>/dev/null; then
 	if [ -n "$D" ]; then
 		OPT="-r $D"
@@ -46,15 +45,12 @@
 	fi
 	update-rc.d $OPT ${INITSCRIPT_NAME} ${INITSCRIPT_PARAMS}
 fi
-# End section update-rc.d
 }
 
 updatercd_prerm() {
-# Begin section update-rc.d
 if ${@use_updatercd(d)} && [ -z "$D" -a -x "${INIT_D_DIR}/${INITSCRIPT_NAME}" ]; then
 	${INIT_D_DIR}/${INITSCRIPT_NAME} stop || :
 fi
-# End section update-rc.d
 }
 
 updatercd_postrm() {
@@ -95,8 +91,7 @@
             return
         statement = "grep -q -w '/etc/init.d/functions' %s" % path
         if subprocess.call(statement, shell=True) == 0:
-            mlprefix = d.getVar('MLPREFIX') or ""
-            d.appendVar('RDEPENDS_' + pkg, ' %sinitscripts-functions' % (mlprefix))
+            d.appendVar('RDEPENDS_' + pkg, ' initd-functions')
 
     def update_rcd_package(pkg):
         bb.debug(1, 'adding update-rc.d calls to preinst/postinst/prerm/postrm for %s' % pkg)
@@ -116,25 +111,13 @@
         postinst = d.getVar('pkg_postinst_%s' % pkg)
         if not postinst:
             postinst = '#!/bin/sh\n'
-        postinst = postinst.splitlines(True)
-        try:
-            index = postinst.index('# End section update-alternatives\n')
-            postinst.insert(index + 1, localdata.getVar('updatercd_postinst'))
-        except ValueError:
-            postinst.append(localdata.getVar('updatercd_postinst'))
-        postinst = ''.join(postinst)
+        postinst += localdata.getVar('updatercd_postinst')
         d.setVar('pkg_postinst_%s' % pkg, postinst)
 
         prerm = d.getVar('pkg_prerm_%s' % pkg)
         if not prerm:
             prerm = '#!/bin/sh\n'
-        prerm = prerm.splitlines(True)
-        try:
-            index = prerm.index('# Begin section update-alternatives\n')
-            prerm.insert(index, localdata.getVar('updatercd_prerm'))
-        except ValueError:
-            prerm.append(localdata.getVar('updatercd_prerm'))
-        prerm = ''.join(prerm)
+        prerm += localdata.getVar('updatercd_prerm')
         d.setVar('pkg_prerm_%s' % pkg, prerm)
 
         postrm = d.getVar('pkg_postrm_%s' % pkg)
diff --git a/import-layers/yocto-poky/meta/classes/useradd-staticids.bbclass b/import-layers/yocto-poky/meta/classes/useradd-staticids.bbclass
index 6ebf760..589a99f 100644
--- a/import-layers/yocto-poky/meta/classes/useradd-staticids.bbclass
+++ b/import-layers/yocto-poky/meta/classes/useradd-staticids.bbclass
@@ -1,22 +1,10 @@
 # In order to support a deterministic set of 'dynamic' users/groups,
 # we need a function to reformat the params based on a static file
 def update_useradd_static_config(d):
-    import argparse
     import itertools
     import re
     import errno
-
-    class myArgumentParser( argparse.ArgumentParser ):
-        def _print_message(self, message, file=None):
-            bb.warn("%s - %s: %s" % (d.getVar('PN'), pkg, message))
-
-        # This should never be called...
-        def exit(self, status=0, message=None):
-            message = message or ("%s - %s: useradd.bbclass: Argument parsing exited" % (d.getVar('PN'), pkg))
-            error(message)
-
-        def error(self, message):
-            bb.fatal(message)
+    import oe.useradd
 
     def list_extend(iterable, length, obj = None):
         """Ensure that iterable is the specified length by extending with obj
@@ -50,62 +38,44 @@
 
         return id_table
 
-    def handle_missing_id(id, type, pkg):
+    def handle_missing_id(id, type, pkg, files, var, value):
         # For backwards compatibility we accept "1" in addition to "error"
-        if d.getVar('USERADD_ERROR_DYNAMIC') == 'error' or d.getVar('USERADD_ERROR_DYNAMIC') == '1':
-            raise NotImplementedError("%s - %s: %sname %s does not have a static ID defined. Skipping it." % (d.getVar('PN'), pkg, type, id))
-        elif d.getVar('USERADD_ERROR_DYNAMIC') == 'warn':
-            bb.warn("%s - %s: %sname %s does not have a static ID defined." % (d.getVar('PN'), pkg, type, id))
+        error_dynamic = d.getVar('USERADD_ERROR_DYNAMIC')
+        msg = "%s - %s: %sname %s does not have a static ID defined." % (d.getVar('PN'), pkg, type, id)
+        if files:
+            msg += " Add %s to one of these files: %s" % (id, files)
+        else:
+            msg += " %s file(s) not found in BBPATH: %s" % (var, value)
+        if error_dynamic == 'error' or error_dynamic == '1':
+            raise NotImplementedError(msg)
+        elif error_dynamic == 'warn':
+            bb.warn(msg)
+        elif error_dynamic == 'skip':
+            raise bb.parse.SkipRecipe(msg)
+
+    # Return a list of configuration files based on the contents of the
+    # given variable (USERADD_GID_TABLES or USERADD_UID_TABLES), falling
+    # back to the defaults files/group and files/passwd respectively.
+    # Paths are resolved via BBPATH.
+    def get_table_list(d, var, default):
+        files = []
+        bbpath = d.getVar('BBPATH', True)
+        tables = d.getVar(var, True)
+        if not tables:
+            tables = default
+        for conf_file in tables.split():
+            files.append(bb.utils.which(bbpath, conf_file))
+        return (' '.join(files), var, default)
 
     # We parse and rewrite the useradd components
     def rewrite_useradd(params, is_pkg):
-        # The following comes from --help on useradd from shadow
-        parser = myArgumentParser(prog='useradd')
-        parser.add_argument("-b", "--base-dir", metavar="BASE_DIR", help="base directory for the home directory of the new account")
-        parser.add_argument("-c", "--comment", metavar="COMMENT", help="GECOS field of the new account")
-        parser.add_argument("-d", "--home-dir", metavar="HOME_DIR", help="home directory of the new account")
-        parser.add_argument("-D", "--defaults", help="print or change default useradd configuration", action="store_true")
-        parser.add_argument("-e", "--expiredate", metavar="EXPIRE_DATE", help="expiration date of the new account")
-        parser.add_argument("-f", "--inactive", metavar="INACTIVE", help="password inactivity period of the new account")
-        parser.add_argument("-g", "--gid", metavar="GROUP", help="name or ID of the primary group of the new account")
-        parser.add_argument("-G", "--groups", metavar="GROUPS", help="list of supplementary groups of the new account")
-        parser.add_argument("-k", "--skel", metavar="SKEL_DIR", help="use this alternative skeleton directory")
-        parser.add_argument("-K", "--key", metavar="KEY=VALUE", help="override /etc/login.defs defaults")
-        parser.add_argument("-l", "--no-log-init", help="do not add the user to the lastlog and faillog databases", action="store_true")
-        parser.add_argument("-m", "--create-home", help="create the user's home directory", action="store_const", const=True)
-        parser.add_argument("-M", "--no-create-home", dest="create_home", help="do not create the user's home directory", action="store_const", const=False)
-        parser.add_argument("-N", "--no-user-group", dest="user_group", help="do not create a group with the same name as the user", action="store_const", const=False)
-        parser.add_argument("-o", "--non-unique", help="allow to create users with duplicate (non-unique UID)", action="store_true")
-        parser.add_argument("-p", "--password", metavar="PASSWORD", help="encrypted password of the new account")
-        parser.add_argument("-P", "--clear-password", metavar="CLEAR_PASSWORD", help="use this clear password for the new account")
-        parser.add_argument("-R", "--root", metavar="CHROOT_DIR", help="directory to chroot into")
-        parser.add_argument("-r", "--system", help="create a system account", action="store_true")
-        parser.add_argument("-s", "--shell", metavar="SHELL", help="login shell of the new account")
-        parser.add_argument("-u", "--uid", metavar="UID", help="user ID of the new account")
-        parser.add_argument("-U", "--user-group", help="create a group with the same name as the user", action="store_const", const=True)
-        parser.add_argument("LOGIN", help="Login name of the new user")
-
-        # Return a list of configuration files based on either the default
-        # files/passwd or the contents of USERADD_UID_TABLES
-        # paths are resolved via BBPATH
-        def get_passwd_list(d):
-            str = ""
-            bbpath = d.getVar('BBPATH')
-            passwd_tables = d.getVar('USERADD_UID_TABLES')
-            if not passwd_tables:
-                passwd_tables = 'files/passwd'
-            for conf_file in passwd_tables.split():
-                str += " %s" % bb.utils.which(bbpath, conf_file)
-            return str
+        parser = oe.useradd.build_useradd_parser()
 
         newparams = []
         users = None
-        for param in re.split('''[ \t]*;[ \t]*(?=(?:[^'"]|'[^']*'|"[^"]*")*$)''', params):
-            param = param.strip()
-            if not param:
-                continue
+        for param in oe.useradd.split_commands(params):
             try:
-                uaargs = parser.parse_args(re.split('''[ \t]+(?=(?:[^'"]|'[^']*'|"[^"]*")*$)''', param))
+                uaargs = parser.parse_args(oe.useradd.split_args(param))
             except:
                 bb.fatal("%s: Unable to parse arguments for USERADD_PARAM_%s: '%s'" % (d.getVar('PN'), pkg, param))
 
@@ -121,10 +91,12 @@
             # all new users get the default ('*' which prevents login) until the user is
             # specifically configured by the system admin.
             if not users:
-                users = merge_files(get_passwd_list(d), 7)
+                files, table_var, table_value = get_table_list(d, 'USERADD_UID_TABLES', 'files/passwd')
+                users = merge_files(files, 7)
 
+            type = 'system user' if uaargs.system else 'normal user'
             if uaargs.LOGIN not in users:
-                handle_missing_id(uaargs.LOGIN, 'user', pkg)
+                handle_missing_id(uaargs.LOGIN, type, pkg, files, table_var, table_value)
                 newparams.append(param)
                 continue
 
@@ -182,7 +154,7 @@
 
             # Should be an error if a specific option is set...
             if not uaargs.uid or not uaargs.uid.isdigit() or not uaargs.gid:
-                 handle_missing_id(uaargs.LOGIN, 'user', pkg)
+                 handle_missing_id(uaargs.LOGIN, type, pkg, files, table_var, table_value)
 
             # Reconstruct the args...
             newparam  = ['', ' --defaults'][uaargs.defaults]
@@ -217,40 +189,14 @@
 
     # We parse and rewrite the groupadd components
     def rewrite_groupadd(params, is_pkg):
-        # The following comes from --help on groupadd from shadow
-        parser = myArgumentParser(prog='groupadd')
-        parser.add_argument("-f", "--force", help="exit successfully if the group already exists, and cancel -g if the GID is already used", action="store_true")
-        parser.add_argument("-g", "--gid", metavar="GID", help="use GID for the new group")
-        parser.add_argument("-K", "--key", metavar="KEY=VALUE", help="override /etc/login.defs defaults")
-        parser.add_argument("-o", "--non-unique", help="allow to create groups with duplicate (non-unique) GID", action="store_true")
-        parser.add_argument("-p", "--password", metavar="PASSWORD", help="use this encrypted password for the new group")
-        parser.add_argument("-P", "--clear-password", metavar="CLEAR_PASSWORD", help="use this clear password for the new group")
-        parser.add_argument("-R", "--root", metavar="CHROOT_DIR", help="directory to chroot into")
-        parser.add_argument("-r", "--system", help="create a system account", action="store_true")
-        parser.add_argument("GROUP", help="Group name of the new group")
-
-        # Return a list of configuration files based on either the default
-        # files/group or the contents of USERADD_GID_TABLES
-        # paths are resolved via BBPATH
-        def get_group_list(d):
-            str = ""
-            bbpath = d.getVar('BBPATH')
-            group_tables = d.getVar('USERADD_GID_TABLES')
-            if not group_tables:
-                group_tables = 'files/group'
-            for conf_file in group_tables.split():
-                str += " %s" % bb.utils.which(bbpath, conf_file)
-            return str
+        parser = oe.useradd.build_groupadd_parser()
 
         newparams = []
         groups = None
-        for param in re.split('''[ \t]*;[ \t]*(?=(?:[^'"]|'[^']*'|"[^"]*")*$)''', params):
-            param = param.strip()
-            if not param:
-                continue
+        for param in oe.useradd.split_commands(params):
             try:
                 # If we're processing multiple lines, we could have left over values here...
-                gaargs = parser.parse_args(re.split('''[ \t]+(?=(?:[^'"]|'[^']*'|"[^"]*")*$)''', param))
+                gaargs = parser.parse_args(oe.useradd.split_args(param))
             except:
                 bb.fatal("%s: Unable to parse arguments for GROUPADD_PARAM_%s: '%s'" % (d.getVar('PN'), pkg, param))
 
@@ -264,10 +210,12 @@
             # Note: similar to the passwd file, the 'password' field is ignored
             # Note: group_members is ignored, group members must be configured with the GROUPMEMS_PARAM
             if not groups:
-                groups = merge_files(get_group_list(d), 4)
+                files, table_var, table_value = get_table_list(d, 'USERADD_GID_TABLES', 'files/group')
+                groups = merge_files(files, 4)
 
+            type = 'system group' if gaargs.system else 'normal group'
             if gaargs.GROUP not in groups:
-                handle_missing_id(gaargs.GROUP, 'group', pkg)
+                handle_missing_id(gaargs.GROUP, type, pkg, files, table_var, table_value)
                 newparams.append(param)
                 continue
 
@@ -279,7 +227,7 @@
                 gaargs.gid = field[2]
 
             if not gaargs.gid or not gaargs.gid.isdigit():
-                handle_missing_id(gaargs.GROUP, 'group', pkg)
+                handle_missing_id(gaargs.GROUP, type, pkg, files, table_var, table_value)
 
             # Reconstruct the args...
             newparam  = ['', ' --force'][gaargs.force]
@@ -335,11 +283,7 @@
 
     #bb.warn("Before:  'EXTRA_USERS_PARAMS' - '%s'" % (d.getVar('EXTRA_USERS_PARAMS')))
     new_extrausers = []
-    for cmd in re.split('''[ \t]*;[ \t]*(?=(?:[^'"]|'[^']*'|"[^"]*")*$)''', extrausers):
-        cmd = cmd.strip()
-        if not cmd:
-            continue
-
+    for cmd in oe.useradd.split_commands(extrausers):
         if re.match('''useradd (.*)''', cmd):
             useradd_param = re.match('''useradd (.*)''', cmd).group(1)
             useradd_param = rewrite_useradd(useradd_param, False)
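
The new get_table_list() helper resolves each table name against BBPATH via bb.utils.which(). Outside of BitBake the same lookup can be approximated with a plain search-path walk (the paths below are hypothetical):

    import os

    def resolve_tables(bbpath, tables, default):
        names = (tables or default).split()
        found = []
        for name in names:
            for layer_dir in bbpath.split(':'):
                candidate = os.path.join(layer_dir, name)
                if os.path.exists(candidate):
                    found.append(candidate)
                    break
        return ' '.join(found)

    # e.g. resolve_tables('/build/meta:/build/meta-custom', None, 'files/passwd')
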
diff --git a/import-layers/yocto-poky/meta/classes/useradd.bbclass b/import-layers/yocto-poky/meta/classes/useradd.bbclass
index 0f551b5..124becd 100644
--- a/import-layers/yocto-poky/meta/classes/useradd.bbclass
+++ b/import-layers/yocto-poky/meta/classes/useradd.bbclass
@@ -118,6 +118,7 @@
 	# useradd/groupadd tools are unavailable. If there is no dependency, we assume we don't want to
 	# create users in the sysroot
 	if ! command -v useradd; then
+		bbwarn "command useradd not found!"
 		exit 0
 	fi
 
@@ -139,22 +140,19 @@
 EXTRA_STAGING_FIXMES += "COMPONENTS_DIR PSEUDO_LOCALSTATEDIR LOGFIFO"
 
 python useradd_sysroot_sstate () {
+    scriptfile = None
     task = d.getVar("BB_CURRENTTASK")
     if task == "package_setscene":
         bb.build.exec_func("useradd_sysroot", d)
     elif task == "prepare_recipe_sysroot":
         # Used to update this recipe's own sysroot so the user/groups are available to do_install
         scriptfile = d.expand("${RECIPE_SYSROOT}${bindir}/postinst-useradd-${PN}")
-        bb.utils.mkdirhier(os.path.dirname(scriptfile))
-        with open(scriptfile, 'w') as script:
-            script.write("#!/bin/sh\n")
-            bb.data.emit_func("useradd_sysroot", script, d)
-            script.write("useradd_sysroot\n")
-        os.chmod(scriptfile, 0o755)
         bb.build.exec_func("useradd_sysroot", d)
     elif task == "populate_sysroot":
         # Used when installed in dependent task sysroots
         scriptfile = d.expand("${SYSROOT_DESTDIR}${bindir}/postinst-useradd-${PN}")
+
+    if scriptfile:
         bb.utils.mkdirhier(os.path.dirname(scriptfile))
         with open(scriptfile, 'w') as script:
             script.write("#!/bin/sh\n")
diff --git a/import-layers/yocto-poky/meta/classes/utils.bbclass b/import-layers/yocto-poky/meta/classes/utils.bbclass
index 96463ab..8e07eac 100644
--- a/import-layers/yocto-poky/meta/classes/utils.bbclass
+++ b/import-layers/yocto-poky/meta/classes/utils.bbclass
@@ -320,7 +320,7 @@
 
 
 def check_app_exists(app, d):
-    app = d.expand(app).strip()
+    app = d.expand(app).split()[0].strip()
     path = d.getVar('PATH')
     return bool(bb.utils.which(path, app))
 
@@ -369,6 +369,7 @@
     localdata.setVar("OVERRIDES", overrides)
     localdata.setVar("MLPREFIX", variant + "-")
     return localdata
+get_multilib_datastore[vardepsexclude] = "OVERRIDES"
 
 def all_multilib_tune_values(d, var, unique = True, need_split = True, delim = ' '):
     """Return a string of all ${var} in all multilib tune configuration"""
@@ -431,6 +432,7 @@
         values[v].append(localdata.getVar(v))
         values['ml'].append(item)
     return values
+all_multilib_tune_list[vardepsexclude] = "OVERRIDES"
 
 # If the user hasn't set up their name/email, set some defaults
 check_git_config() {
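
check_app_exists() is often handed an expansion such as "${CC}", which may carry flags after the command name; taking only the first word keeps the PATH lookup meaningful. A standalone approximation using shutil.which in place of bb.utils.which:

    import shutil

    def check_app_exists(expanded_value):
        app = expanded_value.split()[0].strip()
        return shutil.which(app) is not None

    print(check_app_exists("gcc -march=armv7-a -mfpu=neon"))   # True if gcc is on PATH
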
diff --git a/import-layers/yocto-poky/meta/classes/waf.bbclass b/import-layers/yocto-poky/meta/classes/waf.bbclass
index c4698e9..acbda27 100644
--- a/import-layers/yocto-poky/meta/classes/waf.bbclass
+++ b/import-layers/yocto-poky/meta/classes/waf.bbclass
@@ -25,8 +25,23 @@
 
     return ""
 
+python waf_preconfigure() {
+    from distutils.version import StrictVersion
+    srcsubdir = d.getVar('S')
+    wafbin = os.path.join(srcsubdir, 'waf')
+    status, result = oe.utils.getstatusoutput(wafbin + " --version")
+    if status != 0:
+        bb.warn("Unable to execute waf --version, exit code %d. Assuming waf version without bindir/libdir support." % status)
+        return
+    version = result.split()[1]
+    if StrictVersion(version) >= StrictVersion("1.8.7"):
+        d.setVar("WAF_EXTRA_CONF", "--bindir=${bindir} --libdir=${libdir}")
+}
+
+do_configure[prefuncs] += "waf_preconfigure"
+
 waf_do_configure() {
-	${S}/waf configure --prefix=${prefix} ${EXTRA_OECONF}
+	${S}/waf configure --prefix=${prefix} ${WAF_EXTRA_CONF} ${EXTRA_OECONF}
 }
 
 waf_do_compile()  {
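
waf_preconfigure() only adds --bindir/--libdir when the project's waf script is new enough to accept them (1.8.7 or later). A rough sketch of the version gate, reusing distutils' StrictVersion as the class does; the sample version string is hypothetical:

    from distutils.version import StrictVersion

    def extra_waf_conf(version_output):
        version = version_output.split()[1]     # e.g. "waf 1.9.8 (abcdef)" -> "1.9.8"
        if StrictVersion(version) >= StrictVersion("1.8.7"):
            return "--bindir=${bindir} --libdir=${libdir}"
        return ""

    print(extra_waf_conf("waf 1.9.8 (abcdef)"))
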
diff --git a/import-layers/yocto-poky/meta/conf/bitbake.conf b/import-layers/yocto-poky/meta/conf/bitbake.conf
index 2dac3a1..9696273 100644
--- a/import-layers/yocto-poky/meta/conf/bitbake.conf
+++ b/import-layers/yocto-poky/meta/conf/bitbake.conf
@@ -17,11 +17,13 @@
 export prefix = "/usr"
 export exec_prefix = "${prefix}"
 
+root_prefix = "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', '${exec_prefix}', '${base_prefix}', d)}"
+
 # Base paths
-export base_bindir = "${base_prefix}/bin"
-export base_sbindir = "${base_prefix}/sbin"
-export base_libdir = "${base_prefix}/${baselib}"
-export nonarch_base_libdir = "${base_prefix}/lib"
+export base_bindir = "${root_prefix}/bin"
+export base_sbindir = "${root_prefix}/sbin"
+export base_libdir = "${root_prefix}/${baselib}"
+export nonarch_base_libdir = "${root_prefix}/lib"
 
 # Architecture independent paths
 export sysconfdir = "${base_prefix}/etc"
@@ -83,6 +85,10 @@
 # Root home directory
 ROOT_HOME ??= "/home/root"
 
+# If set to boolean true ('yes', 'y', 'true', 't', '1'), /var/log links to /var/volatile/log.
+# If set to boolean false ('no', 'n', 'false', 'f', '0'), /var/log is on persistent storage.
+VOLATILE_LOG_DIR ?= "yes"
+
 ##################################################################
 # Architecture-dependent build variables.
 ##################################################################
@@ -170,6 +176,7 @@
     chrpath-native \
     file-native \
     findutils-native \
+    gawk-native \
     git-native \
     grep-native \
     diffstat-native \
@@ -324,6 +331,14 @@
 # This default was only used for checking
 FILESEXTRAPATHS ?= "__default:"
 
+# The default list of fs-perms files to process.  If the list is empty only
+# the builtin definitions will be used.  Builtin definitions included:
+#  base_prefix, prefix, exec_prefix, base_bindir, base_sbindir, base_libdir,
+#  datadir, sysconfdir, servicedir, sharedstatedir, localstatedir, infodir,
+#  mandir, docdir, bindir, sbindir, libexecdir, libdir, includedir and
+#  oldincludedir
+FILESYSTEM_PERMS_TABLES ?= "${@'files/fs-perms.txt' if oe.types.boolean(d.getVar('VOLATILE_LOG_DIR')) else 'files/fs-perms-persistent-log.txt'}"
+
 ##################################################################
 # General work and output directories for the build system.
 ##################################################################
@@ -461,7 +476,7 @@
     [ ar as awk basename bash bzip2 cat chgrp chmod chown chrpath cmp cp cpio \
     cpp cut date dd diff diffstat dirname du echo egrep env expand expr false \
     fgrep file find flock g++ gawk gcc getconf getopt git grep gunzip gzip \
-    head hostname install ld ldd ln ls make makeinfo md5sum mkdir mknod \
+    head hostname id install ld ldd ln ls make makeinfo md5sum mkdir mknod \
     mktemp mv nm objcopy objdump od patch perl pod2man pr printf pwd python python2 \
     python2.7 python3 ranlib readelf readlink rm rmdir rpcgen sed sh sha256sum \
     sleep sort split stat strings strip tail tar tee test touch tr true uname \
@@ -469,10 +484,10 @@
 "
 
 # Tools needed to run testimage runtime image testing
-HOSTTOOLS += "ip ping ps scp ssh stty"
+HOSTTOOLS += "${@['', 'ip ping ps scp ssh stty'][bb.data.inherits_class('testimage', d)]}"
 
 # Link to these if present
-HOSTTOOLS_NONFATAL += "aws ccache gcc-ar gpg ld.bfd ld.gold nc sftp socat sudo"
+HOSTTOOLS_NONFATAL += "aws ccache gcc-ar gpg ld.bfd ld.gold nc sftp socat ssh sudo"
 
 # Temporary add few more detected in bitbake world
 HOSTTOOLS_NONFATAL += "join nl size yes zcat"
@@ -481,9 +496,6 @@
 HOSTTOOLS_NONFATAL += "bzr"
 
 CCACHE ??= ""
-# Disable ccache explicitly if CCACHE is null since gcc may be a symlink
-# of ccache some distributions (e.g., Fedora 17).
-export CCACHE_DISABLE ??= "${@[0,1][d.getVar('CCACHE') == '']}"
 # ccache < 3.1.10 will create CCACHE_DIR on startup even if disabled, and
 # autogen sets HOME=/dev/null so in certain situations builds can fail.
 # Explicitly export CCACHE_DIR until we can assume ccache >3.1.10 on the host.
@@ -546,6 +558,7 @@
 export TARGET_CFLAGS = "${TARGET_CPPFLAGS} ${SELECTED_OPTIMIZATION}"
 
 export BUILD_CXXFLAGS = "${BUILD_CFLAGS}"
+BUILDSDK_CXXFLAGS = "${BUILDSDK_CFLAGS}"
 export CXXFLAGS = "${TARGET_CXXFLAGS}"
 export TARGET_CXXFLAGS = "${TARGET_CFLAGS}"
 
@@ -714,11 +727,9 @@
 # This works for  functions as well, they are really just environment variables.
 # Default OVERRIDES to make compilation fail fast in case of build system misconfiguration.
 OVERRIDES = "${TARGET_OS}:${TRANSLATED_TARGET_ARCH}:build-${BUILD_OS}:pn-${PN}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}:${CLASSOVERRIDE}:forcevariable"
-OVERRIDES[vardepsexclude] = "MACHINEOVERRIDES"
 CLASSOVERRIDE ?= "class-target"
 DISTROOVERRIDES ?= "${@d.getVar('DISTRO') or ''}"
 MACHINEOVERRIDES ?= "${MACHINE}"
-MACHINEOVERRIDES[vardepsexclude] = "MACHINE"
 
 FILESOVERRIDES = "${TRANSLATED_TARGET_ARCH}:${MACHINEOVERRIDES}:${DISTROOVERRIDES}"
 
@@ -731,10 +742,8 @@
 include conf/auto.conf
 include conf/local.conf
 require conf/multiconfig/${BB_CURRENT_MC}.conf
-include conf/build/${BUILD_SYS}.conf
 include conf/machine/${MACHINE}.conf
 include conf/machine-sdk/${SDKMACHINE}.conf
-include conf/target/${TARGET_SYS}.conf
 include conf/distro/${DISTRO}.conf
 include conf/distro/defaultsetup.conf
 include conf/documentation.conf
@@ -798,7 +807,7 @@
 
 # Native distro features (will always be used for -native, even if they
 # are not enabled for target)
-DISTRO_FEATURES_NATIVE ?= "x11"
+DISTRO_FEATURES_NATIVE ?= "x11 ipv6"
 DISTRO_FEATURES_NATIVESDK ?= "x11 libc-charsets libc-locales libc-locale-code"
 
 # Normally target distro features will not be applied to native builds:
@@ -848,7 +857,7 @@
     SSTATE_DIR THISDIR FILESEXTRAPATHS FILE_DIRNAME HOME LOGNAME SHELL TERM \
     USER FILESPATH STAGING_DIR_HOST STAGING_DIR_TARGET COREBASE PRSERV_HOST \
     PRSERV_DUMPDIR PRSERV_DUMPFILE PRSERV_LOCKDOWN PARALLEL_MAKE \
-    CCACHE_DIR EXTERNAL_TOOLCHAIN CCACHE CCACHE_DISABLE LICENSE_PATH SDKPKGSUFFIX \
+    CCACHE_DIR EXTERNAL_TOOLCHAIN CCACHE CCACHE_NOHASHDIR LICENSE_PATH SDKPKGSUFFIX \
     WARN_QA ERROR_QA WORKDIR STAMPCLEAN PKGDATA_DIR BUILD_ARCH SSTATE_PKGARCH \
     BB_WORKERCONTEXT BB_LIMITEDDEPS extend_recipe_sysroot DEPLOY_DIR"
 BB_HASHCONFIG_WHITELIST ?= "${BB_HASHBASE_WHITELIST} DATE TIME SSH_AGENT_PID \
@@ -856,7 +865,7 @@
     PARALLEL_MAKE BB_NUMBER_THREADS BB_ORIGENV BB_INVALIDCONF BBINCLUDED \
     GIT_PROXY_COMMAND ALL_PROXY all_proxy NO_PROXY no_proxy FTP_PROXY ftp_proxy \
     HTTP_PROXY http_proxy HTTPS_PROXY https_proxy SOCKS5_USER SOCKS5_PASSWD \
-    BB_SETSCENE_ENFORCE"
+    BB_SETSCENE_ENFORCE BB_CMDLINE BB_SERVER_TIMEOUT"
 BB_SIGNATURE_EXCLUDE_FLAGS ?= "doc deps depends \
     lockfiles type vardepsexclude vardeps vardepvalue vardepvalueexclude \
     file-checksums python func task export unexport noexec nostamp dirs cleandirs \
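
Two of the additions above are simple conditionals: root_prefix switches the unversioned base paths to ${exec_prefix} when the "usrmerge" DISTRO_FEATURE is set, and FILESYSTEM_PERMS_TABLES picks a perms file based on the boolean VOLATILE_LOG_DIR (via oe.types.boolean). A minimal sketch of the usrmerge selection; bb.utils.contains performs the equivalent test inside BitBake:

    def root_prefix(distro_features, base_prefix="", exec_prefix="/usr"):
        return exec_prefix if "usrmerge" in distro_features.split() else base_prefix

    print(root_prefix("systemd usrmerge") + "/bin")   # /usr/bin
    print(root_prefix("sysvinit") + "/bin")           # /bin
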
diff --git a/import-layers/yocto-poky/meta/conf/conf-notes.txt b/import-layers/yocto-poky/meta/conf/conf-notes.txt
index 2f2932b..f1a4f4d 100644
--- a/import-layers/yocto-poky/meta/conf/conf-notes.txt
+++ b/import-layers/yocto-poky/meta/conf/conf-notes.txt
@@ -1,3 +1,8 @@
+
+### Shell environment set up for builds. ###
+
+You can now run 'bitbake <target>'
+
 Common targets are:
     core-image-minimal
     core-image-sato
diff --git a/import-layers/yocto-poky/meta/conf/distro/include/as-needed.inc b/import-layers/yocto-poky/meta/conf/distro/include/as-needed.inc
index 114d377..6a8ad9d 100644
--- a/import-layers/yocto-poky/meta/conf/distro/include/as-needed.inc
+++ b/import-layers/yocto-poky/meta/conf/distro/include/as-needed.inc
@@ -8,7 +8,6 @@
 ASNEEDED_pn-icu = ""
 ASNEEDED_pn-pciutils = ""
 ASNEEDED_pn-puzzles = ""
-ASNEEDED_pn-pulseaudio = ""
 ASNEEDED_pn-rpm = ""
 
 TARGET_LDFLAGS += "${ASNEEDED}"
diff --git a/import-layers/yocto-poky/meta/conf/distro/include/default-providers.inc b/import-layers/yocto-poky/meta/conf/distro/include/default-providers.inc
index ece4d8b..2b76c3c 100644
--- a/import-layers/yocto-poky/meta/conf/distro/include/default-providers.inc
+++ b/import-layers/yocto-poky/meta/conf/distro/include/default-providers.inc
@@ -46,7 +46,6 @@
 PREFERRED_PROVIDER_console-tools ?= "kbd"
 PREFERRED_PROVIDER_gzip-native ?= "pigz-native"
 PREFERRED_PROVIDER_udev ?= "${@bb.utils.contains('DISTRO_FEATURES','systemd','systemd','eudev',d)}"
-PREFERRED_RPROVIDER_libasound-module-bluez ?= "${@bb.utils.contains('DISTRO_FEATURES','bluetooth bluez5','bluez5','bluez4',d)}"
 PREFERRED_RPROVIDER_bluez-hcidump ?= "${@bb.utils.contains('DISTRO_FEATURES','bluetooth bluez5','bluez5','bluez-hcidump',d)}"
 # Alternative is ltp-ddt in meta-oe: meta-oe/recipes-devtools/ltp-ddt/ltp-ddt_0.0.4.bb
 PREFERRED_PROVIDER_ltp ?= "ltp"
@@ -54,3 +53,7 @@
 PREFERRED_PROVIDER_openssl ?= "openssl"
 PREFERRED_PROVIDER_openssl-native ?= "openssl-native"
 PREFERRED_PROVIDER_nativesdk-openssl ?= "nativesdk-openssl"
+PREFERRED_PROVIDER_pkgconfig ?= "pkgconfig"
+PREFERRED_PROVIDER_nativesdk-pkgconfig ?= "nativesdk-pkgconfig"
+PREFERRED_PROVIDER_pkgconfig-native ?= "pkgconfig-native"
+PREFERRED_RPROVIDER_initd-functions ?= "initscripts"
diff --git a/import-layers/yocto-poky/meta/conf/distro/include/default-versions.inc b/import-layers/yocto-poky/meta/conf/distro/include/default-versions.inc
index d976508..8680738 100644
--- a/import-layers/yocto-poky/meta/conf/distro/include/default-versions.inc
+++ b/import-layers/yocto-poky/meta/conf/distro/include/default-versions.inc
@@ -2,6 +2,6 @@
 # Default preferred versions
 #
 
-# Force the older version of liberation-fonts until we fix the fontforge issue
-PREFERRED_VERSION_liberation-fonts ?= "1.04"
-
+PREFERRED_VERSION_openssl = "1.0.%"
+PREFERRED_VERSION_openssl-native = "1.0.%"
+PREFERRED_VERSION_nativesdk-openssl = "1.0.%"
diff --git a/import-layers/yocto-poky/meta/conf/distro/include/distro_alias.inc b/import-layers/yocto-poky/meta/conf/distro/include/distro_alias.inc
index 489f5ea..f7c8b4a 100644
--- a/import-layers/yocto-poky/meta/conf/distro/include/distro_alias.inc
+++ b/import-layers/yocto-poky/meta/conf/distro/include/distro_alias.inc
@@ -15,7 +15,6 @@
 DISTRO_PN_ALIAS_pn-atk = "Fedora=atk OpenSuSE=atk"
 DISTRO_PN_ALIAS_pn-avahi-ui = "Ubuntu=avahi-discover Debian=avahi-discover"
 DISTRO_PN_ALIAS_pn-babeltrace = "OSPDT"
-DISTRO_PN_ALIAS_pn-bdwgc = "OSPDT"
 DISTRO_PN_ALIAS_pn-bigreqsproto = "Meego=xorg-x11-proto-bigreqsproto"
 DISTRO_PN_ALIAS_pn-bjam = "OpenSuSE=boost-jam Debian=bjam"
 DISTRO_PN_ALIAS_pn-blktool = "Debian=blktool Mandriva=blktool"
@@ -63,6 +62,7 @@
 DISTRO_PN_ALIAS_pn-core-image-testmaster-initramfs = "OE-Core"
 DISTRO_PN_ALIAS_pn-core-image-weston = "OE-Core"
 DISTRO_PN_ALIAS_pn-core-image-x11 = "OE-Core"
+DISTRO_PN_ALIAS_pn-createrepo-c = "Fedora=createrepo_c Clear=createrepo_c"
 DISTRO_PN_ALIAS_pn-cross-localedef = "OSPDT"
 DISTRO_PN_ALIAS_pn-cryptodev-linux = "OE-Core"
 DISTRO_PN_ALIAS_pn-cryptodev-module = "OE-Core"
@@ -129,6 +129,7 @@
 DISTRO_PN_ALIAS_pn-gstreamer1.0-plugins-base = "Debian=gstreamer1.0-plugins-base Ubuntu=gstreamer1.0-plugins-base"
 DISTRO_PN_ALIAS_pn-gstreamer1.0-plugins-good = "Debian=gstreamer1.0-plugins-good Ubuntu=gstreamer1.0-plugins-bad"
 DISTRO_PN_ALIAS_pn-gstreamer1.0-rtsp-server = "Ubuntu=gstreamer0.10-rtsp Fedora=gstreamer-rtsp"
+DISTRO_PN_ALIAS_pn-gstreamer1.0-vaapi = "Fedora=gstreamer1-vaapi Debian=gstreamer-vaapi Clear=gstreamer-vaapi"
 DISTRO_PN_ALIAS_pn-gtk+ = "Meego=gtk2 Fedora=gtk2 OpenSuSE=gtk2 Ubuntu=gtk+2.0 Mandriva=gtk+2.0 Debian=gtk+2.0"
 DISTRO_PN_ALIAS_pn-gtk+3 = "Ubuntu=gtk+3.0 Debian=gtk+3.0 Fedora=gtk3"
 DISTRO_PN_ALIAS_pn-gtk-doc = "Fedora=gtk-doc Ubuntu=gtk-doc"
@@ -153,7 +154,6 @@
 DISTRO_PN_ALIAS_pn-iproute2 = "OSPDT"
 DISTRO_PN_ALIAS_pn-jpeg = "OpenSuSE=libjpeg Ubuntu=libjpeg62"
 DISTRO_PN_ALIAS_pn-kbproto = "Meego=xorg-x11-proto-kbproto Ubuntu=x11proto-kb-dev Debian=x11proto-kb-dev"
-DISTRO_PN_ALIAS_pn-kconfig-frontends = "OSPDT"
 DISTRO_PN_ALIAS_pn-kernel-devsrc = "Debian=linux-base Ubuntu=linux"
 DISTRO_PN_ALIAS_pn-kernelshark = "Mandriva=kernelshark Ubuntu=kernelshark"
 DISTRO_PN_ALIAS_pn-kern-tools-native = "Windriver"
@@ -195,6 +195,7 @@
 DISTRO_PN_ALIAS_pn-libowl = "Debian=owl OpenedHand"
 DISTRO_PN_ALIAS_pn-libpam = "Meego=pam Fedora=pam OpenSuSE=pam Ubuntu=pam Mandriva=pam Debian=pam"
 DISTRO_PN_ALIAS_pn-libpcre = "Mandriva=libpcre0 Fedora=pcre"
+DISTRO_PN_ALIAS_pn-libpcre2 = "Fedora=pcre2 Debian=pcre2 Clear=pcre2"
 DISTRO_PN_ALIAS_pn-libpng12 = "Debian=libpng12-0 Fedora=libpng"
 DISTRO_PN_ALIAS_pn-libpod-plainer-perl = "OSPDT"
 DISTRO_PN_ALIAS_pn-libsamplerate0 = "Meego=libsamplerate Fedora=libsamplerate OpenSuSE=libsamplerate Ubuntu=libsamplerate Mandriva=libsamplerate Debian=libsamplerate"
@@ -209,6 +210,7 @@
 DISTRO_PN_ALIAS_pn-libusb-compat = "OSPDT"
 DISTRO_PN_ALIAS_pn-libx11 = "Debian=libx11-6 Fedora=libX11 Ubuntu=libx11-6 OpenSuSE=xorg-x11-libX11"
 DISTRO_PN_ALIAS_pn-libxcalibrate = "OSPDT upstream=http://cgit.freedesktop.org/xorg/lib/libXCalibrate/"
+DISTRO_PN_ALIAS_pn-libxfont2 = "Fedora=libXfont2 Clear=libXfont2"
 DISTRO_PN_ALIAS_pn-libxft = "Mandriva=libxft Debian=libxft2 Ubuntu=libxft2"
 DISTRO_PN_ALIAS_pn-libxi = "Ubuntu=libxi Fedora=libXi"
 DISTRO_PN_ALIAS_pn-libxkbcommon = "Fedora=libxkbcommon Debian=libxkbcommon"
@@ -325,14 +327,16 @@
 DISTRO_PN_ALIAS_pn-puzzles = "Debian=sgt-puzzles Fedora=puzzles"
 DISTRO_PN_ALIAS_pn-python3 = "Fedora=python3 Debian=python3.2"
 DISTRO_PN_ALIAS_pn-python3-distribute = "Debian=python3-setuptools Fedora=python3-setuptools"
+DISTRO_PN_ALIAS_pn-python3-iniparse = "Fedora=python-iniparse Debian=python-iniparse"
 DISTRO_PN_ALIAS_pn-python3-pip = "OpenSuSE=python3-pip Debian=python3-pip"
+DISTRO_PN_ALIAS_pn-python3-pycurl = "Fedora=python-pycurl Debian=pycurl"
+DISTRO_PN_ALIAS_pn-python3-pygpgme = "Fedora=python-pygpgme Debian=pygpgme"
 DISTRO_PN_ALIAS_pn-python3-setuptools = "OpenSuSE=python3-setuptools Debian=python3-setuptools"
 DISTRO_PN_ALIAS_pn-python-dbus = "Ubuntu=python-dbus Debian=python-dbus Mandriva=python-dbus"
 DISTRO_PN_ALIAS_pn-python-distribute = "Opensuse=python-setuptools Fedora=python-setuptools"
 DISTRO_PN_ALIAS_pn-python-git = "Debian=python-git Fedora=GitPython"
 DISTRO_PN_ALIAS_pn-python-mako = "Fedora=python-mako Opensuse=python-Mako"
 DISTRO_PN_ALIAS_pn-python-pycairo = "Meego=pycairo Fedora=pycairo Ubuntu=pycairo Debian=pycairo"
-DISTRO_PN_ALIAS_pn-python-pycurl = "Debian=python-pycurl Ubuntu=python-pycurl"
 DISTRO_PN_ALIAS_pn-python-pygobject = "Meego=pygobject2 Fedora=pygobject2 Ubuntu=pygobject Debian=pygobject"
 DISTRO_PN_ALIAS_pn-python-scons = "Fedora=scons OpenSuSE=scons Ubuntu=scons Mandriva=scons Debian=scons"
 DISTRO_PN_ALIAS_pn-python-setuptools = "Mandriva=python-setup OpenSuSE=python-setup-git"
diff --git a/import-layers/yocto-poky/meta/conf/distro/include/maintainers.inc b/import-layers/yocto-poky/meta/conf/distro/include/maintainers.inc
new file mode 100644
index 0000000..38789b2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/conf/distro/include/maintainers.inc
@@ -0,0 +1,827 @@
+# Yocto Project / OpenEmbedded-Core (OE-Core) Maintainers File
+#
+# This file contains a list of recipe maintainers.
+#
+# Please submit any patches against recipes in meta to the 
+# OE-Core mail list (openembedded-core@lists.openembedded.org)
+# For recipes in meta-yocto please use the Poky list (poky@yoctoproject.org)
+#
+# If you have problems with or questions about a particular recipe, feel
+# free to contact the maintainer directly (cc:ing the appropriate mailing list
+# puts it in the archive and helps other people who might have the same
+# questions in the future), but please try to do the following first:
+#
+#  - look in the Yocto Project Bugzilla
+#    (http://bugzilla.yoctoproject.org/) to see if a problem has
+#    already been reported
+#
+# - look through recent entries of the appropriate mailing list archives
+#   (http://lists.linuxtogo.org/pipermail/openembedded-core or
+#    https://lists.yoctoproject.org/pipermail/poky/) to see if other
+#   people have run into similar problems or had similar questions
+#   answered.
+#
+# The format is as a bitbake variable override for each recipe
+#
+#	RECIPE_MAINTAINER_pn-<recipe name> = "Full Name <address@domain>"
+#
+# Please keep this list in alphabetical order.
+#
+RECIPE_MAINTAINER_pn-acl = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-acpica = "Fathi Boudra <fathi.boudra@linaro.org>"
+RECIPE_MAINTAINER_pn-acpid = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-acpitests = "Fathi Boudra <fathi.boudra@linaro.org>"
+RECIPE_MAINTAINER_pn-adwaita-icon-theme = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-alsa-lib = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-alsa-plugins = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-alsa-state = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-alsa-tools = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-alsa-utils = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-alsa-utils-scripts = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-apmd = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-apr = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-apr-util = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-apt = "Aníbal Limón <limon.anibal@gmail.com>"
+RECIPE_MAINTAINER_pn-apt-native = "Aníbal Limón <limon.anibal@gmail.com>"
+RECIPE_MAINTAINER_pn-argp-standalone = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-asciidoc = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-aspell = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-assimp = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-at = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-at-spi2-atk = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-at-spi2-core = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-atk = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-attr = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-autoconf = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-autoconf-archive = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-autogen-native = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-automake = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-avahi = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-avahi-ui = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-babeltrace = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-base-files = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-base-passwd = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-bash = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-bash-completion = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-bc = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-bdwgc = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-beecrypt = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-bigreqsproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-bind = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-binutils = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-binutils-cross = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-binutils-cross-canadian = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-binutils-crosssdk = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-bison = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-bjam-native = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-blktool = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-blktrace = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-bluez5 = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-bmap-tools = "Ed Bartosh <ed.bartosh@linux.intel.com>"
+RECIPE_MAINTAINER_pn-boost = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-bootchart2 = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-bsd-headers = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-btrfs-tools = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-build-appliance-image = "Cristian Iorga <cristian.iorga@intel.com>"
+RECIPE_MAINTAINER_pn-build-compare = "Randy Witt <randy.e.witt@linux.intel.com>"
+RECIPE_MAINTAINER_pn-build-sysroots = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-builder = "Cristian Iorga <cristian.iorga@intel.com>"
+RECIPE_MAINTAINER_pn-buildtools-tarball = "Cristian Iorga <cristian.iorga@intel.com>"
+RECIPE_MAINTAINER_pn-busybox = "Armin Kuster <akuster808@gmail.com>"
+RECIPE_MAINTAINER_pn-byacc = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-bzip2 = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-ca-certificates = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-cairo = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-calibrateproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-cantarell-fonts = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-ccache = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-cdrtools-native = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-chkconfig = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-chkconfig-alternatives-native = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-chrpath = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-clutter-1.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-clutter-gst-3.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-clutter-gtk-1.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-cmake = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
+RECIPE_MAINTAINER_pn-cmake-native = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
+RECIPE_MAINTAINER_pn-cogl-1.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-compositeproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-connman = "Changhyeok Bae <changhyeok.bae@lge.com>"
+RECIPE_MAINTAINER_pn-connman-conf = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-connman-gnome = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-console-tools = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-consolekit = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-coreutils = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-cpio = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-cracklib = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-createrepo-c = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-cronie = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-cross-localedef-native = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-cryptodev-linux = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-cryptodev-module = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-cryptodev-tests = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-cups = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-curl = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-cve-check-tool = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-cwautomacros = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-damageproto = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-db = "Mark Hatle <mark.hatle@windriver.com>"
+RECIPE_MAINTAINER_pn-dbus = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-dbus-glib = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-dbus-test = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-dbus-wait = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-debianutils = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-depmodwrapper-cross = "Mark Hatle <mark.hatle@windriver.com>"
+RECIPE_MAINTAINER_pn-desktop-file-utils-native = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-dhcp = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-diffstat = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-diffutils = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-directfb = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-directfb-examples = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-distcc = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-distcc-config = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-dmidecode = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-dmxproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-dnf = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-docbook-xml-dtd4 = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-docbook-xsl-stylesheets = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-dosfstools = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-dpkg = "Aníbal Limón <limon.anibal@gmail.com>"
+RECIPE_MAINTAINER_pn-dri2proto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-dri3proto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-dropbear = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-dtc = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-e2fsprogs = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-ed = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-eee-acpi-scripts = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-efilinux = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-eglinfo-fb = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-eglinfo-x11 = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-elfutils = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-enchant = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-encodings = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-epiphany = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-ethtool = "Changhyeok Bae <changhyeok.bae@lge.com>"
+RECIPE_MAINTAINER_pn-eudev = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-expat = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-expect = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-ffmpeg = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-file = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-findutils = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-fixesproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-flac = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-flex = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-font-alias = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-font-util = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-fontconfig = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-fontsproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-foomatic-filters = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-formfactor = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-freetype = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-fstests = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-fts = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gawk = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-gcc = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gcc-cross = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gcc-cross-canadian = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gcc-cross-initial = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gcc-crosssdk = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gcc-crosssdk-initial = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gcc-runtime = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gcc-sanitizers = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gcc-source = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gccmakedep = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gconf = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-gcr = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-gdb = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gdb-cross = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gdb-cross-canadian = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gdbm = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-gdk-pixbuf = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-gettext = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-gettext-minimal-native = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-ghostscript = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-git = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-glew = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-glib-2.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-glib-networking = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-glibc = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-glibc-initial = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-glibc-locale = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-glibc-mtrace = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-glibc-scripts = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-glproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-gmp = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gnome-common = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-gnome-desktop-testing = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-gnome-desktop3 = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-gnome-doc-utils = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-gnome-themes-standard = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-gnu-config = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-gnu-efi = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-gnupg = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-gnutls = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-go = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-go-cross = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-go-dep = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
+RECIPE_MAINTAINER_pn-go-helloworld = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-go-native = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-gobject-introspection = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-gperf = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-gpgme = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-gptfdisk = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-grep = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-groff = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-grub = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-grub-efi = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-gsettings-desktop-schemas = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-gst-player = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-gstreamer1.0 = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-gstreamer1.0-libav = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-gstreamer1.0-omx = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-gstreamer1.0-meta-base = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-gstreamer1.0-plugins-bad = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-gstreamer1.0-plugins-base = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-gstreamer1.0-plugins-good = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-gstreamer1.0-plugins-ugly = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-gstreamer1.0-python = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-gstreamer1.0-rtsp-server = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-gstreamer1.0-vaapi = "Wei Tee Ng <wei.tee.ng@intel.com>"
+RECIPE_MAINTAINER_pn-gtk+ = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-gtk+3 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-gtk-doc = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-gtk-engines = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-gtk-icon-utils-native = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-gtk-sato-engine = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-guile = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-guilt-native = "Bruce Ashfield <bruce.ashfield@windriver.com>"
+RECIPE_MAINTAINER_pn-gummiboot = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-gzip = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-harfbuzz = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-hdparm = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-help2man-native = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-hicolor-icon-theme = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-hostap-conf = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-hostap-utils = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-hwlatdetect = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-i2c-tools = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-icecc-create-env-native = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-icon-naming-utils = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-icu = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-ifupdown = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-init-ifupdown = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-initramfs-boot = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
+RECIPE_MAINTAINER_pn-initramfs-framework = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
+RECIPE_MAINTAINER_pn-initramfs-live-boot = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-initramfs-live-install = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-initramfs-live-install-efi = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-initramfs-live-install-efi-testfs = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-initramfs-live-install-testfs = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-initscripts = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-inputproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-intltool = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-iproute2 = "Changhyeok Bae <changhyeok.bae@lge.com>"
+RECIPE_MAINTAINER_pn-iptables = "Changhyeok Bae <changhyeok.bae@lge.com>"
+RECIPE_MAINTAINER_pn-iputils = "Changhyeok Bae <changhyeok.bae@lge.com>"
+RECIPE_MAINTAINER_pn-irda-utils = "Changhyeok Bae <changhyeok.bae@lge.com>"
+RECIPE_MAINTAINER_pn-iso-codes = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-iw = "Changhyeok Bae <changhyeok.bae@lge.com>"
+RECIPE_MAINTAINER_pn-libjpeg-turbo = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-json-c = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-json-glib = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-kbd = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-kbproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-kconfig-frontends = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-kern-tools-native = "Bruce Ashfield <bruce.ashfield@windriver.com>"
+RECIPE_MAINTAINER_pn-kernelshark = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-kernel-devsrc = "Bruce Ashfield <bruce.ashfield@windriver.com>"
+RECIPE_MAINTAINER_pn-kexec-tools = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-keymaps = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-kmod = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-kmod-native = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-kmscube = "Carlos Rafael Giani <dv@pseudoterminal.org>"
+RECIPE_MAINTAINER_pn-l3afpad = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-lame = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-latencytop = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-ldconfig-native = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-less = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-liba52 = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-libacpi = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libaio = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libarchive = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
+RECIPE_MAINTAINER_pn-libart-lgpl = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libassuan = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libatomic-ops = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libav = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libbsd = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-libcap = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-libcap-ng = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-libcgroup = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libcheck = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-libclass-isa-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libcomps = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libconvert-asn1-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libcroco = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libdaemon = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libdmx = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libdnf = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libdrm = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
+RECIPE_MAINTAINER_pn-libdumpvalue-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libenv-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libepoxy = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-liberation-fonts = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-liberror-perl = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-libevdev = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libevent = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libexif = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libfakekey = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libffi = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libfile-checktree-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libfm = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libfm-extra = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libfontenc = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libgcc = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-libgcc-initial = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-libgcrypt = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-libgfortran = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-libglade = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libglu = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libgpg-error = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-libgudev = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libi18n-collate-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libical = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libice = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libiconv = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libid3tag = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-libidn = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libinput = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libjson = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libksba = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libmatchbox = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libmnl = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-libmpc = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-libnewt = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-libnewt-python = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-libnfsidmap = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libnl = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libnotify = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libnsl2 = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-libnss-mdns = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libogg = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libomxil = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libowl = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libpam = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libpcap = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libpciaccess = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libpcre = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-libpcre2 = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libpfm4 = "Matthew McClintock <msm@freescale.com>"
+RECIPE_MAINTAINER_pn-libpng = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libpng12 = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libpod-plainer-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libproxy = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libpthread-stubs = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-librepo = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-librsvg = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libsamplerate0 = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-libsdl = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-libsdl2 = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-libsecret = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libsm = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libsndfile1 = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-libsolv = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libsoup-2.4 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libtasn1 = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libtheora = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libtimedate-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libtirpc = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libtool = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-libtool-cross = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-libtool-native = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-libunistring = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-libunwind = "Bruce Ashfield <bruce.ashfield@windriver.com>"
+RECIPE_MAINTAINER_pn-liburcu = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-liburi-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libusb-compat = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libusb1 = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libuser = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-libva = "Wei Tee Ng <wei.tee.ng@intel.com>"
+RECIPE_MAINTAINER_pn-libva-utils = "Wei Tee Ng <wei.tee.ng@intel.com>"
+RECIPE_MAINTAINER_pn-libvorbis = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-libwebp = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libx11 = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libx11-diet = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxau = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxcalibrate = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxcb = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxcomposite = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxcursor = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxdamage = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxdmcp = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxext = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxfixes = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxfont = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxfont2 = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxft = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxi = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxinerama = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxkbcommon = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxkbfile = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxml-namespacesupport-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libxml-parser-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libxml-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libxml-sax-base-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libxml-sax-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libxml-simple-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-libxml2 = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-libxmu = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxpm = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxrandr = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxrender = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxres = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxscrnsaver = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxsettings-client = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-libxshmfence = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxslt = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-libxt = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxtst = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxv = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxvmc = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxxf86dga = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxxf86misc = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libxxf86vm = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-libyaml = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-lighttpd = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-linux-dummy = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-linux-firmware = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
+RECIPE_MAINTAINER_pn-linux-libc-headers = "Bruce Ashfield <bruce.ashfield@windriver.com>"
+RECIPE_MAINTAINER_pn-linux-yocto = "Bruce Ashfield <bruce.ashfield@windriver.com>"
+RECIPE_MAINTAINER_pn-linux-yocto-dev = "Bruce Ashfield <bruce.ashfield@windriver.com>"
+RECIPE_MAINTAINER_pn-linux-yocto-rt = "Bruce Ashfield <bruce.ashfield@windriver.com>"
+RECIPE_MAINTAINER_pn-linux-yocto-tiny = "Bruce Ashfield <bruce.ashfield@windriver.com>"
+RECIPE_MAINTAINER_pn-llvm = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-logrotate = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-lrzsz = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-lsb = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-lsbinitscripts = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-lsbtest = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-lsof = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-ltp = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-lttng-modules = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-lttng-tools = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-lttng-ust = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-lz4 = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-lzo = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-lzip = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-lzop = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-m4 = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-m4-native = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-mailx = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-make = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-makedepend = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-makedevs = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-man = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-man-pages = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-matchbox-config-gtk = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-matchbox-desktop = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-matchbox-desktop-sato = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-matchbox-keyboard = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-matchbox-panel-2 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-matchbox-session = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-matchbox-session-sato = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-matchbox-terminal = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-matchbox-theme-sato = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-matchbox-wm = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-mc = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-mdadm = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-menu-cache = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-mesa = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
+RECIPE_MAINTAINER_pn-mesa-demos = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
+RECIPE_MAINTAINER_pn-mesa-gl = "Otavio Salvador <otavio.salvador@ossystems.com.br>"
+RECIPE_MAINTAINER_pn-meta-environment = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-meta-environment-extsdk = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-meta-extsdk-toolchain = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-meta-ide-support = "Cristian Iorga <cristian.iorga@intel.com>"
+RECIPE_MAINTAINER_pn-meta-toolchain = "Cristian Iorga <cristian.iorga@intel.com>"
+RECIPE_MAINTAINER_pn-meta-world-pkgdata = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-mingetty = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-mini-x-session = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-minicom = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-mkelfimage = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-mkfontdir = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-mkfontscale = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-mklibs-native = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-mktemp = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-mmc-utils = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-mobile-broadband-provider-info = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-modutils-initscripts = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-mpeg2dec = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-mpfr = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-mpg123 = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-msmtp = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-mtd-utils = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-mtdev = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-mtools = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-musl = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-mx-1.0 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-nasm = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-nativesdk-buildtools-perl-dummy = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-nativesdk-libtool = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-nativesdk-packagegroup-sdk-host = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-nativesdk-qemu-helper = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-nativesdk-postinst-intercept = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-ncurses = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-neard = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-neon = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-net-tools = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-netbase = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-nettle = "Armin Kuster <akuster808@gmail.com>"
+RECIPE_MAINTAINER_pn-nfs-export-root = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-nfs-utils = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-ninja = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-npth = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-nspr = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-nss = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-nss-myhostname = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-ofono = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-oh-puzzles = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-openssh = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-openssl = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-opkg = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
+RECIPE_MAINTAINER_pn-opkg-arch-config = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
+RECIPE_MAINTAINER_pn-opkg-keyrings = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
+RECIPE_MAINTAINER_pn-opkg-utils = "Alejandro del Castillo <alejandro.delcastillo@ni.com>"
+RECIPE_MAINTAINER_pn-oprofile = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-orc = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-os-release = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-ossp-uuid = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-ovmf = "Ricardo Neri <ricardo.neri-calderon@linux.intel.com>"
+RECIPE_MAINTAINER_pn-ovmf-shell-image = "Ricardo Neri <ricardo.neri-calderon@linux.intel.com>"
+RECIPE_MAINTAINER_pn-p11-kit = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-package-index = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-pango = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-parted = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-patch = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-patchelf = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-pax = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-pax-utils = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-pbzip2 = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-pciutils = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-pcmanfm = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-pcmciautils = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-perf = "Bruce Ashfield <bruce.ashfield@windriver.com>"
+RECIPE_MAINTAINER_pn-perl = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-perl-native = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-piglit = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-pigz = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-pinentry = "Armin Kuster <akuster808@gmail.com>"
+RECIPE_MAINTAINER_pn-pixman = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-pixz = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-pkgconfig = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-pm-utils = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-pointercal = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-pointercal-xinput = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-pong-clock = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-popt = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-portmap = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-powertop = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-ppp = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-ppp-dialin = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-prelink = "Mark Hatle <mark.hatle@windriver.com>"
+RECIPE_MAINTAINER_pn-presentproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-procps = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-pseudo = "Mark Hatle <mark.hatle@windriver.com>"
+RECIPE_MAINTAINER_pn-psmisc = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-psplash = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-ptest-runner = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-pulseaudio = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-pulseaudio-client-conf-sato = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-puzzles = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-python = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-async = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-distribute = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-git = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-gitdb = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-imaging = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-mako = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-native = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-nose = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-numpy = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-pexpect = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-ptyprocess = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-pycairo = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-pycurl = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-pygtk = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-pyrex = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-scons = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-scons-native = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-setuptools = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-six = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-smartpm = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python-smmap = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3 = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-async = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-dbus = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-distribute = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-docutils = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-git = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-gitdb = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-iniparse = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-mako = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-native = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-nose = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-numpy = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-pip = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-pycairo = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-pygobject = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-setuptools = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-six = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-python3-smmap = "Jose Lamego <jose.a.lamego@linux.intel.com>"
+RECIPE_MAINTAINER_pn-qemu = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-qemu-helper-native = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-qemuwrapper-cross = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-quilt = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-quilt-native = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-quota = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-randrproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-re2c = "Khem Raj <raj.khem@gmail.com>"
+RECIPE_MAINTAINER_pn-readline = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-recordproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-remake = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-renderproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-resolvconf = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-resourceproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-rgb = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-rpcbind = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-rng-tools = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-rpm = "Mark Hatle <mark.hatle@windriver.com>"
+RECIPE_MAINTAINER_pn-rpmresolve = "Mark Hatle <mark.hatle@windriver.com>"
+RECIPE_MAINTAINER_pn-rsync = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-rt-tests = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-ruby = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-run-postinsts = "Cristian Iorga <cristian.iorga@intel.com>"
+RECIPE_MAINTAINER_pn-rxvt-unicode = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-sato-icon-theme = "Alejandro Hernandez <alejandro.hernandez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-sato-screenshot = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-sbc = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-screen = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-scrnsaverproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-sed = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-serf = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-setserial = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-settings-daemon = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-shadow = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-shadow-securetty = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-shadow-sysroot = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-shared-mime-info = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-shutdown-desktop = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-signing-keys = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-slang = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-socat = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-source-highlight = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-speex = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-speexdsp = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-sqlite3 = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-squashfs-tools = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-startup-notification = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-stat = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-strace = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-stress = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-subversion = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-sudo = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-swabber-native = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-swig = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-sysfsutils = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-sysklogd = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-syslinux = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-sysprof = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-sysstat = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-systemd = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-systemd-boot = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-systemd-bootchart = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-systemd-compat-units = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-systemd-serialgetty = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-systemd-systemctl-native = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-systemtap = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-systemtap-uprobes = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-sysvinit = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-sysvinit-inittab = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-taglib = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-tar = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-tcf-agent = "Randy Witt <randy.e.witt@linux.intel.com>"
+RECIPE_MAINTAINER_pn-tcl = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-tcp-wrappers = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-testexport-tarball = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-texi2html = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-texinfo = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-texinfo-dummy-native = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-tiff = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-time = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-tiny-init = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-trace-cmd = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-tremor = "Tanu Kaskinen <tanuk@iki.fi>"
+RECIPE_MAINTAINER_pn-tslib = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-ttf-bitstream-vera = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-tzcode-native = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-tzdata = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-u-boot = "Marek Vasut <marek.vasut@gmail.com>"
+RECIPE_MAINTAINER_pn-u-boot-fw-utils = "Marek Vasut <marek.vasut@gmail.com>"
+RECIPE_MAINTAINER_pn-u-boot-mkimage = "Marek Vasut <marek.vasut@gmail.com>"
+RECIPE_MAINTAINER_pn-ubootchart = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-udev-extraconf = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-unfs3 = "Randy Witt <randy.e.witt@linux.intel.com>"
+RECIPE_MAINTAINER_pn-unifdef = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-uninative-tarball = "Richard Purdie <richard.purdie@linuxfoundation.org>"
+RECIPE_MAINTAINER_pn-unzip = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-update-rc.d = "Ross Burton <ross.burton@intel.com>"
+RECIPE_MAINTAINER_pn-usbinit = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-usbutils = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-util-linux = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-util-macros = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-v86d = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-vala = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-valgrind = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-videoproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-volatile-binds = "Chen Qi <Qi.Chen@windriver.com>"
+RECIPE_MAINTAINER_pn-vte = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-vulkan = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-vulkan-demos = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-waffle = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-watchdog = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-watchdog-config = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-wayland = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-wayland-protocols = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-webkitgtk = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-weston = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-weston-init = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-wget = "Robert Yang <liezhi.yang@windriver.com>"
+RECIPE_MAINTAINER_pn-which = "Leonardo Sandoval <leonardo.sandoval.gonzalez@linux.intel.com>"
+RECIPE_MAINTAINER_pn-wic-tools = "Ed Bartosh <ed.bartosh@linux.intel.com>"
+RECIPE_MAINTAINER_pn-wireless-tools = "Changhyeok Bae <changhyeok.bae@lge.com>"
+RECIPE_MAINTAINER_pn-wpa-supplicant = "Changhyeok Bae <changhyeok.bae@lge.com>"
+RECIPE_MAINTAINER_pn-x11-common = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-x11perf = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-x264 = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-xauth = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xcb-proto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xcb-util = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xcb-util-image = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xcb-util-keysyms = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xcb-util-renderutil = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xcb-util-wm = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xcmiscproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xcursor-transparent-theme = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xdg-utils = "Jussi Kukkonen <jussi.kukkonen@intel.com>"
+RECIPE_MAINTAINER_pn-xdpyinfo = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xev = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xextproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xeyes = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-input-evdev = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-input-keyboard = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-input-libinput = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-input-mouse = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-input-synaptics = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-input-vmmouse = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-video-cirrus = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-video-fbdev = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-video-intel = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-video-omap = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-video-omapfb = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-video-vesa = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86-video-vmware = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86dgaproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86driproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86miscproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xf86vidmodeproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xhost = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xineramaproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xinetd = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-xinit = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xinput = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xinput-calibrator = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xkbcomp = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xkeyboard-config = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xmlto = "Hongxu Jia <hongxu.jia@windriver.com>"
+RECIPE_MAINTAINER_pn-xmodmap = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xorg-minimal-fonts = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xprop = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xproto = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xrandr = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xrestop = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xserver-nodm-init = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xserver-xf86-config = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xserver-xorg = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xset = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xtrans = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xtscal = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xuser-account = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xvideo-tests = "Maxin B. John <maxin.john@intel.com>"
+RECIPE_MAINTAINER_pn-xvinfo = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xwininfo = "Armin Kuster <akuster@mvista.com>"
+RECIPE_MAINTAINER_pn-xz = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-yasm = "Yi Zhao <yi.zhao@windriver.com>"
+RECIPE_MAINTAINER_pn-zip = "Denys Dmytriyenko <denys@ti.com>"
+RECIPE_MAINTAINER_pn-zisofs-tools-native = "Alexander Kanavin <alexander.kanavin@intel.com>"
+RECIPE_MAINTAINER_pn-zlib = "Denys Dmytriyenko <denys@ti.com>"
diff --git a/import-layers/yocto-poky/meta/conf/distro/include/no-static-libs.inc b/import-layers/yocto-poky/meta/conf/distro/include/no-static-libs.inc
index f8d8c09..7c165c7 100644
--- a/import-layers/yocto-poky/meta/conf/distro/include/no-static-libs.inc
+++ b/import-layers/yocto-poky/meta/conf/distro/include/no-static-libs.inc
@@ -25,6 +25,9 @@
 DISABLE_STATIC_pn-openssl = ""
 DISABLE_STATIC_pn-openssl-native = ""
 DISABLE_STATIC_pn-nativesdk-openssl = ""
+DISABLE_STATIC_pn-openssl10 = ""
+DISABLE_STATIC_pn-openssl10-native = ""
+DISABLE_STATIC_pn-nativesdk-openssl10 = ""
 # libssp-static-dev included in build-appliance
 DISABLE_STATIC_pn-gcc-runtime = ""
 # libusb1-native is used to build static dfu-util-native
diff --git a/import-layers/yocto-poky/meta/conf/distro/include/security_flags.inc b/import-layers/yocto-poky/meta/conf/distro/include/security_flags.inc
index e162abe..ab2062b 100644
--- a/import-layers/yocto-poky/meta/conf/distro/include/security_flags.inc
+++ b/import-layers/yocto-poky/meta/conf/distro/include/security_flags.inc
@@ -1,10 +1,12 @@
-# Setup extra CFLAGS and LDFLAGS which have 'security' benefits. These 
+# Setup extra CFLAGS and LDFLAGS which have 'security' benefits. These
 # don't work universally, there are recipes which can't use one, the other
 # or both so a blacklist is maintained here. The idea would be over
 # time to reduce this list to nothing.
 # From a Yocto Project perspective, this file is included and tested
 # in the DISTRO="poky-lsb" configuration.
 
+GCCPIE ?= "--enable-default-pie"
+
 # _FORTIFY_SOURCE requires -O1 or higher, so disable in debug builds as they use
 # -O0 which then results in a compiler warning.
 lcl_maybe_fortify = "${@base_conditional('DEBUG_BUILD','1','','-D_FORTIFY_SOURCE=2',d)}"
@@ -12,96 +14,51 @@
 # Error on use of format strings that represent possible security problems
 SECURITY_STRINGFORMAT ?= "-Wformat -Wformat-security -Werror=format-security"
 
-SECURITY_CFLAGS ?= "-fstack-protector-strong -pie -fpie ${lcl_maybe_fortify} ${SECURITY_STRINGFORMAT}"
+# Inject PIE flags into the compiler flags if gcc itself is not configured
+# with them; this is especially useful with external toolchains.
+SECURITY_PIE_CFLAGS ?= "${@'' if '${GCCPIE}' else '-pie -fPIE'}"
+
+SECURITY_NOPIE_CFLAGS ?= "-no-pie -fno-PIE"
+
+SECURITY_CFLAGS ?= "-fstack-protector-strong ${SECURITY_PIE_CFLAGS} ${lcl_maybe_fortify} ${SECURITY_STRINGFORMAT}"
 SECURITY_NO_PIE_CFLAGS ?= "-fstack-protector-strong ${lcl_maybe_fortify} ${SECURITY_STRINGFORMAT}"
 
 SECURITY_LDFLAGS ?= "-fstack-protector-strong -Wl,-z,relro,-z,now"
 SECURITY_X_LDFLAGS ?= "-fstack-protector-strong -Wl,-z,relro"
 
 # powerpc does not get on with pie for reasons not looked into as yet
-SECURITY_CFLAGS_powerpc = "-fstack-protector-strong ${lcl_maybe_fortify}"
-# Deal with ppc specific linker failures when using the cflags
-SECURITY_CFLAGS_pn-dbus_powerpc = ""
-SECURITY_CFLAGS_pn-dbus-ptest_powerpc = ""
-SECURITY_CFLAGS_pn-libmatchbox_powerpc = ""
+SECURITY_CFLAGS_powerpc = "-fstack-protector-strong ${lcl_maybe_fortify} ${SECURITY_NOPIE_CFLAGS}"
+SECURITY_CFLAGS_pn-libgcc_powerpc = ""
+GCCPIE_powerpc = ""
 
 # arm specific security flag issues
-SECURITY_CFLAGS_pn-lttng-tools_arm = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-aspell = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-beecrypt = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-coreutils = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-cups = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-db = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-directfb = "${SECURITY_NO_PIE_CFLAGS}"
 SECURITY_CFLAGS_pn-glibc = ""
 SECURITY_CFLAGS_pn-glibc-initial = ""
-SECURITY_CFLAGS_pn-elfutils = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-enchant = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-expect = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-flac = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-gcc = "${SECURITY_NO_PIE_CFLAGS}"
 SECURITY_CFLAGS_pn-gcc-runtime = ""
-SECURITY_CFLAGS_pn-gcc-sanitizers = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-gdb = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-gmp = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-gnutls = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-gpgme = "${SECURITY_NO_PIE_CFLAGS}"
 SECURITY_CFLAGS_pn-grub = ""
 SECURITY_CFLAGS_pn-grub-efi = ""
 SECURITY_CFLAGS_pn-grub-efi-native = ""
 SECURITY_CFLAGS_pn-grub-efi-x86-native = ""
 SECURITY_CFLAGS_pn-grub-efi-i586-native = ""
 SECURITY_CFLAGS_pn-grub-efi-x86-64-native = ""
-SECURITY_CFLAGS_pn-gstreamer1.0-plugins-bad = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-gstreamer1.0-plugins-good = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-harfbuzz = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-kexec-tools = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-iptables = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-libaio = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-libcap = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-libgcc = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-libid3tag = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-libnewt-python = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-libglu = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-libpcap = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-libpcre = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-libproxy = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-mesa = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-mesa-gl = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-openssl = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-opensp = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-ppp = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-python = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-python-pycurl = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-python-numpy = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-python3-numpy = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-python3-pycairo = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-python3-pycurl = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-python3-pygpgme = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-python3 = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-syslinux = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-slang = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-source-highlight = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-tcl = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-tiff = "${SECURITY_NO_PIE_CFLAGS}"
-SECURITY_CFLAGS_pn-uclibc = ""
-SECURITY_CFLAGS_pn-uclibc-initial = ""
-SECURITY_CFLAGS_pn-valgrind = ""
-SECURITY_CFLAGS_pn-zlib = "${SECURITY_NO_PIE_CFLAGS}"
+
+SECURITY_CFLAGS_pn-mkelfimage_x86 = ""
+
+SECURITY_CFLAGS_pn-valgrind = "${SECURITY_NOPIE_CFLAGS}"
+SECURITY_LDFLAGS_pn-valgrind = ""
+SECURITY_CFLAGS_pn-sysklogd = "${SECURITY_NOPIE_CFLAGS}"
+SECURITY_LDFLAGS_pn-sysklogd = ""
 
 # Recipes which fail to compile when elevating -Wformat-security to an error
 SECURITY_STRINGFORMAT_pn-busybox = ""
 SECURITY_STRINGFORMAT_pn-gcc = ""
-SECURITY_STRINGFORMAT_pn-oh-puzzles = ""
 
-TARGET_CFLAGS_append_class-target = " ${SECURITY_CFLAGS}"
+TARGET_CC_ARCH_append_class-target = " ${SECURITY_CFLAGS}"
 TARGET_LDFLAGS_append_class-target = " ${SECURITY_LDFLAGS}"
 
 SECURITY_LDFLAGS_remove_pn-gcc-runtime = "-fstack-protector-strong"
 SECURITY_LDFLAGS_remove_pn-glibc = "-fstack-protector-strong"
 SECURITY_LDFLAGS_remove_pn-glibc-initial = "-fstack-protector-strong"
-SECURITY_LDFLAGS_remove_pn-uclibc = "-fstack-protector-strong"
-SECURITY_LDFLAGS_remove_pn-uclibc-initial = "-fstack-protector-strong"
 SECURITY_LDFLAGS_pn-xf86-video-fbdev = "${SECURITY_X_LDFLAGS}"
 SECURITY_LDFLAGS_pn-xf86-video-intel = "${SECURITY_X_LDFLAGS}"
 SECURITY_LDFLAGS_pn-xf86-video-omapfb = "${SECURITY_X_LDFLAGS}"
@@ -110,4 +67,7 @@
 SECURITY_LDFLAGS_pn-xf86-video-vmware = "${SECURITY_X_LDFLAGS}"
 SECURITY_LDFLAGS_pn-xserver-xorg = "${SECURITY_X_LDFLAGS}"
 
-TARGET_CC_ARCH_append_pn-binutils = " ${SECURITY_CFLAGS} ${SELECTED_OPTIMIZATION}"
+TARGET_CC_ARCH_append_pn-binutils = " ${SELECTED_OPTIMIZATION}"
+TARGET_CC_ARCH_append_pn-gcc = " ${SELECTED_OPTIMIZATION}"
+TARGET_CC_ARCH_append_pn-gdb = " ${SELECTED_OPTIMIZATION}"
+TARGET_CC_ARCH_append_pn-perf = " ${SELECTED_OPTIMIZATION}"
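The GCCPIE knob introduced above moves PIE handling into gcc's own configuration; the SECURITY_PIE_CFLAGS expression only falls back to explicit flags when GCCPIE is empty. A minimal Python sketch of that conditional (the function name is illustrative, not part of the metadata):

    def security_pie_cflags(gccpie="--enable-default-pie"):
        # When gcc is configured with --enable-default-pie, no extra PIE
        # flags are injected; otherwise they are added to SECURITY_CFLAGS.
        return "" if gccpie else "-pie -fPIE"

    security_pie_cflags()    # -> ""
    security_pie_cflags("")  # -> "-pie -fPIE"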
diff --git a/import-layers/yocto-poky/meta/conf/distro/include/tclibc-musl.inc b/import-layers/yocto-poky/meta/conf/distro/include/tclibc-musl.inc
index 3d3f6ac..9ae2a93 100644
--- a/import-layers/yocto-poky/meta/conf/distro/include/tclibc-musl.inc
+++ b/import-layers/yocto-poky/meta/conf/distro/include/tclibc-musl.inc
@@ -14,6 +14,8 @@
 PREFERRED_PROVIDER_virtual/nativesdk-libintl ?= "nativesdk-glibc"
 PREFERRED_PROVIDER_virtual/nativesdk-libiconv ?= "nativesdk-glibc"
 
+DISTRO_FEATURES_BACKFILL_CONSIDERED += "ldconfig"
+
 #USE_NLS ?= "no"
 
 CXXFLAGS += "-fvisibility-inlines-hidden"
diff --git a/import-layers/yocto-poky/meta/conf/distro/include/tcmode-default.inc b/import-layers/yocto-poky/meta/conf/distro/include/tcmode-default.inc
index 3db16e8..1787a82 100644
--- a/import-layers/yocto-poky/meta/conf/distro/include/tcmode-default.inc
+++ b/import-layers/yocto-poky/meta/conf/distro/include/tcmode-default.inc
@@ -22,13 +22,12 @@
 PREFERRED_PROVIDER_virtual/nativesdk-${SDK_PREFIX}libc-initial ?= "nativesdk-glibc-initial"
 PREFERRED_PROVIDER_virtual/gettext ??= "gettext"
 
-GCCVERSION ?= "6.3%"
+GCCVERSION ?= "7.%"
 SDKGCCVERSION ?= "${GCCVERSION}"
-BINUVERSION ?= "2.28%"
-GDBVERSION ?= "7.12%"
-GLIBCVERSION ?= "2.25"
-UCLIBCVERSION ?= "1.0%"
-LINUXLIBCVERSION ?= "4.10%"
+BINUVERSION ?= "2.29%"
+GDBVERSION ?= "8.0%"
+GLIBCVERSION ?= "2.26%"
+LINUXLIBCVERSION ?= "4.12%"
 
 PREFERRED_VERSION_gcc ?= "${GCCVERSION}"
 PREFERRED_VERSION_gcc-cross-${TARGET_ARCH} ?= "${GCCVERSION}"
@@ -48,7 +47,7 @@
 PREFERRED_VERSION_binutils ?= "${BINUVERSION}"
 PREFERRED_VERSION_binutils-native ?= "${BINUVERSION}"
 PREFERRED_VERSION_binutils-cross-${TARGET_ARCH} ?= "${BINUVERSION}"
-PREFERRED_VERSION_binutils-crosssdk-${SDK_ARCH} ?= "${BINUVERSION}"
+PREFERRED_VERSION_binutils-crosssdk-${SDK_SYS} ?= "${BINUVERSION}"
 PREFERRED_VERSION_binutils-cross-canadian-${TRANSLATED_TARGET_ARCH} ?= "${BINUVERSION}"
 PREFERRED_VERSION_gdb ?= "${GDBVERSION}"
 PREFERRED_VERSION_gdb-cross-${TARGET_ARCH} ?= "${GDBVERSION}"
@@ -64,8 +63,6 @@
 PREFERRED_VERSION_glibc-initial            ?= "${GLIBCVERSION}"
 PREFERRED_VERSION_nativesdk-glibc-initial  ?= "${GLIBCVERSION}"
 PREFERRED_VERSION_cross-localedef-native   ?= "${GLIBCVERSION}"
-PREFERRED_VERSION_uclibc                   ?= "${UCLIBCVERSION}"
-PREFERRED_VERSION_uclibc-initial           ?= "${UCLIBCVERSION}"
 # don't use version earlier than 1.4 for gzip-native, as it's necessary for
 # some packages using an archive format incompatible with earlier gzip
 PREFERRED_VERSION_gzip-native ?= "1.8"
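The toolchain defaults above rely on BitBake's trailing "%" wildcard, so "7.%" selects any gcc 7.x recipe and "2.29%" any binutils 2.29.x. A rough illustration of that matching rule (not the actual BitBake implementation):

    def preferred_version_matches(preferred, candidate):
        # A trailing '%' matches any suffix; otherwise the match is exact.
        if preferred.endswith("%"):
            return candidate.startswith(preferred[:-1])
        return candidate == preferred

    preferred_version_matches("7.%", "7.2.0")   # True
    preferred_version_matches("2.26%", "2.25")  # False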
diff --git a/import-layers/yocto-poky/meta/conf/distro/include/uninative-flags.inc b/import-layers/yocto-poky/meta/conf/distro/include/uninative-flags.inc
index b6a944e..febf2a5 100644
--- a/import-layers/yocto-poky/meta/conf/distro/include/uninative-flags.inc
+++ b/import-layers/yocto-poky/meta/conf/distro/include/uninative-flags.inc
@@ -1,13 +1,3 @@
-# https://wiki.debian.org/GCC5
-# We may see binaries built with gcc5 run or linked into gcc4 environment
-# so use the older libstdc++ standard for now until we don't support gcc4
-# on the host system.
-BUILD_CXXFLAGS_append = " -D_GLIBCXX_USE_CXX11_ABI=0"
-
-# icu configure defaults to CXX11 if no -std= option is passed in CXXFLAGS
-# therefore pass one
-BUILD_CXXFLAGS_append_pn-icu-native = " -std=c++98"
-
 # Some distros (ubuntu 16.10, debian-testing) default to gcc configured with
 # --enable-default-pie (see gcc -v). This breaks e.g. prelink-native on a pie
 # default system if binutils-native was built on a system which is not pie default
diff --git a/import-layers/yocto-poky/meta/conf/distro/include/world-broken.inc b/import-layers/yocto-poky/meta/conf/distro/include/world-broken.inc
index d4bdddf..49e9516 100644
--- a/import-layers/yocto-poky/meta/conf/distro/include/world-broken.inc
+++ b/import-layers/yocto-poky/meta/conf/distro/include/world-broken.inc
@@ -7,28 +7,14 @@
 
 # error: no member named 'sin_port' in 'struct sockaddr_in6'
 # this is due to libtirpc using ipv6 but portmap rpc expecting ipv4
-EXCLUDE_FROM_WORLD_pn-portmap_libc-musl = "1"
 EXCLUDE_FROM_WORLD_pn-unfs3_libc-musl = "1"
 
 # error: use of undeclared identifier '_STAT_VER'
 EXCLUDE_FROM_WORLD_pn-pseudo_libc-musl = "1"
 
-# error: Need to implement custom I/O
-EXCLUDE_FROM_WORLD_pn-libsolv_libc-musl = "1"
-
-# undefined reference to `pthread_tryjoin_np'
-EXCLUDE_FROM_WORLD_pn-btrfs-tools_libc-musl = "1"
-
 # error: error.h: No such file or directory
 EXCLUDE_FROM_WORLD_pn-prelink_libc-musl = "1"
 
-# error: use of undeclared identifier 'O_CREAT'
-EXCLUDE_FROM_WORLD_pn-libbsd_libc-musl = "1"
-
-# error: expected declaration specifiers before '__nonnull'
-EXCLUDE_FROM_WORLD_pn-lttng-ust_libc-musl = "1"
-EXCLUDE_FROM_WORLD_pn-lttng-tools_libc-musl = "1"
-
 # error: obstack.h: No such file or directory
 EXCLUDE_FROM_WORLD_pn-systemtap_libc-musl = "1"
 EXCLUDE_FROM_WORLD_pn-systemtap-uprobes_libc-musl = "1"
@@ -37,20 +23,9 @@
 #            void (*_function)(sigval_t);
 EXCLUDE_FROM_WORLD_pn-qemu_libc-musl = "1"
 
-# glibc specific funcrions
-# error: storage size of 'mi' isn't known struct mallinfo mi
-EXCLUDE_FROM_WORLD_pn-valgrind_libc-musl = "1"
-
 # error: format '%s' expects argument of type 'char *', but argument 4 has type 'int' [-Werror=format=]
 #   snprintf(buf, size, "%s", strerror_r(err, sbuf, sizeof(sbuf)));
 EXCLUDE_FROM_WORLD_pn-perf_libc-musl = "1"
 
 # error: 'RTLD_NEXT' was not declared in this scope
 EXCLUDE_FROM_WORLD_pn-gcc-sanitizers_libc-musl = "1"
-
-# gcc fails to build when libuwind is staged before building gcc since
-# it then finds the unwind.h header from libunwind and not from libgcc
-# and on arm specially they are different since libgcc defines some functions
-# as macros which are functions in libunwind and it fails during linking
-# libbacktrace/backtrace.c:76: undefined reference to `_Unwind_GetIP'
-EXCLUDE_FROM_WORLD_pn-libunwind_libc-musl_arm = "1"
diff --git a/import-layers/yocto-poky/meta/conf/documentation.conf b/import-layers/yocto-poky/meta/conf/documentation.conf
index 35b9103..a55e283 100644
--- a/import-layers/yocto-poky/meta/conf/documentation.conf
+++ b/import-layers/yocto-poky/meta/conf/documentation.conf
@@ -56,7 +56,6 @@
 do_uboot_mkimage[doc] = "Creates a uImage file from the kernel for the U-Boot bootloader"
 do_unpack[doc] = "Unpacks the source code into a working directory"
 do_validate_branches[doc] = "Ensures that the source/meta branches are on the locations specified by their SRCREV values for a linux-yocto style kernel"
-do_vmimg[doc] = "Creates an image for use with VMware or VirtualBox compatible virtual machine hosts (based on IMAGE_FSTYPES either .vmdk or .vdi)"
 
 # DESCRIPTIONS FOR VARIABLES #
 
diff --git a/import-layers/yocto-poky/meta/conf/layer.conf b/import-layers/yocto-poky/meta/conf/layer.conf
index fc16502..4ba0b93 100644
--- a/import-layers/yocto-poky/meta/conf/layer.conf
+++ b/import-layers/yocto-poky/meta/conf/layer.conf
@@ -9,7 +9,9 @@
 
 # This should only be incremented on significant changes that will
 # cause compatibility issues with other layers
-LAYERVERSION_core = "10"
+LAYERVERSION_core = "11"
+
+LAYERSERIES_CORENAMES = "rocko"
 
 BBLAYERS_LAYERINDEX_NAME_core = "openembedded-core"
 
@@ -38,6 +40,7 @@
   base-passwd \
   opkg-utils \
   gstreamer1.0-meta-base \
+  ca-certificates \
 "
 
 SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS += " \
@@ -48,8 +51,16 @@
   docbook-xsl-stylesheets->perl \
   ca-certificates->openssl \
   initramfs-framework->${VIRTUAL-RUNTIME_base-utils} \
-  initramfs-framework->systemd \
   initramfs-framework->eudev \
+  initramfs-framework->systemd \
+  initramfs-module-install-efi->dosfstools \
+  initramfs-module-install-efi->e2fsprogs \
+  initramfs-module-install-efi->parted \
+  initramfs-module-install-efi->util-linux \
+  initramfs-module-install->e2fsprogs \
+  initramfs-module-install->grub \
+  initramfs-module-install->parted \
+  initramfs-module-install->util-linux \
   liberation-fonts->fontconfig \
   cantarell-fonts->fontconfig \
   gnome-icon-theme->librsvg \
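LAYERSERIES_CORENAMES names the release series ("rocko") that other layers declare compatibility with through their LAYERSERIES_COMPAT values. A hedged sketch of how such a check can be expressed, simplified from the real test performed by BitBake:

    def layer_is_compatible(layerseries_compat, corenames="rocko"):
        # Compatible when at least one series the layer lists matches a
        # name from LAYERSERIES_CORENAMES.
        return bool(set(layerseries_compat.split()) & set(corenames.split()))

    layer_is_compatible("pyro rocko")  # True
    layer_is_compatible("morty")       # False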
diff --git a/import-layers/yocto-poky/meta/conf/licenses.conf b/import-layers/yocto-poky/meta/conf/licenses.conf
index d210a0e..3e2d258 100644
--- a/import-layers/yocto-poky/meta/conf/licenses.conf
+++ b/import-layers/yocto-poky/meta/conf/licenses.conf
@@ -105,6 +105,10 @@
 SPDXLICENSEMAP[AFLv2] = "AFL-2.0"
 SPDXLICENSEMAP[AFLv1] = "AFL-1.2"
 
+#CDDL variations
+SPDXLICENSEMAP[CDDLv1] = "CDDL-1.0"
+SPDXLICENSEMAP[CDDL-1] = "CDDL-1.0"
+
 #Other variations
 SPDXLICENSEMAP[EPLv1.0] = "EPL-1.0"
 
diff --git a/import-layers/yocto-poky/meta/conf/machine-sdk/i586.conf b/import-layers/yocto-poky/meta/conf/machine-sdk/i586.conf
index 99083fb..41e5e15 100644
--- a/import-layers/yocto-poky/meta/conf/machine-sdk/i586.conf
+++ b/import-layers/yocto-poky/meta/conf/machine-sdk/i586.conf
@@ -1,5 +1,4 @@
 SDK_ARCH = "i586"
 SDK_CC_ARCH = "-march=i586"
 ABIEXTENSION_class-nativesdk = ""
-SDK_OLDEST_KERNEL = "2.6.32"
 
diff --git a/import-layers/yocto-poky/meta/conf/machine-sdk/i686.conf b/import-layers/yocto-poky/meta/conf/machine-sdk/i686.conf
index cf22784..fe40697 100644
--- a/import-layers/yocto-poky/meta/conf/machine-sdk/i686.conf
+++ b/import-layers/yocto-poky/meta/conf/machine-sdk/i686.conf
@@ -1,4 +1,3 @@
 SDK_ARCH = "i686"
 SDK_CC_ARCH = "-march=i686"
 ABIEXTENSION_class-nativesdk = ""
-SDK_OLDEST_KERNEL = "2.6.32"
diff --git a/import-layers/yocto-poky/meta/conf/machine-sdk/x86_64.conf b/import-layers/yocto-poky/meta/conf/machine-sdk/x86_64.conf
index 7d2e717..61439b4 100644
--- a/import-layers/yocto-poky/meta/conf/machine-sdk/x86_64.conf
+++ b/import-layers/yocto-poky/meta/conf/machine-sdk/x86_64.conf
@@ -1,3 +1,2 @@
 SDK_ARCH = "x86_64"
 ABIEXTENSION_class-nativesdk = ""
-SDK_OLDEST_KERNEL = "2.6.32"
diff --git a/import-layers/yocto-poky/meta/conf/machine/include/arm/feature-arm-vfp.inc b/import-layers/yocto-poky/meta/conf/machine/include/arm/feature-arm-vfp.inc
index 667b609..7ae7456 100644
--- a/import-layers/yocto-poky/meta/conf/machine/include/arm/feature-arm-vfp.inc
+++ b/import-layers/yocto-poky/meta/conf/machine/include/arm/feature-arm-vfp.inc
@@ -5,8 +5,8 @@
 TUNEVALID[vfp] = "Enable Vector Floating Point (vfp) unit."
 TUNE_CCARGS_MFPU .= "${@bb.utils.contains('TUNE_FEATURES', 'vfp', ' vfp', '', d)}"
 
-TUNE_CCARGS  .= "${@ (' -mfpu=%s ' % d.getVar('TUNE_CCARGS_MFPU').split()[-1]) if (d.getVar('TUNE_CCARGS_MFPU') != '') else ''}"
-ARMPKGSFX_FPU = "${@ ('-%s'        % d.getVar('TUNE_CCARGS_MFPU').split()[-1].replace('vfpv3-d16', 'vfpv3d16')) if (d.getVar('TUNE_CCARGS_MFPU') != '') else ''}"
+TUNE_CCARGS  .= "${@ (' -mfpu=%s' % d.getVar('TUNE_CCARGS_MFPU').split()[-1]) if (d.getVar('TUNE_CCARGS_MFPU') != '') else ''}"
+ARMPKGSFX_FPU = "${@ ('-%s'       % d.getVar('TUNE_CCARGS_MFPU').split()[-1].replace('vfpv3-d16', 'vfpv3d16')) if (d.getVar('TUNE_CCARGS_MFPU') != '') else ''}"
 
 TUNEVALID[callconvention-hard] = "Enable EABI hard float call convention, requires VFP."
 TUNE_CCARGS_MFLOAT = "${@ bb.utils.contains('TUNE_FEATURES', 'callconvention-hard', 'hard', 'softfp', d) if (d.getVar('TUNE_CCARGS_MFPU') != '') else '' }"
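The whitespace fix above matters because the inline expression emits a single -mfpu= option built from the last unit listed in TUNE_CCARGS_MFPU, and the stray trailing space leaked into TUNE_CCARGS. The same logic in plain Python, with an example value:

    def mfpu_ccargs(tune_ccargs_mfpu):
        # Only the last FPU listed wins, and no trailing space is emitted.
        if tune_ccargs_mfpu != "":
            return " -mfpu=%s" % tune_ccargs_mfpu.split()[-1]
        return ""

    mfpu_ccargs(" vfp neon")  # -> ' -mfpu=neon'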
diff --git a/import-layers/yocto-poky/meta/conf/machine/include/qemu.inc b/import-layers/yocto-poky/meta/conf/machine/include/qemu.inc
index 0e4103b..e64b0c8 100644
--- a/import-layers/yocto-poky/meta/conf/machine/include/qemu.inc
+++ b/import-layers/yocto-poky/meta/conf/machine/include/qemu.inc
@@ -25,6 +25,7 @@
 
 # Provide the nfs server kernel module for all qemu images
 KERNEL_FEATURES_append_pn-linux-yocto = " features/nfsd/nfsd-enable.scc"
+KERNEL_FEATURES_append_pn-linux-yocto-rt = " features/nfsd/nfsd-enable.scc"
 
 MACHINE_EXTRA_RRECOMMENDS += "rng-tools"
 
diff --git a/import-layers/yocto-poky/meta/conf/machine/include/qemuboot-mips.inc b/import-layers/yocto-poky/meta/conf/machine/include/qemuboot-mips.inc
index 0c60cf2..7d9fa52 100644
--- a/import-layers/yocto-poky/meta/conf/machine/include/qemuboot-mips.inc
+++ b/import-layers/yocto-poky/meta/conf/machine/include/qemuboot-mips.inc
@@ -4,5 +4,5 @@
 QB_MACHINE = "-machine malta"
 QB_KERNEL_CMDLINE_APPEND = "console=ttyS0 console=tty"
 # Add the 'virtio-rng-pci' device otherwise the guest may run out of entropy
-QB_OPT_APPEND = "-vga cirrus -show-cursor -usb -usbdevice tablet -device virtio-rng-pci"
+QB_OPT_APPEND = "-vga cirrus -show-cursor -usb -device usb-tablet -device virtio-rng-pci"
 QB_SYSTEM_NAME = "qemu-system-${TUNE_ARCH}"
diff --git a/import-layers/yocto-poky/meta/conf/machine/include/qemuboot-x86.inc b/import-layers/yocto-poky/meta/conf/machine/include/qemuboot-x86.inc
index acf9d55..1456bf7 100644
--- a/import-layers/yocto-poky/meta/conf/machine/include/qemuboot-x86.inc
+++ b/import-layers/yocto-poky/meta/conf/machine/include/qemuboot-x86.inc
@@ -12,9 +12,6 @@
 QB_AUDIO_OPT = "-soundhw ac97,es1370"
 QB_KERNEL_CMDLINE_APPEND = "vga=0 uvesafb.mode_option=${UVESA_MODE} oprofile.timer=1 uvesafb.task_timeout=-1"
 # Add the 'virtio-rng-pci' device otherwise the guest may run out of entropy
-QB_OPT_APPEND = "-vga vmware -show-cursor -usb -usbdevice tablet -device virtio-rng-pci"
+QB_OPT_APPEND = "-vga vmware -show-cursor -usb -device usb-tablet -device virtio-rng-pci"
 
-KERNEL_MODULE_AUTOLOAD += "uvesafb"
-KERNEL_MODULE_PROBECONF += "uvesafb"
 UVESA_MODE ?= "640x480-32"
-module_conf_uvesafb = "options uvesafb mode_option=${UVESA_MODE}"
diff --git a/import-layers/yocto-poky/meta/conf/machine/include/tune-mips32.inc b/import-layers/yocto-poky/meta/conf/machine/include/tune-mips32.inc
index ce0445f..a90c0f0 100644
--- a/import-layers/yocto-poky/meta/conf/machine/include/tune-mips32.inc
+++ b/import-layers/yocto-poky/meta/conf/machine/include/tune-mips32.inc
@@ -6,7 +6,8 @@
 TUNECONFLICTS[mips32] = "n64 n32"
 TUNE_CCARGS .= "${@bb.utils.contains('TUNE_FEATURES', 'mips32', ' -march=mips32', '', d)}"
 
-AVAILTUNES += "mips32 mips32el mips32-nf mips32el-nf"
+# Base Tunes (Hard Float)
+AVAILTUNES += "mips32 mips32el"
 
 TUNE_FEATURES_tune-mips32 = "${TUNE_FEATURES_tune-mips} mips32"
 MIPSPKGSFX_VARIANT_tune-mips32 = "mips32"
@@ -16,6 +17,9 @@
 MIPSPKGSFX_VARIANT_tune-mips32el = "mips32el"
 PACKAGE_EXTRA_ARCHS_tune-mips32el = "mipsel mips32el"
 
+# Soft Float
+AVAILTUNES += "mips32-nf mips32el-nf"
+
 TUNE_FEATURES_tune-mips32-nf = "${TUNE_FEATURES_tune-mips-nf} mips32"
 MIPSPKGSFX_VARIANT_tune-mips32-nf = "mips32"
 PACKAGE_EXTRA_ARCHS_tune-mips32-nf = "mips-nf mips32-nf"
diff --git a/import-layers/yocto-poky/meta/conf/machine/include/tune-mips32r2.inc b/import-layers/yocto-poky/meta/conf/machine/include/tune-mips32r2.inc
index 5e398cb..14473ca 100644
--- a/import-layers/yocto-poky/meta/conf/machine/include/tune-mips32r2.inc
+++ b/import-layers/yocto-poky/meta/conf/machine/include/tune-mips32r2.inc
@@ -6,7 +6,7 @@
 TUNECONFLICTS[mips32r2] = "n64 n32"
 TUNE_CCARGS .= "${@bb.utils.contains('TUNE_FEATURES', 'mips32r2', ' -march=mips32r2', '', d)}"
 
-# hard float
+# Base Tunes (Hard Float)
 AVAILTUNES += "mips32r2 mips32r2el"
 
 TUNE_FEATURES_tune-mips32r2 = "${TUNE_FEATURES_tune-mips} mips32r2"
@@ -17,7 +17,7 @@
 MIPSPKGSFX_VARIANT_tune-mips32r2el = "mips32r2el"
 PACKAGE_EXTRA_ARCHS_tune-mips32r2el = "mipsel mips32el mips32r2el"
 
-# soft float
+# Soft Float
 AVAILTUNES += "mips32r2-nf mips32r2el-nf"
 
 TUNE_FEATURES_tune-mips32r2-nf = "${TUNE_FEATURES_tune-mips-nf} mips32r2"
diff --git a/import-layers/yocto-poky/meta/conf/machine/include/tune-mips32r6.inc b/import-layers/yocto-poky/meta/conf/machine/include/tune-mips32r6.inc
index dea33ea..44369cb 100644
--- a/import-layers/yocto-poky/meta/conf/machine/include/tune-mips32r6.inc
+++ b/import-layers/yocto-poky/meta/conf/machine/include/tune-mips32r6.inc
@@ -6,7 +6,7 @@
 TUNECONFLICTS[mipsisa32r6] = "n64 n32"
 TUNE_CCARGS .= "${@bb.utils.contains('TUNE_FEATURES', 'mipsisa32r6', ' -march=mips32r6', '', d)}"
 
-# Base Tunes
+# Base Tunes (Hard Float)
 AVAILTUNES += "mipsisa32r6 mipsisa32r6el"
 
 TUNE_FEATURES_tune-mipsisa32r6 = "o32 bigendian mipsisa32r6 fpu-hard r6"
diff --git a/import-layers/yocto-poky/meta/conf/machine/include/x86-base.inc b/import-layers/yocto-poky/meta/conf/machine/include/x86-base.inc
index 7365953..023eb5e 100644
--- a/import-layers/yocto-poky/meta/conf/machine/include/x86-base.inc
+++ b/import-layers/yocto-poky/meta/conf/machine/include/x86-base.inc
@@ -10,7 +10,8 @@
 
 MACHINE_EXTRA_RRECOMMENDS += "kernel-modules"
 
-IMAGE_FSTYPES += "live"
+IMAGE_FSTYPES ?= "live"
+NOISO ?= "1"
 
 KERNEL_IMAGETYPE ?= "bzImage"
 
@@ -20,7 +21,7 @@
 # kernel-related variables
 #
 PREFERRED_PROVIDER_virtual/kernel ??= "linux-yocto"
-PREFERRED_VERSION_linux-yocto ??= "4.10%"
+PREFERRED_VERSION_linux-yocto ??= "4.12%"
 
 #
 # XSERVER subcomponents, used to build the XSERVER variable
diff --git a/import-layers/yocto-poky/meta/conf/machine/include/x86/arch-x86.inc b/import-layers/yocto-poky/meta/conf/machine/include/x86/arch-x86.inc
index e51d595..70814b8 100644
--- a/import-layers/yocto-poky/meta/conf/machine/include/x86/arch-x86.inc
+++ b/import-layers/yocto-poky/meta/conf/machine/include/x86/arch-x86.inc
@@ -26,6 +26,7 @@
 TUNE_ASARGS += "${@bb.utils.contains('TUNE_FEATURES', 'mx32', '-x32', '', d)}"
 # user mode qemu doesn't support x32
 MACHINE_FEATURES_BACKFILL_CONSIDERED_append = " ${@bb.utils.contains('TUNE_FEATURES', 'mx32', 'qemu-usermode', '', d)}"
+MACHINEOVERRIDES =. "${@bb.utils.contains('TUNE_FEATURES', 'mx32', 'x86-x32:', '' ,d)}"
 
 # ELF64 ABI
 TUNEVALID[m64] = "IA32e (x86_64) ELF64 standard ABI"
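The new MACHINEOVERRIDES line uses bb.utils.contains() to prepend an "x86-x32:" override only when the mx32 tune feature is active. A simplified model of that helper (the real function takes a variable name and the datastore rather than a raw string):

    def contains(features, checkvalues, truevalue, falsevalue):
        # The true branch is taken only when every requested value appears
        # in the space-separated feature string.
        if set(checkvalues.split()) <= set(features.split()):
            return truevalue
        return falsevalue

    contains("m64 mx32", "mx32", "x86-x32:", "")  # -> 'x86-x32:'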
diff --git a/import-layers/yocto-poky/meta/conf/machine/qemuarm.conf b/import-layers/yocto-poky/meta/conf/machine/qemuarm.conf
index 6b875e4..c8932dd 100644
--- a/import-layers/yocto-poky/meta/conf/machine/qemuarm.conf
+++ b/import-layers/yocto-poky/meta/conf/machine/qemuarm.conf
@@ -15,6 +15,6 @@
 QB_MACHINE = "-machine versatilepb"
 QB_KERNEL_CMDLINE_APPEND = "console=ttyAMA0,115200 console=tty"
 # Add the 'virtio-rng-pci' device otherwise the guest may run out of entropy
-QB_OPT_APPEND = "-show-cursor -usb -usbdevice tablet -device virtio-rng-pci"
-PREFERRED_VERSION_linux-yocto ??= "4.10%"
+QB_OPT_APPEND = "-show-cursor -usb -device usb-tablet -device virtio-rng-pci"
+PREFERRED_VERSION_linux-yocto ??= "4.12%"
 QB_DTB = "${@base_version_less_or_equal('PREFERRED_VERSION_linux-yocto', '4.7', '', 'zImage-versatile-pb.dtb', d)}"
diff --git a/import-layers/yocto-poky/meta/conf/machine/qemuppc.conf b/import-layers/yocto-poky/meta/conf/machine/qemuppc.conf
index a9ef64b..537b2f6 100644
--- a/import-layers/yocto-poky/meta/conf/machine/qemuppc.conf
+++ b/import-layers/yocto-poky/meta/conf/machine/qemuppc.conf
@@ -17,5 +17,5 @@
 QB_CPU = "-cpu G4"
 QB_KERNEL_CMDLINE_APPEND = "console=tty console=ttyS0"
 # Add the 'virtio-rng-pci' device otherwise the guest may run out of entropy
-QB_OPT_APPEND = "-show-cursor -usb -usbdevice tablet -device virtio-rng-pci"
+QB_OPT_APPEND = "-show-cursor -usb -device usb-tablet -device virtio-rng-pci"
 QB_TAP_OPT = "-netdev tap,id=net0,ifname=@TAP@,script=no,downscript=no"
diff --git a/import-layers/yocto-poky/meta/conf/machine/qemux86-64.conf b/import-layers/yocto-poky/meta/conf/machine/qemux86-64.conf
index 10189cb..fcc4459 100644
--- a/import-layers/yocto-poky/meta/conf/machine/qemux86-64.conf
+++ b/import-layers/yocto-poky/meta/conf/machine/qemux86-64.conf
@@ -25,9 +25,13 @@
            xserver-xorg-module-libint10 \
            "
 
-MACHINE_FEATURES += "x86"
+MACHINE_FEATURES += "x86 pci"
 
 MACHINE_ESSENTIAL_EXTRA_RDEPENDS += "v86d"
 
+KERNEL_MODULE_AUTOLOAD += "uvesafb"
+KERNEL_MODULE_PROBECONF += "uvesafb"
+module_conf_uvesafb = "options uvesafb mode_option=${UVESA_MODE}"
+
 WKS_FILE ?= "directdisk.wks"
 do_image_wic[depends] += "syslinux:do_populate_sysroot syslinux-native:do_populate_sysroot mtools-native:do_populate_sysroot dosfstools-native:do_populate_sysroot"
diff --git a/import-layers/yocto-poky/meta/conf/machine/qemux86.conf b/import-layers/yocto-poky/meta/conf/machine/qemux86.conf
index c26dda2..c53f7a9 100644
--- a/import-layers/yocto-poky/meta/conf/machine/qemux86.conf
+++ b/import-layers/yocto-poky/meta/conf/machine/qemux86.conf
@@ -24,9 +24,13 @@
            xserver-xorg-module-libint10 \
            "
 
-MACHINE_FEATURES += "x86"
+MACHINE_FEATURES += "x86 pci"
 
 MACHINE_ESSENTIAL_EXTRA_RDEPENDS += "v86d"
 
+KERNEL_MODULE_AUTOLOAD += "uvesafb"
+KERNEL_MODULE_PROBECONF += "uvesafb"
+module_conf_uvesafb = "options uvesafb mode_option=${UVESA_MODE}"
+
 WKS_FILE ?= "directdisk.wks"
 do_image_wic[depends] += "syslinux:do_populate_sysroot syslinux-native:do_populate_sysroot mtools-native:do_populate_sysroot dosfstools-native:do_populate_sysroot"
diff --git a/import-layers/yocto-poky/meta/conf/sanity.conf b/import-layers/yocto-poky/meta/conf/sanity.conf
index 46bdbeb..390d342 100644
--- a/import-layers/yocto-poky/meta/conf/sanity.conf
+++ b/import-layers/yocto-poky/meta/conf/sanity.conf
@@ -3,7 +3,7 @@
 # See sanity.bbclass
 #
 # Expert users can confirm their sanity with "touch conf/sanity.conf"
-BB_MIN_VERSION = "1.33.4"
+BB_MIN_VERSION = "1.35.0"
 
 SANITY_ABIFILE = "${TMPDIR}/abi_version"
 
diff --git a/import-layers/yocto-poky/meta/files/common-licenses/pkgconf b/import-layers/yocto-poky/meta/files/common-licenses/pkgconf
new file mode 100644
index 0000000..81a5221
--- /dev/null
+++ b/import-layers/yocto-poky/meta/files/common-licenses/pkgconf
@@ -0,0 +1,10 @@
+Copyright (c) 2011, 2012, 2013, 2014, 2015, 2016, 2017
+    pkgconf authors (see AUTHORS file in source directory).
+
+Permission to use, copy, modify, and/or distribute this software for any
+purpose with or without fee is hereby granted, provided that the above
+copyright notice and this permission notice appear in all copies.
+
+This software is provided 'as is' and without any warranty, express or
+implied.  In no event shall the authors be liable for any damages arising
+from the use of this software.
diff --git a/import-layers/yocto-poky/meta/files/fs-perms-persistent-log.txt b/import-layers/yocto-poky/meta/files/fs-perms-persistent-log.txt
new file mode 100644
index 0000000..3a7cf3a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/files/fs-perms-persistent-log.txt
@@ -0,0 +1,66 @@
+# This file contains a list of files and directories with known permissions.
+# It is used by the packaging class to ensure that the permissions, owners and
+# group of listed files and directories are in sync across the system.
+#
+# The format of this file is:
+#
+#<path>	<mode>	<uid>	<gid>	<walk>	<fmode>	<fuid>	<fgid>
+#
+# or
+#
+#<path> link <target>
+#
+# <path>: directory path
+# <mode>: mode for directory
+# <uid>:  uid for directory
+# <gid>:  gid for directory
+# <walk>: recursively walk the directory?  true or false
+# <fmode>: if walking, new mode for files
+# <fuid>:  if walking, new uid for files
+# <fgid>:  if walking, new gid for files
+# <target>: turn the directory into a symlink pointing to target
+#
+# in mode, uid or gid, a "-" means don't change any existing values
+#
+# /usr/src		0755	root	root	false	-	-	-
+# /usr/share/man	0755	root	root	true	0644	root	root
+
+# Note: all standard config directories are automatically assigned "0755 root root false - - -"
+
+# Documentation should always be corrected
+${mandir}		0755	root	root	true	0644	root	root
+${infodir}		0755	root	root	true	0644	root	root
+${docdir}		0755	root	root	true	0644	root	root
+${datadir}/gtk-doc	0755	root	root	true	0644	root	root
+
+# Fixup locales
+${datadir}/locale	0755	root	root	true	0644	root	root
+
+# Cleanup headers
+${includedir}		0755	root	root	true	0644	root	root
+${oldincludedir}	0755	root	root	true	0644	root	root
+
+# Cleanup debug src
+/usr/src/debug		0755	root	root	true	-	root	root
+
+# Items from base-files
+# Links
+${localstatedir}/run	link	/run
+${localstatedir}/lock	link	/run/lock
+${localstatedir}/tmp	link	volatile/tmp
+
+/home				0755	root	root	false - - -
+/srv				0755	root	root	false - - -
+${prefix}/src			0755	root	root	false - - -
+${localstatedir}/local		0755	root	root	false - - -
+
+# Special permissions from base-files
+# Set 1777
+/tmp				01777	root	root	false - - -
+${localstatedir}/volatile/tmp	01777	root	root	false - - -
+
+# Set 0700
+${ROOT_HOME}			0700	root	root	false - - -
+
+# Set 2775-lsb
+${localstatedir}/mail		02775	root	mail	false - - -
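The entries above follow the whitespace-separated format documented at the top of the file. A hypothetical parser sketch for a single entry, covering both the normal and the "link" form (purely illustrative, not part of the packaging class):

    def parse_fs_perms_line(line):
        fields = line.split()
        # "<path> link <target>" turns the directory into a symlink.
        if len(fields) == 3 and fields[1] == "link":
            return {"path": fields[0], "link": fields[2]}
        path, mode, uid, gid, walk, fmode, fuid, fgid = fields
        return {"path": path, "mode": mode, "uid": uid, "gid": gid,
                "walk": walk == "true", "fmode": fmode,
                "fuid": fuid, "fgid": fgid}

    parse_fs_perms_line("/tmp 01777 root root false - - -")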
diff --git a/import-layers/yocto-poky/meta/lib/bblayers/create.py b/import-layers/yocto-poky/meta/lib/bblayers/create.py
new file mode 100644
index 0000000..6a41fe0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/bblayers/create.py
@@ -0,0 +1,66 @@
+import logging
+import os
+import sys
+import shutil
+
+import bb.utils
+
+from bblayers.common import LayerPlugin
+
+logger = logging.getLogger('bitbake-layers')
+
+def plugin_init(plugins):
+    return CreatePlugin()
+
+def read_template(template, template_dir='templates'):
+    lines = str()
+    with open(os.path.join(os.path.dirname(__file__), template_dir, template)) as fd:
+        lines = ''.join(fd.readlines())
+    return lines
+
+class CreatePlugin(LayerPlugin):
+    def do_create_layer(self, args):
+        """Create a basic layer"""
+        layerdir = os.path.abspath(args.layerdir)
+        if os.path.exists(layerdir):
+            sys.stderr.write("Specified layer directory exists\n")
+            return 1
+
+        # create dirs
+        conf = os.path.join(layerdir, 'conf')
+        bb.utils.mkdirhier(conf)
+
+        # Create the README from templates/README
+        readme_template = read_template('README') % (args.layerdir, args.layerdir, args.layerdir, args.layerdir, args.layerdir, args.layerdir)
+        readme = os.path.join(layerdir, 'README')
+        with open(readme, 'w') as fd:
+            fd.write(readme_template)
+
+        # Copy the MIT license from meta
+        copying = 'COPYING.MIT'
+        dn = os.path.dirname
+        license_src = os.path.join(dn(dn(dn(__file__))), copying)
+        license_dst = os.path.join(layerdir, copying)
+        shutil.copy(license_src, license_dst)
+
+        # Create the layer.conf from templates/layer.conf
+        layerconf_template = read_template('layer.conf') % (args.layerdir, args.layerdir, args.layerdir, args.priority)
+        layerconf = os.path.join(conf, 'layer.conf')
+        with open(layerconf, 'w') as fd:
+            fd.write(layerconf_template)
+
+        # Create the example from templates/example.bb
+        example_template = read_template('example.bb')
+        example = os.path.join(layerdir, 'recipes-' + args.examplerecipe, args.examplerecipe)
+        bb.utils.mkdirhier(example)
+        with open(os.path.join(example, args.examplerecipe + '.bb'), 'w') as fd:
+            fd.write(example_template)
+
+        logger.plain('Add your new layer with \'bitbake-layers add-layer %s\'' % args.layerdir)
+
+    def register_commands(self, sp):
+        parser_create_layer = self.add_command(sp, 'create-layer', self.do_create_layer, parserecipes=False)
+        parser_create_layer.add_argument('layerdir', help='Layer directory to create')
+        parser_create_layer.add_argument('--priority', '-p', default=6, help='Priority to assign to recipes in the layer')
+        parser_create_layer.add_argument('--example-recipe-name', '-e', dest='examplerecipe', default='example', help='Filename of the example recipe')
+
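The plugin is invoked as "bitbake-layers create-layer <layerdir>" and fills the README and layer.conf templates with old-style "%" string formatting. A small illustration of that substitution pattern, using a hypothetical layer name:

    layer_name = "meta-example"  # hypothetical layer name
    template = 'BBFILE_PATTERN_%s = "^${LAYERDIR}/"\n'
    print(template % layer_name)
    # -> BBFILE_PATTERN_meta-example = "^${LAYERDIR}/"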
diff --git a/import-layers/yocto-poky/meta/lib/bblayers/templates/README b/import-layers/yocto-poky/meta/lib/bblayers/templates/README
new file mode 100644
index 0000000..5a77f8d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/bblayers/templates/README
@@ -0,0 +1,41 @@
+This README file contains information on the contents of the %s layer.
+
+Please see the corresponding sections below for details.
+
+Dependencies
+============
+
+  URI: <first dependency>
+  branch: <branch name>
+
+  URI: <second dependency>
+  branch: <branch name>
+
+  .
+  .
+  .
+
+Patches
+=======
+
+Please submit any patches against the %s layer to the xxxx mailing list (xxxx@zzzz.org)
+and cc: the maintainer:
+
+Maintainer: XXX YYYYYY <xxx.yyyyyy@zzzzz.com>
+
+Table of Contents
+=================
+
+  I. Adding the %s layer to your build
+ II. Misc
+
+
+I. Adding the %s layer to your build
+=================================================
+
+Run 'bitbake-layers add-layer %s'
+
+II. Misc
+========
+
+--- replace with specific information about the %s layer ---
diff --git a/import-layers/yocto-poky/meta/lib/bblayers/templates/example.bb b/import-layers/yocto-poky/meta/lib/bblayers/templates/example.bb
new file mode 100644
index 0000000..c4b873d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/bblayers/templates/example.bb
@@ -0,0 +1,11 @@
+SUMMARY = "bitbake-layers recipe"
+DESCRIPTION = "Recipe created by bitbake-layers"
+LICENSE = "MIT"
+
+python do_build() {
+    bb.plain("***********************************************");
+    bb.plain("*                                             *");
+    bb.plain("*  Example recipe created by bitbake-layers   *");
+    bb.plain("*                                             *");
+    bb.plain("***********************************************");
+}
diff --git a/import-layers/yocto-poky/meta/lib/bblayers/templates/layer.conf b/import-layers/yocto-poky/meta/lib/bblayers/templates/layer.conf
new file mode 100644
index 0000000..3c03002
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/bblayers/templates/layer.conf
@@ -0,0 +1,10 @@
+# We have a conf and classes directory, add to BBPATH
+BBPATH .= ":${LAYERDIR}"
+
+# We have recipes-* directories, add to BBFILES
+BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
+            ${LAYERDIR}/recipes-*/*/*.bbappend"
+
+BBFILE_COLLECTIONS += "%s"
+BBFILE_PATTERN_%s = "^${LAYERDIR}/"
+BBFILE_PRIORITY_%s = "%s"
diff --git a/import-layers/yocto-poky/meta/lib/oe/buildhistory_analysis.py b/import-layers/yocto-poky/meta/lib/oe/buildhistory_analysis.py
index 3a5b7b6..3e86a46 100644
--- a/import-layers/yocto-poky/meta/lib/oe/buildhistory_analysis.py
+++ b/import-layers/yocto-poky/meta/lib/oe/buildhistory_analysis.py
@@ -143,22 +143,25 @@
             out += '\n  '.join(list(diff)[2:])
             out += '\n  --'
         elif self.fieldname in img_monitor_files or '/image-files/' in self.path:
-            fieldname = self.fieldname
-            if '/image-files/' in self.path:
-                fieldname = os.path.join('/' + self.path.split('/image-files/')[1], self.fieldname)
-                out = 'Changes to %s:\n  ' % fieldname
+            if self.filechanges or (self.oldvalue and self.newvalue):
+                fieldname = self.fieldname
+                if '/image-files/' in self.path:
+                    fieldname = os.path.join('/' + self.path.split('/image-files/')[1], self.fieldname)
+                    out = 'Changes to %s:\n  ' % fieldname
+                else:
+                    if outer:
+                        prefix = 'Changes to %s ' % self.path
+                    out = '(%s):\n  ' % self.fieldname
+                if self.filechanges:
+                    out += '\n  '.join(['%s' % i for i in self.filechanges])
+                else:
+                    alines = self.oldvalue.splitlines()
+                    blines = self.newvalue.splitlines()
+                    diff = difflib.unified_diff(alines, blines, fieldname, fieldname, lineterm='')
+                    out += '\n  '.join(list(diff))
+                    out += '\n  --'
             else:
-                if outer:
-                    prefix = 'Changes to %s ' % self.path
-                out = '(%s):\n  ' % self.fieldname
-            if self.filechanges:
-                out += '\n  '.join(['%s' % i for i in self.filechanges])
-            else:
-                alines = self.oldvalue.splitlines()
-                blines = self.newvalue.splitlines()
-                diff = difflib.unified_diff(alines, blines, fieldname, fieldname, lineterm='')
-                out += '\n  '.join(list(diff))
-                out += '\n  --'
+                out = ''
         else:
             out = '%s changed from "%s" to "%s"' % (self.fieldname, self.oldvalue, self.newvalue)
 
@@ -169,7 +172,7 @@
                 for line in chg._str_internal(False).splitlines():
                     out += '\n  * %s' % line
 
-        return '%s%s' % (prefix, out)
+        return '%s%s' % (prefix, out) if out else ''
 
 class FileChange:
     changetype_add = 'A'
@@ -508,7 +511,8 @@
     return '\n'.join(out)
 
 
-def process_changes(repopath, revision1, revision2='HEAD', report_all=False, report_ver=False, sigs=False, sigsdiff=False):
+def process_changes(repopath, revision1, revision2='HEAD', report_all=False, report_ver=False,
+                    sigs=False, sigsdiff=False, exclude_path=None):
     repo = git.Repo(repopath)
     assert repo.bare == False
     commit = repo.commit(revision1)
@@ -601,6 +605,19 @@
                     elif chg.path == chg2.path and chg.path.startswith('packages/') and chg2.fieldname in ['PE', 'PV', 'PR']:
                         chg.related.append(chg2)
 
+    # filter out unwanted paths
+    if exclude_path:
+        for chg in changes:
+            if chg.filechanges:
+                fchgs = []
+                for fchg in chg.filechanges:
+                    for epath in exclude_path:
+                        if fchg.path.startswith(epath):
+                            break
+                    else:
+                        fchgs.append(fchg)
+                chg.filechanges = fchgs
+
     if report_all:
         return changes
     else:
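The exclude_path handling added here leans on Python's for/else construct: the else branch runs only when no excluded prefix matched, so the file change is kept. The same filtering step in isolation:

    def filter_paths(paths, exclude_paths):
        # Keep only paths that do not start with any excluded prefix,
        # mirroring the loop added to process_changes().
        kept = []
        for p in paths:
            for epath in exclude_paths:
                if p.startswith(epath):
                    break
            else:
                kept.append(p)
        return kept

    filter_paths(["packages/core2-64/foo", "images/qemux86"], ["images/"])
    # -> ['packages/core2-64/foo']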
diff --git a/import-layers/yocto-poky/meta/lib/oe/copy_buildsystem.py b/import-layers/yocto-poky/meta/lib/oe/copy_buildsystem.py
index a372904..ac2fae1 100644
--- a/import-layers/yocto-poky/meta/lib/oe/copy_buildsystem.py
+++ b/import-layers/yocto-poky/meta/lib/oe/copy_buildsystem.py
@@ -32,6 +32,10 @@
 
         corebase = os.path.abspath(self.d.getVar('COREBASE'))
         layers.append(corebase)
+        # The bitbake build system uses the meta-skeleton layer as a layout
+        # for common recipes, e.g. the recipetool script to create kernel recipes.
+        # Add the meta-skeleton layer to be included as part of the eSDK installation
+        layers.append(os.path.join(corebase, 'meta-skeleton'))
 
         # Exclude layers
         for layer_exclude in self.layers_exclude:
@@ -71,6 +75,11 @@
             layerdestpath = destdir
             if corebase == os.path.dirname(layer):
                 layerdestpath += '/' + os.path.basename(corebase)
+            else:
+                layer_relative = os.path.basename(corebase) + '/' + os.path.relpath(layer, corebase)
+                if os.path.dirname(layer_relative) != layernewname:
+                    layerdestpath += '/' + os.path.dirname(layer_relative)
+
             layerdestpath += '/' + layernewname
 
             layer_relative = os.path.relpath(layerdestpath,
@@ -123,6 +132,14 @@
                         line = line.replace('workspacelayer', workspace_newname)
                         f.write(line)
 
+        # The meta-skeleton layer is added as part of the build system
+        # but is not a layer included in the build, so it is not
+        # reported to the function caller.
+        for layer in layers_copied:
+            if layer.endswith('/meta-skeleton'):
+                layers_copied.remove(layer)
+                break
+
         return layers_copied
 
 def generate_locked_sigs(sigfile, d):
@@ -239,6 +256,7 @@
     cmd = "%sBB_SETSCENE_ENFORCE=1 PSEUDO_DISABLED=1 oe-check-sstate %s -s -o %s %s" % (cmdprefix, targets, filteroutfile, logparam)
     env = dict(d.getVar('BB_ORIGENV', False))
     env.pop('BUILDDIR', '')
+    env.pop('BBPATH', '')
     pathitems = env['PATH'].split(':')
     env['PATH'] = ':'.join([item for item in pathitems if not item.endswith('/bitbake/bin')])
     bb.process.run(cmd, stderr=subprocess.STDOUT, env=env, cwd=cwd, executable='/bin/bash')
diff --git a/import-layers/yocto-poky/meta/lib/oe/distro_check.py b/import-layers/yocto-poky/meta/lib/oe/distro_check.py
index 37f04ed..e775c3a 100644
--- a/import-layers/yocto-poky/meta/lib/oe/distro_check.py
+++ b/import-layers/yocto-poky/meta/lib/oe/distro_check.py
@@ -77,17 +77,10 @@
 
 def get_latest_released_opensuse_source_package_list(d):
     "Returns list of all the name os packages in the latest opensuse distro"
-    latest = find_latest_numeric_release("http://download.opensuse.org/source/distribution/",d)
+    latest = find_latest_numeric_release("http://download.opensuse.org/source/distribution/leap", d)
 
-    package_names = get_source_package_list_from_url("http://download.opensuse.org/source/distribution/%s/repo/oss/suse/src/" % latest, "main", d)
-    package_names |= get_source_package_list_from_url("http://download.opensuse.org/update/%s/src/" % latest, "updates", d)
-    return latest, package_names
-
-def get_latest_released_mandriva_source_package_list(d):
-    "Returns list of all the name os packages in the latest mandriva distro"
-    latest = find_latest_numeric_release("http://distrib-coffee.ipsl.jussieu.fr/pub/linux/MandrivaLinux/official/", d)
-    package_names = get_source_package_list_from_url("http://distrib-coffee.ipsl.jussieu.fr/pub/linux/MandrivaLinux/official/%s/SRPMS/main/release/" % latest, "main", d)
-    package_names |= get_source_package_list_from_url("http://distrib-coffee.ipsl.jussieu.fr/pub/linux/MandrivaLinux/official/%s/SRPMS/main/updates/" % latest, "updates", d)
+    package_names = get_source_package_list_from_url("http://download.opensuse.org/source/distribution/leap/%s/repo/oss/suse/src/" % latest, "main", d)
+    package_names |= get_source_package_list_from_url("http://download.opensuse.org/update/leap/%s/oss/src/" % latest, "updates", d)
     return latest, package_names
 
 def get_latest_released_clear_source_package_list(d):
@@ -161,8 +154,7 @@
                             ("Debian", get_latest_released_debian_source_package_list),
                             ("Ubuntu", get_latest_released_ubuntu_source_package_list),
                             ("Fedora", get_latest_released_fedora_source_package_list),
-                            ("OpenSuSE", get_latest_released_opensuse_source_package_list),
-                            ("Mandriva", get_latest_released_mandriva_source_package_list),
+                            ("openSUSE", get_latest_released_opensuse_source_package_list),
                             ("Clear", get_latest_released_clear_source_package_list),
                            )
 
diff --git a/import-layers/yocto-poky/meta/lib/oe/gpg_sign.py b/import-layers/yocto-poky/meta/lib/oe/gpg_sign.py
index 7ce767e..9cc88f0 100644
--- a/import-layers/yocto-poky/meta/lib/oe/gpg_sign.py
+++ b/import-layers/yocto-poky/meta/lib/oe/gpg_sign.py
@@ -15,7 +15,7 @@
 
     def export_pubkey(self, output_file, keyid, armor=True):
         """Export GPG public key to a file"""
-        cmd = '%s --batch --yes --export -o %s ' % \
+        cmd = '%s --no-permission-warning --batch --yes --export -o %s ' % \
                 (self.gpg_bin, output_file)
         if self.gpg_path:
             cmd += "--homedir %s " % self.gpg_path
@@ -27,22 +27,27 @@
             raise bb.build.FuncFailed('Failed to export gpg public key (%s): %s' %
                                       (keyid, output))
 
-    def sign_rpms(self, files, keyid, passphrase):
+    def sign_rpms(self, files, keyid, passphrase, digest, sign_chunk, fsk=None, fsk_password=None):
         """Sign RPM files"""
 
         cmd = self.rpm_bin + " --addsign --define '_gpg_name %s'  " % keyid
-        gpg_args = '--batch --passphrase=%s' % passphrase
+        gpg_args = '--no-permission-warning --batch --passphrase=%s' % passphrase
         if self.gpg_version > (2,1,):
             gpg_args += ' --pinentry-mode=loopback'
         cmd += "--define '_gpg_sign_cmd_extra_args %s' " % gpg_args
+        cmd += "--define '_binary_filedigest_algorithm %s' " % digest
         if self.gpg_bin:
-            cmd += "--define '%%__gpg %s' " % self.gpg_bin
+            cmd += "--define '__gpg %s' " % self.gpg_bin
         if self.gpg_path:
             cmd += "--define '_gpg_path %s' " % self.gpg_path
+        if fsk:
+            cmd += "--signfiles --fskpath %s " % fsk
+            if fsk_password:
+                cmd += "--define '_file_signing_key_password %s' " % fsk_password
 
-        # Sign in chunks of 100 packages
-        for i in range(0, len(files), 100):
-            status, output = oe.utils.getstatusoutput(cmd + ' '.join(files[i:i+100]))
+        # Sign in chunks
+        for i in range(0, len(files), sign_chunk):
+            status, output = oe.utils.getstatusoutput(cmd + ' '.join(files[i:i+sign_chunk]))
             if status:
                 raise bb.build.FuncFailed("Failed to sign RPM packages: %s" % output)
 
@@ -53,8 +58,8 @@
         if passphrase_file and passphrase:
             raise Exception("You should use either passphrase_file of passphrase, not both")
 
-        cmd = [self.gpg_bin, '--detach-sign', '--batch', '--no-tty', '--yes',
-               '--passphrase-fd', '0', '-u', keyid]
+        cmd = [self.gpg_bin, '--detach-sign', '--no-permission-warning', '--batch',
+               '--no-tty', '--yes', '--passphrase-fd', '0', '-u', keyid]
 
         if self.gpg_path:
             cmd += ['--homedir', self.gpg_path]
@@ -93,7 +98,7 @@
         """Return the gpg version as a tuple of ints"""
         import subprocess
         try:
-            ver_str = subprocess.check_output((self.gpg_bin, "--version")).split()[2].decode("utf-8")
+            ver_str = subprocess.check_output((self.gpg_bin, "--version", "--no-permission-warning")).split()[2].decode("utf-8")
             return tuple([int(i) for i in ver_str.split('.')])
         except subprocess.CalledProcessError as e:
             raise bb.build.FuncFailed("Could not get gpg version: %s" % e)
@@ -101,7 +106,7 @@
 
     def verify(self, sig_file):
         """Verify signature"""
-        cmd = self.gpg_bin + " --verify "
+        cmd = self.gpg_bin + " --verify --no-permission-warning "
         if self.gpg_path:
             cmd += "--homedir %s " % self.gpg_path
         cmd += sig_file
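sign_rpms() now takes the chunk size as a parameter instead of the previously hard-coded 100 and walks the file list in slices of that size. The slicing idiom on its own:

    def chunks(files, sign_chunk):
        # Yield successive slices of at most sign_chunk entries, as the
        # signing loop does with range(0, len(files), sign_chunk).
        for i in range(0, len(files), sign_chunk):
            yield files[i:i + sign_chunk]

    list(chunks(["a.rpm", "b.rpm", "c.rpm"], 2))
    # -> [['a.rpm', 'b.rpm'], ['c.rpm']]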
diff --git a/import-layers/yocto-poky/meta/lib/oe/license.py b/import-layers/yocto-poky/meta/lib/oe/license.py
index 8d2fd17..ca385d5 100644
--- a/import-layers/yocto-poky/meta/lib/oe/license.py
+++ b/import-layers/yocto-poky/meta/lib/oe/license.py
@@ -106,7 +106,8 @@
     license string matches the whitelist and does not match the blacklist.
 
     Returns a tuple holding the boolean state and a list of the applicable
-    licenses which were excluded (or None, if the state is True)
+    licenses that were excluded if state is False, or the licenses that were
+    included if the state is True.
     """
 
     def include_license(license):
@@ -117,10 +118,17 @@
 
     def choose_licenses(alpha, beta):
         """Select the option in an OR which is the 'best' (has the most
-        included licenses)."""
-        alpha_weight = len(list(filter(include_license, alpha)))
-        beta_weight = len(list(filter(include_license, beta)))
-        if alpha_weight > beta_weight:
+        included licenses and no excluded licenses)."""
+        # The factor 1000 below is arbitrary, just expected to be much larger
+        # than the number of licenses actually specified. That way the weight
+        # will be negative if the list of licenses contains an excluded license,
+        # but still gives a higher weight to the list with the most included
+        # licenses.
+        alpha_weight = (len(list(filter(include_license, alpha))) -
+                        1000 * (len(list(filter(exclude_license, alpha))) > 0))
+        beta_weight = (len(list(filter(include_license, beta))) -
+                       1000 * (len(list(filter(exclude_license, beta))) > 0))
+        if alpha_weight >= beta_weight:
             return alpha
         else:
             return beta
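The reworked choose_licenses() gives any OR-branch containing a blacklisted license a large negative penalty so it can never beat a clean branch, while still preferring the branch with more whitelisted licenses. A standalone sketch of that weighting:

    def choose(alpha, beta, include, exclude):
        # Weight = number of included licenses, minus 1000 when any
        # excluded license is present (1000 stands in for "much larger
        # than any real license list").
        def weight(option):
            return (sum(1 for lic in option if include(lic))
                    - 1000 * any(exclude(lic) for lic in option))
        return alpha if weight(alpha) >= weight(beta) else beta

    choose(["GPL-3.0"], ["MIT"],
           include=lambda l: True,
           exclude=lambda l: l.startswith("GPL-3"))
    # -> ['MIT']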
diff --git a/import-layers/yocto-poky/meta/lib/oe/lsb.py b/import-layers/yocto-poky/meta/lib/oe/lsb.py
index 3a945e0..71c0992 100644
--- a/import-layers/yocto-poky/meta/lib/oe/lsb.py
+++ b/import-layers/yocto-poky/meta/lib/oe/lsb.py
@@ -1,19 +1,26 @@
+def get_os_release():
+    """Get all key-value pairs from /etc/os-release as a dict"""
+    from collections import OrderedDict
+
+    data = OrderedDict()
+    if os.path.exists('/etc/os-release'):
+        with open('/etc/os-release') as f:
+            for line in f:
+                try:
+                    key, val = line.rstrip().split('=', 1)
+                except ValueError:
+                    continue
+                data[key.strip()] = val.strip('"')
+    return data
+
 def release_dict_osr():
     """ Populate a dict with pertinent values from /etc/os-release """
-    if not os.path.exists('/etc/os-release'):
-        return None
-
     data = {}
-    with open('/etc/os-release') as f:
-        for line in f:
-            try:
-                key, val = line.rstrip().split('=', 1)
-            except ValueError:
-                continue
-            if key == 'ID':
-                data['DISTRIB_ID'] = val.strip('"')
-            if key == 'VERSION_ID':
-                data['DISTRIB_RELEASE'] = val.strip('"')
+    os_release = get_os_release()
+    if 'ID' in os_release:
+        data['DISTRIB_ID'] = os_release['ID']
+    if 'VERSION_ID' in os_release:
+        data['DISTRIB_RELEASE'] = os_release['VERSION_ID']
 
     return data
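get_os_release() parses simple KEY=value pairs, and release_dict_osr() then maps ID and VERSION_ID onto the DISTRIB_* names used elsewhere. A self-contained version of the same parsing rule, with illustrative file content:

    import io

    def parse_os_release(fileobj):
        # Split on the first '=', skip malformed lines, and strip
        # surrounding quotes from values, as get_os_release() does.
        data = {}
        for line in fileobj:
            try:
                key, val = line.rstrip().split('=', 1)
            except ValueError:
                continue
            data[key.strip()] = val.strip('"')
        return data

    parse_os_release(io.StringIO('ID="ubuntu"\nVERSION_ID="16.04"\n'))
    # -> {'ID': 'ubuntu', 'VERSION_ID': '16.04'}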
 
diff --git a/import-layers/yocto-poky/meta/lib/oe/package.py b/import-layers/yocto-poky/meta/lib/oe/package.py
index 4797e7d..1e5c3aa 100644
--- a/import-layers/yocto-poky/meta/lib/oe/package.py
+++ b/import-layers/yocto-poky/meta/lib/oe/package.py
@@ -45,6 +45,115 @@
     return
 
 
+def strip_execs(pn, dstdir, strip_cmd, libdir, base_libdir, qa_already_stripped=False):
+    """
+    Strip executable code (like executables, shared libraries) _in_place_
+    - Based on sysroot_strip in staging.bbclass
+    :param dstdir: directory in which to strip files
+    :param strip_cmd: Strip command (usually ${STRIP})
+    :param libdir: ${libdir} - strip .so files in this directory
+    :param base_libdir: ${base_libdir} - strip .so files in this directory
+    :param qa_already_stripped: Set to True if 'already-stripped' is in ${INSANE_SKIP}.
+    This is for proper logging and messages only.
+    """
+    import stat, errno, oe.path, oe.utils, mmap
+
+    # Detect .ko module by searching for "vermagic=" string
+    def is_kernel_module(path):
+        with open(path) as f:
+            return mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ).find(b"vermagic=") >= 0
+
+    # Return type (bits):
+    # 0 - not elf
+    # 1 - ELF
+    # 2 - stripped
+    # 4 - executable
+    # 8 - shared library
+    # 16 - kernel module
+    def is_elf(path):
+        exec_type = 0
+        ret, result = oe.utils.getstatusoutput(
+            "file \"%s\"" % path.replace("\"", "\\\""))
+
+        if ret:
+            bb.error("split_and_strip_files: 'file %s' failed" % path)
+            return exec_type
+
+        if "ELF" in result:
+            exec_type |= 1
+            if "not stripped" not in result:
+                exec_type |= 2
+            if "executable" in result:
+                exec_type |= 4
+            if "shared" in result:
+                exec_type |= 8
+            if "relocatable" in result and is_kernel_module(path):
+                exec_type |= 16
+        return exec_type
+
+    elffiles = {}
+    inodes = {}
+    libdir = os.path.abspath(dstdir + os.sep + libdir)
+    base_libdir = os.path.abspath(dstdir + os.sep + base_libdir)
+    exec_mask = stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH
+    #
+    # First lets figure out all of the files we may have to process
+    #
+    for root, dirs, files in os.walk(dstdir):
+        for f in files:
+            file = os.path.join(root, f)
+
+            try:
+                ltarget = oe.path.realpath(file, dstdir, False)
+                s = os.lstat(ltarget)
+            except OSError as e:
+                (err, strerror) = e.args
+                if err != errno.ENOENT:
+                    raise
+                # Skip broken symlinks
+                continue
+            if not s:
+                continue
+            # Check whether it is an executable
+            if s[stat.ST_MODE] & exec_mask \
+                    or ((file.startswith(libdir) or file.startswith(base_libdir)) and ".so" in f) \
+                    or file.endswith('.ko'):
+                # If it's a symlink, and points to an ELF file, we capture the readlink target
+                if os.path.islink(file):
+                    continue
+
+                # It's a file (or hardlink), not a link
+                # ...but is it ELF, and is it already stripped?
+                elf_file = is_elf(file)
+                if elf_file & 1:
+                    if elf_file & 2:
+                        if qa_already_stripped:
+                            bb.note("Skipping file %s from %s for already-stripped QA test" % (file[len(dstdir):], pn))
+                        else:
+                            bb.warn("File '%s' from %s was already stripped, this will prevent future debugging!" % (file[len(dstdir):], pn))
+                        continue
+
+                    if s.st_ino in inodes:
+                        os.unlink(file)
+                        os.link(inodes[s.st_ino], file)
+                    else:
+                        # break hardlinks so that we do not strip the original.
+                        inodes[s.st_ino] = file
+                        bb.utils.copyfile(file, file)
+                        elffiles[file] = elf_file
+
+    #
+    # Now strip them (in parallel)
+    #
+    sfiles = []
+    for file in elffiles:
+        elf_file = int(elffiles[file])
+        sfiles.append((file, elf_file, strip_cmd))
+
+    oe.utils.multiprocess_exec(sfiles, runstrip)
+
+
+
 def file_translate(file):
     ft = file.replace("@", "@at@")
     ft = ft.replace(" ", "@space@")
@@ -67,8 +176,7 @@
 
     def process_deps(pipe, pkg, pkgdest, provides, requires):
         file = None
-        for line in pipe:
-            line = line.decode("utf-8")
+        for line in pipe.split("\n"):
 
             m = file_re.match(line)
             if m:
@@ -117,12 +225,8 @@
 
         return provides, requires
 
-    try:
-        dep_popen = subprocess.Popen(shlex.split(rpmdeps) + pkgfiles, stdout=subprocess.PIPE)
-        provides, requires = process_deps(dep_popen.stdout, pkg, pkgdest, provides, requires)
-    except OSError as e:
-        bb.error("rpmdeps: '%s' command failed, '%s'" % (shlex.split(rpmdeps) + pkgfiles, e))
-        raise e
+    output = subprocess.check_output(shlex.split(rpmdeps) + pkgfiles, stderr=subprocess.STDOUT).decode("utf-8")
+    provides, requires = process_deps(output, pkg, pkgdest, provides, requires)
 
     return (pkg, provides, requires)
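
The strip logic above records what is_elf() reports as a small bitmask (1 = ELF, 2 = already stripped, 8 = shared object, 16 = relocatable kernel module; bit 4 is set by a check earlier in the function, outside this hunk, presumably for executables). A minimal standalone sketch, not part of the patch, of how such a mask could be decoded for logging:

    def describe_elf_type(elf_file):
        # Hypothetical helper: turn the bitmask into a readable string.
        flags = []
        if elf_file & 1:
            flags.append("ELF")
        if elf_file & 2:
            flags.append("already stripped")
        if elf_file & 4:
            flags.append("executable")      # assumed meaning of bit 4
        if elf_file & 8:
            flags.append("shared object")
        if elf_file & 16:
            flags.append("kernel module")
        return ", ".join(flags) or "not ELF"

    print(describe_elf_type(1 | 4 | 8))     # -> ELF, executable, shared object
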
 
diff --git a/import-layers/yocto-poky/meta/lib/oe/package_manager.py b/import-layers/yocto-poky/meta/lib/oe/package_manager.py
index 3a2daad..0c5d907 100644
--- a/import-layers/yocto-poky/meta/lib/oe/package_manager.py
+++ b/import-layers/yocto-poky/meta/lib/oe/package_manager.py
@@ -17,18 +17,11 @@
 def create_index(arg):
     index_cmd = arg
 
-    try:
-        bb.note("Executing '%s' ..." % index_cmd)
-        result = subprocess.check_output(index_cmd, stderr=subprocess.STDOUT, shell=True).decode("utf-8")
-    except subprocess.CalledProcessError as e:
-        return("Index creation command '%s' failed with return code %d:\n%s" %
-               (e.cmd, e.returncode, e.output.decode("utf-8")))
-
+    bb.note("Executing '%s' ..." % index_cmd)
+    result = subprocess.check_output(index_cmd, stderr=subprocess.STDOUT, shell=True).decode("utf-8")
     if result:
         bb.note(result)
 
-    return None
-
 """
 This method parses the output from the package manager and returns
 a dictionary with information about the packages. This is used
@@ -104,13 +97,25 @@
 class RpmIndexer(Indexer):
     def write_index(self):
         if self.d.getVar('PACKAGE_FEED_SIGN') == '1':
-            raise NotImplementedError('Package feed signing not yet implementd for rpm')
+            signer = get_signer(self.d, self.d.getVar('PACKAGE_FEED_GPG_BACKEND'))
+        else:
+            signer = None
 
         createrepo_c = bb.utils.which(os.environ['PATH'], "createrepo_c")
         result = create_index("%s --update -q %s" % (createrepo_c, self.deploy_dir))
         if result:
             bb.fatal(result)
 
+        # Sign repomd
+        if signer:
+            sig_type = self.d.getVar('PACKAGE_FEED_GPG_SIGNATURE_TYPE')
+            is_ascii_sig = (sig_type.upper() != "BIN")
+            signer.detach_sign(os.path.join(self.deploy_dir, 'repodata', 'repomd.xml'),
+                               self.d.getVar('PACKAGE_FEED_GPG_NAME'),
+                               self.d.getVar('PACKAGE_FEED_GPG_PASSPHRASE_FILE'),
+                               armor=is_ascii_sig)
+
+
 class OpkgIndexer(Indexer):
     def write_index(self):
         arch_vars = ["ALL_MULTILIB_PACKAGE_ARCHS",
@@ -152,9 +157,7 @@
             bb.note("There are no packages in %s!" % self.deploy_dir)
             return
 
-        result = oe.utils.multiprocess_exec(index_cmds, create_index)
-        if result:
-            bb.fatal('%s' % ('\n'.join(result)))
+        oe.utils.multiprocess_exec(index_cmds, create_index)
 
         if signer:
             feed_sig_type = self.d.getVar('PACKAGE_FEED_GPG_SIGNATURE_TYPE')
@@ -220,7 +223,7 @@
 
             cmd = "cd %s; PSEUDO_UNLOAD=1 %s packages . > Packages;" % (arch_dir, apt_ftparchive)
 
-            cmd += "%s -fc Packages > Packages.gz;" % gzip
+            cmd += "%s -fcn Packages > Packages.gz;" % gzip
 
             with open(os.path.join(arch_dir, "Release"), "w+") as release:
                 release.write("Label: %s\n" % arch)
@@ -235,9 +238,7 @@
             bb.note("There are no packages in %s" % self.deploy_dir)
             return
 
-        result = oe.utils.multiprocess_exec(index_cmds, create_index)
-        if result:
-            bb.fatal('%s' % ('\n'.join(result)))
+        oe.utils.multiprocess_exec(index_cmds, create_index)
         if self.d.getVar('PACKAGE_FEED_SIGN') == '1':
+            raise NotImplementedError('Package feed signing not implemented for dpkg')
 
@@ -548,6 +549,14 @@
         if feed_uris == "":
             return
 
+        gpg_opts = ''
+        if self.d.getVar('PACKAGE_FEED_SIGN') == '1':
+            gpg_opts += 'repo_gpgcheck=1\n'
+            gpg_opts += 'gpgkey=file://%s/pki/packagefeed-gpg/PACKAGEFEED-GPG-KEY-%s-%s\n' % (self.d.getVar('sysconfdir'), self.d.getVar('DISTRO'), self.d.getVar('DISTRO_CODENAME'))
+
+        if self.d.getVar('RPM_SIGN_PACKAGES') == '0':
+            gpg_opts += 'gpgcheck=0\n'
+
         bb.utils.mkdirhier(oe.path.join(self.target_rootfs, "etc", "yum.repos.d"))
         remote_uris = self.construct_uris(feed_uris.split(), feed_base_paths.split())
         for uri in remote_uris:
@@ -558,12 +567,12 @@
                     repo_id   = "oe-remote-repo"  + "-".join(urlparse(repo_uri).path.split("/"))
                     repo_name = "OE Remote Repo:" + " ".join(urlparse(repo_uri).path.split("/"))
                     open(oe.path.join(self.target_rootfs, "etc", "yum.repos.d", repo_base + ".repo"), 'a').write(
-                             "[%s]\nname=%s\nbaseurl=%s\n\n" % (repo_id, repo_name, repo_uri))
+                             "[%s]\nname=%s\nbaseurl=%s\n%s\n" % (repo_id, repo_name, repo_uri, gpg_opts))
             else:
                 repo_name = "OE Remote Repo:" + " ".join(urlparse(uri).path.split("/"))
                 repo_uri = uri
                 open(oe.path.join(self.target_rootfs, "etc", "yum.repos.d", repo_base + ".repo"), 'w').write(
-                             "[%s]\nname=%s\nbaseurl=%s\n" % (repo_base, repo_name, repo_uri))
+                             "[%s]\nname=%s\nbaseurl=%s\n%s" % (repo_base, repo_name, repo_uri, gpg_opts))
 
     def _prepare_pkg_transaction(self):
         os.environ['D'] = self.target_rootfs
@@ -608,10 +617,12 @@
             self._invoke_dnf(["remove"] + pkgs)
         else:
             cmd = bb.utils.which(os.getenv('PATH'), "rpm")
-            args = ["-e", "--nodeps", "--root=%s" %self.target_rootfs]
+            args = ["-e", "-v", "--nodeps", "--root=%s" %self.target_rootfs]
 
             try:
+                bb.note("Running %s" % ' '.join([cmd] + args + pkgs))
                 output = subprocess.check_output([cmd] + args + pkgs, stderr=subprocess.STDOUT).decode("utf-8")
+                bb.note(output)
             except subprocess.CalledProcessError as e:
                 bb.fatal("Could not invoke rpm. Command "
                      "'%s' returned %d:\n%s" % (' '.join([cmd] + args + pkgs), e.returncode, e.output.decode("utf-8")))
@@ -682,7 +693,7 @@
         return packages
 
     def update(self):
-        self._invoke_dnf(["makecache"])
+        self._invoke_dnf(["makecache", "--refresh"])
 
     def _invoke_dnf(self, dnf_args, fatal = True, print_output = True ):
         os.environ['RPM_ETCCONFIGDIR'] = self.target_rootfs
@@ -1145,7 +1156,7 @@
 
         # Create a temp dir as opkg root for dummy installation
         temp_rootfs = self.d.expand('${T}/opkg')
-        opkg_lib_dir = self.d.getVar('OPKGLIBDIR', True)
+        opkg_lib_dir = self.d.getVar('OPKGLIBDIR')
         if opkg_lib_dir[0] == "/":
             opkg_lib_dir = opkg_lib_dir[1:]
         temp_opkg_dir = os.path.join(temp_rootfs, opkg_lib_dir, 'opkg')
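
With the package_manager.py change above, each generated yum .repo file gains repo_gpgcheck/gpgkey lines when PACKAGE_FEED_SIGN is '1' (and gpgcheck=0 when RPM_SIGN_PACKAGES is '0'). A rough sketch of the resulting file content, using made-up repo, distro and feed values:

    # Sketch only: mirrors the format strings used above with placeholder values.
    repo_id   = "oe-remote-repo-rpm"
    repo_name = "OE Remote Repo: rpm"
    repo_uri  = "http://example.com/feeds/rpm"          # placeholder feed URI
    gpg_opts  = ("repo_gpgcheck=1\n"
                 "gpgkey=file:///etc/pki/packagefeed-gpg/"
                 "PACKAGEFEED-GPG-KEY-mydistro-codename\n")   # placeholder key path

    print("[%s]\nname=%s\nbaseurl=%s\n%s" % (repo_id, repo_name, repo_uri, gpg_opts))
    # [oe-remote-repo-rpm]
    # name=OE Remote Repo: rpm
    # baseurl=http://example.com/feeds/rpm
    # repo_gpgcheck=1
    # gpgkey=file:///etc/pki/packagefeed-gpg/PACKAGEFEED-GPG-KEY-mydistro-codename
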
diff --git a/import-layers/yocto-poky/meta/lib/oe/patch.py b/import-layers/yocto-poky/meta/lib/oe/patch.py
index f1ab3dd..584bf6c 100644
--- a/import-layers/yocto-poky/meta/lib/oe/patch.py
+++ b/import-layers/yocto-poky/meta/lib/oe/patch.py
@@ -1,4 +1,5 @@
 import oe.path
+import oe.types
 
 class NotFoundError(bb.BBHandledException):
     def __init__(self, path):
diff --git a/import-layers/yocto-poky/meta/lib/oe/path.py b/import-layers/yocto-poky/meta/lib/oe/path.py
index 448a2b9..1ea03d5 100644
--- a/import-layers/yocto-poky/meta/lib/oe/path.py
+++ b/import-layers/yocto-poky/meta/lib/oe/path.py
@@ -98,7 +98,7 @@
     if (os.stat(src).st_dev ==  os.stat(dst).st_dev):
         # Need to copy directories only with tar first since cp will error if two 
         # writers try and create a directory at the same time
-        cmd = "cd %s; find . -type d -print | tar --xattrs --xattrs-include='*' -cf - -C %s -p --no-recursion --files-from - | tar --xattrs --xattrs-include='*' -xf - -C %s" % (src, src, dst)
+        cmd = "cd %s; find . -type d -print | tar --xattrs --xattrs-include='*' -cf - -C %s -p --no-recursion --files-from - | tar --xattrs --xattrs-include='*' -xhf - -C %s" % (src, src, dst)
         subprocess.check_output(cmd, shell=True, stderr=subprocess.STDOUT)
         source = ''
         if os.path.isdir(src):
diff --git a/import-layers/yocto-poky/meta/lib/oe/recipeutils.py b/import-layers/yocto-poky/meta/lib/oe/recipeutils.py
index a7fdd36..cab8e40 100644
--- a/import-layers/yocto-poky/meta/lib/oe/recipeutils.py
+++ b/import-layers/yocto-poky/meta/lib/oe/recipeutils.py
@@ -2,7 +2,7 @@
 #
 # Some code borrowed from the OE layer index
 #
-# Copyright (C) 2013-2016 Intel Corporation
+# Copyright (C) 2013-2017 Intel Corporation
 #
 
 import sys
@@ -188,6 +188,11 @@
             for wrapline in wrapped[:-1]:
                 addlines.append('%s \\%s' % (wrapline, newline))
             addlines.append('%s%s' % (wrapped[-1], newline))
+
+        # Split on newlines - this isn't strictly necessary if you are only
+        # going to write the output to disk, but if you want to compare it
+        # (as patch_recipe_file() will do if patch=True) then it's important.
+        addlines = [line for l in addlines for line in l.splitlines(True)]
         if rewindcomments:
             # Ensure we insert the lines before any leading comments
             # (that we'd want to ensure remain leading the next value)
@@ -320,7 +325,7 @@
 
 
 
-def copy_recipe_files(d, tgt_dir, whole_dir=False, download=True):
+def copy_recipe_files(d, tgt_dir, whole_dir=False, download=True, all_variants=False):
     """Copy (local) recipe files, including both files included via include/require,
     and files referred to in the SRC_URI variable."""
     import bb.fetch2
@@ -328,18 +333,41 @@
 
     # FIXME need a warning if the unexpanded SRC_URI value contains variable references
 
-    uris = (d.getVar('SRC_URI') or "").split()
-    fetch = bb.fetch2.Fetch(uris, d)
-    if download:
-        fetch.download()
+    uri_values = []
+    localpaths = []
+    def fetch_urls(rdata):
+        # Collect the local paths from SRC_URI
+        srcuri = rdata.getVar('SRC_URI') or ""
+        if srcuri not in uri_values:
+            fetch = bb.fetch2.Fetch(srcuri.split(), rdata)
+            if download:
+                fetch.download()
+            for pth in fetch.localpaths():
+                if pth not in localpaths:
+                    localpaths.append(pth)
+            uri_values.append(srcuri)
+
+    fetch_urls(d)
+    if all_variants:
+        # Get files for other variants e.g. in the case of a SRC_URI_append
+        localdata = bb.data.createCopy(d)
+        variants = (localdata.getVar('BBCLASSEXTEND') or '').split()
+        if variants:
+            # Ensure we handle class-target if we're dealing with one of the variants
+            variants.append('target')
+            for variant in variants:
+                localdata.setVar('CLASSOVERRIDE', 'class-%s' % variant)
+                fetch_urls(localdata)
 
     # Copy local files to target directory and gather any remote files
-    bb_dir = os.path.dirname(d.getVar('FILE')) + os.sep
+    bb_dir = os.path.abspath(os.path.dirname(d.getVar('FILE'))) + os.sep
     remotes = []
     copied = []
-    includes = [path for path in d.getVar('BBINCLUDED').split() if
-                path.startswith(bb_dir) and os.path.exists(path)]
-    for path in fetch.localpaths() + includes:
+    # Need to do this in two steps since we want to check against the absolute path
+    includes = [os.path.abspath(path) for path in d.getVar('BBINCLUDED').split() if os.path.exists(path)]
+    # We also check this below, but we don't want any items in this list being considered remotes
+    includes = [path for path in includes if path.startswith(bb_dir)]
+    for path in localpaths + includes:
         # Only import files that are under the meta directory
         if path.startswith(bb_dir):
             if not whole_dir:
@@ -778,7 +806,7 @@
 
 def find_layerdir(fn):
     """ Figure out the path to the base of the layer containing a file (e.g. a recipe)"""
-    pth = fn
+    pth = os.path.abspath(fn)
     layerdir = ''
     while pth:
         if os.path.exists(os.path.join(pth, 'conf', 'layer.conf')):
diff --git a/import-layers/yocto-poky/meta/lib/oe/rootfs.py b/import-layers/yocto-poky/meta/lib/oe/rootfs.py
index 96591f3..754ef56 100644
--- a/import-layers/yocto-poky/meta/lib/oe/rootfs.py
+++ b/import-layers/yocto-poky/meta/lib/oe/rootfs.py
@@ -261,15 +261,22 @@
             # Remove components that we don't need if it's a read-only rootfs
             unneeded_pkgs = self.d.getVar("ROOTFS_RO_UNNEEDED").split()
             pkgs_installed = image_list_installed_packages(self.d)
-            # Make sure update-alternatives is last on the command line, so
-            # that it is removed last. This makes sure that its database is
-            # available while uninstalling packages, allowing alternative
-            # symlinks of packages to be uninstalled to be managed correctly.
+            # Make sure update-alternatives is removed last. This is
+            # because its database has to be available while uninstalling
+            # other packages, so that the alternative symlinks of the
+            # packages being uninstalled can be managed correctly.
             provider = self.d.getVar("VIRTUAL-RUNTIME_update-alternatives")
             pkgs_to_remove = sorted([pkg for pkg in pkgs_installed if pkg in unneeded_pkgs], key=lambda x: x == provider)
 
+            # The update-alternatives provider is removed in its own remove()
+            # call because not all package managers guarantee that packages
+            # are removed in the order they are given in the list (which is
+            # passed to the command line). The sorting done earlier is
+            # used to implement the two-stage removal.
+            if len(pkgs_to_remove) > 1:
+                self.pm.remove(pkgs_to_remove[:-1], False)
             if len(pkgs_to_remove) > 0:
-                self.pm.remove(pkgs_to_remove, False)
+                self.pm.remove([pkgs_to_remove[-1]], False)
 
         if delayed_postinsts:
             self._save_postinsts()
@@ -302,10 +309,11 @@
             bb.note("> Executing %s intercept ..." % script)
 
             try:
-                subprocess.check_output(script_full)
+                output = subprocess.check_output(script_full, stderr=subprocess.STDOUT)
+                if output: bb.note(output.decode("utf-8"))
             except subprocess.CalledProcessError as e:
-                bb.warn("The postinstall intercept hook '%s' failed (exit code: %d)! See log for details! (Output: %s)" %
-                        (script, e.returncode, e.output))
+                bb.warn("The postinstall intercept hook '%s' failed, details in log.do_rootfs" % script)
+                bb.note("Exit code %d. Output:\n%s" % (e.returncode, e.output.decode("utf-8")))
 
                 with open(script_full) as intercept:
                     registered_pkgs = None
@@ -524,7 +532,8 @@
             self.pm.save_rpmpostinst(pkg)
 
     def _cleanup(self):
-        pass
+        self.pm._invoke_dnf(["clean", "all"])
+
 
 class DpkgOpkgRootfs(Rootfs):
     def __init__(self, d, progress_reporter=None, logcatcher=None):
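
The two-stage removal above relies on the fact that sorting on a boolean key moves the update-alternatives provider to the end of the list, so it can be removed in a separate, final remove() call. A small standalone illustration with made-up package names:

    provider = "update-alternatives-opkg"            # example provider name
    unneeded = ["shadow", "update-alternatives-opkg", "base-passwd"]
    pkgs_to_remove = sorted(unneeded, key=lambda x: x == provider)
    print(pkgs_to_remove[:-1])   # ['shadow', 'base-passwd']        removed first
    print(pkgs_to_remove[-1:])   # ['update-alternatives-opkg']     removed last
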
diff --git a/import-layers/yocto-poky/meta/lib/oe/sdk.py b/import-layers/yocto-poky/meta/lib/oe/sdk.py
index 9fe1687..a3a6c39 100644
--- a/import-layers/yocto-poky/meta/lib/oe/sdk.py
+++ b/import-layers/yocto-poky/meta/lib/oe/sdk.py
@@ -152,6 +152,8 @@
         pm.install(pkgs_attempt, True)
 
     def _populate(self):
+        execute_pre_post_process(self.d, self.d.getVar("POPULATE_SDK_PRE_TARGET_COMMAND"))
+
         bb.note("Installing TARGET packages")
         self._populate_sysroot(self.target_pm, self.target_manifest)
 
@@ -233,6 +235,8 @@
                            [False, True][pkg_type == Manifest.PKG_TYPE_ATTEMPT_ONLY])
 
     def _populate(self):
+        execute_pre_post_process(self.d, self.d.getVar("POPULATE_SDK_PRE_TARGET_COMMAND"))
+
         bb.note("Installing TARGET packages")
         self._populate_sysroot(self.target_pm, self.target_manifest)
 
@@ -315,6 +319,8 @@
                            [False, True][pkg_type == Manifest.PKG_TYPE_ATTEMPT_ONLY])
 
     def _populate(self):
+        execute_pre_post_process(self.d, self.d.getVar("POPULATE_SDK_PRE_TARGET_COMMAND"))
+
         bb.note("Installing TARGET packages")
         self._populate_sysroot(self.target_pm, self.target_manifest)
 
@@ -379,5 +385,24 @@
     os.environ.clear()
     os.environ.update(env_bkp)
 
+def get_extra_sdkinfo(sstate_dir):
+    """
+    This function is used when generating the target and host manifest files of the eSDK; it collects per-task and per-file sizes of the sstate packages.
+    """
+    import math
+    
+    extra_info = {}
+    extra_info['tasksizes'] = {}
+    extra_info['filesizes'] = {}
+    for root, _, files in os.walk(sstate_dir):
+        for fn in files:
+            if fn.endswith('.tgz'):
+                fsize = int(math.ceil(float(os.path.getsize(os.path.join(root, fn))) / 1024))
+                task = fn.rsplit(':',1)[1].split('_',1)[1].split(',')[0]
+                origtotal = extra_info['tasksizes'].get(task, 0)
+                extra_info['tasksizes'][task] = origtotal + fsize
+                extra_info['filesizes'][fn] = fsize
+    return extra_info
+
 if __name__ == "__main__":
     pass
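
get_extra_sdkinfo() above derives the per-task key from each sstate archive name with a chain of splits. Tracing that chain on a hypothetical filename (the real sstate naming scheme may differ) shows what ends up as the dictionary key:

    # Sketch: tracing the filename parsing on a made-up sstate archive name.
    fn = "sstate:zlib:core2-64-poky-linux:1.2.11:r0:core2-64:3:0123abcd_populate_sysroot.tgz"
    task = fn.rsplit(':', 1)[1].split('_', 1)[1].split(',')[0]
    print(task)   # 'populate_sysroot.tgz' - the archive suffix is kept, as in the code above
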
diff --git a/import-layers/yocto-poky/meta/lib/oe/sstatesig.py b/import-layers/yocto-poky/meta/lib/oe/sstatesig.py
index b8dd4c8..3a8778e 100644
--- a/import-layers/yocto-poky/meta/lib/oe/sstatesig.py
+++ b/import-layers/yocto-poky/meta/lib/oe/sstatesig.py
@@ -29,7 +29,7 @@
         return True
 
     # Quilt (patch application) changing isn't likely to affect anything
-    excludelist = ['quilt-native', 'subversion-native', 'git-native']
+    excludelist = ['quilt-native', 'subversion-native', 'git-native', 'ccache-native']
     if depname in excludelist and recipename != depname:
         return False
 
@@ -320,7 +320,7 @@
 
     if not taskhashlist or (len(filedates) < 2 and not foundall):
         # That didn't work, look in sstate-cache
-        hashes = taskhashlist or ['*']
+        hashes = taskhashlist or ['?' * 32]
         localdata = bb.data.createCopy(d)
         for hashval in hashes:
             localdata.setVar('PACKAGE_ARCH', '*')
diff --git a/import-layers/yocto-poky/meta/lib/oe/terminal.py b/import-layers/yocto-poky/meta/lib/oe/terminal.py
index 2f18ec0..94afe39 100644
--- a/import-layers/yocto-poky/meta/lib/oe/terminal.py
+++ b/import-layers/yocto-poky/meta/lib/oe/terminal.py
@@ -292,6 +292,8 @@
             vernum = ver.split(' ')[-1]
         if ver.startswith('GNOME Terminal'):
             vernum = ver.split(' ')[-1]
+        if ver.startswith('MATE Terminal'):
+            vernum = ver.split(' ')[-1]
         if ver.startswith('tmux'):
             vernum = ver.split()[-1]
     return vernum
diff --git a/import-layers/yocto-poky/meta/lib/oe/useradd.py b/import-layers/yocto-poky/meta/lib/oe/useradd.py
new file mode 100644
index 0000000..179ac76
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oe/useradd.py
@@ -0,0 +1,68 @@
+import argparse
+import re
+
+class myArgumentParser(argparse.ArgumentParser):
+    def _print_message(self, message, file=None):
+        bb.warn("%s - %s: %s" % (d.getVar('PN'), pkg, message))
+
+    # This should never be called...
+    def exit(self, status=0, message=None):
+        message = message or ("%s - %s: useradd.bbclass: Argument parsing exited" % (d.getVar('PN'), pkg))
+        error(message)
+
+    def error(self, message):
+        raise bb.build.FuncFailed(message)
+
+def split_commands(params):
+    params = re.split('''[ \t]*;[ \t]*(?=(?:[^'"]|'[^']*'|"[^"]*")*$)''', params.strip())
+    # Remove any empty items
+    return [x for x in params if x]
+
+def split_args(params):
+    params = re.split('''[ \t]+(?=(?:[^'"]|'[^']*'|"[^"]*")*$)''', params.strip())
+    # Remove any empty items
+    return [x for x in params if x]
+
+def build_useradd_parser():
+    # The following comes from --help on useradd from shadow
+    parser = myArgumentParser(prog='useradd')
+    parser.add_argument("-b", "--base-dir", metavar="BASE_DIR", help="base directory for the home directory of the new account")
+    parser.add_argument("-c", "--comment", metavar="COMMENT", help="GECOS field of the new account")
+    parser.add_argument("-d", "--home-dir", metavar="HOME_DIR", help="home directory of the new account")
+    parser.add_argument("-D", "--defaults", help="print or change default useradd configuration", action="store_true")
+    parser.add_argument("-e", "--expiredate", metavar="EXPIRE_DATE", help="expiration date of the new account")
+    parser.add_argument("-f", "--inactive", metavar="INACTIVE", help="password inactivity period of the new account")
+    parser.add_argument("-g", "--gid", metavar="GROUP", help="name or ID of the primary group of the new account")
+    parser.add_argument("-G", "--groups", metavar="GROUPS", help="list of supplementary groups of the new account")
+    parser.add_argument("-k", "--skel", metavar="SKEL_DIR", help="use this alternative skeleton directory")
+    parser.add_argument("-K", "--key", metavar="KEY=VALUE", help="override /etc/login.defs defaults")
+    parser.add_argument("-l", "--no-log-init", help="do not add the user to the lastlog and faillog databases", action="store_true")
+    parser.add_argument("-m", "--create-home", help="create the user's home directory", action="store_const", const=True)
+    parser.add_argument("-M", "--no-create-home", dest="create_home", help="do not create the user's home directory", action="store_const", const=False)
+    parser.add_argument("-N", "--no-user-group", dest="user_group", help="do not create a group with the same name as the user", action="store_const", const=False)
+    parser.add_argument("-o", "--non-unique", help="allow to create users with duplicate (non-unique UID)", action="store_true")
+    parser.add_argument("-p", "--password", metavar="PASSWORD", help="encrypted password of the new account")
+    parser.add_argument("-P", "--clear-password", metavar="CLEAR_PASSWORD", help="use this clear password for the new account")
+    parser.add_argument("-R", "--root", metavar="CHROOT_DIR", help="directory to chroot into")
+    parser.add_argument("-r", "--system", help="create a system account", action="store_true")
+    parser.add_argument("-s", "--shell", metavar="SHELL", help="login shell of the new account")
+    parser.add_argument("-u", "--uid", metavar="UID", help="user ID of the new account")
+    parser.add_argument("-U", "--user-group", help="create a group with the same name as the user", action="store_const", const=True)
+    parser.add_argument("LOGIN", help="Login name of the new user")
+
+    return parser
+
+def build_groupadd_parser():
+    # The following comes from --help on groupadd from shadow
+    parser = myArgumentParser(prog='groupadd')
+    parser.add_argument("-f", "--force", help="exit successfully if the group already exists, and cancel -g if the GID is already used", action="store_true")
+    parser.add_argument("-g", "--gid", metavar="GID", help="use GID for the new group")
+    parser.add_argument("-K", "--key", metavar="KEY=VALUE", help="override /etc/login.defs defaults")
+    parser.add_argument("-o", "--non-unique", help="allow to create groups with duplicate (non-unique) GID", action="store_true")
+    parser.add_argument("-p", "--password", metavar="PASSWORD", help="use this encrypted password for the new group")
+    parser.add_argument("-P", "--clear-password", metavar="CLEAR_PASSWORD", help="use this clear password for the new group")
+    parser.add_argument("-R", "--root", metavar="CHROOT_DIR", help="directory to chroot into")
+    parser.add_argument("-r", "--system", help="create a system account", action="store_true")
+    parser.add_argument("GROUP", help="Group name of the new group")
+
+    return parser
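
The regular expressions in split_commands()/split_args() above split on ';' and whitespace only when they occur outside quotes. A standalone approximation using plain argparse (outside the bbclass environment, so the myArgumentParser wrapper and bb are not involved), fed a hypothetical USERADD_PARAM-style value:

    import argparse
    import re

    def split_commands(params):
        params = re.split('''[ \t]*;[ \t]*(?=(?:[^'"]|'[^']*'|"[^"]*")*$)''', params.strip())
        return [x for x in params if x]

    def split_args(params):
        params = re.split('''[ \t]+(?=(?:[^'"]|'[^']*'|"[^"]*")*$)''', params.strip())
        return [x for x in params if x]

    parser = argparse.ArgumentParser(prog='useradd')
    parser.add_argument("-r", "--system", action="store_true")
    parser.add_argument("-s", "--shell")
    parser.add_argument("-d", "--home-dir")
    parser.add_argument("LOGIN")

    # Hypothetical USERADD_PARAM value containing two ';'-separated commands
    params = "-r -s /bin/false -d /var/lib/foo foouser; -r baruser"
    for cmd in split_commands(params):
        print(parser.parse_args(split_args(cmd)))   # one Namespace per command
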
diff --git a/import-layers/yocto-poky/meta/lib/oe/utils.py b/import-layers/yocto-poky/meta/lib/oe/utils.py
index 330a5ff..643ab78 100644
--- a/import-layers/yocto-poky/meta/lib/oe/utils.py
+++ b/import-layers/yocto-poky/meta/lib/oe/utils.py
@@ -126,6 +126,46 @@
     if addfeatures:
         d.appendVar(var, " " + " ".join(addfeatures))
 
+def all_distro_features(d, features, truevalue="1", falsevalue=""):
+    """
+    Returns truevalue if *all* given features are set in DISTRO_FEATURES,
+    else falsevalue. The features can be given as single string or anything
+    that can be turned into a set.
+
+    This is a shorter, more flexible version of
+    bb.utils.contains("DISTRO_FEATURES", features, truevalue, falsevalue, d).
+
+    Without explicit true/false values it can be used directly where
+    Python expects a boolean:
+       if oe.utils.all_distro_features(d, "foo bar"):
+           bb.fatal("foo and bar are mutually exclusive DISTRO_FEATURES")
+
+    With just a truevalue, it can be used to include files that are meant to be
+    used only when requested via DISTRO_FEATURES:
+       require ${@ oe.utils.all_distro_features(d, "foo bar", "foo-and-bar.inc")}
+    """
+    return bb.utils.contains("DISTRO_FEATURES", features, truevalue, falsevalue, d)
+
+def any_distro_features(d, features, truevalue="1", falsevalue=""):
+    """
+    Returns truevalue if at least *one* of the given features is set in DISTRO_FEATURES,
+    else falsevalue. The features can be given as single string or anything
+    that can be turned into a set.
+
+    This is a shorter, more flexible version of
+    bb.utils.contains_any("DISTRO_FEATURES", features, truevalue, falsevalue, d).
+
+    Without explicit true/false values it can be used directly where
+    Python expects a boolean:
+       if not oe.utils.any_distro_features(d, "foo bar"):
+           bb.fatal("foo, bar or both must be set in DISTRO_FEATURES")
+
+    With just a truevalue, it can be used to include files that are meant to be
+    used only when requested via DISTRO_FEATURES:
+       require ${@ oe.utils.any_distro_features(d, "foo bar", "foo-or-bar.inc")}
+
+    """
+    return bb.utils.contains_any("DISTRO_FEATURES", features, truevalue, falsevalue, d)
 
 def packages_filter_out_system(d):
     """
@@ -184,25 +224,30 @@
     def init_worker():
         signal.signal(signal.SIGINT, signal.SIG_IGN)
 
+    fails = []
+
+    def failures(res):
+        fails.append(res)
+
     nproc = min(multiprocessing.cpu_count(), len(commands))
     pool = bb.utils.multiprocessingpool(nproc, init_worker)
-    imap = pool.imap(function, commands)
 
     try:
-        res = list(imap)
+        mapresult = pool.map_async(function, commands, error_callback=failures)
+
         pool.close()
         pool.join()
-        results = []
-        for result in res:
-            if result is not None:
-                results.append(result)
-        return results
-
+        results = mapresult.get()
     except KeyboardInterrupt:
         pool.terminate()
         pool.join()
         raise
 
+    if fails:
+        raise fails[0]
+
+    return results
+
 def squashspaces(string):
     import re
     return re.sub("\s+", " ", string).strip()
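
The reworked multiprocess_exec() above collects worker failures through map_async()'s error_callback instead of iterating an imap. A standalone sketch of the same pattern, with multiprocessing.Pool standing in for bb.utils.multiprocessingpool:

    import multiprocessing

    def work(n):
        if n == 3:
            raise ValueError("item %d failed" % n)
        return n * n

    if __name__ == "__main__":
        fails = []
        pool = multiprocessing.Pool(2)
        result = pool.map_async(work, [1, 2, 3, 4], error_callback=fails.append)
        pool.close()
        pool.join()
        if fails:
            print("first failure:", fails[0])   # the ValueError raised for item 3
        else:
            print(result.get())
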
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/buildperf/base.py b/import-layers/yocto-poky/meta/lib/oeqa/buildperf/base.py
index 6e62b27..7b2b4aa 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/buildperf/base.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/buildperf/base.py
@@ -485,6 +485,7 @@
     @staticmethod
     def sync():
         """Sync and drop kernel caches"""
+        runCmd2('bitbake -m', ignore_status=True)
         log.debug("Syncing and dropping kernel caches""")
         KernelDropCaches.drop()
         os.sync()
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/buildperf/test_basic.py b/import-layers/yocto-poky/meta/lib/oeqa/buildperf/test_basic.py
index a9e4a5b..a19089a 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/buildperf/test_basic.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/buildperf/test_basic.py
@@ -121,5 +121,7 @@
         self.sync()
         self.measure_cmd_resources([installer, '-y', '-d', deploy_dir],
                                    'deploy', 'eSDK deploy')
+        # Make sure bitbake is unloaded
+        self.sync()
         self.measure_disk_usage(deploy_dir, 'deploy_dir', 'deploy dir',
                                 apparent_size=True)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/controllers/masterimage.py b/import-layers/yocto-poky/meta/lib/oeqa/controllers/masterimage.py
index 07418fc..a2912fc 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/controllers/masterimage.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/controllers/masterimage.py
@@ -108,7 +108,7 @@
             time.sleep(10)
             self.power_ctl("cycle")
         else:
-            status, output = conn.run("reboot")
+            status, output = conn.run("sync; { sleep 1; reboot; } > /dev/null &")
             if status != 0:
                 bb.error("Failed rebooting target and no power control command defined. You need to manually reset the device.\n%s" % output)
 
@@ -143,7 +143,7 @@
     def _deploy(self):
         pass
 
-    def start(self, params=None):
+    def start(self, extra_bootparams=None):
         bb.plain("%s - boot test image on target" % self.pn)
         self._start()
         # set the ssh object for the target/test image
@@ -156,7 +156,7 @@
 
     def stop(self):
         bb.plain("%s - reboot/powercycle target" % self.pn)
-        self.power_cycle(self.connection)
+        self.power_cycle(self.master)
 
 
 class SystemdbootTarget(MasterImageHardwareTarget):
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/README b/import-layers/yocto-poky/meta/lib/oeqa/core/README
index 0c859fd..d4fcda4 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/README
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/README
@@ -1,38 +1,76 @@
-= OEQA Framework =
+= OEQA (v2) Framework =
 
 == Introduction ==
 
-This is the new OEQA framework the base clases of the framework
-are in this module oeqa/core the subsequent components needs to
-extend this classes.
+This is version 2 of the OEQA framework. Base classes are located in the
+'oeqa/core' directory and subsequent components must extend from these.
 
-A new/unique runner was created called oe-test and is under scripts/
-oe-test, this new runner scans over oeqa module searching for test
-components that supports OETestContextExecutor implemented in context
-module (i.e. oeqa/core/context.py).
+The main design consideration was to implement the needed functionality on
+top of the Python unittest framework. To achieve this goal, the following
+modules are used:
 
-For execute an example:
+    * oeqa/core/runner.py: Provides the OETestResult and OETestRunner base
+      classes extending the unittest classes. These classes support exporting
+      results to different formats; currently RAW and XML are supported.
 
-$ source oe-init-build-env
-$ oe-test core
+    * oeqa/core/loader.py: Provides OETestLoader extending the unittest
+      TestLoader class. It also features a unified implementation of decorator
+      support and test case filtering.
 
-For list supported components:
+    * oeqa/core/case.py: Provides the OETestCase base class extending
+      unittest.TestCase; it gives access to the Test data (td), the Test
+      context and logger functionality.
 
-$ oe-test -h
+    * oeqa/core/decorator: Provides OETestDecorator, a new class to implement
+      decorators for Test cases.
 
-== Create new Test component ==
+    * oeqa/core/context: Provides OETestContext, a high-level API for
+      loading and running the tests of a certain Test component, and
+      OETestContextExecutor, a base class that enables oe-test to
+      discover and use the Test component.
 
-Usally for add a new Test component the developer needs to extend
-OETestContext/OETestContextExecutor in context.py and OETestCase in
-case.py.
+Also, a new 'oe-test' runner is located under 'scripts'; it scans for components
+that support OETestContextExecutor (see below).
 
-== How to run the testing of the OEQA framework ==
+== Terminology ==
+
+    * Test component: The area of testing in the Project, for example: runtime, SDK, eSDK, selftest.
+
+    * Test data: Data associated with the Test component. Currently the bitbake
+      datastore is used as the Test data input.
+
+    * Test context: A context of what tests need to be run and how to do it; this additionally
+      provides access to the Test data and may have custom methods and/or attributes.
+
+== oe-test ==
+
+The new tool, oe-test, has the ability to scan the code base for test components and provide
+a unified way to run test cases. Internally it scans folders inside the oeqa module in order
+to find the specific classes that implement a test component.
+
+== Usage ==
+
+Executing the example test component
+
+    $ source oe-init-build-env
+    $ oe-test core
+
+Getting help
+
+    $ oe-test -h
+
+== Creating new Test Component ==
+
+To add a new test component, the developer needs to extend OETestContext/OETestContextExecutor
+(from context.py) and OETestCase (from case.py); a minimal sketch follows this file's diff.
+
+== Selftesting the framework ==
 
 Run all tests:
 
-$ PATH=$PATH:../../ python3 -m unittest discover -s tests
+    $ PATH=$PATH:../../ python3 -m unittest discover -s tests
 
 Run some test:
 
-$ cd tests/
-$ ./test_data.py
+    $ cd tests/
+    $ ./test_data.py
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/case.py b/import-layers/yocto-poky/meta/lib/oeqa/core/case.py
index d2dbf20..917a2aa 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/case.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/case.py
@@ -23,7 +23,7 @@
 
     # td_vars has the variables needed by a test class
     # or test case instance, if some var isn't into td a
-    # OEMissingVariable exception is raised
+    # OEQAMissingVariable exception is raised
     td_vars = None
 
     @classmethod
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/context.py b/import-layers/yocto-poky/meta/lib/oeqa/core/context.py
index 4476750..acd5474 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/context.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/context.py
@@ -7,15 +7,14 @@
 import time
 import logging
 import collections
-import re
 
 from oeqa.core.loader import OETestLoader
-from oeqa.core.runner import OETestRunner, OEStreamLogger, xmlEnabled
+from oeqa.core.runner import OETestRunner
+from oeqa.core.exception import OEQAMissingManifest, OEQATestNotFound
 
 class OETestContext(object):
     loaderClass = OETestLoader
     runnerClass = OETestRunner
-    streamLoggerClass = OEStreamLogger
 
     files_dir = os.path.abspath(os.path.join(os.path.dirname(
         os.path.abspath(__file__)), "../files"))
@@ -32,7 +31,7 @@
 
     def _read_modules_from_manifest(self, manifest):
         if not os.path.exists(manifest):
-            raise
+            raise OEQAMissingManifest("Manifest does not exist on %s" % manifest)
 
         modules = []
         for line in open(manifest).readlines():
@@ -42,6 +41,14 @@
 
         return modules
 
+    def skipTests(self, skips):
+        if not skips:
+            return
+        for test in self.suites:
+            for skip in skips:
+                if test.id().startswith(skip):
+                    setattr(test, 'setUp', lambda: test.skipTest('Skipped by the command line argument "%s"' % skip))
+
     def loadTests(self, module_paths, modules=[], tests=[],
             modules_manifest="", modules_required=[], filters={}):
         if modules_manifest:
@@ -51,9 +58,11 @@
                 modules_required, filters)
         self.suites = self.loader.discover()
 
-    def runTests(self):
-        streamLogger = self.streamLoggerClass(self.logger)
-        self.runner = self.runnerClass(self, stream=streamLogger, verbosity=2)
+    def runTests(self, skips=[]):
+        self.runner = self.runnerClass(self, descriptions=False, verbosity=2)
+
+        # Dynamically skip those tests specified through arguments
+        self.skipTests(skips)
 
         self._run_start_time = time.time()
         result = self.runner.run(self.suites)
@@ -61,94 +70,13 @@
 
         return result
 
-    def logSummary(self, result, component, context_msg=''):
-        self.logger.info("SUMMARY:")
-        self.logger.info("%s (%s) - Ran %d test%s in %.3fs" % (component,
-            context_msg, result.testsRun, result.testsRun != 1 and "s" or "",
-            (self._run_end_time - self._run_start_time)))
-
-        if result.wasSuccessful():
-            msg = "%s - OK - All required tests passed" % component
-        else:
-            msg = "%s - FAIL - Required tests failed" % component
-        skipped = len(self._results['skipped'])
-        if skipped: 
-            msg += " (skipped=%d)" % skipped
-        self.logger.info(msg)
-
-    def _getDetailsNotPassed(self, case, type, desc):
-        found = False
-
-        for (scase, msg) in self._results[type]:
-            # XXX: When XML reporting is enabled scase is
-            # xmlrunner.result._TestInfo instance instead of
-            # string.
-            if xmlEnabled:
-                if case.id() == scase.test_id:
-                    found = True
-                    break
-                scase_str = scase.test_id
-            else:
-                if case == scase:
-                    found = True
-                    break
-                scase_str = str(scase)
-
-            # When fails at module or class level the class name is passed as string
-            # so figure out to see if match
-            m = re.search("^setUpModule \((?P<module_name>.*)\)$", scase_str)
-            if m:
-                if case.__class__.__module__ == m.group('module_name'):
-                    found = True
-                    break
-
-            m = re.search("^setUpClass \((?P<class_name>.*)\)$", scase_str)
-            if m:
-                class_name = "%s.%s" % (case.__class__.__module__,
-                        case.__class__.__name__)
-
-                if class_name == m.group('class_name'):
-                    found = True
-                    break
-
-        if found:
-            return (found, msg)
-
-        return (found, None)
-
-    def logDetails(self):
-        self.logger.info("RESULTS:")
-        for case_name in self._registry['cases']:
-            case = self._registry['cases'][case_name]
-
-            result_types = ['failures', 'errors', 'skipped', 'expectedFailures']
-            result_desc = ['FAILED', 'ERROR', 'SKIPPED', 'EXPECTEDFAIL']
-
-            fail = False
-            desc = None
-            for idx, name in enumerate(result_types):
-                (fail, msg) = self._getDetailsNotPassed(case, result_types[idx],
-                        result_desc[idx])
-                if fail:
-                    desc = result_desc[idx]
-                    break
-
-            oeid = -1
-            for d in case.decorators:
-                if hasattr(d, 'oeid'):
-                    oeid = d.oeid
-            
-            if fail:
-                self.logger.info("RESULTS - %s - Testcase %s: %s" % (case.id(),
-                    oeid, desc))
-                if msg:
-                    self.logger.info(msg)
-            else:
-                self.logger.info("RESULTS - %s - Testcase %s: %s" % (case.id(),
-                    oeid, 'PASSED'))
+    def listTests(self, display_type):
+        self.runner = self.runnerClass(self, verbosity=2)
+        return self.runner.list_tests(self.suites, display_type)
 
 class OETestContextExecutor(object):
     _context_class = OETestContext
+    _script_executor = 'oe-test'
 
     name = 'core'
     help = 'core test component example'
@@ -168,9 +96,14 @@
         self.parser.add_argument('--output-log', action='store',
                 default=self.default_output_log,
                 help="results output log, default: %s" % self.default_output_log)
-        self.parser.add_argument('--run-tests', action='store',
+
+        group = self.parser.add_mutually_exclusive_group()
+        group.add_argument('--run-tests', action='store', nargs='+',
                 default=self.default_tests,
-                help="tests to run in <module>[.<class>[.<name>]] format. Just works for modules now")
+                help="tests to run in <module>[.<class>[.<name>]]")
+        group.add_argument('--list-tests', action='store',
+                choices=('module', 'class', 'name'),
+                help="lists available tests")
 
         if self.default_test_data:
             self.parser.add_argument('--test-data-file', action='store',
@@ -206,7 +139,8 @@
         self.tc_kwargs = {}
         self.tc_kwargs['init'] = {}
         self.tc_kwargs['load'] = {}
-        self.tc_kwargs['run'] = {}
+        self.tc_kwargs['list'] = {}
+        self.tc_kwargs['run']  = {}
 
         self.tc_kwargs['init']['logger'] = self._setup_logger(logger, args)
         if args.test_data_file:
@@ -215,22 +149,36 @@
         else:
             self.tc_kwargs['init']['td'] = {}
 
-
         if args.run_tests:
-            self.tc_kwargs['load']['modules'] = args.run_tests.split()
+            self.tc_kwargs['load']['modules'] = args.run_tests
+            self.tc_kwargs['load']['modules_required'] = args.run_tests
         else:
-            self.tc_kwargs['load']['modules'] = None
+            self.tc_kwargs['load']['modules'] = []
+
+        self.tc_kwargs['run']['skips'] = []
 
         self.module_paths = args.CASES_PATHS
 
+    def _pre_run(self):
+        pass
+
     def run(self, logger, args):
         self._process_args(logger, args)
 
         self.tc = self._context_class(**self.tc_kwargs['init'])
-        self.tc.loadTests(self.module_paths, **self.tc_kwargs['load'])
-        rc = self.tc.runTests(**self.tc_kwargs['run'])
-        self.tc.logSummary(rc, self.name)
-        self.tc.logDetails()
+        try:
+            self.tc.loadTests(self.module_paths, **self.tc_kwargs['load'])
+        except OEQATestNotFound as ex:
+            logger.error(ex)
+            sys.exit(1)
+
+        if args.list_tests:
+            rc = self.tc.listTests(args.list_tests, **self.tc_kwargs['list'])
+        else:
+            self._pre_run()
+            rc = self.tc.runTests(**self.tc_kwargs['run'])
+            rc.logDetails()
+            rc.logSummary(self.name)
 
         output_link = os.path.join(os.path.dirname(args.output_log),
                 "%s-results.log" % self.name)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/decorator/depends.py b/import-layers/yocto-poky/meta/lib/oeqa/core/decorator/depends.py
index 195711c..baa0434 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/decorator/depends.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/decorator/depends.py
@@ -3,6 +3,7 @@
 
 from unittest import SkipTest
 
+from oeqa.core.threaded import OETestRunnerThreaded
 from oeqa.core.exception import OEQADependency
 
 from . import OETestDiscover, registerDecorator
@@ -63,7 +64,12 @@
     return [cases[case_id] for case_id in cases_ordered]
 
 def _skipTestDependency(case, depends):
-    results = case.tc._results
+    if isinstance(case.tc.runner, OETestRunnerThreaded):
+        import threading
+        results = case.tc._results[threading.get_ident()]
+    else:
+        results = case.tc._results
+
     skipReasons = ['errors', 'failures', 'skipped']
 
     for reason in skipReasons:
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/decorator/oetimeout.py b/import-layers/yocto-poky/meta/lib/oeqa/core/decorator/oetimeout.py
index a247583..f85e7d9 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/decorator/oetimeout.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/decorator/oetimeout.py
@@ -1,8 +1,12 @@
 # Copyright (C) 2016 Intel Corporation
 # Released under the MIT license (see COPYING.MIT)
 
-import signal
 from . import OETestDecorator, registerDecorator
+
+import signal
+from threading import Timer
+
+from oeqa.core.threaded import OETestRunnerThreaded
 from oeqa.core.exception import OEQATimeoutError
 
 @registerDecorator
@@ -10,16 +14,32 @@
     attrs = ('oetimeout',)
 
     def setUpDecorator(self):
-        timeout = self.oetimeout
-        def _timeoutHandler(signum, frame):
-            raise OEQATimeoutError("Timed out after %s "
+        self.logger.debug("Setting up a %d second(s) timeout" % self.oetimeout)
+
+        if isinstance(self.case.tc.runner, OETestRunnerThreaded):
+            self.timeouted = False
+            def _timeoutHandler():
+                self.timeouted = True
+
+            self.timer = Timer(self.oetimeout, _timeoutHandler)
+            self.timer.start()
+        else:
+            timeout = self.oetimeout
+            def _timeoutHandler(signum, frame):
+                raise OEQATimeoutError("Timed out after %s "
                     "seconds of execution" % timeout)
 
-        self.logger.debug("Setting up a %d second(s) timeout" % self.oetimeout)
-        self.alarmSignal = signal.signal(signal.SIGALRM, _timeoutHandler)
-        signal.alarm(self.oetimeout)
+            self.alarmSignal = signal.signal(signal.SIGALRM, _timeoutHandler)
+            signal.alarm(self.oetimeout)
 
     def tearDownDecorator(self):
-        signal.alarm(0)
-        signal.signal(signal.SIGALRM, self.alarmSignal)
-        self.logger.debug("Removed SIGALRM handler")
+        if isinstance(self.case.tc.runner, OETestRunnerThreaded):
+            self.timer.cancel()
+            self.logger.debug("Removed Timer handler")
+            if self.timeouted:
+                raise OEQATimeoutError("Timed out after %s "
+                    "seconds of execution" % self.oetimeout)
+        else:
+            signal.alarm(0)
+            signal.signal(signal.SIGALRM, self.alarmSignal)
+            self.logger.debug("Removed SIGALRM handler")
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/exception.py b/import-layers/yocto-poky/meta/lib/oeqa/core/exception.py
index 2dfd840..732f2ef 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/exception.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/exception.py
@@ -12,3 +12,12 @@
 
 class OEQADependency(OEQAException):
     pass
+
+class OEQAMissingManifest(OEQAException):
+    pass
+
+class OEQAPreRun(OEQAException):
+    pass
+
+class OEQATestNotFound(OEQAException):
+    pass
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/loader.py b/import-layers/yocto-poky/meta/lib/oeqa/core/loader.py
index 63a1703..975a081 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/loader.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/loader.py
@@ -2,25 +2,30 @@
 # Released under the MIT license (see COPYING.MIT)
 
 import os
+import re
 import sys
 import unittest
+import inspect
 
 from oeqa.core.utils.path import findFile
 from oeqa.core.utils.test import getSuiteModules, getCaseID
 
+from oeqa.core.exception import OEQATestNotFound
 from oeqa.core.case import OETestCase
 from oeqa.core.decorator import decoratorClasses, OETestDecorator, \
         OETestFilter, OETestDiscover
 
-def _make_failed_test(classname, methodname, exception, suiteClass):
-    """
-        When loading tests unittest framework stores the exception in a new
-        class created for be displayed into run().
-
-        For our purposes will be better to raise the exception in loading 
-        step instead of wait to run the test suite.
-    """
-    raise exception
+# When loading tests, the unittest framework stores any exceptions and
+# displays them only when the run method is called.
+#
+# For our purposes, it is better to raise the exceptions in the loading
+# step rather than waiting to run the test suite.
+#
+# Generate the function definition because this differs across Python versions:
+# Python >= 3.4.4 uses three parameters instead of four, but, for example,
+# Python 3.5.3 uses four parameters again, so the change isn't incremental.
+_failed_test_args = inspect.getargspec(unittest.loader._make_failed_test).args
+exec("""def _make_failed_test(%s): raise exception""" % ', '.join(_failed_test_args))
 unittest.loader._make_failed_test = _make_failed_test
 
 def _find_duplicated_modules(suite, directory):
@@ -29,6 +34,28 @@
         if path:
             raise ImportError("Duplicated %s module found in %s" % (module, path))
 
+def _built_modules_dict(modules):
+    modules_dict = {}
+
+    if modules == None:
+        return modules_dict
+
+    for module in modules:
+        # Assumption: package and module names do not contain upper case
+        # characters, whereas class names do
+        m = re.match(r'^([^A-Z]+)(?:\.([A-Z][^.]*)(?:\.([^.]+))?)?$', module)
+
+        module_name, class_name, test_name = m.groups()
+
+        if module_name and module_name not in modules_dict:
+            modules_dict[module_name] = {}
+        if class_name and class_name not in modules_dict[module_name]:
+            modules_dict[module_name][class_name] = []
+        if test_name and test_name not in modules_dict[module_name][class_name]:
+            modules_dict[module_name][class_name].append(test_name)
+
+    return modules_dict
+
 class OETestLoader(unittest.TestLoader):
     caseClass = OETestCase
 
@@ -39,7 +66,8 @@
             filters, *args, **kwargs):
         self.tc = tc
 
-        self.modules = modules
+        self.modules = _built_modules_dict(modules)
+
         self.tests = tests
         self.modules_required = modules_required
 
@@ -63,6 +91,8 @@
 
         self._patchCaseClass(self.caseClass)
 
+        super(OETestLoader, self).__init__()
+
     def _patchCaseClass(self, testCaseClass):
         # Adds custom attributes to the OETestCase class
         setattr(testCaseClass, 'tc', self.tc)
@@ -116,7 +146,35 @@
         """
             Returns True if test case must be filtered, False otherwise.
         """
-        if self.filters:
+        # XXX: If the module has more than one namespace, only use
+        # the first, to support running the whole module by specifying
+        # <module_name>.[test_class].[test_name]
+        module_name_small = case.__module__.split('.')[0]
+        module_name = case.__module__
+
+        class_name = case.__class__.__name__
+        test_name = case._testMethodName
+
+        if self.modules:
+            module = None
+            try:
+                module = self.modules[module_name_small]
+            except KeyError:
+                try:
+                    module = self.modules[module_name]
+                except KeyError:
+                    return True
+
+            if module:
+                if not class_name in module:
+                    return True
+
+                if module[class_name]:
+                    if test_name not in module[class_name]:
+                        return True
+
+        # Decorator filters
+        if self.filters and isinstance(case, OETestCase):
             filters = self.filters.copy()
             case_decorators = [cd for cd in case.decorators
                                if cd.__class__ in self.used_filters]
@@ -134,7 +192,8 @@
         return False
 
     def _getTestCase(self, testCaseClass, tcName):
-        if not hasattr(testCaseClass, '__oeqa_loader'):
+        if not hasattr(testCaseClass, '__oeqa_loader') and \
+                issubclass(testCaseClass, OETestCase):
             # In order to support data_vars validation
             # monkey patch the default setUp/tearDown{Class} to use
             # the ones provided by OETestCase
@@ -161,7 +220,8 @@
             setattr(testCaseClass, '__oeqa_loader', True)
 
         case = testCaseClass(tcName)
-        setattr(case, 'decorators', [])
+        if isinstance(case, OETestCase):
+            setattr(case, 'decorators', [])
 
         return case
 
@@ -173,9 +233,9 @@
             raise TypeError("Test cases should not be derived from TestSuite." \
                                 " Maybe you meant to derive %s from TestCase?" \
                                 % testCaseClass.__name__)
-        if not issubclass(testCaseClass, self.caseClass):
+        if not issubclass(testCaseClass, unittest.case.TestCase):
             raise TypeError("Test %s is not derived from %s" % \
-                    (testCaseClass.__name__, self.caseClass.__name__))
+                    (testCaseClass.__name__, unittest.case.TestCase.__name__))
 
         testCaseNames = self.getTestCaseNames(testCaseClass)
         if not testCaseNames and hasattr(testCaseClass, 'runTest'):
@@ -196,6 +256,28 @@
 
         return self.suiteClass(suite)
 
+    def _required_modules_validation(self):
+        """
+            Search in Test context registry if a required
+            test is found, raise an exception when not found.
+        """
+
+        for module in self.modules_required:
+            found = False
+
+            # The module name is split so that only the first
+            # part of a test case id is compared.
+            comp_len = len(module.split('.'))
+            for case in self.tc._registry['cases']:
+                case_comp = '.'.join(case.split('.')[0:comp_len])
+                if module == case_comp:
+                    found = True
+                    break
+
+            if not found:
+                raise OEQATestNotFound("Not found %s in loaded test cases" % \
+                        module)
+
     def discover(self):
         big_suite = self.suiteClass()
         for path in self.module_paths:
@@ -210,8 +292,41 @@
         for clss in discover_classes:
             cases = clss.discover(self.tc._registry)
 
+        if self.modules_required:
+            self._required_modules_validation()
+
         return self.suiteClass(cases) if cases else big_suite
 
+    def _filterModule(self, module):
+        if module.__name__ in sys.builtin_module_names:
+            msg = 'Tried to import %s test module but is a built-in'
+            raise ImportError(msg % module.__name__)
+
+        # XXX: If the module has more than one namespace, only use
+        # the first, to support running the whole module by specifying
+        # <module_name>.[test_class].[test_name]
+        module_name_small = module.__name__.split('.')[0]
+        module_name = module.__name__
+
+        # Normal test modules are loaded if no modules were specified,
+        # if module is in the specified module list or if 'all' is in
+        # module list.
+        # Underscore modules are loaded only if specified in module list.
+        load_module = True if not module_name.startswith('_') \
+                              and (not self.modules \
+                                   or module_name in self.modules \
+                                   or module_name_small in self.modules \
+                                   or 'all' in self.modules) \
+                           else False
+
+        load_underscore = True if module_name.startswith('_') \
+                                  and (module_name in self.modules or \
+                                  module_name_small in self.modules) \
+                               else False
+
+        return (load_module, load_underscore)
+
+
     # XXX After Python 3.5, remove backward compatibility hacks for
     # use_load_tests deprecation via *args and **kws.  See issue 16662.
     if sys.version_info >= (3,5):
@@ -219,23 +334,7 @@
             """
                 Returns a suite of all tests cases contained in module.
             """
-            if module.__name__ in sys.builtin_module_names:
-                msg = 'Tried to import %s test module but is a built-in'
-                raise ImportError(msg % module.__name__)
-
-            # Normal test modules are loaded if no modules were specified,
-            # if module is in the specified module list or if 'all' is in
-            # module list.
-            # Underscore modules are loaded only if specified in module list.
-            load_module = True if not module.__name__.startswith('_') \
-                                  and (not self.modules \
-                                       or module.__name__ in self.modules \
-                                       or 'all' in self.modules) \
-                               else False
-
-            load_underscore = True if module.__name__.startswith('_') \
-                                      and module.__name__ in self.modules \
-                                   else False
+            load_module, load_underscore = self._filterModule(module)
 
             if load_module or load_underscore:
                 return super(OETestLoader, self).loadTestsFromModule(
@@ -247,23 +346,7 @@
             """
                 Returns a suite of all tests cases contained in module.
             """
-            if module.__name__ in sys.builtin_module_names:
-                msg = 'Tried to import %s test module but is a built-in'
-                raise ImportError(msg % module.__name__)
-
-            # Normal test modules are loaded if no modules were specified,
-            # if module is in the specified module list or if 'all' is in
-            # module list.
-            # Underscore modules are loaded only if specified in module list.
-            load_module = True if not module.__name__.startswith('_') \
-                                  and (not self.modules \
-                                       or module.__name__ in self.modules \
-                                       or 'all' in self.modules) \
-                               else False
-
-            load_underscore = True if module.__name__.startswith('_') \
-                                      and module.__name__ in self.modules \
-                                   else False
+            load_module, load_underscore = self._filterModule(module)
 
             if load_module or load_underscore:
                 return super(OETestLoader, self).loadTestsFromModule(
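
The hunks above fold the duplicated built-in/underscore checks into a single _filterModule() helper. As a rough illustration of the rule it implements (should_load() is an invented name, not part of the patch):

    def should_load(module_name, requested):
        top_level = module_name.split('.')[0]
        if module_name.startswith('_'):
            # underscore modules load only when explicitly requested
            return module_name in requested or top_level in requested
        # normal modules load when nothing was requested, when 'all' was
        # requested, or when the module (or its top-level package) is listed
        return (not requested or 'all' in requested
                or module_name in requested or top_level in requested)

    assert should_load('ptest', [])
    assert not should_load('_qemutiny', [])
    assert should_load('_qemutiny', ['_qemutiny'])
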
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/runner.py b/import-layers/yocto-poky/meta/lib/oeqa/core/runner.py
index 44ffecb..13cdf5b 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/runner.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/runner.py
@@ -5,6 +5,7 @@
 import time
 import unittest
 import logging
+import re
 
 xmlEnabled = False
 try:
@@ -24,10 +25,14 @@
 
     def write(self, msg):
         if len(msg) > 1 and msg[0] != '\n':
-            self.buffer += msg
-        else:
-            self.logger.log(logging.INFO, self.buffer.rstrip("\n"))
-            self.buffer = ""
+            if '...' in msg:
+                self.buffer += msg
+            elif self.buffer:
+                self.buffer += msg
+                self.logger.log(logging.INFO, self.buffer)
+                self.buffer = ""
+            else:
+                self.logger.log(logging.INFO, msg)
 
     def flush(self):
         for handler in self.logger.handlers:
@@ -38,22 +43,122 @@
         super(OETestResult, self).__init__(*args, **kwargs)
 
         self.tc = tc
+        self._tc_map_results()
 
+    def startTest(self, test):
+        # Allow the testcase buffer mode to be triggered on a per-test basis
+        # so stdout/stderr are only printed upon failure. This enables
+        # debugging while keeping the output clean.
+        if hasattr(test, "buffer"):
+            self.buffer = test.buffer
+        super(OETestResult, self).startTest(test)
+
+    def _tc_map_results(self):
         self.tc._results['failures'] = self.failures
         self.tc._results['errors'] = self.errors
         self.tc._results['skipped'] = self.skipped
         self.tc._results['expectedFailures'] = self.expectedFailures
 
-    def startTest(self, test):
-        super(OETestResult, self).startTest(test)
+    def logSummary(self, component, context_msg=''):
+        elapsed_time = self.tc._run_end_time - self.tc._run_start_time
+        self.tc.logger.info("SUMMARY:")
+        self.tc.logger.info("%s (%s) - Ran %d test%s in %.3fs" % (component,
+            context_msg, self.testsRun, self.testsRun != 1 and "s" or "",
+            elapsed_time))
+
+        if self.wasSuccessful():
+            msg = "%s - OK - All required tests passed" % component
+        else:
+            msg = "%s - FAIL - Required tests failed" % component
+        skipped = len(self.tc._results['skipped'])
+        if skipped: 
+            msg += " (skipped=%d)" % skipped
+        self.tc.logger.info(msg)
+
+    def _getDetailsNotPassed(self, case, type, desc):
+        found = False
+
+        for (scase, msg) in self.tc._results[type]:
+            # XXX: When XML reporting is enabled, scase is an
+            # xmlrunner.result._TestInfo instance instead of a
+            # string.
+            if xmlEnabled:
+                if case.id() == scase.test_id:
+                    found = True
+                    break
+                scase_str = scase.test_id
+            else:
+                if case == scase:
+                    found = True
+                    break
+                scase_str = str(scase)
+
+            # When the failure happens at module or class level the class
+            # name is passed as a string, so check whether it matches this case
+            m = re.search("^setUpModule \((?P<module_name>.*)\)$", scase_str)
+            if m:
+                if case.__class__.__module__ == m.group('module_name'):
+                    found = True
+                    break
+
+            m = re.search("^setUpClass \((?P<class_name>.*)\)$", scase_str)
+            if m:
+                class_name = "%s.%s" % (case.__class__.__module__,
+                        case.__class__.__name__)
+
+                if class_name == m.group('class_name'):
+                    found = True
+                    break
+
+        if found:
+            return (found, msg)
+
+        return (found, None)
+
+    def logDetails(self):
+        self.tc.logger.info("RESULTS:")
+        for case_name in self.tc._registry['cases']:
+            case = self.tc._registry['cases'][case_name]
+
+            result_types = ['failures', 'errors', 'skipped', 'expectedFailures']
+            result_desc = ['FAILED', 'ERROR', 'SKIPPED', 'EXPECTEDFAIL']
+
+            fail = False
+            desc = None
+            for idx, name in enumerate(result_types):
+                (fail, msg) = self._getDetailsNotPassed(case, result_types[idx],
+                        result_desc[idx])
+                if fail:
+                    desc = result_desc[idx]
+                    break
+
+            oeid = -1
+            if hasattr(case, 'decorators'):
+                for d in case.decorators:
+                    if hasattr(d, 'oeid'):
+                        oeid = d.oeid
+
+            if fail:
+                self.tc.logger.info("RESULTS - %s - Testcase %s: %s" % (case.id(),
+                    oeid, desc))
+            else:
+                self.tc.logger.info("RESULTS - %s - Testcase %s: %s" % (case.id(),
+                    oeid, 'PASSED'))
+
+class OEListTestsResult(object):
+    def wasSuccessful(self):
+        return True
 
 class OETestRunner(_TestRunner):
+    streamLoggerClass = OEStreamLogger
+
     def __init__(self, tc, *args, **kwargs):
         if xmlEnabled:
             if not kwargs.get('output'):
                 kwargs['output'] = os.path.join(os.getcwd(),
                         'TestResults_%s_%s' % (time.strftime("%Y%m%d%H%M%S"), os.getpid()))
 
+        kwargs['stream'] = self.streamLoggerClass(tc.logger)
         super(OETestRunner, self).__init__(*args, **kwargs)
         self.tc = tc
         self.resultclass = OETestResult
@@ -74,3 +179,99 @@
         def _makeResult(self):
             return self.resultclass(self.tc, self.stream, self.descriptions,
                     self.verbosity)
+
+
+    def _walk_suite(self, suite, func):
+        for obj in suite:
+            if isinstance(obj, unittest.suite.TestSuite):
+                if len(obj._tests):
+                    self._walk_suite(obj, func)
+            elif isinstance(obj, unittest.case.TestCase):
+                func(self.tc.logger, obj)
+                self._walked_cases = self._walked_cases + 1
+
+    def _list_tests_name(self, suite):
+        from oeqa.core.decorator.oeid import OETestID
+        from oeqa.core.decorator.oetag import OETestTag
+
+        self._walked_cases = 0
+
+        def _list_cases_without_id(logger, case):
+
+            found_id = False
+            if hasattr(case, 'decorators'):
+                for d in case.decorators:
+                    if isinstance(d, OETestID):
+                        found_id = True
+
+            if not found_id:
+                logger.info('oeid missing for %s' % case.id())
+
+        def _list_cases(logger, case):
+            oeid = None
+            oetag = None
+
+            if hasattr(case, 'decorators'):
+                for d in case.decorators:
+                    if isinstance(d, OETestID):
+                        oeid = d.oeid
+                    elif isinstance(d, OETestTag):
+                        oetag = d.oetag
+
+            logger.info("%s\t%s\t\t%s" % (oeid, oetag, case.id()))
+
+        self.tc.logger.info("Listing test cases that don't have oeid ...")
+        self._walk_suite(suite, _list_cases_without_id)
+        self.tc.logger.info("-" * 80)
+
+        self.tc.logger.info("Listing all available tests:")
+        self._walked_cases = 0
+        self.tc.logger.info("id\ttag\t\ttest")
+        self.tc.logger.info("-" * 80)
+        self._walk_suite(suite, _list_cases)
+        self.tc.logger.info("-" * 80)
+        self.tc.logger.info("Total found:\t%s" % self._walked_cases)
+
+    def _list_tests_class(self, suite):
+        self._walked_cases = 0
+
+        curr = {}
+        def _list_classes(logger, case):
+            if not 'module' in curr or curr['module'] != case.__module__:
+                curr['module'] = case.__module__
+                logger.info(curr['module'])
+
+            if not 'class' in curr  or curr['class'] != \
+                    case.__class__.__name__:
+                curr['class'] = case.__class__.__name__
+                logger.info(" -- %s" % curr['class'])
+
+            logger.info(" -- -- %s" % case._testMethodName)
+
+        self.tc.logger.info("Listing all available test classes:")
+        self._walk_suite(suite, _list_classes)
+
+    def _list_tests_module(self, suite):
+        self._walked_cases = 0
+
+        listed = []
+        def _list_modules(logger, case):
+            if not case.__module__ in listed:
+                if case.__module__.startswith('_'):
+                    logger.info("%s (hidden)" % case.__module__)
+                else:
+                    logger.info(case.__module__)
+                listed.append(case.__module__)
+
+        self.tc.logger.info("Listing all available test modules:")
+        self._walk_suite(suite, _list_modules)
+
+    def list_tests(self, suite, display_type):
+        if display_type == 'name':
+            self._list_tests_name(suite)
+        elif display_type == 'class':
+            self._list_tests_class(suite)
+        elif display_type == 'module':
+            self._list_tests_module(suite)
+
+        return OEListTestsResult()
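
The new list_tests()/_walk_suite() code above relies on the fact that a unittest suite can nest further suites. A minimal standalone sketch of that recursive walk (names here are illustrative, not the OEQA API):

    import unittest

    def walk_suite(suite, func):
        # a TestSuite may contain nested suites, so recurse into suites and
        # apply the callback to each leaf TestCase
        for item in suite:
            if isinstance(item, unittest.TestSuite):
                walk_suite(item, func)
            else:
                func(item)

    class Example(unittest.TestCase):
        def test_ok(self):
            self.assertTrue(True)

    suite = unittest.defaultTestLoader.loadTestsFromTestCase(Example)
    walk_suite(suite, lambda case: print(case.id()))
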
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/target/qemu.py b/import-layers/yocto-poky/meta/lib/oeqa/core/target/qemu.py
index 2dc521c..d359bf9 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/target/qemu.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/target/qemu.py
@@ -31,7 +31,7 @@
                                  deploy_dir_image=dir_image, display=display,
                                  logfile=bootlog, boottime=boottime,
                                  use_kvm=kvm, dump_dir=dump_dir,
-                                 dump_host_cmds=dump_host_cmds)
+                                 dump_host_cmds=dump_host_cmds, logger=logger)
 
     def start(self, params=None, extra_bootparams=None):
         if self.runner.start(params, extra_bootparams=extra_bootparams):
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/target/ssh.py b/import-layers/yocto-poky/meta/lib/oeqa/core/target/ssh.py
index b80939c..151b99a 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/target/ssh.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/target/ssh.py
@@ -6,6 +6,7 @@
 import select
 import logging
 import subprocess
+import codecs
 
 from . import OETarget
 
@@ -82,7 +83,7 @@
             processTimeout = self.timeout
 
         status, output = self._run(sshCmd, processTimeout, True)
-        self.logger.info('\nCommand: %s\nOutput:  %s\n' % (command, output))
+        self.logger.debug('Command: %s\nOutput:  %s\n' % (command, output))
         return (status, output)
 
     def copyTo(self, localSrc, remoteDst):
@@ -206,12 +207,12 @@
                 logger.debug('time: %s, endtime: %s' % (time.time(), endtime))
                 try:
                     if select.select([process.stdout], [], [], 5)[0] != []:
-                        data = os.read(process.stdout.fileno(), 1024)
+                        reader = codecs.getreader('utf-8')(process.stdout)
+                        data = reader.read(1024, 1024)
                         if not data:
                             process.stdout.close()
                             eof = True
                         else:
-                            data = data.decode("utf-8")
                             output += data
                             logger.debug('Partial data from SSH call: %s' % data)
                             endtime = time.time() + timeout
@@ -233,7 +234,7 @@
                 output += lastline
 
         else:
-            output = process.communicate()[0].decode("utf-8")
+            output = process.communicate()[0].decode("utf-8", errors='replace')
             logger.debug('Data from SSH call: %s' % output.rstrip())
 
     options = {
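
The ssh.py hunk above swaps os.read() plus bytes.decode() for an incremental codecs reader, so a multi-byte UTF-8 character split across two reads can no longer raise UnicodeDecodeError. A self-contained sketch of the same approach, assuming a child process that emits UTF-8 bytes (the child command is only an example):

    import codecs
    import subprocess
    import sys

    # child process that writes raw UTF-8 bytes to stdout
    child = [sys.executable, '-c',
             r"import sys; sys.stdout.buffer.write(b'h\xc3\xa9llo w\xc3\xb6rld')"]
    proc = subprocess.Popen(child, stdout=subprocess.PIPE)

    # wrap the raw pipe in an incremental UTF-8 reader; a character split
    # across two reads is buffered instead of raising UnicodeDecodeError
    reader = codecs.getreader('utf-8')(proc.stdout)
    chunks = []
    while True:
        data = reader.read(16, 16)    # read(size, chars), as in the patch
        if not data:
            break
        chunks.append(data)
    proc.wait()
    print(''.join(chunks))            # -> héllo wörld
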
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/tests/cases/loader/threaded/threaded.py b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/cases/loader/threaded/threaded.py
new file mode 100644
index 0000000..0fe4cb3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/cases/loader/threaded/threaded.py
@@ -0,0 +1,12 @@
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+from oeqa.core.case import OETestCase
+
+class ThreadedTest(OETestCase):
+    def test_threaded_no_depends(self):
+        self.assertTrue(True, msg='How is this possible?')
+
+class ThreadedTest2(OETestCase):
+    def test_threaded_same_module(self):
+        self.assertTrue(True, msg='How is this possible?')
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/tests/cases/loader/threaded/threaded_alone.py b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/cases/loader/threaded/threaded_alone.py
new file mode 100644
index 0000000..905f397
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/cases/loader/threaded/threaded_alone.py
@@ -0,0 +1,8 @@
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+from oeqa.core.case import OETestCase
+
+class ThreadedTestAlone(OETestCase):
+    def test_threaded_alone(self):
+        self.assertTrue(True, msg='How is this possible?')
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/tests/cases/loader/threaded/threaded_depends.py b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/cases/loader/threaded/threaded_depends.py
new file mode 100644
index 0000000..0c158d3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/cases/loader/threaded/threaded_depends.py
@@ -0,0 +1,10 @@
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+from oeqa.core.case import OETestCase
+from oeqa.core.decorator.depends import OETestDepends
+
+class ThreadedTest3(OETestCase):
+    @OETestDepends(['threaded.ThreadedTest.test_threaded_no_depends'])
+    def test_threaded_depends(self):
+        self.assertTrue(True, msg='How is this possible?')
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/tests/cases/loader/threaded/threaded_module.py b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/cases/loader/threaded/threaded_module.py
new file mode 100644
index 0000000..63d17e0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/cases/loader/threaded/threaded_module.py
@@ -0,0 +1,12 @@
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+from oeqa.core.case import OETestCase
+
+class ThreadedTestModule(OETestCase):
+    def test_threaded_module(self):
+        self.assertTrue(True, msg='How is this possible?')
+
+class ThreadedTestModule2(OETestCase):
+    def test_threaded_module2(self):
+        self.assertTrue(True, msg='How is this possible?')
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/tests/common.py b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/common.py
index 52b18a1..1932323 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/tests/common.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/common.py
@@ -33,3 +33,13 @@
         tc.loadTests(self.cases_path, modules=modules, tests=tests,
                      filters=filters)
         return tc
+
+    def _testLoaderThreaded(self, d={}, modules=[],
+            tests=[], filters={}):
+        from oeqa.core.threaded import OETestContextThreaded
+
+        tc = OETestContextThreaded(d, self.logger)
+        tc.loadTests(self.cases_path, modules=modules, tests=tests,
+                     filters=filters)
+
+        return tc
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/tests/test_decorators.py b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/test_decorators.py
index f7d11e8..cf99e0d 100755
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/tests/test_decorators.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/test_decorators.py
@@ -131,5 +131,17 @@
         msg = "OETestTimeout didn't restore SIGALRM"
         self.assertIs(alarm_signal, signal.getsignal(signal.SIGALRM), msg=msg)
 
+    def test_timeout_thread(self):
+        tests = ['timeout.TimeoutTest.testTimeoutPass']
+        msg = 'Failed to run test using OETestTimeout'
+        tc = self._testLoaderThreaded(modules=self.modules, tests=tests)
+        self.assertTrue(tc.runTests().wasSuccessful(), msg=msg)
+
+    def test_timeout_threaded_fail(self):
+        tests = ['timeout.TimeoutTest.testTimeoutFail']
+        msg = "OETestTimeout test didn't timeout as expected"
+        tc = self._testLoaderThreaded(modules=self.modules, tests=tests)
+        self.assertFalse(tc.runTests().wasSuccessful(), msg=msg)
+
 if __name__ == '__main__':
     unittest.main()
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/tests/test_loader.py b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/test_loader.py
index b79b8ba..e0d917d 100755
--- a/import-layers/yocto-poky/meta/lib/oeqa/core/tests/test_loader.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/tests/test_loader.py
@@ -1,6 +1,6 @@
 #!/usr/bin/env python3
 
-# Copyright (C) 2016 Intel Corporation
+# Copyright (C) 2016-2017 Intel Corporation
 # Released under the MIT license (see COPYING.MIT)
 
 import os
@@ -82,5 +82,33 @@
         msg = 'Expected modules from two different paths'
         self.assertEqual(modules, expected_modules, msg=msg)
 
+    def test_loader_threaded(self):
+        cases_path = self.cases_path
+
+        self.cases_path = [os.path.join(self.cases_path, 'loader', 'threaded')]
+
+        tc = self._testLoaderThreaded()
+        self.assertEqual(len(tc.suites), 3, "Expected 3 suites")
+
+        case_ids = ['threaded.ThreadedTest.test_threaded_no_depends',
+                'threaded.ThreadedTest2.test_threaded_same_module',
+                'threaded_depends.ThreadedTest3.test_threaded_depends']
+        for case in tc.suites[0]._tests:
+            self.assertEqual(case.id(),
+                    case_ids[tc.suites[0]._tests.index(case)])
+
+        case_ids = ['threaded_alone.ThreadedTestAlone.test_threaded_alone']
+        for case in tc.suites[1]._tests:
+            self.assertEqual(case.id(),
+                    case_ids[tc.suites[1]._tests.index(case)])
+
+        case_ids = ['threaded_module.ThreadedTestModule.test_threaded_module',
+                'threaded_module.ThreadedTestModule2.test_threaded_module2']
+        for case in tc.suites[2]._tests:
+            self.assertEqual(case.id(),
+                    case_ids[tc.suites[2]._tests.index(case)])
+
+        self.cases_path = cases_path
+
 if __name__ == '__main__':
     unittest.main()
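
test_loader_threaded above expects the threaded loader to keep same-module cases together, keep dependent cases with their dependencies, and spread the rest across the emptiest suites. A rough sketch of that grouping heuristic, using invented helper names rather than the loader's actual code:

    def partition(cases, depends, buckets):
        # cases: list of (case_id, module); depends: case_id -> [case_ids]
        groups = [[] for _ in range(buckets)]

        def index_for(case_id, module):
            for i, group in enumerate(groups):
                if any(mod == module for (_, mod) in group):
                    return i          # same module -> same group
            for i, group in enumerate(groups):
                if any(cid in depends.get(case_id, []) for (cid, _) in group):
                    return i          # dependency already placed -> same group
            return min(range(buckets), key=lambda i: len(groups[i]))

        for case_id, module in cases:
            groups[index_for(case_id, module)].append((case_id, module))
        return groups

    cases = [('threaded.ThreadedTest.test_threaded_no_depends', 'threaded'),
             ('threaded.ThreadedTest2.test_threaded_same_module', 'threaded'),
             ('threaded_alone.ThreadedTestAlone.test_threaded_alone', 'threaded_alone')]
    print(partition(cases, {}, 2))
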
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/core/threaded.py b/import-layers/yocto-poky/meta/lib/oeqa/core/threaded.py
new file mode 100644
index 0000000..2cafe03
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/core/threaded.py
@@ -0,0 +1,275 @@
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+import threading
+import multiprocessing
+import queue
+import time
+
+from unittest.suite import TestSuite
+
+from oeqa.core.loader import OETestLoader
+from oeqa.core.runner import OEStreamLogger, OETestResult, OETestRunner
+from oeqa.core.context import OETestContext
+
+class OETestLoaderThreaded(OETestLoader):
+    def __init__(self, tc, module_paths, modules, tests, modules_required,
+            filters, process_num=0, *args, **kwargs):
+        super(OETestLoaderThreaded, self).__init__(tc, module_paths, modules,
+                tests, modules_required, filters, *args, **kwargs)
+
+        self.process_num = process_num
+
+    def discover(self):
+        suite = super(OETestLoaderThreaded, self).discover()
+
+        if self.process_num <= 0:
+            self.process_num = min(multiprocessing.cpu_count(),
+                    len(suite._tests))
+
+        suites = []
+        for _ in range(self.process_num):
+            suites.append(self.suiteClass())
+
+        def _search_for_module_idx(suites, case):
+            """
+                Cases in the same module need to be run
+                in the same thread because PyUnit keeps track
+                of setUp{Module, Class,} and tearDown{Module, Class,}.
+            """
+
+            for idx in range(self.process_num):
+                suite = suites[idx]
+                for c in suite._tests:
+                    if case.__module__ == c.__module__:
+                        return idx
+
+            return -1
+
+        def _search_for_depend_idx(suites, depends):
+            """
+                Dependent cases need to run in the same
+                thread, because the OEQA framework looks at the state
+                of the test they depend on to decide whether to skip.
+            """
+
+            for idx in range(self.process_num):
+                suite = suites[idx]
+
+                for case in suite._tests:
+                    if case.id() in depends:
+                        return idx
+            return -1
+
+        def _get_best_idx(suites):
+            sizes = [len(suite._tests) for suite in suites]
+            return sizes.index(min(sizes))
+
+        def _fill_suites(suite):
+            idx = -1
+            for case in suite:
+                if isinstance(case, TestSuite):
+                    _fill_suites(case)
+                else:
+                    idx = _search_for_module_idx(suites, case)
+
+                    depends = {}
+                    if 'depends' in self.tc._registry:
+                        depends = self.tc._registry['depends']
+
+                    if idx == -1 and case.id() in depends:
+                        case_depends = depends[case.id()] 
+                        idx = _search_for_depend_idx(suites, case_depends)
+
+                    if idx == -1:
+                        idx = _get_best_idx(suites)
+
+                    suites[idx].addTest(case)
+        _fill_suites(suite)
+
+        suites_tmp = suites
+        suites = []
+        for suite in suites_tmp:
+            if len(suite._tests) > 0:
+                suites.append(suite)
+
+        return suites
+
+class OEStreamLoggerThreaded(OEStreamLogger):
+    _lock = threading.Lock()
+    buffers = {}
+
+    def write(self, msg):
+        tid = threading.get_ident()
+
+        if not tid in self.buffers:
+            self.buffers[tid] = ""
+
+        if msg:
+            self.buffers[tid] += msg
+
+    def finish(self):
+        tid = threading.get_ident()
+        
+        self._lock.acquire()
+        self.logger.info('THREAD: %d' % tid)
+        self.logger.info('-' * 70)
+        for line in self.buffers[tid].split('\n'):
+            self.logger.info(line)
+        self._lock.release()
+
+class OETestResultThreadedInternal(OETestResult):
+    def _tc_map_results(self):
+        tid = threading.get_ident()
+        
+        # PyUnit generates a result for every test module run, so check
+        # whether the thread already has an entry to avoid losing the
+        # previous test module results.
+        if not tid in self.tc._results:
+            self.tc._results[tid] = {}
+            self.tc._results[tid]['failures'] = self.failures
+            self.tc._results[tid]['errors'] = self.errors
+            self.tc._results[tid]['skipped'] = self.skipped
+            self.tc._results[tid]['expectedFailures'] = self.expectedFailures
+
+class OETestResultThreaded(object):
+    _results = {}
+    _lock = threading.Lock()
+
+    def __init__(self, tc):
+        self.tc = tc
+
+    def _fill_tc_results(self):
+        tids = list(self.tc._results.keys())
+        fields = ['failures', 'errors', 'skipped', 'expectedFailures']
+
+        for tid in tids:
+            result = self.tc._results[tid]
+            for field in fields:
+                if not field in self.tc._results:
+                    self.tc._results[field] = []
+                self.tc._results[field].extend(result[field])
+
+    def addResult(self, result, run_start_time, run_end_time):
+        tid = threading.get_ident()
+
+        self._lock.acquire()
+        self._results[tid] = {}
+        self._results[tid]['result'] = result
+        self._results[tid]['run_start_time'] = run_start_time 
+        self._results[tid]['run_end_time'] = run_end_time 
+        self._results[tid]['result'] = result
+        self._lock.release()
+
+    def wasSuccessful(self):
+        wasSuccessful = True
+        for tid in self._results.keys():
+            wasSuccessful = wasSuccessful and \
+                    self._results[tid]['result'].wasSuccessful()
+        return wasSuccessful
+
+    def stop(self):
+        for tid in self._results.keys():
+            self._results[tid]['result'].stop()
+
+    def logSummary(self, component, context_msg=''):
+        elapsed_time = (self.tc._run_end_time - self.tc._run_start_time)
+
+        self.tc.logger.info("SUMMARY:")
+        self.tc.logger.info("%s (%s) - Ran %d tests in %.3fs" % (component,
+            context_msg, len(self.tc._registry['cases']), elapsed_time))
+        if self.wasSuccessful():
+            msg = "%s - OK - All required tests passed" % component
+        else:
+            msg = "%s - FAIL - Required tests failed" % component
+        self.tc.logger.info(msg)
+
+    def logDetails(self):
+        if list(self._results):
+            tid = list(self._results)[0]
+            result = self._results[tid]['result']
+            result.logDetails()
+
+class _Worker(threading.Thread):
+    """Thread executing tasks from a given tasks queue"""
+    def __init__(self, tasks, result, stream):
+        threading.Thread.__init__(self)
+        self.tasks = tasks
+
+        self.result = result
+        self.stream = stream
+
+    def run(self):
+        while True:
+            try:
+                func, args, kargs = self.tasks.get(block=False)
+            except queue.Empty:
+                break
+
+            try:
+                run_start_time = time.time()
+                rc = func(*args, **kargs)
+                run_end_time = time.time()
+                self.result.addResult(rc, run_start_time, run_end_time)
+                self.stream.finish()
+            except Exception as e:
+                print(e)
+            finally:
+                self.tasks.task_done()
+
+class _ThreadedPool:
+    """Pool of threads consuming tasks from a queue"""
+    def __init__(self, num_workers, num_tasks, stream=None, result=None):
+        self.tasks = queue.Queue(num_tasks)
+        self.workers = []
+
+        for _ in range(num_workers):
+            worker = _Worker(self.tasks, result, stream)
+            self.workers.append(worker)
+
+    def start(self):
+        for worker in self.workers:
+            worker.start()
+
+    def add_task(self, func, *args, **kargs):
+        """Add a task to the queue"""
+        self.tasks.put((func, args, kargs))
+
+    def wait_completion(self):
+        """Wait for completion of all the tasks in the queue"""
+        self.tasks.join()
+        for worker in self.workers:
+            worker.join()
+
+class OETestRunnerThreaded(OETestRunner):
+    streamLoggerClass = OEStreamLoggerThreaded
+
+    def __init__(self, tc, *args, **kwargs):
+        super(OETestRunnerThreaded, self).__init__(tc, *args, **kwargs)
+        self.resultclass = OETestResultThreadedInternal # XXX: XML reporting overrides at __init__
+
+    def run(self, suites):
+        result = OETestResultThreaded(self.tc)
+
+        pool = _ThreadedPool(len(suites), len(suites), stream=self.stream,
+                result=result)
+        for s in suites:
+            pool.add_task(super(OETestRunnerThreaded, self).run, s)
+        pool.start()
+        pool.wait_completion()
+        result._fill_tc_results()
+
+        return result
+
+class OETestContextThreaded(OETestContext):
+    loaderClass = OETestLoaderThreaded
+    runnerClass = OETestRunnerThreaded
+
+    def loadTests(self, module_paths, modules=[], tests=[],
+            modules_manifest="", modules_required=[], filters={}, process_num=0):
+        if modules_manifest:
+            modules = self._read_modules_from_manifest(modules_manifest)
+
+        self.loader = self.loaderClass(self, module_paths, modules, tests,
+                modules_required, filters, process_num)
+        self.suites = self.loader.discover()
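
_ThreadedPool above is a small queue-plus-workers pool: suites are queued as tasks and each worker thread drains the queue until it is empty. A generic, simplified sketch of the pattern (not the OEQA classes themselves):

    import queue
    import threading

    def run_in_pool(tasks, num_workers):
        q = queue.Queue()
        for task in tasks:
            q.put(task)

        def worker():
            # drain the shared queue until it is empty, like _Worker.run()
            while True:
                try:
                    func = q.get(block=False)
                except queue.Empty:
                    break
                try:
                    func()
                finally:
                    q.task_done()

        threads = [threading.Thread(target=worker) for _ in range(num_workers)]
        for t in threads:
            t.start()
        q.join()
        for t in threads:
            t.join()

    run_in_pool([lambda i=i: print('task', i) for i in range(4)], num_workers=2)
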
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/runtime/case.py b/import-layers/yocto-poky/meta/lib/oeqa/runtime/case.py
index c1485c9..2f190ac 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/runtime/case.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/runtime/case.py
@@ -8,10 +8,10 @@
     # target instance set by OERuntimeTestLoader.
     target = None
 
-    def _oeSetUp(self):
-        super(OERuntimeTestCase, self)._oeSetUp()
+    def setUp(self):
+        super(OERuntimeTestCase, self).setUp()
         install_package(self)
 
-    def _oeTearDown(self):
-        super(OERuntimeTestCase, self)._oeTearDown()
+    def tearDown(self):
+        super(OERuntimeTestCase, self).tearDown()
         uninstall_package(self)
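
The case.py hunk above replaces the framework-specific _oeSetUp/_oeTearDown hooks with plain unittest setUp/tearDown overrides that call the parent first and then handle per-test packages. A standalone sketch of that shape, with stand-ins for install_package/uninstall_package:

    import unittest

    def install_package(case):
        print('install packages for', case.id())

    def uninstall_package(case):
        print('uninstall packages for', case.id())

    class RuntimeCaseSketch(unittest.TestCase):
        def setUp(self):
            super().setUp()
            install_package(self)

        def tearDown(self):
            super().tearDown()
            uninstall_package(self)

        def test_noop(self):
            self.assertTrue(True)

    unittest.main(argv=['sketch'], exit=False, verbosity=0)
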
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/_ptest.py b/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/_ptest.py
deleted file mode 100644
index aaed9a5..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/_ptest.py
+++ /dev/null
@@ -1,103 +0,0 @@
-import os
-import shutil
-import subprocess
-
-from oeqa.runtime.case import OERuntimeTestCase
-from oeqa.core.decorator.depends import OETestDepends
-from oeqa.core.decorator.oeid import OETestID
-from oeqa.core.decorator.data import skipIfNotDataVar, skipIfNotFeature
-from oeqa.runtime.decorator.package import OEHasPackage
-
-from oeqa.runtime.cases.dnf import DnfTest
-from oeqa.utils.logparser import *
-from oeqa.utils.httpserver import HTTPService
-
-class PtestRunnerTest(DnfTest):
-
-    @classmethod
-    def setUpClass(cls):
-        rpm_deploy = os.path.join(cls.tc.td['DEPLOY_DIR'], 'rpm')
-        cls.repo_server = HTTPService(rpm_deploy, cls.tc.target.server_ip)
-        cls.repo_server.start()
-
-    @classmethod
-    def tearDownClass(cls):
-        cls.repo_server.stop()
-
-    # a ptest log parser
-    def parse_ptest(self, logfile):
-        parser = Lparser(test_0_pass_regex="^PASS:(.+)",
-                         test_0_fail_regex="^FAIL:(.+)",
-                         section_0_begin_regex="^BEGIN: .*/(.+)/ptest",
-                         section_0_end_regex="^END: .*/(.+)/ptest")
-        parser.init()
-        result = Result()
-
-        with open(logfile, errors='replace') as f:
-            for line in f:
-                result_tuple = parser.parse_line(line)
-                if not result_tuple:
-                    continue
-                result_tuple = line_type, category, status, name = parser.parse_line(line)
-
-                if line_type == 'section' and status == 'begin':
-                    current_section = name
-                    continue
-
-                if line_type == 'section' and status == 'end':
-                    current_section = None
-                    continue
-
-                if line_type == 'test' and status == 'pass':
-                    result.store(current_section, name, status)
-                    continue
-
-                if line_type == 'test' and status == 'fail':
-                    result.store(current_section, name, status)
-                    continue
-
-        result.sort_tests()
-        return result
-
-    def _install_ptest_packages(self):
-        # Get ptest packages that can be installed in the image.
-        packages_dir = os.path.join(self.tc.td['DEPLOY_DIR'], 'rpm')
-        ptest_pkgs = [pkg[:pkg.find('-ptest')+6]
-                          for _, _, filenames in os.walk(packages_dir)
-                          for pkg in filenames
-                          if 'ptest' in pkg
-                          and pkg[:pkg.find('-ptest')] in self.tc.image_packages]
-
-        repo_url = 'http://%s:%s' % (self.target.server_ip,
-                                     self.repo_server.port)
-        dnf_options = ('--repofrompath=oe-ptest-repo,%s '
-                       '--nogpgcheck '
-                       'install -y' % repo_url)
-        self.dnf('%s %s ptest-runner' % (dnf_options, ' '.join(ptest_pkgs)))
-
-    @skipIfNotFeature('package-management',
-                      'Test requires package-management to be in DISTRO_FEATURES')
-    @skipIfNotFeature('ptest',
-                      'Test requires package-management to be in DISTRO_FEATURES')
-    @skipIfNotDataVar('IMAGE_PKGTYPE', 'rpm',
-                      'RPM is not the primary package manager')
-    @OEHasPackage(['dnf'])
-    @OETestDepends(['ssh.SSHTest.test_ssh'])
-    def test_ptestrunner(self):
-        self.ptest_log = os.path.join(self.tc.td['TEST_LOG_DIR'],
-                                      'ptest-%s.log' % self.tc.td['DATETIME'])
-        self._install_ptest_packages()
-
-        (runnerstatus, result) = self.target.run('/usr/bin/ptest-runner > /tmp/ptest.log 2>&1', 0)
-        #exit code is !=0 even if ptest-runner executes because some ptest tests fail.
-        self.assertTrue(runnerstatus != 127, msg="Cannot execute ptest-runner!")
-        self.target.copyFrom('/tmp/ptest.log', self.ptest_log)
-        shutil.copyfile(self.ptest_log, "ptest.log")
-
-        result = self.parse_ptest("ptest.log")
-        log_results_to_location = "./results"
-        if os.path.exists(log_results_to_location):
-            shutil.rmtree(log_results_to_location)
-        os.makedirs(log_results_to_location)
-
-        result.log_as_files(log_results_to_location, test_status = ['pass','fail'])
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/buildcpio.py b/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/buildcpio.py
index 59edc9c..79b22d0 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/buildcpio.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/buildcpio.py
@@ -9,8 +9,7 @@
 
     @classmethod
     def setUpClass(cls):
-        uri = 'https://ftp.gnu.org/gnu/cpio'
-        uri = '%s/cpio-2.12.tar.bz2' % uri
+        uri = 'https://downloads.yoctoproject.org/mirror/sources/cpio-2.12.tar.gz'
         cls.project = TargetBuildProject(cls.tc.target,
                                          uri,
                                          dl_dir = cls.tc.td['DL_DIR'])
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/parselogs.py b/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/parselogs.py
index 6e92946..1f36c61 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/parselogs.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/parselogs.py
@@ -86,9 +86,11 @@
     'qemumips' : [
         'Failed to load module "glx"',
         'pci 0000:00:00.0: [Firmware Bug]: reg 0x..: invalid BAR (can\'t size)',
+        'cacheinfo: Failed to find cpu0 device node',
         ] + common_errors,
     'qemumips64' : [
         'pci 0000:00:00.0: [Firmware Bug]: reg 0x..: invalid BAR (can\'t size)',
+        'cacheinfo: Failed to find cpu0 device node',
          ] + common_errors,
     'qemuppc' : [
         'PCI 0000:00 Cannot reserve Legacy IO [io  0x0000-0x0fff]',
@@ -151,6 +153,8 @@
         'failed to read out thermal zone',
         'Bluetooth: hci0: Setting Intel event mask failed',
         'ttyS2 - failed to request DMA',
+        'Bluetooth: hci0: Failed to send firmware data (-38)',
+        'atkbd serio0: Failed to enable keyboard on isa0060/serio0',
         ] + x86_common,
     'crownbay' : x86_common,
     'genericx86' : x86_common,
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/ptest.py b/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/ptest.py
new file mode 100644
index 0000000..ec8c038
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/runtime/cases/ptest.py
@@ -0,0 +1,84 @@
+import os
+
+from oeqa.runtime.case import OERuntimeTestCase
+from oeqa.core.decorator.depends import OETestDepends
+from oeqa.core.decorator.oeid import OETestID
+from oeqa.core.decorator.data import skipIfNotFeature
+from oeqa.utils.logparser import Lparser, Result
+
+class PtestRunnerTest(OERuntimeTestCase):
+
+    # a ptest log parser
+    def parse_ptest(self, logfile):
+        parser = Lparser(test_0_pass_regex="^PASS:(.+)",
+                         test_0_fail_regex="^FAIL:(.+)",
+                         test_0_skip_regex="^SKIP:(.+)",
+                         section_0_begin_regex="^BEGIN: .*/(.+)/ptest",
+                         section_0_end_regex="^END: .*/(.+)/ptest")
+        parser.init()
+        result = Result()
+
+        with open(logfile, errors='replace') as f:
+            for line in f:
+                result_tuple = parser.parse_line(line)
+                if not result_tuple:
+                    continue
+                line_type, category, status, name = result_tuple
+
+                if line_type == 'section' and status == 'begin':
+                    current_section = name
+                    continue
+
+                if line_type == 'section' and status == 'end':
+                    current_section = None
+                    continue
+
+                if line_type == 'test' and status == 'pass':
+                    result.store(current_section, name, status)
+                    continue
+
+                if line_type == 'test' and status == 'fail':
+                    result.store(current_section, name, status)
+                    continue
+
+                if line_type == 'test' and status == 'skip':
+                    result.store(current_section, name, status)
+                    continue
+
+        result.sort_tests()
+        return result
+
+    @OETestID(1600)
+    @skipIfNotFeature('ptest', 'Test requires ptest to be in DISTRO_FEATURES')
+    @skipIfNotFeature('ptest-pkgs', 'Test requires ptest-pkgs to be in IMAGE_FEATURES')
+    @OETestDepends(['ssh.SSHTest.test_ssh'])
+    def test_ptestrunner(self):
+        import datetime
+
+        test_log_dir = self.td.get('TEST_LOG_DIR', '')
+        # TEST_LOG_DIR may be empty when testimage is added after
+        # testdata.json is generated.
+        if not test_log_dir:
+            test_log_dir = os.path.join(self.td.get('WORKDIR', ''), 'testimage')
+        # Don't use self.td.get('DATETIME'): it comes from testdata.json, is
+        # not up to date, and may cause "File exists" errors when re-run.
+        timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
+        ptest_log_dir_link = os.path.join(test_log_dir, 'ptest_log')
+        ptest_log_dir = '%s.%s' % (ptest_log_dir_link, timestamp)
+        ptest_runner_log = os.path.join(ptest_log_dir, 'ptest-runner.log')
+
+        status, output = self.target.run('ptest-runner', 0)
+        os.makedirs(ptest_log_dir)
+        with open(ptest_runner_log, 'w') as f:
+            f.write(output)
+
+        # status != 0 is OK since some ptest tests may fail
+        self.assertTrue(status != 127, msg="Cannot execute ptest-runner!")
+
+        # Parse and save results
+        parse_result = self.parse_ptest(ptest_runner_log)
+        parse_result.log_as_files(ptest_log_dir, test_status = ['pass','fail', 'skip'])
+        if os.path.exists(ptest_log_dir_link):
+            # Remove the old link to create a new one
+            os.remove(ptest_log_dir_link)
+        os.symlink(os.path.basename(ptest_log_dir), ptest_log_dir_link)
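
The new ptest.py test writes each run into a timestamped directory and repoints a stable ptest_log symlink at it. A small sketch of that rotation idiom (rotate_log_dir is an invented name, not part of the patch):

    import datetime
    import os
    import tempfile

    def rotate_log_dir(base_dir, link_name='ptest_log'):
        stamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
        link_path = os.path.join(base_dir, link_name)
        log_dir = '%s.%s' % (link_path, stamp)
        os.makedirs(log_dir)
        # repoint the stable link at the newest timestamped directory
        if os.path.lexists(link_path):
            os.remove(link_path)
        os.symlink(os.path.basename(log_dir), link_path)
        return log_dir

    print(rotate_log_dir(tempfile.mkdtemp()))
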
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/runtime/context.py b/import-layers/yocto-poky/meta/lib/oeqa/runtime/context.py
index c4cd76c..0294003 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/runtime/context.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/runtime/context.py
@@ -92,6 +92,12 @@
     def getTarget(target_type, logger, target_ip, server_ip, **kwargs):
         target = None
 
+        if target_ip:
+            target_ip_port = target_ip.split(':')
+            if len(target_ip_port) == 2:
+                target_ip = target_ip_port[0]
+                kwargs['port'] = target_ip_port[1]
+
         if target_type == 'simpleremote':
             target = OESSHTarget(logger, target_ip, server_ip, **kwargs)
         elif target_type == 'qemu':
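
The context.py hunk above lets the target address carry an optional port as ip:port. A tiny sketch of that parsing (split_target is an invented name; the patch itself uses str.split):

    def split_target(target_ip):
        host, _, port = target_ip.partition(':')
        return (host, port or None)

    assert split_target('192.168.7.2') == ('192.168.7.2', None)
    assert split_target('192.168.7.2:2222') == ('192.168.7.2', '2222')
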
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/buildgalculator.py b/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/buildgalculator.py
index 42e8ddb..780afcc 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/buildgalculator.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/buildgalculator.py
@@ -8,7 +8,7 @@
 
     @classmethod
     def setUpClass(self):
-        if not (self.tc.hasTargetPackage("gtk+3") or\
+        if not (self.tc.hasTargetPackage("gtk\+3") or\
                 self.tc.hasTargetPackage("libgtk-3.0")):
             raise unittest.SkipTest("GalculatorTest class: SDK don't support gtk+3")
 
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/buildlzip.py b/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/buildlzip.py
index 2a53b78..3a89ce8 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/buildlzip.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/buildlzip.py
@@ -17,7 +17,8 @@
 
         machine = self.td.get("MACHINE")
 
-        if not self.tc.hasHostPackage("packagegroup-cross-canadian-%s" % machine):
+        if not (self.tc.hasTargetPackage("packagegroup-cross-canadian-%s" % machine) or
+                self.tc.hasTargetPackage("gcc")):
             raise unittest.SkipTest("SDK doesn't contain a cross-canadian toolchain")
 
     def test_lzip(self):
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/gcc.py b/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/gcc.py
index 74ad2a2..d11f4b6 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/gcc.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/gcc.py
@@ -18,7 +18,8 @@
 
     def setUp(self):
         machine = self.td.get("MACHINE")
-        if not self.tc.hasHostPackage("packagegroup-cross-canadian-%s" % machine):
+        if not (self.tc.hasTargetPackage("packagegroup-cross-canadian-%s" % machine) or
+                self.tc.hasTargetPackage("gcc")):
             raise unittest.SkipTest("GccCompileTest class: SDK doesn't contain a cross-canadian toolchain")
 
     def test_gcc_compile(self):
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/perl.py b/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/perl.py
index e1bded2..8085678 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/perl.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/perl.py
@@ -8,7 +8,8 @@
 class PerlTest(OESDKTestCase):
     @classmethod
     def setUpClass(self):
-        if not self.tc.hasHostPackage("nativesdk-perl"):
+        if not (self.tc.hasHostPackage("nativesdk-perl") or
+                self.tc.hasHostPackage("perl-native")):
             raise unittest.SkipTest("No perl package in the SDK")
 
         for f in ['test.pl']:
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/python.py b/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/python.py
index 94a296f..72dfcc7 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/python.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/sdk/cases/python.py
@@ -8,7 +8,8 @@
 class PythonTest(OESDKTestCase):
     @classmethod
     def setUpClass(self):
-        if not self.tc.hasHostPackage("nativesdk-python"):
+        if not (self.tc.hasHostPackage("nativesdk-python") or
+                self.tc.hasHostPackage("python-native")):
             raise unittest.SkipTest("No python package in the SDK")
 
         for f in ['test.py']:
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/sdk/context.py b/import-layers/yocto-poky/meta/lib/oeqa/sdk/context.py
index 0189ed8..b3d7c75 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/sdk/context.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/sdk/context.py
@@ -6,9 +6,10 @@
 import glob
 import re
 
-from oeqa.core.context import OETestContext, OETestContextExecutor
+from oeqa.core.context import OETestContextExecutor
+from oeqa.core.threaded import OETestContextThreaded
 
-class OESDKTestContext(OETestContext):
+class OESDKTestContext(OETestContextThreaded):
     sdk_files_dir = os.path.join(os.path.dirname(os.path.abspath(__file__)), "files")
 
     def __init__(self, td=None, logger=None, sdk_dir=None, sdk_env=None,
@@ -44,8 +45,6 @@
     default_test_data = None
 
     def register_commands(self, logger, subparsers):
-        import argparse_oe
-
         super(OESDKTestContextExecutor, self).register_commands(logger, subparsers)
 
         sdk_group = self.parser.add_argument_group('sdk options')
@@ -109,6 +108,8 @@
             log(env)
 
     def run(self, logger, args):
+        import argparse_oe
+
         if not args.sdk_dir:
             raise argparse_oe.ArgumentUsageError("No SDK directory "\
                    "specified please do, --sdk-dir SDK_DIR", self.name)
@@ -128,6 +129,6 @@
                    "environment (%s) specified" % args.sdk_env, self.name)
 
         self.sdk_env = sdk_envs[args.sdk_env]
-        super(OESDKTestContextExecutor, self).run(logger, args)
+        return super(OESDKTestContextExecutor, self).run(logger, args)
 
 _executor_class = OESDKTestContextExecutor
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/sdkext/cases/devtool.py b/import-layers/yocto-poky/meta/lib/oeqa/sdkext/cases/devtool.py
index a01bc0b..ea90517 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/sdkext/cases/devtool.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/sdkext/cases/devtool.py
@@ -1,12 +1,14 @@
 # Copyright (C) 2016 Intel Corporation
 # Released under the MIT license (see COPYING.MIT)
 
+import os
 import shutil
 import subprocess
 
 from oeqa.sdkext.case import OESDKExtTestCase
 from oeqa.core.decorator.depends import OETestDepends
 from oeqa.core.decorator.oeid import OETestID
+from oeqa.utils.httpserver import HTTPService
 
 class DevtoolTest(OESDKExtTestCase):
     @classmethod
@@ -95,3 +97,33 @@
             self._run('devtool build %s ' % package_nodejs)
         finally:
             self._run('devtool reset %s '% package_nodejs)
+
+class SdkUpdateTest(OESDKExtTestCase):
+    @classmethod
+    def setUpClass(self):
+        self.publish_dir = os.path.join(self.tc.sdk_dir, 'esdk_publish')
+        if os.path.exists(self.publish_dir):
+            shutil.rmtree(self.publish_dir)
+        os.mkdir(self.publish_dir)
+
+        base_tcname = "%s/%s" % (self.td.get("SDK_DEPLOY", ''),
+            self.td.get("TOOLCHAINEXT_OUTPUTNAME", ''))
+        tcname_new = "%s-new.sh" % base_tcname
+        if not os.path.exists(tcname_new):
+            tcname_new = "%s.sh" % base_tcname
+
+        cmd = 'oe-publish-sdk %s %s' % (tcname_new, self.publish_dir)
+        subprocess.check_output(cmd, shell=True)
+
+        self.http_service = HTTPService(self.publish_dir)
+        self.http_service.start()
+
+        self.http_url = "http://127.0.0.1:%d" % self.http_service.port
+
+    def test_sdk_update_http(self):
+        output = self._run("devtool sdk-update \"%s\"" % self.http_url)
+
+    @classmethod
+    def tearDownClass(self):
+        self.http_service.stop()
+        shutil.rmtree(self.publish_dir)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/sdkext/cases/sdk_update.py b/import-layers/yocto-poky/meta/lib/oeqa/sdkext/cases/sdk_update.py
deleted file mode 100644
index 2f8598b..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/sdkext/cases/sdk_update.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# Copyright (C) 2016 Intel Corporation
-# Released under the MIT license (see COPYING.MIT)
-
-import os
-import shutil
-import subprocess
-
-from oeqa.sdkext.case import OESDKExtTestCase
-from oeqa.utils.httpserver import HTTPService
-
-class SdkUpdateTest(OESDKExtTestCase):
-    @classmethod
-    def setUpClass(self):
-        self.publish_dir = os.path.join(self.tc.sdk_dir, 'esdk_publish')
-        if os.path.exists(self.publish_dir):
-            shutil.rmtree(self.publish_dir)
-        os.mkdir(self.publish_dir)
-
-        base_tcname = "%s/%s" % (self.td.get("SDK_DEPLOY", ''),
-            self.td.get("TOOLCHAINEXT_OUTPUTNAME", ''))
-        tcname_new = "%s-new.sh" % base_tcname
-        if not os.path.exists(tcname_new):
-            tcname_new = "%s.sh" % base_tcname
-
-        cmd = 'oe-publish-sdk %s %s' % (tcname_new, self.publish_dir)
-        subprocess.check_output(cmd, shell=True)
-
-        self.http_service = HTTPService(self.publish_dir)
-        self.http_service.start()
-
-        self.http_url = "http://127.0.0.1:%d" % self.http_service.port
-
-    def test_sdk_update_http(self):
-        output = self._run("devtool sdk-update \"%s\"" % self.http_url)
-
-    @classmethod
-    def tearDownClass(self):
-        self.http_service.stop()
-        shutil.rmtree(self.publish_dir)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/__init__.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/__init__.py
deleted file mode 100644
index 3ad9513..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from pkgutil import extend_path
-__path__ = extend_path(__path__, __name__)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/_sstatetests_noauto.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/_sstatetests_noauto.py
deleted file mode 100644
index fc9ae7e..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/_sstatetests_noauto.py
+++ /dev/null
@@ -1,95 +0,0 @@
-import datetime
-import unittest
-import os
-import re
-import shutil
-
-import oeqa.utils.ftools as ftools
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_test_layer
-from oeqa.selftest.sstate import SStateBase
-
-
-class RebuildFromSState(SStateBase):
-
-    @classmethod
-    def setUpClass(self):
-        self.builddir = os.path.join(os.environ.get('BUILDDIR'))
-
-    def get_dep_targets(self, primary_targets):
-        found_targets = []
-        bitbake("-g " + ' '.join(map(str, primary_targets)))
-        with open(os.path.join(self.builddir, 'pn-buildlist'), 'r') as pnfile:
-            found_targets = pnfile.read().splitlines()
-        return found_targets
-
-    def configure_builddir(self, builddir):
-        os.mkdir(builddir)
-        self.track_for_cleanup(builddir)
-        os.mkdir(os.path.join(builddir, 'conf'))
-        shutil.copyfile(os.path.join(os.environ.get('BUILDDIR'), 'conf/local.conf'), os.path.join(builddir, 'conf/local.conf'))
-        config = {}
-        config['default_sstate_dir'] = "SSTATE_DIR ?= \"${TOPDIR}/sstate-cache\""
-        config['null_sstate_mirrors'] = "SSTATE_MIRRORS = \"\""
-        config['default_tmp_dir'] = "TMPDIR = \"${TOPDIR}/tmp\""
-        for key in config:
-            ftools.append_file(os.path.join(builddir, 'conf/selftest.inc'), config[key])
-        shutil.copyfile(os.path.join(os.environ.get('BUILDDIR'), 'conf/bblayers.conf'), os.path.join(builddir, 'conf/bblayers.conf'))
-        try:
-            shutil.copyfile(os.path.join(os.environ.get('BUILDDIR'), 'conf/auto.conf'), os.path.join(builddir, 'conf/auto.conf'))
-        except:
-            pass
-
-    def hardlink_tree(self, src, dst):
-        os.mkdir(dst)
-        self.track_for_cleanup(dst)
-        for root, dirs, files in os.walk(src):
-            if root == src:
-                continue
-            os.mkdir(os.path.join(dst, root.split(src)[1][1:]))
-            for sstate_file in files:
-                os.link(os.path.join(root, sstate_file), os.path.join(dst, root.split(src)[1][1:], sstate_file))
-
-    def run_test_sstate_rebuild(self, primary_targets, relocate=False, rebuild_dependencies=False):
-        buildA = os.path.join(self.builddir, 'buildA')
-        if relocate:
-            buildB = os.path.join(self.builddir, 'buildB')
-        else:
-            buildB = buildA
-
-        if rebuild_dependencies:
-            rebuild_targets = self.get_dep_targets(primary_targets)
-        else:
-            rebuild_targets = primary_targets
-
-        self.configure_builddir(buildA)
-        runCmd((". %s/oe-init-build-env %s && " % (get_bb_var('COREBASE'), buildA)) + 'bitbake  ' + ' '.join(map(str, primary_targets)), shell=True, executable='/bin/bash')
-        self.hardlink_tree(os.path.join(buildA, 'sstate-cache'), os.path.join(self.builddir, 'sstate-cache-buildA'))
-        shutil.rmtree(buildA)
-
-        failed_rebuild = []
-        failed_cleansstate = []
-        for target in rebuild_targets:
-            self.configure_builddir(buildB)
-            self.hardlink_tree(os.path.join(self.builddir, 'sstate-cache-buildA'), os.path.join(buildB, 'sstate-cache'))
-
-            result_cleansstate = runCmd((". %s/oe-init-build-env %s && " % (get_bb_var('COREBASE'), buildB)) + 'bitbake -ccleansstate ' + target, ignore_status=True, shell=True, executable='/bin/bash')
-            if not result_cleansstate.status == 0:
-                failed_cleansstate.append(target)
-                shutil.rmtree(buildB)
-                continue
-
-            result_build = runCmd((". %s/oe-init-build-env %s && " % (get_bb_var('COREBASE'), buildB)) + 'bitbake ' + target, ignore_status=True, shell=True, executable='/bin/bash')
-            if not result_build.status == 0:
-                failed_rebuild.append(target)
-
-            shutil.rmtree(buildB)
-
-        self.assertFalse(failed_rebuild, msg="The following recipes have failed to rebuild: %s" % ' '.join(map(str, failed_rebuild)))
-        self.assertFalse(failed_cleansstate, msg="The following recipes have failed cleansstate(all others have passed both cleansstate and rebuild from sstate tests): %s" % ' '.join(map(str, failed_cleansstate)))
-
-    def test_sstate_relocation(self):
-        self.run_test_sstate_rebuild(['core-image-sato-sdk'], relocate=True, rebuild_dependencies=True)
-
-    def test_sstate_rebuild(self):
-        self.run_test_sstate_rebuild(['core-image-sato-sdk'], relocate=False, rebuild_dependencies=True)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/archiver.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/archiver.py
deleted file mode 100644
index 7f01c36..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/archiver.py
+++ /dev/null
@@ -1,119 +0,0 @@
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import bitbake, get_bb_vars
-from oeqa.utils.decorators import testcase
-import glob
-import os
-import shutil
-
-
-class Archiver(oeSelfTest):
-
-    @testcase(1345)
-    def test_archiver_allows_to_filter_on_recipe_name(self):
-        """
-        Summary:     The archiver should offer the possibility to filter on the recipe. (#6929)
-        Expected:    1. Included recipe (busybox) should be included
-                     2. Excluded recipe (zlib) should be excluded
-        Product:     oe-core
-        Author:      Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        """
-
-        include_recipe = 'busybox'
-        exclude_recipe = 'zlib'
-
-        features = 'INHERIT += "archiver"\n'
-        features += 'ARCHIVER_MODE[src] = "original"\n'
-        features += 'COPYLEFT_PN_INCLUDE = "%s"\n' % include_recipe
-        features += 'COPYLEFT_PN_EXCLUDE = "%s"\n' % exclude_recipe
-        self.write_config(features)
-
-        bitbake('-c clean %s %s' % (include_recipe, exclude_recipe))
-        bitbake("-c deploy_archives %s %s" % (include_recipe, exclude_recipe))
-
-        bb_vars = get_bb_vars(['DEPLOY_DIR_SRC', 'TARGET_SYS'])
-        src_path = os.path.join(bb_vars['DEPLOY_DIR_SRC'], bb_vars['TARGET_SYS'])
-
-        # Check that include_recipe was included
-        included_present = len(glob.glob(src_path + '/%s-*' % include_recipe))
-        self.assertTrue(included_present, 'Recipe %s was not included.' % include_recipe)
-
-        # Check that exclude_recipe was excluded
-        excluded_present = len(glob.glob(src_path + '/%s-*' % exclude_recipe))
-        self.assertFalse(excluded_present, 'Recipe %s was not excluded.' % exclude_recipe)
-
-
-    def test_archiver_filters_by_type(self):
-        """
-        Summary:     The archiver is documented to filter on the recipe type.
-        Expected:    1. included recipe type (target) should be included
-                     2. other types should be excluded
-        Product:     oe-core
-        Author:      André Draszik <adraszik@tycoint.com>
-        """
-
-        target_recipe = 'initscripts'
-        native_recipe = 'zlib-native'
-
-        features = 'INHERIT += "archiver"\n'
-        features += 'ARCHIVER_MODE[src] = "original"\n'
-        features += 'COPYLEFT_RECIPE_TYPES = "target"\n'
-        self.write_config(features)
-
-        bitbake('-c clean %s %s' % (target_recipe, native_recipe))
-        bitbake("%s -c deploy_archives %s" % (target_recipe, native_recipe))
-
-        bb_vars = get_bb_vars(['DEPLOY_DIR_SRC', 'TARGET_SYS', 'BUILD_SYS'])
-        src_path_target = os.path.join(bb_vars['DEPLOY_DIR_SRC'], bb_vars['TARGET_SYS'])
-        src_path_native = os.path.join(bb_vars['DEPLOY_DIR_SRC'], bb_vars['BUILD_SYS'])
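-        # Archived sources are deployed per system type: target recipes under ${DEPLOY_DIR_SRC}/${TARGET_SYS}, native recipes under ${DEPLOY_DIR_SRC}/${BUILD_SYS}.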
-
-        # Check that target_recipe was included
-        included_present = len(glob.glob(src_path_target + '/%s-*' % target_recipe))
-        self.assertTrue(included_present, 'Recipe %s was not included.' % target_recipe)
-
-        # Check that native_recipe was excluded
-        excluded_present = len(glob.glob(src_path_native + '/%s-*' % native_recipe))
-        self.assertFalse(excluded_present, 'Recipe %s was not excluded.' % native_recipe)
-
-    def test_archiver_filters_by_type_and_name(self):
-        """
-        Summary:     Test that the archiver archives by recipe type, taking the
-                     recipe name into account.
-        Expected:    1. included recipe type (target) should be included
-                     2. other types should be excluded
-                     3. recipe by name should be included / excluded,
-                        overriding previous decision by type
-        Product:     oe-core
-        Author:      André Draszik <adraszik@tycoint.com>
-        """
-
-        target_recipes = [ 'initscripts', 'zlib' ]
-        native_recipes = [ 'update-rc.d-native', 'zlib-native' ]
-
-        features = 'INHERIT += "archiver"\n'
-        features += 'ARCHIVER_MODE[src] = "original"\n'
-        features += 'COPYLEFT_RECIPE_TYPES = "target"\n'
-        features += 'COPYLEFT_PN_INCLUDE = "%s"\n' % native_recipes[1]
-        features += 'COPYLEFT_PN_EXCLUDE = "%s"\n' % target_recipes[1]
-        self.write_config(features)
-
-        bitbake('-c clean %s %s' % (' '.join(target_recipes), ' '.join(native_recipes)))
-        bitbake('-c deploy_archives %s %s' % (' '.join(target_recipes), ' '.join(native_recipes)))
-
-        bb_vars = get_bb_vars(['DEPLOY_DIR_SRC', 'TARGET_SYS', 'BUILD_SYS'])
-        src_path_target = os.path.join(bb_vars['DEPLOY_DIR_SRC'], bb_vars['TARGET_SYS'])
-        src_path_native = os.path.join(bb_vars['DEPLOY_DIR_SRC'], bb_vars['BUILD_SYS'])
-
-        # Check that target_recipes[0] and native_recipes[1] were included
-        included_present = len(glob.glob(src_path_target + '/%s-*' % target_recipes[0]))
-        self.assertTrue(included_present, 'Recipe %s was not included.' % target_recipes[0])
-
-        included_present = len(glob.glob(src_path_native + '/%s-*' % native_recipes[1]))
-        self.assertTrue(included_present, 'Recipe %s was not included.' % native_recipes[1])
-
-        # Check that native_recipes[0] and target_recipes[1] were excluded
-        excluded_present = len(glob.glob(src_path_native + '/%s-*' % native_recipes[0]))
-        self.assertFalse(excluded_present, 'Recipe %s was not excluded.' % native_recipes[0])
-
-        excluded_present = len(glob.glob(src_path_target + '/%s-*' % target_recipes[1]))
-        self.assertFalse(excluded_present, 'Recipe %s was not excluded.' % target_recipes[1])
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/base.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/base.py
deleted file mode 100644
index 47a8ea8..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/base.py
+++ /dev/null
@@ -1,235 +0,0 @@
-# Copyright (c) 2013 Intel Corporation
-#
-# Released under the MIT license (see COPYING.MIT)
-
-
-# DESCRIPTION
-# Base class inherited by test classes in meta/lib/oeqa/selftest
-
-import unittest
-import os
-import sys
-import shutil
-import logging
-import errno
-
-import oeqa.utils.ftools as ftools
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_test_layer
-from oeqa.utils.decorators import LogResults
-from random import choice
-import glob
-
-@LogResults
-class oeSelfTest(unittest.TestCase):
-
-    log = logging.getLogger("selftest.base")
-    longMessage = True
-
-    def __init__(self, methodName="runTest"):
-        self.builddir = os.environ.get("BUILDDIR")
-        self.localconf_path = os.path.join(self.builddir, "conf/local.conf")
-        self.localconf_backup = os.path.join(self.builddir, "conf/local.bk")
-        self.testinc_path = os.path.join(self.builddir, "conf/selftest.inc")
-        self.local_bblayers_path = os.path.join(self.builddir, "conf/bblayers.conf")
-        self.local_bblayers_backup = os.path.join(self.builddir,
-                                                  "conf/bblayers.bk")
-        self.testinc_bblayers_path = os.path.join(self.builddir, "conf/bblayers.inc")
-        self.machineinc_path = os.path.join(self.builddir, "conf/machine.inc")
-        self.testlayer_path = oeSelfTest.testlayer_path
-        self._extra_tear_down_commands = []
-        self._track_for_cleanup = [
-            self.testinc_path, self.testinc_bblayers_path,
-            self.machineinc_path, self.localconf_backup,
-            self.local_bblayers_backup]
-        super(oeSelfTest, self).__init__(methodName)
-
-    def setUp(self):
-        os.chdir(self.builddir)
-        # Check if local.conf or bblayers.conf files backup exists
-        # from a previous failed test and restore them
-        if os.path.isfile(self.localconf_backup) or os.path.isfile(
-                self.local_bblayers_backup):
-            self.log.debug("Found a local.conf and/or bblayers.conf backup \
-from a previously aborted test. Restoring these files now, but tests should \
-be re-executed from a clean environment to ensure accurate results.")
-            try:
-                shutil.copyfile(self.localconf_backup, self.localconf_path)
-            except OSError as e:
-                if e.errno != errno.ENOENT:
-                    raise
-            try:
-                shutil.copyfile(self.local_bblayers_backup,
-                                self.local_bblayers_path)
-            except OSError as e:
-                if e.errno != errno.ENOENT:
-                    raise
-        else:
-            # backup local.conf and bblayers.conf
-            shutil.copyfile(self.localconf_path, self.localconf_backup)
-            shutil.copyfile(self.local_bblayers_path,
-                            self.local_bblayers_backup)
-            self.log.debug("Creating local.conf and bblayers.conf backups.")
-        # we don't know what the previous test left around in config or inc files
-        # if it failed so we need a fresh start
-        try:
-            os.remove(self.testinc_path)
-        except OSError as e:
-            if e.errno != errno.ENOENT:
-                raise
-        for root, _, files in os.walk(self.testlayer_path):
-            for f in files:
-                if f == 'test_recipe.inc':
-                    os.remove(os.path.join(root, f))
-
-        for incl_file in [self.testinc_bblayers_path, self.machineinc_path]:
-            try:
-                os.remove(incl_file)
-            except OSError as e:
-                if e.errno != errno.ENOENT:
-                    raise
-
-        # Get CUSTOMMACHINE from env (set by --machine argument to oe-selftest)
-        custommachine = os.getenv('CUSTOMMACHINE')
-        if custommachine:
-            if custommachine == 'random':
-                machine = get_random_machine()
-            else:
-                machine = custommachine
-            machine_conf = 'MACHINE ??= "%s"\n' % machine
-            self.set_machine_config(machine_conf)
-            print('MACHINE: %s' % machine)
-
-        # tests might need their own setup
-        # but if they override this one they have to call
-        # super each time, so let's give them an alternative
-        self.setUpLocal()
-
-    def setUpLocal(self):
-        pass
-
-    def tearDown(self):
-        if self._extra_tear_down_commands:
-            failed_extra_commands = []
-            for command in self._extra_tear_down_commands:
-                result = runCmd(command, ignore_status=True)
-                if result.status != 0:
-                    failed_extra_commands.append(command)
-            if failed_extra_commands:
-                self.log.warning("tearDown commands have failed: %s" % ', '.join(map(str, failed_extra_commands)))
-                self.log.debug("Trying to move on.")
-            self._extra_tear_down_commands = []
-
-        if self._track_for_cleanup:
-            for path in self._track_for_cleanup:
-                if os.path.isdir(path):
-                    shutil.rmtree(path)
-                if os.path.isfile(path):
-                    os.remove(path)
-            self._track_for_cleanup = []
-
-        self.tearDownLocal()
-
-    def tearDownLocal(self):
-        pass
-
-    # add test specific commands to the tearDown method.
-    def add_command_to_tearDown(self, command):
-        self.log.debug("Adding command '%s' to tearDown for this test." % command)
-        self._extra_tear_down_commands.append(command)
-    # add test specific files or directories to be removed in the tearDown method
-    def track_for_cleanup(self, path):
-        self.log.debug("Adding path '%s' to be cleaned up when test is over" % path)
-        self._track_for_cleanup.append(path)
-
-    # write to <builddir>/conf/selftest.inc
-    def write_config(self, data):
-        self.log.debug("Writing to: %s\n%s\n" % (self.testinc_path, data))
-        ftools.write_file(self.testinc_path, data)
-
-        custommachine = os.getenv('CUSTOMMACHINE')
-        if custommachine and 'MACHINE' in data:
-            machine = get_bb_var('MACHINE')
-            self.log.warning('MACHINE overridden: %s' % machine)
-
-    # append to <builddir>/conf/selftest.inc
-    def append_config(self, data):
-        self.log.debug("Appending to: %s\n%s\n" % (self.testinc_path, data))
-        ftools.append_file(self.testinc_path, data)
-
-        custommachine = os.getenv('CUSTOMMACHINE')
-        if custommachine and 'MACHINE' in data:
-            machine = get_bb_var('MACHINE')
-            self.log.warning('MACHINE overridden: %s' % machine)
-
-    # remove data from <builddir>/conf/selftest.inc
-    def remove_config(self, data):
-        self.log.debug("Removing from: %s\n%s\n" % (self.testinc_path, data))
-        ftools.remove_from_file(self.testinc_path, data)
-
-    # write to meta-selftest/recipes-test/<recipe>/test_recipe.inc
-    def write_recipeinc(self, recipe, data):
-        inc_file = os.path.join(self.testlayer_path, 'recipes-test', recipe, 'test_recipe.inc')
-        self.log.debug("Writing to: %s\n%s\n" % (inc_file, data))
-        ftools.write_file(inc_file, data)
-
-    # append data to meta-selftest/recipes-test/<recipe>/test_recipe.inc
-    def append_recipeinc(self, recipe, data):
-        inc_file = os.path.join(self.testlayer_path, 'recipes-test', recipe, 'test_recipe.inc')
-        self.log.debug("Appending to: %s\n%s\n" % (inc_file, data))
-        ftools.append_file(inc_file, data)
-
-    # remove data from meta-selftest/recipes-test/<recipe>/test_recipe.inc
-    def remove_recipeinc(self, recipe, data):
-        inc_file = os.path.join(self.testlayer_path, 'recipes-test', recipe, 'test_recipe.inc')
-        self.log.debug("Removing from: %s\n%s\n" % (inc_file, data))
-        ftools.remove_from_file(inc_file, data)
-
-    # delete meta-selftest/recipes-test/<recipe>/test_recipe.inc file
-    def delete_recipeinc(self, recipe):
-        inc_file = os.path.join(self.testlayer_path, 'recipes-test', recipe, 'test_recipe.inc')
-        self.log.debug("Deleting file: %s" % inc_file)
-        try:
-            os.remove(inc_file)
-        except OSError as e:
-            if e.errno != errno.ENOENT:
-                raise
-
-    # write to <builddir>/conf/bblayers.inc
-    def write_bblayers_config(self, data):
-        self.log.debug("Writing to: %s\n%s\n" % (self.testinc_bblayers_path, data))
-        ftools.write_file(self.testinc_bblayers_path, data)
-
-    # append to <builddir>/conf/bblayers.inc
-    def append_bblayers_config(self, data):
-        self.log.debug("Appending to: %s\n%s\n" % (self.testinc_bblayers_path, data))
-        ftools.append_file(self.testinc_bblayers_path, data)
-
-    # remove data from <builddir>/conf/bblayers.inc
-    def remove_bblayers_config(self, data):
-        self.log.debug("Removing from: %s\n%s\n" % (self.testinc_bblayers_path, data))
-        ftools.remove_from_file(self.testinc_bblayers_path, data)
-
-    # write to <builddir>/conf/machine.inc
-    def set_machine_config(self, data):
-        self.log.debug("Writing to: %s\n%s\n" % (self.machineinc_path, data))
-        ftools.write_file(self.machineinc_path, data)
-
-
-def get_available_machines():
-    # Get a list of all available machines
-    bbpath = get_bb_var('BBPATH').split(':')
-    machines = []
-
-    for path in bbpath:
-        found_machines = glob.glob(os.path.join(path, 'conf', 'machine', '*.conf'))
-        if found_machines:
-            for i in found_machines:
-                # eg: '/home/<user>/poky/meta-intel/conf/machine/intel-core2-32.conf'
-                machines.append(os.path.splitext(os.path.basename(i))[0])
-
-    return machines
-
-
-def get_random_machine():
-    # Get a random machine
-    return choice(get_available_machines())
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/bblayers.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/bblayers.py
deleted file mode 100644
index cd658c5..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/bblayers.py
+++ /dev/null
@@ -1,99 +0,0 @@
-import unittest
-import os
-import logging
-import re
-import shutil
-
-import oeqa.utils.ftools as ftools
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, get_bb_var
-from oeqa.utils.decorators import testcase
-
-class BitbakeLayers(oeSelfTest):
-
-    @testcase(756)
-    def test_bitbakelayers_showcrossdepends(self):
-        result = runCmd('bitbake-layers show-cross-depends')
-        self.assertTrue('aspell' in result.output, msg = "No dependencies were shown. bitbake-layers show-cross-depends output: %s" % result.output)
-
-    @testcase(83)
-    def test_bitbakelayers_showlayers(self):
-        result = runCmd('bitbake-layers show-layers')
-        self.assertTrue('meta-selftest' in result.output, msg = "No layers were shown. bitbake-layers show-layers output: %s" % result.output)
-
-    @testcase(93)
-    def test_bitbakelayers_showappends(self):
-        recipe = "xcursor-transparent-theme"
-        bb_file = self.get_recipe_basename(recipe)
-        result = runCmd('bitbake-layers show-appends')
-        self.assertTrue(bb_file in result.output, msg="%s file was not recognised. bitbake-layers show-appends output: %s" % (bb_file, result.output))
-
-    @testcase(90)
-    def test_bitbakelayers_showoverlayed(self):
-        result = runCmd('bitbake-layers show-overlayed')
-        self.assertTrue('aspell' in result.output, msg="aspell overlayed recipe was not recognised bitbake-layers show-overlayed %s" % result.output)
-
-    @testcase(95)
-    def test_bitbakelayers_flatten(self):
-        recipe = "xcursor-transparent-theme"
-        recipe_path = "recipes-graphics/xcursor-transparent-theme"
-        recipe_file = self.get_recipe_basename(recipe)
-        testoutdir = os.path.join(self.builddir, 'test_bitbakelayers_flatten')
-        self.assertFalse(os.path.isdir(testoutdir), msg = "test_bitbakelayers_flatten should not exist at this point in time")
-        self.track_for_cleanup(testoutdir)
-        result = runCmd('bitbake-layers flatten %s' % testoutdir)
-        bb_file = os.path.join(testoutdir, recipe_path, recipe_file)
-        self.assertTrue(os.path.isfile(bb_file), msg = "Cannot find xcursor-transparent-theme_0.1.1.bb in the test_bitbakelayers_flatten local dir.")
-        contents = ftools.read_file(bb_file)
-        find_in_contents = re.search("##### bbappended from meta-selftest #####\n(.*\n)*include test_recipe.inc", contents)
-        self.assertTrue(find_in_contents, msg = "Flattening layers did not work. bitbake-layers flatten output: %s" % result.output)
-
-    @testcase(1195)
-    def test_bitbakelayers_add_remove(self):
-        test_layer = os.path.join(get_bb_var('COREBASE'), 'meta-skeleton')
-        result = runCmd('bitbake-layers show-layers')
-        self.assertNotIn('meta-skeleton', result.output, "This test cannot run with meta-skeleton in bblayers.conf. bitbake-layers show-layers output: %s" % result.output)
-        result = runCmd('bitbake-layers add-layer %s' % test_layer)
-        result = runCmd('bitbake-layers show-layers')
-        self.assertIn('meta-skeleton', result.output, msg = "Something wrong happened. meta-skeleton layer was not added to conf/bblayers.conf.  bitbake-layers show-layers output: %s" % result.output)
-        result = runCmd('bitbake-layers remove-layer %s' % test_layer)
-        result = runCmd('bitbake-layers show-layers')
-        self.assertNotIn('meta-skeleton', result.output, msg = "meta-skeleton should have been removed at this step.  bitbake-layers show-layers output: %s" % result.output)
-        result = runCmd('bitbake-layers add-layer %s' % test_layer)
-        result = runCmd('bitbake-layers show-layers')
-        self.assertIn('meta-skeleton', result.output, msg = "Something wrong happened. meta-skeleton layer was not added to conf/bblayers.conf.  bitbake-layers show-layers output: %s" % result.output)
-        result = runCmd('bitbake-layers remove-layer */meta-skeleton')
-        result = runCmd('bitbake-layers show-layers')
-        self.assertNotIn('meta-skeleton', result.output, msg = "meta-skeleton should have been removed at this step.  bitbake-layers show-layers output: %s" % result.output)
-
-    @testcase(1384)
-    def test_bitbakelayers_showrecipes(self):
-        result = runCmd('bitbake-layers show-recipes')
-        self.assertIn('aspell:', result.output)
-        self.assertIn('mtd-utils:', result.output)
-        self.assertIn('core-image-minimal:', result.output)
-        result = runCmd('bitbake-layers show-recipes mtd-utils')
-        self.assertIn('mtd-utils:', result.output)
-        self.assertNotIn('aspell:', result.output)
-        result = runCmd('bitbake-layers show-recipes -i image')
-        self.assertIn('core-image-minimal', result.output)
-        self.assertNotIn('mtd-utils:', result.output)
-        result = runCmd('bitbake-layers show-recipes -i cmake,pkgconfig')
-        self.assertIn('libproxy:', result.output)
-        self.assertNotIn('mtd-utils:', result.output) # doesn't inherit either
-        self.assertNotIn('wget:', result.output) # doesn't inherit cmake
-        self.assertNotIn('waffle:', result.output) # doesn't inherit pkgconfig
-        result = runCmd('bitbake-layers show-recipes -i nonexistentclass', ignore_status=True)
-        self.assertNotEqual(result.status, 0, 'bitbake-layers show-recipes -i nonexistentclass should have failed')
-        self.assertIn('ERROR:', result.output)
-
-    def get_recipe_basename(self, recipe):
-        recipe_file = ""
-        result = runCmd("bitbake-layers show-recipes -f %s" % recipe)
-        for line in result.output.splitlines():
-            if recipe in line:
-                recipe_file = line
-                break
-
-        self.assertTrue(os.path.isfile(recipe_file), msg = "Can't find recipe file for %s" % recipe)
-        return os.path.basename(recipe_file)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/bbtests.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/bbtests.py
deleted file mode 100644
index 46e09f5..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/bbtests.py
+++ /dev/null
@@ -1,278 +0,0 @@
-import os
-import re
-
-import oeqa.utils.ftools as ftools
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars
-from oeqa.utils.decorators import testcase
-
-class BitbakeTests(oeSelfTest):
-
-    def getline(self, res, line):
-        for l in res.output.split('\n'):
-            if line in l:
-                return l
-
-    @testcase(789)
-    def test_run_bitbake_from_dir_1(self):
-        os.chdir(os.path.join(self.builddir, 'conf'))
-        self.assertEqual(bitbake('-e').status, 0, msg = "bitbake couldn't run from \"conf\" dir")
-
-    @testcase(790)
-    def test_run_bitbake_from_dir_2(self):
-        my_env = os.environ.copy()
-        my_env['BBPATH'] = my_env['BUILDDIR']
-        os.chdir(os.path.dirname(os.environ['BUILDDIR']))
-        self.assertEqual(bitbake('-e', env=my_env).status, 0, msg = "bitbake couldn't run from builddir")
-
-    @testcase(806)
-    def test_event_handler(self):
-        self.write_config("INHERIT += \"test_events\"")
-        result = bitbake('m4-native')
-        find_build_started = re.search("NOTE: Test for bb\.event\.BuildStarted(\n.*)*NOTE: Executing RunQueue Tasks", result.output)
-        find_build_completed = re.search("Tasks Summary:.*(\n.*)*NOTE: Test for bb\.event\.BuildCompleted", result.output)
-        self.assertTrue(find_build_started, msg = "Match failed in:\n%s"  % result.output)
-        self.assertTrue(find_build_completed, msg = "Match failed in:\n%s" % result.output)
-        self.assertFalse('Test for bb.event.InvalidEvent' in result.output, msg = "\"Test for bb.event.InvalidEvent\" message found during bitbake process. bitbake output: %s" % result.output)
-
-    @testcase(103)
-    def test_local_sstate(self):
-        bitbake('m4-native')
-        bitbake('m4-native -cclean')
-        result = bitbake('m4-native')
-        find_setscene = re.search("m4-native.*do_.*_setscene", result.output)
-        self.assertTrue(find_setscene, msg = "No \"m4-native.*do_.*_setscene\" message found during bitbake m4-native. bitbake output: %s" % result.output )
-
-    @testcase(105)
-    def test_bitbake_invalid_recipe(self):
-        result = bitbake('-b asdf', ignore_status=True)
-        self.assertTrue("ERROR: Unable to find any recipe file matching 'asdf'" in result.output, msg = "Though asdf recipe doesn't exist, bitbake didn't output any err. message. bitbake output: %s" % result.output)
-
-    @testcase(107)
-    def test_bitbake_invalid_target(self):
-        result = bitbake('asdf', ignore_status=True)
-        self.assertTrue("ERROR: Nothing PROVIDES 'asdf'" in result.output, msg = "Though no 'asdf' target exists, bitbake didn't output any err. message. bitbake output: %s" % result.output)
-
-    @testcase(106)
-    def test_warnings_errors(self):
-        result = bitbake('-b asdf', ignore_status=True)
-        find_warnings = re.search("Summary: There w.{2,3}? [1-9][0-9]* WARNING messages* shown", result.output)
-        find_errors = re.search("Summary: There w.{2,3}? [1-9][0-9]* ERROR messages* shown", result.output)
-        self.assertTrue(find_warnings, msg="Did not find the mumber of warnings at the end of the build:\n" + result.output)
-        self.assertTrue(find_errors, msg="Did not find the mumber of errors at the end of the build:\n" + result.output)
-
-    @testcase(108)
-    def test_invalid_patch(self):
-        # This patch already exists in SRC_URI so adding it again will cause the
-        # patch to fail.
-        self.write_recipeinc('man', 'SRC_URI += "file://man-1.5h1-make.patch"')
-        self.write_config("INHERIT_remove = \"report-error\"")
-        result = bitbake('man -c patch', ignore_status=True)
-        self.delete_recipeinc('man')
-        bitbake('-cclean man')
-        line = self.getline(result, "Function failed: patch_do_patch")
-        self.assertTrue(line and line.startswith("ERROR:"), msg = "Repeated patch application didn't fail. bitbake output: %s" % result.output)
-
-    @testcase(1354)
-    def test_force_task_1(self):
-        # test 1 from bug 5875
-        test_recipe = 'zlib'
-        test_data = "Microsoft Made No Profit From Anyone's Zunes Yo"
-        bb_vars = get_bb_vars(['D', 'PKGDEST', 'mandir'], test_recipe)
-        image_dir = bb_vars['D']
-        pkgsplit_dir = bb_vars['PKGDEST']
-        man_dir = bb_vars['mandir']
-
-        bitbake('-c clean %s' % test_recipe)
-        bitbake('-c package -f %s' % test_recipe)
-        self.add_command_to_tearDown('bitbake -c clean %s' % test_recipe)
-
-        man_file = os.path.join(image_dir + man_dir, 'man3/zlib.3')
-        ftools.append_file(man_file, test_data)
-        bitbake('-c package -f %s' % test_recipe)
-
-        man_split_file = os.path.join(pkgsplit_dir, 'zlib-doc' + man_dir, 'man3/zlib.3')
-        man_split_content = ftools.read_file(man_split_file)
-        self.assertIn(test_data, man_split_content, 'The man file has not changed in packages-split.')
-
-        ret = bitbake(test_recipe)
-        self.assertIn('task do_package_write_rpm:', ret.output, 'Task do_package_write_rpm was not re-executed.')
-
-    @testcase(163)
-    def test_force_task_2(self):
-        # test 2 from bug 5875
-        test_recipe = 'zlib'
-
-        bitbake(test_recipe)
-        self.add_command_to_tearDown('bitbake -c clean %s' % test_recipe)
-
-        result = bitbake('-C compile %s' % test_recipe)
-        look_for_tasks = ['do_compile:', 'do_install:', 'do_populate_sysroot:', 'do_package:']
-        for task in look_for_tasks:
-            self.assertIn(task, result.output, msg="Couldn't find %s task.")
-
-    @testcase(167)
-    def test_bitbake_g(self):
-        result = bitbake('-g core-image-minimal')
-        for f in ['pn-buildlist', 'recipe-depends.dot', 'task-depends.dot']:
-            self.addCleanup(os.remove, f)
-        self.assertTrue('Task dependencies saved to \'task-depends.dot\'' in result.output, msg = "No task dependency \"task-depends.dot\" file was generated for the given task target. bitbake output: %s" % result.output)
-        self.assertTrue('busybox' in ftools.read_file(os.path.join(self.builddir, 'task-depends.dot')), msg = "No \"busybox\" dependency found in task-depends.dot file.")
-
-    @testcase(899)
-    def test_image_manifest(self):
-        bitbake('core-image-minimal')
-        bb_vars = get_bb_vars(["DEPLOY_DIR_IMAGE", "IMAGE_LINK_NAME"], "core-image-minimal")
-        deploydir = bb_vars["DEPLOY_DIR_IMAGE"]
-        imagename = bb_vars["IMAGE_LINK_NAME"]
-        manifest = os.path.join(deploydir, imagename + ".manifest")
-        self.assertTrue(os.path.islink(manifest), msg="No manifest file created for image. It should have been created in %s" % manifest)
-
-    @testcase(168)
-    def test_invalid_recipe_src_uri(self):
-        data = 'SRC_URI = "file://invalid"'
-        self.write_recipeinc('man', data)
-        self.write_config("""DL_DIR = \"${TOPDIR}/download-selftest\"
-SSTATE_DIR = \"${TOPDIR}/download-selftest\"
-INHERIT_remove = \"report-error\"
-""")
-        self.track_for_cleanup(os.path.join(self.builddir, "download-selftest"))
-
-        bitbake('-ccleanall man')
-        result = bitbake('-c fetch man', ignore_status=True)
-        bitbake('-ccleanall man')
-        self.delete_recipeinc('man')
-        self.assertEqual(result.status, 1, msg="Command succeded when it should have failed. bitbake output: %s" % result.output)
-        self.assertTrue('Fetcher failure: Unable to find file file://invalid anywhere. The paths that were searched were:' in result.output, msg = "\"invalid\" file \
-doesn't exist, yet no error message encountered. bitbake output: %s" % result.output)
-        line = self.getline(result, 'Fetcher failure for URL: \'file://invalid\'. Unable to fetch URL from any source.')
-        self.assertTrue(line and line.startswith("ERROR:"), msg = "\"invalid\" file \
-doesn't exist, yet fetcher didn't report any error. bitbake output: %s" % result.output)
-
-    @testcase(171)
-    def test_rename_downloaded_file(self):
-        # TODO unique dldir instead of using cleanall
-        # TODO: need to set sstatedir?
-        self.write_config("""DL_DIR = \"${TOPDIR}/download-selftest\"
-SSTATE_DIR = \"${TOPDIR}/download-selftest\"
-""")
-        self.track_for_cleanup(os.path.join(self.builddir, "download-selftest"))
-
-        data = 'SRC_URI = "${GNU_MIRROR}/aspell/aspell-${PV}.tar.gz;downloadfilename=test-aspell.tar.gz"'
-        self.write_recipeinc('aspell', data)
-        result = bitbake('-f -c fetch aspell', ignore_status=True)
-        self.delete_recipeinc('aspell')
-        self.assertEqual(result.status, 0, msg = "Couldn't fetch aspell. %s" % result.output)
-        dl_dir = get_bb_var("DL_DIR")
-        self.assertTrue(os.path.isfile(os.path.join(dl_dir, 'test-aspell.tar.gz')), msg = "File rename failed. No corresponding test-aspell.tar.gz file found under %s" % dl_dir)
-        self.assertTrue(os.path.isfile(os.path.join(dl_dir, 'test-aspell.tar.gz.done')), "File rename failed. No corresponding test-aspell.tar.gz.done file found under %s" % dl_dir)
-
-    @testcase(1028)
-    def test_environment(self):
-        self.write_config("TEST_ENV=\"localconf\"")
-        result = runCmd('bitbake -e | grep TEST_ENV=')
-        self.assertTrue('localconf' in result.output, msg = "bitbake didn't report any value for TEST_ENV variable. To test, run 'bitbake -e | grep TEST_ENV='")
-
-    @testcase(1029)
-    def test_dry_run(self):
-        result = runCmd('bitbake -n m4-native')
-        self.assertEqual(0, result.status, "bitbake dry run didn't run as expected. %s" % result.output)
-
-    @testcase(1030)
-    def test_just_parse(self):
-        result = runCmd('bitbake -p')
-        self.assertEqual(0, result.status, "errors encountered when parsing recipes. %s" % result.output)
-
-    @testcase(1031)
-    def test_version(self):
-        result = runCmd('bitbake -s | grep wget')
-        find = re.search("wget *:([0-9a-zA-Z\.\-]+)", result.output)
-        self.assertTrue(find, "No version returned for searched recipe. bitbake output: %s" % result.output)
-
-    @testcase(1032)
-    def test_prefile(self):
-        preconf = os.path.join(self.builddir, 'conf/prefile.conf')
-        self.track_for_cleanup(preconf)
-        ftools.write_file(preconf ,"TEST_PREFILE=\"prefile\"")
-        result = runCmd('bitbake -r conf/prefile.conf -e | grep TEST_PREFILE=')
-        self.assertTrue('prefile' in result.output, "Preconfigure file \"prefile.conf\"was not taken into consideration. ")
-        self.write_config("TEST_PREFILE=\"localconf\"")
-        result = runCmd('bitbake -r conf/prefile.conf -e | grep TEST_PREFILE=')
-        self.assertTrue('localconf' in result.output, "Preconfigure file \"prefile.conf\"was not taken into consideration.")
-
-    @testcase(1033)
-    def test_postfile(self):
-        postconf = os.path.join(self.builddir, 'conf/postfile.conf')
-        self.track_for_cleanup(postconf)
-        ftools.write_file(postconf , "TEST_POSTFILE=\"postfile\"")
-        self.write_config("TEST_POSTFILE=\"localconf\"")
-        result = runCmd('bitbake -R conf/postfile.conf -e | grep TEST_POSTFILE=')
-        self.assertTrue('postfile' in result.output, "Postconfigure file \"postfile.conf\"was not taken into consideration.")
-
-    @testcase(1034)
-    def test_checkuri(self):
-        result = runCmd('bitbake -c checkuri m4')
-        self.assertEqual(0, result.status, msg = "\"checkuri\" task was not executed. bitbake output: %s" % result.output)
-
-    @testcase(1035)
-    def test_continue(self):
-        self.write_config("""DL_DIR = \"${TOPDIR}/download-selftest\"
-SSTATE_DIR = \"${TOPDIR}/download-selftest\"
-INHERIT_remove = \"report-error\"
-""")
-        self.track_for_cleanup(os.path.join(self.builddir, "download-selftest"))
-        self.write_recipeinc('man',"\ndo_fail_task () {\nexit 1 \n}\n\naddtask do_fail_task before do_fetch\n" )
-        runCmd('bitbake -c cleanall man xcursor-transparent-theme')
-        result = runCmd('bitbake -c unpack -k man xcursor-transparent-theme', ignore_status=True)
-        errorpos = result.output.find('ERROR: Function failed: do_fail_task')
-        manver = re.search("NOTE: recipe xcursor-transparent-theme-(.*?): task do_unpack: Started", result.output)
-        continuepos = result.output.find('NOTE: recipe xcursor-transparent-theme-%s: task do_unpack: Started' % manver.group(1))
-        self.assertLess(errorpos, continuepos, msg = "bitbake did not continue past the failed do_fail_task. bitbake output: %s" % result.output)
-
-    @testcase(1119)
-    def test_non_gplv3(self):
-        self.write_config('INCOMPATIBLE_LICENSE = "GPLv3"')
-        result = bitbake('selftest-ed', ignore_status=True)
-        self.assertEqual(result.status, 0, "Bitbake failed, exit code %s, output %s" % (result.status, result.output))
-        lic_dir = get_bb_var('LICENSE_DIRECTORY')
-        self.assertFalse(os.path.isfile(os.path.join(lic_dir, 'selftest-ed/generic_GPLv3')))
-        self.assertTrue(os.path.isfile(os.path.join(lic_dir, 'selftest-ed/generic_GPLv2')))
-
-    @testcase(1422)
-    def test_setscene_only(self):
-        """ Bitbake option to restore from sstate only within a build (i.e. execute no real tasks, only setscene)"""
-        test_recipe = 'ed'
-
-        bitbake(test_recipe)
-        bitbake('-c clean %s' % test_recipe)
-        ret = bitbake('--setscene-only %s' % test_recipe)
-
-        tasks = re.findall(r'task\s+(do_\S+):', ret.output)
-
-        for task in tasks:
-            self.assertIn('_setscene', task, 'A task different from _setscene ran: %s.\n'
-                                             'Executed tasks were: %s' % (task, str(tasks)))
-
-    @testcase(1425)
-    def test_bbappend_order(self):
-        """ Bitbake should bbappend to recipe in a predictable order """
-        test_recipe = 'ed'
-        bb_vars = get_bb_vars(['SUMMARY', 'PV'], test_recipe)
-        test_recipe_summary_before = bb_vars['SUMMARY']
-        test_recipe_pv = bb_vars['PV']
-        recipe_append_file = test_recipe + '_' + test_recipe_pv + '.bbappend'
-        expected_recipe_summary = test_recipe_summary_before
-
-        for i in range(5):
-            recipe_append_dir = test_recipe + '_test_' + str(i)
-            recipe_append_path = os.path.join(self.testlayer_path, 'recipes-test', recipe_append_dir, recipe_append_file)
-            os.mkdir(os.path.join(self.testlayer_path, 'recipes-test', recipe_append_dir))
-            feature = 'SUMMARY += "%s"\n' % i
-            ftools.write_file(recipe_append_path, feature)
-            expected_recipe_summary += ' %s' % i
-
-        self.add_command_to_tearDown('rm -rf %s' % os.path.join(self.testlayer_path, 'recipes-test',
-                                                               test_recipe + '_test_*'))
-
-        test_recipe_summary_after = get_bb_var('SUMMARY', test_recipe)
-        self.assertEqual(expected_recipe_summary, test_recipe_summary_after)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/buildhistory.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/buildhistory.py
deleted file mode 100644
index 008c39c..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/buildhistory.py
+++ /dev/null
@@ -1,47 +0,0 @@
-import os
-import re
-import datetime
-
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import bitbake, get_bb_vars
-from oeqa.utils.decorators import testcase
-
-
-class BuildhistoryBase(oeSelfTest):
-
-    def config_buildhistory(self, tmp_bh_location=False):
-        bb_vars = get_bb_vars(['USER_CLASSES', 'INHERIT'])
-        if (not 'buildhistory' in bb_vars['USER_CLASSES']) and (not 'buildhistory' in bb_vars['INHERIT']):
-            add_buildhistory_config = 'INHERIT += "buildhistory"\nBUILDHISTORY_COMMIT = "1"'
-            self.append_config(add_buildhistory_config)
-
-        if tmp_bh_location:
-            # Using a temporary buildhistory location for testing
-            tmp_bh_dir = os.path.join(self.builddir, "tmp_buildhistory_%s" % datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
-            buildhistory_dir_config = "BUILDHISTORY_DIR = \"%s\"" % tmp_bh_dir
-            self.append_config(buildhistory_dir_config)
-            self.track_for_cleanup(tmp_bh_dir)
-
-    def run_buildhistory_operation(self, target, global_config='', target_config='', change_bh_location=False, expect_error=False, error_regex=''):
-        if change_bh_location:
-            tmp_bh_location = True
-        else:
-            tmp_bh_location = False
-        self.config_buildhistory(tmp_bh_location)
-
-        self.append_config(global_config)
-        self.append_recipeinc(target, target_config)
-        bitbake("-cclean %s" % target)
-        result = bitbake(target, ignore_status=True)
-        self.remove_config(global_config)
-        self.remove_recipeinc(target, target_config)
-
-        if expect_error:
-            self.assertEqual(result.status, 1, msg="Error expected for global config '%s' and target config '%s'" % (global_config, target_config))
-            search_for_error = re.search(error_regex, result.output)
-            self.assertTrue(search_for_error, msg="Could not find desired error in output: %s (%s)" % (error_regex, result.output))
-        else:
-            self.assertEqual(result.status, 0, msg="Command 'bitbake %s' has failed unexpectedly: %s" % (target, result.output))
-
-    # No tests should be added to the base class.
-    # Please create a new class that inherits this one, or use one of those already available for adding tests.
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/buildoptions.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/buildoptions.py
deleted file mode 100644
index a6e0203..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/buildoptions.py
+++ /dev/null
@@ -1,184 +0,0 @@
-import os
-import re
-import glob as g
-import shutil
-import tempfile
-from oeqa.selftest.base import oeSelfTest
-from oeqa.selftest.buildhistory import BuildhistoryBase
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars
-import oeqa.utils.ftools as ftools
-from oeqa.utils.decorators import testcase
-
-class ImageOptionsTests(oeSelfTest):
-
-    @testcase(761)
-    def test_incremental_image_generation(self):
-        image_pkgtype = get_bb_var("IMAGE_PKGTYPE")
-        if image_pkgtype != 'rpm':
-            self.skipTest('Not using RPM as main package format')
-        bitbake("-c clean core-image-minimal")
-        self.write_config('INC_RPM_IMAGE_GEN = "1"')
-        self.append_config('IMAGE_FEATURES += "ssh-server-openssh"')
-        bitbake("core-image-minimal")
-        log_data_file = os.path.join(get_bb_var("WORKDIR", "core-image-minimal"), "temp/log.do_rootfs")
-        log_data_created = ftools.read_file(log_data_file)
-        incremental_created = re.search("Installing  : packagegroup-core-ssh-openssh", log_data_created)
-        self.remove_config('IMAGE_FEATURES += "ssh-server-openssh"')
-        self.assertTrue(incremental_created, msg = "Match failed in:\n%s" % log_data_created)
-        bitbake("core-image-minimal")
-        log_data_removed = ftools.read_file(log_data_file)
-        incremental_removed = re.search("Erasing     : packagegroup-core-ssh-openssh", log_data_removed)
-        self.assertTrue(incremental_removed, msg = "Match failed in:\n%s" % log_data_removed)
-
-    @testcase(286)
-    def test_ccache_tool(self):
-        bitbake("ccache-native")
-        bb_vars = get_bb_vars(['SYSROOT_DESTDIR', 'bindir'], 'ccache-native')
-        p = bb_vars['SYSROOT_DESTDIR'] + bb_vars['bindir'] + "/" + "ccache"
-        self.assertTrue(os.path.isfile(p), msg = "No ccache found (%s)" % p)
-        self.write_config('INHERIT += "ccache"')
-        self.add_command_to_tearDown('bitbake -c clean m4')
-        bitbake("m4 -f -c compile")
-        log_compile = os.path.join(get_bb_var("WORKDIR","m4"), "temp/log.do_compile")
-        res = runCmd("grep ccache %s" % log_compile, ignore_status=True)
-        self.assertEqual(0, res.status, msg="No match for ccache in m4 log.do_compile. For further details: %s" % log_compile)
-
-    @testcase(1435)
-    def test_read_only_image(self):
-        distro_features = get_bb_var('DISTRO_FEATURES')
-        if not ('x11' in distro_features and 'opengl' in distro_features):
-            self.skipTest('core-image-sato requires x11 and opengl in distro features')
-        self.write_config('IMAGE_FEATURES += "read-only-rootfs"')
-        bitbake("core-image-sato")
-        # do_image will fail if there are any pending postinsts
-
-class DiskMonTest(oeSelfTest):
-
-    @testcase(277)
-    def test_stoptask_behavior(self):
-        self.write_config('BB_DISKMON_DIRS = "STOPTASKS,${TMPDIR},100000G,100K"')
-        res = bitbake("m4", ignore_status = True)
-        self.assertTrue('ERROR: No new tasks can be executed since the disk space monitor action is "STOPTASKS"!' in res.output, msg = "Tasks should have stopped. Disk monitor is set to STOPTASK: %s" % res.output)
-        self.assertEqual(res.status, 1, msg = "bitbake reported exit code %s. It should have been 1. Bitbake output: %s" % (str(res.status), res.output))
-        self.write_config('BB_DISKMON_DIRS = "ABORT,${TMPDIR},100000G,100K"')
-        res = bitbake("m4", ignore_status = True)
-        self.assertTrue('ERROR: Immediately abort since the disk space monitor action is "ABORT"!' in res.output, "Tasks should have been aborted immediately. Disk monitor is set to ABORT: %s" % res.output)
-        self.assertEqual(res.status, 1, msg = "bitbake reported exit code %s. It should have been 1. Bitbake output: %s" % (str(res.status), res.output))
-        self.write_config('BB_DISKMON_DIRS = "WARN,${TMPDIR},100000G,100K"')
-        res = bitbake("m4")
-        self.assertTrue('WARNING: The free space' in res.output, msg = "A warning should have been displayed when the disk monitor is set to WARN: %s" % res.output)
-
-class SanityOptionsTest(oeSelfTest):
-    def getline(self, res, line):
-        for l in res.output.split('\n'):
-            if line in l:
-                return l
-
-    @testcase(927)
-    def test_options_warnqa_errorqa_switch(self):
-
-        self.write_config("INHERIT_remove = \"report-error\"")
-        if "packages-list" not in get_bb_var("ERROR_QA"):
-            self.append_config("ERROR_QA_append = \" packages-list\"")
-
-        self.write_recipeinc('xcursor-transparent-theme', 'PACKAGES += \"${PN}-dbg\"')
-        self.add_command_to_tearDown('bitbake -c clean xcursor-transparent-theme')
-        res = bitbake("xcursor-transparent-theme -f -c package", ignore_status=True)
-        self.delete_recipeinc('xcursor-transparent-theme')
-        line = self.getline(res, "QA Issue: xcursor-transparent-theme-dbg is listed in PACKAGES multiple times, this leads to packaging errors.")
-        self.assertTrue(line and line.startswith("ERROR:"), msg=res.output)
-        self.assertEqual(res.status, 1, msg = "bitbake reported exit code %s. It should have been 1. Bitbake output: %s" % (str(res.status), res.output))
-        self.write_recipeinc('xcursor-transparent-theme', 'PACKAGES += \"${PN}-dbg\"')
-        self.append_config('ERROR_QA_remove = "packages-list"')
-        self.append_config('WARN_QA_append = " packages-list"')
-        res = bitbake("xcursor-transparent-theme -f -c package")
-        self.delete_recipeinc('xcursor-transparent-theme')
-        line = self.getline(res, "QA Issue: xcursor-transparent-theme-dbg is listed in PACKAGES multiple times, this leads to packaging errors.")
-        self.assertTrue(line and line.startswith("WARNING:"), msg=res.output)
-
-    @testcase(278)
-    def test_sanity_unsafe_script_references(self):
-        self.write_config('WARN_QA_append = " unsafe-references-in-scripts"')
-
-        self.add_command_to_tearDown('bitbake -c clean gzip')
-        res = bitbake("gzip -f -c package_qa")
-        line = self.getline(res, "QA Issue: gzip")
-        self.assertFalse(line, "WARNING: QA Issue: gzip message is present in bitbake's output and shouldn't be: %s" % res.output)
-
-        self.append_config("""
-do_install_append_pn-gzip () {
-	echo "\n${bindir}/test" >> ${D}${bindir}/zcat
-}
-""")
-        res = bitbake("gzip -f -c package_qa")
-        line = self.getline(res, "QA Issue: gzip")
-        self.assertTrue(line and line.startswith("WARNING:"), "WARNING: QA Issue: gzip message is not present in bitbake's output: %s" % res.output)
-
-    @testcase(1421)
-    def test_layer_without_git_dir(self):
-        """
-        Summary:     Test that layer git revisions are displayed and that the build does not fail for a layer without a git repository
-        Expected:    The build should succeed and contain no "fatal" errors
-        Product:     oe-core
-        Author:      Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        """
-
-        dirpath = tempfile.mkdtemp()
-
-        dummy_layer_name = 'meta-dummy'
-        dummy_layer_path = os.path.join(dirpath, dummy_layer_name)
-        dummy_layer_conf_dir = os.path.join(dummy_layer_path, 'conf')
-        os.makedirs(dummy_layer_conf_dir)
-        dummy_layer_conf_path = os.path.join(dummy_layer_conf_dir, 'layer.conf')
-
-        dummy_layer_content = 'BBPATH .= ":${LAYERDIR}"\n' \
-                              'BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"\n' \
-                              'BBFILE_COLLECTIONS += "%s"\n' \
-                              'BBFILE_PATTERN_%s = "^${LAYERDIR}/"\n' \
-                              'BBFILE_PRIORITY_%s = "6"\n' % (dummy_layer_name, dummy_layer_name, dummy_layer_name)
-
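-        # A minimal layer.conf: just enough for bitbake to register the layer, which ships no recipes and no git metadata.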
-        ftools.write_file(dummy_layer_conf_path, dummy_layer_content)
-
-        bblayers_conf = 'BBLAYERS += "%s"\n' % dummy_layer_path
-        self.write_bblayers_config(bblayers_conf)
-
-        test_recipe = 'ed'
-
-        ret = bitbake('-n %s' % test_recipe)
-
-        err = 'fatal: Not a git repository'
-
-        shutil.rmtree(dirpath)
-
-        self.assertNotIn(err, ret.output)
-
-
-class BuildhistoryTests(BuildhistoryBase):
-
-    @testcase(293)
-    def test_buildhistory_basic(self):
-        self.run_buildhistory_operation('xcursor-transparent-theme')
-        self.assertTrue(os.path.isdir(get_bb_var('BUILDHISTORY_DIR')), "buildhistory dir was not created.")
-
-    @testcase(294)
-    def test_buildhistory_buildtime_pr_backwards(self):
-        target = 'xcursor-transparent-theme'
-        error = "ERROR:.*QA Issue: Package version for package %s went backwards which would break package feeds from (.*-r1.* to .*-r0.*)" % target
-        self.run_buildhistory_operation(target, target_config="PR = \"r1\"", change_bh_location=True)
-        self.run_buildhistory_operation(target, target_config="PR = \"r0\"", change_bh_location=False, expect_error=True, error_regex=error)
-
-class ArchiverTest(oeSelfTest):
-    @testcase(926)
-    def test_arch_work_dir_and_export_source(self):
-        """
-        Test for archiving the work directory and exporting the source files.
-        """
-        self.write_config("INHERIT += \"archiver\"\nARCHIVER_MODE[src] = \"original\"\nARCHIVER_MODE[srpm] = \"1\"")
-        res = bitbake("xcursor-transparent-theme", ignore_status=True)
-        self.assertEqual(res.status, 0, "\nCouldn't build xcursortransparenttheme.\nbitbake output %s" % res.output)
-        deploy_dir_src = get_bb_var('DEPLOY_DIR_SRC')
-        pkgs_path = g.glob(str(deploy_dir_src) + "/allarch*/xcurs*")
-        src_file_glob = str(pkgs_path[0]) + "/xcursor*.src.rpm"
-        tar_file_glob = str(pkgs_path[0]) + "/xcursor*.tar.gz"
-        self.assertTrue((g.glob(src_file_glob) and g.glob(tar_file_glob)), "Couldn't find .src.rpm and .tar.gz files under %s/allarch*/xcursor*" % deploy_dir_src)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/case.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/case.py
new file mode 100644
index 0000000..e09915b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/case.py
@@ -0,0 +1,278 @@
+# Copyright (C) 2013-2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+import sys
+import os
+import shutil
+import glob
+import errno
+from unittest.util import safe_repr
+
+import oeqa.utils.ftools as ftools
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var
+from oeqa.core.case import OETestCase
+
+class OESelftestTestCase(OETestCase):
+    def __init__(self, methodName="runTest"):
+        self._extra_tear_down_commands = []
+        super(OESelftestTestCase, self).__init__(methodName)
+
+    @classmethod
+    def setUpClass(cls):
+        super(OESelftestTestCase, cls).setUpClass()
+
+        cls.testlayer_path = cls.tc.config_paths['testlayer_path']
+        cls.builddir = cls.tc.config_paths['builddir']
+
+        cls.localconf_path = cls.tc.config_paths['localconf']
+        cls.localconf_backup = cls.tc.config_paths['localconf_class_backup']
+        cls.local_bblayers_path = cls.tc.config_paths['bblayers']
+        cls.local_bblayers_backup = cls.tc.config_paths['bblayers_class_backup']
+
+        cls.testinc_path = os.path.join(cls.tc.config_paths['builddir'],
+                "conf/selftest.inc")
+        cls.testinc_bblayers_path = os.path.join(cls.tc.config_paths['builddir'],
+                "conf/bblayers.inc")
+        cls.machineinc_path = os.path.join(cls.tc.config_paths['builddir'],
+                "conf/machine.inc")
+
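+        # These generated include files are wired into the build by add_include() below and removed again in tearDownClass().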
+        cls._track_for_cleanup = [
+            cls.testinc_path, cls.testinc_bblayers_path,
+            cls.machineinc_path, cls.localconf_backup,
+            cls.local_bblayers_backup]
+
+        cls.add_include()
+
+    @classmethod
+    def tearDownClass(cls):
+        cls.remove_include()
+        cls.remove_inc_files()
+        super(OESelftestTestCase, cls).tearDownClass()
+
+    @classmethod
+    def add_include(cls):
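+        """Append the selftest include directives (guarded by a marker comment) to local.conf and bblayers.conf so the generated .inc files take effect."""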
+        if "#include added by oe-selftest" \
+            not in ftools.read_file(os.path.join(cls.builddir, "conf/local.conf")):
+                cls.logger.info("Adding: \"include selftest.inc\" in %s" % os.path.join(cls.builddir, "conf/local.conf"))
+                ftools.append_file(os.path.join(cls.builddir, "conf/local.conf"), \
+                        "\n#include added by oe-selftest\ninclude machine.inc\ninclude selftest.inc")
+
+        if "#include added by oe-selftest" \
+            not in ftools.read_file(os.path.join(cls.builddir, "conf/bblayers.conf")):
+                cls.logger.info("Adding: \"include bblayers.inc\" in bblayers.conf")
+                ftools.append_file(os.path.join(cls.builddir, "conf/bblayers.conf"), \
+                        "\n#include added by oe-selftest\ninclude bblayers.inc")
+
+    @classmethod
+    def remove_include(cls):
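+        """Strip the include directives that add_include() appended to local.conf and bblayers.conf."""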
+        if "#include added by oe-selftest.py" \
+            in ftools.read_file(os.path.join(cls.builddir, "conf/local.conf")):
+                cls.logger.info("Removing the include from local.conf")
+                ftools.remove_from_file(os.path.join(cls.builddir, "conf/local.conf"), \
+                        "\n#include added by oe-selftest.py\ninclude machine.inc\ninclude selftest.inc")
+
+        if "#include added by oe-selftest.py" \
+            in ftools.read_file(os.path.join(cls.builddir, "conf/bblayers.conf")):
+                cls.logger.info("Removing the include from bblayers.conf")
+                ftools.remove_from_file(os.path.join(cls.builddir, "conf/bblayers.conf"), \
+                        "\n#include added by oe-selftest.py\ninclude bblayers.inc")
+
+    @classmethod
+    def remove_inc_files(cls):
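+        """Delete the generated selftest.inc, bblayers.inc and machine.inc files plus any test_recipe.inc left in the test layer."""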
+        try:
+            os.remove(os.path.join(cls.builddir, "conf/selftest.inc"))
+        except OSError:
+            pass
+
+        # Clean up test_recipe.inc files even if selftest.inc was already removed
+        for root, _, files in os.walk(cls.testlayer_path):
+            for f in files:
+                if f == 'test_recipe.inc':
+                    os.remove(os.path.join(root, f))
+
+        for incl_file in ['conf/bblayers.inc', 'conf/machine.inc']:
+            try:
+                os.remove(os.path.join(cls.builddir, incl_file))
+            except OSError:
+                pass
+
+    def setUp(self):
+        super(OESelftestTestCase, self).setUp()
+        os.chdir(self.builddir)
+        # Check if local.conf or bblayers.conf files backup exists
+        # from a previous failed test and restore them
+        if os.path.isfile(self.localconf_backup) or os.path.isfile(
+                self.local_bblayers_backup):
+            self.logger.debug("\
+Found a local.conf and/or bblayers.conf backup from a previously aborted test. \
+Restoring these files now, but tests should be re-executed from a clean environment \
+to ensure accurate results.")
+            try:
+                shutil.copyfile(self.localconf_backup, self.localconf_path)
+            except OSError as e:
+                if e.errno != errno.ENOENT:
+                    raise
+            try:
+                shutil.copyfile(self.local_bblayers_backup,
+                                self.local_bblayers_path)
+            except OSError as e:
+                if e.errno != errno.ENOENT:
+                    raise
+        else:
+            # backup local.conf and bblayers.conf
+            shutil.copyfile(self.localconf_path, self.localconf_backup)
+            shutil.copyfile(self.local_bblayers_path, self.local_bblayers_backup)
+            self.logger.debug("Creating local.conf and bblayers.conf backups.")
+        # we don't know what the previous test left around in config or inc files
+        # if it failed so we need a fresh start
+        try:
+            os.remove(self.testinc_path)
+        except OSError as e:
+            if e.errno != errno.ENOENT:
+                raise
+        for root, _, files in os.walk(self.testlayer_path):
+            for f in files:
+                if f == 'test_recipe.inc':
+                    os.remove(os.path.join(root, f))
+
+        for incl_file in [self.testinc_bblayers_path, self.machineinc_path]:
+            try:
+                os.remove(incl_file)
+            except OSError as e:
+                if e.errno != errno.ENOENT:
+                    raise
+
+        if self.tc.custommachine:
+            machine_conf = 'MACHINE ??= "%s"\n' % self.tc.custommachine
+            self.set_machine_config(machine_conf)
+
+        # tests might need their own setup
+        # but if they override this one they have to call
+        # super each time, so let's give them an alternative
+        self.setUpLocal()
+
+    def setUpLocal(self):
+        pass
+
+    def tearDown(self):
+        if self._extra_tear_down_commands:
+            failed_extra_commands = []
+            for command in self._extra_tear_down_commands:
+                result = runCmd(command, ignore_status=True)
+                if result.status != 0:
+                    failed_extra_commands.append(command)
+            if failed_extra_commands:
+                self.logger.warning("tearDown commands have failed: %s" % ', '.join(map(str, failed_extra_commands)))
+                self.logger.debug("Trying to move on.")
+            self._extra_tear_down_commands = []
+
+        if self._track_for_cleanup:
+            for path in self._track_for_cleanup:
+                if os.path.isdir(path):
+                    shutil.rmtree(path)
+                if os.path.isfile(path):
+                    os.remove(path)
+            self._track_for_cleanup = []
+
+        self.tearDownLocal()
+        super(OESelftestTestCase, self).tearDown()
+
+    def tearDownLocal(self):
+        pass
+
+    def add_command_to_tearDown(self, command):
+        """Add test specific commands to the tearDown method"""
+        self.logger.debug("Adding command '%s' to tearDown for this test." % command)
+        self._extra_tear_down_commands.append(command)
+
+    def track_for_cleanup(self, path):
+        """Add test specific files or directories to be removed in the tearDown method"""
+        self.logger.debug("Adding path '%s' to be cleaned up when test is over" % path)
+        self._track_for_cleanup.append(path)
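+
+    # Typical use from a test (sketch only; the path and recipe are illustrative):
+    #   self.track_for_cleanup(os.path.join(self.builddir, "download-selftest"))
+    #   self.add_command_to_tearDown('bitbake -c clean m4')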
+
+    def write_config(self, data):
+        """Write to <builddir>/conf/selftest.inc"""
+
+        self.logger.debug("Writing to: %s\n%s\n" % (self.testinc_path, data))
+        ftools.write_file(self.testinc_path, data)
+
+        if self.tc.custommachine and 'MACHINE' in data:
+            machine = get_bb_var('MACHINE')
+            self.logger.warning('MACHINE overridden: %s' % machine)
+
+    def append_config(self, data):
+        """Append to <builddir>/conf/selftest.inc"""
+        self.logger.debug("Appending to: %s\n%s\n" % (self.testinc_path, data))
+        ftools.append_file(self.testinc_path, data)
+
+        if self.tc.custommachine and 'MACHINE' in data:
+            machine = get_bb_var('MACHINE')
+            self.logger.warning('MACHINE overridden: %s' % machine)
+
+    def remove_config(self, data):
+        """Remove data from <builddir>/conf/selftest.inc"""
+        self.logger.debug("Removing from: %s\n%s\n" % (self.testinc_path, data))
+        ftools.remove_from_file(self.testinc_path, data)
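+
+    # Typical use of the selftest.inc helpers (sketch only; the values are illustrative):
+    #   self.write_config('INHERIT += "buildhistory"')
+    #   self.append_config('IMAGE_FEATURES += "read-only-rootfs"')
+    #   self.remove_config('IMAGE_FEATURES += "read-only-rootfs"')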
+
+    def recipeinc(self, recipe):
+        """Return absolute path of meta-sefltest/recipes-test/<recipe>/test_recipe.inc"""
+        return os.path.join(self.testlayer_path, 'recipes-test', recipe, 'test_recipe.inc')
+
+    def write_recipeinc(self, recipe, data):
+        """Write to meta-sefltest/recipes-test/<recipe>/test_recipe.inc"""
+        inc_file = self.recipeinc(recipe)
+        self.logger.debug("Writing to: %s\n%s\n" % (inc_file, data))
+        ftools.write_file(inc_file, data)
+        return inc_file
+
+    def append_recipeinc(self, recipe, data):
+        """Append data to meta-sefltest/recipes-test/<recipe>/test_recipe.inc"""
+        inc_file = self.recipeinc(recipe)
+        self.logger.debug("Appending to: %s\n%s\n" % (inc_file, data))
+        ftools.append_file(inc_file, data)
+        return inc_file
+
+    def remove_recipeinc(self, recipe, data):
+        """Remove data from meta-sefltest/recipes-test/<recipe>/test_recipe.inc"""
+        inc_file = self.recipeinc(recipe)
+        self.logger.debug("Removing from: %s\n%s\n" % (inc_file, data))
+        ftools.remove_from_file(inc_file, data)
+
+    def delete_recipeinc(self, recipe):
+        """Delete meta-sefltest/recipes-test/<recipe>/test_recipe.inc file"""
+        inc_file = self.recipeinc(recipe)
+        self.logger.debug("Deleting file: %s" % inc_file)
+        try:
+            os.remove(inc_file)
+        except OSError as e:
+            if e.errno != errno.ENOENT:
+                raise
+
+    def write_bblayers_config(self, data):
+        """Write to <builddir>/conf/bblayers.inc"""
+        self.logger.debug("Writing to: %s\n%s\n" % (self.testinc_bblayers_path, data))
+        ftools.write_file(self.testinc_bblayers_path, data)
+
+    def append_bblayers_config(self, data):
+        """Append to <builddir>/conf/bblayers.inc"""
+        self.logger.debug("Appending to: %s\n%s\n" % (self.testinc_bblayers_path, data))
+        ftools.append_file(self.testinc_bblayers_path, data)
+
+    def remove_bblayers_config(self, data):
+        """Remove data from <builddir>/conf/bblayers.inc"""
+        self.logger.debug("Removing from: %s\n%s\n" % (self.testinc_bblayers_path, data))
+        ftools.remove_from_file(self.testinc_bblayers_path, data)
+
+    def set_machine_config(self, data):
+        """Write to <builddir>/conf/machine.inc"""
+        self.logger.debug("Writing to: %s\n%s\n" % (self.machineinc_path, data))
+        ftools.write_file(self.machineinc_path, data)
+
+    # Check that a path exists
+    def assertExists(self, expr, msg=None):
+        if not os.path.exists(expr):
+            msg = self._formatMessage(msg, "%s does not exist" % safe_repr(expr))
+            raise self.failureException(msg)
+
+    # Check that a path does not exist
+    def assertNotExists(self, expr, msg=None):
+        if os.path.exists(expr):
+            msg = self._formatMessage(msg, "%s exists when it should not" % safe_repr(expr))
+            raise self.failureException(msg)
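+
+    # Typical use (sketch only; the variables shown are illustrative):
+    #   self.assertExists(os.path.join(deploy_dir, image_name + '.manifest'))
+    #   self.assertNotExists(os.path.join(self.builddir, 'workspace'))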
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/_sstatetests_noauto.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/_sstatetests_noauto.py
new file mode 100644
index 0000000..0e58962
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/_sstatetests_noauto.py
@@ -0,0 +1,92 @@
+import os
+import shutil
+
+import oeqa.utils.ftools as ftools
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_test_layer
+from oeqa.selftest.cases.sstate import SStateBase
+
+
+class RebuildFromSState(SStateBase):
+
+    @classmethod
+    def setUpClass(cls):
+        super(RebuildFromSState, cls).setUpClass()
+        cls.builddir = os.environ.get('BUILDDIR')
+
+    def get_dep_targets(self, primary_targets):
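+        """Return the full list of recipes needed to build primary_targets, as
+        read from the pn-buildlist file produced by 'bitbake -g'."""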
+        found_targets = []
+        bitbake("-g " + ' '.join(map(str, primary_targets)))
+        with open(os.path.join(self.builddir, 'pn-buildlist'), 'r') as pnfile:
+            found_targets = pnfile.read().splitlines()
+        return found_targets
+
+    def configure_builddir(self, builddir):
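+        """Create and populate a fresh build directory (tracked for cleanup),
+        seeded from the current one but with its own sstate cache and TMPDIR."""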
+        os.mkdir(builddir)
+        self.track_for_cleanup(builddir)
+        os.mkdir(os.path.join(builddir, 'conf'))
+        shutil.copyfile(os.path.join(os.environ.get('BUILDDIR'), 'conf/local.conf'), os.path.join(builddir, 'conf/local.conf'))
+        config = {}
+        config['default_sstate_dir'] = "SSTATE_DIR ?= \"${TOPDIR}/sstate-cache\""
+        config['null_sstate_mirrors'] = "SSTATE_MIRRORS = \"\""
+        config['default_tmp_dir'] = "TMPDIR = \"${TOPDIR}/tmp\""
+        for key in config:
+            ftools.append_file(os.path.join(builddir, 'conf/selftest.inc'), config[key])
+        shutil.copyfile(os.path.join(os.environ.get('BUILDDIR'), 'conf/bblayers.conf'), os.path.join(builddir, 'conf/bblayers.conf'))
+        try:
+            shutil.copyfile(os.path.join(os.environ.get('BUILDDIR'), 'conf/auto.conf'), os.path.join(builddir, 'conf/auto.conf'))
+        except OSError:
+            # auto.conf is optional and may not exist in the source build dir
+            pass
+
+    def hardlink_tree(self, src, dst):
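+        """Replicate the directory tree of src under dst, hard-linking each file
+        instead of copying it."""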
+        os.mkdir(dst)
+        self.track_for_cleanup(dst)
+        for root, dirs, files in os.walk(src):
+            if root == src:
+                continue
+            os.mkdir(os.path.join(dst, root.split(src)[1][1:]))
+            for sstate_file in files:
+                os.link(os.path.join(root, sstate_file), os.path.join(dst, root.split(src)[1][1:], sstate_file))
+
+    def run_test_sstate_rebuild(self, primary_targets, relocate=False, rebuild_dependencies=False):
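+        """Build primary_targets, keep the resulting sstate cache, then check that
+        every rebuild target survives 'bitbake -c cleansstate' and rebuilds from
+        that cache, optionally from a relocated build directory."""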
+        buildA = os.path.join(self.builddir, 'buildA')
+        if relocate:
+            buildB = os.path.join(self.builddir, 'buildB')
+        else:
+            buildB = buildA
+
+        if rebuild_dependencies:
+            rebuild_targets = self.get_dep_targets(primary_targets)
+        else:
+            rebuild_targets = primary_targets
+
+        self.configure_builddir(buildA)
+        runCmd((". %s/oe-init-build-env %s && " % (get_bb_var('COREBASE'), buildA)) + 'bitbake  ' + ' '.join(map(str, primary_targets)), shell=True, executable='/bin/bash')
+        self.hardlink_tree(os.path.join(buildA, 'sstate-cache'), os.path.join(self.builddir, 'sstate-cache-buildA'))
+        shutil.rmtree(buildA)
+
+        failed_rebuild = []
+        failed_cleansstate = []
+        for target in rebuild_targets:
+            self.configure_builddir(buildB)
+            self.hardlink_tree(os.path.join(self.builddir, 'sstate-cache-buildA'), os.path.join(buildB, 'sstate-cache'))
+
+            result_cleansstate = runCmd((". %s/oe-init-build-env %s && " % (get_bb_var('COREBASE'), buildB)) + 'bitbake -ccleansstate ' + target, ignore_status=True, shell=True, executable='/bin/bash')
+            if result_cleansstate.status != 0:
+                failed_cleansstate.append(target)
+                shutil.rmtree(buildB)
+                continue
+
+            result_build = runCmd((". %s/oe-init-build-env %s && " % (get_bb_var('COREBASE'), buildB)) + 'bitbake ' + target, ignore_status=True, shell=True, executable='/bin/bash')
+            if result_build.status != 0:
+                failed_rebuild.append(target)
+
+            shutil.rmtree(buildB)
+
+        self.assertFalse(failed_rebuild, msg="The following recipes have failed to rebuild: %s" % ' '.join(map(str, failed_rebuild)))
+        self.assertFalse(failed_cleansstate, msg="The following recipes have failed cleansstate(all others have passed both cleansstate and rebuild from sstate tests): %s" % ' '.join(map(str, failed_cleansstate)))
+
+    def test_sstate_relocation(self):
+        self.run_test_sstate_rebuild(['core-image-sato-sdk'], relocate=True, rebuild_dependencies=True)
+
+    def test_sstate_rebuild(self):
+        self.run_test_sstate_rebuild(['core-image-sato-sdk'], relocate=False, rebuild_dependencies=True)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/archiver.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/archiver.py
new file mode 100644
index 0000000..f61a522
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/archiver.py
@@ -0,0 +1,118 @@
+import os
+import glob
+from oeqa.utils.commands import bitbake, get_bb_vars
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.core.decorator.oeid import OETestID
+
+class Archiver(OESelftestTestCase):
+
+    @OETestID(1345)
+    def test_archiver_allows_to_filter_on_recipe_name(self):
+        """
+        Summary:     The archiver should offer the possibility to filter on the recipe. (#6929)
+        Expected:    1. Included recipe (busybox) should be included
+                     2. Excluded recipe (zlib) should be excluded
+        Product:     oe-core
+        Author:      Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        """
+
+        include_recipe = 'busybox'
+        exclude_recipe = 'zlib'
+
+        features = 'INHERIT += "archiver"\n'
+        features += 'ARCHIVER_MODE[src] = "original"\n'
+        features += 'COPYLEFT_PN_INCLUDE = "%s"\n' % include_recipe
+        features += 'COPYLEFT_PN_EXCLUDE = "%s"\n' % exclude_recipe
+        self.write_config(features)
+
+        bitbake('-c clean %s %s' % (include_recipe, exclude_recipe))
+        bitbake("-c deploy_archives %s %s" % (include_recipe, exclude_recipe))
+
+        bb_vars = get_bb_vars(['DEPLOY_DIR_SRC', 'TARGET_SYS'])
+        src_path = os.path.join(bb_vars['DEPLOY_DIR_SRC'], bb_vars['TARGET_SYS'])
+
+        # Check that include_recipe was included
+        included_present = len(glob.glob(src_path + '/%s-*' % include_recipe))
+        self.assertTrue(included_present, 'Recipe %s was not included.' % include_recipe)
+
+        # Check that exclude_recipe was excluded
+        excluded_present = len(glob.glob(src_path + '/%s-*' % exclude_recipe))
+        self.assertFalse(excluded_present, 'Recipe %s was not excluded.' % exclude_recipe)
+
+    @OETestID(1900)
+    def test_archiver_filters_by_type(self):
+        """
+        Summary:     The archiver is documented to filter on the recipe type.
+        Expected:    1. included recipe type (target) should be included
+                     2. other types should be excluded
+        Product:     oe-core
+        Author:      André Draszik <adraszik@tycoint.com>
+        """
+
+        target_recipe = 'initscripts'
+        native_recipe = 'zlib-native'
+
+        features = 'INHERIT += "archiver"\n'
+        features += 'ARCHIVER_MODE[src] = "original"\n'
+        features += 'COPYLEFT_RECIPE_TYPES = "target"\n'
+        self.write_config(features)
+
+        bitbake('-c clean %s %s' % (target_recipe, native_recipe))
+        bitbake("%s -c deploy_archives %s" % (target_recipe, native_recipe))
+
+        bb_vars = get_bb_vars(['DEPLOY_DIR_SRC', 'TARGET_SYS', 'BUILD_SYS'])
+        src_path_target = os.path.join(bb_vars['DEPLOY_DIR_SRC'], bb_vars['TARGET_SYS'])
+        src_path_native = os.path.join(bb_vars['DEPLOY_DIR_SRC'], bb_vars['BUILD_SYS'])
+
+        # Check that target_recipe was included
+        included_present = len(glob.glob(src_path_target + '/%s-*' % target_recipe))
+        self.assertTrue(included_present, 'Recipe %s was not included.' % target_recipe)
+
+        # Check that native_recipe was excluded
+        excluded_present = len(glob.glob(src_path_native + '/%s-*' % native_recipe))
+        self.assertFalse(excluded_present, 'Recipe %s was not excluded.' % native_recipe)
+
+    @OETestID(1901)
+    def test_archiver_filters_by_type_and_name(self):
+        """
+        Summary:     Test that the archiver archives by recipe type, taking the
+                     recipe name into account.
+        Expected:    1. included recipe type (target) should be included
+                     2. other types should be excluded
+                     3. recipe by name should be included / excluded,
+                        overriding previous decision by type
+        Product:     oe-core
+        Author:      André Draszik <adraszik@tycoint.com>
+        """
+
+        target_recipes = [ 'initscripts', 'zlib' ]
+        native_recipes = [ 'update-rc.d-native', 'zlib-native' ]
+
+        features = 'INHERIT += "archiver"\n'
+        features += 'ARCHIVER_MODE[src] = "original"\n'
+        features += 'COPYLEFT_RECIPE_TYPES = "target"\n'
+        features += 'COPYLEFT_PN_INCLUDE = "%s"\n' % native_recipes[1]
+        features += 'COPYLEFT_PN_EXCLUDE = "%s"\n' % target_recipes[1]
+        self.write_config(features)
+
+        bitbake('-c clean %s %s' % (' '.join(target_recipes), ' '.join(native_recipes)))
+        bitbake('-c deploy_archives %s %s' % (' '.join(target_recipes), ' '.join(native_recipes)))
+
+        bb_vars = get_bb_vars(['DEPLOY_DIR_SRC', 'TARGET_SYS', 'BUILD_SYS'])
+        src_path_target = os.path.join(bb_vars['DEPLOY_DIR_SRC'], bb_vars['TARGET_SYS'])
+        src_path_native = os.path.join(bb_vars['DEPLOY_DIR_SRC'], bb_vars['BUILD_SYS'])
+
+        # Check that target_recipe[0] and native_recipes[1] were included
+        included_present = len(glob.glob(src_path_target + '/%s-*' % target_recipes[0]))
+        self.assertTrue(included_present, 'Recipe %s was not included.' % target_recipes[0])
+
+        included_present = len(glob.glob(src_path_native + '/%s-*' % native_recipes[1]))
+        self.assertTrue(included_present, 'Recipe %s was not included.' % native_recipes[1])
+
+        # Check that native_recipes[0] and target_recipes[1] were excluded
+        excluded_present = len(glob.glob(src_path_native + '/%s-*' % native_recipes[0]))
+        self.assertFalse(excluded_present, 'Recipe %s was not excluded.' % native_recipes[0])
+
+        excluded_present = len(glob.glob(src_path_target + '/%s-*' % target_recipes[1]))
+        self.assertFalse(excluded_present, 'Recipe %s was not excluded.' % target_recipes[1])
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/bblayers.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/bblayers.py
new file mode 100644
index 0000000..90a2249
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/bblayers.py
@@ -0,0 +1,97 @@
+import os
+import re
+
+import oeqa.utils.ftools as ftools
+from oeqa.utils.commands import runCmd, get_bb_var
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.core.decorator.oeid import OETestID
+
+class BitbakeLayers(OESelftestTestCase):
+
+    @OETestID(756)
+    def test_bitbakelayers_showcrossdepends(self):
+        result = runCmd('bitbake-layers show-cross-depends')
+        self.assertTrue('aspell' in result.output, msg = "No dependencies were shown. bitbake-layers show-cross-depends output: %s" % result.output)
+
+    @OETestID(83)
+    def test_bitbakelayers_showlayers(self):
+        result = runCmd('bitbake-layers show-layers')
+        self.assertTrue('meta-selftest' in result.output, msg = "No layers were shown. bitbake-layers show-layers output: %s" % result.output)
+
+    @OETestID(93)
+    def test_bitbakelayers_showappends(self):
+        recipe = "xcursor-transparent-theme"
+        bb_file = self.get_recipe_basename(recipe)
+        result = runCmd('bitbake-layers show-appends')
+        self.assertTrue(bb_file in result.output, msg="%s file was not recognised. bitbake-layers show-appends output: %s" % (bb_file, result.output))
+
+    @OETestID(90)
+    def test_bitbakelayers_showoverlayed(self):
+        result = runCmd('bitbake-layers show-overlayed')
+        self.assertTrue('aspell' in result.output, msg="aspell overlayed recipe was not recognised bitbake-layers show-overlayed %s" % result.output)
+
+    @OETestID(95)
+    def test_bitbakelayers_flatten(self):
+        recipe = "xcursor-transparent-theme"
+        recipe_path = "recipes-graphics/xcursor-transparent-theme"
+        recipe_file = self.get_recipe_basename(recipe)
+        testoutdir = os.path.join(self.builddir, 'test_bitbakelayers_flatten')
+        self.assertFalse(os.path.isdir(testoutdir), msg = "test_bitbakelayers_flatten should not exist at this point in time")
+        self.track_for_cleanup(testoutdir)
+        result = runCmd('bitbake-layers flatten %s' % testoutdir)
+        bb_file = os.path.join(testoutdir, recipe_path, recipe_file)
+        self.assertTrue(os.path.isfile(bb_file), msg = "Cannot find xcursor-transparent-theme_0.1.1.bb in the test_bitbakelayers_flatten local dir.")
+        contents = ftools.read_file(bb_file)
+        find_in_contents = re.search("##### bbappended from meta-selftest #####\n(.*\n)*include test_recipe.inc", contents)
+        self.assertTrue(find_in_contents, msg = "Flattening layers did not work. bitbake-layers flatten output: %s" % result.output)
+
+    @OETestID(1195)
+    def test_bitbakelayers_add_remove(self):
+        test_layer = os.path.join(get_bb_var('COREBASE'), 'meta-skeleton')
+        result = runCmd('bitbake-layers show-layers')
+        self.assertNotIn('meta-skeleton', result.output, "This test cannot run with meta-skeleton in bblayers.conf. bitbake-layers show-layers output: %s" % result.output)
+        result = runCmd('bitbake-layers add-layer %s' % test_layer)
+        result = runCmd('bitbake-layers show-layers')
+        self.assertIn('meta-skeleton', result.output, msg = "Something wrong happened. meta-skeleton layer was not added to conf/bblayers.conf.  bitbake-layers show-layers output: %s" % result.output)
+        result = runCmd('bitbake-layers remove-layer %s' % test_layer)
+        result = runCmd('bitbake-layers show-layers')
+        self.assertNotIn('meta-skeleton', result.output, msg = "meta-skeleton should have been removed at this step.  bitbake-layers show-layers output: %s" % result.output)
+        result = runCmd('bitbake-layers add-layer %s' % test_layer)
+        result = runCmd('bitbake-layers show-layers')
+        self.assertIn('meta-skeleton', result.output, msg = "Something wrong happened. meta-skeleton layer was not added to conf/bblayers.conf.  bitbake-layers show-layers output: %s" % result.output)
+        result = runCmd('bitbake-layers remove-layer */meta-skeleton')
+        result = runCmd('bitbake-layers show-layers')
+        self.assertNotIn('meta-skeleton', result.output, msg = "meta-skeleton should have been removed at this step.  bitbake-layers show-layers output: %s" % result.output)
+
+    @OETestID(1384)
+    def test_bitbakelayers_showrecipes(self):
+        result = runCmd('bitbake-layers show-recipes')
+        self.assertIn('aspell:', result.output)
+        self.assertIn('mtd-utils:', result.output)
+        self.assertIn('core-image-minimal:', result.output)
+        result = runCmd('bitbake-layers show-recipes mtd-utils')
+        self.assertIn('mtd-utils:', result.output)
+        self.assertNotIn('aspell:', result.output)
+        result = runCmd('bitbake-layers show-recipes -i image')
+        self.assertIn('core-image-minimal', result.output)
+        self.assertNotIn('mtd-utils:', result.output)
+        result = runCmd('bitbake-layers show-recipes -i cmake,pkgconfig')
+        self.assertIn('libproxy:', result.output)
+        self.assertNotIn('mtd-utils:', result.output) # doesn't inherit either
+        self.assertNotIn('wget:', result.output) # doesn't inherit cmake
+        self.assertNotIn('waffle:', result.output) # doesn't inherit pkgconfig
+        result = runCmd('bitbake-layers show-recipes -i nonexistentclass', ignore_status=True)
+        self.assertNotEqual(result.status, 0, 'bitbake-layers show-recipes -i nonexistentclass should have failed')
+        self.assertIn('ERROR:', result.output)
+
+    def get_recipe_basename(self, recipe):
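+        """Return the base filename of the recipe file providing <recipe>, as
+        reported by 'bitbake-layers show-recipes -f'."""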
+        recipe_file = ""
+        result = runCmd("bitbake-layers show-recipes -f %s" % recipe)
+        for line in result.output.splitlines():
+            if recipe in line:
+                recipe_file = line
+                break
+
+        self.assertTrue(os.path.isfile(recipe_file), msg = "Can't find recipe file for %s" % recipe)
+        return os.path.basename(recipe_file)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/bbtests.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/bbtests.py
new file mode 100644
index 0000000..4c82049
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/bbtests.py
@@ -0,0 +1,279 @@
+import os
+import re
+
+import oeqa.utils.ftools as ftools
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.core.decorator.oeid import OETestID
+
+class BitbakeTests(OESelftestTestCase):
+
+    def getline(self, res, line):
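+        """Return the first line of res.output that contains the given text (None if absent)."""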
+        for l in res.output.split('\n'):
+            if line in l:
+                return l
+
+    @OETestID(789)
+    def test_run_bitbake_from_dir_1(self):
+        os.chdir(os.path.join(self.builddir, 'conf'))
+        self.assertEqual(bitbake('-e').status, 0, msg = "bitbake couldn't run from \"conf\" dir")
+
+    @OETestID(790)
+    def test_run_bitbake_from_dir_2(self):
+        my_env = os.environ.copy()
+        my_env['BBPATH'] = my_env['BUILDDIR']
+        os.chdir(os.path.dirname(os.environ['BUILDDIR']))
+        self.assertEqual(bitbake('-e', env=my_env).status, 0, msg = "bitbake couldn't run from builddir")
+
+    @OETestID(806)
+    def test_event_handler(self):
+        self.write_config("INHERIT += \"test_events\"")
+        result = bitbake('m4-native')
+        find_build_started = re.search("NOTE: Test for bb\.event\.BuildStarted(\n.*)*NOTE: Executing RunQueue Tasks", result.output)
+        find_build_completed = re.search("Tasks Summary:.*(\n.*)*NOTE: Test for bb\.event\.BuildCompleted", result.output)
+        self.assertTrue(find_build_started, msg = "Match failed in:\n%s"  % result.output)
+        self.assertTrue(find_build_completed, msg = "Match failed in:\n%s" % result.output)
+        self.assertFalse('Test for bb.event.InvalidEvent' in result.output, msg = "\"Test for bb.event.InvalidEvent\" message found during bitbake process. bitbake output: %s" % result.output)
+
+    @OETestID(103)
+    def test_local_sstate(self):
+        bitbake('m4-native')
+        bitbake('m4-native -cclean')
+        result = bitbake('m4-native')
+        find_setscene = re.search("m4-native.*do_.*_setscene", result.output)
+        self.assertTrue(find_setscene, msg = "No \"m4-native.*do_.*_setscene\" message found during bitbake m4-native. bitbake output: %s" % result.output )
+
+    @OETestID(105)
+    def test_bitbake_invalid_recipe(self):
+        result = bitbake('-b asdf', ignore_status=True)
+        self.assertTrue("ERROR: Unable to find any recipe file matching 'asdf'" in result.output, msg = "Though asdf recipe doesn't exist, bitbake didn't output any err. message. bitbake output: %s" % result.output)
+
+    @OETestID(107)
+    def test_bitbake_invalid_target(self):
+        result = bitbake('asdf', ignore_status=True)
+        self.assertTrue("ERROR: Nothing PROVIDES 'asdf'" in result.output, msg = "Though no 'asdf' target exists, bitbake didn't output any err. message. bitbake output: %s" % result.output)
+
+    @OETestID(106)
+    def test_warnings_errors(self):
+        result = bitbake('-b asdf', ignore_status=True)
+        find_warnings = re.search("Summary: There w.{2,3}? [1-9][0-9]* WARNING messages* shown", result.output)
+        find_errors = re.search("Summary: There w.{2,3}? [1-9][0-9]* ERROR messages* shown", result.output)
+        self.assertTrue(find_warnings, msg="Did not find the mumber of warnings at the end of the build:\n" + result.output)
+        self.assertTrue(find_errors, msg="Did not find the mumber of errors at the end of the build:\n" + result.output)
+
+    @OETestID(108)
+    def test_invalid_patch(self):
+        # This patch already exists in SRC_URI so adding it again will cause the
+        # patch to fail.
+        self.write_recipeinc('man', 'SRC_URI += "file://man-1.5h1-make.patch"')
+        self.write_config("INHERIT_remove = \"report-error\"")
+        result = bitbake('man -c patch', ignore_status=True)
+        self.delete_recipeinc('man')
+        bitbake('-cclean man')
+        line = self.getline(result, "Function failed: patch_do_patch")
+        self.assertTrue(line and line.startswith("ERROR:"), msg = "Repeated patch application didn't fail. bitbake output: %s" % result.output)
+
+    @OETestID(1354)
+    def test_force_task_1(self):
+        # test 1 from bug 5875
+        test_recipe = 'zlib'
+        test_data = "Microsoft Made No Profit From Anyone's Zunes Yo"
+        bb_vars = get_bb_vars(['D', 'PKGDEST', 'mandir'], test_recipe)
+        image_dir = bb_vars['D']
+        pkgsplit_dir = bb_vars['PKGDEST']
+        man_dir = bb_vars['mandir']
+
+        bitbake('-c clean %s' % test_recipe)
+        bitbake('-c package -f %s' % test_recipe)
+        self.add_command_to_tearDown('bitbake -c clean %s' % test_recipe)
+
+        man_file = os.path.join(image_dir + man_dir, 'man3/zlib.3')
+        ftools.append_file(man_file, test_data)
+        bitbake('-c package -f %s' % test_recipe)
+
+        man_split_file = os.path.join(pkgsplit_dir, 'zlib-doc' + man_dir, 'man3/zlib.3')
+        man_split_content = ftools.read_file(man_split_file)
+        self.assertIn(test_data, man_split_content, 'The man file has not changed in packages-split.')
+
+        ret = bitbake(test_recipe)
+        self.assertIn('task do_package_write_rpm:', ret.output, 'Task do_package_write_rpm was not re-executed.')
+
+    @OETestID(163)
+    def test_force_task_2(self):
+        # test 2 from bug 5875
+        test_recipe = 'zlib'
+
+        bitbake(test_recipe)
+        self.add_command_to_tearDown('bitbake -c clean %s' % test_recipe)
+
+        result = bitbake('-C compile %s' % test_recipe)
+        look_for_tasks = ['do_compile:', 'do_install:', 'do_populate_sysroot:', 'do_package:']
+        for task in look_for_tasks:
+            self.assertIn(task, result.output, msg="Couldn't find %s task.")
+
+    @OETestID(167)
+    def test_bitbake_g(self):
+        result = bitbake('-g core-image-minimal')
+        for f in ['pn-buildlist', 'recipe-depends.dot', 'task-depends.dot']:
+            self.addCleanup(os.remove, f)
+        self.assertTrue('Task dependencies saved to \'task-depends.dot\'' in result.output, msg = "No task dependency \"task-depends.dot\" file was generated for the given task target. bitbake output: %s" % result.output)
+        self.assertTrue('busybox' in ftools.read_file(os.path.join(self.builddir, 'task-depends.dot')), msg = "No \"busybox\" dependency found in task-depends.dot file.")
+
+    @OETestID(899)
+    def test_image_manifest(self):
+        bitbake('core-image-minimal')
+        bb_vars = get_bb_vars(["DEPLOY_DIR_IMAGE", "IMAGE_LINK_NAME"], "core-image-minimal")
+        deploydir = bb_vars["DEPLOY_DIR_IMAGE"]
+        imagename = bb_vars["IMAGE_LINK_NAME"]
+        manifest = os.path.join(deploydir, imagename + ".manifest")
+        self.assertTrue(os.path.islink(manifest), msg="No manifest file created for image. It should have been created in %s" % manifest)
+
+    @OETestID(168)
+    def test_invalid_recipe_src_uri(self):
+        data = 'SRC_URI = "file://invalid"'
+        self.write_recipeinc('man', data)
+        self.write_config("""DL_DIR = \"${TOPDIR}/download-selftest\"
+SSTATE_DIR = \"${TOPDIR}/download-selftest\"
+INHERIT_remove = \"report-error\"
+""")
+        self.track_for_cleanup(os.path.join(self.builddir, "download-selftest"))
+
+        bitbake('-ccleanall man')
+        result = bitbake('-c fetch man', ignore_status=True)
+        bitbake('-ccleanall man')
+        self.delete_recipeinc('man')
+        self.assertEqual(result.status, 1, msg="Command succeded when it should have failed. bitbake output: %s" % result.output)
+        self.assertTrue('Fetcher failure: Unable to find file file://invalid anywhere. The paths that were searched were:' in result.output, msg = "\"invalid\" file \
+doesn't exist, yet no error message encountered. bitbake output: %s" % result.output)
+        line = self.getline(result, 'Fetcher failure for URL: \'file://invalid\'. Unable to fetch URL from any source.')
+        self.assertTrue(line and line.startswith("ERROR:"), msg = "\"invalid\" file \
+doesn't exist, yet fetcher didn't report any error. bitbake output: %s" % result.output)
+
+    @OETestID(171)
+    def test_rename_downloaded_file(self):
+        # TODO unique dldir instead of using cleanall
+        # TODO: need to set sstatedir?
+        self.write_config("""DL_DIR = \"${TOPDIR}/download-selftest\"
+SSTATE_DIR = \"${TOPDIR}/download-selftest\"
+""")
+        self.track_for_cleanup(os.path.join(self.builddir, "download-selftest"))
+
+        data = 'SRC_URI = "${GNU_MIRROR}/aspell/aspell-${PV}.tar.gz;downloadfilename=test-aspell.tar.gz"'
+        self.write_recipeinc('aspell', data)
+        result = bitbake('-f -c fetch aspell', ignore_status=True)
+        self.delete_recipeinc('aspell')
+        self.assertEqual(result.status, 0, msg = "Couldn't fetch aspell. %s" % result.output)
+        dl_dir = get_bb_var("DL_DIR")
+        self.assertTrue(os.path.isfile(os.path.join(dl_dir, 'test-aspell.tar.gz')), msg = "File rename failed. No corresponding test-aspell.tar.gz file found under %s" % dl_dir)
+        self.assertTrue(os.path.isfile(os.path.join(dl_dir, 'test-aspell.tar.gz.done')), "File rename failed. No corresponding test-aspell.tar.gz.done file found under %s" % dl_dir)
+
+    @OETestID(1028)
+    def test_environment(self):
+        self.write_config("TEST_ENV=\"localconf\"")
+        result = runCmd('bitbake -e | grep TEST_ENV=')
+        self.assertTrue('localconf' in result.output, msg = "bitbake didn't report any value for TEST_ENV variable. To test, run 'bitbake -e | grep TEST_ENV='")
+
+    @OETestID(1029)
+    def test_dry_run(self):
+        result = runCmd('bitbake -n m4-native')
+        self.assertEqual(0, result.status, "bitbake dry run didn't run as expected. %s" % result.output)
+
+    @OETestID(1030)
+    def test_just_parse(self):
+        result = runCmd('bitbake -p')
+        self.assertEqual(0, result.status, "errors encountered when parsing recipes. %s" % result.output)
+
+    @OETestID(1031)
+    def test_version(self):
+        result = runCmd('bitbake -s | grep wget')
+        find = re.search("wget *:([0-9a-zA-Z\.\-]+)", result.output)
+        self.assertTrue(find, "No version returned for searched recipe. bitbake output: %s" % result.output)
+
+    @OETestID(1032)
+    def test_prefile(self):
+        preconf = os.path.join(self.builddir, 'conf/prefile.conf')
+        self.track_for_cleanup(preconf)
+        ftools.write_file(preconf ,"TEST_PREFILE=\"prefile\"")
+        result = runCmd('bitbake -r conf/prefile.conf -e | grep TEST_PREFILE=')
+        self.assertTrue('prefile' in result.output, "Preconfigure file \"prefile.conf\"was not taken into consideration. ")
+        self.write_config("TEST_PREFILE=\"localconf\"")
+        result = runCmd('bitbake -r conf/prefile.conf -e | grep TEST_PREFILE=')
+        self.assertTrue('localconf' in result.output, "Preconfigure file \"prefile.conf\"was not taken into consideration.")
+
+    @OETestID(1033)
+    def test_postfile(self):
+        postconf = os.path.join(self.builddir, 'conf/postfile.conf')
+        self.track_for_cleanup(postconf)
+        ftools.write_file(postconf , "TEST_POSTFILE=\"postfile\"")
+        self.write_config("TEST_POSTFILE=\"localconf\"")
+        result = runCmd('bitbake -R conf/postfile.conf -e | grep TEST_POSTFILE=')
+        self.assertTrue('postfile' in result.output, "Postconfigure file \"postfile.conf\"was not taken into consideration.")
+
+    @OETestID(1034)
+    def test_checkuri(self):
+        result = runCmd('bitbake -c checkuri m4')
+        self.assertEqual(0, result.status, msg = "\"checkuri\" task was not executed. bitbake output: %s" % result.output)
+
+    @OETestID(1035)
+    def test_continue(self):
+        self.write_config("""DL_DIR = \"${TOPDIR}/download-selftest\"
+SSTATE_DIR = \"${TOPDIR}/download-selftest\"
+INHERIT_remove = \"report-error\"
+""")
+        self.track_for_cleanup(os.path.join(self.builddir, "download-selftest"))
+        self.write_recipeinc('man',"\ndo_fail_task () {\nexit 1 \n}\n\naddtask do_fail_task before do_fetch\n" )
+        runCmd('bitbake -c cleanall man xcursor-transparent-theme')
+        result = runCmd('bitbake -c unpack -k man xcursor-transparent-theme', ignore_status=True)
+        errorpos = result.output.find('ERROR: Function failed: do_fail_task')
+        manver = re.search("NOTE: recipe xcursor-transparent-theme-(.*?): task do_unpack: Started", result.output)
+        continuepos = result.output.find('NOTE: recipe xcursor-transparent-theme-%s: task do_unpack: Started' % manver.group(1))
+        self.assertLess(errorpos, continuepos, msg = "bitbake did not continue past the failed do_fail_task. bitbake output: %s" % result.output)
+
+    @OETestID(1119)
+    def test_non_gplv3(self):
+        self.write_config('INCOMPATIBLE_LICENSE = "GPLv3"')
+        result = bitbake('selftest-ed', ignore_status=True)
+        self.assertEqual(result.status, 0, "Bitbake failed, exit code %s, output %s" % (result.status, result.output))
+        lic_dir = get_bb_var('LICENSE_DIRECTORY')
+        self.assertFalse(os.path.isfile(os.path.join(lic_dir, 'selftest-ed/generic_GPLv3')))
+        self.assertTrue(os.path.isfile(os.path.join(lic_dir, 'selftest-ed/generic_GPLv2')))
+
+    @OETestID(1422)
+    def test_setscene_only(self):
+        """ Bitbake option to restore from sstate only within a build (i.e. execute no real tasks, only setscene)"""
+        test_recipe = 'ed'
+
+        bitbake(test_recipe)
+        bitbake('-c clean %s' % test_recipe)
+        ret = bitbake('--setscene-only %s' % test_recipe)
+
+        tasks = re.findall(r'task\s+(do_\S+):', ret.output)
+
+        for task in tasks:
+            self.assertIn('_setscene', task, 'A task different from _setscene ran: %s.\n'
+                                             'Executed tasks were: %s' % (task, str(tasks)))
+
+    @OETestID(1425)
+    def test_bbappend_order(self):
+        """ Bitbake should bbappend to recipe in a predictable order """
+        test_recipe = 'ed'
+        bb_vars = get_bb_vars(['SUMMARY', 'PV'], test_recipe)
+        test_recipe_summary_before = bb_vars['SUMMARY']
+        test_recipe_pv = bb_vars['PV']
+        recipe_append_file = test_recipe + '_' + test_recipe_pv + '.bbappend'
+        expected_recipe_summary = test_recipe_summary_before
+
+        for i in range(5):
+            recipe_append_dir = test_recipe + '_test_' + str(i)
+            recipe_append_path = os.path.join(self.testlayer_path, 'recipes-test', recipe_append_dir, recipe_append_file)
+            os.mkdir(os.path.join(self.testlayer_path, 'recipes-test', recipe_append_dir))
+            feature = 'SUMMARY += "%s"\n' % i
+            ftools.write_file(recipe_append_path, feature)
+            expected_recipe_summary += ' %s' % i
+
+        self.add_command_to_tearDown('rm -rf %s' % os.path.join(self.testlayer_path, 'recipes-test',
+                                                               test_recipe + '_test_*'))
+
+        test_recipe_summary_after = get_bb_var('SUMMARY', test_recipe)
+        self.assertEqual(expected_recipe_summary, test_recipe_summary_after)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/buildhistory.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/buildhistory.py
new file mode 100644
index 0000000..06792d9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/buildhistory.py
@@ -0,0 +1,46 @@
+import os
+import re
+import datetime
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import bitbake, get_bb_vars
+
+
+class BuildhistoryBase(OESelftestTestCase):
+
+    def config_buildhistory(self, tmp_bh_location=False):
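+        """Enable buildhistory if the current configuration does not already do so;
+        optionally point BUILDHISTORY_DIR at a temporary, auto-cleaned location."""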
+        bb_vars = get_bb_vars(['USER_CLASSES', 'INHERIT'])
+        if 'buildhistory' not in bb_vars['USER_CLASSES'] and 'buildhistory' not in bb_vars['INHERIT']:
+            add_buildhistory_config = 'INHERIT += "buildhistory"\nBUILDHISTORY_COMMIT = "1"'
+            self.append_config(add_buildhistory_config)
+
+        if tmp_bh_location:
+            # Using a temporary buildhistory location for testing
+            tmp_bh_dir = os.path.join(self.builddir, "tmp_buildhistory_%s" % datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
+            buildhistory_dir_config = "BUILDHISTORY_DIR = \"%s\"" % tmp_bh_dir
+            self.append_config(buildhistory_dir_config)
+            self.track_for_cleanup(tmp_bh_dir)
+
+    def run_buildhistory_operation(self, target, global_config='', target_config='', change_bh_location=False, expect_error=False, error_regex=''):
+        if change_bh_location:
+            tmp_bh_location = True
+        else:
+            tmp_bh_location = False
+        self.config_buildhistory(tmp_bh_location)
+
+        self.append_config(global_config)
+        self.append_recipeinc(target, target_config)
+        bitbake("-cclean %s" % target)
+        result = bitbake(target, ignore_status=True)
+        self.remove_config(global_config)
+        self.remove_recipeinc(target, target_config)
+
+        if expect_error:
+            self.assertEqual(result.status, 1, msg="Error expected for global config '%s' and target config '%s'" % (global_config, target_config))
+            search_for_error = re.search(error_regex, result.output)
+            self.assertTrue(search_for_error, msg="Could not find desired error in output: %s (%s)" % (error_regex, result.output))
+        else:
+            self.assertEqual(result.status, 0, msg="Command 'bitbake %s' has failed unexpectedly: %s" % (target, result.output))
+
+    # No tests should be added to the base class.
+    # Please create a new class that inherits from this one, or use one of those already available, for adding tests.
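+    # (BuildhistoryTests in buildoptions.py is one example of such a subclass.)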
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/buildoptions.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/buildoptions.py
new file mode 100644
index 0000000..cf221c3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/buildoptions.py
@@ -0,0 +1,166 @@
+import os
+import re
+import glob as g
+import shutil
+import tempfile
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.selftest.cases.buildhistory import BuildhistoryBase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars
+import oeqa.utils.ftools as ftools
+from oeqa.core.decorator.oeid import OETestID
+
+class ImageOptionsTests(OESelftestTestCase):
+
+    @OETestID(761)
+    def test_incremental_image_generation(self):
+        image_pkgtype = get_bb_var("IMAGE_PKGTYPE")
+        if image_pkgtype != 'rpm':
+            self.skipTest('Not using RPM as main package format')
+        bitbake("-c clean core-image-minimal")
+        self.write_config('INC_RPM_IMAGE_GEN = "1"')
+        self.append_config('IMAGE_FEATURES += "ssh-server-openssh"')
+        bitbake("core-image-minimal")
+        log_data_file = os.path.join(get_bb_var("WORKDIR", "core-image-minimal"), "temp/log.do_rootfs")
+        log_data_created = ftools.read_file(log_data_file)
+        incremental_created = re.search(r"Installing\s*:\s*packagegroup-core-ssh-openssh", log_data_created)
+        self.remove_config('IMAGE_FEATURES += "ssh-server-openssh"')
+        self.assertTrue(incremental_created, msg = "Match failed in:\n%s" % log_data_created)
+        bitbake("core-image-minimal")
+        log_data_removed = ftools.read_file(log_data_file)
+        incremental_removed = re.search(r"Erasing\s*:\s*packagegroup-core-ssh-openssh", log_data_removed)
+        self.assertTrue(incremental_removed, msg = "Match failed in:\n%s" % log_data_removed)
+
+    @OETestID(286)
+    def test_ccache_tool(self):
+        bitbake("ccache-native")
+        bb_vars = get_bb_vars(['SYSROOT_DESTDIR', 'bindir'], 'ccache-native')
+        p = bb_vars['SYSROOT_DESTDIR'] + bb_vars['bindir'] + "/" + "ccache"
+        self.assertTrue(os.path.isfile(p), msg = "No ccache found (%s)" % p)
+        self.write_config('INHERIT += "ccache"')
+        self.add_command_to_tearDown('bitbake -c clean m4')
+        bitbake("m4 -f -c compile")
+        log_compile = os.path.join(get_bb_var("WORKDIR","m4"), "temp/log.do_compile")
+        res = runCmd("grep ccache %s" % log_compile, ignore_status=True)
+        self.assertEqual(0, res.status, msg="No match for ccache in m4 log.do_compile. For further details: %s" % log_compile)
+
+    @OETestID(1435)
+    def test_read_only_image(self):
+        distro_features = get_bb_var('DISTRO_FEATURES')
+        if not ('x11' in distro_features and 'opengl' in distro_features):
+            self.skipTest('core-image-sato requires x11 and opengl in distro features')
+        self.write_config('IMAGE_FEATURES += "read-only-rootfs"')
+        bitbake("core-image-sato")
+        # do_image will fail if there are any pending postinsts
+
+class DiskMonTest(OESelftestTestCase):
+
+    @OETestID(277)
+    def test_stoptask_behavior(self):
+        self.write_config('BB_DISKMON_DIRS = "STOPTASKS,${TMPDIR},100000G,100K"')
+        res = bitbake("m4", ignore_status = True)
+        self.assertTrue('ERROR: No new tasks can be executed since the disk space monitor action is "STOPTASKS"!' in res.output, msg = "Tasks should have stopped. Disk monitor is set to STOPTASKS: %s" % res.output)
+        self.assertEqual(res.status, 1, msg = "bitbake reported exit code %s. It should have been 1. Bitbake output: %s" % (str(res.status), res.output))
+        self.write_config('BB_DISKMON_DIRS = "ABORT,${TMPDIR},100000G,100K"')
+        res = bitbake("m4", ignore_status = True)
+        self.assertTrue('ERROR: Immediately abort since the disk space monitor action is "ABORT"!' in res.output, "Tasks should have been aborted immediately. Disk monitor is set to ABORT: %s" % res.output)
+        self.assertEqual(res.status, 1, msg = "bitbake reported exit code %s. It should have been 1. Bitbake output: %s" % (str(res.status), res.output))
+        self.write_config('BB_DISKMON_DIRS = "WARN,${TMPDIR},100000G,100K"')
+        res = bitbake("m4")
+        self.assertTrue('WARNING: The free space' in res.output, msg = "A warning should have been displayed since the disk monitor is set to WARN: %s" % res.output)
+
+class SanityOptionsTest(OESelftestTestCase):
+    def getline(self, res, line):
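+        """Return the first line of res.output that contains the given text (None if absent)."""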
+        for l in res.output.split('\n'):
+            if line in l:
+                return l
+
+    @OETestID(927)
+    def test_options_warnqa_errorqa_switch(self):
+
+        self.write_config("INHERIT_remove = \"report-error\"")
+        if "packages-list" not in get_bb_var("ERROR_QA"):
+            self.append_config("ERROR_QA_append = \" packages-list\"")
+
+        self.write_recipeinc('xcursor-transparent-theme', 'PACKAGES += \"${PN}-dbg\"')
+        self.add_command_to_tearDown('bitbake -c clean xcursor-transparent-theme')
+        res = bitbake("xcursor-transparent-theme -f -c package", ignore_status=True)
+        self.delete_recipeinc('xcursor-transparent-theme')
+        line = self.getline(res, "QA Issue: xcursor-transparent-theme-dbg is listed in PACKAGES multiple times, this leads to packaging errors.")
+        self.assertTrue(line and line.startswith("ERROR:"), msg=res.output)
+        self.assertEqual(res.status, 1, msg = "bitbake reported exit code %s. It should have been 1. Bitbake output: %s" % (str(res.status), res.output))
+        self.write_recipeinc('xcursor-transparent-theme', 'PACKAGES += \"${PN}-dbg\"')
+        self.append_config('ERROR_QA_remove = "packages-list"')
+        self.append_config('WARN_QA_append = " packages-list"')
+        res = bitbake("xcursor-transparent-theme -f -c package")
+        self.delete_recipeinc('xcursor-transparent-theme')
+        line = self.getline(res, "QA Issue: xcursor-transparent-theme-dbg is listed in PACKAGES multiple times, this leads to packaging errors.")
+        self.assertTrue(line and line.startswith("WARNING:"), msg=res.output)
+
+    @OETestID(1421)
+    def test_layer_without_git_dir(self):
+        """
+        Summary:     Test that layer git revisions are displayed and do not fail without a git repository
+        Expected:    The build to be successful and without "fatal" errors
+        Product:     oe-core
+        Author:      Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        """
+
+        dirpath = tempfile.mkdtemp()
+
+        dummy_layer_name = 'meta-dummy'
+        dummy_layer_path = os.path.join(dirpath, dummy_layer_name)
+        dummy_layer_conf_dir = os.path.join(dummy_layer_path, 'conf')
+        os.makedirs(dummy_layer_conf_dir)
+        dummy_layer_conf_path = os.path.join(dummy_layer_conf_dir, 'layer.conf')
+
+        dummy_layer_content = 'BBPATH .= ":${LAYERDIR}"\n' \
+                              'BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"\n' \
+                              'BBFILE_COLLECTIONS += "%s"\n' \
+                              'BBFILE_PATTERN_%s = "^${LAYERDIR}/"\n' \
+                              'BBFILE_PRIORITY_%s = "6"\n' % (dummy_layer_name, dummy_layer_name, dummy_layer_name)
+
+        ftools.write_file(dummy_layer_conf_path, dummy_layer_content)
+
+        bblayers_conf = 'BBLAYERS += "%s"\n' % dummy_layer_path
+        self.write_bblayers_config(bblayers_conf)
+
+        test_recipe = 'ed'
+
+        ret = bitbake('-n %s' % test_recipe)
+
+        err = 'fatal: Not a git repository'
+
+        shutil.rmtree(dirpath)
+
+        self.assertNotIn(err, ret.output)
+
+
+class BuildhistoryTests(BuildhistoryBase):
+
+    @OETestID(293)
+    def test_buildhistory_basic(self):
+        self.run_buildhistory_operation('xcursor-transparent-theme')
+        self.assertTrue(os.path.isdir(get_bb_var('BUILDHISTORY_DIR')), "buildhistory dir was not created.")
+
+    @OETestID(294)
+    def test_buildhistory_buildtime_pr_backwards(self):
+        target = 'xcursor-transparent-theme'
+        error = "ERROR:.*QA Issue: Package version for package %s went backwards which would break package feeds from (.*-r1.* to .*-r0.*)" % target
+        self.run_buildhistory_operation(target, target_config="PR = \"r1\"", change_bh_location=True)
+        self.run_buildhistory_operation(target, target_config="PR = \"r0\"", change_bh_location=False, expect_error=True, error_regex=error)
+
+class ArchiverTest(OESelftestTestCase):
+    @OETestID(926)
+    def test_arch_work_dir_and_export_source(self):
+        """
+        Test for archiving the work directory and exporting the source files.
+        """
+        self.write_config("INHERIT += \"archiver\"\nARCHIVER_MODE[src] = \"original\"\nARCHIVER_MODE[srpm] = \"1\"")
+        res = bitbake("xcursor-transparent-theme", ignore_status=True)
+        self.assertEqual(res.status, 0, "\nCouldn't build xcursortransparenttheme.\nbitbake output %s" % res.output)
+        deploy_dir_src = get_bb_var('DEPLOY_DIR_SRC')
+        pkgs_path = g.glob(str(deploy_dir_src) + "/allarch*/xcurs*")
+        src_file_glob = str(pkgs_path[0]) + "/xcursor*.src.rpm"
+        tar_file_glob = str(pkgs_path[0]) + "/xcursor*.tar.gz"
+        self.assertTrue((g.glob(src_file_glob) and g.glob(tar_file_glob)), "Couldn't find .src.rpm and .tar.gz files under %s/allarch*/xcursor*" % deploy_dir_src)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/containerimage.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/containerimage.py
new file mode 100644
index 0000000..99a5cc9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/containerimage.py
@@ -0,0 +1,85 @@
+import os
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import bitbake, get_bb_vars, runCmd
+from oeqa.core.decorator.oeid import OETestID
+
+# This test builds an image using the "container" IMAGE_FSTYPE, and
+# ensures that the files in the image are only the ones expected.
+#
+# The only package added to the image is container_image_testpkg, which
+# contains one file. However, due to some other things not cleaning up during
+# rootfs creation, there is some cruft. Ideally bugs will be filed and the
+# cruft removed, but for now we whitelist some known set.
+#
+# Also for performance reasons we're only checking the cruft when using ipk.
+# When using deb, and rpm it is a bit different and we could test all
+# of them, but this test is more to catch if other packages get added by
+# default other than what is in ROOTFS_BOOTSTRAP_INSTALL.
+#
+class ContainerImageTests(OESelftestTestCase):
+
+    # Verify that when specifying a IMAGE_TYPEDEP_ of the form "foo.bar" that
+    # the conversion type bar gets added as a dep as well
+    @OETestID(1619)
+    def test_expected_files(self):
+
+        def get_each_path_part(path):
+            if path:
+                part = [ '.' + path + '/' ]
+                result = get_each_path_part(path.rsplit('/', 1)[0])
+                if result:
+                    return part + result
+                else:
+                    return part
+            else:
+                return None
+
+        self.write_config("""PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
+IMAGE_FSTYPES = "container"
+PACKAGE_CLASSES = "package_ipk"
+IMAGE_FEATURES = ""
+""")
+
+        bbvars = get_bb_vars(['bindir', 'sysconfdir', 'localstatedir',
+                              'DEPLOY_DIR_IMAGE', 'IMAGE_LINK_NAME'],
+                              target='container-test-image')
+        expected_files = [
+                    './',
+                    '.{bindir}/theapp',
+                    '.{sysconfdir}/default/',
+                    '.{sysconfdir}/default/postinst',
+                    '.{sysconfdir}/ld.so.cache',
+                    '.{sysconfdir}/timestamp',
+                    '.{sysconfdir}/version',
+                    './run/',
+                    '.{localstatedir}/cache/',
+                    '.{localstatedir}/cache/ldconfig/',
+                    '.{localstatedir}/cache/ldconfig/aux-cache',
+                    '.{localstatedir}/cache/opkg/',
+                    '.{localstatedir}/lib/',
+                    '.{localstatedir}/lib/opkg/'
+                ]
+
+        expected_files = [ x.format(bindir=bbvars['bindir'],
+                                    sysconfdir=bbvars['sysconfdir'],
+                                    localstatedir=bbvars['localstatedir'])
+                                    for x in expected_files ]
+
+        # Since tar lists all directories individually, make sure each element
+        # from bindir, sysconfdir, etc is added
+        expected_files += get_each_path_part(bbvars['bindir'])
+        expected_files += get_each_path_part(bbvars['sysconfdir'])
+        expected_files += get_each_path_part(bbvars['localstatedir'])
+
+        expected_files = sorted(expected_files)
+
+        # Build the image of course
+        bitbake('container-test-image')
+
+        image = os.path.join(bbvars['DEPLOY_DIR_IMAGE'],
+                             bbvars['IMAGE_LINK_NAME'] + '.tar.bz2')
+
+        # Ensure the files in the image are what we expect
+        result = runCmd("tar tf {} | sort".format(image), shell=True)
+        self.assertEqual(result.output.split('\n'), expected_files)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/devtool.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/devtool.py
new file mode 100644
index 0000000..43280cd
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/devtool.py
@@ -0,0 +1,1711 @@
+import os
+import re
+import shutil
+import tempfile
+import glob
+import fnmatch
+
+import oeqa.utils.ftools as ftools
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, create_temp_layer
+from oeqa.utils.commands import get_bb_vars, runqemu, get_test_layer
+from oeqa.core.decorator.oeid import OETestID
+
+class DevtoolBase(OESelftestTestCase):
+
+    buffer = True
+
+    def _test_recipe_contents(self, recipefile, checkvars, checkinherits):
+        with open(recipefile, 'r') as f:
+            invar = None
+            invalue = None
+            for line in f:
+                var = None
+                if invar:
+                    value = line.strip().strip('"')
+                    if value.endswith('\\'):
+                        invalue += ' ' + value[:-1].strip()
+                        continue
+                    else:
+                        invalue += ' ' + value.strip()
+                        var = invar
+                        value = invalue
+                        invar = None
+                elif '=' in line:
+                    splitline = line.split('=', 1)
+                    var = splitline[0].rstrip()
+                    value = splitline[1].strip().strip('"')
+                    if value.endswith('\\'):
+                        invalue = value[:-1].strip()
+                        invar = var
+                        continue
+                elif line.startswith('inherit '):
+                    inherits = line.split()[1:]
+
+                if var and var in checkvars:
+                    needvalue = checkvars.pop(var)
+                    if needvalue is None:
+                        self.fail('Variable %s should not appear in recipe, but value is being set to "%s"' % (var, value))
+                    if isinstance(needvalue, set):
+                        if var == 'LICENSE':
+                            value = set(value.split(' & '))
+                        else:
+                            value = set(value.split())
+                    self.assertEqual(value, needvalue, 'values for %s do not match' % var)
+
+
+        missingvars = {}
+        for var, value in checkvars.items():
+            if value is not None:
+                missingvars[var] = value
+        self.assertEqual(missingvars, {}, 'Some expected variables not found in recipe: %s' % checkvars)
+
+        for inherit in checkinherits:
+            self.assertIn(inherit, inherits, 'Missing inherit of %s' % inherit)
+
+    def _check_bbappend(self, testrecipe, recipefile, appenddir):
+        result = runCmd('bitbake-layers show-appends', cwd=self.builddir)
+        resultlines = result.output.splitlines()
+        inrecipe = False
+        bbappends = []
+        bbappendfile = None
+        for line in resultlines:
+            if inrecipe:
+                if line.startswith(' '):
+                    bbappends.append(line.strip())
+                else:
+                    break
+            elif line == '%s:' % os.path.basename(recipefile):
+                inrecipe = True
+        self.assertLessEqual(len(bbappends), 2, '%s recipe is being bbappended by another layer - bbappends found:\n  %s' % (testrecipe, '\n  '.join(bbappends)))
+        for bbappend in bbappends:
+            if bbappend.startswith(appenddir):
+                bbappendfile = bbappend
+                break
+        else:
+            self.fail('bbappend for recipe %s does not seem to be created in test layer' % testrecipe)
+        return bbappendfile
+
+    def _create_temp_layer(self, templayerdir, addlayer, templayername, priority=999, recipepathspec='recipes-*/*'):
+        create_temp_layer(templayerdir, templayername, priority, recipepathspec)
+        if addlayer:
+            self.add_command_to_tearDown('bitbake-layers remove-layer %s || true' % templayerdir)
+            result = runCmd('bitbake-layers add-layer %s' % templayerdir, cwd=self.builddir)
+
+    def _process_ls_output(self, output):
+        """
+        Convert ls -l output to a format we can reasonably compare from one context
+        to another (e.g. from host to target)
+        """
+        filelist = []
+        for line in output.splitlines():
+            splitline = line.split()
+            if len(splitline) < 8:
+                self.fail('_process_ls_output: invalid output line: %s' % line)
+            # Remove trailing . on perms
+            splitline[0] = splitline[0].rstrip('.')
+            # Remove leading . on paths
+            splitline[-1] = splitline[-1].lstrip('.')
+            # Drop the link count, size and timestamp fields, which are not stable between contexts
+            del splitline[7]
+            del splitline[6]
+            del splitline[5]
+            del splitline[4]
+            del splitline[1]
+            filelist.append(' '.join(splitline))
+        return filelist
+
+
+class DevtoolTests(DevtoolBase):
+
+    @classmethod
+    def setUpClass(cls):
+        super(DevtoolTests, cls).setUpClass()
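+        # Use a dedicated sstate cache for these tests, with the original cache
+        # configured as a mirror, so that devtool builds do not pollute the
+        # shared sstate directory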
+        bb_vars = get_bb_vars(['TOPDIR', 'SSTATE_DIR'])
+        cls.original_sstate = bb_vars['SSTATE_DIR']
+        cls.devtool_sstate = os.path.join(bb_vars['TOPDIR'], 'sstate_devtool')
+        cls.sstate_conf  = 'SSTATE_DIR = "%s"\n' % cls.devtool_sstate
+        cls.sstate_conf += ('SSTATE_MIRRORS += "file://.* file:///%s/PATH"\n'
+                            % cls.original_sstate)
+
+    @classmethod
+    def tearDownClass(cls):
+        cls.logger.debug('Deleting devtool sstate cache on %s' % cls.devtool_sstate)
+        runCmd('rm -rf %s' % cls.devtool_sstate)
+        super(DevtoolTests, cls).tearDownClass()
+
+    def setUp(self):
+        """Test case setup function"""
+        super(DevtoolTests, self).setUp()
+        self.workspacedir = os.path.join(self.builddir, 'workspace')
+        self.assertTrue(not os.path.exists(self.workspacedir),
+                        'This test cannot be run with a workspace directory '
+                        'under the build directory')
+        self.append_config(self.sstate_conf)
+
+    def _check_src_repo(self, repo_dir):
+        """Check srctree git repository"""
+        self.assertTrue(os.path.isdir(os.path.join(repo_dir, '.git')),
+                        'git repository for external source tree not found')
+        result = runCmd('git status --porcelain', cwd=repo_dir)
+        self.assertEqual(result.output.strip(), "",
+                         'Created git repo is not clean')
+        result = runCmd('git symbolic-ref HEAD', cwd=repo_dir)
+        self.assertEqual(result.output.strip(), "refs/heads/devtool",
+                         'Wrong branch in git repo')
+
+    def _check_repo_status(self, repo_dir, expected_status):
+        """Check the worktree status of a repository"""
+        result = runCmd('git status . --porcelain',
+                        cwd=repo_dir)
+        for line in result.output.splitlines():
+            for ind, (f_status, fn_re) in enumerate(expected_status):
+                if re.match(fn_re, line[3:]):
+                    if f_status != line[:2]:
+                        self.fail('Unexpected status in line: %s' % line)
+                    expected_status.pop(ind)
+                    break
+            else:
+                self.fail('Unexpected modified file in line: %s' % line)
+        if expected_status:
+            self.fail('Missing file changes: %s' % expected_status)
+
+    @OETestID(1158)
+    def test_create_workspace(self):
+        # Check preconditions
+        result = runCmd('bitbake-layers show-layers')
+        self.assertTrue('/workspace' not in result.output, 'This test cannot be run with a workspace layer in bblayers.conf')
+        # Try creating a workspace layer with a specific path
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        result = runCmd('devtool create-workspace %s' % tempdir)
+        self.assertTrue(os.path.isfile(os.path.join(tempdir, 'conf', 'layer.conf')), msg="No workspace created. devtool output: %s" % result.output)
+        result = runCmd('bitbake-layers show-layers')
+        self.assertIn(tempdir, result.output)
+        # Try creating a workspace layer with the default path
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        result = runCmd('devtool create-workspace')
+        self.assertTrue(os.path.isfile(os.path.join(self.workspacedir, 'conf', 'layer.conf')), msg="No workspace created. devtool output: %s" % result.output)
+        result = runCmd('bitbake-layers show-layers')
+        self.assertNotIn(tempdir, result.output)
+        self.assertIn(self.workspacedir, result.output)
+
+    @OETestID(1159)
+    def test_devtool_add(self):
+        # Fetch source
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        pn = 'pv'
+        pv = '1.5.3'
+        url = 'http://www.ivarch.com/programs/sources/pv-1.5.3.tar.bz2'
+        result = runCmd('wget %s' % url, cwd=tempdir)
+        result = runCmd('tar xfv %s' % os.path.basename(url), cwd=tempdir)
+        srcdir = os.path.join(tempdir, '%s-%s' % (pn, pv))
+        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'configure')), 'Unable to find configure script in source directory')
+        # Test devtool add
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake -c cleansstate %s' % pn)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        result = runCmd('devtool add %s %s' % (pn, srcdir))
+        self.assertExists(os.path.join(self.workspacedir, 'conf', 'layer.conf'), 'Workspace directory not created')
+        # Test devtool status
+        result = runCmd('devtool status')
+        recipepath = '%s/recipes/%s/%s_%s.bb' % (self.workspacedir, pn, pn, pv)
+        self.assertIn(recipepath, result.output)
+        self.assertIn(srcdir, result.output)
+        # Test devtool find-recipe
+        result = runCmd('devtool -q find-recipe %s' % pn)
+        self.assertEqual(recipepath, result.output.strip())
+        # Test devtool edit-recipe
+        result = runCmd('VISUAL="echo 123" devtool -q edit-recipe %s' % pn)
+        self.assertEqual('123 %s' % recipepath, result.output.strip())
+        # Clean up anything in the workdir/sysroot/sstate cache (have to do this *after* devtool add since the recipe only exists then)
+        bitbake('%s -c cleansstate' % pn)
+        # Test devtool build
+        result = runCmd('devtool build %s' % pn)
+        bb_vars = get_bb_vars(['D', 'bindir'], pn)
+        installdir = bb_vars['D']
+        self.assertTrue(installdir, 'Could not query installdir variable')
+        bindir = bb_vars['bindir']
+        self.assertTrue(bindir, 'Could not query bindir variable')
+        if bindir[0] == '/':
+            bindir = bindir[1:]
+        self.assertTrue(os.path.isfile(os.path.join(installdir, bindir, 'pv')), 'pv binary not found in D')
+
+    @OETestID(1423)
+    def test_devtool_add_git_local(self):
+        # Fetch source from a remote URL, but do it outside of devtool
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        pn = 'dbus-wait'
+        srcrev = '6cc6077a36fe2648a5f993fe7c16c9632f946517'
+        # We choose an https:// git URL here to check rewriting the URL works
+        url = 'https://git.yoctoproject.org/git/dbus-wait'
+        # Force fetching to "noname" subdir so we verify we're picking up the name from autoconf
+        # instead of the directory name
+        result = runCmd('git clone %s noname' % url, cwd=tempdir)
+        srcdir = os.path.join(tempdir, 'noname')
+        result = runCmd('git reset --hard %s' % srcrev, cwd=srcdir)
+        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'configure.ac')), 'Unable to find configure.ac in source directory')
+        # Test devtool add
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        # Don't specify a name since we should be able to auto-detect it
+        result = runCmd('devtool add %s' % srcdir)
+        self.assertExists(os.path.join(self.workspacedir, 'conf', 'layer.conf'), 'Workspace directory not created')
+        # Check the recipe name is correct
+        recipefile = get_bb_var('FILE', pn)
+        self.assertIn('%s_git.bb' % pn, recipefile, 'Recipe file incorrectly named')
+        self.assertIn(recipefile, result.output)
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertIn(pn, result.output)
+        self.assertIn(srcdir, result.output)
+        self.assertIn(recipefile, result.output)
+        checkvars = {}
+        checkvars['LICENSE'] = 'GPLv2'
+        checkvars['LIC_FILES_CHKSUM'] = 'file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263'
+        checkvars['S'] = '${WORKDIR}/git'
+        checkvars['PV'] = '0.1+git${SRCPV}'
+        checkvars['SRC_URI'] = 'git://git.yoctoproject.org/git/dbus-wait;protocol=https'
+        checkvars['SRCREV'] = srcrev
+        checkvars['DEPENDS'] = set(['dbus'])
+        self._test_recipe_contents(recipefile, checkvars, [])
+
+    @OETestID(1162)
+    def test_devtool_add_library(self):
+        # Fetch source
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        version = '1.1'
+        url = 'https://www.intra2net.com/en/developer/libftdi/download/libftdi1-%s.tar.bz2' % version
+        result = runCmd('wget %s' % url, cwd=tempdir)
+        result = runCmd('tar xfv libftdi1-%s.tar.bz2' % version, cwd=tempdir)
+        srcdir = os.path.join(tempdir, 'libftdi1-%s' % version)
+        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'CMakeLists.txt')), 'Unable to find CMakeLists.txt in source directory')
+        # Test devtool add (and use -V so we test that too)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        result = runCmd('devtool add libftdi %s -V %s' % (srcdir, version))
+        self.assertExists(os.path.join(self.workspacedir, 'conf', 'layer.conf'), 'Workspace directory not created')
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertIn('libftdi', result.output)
+        self.assertIn(srcdir, result.output)
+        # Clean up anything in the workdir/sysroot/sstate cache (have to do this *after* devtool add since the recipe only exists then)
+        bitbake('libftdi -c cleansstate')
+        # libftdi's python/CMakeLists.txt is a bit broken, so let's just disable it
+        # There's also the matter of it installing cmake files to a path we don't
+        # normally cover, which triggers the installed-vs-shipped QA test we have
+        # within do_package
+        recipefile = '%s/recipes/libftdi/libftdi_%s.bb' % (self.workspacedir, version)
+        result = runCmd('recipetool setvar %s EXTRA_OECMAKE -- \'-DPYTHON_BINDINGS=OFF -DLIBFTDI_CMAKE_CONFIG_DIR=${datadir}/cmake/Modules\'' % recipefile)
+        with open(recipefile, 'a') as f:
+            f.write('\nFILES_${PN}-dev += "${datadir}/cmake/Modules"\n')
+            # We don't have the ability to pick up this dependency automatically yet...
+            f.write('\nDEPENDS += "libusb1"\n')
+            f.write('\nTESTLIBOUTPUT = "${COMPONENTS_DIR}/${TUNE_PKGARCH}/${PN}/${libdir}"\n')
+        # Test devtool build
+        result = runCmd('devtool build libftdi')
+        bb_vars = get_bb_vars(['TESTLIBOUTPUT', 'STAMP'], 'libftdi')
+        staging_libdir = bb_vars['TESTLIBOUTPUT']
+        self.assertTrue(staging_libdir, 'Could not query TESTLIBOUTPUT variable')
+        self.assertTrue(os.path.isfile(os.path.join(staging_libdir, 'libftdi1.so.2.1.0')), "libftdi binary not found in STAGING_LIBDIR. Output of devtool build libftdi %s" % result.output)
+        # Test devtool reset
+        stampprefix = bb_vars['STAMP']
+        result = runCmd('devtool reset libftdi')
+        result = runCmd('devtool status')
+        self.assertNotIn('libftdi', result.output)
+        self.assertTrue(stampprefix, 'Unable to get STAMP value for recipe libftdi')
+        matches = glob.glob(stampprefix + '*')
+        self.assertFalse(matches, 'Stamp files exist for recipe libftdi that should have been cleaned')
+        self.assertFalse(os.path.isfile(os.path.join(staging_libdir, 'libftdi1.so.2.1.0')), 'libftdi binary still found in STAGING_LIBDIR after cleaning')
+
+    @OETestID(1160)
+    def test_devtool_add_fetch(self):
+        # Fetch source
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        testver = '0.23'
+        url = 'https://pypi.python.org/packages/source/M/MarkupSafe/MarkupSafe-%s.tar.gz' % testver
+        testrecipe = 'python-markupsafe'
+        srcdir = os.path.join(tempdir, testrecipe)
+        # Test devtool add
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake -c cleansstate %s' % testrecipe)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        result = runCmd('devtool add %s %s -f %s' % (testrecipe, srcdir, url))
+        self.assertExists(os.path.join(self.workspacedir, 'conf', 'layer.conf'), 'Workspace directory not created. %s' % result.output)
+        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'setup.py')), 'Unable to find setup.py in source directory')
+        self.assertTrue(os.path.isdir(os.path.join(srcdir, '.git')), 'git repository for external source tree was not created')
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertIn(testrecipe, result.output)
+        self.assertIn(srcdir, result.output)
+        # Check recipe
+        recipefile = get_bb_var('FILE', testrecipe)
+        self.assertIn('%s_%s.bb' % (testrecipe, testver), recipefile, 'Recipe file incorrectly named')
+        checkvars = {}
+        checkvars['S'] = '${WORKDIR}/MarkupSafe-${PV}'
+        checkvars['SRC_URI'] = url.replace(testver, '${PV}')
+        self._test_recipe_contents(recipefile, checkvars, [])
+        # Try with version specified
+        result = runCmd('devtool reset -n %s' % testrecipe)
+        shutil.rmtree(srcdir)
+        fakever = '1.9'
+        result = runCmd('devtool add %s %s -f %s -V %s' % (testrecipe, srcdir, url, fakever))
+        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'setup.py')), 'Unable to find setup.py in source directory')
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertIn(testrecipe, result.output)
+        self.assertIn(srcdir, result.output)
+        # Check recipe
+        recipefile = get_bb_var('FILE', testrecipe)
+        self.assertIn('%s_%s.bb' % (testrecipe, fakever), recipefile, 'Recipe file incorrectly named')
+        checkvars = {}
+        checkvars['S'] = '${WORKDIR}/MarkupSafe-%s' % testver
+        checkvars['SRC_URI'] = url
+        self._test_recipe_contents(recipefile, checkvars, [])
+
+    @OETestID(1161)
+    def test_devtool_add_fetch_git(self):
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        url = 'gitsm://git.yoctoproject.org/mraa'
+        checkrev = 'ae127b19a50aa54255e4330ccfdd9a5d058e581d'
+        testrecipe = 'mraa'
+        srcdir = os.path.join(tempdir, testrecipe)
+        # Test devtool add
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake -c cleansstate %s' % testrecipe)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        result = runCmd('devtool add %s %s -a -f %s' % (testrecipe, srcdir, url))
+        self.assertExists(os.path.join(self.workspacedir, 'conf', 'layer.conf'), 'Workspace directory not created: %s' % result.output)
+        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'imraa', 'imraa.c')), 'Unable to find imraa/imraa.c in source directory')
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertIn(testrecipe, result.output)
+        self.assertIn(srcdir, result.output)
+        # Check recipe
+        recipefile = get_bb_var('FILE', testrecipe)
+        self.assertIn('_git.bb', recipefile, 'Recipe file incorrectly named')
+        checkvars = {}
+        checkvars['S'] = '${WORKDIR}/git'
+        checkvars['PV'] = '1.0+git${SRCPV}'
+        checkvars['SRC_URI'] = url
+        checkvars['SRCREV'] = '${AUTOREV}'
+        self._test_recipe_contents(recipefile, checkvars, [])
+        # Try with revision and version specified
+        result = runCmd('devtool reset -n %s' % testrecipe)
+        shutil.rmtree(srcdir)
+        url_rev = '%s;rev=%s' % (url, checkrev)
+        result = runCmd('devtool add %s %s -f "%s" -V 1.5' % (testrecipe, srcdir, url_rev))
+        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'imraa', 'imraa.c')), 'Unable to find imraa/imraa.c in source directory')
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertIn(testrecipe, result.output)
+        self.assertIn(srcdir, result.output)
+        # Check recipe
+        recipefile = get_bb_var('FILE', testrecipe)
+        self.assertIn('_git.bb', recipefile, 'Recipe file incorrectly named')
+        checkvars = {}
+        checkvars['S'] = '${WORKDIR}/git'
+        checkvars['PV'] = '1.5+git${SRCPV}'
+        checkvars['SRC_URI'] = url
+        checkvars['SRCREV'] = checkrev
+        self._test_recipe_contents(recipefile, checkvars, [])
+
+    @OETestID(1391)
+    def test_devtool_add_fetch_simple(self):
+        # Fetch source from a remote URL, auto-detecting name
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        testver = '1.6.0'
+        url = 'http://www.ivarch.com/programs/sources/pv-%s.tar.bz2' % testver
+        testrecipe = 'pv'
+        srcdir = os.path.join(self.workspacedir, 'sources', testrecipe)
+        # Test devtool add
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
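+        # With no srctree argument, devtool should fetch into the default
+        # workspace/sources/<recipe> location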
+        result = runCmd('devtool add %s' % url)
+        self.assertExists(os.path.join(self.workspacedir, 'conf', 'layer.conf'), 'Workspace directory not created. %s' % result.output)
+        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'configure')), 'Unable to find configure script in source directory')
+        self.assertTrue(os.path.isdir(os.path.join(srcdir, '.git')), 'git repository for external source tree was not created')
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertIn(testrecipe, result.output)
+        self.assertIn(srcdir, result.output)
+        # Check recipe
+        recipefile = get_bb_var('FILE', testrecipe)
+        self.assertIn('%s_%s.bb' % (testrecipe, testver), recipefile, 'Recipe file incorrectly named')
+        checkvars = {}
+        checkvars['S'] = None
+        checkvars['SRC_URI'] = url.replace(testver, '${PV}')
+        self._test_recipe_contents(recipefile, checkvars, [])
+
+    @OETestID(1164)
+    def test_devtool_modify(self):
+        import oe.path
+
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        self.add_command_to_tearDown('bitbake -c clean mdadm')
+        result = runCmd('devtool modify mdadm -x %s' % tempdir)
+        self.assertExists(os.path.join(tempdir, 'Makefile'), 'Extracted source could not be found')
+        self.assertExists(os.path.join(self.workspacedir, 'conf', 'layer.conf'), 'Workspace directory not created')
+        matches = glob.glob(os.path.join(self.workspacedir, 'appends', 'mdadm_*.bbappend'))
+        self.assertTrue(matches, 'bbappend not created %s' % result.output)
+
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertIn('mdadm', result.output)
+        self.assertIn(tempdir, result.output)
+        self._check_src_repo(tempdir)
+
+        bitbake('mdadm -C unpack')
+
+        def check_line(checkfile, expected, message, present=True):
+            # Check that the exact line 'expected' is (or, with present=False, is not) present in checkfile.
+            with open(checkfile, 'r') as f:
+                if present:
+                    self.assertIn(expected + '\n', f, message)
+                else:
+                    self.assertNotIn(expected + '\n', f, message)
+
+        modfile = os.path.join(tempdir, 'mdadm.8.in')
+        bb_vars = get_bb_vars(['PKGD', 'mandir'], 'mdadm')
+        pkgd = bb_vars['PKGD']
+        self.assertTrue(pkgd, 'Could not query PKGD variable')
+        mandir = bb_vars['mandir']
+        self.assertTrue(mandir, 'Could not query mandir variable')
+        manfile = oe.path.join(pkgd, mandir, 'man8', 'mdadm.8')
+
+        check_line(modfile, 'Linux Software RAID', 'Could not find initial string')
+        check_line(modfile, 'antique pin sardine', 'Unexpectedly found replacement string', present=False)
+
+        result = runCmd("sed -i 's!^Linux Software RAID$!antique pin sardine!' %s" % modfile)
+        check_line(modfile, 'antique pin sardine', 'mdadm.8.in file not modified (sed failed)')
+
+        bitbake('mdadm -c package')
+        check_line(manfile, 'antique pin sardine', 'man file not modified. man searched file path: %s' % manfile)
+
+        result = runCmd('git checkout -- %s' % modfile, cwd=tempdir)
+        check_line(modfile, 'Linux Software RAID', 'man .in file not restored (git failed)')
+
+        bitbake('mdadm -c package')
+        check_line(manfile, 'Linux Software RAID', 'man file not updated. man searched file path: %s' % manfile)
+
+        result = runCmd('devtool reset mdadm')
+        result = runCmd('devtool status')
+        self.assertNotIn('mdadm', result.output)
+
+    @OETestID(1620)
+    def test_devtool_buildclean(self):
+        def assertFile(path, *paths):
+            f = os.path.join(path, *paths)
+            self.assertExists(f)
+        def assertNoFile(path, *paths):
+            f = os.path.join(path, *paths)
+            self.assertNotExists(f)
+
+        # Clean up anything in the workdir/sysroot/sstate cache
+        bitbake('mdadm m4 -c cleansstate')
+        # Try modifying a recipe
+        tempdir_mdadm = tempfile.mkdtemp(prefix='devtoolqa')
+        tempdir_m4 = tempfile.mkdtemp(prefix='devtoolqa')
+        builddir_m4 = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir_mdadm)
+        self.track_for_cleanup(tempdir_m4)
+        self.track_for_cleanup(builddir_m4)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        self.add_command_to_tearDown('bitbake -c clean mdadm m4')
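+        # Build m4 in an external build directory (so B != S) and make its
+        # do_clean a no-op for the checks below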
+        self.write_recipeinc('m4', 'EXTERNALSRC_BUILD = "%s"\ndo_clean() {\n\t:\n}\n' % builddir_m4)
+        try:
+            runCmd('devtool modify mdadm -x %s' % tempdir_mdadm)
+            runCmd('devtool modify m4 -x %s' % tempdir_m4)
+            assertNoFile(tempdir_mdadm, 'mdadm')
+            assertNoFile(builddir_m4, 'src/m4')
+            result = bitbake('m4 -e')
+            result = bitbake('mdadm m4 -c compile')
+            self.assertEqual(result.status, 0)
+            assertFile(tempdir_mdadm, 'mdadm')
+            assertFile(builddir_m4, 'src/m4')
+            # Check that buildclean task exists and does call make clean
+            bitbake('mdadm m4 -c buildclean')
+            assertNoFile(tempdir_mdadm, 'mdadm')
+            assertNoFile(builddir_m4, 'src/m4')
+            bitbake('mdadm m4 -c compile')
+            assertFile(tempdir_mdadm, 'mdadm')
+            assertFile(builddir_m4, 'src/m4')
+            bitbake('mdadm m4 -c clean')
+            # Check that buildclean task is run before clean for B == S
+            assertNoFile(tempdir_mdadm, 'mdadm')
+            # Check that buildclean task is not run before clean for B != S
+            assertFile(builddir_m4, 'src/m4')
+        finally:
+            self.delete_recipeinc('m4')
+
+    @OETestID(1166)
+    def test_devtool_modify_invalid(self):
+        # Try modifying some recipes
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+
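+        # None of these recipes are sensible targets for devtool extract/modify
+        # (images, packagegroups, meta/source recipes) and should be rejected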
+        testrecipes = 'perf kernel-devsrc package-index core-image-minimal meta-toolchain packagegroup-core-sdk meta-ide-support'.split()
+        # Find actual name of gcc-source since it now includes the version - crude, but good enough for this purpose
+        result = runCmd('bitbake-layers show-recipes gcc-source*')
+        for line in result.output.splitlines():
+            # just match those lines that contain a real target
+            m = re.match('(?P<recipe>^[a-zA-Z0-9.-]+)(?P<colon>:$)', line)
+            if m:
+                testrecipes.append(m.group('recipe'))
+        for testrecipe in testrecipes:
+            # Check it's a valid recipe
+            bitbake('%s -e' % testrecipe)
+            # devtool extract should fail
+            result = runCmd('devtool extract %s %s' % (testrecipe, os.path.join(tempdir, testrecipe)), ignore_status=True)
+            self.assertNotEqual(result.status, 0, 'devtool extract on %s should have failed. devtool output: %s' % (testrecipe, result.output))
+            self.assertNotIn('Fetching ', result.output, 'devtool extract on %s should have errored out before trying to fetch' % testrecipe)
+            self.assertIn('ERROR: ', result.output, 'devtool extract on %s should have given an ERROR' % testrecipe)
+            # devtool modify should fail
+            result = runCmd('devtool modify %s -x %s' % (testrecipe, os.path.join(tempdir, testrecipe)), ignore_status=True)
+            self.assertNotEqual(result.status, 0, 'devtool modify on %s should have failed. devtool output: %s' % (testrecipe, result.output))
+            self.assertIn('ERROR: ', result.output, 'devtool modify on %s should have given an ERROR' % testrecipe)
+
+    @OETestID(1365)
+    def test_devtool_modify_native(self):
+        # Check preconditions
+        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
+        # Try modifying some recipes
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+
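+        # The recipe list should cover both ways of being native: BBCLASSEXTEND
+        # and 'inherit native' (both flags are asserted after the loop)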
+        bbclassextended = False
+        inheritnative = False
+        testrecipes = 'mtools-native apt-native desktop-file-utils-native'.split()
+        for testrecipe in testrecipes:
+            checkextend = 'native' in (get_bb_var('BBCLASSEXTEND', testrecipe) or '').split()
+            if not bbclassextended:
+                bbclassextended = checkextend
+            if not inheritnative:
+                inheritnative = not checkextend
+            result = runCmd('devtool modify %s -x %s' % (testrecipe, os.path.join(tempdir, testrecipe)))
+            self.assertNotIn('ERROR: ', result.output, 'ERROR in devtool modify output: %s' % result.output)
+            result = runCmd('devtool build %s' % testrecipe)
+            self.assertNotIn('ERROR: ', result.output, 'ERROR in devtool build output: %s' % result.output)
+            result = runCmd('devtool reset %s' % testrecipe)
+            self.assertNotIn('ERROR: ', result.output, 'ERROR in devtool reset output: %s' % result.output)
+
+        self.assertTrue(bbclassextended, 'None of these recipes are BBCLASSEXTENDed to native - need to adjust testrecipes list: %s' % ', '.join(testrecipes))
+        self.assertTrue(inheritnative, 'None of these recipes do "inherit native" - need to adjust testrecipes list: %s' % ', '.join(testrecipes))
+
+    @OETestID(1165)
+    def test_devtool_modify_git(self):
+        # Check preconditions
+        testrecipe = 'mkelfimage'
+        src_uri = get_bb_var('SRC_URI', testrecipe)
+        self.assertIn('git://', src_uri, 'This test expects the %s recipe to be a git recipe' % testrecipe)
+        # Clean up anything in the workdir/sysroot/sstate cache
+        bitbake('%s -c cleansstate' % testrecipe)
+        # Try modifying a recipe
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        self.add_command_to_tearDown('bitbake -c clean %s' % testrecipe)
+        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempdir))
+        self.assertExists(os.path.join(tempdir, 'Makefile'), 'Extracted source could not be found')
+        self.assertExists(os.path.join(self.workspacedir, 'conf', 'layer.conf'), 'Workspace directory not created. devtool output: %s' % result.output)
+        matches = glob.glob(os.path.join(self.workspacedir, 'appends', 'mkelfimage_*.bbappend'))
+        self.assertTrue(matches, 'bbappend not created')
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertIn(testrecipe, result.output)
+        self.assertIn(tempdir, result.output)
+        # Check git repo
+        self._check_src_repo(tempdir)
+        # Try building
+        bitbake(testrecipe)
+
+    @OETestID(1167)
+    def test_devtool_modify_localfiles(self):
+        # Check preconditions
+        testrecipe = 'lighttpd'
+        src_uri = (get_bb_var('SRC_URI', testrecipe) or '').split()
+        foundlocal = False
+        for item in src_uri:
+            if item.startswith('file://') and '.patch' not in item:
+                foundlocal = True
+                break
+        self.assertTrue(foundlocal, 'This test expects the %s recipe to fetch local files and it seems that it no longer does' % testrecipe)
+        # Clean up anything in the workdir/sysroot/sstate cache
+        bitbake('%s -c cleansstate' % testrecipe)
+        # Try modifying a recipe
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        self.add_command_to_tearDown('bitbake -c clean %s' % testrecipe)
+        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempdir))
+        self.assertExists(os.path.join(tempdir, 'configure.ac'), 'Extracted source could not be found')
+        self.assertExists(os.path.join(self.workspacedir, 'conf', 'layer.conf'), 'Workspace directory not created')
+        matches = glob.glob(os.path.join(self.workspacedir, 'appends', '%s_*.bbappend' % testrecipe))
+        self.assertTrue(matches, 'bbappend not created')
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertIn(testrecipe, result.output)
+        self.assertIn(tempdir, result.output)
+        # Try building
+        bitbake(testrecipe)
+
+    @OETestID(1378)
+    def test_devtool_modify_virtual(self):
+        # Try modifying a virtual recipe
+        virtrecipe = 'virtual/make'
+        realrecipe = 'make'
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        result = runCmd('devtool modify %s -x %s' % (virtrecipe, tempdir))
+        self.assertExists(os.path.join(tempdir, 'Makefile.am'), 'Extracted source could not be found')
+        self.assertExists(os.path.join(self.workspacedir, 'conf', 'layer.conf'), 'Workspace directory not created')
+        matches = glob.glob(os.path.join(self.workspacedir, 'appends', '%s_*.bbappend' % realrecipe))
+        self.assertTrue(matches, 'bbappend not created %s' % result.output)
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertNotIn(virtrecipe, result.output)
+        self.assertIn(realrecipe, result.output)
+        # Check git repo
+        self._check_src_repo(tempdir)
+        # This much coverage is probably sufficient for a virtual recipe
+
+    @OETestID(1169)
+    def test_devtool_update_recipe(self):
+        # Check preconditions
+        testrecipe = 'minicom'
+        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
+        recipefile = bb_vars['FILE']
+        src_uri = bb_vars['SRC_URI']
+        self.assertNotIn('git://', src_uri, 'This test expects the %s recipe to NOT be a git recipe' % testrecipe)
+        self._check_repo_status(os.path.dirname(recipefile), [])
+        # First, modify a recipe
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        # (don't bother with cleaning the recipe on teardown, we won't be building it)
+        # We don't use -x here so that we test the behaviour of devtool modify without it
+        result = runCmd('devtool modify %s %s' % (testrecipe, tempdir))
+        # Check git repo
+        self._check_src_repo(tempdir)
+        # Add a couple of commits
+        # FIXME: this only tests adding, need to also test update and remove
+        result = runCmd('echo "Additional line" >> README', cwd=tempdir)
+        result = runCmd('git commit -a -m "Change the README"', cwd=tempdir)
+        result = runCmd('echo "A new file" > devtool-new-file', cwd=tempdir)
+        result = runCmd('git add devtool-new-file', cwd=tempdir)
+        result = runCmd('git commit -m "Add a new file"', cwd=tempdir)
+        self.add_command_to_tearDown('cd %s; rm %s/*.patch; git checkout %s %s' % (os.path.dirname(recipefile), testrecipe, testrecipe, os.path.basename(recipefile)))
+        result = runCmd('devtool update-recipe %s' % testrecipe)
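+        # The recipe itself should be modified in place and the two new patch
+        # files should appear as untracked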
+        expected_status = [(' M', '.*/%s$' % os.path.basename(recipefile)),
+                           ('??', '.*/0001-Change-the-README.patch$'),
+                           ('??', '.*/0002-Add-a-new-file.patch$')]
+        self._check_repo_status(os.path.dirname(recipefile), expected_status)
+
+    @OETestID(1172)
+    def test_devtool_update_recipe_git(self):
+        # Check preconditions
+        testrecipe = 'mtd-utils'
+        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
+        recipefile = bb_vars['FILE']
+        src_uri = bb_vars['SRC_URI']
+        self.assertIn('git://', src_uri, 'This test expects the %s recipe to be a git recipe' % testrecipe)
+        patches = []
+        for entry in src_uri.split():
+            if entry.startswith('file://') and entry.endswith('.patch'):
+                patches.append(entry[7:].split(';')[0])
+        self.assertGreater(len(patches), 0, 'The %s recipe does not appear to contain any patches, so this test will not be effective' % testrecipe)
+        self._check_repo_status(os.path.dirname(recipefile), [])
+        # First, modify a recipe
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        # (don't bother with cleaning the recipe on teardown, we won't be building it)
+        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempdir))
+        # Check git repo
+        self._check_src_repo(tempdir)
+        # Add a couple of commits
+        # FIXME: this only tests adding, need to also test update and remove
+        result = runCmd('echo "# Additional line" >> Makefile.am', cwd=tempdir)
+        result = runCmd('git commit -a -m "Change the Makefile"', cwd=tempdir)
+        result = runCmd('echo "A new file" > devtool-new-file', cwd=tempdir)
+        result = runCmd('git add devtool-new-file', cwd=tempdir)
+        result = runCmd('git commit -m "Add a new file"', cwd=tempdir)
+        self.add_command_to_tearDown('cd %s; rm -rf %s; git checkout %s %s' % (os.path.dirname(recipefile), testrecipe, testrecipe, os.path.basename(recipefile)))
+        result = runCmd('devtool update-recipe -m srcrev %s' % testrecipe)
+        expected_status = [(' M', '.*/%s$' % os.path.basename(recipefile))] + \
+                          [(' D', '.*/%s$' % patch) for patch in patches]
+        self._check_repo_status(os.path.dirname(recipefile), expected_status)
+
+        result = runCmd('git diff %s' % os.path.basename(recipefile), cwd=os.path.dirname(recipefile))
+        addlines = ['SRCREV = ".*"', 'SRC_URI = "git://git.infradead.org/mtd-utils.git"']
+        srcurilines = src_uri.split()
+        srcurilines[0] = 'SRC_URI = "' + srcurilines[0]
+        srcurilines.append('"')
+        removelines = ['SRCREV = ".*"'] + srcurilines
+        for line in result.output.splitlines():
+            if line.startswith('+++') or line.startswith('---'):
+                continue
+            elif line.startswith('+'):
+                matched = False
+                for item in addlines:
+                    if re.match(item, line[1:].strip()):
+                        matched = True
+                        break
+                self.assertTrue(matched, 'Unexpected diff add line: %s' % line)
+            elif line.startswith('-'):
+                matched = False
+                for item in removelines:
+                    if re.match(item, line[1:].strip()):
+                        matched = True
+                        break
+                self.assertTrue(matched, 'Unexpected diff remove line: %s' % line)
+        # Now try with auto mode
+        runCmd('cd %s; git checkout %s %s' % (os.path.dirname(recipefile), testrecipe, os.path.basename(recipefile)))
+        result = runCmd('devtool update-recipe %s' % testrecipe)
+        result = runCmd('git rev-parse --show-toplevel', cwd=os.path.dirname(recipefile))
+        topleveldir = result.output.strip()
+        relpatchpath = os.path.join(os.path.relpath(os.path.dirname(recipefile), topleveldir), testrecipe)
+        expected_status = [(' M', os.path.relpath(recipefile, topleveldir)),
+                           ('??', '%s/0001-Change-the-Makefile.patch' % relpatchpath),
+                           ('??', '%s/0002-Add-a-new-file.patch' % relpatchpath)]
+        self._check_repo_status(os.path.dirname(recipefile), expected_status)
+
+    @OETestID(1170)
+    def test_devtool_update_recipe_append(self):
+        # Check preconditions
+        testrecipe = 'mdadm'
+        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
+        recipefile = bb_vars['FILE']
+        src_uri = bb_vars['SRC_URI']
+        self.assertNotIn('git://', src_uri, 'This test expects the %s recipe to NOT be a git recipe' % testrecipe)
+        self._check_repo_status(os.path.dirname(recipefile), [])
+        # First, modify a recipe
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        tempsrcdir = os.path.join(tempdir, 'source')
+        templayerdir = os.path.join(tempdir, 'layer')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        # (don't bother with cleaning the recipe on teardown, we won't be building it)
+        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempsrcdir))
+        # Check git repo
+        self._check_src_repo(tempsrcdir)
+        # Add a commit
+        result = runCmd("sed 's!\\(#define VERSION\\W*\"[^\"]*\\)\"!\\1-custom\"!' -i ReadMe.c", cwd=tempsrcdir)
+        result = runCmd('git commit -a -m "Add our custom version"', cwd=tempsrcdir)
+        self.add_command_to_tearDown('cd %s; rm -f %s/*.patch; git checkout .' % (os.path.dirname(recipefile), testrecipe))
+        # Create a temporary layer and add it to bblayers.conf
+        self._create_temp_layer(templayerdir, True, 'selftestupdaterecipe')
+        # Create the bbappend
+        result = runCmd('devtool update-recipe %s -a %s' % (testrecipe, templayerdir))
+        self.assertNotIn('WARNING:', result.output)
+        # Check recipe is still clean
+        self._check_repo_status(os.path.dirname(recipefile), [])
+        # Check bbappend was created
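+        # (it should land under the same recipes-*/<recipe> subdirectories as
+        # the original recipe)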
+        splitpath = os.path.dirname(recipefile).split(os.sep)
+        appenddir = os.path.join(templayerdir, splitpath[-2], splitpath[-1])
+        bbappendfile = self._check_bbappend(testrecipe, recipefile, appenddir)
+        patchfile = os.path.join(appenddir, testrecipe, '0001-Add-our-custom-version.patch')
+        self.assertExists(patchfile, 'Patch file not created')
+
+        # Check bbappend contents
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n',
+                         'SRC_URI += "file://0001-Add-our-custom-version.patch"\n',
+                         '\n']
+        with open(bbappendfile, 'r') as f:
+            self.assertEqual(expectedlines, f.readlines())
+
+        # Check we can run it again and bbappend isn't modified
+        result = runCmd('devtool update-recipe %s -a %s' % (testrecipe, templayerdir))
+        with open(bbappendfile, 'r') as f:
+            self.assertEqual(expectedlines, f.readlines())
+        # Drop new commit and check patch gets deleted
+        result = runCmd('git reset HEAD^', cwd=tempsrcdir)
+        result = runCmd('devtool update-recipe %s -a %s' % (testrecipe, templayerdir))
+        self.assertNotExists(patchfile, 'Patch file not deleted')
+        expectedlines2 = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n']
+        with open(bbappendfile, 'r') as f:
+            self.assertEqual(expectedlines2, f.readlines())
+        # Put commit back and check we can run it if layer isn't in bblayers.conf
+        os.remove(bbappendfile)
+        result = runCmd('git commit -a -m "Add our custom version"', cwd=tempsrcdir)
+        result = runCmd('bitbake-layers remove-layer %s' % templayerdir, cwd=self.builddir)
+        result = runCmd('devtool update-recipe %s -a %s' % (testrecipe, templayerdir))
+        self.assertIn('WARNING: Specified layer is not currently enabled in bblayers.conf', result.output)
+        self.assertExists(patchfile, 'Patch file not created (with disabled layer)')
+        with open(bbappendfile, 'r') as f:
+            self.assertEqual(expectedlines, f.readlines())
+        # Deleting isn't expected to work under these circumstances
+
+    @OETestID(1171)
+    def test_devtool_update_recipe_append_git(self):
+        # Check preconditions
+        testrecipe = 'mtd-utils'
+        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
+        recipefile = bb_vars['FILE']
+        src_uri = bb_vars['SRC_URI']
+        self.assertIn('git://', src_uri, 'This test expects the %s recipe to be a git recipe' % testrecipe)
+        for entry in src_uri.split():
+            if entry.startswith('git://'):
+                git_uri = entry
+                break
+        self._check_repo_status(os.path.dirname(recipefile), [])
+        # First, modify a recipe
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        tempsrcdir = os.path.join(tempdir, 'source')
+        templayerdir = os.path.join(tempdir, 'layer')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        # (don't bother with cleaning the recipe on teardown, we won't be building it)
+        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempsrcdir))
+        # Check git repo
+        self._check_src_repo(tempsrcdir)
+        # Add a commit
+        result = runCmd('echo "# Additional line" >> Makefile.am', cwd=tempsrcdir)
+        result = runCmd('git commit -a -m "Change the Makefile"', cwd=tempsrcdir)
+        self.add_command_to_tearDown('cd %s; rm -f %s/*.patch; git checkout .' % (os.path.dirname(recipefile), testrecipe))
+        # Create a temporary layer
+        os.makedirs(os.path.join(templayerdir, 'conf'))
+        with open(os.path.join(templayerdir, 'conf', 'layer.conf'), 'w') as f:
+            f.write('BBPATH .= ":${LAYERDIR}"\n')
+            f.write('BBFILES += "${LAYERDIR}/recipes-*/*/*.bbappend"\n')
+            f.write('BBFILE_COLLECTIONS += "oeselftesttemplayer"\n')
+            f.write('BBFILE_PATTERN_oeselftesttemplayer = "^${LAYERDIR}/"\n')
+            f.write('BBFILE_PRIORITY_oeselftesttemplayer = "999"\n')
+            f.write('BBFILE_PATTERN_IGNORE_EMPTY_oeselftesttemplayer = "1"\n')
+        self.add_command_to_tearDown('bitbake-layers remove-layer %s || true' % templayerdir)
+        result = runCmd('bitbake-layers add-layer %s' % templayerdir, cwd=self.builddir)
+        # Create the bbappend
+        result = runCmd('devtool update-recipe -m srcrev %s -a %s' % (testrecipe, templayerdir))
+        self.assertNotIn('WARNING:', result.output)
+        # Check recipe is still clean
+        self._check_repo_status(os.path.dirname(recipefile), [])
+        # Check bbappend was created
+        splitpath = os.path.dirname(recipefile).split(os.sep)
+        appenddir = os.path.join(templayerdir, splitpath[-2], splitpath[-1])
+        bbappendfile = self._check_bbappend(testrecipe, recipefile, appenddir)
+        self.assertNotExists(os.path.join(appenddir, testrecipe), 'Patch directory should not be created')
+
+        # Check bbappend contents
+        result = runCmd('git rev-parse HEAD', cwd=tempsrcdir)
+        expectedlines = set(['SRCREV = "%s"\n' % result.output,
+                             '\n',
+                             'SRC_URI = "%s"\n' % git_uri,
+                             '\n'])
+        with open(bbappendfile, 'r') as f:
+            self.assertEqual(expectedlines, set(f.readlines()))
+
+        # Check we can run it again and bbappend isn't modified
+        result = runCmd('devtool update-recipe -m srcrev %s -a %s' % (testrecipe, templayerdir))
+        with open(bbappendfile, 'r') as f:
+            self.assertEqual(expectedlines, set(f.readlines()))
+        # Drop new commit and check SRCREV changes
+        result = runCmd('git reset HEAD^', cwd=tempsrcdir)
+        result = runCmd('devtool update-recipe -m srcrev %s -a %s' % (testrecipe, templayerdir))
+        self.assertNotExists(os.path.join(appenddir, testrecipe), 'Patch directory should not be created')
+        result = runCmd('git rev-parse HEAD', cwd=tempsrcdir)
+        expectedlines = set(['SRCREV = "%s"\n' % result.output,
+                             '\n',
+                             'SRC_URI = "%s"\n' % git_uri,
+                             '\n'])
+        with open(bbappendfile, 'r') as f:
+            self.assertEqual(expectedlines, set(f.readlines()))
+        # Put commit back and check we can run it if layer isn't in bblayers.conf
+        os.remove(bbappendfile)
+        result = runCmd('git commit -a -m "Change the Makefile"', cwd=tempsrcdir)
+        result = runCmd('bitbake-layers remove-layer %s' % templayerdir, cwd=self.builddir)
+        result = runCmd('devtool update-recipe -m srcrev %s -a %s' % (testrecipe, templayerdir))
+        self.assertIn('WARNING: Specified layer is not currently enabled in bblayers.conf', result.output)
+        self.assertNotExists(os.path.join(appenddir, testrecipe), 'Patch directory should not be created')
+        result = runCmd('git rev-parse HEAD', cwd=tempsrcdir)
+        expectedlines = set(['SRCREV = "%s"\n' % result.output,
+                             '\n',
+                             'SRC_URI = "%s"\n' % git_uri,
+                             '\n'])
+        with open(bbappendfile, 'r') as f:
+            self.assertEqual(expectedlines, set(f.readlines()))
+        # Deleting isn't expected to work under these circumstances
+
+    @OETestID(1370)
+    def test_devtool_update_recipe_local_files(self):
+        """Check that local source files are copied over instead of patched"""
+        testrecipe = 'makedevs'
+        recipefile = get_bb_var('FILE', testrecipe)
+        # Setup srctree for modifying the recipe
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        # (don't bother with cleaning the recipe on teardown, we won't be
+        # building it)
+        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempdir))
+        # Check git repo
+        self._check_src_repo(tempdir)
+        # Try building just to ensure we haven't broken that
+        bitbake("%s" % testrecipe)
+        # Edit / commit local source
+        runCmd('echo "/* Foobar */" >> oe-local-files/makedevs.c', cwd=tempdir)
+        runCmd('echo "Foo" > oe-local-files/new-local', cwd=tempdir)
+        runCmd('echo "Bar" > new-file', cwd=tempdir)
+        runCmd('git add new-file', cwd=tempdir)
+        runCmd('git commit -m "Add new file"', cwd=tempdir)
+        self.add_command_to_tearDown('cd %s; git clean -fd .; git checkout .' %
+                                     os.path.dirname(recipefile))
+        runCmd('devtool update-recipe %s' % testrecipe)
+        expected_status = [(' M', '.*/%s$' % os.path.basename(recipefile)),
+                           (' M', '.*/makedevs/makedevs.c$'),
+                           ('??', '.*/makedevs/new-local$'),
+                           ('??', '.*/makedevs/0001-Add-new-file.patch$')]
+        self._check_repo_status(os.path.dirname(recipefile), expected_status)
+
+    @OETestID(1371)
+    def test_devtool_update_recipe_local_files_2(self):
+        """Check local source files support when oe-local-files is in Git"""
+        testrecipe = 'lzo'
+        recipefile = get_bb_var('FILE', testrecipe)
+        # Setup srctree for modifying the recipe
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempdir))
+        # Check git repo
+        self._check_src_repo(tempdir)
+        # Add oe-local-files to Git
+        runCmd('rm oe-local-files/.gitignore', cwd=tempdir)
+        runCmd('git add oe-local-files', cwd=tempdir)
+        runCmd('git commit -m "Add local sources"', cwd=tempdir)
+        # Edit / commit local sources
+        runCmd('echo "# Foobar" >> oe-local-files/acinclude.m4', cwd=tempdir)
+        runCmd('git commit -am "Edit existing file"', cwd=tempdir)
+        runCmd('git rm oe-local-files/run-ptest', cwd=tempdir)
+        runCmd('git commit -m"Remove file"', cwd=tempdir)
+        runCmd('echo "Foo" > oe-local-files/new-local', cwd=tempdir)
+        runCmd('git add oe-local-files/new-local', cwd=tempdir)
+        runCmd('git commit -m "Add new local file"', cwd=tempdir)
+        runCmd('echo "Gar" > new-file', cwd=tempdir)
+        runCmd('git add new-file', cwd=tempdir)
+        runCmd('git commit -m "Add new file"', cwd=tempdir)
+        self.add_command_to_tearDown('cd %s; git clean -fd .; git checkout .' %
+                                     os.path.dirname(recipefile))
+        # Checkout unmodified file to working copy -> devtool should still pick
+        # the modified version from HEAD
+        runCmd('git checkout HEAD^ -- oe-local-files/acinclude.m4', cwd=tempdir)
+        runCmd('devtool update-recipe %s' % testrecipe)
+        expected_status = [(' M', '.*/%s$' % os.path.basename(recipefile)),
+                           (' M', '.*/acinclude.m4$'),
+                           (' D', '.*/run-ptest$'),
+                           ('??', '.*/new-local$'),
+                           ('??', '.*/0001-Add-new-file.patch$')]
+        self._check_repo_status(os.path.dirname(recipefile), expected_status)
+
+    @OETestID(1627)
+    def test_devtool_update_recipe_local_files_3(self):
+        # First, modify the recipe
+        testrecipe = 'devtool-test-localonly'
+        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
+        recipefile = bb_vars['FILE']
+        src_uri = bb_vars['SRC_URI']
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        # (don't bother with cleaning the recipe on teardown, we won't be building it)
+        result = runCmd('devtool modify %s' % testrecipe)
+        # Modify one file
+        runCmd('echo "Another line" >> file2', cwd=os.path.join(self.workspacedir, 'sources', testrecipe, 'oe-local-files'))
+        self.add_command_to_tearDown('cd %s; rm %s/*; git checkout %s %s' % (os.path.dirname(recipefile), testrecipe, testrecipe, os.path.basename(recipefile)))
+        result = runCmd('devtool update-recipe %s' % testrecipe)
+        expected_status = [(' M', '.*/%s/file2$' % testrecipe)]
+        self._check_repo_status(os.path.dirname(recipefile), expected_status)
+
+    @OETestID(1629)
+    def test_devtool_update_recipe_local_patch_gz(self):
+        # First, modify the recipe
+        testrecipe = 'devtool-test-patch-gz'
+        if get_bb_var('DISTRO') == 'poky-tiny':
+            self.skipTest("The DISTRO 'poky-tiny' does not provide the dependencies needed by %s" % testrecipe)
+        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
+        recipefile = bb_vars['FILE']
+        src_uri = bb_vars['SRC_URI']
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        # (don't bother with cleaning the recipe on teardown, we won't be building it)
+        result = runCmd('devtool modify %s' % testrecipe)
+        # Modify one file
+        srctree = os.path.join(self.workspacedir, 'sources', testrecipe)
+        runCmd('echo "Another line" >> README', cwd=srctree)
+        runCmd('git commit -a --amend --no-edit', cwd=srctree)
+        self.add_command_to_tearDown('cd %s; rm %s/*; git checkout %s %s' % (os.path.dirname(recipefile), testrecipe, testrecipe, os.path.basename(recipefile)))
+        result = runCmd('devtool update-recipe %s' % testrecipe)
+        expected_status = [(' M', '.*/%s/readme.patch.gz$' % testrecipe)]
+        self._check_repo_status(os.path.dirname(recipefile), expected_status)
+        patch_gz = os.path.join(os.path.dirname(recipefile), testrecipe, 'readme.patch.gz')
+        result = runCmd('file %s' % patch_gz)
+        if 'gzip compressed data' not in result.output:
+            self.fail('New patch file is not gzipped - file reports:\n%s' % result.output)
+
+    @OETestID(1628)
+    def test_devtool_update_recipe_local_files_subdir(self):
+        # Try devtool update-recipe on a recipe that has a file with subdir= set in
+        # SRC_URI such that it overwrites a file that was in an archive that
+        # was also in SRC_URI
+        # First, modify the recipe
+        testrecipe = 'devtool-test-subdir'
+        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
+        recipefile = bb_vars['FILE']
+        src_uri = bb_vars['SRC_URI']
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        # (don't bother with cleaning the recipe on teardown, we won't be building it)
+        result = runCmd('devtool modify %s' % testrecipe)
+        testfile = os.path.join(self.workspacedir, 'sources', testrecipe, 'testfile')
+        self.assertExists(testfile, 'Extracted source could not be found')
+        with open(testfile, 'r') as f:
+            contents = f.read().rstrip()
+        self.assertEqual(contents, 'Modified version', 'File has apparently not been overwritten as it should have been')
+        # Test devtool update-recipe without modifying any files
+        self.add_command_to_tearDown('cd %s; rm %s/*; git checkout %s %s' % (os.path.dirname(recipefile), testrecipe, testrecipe, os.path.basename(recipefile)))
+        result = runCmd('devtool update-recipe %s' % testrecipe)
+        expected_status = []
+        self._check_repo_status(os.path.dirname(recipefile), expected_status)
+
+    @OETestID(1163)
+    def test_devtool_extract(self):
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        # Try devtool extract
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        result = runCmd('devtool extract matchbox-terminal %s' % tempdir)
+        self.assertExists(os.path.join(tempdir, 'Makefile.am'), 'Extracted source could not be found')
+        self._check_src_repo(tempdir)
+
+    @OETestID(1379)
+    def test_devtool_extract_virtual(self):
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        # Try devtool extract
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        result = runCmd('devtool extract virtual/make %s' % tempdir)
+        self.assertExists(os.path.join(tempdir, 'Makefile.am'), 'Extracted source could not be found')
+        self._check_src_repo(tempdir)
+
+    @OETestID(1168)
+    def test_devtool_reset_all(self):
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        testrecipe1 = 'mdadm'
+        testrecipe2 = 'cronie'
+        result = runCmd('devtool modify -x %s %s' % (testrecipe1, os.path.join(tempdir, testrecipe1)))
+        result = runCmd('devtool modify -x %s %s' % (testrecipe2, os.path.join(tempdir, testrecipe2)))
+        result = runCmd('devtool build %s' % testrecipe1)
+        result = runCmd('devtool build %s' % testrecipe2)
+        stampprefix1 = get_bb_var('STAMP', testrecipe1)
+        self.assertTrue(stampprefix1, 'Unable to get STAMP value for recipe %s' % testrecipe1)
+        stampprefix2 = get_bb_var('STAMP', testrecipe2)
+        self.assertTrue(stampprefix2, 'Unable to get STAMP value for recipe %s' % testrecipe2)
+        result = runCmd('devtool reset -a')
+        self.assertIn(testrecipe1, result.output)
+        self.assertIn(testrecipe2, result.output)
+        result = runCmd('devtool status')
+        self.assertNotIn(testrecipe1, result.output)
+        self.assertNotIn(testrecipe2, result.output)
+        matches1 = glob.glob(stampprefix1 + '*')
+        self.assertFalse(matches1, 'Stamp files exist for recipe %s that should have been cleaned' % testrecipe1)
+        matches2 = glob.glob(stampprefix2 + '*')
+        self.assertFalse(matches2, 'Stamp files exist for recipe %s that should have been cleaned' % testrecipe2)
+
+    @OETestID(1272)
+    def test_devtool_deploy_target(self):
+        # NOTE: Whilst this test would seemingly be better placed as a runtime test,
+        # unfortunately the runtime tests run under bitbake and you can't run
+        # devtool within bitbake (since devtool needs to run bitbake itself).
+        # Additionally we are testing build-time functionality as well, so
+        # really this has to be done as an oe-selftest test.
+        #
+        # Check preconditions
+        machine = get_bb_var('MACHINE')
+        if not machine.startswith('qemu'):
+            self.skipTest('This test only works with qemu machines')
+        if not os.path.exists('/etc/runqemu-nosudo'):
+            self.skipTest('You must set up tap devices with scripts/runqemu-gen-tapdevs before running this test')
+        result = runCmd('PATH="$PATH:/sbin:/usr/sbin" ip tuntap show', ignore_status=True)
+        if result.status != 0:
+            result = runCmd('PATH="$PATH:/sbin:/usr/sbin" ifconfig -a', ignore_status=True)
+            if result.status != 0:
+                self.skipTest('Failed to determine if tap devices exist with ifconfig or ip: %s' % result.output)
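+        # The else clause of this for loop only runs if no 'tap' interface line was found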
+        for line in result.output.splitlines():
+            if line.startswith('tap'):
+                break
+        else:
+            self.skipTest('No tap devices found - you must set up tap devices with scripts/runqemu-gen-tapdevs before running this test')
+        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
+        # Definitions
+        testrecipe = 'mdadm'
+        testfile = '/sbin/mdadm'
+        testimage = 'oe-selftest-image'
+        testcommand = '/sbin/mdadm --help'
+        # Build an image to run
+        bitbake("%s qemu-native qemu-helper-native" % testimage)
+        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
+        self.add_command_to_tearDown('bitbake -c clean %s' % testimage)
+        self.add_command_to_tearDown('rm -f %s/%s*' % (deploy_dir_image, testimage))
+        # Clean recipe so the first deploy will fail
+        bitbake("%s -c clean" % testrecipe)
+        # Try devtool modify
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        self.add_command_to_tearDown('bitbake -c clean %s' % testrecipe)
+        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempdir))
+        # Test that deploy-target at this point fails (properly)
+        result = runCmd('devtool deploy-target -n %s root@localhost' % testrecipe, ignore_status=True)
+        self.assertNotEqual(result.status, 0, 'devtool deploy-target should have failed, output: %s' % result.output)
+        self.assertNotIn('Traceback', result.output, 'devtool deploy-target should have failed with a proper error, not a traceback, output: %s' % result.output)
+        result = runCmd('devtool build %s' % testrecipe)
+        # First try a dry-run of deploy-target
+        result = runCmd('devtool deploy-target -n %s root@localhost' % testrecipe)
+        self.assertIn('  %s' % testfile, result.output)
+        # Boot the image
+        with runqemu(testimage) as qemu:
+            # Now really test deploy-target
+            result = runCmd('devtool deploy-target -c %s root@%s' % (testrecipe, qemu.ip))
+            # Run a test command to see if it was installed properly
+            sshargs = '-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'
+            result = runCmd('ssh %s root@%s %s' % (sshargs, qemu.ip, testcommand))
+            # Check if it deployed all of the files with the right ownership/perms
+            # First look on the host - need to do this under pseudo to get the correct ownership/perms
+            bb_vars = get_bb_vars(['D', 'FAKEROOTENV', 'FAKEROOTCMD'], testrecipe)
+            installdir = bb_vars['D']
+            fakerootenv = bb_vars['FAKEROOTENV']
+            fakerootcmd = bb_vars['FAKEROOTCMD']
+            result = runCmd('%s %s find . -type f -exec ls -l {} \;' % (fakerootenv, fakerootcmd), cwd=installdir)
+            filelist1 = self._process_ls_output(result.output)
+
+            # Now look on the target
+            tempdir2 = tempfile.mkdtemp(prefix='devtoolqa')
+            self.track_for_cleanup(tempdir2)
+            tmpfilelist = os.path.join(tempdir2, 'files.txt')
+            with open(tmpfilelist, 'w') as f:
+                for line in filelist1:
+                    splitline = line.split()
+                    f.write(splitline[-1] + '\n')
+            result = runCmd('cat %s | ssh -q %s root@%s \'xargs ls -l\'' % (tmpfilelist, sshargs, qemu.ip))
+            filelist2 = self._process_ls_output(result.output)
+            filelist1.sort(key=lambda item: item.split()[-1])
+            filelist2.sort(key=lambda item: item.split()[-1])
+            self.assertEqual(filelist1, filelist2)
+            # Test undeploy-target
+            result = runCmd('devtool undeploy-target -c %s root@%s' % (testrecipe, qemu.ip))
+            result = runCmd('ssh %s root@%s %s' % (sshargs, qemu.ip, testcommand), ignore_status=True)
+            self.assertNotEqual(result.status, 0, 'undeploy-target did not remove command as it should have')
+
+    @OETestID(1366)
+    def test_devtool_build_image(self):
+        """Test devtool build-image plugin"""
+        # Check preconditions
+        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
+        image = 'core-image-minimal'
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        self.add_command_to_tearDown('bitbake -c clean %s' % image)
+        bitbake('%s -c clean' % image)
+        # Add target and native recipes to workspace
+        recipes = ['mdadm', 'parted-native']
+        for recipe in recipes:
+            tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+            self.track_for_cleanup(tempdir)
+            self.add_command_to_tearDown('bitbake -c clean %s' % recipe)
+            runCmd('devtool modify %s -x %s' % (recipe, tempdir))
+        # Try to build image
+        result = runCmd('devtool build-image %s' % image)
+        self.assertEqual(result.status, 0, 'devtool build-image failed')
+        # Check if image contains expected packages
+        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
+        image_link_name = get_bb_var('IMAGE_LINK_NAME', image)
+        reqpkgs = [item for item in recipes if not item.endswith('-native')]
+        with open(os.path.join(deploy_dir_image, image_link_name + '.manifest'), 'r') as f:
+            for line in f:
+                splitval = line.split()
+                if splitval:
+                    pkg = splitval[0]
+                    if pkg in reqpkgs:
+                        reqpkgs.remove(pkg)
+        if reqpkgs:
+            self.fail('The following packages were not present in the image as expected: %s' % ', '.join(reqpkgs))
+
+    @OETestID(1367)
+    def test_devtool_upgrade(self):
+        # Check preconditions
+        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        # Check parameters
+        result = runCmd('devtool upgrade -h')
+        for param in 'recipename srctree --version -V --branch -b --keep-temp --no-patch'.split():
+            self.assertIn(param, result.output)
+        # For the moment, we are using a real recipe.
+        recipe = 'devtool-upgrade-test1'
+        version = '1.6.0'
+        oldrecipefile = get_bb_var('FILE', recipe)
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        # Check that recipe is not already under devtool control
+        result = runCmd('devtool status')
+        self.assertNotIn(recipe, result.output)
+        # Check upgrade. The code does not check whether the new PV is older or newer than the
+        # current PV, so we may actually be downgrading instead of upgrading.
+        result = runCmd('devtool upgrade %s %s -V %s' % (recipe, tempdir, version))
+        # Check if srctree at least is populated
+        self.assertTrue(len(os.listdir(tempdir)) > 0, 'srctree (%s) should be populated with new (%s) source code' % (tempdir, version))
+        # Check new recipe subdirectory is present
+        self.assertExists(os.path.join(self.workspacedir, 'recipes', recipe, '%s-%s' % (recipe, version)), 'Recipe folder should exist')
+        # Check new recipe file is present
+        newrecipefile = os.path.join(self.workspacedir, 'recipes', recipe, '%s_%s.bb' % (recipe, version))
+        self.assertExists(newrecipefile, 'Recipe file should exist after upgrade')
+        # Check devtool status and make sure recipe is present
+        result = runCmd('devtool status')
+        self.assertIn(recipe, result.output)
+        self.assertIn(tempdir, result.output)
+        # Check recipe got changed as expected
+        with open(oldrecipefile + '.upgraded', 'r') as f:
+            desiredlines = f.readlines()
+        with open(newrecipefile, 'r') as f:
+            newlines = f.readlines()
+        self.assertEqual(desiredlines, newlines)
+        # Check devtool reset recipe
+        result = runCmd('devtool reset %s -n' % recipe)
+        result = runCmd('devtool status')
+        self.assertNotIn(recipe, result.output)
+        self.assertNotExists(os.path.join(self.workspacedir, 'recipes', recipe), 'Recipe directory should not exist after resetting')
+
+    @OETestID(1433)
+    def test_devtool_upgrade_git(self):
+        # Check preconditions
+        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        recipe = 'devtool-upgrade-test2'
+        commit = '6cc6077a36fe2648a5f993fe7c16c9632f946517'
+        oldrecipefile = get_bb_var('FILE', recipe)
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        # Check that recipe is not already under devtool control
+        result = runCmd('devtool status')
+        self.assertNotIn(recipe, result.output)
+        # Check upgrade
+        result = runCmd('devtool upgrade %s %s -S %s' % (recipe, tempdir, commit))
+        # Check if srctree at least is populated
+        self.assertTrue(len(os.listdir(tempdir)) > 0, 'srctree (%s) should be populated with new (%s) source code' % (tempdir, commit))
+        # Check new recipe file is present
+        newrecipefile = os.path.join(self.workspacedir, 'recipes', recipe, os.path.basename(oldrecipefile))
+        self.assertExists(newrecipefile, 'Recipe file should exist after upgrade')
+        # Check devtool status and make sure recipe is present
+        result = runCmd('devtool status')
+        self.assertIn(recipe, result.output)
+        self.assertIn(tempdir, result.output)
+        # Check recipe got changed as expected
+        with open(oldrecipefile + '.upgraded', 'r') as f:
+            desiredlines = f.readlines()
+        with open(newrecipefile, 'r') as f:
+            newlines = f.readlines()
+        self.assertEqual(desiredlines, newlines)
+        # Check devtool reset recipe
+        result = runCmd('devtool reset %s -n' % recipe)
+        result = runCmd('devtool status')
+        self.assertNotIn(recipe, result.output)
+        self.assertNotExists(os.path.join(self.workspacedir, 'recipes', recipe), 'Recipe directory should not exist after resetting')
+
+    @OETestID(1352)
+    def test_devtool_layer_plugins(self):
+        """Test that devtool can use plugins from other layers.
+
+        This test executes the selftest-reverse command from meta-selftest."""
+
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+
+        s = "Microsoft Made No Profit From Anyone's Zunes Yo"
+        result = runCmd("devtool --quiet selftest-reverse \"%s\"" % s)
+        self.assertEqual(result.output, s[::-1])
+
+    def _copy_file_with_cleanup(self, srcfile, basedstdir, *paths):
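+        """Copy srcfile into basedstdir joined with paths, tracking any newly created directories and the copied file for cleanup."""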
+        dstdir = basedstdir
+        self.assertExists(dstdir)
+        for p in paths:
+            dstdir = os.path.join(dstdir, p)
+            if not os.path.exists(dstdir):
+                os.makedirs(dstdir)
+                self.track_for_cleanup(dstdir)
+        dstfile = os.path.join(dstdir, os.path.basename(srcfile))
+        if srcfile != dstfile:
+            shutil.copy(srcfile, dstfile)
+            self.track_for_cleanup(dstfile)
+
+    @OETestID(1625)
+    def test_devtool_load_plugin(self):
+        """Test that devtool loads only the first found plugin in BBPATH."""
+
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+
+        devtool = runCmd("which devtool")
+        fromname = runCmd("devtool --quiet pluginfile")
+        srcfile = fromname.output
+        bbpath = get_bb_var('BBPATH')
+        searchpath = bbpath.split(':') + [os.path.dirname(devtool.output)]
+        plugincontent = []
+        with open(srcfile) as fh:
+            plugincontent = fh.readlines()
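+        # Copy the plugin into every directory in the search path and check that devtool only loads the first one found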
+        try:
+            self.assertIn('meta-selftest', srcfile, 'wrong bbpath plugin found')
+            for path in searchpath:
+                self._copy_file_with_cleanup(srcfile, path, 'lib', 'devtool')
+            result = runCmd("devtool --quiet count")
+            self.assertEqual(result.output, '1')
+            result = runCmd("devtool --quiet multiloaded")
+            self.assertEqual(result.output, "no")
+            for path in searchpath:
+                result = runCmd("devtool --quiet bbdir")
+                self.assertEqual(result.output, path)
+                os.unlink(os.path.join(result.output, 'lib', 'devtool', 'bbpath.py'))
+        finally:
+            with open(srcfile, 'w') as fh:
+                fh.writelines(plugincontent)
+
+    def _setup_test_devtool_finish_upgrade(self):
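+        """Shared setup: run devtool upgrade on devtool-upgrade-test1 and commit a source change in the new srctree."""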
+        # Check preconditions
+        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        # Use a "real" recipe from meta-selftest
+        recipe = 'devtool-upgrade-test1'
+        oldversion = '1.5.3'
+        newversion = '1.6.0'
+        oldrecipefile = get_bb_var('FILE', recipe)
+        recipedir = os.path.dirname(oldrecipefile)
+        result = runCmd('git status --porcelain .', cwd=recipedir)
+        if result.output.strip():
+            self.fail('Recipe directory for %s contains uncommitted changes' % recipe)
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        # Check that recipe is not already under devtool control
+        result = runCmd('devtool status')
+        self.assertNotIn(recipe, result.output)
+        # Do the upgrade
+        result = runCmd('devtool upgrade %s %s -V %s' % (recipe, tempdir, newversion))
+        # Check devtool status and make sure recipe is present
+        result = runCmd('devtool status')
+        self.assertIn(recipe, result.output)
+        self.assertIn(tempdir, result.output)
+        # Make a change to the source
+        result = runCmd('sed -i \'/^#include "pv.h"/a \\/* Here is a new comment *\\/\' src/pv/number.c', cwd=tempdir)
+        result = runCmd('git status --porcelain', cwd=tempdir)
+        self.assertIn('M src/pv/number.c', result.output)
+        result = runCmd('git commit src/pv/number.c -m "Add a comment to the code"', cwd=tempdir)
+        # Check if patch is there
+        recipedir = os.path.dirname(oldrecipefile)
+        olddir = os.path.join(recipedir, recipe + '-' + oldversion)
+        patchfn = '0001-Add-a-note-line-to-the-quick-reference.patch'
+        self.assertExists(os.path.join(olddir, patchfn), 'Original patch file does not exist')
+        return recipe, oldrecipefile, recipedir, olddir, newversion, patchfn
+
+    @OETestID(1623)
+    def test_devtool_finish_upgrade_origlayer(self):
+        recipe, oldrecipefile, recipedir, olddir, newversion, patchfn = self._setup_test_devtool_finish_upgrade()
+        # Ensure the recipe is where we think it should be (so that cleanup doesn't trash things)
+        self.assertIn('/meta-selftest/', recipedir)
+        # Try finish to the original layer
+        self.add_command_to_tearDown('rm -rf %s ; cd %s ; git checkout %s' % (recipedir, os.path.dirname(recipedir), recipedir))
+        result = runCmd('devtool finish %s meta-selftest' % recipe)
+        result = runCmd('devtool status')
+        self.assertNotIn(recipe, result.output, 'Recipe should have been reset by finish but wasn\'t')
+        self.assertNotExists(os.path.join(self.workspacedir, 'recipes', recipe), 'Recipe directory should not exist after finish')
+        self.assertNotExists(oldrecipefile, 'Old recipe file should have been deleted but wasn\'t')
+        self.assertNotExists(os.path.join(olddir, patchfn), 'Old patch file should have been deleted but wasn\'t')
+        newrecipefile = os.path.join(recipedir, '%s_%s.bb' % (recipe, newversion))
+        newdir = os.path.join(recipedir, recipe + '-' + newversion)
+        self.assertExists(newrecipefile, 'New recipe file should have been copied into existing layer but wasn\'t')
+        self.assertExists(os.path.join(newdir, patchfn), 'Patch file should have been copied into new directory but wasn\'t')
+        self.assertExists(os.path.join(newdir, '0002-Add-a-comment-to-the-code.patch'), 'New patch file should have been created but wasn\'t')
+
+    @OETestID(1624)
+    def test_devtool_finish_upgrade_otherlayer(self):
+        recipe, oldrecipefile, recipedir, olddir, newversion, patchfn = self._setup_test_devtool_finish_upgrade()
+        # Ensure the recipe is where we think it should be (so that cleanup doesn't trash things)
+        self.assertIn('/meta-selftest/', recipedir)
+        # Try finish to a different layer - should create a bbappend
+        # This cleanup isn't strictly necessary, but do it anyway in case something goes wrong and writes here
+        self.add_command_to_tearDown('rm -rf %s ; cd %s ; git checkout %s' % (recipedir, os.path.dirname(recipedir), recipedir))
+        oe_core_dir = os.path.join(get_bb_var('COREBASE'), 'meta')
+        newrecipedir = os.path.join(oe_core_dir, 'recipes-test', 'devtool')
+        newrecipefile = os.path.join(newrecipedir, '%s_%s.bb' % (recipe, newversion))
+        self.track_for_cleanup(newrecipedir)
+        result = runCmd('devtool finish %s oe-core' % recipe)
+        result = runCmd('devtool status')
+        self.assertNotIn(recipe, result.output, 'Recipe should have been reset by finish but wasn\'t')
+        self.assertNotExists(os.path.join(self.workspacedir, 'recipes', recipe), 'Recipe directory should not exist after finish')
+        self.assertExists(oldrecipefile, 'Old recipe file should not have been deleted')
+        self.assertExists(os.path.join(olddir, patchfn), 'Old patch file should not have been deleted')
+        newdir = os.path.join(newrecipedir, recipe + '-' + newversion)
+        self.assertExists(newrecipefile, 'New recipe file should have been copied into existing layer but wasn\'t')
+        self.assertExists(os.path.join(newdir, patchfn), 'Patch file should have been copied into new directory but wasn\'t')
+        self.assertExists(os.path.join(newdir, '0002-Add-a-comment-to-the-code.patch'), 'New patch file should have been created but wasn\'t')
+
+    def _setup_test_devtool_finish_modify(self):
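+        """Shared setup: run devtool modify on mdadm and commit a source change in the workspace."""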
+        # Check preconditions
+        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
+        # Try modifying a recipe
+        self.track_for_cleanup(self.workspacedir)
+        recipe = 'mdadm'
+        oldrecipefile = get_bb_var('FILE', recipe)
+        recipedir = os.path.dirname(oldrecipefile)
+        result = runCmd('git status --porcelain .', cwd=recipedir)
+        if result.output.strip():
+            self.fail('Recipe directory for %s contains uncommitted changes' % recipe)
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        self.track_for_cleanup(tempdir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        result = runCmd('devtool modify %s %s' % (recipe, tempdir))
+        self.assertExists(os.path.join(tempdir, 'Makefile'), 'Extracted source could not be found')
+        # Test devtool status
+        result = runCmd('devtool status')
+        self.assertIn(recipe, result.output)
+        self.assertIn(tempdir, result.output)
+        # Make a change to the source
+        result = runCmd('sed -i \'/^#include "mdadm.h"/a \\/* Here is a new comment *\\/\' maps.c', cwd=tempdir)
+        result = runCmd('git status --porcelain', cwd=tempdir)
+        self.assertIn('M maps.c', result.output)
+        result = runCmd('git commit maps.c -m "Add a comment to the code"', cwd=tempdir)
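+        # Locate the recipe's files directory (the first subdirectory alongside the recipe file)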
+        for entry in os.listdir(recipedir):
+            filesdir = os.path.join(recipedir, entry)
+            if os.path.isdir(filesdir):
+                break
+        else:
+            self.fail('Unable to find recipe files directory for %s' % recipe)
+        return recipe, oldrecipefile, recipedir, filesdir
+
+    @OETestID(1621)
+    def test_devtool_finish_modify_origlayer(self):
+        recipe, oldrecipefile, recipedir, filesdir = self._setup_test_devtool_finish_modify()
+        # Ensure the recipe is where we think it should be (so that cleanup doesn't trash things)
+        self.assertIn('/meta/', recipedir)
+        # Try finish to the original layer
+        self.add_command_to_tearDown('rm -rf %s ; cd %s ; git checkout %s' % (recipedir, os.path.dirname(recipedir), recipedir))
+        result = runCmd('devtool finish %s meta' % recipe)
+        result = runCmd('devtool status')
+        self.assertNotIn(recipe, result.output, 'Recipe should have been reset by finish but wasn\'t')
+        self.assertNotExists(os.path.join(self.workspacedir, 'recipes', recipe), 'Recipe directory should not exist after finish')
+        expected_status = [(' M', '.*/%s$' % os.path.basename(oldrecipefile)),
+                           ('??', '.*/.*-Add-a-comment-to-the-code.patch$')]
+        self._check_repo_status(recipedir, expected_status)
+
+    @OETestID(1622)
+    def test_devtool_finish_modify_otherlayer(self):
+        recipe, oldrecipefile, recipedir, filesdir = self._setup_test_devtool_finish_modify()
+        # Ensure the recipe is where we think it should be (so that cleanup doesn't trash things)
+        self.assertIn('/meta/', recipedir)
+        relpth = os.path.relpath(recipedir, os.path.join(get_bb_var('COREBASE'), 'meta'))
+        appenddir = os.path.join(get_test_layer(), relpth)
+        self.track_for_cleanup(appenddir)
+        # Try finish to a different layer - should create a bbappend
+        self.add_command_to_tearDown('rm -rf %s ; cd %s ; git checkout %s' % (recipedir, os.path.dirname(recipedir), recipedir))
+        result = runCmd('devtool finish %s meta-selftest' % recipe)
+        result = runCmd('devtool status')
+        self.assertNotIn(recipe, result.output, 'Recipe should have been reset by finish but wasn\'t')
+        self.assertNotExists(os.path.join(self.workspacedir, 'recipes', recipe), 'Recipe directory should not exist after finish')
+        result = runCmd('git status --porcelain .', cwd=recipedir)
+        if result.output.strip():
+            self.fail('Recipe directory for %s contains the following unexpected changes after finish:\n%s' % (recipe, result.output.strip()))
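+        # Derive the versionless bbappend name expected in the other layer, e.g. mdadm_%.bbappend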
+        recipefn = os.path.splitext(os.path.basename(oldrecipefile))[0]
+        recipefn = recipefn.split('_')[0] + '_%'
+        appendfile = os.path.join(appenddir, recipefn + '.bbappend')
+        self.assertExists(appendfile, 'bbappend %s should have been created but wasn\'t' % appendfile)
+        newdir = os.path.join(appenddir, recipe)
+        files = os.listdir(newdir)
+        foundpatch = None
+        for fn in files:
+            if fnmatch.fnmatch(fn, '*-Add-a-comment-to-the-code.patch'):
+                foundpatch = fn
+        if not foundpatch:
+            self.fail('No patch file created next to bbappend')
+        files.remove(foundpatch)
+        if files:
+            self.fail('Unexpected file(s) copied next to bbappend: %s' % ', '.join(files))
+
+    @OETestID(1626)
+    def test_devtool_rename(self):
+        # Check preconditions
+        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+
+        # First run devtool add
+        # We already have this recipe in OE-Core, but that doesn't matter
+        recipename = 'i2c-tools'
+        recipever = '3.1.2'
+        recipefile = os.path.join(self.workspacedir, 'recipes', recipename, '%s_%s.bb' % (recipename, recipever))
+        url = 'http://downloads.yoctoproject.org/mirror/sources/i2c-tools-%s.tar.bz2' % recipever
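+        # Helper: add the recipe from the tarball URL with devtool and check the generated recipe contents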
+        def add_recipe():
+            result = runCmd('devtool add %s' % url)
+            self.assertExists(recipefile, 'Expected recipe file not created')
+            self.assertExists(os.path.join(self.workspacedir, 'sources', recipename), 'Source directory not created')
+            checkvars = {}
+            checkvars['S'] = None
+            checkvars['SRC_URI'] = url.replace(recipever, '${PV}')
+            self._test_recipe_contents(recipefile, checkvars, [])
+        add_recipe()
+        # Now rename it - change both name and version
+        newrecipename = 'mynewrecipe'
+        newrecipever = '456'
+        newrecipefile = os.path.join(self.workspacedir, 'recipes', newrecipename, '%s_%s.bb' % (newrecipename, newrecipever))
+        result = runCmd('devtool rename %s %s -V %s' % (recipename, newrecipename, newrecipever))
+        self.assertExists(newrecipefile, 'Recipe file not renamed')
+        self.assertNotExists(os.path.join(self.workspacedir, 'recipes', recipename), 'Old recipe directory still exists')
+        newsrctree = os.path.join(self.workspacedir, 'sources', newrecipename)
+        self.assertExists(newsrctree, 'Source directory not renamed')
+        checkvars = {}
+        checkvars['S'] = '${WORKDIR}/%s-%s' % (recipename, recipever)
+        checkvars['SRC_URI'] = url
+        self._test_recipe_contents(newrecipefile, checkvars, [])
+        # Try again - change just name this time
+        result = runCmd('devtool reset -n %s' % newrecipename)
+        shutil.rmtree(newsrctree)
+        add_recipe()
+        newrecipefile = os.path.join(self.workspacedir, 'recipes', newrecipename, '%s_%s.bb' % (newrecipename, recipever))
+        result = runCmd('devtool rename %s %s' % (recipename, newrecipename))
+        self.assertExists(newrecipefile, 'Recipe file not renamed')
+        self.assertNotExists(os.path.join(self.workspacedir, 'recipes', recipename), 'Old recipe directory still exists')
+        self.assertExists(os.path.join(self.workspacedir, 'sources', newrecipename), 'Source directory not renamed')
+        checkvars = {}
+        checkvars['S'] = '${WORKDIR}/%s-${PV}' % recipename
+        checkvars['SRC_URI'] = url.replace(recipever, '${PV}')
+        self._test_recipe_contents(newrecipefile, checkvars, [])
+        # Try again - change just version this time
+        result = runCmd('devtool reset -n %s' % newrecipename)
+        shutil.rmtree(newsrctree)
+        add_recipe()
+        newrecipefile = os.path.join(self.workspacedir, 'recipes', recipename, '%s_%s.bb' % (recipename, newrecipever))
+        result = runCmd('devtool rename %s -V %s' % (recipename, newrecipever))
+        self.assertExists(newrecipefile, 'Recipe file not renamed')
+        self.assertExists(os.path.join(self.workspacedir, 'sources', recipename), 'Source directory no longer exists')
+        checkvars = {}
+        checkvars['S'] = '${WORKDIR}/${BPN}-%s' % recipever
+        checkvars['SRC_URI'] = url
+        self._test_recipe_contents(newrecipefile, checkvars, [])
+
+    @OETestID(1577)
+    def test_devtool_virtual_kernel_modify(self):
+        """
+        Summary:        The purpose of this test case is to verify that
+                        devtool modify works correctly when building
+                        the kernel.
+        Dependencies:   NA
+        Steps:          1. Build kernel with bitbake.
+                        2. Save the config file generated.
+                        3. Clean the environment.
+                        4. Use `devtool modify virtual/kernel` to validate following:
+                           4.1 The source is checked out correctly.
+                           4.2 The resulting configuration is the same as
+                               what was get on step 2.
+                           4.3 The Kernel can be build correctly.
+                           4.4 Changes made on the source are reflected on the
+                               subsequent builds.
+                           4.5 Changes on the configuration are reflected on the
+                               subsequent builds
+         Expected:       devtool modify is able to checkout the source of the kernel
+                         and modification to the source and configurations are reflected
+                         when building the kernel.
+         """
+        kernel_provider = get_bb_var('PREFERRED_PROVIDER_virtual/kernel')
+        # Clean up the environment
+        bitbake('%s -c clean' % kernel_provider)
+        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
+        tempdir_cfg = tempfile.mkdtemp(prefix='config_qa')
+        self.track_for_cleanup(tempdir)
+        self.track_for_cleanup(tempdir_cfg)
+        self.track_for_cleanup(self.workspacedir)
+        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
+        self.add_command_to_tearDown('bitbake -c clean %s' % kernel_provider)
+        #Step 1
+        #Only the config file is generated here, instead of building the whole kernel,
+        #to reduce the execution time of this test case.
+        bitbake('%s -c configure' % kernel_provider)
+        bbconfig = os.path.join(get_bb_var('B', kernel_provider),'.config')
+        #Step 2
+        runCmd('cp %s %s' % (bbconfig, tempdir_cfg))
+        self.assertExists(os.path.join(tempdir_cfg, '.config'), 'Could not copy .config file from kernel')
+
+        tmpconfig = os.path.join(tempdir_cfg, '.config')
+        #Step 3
+        bitbake('%s -c clean' % kernel_provider)
+        #Step 4.1
+        runCmd('devtool modify virtual/kernel -x %s' % tempdir)
+        self.assertExists(os.path.join(tempdir, 'Makefile'), 'Extracted source could not be found')
+        #Step 4.2
+        configfile = os.path.join(tempdir,'.config')
+        diff = runCmd('diff %s %s' % (tmpconfig, configfile))
+        self.assertEqual(0,diff.status,'Kernel .config file is not the same using bitbake and devtool')
+        #Step 4.3
+        #NOTE: virtual/kernel is mapped to kernel_provider
+        result = runCmd('devtool build %s' % kernel_provider)
+        self.assertEqual(0,result.status,'Cannot build kernel using `devtool build`')
+        kernelfile = os.path.join(get_bb_var('KBUILD_OUTPUT', kernel_provider), 'vmlinux')
+        self.assertExists(kernelfile, 'Kernel was not built correctly')
+
+        #Modify the kernel source
+        modfile = os.path.join(tempdir,'arch/x86/boot/header.S')
+        modstring = "Use a boot loader. Devtool testing."
+        modapplied = runCmd("sed -i 's/Use a boot loader./%s/' %s" % (modstring, modfile))
+        self.assertEqual(0,modapplied.status,'Modification to %s in the kernel source failed' % modfile)
+        #Modify the configuration
+        codeconfigfile = os.path.join(tempdir,'.config.new')
+        modconfopt = "CONFIG_SG_POOL=n"
+        modconf = runCmd("sed -i 's/CONFIG_SG_POOL=y/%s/' %s" % (modconfopt, codeconfigfile))
+        self.assertEqual(0,modconf.status,'Modification to %s failed' % codeconfigfile)
+        #Build the kernel again with devtool
+        rebuild = runCmd('devtool build %s' % kernel_provider)
+        self.assertEqual(0,rebuild.status,'Failed to build the kernel after modifying the source and config')
+        #Step 4.4
+        bzimagename = 'bzImage-' + get_bb_var('KERNEL_VERSION_NAME', kernel_provider)
+        bzimagefile = os.path.join(get_bb_var('D', kernel_provider),'boot', bzimagename)
+        checkmodcode = runCmd("grep '%s' %s" % (modstring, bzimagefile))
+        self.assertEqual(0,checkmodcode.status,'Modification to the kernel source failed')
+        #Step 4.5
+        checkmodconfg = runCmd("grep %s %s" % (modconfopt, codeconfigfile))
+        self.assertEqual(0,checkmodconfg.status,'Modification to configuration file failed')
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/distrodata.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/distrodata.py
new file mode 100644
index 0000000..12540ad
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/distrodata.py
@@ -0,0 +1,42 @@
+import os
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars
+from oeqa.utils.decorators import testcase
+from oeqa.utils.ftools import write_file
+from oeqa.core.decorator.oeid import OETestID
+
+class Distrodata(OESelftestTestCase):
+
+    @classmethod
+    def setUpClass(cls):
+        super(Distrodata, cls).setUpClass()
+
+    @OETestID(1902)
+    def test_checkpkg(self):
+        """
+        Summary:     Test that upstream version checks do not regress
+        Expected:    Upstream version checks should succeed except for the recipes listed in the exception list.
+        Product:     oe-core
+        Author:      Alexander Kanavin <alexander.kanavin@intel.com>
+        """
+        feature = 'INHERIT += "distrodata"\n'
+        feature += 'LICENSE_FLAGS_WHITELIST += " commercial"\n'
+
+        self.write_config(feature)
+        bitbake('-c checkpkg world')
+        checkpkg_result = open(os.path.join(get_bb_var("LOG_DIR"), "checkpkg.csv")).readlines()[1:]
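+        # pkg_data[0] is the recipe name; pkg_data[11] holds the upstream check status field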
+        regressed_failures = [pkg_data[0] for pkg_data in [pkg_line.split('\t') for pkg_line in checkpkg_result] if pkg_data[11] == 'UNKNOWN_BROKEN']
+        regressed_successes = [pkg_data[0] for pkg_data in [pkg_line.split('\t') for pkg_line in checkpkg_result] if pkg_data[11] == 'KNOWN_BROKEN']
+        msg = ""
+        if len(regressed_failures) > 0:
+            msg = msg + """
+The following packages failed upstream version checks. Please fix them using UPSTREAM_CHECK_URI/UPSTREAM_CHECK_REGEX
+(when using tarballs) or UPSTREAM_CHECK_GITTAGREGEX (when using git). If an upstream version check cannot be performed
+(for example, if upstream does not use git tags), you can set UPSTREAM_VERSION_UNKNOWN to '1' in the recipe to acknowledge
+that the check cannot be performed.
+""" + "\n".join(regressed_failures)
+        if len(regressed_successes) > 0:
+            msg = msg + """
+The following packages have been checked successfully for upstream versions,
+but their recipes claim otherwise by setting UPSTREAM_VERSION_UNKNOWN. Please remove that line from the recipes.
+""" + "\n".join(regressed_successes)
+        self.assertTrue(len(regressed_failures) == 0 and len(regressed_successes) == 0, msg)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/eSDK.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/eSDK.py
new file mode 100644
index 0000000..d03188f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/eSDK.py
@@ -0,0 +1,111 @@
+import tempfile
+import shutil
+import os
+import glob
+from oeqa.core.decorator.oeid import OETestID
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars
+
+class oeSDKExtSelfTest(OESelftestTestCase):
+    """
+    # Bugzilla Test Plan: 6033
+    # This code is planned to be part of the automation for eSDK, covering
+    # install of libraries and headers, image generation from binary feeds, and sdk-update.
+    """
+
+    @staticmethod
+    def get_esdk_environment(env_eSDK, tmpdir_eSDKQA):
+        # XXX: for now just use the first environment file found; we still need to
+        # investigate which environment (i586, x86_64) oe-selftest loads
+        pattern = os.path.join(tmpdir_eSDKQA, 'environment-setup-*')
+        return glob.glob(pattern)[0]
+
+    @staticmethod
+    def run_esdk_cmd(env_eSDK, tmpdir_eSDKQA, cmd, postconfig=None, **options):
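+        """Run cmd in tmpdir_eSDKQA with the eSDK environment sourced, optionally appending postconfig to the eSDK local.conf first."""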
+        if postconfig:
+            esdk_conf_file = os.path.join(tmpdir_eSDKQA, 'conf', 'local.conf')
+            with open(esdk_conf_file, 'a+') as f:
+                f.write(postconfig)
+        if 'shell' not in options:
+            options['shell'] = True
+
+        runCmd("cd %s; . %s; %s" % (tmpdir_eSDKQA, env_eSDK, cmd), **options)
+
+    @staticmethod
+    def generate_eSDK(image):
+        pn_task = '%s -c populate_sdk_ext' % image
+        bitbake(pn_task)
+
+    @staticmethod
+    def get_eSDK_toolchain(image):
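+        """Return the path of the eSDK installer script (.sh) deployed by the populate_sdk_ext task for the given image."""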
+        pn_task = '%s -c populate_sdk_ext' % image
+
+        bb_vars = get_bb_vars(['SDK_DEPLOY', 'TOOLCHAINEXT_OUTPUTNAME'], pn_task)
+        sdk_deploy = bb_vars['SDK_DEPLOY']
+        toolchain_name = bb_vars['TOOLCHAINEXT_OUTPUTNAME']
+        return os.path.join(sdk_deploy, toolchain_name + '.sh')
+
+    @staticmethod
+    def update_configuration(cls, image, tmpdir_eSDKQA, env_eSDK, ext_sdk_path):
+        sstate_dir = os.path.join(os.environ['BUILDDIR'], 'sstate-cache')
+
+        oeSDKExtSelfTest.generate_eSDK(cls.image)
+
+        cls.ext_sdk_path = oeSDKExtSelfTest.get_eSDK_toolchain(cls.image)
+        runCmd("%s -y -d \"%s\"" % (cls.ext_sdk_path, cls.tmpdir_eSDKQA))
+
+        cls.env_eSDK = oeSDKExtSelfTest.get_esdk_environment('', cls.tmpdir_eSDKQA)
+
+        sstate_config="""
+SDK_LOCAL_CONF_WHITELIST = "SSTATE_MIRRORS"
+SSTATE_MIRRORS =  "file://.* file://%s/PATH"
+CORE_IMAGE_EXTRA_INSTALL = "perl"
+        """ % sstate_dir
+
+        with open(os.path.join(cls.tmpdir_eSDKQA, 'conf', 'local.conf'), 'a+') as f:
+            f.write(sstate_config)
+
+    @classmethod
+    def setUpClass(cls):
+        super(oeSDKExtSelfTest, cls).setUpClass()
+        cls.tmpdir_eSDKQA = tempfile.mkdtemp(prefix='eSDKQA')
+
+        sstate_dir = get_bb_var('SSTATE_DIR')
+
+        cls.image = 'core-image-minimal'
+        oeSDKExtSelfTest.generate_eSDK(cls.image)
+
+        # Install eSDK
+        cls.ext_sdk_path = oeSDKExtSelfTest.get_eSDK_toolchain(cls.image)
+        runCmd("%s -y -d \"%s\"" % (cls.ext_sdk_path, cls.tmpdir_eSDKQA))
+
+        cls.env_eSDK = oeSDKExtSelfTest.get_esdk_environment('', cls.tmpdir_eSDKQA)
+
+        # Configure eSDK to use sstate mirror from poky
+        sstate_config="""
+SDK_LOCAL_CONF_WHITELIST = "SSTATE_MIRRORS"
+SSTATE_MIRRORS =  "file://.* file://%s/PATH"
+            """ % sstate_dir
+        with open(os.path.join(cls.tmpdir_eSDKQA, 'conf', 'local.conf'), 'a+') as f:
+            f.write(sstate_config)
+
+    @classmethod
+    def tearDownClass(cls):
+        shutil.rmtree(cls.tmpdir_eSDKQA, ignore_errors=True)
+        super(oeSDKExtSelfTest, cls).tearDownClass()
+
+    @OETestID(1602)
+    def test_install_libraries_headers(self):
+        pn_sstate = 'bc'
+        bitbake(pn_sstate)
+        cmd = "devtool sdk-install %s " % pn_sstate
+        oeSDKExtSelfTest.run_esdk_cmd(self.env_eSDK, self.tmpdir_eSDKQA, cmd)
+
+    @OETestID(1603)
+    def test_image_generation_binary_feeds(self):
+        image = 'core-image-minimal'
+        cmd = "devtool build-image %s" % image
+        oeSDKExtSelfTest.run_esdk_cmd(self.env_eSDK, self.tmpdir_eSDKQA, cmd)
+
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/image_typedep.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/image_typedep.py
new file mode 100644
index 0000000..e678885
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/image_typedep.py
@@ -0,0 +1,53 @@
+import os
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import bitbake
+from oeqa.core.decorator.oeid import OETestID
+
+class ImageTypeDepTests(OESelftestTestCase):
+
+    # Verify that when an IMAGE_TYPEDEP_ of the form "foo.bar" is specified,
+    # the conversion type bar gets added as a dependency as well
+    @OETestID(1633)
+    def test_conversion_typedep_added(self):
+
+        self.write_recipeinc('emptytest', """
+# Try to empty out the default dependency list
+PACKAGE_INSTALL = ""
+DISTRO_EXTRA_RDEPENDS=""
+
+LICENSE = "MIT"
+IMAGE_FSTYPES = "testfstype"
+
+IMAGE_TYPES_MASKED += "testfstype"
+IMAGE_TYPEDEP_testfstype = "tar.bz2"
+
+inherit image
+
+""")
+        # First get the dependency that should exist for bz2; it will look
+        # like CONVERSION_DEPENDS_bz2="somedep"
+        result = bitbake('-e emptytest')
+
+        for line in result.output.split('\n'):
+            if line.startswith('CONVERSION_DEPENDS_bz2'):
+                dep = line.split('=')[1].strip('"')
+                break
+
+        # Now get the dependency task list and check for the expected task
+        # dependency
+        bitbake('-g emptytest')
+
+        taskdependsfile = os.path.join(self.builddir, 'task-depends.dot')
+        dep = dep + ".do_populate_sysroot"
+        depfound = False
+        expectedline = '"emptytest.do_rootfs" -> "{}"'.format(dep)
+
+        with open(taskdependsfile, "r") as f:
+            for line in f:
+                if line.strip() == expectedline:
+                    depfound = True
+                    break
+
+        if not depfound:
+            raise AssertionError("\"{}\" not found".format(expectedline))
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/imagefeatures.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/imagefeatures.py
new file mode 100644
index 0000000..0ffb686
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/imagefeatures.py
@@ -0,0 +1,240 @@
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, runqemu
+from oeqa.core.decorator.oeid import OETestID
+from oeqa.utils.sshcontrol import SSHControl
+import os
+import json
+
+class ImageFeatures(OESelftestTestCase):
+
+    test_user = 'tester'
+    root_user = 'root'
+
+    buffer = True
+
+    @OETestID(1107)
+    def test_non_root_user_can_connect_via_ssh_without_password(self):
+        """
+        Summary: Check if a non-root user can connect via ssh without a password
+        Expected: 1. Connection to the image via ssh using root user without providing a password should be allowed.
+                  2. Connection to the image via ssh using tester user without providing a password should be allowed.
+        Product: oe-core
+        Author: Ionut Chisanovici <ionutx.chisanovici@intel.com>
+        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        """
+
+        features = 'EXTRA_IMAGE_FEATURES = "ssh-server-openssh empty-root-password allow-empty-password"\n'
+        features += 'INHERIT += "extrausers"\n'
+        features += 'EXTRA_USERS_PARAMS = "useradd -p \'\' {}; usermod -s /bin/sh {};"'.format(self.test_user, self.test_user)
+        self.write_config(features)
+
+        # Build a core-image-minimal
+        bitbake('core-image-minimal')
+
+        with runqemu("core-image-minimal") as qemu:
+            # Attempt to ssh with each user into qemu with empty password
+            for user in [self.root_user, self.test_user]:
+                ssh = SSHControl(ip=qemu.ip, logfile=qemu.sshlog, user=user)
+                status, output = ssh.run("true")
+                self.assertEqual(status, 0, 'ssh to user %s failed with %s' % (user, output))
+
+    @OETestID(1115)
+    def test_all_users_can_connect_via_ssh_without_password(self):
+        """
+        Summary:     Check if all users can connect via ssh without password
+        Expected: 1. Connection to the image via ssh using root user without providing a password should NOT be allowed.
+                  2. Connection to the image via ssh using tester user without providing a password should be allowed.
+        Product:     oe-core
+        Author:      Ionut Chisanovici <ionutx.chisanovici@intel.com>
+        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        """
+
+        features = 'EXTRA_IMAGE_FEATURES = "ssh-server-openssh allow-empty-password"\n'
+        features += 'INHERIT += "extrausers"\n'
+        features += 'EXTRA_USERS_PARAMS = "useradd -p \'\' {}; usermod -s /bin/sh {};"'.format(self.test_user, self.test_user)
+        self.write_config(features)
+
+        # Build a core-image-minimal
+        bitbake('core-image-minimal')
+
+        with runqemu("core-image-minimal") as qemu:
+            # Attempt to ssh with each user into qemu with empty password
+            for user in [self.root_user, self.test_user]:
+                ssh = SSHControl(ip=qemu.ip, logfile=qemu.sshlog, user=user)
+                status, output = ssh.run("true")
+                if user == 'root':
+                    self.assertNotEqual(status, 0, 'ssh to user root was allowed when it should not have been')
+                else:
+                    self.assertEqual(status, 0, 'ssh to user tester failed with %s' % output)
+
+
+    @OETestID(1116)
+    def test_clutter_image_can_be_built(self):
+        """
+        Summary:     Check if clutter image can be built
+        Expected:    1. core-image-clutter can be built
+        Product:     oe-core
+        Author:      Ionut Chisanovici <ionutx.chisanovici@intel.com>
+        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        """
+
+        # Build a core-image-clutter
+        bitbake('core-image-clutter')
+
+    @OETestID(1117)
+    def test_wayland_support_in_image(self):
+        """
+        Summary:     Check Wayland support in image
+        Expected:    1. Wayland image can be built
+                     2. Wayland feature can be installed
+        Product:     oe-core
+        Author:      Ionut Chisanovici <ionutx.chisanovici@intel.com>
+        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        """
+
+        distro_features = get_bb_var('DISTRO_FEATURES')
+        if not ('opengl' in distro_features and 'wayland' in distro_features):
+            self.skipTest('opengl and/or wayland not present in DISTRO_FEATURES so core-image-weston cannot be built')
+
+        # Build a core-image-weston
+        bitbake('core-image-weston')
+
+    @OETestID(1497)
+    def test_bmap(self):
+        """
+        Summary:     Check bmap support
+        Expected:    1. core-image-minimal can be built with bmap support
+                     2. core-image-minimal is sparse
+        Product:     oe-core
+        Author:      Ed Bartosh <ed.bartosh@linux.intel.com>
+        """
+
+        features = 'IMAGE_FSTYPES += " ext4 ext4.bmap ext4.bmap.gz"'
+        self.write_config(features)
+
+        image_name = 'core-image-minimal'
+        bitbake(image_name)
+
+        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
+        link_name = get_bb_var('IMAGE_LINK_NAME', image_name)
+        image_path = os.path.join(deploy_dir_image, "%s.ext4" % link_name)
+        bmap_path = "%s.bmap" % image_path
+        gzip_path = "%s.gz" % bmap_path
+
+        # check if result image, bmap and bmap.gz files are in deploy directory
+        self.assertTrue(os.path.exists(image_path))
+        self.assertTrue(os.path.exists(bmap_path))
+        self.assertTrue(os.path.exists(gzip_path))
+
+        # check if result image is sparse
+        image_stat = os.stat(image_path)
+        self.assertTrue(image_stat.st_size > image_stat.st_blocks * 512)
+
+        # check if the resulting gzip is valid
+        self.assertTrue(runCmd('gzip -t %s' % gzip_path))
+
+    @OETestID(1903)
+    def test_hypervisor_fmts(self):
+        """
+        Summary:     Check various hypervisor formats
+        Expected:    1. core-image-minimal can be built with vmdk, vdi and
+                        qcow2 support.
+                     2. qemu-img says each image has the expected format
+        Product:     oe-core
+        Author:      Tom Rini <trini@konsulko.com>
+        """
+
+        img_types = [ 'vmdk', 'vdi', 'qcow2' ]
+        features = ""
+        for itype in img_types:
+            features += 'IMAGE_FSTYPES += "wic.%s"\n' % itype
+        self.write_config(features)
+
+        image_name = 'core-image-minimal'
+        bitbake(image_name)
+
+        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
+        link_name = get_bb_var('IMAGE_LINK_NAME', image_name)
+        for itype in img_types:
+            image_path = os.path.join(deploy_dir_image, "%s.wic.%s" %
+                                      (link_name, itype))
+
+            # check if result image file is in deploy directory
+            self.assertTrue(os.path.exists(image_path))
+
+            # check if the result image is in the expected format
+            sysroot = get_bb_var('STAGING_DIR_NATIVE', 'core-image-minimal')
+            result = runCmd('qemu-img info --output json %s' % image_path,
+                            native_sysroot=sysroot)
+            self.assertTrue(json.loads(result.output).get('format') == itype)
+
+    @OETestID(1905)
+    def test_long_chain_conversion(self):
+        """
+        Summary:     Check for chaining many CONVERSION_CMDs together
+        Expected:    1. core-image-minimal can be built with
+                        ext4.bmap.gz.bz2.lzo.xz.u-boot and also create a
+                        sha256sum
+                     2. The above image has a valid sha256sum
+        Product:     oe-core
+        Author:      Tom Rini <trini@konsulko.com>
+        """
+
+        conv = "ext4.bmap.gz.bz2.lzo.xz.u-boot"
+        features = 'IMAGE_FSTYPES += "%s %s.sha256sum"' % (conv, conv)
+        self.write_config(features)
+
+        image_name = 'core-image-minimal'
+        bitbake(image_name)
+
+        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
+        link_name = get_bb_var('IMAGE_LINK_NAME', image_name)
+        image_path = os.path.join(deploy_dir_image, "%s.%s" %
+                                  (link_name, conv))
+
+        # check if resulting image is in the deploy directory
+        self.assertTrue(os.path.exists(image_path))
+        self.assertTrue(os.path.exists(image_path + ".sha256sum"))
+
+        # check if the resulting sha256sum agrees
+        self.assertTrue(runCmd('cd %s;sha256sum -c %s.%s.sha256sum' %
+                               (deploy_dir_image, link_name, conv)))
+
+    @OETestID(1904)
+    def test_image_fstypes(self):
+        """
+        Summary:     Check if image of supported image fstypes can be built
+        Expected:    core-image-minimal can be built for various image types
+        Product:     oe-core
+        Author:      Ed Bartosh <ed.bartosh@linux.intel.com>
+        """
+        image_name = 'core-image-minimal'
+
+        img_types = [itype for itype in get_bb_var("IMAGE_TYPES", image_name).split() \
+                         if itype not in ('container', 'elf', 'multiubi')]
+
+        config = 'IMAGE_FSTYPES += "%s"\n'\
+                 'MKUBIFS_ARGS ?= "-m 2048 -e 129024 -c 2047"\n'\
+                 'UBINIZE_ARGS ?= "-m 2048 -p 128KiB -s 512"' % ' '.join(img_types)
+
+        self.write_config(config)
+
+        bitbake(image_name)
+
+        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
+        link_name = get_bb_var('IMAGE_LINK_NAME', image_name)
+        for itype in img_types:
+            image_path = os.path.join(deploy_dir_image, "%s.%s" % (link_name, itype))
+            # check if result image is in deploy directory
+            self.assertTrue(os.path.exists(image_path),
+                            "%s image %s doesn't exist" % (itype, image_path))
+
+    def test_useradd_static(self):
+        config = """
+USERADDEXTENSION = "useradd-staticids"
+USERADD_ERROR_DYNAMIC = "skip"
+USERADD_UID_TABLES += "files/static-passwd"
+USERADD_GID_TABLES += "files/static-group"
+"""
+        self.write_config(config)
+        bitbake("core-image-base")
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/layerappend.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/layerappend.py
new file mode 100644
index 0000000..2fd5cdb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/layerappend.py
@@ -0,0 +1,95 @@
+import os
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var
+import oeqa.utils.ftools as ftools
+from oeqa.core.decorator.oeid import OETestID
+
+class LayerAppendTests(OESelftestTestCase):
+    layerconf = """
+# We have a conf and classes directory, append to BBPATH
+BBPATH .= ":${LAYERDIR}"
+
+# We have a recipes directory, add to BBFILES
+BBFILES += "${LAYERDIR}/recipes*/*.bb ${LAYERDIR}/recipes*/*.bbappend"
+
+BBFILE_COLLECTIONS += "meta-layerINT"
+BBFILE_PATTERN_meta-layerINT := "^${LAYERDIR}/"
+BBFILE_PRIORITY_meta-layerINT = "6"
+"""
+    recipe = """
+LICENSE="CLOSED"
+INHIBIT_DEFAULT_DEPS = "1"
+
+python do_build() {
+    bb.plain('Building ...')
+}
+addtask build
+"""
+    append = """
+FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
+
+SRC_URI_append = " file://appendtest.txt"
+
+sysroot_stage_all_append() {
+	install -m 644 ${WORKDIR}/appendtest.txt ${SYSROOT_DESTDIR}/
+}
+
+"""
+
+    append2 = """
+FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
+
+SRC_URI_append = " file://appendtest.txt"
+"""
+    layerappend = ''
+
+    def tearDownLocal(self):
+        if self.layerappend:
+            ftools.remove_from_file(self.builddir + "/conf/bblayers.conf", self.layerappend)
+        super(LayerAppendTests, self).tearDownLocal()
+
+    @OETestID(1196)
+    def test_layer_appends(self):
+        corebase = get_bb_var("COREBASE")
+
+        for l in ["0", "1", "2"]:
+            layer = os.path.join(corebase, "meta-layertest" + l)
+            self.assertFalse(os.path.exists(layer))
+            os.mkdir(layer)
+            os.mkdir(layer + "/conf")
+            with open(layer + "/conf/layer.conf", "w") as f:
+                f.write(self.layerconf.replace("INT", l))
+            os.mkdir(layer + "/recipes-test")
+            if l == "0":
+                with open(layer + "/recipes-test/layerappendtest.bb", "w") as f:
+                    f.write(self.recipe)
+            elif l == "1":
+                with open(layer + "/recipes-test/layerappendtest.bbappend", "w") as f:
+                    f.write(self.append)
+                os.mkdir(layer + "/recipes-test/layerappendtest")
+                with open(layer + "/recipes-test/layerappendtest/appendtest.txt", "w") as f:
+                    f.write("Layer 1 test")
+            elif l == "2":
+                with open(layer + "/recipes-test/layerappendtest.bbappend", "w") as f:
+                    f.write(self.append2)
+                os.mkdir(layer + "/recipes-test/layerappendtest")
+                with open(layer + "/recipes-test/layerappendtest/appendtest.txt", "w") as f:
+                    f.write("Layer 2 test")
+            self.track_for_cleanup(layer)
+
+        self.layerappend = "BBLAYERS += \"{0}/meta-layertest0 {0}/meta-layertest1 {0}/meta-layertest2\"".format(corebase)
+        ftools.append_file(self.builddir + "/conf/bblayers.conf", self.layerappend)
+        stagingdir = get_bb_var("SYSROOT_DESTDIR", "layerappendtest")
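+        # With both bbappends active, the copy of appendtest.txt from
+        # meta-layertest2 is expected to be the one that gets staged.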
+        bitbake("layerappendtest")
+        data = ftools.read_file(stagingdir + "/appendtest.txt")
+        self.assertEqual(data, "Layer 2 test")
+        os.remove(corebase + "/meta-layertest2/recipes-test/layerappendtest/appendtest.txt")
+        bitbake("layerappendtest")
+        data = ftools.read_file(stagingdir + "/appendtest.txt")
+        self.assertEqual(data, "Layer 1 test")
+        with open(corebase + "/meta-layertest2/recipes-test/layerappendtest/appendtest.txt", "w") as f:
+            f.write("Layer 2 test")
+        bitbake("layerappendtest")
+        data = ftools.read_file(stagingdir + "/appendtest.txt")
+        self.assertEqual(data, "Layer 2 test")
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/liboe.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/liboe.py
new file mode 100644
index 0000000..e846092
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/liboe.py
@@ -0,0 +1,102 @@
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.core.decorator.oeid import OETestID
+from oeqa.utils.commands import get_bb_var, get_bb_vars, bitbake, runCmd
+import oe.path
+import bb.utils
+import os
+
+class LibOE(OESelftestTestCase):
+
+    @classmethod
+    def setUpClass(cls):
+        super(LibOE, cls).setUpClass()
+        cls.tmp_dir = get_bb_var('TMPDIR')
+
+    @OETestID(1635)
+    def test_copy_tree_special(self):
+        """
+        Summary:    oe.path.copytree() should copy files with special characters
+        Expected:   'test file with sp£c!al @nd spaces' should exist in
+                    copy destination
+        Product:    OE-Core
+        Author:     Joshua Lock <joshua.g.lock@intel.com>
+        """
+        testloc = oe.path.join(self.tmp_dir, 'liboetests')
+        src = oe.path.join(testloc, 'src')
+        dst = oe.path.join(testloc, 'dst')
+        bb.utils.mkdirhier(testloc)
+        bb.utils.mkdirhier(src)
+        testfilename = 'test file with sp£c!al @nd spaces'
+
+        # create the test file and copy it
+        open(oe.path.join(src, testfilename), 'w+b').close()
+        oe.path.copytree(src, dst)
+
+        # ensure path exists in dest
+        fileindst = os.path.isfile(oe.path.join(dst, testfilename))
+        self.assertTrue(fileindst, "File with spaces doesn't exist in dst")
+
+        oe.path.remove(testloc)
+
+    @OETestID(1636)
+    def test_copy_tree_xattr(self):
+        """
+        Summary:    oe.path.copytree() should preserve xattr on copied files
+        Expected:   testxattr file in destination should have user.oetest
+                    extended attribute
+        Product:    OE-Core
+        Author:     Joshua Lock <joshua.g.lock@intel.com>
+        """
+        testloc = oe.path.join(self.tmp_dir, 'liboetests')
+        src = oe.path.join(testloc, 'src')
+        dst = oe.path.join(testloc, 'dst')
+        bb.utils.mkdirhier(testloc)
+        bb.utils.mkdirhier(src)
+        testfilename = 'testxattr'
+
+        # ensure we have setfattr available
+        bitbake("attr-native")
+
+        bb_vars = get_bb_vars(['SYSROOT_DESTDIR', 'bindir'], 'attr-native')
+        destdir = bb_vars['SYSROOT_DESTDIR']
+        bindir = bb_vars['bindir']
+        bindir = destdir + bindir
+
+        # create a file with xattr and copy it
+        open(oe.path.join(src, testfilename), 'w+b').close()
+        runCmd('%s/setfattr -n user.oetest -v "testing liboe" %s' % (bindir, oe.path.join(src, testfilename)))
+        oe.path.copytree(src, dst)
+
+        # ensure file in dest has user.oetest xattr
+        result = runCmd('%s/getfattr -n user.oetest %s' % (bindir, oe.path.join(dst, testfilename)))
+        self.assertIn('user.oetest="testing liboe"', result.output, 'Extended attribute not set in dst')
+
+        oe.path.remove(testloc)
+
+    @OETestID(1634)
+    def test_copy_hardlink_tree_count(self):
+        """
+        Summary:    oe.path.copyhardlinktree() shouldn't miss out files
+        Expected:   src and dst should have the same number of files
+        Product:    OE-Core
+        Author:     Joshua Lock <joshua.g.lock@intel.com>
+        """
+        testloc = oe.path.join(self.tmp_dir, 'liboetests')
+        src = oe.path.join(testloc, 'src')
+        dst = oe.path.join(testloc, 'dst')
+        bb.utils.mkdirhier(testloc)
+        bb.utils.mkdirhier(src)
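+        # '.baz' is deliberately hidden; copyhardlinktree() must not skip dotfiles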
+        testfiles = ['foo', 'bar', '.baz', 'quux']
+
+        def touchfile(tf):
+            open(oe.path.join(src, tf), 'w+b').close()
+
+        for f in testfiles:
+            touchfile(f)
+
+        oe.path.copyhardlinktree(src, dst)
+
+        dstcnt = len(os.listdir(dst))
+        srccnt = len(os.listdir(src))
+        self.assertEqual(dstcnt, len(testfiles), "Number of files in dst (%s) differs from number of files in src (%s)." % (dstcnt, srccnt))
+
+        oe.path.remove(testloc)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/lic_checksum.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/lic_checksum.py
new file mode 100644
index 0000000..3740715
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/lic_checksum.py
@@ -0,0 +1,35 @@
+import os
+import tempfile
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import bitbake
+from oeqa.utils import CommandError
+from oeqa.core.decorator.oeid import OETestID
+
+class LicenseTests(OESelftestTestCase):
+
+    # Verify that changing a license file that has an absolute path causes
+    # the license qa to fail due to a mismatched md5sum.
+    @OETestID(1197)
+    def test_nonmatching_checksum(self):
+        bitbake_cmd = '-c populate_lic emptytest'
+        error_msg = 'emptytest: The new md5 checksum is 8d777f385d3dfec8815d20f7496026dc'
+
+        lic_file, lic_path = tempfile.mkstemp()
+        os.close(lic_file)
+        self.track_for_cleanup(lic_path)
+
+        self.write_recipeinc('emptytest', """
+INHIBIT_DEFAULT_DEPS = "1"
+LIC_FILES_CHKSUM = "file://%s;md5=d41d8cd98f00b204e9800998ecf8427e"
+SRC_URI = "file://%s;md5=d41d8cd98f00b204e9800998ecf8427e"
+""" % (lic_path, lic_path))
+        result = bitbake(bitbake_cmd)
+
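+        # Modify the license file so its md5sum no longer matches the
+        # checksum recorded in LIC_FILES_CHKSUM above.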
+        with open(lic_path, "w") as f:
+            f.write("data")
+
+        self.write_config("INHERIT_remove = \"report-error\"")
+        result = bitbake(bitbake_cmd, ignore_status=True)
+        if error_msg not in result.output:
+            raise AssertionError(result.output)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/manifest.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/manifest.py
new file mode 100644
index 0000000..1460719
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/manifest.py
@@ -0,0 +1,166 @@
+import os
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import get_bb_var, get_bb_vars, bitbake
+from oeqa.core.decorator.oeid import OETestID
+
+class ManifestEntry:
+    '''A single manifest file plus the list of entries found to be missing'''
+    def __init__(self, entry):
+        self.file = entry
+        self.missing = []
+
+class VerifyManifest(OESelftestTestCase):
+    '''Tests for the manifest files and contents of an image'''
+
+    @classmethod
+    def check_manifest_entries(self, manifest, path):
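+        # Walk the manifest and return the entries that do not have a
+        # matching file under the given pkgdata path.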
+        manifest_errors = []
+        try:
+            with open(manifest, "r") as mfile:
+                for line in mfile:
+                    manifest_entry = os.path.join(path, line.split()[0])
+                    self.logger.debug("{}: looking for {}"\
+                                    .format(self.classname, manifest_entry))
+                    if not os.path.isfile(manifest_entry):
+                        manifest_errors.append(manifest_entry)
+                        self.logger.debug("{}: {} not found"\
+                                    .format(self.classname, manifest_entry))
+        except OSError as e:
+            self.logger.debug("{}: checking of {} failed"\
+                    .format(self.classname, manifest))
+            raise e
+
+        return manifest_errors
+
+    #this will possibly move from here
+    @classmethod
+    def get_dir_from_bb_var(self, bb_var, target = None):
+        target = self.buildtarget if target is None else target
+        directory = get_bb_var(bb_var, target)
+        if not directory or not os.path.isdir(directory):
+            self.logger.debug("{}: {} points to {} when target = {}"\
+                    .format(self.classname, bb_var, directory, target))
+            raise OSError
+        return directory
+
+    @classmethod
+    def setUpClass(self):
+
+        super(VerifyManifest, self).setUpClass()
+        self.buildtarget = 'core-image-minimal'
+        self.classname = 'VerifyManifest'
+
+        self.logger.info("{}: doing bitbake {} as a prerequisite of the test"\
+                .format(self.classname, self.buildtarget))
+        if bitbake(self.buildtarget).status:
+            self.logger.debug("{} Failed to setup {}"\
+                    .format(self.classname, self.buildtarget))
+            self.skipTest("{}: Cannot setup testing scenario"\
+                    .format(self.classname))
+
+    @OETestID(1380)
+    def test_SDK_manifest_entries(self):
+        '''Verify that the SDK manifest entries exist; this may require a build'''
+
+        # the setup should bitbake core-image-minimal and here it is required
+        # to do an additional setup for the sdk
+        sdktask = '-c populate_sdk'
+        bbargs = sdktask + ' ' + self.buildtarget
+        self.logger.debug("{}: doing bitbake {} as a prerequisite of the test"\
+                .format(self.classname, bbargs))
+        if bitbake(bbargs).status:
+            self.logger.debug("{} Failed to bitbake {}"\
+                    .format(self.classname, bbargs))
+            self.skipTest("{}: Cannot setup testing scenario"\
+                    .format(self.classname))
+
+
+        pkgdata_dir = {}
+        reverse_dir = {}
+        mfilename = {}
+        mpath = {}
+        m_entry = {}
+        # get manifest location based on target to query about
+        d_target = dict(target = self.buildtarget,
+                         host = 'nativesdk-packagegroup-sdk-host')
+        try:
+            mdir = self.get_dir_from_bb_var('SDK_DEPLOY', self.buildtarget)
+            for k in d_target.keys():
+                bb_vars = get_bb_vars(['SDK_NAME', 'SDK_VERSION'], self.buildtarget)
+                mfilename[k] = "{}-toolchain-{}.{}.manifest".format(
+                        bb_vars['SDK_NAME'],
+                        bb_vars['SDK_VERSION'],
+                        k)
+                mpath[k] = os.path.join(mdir, mfilename[k])
+                if not os.path.isfile(mpath[k]):
+                    self.logger.debug("{}: {} does not exist".format(
+                        self.classname, mpath[k]))
+                    raise IOError
+                m_entry[k] = ManifestEntry(mpath[k])
+
+                pkgdata_dir[k] = self.get_dir_from_bb_var('PKGDATA_DIR',
+                        d_target[k])
+                reverse_dir[k] = os.path.join(pkgdata_dir[k],
+                        'runtime-reverse')
+                if not os.path.exists(reverse_dir[k]):
+                    self.logger.debug("{}: {} does not exist".format(
+                        self.classname, reverse_dir[k]))
+                    raise IOError
+        except OSError:
+            raise self.skipTest("{}: Error in obtaining manifest dirs"\
+                .format(self.classname))
+        except IOError:
+            msg = "{}: Error cannot find manifests in the specified dir:\n{}"\
+                    .format(self.classname, mdir)
+            self.fail(msg)
+
+        for k in d_target.keys():
+            self.logger.debug("{}: Check manifest {}".format(
+                self.classname, m_entry[k].file))
+
+            m_entry[k].missing = self.check_manifest_entries(\
+                                               m_entry[k].file,reverse_dir[k])
+            if m_entry[k].missing:
+                msg = '{}: {} Error has the following missing entries'\
+                        .format(self.classname, m_entry[k].file)
+                logmsg = msg+':\n'+'\n'.join(m_entry[k].missing)
+                self.logger.debug(logmsg)
+                self.logger.info(msg)
+                self.fail(logmsg)
+
+    @OETestID(1381)
+    def test_image_manifest_entries(self):
+        '''Verify that the image manifest entries exist'''
+
+        # get manifest location based on target to query about
+        try:
+            mdir = self.get_dir_from_bb_var('DEPLOY_DIR_IMAGE',
+                                                self.buildtarget)
+            mfilename = get_bb_var("IMAGE_LINK_NAME", self.buildtarget)\
+                    + ".manifest"
+            mpath = os.path.join(mdir, mfilename)
+            if not os.path.isfile(mpath): raise IOError
+            m_entry = ManifestEntry(mpath)
+
+            pkgdata_dir = self.get_dir_from_bb_var('PKGDATA_DIR',
+                                                self.buildtarget)
+            revdir = os.path.join(pkgdata_dir, 'runtime-reverse')
+            if not os.path.exists(revdir): raise IOError
+        except OSError:
+            raise self.skipTest("{}: Error in obtaining manifest dirs"\
+                .format(self.classname))
+        except IOError:
+            msg = "{}: Error cannot find manifests in dir:\n{}"\
+                    .format(self.classname, mdir)
+            self.fail(msg)
+
+        self.logger.debug("{}: Check manifest {}"\
+                            .format(self.classname, m_entry.file))
+        m_entry.missing = self.check_manifest_entries(\
+                                                    m_entry.file, revdir)
+        if m_entry.missing:
+            msg = '{}: {} Error has the following missing entries'\
+                    .format(self.classname, m_entry.file)
+            logmsg = msg+':\n'+'\n'.join(m_entry.missing)
+            self.logger.debug(logmsg)
+            self.logger.info(msg)
+            self.fail(logmsg)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/__init__.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/__init__.py
similarity index 100%
rename from import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/__init__.py
rename to import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/__init__.py
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/buildhistory.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/buildhistory.py
new file mode 100644
index 0000000..08675fd
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/buildhistory.py
@@ -0,0 +1,99 @@
+import os
+from oeqa.selftest.case import OESelftestTestCase
+import tempfile
+from oeqa.utils.commands import get_bb_var
+from oeqa.core.decorator.oeid import OETestID
+
+class TestBlobParsing(OESelftestTestCase):
+
+    def setUp(self):
+        import time
+        self.repo_path = tempfile.mkdtemp(prefix='selftest-buildhistory',
+            dir=get_bb_var('TOPDIR'))
+
+        try:
+            from git import Repo
+            self.repo = Repo.init(self.repo_path)
+        except ImportError:
+            self.skipTest('Python module GitPython is not present')
+
+        self.test_file = "test"
+        self.var_map = {}
+
+    def tearDown(self):
+        import shutil
+        shutil.rmtree(self.repo_path)
+
+    def commit_vars(self, to_add={}, to_remove = [], msg="A commit message"):
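+        # Apply the requested additions/removals to the variable map, write
+        # it to the test file and commit, producing a new blob to compare.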
+        if len(to_add) == 0 and len(to_remove) == 0:
+            return
+
+        for k in to_remove:
+            self.var_map.pop(k, None)
+        for k in to_add:
+            self.var_map[k] = to_add[k]
+
+        with open(os.path.join(self.repo_path, self.test_file), 'w') as repo_file:
+            for k in self.var_map:
+                repo_file.write("%s = %s\n" % (k, self.var_map[k]))
+
+        self.repo.git.add("--all")
+        self.repo.git.commit(message=msg)
+
+    @OETestID(1859)
+    def test_blob_to_dict(self):
+        """
+        Test conversion of git blobs to a dictionary
+        """
+        from oe.buildhistory_analysis import blob_to_dict
+        valuesmap = { "foo" : "1", "bar" : "2" }
+        self.commit_vars(to_add = valuesmap)
+
+        blob = self.repo.head.commit.tree.blobs[0]
+        self.assertEqual(valuesmap, blob_to_dict(blob),
+            "commit was not translated correctly to dictionary")
+
+    @OETestID(1860)
+    def test_compare_dict_blobs(self):
+        """
+        Test comparison of dictionaries extracted from git blobs
+        """
+        from oe.buildhistory_analysis import compare_dict_blobs
+
+        changesmap = { "foo-2" : ("2", "8"), "bar" : ("","4"), "bar-2" : ("","5")}
+
+        self.commit_vars(to_add = { "foo" : "1", "foo-2" : "2", "foo-3" : "3" })
+        blob1 = self.repo.heads.master.commit.tree.blobs[0]
+
+        self.commit_vars(to_add = { "foo-2" : "8", "bar" : "4", "bar-2" : "5" })
+        blob2 = self.repo.heads.master.commit.tree.blobs[0]
+
+        change_records = compare_dict_blobs(os.path.join(self.repo_path, self.test_file),
+            blob1, blob2, False, False)
+
+        var_changes = { x.fieldname : (x.oldvalue, x.newvalue) for x in change_records}
+        self.assertEqual(changesmap, var_changes, "Changes not reported correctly")
+
+    @OETestID(1861)
+    def test_compare_dict_blobs_default(self):
+        """
+        Test default values for comparison of git blob dictionaries
+        """
+        from oe.buildhistory_analysis import compare_dict_blobs
+        defaultmap = { x : ("default", "1")  for x in ["PKG", "PKGE", "PKGV", "PKGR"]}
+
+        self.commit_vars(to_add = { "foo" : "1" })
+        blob1 = self.repo.heads.master.commit.tree.blobs[0]
+
+        self.commit_vars(to_add = { "PKG" : "1", "PKGE" : "1", "PKGV" : "1", "PKGR" : "1" })
+        blob2 = self.repo.heads.master.commit.tree.blobs[0]
+
+        change_records = compare_dict_blobs(os.path.join(self.repo_path, self.test_file),
+            blob1, blob2, False, False)
+
+        var_changes = {}
+        for x in change_records:
+            oldvalue = "default" if ("default" in x.oldvalue) else x.oldvalue
+            var_changes[x.fieldname] = (oldvalue, x.newvalue)
+
+        self.assertEqual(defaultmap, var_changes, "Defaults not set properly")
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/elf.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/elf.py
new file mode 100644
index 0000000..74ee6a1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/elf.py
@@ -0,0 +1,21 @@
+from unittest.case import TestCase
+import oe.qa
+
+class TestElf(TestCase):
+    def test_machine_name(self):
+        """
+        Test elf_machine_to_string()
+        """
+        self.assertEqual(oe.qa.elf_machine_to_string(0x02), "SPARC")
+        self.assertEqual(oe.qa.elf_machine_to_string(0x03), "x86")
+        self.assertEqual(oe.qa.elf_machine_to_string(0x08), "MIPS")
+        self.assertEqual(oe.qa.elf_machine_to_string(0x14), "PowerPC")
+        self.assertEqual(oe.qa.elf_machine_to_string(0x28), "ARM")
+        self.assertEqual(oe.qa.elf_machine_to_string(0x2A), "SuperH")
+        self.assertEqual(oe.qa.elf_machine_to_string(0x32), "IA-64")
+        self.assertEqual(oe.qa.elf_machine_to_string(0x3E), "x86-64")
+        self.assertEqual(oe.qa.elf_machine_to_string(0xB7), "AArch64")
+
+        self.assertEqual(oe.qa.elf_machine_to_string(0x00), "Unknown (0)")
+        self.assertEqual(oe.qa.elf_machine_to_string(0xDEADBEEF), "Unknown (3735928559)")
+        self.assertEqual(oe.qa.elf_machine_to_string("foobar"), "Unknown ('foobar')")
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/license.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/license.py
new file mode 100644
index 0000000..d7f91fb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/license.py
@@ -0,0 +1,99 @@
+from unittest.case import TestCase
+import oe.license
+
+class SeenVisitor(oe.license.LicenseVisitor):
+    def __init__(self):
+        self.seen = []
+        oe.license.LicenseVisitor.__init__(self)
+
+    def visit_Str(self, node):
+        self.seen.append(node.s)
+
+class TestSingleLicense(TestCase):
+    licenses = [
+        "GPLv2",
+        "LGPL-2.0",
+        "Artistic",
+        "MIT",
+        "GPLv3+",
+        "FOO_BAR",
+    ]
+    invalid_licenses = ["GPL/BSD"]
+
+    @staticmethod
+    def parse(licensestr):
+        visitor = SeenVisitor()
+        visitor.visit_string(licensestr)
+        return visitor.seen
+
+    def test_single_licenses(self):
+        for license in self.licenses:
+            licenses = self.parse(license)
+            self.assertListEqual(licenses, [license])
+
+    def test_invalid_licenses(self):
+        for license in self.invalid_licenses:
+            with self.assertRaises(oe.license.InvalidLicense) as cm:
+                self.parse(license)
+            self.assertEqual(cm.exception.license, license)
+
+class TestSimpleCombinations(TestCase):
+    tests = {
+        "FOO&BAR": ["FOO", "BAR"],
+        "BAZ & MOO": ["BAZ", "MOO"],
+        "ALPHA|BETA": ["ALPHA"],
+        "BAZ&MOO|FOO": ["FOO"],
+        "FOO&BAR|BAZ": ["FOO", "BAR"],
+    }
+    preferred = ["ALPHA", "FOO", "BAR"]
+
+    def test_tests(self):
+        def choose(a, b):
+            if all(lic in self.preferred for lic in b):
+                return b
+            else:
+                return a
+
+        for license, expected in self.tests.items():
+            licenses = oe.license.flattened_licenses(license, choose)
+            self.assertListEqual(licenses, expected)
+
+class TestComplexCombinations(TestSimpleCombinations):
+    tests = {
+        "FOO & (BAR | BAZ)&MOO": ["FOO", "BAR", "MOO"],
+        "(ALPHA|(BETA&THETA)|OMEGA)&DELTA": ["OMEGA", "DELTA"],
+        "((ALPHA|BETA)&FOO)|BAZ": ["BETA", "FOO"],
+        "(GPL-2.0|Proprietary)&BSD-4-clause&MIT": ["GPL-2.0", "BSD-4-clause", "MIT"],
+    }
+    preferred = ["BAR", "OMEGA", "BETA", "GPL-2.0"]
+
+class TestIsIncluded(TestCase):
+    tests = {
+        ("FOO | BAR", None, None):
+            [True, ["FOO"]],
+        ("FOO | BAR", None, "FOO"):
+            [True, ["BAR"]],
+        ("FOO | BAR", "BAR", None):
+            [True, ["BAR"]],
+        ("FOO | BAR & FOOBAR", "*BAR", None):
+            [True, ["BAR", "FOOBAR"]],
+        ("FOO | BAR & FOOBAR", None, "FOO*"):
+            [False, ["FOOBAR"]],
+        ("(FOO | BAR) & FOOBAR | BARFOO", None, "FOO"):
+            [True, ["BAR", "FOOBAR"]],
+        ("(FOO | BAR) & FOOBAR | BAZ & MOO & BARFOO", None, "FOO"):
+            [True, ["BAZ", "MOO", "BARFOO"]],
+        ("GPL-3.0 & GPL-2.0 & LGPL-2.1 | Proprietary", None, None):
+            [True, ["GPL-3.0", "GPL-2.0", "LGPL-2.1"]],
+        ("GPL-3.0 & GPL-2.0 & LGPL-2.1 | Proprietary", None, "GPL-3.0"):
+            [True, ["Proprietary"]],
+        ("GPL-3.0 & GPL-2.0 & LGPL-2.1 | Proprietary", None, "GPL-3.0 Proprietary"):
+            [False, ["GPL-3.0"]]
+    }
+
+    def test_tests(self):
+        for args, expected in self.tests.items():
+            is_included, licenses = oe.license.is_included(
+                args[0], (args[1] or '').split(), (args[2] or '').split())
+            self.assertEqual(is_included, expected[0])
+            self.assertListEqual(licenses, expected[1])
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/path.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/path.py
new file mode 100644
index 0000000..75a27c0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/path.py
@@ -0,0 +1,89 @@
+from unittest.case import TestCase
+import oe, oe.path
+import tempfile
+import os
+import errno
+import shutil
+
+class TestRealPath(TestCase):
+    DIRS = [ "a", "b", "etc", "sbin", "usr", "usr/bin", "usr/binX", "usr/sbin", "usr/include", "usr/include/gdbm" ]
+    FILES = [ "etc/passwd", "b/file" ]
+    LINKS = [
+        ( "bin",             "/usr/bin",             "/usr/bin" ),
+        ( "binX",            "usr/binX",             "/usr/binX" ),
+        ( "c",               "broken",               "/broken" ),
+        ( "etc/passwd-1",    "passwd",               "/etc/passwd" ),
+        ( "etc/passwd-2",    "passwd-1",             "/etc/passwd" ),
+        ( "etc/passwd-3",    "/etc/passwd-1",        "/etc/passwd" ),
+        ( "etc/shadow-1",    "/etc/shadow",          "/etc/shadow" ),
+        ( "etc/shadow-2",    "/etc/shadow-1",        "/etc/shadow" ),
+        ( "prog-A",          "bin/prog-A",           "/usr/bin/prog-A" ),
+        ( "prog-B",          "/bin/prog-B",          "/usr/bin/prog-B" ),
+        ( "usr/bin/prog-C",  "../../sbin/prog-C",    "/sbin/prog-C" ),
+        ( "usr/bin/prog-D",  "/sbin/prog-D",         "/sbin/prog-D" ),
+        ( "usr/binX/prog-E", "../sbin/prog-E",       None ),
+        ( "usr/bin/prog-F",  "../../../sbin/prog-F", "/sbin/prog-F" ),
+        ( "loop",            "a/loop",               None ),
+        ( "a/loop",          "../loop",              None ),
+        ( "b/test",          "file/foo",             "/b/file/foo" ),
+    ]
+
+    LINKS_PHYS = [
+        ( "./",          "/",                "" ),
+        ( "binX/prog-E", "/usr/sbin/prog-E", "/sbin/prog-E" ),
+    ]
+
+    EXCEPTIONS = [
+        ( "loop",   errno.ELOOP ),
+        ( "b/test", errno.ENOENT ),
+    ]
+
+    def __del__(self):
+        try:
+            #os.system("tree -F %s" % self.tmpdir)
+            shutil.rmtree(self.tmpdir)
+        except:
+            pass
+
+    def setUp(self):
+        self.tmpdir = tempfile.mkdtemp(prefix = "oe-test_path")
+        self.root = os.path.join(self.tmpdir, "R")
+
+        os.mkdir(os.path.join(self.tmpdir, "_real"))
+        os.symlink("_real", self.root)
+
+        for d in self.DIRS:
+            os.mkdir(os.path.join(self.root, d))
+        for f in self.FILES:
+            open(os.path.join(self.root, f), "w").close()
+        for l in self.LINKS:
+            os.symlink(l[1], os.path.join(self.root, l[0]))
+
+    def __realpath(self, file, use_physdir, assume_dir = True):
+        return oe.path.realpath(os.path.join(self.root, file), self.root,
+                                use_physdir, assume_dir = assume_dir)
+
+    def test_norm(self):
+        for l in self.LINKS:
+            if l[2] == None:
+                continue
+
+            target_p = self.__realpath(l[0], True)
+            target_l = self.__realpath(l[0], False)
+
+            if l[2] != False:
+                self.assertEqual(target_p, target_l)
+                self.assertEqual(l[2], target_p[len(self.root):])
+
+    def test_phys(self):
+        for l in self.LINKS_PHYS:
+            target_p = self.__realpath(l[0], True)
+            target_l = self.__realpath(l[0], False)
+
+            self.assertEqual(l[1], target_p[len(self.root):])
+            self.assertEqual(l[2], target_l[len(self.root):])
+
+    def test_loop(self):
+        for e in self.EXCEPTIONS:
+            self.assertRaisesRegex(OSError, r'\[Errno %u\]' % e[1],
+                                    self.__realpath, e[0], False, False)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/types.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/types.py
new file mode 100644
index 0000000..6b53aa6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/types.py
@@ -0,0 +1,50 @@
+from unittest.case import TestCase
+from oe.maketype import create
+
+class TestBooleanType(TestCase):
+    def test_invalid(self):
+        self.assertRaises(ValueError, create, '', 'boolean')
+        self.assertRaises(ValueError, create, 'foo', 'boolean')
+        self.assertRaises(TypeError, create, object(), 'boolean')
+
+    def test_true(self):
+        self.assertTrue(create('y', 'boolean'))
+        self.assertTrue(create('yes', 'boolean'))
+        self.assertTrue(create('1', 'boolean'))
+        self.assertTrue(create('t', 'boolean'))
+        self.assertTrue(create('true', 'boolean'))
+        self.assertTrue(create('TRUE', 'boolean'))
+        self.assertTrue(create('truE', 'boolean'))
+
+    def test_false(self):
+        self.assertFalse(create('n', 'boolean'))
+        self.assertFalse(create('no', 'boolean'))
+        self.assertFalse(create('0', 'boolean'))
+        self.assertFalse(create('f', 'boolean'))
+        self.assertFalse(create('false', 'boolean'))
+        self.assertFalse(create('FALSE', 'boolean'))
+        self.assertFalse(create('faLse', 'boolean'))
+
+    def test_bool_equality(self):
+        self.assertEqual(create('n', 'boolean'), False)
+        self.assertNotEqual(create('n', 'boolean'), True)
+        self.assertEqual(create('y', 'boolean'), True)
+        self.assertNotEqual(create('y', 'boolean'), False)
+
+class TestList(TestCase):
+    def assertListEqual(self, value, valid, sep=None):
+        obj = create(value, 'list', separator=sep)
+        self.assertEqual(obj, valid)
+        if sep is not None:
+            self.assertEqual(obj.separator, sep)
+        self.assertEqual(str(obj), obj.separator.join(obj))
+
+    def test_list_nosep(self):
+        testlist = ['alpha', 'beta', 'theta']
+        self.assertListEqual('alpha beta theta', testlist)
+        self.assertListEqual('alpha  beta\ttheta', testlist)
+        self.assertListEqual('alpha', ['alpha'])
+
+    def test_list_usersep(self):
+        self.assertListEqual('foo:bar', ['foo', 'bar'], ':')
+        self.assertListEqual('foo:bar:baz', ['foo', 'bar', 'baz'], ':')
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/utils.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/utils.py
new file mode 100644
index 0000000..9fb6c15
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oelib/utils.py
@@ -0,0 +1,51 @@
+from unittest.case import TestCase
+from oe.utils import packages_filter_out_system, trim_version
+
+class TestPackagesFilterOutSystem(TestCase):
+    def test_filter(self):
+        """
+        Test that oe.utils.packages_filter_out_system works.
+        """
+        try:
+            import bb
+        except ImportError:
+            self.skipTest("Cannot import bb")
+
+        d = bb.data_smart.DataSmart()
+        d.setVar("PN", "foo")
+
+        d.setVar("PACKAGES", "foo foo-doc foo-dev")
+        pkgs = packages_filter_out_system(d)
+        self.assertEqual(pkgs, [])
+
+        d.setVar("PACKAGES", "foo foo-doc foo-data foo-dev")
+        pkgs = packages_filter_out_system(d)
+        self.assertEqual(pkgs, ["foo-data"])
+
+        d.setVar("PACKAGES", "foo foo-locale-en-gb")
+        pkgs = packages_filter_out_system(d)
+        self.assertEqual(pkgs, [])
+
+        d.setVar("PACKAGES", "foo foo-data foo-locale-en-gb")
+        pkgs = packages_filter_out_system(d)
+        self.assertEqual(pkgs, ["foo-data"])
+
+
+class TestTrimVersion(TestCase):
+    def test_version_exception(self):
+        with self.assertRaises(TypeError):
+            trim_version(None, 2)
+        with self.assertRaises(TypeError):
+            trim_version((1, 2, 3), 2)
+
+    def test_num_exception(self):
+        with self.assertRaises(ValueError):
+            trim_version("1.2.3", 0)
+        with self.assertRaises(ValueError):
+            trim_version("1.2.3", -1)
+
+    def test_valid(self):
+        self.assertEqual(trim_version("1.2.3", 1), "1")
+        self.assertEqual(trim_version("1.2.3", 2), "1.2")
+        self.assertEqual(trim_version("1.2.3", 3), "1.2.3")
+        self.assertEqual(trim_version("1.2.3", 4), "1.2.3")
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oescripts.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oescripts.py
new file mode 100644
index 0000000..1ee7537
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/oescripts.py
@@ -0,0 +1,15 @@
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.selftest.cases.buildhistory import BuildhistoryBase
+from oeqa.utils.commands import Command, runCmd, bitbake, get_bb_var, get_test_layer
+from oeqa.core.decorator.oeid import OETestID
+
+class BuildhistoryDiffTests(BuildhistoryBase):
+
+    @OETestID(295)
+    def test_buildhistory_diff(self):
+        target = 'xcursor-transparent-theme'
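+        # Build at PR r1, rebuild at r0 (expected to error), then check that
+        # buildhistory-diff reports the PR change between the two builds.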
+        self.run_buildhistory_operation(target, target_config="PR = \"r1\"", change_bh_location=True)
+        self.run_buildhistory_operation(target, target_config="PR = \"r0\"", change_bh_location=False, expect_error=True)
+        result = runCmd("buildhistory-diff -p %s" % get_bb_var('BUILDHISTORY_DIR'))
+        expected_output = 'PR changed from "r1" to "r0"'
+        self.assertTrue(expected_output in result.output, msg="Did not find expected output: %s" % result.output)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/package.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/package.py
new file mode 100644
index 0000000..169698f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/package.py
@@ -0,0 +1,86 @@
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.core.decorator.oeid import OETestID
+from oeqa.utils.commands import bitbake, get_bb_vars
+import subprocess, os
+import oe.path
+
+class VersionOrdering(OESelftestTestCase):
+    # version1, version2, sort order
+    tests = (
+        ("1.0", "1.0", 0),
+        ("1.0", "2.0", -1),
+        ("2.0", "1.0", 1),
+        ("2.0-rc", "2.0", 1),
+        ("2.0~rc", "2.0", -1),
+        ("1.2rc2", "1.2.0", -1)
+        )
+
+    @classmethod
+    def setUpClass(cls):
+        super().setUpClass()
+
+        # Build the tools we need and populate a sysroot
+        bitbake("dpkg-native opkg-native rpm-native python3-native")
+        bitbake("build-sysroots -c build_native_sysroot")
+
+        # Get the paths so we can point into the sysroot correctly
+        vars = get_bb_vars(["STAGING_DIR", "BUILD_ARCH", "bindir_native", "libdir_native"])
+        cls.staging = oe.path.join(vars["STAGING_DIR"], vars["BUILD_ARCH"])
+        cls.bindir = oe.path.join(cls.staging, vars["bindir_native"])
+        cls.libdir = oe.path.join(cls.staging, vars["libdir_native"])
+
+    def setUp(self):
+        # Just for convenience
+        self.staging = type(self).staging
+        self.bindir = type(self).bindir
+        self.libdir = type(self).libdir
+
+    @OETestID(1880)
+    def test_dpkg(self):
+        for ver1, ver2, sort in self.tests:
+            op = { -1: "<<", 0: "=", 1: ">>" }[sort]
+            status = subprocess.call((oe.path.join(self.bindir, "dpkg"), "--compare-versions", ver1, op, ver2))
+            self.assertEqual(status, 0, "%s %s %s failed" % (ver1, op, ver2))
+
+            # Now do it again but with incorrect operations
+            op = { -1: ">>", 0: ">>", 1: "<<" }[sort]
+            status = subprocess.call((oe.path.join(self.bindir, "dpkg"), "--compare-versions", ver1, op, ver2))
+            self.assertNotEqual(status, 0, "%s %s %s failed" % (ver1, op, ver2))
+
+            # Now do it again but with incorrect operations
+            op = { -1: "=", 0: "<<", 1: "=" }[sort]
+            status = subprocess.call((oe.path.join(self.bindir, "dpkg"), "--compare-versions", ver1, op, ver2))
+            self.assertNotEqual(status, 0, "%s %s %s failed" % (ver1, op, ver2))
+
+    @OETestID(1881)
+    def test_opkg(self):
+        for ver1, ver2, sort in self.tests:
+            op = { -1: "<<", 0: "=", 1: ">>" }[sort]
+            status = subprocess.call((oe.path.join(self.bindir, "opkg"), "compare-versions", ver1, op, ver2))
+            self.assertEqual(status, 0, "%s %s %s failed" % (ver1, op, ver2))
+
+            # Now do it again but with incorrect operations
+            op = { -1: ">>", 0: ">>", 1: "<<" }[sort]
+            status = subprocess.call((oe.path.join(self.bindir, "opkg"), "compare-versions", ver1, op, ver2))
+            self.assertNotEqual(status, 0, "%s %s %s failed" % (ver1, op, ver2))
+
+            # Now do it again but with incorrect operations
+            op = { -1: "=", 0: "<<", 1: "=" }[sort]
+            status = subprocess.call((oe.path.join(self.bindir, "opkg"), "compare-versions", ver1, op, ver2))
+            self.assertNotEqual(status, 0, "%s %s %s failed" % (ver1, op, ver2))
+
+    @OETestID(1882)
+    def test_rpm(self):
+        # Need to tell the Python bindings where to find its configuration
+        env = os.environ.copy()
+        env["RPM_CONFIGDIR"] = oe.path.join(self.libdir, "rpm")
+
+        for ver1, ver2, sort in self.tests:
+            # The only way to test rpm is via the Python module, so we need to
+            # execute python3-native.  labelCompare returns -1/0/1 (like strcmp)
+            # so add 100 and use that as the exit code.
+            command = (oe.path.join(self.bindir, "python3-native", "python3"), "-c",
+                       "import sys, rpm; v1=(None, \"%s\", None); v2=(None, \"%s\", None); sys.exit(rpm.labelCompare(v1, v2) + 100)" % (ver1, ver2))
+            status = subprocess.call(command, env=env)
+            self.assertIn(status, (99, 100, 101))
+            self.assertEqual(status - 100, sort, "%s %s (%d) failed" % (ver1, ver2, sort))
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/pkgdata.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/pkgdata.py
new file mode 100644
index 0000000..0b4caf1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/pkgdata.py
@@ -0,0 +1,224 @@
+import os
+import tempfile
+import fnmatch
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars
+from oeqa.core.decorator.oeid import OETestID
+
+class OePkgdataUtilTests(OESelftestTestCase):
+
+    @classmethod
+    def setUpClass(cls):
+        super(OePkgdataUtilTests, cls).setUpClass()
+        # Ensure we have the right data in pkgdata
+        cls.logger.info('Running bitbake to generate pkgdata')
+        bitbake('busybox zlib m4')
+
+    @OETestID(1203)
+    def test_lookup_pkg(self):
+        # Forward tests
+        result = runCmd('oe-pkgdata-util lookup-pkg "zlib busybox"')
+        self.assertEqual(result.output, 'libz1\nbusybox')
+        result = runCmd('oe-pkgdata-util lookup-pkg zlib-dev')
+        self.assertEqual(result.output, 'libz-dev')
+        result = runCmd('oe-pkgdata-util lookup-pkg nonexistentpkg', ignore_status=True)
+        self.assertEqual(result.status, 1, "Status different than 1. output: %s" % result.output)
+        self.assertEqual(result.output, 'ERROR: The following packages could not be found: nonexistentpkg')
+        # Reverse tests
+        result = runCmd('oe-pkgdata-util lookup-pkg -r "libz1 busybox"')
+        self.assertEqual(result.output, 'zlib\nbusybox')
+        result = runCmd('oe-pkgdata-util lookup-pkg -r libz-dev')
+        self.assertEqual(result.output, 'zlib-dev')
+        result = runCmd('oe-pkgdata-util lookup-pkg -r nonexistentpkg', ignore_status=True)
+        self.assertEqual(result.status, 1, "Status different than 1. output: %s" % result.output)
+        self.assertEqual(result.output, 'ERROR: The following packages could not be found: nonexistentpkg')
+
+    @OETestID(1205)
+    def test_read_value(self):
+        result = runCmd('oe-pkgdata-util read-value PN libz1')
+        self.assertEqual(result.output, 'zlib')
+        result = runCmd('oe-pkgdata-util read-value PKG libz1')
+        self.assertEqual(result.output, 'libz1')
+        result = runCmd('oe-pkgdata-util read-value PKGSIZE m4')
+        pkgsize = int(result.output.strip())
+        self.assertGreater(pkgsize, 1, "Size should be greater than 1. %s" % result.output)
+
+    @OETestID(1198)
+    def test_find_path(self):
+        result = runCmd('oe-pkgdata-util find-path /lib/libz.so.1')
+        self.assertEqual(result.output, 'zlib: /lib/libz.so.1')
+        result = runCmd('oe-pkgdata-util find-path /usr/bin/m4')
+        self.assertEqual(result.output, 'm4: /usr/bin/m4')
+        result = runCmd('oe-pkgdata-util find-path /not/exist', ignore_status=True)
+        self.assertEqual(result.status, 1, "Status different than 1. output: %s" % result.output)
+        self.assertEqual(result.output, 'ERROR: Unable to find any package producing path /not/exist')
+
+    @OETestID(1204)
+    def test_lookup_recipe(self):
+        result = runCmd('oe-pkgdata-util lookup-recipe "libz-staticdev busybox"')
+        self.assertEqual(result.output, 'zlib\nbusybox')
+        result = runCmd('oe-pkgdata-util lookup-recipe libz-dbg')
+        self.assertEqual(result.output, 'zlib')
+        result = runCmd('oe-pkgdata-util lookup-recipe nonexistentpkg', ignore_status=True)
+        self.assertEqual(result.status, 1, "Status different than 1. output: %s" % result.output)
+        self.assertEqual(result.output, 'ERROR: The following packages could not be found: nonexistentpkg')
+
+    @OETestID(1202)
+    def test_list_pkgs(self):
+        # No arguments
+        result = runCmd('oe-pkgdata-util list-pkgs')
+        pkglist = result.output.split()
+        self.assertIn('zlib', pkglist, "Listed packages: %s" % result.output)
+        self.assertIn('zlib-dev', pkglist, "Listed packages: %s" % result.output)
+        # No pkgspec, runtime
+        result = runCmd('oe-pkgdata-util list-pkgs -r')
+        pkglist = result.output.split()
+        self.assertIn('libz-dev', pkglist, "Listed packages: %s" % result.output)
+        # With recipe specified
+        result = runCmd('oe-pkgdata-util list-pkgs -p zlib')
+        pkglist = sorted(result.output.split())
+        try:
+            pkglist.remove('zlib-ptest') # in case ptest is disabled
+        except ValueError:
+            pass
+        self.assertEqual(pkglist, ['zlib', 'zlib-dbg', 'zlib-dev', 'zlib-doc', 'zlib-staticdev'], "Packages listed after remove: %s" % result.output)
+        # With recipe specified, runtime
+        result = runCmd('oe-pkgdata-util list-pkgs -p zlib -r')
+        pkglist = sorted(result.output.split())
+        try:
+            pkglist.remove('libz-ptest') # in case ptest is disabled
+        except ValueError:
+            pass
+        self.assertEqual(pkglist, ['libz-dbg', 'libz-dev', 'libz-doc', 'libz-staticdev', 'libz1'], "Packages listed after remove: %s" % result.output)
+        # With recipe specified and unpackaged
+        result = runCmd('oe-pkgdata-util list-pkgs -p zlib -u')
+        pkglist = sorted(result.output.split())
+        self.assertIn('zlib-locale', pkglist, "Listed packages: %s" % result.output)
+        # With recipe specified and unpackaged, runtime
+        result = runCmd('oe-pkgdata-util list-pkgs -p zlib -u -r')
+        pkglist = sorted(result.output.split())
+        self.assertIn('libz-locale', pkglist, "Listed packages: %s" % result.output)
+        # With recipe specified and pkgspec
+        result = runCmd('oe-pkgdata-util list-pkgs -p zlib "*-d*"')
+        pkglist = sorted(result.output.split())
+        self.assertEqual(pkglist, ['zlib-dbg', 'zlib-dev', 'zlib-doc'], "Packages listed: %s" % result.output)
+        # With recipe specified and pkgspec, runtime
+        result = runCmd('oe-pkgdata-util list-pkgs -p zlib -r "*-d*"')
+        pkglist = sorted(result.output.split())
+        self.assertEqual(pkglist, ['libz-dbg', 'libz-dev', 'libz-doc'], "Packages listed: %s" % result.output)
+
+    @OETestID(1201)
+    def test_list_pkg_files(self):
+        def splitoutput(output):
+            files = {}
+            curpkg = None
+            for line in output.splitlines():
+                if line.startswith('\t'):
+                    self.assertTrue(curpkg, 'Unexpected non-package line:\n%s' % line)
+                    files[curpkg].append(line.strip())
+                else:
+                    self.assertTrue(line.rstrip().endswith(':'), 'Invalid package line in output:\n%s' % line)
+                    curpkg = line.split(':')[0]
+                    files[curpkg] = []
+            return files
+        bb_vars = get_bb_vars(['base_libdir', 'libdir', 'includedir', 'mandir'])
+        base_libdir = bb_vars['base_libdir']
+        libdir = bb_vars['libdir']
+        includedir = bb_vars['includedir']
+        mandir = bb_vars['mandir']
+        # Test recipe-space package name
+        result = runCmd('oe-pkgdata-util list-pkg-files zlib-dev zlib-doc')
+        files = splitoutput(result.output)
+        self.assertIn('zlib-dev', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('zlib-doc', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn(os.path.join(includedir, 'zlib.h'), files['zlib-dev'])
+        self.assertIn(os.path.join(mandir, 'man3/zlib.3'), files['zlib-doc'])
+        # Test runtime package name
+        result = runCmd('oe-pkgdata-util list-pkg-files -r libz1 libz-dev')
+        files = splitoutput(result.output)
+        self.assertIn('libz1', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('libz-dev', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertGreater(len(files['libz1']), 1)
+        libspec = os.path.join(base_libdir, 'libz.so.1.*')
+        found = False
+        for fileitem in files['libz1']:
+            if fnmatch.fnmatchcase(fileitem, libspec):
+                found = True
+                break
+        self.assertTrue(found, 'Could not find zlib library file %s in libz1 package file list: %s' % (libspec, files['libz1']))
+        self.assertIn(os.path.join(includedir, 'zlib.h'), files['libz-dev'])
+        # Test recipe
+        result = runCmd('oe-pkgdata-util list-pkg-files -p zlib')
+        files = splitoutput(result.output)
+        self.assertIn('zlib-dbg', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('zlib-doc', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('zlib-dev', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('zlib-staticdev', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('zlib', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertNotIn('zlib-locale', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        # (ignore ptest, might not be there depending on config)
+        self.assertIn(os.path.join(includedir, 'zlib.h'), files['zlib-dev'])
+        self.assertIn(os.path.join(mandir, 'man3/zlib.3'), files['zlib-doc'])
+        self.assertIn(os.path.join(libdir, 'libz.a'), files['zlib-staticdev'])
+        # Test recipe, runtime
+        result = runCmd('oe-pkgdata-util list-pkg-files -p zlib -r')
+        files = splitoutput(result.output)
+        self.assertIn('libz-dbg', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('libz-doc', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('libz-dev', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('libz-staticdev', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('libz1', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertNotIn('libz-locale', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn(os.path.join(includedir, 'zlib.h'), files['libz-dev'])
+        self.assertIn(os.path.join(mandir, 'man3/zlib.3'), files['libz-doc'])
+        self.assertIn(os.path.join(libdir, 'libz.a'), files['libz-staticdev'])
+        # Test recipe, unpackaged
+        result = runCmd('oe-pkgdata-util list-pkg-files -p zlib -u')
+        files = splitoutput(result.output)
+        self.assertIn('zlib-dbg', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('zlib-doc', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('zlib-dev', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('zlib-staticdev', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('zlib', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('zlib-locale', list(files.keys()), "listed pkgs. files: %s" %result.output) # this is the key one
+        self.assertIn(os.path.join(includedir, 'zlib.h'), files['zlib-dev'])
+        self.assertIn(os.path.join(mandir, 'man3/zlib.3'), files['zlib-doc'])
+        self.assertIn(os.path.join(libdir, 'libz.a'), files['zlib-staticdev'])
+        # Test recipe, runtime, unpackaged
+        result = runCmd('oe-pkgdata-util list-pkg-files -p zlib -r -u')
+        files = splitoutput(result.output)
+        self.assertIn('libz-dbg', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('libz-doc', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('libz-dev', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('libz-staticdev', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('libz1', list(files.keys()), "listed pkgs. files: %s" %result.output)
+        self.assertIn('libz-locale', list(files.keys()), "listed pkgs. files: %s" %result.output) # this is the key one
+        self.assertIn(os.path.join(includedir, 'zlib.h'), files['libz-dev'])
+        self.assertIn(os.path.join(mandir, 'man3/zlib.3'), files['libz-doc'])
+        self.assertIn(os.path.join(libdir, 'libz.a'), files['libz-staticdev'])
+
+    @OETestID(1200)
+    def test_glob(self):
+        tempdir = tempfile.mkdtemp(prefix='pkgdataqa')
+        self.track_for_cleanup(tempdir)
+        pkglistfile = os.path.join(tempdir, 'pkglist')
+        with open(pkglistfile, 'w') as f:
+            f.write('libz1\n')
+            f.write('busybox\n')
+        result = runCmd('oe-pkgdata-util glob %s "*-dev"' % pkglistfile)
+        desiredresult = ['libz-dev', 'busybox-dev']
+        self.assertEqual(sorted(result.output.split()), sorted(desiredresult))
+        # The following should not error (because when we use this during rootfs construction, sometimes the complementary package won't exist)
+        result = runCmd('oe-pkgdata-util glob %s "*-nonexistent"' % pkglistfile)
+        self.assertEqual(result.output, '')
+        # Test exclude option
+        result = runCmd('oe-pkgdata-util glob %s "*-dev *-dbg" -x "^libz"' % pkglistfile)
+        resultlist = result.output.split()
+        self.assertNotIn('libz-dev', resultlist)
+        self.assertNotIn('libz-dbg', resultlist)
+
+    @OETestID(1206)
+    def test_specify_pkgdatadir(self):
+        result = runCmd('oe-pkgdata-util -p %s lookup-pkg zlib' % get_bb_var('PKGDATA_DIR'))
+        self.assertEqual(result.output, 'libz1')
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/prservice.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/prservice.py
new file mode 100644
index 0000000..479e520
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/prservice.py
@@ -0,0 +1,131 @@
+import os
+import re
+import shutil
+import datetime
+
+import oeqa.utils.ftools as ftools
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var
+from oeqa.core.decorator.oeid import OETestID
+from oeqa.utils.network import get_free_port
+
+class BitbakePrTests(OESelftestTestCase):
+
+    @classmethod
+    def setUpClass(cls):
+        super(BitbakePrTests, cls).setUpClass()
+        cls.pkgdata_dir = get_bb_var('PKGDATA_DIR')
+
+    def get_pr_version(self, package_name):
+        package_data_file = os.path.join(self.pkgdata_dir, 'runtime', package_name)
+        package_data = ftools.read_file(package_data_file)
+        find_pr = re.search("PKGR: r[0-9]+\.([0-9]+)", package_data)
+        self.assertTrue(find_pr, "No PKG revision found in %s" % package_data_file)
+        return int(find_pr.group(1))
+
+    def get_task_stamp(self, package_name, recipe_task):
+        stampdata = get_bb_var('STAMP', target=package_name).split('/')
+        prefix = stampdata[-1]
+        package_stamps_path = "/".join(stampdata[:-1])
+        stamps = []
+        for stamp in os.listdir(package_stamps_path):
+            find_stamp = re.match("%s\.%s\.([a-z0-9]{32})" % (re.escape(prefix), recipe_task), stamp)
+            if find_stamp:
+                stamps.append(find_stamp.group(1))
+        self.assertFalse(len(stamps) == 0, msg="Could not find stamp for task %s for recipe %s" % (recipe_task, package_name))
+        self.assertFalse(len(stamps) > 1, msg="Found multiple %s stamps for the %s recipe in the %s directory." % (recipe_task, package_name, package_stamps_path))
+        return str(stamps[0])
+
+    def increment_package_pr(self, package_name):
+        inc_data = "do_package_append() {\n    bb.build.exec_func('do_test_prserv', d)\n}\ndo_test_prserv() {\necho \"The current date is: %s\"\n}" % datetime.datetime.now()
+        self.write_recipeinc(package_name, inc_data)
+        res = bitbake(package_name, ignore_status=True)
+        self.delete_recipeinc(package_name)
+        self.assertEqual(res.status, 0, msg=res.output)
+
+    def config_pr_tests(self, package_name, package_type='rpm', pr_socket='localhost:0'):
+        config_package_data = 'PACKAGE_CLASSES = "package_%s"' % package_type
+        self.write_config(config_package_data)
+        config_server_data = 'PRSERV_HOST = "%s"' % pr_socket
+        self.append_config(config_server_data)
+
+    def run_test_pr_service(self, package_name, package_type='rpm', track_task='do_package', pr_socket='localhost:0'):
+        self.config_pr_tests(package_name, package_type, pr_socket)
+
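+        # Two consecutive builds should each bump the PR sub-revision by
+        # exactly one and produce different task stamps.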
+        self.increment_package_pr(package_name)
+        pr_1 = self.get_pr_version(package_name)
+        stamp_1 = self.get_task_stamp(package_name, track_task)
+
+        self.increment_package_pr(package_name)
+        pr_2 = self.get_pr_version(package_name)
+        stamp_2 = self.get_task_stamp(package_name, track_task)
+
+        self.assertTrue(pr_2 - pr_1 == 1, "Step between consecutive package revisions is not 1")
+        self.assertTrue(stamp_1 != stamp_2, "Different pkg rev. but same stamp: %s" % stamp_1)
+
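+    # Export the PR database, optionally remove the live copy, re-import the export, and check revisions continue to increment from the exported values.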
+    def run_test_pr_export_import(self, package_name, replace_current_db=True):
+        self.config_pr_tests(package_name)
+
+        self.increment_package_pr(package_name)
+        pr_1 = self.get_pr_version(package_name)
+
+        exported_db_path = os.path.join(self.builddir, 'export.inc')
+        export_result = runCmd("bitbake-prserv-tool export %s" % exported_db_path, ignore_status=True)
+        self.assertEqual(export_result.status, 0, msg="PR Service database export failed: %s" % export_result.output)
+        self.assertTrue(os.path.exists(exported_db_path))
+
+        if replace_current_db:
+            current_db_path = os.path.join(get_bb_var('PERSISTENT_DIR'), 'prserv.sqlite3')
+            self.assertTrue(os.path.exists(current_db_path), msg="Path to current PR Service database is invalid: %s" % current_db_path)
+            os.remove(current_db_path)
+
+        import_result = runCmd("bitbake-prserv-tool import %s" % exported_db_path, ignore_status=True)
+        os.remove(exported_db_path)
+        self.assertEqual(import_result.status, 0, msg="PR Service database import failed: %s" % import_result.output)
+
+        self.increment_package_pr(package_name)
+        pr_2 = self.get_pr_version(package_name)
+
+        self.assertTrue(pr_2 - pr_1 == 1, "Step between consecutive package revisions is not 1")
+
+    @OETestID(930)
+    def test_import_export_replace_db(self):
+        self.run_test_pr_export_import('m4')
+
+    @OETestID(931)
+    def test_import_export_override_db(self):
+        self.run_test_pr_export_import('m4', replace_current_db=False)
+
+    @OETestID(932)
+    def test_pr_service_rpm_arch_dep(self):
+        self.run_test_pr_service('m4', 'rpm', 'do_package')
+
+    @OETestID(934)
+    def test_pr_service_deb_arch_dep(self):
+        self.run_test_pr_service('m4', 'deb', 'do_package')
+
+    @OETestID(933)
+    def test_pr_service_ipk_arch_dep(self):
+        self.run_test_pr_service('m4', 'ipk', 'do_package')
+
+    @OETestID(935)
+    def test_pr_service_rpm_arch_indep(self):
+        self.run_test_pr_service('xcursor-transparent-theme', 'rpm', 'do_package')
+
+    @OETestID(937)
+    def test_pr_service_deb_arch_indep(self):
+        self.run_test_pr_service('xcursor-transparent-theme', 'deb', 'do_package')
+
+    @OETestID(936)
+    def test_pr_service_ipk_arch_indep(self):
+        self.run_test_pr_service('xcursor-transparent-theme', 'ipk', 'do_package')
+
+    @OETestID(1419)
+    def test_stopping_prservice_message(self):
+        port = get_free_port()
+
+        runCmd('bitbake-prserv --host localhost --port %s --loglevel=DEBUG --start' % port)
+        ret = runCmd('bitbake-prserv --host localhost --port %s --loglevel=DEBUG --stop' % port)
+
+        self.assertEqual(ret.status, 0)
+
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/recipetool.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/recipetool.py
new file mode 100644
index 0000000..754ea94
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/recipetool.py
@@ -0,0 +1,698 @@
+import os
+import shutil
+import tempfile
+import urllib.parse
+
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var
+from oeqa.utils.commands import get_bb_vars, create_temp_layer
+from oeqa.core.decorator.oeid import OETestID
+from oeqa.selftest.cases import devtool
+
+templayerdir = None
+
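+# Create a throwaway test layer once for the whole module, register it with bitbake-layers, and remove it again in tearDownModule().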
+def setUpModule():
+    global templayerdir
+    templayerdir = tempfile.mkdtemp(prefix='recipetoolqa')
+    create_temp_layer(templayerdir, 'selftestrecipetool')
+    runCmd('bitbake-layers add-layer %s' % templayerdir)
+
+
+def tearDownModule():
+    runCmd('bitbake-layers remove-layer %s' % templayerdir, ignore_status=True)
+    runCmd('rm -rf %s' % templayerdir)
+
+
+class RecipetoolBase(devtool.DevtoolBase):
+
+    def setUpLocal(self):
+        super(RecipetoolBase, self).setUpLocal()
+        self.templayerdir = templayerdir
+        self.tempdir = tempfile.mkdtemp(prefix='recipetoolqa')
+        self.track_for_cleanup(self.tempdir)
+        self.testfile = os.path.join(self.tempdir, 'testfile')
+        with open(self.testfile, 'w') as f:
+            f.write('Test file\n')
+
+    def tearDownLocal(self):
+        runCmd('rm -rf %s/recipes-*' % self.templayerdir)
+        super(RecipetoolBase, self).tearDownLocal()
+
+    def _try_recipetool_appendcmd(self, cmd, testrecipe, expectedfiles, expectedlines=None):
+        result = runCmd(cmd)
+        self.assertNotIn('Traceback', result.output)
+
+        # Check the bbappend was created and applies properly
+        recipefile = get_bb_var('FILE', testrecipe)
+        bbappendfile = self._check_bbappend(testrecipe, recipefile, self.templayerdir)
+
+        # Check the bbappend contents
+        if expectedlines is not None:
+            with open(bbappendfile, 'r') as f:
+                self.assertEqual(expectedlines, f.readlines(), "Expected lines are not present in %s" % bbappendfile)
+
+        # Check file was copied
+        filesdir = os.path.join(os.path.dirname(bbappendfile), testrecipe)
+        for expectedfile in expectedfiles:
+            self.assertTrue(os.path.isfile(os.path.join(filesdir, expectedfile)), 'Expected file %s to be copied next to bbappend, but it wasn\'t' % expectedfile)
+
+        # Check no other files created
+        createdfiles = []
+        for root, _, files in os.walk(filesdir):
+            for f in files:
+                createdfiles.append(os.path.relpath(os.path.join(root, f), filesdir))
+        self.assertEqual(sorted(createdfiles), sorted(expectedfiles))
+
+        return bbappendfile, result.output
+
+
+class RecipetoolTests(RecipetoolBase):
+
+    @classmethod
+    def setUpClass(cls):
+        super(RecipetoolTests, cls).setUpClass()
+        # Ensure we have the right data in shlibs/pkgdata
+        cls.logger.info('Running bitbake to generate pkgdata')
+        bitbake('-c packagedata base-files coreutils busybox selftest-recipetool-appendfile')
+        bb_vars = get_bb_vars(['COREBASE', 'BBPATH'])
+        cls.corebase = bb_vars['COREBASE']
+        cls.bbpath = bb_vars['BBPATH']
+
+    def _try_recipetool_appendfile(self, testrecipe, destfile, newfile, options, expectedlines, expectedfiles):
+        cmd = 'recipetool appendfile %s %s %s %s' % (self.templayerdir, destfile, newfile, options)
+        return self._try_recipetool_appendcmd(cmd, testrecipe, expectedfiles, expectedlines)
+
+    def _try_recipetool_appendfile_fail(self, destfile, newfile, checkerror):
+        cmd = 'recipetool appendfile %s %s %s' % (self.templayerdir, destfile, newfile)
+        result = runCmd(cmd, ignore_status=True)
+        self.assertNotEqual(result.status, 0, 'Command "%s" should have failed but didn\'t' % cmd)
+        self.assertNotIn('Traceback', result.output)
+        for errorstr in checkerror:
+            self.assertIn(errorstr, result.output)
+
+    @OETestID(1177)
+    def test_recipetool_appendfile_basic(self):
+        # Basic test
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                        '\n']
+        _, output = self._try_recipetool_appendfile('base-files', '/etc/motd', self.testfile, '', expectedlines, ['motd'])
+        self.assertNotIn('WARNING: ', output)
+
+    @OETestID(1183)
+    def test_recipetool_appendfile_invalid(self):
+        # Test some commands that should error
+        self._try_recipetool_appendfile_fail('/etc/passwd', self.testfile, ['ERROR: /etc/passwd cannot be handled by this tool', 'useradd', 'extrausers'])
+        self._try_recipetool_appendfile_fail('/etc/timestamp', self.testfile, ['ERROR: /etc/timestamp cannot be handled by this tool'])
+        self._try_recipetool_appendfile_fail('/dev/console', self.testfile, ['ERROR: /dev/console cannot be handled by this tool'])
+
+    @OETestID(1176)
+    def test_recipetool_appendfile_alternatives(self):
+        # Now try with a file we know should be an alternative
+        # (this is very much a fake example, but one we know is reliably an alternative)
+        self._try_recipetool_appendfile_fail('/bin/ls', self.testfile, ['ERROR: File /bin/ls is an alternative possibly provided by the following recipes:', 'coreutils', 'busybox'])
+        # Need a test file - should be executable
+        testfile2 = os.path.join(self.corebase, 'oe-init-build-env')
+        testfile2name = os.path.basename(testfile2)
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n',
+                         'SRC_URI += "file://%s"\n' % testfile2name,
+                         '\n',
+                         'do_install_append() {\n',
+                         '    install -d ${D}${base_bindir}\n',
+                         '    install -m 0755 ${WORKDIR}/%s ${D}${base_bindir}/ls\n' % testfile2name,
+                         '}\n']
+        self._try_recipetool_appendfile('coreutils', '/bin/ls', testfile2, '-r coreutils', expectedlines, [testfile2name])
+        # Now try bbappending the same file again, contents should not change
+        bbappendfile, _ = self._try_recipetool_appendfile('coreutils', '/bin/ls', self.testfile, '-r coreutils', expectedlines, [testfile2name])
+        # But the copied file should have been replaced with the new content
+        copiedfile = os.path.join(os.path.dirname(bbappendfile), 'coreutils', testfile2name)
+        result = runCmd('diff -q %s %s' % (testfile2, copiedfile), ignore_status=True)
+        self.assertNotEqual(result.status, 0, 'New file should have been copied but was not: %s' % result.output)
+
+    @OETestID(1178)
+    def test_recipetool_appendfile_binary(self):
+        # Try appending a binary file
+        # /bin/ls can be a symlink to /usr/bin/ls
+        ls = os.path.realpath("/bin/ls")
+        result = runCmd('recipetool appendfile %s /bin/ls %s -r coreutils' % (self.templayerdir, ls))
+        self.assertIn('WARNING: ', result.output)
+        self.assertIn('is a binary', result.output)
+
+    @OETestID(1173)
+    def test_recipetool_appendfile_add(self):
+        # Try arbitrary file add to a recipe
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n',
+                         'SRC_URI += "file://testfile"\n',
+                         '\n',
+                         'do_install_append() {\n',
+                         '    install -d ${D}${datadir}\n',
+                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/something\n',
+                         '}\n']
+        self._try_recipetool_appendfile('netbase', '/usr/share/something', self.testfile, '-r netbase', expectedlines, ['testfile'])
+        # Try adding another file, this time where the source file is executable
+        # (so we're testing that, plus modifying an existing bbappend)
+        testfile2 = os.path.join(self.corebase, 'oe-init-build-env')
+        testfile2name = os.path.basename(testfile2)
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n',
+                         'SRC_URI += "file://testfile \\\n',
+                         '            file://%s \\\n' % testfile2name,
+                         '            "\n',
+                         '\n',
+                         'do_install_append() {\n',
+                         '    install -d ${D}${datadir}\n',
+                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/something\n',
+                         '    install -m 0755 ${WORKDIR}/%s ${D}${datadir}/scriptname\n' % testfile2name,
+                         '}\n']
+        self._try_recipetool_appendfile('netbase', '/usr/share/scriptname', testfile2, '-r netbase', expectedlines, ['testfile', testfile2name])
+
+    @OETestID(1174)
+    def test_recipetool_appendfile_add_bindir(self):
+        # Try arbitrary file add to a recipe, this time to a location where it should be installed as executable
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n',
+                         'SRC_URI += "file://testfile"\n',
+                         '\n',
+                         'do_install_append() {\n',
+                         '    install -d ${D}${bindir}\n',
+                         '    install -m 0755 ${WORKDIR}/testfile ${D}${bindir}/selftest-recipetool-testbin\n',
+                         '}\n']
+        _, output = self._try_recipetool_appendfile('netbase', '/usr/bin/selftest-recipetool-testbin', self.testfile, '-r netbase', expectedlines, ['testfile'])
+        self.assertNotIn('WARNING: ', output)
+
+    @OETestID(1175)
+    def test_recipetool_appendfile_add_machine(self):
+        # Try arbitrary file add to a recipe, this time with a machine-specific override so the file is only added for that machine
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n',
+                         'PACKAGE_ARCH = "${MACHINE_ARCH}"\n',
+                         '\n',
+                         'SRC_URI_append_mymachine = " file://testfile"\n',
+                         '\n',
+                         'do_install_append_mymachine() {\n',
+                         '    install -d ${D}${datadir}\n',
+                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/something\n',
+                         '}\n']
+        _, output = self._try_recipetool_appendfile('netbase', '/usr/share/something', self.testfile, '-r netbase -m mymachine', expectedlines, ['mymachine/testfile'])
+        self.assertNotIn('WARNING: ', output)
+
+    @OETestID(1184)
+    def test_recipetool_appendfile_orig(self):
+        # A file that's in SRC_URI and in do_install with the same name
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n']
+        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-orig', self.testfile, '', expectedlines, ['selftest-replaceme-orig'])
+        self.assertNotIn('WARNING: ', output)
+
+    @OETestID(1191)
+    def test_recipetool_appendfile_todir(self):
+        # A file that's in SRC_URI and in do_install with destination directory rather than file
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n']
+        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-todir', self.testfile, '', expectedlines, ['selftest-replaceme-todir'])
+        self.assertNotIn('WARNING: ', output)
+
+    @OETestID(1187)
+    def test_recipetool_appendfile_renamed(self):
+        # A file that's in SRC_URI with a different name to the destination file
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n']
+        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-renamed', self.testfile, '', expectedlines, ['file1'])
+        self.assertNotIn('WARNING: ', output)
+
+    @OETestID(1190)
+    def test_recipetool_appendfile_subdir(self):
+        # A file that's in SRC_URI in a subdir
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n',
+                         'SRC_URI += "file://testfile"\n',
+                         '\n',
+                         'do_install_append() {\n',
+                         '    install -d ${D}${datadir}\n',
+                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/selftest-replaceme-subdir\n',
+                         '}\n']
+        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-subdir', self.testfile, '', expectedlines, ['testfile'])
+        self.assertNotIn('WARNING: ', output)
+
+    @OETestID(1189)
+    def test_recipetool_appendfile_src_glob(self):
+        # A file that's in SRC_URI as a glob
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n',
+                         'SRC_URI += "file://testfile"\n',
+                         '\n',
+                         'do_install_append() {\n',
+                         '    install -d ${D}${datadir}\n',
+                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/selftest-replaceme-src-globfile\n',
+                         '}\n']
+        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-src-globfile', self.testfile, '', expectedlines, ['testfile'])
+        self.assertNotIn('WARNING: ', output)
+
+    @OETestID(1181)
+    def test_recipetool_appendfile_inst_glob(self):
+        # A file that's in do_install as a glob
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n']
+        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-inst-globfile', self.testfile, '', expectedlines, ['selftest-replaceme-inst-globfile'])
+        self.assertNotIn('WARNING: ', output)
+
+    @OETestID(1182)
+    def test_recipetool_appendfile_inst_todir_glob(self):
+        # A file that's in do_install as a glob with destination as a directory
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n']
+        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-inst-todir-globfile', self.testfile, '', expectedlines, ['selftest-replaceme-inst-todir-globfile'])
+        self.assertNotIn('WARNING: ', output)
+
+    @OETestID(1185)
+    def test_recipetool_appendfile_patch(self):
+        # A file that's added by a patch in SRC_URI
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n',
+                         'SRC_URI += "file://testfile"\n',
+                         '\n',
+                         'do_install_append() {\n',
+                         '    install -d ${D}${sysconfdir}\n',
+                         '    install -m 0644 ${WORKDIR}/testfile ${D}${sysconfdir}/selftest-replaceme-patched\n',
+                         '}\n']
+        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/etc/selftest-replaceme-patched', self.testfile, '', expectedlines, ['testfile'])
+        for line in output.splitlines():
+            if 'WARNING: ' in line:
+                self.assertIn('add-file.patch', line, 'Unexpected warning found in output:\n%s' % line)
+                break
+        else:
+            self.fail('Patch warning not found in output:\n%s' % output)
+
+    @OETestID(1188)
+    def test_recipetool_appendfile_script(self):
+        # Now, a file that's in SRC_URI but installed by a script (so no mention in do_install)
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n',
+                         'SRC_URI += "file://testfile"\n',
+                         '\n',
+                         'do_install_append() {\n',
+                         '    install -d ${D}${datadir}\n',
+                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/selftest-replaceme-scripted\n',
+                         '}\n']
+        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-scripted', self.testfile, '', expectedlines, ['testfile'])
+        self.assertNotIn('WARNING: ', output)
+
+    @OETestID(1180)
+    def test_recipetool_appendfile_inst_func(self):
+        # A file that's installed from a function called by do_install
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n']
+        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-inst-func', self.testfile, '', expectedlines, ['selftest-replaceme-inst-func'])
+        self.assertNotIn('WARNING: ', output)
+
+    @OETestID(1186)
+    def test_recipetool_appendfile_postinstall(self):
+        # A file that's created by a postinstall script (and explicitly mentioned in it)
+        # First try without specifying recipe
+        self._try_recipetool_appendfile_fail('/usr/share/selftest-replaceme-postinst', self.testfile, ['File /usr/share/selftest-replaceme-postinst may be written out in a pre/postinstall script of the following recipes:', 'selftest-recipetool-appendfile'])
+        # Now specify recipe
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n',
+                         'SRC_URI += "file://testfile"\n',
+                         '\n',
+                         'do_install_append() {\n',
+                         '    install -d ${D}${datadir}\n',
+                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/selftest-replaceme-postinst\n',
+                         '}\n']
+        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-postinst', self.testfile, '-r selftest-recipetool-appendfile', expectedlines, ['testfile'])
+
+    @OETestID(1179)
+    def test_recipetool_appendfile_extlayer(self):
+        # Try creating a bbappend in a layer that's not in bblayers.conf and has a different structure
+        exttemplayerdir = os.path.join(self.tempdir, 'extlayer')
+        self._create_temp_layer(exttemplayerdir, False, 'oeselftestextlayer', recipepathspec='metadata/recipes/recipes-*/*')
+        result = runCmd('recipetool appendfile %s /usr/share/selftest-replaceme-orig %s' % (exttemplayerdir, self.testfile))
+        self.assertNotIn('Traceback', result.output)
+        createdfiles = []
+        for root, _, files in os.walk(exttemplayerdir):
+            for f in files:
+                createdfiles.append(os.path.relpath(os.path.join(root, f), exttemplayerdir))
+        createdfiles.remove('conf/layer.conf')
+        expectedfiles = ['metadata/recipes/recipes-test/selftest-recipetool-appendfile/selftest-recipetool-appendfile.bbappend',
+                         'metadata/recipes/recipes-test/selftest-recipetool-appendfile/selftest-recipetool-appendfile/selftest-replaceme-orig']
+        self.assertEqual(sorted(createdfiles), sorted(expectedfiles))
+
+    @OETestID(1192)
+    def test_recipetool_appendfile_wildcard(self):
+
+        def try_appendfile_wc(options):
+            result = runCmd('recipetool appendfile %s /etc/profile %s %s' % (self.templayerdir, self.testfile, options))
+            self.assertNotIn('Traceback', result.output)
+            bbappendfile = None
+            for root, _, files in os.walk(self.templayerdir):
+                for f in files:
+                    if f.endswith('.bbappend'):
+                        bbappendfile = f
+                        break
+            if not bbappendfile:
+                self.fail('No bbappend file created')
+            runCmd('rm -rf %s/recipes-*' % self.templayerdir)
+            return bbappendfile
+
+        # Check without wildcard option
+        recipefn = os.path.basename(get_bb_var('FILE', 'base-files'))
+        filename = try_appendfile_wc('')
+        self.assertEqual(filename, recipefn.replace('.bb', '.bbappend'))
+        # Now check with wildcard option
+        filename = try_appendfile_wc('-w')
+        self.assertEqual(filename, recipefn.split('_')[0] + '_%.bbappend')
+
+    @OETestID(1193)
+    def test_recipetool_create(self):
+        # Try adding a recipe
+        tempsrc = os.path.join(self.tempdir, 'srctree')
+        os.makedirs(tempsrc)
+        recipefile = os.path.join(self.tempdir, 'logrotate_3.12.3.bb')
+        srcuri = 'https://github.com/logrotate/logrotate/releases/download/3.12.3/logrotate-3.12.3.tar.xz'
+        result = runCmd('recipetool create -o %s %s -x %s' % (recipefile, srcuri, tempsrc))
+        self.assertTrue(os.path.isfile(recipefile))
+        checkvars = {}
+        checkvars['LICENSE'] = 'GPLv2'
+        checkvars['LIC_FILES_CHKSUM'] = 'file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263'
+        checkvars['SRC_URI'] = 'https://github.com/logrotate/logrotate/releases/download/${PV}/logrotate-${PV}.tar.xz'
+        checkvars['SRC_URI[md5sum]'] = 'a560c57fac87c45b2fc17406cdf79288'
+        checkvars['SRC_URI[sha256sum]'] = '2e6a401cac9024db2288297e3be1a8ab60e7401ba8e91225218aaf4a27e82a07'
+        self._test_recipe_contents(recipefile, checkvars, [])
+
+    @OETestID(1194)
+    def test_recipetool_create_git(self):
+        if 'x11' not in get_bb_var('DISTRO_FEATURES'):
+            self.skipTest('Test requires x11 as distro feature')
+        # Ensure we have the right data in shlibs/pkgdata
+        bitbake('libpng pango libx11 libxext jpeg libcheck')
+        # Try adding a recipe
+        tempsrc = os.path.join(self.tempdir, 'srctree')
+        os.makedirs(tempsrc)
+        recipefile = os.path.join(self.tempdir, 'libmatchbox.bb')
+        srcuri = 'git://git.yoctoproject.org/libmatchbox'
+        result = runCmd(['recipetool', 'create', '-o', recipefile, srcuri + ";rev=9f7cf8895ae2d39c465c04cc78e918c157420269", '-x', tempsrc])
+        self.assertTrue(os.path.isfile(recipefile), 'recipetool did not create recipe file; output:\n%s' % result.output)
+        checkvars = {}
+        checkvars['LICENSE'] = 'LGPLv2.1'
+        checkvars['LIC_FILES_CHKSUM'] = 'file://COPYING;md5=7fbc338309ac38fefcd64b04bb903e34'
+        checkvars['S'] = '${WORKDIR}/git'
+        checkvars['PV'] = '1.11+git${SRCPV}'
+        checkvars['SRC_URI'] = srcuri
+        checkvars['DEPENDS'] = set(['libcheck', 'libjpeg-turbo', 'libpng', 'libx11', 'libxext', 'pango'])
+        inherits = ['autotools', 'pkgconfig']
+        self._test_recipe_contents(recipefile, checkvars, inherits)
+
+    @OETestID(1392)
+    def test_recipetool_create_simple(self):
+        # Try adding a recipe
+        temprecipe = os.path.join(self.tempdir, 'recipe')
+        os.makedirs(temprecipe)
+        pv = '1.7.3.0'
+        srcuri = 'http://www.dest-unreach.org/socat/download/socat-%s.tar.bz2' % pv
+        result = runCmd('recipetool create %s -o %s' % (srcuri, temprecipe))
+        dirlist = os.listdir(temprecipe)
+        if len(dirlist) > 1:
+            self.fail('recipetool created more than just one file; output:\n%s\ndirlist:\n%s' % (result.output, str(dirlist)))
+        if len(dirlist) < 1 or not os.path.isfile(os.path.join(temprecipe, dirlist[0])):
+            self.fail('recipetool did not create recipe file; output:\n%s\ndirlist:\n%s' % (result.output, str(dirlist)))
+        self.assertEqual(dirlist[0], 'socat_%s.bb' % pv, 'Recipe file incorrectly named')
+        checkvars = {}
+        checkvars['LICENSE'] = set(['Unknown', 'GPLv2'])
+        checkvars['LIC_FILES_CHKSUM'] = set(['file://COPYING.OpenSSL;md5=5c9bccc77f67a8328ef4ebaf468116f4', 'file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263'])
+        # We don't check DEPENDS since they are variable for this recipe depending on what's in the sysroot
+        checkvars['S'] = None
+        checkvars['SRC_URI'] = srcuri.replace(pv, '${PV}')
+        inherits = ['autotools']
+        self._test_recipe_contents(os.path.join(temprecipe, dirlist[0]), checkvars, inherits)
+
+    @OETestID(1418)
+    def test_recipetool_create_cmake(self):
+        # Try adding a recipe
+        temprecipe = os.path.join(self.tempdir, 'recipe')
+        os.makedirs(temprecipe)
+        recipefile = os.path.join(temprecipe, 'navit_0.5.0.bb')
+        srcuri = 'http://downloads.sourceforge.net/project/navit/v0.5.0/navit-0.5.0.tar.gz'
+        result = runCmd('recipetool create -o %s %s' % (temprecipe, srcuri))
+        self.assertTrue(os.path.isfile(recipefile))
+        checkvars = {}
+        checkvars['LICENSE'] = set(['Unknown', 'GPLv2', 'LGPLv2'])
+        checkvars['SRC_URI'] = 'http://downloads.sourceforge.net/project/navit/v${PV}/navit-${PV}.tar.gz'
+        checkvars['SRC_URI[md5sum]'] = '242f398e979a6b8c0f3c802b63435b68'
+        checkvars['SRC_URI[sha256sum]'] = '13353481d7fc01a4f64e385dda460b51496366bba0fd2cc85a89a0747910e94d'
+        checkvars['DEPENDS'] = set(['freetype', 'zlib', 'openssl', 'glib-2.0', 'virtual/libgl', 'virtual/egl', 'gtk+', 'libpng', 'libsdl', 'freeglut', 'dbus-glib'])
+        inherits = ['cmake', 'python-dir', 'gettext', 'pkgconfig']
+        self._test_recipe_contents(recipefile, checkvars, inherits)
+
+    @OETestID(1638)
+    def test_recipetool_create_github(self):
+        # Basic test to see if github URL mangling works
+        temprecipe = os.path.join(self.tempdir, 'recipe')
+        os.makedirs(temprecipe)
+        recipefile = os.path.join(temprecipe, 'meson_git.bb')
+        srcuri = 'https://github.com/mesonbuild/meson;rev=0.32.0'
+        result = runCmd(['recipetool', 'create', '-o', temprecipe, srcuri])
+        self.assertTrue(os.path.isfile(recipefile))
+        checkvars = {}
+        checkvars['LICENSE'] = set(['Apache-2.0'])
+        checkvars['SRC_URI'] = 'git://github.com/mesonbuild/meson;protocol=https'
+        inherits = ['setuptools']
+        self._test_recipe_contents(recipefile, checkvars, inherits)
+
+    @OETestID(1639)
+    def test_recipetool_create_github_tarball(self):
+        # Basic test to ensure github URL mangling doesn't apply to release tarballs
+        temprecipe = os.path.join(self.tempdir, 'recipe')
+        os.makedirs(temprecipe)
+        pv = '0.32.0'
+        recipefile = os.path.join(temprecipe, 'meson_%s.bb' % pv)
+        srcuri = 'https://github.com/mesonbuild/meson/releases/download/%s/meson-%s.tar.gz' % (pv, pv)
+        result = runCmd('recipetool create -o %s %s' % (temprecipe, srcuri))
+        self.assertTrue(os.path.isfile(recipefile))
+        checkvars = {}
+        checkvars['LICENSE'] = set(['Apache-2.0'])
+        checkvars['SRC_URI'] = 'https://github.com/mesonbuild/meson/releases/download/${PV}/meson-${PV}.tar.gz'
+        inherits = ['setuptools']
+        self._test_recipe_contents(recipefile, checkvars, inherits)
+
+    @OETestID(1637)
+    def test_recipetool_create_git_http(self):
+        # Basic test to check http git URL mangling works
+        temprecipe = os.path.join(self.tempdir, 'recipe')
+        os.makedirs(temprecipe)
+        recipefile = os.path.join(temprecipe, 'matchbox-terminal_git.bb')
+        srcuri = 'http://git.yoctoproject.org/git/matchbox-terminal'
+        result = runCmd('recipetool create -o %s %s' % (temprecipe, srcuri))
+        self.assertTrue(os.path.isfile(recipefile))
+        checkvars = {}
+        checkvars['LICENSE'] = set(['GPLv2'])
+        checkvars['SRC_URI'] = 'git://git.yoctoproject.org/git/matchbox-terminal;protocol=http'
+        inherits = ['pkgconfig', 'autotools']
+        self._test_recipe_contents(recipefile, checkvars, inherits)
+
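+    # Copy srcfile into basedstdir joined with the given path components, registering any newly created directories and the copied file for cleanup.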
+    def _copy_file_with_cleanup(self, srcfile, basedstdir, *paths):
+        dstdir = basedstdir
+        self.assertTrue(os.path.exists(dstdir))
+        for p in paths:
+            dstdir = os.path.join(dstdir, p)
+            if not os.path.exists(dstdir):
+                os.makedirs(dstdir)
+                self.track_for_cleanup(dstdir)
+        dstfile = os.path.join(dstdir, os.path.basename(srcfile))
+        if srcfile != dstfile:
+            shutil.copy(srcfile, dstfile)
+            self.track_for_cleanup(dstfile)
+
+    @OETestID(1640)
+    def test_recipetool_load_plugin(self):
+        """Test that recipetool loads only the first found plugin in BBPATH."""
+
+        recipetool = runCmd("which recipetool")
+        fromname = runCmd("recipetool --quiet pluginfile")
+        srcfile = fromname.output
+        searchpath = self.bbpath.split(':') + [os.path.dirname(recipetool.output)]
+        plugincontent = []
+        with open(srcfile) as fh:
+            plugincontent = fh.readlines()
+        try:
+            self.assertIn('meta-selftest', srcfile, 'wrong bbpath plugin found')
+            for path in searchpath:
+                self._copy_file_with_cleanup(srcfile, path, 'lib', 'recipetool')
+            result = runCmd("recipetool --quiet count")
+            self.assertEqual(result.output, '1')
+            result = runCmd("recipetool --quiet multiloaded")
+            self.assertEqual(result.output, "no")
+            for path in searchpath:
+                result = runCmd("recipetool --quiet bbdir")
+                self.assertEqual(result.output, path)
+                os.unlink(os.path.join(result.output, 'lib', 'recipetool', 'bbpath.py'))
+        finally:
+            with open(srcfile, 'w') as fh:
+                fh.writelines(plugincontent)
+
+
+class RecipetoolAppendsrcBase(RecipetoolBase):
+    def _try_recipetool_appendsrcfile(self, testrecipe, newfile, destfile, options, expectedlines, expectedfiles):
+        cmd = 'recipetool appendsrcfile %s %s %s %s %s' % (options, self.templayerdir, testrecipe, newfile, destfile)
+        return self._try_recipetool_appendcmd(cmd, testrecipe, expectedfiles, expectedlines)
+
+    def _try_recipetool_appendsrcfiles(self, testrecipe, newfiles, expectedlines=None, expectedfiles=None, destdir=None, options=''):
+
+        if destdir:
+            options += ' -D %s' % destdir
+
+        if expectedfiles is None:
+            expectedfiles = [os.path.basename(f) for f in newfiles]
+
+        cmd = 'recipetool appendsrcfiles %s %s %s %s' % (options, self.templayerdir, testrecipe, ' '.join(newfiles))
+        return self._try_recipetool_appendcmd(cmd, testrecipe, expectedfiles, expectedlines)
+
+    def _try_recipetool_appendsrcfile_fail(self, testrecipe, newfile, destfile, checkerror):
+        cmd = 'recipetool appendsrcfile %s %s %s %s' % (self.templayerdir, testrecipe, newfile, destfile or '')
+        result = runCmd(cmd, ignore_status=True)
+        self.assertNotEqual(result.status, 0, 'Command "%s" should have failed but didn\'t' % cmd)
+        self.assertNotIn('Traceback', result.output)
+        for errorstr in checkerror:
+            self.assertIn(errorstr, result.output)
+
+    @staticmethod
+    def _get_first_file_uri(recipe):
+        '''Return the first file:// in SRC_URI for the specified recipe.'''
+        src_uri = get_bb_var('SRC_URI', recipe).split()
+        for uri in src_uri:
+            p = urllib.parse.urlparse(uri)
+            if p.scheme == 'file':
+                return p.netloc + p.path
+
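+    # Derive the expected destination path and SRC_URI entry (including any subdir= parameter) for an appendsrcfile call, then run it.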
+    def _test_appendsrcfile(self, testrecipe, filename=None, destdir=None, has_src_uri=True, srcdir=None, newfile=None, options=''):
+        if newfile is None:
+            newfile = self.testfile
+
+        if srcdir:
+            if destdir:
+                expected_subdir = os.path.join(srcdir, destdir)
+            else:
+                expected_subdir = srcdir
+        else:
+            options += " -W"
+            expected_subdir = destdir
+
+        if filename:
+            if destdir:
+                destpath = os.path.join(destdir, filename)
+            else:
+                destpath = filename
+        else:
+            filename = os.path.basename(newfile)
+            if destdir:
+                destpath = destdir + os.sep
+            else:
+                destpath = '.' + os.sep
+
+        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
+                         '\n']
+        if has_src_uri:
+            uri = 'file://%s' % filename
+            if expected_subdir:
+                uri += ';subdir=%s' % expected_subdir
+            expectedlines[0:0] = ['SRC_URI += "%s"\n' % uri,
+                                  '\n']
+
+        return self._try_recipetool_appendsrcfile(testrecipe, newfile, destpath, options, expectedlines, [filename])
+
+    def _test_appendsrcfiles(self, testrecipe, newfiles, expectedfiles=None, destdir=None, options=''):
+        if expectedfiles is None:
+            expectedfiles = [os.path.basename(n) for n in newfiles]
+
+        self._try_recipetool_appendsrcfiles(testrecipe, newfiles, expectedfiles=expectedfiles, destdir=destdir, options=options)
+
+        bb_vars = get_bb_vars(['SRC_URI', 'FILE', 'FILESEXTRAPATHS'], testrecipe)
+        src_uri = bb_vars['SRC_URI'].split()
+        for f in expectedfiles:
+            if destdir:
+                self.assertIn('file://%s;subdir=%s' % (f, destdir), src_uri)
+            else:
+                self.assertIn('file://%s' % f, src_uri)
+
+        recipefile = bb_vars['FILE']
+        bbappendfile = self._check_bbappend(testrecipe, recipefile, self.templayerdir)
+        filesdir = os.path.join(os.path.dirname(bbappendfile), testrecipe)
+        filesextrapaths = bb_vars['FILESEXTRAPATHS'].split(':')
+        self.assertIn(filesdir, filesextrapaths)
+
+
+class RecipetoolAppendsrcTests(RecipetoolAppendsrcBase):
+
+    @OETestID(1273)
+    def test_recipetool_appendsrcfile_basic(self):
+        self._test_appendsrcfile('base-files', 'a-file')
+
+    @OETestID(1274)
+    def test_recipetool_appendsrcfile_basic_wildcard(self):
+        testrecipe = 'base-files'
+        self._test_appendsrcfile(testrecipe, 'a-file', options='-w')
+        recipefile = get_bb_var('FILE', testrecipe)
+        bbappendfile = self._check_bbappend(testrecipe, recipefile, self.templayerdir)
+        self.assertEqual(os.path.basename(bbappendfile), '%s_%%.bbappend' % testrecipe)
+
+    @OETestID(1281)
+    def test_recipetool_appendsrcfile_subdir_basic(self):
+        self._test_appendsrcfile('base-files', 'a-file', 'tmp')
+
+    @OETestID(1282)
+    def test_recipetool_appendsrcfile_subdir_basic_dirdest(self):
+        self._test_appendsrcfile('base-files', destdir='tmp')
+
+    @OETestID(1280)
+    def test_recipetool_appendsrcfile_srcdir_basic(self):
+        testrecipe = 'bash'
+        bb_vars = get_bb_vars(['S', 'WORKDIR'], testrecipe)
+        srcdir = bb_vars['S']
+        workdir = bb_vars['WORKDIR']
+        subdir = os.path.relpath(srcdir, workdir)
+        self._test_appendsrcfile(testrecipe, 'a-file', srcdir=subdir)
+
+    @OETestID(1275)
+    def test_recipetool_appendsrcfile_existing_in_src_uri(self):
+        testrecipe = 'base-files'
+        filepath = self._get_first_file_uri(testrecipe)
+        self.assertTrue(filepath, 'Unable to test, no file:// uri found in SRC_URI for %s' % testrecipe)
+        self._test_appendsrcfile(testrecipe, filepath, has_src_uri=False)
+
+    @OETestID(1276)
+    def test_recipetool_appendsrcfile_existing_in_src_uri_diff_params(self):
+        testrecipe = 'base-files'
+        subdir = 'tmp'
+        filepath = self._get_first_file_uri(testrecipe)
+        self.assertTrue(filepath, 'Unable to test, no file:// uri found in SRC_URI for %s' % testrecipe)
+
+        output = self._test_appendsrcfile(testrecipe, filepath, subdir, has_src_uri=False)
+        self.assertTrue(any('with different parameters' in l for l in output))
+
+    @OETestID(1277)
+    def test_recipetool_appendsrcfile_replace_file_srcdir(self):
+        testrecipe = 'bash'
+        filepath = 'Makefile.in'
+        bb_vars = get_bb_vars(['S', 'WORKDIR'], testrecipe)
+        srcdir = bb_vars['S']
+        workdir = bb_vars['WORKDIR']
+        subdir = os.path.relpath(srcdir, workdir)
+
+        self._test_appendsrcfile(testrecipe, filepath, srcdir=subdir)
+        bitbake('%s:do_unpack' % testrecipe)
+        with open(self.testfile, 'r') as testfile, open(os.path.join(srcdir, filepath), 'r') as unpackedfile:
+            self.assertEqual(testfile.read(), unpackedfile.read())
+
+    @OETestID(1278)
+    def test_recipetool_appendsrcfiles_basic(self, destdir=None):
+        newfiles = [self.testfile]
+        for i in range(1, 5):
+            testfile = os.path.join(self.tempdir, 'testfile%d' % i)
+            with open(testfile, 'w') as f:
+                f.write('Test file %d\n' % i)
+            newfiles.append(testfile)
+        self._test_appendsrcfiles('gcc', newfiles, destdir=destdir, options='-W')
+
+    @OETestID(1279)
+    def test_recipetool_appendsrcfiles_basic_subdir(self):
+        self.test_recipetool_appendsrcfiles_basic(destdir='testdir')
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/runcmd.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/runcmd.py
new file mode 100644
index 0000000..d76d706
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/runcmd.py
@@ -0,0 +1,134 @@
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd
+from oeqa.utils import CommandError
+from oeqa.core.decorator.oeid import OETestID
+
+import subprocess
+import threading
+import time
+import signal
+
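+# Minimal in-memory logger that records info/error messages so tests can assert on what runCmd() writes to output_log.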
+class MemLogger(object):
+    def __init__(self):
+        self.info_msgs = []
+        self.error_msgs = []
+
+    def info(self, msg):
+        self.info_msgs.append(msg)
+
+    def error(self, msg):
+        self.error_msgs.append(msg)
+
+class RunCmdTests(OESelftestTestCase):
+    """ Basic tests for runCmd() utility function """
+
+    # The delta is intentionally smaller than the timeout, to detect cases where
+    # we incorrectly apply the timeout more than once.
+    TIMEOUT = 2
+    DELTA = 1
+
+    @OETestID(1916)
+    def test_result_okay(self):
+        result = runCmd("true")
+        self.assertEqual(result.status, 0)
+
+    @OETestID(1915)
+    def test_result_false(self):
+        result = runCmd("false", ignore_status=True)
+        self.assertEqual(result.status, 1)
+
+    @OETestID(1917)
+    def test_shell(self):
+        # A shell is used for all string commands.
+        result = runCmd("false; true", ignore_status=True)
+        self.assertEqual(result.status, 0)
+
+    @OETestID(1910)
+    def test_no_shell(self):
+        self.assertRaises(FileNotFoundError,
+                          runCmd, "false; true", shell=False)
+
+    @OETestID(1906)
+    def test_list_not_found(self):
+        self.assertRaises(FileNotFoundError,
+                          runCmd, ["false; true"])
+
+    @OETestID(1907)
+    def test_list_okay(self):
+        result = runCmd(["true"])
+        self.assertEqual(result.status, 0)
+
+    @OETestID(1913)
+    def test_result_assertion(self):
+        self.assertRaisesRegex(AssertionError, "Command 'echo .* false' returned non-zero exit status 1:\nfoobar",
+                                runCmd, "echo foobar >&2; false", shell=True)
+
+    @OETestID(1914)
+    def test_result_exception(self):
+        self.assertRaisesRegex(CommandError, "Command 'echo .* false' returned non-zero exit status 1 with output: foobar",
+                                runCmd, "echo foobar >&2; false", shell=True, assert_error=False)
+
+    @OETestID(1911)
+    def test_output(self):
+        result = runCmd("echo stdout; echo stderr >&2", shell=True)
+        self.assertEqual("stdout\nstderr", result.output)
+        self.assertEqual("", result.error)
+
+    @OETestID(1912)
+    def test_output_split(self):
+        result = runCmd("echo stdout; echo stderr >&2", shell=True, stderr=subprocess.PIPE)
+        self.assertEqual("stdout", result.output)
+        self.assertEqual("stderr", result.error)
+
+    @OETestID(1920)
+    def test_timeout(self):
+        numthreads = threading.active_count()
+        start = time.time()
+        # Killing a hanging process only works when not using a shell?!
+        result = runCmd(['sleep', '60'], timeout=self.TIMEOUT, ignore_status=True)
+        self.assertEqual(result.status, -signal.SIGTERM)
+        end = time.time()
+        self.assertLess(end - start, self.TIMEOUT + self.DELTA)
+        self.assertEqual(numthreads, threading.active_count())
+
+    @OETestID(1921)
+    def test_timeout_split(self):
+        numthreads = threading.active_count()
+        start = time.time()
+        # Killing a hanging process only works when not using a shell?!
+        result = runCmd(['sleep', '60'], timeout=self.TIMEOUT, ignore_status=True, stderr=subprocess.PIPE)
+        self.assertEqual(result.status, -signal.SIGTERM)
+        end = time.time()
+        self.assertLess(end - start, self.TIMEOUT + self.DELTA)
+        self.assertEqual(numthreads, threading.active_count())
+
+    @OETestID(1918)
+    def test_stdin(self):
+        numthreads = threading.active_count()
+        result = runCmd("cat", data=b"hello world", timeout=self.TIMEOUT)
+        self.assertEqual("hello world", result.output)
+        self.assertEqual(numthreads, threading.active_count())
+
+    @OETestID(1919)
+    def test_stdin_timeout(self):
+        numthreads = threading.active_count()
+        start = time.time()
+        result = runCmd(['sleep', '60'], data=b"hello world", timeout=self.TIMEOUT, ignore_status=True)
+        self.assertEqual(result.status, -signal.SIGTERM)
+        end = time.time()
+        self.assertLess(end - start, self.TIMEOUT + self.DELTA)
+        self.assertEqual(numthreads, threading.active_count())
+
+    @OETestID(1908)
+    def test_log(self):
+        log = MemLogger()
+        result = runCmd("echo stdout; echo stderr >&2", shell=True, output_log=log)
+        self.assertEqual(["Running: echo stdout; echo stderr >&2", "stdout", "stderr"], log.info_msgs)
+        self.assertEqual([], log.error_msgs)
+
+    @OETestID(1909)
+    def test_log_split(self):
+        log = MemLogger()
+        result = runCmd("echo stdout; echo stderr >&2", shell=True, output_log=log, stderr=subprocess.PIPE)
+        self.assertEqual(["Running: echo stdout; echo stderr >&2", "stdout"], log.info_msgs)
+        self.assertEqual(["stderr"], log.error_msgs)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/runqemu.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/runqemu.py
new file mode 100644
index 0000000..47d41f5
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/runqemu.py
@@ -0,0 +1,138 @@
+#
+# Copyright (c) 2017 Wind River Systems, Inc.
+#
+
+import os
+import re
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import bitbake, runqemu, get_bb_var
+from oeqa.core.decorator.oeid import OETestID
+
+class RunqemuTests(OESelftestTestCase):
+    """Runqemu test class"""
+
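+    # Class-level state: the image is built only once (on the first setUpLocal call) and reused by the remaining test cases.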
+    image_is_ready = False
+    deploy_dir_image = ''
+    # We only want to print runqemu stdout/stderr if there is a test case failure
+    buffer = True
+
+    def setUpLocal(self):
+        super(RunqemuTests, self).setUpLocal()
+        self.recipe = 'core-image-minimal'
+        self.machine =  'qemux86-64'
+        self.fstypes = "ext4 iso hddimg wic.vmdk wic.qcow2 wic.vdi"
+        self.cmd_common = "runqemu nographic"
+
+        self.write_config(
+"""
+MACHINE = "%s"
+IMAGE_FSTYPES = "%s"
+# 10 means 1 second
+SYSLINUX_TIMEOUT = "10"
+"""
+% (self.machine, self.fstypes)
+        )
+
+        if not RunqemuTests.image_is_ready:
+            RunqemuTests.deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
+            bitbake(self.recipe)
+            RunqemuTests.image_is_ready = True
+
+    @OETestID(2001)
+    def test_boot_machine(self):
+        """Test runqemu machine"""
+        cmd = "%s %s" % (self.cmd_common, self.machine)
+        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
+            self.assertTrue(qemu.runner.logged, "Failed: %s" % cmd)
+
+    @OETestID(2002)
+    def test_boot_machine_ext4(self):
+        """Test runqemu machine ext4"""
+        cmd = "%s %s ext4" % (self.cmd_common, self.machine)
+        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
+            with open(qemu.qemurunnerlog) as f:
+                self.assertTrue('rootfs.ext4' in f.read(), "Failed: %s" % cmd)
+
+    @OETestID(2003)
+    def test_boot_machine_iso(self):
+        """Test runqemu machine iso"""
+        cmd = "%s %s iso" % (self.cmd_common, self.machine)
+        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
+            with open(qemu.qemurunnerlog) as f:
+                self.assertTrue('media=cdrom' in f.read(), "Failed: %s" % cmd)
+
+    @OETestID(2004)
+    def test_boot_recipe_image(self):
+        """Test runqemu recipe-image"""
+        cmd = "%s %s" % (self.cmd_common, self.recipe)
+        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
+            self.assertTrue(qemu.runner.logged, "Failed: %s" % cmd)
+
+    @OETestID(2005)
+    def test_boot_recipe_image_vmdk(self):
+        """Test runqemu recipe-image vmdk"""
+        cmd = "%s %s wic.vmdk" % (self.cmd_common, self.recipe)
+        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
+            with open(qemu.qemurunnerlog) as f:
+                self.assertTrue('format=vmdk' in f.read(), "Failed: %s" % cmd)
+
+    @OETestID(2006)
+    def test_boot_recipe_image_vdi(self):
+        """Test runqemu recipe-image vdi"""
+        cmd = "%s %s wic.vdi" % (self.cmd_common, self.recipe)
+        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
+            with open(qemu.qemurunnerlog) as f:
+                self.assertTrue('format=vdi' in f.read(), "Failed: %s" % cmd)
+
+    @OETestID(2007)
+    def test_boot_deploy(self):
+        """Test runqemu deploy_dir_image"""
+        cmd = "%s %s" % (self.cmd_common, self.deploy_dir_image)
+        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
+            self.assertTrue(qemu.runner.logged, "Failed: %s" % cmd)
+
+    @OETestID(2008)
+    def test_boot_deploy_hddimg(self):
+        """Test runqemu deploy_dir_image hddimg"""
+        cmd = "%s %s hddimg" % (self.cmd_common, self.deploy_dir_image)
+        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
+            with open(qemu.qemurunnerlog) as f:
+                self.assertTrue(re.search('file=.*.hddimg', f.read()), "Failed: %s" % cmd)
+
+    @OETestID(2009)
+    def test_boot_machine_slirp(self):
+        """Test runqemu machine slirp"""
+        cmd = "%s slirp %s" % (self.cmd_common, self.machine)
+        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
+            with open(qemu.qemurunnerlog) as f:
+                self.assertTrue(' -netdev user' in f.read(), "Failed: %s" % cmd)
+
+    @OETestID(2009)
+    def test_boot_machine_slirp_qcow2(self):
+        """Test runqemu machine slirp qcow2"""
+        cmd = "%s slirp wic.qcow2 %s" % (self.cmd_common, self.machine)
+        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
+            with open(qemu.qemurunnerlog) as f:
+                self.assertTrue('format=qcow2' in f.read(), "Failed: %s" % cmd)
+
+    @OETestID(2010)
+    def test_boot_qemu_boot(self):
+        """Test runqemu /path/to/image.qemuboot.conf"""
+        qemuboot_conf = "%s-%s.qemuboot.conf" % (self.recipe, self.machine)
+        qemuboot_conf = os.path.join(self.deploy_dir_image, qemuboot_conf)
+        if not os.path.exists(qemuboot_conf):
+            self.skipTest("%s not found" % qemuboot_conf)
+        cmd = "%s %s" % (self.cmd_common, qemuboot_conf)
+        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
+            self.assertTrue(qemu.runner.logged, "Failed: %s" % cmd)
+
+    @OETestID(2011)
+    def test_boot_rootfs(self):
+        """Test runqemu /path/to/rootfs.ext4"""
+        rootfs = "%s-%s.ext4" % (self.recipe, self.machine)
+        rootfs = os.path.join(self.deploy_dir_image, rootfs)
+        if not os.path.exists(rootfs):
+            self.skipTest("%s not found" % rootfs)
+        cmd = "%s %s" % (self.cmd_common, rootfs)
+        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
+            self.assertTrue(qemu.runner.logged, "Failed: %s" % cmd)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/runtime_test.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/runtime_test.py
new file mode 100644
index 0000000..25270b7
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/runtime_test.py
@@ -0,0 +1,267 @@
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars, runqemu
+from oeqa.utils.sshcontrol import SSHControl
+from oeqa.core.decorator.oeid import OETestID
+import os
+import re
+import tempfile
+import shutil
+
+class TestExport(OESelftestTestCase):
+
+    @classmethod
+    def tearDownClass(cls):
+        runCmd("rm -rf /tmp/sdk")
+        super(TestExport, cls).tearDownClass()
+
+    @OETestID(1499)
+    def test_testexport_basic(self):
+        """
+        Summary: Check basic testexport functionality with only ping test enabled.
+        Expected: 1. testexport directory must be created.
+                  2. runexported.py must run without any error/exception.
+                  3. ping test must succeed.
+        Product: oe-core
+        Author: Mariano Lopez <mariano.lopez@intel.com>
+        """
+
+        features = 'INHERIT += "testexport"\n'
+        # These aren't the actual IP addresses but testexport class needs something defined
+        features += 'TEST_SERVER_IP = "192.168.7.1"\n'
+        features += 'TEST_TARGET_IP = "192.168.7.1"\n'
+        features += 'TEST_SUITES = "ping"\n'
+        self.write_config(features)
+
+        # Build testexport for core-image-minimal
+        bitbake('core-image-minimal')
+        bitbake('-c testexport core-image-minimal')
+
+        testexport_dir = get_bb_var('TEST_EXPORT_DIR', 'core-image-minimal')
+
+        # Verify if TEST_EXPORT_DIR was created
+        isdir = os.path.isdir(testexport_dir)
+        self.assertEqual(True, isdir, 'Failed to create testexport dir: %s' % testexport_dir)
+
+        with runqemu('core-image-minimal') as qemu:
+            # Attempt to run runexported.py to perform ping test
+            test_path = os.path.join(testexport_dir, "oe-test")
+            data_file = os.path.join(testexport_dir, 'data', 'testdata.json')
+            manifest = os.path.join(testexport_dir, 'data', 'manifest')
+            cmd = ("%s runtime --test-data-file %s --packages-manifest %s "
+                   "--target-ip %s --server-ip %s --quiet"
+                  % (test_path, data_file, manifest, qemu.ip, qemu.server_ip))
+            result = runCmd(cmd)
+            # Verify ping test was successful
+            self.assertEqual(0, result.status, 'oe-test runtime returned a non 0 status')
+
+    @OETestID(1641)
+    def test_testexport_sdk(self):
+        """
+        Summary: Check sdk functionality for testexport.
+        Expected: 1. testexport directory must be created.
+                  2. SDK tarball must exist.
+                  3. Uncompressing of tarball must succeed.
+                  4. Check if the SDK directory is added to PATH.
+                  5. Run tar from the SDK directory.
+        Product: oe-core
+        Author: Mariano Lopez <mariano.lopez@intel.com>
+        """
+
+        features = 'INHERIT += "testexport"\n'
+        # These aren't the actual IP addresses but testexport class needs something defined
+        features += 'TEST_SERVER_IP = "192.168.7.1"\n'
+        features += 'TEST_TARGET_IP = "192.168.7.1"\n'
+        features += 'TEST_SUITES = "ping"\n'
+        features += 'TEST_EXPORT_SDK_ENABLED = "1"\n'
+        features += 'TEST_EXPORT_SDK_PACKAGES = "nativesdk-tar"\n'
+        self.write_config(features)
+
+        # Build testexport for core-image-minimal
+        bitbake('core-image-minimal')
+        bitbake('-c testexport core-image-minimal')
+
+        needed_vars = ['TEST_EXPORT_DIR', 'TEST_EXPORT_SDK_DIR', 'TEST_EXPORT_SDK_NAME']
+        bb_vars = get_bb_vars(needed_vars, 'core-image-minimal')
+        testexport_dir = bb_vars['TEST_EXPORT_DIR']
+        sdk_dir = bb_vars['TEST_EXPORT_SDK_DIR']
+        sdk_name = bb_vars['TEST_EXPORT_SDK_NAME']
+
+        # Check for SDK
+        tarball_name = "%s.sh" % sdk_name
+        tarball_path = os.path.join(testexport_dir, sdk_dir, tarball_name)
+        msg = "Couldn't find SDK tarball: %s" % tarball_path
+        self.assertEqual(os.path.isfile(tarball_path), True, msg)
+
+        # Extract SDK and run tar from SDK
+        result = runCmd("%s -y -d /tmp/sdk" % tarball_path)
+        self.assertEqual(0, result.status, "Couldn't extract SDK")
+
+        env_script = result.output.split()[-1]
+        result = runCmd(". %s; which tar" % env_script, shell=True)
+        self.assertEqual(0, result.status, "Couldn't setup SDK environment")
+        is_sdk_tar = "/tmp/sdk" in result.output
+        self.assertTrue(is_sdk_tar, "Couldn't setup SDK environment")
+
+        tar_sdk = result.output
+        result = runCmd("%s --version" % tar_sdk)
+        self.assertEqual(0, result.status, "Couldn't run tar from SDK")
+
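+# A minimal, illustrative sketch (not part of the upstream test suite) of the SDK
+# handling exercised by test_testexport_sdk above: the exported self-extracting
+# installer is run with -y/-d and prints its environment-setup script as the last
+# word of its output. The installer path and extraction directory are placeholders.
+def _example_extract_and_use_sdk(installer_path, extract_dir="/tmp/sdk-example"):
+    """Illustrative helper only; not called by any test."""
+    result = runCmd("%s -y -d %s" % (installer_path, extract_dir))
+    # The last word of the installer output is the environment-setup script
+    env_script = result.output.split()[-1]
+    # Source the script and confirm that tools now resolve from the extracted SDK
+    return runCmd(". %s; which tar" % env_script, shell=True)
+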
+
+class TestImage(OESelftestTestCase):
+
+    @OETestID(1644)
+    def test_testimage_install(self):
+        """
+        Summary: Check install packages functionality for testimage/testexport.
+        Expected: 1. Import tests from a directory other than meta.
+                  2. Check install/uninstall of socat.
+        Product: oe-core
+        Author: Mariano Lopez <mariano.lopez@intel.com>
+        """
+        if get_bb_var('DISTRO') == 'poky-tiny':
+            self.skipTest('core-image-full-cmdline not buildable for poky-tiny')
+
+        features = 'INHERIT += "testimage"\n'
+        features += 'TEST_SUITES = "ping ssh selftest"\n'
+        self.write_config(features)
+
+        # Build core-image-full-cmdline (and socat), then run testimage
+        bitbake('core-image-full-cmdline socat')
+        bitbake('-c testimage core-image-full-cmdline')
+
+    @OETestID(1883)
+    def test_testimage_dnf(self):
+        """
+        Summary: Check package feeds functionality for dnf
+        Expected: 1. Check that remote package feeds can be accessed
+        Product: oe-core
+        Author: Alexander Kanavin <alexander.kanavin@intel.com>
+        """
+        if get_bb_var('DISTRO') == 'poky-tiny':
+            self.skipTest('core-image-full-cmdline not buildable for poky-tiny')
+
+        features = 'INHERIT += "testimage"\n'
+        features += 'TEST_SUITES = "ping ssh dnf_runtime dnf.DnfBasicTest.test_dnf_help"\n'
+        # We don't yet know what the server ip and port will be - they will be patched
+        # in at the start of the on-image test
+        features += 'PACKAGE_FEED_URIS = "http://bogus_ip:bogus_port"\n'
+        features += 'EXTRA_IMAGE_FEATURES += "package-management"\n'
+        features += 'PACKAGE_CLASSES = "package_rpm"\n'
+
+        # Enable package feed signing
+        self.gpg_home = tempfile.mkdtemp(prefix="oeqa-feed-sign-")
+        signing_key_dir = os.path.join(self.testlayer_path, 'files', 'signing')
+        runCmd('gpg --batch --homedir %s --import %s' % (self.gpg_home, os.path.join(signing_key_dir, 'key.secret')))
+        features += 'INHERIT += "sign_package_feed"\n'
+        features += 'PACKAGE_FEED_GPG_NAME = "testuser"\n'
+        features += 'PACKAGE_FEED_GPG_PASSPHRASE_FILE = "%s"\n' % os.path.join(signing_key_dir, 'key.passphrase')
+        features += 'GPG_PATH = "%s"\n' % self.gpg_home
+        self.write_config(features)
+
+        # Build core-image-full-cmdline (and socat), then run testimage
+        bitbake('core-image-full-cmdline socat')
+        bitbake('-c testimage core-image-full-cmdline')
+
+        # Remove the temporary oeqa-feed-sign directory
+        shutil.rmtree(self.gpg_home, ignore_errors=True)
+
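+# An illustrative sketch (not called by any test) of the local.conf fragment that
+# test_testimage_dnf assembles above for a GPG-signed package feed. The gpg_home and
+# signing_key_dir arguments are placeholders; PACKAGE_FEED_URIS is left out because
+# the real feed address is only known once the on-image test starts.
+def _example_signed_feed_config(gpg_home, signing_key_dir):
+    """Illustrative helper only; returns a config string for write_config()."""
+    cfg = 'INHERIT += "testimage sign_package_feed"\n'
+    cfg += 'EXTRA_IMAGE_FEATURES += "package-management"\n'
+    cfg += 'PACKAGE_CLASSES = "package_rpm"\n'
+    cfg += 'PACKAGE_FEED_GPG_NAME = "testuser"\n'
+    cfg += 'PACKAGE_FEED_GPG_PASSPHRASE_FILE = "%s"\n' % os.path.join(signing_key_dir, 'key.passphrase')
+    cfg += 'GPG_PATH = "%s"\n' % gpg_home
+    return cfg
+
+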
+class Postinst(OESelftestTestCase):
+    @OETestID(1540)
+    def test_verify_postinst(self):
+        """
+        Summary: The purpose of this test is to verify the execution order of postinst scripts (Bugzilla ID: 5319)
+        Expected:
+        1. Compile a minimal image.
+        2. The compiled image will add the created layer with the postinst-at-rootfs and postinst-delayed-[abdpt] recipes
+        3. Run qemux86
+        4. Validate the task execution order
+        Author: Francisco Pedraza <francisco.j.pedraza.gonzalez@intel.com>
+        """
+        features = 'INHERIT += "testimage"\n'
+        features += 'CORE_IMAGE_EXTRA_INSTALL += "postinst-at-rootfs \
+postinst-delayed-a \
+postinst-delayed-b \
+postinst-delayed-d \
+postinst-delayed-p \
+postinst-delayed-t \
+"\n'
+        self.write_config(features)
+
+        bitbake('core-image-minimal -f')
+
+        postinst_list = ['100-postinst-at-rootfs',
+                         '101-postinst-delayed-a',
+                         '102-postinst-delayed-b',
+                         '103-postinst-delayed-d',
+                         '104-postinst-delayed-p',
+                         '105-postinst-delayed-t']
+        path_workdir = get_bb_var('WORKDIR','core-image-minimal')
+        workspacedir = 'testimage/qemu_boot_log'
+        workspacedir = os.path.join(path_workdir, workspacedir)
+        rexp = re.compile(r"^Running postinst .*/(?P<postinst>.*)\.\.\.$")
+        with runqemu('core-image-minimal') as qemu:
+            with open(workspacedir) as f:
+                found = False
+                idx = 0
+                for line in f.readlines():
+                    line = line.strip().replace("^M","")
+                    if not line: # To avoid empty lines
+                        continue
+                    m = rexp.search(line)
+                    if m:
+                        self.assertEqual(postinst_list[idx], m.group('postinst'), "Postinst scripts did not run in the expected order")
+                        idx = idx+1
+                        found = True
+                    elif found:
+                        self.assertEqual(idx, len(postinst_list), "Did not find all expected postinsts")
+                        break
+
+    @OETestID(1545)
+    def test_postinst_rootfs_and_boot(self):
+        """
+        Summary:        The purpose of this test case is to verify Post-installation
+                        scripts are called when the rootfs is created and also test
+                        that a script can be delayed to run at first boot.
+        Dependencies:   NA
+        Steps:          1. Add proper configuration to local.conf file
+                        2. Build a "core-image-minimal" image
+                        3. Verify that file created by postinst_rootfs recipe is
+                           present on rootfs dir.
+                        4. Boot the image created on qemu and verify that the file
+                           created by postinst_boot recipe is present on image.
+        Expected:       The files are successfully created during rootfs and boot
+                        time for 3 different package managers: rpm, ipk, deb and
+                        for the init managers sysvinit and systemd.
+
+        """
+        file_rootfs_name = "this-was-created-at-rootfstime"
+        fileboot_name = "this-was-created-at-first-boot"
+        rootfs_pkg = 'postinst-at-rootfs'
+        boot_pkg = 'postinst-delayed-a'
+
+        for init_manager in ("sysvinit", "systemd"):
+            for classes in ("package_rpm", "package_deb", "package_ipk"):
+                with self.subTest(init_manager=init_manager, package_class=classes):
+                    features = 'MACHINE = "qemux86"\n'
+                    features += 'CORE_IMAGE_EXTRA_INSTALL += "%s %s "\n'% (rootfs_pkg, boot_pkg)
+                    features += 'IMAGE_FEATURES += "package-management empty-root-password"\n'
+                    features += 'PACKAGE_CLASSES = "%s"\n' % classes
+                    if init_manager == "systemd":
+                        features += 'DISTRO_FEATURES_append = " systemd"\n'
+                        features += 'VIRTUAL-RUNTIME_init_manager = "systemd"\n'
+                        features += 'DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"\n'
+                        features += 'VIRTUAL-RUNTIME_initscripts = ""\n'
+                    self.write_config(features)
+
+                    bitbake('core-image-minimal')
+
+                    file_rootfs_created = os.path.join(get_bb_var('IMAGE_ROOTFS', "core-image-minimal"),
+                                                       file_rootfs_name)
+                    found = os.path.isfile(file_rootfs_created)
+                    self.assertTrue(found, "File %s was not created at rootfs time by %s" % \
+                                    (file_rootfs_name, rootfs_pkg))
+
+                    testcommand = 'ls /etc/' + fileboot_name
+                    with runqemu('core-image-minimal') as qemu:
+                        status, output = qemu.run_serial(testcommand)
+                        self.assertEqual(status, 0, 'File %s was not created at first boot (%s)' % (fileboot_name, output))
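+
+
+# An illustrative sketch (not called by any test) of the boot-log parsing technique
+# used in Postinst.test_verify_postinst above: each delayed postinst prints a
+# "Running postinst .../<name>..." line, and the order of those lines must match the
+# expected 100-105 prefixes. The log path and expected names are supplied by the caller.
+def _example_check_postinst_order(boot_log_path, expected_names):
+    """Illustrative helper only; returns True when the observed order matches."""
+    pattern = re.compile(r"^Running postinst .*/(?P<postinst>.*)\.\.\.$")
+    seen = []
+    with open(boot_log_path) as log:
+        for line in log:
+            match = pattern.search(line.strip())
+            if match:
+                seen.append(match.group('postinst'))
+    return seen == list(expected_names)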
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/selftest.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/selftest.py
new file mode 100644
index 0000000..4b3cb14
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/selftest.py
@@ -0,0 +1,51 @@
+import os
+import importlib
+from oeqa.utils.commands import runCmd
+import oeqa.selftest
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.core.decorator.oeid import OETestID
+
+class ExternalLayer(OESelftestTestCase):
+
+    @OETestID(1885)
+    def test_list_imported(self):
+        """
+        Summary: Checks functionality to import tests from other layers.
+        Expected: 1. File "external-layer.py" must be in
+        oeqa.selftest.__path__
+                  2. test_unconditional_pas method must exists
+                     in ImportedTests class
+        Product: oe-core
+        Author: Mariano Lopez <mariano.lopez@intel.com>
+        """
+
+        test_file = "external-layer.py"
+        test_module = "oeqa.selftest.cases.external-layer"
+        method_name = "test_unconditional_pass"
+
+        # Check if "external-layer.py" is in oeqa path
+        found_file = search_test_file(test_file)
+        self.assertTrue(found_file, msg="Can't find %s in the oeqa path" % test_file)
+
+        # Import the oeqa.selftest.cases.external-layer module and search for
+        # test_unconditional_pass method of ImportedTests class
+        found_method = search_method(test_module, method_name)
+        self.assertTrue(found_method, msg="Can't find %s method" % method_name)
+
+def search_test_file(file_name):
+    for layer_path in oeqa.selftest.__path__:
+        for _, _, files in os.walk(layer_path):
+            for f in files:
+                if f == file_name:
+                    return True
+    return False
+
+def search_method(module, method):
+    modlib = importlib.import_module(module)
+    for var in vars(modlib):
+        klass = vars(modlib)[var]
+        if isinstance(klass, type(OESelftestTestCase)) and issubclass(klass, OESelftestTestCase):
+            for m in dir(klass):
+                if m == method:
+                    return True
+    return False
+
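+# An illustrative sketch (not called by any test) of how the helpers above can be
+# driven: the file and method names are the same ones ExternalLayer.test_list_imported
+# checks for.
+def _example_external_layer_lookup():
+    """Illustrative helper only; returns True when both lookups succeed."""
+    file_found = search_test_file("external-layer.py")
+    method_found = search_method("oeqa.selftest.cases.external-layer",
+                                 "test_unconditional_pass")
+    return file_found and method_found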
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/signing.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/signing.py
new file mode 100644
index 0000000..b3d1a82
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/signing.py
@@ -0,0 +1,187 @@
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars
+import os
+import glob
+import re
+import shutil
+import tempfile
+from oeqa.core.decorator.oeid import OETestID
+from oeqa.utils.ftools import write_file
+
+
+class Signing(OESelftestTestCase):
+
+    gpg_dir = ""
+    pub_key_path = ""
+    secret_key_path = ""
+
+    @classmethod
+    def setUpClass(cls):
+        super(Signing, cls).setUpClass()
+        # Check that we can find the gpg binary and fail early if we can't
+        if not shutil.which("gpg"):
+            raise AssertionError("This test needs GnuPG")
+
+        cls.gpg_dir = tempfile.mkdtemp(prefix="oeqa-signing-")
+
+        cls.pub_key_path = os.path.join(cls.testlayer_path, 'files', 'signing', "key.pub")
+        cls.secret_key_path = os.path.join(cls.testlayer_path, 'files', 'signing', "key.secret")
+
+        runCmd('gpg --batch --homedir %s --import %s %s' % (cls.gpg_dir, cls.pub_key_path, cls.secret_key_path))
+
+    @classmethod
+    def tearDownClass(cls):
+        shutil.rmtree(cls.gpg_dir, ignore_errors=True)
+
+    @OETestID(1362)
+    def test_signing_packages(self):
+        """
+        Summary:     Test that packages can be signed in the package feed
+        Expected:    Package should be signed with the correct key
+        Expected:    Images can be created from signed packages
+        Product:     oe-core
+        Author:      Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        Author:      Alexander Kanavin <alexander.kanavin@intel.com>
+        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        """
+        import oe.packagedata
+
+        package_classes = get_bb_var('PACKAGE_CLASSES')
+        if 'package_rpm' not in package_classes:
+            self.skipTest('This test requires RPM Packaging.')
+
+        test_recipe = 'ed'
+
+        feature = 'INHERIT += "sign_rpm"\n'
+        feature += 'RPM_GPG_PASSPHRASE = "test123"\n'
+        feature += 'RPM_GPG_NAME = "testuser"\n'
+        feature += 'GPG_PATH = "%s"\n' % self.gpg_dir
+
+        self.write_config(feature)
+
+        bitbake('-c clean %s' % test_recipe)
+        bitbake('-f -c package_write_rpm %s' % test_recipe)
+
+        self.add_command_to_tearDown('bitbake -c clean %s' % test_recipe)
+
+        needed_vars = ['PKGDATA_DIR', 'DEPLOY_DIR_RPM', 'PACKAGE_ARCH', 'STAGING_BINDIR_NATIVE']
+        bb_vars = get_bb_vars(needed_vars, test_recipe)
+        pkgdatadir = bb_vars['PKGDATA_DIR']
+        pkgdata = oe.packagedata.read_pkgdatafile(pkgdatadir + "/runtime/ed")
+        if 'PKGE' in pkgdata:
+            pf = pkgdata['PN'] + "-" + pkgdata['PKGE'] + pkgdata['PKGV'] + '-' + pkgdata['PKGR']
+        else:
+            pf = pkgdata['PN'] + "-" + pkgdata['PKGV'] + '-' + pkgdata['PKGR']
+        deploy_dir_rpm = bb_vars['DEPLOY_DIR_RPM']
+        package_arch = bb_vars['PACKAGE_ARCH'].replace('-', '_')
+        staging_bindir_native = bb_vars['STAGING_BINDIR_NATIVE']
+
+        pkg_deploy = os.path.join(deploy_dir_rpm, package_arch, '.'.join((pf, package_arch, 'rpm')))
+
+        # Use a temporary rpmdb
+        rpmdb = tempfile.mkdtemp(prefix='oeqa-rpmdb')
+
+        runCmd('%s/rpmkeys --define "_dbpath %s" --import %s' %
+               (staging_bindir_native, rpmdb, self.pub_key_path))
+
+        ret = runCmd('%s/rpmkeys --define "_dbpath %s" --checksig %s' %
+                     (staging_bindir_native, rpmdb, pkg_deploy))
+        # tmp/deploy/rpm/i586/ed-1.9-r0.i586.rpm: rsa sha1 md5 OK
+        self.assertIn('rsa sha1 (md5) pgp md5 OK', ret.output, 'Package signed incorrectly.')
+        shutil.rmtree(rpmdb)
+
+        # Check that an image can be built from signed packages
+        self.add_command_to_tearDown('bitbake -c clean core-image-minimal')
+        bitbake('-c clean core-image-minimal')
+        bitbake('core-image-minimal')
+
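+    # A minimal, illustrative sketch (not called by any test) of the rpmkeys check
+    # performed in test_signing_packages above: import the public key into a throwaway
+    # rpmdb and run --checksig against a package. All arguments are placeholders.
+    @staticmethod
+    def _example_checksig(rpmkeys_bindir, pub_key, rpm_package):
+        """Illustrative helper only; returns the runCmd result of the signature check."""
+        rpmdb = tempfile.mkdtemp(prefix='oeqa-rpmdb-example')
+        try:
+            runCmd('%s/rpmkeys --define "_dbpath %s" --import %s' %
+                   (rpmkeys_bindir, rpmdb, pub_key))
+            return runCmd('%s/rpmkeys --define "_dbpath %s" --checksig %s' %
+                          (rpmkeys_bindir, rpmdb, rpm_package))
+        finally:
+            shutil.rmtree(rpmdb, ignore_errors=True)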
+
+    @OETestID(1382)
+    def test_signing_sstate_archive(self):
+        """
+        Summary:     Test that sstate archives can be signed
+        Expected:    Package should be signed with the correct key
+        Product:     oe-core
+        Author:      Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        """
+
+        test_recipe = 'ed'
+
+        builddir = os.environ.get('BUILDDIR')
+        sstatedir = os.path.join(builddir, 'test-sstate')
+
+        self.add_command_to_tearDown('bitbake -c clean %s' % test_recipe)
+        self.add_command_to_tearDown('rm -rf %s' % sstatedir)
+
+        feature = 'SSTATE_SIG_KEY ?= "testuser"\n'
+        feature += 'SSTATE_SIG_PASSPHRASE ?= "test123"\n'
+        feature += 'SSTATE_VERIFY_SIG ?= "1"\n'
+        feature += 'GPG_PATH = "%s"\n' % self.gpg_dir
+        feature += 'SSTATE_DIR = "%s"\n' % sstatedir
+        # Any mirror might have partial sstate without .sig files, triggering failures
+        feature += 'SSTATE_MIRRORS_forcevariable = ""\n'
+
+        self.write_config(feature)
+
+        bitbake('-c clean %s' % test_recipe)
+        bitbake(test_recipe)
+
+        recipe_sig = glob.glob(sstatedir + '/*/*:ed:*_package.tgz.sig')
+        recipe_tgz = glob.glob(sstatedir + '/*/*:ed:*_package.tgz')
+
+        self.assertEqual(len(recipe_sig), 1, 'Failed to find .sig file.')
+        self.assertEqual(len(recipe_tgz), 1, 'Failed to find .tgz file.')
+
+        ret = runCmd('gpg --homedir %s --verify %s %s' % (self.gpg_dir, recipe_sig[0], recipe_tgz[0]))
+        # gpg: Signature made Thu 22 Oct 2015 01:45:09 PM EEST using RSA key ID 61EEFB30
+        # gpg: Good signature from "testuser (nocomment) <testuser@email.com>"
+        self.assertIn('gpg: Good signature from', ret.output, 'Package signed incorrectly.')
+
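+# An illustrative sketch (not called by any test) of the detached-signature check used
+# in test_signing_sstate_archive above: the .sig produced next to each sstate .tgz is
+# verified against the test keyring. All paths are placeholders.
+def _example_verify_sstate_signature(gpg_home, sig_file, tgz_file):
+    """Illustrative helper only; returns True when gpg reports a good signature."""
+    ret = runCmd('gpg --homedir %s --verify %s %s' % (gpg_home, sig_file, tgz_file))
+    return 'gpg: Good signature from' in ret.output
+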
+
+class LockedSignatures(OESelftestTestCase):
+
+    @OETestID(1420)
+    def test_locked_signatures(self):
+        """
+        Summary:     Test locked signature mechanism
+        Expected:    Locked signatures prevent the locked tasks from re-running
+        Product:     oe-core
+        Author:      Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
+        """
+
+        test_recipe = 'ed'
+        locked_sigs_file = 'locked-sigs.inc'
+
+        self.add_command_to_tearDown('rm -f %s' % os.path.join(self.builddir, locked_sigs_file))
+
+        bitbake(test_recipe)
+        # Generate locked sigs include file
+        bitbake('-S none %s' % test_recipe)
+
+        feature = 'require %s\n' % locked_sigs_file
+        feature += 'SIGGEN_LOCKEDSIGS_TASKSIG_CHECK = "warn"\n'
+        self.write_config(feature)
+
+        # Build a locked recipe
+        bitbake(test_recipe)
+
+        # Make a change that should cause the locked task signature to change
+        recipe_append_file = test_recipe + '_' + get_bb_var('PV', test_recipe) + '.bbappend'
+        recipe_append_path = os.path.join(self.testlayer_path, 'recipes-test', test_recipe, recipe_append_file)
+        feature = 'SUMMARY += "test locked signature"\n'
+
+        os.mkdir(os.path.join(self.testlayer_path, 'recipes-test', test_recipe))
+        write_file(recipe_append_path, feature)
+
+        self.add_command_to_tearDown('rm -rf %s' % os.path.join(self.testlayer_path, 'recipes-test', test_recipe))
+
+        # Build the recipe again
+        ret = bitbake(test_recipe)
+
+        # Verify you get the warning and that the real task *isn't* run (i.e. the locked signature has worked)
+        patt = r'WARNING: The %s:do_package sig is computed to be \S+, but the sig is locked to \S+ in SIGGEN_LOCKEDSIGS\S+' % test_recipe
+        found_warn = re.search(patt, ret.output)
+
+        self.assertIsNotNone(found_warn, "Didn't find the expected warning message. Output: %s" % ret.output)
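+
+
+# An illustrative sketch of the locked-signatures configuration exercised by
+# LockedSignatures.test_locked_signatures above: 'bitbake -S none <recipe>' writes
+# locked-sigs.inc into the build directory, and requiring it with a 'warn' check makes
+# BitBake warn about (rather than fail on) any task whose signature later drifts.
+def _example_locked_sigs_config(locked_sigs_file='locked-sigs.inc'):
+    """Illustrative helper only; returns a config string for write_config()."""
+    cfg = 'require %s\n' % locked_sigs_file
+    cfg += 'SIGGEN_LOCKEDSIGS_TASKSIG_CHECK = "warn"\n'
+    return cfg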
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/sstate.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/sstate.py
new file mode 100644
index 0000000..bc2fdbd
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/sstate.py
@@ -0,0 +1,63 @@
+import datetime
+import unittest
+import os
+import re
+import shutil
+
+import oeqa.utils.ftools as ftools
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_vars, get_test_layer
+
+
+class SStateBase(OESelftestTestCase):
+
+    def setUpLocal(self):
+        super(SStateBase, self).setUpLocal()
+        self.temp_sstate_location = None
+        needed_vars = ['SSTATE_DIR', 'NATIVELSBSTRING', 'TCLIBC', 'TUNE_ARCH',
+                       'TOPDIR', 'TARGET_VENDOR', 'TARGET_OS']
+        bb_vars = get_bb_vars(needed_vars)
+        self.sstate_path = bb_vars['SSTATE_DIR']
+        self.hostdistro = bb_vars['NATIVELSBSTRING']
+        self.tclibc = bb_vars['TCLIBC']
+        self.tune_arch = bb_vars['TUNE_ARCH']
+        self.topdir = bb_vars['TOPDIR']
+        self.target_vendor = bb_vars['TARGET_VENDOR']
+        self.target_os = bb_vars['TARGET_OS']
+        self.distro_specific_sstate = os.path.join(self.sstate_path, self.hostdistro)
+
+    # Creates a special sstate configuration with the option to add sstate mirrors
+    def config_sstate(self, temp_sstate_location=False, add_local_mirrors=[]):
+        self.temp_sstate_location = temp_sstate_location
+
+        if self.temp_sstate_location:
+            temp_sstate_path = os.path.join(self.builddir, "temp_sstate_%s" % datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
+            config_temp_sstate = "SSTATE_DIR = \"%s\"" % temp_sstate_path
+            self.append_config(config_temp_sstate)
+            self.track_for_cleanup(temp_sstate_path)
+        bb_vars = get_bb_vars(['SSTATE_DIR', 'NATIVELSBSTRING'])
+        self.sstate_path = bb_vars['SSTATE_DIR']
+        self.hostdistro = bb_vars['NATIVELSBSTRING']
+        self.distro_specific_sstate = os.path.join(self.sstate_path, self.hostdistro)
+
+        if add_local_mirrors:
+            config_set_sstate_if_not_set = 'SSTATE_MIRRORS ?= ""'
+            self.append_config(config_set_sstate_if_not_set)
+            for local_mirror in add_local_mirrors:
+                self.assertFalse(os.path.join(local_mirror) == os.path.join(self.sstate_path), msg='Cannot add the current sstate path as an sstate mirror')
+                config_sstate_mirror = "SSTATE_MIRRORS += \"file://.* file:///%s/PATH\"" % local_mirror
+                self.append_config(config_sstate_mirror)
+
+    # Returns a list containing sstate files
+    def search_sstate(self, filename_regex, distro_specific=True, distro_nonspecific=True):
+        result = []
+        for root, dirs, files in os.walk(self.sstate_path):
+            if distro_specific and re.search("%s/[a-z0-9]{2}$" % self.hostdistro, root):
+                for f in files:
+                    if re.search(filename_regex, f):
+                        result.append(f)
+            if distro_nonspecific and re.search("%s/[a-z0-9]{2}$" % self.sstate_path, root):
+                for f in files:
+                    if re.search(filename_regex, f):
+                        result.append(f)
+        return result
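+
+
+# An illustrative sketch (not a real test) of how a case derived from SStateBase
+# typically combines the helpers above: point SSTATE_DIR at a temporary location,
+# build a target, then look for the produced sstate archives.
+def _example_build_and_search(sstate_case, target):
+    """Illustrative helper only; expects an SStateBase instance, not called by any test."""
+    sstate_case.config_sstate(temp_sstate_location=True)
+    bitbake(target)
+    return sstate_case.search_sstate(r'%s.*?\.tgz$' % target)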
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/sstatetests.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/sstatetests.py
new file mode 100644
index 0000000..4790088
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/sstatetests.py
@@ -0,0 +1,496 @@
+import os
+import shutil
+import glob
+import subprocess
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_test_layer
+from oeqa.selftest.cases.sstate import SStateBase
+from oeqa.core.decorator.oeid import OETestID
+
+import bb.siggen
+
+class SStateTests(SStateBase):
+
+    # Test sstate files creation and their location
+    def run_test_sstate_creation(self, targets, distro_specific=True, distro_nonspecific=True, temp_sstate_location=True, should_pass=True):
+        self.config_sstate(temp_sstate_location, [self.sstate_path])
+
+        if self.temp_sstate_location:
+            bitbake(['-cclean'] + targets)
+        else:
+            bitbake(['-ccleansstate'] + targets)
+
+        bitbake(targets)
+        file_tracker = []
+        results = self.search_sstate('|'.join(map(str, targets)), distro_specific, distro_nonspecific)
+        if distro_nonspecific:
+            for r in results:
+                if r.endswith(("_populate_lic.tgz", "_populate_lic.tgz.siginfo", "_fetch.tgz.siginfo", "_unpack.tgz.siginfo", "_patch.tgz.siginfo")):
+                    continue
+                file_tracker.append(r)
+        else:
+            file_tracker = results
+
+        if should_pass:
+            self.assertTrue(file_tracker, msg="Could not find sstate files for: %s" % ', '.join(map(str, targets)))
+        else:
+            self.assertTrue(not file_tracker, msg="Found sstate files in the wrong place for: %s (found %s)" % (', '.join(map(str, targets)), str(file_tracker)))
+
+    @OETestID(975)
+    def test_sstate_creation_distro_specific_pass(self):
+        self.run_test_sstate_creation(['binutils-cross-'+ self.tune_arch, 'binutils-native'], distro_specific=True, distro_nonspecific=False, temp_sstate_location=True)
+
+    @OETestID(1374)
+    def test_sstate_creation_distro_specific_fail(self):
+        self.run_test_sstate_creation(['binutils-cross-'+ self.tune_arch, 'binutils-native'], distro_specific=False, distro_nonspecific=True, temp_sstate_location=True, should_pass=False)
+
+    @OETestID(976)
+    def test_sstate_creation_distro_nonspecific_pass(self):
+        self.run_test_sstate_creation(['linux-libc-headers'], distro_specific=False, distro_nonspecific=True, temp_sstate_location=True)
+
+    @OETestID(1375)
+    def test_sstate_creation_distro_nonspecific_fail(self):
+        self.run_test_sstate_creation(['linux-libc-headers'], distro_specific=True, distro_nonspecific=False, temp_sstate_location=True, should_pass=False)
+
+    # Test the sstate files deletion part of the do_cleansstate task
+    def run_test_cleansstate_task(self, targets, distro_specific=True, distro_nonspecific=True, temp_sstate_location=True):
+        self.config_sstate(temp_sstate_location, [self.sstate_path])
+
+        bitbake(['-ccleansstate'] + targets)
+
+        bitbake(targets)
+        tgz_created = self.search_sstate('|'.join(map(str, [s + '.*?\.tgz$' for s in targets])), distro_specific, distro_nonspecific)
+        self.assertTrue(tgz_created, msg="Could not find sstate .tgz files for: %s (%s)" % (', '.join(map(str, targets)), str(tgz_created)))
+
+        siginfo_created = self.search_sstate('|'.join(map(str, [s + '.*?\.siginfo$' for s in targets])), distro_specific, distro_nonspecific)
+        self.assertTrue(siginfo_created, msg="Could not find sstate .siginfo files for: %s (%s)" % (', '.join(map(str, targets)), str(siginfo_created)))
+
+        bitbake(['-ccleansstate'] + targets)
+        tgz_removed = self.search_sstate('|'.join(map(str, [s + '.*?\.tgz$' for s in targets])), distro_specific, distro_nonspecific)
+        self.assertTrue(not tgz_removed, msg="do_cleansstate didn't remove .tgz sstate files for: %s (%s)" % (', '.join(map(str, targets)), str(tgz_removed)))
+
+    @OETestID(977)
+    def test_cleansstate_task_distro_specific_nonspecific(self):
+        targets = ['binutils-cross-'+ self.tune_arch, 'binutils-native']
+        targets.append('linux-libc-headers')
+        self.run_test_cleansstate_task(targets, distro_specific=True, distro_nonspecific=True, temp_sstate_location=True)
+
+    @OETestID(1376)
+    def test_cleansstate_task_distro_nonspecific(self):
+        self.run_test_cleansstate_task(['linux-libc-headers'], distro_specific=False, distro_nonspecific=True, temp_sstate_location=True)
+
+    @OETestID(1377)
+    def test_cleansstate_task_distro_specific(self):
+        targets = ['binutils-cross-'+ self.tune_arch, 'binutils-native']
+        targets.append('linux-libc-headers')
+        self.run_test_cleansstate_task(targets, distro_specific=True, distro_nonspecific=False, temp_sstate_location=True)
+
+
+    # Test rebuilding of distro-specific sstate files
+    def run_test_rebuild_distro_specific_sstate(self, targets, temp_sstate_location=True):
+        self.config_sstate(temp_sstate_location, [self.sstate_path])
+
+        bitbake(['-ccleansstate'] + targets)
+
+        bitbake(targets)
+        results = self.search_sstate('|'.join(map(str, [s + '.*?\.tgz$' for s in targets])), distro_specific=False, distro_nonspecific=True)
+        filtered_results = []
+        for r in results:
+            if r.endswith(("_populate_lic.tgz", "_populate_lic.tgz.siginfo")):
+                continue
+            filtered_results.append(r)
+        self.assertTrue(filtered_results == [], msg="Found distro non-specific sstate for: %s (%s)" % (', '.join(map(str, targets)), str(filtered_results)))
+        file_tracker_1 = self.search_sstate('|'.join(map(str, [s + '.*?\.tgz$' for s in targets])), distro_specific=True, distro_nonspecific=False)
+        self.assertTrue(len(file_tracker_1) >= len(targets), msg="Not all sstate files were created for: %s" % ', '.join(map(str, targets)))
+
+        self.track_for_cleanup(self.distro_specific_sstate + "_old")
+        shutil.copytree(self.distro_specific_sstate, self.distro_specific_sstate + "_old")
+        shutil.rmtree(self.distro_specific_sstate)
+
+        bitbake(['-cclean'] + targets)
+        bitbake(targets)
+        file_tracker_2 = self.search_sstate('|'.join(map(str, [s + '.*?\.tgz$' for s in targets])), distro_specific=True, distro_nonspecific=False)
+        self.assertTrue(len(file_tracker_2) >= len(targets), msg="Not all sstate files were created for: %s" % ', '.join(map(str, targets)))
+
+        not_recreated = [x for x in file_tracker_1 if x not in file_tracker_2]
+        self.assertTrue(not_recreated == [], msg="The following sstate files were not recreated: %s" % ', '.join(map(str, not_recreated)))
+
+        created_once = [x for x in file_tracker_2 if x not in file_tracker_1]
+        self.assertTrue(created_once == [], msg="The following sstate files were created only in the second run: %s" % ', '.join(map(str, created_once)))
+
+    @OETestID(175)
+    def test_rebuild_distro_specific_sstate_cross_native_targets(self):
+        self.run_test_rebuild_distro_specific_sstate(['binutils-cross-' + self.tune_arch, 'binutils-native'], temp_sstate_location=True)
+
+    @OETestID(1372)
+    def test_rebuild_distro_specific_sstate_cross_target(self):
+        self.run_test_rebuild_distro_specific_sstate(['binutils-cross-' + self.tune_arch], temp_sstate_location=True)
+
+    @OETestID(1373)
+    def test_rebuild_distro_specific_sstate_native_target(self):
+        self.run_test_rebuild_distro_specific_sstate(['binutils-native'], temp_sstate_location=True)
+
+
+    # Test the sstate-cache-management script. Each element in the global_config list is used with the corresponding element in the target_config list
+    # global_config elements are expected to not generate any sstate files that would be removed by sstate-cache-management.sh (such as changing the value of MACHINE)
+    def run_test_sstate_cache_management_script(self, target, global_config=[''], target_config=[''], ignore_patterns=[]):
+        self.assertTrue(global_config)
+        self.assertTrue(target_config)
+        self.assertTrue(len(global_config) == len(target_config), msg='Lists global_config and target_config should have the same number of elements')
+        self.config_sstate(temp_sstate_location=True, add_local_mirrors=[self.sstate_path])
+
+        # If buildhistory is enabled, we need to disable version-going-backwards
+        # QA checks for this test. It may report errors otherwise.
+        self.append_config('ERROR_QA_remove = "version-going-backwards"')
+
+        # For now this only checks whether random sstate tasks are handled correctly as a group.
+        # In the future we should add control over what tasks we check for.
+
+        sstate_archs_list = []
+        expected_remaining_sstate = []
+        for idx in range(len(target_config)):
+            self.append_config(global_config[idx])
+            self.append_recipeinc(target, target_config[idx])
+            sstate_arch = get_bb_var('SSTATE_PKGARCH', target)
+            if sstate_arch not in sstate_archs_list:
+                sstate_archs_list.append(sstate_arch)
+            if target_config[idx] == target_config[-1]:
+                target_sstate_before_build = self.search_sstate(target + '.*?\.tgz$')
+            bitbake("-cclean %s" % target)
+            result = bitbake(target, ignore_status=True)
+            if target_config[idx] == target_config[-1]:
+                target_sstate_after_build = self.search_sstate(target + '.*?\.tgz$')
+                expected_remaining_sstate += [x for x in target_sstate_after_build if x not in target_sstate_before_build if not any(pattern in x for pattern in ignore_patterns)]
+            self.remove_config(global_config[idx])
+            self.remove_recipeinc(target, target_config[idx])
+            self.assertEqual(result.status, 0, msg = "build of %s failed with %s" % (target, result.output))
+
+        runCmd("sstate-cache-management.sh -y --cache-dir=%s --remove-duplicated --extra-archs=%s" % (self.sstate_path, ','.join(map(str, sstate_archs_list))))
+        actual_remaining_sstate = [x for x in self.search_sstate(target + '.*?\.tgz$') if not any(pattern in x for pattern in ignore_patterns)]
+
+        actual_not_expected = [x for x in actual_remaining_sstate if x not in expected_remaining_sstate]
+        self.assertFalse(actual_not_expected, msg="Files should have been removed but were not: %s" % ', '.join(map(str, actual_not_expected)))
+        expected_not_actual = [x for x in expected_remaining_sstate if x not in actual_remaining_sstate]
+        self.assertFalse(expected_not_actual, msg="Extra files were removed: %s" % ', '.join(map(str, expected_not_actual)))
+
+    @OETestID(973)
+    def test_sstate_cache_management_script_using_pr_1(self):
+        global_config = []
+        target_config = []
+        global_config.append('')
+        target_config.append('PR = "0"')
+        self.run_test_sstate_cache_management_script('m4', global_config,  target_config, ignore_patterns=['populate_lic'])
+
+    @OETestID(978)
+    def test_sstate_cache_management_script_using_pr_2(self):
+        global_config = []
+        target_config = []
+        global_config.append('')
+        target_config.append('PR = "0"')
+        global_config.append('')
+        target_config.append('PR = "1"')
+        self.run_test_sstate_cache_management_script('m4', global_config,  target_config, ignore_patterns=['populate_lic'])
+
+    @OETestID(979)
+    def test_sstate_cache_management_script_using_pr_3(self):
+        global_config = []
+        target_config = []
+        global_config.append('MACHINE = "qemux86-64"')
+        target_config.append('PR = "0"')
+        global_config.append(global_config[0])
+        target_config.append('PR = "1"')
+        global_config.append('MACHINE = "qemux86"')
+        target_config.append('PR = "1"')
+        self.run_test_sstate_cache_management_script('m4', global_config,  target_config, ignore_patterns=['populate_lic'])
+
+    @OETestID(974)
+    def test_sstate_cache_management_script_using_machine(self):
+        global_config = []
+        target_config = []
+        global_config.append('MACHINE = "qemux86-64"')
+        target_config.append('')
+        global_config.append('MACHINE = "qemux86"')
+        target_config.append('')
+        self.run_test_sstate_cache_management_script('m4', global_config,  target_config, ignore_patterns=['populate_lic'])
+
+    @OETestID(1270)
+    def test_sstate_32_64_same_hash(self):
+        """
+        The sstate checksums for both native and target should not vary whether
+        they're built on a 32 or 64 bit system. Rather than requiring two different
+        build machines and running builds, override the uname()-derived variables
+        manually and check using bitbake -S.
+        """
+
+        self.write_config("""
+MACHINE = "qemux86"
+TMPDIR = "${TOPDIR}/tmp-sstatesamehash"
+BUILD_ARCH = "x86_64"
+BUILD_OS = "linux"
+SDKMACHINE = "x86_64"
+PACKAGE_CLASSES = "package_rpm package_ipk package_deb"
+""")
+        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash")
+        bitbake("core-image-sato -S none")
+        self.write_config("""
+MACHINE = "qemux86"
+TMPDIR = "${TOPDIR}/tmp-sstatesamehash2"
+BUILD_ARCH = "i686"
+BUILD_OS = "linux"
+SDKMACHINE = "i686"
+PACKAGE_CLASSES = "package_rpm package_ipk package_deb"
+""")
+        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash2")
+        bitbake("core-image-sato -S none")
+
+        def get_files(d):
+            f = []
+            for root, dirs, files in os.walk(d):
+                if "core-image-sato" in root:
+                    # SDKMACHINE changing will change
+                    # do_rootfs/do_testimage/do_build stamps of images which
+                    # is safe to ignore.
+                    continue
+                f.extend(os.path.join(root, name) for name in files)
+            return f
+        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps/")
+        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps/")
+        files2 = [x.replace("tmp-sstatesamehash2", "tmp-sstatesamehash").replace("i686-linux", "x86_64-linux").replace("i686" + self.target_vendor + "-linux", "x86_64" + self.target_vendor + "-linux", ) for x in files2]
+        self.maxDiff = None
+        self.assertCountEqual(files1, files2)
+
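+    # A minimal, illustrative sketch (not called by any test) of the pattern shared by
+    # the *_same_hash and *_samesigs tests here: write a config, dump signatures without
+    # building via 'bitbake -S none', and collect the resulting stamp files for
+    # comparison. All arguments are placeholders.
+    @staticmethod
+    def _example_dump_stamps(selftest_case, config, tmpdir, target="core-image-sato"):
+        """Illustrative helper only; returns the list of stamp files produced."""
+        selftest_case.write_config(config)
+        selftest_case.track_for_cleanup(tmpdir)
+        bitbake("%s -S none" % target)
+        stamps = []
+        for root, dirs, files in os.walk(os.path.join(tmpdir, "stamps")):
+            stamps.extend(os.path.join(root, name) for name in files)
+        return stamps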
+
+    @OETestID(1271)
+    def test_sstate_nativelsbstring_same_hash(self):
+        """
+        The sstate checksums should be independent of whichever NATIVELSBSTRING is
+        detected. Rather than requiring two different build machines and running
+        builds, override the variables manually and check using bitbake -S.
+        """
+
+        self.write_config("""
+TMPDIR = \"${TOPDIR}/tmp-sstatesamehash\"
+NATIVELSBSTRING = \"DistroA\"
+""")
+        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash")
+        bitbake("core-image-sato -S none")
+        self.write_config("""
+TMPDIR = \"${TOPDIR}/tmp-sstatesamehash2\"
+NATIVELSBSTRING = \"DistroB\"
+""")
+        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash2")
+        bitbake("core-image-sato -S none")
+
+        def get_files(d):
+            f = []
+            for root, dirs, files in os.walk(d):
+                f.extend(os.path.join(root, name) for name in files)
+            return f
+        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps/")
+        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps/")
+        files2 = [x.replace("tmp-sstatesamehash2", "tmp-sstatesamehash") for x in files2]
+        self.maxDiff = None
+        self.assertCountEqual(files1, files2)
+
+    @OETestID(1368)
+    def test_sstate_allarch_samesigs(self):
+        """
+        The sstate checksums of allarch packages should be independent of whichever
+        MACHINE is set. Check this using bitbake -S.
+        Also, rather than duplicate the test, check nativesdk stamps are the same between
+        the two MACHINE values.
+        """
+
+        configA = """
+TMPDIR = \"${TOPDIR}/tmp-sstatesamehash\"
+MACHINE = \"qemux86-64\"
+"""
+        configB = """
+TMPDIR = \"${TOPDIR}/tmp-sstatesamehash2\"
+MACHINE = \"qemuarm\"
+"""
+        self.sstate_allarch_samesigs(configA, configB)
+
+    @OETestID(1645)
+    def test_sstate_allarch_samesigs_multilib(self):
+        """
+        The sstate checksums of allarch multilib packages should be independent of whichever
+        MACHINE is set. Check this using bitbake -S.
+        Also, rather than duplicate the test, check nativesdk stamps are the same between
+        the two MACHINE values.
+        """
+
+        configA = """
+TMPDIR = \"${TOPDIR}/tmp-sstatesamehash\"
+MACHINE = \"qemux86-64\"
+require conf/multilib.conf
+MULTILIBS = \"multilib:lib32\"
+DEFAULTTUNE_virtclass-multilib-lib32 = \"x86\"
+"""
+        configB = """
+TMPDIR = \"${TOPDIR}/tmp-sstatesamehash2\"
+MACHINE = \"qemuarm\"
+require conf/multilib.conf
+MULTILIBS = \"\"
+"""
+        self.sstate_allarch_samesigs(configA, configB)
+
+    def sstate_allarch_samesigs(self, configA, configB):
+
+        self.write_config(configA)
+        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash")
+        bitbake("world meta-toolchain -S none")
+        self.write_config(configB)
+        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash2")
+        bitbake("world meta-toolchain -S none")
+
+        def get_files(d):
+            f = {}
+            for root, dirs, files in os.walk(d):
+                for name in files:
+                    if "meta-environment" in root or "cross-canadian" in root:
+                        continue
+                    if "do_build" not in name:
+                        # 1.4.1+gitAUTOINC+302fca9f4c-r0.do_package_write_ipk.sigdata.f3a2a38697da743f0dbed8b56aafcf79
+                        (_, task, _, shash) = name.rsplit(".", 3)
+                        f[os.path.join(os.path.basename(root), task)] = shash
+            return f
+        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps/all" + self.target_vendor + "-" + self.target_os)
+        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps/all" + self.target_vendor + "-" + self.target_os)
+        self.maxDiff = None
+        self.assertEqual(files1, files2)
+
+        nativesdkdir = os.path.basename(glob.glob(self.topdir + "/tmp-sstatesamehash/stamps/*-nativesdk*-linux")[0])
+
+        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps/" + nativesdkdir)
+        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps/" + nativesdkdir)
+        self.maxDiff = None
+        self.assertEqual(files1, files2)
+
+    @OETestID(1369)
+    def test_sstate_sametune_samesigs(self):
+        """
+        The sstate checksums of two identical machines (using the same tune) should be the
+        same, apart from changes within the machine specific stamps directory. We use the
+        qemux86copy machine to test this. Also include multilibs in the test.
+        """
+
+        self.write_config("""
+TMPDIR = \"${TOPDIR}/tmp-sstatesamehash\"
+MACHINE = \"qemux86\"
+require conf/multilib.conf
+MULTILIBS = "multilib:lib32"
+DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
+""")
+        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash")
+        bitbake("world meta-toolchain -S none")
+        self.write_config("""
+TMPDIR = \"${TOPDIR}/tmp-sstatesamehash2\"
+MACHINE = \"qemux86copy\"
+require conf/multilib.conf
+MULTILIBS = "multilib:lib32"
+DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
+""")
+        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash2")
+        bitbake("world meta-toolchain -S none")
+
+        def get_files(d):
+            f = []
+            for root, dirs, files in os.walk(d):
+                for name in files:
+                    if "meta-environment" in root or "cross-canadian" in root:
+                        continue
+                    if "qemux86copy-" in root or "qemux86-" in root:
+                        continue
+                    if "do_build" not in name and "do_populate_sdk" not in name:
+                        f.append(os.path.join(root, name))
+            return f
+        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps")
+        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps")
+        files2 = [x.replace("tmp-sstatesamehash2", "tmp-sstatesamehash") for x in files2]
+        self.maxDiff = None
+        self.assertCountEqual(files1, files2)
+
+
+    @OETestID(1498)
+    def test_sstate_noop_samesigs(self):
+        """
+        The sstate checksums of two builds should be the same even when these
+        variables are changed or these classes are inherited.
+        """
+
+        self.write_config("""
+TMPDIR = "${TOPDIR}/tmp-sstatesamehash"
+BB_NUMBER_THREADS = "${@oe.utils.cpu_count()}"
+PARALLEL_MAKE = "-j 1"
+DL_DIR = "${TOPDIR}/download1"
+TIME = "111111"
+DATE = "20161111"
+INHERIT_remove = "buildstats-summary buildhistory uninative"
+http_proxy = ""
+""")
+        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash")
+        self.track_for_cleanup(self.topdir + "/download1")
+        bitbake("world meta-toolchain -S none")
+        self.write_config("""
+TMPDIR = "${TOPDIR}/tmp-sstatesamehash2"
+BB_NUMBER_THREADS = "${@oe.utils.cpu_count()+1}"
+PARALLEL_MAKE = "-j 2"
+DL_DIR = "${TOPDIR}/download2"
+TIME = "222222"
+DATE = "20161212"
+# Always remove uninative as we're changing proxies
+INHERIT_remove = "uninative"
+INHERIT += "buildstats-summary buildhistory"
+http_proxy = "http://example.com/"
+""")
+        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash2")
+        self.track_for_cleanup(self.topdir + "/download2")
+        bitbake("world meta-toolchain -S none")
+
+        def get_files(d):
+            f = {}
+            for root, dirs, files in os.walk(d):
+                for name in files:
+                    name, shash = name.rsplit('.', 1)
+                    # Extract just the machine and recipe name
+                    base = os.sep.join(root.rsplit(os.sep, 2)[-2:] + [name])
+                    f[base] = shash
+            return f
+
+        def compare_sigfiles(files, files1, files2, compare=False):
+            for k in files:
+                if k in files1 and k in files2:
+                    print("%s differs:" % k)
+                    if compare:
+                        sigdatafile1 = self.topdir + "/tmp-sstatesamehash/stamps/" + k + "." + files1[k]
+                        sigdatafile2 = self.topdir + "/tmp-sstatesamehash2/stamps/" + k + "." + files2[k]
+                        output = bb.siggen.compare_sigfiles(sigdatafile1, sigdatafile2)
+                        if output:
+                            print('\n'.join(output))
+                elif k in files1 and k not in files2:
+                    print("%s in files1" % k)
+                elif k not in files1 and k in files2:
+                    print("%s in files2" % k)
+                else:
+                    assert False, "shouldn't reach here"
+
+        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps/")
+        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps/")
+        # Remove items that are identical in both sets
+        for k,v in files1.items() & files2.items():
+            del files1[k]
+            del files2[k]
+        if not files1 and not files2:
+            # No changes, so we're done
+            return
+
+        files = list(files1.keys() | files2.keys())
+        # this is an expensive computation, thus just compare the first 'max_sigfiles_to_compare' files
+        max_sigfiles_to_compare = 20
+        first, rest = files[:max_sigfiles_to_compare], files[max_sigfiles_to_compare:]
+        compare_sigfiles(first, files1.keys(), files2.keys(), compare=True)
+        compare_sigfiles(rest, files1.keys(), files2.keys(), compare=False)
+
+        self.fail("sstate hashes not identical.")
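+
+
+# An illustrative sketch (not called by any test) of the bb.siggen call used in
+# test_sstate_noop_samesigs above: given two sigdata/siginfo stamp files for the same
+# task, compare_sigfiles() returns human-readable lines describing what differs.
+def _example_explain_signature_difference(sigdata_file1, sigdata_file2):
+    """Illustrative helper only; returns a textual diff of the two signature files."""
+    output = bb.siggen.compare_sigfiles(sigdata_file1, sigdata_file2)
+    return '\n'.join(output) if output else "signatures are identical"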
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/tinfoil.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/tinfoil.py
new file mode 100644
index 0000000..f889a47
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/tinfoil.py
@@ -0,0 +1,231 @@
+import os
+import re
+import time
+import logging
+import bb.tinfoil
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd
+from oeqa.core.decorator.oeid import OETestID
+
+class TinfoilTests(OESelftestTestCase):
+    """ Basic tests for the tinfoil API """
+
+    @OETestID(1568)
+    def test_getvar(self):
+        with bb.tinfoil.Tinfoil() as tinfoil:
+            tinfoil.prepare(True)
+            machine = tinfoil.config_data.getVar('MACHINE')
+            if not machine:
+                self.fail('Unable to get MACHINE value - returned %s' % machine)
+
+    @OETestID(1569)
+    def test_expand(self):
+        with bb.tinfoil.Tinfoil() as tinfoil:
+            tinfoil.prepare(True)
+            expr = '${@os.getpid()}'
+            pid = tinfoil.config_data.expand(expr)
+            if not pid:
+                self.fail('Unable to expand "%s" - returned %s' % (expr, pid))
+
+    @OETestID(1570)
+    def test_getvar_bb_origenv(self):
+        with bb.tinfoil.Tinfoil() as tinfoil:
+            tinfoil.prepare(True)
+            origenv = tinfoil.config_data.getVar('BB_ORIGENV', False)
+            if not origenv:
+                self.fail('Unable to get BB_ORIGENV value - returned %s' % origenv)
+            self.assertEqual(origenv.getVar('HOME', False), os.environ['HOME'])
+
+    @OETestID(1571)
+    def test_parse_recipe(self):
+        with bb.tinfoil.Tinfoil() as tinfoil:
+            tinfoil.prepare(config_only=False, quiet=2)
+            testrecipe = 'mdadm'
+            best = tinfoil.find_best_provider(testrecipe)
+            if not best:
+                self.fail('Unable to find recipe providing %s' % testrecipe)
+            rd = tinfoil.parse_recipe_file(best[3])
+            self.assertEqual(testrecipe, rd.getVar('PN'))
+
+    @OETestID(1572)
+    def test_parse_recipe_copy_expand(self):
+        with bb.tinfoil.Tinfoil() as tinfoil:
+            tinfoil.prepare(config_only=False, quiet=2)
+            testrecipe = 'mdadm'
+            best = tinfoil.find_best_provider(testrecipe)
+            if not best:
+                self.fail('Unable to find recipe providing %s' % testrecipe)
+            rd = tinfoil.parse_recipe_file(best[3])
+            # Check we can get variable values
+            self.assertEqual(testrecipe, rd.getVar('PN'))
+            # Check that expanding a value that includes a variable reference works
+            self.assertEqual(testrecipe, rd.getVar('BPN'))
+            # Now check that changing the referenced variable's value in a copy gives that
+            # value when expanding
+            localdata = bb.data.createCopy(rd)
+            localdata.setVar('PN', 'hello')
+            self.assertEqual('hello', localdata.getVar('BPN'))
+
+    @OETestID(1573)
+    def test_parse_recipe_initial_datastore(self):
+        with bb.tinfoil.Tinfoil() as tinfoil:
+            tinfoil.prepare(config_only=False, quiet=2)
+            testrecipe = 'mdadm'
+            best = tinfoil.find_best_provider(testrecipe)
+            if not best:
+                self.fail('Unable to find recipe providing %s' % testrecipe)
+            dcopy = bb.data.createCopy(tinfoil.config_data)
+            dcopy.setVar('MYVARIABLE', 'somevalue')
+            rd = tinfoil.parse_recipe_file(best[3], config_data=dcopy)
+            # Check we can get variable values
+            self.assertEqual('somevalue', rd.getVar('MYVARIABLE'))
+
+    @OETestID(1574)
+    def test_list_recipes(self):
+        with bb.tinfoil.Tinfoil() as tinfoil:
+            tinfoil.prepare(config_only=False, quiet=2)
+            # Check pkg_pn
+            checkpns = ['tar', 'automake', 'coreutils', 'm4-native', 'nativesdk-gcc']
+            pkg_pn = tinfoil.cooker.recipecaches[''].pkg_pn
+            for pn in checkpns:
+                self.assertIn(pn, pkg_pn)
+            # Check pkg_fn
+            checkfns = {'nativesdk-gcc': '^virtual:nativesdk:.*', 'coreutils': '.*/coreutils_.*.bb'}
+            for fn, pn in tinfoil.cooker.recipecaches[''].pkg_fn.items():
+                if pn in checkpns:
+                    if pn in checkfns:
+                        self.assertTrue(re.match(checkfns[pn], fn), 'Entry for %s: %s did not match %s' % (pn, fn, checkfns[pn]))
+                    checkpns.remove(pn)
+            if checkpns:
+                self.fail('Unable to find pkg_fn entries for: %s' % ', '.join(checkpns))
+
+    @OETestID(1575)
+    def test_wait_event(self):
+        with bb.tinfoil.Tinfoil() as tinfoil:
+            tinfoil.prepare(config_only=True)
+
+            tinfoil.set_event_mask(['bb.event.FilesMatchingFound', 'bb.command.CommandCompleted'])
+
+            # Need to drain events otherwise events that were masked may still be in the queue
+            while tinfoil.wait_event():
+                pass
+
+            pattern = 'conf'
+            res = tinfoil.run_command('findFilesMatchingInDir', pattern, 'conf/machine')
+            self.assertTrue(res)
+
+            eventreceived = False
+            commandcomplete = False
+            start = time.time()
+            # Wait for 5s in total so we'd detect spurious heartbeat events for example
+            while time.time() - start < 5:
+                event = tinfoil.wait_event(1)
+                if event:
+                    if isinstance(event, bb.command.CommandCompleted):
+                        commandcomplete = True
+                    elif isinstance(event, bb.event.FilesMatchingFound):
+                        self.assertEqual(pattern, event._pattern)
+                        self.assertIn('qemuarm.conf', event._matches)
+                        eventreceived = True
+                    elif isinstance(event, logging.LogRecord):
+                        continue
+                    else:
+                        self.fail('Unexpected event: %s' % event)
+
+            self.assertTrue(commandcomplete, 'Timed out waiting for CommandCompleted event from bitbake server')
+            self.assertTrue(eventreceived, 'Did not receive FilesMatchingFound event from bitbake server')
+
+    @OETestID(1576)
+    def test_setvariable_clean(self):
+        # First check that setVariable affects the datastore
+        with bb.tinfoil.Tinfoil() as tinfoil:
+            tinfoil.prepare(config_only=True)
+            tinfoil.run_command('setVariable', 'TESTVAR', 'specialvalue')
+            self.assertEqual(tinfoil.config_data.getVar('TESTVAR'), 'specialvalue', 'Value set using setVariable is not reflected in client-side getVar()')
+
+        # Now check that the setVariable's effects are no longer present
+        # (this may legitimately break in future if we stop reinitialising
+        # the datastore, in which case we'll have to reconsider use of
+        # setVariable entirely)
+        with bb.tinfoil.Tinfoil() as tinfoil:
+            tinfoil.prepare(config_only=True)
+            self.assertNotEqual(tinfoil.config_data.getVar('TESTVAR'), 'specialvalue', 'Value set using setVariable is still present!')
+
+        # Now check that setVar on the main datastore works (uses setVariable internally)
+        with bb.tinfoil.Tinfoil() as tinfoil:
+            tinfoil.prepare(config_only=True)
+            tinfoil.config_data.setVar('TESTVAR', 'specialvalue')
+            value = tinfoil.run_command('getVariable', 'TESTVAR')
+            self.assertEqual(value, 'specialvalue', 'Value set using config_data.setVar() is not reflected in config_data.getVar()')
+
+    @OETestID(1884)
+    def test_datastore_operations(self):
+        with bb.tinfoil.Tinfoil() as tinfoil:
+            tinfoil.prepare(config_only=True)
+            # Test setVarFlag() / getVarFlag()
+            tinfoil.config_data.setVarFlag('TESTVAR', 'flagname', 'flagval')
+            value = tinfoil.config_data.getVarFlag('TESTVAR', 'flagname')
+            self.assertEqual(value, 'flagval', 'Value set using config_data.setVarFlag() is not reflected in config_data.getVarFlag()')
+            # Test delVarFlag()
+            tinfoil.config_data.setVarFlag('TESTVAR', 'otherflag', 'othervalue')
+            tinfoil.config_data.delVarFlag('TESTVAR', 'flagname')
+            value = tinfoil.config_data.getVarFlag('TESTVAR', 'flagname')
+            self.assertEqual(value, None, 'Varflag deleted using config_data.delVarFlag() is not reflected in config_data.getVarFlag()')
+            value = tinfoil.config_data.getVarFlag('TESTVAR', 'otherflag')
+            self.assertEqual(value, 'othervalue', 'Varflag deleted using config_data.delVarFlag() caused unrelated flag to be removed')
+            # Test delVar()
+            tinfoil.config_data.setVar('TESTVAR', 'varvalue')
+            value = tinfoil.config_data.getVar('TESTVAR')
+            self.assertEqual(value, 'varvalue', 'Value set using config_data.setVar() is not reflected in config_data.getVar()')
+            tinfoil.config_data.delVar('TESTVAR')
+            value = tinfoil.config_data.getVar('TESTVAR')
+            self.assertEqual(value, None, 'Variable deleted using config_data.delVar() appears to still have a value')
+            # Test renameVar()
+            tinfoil.config_data.setVar('TESTVAROLD', 'origvalue')
+            tinfoil.config_data.renameVar('TESTVAROLD', 'TESTVARNEW')
+            value = tinfoil.config_data.getVar('TESTVAROLD')
+            self.assertEqual(value, None, 'Variable renamed using config_data.renameVar() still seems to exist')
+            value = tinfoil.config_data.getVar('TESTVARNEW')
+            self.assertEqual(value, 'origvalue', 'Variable renamed using config_data.renameVar() does not appear with new name')
+            # Test overrides
+            tinfoil.config_data.setVar('TESTVAR', 'original')
+            tinfoil.config_data.setVar('TESTVAR_overrideone', 'one')
+            tinfoil.config_data.setVar('TESTVAR_overridetwo', 'two')
+            tinfoil.config_data.appendVar('OVERRIDES', ':overrideone')
+            value = tinfoil.config_data.getVar('TESTVAR')
+            self.assertEqual(value, 'one', 'Variable overrides not functioning correctly')
+
+    def test_variable_history(self):
+        # Basic test to ensure that variable history works when tracking=True
+        with bb.tinfoil.Tinfoil(tracking=True) as tinfoil:
+            tinfoil.prepare(config_only=False, quiet=2)
+            # Note that _tracking for any datastore we get will be
+            # false here, that's currently expected - so we can't check
+            # for that
+            history = tinfoil.config_data.varhistory.variable('DL_DIR')
+            for entry in history:
+                if entry['file'].endswith('/bitbake.conf'):
+                    if entry['op'] in ['set', 'set?']:
+                        break
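+            # for/else: the else branch only runs if the loop completed without
+            # hitting 'break', i.e. no matching bitbake.conf entry was found.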
+            else:
+                self.fail('Did not find history entry setting DL_DIR in bitbake.conf. History: %s' % history)
+            # Check it works for recipes as well
+            testrecipe = 'zlib'
+            rd = tinfoil.parse_recipe(testrecipe)
+            history = rd.varhistory.variable('LICENSE')
+            bbfound = -1
+            recipefound = -1
+            for i, entry in enumerate(history):
+                if entry['file'].endswith('/bitbake.conf'):
+                    if entry['detail'] == 'INVALID' and entry['op'] in ['set', 'set?']:
+                        bbfound = i
+                elif entry['file'].endswith('.bb'):
+                    if entry['op'] == 'set':
+                        recipefound = i
+            if bbfound == -1:
+                self.fail('Did not find history entry setting LICENSE in bitbake.conf parsing %s recipe. History: %s' % (testrecipe, history))
+            if recipefound == -1:
+                self.fail('Did not find history entry setting LICENSE in %s recipe. History: %s' % (testrecipe, history))
+            if bbfound > recipefound:
+                self.fail('History entry setting LICENSE in %s recipe and in bitbake.conf in wrong order. History: %s' % (testrecipe, history))
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/wic.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/wic.py
new file mode 100644
index 0000000..651d575
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/cases/wic.py
@@ -0,0 +1,1063 @@
+#!/usr/bin/env python
+# ex:ts=4:sw=4:sts=4:et
+# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
+#
+# Copyright (c) 2015, Intel Corporation.
+# All rights reserved.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# AUTHORS
+# Ed Bartosh <ed.bartosh@linux.intel.com>
+
+"""Test cases for wic."""
+
+import os
+import sys
+import unittest
+
+from glob import glob
+from shutil import rmtree, copy
+from functools import wraps, lru_cache
+from tempfile import NamedTemporaryFile
+
+from oeqa.selftest.case import OESelftestTestCase
+from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars, runqemu
+from oeqa.core.decorator.oeid import OETestID
+
+
+@lru_cache(maxsize=32)
+def get_host_arch(recipe):
+    """A cached call to get_bb_var('HOST_ARCH', <recipe>)"""
+    return get_bb_var('HOST_ARCH', recipe)
+
+
+def only_for_arch(archs, image='core-image-minimal'):
+    """Decorator for wrapping test cases that can be run only for specific target
+    architectures. A list of compatible architectures is passed in `archs`.
+    Current architecture will be determined by parsing bitbake output for
+    `image` recipe.
+    """
+    def wrapper(func):
+        @wraps(func)
+        def wrapped_f(*args, **kwargs):
+            arch = get_host_arch(image)
+            if archs and arch not in archs:
+                raise unittest.SkipTest("Testcase arch dependency not met: %s" % arch)
+            return func(*args, **kwargs)
+        wrapped_f.__name__ = func.__name__
+        return wrapped_f
+    return wrapper
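+# Typical usage (illustrative): restrict a test to x86 targets, e.g.
+#
+#     @only_for_arch(['i586', 'i686', 'x86_64'])
+#     def test_gpt_image(self):
+#         ...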
+
+
+class Wic(OESelftestTestCase):
+    """Wic test class."""
+
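+    # Class-level state shared across test methods: image_is_ready makes sure the
+    # core-image-minimal build in setUpLocal() happens only once, native_sysroot
+    # caches the wic-tools sysroot path, and wicenv_cache stores per-image .env
+    # directories.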
+    resultdir = "/var/tmp/wic.oe-selftest/"
+    image_is_ready = False
+    native_sysroot = None
+    wicenv_cache = {}
+
+    def setUpLocal(self):
+        """This code is executed before each test method."""
+        super(Wic, self).setUpLocal()
+        if not self.native_sysroot:
+            Wic.native_sysroot = get_bb_var('STAGING_DIR_NATIVE', 'wic-tools')
+
+        # Do this here instead of in setUpClass as the base setUp does some
+        # clean up which can result in the native tools built earlier in
+        # setUpClass being unavailable.
+        if not Wic.image_is_ready:
+            if get_bb_var('USE_NLS') == 'yes':
+                bitbake('wic-tools')
+            else:
+                self.skipTest('wic-tools cannot be built due to its (intltool|gettext)-native dependency when NLS is disabled')
+
+            bitbake('core-image-minimal')
+            Wic.image_is_ready = True
+
+        rmtree(self.resultdir, ignore_errors=True)
+
+    def tearDownLocal(self):
+        """Remove resultdir as it may contain images."""
+        rmtree(self.resultdir, ignore_errors=True)
+        super(Wic, self).tearDownLocal()
+
+    @OETestID(1552)
+    def test_version(self):
+        """Test wic --version"""
+        self.assertEqual(0, runCmd('wic --version').status)
+
+    @OETestID(1208)
+    def test_help(self):
+        """Test wic --help and wic -h"""
+        self.assertEqual(0, runCmd('wic --help').status)
+        self.assertEqual(0, runCmd('wic -h').status)
+
+    @OETestID(1209)
+    def test_createhelp(self):
+        """Test wic create --help"""
+        self.assertEqual(0, runCmd('wic create --help').status)
+
+    @OETestID(1210)
+    def test_listhelp(self):
+        """Test wic list --help"""
+        self.assertEqual(0, runCmd('wic list --help').status)
+
+    @OETestID(1553)
+    def test_help_create(self):
+        """Test wic help create"""
+        self.assertEqual(0, runCmd('wic help create').status)
+
+    @OETestID(1554)
+    def test_help_list(self):
+        """Test wic help list"""
+        self.assertEqual(0, runCmd('wic help list').status)
+
+    @OETestID(1215)
+    def test_help_overview(self):
+        """Test wic help overview"""
+        self.assertEqual(0, runCmd('wic help overview').status)
+
+    @OETestID(1216)
+    def test_help_plugins(self):
+        """Test wic help plugins"""
+        self.assertEqual(0, runCmd('wic help plugins').status)
+
+    @OETestID(1217)
+    def test_help_kickstart(self):
+        """Test wic help kickstart"""
+        self.assertEqual(0, runCmd('wic help kickstart').status)
+
+    @OETestID(1555)
+    def test_list_images(self):
+        """Test wic list images"""
+        self.assertEqual(0, runCmd('wic list images').status)
+
+    @OETestID(1556)
+    def test_list_source_plugins(self):
+        """Test wic list source-plugins"""
+        self.assertEqual(0, runCmd('wic list source-plugins').status)
+
+    @OETestID(1557)
+    def test_listed_images_help(self):
+        """Test wic listed images help"""
+        output = runCmd('wic list images').output
+        imagelist = [line.split()[0] for line in output.splitlines()]
+        for image in imagelist:
+            self.assertEqual(0, runCmd('wic list %s help' % image).status)
+
+    @OETestID(1213)
+    def test_unsupported_subcommand(self):
+        """Test unsupported subcommand"""
+        self.assertNotEqual(0, runCmd('wic unsupported', ignore_status=True).status)
+
+    @OETestID(1214)
+    def test_no_command(self):
+        """Test wic without command"""
+        self.assertEqual(1, runCmd('wic', ignore_status=True).status)
+
+    @OETestID(1211)
+    def test_build_image_name(self):
+        """Test wic create wictestdisk --image-name=core-image-minimal"""
+        cmd = "wic create wictestdisk --image-name=core-image-minimal -o %s" % self.resultdir
+        self.assertEqual(0, runCmd(cmd).status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
+
+    @OETestID(1157)
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    def test_gpt_image(self):
+        """Test creation of core-image-minimal with gpt table and UUID boot"""
+        cmd = "wic create directdisk-gpt --image-name core-image-minimal -o %s" % self.resultdir
+        self.assertEqual(0, runCmd(cmd).status)
+        self.assertEqual(1, len(glob(self.resultdir + "directdisk-*.direct")))
+
+    @OETestID(1346)
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    def test_iso_image(self):
+        """Test creation of hybrid iso image with legacy and EFI boot"""
+        config = 'INITRAMFS_IMAGE = "core-image-minimal-initramfs"\n'\
+                 'MACHINE_FEATURES_append = " efi"\n'\
+                 'DEPENDS_pn-core-image-minimal += "syslinux"\n'
+        self.append_config(config)
+        bitbake('core-image-minimal')
+        self.remove_config(config)
+        cmd = "wic create mkhybridiso --image-name core-image-minimal -o %s" % self.resultdir
+        self.assertEqual(0, runCmd(cmd).status)
+        self.assertEqual(1, len(glob(self.resultdir + "HYBRID_ISO_IMG-*.direct")))
+        self.assertEqual(1, len(glob(self.resultdir + "HYBRID_ISO_IMG-*.iso")))
+
+    @OETestID(1348)
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    def test_qemux86_directdisk(self):
+        """Test creation of qemux-86-directdisk image"""
+        cmd = "wic create qemux86-directdisk -e core-image-minimal -o %s" % self.resultdir
+        self.assertEqual(0, runCmd(cmd).status)
+        self.assertEqual(1, len(glob(self.resultdir + "qemux86-directdisk-*direct")))
+
+    @OETestID(1350)
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    def test_mkefidisk(self):
+        """Test creation of mkefidisk image"""
+        cmd = "wic create mkefidisk -e core-image-minimal -o %s" % self.resultdir
+        self.assertEqual(0, runCmd(cmd).status)
+        self.assertEqual(1, len(glob(self.resultdir + "mkefidisk-*direct")))
+
+    @OETestID(1385)
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    def test_bootloader_config(self):
+        """Test creation of directdisk-bootloader-config image"""
+        config = 'DEPENDS_pn-core-image-minimal += "syslinux"\n'
+        self.append_config(config)
+        bitbake('core-image-minimal')
+        self.remove_config(config)
+        cmd = "wic create directdisk-bootloader-config -e core-image-minimal -o %s" % self.resultdir
+        self.assertEqual(0, runCmd(cmd).status)
+        self.assertEqual(1, len(glob(self.resultdir + "directdisk-bootloader-config-*direct")))
+
+    @OETestID(1560)
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    def test_systemd_bootdisk(self):
+        """Test creation of systemd-bootdisk image"""
+        config = 'MACHINE_FEATURES_append = " efi"\n'
+        self.append_config(config)
+        bitbake('core-image-minimal')
+        self.remove_config(config)
+        cmd = "wic create systemd-bootdisk -e core-image-minimal -o %s" % self.resultdir
+        self.assertEqual(0, runCmd(cmd).status)
+        self.assertEqual(1, len(glob(self.resultdir + "systemd-bootdisk-*direct")))
+
+    @OETestID(1561)
+    def test_sdimage_bootpart(self):
+        """Test creation of sdimage-bootpart image"""
+        cmd = "wic create sdimage-bootpart -e core-image-minimal -o %s" % self.resultdir
+        kimgtype = get_bb_var('KERNEL_IMAGETYPE', 'core-image-minimal')
+        self.write_config('IMAGE_BOOT_FILES = "%s"\n' % kimgtype)
+        self.assertEqual(0, runCmd(cmd).status)
+        self.assertEqual(1, len(glob(self.resultdir + "sdimage-bootpart-*direct")))
+
+    @OETestID(1562)
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    def test_default_output_dir(self):
+        """Test default output location"""
+        for fname in glob("directdisk-*.direct"):
+            os.remove(fname)
+        config = 'DEPENDS_pn-core-image-minimal += "syslinux"\n'
+        self.append_config(config)
+        bitbake('core-image-minimal')
+        self.remove_config(config)
+        cmd = "wic create directdisk -e core-image-minimal"
+        self.assertEqual(0, runCmd(cmd).status)
+        self.assertEqual(1, len(glob("directdisk-*.direct")))
+
+    @OETestID(1212)
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    def test_build_artifacts(self):
+        """Test wic create directdisk providing all artifacts."""
+        bb_vars = get_bb_vars(['STAGING_DATADIR', 'RECIPE_SYSROOT_NATIVE'],
+                              'wic-tools')
+        bb_vars.update(get_bb_vars(['DEPLOY_DIR_IMAGE', 'IMAGE_ROOTFS'],
+                                   'core-image-minimal'))
+        bbvars = {key.lower(): value for key, value in bb_vars.items()}
+        bbvars['resultdir'] = self.resultdir
+        status = runCmd("wic create directdisk "
+                        "-b %(staging_datadir)s "
+                        "-k %(deploy_dir_image)s "
+                        "-n %(recipe_sysroot_native)s "
+                        "-r %(image_rootfs)s "
+                        "-o %(resultdir)s" % bbvars).status
+        self.assertEqual(0, status)
+        self.assertEqual(1, len(glob(self.resultdir + "directdisk-*.direct")))
+
+    @OETestID(1264)
+    def test_compress_gzip(self):
+        """Test compressing an image with gzip"""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name core-image-minimal "
+                                   "-c gzip -o %s" % self.resultdir).status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct.gz")))
+
+    @OETestID(1265)
+    def test_compress_bzip2(self):
+        """Test compressing an image with bzip2"""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "-c bzip2 -o %s" % self.resultdir).status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct.bz2")))
+
+    @OETestID(1266)
+    def test_compress_xz(self):
+        """Test compressing an image with xz"""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "--compress-with=xz -o %s" % self.resultdir).status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct.xz")))
+
+    @OETestID(1267)
+    def test_wrong_compressor(self):
+        """Test how wic breaks if wrong compressor is provided"""
+        self.assertEqual(2, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "-c wrong -o %s" % self.resultdir,
+                                   ignore_status=True).status)
+
+    @OETestID(1558)
+    def test_debug_short(self):
+        """Test -D option"""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "-D -o %s" % self.resultdir).status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
+
+    @OETestID(1658)
+    def test_debug_long(self):
+        """Test --debug option"""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "--debug -o %s" % self.resultdir).status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
+
+    @OETestID(1563)
+    def test_skip_build_check_short(self):
+        """Test -s option"""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "-s -o %s" % self.resultdir).status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
+
+    @OETestID(1671)
+    def test_skip_build_check_long(self):
+        """Test --skip-build-check option"""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "--skip-build-check "
+                                   "--outdir %s" % self.resultdir).status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
+
+    @OETestID(1564)
+    def test_build_rootfs_short(self):
+        """Test -f option"""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "-f -o %s" % self.resultdir).status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
+
+    @OETestID(1656)
+    def test_build_rootfs_long(self):
+        """Test --build-rootfs option"""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "--build-rootfs "
+                                   "--outdir %s" % self.resultdir).status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
+
+    @OETestID(1268)
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    def test_rootfs_indirect_recipes(self):
+        """Test usage of rootfs plugin with rootfs recipes"""
+        status = runCmd("wic create directdisk-multi-rootfs "
+                        "--image-name=core-image-minimal "
+                        "--rootfs rootfs1=core-image-minimal "
+                        "--rootfs rootfs2=core-image-minimal "
+                        "--outdir %s" % self.resultdir).status
+        self.assertEqual(0, status)
+        self.assertEqual(1, len(glob(self.resultdir + "directdisk-multi-rootfs*.direct")))
+
+    @OETestID(1269)
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    def test_rootfs_artifacts(self):
+        """Test usage of rootfs plugin with rootfs paths"""
+        bb_vars = get_bb_vars(['STAGING_DATADIR', 'RECIPE_SYSROOT_NATIVE'],
+                              'wic-tools')
+        bb_vars.update(get_bb_vars(['DEPLOY_DIR_IMAGE', 'IMAGE_ROOTFS'],
+                                   'core-image-minimal'))
+        bbvars = {key.lower(): value for key, value in bb_vars.items()}
+        bbvars['wks'] = "directdisk-multi-rootfs"
+        bbvars['resultdir'] = self.resultdir
+        status = runCmd("wic create %(wks)s "
+                        "--bootimg-dir=%(staging_datadir)s "
+                        "--kernel-dir=%(deploy_dir_image)s "
+                        "--native-sysroot=%(recipe_sysroot_native)s "
+                        "--rootfs-dir rootfs1=%(image_rootfs)s "
+                        "--rootfs-dir rootfs2=%(image_rootfs)s "
+                        "--outdir %(resultdir)s" % bbvars).status
+        self.assertEqual(0, status)
+        self.assertEqual(1, len(glob(self.resultdir + "%(wks)s-*.direct" % bbvars)))
+
+    @OETestID(1661)
+    def test_exclude_path(self):
+        """Test --exclude-path wks option."""
+
+        oldpath = os.environ['PATH']
+        os.environ['PATH'] = get_bb_var("PATH", "wic-tools")
+
+        try:
+            wks_file = 'temp.wks'
+            with open(wks_file, 'w') as wks:
+                rootfs_dir = get_bb_var('IMAGE_ROOTFS', 'core-image-minimal')
+                wks.write("""
+part / --source rootfs --ondisk mmcblk0 --fstype=ext4 --exclude-path usr
+part /usr --source rootfs --ondisk mmcblk0 --fstype=ext4 --rootfs-dir %s/usr
+part /etc --source rootfs --ondisk mmcblk0 --fstype=ext4 --exclude-path bin/ --rootfs-dir %s/usr"""
+                          % (rootfs_dir, rootfs_dir))
+            self.assertEqual(0, runCmd("wic create %s -e core-image-minimal -o %s" \
+                                       % (wks_file, self.resultdir)).status)
+
+            os.remove(wks_file)
+            wicout = glob(self.resultdir + "%s-*direct" % 'temp')
+            self.assertEqual(1, len(wicout))
+
+            wicimg = wicout[0]
+
+            # verify partition size with wic
+            res = runCmd("parted -m %s unit b p 2>/dev/null" % wicimg)
+            self.assertEqual(0, res.status)
+
+            # parse parted output which looks like this:
+            # BYT;\n
+            # /var/tmp/wic/build/tmpfwvjjkf_-201611101222-hda.direct:200MiB:file:512:512:msdos::;\n
+            # 1:0.00MiB:200MiB:200MiB:ext4::;\n
+            partlns = res.output.splitlines()[2:]
+
+            self.assertEqual(3, len(partlns))
+
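+            # Extract each partition into its own file with dd, converting the byte
+            # offsets and lengths reported by parted into 512-byte sector counts.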
+            for part in [1, 2, 3]:
+                part_file = os.path.join(self.resultdir, "selftest_img.part%d" % part)
+                partln = partlns[part-1].split(":")
+                self.assertEqual(7, len(partln))
+                start = int(partln[1].rstrip("B")) / 512
+                length = int(partln[3].rstrip("B")) / 512
+                self.assertEqual(0, runCmd("dd if=%s of=%s skip=%d count=%d" %
+                                           (wicimg, part_file, start, length)).status)
+
+            def extract_files(debugfs_output):
+                """
+                extract file names from the output of debugfs -R 'ls -p',
+                which looks like this:
+
+                 /2/040755/0/0/.//\n
+                 /2/040755/0/0/..//\n
+                 /11/040700/0/0/lost+found^M//\n
+                 /12/040755/1002/1002/run//\n
+                 /13/040755/1002/1002/sys//\n
+                 /14/040755/1002/1002/bin//\n
+                 /80/040755/1002/1002/var//\n
+                 /92/040755/1002/1002/tmp//\n
+                """
+                # NOTE the occasional ^M in file names
+                return [line.split('/')[5].strip() for line in \
+                        debugfs_output.strip().split('/\n')]
+
+            # Test partition 1, should contain the normal root directories, except
+            # /usr.
+            res = runCmd("debugfs -R 'ls -p' %s 2>/dev/null" % \
+                             os.path.join(self.resultdir, "selftest_img.part1"))
+            self.assertEqual(0, res.status)
+            files = extract_files(res.output)
+            self.assertIn("etc", files)
+            self.assertNotIn("usr", files)
+
+            # Partition 2, should contain common directories for /usr, not root
+            # directories.
+            res = runCmd("debugfs -R 'ls -p' %s 2>/dev/null" % \
+                             os.path.join(self.resultdir, "selftest_img.part2"))
+            self.assertEqual(0, res.status)
+            files = extract_files(res.output)
+            self.assertNotIn("etc", files)
+            self.assertNotIn("usr", files)
+            self.assertIn("share", files)
+
+            # Partition 3, should contain the same as partition 2, including the bin
+            # directory, but not the files inside it.
+            res = runCmd("debugfs -R 'ls -p' %s 2>/dev/null" % \
+                             os.path.join(self.resultdir, "selftest_img.part3"))
+            self.assertEqual(0, res.status)
+            files = extract_files(res.output)
+            self.assertNotIn("etc", files)
+            self.assertNotIn("usr", files)
+            self.assertIn("share", files)
+            self.assertIn("bin", files)
+            res = runCmd("debugfs -R 'ls -p bin' %s 2>/dev/null" % \
+                             os.path.join(self.resultdir, "selftest_img.part3"))
+            self.assertEqual(0, res.status)
+            files = extract_files(res.output)
+            self.assertIn(".", files)
+            self.assertIn("..", files)
+            self.assertEqual(2, len(files))
+
+            for part in [1, 2, 3]:
+                part_file = os.path.join(self.resultdir, "selftest_img.part%d" % part)
+                os.remove(part_file)
+
+        finally:
+            os.environ['PATH'] = oldpath
+
+    @OETestID(1662)
+    def test_exclude_path_errors(self):
+        """Test --exclude-path wks option error handling."""
+        wks_file = 'temp.wks'
+
+        # Absolute argument.
+        with open(wks_file, 'w') as wks:
+            wks.write("part / --source rootfs --ondisk mmcblk0 --fstype=ext4 --exclude-path /usr")
+        self.assertNotEqual(0, runCmd("wic create %s -e core-image-minimal -o %s" \
+                                      % (wks_file, self.resultdir), ignore_status=True).status)
+        os.remove(wks_file)
+
+        # Argument pointing to parent directory.
+        with open(wks_file, 'w') as wks:
+            wks.write("part / --source rootfs --ondisk mmcblk0 --fstype=ext4 --exclude-path ././..")
+        self.assertNotEqual(0, runCmd("wic create %s -e core-image-minimal -o %s" \
+                                      % (wks_file, self.resultdir), ignore_status=True).status)
+        os.remove(wks_file)
+
+    @OETestID(1496)
+    def test_bmap_short(self):
+        """Test generation of .bmap file -m option"""
+        cmd = "wic create wictestdisk -e core-image-minimal -m -o %s" % self.resultdir
+        status = runCmd(cmd).status
+        self.assertEqual(0, status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*direct")))
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*direct.bmap")))
+
+    @OETestID(1655)
+    def test_bmap_long(self):
+        """Test generation of .bmap file --bmap option"""
+        cmd = "wic create wictestdisk -e core-image-minimal --bmap -o %s" % self.resultdir
+        status = runCmd(cmd).status
+        self.assertEqual(0, status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*direct")))
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*direct.bmap")))
+
+    def _get_image_env_path(self, image):
+        """Generate and obtain the path to <image>.env"""
+        if image not in self.wicenv_cache:
+            self.assertEqual(0, bitbake('%s -c do_rootfs_wicenv' % image).status)
+            bb_vars = get_bb_vars(['STAGING_DIR', 'MACHINE'], image)
+            stdir = bb_vars['STAGING_DIR']
+            machine = bb_vars['MACHINE']
+            self.wicenv_cache[image] = os.path.join(stdir, machine, 'imgdata')
+        return self.wicenv_cache[image]
+
+    @OETestID(1347)
+    def test_image_env(self):
+        """Test generation of <image>.env files."""
+        image = 'core-image-minimal'
+        imgdatadir = self._get_image_env_path(image)
+
+        bb_vars = get_bb_vars(['IMAGE_BASENAME', 'WICVARS'], image)
+        basename = bb_vars['IMAGE_BASENAME']
+        self.assertEqual(basename, image)
+        path = os.path.join(imgdatadir, basename) + '.env'
+        self.assertTrue(os.path.isfile(path))
+
+        wicvars = set(bb_vars['WICVARS'].split())
+        # filter out optional variables
+        wicvars = wicvars.difference(('DEPLOY_DIR_IMAGE', 'IMAGE_BOOT_FILES',
+                                      'INITRD', 'INITRD_LIVE', 'ISODIR'))
+        with open(path) as envfile:
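+            # Each line of the .env file is expected to have the form VAR="value";
+            # splitting on the first '=' keeps any '=' inside the value intact.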
+            content = dict(line.split("=", 1) for line in envfile)
+            # test that the variables used by wic are present in the .env file
+            for var in wicvars:
+                self.assertTrue(var in content, "%s is not in .env file" % var)
+                self.assertTrue(content[var])
+
+    @OETestID(1559)
+    def test_image_vars_dir_short(self):
+        """Test image vars directory selection -v option"""
+        image = 'core-image-minimal'
+        imgenvdir = self._get_image_env_path(image)
+        native_sysroot = get_bb_var("RECIPE_SYSROOT_NATIVE", "wic-tools")
+
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=%s -v %s -n %s -o %s"
+                                   % (image, imgenvdir, native_sysroot,
+                                      self.resultdir)).status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*direct")))
+
+    @OETestID(1665)
+    def test_image_vars_dir_long(self):
+        """Test image vars directory selection --vars option"""
+        image = 'core-image-minimal'
+        imgenvdir = self._get_image_env_path(image)
+        native_sysroot = get_bb_var("RECIPE_SYSROOT_NATIVE", "wic-tools")
+
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=%s "
+                                   "--vars %s "
+                                   "--native-sysroot %s "
+                                   "--outdir %s"
+                                   % (image, imgenvdir, native_sysroot,
+                                      self.resultdir)).status)
+        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*direct")))
+
+    @OETestID(1351)
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    def test_wic_image_type(self):
+        """Test building wic images by bitbake"""
+        config = 'IMAGE_FSTYPES += "wic"\nWKS_FILE = "wic-image-minimal"\n'\
+                 'MACHINE_FEATURES_append = " efi"\n'
+        self.append_config(config)
+        self.assertEqual(0, bitbake('wic-image-minimal').status)
+        self.remove_config(config)
+
+        bb_vars = get_bb_vars(['DEPLOY_DIR_IMAGE', 'MACHINE'])
+        deploy_dir = bb_vars['DEPLOY_DIR_IMAGE']
+        machine = bb_vars['MACHINE']
+        prefix = os.path.join(deploy_dir, 'wic-image-minimal-%s.' % machine)
+        # check if we have result image and manifests symlinks
+        # pointing to existing files
+        for suffix in ('wic', 'manifest'):
+            path = prefix + suffix
+            self.assertTrue(os.path.islink(path))
+            self.assertTrue(os.path.isfile(os.path.realpath(path)))
+
+    @OETestID(1422)
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    def test_qemu(self):
+        """Test wic-image-minimal under qemu"""
+        config = 'IMAGE_FSTYPES += "wic"\nWKS_FILE = "wic-image-minimal"\n'\
+                 'MACHINE_FEATURES_append = " efi"\n'
+        self.append_config(config)
+        self.assertEqual(0, bitbake('wic-image-minimal').status)
+        self.remove_config(config)
+
+        with runqemu('wic-image-minimal', ssh=False) as qemu:
+            cmd = "mount |grep '^/dev/' | cut -f1,3 -d ' '"
+            status, output = qemu.run_serial(cmd)
+            self.assertEqual(1, status, 'Failed to run command "%s": %s' % (cmd, output))
+            self.assertEqual(output, '/dev/root /\r\n/dev/sda1 /boot\r\n/dev/sda3 /mnt')
+
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    @OETestID(1852)
+    def test_qemu_efi(self):
+        """Test core-image-minimal efi image under qemu"""
+        config = 'IMAGE_FSTYPES = "wic"\nWKS_FILE = "mkefidisk.wks"\n'
+        self.append_config(config)
+        self.assertEqual(0, bitbake('core-image-minimal ovmf').status)
+        self.remove_config(config)
+
+        with runqemu('core-image-minimal', ssh=False,
+                     runqemuparams='ovmf', image_fstype='wic') as qemu:
+            cmd = "grep sda. /proc/partitions  |wc -l"
+            status, output = qemu.run_serial(cmd)
+            self.assertEqual(1, status, 'Failed to run command "%s": %s' % (cmd, output))
+            self.assertEqual(output, '3')
+
+    @staticmethod
+    def _make_fixed_size_wks(size):
+        """
+        Create a wks of an image with a single partition. Size of the partition is set
+        using --fixed-size flag. Returns a tuple: (path to wks file, wks image name)
+        """
+        with NamedTemporaryFile("w", suffix=".wks", delete=False) as tempf:
+            wkspath = tempf.name
+            tempf.write("part " \
+                     "--source rootfs --ondisk hda --align 4 --fixed-size %d "
+                     "--fstype=ext4\n" % size)
+        wksname = os.path.splitext(os.path.basename(wkspath))[0]
+
+        return wkspath, wksname
+
+    @OETestID(1847)
+    def test_fixed_size(self):
+        """
+        Test creation of a simple image with the partition size controlled through
+        the --fixed-size flag
+        """
+        wkspath, wksname = Wic._make_fixed_size_wks(200)
+
+        self.assertEqual(0, runCmd("wic create %s -e core-image-minimal -o %s" \
+                                   % (wkspath, self.resultdir)).status)
+        os.remove(wkspath)
+        wicout = glob(self.resultdir + "%s-*direct" % wksname)
+        self.assertEqual(1, len(wicout))
+
+        wicimg = wicout[0]
+
+        # verify partition size with wic
+        res = runCmd("parted -m %s unit mib p 2>/dev/null" % wicimg,
+                     ignore_status=True,
+                     native_sysroot=self.native_sysroot)
+        self.assertEqual(0, res.status)
+
+        # parse parted output which looks like this:
+        # BYT;\n
+        # /var/tmp/wic/build/tmpfwvjjkf_-201611101222-hda.direct:200MiB:file:512:512:msdos::;\n
+        # 1:0.00MiB:200MiB:200MiB:ext4::;\n
+        partlns = res.output.splitlines()[2:]
+
+        self.assertEqual(1, len(partlns))
+        self.assertEqual("1:0.00MiB:200MiB:200MiB:ext4::;", partlns[0])
+
+    @OETestID(1848)
+    def test_fixed_size_error(self):
+        """
+        Test creation of a simple image with the partition size controlled through
+        the --fixed-size flag. The size of the partition is intentionally set to 1MiB
+        in order to trigger an error in wic.
+        """
+        wkspath, wksname = Wic._make_fixed_size_wks(1)
+
+        self.assertEqual(1, runCmd("wic create %s -e core-image-minimal -o %s" \
+                                   % (wkspath, self.resultdir), ignore_status=True).status)
+        os.remove(wkspath)
+        wicout = glob(self.resultdir + "%s-*direct" % wksname)
+        self.assertEqual(0, len(wicout))
+
+    @only_for_arch(['i586', 'i686', 'x86_64'])
+    @OETestID(1854)
+    def test_rawcopy_plugin_qemu(self):
+        """Test rawcopy plugin in qemu"""
+        # build ext4 and wic images
+        for fstype in ("ext4", "wic"):
+            config = 'IMAGE_FSTYPES = "%s"\nWKS_FILE = "test_rawcopy_plugin.wks.in"\n' % fstype
+            self.append_config(config)
+            self.assertEqual(0, bitbake('core-image-minimal').status)
+            self.remove_config(config)
+
+        with runqemu('core-image-minimal', ssh=False, image_fstype='wic') as qemu:
+            cmd = "grep sda. /proc/partitions  |wc -l"
+            status, output = qemu.run_serial(cmd)
+            self.assertEqual(1, status, 'Failed to run command "%s": %s' % (cmd, output))
+            self.assertEqual(output, '2')
+
+    @OETestID(1853)
+    def test_rawcopy_plugin(self):
+        """Test rawcopy plugin"""
+        img = 'core-image-minimal'
+        machine = get_bb_var('MACHINE', img)
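+        # The rawcopy source plugin copies the referenced file into the partition
+        # as-is; here it is assumed that a <image>-<machine>.ext4 artifact exists
+        # from an earlier core-image-minimal build.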
+        with NamedTemporaryFile("w", suffix=".wks") as wks:
+            wks.writelines(['part /boot --active --source bootimg-pcbios\n',
+                            'part / --source rawcopy --sourceparams="file=%s-%s.ext4" --use-uuid\n'\
+                             % (img, machine),
+                            'bootloader --timeout=0 --append="console=ttyS0,115200n8"\n'])
+            wks.flush()
+            cmd = "wic create %s -e %s -o %s" % (wks.name, img, self.resultdir)
+            self.assertEqual(0, runCmd(cmd).status)
+            wksname = os.path.splitext(os.path.basename(wks.name))[0]
+            out = glob(self.resultdir + "%s-*direct" % wksname)
+            self.assertEqual(1, len(out))
+
+    @OETestID(1849)
+    def test_fs_types(self):
+        """Test filesystem types for empty and not empty partitions"""
+        img = 'core-image-minimal'
+        with NamedTemporaryFile("w", suffix=".wks") as wks:
+            wks.writelines(['part ext2   --fstype ext2     --source rootfs\n',
+                            'part btrfs  --fstype btrfs    --source rootfs --size 40M\n',
+                            'part squash --fstype squashfs --source rootfs\n',
+                            'part swap   --fstype swap --size 1M\n',
+                            'part emptyvfat   --fstype vfat   --size 1M\n',
+                            'part emptymsdos  --fstype msdos  --size 1M\n',
+                            'part emptyext2   --fstype ext2   --size 1M\n',
+                            'part emptybtrfs  --fstype btrfs  --size 100M\n'])
+            wks.flush()
+            cmd = "wic create %s -e %s -o %s" % (wks.name, img, self.resultdir)
+            self.assertEqual(0, runCmd(cmd).status)
+            wksname = os.path.splitext(os.path.basename(wks.name))[0]
+            out = glob(self.resultdir + "%s-*direct" % wksname)
+            self.assertEqual(1, len(out))
+
+    @OETestID(1851)
+    def test_kickstart_parser(self):
+        """Test wks parser options"""
+        with NamedTemporaryFile("w", suffix=".wks") as wks:
+            wks.writelines(['part / --fstype ext3 --source rootfs --system-id 0xFF '\
+                            '--overhead-factor 1.2 --size 100k\n'])
+            wks.flush()
+            cmd = "wic create %s -e core-image-minimal -o %s" % (wks.name, self.resultdir)
+            self.assertEqual(0, runCmd(cmd).status)
+            wksname = os.path.splitext(os.path.basename(wks.name))[0]
+            out = glob(self.resultdir + "%s-*direct" % wksname)
+            self.assertEqual(1, len(out))
+
+    @OETestID(1850)
+    def test_image_bootpart_globbed(self):
+        """Test globbed sources with image-bootpart plugin"""
+        img = "core-image-minimal"
+        cmd = "wic create sdimage-bootpart -e %s -o %s" % (img, self.resultdir)
+        config = 'IMAGE_BOOT_FILES = "%s*"' % get_bb_var('KERNEL_IMAGETYPE', img)
+        self.append_config(config)
+        self.assertEqual(0, runCmd(cmd).status)
+        self.remove_config(config)
+        self.assertEqual(1, len(glob(self.resultdir + "sdimage-bootpart-*direct")))
+
+    @OETestID(1855)
+    def test_sparse_copy(self):
+        """Test sparse_copy with FIEMAP and SEEK_HOLE filemap APIs"""
+        libpath = os.path.join(get_bb_var('COREBASE'), 'scripts', 'lib', 'wic')
+        sys.path.insert(0, libpath)
+        from  filemap import FilemapFiemap, FilemapSeek, sparse_copy, ErrorNotSupp
+        with NamedTemporaryFile("w", suffix=".wic-sparse") as sparse:
+            src_name = sparse.name
+            src_size = 1024 * 10
+            sparse.truncate(src_size)
+            # write one byte to the file
+            with open(src_name, 'r+b') as sfile:
+                sfile.seek(1024 * 4)
+                sfile.write(b'\x00')
+            dest = sparse.name + '.out'
+            # copy src file to dest using different filemap APIs
+            for api in (FilemapFiemap, FilemapSeek, None):
+                if os.path.exists(dest):
+                    os.unlink(dest)
+                try:
+                    sparse_copy(sparse.name, dest, api=api)
+                except ErrorNotSupp:
+                    continue # skip unsupported API
+                dest_stat = os.stat(dest)
+                self.assertEqual(dest_stat.st_size, src_size)
+                # st_blocks is in 512-byte units, so 8 blocks == 4KiB: only the one
+                # page actually written to is allocated in the sparse copy
+                self.assertEqual(dest_stat.st_blocks, 8)
+            os.unlink(dest)
+
+    @OETestID(1857)
+    def test_wic_ls(self):
+        """Test listing image content using 'wic ls'"""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "-D -o %s" % self.resultdir).status)
+        images = glob(self.resultdir + "wictestdisk-*.direct")
+        self.assertEqual(1, len(images))
+
+        sysroot = get_bb_var('RECIPE_SYSROOT_NATIVE', 'wic-tools')
+
+        # list partitions
+        result = runCmd("wic ls %s -n %s" % (images[0], sysroot))
+        self.assertEqual(0, result.status)
+        self.assertEqual(3, len(result.output.split('\n')))
+
+        # list directory content of the first partition
+        result = runCmd("wic ls %s:1/ -n %s" % (images[0], sysroot))
+        self.assertEqual(0, result.status)
+        self.assertEqual(6, len(result.output.split('\n')))
+
+    @OETestID(1856)
+    def test_wic_cp(self):
+        """Test copy files and directories to the the wic image."""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "-D -o %s" % self.resultdir).status)
+        images = glob(self.resultdir + "wictestdisk-*.direct")
+        self.assertEqual(1, len(images))
+
+        sysroot = get_bb_var('RECIPE_SYSROOT_NATIVE', 'wic-tools')
+
+        # list directory content of the first partition
+        result = runCmd("wic ls %s:1/ -n %s" % (images[0], sysroot))
+        self.assertEqual(0, result.status)
+        self.assertEqual(6, len(result.output.split('\n')))
+
+        with NamedTemporaryFile("w", suffix=".wic-cp") as testfile:
+            testfile.write("test")
+
+            # copy file to the partition
+            result = runCmd("wic cp %s %s:1/ -n %s" % (testfile.name, images[0], sysroot))
+            self.assertEqual(0, result.status)
+
+            # check if file is there
+            result = runCmd("wic ls %s:1/ -n %s" % (images[0], sysroot))
+            self.assertEqual(0, result.status)
+            self.assertEqual(7, len(result.output.split('\n')))
+            self.assertTrue(os.path.basename(testfile.name) in result.output)
+
+            # prepare directory
+            testdir = os.path.join(self.resultdir, 'wic-test-cp-dir')
+            testsubdir = os.path.join(testdir, 'subdir')
+            os.makedirs(os.path.join(testsubdir))
+            copy(testfile.name, testdir)
+
+            # copy directory to the partition
+            result = runCmd("wic cp %s %s:1/ -n %s" % (testdir, images[0], sysroot))
+            self.assertEqual(0, result.status)
+
+            # check if directory is there
+            result = runCmd("wic ls %s:1/ -n %s" % (images[0], sysroot))
+            self.assertEqual(0, result.status)
+            self.assertEqual(8, len(result.output.split('\n')))
+            self.assertTrue(os.path.basename(testdir) in result.output)
+
+    @OETestID(1858)
+    def test_wic_rm(self):
+        """Test removing files and directories from the the wic image."""
+        self.assertEqual(0, runCmd("wic create mkefidisk "
+                                   "--image-name=core-image-minimal "
+                                   "-D -o %s" % self.resultdir).status)
+        images = glob(self.resultdir + "mkefidisk-*.direct")
+        self.assertEqual(1, len(images))
+
+        sysroot = get_bb_var('RECIPE_SYSROOT_NATIVE', 'wic-tools')
+
+        # list directory content of the first partition
+        result = runCmd("wic ls %s:1 -n %s" % (images[0], sysroot))
+        self.assertEqual(0, result.status)
+        self.assertIn('\nBZIMAGE        ', result.output)
+        self.assertIn('\nEFI          <DIR>     ', result.output)
+
+        # remove file
+        result = runCmd("wic rm %s:1/bzimage -n %s" % (images[0], sysroot))
+        self.assertEqual(0, result.status)
+
+        # remove directory
+        result = runCmd("wic rm %s:1/efi -n %s" % (images[0], sysroot))
+        self.assertEqual(0, result.status)
+
+        # check if they're removed
+        result = runCmd("wic ls %s:1 -n %s" % (images[0], sysroot))
+        self.assertEqual(0, result.status)
+        self.assertNotIn('\nBZIMAGE        ', result.output)
+        self.assertNotIn('\nEFI          <DIR>     ', result.output)
+
+    @OETestID(1922)
+    def test_mkfs_extraopts(self):
+        """Test wks option --mkfs-extraopts for empty and not empty partitions"""
+        img = 'core-image-minimal'
+        with NamedTemporaryFile("w", suffix=".wks") as wks:
+            wks.writelines(
+                ['part ext2   --fstype ext2     --source rootfs --mkfs-extraopts "-D -F -i 8192"\n',
+                 "part btrfs  --fstype btrfs    --source rootfs --size 40M --mkfs-extraopts='--quiet'\n",
+                 'part squash --fstype squashfs --source rootfs --mkfs-extraopts "-no-sparse -b 4096"\n',
+                 'part emptyvfat   --fstype vfat   --size 1M --mkfs-extraopts "-S 1024 -s 64"\n',
+                 'part emptymsdos  --fstype msdos  --size 1M --mkfs-extraopts "-S 1024 -s 64"\n',
+                 'part emptyext2   --fstype ext2   --size 1M --mkfs-extraopts "-D -F -i 8192"\n',
+                 'part emptybtrfs  --fstype btrfs  --size 100M --mkfs-extraopts "--mixed -K"\n'])
+            wks.flush()
+            cmd = "wic create %s -e %s -o %s" % (wks.name, img, self.resultdir)
+            self.assertEqual(0, runCmd(cmd).status)
+            wksname = os.path.splitext(os.path.basename(wks.name))[0]
+            out = glob(self.resultdir + "%s-*direct" % wksname)
+            self.assertEqual(1, len(out))
+
+    def test_expand_mbr_image(self):
+        """Test wic write --expand command for mbr image"""
+        # build an image
+        config = 'IMAGE_FSTYPES = "wic"\nWKS_FILE = "directdisk.wks"\n'
+        self.append_config(config)
+        self.assertEqual(0, bitbake('core-image-minimal').status)
+
+        # get path to the image
+        bb_vars = get_bb_vars(['DEPLOY_DIR_IMAGE', 'MACHINE'])
+        deploy_dir = bb_vars['DEPLOY_DIR_IMAGE']
+        machine = bb_vars['MACHINE']
+        image_path = os.path.join(deploy_dir, 'core-image-minimal-%s.wic' % machine)
+
+        self.remove_config(config)
+
+        try:
+            # expand image to 1G
+            new_image_path = None
+            with NamedTemporaryFile(mode='wb', suffix='.wic.exp',
+                                    dir=deploy_dir, delete=False) as sparse:
+                sparse.truncate(1024 ** 3)
+                new_image_path = sparse.name
+
+            sysroot = get_bb_var('RECIPE_SYSROOT_NATIVE', 'wic-tools')
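+            # '--expand 1:0' is expected to leave partition 1 at its original size
+            # while the remaining partition grows into the extra space; the size
+            # checks below verify exactly that.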
+            cmd = "wic write -n %s --expand 1:0 %s %s" % (sysroot, image_path, new_image_path)
+            self.assertEqual(0, runCmd(cmd).status)
+
+            # check if partitions are expanded
+            orig = runCmd("wic ls %s -n %s" % (image_path, sysroot))
+            exp = runCmd("wic ls %s -n %s" % (new_image_path, sysroot))
+            orig_sizes = [int(line.split()[3]) for line in orig.output.split('\n')[1:]]
+            exp_sizes = [int(line.split()[3]) for line in exp.output.split('\n')[1:]]
+            self.assertEqual(orig_sizes[0], exp_sizes[0]) # first partition is not resized
+            self.assertTrue(orig_sizes[1] < exp_sizes[1])
+
+            # Check if all free space is partitioned
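+            # 'sfdisk -F' lists free (unpartitioned) regions; an image with no
+            # remaining free space reports "0 B, 0 bytes, 0 sectors".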
+            result = runCmd("%s/usr/sbin/sfdisk -F %s" % (sysroot, new_image_path))
+            self.assertTrue("0 B, 0 bytes, 0 sectors" in result.output)
+
+            os.rename(image_path, image_path + '.bak')
+            os.rename(new_image_path, image_path)
+
+            # Check if it boots in qemu
+            with runqemu('core-image-minimal', ssh=False) as qemu:
+                cmd = "ls /etc/"
+                status, output = qemu.run_serial(cmd)
+                self.assertEqual(1, status, 'Failed to run command "%s": %s' % (cmd, output))
+        finally:
+            if os.path.exists(new_image_path):
+                os.unlink(new_image_path)
+            if os.path.exists(image_path + '.bak'):
+                os.rename(image_path + '.bak', image_path)
+
+    def test_wic_ls_ext(self):
+        """Test listing content of the ext partition using 'wic ls'"""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "-D -o %s" % self.resultdir).status)
+        images = glob(self.resultdir + "wictestdisk-*.direct")
+        self.assertEqual(1, len(images))
+
+        sysroot = get_bb_var('RECIPE_SYSROOT_NATIVE', 'wic-tools')
+
+        # list directory content of the second ext4 partition
+        result = runCmd("wic ls %s:2/ -n %s" % (images[0], sysroot))
+        self.assertEqual(0, result.status)
+        self.assertTrue(set(['bin', 'home', 'proc', 'usr', 'var', 'dev', 'lib', 'sbin']).issubset(
+                            set(line.split()[-1] for line in result.output.split('\n') if line)))
+
+    def test_wic_cp_ext(self):
+        """Test copy files and directories to the ext partition."""
+        self.assertEqual(0, runCmd("wic create wictestdisk "
+                                   "--image-name=core-image-minimal "
+                                   "-D -o %s" % self.resultdir).status)
+        images = glob(self.resultdir + "wictestdisk-*.direct")
+        self.assertEqual(1, len(images))
+
+        sysroot = get_bb_var('RECIPE_SYSROOT_NATIVE', 'wic-tools')
+
+        # list directory content of the ext4 partition
+        result = runCmd("wic ls %s:2/ -n %s" % (images[0], sysroot))
+        self.assertEqual(0, result.status)
+        dirs = set(line.split()[-1] for line in result.output.split('\n') if line)
+        self.assertTrue(set(['bin', 'home', 'proc', 'usr', 'var', 'dev', 'lib', 'sbin']).issubset(dirs))
+
+        with NamedTemporaryFile("w", suffix=".wic-cp") as testfile:
+            testfile.write("test")
+
+            # copy file to the partition
+            result = runCmd("wic cp %s %s:2/ -n %s" % (testfile.name, images[0], sysroot))
+            self.assertEqual(0, result.status)
+
+            # check if file is there
+            result = runCmd("wic ls %s:2/ -n %s" % (images[0], sysroot))
+            self.assertEqual(0, result.status)
+            newdirs = set(line.split()[-1] for line in result.output.split('\n') if line)
+            self.assertEqual(newdirs.difference(dirs), set([os.path.basename(testfile.name)]))
+
+    def test_wic_rm_ext(self):
+        """Test removing files from the ext partition."""
+        self.assertEqual(0, runCmd("wic create mkefidisk "
+                                   "--image-name=core-image-minimal "
+                                   "-D -o %s" % self.resultdir).status)
+        images = glob(self.resultdir + "mkefidisk-*.direct")
+        self.assertEqual(1, len(images))
+
+        sysroot = get_bb_var('RECIPE_SYSROOT_NATIVE', 'wic-tools')
+
+        # list directory content of the /etc directory on ext4 partition
+        result = runCmd("wic ls %s:2/etc/ -n %s" % (images[0], sysroot))
+        self.assertEqual(0, result.status)
+        self.assertTrue('fstab' in [line.split()[-1] for line in result.output.split('\n') if line])
+
+        # remove file
+        result = runCmd("wic rm %s:2/etc/fstab -n %s" % (images[0], sysroot))
+        self.assertEqual(0, result.status)
+
+        # check if it's removed
+        result = runCmd("wic ls %s:2/etc/ -n %s" % (images[0], sysroot))
+        self.assertEqual(0, result.status)
+        self.assertTrue('fstab' not in [line.split()[-1] for line in result.output.split('\n') if line])
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/containerimage.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/containerimage.py
deleted file mode 100644
index def481f..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/containerimage.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import os
-
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import bitbake, get_bb_vars, runCmd
-
-# This test builds an image with using the "container" IMAGE_FSTYPE, and
-# ensures that then files in the image are only the ones expected.
-#
-# The only package added to the image is container_image_testpkg, which
-# contains one file. However, due to some other things not cleaning up during
-# rootfs creation, there is some cruft. Ideally bugs will be filed and the
-# cruft removed, but for now we whitelist some known set.
-#
-# Also for performance reasons we're only checking the cruft when using ipk.
-# When using deb, and rpm it is a bit different and we could test all
-# of them, but this test is more to catch if other packages get added by
-# default other than what is in ROOTFS_BOOTSTRAP_INSTALL.
-#
-class ContainerImageTests(oeSelfTest):
-
-    # Verify that when specifying a IMAGE_TYPEDEP_ of the form "foo.bar" that
-    # the conversion type bar gets added as a dep as well
-    def test_expected_files(self):
-
-        def get_each_path_part(path):
-            if path:
-                part = [ '.' + path + '/' ]
-                result = get_each_path_part(path.rsplit('/', 1)[0])
-                if result:
-                    return part + result
-                else:
-                    return part
-            else:
-                return None
-
-        self.write_config("""PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
-IMAGE_FSTYPES = "container"
-PACKAGE_CLASSES = "package_ipk"
-IMAGE_FEATURES = ""
-""")
-
-        bbvars = get_bb_vars(['bindir', 'sysconfdir', 'localstatedir',
-                              'DEPLOY_DIR_IMAGE', 'IMAGE_LINK_NAME'],
-                              target='container-test-image')
-        expected_files = [
-                    './',
-                    '.{bindir}/theapp',
-                    '.{sysconfdir}/default/',
-                    '.{sysconfdir}/default/postinst',
-                    '.{sysconfdir}/ld.so.cache',
-                    '.{sysconfdir}/timestamp',
-                    '.{sysconfdir}/version',
-                    './run/',
-                    '.{localstatedir}/cache/',
-                    '.{localstatedir}/cache/ldconfig/',
-                    '.{localstatedir}/cache/ldconfig/aux-cache',
-                    '.{localstatedir}/cache/opkg/',
-                    '.{localstatedir}/lib/',
-                    '.{localstatedir}/lib/opkg/'
-                ]
-
-        expected_files = [ x.format(bindir=bbvars['bindir'],
-                                    sysconfdir=bbvars['sysconfdir'],
-                                    localstatedir=bbvars['localstatedir'])
-                                    for x in expected_files ]
-
-        # Since tar lists all directories individually, make sure each element
-        # from bindir, sysconfdir, etc is added
-        expected_files += get_each_path_part(bbvars['bindir'])
-        expected_files += get_each_path_part(bbvars['sysconfdir'])
-        expected_files += get_each_path_part(bbvars['localstatedir'])
-
-        expected_files = sorted(expected_files)
-
-        # Build the image of course
-        bitbake('container-test-image')
-
-        image = os.path.join(bbvars['DEPLOY_DIR_IMAGE'],
-                             bbvars['IMAGE_LINK_NAME'] + '.tar.bz2')
-
-        # Ensure the files in the image are what we expect
-        result = runCmd("tar tf {} | sort".format(image), shell=True)
-        self.assertEqual(result.output.split('\n'), expected_files)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/context.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/context.py
new file mode 100644
index 0000000..9e90d3c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/lib/oeqa/selftest/context.py
@@ -0,0 +1,279 @@
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+import os
+import time
+import glob
+import sys
+import imp
+import signal
+from shutil import copyfile
+from random import choice
+
+import oeqa
+
+from oeqa.core.context import OETestContext, OETestContextExecutor
+from oeqa.core.exception import OEQAPreRun, OEQATestNotFound
+
+from oeqa.utils.commands import runCmd, get_bb_vars, get_test_layer
+
+class OESelftestTestContext(OETestContext):
+    def __init__(self, td=None, logger=None, machines=None, config_paths=None):
+        super(OESelftestTestContext, self).__init__(td, logger)
+
+        self.machines = machines
+        self.custommachine = None
+        self.config_paths = config_paths
+
+    def runTests(self, machine=None, skips=[]):
+        if machine:
+            self.custommachine = machine
+            if machine == 'random':
+                self.custommachine = choice(self.machines)
+            self.logger.info('Run tests with custom MACHINE set to: %s' % \
+                    self.custommachine)
+        return super(OESelftestTestContext, self).runTests(skips)
+
+    def listTests(self, display_type, machine=None):
+        return super(OESelftestTestContext, self).listTests(display_type)
+
+class OESelftestTestContextExecutor(OETestContextExecutor):
+    _context_class = OESelftestTestContext
+    _script_executor = 'oe-selftest'
+
+    name = 'oe-selftest'
+    help = 'oe-selftest test component'
+    description = 'Executes selftest tests'
+
+    def register_commands(self, logger, parser):
+        group = parser.add_mutually_exclusive_group(required=True)
+
+        group.add_argument('-a', '--run-all-tests', default=False,
+                action="store_true", dest="run_all_tests",
+                help='Run all (unhidden) tests')
+        group.add_argument('-R', '--skip-tests', required=False, action='store',
+                nargs='+', dest="skips", default=None,
+                help='Run all (unhidden) tests except the ones specified. Format should be <module>[.<class>[.<test_method>]]')
+        group.add_argument('-r', '--run-tests', required=False, action='store',
+                nargs='+', dest="run_tests", default=None,
+                help='Select what tests to run (modules, classes or test methods). Format should be: <module>.<class>.<test_method>')
+
+        group.add_argument('-m', '--list-modules', required=False,
+                action="store_true", default=False,
+                help='List all available test modules.')
+        group.add_argument('--list-classes', required=False,
+                action="store_true", default=False,
+                help='List all available test classes.')
+        group.add_argument('-l', '--list-tests', required=False,
+                action="store_true", default=False,
+                help='List all available tests.')
+
+        parser.add_argument('--machine', required=False, choices=['random', 'all'],
+                            help='Run tests on different machines (random/all).')
+
+        parser.set_defaults(func=self.run)
+
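+    # Discover selectable MACHINE names by globbing conf/machine/*.conf in every BBPATH entry.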
+    def _get_available_machines(self):
+        machines = []
+
+        bbpath = self.tc_kwargs['init']['td']['BBPATH'].split(':')
+
+        for path in bbpath:
+            found_machines = glob.glob(os.path.join(path, 'conf', 'machine', '*.conf'))
+            if found_machines:
+                for i in found_machines:
+                    # eg: '/home/<user>/poky/meta-intel/conf/machine/intel-core2-32.conf'
+                    machines.append(os.path.splitext(os.path.basename(i))[0])
+
+        return machines
+
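+    # Selftest case modules are collected from <layer>/lib/oeqa/selftest/cases for each layer on BBPATH.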
+    def _get_cases_paths(self, bbpath):
+        cases_paths = []
+        for layer in bbpath:
+            cases_dir = os.path.join(layer, 'lib', 'oeqa', 'selftest', 'cases')
+            if os.path.isdir(cases_dir):
+                cases_paths.append(cases_dir)
+        return cases_paths
+
+    def _process_args(self, logger, args):
+        args.output_log = '%s-results-%s.log' % (self.name,
+                time.strftime("%Y%m%d%H%M%S"))
+        args.test_data_file = None
+        args.CASES_PATHS = None
+
+        super(OESelftestTestContextExecutor, self)._process_args(logger, args)
+
+        if args.list_modules:
+            args.list_tests = 'module'
+        elif args.list_classes:
+            args.list_tests = 'class'
+        elif args.list_tests:
+            args.list_tests = 'name'
+
+        self.tc_kwargs['init']['td'] = get_bb_vars()
+        self.tc_kwargs['init']['machines'] = self._get_available_machines()
+
+        builddir = os.environ.get("BUILDDIR")
+        self.tc_kwargs['init']['config_paths'] = {}
+        self.tc_kwargs['init']['config_paths']['testlayer_path'] = \
+                get_test_layer()
+        self.tc_kwargs['init']['config_paths']['builddir'] = builddir
+        self.tc_kwargs['init']['config_paths']['localconf'] = \
+                os.path.join(builddir, "conf/local.conf")
+        self.tc_kwargs['init']['config_paths']['localconf_backup'] = \
+                os.path.join(builddir, "conf/local.conf.orig")
+        self.tc_kwargs['init']['config_paths']['localconf_class_backup'] = \
+                os.path.join(builddir, "conf/local.conf.bk")
+        self.tc_kwargs['init']['config_paths']['bblayers'] = \
+                os.path.join(builddir, "conf/bblayers.conf")
+        self.tc_kwargs['init']['config_paths']['bblayers_backup'] = \
+                os.path.join(builddir, "conf/bblayers.conf.orig")
+        self.tc_kwargs['init']['config_paths']['bblayers_class_backup'] = \
+                os.path.join(builddir, "conf/bblayers.conf.bk")
+
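+        # Back up the user's local.conf and bblayers.conf; run() restores them once the tests complete.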
+        copyfile(self.tc_kwargs['init']['config_paths']['localconf'],
+                self.tc_kwargs['init']['config_paths']['localconf_backup'])
+        copyfile(self.tc_kwargs['init']['config_paths']['bblayers'], 
+                self.tc_kwargs['init']['config_paths']['bblayers_backup'])
+
+        self.tc_kwargs['run']['skips'] = args.skips
+
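+    # Validate the environment (BUILDDIR set, meta-selftest available, no conflicting configuration) before running tests.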
+    def _pre_run(self):
+        def _check_required_env_variables(vars):
+            for var in vars:
+                if not os.environ.get(var):
+                    self.tc.logger.error("%s is not set. Did you forget to source your build environment setup script?" % var)
+                    raise OEQAPreRun
+
+        def _check_presence_meta_selftest():
+            builddir = os.environ.get("BUILDDIR")
+            if os.getcwd() != builddir:
+                self.tc.logger.info("Changing cwd to %s" % builddir)
+                os.chdir(builddir)
+
+            if "meta-selftest" not in self.tc.td["BBLAYERS"]:
+                self.tc.logger.warn("meta-selftest layer not found in BBLAYERS, adding it")
+                meta_selftestdir = os.path.join(
+                    self.tc.td["BBLAYERS_FETCH_DIR"], 'meta-selftest')
+                if os.path.isdir(meta_selftestdir):
+                    runCmd("bitbake-layers add-layer %s" % meta_selftestdir)
+                    # Reload the test data because the meta-selftest layer was just added
+                    self.tc.td = get_bb_vars()
+                    self.tc.config_paths['testlayer_path'] = get_test_layer()
+                else:
+                    self.tc.logger.error("could not locate meta-selftest in:\n%s" % meta_selftestdir)
+                    raise OEQAPreRun
+
+        def _add_layer_libs():
+            bbpath = self.tc.td['BBPATH'].split(':')
+            layer_libdirs = [p for p in (os.path.join(l, 'lib') \
+                    for l in bbpath) if os.path.exists(p)]
+            if layer_libdirs:
+                self.tc.logger.info("Adding layer libraries:")
+                for l in layer_libdirs:
+                    self.tc.logger.info("\t%s" % l)
+
+                sys.path.extend(layer_libdirs)
+                imp.reload(oeqa.selftest)
+
+        _check_required_env_variables(["BUILDDIR"])
+        _check_presence_meta_selftest()
+
+        if "buildhistory.bbclass" in self.tc.td["BBINCLUDED"]:
+            self.tc.logger.error("You have buildhistory enabled already and this isn't recommended for selftest; please disable it first.")
+            raise OEQAPreRun
+
+        if "PRSERV_HOST" in self.tc.td:
+            self.tc.logger.error("Please unset PRSERV_HOST in order to run oe-selftest")
+            raise OEQAPreRun
+
+        if "SANITY_TESTED_DISTROS" in self.tc.td:
+            self.tc.logger.error("Please unset SANITY_TESTED_DISTROS in order to run oe-selftest")
+            raise OEQAPreRun
+
+        _add_layer_libs()
+
+        self.tc.logger.info("Running bitbake -p")
+        runCmd("bitbake -p")
+
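+    # Load the test cases found on BBPATH, then either list them or execute a full run.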
+    def _internal_run(self, logger, args):
+        self.module_paths = self._get_cases_paths(
+                self.tc_kwargs['init']['td']['BBPATH'].split(':'))
+
+        self.tc = self._context_class(**self.tc_kwargs['init'])
+        try:
+            self.tc.loadTests(self.module_paths, **self.tc_kwargs['load'])
+        except OEQATestNotFound as ex:
+            logger.error(ex)
+            sys.exit(1)
+
+        if args.list_tests:
+            rc = self.tc.listTests(args.list_tests, **self.tc_kwargs['list'])
+        else:
+            self._pre_run()
+            rc = self.tc.runTests(**self.tc_kwargs['run'])
+            rc.logDetails()
+            rc.logSummary(self.name)
+
+        return rc
+
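+    # SIGTERM handler: raising SystemExit lets the finally block in run() restore the config backups.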
+    def _signal_clean_handler(self, signum, frame):
+        sys.exit(1)
+
+    def run(self, logger, args):
+        self._process_args(logger, args)
+
+        signal.signal(signal.SIGTERM, self._signal_clean_handler)
+
+        rc = None
+        try:
+            if args.machine:
+                logger.info('Custom machine mode enabled. MACHINE set to %s' %
+                        args.machine)
+
+                if args.machine == 'all':
+                    results = []
+                    for m in self.tc_kwargs['init']['machines']:
+                        self.tc_kwargs['run']['machine'] = m
+                        results.append(self._internal_run(logger, args))
+
+                        # XXX: the oe-selftest script only needs to know if one
+                        # machine run fails
+                        for r in results:
+                            rc = r
+                            if not r.wasSuccessful():
+                                break
+
+                else:
+                    self.tc_kwargs['run']['machine'] = args.machine
+                    return self._internal_run(logger, args)
+
+            else:
+                self.tc_kwargs['run']['machine'] = args.machine
+                rc = self._internal_run(logger, args)
+        finally:
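+            # Put back the pristine local.conf and bblayers.conf saved in _process_args() and remove the backups.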
+            config_paths = self.tc_kwargs['init']['config_paths']
+            if os.path.exists(config_paths['localconf_backup']):
+                copyfile(config_paths['localconf_backup'],
+                        config_paths['localconf'])
+                os.remove(config_paths['localconf_backup'])
+
+            if os.path.exists(config_paths['bblayers_backup']):
+                copyfile(config_paths['bblayers_backup'], 
+                        config_paths['bblayers'])
+                os.remove(config_paths['bblayers_backup'])
+
+            if os.path.exists(config_paths['localconf_class_backup']):
+                os.remove(config_paths['localconf_class_backup'])
+            if os.path.exists(config_paths['bblayers_class_backup']):
+                os.remove(config_paths['bblayers_class_backup'])
+
+            output_link = os.path.join(os.path.dirname(args.output_log),
+                    "%s-results.log" % self.name)
+            if os.path.exists(output_link):
+                os.remove(output_link)
+            os.symlink(args.output_log, output_link)
+
+        return rc
+
+_executor_class = OESelftestTestContextExecutor
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/devtool.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/devtool.py
deleted file mode 100644
index 5704866..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/devtool.py
+++ /dev/null
@@ -1,1696 +0,0 @@
-import unittest
-import os
-import logging
-import re
-import shutil
-import tempfile
-import glob
-import fnmatch
-
-import oeqa.utils.ftools as ftools
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, create_temp_layer
-from oeqa.utils.commands import get_bb_vars, runqemu, get_test_layer
-from oeqa.utils.decorators import testcase
-
-class DevtoolBase(oeSelfTest):
-
-    def _test_recipe_contents(self, recipefile, checkvars, checkinherits):
-        with open(recipefile, 'r') as f:
-            invar = None
-            invalue = None
-            for line in f:
-                var = None
-                if invar:
-                    value = line.strip().strip('"')
-                    if value.endswith('\\'):
-                        invalue += ' ' + value[:-1].strip()
-                        continue
-                    else:
-                        invalue += ' ' + value.strip()
-                        var = invar
-                        value = invalue
-                        invar = None
-                elif '=' in line:
-                    splitline = line.split('=', 1)
-                    var = splitline[0].rstrip()
-                    value = splitline[1].strip().strip('"')
-                    if value.endswith('\\'):
-                        invalue = value[:-1].strip()
-                        invar = var
-                        continue
-                elif line.startswith('inherit '):
-                    inherits = line.split()[1:]
-
-                if var and var in checkvars:
-                    needvalue = checkvars.pop(var)
-                    if needvalue is None:
-                        self.fail('Variable %s should not appear in recipe, but value is being set to "%s"' % (var, value))
-                    if isinstance(needvalue, set):
-                        if var == 'LICENSE':
-                            value = set(value.split(' & '))
-                        else:
-                            value = set(value.split())
-                    self.assertEqual(value, needvalue, 'values for %s do not match' % var)
-
-
-        missingvars = {}
-        for var, value in checkvars.items():
-            if value is not None:
-                missingvars[var] = value
-        self.assertEqual(missingvars, {}, 'Some expected variables not found in recipe: %s' % checkvars)
-
-        for inherit in checkinherits:
-            self.assertIn(inherit, inherits, 'Missing inherit of %s' % inherit)
-
-    def _check_bbappend(self, testrecipe, recipefile, appenddir):
-        result = runCmd('bitbake-layers show-appends', cwd=self.builddir)
-        resultlines = result.output.splitlines()
-        inrecipe = False
-        bbappends = []
-        bbappendfile = None
-        for line in resultlines:
-            if inrecipe:
-                if line.startswith(' '):
-                    bbappends.append(line.strip())
-                else:
-                    break
-            elif line == '%s:' % os.path.basename(recipefile):
-                inrecipe = True
-        self.assertLessEqual(len(bbappends), 2, '%s recipe is being bbappended by another layer - bbappends found:\n  %s' % (testrecipe, '\n  '.join(bbappends)))
-        for bbappend in bbappends:
-            if bbappend.startswith(appenddir):
-                bbappendfile = bbappend
-                break
-        else:
-            self.fail('bbappend for recipe %s does not seem to be created in test layer' % testrecipe)
-        return bbappendfile
-
-    def _create_temp_layer(self, templayerdir, addlayer, templayername, priority=999, recipepathspec='recipes-*/*'):
-        create_temp_layer(templayerdir, templayername, priority, recipepathspec)
-        if addlayer:
-            self.add_command_to_tearDown('bitbake-layers remove-layer %s || true' % templayerdir)
-            result = runCmd('bitbake-layers add-layer %s' % templayerdir, cwd=self.builddir)
-
-    def _process_ls_output(self, output):
-        """
-        Convert ls -l output to a format we can reasonably compare from one context
-        to another (e.g. from host to target)
-        """
-        filelist = []
-        for line in output.splitlines():
-            splitline = line.split()
-            if len(splitline) < 8:
-                self.fail('_process_ls_output: invalid output line: %s' % line)
-            # Remove trailing . on perms
-            splitline[0] = splitline[0].rstrip('.')
-            # Remove leading . on paths
-            splitline[-1] = splitline[-1].lstrip('.')
-            # Drop fields we don't want to compare
-            del splitline[7]
-            del splitline[6]
-            del splitline[5]
-            del splitline[4]
-            del splitline[1]
-            filelist.append(' '.join(splitline))
-        return filelist
-
-
-class DevtoolTests(DevtoolBase):
-
-    @classmethod
-    def setUpClass(cls):
-        bb_vars = get_bb_vars(['TOPDIR', 'SSTATE_DIR'])
-        cls.original_sstate = bb_vars['SSTATE_DIR']
-        cls.devtool_sstate = os.path.join(bb_vars['TOPDIR'], 'sstate_devtool')
-        cls.sstate_conf  = 'SSTATE_DIR = "%s"\n' % cls.devtool_sstate
-        cls.sstate_conf += ('SSTATE_MIRRORS += "file://.* file:///%s/PATH"\n'
-                            % cls.original_sstate)
-
-    @classmethod
-    def tearDownClass(cls):
-        cls.log.debug('Deleting devtool sstate cache on %s' % cls.devtool_sstate)
-        runCmd('rm -rf %s' % cls.devtool_sstate)
-
-    def setUp(self):
-        """Test case setup function"""
-        super(DevtoolTests, self).setUp()
-        self.workspacedir = os.path.join(self.builddir, 'workspace')
-        self.assertTrue(not os.path.exists(self.workspacedir),
-                        'This test cannot be run with a workspace directory '
-                        'under the build directory')
-        self.append_config(self.sstate_conf)
-
-    def _check_src_repo(self, repo_dir):
-        """Check srctree git repository"""
-        self.assertTrue(os.path.isdir(os.path.join(repo_dir, '.git')),
-                        'git repository for external source tree not found')
-        result = runCmd('git status --porcelain', cwd=repo_dir)
-        self.assertEqual(result.output.strip(), "",
-                         'Created git repo is not clean')
-        result = runCmd('git symbolic-ref HEAD', cwd=repo_dir)
-        self.assertEqual(result.output.strip(), "refs/heads/devtool",
-                         'Wrong branch in git repo')
-
-    def _check_repo_status(self, repo_dir, expected_status):
-        """Check the worktree status of a repository"""
-        result = runCmd('git status . --porcelain',
-                        cwd=repo_dir)
-        for line in result.output.splitlines():
-            for ind, (f_status, fn_re) in enumerate(expected_status):
-                if re.match(fn_re, line[3:]):
-                    if f_status != line[:2]:
-                        self.fail('Unexpected status in line: %s' % line)
-                    expected_status.pop(ind)
-                    break
-            else:
-                self.fail('Unexpected modified file in line: %s' % line)
-        if expected_status:
-            self.fail('Missing file changes: %s' % expected_status)
-
-    @testcase(1158)
-    def test_create_workspace(self):
-        # Check preconditions
-        result = runCmd('bitbake-layers show-layers')
-        self.assertTrue('/workspace' not in result.output, 'This test cannot be run with a workspace layer in bblayers.conf')
-        # Try creating a workspace layer with a specific path
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        result = runCmd('devtool create-workspace %s' % tempdir)
-        self.assertTrue(os.path.isfile(os.path.join(tempdir, 'conf', 'layer.conf')), msg = "No workspace created. devtool output: %s " % result.output)
-        result = runCmd('bitbake-layers show-layers')
-        self.assertIn(tempdir, result.output)
-        # Try creating a workspace layer with the default path
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        result = runCmd('devtool create-workspace')
-        self.assertTrue(os.path.isfile(os.path.join(self.workspacedir, 'conf', 'layer.conf')), msg = "No workspace created. devtool output: %s " % result.output)
-        result = runCmd('bitbake-layers show-layers')
-        self.assertNotIn(tempdir, result.output)
-        self.assertIn(self.workspacedir, result.output)
-
-    @testcase(1159)
-    def test_devtool_add(self):
-        # Fetch source
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        url = 'http://www.ivarch.com/programs/sources/pv-1.5.3.tar.bz2'
-        result = runCmd('wget %s' % url, cwd=tempdir)
-        result = runCmd('tar xfv pv-1.5.3.tar.bz2', cwd=tempdir)
-        srcdir = os.path.join(tempdir, 'pv-1.5.3')
-        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'configure')), 'Unable to find configure script in source directory')
-        # Test devtool add
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake -c cleansstate pv')
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        result = runCmd('devtool add pv %s' % srcdir)
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'conf', 'layer.conf')), 'Workspace directory not created')
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertIn('pv', result.output)
-        self.assertIn(srcdir, result.output)
-        # Clean up anything in the workdir/sysroot/sstate cache (have to do this *after* devtool add since the recipe only exists then)
-        bitbake('pv -c cleansstate')
-        # Test devtool build
-        result = runCmd('devtool build pv')
-        bb_vars = get_bb_vars(['D', 'bindir'], 'pv')
-        installdir = bb_vars['D']
-        self.assertTrue(installdir, 'Could not query installdir variable')
-        bindir = bb_vars['bindir']
-        self.assertTrue(bindir, 'Could not query bindir variable')
-        if bindir[0] == '/':
-            bindir = bindir[1:]
-        self.assertTrue(os.path.isfile(os.path.join(installdir, bindir, 'pv')), 'pv binary not found in D')
-
-    @testcase(1423)
-    def test_devtool_add_git_local(self):
-        # Fetch source from a remote URL, but do it outside of devtool
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        pn = 'dbus-wait'
-        srcrev = '6cc6077a36fe2648a5f993fe7c16c9632f946517'
-        # We choose an https:// git URL here to check rewriting the URL works
-        url = 'https://git.yoctoproject.org/git/dbus-wait'
-        # Force fetching to "noname" subdir so we verify we're picking up the name from autoconf
-        # instead of the directory name
-        result = runCmd('git clone %s noname' % url, cwd=tempdir)
-        srcdir = os.path.join(tempdir, 'noname')
-        result = runCmd('git reset --hard %s' % srcrev, cwd=srcdir)
-        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'configure.ac')), 'Unable to find configure script in source directory')
-        # Test devtool add
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        # Don't specify a name since we should be able to auto-detect it
-        result = runCmd('devtool add %s' % srcdir)
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'conf', 'layer.conf')), 'Workspace directory not created')
-        # Check the recipe name is correct
-        recipefile = get_bb_var('FILE', pn)
-        self.assertIn('%s_git.bb' % pn, recipefile, 'Recipe file incorrectly named')
-        self.assertIn(recipefile, result.output)
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertIn(pn, result.output)
-        self.assertIn(srcdir, result.output)
-        self.assertIn(recipefile, result.output)
-        checkvars = {}
-        checkvars['LICENSE'] = 'GPLv2'
-        checkvars['LIC_FILES_CHKSUM'] = 'file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263'
-        checkvars['S'] = '${WORKDIR}/git'
-        checkvars['PV'] = '0.1+git${SRCPV}'
-        checkvars['SRC_URI'] = 'git://git.yoctoproject.org/git/dbus-wait;protocol=https'
-        checkvars['SRCREV'] = srcrev
-        checkvars['DEPENDS'] = set(['dbus'])
-        self._test_recipe_contents(recipefile, checkvars, [])
-
-    @testcase(1162)
-    def test_devtool_add_library(self):
-        # Fetch source
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        version = '1.1'
-        url = 'https://www.intra2net.com/en/developer/libftdi/download/libftdi1-%s.tar.bz2' % version
-        result = runCmd('wget %s' % url, cwd=tempdir)
-        result = runCmd('tar xfv libftdi1-%s.tar.bz2' % version, cwd=tempdir)
-        srcdir = os.path.join(tempdir, 'libftdi1-%s' % version)
-        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'CMakeLists.txt')), 'Unable to find CMakeLists.txt in source directory')
-        # Test devtool add (and use -V so we test that too)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        result = runCmd('devtool add libftdi %s -V %s' % (srcdir, version))
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'conf', 'layer.conf')), 'Workspace directory not created')
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertIn('libftdi', result.output)
-        self.assertIn(srcdir, result.output)
-        # Clean up anything in the workdir/sysroot/sstate cache (have to do this *after* devtool add since the recipe only exists then)
-        bitbake('libftdi -c cleansstate')
-        # libftdi's python/CMakeLists.txt is a bit broken, so let's just disable it
-        # There's also the matter of it installing cmake files to a path we don't
-        # normally cover, which triggers the installed-vs-shipped QA test we have
-        # within do_package
-        recipefile = '%s/recipes/libftdi/libftdi_%s.bb' % (self.workspacedir, version)
-        result = runCmd('recipetool setvar %s EXTRA_OECMAKE -- \'-DPYTHON_BINDINGS=OFF -DLIBFTDI_CMAKE_CONFIG_DIR=${datadir}/cmake/Modules\'' % recipefile)
-        with open(recipefile, 'a') as f:
-            f.write('\nFILES_${PN}-dev += "${datadir}/cmake/Modules"\n')
-            # We don't have the ability to pick up this dependency automatically yet...
-            f.write('\nDEPENDS += "libusb1"\n')
-            f.write('\nTESTLIBOUTPUT = "${COMPONENTS_DIR}/${TUNE_PKGARCH}/${PN}/${libdir}"\n')
-        # Test devtool build
-        result = runCmd('devtool build libftdi')
-        bb_vars = get_bb_vars(['TESTLIBOUTPUT', 'STAMP'], 'libftdi')
-        staging_libdir = bb_vars['TESTLIBOUTPUT']
-        self.assertTrue(staging_libdir, 'Could not query TESTLIBOUTPUT variable')
-        self.assertTrue(os.path.isfile(os.path.join(staging_libdir, 'libftdi1.so.2.1.0')), "libftdi binary not found in STAGING_LIBDIR. Output of devtool build libftdi %s" % result.output)
-        # Test devtool reset
-        stampprefix = bb_vars['STAMP']
-        result = runCmd('devtool reset libftdi')
-        result = runCmd('devtool status')
-        self.assertNotIn('libftdi', result.output)
-        self.assertTrue(stampprefix, 'Unable to get STAMP value for recipe libftdi')
-        matches = glob.glob(stampprefix + '*')
-        self.assertFalse(matches, 'Stamp files exist for recipe libftdi that should have been cleaned')
-        self.assertFalse(os.path.isfile(os.path.join(staging_libdir, 'libftdi1.so.2.1.0')), 'libftdi binary still found in STAGING_LIBDIR after cleaning')
-
-    @testcase(1160)
-    def test_devtool_add_fetch(self):
-        # Fetch source
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        testver = '0.23'
-        url = 'https://pypi.python.org/packages/source/M/MarkupSafe/MarkupSafe-%s.tar.gz' % testver
-        testrecipe = 'python-markupsafe'
-        srcdir = os.path.join(tempdir, testrecipe)
-        # Test devtool add
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake -c cleansstate %s' % testrecipe)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        result = runCmd('devtool add %s %s -f %s' % (testrecipe, srcdir, url))
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'conf', 'layer.conf')), 'Workspace directory not created. %s' % result.output)
-        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'setup.py')), 'Unable to find setup.py in source directory')
-        self.assertTrue(os.path.isdir(os.path.join(srcdir, '.git')), 'git repository for external source tree was not created')
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertIn(testrecipe, result.output)
-        self.assertIn(srcdir, result.output)
-        # Check recipe
-        recipefile = get_bb_var('FILE', testrecipe)
-        self.assertIn('%s_%s.bb' % (testrecipe, testver), recipefile, 'Recipe file incorrectly named')
-        checkvars = {}
-        checkvars['S'] = '${WORKDIR}/MarkupSafe-${PV}'
-        checkvars['SRC_URI'] = url.replace(testver, '${PV}')
-        self._test_recipe_contents(recipefile, checkvars, [])
-        # Try with version specified
-        result = runCmd('devtool reset -n %s' % testrecipe)
-        shutil.rmtree(srcdir)
-        fakever = '1.9'
-        result = runCmd('devtool add %s %s -f %s -V %s' % (testrecipe, srcdir, url, fakever))
-        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'setup.py')), 'Unable to find setup.py in source directory')
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertIn(testrecipe, result.output)
-        self.assertIn(srcdir, result.output)
-        # Check recipe
-        recipefile = get_bb_var('FILE', testrecipe)
-        self.assertIn('%s_%s.bb' % (testrecipe, fakever), recipefile, 'Recipe file incorrectly named')
-        checkvars = {}
-        checkvars['S'] = '${WORKDIR}/MarkupSafe-%s' % testver
-        checkvars['SRC_URI'] = url
-        self._test_recipe_contents(recipefile, checkvars, [])
-
-    @testcase(1161)
-    def test_devtool_add_fetch_git(self):
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        url = 'gitsm://git.yoctoproject.org/mraa'
-        checkrev = 'ae127b19a50aa54255e4330ccfdd9a5d058e581d'
-        testrecipe = 'mraa'
-        srcdir = os.path.join(tempdir, testrecipe)
-        # Test devtool add
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake -c cleansstate %s' % testrecipe)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        result = runCmd('devtool add %s %s -a -f %s' % (testrecipe, srcdir, url))
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'conf', 'layer.conf')), 'Workspace directory not created: %s' % result.output)
-        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'imraa', 'imraa.c')), 'Unable to find imraa/imraa.c in source directory')
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertIn(testrecipe, result.output)
-        self.assertIn(srcdir, result.output)
-        # Check recipe
-        recipefile = get_bb_var('FILE', testrecipe)
-        self.assertIn('_git.bb', recipefile, 'Recipe file incorrectly named')
-        checkvars = {}
-        checkvars['S'] = '${WORKDIR}/git'
-        checkvars['PV'] = '1.0+git${SRCPV}'
-        checkvars['SRC_URI'] = url
-        checkvars['SRCREV'] = '${AUTOREV}'
-        self._test_recipe_contents(recipefile, checkvars, [])
-        # Try with revision and version specified
-        result = runCmd('devtool reset -n %s' % testrecipe)
-        shutil.rmtree(srcdir)
-        url_rev = '%s;rev=%s' % (url, checkrev)
-        result = runCmd('devtool add %s %s -f "%s" -V 1.5' % (testrecipe, srcdir, url_rev))
-        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'imraa', 'imraa.c')), 'Unable to find imraa/imraa.c in source directory')
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertIn(testrecipe, result.output)
-        self.assertIn(srcdir, result.output)
-        # Check recipe
-        recipefile = get_bb_var('FILE', testrecipe)
-        self.assertIn('_git.bb', recipefile, 'Recipe file incorrectly named')
-        checkvars = {}
-        checkvars['S'] = '${WORKDIR}/git'
-        checkvars['PV'] = '1.5+git${SRCPV}'
-        checkvars['SRC_URI'] = url
-        checkvars['SRCREV'] = checkrev
-        self._test_recipe_contents(recipefile, checkvars, [])
-
-    @testcase(1391)
-    def test_devtool_add_fetch_simple(self):
-        # Fetch source from a remote URL, auto-detecting name
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        testver = '1.6.0'
-        url = 'http://www.ivarch.com/programs/sources/pv-%s.tar.bz2' % testver
-        testrecipe = 'pv'
-        srcdir = os.path.join(self.workspacedir, 'sources', testrecipe)
-        # Test devtool add
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        result = runCmd('devtool add %s' % url)
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'conf', 'layer.conf')), 'Workspace directory not created. %s' % result.output)
-        self.assertTrue(os.path.isfile(os.path.join(srcdir, 'configure')), 'Unable to find configure script in source directory')
-        self.assertTrue(os.path.isdir(os.path.join(srcdir, '.git')), 'git repository for external source tree was not created')
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertIn(testrecipe, result.output)
-        self.assertIn(srcdir, result.output)
-        # Check recipe
-        recipefile = get_bb_var('FILE', testrecipe)
-        self.assertIn('%s_%s.bb' % (testrecipe, testver), recipefile, 'Recipe file incorrectly named')
-        checkvars = {}
-        checkvars['S'] = None
-        checkvars['SRC_URI'] = url.replace(testver, '${PV}')
-        self._test_recipe_contents(recipefile, checkvars, [])
-
-    @testcase(1164)
-    def test_devtool_modify(self):
-        import oe.path
-
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        self.add_command_to_tearDown('bitbake -c clean mdadm')
-        result = runCmd('devtool modify mdadm -x %s' % tempdir)
-        self.assertTrue(os.path.exists(os.path.join(tempdir, 'Makefile')), 'Extracted source could not be found')
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'conf', 'layer.conf')), 'Workspace directory not created')
-        matches = glob.glob(os.path.join(self.workspacedir, 'appends', 'mdadm_*.bbappend'))
-        self.assertTrue(matches, 'bbappend not created %s' % result.output)
-
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertIn('mdadm', result.output)
-        self.assertIn(tempdir, result.output)
-        self._check_src_repo(tempdir)
-
-        bitbake('mdadm -C unpack')
-
-        def check_line(checkfile, expected, message, present=True):
-            # Check for $expected, on a line on its own, in checkfile.
-            with open(checkfile, 'r') as f:
-                if present:
-                    self.assertIn(expected + '\n', f, message)
-                else:
-                    self.assertNotIn(expected + '\n', f, message)
-
-        modfile = os.path.join(tempdir, 'mdadm.8.in')
-        bb_vars = get_bb_vars(['PKGD', 'mandir'], 'mdadm')
-        pkgd = bb_vars['PKGD']
-        self.assertTrue(pkgd, 'Could not query PKGD variable')
-        mandir = bb_vars['mandir']
-        self.assertTrue(mandir, 'Could not query mandir variable')
-        manfile = oe.path.join(pkgd, mandir, 'man8', 'mdadm.8')
-
-        check_line(modfile, 'Linux Software RAID', 'Could not find initial string')
-        check_line(modfile, 'antique pin sardine', 'Unexpectedly found replacement string', present=False)
-
-        result = runCmd("sed -i 's!^Linux Software RAID$!antique pin sardine!' %s" % modfile)
-        check_line(modfile, 'antique pin sardine', 'mdadm.8.in file not modified (sed failed)')
-
-        bitbake('mdadm -c package')
-        check_line(manfile, 'antique pin sardine', 'man file not modified. man searched file path: %s' % manfile)
-
-        result = runCmd('git checkout -- %s' % modfile, cwd=tempdir)
-        check_line(modfile, 'Linux Software RAID', 'man .in file not restored (git failed)')
-
-        bitbake('mdadm -c package')
-        check_line(manfile, 'Linux Software RAID', 'man file not updated. man searched file path: %s' % manfile)
-
-        result = runCmd('devtool reset mdadm')
-        result = runCmd('devtool status')
-        self.assertNotIn('mdadm', result.output)
-
-    def test_devtool_buildclean(self):
-        def assertFile(path, *paths):
-            f = os.path.join(path, *paths)
-            self.assertTrue(os.path.exists(f), "%r does not exist" % f)
-        def assertNoFile(path, *paths):
-            f = os.path.join(path, *paths)
-            self.assertFalse(os.path.exists(os.path.join(f)), "%r exists" % f)
-
-        # Clean up anything in the workdir/sysroot/sstate cache
-        bitbake('mdadm m4 -c cleansstate')
-        # Try modifying a recipe
-        tempdir_mdadm = tempfile.mkdtemp(prefix='devtoolqa')
-        tempdir_m4 = tempfile.mkdtemp(prefix='devtoolqa')
-        builddir_m4 = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir_mdadm)
-        self.track_for_cleanup(tempdir_m4)
-        self.track_for_cleanup(builddir_m4)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        self.add_command_to_tearDown('bitbake -c clean mdadm m4')
-        self.write_recipeinc('m4', 'EXTERNALSRC_BUILD = "%s"\ndo_clean() {\n\t:\n}\n' % builddir_m4)
-        try:
-            runCmd('devtool modify mdadm -x %s' % tempdir_mdadm)
-            runCmd('devtool modify m4 -x %s' % tempdir_m4)
-            assertNoFile(tempdir_mdadm, 'mdadm')
-            assertNoFile(builddir_m4, 'src/m4')
-            result = bitbake('m4 -e')
-            result = bitbake('mdadm m4 -c compile')
-            self.assertEqual(result.status, 0)
-            assertFile(tempdir_mdadm, 'mdadm')
-            assertFile(builddir_m4, 'src/m4')
-            # Check that buildclean task exists and does call make clean
-            bitbake('mdadm m4 -c buildclean')
-            assertNoFile(tempdir_mdadm, 'mdadm')
-            assertNoFile(builddir_m4, 'src/m4')
-            bitbake('mdadm m4 -c compile')
-            assertFile(tempdir_mdadm, 'mdadm')
-            assertFile(builddir_m4, 'src/m4')
-            bitbake('mdadm m4 -c clean')
-            # Check that buildclean task is run before clean for B == S
-            assertNoFile(tempdir_mdadm, 'mdadm')
-            # Check that buildclean task is not run before clean for B != S
-            assertFile(builddir_m4, 'src/m4')
-        finally:
-            self.delete_recipeinc('m4')
-
-    @testcase(1166)
-    def test_devtool_modify_invalid(self):
-        # Try modifying some recipes
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-
-        testrecipes = 'perf kernel-devsrc package-index core-image-minimal meta-toolchain packagegroup-core-sdk meta-ide-support'.split()
-        # Find actual name of gcc-source since it now includes the version - crude, but good enough for this purpose
-        result = runCmd('bitbake-layers show-recipes gcc-source*')
-        for line in result.output.splitlines():
-            # just match those lines that contain a real target
-            m = re.match('(?P<recipe>^[a-zA-Z0-9.-]+)(?P<colon>:$)', line)
-            if m:
-                testrecipes.append(m.group('recipe'))
-        for testrecipe in testrecipes:
-            # Check it's a valid recipe
-            bitbake('%s -e' % testrecipe)
-            # devtool extract should fail
-            result = runCmd('devtool extract %s %s' % (testrecipe, os.path.join(tempdir, testrecipe)), ignore_status=True)
-            self.assertNotEqual(result.status, 0, 'devtool extract on %s should have failed. devtool output: %s' % (testrecipe, result.output))
-            self.assertNotIn('Fetching ', result.output, 'devtool extract on %s should have errored out before trying to fetch' % testrecipe)
-            self.assertIn('ERROR: ', result.output, 'devtool extract on %s should have given an ERROR' % testrecipe)
-            # devtool modify should fail
-            result = runCmd('devtool modify %s -x %s' % (testrecipe, os.path.join(tempdir, testrecipe)), ignore_status=True)
-            self.assertNotEqual(result.status, 0, 'devtool modify on %s should have failed. devtool output: %s' %  (testrecipe, result.output))
-            self.assertIn('ERROR: ', result.output, 'devtool modify on %s should have given an ERROR' % testrecipe)
-
-    @testcase(1365)
-    def test_devtool_modify_native(self):
-        # Check preconditions
-        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
-        # Try modifying some recipes
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-
-        bbclassextended = False
-        inheritnative = False
-        testrecipes = 'mtools-native apt-native desktop-file-utils-native'.split()
-        for testrecipe in testrecipes:
-            checkextend = 'native' in (get_bb_var('BBCLASSEXTEND', testrecipe) or '').split()
-            if not bbclassextended:
-                bbclassextended = checkextend
-            if not inheritnative:
-                inheritnative = not checkextend
-            result = runCmd('devtool modify %s -x %s' % (testrecipe, os.path.join(tempdir, testrecipe)))
-            self.assertNotIn('ERROR: ', result.output, 'ERROR in devtool modify output: %s' % result.output)
-            result = runCmd('devtool build %s' % testrecipe)
-            self.assertNotIn('ERROR: ', result.output, 'ERROR in devtool build output: %s' % result.output)
-            result = runCmd('devtool reset %s' % testrecipe)
-            self.assertNotIn('ERROR: ', result.output, 'ERROR in devtool reset output: %s' % result.output)
-
-        self.assertTrue(bbclassextended, 'None of these recipes are BBCLASSEXTENDed to native - need to adjust testrecipes list: %s' % ', '.join(testrecipes))
-        self.assertTrue(inheritnative, 'None of these recipes do "inherit native" - need to adjust testrecipes list: %s' % ', '.join(testrecipes))
-
-
-    @testcase(1165)
-    def test_devtool_modify_git(self):
-        # Check preconditions
-        testrecipe = 'mkelfimage'
-        src_uri = get_bb_var('SRC_URI', testrecipe)
-        self.assertIn('git://', src_uri, 'This test expects the %s recipe to be a git recipe' % testrecipe)
-        # Clean up anything in the workdir/sysroot/sstate cache
-        bitbake('%s -c cleansstate' % testrecipe)
-        # Try modifying a recipe
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        self.add_command_to_tearDown('bitbake -c clean %s' % testrecipe)
-        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempdir))
-        self.assertTrue(os.path.exists(os.path.join(tempdir, 'Makefile')), 'Extracted source could not be found')
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'conf', 'layer.conf')), 'Workspace directory not created. devtool output: %s' % result.output)
-        matches = glob.glob(os.path.join(self.workspacedir, 'appends', 'mkelfimage_*.bbappend'))
-        self.assertTrue(matches, 'bbappend not created')
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertIn(testrecipe, result.output)
-        self.assertIn(tempdir, result.output)
-        # Check git repo
-        self._check_src_repo(tempdir)
-        # Try building
-        bitbake(testrecipe)
-
-    @testcase(1167)
-    def test_devtool_modify_localfiles(self):
-        # Check preconditions
-        testrecipe = 'lighttpd'
-        src_uri = (get_bb_var('SRC_URI', testrecipe) or '').split()
-        foundlocal = False
-        for item in src_uri:
-            if item.startswith('file://') and '.patch' not in item:
-                foundlocal = True
-                break
-        self.assertTrue(foundlocal, 'This test expects the %s recipe to fetch local files and it seems that it no longer does' % testrecipe)
-        # Clean up anything in the workdir/sysroot/sstate cache
-        bitbake('%s -c cleansstate' % testrecipe)
-        # Try modifying a recipe
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        self.add_command_to_tearDown('bitbake -c clean %s' % testrecipe)
-        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempdir))
-        self.assertTrue(os.path.exists(os.path.join(tempdir, 'configure.ac')), 'Extracted source could not be found')
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'conf', 'layer.conf')), 'Workspace directory not created')
-        matches = glob.glob(os.path.join(self.workspacedir, 'appends', '%s_*.bbappend' % testrecipe))
-        self.assertTrue(matches, 'bbappend not created')
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertIn(testrecipe, result.output)
-        self.assertIn(tempdir, result.output)
-        # Try building
-        bitbake(testrecipe)
-
-    @testcase(1378)
-    def test_devtool_modify_virtual(self):
-        # Try modifying a virtual recipe
-        virtrecipe = 'virtual/make'
-        realrecipe = 'make'
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        result = runCmd('devtool modify %s -x %s' % (virtrecipe, tempdir))
-        self.assertTrue(os.path.exists(os.path.join(tempdir, 'Makefile.am')), 'Extracted source could not be found')
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'conf', 'layer.conf')), 'Workspace directory not created')
-        matches = glob.glob(os.path.join(self.workspacedir, 'appends', '%s_*.bbappend' % realrecipe))
-        self.assertTrue(matches, 'bbappend not created %s' % result.output)
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertNotIn(virtrecipe, result.output)
-        self.assertIn(realrecipe, result.output)
-        # Check git repo
-        self._check_src_repo(tempdir)
-        # This is probably sufficient
-
-
-    @testcase(1169)
-    def test_devtool_update_recipe(self):
-        # Check preconditions
-        testrecipe = 'minicom'
-        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
-        recipefile = bb_vars['FILE']
-        src_uri = bb_vars['SRC_URI']
-        self.assertNotIn('git://', src_uri, 'This test expects the %s recipe to NOT be a git recipe' % testrecipe)
-        self._check_repo_status(os.path.dirname(recipefile), [])
-        # First, modify a recipe
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        # (don't bother with cleaning the recipe on teardown, we won't be building it)
-        # We don't use -x here so that we test the behaviour of devtool modify without it
-        result = runCmd('devtool modify %s %s' % (testrecipe, tempdir))
-        # Check git repo
-        self._check_src_repo(tempdir)
-        # Add a couple of commits
-        # FIXME: this only tests adding, need to also test update and remove
-        result = runCmd('echo "Additional line" >> README', cwd=tempdir)
-        result = runCmd('git commit -a -m "Change the README"', cwd=tempdir)
-        result = runCmd('echo "A new file" > devtool-new-file', cwd=tempdir)
-        result = runCmd('git add devtool-new-file', cwd=tempdir)
-        result = runCmd('git commit -m "Add a new file"', cwd=tempdir)
-        self.add_command_to_tearDown('cd %s; rm %s/*.patch; git checkout %s %s' % (os.path.dirname(recipefile), testrecipe, testrecipe, os.path.basename(recipefile)))
-        result = runCmd('devtool update-recipe %s' % testrecipe)
-        expected_status = [(' M', '.*/%s$' % os.path.basename(recipefile)),
-                           ('??', '.*/0001-Change-the-README.patch$'),
-                           ('??', '.*/0002-Add-a-new-file.patch$')]
-        self._check_repo_status(os.path.dirname(recipefile), expected_status)
-
-    @testcase(1172)
-    def test_devtool_update_recipe_git(self):
-        # Check preconditions
-        testrecipe = 'mtd-utils'
-        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
-        recipefile = bb_vars['FILE']
-        src_uri = bb_vars['SRC_URI']
-        self.assertIn('git://', src_uri, 'This test expects the %s recipe to be a git recipe' % testrecipe)
-        patches = []
-        for entry in src_uri.split():
-            if entry.startswith('file://') and entry.endswith('.patch'):
-                patches.append(entry[7:].split(';')[0])
-        self.assertGreater(len(patches), 0, 'The %s recipe does not appear to contain any patches, so this test will not be effective' % testrecipe)
-        self._check_repo_status(os.path.dirname(recipefile), [])
-        # First, modify a recipe
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        # (don't bother with cleaning the recipe on teardown, we won't be building it)
-        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempdir))
-        # Check git repo
-        self._check_src_repo(tempdir)
-        # Add a couple of commits
-        # FIXME: this only tests adding, need to also test update and remove
-        result = runCmd('echo "# Additional line" >> Makefile.am', cwd=tempdir)
-        result = runCmd('git commit -a -m "Change the Makefile"', cwd=tempdir)
-        result = runCmd('echo "A new file" > devtool-new-file', cwd=tempdir)
-        result = runCmd('git add devtool-new-file', cwd=tempdir)
-        result = runCmd('git commit -m "Add a new file"', cwd=tempdir)
-        self.add_command_to_tearDown('cd %s; rm -rf %s; git checkout %s %s' % (os.path.dirname(recipefile), testrecipe, testrecipe, os.path.basename(recipefile)))
-        result = runCmd('devtool update-recipe -m srcrev %s' % testrecipe)
-        expected_status = [(' M', '.*/%s$' % os.path.basename(recipefile))] + \
-                          [(' D', '.*/%s$' % patch) for patch in patches]
-        self._check_repo_status(os.path.dirname(recipefile), expected_status)
-
-        result = runCmd('git diff %s' % os.path.basename(recipefile), cwd=os.path.dirname(recipefile))
-        addlines = ['SRCREV = ".*"', 'SRC_URI = "git://git.infradead.org/mtd-utils.git"']
-        srcurilines = src_uri.split()
-        srcurilines[0] = 'SRC_URI = "' + srcurilines[0]
-        srcurilines.append('"')
-        removelines = ['SRCREV = ".*"'] + srcurilines
-        for line in result.output.splitlines():
-            if line.startswith('+++') or line.startswith('---'):
-                continue
-            elif line.startswith('+'):
-                matched = False
-                for item in addlines:
-                    if re.match(item, line[1:].strip()):
-                        matched = True
-                        break
-                self.assertTrue(matched, 'Unexpected diff add line: %s' % line)
-            elif line.startswith('-'):
-                matched = False
-                for item in removelines:
-                    if re.match(item, line[1:].strip()):
-                        matched = True
-                        break
-                self.assertTrue(matched, 'Unexpected diff remove line: %s' % line)
-        # Now try with auto mode
-        runCmd('cd %s; git checkout %s %s' % (os.path.dirname(recipefile), testrecipe, os.path.basename(recipefile)))
-        result = runCmd('devtool update-recipe %s' % testrecipe)
-        result = runCmd('git rev-parse --show-toplevel', cwd=os.path.dirname(recipefile))
-        topleveldir = result.output.strip()
-        relpatchpath = os.path.join(os.path.relpath(os.path.dirname(recipefile), topleveldir), testrecipe)
-        expected_status = [(' M', os.path.relpath(recipefile, topleveldir)),
-                           ('??', '%s/0001-Change-the-Makefile.patch' % relpatchpath),
-                           ('??', '%s/0002-Add-a-new-file.patch' % relpatchpath)]
-        self._check_repo_status(os.path.dirname(recipefile), expected_status)
-
-    @testcase(1170)
-    def test_devtool_update_recipe_append(self):
-        # Check preconditions
-        testrecipe = 'mdadm'
-        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
-        recipefile = bb_vars['FILE']
-        src_uri = bb_vars['SRC_URI']
-        self.assertNotIn('git://', src_uri, 'This test expects the %s recipe to NOT be a git recipe' % testrecipe)
-        self._check_repo_status(os.path.dirname(recipefile), [])
-        # First, modify a recipe
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        tempsrcdir = os.path.join(tempdir, 'source')
-        templayerdir = os.path.join(tempdir, 'layer')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        # (don't bother with cleaning the recipe on teardown, we won't be building it)
-        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempsrcdir))
-        # Check git repo
-        self._check_src_repo(tempsrcdir)
-        # Add a commit
-        result = runCmd("sed 's!\\(#define VERSION\\W*\"[^\"]*\\)\"!\\1-custom\"!' -i ReadMe.c", cwd=tempsrcdir)
-        result = runCmd('git commit -a -m "Add our custom version"', cwd=tempsrcdir)
-        self.add_command_to_tearDown('cd %s; rm -f %s/*.patch; git checkout .' % (os.path.dirname(recipefile), testrecipe))
-        # Create a temporary layer and add it to bblayers.conf
-        self._create_temp_layer(templayerdir, True, 'selftestupdaterecipe')
-        # Create the bbappend
-        result = runCmd('devtool update-recipe %s -a %s' % (testrecipe, templayerdir))
-        self.assertNotIn('WARNING:', result.output)
-        # Check recipe is still clean
-        self._check_repo_status(os.path.dirname(recipefile), [])
-        # Check bbappend was created
-        splitpath = os.path.dirname(recipefile).split(os.sep)
-        appenddir = os.path.join(templayerdir, splitpath[-2], splitpath[-1])
-        bbappendfile = self._check_bbappend(testrecipe, recipefile, appenddir)
-        patchfile = os.path.join(appenddir, testrecipe, '0001-Add-our-custom-version.patch')
-        self.assertTrue(os.path.exists(patchfile), 'Patch file not created')
-
-        # Check bbappend contents
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n',
-                         'SRC_URI += "file://0001-Add-our-custom-version.patch"\n',
-                         '\n']
-        with open(bbappendfile, 'r') as f:
-            self.assertEqual(expectedlines, f.readlines())
-
-        # Check we can run it again and bbappend isn't modified
-        result = runCmd('devtool update-recipe %s -a %s' % (testrecipe, templayerdir))
-        with open(bbappendfile, 'r') as f:
-            self.assertEqual(expectedlines, f.readlines())
-        # Drop new commit and check patch gets deleted
-        result = runCmd('git reset HEAD^', cwd=tempsrcdir)
-        result = runCmd('devtool update-recipe %s -a %s' % (testrecipe, templayerdir))
-        self.assertFalse(os.path.exists(patchfile), 'Patch file not deleted')
-        expectedlines2 = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n']
-        with open(bbappendfile, 'r') as f:
-            self.assertEqual(expectedlines2, f.readlines())
-        # Put commit back and check we can run it if layer isn't in bblayers.conf
-        os.remove(bbappendfile)
-        result = runCmd('git commit -a -m "Add our custom version"', cwd=tempsrcdir)
-        result = runCmd('bitbake-layers remove-layer %s' % templayerdir, cwd=self.builddir)
-        result = runCmd('devtool update-recipe %s -a %s' % (testrecipe, templayerdir))
-        self.assertIn('WARNING: Specified layer is not currently enabled in bblayers.conf', result.output)
-        self.assertTrue(os.path.exists(patchfile), 'Patch file not created (with disabled layer)')
-        with open(bbappendfile, 'r') as f:
-            self.assertEqual(expectedlines, f.readlines())
-        # Deleting isn't expected to work under these circumstances
-
-    @testcase(1171)
-    def test_devtool_update_recipe_append_git(self):
-        # Check preconditions
-        testrecipe = 'mtd-utils'
-        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
-        recipefile = bb_vars['FILE']
-        src_uri = bb_vars['SRC_URI']
-        self.assertIn('git://', src_uri, 'This test expects the %s recipe to be a git recipe' % testrecipe)
-        for entry in src_uri.split():
-            if entry.startswith('git://'):
-                git_uri = entry
-                break
-        self._check_repo_status(os.path.dirname(recipefile), [])
-        # First, modify a recipe
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        tempsrcdir = os.path.join(tempdir, 'source')
-        templayerdir = os.path.join(tempdir, 'layer')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        # (don't bother with cleaning the recipe on teardown, we won't be building it)
-        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempsrcdir))
-        # Check git repo
-        self._check_src_repo(tempsrcdir)
-        # Add a commit
-        result = runCmd('echo "# Additional line" >> Makefile.am', cwd=tempsrcdir)
-        result = runCmd('git commit -a -m "Change the Makefile"', cwd=tempsrcdir)
-        self.add_command_to_tearDown('cd %s; rm -f %s/*.patch; git checkout .' % (os.path.dirname(recipefile), testrecipe))
-        # Create a temporary layer
-        os.makedirs(os.path.join(templayerdir, 'conf'))
-        with open(os.path.join(templayerdir, 'conf', 'layer.conf'), 'w') as f:
-            f.write('BBPATH .= ":${LAYERDIR}"\n')
-            f.write('BBFILES += "${LAYERDIR}/recipes-*/*/*.bbappend"\n')
-            f.write('BBFILE_COLLECTIONS += "oeselftesttemplayer"\n')
-            f.write('BBFILE_PATTERN_oeselftesttemplayer = "^${LAYERDIR}/"\n')
-            f.write('BBFILE_PRIORITY_oeselftesttemplayer = "999"\n')
-            f.write('BBFILE_PATTERN_IGNORE_EMPTY_oeselftesttemplayer = "1"\n')
-        self.add_command_to_tearDown('bitbake-layers remove-layer %s || true' % templayerdir)
-        result = runCmd('bitbake-layers add-layer %s' % templayerdir, cwd=self.builddir)
-        # Create the bbappend
-        result = runCmd('devtool update-recipe -m srcrev %s -a %s' % (testrecipe, templayerdir))
-        self.assertNotIn('WARNING:', result.output)
-        # Check recipe is still clean
-        self._check_repo_status(os.path.dirname(recipefile), [])
-        # Check bbappend was created
-        splitpath = os.path.dirname(recipefile).split(os.sep)
-        appenddir = os.path.join(templayerdir, splitpath[-2], splitpath[-1])
-        bbappendfile = self._check_bbappend(testrecipe, recipefile, appenddir)
-        self.assertFalse(os.path.exists(os.path.join(appenddir, testrecipe)), 'Patch directory should not be created')
-
-        # Check bbappend contents
-        result = runCmd('git rev-parse HEAD', cwd=tempsrcdir)
-        expectedlines = set(['SRCREV = "%s"\n' % result.output,
-                             '\n',
-                             'SRC_URI = "%s"\n' % git_uri,
-                             '\n'])
-        with open(bbappendfile, 'r') as f:
-            self.assertEqual(expectedlines, set(f.readlines()))
-
-        # Check we can run it again and bbappend isn't modified
-        result = runCmd('devtool update-recipe -m srcrev %s -a %s' % (testrecipe, templayerdir))
-        with open(bbappendfile, 'r') as f:
-            self.assertEqual(expectedlines, set(f.readlines()))
-        # Drop new commit and check SRCREV changes
-        result = runCmd('git reset HEAD^', cwd=tempsrcdir)
-        result = runCmd('devtool update-recipe -m srcrev %s -a %s' % (testrecipe, templayerdir))
-        self.assertFalse(os.path.exists(os.path.join(appenddir, testrecipe)), 'Patch directory should not be created')
-        result = runCmd('git rev-parse HEAD', cwd=tempsrcdir)
-        expectedlines = set(['SRCREV = "%s"\n' % result.output,
-                             '\n',
-                             'SRC_URI = "%s"\n' % git_uri,
-                             '\n'])
-        with open(bbappendfile, 'r') as f:
-            self.assertEqual(expectedlines, set(f.readlines()))
-        # Put commit back and check we can run it if layer isn't in bblayers.conf
-        os.remove(bbappendfile)
-        result = runCmd('git commit -a -m "Change the Makefile"', cwd=tempsrcdir)
-        result = runCmd('bitbake-layers remove-layer %s' % templayerdir, cwd=self.builddir)
-        result = runCmd('devtool update-recipe -m srcrev %s -a %s' % (testrecipe, templayerdir))
-        self.assertIn('WARNING: Specified layer is not currently enabled in bblayers.conf', result.output)
-        self.assertFalse(os.path.exists(os.path.join(appenddir, testrecipe)), 'Patch directory should not be created')
-        result = runCmd('git rev-parse HEAD', cwd=tempsrcdir)
-        expectedlines = set(['SRCREV = "%s"\n' % result.output,
-                             '\n',
-                             'SRC_URI = "%s"\n' % git_uri,
-                             '\n'])
-        with open(bbappendfile, 'r') as f:
-            self.assertEqual(expectedlines, set(f.readlines()))
-        # Deleting isn't expected to work under these circumstances
-
-    @testcase(1370)
-    def test_devtool_update_recipe_local_files(self):
-        """Check that local source files are copied over instead of patched"""
-        testrecipe = 'makedevs'
-        recipefile = get_bb_var('FILE', testrecipe)
-        # Setup srctree for modifying the recipe
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        # (don't bother with cleaning the recipe on teardown, we won't be
-        # building it)
-        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempdir))
-        # Check git repo
-        self._check_src_repo(tempdir)
-        # Try building just to ensure we haven't broken that
-        bitbake("%s" % testrecipe)
-        # Edit / commit local source
-        runCmd('echo "/* Foobar */" >> oe-local-files/makedevs.c', cwd=tempdir)
-        runCmd('echo "Foo" > oe-local-files/new-local', cwd=tempdir)
-        runCmd('echo "Bar" > new-file', cwd=tempdir)
-        runCmd('git add new-file', cwd=tempdir)
-        runCmd('git commit -m "Add new file"', cwd=tempdir)
-        self.add_command_to_tearDown('cd %s; git clean -fd .; git checkout .' %
-                                     os.path.dirname(recipefile))
-        runCmd('devtool update-recipe %s' % testrecipe)
-        expected_status = [(' M', '.*/%s$' % os.path.basename(recipefile)),
-                           (' M', '.*/makedevs/makedevs.c$'),
-                           ('??', '.*/makedevs/new-local$'),
-                           ('??', '.*/makedevs/0001-Add-new-file.patch$')]
-        self._check_repo_status(os.path.dirname(recipefile), expected_status)
-
-    @testcase(1371)
-    def test_devtool_update_recipe_local_files_2(self):
-        """Check local source files support when oe-local-files is in Git"""
-        testrecipe = 'lzo'
-        recipefile = get_bb_var('FILE', testrecipe)
-        # Setup srctree for modifying the recipe
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempdir))
-        # Check git repo
-        self._check_src_repo(tempdir)
-        # Add oe-local-files to Git
-        runCmd('rm oe-local-files/.gitignore', cwd=tempdir)
-        runCmd('git add oe-local-files', cwd=tempdir)
-        runCmd('git commit -m "Add local sources"', cwd=tempdir)
-        # Edit / commit local sources
-        runCmd('echo "# Foobar" >> oe-local-files/acinclude.m4', cwd=tempdir)
-        runCmd('git commit -am "Edit existing file"', cwd=tempdir)
-        runCmd('git rm oe-local-files/run-ptest', cwd=tempdir)
-        runCmd('git commit -m "Remove file"', cwd=tempdir)
-        runCmd('echo "Foo" > oe-local-files/new-local', cwd=tempdir)
-        runCmd('git add oe-local-files/new-local', cwd=tempdir)
-        runCmd('git commit -m "Add new local file"', cwd=tempdir)
-        runCmd('echo "Gar" > new-file', cwd=tempdir)
-        runCmd('git add new-file', cwd=tempdir)
-        runCmd('git commit -m "Add new file"', cwd=tempdir)
-        self.add_command_to_tearDown('cd %s; git clean -fd .; git checkout .' %
-                                     os.path.dirname(recipefile))
-        # Checkout unmodified file to working copy -> devtool should still pick
-        # the modified version from HEAD
-        runCmd('git checkout HEAD^ -- oe-local-files/acinclude.m4', cwd=tempdir)
-        runCmd('devtool update-recipe %s' % testrecipe)
-        expected_status = [(' M', '.*/%s$' % os.path.basename(recipefile)),
-                           (' M', '.*/acinclude.m4$'),
-                           (' D', '.*/run-ptest$'),
-                           ('??', '.*/new-local$'),
-                           ('??', '.*/0001-Add-new-file.patch$')]
-        self._check_repo_status(os.path.dirname(recipefile), expected_status)
-
-    def test_devtool_update_recipe_local_files_3(self):
-        # First, modify the recipe
-        testrecipe = 'devtool-test-localonly'
-        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
-        recipefile = bb_vars['FILE']
-        src_uri = bb_vars['SRC_URI']
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        # (don't bother with cleaning the recipe on teardown, we won't be building it)
-        result = runCmd('devtool modify %s' % testrecipe)
-        # Modify one file
-        runCmd('echo "Another line" >> file2', cwd=os.path.join(self.workspacedir, 'sources', testrecipe, 'oe-local-files'))
-        self.add_command_to_tearDown('cd %s; rm %s/*; git checkout %s %s' % (os.path.dirname(recipefile), testrecipe, testrecipe, os.path.basename(recipefile)))
-        result = runCmd('devtool update-recipe %s' % testrecipe)
-        expected_status = [(' M', '.*/%s/file2$' % testrecipe)]
-        self._check_repo_status(os.path.dirname(recipefile), expected_status)
-
-    def test_devtool_update_recipe_local_patch_gz(self):
-        # First, modify the recipe
-        testrecipe = 'devtool-test-patch-gz'
-        if get_bb_var('DISTRO') == 'poky-tiny':
-            self.skipTest("The DISTRO 'poky-tiny' does not provide the dependencies needed by %s" % testrecipe)
-        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
-        recipefile = bb_vars['FILE']
-        src_uri = bb_vars['SRC_URI']
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        # (don't bother with cleaning the recipe on teardown, we won't be building it)
-        result = runCmd('devtool modify %s' % testrecipe)
-        # Modify one file
-        srctree = os.path.join(self.workspacedir, 'sources', testrecipe)
-        runCmd('echo "Another line" >> README', cwd=srctree)
-        runCmd('git commit -a --amend --no-edit', cwd=srctree)
-        self.add_command_to_tearDown('cd %s; rm %s/*; git checkout %s %s' % (os.path.dirname(recipefile), testrecipe, testrecipe, os.path.basename(recipefile)))
-        result = runCmd('devtool update-recipe %s' % testrecipe)
-        expected_status = [(' M', '.*/%s/readme.patch.gz$' % testrecipe)]
-        self._check_repo_status(os.path.dirname(recipefile), expected_status)
-        patch_gz = os.path.join(os.path.dirname(recipefile), testrecipe, 'readme.patch.gz')
-        result = runCmd('file %s' % patch_gz)
-        if 'gzip compressed data' not in result.output:
-            self.fail('New patch file is not gzipped - file reports:\n%s' % result.output)
-
-    def test_devtool_update_recipe_local_files_subdir(self):
-        # Try devtool extract on a recipe that has a file with subdir= set in
-        # SRC_URI such that it overwrites a file that was in an archive that
-        # was also in SRC_URI
-        # First, modify the recipe
-        testrecipe = 'devtool-test-subdir'
-        bb_vars = get_bb_vars(['FILE', 'SRC_URI'], testrecipe)
-        recipefile = bb_vars['FILE']
-        src_uri = bb_vars['SRC_URI']
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        # (don't bother with cleaning the recipe on teardown, we won't be building it)
-        result = runCmd('devtool modify %s' % testrecipe)
-        testfile = os.path.join(self.workspacedir, 'sources', testrecipe, 'testfile')
-        self.assertTrue(os.path.exists(testfile), 'Extracted source could not be found')
-        with open(testfile, 'r') as f:
-            contents = f.read().rstrip()
-        self.assertEqual(contents, 'Modified version', 'File has apparently not been overwritten as it should have been')
-        # Test devtool update-recipe without modifying any files
-        self.add_command_to_tearDown('cd %s; rm %s/*; git checkout %s %s' % (os.path.dirname(recipefile), testrecipe, testrecipe, os.path.basename(recipefile)))
-        result = runCmd('devtool update-recipe %s' % testrecipe)
-        expected_status = []
-        self._check_repo_status(os.path.dirname(recipefile), expected_status)
-
-    @testcase(1163)
-    def test_devtool_extract(self):
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        # Try devtool extract
-        self.track_for_cleanup(tempdir)
-        self.append_config('PREFERRED_PROVIDER_virtual/make = "remake"')
-        result = runCmd('devtool extract remake %s' % tempdir)
-        self.assertTrue(os.path.exists(os.path.join(tempdir, 'Makefile.am')), 'Extracted source could not be found')
-        # devtool extract shouldn't create the workspace
-        self.assertFalse(os.path.exists(self.workspacedir))
-        self._check_src_repo(tempdir)
-
-    @testcase(1379)
-    def test_devtool_extract_virtual(self):
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        # Try devtool extract
-        self.track_for_cleanup(tempdir)
-        result = runCmd('devtool extract virtual/make %s' % tempdir)
-        self.assertTrue(os.path.exists(os.path.join(tempdir, 'Makefile.am')), 'Extracted source could not be found')
-        # devtool extract shouldn't create the workspace
-        self.assertFalse(os.path.exists(self.workspacedir))
-        self._check_src_repo(tempdir)
-
-    @testcase(1168)
-    def test_devtool_reset_all(self):
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        testrecipe1 = 'mdadm'
-        testrecipe2 = 'cronie'
-        result = runCmd('devtool modify -x %s %s' % (testrecipe1, os.path.join(tempdir, testrecipe1)))
-        result = runCmd('devtool modify -x %s %s' % (testrecipe2, os.path.join(tempdir, testrecipe2)))
-        result = runCmd('devtool build %s' % testrecipe1)
-        result = runCmd('devtool build %s' % testrecipe2)
-        stampprefix1 = get_bb_var('STAMP', testrecipe1)
-        self.assertTrue(stampprefix1, 'Unable to get STAMP value for recipe %s' % testrecipe1)
-        stampprefix2 = get_bb_var('STAMP', testrecipe2)
-        self.assertTrue(stampprefix2, 'Unable to get STAMP value for recipe %s' % testrecipe2)
-        result = runCmd('devtool reset -a')
-        self.assertIn(testrecipe1, result.output)
-        self.assertIn(testrecipe2, result.output)
-        result = runCmd('devtool status')
-        self.assertNotIn(testrecipe1, result.output)
-        self.assertNotIn(testrecipe2, result.output)
-        matches1 = glob.glob(stampprefix1 + '*')
-        self.assertFalse(matches1, 'Stamp files exist for recipe %s that should have been cleaned' % testrecipe1)
-        matches2 = glob.glob(stampprefix2 + '*')
-        self.assertFalse(matches2, 'Stamp files exist for recipe %s that should have been cleaned' % testrecipe2)
-
-    @testcase(1272)
-    def test_devtool_deploy_target(self):
-        # NOTE: Whilst this test would seemingly be better placed as a runtime test,
-        # unfortunately the runtime tests run under bitbake and you can't run
-        # devtool within bitbake (since devtool needs to run bitbake itself).
-        # Additionally we are testing build-time functionality as well, so
-        # really this has to be done as an oe-selftest test.
-        #
-        # Check preconditions
-        machine = get_bb_var('MACHINE')
-        if not machine.startswith('qemu'):
-            self.skipTest('This test only works with qemu machines')
-        if not os.path.exists('/etc/runqemu-nosudo'):
-            self.skipTest('You must set up tap devices with scripts/runqemu-gen-tapdevs before running this test')
-        result = runCmd('PATH="$PATH:/sbin:/usr/sbin" ip tuntap show', ignore_status=True)
-        if result.status != 0:
-            result = runCmd('PATH="$PATH:/sbin:/usr/sbin" ifconfig -a', ignore_status=True)
-            if result.status != 0:
-                self.skipTest('Failed to determine if tap devices exist with ifconfig or ip: %s' % result.output)
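-        # Note: this relies on Python's for/else semantics - the else branch
-        # (skipTest) only runs if no line starting with 'tap' was found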
-        for line in result.output.splitlines():
-            if line.startswith('tap'):
-                break
-        else:
-            self.skipTest('No tap devices found - you must set up tap devices with scripts/runqemu-gen-tapdevs before running this test')
-        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
-        # Definitions
-        testrecipe = 'mdadm'
-        testfile = '/sbin/mdadm'
-        testimage = 'oe-selftest-image'
-        testcommand = '/sbin/mdadm --help'
-        # Build an image to run
-        bitbake("%s qemu-native qemu-helper-native" % testimage)
-        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
-        self.add_command_to_tearDown('bitbake -c clean %s' % testimage)
-        self.add_command_to_tearDown('rm -f %s/%s*' % (deploy_dir_image, testimage))
-        # Clean recipe so the first deploy will fail
-        bitbake("%s -c clean" % testrecipe)
-        # Try devtool modify
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        self.add_command_to_tearDown('bitbake -c clean %s' % testrecipe)
-        result = runCmd('devtool modify %s -x %s' % (testrecipe, tempdir))
-        # Test that deploy-target at this point fails (properly)
-        result = runCmd('devtool deploy-target -n %s root@localhost' % testrecipe, ignore_status=True)
-        self.assertNotEqual(result.status, 0, 'devtool deploy-target should have failed, output: %s' % result.output)
-        self.assertNotIn('Traceback', result.output, 'devtool deploy-target should have failed with a proper error not a traceback, output: %s' % result.output)
-        result = runCmd('devtool build %s' % testrecipe)
-        # First try a dry-run of deploy-target
-        result = runCmd('devtool deploy-target -n %s root@localhost' % testrecipe)
-        self.assertIn('  %s' % testfile, result.output)
-        # Boot the image
-        with runqemu(testimage) as qemu:
-            # Now really test deploy-target
-            result = runCmd('devtool deploy-target -c %s root@%s' % (testrecipe, qemu.ip))
-            # Run a test command to see if it was installed properly
-            sshargs = '-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'
-            result = runCmd('ssh %s root@%s %s' % (sshargs, qemu.ip, testcommand))
-            # Check if it deployed all of the files with the right ownership/perms
-            # First look on the host - need to do this under pseudo to get the correct ownership/perms
-            bb_vars = get_bb_vars(['D', 'FAKEROOTENV', 'FAKEROOTCMD'], testrecipe)
-            installdir = bb_vars['D']
-            fakerootenv = bb_vars['FAKEROOTENV']
-            fakerootcmd = bb_vars['FAKEROOTCMD']
-            result = runCmd('%s %s find . -type f -exec ls -l {} \;' % (fakerootenv, fakerootcmd), cwd=installdir)
-            filelist1 = self._process_ls_output(result.output)
-
-            # Now look on the target
-            tempdir2 = tempfile.mkdtemp(prefix='devtoolqa')
-            self.track_for_cleanup(tempdir2)
-            tmpfilelist = os.path.join(tempdir2, 'files.txt')
-            with open(tmpfilelist, 'w') as f:
-                for line in filelist1:
-                    splitline = line.split()
-                    f.write(splitline[-1] + '\n')
-            result = runCmd('cat %s | ssh -q %s root@%s \'xargs ls -l\'' % (tmpfilelist, sshargs, qemu.ip))
-            filelist2 = self._process_ls_output(result.output)
-            filelist1.sort(key=lambda item: item.split()[-1])
-            filelist2.sort(key=lambda item: item.split()[-1])
-            self.assertEqual(filelist1, filelist2)
-            # Test undeploy-target
-            result = runCmd('devtool undeploy-target -c %s root@%s' % (testrecipe, qemu.ip))
-            result = runCmd('ssh %s root@%s %s' % (sshargs, qemu.ip, testcommand), ignore_status=True)
-            self.assertNotEqual(result.status, 0, 'undeploy-target did not remove command as it should have')
-
-    @testcase(1366)
-    def test_devtool_build_image(self):
-        """Test devtool build-image plugin"""
-        # Check preconditions
-        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
-        image = 'core-image-minimal'
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        self.add_command_to_tearDown('bitbake -c clean %s' % image)
-        bitbake('%s -c clean' % image)
-        # Add target and native recipes to workspace
-        recipes = ['mdadm', 'parted-native']
-        for recipe in recipes:
-            tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-            self.track_for_cleanup(tempdir)
-            self.add_command_to_tearDown('bitbake -c clean %s' % recipe)
-            runCmd('devtool modify %s -x %s' % (recipe, tempdir))
-        # Try to build image
-        result = runCmd('devtool build-image %s' % image)
-        self.assertEqual(result.status, 0, 'devtool build-image failed')
-        # Check if image contains expected packages
-        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
-        image_link_name = get_bb_var('IMAGE_LINK_NAME', image)
-        reqpkgs = [item for item in recipes if not item.endswith('-native')]
-        with open(os.path.join(deploy_dir_image, image_link_name + '.manifest'), 'r') as f:
-            for line in f:
-                splitval = line.split()
-                if splitval:
-                    pkg = splitval[0]
-                    if pkg in reqpkgs:
-                        reqpkgs.remove(pkg)
-        if reqpkgs:
-            self.fail('The following packages were not present in the image as expected: %s' % ', '.join(reqpkgs))
-
-    @testcase(1367)
-    def test_devtool_upgrade(self):
-        # Check preconditions
-        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        # Check parameters
-        result = runCmd('devtool upgrade -h')
-        for param in 'recipename srctree --version -V --branch -b --keep-temp --no-patch'.split():
-            self.assertIn(param, result.output)
-        # For the moment, we are using a real recipe.
-        recipe = 'devtool-upgrade-test1'
-        version = '1.6.0'
-        oldrecipefile = get_bb_var('FILE', recipe)
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        # Check that recipe is not already under devtool control
-        result = runCmd('devtool status')
-        self.assertNotIn(recipe, result.output)
-        # Check upgrade. The code does not check whether the new PV is older or newer than the
-        # current PV, so we may actually be downgrading instead of upgrading.
-        result = runCmd('devtool upgrade %s %s -V %s' % (recipe, tempdir, version))
-        # Check if srctree at least is populated
-        self.assertTrue(len(os.listdir(tempdir)) > 0, 'srctree (%s) should be populated with new (%s) source code' % (tempdir, version))
-        # Check new recipe subdirectory is present
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'recipes', recipe, '%s-%s' % (recipe, version))), 'Recipe folder should exist')
-        # Check new recipe file is present
-        newrecipefile = os.path.join(self.workspacedir, 'recipes', recipe, '%s_%s.bb' % (recipe, version))
-        self.assertTrue(os.path.exists(newrecipefile), 'Recipe file should exist after upgrade')
-        # Check devtool status and make sure recipe is present
-        result = runCmd('devtool status')
-        self.assertIn(recipe, result.output)
-        self.assertIn(tempdir, result.output)
-        # Check recipe got changed as expected
-        with open(oldrecipefile + '.upgraded', 'r') as f:
-            desiredlines = f.readlines()
-        with open(newrecipefile, 'r') as f:
-            newlines = f.readlines()
-        self.assertEqual(desiredlines, newlines)
-        # Check devtool reset recipe
-        result = runCmd('devtool reset %s -n' % recipe)
-        result = runCmd('devtool status')
-        self.assertNotIn(recipe, result.output)
-        self.assertFalse(os.path.exists(os.path.join(self.workspacedir, 'recipes', recipe)), 'Recipe directory should not exist after resetting')
-
-    @testcase(1433)
-    def test_devtool_upgrade_git(self):
-        # Check preconditions
-        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        recipe = 'devtool-upgrade-test2'
-        commit = '6cc6077a36fe2648a5f993fe7c16c9632f946517'
-        oldrecipefile = get_bb_var('FILE', recipe)
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        # Check that recipe is not already under devtool control
-        result = runCmd('devtool status')
-        self.assertNotIn(recipe, result.output)
-        # Check upgrade
-        result = runCmd('devtool upgrade %s %s -S %s' % (recipe, tempdir, commit))
-        # Check if srctree at least is populated
-        self.assertTrue(len(os.listdir(tempdir)) > 0, 'srctree (%s) should be populated with new (%s) source code' % (tempdir, commit))
-        # Check new recipe file is present
-        newrecipefile = os.path.join(self.workspacedir, 'recipes', recipe, os.path.basename(oldrecipefile))
-        self.assertTrue(os.path.exists(newrecipefile), 'Recipe file should exist after upgrade')
-        # Check devtool status and make sure recipe is present
-        result = runCmd('devtool status')
-        self.assertIn(recipe, result.output)
-        self.assertIn(tempdir, result.output)
-        # Check recipe got changed as expected
-        with open(oldrecipefile + '.upgraded', 'r') as f:
-            desiredlines = f.readlines()
-        with open(newrecipefile, 'r') as f:
-            newlines = f.readlines()
-        self.assertEqual(desiredlines, newlines)
-        # Check devtool reset recipe
-        result = runCmd('devtool reset %s -n' % recipe)
-        result = runCmd('devtool status')
-        self.assertNotIn(recipe, result.output)
-        self.assertFalse(os.path.exists(os.path.join(self.workspacedir, 'recipes', recipe)), 'Recipe directory should not exist after resetting')
-
-    @testcase(1352)
-    def test_devtool_layer_plugins(self):
-        """Test that devtool can use plugins from other layers.
-
-        This test executes the selftest-reverse command from meta-selftest."""
-
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-
-        s = "Microsoft Made No Profit From Anyone's Zunes Yo"
-        result = runCmd("devtool --quiet selftest-reverse \"%s\"" % s)
-        self.assertEqual(result.output, s[::-1])
-
-    def _copy_file_with_cleanup(self, srcfile, basedstdir, *paths):
-        dstdir = basedstdir
-        self.assertTrue(os.path.exists(dstdir))
-        for p in paths:
-            dstdir = os.path.join(dstdir, p)
-            if not os.path.exists(dstdir):
-                os.makedirs(dstdir)
-                self.track_for_cleanup(dstdir)
-        dstfile = os.path.join(dstdir, os.path.basename(srcfile))
-        if srcfile != dstfile:
-            shutil.copy(srcfile, dstfile)
-            self.track_for_cleanup(dstfile)
-
-    def test_devtool_load_plugin(self):
-        """Test that devtool loads only the first found plugin in BBPATH."""
-
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-
-        devtool = runCmd("which devtool")
-        fromname = runCmd("devtool --quiet pluginfile")
-        srcfile = fromname.output
-        bbpath = get_bb_var('BBPATH')
-        searchpath = bbpath.split(':') + [os.path.dirname(devtool.output)]
-        plugincontent = []
-        with open(srcfile) as fh:
-            plugincontent = fh.readlines()
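-        # Copy the plugin into every lib/devtool directory in the search path and
-        # check that devtool loads it only once; the original plugin source is
-        # restored from plugincontent in the finally block below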
-        try:
-            self.assertIn('meta-selftest', srcfile, 'wrong bbpath plugin found')
-            for path in searchpath:
-                self._copy_file_with_cleanup(srcfile, path, 'lib', 'devtool')
-            result = runCmd("devtool --quiet count")
-            self.assertEqual(result.output, '1')
-            result = runCmd("devtool --quiet multiloaded")
-            self.assertEqual(result.output, "no")
-            for path in searchpath:
-                result = runCmd("devtool --quiet bbdir")
-                self.assertEqual(result.output, path)
-                os.unlink(os.path.join(result.output, 'lib', 'devtool', 'bbpath.py'))
-        finally:
-            with open(srcfile, 'w') as fh:
-                fh.writelines(plugincontent)
-
-    def _setup_test_devtool_finish_upgrade(self):
-        # Check preconditions
-        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        # Use a "real" recipe from meta-selftest
-        recipe = 'devtool-upgrade-test1'
-        oldversion = '1.5.3'
-        newversion = '1.6.0'
-        oldrecipefile = get_bb_var('FILE', recipe)
-        recipedir = os.path.dirname(oldrecipefile)
-        result = runCmd('git status --porcelain .', cwd=recipedir)
-        if result.output.strip():
-            self.fail('Recipe directory for %s contains uncommitted changes' % recipe)
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        # Check that recipe is not already under devtool control
-        result = runCmd('devtool status')
-        self.assertNotIn(recipe, result.output)
-        # Do the upgrade
-        result = runCmd('devtool upgrade %s %s -V %s' % (recipe, tempdir, newversion))
-        # Check devtool status and make sure recipe is present
-        result = runCmd('devtool status')
-        self.assertIn(recipe, result.output)
-        self.assertIn(tempdir, result.output)
-        # Make a change to the source
-        result = runCmd('sed -i \'/^#include "pv.h"/a \\/* Here is a new comment *\\/\' src/pv/number.c', cwd=tempdir)
-        result = runCmd('git status --porcelain', cwd=tempdir)
-        self.assertIn('M src/pv/number.c', result.output)
-        result = runCmd('git commit src/pv/number.c -m "Add a comment to the code"', cwd=tempdir)
-        # Check if patch is there
-        recipedir = os.path.dirname(oldrecipefile)
-        olddir = os.path.join(recipedir, recipe + '-' + oldversion)
-        patchfn = '0001-Add-a-note-line-to-the-quick-reference.patch'
-        self.assertTrue(os.path.exists(os.path.join(olddir, patchfn)), 'Original patch file does not exist')
-        return recipe, oldrecipefile, recipedir, olddir, newversion, patchfn
-
-    def test_devtool_finish_upgrade_origlayer(self):
-        recipe, oldrecipefile, recipedir, olddir, newversion, patchfn = self._setup_test_devtool_finish_upgrade()
-        # Ensure the recipe is where we think it should be (so that cleanup doesn't trash things)
-        self.assertIn('/meta-selftest/', recipedir)
-        # Try finish to the original layer
-        self.add_command_to_tearDown('rm -rf %s ; cd %s ; git checkout %s' % (recipedir, os.path.dirname(recipedir), recipedir))
-        result = runCmd('devtool finish %s meta-selftest' % recipe)
-        result = runCmd('devtool status')
-        self.assertNotIn(recipe, result.output, 'Recipe should have been reset by finish but wasn\'t')
-        self.assertFalse(os.path.exists(os.path.join(self.workspacedir, 'recipes', recipe)), 'Recipe directory should not exist after finish')
-        self.assertFalse(os.path.exists(oldrecipefile), 'Old recipe file should have been deleted but wasn\'t')
-        self.assertFalse(os.path.exists(os.path.join(olddir, patchfn)), 'Old patch file should have been deleted but wasn\'t')
-        newrecipefile = os.path.join(recipedir, '%s_%s.bb' % (recipe, newversion))
-        newdir = os.path.join(recipedir, recipe + '-' + newversion)
-        self.assertTrue(os.path.exists(newrecipefile), 'New recipe file should have been copied into existing layer but wasn\'t')
-        self.assertTrue(os.path.exists(os.path.join(newdir, patchfn)), 'Patch file should have been copied into new directory but wasn\'t')
-        self.assertTrue(os.path.exists(os.path.join(newdir, '0002-Add-a-comment-to-the-code.patch')), 'New patch file should have been created but wasn\'t')
-
-    def test_devtool_finish_upgrade_otherlayer(self):
-        recipe, oldrecipefile, recipedir, olddir, newversion, patchfn = self._setup_test_devtool_finish_upgrade()
-        # Ensure the recipe is where we think it should be (so that cleanup doesn't trash things)
-        self.assertIn('/meta-selftest/', recipedir)
-        # Try finish to a different layer - should create a bbappend
-        # This cleanup isn't strictly necessary but do it anyway just in case it goes wrong and writes to here
-        self.add_command_to_tearDown('rm -rf %s ; cd %s ; git checkout %s' % (recipedir, os.path.dirname(recipedir), recipedir))
-        oe_core_dir = os.path.join(get_bb_var('COREBASE'), 'meta')
-        newrecipedir = os.path.join(oe_core_dir, 'recipes-test', 'devtool')
-        newrecipefile = os.path.join(newrecipedir, '%s_%s.bb' % (recipe, newversion))
-        self.track_for_cleanup(newrecipedir)
-        result = runCmd('devtool finish %s oe-core' % recipe)
-        result = runCmd('devtool status')
-        self.assertNotIn(recipe, result.output, 'Recipe should have been reset by finish but wasn\'t')
-        self.assertFalse(os.path.exists(os.path.join(self.workspacedir, 'recipes', recipe)), 'Recipe directory should not exist after finish')
-        self.assertTrue(os.path.exists(oldrecipefile), 'Old recipe file should not have been deleted')
-        self.assertTrue(os.path.exists(os.path.join(olddir, patchfn)), 'Old patch file should not have been deleted')
-        newdir = os.path.join(newrecipedir, recipe + '-' + newversion)
-        self.assertTrue(os.path.exists(newrecipefile), 'New recipe file should have been copied into existing layer but wasn\'t')
-        self.assertTrue(os.path.exists(os.path.join(newdir, patchfn)), 'Patch file should have been copied into new directory but wasn\'t')
-        self.assertTrue(os.path.exists(os.path.join(newdir, '0002-Add-a-comment-to-the-code.patch')), 'New patch file should have been created but wasn\'t')
-
-    def _setup_test_devtool_finish_modify(self):
-        # Check preconditions
-        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
-        # Try modifying a recipe
-        self.track_for_cleanup(self.workspacedir)
-        recipe = 'mdadm'
-        oldrecipefile = get_bb_var('FILE', recipe)
-        recipedir = os.path.dirname(oldrecipefile)
-        result = runCmd('git status --porcelain .', cwd=recipedir)
-        if result.output.strip():
-            self.fail('Recipe directory for %s contains uncommitted changes' % recipe)
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        result = runCmd('devtool modify %s %s' % (recipe, tempdir))
-        self.assertTrue(os.path.exists(os.path.join(tempdir, 'Makefile')), 'Extracted source could not be found')
-        # Test devtool status
-        result = runCmd('devtool status')
-        self.assertIn(recipe, result.output)
-        self.assertIn(tempdir, result.output)
-        # Make a change to the source
-        result = runCmd('sed -i \'/^#include "mdadm.h"/a \\/* Here is a new comment *\\/\' maps.c', cwd=tempdir)
-        result = runCmd('git status --porcelain', cwd=tempdir)
-        self.assertIn('M maps.c', result.output)
-        result = runCmd('git commit maps.c -m "Add a comment to the code"', cwd=tempdir)
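-        # Locate a files subdirectory next to the recipe; the for/else fails the
-        # test if no such directory exists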
-        for entry in os.listdir(recipedir):
-            filesdir = os.path.join(recipedir, entry)
-            if os.path.isdir(filesdir):
-                break
-        else:
-            self.fail('Unable to find recipe files directory for %s' % recipe)
-        return recipe, oldrecipefile, recipedir, filesdir
-
-    def test_devtool_finish_modify_origlayer(self):
-        recipe, oldrecipefile, recipedir, filesdir = self._setup_test_devtool_finish_modify()
-        # Ensure the recipe is where we think it should be (so that cleanup doesn't trash things)
-        self.assertIn('/meta/', recipedir)
-        # Try finish to the original layer
-        self.add_command_to_tearDown('rm -rf %s ; cd %s ; git checkout %s' % (recipedir, os.path.dirname(recipedir), recipedir))
-        result = runCmd('devtool finish %s meta' % recipe)
-        result = runCmd('devtool status')
-        self.assertNotIn(recipe, result.output, 'Recipe should have been reset by finish but wasn\'t')
-        self.assertFalse(os.path.exists(os.path.join(self.workspacedir, 'recipes', recipe)), 'Recipe directory should not exist after finish')
-        expected_status = [(' M', '.*/%s$' % os.path.basename(oldrecipefile)),
-                           ('??', '.*/.*-Add-a-comment-to-the-code.patch$')]
-        self._check_repo_status(recipedir, expected_status)
-
-    def test_devtool_finish_modify_otherlayer(self):
-        recipe, oldrecipefile, recipedir, filesdir = self._setup_test_devtool_finish_modify()
-        # Ensure the recipe is where we think it should be (so that cleanup doesn't trash things)
-        self.assertIn('/meta/', recipedir)
-        relpth = os.path.relpath(recipedir, os.path.join(get_bb_var('COREBASE'), 'meta'))
-        appenddir = os.path.join(get_test_layer(), relpth)
-        self.track_for_cleanup(appenddir)
-        # Try finish to the original layer
-        self.add_command_to_tearDown('rm -rf %s ; cd %s ; git checkout %s' % (recipedir, os.path.dirname(recipedir), recipedir))
-        result = runCmd('devtool finish %s meta-selftest' % recipe)
-        result = runCmd('devtool status')
-        self.assertNotIn(recipe, result.output, 'Recipe should have been reset by finish but wasn\'t')
-        self.assertFalse(os.path.exists(os.path.join(self.workspacedir, 'recipes', recipe)), 'Recipe directory should not exist after finish')
-        result = runCmd('git status --porcelain .', cwd=recipedir)
-        if result.output.strip():
-            self.fail('Recipe directory for %s contains the following unexpected changes after finish:\n%s' % (recipe, result.output.strip()))
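-        # devtool finish to another layer creates a version-wildcard bbappend
-        # (e.g. mdadm_%.bbappend) with the patch file in a subdirectory next to it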
-        recipefn = os.path.splitext(os.path.basename(oldrecipefile))[0]
-        recipefn = recipefn.split('_')[0] + '_%'
-        appendfile = os.path.join(appenddir, recipefn + '.bbappend')
-        self.assertTrue(os.path.exists(appendfile), 'bbappend %s should have been created but wasn\'t' % appendfile)
-        newdir = os.path.join(appenddir, recipe)
-        files = os.listdir(newdir)
-        foundpatch = None
-        for fn in files:
-            if fnmatch.fnmatch(fn, '*-Add-a-comment-to-the-code.patch'):
-                foundpatch = fn
-        if not foundpatch:
-            self.fail('No patch file created next to bbappend')
-        files.remove(foundpatch)
-        if files:
-            self.fail('Unexpected file(s) copied next to bbappend: %s' % ', '.join(files))
-
-    def test_devtool_rename(self):
-        # Check preconditions
-        self.assertTrue(not os.path.exists(self.workspacedir), 'This test cannot be run with a workspace directory under the build directory')
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-
-        # First run devtool add
-        # We already have this recipe in OE-Core, but that doesn't matter
-        recipename = 'i2c-tools'
-        recipever = '3.1.2'
-        recipefile = os.path.join(self.workspacedir, 'recipes', recipename, '%s_%s.bb' % (recipename, recipever))
-        url = 'http://downloads.yoctoproject.org/mirror/sources/i2c-tools-%s.tar.bz2' % recipever
-        def add_recipe():
-            result = runCmd('devtool add %s' % url)
-            self.assertTrue(os.path.exists(recipefile), 'Expected recipe file not created')
-            self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'sources', recipename)), 'Source directory not created')
-            checkvars = {}
-            checkvars['S'] = None
-            checkvars['SRC_URI'] = url.replace(recipever, '${PV}')
-            self._test_recipe_contents(recipefile, checkvars, [])
-        add_recipe()
-        # Now rename it - change both name and version
-        newrecipename = 'mynewrecipe'
-        newrecipever = '456'
-        newrecipefile = os.path.join(self.workspacedir, 'recipes', newrecipename, '%s_%s.bb' % (newrecipename, newrecipever))
-        result = runCmd('devtool rename %s %s -V %s' % (recipename, newrecipename, newrecipever))
-        self.assertTrue(os.path.exists(newrecipefile), 'Recipe file not renamed')
-        self.assertFalse(os.path.exists(os.path.join(self.workspacedir, 'recipes', recipename)), 'Old recipe directory still exists')
-        newsrctree = os.path.join(self.workspacedir, 'sources', newrecipename)
-        self.assertTrue(os.path.exists(newsrctree), 'Source directory not renamed')
-        checkvars = {}
-        checkvars['S'] = '${WORKDIR}/%s-%s' % (recipename, recipever)
-        checkvars['SRC_URI'] = url
-        self._test_recipe_contents(newrecipefile, checkvars, [])
-        # Try again - change just name this time
-        result = runCmd('devtool reset -n %s' % newrecipename)
-        shutil.rmtree(newsrctree)
-        add_recipe()
-        newrecipefile = os.path.join(self.workspacedir, 'recipes', newrecipename, '%s_%s.bb' % (newrecipename, recipever))
-        result = runCmd('devtool rename %s %s' % (recipename, newrecipename))
-        self.assertTrue(os.path.exists(newrecipefile), 'Recipe file not renamed')
-        self.assertFalse(os.path.exists(os.path.join(self.workspacedir, 'recipes', recipename)), 'Old recipe directory still exists')
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'sources', newrecipename)), 'Source directory not renamed')
-        checkvars = {}
-        checkvars['S'] = '${WORKDIR}/%s-${PV}' % recipename
-        checkvars['SRC_URI'] = url.replace(recipever, '${PV}')
-        self._test_recipe_contents(newrecipefile, checkvars, [])
-        # Try again - change just version this time
-        result = runCmd('devtool reset -n %s' % newrecipename)
-        shutil.rmtree(newsrctree)
-        add_recipe()
-        newrecipefile = os.path.join(self.workspacedir, 'recipes', recipename, '%s_%s.bb' % (recipename, newrecipever))
-        result = runCmd('devtool rename %s -V %s' % (recipename, newrecipever))
-        self.assertTrue(os.path.exists(newrecipefile), 'Recipe file not renamed')
-        self.assertTrue(os.path.exists(os.path.join(self.workspacedir, 'sources', recipename)), 'Source directory no longer exists')
-        checkvars = {}
-        checkvars['S'] = '${WORKDIR}/${BPN}-%s' % recipever
-        checkvars['SRC_URI'] = url
-        self._test_recipe_contents(newrecipefile, checkvars, [])
-
-    @testcase(1577)
-    def test_devtool_virtual_kernel_modify(self):
-        """
-        Summary:        The purpose of this test case is to verify that
-                        devtool modify works correctly when building
-                        the kernel.
-        Dependencies:   NA
-        Steps:          1. Build the kernel with bitbake.
-                        2. Save the generated config file.
-                        3. Clean the environment.
-                        4. Use `devtool modify virtual/kernel` to validate the following:
-                           4.1 The source is checked out correctly.
-                           4.2 The resulting configuration is the same as
-                               the one saved in step 2.
-                           4.3 The kernel can be built correctly.
-                           4.4 Changes made to the source are reflected in
-                               subsequent builds.
-                           4.5 Changes to the configuration are reflected in
-                               subsequent builds.
-        Expected:       devtool modify is able to check out the kernel source,
-                        and modifications to the source and configuration are
-                        reflected when building the kernel.
-        """
-        # Set machine to qemux86 to be able to modify the kernel and
-        # verify the modification.
-        features = 'MACHINE = "qemux86"\n'
-        self.write_config(features)
-        kernel_provider = get_bb_var('PREFERRED_PROVIDER_virtual/kernel')
-        # Clean up the environment
-        bitbake('%s -c clean' % kernel_provider)
-        tempdir = tempfile.mkdtemp(prefix='devtoolqa')
-        self.track_for_cleanup(tempdir)
-        self.track_for_cleanup(self.workspacedir)
-        self.add_command_to_tearDown('bitbake-layers remove-layer */workspace')
-        self.add_command_to_tearDown('bitbake -c clean %s' % kernel_provider)
-        # Step 1
-        # Only the config file is generated here instead of building the whole
-        # kernel, to keep the execution time of this test case down.
-        bitbake('%s -c configure' % kernel_provider)
-        bbconfig = os.path.join(get_bb_var('B', kernel_provider),'.config')
-        buildir = get_bb_var('TOPDIR')
-        #Step 2
-        runCmd('cp %s %s' % (bbconfig, buildir))
-        self.assertTrue(os.path.exists(os.path.join(buildir, '.config')),
-                        'Could not copy .config file from kernel')
-
-        tmpconfig = os.path.join(buildir, '.config')
-        #Step 3
-        bitbake('%s -c clean' % kernel_provider)
-        #Step 4.1
-        runCmd('devtool modify virtual/kernel -x %s' % tempdir)
-        self.assertTrue(os.path.exists(os.path.join(tempdir, 'Makefile')),
-                        'Extracted source could not be found')
-        #Step 4.2
-        configfile = os.path.join(tempdir,'.config')
-        diff = runCmd('diff %s %s' % (tmpconfig, configfile))
-        self.assertEqual(0,diff.status,'Kernel .config file is not the same using bitbake and devtool')
-        #Step 4.3
-        #NOTE: virtual/kernel is mapped to kernel_provider
-        result = runCmd('devtool build %s' % kernel_provider)
-        self.assertEqual(0,result.status,'Cannot build kernel using `devtool build`')
-        kernelfile = os.path.join(get_bb_var('KBUILD_OUTPUT', kernel_provider), 'vmlinux')
-        self.assertTrue(os.path.exists(kernelfile), 'Kernel was not built correctly')
-
-        #Modify the kernel source, this is specific for qemux86
-        modfile = os.path.join(tempdir,'arch/x86/boot/header.S')
-        modstring = "use a boot loader - Devtool kernel testing"
-        modapplied = runCmd("sed -i 's/boot loader/%s/' %s" % (modstring, modfile))
-        self.assertEqual(0,modapplied.status,'Modification to %s on kernel source failed' % modfile)
-        #Modify the configuration
-        codeconfigfile = os.path.join(tempdir,'.config.new')
-        modconfopt = "CONFIG_SG_POOL=n"
-        modconf = runCmd("sed -i 's/CONFIG_SG_POOL=y/%s/' %s" % (modconfopt, codeconfigfile))
-        self.assertEqual(0,modconf.status,'Modification to %s failed' % codeconfigfile)
-        # Build the kernel again with devtool
-        rebuild = runCmd('devtool build %s' % kernel_provider)
-        self.assertEqual(0, rebuild.status, 'Failed to build kernel after modifying the source and config')
-        #Step 4.4
-        bzimagename = 'bzImage-' + get_bb_var('KERNEL_VERSION_NAME', kernel_provider)
-        bzimagefile = os.path.join(get_bb_var('D', kernel_provider),'boot', bzimagename)
-        checkmodcode = runCmd("grep '%s' %s" % (modstring, bzimagefile))
-        self.assertEqual(0, checkmodcode.status, 'Modification to kernel source not reflected in the built image')
-        #Step 4.5
-        checkmodconfg = runCmd("grep %s %s" % (modconfopt, codeconfigfile))
-        self.assertEqual(0,checkmodconfg.status,'Modification to configuration file failed')
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/eSDK.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/eSDK.py
deleted file mode 100644
index 1596c6e..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/eSDK.py
+++ /dev/null
@@ -1,115 +0,0 @@
-import unittest
-import tempfile
-import shutil
-import os
-import glob
-import logging
-import subprocess
-import oeqa.utils.ftools as ftools
-from oeqa.utils.decorators import testcase
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars
-
-class oeSDKExtSelfTest(oeSelfTest):
-    """
-    # Bugzilla Test Plan: 6033
-    # This code is planned to be part of the eSDK automation covering
-    # installing libraries and headers, image generation, binary feeds, and sdk-update.
-    """
-
-    @staticmethod
-    def get_esdk_environment(env_eSDK, tmpdir_eSDKQA):
-        # XXX: for now just use the first environment found; need to investigate
-        # which environment oe-selftest loads (i586, x86_64)
-        pattern = os.path.join(tmpdir_eSDKQA, 'environment-setup-*')
-        return glob.glob(pattern)[0]
-
-    @staticmethod
-    def run_esdk_cmd(env_eSDK, tmpdir_eSDKQA, cmd, postconfig=None, **options):
-        if postconfig:
-            esdk_conf_file = os.path.join(tmpdir_eSDKQA, 'conf', 'local.conf')
-            with open(esdk_conf_file, 'a+') as f:
-                f.write(postconfig)
-        if not options:
-            options = {}
-        if 'shell' not in options:
-            options['shell'] = True
-
-        runCmd("cd %s; . %s; %s" % (tmpdir_eSDKQA, env_eSDK, cmd), **options)
-
-    @staticmethod
-    def generate_eSDK(image):
-        pn_task = '%s -c populate_sdk_ext' % image
-        bitbake(pn_task)
-
-    @staticmethod
-    def get_eSDK_toolchain(image):
-        pn_task = '%s -c populate_sdk_ext' % image
-
-        bb_vars = get_bb_vars(['SDK_DEPLOY', 'TOOLCHAINEXT_OUTPUTNAME'], pn_task)
-        sdk_deploy = bb_vars['SDK_DEPLOY']
-        toolchain_name = bb_vars['TOOLCHAINEXT_OUTPUTNAME']
-        return os.path.join(sdk_deploy, toolchain_name + '.sh')
-
-    @classmethod
-    def update_configuration(cls, image, tmpdir_eSDKQA, env_eSDK, ext_sdk_path):
-        sstate_dir = os.path.join(os.environ['BUILDDIR'], 'sstate-cache')
-
-        oeSDKExtSelfTest.generate_eSDK(cls.image)
-
-        cls.ext_sdk_path = oeSDKExtSelfTest.get_eSDK_toolchain(cls.image)
-        runCmd("%s -y -d \"%s\"" % (cls.ext_sdk_path, cls.tmpdir_eSDKQA))
-
-        cls.env_eSDK = oeSDKExtSelfTest.get_esdk_environment('', cls.tmpdir_eSDKQA)
-
-        sstate_config="""
-SDK_LOCAL_CONF_WHITELIST = "SSTATE_MIRRORS"
-SSTATE_MIRRORS =  "file://.* file://%s/PATH"
-CORE_IMAGE_EXTRA_INSTALL = "perl"
-        """ % sstate_dir
-
-        with open(os.path.join(cls.tmpdir_eSDKQA, 'conf', 'local.conf'), 'a+') as f:
-            f.write(sstate_config)
-
-    @classmethod
-    def setUpClass(cls):
-        cls.tmpdir_eSDKQA = tempfile.mkdtemp(prefix='eSDKQA')
-
-        sstate_dir = get_bb_var('SSTATE_DIR')
-
-        cls.image = 'core-image-minimal'
-        oeSDKExtSelfTest.generate_eSDK(cls.image)
-
-        # Install eSDK
-        cls.ext_sdk_path = oeSDKExtSelfTest.get_eSDK_toolchain(cls.image)
-        runCmd("%s -y -d \"%s\"" % (cls.ext_sdk_path, cls.tmpdir_eSDKQA))
-
-        cls.env_eSDK = oeSDKExtSelfTest.get_esdk_environment('', cls.tmpdir_eSDKQA)
-
-        # Configure eSDK to use sstate mirror from poky
-        sstate_config="""
-SDK_LOCAL_CONF_WHITELIST = "SSTATE_MIRRORS"
-SSTATE_MIRRORS =  "file://.* file://%s/PATH"
-            """ % sstate_dir
-        with open(os.path.join(cls.tmpdir_eSDKQA, 'conf', 'local.conf'), 'a+') as f:
-            f.write(sstate_config)
-
-    @classmethod
-    def tearDownClass(cls):
-        shutil.rmtree(cls.tmpdir_eSDKQA)
-
-    @testcase(1602)
-    def test_install_libraries_headers(self):
-        pn_sstate = 'bc'
-        bitbake(pn_sstate)
-        cmd = "devtool sdk-install %s " % pn_sstate
-        oeSDKExtSelfTest.run_esdk_cmd(self.env_eSDK, self.tmpdir_eSDKQA, cmd)
-
-    @testcase(1603)
-    def test_image_generation_binary_feeds(self):
-        image = 'core-image-minimal'
-        cmd = "devtool build-image %s" % image
-        oeSDKExtSelfTest.run_esdk_cmd(self.env_eSDK, self.tmpdir_eSDKQA, cmd)
-
-if __name__ == '__main__':
-    unittest.main()
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/image_typedep.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/image_typedep.py
deleted file mode 100644
index 256142d..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/image_typedep.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import os
-
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import bitbake
-
-class ImageTypeDepTests(oeSelfTest):
-
-    # Verify that when specifying an IMAGE_TYPEDEP_ of the form "foo.bar",
-    # the conversion type bar gets added as a dependency as well
-    def test_conversion_typedep_added(self):
-
-        self.write_recipeinc('emptytest', """
-# Try to empty out the default dependency list
-PACKAGE_INSTALL = ""
-DISTRO_EXTRA_RDEPENDS=""
-
-LICENSE = "MIT"
-IMAGE_FSTYPES = "testfstype"
-
-IMAGE_TYPES_MASKED += "testfstype"
-IMAGE_TYPEDEP_testfstype = "tar.bz2"
-
-inherit image
-
-""")
-        # First get the dependency that should exist for bz2, it will look
-        # like CONVERSION_DEPENDS_bz2="somedep"
-        result = bitbake('-e emptytest')
-
-        for line in result.output.split('\n'):
-            if line.startswith('CONVERSION_DEPENDS_bz2'):
-                dep = line.split('=')[1].strip('"')
-                break
-
-        # Now get the dependency task list and check for the expected task
-        # dependency
-        bitbake('-g emptytest')
-
-        taskdependsfile = os.path.join(self.builddir, 'task-depends.dot')
-        dep = dep + ".do_populate_sysroot"
-        depfound = False
-        expectedline = '"emptytest.do_rootfs" -> "{}"'.format(dep)
-
-        with open(taskdependsfile, "r") as f:
-            for line in f:
-                if line.strip() == expectedline:
-                    depfound = True
-                    break
-
-        if not depfound:
-            raise AssertionError("\"{}\" not found".format(expectedline))
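
The check above matches one literal edge in task-depends.dot. As a hedged aside, a small reusable helper for the same '"A.do_task" -> "B.do_task"' edge format could look like the sketch below; the function name and signature are illustrative and not part of oeqa:

    import re

    def has_task_dependency(dotfile, src_task, dst_task):
        """Return True if 'bitbake -g' output contains the edge
        "src_task" -> "dst_task", e.g.
        "emptytest.do_rootfs" -> "somedep.do_populate_sysroot"."""
        edge = re.compile(r'^"%s"\s*->\s*"%s"$'
                          % (re.escape(src_task), re.escape(dst_task)))
        with open(dotfile) as f:
            return any(edge.match(line.strip()) for line in f)
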
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/imagefeatures.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/imagefeatures.py
deleted file mode 100644
index 76896c7..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/imagefeatures.py
+++ /dev/null
@@ -1,127 +0,0 @@
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, runqemu
-from oeqa.utils.decorators import testcase
-from oeqa.utils.sshcontrol import SSHControl
-import os
-import sys
-import logging
-
-class ImageFeatures(oeSelfTest):
-
-    test_user = 'tester'
-    root_user = 'root'
-
-    @testcase(1107)
-    def test_non_root_user_can_connect_via_ssh_without_password(self):
-        """
-        Summary: Check if a non-root user can connect via ssh without a password
-        Expected: 1. Connection to the image via ssh using root user without providing a password should be allowed.
-                  2. Connection to the image via ssh using tester user without providing a password should be allowed.
-        Product: oe-core
-        Author: Ionut Chisanovici <ionutx.chisanovici@intel.com>
-        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        """
-
-        features = 'EXTRA_IMAGE_FEATURES = "ssh-server-openssh empty-root-password allow-empty-password"\n'
-        features += 'INHERIT += "extrausers"\n'
-        features += 'EXTRA_USERS_PARAMS = "useradd -p \'\' {}; usermod -s /bin/sh {};"'.format(self.test_user, self.test_user)
-        self.write_config(features)
-
-        # Build a core-image-minimal
-        bitbake('core-image-minimal')
-
-        with runqemu("core-image-minimal") as qemu:
-            # Attempt to ssh with each user into qemu with empty password
-            for user in [self.root_user, self.test_user]:
-                ssh = SSHControl(ip=qemu.ip, logfile=qemu.sshlog, user=user)
-                status, output = ssh.run("true")
-                self.assertEqual(status, 0, 'ssh to user %s failed with %s' % (user, output))
-
-    @testcase(1115)
-    def test_all_users_can_connect_via_ssh_without_password(self):
-        """
-        Summary:     Check if all users can connect via ssh without a password
-        Expected: 1. Connection to the image via ssh using root user without providing a password should NOT be allowed.
-                  2. Connection to the image via ssh using tester user without providing a password should be allowed.
-        Product:     oe-core
-        Author:      Ionut Chisanovici <ionutx.chisanovici@intel.com>
-        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        """
-
-        features = 'EXTRA_IMAGE_FEATURES = "ssh-server-openssh allow-empty-password"\n'
-        features += 'INHERIT += "extrausers"\n'
-        features += 'EXTRA_USERS_PARAMS = "useradd -p \'\' {}; usermod -s /bin/sh {};"'.format(self.test_user, self.test_user)
-        self.write_config(features)
-
-        # Build a core-image-minimal
-        bitbake('core-image-minimal')
-
-        with runqemu("core-image-minimal") as qemu:
-            # Attempt to ssh with each user into qemu with empty password
-            for user in [self.root_user, self.test_user]:
-                ssh = SSHControl(ip=qemu.ip, logfile=qemu.sshlog, user=user)
-                status, output = ssh.run("true")
-                if user == 'root':
-                    self.assertNotEqual(status, 0, 'ssh to user root was allowed when it should not have been')
-                else:
-                    self.assertEqual(status, 0, 'ssh to user tester failed with %s' % output)
-
-
-    @testcase(1116)
-    def test_clutter_image_can_be_built(self):
-        """
-        Summary:     Check if clutter image can be built
-        Expected:    1. core-image-clutter can be built
-        Product:     oe-core
-        Author:      Ionut Chisanovici <ionutx.chisanovici@intel.com>
-        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        """
-
-        # Build a core-image-clutter
-        bitbake('core-image-clutter')
-
-    @testcase(1117)
-    def test_wayland_support_in_image(self):
-        """
-        Summary:     Check Wayland support in image
-        Expected:    1. Wayland image can be built
-                     2. Wayland feature can be installed
-        Product:     oe-core
-        Author:      Ionut Chisanovici <ionutx.chisanovici@intel.com>
-        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        """
-
-        distro_features = get_bb_var('DISTRO_FEATURES')
-        if not ('opengl' in distro_features and 'wayland' in distro_features):
-            self.skipTest('opengl and/or wayland not present in DISTRO_FEATURES so core-image-weston cannot be built')
-
-        # Build a core-image-weston
-        bitbake('core-image-weston')
-
-    def test_bmap(self):
-        """
-        Summary:     Check bmap support
-        Expected:    1. core-image-minimal can be built with bmap support
-                     2. core-image-minimal is sparse
-        Product:     oe-core
-        Author:      Ed Bartosh <ed.bartosh@linux.intel.com>
-        """
-
-        features = 'IMAGE_FSTYPES += " ext4 ext4.bmap"'
-        self.write_config(features)
-
-        image_name = 'core-image-minimal'
-        bitbake(image_name)
-
-        deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
-        link_name = get_bb_var('IMAGE_LINK_NAME', image_name)
-        image_path = os.path.join(deploy_dir_image, "%s.ext4" % link_name)
-        bmap_path = "%s.bmap" % image_path
-
-        # check if result image and bmap file are in deploy directory
-        self.assertTrue(os.path.exists(image_path))
-        self.assertTrue(os.path.exists(bmap_path))
-
-        # check if result image is sparse
-        image_stat = os.stat(image_path)
-        self.assertTrue(image_stat.st_size > image_stat.st_blocks * 512)
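
The final assertion in test_bmap works because st_blocks counts 512-byte blocks actually allocated on disk, so a sparse file reports an apparent size larger than its allocation. A standalone sketch of the same check (the example path is hypothetical):

    import os

    def is_sparse(path):
        """True when the apparent size exceeds the allocated size;
        st_blocks is always in 512-byte units regardless of filesystem."""
        st = os.stat(path)
        return st.st_size > st.st_blocks * 512

    # e.g. is_sparse('<DEPLOY_DIR_IMAGE>/core-image-minimal-qemux86.ext4')
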
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/layerappend.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/layerappend.py
deleted file mode 100644
index 37bb32c..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/layerappend.py
+++ /dev/null
@@ -1,100 +0,0 @@
-import unittest
-import os
-import logging
-import re
-
-from oeqa.selftest.base import oeSelfTest
-from oeqa.selftest.buildhistory import BuildhistoryBase
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var
-import oeqa.utils.ftools as ftools
-from oeqa.utils.decorators import testcase
-
-class LayerAppendTests(oeSelfTest):
-    layerconf = """
-# We have a conf and classes directory, append to BBPATH
-BBPATH .= ":${LAYERDIR}"
-
-# We have a recipes directory, add to BBFILES
-BBFILES += "${LAYERDIR}/recipes*/*.bb ${LAYERDIR}/recipes*/*.bbappend"
-
-BBFILE_COLLECTIONS += "meta-layerINT"
-BBFILE_PATTERN_meta-layerINT := "^${LAYERDIR}/"
-BBFILE_PRIORITY_meta-layerINT = "6"
-"""
-    recipe = """
-LICENSE="CLOSED"
-INHIBIT_DEFAULT_DEPS = "1"
-
-python do_build() {
-    bb.plain('Building ...')
-}
-addtask build
-"""
-    append = """
-FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
-
-SRC_URI_append = " file://appendtest.txt"
-
-sysroot_stage_all_append() {
-	install -m 644 ${WORKDIR}/appendtest.txt ${SYSROOT_DESTDIR}/
-}
-
-"""
-
-    append2 = """
-FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
-
-SRC_URI_append += "file://appendtest.txt"
-"""
-    layerappend = ''
-
-    def tearDownLocal(self):
-        if self.layerappend:
-            ftools.remove_from_file(self.builddir + "/conf/bblayers.conf", self.layerappend)
-
-    @testcase(1196)
-    def test_layer_appends(self):
-        corebase = get_bb_var("COREBASE")
-
-        for l in ["0", "1", "2"]:
-            layer = os.path.join(corebase, "meta-layertest" + l)
-            self.assertFalse(os.path.exists(layer))
-            os.mkdir(layer)
-            os.mkdir(layer + "/conf")
-            with open(layer + "/conf/layer.conf", "w") as f:
-                f.write(self.layerconf.replace("INT", l))
-            os.mkdir(layer + "/recipes-test")
-            if l == "0":
-                with open(layer + "/recipes-test/layerappendtest.bb", "w") as f:
-                    f.write(self.recipe)
-            elif l == "1":
-                with open(layer + "/recipes-test/layerappendtest.bbappend", "w") as f:
-                    f.write(self.append)
-                os.mkdir(layer + "/recipes-test/layerappendtest")
-                with open(layer + "/recipes-test/layerappendtest/appendtest.txt", "w") as f:
-                    f.write("Layer 1 test")
-            elif l == "2":
-                with open(layer + "/recipes-test/layerappendtest.bbappend", "w") as f:
-                    f.write(self.append2)
-                os.mkdir(layer + "/recipes-test/layerappendtest")
-                with open(layer + "/recipes-test/layerappendtest/appendtest.txt", "w") as f:
-                    f.write("Layer 2 test")
-            self.track_for_cleanup(layer)
-
-        self.layerappend = "BBLAYERS += \"{0}/meta-layertest0 {0}/meta-layertest1 {0}/meta-layertest2\"".format(corebase)
-        ftools.append_file(self.builddir + "/conf/bblayers.conf", self.layerappend)
-        stagingdir = get_bb_var("SYSROOT_DESTDIR", "layerappendtest")
-        bitbake("layerappendtest")
-        data = ftools.read_file(stagingdir + "/appendtest.txt")
-        self.assertEqual(data, "Layer 2 test")
-        os.remove(corebase + "/meta-layertest2/recipes-test/layerappendtest/appendtest.txt")
-        bitbake("layerappendtest")
-        data = ftools.read_file(stagingdir + "/appendtest.txt")
-        self.assertEqual(data, "Layer 1 test")
-        with open(corebase + "/meta-layertest2/recipes-test/layerappendtest/appendtest.txt", "w") as f:
-            f.write("Layer 2 test")
-        bitbake("layerappendtest")
-        data = ftools.read_file(stagingdir + "/appendtest.txt")
-        self.assertEqual(data, "Layer 2 test")
-
-
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/liboe.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/liboe.py
deleted file mode 100644
index 0b0301d..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/liboe.py
+++ /dev/null
@@ -1,99 +0,0 @@
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import get_bb_var, get_bb_vars, bitbake, runCmd
-import oe.path
-import glob
-import os
-import os.path
-
-class LibOE(oeSelfTest):
-
-    @classmethod
-    def setUpClass(cls):
-        cls.tmp_dir = get_bb_var('TMPDIR')
-
-    def test_copy_tree_special(self):
-        """
-        Summary:    oe.path.copytree() should copy files with special character
-        Expected:   'test file with sp£c!al @nd spaces' should exist in
-                    copy destination
-        Product:    OE-Core
-        Author:     Joshua Lock <joshua.g.lock@intel.com>
-        """
-        testloc = oe.path.join(self.tmp_dir, 'liboetests')
-        src = oe.path.join(testloc, 'src')
-        dst = oe.path.join(testloc, 'dst')
-        bb.utils.mkdirhier(testloc)
-        bb.utils.mkdirhier(src)
-        testfilename = 'test file with sp£c!al @nd spaces'
-
-        # create the test file and copy it
-        open(oe.path.join(src, testfilename), 'w+b').close()
-        oe.path.copytree(src, dst)
-
-        # ensure path exists in dest
-        fileindst = os.path.isfile(oe.path.join(dst, testfilename))
-        self.assertTrue(fileindst, "File with spaces doesn't exist in dst")
-
-        oe.path.remove(testloc)
-
-    def test_copy_tree_xattr(self):
-        """
-        Summary:    oe.path.copytree() should preserve xattr on copied files
-        Expected:   testxattr file in destination should have user.oetest
-                    extended attribute
-        Product:    OE-Core
-        Author:     Joshua Lock <joshua.g.lock@intel.com>
-        """
-        testloc = oe.path.join(self.tmp_dir, 'liboetests')
-        src = oe.path.join(testloc, 'src')
-        dst = oe.path.join(testloc, 'dst')
-        bb.utils.mkdirhier(testloc)
-        bb.utils.mkdirhier(src)
-        testfilename = 'testxattr'
-
-        # ensure we have setfattr available
-        bitbake("attr-native")
-
-        bb_vars = get_bb_vars(['SYSROOT_DESTDIR', 'bindir'], 'attr-native')
-        destdir = bb_vars['SYSROOT_DESTDIR']
-        bindir = bb_vars['bindir']
-        bindir = destdir + bindir
-
-        # create a file with xattr and copy it
-        open(oe.path.join(src, testfilename), 'w+b').close()
-        runCmd('%s/setfattr -n user.oetest -v "testing liboe" %s' % (bindir, oe.path.join(src, testfilename)))
-        oe.path.copytree(src, dst)
-
-        # ensure file in dest has user.oetest xattr
-        result = runCmd('%s/getfattr -n user.oetest %s' % (bindir, oe.path.join(dst, testfilename)))
-        self.assertIn('user.oetest="testing liboe"', result.output, 'Extended attribute not set in dst')
-
-        oe.path.remove(testloc)
-
-    def test_copy_hardlink_tree_count(self):
-        """
-        Summary:    oe.path.copyhardlinktree() shouldn't miss any files
-        Expected:   src and dst should have the same number of files
-        Product:    OE-Core
-        Author:     Joshua Lock <joshua.g.lock@intel.com>
-        """
-        testloc = oe.path.join(self.tmp_dir, 'liboetests')
-        src = oe.path.join(testloc, 'src')
-        dst = oe.path.join(testloc, 'dst')
-        bb.utils.mkdirhier(testloc)
-        bb.utils.mkdirhier(src)
-        testfiles = ['foo', 'bar', '.baz', 'quux']
-
-        def touchfile(tf):
-            open(oe.path.join(src, tf), 'w+b').close()
-
-        for f in testfiles:
-            touchfile(f)
-
-        oe.path.copyhardlinktree(src, dst)
-
-        dstcnt = len(os.listdir(dst))
-        srccnt = len(os.listdir(src))
-        self.assertEqual(dstcnt, len(testfiles), "Number of files in dst (%s) differs from number of files in src (%s)." % (dstcnt, srccnt))
-
-        oe.path.remove(testloc)
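
test_copy_tree_xattr builds attr-native to get setfattr/getfattr. Outside a BitBake build the same round trip can be sketched with Python's os.setxattr/os.getxattr (Linux-only, standard library); this is an assumption for illustration, not what the selftest itself does, and the paths are hypothetical:

    import os

    src_file = '/tmp/liboetests/src/testxattr'   # hypothetical path
    os.setxattr(src_file, 'user.oetest', b'testing liboe')

    # after oe.path.copytree(src, dst) the attribute should survive
    dst_file = '/tmp/liboetests/dst/testxattr'   # hypothetical path
    assert os.getxattr(dst_file, 'user.oetest') == b'testing liboe'
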
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/lic-checksum.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/lic-checksum.py
deleted file mode 100644
index 2e81373..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/lic-checksum.py
+++ /dev/null
@@ -1,35 +0,0 @@
-import os
-import tempfile
-
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import bitbake
-from oeqa.utils import CommandError
-from oeqa.utils.decorators import testcase
-
-class LicenseTests(oeSelfTest):
-
-    # Verify that changing a license file that has an absolute path causes
-    # the license qa to fail due to a mismatched md5sum.
-    @testcase(1197)
-    def test_nonmatching_checksum(self):
-        bitbake_cmd = '-c populate_lic emptytest'
-        error_msg = 'emptytest: The new md5 checksum is 8d777f385d3dfec8815d20f7496026dc'
-
-        lic_file, lic_path = tempfile.mkstemp()
-        os.close(lic_file)
-        self.track_for_cleanup(lic_path)
-
-        self.write_recipeinc('emptytest', """
-INHIBIT_DEFAULT_DEPS = "1"
-LIC_FILES_CHKSUM = "file://%s;md5=d41d8cd98f00b204e9800998ecf8427e"
-SRC_URI = "file://%s;md5=d41d8cd98f00b204e9800998ecf8427e"
-""" % (lic_path, lic_path))
-        result = bitbake(bitbake_cmd)
-
-        with open(lic_path, "w") as f:
-            f.write("data")
-
-        self.write_config("INHERIT_remove = \"report-error\"")
-        result = bitbake(bitbake_cmd, ignore_status=True)
-        if error_msg not in result.output:
-            raise AssertionError(result.output)
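
Both checksums hard-coded in this test follow directly from the license file contents: LIC_FILES_CHKSUM pins the md5 of the freshly created (empty) file, and writing "data" into it yields the new checksum quoted in error_msg. A quick way to reproduce the two values:

    import hashlib

    hashlib.md5(b'').hexdigest()      # 'd41d8cd98f00b204e9800998ecf8427e'
    hashlib.md5(b'data').hexdigest()  # '8d777f385d3dfec8815d20f7496026dc'
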
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/manifest.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/manifest.py
deleted file mode 100644
index fe6f949..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/manifest.py
+++ /dev/null
@@ -1,166 +0,0 @@
-import unittest
-import os
-
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import get_bb_var, get_bb_vars, bitbake
-from oeqa.utils.decorators import testcase
-
-class ManifestEntry:
-    '''A manifest file plus the list of its entries that could not be found'''
-    def __init__(self, entry):
-        self.file = entry
-        self.missing = []
-
-class VerifyManifest(oeSelfTest):
-    '''Tests for the manifest files and contents of an image'''
-
-    @classmethod
-    def check_manifest_entries(self, manifest, path):
-        manifest_errors = []
-        try:
-            with open(manifest, "r") as mfile:
-                for line in mfile:
-                    manifest_entry = os.path.join(path, line.split()[0])
-                    self.log.debug("{}: looking for {}"\
-                                    .format(self.classname, manifest_entry))
-                    if not os.path.isfile(manifest_entry):
-                        manifest_errors.append(manifest_entry)
-                        self.log.debug("{}: {} not found"\
-                                    .format(self.classname, manifest_entry))
-        except OSError as e:
-            self.log.debug("{}: checking of {} failed"\
-                    .format(self.classname, manifest))
-            raise e
-
-        return manifest_errors
-
-    #this will possibly move from here
-    @classmethod
-    def get_dir_from_bb_var(self, bb_var, target = None):
-        target = self.buildtarget if target is None else target
-        directory = get_bb_var(bb_var, target);
-        if not directory or not os.path.isdir(directory):
-            self.log.debug("{}: {} points to {} when target = {}"\
-                    .format(self.classname, bb_var, directory, target))
-            raise OSError
-        return directory
-
-    @classmethod
-    def setUpClass(self):
-
-        self.buildtarget = 'core-image-minimal'
-        self.classname = 'VerifyManifest'
-
-        self.log.info("{}: doing bitbake {} as a prerequisite of the test"\
-                .format(self.classname, self.buildtarget))
-        if bitbake(self.buildtarget).status:
-            self.log.debug("{} Failed to setup {}"\
-                    .format(self.classname, self.buildtarget))
-            raise unittest.SkipTest("{}: Cannot setup testing scenario"\
-                    .format(self.classname))
-
-    @testcase(1380)
-    def test_SDK_manifest_entries(self):
-        '''Verify that the SDK manifest entries exist; this may take a build'''
-
-        # setUpClass already bitbakes core-image-minimal; an additional
-        # populate_sdk step is required here for the SDK
-        sdktask = '-c populate_sdk'
-        bbargs = sdktask + ' ' + self.buildtarget
-        self.log.debug("{}: doing bitbake {} as a prerequisite of the test"\
-                .format(self.classname, bbargs))
-        if bitbake(bbargs).status:
-            self.log.debug("{} Failed to bitbake {}"\
-                    .format(self.classname, bbargs))
-            raise unittest.SkipTest("{}: Cannot setup testing scenario"\
-                    .format(self.classname))
-
-
-        pkgdata_dir, reverse_dir = {}, {}
-        mfilename, mpath, m_entry = {}, {}, {}
-        # get manifest location based on target to query about
-        d_target = dict(target = self.buildtarget,
-                         host = 'nativesdk-packagegroup-sdk-host')
-        try:
-            mdir = self.get_dir_from_bb_var('SDK_DEPLOY', self.buildtarget)
-            for k in d_target.keys():
-                bb_vars = get_bb_vars(['SDK_NAME', 'SDK_VERSION'], self.buildtarget)
-                mfilename[k] = "{}-toolchain-{}.{}.manifest".format(
-                        bb_vars['SDK_NAME'],
-                        bb_vars['SDK_VERSION'],
-                        k)
-                mpath[k] = os.path.join(mdir, mfilename[k])
-                if not os.path.isfile(mpath[k]):
-                    self.log.debug("{}: {} does not exist".format(
-                        self.classname, mpath[k]))
-                    raise IOError
-                m_entry[k] = ManifestEntry(mpath[k])
-
-                pkgdata_dir[k] = self.get_dir_from_bb_var('PKGDATA_DIR',
-                        d_target[k])
-                reverse_dir[k] = os.path.join(pkgdata_dir[k],
-                        'runtime-reverse')
-                if not os.path.exists(reverse_dir[k]):
-                    self.log.debug("{}: {} does not exist".format(
-                        self.classname, reverse_dir[k]))
-                    raise IOError
-        except OSError:
-            raise unittest.SkipTest("{}: Error in obtaining manifest dirs"\
-                .format(self.classname))
-        except IOError:
-            msg = "{}: Error cannot find manifests in the specified dir:\n{}"\
-                    .format(self.classname, mdir)
-            self.fail(msg)
-
-        for k in d_target.keys():
-            self.log.debug("{}: Check manifest {}".format(
-                self.classname, m_entry[k].file))
-
-            m_entry[k].missing = self.check_manifest_entries(\
-                                               m_entry[k].file,reverse_dir[k])
-            if m_entry[k].missing:
-                msg = '{}: {} is missing the following entries'\
-                        .format(self.classname, m_entry[k].file)
-                logmsg = msg+':\n'+'\n'.join(m_entry[k].missing)
-                self.log.debug(logmsg)
-                self.log.info(msg)
-                self.fail(logmsg)
-
-    @testcase(1381)
-    def test_image_manifest_entries(self):
-        '''Verify that the image manifest entries exist'''
-
-        # get manifest location based on target to query about
-        try:
-            mdir = self.get_dir_from_bb_var('DEPLOY_DIR_IMAGE',
-                                                self.buildtarget)
-            mfilename = get_bb_var("IMAGE_LINK_NAME", self.buildtarget)\
-                    + ".manifest"
-            mpath = os.path.join(mdir, mfilename)
-            if not os.path.isfile(mpath): raise IOError
-            m_entry = ManifestEntry(mpath)
-
-            pkgdata_dir = {}
-            pkgdata_dir = self.get_dir_from_bb_var('PKGDATA_DIR',
-                                                self.buildtarget)
-            revdir = os.path.join(pkgdata_dir, 'runtime-reverse')
-            if not os.path.exists(revdir): raise IOError
-        except OSError:
-            raise unittest.SkipTest("{}: Error in obtaining manifest dirs"\
-                .format(self.classname))
-        except IOError:
-            msg = "{}: Error cannot find manifests in dir:\n{}"\
-                    .format(self.classname, mdir)
-            self.fail(msg)
-
-        self.log.debug("{}: Check manifest {}"\
-                            .format(self.classname, m_entry.file))
-        m_entry.missing = self.check_manifest_entries(\
-                                                    m_entry.file, revdir)
-        if m_entry.missing:
-            msg = '{}: {} is missing the following entries'\
-                    .format(self.classname, m_entry.file)
-            logmsg = msg+':\n'+'\n'.join(m_entry.missing)
-            self.log.debug(logmsg)
-            self.log.info(msg)
-            self.fail(logmsg)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/buildhistory.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/buildhistory.py
deleted file mode 100644
index 5ed4b02..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/buildhistory.py
+++ /dev/null
@@ -1,88 +0,0 @@
-import os
-import unittest
-import tempfile
-from git import Repo
-from oeqa.utils.commands import get_bb_var
-from oe.buildhistory_analysis import blob_to_dict, compare_dict_blobs
-
-class TestBlobParsing(unittest.TestCase):
-
-    def setUp(self):
-        import time
-        self.repo_path = tempfile.mkdtemp(prefix='selftest-buildhistory',
-            dir=get_bb_var('TOPDIR'))
-
-        self.repo = Repo.init(self.repo_path)
-        self.test_file = "test"
-        self.var_map = {}
-
-    def tearDown(self):
-        import shutil
-        shutil.rmtree(self.repo_path)
-
-    def commit_vars(self, to_add={}, to_remove = [], msg="A commit message"):
-        if len(to_add) == 0 and len(to_remove) == 0:
-            return
-
-        for k in to_remove:
-            self.var_map.pop(k, None)
-        for k in to_add:
-            self.var_map[k] = to_add[k]
-
-        with open(os.path.join(self.repo_path, self.test_file), 'w') as repo_file:
-            for k in self.var_map:
-                repo_file.write("%s = %s\n" % (k, self.var_map[k]))
-
-        self.repo.git.add("--all")
-        self.repo.git.commit(message=msg)
-
-    def test_blob_to_dict(self):
-        """
-        Test conversion of git blobs to a dictionary
-        """
-        valuesmap = { "foo" : "1", "bar" : "2" }
-        self.commit_vars(to_add = valuesmap)
-
-        blob = self.repo.head.commit.tree.blobs[0]
-        self.assertEqual(valuesmap, blob_to_dict(blob),
-            "commit was not translated correctly to dictionary")
-
-    def test_compare_dict_blobs(self):
-        """
-        Test comparison of dictionaries extracted from git blobs
-        """
-        changesmap = { "foo-2" : ("2", "8"), "bar" : ("","4"), "bar-2" : ("","5")}
-
-        self.commit_vars(to_add = { "foo" : "1", "foo-2" : "2", "foo-3" : "3" })
-        blob1 = self.repo.heads.master.commit.tree.blobs[0]
-
-        self.commit_vars(to_add = { "foo-2" : "8", "bar" : "4", "bar-2" : "5" })
-        blob2 = self.repo.heads.master.commit.tree.blobs[0]
-
-        change_records = compare_dict_blobs(os.path.join(self.repo_path, self.test_file),
-            blob1, blob2, False, False)
-
-        var_changes = { x.fieldname : (x.oldvalue, x.newvalue) for x in change_records}
-        self.assertEqual(changesmap, var_changes, "Changes not reported correctly")
-
-    def test_compare_dict_blobs_default(self):
-        """
-        Test default values for comparison of git blob dictionaries
-        """
-        defaultmap = { x : ("default", "1")  for x in ["PKG", "PKGE", "PKGV", "PKGR"]}
-
-        self.commit_vars(to_add = { "foo" : "1" })
-        blob1 = self.repo.heads.master.commit.tree.blobs[0]
-
-        self.commit_vars(to_add = { "PKG" : "1", "PKGE" : "1", "PKGV" : "1", "PKGR" : "1" })
-        blob2 = self.repo.heads.master.commit.tree.blobs[0]
-
-        change_records = compare_dict_blobs(os.path.join(self.repo_path, self.test_file),
-            blob1, blob2, False, False)
-
-        var_changes = {}
-        for x in change_records:
-            oldvalue = "default" if ("default" in x.oldvalue) else x.oldvalue
-            var_changes[x.fieldname] = (oldvalue, x.newvalue)
-
-        self.assertEqual(defaultmap, var_changes, "Defaults not set properly")
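
TestBlobParsing drives buildhistory_analysis through GitPython. A minimal sketch of reading back the blob that blob_to_dict parses, assuming a repository in which commit_vars() has already made at least one commit (the path is illustrative):

    from git import Repo

    repo = Repo('/tmp/selftest-buildhistory')      # hypothetical path
    blob = repo.head.commit.tree.blobs[0]
    text = blob.data_stream.read().decode('utf-8')
    # commit_vars() writes one "VAR = value" line per variable
    variables = dict(line.split(' = ', 1) for line in text.splitlines())
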
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/elf.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/elf.py
deleted file mode 100644
index 1f59037..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/elf.py
+++ /dev/null
@@ -1,21 +0,0 @@
-import unittest
-import oe.qa
-
-class TestElf(unittest.TestCase):
-    def test_machine_name(self):
-        """
-        Test elf_machine_to_string()
-        """
-        self.assertEqual(oe.qa.elf_machine_to_string(0x02), "SPARC")
-        self.assertEqual(oe.qa.elf_machine_to_string(0x03), "x86")
-        self.assertEqual(oe.qa.elf_machine_to_string(0x08), "MIPS")
-        self.assertEqual(oe.qa.elf_machine_to_string(0x14), "PowerPC")
-        self.assertEqual(oe.qa.elf_machine_to_string(0x28), "ARM")
-        self.assertEqual(oe.qa.elf_machine_to_string(0x2A), "SuperH")
-        self.assertEqual(oe.qa.elf_machine_to_string(0x32), "IA-64")
-        self.assertEqual(oe.qa.elf_machine_to_string(0x3E), "x86-64")
-        self.assertEqual(oe.qa.elf_machine_to_string(0xB7), "AArch64")
-
-        self.assertEqual(oe.qa.elf_machine_to_string(0x00), "Unknown (0)")
-        self.assertEqual(oe.qa.elf_machine_to_string(0xDEADBEEF), "Unknown (3735928559)")
-        self.assertEqual(oe.qa.elf_machine_to_string("foobar"), "Unknown ('foobar')")
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/license.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/license.py
deleted file mode 100644
index c388886..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/license.py
+++ /dev/null
@@ -1,68 +0,0 @@
-import unittest
-import oe.license
-
-class SeenVisitor(oe.license.LicenseVisitor):
-    def __init__(self):
-        self.seen = []
-        oe.license.LicenseVisitor.__init__(self)
-
-    def visit_Str(self, node):
-        self.seen.append(node.s)
-
-class TestSingleLicense(unittest.TestCase):
-    licenses = [
-        "GPLv2",
-        "LGPL-2.0",
-        "Artistic",
-        "MIT",
-        "GPLv3+",
-        "FOO_BAR",
-    ]
-    invalid_licenses = ["GPL/BSD"]
-
-    @staticmethod
-    def parse(licensestr):
-        visitor = SeenVisitor()
-        visitor.visit_string(licensestr)
-        return visitor.seen
-
-    def test_single_licenses(self):
-        for license in self.licenses:
-            licenses = self.parse(license)
-            self.assertListEqual(licenses, [license])
-
-    def test_invalid_licenses(self):
-        for license in self.invalid_licenses:
-            with self.assertRaises(oe.license.InvalidLicense) as cm:
-                self.parse(license)
-            self.assertEqual(cm.exception.license, license)
-
-class TestSimpleCombinations(unittest.TestCase):
-    tests = {
-        "FOO&BAR": ["FOO", "BAR"],
-        "BAZ & MOO": ["BAZ", "MOO"],
-        "ALPHA|BETA": ["ALPHA"],
-        "BAZ&MOO|FOO": ["FOO"],
-        "FOO&BAR|BAZ": ["FOO", "BAR"],
-    }
-    preferred = ["ALPHA", "FOO", "BAR"]
-
-    def test_tests(self):
-        def choose(a, b):
-            if all(lic in self.preferred for lic in b):
-                return b
-            else:
-                return a
-
-        for license, expected in self.tests.items():
-            licenses = oe.license.flattened_licenses(license, choose)
-            self.assertListEqual(licenses, expected)
-
-class TestComplexCombinations(TestSimpleCombinations):
-    tests = {
-        "FOO & (BAR | BAZ)&MOO": ["FOO", "BAR", "MOO"],
-        "(ALPHA|(BETA&THETA)|OMEGA)&DELTA": ["OMEGA", "DELTA"],
-        "((ALPHA|BETA)&FOO)|BAZ": ["BETA", "FOO"],
-        "(GPL-2.0|Proprietary)&BSD-4-clause&MIT": ["GPL-2.0", "BSD-4-clause", "MIT"],
-    }
-    preferred = ["BAR", "OMEGA", "BETA", "GPL-2.0"]
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/path.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/path.py
deleted file mode 100644
index 44d0681..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/path.py
+++ /dev/null
@@ -1,89 +0,0 @@
-import unittest
-import oe, oe.path
-import tempfile
-import os
-import errno
-import shutil
-
-class TestRealPath(unittest.TestCase):
-    DIRS = [ "a", "b", "etc", "sbin", "usr", "usr/bin", "usr/binX", "usr/sbin", "usr/include", "usr/include/gdbm" ]
-    FILES = [ "etc/passwd", "b/file" ]
-    LINKS = [
-        ( "bin",             "/usr/bin",             "/usr/bin" ),
-        ( "binX",            "usr/binX",             "/usr/binX" ),
-        ( "c",               "broken",               "/broken" ),
-        ( "etc/passwd-1",    "passwd",               "/etc/passwd" ),
-        ( "etc/passwd-2",    "passwd-1",             "/etc/passwd" ),
-        ( "etc/passwd-3",    "/etc/passwd-1",        "/etc/passwd" ),
-        ( "etc/shadow-1",    "/etc/shadow",          "/etc/shadow" ),
-        ( "etc/shadow-2",    "/etc/shadow-1",        "/etc/shadow" ),
-        ( "prog-A",          "bin/prog-A",           "/usr/bin/prog-A" ),
-        ( "prog-B",          "/bin/prog-B",          "/usr/bin/prog-B" ),
-        ( "usr/bin/prog-C",  "../../sbin/prog-C",    "/sbin/prog-C" ),
-        ( "usr/bin/prog-D",  "/sbin/prog-D",         "/sbin/prog-D" ),
-        ( "usr/binX/prog-E", "../sbin/prog-E",       None ),
-        ( "usr/bin/prog-F",  "../../../sbin/prog-F", "/sbin/prog-F" ),
-        ( "loop",            "a/loop",               None ),
-        ( "a/loop",          "../loop",              None ),
-        ( "b/test",          "file/foo",             "/b/file/foo" ),
-    ]
-
-    LINKS_PHYS = [
-        ( "./",          "/",                "" ),
-        ( "binX/prog-E", "/usr/sbin/prog-E", "/sbin/prog-E" ),
-    ]
-
-    EXCEPTIONS = [
-        ( "loop",   errno.ELOOP ),
-        ( "b/test", errno.ENOENT ),
-    ]
-
-    def __del__(self):
-        try:
-            #os.system("tree -F %s" % self.tmpdir)
-            shutil.rmtree(self.tmpdir)
-        except:
-            pass
-
-    def setUp(self):
-        self.tmpdir = tempfile.mkdtemp(prefix = "oe-test_path")
-        self.root = os.path.join(self.tmpdir, "R")
-
-        os.mkdir(os.path.join(self.tmpdir, "_real"))
-        os.symlink("_real", self.root)
-
-        for d in self.DIRS:
-            os.mkdir(os.path.join(self.root, d))
-        for f in self.FILES:
-            open(os.path.join(self.root, f), "w")
-        for l in self.LINKS:
-            os.symlink(l[1], os.path.join(self.root, l[0]))
-
-    def __realpath(self, file, use_physdir, assume_dir = True):
-        return oe.path.realpath(os.path.join(self.root, file), self.root,
-                                use_physdir, assume_dir = assume_dir)
-
-    def test_norm(self):
-        for l in self.LINKS:
-            if l[2] == None:
-                continue
-
-            target_p = self.__realpath(l[0], True)
-            target_l = self.__realpath(l[0], False)
-
-            if l[2] != False:
-                self.assertEqual(target_p, target_l)
-                self.assertEqual(l[2], target_p[len(self.root):])
-
-    def test_phys(self):
-        for l in self.LINKS_PHYS:
-            target_p = self.__realpath(l[0], True)
-            target_l = self.__realpath(l[0], False)
-
-            self.assertEqual(l[1], target_p[len(self.root):])
-            self.assertEqual(l[2], target_l[len(self.root):])
-
-    def test_loop(self):
-        for e in self.EXCEPTIONS:
-            self.assertRaisesRegex(OSError, r'\[Errno %u\]' % e[1],
-                                    self.__realpath, e[0], False, False)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/types.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/types.py
deleted file mode 100644
index 4fe2746..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/types.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import unittest
-from oe.maketype import create
-
-class TestBooleanType(unittest.TestCase):
-    def test_invalid(self):
-        self.assertRaises(ValueError, create, '', 'boolean')
-        self.assertRaises(ValueError, create, 'foo', 'boolean')
-        self.assertRaises(TypeError, create, object(), 'boolean')
-
-    def test_true(self):
-        self.assertTrue(create('y', 'boolean'))
-        self.assertTrue(create('yes', 'boolean'))
-        self.assertTrue(create('1', 'boolean'))
-        self.assertTrue(create('t', 'boolean'))
-        self.assertTrue(create('true', 'boolean'))
-        self.assertTrue(create('TRUE', 'boolean'))
-        self.assertTrue(create('truE', 'boolean'))
-
-    def test_false(self):
-        self.assertFalse(create('n', 'boolean'))
-        self.assertFalse(create('no', 'boolean'))
-        self.assertFalse(create('0', 'boolean'))
-        self.assertFalse(create('f', 'boolean'))
-        self.assertFalse(create('false', 'boolean'))
-        self.assertFalse(create('FALSE', 'boolean'))
-        self.assertFalse(create('faLse', 'boolean'))
-
-    def test_bool_equality(self):
-        self.assertEqual(create('n', 'boolean'), False)
-        self.assertNotEqual(create('n', 'boolean'), True)
-        self.assertEqual(create('y', 'boolean'), True)
-        self.assertNotEqual(create('y', 'boolean'), False)
-
-class TestList(unittest.TestCase):
-    def assertListEqual(self, value, valid, sep=None):
-        obj = create(value, 'list', separator=sep)
-        self.assertEqual(obj, valid)
-        if sep is not None:
-            self.assertEqual(obj.separator, sep)
-        self.assertEqual(str(obj), obj.separator.join(obj))
-
-    def test_list_nosep(self):
-        testlist = ['alpha', 'beta', 'theta']
-        self.assertListEqual('alpha beta theta', testlist)
-        self.assertListEqual('alpha  beta\ttheta', testlist)
-        self.assertListEqual('alpha', ['alpha'])
-
-    def test_list_usersep(self):
-        self.assertListEqual('foo:bar', ['foo', 'bar'], ':')
-        self.assertListEqual('foo:bar:baz', ['foo', 'bar', 'baz'], ':')
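
create() builds a typed Python value from a string plus optional flags; the tests above cover the 'boolean' and 'list' types. A brief usage sketch, assuming an environment where oe.maketype is importable (i.e. inside a BitBake/OE build):

    from oe.maketype import create

    create('yes', 'boolean')                    # True
    create('0', 'boolean')                      # False
    create('alpha beta', 'list')                # ['alpha', 'beta']
    create('foo:bar', 'list', separator=':')    # ['foo', 'bar']
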
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/utils.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/utils.py
deleted file mode 100644
index 7deb10f..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oelib/utils.py
+++ /dev/null
@@ -1,51 +0,0 @@
-import unittest
-from oe.utils import packages_filter_out_system, trim_version
-
-class TestPackagesFilterOutSystem(unittest.TestCase):
-    def test_filter(self):
-        """
-        Test that oe.utils.packages_filter_out_system works.
-        """
-        try:
-            import bb
-        except ImportError:
-            self.skipTest("Cannot import bb")
-
-        d = bb.data_smart.DataSmart()
-        d.setVar("PN", "foo")
-
-        d.setVar("PACKAGES", "foo foo-doc foo-dev")
-        pkgs = packages_filter_out_system(d)
-        self.assertEqual(pkgs, [])
-
-        d.setVar("PACKAGES", "foo foo-doc foo-data foo-dev")
-        pkgs = packages_filter_out_system(d)
-        self.assertEqual(pkgs, ["foo-data"])
-
-        d.setVar("PACKAGES", "foo foo-locale-en-gb")
-        pkgs = packages_filter_out_system(d)
-        self.assertEqual(pkgs, [])
-
-        d.setVar("PACKAGES", "foo foo-data foo-locale-en-gb")
-        pkgs = packages_filter_out_system(d)
-        self.assertEqual(pkgs, ["foo-data"])
-
-
-class TestTrimVersion(unittest.TestCase):
-    def test_version_exception(self):
-        with self.assertRaises(TypeError):
-            trim_version(None, 2)
-        with self.assertRaises(TypeError):
-            trim_version((1, 2, 3), 2)
-
-    def test_num_exception(self):
-        with self.assertRaises(ValueError):
-            trim_version("1.2.3", 0)
-        with self.assertRaises(ValueError):
-            trim_version("1.2.3", -1)
-
-    def test_valid(self):
-        self.assertEqual(trim_version("1.2.3", 1), "1")
-        self.assertEqual(trim_version("1.2.3", 2), "1.2")
-        self.assertEqual(trim_version("1.2.3", 3), "1.2.3")
-        self.assertEqual(trim_version("1.2.3", 4), "1.2.3")
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oescripts.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/oescripts.py
deleted file mode 100644
index 29547f5..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/oescripts.py
+++ /dev/null
@@ -1,22 +0,0 @@
-import datetime
-import unittest
-import os
-import re
-import shutil
-
-import oeqa.utils.ftools as ftools
-from oeqa.selftest.base import oeSelfTest
-from oeqa.selftest.buildhistory import BuildhistoryBase
-from oeqa.utils.commands import Command, runCmd, bitbake, get_bb_var, get_test_layer
-from oeqa.utils.decorators import testcase
-
-class BuildhistoryDiffTests(BuildhistoryBase):
-
-    @testcase(295)
-    def test_buildhistory_diff(self):
-        target = 'xcursor-transparent-theme'
-        self.run_buildhistory_operation(target, target_config="PR = \"r1\"", change_bh_location=True)
-        self.run_buildhistory_operation(target, target_config="PR = \"r0\"", change_bh_location=False, expect_error=True)
-        result = runCmd("buildhistory-diff -p %s" % get_bb_var('BUILDHISTORY_DIR'))
-        expected_output = 'PR changed from "r1" to "r0"'
-        self.assertTrue(expected_output in result.output, msg="Did not find expected output: %s" % result.output)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/pkgdata.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/pkgdata.py
deleted file mode 100644
index d69c3c8..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/pkgdata.py
+++ /dev/null
@@ -1,227 +0,0 @@
-import unittest
-import os
-import tempfile
-import logging
-import fnmatch
-
-import oeqa.utils.ftools as ftools
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars
-from oeqa.utils.decorators import testcase
-
-class OePkgdataUtilTests(oeSelfTest):
-
-    @classmethod
-    def setUpClass(cls):
-        # Ensure we have the right data in pkgdata
-        logger = logging.getLogger("selftest")
-        logger.info('Running bitbake to generate pkgdata')
-        bitbake('busybox zlib m4')
-
-    @testcase(1203)
-    def test_lookup_pkg(self):
-        # Forward tests
-        result = runCmd('oe-pkgdata-util lookup-pkg "zlib busybox"')
-        self.assertEqual(result.output, 'libz1\nbusybox')
-        result = runCmd('oe-pkgdata-util lookup-pkg zlib-dev')
-        self.assertEqual(result.output, 'libz-dev')
-        result = runCmd('oe-pkgdata-util lookup-pkg nonexistentpkg', ignore_status=True)
-        self.assertEqual(result.status, 1, "Status different than 1. output: %s" % result.output)
-        self.assertEqual(result.output, 'ERROR: The following packages could not be found: nonexistentpkg')
-        # Reverse tests
-        result = runCmd('oe-pkgdata-util lookup-pkg -r "libz1 busybox"')
-        self.assertEqual(result.output, 'zlib\nbusybox')
-        result = runCmd('oe-pkgdata-util lookup-pkg -r libz-dev')
-        self.assertEqual(result.output, 'zlib-dev')
-        result = runCmd('oe-pkgdata-util lookup-pkg -r nonexistentpkg', ignore_status=True)
-        self.assertEqual(result.status, 1, "Status different than 1. output: %s" % result.output)
-        self.assertEqual(result.output, 'ERROR: The following packages could not be found: nonexistentpkg')
-
-    @testcase(1205)
-    def test_read_value(self):
-        result = runCmd('oe-pkgdata-util read-value PN libz1')
-        self.assertEqual(result.output, 'zlib')
-        result = runCmd('oe-pkgdata-util read-value PKG libz1')
-        self.assertEqual(result.output, 'libz1')
-        result = runCmd('oe-pkgdata-util read-value PKGSIZE m4')
-        pkgsize = int(result.output.strip())
-        self.assertGreater(pkgsize, 1, "Size should be greater than 1. %s" % result.output)
-
-    @testcase(1198)
-    def test_find_path(self):
-        result = runCmd('oe-pkgdata-util find-path /lib/libz.so.1')
-        self.assertEqual(result.output, 'zlib: /lib/libz.so.1')
-        result = runCmd('oe-pkgdata-util find-path /usr/bin/m4')
-        self.assertEqual(result.output, 'm4: /usr/bin/m4')
-        result = runCmd('oe-pkgdata-util find-path /not/exist', ignore_status=True)
-        self.assertEqual(result.status, 1, "Status different than 1. output: %s" % result.output)
-        self.assertEqual(result.output, 'ERROR: Unable to find any package producing path /not/exist')
-
-    @testcase(1204)
-    def test_lookup_recipe(self):
-        result = runCmd('oe-pkgdata-util lookup-recipe "libz-staticdev busybox"')
-        self.assertEqual(result.output, 'zlib\nbusybox')
-        result = runCmd('oe-pkgdata-util lookup-recipe libz-dbg')
-        self.assertEqual(result.output, 'zlib')
-        result = runCmd('oe-pkgdata-util lookup-recipe nonexistentpkg', ignore_status=True)
-        self.assertEqual(result.status, 1, "Status different than 1. output: %s" % result.output)
-        self.assertEqual(result.output, 'ERROR: The following packages could not be found: nonexistentpkg')
-
-    @testcase(1202)
-    def test_list_pkgs(self):
-        # No arguments
-        result = runCmd('oe-pkgdata-util list-pkgs')
-        pkglist = result.output.split()
-        self.assertIn('zlib', pkglist, "Listed packages: %s" % result.output)
-        self.assertIn('zlib-dev', pkglist, "Listed packages: %s" % result.output)
-        # No pkgspec, runtime
-        result = runCmd('oe-pkgdata-util list-pkgs -r')
-        pkglist = result.output.split()
-        self.assertIn('libz-dev', pkglist, "Listed packages: %s" % result.output)
-        # With recipe specified
-        result = runCmd('oe-pkgdata-util list-pkgs -p zlib')
-        pkglist = sorted(result.output.split())
-        try:
-            pkglist.remove('zlib-ptest') # in case ptest is disabled
-        except ValueError:
-            pass
-        self.assertEqual(pkglist, ['zlib', 'zlib-dbg', 'zlib-dev', 'zlib-doc', 'zlib-staticdev'], "Packages listed after remove: %s" % result.output)
-        # With recipe specified, runtime
-        result = runCmd('oe-pkgdata-util list-pkgs -p zlib -r')
-        pkglist = sorted(result.output.split())
-        try:
-            pkglist.remove('libz-ptest') # in case ptest is disabled
-        except ValueError:
-            pass
-        self.assertEqual(pkglist, ['libz-dbg', 'libz-dev', 'libz-doc', 'libz-staticdev', 'libz1'], "Packages listed after remove: %s" % result.output)
-        # With recipe specified and unpackaged
-        result = runCmd('oe-pkgdata-util list-pkgs -p zlib -u')
-        pkglist = sorted(result.output.split())
-        self.assertIn('zlib-locale', pkglist, "Listed packages: %s" % result.output)
-        # With recipe specified and unpackaged, runtime
-        result = runCmd('oe-pkgdata-util list-pkgs -p zlib -u -r')
-        pkglist = sorted(result.output.split())
-        self.assertIn('libz-locale', pkglist, "Listed packages: %s" % result.output)
-        # With recipe specified and pkgspec
-        result = runCmd('oe-pkgdata-util list-pkgs -p zlib "*-d*"')
-        pkglist = sorted(result.output.split())
-        self.assertEqual(pkglist, ['zlib-dbg', 'zlib-dev', 'zlib-doc'], "Packages listed: %s" % result.output)
-        # With recipe specified and pkgspec, runtime
-        result = runCmd('oe-pkgdata-util list-pkgs -p zlib -r "*-d*"')
-        pkglist = sorted(result.output.split())
-        self.assertEqual(pkglist, ['libz-dbg', 'libz-dev', 'libz-doc'], "Packages listed: %s" % result.output)
-
-    @testcase(1201)
-    def test_list_pkg_files(self):
-        def splitoutput(output):
-            files = {}
-            curpkg = None
-            for line in output.splitlines():
-                if line.startswith('\t'):
-                    self.assertTrue(curpkg, 'Unexpected non-package line:\n%s' % line)
-                    files[curpkg].append(line.strip())
-                else:
-                    self.assertTrue(line.rstrip().endswith(':'), 'Invalid package line in output:\n%s' % line)
-                    curpkg = line.split(':')[0]
-                    files[curpkg] = []
-            return files
-        bb_vars = get_bb_vars(['base_libdir', 'libdir', 'includedir', 'mandir'])
-        base_libdir = bb_vars['base_libdir']
-        libdir = bb_vars['libdir']
-        includedir = bb_vars['includedir']
-        mandir = bb_vars['mandir']
-        # Test recipe-space package name
-        result = runCmd('oe-pkgdata-util list-pkg-files zlib-dev zlib-doc')
-        files = splitoutput(result.output)
-        self.assertIn('zlib-dev', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('zlib-doc', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn(os.path.join(includedir, 'zlib.h'), files['zlib-dev'])
-        self.assertIn(os.path.join(mandir, 'man3/zlib.3'), files['zlib-doc'])
-        # Test runtime package name
-        result = runCmd('oe-pkgdata-util list-pkg-files -r libz1 libz-dev')
-        files = splitoutput(result.output)
-        self.assertIn('libz1', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('libz-dev', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertGreater(len(files['libz1']), 1)
-        libspec = os.path.join(base_libdir, 'libz.so.1.*')
-        found = False
-        for fileitem in files['libz1']:
-            if fnmatch.fnmatchcase(fileitem, libspec):
-                found = True
-                break
-        self.assertTrue(found, 'Could not find zlib library file %s in libz1 package file list: %s' % (libspec, files['libz1']))
-        self.assertIn(os.path.join(includedir, 'zlib.h'), files['libz-dev'])
-        # Test recipe
-        result = runCmd('oe-pkgdata-util list-pkg-files -p zlib')
-        files = splitoutput(result.output)
-        self.assertIn('zlib-dbg', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('zlib-doc', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('zlib-dev', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('zlib-staticdev', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('zlib', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertNotIn('zlib-locale', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        # (ignore ptest, might not be there depending on config)
-        self.assertIn(os.path.join(includedir, 'zlib.h'), files['zlib-dev'])
-        self.assertIn(os.path.join(mandir, 'man3/zlib.3'), files['zlib-doc'])
-        self.assertIn(os.path.join(libdir, 'libz.a'), files['zlib-staticdev'])
-        # Test recipe, runtime
-        result = runCmd('oe-pkgdata-util list-pkg-files -p zlib -r')
-        files = splitoutput(result.output)
-        self.assertIn('libz-dbg', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('libz-doc', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('libz-dev', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('libz-staticdev', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('libz1', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertNotIn('libz-locale', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn(os.path.join(includedir, 'zlib.h'), files['libz-dev'])
-        self.assertIn(os.path.join(mandir, 'man3/zlib.3'), files['libz-doc'])
-        self.assertIn(os.path.join(libdir, 'libz.a'), files['libz-staticdev'])
-        # Test recipe, unpackaged
-        result = runCmd('oe-pkgdata-util list-pkg-files -p zlib -u')
-        files = splitoutput(result.output)
-        self.assertIn('zlib-dbg', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('zlib-doc', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('zlib-dev', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('zlib-staticdev', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('zlib', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('zlib-locale', list(files.keys()), "listed pkgs. files: %s" %result.output) # this is the key one
-        self.assertIn(os.path.join(includedir, 'zlib.h'), files['zlib-dev'])
-        self.assertIn(os.path.join(mandir, 'man3/zlib.3'), files['zlib-doc'])
-        self.assertIn(os.path.join(libdir, 'libz.a'), files['zlib-staticdev'])
-        # Test recipe, runtime, unpackaged
-        result = runCmd('oe-pkgdata-util list-pkg-files -p zlib -r -u')
-        files = splitoutput(result.output)
-        self.assertIn('libz-dbg', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('libz-doc', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('libz-dev', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('libz-staticdev', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('libz1', list(files.keys()), "listed pkgs. files: %s" %result.output)
-        self.assertIn('libz-locale', list(files.keys()), "listed pkgs. files: %s" %result.output) # this is the key one
-        self.assertIn(os.path.join(includedir, 'zlib.h'), files['libz-dev'])
-        self.assertIn(os.path.join(mandir, 'man3/zlib.3'), files['libz-doc'])
-        self.assertIn(os.path.join(libdir, 'libz.a'), files['libz-staticdev'])
-
-    @testcase(1200)
-    def test_glob(self):
-        tempdir = tempfile.mkdtemp(prefix='pkgdataqa')
-        self.track_for_cleanup(tempdir)
-        pkglistfile = os.path.join(tempdir, 'pkglist')
-        with open(pkglistfile, 'w') as f:
-            f.write('libz1\n')
-            f.write('busybox\n')
-        result = runCmd('oe-pkgdata-util glob %s "*-dev"' % pkglistfile)
-        desiredresult = ['libz-dev', 'busybox-dev']
-        self.assertEqual(sorted(result.output.split()), sorted(desiredresult))
-        # The following should not error (because when we use this during rootfs construction, sometimes the complementary package won't exist)
-        result = runCmd('oe-pkgdata-util glob %s "*-nonexistent"' % pkglistfile)
-        self.assertEqual(result.output, '')
-        # Test exclude option
-        result = runCmd('oe-pkgdata-util glob %s "*-dev *-dbg" -x "^libz"' % pkglistfile)
-        resultlist = result.output.split()
-        self.assertNotIn('libz-dev', resultlist)
-        self.assertNotIn('libz-dbg', resultlist)
-
-    @testcase(1206)
-    def test_specify_pkgdatadir(self):
-        result = runCmd('oe-pkgdata-util -p %s lookup-pkg zlib' % get_bb_var('PKGDATA_DIR'))
-        self.assertEqual(result.output, 'libz1')
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/prservice.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/prservice.py
deleted file mode 100644
index 34d4197..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/prservice.py
+++ /dev/null
@@ -1,132 +0,0 @@
-import unittest
-import os
-import logging
-import re
-import shutil
-import datetime
-
-import oeqa.utils.ftools as ftools
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var
-from oeqa.utils.decorators import testcase
-from oeqa.utils.network import get_free_port
-
-class BitbakePrTests(oeSelfTest):
-
-    @classmethod
-    def setUpClass(cls):
-        cls.pkgdata_dir = get_bb_var('PKGDATA_DIR')
-
-    def get_pr_version(self, package_name):
-        package_data_file = os.path.join(self.pkgdata_dir, 'runtime', package_name)
-        package_data = ftools.read_file(package_data_file)
-        find_pr = re.search("PKGR: r[0-9]+\.([0-9]+)", package_data)
-        self.assertTrue(find_pr, "No PKG revision found in %s" % package_data_file)
-        return int(find_pr.group(1))
-
-    def get_task_stamp(self, package_name, recipe_task):
-        stampdata = get_bb_var('STAMP', target=package_name).split('/')
-        prefix = stampdata[-1]
-        package_stamps_path = "/".join(stampdata[:-1])
-        stamps = []
-        for stamp in os.listdir(package_stamps_path):
-            find_stamp = re.match("%s\.%s\.([a-z0-9]{32})" % (re.escape(prefix), recipe_task), stamp)
-            if find_stamp:
-                stamps.append(find_stamp.group(1))
-        self.assertFalse(len(stamps) == 0, msg="Cound not find stamp for task %s for recipe %s" % (recipe_task, package_name))
-        self.assertFalse(len(stamps) > 1, msg="Found multiple %s stamps for the %s recipe in the %s directory." % (recipe_task, package_name, package_stamps_path))
-        return str(stamps[0])
-
-    def increment_package_pr(self, package_name):
-        inc_data = "do_package_append() {\n    bb.build.exec_func('do_test_prserv', d)\n}\ndo_test_prserv() {\necho \"The current date is: %s\"\n}" % datetime.datetime.now()
-        self.write_recipeinc(package_name, inc_data)
-        res = bitbake(package_name, ignore_status=True)
-        self.delete_recipeinc(package_name)
-        self.assertEqual(res.status, 0, msg=res.output)
-        self.assertTrue("NOTE: Started PRServer with DBfile" in res.output, msg=res.output)
-
-    def config_pr_tests(self, package_name, package_type='rpm', pr_socket='localhost:0'):
-        config_package_data = 'PACKAGE_CLASSES = "package_%s"' % package_type
-        self.write_config(config_package_data)
-        config_server_data = 'PRSERV_HOST = "%s"' % pr_socket
-        self.append_config(config_server_data)
-
-    def run_test_pr_service(self, package_name, package_type='rpm', track_task='do_package', pr_socket='localhost:0'):
-        self.config_pr_tests(package_name, package_type, pr_socket)
-
-        self.increment_package_pr(package_name)
-        pr_1 = self.get_pr_version(package_name)
-        stamp_1 = self.get_task_stamp(package_name, track_task)
-
-        self.increment_package_pr(package_name)
-        pr_2 = self.get_pr_version(package_name)
-        stamp_2 = self.get_task_stamp(package_name, track_task)
-
-        self.assertEqual(pr_2 - pr_1, 1, "Package revision should have increased by exactly 1 (got %d then %d)" % (pr_1, pr_2))
-        self.assertTrue(stamp_1 != stamp_2, "Different pkg rev. but same stamp: %s" % stamp_1)
-
-    def run_test_pr_export_import(self, package_name, replace_current_db=True):
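-        # Bump the PR once, export the PR database with bitbake-prserv-tool,
-        # optionally delete the live database, import the export back, then bump
-        # again and check the revision carries on from the imported value.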
-        self.config_pr_tests(package_name)
-
-        self.increment_package_pr(package_name)
-        pr_1 = self.get_pr_version(package_name)
-
-        exported_db_path = os.path.join(self.builddir, 'export.inc')
-        export_result = runCmd("bitbake-prserv-tool export %s" % exported_db_path, ignore_status=True)
-        self.assertEqual(export_result.status, 0, msg="PR Service database export failed: %s" % export_result.output)
-
-        if replace_current_db:
-            current_db_path = os.path.join(get_bb_var('PERSISTENT_DIR'), 'prserv.sqlite3')
-            self.assertTrue(os.path.exists(current_db_path), msg="Path to current PR Service database is invalid: %s" % current_db_path)
-            os.remove(current_db_path)
-
-        import_result = runCmd("bitbake-prserv-tool import %s" % exported_db_path, ignore_status=True)
-        os.remove(exported_db_path)
-        self.assertEqual(import_result.status, 0, msg="PR Service database import failed: %s" % import_result.output)
-
-        self.increment_package_pr(package_name)
-        pr_2 = self.get_pr_version(package_name)
-
-        self.assertEqual(pr_2 - pr_1, 1, "Package revision should have increased by exactly 1 after the import (got %d then %d)" % (pr_1, pr_2))
-
-    @testcase(930)
-    def test_import_export_replace_db(self):
-        self.run_test_pr_export_import('m4')
-
-    @testcase(931)
-    def test_import_export_override_db(self):
-        self.run_test_pr_export_import('m4', replace_current_db=False)
-
-    @testcase(932)
-    def test_pr_service_rpm_arch_dep(self):
-        self.run_test_pr_service('m4', 'rpm', 'do_package')
-
-    @testcase(934)
-    def test_pr_service_deb_arch_dep(self):
-        self.run_test_pr_service('m4', 'deb', 'do_package')
-
-    @testcase(933)
-    def test_pr_service_ipk_arch_dep(self):
-        self.run_test_pr_service('m4', 'ipk', 'do_package')
-
-    @testcase(935)
-    def test_pr_service_rpm_arch_indep(self):
-        self.run_test_pr_service('xcursor-transparent-theme', 'rpm', 'do_package')
-
-    @testcase(937)
-    def test_pr_service_deb_arch_indep(self):
-        self.run_test_pr_service('xcursor-transparent-theme', 'deb', 'do_package')
-
-    @testcase(936)
-    def test_pr_service_ipk_arch_indep(self):
-        self.run_test_pr_service('xcursor-transparent-theme', 'ipk', 'do_package')
-
-    @testcase(1419)
-    def test_stopping_prservice_message(self):
-        port = get_free_port()
-
-        runCmd('bitbake-prserv --host localhost --port %s --loglevel=DEBUG --start' % port)
-        ret = runCmd('bitbake-prserv --host localhost --port %s --loglevel=DEBUG --stop' % port)
-
-        self.assertEqual(ret.status, 0)
-
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/recipetool.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/recipetool.py
deleted file mode 100644
index dc55a5e..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/recipetool.py
+++ /dev/null
@@ -1,695 +0,0 @@
-import os
-import logging
-import shutil
-import tempfile
-import urllib.parse
-
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var
-from oeqa.utils.commands import get_bb_vars, create_temp_layer
-from oeqa.utils.decorators import testcase
-from oeqa.selftest import devtool
-
-
-templayerdir = None
-
-
-def setUpModule():
-    global templayerdir
-    templayerdir = tempfile.mkdtemp(prefix='recipetoolqa')
-    create_temp_layer(templayerdir, 'selftestrecipetool')
-    runCmd('bitbake-layers add-layer %s' % templayerdir)
-
-
-def tearDownModule():
-    runCmd('bitbake-layers remove-layer %s' % templayerdir, ignore_status=True)
-    runCmd('rm -rf %s' % templayerdir)
-
-
-class RecipetoolBase(devtool.DevtoolBase):
-
-    def setUpLocal(self):
-        self.templayerdir = templayerdir
-        self.tempdir = tempfile.mkdtemp(prefix='recipetoolqa')
-        self.track_for_cleanup(self.tempdir)
-        self.testfile = os.path.join(self.tempdir, 'testfile')
-        with open(self.testfile, 'w') as f:
-            f.write('Test file\n')
-
-    def tearDownLocal(self):
-        runCmd('rm -rf %s/recipes-*' % self.templayerdir)
-
-    def _try_recipetool_appendcmd(self, cmd, testrecipe, expectedfiles, expectedlines=None):
-        result = runCmd(cmd)
-        self.assertNotIn('Traceback', result.output)
-
-        # Check the bbappend was created and applies properly
-        recipefile = get_bb_var('FILE', testrecipe)
-        bbappendfile = self._check_bbappend(testrecipe, recipefile, self.templayerdir)
-
-        # Check the bbappend contents
-        if expectedlines is not None:
-            with open(bbappendfile, 'r') as f:
-                self.assertEqual(expectedlines, f.readlines(), "Expected lines are not present in %s" % bbappendfile)
-
-        # Check file was copied
-        filesdir = os.path.join(os.path.dirname(bbappendfile), testrecipe)
-        for expectedfile in expectedfiles:
-            self.assertTrue(os.path.isfile(os.path.join(filesdir, expectedfile)), 'Expected file %s to be copied next to bbappend, but it wasn\'t' % expectedfile)
-
-        # Check no other files created
-        createdfiles = []
-        for root, _, files in os.walk(filesdir):
-            for f in files:
-                createdfiles.append(os.path.relpath(os.path.join(root, f), filesdir))
-        self.assertEqual(sorted(createdfiles), sorted(expectedfiles))
-
-        return bbappendfile, result.output
-
-
-class RecipetoolTests(RecipetoolBase):
-
-    @classmethod
-    def setUpClass(cls):
-        # Ensure we have the right data in shlibs/pkgdata
-        logger = logging.getLogger("selftest")
-        logger.info('Running bitbake to generate pkgdata')
-        bitbake('-c packagedata base-files coreutils busybox selftest-recipetool-appendfile')
-        bb_vars = get_bb_vars(['COREBASE', 'BBPATH'])
-        cls.corebase = bb_vars['COREBASE']
-        cls.bbpath = bb_vars['BBPATH']
-
-    def _try_recipetool_appendfile(self, testrecipe, destfile, newfile, options, expectedlines, expectedfiles):
-        cmd = 'recipetool appendfile %s %s %s %s' % (self.templayerdir, destfile, newfile, options)
-        return self._try_recipetool_appendcmd(cmd, testrecipe, expectedfiles, expectedlines)
-
-    def _try_recipetool_appendfile_fail(self, destfile, newfile, checkerror):
-        cmd = 'recipetool appendfile %s %s %s' % (self.templayerdir, destfile, newfile)
-        result = runCmd(cmd, ignore_status=True)
-        self.assertNotEqual(result.status, 0, 'Command "%s" should have failed but didn\'t' % cmd)
-        self.assertNotIn('Traceback', result.output)
-        for errorstr in checkerror:
-            self.assertIn(errorstr, result.output)
-
-    @testcase(1177)
-    def test_recipetool_appendfile_basic(self):
-        # Basic test
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                        '\n']
-        _, output = self._try_recipetool_appendfile('base-files', '/etc/motd', self.testfile, '', expectedlines, ['motd'])
-        self.assertNotIn('WARNING: ', output)
-
-    @testcase(1183)
-    def test_recipetool_appendfile_invalid(self):
-        # Test some commands that should error
-        self._try_recipetool_appendfile_fail('/etc/passwd', self.testfile, ['ERROR: /etc/passwd cannot be handled by this tool', 'useradd', 'extrausers'])
-        self._try_recipetool_appendfile_fail('/etc/timestamp', self.testfile, ['ERROR: /etc/timestamp cannot be handled by this tool'])
-        self._try_recipetool_appendfile_fail('/dev/console', self.testfile, ['ERROR: /dev/console cannot be handled by this tool'])
-
-    @testcase(1176)
-    def test_recipetool_appendfile_alternatives(self):
-        # Now try with a file we know should be an alternative
-        # (this is very much a fake example, but one we know is reliably an alternative)
-        self._try_recipetool_appendfile_fail('/bin/ls', self.testfile, ['ERROR: File /bin/ls is an alternative possibly provided by the following recipes:', 'coreutils', 'busybox'])
-        # Need a test file - should be executable
-        testfile2 = os.path.join(self.corebase, 'oe-init-build-env')
-        testfile2name = os.path.basename(testfile2)
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n',
-                         'SRC_URI += "file://%s"\n' % testfile2name,
-                         '\n',
-                         'do_install_append() {\n',
-                         '    install -d ${D}${base_bindir}\n',
-                         '    install -m 0755 ${WORKDIR}/%s ${D}${base_bindir}/ls\n' % testfile2name,
-                         '}\n']
-        self._try_recipetool_appendfile('coreutils', '/bin/ls', testfile2, '-r coreutils', expectedlines, [testfile2name])
-        # Now try bbappending the same file again, contents should not change
-        bbappendfile, _ = self._try_recipetool_appendfile('coreutils', '/bin/ls', self.testfile, '-r coreutils', expectedlines, [testfile2name])
-        # But the copied file itself should have been replaced
-        copiedfile = os.path.join(os.path.dirname(bbappendfile), 'coreutils', testfile2name)
-        result = runCmd('diff -q %s %s' % (testfile2, copiedfile), ignore_status=True)
-        self.assertNotEqual(result.status, 0, 'New file should have been copied over the old one, but it was not: %s' % result.output)
-
-    @testcase(1178)
-    def test_recipetool_appendfile_binary(self):
-        # Try appending a binary file
-        # /bin/ls can be a symlink to /usr/bin/ls
-        ls = os.path.realpath("/bin/ls")
-        result = runCmd('recipetool appendfile %s /bin/ls %s -r coreutils' % (self.templayerdir, ls))
-        self.assertIn('WARNING: ', result.output)
-        self.assertIn('is a binary', result.output)
-
-    @testcase(1173)
-    def test_recipetool_appendfile_add(self):
-        # Try arbitrary file add to a recipe
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n',
-                         'SRC_URI += "file://testfile"\n',
-                         '\n',
-                         'do_install_append() {\n',
-                         '    install -d ${D}${datadir}\n',
-                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/something\n',
-                         '}\n']
-        self._try_recipetool_appendfile('netbase', '/usr/share/something', self.testfile, '-r netbase', expectedlines, ['testfile'])
-        # Try adding another file, this time where the source file is executable
-        # (so we're testing that, plus modifying an existing bbappend)
-        testfile2 = os.path.join(self.corebase, 'oe-init-build-env')
-        testfile2name = os.path.basename(testfile2)
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n',
-                         'SRC_URI += "file://testfile \\\n',
-                         '            file://%s \\\n' % testfile2name,
-                         '            "\n',
-                         '\n',
-                         'do_install_append() {\n',
-                         '    install -d ${D}${datadir}\n',
-                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/something\n',
-                         '    install -m 0755 ${WORKDIR}/%s ${D}${datadir}/scriptname\n' % testfile2name,
-                         '}\n']
-        self._try_recipetool_appendfile('netbase', '/usr/share/scriptname', testfile2, '-r netbase', expectedlines, ['testfile', testfile2name])
-
-    @testcase(1174)
-    def test_recipetool_appendfile_add_bindir(self):
-        # Try an arbitrary file add to a recipe, this time to a location where it should be installed as executable
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n',
-                         'SRC_URI += "file://testfile"\n',
-                         '\n',
-                         'do_install_append() {\n',
-                         '    install -d ${D}${bindir}\n',
-                         '    install -m 0755 ${WORKDIR}/testfile ${D}${bindir}/selftest-recipetool-testbin\n',
-                         '}\n']
-        _, output = self._try_recipetool_appendfile('netbase', '/usr/bin/selftest-recipetool-testbin', self.testfile, '-r netbase', expectedlines, ['testfile'])
-        self.assertNotIn('WARNING: ', output)
-
-    @testcase(1175)
-    def test_recipetool_appendfile_add_machine(self):
-        # Try an arbitrary file add to a recipe, this time with a machine-specific override
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n',
-                         'PACKAGE_ARCH = "${MACHINE_ARCH}"\n',
-                         '\n',
-                         'SRC_URI_append_mymachine = " file://testfile"\n',
-                         '\n',
-                         'do_install_append_mymachine() {\n',
-                         '    install -d ${D}${datadir}\n',
-                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/something\n',
-                         '}\n']
-        _, output = self._try_recipetool_appendfile('netbase', '/usr/share/something', self.testfile, '-r netbase -m mymachine', expectedlines, ['mymachine/testfile'])
-        self.assertNotIn('WARNING: ', output)
-
-    @testcase(1184)
-    def test_recipetool_appendfile_orig(self):
-        # A file that's in SRC_URI and in do_install with the same name
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n']
-        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-orig', self.testfile, '', expectedlines, ['selftest-replaceme-orig'])
-        self.assertNotIn('WARNING: ', output)
-
-    @testcase(1191)
-    def test_recipetool_appendfile_todir(self):
-        # A file that's in SRC_URI and in do_install with destination directory rather than file
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n']
-        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-todir', self.testfile, '', expectedlines, ['selftest-replaceme-todir'])
-        self.assertNotIn('WARNING: ', output)
-
-    @testcase(1187)
-    def test_recipetool_appendfile_renamed(self):
-        # A file that's in SRC_URI with a different name to the destination file
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n']
-        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-renamed', self.testfile, '', expectedlines, ['file1'])
-        self.assertNotIn('WARNING: ', output)
-
-    @testcase(1190)
-    def test_recipetool_appendfile_subdir(self):
-        # A file that's in SRC_URI in a subdir
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n',
-                         'SRC_URI += "file://testfile"\n',
-                         '\n',
-                         'do_install_append() {\n',
-                         '    install -d ${D}${datadir}\n',
-                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/selftest-replaceme-subdir\n',
-                         '}\n']
-        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-subdir', self.testfile, '', expectedlines, ['testfile'])
-        self.assertNotIn('WARNING: ', output)
-
-    @testcase(1189)
-    def test_recipetool_appendfile_src_glob(self):
-        # A file that's in SRC_URI as a glob
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n',
-                         'SRC_URI += "file://testfile"\n',
-                         '\n',
-                         'do_install_append() {\n',
-                         '    install -d ${D}${datadir}\n',
-                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/selftest-replaceme-src-globfile\n',
-                         '}\n']
-        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-src-globfile', self.testfile, '', expectedlines, ['testfile'])
-        self.assertNotIn('WARNING: ', output)
-
-    @testcase(1181)
-    def test_recipetool_appendfile_inst_glob(self):
-        # A file that's in do_install as a glob
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n']
-        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-inst-globfile', self.testfile, '', expectedlines, ['selftest-replaceme-inst-globfile'])
-        self.assertNotIn('WARNING: ', output)
-
-    @testcase(1182)
-    def test_recipetool_appendfile_inst_todir_glob(self):
-        # A file that's in do_install as a glob with destination as a directory
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n']
-        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-inst-todir-globfile', self.testfile, '', expectedlines, ['selftest-replaceme-inst-todir-globfile'])
-        self.assertNotIn('WARNING: ', output)
-
-    @testcase(1185)
-    def test_recipetool_appendfile_patch(self):
-        # A file that's added by a patch in SRC_URI
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n',
-                         'SRC_URI += "file://testfile"\n',
-                         '\n',
-                         'do_install_append() {\n',
-                         '    install -d ${D}${sysconfdir}\n',
-                         '    install -m 0644 ${WORKDIR}/testfile ${D}${sysconfdir}/selftest-replaceme-patched\n',
-                         '}\n']
-        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/etc/selftest-replaceme-patched', self.testfile, '', expectedlines, ['testfile'])
-        for line in output.splitlines():
-            if 'WARNING: ' in line:
-                self.assertIn('add-file.patch', line, 'Unexpected warning found in output:\n%s' % line)
-                break
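-        # A for/else: the else branch runs only if the loop finished without
-        # hitting 'break', i.e. no WARNING line was emitted at all.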
-        else:
-            self.fail('Patch warning not found in output:\n%s' % output)
-
-    @testcase(1188)
-    def test_recipetool_appendfile_script(self):
-        # Now, a file that's in SRC_URI but installed by a script (so no mention in do_install)
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n',
-                         'SRC_URI += "file://testfile"\n',
-                         '\n',
-                         'do_install_append() {\n',
-                         '    install -d ${D}${datadir}\n',
-                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/selftest-replaceme-scripted\n',
-                         '}\n']
-        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-scripted', self.testfile, '', expectedlines, ['testfile'])
-        self.assertNotIn('WARNING: ', output)
-
-    @testcase(1180)
-    def test_recipetool_appendfile_inst_func(self):
-        # A file that's installed from a function called by do_install
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n']
-        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-inst-func', self.testfile, '', expectedlines, ['selftest-replaceme-inst-func'])
-        self.assertNotIn('WARNING: ', output)
-
-    @testcase(1186)
-    def test_recipetool_appendfile_postinstall(self):
-        # A file that's created by a postinstall script (and explicitly mentioned in it)
-        # First try without specifying recipe
-        self._try_recipetool_appendfile_fail('/usr/share/selftest-replaceme-postinst', self.testfile, ['File /usr/share/selftest-replaceme-postinst may be written out in a pre/postinstall script of the following recipes:', 'selftest-recipetool-appendfile'])
-        # Now specify recipe
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n',
-                         'SRC_URI += "file://testfile"\n',
-                         '\n',
-                         'do_install_append() {\n',
-                         '    install -d ${D}${datadir}\n',
-                         '    install -m 0644 ${WORKDIR}/testfile ${D}${datadir}/selftest-replaceme-postinst\n',
-                         '}\n']
-        _, output = self._try_recipetool_appendfile('selftest-recipetool-appendfile', '/usr/share/selftest-replaceme-postinst', self.testfile, '-r selftest-recipetool-appendfile', expectedlines, ['testfile'])
-
-    @testcase(1179)
-    def test_recipetool_appendfile_extlayer(self):
-        # Try creating a bbappend in a layer that's not in bblayers.conf and has a different structure
-        exttemplayerdir = os.path.join(self.tempdir, 'extlayer')
-        self._create_temp_layer(exttemplayerdir, False, 'oeselftestextlayer', recipepathspec='metadata/recipes/recipes-*/*')
-        result = runCmd('recipetool appendfile %s /usr/share/selftest-replaceme-orig %s' % (exttemplayerdir, self.testfile))
-        self.assertNotIn('Traceback', result.output)
-        createdfiles = []
-        for root, _, files in os.walk(exttemplayerdir):
-            for f in files:
-                createdfiles.append(os.path.relpath(os.path.join(root, f), exttemplayerdir))
-        createdfiles.remove('conf/layer.conf')
-        expectedfiles = ['metadata/recipes/recipes-test/selftest-recipetool-appendfile/selftest-recipetool-appendfile.bbappend',
-                         'metadata/recipes/recipes-test/selftest-recipetool-appendfile/selftest-recipetool-appendfile/selftest-replaceme-orig']
-        self.assertEqual(sorted(createdfiles), sorted(expectedfiles))
-
-    @testcase(1192)
-    def test_recipetool_appendfile_wildcard(self):
-
-        def try_appendfile_wc(options):
-            result = runCmd('recipetool appendfile %s /etc/profile %s %s' % (self.templayerdir, self.testfile, options))
-            self.assertNotIn('Traceback', result.output)
-            bbappendfile = None
-            for root, _, files in os.walk(self.templayerdir):
-                for f in files:
-                    if f.endswith('.bbappend'):
-                        bbappendfile = f
-                        break
-            if not bbappendfile:
-                self.fail('No bbappend file created')
-            runCmd('rm -rf %s/recipes-*' % self.templayerdir)
-            return bbappendfile
-
-        # Check without wildcard option
-        recipefn = os.path.basename(get_bb_var('FILE', 'base-files'))
-        filename = try_appendfile_wc('')
-        self.assertEqual(filename, recipefn.replace('.bb', '.bbappend'))
-        # Now check with wildcard option
-        filename = try_appendfile_wc('-w')
-        self.assertEqual(filename, recipefn.split('_')[0] + '_%.bbappend')
-
-    @testcase(1193)
-    def test_recipetool_create(self):
-        # Try adding a recipe
-        tempsrc = os.path.join(self.tempdir, 'srctree')
-        os.makedirs(tempsrc)
-        recipefile = os.path.join(self.tempdir, 'logrotate_3.12.3.bb')
-        srcuri = 'https://github.com/logrotate/logrotate/releases/download/3.12.3/logrotate-3.12.3.tar.xz'
-        result = runCmd('recipetool create -o %s %s -x %s' % (recipefile, srcuri, tempsrc))
-        self.assertTrue(os.path.isfile(recipefile))
-        checkvars = {}
-        checkvars['LICENSE'] = 'GPLv2'
-        checkvars['LIC_FILES_CHKSUM'] = 'file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263'
-        checkvars['SRC_URI'] = 'https://github.com/logrotate/logrotate/releases/download/${PV}/logrotate-${PV}.tar.xz'
-        checkvars['SRC_URI[md5sum]'] = 'a560c57fac87c45b2fc17406cdf79288'
-        checkvars['SRC_URI[sha256sum]'] = '2e6a401cac9024db2288297e3be1a8ab60e7401ba8e91225218aaf4a27e82a07'
-        self._test_recipe_contents(recipefile, checkvars, [])
-
-    @testcase(1194)
-    def test_recipetool_create_git(self):
-        if 'x11' not in get_bb_var('DISTRO_FEATURES'):
-            self.skipTest('Test requires x11 as distro feature')
-        # Ensure we have the right data in shlibs/pkgdata
-        bitbake('libpng pango libx11 libxext jpeg libcheck')
-        # Try adding a recipe
-        tempsrc = os.path.join(self.tempdir, 'srctree')
-        os.makedirs(tempsrc)
-        recipefile = os.path.join(self.tempdir, 'libmatchbox.bb')
-        srcuri = 'git://git.yoctoproject.org/libmatchbox'
-        result = runCmd(['recipetool', 'create', '-o', recipefile, srcuri + ";rev=9f7cf8895ae2d39c465c04cc78e918c157420269", '-x', tempsrc])
-        self.assertTrue(os.path.isfile(recipefile), 'recipetool did not create recipe file; output:\n%s' % result.output)
-        checkvars = {}
-        checkvars['LICENSE'] = 'LGPLv2.1'
-        checkvars['LIC_FILES_CHKSUM'] = 'file://COPYING;md5=7fbc338309ac38fefcd64b04bb903e34'
-        checkvars['S'] = '${WORKDIR}/git'
-        checkvars['PV'] = '1.11+git${SRCPV}'
-        checkvars['SRC_URI'] = srcuri
-        checkvars['DEPENDS'] = set(['libcheck', 'libjpeg-turbo', 'libpng', 'libx11', 'libxext', 'pango'])
-        inherits = ['autotools', 'pkgconfig']
-        self._test_recipe_contents(recipefile, checkvars, inherits)
-
-    @testcase(1392)
-    def test_recipetool_create_simple(self):
-        # Try adding a recipe
-        temprecipe = os.path.join(self.tempdir, 'recipe')
-        os.makedirs(temprecipe)
-        pv = '1.7.3.0'
-        srcuri = 'http://www.dest-unreach.org/socat/download/socat-%s.tar.bz2' % pv
-        result = runCmd('recipetool create %s -o %s' % (srcuri, temprecipe))
-        dirlist = os.listdir(temprecipe)
-        if len(dirlist) > 1:
-            self.fail('recipetool created more than just one file; output:\n%s\ndirlist:\n%s' % (result.output, str(dirlist)))
-        if len(dirlist) < 1 or not os.path.isfile(os.path.join(temprecipe, dirlist[0])):
-            self.fail('recipetool did not create recipe file; output:\n%s\ndirlist:\n%s' % (result.output, str(dirlist)))
-        self.assertEqual(dirlist[0], 'socat_%s.bb' % pv, 'Recipe file incorrectly named')
-        checkvars = {}
-        checkvars['LICENSE'] = set(['Unknown', 'GPLv2'])
-        checkvars['LIC_FILES_CHKSUM'] = set(['file://COPYING.OpenSSL;md5=5c9bccc77f67a8328ef4ebaf468116f4', 'file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263'])
-        # We don't check DEPENDS since they are variable for this recipe depending on what's in the sysroot
-        checkvars['S'] = None
-        checkvars['SRC_URI'] = srcuri.replace(pv, '${PV}')
-        inherits = ['autotools']
-        self._test_recipe_contents(os.path.join(temprecipe, dirlist[0]), checkvars, inherits)
-
-    @testcase(1418)
-    def test_recipetool_create_cmake(self):
-        # Try adding a recipe
-        temprecipe = os.path.join(self.tempdir, 'recipe')
-        os.makedirs(temprecipe)
-        recipefile = os.path.join(temprecipe, 'navit_0.5.0.bb')
-        srcuri = 'http://downloads.sourceforge.net/project/navit/v0.5.0/navit-0.5.0.tar.gz'
-        result = runCmd('recipetool create -o %s %s' % (temprecipe, srcuri))
-        self.assertTrue(os.path.isfile(recipefile))
-        checkvars = {}
-        checkvars['LICENSE'] = set(['Unknown', 'GPLv2', 'LGPLv2'])
-        checkvars['SRC_URI'] = 'http://downloads.sourceforge.net/project/navit/v${PV}/navit-${PV}.tar.gz'
-        checkvars['SRC_URI[md5sum]'] = '242f398e979a6b8c0f3c802b63435b68'
-        checkvars['SRC_URI[sha256sum]'] = '13353481d7fc01a4f64e385dda460b51496366bba0fd2cc85a89a0747910e94d'
-        checkvars['DEPENDS'] = set(['freetype', 'zlib', 'openssl', 'glib-2.0', 'virtual/libgl', 'virtual/egl', 'gtk+', 'libpng', 'libsdl', 'freeglut', 'dbus-glib'])
-        inherits = ['cmake', 'python-dir', 'gettext', 'pkgconfig']
-        self._test_recipe_contents(recipefile, checkvars, inherits)
-
-    def test_recipetool_create_github(self):
-        # Basic test to see if github URL mangling works
-        temprecipe = os.path.join(self.tempdir, 'recipe')
-        os.makedirs(temprecipe)
-        recipefile = os.path.join(temprecipe, 'meson_git.bb')
-        srcuri = 'https://github.com/mesonbuild/meson;rev=0.32.0'
-        result = runCmd(['recipetool', 'create', '-o', temprecipe, srcuri])
-        self.assertTrue(os.path.isfile(recipefile))
-        checkvars = {}
-        checkvars['LICENSE'] = set(['Apache-2.0'])
-        checkvars['SRC_URI'] = 'git://github.com/mesonbuild/meson;protocol=https'
-        inherits = ['setuptools']
-        self._test_recipe_contents(recipefile, checkvars, inherits)
-
-    def test_recipetool_create_github_tarball(self):
-        # Basic test to ensure github URL mangling doesn't apply to release tarballs
-        temprecipe = os.path.join(self.tempdir, 'recipe')
-        os.makedirs(temprecipe)
-        pv = '0.32.0'
-        recipefile = os.path.join(temprecipe, 'meson_%s.bb' % pv)
-        srcuri = 'https://github.com/mesonbuild/meson/releases/download/%s/meson-%s.tar.gz' % (pv, pv)
-        result = runCmd('recipetool create -o %s %s' % (temprecipe, srcuri))
-        self.assertTrue(os.path.isfile(recipefile))
-        checkvars = {}
-        checkvars['LICENSE'] = set(['Apache-2.0'])
-        checkvars['SRC_URI'] = 'https://github.com/mesonbuild/meson/releases/download/${PV}/meson-${PV}.tar.gz'
-        inherits = ['setuptools']
-        self._test_recipe_contents(recipefile, checkvars, inherits)
-
-    def test_recipetool_create_git_http(self):
-        # Basic test to check http git URL mangling works
-        temprecipe = os.path.join(self.tempdir, 'recipe')
-        os.makedirs(temprecipe)
-        recipefile = os.path.join(temprecipe, 'matchbox-terminal_git.bb')
-        srcuri = 'http://git.yoctoproject.org/git/matchbox-terminal'
-        result = runCmd('recipetool create -o %s %s' % (temprecipe, srcuri))
-        self.assertTrue(os.path.isfile(recipefile))
-        checkvars = {}
-        checkvars['LICENSE'] = set(['GPLv2'])
-        checkvars['SRC_URI'] = 'git://git.yoctoproject.org/git/matchbox-terminal;protocol=http'
-        inherits = ['pkgconfig', 'autotools']
-        self._test_recipe_contents(recipefile, checkvars, inherits)
-
-    def _copy_file_with_cleanup(self, srcfile, basedstdir, *paths):
-        dstdir = basedstdir
-        self.assertTrue(os.path.exists(dstdir))
-        for p in paths:
-            dstdir = os.path.join(dstdir, p)
-            if not os.path.exists(dstdir):
-                os.makedirs(dstdir)
-                self.track_for_cleanup(dstdir)
-        dstfile = os.path.join(dstdir, os.path.basename(srcfile))
-        if srcfile != dstfile:
-            shutil.copy(srcfile, dstfile)
-            self.track_for_cleanup(dstfile)
-
-    def test_recipetool_load_plugin(self):
-        """Test that recipetool loads only the first found plugin in BBPATH."""
-
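-        # Copy the plugin into lib/recipetool under every directory in BBPATH
-        # (and next to the recipetool script itself), then use the plugin's own
-        # commands to verify that only the first copy on the path is loaded.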
-        recipetool = runCmd("which recipetool")
-        fromname = runCmd("recipetool --quiet pluginfile")
-        srcfile = fromname.output
-        searchpath = self.bbpath.split(':') + [os.path.dirname(recipetool.output)]
-        plugincontent = []
-        with open(srcfile) as fh:
-            plugincontent = fh.readlines()
-        try:
-            self.assertIn('meta-selftest', srcfile, 'wrong bbpath plugin found')
-            for path in searchpath:
-                self._copy_file_with_cleanup(srcfile, path, 'lib', 'recipetool')
-            result = runCmd("recipetool --quiet count")
-            self.assertEqual(result.output, '1')
-            result = runCmd("recipetool --quiet multiloaded")
-            self.assertEqual(result.output, "no")
-            for path in searchpath:
-                result = runCmd("recipetool --quiet bbdir")
-                self.assertEqual(result.output, path)
-                os.unlink(os.path.join(result.output, 'lib', 'recipetool', 'bbpath.py'))
-        finally:
-            with open(srcfile, 'w') as fh:
-                fh.writelines(plugincontent)
-
-
-class RecipetoolAppendsrcBase(RecipetoolBase):
-    def _try_recipetool_appendsrcfile(self, testrecipe, newfile, destfile, options, expectedlines, expectedfiles):
-        cmd = 'recipetool appendsrcfile %s %s %s %s %s' % (options, self.templayerdir, testrecipe, newfile, destfile)
-        return self._try_recipetool_appendcmd(cmd, testrecipe, expectedfiles, expectedlines)
-
-    def _try_recipetool_appendsrcfiles(self, testrecipe, newfiles, expectedlines=None, expectedfiles=None, destdir=None, options=''):
-
-        if destdir:
-            options += ' -D %s' % destdir
-
-        if expectedfiles is None:
-            expectedfiles = [os.path.basename(f) for f in newfiles]
-
-        cmd = 'recipetool appendsrcfiles %s %s %s %s' % (options, self.templayerdir, testrecipe, ' '.join(newfiles))
-        return self._try_recipetool_appendcmd(cmd, testrecipe, expectedfiles, expectedlines)
-
-    def _try_recipetool_appendsrcfile_fail(self, testrecipe, newfile, destfile, checkerror):
-        cmd = 'recipetool appendsrcfile %s %s %s %s' % (self.templayerdir, testrecipe, newfile, destfile or '')
-        result = runCmd(cmd, ignore_status=True)
-        self.assertNotEqual(result.status, 0, 'Command "%s" should have failed but didn\'t' % cmd)
-        self.assertNotIn('Traceback', result.output)
-        for errorstr in checkerror:
-            self.assertIn(errorstr, result.output)
-
-    @staticmethod
-    def _get_first_file_uri(recipe):
-        '''Return the first file:// in SRC_URI for the specified recipe.'''
-        src_uri = get_bb_var('SRC_URI', recipe).split()
-        for uri in src_uri:
-            p = urllib.parse.urlparse(uri)
-            if p.scheme == 'file':
-                return p.netloc + p.path
-
-    def _test_appendsrcfile(self, testrecipe, filename=None, destdir=None, has_src_uri=True, srcdir=None, newfile=None, options=''):
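-        # Work out the expected destination path and the matching SRC_URI entry
-        # (including any subdir= parameter) before driving recipetool appendsrcfile
-        # and checking the generated bbappend.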
-        if newfile is None:
-            newfile = self.testfile
-
-        if srcdir:
-            if destdir:
-                expected_subdir = os.path.join(srcdir, destdir)
-            else:
-                expected_subdir = srcdir
-        else:
-            options += " -W"
-            expected_subdir = destdir
-
-        if filename:
-            if destdir:
-                destpath = os.path.join(destdir, filename)
-            else:
-                destpath = filename
-        else:
-            filename = os.path.basename(newfile)
-            if destdir:
-                destpath = destdir + os.sep
-            else:
-                destpath = '.' + os.sep
-
-        expectedlines = ['FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"\n',
-                         '\n']
-        if has_src_uri:
-            uri = 'file://%s' % filename
-            if expected_subdir:
-                uri += ';subdir=%s' % expected_subdir
-            expectedlines[0:0] = ['SRC_URI += "%s"\n' % uri,
-                                  '\n']
-
-        return self._try_recipetool_appendsrcfile(testrecipe, newfile, destpath, options, expectedlines, [filename])
-
-    def _test_appendsrcfiles(self, testrecipe, newfiles, expectedfiles=None, destdir=None, options=''):
-        if expectedfiles is None:
-            expectedfiles = [os.path.basename(n) for n in newfiles]
-
-        self._try_recipetool_appendsrcfiles(testrecipe, newfiles, expectedfiles=expectedfiles, destdir=destdir, options=options)
-
-        bb_vars = get_bb_vars(['SRC_URI', 'FILE', 'FILESEXTRAPATHS'], testrecipe)
-        src_uri = bb_vars['SRC_URI'].split()
-        for f in expectedfiles:
-            if destdir:
-                self.assertIn('file://%s;subdir=%s' % (f, destdir), src_uri)
-            else:
-                self.assertIn('file://%s' % f, src_uri)
-
-        recipefile = bb_vars['FILE']
-        bbappendfile = self._check_bbappend(testrecipe, recipefile, self.templayerdir)
-        filesdir = os.path.join(os.path.dirname(bbappendfile), testrecipe)
-        filesextrapaths = bb_vars['FILESEXTRAPATHS'].split(':')
-        self.assertIn(filesdir, filesextrapaths)
-
-
-
-
-class RecipetoolAppendsrcTests(RecipetoolAppendsrcBase):
-
-    @testcase(1273)
-    def test_recipetool_appendsrcfile_basic(self):
-        self._test_appendsrcfile('base-files', 'a-file')
-
-    @testcase(1274)
-    def test_recipetool_appendsrcfile_basic_wildcard(self):
-        testrecipe = 'base-files'
-        self._test_appendsrcfile(testrecipe, 'a-file', options='-w')
-        recipefile = get_bb_var('FILE', testrecipe)
-        bbappendfile = self._check_bbappend(testrecipe, recipefile, self.templayerdir)
-        self.assertEqual(os.path.basename(bbappendfile), '%s_%%.bbappend' % testrecipe)
-
-    @testcase(1281)
-    def test_recipetool_appendsrcfile_subdir_basic(self):
-        self._test_appendsrcfile('base-files', 'a-file', 'tmp')
-
-    @testcase(1282)
-    def test_recipetool_appendsrcfile_subdir_basic_dirdest(self):
-        self._test_appendsrcfile('base-files', destdir='tmp')
-
-    @testcase(1280)
-    def test_recipetool_appendsrcfile_srcdir_basic(self):
-        testrecipe = 'bash'
-        bb_vars = get_bb_vars(['S', 'WORKDIR'], testrecipe)
-        srcdir = bb_vars['S']
-        workdir = bb_vars['WORKDIR']
-        subdir = os.path.relpath(srcdir, workdir)
-        self._test_appendsrcfile(testrecipe, 'a-file', srcdir=subdir)
-
-    @testcase(1275)
-    def test_recipetool_appendsrcfile_existing_in_src_uri(self):
-        testrecipe = 'base-files'
-        filepath = self._get_first_file_uri(testrecipe)
-        self.assertTrue(filepath, 'Unable to test, no file:// uri found in SRC_URI for %s' % testrecipe)
-        self._test_appendsrcfile(testrecipe, filepath, has_src_uri=False)
-
-    @testcase(1276)
-    def test_recipetool_appendsrcfile_existing_in_src_uri_diff_params(self):
-        testrecipe = 'base-files'
-        subdir = 'tmp'
-        filepath = self._get_first_file_uri(testrecipe)
-        self.assertTrue(filepath, 'Unable to test, no file:// uri found in SRC_URI for %s' % testrecipe)
-
-        output = self._test_appendsrcfile(testrecipe, filepath, subdir, has_src_uri=False)
-        self.assertTrue(any('with different parameters' in l for l in output))
-
-    @testcase(1277)
-    def test_recipetool_appendsrcfile_replace_file_srcdir(self):
-        testrecipe = 'bash'
-        filepath = 'Makefile.in'
-        bb_vars = get_bb_vars(['S', 'WORKDIR'], testrecipe)
-        srcdir = bb_vars['S']
-        workdir = bb_vars['WORKDIR']
-        subdir = os.path.relpath(srcdir, workdir)
-
-        self._test_appendsrcfile(testrecipe, filepath, srcdir=subdir)
-        bitbake('%s:do_unpack' % testrecipe)
-        with open(self.testfile, 'r') as expected, open(os.path.join(srcdir, filepath), 'r') as actual:
-            self.assertEqual(expected.read(), actual.read())
-
-    @testcase(1278)
-    def test_recipetool_appendsrcfiles_basic(self, destdir=None):
-        newfiles = [self.testfile]
-        for i in range(1, 5):
-            testfile = os.path.join(self.tempdir, 'testfile%d' % i)
-            with open(testfile, 'w') as f:
-                f.write('Test file %d\n' % i)
-            newfiles.append(testfile)
-        self._test_appendsrcfiles('gcc', newfiles, destdir=destdir, options='-W')
-
-    @testcase(1279)
-    def test_recipetool_appendsrcfiles_basic_subdir(self):
-        self.test_recipetool_appendsrcfiles_basic(destdir='testdir')
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/runqemu.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/runqemu.py
deleted file mode 100644
index 58c6f96..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/runqemu.py
+++ /dev/null
@@ -1,140 +0,0 @@
-#
-# Copyright (c) 2017 Wind River Systems, Inc.
-#
-
-import os
-import re
-import logging
-
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import bitbake, runqemu, get_bb_var
-from oeqa.utils.decorators import testcase
-
-class RunqemuTests(oeSelfTest):
-    """Runqemu test class"""
-
-    image_is_ready = False
-    deploy_dir_image = ''
-
-    def setUpLocal(self):
-        self.recipe = 'core-image-minimal'
-        self.machine = 'qemux86-64'
-        self.fstypes = "ext4 iso hddimg vmdk qcow2 vdi"
-        self.cmd_common = "runqemu nographic"
-
-        # Avoid emitting the same record multiple times.
-        mainlogger = logging.getLogger("BitBake.Main")
-        mainlogger.propagate = False
-
-        self.write_config(
-"""
-MACHINE = "%s"
-IMAGE_FSTYPES = "%s"
-# 10 means 1 second
-SYSLINUX_TIMEOUT = "10"
-"""
-% (self.machine, self.fstypes)
-        )
-
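-        # setUpLocal runs before every test; the class-level flag makes sure the
-        # image is only built once for the whole test class.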
-        if not RunqemuTests.image_is_ready:
-            RunqemuTests.deploy_dir_image = get_bb_var('DEPLOY_DIR_IMAGE')
-            bitbake(self.recipe)
-            RunqemuTests.image_is_ready = True
-
-    @testcase(2001)
-    def test_boot_machine(self):
-        """Test runqemu machine"""
-        cmd = "%s %s" % (self.cmd_common, self.machine)
-        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
-            self.assertTrue(qemu.runner.logged, "Failed: %s" % cmd)
-
-    @testcase(2002)
-    def test_boot_machine_ext4(self):
-        """Test runqemu machine ext4"""
-        cmd = "%s %s ext4" % (self.cmd_common, self.machine)
-        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
-            with open(qemu.qemurunnerlog) as f:
-                self.assertTrue('rootfs.ext4' in f.read(), "Failed: %s" % cmd)
-
-    @testcase(2003)
-    def test_boot_machine_iso(self):
-        """Test runqemu machine iso"""
-        cmd = "%s %s iso" % (self.cmd_common, self.machine)
-        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
-            with open(qemu.qemurunnerlog) as f:
-                self.assertTrue(' -cdrom ' in f.read(), "Failed: %s" % cmd)
-
-    @testcase(2004)
-    def test_boot_recipe_image(self):
-        """Test runqemu recipe-image"""
-        cmd = "%s %s" % (self.cmd_common, self.recipe)
-        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
-            self.assertTrue(qemu.runner.logged, "Failed: %s" % cmd)
-
-    @testcase(2005)
-    def test_boot_recipe_image_vmdk(self):
-        """Test runqemu recipe-image vmdk"""
-        cmd = "%s %s vmdk" % (self.cmd_common, self.recipe)
-        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
-            with open(qemu.qemurunnerlog) as f:
-                self.assertTrue('format=vmdk' in f.read(), "Failed: %s" % cmd)
-
-    @testcase(2006)
-    def test_boot_recipe_image_vdi(self):
-        """Test runqemu recipe-image vdi"""
-        cmd = "%s %s vdi" % (self.cmd_common, self.recipe)
-        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
-            with open(qemu.qemurunnerlog) as f:
-                self.assertTrue('format=vdi' in f.read(), "Failed: %s" % cmd)
-
-    @testcase(2007)
-    def test_boot_deploy(self):
-        """Test runqemu deploy_dir_image"""
-        cmd = "%s %s" % (self.cmd_common, self.deploy_dir_image)
-        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
-            self.assertTrue(qemu.runner.logged, "Failed: %s" % cmd)
-
-    @testcase(2008)
-    def test_boot_deploy_hddimg(self):
-        """Test runqemu deploy_dir_image hddimg"""
-        cmd = "%s %s hddimg" % (self.cmd_common, self.deploy_dir_image)
-        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
-            with open(qemu.qemurunnerlog) as f:
-                self.assertTrue(re.search('file=.*.hddimg', f.read()), "Failed: %s" % cmd)
-
-    @testcase(2009)
-    def test_boot_machine_slirp(self):
-        """Test runqemu machine slirp"""
-        cmd = "%s slirp %s" % (self.cmd_common, self.machine)
-        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
-            with open(qemu.qemurunnerlog) as f:
-                self.assertTrue(' -netdev user' in f.read(), "Failed: %s" % cmd)
-
-    @testcase(2009)
-    def test_boot_machine_slirp_qcow2(self):
-        """Test runqemu machine slirp qcow2"""
-        cmd = "%s slirp qcow2 %s" % (self.cmd_common, self.machine)
-        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
-            with open(qemu.qemurunnerlog) as f:
-                self.assertTrue('format=qcow2' in f.read(), "Failed: %s" % cmd)
-
-    @testcase(2010)
-    def test_boot_qemu_boot(self):
-        """Test runqemu /path/to/image.qemuboot.conf"""
-        qemuboot_conf = "%s-%s.qemuboot.conf" % (self.recipe, self.machine)
-        qemuboot_conf = os.path.join(self.deploy_dir_image, qemuboot_conf)
-        if not os.path.exists(qemuboot_conf):
-            self.skipTest("%s not found" % qemuboot_conf)
-        cmd = "%s %s" % (self.cmd_common, qemuboot_conf)
-        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
-            self.assertTrue(qemu.runner.logged, "Failed: %s" % cmd)
-
-    @testcase(2011)
-    def test_boot_rootfs(self):
-        """Test runqemu /path/to/rootfs.ext4"""
-        rootfs = "%s-%s.ext4" % (self.recipe, self.machine)
-        rootfs = os.path.join(self.deploy_dir_image, rootfs)
-        if not os.path.exists(rootfs):
-            self.skipTest("%s not found" % rootfs)
-        cmd = "%s %s" % (self.cmd_common, rootfs)
-        with runqemu(self.recipe, ssh=False, launch_cmd=cmd) as qemu:
-            self.assertTrue(qemu.runner.logged, "Failed: %s" % cmd)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/runtime-test.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/runtime-test.py
deleted file mode 100644
index e498d04..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/runtime-test.py
+++ /dev/null
@@ -1,237 +0,0 @@
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars, runqemu
-from oeqa.utils.decorators import testcase
-import os
-import re
-
-class TestExport(oeSelfTest):
-
-    @classmethod
-    def tearDownClass(cls):
-        runCmd("rm -rf /tmp/sdk")
-
-    def test_testexport_basic(self):
-        """
-        Summary: Check basic testexport functionality with only ping test enabled.
-        Expected: 1. testexport directory must be created.
-                  2. The exported test runner (oe-test) must run without any error/exception.
-                  3. ping test must succeed.
-        Product: oe-core
-        Author: Mariano Lopez <mariano.lopez@intel.com>
-        """
-
-        features = 'INHERIT += "testexport"\n'
-        # These aren't the actual IP addresses but testexport class needs something defined
-        features += 'TEST_SERVER_IP = "192.168.7.1"\n'
-        features += 'TEST_TARGET_IP = "192.168.7.1"\n'
-        features += 'TEST_SUITES = "ping"\n'
-        self.write_config(features)
-
-        # Build testexport for core-image-minimal
-        bitbake('core-image-minimal')
-        bitbake('-c testexport core-image-minimal')
-
-        testexport_dir = get_bb_var('TEST_EXPORT_DIR', 'core-image-minimal')
-
-        # Verify if TEST_EXPORT_DIR was created
-        isdir = os.path.isdir(testexport_dir)
-        self.assertEqual(True, isdir, 'Failed to create testexport dir: %s' % testexport_dir)
-
-        with runqemu('core-image-minimal') as qemu:
-            # Attempt to run the exported tests (oe-test runtime) to perform the ping test
-            test_path = os.path.join(testexport_dir, "oe-test")
-            data_file = os.path.join(testexport_dir, 'data', 'testdata.json')
-            manifest = os.path.join(testexport_dir, 'data', 'manifest')
-            cmd = ("%s runtime --test-data-file %s --packages-manifest %s "
-                   "--target-ip %s --server-ip %s --quiet"
-                  % (test_path, data_file, manifest, qemu.ip, qemu.server_ip))
-            result = runCmd(cmd)
-            # Verify the ping test was successful
-            self.assertEqual(0, result.status, 'oe-test runtime returned a non-zero status')
-
-    def test_testexport_sdk(self):
-        """
-        Summary: Check sdk functionality for testexport.
-        Expected: 1. testexport directory must be created.
-                  2. SDK tarball must exist.
-                  3. Uncompressing of tarball must succeed.
-                  4. Check if the SDK directory is added to PATH.
-                  5. Run tar from the SDK directory.
-        Product: oe-core
-        Author: Mariano Lopez <mariano.lopez@intel.com>
-        """
-
-        features = 'INHERIT += "testexport"\n'
-        # These aren't the actual IP addresses but testexport class needs something defined
-        features += 'TEST_SERVER_IP = "192.168.7.1"\n'
-        features += 'TEST_TARGET_IP = "192.168.7.1"\n'
-        features += 'TEST_SUITES = "ping"\n'
-        features += 'TEST_EXPORT_SDK_ENABLED = "1"\n'
-        features += 'TEST_EXPORT_SDK_PACKAGES = "nativesdk-tar"\n'
-        self.write_config(features)
-
-        # Build testexport for core-image-minimal
-        bitbake('core-image-minimal')
-        bitbake('-c testexport core-image-minimal')
-
-        needed_vars = ['TEST_EXPORT_DIR', 'TEST_EXPORT_SDK_DIR', 'TEST_EXPORT_SDK_NAME']
-        bb_vars = get_bb_vars(needed_vars, 'core-image-minimal')
-        testexport_dir = bb_vars['TEST_EXPORT_DIR']
-        sdk_dir = bb_vars['TEST_EXPORT_SDK_DIR']
-        sdk_name = bb_vars['TEST_EXPORT_SDK_NAME']
-
-        # Check for SDK
-        tarball_name = "%s.sh" % sdk_name
-        tarball_path = os.path.join(testexport_dir, sdk_dir, tarball_name)
-        msg = "Couldn't find SDK tarball: %s" % tarball_path
-        self.assertEqual(os.path.isfile(tarball_path), True, msg)
-
-        # Extract SDK and run tar from SDK
-        result = runCmd("%s -y -d /tmp/sdk" % tarball_path)
-        self.assertEqual(0, result.status, "Couldn't extract SDK")
-
-        env_script = result.output.split()[-1]
-        result = runCmd(". %s; which tar" % env_script, shell=True)
-        self.assertEqual(0, result.status, "Couldn't setup SDK environment")
-        is_sdk_tar = "/tmp/sdk" in result.output
-        self.assertTrue(is_sdk_tar, "Couldn't setup SDK environment")
-
-        tar_sdk = result.output
-        result = runCmd("%s --version" % tar_sdk)
-        self.assertEqual(0, result.status, "Couldn't run tar from SDK")
-
-
-class TestImage(oeSelfTest):
-
-    def test_testimage_install(self):
-        """
-        Summary: Check install packages functionality for testimage/testexport.
-        Expected: 1. Import tests from a directory other than meta.
-                  2. Check install/uninstall of socat.
-                  3. Check that remote package feeds can be accessed
-        Product: oe-core
-        Author: Mariano Lopez <mariano.lopez@intel.com>
-        Author: Alexander Kanavin <alexander.kanavin@intel.com>
-        """
-        if get_bb_var('DISTRO') == 'poky-tiny':
-            self.skipTest('core-image-full-cmdline not buildable for poky-tiny')
-
-        features = 'INHERIT += "testimage"\n'
-        features += 'TEST_SUITES = "ping ssh selftest"\n'
-        # We don't yet know what the server ip and port will be - they will be patched
-        # in at the start of the on-image test
-        features += 'PACKAGE_FEED_URIS = "http://bogus_ip:bogus_port"\n'
-        features += 'EXTRA_IMAGE_FEATURES += "package-management"\n'
-        features += 'PACKAGE_CLASSES = "package_rpm"'
-        self.write_config(features)
-
-        # Build core-image-full-cmdline and socat, then run testimage
-        bitbake('core-image-full-cmdline socat')
-        bitbake('-c testimage core-image-full-cmdline')
-
-class Postinst(oeSelfTest):
-    @testcase(1540)
-    def test_verify_postinst(self):
-        """
-        Summary: The purpose of this test is to verify the execution order of the postinst scripts. Bugzilla ID: [5319]
-        Expected:
-        1. Compile a minimal image.
-        2. The image includes the postinst test recipes (postinst-at-rootfs and postinst-delayed-{a,b,d,p,t}).
-        3. Run qemux86.
-        4. Validate the postinst execution order.
-        Author: Francisco Pedraza <francisco.j.pedraza.gonzalez@intel.com>
-        """
-        features = 'INHERIT += "testimage"\n'
-        features += 'CORE_IMAGE_EXTRA_INSTALL += "postinst-at-rootfs \
-postinst-delayed-a \
-postinst-delayed-b \
-postinst-delayed-d \
-postinst-delayed-p \
-postinst-delayed-t \
-"\n'
-        self.write_config(features)
-
-        bitbake('core-image-minimal -f ')
-
-        postinst_list = ['100-postinst-at-rootfs',
-                         '101-postinst-delayed-a',
-                         '102-postinst-delayed-b',
-                         '103-postinst-delayed-d',
-                         '104-postinst-delayed-p',
-                         '105-postinst-delayed-t']
-        path_workdir = get_bb_var('WORKDIR','core-image-minimal')
-        workspacedir = 'testimage/qemu_boot_log'
-        workspacedir = os.path.join(path_workdir, workspacedir)
-        rexp = re.compile(r"^Running postinst .*/(?P<postinst>.*)\.\.\.$")
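-        # Each 'Running postinst .../<name>...' line in the boot log is matched and
-        # the captured names must appear in exactly the order of postinst_list.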
-        with runqemu('core-image-minimal') as qemu:
-            with open(workspacedir) as f:
-                found = False
-                idx = 0
-                for line in f.readlines():
-                    line = line.strip().replace("^M","")
-                    if not line: # To avoid empty lines
-                        continue
-                    m = rexp.search(line)
-                    if m:
-                        self.assertEqual(postinst_list[idx], m.group('postinst'), "Postinst scripts did not run in the expected order")
-                        idx += 1
-                        found = True
-                    elif found:
-                        self.assertEqual(idx, len(postinst_list), "Did not find all expected postinst scripts")
-                        break
-
-    @testcase(1545)
-    def test_postinst_rootfs_and_boot(self):
-        """
-        Summary:        The purpose of this test case is to verify Post-installation
-                        scripts are called when rootfs is created and also test
-                        that script can be delayed to run at first boot.
-        Dependencies:   NA
-        Steps:          1. Add proper configuration to local.conf file
-                        2. Build a "core-image-minimal" image
-                        3. Verify that file created by postinst_rootfs recipe is
-                           present on rootfs dir.
-                        4. Boot the image created on qemu and verify that the file
-                           created by postinst_boot recipe is present on image.
-        Expected:       The files are successfully created during rootfs and boot
-                        time for 3 different package managers (rpm, ipk, deb) and
-                        for the init managers sysvinit and systemd.
-
-        """
-        file_rootfs_name = "this-was-created-at-rootfstime"
-        fileboot_name = "this-was-created-at-first-boot"
-        rootfs_pkg = 'postinst-at-rootfs'
-        boot_pkg = 'postinst-delayed-a'
-        #Step 1
-        features = 'MACHINE = "qemux86"\n'
-        features += 'CORE_IMAGE_EXTRA_INSTALL += "%s %s "\n'% (rootfs_pkg, boot_pkg)
-        features += 'IMAGE_FEATURES += "ssh-server-openssh"\n'
-        for init_manager in ("sysvinit", "systemd"):
-            # for sysvinit no extra configuration is needed
-            if init_manager == "systemd":
-                features += 'DISTRO_FEATURES_append = " systemd"\n'
-                features += 'VIRTUAL-RUNTIME_init_manager = "systemd"\n'
-                features += 'DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"\n'
-                features += 'VIRTUAL-RUNTIME_initscripts = ""\n'
-            for classes in ("package_rpm package_deb package_ipk",
-                            "package_deb package_rpm package_ipk",
-                            "package_ipk package_deb package_rpm"):
-                features += 'PACKAGE_CLASSES = "%s"\n' % classes
-                self.write_config(features)
-
-                #Step 2
-                bitbake('core-image-minimal')
-
-                #Step 3
-                file_rootfs_created = os.path.join(get_bb_var('IMAGE_ROOTFS',"core-image-minimal"),
-                                                   file_rootfs_name)
-                found = os.path.isfile(file_rootfs_created)
-                self.assertTrue(found, "File %s was not created at rootfs time by %s" % \
-                                (file_rootfs_name, rootfs_pkg))
-
-                #Step 4
-                testcommand = 'ls /etc/'+fileboot_name
-                with runqemu('core-image-minimal') as qemu:
-                    sshargs = '-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'
-                    result = runCmd('ssh %s root@%s %s' % (sshargs, qemu.ip, testcommand))
-                    self.assertEqual(result.status, 0, 'File %s was not created at first boot' % fileboot_name)
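
Editor's note: a minimal standalone sketch (not part of the deleted test; the log path and script names below are illustrative) of the boot-log check used above -- scan a QEMU console log for "Running postinst ..." lines and confirm the scripts ran in the expected order.

import re

def check_postinst_order(log_path, expected):
    """Return True if the postinst scripts in `expected` appear in the log in order."""
    pattern = re.compile(r"^Running postinst .*/(?P<postinst>.*)\.\.\.$")
    seen = []
    with open(log_path) as log:
        for line in log:
            match = pattern.search(line.strip().replace("^M", ""))
            if match:
                seen.append(match.group("postinst"))
    return seen == expected

# Hypothetical usage:
# check_postinst_order("testimage/qemu_boot_log",
#                      ["100-postinst-at-rootfs", "101-postinst-delayed-a"])
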
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/signing.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/signing.py
deleted file mode 100644
index 0ac3d1fa..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/signing.py
+++ /dev/null
@@ -1,183 +0,0 @@
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars
-import os
-import glob
-import re
-import shutil
-import tempfile
-from oeqa.utils.decorators import testcase
-from oeqa.utils.ftools import write_file
-
-
-class Signing(oeSelfTest):
-
-    gpg_dir = ""
-    pub_key_path = ""
-    secret_key_path = ""
-
-    @classmethod
-    def setUpClass(cls):
-        # Check that we can find the gpg binary and fail early if we can't
-        if not shutil.which("gpg"):
-            raise AssertionError("This test needs GnuPG")
-
-        cls.gpg_home_dir = tempfile.TemporaryDirectory(prefix="oeqa-signing-")
-        cls.gpg_dir = cls.gpg_home_dir.name
-
-        cls.pub_key_path = os.path.join(cls.testlayer_path, 'files', 'signing', "key.pub")
-        cls.secret_key_path = os.path.join(cls.testlayer_path, 'files', 'signing', "key.secret")
-
-        runCmd('gpg --batch --homedir %s --import %s %s' % (cls.gpg_dir, cls.pub_key_path, cls.secret_key_path))
-
-    @testcase(1362)
-    def test_signing_packages(self):
-        """
-        Summary:     Test that packages can be signed in the package feed
-        Expected:    Package should be signed with the correct key
-        Expected:    Images can be created from signed packages
-        Product:     oe-core
-        Author:      Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        Author:      Alexander Kanavin <alexander.kanavin@intel.com>
-        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        """
-        import oe.packagedata
-
-        package_classes = get_bb_var('PACKAGE_CLASSES')
-        if 'package_rpm' not in package_classes:
-            self.skipTest('This test requires RPM Packaging.')
-
-        test_recipe = 'ed'
-
-        feature = 'INHERIT += "sign_rpm"\n'
-        feature += 'RPM_GPG_PASSPHRASE = "test123"\n'
-        feature += 'RPM_GPG_NAME = "testuser"\n'
-        feature += 'GPG_PATH = "%s"\n' % self.gpg_dir
-
-        self.write_config(feature)
-
-        bitbake('-c clean %s' % test_recipe)
-        bitbake('-f -c package_write_rpm %s' % test_recipe)
-
-        self.add_command_to_tearDown('bitbake -c clean %s' % test_recipe)
-
-        needed_vars = ['PKGDATA_DIR', 'DEPLOY_DIR_RPM', 'PACKAGE_ARCH', 'STAGING_BINDIR_NATIVE']
-        bb_vars = get_bb_vars(needed_vars, test_recipe)
-        pkgdatadir = bb_vars['PKGDATA_DIR']
-        pkgdata = oe.packagedata.read_pkgdatafile(pkgdatadir + "/runtime/ed")
-        if 'PKGE' in pkgdata:
-            pf = pkgdata['PN'] + "-" + pkgdata['PKGE'] + pkgdata['PKGV'] + '-' + pkgdata['PKGR']
-        else:
-            pf = pkgdata['PN'] + "-" + pkgdata['PKGV'] + '-' + pkgdata['PKGR']
-        deploy_dir_rpm = bb_vars['DEPLOY_DIR_RPM']
-        package_arch = bb_vars['PACKAGE_ARCH'].replace('-', '_')
-        staging_bindir_native = bb_vars['STAGING_BINDIR_NATIVE']
-
-        pkg_deploy = os.path.join(deploy_dir_rpm, package_arch, '.'.join((pf, package_arch, 'rpm')))
-
-        # Use a temporary rpmdb
-        rpmdb = tempfile.mkdtemp(prefix='oeqa-rpmdb')
-
-        runCmd('%s/rpmkeys --define "_dbpath %s" --import %s' %
-               (staging_bindir_native, rpmdb, self.pub_key_path))
-
-        ret = runCmd('%s/rpmkeys --define "_dbpath %s" --checksig %s' %
-                     (staging_bindir_native, rpmdb, pkg_deploy))
-        # tmp/deploy/rpm/i586/ed-1.9-r0.i586.rpm: rsa sha1 md5 OK
-        self.assertIn('rsa sha1 (md5) pgp md5 OK', ret.output, 'Package signed incorrectly.')
-        shutil.rmtree(rpmdb)
-
-        #Check that an image can be built from signed packages
-        self.add_command_to_tearDown('bitbake -c clean core-image-minimal')
-        bitbake('-c clean core-image-minimal')
-        bitbake('core-image-minimal')
-
-
-    @testcase(1382)
-    def test_signing_sstate_archive(self):
-        """
-        Summary:     Test that sstate archives can be signed
-        Expected:    Package should be signed with the correct key
-        Product:     oe-core
-        Author:      Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        """
-
-        test_recipe = 'ed'
-
-        builddir = os.environ.get('BUILDDIR')
-        sstatedir = os.path.join(builddir, 'test-sstate')
-
-        self.add_command_to_tearDown('bitbake -c clean %s' % test_recipe)
-        self.add_command_to_tearDown('rm -rf %s' % sstatedir)
-
-        feature = 'SSTATE_SIG_KEY ?= "testuser"\n'
-        feature += 'SSTATE_SIG_PASSPHRASE ?= "test123"\n'
-        feature += 'SSTATE_VERIFY_SIG ?= "1"\n'
-        feature += 'GPG_PATH = "%s"\n' % self.gpg_dir
-        feature += 'SSTATE_DIR = "%s"\n' % sstatedir
-        # Any mirror might have partial sstate without .sig files, triggering failures
-        feature += 'SSTATE_MIRRORS_forcevariable = ""\n'
-
-        self.write_config(feature)
-
-        bitbake('-c clean %s' % test_recipe)
-        bitbake(test_recipe)
-
-        recipe_sig = glob.glob(sstatedir + '/*/*:ed:*_package.tgz.sig')
-        recipe_tgz = glob.glob(sstatedir + '/*/*:ed:*_package.tgz')
-
-        self.assertEqual(len(recipe_sig), 1, 'Failed to find .sig file.')
-        self.assertEqual(len(recipe_tgz), 1, 'Failed to find .tgz file.')
-
-        ret = runCmd('gpg --homedir %s --verify %s %s' % (self.gpg_dir, recipe_sig[0], recipe_tgz[0]))
-        # gpg: Signature made Thu 22 Oct 2015 01:45:09 PM EEST using RSA key ID 61EEFB30
-        # gpg: Good signature from "testuser (nocomment) <testuser@email.com>"
-        self.assertIn('gpg: Good signature from', ret.output, 'Package signed incorrectly.')
-
-
-class LockedSignatures(oeSelfTest):
-
-    @testcase(1420)
-    def test_locked_signatures(self):
-        """
-        Summary:     Test locked signature mechanism
-        Expected:    Locked signatures will prevent the task from running
-        Product:     oe-core
-        Author:      Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        AutomatedBy: Daniel Istrate <daniel.alexandrux.istrate@intel.com>
-        """
-
-        test_recipe = 'ed'
-        locked_sigs_file = 'locked-sigs.inc'
-
-        self.add_command_to_tearDown('rm -f %s' % os.path.join(self.builddir, locked_sigs_file))
-
-        bitbake(test_recipe)
-        # Generate locked sigs include file
-        bitbake('-S none %s' % test_recipe)
-
-        feature = 'require %s\n' % locked_sigs_file
-        feature += 'SIGGEN_LOCKEDSIGS_TASKSIG_CHECK = "warn"\n'
-        self.write_config(feature)
-
-        # Build a locked recipe
-        bitbake(test_recipe)
-
-        # Make a change that should cause the locked task signature to change
-        recipe_append_file = test_recipe + '_' + get_bb_var('PV', test_recipe) + '.bbappend'
-        recipe_append_path = os.path.join(self.testlayer_path, 'recipes-test', test_recipe, recipe_append_file)
-        feature = 'SUMMARY += "test locked signature"\n'
-
-        os.mkdir(os.path.join(self.testlayer_path, 'recipes-test', test_recipe))
-        write_file(recipe_append_path, feature)
-
-        self.add_command_to_tearDown('rm -rf %s' % os.path.join(self.testlayer_path, 'recipes-test', test_recipe))
-
-        # Build the recipe again
-        ret = bitbake(test_recipe)
-
-        # Verify you get the warning and that the real task *isn't* run (i.e. the locked signature has worked)
-        patt = r'WARNING: The %s:do_package sig is computed to be \S+, but the sig is locked to \S+ in SIGGEN_LOCKEDSIGS\S+' % test_recipe
-        found_warn = re.search(patt, ret.output)
-
-        self.assertIsNotNone(found_warn, "Didn't find the expected warning message. Output: %s" % ret.output)
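
Editor's note: a hedged sketch (not from the deleted file) of the two signature checks the tests above drive through runCmd, expressed with plain subprocess calls; the paths, key names and tool locations are illustrative assumptions.

import subprocess

def gpg_verify(gpg_home, sig_file, data_file):
    """Verify a detached GPG signature, as test_signing_sstate_archive does."""
    result = subprocess.run(["gpg", "--homedir", gpg_home, "--verify", sig_file, data_file],
                            capture_output=True, text=True)
    # gpg reports "Good signature from ..." on stderr
    return result.returncode == 0 and "Good signature from" in result.stderr

def rpm_checksig(rpmkeys, rpmdb, pub_key, package):
    """Import a public key into a temporary rpmdb and check a package signature."""
    subprocess.run([rpmkeys, "--define", "_dbpath %s" % rpmdb, "--import", pub_key], check=True)
    result = subprocess.run([rpmkeys, "--define", "_dbpath %s" % rpmdb, "--checksig", package],
                            capture_output=True, text=True, check=True)
    return "OK" in result.stdout
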
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/sstate.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/sstate.py
deleted file mode 100644
index f54bc41..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/sstate.py
+++ /dev/null
@@ -1,62 +0,0 @@
-import datetime
-import unittest
-import os
-import re
-import shutil
-
-import oeqa.utils.ftools as ftools
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_vars, get_test_layer
-
-
-class SStateBase(oeSelfTest):
-
-    def setUpLocal(self):
-        self.temp_sstate_location = None
-        needed_vars = ['SSTATE_DIR', 'NATIVELSBSTRING', 'TCLIBC', 'TUNE_ARCH',
-                       'TOPDIR', 'TARGET_VENDOR', 'TARGET_OS']
-        bb_vars = get_bb_vars(needed_vars)
-        self.sstate_path = bb_vars['SSTATE_DIR']
-        self.hostdistro = bb_vars['NATIVELSBSTRING']
-        self.tclibc = bb_vars['TCLIBC']
-        self.tune_arch = bb_vars['TUNE_ARCH']
-        self.topdir = bb_vars['TOPDIR']
-        self.target_vendor = bb_vars['TARGET_VENDOR']
-        self.target_os = bb_vars['TARGET_OS']
-        self.distro_specific_sstate = os.path.join(self.sstate_path, self.hostdistro)
-
-    # Creates a special sstate configuration with the option to add sstate mirrors
-    def config_sstate(self, temp_sstate_location=False, add_local_mirrors=[]):
-        self.temp_sstate_location = temp_sstate_location
-
-        if self.temp_sstate_location:
-            temp_sstate_path = os.path.join(self.builddir, "temp_sstate_%s" % datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
-            config_temp_sstate = "SSTATE_DIR = \"%s\"" % temp_sstate_path
-            self.append_config(config_temp_sstate)
-            self.track_for_cleanup(temp_sstate_path)
-        bb_vars = get_bb_vars(['SSTATE_DIR', 'NATIVELSBSTRING'])
-        self.sstate_path = bb_vars['SSTATE_DIR']
-        self.hostdistro = bb_vars['NATIVELSBSTRING']
-        self.distro_specific_sstate = os.path.join(self.sstate_path, self.hostdistro)
-
-        if add_local_mirrors:
-            config_set_sstate_if_not_set = 'SSTATE_MIRRORS ?= ""'
-            self.append_config(config_set_sstate_if_not_set)
-            for local_mirror in add_local_mirrors:
-                self.assertFalse(os.path.join(local_mirror) == os.path.join(self.sstate_path), msg='Cannot add the current sstate path as a sstate mirror')
-                config_sstate_mirror = "SSTATE_MIRRORS += \"file://.* file:///%s/PATH\"" % local_mirror
-                self.append_config(config_sstate_mirror)
-
-    # Returns a list containing sstate files
-    def search_sstate(self, filename_regex, distro_specific=True, distro_nonspecific=True):
-        result = []
-        for root, dirs, files in os.walk(self.sstate_path):
-            if distro_specific and re.search("%s/[a-z0-9]{2}$" % self.hostdistro, root):
-                for f in files:
-                    if re.search(filename_regex, f):
-                        result.append(f)
-            if distro_nonspecific and re.search("%s/[a-z0-9]{2}$" % self.sstate_path, root):
-                for f in files:
-                    if re.search(filename_regex, f):
-                        result.append(f)
-        return result
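
Editor's note: an illustrative sketch (assuming only the directory layout that search_sstate() above relies on) of how distro-specific and distro-nonspecific sstate objects are told apart by path.

import os
import re

def classify_sstate_path(sstate_dir, hostdistro, filepath):
    """Return 'distro-specific', 'distro-nonspecific' or None for an sstate file path.

    Distro-specific objects live under SSTATE_DIR/<NATIVELSBSTRING>/<2 hex chars>/,
    distro-nonspecific ones directly under SSTATE_DIR/<2 hex chars>/.
    """
    parent = os.path.dirname(filepath)
    if re.search(r"%s/[a-z0-9]{2}$" % re.escape(os.path.join(sstate_dir, hostdistro)), parent):
        return "distro-specific"
    if re.search(r"%s/[a-z0-9]{2}$" % re.escape(sstate_dir), parent):
        return "distro-nonspecific"
    return None
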
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/sstatetests.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/sstatetests.py
deleted file mode 100644
index e35ddff..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/sstatetests.py
+++ /dev/null
@@ -1,483 +0,0 @@
-import datetime
-import unittest
-import os
-import re
-import shutil
-import glob
-import subprocess
-
-import oeqa.utils.ftools as ftools
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_test_layer
-from oeqa.selftest.sstate import SStateBase
-from oeqa.utils.decorators import testcase
-
-class SStateTests(SStateBase):
-
-    # Test sstate files creation and their location
-    def run_test_sstate_creation(self, targets, distro_specific=True, distro_nonspecific=True, temp_sstate_location=True, should_pass=True):
-        self.config_sstate(temp_sstate_location, [self.sstate_path])
-
-        if self.temp_sstate_location:
-            bitbake(['-cclean'] + targets)
-        else:
-            bitbake(['-ccleansstate'] + targets)
-
-        bitbake(targets)
-        file_tracker = []
-        results = self.search_sstate('|'.join(map(str, targets)), distro_specific, distro_nonspecific)
-        if distro_nonspecific:
-            for r in results:
-                if r.endswith(("_populate_lic.tgz", "_populate_lic.tgz.siginfo", "_fetch.tgz.siginfo", "_unpack.tgz.siginfo", "_patch.tgz.siginfo")):
-                    continue
-                file_tracker.append(r)
-        else:
-            file_tracker = results
-
-        if should_pass:
-            self.assertTrue(file_tracker, msg="Could not find sstate files for: %s" % ', '.join(map(str, targets)))
-        else:
-            self.assertTrue(not file_tracker, msg="Found sstate files in the wrong place for: %s (found %s)" % (', '.join(map(str, targets)), str(file_tracker)))
-
-    @testcase(975)
-    def test_sstate_creation_distro_specific_pass(self):
-        self.run_test_sstate_creation(['binutils-cross-'+ self.tune_arch, 'binutils-native'], distro_specific=True, distro_nonspecific=False, temp_sstate_location=True)
-
-    @testcase(1374)
-    def test_sstate_creation_distro_specific_fail(self):
-        self.run_test_sstate_creation(['binutils-cross-'+ self.tune_arch, 'binutils-native'], distro_specific=False, distro_nonspecific=True, temp_sstate_location=True, should_pass=False)
-
-    @testcase(976)
-    def test_sstate_creation_distro_nonspecific_pass(self):
-        self.run_test_sstate_creation(['linux-libc-headers'], distro_specific=False, distro_nonspecific=True, temp_sstate_location=True)
-
-    @testcase(1375)
-    def test_sstate_creation_distro_nonspecific_fail(self):
-        self.run_test_sstate_creation(['linux-libc-headers'], distro_specific=True, distro_nonspecific=False, temp_sstate_location=True, should_pass=False)
-
-    # Test the sstate files deletion part of the do_cleansstate task
-    def run_test_cleansstate_task(self, targets, distro_specific=True, distro_nonspecific=True, temp_sstate_location=True):
-        self.config_sstate(temp_sstate_location, [self.sstate_path])
-
-        bitbake(['-ccleansstate'] + targets)
-
-        bitbake(targets)
-        tgz_created = self.search_sstate('|'.join(map(str, [s + '.*?\.tgz$' for s in targets])), distro_specific, distro_nonspecific)
-        self.assertTrue(tgz_created, msg="Could not find sstate .tgz files for: %s (%s)" % (', '.join(map(str, targets)), str(tgz_created)))
-
-        siginfo_created = self.search_sstate('|'.join(map(str, [s + '.*?\.siginfo$' for s in targets])), distro_specific, distro_nonspecific)
-        self.assertTrue(siginfo_created, msg="Could not find sstate .siginfo files for: %s (%s)" % (', '.join(map(str, targets)), str(siginfo_created)))
-
-        bitbake(['-ccleansstate'] + targets)
-        tgz_removed = self.search_sstate('|'.join(map(str, [s + '.*?\.tgz$' for s in targets])), distro_specific, distro_nonspecific)
-        self.assertTrue(not tgz_removed, msg="do_cleansstate didn't remove .tgz sstate files for: %s (%s)" % (', '.join(map(str, targets)), str(tgz_removed)))
-
-    @testcase(977)
-    def test_cleansstate_task_distro_specific_nonspecific(self):
-        targets = ['binutils-cross-'+ self.tune_arch, 'binutils-native']
-        targets.append('linux-libc-headers')
-        self.run_test_cleansstate_task(targets, distro_specific=True, distro_nonspecific=True, temp_sstate_location=True)
-
-    @testcase(1376)
-    def test_cleansstate_task_distro_nonspecific(self):
-        self.run_test_cleansstate_task(['linux-libc-headers'], distro_specific=False, distro_nonspecific=True, temp_sstate_location=True)
-
-    @testcase(1377)
-    def test_cleansstate_task_distro_specific(self):
-        targets = ['binutils-cross-'+ self.tune_arch, 'binutils-native']
-        targets.append('linux-libc-headers')
-        self.run_test_cleansstate_task(targets, distro_specific=True, distro_nonspecific=False, temp_sstate_location=True)
-
-
-    # Test rebuilding of distro-specific sstate files
-    def run_test_rebuild_distro_specific_sstate(self, targets, temp_sstate_location=True):
-        self.config_sstate(temp_sstate_location, [self.sstate_path])
-
-        bitbake(['-ccleansstate'] + targets)
-
-        bitbake(targets)
-        results = self.search_sstate('|'.join(map(str, [s + '.*?\.tgz$' for s in targets])), distro_specific=False, distro_nonspecific=True)
-        filtered_results = []
-        for r in results:
-            if r.endswith(("_populate_lic.tgz", "_populate_lic.tgz.siginfo")):
-                continue
-            filtered_results.append(r)
-        self.assertTrue(filtered_results == [], msg="Found distro non-specific sstate for: %s (%s)" % (', '.join(map(str, targets)), str(filtered_results)))
-        file_tracker_1 = self.search_sstate('|'.join(map(str, [s + '.*?\.tgz$' for s in targets])), distro_specific=True, distro_nonspecific=False)
-        self.assertTrue(len(file_tracker_1) >= len(targets), msg="Not all sstate files were created for: %s" % ', '.join(map(str, targets)))
-
-        self.track_for_cleanup(self.distro_specific_sstate + "_old")
-        shutil.copytree(self.distro_specific_sstate, self.distro_specific_sstate + "_old")
-        shutil.rmtree(self.distro_specific_sstate)
-
-        bitbake(['-cclean'] + targets)
-        bitbake(targets)
-        file_tracker_2 = self.search_sstate('|'.join(map(str, [s + '.*?\.tgz$' for s in targets])), distro_specific=True, distro_nonspecific=False)
-        self.assertTrue(len(file_tracker_2) >= len(targets), msg="Not all sstate files were created for: %s" % ', '.join(map(str, targets)))
-
-        not_recreated = [x for x in file_tracker_1 if x not in file_tracker_2]
-        self.assertTrue(not_recreated == [], msg="The following sstate files were not recreated: %s" % ', '.join(map(str, not_recreated)))
-
-        created_once = [x for x in file_tracker_2 if x not in file_tracker_1]
-        self.assertTrue(created_once == [], msg="The following sstate files were created only in the second run: %s" % ', '.join(map(str, created_once)))
-
-    @testcase(175)
-    def test_rebuild_distro_specific_sstate_cross_native_targets(self):
-        self.run_test_rebuild_distro_specific_sstate(['binutils-cross-' + self.tune_arch, 'binutils-native'], temp_sstate_location=True)
-
-    @testcase(1372)
-    def test_rebuild_distro_specific_sstate_cross_target(self):
-        self.run_test_rebuild_distro_specific_sstate(['binutils-cross-' + self.tune_arch], temp_sstate_location=True)
-
-    @testcase(1373)
-    def test_rebuild_distro_specific_sstate_native_target(self):
-        self.run_test_rebuild_distro_specific_sstate(['binutils-native'], temp_sstate_location=True)
-
-
-    # Test the sstate-cache-management script. Each element in the global_config list is used with the corresponding element in the target_config list
-    # global_config elements (such as a change to MACHINE) are expected not to generate any sstate files that would be removed by sstate-cache-management.sh
-    def run_test_sstate_cache_management_script(self, target, global_config=[''], target_config=[''], ignore_patterns=[]):
-        self.assertTrue(global_config)
-        self.assertTrue(target_config)
-        self.assertTrue(len(global_config) == len(target_config), msg='Lists global_config and target_config should have the same number of elements')
-        self.config_sstate(temp_sstate_location=True, add_local_mirrors=[self.sstate_path])
-
-        # If buildhistory is enabled, we need to disable version-going-backwards
-        # QA checks for this test. It may report errors otherwise.
-        self.append_config('ERROR_QA_remove = "version-going-backwards"')
-
-        # For now this only checks if random sstate tasks are handled correctly as a group.
-        # In the future we should add control over what tasks we check for.
-
-        sstate_archs_list = []
-        expected_remaining_sstate = []
-        for idx in range(len(target_config)):
-            self.append_config(global_config[idx])
-            self.append_recipeinc(target, target_config[idx])
-            sstate_arch = get_bb_var('SSTATE_PKGARCH', target)
-            if sstate_arch not in sstate_archs_list:
-                sstate_archs_list.append(sstate_arch)
-            if target_config[idx] == target_config[-1]:
-                target_sstate_before_build = self.search_sstate(target + '.*?\.tgz$')
-            bitbake("-cclean %s" % target)
-            result = bitbake(target, ignore_status=True)
-            if target_config[idx] == target_config[-1]:
-                target_sstate_after_build = self.search_sstate(target + '.*?\.tgz$')
-                expected_remaining_sstate += [x for x in target_sstate_after_build if x not in target_sstate_before_build if not any(pattern in x for pattern in ignore_patterns)]
-            self.remove_config(global_config[idx])
-            self.remove_recipeinc(target, target_config[idx])
-            self.assertEqual(result.status, 0, msg = "build of %s failed with %s" % (target, result.output))
-
-        runCmd("sstate-cache-management.sh -y --cache-dir=%s --remove-duplicated --extra-archs=%s" % (self.sstate_path, ','.join(map(str, sstate_archs_list))))
-        actual_remaining_sstate = [x for x in self.search_sstate(target + '.*?\.tgz$') if not any(pattern in x for pattern in ignore_patterns)]
-
-        actual_not_expected = [x for x in actual_remaining_sstate if x not in expected_remaining_sstate]
-        self.assertFalse(actual_not_expected, msg="Files should have been removed but ware not: %s" % ', '.join(map(str, actual_not_expected)))
-        expected_not_actual = [x for x in expected_remaining_sstate if x not in actual_remaining_sstate]
-        self.assertFalse(expected_not_actual, msg="Extra files ware removed: %s" ', '.join(map(str, expected_not_actual)))
-
-    @testcase(973)
-    def test_sstate_cache_management_script_using_pr_1(self):
-        global_config = []
-        target_config = []
-        global_config.append('')
-        target_config.append('PR = "0"')
-        self.run_test_sstate_cache_management_script('m4', global_config,  target_config, ignore_patterns=['populate_lic'])
-
-    @testcase(978)
-    def test_sstate_cache_management_script_using_pr_2(self):
-        global_config = []
-        target_config = []
-        global_config.append('')
-        target_config.append('PR = "0"')
-        global_config.append('')
-        target_config.append('PR = "1"')
-        self.run_test_sstate_cache_management_script('m4', global_config,  target_config, ignore_patterns=['populate_lic'])
-
-    @testcase(979)
-    def test_sstate_cache_management_script_using_pr_3(self):
-        global_config = []
-        target_config = []
-        global_config.append('MACHINE = "qemux86-64"')
-        target_config.append('PR = "0"')
-        global_config.append(global_config[0])
-        target_config.append('PR = "1"')
-        global_config.append('MACHINE = "qemux86"')
-        target_config.append('PR = "1"')
-        self.run_test_sstate_cache_management_script('m4', global_config,  target_config, ignore_patterns=['populate_lic'])
-
-    @testcase(974)
-    def test_sstate_cache_management_script_using_machine(self):
-        global_config = []
-        target_config = []
-        global_config.append('MACHINE = "qemux86-64"')
-        target_config.append('')
-        global_config.append('MACHINE = "qemux86"')
-        target_config.append('')
-        self.run_test_sstate_cache_management_script('m4', global_config,  target_config, ignore_patterns=['populate_lic'])
-
-    @testcase(1270)
-    def test_sstate_32_64_same_hash(self):
-        """
-        The sstate checksums for both native and target should not vary whether
-        they're built on a 32 or 64 bit system. Rather than requiring two different
-        build machines and running a builds, override the variables calling uname()
-        manually and check using bitbake -S.
-        """
-
-        self.write_config("""
-MACHINE = "qemux86"
-TMPDIR = "${TOPDIR}/tmp-sstatesamehash"
-BUILD_ARCH = "x86_64"
-BUILD_OS = "linux"
-SDKMACHINE = "x86_64"
-PACKAGE_CLASSES = "package_rpm package_ipk package_deb"
-""")
-        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash")
-        bitbake("core-image-sato -S none")
-        self.write_config("""
-MACHINE = "qemux86"
-TMPDIR = "${TOPDIR}/tmp-sstatesamehash2"
-BUILD_ARCH = "i686"
-BUILD_OS = "linux"
-SDKMACHINE = "i686"
-PACKAGE_CLASSES = "package_rpm package_ipk package_deb"
-""")
-        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash2")
-        bitbake("core-image-sato -S none")
-
-        def get_files(d):
-            f = []
-            for root, dirs, files in os.walk(d):
-                if "core-image-sato" in root:
-                    # Changing SDKMACHINE will change the
-                    # do_rootfs/do_testimage/do_build stamps of images, which
-                    # are safe to ignore.
-                    continue
-                f.extend(os.path.join(root, name) for name in files)
-            return f
-        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps/")
-        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps/")
-        files2 = [x.replace("tmp-sstatesamehash2", "tmp-sstatesamehash").replace("i686-linux", "x86_64-linux").replace("i686" + self.target_vendor + "-linux", "x86_64" + self.target_vendor + "-linux", ) for x in files2]
-        self.maxDiff = None
-        self.assertCountEqual(files1, files2)
-
-
-    @testcase(1271)
-    def test_sstate_nativelsbstring_same_hash(self):
-        """
-        The sstate checksums should be independent of whichever NATIVELSBSTRING is
-        detected. Rather than requiring two different build machines and running
-        builds, override the variables manually and check using bitbake -S.
-        """
-
-        self.write_config("""
-TMPDIR = \"${TOPDIR}/tmp-sstatesamehash\"
-NATIVELSBSTRING = \"DistroA\"
-""")
-        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash")
-        bitbake("core-image-sato -S none")
-        self.write_config("""
-TMPDIR = \"${TOPDIR}/tmp-sstatesamehash2\"
-NATIVELSBSTRING = \"DistroB\"
-""")
-        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash2")
-        bitbake("core-image-sato -S none")
-
-        def get_files(d):
-            f = []
-            for root, dirs, files in os.walk(d):
-                f.extend(os.path.join(root, name) for name in files)
-            return f
-        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps/")
-        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps/")
-        files2 = [x.replace("tmp-sstatesamehash2", "tmp-sstatesamehash") for x in files2]
-        self.maxDiff = None
-        self.assertCountEqual(files1, files2)
-
-    @testcase(1368)
-    def test_sstate_allarch_samesigs(self):
-        """
-        The sstate checksums of allarch packages should be independent of whichever
-        MACHINE is set. Check this using bitbake -S.
-        Also, rather than duplicate the test, check nativesdk stamps are the same between
-        the two MACHINE values.
-        """
-
-        configA = """
-TMPDIR = \"${TOPDIR}/tmp-sstatesamehash\"
-MACHINE = \"qemux86-64\"
-"""
-        configB = """
-TMPDIR = \"${TOPDIR}/tmp-sstatesamehash2\"
-MACHINE = \"qemuarm\"
-"""
-        self.sstate_allarch_samesigs(configA, configB)
-
-    def test_sstate_allarch_samesigs_multilib(self):
-        """
-        The sstate checksums of allarch multilib packages should be independent of whichever
-        MACHINE is set. Check this using bitbake -S.
-        Also, rather than duplicate the test, check nativesdk stamps are the same between
-        the two MACHINE values.
-        """
-
-        configA = """
-TMPDIR = \"${TOPDIR}/tmp-sstatesamehash\"
-MACHINE = \"qemux86-64\"
-require conf/multilib.conf
-MULTILIBS = \"multilib:lib32\"
-DEFAULTTUNE_virtclass-multilib-lib32 = \"x86\"
-"""
-        configB = """
-TMPDIR = \"${TOPDIR}/tmp-sstatesamehash2\"
-MACHINE = \"qemuarm\"
-require conf/multilib.conf
-MULTILIBS = \"\"
-"""
-        self.sstate_allarch_samesigs(configA, configB)
-
-    def sstate_allarch_samesigs(self, configA, configB):
-
-        self.write_config(configA)
-        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash")
-        bitbake("world meta-toolchain -S none")
-        self.write_config(configB)
-        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash2")
-        bitbake("world meta-toolchain -S none")
-
-        def get_files(d):
-            f = {}
-            for root, dirs, files in os.walk(d):
-                for name in files:
-                    if "meta-environment" in root or "cross-canadian" in root:
-                        continue
-                    if "do_build" not in name:
-                        # 1.4.1+gitAUTOINC+302fca9f4c-r0.do_package_write_ipk.sigdata.f3a2a38697da743f0dbed8b56aafcf79
-                        (_, task, _, shash) = name.rsplit(".", 3)
-                        f[os.path.join(os.path.basename(root), task)] = shash
-            return f
-        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps/all" + self.target_vendor + "-" + self.target_os)
-        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps/all" + self.target_vendor + "-" + self.target_os)
-        self.maxDiff = None
-        self.assertEqual(files1, files2)
-
-        nativesdkdir = os.path.basename(glob.glob(self.topdir + "/tmp-sstatesamehash/stamps/*-nativesdk*-linux")[0])
-
-        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps/" + nativesdkdir)
-        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps/" + nativesdkdir)
-        self.maxDiff = None
-        self.assertEqual(files1, files2)
-
-    @testcase(1369)
-    def test_sstate_sametune_samesigs(self):
-        """
-        The sstate checksums of two identical machines (using the same tune) should be the
-        same, apart from changes within the machine specific stamps directory. We use the
-        qemux86copy machine to test this. Also include multilibs in the test.
-        """
-
-        self.write_config("""
-TMPDIR = \"${TOPDIR}/tmp-sstatesamehash\"
-MACHINE = \"qemux86\"
-require conf/multilib.conf
-MULTILIBS = "multilib:lib32"
-DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
-""")
-        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash")
-        bitbake("world meta-toolchain -S none")
-        self.write_config("""
-TMPDIR = \"${TOPDIR}/tmp-sstatesamehash2\"
-MACHINE = \"qemux86copy\"
-require conf/multilib.conf
-MULTILIBS = "multilib:lib32"
-DEFAULTTUNE_virtclass-multilib-lib32 = "x86"
-""")
-        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash2")
-        bitbake("world meta-toolchain -S none")
-
-        def get_files(d):
-            f = []
-            for root, dirs, files in os.walk(d):
-                for name in files:
-                    if "meta-environment" in root or "cross-canadian" in root:
-                        continue
-                    if "qemux86copy-" in root or "qemux86-" in root:
-                        continue
-                    if "do_build" not in name and "do_populate_sdk" not in name:
-                        f.append(os.path.join(root, name))
-            return f
-        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps")
-        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps")
-        files2 = [x.replace("tmp-sstatesamehash2", "tmp-sstatesamehash") for x in files2]
-        self.maxDiff = None
-        self.assertCountEqual(files1, files2)
-
-
-    def test_sstate_noop_samesigs(self):
-        """
-        The sstate checksums of two builds should be the same even when these
-        variables are changed or these classes are inherited.
-        """
-
-        self.write_config("""
-TMPDIR = "${TOPDIR}/tmp-sstatesamehash"
-BB_NUMBER_THREADS = "1"
-PARALLEL_MAKE = "-j 1"
-DL_DIR = "${TOPDIR}/download1"
-TIME = "111111"
-DATE = "20161111"
-INHERIT_remove = "buildstats-summary buildhistory uninative"
-http_proxy = ""
-""")
-        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash")
-        self.track_for_cleanup(self.topdir + "/download1")
-        bitbake("world meta-toolchain -S none")
-        self.write_config("""
-TMPDIR = "${TOPDIR}/tmp-sstatesamehash2"
-BB_NUMBER_THREADS = "2"
-PARALLEL_MAKE = "-j 2"
-DL_DIR = "${TOPDIR}/download2"
-TIME = "222222"
-DATE = "20161212"
-# Always remove uninative as we're changing proxies
-INHERIT_remove = "uninative"
-INHERIT += "buildstats-summary buildhistory"
-http_proxy = "http://example.com/"
-""")
-        self.track_for_cleanup(self.topdir + "/tmp-sstatesamehash2")
-        self.track_for_cleanup(self.topdir + "/download2")
-        bitbake("world meta-toolchain -S none")
-
-        def get_files(d):
-            f = {}
-            for root, dirs, files in os.walk(d):
-                for name in files:
-                    name, shash = name.rsplit('.', 1)
-                    # Extract just the machine and recipe name
-                    base = os.sep.join(root.rsplit(os.sep, 2)[-2:] + [name])
-                    f[base] = shash
-            return f
-        files1 = get_files(self.topdir + "/tmp-sstatesamehash/stamps/")
-        files2 = get_files(self.topdir + "/tmp-sstatesamehash2/stamps/")
-        # Remove items that are identical in both sets
-        for k,v in files1.items() & files2.items():
-            del files1[k]
-            del files2[k]
-        if not files1 and not files2:
-            # No changes, so we're done
-            return
-
-        for k in files1.keys() | files2.keys():
-            if k in files1 and k in files2:
-                print("%s differs:" % k)
-                print(subprocess.check_output(("bitbake-diffsigs",
-                                               self.topdir + "/tmp-sstatesamehash/stamps/" + k + "." + files1[k],
-                                               self.topdir + "/tmp-sstatesamehash2/stamps/" + k + "." + files2[k])))
-            elif k in files1 and k not in files2:
-                print("%s in files1" % k)
-            elif k not in files1 and k in files2:
-                print("%s in files2" % k)
-            else:
-                assert "shouldn't reach here"
-        self.fail("sstate hashes not identical.")
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/tinfoil.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/tinfoil.py
deleted file mode 100644
index 73a0c3b..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/tinfoil.py
+++ /dev/null
@@ -1,190 +0,0 @@
-import unittest
-import os
-import re
-import bb.tinfoil
-
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd
-from oeqa.utils.decorators import testcase
-
-class TinfoilTests(oeSelfTest):
-    """ Basic tests for the tinfoil API """
-
-    @testcase(1568)
-    def test_getvar(self):
-        with bb.tinfoil.Tinfoil() as tinfoil:
-            tinfoil.prepare(True)
-            machine = tinfoil.config_data.getVar('MACHINE')
-            if not machine:
-                self.fail('Unable to get MACHINE value - returned %s' % machine)
-
-    @testcase(1569)
-    def test_expand(self):
-        with bb.tinfoil.Tinfoil() as tinfoil:
-            tinfoil.prepare(True)
-            expr = '${@os.getpid()}'
-            pid = tinfoil.config_data.expand(expr)
-            if not pid:
-                self.fail('Unable to expand "%s" - returned %s' % (expr, pid))
-
-    @testcase(1570)
-    def test_getvar_bb_origenv(self):
-        with bb.tinfoil.Tinfoil() as tinfoil:
-            tinfoil.prepare(True)
-            origenv = tinfoil.config_data.getVar('BB_ORIGENV', False)
-            if not origenv:
-                self.fail('Unable to get BB_ORIGENV value - returned %s' % origenv)
-            self.assertEqual(origenv.getVar('HOME', False), os.environ['HOME'])
-
-    @testcase(1571)
-    def test_parse_recipe(self):
-        with bb.tinfoil.Tinfoil() as tinfoil:
-            tinfoil.prepare(config_only=False, quiet=2)
-            testrecipe = 'mdadm'
-            best = tinfoil.find_best_provider(testrecipe)
-            if not best:
-                self.fail('Unable to find recipe providing %s' % testrecipe)
-            rd = tinfoil.parse_recipe_file(best[3])
-            self.assertEqual(testrecipe, rd.getVar('PN'))
-
-    @testcase(1572)
-    def test_parse_recipe_copy_expand(self):
-        with bb.tinfoil.Tinfoil() as tinfoil:
-            tinfoil.prepare(config_only=False, quiet=2)
-            testrecipe = 'mdadm'
-            best = tinfoil.find_best_provider(testrecipe)
-            if not best:
-                self.fail('Unable to find recipe providing %s' % testrecipe)
-            rd = tinfoil.parse_recipe_file(best[3])
-            # Check we can get variable values
-            self.assertEqual(testrecipe, rd.getVar('PN'))
-            # Check that expanding a value that includes a variable reference works
-            self.assertEqual(testrecipe, rd.getVar('BPN'))
-            # Now check that changing the referenced variable's value in a copy gives that
-            # value when expanding
-            localdata = bb.data.createCopy(rd)
-            localdata.setVar('PN', 'hello')
-            self.assertEqual('hello', localdata.getVar('BPN'))
-
-    @testcase(1573)
-    def test_parse_recipe_initial_datastore(self):
-        with bb.tinfoil.Tinfoil() as tinfoil:
-            tinfoil.prepare(config_only=False, quiet=2)
-            testrecipe = 'mdadm'
-            best = tinfoil.find_best_provider(testrecipe)
-            if not best:
-                self.fail('Unable to find recipe providing %s' % testrecipe)
-            dcopy = bb.data.createCopy(tinfoil.config_data)
-            dcopy.setVar('MYVARIABLE', 'somevalue')
-            rd = tinfoil.parse_recipe_file(best[3], config_data=dcopy)
-            # Check we can get variable values
-            self.assertEqual('somevalue', rd.getVar('MYVARIABLE'))
-
-    @testcase(1574)
-    def test_list_recipes(self):
-        with bb.tinfoil.Tinfoil() as tinfoil:
-            tinfoil.prepare(config_only=False, quiet=2)
-            # Check pkg_pn
-            checkpns = ['tar', 'automake', 'coreutils', 'm4-native', 'nativesdk-gcc']
-            pkg_pn = tinfoil.cooker.recipecaches[''].pkg_pn
-            for pn in checkpns:
-                self.assertIn(pn, pkg_pn)
-            # Check pkg_fn
-            checkfns = {'nativesdk-gcc': '^virtual:nativesdk:.*', 'coreutils': '.*/coreutils_.*.bb'}
-            for fn, pn in tinfoil.cooker.recipecaches[''].pkg_fn.items():
-                if pn in checkpns:
-                    if pn in checkfns:
-                        self.assertTrue(re.match(checkfns[pn], fn), 'Entry for %s: %s did not match %s' % (pn, fn, checkfns[pn]))
-                    checkpns.remove(pn)
-            if checkpns:
-                self.fail('Unable to find pkg_fn entries for: %s' % ', '.join(checkpns))
-
-    @testcase(1575)
-    def test_wait_event(self):
-        with bb.tinfoil.Tinfoil() as tinfoil:
-            tinfoil.prepare(config_only=True)
-            # Drain pending events first, otherwise events that are about to be masked may still be in the queue
-            while tinfoil.wait_event(0.25):
-                pass
-            tinfoil.set_event_mask(['bb.event.FilesMatchingFound', 'bb.command.CommandCompleted'])
-            pattern = 'conf'
-            res = tinfoil.run_command('findFilesMatchingInDir', pattern, 'conf/machine')
-            self.assertTrue(res)
-
-            eventreceived = False
-            waitcount = 5
-            while waitcount > 0:
-                event = tinfoil.wait_event(1)
-                if event:
-                    if isinstance(event, bb.command.CommandCompleted):
-                        break
-                    elif isinstance(event, bb.event.FilesMatchingFound):
-                        self.assertEqual(pattern, event._pattern)
-                        self.assertIn('qemuarm.conf', event._matches)
-                        eventreceived = True
-                    else:
-                        self.fail('Unexpected event: %s' % event)
-
-                waitcount = waitcount - 1
-
-            self.assertNotEqual(waitcount, 0, 'Timed out waiting for CommandCompleted event from bitbake server')
-            self.assertTrue(eventreceived, 'Did not receive FilesMatchingFound event from bitbake server')
-
-    @testcase(1576)
-    def test_setvariable_clean(self):
-        # First check that setVariable affects the datastore
-        with bb.tinfoil.Tinfoil() as tinfoil:
-            tinfoil.prepare(config_only=True)
-            tinfoil.run_command('setVariable', 'TESTVAR', 'specialvalue')
-            self.assertEqual(tinfoil.config_data.getVar('TESTVAR'), 'specialvalue', 'Value set using setVariable is not reflected in client-side getVar()')
-
-        # Now check that the setVariable's effects are no longer present
-        # (this may legitimately break in future if we stop reinitialising
-        # the datastore, in which case we'll have to reconsider use of
-        # setVariable entirely)
-        with bb.tinfoil.Tinfoil() as tinfoil:
-            tinfoil.prepare(config_only=True)
-            self.assertNotEqual(tinfoil.config_data.getVar('TESTVAR'), 'specialvalue', 'Value set using setVariable is still present!')
-
-        # Now check that setVar on the main datastore works (uses setVariable internally)
-        with bb.tinfoil.Tinfoil() as tinfoil:
-            tinfoil.prepare(config_only=True)
-            tinfoil.config_data.setVar('TESTVAR', 'specialvalue')
-            value = tinfoil.run_command('getVariable', 'TESTVAR')
-            self.assertEqual(value, 'specialvalue', 'Value set using config_data.setVar() is not reflected in config_data.getVar()')
-
-    def test_datastore_operations(self):
-        with bb.tinfoil.Tinfoil() as tinfoil:
-            tinfoil.prepare(config_only=True)
-            # Test setVarFlag() / getVarFlag()
-            tinfoil.config_data.setVarFlag('TESTVAR', 'flagname', 'flagval')
-            value = tinfoil.config_data.getVarFlag('TESTVAR', 'flagname')
-            self.assertEqual(value, 'flagval', 'Value set using config_data.setVarFlag() is not reflected in config_data.getVarFlag()')
-            # Test delVarFlag()
-            tinfoil.config_data.setVarFlag('TESTVAR', 'otherflag', 'othervalue')
-            tinfoil.config_data.delVarFlag('TESTVAR', 'flagname')
-            value = tinfoil.config_data.getVarFlag('TESTVAR', 'flagname')
-            self.assertEqual(value, None, 'Varflag deleted using config_data.delVarFlag() is not reflected in config_data.getVarFlag()')
-            value = tinfoil.config_data.getVarFlag('TESTVAR', 'otherflag')
-            self.assertEqual(value, 'othervalue', 'Varflag deleted using config_data.delVarFlag() caused unrelated flag to be removed')
-            # Test delVar()
-            tinfoil.config_data.setVar('TESTVAR', 'varvalue')
-            value = tinfoil.config_data.getVar('TESTVAR')
-            self.assertEqual(value, 'varvalue', 'Value set using config_data.setVar() is not reflected in config_data.getVar()')
-            tinfoil.config_data.delVar('TESTVAR')
-            value = tinfoil.config_data.getVar('TESTVAR')
-            self.assertEqual(value, None, 'Variable deleted using config_data.delVar() appears to still have a value')
-            # Test renameVar()
-            tinfoil.config_data.setVar('TESTVAROLD', 'origvalue')
-            tinfoil.config_data.renameVar('TESTVAROLD', 'TESTVARNEW')
-            value = tinfoil.config_data.getVar('TESTVAROLD')
-            self.assertEqual(value, None, 'Variable renamed using config_data.renameVar() still seems to exist')
-            value = tinfoil.config_data.getVar('TESTVARNEW')
-            self.assertEqual(value, 'origvalue', 'Variable renamed using config_data.renameVar() does not appear with new name')
-            # Test overrides
-            tinfoil.config_data.setVar('TESTVAR', 'original')
-            tinfoil.config_data.setVar('TESTVAR_overrideone', 'one')
-            tinfoil.config_data.setVar('TESTVAR_overridetwo', 'two')
-            tinfoil.config_data.appendVar('OVERRIDES', ':overrideone')
-            value = tinfoil.config_data.getVar('TESTVAR')
-            self.assertEqual(value, 'one', 'Variable overrides not functioning correctly')
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/selftest/wic.py b/import-layers/yocto-poky/meta/lib/oeqa/selftest/wic.py
deleted file mode 100644
index 726af19..0000000
--- a/import-layers/yocto-poky/meta/lib/oeqa/selftest/wic.py
+++ /dev/null
@@ -1,792 +0,0 @@
-#!/usr/bin/env python
-# ex:ts=4:sw=4:sts=4:et
-# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
-#
-# Copyright (c) 2015, Intel Corporation.
-# All rights reserved.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License version 2 as
-# published by the Free Software Foundation.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License along
-# with this program; if not, write to the Free Software Foundation, Inc.,
-# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-#
-# AUTHORS
-# Ed Bartosh <ed.bartosh@linux.intel.com>
-
-"""Test cases for wic."""
-
-import os
-import sys
-import unittest
-
-from glob import glob
-from shutil import rmtree
-from functools import wraps, lru_cache
-from tempfile import NamedTemporaryFile
-
-from oeqa.selftest.base import oeSelfTest
-from oeqa.utils.commands import runCmd, bitbake, get_bb_var, get_bb_vars, runqemu
-from oeqa.utils.decorators import testcase
-
-
-@lru_cache(maxsize=32)
-def get_host_arch(recipe):
-    """A cached call to get_bb_var('HOST_ARCH', <recipe>)"""
-    return get_bb_var('HOST_ARCH', recipe)
-
-
-def only_for_arch(archs, image='core-image-minimal'):
-    """Decorator for wrapping test cases that can be run only for specific target
-    architectures. A list of compatible architectures is passed in `archs`.
-    Current architecture will be determined by parsing bitbake output for
-    `image` recipe.
-    """
-    def wrapper(func):
-        @wraps(func)
-        def wrapped_f(*args, **kwargs):
-            arch = get_host_arch(image)
-            if archs and arch not in archs:
-                raise unittest.SkipTest("Testcase arch dependency not met: %s" % arch)
-            return func(*args, **kwargs)
-        wrapped_f.__name__ = func.__name__
-        return wrapped_f
-    return wrapper
-
-
-class Wic(oeSelfTest):
-    """Wic test class."""
-
-    resultdir = "/var/tmp/wic.oe-selftest/"
-    image_is_ready = False
-    native_sysroot = None
-    wicenv_cache = {}
-
-    def setUpLocal(self):
-        """This code is executed before each test method."""
-        if not self.native_sysroot:
-            Wic.native_sysroot = get_bb_var('STAGING_DIR_NATIVE', 'wic-tools')
-
-        # Do this here instead of in setUpClass as the base setUp does some
-        # clean up which can result in the native tools built earlier in
-        # setUpClass being unavailable.
-        if not Wic.image_is_ready:
-            if get_bb_var('USE_NLS') == 'yes':
-                bitbake('wic-tools')
-            else:
-                self.skipTest('wic-tools cannot be built due to its (intltool|gettext)-native dependency when NLS is disabled')
-
-            bitbake('core-image-minimal')
-            Wic.image_is_ready = True
-
-        rmtree(self.resultdir, ignore_errors=True)
-
-    def tearDownLocal(self):
-        """Remove resultdir as it may contain images."""
-        rmtree(self.resultdir, ignore_errors=True)
-
-    @testcase(1552)
-    def test_version(self):
-        """Test wic --version"""
-        self.assertEqual(0, runCmd('wic --version').status)
-
-    @testcase(1208)
-    def test_help(self):
-        """Test wic --help and wic -h"""
-        self.assertEqual(0, runCmd('wic --help').status)
-        self.assertEqual(0, runCmd('wic -h').status)
-
-    @testcase(1209)
-    def test_createhelp(self):
-        """Test wic create --help"""
-        self.assertEqual(0, runCmd('wic create --help').status)
-
-    @testcase(1210)
-    def test_listhelp(self):
-        """Test wic list --help"""
-        self.assertEqual(0, runCmd('wic list --help').status)
-
-    @testcase(1553)
-    def test_help_create(self):
-        """Test wic help create"""
-        self.assertEqual(0, runCmd('wic help create').status)
-
-    @testcase(1554)
-    def test_help_list(self):
-        """Test wic help list"""
-        self.assertEqual(0, runCmd('wic help list').status)
-
-    @testcase(1215)
-    def test_help_overview(self):
-        """Test wic help overview"""
-        self.assertEqual(0, runCmd('wic help overview').status)
-
-    @testcase(1216)
-    def test_help_plugins(self):
-        """Test wic help plugins"""
-        self.assertEqual(0, runCmd('wic help plugins').status)
-
-    @testcase(1217)
-    def test_help_kickstart(self):
-        """Test wic help kickstart"""
-        self.assertEqual(0, runCmd('wic help kickstart').status)
-
-    @testcase(1555)
-    def test_list_images(self):
-        """Test wic list images"""
-        self.assertEqual(0, runCmd('wic list images').status)
-
-    @testcase(1556)
-    def test_list_source_plugins(self):
-        """Test wic list source-plugins"""
-        self.assertEqual(0, runCmd('wic list source-plugins').status)
-
-    @testcase(1557)
-    def test_listed_images_help(self):
-        """Test wic listed images help"""
-        output = runCmd('wic list images').output
-        imagelist = [line.split()[0] for line in output.splitlines()]
-        for image in imagelist:
-            self.assertEqual(0, runCmd('wic list %s help' % image).status)
-
-    @testcase(1213)
-    def test_unsupported_subcommand(self):
-        """Test unsupported subcommand"""
-        self.assertEqual(1, runCmd('wic unsupported',
-                                   ignore_status=True).status)
-
-    @testcase(1214)
-    def test_no_command(self):
-        """Test wic without command"""
-        self.assertEqual(1, runCmd('wic', ignore_status=True).status)
-
-    @testcase(1211)
-    def test_build_image_name(self):
-        """Test wic create wictestdisk --image-name=core-image-minimal"""
-        cmd = "wic create wictestdisk --image-name=core-image-minimal -o %s" % self.resultdir
-        self.assertEqual(0, runCmd(cmd).status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
-
-    @testcase(1157)
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_gpt_image(self):
-        """Test creation of core-image-minimal with gpt table and UUID boot"""
-        cmd = "wic create directdisk-gpt --image-name core-image-minimal -o %s" % self.resultdir
-        self.assertEqual(0, runCmd(cmd).status)
-        self.assertEqual(1, len(glob(self.resultdir + "directdisk-*.direct")))
-
-    @testcase(1346)
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_iso_image(self):
-        """Test creation of hybrid iso image with legacy and EFI boot"""
-        config = 'INITRAMFS_IMAGE = "core-image-minimal-initramfs"\n'\
-                 'MACHINE_FEATURES_append = " efi"\n'
-        self.append_config(config)
-        bitbake('core-image-minimal')
-        self.remove_config(config)
-        cmd = "wic create mkhybridiso --image-name core-image-minimal -o %s" % self.resultdir
-        self.assertEqual(0, runCmd(cmd).status)
-        self.assertEqual(1, len(glob(self.resultdir + "HYBRID_ISO_IMG-*.direct")))
-        self.assertEqual(1, len(glob(self.resultdir + "HYBRID_ISO_IMG-*.iso")))
-
-    @testcase(1348)
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_qemux86_directdisk(self):
-        """Test creation of qemux-86-directdisk image"""
-        cmd = "wic create qemux86-directdisk -e core-image-minimal -o %s" % self.resultdir
-        self.assertEqual(0, runCmd(cmd).status)
-        self.assertEqual(1, len(glob(self.resultdir + "qemux86-directdisk-*direct")))
-
-    @testcase(1350)
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_mkefidisk(self):
-        """Test creation of mkefidisk image"""
-        cmd = "wic create mkefidisk -e core-image-minimal -o %s" % self.resultdir
-        self.assertEqual(0, runCmd(cmd).status)
-        self.assertEqual(1, len(glob(self.resultdir + "mkefidisk-*direct")))
-
-    @testcase(1385)
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_bootloader_config(self):
-        """Test creation of directdisk-bootloader-config image"""
-        cmd = "wic create directdisk-bootloader-config -e core-image-minimal -o %s" % self.resultdir
-        self.assertEqual(0, runCmd(cmd).status)
-        self.assertEqual(1, len(glob(self.resultdir + "directdisk-bootloader-config-*direct")))
-
-    @testcase(1560)
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_systemd_bootdisk(self):
-        """Test creation of systemd-bootdisk image"""
-        config = 'MACHINE_FEATURES_append = " efi"\n'
-        self.append_config(config)
-        bitbake('core-image-minimal')
-        self.remove_config(config)
-        cmd = "wic create systemd-bootdisk -e core-image-minimal -o %s" % self.resultdir
-        self.assertEqual(0, runCmd(cmd).status)
-        self.assertEqual(1, len(glob(self.resultdir + "systemd-bootdisk-*direct")))
-
-    @testcase(1561)
-    def test_sdimage_bootpart(self):
-        """Test creation of sdimage-bootpart image"""
-        cmd = "wic create sdimage-bootpart -e core-image-minimal -o %s" % self.resultdir
-        kimgtype = get_bb_var('KERNEL_IMAGETYPE', 'core-image-minimal')
-        self.write_config('IMAGE_BOOT_FILES = "%s"\n' % kimgtype)
-        self.assertEqual(0, runCmd(cmd).status)
-        self.assertEqual(1, len(glob(self.resultdir + "sdimage-bootpart-*direct")))
-
-    @testcase(1562)
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_default_output_dir(self):
-        """Test default output location"""
-        for fname in glob("directdisk-*.direct"):
-            os.remove(fname)
-        cmd = "wic create directdisk -e core-image-minimal"
-        self.assertEqual(0, runCmd(cmd).status)
-        self.assertEqual(1, len(glob("directdisk-*.direct")))
-
-    @testcase(1212)
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_build_artifacts(self):
-        """Test wic create directdisk providing all artifacts."""
-        bb_vars = get_bb_vars(['STAGING_DATADIR', 'RECIPE_SYSROOT_NATIVE'],
-                              'wic-tools')
-        bb_vars.update(get_bb_vars(['DEPLOY_DIR_IMAGE', 'IMAGE_ROOTFS'],
-                                   'core-image-minimal'))
-        bbvars = {key.lower(): value for key, value in bb_vars.items()}
-        bbvars['resultdir'] = self.resultdir
-        status = runCmd("wic create directdisk "
-                        "-b %(staging_datadir)s "
-                        "-k %(deploy_dir_image)s "
-                        "-n %(recipe_sysroot_native)s "
-                        "-r %(image_rootfs)s "
-                        "-o %(resultdir)s" % bbvars).status
-        self.assertEqual(0, status)
-        self.assertEqual(1, len(glob(self.resultdir + "directdisk-*.direct")))
-
-    @testcase(1264)
-    def test_compress_gzip(self):
-        """Test compressing an image with gzip"""
-        self.assertEqual(0, runCmd("wic create wictestdisk "
-                                   "--image-name core-image-minimal "
-                                   "-c gzip -o %s" % self.resultdir).status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct.gz")))
-
-    @testcase(1265)
-    def test_compress_bzip2(self):
-        """Test compressing an image with bzip2"""
-        self.assertEqual(0, runCmd("wic create wictestdisk "
-                                   "--image-name=core-image-minimal "
-                                   "-c bzip2 -o %s" % self.resultdir).status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct.bz2")))
-
-    @testcase(1266)
-    def test_compress_xz(self):
-        """Test compressing an image with xz"""
-        self.assertEqual(0, runCmd("wic create wictestdisk "
-                                   "--image-name=core-image-minimal "
-                                   "--compress-with=xz -o %s" % self.resultdir).status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct.xz")))
-
-    @testcase(1267)
-    def test_wrong_compressor(self):
-        """Test how wic breaks if wrong compressor is provided"""
-        self.assertEqual(2, runCmd("wic create wictestdisk "
-                                   "--image-name=core-image-minimal "
-                                   "-c wrong -o %s" % self.resultdir,
-                                   ignore_status=True).status)
-
-    @testcase(1558)
-    def test_debug_short(self):
-        """Test -D option"""
-        self.assertEqual(0, runCmd("wic create wictestdisk "
-                                   "--image-name=core-image-minimal "
-                                   "-D -o %s" % self.resultdir).status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
-
-    def test_debug_long(self):
-        """Test --debug option"""
-        self.assertEqual(0, runCmd("wic create wictestdisk "
-                                   "--image-name=core-image-minimal "
-                                   "--debug -o %s" % self.resultdir).status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
-
-    @testcase(1563)
-    def test_skip_build_check_short(self):
-        """Test -s option"""
-        self.assertEqual(0, runCmd("wic create wictestdisk "
-                                   "--image-name=core-image-minimal "
-                                   "-s -o %s" % self.resultdir).status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
-
-    def test_skip_build_check_long(self):
-        """Test --skip-build-check option"""
-        self.assertEqual(0, runCmd("wic create wictestdisk "
-                                   "--image-name=core-image-minimal "
-                                   "--skip-build-check "
-                                   "--outdir %s" % self.resultdir).status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
-
-    @testcase(1564)
-    def test_build_rootfs_short(self):
-        """Test -f option"""
-        self.assertEqual(0, runCmd("wic create wictestdisk "
-                                   "--image-name=core-image-minimal "
-                                   "-f -o %s" % self.resultdir).status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
-
-    def test_build_rootfs_long(self):
-        """Test --build-rootfs option"""
-        self.assertEqual(0, runCmd("wic create wictestdisk "
-                                   "--image-name=core-image-minimal "
-                                   "--build-rootfs "
-                                   "--outdir %s" % self.resultdir).status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*.direct")))
-
-    @testcase(1268)
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_rootfs_indirect_recipes(self):
-        """Test usage of rootfs plugin with rootfs recipes"""
-        status = runCmd("wic create directdisk-multi-rootfs "
-                        "--image-name=core-image-minimal "
-                        "--rootfs rootfs1=core-image-minimal "
-                        "--rootfs rootfs2=core-image-minimal "
-                        "--outdir %s" % self.resultdir).status
-        self.assertEqual(0, status)
-        self.assertEqual(1, len(glob(self.resultdir + "directdisk-multi-rootfs*.direct")))
-
-    @testcase(1269)
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_rootfs_artifacts(self):
-        """Test usage of rootfs plugin with rootfs paths"""
-        bb_vars = get_bb_vars(['STAGING_DATADIR', 'RECIPE_SYSROOT_NATIVE'],
-                              'wic-tools')
-        bb_vars.update(get_bb_vars(['DEPLOY_DIR_IMAGE', 'IMAGE_ROOTFS'],
-                                   'core-image-minimal'))
-        bbvars = {key.lower(): value for key, value in bb_vars.items()}
-        bbvars['wks'] = "directdisk-multi-rootfs"
-        bbvars['resultdir'] = self.resultdir
-        status = runCmd("wic create %(wks)s "
-                        "--bootimg-dir=%(staging_datadir)s "
-                        "--kernel-dir=%(deploy_dir_image)s "
-                        "--native-sysroot=%(recipe_sysroot_native)s "
-                        "--rootfs-dir rootfs1=%(image_rootfs)s "
-                        "--rootfs-dir rootfs2=%(image_rootfs)s "
-                        "--outdir %(resultdir)s" % bbvars).status
-        self.assertEqual(0, status)
-        self.assertEqual(1, len(glob(self.resultdir + "%(wks)s-*.direct" % bbvars)))
-
-    def test_exclude_path(self):
-        """Test --exclude-path wks option."""
-
-        oldpath = os.environ['PATH']
-        os.environ['PATH'] = get_bb_var("PATH", "wic-tools")
-
-        try:
-            wks_file = 'temp.wks'
-            with open(wks_file, 'w') as wks:
-                rootfs_dir = get_bb_var('IMAGE_ROOTFS', 'core-image-minimal')
-                wks.write("""
-part / --source rootfs --ondisk mmcblk0 --fstype=ext4 --exclude-path usr
-part /usr --source rootfs --ondisk mmcblk0 --fstype=ext4 --rootfs-dir %s/usr
-part /etc --source rootfs --ondisk mmcblk0 --fstype=ext4 --exclude-path bin/ --rootfs-dir %s/usr"""
-                          % (rootfs_dir, rootfs_dir))
-            self.assertEqual(0, runCmd("wic create %s -e core-image-minimal -o %s" \
-                                       % (wks_file, self.resultdir)).status)
-
-            os.remove(wks_file)
-            wicout = glob(self.resultdir + "%s-*direct" % 'temp')
-            self.assertEqual(1, len(wicout))
-
-            wicimg = wicout[0]
-
-            # verify partition size with wic
-            res = runCmd("parted -m %s unit b p 2>/dev/null" % wicimg)
-            self.assertEqual(0, res.status)
-
-            # parse parted output which looks like this:
-            # BYT;\n
-            # /var/tmp/wic/build/tmpfwvjjkf_-201611101222-hda.direct:200MiB:file:512:512:msdos::;\n
-            # 1:0.00MiB:200MiB:200MiB:ext4::;\n
-            partlns = res.output.splitlines()[2:]
-
-            self.assertEqual(3, len(partlns))
-
-            for part in [1, 2, 3]:
-                part_file = os.path.join(self.resultdir, "selftest_img.part%d" % part)
-                partln = partlns[part-1].split(":")
-                self.assertEqual(7, len(partln))
-                start = int(partln[1].rstrip("B")) / 512
-                length = int(partln[3].rstrip("B")) / 512
-                self.assertEqual(0, runCmd("dd if=%s of=%s skip=%d count=%d" %
-                                           (wicimg, part_file, start, length)).status)
-
-            def extract_files(debugfs_output):
-                """
-                extract file names from the output of debugfs -R 'ls -p',
-                which looks like this:
-
-                 /2/040755/0/0/.//\n
-                 /2/040755/0/0/..//\n
-                 /11/040700/0/0/lost+found^M//\n
-                 /12/040755/1002/1002/run//\n
-                 /13/040755/1002/1002/sys//\n
-                 /14/040755/1002/1002/bin//\n
-                 /80/040755/1002/1002/var//\n
-                 /92/040755/1002/1002/tmp//\n
-                """
-                # NOTE the occasional ^M in file names
-                return [line.split('/')[5].strip() for line in \
-                        debugfs_output.strip().split('/\n')]
-
-            # Test partition 1, should contain the normal root directories, except
-            # /usr.
-            res = runCmd("debugfs -R 'ls -p' %s 2>/dev/null" % \
-                             os.path.join(self.resultdir, "selftest_img.part1"))
-            self.assertEqual(0, res.status)
-            files = extract_files(res.output)
-            self.assertIn("etc", files)
-            self.assertNotIn("usr", files)
-
-            # Partition 2, should contain common directories for /usr, not root
-            # directories.
-            res = runCmd("debugfs -R 'ls -p' %s 2>/dev/null" % \
-                             os.path.join(self.resultdir, "selftest_img.part2"))
-            self.assertEqual(0, res.status)
-            files = extract_files(res.output)
-            self.assertNotIn("etc", files)
-            self.assertNotIn("usr", files)
-            self.assertIn("share", files)
-
-            # Partition 3, should contain the same as partition 2, including the bin
-            # directory, but not the files inside it.
-            res = runCmd("debugfs -R 'ls -p' %s 2>/dev/null" % \
-                             os.path.join(self.resultdir, "selftest_img.part3"))
-            self.assertEqual(0, res.status)
-            files = extract_files(res.output)
-            self.assertNotIn("etc", files)
-            self.assertNotIn("usr", files)
-            self.assertIn("share", files)
-            self.assertIn("bin", files)
-            res = runCmd("debugfs -R 'ls -p bin' %s 2>/dev/null" % \
-                             os.path.join(self.resultdir, "selftest_img.part3"))
-            self.assertEqual(0, res.status)
-            files = extract_files(res.output)
-            self.assertIn(".", files)
-            self.assertIn("..", files)
-            self.assertEqual(2, len(files))
-
-            for part in [1, 2, 3]:
-                part_file = os.path.join(self.resultdir, "selftest_img.part%d" % part)
-                os.remove(part_file)
-
-        finally:
-            os.environ['PATH'] = oldpath
-
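The extract_files() helper above slices each '/'-delimited record of the debugfs 'ls -p' output and keeps field 5, the entry name. A standalone sketch of the same parsing, fed with a made-up two-entry listing (hypothetical sample, not real debugfs output):

# Hypothetical debugfs -R 'ls -p' output with two entries; field 5 of each
# '/'-separated record is the entry name, exactly as extract_files() assumes.
sample = "/2/040755/0/0/.//\n/14/040755/1002/1002/bin//\n"
names = [line.split('/')[5].strip() for line in sample.strip().split('/\n')]
print(names)   # ['.', 'bin']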
-    def test_exclude_path_errors(self):
-        """Test --exclude-path wks option error handling."""
-        wks_file = 'temp.wks'
-
-        # Absolute argument.
-        with open(wks_file, 'w') as wks:
-            wks.write("part / --source rootfs --ondisk mmcblk0 --fstype=ext4 --exclude-path /usr")
-        self.assertNotEqual(0, runCmd("wic create %s -e core-image-minimal -o %s" \
-                                      % (wks_file, self.resultdir), ignore_status=True).status)
-        os.remove(wks_file)
-
-        # Argument pointing to parent directory.
-        with open(wks_file, 'w') as wks:
-            wks.write("part / --source rootfs --ondisk mmcblk0 --fstype=ext4 --exclude-path ././..")
-        self.assertNotEqual(0, runCmd("wic create %s -e core-image-minimal -o %s" \
-                                      % (wks_file, self.resultdir), ignore_status=True).status)
-        os.remove(wks_file)
-
-    @testcase(1496)
-    def test_bmap_short(self):
-        """Test generation of .bmap file -m option"""
-        cmd = "wic create wictestdisk -e core-image-minimal -m -o %s" % self.resultdir
-        status = runCmd(cmd).status
-        self.assertEqual(0, status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*direct")))
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*direct.bmap")))
-
-    def test_bmap_long(self):
-        """Test generation of .bmap file --bmap option"""
-        cmd = "wic create wictestdisk -e core-image-minimal --bmap -o %s" % self.resultdir
-        status = runCmd(cmd).status
-        self.assertEqual(0, status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*direct")))
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*direct.bmap")))
-
-    def _get_image_env_path(self, image):
-        """Generate and obtain the path to <image>.env"""
-        if image not in self.wicenv_cache:
-            self.assertEqual(0, bitbake('%s -c do_rootfs_wicenv' % image).status)
-            bb_vars = get_bb_vars(['STAGING_DIR', 'MACHINE'], image)
-            stdir = bb_vars['STAGING_DIR']
-            machine = bb_vars['MACHINE']
-            self.wicenv_cache[image] = os.path.join(stdir, machine, 'imgdata')
-        return self.wicenv_cache[image]
-
-    @testcase(1347)
-    def test_image_env(self):
-        """Test generation of <image>.env files."""
-        image = 'core-image-minimal'
-        imgdatadir = self._get_image_env_path(image)
-
-        bb_vars = get_bb_vars(['IMAGE_BASENAME', 'WICVARS'], image)
-        basename = bb_vars['IMAGE_BASENAME']
-        self.assertEqual(basename, image)
-        path = os.path.join(imgdatadir, basename) + '.env'
-        self.assertTrue(os.path.isfile(path))
-
-        wicvars = set(bb_vars['WICVARS'].split())
-        # filter out optional variables
-        wicvars = wicvars.difference(('DEPLOY_DIR_IMAGE', 'IMAGE_BOOT_FILES',
-                                      'INITRD', 'INITRD_LIVE', 'ISODIR'))
-        with open(path) as envfile:
-            content = dict(line.split("=", 1) for line in envfile)
-            # test if variables used by wic present in the .env file
-            for var in wicvars:
-                self.assertTrue(var in content, "%s is not in .env file" % var)
-                self.assertTrue(content[var])
-
-    @testcase(1559)
-    def test_image_vars_dir_short(self):
-        """Test image vars directory selection -v option"""
-        image = 'core-image-minimal'
-        imgenvdir = self._get_image_env_path(image)
-
-        self.assertEqual(0, runCmd("wic create wictestdisk "
-                                   "--image-name=%s -v %s -o %s"
-                                   % (image, imgenvdir, self.resultdir)).status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*direct")))
-
-    def test_image_vars_dir_long(self):
-        """Test image vars directory selection --vars option"""
-        image = 'core-image-minimal'
-        imgenvdir = self._get_image_env_path(image)
-        self.assertEqual(0, runCmd("wic create wictestdisk "
-                                   "--image-name=%s "
-                                   "--vars %s "
-                                   "--outdir %s"
-                                   % (image, imgenvdir, self.resultdir)).status)
-        self.assertEqual(1, len(glob(self.resultdir + "wictestdisk-*direct")))
-
-    @testcase(1351)
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_wic_image_type(self):
-        """Test building wic images by bitbake"""
-        config = 'IMAGE_FSTYPES += "wic"\nWKS_FILE = "wic-image-minimal"\n'\
-                 'MACHINE_FEATURES_append = " efi"\n'
-        self.append_config(config)
-        self.assertEqual(0, bitbake('wic-image-minimal').status)
-        self.remove_config(config)
-
-        bb_vars = get_bb_vars(['DEPLOY_DIR_IMAGE', 'MACHINE'])
-        deploy_dir = bb_vars['DEPLOY_DIR_IMAGE']
-        machine = bb_vars['MACHINE']
-        prefix = os.path.join(deploy_dir, 'wic-image-minimal-%s.' % machine)
-        # check if we have result image and manifests symlinks
-        # pointing to existing files
-        for suffix in ('wic', 'manifest'):
-            path = prefix + suffix
-            self.assertTrue(os.path.islink(path))
-            self.assertTrue(os.path.isfile(os.path.realpath(path)))
-
-    @testcase(1422)
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_qemu(self):
-        """Test wic-image-minimal under qemu"""
-        config = 'IMAGE_FSTYPES += "wic"\nWKS_FILE = "wic-image-minimal"\n'\
-                 'MACHINE_FEATURES_append = " efi"\n'
-        self.append_config(config)
-        self.assertEqual(0, bitbake('wic-image-minimal').status)
-        self.remove_config(config)
-
-        with runqemu('wic-image-minimal', ssh=False) as qemu:
-            cmd = "mount |grep '^/dev/' | cut -f1,3 -d ' '"
-            status, output = qemu.run_serial(cmd)
-            self.assertEqual(1, status, 'Failed to run command "%s": %s' % (cmd, output))
-            self.assertEqual(output, '/dev/root /\r\n/dev/sda3 /mnt')
-
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_qemu_efi(self):
-        """Test core-image-minimal efi image under qemu"""
-        config = 'IMAGE_FSTYPES = "wic"\nWKS_FILE = "mkefidisk.wks"\n'
-        self.append_config(config)
-        self.assertEqual(0, bitbake('core-image-minimal ovmf').status)
-        self.remove_config(config)
-
-        with runqemu('core-image-minimal', ssh=False,
-                     runqemuparams='ovmf', image_fstype='wic') as qemu:
-            cmd = "grep sda. /proc/partitions  |wc -l"
-            status, output = qemu.run_serial(cmd)
-            self.assertEqual(1, status, 'Failed to run command "%s": %s' % (cmd, output))
-            self.assertEqual(output, '3')
-
-    @staticmethod
-    def _make_fixed_size_wks(size):
-        """
-        Create a wks of an image with a single partition. Size of the partition is set
-        using --fixed-size flag. Returns a tuple: (path to wks file, wks image name)
-        """
-        with NamedTemporaryFile("w", suffix=".wks", delete=False) as tempf:
-            wkspath = tempf.name
-            tempf.write("part " \
-                     "--source rootfs --ondisk hda --align 4 --fixed-size %d "
-                     "--fstype=ext4\n" % size)
-        wksname = os.path.splitext(os.path.basename(wkspath))[0]
-
-        return wkspath, wksname
-
-    def test_fixed_size(self):
-        """
-        Test creation of a simple image with partition size controlled through
-        --fixed-size flag
-        """
-        wkspath, wksname = Wic._make_fixed_size_wks(200)
-
-        self.assertEqual(0, runCmd("wic create %s -e core-image-minimal -o %s" \
-                                   % (wkspath, self.resultdir)).status)
-        os.remove(wkspath)
-        wicout = glob(self.resultdir + "%s-*direct" % wksname)
-        self.assertEqual(1, len(wicout))
-
-        wicimg = wicout[0]
-
-        # verify partition size with wic
-        res = runCmd("parted -m %s unit mib p 2>/dev/null" % wicimg,
-                     ignore_status=True,
-                     native_sysroot=self.native_sysroot)
-        self.assertEqual(0, res.status)
-
-        # parse parted output which looks like this:
-        # BYT;\n
-        # /var/tmp/wic/build/tmpfwvjjkf_-201611101222-hda.direct:200MiB:file:512:512:msdos::;\n
-        # 1:0.00MiB:200MiB:200MiB:ext4::;\n
-        partlns = res.output.splitlines()[2:]
-
-        self.assertEqual(1, len(partlns))
-        self.assertEqual("1:0.00MiB:200MiB:200MiB:ext4::;", partlns[0])
-
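Both this test and test_exclude_path above parse the machine-readable output of parted -m. A small hedged sketch of that parsing, using a made-up output string shaped like the comment above:

# Hypothetical "parted -m <img> unit mib p" output: a BYT; header, a disk
# line, then one colon-separated line per partition.
parted_out = ("BYT;\n"
              "/var/tmp/wic/build/tmp-hda.direct:200MiB:file:512:512:msdos::;\n"
              "1:0.00MiB:200MiB:200MiB:ext4::;\n")
partlns = parted_out.splitlines()[2:]            # keep only the partition lines
number, start, end, size, fstype = partlns[0].split(":")[:5]
print(number, start, end, size, fstype)          # 1 0.00MiB 200MiB 200MiB ext4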
-    def test_fixed_size_error(self):
-        """
-        Test creation of a simple image with partition size controlled through
-        --fixed-size flag. The size of partition is intentionally set to 1MiB
-        in order to trigger an error in wic.
-        """
-        wkspath, wksname = Wic._make_fixed_size_wks(1)
-
-        self.assertEqual(1, runCmd("wic create %s -e core-image-minimal -o %s" \
-                                   % (wkspath, self.resultdir), ignore_status=True).status)
-        os.remove(wkspath)
-        wicout = glob(self.resultdir + "%s-*direct" % wksname)
-        self.assertEqual(0, len(wicout))
-
-    @only_for_arch(['i586', 'i686', 'x86_64'])
-    def test_rawcopy_plugin_qemu(self):
-        """Test rawcopy plugin in qemu"""
-        # build ext4 and wic images
-        for fstype in ("ext4", "wic"):
-            config = 'IMAGE_FSTYPES = "%s"\nWKS_FILE = "test_rawcopy_plugin.wks.in"\n' % fstype
-            self.append_config(config)
-            self.assertEqual(0, bitbake('core-image-minimal').status)
-            self.remove_config(config)
-
-        with runqemu('core-image-minimal', ssh=False, image_fstype='wic') as qemu:
-            cmd = "grep sda. /proc/partitions  |wc -l"
-            status, output = qemu.run_serial(cmd)
-            self.assertEqual(1, status, 'Failed to run command "%s": %s' % (cmd, output))
-            self.assertEqual(output, '2')
-
-    def test_rawcopy_plugin(self):
-        """Test rawcopy plugin"""
-        img = 'core-image-minimal'
-        machine = get_bb_var('MACHINE', img)
-        with NamedTemporaryFile("w", suffix=".wks") as wks:
-            wks.writelines(['part /boot --active --source bootimg-pcbios\n',
-                            'part / --source rawcopy --sourceparams="file=%s-%s.ext4" --use-uuid\n'\
-                             % (img, machine),
-                            'bootloader --timeout=0 --append="console=ttyS0,115200n8"\n'])
-            wks.flush()
-            cmd = "wic create %s -e %s -o %s" % (wks.name, img, self.resultdir)
-            self.assertEqual(0, runCmd(cmd).status)
-            wksname = os.path.splitext(os.path.basename(wks.name))[0]
-            out = glob(self.resultdir + "%s-*direct" % wksname)
-            self.assertEqual(1, len(out))
-
-    def test_fs_types(self):
-        """Test filesystem types for empty and not empty partitions"""
-        img = 'core-image-minimal'
-        with NamedTemporaryFile("w", suffix=".wks") as wks:
-            wks.writelines(['part ext2   --fstype ext2     --source rootfs\n',
-                            'part btrfs  --fstype btrfs    --source rootfs --size 40M\n',
-                            'part squash --fstype squashfs --source rootfs\n',
-                            'part swap   --fstype swap --size 1M\n',
-                            'part emptyvfat   --fstype vfat   --size 1M\n',
-                            'part emptymsdos  --fstype msdos  --size 1M\n',
-                            'part emptyext2   --fstype ext2   --size 1M\n',
-                            'part emptybtrfs  --fstype btrfs  --size 100M\n'])
-            wks.flush()
-            cmd = "wic create %s -e %s -o %s" % (wks.name, img, self.resultdir)
-            self.assertEqual(0, runCmd(cmd).status)
-            wksname = os.path.splitext(os.path.basename(wks.name))[0]
-            out = glob(self.resultdir + "%s-*direct" % wksname)
-            self.assertEqual(1, len(out))
-
-    def test_kickstart_parser(self):
-        """Test wks parser options"""
-        with NamedTemporaryFile("w", suffix=".wks") as wks:
-            wks.writelines(['part / --fstype ext3 --source rootfs --system-id 0xFF '\
-                            '--overhead-factor 1.2 --size 100k\n'])
-            wks.flush()
-            cmd = "wic create %s -e core-image-minimal -o %s" % (wks.name, self.resultdir)
-            self.assertEqual(0, runCmd(cmd).status)
-            wksname = os.path.splitext(os.path.basename(wks.name))[0]
-            out = glob(self.resultdir + "%s-*direct" % wksname)
-            self.assertEqual(1, len(out))
-
-    def test_image_bootpart_globbed(self):
-        """Test globbed sources with image-bootpart plugin"""
-        img = "core-image-minimal"
-        cmd = "wic create sdimage-bootpart -e %s -o %s" % (img, self.resultdir)
-        config = 'IMAGE_BOOT_FILES = "%s*"' % get_bb_var('KERNEL_IMAGETYPE', img)
-        self.append_config(config)
-        self.assertEqual(0, runCmd(cmd).status)
-        self.remove_config(config)
-        self.assertEqual(1, len(glob(self.resultdir + "sdimage-bootpart-*direct")))
-
-    def test_sparse_copy(self):
-        """Test sparse_copy with FIEMAP and SEEK_HOLE filemap APIs"""
-        libpath = os.path.join(get_bb_var('COREBASE'), 'scripts', 'lib', 'wic')
-        sys.path.insert(0, libpath)
-        from  filemap import FilemapFiemap, FilemapSeek, sparse_copy, ErrorNotSupp
-        with NamedTemporaryFile("w", suffix=".wic-sparse") as sparse:
-            src_name = sparse.name
-            src_size = 1024 * 10
-            sparse.truncate(src_size)
-            # write one byte to the file
-            with open(src_name, 'r+b') as sfile:
-                sfile.seek(1024 * 4)
-                sfile.write(b'\x00')
-            dest = sparse.name + '.out'
-            # copy src file to dest using different filemap APIs
-            for api in (FilemapFiemap, FilemapSeek, None):
-                if os.path.exists(dest):
-                    os.unlink(dest)
-                try:
-                    sparse_copy(sparse.name, dest, api=api)
-                except ErrorNotSupp:
-                    continue # skip unsupported API
-                dest_stat = os.stat(dest)
-                self.assertEqual(dest_stat.st_size, src_size)
-                # 8 blocks is 4K (physical sector size)
-                self.assertEqual(dest_stat.st_blocks, 8)
-            os.unlink(dest)
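The test above drives wic's filemap APIs (FilemapFiemap, FilemapSeek, sparse_copy). As background, the SEEK_HOLE/SEEK_DATA mechanism that FilemapSeek builds on can be shown without wic at all; a hedged sketch, assuming a Linux filesystem with sparse-file support:

# Create a 10 KiB sparse file with one byte of data at offset 4 KiB, then ask
# the kernel where the first data extent starts and where the next hole begins.
import os
import tempfile

with tempfile.NamedTemporaryFile() as f:
    f.truncate(10 * 1024)                   # sparse file, no blocks written yet
    os.pwrite(f.fileno(), b'\xff', 4096)    # one byte of data at 4 KiB
    data_off = os.lseek(f.fileno(), 0, os.SEEK_DATA)
    hole_off = os.lseek(f.fileno(), data_off, os.SEEK_HOLE)
    print(data_off, hole_off)               # typically 4096 and 8192 (block-aligned)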
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/targetcontrol.py b/import-layers/yocto-poky/meta/lib/oeqa/targetcontrol.py
index 3255e3a..f63936c 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/targetcontrol.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/targetcontrol.py
@@ -18,44 +18,18 @@
 from oeqa.controllers.testtargetloader import TestTargetLoader
 from abc import ABCMeta, abstractmethod
 
-logger = logging.getLogger('BitBake.QemuRunner')
-
-def get_target_controller(d):
-    testtarget = d.getVar("TEST_TARGET")
-    # old, simple names
-    if testtarget == "qemu":
-        return QemuTarget(d)
-    elif testtarget == "simpleremote":
-        return SimpleRemoteTarget(d)
-    else:
-        # use the class name
-        try:
-            # is it a core class defined here?
-            controller = getattr(sys.modules[__name__], testtarget)
-        except AttributeError:
-            # nope, perhaps a layer defined one
-            try:
-                bbpath = d.getVar("BBPATH").split(':')
-                testtargetloader = TestTargetLoader()
-                controller = testtargetloader.get_controller_module(testtarget, bbpath)
-            except ImportError as e:
-                bb.fatal("Failed to import {0} from available controller modules:\n{1}".format(testtarget,traceback.format_exc()))
-            except AttributeError as e:
-                bb.fatal("Invalid TEST_TARGET - " + str(e))
-        return controller(d)
-
-
 class BaseTarget(object, metaclass=ABCMeta):
 
     supported_image_fstypes = []
 
-    def __init__(self, d):
+    def __init__(self, d, logger):
         self.connection = None
         self.ip = None
         self.server_ip = None
         self.datetime = d.getVar('DATETIME')
         self.testdir = d.getVar("TEST_LOG_DIR")
         self.pn = d.getVar("PN")
+        self.logger = logger
 
     @abstractmethod
     def deploy(self):
@@ -65,7 +39,7 @@
         if os.path.islink(sshloglink):
             os.unlink(sshloglink)
         os.symlink(self.sshlog, sshloglink)
-        logger.info("SSH log file: %s" %  self.sshlog)
+        self.logger.info("SSH log file: %s" %  self.sshlog)
 
     @abstractmethod
     def start(self, params=None, ssh=True, extra_bootparams=None):
@@ -115,9 +89,9 @@
 
     supported_image_fstypes = ['ext3', 'ext4', 'cpio.gz', 'wic']
 
-    def __init__(self, d, image_fstype=None):
+    def __init__(self, d, logger, image_fstype=None):
 
-        super(QemuTarget, self).__init__(d)
+        super(QemuTarget, self).__init__(d, logger)
 
         self.rootfs = ''
         self.kernel = ''
@@ -145,7 +119,7 @@
         self.qemurunnerlog = os.path.join(self.testdir, 'qemurunner_log.%s' % self.datetime)
         loggerhandler = logging.FileHandler(self.qemurunnerlog)
         loggerhandler.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
-        logger.addHandler(loggerhandler)
+        self.logger.addHandler(loggerhandler)
         oe.path.symlink(os.path.basename(self.qemurunnerlog), os.path.join(self.testdir, 'qemurunner_log'), force=True)
 
         if d.getVar("DISTRO") == "poky-tiny":
@@ -156,7 +130,8 @@
                             display = d.getVar("BB_ORIGENV", False).getVar("DISPLAY"),
                             logfile = self.qemulog,
                             kernel = self.kernel,
-                            boottime = int(d.getVar("TEST_QEMUBOOT_TIMEOUT")))
+                            boottime = int(d.getVar("TEST_QEMUBOOT_TIMEOUT")),
+                            logger = logger)
         else:
             self.runner = QemuRunner(machine=d.getVar("MACHINE"),
                             rootfs=self.rootfs,
@@ -167,7 +142,8 @@
                             boottime = int(d.getVar("TEST_QEMUBOOT_TIMEOUT")),
                             use_kvm = use_kvm,
                             dump_dir = dump_dir,
-                            dump_host_cmds = d.getVar("testimage_dump_host"))
+                            dump_host_cmds = d.getVar("testimage_dump_host"),
+                            logger = logger)
 
         self.target_dumper = TargetDumper(dump_target_cmds, dump_dir, self.runner)
 
@@ -179,8 +155,8 @@
             os.unlink(qemuloglink)
         os.symlink(self.qemulog, qemuloglink)
 
-        logger.info("rootfs file: %s" %  self.rootfs)
-        logger.info("Qemu log file: %s" % self.qemulog)
+        self.logger.info("rootfs file: %s" %  self.rootfs)
+        self.logger.info("Qemu log file: %s" % self.qemulog)
         super(QemuTarget, self).deploy()
 
     def start(self, params=None, ssh=True, extra_bootparams='', runqemuparams='', launch_cmd='', discard_writes=True):
@@ -232,14 +208,14 @@
             self.port = addr.split(":")[1]
         except IndexError:
             self.port = None
-        logger.info("Target IP: %s" % self.ip)
+        self.logger.info("Target IP: %s" % self.ip)
         self.server_ip = d.getVar("TEST_SERVER_IP")
         if not self.server_ip:
             try:
                 self.server_ip = subprocess.check_output(['ip', 'route', 'get', self.ip ]).split("\n")[0].split()[-1]
             except Exception as e:
                 bb.fatal("Failed to determine the host IP address (alternatively you can set TEST_SERVER_IP with the IP address of this machine): %s" % e)
-        logger.info("Server IP: %s" % self.server_ip)
+        self.logger.info("Server IP: %s" % self.server_ip)
 
     def deploy(self):
         super(SimpleRemoteTarget, self).deploy()
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/utils/__init__.py b/import-layers/yocto-poky/meta/lib/oeqa/utils/__init__.py
index 485de03..d38a323 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/utils/__init__.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/utils/__init__.py
@@ -2,7 +2,6 @@
 from pkgutil import extend_path
 __path__ = extend_path(__path__, __name__)
 
-
 # Borrowed from CalledProcessError
 
 class CommandError(Exception):
@@ -66,3 +65,39 @@
     logger.info = _bitbake_log_info
 
     return logger
+
+def load_test_components(logger, executor):
+    import sys
+    import os
+    import importlib
+
+    from oeqa.core.context import OETestContextExecutor
+
+    components = {}
+
+    for path in sys.path:
+        base_dir = os.path.join(path, 'oeqa')
+        if os.path.exists(base_dir) and os.path.isdir(base_dir):
+            for file in os.listdir(base_dir):
+                comp_name = file
+                comp_context = os.path.join(base_dir, file, 'context.py')
+                if os.path.exists(comp_context):
+                    comp_plugin = importlib.import_module('oeqa.%s.%s' % \
+                            (comp_name, 'context'))
+                    try:
+                        if not issubclass(comp_plugin._executor_class,
+                                OETestContextExecutor):
+                            raise TypeError("Component %s in %s, _executor_class "\
+                                "isn't derived from OETestContextExecutor."\
+                                % (comp_name, comp_context))
+
+                        if comp_plugin._executor_class._script_executor \
+                                != executor:
+                            continue
+
+                        components[comp_name] = comp_plugin._executor_class()
+                    except AttributeError:
+                        raise AttributeError("Component %s in %s doesn't have "\
+                                "_executor_class defined." % (comp_name, comp_context))
+
+    return components
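A hedged usage sketch for the new load_test_components() helper; the 'oe-test' executor name and the component names in the comment are assumptions, chosen only to illustrate the returned dictionary:

import logging
from oeqa.utils import load_test_components

logger = logging.getLogger('oe-test')
# Returns e.g. {'runtime': <executor>, 'selftest': <executor>, ...} for every
# oeqa component whose context.py declares _script_executor == 'oe-test'.
components = load_test_components(logger, 'oe-test')
for name, executor in components.items():
    logger.info('Found test component: %s', name)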
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/utils/buildproject.py b/import-layers/yocto-poky/meta/lib/oeqa/utils/buildproject.py
index 487f08b..721f35d 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/utils/buildproject.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/utils/buildproject.py
@@ -52,4 +52,4 @@
 
     def clean(self):
         self._run('rm -rf %s' % self.targetdir)
-        subprocess.call('rm -f %s' % self.localarchive, shell=True)
+        subprocess.check_call('rm -f %s' % self.localarchive, shell=True)
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/utils/commands.py b/import-layers/yocto-poky/meta/lib/oeqa/utils/commands.py
index 57286fc..0bb9002 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/utils/commands.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/utils/commands.py
@@ -13,6 +13,7 @@
 import signal
 import subprocess
 import threading
+import time
 import logging
 from oeqa.utils import CommandError
 from oeqa.utils import ftools
@@ -25,7 +26,7 @@
     pass
 
 class Command(object):
-    def __init__(self, command, bg=False, timeout=None, data=None, **options):
+    def __init__(self, command, bg=False, timeout=None, data=None, output_log=None, **options):
 
         self.defaultopts = {
             "stdout": subprocess.PIPE,
@@ -48,41 +49,103 @@
         self.options.update(options)
 
         self.status = None
+        # We collect chunks of output before joining them at the end.
+        self._output_chunks = []
+        self._error_chunks = []
         self.output = None
         self.error = None
-        self.thread = None
+        self.threads = []
 
+        self.output_log = output_log
         self.log = logging.getLogger("utils.commands")
 
     def run(self):
         self.process = subprocess.Popen(self.cmd, **self.options)
 
-        def commThread():
-            self.output, self.error = self.process.communicate(self.data)
+        def readThread(output, stream, logfunc):
+            if logfunc:
+                for line in stream:
+                    output.append(line)
+                    logfunc(line.decode("utf-8", errors='replace').rstrip())
+            else:
+                output.append(stream.read())
 
-        self.thread = threading.Thread(target=commThread)
-        self.thread.start()
+        def readStderrThread():
+            readThread(self._error_chunks, self.process.stderr, self.output_log.error if self.output_log else None)
+
+        def readStdoutThread():
+            readThread(self._output_chunks, self.process.stdout, self.output_log.info if self.output_log else None)
+
+        def writeThread():
+            try:
+                self.process.stdin.write(self.data)
+                self.process.stdin.close()
+            except OSError as ex:
+                # It's not an error when the command does not consume all
+                # of our data. subprocess.communicate() also ignores that.
+                if ex.errno != EPIPE:
+                    raise
+
+        # We write in a separate thread because then we can read
+        # without worrying about deadlocks. The additional thread is
+        # expected to terminate by itself and we mark it as a daemon,
+        # so even if it should happen to not terminate for whatever
+        # reason, the main process will still exit, which will then
+        # kill the write thread.
+        if self.data:
+            threading.Thread(target=writeThread, daemon=True).start()
+        if self.process.stderr:
+            thread = threading.Thread(target=readStderrThread)
+            thread.start()
+            self.threads.append(thread)
+        if self.output_log:
+            self.output_log.info('Running: %s' % self.cmd)
+        thread = threading.Thread(target=readStdoutThread)
+        thread.start()
+        self.threads.append(thread)
 
         self.log.debug("Running command '%s'" % self.cmd)
 
         if not self.bg:
-            self.thread.join(self.timeout)
+            if self.timeout is None:
+                for thread in self.threads:
+                    thread.join()
+            else:
+                deadline = time.time() + self.timeout
+                for thread in self.threads:
+                    timeout = deadline - time.time() 
+                    if timeout < 0:
+                        timeout = 0
+                    thread.join(timeout)
             self.stop()
 
     def stop(self):
-        if self.thread.isAlive():
-            self.process.terminate()
+        for thread in self.threads:
+            if thread.isAlive():
+                self.process.terminate()
             # let's give it more time to terminate gracefully before killing it
-            self.thread.join(5)
-            if self.thread.isAlive():
+            thread.join(5)
+            if thread.isAlive():
                 self.process.kill()
-                self.thread.join()
+                thread.join()
 
-        if not self.output:
-            self.output = ""
-        else:
-            self.output = self.output.decode("utf-8", errors='replace').rstrip()
-        self.status = self.process.poll()
+        def finalize_output(data):
+            if not data:
+                data = ""
+            else:
+                data = b"".join(data)
+                data = data.decode("utf-8", errors='replace').rstrip()
+            return data
+
+        self.output = finalize_output(self._output_chunks)
+        self._output_chunks = None
+        # self.error used to be a byte string earlier, probably unintentionally.
+        # Now it is a normal string, just like self.output.
+        self.error = finalize_output(self._error_chunks)
+        self._error_chunks = None
+        # At this point we know that the process has closed stdout/stderr, so
+        # it is safe and necessary to wait for the actual process completion.
+        self.status = self.process.wait()
 
         self.log.debug("Command '%s' returned %d as exit code." % (self.cmd, self.status))
         # logging the complete output is insane
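The rework above replaces the single communicate() thread with one reader thread per output stream plus a daemon writer thread, so a full pipe buffer on either side can no longer deadlock the test. A minimal standalone sketch of that pattern, independent of the Command class (simplified, bytes in and bytes out):

import subprocess
import threading

def run_with_threads(cmd, data=None):
    """Run cmd, feeding optional bytes on stdin, draining stdout/stderr in threads."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    chunks = {'out': [], 'err': []}

    def writer():
        if data:
            proc.stdin.write(data)
        proc.stdin.close()

    def reader(stream, key):
        chunks[key].append(stream.read())

    # Daemon writer: if the child never reads stdin, it won't block our exit.
    threading.Thread(target=writer, daemon=True).start()
    readers = [threading.Thread(target=reader, args=(proc.stdout, 'out')),
               threading.Thread(target=reader, args=(proc.stderr, 'err'))]
    for t in readers:
        t.start()
    for t in readers:
        t.join()
    # Streams are drained, so waiting for the process cannot deadlock.
    return proc.wait(), b"".join(chunks['out']), b"".join(chunks['err'])

# e.g. status, out, err = run_with_threads(['uname', '-r'])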
@@ -98,7 +161,7 @@
 
 
 def runCmd(command, ignore_status=False, timeout=None, assert_error=True,
-          native_sysroot=None, limit_exc_output=0, **options):
+          native_sysroot=None, limit_exc_output=0, output_log=None, **options):
     result = Result()
 
     if native_sysroot:
@@ -108,7 +171,7 @@
         nenv['PATH'] = extra_paths + ':' + nenv.get('PATH', '')
         options['env'] = nenv
 
-    cmd = Command(command, timeout=timeout, **options)
+    cmd = Command(command, timeout=timeout, output_log=output_log, **options)
     cmd.run()
 
     result.command = command
@@ -132,7 +195,7 @@
     return result
 
 
-def bitbake(command, ignore_status=False, timeout=None, postconfig=None, **options):
+def bitbake(command, ignore_status=False, timeout=None, postconfig=None, output_log=None, **options):
 
     if postconfig:
         postconfig_file = os.path.join(os.environ.get('BUILDDIR'), 'oeqa-post.conf')
@@ -147,7 +210,7 @@
         cmd = [ "bitbake" ] + [a for a in (command + extra_args.split(" ")) if a not in [""]]
 
     try:
-        return runCmd(cmd, ignore_status, timeout, **options)
+        return runCmd(cmd, ignore_status, timeout, output_log=output_log, **options)
     finally:
         if postconfig:
             os.remove(postconfig_file)
@@ -233,6 +296,12 @@
     import bb.tinfoil
     import bb.build
 
+    # Need a non-'BitBake' logger to capture the runner output
+    targetlogger = logging.getLogger('TargetRunner')
+    targetlogger.setLevel(logging.DEBUG)
+    handler = logging.StreamHandler(sys.stdout)
+    targetlogger.addHandler(handler)
+
     tinfoil = bb.tinfoil.Tinfoil()
     tinfoil.prepare(config_only=False, quiet=True)
     try:
@@ -250,31 +319,15 @@
         for key, value in overrides.items():
             recipedata.setVar(key, value)
 
-        # The QemuRunner log is saved out, but we need to ensure it is at the right
-        # log level (and then ensure that since it's a child of the BitBake logger,
-        # we disable propagation so we don't then see the log events on the console)
-        logger = logging.getLogger('BitBake.QemuRunner')
-        logger.setLevel(logging.DEBUG)
-        logger.propagate = False
         logdir = recipedata.getVar("TEST_LOG_DIR")
 
-        qemu = oeqa.targetcontrol.QemuTarget(recipedata, image_fstype)
+        qemu = oeqa.targetcontrol.QemuTarget(recipedata, targetlogger, image_fstype)
     finally:
         # We need to shut down tinfoil early here in case we actually want
         # to run tinfoil-using utilities with the running QEMU instance.
         # Luckily QemuTarget doesn't need it after the constructor.
         tinfoil.shutdown()
 
-    # Setup bitbake logger as console handler is removed by tinfoil.shutdown
-    bblogger = logging.getLogger('BitBake')
-    bblogger.setLevel(logging.INFO)
-    console = logging.StreamHandler(sys.stdout)
-    bbformat = bb.msg.BBLogFormatter("%(levelname)s: %(message)s")
-    if sys.stdout.isatty():
-        bbformat.enable_color()
-    console.setFormatter(bbformat)
-    bblogger.addHandler(console)
-
     try:
         qemu.deploy()
         try:
@@ -289,6 +342,7 @@
             qemu.stop()
         except:
             pass
+    targetlogger.removeHandler(handler)
 
 def updateEnv(env_file):
     """
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/utils/git.py b/import-layers/yocto-poky/meta/lib/oeqa/utils/git.py
index e0cb3f0..757e3f0 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/utils/git.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/utils/git.py
@@ -64,7 +64,7 @@
     def rev_parse(self, revision):
         """Do git rev-parse"""
         try:
-            return self.run_cmd(['rev-parse', revision])
+            return self.run_cmd(['rev-parse', '--verify', revision])
         except GitError:
             # Revision does not exist
             return None
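A hedged usage sketch for the rev_parse() helper above; the GitRepo constructor argument is an assumption (a path to an existing git checkout), and the None fallback follows from the try/except in this change:

from oeqa.utils.git import GitRepo

repo = GitRepo('/path/to/some/git/checkout')   # hypothetical path
sha = repo.rev_parse('HEAD')                   # resolved revision, or None
if repo.rev_parse('no-such-ref') is None:
    print('revision could not be verified')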
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/utils/logparser.py b/import-layers/yocto-poky/meta/lib/oeqa/utils/logparser.py
index b377dcd..0670627 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/utils/logparser.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/utils/logparser.py
@@ -9,7 +9,7 @@
 # A parser that can be used to identify whether a line is a test result or a section statement.
 class Lparser(object):
 
-    def __init__(self, test_0_pass_regex, test_0_fail_regex, section_0_begin_regex=None, section_0_end_regex=None, **kwargs):
+    def __init__(self, test_0_pass_regex, test_0_fail_regex, test_0_skip_regex, section_0_begin_regex=None, section_0_end_regex=None, **kwargs):
         # Initialize the arguments dictionary
         if kwargs:
             self.args = kwargs
@@ -19,12 +19,13 @@
         # Add the default args to the dictionary
         self.args['test_0_pass_regex'] = test_0_pass_regex
         self.args['test_0_fail_regex'] = test_0_fail_regex
+        self.args['test_0_skip_regex'] = test_0_skip_regex
         if section_0_begin_regex:
             self.args['section_0_begin_regex'] = section_0_begin_regex
         if section_0_end_regex:
             self.args['section_0_end_regex'] = section_0_end_regex
 
-        self.test_possible_status = ['pass', 'fail', 'error']
+        self.test_possible_status = ['pass', 'fail', 'error', 'skip']
         self.section_possible_status = ['begin', 'end']
 
         self.initialized = False
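A hedged construction sketch for the extended parser; the regexes are illustrative ptest-style patterns, not necessarily what any given caller passes:

# Build a parser that recognizes PASS/FAIL/SKIP result lines plus
# BEGIN/END section markers (patterns here are only examples).
parser = Lparser(test_0_pass_regex="^PASS:(.+)",
                 test_0_fail_regex="^FAIL:(.+)",
                 test_0_skip_regex="^SKIP:(.+)",
                 section_0_begin_regex="^BEGIN: (.+)",
                 section_0_end_regex="^END: (.+)")
parser.init()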
@@ -108,7 +109,7 @@
             prefix = ''
             for x in test_status:
                 prefix +=x+'.'
-            if (section != ''):
+            if section:
                 prefix += section
             section_file = os.path.join(target_dir, prefix)
             # purge the file contents if it exists
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/utils/metadata.py b/import-layers/yocto-poky/meta/lib/oeqa/utils/metadata.py
index cb81155..65bbdc6 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/utils/metadata.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/utils/metadata.py
@@ -10,19 +10,9 @@
 from xml.dom.minidom import parseString
 from xml.etree.ElementTree import Element, tostring
 
+from oe.lsb import get_os_release
 from oeqa.utils.commands import runCmd, get_bb_vars
 
-def get_os_release():
-    """Get info from /etc/os-release as a dict"""
-    data = OrderedDict()
-    os_release_file = '/etc/os-release'
-    if not os.path.exists(os_release_file):
-        return None
-    with open(os_release_file) as fobj:
-        for line in fobj:
-            key, value = line.split('=', 1)
-            data[key.strip().lower()] = value.strip().strip('"')
-    return data
 
 def metadata_from_bb():
     """ Returns test's metadata as OrderedDict.
@@ -45,9 +35,9 @@
     os_release = get_os_release()
     if os_release:
         info_dict['host_distro'] = OrderedDict()
-        for key in ('id', 'version_id', 'pretty_name'):
+        for key in ('ID', 'VERSION_ID', 'PRETTY_NAME'):
             if key in os_release:
-                info_dict['host_distro'][key] = os_release[key]
+                info_dict['host_distro'][key.lower()] = os_release[key]
 
     info_dict['layers'] = get_layers(data_dict['BBLAYERS'])
     info_dict['bitbake'] = git_rev_info(os.path.dirname(bb.__file__))
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/utils/qemurunner.py b/import-layers/yocto-poky/meta/lib/oeqa/utils/qemurunner.py
index ba44b96..0631d43 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/utils/qemurunner.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/utils/qemurunner.py
@@ -17,11 +17,8 @@
 import string
 import threading
 import codecs
-from oeqa.utils.dump import HostDumper
-
 import logging
-logger = logging.getLogger("BitBake.QemuRunner")
-logger.addHandler(logging.StreamHandler())
+from oeqa.utils.dump import HostDumper
 
 # Get Unicode non printable control chars
 control_range = list(range(0,32))+list(range(127,160))
@@ -31,7 +28,7 @@
 
 class QemuRunner:
 
-    def __init__(self, machine, rootfs, display, tmpdir, deploy_dir_image, logfile, boottime, dump_dir, dump_host_cmds, use_kvm):
+    def __init__(self, machine, rootfs, display, tmpdir, deploy_dir_image, logfile, boottime, dump_dir, dump_host_cmds, use_kvm, logger):
 
         # Popen object for runqemu
         self.runqemu = None
@@ -54,10 +51,14 @@
         self.logged = False
         self.thread = None
         self.use_kvm = use_kvm
+        self.msg = ''
 
-        self.runqemutime = 60
+        self.runqemutime = 120
+        self.qemu_pidfile = 'pidfile_'+str(os.getpid())
         self.host_dumper = HostDumper(dump_host_cmds, dump_dir)
 
+        self.logger = logger
+
     def create_socket(self):
         try:
             sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
@@ -65,7 +66,7 @@
             sock.bind(("127.0.0.1",0))
             sock.listen(2)
             port = sock.getsockname()[1]
-            logger.info("Created listening socket for qemu serial console on: 127.0.0.1:%s" % port)
+            self.logger.debug("Created listening socket for qemu serial console on: 127.0.0.1:%s" % port)
             return (sock, port)
 
         except socket.error:
@@ -78,6 +79,7 @@
             # because is possible to have control characters
             msg = msg.decode("utf-8", errors='ignore')
             msg = re_control_char.sub('', msg)
+            self.msg += msg
             with codecs.open(self.logfile, "a", encoding="utf-8") as f:
                 f.write("%s" % msg)
 
@@ -91,58 +93,63 @@
     def handleSIGCHLD(self, signum, frame):
         if self.runqemu and self.runqemu.poll():
             if self.runqemu.returncode:
-                logger.info('runqemu exited with code %d' % self.runqemu.returncode)
-                logger.info("Output from runqemu:\n%s" % self.getOutput(self.runqemu.stdout))
+                self.logger.debug('runqemu exited with code %d' % self.runqemu.returncode)
+                self.logger.debug("Output from runqemu:\n%s" % self.getOutput(self.runqemu.stdout))
                 self.stop()
                 self._dump_host()
                 raise SystemExit
 
     def start(self, qemuparams = None, get_ip = True, extra_bootparams = None, runqemuparams='', launch_cmd=None, discard_writes=True):
+        env = os.environ.copy()
         if self.display:
-            os.environ["DISPLAY"] = self.display
+            env["DISPLAY"] = self.display
             # Set this flag so that Qemu doesn't do any grabs as SDL grabs
             # interact badly with screensavers.
-            os.environ["QEMU_DONT_GRAB"] = "1"
+            env["QEMU_DONT_GRAB"] = "1"
         if not os.path.exists(self.rootfs):
-            logger.error("Invalid rootfs %s" % self.rootfs)
+            self.logger.error("Invalid rootfs %s" % self.rootfs)
             return False
         if not os.path.exists(self.tmpdir):
-            logger.error("Invalid TMPDIR path %s" % self.tmpdir)
+            self.logger.error("Invalid TMPDIR path %s" % self.tmpdir)
             return False
         else:
-            os.environ["OE_TMPDIR"] = self.tmpdir
+            env["OE_TMPDIR"] = self.tmpdir
         if not os.path.exists(self.deploy_dir_image):
-            logger.error("Invalid DEPLOY_DIR_IMAGE path %s" % self.deploy_dir_image)
+            self.logger.error("Invalid DEPLOY_DIR_IMAGE path %s" % self.deploy_dir_image)
             return False
         else:
-            os.environ["DEPLOY_DIR_IMAGE"] = self.deploy_dir_image
+            env["DEPLOY_DIR_IMAGE"] = self.deploy_dir_image
 
         if not launch_cmd:
             launch_cmd = 'runqemu %s %s ' % ('snapshot' if discard_writes else '', runqemuparams)
             if self.use_kvm:
-                logger.info('Using kvm for runqemu')
+                self.logger.debug('Using kvm for runqemu')
                 launch_cmd += ' kvm'
             else:
-                logger.info('Not using kvm for runqemu')
+                self.logger.debug('Not using kvm for runqemu')
             if not self.display:
                 launch_cmd += ' nographic'
             launch_cmd += ' %s %s' % (self.machine, self.rootfs)
 
-        return self.launch(launch_cmd, qemuparams=qemuparams, get_ip=get_ip, extra_bootparams=extra_bootparams)
+        return self.launch(launch_cmd, qemuparams=qemuparams, get_ip=get_ip, extra_bootparams=extra_bootparams, env=env)
 
-    def launch(self, launch_cmd, get_ip = True, qemuparams = None, extra_bootparams = None):
+    def launch(self, launch_cmd, get_ip = True, qemuparams = None, extra_bootparams = None, env = None):
         try:
             threadsock, threadport = self.create_socket()
             self.server_socket, self.serverport = self.create_socket()
         except socket.error as msg:
-            logger.error("Failed to create listening socket: %s" % msg[1])
+            self.logger.error("Failed to create listening socket: %s" % msg[1])
             return False
 
         bootparams = 'console=tty1 console=ttyS0,115200n8 printk.time=1'
         if extra_bootparams:
             bootparams = bootparams + ' ' + extra_bootparams
 
-        self.qemuparams = 'bootparams="{0}" qemuparams="-serial tcp:127.0.0.1:{1}"'.format(bootparams, threadport)
+        # Ask QEMU to store its process PID in a file; this way we don't have to
+        # parse running processes and analyze descendants in order to determine it.
+        if os.path.exists(self.qemu_pidfile):
+            os.remove(self.qemu_pidfile)
+        self.qemuparams = 'bootparams="{0}" qemuparams="-serial tcp:127.0.0.1:{1} -pidfile {2}"'.format(bootparams, threadport, self.qemu_pidfile)
         if qemuparams:
             self.qemuparams = self.qemuparams[:-1] + " " + qemuparams + " " + '\"'
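The -pidfile switch added above means later code can learn the QEMU PID straight from a file instead of walking the process tree. A minimal hedged sketch of consuming such a pidfile (illustrative helper, not the module's actual is_alive() logic):

import os

def read_qemu_pid(pidfile):
    """Return the PID QEMU wrote via -pidfile, or None if not available yet."""
    if not os.path.exists(pidfile):
        return None          # QEMU may not have finished starting
    with open(pidfile) as f:
        contents = f.read().strip()
    return int(contents) if contents else None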
 
@@ -151,13 +158,13 @@
         self.origchldhandler = signal.getsignal(signal.SIGCHLD)
         signal.signal(signal.SIGCHLD, self.handleSIGCHLD)
 
-        logger.info('launchcmd=%s'%(launch_cmd))
+        self.logger.debug('launchcmd=%s'%(launch_cmd))
 
         # FIXME: We pass in stdin=subprocess.PIPE here to work around stty
         # blocking at the end of the runqemu script when using this within
         # oe-selftest (this makes stty error out immediately). There ought
         # to be a proper fix but this will suffice for now.
-        self.runqemu = subprocess.Popen(launch_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, stdin=subprocess.PIPE, preexec_fn=os.setpgrp)
+        self.runqemu = subprocess.Popen(launch_cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, stdin=subprocess.PIPE, preexec_fn=os.setpgrp, env=env)
         output = self.runqemu.stdout
 
         #
@@ -186,143 +193,149 @@
             os.killpg(os.getpgid(self.runqemu.pid), signal.SIGTERM)
             sys.exit(0)
 
-        logger.info("runqemu started, pid is %s" % self.runqemu.pid)
-        logger.info("waiting at most %s seconds for qemu pid" % self.runqemutime)
+        self.logger.debug("runqemu started, pid is %s" % self.runqemu.pid)
+        self.logger.debug("waiting at most %s seconds for qemu pid" % self.runqemutime)
         endtime = time.time() + self.runqemutime
         while not self.is_alive() and time.time() < endtime:
             if self.runqemu.poll():
                 if self.runqemu.returncode:
                     # No point waiting any longer
-                    logger.info('runqemu exited with code %d' % self.runqemu.returncode)
+                    self.logger.debug('runqemu exited with code %d' % self.runqemu.returncode)
                     self._dump_host()
                     self.stop()
-                    logger.info("Output from runqemu:\n%s" % self.getOutput(output))
+                    self.logger.debug("Output from runqemu:\n%s" % self.getOutput(output))
                     return False
-            time.sleep(1)
+            time.sleep(0.5)
 
-        out = self.getOutput(output)
-        netconf = False # network configuration is not required by default
-        if self.is_alive():
-            logger.info("qemu started - qemu procces pid is %s" % self.qemupid)
-            if get_ip:
-                cmdline = ''
-                with open('/proc/%s/cmdline' % self.qemupid) as p:
-                    cmdline = p.read()
-                    # It is needed to sanitize the data received
-                    # because is possible to have control characters
-                    cmdline = re_control_char.sub('', cmdline)
-                try:
-                    ips = re.findall("((?:[0-9]{1,3}\.){3}[0-9]{1,3})", cmdline.split("ip=")[1])
-                    self.ip = ips[0]
-                    self.server_ip = ips[1]
-                    logger.info("qemu cmdline used:\n{}".format(cmdline))
-                except (IndexError, ValueError):
-                    # Try to get network configuration from runqemu output
-                    match = re.match('.*Network configuration: ([0-9.]+)::([0-9.]+):([0-9.]+)$.*',
-                                     out, re.MULTILINE|re.DOTALL)
-                    if match:
-                        self.ip, self.server_ip, self.netmask = match.groups()
-                        # network configuration is required as we couldn't get it
-                        # from the runqemu command line, so qemu doesn't run kernel
-                        # and guest networking is not configured
-                        netconf = True
-                    else:
-                        logger.error("Couldn't get ip from qemu command line and runqemu output! "
-                                     "Here is the qemu command line used:\n%s\n"
-                                     "and output from runqemu:\n%s" % (cmdline, out))
-                        self._dump_host()
-                        self.stop()
-                        return False
-
-                logger.info("Target IP: %s" % self.ip)
-                logger.info("Server IP: %s" % self.server_ip)
-
-            self.thread = LoggingThread(self.log, threadsock, logger)
-            self.thread.start()
-            if not self.thread.connection_established.wait(self.boottime):
-                logger.error("Didn't receive a console connection from qemu. "
-                             "Here is the qemu command line used:\n%s\nand "
-                             "output from runqemu:\n%s" % (cmdline, out))
-                self.stop_thread()
-                return False
-
-            logger.info("Output from runqemu:\n%s", out)
-            logger.info("Waiting at most %d seconds for login banner" % self.boottime)
-            endtime = time.time() + self.boottime
-            socklist = [self.server_socket]
-            reachedlogin = False
-            stopread = False
-            qemusock = None
-            bootlog = ''
-            data = b''
-            while time.time() < endtime and not stopread:
-                try:
-                    sread, swrite, serror = select.select(socklist, [], [], 5)
-                except InterruptedError:
-                    continue
-                for sock in sread:
-                    if sock is self.server_socket:
-                        qemusock, addr = self.server_socket.accept()
-                        qemusock.setblocking(0)
-                        socklist.append(qemusock)
-                        socklist.remove(self.server_socket)
-                        logger.info("Connection from %s:%s" % addr)
-                    else:
-                        data = data + sock.recv(1024)
-                        if data:
-                            try:
-                                data = data.decode("utf-8", errors="surrogateescape")
-                                bootlog += data
-                                data = b''
-                                if re.search(".* login:", bootlog):
-                                    self.server_socket = qemusock
-                                    stopread = True
-                                    reachedlogin = True
-                                    logger.info("Reached login banner")
-                            except UnicodeDecodeError:
-                                continue
-                        else:
-                            socklist.remove(sock)
-                            sock.close()
-                            stopread = True
-
-            if not reachedlogin:
-                logger.info("Target didn't reached login boot in %d seconds" % self.boottime)
-                lines = "\n".join(bootlog.splitlines()[-25:])
-                logger.info("Last 25 lines of text:\n%s" % lines)
-                logger.info("Check full boot log: %s" % self.logfile)
-                self._dump_host()
-                self.stop()
-                return False
-
-            # If we are not able to login the tests can continue
-            try:
-                (status, output) = self.run_serial("root\n", raw=True)
-                if re.search("root@[a-zA-Z0-9\-]+:~#", output):
-                    self.logged = True
-                    logger.info("Logged as root in serial console")
-                    if netconf:
-                        # configure guest networking
-                        cmd = "ifconfig eth0 %s netmask %s up\n" % (self.ip, self.netmask)
-                        output = self.run_serial(cmd, raw=True)[1]
-                        if re.search("root@[a-zA-Z0-9\-]+:~#", output):
-                            logger.info("configured ip address %s", self.ip)
-                        else:
-                            logger.info("Couldn't configure guest networking")
-                else:
-                    logger.info("Couldn't login into serial console"
-                            " as root using blank password")
-            except:
-                logger.info("Serial console failed while trying to login")
-
-        else:
-            logger.info("Qemu pid didn't appeared in %s seconds" % self.runqemutime)
+        if not self.is_alive():
+            self.logger.error("Qemu pid didn't appear in %s seconds" % self.runqemutime)
+            # Dump all processes to help us figure out what is going on...
+            ps = subprocess.Popen(['ps', 'axww', '-o', 'pid,ppid,command '], stdout=subprocess.PIPE).communicate()[0]
+            processes = ps.decode("utf-8")
+            self.logger.debug("Running processes:\n%s" % processes)
             self._dump_host()
             self.stop()
-            logger.info("Output from runqemu:\n%s" % self.getOutput(output))
+            op = self.getOutput(output)
+            if op:
+                self.logger.error("Output from runqemu:\n%s" % op)
+            else:
+                self.logger.error("No output from runqemu.\n")
             return False
 
-        return self.is_alive()
+        # We are alive: qemu is running
+        out = self.getOutput(output)
+        netconf = False # network configuration is not required by default
+        self.logger.debug("qemu started in %s seconds - qemu procces pid is %s" % (time.time() - (endtime - self.runqemutime), self.qemupid))
+        if get_ip:
+            cmdline = ''
+            with open('/proc/%s/cmdline' % self.qemupid) as p:
+                cmdline = p.read()
+                # Sanitize the data received, since it may
+                # contain control characters
+                cmdline = re_control_char.sub(' ', cmdline)
+            try:
+                ips = re.findall("((?:[0-9]{1,3}\.){3}[0-9]{1,3})", cmdline.split("ip=")[1])
+                self.ip = ips[0]
+                self.server_ip = ips[1]
+                self.logger.debug("qemu cmdline used:\n{}".format(cmdline))
+            except (IndexError, ValueError):
+                # Try to get network configuration from runqemu output
+                match = re.match('.*Network configuration: ([0-9.]+)::([0-9.]+):([0-9.]+)$.*',
+                                 out, re.MULTILINE|re.DOTALL)
+                if match:
+                    self.ip, self.server_ip, self.netmask = match.groups()
+                    # We couldn't get the network configuration from the
+                    # runqemu command line, so qemu is not booting the kernel
+                    # directly and guest networking still needs to be configured
+                    netconf = True
+                else:
+                    self.logger.error("Couldn't get ip from qemu command line and runqemu output! "
+                                 "Here is the qemu command line used:\n%s\n"
+                                 "and output from runqemu:\n%s" % (cmdline, out))
+                    self._dump_host()
+                    self.stop()
+                    return False
+
+        self.logger.debug("Target IP: %s" % self.ip)
+        self.logger.debug("Server IP: %s" % self.server_ip)
+
+        self.thread = LoggingThread(self.log, threadsock, self.logger)
+        self.thread.start()
+        if not self.thread.connection_established.wait(self.boottime):
+            self.logger.error("Didn't receive a console connection from qemu. "
+                         "Here is the qemu command line used:\n%s\nand "
+                         "output from runqemu:\n%s" % (cmdline, out))
+            self.stop_thread()
+            return False
+
+        self.logger.debug("Output from runqemu:\n%s", out)
+        self.logger.debug("Waiting at most %d seconds for login banner" % self.boottime)
+        endtime = time.time() + self.boottime
+        socklist = [self.server_socket]
+        reachedlogin = False
+        stopread = False
+        qemusock = None
+        bootlog = b''
+        data = b''
+        while time.time() < endtime and not stopread:
+            try:
+                sread, swrite, serror = select.select(socklist, [], [], 5)
+            except InterruptedError:
+                continue
+            for sock in sread:
+                if sock is self.server_socket:
+                    qemusock, addr = self.server_socket.accept()
+                    qemusock.setblocking(0)
+                    socklist.append(qemusock)
+                    socklist.remove(self.server_socket)
+                    self.logger.debug("Connection from %s:%s" % addr)
+                else:
+                    data = data + sock.recv(1024)
+                    if data:
+                        bootlog += data
+                        data = b''
+                        if b' login:' in bootlog:
+                            self.server_socket = qemusock
+                            stopread = True
+                            reachedlogin = True
+                            self.logger.debug("Reached login banner")
+                    else:
+                        socklist.remove(sock)
+                        sock.close()
+                        stopread = True
+
+
+        if not reachedlogin:
+            self.logger.debug("Target didn't reached login boot in %d seconds" % self.boottime)
+            tail = lambda l: "\n".join(l.splitlines()[-25:])
+            # in case bootlog is empty, use tail qemu log store at self.msg
+            lines = tail(bootlog if bootlog else self.msg)
+            self.logger.debug("Last 25 lines of text:\n%s" % lines)
+            self.logger.debug("Check full boot log: %s" % self.logfile)
+            self._dump_host()
+            self.stop()
+            return False
+
+        # Even if we are not able to log in, the tests can continue
+        try:
+            (status, output) = self.run_serial("root\n", raw=True)
+            if re.search("root@[a-zA-Z0-9\-]+:~#", output):
+                self.logged = True
+                self.logger.debug("Logged as root in serial console")
+                if netconf:
+                    # configure guest networking
+                    cmd = "ifconfig eth0 %s netmask %s up\n" % (self.ip, self.netmask)
+                    output = self.run_serial(cmd, raw=True)[1]
+                    if re.search("root@[a-zA-Z0-9\-]+:~#", output):
+                        self.logger.debug("configured ip address %s", self.ip)
+                    else:
+                        self.logger.debug("Couldn't configure guest networking")
+            else:
+                self.logger.debug("Couldn't login into serial console"
+                            " as root using blank password")
+        except:
+            self.logger.debug("Serial console failed while trying to login")
+        return True
 
     def stop(self):
         self.stop_thread()
@@ -332,7 +345,7 @@
         if self.runqemu:
             if hasattr(self, "monitorpid"):
                 os.kill(self.monitorpid, signal.SIGKILL)
-                logger.info("Sending SIGTERM to runqemu")
+                self.logger.debug("Sending SIGTERM to runqemu")
                 try:
                     os.killpg(os.getpgid(self.runqemu.pid), signal.SIGTERM)
                 except OSError as e:
@@ -342,7 +355,7 @@
             while self.runqemu.poll() is None and time.time() < endtime:
                 time.sleep(1)
             if self.runqemu.poll() is None:
-                logger.info("Sending SIGKILL to runqemu")
+                self.logger.debug("Sending SIGKILL to runqemu")
                 os.killpg(os.getpgid(self.runqemu.pid), signal.SIGKILL)
             self.runqemu = None
         if hasattr(self, 'server_socket') and self.server_socket:
@@ -350,6 +363,8 @@
             self.server_socket = None
         self.qemupid = None
         self.ip = None
+        if os.path.exists(self.qemu_pidfile):
+            os.remove(self.qemu_pidfile)
 
     def stop_qemu_system(self):
         if self.qemupid:
@@ -357,7 +372,7 @@
                 # qemu-system behaves well and a SIGTERM is enough
                 os.kill(self.qemupid, signal.SIGTERM)
             except ProcessLookupError as e:
-                logger.warn('qemu-system ended unexpectedly')
+                self.logger.warn('qemu-system ended unexpectedly')
 
     def stop_thread(self):
         if self.thread and self.thread.is_alive():
@@ -365,7 +380,7 @@
             self.thread.join()
 
     def restart(self, qemuparams = None):
-        logger.info("Restarting qemu process")
+        self.logger.debug("Restarting qemu process")
         if self.runqemu.poll() is None:
             self.stop()
         if self.start(qemuparams):
@@ -375,56 +390,16 @@
     def is_alive(self):
         if not self.runqemu:
             return False
-        qemu_child = self.find_child(str(self.runqemu.pid))
-        if qemu_child:
-            self.qemupid = qemu_child[0]
-            if os.path.exists("/proc/" + str(self.qemupid)):
+        if os.path.isfile(self.qemu_pidfile):
+            f = open(self.qemu_pidfile, 'r')
+            qemu_pid = f.read()
+            f.close()
+            qemupid = int(qemu_pid)
+            if os.path.exists("/proc/" + str(qemupid)):
+                self.qemupid = qemupid
                 return True
         return False
 
-    def find_child(self,parent_pid):
-        #
-        # Walk the process tree from the process specified looking for a qemu-system. Return its [pid'cmd]
-        #
-        ps = subprocess.Popen(['ps', 'axww', '-o', 'pid,ppid,command'], stdout=subprocess.PIPE).communicate()[0]
-        processes = ps.decode("utf-8").split('\n')
-        nfields = len(processes[0].split()) - 1
-        pids = {}
-        commands = {}
-        for row in processes[1:]:
-            data = row.split(None, nfields)
-            if len(data) != 3:
-                continue
-            if data[1] not in pids:
-                pids[data[1]] = []
-
-            pids[data[1]].append(data[0])
-            commands[data[0]] = data[2]
-
-        if parent_pid not in pids:
-            return []
-
-        parents = []
-        newparents = pids[parent_pid]
-        while newparents:
-            next = []
-            for p in newparents:
-                if p in pids:
-                    for n in pids[p]:
-                        if n not in parents and n not in next:
-                            next.append(n)
-                if p not in parents:
-                    parents.append(p)
-                    newparents = next
-        #print("Children matching %s:" % str(parents))
-        for p in parents:
-            # Need to be careful here since runqemu runs "ldd qemu-system-xxxx"
-            # Also, old versions of ldd (2.11) run "LD_XXXX qemu-system-xxxx"
-            basecmd = commands[p].split()[0]
-            basecmd = os.path.basename(basecmd)
-            if "qemu-system" in basecmd and "-serial tcp" in commands[p]:
-                return [int(p),commands[p]]
-
     def run_serial(self, command, raw=False, timeout=5):
         # We assume target system have echo to get command status
         if not raw:
@@ -474,7 +449,7 @@
 
     def _dump_host(self):
         self.host_dumper.create_dir("qemu")
-        logger.warn("Qemu ended unexpectedly, dump data from host"
+        self.logger.warn("Qemu ended unexpectedly, dump data from host"
                 " is in %s" % self.host_dumper.dump_dir)
         self.host_dumper.dump_host()
 
@@ -503,17 +478,17 @@
             self.teardown()
 
     def run(self):
-        self.logger.info("Starting logging thread")
+        self.logger.debug("Starting logging thread")
         self.readpipe, self.writepipe = os.pipe()
         threading.Thread.run(self)
 
     def stop(self):
-        self.logger.info("Stopping logging thread")
+        self.logger.debug("Stopping logging thread")
         if self.running:
             os.write(self.writepipe, bytes("stop", "utf-8"))
 
     def teardown(self):
-        self.logger.info("Tearing down logging thread")
+        self.logger.debug("Tearing down logging thread")
         self.close_socket(self.serversock)
 
         if self.readsock is not None:
@@ -531,7 +506,7 @@
 
         breakout = False
         self.running = True
-        self.logger.info("Starting thread event loop")
+        self.logger.debug("Starting thread event loop")
         while not breakout:
             events = poll.poll()
             for event in events:
@@ -541,19 +516,19 @@
 
                 # Event to stop the thread
                 if self.readpipe == event[0]:
-                    self.logger.info("Stop event received")
+                    self.logger.debug("Stop event received")
                     breakout = True
                     break
 
                 # A connection request was received
                 elif self.serversock.fileno() == event[0]:
-                    self.logger.info("Connection request received")
+                    self.logger.debug("Connection request received")
                     self.readsock, _ = self.serversock.accept()
                     self.readsock.setblocking(0)
                     poll.unregister(self.serversock.fileno())
                     poll.register(self.readsock.fileno(), event_read_mask)
 
-                    self.logger.info("Setting connection established event")
+                    self.logger.debug("Setting connection established event")
                     self.connection_established.set()
 
                 # Actual data to be logged
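The rewritten is_alive() above no longer walks the host process tree with ps
to locate the qemu-system child (the deleted find_child() helper); it now
reads a PID file written by runqemu and simply checks /proc for that PID. A
minimal sketch of the idea, with a hypothetical helper name and pidfile path
that are not taken from the tree:

    import os

    def qemu_is_alive(pidfile):
        # runqemu is expected to write the qemu PID into pidfile; reading
        # it back and probing /proc replaces the old ps-based tree walk.
        if not os.path.isfile(pidfile):
            return None                  # PID file not written yet
        with open(pidfile) as f:
            try:
                qemupid = int(f.read().strip())
            except ValueError:
                return None              # file exists but is still empty
        # /proc/<pid> disappears as soon as the process exits
        return qemupid if os.path.exists("/proc/%d" % qemupid) else None

    # illustrative only: pid = qemu_is_alive("/tmp/qemu_default.pid")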
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/utils/qemutinyrunner.py b/import-layers/yocto-poky/meta/lib/oeqa/utils/qemutinyrunner.py
index 1bf5900..63b5d16 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/utils/qemutinyrunner.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/utils/qemutinyrunner.py
@@ -17,7 +17,7 @@
 
 class QemuTinyRunner(QemuRunner):
 
-    def __init__(self, machine, rootfs, display, tmpdir, deploy_dir_image, logfile, kernel, boottime):
+    def __init__(self, machine, rootfs, display, tmpdir, deploy_dir_image, logfile, kernel, boottime, logger):
 
         # Popen object for runqemu
         self.runqemu = None
@@ -40,6 +40,7 @@
         self.socketfile = "console.sock"
         self.server_socket = None
         self.kernel = kernel
+        self.logger = logger
 
 
     def create_socket(self):
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/utils/sshcontrol.py b/import-layers/yocto-poky/meta/lib/oeqa/utils/sshcontrol.py
index 05d6502..d292893 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/utils/sshcontrol.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/utils/sshcontrol.py
@@ -150,12 +150,9 @@
 
     def copy_to(self, localpath, remotepath):
         if os.path.islink(localpath):
-            link = os.readlink(localpath)
-            dst_dir, dst_base = os.path.split(remotepath)
-            return self.run("cd %s; ln -s %s %s" % (dst_dir, link, dst_base))
-        else:
-            command = self.scp + [localpath, '%s@%s:%s' % (self.user, self.ip, remotepath)]
-            return self._internal_run(command, ignore_status=False)
+            localpath = os.path.dirname(localpath) + "/" + os.readlink(localpath)
+        command = self.scp + [localpath, '%s@%s:%s' % (self.user, self.ip, remotepath)]
+        return self._internal_run(command, ignore_status=False)
 
     def copy_from(self, remotepath, localpath):
         command = self.scp + ['%s@%s:%s' % (self.user, self.ip, remotepath), localpath]
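The copy_to() change above stops recreating symlinks on the target with
"ln -s" and instead resolves the link on the host, relative to the directory
that contains it, so scp always transfers the real file. A small sketch of
that resolution step (the helper name is illustrative, and os.path.join is
used here in place of the string concatenation in the patch):

    import os

    def resolve_local_symlink(localpath):
        # A relative link target is interpreted relative to the directory
        # holding the link; the resolved path is what gets handed to scp.
        if os.path.islink(localpath):
            localpath = os.path.join(os.path.dirname(localpath),
                                     os.readlink(localpath))
        return localpath

    # e.g. deploy/core-image.ext4 -> core-image-20171001.ext4 resolves to
    # deploy/core-image-20171001.ext4, which scp copies as a regular file.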
diff --git a/import-layers/yocto-poky/meta/lib/oeqa/utils/targetbuild.py b/import-layers/yocto-poky/meta/lib/oeqa/utils/targetbuild.py
index 9249fa2..1202d57 100644
--- a/import-layers/yocto-poky/meta/lib/oeqa/utils/targetbuild.py
+++ b/import-layers/yocto-poky/meta/lib/oeqa/utils/targetbuild.py
@@ -69,7 +69,7 @@
 
     def clean(self):
         self._run('rm -rf %s' % self.targetdir)
-        subprocess.call('rm -f %s' % self.localarchive, shell=True)
+        subprocess.check_call('rm -f %s' % self.localarchive, shell=True)
         pass
 
 class TargetBuildProject(BuildProject):
@@ -136,4 +136,4 @@
 
     def _run(self, cmd):
         self.log("Running . %s; " % self.sdkenv + cmd)
-        return subprocess.call(". %s; " % self.sdkenv + cmd, shell=True)
+        return subprocess.check_call(". %s; " % self.sdkenv + cmd, shell=True)
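Switching these helpers from subprocess.call() to subprocess.check_call()
means a non-zero exit status now raises CalledProcessError instead of being
silently ignored, so failing build or clean steps surface as test errors. A
short illustration, assuming a POSIX shell where "false" exits non-zero:

    import subprocess

    # call() only returns the status; the failure is easy to overlook
    status = subprocess.call("false", shell=True)     # status == 1

    # check_call() raises on a non-zero status, which is what the
    # build-project helpers above now rely on
    try:
        subprocess.check_call("false", shell=True)
    except subprocess.CalledProcessError as e:
        print("command failed with exit status %d" % e.returncode)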
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/alsa-state/alsa-state.bb b/import-layers/yocto-poky/meta/recipes-bsp/alsa-state/alsa-state.bb
index d0f7bb3..0670556 100644
--- a/import-layers/yocto-poky/meta/recipes-bsp/alsa-state/alsa-state.bb
+++ b/import-layers/yocto-poky/meta/recipes-bsp/alsa-state/alsa-state.bb
@@ -5,6 +5,7 @@
 # Filename: alsa-state.bb
 
 SUMMARY = "Alsa scenario files to enable alsa state restoration"
+HOMEPAGE = "http://www.alsa-project.org/"
 DESCRIPTION = "Alsa Scenario Files - an init script and state files to restore \
 sound state at system boot and save it at system shut down."
 LICENSE = "MIT"
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi/0001-Mark-our-explicit-fall-through-so-Wextra-will-work-i.patch b/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi/0001-Mark-our-explicit-fall-through-so-Wextra-will-work-i.patch
deleted file mode 100644
index d0aeb2d..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi/0001-Mark-our-explicit-fall-through-so-Wextra-will-work-i.patch
+++ /dev/null
@@ -1,34 +0,0 @@
-From 676a8a9001f06808b4dbe0a545d76b5d9a8ebf48 Mon Sep 17 00:00:00 2001
-From: Peter Jones <pjones@redhat.com>
-Date: Thu, 2 Feb 2017 13:51:27 -0500
-Subject: [PATCH] Mark our explicit fall through so -Wextra will work in gcc 7
-
-gcc 7 introduces detection of fall-through behavior in switch/case
-statements, and will warn if -Wimplicit-fallthrough is present and there
-is no comment stating that the fall-through is intentional.  This is
-also triggered by -Wextra, as it enables -Wimplicit-fallthrough=1.
-
-This patch adds the comment in the one place we use fall-through.
-
-Signed-off-by: Peter Jones <pjones@redhat.com>
----
-Upstream-Status: Pending
-
- lib/print.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/lib/print.c b/lib/print.c
-index b8a9d38..cb732f0 100644
---- a/lib/print.c
-+++ b/lib/print.c
-@@ -1131,6 +1131,7 @@ Returns:
-             case 'X':
-                 Item.Width = Item.Long ? 16 : 8;
-                 Item.Pad = '0';
-+		/* falls through */
-             case 'x':
-                 ValueToHex (
-                     Item.Scratch,
--- 
-2.12.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi/gcc46-compatibility.patch b/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi/gcc46-compatibility.patch
index 0ce6d7b..69efd34 100644
--- a/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi/gcc46-compatibility.patch
+++ b/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi/gcc46-compatibility.patch
@@ -1,3 +1,8 @@
+From 8d16ae374c5d4d9fac45c002605a66cfb8c08be5 Mon Sep 17 00:00:00 2001
+From: Steve Langasek <steve.langasek@ubuntu.com>
+Date: Wed, 9 Sep 2015 08:26:06 +0000
+Subject: [PATCH 3/3] gnu-efi, syslinux: Support gcc < 4.7
+
 don't break with old compilers and -DGNU_EFI_USE_MS_ABI
 It's entirely legitimate to request GNU_EFI_USE_MS_ABI even if the current
 compiler doesn't support it, and gnu-efi should transparently fall back to
@@ -6,16 +11,25 @@
 
 Author: Steve Langasek <steve.langasek@ubuntu.com>
 Upstream-Status: Pending
-Index: gnu-efi-3.0.3/inc/x86_64/efibind.h
-===================================================================
---- gnu-efi-3.0.3.orig/inc/x86_64/efibind.h
-+++ gnu-efi-3.0.3/inc/x86_64/efibind.h
+[Rebased for 3.0.6]
+Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
+---
+ inc/x86_64/efibind.h | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/inc/x86_64/efibind.h b/inc/x86_64/efibind.h
+index 4309f9f..02c0af1 100644
+--- a/inc/x86_64/efibind.h
++++ b/inc/x86_64/efibind.h
 @@ -25,8 +25,6 @@ Revision History
  #if defined(GNU_EFI_USE_MS_ABI)
-     #if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 7))
+     #if (defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 7)))||(defined(__clang__) && (__clang_major__ > 3 || (__clang_major__ == 3 && __clang_minor__ >= 2)))
          #define HAVE_USE_MS_ABI 1
 -    #else
 -        #error Compiler is too old for GNU_EFI_USE_MS_ABI
      #endif
  #endif
  
+-- 
+2.9.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi/parallel-make-archives.patch b/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi/parallel-make-archives.patch
index e5b47c1..0110260 100644
--- a/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi/parallel-make-archives.patch
+++ b/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi/parallel-make-archives.patch
@@ -1,4 +1,7 @@
-Fix parallel make failure for archives
+From 16865de66db33ca70872199e70d93efccecc8575 Mon Sep 17 00:00:00 2001
+From: Saul Wold <sgw@linux.intel.com>
+Date: Sun, 9 Mar 2014 15:22:15 +0200
+Subject: [PATCH 1/3] Fix parallel make failure for archives
 
 Upstream-Status: Pending
 
@@ -12,31 +15,18 @@
 
 Signed-off-by: Saul Wold <sgw@linux.intel.com>
 Signed-off-by: Darren Hart <dvhart@linux.intel.com>
+[Rebased for 3.0.6]
+Signed-off-by: California Sullivan <california.l.sullivan@intel.com>
 ---
----
- gnuefi/Makefile |    3 ++-
- lib/Makefile    |    3 ++-
- 2 files changed, 4 insertions(+), 2 deletions(-)
+ gnuefi/Makefile | 3 ++-
+ lib/Makefile    | 2 +-
+ 2 files changed, 3 insertions(+), 2 deletions(-)
 
-Index: gnu-efi-3.0/lib/Makefile
-===================================================================
---- gnu-efi-3.0.orig/lib/Makefile
-+++ gnu-efi-3.0/lib/Makefile
-@@ -66,7 +66,8 @@ all: libsubdirs libefi.a
- libsubdirs:
- 	for sdir in $(SUBDIRS); do mkdir -p $$sdir; done
- 
--libefi.a: $(patsubst %,libefi.a(%),$(OBJS))
-+libefi.a: $(OBJS)
-+	$(AR) rv $@ $(OBJS)
- 
- clean:
- 	rm -f libefi.a *~ $(OBJS) */*.o
-Index: gnu-efi-3.0/gnuefi/Makefile
-===================================================================
---- gnu-efi-3.0.orig/gnuefi/Makefile
-+++ gnu-efi-3.0/gnuefi/Makefile
-@@ -51,7 +51,8 @@ TARGETS	= crt0-efi-$(ARCH).o libgnuefi.a
+diff --git a/gnuefi/Makefile b/gnuefi/Makefile
+index 2a61699..148106e 100644
+--- a/gnuefi/Makefile
++++ b/gnuefi/Makefile
+@@ -54,7 +54,8 @@ TARGETS	= crt0-efi-$(ARCH).o libgnuefi.a
  
  all:	$(TARGETS)
  
@@ -46,3 +36,19 @@
  
  clean:
  	rm -f $(TARGETS) *~ *.o $(OBJS)
+diff --git a/lib/Makefile b/lib/Makefile
+index b8d1ce7..6ef8107 100644
+--- a/lib/Makefile
++++ b/lib/Makefile
+@@ -75,7 +75,7 @@ libsubdirs:
+ 	for sdir in $(SUBDIRS); do mkdir -p $$sdir; done
+ 
+ libefi.a: $(OBJS)
+-	$(AR) rv -U $@ $^
++	$(AR) rv $@ $(OBJS)
+ 
+ clean:
+ 	rm -f libefi.a *~ $(OBJS) */*.o
+-- 
+2.9.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi_3.0.5.bb b/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi_3.0.5.bb
deleted file mode 100644
index d6f9f53..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi_3.0.5.bb
+++ /dev/null
@@ -1,71 +0,0 @@
-SUMMARY = "Libraries for producing EFI binaries"
-HOMEPAGE = "http://sourceforge.net/projects/gnu-efi/"
-SECTION = "devel"
-LICENSE = "GPLv2+ | BSD-2-Clause"
-LIC_FILES_CHKSUM = "file://gnuefi/crt0-efi-arm.S;beginline=4;endline=16;md5=e582764a4776e60c95bf9ab617343d36 \
-                    file://gnuefi/crt0-efi-aarch64.S;beginline=4;endline=16;md5=e582764a4776e60c95bf9ab617343d36 \
-                    file://inc/efishellintf.h;beginline=13;endline=20;md5=202766b79d708eff3cc70fce15fb80c7 \
-                    file://inc/efishellparm.h;beginline=4;endline=11;md5=468b1231b05bbc84bae3a0d5774e3bb5 \
-                    file://lib/arm/math.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
-                    file://lib/arm/initplat.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
-                    file://lib/aarch64/math.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
-                    file://lib/aarch64/initplat.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
-                   "
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BP}.tar.bz2 \
-           file://parallel-make-archives.patch \
-           file://lib-Makefile-fix-parallel-issue.patch \
-           file://gcc46-compatibility.patch \
-           file://0001-Mark-our-explicit-fall-through-so-Wextra-will-work-i.patch \
-           "
-
-SRC_URI[md5sum] = "1f719c9c135778aa6b087b89a1cc2423"
-SRC_URI[sha256sum] = "bd8fcd5914f18fc0e4ba948ab03b00013e528504f529c60739b748f6ef130b22"
-
-COMPATIBLE_HOST = "(x86_64.*|i.86.*|aarch64.*|arm.*)-linux"
-COMPATIBLE_HOST_armv4 = 'null'
-
-do_configure_linux-gnux32_prepend() {
-	cp ${STAGING_INCDIR}/gnu/stubs-x32.h ${STAGING_INCDIR}/gnu/stubs-64.h
-	cp ${STAGING_INCDIR}/bits/long-double-32.h ${STAGING_INCDIR}/bits/long-double-64.h
-}
-
-def gnu_efi_arch(d):
-    import re
-    tarch = d.getVar("TARGET_ARCH")
-    if re.match("i[3456789]86", tarch):
-        return "ia32"
-    return tarch
-
-EXTRA_OEMAKE = "'ARCH=${@gnu_efi_arch(d)}' 'CC=${CC}' 'AS=${AS}' 'LD=${LD}' 'AR=${AR}' \
-                'RANLIB=${RANLIB}' 'OBJCOPY=${OBJCOPY}' 'PREFIX=${prefix}' 'LIBDIR=${libdir}' \
-                "
-
-# gnu-efi's Makefile treats prefix as toolchain prefix, so don't
-# export it.
-prefix[unexport] = "1"
-
-do_install() {
-        oe_runmake install INSTALLROOT="${D}"
-}
-
-FILES_${PN} += "${libdir}/*.lds"
-
-# 64-bit binaries are expected for EFI when targeting X32
-INSANE_SKIP_${PN}-dev_append_linux-gnux32 = " arch"
-
-BBCLASSEXTEND = "native"
-
-# It doesn't support sse, its make.defaults sets:
-# CFLAGS += -mno-mmx -mno-sse
-# So also remove -mfpmath=sse from TUNE_CCARGS
-TUNE_CCARGS_remove = "-mfpmath=sse"
-
-python () {
-    ccargs = d.getVar('TUNE_CCARGS').split()
-    if '-mx32' in ccargs:
-        # use x86_64 EFI ABI
-        ccargs.remove('-mx32')
-        ccargs.append('-m64')
-        d.setVar('TUNE_CCARGS', ' '.join(ccargs))
-}
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi_3.0.6.bb b/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi_3.0.6.bb
new file mode 100644
index 0000000..2a60717
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-bsp/gnu-efi/gnu-efi_3.0.6.bb
@@ -0,0 +1,71 @@
+SUMMARY = "Libraries for producing EFI binaries"
+HOMEPAGE = "http://sourceforge.net/projects/gnu-efi/"
+SECTION = "devel"
+LICENSE = "GPLv2+ | BSD-2-Clause"
+LIC_FILES_CHKSUM = "file://gnuefi/crt0-efi-arm.S;beginline=4;endline=16;md5=e582764a4776e60c95bf9ab617343d36 \
+                    file://gnuefi/crt0-efi-aarch64.S;beginline=4;endline=16;md5=e582764a4776e60c95bf9ab617343d36 \
+                    file://inc/efishellintf.h;beginline=13;endline=20;md5=202766b79d708eff3cc70fce15fb80c7 \
+                    file://inc/efishellparm.h;beginline=4;endline=11;md5=468b1231b05bbc84bae3a0d5774e3bb5 \
+                    file://lib/arm/math.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
+                    file://lib/arm/initplat.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
+                    file://lib/aarch64/math.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
+                    file://lib/aarch64/initplat.c;beginline=2;endline=15;md5=8ed772501da77b2b3345aa6df8744c9e \
+                   "
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BP}.tar.bz2 \
+           file://parallel-make-archives.patch \
+           file://lib-Makefile-fix-parallel-issue.patch \
+           file://gcc46-compatibility.patch \
+           "
+
+SRC_URI[md5sum] = "46f633758a8a37db9fd6909fe270c26b"
+SRC_URI[sha256sum] = "21515902d80fbea23328a61d70d3d51a47204abd1507ebfa27550a7b9bf22c91"
+
+COMPATIBLE_HOST = "(x86_64.*|i.86.*|aarch64.*|arm.*)-linux"
+COMPATIBLE_HOST_armv4 = 'null'
+
+do_configure_linux-gnux32_prepend() {
+	cp ${STAGING_INCDIR}/gnu/stubs-x32.h ${STAGING_INCDIR}/gnu/stubs-64.h
+	cp ${STAGING_INCDIR}/bits/long-double-32.h ${STAGING_INCDIR}/bits/long-double-64.h
+}
+
+def gnu_efi_arch(d):
+    import re
+    tarch = d.getVar("TARGET_ARCH")
+    if re.match("i[3456789]86", tarch):
+        return "ia32"
+    return tarch
+
+EXTRA_OEMAKE = "'ARCH=${@gnu_efi_arch(d)}' 'CC=${CC}' 'AS=${AS}' 'LD=${LD}' 'AR=${AR}' \
+                'RANLIB=${RANLIB}' 'OBJCOPY=${OBJCOPY}' 'PREFIX=${prefix}' 'LIBDIR=${libdir}' \
+                "
+
+# gnu-efi's Makefile treats prefix as toolchain prefix, so don't
+# export it.
+prefix[unexport] = "1"
+
+do_install() {
+        oe_runmake install INSTALLROOT="${D}"
+}
+
+FILES_${PN} += "${libdir}/*.lds"
+
+# 64-bit binaries are expected for EFI when targeting X32
+INSANE_SKIP_${PN}-dev_append_linux-gnux32 = " arch"
+INSANE_SKIP_${PN}-dev_append_linux-muslx32 = " arch"
+
+BBCLASSEXTEND = "native"
+
+# It doesn't support sse, its make.defaults sets:
+# CFLAGS += -mno-mmx -mno-sse
+# So also remove -mfpmath=sse from TUNE_CCARGS
+TUNE_CCARGS_remove = "-mfpmath=sse"
+
+python () {
+    ccargs = d.getVar('TUNE_CCARGS').split()
+    if '-mx32' in ccargs:
+        # use x86_64 EFI ABI
+        ccargs.remove('-mx32')
+        ccargs.append('-m64')
+        d.setVar('TUNE_CCARGS', ' '.join(ccargs))
+}
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub-git/0001-Disable-mfpmath-sse-as-well-when-SSE-is-disabled.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-Disable-mfpmath-sse-as-well-when-SSE-is-disabled.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-bsp/grub/grub-git/0001-Disable-mfpmath-sse-as-well-when-SSE-is-disabled.patch
rename to import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-Disable-mfpmath-sse-as-well-when-SSE-is-disabled.patch
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-Enforce-no-pie-if-the-compiler-supports-it.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-Enforce-no-pie-if-the-compiler-supports-it.patch
deleted file mode 100644
index ccdbee2..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-Enforce-no-pie-if-the-compiler-supports-it.patch
+++ /dev/null
@@ -1,45 +0,0 @@
-From 6186bcf1bcaaa0f16e79339e07c64c841d4d957d Mon Sep 17 00:00:00 2001
-From: Alexander Kanavin <alex.kanavin@gmail.com>
-Date: Fri, 2 Dec 2016 20:52:40 +0200
-Subject: [PATCH] Enforce -no-pie, if the compiler supports it.
-
-Add a -no-pie as recent (2 Dec 2016) Debian testing compiler
-seems to default to enabling PIE when linking. See
-https://wiki.ubuntu.com/SecurityTeam/PIE
-
-Upstream-Status: Pending
-Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
----
- acinclude.m4 | 2 +-
- configure.ac | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/acinclude.m4 b/acinclude.m4
-index 19200b0..a713923 100644
---- a/acinclude.m4
-+++ b/acinclude.m4
-@@ -416,7 +416,7 @@ int main() {
- 
- [# `$CC -c -o ...' might not be portable.  But, oh, well...  Is calling
- # `ac_compile' like this correct, after all?
--if eval "$ac_compile -S -o conftest.s" 2> /dev/null; then]
-+if eval "$ac_compile -S -o conftest.s" 2> /dev/null && eval "$CC -dumpspecs 2>/dev/null | grep -e no-pie" ; then]
-   AC_MSG_RESULT([yes])
-   [# Should we clear up other files as well, having called `AC_LANG_CONFTEST'?
-   rm -f conftest.s
-diff --git a/configure.ac b/configure.ac
-index df20991..506c6b4 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -603,7 +603,7 @@ grub_CHECK_PIE
- [# Need that, because some distributions ship compilers that include
- # `-fPIE' in the default specs.
- if [ x"$pie_possible" = xyes ]; then
--  TARGET_CFLAGS="$TARGET_CFLAGS -fno-PIE"
-+  TARGET_CFLAGS="$TARGET_CFLAGS -fno-PIE -no-pie"
- fi]
- 
- # Position independent executable.
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-Fix-CVE-2015-8370-Grub2-user-pass-vulnerability.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-Fix-CVE-2015-8370-Grub2-user-pass-vulnerability.patch
deleted file mode 100644
index 65ddcaf..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-Fix-CVE-2015-8370-Grub2-user-pass-vulnerability.patch
+++ /dev/null
@@ -1,53 +0,0 @@
-Upstream-Status: Accepted
-CVE: CVE-2015-8370
-Signed-off-by: Awais Belal <awais_belal@mentor.com>
-
-From 451d80e52d851432e109771bb8febafca7a5f1f2 Mon Sep 17 00:00:00 2001
-From: Hector Marco-Gisbert <hecmargi@upv.es>
-Date: Wed, 16 Dec 2015 04:57:18 +0000
-Subject: Fix security issue when reading username and password
-
-This patch fixes two integer underflows at:
-  * grub-core/lib/crypto.c
-  * grub-core/normal/auth.c
-
-CVE-2015-8370
-
-Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
-Signed-off-by: Ismael Ripoll-Ripoll <iripoll@disca.upv.es>
-Also-By: Andrey Borzenkov <arvidjaar@gmail.com>
----
-diff --git a/grub-core/lib/crypto.c b/grub-core/lib/crypto.c
-index 010e550..683a8aa 100644
---- a/grub-core/lib/crypto.c
-+++ b/grub-core/lib/crypto.c
-@@ -470,7 +470,8 @@ grub_password_get (char buf[], unsigned buf_size)
- 
-       if (key == '\b')
- 	{
--	  cur_len--;
-+	  if (cur_len)
-+	    cur_len--;
- 	  continue;
- 	}
- 
-diff --git a/grub-core/normal/auth.c b/grub-core/normal/auth.c
-index c6bd96e..8615c48 100644
---- a/grub-core/normal/auth.c
-+++ b/grub-core/normal/auth.c
-@@ -174,8 +174,11 @@ grub_username_get (char buf[], unsigned buf_size)
- 
-       if (key == '\b')
- 	{
--	  cur_len--;
--	  grub_printf ("\b");
-+	  if (cur_len)
-+	    {
-+	      cur_len--;
-+	      grub_printf ("\b");
-+	    }
- 	  continue;
- 	}
- 
---
-cgit v0.9.0.2
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-Remove-direct-_llseek-code-and-require-long-filesyst.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-Remove-direct-_llseek-code-and-require-long-filesyst.patch
deleted file mode 100644
index 9eabce9..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-Remove-direct-_llseek-code-and-require-long-filesyst.patch
+++ /dev/null
@@ -1,81 +0,0 @@
-From 3bac4caa2bc64db313aaee54fffb90383e118517 Mon Sep 17 00:00:00 2001
-From: Felix Janda <felix.janda@posteo.de>
-Date: Thu, 22 Jan 2015 19:54:36 +0100
-Subject: [PATCH] Remove direct _llseek code and require long filesystem libc.
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
-Upstream-Status: Backport
- configure.ac                    |  8 ++++++++
- grub-core/osdep/unix/hostdisk.c | 24 ------------------------
- 4 files changed, 13 insertions(+), 24 deletions(-)
-
-Index: grub-2.00/configure.ac
-===================================================================
---- grub-2.00.orig/configure.ac
-+++ grub-2.00/configure.ac
-@@ -306,6 +306,14 @@ if test x$grub_cv_apple_cc = xyes ; then
-   HOST_LDFLAGS="$HOST_LDFLAGS -Wl,-allow_stack_execute"
- fi
- 
-+case "$host_os" in
-+  cygwin | windows* | mingw32* | aros*)
-+     ;;
-+  *)
-+     AC_CHECK_SIZEOF(off_t)
-+     test x"$ac_cv_sizeof_off_t" = x8 || AC_MSG_ERROR([Large file support is required]);;
-+esac
-+
- if test x$USE_NLS = xno; then
-   HOST_CFLAGS="$HOST_CFLAGS -fno-builtin-gettext"
- fi
-Index: grub-2.00/grub-core/kern/emu/hostdisk.c
-===================================================================
---- grub-2.00.orig/grub-core/kern/emu/hostdisk.c
-+++ grub-2.00/grub-core/kern/emu/hostdisk.c
-@@ -44,11 +44,6 @@
- #ifdef __linux__
- # include <sys/ioctl.h>         /* ioctl */
- # include <sys/mount.h>
--# if !defined(__GLIBC__) || \
--        ((__GLIBC__ < 2) || ((__GLIBC__ == 2) && (__GLIBC_MINOR__ < 1)))
--/* Maybe libc doesn't have large file support.  */
--#  include <linux/unistd.h>     /* _llseek */
--# endif /* (GLIBC < 2) || ((__GLIBC__ == 2) && (__GLIBC_MINOR < 1)) */
- # ifndef BLKFLSBUF
- #  define BLKFLSBUF     _IO (0x12,97)   /* flush buffer cache */
- # endif /* ! BLKFLSBUF */
-@@ -761,25 +756,6 @@ linux_find_partition (char *dev, grub_di
- }
- #endif /* __linux__ */
- 
--#if defined(__linux__) && (!defined(__GLIBC__) || \
--        ((__GLIBC__ < 2) || ((__GLIBC__ == 2) && (__GLIBC_MINOR__ < 1))))
--  /* Maybe libc doesn't have large file support.  */
--grub_err_t
--grub_util_fd_seek (int fd, const char *name, grub_uint64_t off)
--{
--  loff_t offset, result;
--  static int _llseek (uint filedes, ulong hi, ulong lo,
--		      loff_t *res, uint wh);
--  _syscall5 (int, _llseek, uint, filedes, ulong, hi, ulong, lo,
--	     loff_t *, res, uint, wh);
--
--  offset = (loff_t) off;
--  if (_llseek (fd, offset >> 32, offset & 0xffffffff, &result, SEEK_SET))
--    return grub_error (GRUB_ERR_BAD_DEVICE, N_("cannot seek `%s': %s"),
--		       name, strerror (errno));
--  return GRUB_ERR_NONE;
--}
--#else
- grub_err_t
- grub_util_fd_seek (int fd, const char *name, grub_uint64_t off)
- {
-@@ -790,7 +766,6 @@ grub_util_fd_seek (int fd, const char *n
- 		       name, strerror (errno));
-   return 0;
- }
--#endif
- 
- static void
- flush_initial_buffer (const char *os_dev __attribute__ ((unused)))
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-btrfs-avoid-used-uninitialized-error-with-GCC7.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-btrfs-avoid-used-uninitialized-error-with-GCC7.patch
deleted file mode 100644
index 217a775..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-btrfs-avoid-used-uninitialized-error-with-GCC7.patch
+++ /dev/null
@@ -1,36 +0,0 @@
-From 6cef7f6079550af3bf91dbff824398eaef08c3c5 Mon Sep 17 00:00:00 2001
-From: Andrei Borzenkov <arvidjaar@gmail.com>
-Date: Tue, 4 Apr 2017 19:22:32 +0300
-Subject: [PATCH 1/4] btrfs: avoid "used uninitialized" error with GCC7
-
-sblock was local and so considered new variable on every loop
-iteration.
-
-Closes: 50597
----
-Upstream-Status: Backport
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
- grub-core/fs/btrfs.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/grub-core/fs/btrfs.c b/grub-core/fs/btrfs.c
-index 9cffa91..4849c1c 100644
---- a/grub-core/fs/btrfs.c
-+++ b/grub-core/fs/btrfs.c
-@@ -227,11 +227,11 @@ grub_btrfs_read_logical (struct grub_btrfs_data *data,
- static grub_err_t
- read_sblock (grub_disk_t disk, struct grub_btrfs_superblock *sb)
- {
-+  struct grub_btrfs_superblock sblock;
-   unsigned i;
-   grub_err_t err = GRUB_ERR_NONE;
-   for (i = 0; i < ARRAY_SIZE (superblock_sectors); i++)
-     {
--      struct grub_btrfs_superblock sblock;
-       /* Don't try additional superblocks beyond device size.  */
-       if (i && (grub_le_to_cpu64 (sblock.this_device.size)
- 		>> GRUB_DISK_SECTOR_BITS) <= superblock_sectors[i])
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-build-Use-AC_HEADER_MAJOR-to-find-device-macros.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-build-Use-AC_HEADER_MAJOR-to-find-device-macros.patch
deleted file mode 100644
index f95b9ef..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-build-Use-AC_HEADER_MAJOR-to-find-device-macros.patch
+++ /dev/null
@@ -1,92 +0,0 @@
-From 7a5b301e3adb8e054288518a325135a1883c1c6c Mon Sep 17 00:00:00 2001
-From: Mike Gilbert <floppym@gentoo.org>
-Date: Tue, 19 Apr 2016 14:27:22 -0400
-Subject: [PATCH] build: Use AC_HEADER_MAJOR to find device macros
-
-Depending on the OS/libc, device macros are defined in different
-headers. This change ensures we include the right one.
-
-sys/types.h - BSD
-sys/mkdev.h - Sun
-sys/sysmacros.h - glibc (Linux)
-
-glibc currently pulls sys/sysmacros.h into sys/types.h, but this may
-change in a future release.
-
-https://sourceware.org/ml/libc-alpha/2015-11/msg00253.html
----
-Upstream-Status: Backport
-
- configure.ac                         | 3 ++-
- grub-core/osdep/devmapper/getroot.c  | 6 ++++++
- grub-core/osdep/devmapper/hostdisk.c | 5 +++++
- grub-core/osdep/linux/getroot.c      | 6 ++++++
- grub-core/osdep/unix/getroot.c       | 4 +++-
- 5 files changed, 22 insertions(+), 2 deletions(-)
-
-Index: grub-2.00/configure.ac
-===================================================================
---- grub-2.00.orig/configure.ac
-+++ grub-2.00/configure.ac
-@@ -326,7 +326,8 @@ fi
- 
- # Check for functions and headers.
- AC_CHECK_FUNCS(posix_memalign memalign asprintf vasprintf getextmntent)
--AC_CHECK_HEADERS(sys/param.h sys/mount.h sys/mnttab.h sys/mkdev.h limits.h)
-+AC_CHECK_HEADERS(sys/param.h sys/mount.h sys/mnttab.h limits.h)
-+AC_HEADER_MAJOR
- 
- AC_CHECK_MEMBERS([struct statfs.f_fstypename],,,[$ac_includes_default
- #include <sys/param.h>
-Index: grub-2.00/grub-core/kern/emu/hostdisk.c
-===================================================================
---- grub-2.00.orig/grub-core/kern/emu/hostdisk.c
-+++ grub-2.00/grub-core/kern/emu/hostdisk.c
-@@ -41,6 +41,12 @@
- #include <errno.h>
- #include <limits.h>
- 
-+#if defined(MAJOR_IN_MKDEV)
-+#include <sys/mkdev.h>
-+#elif defined(MAJOR_IN_SYSMACROS)
-+#include <sys/sysmacros.h>
-+#endif
-+
- #ifdef __linux__
- # include <sys/ioctl.h>         /* ioctl */
- # include <sys/mount.h>
-Index: grub-2.00/util/getroot.c
-===================================================================
---- grub-2.00.orig/util/getroot.c
-+++ grub-2.00/util/getroot.c
-@@ -35,6 +35,13 @@
- #ifdef HAVE_LIMITS_H
- #include <limits.h>
- #endif
-+
-+#if defined(MAJOR_IN_MKDEV)
-+#include <sys/mkdev.h>
-+#elif defined(MAJOR_IN_SYSMACROS)
-+#include <sys/sysmacros.h>
-+#endif
-+
- #include <grub/util/misc.h>
- #include <grub/util/lvm.h>
- #include <grub/cryptodisk.h>
-Index: grub-2.00/util/raid.c
-===================================================================
---- grub-2.00.orig/util/raid.c
-+++ grub-2.00/util/raid.c
-@@ -29,6 +29,12 @@
- #include <errno.h>
- #include <sys/types.h>
- 
-+#if defined(MAJOR_IN_MKDEV)
-+#include <sys/mkdev.h>
-+#elif defined(MAJOR_IN_SYSMACROS)
-+#include <sys/sysmacros.h>
-+#endif
-+
- #include <linux/types.h>
- #include <linux/major.h>
- #include <linux/raid/md_p.h>
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-grub-core-gettext-gettext.c-main_context-secondary_c.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-grub-core-gettext-gettext.c-main_context-secondary_c.patch
deleted file mode 100644
index 6ec2363..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-grub-core-gettext-gettext.c-main_context-secondary_c.patch
+++ /dev/null
@@ -1,39 +0,0 @@
-From f30c692c1f9ef0e93bee2b408a24baa017f1ca9d Mon Sep 17 00:00:00 2001
-From: Vladimir Serbinenko <phcoder@gmail.com>
-Date: Thu, 7 Nov 2013 01:01:47 +0100
-Subject: [PATCH] 	* grub-core/gettext/gettext.c (main_context),
- (secondary_context): 	Define after defining type and not before.
-
----
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Backport
-
- ChangeLog                   | 5 +++++
- grub-core/gettext/gettext.c | 4 ++--
- 2 files changed, 7 insertions(+), 2 deletions(-)
-
-diff --git a/grub-core/gettext/gettext.c b/grub-core/gettext/gettext.c
-index df73570..4880cef 100644
---- a/grub-core/gettext/gettext.c
-+++ b/grub-core/gettext/gettext.c
-@@ -34,8 +34,6 @@ GRUB_MOD_LICENSE ("GPLv3+");
-    http://www.gnu.org/software/autoconf/manual/gettext/MO-Files.html .
- */
- 
--static struct grub_gettext_context main_context, secondary_context;
--
- static const char *(*grub_gettext_original) (const char *s);
- 
- struct grub_gettext_msg
-@@ -69,6 +67,8 @@ struct grub_gettext_context
-   struct grub_gettext_msg *grub_gettext_msg_list;
- };
- 
-+static struct grub_gettext_context main_context, secondary_context;
-+
- #define MO_MAGIC_NUMBER 		0x950412de
- 
- static grub_err_t
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-grub-core-kern-efi-mm.c-grub_efi_finish_boot_service.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-grub-core-kern-efi-mm.c-grub_efi_finish_boot_service.patch
deleted file mode 100644
index abf08e1..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-grub-core-kern-efi-mm.c-grub_efi_finish_boot_service.patch
+++ /dev/null
@@ -1,79 +0,0 @@
-From b258761d11946b28a847dff0768c3f271e13d60a Mon Sep 17 00:00:00 2001
-From: Awais Belal <awais_belal@mentor.com>
-Date: Thu, 8 Dec 2016 18:21:12 +0500
-Subject: [PATCH 1/2] * grub-core/kern/efi/mm.c
- (grub_efi_finish_boot_services):  Try terminating EFI services several times
- due to quirks in some  implementations.
-
-Upstream-status: Backport [ http://git.savannah.gnu.org/cgit/grub.git/patch/?id=e75fdee420a7ad95e9a465c9699adc2e2e970440 ]
-
-Signed-off-by: Awais Belal <awais_belal@mentor.com>
----
- grub-core/kern/efi/mm.c | 46 ++++++++++++++++++++++++++++++----------------
- 1 file changed, 30 insertions(+), 16 deletions(-)
-
-diff --git a/grub-core/kern/efi/mm.c b/grub-core/kern/efi/mm.c
-index 461deb0..b00e0bc 100644
---- a/grub-core/kern/efi/mm.c
-+++ b/grub-core/kern/efi/mm.c
-@@ -167,27 +167,41 @@ grub_efi_finish_boot_services (grub_efi_uintn_t *outbuf_size, void *outbuf,
- 			   apple, sizeof (apple)) == 0);
- #endif
- 
--  if (grub_efi_get_memory_map (&finish_mmap_size, finish_mmap_buf, &finish_key,
--			       &finish_desc_size, &finish_desc_version) < 0)
--    return grub_error (GRUB_ERR_IO, "couldn't retrieve memory map");
-+  while (1)
-+    {
-+      if (grub_efi_get_memory_map (&finish_mmap_size, finish_mmap_buf, &finish_key,
-+				   &finish_desc_size, &finish_desc_version) < 0)
-+	return grub_error (GRUB_ERR_IO, "couldn't retrieve memory map");
- 
--  if (outbuf && *outbuf_size < finish_mmap_size)
--    return grub_error (GRUB_ERR_IO, "memory map buffer is too small");
-+      if (outbuf && *outbuf_size < finish_mmap_size)
-+	return grub_error (GRUB_ERR_IO, "memory map buffer is too small");
- 
--  finish_mmap_buf = grub_malloc (finish_mmap_size);
--  if (!finish_mmap_buf)
--    return grub_errno;
-+      finish_mmap_buf = grub_malloc (finish_mmap_size);
-+      if (!finish_mmap_buf)
-+	return grub_errno;
- 
--  if (grub_efi_get_memory_map (&finish_mmap_size, finish_mmap_buf, &finish_key,
--			       &finish_desc_size, &finish_desc_version) <= 0)
--    return grub_error (GRUB_ERR_IO, "couldn't retrieve memory map");
-+      if (grub_efi_get_memory_map (&finish_mmap_size, finish_mmap_buf, &finish_key,
-+				   &finish_desc_size, &finish_desc_version) <= 0)
-+	{
-+	  grub_free (finish_mmap_buf);
-+	  return grub_error (GRUB_ERR_IO, "couldn't retrieve memory map");
-+	}
- 
--  b = grub_efi_system_table->boot_services;
--  status = efi_call_2 (b->exit_boot_services, grub_efi_image_handle,
--		       finish_key);
--  if (status != GRUB_EFI_SUCCESS)
--    return grub_error (GRUB_ERR_IO, "couldn't terminate EFI services");
-+      b = grub_efi_system_table->boot_services;
-+      status = efi_call_2 (b->exit_boot_services, grub_efi_image_handle,
-+			   finish_key);
-+      if (status == GRUB_EFI_SUCCESS)
-+	break;
- 
-+      if (status != GRUB_EFI_INVALID_PARAMETER)
-+	{
-+	  grub_free (finish_mmap_buf);
-+	  return grub_error (GRUB_ERR_IO, "couldn't terminate EFI services");
-+	}
-+
-+      grub_free (finish_mmap_buf);
-+      grub_printf ("Trying to terminate EFI services again\n");
-+    }
-   grub_efi_is_finished = 1;
-   if (outbuf_size)
-     *outbuf_size = finish_mmap_size;
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub-git/0001-grub.d-10_linux.in-add-oe-s-kernel-name.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-grub.d-10_linux.in-add-oe-s-kernel-name.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-bsp/grub/grub-git/0001-grub.d-10_linux.in-add-oe-s-kernel-name.patch
rename to import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-grub.d-10_linux.in-add-oe-s-kernel-name.patch
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-parse_dhcp_vendor-Add-missing-const-qualifiers.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-parse_dhcp_vendor-Add-missing-const-qualifiers.patch
deleted file mode 100644
index 255e3eb..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0001-parse_dhcp_vendor-Add-missing-const-qualifiers.patch
+++ /dev/null
@@ -1,33 +0,0 @@
-Upstream-Status: Backport
-
-Original commit: http://git.savannah.gnu.org/cgit/grub.git/commit/grub-core/net/bootp.c?id=f06c2172c0b32052f22e37523445cf8e7affaea3
-
-From 149d2a14f4723778ced23f439487201ccbf1a2c9 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Thu, 23 Apr 2015 07:03:34 +0000
-Subject: [PATCH] parse_dhcp_vendor: Add missing const qualifiers.
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- grub-core/net/bootp.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/grub-core/net/bootp.c b/grub-core/net/bootp.c
-index bc07d53..44131ed 100644
---- a/grub-core/net/bootp.c
-+++ b/grub-core/net/bootp.c
-@@ -52,9 +52,9 @@ set_env_limn_ro (const char *intername, const char *suffix,
- }
- 
- static void
--parse_dhcp_vendor (const char *name, void *vend, int limit, int *mask)
-+parse_dhcp_vendor (const char *name, const void *vend, int limit, int *mask)
- {
--  grub_uint8_t *ptr, *ptr0;
-+  const grub_uint8_t *ptr, *ptr0;
- 
-   ptr = ptr0 = vend;
- 
--- 
-2.1.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0002-grub-core-kern-efi-mm.c-grub_efi_get_memory_map-Neve.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0002-grub-core-kern-efi-mm.c-grub_efi_get_memory_map-Neve.patch
deleted file mode 100644
index 0e735ff..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0002-grub-core-kern-efi-mm.c-grub_efi_get_memory_map-Neve.patch
+++ /dev/null
@@ -1,43 +0,0 @@
-From 630de45f3d5f9a2dda7fad99acd21449b8c4111d Mon Sep 17 00:00:00 2001
-From: Awais Belal <awais_belal@mentor.com>
-Date: Thu, 8 Dec 2016 18:27:01 +0500
-Subject: [PATCH 2/2] * grub-core/kern/efi/mm.c (grub_efi_get_memory_map):
- Never  return a descriptor_size==0 to avoid potential divisions by zero.
-
-Upstream-status: Backport [ http://git.savannah.gnu.org/cgit/grub.git/commit/?id=69aee43fa64601cabf6efa9279c10d69b466662e ]
-
-Signed-off-by: Awais Belal <awais_belal@mentor.com>
----
- grub-core/kern/efi/mm.c | 5 +++++
- 1 file changed, 5 insertions(+)
-
-diff --git a/grub-core/kern/efi/mm.c b/grub-core/kern/efi/mm.c
-index b00e0bc..9f1d194 100644
---- a/grub-core/kern/efi/mm.c
-+++ b/grub-core/kern/efi/mm.c
-@@ -235,6 +235,7 @@ grub_efi_get_memory_map (grub_efi_uintn_t *memory_map_size,
-   grub_efi_boot_services_t *b;
-   grub_efi_uintn_t key;
-   grub_efi_uint32_t version;
-+  grub_efi_uintn_t size;
- 
-   if (grub_efi_is_finished)
-     {
-@@ -264,10 +265,14 @@ grub_efi_get_memory_map (grub_efi_uintn_t *memory_map_size,
-     map_key = &key;
-   if (! descriptor_version)
-     descriptor_version = &version;
-+  if (! descriptor_size)
-+    descriptor_size = &size;
- 
-   b = grub_efi_system_table->boot_services;
-   status = efi_call_5 (b->get_memory_map, memory_map_size, memory_map, map_key,
- 			      descriptor_size, descriptor_version);
-+  if (*descriptor_size == 0)
-+    *descriptor_size = sizeof (grub_efi_memory_descriptor_t);
-   if (status == GRUB_EFI_SUCCESS)
-     return 1;
-   else if (status == GRUB_EFI_BUFFER_TOO_SMALL)
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0002-i386-x86_64-ppc-fix-switch-fallthrough-cases-with-GC.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0002-i386-x86_64-ppc-fix-switch-fallthrough-cases-with-GC.patch
deleted file mode 100644
index 94f048c..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0002-i386-x86_64-ppc-fix-switch-fallthrough-cases-with-GC.patch
+++ /dev/null
@@ -1,248 +0,0 @@
-From 4bd4a88725604471fdbd86316c91967a7f4dba5a Mon Sep 17 00:00:00 2001
-From: Andrei Borzenkov <arvidjaar@gmail.com>
-Date: Tue, 4 Apr 2017 19:23:55 +0300
-Subject: [PATCH 2/4] i386, x86_64, ppc: fix switch fallthrough cases with GCC7
-
-In util/getroot and efidisk slightly modify exitsing comment to mostly
-retain it but still make GCC7 compliant with respect to fall through
-annotation.
-
-In grub-core/lib/xzembed/xz_dec_lzma2.c it adds same comments as
-upstream.
-
-In grub-core/tests/setjmp_tets.c declare functions as "noreturn" to
-suppress GCC7 warning.
-
-In grub-core/gnulib/regexec.c use new __attribute__, because existing
-annotation is not recognized by GCC7 parser (which requires that comment
-immediately precedes case statement).
-
-Otherwise add FALLTHROUGH comment.
-
-Closes: 50598
----
-Upstream-Status: Backport
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
- grub-core/commands/hdparm.c           | 1 +
- grub-core/commands/nativedisk.c       | 1 +
- grub-core/disk/cryptodisk.c           | 1 +
- grub-core/disk/efi/efidisk.c          | 2 +-
- grub-core/efiemu/mm.c                 | 1 +
- grub-core/gdb/cstub.c                 | 1 +
- grub-core/gnulib/regexec.c            | 3 +++
- grub-core/lib/xzembed/xz_dec_lzma2.c  | 4 ++++
- grub-core/lib/xzembed/xz_dec_stream.c | 6 ++++++
- grub-core/loader/i386/linux.c         | 3 +++
- grub-core/tests/setjmp_test.c         | 5 ++++-
- grub-core/video/ieee1275.c            | 1 +
- grub-core/video/readers/jpeg.c        | 1 +
- util/getroot.c                        | 2 +-
- util/grub-install.c                   | 1 +
- util/grub-mkimagexx.c                 | 1 +
- util/grub-mount.c                     | 1 +
- 17 files changed, 32 insertions(+), 3 deletions(-)
-
-Index: grub-2.00/grub-core/commands/hdparm.c
-===================================================================
---- grub-2.00.orig/grub-core/commands/hdparm.c
-+++ grub-2.00/grub-core/commands/hdparm.c
-@@ -328,6 +328,7 @@ grub_cmd_hdparm (grub_extcmd_context_t c
- 	  ata = ((struct grub_scsi *) disk->data)->data;
- 	  break;
- 	}
-+      /* FALLTHROUGH */
-     default:
-       return grub_error (GRUB_ERR_IO, "not an ATA device");
-     }
-Index: grub-2.00/grub-core/disk/cryptodisk.c
-===================================================================
---- grub-2.00.orig/grub-core/disk/cryptodisk.c
-+++ grub-2.00/grub-core/disk/cryptodisk.c
-@@ -268,6 +268,7 @@ grub_cryptodisk_endecrypt (struct grub_c
- 	  break;
- 	case GRUB_CRYPTODISK_MODE_IV_PLAIN64:
- 	  iv[1] = grub_cpu_to_le32 (sector >> 32);
-+	  /* FALLTHROUGH */
- 	case GRUB_CRYPTODISK_MODE_IV_PLAIN:
- 	  iv[0] = grub_cpu_to_le32 (sector & 0xFFFFFFFF);
- 	  break;
-Index: grub-2.00/grub-core/disk/efi/efidisk.c
-===================================================================
---- grub-2.00.orig/grub-core/disk/efi/efidisk.c
-+++ grub-2.00/grub-core/disk/efi/efidisk.c
-@@ -262,7 +262,7 @@ name_devices (struct grub_efidisk_data *
- 	    {
- 	    case GRUB_EFI_HARD_DRIVE_DEVICE_PATH_SUBTYPE:
- 	      is_hard_drive = 1;
--	      /* Fall through by intention.  */
-+	      /* Intentionally fall through.  */
- 	    case GRUB_EFI_CDROM_DEVICE_PATH_SUBTYPE:
- 	      {
- 		struct grub_efidisk_data *parent, *parent2;
-Index: grub-2.00/grub-core/efiemu/mm.c
-===================================================================
---- grub-2.00.orig/grub-core/efiemu/mm.c
-+++ grub-2.00/grub-core/efiemu/mm.c
-@@ -410,6 +410,7 @@ grub_efiemu_mmap_fill (void)
- 	default:
- 	  grub_dprintf ("efiemu",
- 			"Unknown memory type %d. Assuming unusable\n", type);
-+	/* FALLTHROUGH */
- 	case GRUB_MEMORY_RESERVED:
- 	  return grub_efiemu_add_to_mmap (addr, size,
- 					  GRUB_EFI_UNUSABLE_MEMORY);
-Index: grub-2.00/grub-core/gdb/cstub.c
-===================================================================
---- grub-2.00.orig/grub-core/gdb/cstub.c
-+++ grub-2.00/grub-core/gdb/cstub.c
-@@ -336,6 +336,7 @@ grub_gdb_trap (int trap_no)
- 	/* sAA..AA: Step one instruction from AA..AA(optional).  */
- 	case 's':
- 	  stepping = 1;
-+	  /* FALLTHROUGH */
- 
- 	/* cAA..AA: Continue at address AA..AA(optional).  */
- 	case 'c':
-Index: grub-2.00/grub-core/gnulib/regexec.c
-===================================================================
---- grub-2.00.orig/grub-core/gnulib/regexec.c
-+++ grub-2.00/grub-core/gnulib/regexec.c
-@@ -4104,6 +4104,9 @@ check_node_accept (const re_match_contex
-     case OP_UTF8_PERIOD:
-       if (ch >= ASCII_CHARS)
-         return false;
-+#if defined __GNUC__ && __GNUC__ >= 7
-+      __attribute__ ((fallthrough));
-+#endif
-       /* FALLTHROUGH */
- #endif
-     case OP_PERIOD:
-Index: grub-2.00/grub-core/lib/xzembed/xz_dec_lzma2.c
-===================================================================
---- grub-2.00.orig/grub-core/lib/xzembed/xz_dec_lzma2.c
-+++ grub-2.00/grub-core/lib/xzembed/xz_dec_lzma2.c
-@@ -1042,6 +1042,8 @@ enum xz_ret xz_dec_lzma2_run(
- 
- 			s->lzma2.sequence = SEQ_LZMA_PREPARE;
- 
-+		/* Fall through */
-+
- 		case SEQ_LZMA_PREPARE:
- 			if (s->lzma2.compressed < RC_INIT_BYTES)
- 				return XZ_DATA_ERROR;
-@@ -1052,6 +1054,8 @@ enum xz_ret xz_dec_lzma2_run(
- 			s->lzma2.compressed -= RC_INIT_BYTES;
- 			s->lzma2.sequence = SEQ_LZMA_RUN;
- 
-+		/* Fall through */
-+
- 		case SEQ_LZMA_RUN:
- 			/*
- 			 * Set dictionary limit to indicate how much we want
-Index: grub-2.00/grub-core/lib/xzembed/xz_dec_stream.c
-===================================================================
---- grub-2.00.orig/grub-core/lib/xzembed/xz_dec_stream.c
-+++ grub-2.00/grub-core/lib/xzembed/xz_dec_stream.c
-@@ -749,6 +749,7 @@ static enum xz_ret dec_main(struct xz_de
- 
- 			s->sequence = SEQ_BLOCK_START;
- 
-+			/* FALLTHROUGH */
- 		case SEQ_BLOCK_START:
- 			/* We need one byte of input to continue. */
- 			if (b->in_pos == b->in_size)
-@@ -772,6 +773,7 @@ static enum xz_ret dec_main(struct xz_de
- 			s->temp.pos = 0;
- 			s->sequence = SEQ_BLOCK_HEADER;
- 
-+			/* FALLTHROUGH */
- 		case SEQ_BLOCK_HEADER:
- 			if (!fill_temp(s, b))
- 				return XZ_OK;
-@@ -782,6 +784,7 @@ static enum xz_ret dec_main(struct xz_de
- 
- 			s->sequence = SEQ_BLOCK_UNCOMPRESS;
- 
-+			/* FALLTHROUGH */
- 		case SEQ_BLOCK_UNCOMPRESS:
- 			ret = dec_block(s, b);
- 			if (ret != XZ_STREAM_END)
-@@ -809,6 +812,7 @@ static enum xz_ret dec_main(struct xz_de
- 
- 			s->sequence = SEQ_BLOCK_CHECK;
- 
-+			/* FALLTHROUGH */
- 		case SEQ_BLOCK_CHECK:
- 			ret = hash_validate(s, b, 0);
- 			if (ret != XZ_STREAM_END)
-@@ -863,6 +867,7 @@ static enum xz_ret dec_main(struct xz_de
- 
- 			s->sequence = SEQ_INDEX_CRC32;
- 
-+			/* FALLTHROUGH */
- 		case SEQ_INDEX_CRC32:
- 			ret = hash_validate(s, b, 1);
- 			if (ret != XZ_STREAM_END)
-@@ -871,6 +876,7 @@ static enum xz_ret dec_main(struct xz_de
- 			s->temp.size = STREAM_HEADER_SIZE;
- 			s->sequence = SEQ_STREAM_FOOTER;
- 
-+			/* FALLTHROUGH */
- 		case SEQ_STREAM_FOOTER:
- 			if (!fill_temp(s, b))
- 				return XZ_OK;
-Index: grub-2.00/grub-core/loader/i386/linux.c
-===================================================================
---- grub-2.00.orig/grub-core/loader/i386/linux.c
-+++ grub-2.00/grub-core/loader/i386/linux.c
-@@ -977,10 +977,13 @@ grub_cmd_linux (grub_command_t cmd __att
- 	      {
- 	      case 'g':
- 		shift += 10;
-+		/* FALLTHROUGH */
- 	      case 'm':
- 		shift += 10;
-+		/* FALLTHROUGH */
- 	      case 'k':
- 		shift += 10;
-+		/* FALLTHROUGH */
- 	      default:
- 		break;
- 	      }
-Index: grub-2.00/grub-core/video/readers/jpeg.c
-===================================================================
---- grub-2.00.orig/grub-core/video/readers/jpeg.c
-+++ grub-2.00/grub-core/video/readers/jpeg.c
-@@ -701,6 +701,7 @@ grub_jpeg_decode_jpeg (struct grub_jpeg_
- 	case JPEG_MARKER_SOS:	/* Start Of Scan.  */
- 	  if (grub_jpeg_decode_sos (data))
- 	    break;
-+	  /* FALLTHROUGH */
- 	case JPEG_MARKER_RST0:	/* Restart.  */
- 	case JPEG_MARKER_RST1:
- 	case JPEG_MARKER_RST2:
-Index: grub-2.00/util/grub-mkimagexx.c
-===================================================================
---- grub-2.00.orig/util/grub-mkimagexx.c
-+++ grub-2.00/util/grub-mkimagexx.c
-@@ -485,6 +485,7 @@ SUFFIX (relocate_addresses) (Elf_Ehdr *e
- 									    + sym->st_value
- 									    - image_target->vaddr_offset));
- 		  }
-+		/* FALLTHROUGH */
- 		case R_IA64_LTOFF_FPTR22:
- 		  *gpptr = grub_host_to_target64 (addend + sym_addr);
- 		  add_value_to_slot_21 ((grub_addr_t) target,
-Index: grub-2.00/util/grub-mount.c
-===================================================================
---- grub-2.00.orig/util/grub-mount.c
-+++ grub-2.00/util/grub-mount.c
-@@ -487,6 +487,7 @@ argp_parser (int key, char *arg, struct
-       if (arg[0] != '-')
- 	break;
- 
-+    /* FALLTHROUGH */
-     default:
-       if (!arg)
- 	return 0;
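
Note: the patch deleted above, dropped as part of the 2.02 upgrade, quiets GCC 7's new -Wimplicit-fallthrough warning by marking intentional fall-throughs, either with a /* FALLTHROUGH */ comment placed directly before the next case label or, where a comment is not recognized, with __attribute__ ((fallthrough)) guarded by a GCC version check. A small self-contained C illustration of both forms; the unit_shift() helper is invented for the example and only mirrors the g/m/k cascade annotated in grub-core/loader/i386/linux.c:

    #include <stdio.h>

    static int unit_shift(char suffix)
    {
        int shift = 0;

        switch (suffix)
        {
        case 'g':
            shift += 10;
            /* FALLTHROUGH */
        case 'm':
            shift += 10;
    #if defined __GNUC__ && __GNUC__ >= 7
            /* Statement form, for places where a comment is not honored. */
            __attribute__ ((fallthrough));
    #endif
        case 'k':
            shift += 10;
            break;
        default:
            break;
        }
        return shift;
    }

    int main(void)
    {
        printf("'m' means a shift of %d bits\n", unit_shift('m'));
        return 0;
    }
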
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0003-Add-gnulib-fix-gcc7-fallthrough.diff.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0003-Add-gnulib-fix-gcc7-fallthrough.diff.patch
deleted file mode 100644
index fcfbf5c..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0003-Add-gnulib-fix-gcc7-fallthrough.diff.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-From 007f0b407f72314ec832d77e15b83ea40b160037 Mon Sep 17 00:00:00 2001
-From: Andrei Borzenkov <arvidjaar@gmail.com>
-Date: Tue, 4 Apr 2017 19:37:47 +0300
-Subject: [PATCH 3/4] Add gnulib-fix-gcc7-fallthrough.diff
-
-As long as the code is not upstream, add it as an explicit patch for the
-case of a gnulib refresh.
----
-Upstream-Status: Backport
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
- grub-core/gnulib-fix-gcc7-fallthrough.diff | 14 ++++++++++++++
- 1 file changed, 14 insertions(+)
- create mode 100644 grub-core/gnulib-fix-gcc7-fallthrough.diff
-
-diff --git a/grub-core/gnulib-fix-gcc7-fallthrough.diff b/grub-core/gnulib-fix-gcc7-fallthrough.diff
-new file mode 100644
-index 0000000..9802e2d
---- /dev/null
-+++ b/grub-core/gnulib-fix-gcc7-fallthrough.diff
-@@ -0,0 +1,14 @@
-+diff --git grub-core/gnulib/regexec.c grub-core/gnulib/regexec.c
-+index f632cd4..a7776f0 100644
-+--- grub-core/gnulib/regexec.c
-++++ grub-core/gnulib/regexec.c
-+@@ -4099,6 +4099,9 @@ check_node_accept (const re_match_context_t *mctx, const re_token_t *node,
-+     case OP_UTF8_PERIOD:
-+       if (ch >= ASCII_CHARS)
-+         return false;
-++#if defined __GNUC__ && __GNUC__ >= 7
-++      __attribute__ ((fallthrough));
-++#endif
-+       /* FALLTHROUGH */
-+ #endif
-+     case OP_PERIOD:
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0004-Fix-remaining-cases-of-gcc-7-fallthrough-warning.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0004-Fix-remaining-cases-of-gcc-7-fallthrough-warning.patch
deleted file mode 100644
index 78a70a2..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/0004-Fix-remaining-cases-of-gcc-7-fallthrough-warning.patch
+++ /dev/null
@@ -1,175 +0,0 @@
-From d454509bb866d4eaefbb558d94dd0ef0228830eb Mon Sep 17 00:00:00 2001
-From: Vladimir Serbinenko <phcoder@gmail.com>
-Date: Wed, 12 Apr 2017 01:42:38 +0000
-Subject: [PATCH 4/4] Fix remaining cases of gcc 7 fallthrough warning.
-
-They are all intended, so just add the relevant comment.
----
-Upstream-Status: Backport
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
- grub-core/kern/ia64/dl.c                     | 1 +
- grub-core/kern/mips/dl.c                     | 1 +
- grub-core/kern/sparc64/dl.c                  | 1 +
- grub-core/loader/i386/coreboot/chainloader.c | 1 +
- 4 files changed, 4 insertions(+)
-
-Index: grub-2.00/grub-core/kern/ia64/dl.c
-===================================================================
---- grub-2.00.orig/grub-core/kern/ia64/dl.c
-+++ grub-2.00/grub-core/kern/ia64/dl.c
-@@ -257,6 +257,7 @@ grub_arch_dl_relocate_symbols (grub_dl_t
- 		  case R_IA64_LTOFF22:
- 		    if (ELF_ST_TYPE (sym->st_info) == STT_FUNC)
- 		      value = *(grub_uint64_t *) sym->st_value + rel->r_addend;
-+		  /* Fallthrough.  */
- 		  case R_IA64_LTOFF_FPTR22:
- 		    *gpptr = value;
- 		    add_value_to_slot_21 (addr, (grub_addr_t) gpptr - (grub_addr_t) gp);
-Index: grub-2.00/grub-core/disk/diskfilter.c
-===================================================================
---- grub-2.00.orig/grub-core/disk/diskfilter.c
-+++ grub-2.00/grub-core/disk/diskfilter.c
-@@ -71,10 +71,12 @@ is_lv_readable (struct grub_diskfilter_l
- 	case GRUB_DISKFILTER_RAID6:
- 	  if (!easily)
- 	    need--;
-+	  /* Fallthrough.  */
- 	case GRUB_DISKFILTER_RAID4:
- 	case GRUB_DISKFILTER_RAID5:
- 	  if (!easily)
- 	    need--;
-+	  /* Fallthrough.  */
- 	case GRUB_DISKFILTER_STRIPED:
- 	  break;
- 
-@@ -507,6 +509,7 @@ read_segment (struct grub_diskfilter_seg
-       if (seg->node_count == 1)
- 	return grub_diskfilter_read_node (&seg->nodes[0],
- 					  sector, size, buf);
-+    /* Fallthrough.  */
-     case GRUB_DISKFILTER_MIRROR:
-     case GRUB_DISKFILTER_RAID10:
-       {
-Index: grub-2.00/grub-core/font/font.c
-===================================================================
---- grub-2.00.orig/grub-core/font/font.c
-+++ grub-2.00/grub-core/font/font.c
-@@ -1297,6 +1297,7 @@ blit_comb (const struct grub_unicode_gly
- 	    - grub_font_get_xheight (combining_glyphs[i]->font) - 1;
- 	  if (space <= 0)
- 	    space = 1 + (grub_font_get_xheight (main_glyph->font)) / 8;
-+        /* Fallthrough.  */
- 
- 	case GRUB_UNICODE_STACK_ATTACHED_ABOVE:
- 	  do_blit (combining_glyphs[i], targetx,
-@@ -1338,6 +1339,7 @@ blit_comb (const struct grub_unicode_gly
- 		    + combining_glyphs[i]->height);
- 	  if (space <= 0)
- 	    space = 1 + (grub_font_get_xheight (main_glyph->font)) / 8;
-+        /* Fallthrough.  */
- 
- 	case GRUB_UNICODE_STACK_ATTACHED_BELOW:
- 	  do_blit (combining_glyphs[i], targetx, -(bounds.y - space));
-Index: grub-2.00/grub-core/fs/udf.c
-===================================================================
---- grub-2.00.orig/grub-core/fs/udf.c
-+++ grub-2.00/grub-core/fs/udf.c
-@@ -970,6 +970,7 @@ grub_udf_read_symlink (grub_fshelp_node_
- 	case 1:
- 	  if (ptr[1])
- 	    goto fail;
-+	  break;
- 	case 2:
- 	  /* in 4 bytes. out: 1 byte.  */
- 	  optr = out;
-Index: grub-2.00/grub-core/lib/legacy_parse.c
-===================================================================
---- grub-2.00.orig/grub-core/lib/legacy_parse.c
-+++ grub-2.00/grub-core/lib/legacy_parse.c
-@@ -626,6 +626,7 @@ grub_legacy_parse (const char *buf, char
- 	  {
- 	  case TYPE_FILE_NO_CONSUME:
- 	    hold_arg = 1;
-+	  /* Fallthrough.  */
- 	  case TYPE_PARTITION:
- 	  case TYPE_FILE:
- 	    args[i] = adjust_file (curarg, curarglen);
-Index: grub-2.00/grub-core/lib/libgcrypt-grub/cipher/rijndael.c
-===================================================================
---- grub-2.00.orig/grub-core/lib/libgcrypt-grub/cipher/rijndael.c
-+++ grub-2.00/grub-core/lib/libgcrypt-grub/cipher/rijndael.c
-@@ -96,7 +96,8 @@ do_setkey (RIJNDAEL_context *ctx, const
-   static int initialized = 0;
-   static const char *selftest_failed=0;
-   int ROUNDS;
--  int i,j, r, t, rconpointer = 0;
-+  unsigned int i, t, rconpointer = 0;
-+  int j, r;
-   int KC;
-   union
-   {
-Index: grub-2.00/grub-core/mmap/efi/mmap.c
-===================================================================
---- grub-2.00.orig/grub-core/mmap/efi/mmap.c
-+++ grub-2.00/grub-core/mmap/efi/mmap.c
-@@ -72,6 +72,7 @@ grub_efi_mmap_iterate (grub_memory_hook_
- 		    GRUB_MEMORY_AVAILABLE);
- 	      break;
- 	    }
-+	  /* Fallthrough.  */
- 	case GRUB_EFI_RUNTIME_SERVICES_CODE:
- 	  hook (desc->physical_start, desc->num_pages * 4096,
- 		GRUB_MEMORY_CODE);
-@@ -86,6 +87,7 @@ grub_efi_mmap_iterate (grub_memory_hook_
- 	  grub_printf ("Unknown memory type %d, considering reserved\n",
- 		       desc->type);
- 
-+	  /* Fallthrough.  */
- 	case GRUB_EFI_BOOT_SERVICES_DATA:
- 	  if (!avoid_efi_boot_services)
- 	    {
-@@ -93,6 +95,7 @@ grub_efi_mmap_iterate (grub_memory_hook_
- 		    GRUB_MEMORY_AVAILABLE);
- 	      break;
- 	    }
-+	  /* Fallthrough.  */
- 	case GRUB_EFI_RESERVED_MEMORY_TYPE:
- 	case GRUB_EFI_RUNTIME_SERVICES_DATA:
- 	case GRUB_EFI_MEMORY_MAPPED_IO:
-Index: grub-2.00/grub-core/normal/charset.c
-===================================================================
---- grub-2.00.orig/grub-core/normal/charset.c
-+++ grub-2.00/grub-core/normal/charset.c
-@@ -858,6 +858,7 @@ grub_bidi_line_logical_to_visual (const
- 	  case GRUB_BIDI_TYPE_R:
- 	  case GRUB_BIDI_TYPE_AL:
- 	    bidi_needed = 1;
-+	  /* Fallthrough.  */
- 	  default:
- 	    {
- 	      if (join_state == JOIN_FORCE)
-Index: grub-2.00/grub-core/video/bochs.c
-===================================================================
---- grub-2.00.orig/grub-core/video/bochs.c
-+++ grub-2.00/grub-core/video/bochs.c
-@@ -351,6 +351,7 @@ grub_video_bochs_setup (unsigned int wid
-     case 32:
-       framebuffer.mode_info.reserved_mask_size = 8;
-       framebuffer.mode_info.reserved_field_pos = 24;
-+      /* Fallthrough.  */
- 
-     case 24:
-       framebuffer.mode_info.red_mask_size = 8;
-Index: grub-2.00/grub-core/video/cirrus.c
-===================================================================
---- grub-2.00.orig/grub-core/video/cirrus.c
-+++ grub-2.00/grub-core/video/cirrus.c
-@@ -431,6 +431,7 @@ grub_video_cirrus_setup (unsigned int wi
-     case 32:
-       framebuffer.mode_info.reserved_mask_size = 8;
-       framebuffer.mode_info.reserved_field_pos = 24;
-+      /* Fallthrough.  */
- 
-     case 24:
-       framebuffer.mode_info.red_mask_size = 8;
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub/autogen.sh-exclude-pc.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/autogen.sh-exclude-pc.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-bsp/grub/grub/autogen.sh-exclude-pc.patch
rename to import-layers/yocto-poky/meta/recipes-bsp/grub/files/autogen.sh-exclude-pc.patch
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/check-if-liblzma-is-disabled.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/check-if-liblzma-is-disabled.patch
deleted file mode 100644
index 0eece08..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/check-if-liblzma-is-disabled.patch
+++ /dev/null
@@ -1,33 +0,0 @@
-Disable liblzma if --enable-liblzma=no
-
-Upstream-Status: Inappropriate [configuration]
-
-Signed-off-by: Constantin Musca <constantinx.musca@intel.com>
-
---- a/configure.ac
-+++ b/configure.ac
-@@ -1029,10 +1029,20 @@ fi
- 
- AC_SUBST([LIBGEOM])
- 
--AC_CHECK_LIB([lzma], [lzma_code],
--             [LIBLZMA="-llzma"
--              AC_DEFINE([HAVE_LIBLZMA], [1],
--                        [Define to 1 if you have the LZMA library.])],)
-+AC_ARG_ENABLE([liblzma],
-+              [AS_HELP_STRING([--enable-liblzma],
-+                              [enable liblzma integration (default=guessed)])])
-+if test x"$enable_liblzma" = xno ; then
-+  liblzma_excuse="explicitly disabled"
-+fi
-+
-+if test x"$liblzma_excuse" = x ; then
-+  AC_CHECK_LIB([lzma], [lzma_code],
-+               [LIBLZMA="-llzma"
-+                AC_DEFINE([HAVE_LIBLZMA], [1],
-+                          [Define to 1 if you have the LZMA library.])],)
-+fi
-+
- AC_SUBST([LIBLZMA])
- 
- AC_ARG_ENABLE([libzfs],
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/fix-endianness-problem.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/fix-endianness-problem.patch
deleted file mode 100644
index 079992a..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/fix-endianness-problem.patch
+++ /dev/null
@@ -1,44 +0,0 @@
-grub-core/net/tftp.c: fix endianness problem.
-
-	* grub-core/net/tftp.c (ack): Fix endianness problem.
-	(tftp_receive): Likewise.
-	Reported by: Michael Davidsaver.
-
-Upstream-Status: Backport
-
-diff --git a/ChangeLog b/ChangeLog
-index 81bdae9..c2f42d5 100644
---- a/ChangeLog
-+++ b/ChangeLog
-@@ -1,3 +1,9 @@
-+2012-07-02  Vladimir Serbinenko  <phcoder@gmail.com>
-+
-+	* grub-core/net/tftp.c (ack): Fix endianness problem.
-+	(tftp_receive): Likewise.
-+	Reported by: Michael Davidsaver.
-+
- 2012-06-27  Vladimir Serbinenko  <phcoder@gmail.com>
- 
- 	* configure.ac: Bump version to 2.00.
-diff --git a/grub-core/net/tftp.c b/grub-core/net/tftp.c
-index 9c70efb..d0f39ea 100644
---- a/grub-core/net/tftp.c
-+++ b/grub-core/net/tftp.c
-@@ -143,7 +143,7 @@ ack (tftp_data_t data, grub_uint16_t block)
- 
-   tftph_ack = (struct tftphdr *) nb_ack.data;
-   tftph_ack->opcode = grub_cpu_to_be16 (TFTP_ACK);
--  tftph_ack->u.ack.block = block;
-+  tftph_ack->u.ack.block = grub_cpu_to_be16 (block);
- 
-   err = grub_net_send_udp_packet (data->sock, &nb_ack);
-   if (err)
-@@ -225,7 +225,7 @@ tftp_receive (grub_net_udp_socket_t sock __attribute__ ((unused)),
- 	    grub_priority_queue_pop (data->pq);
- 
- 	    if (file->device->net->packs.count < 50)
--	      err = ack (data, tftph->u.data.block);
-+	      err = ack (data, data->block + 1);
- 	    else
- 	      {
- 		file->device->net->stall = 1;
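
Note: the endianness fix deleted above boils down to converting the 16-bit TFTP block number to network byte order before it is sent, instead of copying the host-order value into the packet. A minimal hedged C sketch of the same idea using the standard POSIX htons()/ntohs() helpers (generic demonstration code, not the GRUB call site):

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>  /* htons()/ntohs(), POSIX */

    struct tftp_ack {
        uint16_t opcode;    /* 4 = ACK, big-endian on the wire */
        uint16_t block;     /* block number, big-endian on the wire */
    };

    int main(void)
    {
        uint16_t block = 2;         /* host byte order */
        struct tftp_ack ack;

        ack.opcode = htons(4);
        ack.block  = htons(block);  /* the conversion the old code was missing */

        printf("host value %d, value read back from the packet %d\n",
               block, ntohs(ack.block));
        return 0;
    }
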
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/fix-issue-with-flex-2.5.37.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/fix-issue-with-flex-2.5.37.patch
deleted file mode 100644
index 61ae2f5..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/fix-issue-with-flex-2.5.37.patch
+++ /dev/null
@@ -1,21 +0,0 @@
-Upstream-Status: Backport
-
-This fixes compilation issues when using flex-2.5.37. It was taken from upstream.
-
-Original author is: Vladimir Serbinenko  <phcoder@gmail.com>
-
-Signed-off-by: Laurentiu Palcu <laurentiu.palcu@intel.com>
-
-Index: grub-2.00/grub-core/script/yylex.l
-===================================================================
---- grub-2.00.orig/grub-core/script/yylex.l	2012-06-08 23:24:15.000000000 +0300
-+++ grub-2.00/grub-core/script/yylex.l	2013-07-31 14:34:40.708100982 +0300
-@@ -29,6 +29,8 @@
- #pragma GCC diagnostic ignored "-Wmissing-prototypes"
- #pragma GCC diagnostic ignored "-Wmissing-declarations"
- #pragma GCC diagnostic ignored "-Wunsafe-loop-optimizations"
-+#pragma GCC diagnostic ignored "-Wunused-function"
-+#pragma GCC diagnostic ignored "-Wsign-compare"
- 
- #define yyfree    grub_lexer_yyfree
- #define yyalloc   grub_lexer_yyalloc
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/fix-texinfo.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/fix-texinfo.patch
deleted file mode 100644
index b911d73..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/fix-texinfo.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-The tarball has a texi file that doesn't parse with current texinfo, so if it's
-being re-generated the build will fail.  Take a patch from upstream to fix the
-texi.
-
-Upstream-Status: Backport
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-From d4c4b8e1a085f92afcec36f6e590c6dfc51d0a1c Mon Sep 17 00:00:00 2001
-From: Bryan Hundven <bryanhundven@gmail.com>
-Date: Mon, 08 Apr 2013 13:23:07 +0000
-Subject: 	* docs/grub-dev.texi: Move @itemize after @subsection to satisfy
-
-	texinfo-5.1.
----
-(limited to 'docs/grub-dev.texi')
-
-diff --git a/docs/grub-dev.texi b/docs/grub-dev.texi
-index a4a3820..f74c966 100644
---- a/docs/grub-dev.texi
-+++ b/docs/grub-dev.texi
-@@ -1394,8 +1394,8 @@ grub_video_blit_glyph (&glyph, color, 0, 0);
- 
- @node Bitmap API
- @section Bitmap API
--@itemize
- @subsection grub_video_bitmap_create
-+@itemize
- @item Prototype:
- @example
- grub_err_t grub_video_bitmap_create (struct grub_video_bitmap **bitmap, unsigned int width, unsigned int height, enum grub_video_blit_format blit_format)
---
-cgit v0.9.0.2
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/fix.build.with.gcc-7.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/fix.build.with.gcc-7.patch
new file mode 100644
index 0000000..f35df97
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/fix.build.with.gcc-7.patch
@@ -0,0 +1,39 @@
+* e.g. with gentoo gcc-7.1 they define _FORTIFY_SOURCE by default with:
+  https://sources.gentoo.org/cgi-bin/viewvc.cgi/gentoo/src/patchsets/gcc/7.1.0/gentoo/10_all_default-fortify-source.patch?view=markup
+  which results in the following error while building grub-efi-native:
+  ./config-util.h:1504:48: error: this use of "defined" may not be portable [-Werror=expansion-to-defined]
+               || (defined _FORTIFY_SOURCE && 0 < _FORTIFY_SOURCE \
+                                                  ^~~~~~~~~~~~~~~
+  this part comes from gnulib and it's used only for Apple and BSD,
+  so we can ignore it, but we cannot add -Wno-error=expansion-to-defined
+  because this warning was introduced only in gcc-7 and older gcc
+  will fail with:
+  cc1: error: -Werror=expansion-to-defined: no option -Wexpansion-to-defined
+  use #pragma to work around this
+
+Upstream-Status: Pending (should be fixed in gnulib, which is only rarely updated in grub)
+
+Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
+
+diff -uNr grub-2.02.old/m4/extern-inline.m4 grub-2.02/m4/extern-inline.m4
+--- grub-2.02.old/m4/extern-inline.m4	2016-02-28 15:22:21.000000000 +0100
++++ grub-2.02/m4/extern-inline.m4	2017-08-22 19:26:45.213637276 +0200
+@@ -39,6 +39,10 @@
+    OS X 10.9 has a macro __header_inline indicating the bug is fixed for C and
+    for clang but remains for g++; see <http://trac.macports.org/ticket/41033>.
+    Assume DragonFly and FreeBSD will be similar.  */
++#pragma GCC diagnostic push
++#if __GNUC__ >= 7
++#pragma GCC diagnostic ignored "-Wexpansion-to-defined"
++#endif
+ #if (((defined __APPLE__ && defined __MACH__) \
+       || defined __DragonFly__ || defined __FreeBSD__) \
+      && (defined __header_inline \
+@@ -50,6 +52,7 @@
+                 && defined __GNUC__ && ! defined __cplusplus))))
+ # define _GL_EXTERN_INLINE_STDHEADER_BUG
+ #endif
++#pragma GCC diagnostic pop
+ #if ((__GNUC__ \
+       ? defined __GNUC_STDC_INLINE__ && __GNUC_STDC_INLINE__ \
+       : (199901L <= __STDC_VERSION__ \
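
Note: the new patch above scopes the workaround with a push/ignored/pop pragma triple and guards the "ignored" pragma behind a GCC version check, because pre-7 compilers reject -Wexpansion-to-defined as an unknown option. A tiny standalone C sketch of the same pattern (the FEATURE_A/FEATURE_B macros are invented for the example, and it assumes, as the patch does, that the pragma silences this preprocessor warning on GCC 7+):

    #include <stdio.h>

    #define FEATURE_A 1

    #pragma GCC diagnostic push
    #if defined __GNUC__ && __GNUC__ >= 7
    /* Older GCC has no -Wexpansion-to-defined, so only enable the pragma
       where the option exists. */
    #pragma GCC diagnostic ignored "-Wexpansion-to-defined"
    #endif
    /* Expanding 'defined' out of a macro inside #if is what triggers
       -Werror=expansion-to-defined in the gnulib header. */
    #define HAVE_BOTH (defined FEATURE_A && defined FEATURE_B)
    #if HAVE_BOTH
    # define MESSAGE "both features present"
    #else
    # define MESSAGE "not both features present"
    #endif
    #pragma GCC diagnostic pop

    int main(void)
    {
        puts(MESSAGE);
        return 0;
    }
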
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-2.00-add-oe-kernel.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-2.00-add-oe-kernel.patch
deleted file mode 100644
index eb8916c..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-2.00-add-oe-kernel.patch
+++ /dev/null
@@ -1,53 +0,0 @@
-From 7ab576a7c61406b7e63739d1b11017ae336b9008 Mon Sep 17 00:00:00 2001
-From: Robert Yang <liezhi.yang@windriver.com>
-Date: Mon, 3 Mar 2014 03:34:48 -0500
-Subject: [PATCH] grub.d/10_linux.in: add oe's kernel name
-
-Our kernel is named bzImage, so we need to add it to grub.d/10_linux.in
-so that grub-mkconfig and grub-install can work correctly.
-
-We only need to add bzImage to util/grub.d/10_linux.in, but we also add it
-to util/grub.d/20_linux_xen.in to keep compatibility.
-
-Upstream-Status: Inappropriate [OE specific]
-
-Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
----
- util/grub.d/10_linux.in     |    4 ++--
- util/grub.d/20_linux_xen.in |    2 +-
- 2 files changed, 3 insertions(+), 3 deletions(-)
-
-diff --git a/util/grub.d/10_linux.in b/util/grub.d/10_linux.in
-index 14402e8..c58f417 100644
---- a/util/grub.d/10_linux.in
-+++ b/util/grub.d/10_linux.in
-@@ -153,11 +153,11 @@ EOF
- machine=`uname -m`
- case "x$machine" in
-     xi?86 | xx86_64)
--	list=`for i in /boot/vmlinuz-* /vmlinuz-* /boot/kernel-* ; do
-+	list=`for i in /boot/bzImage-* /bzImage-* /boot/vmlinuz-* /vmlinuz-* /boot/kernel-* ; do
-                   if grub_file_is_not_garbage "$i" ; then echo -n "$i " ; fi
-               done` ;;
-     *) 
--	list=`for i in /boot/vmlinuz-* /boot/vmlinux-* /vmlinuz-* /vmlinux-* /boot/kernel-* ; do
-+	list=`for i in /boot/bzImage-* /boot/vmlinuz-* /boot/vmlinux-* /bzImage-* /vmlinuz-* /vmlinux-* /boot/kernel-* ; do
-                   if grub_file_is_not_garbage "$i" ; then echo -n "$i " ; fi
- 	     done` ;;
- esac
-diff --git a/util/grub.d/20_linux_xen.in b/util/grub.d/20_linux_xen.in
-index 1d94502..b2decf3 100644
---- a/util/grub.d/20_linux_xen.in
-+++ b/util/grub.d/20_linux_xen.in
-@@ -138,7 +138,7 @@ EOF
- EOF
- }
- 
--linux_list=`for i in /boot/vmlinu[xz]-* /vmlinu[xz]-* /boot/kernel-*; do
-+linux_list=`for i in /boot/bzImage[xz]-* /bzImage[xz]-* /boot/vmlinu[xz]-* /vmlinu[xz]-* /boot/kernel-*; do
-     if grub_file_is_not_garbage "$i"; then
-     	basename=$(basename $i)
- 	version=$(echo $basename | sed -e "s,^[^0-9]*-,,g")
--- 
-1.7.10.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-2.00-fix-enable_execute_stack-check.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-2.00-fix-enable_execute_stack-check.patch
deleted file mode 100644
index 1ff3c1c..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-2.00-fix-enable_execute_stack-check.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-Upstream-Status: Pending
-
-
-This patch avoids this configure failure
-
-configure:20306: checking whether `ccache i586-poky-linux-gcc  -m32    -march=core2 -msse3 -mtune=generic -mfpmath=sse --sysroot=/builddisk/build/build0/tmp/sysroots/emenlow' generates calls to `__enable_execute_stack()'
-configure:20320: ccache i586-poky-linux-gcc  -m32    -march=core2 -msse3 -mtune=generic -mfpmath=sse --sysroot=/builddisk/build/build0/tmp/sysroots/emenlow -O2 -pipe -g -feliminate-unused-debug-types -Wall -W -Wshadow -Wpointer-arith -Wmissing-prototypes -Wundef -Wstrict-prototypes -g -falign-jumps=1 -falign-loops=1 -falign-functions=1 -mno-mmx -mno-sse -mno-sse2 -mno-3dnow -mfpmath=387 -fno-dwarf2-cfi-asm -m32 -fno-stack-protector -mno-stack-arg-probe -Werror -Wno-trampolines -falign-loops=1 -S conftest.c
-conftest.c:308:6: error: no previous prototype for 'g' [-Werror=missing-prototypes]
-cc1: all warnings being treated as errors
-configure:20323: $? = 1
-configure:20327: error: ccache i586-poky-linux-gcc  -m32    -march=core2 -msse3 -mtune=generic -mfpmath=sse --sysroot=/builddisk/build/build0/tmp/sysroots/emenlow failed to produce assembly code
-
-Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
-2012/04/13
-
-Index: grub-2.00/acinclude.m4
-===================================================================
---- grub-2.00.orig/acinclude.m4
-+++ grub-2.00/acinclude.m4
-@@ -317,6 +317,7 @@ dnl Check if the C compiler generates ca
- AC_DEFUN([grub_CHECK_ENABLE_EXECUTE_STACK],[
- AC_MSG_CHECKING([whether `$CC' generates calls to `__enable_execute_stack()'])
- AC_LANG_CONFTEST([AC_LANG_SOURCE([[
-+void g (int);
- void f (int (*p) (void));
- void g (int i)
- {
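
Note: the one-line change deleted above only adds a prototype for g() to the configure test program, because the probe is compiled with -Werror and -Wmissing-prototypes, and an external function defined without a prior declaration trips that warning. A minimal C example of the failure mode and the fix:

    #include <stdio.h>

    /* Without this declaration, compiling with -Wmissing-prototypes -Werror
       fails with "no previous prototype for 'g'", which is what broke the
       __enable_execute_stack() configure check. */
    void g(int i);

    void g(int i)
    {
        printf("g(%d)\n", i);
    }

    int main(void)
    {
        g(42);
        return 0;
    }
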
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-2.00-fpmath-sse-387-fix.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-2.00-fpmath-sse-387-fix.patch
deleted file mode 100644
index dd30d94..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-2.00-fpmath-sse-387-fix.patch
+++ /dev/null
@@ -1,24 +0,0 @@
-Upstream-Status: pending
-
-This patch fixes this configure issue for grub when -mfpmath=sse is in the gcc parameters.
-
-configure:20574: i586-poky-linux-gcc  -m32    -march=core2 -msse3 -mtune=generic -mfpmath=sse --sysroot=/usr/local/dev/yocto/grubtest2/build/tmp/sysroots/emenlow -o conftest -O2 -pipe -g -feliminate-unused-debug-types -Wall -W -Wshadow -Wpointer-arith -Wmissing-prototypes -Wundef -Wstrict-prototypes -g -falign-jumps=1 -falign-loops=1 -falign-functions=1 -mno-mmx -mno-sse -mno-sse2 -mno-3dnow -fno-dwarf2-cfi-asm -m32 -fno-stack-protector -mno-stack-arg-probe -Werror -nostdlib -Wl,--defsym,___main=0x8100  -Wall -W -I$(top_srcdir)/include -I$(top_builddir)/include  -DGRUB_MACHINE_PCBIOS=1 -DGRUB_MACHINE=I386_PC -Wl,-O1 -Wl,--hash-style=gnu -Wl,--as-needed conftest.c  >&5
-conftest.c:1:0: error: SSE instruction set disabled, using 387 arithmetics [-Werror]
-cc1: all warnings being treated as errors
-
-Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
-2012/04/13
-
-Index: grub-1.99/configure.ac
-===================================================================
---- grub-1.99.orig/configure.ac
-+++ grub-1.99/configure.ac
-@@ -378,7 +378,7 @@ if test "x$target_cpu" = xi386; then
- 
-   # Some toolchains enable these features by default, but they need
-   # registers that aren't set up properly in GRUB.
--  TARGET_CFLAGS="$TARGET_CFLAGS -mno-mmx -mno-sse -mno-sse2 -mno-3dnow"
-+  TARGET_CFLAGS="$TARGET_CFLAGS -mno-mmx -mno-sse -mno-sse2 -mno-3dnow -mfpmath=387"
- fi
- 
- # By default, GCC 4.4 generates .eh_frame sections containing unwind
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-efi-allow-a-compilation-without-mcmodel-large.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-efi-allow-a-compilation-without-mcmodel-large.patch
deleted file mode 100644
index 4588fca..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-efi-allow-a-compilation-without-mcmodel-large.patch
+++ /dev/null
@@ -1,131 +0,0 @@
-Allow a compilation without -mcmodel=large
-
-It's provided by Vladimir Serbinenko, and he will commit
-it upstream, so it should be a backport patch.
-
-Upstream-Status: Backport
-
-Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
-
---
-diff --git a/configure.ac b/configure.ac
-index 9f8fb8a..2c5e6ed 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -723,9 +723,7 @@ if test "$target_cpu" = x86_64; then
- 		      [grub_cv_cc_mcmodel=yes],
- 		      [grub_cv_cc_mcmodel=no])
-   ])
--  if test "x$grub_cv_cc_mcmodel" = xno; then
--    AC_MSG_ERROR([-mcmodel=large not supported. Upgrade your gcc.])
--  else
-+  if test "x$grub_cv_cc_mcmodel" = xyes; then
-     TARGET_CFLAGS="$TARGET_CFLAGS -mcmodel=large"
-   fi
- fi
-diff --git a/grub-core/kern/efi/mm.c b/grub-core/kern/efi/mm.c
-index 1409b5d..6e9dace 100644
---- a/grub-core/kern/efi/mm.c
-+++ b/grub-core/kern/efi/mm.c
-@@ -32,6 +32,12 @@
- #define BYTES_TO_PAGES(bytes)	(((bytes) + 0xfff) >> 12)
- #define PAGES_TO_BYTES(pages)	((pages) << 12)
-
-+#if defined (__code_model_large__) || !defined (__x86_64__)
-+#define MAX_USABLE_ADDRESS 0xffffffff
-+#else
-+#define MAX_USABLE_ADDRESS 0x7fffffff
-+#endif
-+
- /* The size of a memory map obtained from the firmware. This must be
-    a multiplier of 4KB.  */
- #define MEMORY_MAP_SIZE	0x3000
-@@ -58,7 +64,7 @@ grub_efi_allocate_pages (grub_efi_physical_address_t address,
-
- #if 1
-   /* Limit the memory access to less than 4GB for 32-bit platforms.  */
--  if (address > 0xffffffff)
-+  if (address > MAX_USABLE_ADDRESS)
-     return 0;
- #endif
-
-@@ -66,7 +72,7 @@ grub_efi_allocate_pages (grub_efi_physical_address_t address,
-   if (address == 0)
-     {
-       type = GRUB_EFI_ALLOCATE_MAX_ADDRESS;
--      address = 0xffffffff;
-+      address = MAX_USABLE_ADDRESS;
-     }
-   else
-     type = GRUB_EFI_ALLOCATE_ADDRESS;
-@@ -86,7 +92,7 @@ grub_efi_allocate_pages (grub_efi_physical_address_t address,
-     {
-       /* Uggh, the address 0 was allocated... This is too annoying,
- 	 so reallocate another one.  */
--      address = 0xffffffff;
-+      address = MAX_USABLE_ADDRESS;
-       status = efi_call_4 (b->allocate_pages, type, GRUB_EFI_LOADER_DATA, pages, &address);
-       grub_efi_free_pages (0, pages);
-       if (status != GRUB_EFI_SUCCESS)
-@@ -319,7 +325,7 @@ filter_memory_map (grub_efi_memory_descriptor_t *memory_map,
-     {
-       if (desc->type == GRUB_EFI_CONVENTIONAL_MEMORY
- #if 1
--	  && desc->physical_start <= 0xffffffff
-+	  && desc->physical_start <= MAX_USABLE_ADDRESS
- #endif
- 	  && desc->physical_start + PAGES_TO_BYTES (desc->num_pages) > 0x100000
- 	  && desc->num_pages != 0)
-@@ -337,9 +343,9 @@ filter_memory_map (grub_efi_memory_descriptor_t *memory_map,
- #if 1
- 	  if (BYTES_TO_PAGES (filtered_desc->physical_start)
- 	      + filtered_desc->num_pages
--	      > BYTES_TO_PAGES (0x100000000LL))
-+	      > BYTES_TO_PAGES (MAX_USABLE_ADDRESS+1LL))
- 	    filtered_desc->num_pages
--	      = (BYTES_TO_PAGES (0x100000000LL)
-+	      = (BYTES_TO_PAGES (MAX_USABLE_ADDRESS+1LL)
- 		 - BYTES_TO_PAGES (filtered_desc->physical_start));
- #endif
-
-diff --git a/grub-core/kern/x86_64/dl.c b/grub-core/kern/x86_64/dl.c
-index 65f09ef..17c1215 100644
---- a/grub-core/kern/x86_64/dl.c
-+++ b/grub-core/kern/x86_64/dl.c
-@@ -100,14 +100,32 @@ grub_arch_dl_relocate_symbols (grub_dl_t mod, void *ehdr)
- 		    break;
-
- 		  case R_X86_64_PC32:
--		    *addr32 += rel->r_addend + sym->st_value -
--		              (Elf64_Xword) seg->addr - rel->r_offset;
-+		    {
-+		      grub_int64_t value;
-+		      value = ((grub_int32_t) *addr32) + rel->r_addend + sym->st_value -
-+			(Elf64_Xword) seg->addr - rel->r_offset;
-+		      if (value != (grub_int32_t) value)
-+			return grub_error (GRUB_ERR_BAD_MODULE, "relocation out of range");
-+		      *addr32 = value;
-+		    }
- 		    break;
-
-                   case R_X86_64_32:
-+		    {
-+		      grub_uint64_t value = *addr32 + rel->r_addend + sym->st_value;
-+		      if (value != (grub_uint32_t) value)
-+			return grub_error (GRUB_ERR_BAD_MODULE, "relocation out of range");
-+		      *addr32 = value;
-+		    }
-+		    break;
-                   case R_X86_64_32S:
--                    *addr32 += rel->r_addend + sym->st_value;
--                    break;
-+		    {
-+		      grub_int64_t value = ((grub_int32_t) *addr32) + rel->r_addend + sym->st_value;
-+		      if (value != (grub_int32_t) value)
-+			return grub_error (GRUB_ERR_BAD_MODULE, "relocation out of range");
-+		      *addr32 = value;
-+		    }
-+		    break;
-
- 		  default:
- 		    return grub_error (GRUB_ERR_NOT_IMPLEMENTED_YET,
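
Note: besides relaxing the -mcmodel=large requirement, the deleted patch above makes the x86_64 module loader verify that each 64-bit relocation result still fits in the 32-bit slot it is written to (R_X86_64_PC32 / _32 / _32S) rather than truncating silently. The round-trip check it relies on is a common idiom; a small standalone C sketch:

    #include <stdio.h>
    #include <stdint.h>

    /* Returns 0 on success, -1 if 'value' cannot be stored in a signed
       32-bit relocation slot without losing bits. */
    static int store_signed32(int32_t *slot, int64_t value)
    {
        /* Round trip: narrow, widen again, compare with the original. */
        if (value != (int64_t)(int32_t)value)
            return -1;      /* "relocation out of range" in the patch */
        *slot = (int32_t)value;
        return 0;
    }

    int main(void)
    {
        int32_t slot;

        printf("fits:      %d\n", store_signed32(&slot, 123456));
        printf("overflows: %d\n", store_signed32(&slot, INT64_C(1) << 40));
        return 0;
    }
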
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-efi-fix-with-glibc-2.20.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-efi-fix-with-glibc-2.20.patch
deleted file mode 100644
index 4f12628..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-efi-fix-with-glibc-2.20.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From eb6368e65f6988eebad26cebdec057f797bceb40 Mon Sep 17 00:00:00 2001
-From: Robert Yang <liezhi.yang@windriver.com>
-Date: Tue, 9 Sep 2014 00:02:30 -0700
-Subject: [PATCH] Fix build with glibc 2.20
-
-* grub-core/kern/emu/hostfs.c: squashes the warning below
-  warning: #warning "_BSD_SOURCE and _SVID_SOURCE are deprecated, use _DEFAULT_SOURCE"
-
-Upstream-Status: Submitted
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
----
- grub-core/kern/emu/hostfs.c |    2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/grub-core/kern/emu/hostfs.c b/grub-core/kern/emu/hostfs.c
-index 3cb089c..a51ee32 100644
---- a/grub-core/kern/emu/hostfs.c
-+++ b/grub-core/kern/emu/hostfs.c
-@@ -16,7 +16,7 @@
-  *  You should have received a copy of the GNU General Public License
-  *  along with GRUB.  If not, see <http://www.gnu.org/licenses/>.
-  */
--#define _BSD_SOURCE
-+#define _DEFAULT_SOURCE
- #include <grub/fs.h>
- #include <grub/file.h>
- #include <grub/disk.h>
--- 
-1.7.9.5
-
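
Note: the deleted glibc 2.20 fix above is just a feature-test-macro rename: _BSD_SOURCE is deprecated in favour of _DEFAULT_SOURCE, and defining the old name produces the #warning quoted in the patch header, which -Werror turns into a build failure. The macro must be defined before the first libc header is included, as in hostfs.c; a tiny generic C illustration (strdup() is used only as an example of a function such macros expose):

    /* Must come before any libc include. */
    #define _DEFAULT_SOURCE

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* strdup() is hidden under strict -std=c99 unless a feature-test
           macro such as _DEFAULT_SOURCE (or _POSIX_C_SOURCE >= 200809L)
           exposes its declaration. */
        char *copy = strdup("glibc 2.20 deprecated _BSD_SOURCE");
        if (copy == NULL)
            return 1;
        puts(copy);
        free(copy);
        return 0;
    }
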
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-install.in.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-install.in.patch
deleted file mode 100644
index 326951d..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-install.in.patch
+++ /dev/null
@@ -1,20 +0,0 @@
-Upstream-Status: Inappropriate [embedded specific]
-
-Our use of grub-install doesn't require the -x option, so we should
-be able to make use of grep versions that don't support it.
-
-Signed-off-by: Tom Zanussi <tom.zanussi@intel.com>
-
-Index: grub-1.99/util/grub-install.in
-===================================================================
---- grub-1.99.orig/util/grub-install.in	2011-09-09 22:37:20.093906679 -0500
-+++ grub-1.99/util/grub-install.in	2011-09-09 22:37:30.854737882 -0500
-@@ -510,7 +510,7 @@
- 
- if [ "x${devabstraction_module}" = "x" ] ; then
-     if [ x"${install_device}" != x ]; then
--      if echo "${install_device}" | grep -qx "(.*)" ; then
-+      if echo "${install_device}" | grep -q "(.*)" ; then
-         install_drive="${install_device}"
-       else
-         install_drive="`"$grub_probe" --device-map="${device_map}" --target=drive --device "${install_device}"`" || exit 1
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-no-unused-result.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-no-unused-result.patch
deleted file mode 100644
index 4cbd083..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub-no-unused-result.patch
+++ /dev/null
@@ -1,19 +0,0 @@
-Signed-off-by: Radu Moisan <radu.moisan@intel.com>
-Upstream-Status: Pending
-
-I had an error because of an unused return value for read().
-I added -Wno-unused-result.
-
-Index: grub-2.00/configure.ac
-===================================================================
---- grub-2.00.orig/configure.ac	2012-08-13 16:32:33.000000000 +0300
-+++ grub-2.00/configure.ac	2012-08-13 16:38:22.000000000 +0300
-@@ -394,7 +394,7 @@
- LIBS=""
- 
- # debug flags.
--WARN_FLAGS="-Wall -W -Wshadow -Wold-style-definition -Wpointer-arith -Wundef -Wextra -Waddress -Wattributes -Wcast-align -Wchar-subscripts -Wcomment -Wdeprecated-declarations -Wdisabled-optimization -Wdiv-by-zero -Wempty-body -Wendif-labels -Wfloat-equal -Wformat-extra-args -Wformat-security -Wformat-y2k -Wimplicit -Wimplicit-function-declaration -Wimplicit-int -Winit-self -Wint-to-pointer-cast -Winvalid-pch -Wmain -Wmissing-braces -Wmissing-field-initializers -Wmissing-format-attribute -Wmissing-noreturn -Wmultichar -Wnonnull -Woverflow -Wparentheses -Wpointer-arith -Wpointer-to-int-cast -Wreturn-type -Wsequence-point -Wshadow -Wsign-compare -Wstrict-aliasing -Wswitch -Wtrigraphs -Wundef -Wunknown-pragmas -Wunused -Wunused-function -Wunused-label -Wunused-parameter -Wunused-value  -Wunused-variable -Wvariadic-macros -Wvolatile-register-var -Wwrite-strings -Wnested-externs -Wstrict-prototypes -Wpointer-sign"
-+WARN_FLAGS="-Wall -W -Wshadow -Wold-style-definition -Wpointer-arith -Wundef -Wextra -Waddress -Wattributes -Wcast-align -Wchar-subscripts -Wcomment -Wdeprecated-declarations -Wdisabled-optimization -Wdiv-by-zero -Wempty-body -Wendif-labels -Wfloat-equal -Wformat-extra-args -Wformat-security -Wformat-y2k -Wimplicit -Wimplicit-function-declaration -Wimplicit-int -Winit-self -Wint-to-pointer-cast -Winvalid-pch -Wmain -Wmissing-braces -Wmissing-field-initializers -Wmissing-format-attribute -Wmissing-noreturn -Wmultichar -Wnonnull -Woverflow -Wparentheses -Wpointer-arith -Wpointer-to-int-cast -Wreturn-type -Wsequence-point -Wshadow -Wsign-compare -Wstrict-aliasing -Wswitch -Wtrigraphs -Wundef -Wunknown-pragmas -Wunused -Wunused-function -Wunused-label -Wunused-parameter -Wunused-value  -Wunused-variable -Wno-unused-result -Wvariadic-macros -Wvolatile-register-var -Wwrite-strings -Wnested-externs -Wstrict-prototypes -Wpointer-sign"
- HOST_CFLAGS="$HOST_CFLAGS $WARN_FLAGS"
- TARGET_CFLAGS="$TARGET_CFLAGS $WARN_FLAGS -g -Wredundant-decls -Wmissing-prototypes -Wmissing-declarations"
- TARGET_CCASFLAGS="$TARGET_CCASFLAGS -g"
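
Note: the deleted patch above silences the unused-result warning seen for read() by appending -Wno-unused-result to WARN_FLAGS. The other way out of that warning is to actually check the return value; a generic C sketch of that alternative (the file path is only an example):

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        char buf[64];
        int fd = open("/etc/hostname", O_RDONLY);   /* any readable file */

        if (fd < 0)
            return 1;

        /* With _FORTIFY_SOURCE enabled, glibc declares read() with
           warn_unused_result; ignoring its return value then triggers
           -Wunused-result unless the warning is disabled, as the deleted
           patch chose to do. */
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n < 0) {
            close(fd);
            return 1;
        }
        buf[n] = '\0';
        printf("read %zd bytes\n", n);
        close(fd);
        return 0;
    }
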
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub2-fix-initrd-size-bug.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub2-fix-initrd-size-bug.patch
deleted file mode 100644
index d114f48..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub2-fix-initrd-size-bug.patch
+++ /dev/null
@@ -1,48 +0,0 @@
-From 8fbb150a56966edde4dc07b8d01be5eb149b65ab Mon Sep 17 00:00:00 2001
-From: Colin Watson <cjwatson@ubuntu.com>
-Date: Sun, 20 Jan 2013 23:03:35 +0000
-Subject: [PATCH 1/1] * grub-core/loader/i386/linux.c (grub_cmd_initrd): Don't
- add the initrd size to addr_min, since the initrd will be allocated after
- this address.
-
-commit 6a0debbd9167e8f79cdef5497a73d23e580c0cd4 upstream
-
-Upstream-Status: Backport
-
-Signed-off-by: Shan Hai <shan.hai@windriver.com>
----
- ChangeLog                     | 6 ++++++
- grub-core/loader/i386/linux.c | 3 +--
- 2 files changed, 7 insertions(+), 2 deletions(-)
-
-diff --git a/ChangeLog b/ChangeLog
-index c2f42d5..40cb508 100644
---- a/ChangeLog
-+++ b/ChangeLog
-@@ -1,3 +1,9 @@
-+2013-01-20  Colin Watson  <cjwatson@ubuntu.com>
-+
-+	* grub-core/loader/i386/linux.c (grub_cmd_initrd): Don't add the
-+	initrd size to addr_min, since the initrd will be allocated after
-+	this address.
-+
- 2012-07-02  Vladimir Serbinenko  <phcoder@gmail.com>
- 
- 	* grub-core/net/tftp.c (ack): Fix endianness problem.
-diff --git a/grub-core/loader/i386/linux.c b/grub-core/loader/i386/linux.c
-index 62087cf..e2425c8 100644
---- a/grub-core/loader/i386/linux.c
-+++ b/grub-core/loader/i386/linux.c
-@@ -1098,8 +1098,7 @@ grub_cmd_initrd (grub_command_t cmd __attribute__ ((unused)),
-      worse than that of Linux 2.3.xx, so avoid the last 64kb.  */
-   addr_max -= 0x10000;
- 
--  addr_min = (grub_addr_t) prot_mode_target + prot_init_space
--             + page_align (size);
-+  addr_min = (grub_addr_t) prot_mode_target + prot_init_space;
- 
-   /* Put the initrd as high as possible, 4KiB aligned.  */
-   addr = (addr_max - size) & ~0xFFF;
--- 
-1.8.5.2.233.g932f7e4
-
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub2-remove-sparc64-setup-from-x86-builds.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub2-remove-sparc64-setup-from-x86-builds.patch
deleted file mode 100644
index 5168d3c..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/grub2-remove-sparc64-setup-from-x86-builds.patch
+++ /dev/null
@@ -1,44 +0,0 @@
-Subject: [PATCH] grub2: remove grub-sparc64-setup from x86 builds
-
-* remove the grub-sparc64-setup files from the x86 builds.
-
-Upstream-Status: Inappropriate [embedded specific]
-
-Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
-Signed-off-by: Mingli Yu <mingli.yu@windriver.com>
----
- Makefile.util.def | 18 ------------------
- 1 file changed, 18 deletions(-)
-
-diff --git a/Makefile.util.def b/Makefile.util.def
-index b80187c..a670cf2 100644
---- a/Makefile.util.def
-+++ b/Makefile.util.def
-@@ -321,24 +321,6 @@ program = {
- };
- 
- program = {
--  name = grub-sparc64-setup;
--  installdir = sbin;
--  mansection = 8;
--  common = util/grub-setup.c;
--  common = util/lvm.c;
--  common = grub-core/kern/emu/argp_common.c;
--  common = grub-core/lib/reed_solomon.c;
--  common = util/ieee1275/ofpath.c;
--
--  ldadd = libgrubmods.a;
--  ldadd = libgrubkern.a;
--  ldadd = libgrubgcry.a;
--  ldadd = grub-core/gnulib/libgnu.a;
--  ldadd = '$(LIBINTL) $(LIBDEVMAPPER) $(LIBUTIL) $(LIBZFS) $(LIBNVPAIR) $(LIBGEOM)';
--  cppflags = '-DGRUB_SETUP_SPARC64=1';
--};
--
--program = {
-   name = grub-ofpathname;
-   installdir = sbin;
-   mansection = 8;
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/remove-gets.patch b/import-layers/yocto-poky/meta/recipes-bsp/grub/files/remove-gets.patch
deleted file mode 100644
index 463f784..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/files/remove-gets.patch
+++ /dev/null
@@ -1,20 +0,0 @@
-ISO C11 removes the specification of gets() from the C language; eglibc 2.16+ removed it.
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
-Index: grub-1.99/grub-core/gnulib/stdio.in.h
-===================================================================
---- grub-1.99.orig/grub-core/gnulib/stdio.in.h	2010-12-01 06:45:43.000000000 -0800
-+++ grub-1.99/grub-core/gnulib/stdio.in.h	2012-07-04 12:25:02.057099107 -0700
-@@ -140,8 +140,10 @@
- /* It is very rare that the developer ever has full control of stdin,
-    so any use of gets warrants an unconditional warning.  Assume it is
-    always declared, since it is required by C89.  */
-+#if defined gets
- #undef gets
- _GL_WARN_ON_USE (gets, "gets is a security hole - use fgets instead");
-+#endif
- 
- #if @GNULIB_FOPEN@
- # if @REPLACE_FOPEN@
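
Note: the deleted remove-gets.patch above only matters for libcs that no longer declare gets() at all (ISO C11 drops it; eglibc 2.16+ removed the declaration), so gnulib's #undef/_GL_WARN_ON_USE stanza is wrapped in #if defined gets. Code that still needs line input should use fgets() with an explicit buffer size; a minimal C example:

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char line[128];

        /* gets(line) had no bound and is gone in C11 / glibc 2.16+;
           fgets() takes the buffer size and keeps the trailing newline. */
        if (fgets(line, sizeof(line), stdin) == NULL)
            return 1;

        line[strcspn(line, "\n")] = '\0';  /* strip the newline, if present */
        printf("you typed: %s\n", line);
        return 0;
    }
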
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub-efi_2.00.bb b/import-layers/yocto-poky/meta/recipes-bsp/grub/grub-efi_2.00.bb
deleted file mode 100644
index e12f1d7..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub-efi_2.00.bb
+++ /dev/null
@@ -1,72 +0,0 @@
-require grub2.inc
-
-DEPENDS_class-target = "grub-efi-native"
-RDEPENDS_${PN}_class-target = "diffutils freetype"
-PR = "r3"
-
-SRC_URI += " \
-           file://cfg \
-          "
-
-S = "${WORKDIR}/grub-${PV}"
-
-# Determine the target arch for the grub modules
-python __anonymous () {
-    import re
-    target = d.getVar('TARGET_ARCH')
-    if target == "x86_64":
-        grubtarget = 'x86_64'
-        grubimage = "grub-efi-bootx64.efi"
-    elif re.match('i.86', target):
-        grubtarget = 'i386'
-        grubimage = "grub-efi-bootia32.efi"
-    else:
-        raise bb.parse.SkipPackage("grub-efi is incompatible with target %s" % target)
-    d.setVar("GRUB_TARGET", grubtarget)
-    d.setVar("GRUB_IMAGE", grubimage)
-}
-
-inherit deploy
-
-CACHED_CONFIGUREVARS += "ac_cv_path_HELP2MAN="
-EXTRA_OECONF = "--with-platform=efi --disable-grub-mkfont \
-                --enable-efiemu=no --program-prefix='' \
-                --enable-liblzma=no --enable-device-mapper=no --enable-libzfs=no \
-                --enable-largefile \
-"
-
-# ldm.c:114:7: error: trampoline generated for nested function 'hook' [-Werror=trampolines]
-# and many other places in the grub code when compiled with some native gcc compilers (specifically, gentoo)
-CFLAGS_append_class-native = " -Wno-error=trampolines"
-
-do_install_class-native() {
-	install -d ${D}${bindir}
-	install -m 755 grub-mkimage ${D}${bindir}
-}
-
-GRUB_BUILDIN ?= "boot linux ext2 fat serial part_msdos part_gpt normal efi_gop iso9660 search"
-
-do_deploy() {
-	# Search for the grub.cfg on the local boot media by using the
-	# built in cfg file provided via this recipe
-	grub-mkimage -c ../cfg -p /EFI/BOOT -d ./grub-core/ \
-	               -O ${GRUB_TARGET}-efi -o ./${GRUB_IMAGE} \
-	               ${GRUB_BUILDIN}
-	install -m 644 ${B}/${GRUB_IMAGE} ${DEPLOYDIR}
-}
-
-do_deploy_class-native() {
-	:
-}
-
-addtask deploy after do_install before do_build
-
-FILES_${PN} += "${libdir}/grub/${GRUB_TARGET}-efi \
-                ${datadir}/grub \
-                "
-
-# 64-bit binaries are expected for the bootloader with an x32 userland
-INSANE_SKIP_${PN}_append_linux-gnux32 = " arch"
-INSANE_SKIP_${PN}-dbg_append_linux-gnux32 = " arch"
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub-efi_2.02.bb b/import-layers/yocto-poky/meta/recipes-bsp/grub/grub-efi_2.02.bb
new file mode 100644
index 0000000..128da16
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-bsp/grub/grub-efi_2.02.bb
@@ -0,0 +1,82 @@
+require grub2.inc
+
+GRUBPLATFORM = "efi"
+
+DEPENDS_append_class-target = " grub-efi-native"
+RDEPENDS_${PN}_class-target = "diffutils freetype"
+
+SRC_URI += " \
+           file://cfg \
+          "
+
+S = "${WORKDIR}/grub-${PV}"
+
+# Determine the target arch for the grub modules
+python __anonymous () {
+    import re
+    target = d.getVar('TARGET_ARCH')
+    if target == "x86_64":
+        grubtarget = 'x86_64'
+        grubimage = "grub-efi-bootx64.efi"
+    elif re.match('i.86', target):
+        grubtarget = 'i386'
+        grubimage = "grub-efi-bootia32.efi"
+    else:
+        raise bb.parse.SkipPackage("grub-efi is incompatible with target %s" % target)
+    d.setVar("GRUB_TARGET", grubtarget)
+    d.setVar("GRUB_IMAGE", grubimage)
+}
+
+inherit deploy
+
+CACHED_CONFIGUREVARS += "ac_cv_path_HELP2MAN="
+EXTRA_OECONF += "--enable-efiemu=no"
+
+# ldm.c:114:7: error: trampoline generated for nested function 'hook' [-Werror=trampolines]
+# and many other places in the grub code when compiled with some native gcc compilers (specifically, gentoo)
+CFLAGS_append_class-native = " -Wno-error=trampolines"
+
+do_install_class-native() {
+	install -d ${D}${bindir}
+	install -m 755 grub-mkimage ${D}${bindir}
+}
+
+do_install_append_class-target() {
+    # Remove build host references...
+    find "${D}" -name modinfo.sh -type f -exec \
+        sed -i \
+        -e 's,--sysroot=${STAGING_DIR_TARGET},,g' \
+        -e 's|${DEBUG_PREFIX_MAP}||g' \
+        -e 's:${RECIPE_SYSROOT_NATIVE}::g' \
+        {} +
+}
+
+GRUB_BUILDIN ?= "boot linux ext2 fat serial part_msdos part_gpt normal \
+                 efi_gop iso9660 search loadenv test"
+
+do_deploy() {
+	# Search for the grub.cfg on the local boot media by using the
+	# built in cfg file provided via this recipe
+	grub-mkimage -c ../cfg -p /EFI/BOOT -d ./grub-core/ \
+	               -O ${GRUB_TARGET}-efi -o ./${GRUB_IMAGE} \
+	               ${GRUB_BUILDIN}
+	install -m 644 ${B}/${GRUB_IMAGE} ${DEPLOYDIR}
+}
+
+do_deploy_class-native() {
+	:
+}
+
+addtask deploy after do_install before do_build
+
+FILES_${PN} += "${libdir}/grub/${GRUB_TARGET}-efi \
+                ${datadir}/grub \
+                "
+
+# 64-bit binaries are expected for the bootloader with an x32 userland
+INSANE_SKIP_${PN}_append_linux-gnux32 = " arch"
+INSANE_SKIP_${PN}-dbg_append_linux-gnux32 = " arch"
+INSANE_SKIP_${PN}_append_linux-muslx32 = " arch"
+INSANE_SKIP_${PN}-dbg_append_linux-muslx32 = " arch"
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub2.inc b/import-layers/yocto-poky/meta/recipes-bsp/grub/grub2.inc
index a93c99e..28f96bb 100644
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub2.inc
+++ b/import-layers/yocto-poky/meta/recipes-bsp/grub/grub2.inc
@@ -11,46 +11,40 @@
 LICENSE = "GPLv3"
 LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
 
-SRC_URI = "ftp://ftp.gnu.org/gnu/grub/grub-${PV}.tar.gz \
-           file://grub-module-explicitly-keeps-symbole-.module_license.patch \
-           file://grub-2.00-fpmath-sse-387-fix.patch \
-           file://check-if-liblzma-is-disabled.patch \
-           file://fix-issue-with-flex-2.5.37.patch \
-           file://grub-2.00-add-oe-kernel.patch \
-           file://grub-install.in.patch \
-           file://remove-gets.patch \
-           file://fix-endianness-problem.patch \
-           file://grub2-remove-sparc64-setup-from-x86-builds.patch \
-           file://grub-2.00-fix-enable_execute_stack-check.patch \
-           file://grub-no-unused-result.patch \
-           file://grub-efi-allow-a-compilation-without-mcmodel-large.patch \
-           file://grub-efi-fix-with-glibc-2.20.patch \
+SRC_URI = "https://ftp.gnu.org/gnu/grub/grub-${PV}.tar.gz \
+           file://0001-Disable-mfpmath-sse-as-well-when-SSE-is-disabled.patch \
            file://0001-Unset-need_charset_alias-when-building-for-musl.patch \
-           file://0001-parse_dhcp_vendor-Add-missing-const-qualifiers.patch \
-           file://grub2-fix-initrd-size-bug.patch \
-           file://0001-Fix-CVE-2015-8370-Grub2-user-pass-vulnerability.patch \
-           file://0001-Remove-direct-_llseek-code-and-require-long-filesyst.patch \
-           file://fix-texinfo.patch \
-           file://0001-grub-core-gettext-gettext.c-main_context-secondary_c.patch \
-           file://0001-Enforce-no-pie-if-the-compiler-supports-it.patch \
-           file://0001-grub-core-kern-efi-mm.c-grub_efi_finish_boot_service.patch \
-           file://0002-grub-core-kern-efi-mm.c-grub_efi_get_memory_map-Neve.patch \
-           file://0001-build-Use-AC_HEADER_MAJOR-to-find-device-macros.patch \
-           file://0001-btrfs-avoid-used-uninitialized-error-with-GCC7.patch \
-           file://0002-i386-x86_64-ppc-fix-switch-fallthrough-cases-with-GC.patch \
-           file://0003-Add-gnulib-fix-gcc7-fallthrough.diff.patch \
-           file://0004-Fix-remaining-cases-of-gcc-7-fallthrough-warning.patch \
-            "
+           file://autogen.sh-exclude-pc.patch \
+           file://grub-module-explicitly-keeps-symbole-.module_license.patch \
+           file://0001-grub.d-10_linux.in-add-oe-s-kernel-name.patch \
+           file://fix.build.with.gcc-7.patch \
+"
+SRC_URI[md5sum] = "1116d1f60c840e6dbd67abbc99acb45d"
+SRC_URI[sha256sum] = "660ee136fbcee08858516ed4de2ad87068bfe1b6b8b37896ce3529ff054a726d"
 
-DEPENDS = "flex-native bison-native autogen-native"
+DEPENDS = "flex-native bison-native"
 
-SRC_URI[md5sum] = "e927540b6eda8b024fb0391eeaa4091c"
-SRC_URI[sha256sum] = "65b39a0558f8c802209c574f4d02ca263a804e8a564bc6caf1cd0fd3b3cc11e3"
+COMPATIBLE_HOST = '(x86_64.*|i.86.*|arm.*|aarch64.*)-(linux.*|freebsd.*)'
+COMPATIBLE_HOST_armv7a = 'null'
+COMPATIBLE_HOST_armv7ve = 'null'
 
-COMPATIBLE_HOST = '(x86_64.*|i.86.*)-(linux|freebsd.*)'
+# configure.ac has code to set this automagically from the target tuple
+# but the OE freeform one (core2-foo-bar-linux) doesn't work with that.
+
+GRUBPLATFORM_arm = "uboot"
+GRUBPLATFORM_aarch64 = "efi"
+GRUBPLATFORM ??= "pc"
 
 inherit autotools gettext texinfo
 
+EXTRA_OECONF = "--with-platform=${GRUBPLATFORM} \
+                --disable-grub-mkfont \
+                --program-prefix="" \
+                --enable-liblzma=no \
+                --enable-libzfs=no \
+                --enable-largefile \
+"
+
 PACKAGECONFIG ??= ""
 PACKAGECONFIG[grub-mount] = "--enable-grub-mount,--disable-grub-mount,fuse"
 PACKAGECONFIG[device-mapper] = "--enable-device-mapper,--disable-device-mapper,lvm2"
@@ -76,3 +70,9 @@
 # grub and grub-efi's sysroot/${datadir}/grub/grub-mkconfig_lib are
 # conflicted, remove it since no one uses it.
 SYSROOT_DIRS_BLACKLIST += "${datadir}/grub/grub-mkconfig_lib"
+
+PACKAGES =+ "${PN}-editenv"
+
+FILES_${PN}-editenv = "${bindir}/grub-editenv"
+RDEPENDS_${PN} += "${PN}-editenv"
+RDEPENDS_${PN}_class-native = ""
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub_2.00.bb b/import-layers/yocto-poky/meta/recipes-bsp/grub/grub_2.00.bb
deleted file mode 100644
index c382938..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub_2.00.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-require grub2.inc
-
-RDEPENDS_${PN} = "diffutils freetype grub-editenv"
-PR = "r1"
-
-EXTRA_OECONF = "--with-platform=pc --disable-grub-mkfont --program-prefix="" \
-                --enable-liblzma=no --enable-device-mapper=no --enable-libzfs=no \
-                --enable-largefile \
-"
-
-PACKAGES =+ "grub-editenv"
-
-FILES_grub-editenv = "${bindir}/grub-editenv"
-
-do_install_append () {
-    install -d ${D}${sysconfdir}/grub.d
-}
-
-INSANE_SKIP_${PN} = "arch"
-INSANE_SKIP_${PN}-dbg = "arch"
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub_2.02.bb b/import-layers/yocto-poky/meta/recipes-bsp/grub/grub_2.02.bb
new file mode 100644
index 0000000..3e61f6a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-bsp/grub/grub_2.02.bb
@@ -0,0 +1,17 @@
+require grub2.inc
+
+RDEPENDS_${PN} += "diffutils freetype"
+
+do_install_append () {
+    install -d ${D}${sysconfdir}/grub.d
+    # Remove build host references...
+    find "${D}" -name modinfo.sh -type f -exec \
+        sed -i \
+        -e 's,--sysroot=${STAGING_DIR_TARGET},,g' \
+        -e 's|${DEBUG_PREFIX_MAP}||g' \
+        -e 's:${RECIPE_SYSROOT_NATIVE}::g' \
+        {} +
+}
+
+INSANE_SKIP_${PN} = "arch"
+INSANE_SKIP_${PN}-dbg = "arch"
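
The do_install_append above scrubs build-host paths out of modinfo.sh so they
cannot leak into the image. A quick check from the recipe's ${D} after
do_install, where the grep patterns and module path are only illustrative
assumptions:

    grep -n 'tmp/work\|--sysroot=' usr/lib/grub/*/modinfo.sh \
        || echo "no build host references found"
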
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub_git.bb b/import-layers/yocto-poky/meta/recipes-bsp/grub/grub_git.bb
deleted file mode 100644
index 0a81e53..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/grub/grub_git.bb
+++ /dev/null
@@ -1,45 +0,0 @@
-require grub2.inc
-
-DEFAULT_PREFERENCE = "-1"
-DEFAULT_PREFERENCE_arm = "1"
-
-FILESEXTRAPATHS =. "${FILE_DIRNAME}/grub-git:"
-
-PV = "2.00+${SRCPV}"
-SRCREV = "ce95549cc54b5d6f494608a7c390dba3aab4fba7"
-SRC_URI = "git://git.savannah.gnu.org/grub.git \
-           file://0001-Disable-mfpmath-sse-as-well-when-SSE-is-disabled.patch \
-           file://autogen.sh-exclude-pc.patch \
-           file://0001-grub.d-10_linux.in-add-oe-s-kernel-name.patch \
-          "
-
-S = "${WORKDIR}/git"
-
-COMPATIBLE_HOST = '(x86_64.*|i.86.*|arm.*|aarch64.*)-(linux.*|freebsd.*)'
-COMPATIBLE_HOST_armv7a = 'null'
-COMPATIBLE_HOST_armv7ve = 'null'
-
-# configure.ac has code to set this automagically from the target tuple
-# but the OE freeform one (core2-foo-bar-linux) don't work with that.
-
-GRUBPLATFORM_arm = "uboot"
-GRUBPLATFORM_aarch64 = "efi"
-GRUBPLATFORM ??= "pc"
-
-EXTRA_OECONF = "--with-platform=${GRUBPLATFORM} --disable-grub-mkfont --program-prefix="" \
-                --enable-liblzma=no --enable-device-mapper=no --enable-libzfs=no \
-                --enable-largefile \
-"
-
-do_install_append () {
-    install -d ${D}${sysconfdir}/grub.d
-    rm -rf ${D}${libdir}/charset.alias
-}
-
-# debugedit chokes on bare metal binaries
-INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
-
-RDEPENDS_${PN} = "diffutils freetype"
-
-INSANE_SKIP_${PN} = "arch"
-INSANE_SKIP_${PN}-dbg = "arch"
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/hostap/hostap-utils.inc b/import-layers/yocto-poky/meta/recipes-bsp/hostap/hostap-utils.inc
index 140321d..6d98e3a 100644
--- a/import-layers/yocto-poky/meta/recipes-bsp/hostap/hostap-utils.inc
+++ b/import-layers/yocto-poky/meta/recipes-bsp/hostap/hostap-utils.inc
@@ -1,15 +1,14 @@
 SUMMARY = "User mode helpers for the hostap driver"
 DESCRIPTION = "The hostap driver supports Host AP mode, it allows for IEEE 802.11 \
 management functions on the host computer and allows the system to act as an access point."
-HOMEPAGE = "http://hostap.epitest.fi"
-BUGTRACKER = "http://hostap.epitest.fi/bugz/"
+HOMEPAGE = "https://w1.fi"
 LICENSE = "GPLv2"
 LIC_FILES_CHKSUM = "file://COPYING;md5=0636e73ff0215e8d672dc4c32c317bb3 \
 			file://util.c;beginline=1;endline=9;md5=d3b9280851302e5ba34e5fb717489b6d"
 SECTION = "kernel/userland"
 PR = "r4"
 
-SRC_URI = "http://hostap.epitest.fi/releases/hostap-utils-${PV}.tar.gz \
+SRC_URI = "https://w1.fi/releases/hostap-utils-${PV}.tar.gz \
            file://hostap-fw-load.patch \
            file://0001-Define-_u32-__s32-__u16-__s16-__u8-in-terms-of-c99-t.patch \
 "
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils/configure.patch b/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils/configure.patch
index 66c9f91..55edfea 100644
--- a/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils/configure.patch
+++ b/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils/configure.patch
@@ -1,46 +1,36 @@
+This patch:
+* ensures we link correctly
+* allows us to optionally pass target information to configure rather than using uname
+* selects linux as the platform in most cases we care about
+
+This is a merge of various tweaks to allow us to build pciutils, including
+work from:
+
+7/30/2010 - Qing He <qing.he@intel.com>
+1/22/2012 - Shane Wang <shane.wang@intel.com>
+Ionut Radu <ionutx.radu@intel.com>
+2017/6/15 - RP - Cleanups and merging patches
+
 Upstream-Status: Inappropriate [embedded specific]
 
-#
-# Patch managed by http://www.mn-logistik.de/unsupported/pxa250/patcher
-#
----
-
-7/30/2010 - rebased to 3.1.5 by Qing He <qing.he@intel.com>
-1/22/2012 - rebased to 3.1.9 by Shane Wang <shane.wang@intel.com>
-
-diff -r af2b10cc3c14 Makefile
---- a/Makefile	Sun Jan 22 18:36:34 2012 +0800
-+++ b/Makefile	Sun Jan 22 18:38:54 2012 +0800
-@@ -37,7 +37,6 @@
- # Commands
- INSTALL=install
- DIRINSTALL=install -d
--STRIP=-s
- CC=$(CROSS_COMPILE)gcc
- AR=$(CROSS_COMPILE)ar
- RANLIB=$(CROSS_COMPILE)ranlib
-@@ -86,7 +85,7 @@
+Index: pciutils-3.5.4/Makefile
+===================================================================
+--- pciutils-3.5.4.orig/Makefile
++++ pciutils-3.5.4/Makefile
+@@ -96,7 +95,7 @@ example: example.o lib/$(PCILIB)
  example.o: example.c $(PCIINC)
  
  %: %.o
 -	$(CC) $(LDFLAGS) $(TARGET_ARCH) $^ $(LDLIBS) -o $@
-+	$(CC) $(LDFLAGS) $(TARGET_ARCH) $^ $(LIB_LDLIBS) -o $@
++	$(CC) $(LDFLAGS) $(TARGET_ARCH) $^ $(LIB_LDLIBS) $(LDLIBS) -o $@
  
  %.8 %.7: %.man
  	M=`echo $(DATE) | sed 's/-01-/-January-/;s/-02-/-February-/;s/-03-/-March-/;s/-04-/-April-/;s/-05-/-May-/;s/-06-/-June-/;s/-07-/-July-/;s/-08-/-August-/;s/-09-/-September-/;s/-10-/-October-/;s/-11-/-November-/;s/-12-/-December-/;s/\(.*\)-\(.*\)-\(.*\)/\3 \2 \1/'` ; sed <$< >$@ "s/@TODAY@/$$M/;s/@VERSION@/pciutils-$(VERSION)/;s#@IDSDIR@#$(IDSDIR)#"
-@@ -101,7 +100,7 @@
- install: all
- # -c is ignored on Linux, but required on FreeBSD
- 	$(DIRINSTALL) -m 755 $(DESTDIR)$(SBINDIR) $(DESTDIR)$(IDSDIR) $(DESTDIR)$(MANDIR)/man8 $(DESTDIR)$(MANDIR)/man7
--	$(INSTALL) -c -m 755 $(STRIP) lspci setpci $(DESTDIR)$(SBINDIR)
-+	$(INSTALL) -c -m 755 lspci setpci $(DESTDIR)$(SBINDIR)
- 	$(INSTALL) -c -m 755 update-pciids $(DESTDIR)$(SBINDIR)
- 	$(INSTALL) -c -m 644 $(PCI_IDS) $(DESTDIR)$(IDSDIR)
- 	$(INSTALL) -c -m 644 lspci.8 setpci.8 update-pciids.8 $(DESTDIR)$(MANDIR)/man8
-diff -r af2b10cc3c14 lib/configure
---- a/lib/configure	Sun Jan 22 18:36:34 2012 +0800
-+++ b/lib/configure	Sun Jan 22 18:38:54 2012 +0800
-@@ -14,6 +14,10 @@
+Index: pciutils-3.5.4/lib/configure
+===================================================================
+--- pciutils-3.5.4.orig/lib/configure
++++ pciutils-3.5.4/lib/configure
+@@ -14,6 +14,10 @@ echo_n() {
  	fi
  }
  
@@ -51,7 +41,7 @@
  if [ -z "$VERSION" -o -z "$IDSDIR" ] ; then
  	echo >&2 "Please run the configure script from the top-level Makefile"
  	exit 1
-@@ -21,8 +25,8 @@
+@@ -21,8 +25,8 @@ fi
  
  echo_n "Configuring libpci for your system..."
  if [ -z "$HOST" ] ; then
@@ -62,7 +52,7 @@
  	realsys="$sys"
  	if [ "$sys" = "AIX" -a -x /usr/bin/oslevel -a -x /usr/sbin/lsattr ]
  	then
-@@ -30,7 +34,7 @@
+@@ -30,7 +34,7 @@ if [ -z "$HOST" ] ; then
  		proc=`/usr/sbin/lsdev -C -c processor -S available -F name | head -1`
  		cpu=`/usr/sbin/lsattr -F value -l $proc -a type | sed 's/_.*//'`
  	else
@@ -71,7 +61,7 @@
  	fi
  	if [ "$sys" = "GNU/kFreeBSD" -o "$sys" = "DragonFly" ]
  	then
-@@ -40,7 +44,7 @@
+@@ -40,7 +44,7 @@ if [ -z "$HOST" ] ; then
  	then
  		sys=cygwin
  	fi
@@ -80,24 +70,11 @@
  fi
  [ -n "$RELEASE" ] && rel="${RELEASE}"
  # CAVEAT: tr on Solaris is a bit weird and the extra [] is otherwise harmless.
-@@ -49,6 +53,21 @@
+@@ -49,6 +53,8 @@ cpu=`echo $host | sed 's/^\([^-]*\)-\([^
  sys=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'`
  echo " $host $rel $cpu $sys"
  
-+if [ "$host" = "linux--gnueabi" ]
-+then
-+	sys=linux
-+fi
-+
-+if [ "$host" = "linux--uclibc" ]
-+then
-+	sys=linux
-+fi
-+
-+if [ "$host" = "linux--uclibceabi" ]
-+then
-+	sys=linux
-+fi
++{ echo "$host" | grep linux; } && sys=linux
 +
  c=config.h
  m=config.mk
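
The single grep-based check that replaces the three hard-coded host cases in
lib/configure treats any tuple containing "linux" as a Linux system. A minimal
sketch of how it classifies an OE-style tuple (the tuple itself is an assumed
example):

    host="arm-poky-linux-gnueabi"
    sys=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'`   # -> linux-gnueabi
    { echo "$host" | grep linux; } && sys=linux
    echo "$sys"                                                   # -> linux
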
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils/guess-fix.patch b/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils/guess-fix.patch
deleted file mode 100644
index 540b4a0..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils/guess-fix.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-the original guess algorithm is broken for many archs
-for example, the following two would break:
-	arm-linux-gnueabi  --> sys=gnueabi
-	x86_64-unknown-pc-linux-gnu  --> sys = pc-linux-gnu
-
-use a simpler scheme here and hope it works for all the cases
-
-Upstream-Status: Pending
-
-7/30/2010 - created by Qing He <qing.he@intel.com>
-
-diff --git a/lib/configure b/lib/configure
-index 4318b05..84f6acb 100755
---- a/lib/configure
-+++ b/lib/configure
-@@ -53,20 +53,7 @@ cpu=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\1/'`
- sys=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'`
- echo " $host $rel $cpu $sys"
- 
--if [ "$host" = "linux--gnueabi" ]
--then
--	sys=linux
--fi
--
--if [ "$host" = "linux--uclibc" ]
--then
--	sys=linux
--fi
--
--if [ "$host" = "linux--uclibceabi" ]
--then
--	sys=linux
--fi
-+{ echo "$host" | grep linux; } && sys=linux
- 
- c=config.h
- m=config.mk
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils/makefile.patch b/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils/makefile.patch
deleted file mode 100644
index c3fbc6f..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils/makefile.patch
+++ /dev/null
@@ -1,26 +0,0 @@
-Upstream-Status: Inappropriate [configuration]
-
-Signed-off-by: Ionut Radu <ionutx.radu@intel.com>
-
-Index: pciutils-3.2.0/Makefile
-===================================================================
---- pciutils-3.2.0.orig/Makefile
-+++ pciutils-3.2.0/Makefile
-@@ -35,7 +35,7 @@ SHAREDIR=$(PREFIX)/share
- IDSDIR=$(SHAREDIR)
- MANDIR:=$(shell if [ -d $(PREFIX)/share/man ] ; then echo $(PREFIX)/share/man ; else echo $(PREFIX)/man ; fi)
- INCDIR=$(PREFIX)/include
--LIBDIR=$(PREFIX)/lib
-+LIBDIR=$(libdir)
- PKGCFDIR=$(LIBDIR)/pkgconfig
- 
- # Commands
-@@ -94,7 +94,7 @@ example: example.o lib/$(PCILIB_DEV)
- example.o: example.c $(PCIINC)
- 
- %: %.o
--	$(CC) $(LDFLAGS) $(TARGET_ARCH) $^ $(LIB_LDLIBS) -o $@
-+	$(CC) $(LDFLAGS) $(TARGET_ARCH) $^ $(LIB_LDLIBS) $(LDLIBS) -o $@
- 
- %.8 %.7: %.man
- 	M=`echo $(DATE) | sed 's/-01-/-January-/;s/-02-/-February-/;s/-03-/-March-/;s/-04-/-April-/;s/-05-/-May-/;s/-06-/-June-/;s/-07-/-July-/;s/-08-/-August-/;s/-09-/-September-/;s/-10-/-October-/;s/-11-/-November-/;s/-12-/-December-/;s/\(.*\)-\(.*\)-\(.*\)/\3 \2 \1/'` ; sed <$< >$@ "s/@TODAY@/$$M/;s/@VERSION@/pciutils-$(VERSION)/;s#@IDSDIR@#$(IDSDIR)#"
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils_3.5.2.bb b/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils_3.5.2.bb
deleted file mode 100644
index 9a7297e..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils_3.5.2.bb
+++ /dev/null
@@ -1,60 +0,0 @@
-SUMMARY = "PCI utilities"
-DESCRIPTION = 'The PCI Utilities package contains a library for portable access \
-to PCI bus configuration space and several utilities based on this library.'
-HOMEPAGE = "http://atrey.karlin.mff.cuni.cz/~mj/pciutils.shtml"
-SECTION = "console/utils"
-
-LICENSE = "GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
-DEPENDS = "zlib kmod"
-
-SRC_URI = "${KERNELORG_MIRROR}/software/utils/pciutils/pciutils-${PV}.tar.xz \
-           file://configure.patch \
-           file://guess-fix.patch \
-           file://makefile.patch"
-
-SRC_URI[md5sum] = "1bf5b068bd9f7512e8c68b060b25a1b2"
-SRC_URI[sha256sum] = "3a99141a9f40528d0a0035665a06dc37ddb1ae341658e51b50a76ecf86235efc"
-
-inherit multilib_header
-
-PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'hwdb', '', d)}"
-PACKAGECONFIG[hwdb] = "HWDB=yes,HWDB=no,udev"
-
-PCI_CONF_FLAG = "ZLIB=yes DNS=yes SHARED=yes"
-
-# see configure.patch
-do_configure () {
-	(
-	  cd lib && \
-	  # PACKAGECONFIG_CONFARGS for this recipe could only possibly contain 'HWDB=yes/no',
-	  # so we put it before ./configure
-	  ${PCI_CONF_FLAG} ${PACKAGECONFIG_CONFARGS} ./configure ${PV} ${datadir} ${TARGET_OS} ${TARGET_ARCH}
-	)
-}
-
-export PREFIX = "${prefix}"
-export SBINDIR = "${sbindir}"
-export SHAREDIR = "${datadir}"
-export MANDIR = "${mandir}"
-
-EXTRA_OEMAKE = "-e MAKEFLAGS= ${PCI_CONF_FLAG}"
-
-# The configure script breaks if the HOST variable is set
-HOST[unexport] = "1"
-
-do_install () {
-	oe_runmake DESTDIR=${D} install install-lib
-
-	install -d ${D}${bindir}
-	ln -s ../sbin/lspci ${D}${bindir}/lspci
-
-	oe_multilib_header pci/config.h
-}
-
-PACKAGES =+ "${PN}-ids libpci"
-FILES_${PN}-ids = "${datadir}/pci.ids*"
-FILES_libpci = "${libdir}/libpci.so.*"
-SUMMARY_${PN}-ids = "PCI utilities - device ID database"
-DESCRIPTION_${PN}-ids = "Package providing the PCI device ID database for pciutils."
-RDEPENDS_${PN} += "${PN}-ids"
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils_3.5.5.bb b/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils_3.5.5.bb
new file mode 100644
index 0000000..0051fd8
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-bsp/pciutils/pciutils_3.5.5.bb
@@ -0,0 +1,58 @@
+SUMMARY = "PCI utilities"
+DESCRIPTION = 'The PCI Utilities package contains a library for portable access \
+to PCI bus configuration space and several utilities based on this library.'
+HOMEPAGE = "http://atrey.karlin.mff.cuni.cz/~mj/pciutils.shtml"
+SECTION = "console/utils"
+
+LICENSE = "GPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
+DEPENDS = "zlib kmod"
+
+SRC_URI = "${KERNELORG_MIRROR}/software/utils/pciutils/pciutils-${PV}.tar.xz \
+           file://configure.patch"
+
+SRC_URI[md5sum] = "238d9969cc0de8b9105d972007d9d546"
+SRC_URI[sha256sum] = "1d62f8fa192f90e61c35a6fc15ff3cb9a7a792f782407acc42ef67817c5939f5"
+
+inherit multilib_header pkgconfig
+
+PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'hwdb', '', d)}"
+PACKAGECONFIG[hwdb] = "HWDB=yes,HWDB=no,udev"
+
+PCI_CONF_FLAG = "ZLIB=yes DNS=yes SHARED=yes STRIP= LIBDIR=${libdir}"
+
+# see configure.patch
+do_configure () {
+	(
+	  cd lib && \
+	  # PACKAGECONFIG_CONFARGS for this recipe could only possibly contain 'HWDB=yes/no',
+	  # so we put it before ./configure
+	  ${PCI_CONF_FLAG} ${PACKAGECONFIG_CONFARGS} ./configure ${PV} ${datadir} ${TARGET_OS} ${TARGET_ARCH}
+	)
+}
+
+export PREFIX = "${prefix}"
+export SBINDIR = "${sbindir}"
+export SHAREDIR = "${datadir}"
+export MANDIR = "${mandir}"
+
+EXTRA_OEMAKE = "-e MAKEFLAGS= ${PCI_CONF_FLAG}"
+
+# The configure script breaks if the HOST variable is set
+HOST[unexport] = "1"
+
+do_install () {
+	oe_runmake DESTDIR=${D} install install-lib
+
+	install -d ${D}${bindir}
+	ln -s ../sbin/lspci ${D}${bindir}/lspci
+
+	oe_multilib_header pci/config.h
+}
+
+PACKAGES =+ "${PN}-ids libpci"
+FILES_${PN}-ids = "${datadir}/pci.ids*"
+FILES_libpci = "${libdir}/libpci.so.*"
+SUMMARY_${PN}-ids = "PCI utilities - device ID database"
+DESCRIPTION_${PN}-ids = "Package providing the PCI device ID database for pciutils."
+RDEPENDS_${PN} += "${PN}-ids"
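
Since pciutils does not use autoconf, do_configure above passes the make-style
flags as command-line environment assignments and calls lib/configure directly.
With assumed example values substituted for PV, datadir, libdir, TARGET_OS and
TARGET_ARCH, and with the hwdb PACKAGECONFIG off, the invocation is roughly:

    # values below are assumed; BitBake expands them from the recipe
    cd lib && \
      ZLIB=yes DNS=yes SHARED=yes STRIP= LIBDIR=/usr/lib HWDB=no \
      ./configure 3.5.5 /usr/share linux arm
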
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/systemd-boot/files/0001-use-lnr-wrapper-instead-of-looking-for-relative-opti.patch b/import-layers/yocto-poky/meta/recipes-bsp/systemd-boot/files/0001-use-lnr-wrapper-instead-of-looking-for-relative-opti.patch
deleted file mode 100644
index bc92db7..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/systemd-boot/files/0001-use-lnr-wrapper-instead-of-looking-for-relative-opti.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-From a3482c91642cf568b3ac27fa6c0cb3c6b30669b7 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 9 Nov 2016 19:32:14 -0800
-Subject: [PATCH 07/19] use lnr wrapper instead of looking for --relative
- option for ln
-
-Upstream-Status: Inappropriate [OE-Specific]
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- Makefile.am  | 2 +-
- configure.ac | 2 --
- 2 files changed, 1 insertion(+), 3 deletions(-)
-
-Index: git/Makefile.am
-===================================================================
---- git.orig/Makefile.am
-+++ git/Makefile.am
-@@ -320,7 +320,7 @@ define install-relative-aliases
- 	while [ -n "$$1" ]; do \
- 		$(MKDIR_P) `dirname $(DESTDIR)$$dir/$$2` && \
- 		rm -f $(DESTDIR)$$dir/$$2 && \
--		$(LN_S) --relative $(DESTDIR)$$1 $(DESTDIR)$$dir/$$2 && \
-+		lnr $(DESTDIR)$$1 $(DESTDIR)$$dir/$$2 && \
- 		shift 2 || exit $$?; \
- 	done
- endef
-Index: git/configure.ac
-===================================================================
---- git.orig/configure.ac
-+++ git/configure.ac
-@@ -110,8 +110,6 @@ AC_PATH_PROG([SULOGIN], [sulogin], [/usr
- AC_PATH_PROG([MOUNT_PATH], [mount], [/usr/bin/mount], [$PATH:/usr/sbin:/sbin])
- AC_PATH_PROG([UMOUNT_PATH], [umount], [/usr/bin/umount], [$PATH:/usr/sbin:/sbin])
- 
--AS_IF([! ln --relative --help > /dev/null 2>&1], [AC_MSG_ERROR([*** ln doesn't support --relative ***])])
--
- M4_DEFINES=
- 
- AC_CHECK_TOOL(OBJCOPY, objcopy)
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/systemd-boot/systemd-boot_232.bb b/import-layers/yocto-poky/meta/recipes-bsp/systemd-boot/systemd-boot_232.bb
deleted file mode 100644
index 0471ce2..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/systemd-boot/systemd-boot_232.bb
+++ /dev/null
@@ -1,39 +0,0 @@
-require recipes-core/systemd/systemd.inc
-
-DEPENDS = "intltool-native libcap util-linux gnu-efi gperf-native"
-
-SRC_URI += "file://0001-use-lnr-wrapper-instead-of-looking-for-relative-opti.patch"
-
-inherit autotools pkgconfig gettext
-inherit deploy
-
-# Man pages are packaged through the main systemd recipe
-EXTRA_OECONF = " --enable-gnuefi \
-                 --with-efi-includedir=${STAGING_INCDIR} \
-                 --with-efi-ldsdir=${STAGING_LIBDIR} \
-                 --with-efi-libdir=${STAGING_LIBDIR} \
-                 --disable-manpages \
-               "
-
-# Imported from the old gummiboot recipe
-TUNE_CCARGS_remove = "-mfpmath=sse"
-COMPATIBLE_HOST = "(x86_64.*|i.86.*)-linux"
-
-do_compile() {
-	SYSTEMD_BOOT_EFI_ARCH="ia32"
-	if [ "${TARGET_ARCH}" = "x86_64" ]; then
-		SYSTEMD_BOOT_EFI_ARCH="x64"
-	fi
-
-	oe_runmake systemd-boot${SYSTEMD_BOOT_EFI_ARCH}.efi
-}
-
-do_install() {
-	# Bypass systemd installation with a NOP
-	:
-}
-
-do_deploy () {
-	install ${B}/systemd-boot*.efi ${DEPLOYDIR}
-}
-addtask deploy before do_build after do_compile
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/files/10m50-update-device-tree.patch b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/files/10m50-update-device-tree.patch
new file mode 100644
index 0000000..841953c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/files/10m50-update-device-tree.patch
@@ -0,0 +1,28 @@
+The Nios II MAX10 10m50 board requires an update to
+its device tree to enable the CPU driver during
+u-boot pre-relocation. This patch tags the CPU node
+with the dm-pre-reloc flag.
+
+Upstream-Status: Submitted
+
+Signed-off-by: Gan, Yau Wai <yau.wai.gan@intel.com>
+
+---
+ arch/nios2/dts/10m50_devboard.dts | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/arch/nios2/dts/10m50_devboard.dts b/arch/nios2/dts/10m50_devboard.dts
+index 05eac30..461ae68 100644
+--- a/arch/nios2/dts/10m50_devboard.dts
++++ b/arch/nios2/dts/10m50_devboard.dts
+@@ -19,6 +19,7 @@
+ 		#size-cells = <0>;
+ 
+ 		cpu: cpu@0 {
++			u-boot,dm-pre-reloc;
+ 			device_type = "cpu";
+ 			compatible = "altr,nios2-1.1";
+ 			reg = <0x00000000>;
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/files/MPC8315ERDB-enable-DHCP.patch b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/files/MPC8315ERDB-enable-DHCP.patch
new file mode 100644
index 0000000..cea52b7
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/files/MPC8315ERDB-enable-DHCP.patch
@@ -0,0 +1,19 @@
+Enable DHCP client functionality for the Yocto reference
+hardware MPC8315E-RDB.
+
+Upstream-Status: Pending
+
+Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
+
+diff --git a/configs/MPC8315ERDB_defconfig b/configs/MPC8315ERDB_defconfig
+index 4e2b705..b02ab1f 100644
+--- a/configs/MPC8315ERDB_defconfig
++++ b/configs/MPC8315ERDB_defconfig
+@@ -9,6 +9,7 @@ CONFIG_HUSH_PARSER=y
+ CONFIG_CMD_I2C=y
+ CONFIG_CMD_USB=y
+ # CONFIG_CMD_SETEXPR is not set
++CONFIG_CMD_DHCP=y
+ CONFIG_CMD_MII=y
+ CONFIG_CMD_PING=y
+ CONFIG_CMD_EXT2=y
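
With CONFIG_CMD_DHCP enabled, the board's U-Boot shell gains a dhcp command
that obtains a lease and can fetch a boot image over TFTP. A sketch of a
typical session, where the load address, file name and server setup are
assumptions:

    => dhcp ${loadaddr} uImage
    => bootm ${loadaddr}
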
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/files/default-gcc.patch b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/files/default-gcc.patch
deleted file mode 100644
index 04184df..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/files/default-gcc.patch
+++ /dev/null
@@ -1,39 +0,0 @@
-OE needs to be able to change the default compiler. If we pass in HOSTCC
-through the make command, it overwrites not only this setting but also the 
-setting in tools/Makefile wrapped in ifneq ($(CROSS_BUILD_TOOLS),) which 
-breaks the build.
-
-We therefore use override to ensure the value of HOSTCC is overwritten when
-needed.
-
-RP: Updated the patch to the version being submitted to upstream u-boot
-
-Upstream-Status: Submitted [emailed to Masahiro Yamada for discussion]
-RP 2017/3/11
-
-Index: git/tools/Makefile
-===================================================================
---- git.orig/tools/Makefile
-+++ git/tools/Makefile
-@@ -262,7 +262,7 @@ $(LICENSE_H): $(obj)/bin2header $(srctre
- subdir- += env
- 
- ifneq ($(CROSS_BUILD_TOOLS),)
--HOSTCC = $(CC)
-+override HOSTCC = $(CC)
- 
- quiet_cmd_crosstools_strip = STRIP   $^
-       cmd_crosstools_strip = $(STRIP) $^; touch $@
-Index: git/tools/env/Makefile
-===================================================================
---- git.orig/tools/env/Makefile
-+++ git/tools/env/Makefile
-@@ -8,7 +8,7 @@
- # fw_printenv is supposed to run on the target system, which means it should be
- # built with cross tools. Although it may look weird, we only replace "HOSTCC"
- # with "CC" here for the maximum code reuse of scripts/Makefile.host.
--HOSTCC = $(CC)
-+override HOSTCC = $(CC)
- 
- # Compile for a hosted environment on the target
- HOST_EXTRACFLAGS  = $(patsubst -I%,-idirafter%, $(filter -I%, $(UBOOTINCLUDE))) \
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-common_2017.01.inc b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-common_2017.01.inc
deleted file mode 100644
index df24c85..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-common_2017.01.inc
+++ /dev/null
@@ -1,14 +0,0 @@
-HOMEPAGE = "http://www.denx.de/wiki/U-Boot/WebHome"
-SECTION = "bootloaders"
-
-LICENSE = "GPLv2+"
-LIC_FILES_CHKSUM = "file://Licenses/README;md5=a2c678cfd4a4d97135585cad908541c6"
-PE = "1"
-
-# We use the revision in order to avoid having to fetch it from the
-# repo during parse
-SRCREV = "a705ebc81b7f91bbd0ef7c634284208342901149"
-
-SRC_URI = "git://git.denx.de/u-boot.git"
-
-S = "${WORKDIR}/git"
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-common_2017.09.inc b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-common_2017.09.inc
new file mode 100644
index 0000000..02e5124
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-common_2017.09.inc
@@ -0,0 +1,17 @@
+HOMEPAGE = "http://www.denx.de/wiki/U-Boot/WebHome"
+SECTION = "bootloaders"
+
+LICENSE = "GPLv2+"
+LIC_FILES_CHKSUM = "file://Licenses/README;md5=a2c678cfd4a4d97135585cad908541c6"
+PE = "1"
+
+# We use the revision in order to avoid having to fetch it from the
+# repo during parse
+SRCREV = "c98ac3487e413c71e5d36322ef3324b21c6f60f9"
+
+SRC_URI = "git://git.denx.de/u-boot.git \
+    file://MPC8315ERDB-enable-DHCP.patch \
+    file://10m50-update-device-tree.patch \
+"
+
+S = "${WORKDIR}/git"
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-fw-utils_2017.01.bb b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-fw-utils_2017.01.bb
deleted file mode 100644
index 2631499..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-fw-utils_2017.01.bb
+++ /dev/null
@@ -1,36 +0,0 @@
-require u-boot-common_${PV}.inc
-
-SRC_URI += "file://default-gcc.patch"
-
-SUMMARY = "U-Boot bootloader fw_printenv/setenv utilities"
-DEPENDS = "mtd-utils"
-
-INSANE_SKIP_${PN} = "already-stripped"
-EXTRA_OEMAKE_class-target = 'CROSS_COMPILE=${TARGET_PREFIX} CC="${CC} ${CFLAGS} ${LDFLAGS}" HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" V=1'
-EXTRA_OEMAKE_class-cross = 'ARCH=${TARGET_ARCH} CC="${CC} ${CFLAGS} ${LDFLAGS}" V=1'
-
-inherit uboot-config
-
-do_compile () {
-	oe_runmake ${UBOOT_MACHINE}
-	oe_runmake env
-}
-
-do_install () {
-	install -d ${D}${base_sbindir}
-	install -d ${D}${sysconfdir}
-	install -m 755 ${S}/tools/env/fw_printenv ${D}${base_sbindir}/fw_printenv
-	install -m 755 ${S}/tools/env/fw_printenv ${D}${base_sbindir}/fw_setenv
-	install -m 0644 ${S}/tools/env/fw_env.config ${D}${sysconfdir}/fw_env.config
-}
-
-do_install_class-cross () {
-	install -d ${D}${bindir_cross}
-	install -m 755 ${S}/tools/env/fw_printenv ${D}${bindir_cross}/fw_printenv
-	install -m 755 ${S}/tools/env/fw_printenv ${D}${bindir_cross}/fw_setenv
-}
-
-SYSROOT_DIRS_append_class-cross = " ${bindir_cross}"
-
-PACKAGE_ARCH = "${MACHINE_ARCH}"
-BBCLASSEXTEND = "cross"
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-fw-utils_2017.09.bb b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-fw-utils_2017.09.bb
new file mode 100644
index 0000000..02887a1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-fw-utils_2017.09.bb
@@ -0,0 +1,34 @@
+require u-boot-common_${PV}.inc
+
+SUMMARY = "U-Boot bootloader fw_printenv/setenv utilities"
+DEPENDS = "mtd-utils"
+
+INSANE_SKIP_${PN} = "already-stripped"
+EXTRA_OEMAKE_class-target = 'CROSS_COMPILE=${TARGET_PREFIX} CC="${CC} ${CFLAGS} ${LDFLAGS}" HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" V=1'
+EXTRA_OEMAKE_class-cross = 'HOSTCC="${CC} ${CFLAGS} ${LDFLAGS}" V=1'
+
+inherit uboot-config
+
+do_compile () {
+	oe_runmake ${UBOOT_MACHINE}
+	oe_runmake envtools
+}
+
+do_install () {
+	install -d ${D}${base_sbindir}
+	install -d ${D}${sysconfdir}
+	install -m 755 ${S}/tools/env/fw_printenv ${D}${base_sbindir}/fw_printenv
+	install -m 755 ${S}/tools/env/fw_printenv ${D}${base_sbindir}/fw_setenv
+	install -m 0644 ${S}/tools/env/fw_env.config ${D}${sysconfdir}/fw_env.config
+}
+
+do_install_class-cross () {
+	install -d ${D}${bindir_cross}
+	install -m 755 ${S}/tools/env/fw_printenv ${D}${bindir_cross}/fw_printenv
+	install -m 755 ${S}/tools/env/fw_printenv ${D}${bindir_cross}/fw_setenv
+}
+
+SYSROOT_DIRS_append_class-cross = " ${bindir_cross}"
+
+PACKAGE_ARCH = "${MACHINE_ARCH}"
+BBCLASSEXTEND = "cross"
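
The utilities packaged here read the U-Boot environment described by
/etc/fw_env.config, whose device, offset and size fields must match the
board's actual flash layout; the values below are only assumptions:

    # /etc/fw_env.config: <device>  <env offset>  <env size>
    /dev/mtd1                0x0000        0x20000

    # then, on the target:
    fw_printenv bootcmd
    fw_setenv bootdelay 3
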
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-mkimage_2017.01.bb b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-mkimage_2017.01.bb
deleted file mode 100644
index de999e7..0000000
--- a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-mkimage_2017.01.bb
+++ /dev/null
@@ -1,29 +0,0 @@
-require u-boot-common_${PV}.inc
-
-SRC_URI += "file://default-gcc.patch"
-
-SUMMARY = "U-Boot bootloader image creation tool"
-DEPENDS = "openssl"
-
-EXTRA_OEMAKE_class-target = 'CROSS_COMPILE="${TARGET_PREFIX}" CC="${CC} ${CFLAGS} ${LDFLAGS}" HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" STRIP=true V=1'
-EXTRA_OEMAKE_class-native = 'CC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" STRIP=true V=1'
-EXTRA_OEMAKE_class-nativesdk = 'CROSS_COMPILE="${HOST_PREFIX}" CC="${CC} ${CFLAGS} ${LDFLAGS}" HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" STRIP=true V=1'
-
-do_compile () {
-	oe_runmake sandbox_defconfig
-
-	# Disable CONFIG_CMD_LICENSE, license.h is not used by tools and
-	# generating it requires bin2header tool, which for target build
-	# is built with target tools and thus cannot be executed on host.
-	sed -i "s/CONFIG_CMD_LICENSE=.*/# CONFIG_CMD_LICENSE is not set/" .config
-
-	oe_runmake cross_tools NO_SDL=1
-}
-
-do_install () {
-	install -d ${D}${bindir}
-	install -m 0755 tools/mkimage ${D}${bindir}/uboot-mkimage
-	ln -sf uboot-mkimage ${D}${bindir}/mkimage
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-mkimage_2017.09.bb b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-mkimage_2017.09.bb
new file mode 100644
index 0000000..f1fc564
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot-mkimage_2017.09.bb
@@ -0,0 +1,27 @@
+require u-boot-common_${PV}.inc
+
+SUMMARY = "U-Boot bootloader image creation tool"
+DEPENDS = "openssl"
+
+EXTRA_OEMAKE_class-target = 'CROSS_COMPILE="${TARGET_PREFIX}" CC="${CC} ${CFLAGS} ${LDFLAGS}" HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" STRIP=true V=1'
+EXTRA_OEMAKE_class-native = 'CC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" STRIP=true V=1'
+EXTRA_OEMAKE_class-nativesdk = 'CROSS_COMPILE="${HOST_PREFIX}" CC="${CC} ${CFLAGS} ${LDFLAGS}" HOSTCC="${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS}" STRIP=true V=1'
+
+do_compile () {
+	oe_runmake sandbox_defconfig
+
+	# Disable CONFIG_CMD_LICENSE: license.h is not used by the tools and
+	# generating it requires the bin2header tool, which for a target build
+	# is built with target tools and thus cannot be executed on the host.
+	sed -i "s/CONFIG_CMD_LICENSE=.*/# CONFIG_CMD_LICENSE is not set/" .config
+
+	oe_runmake cross_tools NO_SDL=1
+}
+
+do_install () {
+	install -d ${D}${bindir}
+	install -m 0755 tools/mkimage ${D}${bindir}/uboot-mkimage
+	ln -sf uboot-mkimage ${D}${bindir}/mkimage
+}
+
+BBCLASSEXTEND = "native nativesdk"
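
The tool installed above as uboot-mkimage (with an mkimage symlink) wraps a
kernel or script in the U-Boot image header. A typical invocation, in which
the addresses and file names are assumptions:

    mkimage -A arm -O linux -T kernel -C gzip \
            -a 0x80008000 -e 0x80008000 \
            -n "Linux kernel" -d zImage.gz uImage
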
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot.inc b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot.inc
index aa21c0e..c2bcf99 100644
--- a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot.inc
+++ b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot.inc
@@ -304,4 +304,4 @@
     fi
 }
 
-addtask deploy before do_build after do_install
+addtask deploy before do_build after do_compile
diff --git a/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot_2017.01.bb b/import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot_2017.09.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot_2017.01.bb
rename to import-layers/yocto-poky/meta/recipes-bsp/u-boot/u-boot_2017.09.bb
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/avahi/avahi.inc b/import-layers/yocto-poky/meta/recipes-connectivity/avahi/avahi.inc
index faa8741..7814464 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/avahi/avahi.inc
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/avahi/avahi.inc
@@ -63,10 +63,6 @@
 EXTRA_OECONF_SYSVINIT = "${@bb.utils.contains('DISTRO_FEATURES','sysvinit','--with-distro=debian','--with-distro=none',d)}"
 EXTRA_OECONF_SYSTEMD = "${@bb.utils.contains('DISTRO_FEATURES','systemd','--with-systemdsystemunitdir=${systemd_unitdir}/system/','--without-systemdsystemunitdir',d)}"
 
-
-LDFLAGS_append_libc-uclibc = " -lintl"
-LDFLAGS_append_uclinux-uclibc = " -lintl"
-
 do_configure_prepend() {
     sed 's:AM_CHECK_PYMOD:echo "no pymod" #AM_CHECK_PYMOD:g' -i ${S}/configure.ac
 
@@ -111,7 +107,6 @@
 
 RDEPENDS_${PN}-dev = "avahi-daemon (= ${EXTENDPKGV}) libavahi-core (= ${EXTENDPKGV}) libavahi-client (= ${EXTENDPKGV})"
 
-# uclibc has no nss
 RRECOMMENDS_avahi-daemon_append_libc-glibc = " libnss-mdns"
 RRECOMMENDS_${PN}_append_libc-glibc = " libnss-mdns"
 
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/avahi/files/0001-configure.ac-install-GtkBuilder-interface-files-for-.patch b/import-layers/yocto-poky/meta/recipes-connectivity/avahi/files/0001-configure.ac-install-GtkBuilder-interface-files-for-.patch
index 8ccef08..942607a 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/avahi/files/0001-configure.ac-install-GtkBuilder-interface-files-for-.patch
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/avahi/files/0001-configure.ac-install-GtkBuilder-interface-files-for-.patch
@@ -1,17 +1,18 @@
-From a59f13fab31a6e25bb03b2c2bc3aea576f857b6c Mon Sep 17 00:00:00 2001
+From 6ff255eff4fea6350b5e0462fee176fadc26fc1c Mon Sep 17 00:00:00 2001
 From: Jussi Kukkonen <jussi.kukkonen@intel.com>
 Date: Sun, 12 Jun 2016 18:32:49 +0300
 Subject: [PATCH] configure.ac: install GtkBuilder interface files for GTK+3
  too
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://github.com/lathiat/avahi/pull/130]
 Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Signed-off-by: Dengke Du <dengke.du@windriver.com>
 ---
  configure.ac | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/configure.ac b/configure.ac
-index aebb716..48bdf63 100644
+index 87a9a17..9860dcc 100644
 --- a/configure.ac
 +++ b/configure.ac
 @@ -965,7 +965,7 @@ AC_SUBST(avahi_socket)
@@ -24,5 +25,5 @@
  	AC_SUBST(interfacesdir)
  fi
 -- 
-2.1.4
+2.8.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/0001-build-use-pkg-config-to-find-libxml2.patch b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/0001-build-use-pkg-config-to-find-libxml2.patch
index 805cbb3..1e23c0f 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/0001-build-use-pkg-config-to-find-libxml2.patch
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/0001-build-use-pkg-config-to-find-libxml2.patch
@@ -7,15 +7,19 @@
 Update context for version 9.10.3-P2.
 
 Signed-off-by: Kai Kang <kai.kang@windriver.com>
+
+Update context for version 9.10.5-P3.
+
+Signed-off-by: Kai Kang <kai.kang@windriver.com>
 ---
  configure.in | 23 +++--------------------
  1 file changed, 3 insertions(+), 20 deletions(-)
 
 diff --git a/configure.in b/configure.in
-index 0db826d..75819eb 100644
+index 4da73a4..6f2a754 100644
 --- a/configure.in
 +++ b/configure.in
-@@ -2107,26 +2107,9 @@ case "$use_libxml2" in
+@@ -2282,26 +2282,9 @@ case "$use_libxml2" in
  		DST_LIBXML2_INC=""
  		;;
  	auto|yes)
@@ -25,7 +29,7 @@
 -			libxml2_cflags=`xml2-config --cflags`
 -			;;
 -		*)
--			if test "$use_libxml2" = "yes" ; then
+-			if test "yes" = "$use_libxml2" ; then
 -				AC_MSG_RESULT(no)
 -				AC_MSG_ERROR(required libxml2 version not available)
 -			else
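
The deleted hunk above drops the xml2-config probe; as the patch title says,
libxml2 is located with pkg-config instead, which under OE resolves against
the recipe sysroot rather than the build host. The contrast, roughly (the
module name is the one libxml2 ships):

    # old: build-host tool, reports host paths
    xml2-config --cflags
    # new: honours PKG_CONFIG_SYSROOT_DIR / the recipe sysroot
    pkg-config --cflags --libs libxml-2.0
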
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-1285.patch b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-1285.patch
deleted file mode 100644
index 2149bd1..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-1285.patch
+++ /dev/null
@@ -1,154 +0,0 @@
-From 70037e040e587329cec82123e12b9f4f7c945f67 Mon Sep 17 00:00:00 2001
-From: Mark Andrews <marka@isc.org>
-Date: Thu, 18 Feb 2016 12:11:27 +1100
-Subject: [PATCH] 4318.   [security]      Malformed control messages can
- trigger assertions                         in named and rndc. (CVE-2016-1285)
- [RT #41666]
-
-(cherry picked from commit a2b15b3305acd52179e6f3dc7d073b07fbc40b8e)
-
-CVE: CVE-2016-1285
-Upstream-Status: Backport
-[Removed doc/arm/notes.xml changes from upstream patch]
-
-Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
----
- CHANGES                 |  3 +++
- bin/named/control.c     |  2 +-
- bin/named/controlconf.c |  2 +-
- bin/rndc/rndc.c         |  8 ++++----
- doc/arm/notes.xml       | 11 +++++++++++
- lib/isccc/cc.c          | 14 +++++++-------
- 6 files changed, 27 insertions(+), 13 deletions(-)
-
-diff --git a/CHANGES b/CHANGES
-index b9bd9ef..2c727d5 100644
---- a/CHANGES
-+++ b/CHANGES
-@@ -1,3 +1,6 @@
-+4318.	[security]	Malformed control messages can trigger assertions
-+			in named and rndc. (CVE-2016-1285) [RT #41666]
-+
- 	--- 9.10.3-P3 released ---
- 
- 4288.	[bug]		Fixed a regression in resolver.c:possibly_mark()
-diff --git a/bin/named/control.c b/bin/named/control.c
-index 8554335..81340ca 100644
---- a/bin/named/control.c
-+++ b/bin/named/control.c
-@@ -69,7 +69,7 @@ ns_control_docommand(isccc_sexpr_t *message, isc_buffer_t *text) {
- #endif
- 
- 	data = isccc_alist_lookup(message, "_data");
--	if (data == NULL) {
-+	if (!isccc_alist_alistp(data)) {
- 		/*
- 		 * No data section.
- 		 */
-diff --git a/bin/named/controlconf.c b/bin/named/controlconf.c
-index 765afdd..a39ab8b 100644
---- a/bin/named/controlconf.c
-+++ b/bin/named/controlconf.c
-@@ -402,7 +402,7 @@ control_recvmessage(isc_task_t *task, isc_event_t *event) {
- 	 * Limit exposure to replay attacks.
- 	 */
- 	_ctrl = isccc_alist_lookup(request, "_ctrl");
--	if (_ctrl == NULL) {
-+	if (!isccc_alist_alistp(_ctrl)) {
- 		log_invalid(&conn->ccmsg, ISC_R_FAILURE);
- 		goto cleanup_request;
- 	}
-diff --git a/bin/rndc/rndc.c b/bin/rndc/rndc.c
-index cb17050..b6e05c8 100644
---- a/bin/rndc/rndc.c
-+++ b/bin/rndc/rndc.c
-@@ -255,8 +255,8 @@ rndc_recvdone(isc_task_t *task, isc_event_t *event) {
- 	   isccc_cc_fromwire(&source, &response, algorithm, &secret));
- 
- 	data = isccc_alist_lookup(response, "_data");
--	if (data == NULL)
--		fatal("no data section in response");
-+	if (!isccc_alist_alistp(data))
-+		fatal("bad or missing data section in response");
- 	result = isccc_cc_lookupstring(data, "err", &errormsg);
- 	if (result == ISC_R_SUCCESS) {
- 		failed = ISC_TRUE;
-@@ -321,8 +321,8 @@ rndc_recvnonce(isc_task_t *task, isc_event_t *event) {
- 	   isccc_cc_fromwire(&source, &response, algorithm, &secret));
- 
- 	_ctrl = isccc_alist_lookup(response, "_ctrl");
--	if (_ctrl == NULL)
--		fatal("_ctrl section missing");
-+	if (!isccc_alist_alistp(_ctrl))
-+		fatal("bad or missing ctrl section in response");
- 	nonce = 0;
- 	if (isccc_cc_lookupuint32(_ctrl, "_nonce", &nonce) != ISC_R_SUCCESS)
- 		nonce = 0;
-diff --git a/lib/isccc/cc.c b/lib/isccc/cc.c
-index 47a3b74..2bb961e 100644
---- a/lib/isccc/cc.c
-+++ b/lib/isccc/cc.c
-@@ -403,13 +403,13 @@ verify(isccc_sexpr_t *alist, unsigned char *data, unsigned int length,
- 	 * Extract digest.
- 	 */
- 	_auth = isccc_alist_lookup(alist, "_auth");
--	if (_auth == NULL)
-+	if (!isccc_alist_alistp(_auth))
- 		return (ISC_R_FAILURE);
- 	if (algorithm == ISCCC_ALG_HMACMD5)
- 		hmac = isccc_alist_lookup(_auth, "hmd5");
- 	else
- 		hmac = isccc_alist_lookup(_auth, "hsha");
--	if (hmac == NULL)
-+	if (!isccc_sexpr_binaryp(hmac))
- 		return (ISC_R_FAILURE);
- 	/*
- 	 * Compute digest.
-@@ -728,7 +728,7 @@ isccc_cc_createack(isccc_sexpr_t *message, isc_boolean_t ok,
- 	REQUIRE(ackp != NULL && *ackp == NULL);
- 
- 	_ctrl = isccc_alist_lookup(message, "_ctrl");
--	if (_ctrl == NULL ||
-+	if (!isccc_alist_alistp(_ctrl) ||
- 	    isccc_cc_lookupuint32(_ctrl, "_ser", &serial) != ISC_R_SUCCESS ||
- 	    isccc_cc_lookupuint32(_ctrl, "_tim", &t) != ISC_R_SUCCESS)
- 		return (ISC_R_FAILURE);
-@@ -773,7 +773,7 @@ isccc_cc_isack(isccc_sexpr_t *message)
- 	isccc_sexpr_t *_ctrl;
- 
- 	_ctrl = isccc_alist_lookup(message, "_ctrl");
--	if (_ctrl == NULL)
-+	if (!isccc_alist_alistp(_ctrl))
- 		return (ISC_FALSE);
- 	if (isccc_cc_lookupstring(_ctrl, "_ack", NULL) == ISC_R_SUCCESS)
- 		return (ISC_TRUE);
-@@ -786,7 +786,7 @@ isccc_cc_isreply(isccc_sexpr_t *message)
- 	isccc_sexpr_t *_ctrl;
- 
- 	_ctrl = isccc_alist_lookup(message, "_ctrl");
--	if (_ctrl == NULL)
-+	if (!isccc_alist_alistp(_ctrl))
- 		return (ISC_FALSE);
- 	if (isccc_cc_lookupstring(_ctrl, "_rpl", NULL) == ISC_R_SUCCESS)
- 		return (ISC_TRUE);
-@@ -806,7 +806,7 @@ isccc_cc_createresponse(isccc_sexpr_t *message, isccc_time_t now,
- 
- 	_ctrl = isccc_alist_lookup(message, "_ctrl");
- 	_data = isccc_alist_lookup(message, "_data");
--	if (_ctrl == NULL || _data == NULL ||
-+	if (!isccc_alist_alistp(_ctrl) || !isccc_alist_alistp(_data) ||
- 	    isccc_cc_lookupuint32(_ctrl, "_ser", &serial) != ISC_R_SUCCESS ||
- 	    isccc_cc_lookupstring(_data, "type", &type) != ISC_R_SUCCESS)
- 		return (ISC_R_FAILURE);
-@@ -995,7 +995,7 @@ isccc_cc_checkdup(isccc_symtab_t *symtab, isccc_sexpr_t *message,
- 	isccc_sexpr_t *_ctrl;
- 
- 	_ctrl = isccc_alist_lookup(message, "_ctrl");
--	if (_ctrl == NULL ||
-+	if (!isccc_alist_alistp(_ctrl) ||
- 	    isccc_cc_lookupstring(_ctrl, "_ser", &_ser) != ISC_R_SUCCESS ||
- 	    isccc_cc_lookupstring(_ctrl, "_tim", &_tim) != ISC_R_SUCCESS)
- 		return (ISC_R_FAILURE);
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-1286_1.patch b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-1286_1.patch
deleted file mode 100644
index ae5cc48..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-1286_1.patch
+++ /dev/null
@@ -1,79 +0,0 @@
-From a3d327bf1ceaaeabb20223d8de85166e940b9f12 Mon Sep 17 00:00:00 2001
-From: Mukund Sivaraman <muks@isc.org>
-Date: Mon, 22 Feb 2016 12:22:43 +0530
-Subject: [PATCH] Fix resolver assertion failure due to improper DNAME handling
- (CVE-2016-1286) (#41753)
-
-(cherry picked from commit 5995fec51cc8bb7e53804e4936e60aa1537f3673)
-
-CVE: CVE-2016-1286
-Upstream-Status: Backport
-
-[Removed doc/arm/notes.xml changes from upstream patch.]
-
-Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
----
-diff -ruN a/CHANGES b/CHANGES
---- a/CHANGES	2016-04-13 07:28:44.940873629 +0200
-+++ b/CHANGES	2016-04-13 07:38:38.923167851 +0200
-@@ -1,3 +1,7 @@
-+4319.  [security]      Fix resolver assertion failure due to improper
-+                       DNAME handling when parsing fetch reply messages.
-+                       (CVE-2016-1286) [RT #41753]
-+
- 4318.	[security]	Malformed control messages can trigger assertions
- 			in named and rndc. (CVE-2016-1285) [RT #41666]
- 
-diff -ruN a/lib/dns/resolver.c b/lib/dns/resolver.c
---- a/lib/dns/resolver.c	2016-04-13 07:28:43.088953790 +0200
-+++ b/lib/dns/resolver.c	2016-04-13 07:38:20.411968925 +0200
-@@ -6967,21 +6967,26 @@
- 				isc_boolean_t found_dname = ISC_FALSE;
- 				dns_name_t *dname_name;
- 
-+				/*
-+				 * Only pass DNAME or RRSIG(DNAME).
-+				 */
-+				if (rdataset->type != dns_rdatatype_dname &&
-+				    (rdataset->type != dns_rdatatype_rrsig ||
-+				     rdataset->covers != dns_rdatatype_dname))
-+					continue;
-+
-+				/*
-+				 * If we're not chaining, then the DNAME and
-+				 * its signature should not be external.
-+				 */
-+				if (!chaining && external) {
-+					log_formerr(fctx, "external DNAME");
-+					return (DNS_R_FORMERR);
-+				}
-+
- 				found = ISC_FALSE;
- 				aflag = 0;
- 				if (rdataset->type == dns_rdatatype_dname) {
--					/*
--					 * We're looking for something else,
--					 * but we found a DNAME.
--					 *
--					 * If we're not chaining, then the
--					 * DNAME should not be external.
--					 */
--					if (!chaining && external) {
--						log_formerr(fctx,
--							    "external DNAME");
--						return (DNS_R_FORMERR);
--					}
- 					found = ISC_TRUE;
- 					want_chaining = ISC_TRUE;
- 					POST(want_chaining);
-@@ -7010,9 +7015,7 @@
- 							&fctx->domain)) {
- 						return (DNS_R_SERVFAIL);
- 					}
--				} else if (rdataset->type == dns_rdatatype_rrsig
--					   && rdataset->covers ==
--					   dns_rdatatype_dname) {
-+				} else {
- 					/*
- 					 * We've found a signature that
- 					 * covers the DNAME.
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-1286_2.patch b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-1286_2.patch
deleted file mode 100644
index 5f5cb0d..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-1286_2.patch
+++ /dev/null
@@ -1,317 +0,0 @@
-From 7602be276a73a6eb5431c5acd9718e68a55e8b61 Mon Sep 17 00:00:00 2001
-From: Mark Andrews <marka@isc.org>
-Date: Mon, 29 Feb 2016 07:16:48 +1100
-Subject: [PATCH] Part 2 of: 4319.   [security]      Fix resolver assertion
- failure due to improper                         DNAME handling when parsing
- fetch reply messages.                         (CVE-2016-1286) [RT #41753]
-
-CVE: CVE-2016-1286
-Upstream-Status: Backport
-
-(cherry picked from commit 2de89ee9de8c8da9dc153a754b02dcdbb7fe2374)
-Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
----
- lib/dns/resolver.c | 192 ++++++++++++++++++++++++++---------------------------
- 1 file changed, 93 insertions(+), 99 deletions(-)
-
-diff --git a/lib/dns/resolver.c b/lib/dns/resolver.c
-index 70aba87..41e9df4 100644
---- a/lib/dns/resolver.c
-+++ b/lib/dns/resolver.c
-@@ -6074,14 +6074,11 @@ cname_target(dns_rdataset_t *rdataset, dns_name_t *tname) {
- }
- 
- static inline isc_result_t
--dname_target(fetchctx_t *fctx, dns_rdataset_t *rdataset, dns_name_t *qname,
--	     dns_name_t *oname, dns_fixedname_t *fixeddname)
-+dname_target(dns_rdataset_t *rdataset, dns_name_t *qname,
-+	     unsigned int nlabels, dns_fixedname_t *fixeddname)
- {
- 	isc_result_t result;
- 	dns_rdata_t rdata = DNS_RDATA_INIT;
--	unsigned int nlabels;
--	int order;
--	dns_namereln_t namereln;
- 	dns_rdata_dname_t dname;
- 	dns_fixedname_t prefix;
- 
-@@ -6096,21 +6093,6 @@ dname_target(fetchctx_t *fctx, dns_rdataset_t *rdataset, dns_name_t *qname,
- 	if (result != ISC_R_SUCCESS)
- 		return (result);
- 
--	/*
--	 * Get the prefix of qname.
--	 */
--	namereln = dns_name_fullcompare(qname, oname, &order, &nlabels);
--	if (namereln != dns_namereln_subdomain) {
--		char qbuf[DNS_NAME_FORMATSIZE];
--		char obuf[DNS_NAME_FORMATSIZE];
--
--		dns_rdata_freestruct(&dname);
--		dns_name_format(qname, qbuf, sizeof(qbuf));
--		dns_name_format(oname, obuf, sizeof(obuf));
--		log_formerr(fctx, "unrelated DNAME in answer: "
--				   "%s is not in %s", qbuf, obuf);
--		return (DNS_R_FORMERR);
--	}
- 	dns_fixedname_init(&prefix);
- 	dns_name_split(qname, nlabels, dns_fixedname_name(&prefix), NULL);
- 	dns_fixedname_init(fixeddname);
-@@ -6736,13 +6718,13 @@ static isc_result_t
- answer_response(fetchctx_t *fctx) {
- 	isc_result_t result;
- 	dns_message_t *message;
--	dns_name_t *name, *qname, tname, *ns_name;
-+	dns_name_t *name, *dname, *qname, tname, *ns_name;
- 	dns_rdataset_t *rdataset, *ns_rdataset;
- 	isc_boolean_t done, external, chaining, aa, found, want_chaining;
- 	isc_boolean_t have_answer, found_cname, found_type, wanted_chaining;
- 	unsigned int aflag;
- 	dns_rdatatype_t type;
--	dns_fixedname_t dname, fqname;
-+	dns_fixedname_t fdname, fqname;
- 	dns_view_t *view;
- 
- 	FCTXTRACE("answer_response");
-@@ -6770,10 +6752,15 @@ answer_response(fetchctx_t *fctx) {
- 	view = fctx->res->view;
- 	result = dns_message_firstname(message, DNS_SECTION_ANSWER);
- 	while (!done && result == ISC_R_SUCCESS) {
-+		dns_namereln_t namereln;
-+		int order;
-+		unsigned int nlabels;
-+
- 		name = NULL;
- 		dns_message_currentname(message, DNS_SECTION_ANSWER, &name);
- 		external = ISC_TF(!dns_name_issubdomain(name, &fctx->domain));
--		if (dns_name_equal(name, qname)) {
-+		namereln = dns_name_fullcompare(qname, name, &order, &nlabels);
-+		if (namereln == dns_namereln_equal) {
- 			wanted_chaining = ISC_FALSE;
- 			for (rdataset = ISC_LIST_HEAD(name->list);
- 			     rdataset != NULL;
-@@ -6898,10 +6885,11 @@ answer_response(fetchctx_t *fctx) {
- 						 */
- 						INSIST(!external);
- 						if (aflag ==
--						    DNS_RDATASETATTR_ANSWER)
-+						    DNS_RDATASETATTR_ANSWER) {
- 							have_answer = ISC_TRUE;
--						name->attributes |=
--							DNS_NAMEATTR_ANSWER;
-+							name->attributes |=
-+								DNS_NAMEATTR_ANSWER;
-+						}
- 						rdataset->attributes |= aflag;
- 						if (aa)
- 							rdataset->trust =
-@@ -6956,6 +6944,8 @@ answer_response(fetchctx_t *fctx) {
- 			if (wanted_chaining)
- 				chaining = ISC_TRUE;
- 		} else {
-+			dns_rdataset_t *dnameset = NULL;
-+
- 			/*
- 			 * Look for a DNAME (or its SIG).  Anything else is
- 			 * ignored.
-@@ -6963,10 +6953,8 @@ answer_response(fetchctx_t *fctx) {
- 			wanted_chaining = ISC_FALSE;
- 			for (rdataset = ISC_LIST_HEAD(name->list);
- 			     rdataset != NULL;
--			     rdataset = ISC_LIST_NEXT(rdataset, link)) {
--				isc_boolean_t found_dname = ISC_FALSE;
--				dns_name_t *dname_name;
--
-+			     rdataset = ISC_LIST_NEXT(rdataset, link))
-+			{
- 				/*
- 				 * Only pass DNAME or RRSIG(DNAME).
- 				 */
-@@ -6980,20 +6968,41 @@ answer_response(fetchctx_t *fctx) {
- 				 * its signature should not be external.
- 				 */
- 				if (!chaining && external) {
--					log_formerr(fctx, "external DNAME");
-+					char qbuf[DNS_NAME_FORMATSIZE];
-+					char obuf[DNS_NAME_FORMATSIZE];
-+
-+					dns_name_format(name, qbuf,
-+							sizeof(qbuf));
-+					dns_name_format(&fctx->domain, obuf,
-+							sizeof(obuf));
-+					log_formerr(fctx, "external DNAME or "
-+						    "RRSIG covering DNAME "
-+						    "in answer: %s is "
-+						    "not in %s", qbuf, obuf);
-+					return (DNS_R_FORMERR);
-+				}
-+
-+				if (namereln != dns_namereln_subdomain) {
-+					char qbuf[DNS_NAME_FORMATSIZE];
-+					char obuf[DNS_NAME_FORMATSIZE];
-+
-+					dns_name_format(qname, qbuf,
-+							sizeof(qbuf));
-+					dns_name_format(name, obuf,
-+							sizeof(obuf));
-+					log_formerr(fctx, "unrelated DNAME "
-+						    "in answer: %s is "
-+						    "not in %s", qbuf, obuf);
- 					return (DNS_R_FORMERR);
- 				}
- 
--				found = ISC_FALSE;
- 				aflag = 0;
- 				if (rdataset->type == dns_rdatatype_dname) {
--					found = ISC_TRUE;
- 					want_chaining = ISC_TRUE;
- 					POST(want_chaining);
- 					aflag = DNS_RDATASETATTR_ANSWER;
--					result = dname_target(fctx, rdataset,
--							      qname, name,
--							      &dname);
-+					result = dname_target(rdataset, qname,
-+							      nlabels, &fdname);
- 					if (result == ISC_R_NOSPACE) {
- 						/*
- 						 * We can't construct the
-@@ -7005,14 +7014,12 @@ answer_response(fetchctx_t *fctx) {
- 					} else if (result != ISC_R_SUCCESS)
- 						return (result);
- 					else
--						found_dname = ISC_TRUE;
-+						dnameset = rdataset;
- 
--					dname_name = dns_fixedname_name(&dname);
-+					dname = dns_fixedname_name(&fdname);
- 					if (!is_answertarget_allowed(view,
--							qname,
--							rdataset->type,
--							dname_name,
--							&fctx->domain)) {
-+							qname, rdataset->type,
-+							dname, &fctx->domain)) {
- 						return (DNS_R_SERVFAIL);
- 					}
- 				} else {
-@@ -7020,73 +7027,60 @@ answer_response(fetchctx_t *fctx) {
- 					 * We've found a signature that
- 					 * covers the DNAME.
- 					 */
--					found = ISC_TRUE;
- 					aflag = DNS_RDATASETATTR_ANSWERSIG;
- 				}
- 
--				if (found) {
-+				/*
-+				 * We've found an answer to our
-+				 * question.
-+				 */
-+				name->attributes |= DNS_NAMEATTR_CACHE;
-+				rdataset->attributes |= DNS_RDATASETATTR_CACHE;
-+				rdataset->trust = dns_trust_answer;
-+				if (!chaining) {
- 					/*
--					 * We've found an answer to our
--					 * question.
-+					 * This data is "the" answer to
-+					 * our question only if we're
-+					 * not chaining.
- 					 */
--					name->attributes |=
--						DNS_NAMEATTR_CACHE;
--					rdataset->attributes |=
--						DNS_RDATASETATTR_CACHE;
--					rdataset->trust = dns_trust_answer;
--					if (!chaining) {
--						/*
--						 * This data is "the" answer
--						 * to our question only if
--						 * we're not chaining.
--						 */
--						INSIST(!external);
--						if (aflag ==
--						    DNS_RDATASETATTR_ANSWER)
--							have_answer = ISC_TRUE;
-+					INSIST(!external);
-+					if (aflag == DNS_RDATASETATTR_ANSWER) {
-+						have_answer = ISC_TRUE;
- 						name->attributes |=
- 							DNS_NAMEATTR_ANSWER;
--						rdataset->attributes |= aflag;
--						if (aa)
--							rdataset->trust =
--							  dns_trust_authanswer;
--					} else if (external) {
--						rdataset->attributes |=
--						    DNS_RDATASETATTR_EXTERNAL;
--					}
--
--					/*
--					 * DNAME chaining.
--					 */
--					if (found_dname) {
--						/*
--						 * Copy the dname into the
--						 * qname fixed name.
--						 *
--						 * Although we check for
--						 * failure of the copy
--						 * operation, in practice it
--						 * should never fail since
--						 * we already know that the
--						 * result fits in a fixedname.
--						 */
--						dns_fixedname_init(&fqname);
--						result = dns_name_copy(
--						  dns_fixedname_name(&dname),
--						  dns_fixedname_name(&fqname),
--						  NULL);
--						if (result != ISC_R_SUCCESS)
--							return (result);
--						wanted_chaining = ISC_TRUE;
--						name->attributes |=
--							DNS_NAMEATTR_CHAINING;
--						rdataset->attributes |=
--						    DNS_RDATASETATTR_CHAINING;
--						qname = dns_fixedname_name(
--								   &fqname);
- 					}
-+					rdataset->attributes |= aflag;
-+					if (aa)
-+						rdataset->trust =
-+						  dns_trust_authanswer;
-+				} else if (external) {
-+					rdataset->attributes |=
-+					    DNS_RDATASETATTR_EXTERNAL;
- 				}
- 			}
-+
-+			/*
-+			 * DNAME chaining.
-+			 */
-+			if (dnameset != NULL) {
-+				/*
-+				 * Copy the dname into the qname fixed name.
-+				 *
-+				 * Although we check for failure of the copy
-+				 * operation, in practice it should never fail
-+				 * since we already know that the  result fits
-+				 * in a fixedname.
-+				 */
-+				dns_fixedname_init(&fqname);
-+				qname = dns_fixedname_name(&fqname);
-+				result = dns_name_copy(dname, qname, NULL);
-+				if (result != ISC_R_SUCCESS)
-+					return (result);
-+				wanted_chaining = ISC_TRUE;
-+				name->attributes |= DNS_NAMEATTR_CHAINING;
-+				dnameset->attributes |=
-+					    DNS_RDATASETATTR_CHAINING;
-+			}
- 			if (wanted_chaining)
- 				chaining = ISC_TRUE;
- 		}
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-2088.patch b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-2088.patch
deleted file mode 100644
index 1b84d46..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-2088.patch
+++ /dev/null
@@ -1,247 +0,0 @@
-CVE-2016-2088
-
-Backport commit d7ff9a1c41bf0ba9773cb3adb08b48b9fd57c956 from the
-v9_10_3_patch branch.
-
-https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-2088
-https://kb.isc.org/article/AA-01351
-
-CVE: CVE-2016-2088
-Upstream-Status: Backport
-Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
-
-
-Original commit message from Mark Andrews <marka@isc.org> below:
-
-4322.   [security]      Duplicate EDNS COOKIE options in a response could
-                        trigger an assertion failure. (CVE-2016-2088)
-                        [RT #41809]
-
-(cherry picked from commit 455c0848f80a8acda27aad1466c72987cafaa029)
-(cherry picked from commit 7cd300abd6ee8b8ee8730593daf742ba53f90bc3)
----
- CHANGES            |  4 ++++
- bin/dig/dighost.c  |  9 +++++++++
- bin/named/client.c | 33 +++++++++++++++++++++++----------
- doc/arm/notes.xml  |  7 +++++++
- lib/dns/resolver.c | 14 +++++++++++++-
- 5 files changed, 56 insertions(+), 11 deletions(-)
-
-diff --git a/CHANGES b/CHANGES
-index c5b5d2b..d2e3360 100644
---- a/CHANGES
-+++ b/CHANGES
-@@ -1,3 +1,7 @@
-+4322.  [security]      Duplicate EDNS COOKIE options in a response could
-+                       trigger an assertion failure. (CVE-2016-2088)
-+                       [RT #41809]
-+
- 4319.  [security]      Fix resolver assertion failure due to improper
-                        DNAME handling when parsing fetch reply messages.
-                        (CVE-2016-1286) [RT #41753]
-diff --git a/bin/dig/dighost.c b/bin/dig/dighost.c
-index ca82f8e..340904f 100644
---- a/bin/dig/dighost.c
-+++ b/bin/dig/dighost.c
-@@ -3458,6 +3458,7 @@ process_opt(dig_lookup_t *l, dns_message_t *msg) {
- 	isc_buffer_t optbuf;
- 	isc_uint16_t optcode, optlen;
- 	dns_rdataset_t *opt = msg->opt;
-+	isc_boolean_t seen_cookie = ISC_FALSE;
- 
- 	result = dns_rdataset_first(opt);
- 	if (result == ISC_R_SUCCESS) {
-@@ -3470,7 +3471,15 @@ process_opt(dig_lookup_t *l, dns_message_t *msg) {
- 			optlen = isc_buffer_getuint16(&optbuf);
- 			switch (optcode) {
- 			case DNS_OPT_COOKIE:
-+				/*
-+				 * Only process the first cookie option.
-+				 */
-+				if (seen_cookie) {
-+					isc_buffer_forward(&optbuf, optlen);
-+					break;
-+				}
- 				process_sit(l, msg, &optbuf, optlen);
-+				seen_cookie = ISC_TRUE;
- 				break;
- 			default:
- 				isc_buffer_forward(&optbuf, optlen);
-diff --git a/bin/named/client.c b/bin/named/client.c
-index 683305c..0d7331a 100644
---- a/bin/named/client.c
-+++ b/bin/named/client.c
-@@ -120,7 +120,10 @@
-  */
- #endif
- 
--#define SIT_SIZE 24U /* 8 + 4 + 4 + 8 */
-+#define COOKIE_SIZE 24U /* 8 + 4 + 4 + 8 */
-+
-+#define WANTNSID(x) (((x)->attributes & NS_CLIENTATTR_WANTNSID) != 0)
-+#define WANTEXPIRE(x) (((x)->attributes & NS_CLIENTATTR_WANTEXPIRE) != 0)
- 
- /*% nameserver client manager structure */
- struct ns_clientmgr {
-@@ -1395,7 +1398,7 @@ ns_client_addopt(ns_client_t *client, dns_message_t *message,
- {
- 	char nsid[BUFSIZ], *nsidp;
- #ifdef ISC_PLATFORM_USESIT
--	unsigned char sit[SIT_SIZE];
-+	unsigned char sit[COOKIE_SIZE];
- #endif
- 	isc_result_t result;
- 	dns_view_t *view;
-@@ -1420,7 +1423,7 @@ ns_client_addopt(ns_client_t *client, dns_message_t *message,
- 	flags = client->extflags & DNS_MESSAGEEXTFLAG_REPLYPRESERVE;
- 
- 	/* Set EDNS options if applicable */
--	if ((client->attributes & NS_CLIENTATTR_WANTNSID) != 0 &&
-+	if (WANTNSID(client) &&
- 	    (ns_g_server->server_id != NULL ||
- 	     ns_g_server->server_usehostname)) {
- 		if (ns_g_server->server_usehostname) {
-@@ -1453,7 +1456,7 @@ ns_client_addopt(ns_client_t *client, dns_message_t *message,
- 
- 		INSIST(count < DNS_EDNSOPTIONS);
- 		ednsopts[count].code = DNS_OPT_COOKIE;
--		ednsopts[count].length = SIT_SIZE;
-+		ednsopts[count].length = COOKIE_SIZE;
- 		ednsopts[count].value = sit;
- 		count++;
- 	}
-@@ -1661,19 +1664,26 @@ compute_sit(ns_client_t *client, isc_uint32_t when, isc_uint32_t nonce,
- 
- static void
- process_sit(ns_client_t *client, isc_buffer_t *buf, size_t optlen) {
--	unsigned char dbuf[SIT_SIZE];
-+	unsigned char dbuf[COOKIE_SIZE];
- 	unsigned char *old;
- 	isc_stdtime_t now;
- 	isc_uint32_t when;
- 	isc_uint32_t nonce;
- 	isc_buffer_t db;
- 
-+	/*
-+	 * If we have already seen a ECS option skip this ECS option.
-+	 */
-+	if ((client->attributes & NS_CLIENTATTR_WANTSIT) != 0) {
-+		isc_buffer_forward(buf, optlen);
-+		return;
-+	}
- 	client->attributes |= NS_CLIENTATTR_WANTSIT;
- 
- 	isc_stats_increment(ns_g_server->nsstats,
- 			    dns_nsstatscounter_sitopt);
- 
--	if (optlen != SIT_SIZE) {
-+	if (optlen != COOKIE_SIZE) {
- 		/*
- 		 * Not our token.
- 		 */
-@@ -1717,14 +1727,13 @@ process_sit(ns_client_t *client, isc_buffer_t *buf, size_t optlen) {
- 	isc_buffer_init(&db, dbuf, sizeof(dbuf));
- 	compute_sit(client, when, nonce, &db);
- 
--	if (!isc_safe_memequal(old, dbuf, SIT_SIZE)) {
-+	if (!isc_safe_memequal(old, dbuf, COOKIE_SIZE)) {
- 		isc_stats_increment(ns_g_server->nsstats,
- 				    dns_nsstatscounter_sitnomatch);
- 		return;
- 	}
- 	isc_stats_increment(ns_g_server->nsstats,
- 			    dns_nsstatscounter_sitmatch);
--
- 	client->attributes |= NS_CLIENTATTR_HAVESIT;
- }
- #endif
-@@ -1783,7 +1792,9 @@ process_opt(ns_client_t *client, dns_rdataset_t *opt) {
- 			optlen = isc_buffer_getuint16(&optbuf);
- 			switch (optcode) {
- 			case DNS_OPT_NSID:
--				isc_stats_increment(ns_g_server->nsstats,
-+				if (!WANTNSID(client))
-+					isc_stats_increment(
-+						    ns_g_server->nsstats,
- 						    dns_nsstatscounter_nsidopt);
- 				client->attributes |= NS_CLIENTATTR_WANTNSID;
- 				isc_buffer_forward(&optbuf, optlen);
-@@ -1794,7 +1805,9 @@ process_opt(ns_client_t *client, dns_rdataset_t *opt) {
- 				break;
- #endif
- 			case DNS_OPT_EXPIRE:
--				isc_stats_increment(ns_g_server->nsstats,
-+				if (!WANTEXPIRE(client))
-+					isc_stats_increment(
-+						  ns_g_server->nsstats,
- 						  dns_nsstatscounter_expireopt);
- 				client->attributes |= NS_CLIENTATTR_WANTEXPIRE;
- 				isc_buffer_forward(&optbuf, optlen);
-diff --git a/doc/arm/notes.xml b/doc/arm/notes.xml
-index ebf4f55..095eb5b 100644
---- a/doc/arm/notes.xml
-+++ b/doc/arm/notes.xml
-@@ -51,6 +51,13 @@
-     <title>Security Fixes</title>
-     <itemizedlist>
-       <listitem>
-+       <para>
-+         Duplicate EDNS COOKIE options in a response could trigger
-+         an assertion failure. This flaw is disclosed in CVE-2016-2088.
-+         [RT #41809]
-+       </para>
-+      </listitem>
-+      <listitem>
- 	<para>
- 	  Specific APL data could trigger an INSIST.  This flaw
- 	  was discovered by Brian Mitchell and is disclosed in
-diff --git a/lib/dns/resolver.c b/lib/dns/resolver.c
-index a797e3f..ba1ae23 100644
---- a/lib/dns/resolver.c
-+++ b/lib/dns/resolver.c
-@@ -7502,7 +7502,9 @@ process_opt(resquery_t *query, dns_rdataset_t *opt) {
- 	unsigned char *sit;
- 	dns_adbaddrinfo_t *addrinfo;
- 	unsigned char cookie[8];
-+	isc_boolean_t seen_cookie = ISC_FALSE;
- #endif
-+	isc_boolean_t seen_nsid = ISC_FALSE;
- 
- 	result = dns_rdataset_first(opt);
- 	if (result == ISC_R_SUCCESS) {
-@@ -7516,14 +7518,23 @@ process_opt(resquery_t *query, dns_rdataset_t *opt) {
- 			INSIST(optlen <= isc_buffer_remaininglength(&optbuf));
- 			switch (optcode) {
- 			case DNS_OPT_NSID:
--				if (query->options & DNS_FETCHOPT_WANTNSID)
-+				if (!seen_nsid &&
-+				    query->options & DNS_FETCHOPT_WANTNSID)
- 					log_nsid(&optbuf, optlen, query,
- 						 ISC_LOG_DEBUG(3),
- 						 query->fctx->res->mctx);
- 				isc_buffer_forward(&optbuf, optlen);
-+				seen_nsid = ISC_TRUE;
- 				break;
- #ifdef ISC_PLATFORM_USESIT
- 			case DNS_OPT_COOKIE:
-+				/*
-+				 * Only process the first cookie option.
-+				 */
-+				if (seen_cookie) {
-+					isc_buffer_forward(&optbuf, optlen);
-+					break;
-+				}
- 				sit = isc_buffer_current(&optbuf);
- 				compute_cc(query, cookie, sizeof(cookie));
- 				INSIST(query->fctx->rmessage->sitbad == 0 &&
-@@ -7541,6 +7552,7 @@ process_opt(resquery_t *query, dns_rdataset_t *opt) {
- 				isc_buffer_forward(&optbuf, optlen);
- 				inc_stats(query->fctx->res,
- 					  dns_resstatscounter_sitin);
-+				seen_cookie = ISC_TRUE;
- 				break;
- #endif
- 			default:
--- 
-2.1.4
-
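The CVE-2016-2088 patch removed above fixes the duplicate-COOKIE assertion by remembering, while walking the OPT rdata, whether a COOKIE option has already been handled and skipping any later ones; the same guard appears in both dig's dighost.c and the resolver. Below is a minimal standalone sketch of that parsing pattern, not BIND code: the buffer layout follows EDNS option encoding, but every identifier (process_options, OPT_COOKIE, get16) is invented for illustration.

/*
 * Sketch: walk a buffer of EDNS options encoded as (16-bit code,
 * 16-bit length, data) and act only on the first COOKIE option,
 * skipping duplicates instead of re-processing them.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define OPT_COOKIE 10 /* EDNS COOKIE option code (RFC 7873) */

static uint16_t get16(const uint8_t *p) {
	return (uint16_t)((p[0] << 8) | p[1]);
}

static void process_options(const uint8_t *buf, size_t len) {
	bool seen_cookie = false;
	size_t off = 0;

	while (off + 4 <= len) {
		uint16_t code = get16(buf + off);
		uint16_t optlen = get16(buf + off + 2);
		off += 4;
		if (off + optlen > len)
			break; /* malformed option; stop parsing */

		if (code == OPT_COOKIE) {
			if (seen_cookie) {
				/* Duplicate COOKIE: skip it, do not assert. */
				off += optlen;
				continue;
			}
			printf("processing COOKIE option (%u bytes)\n",
			       (unsigned)optlen);
			seen_cookie = true;
		}
		off += optlen; /* advance past the option data */
	}
}

int main(void) {
	/* Two COOKIE options back to back; only the first is processed. */
	const uint8_t wire[] = { 0x00, 0x0a, 0x00, 0x02, 0xde, 0xad,
				 0x00, 0x0a, 0x00, 0x02, 0xbe, 0xef };
	process_options(wire, sizeof(wire));
	return 0;
}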
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-2775.patch b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-2775.patch
deleted file mode 100644
index 5393063..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-2775.patch
+++ /dev/null
@@ -1,90 +0,0 @@
-From 9d8aba8a7778721ae2cee6e4670a8e6be6590b05 Mon Sep 17 00:00:00 2001
-From: Mark Andrews <marka@isc.org>
-Date: Wed, 12 Oct 2016 19:52:59 +0900
-Subject: [PATCH]
-4406.   [security]      getrrsetbyname with a non absolute name could
-                        trigger an infinite recursion bug in lwresd
-                        and named with lwres configured if when combined
-                        with a search list entry the resulting name is
-                        too long. (CVE-2016-2775) [RT #42694]
-
-Backport commit 38cc2d14e218e536e0102fa70deef99461354232 from the
-v9.11.0_patch branch.
-
-CVE: CVE-2016-2775
-Upstream-Status: Backport
-
-Signed-off-by: zhengruoqin <zhengrq.fnst@cn.fujitsu.com>
-
----
- CHANGES                          |  6 ++++++
- bin/named/lwdgrbn.c              | 16 ++++++++++------
- bin/tests/system/lwresd/lwtest.c |  9 ++++++++-
- 3 files changed, 24 insertions(+), 7 deletions(-)
-
-diff --git a/CHANGES b/CHANGES
-index d2e3360..d0a9d12 100644
---- a/CHANGES
-+++ b/CHANGES
-@@ -1,3 +1,9 @@
-+4406.   [security]      getrrsetbyname with a non absolute name could
-+                        trigger an infinite recursion bug in lwresd
-+                        and named with lwres configured if when combined
-+                        with a search list entry the resulting name is
-+                        too long. (CVE-2016-2775) [RT #42694]
-+
- 4322.  [security]      Duplicate EDNS COOKIE options in a response could
-                        trigger an assertion failure. (CVE-2016-2088)
-                        [RT #41809]
-diff --git a/bin/named/lwdgrbn.c b/bin/named/lwdgrbn.c
-index 3e7b15b..e1e9adc 100644
---- a/bin/named/lwdgrbn.c
-+++ b/bin/named/lwdgrbn.c
-@@ -403,14 +403,18 @@ start_lookup(ns_lwdclient_t *client) {
- 	INSIST(client->lookup == NULL);
- 
- 	dns_fixedname_init(&absname);
--	result = ns_lwsearchctx_current(&client->searchctx,
--					dns_fixedname_name(&absname));
-+
- 	/*
--	 * This will return failure if relative name + suffix is too long.
--	 * In this case, just go on to the next entry in the search path.
-+         * Perform search across all search domains until success
-+         * is returned. Return in case of failure.
- 	 */
--	if (result != ISC_R_SUCCESS)
--		start_lookup(client);
-+        while (ns_lwsearchctx_current(&client->searchctx,
-+                        dns_fixedname_name(&absname)) != ISC_R_SUCCESS) {
-+                if (ns_lwsearchctx_next(&client->searchctx) != ISC_R_SUCCESS) {
-+                        ns_lwdclient_errorpktsend(client, LWRES_R_FAILURE);
-+                        return;
-+                }
-+        }
- 
- 	result = dns_lookup_create(cm->mctx,
- 				   dns_fixedname_name(&absname),
-diff --git a/bin/tests/system/lwresd/lwtest.c b/bin/tests/system/lwresd/lwtest.c
-index ad9b551..3eb4a66 100644
---- a/bin/tests/system/lwresd/lwtest.c
-+++ b/bin/tests/system/lwresd/lwtest.c
-@@ -768,7 +768,14 @@ main(void) {
- 	test_getrrsetbyname("e.example1.", 1, 2, 1, 1, 1);
- 	test_getrrsetbyname("e.example1.", 1, 46, 2, 0, 1);
- 	test_getrrsetbyname("", 1, 1, 0, 0, 0);
--
-+        test_getrrsetbyname("123456789.123456789.123456789.123456789."
-+                            "123456789.123456789.123456789.123456789."
-+                            "123456789.123456789.123456789.123456789."
-+                            "123456789.123456789.123456789.123456789."
-+                            "123456789.123456789.123456789.123456789."
-+                            "123456789.123456789.123456789.123456789."
-+                            "123456789", 1, 1, 0, 0, 0);
-+ 
- 	if (fails == 0)
- 		printf("I:ok\n");
- 	return (fails);
--- 
-2.7.4
-
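The CVE-2016-2775 patch removed above replaces a recursive retry inside start_lookup() with a loop over the lightweight-resolver search list, so a relative name that is too long with every suffix ends in a clean error instead of unbounded recursion. A standalone sketch of that loop shape follows, not BIND code; the function name, the 255-character limit in presentation form, and the failure convention are simplifying assumptions.

/*
 * Sketch: try each search suffix in turn, skip suffixes that would make
 * the name too long, and fail once the list is exhausted instead of
 * recursing.
 */
#include <stdio.h>
#include <string.h>

#define MAX_NAME 255

/* Returns 0 and fills 'out' on success, -1 if no search entry fits. */
static int resolve_with_search(const char *name, const char **search,
			       size_t nsearch, char out[MAX_NAME + 1]) {
	for (size_t i = 0; i < nsearch; i++) {
		size_t need = strlen(name) + 1 + strlen(search[i]);
		if (need > MAX_NAME)
			continue; /* too long with this suffix; try the next */
		snprintf(out, MAX_NAME + 1, "%s.%s", name, search[i]);
		return 0;
	}
	return -1; /* every candidate was too long: report failure */
}

int main(void) {
	const char *search[] = { "corp.example", "example.com" };
	char abs[MAX_NAME + 1];

	if (resolve_with_search("www", search, 2, abs) == 0)
		printf("lookup name: %s\n", abs);
	else
		printf("lookup failed: name too long with every suffix\n");
	return 0;
}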
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-2776.patch b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-2776.patch
deleted file mode 100644
index 738bf60..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-2776.patch
+++ /dev/null
@@ -1,123 +0,0 @@
-From 1171111657081970585f9f0e03b476358c33a6c0 Mon Sep 17 00:00:00 2001
-From: Mark Andrews <marka@isc.org>
-Date: Wed, 12 Oct 2016 20:36:52 +0900
-Subject: [PATCH] 
-4467.   [security]      It was possible to trigger an assertion when 
-                        rendering a message. (CVE-2016-2776) [RT #43139]
-
-Backport commit 2bd0922cf995b9ac205fc83baf7e220b95c6bf12 from the
-v9.11.0_patch branch.
-
-CVE: CVE-2016-2776
-Upstream-Status: Backport
-
-Signed-off-by: zhengruoqin <zhengrq.fnst@cn.fujitsu.com>
-
----
- CHANGES           |  3 +++
- lib/dns/message.c | 42 +++++++++++++++++++++++++++++++-----------
- 2 files changed, 34 insertions(+), 11 deletions(-)
-
-diff --git a/CHANGES b/CHANGES
-index d0a9d12..5c8c61a 100644
---- a/CHANGES
-+++ b/CHANGES
-@@ -1,3 +1,6 @@
-+4467.   [security]      It was possible to trigger an assertion when
-+                        rendering a message. (CVE-2016-2776) [RT #43139]
-+
- 4406.   [security]      getrrsetbyname with a non absolute name could
-                         trigger an infinite recursion bug in lwresd
-                         and named with lwres configured if when combined
-diff --git a/lib/dns/message.c b/lib/dns/message.c
-index 6b5b4bb..b74dc81 100644
---- a/lib/dns/message.c
-+++ b/lib/dns/message.c
-@@ -1754,7 +1754,7 @@ dns_message_renderbegin(dns_message_t *msg, dns_compress_t *cctx,
- 	if (r.length < DNS_MESSAGE_HEADERLEN)
- 		return (ISC_R_NOSPACE);
- 
--	if (r.length < msg->reserved)
-+        if (r.length - DNS_MESSAGE_HEADERLEN < msg->reserved)
- 		return (ISC_R_NOSPACE);
- 
- 	/*
-@@ -1895,8 +1895,29 @@ norender_rdataset(const dns_rdataset_t *rdataset, unsigned int options,
- 
- 	return (ISC_TRUE);
- }
--
- #endif
-+
-+static isc_result_t
-+renderset(dns_rdataset_t *rdataset, dns_name_t *owner_name,
-+         dns_compress_t *cctx, isc_buffer_t *target,
-+         unsigned int reserved, unsigned int options, unsigned int *countp)
-+{
-+       isc_result_t result;
-+
-+       /*
-+        * Shrink the space in the buffer by the reserved amount.
-+        */
-+       if (target->length - target->used < reserved)
-+               return (ISC_R_NOSPACE);
-+
-+       target->length -= reserved;
-+       result = dns_rdataset_towire(rdataset, owner_name,
-+                                    cctx, target, options, countp);
-+       target->length += reserved;
-+
-+       return (result);
-+}
-+
- isc_result_t
- dns_message_rendersection(dns_message_t *msg, dns_section_t sectionid,
- 			  unsigned int options)
-@@ -1939,6 +1960,8 @@ dns_message_rendersection(dns_message_t *msg, dns_section_t sectionid,
- 	/*
- 	 * Shrink the space in the buffer by the reserved amount.
- 	 */
-+        if (msg->buffer->length - msg->buffer->used < msg->reserved)
-+                return (ISC_R_NOSPACE);
- 	msg->buffer->length -= msg->reserved;
- 
- 	total = 0;
-@@ -2214,9 +2237,8 @@ dns_message_renderend(dns_message_t *msg) {
- 		 * Render.
- 		 */
- 		count = 0;
--		result = dns_rdataset_towire(msg->opt, dns_rootname,
--					     msg->cctx, msg->buffer, 0,
--					     &count);
-+                result = renderset(msg->opt, dns_rootname, msg->cctx,
-+                                   msg->buffer, msg->reserved, 0, &count);
- 		msg->counts[DNS_SECTION_ADDITIONAL] += count;
- 		if (result != ISC_R_SUCCESS)
- 			return (result);
-@@ -2232,9 +2254,8 @@ dns_message_renderend(dns_message_t *msg) {
- 		if (result != ISC_R_SUCCESS)
- 			return (result);
- 		count = 0;
--		result = dns_rdataset_towire(msg->tsig, msg->tsigname,
--					     msg->cctx, msg->buffer, 0,
--					     &count);
-+                result = renderset(msg->tsig, msg->tsigname, msg->cctx,
-+                                   msg->buffer, msg->reserved, 0, &count);
- 		msg->counts[DNS_SECTION_ADDITIONAL] += count;
- 		if (result != ISC_R_SUCCESS)
- 			return (result);
-@@ -2255,9 +2276,8 @@ dns_message_renderend(dns_message_t *msg) {
- 		 * the owner name of a SIG(0) is irrelevant, and will not
- 		 * be set in a message being rendered.
- 		 */
--		result = dns_rdataset_towire(msg->sig0, dns_rootname,
--					     msg->cctx, msg->buffer, 0,
--					     &count);
-+                result = renderset(msg->sig0, dns_rootname, msg->cctx,
-+                                   msg->buffer, msg->reserved, 0, &count);
- 		msg->counts[DNS_SECTION_ADDITIONAL] += count;
- 		if (result != ISC_R_SUCCESS)
- 			return (result);
--- 
-2.7.4
-
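The CVE-2016-2776 patch removed above adds renderset(), which refuses to render when the buffer's free space no longer covers the reserved tail (space held back for OPT, TSIG and SIG(0) records) and otherwise shrinks the usable length by the reservation for the duration of the write. A standalone sketch of that buffer discipline follows, not BIND code; the struct and function names are invented for illustration.

/*
 * Sketch: verify the free space covers the reserved tail, hide the tail
 * while writing, then restore the full length.
 */
#include <stdio.h>
#include <string.h>

struct buffer {
	unsigned char data[64];
	size_t length; /* usable size */
	size_t used;   /* bytes already written */
};

static int write_reserving(struct buffer *b, const void *src, size_t n,
			   size_t reserved) {
	int rc = 0;

	/* Guard against underflow: free space must cover the reservation. */
	if (b->length - b->used < reserved)
		return -1;

	b->length -= reserved;           /* hide the reserved tail */
	if (b->length - b->used < n)
		rc = -1;                 /* no room left for the payload */
	else {
		memcpy(b->data + b->used, src, n);
		b->used += n;
	}
	b->length += reserved;           /* restore the full length */
	return rc;
}

int main(void) {
	struct buffer b = { .length = 64, .used = 0 };
	unsigned char payload[40];

	memset(payload, 'x', sizeof(payload));
	printf("first write:  %d\n",
	       write_reserving(&b, payload, sizeof(payload), 12));
	printf("second write: %d\n",
	       write_reserving(&b, payload, sizeof(payload), 12));
	return 0;
}

The second call fails cleanly because only 24 bytes remain and 12 of them are reserved, which mirrors the case that previously tripped the assertion when msg->reserved exceeded the space left in the message buffer.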
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-6170.patch b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-6170.patch
deleted file mode 100644
index 75bc211..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-6170.patch
+++ /dev/null
@@ -1,1090 +0,0 @@
-From 1bbcfe2fc84f57b1e4e075fb3bc2a1dd0a3a851f Mon Sep 17 00:00:00 2001
-From: Mark Andrews <marka@isc.org>
-Date: Wed, 2 Nov 2016 17:31:27 +1100
-Subject: [PATCH] 4504. [security] Allow the maximum number of records in a
- zone to be specified. This provides a control for issues raised in
- CVE-2016-6170. [RT #42143]
-
-(cherry picked from commit 5f8412a4cb5ee14a0e8cddd4107854b40ee3291e)
-
-Upstream-Status: Backport
-[https://source.isc.org/cgi-bin/gitweb.cgi?p=bind9.git;a=commit;h=1bbcfe2fc84f57b1e4e075fb3bc2a1dd0a3a851f]
-
-CVE: CVE-2016-6170
-
-Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
----
- CHANGES                                          |   4 +
- bin/named/config.c                               |   1 +
- bin/named/named.conf.docbook                     |   3 +
- bin/named/update.c                               |  16 +++
- bin/named/zoneconf.c                             |   7 ++
- bin/tests/system/nsupdate/clean.sh               |   1 +
- bin/tests/system/nsupdate/ns3/named.conf         |   7 ++
- bin/tests/system/nsupdate/ns3/too-big.test.db.in |  10 ++
- bin/tests/system/nsupdate/setup.sh               |   2 +
- bin/tests/system/nsupdate/tests.sh               |  15 +++
- bin/tests/system/xfer/clean.sh                   |   1 +
- bin/tests/system/xfer/ns1/axfr-too-big.db        |  10 ++
- bin/tests/system/xfer/ns1/ixfr-too-big.db.in     |  13 +++
- bin/tests/system/xfer/ns1/named.conf             |  11 ++
- bin/tests/system/xfer/ns6/named.conf             |  14 +++
- bin/tests/system/xfer/setup.sh                   |   2 +
- bin/tests/system/xfer/tests.sh                   |  26 +++++
- doc/arm/Bv9ARM-book.xml                          |  21 ++++
- doc/arm/notes.xml                                |   9 ++
- lib/bind9/check.c                                |   2 +
- lib/dns/db.c                                     |  13 +++
- lib/dns/ecdb.c                                   |   3 +-
- lib/dns/include/dns/db.h                         |  20 ++++
- lib/dns/include/dns/rdataslab.h                  |  13 +++
- lib/dns/include/dns/result.h                     |   6 +-
- lib/dns/include/dns/zone.h                       |  28 ++++-
- lib/dns/rbtdb.c                                  | 127 +++++++++++++++++++++--
- lib/dns/rdataslab.c                              |  13 +++
- lib/dns/result.c                                 |   9 +-
- lib/dns/sdb.c                                    |   3 +-
- lib/dns/sdlz.c                                   |   3 +-
- lib/dns/xfrin.c                                  |  22 +++-
- lib/dns/zone.c                                   |  23 +++-
- lib/isccfg/namedconf.c                           |   1 +
- 34 files changed, 444 insertions(+), 15 deletions(-)
- create mode 100644 bin/tests/system/nsupdate/ns3/too-big.test.db.in
- create mode 100644 bin/tests/system/xfer/ns1/axfr-too-big.db
- create mode 100644 bin/tests/system/xfer/ns1/ixfr-too-big.db.in
-
-diff --git a/CHANGES b/CHANGES
-index 41cfce5..97d2e60 100644
---- a/CHANGES
-+++ b/CHANGES
-@@ -1,3 +1,7 @@
-+4504.	[security]	Allow the maximum number of records in a zone to
-+			be specified.  This provides a control for issues
-+			raised in CVE-2016-6170. [RT #42143]
-+
- 4489.	[security]	It was possible to trigger assertions when processing
- 			a response. (CVE-2016-8864) [RT #43465]
- 
-diff --git a/bin/named/config.c b/bin/named/config.c
-index f06348c..c24e334 100644
---- a/bin/named/config.c
-+++ b/bin/named/config.c
-@@ -209,6 +209,7 @@ options {\n\
- 	max-transfer-time-out 120;\n\
- 	max-transfer-idle-in 60;\n\
- 	max-transfer-idle-out 60;\n\
-+	max-records 0;\n\
- 	max-retry-time 1209600; /* 2 weeks */\n\
- 	min-retry-time 500;\n\
- 	max-refresh-time 2419200; /* 4 weeks */\n\
-diff --git a/bin/named/named.conf.docbook b/bin/named/named.conf.docbook
-index 4c99a61..c2d173a 100644
---- a/bin/named/named.conf.docbook
-+++ b/bin/named/named.conf.docbook
-@@ -338,6 +338,7 @@ options {
- 	};
- 
- 	max-journal-size <replaceable>size_no_default</replaceable>;
-+	max-records <replaceable>integer</replaceable>;
- 	max-transfer-time-in <replaceable>integer</replaceable>;
- 	max-transfer-time-out <replaceable>integer</replaceable>;
- 	max-transfer-idle-in <replaceable>integer</replaceable>;
-@@ -527,6 +528,7 @@ view <replaceable>string</replaceable> <replaceable>optional_class</replaceable>
- 	};
- 
- 	max-journal-size <replaceable>size_no_default</replaceable>;
-+	max-records <replaceable>integer</replaceable>;
- 	max-transfer-time-in <replaceable>integer</replaceable>;
- 	max-transfer-time-out <replaceable>integer</replaceable>;
- 	max-transfer-idle-in <replaceable>integer</replaceable>;
-@@ -624,6 +626,7 @@ zone <replaceable>string</replaceable> <replaceable>optional_class</replaceable>
- 	};
- 
- 	max-journal-size <replaceable>size_no_default</replaceable>;
-+	max-records <replaceable>integer</replaceable>;
- 	max-transfer-time-in <replaceable>integer</replaceable>;
- 	max-transfer-time-out <replaceable>integer</replaceable>;
- 	max-transfer-idle-in <replaceable>integer</replaceable>;
-diff --git a/bin/named/update.c b/bin/named/update.c
-index 83b1a05..cc2a611 100644
---- a/bin/named/update.c
-+++ b/bin/named/update.c
-@@ -2455,6 +2455,8 @@ update_action(isc_task_t *task, isc_event_t *event) {
- 	isc_boolean_t had_dnskey;
- 	dns_rdatatype_t privatetype = dns_zone_getprivatetype(zone);
- 	dns_ttl_t maxttl = 0;
-+	isc_uint32_t maxrecords;
-+	isc_uint64_t records;
- 
- 	INSIST(event->ev_type == DNS_EVENT_UPDATE);
- 
-@@ -3138,6 +3140,20 @@ update_action(isc_task_t *task, isc_event_t *event) {
- 			}
- 		}
- 
-+		maxrecords = dns_zone_getmaxrecords(zone);
-+		if (maxrecords != 0U) {
-+			result = dns_db_getsize(db, ver, &records, NULL);
-+			if (result == ISC_R_SUCCESS && records > maxrecords) {
-+				update_log(client, zone, ISC_LOG_ERROR,
-+					   "records in zone (%"
-+					   ISC_PRINT_QUADFORMAT
-+					   "u) exceeds max-records (%u)",
-+					   records, maxrecords);
-+				result = DNS_R_TOOMANYRECORDS;
-+				goto failure;
-+			}
-+		}
-+
- 		journalfile = dns_zone_getjournal(zone);
- 		if (journalfile != NULL) {
- 			update_log(client, zone, LOGLEVEL_DEBUG,
-diff --git a/bin/named/zoneconf.c b/bin/named/zoneconf.c
-index 4ee3dfe..14dd8ce 100644
---- a/bin/named/zoneconf.c
-+++ b/bin/named/zoneconf.c
-@@ -978,6 +978,13 @@ ns_zone_configure(const cfg_obj_t *config, const cfg_obj_t *vconfig,
- 			dns_zone_setmaxttl(raw, maxttl);
- 	}
- 
-+	obj = NULL;
-+	result = ns_config_get(maps, "max-records", &obj);
-+	INSIST(result == ISC_R_SUCCESS && obj != NULL);
-+	dns_zone_setmaxrecords(mayberaw, cfg_obj_asuint32(obj));
-+	if (zone != mayberaw)
-+		dns_zone_setmaxrecords(zone, 0);
-+
- 	if (raw != NULL && filename != NULL) {
- #define SIGNED ".signed"
- 		size_t signedlen = strlen(filename) + sizeof(SIGNED);
-diff --git a/bin/tests/system/nsupdate/clean.sh b/bin/tests/system/nsupdate/clean.sh
-index aaefc02..ea25545 100644
---- a/bin/tests/system/nsupdate/clean.sh
-+++ b/bin/tests/system/nsupdate/clean.sh
-@@ -32,6 +32,7 @@ rm -f ns3/example.db.jnl ns3/example.db
- rm -f ns3/nsec3param.test.db.signed.jnl ns3/nsec3param.test.db ns3/nsec3param.test.db.signed ns3/dsset-nsec3param.test.
- rm -f ns3/dnskey.test.db.signed.jnl ns3/dnskey.test.db ns3/dnskey.test.db.signed ns3/dsset-dnskey.test.
- rm -f ns3/K*
-+rm -f ns3/too-big.test.db
- rm -f dig.out.*
- rm -f jp.out.ns3.*
- rm -f Kxxx.*
-diff --git a/bin/tests/system/nsupdate/ns3/named.conf b/bin/tests/system/nsupdate/ns3/named.conf
-index 2abd522..68ff27a 100644
---- a/bin/tests/system/nsupdate/ns3/named.conf
-+++ b/bin/tests/system/nsupdate/ns3/named.conf
-@@ -60,3 +60,10 @@ zone "dnskey.test" {
- 	allow-update { any; };
- 	file "dnskey.test.db.signed";
- };
-+
-+zone "too-big.test" {
-+	type master;
-+	allow-update { any; };
-+	max-records 3;
-+	file "too-big.test.db";
-+};
-diff --git a/bin/tests/system/nsupdate/ns3/too-big.test.db.in b/bin/tests/system/nsupdate/ns3/too-big.test.db.in
-new file mode 100644
-index 0000000..7ff1e4a
---- /dev/null
-+++ b/bin/tests/system/nsupdate/ns3/too-big.test.db.in
-@@ -0,0 +1,10 @@
-+; Copyright (C) 2016  Internet Systems Consortium, Inc. ("ISC")
-+;
-+; This Source Code Form is subject to the terms of the Mozilla Public
-+; License, v. 2.0. If a copy of the MPL was not distributed with this
-+; file, You can obtain one at http://mozilla.org/MPL/2.0/.
-+
-+$TTL 10
-+too-big.test. IN SOA too-big.test. hostmaster.too-big.test. 1 3600 900 2419200 3600
-+too-big.test. IN NS too-big.test.
-+too-big.test. IN A 10.53.0.3
-diff --git a/bin/tests/system/nsupdate/setup.sh b/bin/tests/system/nsupdate/setup.sh
-index 828255e..43c4094 100644
---- a/bin/tests/system/nsupdate/setup.sh
-+++ b/bin/tests/system/nsupdate/setup.sh
-@@ -27,12 +27,14 @@ test -r $RANDFILE || $GENRANDOM 400 $RANDFILE
- rm -f ns1/*.jnl ns1/example.db ns2/*.jnl ns2/example.bk
- rm -f ns2/update.bk ns2/update.alt.bk
- rm -f ns3/example.db.jnl
-+rm -f ns3/too-big.test.db.jnl
- 
- cp -f ns1/example1.db ns1/example.db
- sed 's/example.nil/other.nil/g' ns1/example1.db > ns1/other.db
- sed 's/example.nil/unixtime.nil/g' ns1/example1.db > ns1/unixtime.db
- sed 's/example.nil/keytests.nil/g' ns1/example1.db > ns1/keytests.db
- cp -f ns3/example.db.in ns3/example.db
-+cp -f ns3/too-big.test.db.in ns3/too-big.test.db
- 
- # update_test.pl has its own zone file because it
- # requires a specific NS record set.
-diff --git a/bin/tests/system/nsupdate/tests.sh b/bin/tests/system/nsupdate/tests.sh
-index 78d501e..0a6bbd3 100755
---- a/bin/tests/system/nsupdate/tests.sh
-+++ b/bin/tests/system/nsupdate/tests.sh
-@@ -581,5 +581,20 @@ if [ $ret -ne 0 ]; then
-     status=1
- fi
- 
-+n=`expr $n + 1`
-+echo "I:check that adding too many records is blocked ($n)"
-+ret=0
-+$NSUPDATE -v << EOF > nsupdate.out-$n 2>&1 && ret=1
-+server 10.53.0.3 5300
-+zone too-big.test.
-+update add r1.too-big.test 3600 IN TXT r1.too-big.test
-+send
-+EOF
-+grep "update failed: SERVFAIL" nsupdate.out-$n > /dev/null || ret=1
-+DIG +tcp @10.53.0.3 -p 5300 r1.too-big.test TXT > dig.out.ns3.test$n
-+grep "status: NXDOMAIN" dig.out.ns3.test$n > /dev/null || ret=1
-+grep "records in zone (4) exceeds max-records (3)" ns3/named.run > /dev/null || ret=1
-+[ $ret = 0 ] || { echo I:failed; status=1; }
-+
- echo "I:exit status: $status"
- exit $status
-diff --git a/bin/tests/system/xfer/clean.sh b/bin/tests/system/xfer/clean.sh
-index 48aa159..da62a33 100644
---- a/bin/tests/system/xfer/clean.sh
-+++ b/bin/tests/system/xfer/clean.sh
-@@ -36,3 +36,4 @@ rm -f ns7/*.db ns7/*.bk ns7/*.jnl
- rm -f */named.memstats
- rm -f */named.run
- rm -f */ans.run
-+rm -f ns1/ixfr-too-big.db ns1/ixfr-too-big.db.jnl
-diff --git a/bin/tests/system/xfer/ns1/axfr-too-big.db b/bin/tests/system/xfer/ns1/axfr-too-big.db
-new file mode 100644
-index 0000000..d43760d
---- /dev/null
-+++ b/bin/tests/system/xfer/ns1/axfr-too-big.db
-@@ -0,0 +1,10 @@
-+; Copyright (C) 2016  Internet Systems Consortium, Inc. ("ISC")
-+;
-+; This Source Code Form is subject to the terms of the Mozilla Public
-+; License, v. 2.0. If a copy of the MPL was not distributed with this
-+; file, You can obtain one at http://mozilla.org/MPL/2.0/.
-+
-+$TTL	3600
-+@	IN	SOA	. . 0 0 0 0 0
-+@	IN	NS	.
-+$GENERATE 1-29	host$	A	1.2.3.$
-diff --git a/bin/tests/system/xfer/ns1/ixfr-too-big.db.in b/bin/tests/system/xfer/ns1/ixfr-too-big.db.in
-new file mode 100644
-index 0000000..318bb77
---- /dev/null
-+++ b/bin/tests/system/xfer/ns1/ixfr-too-big.db.in
-@@ -0,0 +1,13 @@
-+; Copyright (C) 2016  Internet Systems Consortium, Inc. ("ISC")
-+;
-+; This Source Code Form is subject to the terms of the Mozilla Public
-+; License, v. 2.0. If a copy of the MPL was not distributed with this
-+; file, You can obtain one at http://mozilla.org/MPL/2.0/.
-+
-+$TTL	3600
-+@	IN	SOA	. . 0 0 0 0 0
-+@	IN	NS	ns1
-+@	IN	NS	ns6
-+ns1	IN	A	10.53.0.1
-+ns6	IN	A	10.53.0.6
-+$GENERATE 1-25	host$	A	1.2.3.$
-diff --git a/bin/tests/system/xfer/ns1/named.conf b/bin/tests/system/xfer/ns1/named.conf
-index 07dad85..1d29292 100644
---- a/bin/tests/system/xfer/ns1/named.conf
-+++ b/bin/tests/system/xfer/ns1/named.conf
-@@ -44,3 +44,14 @@ zone "slave" {
- 	type master;
- 	file "slave.db";
- };
-+
-+zone "axfr-too-big" {
-+        type master;
-+        file "axfr-too-big.db";
-+};
-+
-+zone "ixfr-too-big" {
-+        type master;
-+	allow-update { any; };
-+        file "ixfr-too-big.db";
-+};
-diff --git a/bin/tests/system/xfer/ns6/named.conf b/bin/tests/system/xfer/ns6/named.conf
-index c9421b1..a12a92c 100644
---- a/bin/tests/system/xfer/ns6/named.conf
-+++ b/bin/tests/system/xfer/ns6/named.conf
-@@ -52,3 +52,17 @@ zone "slave" {
- 	masters { 10.53.0.1; };
- 	file "slave.bk";
- };
-+
-+zone "axfr-too-big" {
-+	type slave;
-+	max-records 30;
-+	masters { 10.53.0.1; };
-+	file "axfr-too-big.bk";
-+};
-+
-+zone "ixfr-too-big" {
-+	type slave;
-+	max-records 30;
-+	masters { 10.53.0.1; };
-+	file "ixfr-too-big.bk";
-+};
-diff --git a/bin/tests/system/xfer/setup.sh b/bin/tests/system/xfer/setup.sh
-index 56ca901..c55abf8 100644
---- a/bin/tests/system/xfer/setup.sh
-+++ b/bin/tests/system/xfer/setup.sh
-@@ -33,3 +33,5 @@ cp -f ns4/named.conf.base ns4/named.conf
- 
- cp ns2/slave.db.in ns2/slave.db
- touch -t 200101010000 ns2/slave.db
-+
-+cp -f ns1/ixfr-too-big.db.in ns1/ixfr-too-big.db
-diff --git a/bin/tests/system/xfer/tests.sh b/bin/tests/system/xfer/tests.sh
-index 67b2a1a..fe33f0a 100644
---- a/bin/tests/system/xfer/tests.sh
-+++ b/bin/tests/system/xfer/tests.sh
-@@ -368,5 +368,31 @@ $DIGCMD nil. TXT | grep 'incorrect key AXFR' >/dev/null && {
-     status=1
- }
- 
-+n=`expr $n + 1`
-+echo "I:test that a zone with too many records is rejected (AXFR) ($n)"
-+tmp=0
-+grep "'axfr-too-big/IN'.*: too many records" ns6/named.run >/dev/null || tmp=1
-+if test $tmp != 0 ; then echo "I:failed"; fi
-+status=`expr $status + $tmp`
-+
-+n=`expr $n + 1`
-+echo "I:test that a zone with too many records is rejected (IXFR) ($n)"
-+tmp=0
-+grep "'ixfr-too-big./IN.*: too many records" ns6/named.run >/dev/null && tmp=1
-+$NSUPDATE << EOF
-+zone ixfr-too-big
-+server 10.53.0.1 5300
-+update add the-31st-record.ixfr-too-big 0 TXT this is it
-+send
-+EOF
-+for i in 1 2 3 4 5 6 7 8
-+do
-+    grep "'ixfr-too-big/IN'.*: too many records" ns6/named.run >/dev/null && break
-+    sleep 1
-+done
-+grep "'ixfr-too-big/IN'.*: too many records" ns6/named.run >/dev/null || tmp=1
-+if test $tmp != 0 ; then echo "I:failed"; fi
-+status=`expr $status + $tmp`
-+
- echo "I:exit status: $status"
- exit $status
-diff --git a/doc/arm/Bv9ARM-book.xml b/doc/arm/Bv9ARM-book.xml
-index 848b582..0369505 100644
---- a/doc/arm/Bv9ARM-book.xml
-+++ b/doc/arm/Bv9ARM-book.xml
-@@ -4858,6 +4858,7 @@ badresp:1,adberr:0,findfail:0,valfail:0]
-     <optional> use-queryport-pool <replaceable>yes_or_no</replaceable>; </optional>
-     <optional> queryport-pool-ports <replaceable>number</replaceable>; </optional>
-     <optional> queryport-pool-updateinterval <replaceable>number</replaceable>; </optional>
-+    <optional> max-records <replaceable>number</replaceable>; </optional>
-     <optional> max-transfer-time-in <replaceable>number</replaceable>; </optional>
-     <optional> max-transfer-time-out <replaceable>number</replaceable>; </optional>
-     <optional> max-transfer-idle-in <replaceable>number</replaceable>; </optional>
-@@ -8164,6 +8165,16 @@ avoid-v6-udp-ports { 40000; range 50000 60000; };
- 	    </varlistentry>
- 
- 	    <varlistentry>
-+	      <term><command>max-records</command></term>
-+	      <listitem>
-+		<para>
-+		  The maximum number of records permitted in a zone.
-+		  The default is zero which means unlimited.
-+	 	</para>
-+	      </listitem>
-+	    </varlistentry>
-+
-+	    <varlistentry>
- 	      <term><command>host-statistics-max</command></term>
- 	      <listitem>
- 		<para>
-@@ -12056,6 +12067,16 @@ zone <replaceable>zone_name</replaceable> <optional><replaceable>class</replacea
- 	      </varlistentry>
- 
- 	      <varlistentry>
-+		<term><command>max-records</command></term>
-+		<listitem>
-+		  <para>
-+		    See the description of
-+		    <command>max-records</command> in <xref linkend="server_resource_limits"/>.
-+		  </para>
-+		</listitem>
-+	      </varlistentry>
-+
-+	      <varlistentry>
- 		<term><command>max-transfer-time-in</command></term>
- 		<listitem>
- 		  <para>
-diff --git a/doc/arm/notes.xml b/doc/arm/notes.xml
-index 095eb5b..36495e7 100644
---- a/doc/arm/notes.xml
-+++ b/doc/arm/notes.xml
-@@ -52,6 +52,15 @@
-     <itemizedlist>
-       <listitem>
-        <para>
-+	  Added the ability to specify the maximum number of records
-+	  permitted in a zone (max-records #;).  This provides a mechanism
-+	  to block overly large zone transfers, which is a potential risk
-+	  with slave zones from other parties, as described in CVE-2016-6170.
-+	  [RT #42143]
-+	</para>
-+      </listitem>
-+      <listitem>
-+	<para>
-          Duplicate EDNS COOKIE options in a response could trigger
-          an assertion failure. This flaw is disclosed in CVE-2016-2088.
-          [RT #41809]
-diff --git a/lib/bind9/check.c b/lib/bind9/check.c
-index b8c05dd..edb7534 100644
---- a/lib/bind9/check.c
-+++ b/lib/bind9/check.c
-@@ -1510,6 +1510,8 @@ check_zoneconf(const cfg_obj_t *zconfig, const cfg_obj_t *voptions,
- 	  REDIRECTZONE },
- 	{ "masters", SLAVEZONE | STUBZONE | REDIRECTZONE },
- 	{ "max-ixfr-log-size", MASTERZONE | SLAVEZONE | STREDIRECTZONE },
-+	{ "max-records", MASTERZONE | SLAVEZONE | STUBZONE | STREDIRECTZONE |
-+          STATICSTUBZONE | REDIRECTZONE },
- 	{ "max-refresh-time", SLAVEZONE | STUBZONE | STREDIRECTZONE },
- 	{ "max-retry-time", SLAVEZONE | STUBZONE | STREDIRECTZONE },
- 	{ "max-transfer-idle-in", SLAVEZONE | STUBZONE | STREDIRECTZONE },
-diff --git a/lib/dns/db.c b/lib/dns/db.c
-index 7e4f357..ced94a5 100644
---- a/lib/dns/db.c
-+++ b/lib/dns/db.c
-@@ -999,6 +999,19 @@ dns_db_getnsec3parameters(dns_db_t *db, dns_dbversion_t *version,
- }
- 
- isc_result_t
-+dns_db_getsize(dns_db_t *db, dns_dbversion_t *version, isc_uint64_t *records,
-+	       isc_uint64_t *bytes)
-+{
-+	REQUIRE(DNS_DB_VALID(db));
-+	REQUIRE(dns_db_iszone(db) == ISC_TRUE);
-+
-+	if (db->methods->getsize != NULL)
-+		return ((db->methods->getsize)(db, version, records, bytes));
-+
-+	return (ISC_R_NOTFOUND);
-+}
-+
-+isc_result_t
- dns_db_setsigningtime(dns_db_t *db, dns_rdataset_t *rdataset,
- 		      isc_stdtime_t resign)
- {
-diff --git a/lib/dns/ecdb.c b/lib/dns/ecdb.c
-index 553a339..b5d04d2 100644
---- a/lib/dns/ecdb.c
-+++ b/lib/dns/ecdb.c
-@@ -587,7 +587,8 @@ static dns_dbmethods_t ecdb_methods = {
- 	NULL,			/* findnodeext */
- 	NULL,			/* findext */
- 	NULL,			/* setcachestats */
--	NULL			/* hashsize */
-+	NULL,			/* hashsize */
-+	NULL			/* getsize */
- };
- 
- static isc_result_t
-diff --git a/lib/dns/include/dns/db.h b/lib/dns/include/dns/db.h
-index a4a4482..aff42d6 100644
---- a/lib/dns/include/dns/db.h
-+++ b/lib/dns/include/dns/db.h
-@@ -195,6 +195,8 @@ typedef struct dns_dbmethods {
- 				   dns_rdataset_t *sigrdataset);
- 	isc_result_t	(*setcachestats)(dns_db_t *db, isc_stats_t *stats);
- 	unsigned int	(*hashsize)(dns_db_t *db);
-+	isc_result_t	(*getsize)(dns_db_t *db, dns_dbversion_t *version,
-+				   isc_uint64_t *records, isc_uint64_t *bytes);
- } dns_dbmethods_t;
- 
- typedef isc_result_t
-@@ -1485,6 +1487,24 @@ dns_db_getnsec3parameters(dns_db_t *db, dns_dbversion_t *version,
-  */
- 
- isc_result_t
-+dns_db_getsize(dns_db_t *db, dns_dbversion_t *version, isc_uint64_t *records,
-+               isc_uint64_t *bytes);
-+/*%<
-+ * Get the number of records in the given version of the database as well
-+ * as the number of bytes used to store those records.
-+ *
-+ * Requires:
-+ * \li	'db' is a valid zone database.
-+ * \li	'version' is NULL or a valid version.
-+ * \li	'records' is NULL or a pointer to return the record count in.
-+ * \li	'bytes' is NULL or a pointer to return the byte count in.
-+ *
-+ * Returns:
-+ * \li	#ISC_R_SUCCESS
-+ * \li	#ISC_R_NOTIMPLEMENTED
-+ */
-+
-+isc_result_t
- dns_db_findnsec3node(dns_db_t *db, dns_name_t *name,
- 		     isc_boolean_t create, dns_dbnode_t **nodep);
- /*%<
-diff --git a/lib/dns/include/dns/rdataslab.h b/lib/dns/include/dns/rdataslab.h
-index 3ac44b8..2e1e759 100644
---- a/lib/dns/include/dns/rdataslab.h
-+++ b/lib/dns/include/dns/rdataslab.h
-@@ -104,6 +104,7 @@ dns_rdataslab_tordataset(unsigned char *slab, unsigned int reservelen,
-  * Ensures:
-  *\li	'rdataset' is associated and points to a valid rdataest.
-  */
-+
- unsigned int
- dns_rdataslab_size(unsigned char *slab, unsigned int reservelen);
- /*%<
-@@ -116,6 +117,18 @@ dns_rdataslab_size(unsigned char *slab, unsigned int reservelen);
-  *\li	The number of bytes in the slab, including the reservelen.
-  */
- 
-+unsigned int
-+dns_rdataslab_count(unsigned char *slab, unsigned int reservelen);
-+/*%<
-+ * Return the number of records in the rdataslab
-+ *
-+ * Requires:
-+ *\li	'slab' points to a slab.
-+ *
-+ * Returns:
-+ *\li	The number of records in the slab.
-+ */
-+
- isc_result_t
- dns_rdataslab_merge(unsigned char *oslab, unsigned char *nslab,
- 		    unsigned int reservelen, isc_mem_t *mctx,
-diff --git a/lib/dns/include/dns/result.h b/lib/dns/include/dns/result.h
-index 7d11c2b..93d1fd5 100644
---- a/lib/dns/include/dns/result.h
-+++ b/lib/dns/include/dns/result.h
-@@ -157,8 +157,12 @@
- #define DNS_R_BADCDS			(ISC_RESULTCLASS_DNS + 111)
- #define DNS_R_BADCDNSKEY		(ISC_RESULTCLASS_DNS + 112)
- #define DNS_R_OPTERR			(ISC_RESULTCLASS_DNS + 113)
-+#define DNS_R_BADDNSTAP			(ISC_RESULTCLASS_DNS + 114)
-+#define DNS_R_BADTSIG			(ISC_RESULTCLASS_DNS + 115)
-+#define DNS_R_BADSIG0			(ISC_RESULTCLASS_DNS + 116)
-+#define DNS_R_TOOMANYRECORDS		(ISC_RESULTCLASS_DNS + 117)
- 
--#define DNS_R_NRESULTS			114	/*%< Number of results */
-+#define DNS_R_NRESULTS			118	/*%< Number of results */
- 
- /*
-  * DNS wire format rcodes.
-diff --git a/lib/dns/include/dns/zone.h b/lib/dns/include/dns/zone.h
-index a9367f1..227540b 100644
---- a/lib/dns/include/dns/zone.h
-+++ b/lib/dns/include/dns/zone.h
-@@ -296,6 +296,32 @@ dns_zone_getfile(dns_zone_t *zone);
-  */
- 
- void
-+dns_zone_setmaxrecords(dns_zone_t *zone, isc_uint32_t records);
-+/*%<
-+ * 	Sets the maximum number of records permitted in a zone.
-+ *	0 implies unlimited.
-+ *
-+ * Requires:
-+ *\li	'zone' to be valid initialised zone.
-+ *
-+ * Returns:
-+ *\li	void
-+ */
-+
-+isc_uint32_t
-+dns_zone_getmaxrecords(dns_zone_t *zone);
-+/*%<
-+ * 	Gets the maximum number of records permitted in a zone.
-+ *	0 implies unlimited.
-+ *
-+ * Requires:
-+ *\li	'zone' to be valid initialised zone.
-+ *
-+ * Returns:
-+ *\li	isc_uint32_t maxrecords.
-+ */
-+
-+void
- dns_zone_setmaxttl(dns_zone_t *zone, isc_uint32_t maxttl);
- /*%<
-  * 	Sets the max ttl of the zone.
-@@ -316,7 +342,7 @@ dns_zone_getmaxttl(dns_zone_t *zone);
-  *\li	'zone' to be valid initialised zone.
-  *
-  * Returns:
-- *\li	isc_uint32_t maxttl.
-+ *\li	dns_ttl_t maxttl.
-  */
- 
- isc_result_t
-diff --git a/lib/dns/rbtdb.c b/lib/dns/rbtdb.c
-index 62becfc..72d722f 100644
---- a/lib/dns/rbtdb.c
-+++ b/lib/dns/rbtdb.c
-@@ -209,6 +209,7 @@ typedef isc_uint64_t                    rbtdb_serial_t;
- #define free_rbtdb_callback free_rbtdb_callback64
- #define free_rdataset free_rdataset64
- #define getnsec3parameters getnsec3parameters64
-+#define getsize getsize64
- #define getoriginnode getoriginnode64
- #define getrrsetstats getrrsetstats64
- #define getsigningtime getsigningtime64
-@@ -589,6 +590,13 @@ typedef struct rbtdb_version {
- 	isc_uint16_t			iterations;
- 	isc_uint8_t			salt_length;
- 	unsigned char			salt[DNS_NSEC3_SALTSIZE];
-+
-+	/*
-+	 * records and bytes are covered by rwlock.
-+	 */
-+	isc_rwlock_t                    rwlock;
-+	isc_uint64_t			records;
-+	isc_uint64_t			bytes;
- } rbtdb_version_t;
- 
- typedef ISC_LIST(rbtdb_version_t)       rbtdb_versionlist_t;
-@@ -1130,6 +1138,7 @@ free_rbtdb(dns_rbtdb_t *rbtdb, isc_boolean_t log, isc_event_t *event) {
- 		INSIST(refs == 0);
- 		UNLINK(rbtdb->open_versions, rbtdb->current_version, link);
- 		isc_refcount_destroy(&rbtdb->current_version->references);
-+		isc_rwlock_destroy(&rbtdb->current_version->rwlock);
- 		isc_mem_put(rbtdb->common.mctx, rbtdb->current_version,
- 			    sizeof(rbtdb_version_t));
- 	}
-@@ -1383,6 +1392,7 @@ allocate_version(isc_mem_t *mctx, rbtdb_serial_t serial,
- 
- static isc_result_t
- newversion(dns_db_t *db, dns_dbversion_t **versionp) {
-+	isc_result_t result;
- 	dns_rbtdb_t *rbtdb = (dns_rbtdb_t *)db;
- 	rbtdb_version_t *version;
- 
-@@ -1415,13 +1425,28 @@ newversion(dns_db_t *db, dns_dbversion_t **versionp) {
- 			version->salt_length = 0;
- 			memset(version->salt, 0, sizeof(version->salt));
- 		}
--		rbtdb->next_serial++;
--		rbtdb->future_version = version;
--	}
-+		result = isc_rwlock_init(&version->rwlock, 0, 0);
-+		if (result != ISC_R_SUCCESS) {
-+			isc_refcount_destroy(&version->references);
-+			isc_mem_put(rbtdb->common.mctx, version,
-+				    sizeof(*version));
-+			version = NULL;
-+		} else {
-+			RWLOCK(&rbtdb->current_version->rwlock,
-+			       isc_rwlocktype_read);
-+			version->records = rbtdb->current_version->records;
-+			version->bytes = rbtdb->current_version->bytes;
-+			RWUNLOCK(&rbtdb->current_version->rwlock,
-+				 isc_rwlocktype_read);
-+			rbtdb->next_serial++;
-+			rbtdb->future_version = version;
-+		}
-+	} else
-+		result = ISC_R_NOMEMORY;
- 	RBTDB_UNLOCK(&rbtdb->lock, isc_rwlocktype_write);
- 
- 	if (version == NULL)
--		return (ISC_R_NOMEMORY);
-+		return (result);
- 
- 	*versionp = version;
- 
-@@ -2681,6 +2706,7 @@ closeversion(dns_db_t *db, dns_dbversion_t **versionp, isc_boolean_t commit) {
- 
- 	if (cleanup_version != NULL) {
- 		INSIST(EMPTY(cleanup_version->changed_list));
-+		isc_rwlock_destroy(&cleanup_version->rwlock);
- 		isc_mem_put(rbtdb->common.mctx, cleanup_version,
- 			    sizeof(*cleanup_version));
- 	}
-@@ -6254,6 +6280,26 @@ add32(dns_rbtdb_t *rbtdb, dns_rbtnode_t *rbtnode, rbtdb_version_t *rbtversion,
- 		else
- 			rbtnode->data = newheader;
- 		newheader->next = topheader->next;
-+		if (rbtversion != NULL)
-+			RWLOCK(&rbtversion->rwlock, isc_rwlocktype_write);
-+		if (rbtversion != NULL && !header_nx) {
-+			rbtversion->records -=
-+				dns_rdataslab_count((unsigned char *)header,
-+						    sizeof(*header));
-+			rbtversion->bytes -=
-+				dns_rdataslab_size((unsigned char *)header,
-+						   sizeof(*header));
-+		}
-+		if (rbtversion != NULL && !newheader_nx) {
-+			rbtversion->records +=
-+				dns_rdataslab_count((unsigned char *)newheader,
-+						    sizeof(*newheader));
-+			rbtversion->bytes +=
-+				dns_rdataslab_size((unsigned char *)newheader,
-+						   sizeof(*newheader));
-+		}
-+		if (rbtversion != NULL)
-+			RWUNLOCK(&rbtversion->rwlock, isc_rwlocktype_write);
- 		if (loading) {
- 			/*
- 			 * There are no other references to 'header' when
-@@ -6355,6 +6401,16 @@ add32(dns_rbtdb_t *rbtdb, dns_rbtnode_t *rbtnode, rbtdb_version_t *rbtversion,
- 			newheader->down = NULL;
- 			rbtnode->data = newheader;
- 		}
-+		if (rbtversion != NULL && !newheader_nx) {
-+			RWLOCK(&rbtversion->rwlock, isc_rwlocktype_write);
-+			rbtversion->records +=
-+				dns_rdataslab_count((unsigned char *)newheader,
-+						    sizeof(*newheader));
-+			rbtversion->bytes +=
-+				dns_rdataslab_size((unsigned char *)newheader,
-+						   sizeof(*newheader));
-+			RWUNLOCK(&rbtversion->rwlock, isc_rwlocktype_write);
-+		}
- 		idx = newheader->node->locknum;
- 		if (IS_CACHE(rbtdb)) {
- 			ISC_LIST_PREPEND(rbtdb->rdatasets[idx],
-@@ -6811,6 +6867,12 @@ subtractrdataset(dns_db_t *db, dns_dbnode_t *node, dns_dbversion_t *version,
- 			 */
- 			newheader->additional_auth = NULL;
- 			newheader->additional_glue = NULL;
-+			rbtversion->records +=
-+				dns_rdataslab_count((unsigned char *)newheader,
-+						    sizeof(*newheader));
-+			rbtversion->bytes +=
-+				dns_rdataslab_size((unsigned char *)newheader,
-+						   sizeof(*newheader));
- 		} else if (result == DNS_R_NXRRSET) {
- 			/*
- 			 * This subtraction would remove all of the rdata;
-@@ -6846,6 +6908,12 @@ subtractrdataset(dns_db_t *db, dns_dbnode_t *node, dns_dbversion_t *version,
- 		 * topheader.
- 		 */
- 		INSIST(rbtversion->serial >= topheader->serial);
-+		rbtversion->records -=
-+				dns_rdataslab_count((unsigned char *)header,
-+						    sizeof(*header));
-+		rbtversion->bytes -=
-+				dns_rdataslab_size((unsigned char *)header,
-+						   sizeof(*header));
- 		if (topheader_prev != NULL)
- 			topheader_prev->next = newheader;
- 		else
-@@ -7172,6 +7240,7 @@ rbt_datafixer(dns_rbtnode_t *rbtnode, void *base, size_t filesize,
- 	unsigned char *limit = ((unsigned char *) base) + filesize;
- 	unsigned char *p;
- 	size_t size;
-+	unsigned int count;
- 
- 	REQUIRE(rbtnode != NULL);
- 
-@@ -7179,6 +7248,9 @@ rbt_datafixer(dns_rbtnode_t *rbtnode, void *base, size_t filesize,
- 		p = (unsigned char *) header;
- 
- 		size = dns_rdataslab_size(p, sizeof(*header));
-+		count = dns_rdataslab_count(p, sizeof(*header));;
-+		rbtdb->current_version->records += count;
-+		rbtdb->current_version->bytes += size;
- 		isc_crc64_update(crc, p, size);
- #ifdef DEBUG
- 		hexdump("hashing header", p, sizeof(rdatasetheader_t));
-@@ -7777,6 +7849,33 @@ getnsec3parameters(dns_db_t *db, dns_dbversion_t *version, dns_hash_t *hash,
- }
- 
- static isc_result_t
-+getsize(dns_db_t *db, dns_dbversion_t *version, isc_uint64_t *records,
-+        isc_uint64_t *bytes)
-+{
-+	dns_rbtdb_t *rbtdb;
-+	isc_result_t result = ISC_R_SUCCESS;
-+	rbtdb_version_t *rbtversion = version;
-+
-+	rbtdb = (dns_rbtdb_t *)db;
-+
-+	REQUIRE(VALID_RBTDB(rbtdb));
-+	INSIST(rbtversion == NULL || rbtversion->rbtdb == rbtdb);
-+
-+	if (rbtversion == NULL)
-+		rbtversion = rbtdb->current_version;
-+
-+	RWLOCK(&rbtversion->rwlock, isc_rwlocktype_read);
-+	if (records != NULL)
-+		*records = rbtversion->records;
-+
-+	if (bytes != NULL)
-+		*bytes = rbtversion->bytes;
-+	RWUNLOCK(&rbtversion->rwlock, isc_rwlocktype_read);
-+
-+	return (result);
-+}
-+
-+static isc_result_t
- setsigningtime(dns_db_t *db, dns_rdataset_t *rdataset, isc_stdtime_t resign) {
- 	dns_rbtdb_t *rbtdb = (dns_rbtdb_t *)db;
- 	isc_stdtime_t oldresign;
-@@ -7972,7 +8071,8 @@ static dns_dbmethods_t zone_methods = {
- 	NULL,
- 	NULL,
- 	NULL,
--	hashsize
-+	hashsize,
-+	getsize
- };
- 
- static dns_dbmethods_t cache_methods = {
-@@ -8018,7 +8118,8 @@ static dns_dbmethods_t cache_methods = {
- 	NULL,
- 	NULL,
- 	setcachestats,
--	hashsize
-+	hashsize,
-+	NULL
- };
- 
- isc_result_t
-@@ -8310,6 +8411,20 @@ dns_rbtdb_create
- 	rbtdb->current_version->salt_length = 0;
- 	memset(rbtdb->current_version->salt, 0,
- 	       sizeof(rbtdb->current_version->salt));
-+	result = isc_rwlock_init(&rbtdb->current_version->rwlock, 0, 0);
-+	if (result != ISC_R_SUCCESS) {
-+		isc_refcount_destroy(&rbtdb->current_version->references);
-+		isc_mem_put(mctx, rbtdb->current_version,
-+			    sizeof(*rbtdb->current_version));
-+		rbtdb->current_version = NULL;
-+		isc_refcount_decrement(&rbtdb->references, NULL);
-+		isc_refcount_destroy(&rbtdb->references);
-+		free_rbtdb(rbtdb, ISC_FALSE, NULL);
-+		return (result);
-+	}
-+
-+	rbtdb->current_version->records = 0;
-+	rbtdb->current_version->bytes = 0;
- 	rbtdb->future_version = NULL;
- 	ISC_LIST_INIT(rbtdb->open_versions);
- 	/*
-diff --git a/lib/dns/rdataslab.c b/lib/dns/rdataslab.c
-index e29dc84..63e3728 100644
---- a/lib/dns/rdataslab.c
-+++ b/lib/dns/rdataslab.c
-@@ -523,6 +523,19 @@ dns_rdataslab_size(unsigned char *slab, unsigned int reservelen) {
- 	return ((unsigned int)(current - slab));
- }
- 
-+unsigned int
-+dns_rdataslab_count(unsigned char *slab, unsigned int reservelen) {
-+	unsigned int count;
-+	unsigned char *current;
-+
-+	REQUIRE(slab != NULL);
-+
-+	current = slab + reservelen;
-+	count = *current++ * 256;
-+	count += *current++;
-+	return (count);
-+}
-+
- /*
-  * Make the dns_rdata_t 'rdata' refer to the slab item
-  * beginning at '*current', which is part of a slab of type
-diff --git a/lib/dns/result.c b/lib/dns/result.c
-index 7be4f57..a621909 100644
---- a/lib/dns/result.c
-+++ b/lib/dns/result.c
-@@ -167,11 +167,16 @@ static const char *text[DNS_R_NRESULTS] = {
- 	"covered by negative trust anchor",    /*%< 110 DNS_R_NTACOVERED */
- 	"bad CDS",			       /*%< 111 DNS_R_BADCSD */
- 	"bad CDNSKEY",			       /*%< 112 DNS_R_BADCDNSKEY */
--	"malformed OPT option"		       /*%< 113 DNS_R_OPTERR */
-+	"malformed OPT option",		       /*%< 113 DNS_R_OPTERR */
-+	"malformed DNSTAP data",	       /*%< 114 DNS_R_BADDNSTAP */
-+
-+	"TSIG in wrong location",	       /*%< 115 DNS_R_BADTSIG */
-+	"SIG(0) in wrong location",	       /*%< 116 DNS_R_BADSIG0 */
-+	"too many records",	               /*%< 117 DNS_R_TOOMANYRECORDS */
- };
- 
- static const char *rcode_text[DNS_R_NRCODERESULTS] = {
--	"NOERROR",				/*%< 0 DNS_R_NOEROR */
-+	"NOERROR",				/*%< 0 DNS_R_NOERROR */
- 	"FORMERR",				/*%< 1 DNS_R_FORMERR */
- 	"SERVFAIL",				/*%< 2 DNS_R_SERVFAIL */
- 	"NXDOMAIN",				/*%< 3 DNS_R_NXDOMAIN */
-diff --git a/lib/dns/sdb.c b/lib/dns/sdb.c
-index abfeeb0..19397e0 100644
---- a/lib/dns/sdb.c
-+++ b/lib/dns/sdb.c
-@@ -1298,7 +1298,8 @@ static dns_dbmethods_t sdb_methods = {
- 	findnodeext,
- 	findext,
- 	NULL,			/* setcachestats */
--	NULL			/* hashsize */
-+	NULL,			/* hashsize */
-+	NULL			/* getsize */
- };
- 
- static isc_result_t
-diff --git a/lib/dns/sdlz.c b/lib/dns/sdlz.c
-index b1198a4..0e3163d 100644
---- a/lib/dns/sdlz.c
-+++ b/lib/dns/sdlz.c
-@@ -1269,7 +1269,8 @@ static dns_dbmethods_t sdlzdb_methods = {
- 	findnodeext,
- 	findext,
- 	NULL,			/* setcachestats */
--	NULL			/* hashsize */
-+	NULL,			/* hashsize */
-+	NULL			/* getsize */
- };
- 
- /*
-diff --git a/lib/dns/xfrin.c b/lib/dns/xfrin.c
-index 2a6c1b4..ac566e1 100644
---- a/lib/dns/xfrin.c
-+++ b/lib/dns/xfrin.c
-@@ -149,6 +149,9 @@ struct dns_xfrin_ctx {
- 	unsigned int		nrecs;		/*%< Number of records recvd */
- 	isc_uint64_t		nbytes;		/*%< Number of bytes received */
- 
-+	unsigned int		maxrecords;	/*%< The maximum number of
-+						     records set for the zone */
-+
- 	isc_time_t		start;		/*%< Start time of the transfer */
- 	isc_time_t		end;		/*%< End time of the transfer */
- 
-@@ -309,10 +312,18 @@ axfr_putdata(dns_xfrin_ctx_t *xfr, dns_diffop_t op,
- static isc_result_t
- axfr_apply(dns_xfrin_ctx_t *xfr) {
- 	isc_result_t result;
-+	isc_uint64_t records;
- 
- 	CHECK(dns_diff_load(&xfr->diff, xfr->axfr.add, xfr->axfr.add_private));
- 	xfr->difflen = 0;
- 	dns_diff_clear(&xfr->diff);
-+	if (xfr->maxrecords != 0U) {
-+		result = dns_db_getsize(xfr->db, xfr->ver, &records, NULL);
-+		if (result == ISC_R_SUCCESS && records > xfr->maxrecords) {
-+			result = DNS_R_TOOMANYRECORDS;
-+			goto failure;
-+		}
-+	}
- 	result = ISC_R_SUCCESS;
-  failure:
- 	return (result);
-@@ -396,6 +407,7 @@ ixfr_putdata(dns_xfrin_ctx_t *xfr, dns_diffop_t op,
- static isc_result_t
- ixfr_apply(dns_xfrin_ctx_t *xfr) {
- 	isc_result_t result;
-+	isc_uint64_t records;
- 
- 	if (xfr->ver == NULL) {
- 		CHECK(dns_db_newversion(xfr->db, &xfr->ver));
-@@ -403,6 +415,13 @@ ixfr_apply(dns_xfrin_ctx_t *xfr) {
- 			CHECK(dns_journal_begin_transaction(xfr->ixfr.journal));
- 	}
- 	CHECK(dns_diff_apply(&xfr->diff, xfr->db, xfr->ver));
-+	if (xfr->maxrecords != 0U) {
-+		result = dns_db_getsize(xfr->db, xfr->ver, &records, NULL);
-+		if (result == ISC_R_SUCCESS && records > xfr->maxrecords) {
-+			result = DNS_R_TOOMANYRECORDS;
-+			goto failure;
-+		}
-+	}
- 	if (xfr->ixfr.journal != NULL) {
- 		result = dns_journal_writediff(xfr->ixfr.journal, &xfr->diff);
- 		if (result != ISC_R_SUCCESS)
-@@ -759,7 +778,7 @@ xfrin_reset(dns_xfrin_ctx_t *xfr) {
- 
- static void
- xfrin_fail(dns_xfrin_ctx_t *xfr, isc_result_t result, const char *msg) {
--	if (result != DNS_R_UPTODATE) {
-+	if (result != DNS_R_UPTODATE && result != DNS_R_TOOMANYRECORDS) {
- 		xfrin_log(xfr, ISC_LOG_ERROR, "%s: %s",
- 			  msg, isc_result_totext(result));
- 		if (xfr->is_ixfr)
-@@ -852,6 +871,7 @@ xfrin_create(isc_mem_t *mctx,
- 	xfr->nmsg = 0;
- 	xfr->nrecs = 0;
- 	xfr->nbytes = 0;
-+	xfr->maxrecords = dns_zone_getmaxrecords(zone);
- 	isc_time_now(&xfr->start);
- 
- 	xfr->tsigkey = NULL;
-diff --git a/lib/dns/zone.c b/lib/dns/zone.c
-index 90e558d..2b0d8e4 100644
---- a/lib/dns/zone.c
-+++ b/lib/dns/zone.c
-@@ -253,6 +253,8 @@ struct dns_zone {
- 	isc_uint32_t		maxretry;
- 	isc_uint32_t		minretry;
- 
-+	isc_uint32_t		maxrecords;
-+
- 	isc_sockaddr_t		*masters;
- 	isc_dscp_t		*masterdscps;
- 	dns_name_t		**masterkeynames;
-@@ -10088,6 +10090,20 @@ dns_zone_setmaxretrytime(dns_zone_t *zone, isc_uint32_t val) {
- 	zone->maxretry = val;
- }
- 
-+isc_uint32_t
-+dns_zone_getmaxrecords(dns_zone_t *zone) {
-+        REQUIRE(DNS_ZONE_VALID(zone));
-+
-+	return (zone->maxrecords);
-+}
-+
-+void
-+dns_zone_setmaxrecords(dns_zone_t *zone, isc_uint32_t val) {
-+        REQUIRE(DNS_ZONE_VALID(zone));
-+
-+	zone->maxrecords = val;
-+}
-+
- static isc_boolean_t
- notify_isqueued(dns_zone_t *zone, unsigned int flags, dns_name_t *name,
- 		isc_sockaddr_t *addr, dns_tsigkey_t *key)
-@@ -14431,7 +14447,7 @@ zone_xfrdone(dns_zone_t *zone, isc_result_t result) {
- 	DNS_ZONE_CLRFLAG(zone, DNS_ZONEFLG_SOABEFOREAXFR);
- 
- 	TIME_NOW(&now);
--	switch (result) {
-+	switch (xfrresult) {
- 	case ISC_R_SUCCESS:
- 		DNS_ZONE_SETFLAG(zone, DNS_ZONEFLG_NEEDNOTIFY);
- 		/*FALLTHROUGH*/
-@@ -14558,6 +14574,11 @@ zone_xfrdone(dns_zone_t *zone, isc_result_t result) {
- 		DNS_ZONE_SETFLAG(zone, DNS_ZONEFLAG_NOIXFR);
- 		goto same_master;
- 
-+	case DNS_R_TOOMANYRECORDS:
-+		DNS_ZONE_JITTER_ADD(&now, zone->refresh, &zone->refreshtime);
-+		inc_stats(zone, dns_zonestatscounter_xfrfail);
-+		break;
-+
- 	default:
- 	next_master:
- 		/*
-diff --git a/lib/isccfg/namedconf.c b/lib/isccfg/namedconf.c
-index 780ab46..e7ff1cc 100644
---- a/lib/isccfg/namedconf.c
-+++ b/lib/isccfg/namedconf.c
-@@ -1679,6 +1679,7 @@ zone_clauses[] = {
- 	{ "masterfile-format", &cfg_type_masterformat, 0 },
- 	{ "max-ixfr-log-size", &cfg_type_size, CFG_CLAUSEFLAG_OBSOLETE },
- 	{ "max-journal-size", &cfg_type_sizenodefault, 0 },
-+	{ "max-records", &cfg_type_uint32, 0 },
- 	{ "max-refresh-time", &cfg_type_uint32, 0 },
- 	{ "max-retry-time", &cfg_type_uint32, 0 },
- 	{ "max-transfer-idle-in", &cfg_type_uint32, 0 },
--- 
-2.7.4
-
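The CVE-2016-6170 patch removed above introduces a max-records zone option (0 meaning unlimited) and, after a dynamic update or zone transfer is applied, compares the record count reported by dns_db_getsize() against that limit, failing with DNS_R_TOOMANYRECORDS when it is exceeded. A standalone sketch of the check follows, not BIND code; the struct, constants and counts are invented for illustration.

/*
 * Sketch: apply a change to a zone only if the resulting record count
 * stays within the configured max-records limit (0 = unlimited).
 */
#include <stdint.h>
#include <stdio.h>

#define R_SUCCESS        0
#define R_TOOMANYRECORDS 1

struct zone {
	uint64_t records;    /* current number of records in the zone */
	uint32_t maxrecords; /* configured limit; 0 means unlimited */
};

static int apply_change(struct zone *z, uint64_t added) {
	uint64_t newcount = z->records + added;

	if (z->maxrecords != 0U && newcount > z->maxrecords)
		return R_TOOMANYRECORDS; /* refuse; leave the zone untouched */

	z->records = newcount;
	return R_SUCCESS;
}

int main(void) {
	struct zone z = { .records = 3, .maxrecords = 5 };

	printf("add 1: %s\n", apply_change(&z, 1) == R_SUCCESS
	       ? "ok" : "too many records");
	printf("add 4: %s\n", apply_change(&z, 4) == R_SUCCESS
	       ? "ok" : "too many records");
	return 0;
}

In the patch itself the same comparison appears in named's update_action(), in axfr_apply() and in ixfr_apply() in lib/dns/xfrin.c, each mapping the overflow to DNS_R_TOOMANYRECORDS ("too many records").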
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-8864.patch b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-8864.patch
deleted file mode 100644
index b52d680..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/CVE-2016-8864.patch
+++ /dev/null
@@ -1,219 +0,0 @@
-From c1d0599a246f646d1c22018f8fa09459270a44b8 Mon Sep 17 00:00:00 2001
-From: Mark Andrews <marka@isc.org>
-Date: Fri, 21 Oct 2016 14:55:10 +1100
-Subject: [PATCH] 4489. [security] It was possible to trigger assertions when
- processing a response. (CVE-2016-8864) [RT #43465]
-
-(cherry picked from commit bd6f27f5c353133b563fe69100b2f168c129f3ca)
-
-Upstream-Status: Backport
-[https://source.isc.org/cgi-bin/gitweb.cgi?p=bind9.git;a=commit;h=c1d0599a246f646d1c22018f8fa09459270a44b8]
-
-CVE: CVE-2016-8864
-
-Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
----
- CHANGES            |  3 +++
- lib/dns/resolver.c | 69 +++++++++++++++++++++++++++++++++++++-----------------
- 2 files changed, 50 insertions(+), 22 deletions(-)
-
-diff --git a/CHANGES b/CHANGES
-index 5c8c61a..41cfce5 100644
---- a/CHANGES
-+++ b/CHANGES
-@@ -1,3 +1,6 @@
-+4489.	[security]	It was possible to trigger assertions when processing
-+			a response. (CVE-2016-8864) [RT #43465]
-+
- 4467.   [security]      It was possible to trigger an assertion when
-                         rendering a message. (CVE-2016-2776) [RT #43139]
- 
-diff --git a/lib/dns/resolver.c b/lib/dns/resolver.c
-index ba1ae23..13c8b44 100644
---- a/lib/dns/resolver.c
-+++ b/lib/dns/resolver.c
-@@ -612,7 +612,9 @@ valcreate(fetchctx_t *fctx, dns_adbaddrinfo_t *addrinfo, dns_name_t *name,
- 	valarg->addrinfo = addrinfo;
- 
- 	if (!ISC_LIST_EMPTY(fctx->validators))
--		INSIST((valoptions & DNS_VALIDATOR_DEFER) != 0);
-+		valoptions |= DNS_VALIDATOR_DEFER;
-+	else
-+		valoptions &= ~DNS_VALIDATOR_DEFER;
- 
- 	result = dns_validator_create(fctx->res->view, name, type, rdataset,
- 				      sigrdataset, fctx->rmessage,
-@@ -5526,13 +5528,6 @@ cache_name(fetchctx_t *fctx, dns_name_t *name, dns_adbaddrinfo_t *addrinfo,
- 							   rdataset,
- 							   sigrdataset,
- 							   valoptions, task);
--					/*
--					 * Defer any further validations.
--					 * This prevents multiple validators
--					 * from manipulating fctx->rmessage
--					 * simultaneously.
--					 */
--					valoptions |= DNS_VALIDATOR_DEFER;
- 				}
- 			} else if (CHAINING(rdataset)) {
- 				if (rdataset->type == dns_rdatatype_cname)
-@@ -5647,6 +5642,11 @@ cache_name(fetchctx_t *fctx, dns_name_t *name, dns_adbaddrinfo_t *addrinfo,
- 				       eresult == DNS_R_NCACHENXRRSET);
- 			}
- 			event->result = eresult;
-+			if (adbp != NULL && *adbp != NULL) {
-+				if (anodep != NULL && *anodep != NULL)
-+					dns_db_detachnode(*adbp, anodep);
-+				dns_db_detach(adbp);
-+			}
- 			dns_db_attach(fctx->cache, adbp);
- 			dns_db_transfernode(fctx->cache, &node, anodep);
- 			clone_results(fctx);
-@@ -5897,6 +5897,11 @@ ncache_message(fetchctx_t *fctx, dns_adbaddrinfo_t *addrinfo,
- 		fctx->attributes |= FCTX_ATTR_HAVEANSWER;
- 		if (event != NULL) {
- 			event->result = eresult;
-+			if (adbp != NULL && *adbp != NULL) {
-+				if (anodep != NULL && *anodep != NULL)
-+					dns_db_detachnode(*adbp, anodep);
-+				dns_db_detach(adbp);
-+			}
- 			dns_db_attach(fctx->cache, adbp);
- 			dns_db_transfernode(fctx->cache, &node, anodep);
- 			clone_results(fctx);
-@@ -6718,13 +6723,15 @@ static isc_result_t
- answer_response(fetchctx_t *fctx) {
- 	isc_result_t result;
- 	dns_message_t *message;
--	dns_name_t *name, *dname, *qname, tname, *ns_name;
-+	dns_name_t *name, *dname = NULL, *qname, *dqname, tname, *ns_name;
-+	dns_name_t *cname = NULL;
- 	dns_rdataset_t *rdataset, *ns_rdataset;
- 	isc_boolean_t done, external, chaining, aa, found, want_chaining;
--	isc_boolean_t have_answer, found_cname, found_type, wanted_chaining;
-+	isc_boolean_t have_answer, found_cname, found_dname, found_type;
-+	isc_boolean_t wanted_chaining;
- 	unsigned int aflag;
- 	dns_rdatatype_t type;
--	dns_fixedname_t fdname, fqname;
-+	dns_fixedname_t fdname, fqname, fqdname;
- 	dns_view_t *view;
- 
- 	FCTXTRACE("answer_response");
-@@ -6738,6 +6745,7 @@ answer_response(fetchctx_t *fctx) {
- 
- 	done = ISC_FALSE;
- 	found_cname = ISC_FALSE;
-+	found_dname = ISC_FALSE;
- 	found_type = ISC_FALSE;
- 	chaining = ISC_FALSE;
- 	have_answer = ISC_FALSE;
-@@ -6747,12 +6755,13 @@ answer_response(fetchctx_t *fctx) {
- 		aa = ISC_TRUE;
- 	else
- 		aa = ISC_FALSE;
--	qname = &fctx->name;
-+	dqname = qname = &fctx->name;
- 	type = fctx->type;
- 	view = fctx->res->view;
-+	dns_fixedname_init(&fqdname);
- 	result = dns_message_firstname(message, DNS_SECTION_ANSWER);
- 	while (!done && result == ISC_R_SUCCESS) {
--		dns_namereln_t namereln;
-+		dns_namereln_t namereln, dnamereln;
- 		int order;
- 		unsigned int nlabels;
- 
-@@ -6760,6 +6769,8 @@ answer_response(fetchctx_t *fctx) {
- 		dns_message_currentname(message, DNS_SECTION_ANSWER, &name);
- 		external = ISC_TF(!dns_name_issubdomain(name, &fctx->domain));
- 		namereln = dns_name_fullcompare(qname, name, &order, &nlabels);
-+		dnamereln = dns_name_fullcompare(dqname, name, &order,
-+						 &nlabels);
- 		if (namereln == dns_namereln_equal) {
- 			wanted_chaining = ISC_FALSE;
- 			for (rdataset = ISC_LIST_HEAD(name->list);
-@@ -6854,7 +6865,7 @@ answer_response(fetchctx_t *fctx) {
- 					}
- 				} else if (rdataset->type == dns_rdatatype_rrsig
- 					   && rdataset->covers ==
--					   dns_rdatatype_cname
-+					      dns_rdatatype_cname
- 					   && !found_type) {
- 					/*
- 					 * We're looking for something else,
-@@ -6884,11 +6895,18 @@ answer_response(fetchctx_t *fctx) {
- 						 * a CNAME or DNAME).
- 						 */
- 						INSIST(!external);
--						if (aflag ==
--						    DNS_RDATASETATTR_ANSWER) {
-+						if ((rdataset->type !=
-+						     dns_rdatatype_cname) ||
-+						    !found_dname ||
-+						    (aflag ==
-+						     DNS_RDATASETATTR_ANSWER))
-+						{
- 							have_answer = ISC_TRUE;
-+							if (rdataset->type ==
-+							    dns_rdatatype_cname)
-+								cname = name;
- 							name->attributes |=
--								DNS_NAMEATTR_ANSWER;
-+							    DNS_NAMEATTR_ANSWER;
- 						}
- 						rdataset->attributes |= aflag;
- 						if (aa)
-@@ -6982,11 +7000,11 @@ answer_response(fetchctx_t *fctx) {
- 					return (DNS_R_FORMERR);
- 				}
- 
--				if (namereln != dns_namereln_subdomain) {
-+				if (dnamereln != dns_namereln_subdomain) {
- 					char qbuf[DNS_NAME_FORMATSIZE];
- 					char obuf[DNS_NAME_FORMATSIZE];
- 
--					dns_name_format(qname, qbuf,
-+					dns_name_format(dqname, qbuf,
- 							sizeof(qbuf));
- 					dns_name_format(name, obuf,
- 							sizeof(obuf));
-@@ -7001,7 +7019,7 @@ answer_response(fetchctx_t *fctx) {
- 					want_chaining = ISC_TRUE;
- 					POST(want_chaining);
- 					aflag = DNS_RDATASETATTR_ANSWER;
--					result = dname_target(rdataset, qname,
-+					result = dname_target(rdataset, dqname,
- 							      nlabels, &fdname);
- 					if (result == ISC_R_NOSPACE) {
- 						/*
-@@ -7018,10 +7036,13 @@ answer_response(fetchctx_t *fctx) {
- 
- 					dname = dns_fixedname_name(&fdname);
- 					if (!is_answertarget_allowed(view,
--							qname, rdataset->type,
--							dname, &fctx->domain)) {
-+						     dqname, rdataset->type,
-+						     dname, &fctx->domain))
-+					{
- 						return (DNS_R_SERVFAIL);
- 					}
-+					dqname = dns_fixedname_name(&fqdname);
-+					dns_name_copy(dname, dqname, NULL);
- 				} else {
- 					/*
- 					 * We've found a signature that
-@@ -7046,6 +7067,10 @@ answer_response(fetchctx_t *fctx) {
- 					INSIST(!external);
- 					if (aflag == DNS_RDATASETATTR_ANSWER) {
- 						have_answer = ISC_TRUE;
-+						found_dname = ISC_TRUE;
-+						if (cname != NULL)
-+							cname->attributes &=
-+							   ~DNS_NAMEATTR_ANSWER;
- 						name->attributes |=
- 							DNS_NAMEATTR_ANSWER;
- 					}
--- 
-2.7.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/bind-confgen-build-unix.o-once.patch b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/bind-confgen-build-unix.o-once.patch
index 096d5d8..8bc4ea3 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/bind-confgen-build-unix.o-once.patch
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/bind-confgen-build-unix.o-once.patch
@@ -17,24 +17,28 @@
 Upstream-Status: Pending
 
 Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
+
+Update context (trailing whitespace) for version 9.10.5-P3.
+
+Signed-off-by: Kai Kang <kai.kang@windriver.com>
 ---
  bin/confgen/Makefile.in |    4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/bin/confgen/Makefile.in b/bin/confgen/Makefile.in
-index 8b3e5aa..4868a24 100644
+index dca272f..02becce 100644
 --- a/bin/confgen/Makefile.in
 +++ b/bin/confgen/Makefile.in
 @@ -74,11 +74,11 @@ rndc-confgen.@O@: rndc-confgen.c
  ddns-confgen.@O@: ddns-confgen.c
  	${LIBTOOL_MODE_COMPILE} ${CC} ${ALL_CFLAGS} -c ${srcdir}/ddns-confgen.c
  
--rndc-confgen@EXEEXT@: rndc-confgen.@O@ util.@O@ keygen.@O@ ${UOBJS} ${CONFDEPLIBS} 
+-rndc-confgen@EXEEXT@: rndc-confgen.@O@ util.@O@ keygen.@O@ ${UOBJS} ${CONFDEPLIBS}
 +rndc-confgen@EXEEXT@: rndc-confgen.@O@ util.@O@ keygen.@O@ ${CONFDEPLIBS} $(SUBDIRS)
  	export BASEOBJS="rndc-confgen.@O@ util.@O@ keygen.@O@ ${UOBJS}"; \
  	${FINALBUILDCMD}
  
--ddns-confgen@EXEEXT@: ddns-confgen.@O@ util.@O@ keygen.@O@ ${UOBJS} ${CONFDEPLIBS} 
+-ddns-confgen@EXEEXT@: ddns-confgen.@O@ util.@O@ keygen.@O@ ${UOBJS} ${CONFDEPLIBS}
 +ddns-confgen@EXEEXT@: ddns-confgen.@O@ util.@O@ keygen.@O@ ${CONFDEPLIBS} $(SUBDIRS)
  	export BASEOBJS="ddns-confgen.@O@ util.@O@ keygen.@O@ ${UOBJS}"; \
  	${FINALBUILDCMD}
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/generate-rndc-key.sh b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/generate-rndc-key.sh
index db20127..ef915c0 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/generate-rndc-key.sh
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/generate-rndc-key.sh
@@ -3,5 +3,6 @@
 if [ ! -s /etc/bind/rndc.key ]; then
     echo -n "Generating /etc/bind/rndc.key:"
     /usr/sbin/rndc-confgen -a -b 512 -r /dev/urandom
+    chown root:bind /etc/bind/rndc.key
     chmod 0640 /etc/bind/rndc.key
 fi
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/mips1-not-support-opcode.diff b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/mips1-not-support-opcode.diff
deleted file mode 100644
index 2930796..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/mips1-not-support-opcode.diff
+++ /dev/null
@@ -1,104 +0,0 @@
-bind: port a patch to fix a build failure
-
-mips1 does not support ll and sc instructions, and lead to below error, now
-we port a patch from debian to fix it
-[http://security.debian.org/debian-security/pool/updates/main/b/bind9/bind9_9.8.4.dfsg.P1-6+nmu2+deb7u1.diff.gz]
-
-| {standard input}: Assembler messages:
-| {standard input}:47: Error: Opcode not supported on this processor: mips1 (mips1) `ll $3,0($6)'
-| {standard input}:50: Error: Opcode not supported on this processor: mips1 (mips1) `sc $3,0($6)'
-
-Upstream-Status: Pending
-
-Signed-off-by: Roy Li <rongqing.li@windriver.com>
-
---- bind9-9.8.4.dfsg.P1.orig/lib/isc/mips/include/isc/atomic.h
-+++ bind9-9.8.4.dfsg.P1/lib/isc/mips/include/isc/atomic.h
-@@ -31,18 +31,20 @@
- isc_atomic_xadd(isc_int32_t *p, int val) {
- 	isc_int32_t orig;
- 
--	/* add is a cheat, since MIPS has no mov instruction */
--	__asm__ volatile (
--	    "1:"
--	    "ll $3, %1\n"
--	    "add %0, $0, $3\n"
--	    "add $3, $3, %2\n"
--	    "sc $3, %1\n"
--	    "beq $3, 0, 1b"
--	    : "=&r"(orig)
--	    : "m"(*p), "r"(val)
--	    : "memory", "$3"
--		);
-+	__asm__ __volatile__ (
-+	"	.set	push		\n"
-+	"	.set	mips2		\n"
-+	"	.set	noreorder	\n"
-+	"	.set	noat		\n"
-+	"1:	ll	$1, %1		\n"
-+	"	addu	%0, $1, %2	\n"
-+	"	sc	%0, %1		\n"
-+	"	beqz	%0, 1b		\n"
-+	"	move	%0, $1		\n"
-+	"	.set	pop		\n"
-+	: "=&r" (orig), "+R" (*p)
-+	: "r" (val)
-+	: "memory");
- 
- 	return (orig);
- }
-@@ -52,16 +54,7 @@
-  */
- static inline void
- isc_atomic_store(isc_int32_t *p, isc_int32_t val) {
--	__asm__ volatile (
--	    "1:"
--	    "ll $3, %0\n"
--	    "add $3, $0, %1\n"
--	    "sc $3, %0\n"
--	    "beq $3, 0, 1b"
--	    :
--	    : "m"(*p), "r"(val)
--	    : "memory", "$3"
--		);
-+	*p = val;
- }
- 
- /*
-@@ -72,20 +65,23 @@
- static inline isc_int32_t
- isc_atomic_cmpxchg(isc_int32_t *p, int cmpval, int val) {
- 	isc_int32_t orig;
-+	isc_int32_t tmp;
- 
--	__asm__ volatile(
--	    "1:"
--	    "ll $3, %1\n"
--	    "add %0, $0, $3\n"
--	    "bne $3, %2, 2f\n"
--	    "add $3, $0, %3\n"
--	    "sc $3, %1\n"
--	    "beq $3, 0, 1b\n"
--	    "2:"
--	    : "=&r"(orig)
--	    : "m"(*p), "r"(cmpval), "r"(val)
--	    : "memory", "$3"
--		);
-+	__asm__ __volatile__ (
-+	"	.set	push		\n"
-+	"	.set	mips2		\n"
-+	"	.set	noreorder	\n"
-+	"	.set	noat		\n"
-+	"1:	ll	$1, %1		\n"
-+	"	bne	$1, %3, 2f	\n"
-+	"	move	%2, %4		\n"
-+	"	sc	%2, %1		\n"
-+	"	beqz	%2, 1b		\n"
-+	"2:	move	%0, $1		\n"
-+	"	.set	pop		\n"
-+	: "=&r"(orig), "+R" (*p), "=r" (tmp)
-+	: "r"(cmpval), "r"(val)
-+	: "memory");
- 
- 	return (orig);
- }
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/use-python3-and-fix-install-lib-path.patch b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/use-python3-and-fix-install-lib-path.patch
new file mode 100644
index 0000000..9829f15
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind/use-python3-and-fix-install-lib-path.patch
@@ -0,0 +1,36 @@
+Use python3 rather than the default python, which may link to python2 in OE. Also add
+an option for setup.py to install files to the right directory.
+
+Upstream-Status: Inappropriate [OE specific]
+
+Signed-off-by: Kai Kang <kai.kang@windriver.com>
+---
+diff --git a/bin/python/Makefile.in b/bin/python/Makefile.in
+index a43a3c1..2e727f2 100644
+--- a/bin/python/Makefile.in
++++ b/bin/python/Makefile.in
+@@ -55,9 +55,9 @@ install:: ${TARGETS} installdirs
+ 	${INSTALL_DATA} ${srcdir}/dnssec-coverage.8 ${DESTDIR}${mandir}/man8
+ 	if test -n "${PYTHON}" ; then \
+ 		if test -n "${DESTDIR}" ; then \
+-			${PYTHON} ${srcdir}/setup.py install --root=${DESTDIR} --prefix=${prefix} ; \
++			${PYTHON} ${srcdir}/setup.py install --root=${DESTDIR} --prefix=${prefix} --install-lib=${PYTHON_SITEPACKAGES_DIR} ; \
+ 		else \
+-			${PYTHON} ${srcdir}/setup.py install --prefix=${prefix} ; \
++			${PYTHON} ${srcdir}/setup.py install --prefix=${prefix} --install-lib=${PYTHON_SITEPACKAGES_DIR} ; \
+ 		fi \
+ 	fi
+ 
+diff --git a/configure.in b/configure.in
+index 314bb90..867923e 100644
+--- a/configure.in
++++ b/configure.in
+@@ -227,7 +227,7 @@ AC_ARG_WITH(python,
+ [  --with-python=PATH      specify path to python interpreter],
+     use_python="$withval", use_python="unspec")
+ 
+-python="python python3 python3.5 python3.4 python3.3 python3.2 python2 python2.7"
++python="python3 python3.5 python3.4 python3.3 python3.2 python2 python2.7"
+ 
+ testargparse='try: import argparse
+ except: exit(1)'
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind_9.10.3-P3.bb b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind_9.10.3-P3.bb
deleted file mode 100644
index a802274..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind_9.10.3-P3.bb
+++ /dev/null
@@ -1,113 +0,0 @@
-SUMMARY = "ISC Internet Domain Name Server"
-HOMEPAGE = "http://www.isc.org/sw/bind/"
-SECTION = "console/network"
-
-LICENSE = "ISC & BSD"
-LIC_FILES_CHKSUM = "file://COPYRIGHT;md5=0a95f52a0ab6c5f52dedc9a45e7abb3f"
-
-DEPENDS = "openssl libcap"
-
-SRC_URI = "ftp://ftp.isc.org/isc/bind9/${PV}/${BPN}-${PV}.tar.gz \
-           file://conf.patch \
-           file://make-etc-initd-bind-stop-work.patch \
-           file://mips1-not-support-opcode.diff \
-           file://dont-test-on-host.patch \
-           file://generate-rndc-key.sh \
-           file://named.service \
-           file://bind9 \
-           file://init.d-add-support-for-read-only-rootfs.patch \
-           file://bind-confgen-build-unix.o-once.patch \
-           file://0001-build-use-pkg-config-to-find-libxml2.patch \
-           file://bind-ensure-searching-for-json-headers-searches-sysr.patch \
-           file://0001-gen.c-extend-DIRNAMESIZE-from-256-to-512.patch \
-           file://0001-lib-dns-gen.c-fix-too-long-error.patch \
-           file://CVE-2016-1285.patch \
-           file://CVE-2016-1286_1.patch \
-           file://CVE-2016-1286_2.patch \
-           file://CVE-2016-2088.patch \
-           file://CVE-2016-2775.patch \
-           file://CVE-2016-2776.patch \
-           file://CVE-2016-8864.patch \
-           file://CVE-2016-6170.patch \
-           "
-
-SRC_URI[md5sum] = "bcf7e772b616f7259420a3edc5df350a"
-SRC_URI[sha256sum] = "690810d1fbb72afa629e74638d19cd44e28d2b2e5eb63f55c705ad85d1a4cb83"
-
-ENABLE_IPV6 = "--enable-ipv6=${@bb.utils.contains('DISTRO_FEATURES', 'ipv6', 'yes', 'no', d)}"
-EXTRA_OECONF = " ${ENABLE_IPV6} --with-randomdev=/dev/random --disable-threads \
-                 --disable-devpoll --disable-epoll --with-gost=no \
-                 --with-gssapi=no --with-ecdsa=yes \
-                 --sysconfdir=${sysconfdir}/bind \
-                 --with-openssl=${STAGING_LIBDIR}/.. \
-               "
-inherit autotools update-rc.d systemd useradd pkgconfig
-
-# PACKAGECONFIGs readline and libedit should NOT be set at same time
-PACKAGECONFIG ?= "readline"
-PACKAGECONFIG[httpstats] = "--with-libxml2,--without-libxml2,libxml2"
-PACKAGECONFIG[readline] = "--with-readline=-lreadline,,readline"
-PACKAGECONFIG[libedit] = "--with-readline=-ledit,,libedit"
-
-USERADD_PACKAGES = "${PN}"
-USERADD_PARAM_${PN} = "--system --home ${localstatedir}/cache/bind --no-create-home \
-                       --user-group bind"
-
-INITSCRIPT_NAME = "bind"
-INITSCRIPT_PARAMS = "defaults"
-
-SYSTEMD_SERVICE_${PN} = "named.service"
-
-PARALLEL_MAKE = ""
-
-RDEPENDS_${PN} = "python3-core"
-RDEPENDS_${PN}-dev = ""
-
-PACKAGE_BEFORE_PN += "${PN}-utils"
-FILES_${PN}-utils = "${bindir}/host ${bindir}/dig"
-FILES_${PN}-dev += "${bindir}/isc-config.h"
-FILES_${PN} += "${sbindir}/generate-rndc-key.sh"
-
-do_install_prepend() {
-	# clean host path in isc-config.sh before the hardlink created
-	# by "make install":
-	#   bind9-config -> isc-config.sh
-	sed -i -e "s,${STAGING_LIBDIR},${libdir}," ${B}/isc-config.sh
-}
-
-do_install_append() {
-	rm "${D}${bindir}/nslookup"
-	rm "${D}${mandir}/man1/nslookup.1"
-	rmdir "${D}${localstatedir}/run"
-	rmdir --ignore-fail-on-non-empty "${D}${localstatedir}"
-	install -d -o bind "${D}${localstatedir}/cache/bind"
-	install -d "${D}${sysconfdir}/bind"
-	install -d "${D}${sysconfdir}/init.d"
-	install -m 644 ${S}/conf/* "${D}${sysconfdir}/bind/"
-	install -m 755 "${S}/init.d" "${D}${sysconfdir}/init.d/bind"
-	sed -i -e '1s,#!.*python3,#! /usr/bin/python3,' ${D}${sbindir}/dnssec-coverage ${D}${sbindir}/dnssec-checkds
-
-	# Install systemd related files
-	install -d ${D}${sbindir}
-	install -m 755 ${WORKDIR}/generate-rndc-key.sh ${D}${sbindir}
-	install -d ${D}${systemd_unitdir}/system
-	install -m 0644 ${WORKDIR}/named.service ${D}${systemd_unitdir}/system
-	sed -i -e 's,@BASE_BINDIR@,${base_bindir},g' \
-	       -e 's,@SBINDIR@,${sbindir},g' \
-	       ${D}${systemd_unitdir}/system/named.service
-
-	install -d ${D}${sysconfdir}/default
-	install -m 0644 ${WORKDIR}/bind9 ${D}${sysconfdir}/default
-}
-
-CONFFILES_${PN} = " \
-	${sysconfdir}/bind/named.conf \
-	${sysconfdir}/bind/named.conf.local \
-	${sysconfdir}/bind/named.conf.options \
-	${sysconfdir}/bind/db.0 \
-	${sysconfdir}/bind/db.127 \
-	${sysconfdir}/bind/db.empty \
-	${sysconfdir}/bind/db.local \
-	${sysconfdir}/bind/db.root \
-	"
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind_9.10.5-P3.bb b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind_9.10.5-P3.bb
new file mode 100644
index 0000000..13724a8
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/bind/bind_9.10.5-P3.bb
@@ -0,0 +1,119 @@
+SUMMARY = "ISC Internet Domain Name Server"
+HOMEPAGE = "http://www.isc.org/sw/bind/"
+SECTION = "console/network"
+
+LICENSE = "ISC & BSD"
+LIC_FILES_CHKSUM = "file://COPYRIGHT;md5=dba46507446198119bcde32a4feaab43"
+
+DEPENDS = "openssl libcap"
+
+SRC_URI = "https://ftp.isc.org/isc/bind9/${PV}/${BPN}-${PV}.tar.gz \
+           file://conf.patch \
+           file://make-etc-initd-bind-stop-work.patch \
+           file://dont-test-on-host.patch \
+           file://generate-rndc-key.sh \
+           file://named.service \
+           file://bind9 \
+           file://init.d-add-support-for-read-only-rootfs.patch \
+           file://bind-confgen-build-unix.o-once.patch \
+           file://0001-build-use-pkg-config-to-find-libxml2.patch \
+           file://bind-ensure-searching-for-json-headers-searches-sysr.patch \
+           file://0001-gen.c-extend-DIRNAMESIZE-from-256-to-512.patch \
+           file://0001-lib-dns-gen.c-fix-too-long-error.patch \
+           file://use-python3-and-fix-install-lib-path.patch \
+           "
+
+UPSTREAM_CHECK_URI = "https://ftp.isc.org/isc/bind9/"
+UPSTREAM_CHECK_REGEX = "(?P<pver>9(\.\d+)+(-P\d+)*)/"
+
+SRC_URI[md5sum] = "d79cafbd9ac76239ee532dd89d05cc83"
+SRC_URI[sha256sum] = "8d7e96b5b0bbac7b900d4c4bbb82e0956b4e509433c5fa392bb72a929b96606a"
+
+ENABLE_IPV6 = "--enable-ipv6=${@bb.utils.contains('DISTRO_FEATURES', 'ipv6', 'yes', 'no', d)}"
+EXTRA_OECONF = " ${ENABLE_IPV6} --with-libtool --enable-threads \
+                 --disable-devpoll --enable-epoll --with-gost=no \
+                 --with-gssapi=no --with-ecdsa=yes \
+                 --sysconfdir=${sysconfdir}/bind \
+                 --with-openssl=${STAGING_LIBDIR}/.. \
+               "
+
+inherit autotools update-rc.d systemd useradd pkgconfig python3-dir
+
+export PYTHON_SITEPACKAGES_DIR
+
+# PACKAGECONFIGs readline and libedit should NOT be set at same time
+PACKAGECONFIG ?= "readline"
+PACKAGECONFIG[httpstats] = "--with-libxml2,--without-libxml2,libxml2"
+PACKAGECONFIG[readline] = "--with-readline=-lreadline,,readline"
+PACKAGECONFIG[libedit] = "--with-readline=-ledit,,libedit"
+PACKAGECONFIG[urandom] = "--with-randomdev=/dev/urandom,--with-randomdev=/dev/random,,"
+
+USERADD_PACKAGES = "${PN}"
+USERADD_PARAM_${PN} = "--system --home ${localstatedir}/cache/bind --no-create-home \
+                       --user-group bind"
+
+INITSCRIPT_NAME = "bind"
+INITSCRIPT_PARAMS = "defaults"
+
+SYSTEMD_SERVICE_${PN} = "named.service"
+
+PARALLEL_MAKE = ""
+
+RDEPENDS_${PN} = "python3-core"
+RDEPENDS_${PN}-dev = ""
+
+PACKAGE_BEFORE_PN += "${PN}-utils"
+FILES_${PN}-utils = "${bindir}/host ${bindir}/dig"
+FILES_${PN}-dev += "${bindir}/isc-config.h"
+FILES_${PN} += "${sbindir}/generate-rndc-key.sh ${PYTHON_SITEPACKAGES_DIR}"
+
+do_install_prepend() {
+	# clean host path in isc-config.sh before the hardlink created
+	# by "make install":
+	#   bind9-config -> isc-config.sh
+	sed -i -e "s,${STAGING_LIBDIR},${libdir}," ${B}/isc-config.sh
+}
+
+do_install_append() {
+	rm "${D}${bindir}/nslookup"
+	rm "${D}${mandir}/man1/nslookup.1"
+	rmdir "${D}${localstatedir}/run"
+	rmdir --ignore-fail-on-non-empty "${D}${localstatedir}"
+	install -d -o bind "${D}${localstatedir}/cache/bind"
+	install -d "${D}${sysconfdir}/bind"
+	install -d "${D}${sysconfdir}/init.d"
+	install -m 644 ${S}/conf/* "${D}${sysconfdir}/bind/"
+	install -m 755 "${S}/init.d" "${D}${sysconfdir}/init.d/bind"
+	sed -i -e '1s,#!.*python3,#! /usr/bin/python3,' ${D}${sbindir}/dnssec-coverage ${D}${sbindir}/dnssec-checkds
+
+	# Install systemd related files
+	install -d ${D}${sbindir}
+	install -m 755 ${WORKDIR}/generate-rndc-key.sh ${D}${sbindir}
+	install -d ${D}${systemd_unitdir}/system
+	install -m 0644 ${WORKDIR}/named.service ${D}${systemd_unitdir}/system
+	sed -i -e 's,@BASE_BINDIR@,${base_bindir},g' \
+	       -e 's,@SBINDIR@,${sbindir},g' \
+	       ${D}${systemd_unitdir}/system/named.service
+
+	install -d ${D}${sysconfdir}/default
+	install -m 0644 ${WORKDIR}/bind9 ${D}${sysconfdir}/default
+
+	if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
+		install -d ${D}${sysconfdir}/tmpfiles.d
+		echo "d /run/named 0755 bind bind - -" > ${D}${sysconfdir}/tmpfiles.d/bind.conf
+	fi
+
+	rm -f ${D}${PYTHON_SITEPACKAGES_DIR}/isc/*.pyc
+}
+
+CONFFILES_${PN} = " \
+	${sysconfdir}/bind/named.conf \
+	${sysconfdir}/bind/named.conf.local \
+	${sysconfdir}/bind/named.conf.options \
+	${sysconfdir}/bind/db.0 \
+	${sysconfdir}/bind/db.127 \
+	${sysconfdir}/bind/db.empty \
+	${sysconfdir}/bind/db.local \
+	${sysconfdir}/bind/db.root \
+	"
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5.inc b/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5.inc
index 882873a..1807aa7 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5.inc
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5.inc
@@ -6,16 +6,41 @@
 LIC_FILES_CHKSUM = "file://COPYING;md5=12f884d2ae1ff87c09e5b7ccc2c4ca7e \
                     file://COPYING.LIB;md5=fb504b67c50331fc78734fed90fb0e09 \
                     file://src/main.c;beginline=1;endline=24;md5=9bc54b93cd7e17bf03f52513f39f926e"
-DEPENDS = "udev libusb dbus-glib glib-2.0 libcheck"
+DEPENDS = "udev dbus-glib glib-2.0 libcheck"
 PROVIDES += "bluez-hcidump"
 RPROVIDES_${PN} += "bluez-hcidump"
 
 RCONFLICTS_${PN} = "bluez4"
 
-PACKAGECONFIG ??= "obex-profiles readline"
+PACKAGECONFIG ??= "obex-profiles \
+    readline \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'systemd', d)} \
+    a2dp-profiles \
+    avrcp-profiles \
+    network-profiles \
+    hid-profiles \
+    hog-profiles \
+    tools \
+    deprecated \
+"
 PACKAGECONFIG[obex-profiles] = "--enable-obex,--disable-obex,libical"
-PACKAGECONFIG[experimental] = "--enable-experimental,--disable-experimental,"
 PACKAGECONFIG[readline] = "--enable-client,--disable-client,readline,"
+PACKAGECONFIG[testing] = "--enable-testing,--disable-testing"
+PACKAGECONFIG[midi] = "--enable-midi,--disable-midi,alsa-lib"
+PACKAGECONFIG[systemd] = "--enable-systemd,--disable-systemd"
+PACKAGECONFIG[cups] = "--enable-cups,--disable-cups,,cups"
+PACKAGECONFIG[nfc] = "--enable-nfc,--disable-nfc"
+PACKAGECONFIG[sap-profiles] = "--enable-sap,--disable-sap"
+PACKAGECONFIG[a2dp-profiles] = "--enable-a2dp,--disable-a2dp"
+PACKAGECONFIG[avrcp-profiles] = "--enable-avrcp,--disable-avrcp"
+PACKAGECONFIG[network-profiles] = "--enable-network,--disable-network"
+PACKAGECONFIG[hid-profiles] = "--enable-hid,--disable-hid"
+PACKAGECONFIG[hog-profiles] = "--enable-hog,--disable-hog"
+PACKAGECONFIG[health-profiles] = "--enable-health,--disable-health"
+PACKAGECONFIG[sixaxis] = "--enable-sixaxis,--disable-sixaxis"
+PACKAGECONFIG[tools] = "--enable-tools,--disable-tools"
+PACKAGECONFIG[threads] = "--enable-threads,--disable-threads"
+PACKAGECONFIG[deprecated] = "--enable-deprecated,--disable-deprecated"
 
 SRC_URI = "\
     ${KERNELORG_MIRROR}/linux/bluetooth/bluez-${PV}.tar.xz \
@@ -24,6 +49,7 @@
     file://run-ptest \
     ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', '', 'file://0001-Allow-using-obexd-without-systemd-in-the-user-sessio.patch', d)} \
     file://0001-tests-add-a-target-for-building-tests-without-runnin.patch \
+    file://0001-hciattach-bcm43xx-fix-the-delay-timer-for-firmware-d.patch \
     file://cve-2017-1000250.patch \
 "
 S = "${WORKDIR}/bluez-${PV}"
@@ -33,21 +59,20 @@
 inherit autotools pkgconfig systemd update-rc.d distro_features_check ptest
 
 EXTRA_OECONF = "\
-  --enable-tools \
-  --disable-cups \
   --enable-test \
   --enable-datafiles \
-  ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', '--enable-systemd', '--disable-systemd', d)} \
   --enable-library \
 "
 
 # bluez5 builds a large number of useful utilities but does not
 # install them.  Specify which ones we want put into ${PN}-noinst-tools.
 NOINST_TOOLS_READLINE ??= ""
-NOINST_TOOLS_EXPERIMENTAL ??= ""
+NOINST_TOOLS_TESTING ??= ""
+NOINST_TOOLS_BT ??= ""
 NOINST_TOOLS = " \
     ${@bb.utils.contains('PACKAGECONFIG', 'readline', '${NOINST_TOOLS_READLINE}', '', d)} \
-    ${@bb.utils.contains('PACKAGECONFIG', 'experimental', '${NOINST_TOOLS_EXPERIMENTAL}', '', d)} \
+    ${@bb.utils.contains('PACKAGECONFIG', 'testing', '${NOINST_TOOLS_TESTING}', '', d)} \
+    ${@bb.utils.contains('PACKAGECONFIG', 'tools', '${NOINST_TOOLS_BT}', '', d)} \
 "
 
 do_install_append() {
@@ -55,38 +80,36 @@
 	install -m 0755 ${WORKDIR}/init ${D}${INIT_D_DIR}/bluetooth
 
 	install -d ${D}${sysconfdir}/bluetooth/
-	if [ -f ${S}/profiles/audio/audio.conf ]; then
-	    install -m 0644 ${S}/profiles/audio/audio.conf ${D}/${sysconfdir}/bluetooth/
-	fi
 	if [ -f ${S}/profiles/network/network.conf ]; then
-	    install -m 0644 ${S}/profiles/network/network.conf ${D}/${sysconfdir}/bluetooth/
+		install -m 0644 ${S}/profiles/network/network.conf ${D}/${sysconfdir}/bluetooth/
 	fi
 	if [ -f ${S}/profiles/input/input.conf ]; then
-	    install -m 0644 ${S}/profiles/input/input.conf ${D}/${sysconfdir}/bluetooth/
+		install -m 0644 ${S}/profiles/input/input.conf ${D}/${sysconfdir}/bluetooth/
 	fi
 
-  if [ -f ${D}/${sysconfdir}/init.d/bluetooth ]; then
-    sed -i -e 's#@LIBEXECDIR@#${libexecdir}#g' ${D}/${sysconfdir}/init.d/bluetooth
-  fi
+	if [ -f ${D}/${sysconfdir}/init.d/bluetooth ]; then
+		sed -i -e 's#@LIBEXECDIR@#${libexecdir}#g' ${D}/${sysconfdir}/init.d/bluetooth
+	fi
 
 	# Install desired tools that upstream leaves in build area
-        for f in ${NOINST_TOOLS} ; do
-	    install -m 755 ${B}/$f ${D}/${bindir}
+	for f in ${NOINST_TOOLS} ; do
+		install -m 755 ${B}/$f ${D}/${bindir}
 	done
 
-        # Patch python tools to use Python 3; they should be source compatible, but
-        # still refer to Python 2 in the shebang
-        sed -i -e '1s,#!.*python.*,#!${bindir}/python3,' ${D}${libdir}/bluez/test/*
+	# Patch python tools to use Python 3; they should be source compatible, but
+	# still refer to Python 2 in the shebang
+	sed -i -e '1s,#!.*python.*,#!${bindir}/python3,' ${D}${libdir}/bluez/test/*
 }
 
-ALLOW_EMPTY_libasound-module-bluez = "1"
-PACKAGES =+ "libasound-module-bluez ${PN}-testtools ${PN}-obex ${PN}-noinst-tools"
+PACKAGES =+ "${PN}-testtools ${PN}-obex ${PN}-noinst-tools"
 
-FILES_libasound-module-bluez = "${libdir}/alsa-lib/lib*.so ${datadir}/alsa"
-FILES_${PN} += "${libdir}/bluetooth/plugins/*.so ${systemd_unitdir}/ ${datadir}/dbus-1"
-FILES_${PN}-dev += "\
-  ${libdir}/bluetooth/plugins/*.la \
-  ${libdir}/alsa-lib/*.la \
+FILES_${PN} += " \
+    ${libdir}/bluetooth/plugins/*.so \
+    ${systemd_unitdir}/ ${datadir}/dbus-1 \
+    ${libdir}/cups \
+"
+FILES_${PN}-dev += " \
+    ${libdir}/bluetooth/plugins/*.la \
 "
 
 FILES_${PN}-obex = "${libexecdir}/bluetooth/obexd \
@@ -109,17 +132,17 @@
 
 RDEPENDS_${PN}-testtools += "python3 python3-dbus python3-pygobject"
 
-SYSTEMD_SERVICE_${PN} = "bluetooth.service"
+SYSTEMD_SERVICE_${PN} = "${@bb.utils.contains('PACKAGECONFIG', 'systemd', 'bluetooth.service', '', d)}"
 INITSCRIPT_PACKAGES = "${PN}"
 INITSCRIPT_NAME_${PN} = "bluetooth"
 
 EXCLUDE_FROM_WORLD = "1"
 
 do_compile_ptest() {
-        oe_runmake buildtests
+	oe_runmake buildtests
 }
 
 do_install_ptest() {
-        cp -r ${B}/unit/ ${D}${PTEST_PATH}
-        rm -f ${D}${PTEST_PATH}/unit/*.o
+	cp -r ${B}/unit/ ${D}${PTEST_PATH}
+	rm -f ${D}${PTEST_PATH}/unit/*.o
 }
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5/0001-hciattach-bcm43xx-fix-the-delay-timer-for-firmware-d.patch b/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5/0001-hciattach-bcm43xx-fix-the-delay-timer-for-firmware-d.patch
new file mode 100644
index 0000000..4679438
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5/0001-hciattach-bcm43xx-fix-the-delay-timer-for-firmware-d.patch
@@ -0,0 +1,36 @@
+From 3b341fb421ef61db7782bf1314ec693828467de9 Mon Sep 17 00:00:00 2001
+From: Andy Duan <fugang.duan@nxp.com>
+Date: Wed, 23 Nov 2016 17:12:12 +0800
+Subject: [PATCH] hciattach: bcm43xx: fix the delay timer for firmware download
+
+From the log in .bcm43xx_load_firmware():
+        /* Wait 50ms to let the firmware placed in download mode */
+        nanosleep(&tm_mode, NULL);
+
+But the timespec tm_mode is really 50us. Correct the delay timer count.
+
+Upstream-Status: Accepted [https://git.kernel.org/pub/scm/bluetooth/bluez.git/commit/?id=76255f732d68aef2b90d36d9c7be51a9e1739ce7]
+
+Signed-off-by: Fugang Duan <fugang.duan@nxp.com>
+---
+ tools/hciattach_bcm43xx.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/tools/hciattach_bcm43xx.c b/tools/hciattach_bcm43xx.c
+index 81f38cb..ac1b3c1 100644
+--- a/tools/hciattach_bcm43xx.c
++++ b/tools/hciattach_bcm43xx.c
+@@ -228,8 +228,8 @@ static int bcm43xx_set_speed(int fd, struct termios *ti, uint32_t speed)
+ static int bcm43xx_load_firmware(int fd, const char *fw)
+ {
+ 	unsigned char cmd[] = { HCI_COMMAND_PKT, 0x2e, 0xfc, 0x00 };
+-	struct timespec tm_mode = { 0, 50000 };
+-	struct timespec tm_ready = { 0, 2000000 };
++	struct timespec tm_mode = { 0, 50000000 };
++	struct timespec tm_ready = { 0, 200000000 };
+ 	unsigned char resp[CC_MIN_SIZE];
+ 	unsigned char tx_buf[1024];
+ 	int len, fd_fw, n;
+-- 
+1.9.1
+
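The fix above comes down to struct timespec units: tv_nsec is in nanoseconds, so a 50 ms delay needs 50000000 (the old 50000 was only 50 us) and a 200 ms delay needs 200000000. A minimal standalone sketch of the corrected values, assuming nothing beyond POSIX nanosleep() — an illustration, not code taken from bluez:

#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

int main(void)
{
	/* Illustration only: same numbers as the patch above. */
	struct timespec tm_mode  = { 0, 50000000 };	/* 50 ms, not 50 us */
	struct timespec tm_ready = { 0, 200000000 };	/* 200 ms */

	printf("mode delay:  %ld ns\n", tm_mode.tv_nsec);
	printf("ready delay: %ld ns\n", tm_ready.tv_nsec);
	nanosleep(&tm_mode, NULL);	/* waits ~50 ms for firmware download mode */
	return 0;
}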
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5/init b/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5/init
index 489e9b9..d7972f2 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5/init
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5/init
@@ -21,25 +21,22 @@
 
 case $1 in
   start)
-	echo "Starting $DESC"
-
+	echo -n "Starting $DESC: "
 	if test "$BLUETOOTH_ENABLED" = 0; then
-		echo "disabled. see /etc/default/bluetooth"
+		echo "disabled (see /etc/default/bluetooth)."
 		exit 0
 	fi
-
 	start-stop-daemon --start --background $SSD_OPTIONS
-	echo "${DAEMON##*/}"
-
+	echo "${DAEMON##*/}."
   ;;
   stop)
-	echo "Stopping $DESC"
+	echo -n "Stopping $DESC: "
 	if test "$BLUETOOTH_ENABLED" = 0; then
-		echo "disabled."
+		echo "disabled (see /etc/default/bluetooth)."
 		exit 0
 	fi
 	start-stop-daemon --stop $SSD_OPTIONS
-	echo "${DAEMON}"
+	echo "${DAEMON##*/}."
   ;;
   restart|force-reload)
 	$0 stop
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5_5.43.bb b/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5_5.43.bb
deleted file mode 100644
index e10b82d..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5_5.43.bb
+++ /dev/null
@@ -1,55 +0,0 @@
-require bluez5.inc
-
-REQUIRED_DISTRO_FEATURES = "bluez5"
-
-SRC_URI[md5sum] = "698def88df96840dfbb0858bb6d73350"
-SRC_URI[sha256sum] = "16c9c05d2a1da644ce3570d975ada3643d2e60c007a955bac09c0a0efeb58d15"
-
-# noinst programs in Makefile.tools that are conditional on READLINE
-# support
-NOINST_TOOLS_READLINE ?= " \
-    attrib/gatttool \
-    tools/obex-client-tool \
-    tools/obex-server-tool \
-    tools/bluetooth-player \
-    tools/obexctl \
-    tools/btmgmt \
-"
-
-# noinst programs in Makefile.tools that are conditional on EXPERIMENTAL
-# support
-NOINST_TOOLS_EXPERIMENTAL ?= " \
-    emulator/btvirt \
-    emulator/b1ee \
-    emulator/hfp \
-    tools/3dsp \
-    tools/mgmt-tester \
-    tools/gap-tester \
-    tools/l2cap-tester \
-    tools/sco-tester \
-    tools/smp-tester \
-    tools/hci-tester \
-    tools/rfcomm-tester \
-    tools/bdaddr \
-    tools/avinfo \
-    tools/avtest \
-    tools/scotest \
-    tools/amptest \
-    tools/hwdb \
-    tools/hcieventmask \
-    tools/hcisecfilter \
-    tools/btinfo \
-    tools/btattach \
-    tools/btsnoop \
-    tools/btproxy \
-    tools/btiotest \
-    tools/mcaptest \
-    tools/cltest \
-    tools/oobtest \
-    tools/seq2bseq \
-    tools/ibeacon \
-    tools/btgatt-client \
-    tools/btgatt-server \
-    tools/gatt-service \
-    profiles/iap/iapd \
-"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5_5.46.bb b/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5_5.46.bb
new file mode 100644
index 0000000..e1f8587
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/bluez5/bluez5_5.46.bb
@@ -0,0 +1,69 @@
+require bluez5.inc
+
+REQUIRED_DISTRO_FEATURES = "bluez5"
+
+SRC_URI[md5sum] = "913f35d6fa4ca5772c53adb936bf1947"
+SRC_URI[sha256sum] = "ddab3d3837c1afb8ae228a94ba17709a4650bd4db24211b6771ab735c8908e28"
+
+# noinst programs in Makefile.tools that are conditional on READLINE
+# support
+NOINST_TOOLS_READLINE ?= " \
+    ${@bb.utils.contains('PACKAGECONFIG', 'deprecated', 'attrib/gatttool', '', d)} \
+    tools/obex-client-tool \
+    tools/obex-server-tool \
+    tools/bluetooth-player \
+    tools/obexctl \
+    tools/btmgmt \
+"
+
+# noinst programs in Makefile.tools that are conditional on TESTING
+# support
+NOINST_TOOLS_TESTING ?= " \
+    emulator/btvirt \
+    emulator/b1ee \
+    emulator/hfp \
+    peripheral/btsensor \
+    tools/3dsp \
+    tools/mgmt-tester \
+    tools/gap-tester \
+    tools/l2cap-tester \
+    tools/sco-tester \
+    tools/smp-tester \
+    tools/hci-tester \
+    tools/rfcomm-tester \
+    tools/bnep-tester \
+    tools/userchan-tester \
+"
+
+# noinst programs in Makefile.tools that are conditional on TOOLS
+# support
+NOINST_TOOLS_BT ?= " \
+    tools/bdaddr \
+    tools/avinfo \
+    tools/avtest \
+    tools/scotest \
+    tools/amptest \
+    tools/hwdb \
+    tools/hcieventmask \
+    tools/hcisecfilter \
+    tools/btinfo \
+    tools/btsnoop \
+    tools/btproxy \
+    tools/btiotest \
+    tools/bneptest \
+    tools/mcaptest \
+    tools/cltest \
+    tools/oobtest \
+    tools/advtest \
+    tools/seq2bseq \
+    tools/nokfw \
+    tools/create-image \
+    tools/eddystone \
+    tools/ibeacon \
+    tools/btgatt-client \
+    tools/btgatt-server \
+    tools/test-runner \
+    tools/check-selftest \
+    tools/gatt-service \
+    profiles/iap/iapd \
+"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman.inc b/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman.inc
index 64a5418..2b03f9c 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman.inc
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman.inc
@@ -13,9 +13,9 @@
 LIC_FILES_CHKSUM = "file://COPYING;md5=12f884d2ae1ff87c09e5b7ccc2c4ca7e \
                     file://src/main.c;beginline=1;endline=20;md5=486a279a6ab0c8d152bcda3a5b5edc36"
 
-inherit autotools pkgconfig systemd update-rc.d bluetooth
+inherit autotools pkgconfig systemd update-rc.d bluetooth update-alternatives
 
-DEPENDS  = "dbus glib-2.0 ppp iptables readline"
+DEPENDS  = "dbus glib-2.0 ppp readline"
 
 INC_PR = "r20"
 
@@ -33,6 +33,7 @@
 PACKAGECONFIG ??= "wispr \
                    ${@bb.utils.filter('DISTRO_FEATURES', '3g systemd wifi', d)} \
                    ${@bb.utils.contains('DISTRO_FEATURES', 'bluetooth', 'bluez', '', d)} \
+                   iptables \
 "
 
 # If you want ConnMan to support VPN, add following statement into
@@ -50,6 +51,8 @@
 PACKAGECONFIG[pptp] = "--enable-pptp --with-pptp=${sbindir}/pptp,--disable-pptp,,pptp-linux"
 # WISPr support for logging into hotspots, requires TLS
 PACKAGECONFIG[wispr] = "--enable-wispr,--disable-wispr,gnutls,"
+PACKAGECONFIG[nftables] = "--with-firewall=nftables ,,libmnl libnftnl,,kernel-module-nf-tables-ipv4 kernel-module-nft-chain-nat-ipv4 kernel-module-nft-chain-route-ipv4 kernel-module-nft-meta kernel-module-nft-masq-ipv4 kernel-module-nft-nat"
+PACKAGECONFIG[iptables] = "--with-firewall=iptables ,,iptables,iptables"
 
 INITSCRIPT_NAME = "connman"
 INITSCRIPT_PARAMS = "start 05 5 2 3 . stop 22 0 1 6 ."
@@ -66,6 +69,11 @@
 SYSTEMD_SERVICE_${PN}-vpn = "connman-vpn.service"
 SYSTEMD_SERVICE_${PN}-wait-online = "connman-wait-online.service"
 
+ALTERNATIVE_PRIORITY = "100"
+ALTERNATIVE_${PN} = "${@bb.utils.contains('DISTRO_FEATURES','systemd','resolv-conf','',d)}"
+ALTERNATIVE_TARGET[resolv-conf] = "${@bb.utils.contains('DISTRO_FEATURES','systemd','${sysconfdir}/resolv-conf.connman','',d)}"
+ALTERNATIVE_LINK_NAME[resolv-conf] = "${@bb.utils.contains('DISTRO_FEATURES','systemd','${sysconfdir}/resolv.conf','',d)}"
+
 do_install_append() {
 	if ${@bb.utils.contains('DISTRO_FEATURES','sysvinit','true','false',d)}; then
 		install -d ${D}${sysconfdir}/init.d
@@ -86,6 +94,11 @@
 	# Automake 1.12 won't install empty directories, but we need the
 	# plugins directory to be present for ownership
 	mkdir -p ${D}${libdir}/connman/plugins
+
+	# For read-only filesystem, do not create links during bootup
+	if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
+		ln -sf ../run/connman/resolv.conf ${D}${sysconfdir}/resolv-conf.connman
+	fi
 }
 
 # These used to be plugins, but now they are core
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman/0001-Fix-compile-on-musl-with-kernel-4.9-headers.patch b/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman/0001-Fix-compile-on-musl-with-kernel-4.9-headers.patch
deleted file mode 100644
index bf3b86d..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman/0001-Fix-compile-on-musl-with-kernel-4.9-headers.patch
+++ /dev/null
@@ -1,64 +0,0 @@
-From c8bfad4ee9d2c505c00ccbb8b2139543b5ad6fcb Mon Sep 17 00:00:00 2001
-From: Jussi Kukkonen <jussi.kukkonen@intel.com>
-Date: Mon, 23 Jan 2017 17:41:39 +0200
-Subject: [PATCH] Fix compile on musl with kernel 4.9 headers
-
-Kernel headers break when musl defines IFF_LOWER_UP. While
-waiting for more proper fix in musl, add a hack to connman.
-
-Upstream-Status: Inappropriate [Workaround]
-Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
----
- src/6to4.c     | 4 ++++
- src/firewall.c | 4 ++++
- src/iptables.c | 4 ++++
- 3 files changed, 12 insertions(+)
-
-diff --git a/src/6to4.c b/src/6to4.c
-index 71a2882..1938afb 100644
---- a/src/6to4.c
-+++ b/src/6to4.c
-@@ -24,6 +24,10 @@
- #include <config.h>
- #endif
- 
-+/* hack to make sure kernel headers understand that libc (musl)
-+   does define IFF_LOWER_UP et al. */
-+#define __UAPI_DEF_IF_NET_DEVICE_FLAGS_LOWER_UP_DORMANT_ECHO 0
-+
- #include <errno.h>
- #include <stdio.h>
- #include <stdlib.h>
-diff --git a/src/firewall.c b/src/firewall.c
-index c440df6..c83def9 100644
---- a/src/firewall.c
-+++ b/src/firewall.c
-@@ -23,6 +23,10 @@
- #include <config.h>
- #endif
- 
-+/* hack to make sure kernel headers understand that libc (musl)
-+   does define IFF_LOWER_UP et al. */
-+#define __UAPI_DEF_IF_NET_DEVICE_FLAGS_LOWER_UP_DORMANT_ECHO 0
-+
- #include <errno.h>
- 
- #include <xtables.h>
-diff --git a/src/iptables.c b/src/iptables.c
-index 82e3ac4..46ad9e2 100644
---- a/src/iptables.c
-+++ b/src/iptables.c
-@@ -23,6 +23,10 @@
- #include <config.h>
- #endif
- 
-+/* hack to make sure kernel headers understand that libc (musl)
-+   does define IFF_LOWER_UP et al. */
-+#define __UAPI_DEF_IF_NET_DEVICE_FLAGS_LOWER_UP_DORMANT_ECHO 0
-+
- #include <getopt.h>
- #include <stdlib.h>
- #include <stdio.h>
--- 
-2.1.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman/0001-connman.service-stop-systemd-resolved-when-we-use-co.patch b/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman/0001-connman.service-stop-systemd-resolved-when-we-use-co.patch
new file mode 100644
index 0000000..8e2e0bd0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman/0001-connman.service-stop-systemd-resolved-when-we-use-co.patch
@@ -0,0 +1,29 @@
+From 9f70b94ebf18f52c115634642652830fa77f27a1 Mon Sep 17 00:00:00 2001
+From: "Maxin B. John" <maxin.john@intel.com>
+Date: Mon, 12 Jun 2017 16:52:39 +0300
+Subject: [PATCH] connman.service: stop systemd-resolved when we use connman
+
+Stop systemd-resolved service when we use connman as network manager.
+
+Upstream-Status: Inappropriate [configuration]
+
+Signed-off-by: Maxin B. John <maxin.john@intel.com>
+---
+ src/connman.service.in | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/src/connman.service.in b/src/connman.service.in
+index 9f5c10f..dab48bc 100644
+--- a/src/connman.service.in
++++ b/src/connman.service.in
+@@ -6,6 +6,7 @@ RequiresMountsFor=@localstatedir@/lib/connman
+ After=dbus.service network-pre.target systemd-sysusers.service
+ Before=network.target multi-user.target shutdown.target
+ Wants=network.target
++Conflicts=systemd-resolved.service
+ 
+ [Service]
+ Type=dbus
+-- 
+2.4.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman/0001-firewall-nftables-fix-build-with-libnftnl-1.0.7.patch b/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman/0001-firewall-nftables-fix-build-with-libnftnl-1.0.7.patch
new file mode 100644
index 0000000..cfafbd1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman/0001-firewall-nftables-fix-build-with-libnftnl-1.0.7.patch
@@ -0,0 +1,72 @@
+From 4058ce3186a99fd5f03350fc11a7fc8d38b6a381 Mon Sep 17 00:00:00 2001
+From: "Maxin B. John" <maxin.john@intel.com>
+Date: Mon, 8 May 2017 10:53:18 +0300
+Subject: [PATCH] firewall-nftables: fix build with libnftnl-1.0.7
+
+We need these updates to accommodate the changes caused by the following
+commit in libnftnl-1.0.7
+
+commit 907a9f8e5a93f5bcd449643eb3916a656d634758
+Author: Pablo Neira Ayuso <pablo@netfilter.org>
+Date:   Tue Dec 20 13:47:11 2016 +0100
+
+src: get rid of aliases and compat
+
+This machinery was introduced to avoid sudden compilation breakage of
+old nftables releases. With the upcoming release of 0.7 (and 0.6 which
+is now 6 months old) this is not required anymore.
+
+Moreover, users gain nothing from older releases since they are
+half-boiled and buggy.
+
+So let's get rid of aliases now. Bump LIBVERSION and update map file.
+
+Upstream-Status: Submitted
+
+Signed-off-by: Maxin B. John <maxin.john@intel.com>
+---
+ src/firewall-nftables.c | 14 +++++++-------
+ 1 file changed, 7 insertions(+), 7 deletions(-)
+
+diff --git a/src/firewall-nftables.c b/src/firewall-nftables.c
+index 583d1c4..83b137b 100644
+--- a/src/firewall-nftables.c
++++ b/src/firewall-nftables.c
+@@ -387,9 +387,9 @@ static int add_cmp(struct nftnl_rule *rule, uint32_t sreg, uint32_t op,
+         if (!expr)
+                 return -ENOMEM;
+ 
+-        nftnl_expr_set_u32(expr, NFT_EXPR_CMP_SREG, sreg);
+-        nftnl_expr_set_u32(expr, NFT_EXPR_CMP_OP, op);
+-        nftnl_expr_set(expr, NFT_EXPR_CMP_DATA, data, data_len);
++        nftnl_expr_set_u32(expr, NFTNL_EXPR_CMP_SREG, sreg);
++        nftnl_expr_set_u32(expr, NFTNL_EXPR_CMP_OP, op);
++        nftnl_expr_set(expr, NFTNL_EXPR_CMP_DATA, data, data_len);
+ 
+         nftnl_rule_add_expr(rule, expr);
+ 
+@@ -575,8 +575,8 @@ static int build_rule_nat(const char *address, unsigned char prefixlen,
+ 	expr = nftnl_expr_alloc("meta");
+ 	if (!expr)
+ 		goto err;
+-	nftnl_expr_set_u32(expr, NFT_EXPR_META_KEY, NFT_META_OIFNAME);
+-	nftnl_expr_set_u32(expr, NFT_EXPR_META_DREG, NFT_REG_1);
++	nftnl_expr_set_u32(expr, NFTNL_EXPR_META_KEY, NFT_META_OIFNAME);
++	nftnl_expr_set_u32(expr, NFTNL_EXPR_META_DREG, NFT_REG_1);
+ 	nftnl_rule_add_expr(rule, expr);
+ 	err = add_cmp(rule, NFT_REG_1, NFT_CMP_EQ, interface,
+ 			strlen(interface) + 1);
+@@ -677,8 +677,8 @@ static int build_rule_snat(int index, const char *address,
+ 	expr = nftnl_expr_alloc("meta");
+ 	if (!expr)
+ 		goto err;
+-	nftnl_expr_set_u32(expr, NFT_EXPR_META_KEY, NFT_META_OIF);
+-	nftnl_expr_set_u32(expr, NFT_EXPR_META_DREG, NFT_REG_1);
++	nftnl_expr_set_u32(expr, NFTNL_EXPR_META_KEY, NFT_META_OIF);
++	nftnl_expr_set_u32(expr, NFTNL_EXPR_META_DREG, NFT_REG_1);
+ 	nftnl_rule_add_expr(rule, expr);
+ 	err = add_cmp(rule, NFT_REG_1, NFT_CMP_EQ, &index, sizeof(index));
+ 	if (err < 0)
+-- 
+2.4.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman/0003-stats-Fix-bad-file-descriptor-initialisation.patch b/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman/0003-stats-Fix-bad-file-descriptor-initialisation.patch
deleted file mode 100644
index c545811..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman/0003-stats-Fix-bad-file-descriptor-initialisation.patch
+++ /dev/null
@@ -1,102 +0,0 @@
-From c7f4151fb053b0d0691d8f10d7e3690265d28889 Mon Sep 17 00:00:00 2001
-From: Lukasz Nowak <lnowak@tycoint.com>
-Date: Wed, 26 Oct 2016 18:13:02 +0100
-Subject: [PATCH] stats: Fix bad file descriptor initialisation
-
-Stats file code initialises its file descriptor field to 0.  But 0 is
-a valid fd value. -1 should be used instead.  This causes problems
-when an error happens before a stats file is open (e.g. mkdir
-fails). The clean-up procedure, stats_free() calls close(fd).  When fd
-is 0, this first closes stdin, and then any files/sockets which
-received fd=0, re-used by the OS.
-
-Fixed several instances of bad file descriptor field handling, in case
-of errors.
-
-The bug results with connman freezing if there is no read/write storage
-directory available, and there are multiple active interfaces
-(fd=0 gets re-used for sockets in that case).
-
-The patch was imported from the Connman git repository
-(git://git.kernel.org/pub/scm/network/connman) as of commit id
-c7f4151fb053b0d0691d8f10d7e3690265d28889. 
-
-Upstream-Status: Accepted
-Signed-off-by: Lukasz Nowak <lnowak@tycoint.com>
----
- src/stats.c | 15 +++++++++++++++
- src/util.c  |  4 ++--
- 2 files changed, 17 insertions(+), 2 deletions(-)
-
-diff --git a/src/stats.c b/src/stats.c
-index 26343b1..c3ca738 100644
---- a/src/stats.c
-+++ b/src/stats.c
-@@ -378,6 +378,7 @@ static int stats_file_setup(struct stats_file *file)
- 			strerror(errno), file->name);
- 
- 		TFR(close(file->fd));
-+		file->fd = -1;
- 		g_free(file->name);
- 		file->name = NULL;
- 
-@@ -393,6 +394,7 @@ static int stats_file_setup(struct stats_file *file)
- 	err = stats_file_remap(file, size);
- 	if (err < 0) {
- 		TFR(close(file->fd));
-+		file->fd = -1;
- 		g_free(file->name);
- 		file->name = NULL;
- 
-@@ -649,6 +651,13 @@ static int stats_file_history_update(struct stats_file *data_file)
- 	bzero(history_file, sizeof(struct stats_file));
- 	bzero(temp_file, sizeof(struct stats_file));
- 
-+	/*
-+	 * 0 is a valid file descriptor - fd needs to be initialized
-+	 * to -1 to handle errors correctly
-+	 */
-+	history_file->fd = -1;
-+	temp_file->fd = -1;
-+
- 	err = stats_open(history_file, data_file->history_name);
- 	if (err < 0)
- 		return err;
-@@ -682,6 +691,12 @@ int __connman_stats_service_register(struct connman_service *service)
- 		if (!file)
- 			return -ENOMEM;
- 
-+		/*
-+		 * 0 is a valid file descriptor - fd needs to be initialized
-+		 * to -1 to handle errors correctly
-+		 */
-+		file->fd = -1;
-+
- 		g_hash_table_insert(stats_hash, service, file);
- 	} else {
- 		return -EALREADY;
-diff --git a/src/util.c b/src/util.c
-index e6532c8..732d451 100644
---- a/src/util.c
-+++ b/src/util.c
-@@ -63,7 +63,7 @@ int __connman_util_init(void)
- {
- 	int r = 0;
- 
--	if (f > 0)
-+	if (f >= 0)
- 		return 0;
- 
- 	f = open(URANDOM, O_RDONLY);
-@@ -86,7 +86,7 @@ int __connman_util_init(void)
- 
- void __connman_util_cleanup(void)
- {
--	if (f > 0)
-+	if (f >= 0)
- 		close(f);
- 
- 	f = -1;
--- 
-2.7.4
-
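The patch above is dropped because ConnMan 1.34 already carries it upstream, but the reasoning it documents is general: 0 is a valid file descriptor, so a descriptor field left at 0 and later handed to close() silently closes stdin. A minimal sketch of the safe pattern, assuming only standard POSIX calls — an illustration, not ConnMan code:

#include <unistd.h>

struct stats_file_like {
	int fd;			/* -1 means "not open"; 0 would alias stdin */
};

static void stats_init(struct stats_file_like *f)
{
	f->fd = -1;		/* never 0: 0 is a valid fd */
}

static void stats_close(struct stats_file_like *f)
{
	if (f->fd >= 0) {	/* guard so close(0) is never hit by mistake */
		close(f->fd);
		f->fd = -1;
	}
}

int main(void)
{
	struct stats_file_like f;

	stats_init(&f);
	stats_close(&f);	/* safe even though no file was ever opened */
	return 0;
}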
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman_1.33.bb b/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman_1.33.bb
deleted file mode 100644
index ee04d9b..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman_1.33.bb
+++ /dev/null
@@ -1,17 +0,0 @@
-require connman.inc
-
-SRC_URI  = "${KERNELORG_MIRROR}/linux/network/${BPN}/${BP}.tar.xz \
-            file://0001-plugin.h-Change-visibility-to-default-for-debug-symb.patch \
-            file://connman \
-            file://no-version-scripts.patch \
-            file://includes.patch \
-            file://0003-stats-Fix-bad-file-descriptor-initialisation.patch \
-            file://CVE-2017-12865.patch \
-            "
-SRC_URI_append_libc-musl = " file://0002-resolve-musl-does-not-implement-res_ninit.patch \
-                             file://0001-Fix-compile-on-musl-with-kernel-4.9-headers.patch"
-
-SRC_URI[md5sum] = "c51903fd3e7a6a371d12ac5d72a1fa01"
-SRC_URI[sha256sum] = "bc8946036fa70124d663136f9f6b6238d897ca482782df907b07a428b09df5a0"
-
-RRECOMMENDS_${PN} = "connman-conf"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman_1.34.bb b/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman_1.34.bb
new file mode 100644
index 0000000..dc2c688
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/connman/connman_1.34.bb
@@ -0,0 +1,18 @@
+require connman.inc
+
+SRC_URI  = "${KERNELORG_MIRROR}/linux/network/${BPN}/${BP}.tar.xz \
+            file://0001-plugin.h-Change-visibility-to-default-for-debug-symb.patch \
+            file://0001-firewall-nftables-fix-build-with-libnftnl-1.0.7.patch \
+            file://0001-connman.service-stop-systemd-resolved-when-we-use-co.patch \
+            file://connman \
+            file://no-version-scripts.patch \
+            file://includes.patch \
+            file://CVE-2017-12865.patch \
+            "
+SRC_URI_append_libc-musl = " file://0002-resolve-musl-does-not-implement-res_ninit.patch \
+                             "
+
+SRC_URI[md5sum] = "e200028702c831d5f535d20d61e608ef"
+SRC_URI[sha256sum] = "a9a0808c729c1f348fc36d8cecb52d19b72bc34cb411c502608cb0e0190fc71e"
+
+RRECOMMENDS_${PN} = "connman-conf"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp.inc b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp.inc
index aafdd0a..e943707 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp.inc
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp.inc
@@ -12,30 +12,33 @@
 
 DEPENDS = "openssl bind"
 
-SRC_URI = "ftp://ftp.isc.org/isc/dhcp/${PV}/dhcp-${PV}.tar.gz \
-           file://define-macro-_PATH_DHCPD_CONF-and-_PATH_DHCLIENT_CON.patch \
+SRC_URI = "http://ftp.isc.org/isc/dhcp/${PV}/dhcp-${PV}.tar.gz \
            file://init-relay file://default-relay \
            file://init-server file://default-server \
            file://dhclient.conf file://dhcpd.conf \
+           file://dhclient-systemd-wrapper \
+           file://dhclient.service \
            file://dhcpd.service file://dhcrelay.service \
            file://dhcpd6.service \
-           file://search-for-libxml2.patch "
-
+           "
 UPSTREAM_CHECK_URI = "ftp://ftp.isc.org/isc/dhcp/"
 UPSTREAM_CHECK_REGEX = "(?P<pver>\d+\.\d+\.(\d+?))/"
 
 inherit autotools systemd useradd update-rc.d
 
 USERADD_PACKAGES = "${PN}-server"
-USERADD_PARAM_${PN}-server = "--system --no-create-home --home-dir /var/run/${PN} --shell /bin/false --user-group ${PN}"
+USERADD_PARAM_${PN}-server = "--system --no-create-home --home-dir /var/run/${BPN} --shell /bin/false --user-group ${BPN}"
 
-SYSTEMD_PACKAGES = "${PN}-server ${PN}-relay"
+SYSTEMD_PACKAGES = "${PN}-server ${PN}-relay ${PN}-client"
 SYSTEMD_SERVICE_${PN}-server = "dhcpd.service dhcpd6.service"
 SYSTEMD_AUTO_ENABLE_${PN}-server = "disable"
 
 SYSTEMD_SERVICE_${PN}-relay = "dhcrelay.service"
 SYSTEMD_AUTO_ENABLE_${PN}-relay = "disable"
 
+SYSTEMD_SERVICE_${PN}-client = "dhclient.service"
+SYSTEMD_AUTO_ENABLE_${PN}-client = "disable"
+
 INITSCRIPT_PACKAGES = "dhcp-server"
 INITSCRIPT_NAME_dhcp-server = "dhcp-server"
 INITSCRIPT_PARAMS_dhcp-server = "defaults"
@@ -46,7 +49,7 @@
                 --with-cli-lease-file=${localstatedir}/lib/dhcp/dhclient.leases \
                 --with-cli6-lease-file=${localstatedir}/lib/dhcp/dhclient6.leases \
                 --with-libbind=${STAGING_LIBDIR}/ \
-                --enable-paranoia \
+                --enable-paranoia --disable-static \
                 --with-randomdev=/dev/random \
                "
 
@@ -79,15 +82,23 @@
 	sed -i -e 's,@SYSCONFDIR@,${sysconfdir},g' ${D}${systemd_unitdir}/system/dhcpd*.service
 	sed -i -e 's,@base_bindir@,${base_bindir},g' ${D}${systemd_unitdir}/system/dhcpd*.service
 	sed -i -e 's,@localstatedir@,${localstatedir},g' ${D}${systemd_unitdir}/system/dhcpd*.service
-       sed -i -e 's,@SYSCONFDIR@,${sysconfdir},g' ${D}${systemd_unitdir}/system/dhcrelay.service
+	sed -i -e 's,@SYSCONFDIR@,${sysconfdir},g' ${D}${systemd_unitdir}/system/dhcrelay.service
+
+	install -d ${D}${base_sbindir}
+	install -m 0755 ${WORKDIR}/dhclient-systemd-wrapper ${D}${base_sbindir}/dhclient-systemd-wrapper
+	install -m 0644 ${WORKDIR}/dhclient.service ${D}${systemd_unitdir}/system
+	sed -i -e 's,@SYSCONFDIR@,${sysconfdir},g' ${D}${systemd_unitdir}/system/dhclient.service
+	sed -i -e 's,@BASE_SBINDIR@,${base_sbindir},g' ${D}${systemd_unitdir}/system/dhclient.service
 }
 
-PACKAGES += "dhcp-server dhcp-server-config dhcp-client dhcp-relay dhcp-omshell"
+PACKAGES += "dhcp-libs dhcp-server dhcp-server-config dhcp-client dhcp-relay dhcp-omshell"
 
-FILES_${PN} = ""
+PACKAGES_remove = "${PN}"
 RDEPENDS_${PN}-dev = ""
 RDEPENDS_${PN}-staticdev = ""
 
+FILES_${PN}-libs = "${libdir}/libdhcpctl.so.0* ${libdir}/libomapi.so.0*"
+
 FILES_${PN}-server = "${sbindir}/dhcpd ${sysconfdir}/init.d/dhcp-server"
 RRECOMMENDS_${PN}-server = "dhcp-server-config"
 
@@ -95,7 +106,11 @@
 
 FILES_${PN}-relay = "${sbindir}/dhcrelay ${sysconfdir}/init.d/dhcp-relay ${sysconfdir}/default/dhcp-relay"
 
-FILES_${PN}-client = "${base_sbindir}/dhclient ${base_sbindir}/dhclient-script ${sysconfdir}/dhcp/dhclient.conf"
+FILES_${PN}-client = "${base_sbindir}/dhclient \
+                      ${base_sbindir}/dhclient-script \
+                      ${sysconfdir}/dhcp/dhclient.conf \
+                      ${base_sbindir}/dhclient-systemd-wrapper \
+                     "
 
 FILES_${PN}-omshell = "${bindir}/omshell"
 
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0001-define-macro-_PATH_DHCPD_CONF-and-_PATH_DHCLIENT_CON.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0001-define-macro-_PATH_DHCPD_CONF-and-_PATH_DHCLIENT_CON.patch
new file mode 100644
index 0000000..e5b3cf9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0001-define-macro-_PATH_DHCPD_CONF-and-_PATH_DHCLIENT_CON.patch
@@ -0,0 +1,30 @@
+From 7cc29144535a622fc671dc86eb1da65b0473a7c4 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Tue, 15 Aug 2017 16:14:22 +0800
+Subject: [PATCH 01/11] define macro _PATH_DHCPD_CONF and _PATH_DHCLIENT_CONF
+
+Upstream-Status: Inappropriate [OE specific]
+
+Rebase to 4.3.6
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ includes/site.h | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/includes/site.h b/includes/site.h
+index b2f7fd7..280fbb9 100644
+--- a/includes/site.h
++++ b/includes/site.h
+@@ -149,7 +149,8 @@
+ /* Define this if you want the dhcpd.conf file to go somewhere other than
+    the default location.   By default, it goes in /etc/dhcpd.conf. */
+ 
+-/* #define _PATH_DHCPD_CONF	"/etc/dhcpd.conf" */
++#define _PATH_DHCPD_CONF	"/etc/dhcp/dhcpd.conf"
++#define _PATH_DHCLIENT_CONF	"/etc/dhcp/dhclient.conf"
+ 
+ /* Network API definitions.   You do not need to choose one of these - if
+    you don't choose, one will be chosen for you in your system's config
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0001-site.h-enable-gentle-shutdown.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0001-site.h-enable-gentle-shutdown.patch
deleted file mode 100644
index 47443a5..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0001-site.h-enable-gentle-shutdown.patch
+++ /dev/null
@@ -1,25 +0,0 @@
-Upstream-Status: Inappropriate [configuration]
-
-Subject: [PATCH] site.h: enable gentle shutdown
-
-Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
----
- includes/site.h | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/includes/site.h b/includes/site.h
-index 1dd1251..abb66e4 100644
---- a/includes/site.h
-+++ b/includes/site.h
-@@ -289,7 +289,7 @@
-    situations.  We plan to revisit this feature and may
-    make non-backwards compatible changes including the
-    removal of this define.  Use at your own risk.  */
--/* #define ENABLE_GENTLE_SHUTDOWN */
-+#define ENABLE_GENTLE_SHUTDOWN
- 
- /* Include old error codes.  This is provided in case you
-    are building an external program similar to omshell for
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0002-dhclient-dbus.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0002-dhclient-dbus.patch
new file mode 100644
index 0000000..6459dc0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0002-dhclient-dbus.patch
@@ -0,0 +1,117 @@
+From be7540d31c356e80ee02e90e8bf162b7ac6e5ba5 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Tue, 15 Aug 2017 14:56:56 +0800
+Subject: [PATCH 02/11] dhclient dbus
+
+Upstream-Status: Inappropriate [distribution]
+
+Rebase to 4.3.6
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ client/scripts/bsdos   | 5 +++++
+ client/scripts/freebsd | 5 +++++
+ client/scripts/linux   | 5 +++++
+ client/scripts/netbsd  | 5 +++++
+ client/scripts/openbsd | 5 +++++
+ client/scripts/solaris | 5 +++++
+ 6 files changed, 30 insertions(+)
+
+diff --git a/client/scripts/bsdos b/client/scripts/bsdos
+index d69d0d8..095b143 100755
+--- a/client/scripts/bsdos
++++ b/client/scripts/bsdos
+@@ -45,6 +45,11 @@ exit_with_hooks() {
+     . /etc/dhclient-exit-hooks
+   fi
+ # probably should do something with exit status of the local script
++  if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
++    dbus-send --system --dest=com.redhat.dhcp \
++      --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
++      'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
++  fi
+   exit $exit_status
+ }
+ 
+diff --git a/client/scripts/freebsd b/client/scripts/freebsd
+index 8f3e2a2..ad7fb44 100755
+--- a/client/scripts/freebsd
++++ b/client/scripts/freebsd
+@@ -89,6 +89,11 @@ exit_with_hooks() {
+     . /etc/dhclient-exit-hooks
+   fi
+ # probably should do something with exit status of the local script
++  if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
++    dbus-send --system --dest=com.redhat.dhcp \
++      --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
++      'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
++  fi
+   exit $exit_status
+ }
+ 
+diff --git a/client/scripts/linux b/client/scripts/linux
+index 5fb1612..3d447b6 100755
+--- a/client/scripts/linux
++++ b/client/scripts/linux
+@@ -174,6 +174,11 @@ exit_with_hooks() {
+         exit_status=$?
+     fi
+ 
++    if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
++        dbus-send --system --dest=com.redhat.dhcp \
++           --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
++           'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
++    fi
+     exit $exit_status
+ }
+ 
+diff --git a/client/scripts/netbsd b/client/scripts/netbsd
+index 07383b7..aaba8e8 100755
+--- a/client/scripts/netbsd
++++ b/client/scripts/netbsd
+@@ -45,6 +45,11 @@ exit_with_hooks() {
+     . /etc/dhclient-exit-hooks
+   fi
+ # probably should do something with exit status of the local script
++  if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
++    dbus-send --system --dest=com.redhat.dhcp \
++      --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
++      'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
++  fi
+   exit $exit_status
+ }
+ 
+diff --git a/client/scripts/openbsd b/client/scripts/openbsd
+index e7f4746..56b980c 100644
+--- a/client/scripts/openbsd
++++ b/client/scripts/openbsd
+@@ -45,6 +45,11 @@ exit_with_hooks() {
+     . /etc/dhclient-exit-hooks
+   fi
+ # probably should do something with exit status of the local script
++  if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
++    dbus-send --system --dest=com.redhat.dhcp \
++      --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
++      'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
++  fi
+   exit $exit_status
+ }
+ 
+diff --git a/client/scripts/solaris b/client/scripts/solaris
+index af553b9..4a2aa69 100755
+--- a/client/scripts/solaris
++++ b/client/scripts/solaris
+@@ -26,6 +26,11 @@ exit_with_hooks() {
+     . /etc/dhclient-exit-hooks
+   fi
+ # probably should do something with exit status of the local script
++  if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
++    dbus-send --system --dest=com.redhat.dhcp \
++      --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
++      'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
++  fi
+   exit $exit_status
+ }
+ 
+-- 
+1.8.3.1
+
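The hook added to each exit_with_hooks() above publishes the lease environment over the system bus when the dhc_dbus variable is set. A minimal way to watch for that call (a sketch; it assumes dbus-monitor is installed and dhclient runs with dhc_dbus exported):

    # watch for the com.redhat.dhcp method call emitted on lease changes
    dbus-monitor --system "type='method_call',interface='com.redhat.dhcp'"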
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0003-link-with-lcrypto.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0003-link-with-lcrypto.patch
new file mode 100644
index 0000000..810c7b6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0003-link-with-lcrypto.patch
@@ -0,0 +1,38 @@
+From d80bd792323dbd56269309f85b4506eb6b1b60e9 Mon Sep 17 00:00:00 2001
+From: Andrei Gherzan <andrei@gherzan.ro>
+Date: Tue, 15 Aug 2017 15:05:47 +0800
+Subject: [PATCH 03/11] link with lcrypto
+
+From the 4.2.0 final release onwards, the -lcrypto check was removed,
+and we now compile static libraries from bind that are linked
+against libcrypto.
+This is why I added a patch in order to add
+-lcrypto to LIBS.
+
+Upstream-Status: Pending
+Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
+
+Rebase to 4.3.6
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ configure.ac | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/configure.ac b/configure.ac
+index cdfa352..44fb57e 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -591,6 +591,10 @@ AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[]], [[void foo() __attribute__((noreturn));
+ # Look for optional headers.
+ AC_CHECK_HEADERS(sys/socket.h net/if_dl.h net/if6.h regex.h)
+ 
++# find an MD5 library
++AC_SEARCH_LIBS(MD5_Init, [crypto])
++AC_SEARCH_LIBS(MD5Init, [crypto])
++
+ # Solaris needs some libraries for functions
+ AC_SEARCH_LIBS(socket, [socket])
+ AC_SEARCH_LIBS(inet_ntoa, [nsl])
+-- 
+1.8.3.1
+
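With the two AC_SEARCH_LIBS probes added above, configure appends -lcrypto to LIBS whenever an MD5 implementation is found in libcrypto. A rough way to verify the result after configuring (a sketch; the exact config.log wording may vary between autoconf versions):

    ./configure
    grep '^LIBS=' config.log   # expect something like LIBS='-lcrypto ...'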
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0004-Fix-out-of-tree-builds.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0004-Fix-out-of-tree-builds.patch
new file mode 100644
index 0000000..7d1d867
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0004-Fix-out-of-tree-builds.patch
@@ -0,0 +1,100 @@
+From cccec0344d68dac4100b6f260ee24e7c2da9dfda Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Tue, 15 Aug 2017 15:08:22 +0800
+Subject: [PATCH 04/11] Fix out of tree builds
+
+Upstream-Status: Pending
+
+RP 2013/03/21
+
+Rebase to 4.3.6
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ client/Makefile.am  | 4 ++--
+ common/Makefile.am  | 3 ++-
+ dhcpctl/Makefile.am | 2 ++
+ omapip/Makefile.am  | 1 +
+ relay/Makefile.am   | 2 +-
+ server/Makefile.am  | 2 +-
+ 6 files changed, 9 insertions(+), 5 deletions(-)
+
+diff --git a/client/Makefile.am b/client/Makefile.am
+index 2cb83d8..4730bb3 100644
+--- a/client/Makefile.am
++++ b/client/Makefile.am
+@@ -7,11 +7,11 @@ SUBDIRS = . tests
+ BINDLIBDIR = @BINDDIR@/lib
+ 
+ AM_CPPFLAGS = -DCLIENT_PATH='"PATH=$(sbindir):/sbin:/bin:/usr/sbin:/usr/bin"' \
+-	      -DLOCALSTATEDIR='"$(localstatedir)"'
++	      -DLOCALSTATEDIR='"$(localstatedir)"' -I$(top_srcdir)/includes
+ 
+ dist_sysconf_DATA = dhclient.conf.example
+ sbin_PROGRAMS = dhclient
+-dhclient_SOURCES = clparse.c dhclient.c dhc6.c \
++dhclient_SOURCES = $(srcdir)/clparse.c $(srcdir)/dhclient.c $(srcdir)/dhc6.c \
+ 		   scripts/bsdos scripts/freebsd scripts/linux scripts/macos \
+ 		   scripts/netbsd scripts/nextstep scripts/openbsd \
+ 		   scripts/solaris scripts/openwrt
+diff --git a/common/Makefile.am b/common/Makefile.am
+index 113aee8..0f24fbb 100644
+--- a/common/Makefile.am
++++ b/common/Makefile.am
+@@ -1,4 +1,5 @@
+-AM_CPPFLAGS = -I$(top_srcdir) -DLOCALSTATEDIR='"@localstatedir@"'
++AM_CPPFLAGS = -I$(top_srcdir)/includes -I$(top_srcdir) -DLOCALSTATEDIR='"@localstatedir@"'
++
+ AM_CFLAGS = $(LDAP_CFLAGS)
+ 
+ noinst_LIBRARIES = libdhcp.a
+diff --git a/dhcpctl/Makefile.am b/dhcpctl/Makefile.am
+index ceb0de1..ba8dd8b 100644
+--- a/dhcpctl/Makefile.am
++++ b/dhcpctl/Makefile.am
+@@ -1,5 +1,7 @@
+ BINDLIBDIR = @BINDDIR@/lib
+ 
++AM_CPPFLAGS = -I$(top_srcdir)/includes -I$(top_srcdir)
++
+ bin_PROGRAMS = omshell
+ lib_LIBRARIES = libdhcpctl.a
+ noinst_PROGRAMS = cltest
+diff --git a/omapip/Makefile.am b/omapip/Makefile.am
+index 446a594..dd1afa0 100644
+--- a/omapip/Makefile.am
++++ b/omapip/Makefile.am
+@@ -1,4 +1,5 @@
+ BINDLIBDIR = @BINDDIR@/lib
++AM_CPPFLAGS = -I$(top_srcdir)/includes
+ 
+ lib_LIBRARIES = libomapi.a
+ noinst_PROGRAMS = svtest
+diff --git a/relay/Makefile.am b/relay/Makefile.am
+index 3060eca..6d652f6 100644
+--- a/relay/Makefile.am
++++ b/relay/Makefile.am
+@@ -1,6 +1,6 @@
+ BINDLIBDIR = @BINDDIR@/lib
+ 
+-AM_CPPFLAGS = -DLOCALSTATEDIR='"@localstatedir@"'
++AM_CPPFLAGS = -DLOCALSTATEDIR='"@localstatedir@"' -I$(top_srcdir)/includes
+ 
+ sbin_PROGRAMS = dhcrelay
+ dhcrelay_SOURCES = dhcrelay.c
+diff --git a/server/Makefile.am b/server/Makefile.am
+index 54feedf..3990b9c 100644
+--- a/server/Makefile.am
++++ b/server/Makefile.am
+@@ -6,7 +6,7 @@ SUBDIRS = . tests
+ 
+ BINDLIBDIR = @BINDDIR@/lib
+ 
+-AM_CPPFLAGS = -I.. -DLOCALSTATEDIR='"@localstatedir@"'
++AM_CPPFLAGS = -I$(top_srcdir) -DLOCALSTATEDIR='"@localstatedir@"' -I$(top_srcdir)/includes
+ 
+ dist_sysconf_DATA = dhcpd.conf.example
+ sbin_PROGRAMS = dhcpd
+-- 
+1.8.3.1
+
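The added -I$(top_srcdir)/includes flags and $(srcdir)/ prefixes are what let the build run from a separate build directory. A minimal sketch of the layout this patch targets (paths are illustrative only):

    # configure and build outside the source tree
    mkdir -p /tmp/dhcp-build && cd /tmp/dhcp-build
    /path/to/dhcp-4.3.6/configure
    make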
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0005-dhcp-client-fix-invoke-dhclient-script-failed-on-Rea.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0005-dhcp-client-fix-invoke-dhclient-script-failed-on-Rea.patch
new file mode 100644
index 0000000..dd56381
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0005-dhcp-client-fix-invoke-dhclient-script-failed-on-Rea.patch
@@ -0,0 +1,36 @@
+From 2e8ff0e4f6d39e346ea86b8c514ab4ccc78fa359 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Tue, 15 Aug 2017 15:24:14 +0800
+Subject: [PATCH 05/11] dhcp-client: fix invoke dhclient-script failed on
+ Read-only file system
+
+In a read-only file system, '/etc' is on the read-only partition,
+and '/etc/resolv.conf' is symlinked to a separate writable
+partition.
+
+In this situation, we create the temp file 'resolv.conf.dhclient-new'
+in the /tmp dir.
+
+Upstream-Status: Pending
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ client/scripts/linux | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/client/scripts/linux b/client/scripts/linux
+index 3d447b6..3122a75 100755
+--- a/client/scripts/linux
++++ b/client/scripts/linux
+@@ -40,7 +40,7 @@ make_resolv_conf() {
+     # DHCPv4
+     if [ -n "$new_domain_search" ] || [ -n "$new_domain_name" ] ||
+        [ -n "$new_domain_name_servers" ]; then
+-        new_resolv_conf=/etc/resolv.conf.dhclient-new
++        new_resolv_conf=/tmp/resolv.conf.dhclient-new
+         rm -f $new_resolv_conf
+ 
+         if [ -n "$new_domain_name" ]; then
+-- 
+1.8.3.1
+
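The /tmp staging path above assumes a rootfs layout roughly like the one sketched below; the writable target of the symlink is an assumption for illustration, not something this patch dictates:

    # /etc is mounted read-only, but resolv.conf points at a writable file
    ln -sf /var/run/resolv.conf /etc/resolv.conf
    # dhclient-script can then stage its output in /tmp ...
    new_resolv_conf=/tmp/resolv.conf.dhclient-new
    echo "nameserver 192.0.2.1" > "$new_resolv_conf"
    # ... and write it through the symlink onto the writable partition
    cat "$new_resolv_conf" > /etc/resolv.conf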
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0006-site.h-enable-gentle-shutdown.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0006-site.h-enable-gentle-shutdown.patch
new file mode 100644
index 0000000..c62b283
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0006-site.h-enable-gentle-shutdown.patch
@@ -0,0 +1,30 @@
+From 01641d146e4e6bea954e4a4ee1f6230b822665b4 Mon Sep 17 00:00:00 2001
+From: Chen Qi <Qi.Chen@windriver.com>
+Date: Tue, 15 Aug 2017 15:37:49 +0800
+Subject: [PATCH 06/11] site.h: enable gentle shutdown
+
+Upstream-Status: Inappropriate [configuration]
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+
+Rebase to 4.3.6
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ includes/site.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/includes/site.h b/includes/site.h
+index 280fbb9..e6c2972 100644
+--- a/includes/site.h
++++ b/includes/site.h
+@@ -296,7 +296,7 @@
+    situations.  We plan to revisit this feature and may
+    make non-backwards compatible changes including the
+    removal of this define.  Use at your own risk.  */
+-/* #define ENABLE_GENTLE_SHUTDOWN */
++#define ENABLE_GENTLE_SHUTDOWN
+ 
+ /* Include old error codes.  This is provided in case you
+    are building an external program similar to omshell for
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0007-Add-configure-argument-to-make-the-libxml2-dependenc.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0007-Add-configure-argument-to-make-the-libxml2-dependenc.patch
new file mode 100644
index 0000000..43c26ea
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0007-Add-configure-argument-to-make-the-libxml2-dependenc.patch
@@ -0,0 +1,42 @@
+From 7107511fd209f08f9a96f8938041ae48f3295895 Mon Sep 17 00:00:00 2001
+From: Christopher Larson <chris_larson@mentor.com>
+Date: Tue, 15 Aug 2017 16:17:49 +0800
+Subject: [PATCH 07/11] Add configure argument to make the libxml2 dependency
+ explicit and deterministic.
+
+Upstream-Status: Pending
+
+Signed-off-by: Christopher Larson <chris_larson@mentor.com>
+
+Rebase to 4.3.6
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ configure.ac | 11 +++++++++++
+ 1 file changed, 11 insertions(+)
+
+diff --git a/configure.ac b/configure.ac
+index 44fb57e..8e9f509 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -611,6 +611,17 @@ AC_CHECK_FUNCS(strlcat)
+ # For HP/UX we need -lipv6 for if_nametoindex, perhaps others.
+ AC_SEARCH_LIBS(if_nametoindex, [ipv6])
+ 
++AC_ARG_WITH(libxml2,
++	AS_HELP_STRING([--with-libxml2], [link against libxml2. this is needed if bind was built with xml2 support enabled]),
++	with_libxml2="$withval", with_libxml2="no")
++
++if test x$with_libxml2 != xno; then
++	AC_SEARCH_LIBS(xmlTextWriterStartElement, [xml2],
++		[if test x$with_libxml2 != xauto; then
++			AC_MSG_FAILURE([*** Cannot find xmlTextWriterStartElement with -lxml2 and libxml2 was requested])
++		fi])
++fi
++
+ # check for /dev/random (declares HAVE_DEV_RANDOM)
+ AC_MSG_CHECKING(for random device)
+ AC_ARG_WITH(randomdev,
+-- 
+1.8.3.1
+
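In practice the new AC_ARG_WITH above turns the libxml2 link into an explicit, deterministic configure choice. A usage sketch:

    ./configure --with-libxml2      # require -lxml2; configure fails if the symbol is missing
    ./configure --without-libxml2   # default: do not link against libxml2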
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0008-tweak-to-support-external-bind.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0008-tweak-to-support-external-bind.patch
new file mode 100644
index 0000000..006d18a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0008-tweak-to-support-external-bind.patch
@@ -0,0 +1,117 @@
+From 92875f5cc44914515e50c11c503a09cec90497b2 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Sat, 11 Jun 2016 22:51:44 -0400
+Subject: [PATCH 08/11] tweak to support external bind
+
+Point the external bind at oe-core's sysroot rather than at an
+external bind source build.
+
+Upstream-Status: Inappropriate <oe-core specific>
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ client/Makefile.am       | 2 +-
+ client/tests/Makefile.am | 2 +-
+ common/tests/Makefile.am | 2 +-
+ dhcpctl/Makefile.am      | 2 +-
+ omapip/Makefile.am       | 2 +-
+ relay/Makefile.am        | 2 +-
+ server/Makefile.am       | 2 +-
+ server/tests/Makefile.am | 2 +-
+ 8 files changed, 8 insertions(+), 8 deletions(-)
+
+diff --git a/client/Makefile.am b/client/Makefile.am
+index 4730bb3..84d8131 100644
+--- a/client/Makefile.am
++++ b/client/Makefile.am
+@@ -4,7 +4,7 @@
+ # production code. Sadly, we are not there yet.
+ SUBDIRS = . tests
+ 
+-BINDLIBDIR = @BINDDIR@/lib
++BINDLIBDIR = @BINDDIR@
+ 
+ AM_CPPFLAGS = -DCLIENT_PATH='"PATH=$(sbindir):/sbin:/bin:/usr/sbin:/usr/bin"' \
+ 	      -DLOCALSTATEDIR='"$(localstatedir)"' -I$(top_srcdir)/includes
+diff --git a/client/tests/Makefile.am b/client/tests/Makefile.am
+index 5031d0c..a8dfd26 100644
+--- a/client/tests/Makefile.am
++++ b/client/tests/Makefile.am
+@@ -1,6 +1,6 @@
+ SUBDIRS = .
+ 
+-BINDLIBDIR = @BINDDIR@/lib
++BINDLIBDIR = @BINDDIR@
+ 
+ AM_CPPFLAGS = $(ATF_CFLAGS) -DUNIT_TEST -I$(top_srcdir)/includes
+ AM_CPPFLAGS += -I@BINDDIR@/include -I$(top_srcdir)
+diff --git a/common/tests/Makefile.am b/common/tests/Makefile.am
+index f6a43e4..2f98d22 100644
+--- a/common/tests/Makefile.am
++++ b/common/tests/Makefile.am
+@@ -1,6 +1,6 @@
+ SUBDIRS = .
+ 
+-BINDLIBDIR = @BINDDIR@/lib
++BINDLIBDIR = @BINDDIR@
+ 
+ AM_CPPFLAGS = $(ATF_CFLAGS) -I$(top_srcdir)/includes
+ 
+diff --git a/dhcpctl/Makefile.am b/dhcpctl/Makefile.am
+index ba8dd8b..9b2486e 100644
+--- a/dhcpctl/Makefile.am
++++ b/dhcpctl/Makefile.am
+@@ -1,4 +1,4 @@
+-BINDLIBDIR = @BINDDIR@/lib
++BINDLIBDIR = @BINDDIR@
+ 
+ AM_CPPFLAGS = -I$(top_srcdir)/includes -I$(top_srcdir)
+ 
+diff --git a/omapip/Makefile.am b/omapip/Makefile.am
+index dd1afa0..e4a8599 100644
+--- a/omapip/Makefile.am
++++ b/omapip/Makefile.am
+@@ -1,4 +1,4 @@
+-BINDLIBDIR = @BINDDIR@/lib
++BINDLIBDIR = @BINDDIR@
+ AM_CPPFLAGS = -I$(top_srcdir)/includes
+ 
+ lib_LIBRARIES = libomapi.a
+diff --git a/relay/Makefile.am b/relay/Makefile.am
+index 6d652f6..b3bf578 100644
+--- a/relay/Makefile.am
++++ b/relay/Makefile.am
+@@ -1,4 +1,4 @@
+-BINDLIBDIR = @BINDDIR@/lib
++BINDLIBDIR = @BINDDIR@
+ 
+ AM_CPPFLAGS = -DLOCALSTATEDIR='"@localstatedir@"' -I$(top_srcdir)/includes
+ 
+diff --git a/server/Makefile.am b/server/Makefile.am
+index 3990b9c..b5d8c2d 100644
+--- a/server/Makefile.am
++++ b/server/Makefile.am
+@@ -4,7 +4,7 @@
+ # production code. Sadly, we are not there yet.
+ SUBDIRS = . tests
+ 
+-BINDLIBDIR = @BINDDIR@/lib
++BINDLIBDIR = @BINDDIR@
+ 
+ AM_CPPFLAGS = -I$(top_srcdir) -DLOCALSTATEDIR='"@localstatedir@"' -I$(top_srcdir)/includes
+ 
+diff --git a/server/tests/Makefile.am b/server/tests/Makefile.am
+index a87c5e7..9821081 100644
+--- a/server/tests/Makefile.am
++++ b/server/tests/Makefile.am
+@@ -1,6 +1,6 @@
+ SUBDIRS = .
+ 
+-BINDLIBDIR = @BINDDIR@/lib
++BINDLIBDIR = @BINDDIR@
+ 
+ AM_CPPFLAGS = $(ATF_CFLAGS) -DUNIT_TEST -I$(top_srcdir)/includes
+ AM_CPPFLAGS += -I@BINDDIR@/include -I$(top_srcdir)
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0009-remove-dhclient-script-bash-dependency.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0009-remove-dhclient-script-bash-dependency.patch
new file mode 100644
index 0000000..912b6d6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0009-remove-dhclient-script-bash-dependency.patch
@@ -0,0 +1,28 @@
+From f3f8b7726e50e24ef3edf5fa5a17e31d39118d7e Mon Sep 17 00:00:00 2001
+From: Andre McCurdy <armccurdy@gmail.com>
+Date: Tue, 15 Aug 2017 15:49:31 +0800
+Subject: [PATCH 09/11] remove dhclient-script bash dependency
+
+Upstream-Status: Inappropriate [OE specific]
+
+Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
+
+Rebase to 4.3.6
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ client/scripts/linux | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/client/scripts/linux b/client/scripts/linux
+index 3122a75..1712d7d 100755
+--- a/client/scripts/linux
++++ b/client/scripts/linux
+@@ -1,4 +1,4 @@
+-#!/bin/bash
++#!/bin/sh
+ # dhclient-script for Linux. Dan Halbert, March, 1997.
+ # Updated for Linux 2.[12] by Brian J. Murrell, January 1999.
+ # No guarantees about this. I'm a novice at the details of Linux
+-- 
+1.8.3.1
+
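After switching the interpreter to /bin/sh it is worth checking that the script stays POSIX-clean. A quick sketch (checkbashisms comes from devscripts and may not be installed):

    sh -n client/scripts/linux          # syntax check under the POSIX shell, no execution
    checkbashisms client/scripts/linux  # optional lint for bash-only constructs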
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0010-build-shared-libs.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0010-build-shared-libs.patch
new file mode 100644
index 0000000..f128731
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0010-build-shared-libs.patch
@@ -0,0 +1,208 @@
+From 76c370a929e5ab5dbc81c2fbcf4e50f4fbc08ce9 Mon Sep 17 00:00:00 2001
+From: Kai Kang <kai.kang@windriver.com>
+Date: Tue, 15 Aug 2017 15:53:37 +0800
+Subject: [PATCH 10/11] build shared libs
+
+Upstream-Status: Pending
+
+Port patches from Fedora to build shared libs rather than static libs.
+
+Signed-off-by: Kai Kang <kai.kang@windriver.com>
+
+Rebase to 4.3.6
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ client/Makefile.am       |  4 ++--
+ common/tests/Makefile.am | 13 +++++--------
+ configure.ac             | 12 ++----------
+ dhcpctl/Makefile.am      | 14 ++++++--------
+ omapip/Makefile.am       |  7 +++----
+ relay/Makefile.am        |  5 ++---
+ server/Makefile.am       |  7 +++----
+ server/tests/Makefile.am |  7 +++----
+ 8 files changed, 26 insertions(+), 43 deletions(-)
+
+diff --git a/client/Makefile.am b/client/Makefile.am
+index 84d8131..e776bf0 100644
+--- a/client/Makefile.am
++++ b/client/Makefile.am
+@@ -15,7 +15,7 @@ dhclient_SOURCES = $(srcdir)/clparse.c $(srcdir)/dhclient.c $(srcdir)/dhc6.c \
+ 		   scripts/bsdos scripts/freebsd scripts/linux scripts/macos \
+ 		   scripts/netbsd scripts/nextstep scripts/openbsd \
+ 		   scripts/solaris scripts/openwrt
+-dhclient_LDADD = ../common/libdhcp.a ../omapip/libomapi.a $(BINDLIBDIR)/libirs.a \
+-		 $(BINDLIBDIR)/libdns.a $(BINDLIBDIR)/libisccfg.a $(BINDLIBDIR)/libisc.a
++dhclient_LDADD = ../common/libdhcp.a ../omapip/libomapi.la \
++		  -L$(BINDLIBDIR) -lirs -ldns -lisccfg -lisc
+ man_MANS = dhclient.8 dhclient-script.8 dhclient.conf.5 dhclient.leases.5
+ EXTRA_DIST = $(man_MANS)
+diff --git a/common/tests/Makefile.am b/common/tests/Makefile.am
+index 2f98d22..8745e88 100644
+--- a/common/tests/Makefile.am
++++ b/common/tests/Makefile.am
+@@ -15,26 +15,23 @@ ATF_TESTS += alloc_unittest dns_unittest misc_unittest ns_name_unittest
+ alloc_unittest_SOURCES = test_alloc.c $(top_srcdir)/tests/t_api_dhcp.c
+ alloc_unittest_LDADD = $(ATF_LDFLAGS)
+ alloc_unittest_LDADD += ../libdhcp.a  \
+-	../../omapip/libomapi.a $(BINDLIBDIR)/libirs.a \
+-	$(BINDLIBDIR)/libdns.a $(BINDLIBDIR)/libisccfg.a  $(BINDLIBDIR)/libisc.a
++	../../omapip/libomapi.la -L$(BINDLIBDIR) -ldns -lisccfg -lisc
+ 
+ dns_unittest_SOURCES = dns_unittest.c $(top_srcdir)/tests/t_api_dhcp.c
+ dns_unittest_LDADD = $(ATF_LDFLAGS)
+ dns_unittest_LDADD += ../libdhcp.a  \
+-	../../omapip/libomapi.a $(BINDLIBDIR)/libirs.a \
+-	$(BINDLIBDIR)/libdns.a $(BINDLIBDIR)/libisccfg.a  $(BINDLIBDIR)/libisc.a
++	../../omapip/libomapi.la -L$(BINDLIBDIR) -ldns -lisccfg -lisc
+ 
+ misc_unittest_SOURCES = misc_unittest.c $(top_srcdir)/tests/t_api_dhcp.c
+ misc_unittest_LDADD = $(ATF_LDFLAGS)
+ misc_unittest_LDADD += ../libdhcp.a  \
+-	../../omapip/libomapi.a $(BINDLIBDIR)/libirs.a \
+-	$(BINDLIBDIR)/libdns.a $(BINDLIBDIR)/libisccfg.a  $(BINDLIBDIR)/libisc.a
++	../../omapip/libomapi.la -L$(BINDLIBDIR) -ldns -lisccfg -lisc
+ 
+ ns_name_unittest_SOURCES = ns_name_test.c $(top_srcdir)/tests/t_api_dhcp.c
+ ns_name_unittest_LDADD = $(ATF_LDFLAGS)
+ ns_name_unittest_LDADD += ../libdhcp.a  \
+-	../../omapip/libomapi.a $(BINDLIBDIR)/libirs.a \
+-	$(BINDLIBDIR)/libdns.a $(BINDLIBDIR)/libisccfg.a  $(BINDLIBDIR)/libisc.a
++	../../omapip/libomapi.a -L$(BINDLIBDIR) \
++	-ldns -lisccfg -lisc
+ 
+ check: $(ATF_TESTS)
+ 	@if test $(top_srcdir) != ${top_builddir}; then \
+diff --git a/configure.ac b/configure.ac
+index 8e9f509..bfe988a 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -47,16 +47,8 @@ AM_CONDITIONAL(CROSS_COMPILING, test "$cross_compiling" = "yes")
+ # Use this to define _GNU_SOURCE to pull in the IPv6 Advanced Socket API.
+ AC_USE_SYSTEM_EXTENSIONS
+ 
+-AC_PROG_RANLIB
+-
+-AC_PATH_PROG(AR, ar)
+-AC_SUBST(AR)
+-
+-if test "X$AR" = "X"; then
+-	AC_MSG_ERROR([
+-ar program not found.  Please fix your PATH to include the directory in
+-which ar resides, or set AR in the environment with the full path to ar.])
+-fi
++# Use libtool to simplify building of shared libraries
++AC_PROG_LIBTOOL
+ 
+ AC_CONFIG_HEADERS([includes/config.h])
+ 
+diff --git a/dhcpctl/Makefile.am b/dhcpctl/Makefile.am
+index 9b2486e..784cdf7 100644
+--- a/dhcpctl/Makefile.am
++++ b/dhcpctl/Makefile.am
+@@ -3,19 +3,17 @@ BINDLIBDIR = @BINDDIR@
+ AM_CPPFLAGS = -I$(top_srcdir)/includes -I$(top_srcdir)
+ 
+ bin_PROGRAMS = omshell
+-lib_LIBRARIES = libdhcpctl.a
++lib_LTLIBRARIES = libdhcpctl.la
+ noinst_PROGRAMS = cltest
+ man_MANS = omshell.1 dhcpctl.3
+ EXTRA_DIST = $(man_MANS)
+ 
+ omshell_SOURCES = omshell.c
+-omshell_LDADD = libdhcpctl.a ../common/libdhcp.a ../omapip/libomapi.a \
+-	        $(BINDLIBDIR)/libirs.a $(BINDLIBDIR)/libdns.a \
+-	        $(BINDLIBDIR)/libisccfg.a $(BINDLIBDIR)/libisc.a
++omshell_LDADD = libdhcpctl.la ../common/libdhcp.a ../omapip/libomapi.la \
++	        -L$(BINDLIBDIR) -lirs -ldns -lisccfg -lisc
+ 
+-libdhcpctl_a_SOURCES = dhcpctl.c callback.c remote.c
++libdhcpctl_la_SOURCES = dhcpctl.c callback.c remote.c
+ 
+ cltest_SOURCES = cltest.c
+-cltest_LDADD = libdhcpctl.a ../common/libdhcp.a ../omapip/libomapi.a \
+-	       $(BINDLIBDIR)/libirs.a $(BINDLIBDIR)/libdns.a \
+-               $(BINDLIBDIR)/libisccfg.a $(BINDLIBDIR)/libisc.a
++cltest_LDADD = libdhcpctl.la ../common/libdhcp.a ../omapip/libomapi.la \
++	       -L$(BINDLIBDIR) -lirs -ldns -lisccfg -lisc
+diff --git a/omapip/Makefile.am b/omapip/Makefile.am
+index e4a8599..c0c7a1e 100644
+--- a/omapip/Makefile.am
++++ b/omapip/Makefile.am
+@@ -1,10 +1,10 @@
+ BINDLIBDIR = @BINDDIR@
+ AM_CPPFLAGS = -I$(top_srcdir)/includes
+ 
+-lib_LIBRARIES = libomapi.a
++lib_LTLIBRARIES = libomapi.la
+ noinst_PROGRAMS = svtest
+ 
+-libomapi_a_SOURCES = protocol.c buffer.c alloc.c result.c connection.c \
++libomapi_la_SOURCES = protocol.c buffer.c alloc.c result.c connection.c \
+ 		     errwarn.c listener.c dispatch.c generic.c support.c \
+ 		     handle.c message.c convert.c hash.c auth.c inet_addr.c \
+ 		     array.c trace.c toisc.c iscprint.c isclib.c
+@@ -13,6 +13,5 @@ man_MANS = omapi.3
+ EXTRA_DIST = $(man_MANS)
+ 
+ svtest_SOURCES = test.c
+-svtest_LDADD = libomapi.a $(BINDLIBDIR)/libirs.a $(BINDLIBDIR)/libdns.a \
+-		$(BINDLIBDIR)/libisccfg.a $(BINDLIBDIR)/libisc.a
++svtest_LDADD = libomapi.la -L$(BINDLIBDIR) -lirs -ldns -lisccfg -lisc
+ 
+diff --git a/relay/Makefile.am b/relay/Makefile.am
+index b3bf578..f47009f 100644
+--- a/relay/Makefile.am
++++ b/relay/Makefile.am
+@@ -4,9 +4,8 @@ AM_CPPFLAGS = -DLOCALSTATEDIR='"@localstatedir@"' -I$(top_srcdir)/includes
+ 
+ sbin_PROGRAMS = dhcrelay
+ dhcrelay_SOURCES = dhcrelay.c
+-dhcrelay_LDADD = ../common/libdhcp.a ../omapip/libomapi.a \
+-		 $(BINDLIBDIR)/libirs.a $(BINDLIBDIR)/libdns.a \
+-		 $(BINDLIBDIR)/libisccfg.a $(BINDLIBDIR)/libisc.a
++dhcrelay_LDADD = ../common/libdhcp.a ../omapip/libomapi.la \
++		 -L$(BINDLIBDIR) -lirs -ldns -lisccfg -lisc
+ man_MANS = dhcrelay.8
+ EXTRA_DIST = $(man_MANS)
+ 
+diff --git a/server/Makefile.am b/server/Makefile.am
+index b5d8c2d..d7f876d 100644
+--- a/server/Makefile.am
++++ b/server/Makefile.am
+@@ -15,10 +15,9 @@ dhcpd_SOURCES = dhcpd.c dhcp.c bootp.c confpars.c db.c class.c failover.c \
+ 		dhcpv6.c mdb6.c ldap.c ldap_casa.c leasechain.c ldap_krb_helper.c
+ 
+ dhcpd_CFLAGS = $(LDAP_CFLAGS)
+-dhcpd_LDADD = ../common/libdhcp.a ../omapip/libomapi.a \
+-	      ../dhcpctl/libdhcpctl.a $(BINDLIBDIR)/libirs.a \
+-	      $(BINDLIBDIR)/libdns.a $(BINDLIBDIR)/libisccfg.a \
+-	      $(BINDLIBDIR)/libisc.a $(LDAP_LIBS)
++dhcpd_LDADD = ../common/libdhcp.a ../omapip/libomapi.la \
++	      ../dhcpctl/libdhcpctl.la -L$(BINDLIBDIR) \
++	      -lirs -ldns -lisccfg -lisc $(LDAP_LIBS)
+ 
+ man_MANS = dhcpd.8 dhcpd.conf.5 dhcpd.leases.5
+ EXTRA_DIST = $(man_MANS)
+diff --git a/server/tests/Makefile.am b/server/tests/Makefile.am
+index 9821081..de95872 100644
+--- a/server/tests/Makefile.am
++++ b/server/tests/Makefile.am
+@@ -19,10 +19,9 @@ DHCPSRC = ../dhcp.c ../bootp.c ../confpars.c ../db.c ../class.c      \
+           ../ddns.c ../dhcpleasequery.c ../dhcpv6.c ../mdb6.c        \
+           ../ldap.c ../ldap_casa.c ../dhcpd.c ../leasechain.c
+ 
+-DHCPLIBS = $(top_builddir)/common/libdhcp.a $(top_builddir)/omapip/libomapi.a \
+-          $(top_builddir)/dhcpctl/libdhcpctl.a $(BINDLIBDIR)/libirs.a \
+-	  $(BINDLIBDIR)/libdns.a $(BINDLIBDIR)/libisccfg.a \
+-	  $(BINDLIBDIR)/libisc.a
++DHCPLIBS = $(top_builddir)/common/libdhcp.a $(top_builddir)/omapip/libomapi.la \
++          $(top_builddir)/dhcpctl/libdhcpctl.la \
++          -L$(BINDLIBDIR) -lirs -ldns -lisccfg -lisc
+ 
+ ATF_TESTS =
+ if HAVE_ATF
+-- 
+1.8.3.1
+
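Because the libraries now go through libtool, libomapi and libdhcpctl should be produced as shared objects rather than static archives. A rough post-build check (file names are indicative only):

    find . -name 'libomapi.*' -o -name 'libdhcpctl.*'
    # expect .la files plus .so objects under .libs/ instead of only .a archives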
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0011-Moved-the-call-to-isc_app_ctxstart-to-not-get-signal.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0011-Moved-the-call-to-isc_app_ctxstart-to-not-get-signal.patch
new file mode 100644
index 0000000..67bb463
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0011-Moved-the-call-to-isc_app_ctxstart-to-not-get-signal.patch
@@ -0,0 +1,81 @@
+From 37725f3e22edb50e0ca2d1fff971321a5a4d5112 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Wed, 12 Jul 2017 03:05:13 -0400
+Subject: [PATCH 11/11] Moved the call to isc_app_ctxstart() to not get signal
+ block by all threads
+
+Signed-off-by: Francis Dupont <fdupont@isc.org>
+
+In https://source.isc.org/git/bind9.git, since the following
+commit applied:
+...
+commit b99bfa184bc9375421b5df915eea7dfac6a68a99
+Author: Evan Hunt <each@isc.org>
+Date:   Wed Apr 10 13:49:57 2013 -0700
+
+    [master] unify internal and export libraries
+
+    3550.       [func]          Unified the internal and export versions of the
+                        BIND libraries, allowing external clients to use
+                        the same libraries as BIND. [RT #33131]
+...
+(git show b99bfa184bc9375421b5df915eea7dfac6a68a99 -- ./lib/isc/unix/app.c)
+
+With this commit, if bind9 enables threads (ISC_PLATFORM_USETHREADS),
+it blocks the signals SIGHUP, SIGINT and SIGTERM in isc__app_ctxstart,
+which means dhclient/dhcpd could no longer be stopped by SIGTERM.
+
+This caused systemd's reboot to hang, since systemd sends SIGTERM by default.
+
+Upstream-Status: Backport [https://source.isc.org/git/dhcp.git]
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ omapip/isclib.c | 25 +++++++++++++++----------
+ 1 file changed, 15 insertions(+), 10 deletions(-)
+
+diff --git a/omapip/isclib.c b/omapip/isclib.c
+index ce86490..6a04345 100644
+--- a/omapip/isclib.c
++++ b/omapip/isclib.c
+@@ -185,16 +185,6 @@ dhcp_context_create(int flags,
+ 		if (result != ISC_R_SUCCESS)
+ 			goto cleanup;
+ 
+-		result = isc_app_ctxstart(dhcp_gbl_ctx.actx);
+-		if (result != ISC_R_SUCCESS)
+-			return (result);
+-		dhcp_gbl_ctx.actx_started = ISC_TRUE;
+-
+-		/* Not all OSs support suppressing SIGPIPE through socket
+-		 * options, so set the sigal action to be ignore.  This allows
+-		 * broken connections to fail gracefully with EPIPE on writes */
+-		handle_signal(SIGPIPE, SIG_IGN);
+-
+ 		result = isc_taskmgr_createinctx(dhcp_gbl_ctx.mctx,
+ 						 dhcp_gbl_ctx.actx,
+ 						 1, 0,
+@@ -217,6 +207,21 @@ dhcp_context_create(int flags,
+ 		result = isc_task_create(dhcp_gbl_ctx.taskmgr, 0, &dhcp_gbl_ctx.task);
+ 		if (result != ISC_R_SUCCESS)
+ 			goto cleanup;
++
++		result = isc_app_ctxstart(dhcp_gbl_ctx.actx);
++		if (result != ISC_R_SUCCESS)
++			return (result);
++		dhcp_gbl_ctx.actx_started = ISC_TRUE;
++
++		/* Not all OSs support suppressing SIGPIPE through socket
++		 * options, so set the sigal action to be ignore.  This allows
++		 * broken connections to fail gracefully with EPIPE on writes */
++		handle_signal(SIGPIPE, SIG_IGN);
++
++		/* Reset handlers installed by isc_app_ctxstart()
++		 * to default for control-c and kill */
++		handle_signal(SIGINT, SIG_DFL);
++		handle_signal(SIGTERM, SIG_DFL);
+ 	}
+ 
+ #if defined (NSUPDATE)
+-- 
+1.8.3.1
+
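The reordering above matters because isc_app_ctxstart() blocks SIGHUP, SIGINT and SIGTERM, and threads created afterwards inherit that mask; the call is therefore moved after the task, socket and timer managers are created, and the SIGINT/SIGTERM handlers are reset to their defaults. A crude way to confirm the behaviour (a sketch; eth0 is a placeholder interface name):

    dhclient -d eth0 &    # foreground client, backgrounded by the shell
    pid=$!
    sleep 2
    kill -TERM "$pid"     # with the fix the process exits instead of ignoring the signal
    wait "$pid"; echo "dhclient exit status: $?"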
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0012-dhcp-correct-the-intention-for-xml2-lib-search.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0012-dhcp-correct-the-intention-for-xml2-lib-search.patch
new file mode 100644
index 0000000..2d3af9d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/0012-dhcp-correct-the-intention-for-xml2-lib-search.patch
@@ -0,0 +1,37 @@
+From 501543b3ef715488a142e3d301ff2733aa33eec7 Mon Sep 17 00:00:00 2001
+From: Awais Belal <awais_belal@mentor.com>
+Date: Wed, 25 Oct 2017 21:00:05 +0500
+Subject: [PATCH] dhcp: correct the intention for xml2 lib search
+
+A missing case breaks the build when libxml2 is
+required and found appropriately. The third argument
+to the AC_SEARCH_LIBS macro is action-if-found, which
+was mistakenly used for the case where the library
+is not found, and hence it breaks the configure phase
+where it should actually pass.
+We now pass silently when action-if-found is
+executed.
+
+Upstream-Status: Pending
+
+Signed-off-by: Awais Belal <awais_belal@mentor.com>
+---
+ configure.ac | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/configure.ac b/configure.ac
+index bfe988a..f0459e6 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -608,7 +608,7 @@ AC_ARG_WITH(libxml2,
+ 	with_libxml2="$withval", with_libxml2="no")
+ 
+ if test x$with_libxml2 != xno; then
+-	AC_SEARCH_LIBS(xmlTextWriterStartElement, [xml2],
++	AC_SEARCH_LIBS(xmlTextWriterStartElement, [xml2],,
+ 		[if test x$with_libxml2 != xauto; then
+ 			AC_MSG_FAILURE([*** Cannot find xmlTextWriterStartElement with -lxml2 and libxml2 was requested])
+ 		fi])
+-- 
+2.11.1
+
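The extra comma above shifts the failure branch into action-if-not-found, where it belongs. The observable difference at configure time looks roughly like this (a sketch of the expected behaviour, not captured output):

    ./configure --with-libxml2
    # before the fix: configure aborted with the "Cannot find
    #                 xmlTextWriterStartElement" error even when -lxml2 worked
    # after the fix:  configure completes and -lxml2 ends up in LIBS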
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/define-macro-_PATH_DHCPD_CONF-and-_PATH_DHCLIENT_CON.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/define-macro-_PATH_DHCPD_CONF-and-_PATH_DHCLIENT_CON.patch
deleted file mode 100644
index 32bdaf0..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/define-macro-_PATH_DHCPD_CONF-and-_PATH_DHCLIENT_CON.patch
+++ /dev/null
@@ -1,26 +0,0 @@
-define macro _PATH_DHCPD_CONF and _PATH_DHCLIENT_CONF
-
-Upstream-Status: Inappropriate [OE specific]
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- includes/site.h | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/includes/site.h b/includes/site.h
-index d87b309..17bc40d 100644
---- a/includes/site.h
-+++ b/includes/site.h
-@@ -139,7 +139,8 @@
- /* Define this if you want the dhcpd.conf file to go somewhere other than
-    the default location.   By default, it goes in /etc/dhcpd.conf. */
- 
--/* #define _PATH_DHCPD_CONF	"/etc/dhcpd.conf" */
-+#define _PATH_DHCPD_CONF	"/etc/dhcp/dhcpd.conf"
-+#define _PATH_DHCLIENT_CONF	"/etc/dhcp/dhclient.conf"
- 
- /* Network API definitions.   You do not need to choose one of these - if
-    you don't choose, one will be chosen for you in your system's config
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/dhclient-script-drop-resolv.conf.dhclient.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/dhclient-script-drop-resolv.conf.dhclient.patch
deleted file mode 100644
index 96095a5..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/dhclient-script-drop-resolv.conf.dhclient.patch
+++ /dev/null
@@ -1,70 +0,0 @@
-dhcp-client: fix invoke dhclient-script failed on Read-only file system
-
-In read-only file system, '/etc' is on the readonly partition,
-and '/etc/resolv.conf' is symlinked to a separate writable
-partition.
-
-In this situation, we should use shell variable to instead of
-temp files '/etc/resolv.conf.dhclient' and '/etc/resolv.conf.dhclient6'.
-
-Upstream-Status: Pending
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- client/scripts/linux | 20 +++++++++-----------
- 1 file changed, 9 insertions(+), 11 deletions(-)
-
-diff --git a/client/scripts/linux b/client/scripts/linux
---- a/client/scripts/linux
-+++ b/client/scripts/linux
-@@ -27,27 +27,25 @@ ip=/sbin/ip
- 
- make_resolv_conf() {
-   if [ x"$new_domain_name_servers" != x ]; then
--    cat /dev/null > /etc/resolv.conf.dhclient
--    chmod 644 /etc/resolv.conf.dhclient
-+    resolv_conf=""
-     if [ x"$new_domain_search" != x ]; then
--      echo search $new_domain_search >> /etc/resolv.conf.dhclient
-+      resolv_conf="search ${new_domain_search}\n"
-     elif [ x"$new_domain_name" != x ]; then
-       # Note that the DHCP 'Domain Name Option' is really just a domain
-       # name, and that this practice of using the domain name option as
-       # a search path is both nonstandard and deprecated.
--      echo search $new_domain_name >> /etc/resolv.conf.dhclient
-+      resolv_conf="search ${new_domain_name}\n"
-     fi
-     for nameserver in $new_domain_name_servers; do
--      echo nameserver $nameserver >>/etc/resolv.conf.dhclient
-+      resolv_conf="${resolv_conf}nameserver ${nameserver}\n"
-     done
- 
--    mv /etc/resolv.conf.dhclient /etc/resolv.conf
-+    echo -e "${resolv_conf}" > /etc/resolv.conf
-   elif [ "x${new_dhcp6_name_servers}" != x ] ; then
--    cat /dev/null > /etc/resolv.conf.dhclient6
--    chmod 644 /etc/resolv.conf.dhclient6
-+    resolv_conf=""
- 
-     if [ "x${new_dhcp6_domain_search}" != x ] ; then
--      echo search ${new_dhcp6_domain_search} >> /etc/resolv.conf.dhclient6
-+      resolv_conf="search ${new_dhcp6_domain_search}\n"
-     fi
-     shopt -s nocasematch 
-     for nameserver in ${new_dhcp6_name_servers} ; do
-@@ -59,11 +57,11 @@ make_resolv_conf() {
-       else
- 	zone_id=
-       fi
--      echo nameserver ${nameserver}$zone_id >> /etc/resolv.conf.dhclient6
-+      resolv_conf="${resolv_conf}nameserver ${nameserver}$zone_id\n"
-     done
-     shopt -u nocasematch 
- 
--    mv /etc/resolv.conf.dhclient6 /etc/resolv.conf
-+    echo -e "${resolv_conf}" > /etc/resolv.conf
-   fi
- }
- 
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/dhcp-3.0.3-dhclient-dbus.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/dhcp-3.0.3-dhclient-dbus.patch
deleted file mode 100644
index b4a666d..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/dhcp-3.0.3-dhclient-dbus.patch
+++ /dev/null
@@ -1,86 +0,0 @@
-Upstream-Status: Inappropriate [distribution]
-
---- client/scripts/bsdos
-+++ client/scripts/bsdos
-@@ -47,6 +47,11 @@
-     . /etc/dhcp/dhclient-exit-hooks
-   fi
- # probably should do something with exit status of the local script
-+  if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
-+    dbus-send --system --dest=com.redhat.dhcp \
-+      --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
-+      'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
-+  fi
-   exit $exit_status
- }
- 
---- client/scripts/freebsd
-+++ client/scripts/freebsd
-@@ -57,6 +57,11 @@
-     . /etc/dhcp/dhclient-exit-hooks
-   fi
- # probably should do something with exit status of the local script
-+  if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
-+    dbus-send --system --dest=com.redhat.dhcp \
-+      --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
-+      'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
-+  fi
-   exit $exit_status
- }
- 
---- client/scripts/linux
-+++ client/scripts/linux
-@@ -69,6 +69,11 @@
-     . /etc/dhcp/dhclient-exit-hooks
-   fi
- # probably should do something with exit status of the local script
-+  if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
-+    dbus-send --system --dest=com.redhat.dhcp \
-+      --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
-+      'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
-+  fi
-   exit $exit_status
- }
- 
---- client/scripts/netbsd
-+++ client/scripts/netbsd
-@@ -47,6 +47,11 @@
-     . /etc/dhcp/dhclient-exit-hooks
-   fi
- # probably should do something with exit status of the local script
-+  if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
-+    dbus-send --system --dest=com.redhat.dhcp \
-+      --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
-+      'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
-+  fi
-   exit $exit_status
- }
- 
---- client/scripts/openbsd
-+++ client/scripts/openbsd
-@@ -47,6 +47,11 @@
-     . /etc/dhcp/dhclient-exit-hooks
-   fi
- # probably should do something with exit status of the local script
-+  if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
-+    dbus-send --system --dest=com.redhat.dhcp \
-+      --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
-+      'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
-+  fi
-   exit $exit_status
- }
- 
---- client/scripts/solaris
-+++ client/scripts/solaris
-@@ -47,6 +47,11 @@
-     . /etc/dhcp/dhclient-exit-hooks
-   fi
- # probably should do something with exit status of the local script
-+  if [ x$dhc_dbus != x -a $exit_status -eq 0 ]; then
-+    dbus-send --system --dest=com.redhat.dhcp \
-+      --type=method_call /com/redhat/dhcp/$interface com.redhat.dhcp.set \
-+      'string:'"`env | grep -Ev '^(PATH|SHLVL|_|PWD|dhc_dbus)\='`"
-+  fi
-   exit $exit_status
- }
- 
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/fixsepbuild.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/fixsepbuild.patch
deleted file mode 100644
index 2f44147..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/fixsepbuild.patch
+++ /dev/null
@@ -1,97 +0,0 @@
-Fix out of tree builds
-
-Upstream-Status: Pending
-
-RP 2013/03/21
-
-Rebase to 4.3.4
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- client/Makefile.am  | 4 ++--
- common/Makefile.am  | 3 ++-
- dhcpctl/Makefile.am | 2 ++
- omapip/Makefile.am  | 1 +
- relay/Makefile.am   | 2 +-
- server/Makefile.am  | 2 +-
- 6 files changed, 9 insertions(+), 5 deletions(-)
-
-diff --git a/client/Makefile.am b/client/Makefile.am
-index 2cb83d8..4730bb3 100644
---- a/client/Makefile.am
-+++ b/client/Makefile.am
-@@ -7,11 +7,11 @@ SUBDIRS = . tests
- BINDLIBDIR = @BINDDIR@/lib
- 
- AM_CPPFLAGS = -DCLIENT_PATH='"PATH=$(sbindir):/sbin:/bin:/usr/sbin:/usr/bin"' \
--	      -DLOCALSTATEDIR='"$(localstatedir)"'
-+	      -DLOCALSTATEDIR='"$(localstatedir)"' -I$(top_srcdir)/includes
- 
- dist_sysconf_DATA = dhclient.conf.example
- sbin_PROGRAMS = dhclient
--dhclient_SOURCES = clparse.c dhclient.c dhc6.c \
-+dhclient_SOURCES = $(srcdir)/clparse.c $(srcdir)/dhclient.c $(srcdir)/dhc6.c \
- 		   scripts/bsdos scripts/freebsd scripts/linux scripts/macos \
- 		   scripts/netbsd scripts/nextstep scripts/openbsd \
- 		   scripts/solaris scripts/openwrt
-diff --git a/common/Makefile.am b/common/Makefile.am
-index 113aee8..0f24fbb 100644
---- a/common/Makefile.am
-+++ b/common/Makefile.am
-@@ -1,4 +1,5 @@
--AM_CPPFLAGS = -I$(top_srcdir) -DLOCALSTATEDIR='"@localstatedir@"'
-+AM_CPPFLAGS = -I$(top_srcdir)/includes -I$(top_srcdir) -DLOCALSTATEDIR='"@localstatedir@"'
-+
- AM_CFLAGS = $(LDAP_CFLAGS)
- 
- noinst_LIBRARIES = libdhcp.a
-diff --git a/dhcpctl/Makefile.am b/dhcpctl/Makefile.am
-index ceb0de1..ba8dd8b 100644
---- a/dhcpctl/Makefile.am
-+++ b/dhcpctl/Makefile.am
-@@ -1,5 +1,7 @@
- BINDLIBDIR = @BINDDIR@/lib
- 
-+AM_CPPFLAGS = -I$(top_srcdir)/includes -I$(top_srcdir)
-+
- bin_PROGRAMS = omshell
- lib_LIBRARIES = libdhcpctl.a
- noinst_PROGRAMS = cltest
-diff --git a/omapip/Makefile.am b/omapip/Makefile.am
-index 446a594..dd1afa0 100644
---- a/omapip/Makefile.am
-+++ b/omapip/Makefile.am
-@@ -1,4 +1,5 @@
- BINDLIBDIR = @BINDDIR@/lib
-+AM_CPPFLAGS = -I$(top_srcdir)/includes
- 
- lib_LIBRARIES = libomapi.a
- noinst_PROGRAMS = svtest
-diff --git a/relay/Makefile.am b/relay/Makefile.am
-index 3060eca..6d652f6 100644
---- a/relay/Makefile.am
-+++ b/relay/Makefile.am
-@@ -1,6 +1,6 @@
- BINDLIBDIR = @BINDDIR@/lib
- 
--AM_CPPFLAGS = -DLOCALSTATEDIR='"@localstatedir@"'
-+AM_CPPFLAGS = -DLOCALSTATEDIR='"@localstatedir@"' -I$(top_srcdir)/includes
- 
- sbin_PROGRAMS = dhcrelay
- dhcrelay_SOURCES = dhcrelay.c
-diff --git a/server/Makefile.am b/server/Makefile.am
-index 54feedf..3990b9c 100644
---- a/server/Makefile.am
-+++ b/server/Makefile.am
-@@ -6,7 +6,7 @@ SUBDIRS = . tests
- 
- BINDLIBDIR = @BINDDIR@/lib
- 
--AM_CPPFLAGS = -I.. -DLOCALSTATEDIR='"@localstatedir@"'
-+AM_CPPFLAGS = -I$(top_srcdir) -DLOCALSTATEDIR='"@localstatedir@"' -I$(top_srcdir)/includes
- 
- dist_sysconf_DATA = dhcpd.conf.example
- sbin_PROGRAMS = dhcpd
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/libxml2-configure-argument.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/libxml2-configure-argument.patch
deleted file mode 100644
index 1435662..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/libxml2-configure-argument.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-Add configure argument to make the libxml2 dependency explicit and
-determinisitic.
-
-Upstream-Status: Pending
-
-Signed-off-by: Christopher Larson <chris_larson@mentor.com>
-
-Rebase to 4.3.4
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- configure.ac | 11 ++++++++++-
- 1 file changed, 10 insertions(+), 1 deletion(-)
-
-diff --git a/configure.ac b/configure.ac
-index 726c88e..1684df1 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -718,7 +718,16 @@ AC_SUBST(BINDSRCDIR)
- 
- # We need to find libxml2 if bind was built with support enabled
- # otherwise we'll fail to build omapip/test.c
--AC_SEARCH_LIBS(xmlTextWriterStartElement, [xml2],)
-+AC_ARG_WITH(libxml2,
-+	AS_HELP_STRING([--with-libxml2], [link against libxml2. this is needed if bind was built with xml2 support enabled]),
-+	with_libxml2="$withval", with_libxml2="no")
-+
-+if test x$with_libxml2 != xno; then
-+    AC_SEARCH_LIBS(xmlTextWriterStartElement, [xml2],
-+                   [if test x$with_libxml2 != xauto; then
-+                        AC_MSG_FAILURE([*** Cannot find xmlTextWriterStartElement with -lxml2 and libxml2 was requested])
-+                    fi])
-+fi
- 
- # OpenLDAP support.
- AC_ARG_WITH(ldap,
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/link-with-lcrypto.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/link-with-lcrypto.patch
deleted file mode 100644
index 0d0e0dd..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/link-with-lcrypto.patch
+++ /dev/null
@@ -1,34 +0,0 @@
-Author: Andrei Gherzan <andrei@gherzan.ro>
-Date:   Thu Feb 2 23:59:11 2012 +0200
-
-From 4.2.0 final release, -lcrypto check was removed and we compile static libraries
-from bind that are linked to libcrypto. This is why i added a patch in order to add
--lcrypto to LIBS.
-
-Upstream-Status: Pending
-Signed-off-by: Andrei Gherzan <andrei@gherzan.ro>
-
-Rebase to 4.3.4
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- configure.ac | 4 ++++
- 1 file changed, 4 insertions(+)
-
-diff --git a/configure.ac b/configure.ac
-index 097b0c3..726c88e 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -584,6 +584,10 @@ AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[]], [[void foo() __attribute__((noreturn));
- # Look for optional headers.
- AC_CHECK_HEADERS(sys/socket.h net/if_dl.h net/if6.h regex.h)
- 
-+# find an MD5 library
-+AC_SEARCH_LIBS(MD5_Init, [crypto])
-+AC_SEARCH_LIBS(MD5Init, [crypto])
-+
- # Solaris needs some libraries for functions
- AC_SEARCH_LIBS(socket, [socket])
- AC_SEARCH_LIBS(inet_ntoa, [nsl])
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/remove-dhclient-script-bash-dependency.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/remove-dhclient-script-bash-dependency.patch
deleted file mode 100644
index 997b9f6..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/remove-dhclient-script-bash-dependency.patch
+++ /dev/null
@@ -1,55 +0,0 @@
-From 8aed2a9ff09cb0d584ad0a7340fe3a596879d9b1 Mon Sep 17 00:00:00 2001
-From: Andre McCurdy <armccurdy@gmail.com>
-Date: Thu, 21 Jul 2016 19:07:02 -0700
-Subject: [PATCH] remove dhclient-script bash dependency
-
-Take the dash compatible IPv6 link-local address test from the Debian
-version of dhclient-script.
-
-Note that although "echo -e" in the OE version of dhclient-script is
-technically bash specific too, it is supported by Busybox echo when
-Busybox is configured with CONFIG_FEATURE_FANCY_ECHO enabled (which
-is the default in the OE Busybox defconfig) therefore leave as-is.
-
-Upstream-Status: Inappropriate [OE specific]
-
-Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
----
- client/scripts/linux | 7 +++----
- 1 file changed, 3 insertions(+), 4 deletions(-)
-
-diff --git a/client/scripts/linux b/client/scripts/linux
-index 232a0aa..1383f46 100755
---- a/client/scripts/linux
-+++ b/client/scripts/linux
-@@ -1,4 +1,4 @@
--#!/bin/bash
-+#!/bin/sh
- # dhclient-script for Linux. Dan Halbert, March, 1997.
- # Updated for Linux 2.[12] by Brian J. Murrell, January 1999.
- # No guarantees about this. I'm a novice at the details of Linux
-@@ -47,11 +47,11 @@ make_resolv_conf() {
-     if [ "x${new_dhcp6_domain_search}" != x ] ; then
-       resolv_conf="search ${new_dhcp6_domain_search}\n"
-     fi
--    shopt -s nocasematch 
-     for nameserver in ${new_dhcp6_name_servers} ; do
-       # If the nameserver has a link-local address
-       # add a <zone_id> (interface name) to it.
--      if  [[ "$nameserver" =~ ^fe80:: ]]
-+      if [ "${nameserver##fe80::}" != "$nameserver" ] ||
-+         [ "${nameserver##FE80::}" != "$nameserver" ]
-       then
- 	zone_id="%$interface"
-       else
-@@ -59,7 +59,6 @@ make_resolv_conf() {
-       fi
-       resolv_conf="${resolv_conf}nameserver ${nameserver}$zone_id\n"
-     done
--    shopt -u nocasematch 
- 
-     echo -e "${resolv_conf}" > /etc/resolv.conf
-   fi
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/replace-ifconfig-route.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/replace-ifconfig-route.patch
deleted file mode 100644
index d84df5c..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/replace-ifconfig-route.patch
+++ /dev/null
@@ -1,188 +0,0 @@
-Found this patch here:
-https://lists.isc.org/pipermail/dhcp-users/2011-January/012910.html
-
-and made some adjustments/updates to make it work with this version.
-Wasn't able to find that why this patch was not accepted by ISC DHCP developers.
-
-Upstream-Status: Pending
-
-Signed-off-by: Muhammad Shakeel <muhammad_shakeel@mentor.com>
-
-Rebase to 4.3.4
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- client/scripts/linux | 82 ++++++++++++++++++++++++++++------------------------
- 1 file changed, 45 insertions(+), 37 deletions(-)
-
-diff --git a/client/scripts/linux b/client/scripts/linux
-index a02cfd9..232a0aa 100755
---- a/client/scripts/linux
-+++ b/client/scripts/linux
-@@ -101,17 +101,11 @@ fi
- if [ x$old_broadcast_address != x ]; then
-   old_broadcast_arg="broadcast $old_broadcast_address"
- fi
--if [ x$new_subnet_mask != x ]; then
--  new_subnet_arg="netmask $new_subnet_mask"
-+if [ -n "$new_subnet_mask" ]; then
-+    new_mask="/$new_subnet_mask"
- fi
--if [ x$old_subnet_mask != x ]; then
--  old_subnet_arg="netmask $old_subnet_mask"
--fi
--if [ x$alias_subnet_mask != x ]; then
--  alias_subnet_arg="netmask $alias_subnet_mask"
--fi
--if [ x$new_interface_mtu != x ]; then
--  mtu_arg="mtu $new_interface_mtu"
-+if [ -n "$alias_subnet_mask" ]; then
-+    alias_mask="/$alias_subnet_mask"
- fi
- if [ x$IF_METRIC != x ]; then
-   metric_arg="metric $IF_METRIC"
-@@ -125,9 +119,9 @@ fi
- if [ x$reason = xPREINIT ]; then
-   if [ x$alias_ip_address != x ]; then
-     # Bring down alias interface. Its routes will disappear too.
--    ifconfig $interface:0- inet 0
-+    ${ip} -4 addr flush dev ${interface} label ${interface}:0
-   fi
--  ifconfig $interface 0 up
-+  ${ip} link set dev ${interface} up
- 
-   # We need to give the kernel some time to get the interface up.
-   sleep 1
-@@ -154,25 +148,30 @@ if [ x$reason = xBOUND ] || [ x$reason = xRENEW ] || \
-   if [ x$old_ip_address != x ] && [ x$alias_ip_address != x ] && \
- 		[ x$alias_ip_address != x$old_ip_address ]; then
-     # Possible new alias. Remove old alias.
--    ifconfig $interface:0- inet 0
-+    ${ip} -4 addr flush dev ${interface} label ${interface}:0
-   fi
-   if [ x$old_ip_address != x ] && [ x$old_ip_address != x$new_ip_address ]; then
-     # IP address changed. Bringing down the interface will delete all routes,
-     # and clear the ARP cache.
--    ifconfig $interface inet 0 down
-+    ${ip} -4 addr flush dev ${interface} label ${interface}
- 
-   fi
-   if [ x$old_ip_address = x ] || [ x$old_ip_address != x$new_ip_address ] || \
-      [ x$reason = xBOUND ] || [ x$reason = xREBOOT ]; then
- 
--    ifconfig $interface inet $new_ip_address $new_subnet_arg \
--					$new_broadcast_arg $mtu_arg
-+    ${ip} -4 addr add ${new_ip_address}${new_mask} ${new_broadcast_arg} \
-+                dev ${interface} label ${interface}
-+    if [ -n "$new_interface_mtu" ]; then
-+      # set MTU
-+      ${ip} link set dev ${interface} mtu ${new_interface_mtu}
-+    fi
-     # Add a network route to the computed network address.
-     for router in $new_routers; do
-       if [ "x$new_subnet_mask" = "x255.255.255.255" ] ; then
--	route add -host $router dev $interface
-+        ${ip} -4 route add ${router} dev $interface >/dev/null 2>&1
-       fi
--      route add default gw $router $metric_arg dev $interface
-+      ${ip} -4 route add default via ${router} dev ${interface} \
-+        ${metric_arg} >/dev/null 2>&1
-     done
-   else
-     # we haven't changed the address, have we changed other options           
-@@ -180,21 +179,23 @@ if [ x$reason = xBOUND ] || [ x$reason = xRENEW ] || \
-     if [ x$new_routers != x ] && [ x$new_routers != x$old_routers ] ; then
-       # if we've changed routers delete the old and add the new.
-       for router in $old_routers; do
--        route del default gw $router
-+        ${ip} -4 route delete default via ${router}
-       done
-       for router in $new_routers; do
-         if [ "x$new_subnet_mask" = "x255.255.255.255" ] ; then
--	  route add -host $router dev $interface
--	fi
--	route add default gw $router $metric_arg dev $interface
-+	      ${ip} -4 route add ${router} dev $interface >/dev/null 2>&1
-+	    fi
-+        ${ip} -4 route add default via ${router} dev ${interface} \
-+          ${metric_arg} >/dev/null 2>&1
-       done
-     fi
-   fi
-   if [ x$new_ip_address != x$alias_ip_address ] && [ x$alias_ip_address != x ];
-    then
--    ifconfig $interface:0- inet 0
--    ifconfig $interface:0 inet $alias_ip_address $alias_subnet_arg
--    route add -host $alias_ip_address $interface:0
-+    ${ip} -4 addr flush dev ${interface} label ${interface}:0
-+    ${ip} -4 addr add ${alias_ip_address}${alias_mask} \
-+        dev ${interface} label ${interface}:0
-+    ${ip} -4 route add ${alias_ip_address} dev ${interface} >/dev/null 2>&1
-   fi
-   make_resolv_conf
-   exit_with_hooks 0
-@@ -204,42 +205,49 @@ if [ x$reason = xEXPIRE ] || [ x$reason = xFAIL ] || [ x$reason = xRELEASE ] \
-    || [ x$reason = xSTOP ]; then
-   if [ x$alias_ip_address != x ]; then
-     # Turn off alias interface.
--    ifconfig $interface:0- inet 0
-+    ${ip} -4 addr flush dev ${interface} label ${interface}:0
-   fi
-   if [ x$old_ip_address != x ]; then
-     # Shut down interface, which will delete routes and clear arp cache.
--    ifconfig $interface inet 0 down
-+    ${ip} -4 addr flush dev ${interface} label ${interface}
-   fi
-   if [ x$alias_ip_address != x ]; then
--    ifconfig $interface:0 inet $alias_ip_address $alias_subnet_arg
--    route add -host $alias_ip_address $interface:0
-+    ${ip} -4 addr add ${alias_ip_address}${alias_network_arg} \
-+        dev ${interface} label ${interface}:0
-+    ${ip} -4 route add ${alias_ip_address} dev ${interface} >/dev/null 2>&1
-   fi
-   exit_with_hooks 0
- fi
- 
- if [ x$reason = xTIMEOUT ]; then
-   if [ x$alias_ip_address != x ]; then
--    ifconfig $interface:0- inet 0
-+    ${ip} -4 addr flush dev ${interface} label ${interface}:0
-+  fi
-+  ${ip} -4 addr add ${new_ip_address}${new_mask} ${new_broadcast_arg} \
-+            dev ${interface} label ${interface}
-+  if [ -n "$new_interface_mtu" ]; then
-+    # set MTU
-+    ip link set dev ${interface} mtu ${new_interface_mtu}
-   fi
--  ifconfig $interface inet $new_ip_address $new_subnet_arg \
--					$new_broadcast_arg $mtu_arg
-   set $new_routers
-   if ping -q -c 1 $1; then
-     if [ x$new_ip_address != x$alias_ip_address ] && \
- 			[ x$alias_ip_address != x ]; then
--      ifconfig $interface:0 inet $alias_ip_address $alias_subnet_arg
--      route add -host $alias_ip_address dev $interface:0
-+      ${ip} -4 addr add ${alias_ip_address}${alias_mask} \
-+            dev ${interface} label ${interface}:0
-+      ${ip} -4 route add ${alias_ip_address} dev ${interface} >/dev/null 2>&1
-     fi
-     for router in $new_routers; do
-       if [ "x$new_subnet_mask" = "x255.255.255.255" ] ; then
--	route add -host $router dev $interface
-+	    ${ip} -4 route add ${router} dev $interface >/dev/null 2>&1
-       fi
--      route add default gw $router $metric_arg dev $interface
-+      ${ip} -4 route add default via ${router} dev ${interface} \
-+        ${metric_arg} >/dev/null 2>&1
-     done
-     make_resolv_conf
-     exit_with_hooks 0
-   fi
--  ifconfig $interface inet 0 down
-+  ${ip} -4 addr flush dev ${interface}
-   exit_with_hooks 1
- fi
- 
--- 
-2.8.1
-
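For reference, the dhclient-script hunks above replace the legacy net-tools commands (ifconfig, route) with their iproute2 equivalents. A minimal sketch of the mapping, using an illustrative interface name and RFC 5737 example addresses rather than values taken from the script itself:

    # Legacy net-tools invocations (the form removed above)
    ifconfig eth0 192.0.2.10 netmask 255.255.255.0 broadcast 192.0.2.255
    route add default gw 192.0.2.1 dev eth0
    ifconfig eth0 0 down

    # Equivalent iproute2 invocations (the form added above)
    ip -4 addr add 192.0.2.10/24 broadcast 192.0.2.255 dev eth0
    ip -4 route add default via 192.0.2.1 dev eth0
    ip -4 addr flush dev eth0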
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/search-for-libxml2.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/search-for-libxml2.patch
deleted file mode 100644
index a08a5b7..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/search-for-libxml2.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-libdns requires libxml2 if bind was built with libxml2 support
-enabled. Compilation will fail for omapip/test.c in case
-libxml2 isn't used during the build. So, we add a loosely coupled
-search path which will pick up the lib if it is present.
-
-Signed-off-by: Awais Belal <awais_belal@mentor.com>
-Upstream-Status: Pending
-
-diff --git a/configure.ac b/configure.ac
-index c9dc8b5..85f59be 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -602,6 +602,10 @@ no)
- esac
- AC_SUBST([libbind])
- 
-+# We need to find libxml2 if bind was built with support enabled
-+# otherwise we'll fail to build omapip/test.c
-+AC_SEARCH_LIBS(xmlTextWriterStartElement, [xml2],)
-+
- # OpenLDAP support.
- AC_ARG_WITH(ldap,
-     AS_HELP_STRING([--with-ldap],[enable OpenLDAP support in dhcpd (default is no)]),
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/tweak-to-support-external-bind.patch b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/tweak-to-support-external-bind.patch
deleted file mode 100644
index 03c6abb..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp/tweak-to-support-external-bind.patch
+++ /dev/null
@@ -1,117 +0,0 @@
-From ad7bb401f47714fc30c408853b796ce0f1c7e65f Mon Sep 17 00:00:00 2001
-From: Hongxu Jia <hongxu.jia@windriver.com>
-Date: Sat, 11 Jun 2016 22:51:44 -0400
-Subject: [PATCH] tweak to support external bind
-
-Tweak the external bind search path to point at oe-core's sysroot
-rather than at the external bind source build.
-
-Upstream-Status: Inappropriate <oe-core specific>
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- client/Makefile.am       | 2 +-
- client/tests/Makefile.am | 2 +-
- common/tests/Makefile.am | 2 +-
- dhcpctl/Makefile.am      | 2 +-
- omapip/Makefile.am       | 2 +-
- relay/Makefile.am        | 2 +-
- server/Makefile.am       | 2 +-
- server/tests/Makefile.am | 2 +-
- 8 files changed, 8 insertions(+), 8 deletions(-)
-
-diff --git a/client/Makefile.am b/client/Makefile.am
-index 4730bb3..84d8131 100644
---- a/client/Makefile.am
-+++ b/client/Makefile.am
-@@ -4,7 +4,7 @@
- # production code. Sadly, we are not there yet.
- SUBDIRS = . tests
- 
--BINDLIBDIR = @BINDDIR@/lib
-+BINDLIBDIR = @BINDDIR@
- 
- AM_CPPFLAGS = -DCLIENT_PATH='"PATH=$(sbindir):/sbin:/bin:/usr/sbin:/usr/bin"' \
- 	      -DLOCALSTATEDIR='"$(localstatedir)"' -I$(top_srcdir)/includes
-diff --git a/client/tests/Makefile.am b/client/tests/Makefile.am
-index da69ea9..fe35e57 100644
---- a/client/tests/Makefile.am
-+++ b/client/tests/Makefile.am
-@@ -1,6 +1,6 @@
- SUBDIRS = .
- 
--BINDLIBDIR = @BINDDIR@/lib
-+BINDLIBDIR = @BINDDIR@
- 
- AM_CPPFLAGS = $(ATF_CFLAGS) -DUNIT_TEST -I$(top_srcdir)/includes
- AM_CPPFLAGS += -I@BINDDIR@/include -I$(top_srcdir)
-diff --git a/common/tests/Makefile.am b/common/tests/Makefile.am
-index f8d6b0e..05cd9c1 100644
---- a/common/tests/Makefile.am
-+++ b/common/tests/Makefile.am
-@@ -1,6 +1,6 @@
- SUBDIRS = .
- 
--BINDLIBDIR = @BINDDIR@/lib
-+BINDLIBDIR = @BINDDIR@
- 
- AM_CPPFLAGS = $(ATF_CFLAGS) -I$(top_srcdir)/includes
- 
-diff --git a/dhcpctl/Makefile.am b/dhcpctl/Makefile.am
-index ba8dd8b..9b2486e 100644
---- a/dhcpctl/Makefile.am
-+++ b/dhcpctl/Makefile.am
-@@ -1,4 +1,4 @@
--BINDLIBDIR = @BINDDIR@/lib
-+BINDLIBDIR = @BINDDIR@
- 
- AM_CPPFLAGS = -I$(top_srcdir)/includes -I$(top_srcdir)
- 
-diff --git a/omapip/Makefile.am b/omapip/Makefile.am
-index dd1afa0..e4a8599 100644
---- a/omapip/Makefile.am
-+++ b/omapip/Makefile.am
-@@ -1,4 +1,4 @@
--BINDLIBDIR = @BINDDIR@/lib
-+BINDLIBDIR = @BINDDIR@
- AM_CPPFLAGS = -I$(top_srcdir)/includes
- 
- lib_LIBRARIES = libomapi.a
-diff --git a/relay/Makefile.am b/relay/Makefile.am
-index 6d652f6..b3bf578 100644
---- a/relay/Makefile.am
-+++ b/relay/Makefile.am
-@@ -1,4 +1,4 @@
--BINDLIBDIR = @BINDDIR@/lib
-+BINDLIBDIR = @BINDDIR@
- 
- AM_CPPFLAGS = -DLOCALSTATEDIR='"@localstatedir@"' -I$(top_srcdir)/includes
- 
-diff --git a/server/Makefile.am b/server/Makefile.am
-index 3990b9c..b5d8c2d 100644
---- a/server/Makefile.am
-+++ b/server/Makefile.am
-@@ -4,7 +4,7 @@
- # production code. Sadly, we are not there yet.
- SUBDIRS = . tests
- 
--BINDLIBDIR = @BINDDIR@/lib
-+BINDLIBDIR = @BINDDIR@
- 
- AM_CPPFLAGS = -I$(top_srcdir) -DLOCALSTATEDIR='"@localstatedir@"' -I$(top_srcdir)/includes
- 
-diff --git a/server/tests/Makefile.am b/server/tests/Makefile.am
-index 65a9f74..2892309 100644
---- a/server/tests/Makefile.am
-+++ b/server/tests/Makefile.am
-@@ -1,6 +1,6 @@
- SUBDIRS = .
- 
--BINDLIBDIR = @BINDDIR@/lib
-+BINDLIBDIR = @BINDDIR@
- 
- AM_CPPFLAGS = $(ATF_CFLAGS) -DUNIT_TEST -I$(top_srcdir)/includes
- AM_CPPFLAGS += -I@BINDDIR@/include -I$(top_srcdir)
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp_4.3.5.bb b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp_4.3.5.bb
deleted file mode 100644
index 678c29a..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp_4.3.5.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-require dhcp.inc
-
-SRC_URI += "file://dhcp-3.0.3-dhclient-dbus.patch;striplevel=0 \
-            file://link-with-lcrypto.patch \
-            file://fixsepbuild.patch \
-            file://dhclient-script-drop-resolv.conf.dhclient.patch \
-            file://replace-ifconfig-route.patch \
-            file://0001-site.h-enable-gentle-shutdown.patch \
-            file://libxml2-configure-argument.patch \
-            file://tweak-to-support-external-bind.patch \
-            file://remove-dhclient-script-bash-dependency.patch \
-           "
-
-SRC_URI[md5sum] = "2b5e5b2fa31c2e27e487039d86f83d3f"
-SRC_URI[sha256sum] = "eb95936bf15d2393c55dd505bc527d1d4408289cec5a9fa8abb99f7577e7f954"
-
-PACKAGECONFIG ?= ""
-PACKAGECONFIG[bind-httpstats] = "--with-libxml2,--without-libxml2,libxml2"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp_4.3.6.bb b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp_4.3.6.bb
new file mode 100644
index 0000000..6615ae2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/dhcp_4.3.6.bb
@@ -0,0 +1,21 @@
+require dhcp.inc
+
+SRC_URI += "file://0001-define-macro-_PATH_DHCPD_CONF-and-_PATH_DHCLIENT_CON.patch \
+            file://0002-dhclient-dbus.patch \
+            file://0003-link-with-lcrypto.patch \
+            file://0004-Fix-out-of-tree-builds.patch \
+            file://0005-dhcp-client-fix-invoke-dhclient-script-failed-on-Rea.patch \
+            file://0006-site.h-enable-gentle-shutdown.patch \
+            file://0007-Add-configure-argument-to-make-the-libxml2-dependenc.patch \
+            file://0008-tweak-to-support-external-bind.patch \
+            file://0009-remove-dhclient-script-bash-dependency.patch \
+            file://0010-build-shared-libs.patch \
+            file://0011-Moved-the-call-to-isc_app_ctxstart-to-not-get-signal.patch \
+            file://0012-dhcp-correct-the-intention-for-xml2-lib-search.patch \
+           "
+
+SRC_URI[md5sum] = "afa6e9b3eb7539ea048421a82c668adc"
+SRC_URI[sha256sum] = "a41eaf6364f1377fe065d35671d9cf82bbbc8f21207819b2b9f33f652aec6f1b"
+
+PACKAGECONFIG ?= ""
+PACKAGECONFIG[bind-httpstats] = "--with-libxml2,--without-libxml2,libxml2"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/files/dhclient-systemd-wrapper b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/files/dhclient-systemd-wrapper
new file mode 100644
index 0000000..7d0e224
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/files/dhclient-systemd-wrapper
@@ -0,0 +1,39 @@
+#!/bin/sh
+
+# In case the interface is used for nfs, skip it.
+nfsroot=0
+interfaces=""
+exec 9<&0 < /proc/mounts
+while read dev mtpt fstype rest; do
+    if test $mtpt = "/" ; then
+        case $fstype in
+            nfs | nfs4)
+                nfsroot=1
+                nfs_addr=`echo $rest | sed -e 's/^.*addr=\([0-9.]*\).*$/\1/'`
+                break
+                ;;
+            *)
+                ;;
+        esac
+    fi
+done
+exec 0<&9 9<&-
+
+if [ $nfsroot -eq 0 ]; then
+    interfaces="$INTERFACES"
+else
+    if [ -x /bin/ip -o -x /sbin/ip ] ; then
+	nfs_iface=`ip route get $nfs_addr | grep dev | sed -e 's/^.*dev \([-a-z0-9.]*\).*$/\1/'`
+    fi
+    for i in $INTERFACES; do
+	if test "x$i" = "x$nfs_iface"; then
+            echo "dhclient skipping nfsroot interface $i"
+	else
+	    interfaces="$interfaces $i"
+	fi
+    done
+fi
+
+if test "x$interfaces" != "x"; then
+    /sbin/dhclient -d -cf /etc/dhcp/dhclient.conf -q -lf /var/lib/dhcp/dhclient.leases $interfaces
+fi
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/files/dhclient.service b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/files/dhclient.service
new file mode 100644
index 0000000..9ddb4d1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/dhcp/files/dhclient.service
@@ -0,0 +1,13 @@
+[Unit]
+Description=Dynamic Host Configuration Protocol (DHCP)
+Wants=network.target
+Before=network.target
+After=systemd-udevd.service
+
+[Service]
+EnvironmentFile=-@SYSCONFDIR@/default/dhcp-client
+ExecStart=@BASE_SBINDIR@/dhclient-systemd-wrapper
+RemainAfterExit=yes
+
+[Install]
+WantedBy=multi-user.target
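The wrapper started by this unit takes its interface list from the INTERFACES variable, which the EnvironmentFile= line above loads from @SYSCONFDIR@/default/dhcp-client. As an illustration only (the sample contents below are an assumption, not something shipped by this recipe), such a file could look like:

    # Hypothetical /etc/default/dhcp-client read via EnvironmentFile=
    # Interfaces dhclient should manage; dhclient-systemd-wrapper skips
    # any interface that carries the NFS root.
    INTERFACES="eth0 eth1"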
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2.inc b/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2.inc
index ce64888..a578eb3 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2.inc
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2.inc
@@ -13,7 +13,10 @@
 
 inherit update-alternatives bash-completion pkgconfig
 
-EXTRA_OEMAKE = "CC='${CC}' KERNEL_INCLUDE=${STAGING_INCDIR} DOCDIR=${docdir}/iproute2 SUBDIRS='lib tc ip bridge misc genl' SBINDIR='${base_sbindir}' LIBDIR='${libdir}'"
+PACKAGECONFIG ??= "tipc"
+PACKAGECONFIG[tipc] = ",,libmnl,"
+
+EXTRA_OEMAKE = "CC='${CC}' KERNEL_INCLUDE=${STAGING_INCDIR} DOCDIR=${docdir}/iproute2 SUBDIRS='lib tc ip bridge misc genl ${@bb.utils.contains('PACKAGECONFIG', 'tipc', 'tipc', '', d)}' SBINDIR='${base_sbindir}' LIBDIR='${libdir}'"
 
 do_configure_append () {
     sh configure ${STAGING_INCDIR}
@@ -32,7 +35,7 @@
 # The .so files in iproute2-tc are modules, not traditional libraries
 INSANE_SKIP_${PN}-tc = "dev-so"
 
-PACKAGES =+ "${PN}-tc ${PN}-lnstat ${PN}-ifstat ${PN}-genl ${PN}-rtacct ${PN}-nstat ${PN}-ss"
+PACKAGES =+ "${PN}-tc ${PN}-lnstat ${PN}-ifstat ${PN}-genl ${PN}-rtacct ${PN}-nstat ${PN}-ss ${@bb.utils.contains('PACKAGECONFIG', 'tipc', '${PN}-tipc', '', d)}"
 FILES_${PN}-tc = "${base_sbindir}/tc* \
                   ${libdir}/tc/*.so"
 FILES_${PN}-lnstat = "${base_sbindir}/lnstat ${base_sbindir}/ctstat ${base_sbindir}/rtstat"
@@ -41,6 +44,7 @@
 FILES_${PN}-rtacct = "${base_sbindir}/rtacct"
 FILES_${PN}-nstat = "${base_sbindir}/nstat"
 FILES_${PN}-ss = "${base_sbindir}/ss"
+FILES_${PN}-tipc = "${base_sbindir}/tipc"
 
 ALTERNATIVE_${PN} = "ip"
 ALTERNATIVE_TARGET[ip] = "${base_sbindir}/ip.${BPN}"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2/0001-include-stdint.h-explicitly-for-UINT16_MAX.patch b/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2/0001-include-stdint.h-explicitly-for-UINT16_MAX.patch
new file mode 100644
index 0000000..eb0c0ab
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2/0001-include-stdint.h-explicitly-for-UINT16_MAX.patch
@@ -0,0 +1,32 @@
+From 3c885d87befc706bb923933b9819de6fe2de897e Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 20 May 2017 14:03:19 -0700
+Subject: [PATCH] include stdint.h explicitly for UINT16_MAX
+
+Fixes
+| tc_core.c:190:29: error: 'UINT16_MAX' undeclared (first use in this function); did you mean '__INT16_MAX__'?
+|    if ((sz >> s->size_log) > UINT16_MAX) {
+|                              ^~~~~~~~~~
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ tc/tc_core.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/tc/tc_core.c b/tc/tc_core.c
+index 7bbe0d7..821b741 100644
+--- a/tc/tc_core.c
++++ b/tc/tc_core.c
+@@ -12,6 +12,7 @@
+ 
+ #include <stdio.h>
+ #include <stdlib.h>
++#include <stdint.h>
+ #include <unistd.h>
+ #include <syslog.h>
+ #include <fcntl.h>
+-- 
+2.13.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2/0001-ip-Remove-unneed-header.patch b/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2/0001-ip-Remove-unneed-header.patch
new file mode 100644
index 0000000..a9f8db6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2/0001-ip-Remove-unneed-header.patch
@@ -0,0 +1,30 @@
+From f58fc99c88a54135e55a6e0956ce8ae71078d1cc Mon Sep 17 00:00:00 2001
+From: Changhyeok Bae <changhyeok.bae@gmail.com>
+Date: Mon, 12 Jun 2017 04:29:07 +0000
+Subject: [PATCH] ip: Remove unneed header
+
+Fix redefinition of struct ethhdr with a suitably patched musl libc
+that suppresses the kernel if_ether.h.
+
+Signed-off-by: Changhyeok Bae <changhyeok.bae@gmail.com>
+
+Upstream-Status: Submitted [netdev@vger.kernel.org]
+---
+ ip/iplink_bridge.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/ip/iplink_bridge.c b/ip/iplink_bridge.c
+index 818b43c..f065b22 100644
+--- a/ip/iplink_bridge.c
++++ b/ip/iplink_bridge.c
+@@ -15,7 +15,6 @@
+ #include <netinet/in.h>
+ #include <linux/if_link.h>
+ #include <linux/if_bridge.h>
+-#include <netinet/ether.h>
+ #include <net/if.h>
+ 
+ #include "rt_names.h"
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2_4.10.0.bb b/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2_4.10.0.bb
deleted file mode 100644
index a050e87..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2_4.10.0.bb
+++ /dev/null
@@ -1,14 +0,0 @@
-require iproute2.inc
-
-SRC_URI = "${KERNELORG_MIRROR}/linux/utils/net/${BPN}/${BP}.tar.xz \
-           file://configure-cross.patch \
-           file://0001-iproute2-de-bash-scripts.patch \
-           file://0001-libc-compat.h-add-musl-workaround.patch \
-          "
-
-SRC_URI[md5sum] = "b94a2b0edefaeac124dc8f5d006931b9"
-SRC_URI[sha256sum] = "22b1e1c1fc704ad35837e5a66103739727b8b48ac90b48c13f79b7367ff0a9a8"
-
-# CFLAGS are computed in Makefile and reference CCOPTS
-#
-EXTRA_OEMAKE_append = " CCOPTS='${CFLAGS}'"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2_4.11.0.bb b/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2_4.11.0.bb
new file mode 100644
index 0000000..dbd0545
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/iproute2/iproute2_4.11.0.bb
@@ -0,0 +1,16 @@
+require iproute2.inc
+
+SRC_URI = "${KERNELORG_MIRROR}/linux/utils/net/${BPN}/${BP}.tar.xz \
+           file://configure-cross.patch \
+           file://0001-iproute2-de-bash-scripts.patch \
+           file://0001-libc-compat.h-add-musl-workaround.patch \
+           file://0001-include-stdint.h-explicitly-for-UINT16_MAX.patch \
+           file://0001-ip-Remove-unneed-header.patch \
+          "
+
+SRC_URI[md5sum] = "7a9498de88bcca95c305df6108ae197e"
+SRC_URI[sha256sum] = "72671028bda696d0cb8f48ec8e702581c3a501caeed33eec3a81d7041cbc8026"
+
+# CFLAGS are computed in Makefile and reference CCOPTS
+#
+EXTRA_OEMAKE_append = " CCOPTS='${CFLAGS}'"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/libpcap/libpcap.inc b/import-layers/yocto-poky/meta/recipes-connectivity/libpcap/libpcap.inc
index 6635779..e57ea87 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/libpcap/libpcap.inc
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/libpcap/libpcap.inc
@@ -38,3 +38,5 @@
 do_configure_prepend () {
     sed -i -e's,^V_RPATH_OPT=.*$,V_RPATH_OPT=,' ${S}/pcap-config.in
 }
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb b/import-layers/yocto-poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb
index bd488eb..dbc578e 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/mobile-broadband-provider-info/mobile-broadband-provider-info_git.bb
@@ -1,9 +1,10 @@
 SUMMARY = "Mobile Broadband Service Provider Database"
+HOMEPAGE = "http://live.gnome.org/NetworkManager/MobileBroadband/ServiceProviders"
 SECTION = "network"
 LICENSE = "PD"
 LIC_FILES_CHKSUM = "file://COPYING;md5=87964579b2a8ece4bc6744d2dc9a8b04"
-SRCREV = "519465766fabc85b9fdea5f2b5ee3d08c2b1f70d"
-PV = "20151214"
+SRCREV = "befcbbc9867e742ac16415660b0b7521218a530c"
+PV = "20170310"
 PE = "1"
 
 SRC_URI = "git://git.gnome.org/mobile-broadband-provider-info"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/nfs-utils/nfs-utils/0001-include-stdint.h-for-UINT16_MAX-definition.patch b/import-layers/yocto-poky/meta/recipes-connectivity/nfs-utils/nfs-utils/0001-include-stdint.h-for-UINT16_MAX-definition.patch
new file mode 100644
index 0000000..235a2c7
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/nfs-utils/nfs-utils/0001-include-stdint.h-for-UINT16_MAX-definition.patch
@@ -0,0 +1,27 @@
+From 36b48057bce76dced335d67a2894a420967811c9 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 20 May 2017 14:07:53 -0700
+Subject: [PATCH] include stdint.h for UINT16_MAX definition
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ support/nsm/rpc.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/support/nsm/rpc.c b/support/nsm/rpc.c
+index 4e5f40e..d91c6ea 100644
+--- a/support/nsm/rpc.c
++++ b/support/nsm/rpc.c
+@@ -40,6 +40,7 @@
+ 
+ #include <time.h>
+ #include <stdbool.h>
++#include <stdint.h>
+ #include <string.h>
+ #include <unistd.h>
+ #include <fcntl.h>
+-- 
+2.13.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/nfs-utils/nfs-utils_1.3.4.bb b/import-layers/yocto-poky/meta/recipes-connectivity/nfs-utils/nfs-utils_1.3.4.bb
deleted file mode 100644
index 4ca9ab2..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/nfs-utils/nfs-utils_1.3.4.bb
+++ /dev/null
@@ -1,148 +0,0 @@
-SUMMARY = "userspace utilities for kernel nfs"
-DESCRIPTION = "The nfs-utils package provides a daemon for the kernel \
-NFS server and related tools."
-HOMEPAGE = "http://nfs.sourceforge.net/"
-SECTION = "console/network"
-
-LICENSE = "MIT & GPLv2+ & BSD"
-LIC_FILES_CHKSUM = "file://COPYING;md5=95f3a93a5c3c7888de623b46ea085a84"
-
-# util-linux for libblkid
-DEPENDS = "libcap libnfsidmap libevent util-linux sqlite3 libtirpc"
-RDEPENDS_${PN} = "${PN}-client bash"
-RRECOMMENDS_${PN} = "kernel-module-nfsd"
-
-inherit useradd
-
-USERADD_PACKAGES = "${PN}-client"
-USERADD_PARAM_${PN}-client = "--system  --home-dir /var/lib/nfs \
-			      --shell /bin/false --user-group rpcuser"
-
-SRC_URI = "${KERNELORG_MIRROR}/linux/utils/nfs-utils/${PV}/nfs-utils-${PV}.tar.xz \
-           file://0001-configure-Allow-to-explicitly-disable-nfsidmap.patch \
-           file://nfs-utils-1.2.3-sm-notify-res_init.patch \
-           file://nfsserver \
-           file://nfscommon \
-           file://nfs-utils.conf \
-           file://nfs-server.service \
-           file://nfs-mountd.service \
-           file://nfs-statd.service \
-           file://proc-fs-nfsd.mount \
-           file://nfs-utils-Do-not-pass-CFLAGS-to-gcc-while-building.patch \
-           file://nfs-utils-debianize-start-statd.patch \
-           file://bugfix-adjust-statd-service-name.patch \
-"
-
-SRC_URI[md5sum] = "54e4119043ec8507a2a0e054cf2889a4"
-SRC_URI[sha256sum] = "b42a5bc0a8d80d04650030ceb9a11f08f4acfbcb1ee297f657fb94e339c45975"
-
-# Only kernel-module-nfsd is required here (but can be built-in)  - the nfsd module will
-# pull in the remainder of the dependencies.
-
-INITSCRIPT_PACKAGES = "${PN} ${PN}-client"
-INITSCRIPT_NAME = "nfsserver"
-INITSCRIPT_PARAMS = "defaults"
-INITSCRIPT_NAME_${PN}-client = "nfscommon"
-INITSCRIPT_PARAMS_${PN}-client = "defaults 19 21"
-
-inherit autotools-brokensep update-rc.d systemd pkgconfig
-
-SYSTEMD_PACKAGES = "${PN} ${PN}-client"
-SYSTEMD_SERVICE_${PN} = "nfs-server.service nfs-mountd.service"
-SYSTEMD_SERVICE_${PN}-client = "nfs-statd.service"
-
-# --enable-uuid is needed for cross-compiling
-EXTRA_OECONF = "--with-statduser=rpcuser \
-                --enable-mountconfig \
-                --enable-libmount-mount \
-                --disable-nfsv41 \
-                --enable-uuid \
-                --disable-gss \
-                --disable-nfsdcltrack \
-                --with-statdpath=/var/lib/nfs/statd \
-               "
-
-PACKAGECONFIG ??= "tcp-wrappers \
-    ${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
-"
-PACKAGECONFIG_remove_libc-musl = "tcp-wrappers"
-PACKAGECONFIG[tcp-wrappers] = "--with-tcp-wrappers,--without-tcp-wrappers,tcp-wrappers"
-PACKAGECONFIG[nfsidmap] = "--enable-nfsidmap,--disable-nfsidmap,keyutils"
-PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
-
-PACKAGES =+ "${PN}-client ${PN}-mount ${PN}-stats"
-
-CONFFILES_${PN}-client += "${localstatedir}/lib/nfs/etab \
-			   ${localstatedir}/lib/nfs/rmtab \
-			   ${localstatedir}/lib/nfs/xtab \
-			   ${localstatedir}/lib/nfs/statd/state \
-			   ${sysconfdir}/nfsmount.conf"
-
-FILES_${PN}-client = "${sbindir}/*statd \
-		      ${sbindir}/rpc.idmapd ${sbindir}/sm-notify \
-		      ${sbindir}/showmount ${sbindir}/nfsstat \
-		      ${localstatedir}/lib/nfs \
-		      ${sysconfdir}/nfs-utils.conf \
-		      ${sysconfdir}/nfsmount.conf \
-		      ${sysconfdir}/init.d/nfscommon \
-		      ${systemd_unitdir}/system/nfs-statd.service"
-RDEPENDS_${PN}-client = "${PN}-mount rpcbind"
-
-FILES_${PN}-mount = "${base_sbindir}/*mount.nfs*"
-
-FILES_${PN}-stats = "${sbindir}/mountstats ${sbindir}/nfsiostat"
-RDEPENDS_${PN}-stats = "python3-core"
-
-FILES_${PN} += "${systemd_unitdir}"
-
-do_configure_prepend() {
-        sed -i -e 's,sbindir = /sbin,sbindir = ${base_sbindir},g' \
-            ${S}/utils/mount/Makefile.am
-
-        sed -i -e 's,sbindir = /sbin,sbindir = ${base_sbindir},g' \
-            ${S}/utils/osd_login/Makefile.am
-}
-
-# Make clean needed because the package comes with
-# precompiled 64-bit objects that break the build
-do_compile_prepend() {
-	make clean
-}
-
-do_install_append () {
-	install -d ${D}${sysconfdir}/init.d
-	install -m 0755 ${WORKDIR}/nfsserver ${D}${sysconfdir}/init.d/nfsserver
-	install -m 0755 ${WORKDIR}/nfscommon ${D}${sysconfdir}/init.d/nfscommon
-
-	install -m 0755 ${WORKDIR}/nfs-utils.conf ${D}${sysconfdir}
-	install -m 0755 ${S}/utils/mount/nfsmount.conf ${D}${sysconfdir}
-
-	install -d ${D}${systemd_unitdir}/system
-	install -m 0644 ${WORKDIR}/nfs-server.service ${D}${systemd_unitdir}/system/
-	install -m 0644 ${WORKDIR}/nfs-mountd.service ${D}${systemd_unitdir}/system/
-	install -m 0644 ${WORKDIR}/nfs-statd.service ${D}${systemd_unitdir}/system/
-	sed -i -e 's,@SBINDIR@,${sbindir},g' \
-		-e 's,@SYSCONFDIR@,${sysconfdir},g' \
-		${D}${systemd_unitdir}/system/*.service
-	if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
-	    install -m 0644 ${WORKDIR}/proc-fs-nfsd.mount ${D}${systemd_unitdir}/system/
-	    install -d ${D}${systemd_unitdir}/system/sysinit.target.wants/
-	    ln -sf ../proc-fs-nfsd.mount ${D}${systemd_unitdir}/system/sysinit.target.wants/proc-fs-nfsd.mount
-	fi
-
-	# kernel code as of 3.8 hard-codes this path as a default
-	install -d ${D}/var/lib/nfs/v4recovery
-
-	# chown the directories and files
-	chown -R rpcuser:rpcuser ${D}${localstatedir}/lib/nfs/statd
-	chmod 0644 ${D}${localstatedir}/lib/nfs/statd/state
-
-	# the following are built by CC_FOR_BUILD
-	rm -f ${D}${sbindir}/rpcdebug
-	rm -f ${D}${sbindir}/rpcgen
-	rm -f ${D}${sbindir}/locktest
-
-        # Make python tools use python 3
-        sed -i -e '1s,#!.*python.*,#!${bindir}/python3,' ${D}${sbindir}/mountstats ${D}${sbindir}/nfsiostat
-
-}
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/nfs-utils/nfs-utils_2.1.1.bb b/import-layers/yocto-poky/meta/recipes-connectivity/nfs-utils/nfs-utils_2.1.1.bb
new file mode 100644
index 0000000..d917c4d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/nfs-utils/nfs-utils_2.1.1.bb
@@ -0,0 +1,149 @@
+SUMMARY = "userspace utilities for kernel nfs"
+DESCRIPTION = "The nfs-utils package provides a daemon for the kernel \
+NFS server and related tools."
+HOMEPAGE = "http://nfs.sourceforge.net/"
+SECTION = "console/network"
+
+LICENSE = "MIT & GPLv2+ & BSD"
+LIC_FILES_CHKSUM = "file://COPYING;md5=95f3a93a5c3c7888de623b46ea085a84"
+
+# util-linux for libblkid
+DEPENDS = "libcap libnfsidmap libevent util-linux sqlite3 libtirpc"
+RDEPENDS_${PN} = "${PN}-client bash"
+RRECOMMENDS_${PN} = "kernel-module-nfsd"
+
+inherit useradd
+
+USERADD_PACKAGES = "${PN}-client"
+USERADD_PARAM_${PN}-client = "--system  --home-dir /var/lib/nfs \
+			      --shell /bin/false --user-group rpcuser"
+
+SRC_URI = "${KERNELORG_MIRROR}/linux/utils/nfs-utils/${PV}/nfs-utils-${PV}.tar.xz \
+           file://0001-configure-Allow-to-explicitly-disable-nfsidmap.patch \
+           file://nfs-utils-1.2.3-sm-notify-res_init.patch \
+           file://nfsserver \
+           file://nfscommon \
+           file://nfs-utils.conf \
+           file://nfs-server.service \
+           file://nfs-mountd.service \
+           file://nfs-statd.service \
+           file://proc-fs-nfsd.mount \
+           file://nfs-utils-Do-not-pass-CFLAGS-to-gcc-while-building.patch \
+           file://nfs-utils-debianize-start-statd.patch \
+           file://bugfix-adjust-statd-service-name.patch \
+           file://0001-include-stdint.h-for-UINT16_MAX-definition.patch \
+"
+
+SRC_URI[md5sum] = "59dfcb2e6254b129f901f40c86086b13"
+SRC_URI[sha256sum] = "0faeb54c70b84e6bd3b9b6901544b1f6add8d246f35c1683e402daf4e0c719ef"
+
+# Only kernel-module-nfsd is required here (but can be built-in)  - the nfsd module will
+# pull in the remainder of the dependencies.
+
+INITSCRIPT_PACKAGES = "${PN} ${PN}-client"
+INITSCRIPT_NAME = "nfsserver"
+INITSCRIPT_PARAMS = "defaults"
+INITSCRIPT_NAME_${PN}-client = "nfscommon"
+INITSCRIPT_PARAMS_${PN}-client = "defaults 19 21"
+
+inherit autotools-brokensep update-rc.d systemd pkgconfig
+
+SYSTEMD_PACKAGES = "${PN} ${PN}-client"
+SYSTEMD_SERVICE_${PN} = "nfs-server.service nfs-mountd.service"
+SYSTEMD_SERVICE_${PN}-client = "nfs-statd.service"
+
+# --enable-uuid is needed for cross-compiling
+EXTRA_OECONF = "--with-statduser=rpcuser \
+                --enable-mountconfig \
+                --enable-libmount-mount \
+                --disable-nfsv41 \
+                --enable-uuid \
+                --disable-gss \
+                --disable-nfsdcltrack \
+                --with-statdpath=/var/lib/nfs/statd \
+               "
+
+PACKAGECONFIG ??= "tcp-wrappers \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} \
+"
+PACKAGECONFIG_remove_libc-musl = "tcp-wrappers"
+PACKAGECONFIG[tcp-wrappers] = "--with-tcp-wrappers,--without-tcp-wrappers,tcp-wrappers"
+PACKAGECONFIG[nfsidmap] = "--enable-nfsidmap,--disable-nfsidmap,keyutils"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
+
+PACKAGES =+ "${PN}-client ${PN}-mount ${PN}-stats"
+
+CONFFILES_${PN}-client += "${localstatedir}/lib/nfs/etab \
+			   ${localstatedir}/lib/nfs/rmtab \
+			   ${localstatedir}/lib/nfs/xtab \
+			   ${localstatedir}/lib/nfs/statd/state \
+			   ${sysconfdir}/nfsmount.conf"
+
+FILES_${PN}-client = "${sbindir}/*statd \
+		      ${sbindir}/rpc.idmapd ${sbindir}/sm-notify \
+		      ${sbindir}/showmount ${sbindir}/nfsstat \
+		      ${localstatedir}/lib/nfs \
+		      ${sysconfdir}/nfs-utils.conf \
+		      ${sysconfdir}/nfsmount.conf \
+		      ${sysconfdir}/init.d/nfscommon \
+		      ${systemd_unitdir}/system/nfs-statd.service"
+RDEPENDS_${PN}-client = "${PN}-mount rpcbind"
+
+FILES_${PN}-mount = "${base_sbindir}/*mount.nfs*"
+
+FILES_${PN}-stats = "${sbindir}/mountstats ${sbindir}/nfsiostat"
+RDEPENDS_${PN}-stats = "python3-core"
+
+FILES_${PN} += "${systemd_unitdir}"
+
+do_configure_prepend() {
+        sed -i -e 's,sbindir = /sbin,sbindir = ${base_sbindir},g' \
+            ${S}/utils/mount/Makefile.am
+
+        sed -i -e 's,sbindir = /sbin,sbindir = ${base_sbindir},g' \
+            ${S}/utils/osd_login/Makefile.am
+}
+
+# Make clean needed because the package comes with
+# precompiled 64-bit objects that break the build
+do_compile_prepend() {
+	make clean
+}
+
+do_install_append () {
+	install -d ${D}${sysconfdir}/init.d
+	install -m 0755 ${WORKDIR}/nfsserver ${D}${sysconfdir}/init.d/nfsserver
+	install -m 0755 ${WORKDIR}/nfscommon ${D}${sysconfdir}/init.d/nfscommon
+
+	install -m 0755 ${WORKDIR}/nfs-utils.conf ${D}${sysconfdir}
+	install -m 0755 ${S}/utils/mount/nfsmount.conf ${D}${sysconfdir}
+
+	install -d ${D}${systemd_unitdir}/system
+	install -m 0644 ${WORKDIR}/nfs-server.service ${D}${systemd_unitdir}/system/
+	install -m 0644 ${WORKDIR}/nfs-mountd.service ${D}${systemd_unitdir}/system/
+	install -m 0644 ${WORKDIR}/nfs-statd.service ${D}${systemd_unitdir}/system/
+	sed -i -e 's,@SBINDIR@,${sbindir},g' \
+		-e 's,@SYSCONFDIR@,${sysconfdir},g' \
+		${D}${systemd_unitdir}/system/*.service
+	if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)}; then
+	    install -m 0644 ${WORKDIR}/proc-fs-nfsd.mount ${D}${systemd_unitdir}/system/
+	    install -d ${D}${systemd_unitdir}/system/sysinit.target.wants/
+	    ln -sf ../proc-fs-nfsd.mount ${D}${systemd_unitdir}/system/sysinit.target.wants/proc-fs-nfsd.mount
+	fi
+
+	# kernel code as of 3.8 hard-codes this path as a default
+	install -d ${D}/var/lib/nfs/v4recovery
+
+	# chown the directories and files
+	chown -R rpcuser:rpcuser ${D}${localstatedir}/lib/nfs/statd
+	chmod 0644 ${D}${localstatedir}/lib/nfs/statd/state
+
+	# the following are built by CC_FOR_BUILD
+	rm -f ${D}${sbindir}/rpcdebug
+	rm -f ${D}${sbindir}/rpcgen
+	rm -f ${D}${sbindir}/locktest
+
+        # Make python tools use python 3
+        sed -i -e '1s,#!.*python.*,#!${bindir}/python3,' ${D}${sbindir}/mountstats ${D}${sbindir}/nfsiostat
+
+}
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/ofono/ofono_1.19.bb b/import-layers/yocto-poky/meta/recipes-connectivity/ofono/ofono_1.19.bb
deleted file mode 100644
index adebd71..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/ofono/ofono_1.19.bb
+++ /dev/null
@@ -1,10 +0,0 @@
-require ofono.inc
-
-SRC_URI  = "\
-  ${KERNELORG_MIRROR}/linux/network/${BPN}/${BP}.tar.xz \
-  file://ofono \
-"
-SRC_URI[md5sum] = "a5f8803ace110511b6ff5a2b39782e8b"
-SRC_URI[sha256sum] = "a0e09bdd8b53b8d2e4b54f1863ecd9aebe4786477a6cbf8f655496e8edb31c81"
-
-CFLAGS_append_libc-uclibc = " -D_GNU_SOURCE"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/ofono/ofono_1.20.bb b/import-layers/yocto-poky/meta/recipes-connectivity/ofono/ofono_1.20.bb
new file mode 100644
index 0000000..18f983e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/ofono/ofono_1.20.bb
@@ -0,0 +1,8 @@
+require ofono.inc
+
+SRC_URI  = "\
+  ${KERNELORG_MIRROR}/linux/network/${BPN}/${BP}.tar.xz \
+  file://ofono \
+"
+SRC_URI[md5sum] = "fad0630fce6a9aecdb7db37bc1f1db7d"
+SRC_URI[sha256sum] = "5d7ba8f481a7715d013a79f8d6477eb89d8aaae399395d5d008a1317c34a31d5"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/ofono/ofono_git.bb b/import-layers/yocto-poky/meta/recipes-connectivity/ofono/ofono_git.bb
deleted file mode 100644
index beafb77..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/ofono/ofono_git.bb
+++ /dev/null
@@ -1,14 +0,0 @@
-require ofono.inc
-
-S	 = "${WORKDIR}/git"
-SRCREV = "14544d5996836f628613c2ce544380ee6fc8f514"
-PV	 = "0.12-git${SRCPV}"
-PR = "r5"
-
-SRC_URI  = "git://git.kernel.org/pub/scm/network/ofono/ofono.git \
-	    file://ofono"
-
-do_configure_prepend () {
-  ${S}/bootstrap
-}
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/0001-openssh-Fix-syntax-error-on-x32.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/0001-openssh-Fix-syntax-error-on-x32.patch
new file mode 100644
index 0000000..ce9e200
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/0001-openssh-Fix-syntax-error-on-x32.patch
@@ -0,0 +1,33 @@
+From a7e359d4ba345aa2a13c07f1057184e9b4e598a2 Mon Sep 17 00:00:00 2001
+From: sweeaun <swee.aun.khor@intel.com>
+Date: Tue, 22 Aug 2017 11:19:48 -0700
+Subject: [PATCH] openssh: Fix syntax error on x32
+
+Upstream-Status: Backport
+This bug has been fixed in the V_7_5 branch
+(https://github.com/openssh/openssh-portable/tree/V_7_5) and in the
+master branch (https://github.com/openssh/openssh-portable/tree/master).
+
+Fix compilation error during openssh x32 build due to syntax error.
+
+Signed-off-by: sweeaun <swee.aun.khor@intel.com>
+---
+ sandbox-seccomp-filter.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/sandbox-seccomp-filter.c b/sandbox-seccomp-filter.c
+index 3a1aedc..a8d472a 100644
+--- a/sandbox-seccomp-filter.c
++++ b/sandbox-seccomp-filter.c
+@@ -235,7 +235,7 @@ static const struct sock_filter preauth_insns[] = {
+ 	 * x86-64 syscall under some circumstances, e.g.
+ 	 * https://bugs.debian.org/849923
+ 	 */
+-	SC_ALLOW(__NR_clock_gettime & ~__X32_SYSCALL_BIT);
++	SC_ALLOW(__NR_clock_gettime & ~__X32_SYSCALL_BIT),
+ #endif
+ 
+ 	/* Default deny */
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/fix-potential-signed-overflow-in-pointer-arithmatic.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/fix-potential-signed-overflow-in-pointer-arithmatic.patch
index df64a14..7e043a2 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/fix-potential-signed-overflow-in-pointer-arithmatic.patch
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/fix-potential-signed-overflow-in-pointer-arithmatic.patch
@@ -8,7 +8,7 @@
 In case of compilation by gcc or clang with -ftrapv option, the overflow
 would lead to program abort.
 
-Upstream-status: Submitted [http://bugzilla.mindrot.org/show_bug.cgi?id=2608]
+Upstream-Status: Submitted [http://bugzilla.mindrot.org/show_bug.cgi?id=2608]
 
 Signed-off-by: Yuanjie Huang <yuanjie.huang@windriver.com>
 ---
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/sshd_check_keys b/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/sshd_check_keys
index f5bba53..5463b1a 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/sshd_check_keys
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/sshd_check_keys
@@ -1,5 +1,35 @@
 #! /bin/sh
 
+generate_key() {
+    local FILE=$1
+    local TYPE=$2
+    local DIR="$(dirname "$FILE")"
+
+    mkdir -p "$DIR"
+    ssh-keygen -q -f "${FILE}.tmp" -N '' -t $TYPE
+
+    # Atomically rename file public key
+    mv -f "${FILE}.tmp.pub" "${FILE}.pub"
+
+    # This sync does double duty: Ensuring that the data in the temporary
+    # private key file is on disk before the rename, and ensuring that the
+    # public key rename is completed before the private key rename, since we
+    # switch on the existence of the private key to trigger key generation.
+    # This does mean it is possible for the public key to exist but be garbage;
+    # this is OK because in that case the private key won't exist and the
+    # keys will be regenerated.
+    #
+    # In the event that sync understands arguments that limit what it tries to
+    # fsync(), we provide them. If it does not, it will simply call sync(),
+    # which is just as well.
+    sync "${FILE}.pub" "$DIR" "${FILE}.tmp"
+
+    mv "${FILE}.tmp" "$FILE"
+
+    # sync to ensure the atomic rename is committed
+    sync "$DIR"
+}
+
 # /etc/default/ssh may set SYSCONFDIR and SSHD_OPTS
 if test -f /etc/default/ssh; then
     . /etc/default/ssh
@@ -43,22 +73,18 @@
 # create keys if necessary
 if [ ! -f $HOST_KEY_RSA ]; then
     echo "  generating ssh RSA key..."
-    mkdir -p $(dirname $HOST_KEY_RSA)
-    ssh-keygen -q -f $HOST_KEY_RSA -N '' -t rsa
+    generate_key $HOST_KEY_RSA rsa
 fi
 if [ ! -f $HOST_KEY_ECDSA ]; then
     echo "  generating ssh ECDSA key..."
-    mkdir -p $(dirname $HOST_KEY_ECDSA)
-    ssh-keygen -q -f $HOST_KEY_ECDSA -N '' -t ecdsa
+    generate_key $HOST_KEY_ECDSA ecdsa
 fi
 if [ ! -f $HOST_KEY_DSA ]; then
     echo "  generating ssh DSA key..."
-    mkdir -p $(dirname $HOST_KEY_DSA)
-    ssh-keygen -q -f $HOST_KEY_DSA -N '' -t dsa
+    generate_key $HOST_KEY_DSA dsa
 fi
 if [ ! -f $HOST_KEY_ED25519 ]; then
     echo "  generating ssh ED25519 key..."
-    mkdir -p $(dirname $HOST_KEY_ED25519)
-    ssh-keygen -q -f $HOST_KEY_ED25519 -N '' -t ed25519
+    generate_key $HOST_KEY_ED25519 ed25519
 fi
 
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/sshd_config b/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/sshd_config
index d48bd2b..31fe5d9 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/sshd_config
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh/sshd_config
@@ -107,7 +107,6 @@
 #PrintLastLog yes
 #TCPKeepAlive yes
 #UseLogin no
-UsePrivilegeSeparation sandbox # Default for new installations.
 #PermitUserEnvironment no
 Compression no
 ClientAliveInterval 15
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh_7.4p1.bb b/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh_7.4p1.bb
deleted file mode 100644
index e501ead..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh_7.4p1.bb
+++ /dev/null
@@ -1,172 +0,0 @@
-SUMMARY = "A suite of security-related network utilities based on \
-the SSH protocol including the ssh client and sshd server"
-DESCRIPTION = "Secure rlogin/rsh/rcp/telnet replacement (OpenSSH) \
-Ssh (Secure Shell) is a program for logging into a remote machine \
-and for executing commands on a remote machine."
-HOMEPAGE = "http://www.openssh.com/"
-SECTION = "console/network"
-LICENSE = "BSD"
-LIC_FILES_CHKSUM = "file://LICENCE;md5=e326045657e842541d3f35aada442507"
-
-DEPENDS = "zlib openssl"
-DEPENDS += "${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'libpam', '', d)}"
-
-SRC_URI = "http://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-${PV}.tar.gz \
-           file://sshd_config \
-           file://ssh_config \
-           file://init \
-           ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${PAM_SRC_URI}', '', d)} \
-           file://sshd.socket \
-           file://sshd@.service \
-           file://sshdgenkeys.service \
-           file://volatiles.99_sshd \
-           file://add-test-support-for-busybox.patch \
-           file://run-ptest \
-           file://openssh-7.1p1-conditional-compile-des-in-cipher.patch \
-           file://openssh-7.1p1-conditional-compile-des-in-pkcs11.patch \
-           file://fix-potential-signed-overflow-in-pointer-arithmatic.patch \
-           file://sshd_check_keys \
-           "
-
-PAM_SRC_URI = "file://sshd"
-
-SRC_URI[md5sum] = "b2db2a83caf66a208bb78d6d287cdaa3"
-SRC_URI[sha256sum] = "1b1fc4a14e2024293181924ed24872e6f2e06293f3e8926a376b8aec481f19d1"
-
-inherit useradd update-rc.d update-alternatives systemd
-
-USERADD_PACKAGES = "${PN}-sshd"
-USERADD_PARAM_${PN}-sshd = "--system --no-create-home --home-dir /var/run/sshd --shell /bin/false --user-group sshd"
-INITSCRIPT_PACKAGES = "${PN}-sshd"
-INITSCRIPT_NAME_${PN}-sshd = "sshd"
-INITSCRIPT_PARAMS_${PN}-sshd = "defaults 9"
-
-SYSTEMD_PACKAGES = "${PN}-sshd"
-SYSTEMD_SERVICE_${PN}-sshd = "sshd.socket"
-
-inherit autotools-brokensep ptest
-
-# LFS support:
-CFLAGS += "-D__FILE_OFFSET_BITS=64"
-
-# login path is hardcoded in sshd
-EXTRA_OECONF = "'LOGIN_PROGRAM=${base_bindir}/login' \
-                ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '--with-pam', '--without-pam', d)} \
-                --without-zlib-version-check \
-                --with-privsep-path=/var/run/sshd \
-                --sysconfdir=${sysconfdir}/ssh \
-                --with-xauth=/usr/bin/xauth \
-                --disable-strip \
-                "
-
-# Since we do not depend on libbsd, we do not want configure to use it
-# just because it finds libutil.h.  But, specifying --disable-libutil
-# causes compile errors, so...
-CACHED_CONFIGUREVARS += "ac_cv_header_bsd_libutil_h=no ac_cv_header_libutil_h=no"
-
-# passwd path is hardcoded in sshd
-CACHED_CONFIGUREVARS += "ac_cv_path_PATH_PASSWD_PROG=${bindir}/passwd"
-
-# We don't want to depend on libblockfile
-CACHED_CONFIGUREVARS += "ac_cv_header_maillock_h=no"
-
-# This is a workaround for uclibc because including stdio.h
-# pulls in pthreads.h and causes conflicts in function prototypes.
-# This results in compilation failure, so unless this is fixed,
-# disable pam for uclibc.
-EXTRA_OECONF_append_libc-uclibc=" --without-pam"
-
-do_configure_prepend () {
-	export LD="${CC}"
-	install -m 0644 ${WORKDIR}/sshd_config ${B}/
-	install -m 0644 ${WORKDIR}/ssh_config ${B}/
-	if [ ! -e acinclude.m4 -a -e aclocal.m4 ]; then
-		cp aclocal.m4 acinclude.m4
-	fi
-}
-
-do_compile_ptest() {
-        # skip regress/unittests/ binaries: this will silently skip
-        # unittests in run-ptests which is good because they are so slow.
-        oe_runmake regress/modpipe regress/setuid-allowed regress/netcat
-}
-
-do_install_append () {
-	if [ "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}" ]; then
-		install -D -m 0644 ${WORKDIR}/sshd ${D}${sysconfdir}/pam.d/sshd
-		sed -i -e 's:#UsePAM no:UsePAM yes:' ${D}${sysconfdir}/ssh/sshd_config
-	fi
-
-	if [ "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}" ]; then
-		sed -i -e 's:#X11Forwarding no:X11Forwarding yes:' ${D}${sysconfdir}/ssh/sshd_config
-	fi
-
-	install -d ${D}${sysconfdir}/init.d
-	install -m 0755 ${WORKDIR}/init ${D}${sysconfdir}/init.d/sshd
-	rm -f ${D}${bindir}/slogin ${D}${datadir}/Ssh.bin
-	rmdir ${D}${localstatedir}/run/sshd ${D}${localstatedir}/run ${D}${localstatedir}
-	install -d ${D}/${sysconfdir}/default/volatiles
-	install -m 644 ${WORKDIR}/volatiles.99_sshd ${D}/${sysconfdir}/default/volatiles/99_sshd
-	install -m 0755 ${S}/contrib/ssh-copy-id ${D}${bindir}
-
-	# Create config files for read-only rootfs
-	install -d ${D}${sysconfdir}/ssh
-	install -m 644 ${D}${sysconfdir}/ssh/sshd_config ${D}${sysconfdir}/ssh/sshd_config_readonly
-	sed -i '/HostKey/d' ${D}${sysconfdir}/ssh/sshd_config_readonly
-	echo "HostKey /var/run/ssh/ssh_host_rsa_key" >> ${D}${sysconfdir}/ssh/sshd_config_readonly
-	echo "HostKey /var/run/ssh/ssh_host_dsa_key" >> ${D}${sysconfdir}/ssh/sshd_config_readonly
-	echo "HostKey /var/run/ssh/ssh_host_ecdsa_key" >> ${D}${sysconfdir}/ssh/sshd_config_readonly
-	echo "HostKey /var/run/ssh/ssh_host_ed25519_key" >> ${D}${sysconfdir}/ssh/sshd_config_readonly
-
-	install -d ${D}${systemd_unitdir}/system
-	install -c -m 0644 ${WORKDIR}/sshd.socket ${D}${systemd_unitdir}/system
-	install -c -m 0644 ${WORKDIR}/sshd@.service ${D}${systemd_unitdir}/system
-	install -c -m 0644 ${WORKDIR}/sshdgenkeys.service ${D}${systemd_unitdir}/system
-	sed -i -e 's,@BASE_BINDIR@,${base_bindir},g' \
-		-e 's,@SBINDIR@,${sbindir},g' \
-		-e 's,@BINDIR@,${bindir},g' \
-		-e 's,@LIBEXECDIR@,${libexecdir}/${BPN},g' \
-		${D}${systemd_unitdir}/system/sshd.socket ${D}${systemd_unitdir}/system/*.service
-
-	sed -i -e 's,@LIBEXECDIR@,${libexecdir}/${BPN},g' \
-		${D}${sysconfdir}/init.d/sshd
-
-	install -D -m 0755 ${WORKDIR}/sshd_check_keys ${D}${libexecdir}/${BPN}/sshd_check_keys
-}
-
-do_install_ptest () {
-	sed -i -e "s|^SFTPSERVER=.*|SFTPSERVER=${libexecdir}/sftp-server|" regress/test-exec.sh
-	cp -r regress ${D}${PTEST_PATH}
-}
-
-ALLOW_EMPTY_${PN} = "1"
-
-PACKAGES =+ "${PN}-keygen ${PN}-scp ${PN}-ssh ${PN}-sshd ${PN}-sftp ${PN}-misc ${PN}-sftp-server"
-FILES_${PN}-scp = "${bindir}/scp.${BPN}"
-FILES_${PN}-ssh = "${bindir}/ssh.${BPN} ${sysconfdir}/ssh/ssh_config"
-FILES_${PN}-sshd = "${sbindir}/sshd ${sysconfdir}/init.d/sshd ${systemd_unitdir}/system"
-FILES_${PN}-sshd += "${sysconfdir}/ssh/moduli ${sysconfdir}/ssh/sshd_config ${sysconfdir}/ssh/sshd_config_readonly ${sysconfdir}/default/volatiles/99_sshd ${sysconfdir}/pam.d/sshd"
-FILES_${PN}-sshd += "${libexecdir}/${BPN}/sshd_check_keys"
-FILES_${PN}-sftp = "${bindir}/sftp"
-FILES_${PN}-sftp-server = "${libexecdir}/sftp-server"
-FILES_${PN}-misc = "${bindir}/ssh* ${libexecdir}/ssh*"
-FILES_${PN}-keygen = "${bindir}/ssh-keygen"
-
-RDEPENDS_${PN} += "${PN}-scp ${PN}-ssh ${PN}-sshd ${PN}-keygen"
-RDEPENDS_${PN}-sshd += "${PN}-keygen ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'pam-plugin-keyinit pam-plugin-loginuid', '', d)}"
-RDEPENDS_${PN}-ptest += "${PN}-sftp ${PN}-misc ${PN}-sftp-server make"
-
-RPROVIDES_${PN}-ssh = "ssh"
-RPROVIDES_${PN}-sshd = "sshd"
-
-RCONFLICTS_${PN} = "dropbear"
-RCONFLICTS_${PN}-sshd = "dropbear"
-RCONFLICTS_${PN}-keygen = "ssh-keygen"
-
-CONFFILES_${PN}-sshd = "${sysconfdir}/ssh/sshd_config"
-CONFFILES_${PN}-ssh = "${sysconfdir}/ssh/ssh_config"
-
-ALTERNATIVE_PRIORITY = "90"
-ALTERNATIVE_${PN}-scp = "scp"
-ALTERNATIVE_${PN}-ssh = "ssh"
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh_7.5p1.bb b/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh_7.5p1.bb
new file mode 100644
index 0000000..86ca6ff
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssh/openssh_7.5p1.bb
@@ -0,0 +1,168 @@
+SUMMARY = "A suite of security-related network utilities based on \
+the SSH protocol including the ssh client and sshd server"
+DESCRIPTION = "Secure rlogin/rsh/rcp/telnet replacement (OpenSSH) \
+Ssh (Secure Shell) is a program for logging into a remote machine \
+and for executing commands on a remote machine."
+HOMEPAGE = "http://www.openssh.com/"
+SECTION = "console/network"
+LICENSE = "BSD"
+LIC_FILES_CHKSUM = "file://LICENCE;md5=e326045657e842541d3f35aada442507"
+
+# openssl 1.1 patches are proposed at https://github.com/openssh/openssh-portable/pull/48
+DEPENDS = "zlib openssl10"
+DEPENDS += "${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'libpam', '', d)}"
+
+SRC_URI = "http://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-${PV}.tar.gz \
+           file://sshd_config \
+           file://ssh_config \
+           file://init \
+           ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${PAM_SRC_URI}', '', d)} \
+           file://sshd.socket \
+           file://sshd@.service \
+           file://sshdgenkeys.service \
+           file://volatiles.99_sshd \
+           file://add-test-support-for-busybox.patch \
+           file://run-ptest \
+           file://openssh-7.1p1-conditional-compile-des-in-cipher.patch \
+           file://openssh-7.1p1-conditional-compile-des-in-pkcs11.patch \
+           file://fix-potential-signed-overflow-in-pointer-arithmatic.patch \
+           file://0001-openssh-Fix-syntax-error-on-x32.patch \
+           file://sshd_check_keys \
+           "
+
+PAM_SRC_URI = "file://sshd"
+
+SRC_URI[md5sum] = "652fdc7d8392f112bef11cacf7e69e23"
+SRC_URI[sha256sum] = "9846e3c5fab9f0547400b4d2c017992f914222b3fd1f8eee6c7dc6bc5e59f9f0"
+
+inherit useradd update-rc.d update-alternatives systemd
+
+USERADD_PACKAGES = "${PN}-sshd"
+USERADD_PARAM_${PN}-sshd = "--system --no-create-home --home-dir /var/run/sshd --shell /bin/false --user-group sshd"
+INITSCRIPT_PACKAGES = "${PN}-sshd"
+INITSCRIPT_NAME_${PN}-sshd = "sshd"
+INITSCRIPT_PARAMS_${PN}-sshd = "defaults 9"
+
+SYSTEMD_PACKAGES = "${PN}-sshd"
+SYSTEMD_SERVICE_${PN}-sshd = "sshd.socket"
+
+inherit autotools-brokensep ptest
+
+# LFS support:
+CFLAGS += "-D__FILE_OFFSET_BITS=64"
+
+# login path is hardcoded in sshd
+EXTRA_OECONF = "'LOGIN_PROGRAM=${base_bindir}/login' \
+                ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '--with-pam', '--without-pam', d)} \
+                --without-zlib-version-check \
+                --with-privsep-path=/var/run/sshd \
+                --sysconfdir=${sysconfdir}/ssh \
+                --with-xauth=/usr/bin/xauth \
+                --disable-strip \
+                "
+
+# Since we do not depend on libbsd, we do not want configure to use it
+# just because it finds libutil.h.  But, specifying --disable-libutil
+# causes compile errors, so...
+CACHED_CONFIGUREVARS += "ac_cv_header_bsd_libutil_h=no ac_cv_header_libutil_h=no"
+
+# passwd path is hardcoded in sshd
+CACHED_CONFIGUREVARS += "ac_cv_path_PATH_PASSWD_PROG=${bindir}/passwd"
+
+# We don't want to depend on libblockfile
+CACHED_CONFIGUREVARS += "ac_cv_header_maillock_h=no"
+
+do_configure_prepend () {
+	export LD="${CC}"
+	install -m 0644 ${WORKDIR}/sshd_config ${B}/
+	install -m 0644 ${WORKDIR}/ssh_config ${B}/
+	if [ ! -e acinclude.m4 -a -e aclocal.m4 ]; then
+		cp aclocal.m4 acinclude.m4
+	fi
+}
+
+do_compile_ptest() {
+        # skip regress/unittests/ binaries: this will silently skip
+        # unittests in run-ptests which is good because they are so slow.
+        oe_runmake regress/modpipe regress/setuid-allowed regress/netcat
+}
+
+do_install_append () {
+	if [ "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}" ]; then
+		install -D -m 0644 ${WORKDIR}/sshd ${D}${sysconfdir}/pam.d/sshd
+		sed -i -e 's:#UsePAM no:UsePAM yes:' ${D}${sysconfdir}/ssh/sshd_config
+	fi
+
+	if [ "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}" ]; then
+		sed -i -e 's:#X11Forwarding no:X11Forwarding yes:' ${D}${sysconfdir}/ssh/sshd_config
+	fi
+
+	install -d ${D}${sysconfdir}/init.d
+	install -m 0755 ${WORKDIR}/init ${D}${sysconfdir}/init.d/sshd
+	rm -f ${D}${bindir}/slogin ${D}${datadir}/Ssh.bin
+	rmdir ${D}${localstatedir}/run/sshd ${D}${localstatedir}/run ${D}${localstatedir}
+	install -d ${D}/${sysconfdir}/default/volatiles
+	install -m 644 ${WORKDIR}/volatiles.99_sshd ${D}/${sysconfdir}/default/volatiles/99_sshd
+	install -m 0755 ${S}/contrib/ssh-copy-id ${D}${bindir}
+
+	# Create config files for read-only rootfs
+	install -d ${D}${sysconfdir}/ssh
+	install -m 644 ${D}${sysconfdir}/ssh/sshd_config ${D}${sysconfdir}/ssh/sshd_config_readonly
+	sed -i '/HostKey/d' ${D}${sysconfdir}/ssh/sshd_config_readonly
+	echo "HostKey /var/run/ssh/ssh_host_rsa_key" >> ${D}${sysconfdir}/ssh/sshd_config_readonly
+	echo "HostKey /var/run/ssh/ssh_host_dsa_key" >> ${D}${sysconfdir}/ssh/sshd_config_readonly
+	echo "HostKey /var/run/ssh/ssh_host_ecdsa_key" >> ${D}${sysconfdir}/ssh/sshd_config_readonly
+	echo "HostKey /var/run/ssh/ssh_host_ed25519_key" >> ${D}${sysconfdir}/ssh/sshd_config_readonly
+
+	install -d ${D}${systemd_unitdir}/system
+	install -c -m 0644 ${WORKDIR}/sshd.socket ${D}${systemd_unitdir}/system
+	install -c -m 0644 ${WORKDIR}/sshd@.service ${D}${systemd_unitdir}/system
+	install -c -m 0644 ${WORKDIR}/sshdgenkeys.service ${D}${systemd_unitdir}/system
+	sed -i -e 's,@BASE_BINDIR@,${base_bindir},g' \
+		-e 's,@SBINDIR@,${sbindir},g' \
+		-e 's,@BINDIR@,${bindir},g' \
+		-e 's,@LIBEXECDIR@,${libexecdir}/${BPN},g' \
+		${D}${systemd_unitdir}/system/sshd.socket ${D}${systemd_unitdir}/system/*.service
+
+	sed -i -e 's,@LIBEXECDIR@,${libexecdir}/${BPN},g' \
+		${D}${sysconfdir}/init.d/sshd
+
+	install -D -m 0755 ${WORKDIR}/sshd_check_keys ${D}${libexecdir}/${BPN}/sshd_check_keys
+}
+
+do_install_ptest () {
+	sed -i -e "s|^SFTPSERVER=.*|SFTPSERVER=${libexecdir}/sftp-server|" regress/test-exec.sh
+	cp -r regress ${D}${PTEST_PATH}
+}
+
+ALLOW_EMPTY_${PN} = "1"
+
+PACKAGES =+ "${PN}-keygen ${PN}-scp ${PN}-ssh ${PN}-sshd ${PN}-sftp ${PN}-misc ${PN}-sftp-server"
+FILES_${PN}-scp = "${bindir}/scp.${BPN}"
+FILES_${PN}-ssh = "${bindir}/ssh.${BPN} ${sysconfdir}/ssh/ssh_config"
+FILES_${PN}-sshd = "${sbindir}/sshd ${sysconfdir}/init.d/sshd ${systemd_unitdir}/system"
+FILES_${PN}-sshd += "${sysconfdir}/ssh/moduli ${sysconfdir}/ssh/sshd_config ${sysconfdir}/ssh/sshd_config_readonly ${sysconfdir}/default/volatiles/99_sshd ${sysconfdir}/pam.d/sshd"
+FILES_${PN}-sshd += "${libexecdir}/${BPN}/sshd_check_keys"
+FILES_${PN}-sftp = "${bindir}/sftp"
+FILES_${PN}-sftp-server = "${libexecdir}/sftp-server"
+FILES_${PN}-misc = "${bindir}/ssh* ${libexecdir}/ssh*"
+FILES_${PN}-keygen = "${bindir}/ssh-keygen"
+
+RDEPENDS_${PN} += "${PN}-scp ${PN}-ssh ${PN}-sshd ${PN}-keygen"
+RDEPENDS_${PN}-sshd += "${PN}-keygen ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'pam-plugin-keyinit pam-plugin-loginuid', '', d)}"
+RDEPENDS_${PN}-ptest += "${PN}-sftp ${PN}-misc ${PN}-sftp-server make"
+
+RPROVIDES_${PN}-ssh = "ssh"
+RPROVIDES_${PN}-sshd = "sshd"
+
+RCONFLICTS_${PN} = "dropbear"
+RCONFLICTS_${PN}-sshd = "dropbear"
+RCONFLICTS_${PN}-keygen = "ssh-keygen"
+
+CONFFILES_${PN}-sshd = "${sysconfdir}/ssh/sshd_config"
+CONFFILES_${PN}-ssh = "${sysconfdir}/ssh/ssh_config"
+
+ALTERNATIVE_PRIORITY = "90"
+ALTERNATIVE_${PN}-scp = "scp"
+ALTERNATIVE_${PN}-ssh = "ssh"
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2m/debian/version-script.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2m/debian/version-script.patch
new file mode 100644
index 0000000..557434f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2m/debian/version-script.patch
@@ -0,0 +1,4666 @@
+
+Upstream-Status: Inappropriate
+
+Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/Configure
+===================================================================
+--- openssl-1.0.2~beta1.obsolete.0.0498436515490575.orig/Configure	2014-02-24 21:02:30.000000000 +0100
++++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/Configure	2014-02-24 21:02:30.000000000 +0100
+@@ -1651,6 +1651,8 @@
+ 		}
+ 	}
+ 
++$shared_ldflag .= " -Wl,--version-script=openssl.ld";
++
+ open(IN,'<Makefile.org') || die "unable to read Makefile.org:$!\n";
+ unlink("$Makefile.new") || die "unable to remove old $Makefile.new:$!\n" if -e "$Makefile.new";
+ open(OUT,">$Makefile.new") || die "unable to create $Makefile.new:$!\n";
+Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/openssl.ld
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/openssl.ld	2014-02-24 22:19:08.601827266 +0100
+@@ -0,0 +1,4615 @@
++OPENSSL_1.0.0 {
++	global:
++		BIO_f_ssl;
++		BIO_new_buffer_ssl_connect;
++		BIO_new_ssl;
++		BIO_new_ssl_connect;
++		BIO_proxy_ssl_copy_session_id;
++		BIO_ssl_copy_session_id;
++		BIO_ssl_shutdown;
++		d2i_SSL_SESSION;
++		DTLSv1_client_method;
++		DTLSv1_method;
++		DTLSv1_server_method;
++		ERR_load_SSL_strings;
++		i2d_SSL_SESSION;
++		kssl_build_principal_2;
++		kssl_cget_tkt;
++		kssl_check_authent;
++		kssl_ctx_free;
++		kssl_ctx_new;
++		kssl_ctx_setkey;
++		kssl_ctx_setprinc;
++		kssl_ctx_setstring;
++		kssl_ctx_show;
++		kssl_err_set;
++		kssl_krb5_free_data_contents;
++		kssl_sget_tkt;
++		kssl_skip_confound;
++		kssl_validate_times;
++		PEM_read_bio_SSL_SESSION;
++		PEM_read_SSL_SESSION;
++		PEM_write_bio_SSL_SESSION;
++		PEM_write_SSL_SESSION;
++		SSL_accept;
++		SSL_add_client_CA;
++		SSL_add_dir_cert_subjects_to_stack;
++		SSL_add_dir_cert_subjs_to_stk;
++		SSL_add_file_cert_subjects_to_stack;
++		SSL_add_file_cert_subjs_to_stk;
++		SSL_alert_desc_string;
++		SSL_alert_desc_string_long;
++		SSL_alert_type_string;
++		SSL_alert_type_string_long;
++		SSL_callback_ctrl;
++		SSL_check_private_key;
++		SSL_CIPHER_description;
++		SSL_CIPHER_get_bits;
++		SSL_CIPHER_get_name;
++		SSL_CIPHER_get_version;
++		SSL_clear;
++		SSL_COMP_add_compression_method;
++		SSL_COMP_get_compression_methods;
++		SSL_COMP_get_compress_methods;
++		SSL_COMP_get_name;
++		SSL_connect;
++		SSL_copy_session_id;
++		SSL_ctrl;
++		SSL_CTX_add_client_CA;
++		SSL_CTX_add_session;
++		SSL_CTX_callback_ctrl;
++		SSL_CTX_check_private_key;
++		SSL_CTX_ctrl;
++		SSL_CTX_flush_sessions;
++		SSL_CTX_free;
++		SSL_CTX_get_cert_store;
++		SSL_CTX_get_client_CA_list;
++		SSL_CTX_get_client_cert_cb;
++		SSL_CTX_get_ex_data;
++		SSL_CTX_get_ex_new_index;
++		SSL_CTX_get_info_callback;
++		SSL_CTX_get_quiet_shutdown;
++		SSL_CTX_get_timeout;
++		SSL_CTX_get_verify_callback;
++		SSL_CTX_get_verify_depth;
++		SSL_CTX_get_verify_mode;
++		SSL_CTX_load_verify_locations;
++		SSL_CTX_new;
++		SSL_CTX_remove_session;
++		SSL_CTX_sess_get_get_cb;
++		SSL_CTX_sess_get_new_cb;
++		SSL_CTX_sess_get_remove_cb;
++		SSL_CTX_sessions;
++		SSL_CTX_sess_set_get_cb;
++		SSL_CTX_sess_set_new_cb;
++		SSL_CTX_sess_set_remove_cb;
++		SSL_CTX_set1_param;
++		SSL_CTX_set_cert_store;
++		SSL_CTX_set_cert_verify_callback;
++		SSL_CTX_set_cert_verify_cb;
++		SSL_CTX_set_cipher_list;
++		SSL_CTX_set_client_CA_list;
++		SSL_CTX_set_client_cert_cb;
++		SSL_CTX_set_client_cert_engine;
++		SSL_CTX_set_cookie_generate_cb;
++		SSL_CTX_set_cookie_verify_cb;
++		SSL_CTX_set_default_passwd_cb;
++		SSL_CTX_set_default_passwd_cb_userdata;
++		SSL_CTX_set_default_verify_paths;
++		SSL_CTX_set_def_passwd_cb_ud;
++		SSL_CTX_set_def_verify_paths;
++		SSL_CTX_set_ex_data;
++		SSL_CTX_set_generate_session_id;
++		SSL_CTX_set_info_callback;
++		SSL_CTX_set_msg_callback;
++		SSL_CTX_set_psk_client_callback;
++		SSL_CTX_set_psk_server_callback;
++		SSL_CTX_set_purpose;
++		SSL_CTX_set_quiet_shutdown;
++		SSL_CTX_set_session_id_context;
++		SSL_CTX_set_ssl_version;
++		SSL_CTX_set_timeout;
++		SSL_CTX_set_tmp_dh_callback;
++		SSL_CTX_set_tmp_ecdh_callback;
++		SSL_CTX_set_tmp_rsa_callback;
++		SSL_CTX_set_trust;
++		SSL_CTX_set_verify;
++		SSL_CTX_set_verify_depth;
++		SSL_CTX_use_cert_chain_file;
++		SSL_CTX_use_certificate;
++		SSL_CTX_use_certificate_ASN1;
++		SSL_CTX_use_certificate_chain_file;
++		SSL_CTX_use_certificate_file;
++		SSL_CTX_use_PrivateKey;
++		SSL_CTX_use_PrivateKey_ASN1;
++		SSL_CTX_use_PrivateKey_file;
++		SSL_CTX_use_psk_identity_hint;
++		SSL_CTX_use_RSAPrivateKey;
++		SSL_CTX_use_RSAPrivateKey_ASN1;
++		SSL_CTX_use_RSAPrivateKey_file;
++		SSL_do_handshake;
++		SSL_dup;
++		SSL_dup_CA_list;
++		SSLeay_add_ssl_algorithms;
++		SSL_free;
++		SSL_get1_session;
++		SSL_get_certificate;
++		SSL_get_cipher_list;
++		SSL_get_ciphers;
++		SSL_get_client_CA_list;
++		SSL_get_current_cipher;
++		SSL_get_current_compression;
++		SSL_get_current_expansion;
++		SSL_get_default_timeout;
++		SSL_get_error;
++		SSL_get_ex_data;
++		SSL_get_ex_data_X509_STORE_CTX_idx;
++		SSL_get_ex_d_X509_STORE_CTX_idx;
++		SSL_get_ex_new_index;
++		SSL_get_fd;
++		SSL_get_finished;
++		SSL_get_info_callback;
++		SSL_get_peer_cert_chain;
++		SSL_get_peer_certificate;
++		SSL_get_peer_finished;
++		SSL_get_privatekey;
++		SSL_get_psk_identity;
++		SSL_get_psk_identity_hint;
++		SSL_get_quiet_shutdown;
++		SSL_get_rbio;
++		SSL_get_read_ahead;
++		SSL_get_rfd;
++		SSL_get_servername;
++		SSL_get_servername_type;
++		SSL_get_session;
++		SSL_get_shared_ciphers;
++		SSL_get_shutdown;
++		SSL_get_SSL_CTX;
++		SSL_get_ssl_method;
++		SSL_get_verify_callback;
++		SSL_get_verify_depth;
++		SSL_get_verify_mode;
++		SSL_get_verify_result;
++		SSL_get_version;
++		SSL_get_wbio;
++		SSL_get_wfd;
++		SSL_has_matching_session_id;
++		SSL_library_init;
++		SSL_load_client_CA_file;
++		SSL_load_error_strings;
++		SSL_new;
++		SSL_peek;
++		SSL_pending;
++		SSL_read;
++		SSL_renegotiate;
++		SSL_renegotiate_pending;
++		SSL_rstate_string;
++		SSL_rstate_string_long;
++		SSL_SESSION_cmp;
++		SSL_SESSION_free;
++		SSL_SESSION_get_ex_data;
++		SSL_SESSION_get_ex_new_index;
++		SSL_SESSION_get_id;
++		SSL_SESSION_get_time;
++		SSL_SESSION_get_timeout;
++		SSL_SESSION_hash;
++		SSL_SESSION_new;
++		SSL_SESSION_print;
++		SSL_SESSION_print_fp;
++		SSL_SESSION_set_ex_data;
++		SSL_SESSION_set_time;
++		SSL_SESSION_set_timeout;
++		SSL_set1_param;
++		SSL_set_accept_state;
++		SSL_set_bio;
++		SSL_set_cipher_list;
++		SSL_set_client_CA_list;
++		SSL_set_connect_state;
++		SSL_set_ex_data;
++		SSL_set_fd;
++		SSL_set_generate_session_id;
++		SSL_set_info_callback;
++		SSL_set_msg_callback;
++		SSL_set_psk_client_callback;
++		SSL_set_psk_server_callback;
++		SSL_set_purpose;
++		SSL_set_quiet_shutdown;
++		SSL_set_read_ahead;
++		SSL_set_rfd;
++		SSL_set_session;
++		SSL_set_session_id_context;
++		SSL_set_session_secret_cb;
++		SSL_set_session_ticket_ext;
++		SSL_set_session_ticket_ext_cb;
++		SSL_set_shutdown;
++		SSL_set_SSL_CTX;
++		SSL_set_ssl_method;
++		SSL_set_tmp_dh_callback;
++		SSL_set_tmp_ecdh_callback;
++		SSL_set_tmp_rsa_callback;
++		SSL_set_trust;
++		SSL_set_verify;
++		SSL_set_verify_depth;
++		SSL_set_verify_result;
++		SSL_set_wfd;
++		SSL_shutdown;
++		SSL_state;
++		SSL_state_string;
++		SSL_state_string_long;
++		SSL_use_certificate;
++		SSL_use_certificate_ASN1;
++		SSL_use_certificate_file;
++		SSL_use_PrivateKey;
++		SSL_use_PrivateKey_ASN1;
++		SSL_use_PrivateKey_file;
++		SSL_use_psk_identity_hint;
++		SSL_use_RSAPrivateKey;
++		SSL_use_RSAPrivateKey_ASN1;
++		SSL_use_RSAPrivateKey_file;
++		SSLv23_client_method;
++		SSLv23_method;
++		SSLv23_server_method;
++		SSLv2_client_method;
++		SSLv2_method;
++		SSLv2_server_method;
++		SSLv3_client_method;
++		SSLv3_method;
++		SSLv3_server_method;
++		SSL_version;
++		SSL_want;
++		SSL_write;
++		TLSv1_client_method;
++		TLSv1_method;
++		TLSv1_server_method;
++
++
++		SSLeay;
++		SSLeay_version;
++		ASN1_BIT_STRING_asn1_meth;
++		ASN1_HEADER_free;
++		ASN1_HEADER_new;
++		ASN1_IA5STRING_asn1_meth;
++		ASN1_INTEGER_get;
++		ASN1_INTEGER_set;
++		ASN1_INTEGER_to_BN;
++		ASN1_OBJECT_create;
++		ASN1_OBJECT_free;
++		ASN1_OBJECT_new;
++		ASN1_PRINTABLE_type;
++		ASN1_STRING_cmp;
++		ASN1_STRING_dup;
++		ASN1_STRING_free;
++		ASN1_STRING_new;
++		ASN1_STRING_print;
++		ASN1_STRING_set;
++		ASN1_STRING_type_new;
++		ASN1_TYPE_free;
++		ASN1_TYPE_new;
++		ASN1_UNIVERSALSTRING_to_string;
++		ASN1_UTCTIME_check;
++		ASN1_UTCTIME_print;
++		ASN1_UTCTIME_set;
++		ASN1_check_infinite_end;
++		ASN1_d2i_bio;
++		ASN1_d2i_fp;
++		ASN1_digest;
++		ASN1_dup;
++		ASN1_get_object;
++		ASN1_i2d_bio;
++		ASN1_i2d_fp;
++		ASN1_object_size;
++		ASN1_parse;
++		ASN1_put_object;
++		ASN1_sign;
++		ASN1_verify;
++		BF_cbc_encrypt;
++		BF_cfb64_encrypt;
++		BF_ecb_encrypt;
++		BF_encrypt;
++		BF_ofb64_encrypt;
++		BF_options;
++		BF_set_key;
++		BIO_CONNECT_free;
++		BIO_CONNECT_new;
++		BIO_accept;
++		BIO_ctrl;
++		BIO_int_ctrl;
++		BIO_debug_callback;
++		BIO_dump;
++		BIO_dup_chain;
++		BIO_f_base64;
++		BIO_f_buffer;
++		BIO_f_cipher;
++		BIO_f_md;
++		BIO_f_null;
++		BIO_f_proxy_server;
++		BIO_fd_non_fatal_error;
++		BIO_fd_should_retry;
++		BIO_find_type;
++		BIO_free;
++		BIO_free_all;
++		BIO_get_accept_socket;
++		BIO_get_filter_bio;
++		BIO_get_host_ip;
++		BIO_get_port;
++		BIO_get_retry_BIO;
++		BIO_get_retry_reason;
++		BIO_gethostbyname;
++		BIO_gets;
++		BIO_new;
++		BIO_new_accept;
++		BIO_new_connect;
++		BIO_new_fd;
++		BIO_new_file;
++		BIO_new_fp;
++		BIO_new_socket;
++		BIO_pop;
++		BIO_printf;
++		BIO_push;
++		BIO_puts;
++		BIO_read;
++		BIO_s_accept;
++		BIO_s_connect;
++		BIO_s_fd;
++		BIO_s_file;
++		BIO_s_mem;
++		BIO_s_null;
++		BIO_s_proxy_client;
++		BIO_s_socket;
++		BIO_set;
++		BIO_set_cipher;
++		BIO_set_tcp_ndelay;
++		BIO_sock_cleanup;
++		BIO_sock_error;
++		BIO_sock_init;
++		BIO_sock_non_fatal_error;
++		BIO_sock_should_retry;
++		BIO_socket_ioctl;
++		BIO_write;
++		BN_CTX_free;
++		BN_CTX_new;
++		BN_MONT_CTX_free;
++		BN_MONT_CTX_new;
++		BN_MONT_CTX_set;
++		BN_add;
++		BN_add_word;
++		BN_hex2bn;
++		BN_bin2bn;
++		BN_bn2hex;
++		BN_bn2bin;
++		BN_clear;
++		BN_clear_bit;
++		BN_clear_free;
++		BN_cmp;
++		BN_copy;
++		BN_div;
++		BN_div_word;
++		BN_dup;
++		BN_free;
++		BN_from_montgomery;
++		BN_gcd;
++		BN_generate_prime;
++		BN_get_word;
++		BN_is_bit_set;
++		BN_is_prime;
++		BN_lshift;
++		BN_lshift1;
++		BN_mask_bits;
++		BN_mod;
++		BN_mod_exp;
++		BN_mod_exp_mont;
++		BN_mod_exp_simple;
++		BN_mod_inverse;
++		BN_mod_mul;
++		BN_mod_mul_montgomery;
++		BN_mod_word;
++		BN_mul;
++		BN_new;
++		BN_num_bits;
++		BN_num_bits_word;
++		BN_options;
++		BN_print;
++		BN_print_fp;
++		BN_rand;
++		BN_reciprocal;
++		BN_rshift;
++		BN_rshift1;
++		BN_set_bit;
++		BN_set_word;
++		BN_sqr;
++		BN_sub;
++		BN_to_ASN1_INTEGER;
++		BN_ucmp;
++		BN_value_one;
++		BUF_MEM_free;
++		BUF_MEM_grow;
++		BUF_MEM_new;
++		BUF_strdup;
++		CONF_free;
++		CONF_get_number;
++		CONF_get_section;
++		CONF_get_string;
++		CONF_load;
++		CRYPTO_add_lock;
++		CRYPTO_dbg_free;
++		CRYPTO_dbg_malloc;
++		CRYPTO_dbg_realloc;
++		CRYPTO_dbg_remalloc;
++		CRYPTO_free;
++		CRYPTO_get_add_lock_callback;
++		CRYPTO_get_id_callback;
++		CRYPTO_get_lock_name;
++		CRYPTO_get_locking_callback;
++		CRYPTO_get_mem_functions;
++		CRYPTO_lock;
++		CRYPTO_malloc;
++		CRYPTO_mem_ctrl;
++		CRYPTO_mem_leaks;
++		CRYPTO_mem_leaks_cb;
++		CRYPTO_mem_leaks_fp;
++		CRYPTO_realloc;
++		CRYPTO_remalloc;
++		CRYPTO_set_add_lock_callback;
++		CRYPTO_set_id_callback;
++		CRYPTO_set_locking_callback;
++		CRYPTO_set_mem_functions;
++		CRYPTO_thread_id;
++		DH_check;
++		DH_compute_key;
++		DH_free;
++		DH_generate_key;
++		DH_generate_parameters;
++		DH_new;
++		DH_size;
++		DHparams_print;
++		DHparams_print_fp;
++		DSA_free;
++		DSA_generate_key;
++		DSA_generate_parameters;
++		DSA_is_prime;
++		DSA_new;
++		DSA_print;
++		DSA_print_fp;
++		DSA_sign;
++		DSA_sign_setup;
++		DSA_size;
++		DSA_verify;
++		DSAparams_print;
++		DSAparams_print_fp;
++		ERR_clear_error;
++		ERR_error_string;
++		ERR_free_strings;
++		ERR_func_error_string;
++		ERR_get_err_state_table;
++		ERR_get_error;
++		ERR_get_error_line;
++		ERR_get_state;
++		ERR_get_string_table;
++		ERR_lib_error_string;
++		ERR_load_ASN1_strings;
++		ERR_load_BIO_strings;
++		ERR_load_BN_strings;
++		ERR_load_BUF_strings;
++		ERR_load_CONF_strings;
++		ERR_load_DH_strings;
++		ERR_load_DSA_strings;
++		ERR_load_ERR_strings;
++		ERR_load_EVP_strings;
++		ERR_load_OBJ_strings;
++		ERR_load_PEM_strings;
++		ERR_load_PROXY_strings;
++		ERR_load_RSA_strings;
++		ERR_load_X509_strings;
++		ERR_load_crypto_strings;
++		ERR_load_strings;
++		ERR_peek_error;
++		ERR_peek_error_line;
++		ERR_print_errors;
++		ERR_print_errors_fp;
++		ERR_put_error;
++		ERR_reason_error_string;
++		ERR_remove_state;
++		EVP_BytesToKey;
++		EVP_CIPHER_CTX_cleanup;
++		EVP_CipherFinal;
++		EVP_CipherInit;
++		EVP_CipherUpdate;
++		EVP_DecodeBlock;
++		EVP_DecodeFinal;
++		EVP_DecodeInit;
++		EVP_DecodeUpdate;
++		EVP_DecryptFinal;
++		EVP_DecryptInit;
++		EVP_DecryptUpdate;
++		EVP_DigestFinal;
++		EVP_DigestInit;
++		EVP_DigestUpdate;
++		EVP_EncodeBlock;
++		EVP_EncodeFinal;
++		EVP_EncodeInit;
++		EVP_EncodeUpdate;
++		EVP_EncryptFinal;
++		EVP_EncryptInit;
++		EVP_EncryptUpdate;
++		EVP_OpenFinal;
++		EVP_OpenInit;
++		EVP_PKEY_assign;
++		EVP_PKEY_copy_parameters;
++		EVP_PKEY_free;
++		EVP_PKEY_missing_parameters;
++		EVP_PKEY_new;
++		EVP_PKEY_save_parameters;
++		EVP_PKEY_size;
++		EVP_PKEY_type;
++		EVP_SealFinal;
++		EVP_SealInit;
++		EVP_SignFinal;
++		EVP_VerifyFinal;
++		EVP_add_alias;
++		EVP_add_cipher;
++		EVP_add_digest;
++		EVP_bf_cbc;
++		EVP_bf_cfb64;
++		EVP_bf_ecb;
++		EVP_bf_ofb;
++		EVP_cleanup;
++		EVP_des_cbc;
++		EVP_des_cfb64;
++		EVP_des_ecb;
++		EVP_des_ede;
++		EVP_des_ede3;
++		EVP_des_ede3_cbc;
++		EVP_des_ede3_cfb64;
++		EVP_des_ede3_ofb;
++		EVP_des_ede_cbc;
++		EVP_des_ede_cfb64;
++		EVP_des_ede_ofb;
++		EVP_des_ofb;
++		EVP_desx_cbc;
++		EVP_dss;
++		EVP_dss1;
++		EVP_enc_null;
++		EVP_get_cipherbyname;
++		EVP_get_digestbyname;
++		EVP_get_pw_prompt;
++		EVP_idea_cbc;
++		EVP_idea_cfb64;
++		EVP_idea_ecb;
++		EVP_idea_ofb;
++		EVP_md2;
++		EVP_md5;
++		EVP_md_null;
++		EVP_rc2_cbc;
++		EVP_rc2_cfb64;
++		EVP_rc2_ecb;
++		EVP_rc2_ofb;
++		EVP_rc4;
++		EVP_read_pw_string;
++		EVP_set_pw_prompt;
++		EVP_sha;
++		EVP_sha1;
++		MD2;
++		MD2_Final;
++		MD2_Init;
++		MD2_Update;
++		MD2_options;
++		MD5;
++		MD5_Final;
++		MD5_Init;
++		MD5_Update;
++		MDC2;
++		MDC2_Final;
++		MDC2_Init;
++		MDC2_Update;
++		NETSCAPE_SPKAC_free;
++		NETSCAPE_SPKAC_new;
++		NETSCAPE_SPKI_free;
++		NETSCAPE_SPKI_new;
++		NETSCAPE_SPKI_sign;
++		NETSCAPE_SPKI_verify;
++		OBJ_add_object;
++		OBJ_bsearch;
++		OBJ_cleanup;
++		OBJ_cmp;
++		OBJ_create;
++		OBJ_dup;
++		OBJ_ln2nid;
++		OBJ_new_nid;
++		OBJ_nid2ln;
++		OBJ_nid2obj;
++		OBJ_nid2sn;
++		OBJ_obj2nid;
++		OBJ_sn2nid;
++		OBJ_txt2nid;
++		PEM_ASN1_read;
++		PEM_ASN1_read_bio;
++		PEM_ASN1_write;
++		PEM_ASN1_write_bio;
++		PEM_SealFinal;
++		PEM_SealInit;
++		PEM_SealUpdate;
++		PEM_SignFinal;
++		PEM_SignInit;
++		PEM_SignUpdate;
++		PEM_X509_INFO_read;
++		PEM_X509_INFO_read_bio;
++		PEM_X509_INFO_write_bio;
++		PEM_dek_info;
++		PEM_do_header;
++		PEM_get_EVP_CIPHER_INFO;
++		PEM_proc_type;
++		PEM_read;
++		PEM_read_DHparams;
++		PEM_read_DSAPrivateKey;
++		PEM_read_DSAparams;
++		PEM_read_PKCS7;
++		PEM_read_PrivateKey;
++		PEM_read_RSAPrivateKey;
++		PEM_read_X509;
++		PEM_read_X509_CRL;
++		PEM_read_X509_REQ;
++		PEM_read_bio;
++		PEM_read_bio_DHparams;
++		PEM_read_bio_DSAPrivateKey;
++		PEM_read_bio_DSAparams;
++		PEM_read_bio_PKCS7;
++		PEM_read_bio_PrivateKey;
++		PEM_read_bio_RSAPrivateKey;
++		PEM_read_bio_X509;
++		PEM_read_bio_X509_CRL;
++		PEM_read_bio_X509_REQ;
++		PEM_write;
++		PEM_write_DHparams;
++		PEM_write_DSAPrivateKey;
++		PEM_write_DSAparams;
++		PEM_write_PKCS7;
++		PEM_write_PrivateKey;
++		PEM_write_RSAPrivateKey;
++		PEM_write_X509;
++		PEM_write_X509_CRL;
++		PEM_write_X509_REQ;
++		PEM_write_bio;
++		PEM_write_bio_DHparams;
++		PEM_write_bio_DSAPrivateKey;
++		PEM_write_bio_DSAparams;
++		PEM_write_bio_PKCS7;
++		PEM_write_bio_PrivateKey;
++		PEM_write_bio_RSAPrivateKey;
++		PEM_write_bio_X509;
++		PEM_write_bio_X509_CRL;
++		PEM_write_bio_X509_REQ;
++		PKCS7_DIGEST_free;
++		PKCS7_DIGEST_new;
++		PKCS7_ENCRYPT_free;
++		PKCS7_ENCRYPT_new;
++		PKCS7_ENC_CONTENT_free;
++		PKCS7_ENC_CONTENT_new;
++		PKCS7_ENVELOPE_free;
++		PKCS7_ENVELOPE_new;
++		PKCS7_ISSUER_AND_SERIAL_digest;
++		PKCS7_ISSUER_AND_SERIAL_free;
++		PKCS7_ISSUER_AND_SERIAL_new;
++		PKCS7_RECIP_INFO_free;
++		PKCS7_RECIP_INFO_new;
++		PKCS7_SIGNED_free;
++		PKCS7_SIGNED_new;
++		PKCS7_SIGNER_INFO_free;
++		PKCS7_SIGNER_INFO_new;
++		PKCS7_SIGN_ENVELOPE_free;
++		PKCS7_SIGN_ENVELOPE_new;
++		PKCS7_dup;
++		PKCS7_free;
++		PKCS7_new;
++		PROXY_ENTRY_add_noproxy;
++		PROXY_ENTRY_clear_noproxy;
++		PROXY_ENTRY_free;
++		PROXY_ENTRY_get_noproxy;
++		PROXY_ENTRY_new;
++		PROXY_ENTRY_set_server;
++		PROXY_add_noproxy;
++		PROXY_add_server;
++		PROXY_check_by_host;
++		PROXY_check_url;
++		PROXY_clear_noproxy;
++		PROXY_free;
++		PROXY_get_noproxy;
++		PROXY_get_proxies;
++		PROXY_get_proxy_entry;
++		PROXY_load_conf;
++		PROXY_new;
++		PROXY_print;
++		RAND_bytes;
++		RAND_cleanup;
++		RAND_file_name;
++		RAND_load_file;
++		RAND_screen;
++		RAND_seed;
++		RAND_write_file;
++		RC2_cbc_encrypt;
++		RC2_cfb64_encrypt;
++		RC2_ecb_encrypt;
++		RC2_encrypt;
++		RC2_ofb64_encrypt;
++		RC2_set_key;
++		RC4;
++		RC4_options;
++		RC4_set_key;
++		RSAPrivateKey_asn1_meth;
++		RSAPrivateKey_dup;
++		RSAPublicKey_dup;
++		RSA_PKCS1_SSLeay;
++		RSA_free;
++		RSA_generate_key;
++		RSA_new;
++		RSA_new_method;
++		RSA_print;
++		RSA_print_fp;
++		RSA_private_decrypt;
++		RSA_private_encrypt;
++		RSA_public_decrypt;
++		RSA_public_encrypt;
++		RSA_set_default_method;
++		RSA_sign;
++		RSA_sign_ASN1_OCTET_STRING;
++		RSA_size;
++		RSA_verify;
++		RSA_verify_ASN1_OCTET_STRING;
++		SHA;
++		SHA1;
++		SHA1_Final;
++		SHA1_Init;
++		SHA1_Update;
++		SHA_Final;
++		SHA_Init;
++		SHA_Update;
++		OpenSSL_add_all_algorithms;
++		OpenSSL_add_all_ciphers;
++		OpenSSL_add_all_digests;
++		TXT_DB_create_index;
++		TXT_DB_free;
++		TXT_DB_get_by_index;
++		TXT_DB_insert;
++		TXT_DB_read;
++		TXT_DB_write;
++		X509_ALGOR_free;
++		X509_ALGOR_new;
++		X509_ATTRIBUTE_free;
++		X509_ATTRIBUTE_new;
++		X509_CINF_free;
++		X509_CINF_new;
++		X509_CRL_INFO_free;
++		X509_CRL_INFO_new;
++		X509_CRL_add_ext;
++		X509_CRL_cmp;
++		X509_CRL_delete_ext;
++		X509_CRL_dup;
++		X509_CRL_free;
++		X509_CRL_get_ext;
++		X509_CRL_get_ext_by_NID;
++		X509_CRL_get_ext_by_OBJ;
++		X509_CRL_get_ext_by_critical;
++		X509_CRL_get_ext_count;
++		X509_CRL_new;
++		X509_CRL_sign;
++		X509_CRL_verify;
++		X509_EXTENSION_create_by_NID;
++		X509_EXTENSION_create_by_OBJ;
++		X509_EXTENSION_dup;
++		X509_EXTENSION_free;
++		X509_EXTENSION_get_critical;
++		X509_EXTENSION_get_data;
++		X509_EXTENSION_get_object;
++		X509_EXTENSION_new;
++		X509_EXTENSION_set_critical;
++		X509_EXTENSION_set_data;
++		X509_EXTENSION_set_object;
++		X509_INFO_free;
++		X509_INFO_new;
++		X509_LOOKUP_by_alias;
++		X509_LOOKUP_by_fingerprint;
++		X509_LOOKUP_by_issuer_serial;
++		X509_LOOKUP_by_subject;
++		X509_LOOKUP_ctrl;
++		X509_LOOKUP_file;
++		X509_LOOKUP_free;
++		X509_LOOKUP_hash_dir;
++		X509_LOOKUP_init;
++		X509_LOOKUP_new;
++		X509_LOOKUP_shutdown;
++		X509_NAME_ENTRY_create_by_NID;
++		X509_NAME_ENTRY_create_by_OBJ;
++		X509_NAME_ENTRY_dup;
++		X509_NAME_ENTRY_free;
++		X509_NAME_ENTRY_get_data;
++		X509_NAME_ENTRY_get_object;
++		X509_NAME_ENTRY_new;
++		X509_NAME_ENTRY_set_data;
++		X509_NAME_ENTRY_set_object;
++		X509_NAME_add_entry;
++		X509_NAME_cmp;
++		X509_NAME_delete_entry;
++		X509_NAME_digest;
++		X509_NAME_dup;
++		X509_NAME_entry_count;
++		X509_NAME_free;
++		X509_NAME_get_entry;
++		X509_NAME_get_index_by_NID;
++		X509_NAME_get_index_by_OBJ;
++		X509_NAME_get_text_by_NID;
++		X509_NAME_get_text_by_OBJ;
++		X509_NAME_hash;
++		X509_NAME_new;
++		X509_NAME_oneline;
++		X509_NAME_print;
++		X509_NAME_set;
++		X509_OBJECT_free_contents;
++		X509_OBJECT_retrieve_by_subject;
++		X509_OBJECT_up_ref_count;
++		X509_PKEY_free;
++		X509_PKEY_new;
++		X509_PUBKEY_free;
++		X509_PUBKEY_get;
++		X509_PUBKEY_new;
++		X509_PUBKEY_set;
++		X509_REQ_INFO_free;
++		X509_REQ_INFO_new;
++		X509_REQ_dup;
++		X509_REQ_free;
++		X509_REQ_get_pubkey;
++		X509_REQ_new;
++		X509_REQ_print;
++		X509_REQ_print_fp;
++		X509_REQ_set_pubkey;
++		X509_REQ_set_subject_name;
++		X509_REQ_set_version;
++		X509_REQ_sign;
++		X509_REQ_to_X509;
++		X509_REQ_verify;
++		X509_REVOKED_add_ext;
++		X509_REVOKED_delete_ext;
++		X509_REVOKED_free;
++		X509_REVOKED_get_ext;
++		X509_REVOKED_get_ext_by_NID;
++		X509_REVOKED_get_ext_by_OBJ;
++		X509_REVOKED_get_ext_by_critical;
++		X509_REVOKED_get_ext_by_critic;
++		X509_REVOKED_get_ext_count;
++		X509_REVOKED_new;
++		X509_SIG_free;
++		X509_SIG_new;
++		X509_STORE_CTX_cleanup;
++		X509_STORE_CTX_init;
++		X509_STORE_add_cert;
++		X509_STORE_add_lookup;
++		X509_STORE_free;
++		X509_STORE_get_by_subject;
++		X509_STORE_load_locations;
++		X509_STORE_new;
++		X509_STORE_set_default_paths;
++		X509_VAL_free;
++		X509_VAL_new;
++		X509_add_ext;
++		X509_asn1_meth;
++		X509_certificate_type;
++		X509_check_private_key;
++		X509_cmp_current_time;
++		X509_delete_ext;
++		X509_digest;
++		X509_dup;
++		X509_free;
++		X509_get_default_cert_area;
++		X509_get_default_cert_dir;
++		X509_get_default_cert_dir_env;
++		X509_get_default_cert_file;
++		X509_get_default_cert_file_env;
++		X509_get_default_private_dir;
++		X509_get_ext;
++		X509_get_ext_by_NID;
++		X509_get_ext_by_OBJ;
++		X509_get_ext_by_critical;
++		X509_get_ext_count;
++		X509_get_issuer_name;
++		X509_get_pubkey;
++		X509_get_pubkey_parameters;
++		X509_get_serialNumber;
++		X509_get_subject_name;
++		X509_gmtime_adj;
++		X509_issuer_and_serial_cmp;
++		X509_issuer_and_serial_hash;
++		X509_issuer_name_cmp;
++		X509_issuer_name_hash;
++		X509_load_cert_file;
++		X509_new;
++		X509_print;
++		X509_print_fp;
++		X509_set_issuer_name;
++		X509_set_notAfter;
++		X509_set_notBefore;
++		X509_set_pubkey;
++		X509_set_serialNumber;
++		X509_set_subject_name;
++		X509_set_version;
++		X509_sign;
++		X509_subject_name_cmp;
++		X509_subject_name_hash;
++		X509_to_X509_REQ;
++		X509_verify;
++		X509_verify_cert;
++		X509_verify_cert_error_string;
++		X509v3_add_ext;
++		X509v3_add_extension;
++		X509v3_add_netscape_extensions;
++		X509v3_add_standard_extensions;
++		X509v3_cleanup_extensions;
++		X509v3_data_type_by_NID;
++		X509v3_data_type_by_OBJ;
++		X509v3_delete_ext;
++		X509v3_get_ext;
++		X509v3_get_ext_by_NID;
++		X509v3_get_ext_by_OBJ;
++		X509v3_get_ext_by_critical;
++		X509v3_get_ext_count;
++		X509v3_pack_string;
++		X509v3_pack_type_by_NID;
++		X509v3_pack_type_by_OBJ;
++		X509v3_unpack_string;
++		_des_crypt;
++		a2d_ASN1_OBJECT;
++		a2i_ASN1_INTEGER;
++		a2i_ASN1_STRING;
++		asn1_Finish;
++		asn1_GetSequence;
++		bn_div_words;
++		bn_expand2;
++		bn_mul_add_words;
++		bn_mul_words;
++		BN_uadd;
++		BN_usub;
++		bn_sqr_words;
++		_ossl_old_crypt;
++		d2i_ASN1_BIT_STRING;
++		d2i_ASN1_BOOLEAN;
++		d2i_ASN1_HEADER;
++		d2i_ASN1_IA5STRING;
++		d2i_ASN1_INTEGER;
++		d2i_ASN1_OBJECT;
++		d2i_ASN1_OCTET_STRING;
++		d2i_ASN1_PRINTABLE;
++		d2i_ASN1_PRINTABLESTRING;
++		d2i_ASN1_SET;
++		d2i_ASN1_T61STRING;
++		d2i_ASN1_TYPE;
++		d2i_ASN1_UTCTIME;
++		d2i_ASN1_bytes;
++		d2i_ASN1_type_bytes;
++		d2i_DHparams;
++		d2i_DSAPrivateKey;
++		d2i_DSAPrivateKey_bio;
++		d2i_DSAPrivateKey_fp;
++		d2i_DSAPublicKey;
++		d2i_DSAparams;
++		d2i_NETSCAPE_SPKAC;
++		d2i_NETSCAPE_SPKI;
++		d2i_Netscape_RSA;
++		d2i_PKCS7;
++		d2i_PKCS7_DIGEST;
++		d2i_PKCS7_ENCRYPT;
++		d2i_PKCS7_ENC_CONTENT;
++		d2i_PKCS7_ENVELOPE;
++		d2i_PKCS7_ISSUER_AND_SERIAL;
++		d2i_PKCS7_RECIP_INFO;
++		d2i_PKCS7_SIGNED;
++		d2i_PKCS7_SIGNER_INFO;
++		d2i_PKCS7_SIGN_ENVELOPE;
++		d2i_PKCS7_bio;
++		d2i_PKCS7_fp;
++		d2i_PrivateKey;
++		d2i_PublicKey;
++		d2i_RSAPrivateKey;
++		d2i_RSAPrivateKey_bio;
++		d2i_RSAPrivateKey_fp;
++		d2i_RSAPublicKey;
++		d2i_X509;
++		d2i_X509_ALGOR;
++		d2i_X509_ATTRIBUTE;
++		d2i_X509_CINF;
++		d2i_X509_CRL;
++		d2i_X509_CRL_INFO;
++		d2i_X509_CRL_bio;
++		d2i_X509_CRL_fp;
++		d2i_X509_EXTENSION;
++		d2i_X509_NAME;
++		d2i_X509_NAME_ENTRY;
++		d2i_X509_PKEY;
++		d2i_X509_PUBKEY;
++		d2i_X509_REQ;
++		d2i_X509_REQ_INFO;
++		d2i_X509_REQ_bio;
++		d2i_X509_REQ_fp;
++		d2i_X509_REVOKED;
++		d2i_X509_SIG;
++		d2i_X509_VAL;
++		d2i_X509_bio;
++		d2i_X509_fp;
++		DES_cbc_cksum;
++		DES_cbc_encrypt;
++		DES_cblock_print_file;
++		DES_cfb64_encrypt;
++		DES_cfb_encrypt;
++		DES_decrypt3;
++		DES_ecb3_encrypt;
++		DES_ecb_encrypt;
++		DES_ede3_cbc_encrypt;
++		DES_ede3_cfb64_encrypt;
++		DES_ede3_ofb64_encrypt;
++		DES_enc_read;
++		DES_enc_write;
++		DES_encrypt1;
++		DES_encrypt2;
++		DES_encrypt3;
++		DES_fcrypt;
++		DES_is_weak_key;
++		DES_key_sched;
++		DES_ncbc_encrypt;
++		DES_ofb64_encrypt;
++		DES_ofb_encrypt;
++		DES_options;
++		DES_pcbc_encrypt;
++		DES_quad_cksum;
++		DES_random_key;
++		_ossl_old_des_random_seed;
++		_ossl_old_des_read_2passwords;
++		_ossl_old_des_read_password;
++		_ossl_old_des_read_pw;
++		_ossl_old_des_read_pw_string;
++		DES_set_key;
++		DES_set_odd_parity;
++		DES_string_to_2keys;
++		DES_string_to_key;
++		DES_xcbc_encrypt;
++		DES_xwhite_in2out;
++		fcrypt_body;
++		i2a_ASN1_INTEGER;
++		i2a_ASN1_OBJECT;
++		i2a_ASN1_STRING;
++		i2d_ASN1_BIT_STRING;
++		i2d_ASN1_BOOLEAN;
++		i2d_ASN1_HEADER;
++		i2d_ASN1_IA5STRING;
++		i2d_ASN1_INTEGER;
++		i2d_ASN1_OBJECT;
++		i2d_ASN1_OCTET_STRING;
++		i2d_ASN1_PRINTABLE;
++		i2d_ASN1_SET;
++		i2d_ASN1_TYPE;
++		i2d_ASN1_UTCTIME;
++		i2d_ASN1_bytes;
++		i2d_DHparams;
++		i2d_DSAPrivateKey;
++		i2d_DSAPrivateKey_bio;
++		i2d_DSAPrivateKey_fp;
++		i2d_DSAPublicKey;
++		i2d_DSAparams;
++		i2d_NETSCAPE_SPKAC;
++		i2d_NETSCAPE_SPKI;
++		i2d_Netscape_RSA;
++		i2d_PKCS7;
++		i2d_PKCS7_DIGEST;
++		i2d_PKCS7_ENCRYPT;
++		i2d_PKCS7_ENC_CONTENT;
++		i2d_PKCS7_ENVELOPE;
++		i2d_PKCS7_ISSUER_AND_SERIAL;
++		i2d_PKCS7_RECIP_INFO;
++		i2d_PKCS7_SIGNED;
++		i2d_PKCS7_SIGNER_INFO;
++		i2d_PKCS7_SIGN_ENVELOPE;
++		i2d_PKCS7_bio;
++		i2d_PKCS7_fp;
++		i2d_PrivateKey;
++		i2d_PublicKey;
++		i2d_RSAPrivateKey;
++		i2d_RSAPrivateKey_bio;
++		i2d_RSAPrivateKey_fp;
++		i2d_RSAPublicKey;
++		i2d_X509;
++		i2d_X509_ALGOR;
++		i2d_X509_ATTRIBUTE;
++		i2d_X509_CINF;
++		i2d_X509_CRL;
++		i2d_X509_CRL_INFO;
++		i2d_X509_CRL_bio;
++		i2d_X509_CRL_fp;
++		i2d_X509_EXTENSION;
++		i2d_X509_NAME;
++		i2d_X509_NAME_ENTRY;
++		i2d_X509_PKEY;
++		i2d_X509_PUBKEY;
++		i2d_X509_REQ;
++		i2d_X509_REQ_INFO;
++		i2d_X509_REQ_bio;
++		i2d_X509_REQ_fp;
++		i2d_X509_REVOKED;
++		i2d_X509_SIG;
++		i2d_X509_VAL;
++		i2d_X509_bio;
++		i2d_X509_fp;
++		idea_cbc_encrypt;
++		idea_cfb64_encrypt;
++		idea_ecb_encrypt;
++		idea_encrypt;
++		idea_ofb64_encrypt;
++		idea_options;
++		idea_set_decrypt_key;
++		idea_set_encrypt_key;
++		lh_delete;
++		lh_doall;
++		lh_doall_arg;
++		lh_free;
++		lh_insert;
++		lh_new;
++		lh_node_stats;
++		lh_node_stats_bio;
++		lh_node_usage_stats;
++		lh_node_usage_stats_bio;
++		lh_retrieve;
++		lh_stats;
++		lh_stats_bio;
++		lh_strhash;
++		sk_delete;
++		sk_delete_ptr;
++		sk_dup;
++		sk_find;
++		sk_free;
++		sk_insert;
++		sk_new;
++		sk_pop;
++		sk_pop_free;
++		sk_push;
++		sk_set_cmp_func;
++		sk_shift;
++		sk_unshift;
++		sk_zero;
++		BIO_f_nbio_test;
++		ASN1_TYPE_get;
++		ASN1_TYPE_set;
++		PKCS7_content_free;
++		ERR_load_PKCS7_strings;
++		X509_find_by_issuer_and_serial;
++		X509_find_by_subject;
++		PKCS7_ctrl;
++		PKCS7_set_type;
++		PKCS7_set_content;
++		PKCS7_SIGNER_INFO_set;
++		PKCS7_add_signer;
++		PKCS7_add_certificate;
++		PKCS7_add_crl;
++		PKCS7_content_new;
++		PKCS7_dataSign;
++		PKCS7_dataVerify;
++		PKCS7_dataInit;
++		PKCS7_add_signature;
++		PKCS7_cert_from_signer_info;
++		PKCS7_get_signer_info;
++		EVP_delete_alias;
++		EVP_mdc2;
++		PEM_read_bio_RSAPublicKey;
++		PEM_write_bio_RSAPublicKey;
++		d2i_RSAPublicKey_bio;
++		i2d_RSAPublicKey_bio;
++		PEM_read_RSAPublicKey;
++		PEM_write_RSAPublicKey;
++		d2i_RSAPublicKey_fp;
++		i2d_RSAPublicKey_fp;
++		BIO_copy_next_retry;
++		RSA_flags;
++		X509_STORE_add_crl;
++		X509_load_crl_file;
++		EVP_rc2_40_cbc;
++		EVP_rc4_40;
++		EVP_CIPHER_CTX_init;
++		HMAC;
++		HMAC_Init;
++		HMAC_Update;
++		HMAC_Final;
++		ERR_get_next_error_library;
++		EVP_PKEY_cmp_parameters;
++		HMAC_cleanup;
++		BIO_ptr_ctrl;
++		BIO_new_file_internal;
++		BIO_new_fp_internal;
++		BIO_s_file_internal;
++		BN_BLINDING_convert;
++		BN_BLINDING_invert;
++		BN_BLINDING_update;
++		RSA_blinding_on;
++		RSA_blinding_off;
++		i2t_ASN1_OBJECT;
++		BN_BLINDING_new;
++		BN_BLINDING_free;
++		EVP_cast5_cbc;
++		EVP_cast5_cfb64;
++		EVP_cast5_ecb;
++		EVP_cast5_ofb;
++		BF_decrypt;
++		CAST_set_key;
++		CAST_encrypt;
++		CAST_decrypt;
++		CAST_ecb_encrypt;
++		CAST_cbc_encrypt;
++		CAST_cfb64_encrypt;
++		CAST_ofb64_encrypt;
++		RC2_decrypt;
++		OBJ_create_objects;
++		BN_exp;
++		BN_mul_word;
++		BN_sub_word;
++		BN_dec2bn;
++		BN_bn2dec;
++		BIO_ghbn_ctrl;
++		CRYPTO_free_ex_data;
++		CRYPTO_get_ex_data;
++		CRYPTO_set_ex_data;
++		ERR_load_CRYPTO_strings;
++		ERR_load_CRYPTOlib_strings;
++		EVP_PKEY_bits;
++		MD5_Transform;
++		SHA1_Transform;
++		SHA_Transform;
++		X509_STORE_CTX_get_chain;
++		X509_STORE_CTX_get_current_cert;
++		X509_STORE_CTX_get_error;
++		X509_STORE_CTX_get_error_depth;
++		X509_STORE_CTX_get_ex_data;
++		X509_STORE_CTX_set_cert;
++		X509_STORE_CTX_set_chain;
++		X509_STORE_CTX_set_error;
++		X509_STORE_CTX_set_ex_data;
++		CRYPTO_dup_ex_data;
++		CRYPTO_get_new_lockid;
++		CRYPTO_new_ex_data;
++		RSA_set_ex_data;
++		RSA_get_ex_data;
++		RSA_get_ex_new_index;
++		RSA_padding_add_PKCS1_type_1;
++		RSA_padding_add_PKCS1_type_2;
++		RSA_padding_add_SSLv23;
++		RSA_padding_add_none;
++		RSA_padding_check_PKCS1_type_1;
++		RSA_padding_check_PKCS1_type_2;
++		RSA_padding_check_SSLv23;
++		RSA_padding_check_none;
++		bn_add_words;
++		d2i_Netscape_RSA_2;
++		CRYPTO_get_ex_new_index;
++		RIPEMD160_Init;
++		RIPEMD160_Update;
++		RIPEMD160_Final;
++		RIPEMD160;
++		RIPEMD160_Transform;
++		RC5_32_set_key;
++		RC5_32_ecb_encrypt;
++		RC5_32_encrypt;
++		RC5_32_decrypt;
++		RC5_32_cbc_encrypt;
++		RC5_32_cfb64_encrypt;
++		RC5_32_ofb64_encrypt;
++		BN_bn2mpi;
++		BN_mpi2bn;
++		ASN1_BIT_STRING_get_bit;
++		ASN1_BIT_STRING_set_bit;
++		BIO_get_ex_data;
++		BIO_get_ex_new_index;
++		BIO_set_ex_data;
++		X509v3_get_key_usage;
++		X509v3_set_key_usage;
++		a2i_X509v3_key_usage;
++		i2a_X509v3_key_usage;
++		EVP_PKEY_decrypt;
++		EVP_PKEY_encrypt;
++		PKCS7_RECIP_INFO_set;
++		PKCS7_add_recipient;
++		PKCS7_add_recipient_info;
++		PKCS7_set_cipher;
++		ASN1_TYPE_get_int_octetstring;
++		ASN1_TYPE_get_octetstring;
++		ASN1_TYPE_set_int_octetstring;
++		ASN1_TYPE_set_octetstring;
++		ASN1_UTCTIME_set_string;
++		ERR_add_error_data;
++		ERR_set_error_data;
++		EVP_CIPHER_asn1_to_param;
++		EVP_CIPHER_param_to_asn1;
++		EVP_CIPHER_get_asn1_iv;
++		EVP_CIPHER_set_asn1_iv;
++		EVP_rc5_32_12_16_cbc;
++		EVP_rc5_32_12_16_cfb64;
++		EVP_rc5_32_12_16_ecb;
++		EVP_rc5_32_12_16_ofb;
++		asn1_add_error;
++		d2i_ASN1_BMPSTRING;
++		i2d_ASN1_BMPSTRING;
++		BIO_f_ber;
++		BN_init;
++		COMP_CTX_new;
++		COMP_CTX_free;
++		COMP_CTX_compress_block;
++		COMP_CTX_expand_block;
++		X509_STORE_CTX_get_ex_new_index;
++		OBJ_NAME_add;
++		BIO_socket_nbio;
++		EVP_rc2_64_cbc;
++		OBJ_NAME_cleanup;
++		OBJ_NAME_get;
++		OBJ_NAME_init;
++		OBJ_NAME_new_index;
++		OBJ_NAME_remove;
++		BN_MONT_CTX_copy;
++		BIO_new_socks4a_connect;
++		BIO_s_socks4a_connect;
++		PROXY_set_connect_mode;
++		RAND_SSLeay;
++		RAND_set_rand_method;
++		RSA_memory_lock;
++		bn_sub_words;
++		bn_mul_normal;
++		bn_mul_comba8;
++		bn_mul_comba4;
++		bn_sqr_normal;
++		bn_sqr_comba8;
++		bn_sqr_comba4;
++		bn_cmp_words;
++		bn_mul_recursive;
++		bn_mul_part_recursive;
++		bn_sqr_recursive;
++		bn_mul_low_normal;
++		BN_RECP_CTX_init;
++		BN_RECP_CTX_new;
++		BN_RECP_CTX_free;
++		BN_RECP_CTX_set;
++		BN_mod_mul_reciprocal;
++		BN_mod_exp_recp;
++		BN_div_recp;
++		BN_CTX_init;
++		BN_MONT_CTX_init;
++		RAND_get_rand_method;
++		PKCS7_add_attribute;
++		PKCS7_add_signed_attribute;
++		PKCS7_digest_from_attributes;
++		PKCS7_get_attribute;
++		PKCS7_get_issuer_and_serial;
++		PKCS7_get_signed_attribute;
++		COMP_compress_block;
++		COMP_expand_block;
++		COMP_rle;
++		COMP_zlib;
++		ms_time_diff;
++		ms_time_new;
++		ms_time_free;
++		ms_time_cmp;
++		ms_time_get;
++		PKCS7_set_attributes;
++		PKCS7_set_signed_attributes;
++		X509_ATTRIBUTE_create;
++		X509_ATTRIBUTE_dup;
++		ASN1_GENERALIZEDTIME_check;
++		ASN1_GENERALIZEDTIME_print;
++		ASN1_GENERALIZEDTIME_set;
++		ASN1_GENERALIZEDTIME_set_string;
++		ASN1_TIME_print;
++		BASIC_CONSTRAINTS_free;
++		BASIC_CONSTRAINTS_new;
++		ERR_load_X509V3_strings;
++		NETSCAPE_CERT_SEQUENCE_free;
++		NETSCAPE_CERT_SEQUENCE_new;
++		OBJ_txt2obj;
++		PEM_read_NETSCAPE_CERT_SEQUENCE;
++		PEM_read_NS_CERT_SEQ;
++		PEM_read_bio_NETSCAPE_CERT_SEQUENCE;
++		PEM_read_bio_NS_CERT_SEQ;
++		PEM_write_NETSCAPE_CERT_SEQUENCE;
++		PEM_write_NS_CERT_SEQ;
++		PEM_write_bio_NETSCAPE_CERT_SEQUENCE;
++		PEM_write_bio_NS_CERT_SEQ;
++		X509V3_EXT_add;
++		X509V3_EXT_add_alias;
++		X509V3_EXT_add_conf;
++		X509V3_EXT_cleanup;
++		X509V3_EXT_conf;
++		X509V3_EXT_conf_nid;
++		X509V3_EXT_get;
++		X509V3_EXT_get_nid;
++		X509V3_EXT_print;
++		X509V3_EXT_print_fp;
++		X509V3_add_standard_extensions;
++		X509V3_add_value;
++		X509V3_add_value_bool;
++		X509V3_add_value_int;
++		X509V3_conf_free;
++		X509V3_get_value_bool;
++		X509V3_get_value_int;
++		X509V3_parse_list;
++		d2i_ASN1_GENERALIZEDTIME;
++		d2i_ASN1_TIME;
++		d2i_BASIC_CONSTRAINTS;
++		d2i_NETSCAPE_CERT_SEQUENCE;
++		d2i_ext_ku;
++		ext_ku_free;
++		ext_ku_new;
++		i2d_ASN1_GENERALIZEDTIME;
++		i2d_ASN1_TIME;
++		i2d_BASIC_CONSTRAINTS;
++		i2d_NETSCAPE_CERT_SEQUENCE;
++		i2d_ext_ku;
++		EVP_MD_CTX_copy;
++		i2d_ASN1_ENUMERATED;
++		d2i_ASN1_ENUMERATED;
++		ASN1_ENUMERATED_set;
++		ASN1_ENUMERATED_get;
++		BN_to_ASN1_ENUMERATED;
++		ASN1_ENUMERATED_to_BN;
++		i2a_ASN1_ENUMERATED;
++		a2i_ASN1_ENUMERATED;
++		i2d_GENERAL_NAME;
++		d2i_GENERAL_NAME;
++		GENERAL_NAME_new;
++		GENERAL_NAME_free;
++		GENERAL_NAMES_new;
++		GENERAL_NAMES_free;
++		d2i_GENERAL_NAMES;
++		i2d_GENERAL_NAMES;
++		i2v_GENERAL_NAMES;
++		i2s_ASN1_OCTET_STRING;
++		s2i_ASN1_OCTET_STRING;
++		X509V3_EXT_check_conf;
++		hex_to_string;
++		string_to_hex;
++		DES_ede3_cbcm_encrypt;
++		RSA_padding_add_PKCS1_OAEP;
++		RSA_padding_check_PKCS1_OAEP;
++		X509_CRL_print_fp;
++		X509_CRL_print;
++		i2v_GENERAL_NAME;
++		v2i_GENERAL_NAME;
++		i2d_PKEY_USAGE_PERIOD;
++		d2i_PKEY_USAGE_PERIOD;
++		PKEY_USAGE_PERIOD_new;
++		PKEY_USAGE_PERIOD_free;
++		v2i_GENERAL_NAMES;
++		i2s_ASN1_INTEGER;
++		X509V3_EXT_d2i;
++		name_cmp;
++		str_dup;
++		i2s_ASN1_ENUMERATED;
++		i2s_ASN1_ENUMERATED_TABLE;
++		BIO_s_log;
++		BIO_f_reliable;
++		PKCS7_dataFinal;
++		PKCS7_dataDecode;
++		X509V3_EXT_CRL_add_conf;
++		BN_set_params;
++		BN_get_params;
++		BIO_get_ex_num;
++		BIO_set_ex_free_func;
++		EVP_ripemd160;
++		ASN1_TIME_set;
++		i2d_AUTHORITY_KEYID;
++		d2i_AUTHORITY_KEYID;
++		AUTHORITY_KEYID_new;
++		AUTHORITY_KEYID_free;
++		ASN1_seq_unpack;
++		ASN1_seq_pack;
++		ASN1_unpack_string;
++		ASN1_pack_string;
++		PKCS12_pack_safebag;
++		PKCS12_MAKE_KEYBAG;
++		PKCS8_encrypt;
++		PKCS12_MAKE_SHKEYBAG;
++		PKCS12_pack_p7data;
++		PKCS12_pack_p7encdata;
++		PKCS12_add_localkeyid;
++		PKCS12_add_friendlyname_asc;
++		PKCS12_add_friendlyname_uni;
++		PKCS12_get_friendlyname;
++		PKCS12_pbe_crypt;
++		PKCS12_decrypt_d2i;
++		PKCS12_i2d_encrypt;
++		PKCS12_init;
++		PKCS12_key_gen_asc;
++		PKCS12_key_gen_uni;
++		PKCS12_gen_mac;
++		PKCS12_verify_mac;
++		PKCS12_set_mac;
++		PKCS12_setup_mac;
++		OPENSSL_asc2uni;
++		OPENSSL_uni2asc;
++		i2d_PKCS12_BAGS;
++		PKCS12_BAGS_new;
++		d2i_PKCS12_BAGS;
++		PKCS12_BAGS_free;
++		i2d_PKCS12;
++		d2i_PKCS12;
++		PKCS12_new;
++		PKCS12_free;
++		i2d_PKCS12_MAC_DATA;
++		PKCS12_MAC_DATA_new;
++		d2i_PKCS12_MAC_DATA;
++		PKCS12_MAC_DATA_free;
++		i2d_PKCS12_SAFEBAG;
++		PKCS12_SAFEBAG_new;
++		d2i_PKCS12_SAFEBAG;
++		PKCS12_SAFEBAG_free;
++		ERR_load_PKCS12_strings;
++		PKCS12_PBE_add;
++		PKCS8_add_keyusage;
++		PKCS12_get_attr_gen;
++		PKCS12_parse;
++		PKCS12_create;
++		i2d_PKCS12_bio;
++		i2d_PKCS12_fp;
++		d2i_PKCS12_bio;
++		d2i_PKCS12_fp;
++		i2d_PBEPARAM;
++		PBEPARAM_new;
++		d2i_PBEPARAM;
++		PBEPARAM_free;
++		i2d_PKCS8_PRIV_KEY_INFO;
++		PKCS8_PRIV_KEY_INFO_new;
++		d2i_PKCS8_PRIV_KEY_INFO;
++		PKCS8_PRIV_KEY_INFO_free;
++		EVP_PKCS82PKEY;
++		EVP_PKEY2PKCS8;
++		PKCS8_set_broken;
++		EVP_PBE_ALGOR_CipherInit;
++		EVP_PBE_alg_add;
++		PKCS5_pbe_set;
++		EVP_PBE_cleanup;
++		i2d_SXNET;
++		d2i_SXNET;
++		SXNET_new;
++		SXNET_free;
++		i2d_SXNETID;
++		d2i_SXNETID;
++		SXNETID_new;
++		SXNETID_free;
++		DSA_SIG_new;
++		DSA_SIG_free;
++		DSA_do_sign;
++		DSA_do_verify;
++		d2i_DSA_SIG;
++		i2d_DSA_SIG;
++		i2d_ASN1_VISIBLESTRING;
++		d2i_ASN1_VISIBLESTRING;
++		i2d_ASN1_UTF8STRING;
++		d2i_ASN1_UTF8STRING;
++		i2d_DIRECTORYSTRING;
++		d2i_DIRECTORYSTRING;
++		i2d_DISPLAYTEXT;
++		d2i_DISPLAYTEXT;
++		d2i_ASN1_SET_OF_X509;
++		i2d_ASN1_SET_OF_X509;
++		i2d_PBKDF2PARAM;
++		PBKDF2PARAM_new;
++		d2i_PBKDF2PARAM;
++		PBKDF2PARAM_free;
++		i2d_PBE2PARAM;
++		PBE2PARAM_new;
++		d2i_PBE2PARAM;
++		PBE2PARAM_free;
++		d2i_ASN1_SET_OF_GENERAL_NAME;
++		i2d_ASN1_SET_OF_GENERAL_NAME;
++		d2i_ASN1_SET_OF_SXNETID;
++		i2d_ASN1_SET_OF_SXNETID;
++		d2i_ASN1_SET_OF_POLICYQUALINFO;
++		i2d_ASN1_SET_OF_POLICYQUALINFO;
++		d2i_ASN1_SET_OF_POLICYINFO;
++		i2d_ASN1_SET_OF_POLICYINFO;
++		SXNET_add_id_asc;
++		SXNET_add_id_ulong;
++		SXNET_add_id_INTEGER;
++		SXNET_get_id_asc;
++		SXNET_get_id_ulong;
++		SXNET_get_id_INTEGER;
++		X509V3_set_conf_lhash;
++		i2d_CERTIFICATEPOLICIES;
++		CERTIFICATEPOLICIES_new;
++		CERTIFICATEPOLICIES_free;
++		d2i_CERTIFICATEPOLICIES;
++		i2d_POLICYINFO;
++		POLICYINFO_new;
++		d2i_POLICYINFO;
++		POLICYINFO_free;
++		i2d_POLICYQUALINFO;
++		POLICYQUALINFO_new;
++		d2i_POLICYQUALINFO;
++		POLICYQUALINFO_free;
++		i2d_USERNOTICE;
++		USERNOTICE_new;
++		d2i_USERNOTICE;
++		USERNOTICE_free;
++		i2d_NOTICEREF;
++		NOTICEREF_new;
++		d2i_NOTICEREF;
++		NOTICEREF_free;
++		X509V3_get_string;
++		X509V3_get_section;
++		X509V3_string_free;
++		X509V3_section_free;
++		X509V3_set_ctx;
++		s2i_ASN1_INTEGER;
++		CRYPTO_set_locked_mem_functions;
++		CRYPTO_get_locked_mem_functions;
++		CRYPTO_malloc_locked;
++		CRYPTO_free_locked;
++		BN_mod_exp2_mont;
++		ERR_get_error_line_data;
++		ERR_peek_error_line_data;
++		PKCS12_PBE_keyivgen;
++		X509_ALGOR_dup;
++		d2i_ASN1_SET_OF_DIST_POINT;
++		i2d_ASN1_SET_OF_DIST_POINT;
++		i2d_CRL_DIST_POINTS;
++		CRL_DIST_POINTS_new;
++		CRL_DIST_POINTS_free;
++		d2i_CRL_DIST_POINTS;
++		i2d_DIST_POINT;
++		DIST_POINT_new;
++		d2i_DIST_POINT;
++		DIST_POINT_free;
++		i2d_DIST_POINT_NAME;
++		DIST_POINT_NAME_new;
++		DIST_POINT_NAME_free;
++		d2i_DIST_POINT_NAME;
++		X509V3_add_value_uchar;
++		d2i_ASN1_SET_OF_X509_ATTRIBUTE;
++		i2d_ASN1_SET_OF_ASN1_TYPE;
++		d2i_ASN1_SET_OF_X509_EXTENSION;
++		d2i_ASN1_SET_OF_X509_NAME_ENTRY;
++		d2i_ASN1_SET_OF_ASN1_TYPE;
++		i2d_ASN1_SET_OF_X509_ATTRIBUTE;
++		i2d_ASN1_SET_OF_X509_EXTENSION;
++		i2d_ASN1_SET_OF_X509_NAME_ENTRY;
++		X509V3_EXT_i2d;
++		X509V3_EXT_val_prn;
++		X509V3_EXT_add_list;
++		EVP_CIPHER_type;
++		EVP_PBE_CipherInit;
++		X509V3_add_value_bool_nf;
++		d2i_ASN1_UINTEGER;
++		sk_value;
++		sk_num;
++		sk_set;
++		i2d_ASN1_SET_OF_X509_REVOKED;
++		sk_sort;
++		d2i_ASN1_SET_OF_X509_REVOKED;
++		i2d_ASN1_SET_OF_X509_ALGOR;
++		i2d_ASN1_SET_OF_X509_CRL;
++		d2i_ASN1_SET_OF_X509_ALGOR;
++		d2i_ASN1_SET_OF_X509_CRL;
++		i2d_ASN1_SET_OF_PKCS7_SIGNER_INFO;
++		i2d_ASN1_SET_OF_PKCS7_RECIP_INFO;
++		d2i_ASN1_SET_OF_PKCS7_SIGNER_INFO;
++		d2i_ASN1_SET_OF_PKCS7_RECIP_INFO;
++		PKCS5_PBE_add;
++		PEM_write_bio_PKCS8;
++		i2d_PKCS8_fp;
++		PEM_read_bio_PKCS8_PRIV_KEY_INFO;
++		PEM_read_bio_P8_PRIV_KEY_INFO;
++		d2i_PKCS8_bio;
++		d2i_PKCS8_PRIV_KEY_INFO_fp;
++		PEM_write_bio_PKCS8_PRIV_KEY_INFO;
++		PEM_write_bio_P8_PRIV_KEY_INFO;
++		PEM_read_PKCS8;
++		d2i_PKCS8_PRIV_KEY_INFO_bio;
++		d2i_PKCS8_fp;
++		PEM_write_PKCS8;
++		PEM_read_PKCS8_PRIV_KEY_INFO;
++		PEM_read_P8_PRIV_KEY_INFO;
++		PEM_read_bio_PKCS8;
++		PEM_write_PKCS8_PRIV_KEY_INFO;
++		PEM_write_P8_PRIV_KEY_INFO;
++		PKCS5_PBE_keyivgen;
++		i2d_PKCS8_bio;
++		i2d_PKCS8_PRIV_KEY_INFO_fp;
++		i2d_PKCS8_PRIV_KEY_INFO_bio;
++		BIO_s_bio;
++		PKCS5_pbe2_set;
++		PKCS5_PBKDF2_HMAC_SHA1;
++		PKCS5_v2_PBE_keyivgen;
++		PEM_write_bio_PKCS8PrivateKey;
++		PEM_write_PKCS8PrivateKey;
++		BIO_ctrl_get_read_request;
++		BIO_ctrl_pending;
++		BIO_ctrl_wpending;
++		BIO_new_bio_pair;
++		BIO_ctrl_get_write_guarantee;
++		CRYPTO_num_locks;
++		CONF_load_bio;
++		CONF_load_fp;
++		i2d_ASN1_SET_OF_ASN1_OBJECT;
++		d2i_ASN1_SET_OF_ASN1_OBJECT;
++		PKCS7_signatureVerify;
++		RSA_set_method;
++		RSA_get_method;
++		RSA_get_default_method;
++		RSA_check_key;
++		OBJ_obj2txt;
++		DSA_dup_DH;
++		X509_REQ_get_extensions;
++		X509_REQ_set_extension_nids;
++		BIO_nwrite;
++		X509_REQ_extension_nid;
++		BIO_nread;
++		X509_REQ_get_extension_nids;
++		BIO_nwrite0;
++		X509_REQ_add_extensions_nid;
++		BIO_nread0;
++		X509_REQ_add_extensions;
++		BIO_new_mem_buf;
++		DH_set_ex_data;
++		DH_set_method;
++		DSA_OpenSSL;
++		DH_get_ex_data;
++		DH_get_ex_new_index;
++		DSA_new_method;
++		DH_new_method;
++		DH_OpenSSL;
++		DSA_get_ex_new_index;
++		DH_get_default_method;
++		DSA_set_ex_data;
++		DH_set_default_method;
++		DSA_get_ex_data;
++		X509V3_EXT_REQ_add_conf;
++		NETSCAPE_SPKI_print;
++		NETSCAPE_SPKI_set_pubkey;
++		NETSCAPE_SPKI_b64_encode;
++		NETSCAPE_SPKI_get_pubkey;
++		NETSCAPE_SPKI_b64_decode;
++		UTF8_putc;
++		UTF8_getc;
++		RSA_null_method;
++		ASN1_tag2str;
++		BIO_ctrl_reset_read_request;
++		DISPLAYTEXT_new;
++		ASN1_GENERALIZEDTIME_free;
++		X509_REVOKED_get_ext_d2i;
++		X509_set_ex_data;
++		X509_reject_set_bit_asc;
++		X509_NAME_add_entry_by_txt;
++		X509_NAME_add_entry_by_NID;
++		X509_PURPOSE_get0;
++		PEM_read_X509_AUX;
++		d2i_AUTHORITY_INFO_ACCESS;
++		PEM_write_PUBKEY;
++		ACCESS_DESCRIPTION_new;
++		X509_CERT_AUX_free;
++		d2i_ACCESS_DESCRIPTION;
++		X509_trust_clear;
++		X509_TRUST_add;
++		ASN1_VISIBLESTRING_new;
++		X509_alias_set1;
++		ASN1_PRINTABLESTRING_free;
++		EVP_PKEY_get1_DSA;
++		ASN1_BMPSTRING_new;
++		ASN1_mbstring_copy;
++		ASN1_UTF8STRING_new;
++		DSA_get_default_method;
++		i2d_ASN1_SET_OF_ACCESS_DESCRIPTION;
++		ASN1_T61STRING_free;
++		DSA_set_method;
++		X509_get_ex_data;
++		ASN1_STRING_type;
++		X509_PURPOSE_get_by_sname;
++		ASN1_TIME_free;
++		ASN1_OCTET_STRING_cmp;
++		ASN1_BIT_STRING_new;
++		X509_get_ext_d2i;
++		PEM_read_bio_X509_AUX;
++		ASN1_STRING_set_default_mask_asc;
++		ASN1_STRING_set_def_mask_asc;
++		PEM_write_bio_RSA_PUBKEY;
++		ASN1_INTEGER_cmp;
++		d2i_RSA_PUBKEY_fp;
++		X509_trust_set_bit_asc;
++		PEM_write_bio_DSA_PUBKEY;
++		X509_STORE_CTX_free;
++		EVP_PKEY_set1_DSA;
++		i2d_DSA_PUBKEY_fp;
++		X509_load_cert_crl_file;
++		ASN1_TIME_new;
++		i2d_RSA_PUBKEY;
++		X509_STORE_CTX_purpose_inherit;
++		PEM_read_RSA_PUBKEY;
++		d2i_X509_AUX;
++		i2d_DSA_PUBKEY;
++		X509_CERT_AUX_print;
++		PEM_read_DSA_PUBKEY;
++		i2d_RSA_PUBKEY_bio;
++		ASN1_BIT_STRING_num_asc;
++		i2d_PUBKEY;
++		ASN1_UTCTIME_free;
++		DSA_set_default_method;
++		X509_PURPOSE_get_by_id;
++		ACCESS_DESCRIPTION_free;
++		PEM_read_bio_PUBKEY;
++		ASN1_STRING_set_by_NID;
++		X509_PURPOSE_get_id;
++		DISPLAYTEXT_free;
++		OTHERNAME_new;
++		X509_CERT_AUX_new;
++		X509_TRUST_cleanup;
++		X509_NAME_add_entry_by_OBJ;
++		X509_CRL_get_ext_d2i;
++		X509_PURPOSE_get0_name;
++		PEM_read_PUBKEY;
++		i2d_DSA_PUBKEY_bio;
++		i2d_OTHERNAME;
++		ASN1_OCTET_STRING_free;
++		ASN1_BIT_STRING_set_asc;
++		X509_get_ex_new_index;
++		ASN1_STRING_TABLE_cleanup;
++		X509_TRUST_get_by_id;
++		X509_PURPOSE_get_trust;
++		ASN1_STRING_length;
++		d2i_ASN1_SET_OF_ACCESS_DESCRIPTION;
++		ASN1_PRINTABLESTRING_new;
++		X509V3_get_d2i;
++		ASN1_ENUMERATED_free;
++		i2d_X509_CERT_AUX;
++		X509_STORE_CTX_set_trust;
++		ASN1_STRING_set_default_mask;
++		X509_STORE_CTX_new;
++		EVP_PKEY_get1_RSA;
++		DIRECTORYSTRING_free;
++		PEM_write_X509_AUX;
++		ASN1_OCTET_STRING_set;
++		d2i_DSA_PUBKEY_fp;
++		d2i_RSA_PUBKEY;
++		X509_TRUST_get0_name;
++		X509_TRUST_get0;
++		AUTHORITY_INFO_ACCESS_free;
++		ASN1_IA5STRING_new;
++		d2i_DSA_PUBKEY;
++		X509_check_purpose;
++		ASN1_ENUMERATED_new;
++		d2i_RSA_PUBKEY_bio;
++		d2i_PUBKEY;
++		X509_TRUST_get_trust;
++		X509_TRUST_get_flags;
++		ASN1_BMPSTRING_free;
++		ASN1_T61STRING_new;
++		ASN1_UTCTIME_new;
++		i2d_AUTHORITY_INFO_ACCESS;
++		EVP_PKEY_set1_RSA;
++		X509_STORE_CTX_set_purpose;
++		ASN1_IA5STRING_free;
++		PEM_write_bio_X509_AUX;
++		X509_PURPOSE_get_count;
++		CRYPTO_add_info;
++		X509_NAME_ENTRY_create_by_txt;
++		ASN1_STRING_get_default_mask;
++		X509_alias_get0;
++		ASN1_STRING_data;
++		i2d_ACCESS_DESCRIPTION;
++		X509_trust_set_bit;
++		ASN1_BIT_STRING_free;
++		PEM_read_bio_RSA_PUBKEY;
++		X509_add1_reject_object;
++		X509_check_trust;
++		PEM_read_bio_DSA_PUBKEY;
++		X509_PURPOSE_add;
++		ASN1_STRING_TABLE_get;
++		ASN1_UTF8STRING_free;
++		d2i_DSA_PUBKEY_bio;
++		PEM_write_RSA_PUBKEY;
++		d2i_OTHERNAME;
++		X509_reject_set_bit;
++		PEM_write_DSA_PUBKEY;
++		X509_PURPOSE_get0_sname;
++		EVP_PKEY_set1_DH;
++		ASN1_OCTET_STRING_dup;
++		ASN1_BIT_STRING_set;
++		X509_TRUST_get_count;
++		ASN1_INTEGER_free;
++		OTHERNAME_free;
++		i2d_RSA_PUBKEY_fp;
++		ASN1_INTEGER_dup;
++		d2i_X509_CERT_AUX;
++		PEM_write_bio_PUBKEY;
++		ASN1_VISIBLESTRING_free;
++		X509_PURPOSE_cleanup;
++		ASN1_mbstring_ncopy;
++		ASN1_GENERALIZEDTIME_new;
++		EVP_PKEY_get1_DH;
++		ASN1_OCTET_STRING_new;
++		ASN1_INTEGER_new;
++		i2d_X509_AUX;
++		ASN1_BIT_STRING_name_print;
++		X509_cmp;
++		ASN1_STRING_length_set;
++		DIRECTORYSTRING_new;
++		X509_add1_trust_object;
++		PKCS12_newpass;
++		SMIME_write_PKCS7;
++		SMIME_read_PKCS7;
++		DES_set_key_checked;
++		PKCS7_verify;
++		PKCS7_encrypt;
++		DES_set_key_unchecked;
++		SMIME_crlf_copy;
++		i2d_ASN1_PRINTABLESTRING;
++		PKCS7_get0_signers;
++		PKCS7_decrypt;
++		SMIME_text;
++		PKCS7_simple_smimecap;
++		PKCS7_get_smimecap;
++		PKCS7_sign;
++		PKCS7_add_attrib_smimecap;
++		CRYPTO_dbg_set_options;
++		CRYPTO_remove_all_info;
++		CRYPTO_get_mem_debug_functions;
++		CRYPTO_is_mem_check_on;
++		CRYPTO_set_mem_debug_functions;
++		CRYPTO_pop_info;
++		CRYPTO_push_info_;
++		CRYPTO_set_mem_debug_options;
++		PEM_write_PKCS8PrivateKey_nid;
++		PEM_write_bio_PKCS8PrivateKey_nid;
++		PEM_write_bio_PKCS8PrivKey_nid;
++		d2i_PKCS8PrivateKey_bio;
++		ASN1_NULL_free;
++		d2i_ASN1_NULL;
++		ASN1_NULL_new;
++		i2d_PKCS8PrivateKey_bio;
++		i2d_PKCS8PrivateKey_fp;
++		i2d_ASN1_NULL;
++		i2d_PKCS8PrivateKey_nid_fp;
++		d2i_PKCS8PrivateKey_fp;
++		i2d_PKCS8PrivateKey_nid_bio;
++		i2d_PKCS8PrivateKeyInfo_fp;
++		i2d_PKCS8PrivateKeyInfo_bio;
++		PEM_cb;
++		i2d_PrivateKey_fp;
++		d2i_PrivateKey_bio;
++		d2i_PrivateKey_fp;
++		i2d_PrivateKey_bio;
++		X509_reject_clear;
++		X509_TRUST_set_default;
++		d2i_AutoPrivateKey;
++		X509_ATTRIBUTE_get0_type;
++		X509_ATTRIBUTE_set1_data;
++		X509at_get_attr;
++		X509at_get_attr_count;
++		X509_ATTRIBUTE_create_by_NID;
++		X509_ATTRIBUTE_set1_object;
++		X509_ATTRIBUTE_count;
++		X509_ATTRIBUTE_create_by_OBJ;
++		X509_ATTRIBUTE_get0_object;
++		X509at_get_attr_by_NID;
++		X509at_add1_attr;
++		X509_ATTRIBUTE_get0_data;
++		X509at_delete_attr;
++		X509at_get_attr_by_OBJ;
++		RAND_add;
++		BIO_number_written;
++		BIO_number_read;
++		X509_STORE_CTX_get1_chain;
++		ERR_load_RAND_strings;
++		RAND_pseudo_bytes;
++		X509_REQ_get_attr_by_NID;
++		X509_REQ_get_attr;
++		X509_REQ_add1_attr_by_NID;
++		X509_REQ_get_attr_by_OBJ;
++		X509at_add1_attr_by_NID;
++		X509_REQ_add1_attr_by_OBJ;
++		X509_REQ_get_attr_count;
++		X509_REQ_add1_attr;
++		X509_REQ_delete_attr;
++		X509at_add1_attr_by_OBJ;
++		X509_REQ_add1_attr_by_txt;
++		X509_ATTRIBUTE_create_by_txt;
++		X509at_add1_attr_by_txt;
++		BN_pseudo_rand;
++		BN_is_prime_fasttest;
++		BN_CTX_end;
++		BN_CTX_start;
++		BN_CTX_get;
++		EVP_PKEY2PKCS8_broken;
++		ASN1_STRING_TABLE_add;
++		CRYPTO_dbg_get_options;
++		AUTHORITY_INFO_ACCESS_new;
++		CRYPTO_get_mem_debug_options;
++		DES_crypt;
++		PEM_write_bio_X509_REQ_NEW;
++		PEM_write_X509_REQ_NEW;
++		BIO_callback_ctrl;
++		RAND_egd;
++		RAND_status;
++		bn_dump1;
++		DES_check_key_parity;
++		lh_num_items;
++		RAND_event;
++		DSO_new;
++		DSO_new_method;
++		DSO_free;
++		DSO_flags;
++		DSO_up;
++		DSO_set_default_method;
++		DSO_get_default_method;
++		DSO_get_method;
++		DSO_set_method;
++		DSO_load;
++		DSO_bind_var;
++		DSO_METHOD_null;
++		DSO_METHOD_openssl;
++		DSO_METHOD_dlfcn;
++		DSO_METHOD_win32;
++		ERR_load_DSO_strings;
++		DSO_METHOD_dl;
++		NCONF_load;
++		NCONF_load_fp;
++		NCONF_new;
++		NCONF_get_string;
++		NCONF_free;
++		NCONF_get_number;
++		CONF_dump_fp;
++		NCONF_load_bio;
++		NCONF_dump_fp;
++		NCONF_get_section;
++		NCONF_dump_bio;
++		CONF_dump_bio;
++		NCONF_free_data;
++		CONF_set_default_method;
++		ERR_error_string_n;
++		BIO_snprintf;
++		DSO_ctrl;
++		i2d_ASN1_SET_OF_ASN1_INTEGER;
++		i2d_ASN1_SET_OF_PKCS12_SAFEBAG;
++		i2d_ASN1_SET_OF_PKCS7;
++		BIO_vfree;
++		d2i_ASN1_SET_OF_ASN1_INTEGER;
++		d2i_ASN1_SET_OF_PKCS12_SAFEBAG;
++		ASN1_UTCTIME_get;
++		X509_REQ_digest;
++		X509_CRL_digest;
++		d2i_ASN1_SET_OF_PKCS7;
++		EVP_CIPHER_CTX_set_key_length;
++		EVP_CIPHER_CTX_ctrl;
++		BN_mod_exp_mont_word;
++		RAND_egd_bytes;
++		X509_REQ_get1_email;
++		X509_get1_email;
++		X509_email_free;
++		i2d_RSA_NET;
++		d2i_RSA_NET_2;
++		d2i_RSA_NET;
++		DSO_bind_func;
++		CRYPTO_get_new_dynlockid;
++		sk_new_null;
++		CRYPTO_set_dynlock_destroy_callback;
++		CRYPTO_set_dynlock_destroy_cb;
++		CRYPTO_destroy_dynlockid;
++		CRYPTO_set_dynlock_size;
++		CRYPTO_set_dynlock_create_callback;
++		CRYPTO_set_dynlock_create_cb;
++		CRYPTO_set_dynlock_lock_callback;
++		CRYPTO_set_dynlock_lock_cb;
++		CRYPTO_get_dynlock_lock_callback;
++		CRYPTO_get_dynlock_lock_cb;
++		CRYPTO_get_dynlock_destroy_callback;
++		CRYPTO_get_dynlock_destroy_cb;
++		CRYPTO_get_dynlock_value;
++		CRYPTO_get_dynlock_create_callback;
++		CRYPTO_get_dynlock_create_cb;
++		c2i_ASN1_BIT_STRING;
++		i2c_ASN1_BIT_STRING;
++		RAND_poll;
++		c2i_ASN1_INTEGER;
++		i2c_ASN1_INTEGER;
++		BIO_dump_indent;
++		ASN1_parse_dump;
++		c2i_ASN1_OBJECT;
++		X509_NAME_print_ex_fp;
++		ASN1_STRING_print_ex_fp;
++		X509_NAME_print_ex;
++		ASN1_STRING_print_ex;
++		MD4;
++		MD4_Transform;
++		MD4_Final;
++		MD4_Update;
++		MD4_Init;
++		EVP_md4;
++		i2d_PUBKEY_bio;
++		i2d_PUBKEY_fp;
++		d2i_PUBKEY_bio;
++		ASN1_STRING_to_UTF8;
++		BIO_vprintf;
++		BIO_vsnprintf;
++		d2i_PUBKEY_fp;
++		X509_cmp_time;
++		X509_STORE_CTX_set_time;
++		X509_STORE_CTX_get1_issuer;
++		X509_OBJECT_retrieve_match;
++		X509_OBJECT_idx_by_subject;
++		X509_STORE_CTX_set_flags;
++		X509_STORE_CTX_trusted_stack;
++		X509_time_adj;
++		X509_check_issued;
++		ASN1_UTCTIME_cmp_time_t;
++		DES_set_weak_key_flag;
++		DES_check_key;
++		DES_rw_mode;
++		RSA_PKCS1_RSAref;
++		X509_keyid_set1;
++		BIO_next;
++		DSO_METHOD_vms;
++		BIO_f_linebuffer;
++		BN_bntest_rand;
++		OPENSSL_issetugid;
++		BN_rand_range;
++		ERR_load_ENGINE_strings;
++		ENGINE_set_DSA;
++		ENGINE_get_finish_function;
++		ENGINE_get_default_RSA;
++		ENGINE_get_BN_mod_exp;
++		DSA_get_default_openssl_method;
++		ENGINE_set_DH;
++		ENGINE_set_def_BN_mod_exp_crt;
++		ENGINE_set_default_BN_mod_exp_crt;
++		ENGINE_init;
++		DH_get_default_openssl_method;
++		RSA_set_default_openssl_method;
++		ENGINE_finish;
++		ENGINE_load_public_key;
++		ENGINE_get_DH;
++		ENGINE_ctrl;
++		ENGINE_get_init_function;
++		ENGINE_set_init_function;
++		ENGINE_set_default_DSA;
++		ENGINE_get_name;
++		ENGINE_get_last;
++		ENGINE_get_prev;
++		ENGINE_get_default_DH;
++		ENGINE_get_RSA;
++		ENGINE_set_default;
++		ENGINE_get_RAND;
++		ENGINE_get_first;
++		ENGINE_by_id;
++		ENGINE_set_finish_function;
++		ENGINE_get_def_BN_mod_exp_crt;
++		ENGINE_get_default_BN_mod_exp_crt;
++		RSA_get_default_openssl_method;
++		ENGINE_set_RSA;
++		ENGINE_load_private_key;
++		ENGINE_set_default_RAND;
++		ENGINE_set_BN_mod_exp;
++		ENGINE_remove;
++		ENGINE_free;
++		ENGINE_get_BN_mod_exp_crt;
++		ENGINE_get_next;
++		ENGINE_set_name;
++		ENGINE_get_default_DSA;
++		ENGINE_set_default_BN_mod_exp;
++		ENGINE_set_default_RSA;
++		ENGINE_get_default_RAND;
++		ENGINE_get_default_BN_mod_exp;
++		ENGINE_set_RAND;
++		ENGINE_set_id;
++		ENGINE_set_BN_mod_exp_crt;
++		ENGINE_set_default_DH;
++		ENGINE_new;
++		ENGINE_get_id;
++		DSA_set_default_openssl_method;
++		ENGINE_add;
++		DH_set_default_openssl_method;
++		ENGINE_get_DSA;
++		ENGINE_get_ctrl_function;
++		ENGINE_set_ctrl_function;
++		BN_pseudo_rand_range;
++		X509_STORE_CTX_set_verify_cb;
++		ERR_load_COMP_strings;
++		PKCS12_item_decrypt_d2i;
++		ASN1_UTF8STRING_it;
++		ENGINE_unregister_ciphers;
++		ENGINE_get_ciphers;
++		d2i_OCSP_BASICRESP;
++		KRB5_CHECKSUM_it;
++		EC_POINT_add;
++		ASN1_item_ex_i2d;
++		OCSP_CERTID_it;
++		d2i_OCSP_RESPBYTES;
++		X509V3_add1_i2d;
++		PKCS7_ENVELOPE_it;
++		UI_add_input_boolean;
++		ENGINE_unregister_RSA;
++		X509V3_EXT_nconf;
++		ASN1_GENERALSTRING_free;
++		d2i_OCSP_CERTSTATUS;
++		X509_REVOKED_set_serialNumber;
++		X509_print_ex;
++		OCSP_ONEREQ_get1_ext_d2i;
++		ENGINE_register_all_RAND;
++		ENGINE_load_dynamic;
++		PBKDF2PARAM_it;
++		EXTENDED_KEY_USAGE_new;
++		EC_GROUP_clear_free;
++		OCSP_sendreq_bio;
++		ASN1_item_digest;
++		OCSP_BASICRESP_delete_ext;
++		OCSP_SIGNATURE_it;
++		X509_CRL_it;
++		OCSP_BASICRESP_add_ext;
++		KRB5_ENCKEY_it;
++		UI_method_set_closer;
++		X509_STORE_set_purpose;
++		i2d_ASN1_GENERALSTRING;
++		OCSP_response_status;
++		i2d_OCSP_SERVICELOC;
++		ENGINE_get_digest_engine;
++		EC_GROUP_set_curve_GFp;
++		OCSP_REQUEST_get_ext_by_OBJ;
++		_ossl_old_des_random_key;
++		ASN1_T61STRING_it;
++		EC_GROUP_method_of;
++		i2d_KRB5_APREQ;
++		_ossl_old_des_encrypt;
++		ASN1_PRINTABLE_new;
++		HMAC_Init_ex;
++		d2i_KRB5_AUTHENT;
++		OCSP_archive_cutoff_new;
++		EC_POINT_set_Jprojective_coordinates_GFp;
++		EC_POINT_set_Jproj_coords_GFp;
++		_ossl_old_des_is_weak_key;
++		OCSP_BASICRESP_get_ext_by_OBJ;
++		EC_POINT_oct2point;
++		OCSP_SINGLERESP_get_ext_count;
++		UI_ctrl;
++		_shadow_DES_rw_mode;
++		asn1_do_adb;
++		ASN1_template_i2d;
++		ENGINE_register_DH;
++		UI_construct_prompt;
++		X509_STORE_set_trust;
++		UI_dup_input_string;
++		d2i_KRB5_APREQ;
++		EVP_MD_CTX_copy_ex;
++		OCSP_request_is_signed;
++		i2d_OCSP_REQINFO;
++		KRB5_ENCKEY_free;
++		OCSP_resp_get0;
++		GENERAL_NAME_it;
++		ASN1_GENERALIZEDTIME_it;
++		X509_STORE_set_flags;
++		EC_POINT_set_compressed_coordinates_GFp;
++		EC_POINT_set_compr_coords_GFp;
++		OCSP_response_status_str;
++		d2i_OCSP_REVOKEDINFO;
++		OCSP_basic_add1_cert;
++		ERR_get_implementation;
++		EVP_CipherFinal_ex;
++		OCSP_CERTSTATUS_new;
++		CRYPTO_cleanup_all_ex_data;
++		OCSP_resp_find;
++		BN_nnmod;
++		X509_CRL_sort;
++		X509_REVOKED_set_revocationDate;
++		ENGINE_register_RAND;
++		OCSP_SERVICELOC_new;
++		EC_POINT_set_affine_coordinates_GFp;
++		EC_POINT_set_affine_coords_GFp;
++		_ossl_old_des_options;
++		SXNET_it;
++		UI_dup_input_boolean;
++		PKCS12_add_CSPName_asc;
++		EC_POINT_is_at_infinity;
++		ENGINE_load_cryptodev;
++		DSO_convert_filename;
++		POLICYQUALINFO_it;
++		ENGINE_register_ciphers;
++		BN_mod_lshift_quick;
++		DSO_set_filename;
++		ASN1_item_free;
++		KRB5_TKTBODY_free;
++		AUTHORITY_KEYID_it;
++		KRB5_APREQBODY_new;
++		X509V3_EXT_REQ_add_nconf;
++		ENGINE_ctrl_cmd_string;
++		i2d_OCSP_RESPDATA;
++		EVP_MD_CTX_init;
++		EXTENDED_KEY_USAGE_free;
++		PKCS7_ATTR_SIGN_it;
++		UI_add_error_string;
++		KRB5_CHECKSUM_free;
++		OCSP_REQUEST_get_ext;
++		ENGINE_load_ubsec;
++		ENGINE_register_all_digests;
++		PKEY_USAGE_PERIOD_it;
++		PKCS12_unpack_authsafes;
++		ASN1_item_unpack;
++		NETSCAPE_SPKAC_it;
++		X509_REVOKED_it;
++		ASN1_STRING_encode;
++		EVP_aes_128_ecb;
++		KRB5_AUTHENT_free;
++		OCSP_BASICRESP_get_ext_by_critical;
++		OCSP_BASICRESP_get_ext_by_crit;
++		OCSP_cert_status_str;
++		d2i_OCSP_REQUEST;
++		UI_dup_info_string;
++		_ossl_old_des_xwhite_in2out;
++		PKCS12_it;
++		OCSP_SINGLERESP_get_ext_by_critical;
++		OCSP_SINGLERESP_get_ext_by_crit;
++		OCSP_CERTSTATUS_free;
++		_ossl_old_des_crypt;
++		ASN1_item_i2d;
++		EVP_DecryptFinal_ex;
++		ENGINE_load_openssl;
++		ENGINE_get_cmd_defns;
++		ENGINE_set_load_privkey_function;
++		ENGINE_set_load_privkey_fn;
++		EVP_EncryptFinal_ex;
++		ENGINE_set_default_digests;
++		X509_get0_pubkey_bitstr;
++		asn1_ex_i2c;
++		ENGINE_register_RSA;
++		ENGINE_unregister_DSA;
++		_ossl_old_des_key_sched;
++		X509_EXTENSION_it;
++		i2d_KRB5_AUTHENT;
++		SXNETID_it;
++		d2i_OCSP_SINGLERESP;
++		EDIPARTYNAME_new;
++		PKCS12_certbag2x509;
++		_ossl_old_des_ofb64_encrypt;
++		d2i_EXTENDED_KEY_USAGE;
++		ERR_print_errors_cb;
++		ENGINE_set_ciphers;
++		d2i_KRB5_APREQBODY;
++		UI_method_get_flusher;
++		X509_PUBKEY_it;
++		_ossl_old_des_enc_read;
++		PKCS7_ENCRYPT_it;
++		i2d_OCSP_RESPONSE;
++		EC_GROUP_get_cofactor;
++		PKCS12_unpack_p7data;
++		d2i_KRB5_AUTHDATA;
++		OCSP_copy_nonce;
++		KRB5_AUTHDATA_new;
++		OCSP_RESPDATA_new;
++		EC_GFp_mont_method;
++		OCSP_REVOKEDINFO_free;
++		UI_get_ex_data;
++		KRB5_APREQBODY_free;
++		EC_GROUP_get0_generator;
++		UI_get_default_method;
++		X509V3_set_nconf;
++		PKCS12_item_i2d_encrypt;
++		X509_add1_ext_i2d;
++		PKCS7_SIGNER_INFO_it;
++		KRB5_PRINCNAME_new;
++		PKCS12_SAFEBAG_it;
++		EC_GROUP_get_order;
++		d2i_OCSP_RESPID;
++		OCSP_request_verify;
++		NCONF_get_number_e;
++		_ossl_old_des_decrypt3;
++		X509_signature_print;
++		OCSP_SINGLERESP_free;
++		ENGINE_load_builtin_engines;
++		i2d_OCSP_ONEREQ;
++		OCSP_REQUEST_add_ext;
++		OCSP_RESPBYTES_new;
++		EVP_MD_CTX_create;
++		OCSP_resp_find_status;
++		X509_ALGOR_it;
++		ASN1_TIME_it;
++		OCSP_request_set1_name;
++		OCSP_ONEREQ_get_ext_count;
++		UI_get0_result;
++		PKCS12_AUTHSAFES_it;
++		EVP_aes_256_ecb;
++		PKCS12_pack_authsafes;
++		ASN1_IA5STRING_it;
++		UI_get_input_flags;
++		EC_GROUP_set_generator;
++		_ossl_old_des_string_to_2keys;
++		OCSP_CERTID_free;
++		X509_CERT_AUX_it;
++		CERTIFICATEPOLICIES_it;
++		_ossl_old_des_ede3_cbc_encrypt;
++		RAND_set_rand_engine;
++		DSO_get_loaded_filename;
++		X509_ATTRIBUTE_it;
++		OCSP_ONEREQ_get_ext_by_NID;
++		PKCS12_decrypt_skey;
++		KRB5_AUTHENT_it;
++		UI_dup_error_string;
++		RSAPublicKey_it;
++		i2d_OCSP_REQUEST;
++		PKCS12_x509crl2certbag;
++		OCSP_SERVICELOC_it;
++		ASN1_item_sign;
++		X509_CRL_set_issuer_name;
++		OBJ_NAME_do_all_sorted;
++		i2d_OCSP_BASICRESP;
++		i2d_OCSP_RESPBYTES;
++		PKCS12_unpack_p7encdata;
++		HMAC_CTX_init;
++		ENGINE_get_digest;
++		OCSP_RESPONSE_print;
++		KRB5_TKTBODY_it;
++		ACCESS_DESCRIPTION_it;
++		PKCS7_ISSUER_AND_SERIAL_it;
++		PBE2PARAM_it;
++		PKCS12_certbag2x509crl;
++		PKCS7_SIGNED_it;
++		ENGINE_get_cipher;
++		i2d_OCSP_CRLID;
++		OCSP_SINGLERESP_new;
++		ENGINE_cmd_is_executable;
++		RSA_up_ref;
++		ASN1_GENERALSTRING_it;
++		ENGINE_register_DSA;
++		X509V3_EXT_add_nconf_sk;
++		ENGINE_set_load_pubkey_function;
++		PKCS8_decrypt;
++		PEM_bytes_read_bio;
++		DIRECTORYSTRING_it;
++		d2i_OCSP_CRLID;
++		EC_POINT_is_on_curve;
++		CRYPTO_set_locked_mem_ex_functions;
++		CRYPTO_set_locked_mem_ex_funcs;
++		d2i_KRB5_CHECKSUM;
++		ASN1_item_dup;
++		X509_it;
++		BN_mod_add;
++		KRB5_AUTHDATA_free;
++		_ossl_old_des_cbc_cksum;
++		ASN1_item_verify;
++		CRYPTO_set_mem_ex_functions;
++		EC_POINT_get_Jprojective_coordinates_GFp;
++		EC_POINT_get_Jproj_coords_GFp;
++		ZLONG_it;
++		CRYPTO_get_locked_mem_ex_functions;
++		CRYPTO_get_locked_mem_ex_funcs;
++		ASN1_TIME_check;
++		UI_get0_user_data;
++		HMAC_CTX_cleanup;
++		DSA_up_ref;
++		_ossl_old_des_ede3_cfb64_encrypt;
++		_ossl_odes_ede3_cfb64_encrypt;
++		ASN1_BMPSTRING_it;
++		ASN1_tag2bit;
++		UI_method_set_flusher;
++		X509_ocspid_print;
++		KRB5_ENCDATA_it;
++		ENGINE_get_load_pubkey_function;
++		UI_add_user_data;
++		OCSP_REQUEST_delete_ext;
++		UI_get_method;
++		OCSP_ONEREQ_free;
++		ASN1_PRINTABLESTRING_it;
++		X509_CRL_set_nextUpdate;
++		OCSP_REQUEST_it;
++		OCSP_BASICRESP_it;
++		AES_ecb_encrypt;
++		BN_mod_sqr;
++		NETSCAPE_CERT_SEQUENCE_it;
++		GENERAL_NAMES_it;
++		AUTHORITY_INFO_ACCESS_it;
++		ASN1_FBOOLEAN_it;
++		UI_set_ex_data;
++		_ossl_old_des_string_to_key;
++		ENGINE_register_all_RSA;
++		d2i_KRB5_PRINCNAME;
++		OCSP_RESPBYTES_it;
++		X509_CINF_it;
++		ENGINE_unregister_digests;
++		d2i_EDIPARTYNAME;
++		d2i_OCSP_SERVICELOC;
++		ENGINE_get_digests;
++		_ossl_old_des_set_odd_parity;
++		OCSP_RESPDATA_free;
++		d2i_KRB5_TICKET;
++		OTHERNAME_it;
++		EVP_MD_CTX_cleanup;
++		d2i_ASN1_GENERALSTRING;
++		X509_CRL_set_version;
++		BN_mod_sub;
++		OCSP_SINGLERESP_get_ext_by_NID;
++		ENGINE_get_ex_new_index;
++		OCSP_REQUEST_free;
++		OCSP_REQUEST_add1_ext_i2d;
++		X509_VAL_it;
++		EC_POINTs_make_affine;
++		EC_POINT_mul;
++		X509V3_EXT_add_nconf;
++		X509_TRUST_set;
++		X509_CRL_add1_ext_i2d;
++		_ossl_old_des_fcrypt;
++		DISPLAYTEXT_it;
++		X509_CRL_set_lastUpdate;
++		OCSP_BASICRESP_free;
++		OCSP_BASICRESP_add1_ext_i2d;
++		d2i_KRB5_AUTHENTBODY;
++		CRYPTO_set_ex_data_implementation;
++		CRYPTO_set_ex_data_impl;
++		KRB5_ENCDATA_new;
++		DSO_up_ref;
++		OCSP_crl_reason_str;
++		UI_get0_result_string;
++		ASN1_GENERALSTRING_new;
++		X509_SIG_it;
++		ERR_set_implementation;
++		ERR_load_EC_strings;
++		UI_get0_action_string;
++		OCSP_ONEREQ_get_ext;
++		EC_POINT_method_of;
++		i2d_KRB5_APREQBODY;
++		_ossl_old_des_ecb3_encrypt;
++		CRYPTO_get_mem_ex_functions;
++		ENGINE_get_ex_data;
++		UI_destroy_method;
++		ASN1_item_i2d_bio;
++		OCSP_ONEREQ_get_ext_by_OBJ;
++		ASN1_primitive_new;
++		ASN1_PRINTABLE_it;
++		EVP_aes_192_ecb;
++		OCSP_SIGNATURE_new;
++		LONG_it;
++		ASN1_VISIBLESTRING_it;
++		OCSP_SINGLERESP_add1_ext_i2d;
++		d2i_OCSP_CERTID;
++		ASN1_item_d2i_fp;
++		CRL_DIST_POINTS_it;
++		GENERAL_NAME_print;
++		OCSP_SINGLERESP_delete_ext;
++		PKCS12_SAFEBAGS_it;
++		d2i_OCSP_SIGNATURE;
++		OCSP_request_add1_nonce;
++		ENGINE_set_cmd_defns;
++		OCSP_SERVICELOC_free;
++		EC_GROUP_free;
++		ASN1_BIT_STRING_it;
++		X509_REQ_it;
++		_ossl_old_des_cbc_encrypt;
++		ERR_unload_strings;
++		PKCS7_SIGN_ENVELOPE_it;
++		EDIPARTYNAME_free;
++		OCSP_REQINFO_free;
++		EC_GROUP_new_curve_GFp;
++		OCSP_REQUEST_get1_ext_d2i;
++		PKCS12_item_pack_safebag;
++		asn1_ex_c2i;
++		ENGINE_register_digests;
++		i2d_OCSP_REVOKEDINFO;
++		asn1_enc_restore;
++		UI_free;
++		UI_new_method;
++		EVP_EncryptInit_ex;
++		X509_pubkey_digest;
++		EC_POINT_invert;
++		OCSP_basic_sign;
++		i2d_OCSP_RESPID;
++		OCSP_check_nonce;
++		ENGINE_ctrl_cmd;
++		d2i_KRB5_ENCKEY;
++		OCSP_parse_url;
++		OCSP_SINGLERESP_get_ext;
++		OCSP_CRLID_free;
++		OCSP_BASICRESP_get1_ext_d2i;
++		RSAPrivateKey_it;
++		ENGINE_register_all_DH;
++		i2d_EDIPARTYNAME;
++		EC_POINT_get_affine_coordinates_GFp;
++		EC_POINT_get_affine_coords_GFp;
++		OCSP_CRLID_new;
++		ENGINE_get_flags;
++		OCSP_ONEREQ_it;
++		UI_process;
++		ASN1_INTEGER_it;
++		EVP_CipherInit_ex;
++		UI_get_string_type;
++		ENGINE_unregister_DH;
++		ENGINE_register_all_DSA;
++		OCSP_ONEREQ_get_ext_by_critical;
++		bn_dup_expand;
++		OCSP_cert_id_new;
++		BASIC_CONSTRAINTS_it;
++		BN_mod_add_quick;
++		EC_POINT_new;
++		EVP_MD_CTX_destroy;
++		OCSP_RESPBYTES_free;
++		EVP_aes_128_cbc;
++		OCSP_SINGLERESP_get1_ext_d2i;
++		EC_POINT_free;
++		DH_up_ref;
++		X509_NAME_ENTRY_it;
++		UI_get_ex_new_index;
++		BN_mod_sub_quick;
++		OCSP_ONEREQ_add_ext;
++		OCSP_request_sign;
++		EVP_DigestFinal_ex;
++		ENGINE_set_digests;
++		OCSP_id_issuer_cmp;
++		OBJ_NAME_do_all;
++		EC_POINTs_mul;
++		ENGINE_register_complete;
++		X509V3_EXT_nconf_nid;
++		ASN1_SEQUENCE_it;
++		UI_set_default_method;
++		RAND_query_egd_bytes;
++		UI_method_get_writer;
++		UI_OpenSSL;
++		PEM_def_callback;
++		ENGINE_cleanup;
++		DIST_POINT_it;
++		OCSP_SINGLERESP_it;
++		d2i_KRB5_TKTBODY;
++		EC_POINT_cmp;
++		OCSP_REVOKEDINFO_new;
++		i2d_OCSP_CERTSTATUS;
++		OCSP_basic_add1_nonce;
++		ASN1_item_ex_d2i;
++		BN_mod_lshift1_quick;
++		UI_set_method;
++		OCSP_id_get0_info;
++		BN_mod_sqrt;
++		EC_GROUP_copy;
++		KRB5_ENCDATA_free;
++		_ossl_old_des_cfb_encrypt;
++		OCSP_SINGLERESP_get_ext_by_OBJ;
++		OCSP_cert_to_id;
++		OCSP_RESPID_new;
++		OCSP_RESPDATA_it;
++		d2i_OCSP_RESPDATA;
++		ENGINE_register_all_complete;
++		OCSP_check_validity;
++		PKCS12_BAGS_it;
++		OCSP_url_svcloc_new;
++		ASN1_template_free;
++		OCSP_SINGLERESP_add_ext;
++		KRB5_AUTHENTBODY_it;
++		X509_supported_extension;
++		i2d_KRB5_AUTHDATA;
++		UI_method_get_opener;
++		ENGINE_set_ex_data;
++		OCSP_REQUEST_print;
++		CBIGNUM_it;
++		KRB5_TICKET_new;
++		KRB5_APREQ_new;
++		EC_GROUP_get_curve_GFp;
++		KRB5_ENCKEY_new;
++		ASN1_template_d2i;
++		_ossl_old_des_quad_cksum;
++		OCSP_single_get0_status;
++		BN_swap;
++		POLICYINFO_it;
++		ENGINE_set_destroy_function;
++		asn1_enc_free;
++		OCSP_RESPID_it;
++		EC_GROUP_new;
++		EVP_aes_256_cbc;
++		i2d_KRB5_PRINCNAME;
++		_ossl_old_des_encrypt2;
++		_ossl_old_des_encrypt3;
++		PKCS8_PRIV_KEY_INFO_it;
++		OCSP_REQINFO_it;
++		PBEPARAM_it;
++		KRB5_AUTHENTBODY_new;
++		X509_CRL_add0_revoked;
++		EDIPARTYNAME_it;
++		NETSCAPE_SPKI_it;
++		UI_get0_test_string;
++		ENGINE_get_cipher_engine;
++		ENGINE_register_all_ciphers;
++		EC_POINT_copy;
++		BN_kronecker;
++		_ossl_old_des_ede3_ofb64_encrypt;
++		_ossl_odes_ede3_ofb64_encrypt;
++		UI_method_get_reader;
++		OCSP_BASICRESP_get_ext_count;
++		ASN1_ENUMERATED_it;
++		UI_set_result;
++		i2d_KRB5_TICKET;
++		X509_print_ex_fp;
++		EVP_CIPHER_CTX_set_padding;
++		d2i_OCSP_RESPONSE;
++		ASN1_UTCTIME_it;
++		_ossl_old_des_enc_write;
++		OCSP_RESPONSE_new;
++		AES_set_encrypt_key;
++		OCSP_resp_count;
++		KRB5_CHECKSUM_new;
++		ENGINE_load_cswift;
++		OCSP_onereq_get0_id;
++		ENGINE_set_default_ciphers;
++		NOTICEREF_it;
++		X509V3_EXT_CRL_add_nconf;
++		OCSP_REVOKEDINFO_it;
++		AES_encrypt;
++		OCSP_REQUEST_new;
++		ASN1_ANY_it;
++		CRYPTO_ex_data_new_class;
++		_ossl_old_des_ncbc_encrypt;
++		i2d_KRB5_TKTBODY;
++		EC_POINT_clear_free;
++		AES_decrypt;
++		asn1_enc_init;
++		UI_get_result_maxsize;
++		OCSP_CERTID_new;
++		ENGINE_unregister_RAND;
++		UI_method_get_closer;
++		d2i_KRB5_ENCDATA;
++		OCSP_request_onereq_count;
++		OCSP_basic_verify;
++		KRB5_AUTHENTBODY_free;
++		ASN1_item_d2i;
++		ASN1_primitive_free;
++		i2d_EXTENDED_KEY_USAGE;
++		i2d_OCSP_SIGNATURE;
++		asn1_enc_save;
++		ENGINE_load_nuron;
++		_ossl_old_des_pcbc_encrypt;
++		PKCS12_MAC_DATA_it;
++		OCSP_accept_responses_new;
++		asn1_do_lock;
++		PKCS7_ATTR_VERIFY_it;
++		KRB5_APREQBODY_it;
++		i2d_OCSP_SINGLERESP;
++		ASN1_item_ex_new;
++		UI_add_verify_string;
++		_ossl_old_des_set_key;
++		KRB5_PRINCNAME_it;
++		EVP_DecryptInit_ex;
++		i2d_OCSP_CERTID;
++		ASN1_item_d2i_bio;
++		EC_POINT_dbl;
++		asn1_get_choice_selector;
++		i2d_KRB5_CHECKSUM;
++		ENGINE_set_table_flags;
++		AES_options;
++		ENGINE_load_chil;
++		OCSP_id_cmp;
++		OCSP_BASICRESP_new;
++		OCSP_REQUEST_get_ext_by_NID;
++		KRB5_APREQ_it;
++		ENGINE_get_destroy_function;
++		CONF_set_nconf;
++		ASN1_PRINTABLE_free;
++		OCSP_BASICRESP_get_ext_by_NID;
++		DIST_POINT_NAME_it;
++		X509V3_extensions_print;
++		_ossl_old_des_cfb64_encrypt;
++		X509_REVOKED_add1_ext_i2d;
++		_ossl_old_des_ofb_encrypt;
++		KRB5_TKTBODY_new;
++		ASN1_OCTET_STRING_it;
++		ERR_load_UI_strings;
++		i2d_KRB5_ENCKEY;
++		ASN1_template_new;
++		OCSP_SIGNATURE_free;
++		ASN1_item_i2d_fp;
++		KRB5_PRINCNAME_free;
++		PKCS7_RECIP_INFO_it;
++		EXTENDED_KEY_USAGE_it;
++		EC_GFp_simple_method;
++		EC_GROUP_precompute_mult;
++		OCSP_request_onereq_get0;
++		UI_method_set_writer;
++		KRB5_AUTHENT_new;
++		X509_CRL_INFO_it;
++		DSO_set_name_converter;
++		AES_set_decrypt_key;
++		PKCS7_DIGEST_it;
++		PKCS12_x5092certbag;
++		EVP_DigestInit_ex;
++		i2a_ACCESS_DESCRIPTION;
++		OCSP_RESPONSE_it;
++		PKCS7_ENC_CONTENT_it;
++		OCSP_request_add0_id;
++		EC_POINT_make_affine;
++		DSO_get_filename;
++		OCSP_CERTSTATUS_it;
++		OCSP_request_add1_cert;
++		UI_get0_output_string;
++		UI_dup_verify_string;
++		BN_mod_lshift;
++		KRB5_AUTHDATA_it;
++		asn1_set_choice_selector;
++		OCSP_basic_add1_status;
++		OCSP_RESPID_free;
++		asn1_get_field_ptr;
++		UI_add_input_string;
++		OCSP_CRLID_it;
++		i2d_KRB5_AUTHENTBODY;
++		OCSP_REQUEST_get_ext_count;
++		ENGINE_load_atalla;
++		X509_NAME_it;
++		USERNOTICE_it;
++		OCSP_REQINFO_new;
++		OCSP_BASICRESP_get_ext;
++		CRYPTO_get_ex_data_implementation;
++		CRYPTO_get_ex_data_impl;
++		ASN1_item_pack;
++		i2d_KRB5_ENCDATA;
++		X509_PURPOSE_set;
++		X509_REQ_INFO_it;
++		UI_method_set_opener;
++		ASN1_item_ex_free;
++		ASN1_BOOLEAN_it;
++		ENGINE_get_table_flags;
++		UI_create_method;
++		OCSP_ONEREQ_add1_ext_i2d;
++		_shadow_DES_check_key;
++		d2i_OCSP_REQINFO;
++		UI_add_info_string;
++		UI_get_result_minsize;
++		ASN1_NULL_it;
++		BN_mod_lshift1;
++		d2i_OCSP_ONEREQ;
++		OCSP_ONEREQ_new;
++		KRB5_TICKET_it;
++		EVP_aes_192_cbc;
++		KRB5_TICKET_free;
++		UI_new;
++		OCSP_response_create;
++		_ossl_old_des_xcbc_encrypt;
++		PKCS7_it;
++		OCSP_REQUEST_get_ext_by_critical;
++		OCSP_REQUEST_get_ext_by_crit;
++		ENGINE_set_flags;
++		_ossl_old_des_ecb_encrypt;
++		OCSP_response_get1_basic;
++		EVP_Digest;
++		OCSP_ONEREQ_delete_ext;
++		ASN1_TBOOLEAN_it;
++		ASN1_item_new;
++		ASN1_TIME_to_generalizedtime;
++		BIGNUM_it;
++		AES_cbc_encrypt;
++		ENGINE_get_load_privkey_function;
++		ENGINE_get_load_privkey_fn;
++		OCSP_RESPONSE_free;
++		UI_method_set_reader;
++		i2d_ASN1_T61STRING;
++		EC_POINT_set_to_infinity;
++		ERR_load_OCSP_strings;
++		EC_POINT_point2oct;
++		KRB5_APREQ_free;
++		ASN1_OBJECT_it;
++		OCSP_crlID_new;
++		OCSP_crlID2_new;
++		CONF_modules_load_file;
++		CONF_imodule_set_usr_data;
++		ENGINE_set_default_string;
++		CONF_module_get_usr_data;
++		ASN1_add_oid_module;
++		CONF_modules_finish;
++		OPENSSL_config;
++		CONF_modules_unload;
++		CONF_imodule_get_value;
++		CONF_module_set_usr_data;
++		CONF_parse_list;
++		CONF_module_add;
++		CONF_get1_default_config_file;
++		CONF_imodule_get_flags;
++		CONF_imodule_get_module;
++		CONF_modules_load;
++		CONF_imodule_get_name;
++		ERR_peek_top_error;
++		CONF_imodule_get_usr_data;
++		CONF_imodule_set_flags;
++		ENGINE_add_conf_module;
++		ERR_peek_last_error_line;
++		ERR_peek_last_error_line_data;
++		ERR_peek_last_error;
++		DES_read_2passwords;
++		DES_read_password;
++		UI_UTIL_read_pw;
++		UI_UTIL_read_pw_string;
++		ENGINE_load_aep;
++		ENGINE_load_sureware;
++		OPENSSL_add_all_algorithms_noconf;
++		OPENSSL_add_all_algo_noconf;
++		OPENSSL_add_all_algorithms_conf;
++		OPENSSL_add_all_algo_conf;
++		OPENSSL_load_builtin_modules;
++		AES_ofb128_encrypt;
++		AES_ctr128_encrypt;
++		AES_cfb128_encrypt;
++		ENGINE_load_4758cca;
++		_ossl_096_des_random_seed;
++		EVP_aes_256_ofb;
++		EVP_aes_192_ofb;
++		EVP_aes_128_cfb128;
++		EVP_aes_256_cfb128;
++		EVP_aes_128_ofb;
++		EVP_aes_192_cfb128;
++		CONF_modules_free;
++		NCONF_default;
++		OPENSSL_no_config;
++		NCONF_WIN32;
++		ASN1_UNIVERSALSTRING_new;
++		EVP_des_ede_ecb;
++		i2d_ASN1_UNIVERSALSTRING;
++		ASN1_UNIVERSALSTRING_free;
++		ASN1_UNIVERSALSTRING_it;
++		d2i_ASN1_UNIVERSALSTRING;
++		EVP_des_ede3_ecb;
++		X509_REQ_print_ex;
++		ENGINE_up_ref;
++		BUF_MEM_grow_clean;
++		CRYPTO_realloc_clean;
++		BUF_strlcat;
++		BIO_indent;
++		BUF_strlcpy;
++		OpenSSLDie;
++		OPENSSL_cleanse;
++		ENGINE_setup_bsd_cryptodev;
++		ERR_release_err_state_table;
++		EVP_aes_128_cfb8;
++		FIPS_corrupt_rsa;
++		FIPS_selftest_des;
++		EVP_aes_128_cfb1;
++		EVP_aes_192_cfb8;
++		FIPS_mode_set;
++		FIPS_selftest_dsa;
++		EVP_aes_256_cfb8;
++		FIPS_allow_md5;
++		DES_ede3_cfb_encrypt;
++		EVP_des_ede3_cfb8;
++		FIPS_rand_seeded;
++		AES_cfbr_encrypt_block;
++		AES_cfb8_encrypt;
++		FIPS_rand_seed;
++		FIPS_corrupt_des;
++		EVP_aes_192_cfb1;
++		FIPS_selftest_aes;
++		FIPS_set_prng_key;
++		EVP_des_cfb8;
++		FIPS_corrupt_dsa;
++		FIPS_test_mode;
++		FIPS_rand_method;
++		EVP_aes_256_cfb1;
++		ERR_load_FIPS_strings;
++		FIPS_corrupt_aes;
++		FIPS_selftest_sha1;
++		FIPS_selftest_rsa;
++		FIPS_corrupt_sha1;
++		EVP_des_cfb1;
++		FIPS_dsa_check;
++		AES_cfb1_encrypt;
++		EVP_des_ede3_cfb1;
++		FIPS_rand_check;
++		FIPS_md5_allowed;
++		FIPS_mode;
++		FIPS_selftest_failed;
++		sk_is_sorted;
++		X509_check_ca;
++		HMAC_CTX_set_flags;
++		d2i_PROXY_CERT_INFO_EXTENSION;
++		PROXY_POLICY_it;
++		i2d_PROXY_POLICY;
++		i2d_PROXY_CERT_INFO_EXTENSION;
++		d2i_PROXY_POLICY;
++		PROXY_CERT_INFO_EXTENSION_new;
++		PROXY_CERT_INFO_EXTENSION_free;
++		PROXY_CERT_INFO_EXTENSION_it;
++		PROXY_POLICY_free;
++		PROXY_POLICY_new;
++		BN_MONT_CTX_set_locked;
++		FIPS_selftest_rng;
++		EVP_sha384;
++		EVP_sha512;
++		EVP_sha224;
++		EVP_sha256;
++		FIPS_selftest_hmac;
++		FIPS_corrupt_rng;
++		BN_mod_exp_mont_consttime;
++		RSA_X931_hash_id;
++		RSA_padding_check_X931;
++		RSA_verify_PKCS1_PSS;
++		RSA_padding_add_X931;
++		RSA_padding_add_PKCS1_PSS;
++		PKCS1_MGF1;
++		BN_X931_generate_Xpq;
++		RSA_X931_generate_key;
++		BN_X931_derive_prime;
++		BN_X931_generate_prime;
++		RSA_X931_derive;
++		BIO_new_dgram;
++		BN_get0_nist_prime_384;
++		ERR_set_mark;
++		X509_STORE_CTX_set0_crls;
++		ENGINE_set_STORE;
++		ENGINE_register_ECDSA;
++		STORE_meth_set_list_start_fn;
++		STORE_method_set_list_start_function;
++		BN_BLINDING_invert_ex;
++		NAME_CONSTRAINTS_free;
++		STORE_ATTR_INFO_set_number;
++		BN_BLINDING_get_thread_id;
++		X509_STORE_CTX_set0_param;
++		POLICY_MAPPING_it;
++		STORE_parse_attrs_start;
++		POLICY_CONSTRAINTS_free;
++		EVP_PKEY_add1_attr_by_NID;
++		BN_nist_mod_192;
++		EC_GROUP_get_trinomial_basis;
++		STORE_set_method;
++		GENERAL_SUBTREE_free;
++		NAME_CONSTRAINTS_it;
++		ECDH_get_default_method;
++		PKCS12_add_safe;
++		EC_KEY_new_by_curve_name;
++		STORE_meth_get_update_store_fn;
++		STORE_method_get_update_store_function;
++		ENGINE_register_ECDH;
++		SHA512_Update;
++		i2d_ECPrivateKey;
++		BN_get0_nist_prime_192;
++		STORE_modify_certificate;
++		EC_POINT_set_affine_coordinates_GF2m;
++		EC_POINT_set_affine_coords_GF2m;
++		BN_GF2m_mod_exp_arr;
++		STORE_ATTR_INFO_modify_number;
++		X509_keyid_get0;
++		ENGINE_load_gmp;
++		pitem_new;
++		BN_GF2m_mod_mul_arr;
++		STORE_list_public_key_endp;
++		o2i_ECPublicKey;
++		EC_KEY_copy;
++		BIO_dump_fp;
++		X509_policy_node_get0_parent;
++		EC_GROUP_check_discriminant;
++		i2o_ECPublicKey;
++		EC_KEY_precompute_mult;
++		a2i_IPADDRESS;
++		STORE_meth_set_initialise_fn;
++		STORE_method_set_initialise_function;
++		X509_STORE_CTX_set_depth;
++		X509_VERIFY_PARAM_inherit;
++		EC_POINT_point2bn;
++		STORE_ATTR_INFO_set_dn;
++		X509_policy_tree_get0_policies;
++		EC_GROUP_new_curve_GF2m;
++		STORE_destroy_method;
++		ENGINE_unregister_STORE;
++		EVP_PKEY_get1_EC_KEY;
++		STORE_ATTR_INFO_get0_number;
++		ENGINE_get_default_ECDH;
++		EC_KEY_get_conv_form;
++		ASN1_OCTET_STRING_NDEF_it;
++		STORE_delete_public_key;
++		STORE_get_public_key;
++		STORE_modify_arbitrary;
++		ENGINE_get_static_state;
++		pqueue_iterator;
++		ECDSA_SIG_new;
++		OPENSSL_DIR_end;
++		BN_GF2m_mod_sqr;
++		EC_POINT_bn2point;
++		X509_VERIFY_PARAM_set_depth;
++		EC_KEY_set_asn1_flag;
++		STORE_get_method;
++		EC_KEY_get_key_method_data;
++		ECDSA_sign_ex;
++		STORE_parse_attrs_end;
++		EC_GROUP_get_point_conversion_form;
++		EC_GROUP_get_point_conv_form;
++		STORE_method_set_store_function;
++		STORE_ATTR_INFO_in;
++		PEM_read_bio_ECPKParameters;
++		EC_GROUP_get_pentanomial_basis;
++		EVP_PKEY_add1_attr_by_txt;
++		BN_BLINDING_set_flags;
++		X509_VERIFY_PARAM_set1_policies;
++		X509_VERIFY_PARAM_set1_name;
++		X509_VERIFY_PARAM_set_purpose;
++		STORE_get_number;
++		ECDSA_sign_setup;
++		BN_GF2m_mod_solve_quad_arr;
++		EC_KEY_up_ref;
++		POLICY_MAPPING_free;
++		BN_GF2m_mod_div;
++		X509_VERIFY_PARAM_set_flags;
++		EC_KEY_free;
++		STORE_meth_set_list_next_fn;
++		STORE_method_set_list_next_function;
++		PEM_write_bio_ECPrivateKey;
++		d2i_EC_PUBKEY;
++		STORE_meth_get_generate_fn;
++		STORE_method_get_generate_function;
++		STORE_meth_set_list_end_fn;
++		STORE_method_set_list_end_function;
++		pqueue_print;
++		EC_GROUP_have_precompute_mult;
++		EC_KEY_print_fp;
++		BN_GF2m_mod_arr;
++		PEM_write_bio_X509_CERT_PAIR;
++		EVP_PKEY_cmp;
++		X509_policy_level_node_count;
++		STORE_new_engine;
++		STORE_list_public_key_start;
++		X509_VERIFY_PARAM_new;
++		ECDH_get_ex_data;
++		EVP_PKEY_get_attr;
++		ECDSA_do_sign;
++		ENGINE_unregister_ECDH;
++		ECDH_OpenSSL;
++		EC_KEY_set_conv_form;
++		EC_POINT_dup;
++		GENERAL_SUBTREE_new;
++		STORE_list_crl_endp;
++		EC_get_builtin_curves;
++		X509_policy_node_get0_qualifiers;
++		X509_pcy_node_get0_qualifiers;
++		STORE_list_crl_end;
++		EVP_PKEY_set1_EC_KEY;
++		BN_GF2m_mod_sqrt_arr;
++		i2d_ECPrivateKey_bio;
++		ECPKParameters_print_fp;
++		pqueue_find;
++		ECDSA_SIG_free;
++		PEM_write_bio_ECPKParameters;
++		STORE_method_set_ctrl_function;
++		STORE_list_public_key_end;
++		EC_KEY_set_private_key;
++		pqueue_peek;
++		STORE_get_arbitrary;
++		STORE_store_crl;
++		X509_policy_node_get0_policy;
++		PKCS12_add_safes;
++		BN_BLINDING_convert_ex;
++		X509_policy_tree_free;
++		OPENSSL_ia32cap_loc;
++		BN_GF2m_poly2arr;
++		STORE_ctrl;
++		STORE_ATTR_INFO_compare;
++		BN_get0_nist_prime_224;
++		i2d_ECParameters;
++		i2d_ECPKParameters;
++		BN_GENCB_call;
++		d2i_ECPKParameters;
++		STORE_meth_set_generate_fn;
++		STORE_method_set_generate_function;
++		ENGINE_set_ECDH;
++		NAME_CONSTRAINTS_new;
++		SHA256_Init;
++		EC_KEY_get0_public_key;
++		PEM_write_bio_EC_PUBKEY;
++		STORE_ATTR_INFO_set_cstr;
++		STORE_list_crl_next;
++		STORE_ATTR_INFO_in_range;
++		ECParameters_print;
++		STORE_meth_set_delete_fn;
++		STORE_method_set_delete_function;
++		STORE_list_certificate_next;
++		ASN1_generate_nconf;
++		BUF_memdup;
++		BN_GF2m_mod_mul;
++		STORE_meth_get_list_next_fn;
++		STORE_method_get_list_next_function;
++		STORE_ATTR_INFO_get0_dn;
++		STORE_list_private_key_next;
++		EC_GROUP_set_seed;
++		X509_VERIFY_PARAM_set_trust;
++		STORE_ATTR_INFO_free;
++		STORE_get_private_key;
++		EVP_PKEY_get_attr_count;
++		STORE_ATTR_INFO_new;
++		EC_GROUP_get_curve_GF2m;
++		STORE_meth_set_revoke_fn;
++		STORE_method_set_revoke_function;
++		STORE_store_number;
++		BN_is_prime_ex;
++		STORE_revoke_public_key;
++		X509_STORE_CTX_get0_param;
++		STORE_delete_arbitrary;
++		PEM_read_X509_CERT_PAIR;
++		X509_STORE_set_depth;
++		ECDSA_get_ex_data;
++		SHA224;
++		BIO_dump_indent_fp;
++		EC_KEY_set_group;
++		BUF_strndup;
++		STORE_list_certificate_start;
++		BN_GF2m_mod;
++		X509_REQ_check_private_key;
++		EC_GROUP_get_seed_len;
++		ERR_load_STORE_strings;
++		PEM_read_bio_EC_PUBKEY;
++		STORE_list_private_key_end;
++		i2d_EC_PUBKEY;
++		ECDSA_get_default_method;
++		ASN1_put_eoc;
++		X509_STORE_CTX_get_explicit_policy;
++		X509_STORE_CTX_get_expl_policy;
++		X509_VERIFY_PARAM_table_cleanup;
++		STORE_modify_private_key;
++		X509_VERIFY_PARAM_free;
++		EC_METHOD_get_field_type;
++		EC_GFp_nist_method;
++		STORE_meth_set_modify_fn;
++		STORE_method_set_modify_function;
++		STORE_parse_attrs_next;
++		ENGINE_load_padlock;
++		EC_GROUP_set_curve_name;
++		X509_CERT_PAIR_it;
++		STORE_meth_get_revoke_fn;
++		STORE_method_get_revoke_function;
++		STORE_method_set_get_function;
++		STORE_modify_number;
++		STORE_method_get_store_function;
++		STORE_store_private_key;
++		BN_GF2m_mod_sqr_arr;
++		RSA_setup_blinding;
++		BIO_s_datagram;
++		STORE_Memory;
++		sk_find_ex;
++		EC_GROUP_set_curve_GF2m;
++		ENGINE_set_default_ECDSA;
++		POLICY_CONSTRAINTS_new;
++		BN_GF2m_mod_sqrt;
++		ECDH_set_default_method;
++		EC_KEY_generate_key;
++		SHA384_Update;
++		BN_GF2m_arr2poly;
++		STORE_method_get_get_function;
++		STORE_meth_set_cleanup_fn;
++		STORE_method_set_cleanup_function;
++		EC_GROUP_check;
++		d2i_ECPrivateKey_bio;
++		EC_KEY_insert_key_method_data;
++		STORE_meth_get_lock_store_fn;
++		STORE_method_get_lock_store_function;
++		X509_VERIFY_PARAM_get_depth;
++		SHA224_Final;
++		STORE_meth_set_update_store_fn;
++		STORE_method_set_update_store_function;
++		SHA224_Update;
++		d2i_ECPrivateKey;
++		ASN1_item_ndef_i2d;
++		STORE_delete_private_key;
++		ERR_pop_to_mark;
++		ENGINE_register_all_STORE;
++		X509_policy_level_get0_node;
++		i2d_PKCS7_NDEF;
++		EC_GROUP_get_degree;
++		ASN1_generate_v3;
++		STORE_ATTR_INFO_modify_cstr;
++		X509_policy_tree_level_count;
++		BN_GF2m_add;
++		EC_KEY_get0_group;
++		STORE_generate_crl;
++		STORE_store_public_key;
++		X509_CERT_PAIR_free;
++		STORE_revoke_private_key;
++		BN_nist_mod_224;
++		SHA512_Final;
++		STORE_ATTR_INFO_modify_dn;
++		STORE_meth_get_initialise_fn;
++		STORE_method_get_initialise_function;
++		STORE_delete_number;
++		i2d_EC_PUBKEY_bio;
++		BIO_dgram_non_fatal_error;
++		EC_GROUP_get_asn1_flag;
++		STORE_ATTR_INFO_in_ex;
++		STORE_list_crl_start;
++		ECDH_get_ex_new_index;
++		STORE_meth_get_modify_fn;
++		STORE_method_get_modify_function;
++		v2i_ASN1_BIT_STRING;
++		STORE_store_certificate;
++		OBJ_bsearch_ex;
++		X509_STORE_CTX_set_default;
++		STORE_ATTR_INFO_set_sha1str;
++		BN_GF2m_mod_inv;
++		BN_GF2m_mod_exp;
++		STORE_modify_public_key;
++		STORE_meth_get_list_start_fn;
++		STORE_method_get_list_start_function;
++		EC_GROUP_get0_seed;
++		STORE_store_arbitrary;
++		STORE_meth_set_unlock_store_fn;
++		STORE_method_set_unlock_store_function;
++		BN_GF2m_mod_div_arr;
++		ENGINE_set_ECDSA;
++		STORE_create_method;
++		ECPKParameters_print;
++		EC_KEY_get0_private_key;
++		PEM_write_EC_PUBKEY;
++		X509_VERIFY_PARAM_set1;
++		ECDH_set_method;
++		v2i_GENERAL_NAME_ex;
++		ECDH_set_ex_data;
++		STORE_generate_key;
++		BN_nist_mod_521;
++		X509_policy_tree_get0_level;
++		EC_GROUP_set_point_conversion_form;
++		EC_GROUP_set_point_conv_form;
++		PEM_read_EC_PUBKEY;
++		i2d_ECDSA_SIG;
++		ECDSA_OpenSSL;
++		STORE_delete_crl;
++		EC_KEY_get_enc_flags;
++		ASN1_const_check_infinite_end;
++		EVP_PKEY_delete_attr;
++		ECDSA_set_default_method;
++		EC_POINT_set_compressed_coordinates_GF2m;
++		EC_POINT_set_compr_coords_GF2m;
++		EC_GROUP_cmp;
++		STORE_revoke_certificate;
++		BN_get0_nist_prime_256;
++		STORE_meth_get_delete_fn;
++		STORE_method_get_delete_function;
++		SHA224_Init;
++		PEM_read_ECPrivateKey;
++		SHA512_Init;
++		STORE_parse_attrs_endp;
++		BN_set_negative;
++		ERR_load_ECDSA_strings;
++		EC_GROUP_get_basis_type;
++		STORE_list_public_key_next;
++		i2v_ASN1_BIT_STRING;
++		STORE_OBJECT_free;
++		BN_nist_mod_384;
++		i2d_X509_CERT_PAIR;
++		PEM_write_ECPKParameters;
++		ECDH_compute_key;
++		STORE_ATTR_INFO_get0_sha1str;
++		ENGINE_register_all_ECDH;
++		pqueue_pop;
++		STORE_ATTR_INFO_get0_cstr;
++		POLICY_CONSTRAINTS_it;
++		STORE_get_ex_new_index;
++		EVP_PKEY_get_attr_by_OBJ;
++		X509_VERIFY_PARAM_add0_policy;
++		BN_GF2m_mod_solve_quad;
++		SHA256;
++		i2d_ECPrivateKey_fp;
++		X509_policy_tree_get0_user_policies;
++		X509_pcy_tree_get0_usr_policies;
++		OPENSSL_DIR_read;
++		ENGINE_register_all_ECDSA;
++		X509_VERIFY_PARAM_lookup;
++		EC_POINT_get_affine_coordinates_GF2m;
++		EC_POINT_get_affine_coords_GF2m;
++		EC_GROUP_dup;
++		ENGINE_get_default_ECDSA;
++		EC_KEY_new;
++		SHA256_Transform;
++		EC_KEY_set_enc_flags;
++		ECDSA_verify;
++		EC_POINT_point2hex;
++		ENGINE_get_STORE;
++		SHA512;
++		STORE_get_certificate;
++		ECDSA_do_sign_ex;
++		ECDSA_do_verify;
++		d2i_ECPrivateKey_fp;
++		STORE_delete_certificate;
++		SHA512_Transform;
++		X509_STORE_set1_param;
++		STORE_method_get_ctrl_function;
++		STORE_free;
++		PEM_write_ECPrivateKey;
++		STORE_meth_get_unlock_store_fn;
++		STORE_method_get_unlock_store_function;
++		STORE_get_ex_data;
++		EC_KEY_set_public_key;
++		PEM_read_ECPKParameters;
++		X509_CERT_PAIR_new;
++		ENGINE_register_STORE;
++		RSA_generate_key_ex;
++		DSA_generate_parameters_ex;
++		ECParameters_print_fp;
++		X509V3_NAME_from_section;
++		EVP_PKEY_add1_attr;
++		STORE_modify_crl;
++		STORE_list_private_key_start;
++		POLICY_MAPPINGS_it;
++		GENERAL_SUBTREE_it;
++		EC_GROUP_get_curve_name;
++		PEM_write_X509_CERT_PAIR;
++		BIO_dump_indent_cb;
++		d2i_X509_CERT_PAIR;
++		STORE_list_private_key_endp;
++		asn1_const_Finish;
++		i2d_EC_PUBKEY_fp;
++		BN_nist_mod_256;
++		X509_VERIFY_PARAM_add0_table;
++		pqueue_free;
++		BN_BLINDING_create_param;
++		ECDSA_size;
++		d2i_EC_PUBKEY_bio;
++		BN_get0_nist_prime_521;
++		STORE_ATTR_INFO_modify_sha1str;
++		BN_generate_prime_ex;
++		EC_GROUP_new_by_curve_name;
++		SHA256_Final;
++		DH_generate_parameters_ex;
++		PEM_read_bio_ECPrivateKey;
++		STORE_meth_get_cleanup_fn;
++		STORE_method_get_cleanup_function;
++		ENGINE_get_ECDH;
++		d2i_ECDSA_SIG;
++		BN_is_prime_fasttest_ex;
++		ECDSA_sign;
++		X509_policy_check;
++		EVP_PKEY_get_attr_by_NID;
++		STORE_set_ex_data;
++		ENGINE_get_ECDSA;
++		EVP_ecdsa;
++		BN_BLINDING_get_flags;
++		PKCS12_add_cert;
++		STORE_OBJECT_new;
++		ERR_load_ECDH_strings;
++		EC_KEY_dup;
++		EVP_CIPHER_CTX_rand_key;
++		ECDSA_set_method;
++		a2i_IPADDRESS_NC;
++		d2i_ECParameters;
++		STORE_list_certificate_end;
++		STORE_get_crl;
++		X509_POLICY_NODE_print;
++		SHA384_Init;
++		EC_GF2m_simple_method;
++		ECDSA_set_ex_data;
++		SHA384_Final;
++		PKCS7_set_digest;
++		EC_KEY_print;
++		STORE_meth_set_lock_store_fn;
++		STORE_method_set_lock_store_function;
++		ECDSA_get_ex_new_index;
++		SHA384;
++		POLICY_MAPPING_new;
++		STORE_list_certificate_endp;
++		X509_STORE_CTX_get0_policy_tree;
++		EC_GROUP_set_asn1_flag;
++		EC_KEY_check_key;
++		d2i_EC_PUBKEY_fp;
++		PKCS7_set0_type_other;
++		PEM_read_bio_X509_CERT_PAIR;
++		pqueue_next;
++		STORE_meth_get_list_end_fn;
++		STORE_method_get_list_end_function;
++		EVP_PKEY_add1_attr_by_OBJ;
++		X509_VERIFY_PARAM_set_time;
++		pqueue_new;
++		ENGINE_set_default_ECDH;
++		STORE_new_method;
++		PKCS12_add_key;
++		DSO_merge;
++		EC_POINT_hex2point;
++		BIO_dump_cb;
++		SHA256_Update;
++		pqueue_insert;
++		pitem_free;
++		BN_GF2m_mod_inv_arr;
++		ENGINE_unregister_ECDSA;
++		BN_BLINDING_set_thread_id;
++		get_rfc3526_prime_8192;
++		X509_VERIFY_PARAM_clear_flags;
++		get_rfc2409_prime_1024;
++		DH_check_pub_key;
++		get_rfc3526_prime_2048;
++		get_rfc3526_prime_6144;
++		get_rfc3526_prime_1536;
++		get_rfc3526_prime_3072;
++		get_rfc3526_prime_4096;
++		get_rfc2409_prime_768;
++		X509_VERIFY_PARAM_get_flags;
++		EVP_CIPHER_CTX_new;
++		EVP_CIPHER_CTX_free;
++		Camellia_cbc_encrypt;
++		Camellia_cfb128_encrypt;
++		Camellia_cfb1_encrypt;
++		Camellia_cfb8_encrypt;
++		Camellia_ctr128_encrypt;
++		Camellia_cfbr_encrypt_block;
++		Camellia_decrypt;
++		Camellia_ecb_encrypt;
++		Camellia_encrypt;
++		Camellia_ofb128_encrypt;
++		Camellia_set_key;
++		EVP_camellia_128_cbc;
++		EVP_camellia_128_cfb128;
++		EVP_camellia_128_cfb1;
++		EVP_camellia_128_cfb8;
++		EVP_camellia_128_ecb;
++		EVP_camellia_128_ofb;
++		EVP_camellia_192_cbc;
++		EVP_camellia_192_cfb128;
++		EVP_camellia_192_cfb1;
++		EVP_camellia_192_cfb8;
++		EVP_camellia_192_ecb;
++		EVP_camellia_192_ofb;
++		EVP_camellia_256_cbc;
++		EVP_camellia_256_cfb128;
++		EVP_camellia_256_cfb1;
++		EVP_camellia_256_cfb8;
++		EVP_camellia_256_ecb;
++		EVP_camellia_256_ofb;
++		a2i_ipadd;
++		ASIdentifiers_free;
++		i2d_ASIdOrRange;
++		EVP_CIPHER_block_size;
++		v3_asid_is_canonical;
++		IPAddressChoice_free;
++		EVP_CIPHER_CTX_set_app_data;
++		BIO_set_callback_arg;
++		v3_addr_add_prefix;
++		IPAddressOrRange_it;
++		BIO_set_flags;
++		ASIdentifiers_it;
++		v3_addr_get_range;
++		BIO_method_type;
++		v3_addr_inherits;
++		IPAddressChoice_it;
++		AES_ige_encrypt;
++		v3_addr_add_range;
++		EVP_CIPHER_CTX_nid;
++		d2i_ASRange;
++		v3_addr_add_inherit;
++		v3_asid_add_id_or_range;
++		v3_addr_validate_resource_set;
++		EVP_CIPHER_iv_length;
++		EVP_MD_type;
++		v3_asid_canonize;
++		IPAddressRange_free;
++		v3_asid_add_inherit;
++		EVP_CIPHER_CTX_key_length;
++		IPAddressRange_new;
++		ASIdOrRange_new;
++		EVP_MD_size;
++		EVP_MD_CTX_test_flags;
++		BIO_clear_flags;
++		i2d_ASRange;
++		IPAddressRange_it;
++		IPAddressChoice_new;
++		ASIdentifierChoice_new;
++		ASRange_free;
++		EVP_MD_pkey_type;
++		EVP_MD_CTX_clear_flags;
++		IPAddressFamily_free;
++		i2d_IPAddressFamily;
++		IPAddressOrRange_new;
++		EVP_CIPHER_flags;
++		v3_asid_validate_resource_set;
++		d2i_IPAddressRange;
++		AES_bi_ige_encrypt;
++		BIO_get_callback;
++		IPAddressOrRange_free;
++		v3_addr_subset;
++		d2i_IPAddressFamily;
++		v3_asid_subset;
++		BIO_test_flags;
++		i2d_ASIdentifierChoice;
++		ASRange_it;
++		d2i_ASIdentifiers;
++		ASRange_new;
++		d2i_IPAddressChoice;
++		v3_addr_get_afi;
++		EVP_CIPHER_key_length;
++		EVP_Cipher;
++		i2d_IPAddressOrRange;
++		ASIdOrRange_it;
++		EVP_CIPHER_nid;
++		i2d_IPAddressChoice;
++		EVP_CIPHER_CTX_block_size;
++		ASIdentifiers_new;
++		v3_addr_validate_path;
++		IPAddressFamily_new;
++		EVP_MD_CTX_set_flags;
++		v3_addr_is_canonical;
++		i2d_IPAddressRange;
++		IPAddressFamily_it;
++		v3_asid_inherits;
++		EVP_CIPHER_CTX_cipher;
++		EVP_CIPHER_CTX_get_app_data;
++		EVP_MD_block_size;
++		EVP_CIPHER_CTX_flags;
++		v3_asid_validate_path;
++		d2i_IPAddressOrRange;
++		v3_addr_canonize;
++		ASIdentifierChoice_it;
++		EVP_MD_CTX_md;
++		d2i_ASIdentifierChoice;
++		BIO_method_name;
++		EVP_CIPHER_CTX_iv_length;
++		ASIdOrRange_free;
++		ASIdentifierChoice_free;
++		BIO_get_callback_arg;
++		BIO_set_callback;
++		d2i_ASIdOrRange;
++		i2d_ASIdentifiers;
++		SEED_decrypt;
++		SEED_encrypt;
++		SEED_cbc_encrypt;
++		EVP_seed_ofb;
++		SEED_cfb128_encrypt;
++		SEED_ofb128_encrypt;
++		EVP_seed_cbc;
++		SEED_ecb_encrypt;
++		EVP_seed_ecb;
++		SEED_set_key;
++		EVP_seed_cfb128;
++		X509_EXTENSIONS_it;
++		X509_get1_ocsp;
++		OCSP_REQ_CTX_free;
++		i2d_X509_EXTENSIONS;
++		OCSP_sendreq_nbio;
++		OCSP_sendreq_new;
++		d2i_X509_EXTENSIONS;
++		X509_ALGORS_it;
++		X509_ALGOR_get0;
++		X509_ALGOR_set0;
++		AES_unwrap_key;
++		AES_wrap_key;
++		X509at_get0_data_by_OBJ;
++		ASN1_TYPE_set1;
++		ASN1_STRING_set0;
++		i2d_X509_ALGORS;
++		BIO_f_zlib;
++		COMP_zlib_cleanup;
++		d2i_X509_ALGORS;
++		CMS_ReceiptRequest_free;
++		PEM_write_CMS;
++		CMS_add0_CertificateChoices;
++		CMS_unsigned_add1_attr_by_OBJ;
++		ERR_load_CMS_strings;
++		CMS_sign_receipt;
++		i2d_CMS_ContentInfo;
++		CMS_signed_delete_attr;
++		d2i_CMS_bio;
++		CMS_unsigned_get_attr_by_NID;
++		CMS_verify;
++		SMIME_read_CMS;
++		CMS_decrypt_set1_key;
++		CMS_SignerInfo_get0_algs;
++		CMS_add1_cert;
++		CMS_set_detached;
++		CMS_encrypt;
++		CMS_EnvelopedData_create;
++		CMS_uncompress;
++		CMS_add0_crl;
++		CMS_SignerInfo_verify_content;
++		CMS_unsigned_get0_data_by_OBJ;
++		PEM_write_bio_CMS;
++		CMS_unsigned_get_attr;
++		CMS_RecipientInfo_ktri_cert_cmp;
++		CMS_RecipientInfo_ktri_get0_algs;
++		CMS_RecipInfo_ktri_get0_algs;
++		CMS_ContentInfo_free;
++		CMS_final;
++		CMS_add_simple_smimecap;
++		CMS_SignerInfo_verify;
++		CMS_data;
++		CMS_ContentInfo_it;
++		d2i_CMS_ReceiptRequest;
++		CMS_compress;
++		CMS_digest_create;
++		CMS_SignerInfo_cert_cmp;
++		CMS_SignerInfo_sign;
++		CMS_data_create;
++		i2d_CMS_bio;
++		CMS_EncryptedData_set1_key;
++		CMS_decrypt;
++		int_smime_write_ASN1;
++		CMS_unsigned_delete_attr;
++		CMS_unsigned_get_attr_count;
++		CMS_add_smimecap;
++		PEM_read_CMS;
++		CMS_signed_get_attr_by_OBJ;
++		d2i_CMS_ContentInfo;
++		CMS_add_standard_smimecap;
++		CMS_ContentInfo_new;
++		CMS_RecipientInfo_type;
++		CMS_get0_type;
++		CMS_is_detached;
++		CMS_sign;
++		CMS_signed_add1_attr;
++		CMS_unsigned_get_attr_by_OBJ;
++		SMIME_write_CMS;
++		CMS_EncryptedData_decrypt;
++		CMS_get0_RecipientInfos;
++		CMS_add0_RevocationInfoChoice;
++		CMS_decrypt_set1_pkey;
++		CMS_SignerInfo_set1_signer_cert;
++		CMS_get0_signers;
++		CMS_ReceiptRequest_get0_values;
++		CMS_signed_get0_data_by_OBJ;
++		CMS_get0_SignerInfos;
++		CMS_add0_cert;
++		CMS_EncryptedData_encrypt;
++		CMS_digest_verify;
++		CMS_set1_signers_certs;
++		CMS_signed_get_attr;
++		CMS_RecipientInfo_set0_key;
++		CMS_SignedData_init;
++		CMS_RecipientInfo_kekri_get0_id;
++		CMS_verify_receipt;
++		CMS_ReceiptRequest_it;
++		PEM_read_bio_CMS;
++		CMS_get1_crls;
++		CMS_add0_recipient_key;
++		SMIME_read_ASN1;
++		CMS_ReceiptRequest_new;
++		CMS_get0_content;
++		CMS_get1_ReceiptRequest;
++		CMS_signed_add1_attr_by_OBJ;
++		CMS_RecipientInfo_kekri_id_cmp;
++		CMS_add1_ReceiptRequest;
++		CMS_SignerInfo_get0_signer_id;
++		CMS_unsigned_add1_attr_by_NID;
++		CMS_unsigned_add1_attr;
++		CMS_signed_get_attr_by_NID;
++		CMS_get1_certs;
++		CMS_signed_add1_attr_by_NID;
++		CMS_unsigned_add1_attr_by_txt;
++		CMS_dataFinal;
++		CMS_RecipientInfo_ktri_get0_signer_id;
++		CMS_RecipInfo_ktri_get0_sigr_id;
++		i2d_CMS_ReceiptRequest;
++		CMS_add1_recipient_cert;
++		CMS_dataInit;
++		CMS_signed_add1_attr_by_txt;
++		CMS_RecipientInfo_decrypt;
++		CMS_signed_get_attr_count;
++		CMS_get0_eContentType;
++		CMS_set1_eContentType;
++		CMS_ReceiptRequest_create0;
++		CMS_add1_signer;
++		CMS_RecipientInfo_set0_pkey;
++		ENGINE_set_load_ssl_client_cert_function;
++		ENGINE_set_ld_ssl_clnt_cert_fn;
++		ENGINE_get_ssl_client_cert_function;
++		ENGINE_get_ssl_client_cert_fn;
++		ENGINE_load_ssl_client_cert;
++		ENGINE_load_capi;
++		OPENSSL_isservice;
++		FIPS_dsa_sig_decode;
++		EVP_CIPHER_CTX_clear_flags;
++		FIPS_rand_status;
++		FIPS_rand_set_key;
++		CRYPTO_set_mem_info_functions;
++		RSA_X931_generate_key_ex;
++		int_ERR_set_state_func;
++		int_EVP_MD_set_engine_callbacks;
++		int_CRYPTO_set_do_dynlock_callback;
++		FIPS_rng_stick;
++		EVP_CIPHER_CTX_set_flags;
++		BN_X931_generate_prime_ex;
++		FIPS_selftest_check;
++		FIPS_rand_set_dt;
++		CRYPTO_dbg_pop_info;
++		FIPS_dsa_free;
++		RSA_X931_derive_ex;
++		FIPS_rsa_new;
++		FIPS_rand_bytes;
++		fips_cipher_test;
++		EVP_CIPHER_CTX_test_flags;
++		CRYPTO_malloc_debug_init;
++		CRYPTO_dbg_push_info;
++		FIPS_corrupt_rsa_keygen;
++		FIPS_dh_new;
++		FIPS_corrupt_dsa_keygen;
++		FIPS_dh_free;
++		fips_pkey_signature_test;
++		EVP_add_alg_module;
++		int_RAND_init_engine_callbacks;
++		int_EVP_CIPHER_set_engine_callbacks;
++		int_EVP_MD_init_engine_callbacks;
++		FIPS_rand_test_mode;
++		FIPS_rand_reset;
++		FIPS_dsa_new;
++		int_RAND_set_callbacks;
++		BN_X931_derive_prime_ex;
++		int_ERR_lib_init;
++		int_EVP_CIPHER_init_engine_callbacks;
++		FIPS_rsa_free;
++		FIPS_dsa_sig_encode;
++		CRYPTO_dbg_remove_all_info;
++		OPENSSL_init;
++		CRYPTO_strdup;
++		JPAKE_STEP3A_process;
++		JPAKE_STEP1_release;
++		JPAKE_get_shared_key;
++		JPAKE_STEP3B_init;
++		JPAKE_STEP1_generate;
++		JPAKE_STEP1_init;
++		JPAKE_STEP3B_process;
++		JPAKE_STEP2_generate;
++		JPAKE_CTX_new;
++		JPAKE_CTX_free;
++		JPAKE_STEP3B_release;
++		JPAKE_STEP3A_release;
++		JPAKE_STEP2_process;
++		JPAKE_STEP3B_generate;
++		JPAKE_STEP1_process;
++		JPAKE_STEP3A_generate;
++		JPAKE_STEP2_release;
++		JPAKE_STEP3A_init;
++		ERR_load_JPAKE_strings;
++		JPAKE_STEP2_init;
++		pqueue_size;
++		i2d_TS_ACCURACY;
++		i2d_TS_MSG_IMPRINT_fp;
++		i2d_TS_MSG_IMPRINT;
++		EVP_PKEY_print_public;
++		EVP_PKEY_CTX_new;
++		i2d_TS_TST_INFO;
++		EVP_PKEY_asn1_find;
++		DSO_METHOD_beos;
++		TS_CONF_load_cert;
++		TS_REQ_get_ext;
++		EVP_PKEY_sign_init;
++		ASN1_item_print;
++		TS_TST_INFO_set_nonce;
++		TS_RESP_dup;
++		ENGINE_register_pkey_meths;
++		EVP_PKEY_asn1_add0;
++		PKCS7_add0_attrib_signing_time;
++		i2d_TS_TST_INFO_fp;
++		BIO_asn1_get_prefix;
++		TS_TST_INFO_set_time;
++		EVP_PKEY_meth_set_decrypt;
++		EVP_PKEY_set_type_str;
++		EVP_PKEY_CTX_get_keygen_info;
++		TS_REQ_set_policy_id;
++		d2i_TS_RESP_fp;
++		ENGINE_get_pkey_asn1_meth_engine;
++		ENGINE_get_pkey_asn1_meth_eng;
++		WHIRLPOOL_Init;
++		TS_RESP_set_status_info;
++		EVP_PKEY_keygen;
++		EVP_DigestSignInit;
++		TS_ACCURACY_set_millis;
++		TS_REQ_dup;
++		GENERAL_NAME_dup;
++		ASN1_SEQUENCE_ANY_it;
++		WHIRLPOOL;
++		X509_STORE_get1_crls;
++		ENGINE_get_pkey_asn1_meth;
++		EVP_PKEY_asn1_new;
++		BIO_new_NDEF;
++		ENGINE_get_pkey_meth;
++		TS_MSG_IMPRINT_set_algo;
++		i2d_TS_TST_INFO_bio;
++		TS_TST_INFO_set_ordering;
++		TS_TST_INFO_get_ext_by_OBJ;
++		CRYPTO_THREADID_set_pointer;
++		TS_CONF_get_tsa_section;
++		SMIME_write_ASN1;
++		TS_RESP_CTX_set_signer_key;
++		EVP_PKEY_encrypt_old;
++		EVP_PKEY_encrypt_init;
++		CRYPTO_THREADID_cpy;
++		ASN1_PCTX_get_cert_flags;
++		i2d_ESS_SIGNING_CERT;
++		TS_CONF_load_key;
++		i2d_ASN1_SEQUENCE_ANY;
++		d2i_TS_MSG_IMPRINT_bio;
++		EVP_PKEY_asn1_set_public;
++		b2i_PublicKey_bio;
++		BIO_asn1_set_prefix;
++		EVP_PKEY_new_mac_key;
++		BIO_new_CMS;
++		CRYPTO_THREADID_cmp;
++		TS_REQ_ext_free;
++		EVP_PKEY_asn1_set_free;
++		EVP_PKEY_get0_asn1;
++		d2i_NETSCAPE_X509;
++		EVP_PKEY_verify_recover_init;
++		EVP_PKEY_CTX_set_data;
++		EVP_PKEY_keygen_init;
++		TS_RESP_CTX_set_status_info;
++		TS_MSG_IMPRINT_get_algo;
++		TS_REQ_print_bio;
++		EVP_PKEY_CTX_ctrl_str;
++		EVP_PKEY_get_default_digest_nid;
++		PEM_write_bio_PKCS7_stream;
++		TS_MSG_IMPRINT_print_bio;
++		BN_asc2bn;
++		TS_REQ_get_policy_id;
++		ENGINE_set_default_pkey_asn1_meths;
++		ENGINE_set_def_pkey_asn1_meths;
++		d2i_TS_ACCURACY;
++		DSO_global_lookup;
++		TS_CONF_set_tsa_name;
++		i2d_ASN1_SET_ANY;
++		ENGINE_load_gost;
++		WHIRLPOOL_BitUpdate;
++		ASN1_PCTX_get_flags;
++		TS_TST_INFO_get_ext_by_NID;
++		TS_RESP_new;
++		ESS_CERT_ID_dup;
++		TS_STATUS_INFO_dup;
++		TS_REQ_delete_ext;
++		EVP_DigestVerifyFinal;
++		EVP_PKEY_print_params;
++		i2d_CMS_bio_stream;
++		TS_REQ_get_msg_imprint;
++		OBJ_find_sigid_by_algs;
++		TS_TST_INFO_get_serial;
++		TS_REQ_get_nonce;
++		X509_PUBKEY_set0_param;
++		EVP_PKEY_CTX_set0_keygen_info;
++		DIST_POINT_set_dpname;
++		i2d_ISSUING_DIST_POINT;
++		ASN1_SET_ANY_it;
++		EVP_PKEY_CTX_get_data;
++		TS_STATUS_INFO_print_bio;
++		EVP_PKEY_derive_init;
++		d2i_TS_TST_INFO;
++		EVP_PKEY_asn1_add_alias;
++		d2i_TS_RESP_bio;
++		OTHERNAME_cmp;
++		GENERAL_NAME_set0_value;
++		PKCS7_RECIP_INFO_get0_alg;
++		TS_RESP_CTX_new;
++		TS_RESP_set_tst_info;
++		PKCS7_final;
++		EVP_PKEY_base_id;
++		TS_RESP_CTX_set_signer_cert;
++		TS_REQ_set_msg_imprint;
++		EVP_PKEY_CTX_ctrl;
++		TS_CONF_set_digests;
++		d2i_TS_MSG_IMPRINT;
++		EVP_PKEY_meth_set_ctrl;
++		TS_REQ_get_ext_by_NID;
++		PKCS5_pbe_set0_algor;
++		BN_BLINDING_thread_id;
++		TS_ACCURACY_new;
++		X509_CRL_METHOD_free;
++		ASN1_PCTX_get_nm_flags;
++		EVP_PKEY_meth_set_sign;
++		CRYPTO_THREADID_current;
++		EVP_PKEY_decrypt_init;
++		NETSCAPE_X509_free;
++		i2b_PVK_bio;
++		EVP_PKEY_print_private;
++		GENERAL_NAME_get0_value;
++		b2i_PVK_bio;
++		ASN1_UTCTIME_adj;
++		TS_TST_INFO_new;
++		EVP_MD_do_all_sorted;
++		TS_CONF_set_default_engine;
++		TS_ACCURACY_set_seconds;
++		TS_TST_INFO_get_time;
++		PKCS8_pkey_get0;
++		EVP_PKEY_asn1_get0;
++		OBJ_add_sigid;
++		PKCS7_SIGNER_INFO_sign;
++		EVP_PKEY_paramgen_init;
++		EVP_PKEY_sign;
++		OBJ_sigid_free;
++		EVP_PKEY_meth_set_init;
++		d2i_ESS_ISSUER_SERIAL;
++		ISSUING_DIST_POINT_new;
++		ASN1_TIME_adj;
++		TS_OBJ_print_bio;
++		EVP_PKEY_meth_set_verify_recover;
++		EVP_PKEY_meth_set_vrfy_recover;
++		TS_RESP_get_status_info;
++		CMS_stream;
++		EVP_PKEY_CTX_set_cb;
++		PKCS7_to_TS_TST_INFO;
++		ASN1_PCTX_get_oid_flags;
++		TS_TST_INFO_add_ext;
++		EVP_PKEY_meth_set_derive;
++		i2d_TS_RESP_fp;
++		i2d_TS_MSG_IMPRINT_bio;
++		TS_RESP_CTX_set_accuracy;
++		TS_REQ_set_nonce;
++		ESS_CERT_ID_new;
++		ENGINE_pkey_asn1_find_str;
++		TS_REQ_get_ext_count;
++		BUF_reverse;
++		TS_TST_INFO_print_bio;
++		d2i_ISSUING_DIST_POINT;
++		ENGINE_get_pkey_meths;
++		i2b_PrivateKey_bio;
++		i2d_TS_RESP;
++		b2i_PublicKey;
++		TS_VERIFY_CTX_cleanup;
++		TS_STATUS_INFO_free;
++		TS_RESP_verify_token;
++		OBJ_bsearch_ex_;
++		ASN1_bn_print;
++		EVP_PKEY_asn1_get_count;
++		ENGINE_register_pkey_asn1_meths;
++		ASN1_PCTX_set_nm_flags;
++		EVP_DigestVerifyInit;
++		ENGINE_set_default_pkey_meths;
++		TS_TST_INFO_get_policy_id;
++		TS_REQ_get_cert_req;
++		X509_CRL_set_meth_data;
++		PKCS8_pkey_set0;
++		ASN1_STRING_copy;
++		d2i_TS_TST_INFO_fp;
++		X509_CRL_match;
++		EVP_PKEY_asn1_set_private;
++		TS_TST_INFO_get_ext_d2i;
++		TS_RESP_CTX_add_policy;
++		d2i_TS_RESP;
++		TS_CONF_load_certs;
++		TS_TST_INFO_get_msg_imprint;
++		ERR_load_TS_strings;
++		TS_TST_INFO_get_version;
++		EVP_PKEY_CTX_dup;
++		EVP_PKEY_meth_set_verify;
++		i2b_PublicKey_bio;
++		TS_CONF_set_certs;
++		EVP_PKEY_asn1_get0_info;
++		TS_VERIFY_CTX_free;
++		TS_REQ_get_ext_by_critical;
++		TS_RESP_CTX_set_serial_cb;
++		X509_CRL_get_meth_data;
++		TS_RESP_CTX_set_time_cb;
++		TS_MSG_IMPRINT_get_msg;
++		TS_TST_INFO_ext_free;
++		TS_REQ_get_version;
++		TS_REQ_add_ext;
++		EVP_PKEY_CTX_set_app_data;
++		OBJ_bsearch_;
++		EVP_PKEY_meth_set_verifyctx;
++		i2d_PKCS7_bio_stream;
++		CRYPTO_THREADID_set_numeric;
++		PKCS7_sign_add_signer;
++		d2i_TS_TST_INFO_bio;
++		TS_TST_INFO_get_ordering;
++		TS_RESP_print_bio;
++		TS_TST_INFO_get_exts;
++		HMAC_CTX_copy;
++		PKCS5_pbe2_set_iv;
++		ENGINE_get_pkey_asn1_meths;
++		b2i_PrivateKey;
++		EVP_PKEY_CTX_get_app_data;
++		TS_REQ_set_cert_req;
++		CRYPTO_THREADID_set_callback;
++		TS_CONF_set_serial;
++		TS_TST_INFO_free;
++		d2i_TS_REQ_fp;
++		TS_RESP_verify_response;
++		i2d_ESS_ISSUER_SERIAL;
++		TS_ACCURACY_get_seconds;
++		EVP_CIPHER_do_all;
++		b2i_PrivateKey_bio;
++		OCSP_CERTID_dup;
++		X509_PUBKEY_get0_param;
++		TS_MSG_IMPRINT_dup;
++		PKCS7_print_ctx;
++		i2d_TS_REQ_bio;
++		EVP_whirlpool;
++		EVP_PKEY_asn1_set_param;
++		EVP_PKEY_meth_set_encrypt;
++		ASN1_PCTX_set_flags;
++		i2d_ESS_CERT_ID;
++		TS_VERIFY_CTX_new;
++		TS_RESP_CTX_set_extension_cb;
++		ENGINE_register_all_pkey_meths;
++		TS_RESP_CTX_set_status_info_cond;
++		TS_RESP_CTX_set_stat_info_cond;
++		EVP_PKEY_verify;
++		WHIRLPOOL_Final;
++		X509_CRL_METHOD_new;
++		EVP_DigestSignFinal;
++		TS_RESP_CTX_set_def_policy;
++		NETSCAPE_X509_it;
++		TS_RESP_create_response;
++		PKCS7_SIGNER_INFO_get0_algs;
++		TS_TST_INFO_get_nonce;
++		EVP_PKEY_decrypt_old;
++		TS_TST_INFO_set_policy_id;
++		TS_CONF_set_ess_cert_id_chain;
++		EVP_PKEY_CTX_get0_pkey;
++		d2i_TS_REQ;
++		EVP_PKEY_asn1_find_str;
++		BIO_f_asn1;
++		ESS_SIGNING_CERT_new;
++		EVP_PBE_find;
++		X509_CRL_get0_by_cert;
++		EVP_PKEY_derive;
++		i2d_TS_REQ;
++		TS_TST_INFO_delete_ext;
++		ESS_ISSUER_SERIAL_free;
++		ASN1_PCTX_set_str_flags;
++		ENGINE_get_pkey_asn1_meth_str;
++		TS_CONF_set_signer_key;
++		TS_ACCURACY_get_millis;
++		TS_RESP_get_token;
++		TS_ACCURACY_dup;
++		ENGINE_register_all_pkey_asn1_meths;
++		ENGINE_reg_all_pkey_asn1_meths;
++		X509_CRL_set_default_method;
++		CRYPTO_THREADID_hash;
++		CMS_ContentInfo_print_ctx;
++		TS_RESP_free;
++		ISSUING_DIST_POINT_free;
++		ESS_ISSUER_SERIAL_new;
++		CMS_add1_crl;
++		PKCS7_add1_attrib_digest;
++		TS_RESP_CTX_add_md;
++		TS_TST_INFO_dup;
++		ENGINE_set_pkey_asn1_meths;
++		PEM_write_bio_Parameters;
++		TS_TST_INFO_get_accuracy;
++		X509_CRL_get0_by_serial;
++		TS_TST_INFO_set_version;
++		TS_RESP_CTX_get_tst_info;
++		TS_RESP_verify_signature;
++		CRYPTO_THREADID_get_callback;
++		TS_TST_INFO_get_tsa;
++		TS_STATUS_INFO_new;
++		EVP_PKEY_CTX_get_cb;
++		TS_REQ_get_ext_d2i;
++		GENERAL_NAME_set0_othername;
++		TS_TST_INFO_get_ext_count;
++		TS_RESP_CTX_get_request;
++		i2d_NETSCAPE_X509;
++		ENGINE_get_pkey_meth_engine;
++		EVP_PKEY_meth_set_signctx;
++		EVP_PKEY_asn1_copy;
++		ASN1_TYPE_cmp;
++		EVP_CIPHER_do_all_sorted;
++		EVP_PKEY_CTX_free;
++		ISSUING_DIST_POINT_it;
++		d2i_TS_MSG_IMPRINT_fp;
++		X509_STORE_get1_certs;
++		EVP_PKEY_CTX_get_operation;
++		d2i_ESS_SIGNING_CERT;
++		TS_CONF_set_ordering;
++		EVP_PBE_alg_add_type;
++		TS_REQ_set_version;
++		EVP_PKEY_get0;
++		BIO_asn1_set_suffix;
++		i2d_TS_STATUS_INFO;
++		EVP_MD_do_all;
++		TS_TST_INFO_set_accuracy;
++		PKCS7_add_attrib_content_type;
++		ERR_remove_thread_state;
++		EVP_PKEY_meth_add0;
++		TS_TST_INFO_set_tsa;
++		EVP_PKEY_meth_new;
++		WHIRLPOOL_Update;
++		TS_CONF_set_accuracy;
++		ASN1_PCTX_set_oid_flags;
++		ESS_SIGNING_CERT_dup;
++		d2i_TS_REQ_bio;
++		X509_time_adj_ex;
++		TS_RESP_CTX_add_flags;
++		d2i_TS_STATUS_INFO;
++		TS_MSG_IMPRINT_set_msg;
++		BIO_asn1_get_suffix;
++		TS_REQ_free;
++		EVP_PKEY_meth_free;
++		TS_REQ_get_exts;
++		TS_RESP_CTX_set_clock_precision_digits;
++		TS_RESP_CTX_set_clk_prec_digits;
++		TS_RESP_CTX_add_failure_info;
++		i2d_TS_RESP_bio;
++		EVP_PKEY_CTX_get0_peerkey;
++		PEM_write_bio_CMS_stream;
++		TS_REQ_new;
++		TS_MSG_IMPRINT_new;
++		EVP_PKEY_meth_find;
++		EVP_PKEY_id;
++		TS_TST_INFO_set_serial;
++		a2i_GENERAL_NAME;
++		TS_CONF_set_crypto_device;
++		EVP_PKEY_verify_init;
++		TS_CONF_set_policies;
++		ASN1_PCTX_new;
++		ESS_CERT_ID_free;
++		ENGINE_unregister_pkey_meths;
++		TS_MSG_IMPRINT_free;
++		TS_VERIFY_CTX_init;
++		PKCS7_stream;
++		TS_RESP_CTX_set_certs;
++		TS_CONF_set_def_policy;
++		ASN1_GENERALIZEDTIME_adj;
++		NETSCAPE_X509_new;
++		TS_ACCURACY_free;
++		TS_RESP_get_tst_info;
++		EVP_PKEY_derive_set_peer;
++		PEM_read_bio_Parameters;
++		TS_CONF_set_clock_precision_digits;
++		TS_CONF_set_clk_prec_digits;
++		ESS_ISSUER_SERIAL_dup;
++		TS_ACCURACY_get_micros;
++		ASN1_PCTX_get_str_flags;
++		NAME_CONSTRAINTS_check;
++		ASN1_BIT_STRING_check;
++		X509_check_akid;
++		ENGINE_unregister_pkey_asn1_meths;
++		ENGINE_unreg_pkey_asn1_meths;
++		ASN1_PCTX_free;
++		PEM_write_bio_ASN1_stream;
++		i2d_ASN1_bio_stream;
++		TS_X509_ALGOR_print_bio;
++		EVP_PKEY_meth_set_cleanup;
++		EVP_PKEY_asn1_free;
++		ESS_SIGNING_CERT_free;
++		TS_TST_INFO_set_msg_imprint;
++		GENERAL_NAME_cmp;
++		d2i_ASN1_SET_ANY;
++		ENGINE_set_pkey_meths;
++		i2d_TS_REQ_fp;
++		d2i_ASN1_SEQUENCE_ANY;
++		GENERAL_NAME_get0_otherName;
++		d2i_ESS_CERT_ID;
++		OBJ_find_sigid_algs;
++		EVP_PKEY_meth_set_keygen;
++		PKCS5_PBKDF2_HMAC;
++		EVP_PKEY_paramgen;
++		EVP_PKEY_meth_set_paramgen;
++		BIO_new_PKCS7;
++		EVP_PKEY_verify_recover;
++		TS_ext_print_bio;
++		TS_ASN1_INTEGER_print_bio;
++		check_defer;
++		DSO_pathbyaddr;
++		EVP_PKEY_set_type;
++		TS_ACCURACY_set_micros;
++		TS_REQ_to_TS_VERIFY_CTX;
++		EVP_PKEY_meth_set_copy;
++		ASN1_PCTX_set_cert_flags;
++		TS_TST_INFO_get_ext;
++		EVP_PKEY_asn1_set_ctrl;
++		TS_TST_INFO_get_ext_by_critical;
++		EVP_PKEY_CTX_new_id;
++		TS_REQ_get_ext_by_OBJ;
++		TS_CONF_set_signer_cert;
++		X509_NAME_hash_old;
++		ASN1_TIME_set_string;
++		EVP_MD_flags;
++		TS_RESP_CTX_free;
++		DSAparams_dup;
++		DHparams_dup;
++		OCSP_REQ_CTX_add1_header;
++		OCSP_REQ_CTX_set1_req;
++		X509_STORE_set_verify_cb;
++		X509_STORE_CTX_get0_current_crl;
++		X509_STORE_CTX_get0_parent_ctx;
++		X509_STORE_CTX_get0_current_issuer;
++		X509_STORE_CTX_get0_cur_issuer;
++		X509_issuer_name_hash_old;
++		X509_subject_name_hash_old;
++		EVP_CIPHER_CTX_copy;
++		UI_method_get_prompt_constructor;
++		UI_method_get_prompt_constructr;
++		UI_method_set_prompt_constructor;
++		UI_method_set_prompt_constructr;
++		EVP_read_pw_string_min;
++		CRYPTO_cts128_encrypt;
++		CRYPTO_cts128_decrypt_block;
++		CRYPTO_cfb128_1_encrypt;
++		CRYPTO_cbc128_encrypt;
++		CRYPTO_ctr128_encrypt;
++		CRYPTO_ofb128_encrypt;
++		CRYPTO_cts128_decrypt;
++		CRYPTO_cts128_encrypt_block;
++		CRYPTO_cbc128_decrypt;
++		CRYPTO_cfb128_encrypt;
++		CRYPTO_cfb128_8_encrypt;
++
++	local:
++		*;
++};
++
++
++OPENSSL_1.0.1 {
++	global:
++		SSL_renegotiate_abbreviated;
++		TLSv1_1_method;
++		TLSv1_1_client_method;
++		TLSv1_1_server_method;
++		SSL_CTX_set_srp_client_pwd_callback;
++		SSL_CTX_set_srp_client_pwd_cb;
++		SSL_get_srp_g;
++		SSL_CTX_set_srp_username_callback;
++		SSL_CTX_set_srp_un_cb;
++		SSL_get_srp_userinfo;
++		SSL_set_srp_server_param;
++		SSL_set_srp_server_param_pw;
++		SSL_get_srp_N;
++		SSL_get_srp_username;
++		SSL_CTX_set_srp_password;
++		SSL_CTX_set_srp_strength;
++		SSL_CTX_set_srp_verify_param_callback;
++		SSL_CTX_set_srp_vfy_param_cb;
++		SSL_CTX_set_srp_cb_arg;
++		SSL_CTX_set_srp_username;
++		SSL_CTX_SRP_CTX_init;
++		SSL_SRP_CTX_init;
++		SRP_Calc_A_param;
++		SRP_generate_server_master_secret;
++		SRP_gen_server_master_secret;
++		SSL_CTX_SRP_CTX_free;
++		SRP_generate_client_master_secret;
++		SRP_gen_client_master_secret;
++		SSL_srp_server_param_with_username;
++		SSL_srp_server_param_with_un;
++		SSL_SRP_CTX_free;
++		SSL_set_debug;
++		SSL_SESSION_get0_peer;
++		TLSv1_2_client_method;
++		SSL_SESSION_set1_id_context;
++		TLSv1_2_server_method;
++		SSL_cache_hit;
++		SSL_get0_kssl_ctx;
++		SSL_set0_kssl_ctx;
++		SSL_set_state;
++		SSL_CIPHER_get_id;
++		TLSv1_2_method;
++		kssl_ctx_get0_client_princ;
++		SSL_export_keying_material;
++		SSL_set_tlsext_use_srtp;
++		SSL_CTX_set_next_protos_advertised_cb;
++		SSL_CTX_set_next_protos_adv_cb;
++		SSL_get0_next_proto_negotiated;
++		SSL_get_selected_srtp_profile;
++		SSL_CTX_set_tlsext_use_srtp;
++		SSL_select_next_proto;
++		SSL_get_srtp_profiles;
++		SSL_CTX_set_next_proto_select_cb;
++		SSL_CTX_set_next_proto_sel_cb;
++		SSL_SESSION_get_compress_id;
++
++		SRP_VBASE_get_by_user;
++		SRP_Calc_server_key;
++		SRP_create_verifier;
++		SRP_create_verifier_BN;
++		SRP_Calc_u;
++		SRP_VBASE_free;
++		SRP_Calc_client_key;
++		SRP_get_default_gN;
++		SRP_Calc_x;
++		SRP_Calc_B;
++		SRP_VBASE_new;
++		SRP_check_known_gN_param;
++		SRP_Calc_A;
++		SRP_Verify_A_mod_N;
++		SRP_VBASE_init;
++		SRP_Verify_B_mod_N;
++		EC_KEY_set_public_key_affine_coordinates;
++		EC_KEY_set_pub_key_aff_coords;
++		EVP_aes_192_ctr;
++		EVP_PKEY_meth_get0_info;
++		EVP_PKEY_meth_copy;
++		ERR_add_error_vdata;
++		EVP_aes_128_ctr;
++		EVP_aes_256_ctr;
++		EC_GFp_nistp224_method;
++		EC_KEY_get_flags;
++		RSA_padding_add_PKCS1_PSS_mgf1;
++		EVP_aes_128_xts;
++		EVP_aes_256_xts;
++		EVP_aes_128_gcm;
++		EC_KEY_clear_flags;
++		EC_KEY_set_flags;
++		EVP_aes_256_ccm;
++		RSA_verify_PKCS1_PSS_mgf1;
++		EVP_aes_128_ccm;
++		EVP_aes_192_gcm;
++		X509_ALGOR_set_md;
++		RAND_init_fips;
++		EVP_aes_256_gcm;
++		EVP_aes_192_ccm;
++		CMAC_CTX_copy;
++		CMAC_CTX_free;
++		CMAC_CTX_get0_cipher_ctx;
++		CMAC_CTX_cleanup;
++		CMAC_Init;
++		CMAC_Update;
++		CMAC_resume;
++		CMAC_CTX_new;
++		CMAC_Final;
++		CRYPTO_ctr128_encrypt_ctr32;
++		CRYPTO_gcm128_release;
++		CRYPTO_ccm128_decrypt_ccm64;
++		CRYPTO_ccm128_encrypt;
++		CRYPTO_gcm128_encrypt;
++		CRYPTO_xts128_encrypt;
++		EVP_rc4_hmac_md5;
++		CRYPTO_nistcts128_decrypt_block;
++		CRYPTO_gcm128_setiv;
++		CRYPTO_nistcts128_encrypt;
++		EVP_aes_128_cbc_hmac_sha1;
++		CRYPTO_gcm128_tag;
++		CRYPTO_ccm128_encrypt_ccm64;
++		ENGINE_load_rdrand;
++		CRYPTO_ccm128_setiv;
++		CRYPTO_nistcts128_encrypt_block;
++		CRYPTO_gcm128_aad;
++		CRYPTO_ccm128_init;
++		CRYPTO_nistcts128_decrypt;
++		CRYPTO_gcm128_new;
++		CRYPTO_ccm128_tag;
++		CRYPTO_ccm128_decrypt;
++		CRYPTO_ccm128_aad;
++		CRYPTO_gcm128_init;
++		CRYPTO_gcm128_decrypt;
++		ENGINE_load_rsax;
++		CRYPTO_gcm128_decrypt_ctr32;
++		CRYPTO_gcm128_encrypt_ctr32;
++		CRYPTO_gcm128_finish;
++		EVP_aes_256_cbc_hmac_sha1;
++		PKCS5_pbkdf2_set;
++		CMS_add0_recipient_password;
++		CMS_decrypt_set1_password;
++		CMS_RecipientInfo_set0_password;
++		RAND_set_fips_drbg_type;
++		X509_REQ_sign_ctx;
++		RSA_PSS_PARAMS_new;
++		X509_CRL_sign_ctx;
++		X509_signature_dump;
++		d2i_RSA_PSS_PARAMS;
++		RSA_PSS_PARAMS_it;
++		RSA_PSS_PARAMS_free;
++		X509_sign_ctx;
++		i2d_RSA_PSS_PARAMS;
++		ASN1_item_sign_ctx;
++		EC_GFp_nistp521_method;
++		EC_GFp_nistp256_method;
++		OPENSSL_stderr;
++		OPENSSL_cpuid_setup;
++		OPENSSL_showfatal;
++		BIO_new_dgram_sctp;
++		BIO_dgram_sctp_msg_waiting;
++		BIO_dgram_sctp_wait_for_dry;
++		BIO_s_datagram_sctp;
++		BIO_dgram_is_sctp;
++		BIO_dgram_sctp_notification_cb;
++} OPENSSL_1.0.0;
++
++OPENSSL_1.0.1d {
++	global:
++		CRYPTO_memcmp;
++} OPENSSL_1.0.1;
++
++OPENSSL_1.0.2 {
++	global:
++		SSL_CTX_set_alpn_protos;
++		SSL_set_alpn_protos;
++		SSL_CTX_set_alpn_select_cb;
++		SSL_get0_alpn_selected;
++		SSL_CTX_set_custom_cli_ext;
++		SSL_CTX_set_custom_srv_ext;
++		SSL_CTX_set_srv_supp_data;
++		SSL_CTX_set_cli_supp_data;
++		SSL_set_cert_cb;
++		SSL_CTX_use_serverinfo;
++		SSL_CTX_use_serverinfo_file;
++		SSL_CTX_set_cert_cb;
++		SSL_CTX_get0_param;
++		SSL_get0_param;
++		SSL_certs_clear;
++		DTLSv1_2_method;
++		DTLSv1_2_server_method;
++		DTLSv1_2_client_method;
++		DTLS_method;
++		DTLS_server_method;
++		DTLS_client_method;
++		SSL_CTX_get_ssl_method;
++		SSL_CTX_get0_certificate;
++		SSL_CTX_get0_privatekey;
++		SSL_COMP_set0_compression_methods;
++		SSL_COMP_free_compression_methods;
++		SSL_CIPHER_find;
++		SSL_is_server;
++		SSL_CONF_CTX_new;
++		SSL_CONF_CTX_finish;
++		SSL_CONF_CTX_free;
++		SSL_CONF_CTX_set_flags;
++		SSL_CONF_CTX_clear_flags;
++		SSL_CONF_CTX_set1_prefix;
++		SSL_CONF_CTX_set_ssl;
++		SSL_CONF_CTX_set_ssl_ctx;
++		SSL_CONF_cmd;
++		SSL_CONF_cmd_argv;
++		SSL_CONF_cmd_value_type;
++		SSL_trace;
++		SSL_CIPHER_standard_name;
++		SSL_get_tlsa_record_byname;
++		ASN1_TIME_diff;
++		BIO_hex_string;
++		CMS_RecipientInfo_get0_pkey_ctx;
++		CMS_RecipientInfo_encrypt;
++		CMS_SignerInfo_get0_pkey_ctx;
++		CMS_SignerInfo_get0_md_ctx;
++		CMS_SignerInfo_get0_signature;
++		CMS_RecipientInfo_kari_get0_alg;
++		CMS_RecipientInfo_kari_get0_reks;
++		CMS_RecipientInfo_kari_get0_orig_id;
++		CMS_RecipientInfo_kari_orig_id_cmp;
++		CMS_RecipientEncryptedKey_get0_id;
++		CMS_RecipientEncryptedKey_cert_cmp;
++		CMS_RecipientInfo_kari_set0_pkey;
++		CMS_RecipientInfo_kari_get0_ctx;
++		CMS_RecipientInfo_kari_decrypt;
++		CMS_SharedInfo_encode;
++		DH_compute_key_padded;
++		d2i_DHxparams;
++		i2d_DHxparams;
++		DH_get_1024_160;
++		DH_get_2048_224;
++		DH_get_2048_256;
++		DH_KDF_X9_42;
++		ECDH_KDF_X9_62;
++		ECDSA_METHOD_new;
++		ECDSA_METHOD_free;
++		ECDSA_METHOD_set_app_data;
++		ECDSA_METHOD_get_app_data;
++		ECDSA_METHOD_set_sign;
++		ECDSA_METHOD_set_sign_setup;
++		ECDSA_METHOD_set_verify;
++		ECDSA_METHOD_set_flags;
++		ECDSA_METHOD_set_name;
++		EVP_des_ede3_wrap;
++		EVP_aes_128_wrap;
++		EVP_aes_192_wrap;
++		EVP_aes_256_wrap;
++		EVP_aes_128_cbc_hmac_sha256;
++		EVP_aes_256_cbc_hmac_sha256;
++		CRYPTO_128_wrap;
++		CRYPTO_128_unwrap;
++		OCSP_REQ_CTX_nbio;
++		OCSP_REQ_CTX_new;
++		OCSP_set_max_response_length;
++		OCSP_REQ_CTX_i2d;
++		OCSP_REQ_CTX_nbio_d2i;
++		OCSP_REQ_CTX_get0_mem_bio;
++		OCSP_REQ_CTX_http;
++		RSA_padding_add_PKCS1_OAEP_mgf1;
++		RSA_padding_check_PKCS1_OAEP_mgf1;
++		RSA_OAEP_PARAMS_free;
++		RSA_OAEP_PARAMS_it;
++		RSA_OAEP_PARAMS_new;
++		SSL_get_sigalgs;
++		SSL_get_shared_sigalgs;
++		SSL_check_chain;
++		X509_chain_up_ref;
++		X509_http_nbio;
++		X509_CRL_http_nbio;
++		X509_REVOKED_dup;
++		i2d_re_X509_tbs;
++		X509_get0_signature;
++		X509_get_signature_nid;
++		X509_CRL_diff;
++		X509_chain_check_suiteb;
++		X509_CRL_check_suiteb;
++		X509_check_host;
++		X509_check_email;
++		X509_check_ip;
++		X509_check_ip_asc;
++		X509_STORE_set_lookup_crls_cb;
++		X509_STORE_CTX_get0_store;
++		X509_VERIFY_PARAM_set1_host;
++		X509_VERIFY_PARAM_add1_host;
++		X509_VERIFY_PARAM_set_hostflags;
++		X509_VERIFY_PARAM_get0_peername;
++		X509_VERIFY_PARAM_set1_email;
++		X509_VERIFY_PARAM_set1_ip;
++		X509_VERIFY_PARAM_set1_ip_asc;
++		X509_VERIFY_PARAM_get0_name;
++		X509_VERIFY_PARAM_get_count;
++		X509_VERIFY_PARAM_get0;
++		X509V3_EXT_free;
++		EC_GROUP_get_mont_data;
++		EC_curve_nid2nist;
++		EC_curve_nist2nid;
++		PEM_write_bio_DHxparams;
++		PEM_write_DHxparams;
++		SSL_CTX_add_client_custom_ext;
++		SSL_CTX_add_server_custom_ext;
++		SSL_extension_supported;
++		BUF_strnlen;
++		sk_deep_copy;
++		SSL_test_functions;
++} OPENSSL_1.0.1d;
++
+Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/openssl.ld
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/openssl.ld	2014-02-24 21:02:30.000000000 +0100
+@@ -0,0 +1,10 @@
++OPENSSL_1.0.0 {
++	global:
++		bind_engine;
++		v_check;
++		OPENSSL_init;
++		OPENSSL_finish;
++	local:
++		*;
++};
++
+Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/ccgost/openssl.ld
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/ccgost/openssl.ld	2014-02-24 21:02:30.000000000 +0100
+@@ -0,0 +1,10 @@
++OPENSSL_1.0.0 {
++	global:
++		bind_engine;
++		v_check;
++		OPENSSL_init;
++		OPENSSL_finish;
++	local:
++		*;
++};
++
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/0001-Fix-build-with-clang-using-external-assembler.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/0001-Fix-build-with-clang-using-external-assembler.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/0001-Fix-build-with-clang-using-external-assembler.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/0001-Fix-build-with-clang-using-external-assembler.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/0001-openssl-force-soft-link-to-avoid-rare-race.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/0001-openssl-force-soft-link-to-avoid-rare-race.patch
new file mode 100644
index 0000000..dd1a9b1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/0001-openssl-force-soft-link-to-avoid-rare-race.patch
@@ -0,0 +1,46 @@
+From 3d9199423d48766649a2b2ebb3924e892ed16fa4 Mon Sep 17 00:00:00 2001
+From: Randy MacLeod <Randy.MacLeod@windriver.com>
+Date: Tue, 20 Jun 2017 15:32:08 -0400
+Subject: [PATCH] openssl: Force soft link to avoid rare race
+
+This patch works around a rare parallel build race condition. 
+The error seen is:
+
+ln: failed to create symbolic link 'libssl.so': File exists
+make[4]: *** [Makefile.shared:171: link_a.gnu] Error 1
+make[4]: Leaving directory
+'/.../build/tmp-glibc/work/x86_64-linux/openssl-native/1.0.2k-r0/openssl-1.0.2k'
+
+The openssl team is rewriting their build files, so this change is not
+appropriate for openssl upstream; fixing the root cause of the Makefile
+race condition was also not pursued.
+
+Upstream-Status: Inappropriate [build rules rewrite in progress]
+Signed-off-by: Randy MacLeod <Randy.MacLeod@windriver.com>
+---
+ Makefile.shared | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/Makefile.shared b/Makefile.shared
+index e8d222a..1bff92f 100644
+--- a/Makefile.shared
++++ b/Makefile.shared
+@@ -118,14 +118,14 @@
+ 		if [ -n "$$SHLIB_COMPAT" ]; then \
+ 			for x in $$SHLIB_COMPAT; do \
+ 				( $(SET_X); rm -f $$SHLIB$$x$$SHLIB_SUFFIX; \
+-				  ln -s $$prev $$SHLIB$$x$$SHLIB_SUFFIX ); \
++				  ln -sf $$prev $$SHLIB$$x$$SHLIB_SUFFIX ); \
+ 				prev=$$SHLIB$$x$$SHLIB_SUFFIX; \
+ 			done; \
+ 		fi; \
+ 		if [ -n "$$SHLIB_SOVER" ]; then \
+ 			[ -e "$$SHLIB$$SHLIB_SUFFIX" ] || \
+ 			( $(SET_X); rm -f $$SHLIB$$SHLIB_SUFFIX; \
+-			  ln -s $$prev $$SHLIB$$SHLIB_SUFFIX ); \
++			  ln -sf $$prev $$SHLIB$$SHLIB_SUFFIX ); \
+ 		fi; \
+ 	fi
+ 
+-- 
+2.9.3
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/Makefiles-ptest.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/Makefiles-ptest.patch
new file mode 100644
index 0000000..2122fa1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/Makefiles-ptest.patch
@@ -0,0 +1,93 @@
+From a176c69f4fdfbfa7e4ccb79d91c3b6602da7e69a Mon Sep 17 00:00:00 2001
+From: Anders Roxell <anders.roxell@enea.com>
+Date: Thu, 24 Apr 2014 19:28:25 +0200
+Subject: [PATCH 19/28] openssl: enable ptest support
+
+Add 'buildtest' and 'runtest' targets to Makefile, to build and run tests
+cross-compiled.
+
+Signed-off-by: Anders Roxell <anders.roxell@enea.com>
+Signed-off-by: Maxin B. John <maxin.john@enea.com>
+Upstream-Status: Pending
+
+---
+ Makefile.org       |  10 +-
+ Makefile.org.orig  |   7 +-
+ test/Makefile      |  13 +-
+ test/Makefile.orig | 987 +++++++++++++++++++++++++++++++++++++++++++++++++++++
+ 4 files changed, 1009 insertions(+), 8 deletions(-)
+ create mode 100644 test/Makefile.orig
+
+diff --git a/Makefile.org b/Makefile.org
+index 111fbba..8e7936c 100644
+--- a/Makefile.org
++++ b/Makefile.org
+@@ -468,8 +468,16 @@ rehash.time: certs apps
+ test:   tests
+ 
+ tests: rehash
++	$(MAKE) buildtest
++	$(MAKE) runtest
++
++buildtest:
++	@(cd test && \
++	$(CLEARENV) && $(MAKE) -e $(BUILDENV) TOP=.. TESTS='$(TESTS)' OPENSSL_DEBUG_MEMORY=on OPENSSL_CONF=../apps/openssl.cnf exe apps);
++
++runtest:
+ 	@(cd test && echo "testing..." && \
+-	$(CLEARENV) && $(MAKE) -e $(BUILDENV) TOP=.. TESTS='$(TESTS)' OPENSSL_DEBUG_MEMORY=on OPENSSL_CONF=../apps/openssl.cnf tests );
++	$(CLEARENV) && $(MAKE) -e $(BUILDENV) TOP=.. TESTS='$(TESTS)' OPENSSL_DEBUG_MEMORY=on OPENSSL_CONF=../apps/openssl.cnf alltests );
+ 	OPENSSL_CONF=apps/openssl.cnf util/opensslwrap.sh version -a
+ 
+ report:
+diff --git a/test/Makefile b/test/Makefile
+index a1f7eeb..b2984c4 100644
+--- a/test/Makefile
++++ b/test/Makefile
+@@ -150,7 +150,7 @@ tests:	exe apps $(TESTS)
+ apps:
+ 	@(cd ..; $(MAKE) DIRS=apps all)
+ 
+-alltests: \
++all-tests= \
+ 	test_des test_idea test_sha test_md4 test_md5 test_hmac \
+ 	test_md2 test_mdc2 test_wp \
+ 	test_rmd test_rc2 test_rc4 test_rc5 test_bf test_cast test_aes \
+@@ -162,6 +162,11 @@ alltests: \
+ 	test_constant_time test_verify_extra test_clienthello test_sslv2conftest \
+ 	test_dtls test_bad_dtls test_fatalerr
+ 
++alltests:
++	@(for i in $(all-tests); do \
++	( $(MAKE) $$i && echo "PASS: $$i" ) || echo "FAIL: $$i"; \
++	done)
++
+ test_evp: $(EVPTEST)$(EXE_EXT) evptests.txt
+ 	../util/shlib_wrap.sh ./$(EVPTEST) evptests.txt
+ 
+@@ -230,7 +235,7 @@ test_x509: ../apps/openssl$(EXE_EXT) tx509 testx509.pem v3-cert1.pem v3-cert2.pe
+ 	echo test second x509v3 certificate
+ 	sh ./tx509 v3-cert2.pem 2>/dev/null
+ 
+-test_rsa: $(RSATEST)$(EXE_EXT) ../apps/openssl$(EXE_EXT) trsa testrsa.pem
++test_rsa: ../apps/openssl$(EXE_EXT) trsa testrsa.pem
+ 	@sh ./trsa 2>/dev/null
+ 	../util/shlib_wrap.sh ./$(RSATEST)
+ 
+@@ -331,11 +336,11 @@ test_tsa: ../apps/openssl$(EXE_EXT) testtsa CAtsa.cnf ../util/shlib_wrap.sh
+ 	  sh ./testtsa; \
+ 	fi
+ 
+-test_ige: $(IGETEST)$(EXE_EXT)
++test_ige:
+ 	@echo "Test IGE mode"
+ 	../util/shlib_wrap.sh ./$(IGETEST)
+ 
+-test_jpake: $(JPAKETEST)$(EXE_EXT)
++test_jpake:
+ 	@echo "Test JPAKE"
+ 	../util/shlib_wrap.sh ./$(JPAKETEST)
+ 
+-- 
+2.15.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/Use-SHA256-not-MD5-as-default-digest.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/Use-SHA256-not-MD5-as-default-digest.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/Use-SHA256-not-MD5-as-default-digest.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/Use-SHA256-not-MD5-as-default-digest.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/configure-musl-target.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/configure-musl-target.patch
new file mode 100644
index 0000000..f357b3f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/configure-musl-target.patch
@@ -0,0 +1,25 @@
+Add musl triplet support
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Index: openssl-1.0.2a/Configure
+===================================================================
+--- openssl-1.0.2a.orig/Configure
++++ openssl-1.0.2a/Configure
+@@ -431,7 +431,7 @@ my %table=(
+ #
+ #       ./Configure linux-armv4 -march=armv6 -D__ARM_MAX_ARCH__=8
+ #
+-"linux-armv4",	"gcc: -O3 -Wall::-D_REENTRANT::-ldl:BN_LLONG RC4_CHAR RC4_CHUNK DES_INT DES_UNROLL BF_PTR:${armv4_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
++"linux-armv4", "gcc: -O3 -Wall::-D_REENTRANT::-ldl:BN_LLONG RC4_CHAR RC4_CHUNK DES_INT DES_UNROLL BF_PTR:${armv4_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
+ "linux-aarch64","gcc: -O3 -Wall::-D_REENTRANT::-ldl:SIXTY_FOUR_BIT_LONG RC4_CHAR RC4_CHUNK DES_INT DES_UNROLL BF_PTR:${aarch64_asm}:linux64:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
+ # Configure script adds minimally required -march for assembly support,
+ # if no -march was specified at command line. mips32 and mips64 below
+@@ -504,4 +504,6 @@ my %table=(
+ "linux-gnueabi-armeb","$ENV{'CC'}:-DB_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
++"linux-musleabi-arm","$ENV{'CC'}:-DL_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
++"linux-musleabi-armeb","$ENV{'CC'}:-DB_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
+ 
+ "linux-avr32","$ENV{'CC'}:-O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).",
+ 
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/configure-targets.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/configure-targets.patch
new file mode 100644
index 0000000..1e01589
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/configure-targets.patch
@@ -0,0 +1,35 @@
+Upstream-Status: Inappropriate [embedded specific]
+
+The number of colons is important :)
+
+
+---
+ Configure |   16 ++++++++++++++++
+ 1 file changed, 16 insertions(+)
+
+Index: openssl-1.0.2a/Configure
+===================================================================
+--- openssl-1.0.2a.orig/Configure
++++ openssl-1.0.2a/Configure
+@@ -443,6 +443,21 @@ my %table=(
+ "linux-alpha-ccc","ccc:-fast -readonly_strings -DL_ENDIAN::-D_REENTRANT:::SIXTY_FOUR_BIT_LONG RC4_CHUNK DES_INT DES_PTR DES_RISC1 DES_UNROLL:${alpha_asm}",
+ "linux-alpha+bwx-ccc","ccc:-fast -readonly_strings -DL_ENDIAN::-D_REENTRANT:::SIXTY_FOUR_BIT_LONG RC4_CHAR RC4_CHUNK DES_INT DES_PTR DES_RISC1 DES_UNROLL:${alpha_asm}",
+ 
++ 
++# Linux on ARM
++"linux-elf-arm","$ENV{'CC'}:-DL_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
++"linux-elf-armeb","$ENV{'CC'}:-DB_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
++"linux-gnueabi-arm","$ENV{'CC'}:-DL_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
++"linux-gnueabi-armeb","$ENV{'CC'}:-DB_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
++
++"linux-avr32","$ENV{'CC'}:-O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).",
++
++#### Linux on MIPS/MIPS64
++"linux-mips","$ENV{'CC'}:-DB_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG RC2_CHAR RC4_INDEX DES_INT DES_UNROLL DES_RISC2:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
++"linux-mips64","$ENV{'CC'}:-DB_ENDIAN -mabi=64 -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:SIXTY_FOUR_BIT_LONG RC2_CHAR RC4_INDEX DES_INT DES_UNROLL DES_RISC2:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
++"linux-mips64el","$ENV{'CC'}:-DL_ENDIAN -mabi=64 -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:SIXTY_FOUR_BIT_LONG RC2_CHAR RC4_INDEX DES_INT DES_UNROLL DES_RISC2:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
++"linux-mipsel","$ENV{'CC'}:-DL_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG RC2_CHAR RC4_INDEX DES_INT DES_UNROLL DES_RISC2:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
++
+ # Android: linux-* but without pointers to headers and libs.
+ "android","gcc:-mandroid -I\$(ANDROID_DEV)/include -B\$(ANDROID_DEV)/lib -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG RC4_CHAR RC4_CHUNK DES_INT DES_UNROLL BF_PTR:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
+ "android-x86","gcc:-mandroid -I\$(ANDROID_DEV)/include -B\$(ANDROID_DEV)/lib -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG ${x86_gcc_des} ${x86_gcc_opts}:".eval{my $asm=${x86_elf_asm};$asm=~s/:elf/:android/;$asm}.":dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/c_rehash-compat.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/c_rehash-compat.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/c_rehash-compat.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/c_rehash-compat.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/ca.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/ca.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/ca.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/ca.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/debian-targets.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/debian-targets.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/debian-targets.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/debian-targets.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/man-dir.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/man-dir.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/man-dir.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/man-dir.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/man-section.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/man-section.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/man-section.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/man-section.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/no-rpath.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/no-rpath.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/no-rpath.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/no-rpath.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/no-symbolic.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/no-symbolic.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/no-symbolic.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/no-symbolic.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/pic.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/pic.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/pic.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian/pic.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian1.0.2/block_digicert_malaysia.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian1.0.2/block_digicert_malaysia.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian1.0.2/block_digicert_malaysia.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian1.0.2/block_digicert_malaysia.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian1.0.2/block_diginotar.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian1.0.2/block_diginotar.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian1.0.2/block_diginotar.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian1.0.2/block_diginotar.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian1.0.2/soname.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian1.0.2/soname.patch
new file mode 100644
index 0000000..09dd9ea
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian1.0.2/soname.patch
@@ -0,0 +1,15 @@
+Upstream-Status: Inappropriate
+
+Index: openssl-1.0.2d/crypto/opensslv.h
+===================================================================
+--- openssl-1.0.2d.orig/crypto/opensslv.h
++++ openssl-1.0.2d/crypto/opensslv.h
+@@ -88,7 +88,7 @@ extern "C" {
+  * should only keep the versions that are binary compatible with the current.
+  */
+ # define SHLIB_VERSION_HISTORY ""
+-# define SHLIB_VERSION_NUMBER "1.0.0"
++# define SHLIB_VERSION_NUMBER "1.0.2"
+ 
+ 
+ #ifdef  __cplusplus
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian1.0.2/version-script.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian1.0.2/version-script.patch
new file mode 100644
index 0000000..e404ee3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/debian1.0.2/version-script.patch
@@ -0,0 +1,4658 @@
+Upstream-Status: Inappropriate
+
+Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/Configure
+===================================================================
+--- openssl-1.0.2~beta1.obsolete.0.0498436515490575.orig/Configure	2014-02-24 21:02:30.000000000 +0100
++++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/Configure	2014-02-24 21:02:30.000000000 +0100
+@@ -1651,6 +1651,8 @@
+ 		}
+ 	}
+ 
++$shared_ldflag .= " -Wl,--version-script=openssl.ld";
++
+ open(IN,'<Makefile.org') || die "unable to read Makefile.org:$!\n";
+ unlink("$Makefile.new") || die "unable to remove old $Makefile.new:$!\n" if -e "$Makefile.new";
+ open(OUT,">$Makefile.new") || die "unable to create $Makefile.new:$!\n";
+Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/openssl.ld
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/openssl.ld	2014-02-24 22:19:08.601827266 +0100
+@@ -0,0 +1,4608 @@
++OPENSSL_1.0.2d {
++	global:
++		BIO_f_ssl;
++		BIO_new_buffer_ssl_connect;
++		BIO_new_ssl;
++		BIO_new_ssl_connect;
++		BIO_proxy_ssl_copy_session_id;
++		BIO_ssl_copy_session_id;
++		BIO_ssl_shutdown;
++		d2i_SSL_SESSION;
++		DTLSv1_client_method;
++		DTLSv1_method;
++		DTLSv1_server_method;
++		ERR_load_SSL_strings;
++		i2d_SSL_SESSION;
++		kssl_build_principal_2;
++		kssl_cget_tkt;
++		kssl_check_authent;
++		kssl_ctx_free;
++		kssl_ctx_new;
++		kssl_ctx_setkey;
++		kssl_ctx_setprinc;
++		kssl_ctx_setstring;
++		kssl_ctx_show;
++		kssl_err_set;
++		kssl_krb5_free_data_contents;
++		kssl_sget_tkt;
++		kssl_skip_confound;
++		kssl_validate_times;
++		PEM_read_bio_SSL_SESSION;
++		PEM_read_SSL_SESSION;
++		PEM_write_bio_SSL_SESSION;
++		PEM_write_SSL_SESSION;
++		SSL_accept;
++		SSL_add_client_CA;
++		SSL_add_dir_cert_subjects_to_stack;
++		SSL_add_dir_cert_subjs_to_stk;
++		SSL_add_file_cert_subjects_to_stack;
++		SSL_add_file_cert_subjs_to_stk;
++		SSL_alert_desc_string;
++		SSL_alert_desc_string_long;
++		SSL_alert_type_string;
++		SSL_alert_type_string_long;
++		SSL_callback_ctrl;
++		SSL_check_private_key;
++		SSL_CIPHER_description;
++		SSL_CIPHER_get_bits;
++		SSL_CIPHER_get_name;
++		SSL_CIPHER_get_version;
++		SSL_clear;
++		SSL_COMP_add_compression_method;
++		SSL_COMP_get_compression_methods;
++		SSL_COMP_get_compress_methods;
++		SSL_COMP_get_name;
++		SSL_connect;
++		SSL_copy_session_id;
++		SSL_ctrl;
++		SSL_CTX_add_client_CA;
++		SSL_CTX_add_session;
++		SSL_CTX_callback_ctrl;
++		SSL_CTX_check_private_key;
++		SSL_CTX_ctrl;
++		SSL_CTX_flush_sessions;
++		SSL_CTX_free;
++		SSL_CTX_get_cert_store;
++		SSL_CTX_get_client_CA_list;
++		SSL_CTX_get_client_cert_cb;
++		SSL_CTX_get_ex_data;
++		SSL_CTX_get_ex_new_index;
++		SSL_CTX_get_info_callback;
++		SSL_CTX_get_quiet_shutdown;
++		SSL_CTX_get_timeout;
++		SSL_CTX_get_verify_callback;
++		SSL_CTX_get_verify_depth;
++		SSL_CTX_get_verify_mode;
++		SSL_CTX_load_verify_locations;
++		SSL_CTX_new;
++		SSL_CTX_remove_session;
++		SSL_CTX_sess_get_get_cb;
++		SSL_CTX_sess_get_new_cb;
++		SSL_CTX_sess_get_remove_cb;
++		SSL_CTX_sessions;
++		SSL_CTX_sess_set_get_cb;
++		SSL_CTX_sess_set_new_cb;
++		SSL_CTX_sess_set_remove_cb;
++		SSL_CTX_set1_param;
++		SSL_CTX_set_cert_store;
++		SSL_CTX_set_cert_verify_callback;
++		SSL_CTX_set_cert_verify_cb;
++		SSL_CTX_set_cipher_list;
++		SSL_CTX_set_client_CA_list;
++		SSL_CTX_set_client_cert_cb;
++		SSL_CTX_set_client_cert_engine;
++		SSL_CTX_set_cookie_generate_cb;
++		SSL_CTX_set_cookie_verify_cb;
++		SSL_CTX_set_default_passwd_cb;
++		SSL_CTX_set_default_passwd_cb_userdata;
++		SSL_CTX_set_default_verify_paths;
++		SSL_CTX_set_def_passwd_cb_ud;
++		SSL_CTX_set_def_verify_paths;
++		SSL_CTX_set_ex_data;
++		SSL_CTX_set_generate_session_id;
++		SSL_CTX_set_info_callback;
++		SSL_CTX_set_msg_callback;
++		SSL_CTX_set_psk_client_callback;
++		SSL_CTX_set_psk_server_callback;
++		SSL_CTX_set_purpose;
++		SSL_CTX_set_quiet_shutdown;
++		SSL_CTX_set_session_id_context;
++		SSL_CTX_set_ssl_version;
++		SSL_CTX_set_timeout;
++		SSL_CTX_set_tmp_dh_callback;
++		SSL_CTX_set_tmp_ecdh_callback;
++		SSL_CTX_set_tmp_rsa_callback;
++		SSL_CTX_set_trust;
++		SSL_CTX_set_verify;
++		SSL_CTX_set_verify_depth;
++		SSL_CTX_use_cert_chain_file;
++		SSL_CTX_use_certificate;
++		SSL_CTX_use_certificate_ASN1;
++		SSL_CTX_use_certificate_chain_file;
++		SSL_CTX_use_certificate_file;
++		SSL_CTX_use_PrivateKey;
++		SSL_CTX_use_PrivateKey_ASN1;
++		SSL_CTX_use_PrivateKey_file;
++		SSL_CTX_use_psk_identity_hint;
++		SSL_CTX_use_RSAPrivateKey;
++		SSL_CTX_use_RSAPrivateKey_ASN1;
++		SSL_CTX_use_RSAPrivateKey_file;
++		SSL_do_handshake;
++		SSL_dup;
++		SSL_dup_CA_list;
++		SSLeay_add_ssl_algorithms;
++		SSL_free;
++		SSL_get1_session;
++		SSL_get_certificate;
++		SSL_get_cipher_list;
++		SSL_get_ciphers;
++		SSL_get_client_CA_list;
++		SSL_get_current_cipher;
++		SSL_get_current_compression;
++		SSL_get_current_expansion;
++		SSL_get_default_timeout;
++		SSL_get_error;
++		SSL_get_ex_data;
++		SSL_get_ex_data_X509_STORE_CTX_idx;
++		SSL_get_ex_d_X509_STORE_CTX_idx;
++		SSL_get_ex_new_index;
++		SSL_get_fd;
++		SSL_get_finished;
++		SSL_get_info_callback;
++		SSL_get_peer_cert_chain;
++		SSL_get_peer_certificate;
++		SSL_get_peer_finished;
++		SSL_get_privatekey;
++		SSL_get_psk_identity;
++		SSL_get_psk_identity_hint;
++		SSL_get_quiet_shutdown;
++		SSL_get_rbio;
++		SSL_get_read_ahead;
++		SSL_get_rfd;
++		SSL_get_servername;
++		SSL_get_servername_type;
++		SSL_get_session;
++		SSL_get_shared_ciphers;
++		SSL_get_shutdown;
++		SSL_get_SSL_CTX;
++		SSL_get_ssl_method;
++		SSL_get_verify_callback;
++		SSL_get_verify_depth;
++		SSL_get_verify_mode;
++		SSL_get_verify_result;
++		SSL_get_version;
++		SSL_get_wbio;
++		SSL_get_wfd;
++		SSL_has_matching_session_id;
++		SSL_library_init;
++		SSL_load_client_CA_file;
++		SSL_load_error_strings;
++		SSL_new;
++		SSL_peek;
++		SSL_pending;
++		SSL_read;
++		SSL_renegotiate;
++		SSL_renegotiate_pending;
++		SSL_rstate_string;
++		SSL_rstate_string_long;
++		SSL_SESSION_cmp;
++		SSL_SESSION_free;
++		SSL_SESSION_get_ex_data;
++		SSL_SESSION_get_ex_new_index;
++		SSL_SESSION_get_id;
++		SSL_SESSION_get_time;
++		SSL_SESSION_get_timeout;
++		SSL_SESSION_hash;
++		SSL_SESSION_new;
++		SSL_SESSION_print;
++		SSL_SESSION_print_fp;
++		SSL_SESSION_set_ex_data;
++		SSL_SESSION_set_time;
++		SSL_SESSION_set_timeout;
++		SSL_set1_param;
++		SSL_set_accept_state;
++		SSL_set_bio;
++		SSL_set_cipher_list;
++		SSL_set_client_CA_list;
++		SSL_set_connect_state;
++		SSL_set_ex_data;
++		SSL_set_fd;
++		SSL_set_generate_session_id;
++		SSL_set_info_callback;
++		SSL_set_msg_callback;
++		SSL_set_psk_client_callback;
++		SSL_set_psk_server_callback;
++		SSL_set_purpose;
++		SSL_set_quiet_shutdown;
++		SSL_set_read_ahead;
++		SSL_set_rfd;
++		SSL_set_session;
++		SSL_set_session_id_context;
++		SSL_set_session_secret_cb;
++		SSL_set_session_ticket_ext;
++		SSL_set_session_ticket_ext_cb;
++		SSL_set_shutdown;
++		SSL_set_SSL_CTX;
++		SSL_set_ssl_method;
++		SSL_set_tmp_dh_callback;
++		SSL_set_tmp_ecdh_callback;
++		SSL_set_tmp_rsa_callback;
++		SSL_set_trust;
++		SSL_set_verify;
++		SSL_set_verify_depth;
++		SSL_set_verify_result;
++		SSL_set_wfd;
++		SSL_shutdown;
++		SSL_state;
++		SSL_state_string;
++		SSL_state_string_long;
++		SSL_use_certificate;
++		SSL_use_certificate_ASN1;
++		SSL_use_certificate_file;
++		SSL_use_PrivateKey;
++		SSL_use_PrivateKey_ASN1;
++		SSL_use_PrivateKey_file;
++		SSL_use_psk_identity_hint;
++		SSL_use_RSAPrivateKey;
++		SSL_use_RSAPrivateKey_ASN1;
++		SSL_use_RSAPrivateKey_file;
++		SSLv23_client_method;
++		SSLv23_method;
++		SSLv23_server_method;
++		SSLv2_client_method;
++		SSLv2_method;
++		SSLv2_server_method;
++		SSLv3_client_method;
++		SSLv3_method;
++		SSLv3_server_method;
++		SSL_version;
++		SSL_want;
++		SSL_write;
++		TLSv1_client_method;
++		TLSv1_method;
++		TLSv1_server_method;
++
++
++		SSLeay;
++		SSLeay_version;
++		ASN1_BIT_STRING_asn1_meth;
++		ASN1_HEADER_free;
++		ASN1_HEADER_new;
++		ASN1_IA5STRING_asn1_meth;
++		ASN1_INTEGER_get;
++		ASN1_INTEGER_set;
++		ASN1_INTEGER_to_BN;
++		ASN1_OBJECT_create;
++		ASN1_OBJECT_free;
++		ASN1_OBJECT_new;
++		ASN1_PRINTABLE_type;
++		ASN1_STRING_cmp;
++		ASN1_STRING_dup;
++		ASN1_STRING_free;
++		ASN1_STRING_new;
++		ASN1_STRING_print;
++		ASN1_STRING_set;
++		ASN1_STRING_type_new;
++		ASN1_TYPE_free;
++		ASN1_TYPE_new;
++		ASN1_UNIVERSALSTRING_to_string;
++		ASN1_UTCTIME_check;
++		ASN1_UTCTIME_print;
++		ASN1_UTCTIME_set;
++		ASN1_check_infinite_end;
++		ASN1_d2i_bio;
++		ASN1_d2i_fp;
++		ASN1_digest;
++		ASN1_dup;
++		ASN1_get_object;
++		ASN1_i2d_bio;
++		ASN1_i2d_fp;
++		ASN1_object_size;
++		ASN1_parse;
++		ASN1_put_object;
++		ASN1_sign;
++		ASN1_verify;
++		BF_cbc_encrypt;
++		BF_cfb64_encrypt;
++		BF_ecb_encrypt;
++		BF_encrypt;
++		BF_ofb64_encrypt;
++		BF_options;
++		BF_set_key;
++		BIO_CONNECT_free;
++		BIO_CONNECT_new;
++		BIO_accept;
++		BIO_ctrl;
++		BIO_int_ctrl;
++		BIO_debug_callback;
++		BIO_dump;
++		BIO_dup_chain;
++		BIO_f_base64;
++		BIO_f_buffer;
++		BIO_f_cipher;
++		BIO_f_md;
++		BIO_f_null;
++		BIO_f_proxy_server;
++		BIO_fd_non_fatal_error;
++		BIO_fd_should_retry;
++		BIO_find_type;
++		BIO_free;
++		BIO_free_all;
++		BIO_get_accept_socket;
++		BIO_get_filter_bio;
++		BIO_get_host_ip;
++		BIO_get_port;
++		BIO_get_retry_BIO;
++		BIO_get_retry_reason;
++		BIO_gethostbyname;
++		BIO_gets;
++		BIO_new;
++		BIO_new_accept;
++		BIO_new_connect;
++		BIO_new_fd;
++		BIO_new_file;
++		BIO_new_fp;
++		BIO_new_socket;
++		BIO_pop;
++		BIO_printf;
++		BIO_push;
++		BIO_puts;
++		BIO_read;
++		BIO_s_accept;
++		BIO_s_connect;
++		BIO_s_fd;
++		BIO_s_file;
++		BIO_s_mem;
++		BIO_s_null;
++		BIO_s_proxy_client;
++		BIO_s_socket;
++		BIO_set;
++		BIO_set_cipher;
++		BIO_set_tcp_ndelay;
++		BIO_sock_cleanup;
++		BIO_sock_error;
++		BIO_sock_init;
++		BIO_sock_non_fatal_error;
++		BIO_sock_should_retry;
++		BIO_socket_ioctl;
++		BIO_write;
++		BN_CTX_free;
++		BN_CTX_new;
++		BN_MONT_CTX_free;
++		BN_MONT_CTX_new;
++		BN_MONT_CTX_set;
++		BN_add;
++		BN_add_word;
++		BN_hex2bn;
++		BN_bin2bn;
++		BN_bn2hex;
++		BN_bn2bin;
++		BN_clear;
++		BN_clear_bit;
++		BN_clear_free;
++		BN_cmp;
++		BN_copy;
++		BN_div;
++		BN_div_word;
++		BN_dup;
++		BN_free;
++		BN_from_montgomery;
++		BN_gcd;
++		BN_generate_prime;
++		BN_get_word;
++		BN_is_bit_set;
++		BN_is_prime;
++		BN_lshift;
++		BN_lshift1;
++		BN_mask_bits;
++		BN_mod;
++		BN_mod_exp;
++		BN_mod_exp_mont;
++		BN_mod_exp_simple;
++		BN_mod_inverse;
++		BN_mod_mul;
++		BN_mod_mul_montgomery;
++		BN_mod_word;
++		BN_mul;
++		BN_new;
++		BN_num_bits;
++		BN_num_bits_word;
++		BN_options;
++		BN_print;
++		BN_print_fp;
++		BN_rand;
++		BN_reciprocal;
++		BN_rshift;
++		BN_rshift1;
++		BN_set_bit;
++		BN_set_word;
++		BN_sqr;
++		BN_sub;
++		BN_to_ASN1_INTEGER;
++		BN_ucmp;
++		BN_value_one;
++		BUF_MEM_free;
++		BUF_MEM_grow;
++		BUF_MEM_new;
++		BUF_strdup;
++		CONF_free;
++		CONF_get_number;
++		CONF_get_section;
++		CONF_get_string;
++		CONF_load;
++		CRYPTO_add_lock;
++		CRYPTO_dbg_free;
++		CRYPTO_dbg_malloc;
++		CRYPTO_dbg_realloc;
++		CRYPTO_dbg_remalloc;
++		CRYPTO_free;
++		CRYPTO_get_add_lock_callback;
++		CRYPTO_get_id_callback;
++		CRYPTO_get_lock_name;
++		CRYPTO_get_locking_callback;
++		CRYPTO_get_mem_functions;
++		CRYPTO_lock;
++		CRYPTO_malloc;
++		CRYPTO_mem_ctrl;
++		CRYPTO_mem_leaks;
++		CRYPTO_mem_leaks_cb;
++		CRYPTO_mem_leaks_fp;
++		CRYPTO_realloc;
++		CRYPTO_remalloc;
++		CRYPTO_set_add_lock_callback;
++		CRYPTO_set_id_callback;
++		CRYPTO_set_locking_callback;
++		CRYPTO_set_mem_functions;
++		CRYPTO_thread_id;
++		DH_check;
++		DH_compute_key;
++		DH_free;
++		DH_generate_key;
++		DH_generate_parameters;
++		DH_new;
++		DH_size;
++		DHparams_print;
++		DHparams_print_fp;
++		DSA_free;
++		DSA_generate_key;
++		DSA_generate_parameters;
++		DSA_is_prime;
++		DSA_new;
++		DSA_print;
++		DSA_print_fp;
++		DSA_sign;
++		DSA_sign_setup;
++		DSA_size;
++		DSA_verify;
++		DSAparams_print;
++		DSAparams_print_fp;
++		ERR_clear_error;
++		ERR_error_string;
++		ERR_free_strings;
++		ERR_func_error_string;
++		ERR_get_err_state_table;
++		ERR_get_error;
++		ERR_get_error_line;
++		ERR_get_state;
++		ERR_get_string_table;
++		ERR_lib_error_string;
++		ERR_load_ASN1_strings;
++		ERR_load_BIO_strings;
++		ERR_load_BN_strings;
++		ERR_load_BUF_strings;
++		ERR_load_CONF_strings;
++		ERR_load_DH_strings;
++		ERR_load_DSA_strings;
++		ERR_load_ERR_strings;
++		ERR_load_EVP_strings;
++		ERR_load_OBJ_strings;
++		ERR_load_PEM_strings;
++		ERR_load_PROXY_strings;
++		ERR_load_RSA_strings;
++		ERR_load_X509_strings;
++		ERR_load_crypto_strings;
++		ERR_load_strings;
++		ERR_peek_error;
++		ERR_peek_error_line;
++		ERR_print_errors;
++		ERR_print_errors_fp;
++		ERR_put_error;
++		ERR_reason_error_string;
++		ERR_remove_state;
++		EVP_BytesToKey;
++		EVP_CIPHER_CTX_cleanup;
++		EVP_CipherFinal;
++		EVP_CipherInit;
++		EVP_CipherUpdate;
++		EVP_DecodeBlock;
++		EVP_DecodeFinal;
++		EVP_DecodeInit;
++		EVP_DecodeUpdate;
++		EVP_DecryptFinal;
++		EVP_DecryptInit;
++		EVP_DecryptUpdate;
++		EVP_DigestFinal;
++		EVP_DigestInit;
++		EVP_DigestUpdate;
++		EVP_EncodeBlock;
++		EVP_EncodeFinal;
++		EVP_EncodeInit;
++		EVP_EncodeUpdate;
++		EVP_EncryptFinal;
++		EVP_EncryptInit;
++		EVP_EncryptUpdate;
++		EVP_OpenFinal;
++		EVP_OpenInit;
++		EVP_PKEY_assign;
++		EVP_PKEY_copy_parameters;
++		EVP_PKEY_free;
++		EVP_PKEY_missing_parameters;
++		EVP_PKEY_new;
++		EVP_PKEY_save_parameters;
++		EVP_PKEY_size;
++		EVP_PKEY_type;
++		EVP_SealFinal;
++		EVP_SealInit;
++		EVP_SignFinal;
++		EVP_VerifyFinal;
++		EVP_add_alias;
++		EVP_add_cipher;
++		EVP_add_digest;
++		EVP_bf_cbc;
++		EVP_bf_cfb64;
++		EVP_bf_ecb;
++		EVP_bf_ofb;
++		EVP_cleanup;
++		EVP_des_cbc;
++		EVP_des_cfb64;
++		EVP_des_ecb;
++		EVP_des_ede;
++		EVP_des_ede3;
++		EVP_des_ede3_cbc;
++		EVP_des_ede3_cfb64;
++		EVP_des_ede3_ofb;
++		EVP_des_ede_cbc;
++		EVP_des_ede_cfb64;
++		EVP_des_ede_ofb;
++		EVP_des_ofb;
++		EVP_desx_cbc;
++		EVP_dss;
++		EVP_dss1;
++		EVP_enc_null;
++		EVP_get_cipherbyname;
++		EVP_get_digestbyname;
++		EVP_get_pw_prompt;
++		EVP_idea_cbc;
++		EVP_idea_cfb64;
++		EVP_idea_ecb;
++		EVP_idea_ofb;
++		EVP_md2;
++		EVP_md5;
++		EVP_md_null;
++		EVP_rc2_cbc;
++		EVP_rc2_cfb64;
++		EVP_rc2_ecb;
++		EVP_rc2_ofb;
++		EVP_rc4;
++		EVP_read_pw_string;
++		EVP_set_pw_prompt;
++		EVP_sha;
++		EVP_sha1;
++		MD2;
++		MD2_Final;
++		MD2_Init;
++		MD2_Update;
++		MD2_options;
++		MD5;
++		MD5_Final;
++		MD5_Init;
++		MD5_Update;
++		MDC2;
++		MDC2_Final;
++		MDC2_Init;
++		MDC2_Update;
++		NETSCAPE_SPKAC_free;
++		NETSCAPE_SPKAC_new;
++		NETSCAPE_SPKI_free;
++		NETSCAPE_SPKI_new;
++		NETSCAPE_SPKI_sign;
++		NETSCAPE_SPKI_verify;
++		OBJ_add_object;
++		OBJ_bsearch;
++		OBJ_cleanup;
++		OBJ_cmp;
++		OBJ_create;
++		OBJ_dup;
++		OBJ_ln2nid;
++		OBJ_new_nid;
++		OBJ_nid2ln;
++		OBJ_nid2obj;
++		OBJ_nid2sn;
++		OBJ_obj2nid;
++		OBJ_sn2nid;
++		OBJ_txt2nid;
++		PEM_ASN1_read;
++		PEM_ASN1_read_bio;
++		PEM_ASN1_write;
++		PEM_ASN1_write_bio;
++		PEM_SealFinal;
++		PEM_SealInit;
++		PEM_SealUpdate;
++		PEM_SignFinal;
++		PEM_SignInit;
++		PEM_SignUpdate;
++		PEM_X509_INFO_read;
++		PEM_X509_INFO_read_bio;
++		PEM_X509_INFO_write_bio;
++		PEM_dek_info;
++		PEM_do_header;
++		PEM_get_EVP_CIPHER_INFO;
++		PEM_proc_type;
++		PEM_read;
++		PEM_read_DHparams;
++		PEM_read_DSAPrivateKey;
++		PEM_read_DSAparams;
++		PEM_read_PKCS7;
++		PEM_read_PrivateKey;
++		PEM_read_RSAPrivateKey;
++		PEM_read_X509;
++		PEM_read_X509_CRL;
++		PEM_read_X509_REQ;
++		PEM_read_bio;
++		PEM_read_bio_DHparams;
++		PEM_read_bio_DSAPrivateKey;
++		PEM_read_bio_DSAparams;
++		PEM_read_bio_PKCS7;
++		PEM_read_bio_PrivateKey;
++		PEM_read_bio_RSAPrivateKey;
++		PEM_read_bio_X509;
++		PEM_read_bio_X509_CRL;
++		PEM_read_bio_X509_REQ;
++		PEM_write;
++		PEM_write_DHparams;
++		PEM_write_DSAPrivateKey;
++		PEM_write_DSAparams;
++		PEM_write_PKCS7;
++		PEM_write_PrivateKey;
++		PEM_write_RSAPrivateKey;
++		PEM_write_X509;
++		PEM_write_X509_CRL;
++		PEM_write_X509_REQ;
++		PEM_write_bio;
++		PEM_write_bio_DHparams;
++		PEM_write_bio_DSAPrivateKey;
++		PEM_write_bio_DSAparams;
++		PEM_write_bio_PKCS7;
++		PEM_write_bio_PrivateKey;
++		PEM_write_bio_RSAPrivateKey;
++		PEM_write_bio_X509;
++		PEM_write_bio_X509_CRL;
++		PEM_write_bio_X509_REQ;
++		PKCS7_DIGEST_free;
++		PKCS7_DIGEST_new;
++		PKCS7_ENCRYPT_free;
++		PKCS7_ENCRYPT_new;
++		PKCS7_ENC_CONTENT_free;
++		PKCS7_ENC_CONTENT_new;
++		PKCS7_ENVELOPE_free;
++		PKCS7_ENVELOPE_new;
++		PKCS7_ISSUER_AND_SERIAL_digest;
++		PKCS7_ISSUER_AND_SERIAL_free;
++		PKCS7_ISSUER_AND_SERIAL_new;
++		PKCS7_RECIP_INFO_free;
++		PKCS7_RECIP_INFO_new;
++		PKCS7_SIGNED_free;
++		PKCS7_SIGNED_new;
++		PKCS7_SIGNER_INFO_free;
++		PKCS7_SIGNER_INFO_new;
++		PKCS7_SIGN_ENVELOPE_free;
++		PKCS7_SIGN_ENVELOPE_new;
++		PKCS7_dup;
++		PKCS7_free;
++		PKCS7_new;
++		PROXY_ENTRY_add_noproxy;
++		PROXY_ENTRY_clear_noproxy;
++		PROXY_ENTRY_free;
++		PROXY_ENTRY_get_noproxy;
++		PROXY_ENTRY_new;
++		PROXY_ENTRY_set_server;
++		PROXY_add_noproxy;
++		PROXY_add_server;
++		PROXY_check_by_host;
++		PROXY_check_url;
++		PROXY_clear_noproxy;
++		PROXY_free;
++		PROXY_get_noproxy;
++		PROXY_get_proxies;
++		PROXY_get_proxy_entry;
++		PROXY_load_conf;
++		PROXY_new;
++		PROXY_print;
++		RAND_bytes;
++		RAND_cleanup;
++		RAND_file_name;
++		RAND_load_file;
++		RAND_screen;
++		RAND_seed;
++		RAND_write_file;
++		RC2_cbc_encrypt;
++		RC2_cfb64_encrypt;
++		RC2_ecb_encrypt;
++		RC2_encrypt;
++		RC2_ofb64_encrypt;
++		RC2_set_key;
++		RC4;
++		RC4_options;
++		RC4_set_key;
++		RSAPrivateKey_asn1_meth;
++		RSAPrivateKey_dup;
++		RSAPublicKey_dup;
++		RSA_PKCS1_SSLeay;
++		RSA_free;
++		RSA_generate_key;
++		RSA_new;
++		RSA_new_method;
++		RSA_print;
++		RSA_print_fp;
++		RSA_private_decrypt;
++		RSA_private_encrypt;
++		RSA_public_decrypt;
++		RSA_public_encrypt;
++		RSA_set_default_method;
++		RSA_sign;
++		RSA_sign_ASN1_OCTET_STRING;
++		RSA_size;
++		RSA_verify;
++		RSA_verify_ASN1_OCTET_STRING;
++		SHA;
++		SHA1;
++		SHA1_Final;
++		SHA1_Init;
++		SHA1_Update;
++		SHA_Final;
++		SHA_Init;
++		SHA_Update;
++		OpenSSL_add_all_algorithms;
++		OpenSSL_add_all_ciphers;
++		OpenSSL_add_all_digests;
++		TXT_DB_create_index;
++		TXT_DB_free;
++		TXT_DB_get_by_index;
++		TXT_DB_insert;
++		TXT_DB_read;
++		TXT_DB_write;
++		X509_ALGOR_free;
++		X509_ALGOR_new;
++		X509_ATTRIBUTE_free;
++		X509_ATTRIBUTE_new;
++		X509_CINF_free;
++		X509_CINF_new;
++		X509_CRL_INFO_free;
++		X509_CRL_INFO_new;
++		X509_CRL_add_ext;
++		X509_CRL_cmp;
++		X509_CRL_delete_ext;
++		X509_CRL_dup;
++		X509_CRL_free;
++		X509_CRL_get_ext;
++		X509_CRL_get_ext_by_NID;
++		X509_CRL_get_ext_by_OBJ;
++		X509_CRL_get_ext_by_critical;
++		X509_CRL_get_ext_count;
++		X509_CRL_new;
++		X509_CRL_sign;
++		X509_CRL_verify;
++		X509_EXTENSION_create_by_NID;
++		X509_EXTENSION_create_by_OBJ;
++		X509_EXTENSION_dup;
++		X509_EXTENSION_free;
++		X509_EXTENSION_get_critical;
++		X509_EXTENSION_get_data;
++		X509_EXTENSION_get_object;
++		X509_EXTENSION_new;
++		X509_EXTENSION_set_critical;
++		X509_EXTENSION_set_data;
++		X509_EXTENSION_set_object;
++		X509_INFO_free;
++		X509_INFO_new;
++		X509_LOOKUP_by_alias;
++		X509_LOOKUP_by_fingerprint;
++		X509_LOOKUP_by_issuer_serial;
++		X509_LOOKUP_by_subject;
++		X509_LOOKUP_ctrl;
++		X509_LOOKUP_file;
++		X509_LOOKUP_free;
++		X509_LOOKUP_hash_dir;
++		X509_LOOKUP_init;
++		X509_LOOKUP_new;
++		X509_LOOKUP_shutdown;
++		X509_NAME_ENTRY_create_by_NID;
++		X509_NAME_ENTRY_create_by_OBJ;
++		X509_NAME_ENTRY_dup;
++		X509_NAME_ENTRY_free;
++		X509_NAME_ENTRY_get_data;
++		X509_NAME_ENTRY_get_object;
++		X509_NAME_ENTRY_new;
++		X509_NAME_ENTRY_set_data;
++		X509_NAME_ENTRY_set_object;
++		X509_NAME_add_entry;
++		X509_NAME_cmp;
++		X509_NAME_delete_entry;
++		X509_NAME_digest;
++		X509_NAME_dup;
++		X509_NAME_entry_count;
++		X509_NAME_free;
++		X509_NAME_get_entry;
++		X509_NAME_get_index_by_NID;
++		X509_NAME_get_index_by_OBJ;
++		X509_NAME_get_text_by_NID;
++		X509_NAME_get_text_by_OBJ;
++		X509_NAME_hash;
++		X509_NAME_new;
++		X509_NAME_oneline;
++		X509_NAME_print;
++		X509_NAME_set;
++		X509_OBJECT_free_contents;
++		X509_OBJECT_retrieve_by_subject;
++		X509_OBJECT_up_ref_count;
++		X509_PKEY_free;
++		X509_PKEY_new;
++		X509_PUBKEY_free;
++		X509_PUBKEY_get;
++		X509_PUBKEY_new;
++		X509_PUBKEY_set;
++		X509_REQ_INFO_free;
++		X509_REQ_INFO_new;
++		X509_REQ_dup;
++		X509_REQ_free;
++		X509_REQ_get_pubkey;
++		X509_REQ_new;
++		X509_REQ_print;
++		X509_REQ_print_fp;
++		X509_REQ_set_pubkey;
++		X509_REQ_set_subject_name;
++		X509_REQ_set_version;
++		X509_REQ_sign;
++		X509_REQ_to_X509;
++		X509_REQ_verify;
++		X509_REVOKED_add_ext;
++		X509_REVOKED_delete_ext;
++		X509_REVOKED_free;
++		X509_REVOKED_get_ext;
++		X509_REVOKED_get_ext_by_NID;
++		X509_REVOKED_get_ext_by_OBJ;
++		X509_REVOKED_get_ext_by_critical;
++		X509_REVOKED_get_ext_by_critic;
++		X509_REVOKED_get_ext_count;
++		X509_REVOKED_new;
++		X509_SIG_free;
++		X509_SIG_new;
++		X509_STORE_CTX_cleanup;
++		X509_STORE_CTX_init;
++		X509_STORE_add_cert;
++		X509_STORE_add_lookup;
++		X509_STORE_free;
++		X509_STORE_get_by_subject;
++		X509_STORE_load_locations;
++		X509_STORE_new;
++		X509_STORE_set_default_paths;
++		X509_VAL_free;
++		X509_VAL_new;
++		X509_add_ext;
++		X509_asn1_meth;
++		X509_certificate_type;
++		X509_check_private_key;
++		X509_cmp_current_time;
++		X509_delete_ext;
++		X509_digest;
++		X509_dup;
++		X509_free;
++		X509_get_default_cert_area;
++		X509_get_default_cert_dir;
++		X509_get_default_cert_dir_env;
++		X509_get_default_cert_file;
++		X509_get_default_cert_file_env;
++		X509_get_default_private_dir;
++		X509_get_ext;
++		X509_get_ext_by_NID;
++		X509_get_ext_by_OBJ;
++		X509_get_ext_by_critical;
++		X509_get_ext_count;
++		X509_get_issuer_name;
++		X509_get_pubkey;
++		X509_get_pubkey_parameters;
++		X509_get_serialNumber;
++		X509_get_subject_name;
++		X509_gmtime_adj;
++		X509_issuer_and_serial_cmp;
++		X509_issuer_and_serial_hash;
++		X509_issuer_name_cmp;
++		X509_issuer_name_hash;
++		X509_load_cert_file;
++		X509_new;
++		X509_print;
++		X509_print_fp;
++		X509_set_issuer_name;
++		X509_set_notAfter;
++		X509_set_notBefore;
++		X509_set_pubkey;
++		X509_set_serialNumber;
++		X509_set_subject_name;
++		X509_set_version;
++		X509_sign;
++		X509_subject_name_cmp;
++		X509_subject_name_hash;
++		X509_to_X509_REQ;
++		X509_verify;
++		X509_verify_cert;
++		X509_verify_cert_error_string;
++		X509v3_add_ext;
++		X509v3_add_extension;
++		X509v3_add_netscape_extensions;
++		X509v3_add_standard_extensions;
++		X509v3_cleanup_extensions;
++		X509v3_data_type_by_NID;
++		X509v3_data_type_by_OBJ;
++		X509v3_delete_ext;
++		X509v3_get_ext;
++		X509v3_get_ext_by_NID;
++		X509v3_get_ext_by_OBJ;
++		X509v3_get_ext_by_critical;
++		X509v3_get_ext_count;
++		X509v3_pack_string;
++		X509v3_pack_type_by_NID;
++		X509v3_pack_type_by_OBJ;
++		X509v3_unpack_string;
++		_des_crypt;
++		a2d_ASN1_OBJECT;
++		a2i_ASN1_INTEGER;
++		a2i_ASN1_STRING;
++		asn1_Finish;
++		asn1_GetSequence;
++		bn_div_words;
++		bn_expand2;
++		bn_mul_add_words;
++		bn_mul_words;
++		BN_uadd;
++		BN_usub;
++		bn_sqr_words;
++		_ossl_old_crypt;
++		d2i_ASN1_BIT_STRING;
++		d2i_ASN1_BOOLEAN;
++		d2i_ASN1_HEADER;
++		d2i_ASN1_IA5STRING;
++		d2i_ASN1_INTEGER;
++		d2i_ASN1_OBJECT;
++		d2i_ASN1_OCTET_STRING;
++		d2i_ASN1_PRINTABLE;
++		d2i_ASN1_PRINTABLESTRING;
++		d2i_ASN1_SET;
++		d2i_ASN1_T61STRING;
++		d2i_ASN1_TYPE;
++		d2i_ASN1_UTCTIME;
++		d2i_ASN1_bytes;
++		d2i_ASN1_type_bytes;
++		d2i_DHparams;
++		d2i_DSAPrivateKey;
++		d2i_DSAPrivateKey_bio;
++		d2i_DSAPrivateKey_fp;
++		d2i_DSAPublicKey;
++		d2i_DSAparams;
++		d2i_NETSCAPE_SPKAC;
++		d2i_NETSCAPE_SPKI;
++		d2i_Netscape_RSA;
++		d2i_PKCS7;
++		d2i_PKCS7_DIGEST;
++		d2i_PKCS7_ENCRYPT;
++		d2i_PKCS7_ENC_CONTENT;
++		d2i_PKCS7_ENVELOPE;
++		d2i_PKCS7_ISSUER_AND_SERIAL;
++		d2i_PKCS7_RECIP_INFO;
++		d2i_PKCS7_SIGNED;
++		d2i_PKCS7_SIGNER_INFO;
++		d2i_PKCS7_SIGN_ENVELOPE;
++		d2i_PKCS7_bio;
++		d2i_PKCS7_fp;
++		d2i_PrivateKey;
++		d2i_PublicKey;
++		d2i_RSAPrivateKey;
++		d2i_RSAPrivateKey_bio;
++		d2i_RSAPrivateKey_fp;
++		d2i_RSAPublicKey;
++		d2i_X509;
++		d2i_X509_ALGOR;
++		d2i_X509_ATTRIBUTE;
++		d2i_X509_CINF;
++		d2i_X509_CRL;
++		d2i_X509_CRL_INFO;
++		d2i_X509_CRL_bio;
++		d2i_X509_CRL_fp;
++		d2i_X509_EXTENSION;
++		d2i_X509_NAME;
++		d2i_X509_NAME_ENTRY;
++		d2i_X509_PKEY;
++		d2i_X509_PUBKEY;
++		d2i_X509_REQ;
++		d2i_X509_REQ_INFO;
++		d2i_X509_REQ_bio;
++		d2i_X509_REQ_fp;
++		d2i_X509_REVOKED;
++		d2i_X509_SIG;
++		d2i_X509_VAL;
++		d2i_X509_bio;
++		d2i_X509_fp;
++		DES_cbc_cksum;
++		DES_cbc_encrypt;
++		DES_cblock_print_file;
++		DES_cfb64_encrypt;
++		DES_cfb_encrypt;
++		DES_decrypt3;
++		DES_ecb3_encrypt;
++		DES_ecb_encrypt;
++		DES_ede3_cbc_encrypt;
++		DES_ede3_cfb64_encrypt;
++		DES_ede3_ofb64_encrypt;
++		DES_enc_read;
++		DES_enc_write;
++		DES_encrypt1;
++		DES_encrypt2;
++		DES_encrypt3;
++		DES_fcrypt;
++		DES_is_weak_key;
++		DES_key_sched;
++		DES_ncbc_encrypt;
++		DES_ofb64_encrypt;
++		DES_ofb_encrypt;
++		DES_options;
++		DES_pcbc_encrypt;
++		DES_quad_cksum;
++		DES_random_key;
++		_ossl_old_des_random_seed;
++		_ossl_old_des_read_2passwords;
++		_ossl_old_des_read_password;
++		_ossl_old_des_read_pw;
++		_ossl_old_des_read_pw_string;
++		DES_set_key;
++		DES_set_odd_parity;
++		DES_string_to_2keys;
++		DES_string_to_key;
++		DES_xcbc_encrypt;
++		DES_xwhite_in2out;
++		fcrypt_body;
++		i2a_ASN1_INTEGER;
++		i2a_ASN1_OBJECT;
++		i2a_ASN1_STRING;
++		i2d_ASN1_BIT_STRING;
++		i2d_ASN1_BOOLEAN;
++		i2d_ASN1_HEADER;
++		i2d_ASN1_IA5STRING;
++		i2d_ASN1_INTEGER;
++		i2d_ASN1_OBJECT;
++		i2d_ASN1_OCTET_STRING;
++		i2d_ASN1_PRINTABLE;
++		i2d_ASN1_SET;
++		i2d_ASN1_TYPE;
++		i2d_ASN1_UTCTIME;
++		i2d_ASN1_bytes;
++		i2d_DHparams;
++		i2d_DSAPrivateKey;
++		i2d_DSAPrivateKey_bio;
++		i2d_DSAPrivateKey_fp;
++		i2d_DSAPublicKey;
++		i2d_DSAparams;
++		i2d_NETSCAPE_SPKAC;
++		i2d_NETSCAPE_SPKI;
++		i2d_Netscape_RSA;
++		i2d_PKCS7;
++		i2d_PKCS7_DIGEST;
++		i2d_PKCS7_ENCRYPT;
++		i2d_PKCS7_ENC_CONTENT;
++		i2d_PKCS7_ENVELOPE;
++		i2d_PKCS7_ISSUER_AND_SERIAL;
++		i2d_PKCS7_RECIP_INFO;
++		i2d_PKCS7_SIGNED;
++		i2d_PKCS7_SIGNER_INFO;
++		i2d_PKCS7_SIGN_ENVELOPE;
++		i2d_PKCS7_bio;
++		i2d_PKCS7_fp;
++		i2d_PrivateKey;
++		i2d_PublicKey;
++		i2d_RSAPrivateKey;
++		i2d_RSAPrivateKey_bio;
++		i2d_RSAPrivateKey_fp;
++		i2d_RSAPublicKey;
++		i2d_X509;
++		i2d_X509_ALGOR;
++		i2d_X509_ATTRIBUTE;
++		i2d_X509_CINF;
++		i2d_X509_CRL;
++		i2d_X509_CRL_INFO;
++		i2d_X509_CRL_bio;
++		i2d_X509_CRL_fp;
++		i2d_X509_EXTENSION;
++		i2d_X509_NAME;
++		i2d_X509_NAME_ENTRY;
++		i2d_X509_PKEY;
++		i2d_X509_PUBKEY;
++		i2d_X509_REQ;
++		i2d_X509_REQ_INFO;
++		i2d_X509_REQ_bio;
++		i2d_X509_REQ_fp;
++		i2d_X509_REVOKED;
++		i2d_X509_SIG;
++		i2d_X509_VAL;
++		i2d_X509_bio;
++		i2d_X509_fp;
++		idea_cbc_encrypt;
++		idea_cfb64_encrypt;
++		idea_ecb_encrypt;
++		idea_encrypt;
++		idea_ofb64_encrypt;
++		idea_options;
++		idea_set_decrypt_key;
++		idea_set_encrypt_key;
++		lh_delete;
++		lh_doall;
++		lh_doall_arg;
++		lh_free;
++		lh_insert;
++		lh_new;
++		lh_node_stats;
++		lh_node_stats_bio;
++		lh_node_usage_stats;
++		lh_node_usage_stats_bio;
++		lh_retrieve;
++		lh_stats;
++		lh_stats_bio;
++		lh_strhash;
++		sk_delete;
++		sk_delete_ptr;
++		sk_dup;
++		sk_find;
++		sk_free;
++		sk_insert;
++		sk_new;
++		sk_pop;
++		sk_pop_free;
++		sk_push;
++		sk_set_cmp_func;
++		sk_shift;
++		sk_unshift;
++		sk_zero;
++		BIO_f_nbio_test;
++		ASN1_TYPE_get;
++		ASN1_TYPE_set;
++		PKCS7_content_free;
++		ERR_load_PKCS7_strings;
++		X509_find_by_issuer_and_serial;
++		X509_find_by_subject;
++		PKCS7_ctrl;
++		PKCS7_set_type;
++		PKCS7_set_content;
++		PKCS7_SIGNER_INFO_set;
++		PKCS7_add_signer;
++		PKCS7_add_certificate;
++		PKCS7_add_crl;
++		PKCS7_content_new;
++		PKCS7_dataSign;
++		PKCS7_dataVerify;
++		PKCS7_dataInit;
++		PKCS7_add_signature;
++		PKCS7_cert_from_signer_info;
++		PKCS7_get_signer_info;
++		EVP_delete_alias;
++		EVP_mdc2;
++		PEM_read_bio_RSAPublicKey;
++		PEM_write_bio_RSAPublicKey;
++		d2i_RSAPublicKey_bio;
++		i2d_RSAPublicKey_bio;
++		PEM_read_RSAPublicKey;
++		PEM_write_RSAPublicKey;
++		d2i_RSAPublicKey_fp;
++		i2d_RSAPublicKey_fp;
++		BIO_copy_next_retry;
++		RSA_flags;
++		X509_STORE_add_crl;
++		X509_load_crl_file;
++		EVP_rc2_40_cbc;
++		EVP_rc4_40;
++		EVP_CIPHER_CTX_init;
++		HMAC;
++		HMAC_Init;
++		HMAC_Update;
++		HMAC_Final;
++		ERR_get_next_error_library;
++		EVP_PKEY_cmp_parameters;
++		HMAC_cleanup;
++		BIO_ptr_ctrl;
++		BIO_new_file_internal;
++		BIO_new_fp_internal;
++		BIO_s_file_internal;
++		BN_BLINDING_convert;
++		BN_BLINDING_invert;
++		BN_BLINDING_update;
++		RSA_blinding_on;
++		RSA_blinding_off;
++		i2t_ASN1_OBJECT;
++		BN_BLINDING_new;
++		BN_BLINDING_free;
++		EVP_cast5_cbc;
++		EVP_cast5_cfb64;
++		EVP_cast5_ecb;
++		EVP_cast5_ofb;
++		BF_decrypt;
++		CAST_set_key;
++		CAST_encrypt;
++		CAST_decrypt;
++		CAST_ecb_encrypt;
++		CAST_cbc_encrypt;
++		CAST_cfb64_encrypt;
++		CAST_ofb64_encrypt;
++		RC2_decrypt;
++		OBJ_create_objects;
++		BN_exp;
++		BN_mul_word;
++		BN_sub_word;
++		BN_dec2bn;
++		BN_bn2dec;
++		BIO_ghbn_ctrl;
++		CRYPTO_free_ex_data;
++		CRYPTO_get_ex_data;
++		CRYPTO_set_ex_data;
++		ERR_load_CRYPTO_strings;
++		ERR_load_CRYPTOlib_strings;
++		EVP_PKEY_bits;
++		MD5_Transform;
++		SHA1_Transform;
++		SHA_Transform;
++		X509_STORE_CTX_get_chain;
++		X509_STORE_CTX_get_current_cert;
++		X509_STORE_CTX_get_error;
++		X509_STORE_CTX_get_error_depth;
++		X509_STORE_CTX_get_ex_data;
++		X509_STORE_CTX_set_cert;
++		X509_STORE_CTX_set_chain;
++		X509_STORE_CTX_set_error;
++		X509_STORE_CTX_set_ex_data;
++		CRYPTO_dup_ex_data;
++		CRYPTO_get_new_lockid;
++		CRYPTO_new_ex_data;
++		RSA_set_ex_data;
++		RSA_get_ex_data;
++		RSA_get_ex_new_index;
++		RSA_padding_add_PKCS1_type_1;
++		RSA_padding_add_PKCS1_type_2;
++		RSA_padding_add_SSLv23;
++		RSA_padding_add_none;
++		RSA_padding_check_PKCS1_type_1;
++		RSA_padding_check_PKCS1_type_2;
++		RSA_padding_check_SSLv23;
++		RSA_padding_check_none;
++		bn_add_words;
++		d2i_Netscape_RSA_2;
++		CRYPTO_get_ex_new_index;
++		RIPEMD160_Init;
++		RIPEMD160_Update;
++		RIPEMD160_Final;
++		RIPEMD160;
++		RIPEMD160_Transform;
++		RC5_32_set_key;
++		RC5_32_ecb_encrypt;
++		RC5_32_encrypt;
++		RC5_32_decrypt;
++		RC5_32_cbc_encrypt;
++		RC5_32_cfb64_encrypt;
++		RC5_32_ofb64_encrypt;
++		BN_bn2mpi;
++		BN_mpi2bn;
++		ASN1_BIT_STRING_get_bit;
++		ASN1_BIT_STRING_set_bit;
++		BIO_get_ex_data;
++		BIO_get_ex_new_index;
++		BIO_set_ex_data;
++		X509v3_get_key_usage;
++		X509v3_set_key_usage;
++		a2i_X509v3_key_usage;
++		i2a_X509v3_key_usage;
++		EVP_PKEY_decrypt;
++		EVP_PKEY_encrypt;
++		PKCS7_RECIP_INFO_set;
++		PKCS7_add_recipient;
++		PKCS7_add_recipient_info;
++		PKCS7_set_cipher;
++		ASN1_TYPE_get_int_octetstring;
++		ASN1_TYPE_get_octetstring;
++		ASN1_TYPE_set_int_octetstring;
++		ASN1_TYPE_set_octetstring;
++		ASN1_UTCTIME_set_string;
++		ERR_add_error_data;
++		ERR_set_error_data;
++		EVP_CIPHER_asn1_to_param;
++		EVP_CIPHER_param_to_asn1;
++		EVP_CIPHER_get_asn1_iv;
++		EVP_CIPHER_set_asn1_iv;
++		EVP_rc5_32_12_16_cbc;
++		EVP_rc5_32_12_16_cfb64;
++		EVP_rc5_32_12_16_ecb;
++		EVP_rc5_32_12_16_ofb;
++		asn1_add_error;
++		d2i_ASN1_BMPSTRING;
++		i2d_ASN1_BMPSTRING;
++		BIO_f_ber;
++		BN_init;
++		COMP_CTX_new;
++		COMP_CTX_free;
++		COMP_CTX_compress_block;
++		COMP_CTX_expand_block;
++		X509_STORE_CTX_get_ex_new_index;
++		OBJ_NAME_add;
++		BIO_socket_nbio;
++		EVP_rc2_64_cbc;
++		OBJ_NAME_cleanup;
++		OBJ_NAME_get;
++		OBJ_NAME_init;
++		OBJ_NAME_new_index;
++		OBJ_NAME_remove;
++		BN_MONT_CTX_copy;
++		BIO_new_socks4a_connect;
++		BIO_s_socks4a_connect;
++		PROXY_set_connect_mode;
++		RAND_SSLeay;
++		RAND_set_rand_method;
++		RSA_memory_lock;
++		bn_sub_words;
++		bn_mul_normal;
++		bn_mul_comba8;
++		bn_mul_comba4;
++		bn_sqr_normal;
++		bn_sqr_comba8;
++		bn_sqr_comba4;
++		bn_cmp_words;
++		bn_mul_recursive;
++		bn_mul_part_recursive;
++		bn_sqr_recursive;
++		bn_mul_low_normal;
++		BN_RECP_CTX_init;
++		BN_RECP_CTX_new;
++		BN_RECP_CTX_free;
++		BN_RECP_CTX_set;
++		BN_mod_mul_reciprocal;
++		BN_mod_exp_recp;
++		BN_div_recp;
++		BN_CTX_init;
++		BN_MONT_CTX_init;
++		RAND_get_rand_method;
++		PKCS7_add_attribute;
++		PKCS7_add_signed_attribute;
++		PKCS7_digest_from_attributes;
++		PKCS7_get_attribute;
++		PKCS7_get_issuer_and_serial;
++		PKCS7_get_signed_attribute;
++		COMP_compress_block;
++		COMP_expand_block;
++		COMP_rle;
++		COMP_zlib;
++		ms_time_diff;
++		ms_time_new;
++		ms_time_free;
++		ms_time_cmp;
++		ms_time_get;
++		PKCS7_set_attributes;
++		PKCS7_set_signed_attributes;
++		X509_ATTRIBUTE_create;
++		X509_ATTRIBUTE_dup;
++		ASN1_GENERALIZEDTIME_check;
++		ASN1_GENERALIZEDTIME_print;
++		ASN1_GENERALIZEDTIME_set;
++		ASN1_GENERALIZEDTIME_set_string;
++		ASN1_TIME_print;
++		BASIC_CONSTRAINTS_free;
++		BASIC_CONSTRAINTS_new;
++		ERR_load_X509V3_strings;
++		NETSCAPE_CERT_SEQUENCE_free;
++		NETSCAPE_CERT_SEQUENCE_new;
++		OBJ_txt2obj;
++		PEM_read_NETSCAPE_CERT_SEQUENCE;
++		PEM_read_NS_CERT_SEQ;
++		PEM_read_bio_NETSCAPE_CERT_SEQUENCE;
++		PEM_read_bio_NS_CERT_SEQ;
++		PEM_write_NETSCAPE_CERT_SEQUENCE;
++		PEM_write_NS_CERT_SEQ;
++		PEM_write_bio_NETSCAPE_CERT_SEQUENCE;
++		PEM_write_bio_NS_CERT_SEQ;
++		X509V3_EXT_add;
++		X509V3_EXT_add_alias;
++		X509V3_EXT_add_conf;
++		X509V3_EXT_cleanup;
++		X509V3_EXT_conf;
++		X509V3_EXT_conf_nid;
++		X509V3_EXT_get;
++		X509V3_EXT_get_nid;
++		X509V3_EXT_print;
++		X509V3_EXT_print_fp;
++		X509V3_add_standard_extensions;
++		X509V3_add_value;
++		X509V3_add_value_bool;
++		X509V3_add_value_int;
++		X509V3_conf_free;
++		X509V3_get_value_bool;
++		X509V3_get_value_int;
++		X509V3_parse_list;
++		d2i_ASN1_GENERALIZEDTIME;
++		d2i_ASN1_TIME;
++		d2i_BASIC_CONSTRAINTS;
++		d2i_NETSCAPE_CERT_SEQUENCE;
++		d2i_ext_ku;
++		ext_ku_free;
++		ext_ku_new;
++		i2d_ASN1_GENERALIZEDTIME;
++		i2d_ASN1_TIME;
++		i2d_BASIC_CONSTRAINTS;
++		i2d_NETSCAPE_CERT_SEQUENCE;
++		i2d_ext_ku;
++		EVP_MD_CTX_copy;
++		i2d_ASN1_ENUMERATED;
++		d2i_ASN1_ENUMERATED;
++		ASN1_ENUMERATED_set;
++		ASN1_ENUMERATED_get;
++		BN_to_ASN1_ENUMERATED;
++		ASN1_ENUMERATED_to_BN;
++		i2a_ASN1_ENUMERATED;
++		a2i_ASN1_ENUMERATED;
++		i2d_GENERAL_NAME;
++		d2i_GENERAL_NAME;
++		GENERAL_NAME_new;
++		GENERAL_NAME_free;
++		GENERAL_NAMES_new;
++		GENERAL_NAMES_free;
++		d2i_GENERAL_NAMES;
++		i2d_GENERAL_NAMES;
++		i2v_GENERAL_NAMES;
++		i2s_ASN1_OCTET_STRING;
++		s2i_ASN1_OCTET_STRING;
++		X509V3_EXT_check_conf;
++		hex_to_string;
++		string_to_hex;
++		DES_ede3_cbcm_encrypt;
++		RSA_padding_add_PKCS1_OAEP;
++		RSA_padding_check_PKCS1_OAEP;
++		X509_CRL_print_fp;
++		X509_CRL_print;
++		i2v_GENERAL_NAME;
++		v2i_GENERAL_NAME;
++		i2d_PKEY_USAGE_PERIOD;
++		d2i_PKEY_USAGE_PERIOD;
++		PKEY_USAGE_PERIOD_new;
++		PKEY_USAGE_PERIOD_free;
++		v2i_GENERAL_NAMES;
++		i2s_ASN1_INTEGER;
++		X509V3_EXT_d2i;
++		name_cmp;
++		str_dup;
++		i2s_ASN1_ENUMERATED;
++		i2s_ASN1_ENUMERATED_TABLE;
++		BIO_s_log;
++		BIO_f_reliable;
++		PKCS7_dataFinal;
++		PKCS7_dataDecode;
++		X509V3_EXT_CRL_add_conf;
++		BN_set_params;
++		BN_get_params;
++		BIO_get_ex_num;
++		BIO_set_ex_free_func;
++		EVP_ripemd160;
++		ASN1_TIME_set;
++		i2d_AUTHORITY_KEYID;
++		d2i_AUTHORITY_KEYID;
++		AUTHORITY_KEYID_new;
++		AUTHORITY_KEYID_free;
++		ASN1_seq_unpack;
++		ASN1_seq_pack;
++		ASN1_unpack_string;
++		ASN1_pack_string;
++		PKCS12_pack_safebag;
++		PKCS12_MAKE_KEYBAG;
++		PKCS8_encrypt;
++		PKCS12_MAKE_SHKEYBAG;
++		PKCS12_pack_p7data;
++		PKCS12_pack_p7encdata;
++		PKCS12_add_localkeyid;
++		PKCS12_add_friendlyname_asc;
++		PKCS12_add_friendlyname_uni;
++		PKCS12_get_friendlyname;
++		PKCS12_pbe_crypt;
++		PKCS12_decrypt_d2i;
++		PKCS12_i2d_encrypt;
++		PKCS12_init;
++		PKCS12_key_gen_asc;
++		PKCS12_key_gen_uni;
++		PKCS12_gen_mac;
++		PKCS12_verify_mac;
++		PKCS12_set_mac;
++		PKCS12_setup_mac;
++		OPENSSL_asc2uni;
++		OPENSSL_uni2asc;
++		i2d_PKCS12_BAGS;
++		PKCS12_BAGS_new;
++		d2i_PKCS12_BAGS;
++		PKCS12_BAGS_free;
++		i2d_PKCS12;
++		d2i_PKCS12;
++		PKCS12_new;
++		PKCS12_free;
++		i2d_PKCS12_MAC_DATA;
++		PKCS12_MAC_DATA_new;
++		d2i_PKCS12_MAC_DATA;
++		PKCS12_MAC_DATA_free;
++		i2d_PKCS12_SAFEBAG;
++		PKCS12_SAFEBAG_new;
++		d2i_PKCS12_SAFEBAG;
++		PKCS12_SAFEBAG_free;
++		ERR_load_PKCS12_strings;
++		PKCS12_PBE_add;
++		PKCS8_add_keyusage;
++		PKCS12_get_attr_gen;
++		PKCS12_parse;
++		PKCS12_create;
++		i2d_PKCS12_bio;
++		i2d_PKCS12_fp;
++		d2i_PKCS12_bio;
++		d2i_PKCS12_fp;
++		i2d_PBEPARAM;
++		PBEPARAM_new;
++		d2i_PBEPARAM;
++		PBEPARAM_free;
++		i2d_PKCS8_PRIV_KEY_INFO;
++		PKCS8_PRIV_KEY_INFO_new;
++		d2i_PKCS8_PRIV_KEY_INFO;
++		PKCS8_PRIV_KEY_INFO_free;
++		EVP_PKCS82PKEY;
++		EVP_PKEY2PKCS8;
++		PKCS8_set_broken;
++		EVP_PBE_ALGOR_CipherInit;
++		EVP_PBE_alg_add;
++		PKCS5_pbe_set;
++		EVP_PBE_cleanup;
++		i2d_SXNET;
++		d2i_SXNET;
++		SXNET_new;
++		SXNET_free;
++		i2d_SXNETID;
++		d2i_SXNETID;
++		SXNETID_new;
++		SXNETID_free;
++		DSA_SIG_new;
++		DSA_SIG_free;
++		DSA_do_sign;
++		DSA_do_verify;
++		d2i_DSA_SIG;
++		i2d_DSA_SIG;
++		i2d_ASN1_VISIBLESTRING;
++		d2i_ASN1_VISIBLESTRING;
++		i2d_ASN1_UTF8STRING;
++		d2i_ASN1_UTF8STRING;
++		i2d_DIRECTORYSTRING;
++		d2i_DIRECTORYSTRING;
++		i2d_DISPLAYTEXT;
++		d2i_DISPLAYTEXT;
++		d2i_ASN1_SET_OF_X509;
++		i2d_ASN1_SET_OF_X509;
++		i2d_PBKDF2PARAM;
++		PBKDF2PARAM_new;
++		d2i_PBKDF2PARAM;
++		PBKDF2PARAM_free;
++		i2d_PBE2PARAM;
++		PBE2PARAM_new;
++		d2i_PBE2PARAM;
++		PBE2PARAM_free;
++		d2i_ASN1_SET_OF_GENERAL_NAME;
++		i2d_ASN1_SET_OF_GENERAL_NAME;
++		d2i_ASN1_SET_OF_SXNETID;
++		i2d_ASN1_SET_OF_SXNETID;
++		d2i_ASN1_SET_OF_POLICYQUALINFO;
++		i2d_ASN1_SET_OF_POLICYQUALINFO;
++		d2i_ASN1_SET_OF_POLICYINFO;
++		i2d_ASN1_SET_OF_POLICYINFO;
++		SXNET_add_id_asc;
++		SXNET_add_id_ulong;
++		SXNET_add_id_INTEGER;
++		SXNET_get_id_asc;
++		SXNET_get_id_ulong;
++		SXNET_get_id_INTEGER;
++		X509V3_set_conf_lhash;
++		i2d_CERTIFICATEPOLICIES;
++		CERTIFICATEPOLICIES_new;
++		CERTIFICATEPOLICIES_free;
++		d2i_CERTIFICATEPOLICIES;
++		i2d_POLICYINFO;
++		POLICYINFO_new;
++		d2i_POLICYINFO;
++		POLICYINFO_free;
++		i2d_POLICYQUALINFO;
++		POLICYQUALINFO_new;
++		d2i_POLICYQUALINFO;
++		POLICYQUALINFO_free;
++		i2d_USERNOTICE;
++		USERNOTICE_new;
++		d2i_USERNOTICE;
++		USERNOTICE_free;
++		i2d_NOTICEREF;
++		NOTICEREF_new;
++		d2i_NOTICEREF;
++		NOTICEREF_free;
++		X509V3_get_string;
++		X509V3_get_section;
++		X509V3_string_free;
++		X509V3_section_free;
++		X509V3_set_ctx;
++		s2i_ASN1_INTEGER;
++		CRYPTO_set_locked_mem_functions;
++		CRYPTO_get_locked_mem_functions;
++		CRYPTO_malloc_locked;
++		CRYPTO_free_locked;
++		BN_mod_exp2_mont;
++		ERR_get_error_line_data;
++		ERR_peek_error_line_data;
++		PKCS12_PBE_keyivgen;
++		X509_ALGOR_dup;
++		d2i_ASN1_SET_OF_DIST_POINT;
++		i2d_ASN1_SET_OF_DIST_POINT;
++		i2d_CRL_DIST_POINTS;
++		CRL_DIST_POINTS_new;
++		CRL_DIST_POINTS_free;
++		d2i_CRL_DIST_POINTS;
++		i2d_DIST_POINT;
++		DIST_POINT_new;
++		d2i_DIST_POINT;
++		DIST_POINT_free;
++		i2d_DIST_POINT_NAME;
++		DIST_POINT_NAME_new;
++		DIST_POINT_NAME_free;
++		d2i_DIST_POINT_NAME;
++		X509V3_add_value_uchar;
++		d2i_ASN1_SET_OF_X509_ATTRIBUTE;
++		i2d_ASN1_SET_OF_ASN1_TYPE;
++		d2i_ASN1_SET_OF_X509_EXTENSION;
++		d2i_ASN1_SET_OF_X509_NAME_ENTRY;
++		d2i_ASN1_SET_OF_ASN1_TYPE;
++		i2d_ASN1_SET_OF_X509_ATTRIBUTE;
++		i2d_ASN1_SET_OF_X509_EXTENSION;
++		i2d_ASN1_SET_OF_X509_NAME_ENTRY;
++		X509V3_EXT_i2d;
++		X509V3_EXT_val_prn;
++		X509V3_EXT_add_list;
++		EVP_CIPHER_type;
++		EVP_PBE_CipherInit;
++		X509V3_add_value_bool_nf;
++		d2i_ASN1_UINTEGER;
++		sk_value;
++		sk_num;
++		sk_set;
++		i2d_ASN1_SET_OF_X509_REVOKED;
++		sk_sort;
++		d2i_ASN1_SET_OF_X509_REVOKED;
++		i2d_ASN1_SET_OF_X509_ALGOR;
++		i2d_ASN1_SET_OF_X509_CRL;
++		d2i_ASN1_SET_OF_X509_ALGOR;
++		d2i_ASN1_SET_OF_X509_CRL;
++		i2d_ASN1_SET_OF_PKCS7_SIGNER_INFO;
++		i2d_ASN1_SET_OF_PKCS7_RECIP_INFO;
++		d2i_ASN1_SET_OF_PKCS7_SIGNER_INFO;
++		d2i_ASN1_SET_OF_PKCS7_RECIP_INFO;
++		PKCS5_PBE_add;
++		PEM_write_bio_PKCS8;
++		i2d_PKCS8_fp;
++		PEM_read_bio_PKCS8_PRIV_KEY_INFO;
++		PEM_read_bio_P8_PRIV_KEY_INFO;
++		d2i_PKCS8_bio;
++		d2i_PKCS8_PRIV_KEY_INFO_fp;
++		PEM_write_bio_PKCS8_PRIV_KEY_INFO;
++		PEM_write_bio_P8_PRIV_KEY_INFO;
++		PEM_read_PKCS8;
++		d2i_PKCS8_PRIV_KEY_INFO_bio;
++		d2i_PKCS8_fp;
++		PEM_write_PKCS8;
++		PEM_read_PKCS8_PRIV_KEY_INFO;
++		PEM_read_P8_PRIV_KEY_INFO;
++		PEM_read_bio_PKCS8;
++		PEM_write_PKCS8_PRIV_KEY_INFO;
++		PEM_write_P8_PRIV_KEY_INFO;
++		PKCS5_PBE_keyivgen;
++		i2d_PKCS8_bio;
++		i2d_PKCS8_PRIV_KEY_INFO_fp;
++		i2d_PKCS8_PRIV_KEY_INFO_bio;
++		BIO_s_bio;
++		PKCS5_pbe2_set;
++		PKCS5_PBKDF2_HMAC_SHA1;
++		PKCS5_v2_PBE_keyivgen;
++		PEM_write_bio_PKCS8PrivateKey;
++		PEM_write_PKCS8PrivateKey;
++		BIO_ctrl_get_read_request;
++		BIO_ctrl_pending;
++		BIO_ctrl_wpending;
++		BIO_new_bio_pair;
++		BIO_ctrl_get_write_guarantee;
++		CRYPTO_num_locks;
++		CONF_load_bio;
++		CONF_load_fp;
++		i2d_ASN1_SET_OF_ASN1_OBJECT;
++		d2i_ASN1_SET_OF_ASN1_OBJECT;
++		PKCS7_signatureVerify;
++		RSA_set_method;
++		RSA_get_method;
++		RSA_get_default_method;
++		RSA_check_key;
++		OBJ_obj2txt;
++		DSA_dup_DH;
++		X509_REQ_get_extensions;
++		X509_REQ_set_extension_nids;
++		BIO_nwrite;
++		X509_REQ_extension_nid;
++		BIO_nread;
++		X509_REQ_get_extension_nids;
++		BIO_nwrite0;
++		X509_REQ_add_extensions_nid;
++		BIO_nread0;
++		X509_REQ_add_extensions;
++		BIO_new_mem_buf;
++		DH_set_ex_data;
++		DH_set_method;
++		DSA_OpenSSL;
++		DH_get_ex_data;
++		DH_get_ex_new_index;
++		DSA_new_method;
++		DH_new_method;
++		DH_OpenSSL;
++		DSA_get_ex_new_index;
++		DH_get_default_method;
++		DSA_set_ex_data;
++		DH_set_default_method;
++		DSA_get_ex_data;
++		X509V3_EXT_REQ_add_conf;
++		NETSCAPE_SPKI_print;
++		NETSCAPE_SPKI_set_pubkey;
++		NETSCAPE_SPKI_b64_encode;
++		NETSCAPE_SPKI_get_pubkey;
++		NETSCAPE_SPKI_b64_decode;
++		UTF8_putc;
++		UTF8_getc;
++		RSA_null_method;
++		ASN1_tag2str;
++		BIO_ctrl_reset_read_request;
++		DISPLAYTEXT_new;
++		ASN1_GENERALIZEDTIME_free;
++		X509_REVOKED_get_ext_d2i;
++		X509_set_ex_data;
++		X509_reject_set_bit_asc;
++		X509_NAME_add_entry_by_txt;
++		X509_NAME_add_entry_by_NID;
++		X509_PURPOSE_get0;
++		PEM_read_X509_AUX;
++		d2i_AUTHORITY_INFO_ACCESS;
++		PEM_write_PUBKEY;
++		ACCESS_DESCRIPTION_new;
++		X509_CERT_AUX_free;
++		d2i_ACCESS_DESCRIPTION;
++		X509_trust_clear;
++		X509_TRUST_add;
++		ASN1_VISIBLESTRING_new;
++		X509_alias_set1;
++		ASN1_PRINTABLESTRING_free;
++		EVP_PKEY_get1_DSA;
++		ASN1_BMPSTRING_new;
++		ASN1_mbstring_copy;
++		ASN1_UTF8STRING_new;
++		DSA_get_default_method;
++		i2d_ASN1_SET_OF_ACCESS_DESCRIPTION;
++		ASN1_T61STRING_free;
++		DSA_set_method;
++		X509_get_ex_data;
++		ASN1_STRING_type;
++		X509_PURPOSE_get_by_sname;
++		ASN1_TIME_free;
++		ASN1_OCTET_STRING_cmp;
++		ASN1_BIT_STRING_new;
++		X509_get_ext_d2i;
++		PEM_read_bio_X509_AUX;
++		ASN1_STRING_set_default_mask_asc;
++		ASN1_STRING_set_def_mask_asc;
++		PEM_write_bio_RSA_PUBKEY;
++		ASN1_INTEGER_cmp;
++		d2i_RSA_PUBKEY_fp;
++		X509_trust_set_bit_asc;
++		PEM_write_bio_DSA_PUBKEY;
++		X509_STORE_CTX_free;
++		EVP_PKEY_set1_DSA;
++		i2d_DSA_PUBKEY_fp;
++		X509_load_cert_crl_file;
++		ASN1_TIME_new;
++		i2d_RSA_PUBKEY;
++		X509_STORE_CTX_purpose_inherit;
++		PEM_read_RSA_PUBKEY;
++		d2i_X509_AUX;
++		i2d_DSA_PUBKEY;
++		X509_CERT_AUX_print;
++		PEM_read_DSA_PUBKEY;
++		i2d_RSA_PUBKEY_bio;
++		ASN1_BIT_STRING_num_asc;
++		i2d_PUBKEY;
++		ASN1_UTCTIME_free;
++		DSA_set_default_method;
++		X509_PURPOSE_get_by_id;
++		ACCESS_DESCRIPTION_free;
++		PEM_read_bio_PUBKEY;
++		ASN1_STRING_set_by_NID;
++		X509_PURPOSE_get_id;
++		DISPLAYTEXT_free;
++		OTHERNAME_new;
++		X509_CERT_AUX_new;
++		X509_TRUST_cleanup;
++		X509_NAME_add_entry_by_OBJ;
++		X509_CRL_get_ext_d2i;
++		X509_PURPOSE_get0_name;
++		PEM_read_PUBKEY;
++		i2d_DSA_PUBKEY_bio;
++		i2d_OTHERNAME;
++		ASN1_OCTET_STRING_free;
++		ASN1_BIT_STRING_set_asc;
++		X509_get_ex_new_index;
++		ASN1_STRING_TABLE_cleanup;
++		X509_TRUST_get_by_id;
++		X509_PURPOSE_get_trust;
++		ASN1_STRING_length;
++		d2i_ASN1_SET_OF_ACCESS_DESCRIPTION;
++		ASN1_PRINTABLESTRING_new;
++		X509V3_get_d2i;
++		ASN1_ENUMERATED_free;
++		i2d_X509_CERT_AUX;
++		X509_STORE_CTX_set_trust;
++		ASN1_STRING_set_default_mask;
++		X509_STORE_CTX_new;
++		EVP_PKEY_get1_RSA;
++		DIRECTORYSTRING_free;
++		PEM_write_X509_AUX;
++		ASN1_OCTET_STRING_set;
++		d2i_DSA_PUBKEY_fp;
++		d2i_RSA_PUBKEY;
++		X509_TRUST_get0_name;
++		X509_TRUST_get0;
++		AUTHORITY_INFO_ACCESS_free;
++		ASN1_IA5STRING_new;
++		d2i_DSA_PUBKEY;
++		X509_check_purpose;
++		ASN1_ENUMERATED_new;
++		d2i_RSA_PUBKEY_bio;
++		d2i_PUBKEY;
++		X509_TRUST_get_trust;
++		X509_TRUST_get_flags;
++		ASN1_BMPSTRING_free;
++		ASN1_T61STRING_new;
++		ASN1_UTCTIME_new;
++		i2d_AUTHORITY_INFO_ACCESS;
++		EVP_PKEY_set1_RSA;
++		X509_STORE_CTX_set_purpose;
++		ASN1_IA5STRING_free;
++		PEM_write_bio_X509_AUX;
++		X509_PURPOSE_get_count;
++		CRYPTO_add_info;
++		X509_NAME_ENTRY_create_by_txt;
++		ASN1_STRING_get_default_mask;
++		X509_alias_get0;
++		ASN1_STRING_data;
++		i2d_ACCESS_DESCRIPTION;
++		X509_trust_set_bit;
++		ASN1_BIT_STRING_free;
++		PEM_read_bio_RSA_PUBKEY;
++		X509_add1_reject_object;
++		X509_check_trust;
++		PEM_read_bio_DSA_PUBKEY;
++		X509_PURPOSE_add;
++		ASN1_STRING_TABLE_get;
++		ASN1_UTF8STRING_free;
++		d2i_DSA_PUBKEY_bio;
++		PEM_write_RSA_PUBKEY;
++		d2i_OTHERNAME;
++		X509_reject_set_bit;
++		PEM_write_DSA_PUBKEY;
++		X509_PURPOSE_get0_sname;
++		EVP_PKEY_set1_DH;
++		ASN1_OCTET_STRING_dup;
++		ASN1_BIT_STRING_set;
++		X509_TRUST_get_count;
++		ASN1_INTEGER_free;
++		OTHERNAME_free;
++		i2d_RSA_PUBKEY_fp;
++		ASN1_INTEGER_dup;
++		d2i_X509_CERT_AUX;
++		PEM_write_bio_PUBKEY;
++		ASN1_VISIBLESTRING_free;
++		X509_PURPOSE_cleanup;
++		ASN1_mbstring_ncopy;
++		ASN1_GENERALIZEDTIME_new;
++		EVP_PKEY_get1_DH;
++		ASN1_OCTET_STRING_new;
++		ASN1_INTEGER_new;
++		i2d_X509_AUX;
++		ASN1_BIT_STRING_name_print;
++		X509_cmp;
++		ASN1_STRING_length_set;
++		DIRECTORYSTRING_new;
++		X509_add1_trust_object;
++		PKCS12_newpass;
++		SMIME_write_PKCS7;
++		SMIME_read_PKCS7;
++		DES_set_key_checked;
++		PKCS7_verify;
++		PKCS7_encrypt;
++		DES_set_key_unchecked;
++		SMIME_crlf_copy;
++		i2d_ASN1_PRINTABLESTRING;
++		PKCS7_get0_signers;
++		PKCS7_decrypt;
++		SMIME_text;
++		PKCS7_simple_smimecap;
++		PKCS7_get_smimecap;
++		PKCS7_sign;
++		PKCS7_add_attrib_smimecap;
++		CRYPTO_dbg_set_options;
++		CRYPTO_remove_all_info;
++		CRYPTO_get_mem_debug_functions;
++		CRYPTO_is_mem_check_on;
++		CRYPTO_set_mem_debug_functions;
++		CRYPTO_pop_info;
++		CRYPTO_push_info_;
++		CRYPTO_set_mem_debug_options;
++		PEM_write_PKCS8PrivateKey_nid;
++		PEM_write_bio_PKCS8PrivateKey_nid;
++		PEM_write_bio_PKCS8PrivKey_nid;
++		d2i_PKCS8PrivateKey_bio;
++		ASN1_NULL_free;
++		d2i_ASN1_NULL;
++		ASN1_NULL_new;
++		i2d_PKCS8PrivateKey_bio;
++		i2d_PKCS8PrivateKey_fp;
++		i2d_ASN1_NULL;
++		i2d_PKCS8PrivateKey_nid_fp;
++		d2i_PKCS8PrivateKey_fp;
++		i2d_PKCS8PrivateKey_nid_bio;
++		i2d_PKCS8PrivateKeyInfo_fp;
++		i2d_PKCS8PrivateKeyInfo_bio;
++		PEM_cb;
++		i2d_PrivateKey_fp;
++		d2i_PrivateKey_bio;
++		d2i_PrivateKey_fp;
++		i2d_PrivateKey_bio;
++		X509_reject_clear;
++		X509_TRUST_set_default;
++		d2i_AutoPrivateKey;
++		X509_ATTRIBUTE_get0_type;
++		X509_ATTRIBUTE_set1_data;
++		X509at_get_attr;
++		X509at_get_attr_count;
++		X509_ATTRIBUTE_create_by_NID;
++		X509_ATTRIBUTE_set1_object;
++		X509_ATTRIBUTE_count;
++		X509_ATTRIBUTE_create_by_OBJ;
++		X509_ATTRIBUTE_get0_object;
++		X509at_get_attr_by_NID;
++		X509at_add1_attr;
++		X509_ATTRIBUTE_get0_data;
++		X509at_delete_attr;
++		X509at_get_attr_by_OBJ;
++		RAND_add;
++		BIO_number_written;
++		BIO_number_read;
++		X509_STORE_CTX_get1_chain;
++		ERR_load_RAND_strings;
++		RAND_pseudo_bytes;
++		X509_REQ_get_attr_by_NID;
++		X509_REQ_get_attr;
++		X509_REQ_add1_attr_by_NID;
++		X509_REQ_get_attr_by_OBJ;
++		X509at_add1_attr_by_NID;
++		X509_REQ_add1_attr_by_OBJ;
++		X509_REQ_get_attr_count;
++		X509_REQ_add1_attr;
++		X509_REQ_delete_attr;
++		X509at_add1_attr_by_OBJ;
++		X509_REQ_add1_attr_by_txt;
++		X509_ATTRIBUTE_create_by_txt;
++		X509at_add1_attr_by_txt;
++		BN_pseudo_rand;
++		BN_is_prime_fasttest;
++		BN_CTX_end;
++		BN_CTX_start;
++		BN_CTX_get;
++		EVP_PKEY2PKCS8_broken;
++		ASN1_STRING_TABLE_add;
++		CRYPTO_dbg_get_options;
++		AUTHORITY_INFO_ACCESS_new;
++		CRYPTO_get_mem_debug_options;
++		DES_crypt;
++		PEM_write_bio_X509_REQ_NEW;
++		PEM_write_X509_REQ_NEW;
++		BIO_callback_ctrl;
++		RAND_egd;
++		RAND_status;
++		bn_dump1;
++		DES_check_key_parity;
++		lh_num_items;
++		RAND_event;
++		DSO_new;
++		DSO_new_method;
++		DSO_free;
++		DSO_flags;
++		DSO_up;
++		DSO_set_default_method;
++		DSO_get_default_method;
++		DSO_get_method;
++		DSO_set_method;
++		DSO_load;
++		DSO_bind_var;
++		DSO_METHOD_null;
++		DSO_METHOD_openssl;
++		DSO_METHOD_dlfcn;
++		DSO_METHOD_win32;
++		ERR_load_DSO_strings;
++		DSO_METHOD_dl;
++		NCONF_load;
++		NCONF_load_fp;
++		NCONF_new;
++		NCONF_get_string;
++		NCONF_free;
++		NCONF_get_number;
++		CONF_dump_fp;
++		NCONF_load_bio;
++		NCONF_dump_fp;
++		NCONF_get_section;
++		NCONF_dump_bio;
++		CONF_dump_bio;
++		NCONF_free_data;
++		CONF_set_default_method;
++		ERR_error_string_n;
++		BIO_snprintf;
++		DSO_ctrl;
++		i2d_ASN1_SET_OF_ASN1_INTEGER;
++		i2d_ASN1_SET_OF_PKCS12_SAFEBAG;
++		i2d_ASN1_SET_OF_PKCS7;
++		BIO_vfree;
++		d2i_ASN1_SET_OF_ASN1_INTEGER;
++		d2i_ASN1_SET_OF_PKCS12_SAFEBAG;
++		ASN1_UTCTIME_get;
++		X509_REQ_digest;
++		X509_CRL_digest;
++		d2i_ASN1_SET_OF_PKCS7;
++		EVP_CIPHER_CTX_set_key_length;
++		EVP_CIPHER_CTX_ctrl;
++		BN_mod_exp_mont_word;
++		RAND_egd_bytes;
++		X509_REQ_get1_email;
++		X509_get1_email;
++		X509_email_free;
++		i2d_RSA_NET;
++		d2i_RSA_NET_2;
++		d2i_RSA_NET;
++		DSO_bind_func;
++		CRYPTO_get_new_dynlockid;
++		sk_new_null;
++		CRYPTO_set_dynlock_destroy_callback;
++		CRYPTO_set_dynlock_destroy_cb;
++		CRYPTO_destroy_dynlockid;
++		CRYPTO_set_dynlock_size;
++		CRYPTO_set_dynlock_create_callback;
++		CRYPTO_set_dynlock_create_cb;
++		CRYPTO_set_dynlock_lock_callback;
++		CRYPTO_set_dynlock_lock_cb;
++		CRYPTO_get_dynlock_lock_callback;
++		CRYPTO_get_dynlock_lock_cb;
++		CRYPTO_get_dynlock_destroy_callback;
++		CRYPTO_get_dynlock_destroy_cb;
++		CRYPTO_get_dynlock_value;
++		CRYPTO_get_dynlock_create_callback;
++		CRYPTO_get_dynlock_create_cb;
++		c2i_ASN1_BIT_STRING;
++		i2c_ASN1_BIT_STRING;
++		RAND_poll;
++		c2i_ASN1_INTEGER;
++		i2c_ASN1_INTEGER;
++		BIO_dump_indent;
++		ASN1_parse_dump;
++		c2i_ASN1_OBJECT;
++		X509_NAME_print_ex_fp;
++		ASN1_STRING_print_ex_fp;
++		X509_NAME_print_ex;
++		ASN1_STRING_print_ex;
++		MD4;
++		MD4_Transform;
++		MD4_Final;
++		MD4_Update;
++		MD4_Init;
++		EVP_md4;
++		i2d_PUBKEY_bio;
++		i2d_PUBKEY_fp;
++		d2i_PUBKEY_bio;
++		ASN1_STRING_to_UTF8;
++		BIO_vprintf;
++		BIO_vsnprintf;
++		d2i_PUBKEY_fp;
++		X509_cmp_time;
++		X509_STORE_CTX_set_time;
++		X509_STORE_CTX_get1_issuer;
++		X509_OBJECT_retrieve_match;
++		X509_OBJECT_idx_by_subject;
++		X509_STORE_CTX_set_flags;
++		X509_STORE_CTX_trusted_stack;
++		X509_time_adj;
++		X509_check_issued;
++		ASN1_UTCTIME_cmp_time_t;
++		DES_set_weak_key_flag;
++		DES_check_key;
++		DES_rw_mode;
++		RSA_PKCS1_RSAref;
++		X509_keyid_set1;
++		BIO_next;
++		DSO_METHOD_vms;
++		BIO_f_linebuffer;
++		BN_bntest_rand;
++		OPENSSL_issetugid;
++		BN_rand_range;
++		ERR_load_ENGINE_strings;
++		ENGINE_set_DSA;
++		ENGINE_get_finish_function;
++		ENGINE_get_default_RSA;
++		ENGINE_get_BN_mod_exp;
++		DSA_get_default_openssl_method;
++		ENGINE_set_DH;
++		ENGINE_set_def_BN_mod_exp_crt;
++		ENGINE_set_default_BN_mod_exp_crt;
++		ENGINE_init;
++		DH_get_default_openssl_method;
++		RSA_set_default_openssl_method;
++		ENGINE_finish;
++		ENGINE_load_public_key;
++		ENGINE_get_DH;
++		ENGINE_ctrl;
++		ENGINE_get_init_function;
++		ENGINE_set_init_function;
++		ENGINE_set_default_DSA;
++		ENGINE_get_name;
++		ENGINE_get_last;
++		ENGINE_get_prev;
++		ENGINE_get_default_DH;
++		ENGINE_get_RSA;
++		ENGINE_set_default;
++		ENGINE_get_RAND;
++		ENGINE_get_first;
++		ENGINE_by_id;
++		ENGINE_set_finish_function;
++		ENGINE_get_def_BN_mod_exp_crt;
++		ENGINE_get_default_BN_mod_exp_crt;
++		RSA_get_default_openssl_method;
++		ENGINE_set_RSA;
++		ENGINE_load_private_key;
++		ENGINE_set_default_RAND;
++		ENGINE_set_BN_mod_exp;
++		ENGINE_remove;
++		ENGINE_free;
++		ENGINE_get_BN_mod_exp_crt;
++		ENGINE_get_next;
++		ENGINE_set_name;
++		ENGINE_get_default_DSA;
++		ENGINE_set_default_BN_mod_exp;
++		ENGINE_set_default_RSA;
++		ENGINE_get_default_RAND;
++		ENGINE_get_default_BN_mod_exp;
++		ENGINE_set_RAND;
++		ENGINE_set_id;
++		ENGINE_set_BN_mod_exp_crt;
++		ENGINE_set_default_DH;
++		ENGINE_new;
++		ENGINE_get_id;
++		DSA_set_default_openssl_method;
++		ENGINE_add;
++		DH_set_default_openssl_method;
++		ENGINE_get_DSA;
++		ENGINE_get_ctrl_function;
++		ENGINE_set_ctrl_function;
++		BN_pseudo_rand_range;
++		X509_STORE_CTX_set_verify_cb;
++		ERR_load_COMP_strings;
++		PKCS12_item_decrypt_d2i;
++		ASN1_UTF8STRING_it;
++		ENGINE_unregister_ciphers;
++		ENGINE_get_ciphers;
++		d2i_OCSP_BASICRESP;
++		KRB5_CHECKSUM_it;
++		EC_POINT_add;
++		ASN1_item_ex_i2d;
++		OCSP_CERTID_it;
++		d2i_OCSP_RESPBYTES;
++		X509V3_add1_i2d;
++		PKCS7_ENVELOPE_it;
++		UI_add_input_boolean;
++		ENGINE_unregister_RSA;
++		X509V3_EXT_nconf;
++		ASN1_GENERALSTRING_free;
++		d2i_OCSP_CERTSTATUS;
++		X509_REVOKED_set_serialNumber;
++		X509_print_ex;
++		OCSP_ONEREQ_get1_ext_d2i;
++		ENGINE_register_all_RAND;
++		ENGINE_load_dynamic;
++		PBKDF2PARAM_it;
++		EXTENDED_KEY_USAGE_new;
++		EC_GROUP_clear_free;
++		OCSP_sendreq_bio;
++		ASN1_item_digest;
++		OCSP_BASICRESP_delete_ext;
++		OCSP_SIGNATURE_it;
++		X509_CRL_it;
++		OCSP_BASICRESP_add_ext;
++		KRB5_ENCKEY_it;
++		UI_method_set_closer;
++		X509_STORE_set_purpose;
++		i2d_ASN1_GENERALSTRING;
++		OCSP_response_status;
++		i2d_OCSP_SERVICELOC;
++		ENGINE_get_digest_engine;
++		EC_GROUP_set_curve_GFp;
++		OCSP_REQUEST_get_ext_by_OBJ;
++		_ossl_old_des_random_key;
++		ASN1_T61STRING_it;
++		EC_GROUP_method_of;
++		i2d_KRB5_APREQ;
++		_ossl_old_des_encrypt;
++		ASN1_PRINTABLE_new;
++		HMAC_Init_ex;
++		d2i_KRB5_AUTHENT;
++		OCSP_archive_cutoff_new;
++		EC_POINT_set_Jprojective_coordinates_GFp;
++		EC_POINT_set_Jproj_coords_GFp;
++		_ossl_old_des_is_weak_key;
++		OCSP_BASICRESP_get_ext_by_OBJ;
++		EC_POINT_oct2point;
++		OCSP_SINGLERESP_get_ext_count;
++		UI_ctrl;
++		_shadow_DES_rw_mode;
++		asn1_do_adb;
++		ASN1_template_i2d;
++		ENGINE_register_DH;
++		UI_construct_prompt;
++		X509_STORE_set_trust;
++		UI_dup_input_string;
++		d2i_KRB5_APREQ;
++		EVP_MD_CTX_copy_ex;
++		OCSP_request_is_signed;
++		i2d_OCSP_REQINFO;
++		KRB5_ENCKEY_free;
++		OCSP_resp_get0;
++		GENERAL_NAME_it;
++		ASN1_GENERALIZEDTIME_it;
++		X509_STORE_set_flags;
++		EC_POINT_set_compressed_coordinates_GFp;
++		EC_POINT_set_compr_coords_GFp;
++		OCSP_response_status_str;
++		d2i_OCSP_REVOKEDINFO;
++		OCSP_basic_add1_cert;
++		ERR_get_implementation;
++		EVP_CipherFinal_ex;
++		OCSP_CERTSTATUS_new;
++		CRYPTO_cleanup_all_ex_data;
++		OCSP_resp_find;
++		BN_nnmod;
++		X509_CRL_sort;
++		X509_REVOKED_set_revocationDate;
++		ENGINE_register_RAND;
++		OCSP_SERVICELOC_new;
++		EC_POINT_set_affine_coordinates_GFp;
++		EC_POINT_set_affine_coords_GFp;
++		_ossl_old_des_options;
++		SXNET_it;
++		UI_dup_input_boolean;
++		PKCS12_add_CSPName_asc;
++		EC_POINT_is_at_infinity;
++		ENGINE_load_cryptodev;
++		DSO_convert_filename;
++		POLICYQUALINFO_it;
++		ENGINE_register_ciphers;
++		BN_mod_lshift_quick;
++		DSO_set_filename;
++		ASN1_item_free;
++		KRB5_TKTBODY_free;
++		AUTHORITY_KEYID_it;
++		KRB5_APREQBODY_new;
++		X509V3_EXT_REQ_add_nconf;
++		ENGINE_ctrl_cmd_string;
++		i2d_OCSP_RESPDATA;
++		EVP_MD_CTX_init;
++		EXTENDED_KEY_USAGE_free;
++		PKCS7_ATTR_SIGN_it;
++		UI_add_error_string;
++		KRB5_CHECKSUM_free;
++		OCSP_REQUEST_get_ext;
++		ENGINE_load_ubsec;
++		ENGINE_register_all_digests;
++		PKEY_USAGE_PERIOD_it;
++		PKCS12_unpack_authsafes;
++		ASN1_item_unpack;
++		NETSCAPE_SPKAC_it;
++		X509_REVOKED_it;
++		ASN1_STRING_encode;
++		EVP_aes_128_ecb;
++		KRB5_AUTHENT_free;
++		OCSP_BASICRESP_get_ext_by_critical;
++		OCSP_BASICRESP_get_ext_by_crit;
++		OCSP_cert_status_str;
++		d2i_OCSP_REQUEST;
++		UI_dup_info_string;
++		_ossl_old_des_xwhite_in2out;
++		PKCS12_it;
++		OCSP_SINGLERESP_get_ext_by_critical;
++		OCSP_SINGLERESP_get_ext_by_crit;
++		OCSP_CERTSTATUS_free;
++		_ossl_old_des_crypt;
++		ASN1_item_i2d;
++		EVP_DecryptFinal_ex;
++		ENGINE_load_openssl;
++		ENGINE_get_cmd_defns;
++		ENGINE_set_load_privkey_function;
++		ENGINE_set_load_privkey_fn;
++		EVP_EncryptFinal_ex;
++		ENGINE_set_default_digests;
++		X509_get0_pubkey_bitstr;
++		asn1_ex_i2c;
++		ENGINE_register_RSA;
++		ENGINE_unregister_DSA;
++		_ossl_old_des_key_sched;
++		X509_EXTENSION_it;
++		i2d_KRB5_AUTHENT;
++		SXNETID_it;
++		d2i_OCSP_SINGLERESP;
++		EDIPARTYNAME_new;
++		PKCS12_certbag2x509;
++		_ossl_old_des_ofb64_encrypt;
++		d2i_EXTENDED_KEY_USAGE;
++		ERR_print_errors_cb;
++		ENGINE_set_ciphers;
++		d2i_KRB5_APREQBODY;
++		UI_method_get_flusher;
++		X509_PUBKEY_it;
++		_ossl_old_des_enc_read;
++		PKCS7_ENCRYPT_it;
++		i2d_OCSP_RESPONSE;
++		EC_GROUP_get_cofactor;
++		PKCS12_unpack_p7data;
++		d2i_KRB5_AUTHDATA;
++		OCSP_copy_nonce;
++		KRB5_AUTHDATA_new;
++		OCSP_RESPDATA_new;
++		EC_GFp_mont_method;
++		OCSP_REVOKEDINFO_free;
++		UI_get_ex_data;
++		KRB5_APREQBODY_free;
++		EC_GROUP_get0_generator;
++		UI_get_default_method;
++		X509V3_set_nconf;
++		PKCS12_item_i2d_encrypt;
++		X509_add1_ext_i2d;
++		PKCS7_SIGNER_INFO_it;
++		KRB5_PRINCNAME_new;
++		PKCS12_SAFEBAG_it;
++		EC_GROUP_get_order;
++		d2i_OCSP_RESPID;
++		OCSP_request_verify;
++		NCONF_get_number_e;
++		_ossl_old_des_decrypt3;
++		X509_signature_print;
++		OCSP_SINGLERESP_free;
++		ENGINE_load_builtin_engines;
++		i2d_OCSP_ONEREQ;
++		OCSP_REQUEST_add_ext;
++		OCSP_RESPBYTES_new;
++		EVP_MD_CTX_create;
++		OCSP_resp_find_status;
++		X509_ALGOR_it;
++		ASN1_TIME_it;
++		OCSP_request_set1_name;
++		OCSP_ONEREQ_get_ext_count;
++		UI_get0_result;
++		PKCS12_AUTHSAFES_it;
++		EVP_aes_256_ecb;
++		PKCS12_pack_authsafes;
++		ASN1_IA5STRING_it;
++		UI_get_input_flags;
++		EC_GROUP_set_generator;
++		_ossl_old_des_string_to_2keys;
++		OCSP_CERTID_free;
++		X509_CERT_AUX_it;
++		CERTIFICATEPOLICIES_it;
++		_ossl_old_des_ede3_cbc_encrypt;
++		RAND_set_rand_engine;
++		DSO_get_loaded_filename;
++		X509_ATTRIBUTE_it;
++		OCSP_ONEREQ_get_ext_by_NID;
++		PKCS12_decrypt_skey;
++		KRB5_AUTHENT_it;
++		UI_dup_error_string;
++		RSAPublicKey_it;
++		i2d_OCSP_REQUEST;
++		PKCS12_x509crl2certbag;
++		OCSP_SERVICELOC_it;
++		ASN1_item_sign;
++		X509_CRL_set_issuer_name;
++		OBJ_NAME_do_all_sorted;
++		i2d_OCSP_BASICRESP;
++		i2d_OCSP_RESPBYTES;
++		PKCS12_unpack_p7encdata;
++		HMAC_CTX_init;
++		ENGINE_get_digest;
++		OCSP_RESPONSE_print;
++		KRB5_TKTBODY_it;
++		ACCESS_DESCRIPTION_it;
++		PKCS7_ISSUER_AND_SERIAL_it;
++		PBE2PARAM_it;
++		PKCS12_certbag2x509crl;
++		PKCS7_SIGNED_it;
++		ENGINE_get_cipher;
++		i2d_OCSP_CRLID;
++		OCSP_SINGLERESP_new;
++		ENGINE_cmd_is_executable;
++		RSA_up_ref;
++		ASN1_GENERALSTRING_it;
++		ENGINE_register_DSA;
++		X509V3_EXT_add_nconf_sk;
++		ENGINE_set_load_pubkey_function;
++		PKCS8_decrypt;
++		PEM_bytes_read_bio;
++		DIRECTORYSTRING_it;
++		d2i_OCSP_CRLID;
++		EC_POINT_is_on_curve;
++		CRYPTO_set_locked_mem_ex_functions;
++		CRYPTO_set_locked_mem_ex_funcs;
++		d2i_KRB5_CHECKSUM;
++		ASN1_item_dup;
++		X509_it;
++		BN_mod_add;
++		KRB5_AUTHDATA_free;
++		_ossl_old_des_cbc_cksum;
++		ASN1_item_verify;
++		CRYPTO_set_mem_ex_functions;
++		EC_POINT_get_Jprojective_coordinates_GFp;
++		EC_POINT_get_Jproj_coords_GFp;
++		ZLONG_it;
++		CRYPTO_get_locked_mem_ex_functions;
++		CRYPTO_get_locked_mem_ex_funcs;
++		ASN1_TIME_check;
++		UI_get0_user_data;
++		HMAC_CTX_cleanup;
++		DSA_up_ref;
++		_ossl_old_des_ede3_cfb64_encrypt;
++		_ossl_odes_ede3_cfb64_encrypt;
++		ASN1_BMPSTRING_it;
++		ASN1_tag2bit;
++		UI_method_set_flusher;
++		X509_ocspid_print;
++		KRB5_ENCDATA_it;
++		ENGINE_get_load_pubkey_function;
++		UI_add_user_data;
++		OCSP_REQUEST_delete_ext;
++		UI_get_method;
++		OCSP_ONEREQ_free;
++		ASN1_PRINTABLESTRING_it;
++		X509_CRL_set_nextUpdate;
++		OCSP_REQUEST_it;
++		OCSP_BASICRESP_it;
++		AES_ecb_encrypt;
++		BN_mod_sqr;
++		NETSCAPE_CERT_SEQUENCE_it;
++		GENERAL_NAMES_it;
++		AUTHORITY_INFO_ACCESS_it;
++		ASN1_FBOOLEAN_it;
++		UI_set_ex_data;
++		_ossl_old_des_string_to_key;
++		ENGINE_register_all_RSA;
++		d2i_KRB5_PRINCNAME;
++		OCSP_RESPBYTES_it;
++		X509_CINF_it;
++		ENGINE_unregister_digests;
++		d2i_EDIPARTYNAME;
++		d2i_OCSP_SERVICELOC;
++		ENGINE_get_digests;
++		_ossl_old_des_set_odd_parity;
++		OCSP_RESPDATA_free;
++		d2i_KRB5_TICKET;
++		OTHERNAME_it;
++		EVP_MD_CTX_cleanup;
++		d2i_ASN1_GENERALSTRING;
++		X509_CRL_set_version;
++		BN_mod_sub;
++		OCSP_SINGLERESP_get_ext_by_NID;
++		ENGINE_get_ex_new_index;
++		OCSP_REQUEST_free;
++		OCSP_REQUEST_add1_ext_i2d;
++		X509_VAL_it;
++		EC_POINTs_make_affine;
++		EC_POINT_mul;
++		X509V3_EXT_add_nconf;
++		X509_TRUST_set;
++		X509_CRL_add1_ext_i2d;
++		_ossl_old_des_fcrypt;
++		DISPLAYTEXT_it;
++		X509_CRL_set_lastUpdate;
++		OCSP_BASICRESP_free;
++		OCSP_BASICRESP_add1_ext_i2d;
++		d2i_KRB5_AUTHENTBODY;
++		CRYPTO_set_ex_data_implementation;
++		CRYPTO_set_ex_data_impl;
++		KRB5_ENCDATA_new;
++		DSO_up_ref;
++		OCSP_crl_reason_str;
++		UI_get0_result_string;
++		ASN1_GENERALSTRING_new;
++		X509_SIG_it;
++		ERR_set_implementation;
++		ERR_load_EC_strings;
++		UI_get0_action_string;
++		OCSP_ONEREQ_get_ext;
++		EC_POINT_method_of;
++		i2d_KRB5_APREQBODY;
++		_ossl_old_des_ecb3_encrypt;
++		CRYPTO_get_mem_ex_functions;
++		ENGINE_get_ex_data;
++		UI_destroy_method;
++		ASN1_item_i2d_bio;
++		OCSP_ONEREQ_get_ext_by_OBJ;
++		ASN1_primitive_new;
++		ASN1_PRINTABLE_it;
++		EVP_aes_192_ecb;
++		OCSP_SIGNATURE_new;
++		LONG_it;
++		ASN1_VISIBLESTRING_it;
++		OCSP_SINGLERESP_add1_ext_i2d;
++		d2i_OCSP_CERTID;
++		ASN1_item_d2i_fp;
++		CRL_DIST_POINTS_it;
++		GENERAL_NAME_print;
++		OCSP_SINGLERESP_delete_ext;
++		PKCS12_SAFEBAGS_it;
++		d2i_OCSP_SIGNATURE;
++		OCSP_request_add1_nonce;
++		ENGINE_set_cmd_defns;
++		OCSP_SERVICELOC_free;
++		EC_GROUP_free;
++		ASN1_BIT_STRING_it;
++		X509_REQ_it;
++		_ossl_old_des_cbc_encrypt;
++		ERR_unload_strings;
++		PKCS7_SIGN_ENVELOPE_it;
++		EDIPARTYNAME_free;
++		OCSP_REQINFO_free;
++		EC_GROUP_new_curve_GFp;
++		OCSP_REQUEST_get1_ext_d2i;
++		PKCS12_item_pack_safebag;
++		asn1_ex_c2i;
++		ENGINE_register_digests;
++		i2d_OCSP_REVOKEDINFO;
++		asn1_enc_restore;
++		UI_free;
++		UI_new_method;
++		EVP_EncryptInit_ex;
++		X509_pubkey_digest;
++		EC_POINT_invert;
++		OCSP_basic_sign;
++		i2d_OCSP_RESPID;
++		OCSP_check_nonce;
++		ENGINE_ctrl_cmd;
++		d2i_KRB5_ENCKEY;
++		OCSP_parse_url;
++		OCSP_SINGLERESP_get_ext;
++		OCSP_CRLID_free;
++		OCSP_BASICRESP_get1_ext_d2i;
++		RSAPrivateKey_it;
++		ENGINE_register_all_DH;
++		i2d_EDIPARTYNAME;
++		EC_POINT_get_affine_coordinates_GFp;
++		EC_POINT_get_affine_coords_GFp;
++		OCSP_CRLID_new;
++		ENGINE_get_flags;
++		OCSP_ONEREQ_it;
++		UI_process;
++		ASN1_INTEGER_it;
++		EVP_CipherInit_ex;
++		UI_get_string_type;
++		ENGINE_unregister_DH;
++		ENGINE_register_all_DSA;
++		OCSP_ONEREQ_get_ext_by_critical;
++		bn_dup_expand;
++		OCSP_cert_id_new;
++		BASIC_CONSTRAINTS_it;
++		BN_mod_add_quick;
++		EC_POINT_new;
++		EVP_MD_CTX_destroy;
++		OCSP_RESPBYTES_free;
++		EVP_aes_128_cbc;
++		OCSP_SINGLERESP_get1_ext_d2i;
++		EC_POINT_free;
++		DH_up_ref;
++		X509_NAME_ENTRY_it;
++		UI_get_ex_new_index;
++		BN_mod_sub_quick;
++		OCSP_ONEREQ_add_ext;
++		OCSP_request_sign;
++		EVP_DigestFinal_ex;
++		ENGINE_set_digests;
++		OCSP_id_issuer_cmp;
++		OBJ_NAME_do_all;
++		EC_POINTs_mul;
++		ENGINE_register_complete;
++		X509V3_EXT_nconf_nid;
++		ASN1_SEQUENCE_it;
++		UI_set_default_method;
++		RAND_query_egd_bytes;
++		UI_method_get_writer;
++		UI_OpenSSL;
++		PEM_def_callback;
++		ENGINE_cleanup;
++		DIST_POINT_it;
++		OCSP_SINGLERESP_it;
++		d2i_KRB5_TKTBODY;
++		EC_POINT_cmp;
++		OCSP_REVOKEDINFO_new;
++		i2d_OCSP_CERTSTATUS;
++		OCSP_basic_add1_nonce;
++		ASN1_item_ex_d2i;
++		BN_mod_lshift1_quick;
++		UI_set_method;
++		OCSP_id_get0_info;
++		BN_mod_sqrt;
++		EC_GROUP_copy;
++		KRB5_ENCDATA_free;
++		_ossl_old_des_cfb_encrypt;
++		OCSP_SINGLERESP_get_ext_by_OBJ;
++		OCSP_cert_to_id;
++		OCSP_RESPID_new;
++		OCSP_RESPDATA_it;
++		d2i_OCSP_RESPDATA;
++		ENGINE_register_all_complete;
++		OCSP_check_validity;
++		PKCS12_BAGS_it;
++		OCSP_url_svcloc_new;
++		ASN1_template_free;
++		OCSP_SINGLERESP_add_ext;
++		KRB5_AUTHENTBODY_it;
++		X509_supported_extension;
++		i2d_KRB5_AUTHDATA;
++		UI_method_get_opener;
++		ENGINE_set_ex_data;
++		OCSP_REQUEST_print;
++		CBIGNUM_it;
++		KRB5_TICKET_new;
++		KRB5_APREQ_new;
++		EC_GROUP_get_curve_GFp;
++		KRB5_ENCKEY_new;
++		ASN1_template_d2i;
++		_ossl_old_des_quad_cksum;
++		OCSP_single_get0_status;
++		BN_swap;
++		POLICYINFO_it;
++		ENGINE_set_destroy_function;
++		asn1_enc_free;
++		OCSP_RESPID_it;
++		EC_GROUP_new;
++		EVP_aes_256_cbc;
++		i2d_KRB5_PRINCNAME;
++		_ossl_old_des_encrypt2;
++		_ossl_old_des_encrypt3;
++		PKCS8_PRIV_KEY_INFO_it;
++		OCSP_REQINFO_it;
++		PBEPARAM_it;
++		KRB5_AUTHENTBODY_new;
++		X509_CRL_add0_revoked;
++		EDIPARTYNAME_it;
++		NETSCAPE_SPKI_it;
++		UI_get0_test_string;
++		ENGINE_get_cipher_engine;
++		ENGINE_register_all_ciphers;
++		EC_POINT_copy;
++		BN_kronecker;
++		_ossl_old_des_ede3_ofb64_encrypt;
++		_ossl_odes_ede3_ofb64_encrypt;
++		UI_method_get_reader;
++		OCSP_BASICRESP_get_ext_count;
++		ASN1_ENUMERATED_it;
++		UI_set_result;
++		i2d_KRB5_TICKET;
++		X509_print_ex_fp;
++		EVP_CIPHER_CTX_set_padding;
++		d2i_OCSP_RESPONSE;
++		ASN1_UTCTIME_it;
++		_ossl_old_des_enc_write;
++		OCSP_RESPONSE_new;
++		AES_set_encrypt_key;
++		OCSP_resp_count;
++		KRB5_CHECKSUM_new;
++		ENGINE_load_cswift;
++		OCSP_onereq_get0_id;
++		ENGINE_set_default_ciphers;
++		NOTICEREF_it;
++		X509V3_EXT_CRL_add_nconf;
++		OCSP_REVOKEDINFO_it;
++		AES_encrypt;
++		OCSP_REQUEST_new;
++		ASN1_ANY_it;
++		CRYPTO_ex_data_new_class;
++		_ossl_old_des_ncbc_encrypt;
++		i2d_KRB5_TKTBODY;
++		EC_POINT_clear_free;
++		AES_decrypt;
++		asn1_enc_init;
++		UI_get_result_maxsize;
++		OCSP_CERTID_new;
++		ENGINE_unregister_RAND;
++		UI_method_get_closer;
++		d2i_KRB5_ENCDATA;
++		OCSP_request_onereq_count;
++		OCSP_basic_verify;
++		KRB5_AUTHENTBODY_free;
++		ASN1_item_d2i;
++		ASN1_primitive_free;
++		i2d_EXTENDED_KEY_USAGE;
++		i2d_OCSP_SIGNATURE;
++		asn1_enc_save;
++		ENGINE_load_nuron;
++		_ossl_old_des_pcbc_encrypt;
++		PKCS12_MAC_DATA_it;
++		OCSP_accept_responses_new;
++		asn1_do_lock;
++		PKCS7_ATTR_VERIFY_it;
++		KRB5_APREQBODY_it;
++		i2d_OCSP_SINGLERESP;
++		ASN1_item_ex_new;
++		UI_add_verify_string;
++		_ossl_old_des_set_key;
++		KRB5_PRINCNAME_it;
++		EVP_DecryptInit_ex;
++		i2d_OCSP_CERTID;
++		ASN1_item_d2i_bio;
++		EC_POINT_dbl;
++		asn1_get_choice_selector;
++		i2d_KRB5_CHECKSUM;
++		ENGINE_set_table_flags;
++		AES_options;
++		ENGINE_load_chil;
++		OCSP_id_cmp;
++		OCSP_BASICRESP_new;
++		OCSP_REQUEST_get_ext_by_NID;
++		KRB5_APREQ_it;
++		ENGINE_get_destroy_function;
++		CONF_set_nconf;
++		ASN1_PRINTABLE_free;
++		OCSP_BASICRESP_get_ext_by_NID;
++		DIST_POINT_NAME_it;
++		X509V3_extensions_print;
++		_ossl_old_des_cfb64_encrypt;
++		X509_REVOKED_add1_ext_i2d;
++		_ossl_old_des_ofb_encrypt;
++		KRB5_TKTBODY_new;
++		ASN1_OCTET_STRING_it;
++		ERR_load_UI_strings;
++		i2d_KRB5_ENCKEY;
++		ASN1_template_new;
++		OCSP_SIGNATURE_free;
++		ASN1_item_i2d_fp;
++		KRB5_PRINCNAME_free;
++		PKCS7_RECIP_INFO_it;
++		EXTENDED_KEY_USAGE_it;
++		EC_GFp_simple_method;
++		EC_GROUP_precompute_mult;
++		OCSP_request_onereq_get0;
++		UI_method_set_writer;
++		KRB5_AUTHENT_new;
++		X509_CRL_INFO_it;
++		DSO_set_name_converter;
++		AES_set_decrypt_key;
++		PKCS7_DIGEST_it;
++		PKCS12_x5092certbag;
++		EVP_DigestInit_ex;
++		i2a_ACCESS_DESCRIPTION;
++		OCSP_RESPONSE_it;
++		PKCS7_ENC_CONTENT_it;
++		OCSP_request_add0_id;
++		EC_POINT_make_affine;
++		DSO_get_filename;
++		OCSP_CERTSTATUS_it;
++		OCSP_request_add1_cert;
++		UI_get0_output_string;
++		UI_dup_verify_string;
++		BN_mod_lshift;
++		KRB5_AUTHDATA_it;
++		asn1_set_choice_selector;
++		OCSP_basic_add1_status;
++		OCSP_RESPID_free;
++		asn1_get_field_ptr;
++		UI_add_input_string;
++		OCSP_CRLID_it;
++		i2d_KRB5_AUTHENTBODY;
++		OCSP_REQUEST_get_ext_count;
++		ENGINE_load_atalla;
++		X509_NAME_it;
++		USERNOTICE_it;
++		OCSP_REQINFO_new;
++		OCSP_BASICRESP_get_ext;
++		CRYPTO_get_ex_data_implementation;
++		CRYPTO_get_ex_data_impl;
++		ASN1_item_pack;
++		i2d_KRB5_ENCDATA;
++		X509_PURPOSE_set;
++		X509_REQ_INFO_it;
++		UI_method_set_opener;
++		ASN1_item_ex_free;
++		ASN1_BOOLEAN_it;
++		ENGINE_get_table_flags;
++		UI_create_method;
++		OCSP_ONEREQ_add1_ext_i2d;
++		_shadow_DES_check_key;
++		d2i_OCSP_REQINFO;
++		UI_add_info_string;
++		UI_get_result_minsize;
++		ASN1_NULL_it;
++		BN_mod_lshift1;
++		d2i_OCSP_ONEREQ;
++		OCSP_ONEREQ_new;
++		KRB5_TICKET_it;
++		EVP_aes_192_cbc;
++		KRB5_TICKET_free;
++		UI_new;
++		OCSP_response_create;
++		_ossl_old_des_xcbc_encrypt;
++		PKCS7_it;
++		OCSP_REQUEST_get_ext_by_critical;
++		OCSP_REQUEST_get_ext_by_crit;
++		ENGINE_set_flags;
++		_ossl_old_des_ecb_encrypt;
++		OCSP_response_get1_basic;
++		EVP_Digest;
++		OCSP_ONEREQ_delete_ext;
++		ASN1_TBOOLEAN_it;
++		ASN1_item_new;
++		ASN1_TIME_to_generalizedtime;
++		BIGNUM_it;
++		AES_cbc_encrypt;
++		ENGINE_get_load_privkey_function;
++		ENGINE_get_load_privkey_fn;
++		OCSP_RESPONSE_free;
++		UI_method_set_reader;
++		i2d_ASN1_T61STRING;
++		EC_POINT_set_to_infinity;
++		ERR_load_OCSP_strings;
++		EC_POINT_point2oct;
++		KRB5_APREQ_free;
++		ASN1_OBJECT_it;
++		OCSP_crlID_new;
++		OCSP_crlID2_new;
++		CONF_modules_load_file;
++		CONF_imodule_set_usr_data;
++		ENGINE_set_default_string;
++		CONF_module_get_usr_data;
++		ASN1_add_oid_module;
++		CONF_modules_finish;
++		OPENSSL_config;
++		CONF_modules_unload;
++		CONF_imodule_get_value;
++		CONF_module_set_usr_data;
++		CONF_parse_list;
++		CONF_module_add;
++		CONF_get1_default_config_file;
++		CONF_imodule_get_flags;
++		CONF_imodule_get_module;
++		CONF_modules_load;
++		CONF_imodule_get_name;
++		ERR_peek_top_error;
++		CONF_imodule_get_usr_data;
++		CONF_imodule_set_flags;
++		ENGINE_add_conf_module;
++		ERR_peek_last_error_line;
++		ERR_peek_last_error_line_data;
++		ERR_peek_last_error;
++		DES_read_2passwords;
++		DES_read_password;
++		UI_UTIL_read_pw;
++		UI_UTIL_read_pw_string;
++		ENGINE_load_aep;
++		ENGINE_load_sureware;
++		OPENSSL_add_all_algorithms_noconf;
++		OPENSSL_add_all_algo_noconf;
++		OPENSSL_add_all_algorithms_conf;
++		OPENSSL_add_all_algo_conf;
++		OPENSSL_load_builtin_modules;
++		AES_ofb128_encrypt;
++		AES_ctr128_encrypt;
++		AES_cfb128_encrypt;
++		ENGINE_load_4758cca;
++		_ossl_096_des_random_seed;
++		EVP_aes_256_ofb;
++		EVP_aes_192_ofb;
++		EVP_aes_128_cfb128;
++		EVP_aes_256_cfb128;
++		EVP_aes_128_ofb;
++		EVP_aes_192_cfb128;
++		CONF_modules_free;
++		NCONF_default;
++		OPENSSL_no_config;
++		NCONF_WIN32;
++		ASN1_UNIVERSALSTRING_new;
++		EVP_des_ede_ecb;
++		i2d_ASN1_UNIVERSALSTRING;
++		ASN1_UNIVERSALSTRING_free;
++		ASN1_UNIVERSALSTRING_it;
++		d2i_ASN1_UNIVERSALSTRING;
++		EVP_des_ede3_ecb;
++		X509_REQ_print_ex;
++		ENGINE_up_ref;
++		BUF_MEM_grow_clean;
++		CRYPTO_realloc_clean;
++		BUF_strlcat;
++		BIO_indent;
++		BUF_strlcpy;
++		OpenSSLDie;
++		OPENSSL_cleanse;
++		ENGINE_setup_bsd_cryptodev;
++		ERR_release_err_state_table;
++		EVP_aes_128_cfb8;
++		FIPS_corrupt_rsa;
++		FIPS_selftest_des;
++		EVP_aes_128_cfb1;
++		EVP_aes_192_cfb8;
++		FIPS_mode_set;
++		FIPS_selftest_dsa;
++		EVP_aes_256_cfb8;
++		FIPS_allow_md5;
++		DES_ede3_cfb_encrypt;
++		EVP_des_ede3_cfb8;
++		FIPS_rand_seeded;
++		AES_cfbr_encrypt_block;
++		AES_cfb8_encrypt;
++		FIPS_rand_seed;
++		FIPS_corrupt_des;
++		EVP_aes_192_cfb1;
++		FIPS_selftest_aes;
++		FIPS_set_prng_key;
++		EVP_des_cfb8;
++		FIPS_corrupt_dsa;
++		FIPS_test_mode;
++		FIPS_rand_method;
++		EVP_aes_256_cfb1;
++		ERR_load_FIPS_strings;
++		FIPS_corrupt_aes;
++		FIPS_selftest_sha1;
++		FIPS_selftest_rsa;
++		FIPS_corrupt_sha1;
++		EVP_des_cfb1;
++		FIPS_dsa_check;
++		AES_cfb1_encrypt;
++		EVP_des_ede3_cfb1;
++		FIPS_rand_check;
++		FIPS_md5_allowed;
++		FIPS_mode;
++		FIPS_selftest_failed;
++		sk_is_sorted;
++		X509_check_ca;
++		HMAC_CTX_set_flags;
++		d2i_PROXY_CERT_INFO_EXTENSION;
++		PROXY_POLICY_it;
++		i2d_PROXY_POLICY;
++		i2d_PROXY_CERT_INFO_EXTENSION;
++		d2i_PROXY_POLICY;
++		PROXY_CERT_INFO_EXTENSION_new;
++		PROXY_CERT_INFO_EXTENSION_free;
++		PROXY_CERT_INFO_EXTENSION_it;
++		PROXY_POLICY_free;
++		PROXY_POLICY_new;
++		BN_MONT_CTX_set_locked;
++		FIPS_selftest_rng;
++		EVP_sha384;
++		EVP_sha512;
++		EVP_sha224;
++		EVP_sha256;
++		FIPS_selftest_hmac;
++		FIPS_corrupt_rng;
++		BN_mod_exp_mont_consttime;
++		RSA_X931_hash_id;
++		RSA_padding_check_X931;
++		RSA_verify_PKCS1_PSS;
++		RSA_padding_add_X931;
++		RSA_padding_add_PKCS1_PSS;
++		PKCS1_MGF1;
++		BN_X931_generate_Xpq;
++		RSA_X931_generate_key;
++		BN_X931_derive_prime;
++		BN_X931_generate_prime;
++		RSA_X931_derive;
++		BIO_new_dgram;
++		BN_get0_nist_prime_384;
++		ERR_set_mark;
++		X509_STORE_CTX_set0_crls;
++		ENGINE_set_STORE;
++		ENGINE_register_ECDSA;
++		STORE_meth_set_list_start_fn;
++		STORE_method_set_list_start_function;
++		BN_BLINDING_invert_ex;
++		NAME_CONSTRAINTS_free;
++		STORE_ATTR_INFO_set_number;
++		BN_BLINDING_get_thread_id;
++		X509_STORE_CTX_set0_param;
++		POLICY_MAPPING_it;
++		STORE_parse_attrs_start;
++		POLICY_CONSTRAINTS_free;
++		EVP_PKEY_add1_attr_by_NID;
++		BN_nist_mod_192;
++		EC_GROUP_get_trinomial_basis;
++		STORE_set_method;
++		GENERAL_SUBTREE_free;
++		NAME_CONSTRAINTS_it;
++		ECDH_get_default_method;
++		PKCS12_add_safe;
++		EC_KEY_new_by_curve_name;
++		STORE_meth_get_update_store_fn;
++		STORE_method_get_update_store_function;
++		ENGINE_register_ECDH;
++		SHA512_Update;
++		i2d_ECPrivateKey;
++		BN_get0_nist_prime_192;
++		STORE_modify_certificate;
++		EC_POINT_set_affine_coordinates_GF2m;
++		EC_POINT_set_affine_coords_GF2m;
++		BN_GF2m_mod_exp_arr;
++		STORE_ATTR_INFO_modify_number;
++		X509_keyid_get0;
++		ENGINE_load_gmp;
++		pitem_new;
++		BN_GF2m_mod_mul_arr;
++		STORE_list_public_key_endp;
++		o2i_ECPublicKey;
++		EC_KEY_copy;
++		BIO_dump_fp;
++		X509_policy_node_get0_parent;
++		EC_GROUP_check_discriminant;
++		i2o_ECPublicKey;
++		EC_KEY_precompute_mult;
++		a2i_IPADDRESS;
++		STORE_meth_set_initialise_fn;
++		STORE_method_set_initialise_function;
++		X509_STORE_CTX_set_depth;
++		X509_VERIFY_PARAM_inherit;
++		EC_POINT_point2bn;
++		STORE_ATTR_INFO_set_dn;
++		X509_policy_tree_get0_policies;
++		EC_GROUP_new_curve_GF2m;
++		STORE_destroy_method;
++		ENGINE_unregister_STORE;
++		EVP_PKEY_get1_EC_KEY;
++		STORE_ATTR_INFO_get0_number;
++		ENGINE_get_default_ECDH;
++		EC_KEY_get_conv_form;
++		ASN1_OCTET_STRING_NDEF_it;
++		STORE_delete_public_key;
++		STORE_get_public_key;
++		STORE_modify_arbitrary;
++		ENGINE_get_static_state;
++		pqueue_iterator;
++		ECDSA_SIG_new;
++		OPENSSL_DIR_end;
++		BN_GF2m_mod_sqr;
++		EC_POINT_bn2point;
++		X509_VERIFY_PARAM_set_depth;
++		EC_KEY_set_asn1_flag;
++		STORE_get_method;
++		EC_KEY_get_key_method_data;
++		ECDSA_sign_ex;
++		STORE_parse_attrs_end;
++		EC_GROUP_get_point_conversion_form;
++		EC_GROUP_get_point_conv_form;
++		STORE_method_set_store_function;
++		STORE_ATTR_INFO_in;
++		PEM_read_bio_ECPKParameters;
++		EC_GROUP_get_pentanomial_basis;
++		EVP_PKEY_add1_attr_by_txt;
++		BN_BLINDING_set_flags;
++		X509_VERIFY_PARAM_set1_policies;
++		X509_VERIFY_PARAM_set1_name;
++		X509_VERIFY_PARAM_set_purpose;
++		STORE_get_number;
++		ECDSA_sign_setup;
++		BN_GF2m_mod_solve_quad_arr;
++		EC_KEY_up_ref;
++		POLICY_MAPPING_free;
++		BN_GF2m_mod_div;
++		X509_VERIFY_PARAM_set_flags;
++		EC_KEY_free;
++		STORE_meth_set_list_next_fn;
++		STORE_method_set_list_next_function;
++		PEM_write_bio_ECPrivateKey;
++		d2i_EC_PUBKEY;
++		STORE_meth_get_generate_fn;
++		STORE_method_get_generate_function;
++		STORE_meth_set_list_end_fn;
++		STORE_method_set_list_end_function;
++		pqueue_print;
++		EC_GROUP_have_precompute_mult;
++		EC_KEY_print_fp;
++		BN_GF2m_mod_arr;
++		PEM_write_bio_X509_CERT_PAIR;
++		EVP_PKEY_cmp;
++		X509_policy_level_node_count;
++		STORE_new_engine;
++		STORE_list_public_key_start;
++		X509_VERIFY_PARAM_new;
++		ECDH_get_ex_data;
++		EVP_PKEY_get_attr;
++		ECDSA_do_sign;
++		ENGINE_unregister_ECDH;
++		ECDH_OpenSSL;
++		EC_KEY_set_conv_form;
++		EC_POINT_dup;
++		GENERAL_SUBTREE_new;
++		STORE_list_crl_endp;
++		EC_get_builtin_curves;
++		X509_policy_node_get0_qualifiers;
++		X509_pcy_node_get0_qualifiers;
++		STORE_list_crl_end;
++		EVP_PKEY_set1_EC_KEY;
++		BN_GF2m_mod_sqrt_arr;
++		i2d_ECPrivateKey_bio;
++		ECPKParameters_print_fp;
++		pqueue_find;
++		ECDSA_SIG_free;
++		PEM_write_bio_ECPKParameters;
++		STORE_method_set_ctrl_function;
++		STORE_list_public_key_end;
++		EC_KEY_set_private_key;
++		pqueue_peek;
++		STORE_get_arbitrary;
++		STORE_store_crl;
++		X509_policy_node_get0_policy;
++		PKCS12_add_safes;
++		BN_BLINDING_convert_ex;
++		X509_policy_tree_free;
++		OPENSSL_ia32cap_loc;
++		BN_GF2m_poly2arr;
++		STORE_ctrl;
++		STORE_ATTR_INFO_compare;
++		BN_get0_nist_prime_224;
++		i2d_ECParameters;
++		i2d_ECPKParameters;
++		BN_GENCB_call;
++		d2i_ECPKParameters;
++		STORE_meth_set_generate_fn;
++		STORE_method_set_generate_function;
++		ENGINE_set_ECDH;
++		NAME_CONSTRAINTS_new;
++		SHA256_Init;
++		EC_KEY_get0_public_key;
++		PEM_write_bio_EC_PUBKEY;
++		STORE_ATTR_INFO_set_cstr;
++		STORE_list_crl_next;
++		STORE_ATTR_INFO_in_range;
++		ECParameters_print;
++		STORE_meth_set_delete_fn;
++		STORE_method_set_delete_function;
++		STORE_list_certificate_next;
++		ASN1_generate_nconf;
++		BUF_memdup;
++		BN_GF2m_mod_mul;
++		STORE_meth_get_list_next_fn;
++		STORE_method_get_list_next_function;
++		STORE_ATTR_INFO_get0_dn;
++		STORE_list_private_key_next;
++		EC_GROUP_set_seed;
++		X509_VERIFY_PARAM_set_trust;
++		STORE_ATTR_INFO_free;
++		STORE_get_private_key;
++		EVP_PKEY_get_attr_count;
++		STORE_ATTR_INFO_new;
++		EC_GROUP_get_curve_GF2m;
++		STORE_meth_set_revoke_fn;
++		STORE_method_set_revoke_function;
++		STORE_store_number;
++		BN_is_prime_ex;
++		STORE_revoke_public_key;
++		X509_STORE_CTX_get0_param;
++		STORE_delete_arbitrary;
++		PEM_read_X509_CERT_PAIR;
++		X509_STORE_set_depth;
++		ECDSA_get_ex_data;
++		SHA224;
++		BIO_dump_indent_fp;
++		EC_KEY_set_group;
++		BUF_strndup;
++		STORE_list_certificate_start;
++		BN_GF2m_mod;
++		X509_REQ_check_private_key;
++		EC_GROUP_get_seed_len;
++		ERR_load_STORE_strings;
++		PEM_read_bio_EC_PUBKEY;
++		STORE_list_private_key_end;
++		i2d_EC_PUBKEY;
++		ECDSA_get_default_method;
++		ASN1_put_eoc;
++		X509_STORE_CTX_get_explicit_policy;
++		X509_STORE_CTX_get_expl_policy;
++		X509_VERIFY_PARAM_table_cleanup;
++		STORE_modify_private_key;
++		X509_VERIFY_PARAM_free;
++		EC_METHOD_get_field_type;
++		EC_GFp_nist_method;
++		STORE_meth_set_modify_fn;
++		STORE_method_set_modify_function;
++		STORE_parse_attrs_next;
++		ENGINE_load_padlock;
++		EC_GROUP_set_curve_name;
++		X509_CERT_PAIR_it;
++		STORE_meth_get_revoke_fn;
++		STORE_method_get_revoke_function;
++		STORE_method_set_get_function;
++		STORE_modify_number;
++		STORE_method_get_store_function;
++		STORE_store_private_key;
++		BN_GF2m_mod_sqr_arr;
++		RSA_setup_blinding;
++		BIO_s_datagram;
++		STORE_Memory;
++		sk_find_ex;
++		EC_GROUP_set_curve_GF2m;
++		ENGINE_set_default_ECDSA;
++		POLICY_CONSTRAINTS_new;
++		BN_GF2m_mod_sqrt;
++		ECDH_set_default_method;
++		EC_KEY_generate_key;
++		SHA384_Update;
++		BN_GF2m_arr2poly;
++		STORE_method_get_get_function;
++		STORE_meth_set_cleanup_fn;
++		STORE_method_set_cleanup_function;
++		EC_GROUP_check;
++		d2i_ECPrivateKey_bio;
++		EC_KEY_insert_key_method_data;
++		STORE_meth_get_lock_store_fn;
++		STORE_method_get_lock_store_function;
++		X509_VERIFY_PARAM_get_depth;
++		SHA224_Final;
++		STORE_meth_set_update_store_fn;
++		STORE_method_set_update_store_function;
++		SHA224_Update;
++		d2i_ECPrivateKey;
++		ASN1_item_ndef_i2d;
++		STORE_delete_private_key;
++		ERR_pop_to_mark;
++		ENGINE_register_all_STORE;
++		X509_policy_level_get0_node;
++		i2d_PKCS7_NDEF;
++		EC_GROUP_get_degree;
++		ASN1_generate_v3;
++		STORE_ATTR_INFO_modify_cstr;
++		X509_policy_tree_level_count;
++		BN_GF2m_add;
++		EC_KEY_get0_group;
++		STORE_generate_crl;
++		STORE_store_public_key;
++		X509_CERT_PAIR_free;
++		STORE_revoke_private_key;
++		BN_nist_mod_224;
++		SHA512_Final;
++		STORE_ATTR_INFO_modify_dn;
++		STORE_meth_get_initialise_fn;
++		STORE_method_get_initialise_function;
++		STORE_delete_number;
++		i2d_EC_PUBKEY_bio;
++		BIO_dgram_non_fatal_error;
++		EC_GROUP_get_asn1_flag;
++		STORE_ATTR_INFO_in_ex;
++		STORE_list_crl_start;
++		ECDH_get_ex_new_index;
++		STORE_meth_get_modify_fn;
++		STORE_method_get_modify_function;
++		v2i_ASN1_BIT_STRING;
++		STORE_store_certificate;
++		OBJ_bsearch_ex;
++		X509_STORE_CTX_set_default;
++		STORE_ATTR_INFO_set_sha1str;
++		BN_GF2m_mod_inv;
++		BN_GF2m_mod_exp;
++		STORE_modify_public_key;
++		STORE_meth_get_list_start_fn;
++		STORE_method_get_list_start_function;
++		EC_GROUP_get0_seed;
++		STORE_store_arbitrary;
++		STORE_meth_set_unlock_store_fn;
++		STORE_method_set_unlock_store_function;
++		BN_GF2m_mod_div_arr;
++		ENGINE_set_ECDSA;
++		STORE_create_method;
++		ECPKParameters_print;
++		EC_KEY_get0_private_key;
++		PEM_write_EC_PUBKEY;
++		X509_VERIFY_PARAM_set1;
++		ECDH_set_method;
++		v2i_GENERAL_NAME_ex;
++		ECDH_set_ex_data;
++		STORE_generate_key;
++		BN_nist_mod_521;
++		X509_policy_tree_get0_level;
++		EC_GROUP_set_point_conversion_form;
++		EC_GROUP_set_point_conv_form;
++		PEM_read_EC_PUBKEY;
++		i2d_ECDSA_SIG;
++		ECDSA_OpenSSL;
++		STORE_delete_crl;
++		EC_KEY_get_enc_flags;
++		ASN1_const_check_infinite_end;
++		EVP_PKEY_delete_attr;
++		ECDSA_set_default_method;
++		EC_POINT_set_compressed_coordinates_GF2m;
++		EC_POINT_set_compr_coords_GF2m;
++		EC_GROUP_cmp;
++		STORE_revoke_certificate;
++		BN_get0_nist_prime_256;
++		STORE_meth_get_delete_fn;
++		STORE_method_get_delete_function;
++		SHA224_Init;
++		PEM_read_ECPrivateKey;
++		SHA512_Init;
++		STORE_parse_attrs_endp;
++		BN_set_negative;
++		ERR_load_ECDSA_strings;
++		EC_GROUP_get_basis_type;
++		STORE_list_public_key_next;
++		i2v_ASN1_BIT_STRING;
++		STORE_OBJECT_free;
++		BN_nist_mod_384;
++		i2d_X509_CERT_PAIR;
++		PEM_write_ECPKParameters;
++		ECDH_compute_key;
++		STORE_ATTR_INFO_get0_sha1str;
++		ENGINE_register_all_ECDH;
++		pqueue_pop;
++		STORE_ATTR_INFO_get0_cstr;
++		POLICY_CONSTRAINTS_it;
++		STORE_get_ex_new_index;
++		EVP_PKEY_get_attr_by_OBJ;
++		X509_VERIFY_PARAM_add0_policy;
++		BN_GF2m_mod_solve_quad;
++		SHA256;
++		i2d_ECPrivateKey_fp;
++		X509_policy_tree_get0_user_policies;
++		X509_pcy_tree_get0_usr_policies;
++		OPENSSL_DIR_read;
++		ENGINE_register_all_ECDSA;
++		X509_VERIFY_PARAM_lookup;
++		EC_POINT_get_affine_coordinates_GF2m;
++		EC_POINT_get_affine_coords_GF2m;
++		EC_GROUP_dup;
++		ENGINE_get_default_ECDSA;
++		EC_KEY_new;
++		SHA256_Transform;
++		EC_KEY_set_enc_flags;
++		ECDSA_verify;
++		EC_POINT_point2hex;
++		ENGINE_get_STORE;
++		SHA512;
++		STORE_get_certificate;
++		ECDSA_do_sign_ex;
++		ECDSA_do_verify;
++		d2i_ECPrivateKey_fp;
++		STORE_delete_certificate;
++		SHA512_Transform;
++		X509_STORE_set1_param;
++		STORE_method_get_ctrl_function;
++		STORE_free;
++		PEM_write_ECPrivateKey;
++		STORE_meth_get_unlock_store_fn;
++		STORE_method_get_unlock_store_function;
++		STORE_get_ex_data;
++		EC_KEY_set_public_key;
++		PEM_read_ECPKParameters;
++		X509_CERT_PAIR_new;
++		ENGINE_register_STORE;
++		RSA_generate_key_ex;
++		DSA_generate_parameters_ex;
++		ECParameters_print_fp;
++		X509V3_NAME_from_section;
++		EVP_PKEY_add1_attr;
++		STORE_modify_crl;
++		STORE_list_private_key_start;
++		POLICY_MAPPINGS_it;
++		GENERAL_SUBTREE_it;
++		EC_GROUP_get_curve_name;
++		PEM_write_X509_CERT_PAIR;
++		BIO_dump_indent_cb;
++		d2i_X509_CERT_PAIR;
++		STORE_list_private_key_endp;
++		asn1_const_Finish;
++		i2d_EC_PUBKEY_fp;
++		BN_nist_mod_256;
++		X509_VERIFY_PARAM_add0_table;
++		pqueue_free;
++		BN_BLINDING_create_param;
++		ECDSA_size;
++		d2i_EC_PUBKEY_bio;
++		BN_get0_nist_prime_521;
++		STORE_ATTR_INFO_modify_sha1str;
++		BN_generate_prime_ex;
++		EC_GROUP_new_by_curve_name;
++		SHA256_Final;
++		DH_generate_parameters_ex;
++		PEM_read_bio_ECPrivateKey;
++		STORE_meth_get_cleanup_fn;
++		STORE_method_get_cleanup_function;
++		ENGINE_get_ECDH;
++		d2i_ECDSA_SIG;
++		BN_is_prime_fasttest_ex;
++		ECDSA_sign;
++		X509_policy_check;
++		EVP_PKEY_get_attr_by_NID;
++		STORE_set_ex_data;
++		ENGINE_get_ECDSA;
++		EVP_ecdsa;
++		BN_BLINDING_get_flags;
++		PKCS12_add_cert;
++		STORE_OBJECT_new;
++		ERR_load_ECDH_strings;
++		EC_KEY_dup;
++		EVP_CIPHER_CTX_rand_key;
++		ECDSA_set_method;
++		a2i_IPADDRESS_NC;
++		d2i_ECParameters;
++		STORE_list_certificate_end;
++		STORE_get_crl;
++		X509_POLICY_NODE_print;
++		SHA384_Init;
++		EC_GF2m_simple_method;
++		ECDSA_set_ex_data;
++		SHA384_Final;
++		PKCS7_set_digest;
++		EC_KEY_print;
++		STORE_meth_set_lock_store_fn;
++		STORE_method_set_lock_store_function;
++		ECDSA_get_ex_new_index;
++		SHA384;
++		POLICY_MAPPING_new;
++		STORE_list_certificate_endp;
++		X509_STORE_CTX_get0_policy_tree;
++		EC_GROUP_set_asn1_flag;
++		EC_KEY_check_key;
++		d2i_EC_PUBKEY_fp;
++		PKCS7_set0_type_other;
++		PEM_read_bio_X509_CERT_PAIR;
++		pqueue_next;
++		STORE_meth_get_list_end_fn;
++		STORE_method_get_list_end_function;
++		EVP_PKEY_add1_attr_by_OBJ;
++		X509_VERIFY_PARAM_set_time;
++		pqueue_new;
++		ENGINE_set_default_ECDH;
++		STORE_new_method;
++		PKCS12_add_key;
++		DSO_merge;
++		EC_POINT_hex2point;
++		BIO_dump_cb;
++		SHA256_Update;
++		pqueue_insert;
++		pitem_free;
++		BN_GF2m_mod_inv_arr;
++		ENGINE_unregister_ECDSA;
++		BN_BLINDING_set_thread_id;
++		get_rfc3526_prime_8192;
++		X509_VERIFY_PARAM_clear_flags;
++		get_rfc2409_prime_1024;
++		DH_check_pub_key;
++		get_rfc3526_prime_2048;
++		get_rfc3526_prime_6144;
++		get_rfc3526_prime_1536;
++		get_rfc3526_prime_3072;
++		get_rfc3526_prime_4096;
++		get_rfc2409_prime_768;
++		X509_VERIFY_PARAM_get_flags;
++		EVP_CIPHER_CTX_new;
++		EVP_CIPHER_CTX_free;
++		Camellia_cbc_encrypt;
++		Camellia_cfb128_encrypt;
++		Camellia_cfb1_encrypt;
++		Camellia_cfb8_encrypt;
++		Camellia_ctr128_encrypt;
++		Camellia_cfbr_encrypt_block;
++		Camellia_decrypt;
++		Camellia_ecb_encrypt;
++		Camellia_encrypt;
++		Camellia_ofb128_encrypt;
++		Camellia_set_key;
++		EVP_camellia_128_cbc;
++		EVP_camellia_128_cfb128;
++		EVP_camellia_128_cfb1;
++		EVP_camellia_128_cfb8;
++		EVP_camellia_128_ecb;
++		EVP_camellia_128_ofb;
++		EVP_camellia_192_cbc;
++		EVP_camellia_192_cfb128;
++		EVP_camellia_192_cfb1;
++		EVP_camellia_192_cfb8;
++		EVP_camellia_192_ecb;
++		EVP_camellia_192_ofb;
++		EVP_camellia_256_cbc;
++		EVP_camellia_256_cfb128;
++		EVP_camellia_256_cfb1;
++		EVP_camellia_256_cfb8;
++		EVP_camellia_256_ecb;
++		EVP_camellia_256_ofb;
++		a2i_ipadd;
++		ASIdentifiers_free;
++		i2d_ASIdOrRange;
++		EVP_CIPHER_block_size;
++		v3_asid_is_canonical;
++		IPAddressChoice_free;
++		EVP_CIPHER_CTX_set_app_data;
++		BIO_set_callback_arg;
++		v3_addr_add_prefix;
++		IPAddressOrRange_it;
++		BIO_set_flags;
++		ASIdentifiers_it;
++		v3_addr_get_range;
++		BIO_method_type;
++		v3_addr_inherits;
++		IPAddressChoice_it;
++		AES_ige_encrypt;
++		v3_addr_add_range;
++		EVP_CIPHER_CTX_nid;
++		d2i_ASRange;
++		v3_addr_add_inherit;
++		v3_asid_add_id_or_range;
++		v3_addr_validate_resource_set;
++		EVP_CIPHER_iv_length;
++		EVP_MD_type;
++		v3_asid_canonize;
++		IPAddressRange_free;
++		v3_asid_add_inherit;
++		EVP_CIPHER_CTX_key_length;
++		IPAddressRange_new;
++		ASIdOrRange_new;
++		EVP_MD_size;
++		EVP_MD_CTX_test_flags;
++		BIO_clear_flags;
++		i2d_ASRange;
++		IPAddressRange_it;
++		IPAddressChoice_new;
++		ASIdentifierChoice_new;
++		ASRange_free;
++		EVP_MD_pkey_type;
++		EVP_MD_CTX_clear_flags;
++		IPAddressFamily_free;
++		i2d_IPAddressFamily;
++		IPAddressOrRange_new;
++		EVP_CIPHER_flags;
++		v3_asid_validate_resource_set;
++		d2i_IPAddressRange;
++		AES_bi_ige_encrypt;
++		BIO_get_callback;
++		IPAddressOrRange_free;
++		v3_addr_subset;
++		d2i_IPAddressFamily;
++		v3_asid_subset;
++		BIO_test_flags;
++		i2d_ASIdentifierChoice;
++		ASRange_it;
++		d2i_ASIdentifiers;
++		ASRange_new;
++		d2i_IPAddressChoice;
++		v3_addr_get_afi;
++		EVP_CIPHER_key_length;
++		EVP_Cipher;
++		i2d_IPAddressOrRange;
++		ASIdOrRange_it;
++		EVP_CIPHER_nid;
++		i2d_IPAddressChoice;
++		EVP_CIPHER_CTX_block_size;
++		ASIdentifiers_new;
++		v3_addr_validate_path;
++		IPAddressFamily_new;
++		EVP_MD_CTX_set_flags;
++		v3_addr_is_canonical;
++		i2d_IPAddressRange;
++		IPAddressFamily_it;
++		v3_asid_inherits;
++		EVP_CIPHER_CTX_cipher;
++		EVP_CIPHER_CTX_get_app_data;
++		EVP_MD_block_size;
++		EVP_CIPHER_CTX_flags;
++		v3_asid_validate_path;
++		d2i_IPAddressOrRange;
++		v3_addr_canonize;
++		ASIdentifierChoice_it;
++		EVP_MD_CTX_md;
++		d2i_ASIdentifierChoice;
++		BIO_method_name;
++		EVP_CIPHER_CTX_iv_length;
++		ASIdOrRange_free;
++		ASIdentifierChoice_free;
++		BIO_get_callback_arg;
++		BIO_set_callback;
++		d2i_ASIdOrRange;
++		i2d_ASIdentifiers;
++		SEED_decrypt;
++		SEED_encrypt;
++		SEED_cbc_encrypt;
++		EVP_seed_ofb;
++		SEED_cfb128_encrypt;
++		SEED_ofb128_encrypt;
++		EVP_seed_cbc;
++		SEED_ecb_encrypt;
++		EVP_seed_ecb;
++		SEED_set_key;
++		EVP_seed_cfb128;
++		X509_EXTENSIONS_it;
++		X509_get1_ocsp;
++		OCSP_REQ_CTX_free;
++		i2d_X509_EXTENSIONS;
++		OCSP_sendreq_nbio;
++		OCSP_sendreq_new;
++		d2i_X509_EXTENSIONS;
++		X509_ALGORS_it;
++		X509_ALGOR_get0;
++		X509_ALGOR_set0;
++		AES_unwrap_key;
++		AES_wrap_key;
++		X509at_get0_data_by_OBJ;
++		ASN1_TYPE_set1;
++		ASN1_STRING_set0;
++		i2d_X509_ALGORS;
++		BIO_f_zlib;
++		COMP_zlib_cleanup;
++		d2i_X509_ALGORS;
++		CMS_ReceiptRequest_free;
++		PEM_write_CMS;
++		CMS_add0_CertificateChoices;
++		CMS_unsigned_add1_attr_by_OBJ;
++		ERR_load_CMS_strings;
++		CMS_sign_receipt;
++		i2d_CMS_ContentInfo;
++		CMS_signed_delete_attr;
++		d2i_CMS_bio;
++		CMS_unsigned_get_attr_by_NID;
++		CMS_verify;
++		SMIME_read_CMS;
++		CMS_decrypt_set1_key;
++		CMS_SignerInfo_get0_algs;
++		CMS_add1_cert;
++		CMS_set_detached;
++		CMS_encrypt;
++		CMS_EnvelopedData_create;
++		CMS_uncompress;
++		CMS_add0_crl;
++		CMS_SignerInfo_verify_content;
++		CMS_unsigned_get0_data_by_OBJ;
++		PEM_write_bio_CMS;
++		CMS_unsigned_get_attr;
++		CMS_RecipientInfo_ktri_cert_cmp;
++		CMS_RecipientInfo_ktri_get0_algs;
++		CMS_RecipInfo_ktri_get0_algs;
++		CMS_ContentInfo_free;
++		CMS_final;
++		CMS_add_simple_smimecap;
++		CMS_SignerInfo_verify;
++		CMS_data;
++		CMS_ContentInfo_it;
++		d2i_CMS_ReceiptRequest;
++		CMS_compress;
++		CMS_digest_create;
++		CMS_SignerInfo_cert_cmp;
++		CMS_SignerInfo_sign;
++		CMS_data_create;
++		i2d_CMS_bio;
++		CMS_EncryptedData_set1_key;
++		CMS_decrypt;
++		int_smime_write_ASN1;
++		CMS_unsigned_delete_attr;
++		CMS_unsigned_get_attr_count;
++		CMS_add_smimecap;
++		PEM_read_CMS;
++		CMS_signed_get_attr_by_OBJ;
++		d2i_CMS_ContentInfo;
++		CMS_add_standard_smimecap;
++		CMS_ContentInfo_new;
++		CMS_RecipientInfo_type;
++		CMS_get0_type;
++		CMS_is_detached;
++		CMS_sign;
++		CMS_signed_add1_attr;
++		CMS_unsigned_get_attr_by_OBJ;
++		SMIME_write_CMS;
++		CMS_EncryptedData_decrypt;
++		CMS_get0_RecipientInfos;
++		CMS_add0_RevocationInfoChoice;
++		CMS_decrypt_set1_pkey;
++		CMS_SignerInfo_set1_signer_cert;
++		CMS_get0_signers;
++		CMS_ReceiptRequest_get0_values;
++		CMS_signed_get0_data_by_OBJ;
++		CMS_get0_SignerInfos;
++		CMS_add0_cert;
++		CMS_EncryptedData_encrypt;
++		CMS_digest_verify;
++		CMS_set1_signers_certs;
++		CMS_signed_get_attr;
++		CMS_RecipientInfo_set0_key;
++		CMS_SignedData_init;
++		CMS_RecipientInfo_kekri_get0_id;
++		CMS_verify_receipt;
++		CMS_ReceiptRequest_it;
++		PEM_read_bio_CMS;
++		CMS_get1_crls;
++		CMS_add0_recipient_key;
++		SMIME_read_ASN1;
++		CMS_ReceiptRequest_new;
++		CMS_get0_content;
++		CMS_get1_ReceiptRequest;
++		CMS_signed_add1_attr_by_OBJ;
++		CMS_RecipientInfo_kekri_id_cmp;
++		CMS_add1_ReceiptRequest;
++		CMS_SignerInfo_get0_signer_id;
++		CMS_unsigned_add1_attr_by_NID;
++		CMS_unsigned_add1_attr;
++		CMS_signed_get_attr_by_NID;
++		CMS_get1_certs;
++		CMS_signed_add1_attr_by_NID;
++		CMS_unsigned_add1_attr_by_txt;
++		CMS_dataFinal;
++		CMS_RecipientInfo_ktri_get0_signer_id;
++		CMS_RecipInfo_ktri_get0_sigr_id;
++		i2d_CMS_ReceiptRequest;
++		CMS_add1_recipient_cert;
++		CMS_dataInit;
++		CMS_signed_add1_attr_by_txt;
++		CMS_RecipientInfo_decrypt;
++		CMS_signed_get_attr_count;
++		CMS_get0_eContentType;
++		CMS_set1_eContentType;
++		CMS_ReceiptRequest_create0;
++		CMS_add1_signer;
++		CMS_RecipientInfo_set0_pkey;
++		ENGINE_set_load_ssl_client_cert_function;
++		ENGINE_set_ld_ssl_clnt_cert_fn;
++		ENGINE_get_ssl_client_cert_function;
++		ENGINE_get_ssl_client_cert_fn;
++		ENGINE_load_ssl_client_cert;
++		ENGINE_load_capi;
++		OPENSSL_isservice;
++		FIPS_dsa_sig_decode;
++		EVP_CIPHER_CTX_clear_flags;
++		FIPS_rand_status;
++		FIPS_rand_set_key;
++		CRYPTO_set_mem_info_functions;
++		RSA_X931_generate_key_ex;
++		int_ERR_set_state_func;
++		int_EVP_MD_set_engine_callbacks;
++		int_CRYPTO_set_do_dynlock_callback;
++		FIPS_rng_stick;
++		EVP_CIPHER_CTX_set_flags;
++		BN_X931_generate_prime_ex;
++		FIPS_selftest_check;
++		FIPS_rand_set_dt;
++		CRYPTO_dbg_pop_info;
++		FIPS_dsa_free;
++		RSA_X931_derive_ex;
++		FIPS_rsa_new;
++		FIPS_rand_bytes;
++		fips_cipher_test;
++		EVP_CIPHER_CTX_test_flags;
++		CRYPTO_malloc_debug_init;
++		CRYPTO_dbg_push_info;
++		FIPS_corrupt_rsa_keygen;
++		FIPS_dh_new;
++		FIPS_corrupt_dsa_keygen;
++		FIPS_dh_free;
++		fips_pkey_signature_test;
++		EVP_add_alg_module;
++		int_RAND_init_engine_callbacks;
++		int_EVP_CIPHER_set_engine_callbacks;
++		int_EVP_MD_init_engine_callbacks;
++		FIPS_rand_test_mode;
++		FIPS_rand_reset;
++		FIPS_dsa_new;
++		int_RAND_set_callbacks;
++		BN_X931_derive_prime_ex;
++		int_ERR_lib_init;
++		int_EVP_CIPHER_init_engine_callbacks;
++		FIPS_rsa_free;
++		FIPS_dsa_sig_encode;
++		CRYPTO_dbg_remove_all_info;
++		OPENSSL_init;
++		CRYPTO_strdup;
++		JPAKE_STEP3A_process;
++		JPAKE_STEP1_release;
++		JPAKE_get_shared_key;
++		JPAKE_STEP3B_init;
++		JPAKE_STEP1_generate;
++		JPAKE_STEP1_init;
++		JPAKE_STEP3B_process;
++		JPAKE_STEP2_generate;
++		JPAKE_CTX_new;
++		JPAKE_CTX_free;
++		JPAKE_STEP3B_release;
++		JPAKE_STEP3A_release;
++		JPAKE_STEP2_process;
++		JPAKE_STEP3B_generate;
++		JPAKE_STEP1_process;
++		JPAKE_STEP3A_generate;
++		JPAKE_STEP2_release;
++		JPAKE_STEP3A_init;
++		ERR_load_JPAKE_strings;
++		JPAKE_STEP2_init;
++		pqueue_size;
++		i2d_TS_ACCURACY;
++		i2d_TS_MSG_IMPRINT_fp;
++		i2d_TS_MSG_IMPRINT;
++		EVP_PKEY_print_public;
++		EVP_PKEY_CTX_new;
++		i2d_TS_TST_INFO;
++		EVP_PKEY_asn1_find;
++		DSO_METHOD_beos;
++		TS_CONF_load_cert;
++		TS_REQ_get_ext;
++		EVP_PKEY_sign_init;
++		ASN1_item_print;
++		TS_TST_INFO_set_nonce;
++		TS_RESP_dup;
++		ENGINE_register_pkey_meths;
++		EVP_PKEY_asn1_add0;
++		PKCS7_add0_attrib_signing_time;
++		i2d_TS_TST_INFO_fp;
++		BIO_asn1_get_prefix;
++		TS_TST_INFO_set_time;
++		EVP_PKEY_meth_set_decrypt;
++		EVP_PKEY_set_type_str;
++		EVP_PKEY_CTX_get_keygen_info;
++		TS_REQ_set_policy_id;
++		d2i_TS_RESP_fp;
++		ENGINE_get_pkey_asn1_meth_engine;
++		ENGINE_get_pkey_asn1_meth_eng;
++		WHIRLPOOL_Init;
++		TS_RESP_set_status_info;
++		EVP_PKEY_keygen;
++		EVP_DigestSignInit;
++		TS_ACCURACY_set_millis;
++		TS_REQ_dup;
++		GENERAL_NAME_dup;
++		ASN1_SEQUENCE_ANY_it;
++		WHIRLPOOL;
++		X509_STORE_get1_crls;
++		ENGINE_get_pkey_asn1_meth;
++		EVP_PKEY_asn1_new;
++		BIO_new_NDEF;
++		ENGINE_get_pkey_meth;
++		TS_MSG_IMPRINT_set_algo;
++		i2d_TS_TST_INFO_bio;
++		TS_TST_INFO_set_ordering;
++		TS_TST_INFO_get_ext_by_OBJ;
++		CRYPTO_THREADID_set_pointer;
++		TS_CONF_get_tsa_section;
++		SMIME_write_ASN1;
++		TS_RESP_CTX_set_signer_key;
++		EVP_PKEY_encrypt_old;
++		EVP_PKEY_encrypt_init;
++		CRYPTO_THREADID_cpy;
++		ASN1_PCTX_get_cert_flags;
++		i2d_ESS_SIGNING_CERT;
++		TS_CONF_load_key;
++		i2d_ASN1_SEQUENCE_ANY;
++		d2i_TS_MSG_IMPRINT_bio;
++		EVP_PKEY_asn1_set_public;
++		b2i_PublicKey_bio;
++		BIO_asn1_set_prefix;
++		EVP_PKEY_new_mac_key;
++		BIO_new_CMS;
++		CRYPTO_THREADID_cmp;
++		TS_REQ_ext_free;
++		EVP_PKEY_asn1_set_free;
++		EVP_PKEY_get0_asn1;
++		d2i_NETSCAPE_X509;
++		EVP_PKEY_verify_recover_init;
++		EVP_PKEY_CTX_set_data;
++		EVP_PKEY_keygen_init;
++		TS_RESP_CTX_set_status_info;
++		TS_MSG_IMPRINT_get_algo;
++		TS_REQ_print_bio;
++		EVP_PKEY_CTX_ctrl_str;
++		EVP_PKEY_get_default_digest_nid;
++		PEM_write_bio_PKCS7_stream;
++		TS_MSG_IMPRINT_print_bio;
++		BN_asc2bn;
++		TS_REQ_get_policy_id;
++		ENGINE_set_default_pkey_asn1_meths;
++		ENGINE_set_def_pkey_asn1_meths;
++		d2i_TS_ACCURACY;
++		DSO_global_lookup;
++		TS_CONF_set_tsa_name;
++		i2d_ASN1_SET_ANY;
++		ENGINE_load_gost;
++		WHIRLPOOL_BitUpdate;
++		ASN1_PCTX_get_flags;
++		TS_TST_INFO_get_ext_by_NID;
++		TS_RESP_new;
++		ESS_CERT_ID_dup;
++		TS_STATUS_INFO_dup;
++		TS_REQ_delete_ext;
++		EVP_DigestVerifyFinal;
++		EVP_PKEY_print_params;
++		i2d_CMS_bio_stream;
++		TS_REQ_get_msg_imprint;
++		OBJ_find_sigid_by_algs;
++		TS_TST_INFO_get_serial;
++		TS_REQ_get_nonce;
++		X509_PUBKEY_set0_param;
++		EVP_PKEY_CTX_set0_keygen_info;
++		DIST_POINT_set_dpname;
++		i2d_ISSUING_DIST_POINT;
++		ASN1_SET_ANY_it;
++		EVP_PKEY_CTX_get_data;
++		TS_STATUS_INFO_print_bio;
++		EVP_PKEY_derive_init;
++		d2i_TS_TST_INFO;
++		EVP_PKEY_asn1_add_alias;
++		d2i_TS_RESP_bio;
++		OTHERNAME_cmp;
++		GENERAL_NAME_set0_value;
++		PKCS7_RECIP_INFO_get0_alg;
++		TS_RESP_CTX_new;
++		TS_RESP_set_tst_info;
++		PKCS7_final;
++		EVP_PKEY_base_id;
++		TS_RESP_CTX_set_signer_cert;
++		TS_REQ_set_msg_imprint;
++		EVP_PKEY_CTX_ctrl;
++		TS_CONF_set_digests;
++		d2i_TS_MSG_IMPRINT;
++		EVP_PKEY_meth_set_ctrl;
++		TS_REQ_get_ext_by_NID;
++		PKCS5_pbe_set0_algor;
++		BN_BLINDING_thread_id;
++		TS_ACCURACY_new;
++		X509_CRL_METHOD_free;
++		ASN1_PCTX_get_nm_flags;
++		EVP_PKEY_meth_set_sign;
++		CRYPTO_THREADID_current;
++		EVP_PKEY_decrypt_init;
++		NETSCAPE_X509_free;
++		i2b_PVK_bio;
++		EVP_PKEY_print_private;
++		GENERAL_NAME_get0_value;
++		b2i_PVK_bio;
++		ASN1_UTCTIME_adj;
++		TS_TST_INFO_new;
++		EVP_MD_do_all_sorted;
++		TS_CONF_set_default_engine;
++		TS_ACCURACY_set_seconds;
++		TS_TST_INFO_get_time;
++		PKCS8_pkey_get0;
++		EVP_PKEY_asn1_get0;
++		OBJ_add_sigid;
++		PKCS7_SIGNER_INFO_sign;
++		EVP_PKEY_paramgen_init;
++		EVP_PKEY_sign;
++		OBJ_sigid_free;
++		EVP_PKEY_meth_set_init;
++		d2i_ESS_ISSUER_SERIAL;
++		ISSUING_DIST_POINT_new;
++		ASN1_TIME_adj;
++		TS_OBJ_print_bio;
++		EVP_PKEY_meth_set_verify_recover;
++		EVP_PKEY_meth_set_vrfy_recover;
++		TS_RESP_get_status_info;
++		CMS_stream;
++		EVP_PKEY_CTX_set_cb;
++		PKCS7_to_TS_TST_INFO;
++		ASN1_PCTX_get_oid_flags;
++		TS_TST_INFO_add_ext;
++		EVP_PKEY_meth_set_derive;
++		i2d_TS_RESP_fp;
++		i2d_TS_MSG_IMPRINT_bio;
++		TS_RESP_CTX_set_accuracy;
++		TS_REQ_set_nonce;
++		ESS_CERT_ID_new;
++		ENGINE_pkey_asn1_find_str;
++		TS_REQ_get_ext_count;
++		BUF_reverse;
++		TS_TST_INFO_print_bio;
++		d2i_ISSUING_DIST_POINT;
++		ENGINE_get_pkey_meths;
++		i2b_PrivateKey_bio;
++		i2d_TS_RESP;
++		b2i_PublicKey;
++		TS_VERIFY_CTX_cleanup;
++		TS_STATUS_INFO_free;
++		TS_RESP_verify_token;
++		OBJ_bsearch_ex_;
++		ASN1_bn_print;
++		EVP_PKEY_asn1_get_count;
++		ENGINE_register_pkey_asn1_meths;
++		ASN1_PCTX_set_nm_flags;
++		EVP_DigestVerifyInit;
++		ENGINE_set_default_pkey_meths;
++		TS_TST_INFO_get_policy_id;
++		TS_REQ_get_cert_req;
++		X509_CRL_set_meth_data;
++		PKCS8_pkey_set0;
++		ASN1_STRING_copy;
++		d2i_TS_TST_INFO_fp;
++		X509_CRL_match;
++		EVP_PKEY_asn1_set_private;
++		TS_TST_INFO_get_ext_d2i;
++		TS_RESP_CTX_add_policy;
++		d2i_TS_RESP;
++		TS_CONF_load_certs;
++		TS_TST_INFO_get_msg_imprint;
++		ERR_load_TS_strings;
++		TS_TST_INFO_get_version;
++		EVP_PKEY_CTX_dup;
++		EVP_PKEY_meth_set_verify;
++		i2b_PublicKey_bio;
++		TS_CONF_set_certs;
++		EVP_PKEY_asn1_get0_info;
++		TS_VERIFY_CTX_free;
++		TS_REQ_get_ext_by_critical;
++		TS_RESP_CTX_set_serial_cb;
++		X509_CRL_get_meth_data;
++		TS_RESP_CTX_set_time_cb;
++		TS_MSG_IMPRINT_get_msg;
++		TS_TST_INFO_ext_free;
++		TS_REQ_get_version;
++		TS_REQ_add_ext;
++		EVP_PKEY_CTX_set_app_data;
++		OBJ_bsearch_;
++		EVP_PKEY_meth_set_verifyctx;
++		i2d_PKCS7_bio_stream;
++		CRYPTO_THREADID_set_numeric;
++		PKCS7_sign_add_signer;
++		d2i_TS_TST_INFO_bio;
++		TS_TST_INFO_get_ordering;
++		TS_RESP_print_bio;
++		TS_TST_INFO_get_exts;
++		HMAC_CTX_copy;
++		PKCS5_pbe2_set_iv;
++		ENGINE_get_pkey_asn1_meths;
++		b2i_PrivateKey;
++		EVP_PKEY_CTX_get_app_data;
++		TS_REQ_set_cert_req;
++		CRYPTO_THREADID_set_callback;
++		TS_CONF_set_serial;
++		TS_TST_INFO_free;
++		d2i_TS_REQ_fp;
++		TS_RESP_verify_response;
++		i2d_ESS_ISSUER_SERIAL;
++		TS_ACCURACY_get_seconds;
++		EVP_CIPHER_do_all;
++		b2i_PrivateKey_bio;
++		OCSP_CERTID_dup;
++		X509_PUBKEY_get0_param;
++		TS_MSG_IMPRINT_dup;
++		PKCS7_print_ctx;
++		i2d_TS_REQ_bio;
++		EVP_whirlpool;
++		EVP_PKEY_asn1_set_param;
++		EVP_PKEY_meth_set_encrypt;
++		ASN1_PCTX_set_flags;
++		i2d_ESS_CERT_ID;
++		TS_VERIFY_CTX_new;
++		TS_RESP_CTX_set_extension_cb;
++		ENGINE_register_all_pkey_meths;
++		TS_RESP_CTX_set_status_info_cond;
++		TS_RESP_CTX_set_stat_info_cond;
++		EVP_PKEY_verify;
++		WHIRLPOOL_Final;
++		X509_CRL_METHOD_new;
++		EVP_DigestSignFinal;
++		TS_RESP_CTX_set_def_policy;
++		NETSCAPE_X509_it;
++		TS_RESP_create_response;
++		PKCS7_SIGNER_INFO_get0_algs;
++		TS_TST_INFO_get_nonce;
++		EVP_PKEY_decrypt_old;
++		TS_TST_INFO_set_policy_id;
++		TS_CONF_set_ess_cert_id_chain;
++		EVP_PKEY_CTX_get0_pkey;
++		d2i_TS_REQ;
++		EVP_PKEY_asn1_find_str;
++		BIO_f_asn1;
++		ESS_SIGNING_CERT_new;
++		EVP_PBE_find;
++		X509_CRL_get0_by_cert;
++		EVP_PKEY_derive;
++		i2d_TS_REQ;
++		TS_TST_INFO_delete_ext;
++		ESS_ISSUER_SERIAL_free;
++		ASN1_PCTX_set_str_flags;
++		ENGINE_get_pkey_asn1_meth_str;
++		TS_CONF_set_signer_key;
++		TS_ACCURACY_get_millis;
++		TS_RESP_get_token;
++		TS_ACCURACY_dup;
++		ENGINE_register_all_pkey_asn1_meths;
++		ENGINE_reg_all_pkey_asn1_meths;
++		X509_CRL_set_default_method;
++		CRYPTO_THREADID_hash;
++		CMS_ContentInfo_print_ctx;
++		TS_RESP_free;
++		ISSUING_DIST_POINT_free;
++		ESS_ISSUER_SERIAL_new;
++		CMS_add1_crl;
++		PKCS7_add1_attrib_digest;
++		TS_RESP_CTX_add_md;
++		TS_TST_INFO_dup;
++		ENGINE_set_pkey_asn1_meths;
++		PEM_write_bio_Parameters;
++		TS_TST_INFO_get_accuracy;
++		X509_CRL_get0_by_serial;
++		TS_TST_INFO_set_version;
++		TS_RESP_CTX_get_tst_info;
++		TS_RESP_verify_signature;
++		CRYPTO_THREADID_get_callback;
++		TS_TST_INFO_get_tsa;
++		TS_STATUS_INFO_new;
++		EVP_PKEY_CTX_get_cb;
++		TS_REQ_get_ext_d2i;
++		GENERAL_NAME_set0_othername;
++		TS_TST_INFO_get_ext_count;
++		TS_RESP_CTX_get_request;
++		i2d_NETSCAPE_X509;
++		ENGINE_get_pkey_meth_engine;
++		EVP_PKEY_meth_set_signctx;
++		EVP_PKEY_asn1_copy;
++		ASN1_TYPE_cmp;
++		EVP_CIPHER_do_all_sorted;
++		EVP_PKEY_CTX_free;
++		ISSUING_DIST_POINT_it;
++		d2i_TS_MSG_IMPRINT_fp;
++		X509_STORE_get1_certs;
++		EVP_PKEY_CTX_get_operation;
++		d2i_ESS_SIGNING_CERT;
++		TS_CONF_set_ordering;
++		EVP_PBE_alg_add_type;
++		TS_REQ_set_version;
++		EVP_PKEY_get0;
++		BIO_asn1_set_suffix;
++		i2d_TS_STATUS_INFO;
++		EVP_MD_do_all;
++		TS_TST_INFO_set_accuracy;
++		PKCS7_add_attrib_content_type;
++		ERR_remove_thread_state;
++		EVP_PKEY_meth_add0;
++		TS_TST_INFO_set_tsa;
++		EVP_PKEY_meth_new;
++		WHIRLPOOL_Update;
++		TS_CONF_set_accuracy;
++		ASN1_PCTX_set_oid_flags;
++		ESS_SIGNING_CERT_dup;
++		d2i_TS_REQ_bio;
++		X509_time_adj_ex;
++		TS_RESP_CTX_add_flags;
++		d2i_TS_STATUS_INFO;
++		TS_MSG_IMPRINT_set_msg;
++		BIO_asn1_get_suffix;
++		TS_REQ_free;
++		EVP_PKEY_meth_free;
++		TS_REQ_get_exts;
++		TS_RESP_CTX_set_clock_precision_digits;
++		TS_RESP_CTX_set_clk_prec_digits;
++		TS_RESP_CTX_add_failure_info;
++		i2d_TS_RESP_bio;
++		EVP_PKEY_CTX_get0_peerkey;
++		PEM_write_bio_CMS_stream;
++		TS_REQ_new;
++		TS_MSG_IMPRINT_new;
++		EVP_PKEY_meth_find;
++		EVP_PKEY_id;
++		TS_TST_INFO_set_serial;
++		a2i_GENERAL_NAME;
++		TS_CONF_set_crypto_device;
++		EVP_PKEY_verify_init;
++		TS_CONF_set_policies;
++		ASN1_PCTX_new;
++		ESS_CERT_ID_free;
++		ENGINE_unregister_pkey_meths;
++		TS_MSG_IMPRINT_free;
++		TS_VERIFY_CTX_init;
++		PKCS7_stream;
++		TS_RESP_CTX_set_certs;
++		TS_CONF_set_def_policy;
++		ASN1_GENERALIZEDTIME_adj;
++		NETSCAPE_X509_new;
++		TS_ACCURACY_free;
++		TS_RESP_get_tst_info;
++		EVP_PKEY_derive_set_peer;
++		PEM_read_bio_Parameters;
++		TS_CONF_set_clock_precision_digits;
++		TS_CONF_set_clk_prec_digits;
++		ESS_ISSUER_SERIAL_dup;
++		TS_ACCURACY_get_micros;
++		ASN1_PCTX_get_str_flags;
++		NAME_CONSTRAINTS_check;
++		ASN1_BIT_STRING_check;
++		X509_check_akid;
++		ENGINE_unregister_pkey_asn1_meths;
++		ENGINE_unreg_pkey_asn1_meths;
++		ASN1_PCTX_free;
++		PEM_write_bio_ASN1_stream;
++		i2d_ASN1_bio_stream;
++		TS_X509_ALGOR_print_bio;
++		EVP_PKEY_meth_set_cleanup;
++		EVP_PKEY_asn1_free;
++		ESS_SIGNING_CERT_free;
++		TS_TST_INFO_set_msg_imprint;
++		GENERAL_NAME_cmp;
++		d2i_ASN1_SET_ANY;
++		ENGINE_set_pkey_meths;
++		i2d_TS_REQ_fp;
++		d2i_ASN1_SEQUENCE_ANY;
++		GENERAL_NAME_get0_otherName;
++		d2i_ESS_CERT_ID;
++		OBJ_find_sigid_algs;
++		EVP_PKEY_meth_set_keygen;
++		PKCS5_PBKDF2_HMAC;
++		EVP_PKEY_paramgen;
++		EVP_PKEY_meth_set_paramgen;
++		BIO_new_PKCS7;
++		EVP_PKEY_verify_recover;
++		TS_ext_print_bio;
++		TS_ASN1_INTEGER_print_bio;
++		check_defer;
++		DSO_pathbyaddr;
++		EVP_PKEY_set_type;
++		TS_ACCURACY_set_micros;
++		TS_REQ_to_TS_VERIFY_CTX;
++		EVP_PKEY_meth_set_copy;
++		ASN1_PCTX_set_cert_flags;
++		TS_TST_INFO_get_ext;
++		EVP_PKEY_asn1_set_ctrl;
++		TS_TST_INFO_get_ext_by_critical;
++		EVP_PKEY_CTX_new_id;
++		TS_REQ_get_ext_by_OBJ;
++		TS_CONF_set_signer_cert;
++		X509_NAME_hash_old;
++		ASN1_TIME_set_string;
++		EVP_MD_flags;
++		TS_RESP_CTX_free;
++		DSAparams_dup;
++		DHparams_dup;
++		OCSP_REQ_CTX_add1_header;
++		OCSP_REQ_CTX_set1_req;
++		X509_STORE_set_verify_cb;
++		X509_STORE_CTX_get0_current_crl;
++		X509_STORE_CTX_get0_parent_ctx;
++		X509_STORE_CTX_get0_current_issuer;
++		X509_STORE_CTX_get0_cur_issuer;
++		X509_issuer_name_hash_old;
++		X509_subject_name_hash_old;
++		EVP_CIPHER_CTX_copy;
++		UI_method_get_prompt_constructor;
++		UI_method_get_prompt_constructr;
++		UI_method_set_prompt_constructor;
++		UI_method_set_prompt_constructr;
++		EVP_read_pw_string_min;
++		CRYPTO_cts128_encrypt;
++		CRYPTO_cts128_decrypt_block;
++		CRYPTO_cfb128_1_encrypt;
++		CRYPTO_cbc128_encrypt;
++		CRYPTO_ctr128_encrypt;
++		CRYPTO_ofb128_encrypt;
++		CRYPTO_cts128_decrypt;
++		CRYPTO_cts128_encrypt_block;
++		CRYPTO_cbc128_decrypt;
++		CRYPTO_cfb128_encrypt;
++		CRYPTO_cfb128_8_encrypt;
++		SSL_renegotiate_abbreviated;
++		TLSv1_1_method;
++		TLSv1_1_client_method;
++		TLSv1_1_server_method;
++		SSL_CTX_set_srp_client_pwd_callback;
++		SSL_CTX_set_srp_client_pwd_cb;
++		SSL_get_srp_g;
++		SSL_CTX_set_srp_username_callback;
++		SSL_CTX_set_srp_un_cb;
++		SSL_get_srp_userinfo;
++		SSL_set_srp_server_param;
++		SSL_set_srp_server_param_pw;
++		SSL_get_srp_N;
++		SSL_get_srp_username;
++		SSL_CTX_set_srp_password;
++		SSL_CTX_set_srp_strength;
++		SSL_CTX_set_srp_verify_param_callback;
++		SSL_CTX_set_srp_vfy_param_cb;
++		SSL_CTX_set_srp_cb_arg;
++		SSL_CTX_set_srp_username;
++		SSL_CTX_SRP_CTX_init;
++		SSL_SRP_CTX_init;
++		SRP_Calc_A_param;
++		SRP_generate_server_master_secret;
++		SRP_gen_server_master_secret;
++		SSL_CTX_SRP_CTX_free;
++		SRP_generate_client_master_secret;
++		SRP_gen_client_master_secret;
++		SSL_srp_server_param_with_username;
++		SSL_srp_server_param_with_un;
++		SSL_SRP_CTX_free;
++		SSL_set_debug;
++		SSL_SESSION_get0_peer;
++		TLSv1_2_client_method;
++		SSL_SESSION_set1_id_context;
++		TLSv1_2_server_method;
++		SSL_cache_hit;
++		SSL_get0_kssl_ctx;
++		SSL_set0_kssl_ctx;
++		SSL_set_state;
++		SSL_CIPHER_get_id;
++		TLSv1_2_method;
++		kssl_ctx_get0_client_princ;
++		SSL_export_keying_material;
++		SSL_set_tlsext_use_srtp;
++		SSL_CTX_set_next_protos_advertised_cb;
++		SSL_CTX_set_next_protos_adv_cb;
++		SSL_get0_next_proto_negotiated;
++		SSL_get_selected_srtp_profile;
++		SSL_CTX_set_tlsext_use_srtp;
++		SSL_select_next_proto;
++		SSL_get_srtp_profiles;
++		SSL_CTX_set_next_proto_select_cb;
++		SSL_CTX_set_next_proto_sel_cb;
++		SSL_SESSION_get_compress_id;
++
++		SRP_VBASE_get_by_user;
++		SRP_Calc_server_key;
++		SRP_create_verifier;
++		SRP_create_verifier_BN;
++		SRP_Calc_u;
++		SRP_VBASE_free;
++		SRP_Calc_client_key;
++		SRP_get_default_gN;
++		SRP_Calc_x;
++		SRP_Calc_B;
++		SRP_VBASE_new;
++		SRP_check_known_gN_param;
++		SRP_Calc_A;
++		SRP_Verify_A_mod_N;
++		SRP_VBASE_init;
++		SRP_Verify_B_mod_N;
++		EC_KEY_set_public_key_affine_coordinates;
++		EC_KEY_set_pub_key_aff_coords;
++		EVP_aes_192_ctr;
++		EVP_PKEY_meth_get0_info;
++		EVP_PKEY_meth_copy;
++		ERR_add_error_vdata;
++		EVP_aes_128_ctr;
++		EVP_aes_256_ctr;
++		EC_GFp_nistp224_method;
++		EC_KEY_get_flags;
++		RSA_padding_add_PKCS1_PSS_mgf1;
++		EVP_aes_128_xts;
++		EVP_aes_256_xts;
++		EVP_aes_128_gcm;
++		EC_KEY_clear_flags;
++		EC_KEY_set_flags;
++		EVP_aes_256_ccm;
++		RSA_verify_PKCS1_PSS_mgf1;
++		EVP_aes_128_ccm;
++		EVP_aes_192_gcm;
++		X509_ALGOR_set_md;
++		RAND_init_fips;
++		EVP_aes_256_gcm;
++		EVP_aes_192_ccm;
++		CMAC_CTX_copy;
++		CMAC_CTX_free;
++		CMAC_CTX_get0_cipher_ctx;
++		CMAC_CTX_cleanup;
++		CMAC_Init;
++		CMAC_Update;
++		CMAC_resume;
++		CMAC_CTX_new;
++		CMAC_Final;
++		CRYPTO_ctr128_encrypt_ctr32;
++		CRYPTO_gcm128_release;
++		CRYPTO_ccm128_decrypt_ccm64;
++		CRYPTO_ccm128_encrypt;
++		CRYPTO_gcm128_encrypt;
++		CRYPTO_xts128_encrypt;
++		EVP_rc4_hmac_md5;
++		CRYPTO_nistcts128_decrypt_block;
++		CRYPTO_gcm128_setiv;
++		CRYPTO_nistcts128_encrypt;
++		EVP_aes_128_cbc_hmac_sha1;
++		CRYPTO_gcm128_tag;
++		CRYPTO_ccm128_encrypt_ccm64;
++		ENGINE_load_rdrand;
++		CRYPTO_ccm128_setiv;
++		CRYPTO_nistcts128_encrypt_block;
++		CRYPTO_gcm128_aad;
++		CRYPTO_ccm128_init;
++		CRYPTO_nistcts128_decrypt;
++		CRYPTO_gcm128_new;
++		CRYPTO_ccm128_tag;
++		CRYPTO_ccm128_decrypt;
++		CRYPTO_ccm128_aad;
++		CRYPTO_gcm128_init;
++		CRYPTO_gcm128_decrypt;
++		ENGINE_load_rsax;
++		CRYPTO_gcm128_decrypt_ctr32;
++		CRYPTO_gcm128_encrypt_ctr32;
++		CRYPTO_gcm128_finish;
++		EVP_aes_256_cbc_hmac_sha1;
++		PKCS5_pbkdf2_set;
++		CMS_add0_recipient_password;
++		CMS_decrypt_set1_password;
++		CMS_RecipientInfo_set0_password;
++		RAND_set_fips_drbg_type;
++		X509_REQ_sign_ctx;
++		RSA_PSS_PARAMS_new;
++		X509_CRL_sign_ctx;
++		X509_signature_dump;
++		d2i_RSA_PSS_PARAMS;
++		RSA_PSS_PARAMS_it;
++		RSA_PSS_PARAMS_free;
++		X509_sign_ctx;
++		i2d_RSA_PSS_PARAMS;
++		ASN1_item_sign_ctx;
++		EC_GFp_nistp521_method;
++		EC_GFp_nistp256_method;
++		OPENSSL_stderr;
++		OPENSSL_cpuid_setup;
++		OPENSSL_showfatal;
++		BIO_new_dgram_sctp;
++		BIO_dgram_sctp_msg_waiting;
++		BIO_dgram_sctp_wait_for_dry;
++		BIO_s_datagram_sctp;
++		BIO_dgram_is_sctp;
++		BIO_dgram_sctp_notification_cb;
++		CRYPTO_memcmp;
++		SSL_CTX_set_alpn_protos;
++		SSL_set_alpn_protos;
++		SSL_CTX_set_alpn_select_cb;
++		SSL_get0_alpn_selected;
++		SSL_CTX_set_custom_cli_ext;
++		SSL_CTX_set_custom_srv_ext;
++		SSL_CTX_set_srv_supp_data;
++		SSL_CTX_set_cli_supp_data;
++		SSL_set_cert_cb;
++		SSL_CTX_use_serverinfo;
++		SSL_CTX_use_serverinfo_file;
++		SSL_CTX_set_cert_cb;
++		SSL_CTX_get0_param;
++		SSL_get0_param;
++		SSL_certs_clear;
++		DTLSv1_2_method;
++		DTLSv1_2_server_method;
++		DTLSv1_2_client_method;
++		DTLS_method;
++		DTLS_server_method;
++		DTLS_client_method;
++		SSL_CTX_get_ssl_method;
++		SSL_CTX_get0_certificate;
++		SSL_CTX_get0_privatekey;
++		SSL_COMP_set0_compression_methods;
++		SSL_COMP_free_compression_methods;
++		SSL_CIPHER_find;
++		SSL_is_server;
++		SSL_CONF_CTX_new;
++		SSL_CONF_CTX_finish;
++		SSL_CONF_CTX_free;
++		SSL_CONF_CTX_set_flags;
++		SSL_CONF_CTX_clear_flags;
++		SSL_CONF_CTX_set1_prefix;
++		SSL_CONF_CTX_set_ssl;
++		SSL_CONF_CTX_set_ssl_ctx;
++		SSL_CONF_cmd;
++		SSL_CONF_cmd_argv;
++		SSL_CONF_cmd_value_type;
++		SSL_trace;
++		SSL_CIPHER_standard_name;
++		SSL_get_tlsa_record_byname;
++		ASN1_TIME_diff;
++		BIO_hex_string;
++		CMS_RecipientInfo_get0_pkey_ctx;
++		CMS_RecipientInfo_encrypt;
++		CMS_SignerInfo_get0_pkey_ctx;
++		CMS_SignerInfo_get0_md_ctx;
++		CMS_SignerInfo_get0_signature;
++		CMS_RecipientInfo_kari_get0_alg;
++		CMS_RecipientInfo_kari_get0_reks;
++		CMS_RecipientInfo_kari_get0_orig_id;
++		CMS_RecipientInfo_kari_orig_id_cmp;
++		CMS_RecipientEncryptedKey_get0_id;
++		CMS_RecipientEncryptedKey_cert_cmp;
++		CMS_RecipientInfo_kari_set0_pkey;
++		CMS_RecipientInfo_kari_get0_ctx;
++		CMS_RecipientInfo_kari_decrypt;
++		CMS_SharedInfo_encode;
++		DH_compute_key_padded;
++		d2i_DHxparams;
++		i2d_DHxparams;
++		DH_get_1024_160;
++		DH_get_2048_224;
++		DH_get_2048_256;
++		DH_KDF_X9_42;
++		ECDH_KDF_X9_62;
++		ECDSA_METHOD_new;
++		ECDSA_METHOD_free;
++		ECDSA_METHOD_set_app_data;
++		ECDSA_METHOD_get_app_data;
++		ECDSA_METHOD_set_sign;
++		ECDSA_METHOD_set_sign_setup;
++		ECDSA_METHOD_set_verify;
++		ECDSA_METHOD_set_flags;
++		ECDSA_METHOD_set_name;
++		EVP_des_ede3_wrap;
++		EVP_aes_128_wrap;
++		EVP_aes_192_wrap;
++		EVP_aes_256_wrap;
++		EVP_aes_128_cbc_hmac_sha256;
++		EVP_aes_256_cbc_hmac_sha256;
++		CRYPTO_128_wrap;
++		CRYPTO_128_unwrap;
++		OCSP_REQ_CTX_nbio;
++		OCSP_REQ_CTX_new;
++		OCSP_set_max_response_length;
++		OCSP_REQ_CTX_i2d;
++		OCSP_REQ_CTX_nbio_d2i;
++		OCSP_REQ_CTX_get0_mem_bio;
++		OCSP_REQ_CTX_http;
++		RSA_padding_add_PKCS1_OAEP_mgf1;
++		RSA_padding_check_PKCS1_OAEP_mgf1;
++		RSA_OAEP_PARAMS_free;
++		RSA_OAEP_PARAMS_it;
++		RSA_OAEP_PARAMS_new;
++		SSL_get_sigalgs;
++		SSL_get_shared_sigalgs;
++		SSL_check_chain;
++		X509_chain_up_ref;
++		X509_http_nbio;
++		X509_CRL_http_nbio;
++		X509_REVOKED_dup;
++		i2d_re_X509_tbs;
++		X509_get0_signature;
++		X509_get_signature_nid;
++		X509_CRL_diff;
++		X509_chain_check_suiteb;
++		X509_CRL_check_suiteb;
++		X509_check_host;
++		X509_check_email;
++		X509_check_ip;
++		X509_check_ip_asc;
++		X509_STORE_set_lookup_crls_cb;
++		X509_STORE_CTX_get0_store;
++		X509_VERIFY_PARAM_set1_host;
++		X509_VERIFY_PARAM_add1_host;
++		X509_VERIFY_PARAM_set_hostflags;
++		X509_VERIFY_PARAM_get0_peername;
++		X509_VERIFY_PARAM_set1_email;
++		X509_VERIFY_PARAM_set1_ip;
++		X509_VERIFY_PARAM_set1_ip_asc;
++		X509_VERIFY_PARAM_get0_name;
++		X509_VERIFY_PARAM_get_count;
++		X509_VERIFY_PARAM_get0;
++		X509V3_EXT_free;
++		EC_GROUP_get_mont_data;
++		EC_curve_nid2nist;
++		EC_curve_nist2nid;
++		PEM_write_bio_DHxparams;
++		PEM_write_DHxparams;
++		SSL_CTX_add_client_custom_ext;
++		SSL_CTX_add_server_custom_ext;
++		SSL_extension_supported;
++		BUF_strnlen;
++		sk_deep_copy;
++		SSL_test_functions;
++
++	local:
++		*;
++};
++
++OPENSSL_1.0.2g {
++       global:
++               SRP_VBASE_get1_by_user;
++               SRP_user_pwd_free;
++} OPENSSL_1.0.2d;
++
+Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/openssl.ld
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/openssl.ld	2014-02-24 21:02:30.000000000 +0100
+@@ -0,0 +1,10 @@
++OPENSSL_1.0.2 {
++	global:
++		bind_engine;
++		v_check;
++		OPENSSL_init;
++		OPENSSL_finish;
++	local:
++		*;
++};
++
+Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/ccgost/openssl.ld
+===================================================================
+--- /dev/null	1970-01-01 00:00:00.000000000 +0000
++++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/ccgost/openssl.ld	2014-02-24 21:02:30.000000000 +0100
+@@ -0,0 +1,10 @@
++OPENSSL_1.0.2 {
++	global:
++		bind_engine;
++		v_check;
++		OPENSSL_init;
++		OPENSSL_finish;
++	local:
++		*;
++};
++
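The linker version scripts above pin the exported symbol sets: the long global list is closed by a local: * catch-all, the OPENSSL_1.0.2g node adds only the two SRP_* symbols on top of OPENSSL_1.0.2d, and the engine objects export just the loader entry points (bind_engine, v_check, OPENSSL_init, OPENSSL_finish). A minimal way to check the result on a built image is sketched below; the library path, soname glob and engine file name are assumptions about the installed layout, not taken from the patch.

    # Hedged sketch: inspect the versioned exports produced by the scripts.
    # Paths and names are examples only.
    readelf --dyn-syms /usr/lib/libssl.so.1.0.* | grep 'OPENSSL_1.0.2g'
    readelf --dyn-syms /usr/lib/ssl/engines/libgost.so | grep -E 'bind_engine|v_check'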
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/engines-install-in-libdir-ssl.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/engines-install-in-libdir-ssl.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/engines-install-in-libdir-ssl.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/engines-install-in-libdir-ssl.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/find.pl b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/find.pl
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/find.pl
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/find.pl
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/oe-ldflags.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/oe-ldflags.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/oe-ldflags.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/oe-ldflags.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/openssl-1.0.2a-x32-asm.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/openssl-1.0.2a-x32-asm.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/openssl-1.0.2a-x32-asm.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/openssl-1.0.2a-x32-asm.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/openssl-c_rehash.sh b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/openssl-c_rehash.sh
new file mode 100644
index 0000000..6620fdc
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/openssl-c_rehash.sh
@@ -0,0 +1,222 @@
+#!/bin/sh
+#
+# Ben Secrest <blsecres@gmail.com>
+#
+# sh c_rehash script, scan all files in a directory
+# and add symbolic links to their hash values.
+#
+# based on the c_rehash perl script distributed with openssl
+#
+# LICENSE: See OpenSSL license
+# ^^acceptable?^^
+#
+
+# default certificate location
+DIR=/etc/openssl
+
+# for filetype bitfield
+IS_CERT=$(( 1 << 0 ))
+IS_CRL=$(( 1 << 1 ))
+
+
+# check to see if a file is a certificate file or a CRL file
+# arguments:
+#       1. the filename to be scanned
+# returns:
+#       bitfield of file type; uses ${IS_CERT} and ${IS_CRL}
+#
+check_file()
+{
+    local IS_TYPE=0
+
+    # make IFS a newline so we can process grep output line by line
+    local OLDIFS=${IFS}
+    IFS=$( printf "\n" )
+
+    # XXX: could be more efficient to have two 'grep -m' but is -m portable?
+    for LINE in $( grep '^-----BEGIN .*-----' ${1} )
+    do
+	if echo ${LINE} \
+	    | grep -q -E '^-----BEGIN (X509 |TRUSTED )?CERTIFICATE-----'
+	then
+	    IS_TYPE=$(( ${IS_TYPE} | ${IS_CERT} ))
+
+	    if [ $(( ${IS_TYPE} & ${IS_CRL} )) -ne 0 ]
+	    then
+	    	break
+	    fi
+	elif echo ${LINE} | grep -q '^-----BEGIN X509 CRL-----'
+	then
+	    IS_TYPE=$(( ${IS_TYPE} | ${IS_CRL} ))
+
+	    if [ $(( ${IS_TYPE} & ${IS_CERT} )) -ne 0 ]
+	    then
+	    	break
+	    fi
+	fi
+    done
+
+    # restore IFS
+    IFS=${OLDIFS}
+
+    return ${IS_TYPE}
+}
+
+
+#
+# use openssl to fingerprint a file
+#    arguments:
+#	1. the filename to fingerprint
+#	2. the method to use (x509, crl)
+#    returns:
+#	none
+#    assumptions:
+#	user will capture output from last stage of pipeline
+#
+fingerprint()
+{
+    ${SSL_CMD} ${2} -fingerprint -noout -in ${1} | sed 's/^.*=//' | tr -d ':'
+}
+
+
+#
+# link_hash - create links to certificate files
+#    arguments:
+#       1. the filename to create a link for
+#	2. the type of certificate being linked (x509, crl)
+#    returns:
+#	0 on success, 1 otherwise
+#
+link_hash()
+{
+    local FINGERPRINT=$( fingerprint ${1} ${2} )
+    local HASH=$( ${SSL_CMD} ${2} -hash -noout -in ${1} )
+    local SUFFIX=0
+    local LINKFILE=''
+    local TAG=''
+
+    if [ ${2} = "crl" ]
+    then
+    	TAG='r'
+    fi
+
+    LINKFILE=${HASH}.${TAG}${SUFFIX}
+
+    while [ -f ${LINKFILE} ]
+    do
+	if [ ${FINGERPRINT} = $( fingerprint ${LINKFILE} ${2} ) ]
+	then
+	    echo "NOTE: Skipping duplicate file ${1}" >&2
+	    return 1
+	fi	
+
+	SUFFIX=$(( ${SUFFIX} + 1 ))
+	LINKFILE=${HASH}.${TAG}${SUFFIX}
+    done
+
+    echo "${3} => ${LINKFILE}"
+
+    # assume any system with a POSIX shell will either support symlinks or
+    # do something to handle this gracefully
+    ln -s ${3} ${LINKFILE}
+
+    return 0
+}
+
+
+# hash_dir create hash links in a given directory
+hash_dir()
+{
+    echo "Doing ${1}"
+
+    cd ${1}
+
+    ls -1 * 2>/dev/null | while read FILE
+    do
+        if echo ${FILE} | grep -q -E '^[[:xdigit:]]{8}\.r?[[:digit:]]+$' \
+	    	&& [ -h "${FILE}" ]
+        then
+            rm ${FILE}
+        fi
+    done
+
+    ls -1 *.pem *.cer *.crt *.crl 2>/dev/null | while read FILE
+    do
+	REAL_FILE=${FILE}
+	# if we run on build host then get to the real files in rootfs
+	if [ -n "${SYSROOT}" -a -h ${FILE} ]
+	then
+	    FILE=$( readlink ${FILE} )
+	    # check whether the symlink target is absolute (in other words, dangling on the build host)
+	    if [ "x/" = "x$( echo ${FILE} | cut -c1 -)" ]
+	    then
+		REAL_FILE=${SYSROOT}/${FILE}
+	    fi
+	fi
+
+	check_file ${REAL_FILE}
+        local FILE_TYPE=${?}
+	local TYPE_STR=''
+
+        if [ $(( ${FILE_TYPE} & ${IS_CERT} )) -ne 0 ]
+        then
+            TYPE_STR='x509'
+        elif [ $(( ${FILE_TYPE} & ${IS_CRL} )) -ne 0 ]
+        then
+            TYPE_STR='crl'
+        else
+            echo "NOTE: ${FILE} does not contain a certificate or CRL: skipping" >&2
+	    continue
+        fi
+
+	link_hash ${REAL_FILE} ${TYPE_STR} ${FILE}
+    done
+}
+
+
+# choose the name of an ssl application
+if [ -n "${OPENSSL}" ]
+then
+    SSL_CMD=$(which ${OPENSSL} 2>/dev/null)
+else
+    SSL_CMD=/usr/bin/openssl
+    OPENSSL=${SSL_CMD}
+    export OPENSSL
+fi
+
+# fix paths
+PATH=${PATH}:${DIR}/bin
+export PATH
+
+# confirm existence/executability of the ssl command
+if ! [ -x ${SSL_CMD} ]
+then
+    echo "${0}: rehashing skipped ('openssl' program not available)" >&2
+    exit 0
+fi
+
+# determine which directories to process
+old_IFS=$IFS
+if [ ${#} -gt 0 ]
+then
+    IFS=':'
+    DIRLIST=${*}
+elif [ -n "${SSL_CERT_DIR}" ]
+then
+    DIRLIST=$SSL_CERT_DIR
+else
+    DIRLIST=${DIR}/certs
+fi
+
+IFS=':'
+
+# process directories
+for CERT_DIR in ${DIRLIST}
+do
+    if [ -d ${CERT_DIR} -a -w ${CERT_DIR} ]
+    then
+        IFS=$old_IFS
+        hash_dir ${CERT_DIR}
+        IFS=':'
+    fi
+done
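The script above is a plain-sh replacement for OpenSSL's perl c_rehash: it classifies each PEM file as a certificate or CRL, fingerprints it to skip duplicates, and creates <hash>.N (or <hash>.rN for CRLs) symlinks so hashed lookups in the certificate directory work. A hedged usage sketch follows; the install name and certificate directories are examples, not mandated by the script itself.

    # Rehash the default directory (honours the OPENSSL and SSL_CERT_DIR variables):
    SSL_CERT_DIR=/etc/ssl/certs c_rehash
    # Or rehash explicit directories with a chosen openssl binary:
    OPENSSL=/usr/bin/openssl c_rehash /etc/ssl/certs /usr/local/share/ca-certs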
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/openssl-fix-des.pod-error.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/openssl-fix-des.pod-error.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/openssl-fix-des.pod-error.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/openssl-fix-des.pod-error.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/openssl-util-perlpath.pl-cwd.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/openssl-util-perlpath.pl-cwd.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/openssl-util-perlpath.pl-cwd.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/openssl-util-perlpath.pl-cwd.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/openssl_fix_for_x32.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/openssl_fix_for_x32.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/openssl_fix_for_x32.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/openssl_fix_for_x32.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/parallel.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/parallel.patch
new file mode 100644
index 0000000..e5413bf
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/parallel.patch
@@ -0,0 +1,370 @@
+From 7fb1192f112c1920bfd39f4185f34e9afff3cff2 Mon Sep 17 00:00:00 2001
+From: Ross Burton <ross.burton@intel.com>
+Date: Sat, 5 Mar 2016 00:12:02 +0000
+Subject: [PATCH 24/28] Fix the parallel races in the Makefiles.
+
+This patch was taken from the Gentoo packaging:
+https://gitweb.gentoo.org/repo/gentoo.git/plain/dev-libs/openssl/files/openssl-1.0.2g-parallel-build.patch
+
+Upstream-Status: Pending
+Signed-off-by: Ross Burton <ross.burton@intel.com>
+
+Refreshed for 1.0.2i
+Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
+
+---
+ Makefile.org          |  14 +-
+ Makefile.org.orig     |  10 +-
+ Makefile.shared       |   2 +
+ Makefile.shared.orig  | 655 ++++++++++++++++++++++++++++++++++++++++++++++++++
+ crypto/Makefile       |  10 +-
+ engines/Makefile      |   6 +-
+ engines/Makefile.orig | 338 ++++++++++++++++++++++++++
+ test/Makefile         |  92 +++----
+ test/Makefile.orig    |  88 ++++---
+ 9 files changed, 1108 insertions(+), 107 deletions(-)
+ create mode 100644 Makefile.shared.orig
+ create mode 100644 engines/Makefile.orig
+
+diff --git a/Makefile.org b/Makefile.org
+index 8e7936c..ed98d2a 100644
+--- a/Makefile.org
++++ b/Makefile.org
+@@ -283,17 +283,17 @@ build_libcrypto: build_crypto build_engines libcrypto.pc
+ build_libssl: build_ssl libssl.pc
+ 
+ build_crypto:
+-	@dir=crypto; target=all; $(BUILD_ONE_CMD)
++	+@dir=crypto; target=all; $(BUILD_ONE_CMD)
+ build_ssl: build_crypto
+-	@dir=ssl; target=all; $(BUILD_ONE_CMD)
++	+@dir=ssl; target=all; $(BUILD_ONE_CMD)
+ build_engines: build_crypto
+-	@dir=engines; target=all; $(BUILD_ONE_CMD)
++	+@dir=engines; target=all; $(BUILD_ONE_CMD)
+ build_apps: build_libs
+-	@dir=apps; target=all; $(BUILD_ONE_CMD)
++	+@dir=apps; target=all; $(BUILD_ONE_CMD)
+ build_tests: build_libs
+-	@dir=test; target=all; $(BUILD_ONE_CMD)
++	+@dir=test; target=all; $(BUILD_ONE_CMD)
+ build_tools: build_libs
+-	@dir=tools; target=all; $(BUILD_ONE_CMD)
++	+@dir=tools; target=all; $(BUILD_ONE_CMD)
+ 
+ all_testapps: build_libs build_testapps
+ build_testapps:
+@@ -565,7 +565,7 @@ install_sw:
+ 	(cp $$i $(INSTALL_PREFIX)$(INSTALLTOP)/include/openssl/$$i; \
+ 	chmod 644 $(INSTALL_PREFIX)$(INSTALLTOP)/include/openssl/$$i ); \
+ 	done;
+-	@set -e; target=install; $(RECURSIVE_BUILD_CMD)
++	+@set -e; target=install; $(RECURSIVE_BUILD_CMD)
+ 	@set -e; liblist="$(LIBS)"; for i in $$liblist ;\
+ 	do \
+ 		if [ -f "$$i" ]; then \
+diff --git a/Makefile.shared b/Makefile.shared
+index f6f92e7..8164186 100644
+--- a/Makefile.shared
++++ b/Makefile.shared
+@@ -105,6 +105,7 @@ LINK_SO=	\
+     SHAREDFLAGS="$(OE_LDFLAGS) $${SHAREDFLAGS:-$(CFLAGS) $(SHARED_LDFLAGS)}"; \
+     LIBPATH=`for x in $$LIBDEPS; do echo $$x; done | sed -e 's/^ *-L//;t' -e d | uniq`; \
+     LIBPATH=`echo $$LIBPATH | sed -e 's/ /:/g'`; \
++    [ -e $$SHLIB$$SHLIB_SOVER$$SHLIB_SUFFIX ] && exit 0; \
+     LD_LIBRARY_PATH=$$LIBPATH:$$LD_LIBRARY_PATH \
+     $${SHAREDCMD} $${SHAREDFLAGS} \
+ 	-o $$SHLIB$$SHLIB_SOVER$$SHLIB_SUFFIX \
+@@ -122,6 +123,7 @@ SYMLINK_SO=	\
+ 			done; \
+ 		fi; \
+ 		if [ -n "$$SHLIB_SOVER" ]; then \
++			[ -e "$$SHLIB$$SHLIB_SUFFIX" ] || \
+ 			( $(SET_X); rm -f $$SHLIB$$SHLIB_SUFFIX; \
+ 			  ln -s $$prev $$SHLIB$$SHLIB_SUFFIX ); \
+ 		fi; \
+diff --git a/crypto/Makefile b/crypto/Makefile
+index 17a87f8..29c2dcf 100644
+--- a/crypto/Makefile
++++ b/crypto/Makefile
+@@ -85,11 +85,11 @@ testapps:
+ 	@if [ -z "$(THIS)" ]; then $(MAKE) -f $(TOP)/Makefile reflect THIS=$@; fi
+ 
+ subdirs:
+-	@target=all; $(RECURSIVE_MAKE)
++	+@target=all; $(RECURSIVE_MAKE)
+ 
+ files:
+ 	$(PERL) $(TOP)/util/files.pl "CPUID_OBJ=$(CPUID_OBJ)" Makefile >> $(TOP)/MINFO
+-	@target=files; $(RECURSIVE_MAKE)
++	+@target=files; $(RECURSIVE_MAKE)
+ 
+ links:
+ 	@$(PERL) $(TOP)/util/mklink.pl ../include/openssl $(EXHEADER)
+@@ -100,7 +100,7 @@ links:
+ # lib: $(LIB): are splitted to avoid end-less loop
+ lib:	$(LIB)
+ 	@touch lib
+-$(LIB):	$(LIBOBJ)
++$(LIB):	$(LIBOBJ) | subdirs
+ 	$(AR) $(LIB) $(LIBOBJ)
+ 	test -z "$(FIPSLIBDIR)" || $(AR) $(LIB) $(FIPSLIBDIR)fipscanister.o
+ 	$(RANLIB) $(LIB) || echo Never mind.
+@@ -111,7 +111,7 @@ shared: buildinf.h lib subdirs
+ 	fi
+ 
+ libs:
+-	@target=lib; $(RECURSIVE_MAKE)
++	+@target=lib; $(RECURSIVE_MAKE)
+ 
+ install:
+ 	@[ -n "$(INSTALLTOP)" ] # should be set by top Makefile...
+@@ -120,7 +120,7 @@ install:
+ 	(cp $$i $(INSTALL_PREFIX)$(INSTALLTOP)/include/openssl/$$i; \
+ 	chmod 644 $(INSTALL_PREFIX)$(INSTALLTOP)/include/openssl/$$i ); \
+ 	done;
+-	@target=install; $(RECURSIVE_MAKE)
++	+@target=install; $(RECURSIVE_MAKE)
+ 
+ lint:
+ 	@target=lint; $(RECURSIVE_MAKE)
+diff --git a/engines/Makefile b/engines/Makefile
+index fe8e9ca..a43d21b 100644
+--- a/engines/Makefile
++++ b/engines/Makefile
+@@ -72,7 +72,7 @@ top:
+ 
+ all:	lib subdirs
+ 
+-lib:	$(LIBOBJ)
++lib:	$(LIBOBJ) | subdirs
+ 	@if [ -n "$(SHARED_LIBS)" ]; then \
+ 		set -e; \
+ 		for l in $(LIBNAMES); do \
+@@ -89,7 +89,7 @@ lib:	$(LIBOBJ)
+ 
+ subdirs:
+ 	echo $(EDIRS)
+-	@target=all; $(RECURSIVE_MAKE)
++	+@target=all; $(RECURSIVE_MAKE)
+ 
+ files:
+ 	$(PERL) $(TOP)/util/files.pl Makefile >> $(TOP)/MINFO
+@@ -128,7 +128,7 @@ install:
+ 			  mv -f $(INSTALL_PREFIX)$(INSTALLTOP)/$(LIBDIR)/ssl/engines/$$pfx$$l$$sfx.new $(INSTALL_PREFIX)$(INSTALLTOP)/$(LIBDIR)/ssl/engines/$$pfx$$l$$sfx ); \
+ 		done; \
+ 	fi
+-	@target=install; $(RECURSIVE_MAKE)
++	+@target=install; $(RECURSIVE_MAKE)
+ 
+ tags:
+ 	ctags $(SRC)
+diff --git a/test/Makefile b/test/Makefile
+index 40abd60..78d3788 100644
+--- a/test/Makefile
++++ b/test/Makefile
+@@ -145,7 +145,7 @@ install:
+ tags:
+ 	ctags $(SRC)
+ 
+-tests:	exe apps $(TESTS)
++tests:	exe $(TESTS)
+ 
+ apps:
+ 	@(cd ..; $(MAKE) DIRS=apps all)
+@@ -444,139 +444,139 @@ BUILD_CMD_STATIC=shlib_target=; \
+ 		link_app.$${shlib_target}
+ 
+ $(RSATEST)$(EXE_EXT): $(RSATEST).o $(DLIBCRYPTO)
+-	@target=$(RSATEST); $(BUILD_CMD)
++	+@target=$(RSATEST); $(BUILD_CMD)
+ 
+ $(BNTEST)$(EXE_EXT): $(BNTEST).o $(DLIBCRYPTO)
+-	@target=$(BNTEST); $(BUILD_CMD)
++	+@target=$(BNTEST); $(BUILD_CMD)
+ 
+ $(ECTEST)$(EXE_EXT): $(ECTEST).o $(DLIBCRYPTO)
+-	@target=$(ECTEST); $(BUILD_CMD)
++	+@target=$(ECTEST); $(BUILD_CMD)
+ 
+ $(EXPTEST)$(EXE_EXT): $(EXPTEST).o $(DLIBCRYPTO)
+-	@target=$(EXPTEST); $(BUILD_CMD)
++	+@target=$(EXPTEST); $(BUILD_CMD)
+ 
+ $(IDEATEST)$(EXE_EXT): $(IDEATEST).o $(DLIBCRYPTO)
+-	@target=$(IDEATEST); $(BUILD_CMD)
++	+@target=$(IDEATEST); $(BUILD_CMD)
+ 
+ $(MD2TEST)$(EXE_EXT): $(MD2TEST).o $(DLIBCRYPTO)
+-	@target=$(MD2TEST); $(BUILD_CMD)
++	+@target=$(MD2TEST); $(BUILD_CMD)
+ 
+ $(SHATEST)$(EXE_EXT): $(SHATEST).o $(DLIBCRYPTO)
+-	@target=$(SHATEST); $(BUILD_CMD)
++	+@target=$(SHATEST); $(BUILD_CMD)
+ 
+ $(SHA1TEST)$(EXE_EXT): $(SHA1TEST).o $(DLIBCRYPTO)
+-	@target=$(SHA1TEST); $(BUILD_CMD)
++	+@target=$(SHA1TEST); $(BUILD_CMD)
+ 
+ $(SHA256TEST)$(EXE_EXT): $(SHA256TEST).o $(DLIBCRYPTO)
+-	@target=$(SHA256TEST); $(BUILD_CMD)
++	+@target=$(SHA256TEST); $(BUILD_CMD)
+ 
+ $(SHA512TEST)$(EXE_EXT): $(SHA512TEST).o $(DLIBCRYPTO)
+-	@target=$(SHA512TEST); $(BUILD_CMD)
++	+@target=$(SHA512TEST); $(BUILD_CMD)
+ 
+ $(RMDTEST)$(EXE_EXT): $(RMDTEST).o $(DLIBCRYPTO)
+-	@target=$(RMDTEST); $(BUILD_CMD)
++	+@target=$(RMDTEST); $(BUILD_CMD)
+ 
+ $(MDC2TEST)$(EXE_EXT): $(MDC2TEST).o $(DLIBCRYPTO)
+-	@target=$(MDC2TEST); $(BUILD_CMD)
++	+@target=$(MDC2TEST); $(BUILD_CMD)
+ 
+ $(MD4TEST)$(EXE_EXT): $(MD4TEST).o $(DLIBCRYPTO)
+-	@target=$(MD4TEST); $(BUILD_CMD)
++	+@target=$(MD4TEST); $(BUILD_CMD)
+ 
+ $(MD5TEST)$(EXE_EXT): $(MD5TEST).o $(DLIBCRYPTO)
+-	@target=$(MD5TEST); $(BUILD_CMD)
++	+@target=$(MD5TEST); $(BUILD_CMD)
+ 
+ $(HMACTEST)$(EXE_EXT): $(HMACTEST).o $(DLIBCRYPTO)
+-	@target=$(HMACTEST); $(BUILD_CMD)
++	+@target=$(HMACTEST); $(BUILD_CMD)
+ 
+ $(WPTEST)$(EXE_EXT): $(WPTEST).o $(DLIBCRYPTO)
+-	@target=$(WPTEST); $(BUILD_CMD)
++	+@target=$(WPTEST); $(BUILD_CMD)
+ 
+ $(RC2TEST)$(EXE_EXT): $(RC2TEST).o $(DLIBCRYPTO)
+-	@target=$(RC2TEST); $(BUILD_CMD)
++	+@target=$(RC2TEST); $(BUILD_CMD)
+ 
+ $(BFTEST)$(EXE_EXT): $(BFTEST).o $(DLIBCRYPTO)
+-	@target=$(BFTEST); $(BUILD_CMD)
++	+@target=$(BFTEST); $(BUILD_CMD)
+ 
+ $(CASTTEST)$(EXE_EXT): $(CASTTEST).o $(DLIBCRYPTO)
+-	@target=$(CASTTEST); $(BUILD_CMD)
++	+@target=$(CASTTEST); $(BUILD_CMD)
+ 
+ $(RC4TEST)$(EXE_EXT): $(RC4TEST).o $(DLIBCRYPTO)
+-	@target=$(RC4TEST); $(BUILD_CMD)
++	+@target=$(RC4TEST); $(BUILD_CMD)
+ 
+ $(RC5TEST)$(EXE_EXT): $(RC5TEST).o $(DLIBCRYPTO)
+-	@target=$(RC5TEST); $(BUILD_CMD)
++	+@target=$(RC5TEST); $(BUILD_CMD)
+ 
+ $(DESTEST)$(EXE_EXT): $(DESTEST).o $(DLIBCRYPTO)
+-	@target=$(DESTEST); $(BUILD_CMD)
++	+@target=$(DESTEST); $(BUILD_CMD)
+ 
+ $(RANDTEST)$(EXE_EXT): $(RANDTEST).o $(DLIBCRYPTO)
+-	@target=$(RANDTEST); $(BUILD_CMD)
++	+@target=$(RANDTEST); $(BUILD_CMD)
+ 
+ $(DHTEST)$(EXE_EXT): $(DHTEST).o $(DLIBCRYPTO)
+-	@target=$(DHTEST); $(BUILD_CMD)
++	+@target=$(DHTEST); $(BUILD_CMD)
+ 
+ $(DSATEST)$(EXE_EXT): $(DSATEST).o $(DLIBCRYPTO)
+-	@target=$(DSATEST); $(BUILD_CMD)
++	+@target=$(DSATEST); $(BUILD_CMD)
+ 
+ $(METHTEST)$(EXE_EXT): $(METHTEST).o $(DLIBCRYPTO)
+-	@target=$(METHTEST); $(BUILD_CMD)
++	+@target=$(METHTEST); $(BUILD_CMD)
+ 
+ $(SSLTEST)$(EXE_EXT): $(SSLTEST).o $(DLIBSSL) $(DLIBCRYPTO)
+-	@target=$(SSLTEST); $(FIPS_BUILD_CMD)
++	+@target=$(SSLTEST); $(FIPS_BUILD_CMD)
+ 
+ $(ENGINETEST)$(EXE_EXT): $(ENGINETEST).o $(DLIBCRYPTO)
+-	@target=$(ENGINETEST); $(BUILD_CMD)
++	+@target=$(ENGINETEST); $(BUILD_CMD)
+ 
+ $(EVPTEST)$(EXE_EXT): $(EVPTEST).o $(DLIBCRYPTO)
+-	@target=$(EVPTEST); $(BUILD_CMD)
++	+@target=$(EVPTEST); $(BUILD_CMD)
+ 
+ $(EVPEXTRATEST)$(EXE_EXT): $(EVPEXTRATEST).o $(DLIBCRYPTO)
+-	@target=$(EVPEXTRATEST); $(BUILD_CMD)
++	+@target=$(EVPEXTRATEST); $(BUILD_CMD)
+ 
+ $(ECDSATEST)$(EXE_EXT): $(ECDSATEST).o $(DLIBCRYPTO)
+-	@target=$(ECDSATEST); $(BUILD_CMD)
++	+@target=$(ECDSATEST); $(BUILD_CMD)
+ 
+ $(ECDHTEST)$(EXE_EXT): $(ECDHTEST).o $(DLIBCRYPTO)
+-	@target=$(ECDHTEST); $(BUILD_CMD)
++	+@target=$(ECDHTEST); $(BUILD_CMD)
+ 
+ $(IGETEST)$(EXE_EXT): $(IGETEST).o $(DLIBCRYPTO)
+-	@target=$(IGETEST); $(BUILD_CMD)
++	+@target=$(IGETEST); $(BUILD_CMD)
+ 
+ $(JPAKETEST)$(EXE_EXT): $(JPAKETEST).o $(DLIBCRYPTO)
+-	@target=$(JPAKETEST); $(BUILD_CMD)
++	+@target=$(JPAKETEST); $(BUILD_CMD)
+ 
+ $(ASN1TEST)$(EXE_EXT): $(ASN1TEST).o $(DLIBCRYPTO)
+-	@target=$(ASN1TEST); $(BUILD_CMD)
++	+@target=$(ASN1TEST); $(BUILD_CMD)
+ 
+ $(SRPTEST)$(EXE_EXT): $(SRPTEST).o $(DLIBCRYPTO)
+-	@target=$(SRPTEST); $(BUILD_CMD)
++	+@target=$(SRPTEST); $(BUILD_CMD)
+ 
+ $(V3NAMETEST)$(EXE_EXT): $(V3NAMETEST).o $(DLIBCRYPTO)
+-	@target=$(V3NAMETEST); $(BUILD_CMD)
++	+@target=$(V3NAMETEST); $(BUILD_CMD)
+ 
+ $(HEARTBEATTEST)$(EXE_EXT): $(HEARTBEATTEST).o $(DLIBCRYPTO)
+-	@target=$(HEARTBEATTEST); $(BUILD_CMD_STATIC)
++	+@target=$(HEARTBEATTEST); $(BUILD_CMD_STATIC)
+ 
+ $(CONSTTIMETEST)$(EXE_EXT): $(CONSTTIMETEST).o
+-	@target=$(CONSTTIMETEST) $(BUILD_CMD)
++	+@target=$(CONSTTIMETEST) $(BUILD_CMD)
+ 
+ $(VERIFYEXTRATEST)$(EXE_EXT): $(VERIFYEXTRATEST).o
+-	@target=$(VERIFYEXTRATEST) $(BUILD_CMD)
++	+@target=$(VERIFYEXTRATEST) $(BUILD_CMD)
+ 
+ $(CLIENTHELLOTEST)$(EXE_EXT): $(CLIENTHELLOTEST).o
+-	@target=$(CLIENTHELLOTEST) $(BUILD_CMD)
++	+@target=$(CLIENTHELLOTEST) $(BUILD_CMD)
+ 
+ $(BADDTLSTEST)$(EXE_EXT): $(BADDTLSTEST).o
+-	@target=$(BADDTLSTEST) $(BUILD_CMD)
++	+@target=$(BADDTLSTEST) $(BUILD_CMD)
+ 
+ $(FATALERRTEST)$(EXE_EXT): $(FATALERRTEST).o ssltestlib.o $(DLIBSSL) $(DLIBCRYPTO)
+ 	@target=$(FATALERRTEST); exobj=ssltestlib.o; $(BUILD_CMD)
+ 
+ $(SSLV2CONFTEST)$(EXE_EXT): $(SSLV2CONFTEST).o
+-	@target=$(SSLV2CONFTEST) $(BUILD_CMD)
++	+@target=$(SSLV2CONFTEST) $(BUILD_CMD)
+ 
+ $(DTLSTEST)$(EXE_EXT): $(DTLSTEST).o ssltestlib.o $(DLIBSSL) $(DLIBCRYPTO)
+-	@target=$(DTLSTEST); exobj=ssltestlib.o; $(BUILD_CMD)
++	+@target=$(DTLSTEST); exobj=ssltestlib.o; $(BUILD_CMD)
+ 
+ #$(AESTEST).o: $(AESTEST).c
+ #	$(CC) -c $(CFLAGS) -DINTERMEDIATE_VALUE_KAT -DTRACE_KAT_MCT $(AESTEST).c
+@@ -589,7 +589,7 @@ $(DTLSTEST)$(EXE_EXT): $(DTLSTEST).o ssltestlib.o $(DLIBSSL) $(DLIBCRYPTO)
+ #	fi
+ 
+ dummytest$(EXE_EXT): dummytest.o $(DLIBCRYPTO)
+-	@target=dummytest; $(BUILD_CMD)
++	+@target=dummytest; $(BUILD_CMD)
+ 
+ # DO NOT DELETE THIS LINE -- make depend depends on it.
+ 
+-- 
+2.15.1
+
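The patch above addresses two distinct parallel-make problems: recursive sub-make invocations are prefixed with '+' so GNU make shares its jobserver with them under -jN, and the library and symlink steps gain guards (the [ -e ... ] checks, lib depending on subdirs) so two jobs do not race on the same output. With it applied, a parallel build of a configured 1.0.2 tree is expected to work; a minimal sketch, with an arbitrary job count:

    # Hedged sketch: exercising the fix on an openssl-1.0.2 source tree.
    ./config shared
    make depend
    make -j"$(nproc)"   # the races here are what the '+' prefixes and link guards prevent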
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/ptest-deps.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/ptest-deps.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/ptest-deps.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/ptest-deps.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/ptest_makefile_deps.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/ptest_makefile_deps.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/ptest_makefile_deps.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/ptest_makefile_deps.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/run-ptest b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/run-ptest
new file mode 100755
index 0000000..3b20fce
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/run-ptest
@@ -0,0 +1,2 @@
+#!/bin/sh
+make -k runtest
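run-ptest is the on-target entry point for the packaged test suite; it drives the runtest target with -k so a single failing test does not stop the remaining ones. On an image with ptest enabled it is normally launched through ptest-runner rather than by hand:

    # Assumes the openssl-ptest package and ptest-runner are installed on the target.
    ptest-runner openssl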
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/shared-libs.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/shared-libs.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/shared-libs.patch
rename to import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl-1.0.2n/shared-libs.patch
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl.inc b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl.inc
deleted file mode 100644
index 8f2a797..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl.inc
+++ /dev/null
@@ -1,252 +0,0 @@
-SUMMARY = "Secure Socket Layer"
-DESCRIPTION = "Secure Socket Layer (SSL) binary and related cryptographic tools."
-HOMEPAGE = "http://www.openssl.org/"
-BUGTRACKER = "http://www.openssl.org/news/vulnerabilities.html"
-SECTION = "libs/network"
-
-# "openssl | SSLeay" dual license
-LICENSE = "openssl"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=f9a8f968107345e0b75aa8c2ecaa7ec8"
-
-DEPENDS = "makedepend-native hostperl-runtime-native"
-DEPENDS_append_class-target = " openssl-native"
-
-SRC_URI = "http://www.openssl.org/source/openssl-${PV}.tar.gz \
-          "
-S = "${WORKDIR}/openssl-${PV}"
-
-PACKAGECONFIG[perl] = ",,,"
-
-TERMIO_libc-musl = "-DTERMIOS"
-TERMIO ?= "-DTERMIO"
-# Avoid binaries being marked as requiring an executable stack since it 
-# doesn't(which causes and this causes issues with SELinux
-CFLAG = "${@base_conditional('SITEINFO_ENDIANNESS', 'le', '-DL_ENDIAN', '-DB_ENDIAN', d)} \
-	 ${TERMIO} ${CFLAGS} -Wall -Wa,--noexecstack"
-
-export DIRS = "crypto ssl apps"
-export EX_LIBS = "-lgcc -ldl"
-export AS = "${CC} -c"
-
-inherit pkgconfig siteinfo multilib_header ptest relative_symlinks
-
-PACKAGES =+ "libcrypto libssl ${PN}-misc openssl-conf"
-FILES_libcrypto = "${libdir}/libcrypto${SOLIBS}"
-FILES_libssl = "${libdir}/libssl${SOLIBS}"
-FILES_${PN} =+ " ${libdir}/ssl/*"
-FILES_${PN}-misc = "${libdir}/ssl/misc"
-RDEPENDS_${PN}-misc = "${@bb.utils.filter('PACKAGECONFIG', 'perl', d)}"
-
-PROVIDES += "openssl10"
-
-# Add the openssl.cnf file to the openssl-conf package.  Make the libcrypto
-# package RRECOMMENDS on this package.  This will enable the configuration
-# file to be installed for both the base openssl package and the libcrypto
-# package since the base openssl package depends on the libcrypto package.
-FILES_openssl-conf = "${sysconfdir}/ssl/openssl.cnf"
-CONFFILES_openssl-conf = "${sysconfdir}/ssl/openssl.cnf"
-RRECOMMENDS_libcrypto += "openssl-conf"
-RDEPENDS_${PN}-ptest += "${PN}-misc make perl perl-module-filehandle bc"
-
-# Remove this to enable SSLv3. SSLv3 is defaulted to disabled due to the POODLE
-# vulnerability
-EXTRA_OECONF = " -no-ssl3"
-
-do_configure_prepend_darwin () {
-	sed -i -e '/version-script=openssl\.ld/d' Configure
-}
-
-do_configure () {
-	cd util
-	perl perlpath.pl ${STAGING_BINDIR_NATIVE}
-	cd ..
-	ln -sf apps/openssl.pod crypto/crypto.pod ssl/ssl.pod doc/
-
-	os=${HOST_OS}
-	case $os in
-	linux-uclibc |\
-	linux-uclibceabi |\
-	linux-gnueabi |\
-	linux-uclibcspe |\
-	linux-gnuspe |\
-	linux-musl*)
-		os=linux
-		;;
-		*)
-		;;
-	esac
-	target="$os-${HOST_ARCH}"
-	case $target in
-	linux-arm)
-		target=linux-armv4
-		;;
-	linux-armeb)
-		target=linux-elf-armeb
-		;;
-	linux-aarch64*)
-		target=linux-aarch64
-		;;
-	linux-sh3)
-		target=debian-sh3
-		;;
-	linux-sh4)
-		target=debian-sh4
-		;;
-	linux-i486)
-		target=debian-i386-i486
-		;;
-	linux-i586 | linux-viac3)
-		target=debian-i386-i586
-		;;
-	linux-i686)
-		target=debian-i386-i686/cmov
-		;;
-	linux-gnux32-x86_64)
-		target=linux-x32
-		;;
-	linux-gnu64-x86_64)
-		target=linux-x86_64
-		;;
-	linux-gnun32-mips*el)
-		target=debian-mipsn32el
-		;;
-	linux-gnun32-mips*)
-		target=debian-mipsn32
-		;;
-	linux-mips*64*el)
-		target=debian-mips64el
-		;;
-	linux-mips*64*)
-		target=debian-mips64
-		;;
-	linux-mips*el)
-		target=debian-mipsel
-		;;
-	linux-mips*)
-		target=debian-mips
-		;;
-	linux-microblaze*|linux-nios2*)
-		target=linux-generic32
-		;;
-	linux-powerpc)
-		target=linux-ppc
-		;;
-	linux-powerpc64)
-		target=linux-ppc64
-		;;
-	linux-supersparc)
-		target=linux-sparcv8
-		;;
-	linux-sparc)
-		target=linux-sparcv8
-		;;
-	darwin-i386)
-		target=darwin-i386-cc
-		;;
-	esac
-	# inject machine-specific flags
-	sed -i -e "s|^\(\"$target\",\s*\"[^:]\+\):\([^:]\+\)|\1:${CFLAG}|g" Configure
-        useprefix=${prefix}
-        if [ "x$useprefix" = "x" ]; then
-                useprefix=/
-        fi        
-	perl ./Configure ${EXTRA_OECONF} shared --prefix=$useprefix --openssldir=${libdir}/ssl --libdir=`basename ${libdir}` $target
-}
-
-do_compile_prepend_class-target () {
-    sed -i 's/\((OPENSSL=\)".*"/\1"openssl"/' Makefile
-}
-
-do_compile () {
-	oe_runmake depend
-	oe_runmake
-}
-
-do_compile_ptest () {
-	# build dependencies for test directory too
-	export DIRS="$DIRS test"
-	oe_runmake depend
-	oe_runmake buildtest
-}
-
-do_install () {
-	# Create ${D}/${prefix} to fix parallel issues
-	mkdir -p ${D}/${prefix}/
-
-	oe_runmake INSTALL_PREFIX="${D}" MANDIR="${mandir}" install
-
-	oe_libinstall -so libcrypto ${D}${libdir}
-	oe_libinstall -so libssl ${D}${libdir}
-
-	install -d ${D}${includedir}
-	cp --dereference -R include/openssl ${D}${includedir}
-
-	install -Dm 0755 ${WORKDIR}/openssl-c_rehash.sh ${D}${bindir}/c_rehash
-	sed -i -e 's,/etc/openssl,${sysconfdir}/ssl,g' ${D}${bindir}/c_rehash
-
-	oe_multilib_header openssl/opensslconf.h
-	if [ "${@bb.utils.filter('PACKAGECONFIG', 'perl', d)}" ]; then
-		sed -i -e '1s,.*,#!${bindir}/env perl,' ${D}${libdir}/ssl/misc/CA.pl
-		sed -i -e '1s,.*,#!${bindir}/env perl,' ${D}${libdir}/ssl/misc/tsget
-	else
-		rm -f ${D}${libdir}/ssl/misc/CA.pl ${D}${libdir}/ssl/misc/tsget
-	fi
-
-	# Create SSL structure
-	install -d ${D}${sysconfdir}/ssl/
-	mv ${D}${libdir}/ssl/openssl.cnf \
-	   ${D}${libdir}/ssl/certs \
-	   ${D}${libdir}/ssl/private \
-	   \
-	   ${D}${sysconfdir}/ssl/
-	ln -sf ${sysconfdir}/ssl/certs ${D}${libdir}/ssl/certs
-	ln -sf ${sysconfdir}/ssl/private ${D}${libdir}/ssl/private
-	ln -sf ${sysconfdir}/ssl/openssl.cnf ${D}${libdir}/ssl/openssl.cnf
-}
-
-do_install_ptest () {
-	cp -r -L Makefile.org Makefile test ${D}${PTEST_PATH}
-
-        # Replace the path to native perl with the path to target perl
-        sed -i 's,^PERL=.*,PERL=${bindir}/perl,' ${D}${PTEST_PATH}/Makefile
-
-	cp Configure config e_os.h ${D}${PTEST_PATH}
-	cp -r -L include ${D}${PTEST_PATH}
-	ln -sf ${libdir}/libcrypto.a ${D}${PTEST_PATH}
-	ln -sf ${libdir}/libssl.a ${D}${PTEST_PATH}
-	mkdir -p ${D}${PTEST_PATH}/crypto
-	cp crypto/constant_time_locl.h ${D}${PTEST_PATH}/crypto
-	cp -r certs ${D}${PTEST_PATH}
-	mkdir -p ${D}${PTEST_PATH}/apps
-	ln -sf ${libdir}/ssl/misc/CA.sh  ${D}${PTEST_PATH}/apps
-	ln -sf ${sysconfdir}/ssl/openssl.cnf ${D}${PTEST_PATH}/apps
-	ln -sf ${bindir}/openssl         ${D}${PTEST_PATH}/apps
-	cp apps/server.pem              ${D}${PTEST_PATH}/apps
-	cp apps/server2.pem             ${D}${PTEST_PATH}/apps
-	mkdir -p ${D}${PTEST_PATH}/util
-	install util/opensslwrap.sh    ${D}${PTEST_PATH}/util
-	install util/shlib_wrap.sh     ${D}${PTEST_PATH}/util
-	# Time stamps are relevant for "make alltests", otherwise
-	# make may try to recompile binaries. Not only must the
-	# binary files be newer than the sources, they also must
-	# be more recent than the header files in /usr/include.
-	#
-	# Using "cp -a" is not sufficient, because do_install
-	# does not preserve the original time stamps.
-	#
-	# So instead of using the original file stamps, we set
-	# the current time for all files. Binaries will get
-	# modified again later when stripping them, but that's okay.
-	touch ${D}${PTEST_PATH}
-	find ${D}${PTEST_PATH} -type f -print0 | xargs --verbose -0 touch -r ${D}${PTEST_PATH}
-}
-
-do_install_append_class-native() {
-	create_wrapper ${D}${bindir}/openssl \
-	    OPENSSL_CONF=${libdir}/ssl/openssl.cnf \
-	    SSL_CERT_DIR=${libdir}/ssl/certs \
-	    SSL_CERT_FILE=${libdir}/ssl/cert.pem \
-	    OPENSSL_ENGINES=${libdir}/ssl/engines
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/0001-Remove-test-that-requires-running-as-non-root.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/0001-Remove-test-that-requires-running-as-non-root.patch
new file mode 100644
index 0000000..736bb39
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/0001-Remove-test-that-requires-running-as-non-root.patch
@@ -0,0 +1,49 @@
+From 3fdb1e2a16ea405c6731447a8994f222808ef7e6 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Fri, 7 Apr 2017 18:01:52 +0300
+Subject: [PATCH] Remove test that requires running as non-root
+
+Upstream-Status: Inappropriate [oe-core specific]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+ test/recipes/40-test_rehash.t | 17 +----------------
+ 1 file changed, 1 insertion(+), 16 deletions(-)
+
+diff --git a/test/recipes/40-test_rehash.t b/test/recipes/40-test_rehash.t
+index f902c23..c7567c1 100644
+--- a/test/recipes/40-test_rehash.t
++++ b/test/recipes/40-test_rehash.t
+@@ -23,7 +23,7 @@ setup("test_rehash");
+ plan skip_all => "test_rehash is not available on this platform"
+     unless run(app(["openssl", "rehash", "-help"]));
+ 
+-plan tests => 5;
++plan tests => 3;
+ 
+ indir "rehash.$$" => sub {
+     prepare();
+@@ -42,21 +42,6 @@ indir "rehash.$$" => sub {
+        'Testing rehash operations on empty directory');
+ }, create => 1, cleanup => 1;
+ 
+-indir "rehash.$$" => sub {
+-    prepare();
+-    chmod 0500, curdir();
+-  SKIP: {
+-      if (!ok(!open(FOO, ">unwritable.txt"),
+-              "Testing that we aren't running as a privileged user, such as root")) {
+-          close FOO;
+-          skip "It's pointless to run the next test as root", 1;
+-      }
+-      isnt(run(app(["openssl", "rehash", curdir()])), 1,
+-           'Testing rehash operations on readonly directory');
+-    }
+-    chmod 0700, curdir();       # make it writable again, so cleanup works
+-}, create => 1, cleanup => 1;
+-
+ sub prepare {
+     my @pemsourcefiles = sort glob(srctop_file('test', "*.pem"));
+     my @destfiles = ();
+-- 
+2.11.0
+
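The subtest removed above relies on chmod 0500 making the scratch directory unwritable, which does not hold for root, and oe-core builds commonly run the suite as (pseudo-)root. A quick shell illustration of why the probe is meaningless in that environment, assuming an ordinary writable filesystem:

    # As root, permission bits do not block the write, so the test's
    # "am I unprivileged?" check always fails.
    mkdir rehash-demo && chmod 0500 rehash-demo
    touch rehash-demo/unwritable.txt && echo "directory is writable anyway"
    rm -r rehash-demo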
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/0001-Take-linking-flags-from-LDFLAGS-env-var.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/0001-Take-linking-flags-from-LDFLAGS-env-var.patch
new file mode 100644
index 0000000..6ce4e47
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/0001-Take-linking-flags-from-LDFLAGS-env-var.patch
@@ -0,0 +1,43 @@
+From 08face4353d80111973aba9c1304c92158cfad0e Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Tue, 28 Mar 2017 16:40:12 +0300
+Subject: [PATCH] Take linking flags from LDFLAGS env var
+
+This fixes "No GNU_HASH in the elf binary" issues.
+
+Upstream-Status: Inappropriate [oe-core specific]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+ Configurations/unix-Makefile.tmpl | 2 +-
+ Configure                         | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/Configurations/unix-Makefile.tmpl b/Configurations/unix-Makefile.tmpl
+index c029817..43b769b 100644
+--- a/Configurations/unix-Makefile.tmpl
++++ b/Configurations/unix-Makefile.tmpl
+@@ -173,7 +173,7 @@ CROSS_COMPILE= {- $config{cross_compile_prefix} -}
+ CC= $(CROSS_COMPILE){- $target{cc} -}
+ CFLAGS={- our $cflags2 = join(" ",(map { "-D".$_} @{$target{defines}}, @{$config{defines}}),"-DOPENSSLDIR=\"\\\"\$(OPENSSLDIR)\\\"\"","-DENGINESDIR=\"\\\"\$(ENGINESDIR)\\\"\"") -} {- $target{cflags} -} {- $config{cflags} -}
+ CFLAGS_Q={- $cflags2 =~ s|([\\"])|\\$1|g; $cflags2 -} {- $config{cflags} -}
+-LDFLAGS= {- $target{lflags} -}
++LDFLAGS= {- $target{lflags}." ".$ENV{'LDFLAGS'} -}
+ PLIB_LDFLAGS= {- $target{plib_lflags} -}
+ EX_LIBS= {- $target{ex_libs} -} {- $config{ex_libs} -}
+ LIB_CFLAGS={- $target{shared_cflag} || "" -}
+diff --git a/Configure b/Configure
+index aee7cc3..274d236 100755
+--- a/Configure
++++ b/Configure
+@@ -979,7 +979,7 @@ $config{build_file} = $target{build_file};
+ $config{defines} = [];
+ $config{cflags} = "";
+ $config{ex_libs} = "";
+-$config{shared_ldflag} = "";
++$config{shared_ldflag} = $ENV{'LDFLAGS'};
+ 
+ # Make sure build_scheme is consistent.
+ $target{build_scheme} = [ $target{build_scheme} ]
+-- 
+2.11.0
+
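The patch above threads the environment's LDFLAGS into both the generated Makefile and shared_ldflag, which is what lets distro-mandated linker options (such as the GNU-style hash section the oe-core QA check looks for) reach the final links. Outside of bitbake the effect can be checked directly; the flags below are examples only:

    export LDFLAGS="-Wl,--hash-style=gnu -Wl,--as-needed"
    ./config shared
    make -j"$(nproc)"
    readelf -d libssl.so | grep GNU_HASH   # expect a GNU_HASH dynamic tag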
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/0001-aes-asm-aes-armv4-bsaes-armv7-.pl-make-it-work-with-.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/0001-aes-asm-aes-armv4-bsaes-armv7-.pl-make-it-work-with-.patch
new file mode 100644
index 0000000..bb0a168
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/0001-aes-asm-aes-armv4-bsaes-armv7-.pl-make-it-work-with-.patch
@@ -0,0 +1,88 @@
+From bcc096a50811bf0f0c4fd34b2993fed7a7015972 Mon Sep 17 00:00:00 2001
+From: Andy Polyakov <appro@openssl.org>
+Date: Fri, 3 Nov 2017 23:30:01 +0100
+Subject: [PATCH] aes/asm/{aes-armv4|bsaes-armv7}.pl: make it work with
+ binutils-2.29.
+
+It's not clear if it's a feature or bug, but binutils-2.29[.1]
+interprets 'adr' instruction with Thumb2 code reference differently,
+in a way that affects calculation of addresses of constants' tables.
+
+Upstream-Status: Backport
+
+Reviewed-by: Tim Hudson <tjh@openssl.org>
+Reviewed-by: Bernd Edlinger <bernd.edlinger@hotmail.de>
+Signed-off-by: Stefan Agner <stefan.agner@toradex.com>
+(Merged from https://github.com/openssl/openssl/pull/4669)
+
+(cherry picked from commit b82acc3c1a7f304c9df31841753a0fa76b5b3cda)
+---
+ crypto/aes/asm/aes-armv4.pl   | 6 +++---
+ crypto/aes/asm/bsaes-armv7.pl | 6 +++---
+ 2 files changed, 6 insertions(+), 6 deletions(-)
+
+diff --git a/crypto/aes/asm/aes-armv4.pl b/crypto/aes/asm/aes-armv4.pl
+index 16d79aae53..c6474b8aad 100644
+--- a/crypto/aes/asm/aes-armv4.pl
++++ b/crypto/aes/asm/aes-armv4.pl
+@@ -200,7 +200,7 @@ AES_encrypt:
+ #ifndef	__thumb2__
+ 	sub	r3,pc,#8		@ AES_encrypt
+ #else
+-	adr	r3,AES_encrypt
++	adr	r3,.
+ #endif
+ 	stmdb   sp!,{r1,r4-r12,lr}
+ #ifdef	__APPLE__
+@@ -450,7 +450,7 @@ _armv4_AES_set_encrypt_key:
+ #ifndef	__thumb2__
+ 	sub	r3,pc,#8		@ AES_set_encrypt_key
+ #else
+-	adr	r3,AES_set_encrypt_key
++	adr	r3,.
+ #endif
+ 	teq	r0,#0
+ #ifdef	__thumb2__
+@@ -976,7 +976,7 @@ AES_decrypt:
+ #ifndef	__thumb2__
+ 	sub	r3,pc,#8		@ AES_decrypt
+ #else
+-	adr	r3,AES_decrypt
++	adr	r3,.
+ #endif
+ 	stmdb   sp!,{r1,r4-r12,lr}
+ #ifdef	__APPLE__
+diff --git a/crypto/aes/asm/bsaes-armv7.pl b/crypto/aes/asm/bsaes-armv7.pl
+index 9f288660ef..a27bb4a179 100644
+--- a/crypto/aes/asm/bsaes-armv7.pl
++++ b/crypto/aes/asm/bsaes-armv7.pl
+@@ -744,7 +744,7 @@ $code.=<<___;
+ .type	_bsaes_decrypt8,%function
+ .align	4
+ _bsaes_decrypt8:
+-	adr	$const,_bsaes_decrypt8
++	adr	$const,.
+ 	vldmia	$key!, {@XMM[9]}		@ round 0 key
+ #ifdef	__APPLE__
+ 	adr	$const,.LM0ISR
+@@ -843,7 +843,7 @@ _bsaes_const:
+ .type	_bsaes_encrypt8,%function
+ .align	4
+ _bsaes_encrypt8:
+-	adr	$const,_bsaes_encrypt8
++	adr	$const,.
+ 	vldmia	$key!, {@XMM[9]}		@ round 0 key
+ #ifdef	__APPLE__
+ 	adr	$const,.LM0SR
+@@ -951,7 +951,7 @@ $code.=<<___;
+ .type	_bsaes_key_convert,%function
+ .align	4
+ _bsaes_key_convert:
+-	adr	$const,_bsaes_key_convert
++	adr	$const,.
+ 	vld1.8	{@XMM[7]},  [$inp]!		@ load round 0 key
+ #ifdef	__APPLE__
+ 	adr	$const,.LM0
+-- 
+2.15.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/Makefiles-ptest.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/Makefiles-ptest.patch
deleted file mode 100644
index 249446a..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/Makefiles-ptest.patch
+++ /dev/null
@@ -1,77 +0,0 @@
-Add 'buildtest' and 'runtest' targets to Makefile, to build and run tests
-cross-compiled.
-
-Signed-off-by: Anders Roxell <anders.roxell@enea.com>
-Signed-off-by: Maxin B. John <maxin.john@enea.com>
-Upstream-Status: Pending
----
-Index: openssl-1.0.2/Makefile.org
-===================================================================
---- openssl-1.0.2.orig/Makefile.org
-+++ openssl-1.0.2/Makefile.org
-@@ -451,8 +451,16 @@ rehash.time: certs apps
- test:   tests
- 
- tests: rehash
-+	$(MAKE) buildtest
-+	$(MAKE) runtest
-+
-+buildtest:
-+	@(cd test && \
-+	$(CLEARENV) && $(MAKE) -e $(BUILDENV) TOP=.. TESTS='$(TESTS)' OPENSSL_DEBUG_MEMORY=on OPENSSL_CONF=../apps/openssl.cnf exe apps);
-+
-+runtest:
- 	@(cd test && echo "testing..." && \
--	$(CLEARENV) && $(MAKE) -e $(BUILDENV) TOP=.. TESTS='$(TESTS)' OPENSSL_DEBUG_MEMORY=on OPENSSL_CONF=../apps/openssl.cnf tests );
-+	$(CLEARENV) && $(MAKE) -e $(BUILDENV) TOP=.. TESTS='$(TESTS)' OPENSSL_DEBUG_MEMORY=on OPENSSL_CONF=../apps/openssl.cnf alltests );
- 	OPENSSL_CONF=apps/openssl.cnf util/opensslwrap.sh version -a
- 
- report:
-Index: openssl-1.0.2/test/Makefile
-===================================================================
---- openssl-1.0.2.orig/test/Makefile
-+++ openssl-1.0.2/test/Makefile
-@@ -137,7 +137,7 @@ tests:	exe apps $(TESTS)
- apps:
- 	@(cd ..; $(MAKE) DIRS=apps all)
- 
--alltests: \
-+all-tests= \
- 	test_des test_idea test_sha test_md4 test_md5 test_hmac \
- 	test_md2 test_mdc2 test_wp \
- 	test_rmd test_rc2 test_rc4 test_rc5 test_bf test_cast test_aes \
-@@ -148,6 +148,11 @@ alltests: \
- 	test_jpake test_srp test_cms test_ocsp test_v3name test_heartbeat \
- 	test_constant_time
- 
-+alltests:
-+	@(for i in $(all-tests); do \
-+	( $(MAKE) $$i && echo "PASS: $$i" ) || echo "FAIL: $$i"; \
-+	done)
-+
- test_evp: $(EVPTEST)$(EXE_EXT) evptests.txt
- 	../util/shlib_wrap.sh ./$(EVPTEST) evptests.txt
- 
-@@ -213,7 +218,7 @@ test_x509: ../apps/openssl$(EXE_EXT) tx5
- 	echo test second x509v3 certificate
- 	sh ./tx509 v3-cert2.pem 2>/dev/null
- 
--test_rsa: $(RSATEST)$(EXE_EXT) ../apps/openssl$(EXE_EXT) trsa testrsa.pem
-+test_rsa: ../apps/openssl$(EXE_EXT) trsa testrsa.pem
- 	@sh ./trsa 2>/dev/null
- 	../util/shlib_wrap.sh ./$(RSATEST)
- 
-@@ -313,11 +318,11 @@ test_tsa: ../apps/openssl$(EXE_EXT) test
- 	  sh ./testtsa; \
- 	fi
- 
--test_ige: $(IGETEST)$(EXE_EXT)
-+test_ige:
- 	@echo "Test IGE mode"
- 	../util/shlib_wrap.sh ./$(IGETEST)
- 
--test_jpake: $(JPAKETEST)$(EXE_EXT)
-+test_jpake:
- 	@echo "Test JPAKE"
- 	../util/shlib_wrap.sh ./$(JPAKETEST)
- 
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/configure-musl-target.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/configure-musl-target.patch
deleted file mode 100644
index 613dc7b..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/configure-musl-target.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-Add musl triplet support
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Index: openssl-1.0.2a/Configure
-===================================================================
---- openssl-1.0.2a.orig/Configure
-+++ openssl-1.0.2a/Configure
-@@ -431,7 +431,7 @@ my %table=(
- #
- #       ./Configure linux-armv4 -march=armv6 -D__ARM_MAX_ARCH__=8
- #
--"linux-armv4",	"gcc: -O3 -Wall::-D_REENTRANT::-ldl:BN_LLONG RC4_CHAR RC4_CHUNK DES_INT DES_UNROLL BF_PTR:${armv4_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+"linux-armv4", "gcc: -O3 -Wall::-D_REENTRANT::-ldl:BN_LLONG RC4_CHAR RC4_CHUNK DES_INT DES_UNROLL BF_PTR:${armv4_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
- "linux-aarch64","gcc: -O3 -Wall::-D_REENTRANT::-ldl:SIXTY_FOUR_BIT_LONG RC4_CHAR RC4_CHUNK DES_INT DES_UNROLL BF_PTR:${aarch64_asm}:linux64:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
- # Configure script adds minimally required -march for assembly support,
- # if no -march was specified at command line. mips32 and mips64 below
-@@ -504,6 +504,8 @@ my %table=(
- "linux-gnueabi-armeb","$ENV{'CC'}:-DB_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
- "linux-uclibceabi-arm","$ENV{'CC'}:-DL_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
- "linux-uclibceabi-armeb","$ENV{'CC'}:-DB_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+"linux-musleabi-arm","$ENV{'CC'}:-DL_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+"linux-musleabi-armeb","$ENV{'CC'}:-DB_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
- 
- "linux-avr32","$ENV{'CC'}:-O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).",
- 
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/configure-targets.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/configure-targets.patch
deleted file mode 100644
index 691e74a..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/configure-targets.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-Upstream-Status: Inappropriate [embedded specific]
-
-The number of colons are important :)
-
-
----
- Configure |   16 ++++++++++++++++
- 1 file changed, 16 insertions(+)
-
-Index: openssl-1.0.2a/Configure
-===================================================================
---- openssl-1.0.2a.orig/Configure
-+++ openssl-1.0.2a/Configure
-@@ -443,6 +443,23 @@ my %table=(
- "linux-alpha-ccc","ccc:-fast -readonly_strings -DL_ENDIAN::-D_REENTRANT:::SIXTY_FOUR_BIT_LONG RC4_CHUNK DES_INT DES_PTR DES_RISC1 DES_UNROLL:${alpha_asm}",
- "linux-alpha+bwx-ccc","ccc:-fast -readonly_strings -DL_ENDIAN::-D_REENTRANT:::SIXTY_FOUR_BIT_LONG RC4_CHAR RC4_CHUNK DES_INT DES_PTR DES_RISC1 DES_UNROLL:${alpha_asm}",
- 
-+ 
-+# Linux on ARM
-+"linux-elf-arm","$ENV{'CC'}:-DL_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+"linux-elf-armeb","$ENV{'CC'}:-DB_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+"linux-gnueabi-arm","$ENV{'CC'}:-DL_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+"linux-gnueabi-armeb","$ENV{'CC'}:-DB_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+"linux-uclibceabi-arm","$ENV{'CC'}:-DL_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+"linux-uclibceabi-armeb","$ENV{'CC'}:-DB_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+
-+"linux-avr32","$ENV{'CC'}:-O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG DES_RISC1:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).",
-+
-+#### Linux on MIPS/MIPS64
-+"linux-mips","$ENV{'CC'}:-DB_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG RC2_CHAR RC4_INDEX DES_INT DES_UNROLL DES_RISC2:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+"linux-mips64","$ENV{'CC'}:-DB_ENDIAN -mabi=64 -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:SIXTY_FOUR_BIT_LONG RC2_CHAR RC4_INDEX DES_INT DES_UNROLL DES_RISC2:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+"linux-mips64el","$ENV{'CC'}:-DL_ENDIAN -mabi=64 -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:SIXTY_FOUR_BIT_LONG RC2_CHAR RC4_INDEX DES_INT DES_UNROLL DES_RISC2:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+"linux-mipsel","$ENV{'CC'}:-DL_ENDIAN -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG RC2_CHAR RC4_INDEX DES_INT DES_UNROLL DES_RISC2:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
-+
- # Android: linux-* but without pointers to headers and libs.
- "android","gcc:-mandroid -I\$(ANDROID_DEV)/include -B\$(ANDROID_DEV)/lib -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG RC4_CHAR RC4_CHUNK DES_INT DES_UNROLL BF_PTR:${no_asm}:dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
- "android-x86","gcc:-mandroid -I\$(ANDROID_DEV)/include -B\$(ANDROID_DEV)/lib -O3 -fomit-frame-pointer -Wall::-D_REENTRANT::-ldl:BN_LLONG ${x86_gcc_des} ${x86_gcc_opts}:".eval{my $asm=${x86_elf_asm};$asm=~s/:elf/:android/;$asm}.":dlfcn:linux-shared:-fPIC::.so.\$(SHLIB_MAJOR).\$(SHLIB_MINOR)",
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/version-script.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/version-script.patch
deleted file mode 100644
index a249180..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian/version-script.patch
+++ /dev/null
@@ -1,4663 +0,0 @@
-Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/Configure
-===================================================================
---- openssl-1.0.2~beta1.obsolete.0.0498436515490575.orig/Configure	2014-02-24 21:02:30.000000000 +0100
-+++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/Configure	2014-02-24 21:02:30.000000000 +0100
-@@ -1651,6 +1651,8 @@
- 		}
- 	}
- 
-+$shared_ldflag .= " -Wl,--version-script=openssl.ld";
-+
- open(IN,'<Makefile.org') || die "unable to read Makefile.org:$!\n";
- unlink("$Makefile.new") || die "unable to remove old $Makefile.new:$!\n" if -e "$Makefile.new";
- open(OUT,">$Makefile.new") || die "unable to create $Makefile.new:$!\n";
-Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/openssl.ld
-===================================================================
---- /dev/null	1970-01-01 00:00:00.000000000 +0000
-+++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/openssl.ld	2014-02-24 22:19:08.601827266 +0100
-@@ -0,0 +1,4615 @@
-+OPENSSL_1.0.0 {
-+	global:
-+		BIO_f_ssl;
-+		BIO_new_buffer_ssl_connect;
-+		BIO_new_ssl;
-+		BIO_new_ssl_connect;
-+		BIO_proxy_ssl_copy_session_id;
-+		BIO_ssl_copy_session_id;
-+		BIO_ssl_shutdown;
-+		d2i_SSL_SESSION;
-+		DTLSv1_client_method;
-+		DTLSv1_method;
-+		DTLSv1_server_method;
-+		ERR_load_SSL_strings;
-+		i2d_SSL_SESSION;
-+		kssl_build_principal_2;
-+		kssl_cget_tkt;
-+		kssl_check_authent;
-+		kssl_ctx_free;
-+		kssl_ctx_new;
-+		kssl_ctx_setkey;
-+		kssl_ctx_setprinc;
-+		kssl_ctx_setstring;
-+		kssl_ctx_show;
-+		kssl_err_set;
-+		kssl_krb5_free_data_contents;
-+		kssl_sget_tkt;
-+		kssl_skip_confound;
-+		kssl_validate_times;
-+		PEM_read_bio_SSL_SESSION;
-+		PEM_read_SSL_SESSION;
-+		PEM_write_bio_SSL_SESSION;
-+		PEM_write_SSL_SESSION;
-+		SSL_accept;
-+		SSL_add_client_CA;
-+		SSL_add_dir_cert_subjects_to_stack;
-+		SSL_add_dir_cert_subjs_to_stk;
-+		SSL_add_file_cert_subjects_to_stack;
-+		SSL_add_file_cert_subjs_to_stk;
-+		SSL_alert_desc_string;
-+		SSL_alert_desc_string_long;
-+		SSL_alert_type_string;
-+		SSL_alert_type_string_long;
-+		SSL_callback_ctrl;
-+		SSL_check_private_key;
-+		SSL_CIPHER_description;
-+		SSL_CIPHER_get_bits;
-+		SSL_CIPHER_get_name;
-+		SSL_CIPHER_get_version;
-+		SSL_clear;
-+		SSL_COMP_add_compression_method;
-+		SSL_COMP_get_compression_methods;
-+		SSL_COMP_get_compress_methods;
-+		SSL_COMP_get_name;
-+		SSL_connect;
-+		SSL_copy_session_id;
-+		SSL_ctrl;
-+		SSL_CTX_add_client_CA;
-+		SSL_CTX_add_session;
-+		SSL_CTX_callback_ctrl;
-+		SSL_CTX_check_private_key;
-+		SSL_CTX_ctrl;
-+		SSL_CTX_flush_sessions;
-+		SSL_CTX_free;
-+		SSL_CTX_get_cert_store;
-+		SSL_CTX_get_client_CA_list;
-+		SSL_CTX_get_client_cert_cb;
-+		SSL_CTX_get_ex_data;
-+		SSL_CTX_get_ex_new_index;
-+		SSL_CTX_get_info_callback;
-+		SSL_CTX_get_quiet_shutdown;
-+		SSL_CTX_get_timeout;
-+		SSL_CTX_get_verify_callback;
-+		SSL_CTX_get_verify_depth;
-+		SSL_CTX_get_verify_mode;
-+		SSL_CTX_load_verify_locations;
-+		SSL_CTX_new;
-+		SSL_CTX_remove_session;
-+		SSL_CTX_sess_get_get_cb;
-+		SSL_CTX_sess_get_new_cb;
-+		SSL_CTX_sess_get_remove_cb;
-+		SSL_CTX_sessions;
-+		SSL_CTX_sess_set_get_cb;
-+		SSL_CTX_sess_set_new_cb;
-+		SSL_CTX_sess_set_remove_cb;
-+		SSL_CTX_set1_param;
-+		SSL_CTX_set_cert_store;
-+		SSL_CTX_set_cert_verify_callback;
-+		SSL_CTX_set_cert_verify_cb;
-+		SSL_CTX_set_cipher_list;
-+		SSL_CTX_set_client_CA_list;
-+		SSL_CTX_set_client_cert_cb;
-+		SSL_CTX_set_client_cert_engine;
-+		SSL_CTX_set_cookie_generate_cb;
-+		SSL_CTX_set_cookie_verify_cb;
-+		SSL_CTX_set_default_passwd_cb;
-+		SSL_CTX_set_default_passwd_cb_userdata;
-+		SSL_CTX_set_default_verify_paths;
-+		SSL_CTX_set_def_passwd_cb_ud;
-+		SSL_CTX_set_def_verify_paths;
-+		SSL_CTX_set_ex_data;
-+		SSL_CTX_set_generate_session_id;
-+		SSL_CTX_set_info_callback;
-+		SSL_CTX_set_msg_callback;
-+		SSL_CTX_set_psk_client_callback;
-+		SSL_CTX_set_psk_server_callback;
-+		SSL_CTX_set_purpose;
-+		SSL_CTX_set_quiet_shutdown;
-+		SSL_CTX_set_session_id_context;
-+		SSL_CTX_set_ssl_version;
-+		SSL_CTX_set_timeout;
-+		SSL_CTX_set_tmp_dh_callback;
-+		SSL_CTX_set_tmp_ecdh_callback;
-+		SSL_CTX_set_tmp_rsa_callback;
-+		SSL_CTX_set_trust;
-+		SSL_CTX_set_verify;
-+		SSL_CTX_set_verify_depth;
-+		SSL_CTX_use_cert_chain_file;
-+		SSL_CTX_use_certificate;
-+		SSL_CTX_use_certificate_ASN1;
-+		SSL_CTX_use_certificate_chain_file;
-+		SSL_CTX_use_certificate_file;
-+		SSL_CTX_use_PrivateKey;
-+		SSL_CTX_use_PrivateKey_ASN1;
-+		SSL_CTX_use_PrivateKey_file;
-+		SSL_CTX_use_psk_identity_hint;
-+		SSL_CTX_use_RSAPrivateKey;
-+		SSL_CTX_use_RSAPrivateKey_ASN1;
-+		SSL_CTX_use_RSAPrivateKey_file;
-+		SSL_do_handshake;
-+		SSL_dup;
-+		SSL_dup_CA_list;
-+		SSLeay_add_ssl_algorithms;
-+		SSL_free;
-+		SSL_get1_session;
-+		SSL_get_certificate;
-+		SSL_get_cipher_list;
-+		SSL_get_ciphers;
-+		SSL_get_client_CA_list;
-+		SSL_get_current_cipher;
-+		SSL_get_current_compression;
-+		SSL_get_current_expansion;
-+		SSL_get_default_timeout;
-+		SSL_get_error;
-+		SSL_get_ex_data;
-+		SSL_get_ex_data_X509_STORE_CTX_idx;
-+		SSL_get_ex_d_X509_STORE_CTX_idx;
-+		SSL_get_ex_new_index;
-+		SSL_get_fd;
-+		SSL_get_finished;
-+		SSL_get_info_callback;
-+		SSL_get_peer_cert_chain;
-+		SSL_get_peer_certificate;
-+		SSL_get_peer_finished;
-+		SSL_get_privatekey;
-+		SSL_get_psk_identity;
-+		SSL_get_psk_identity_hint;
-+		SSL_get_quiet_shutdown;
-+		SSL_get_rbio;
-+		SSL_get_read_ahead;
-+		SSL_get_rfd;
-+		SSL_get_servername;
-+		SSL_get_servername_type;
-+		SSL_get_session;
-+		SSL_get_shared_ciphers;
-+		SSL_get_shutdown;
-+		SSL_get_SSL_CTX;
-+		SSL_get_ssl_method;
-+		SSL_get_verify_callback;
-+		SSL_get_verify_depth;
-+		SSL_get_verify_mode;
-+		SSL_get_verify_result;
-+		SSL_get_version;
-+		SSL_get_wbio;
-+		SSL_get_wfd;
-+		SSL_has_matching_session_id;
-+		SSL_library_init;
-+		SSL_load_client_CA_file;
-+		SSL_load_error_strings;
-+		SSL_new;
-+		SSL_peek;
-+		SSL_pending;
-+		SSL_read;
-+		SSL_renegotiate;
-+		SSL_renegotiate_pending;
-+		SSL_rstate_string;
-+		SSL_rstate_string_long;
-+		SSL_SESSION_cmp;
-+		SSL_SESSION_free;
-+		SSL_SESSION_get_ex_data;
-+		SSL_SESSION_get_ex_new_index;
-+		SSL_SESSION_get_id;
-+		SSL_SESSION_get_time;
-+		SSL_SESSION_get_timeout;
-+		SSL_SESSION_hash;
-+		SSL_SESSION_new;
-+		SSL_SESSION_print;
-+		SSL_SESSION_print_fp;
-+		SSL_SESSION_set_ex_data;
-+		SSL_SESSION_set_time;
-+		SSL_SESSION_set_timeout;
-+		SSL_set1_param;
-+		SSL_set_accept_state;
-+		SSL_set_bio;
-+		SSL_set_cipher_list;
-+		SSL_set_client_CA_list;
-+		SSL_set_connect_state;
-+		SSL_set_ex_data;
-+		SSL_set_fd;
-+		SSL_set_generate_session_id;
-+		SSL_set_info_callback;
-+		SSL_set_msg_callback;
-+		SSL_set_psk_client_callback;
-+		SSL_set_psk_server_callback;
-+		SSL_set_purpose;
-+		SSL_set_quiet_shutdown;
-+		SSL_set_read_ahead;
-+		SSL_set_rfd;
-+		SSL_set_session;
-+		SSL_set_session_id_context;
-+		SSL_set_session_secret_cb;
-+		SSL_set_session_ticket_ext;
-+		SSL_set_session_ticket_ext_cb;
-+		SSL_set_shutdown;
-+		SSL_set_SSL_CTX;
-+		SSL_set_ssl_method;
-+		SSL_set_tmp_dh_callback;
-+		SSL_set_tmp_ecdh_callback;
-+		SSL_set_tmp_rsa_callback;
-+		SSL_set_trust;
-+		SSL_set_verify;
-+		SSL_set_verify_depth;
-+		SSL_set_verify_result;
-+		SSL_set_wfd;
-+		SSL_shutdown;
-+		SSL_state;
-+		SSL_state_string;
-+		SSL_state_string_long;
-+		SSL_use_certificate;
-+		SSL_use_certificate_ASN1;
-+		SSL_use_certificate_file;
-+		SSL_use_PrivateKey;
-+		SSL_use_PrivateKey_ASN1;
-+		SSL_use_PrivateKey_file;
-+		SSL_use_psk_identity_hint;
-+		SSL_use_RSAPrivateKey;
-+		SSL_use_RSAPrivateKey_ASN1;
-+		SSL_use_RSAPrivateKey_file;
-+		SSLv23_client_method;
-+		SSLv23_method;
-+		SSLv23_server_method;
-+		SSLv2_client_method;
-+		SSLv2_method;
-+		SSLv2_server_method;
-+		SSLv3_client_method;
-+		SSLv3_method;
-+		SSLv3_server_method;
-+		SSL_version;
-+		SSL_want;
-+		SSL_write;
-+		TLSv1_client_method;
-+		TLSv1_method;
-+		TLSv1_server_method;
-+
-+
-+		SSLeay;
-+		SSLeay_version;
-+		ASN1_BIT_STRING_asn1_meth;
-+		ASN1_HEADER_free;
-+		ASN1_HEADER_new;
-+		ASN1_IA5STRING_asn1_meth;
-+		ASN1_INTEGER_get;
-+		ASN1_INTEGER_set;
-+		ASN1_INTEGER_to_BN;
-+		ASN1_OBJECT_create;
-+		ASN1_OBJECT_free;
-+		ASN1_OBJECT_new;
-+		ASN1_PRINTABLE_type;
-+		ASN1_STRING_cmp;
-+		ASN1_STRING_dup;
-+		ASN1_STRING_free;
-+		ASN1_STRING_new;
-+		ASN1_STRING_print;
-+		ASN1_STRING_set;
-+		ASN1_STRING_type_new;
-+		ASN1_TYPE_free;
-+		ASN1_TYPE_new;
-+		ASN1_UNIVERSALSTRING_to_string;
-+		ASN1_UTCTIME_check;
-+		ASN1_UTCTIME_print;
-+		ASN1_UTCTIME_set;
-+		ASN1_check_infinite_end;
-+		ASN1_d2i_bio;
-+		ASN1_d2i_fp;
-+		ASN1_digest;
-+		ASN1_dup;
-+		ASN1_get_object;
-+		ASN1_i2d_bio;
-+		ASN1_i2d_fp;
-+		ASN1_object_size;
-+		ASN1_parse;
-+		ASN1_put_object;
-+		ASN1_sign;
-+		ASN1_verify;
-+		BF_cbc_encrypt;
-+		BF_cfb64_encrypt;
-+		BF_ecb_encrypt;
-+		BF_encrypt;
-+		BF_ofb64_encrypt;
-+		BF_options;
-+		BF_set_key;
-+		BIO_CONNECT_free;
-+		BIO_CONNECT_new;
-+		BIO_accept;
-+		BIO_ctrl;
-+		BIO_int_ctrl;
-+		BIO_debug_callback;
-+		BIO_dump;
-+		BIO_dup_chain;
-+		BIO_f_base64;
-+		BIO_f_buffer;
-+		BIO_f_cipher;
-+		BIO_f_md;
-+		BIO_f_null;
-+		BIO_f_proxy_server;
-+		BIO_fd_non_fatal_error;
-+		BIO_fd_should_retry;
-+		BIO_find_type;
-+		BIO_free;
-+		BIO_free_all;
-+		BIO_get_accept_socket;
-+		BIO_get_filter_bio;
-+		BIO_get_host_ip;
-+		BIO_get_port;
-+		BIO_get_retry_BIO;
-+		BIO_get_retry_reason;
-+		BIO_gethostbyname;
-+		BIO_gets;
-+		BIO_new;
-+		BIO_new_accept;
-+		BIO_new_connect;
-+		BIO_new_fd;
-+		BIO_new_file;
-+		BIO_new_fp;
-+		BIO_new_socket;
-+		BIO_pop;
-+		BIO_printf;
-+		BIO_push;
-+		BIO_puts;
-+		BIO_read;
-+		BIO_s_accept;
-+		BIO_s_connect;
-+		BIO_s_fd;
-+		BIO_s_file;
-+		BIO_s_mem;
-+		BIO_s_null;
-+		BIO_s_proxy_client;
-+		BIO_s_socket;
-+		BIO_set;
-+		BIO_set_cipher;
-+		BIO_set_tcp_ndelay;
-+		BIO_sock_cleanup;
-+		BIO_sock_error;
-+		BIO_sock_init;
-+		BIO_sock_non_fatal_error;
-+		BIO_sock_should_retry;
-+		BIO_socket_ioctl;
-+		BIO_write;
-+		BN_CTX_free;
-+		BN_CTX_new;
-+		BN_MONT_CTX_free;
-+		BN_MONT_CTX_new;
-+		BN_MONT_CTX_set;
-+		BN_add;
-+		BN_add_word;
-+		BN_hex2bn;
-+		BN_bin2bn;
-+		BN_bn2hex;
-+		BN_bn2bin;
-+		BN_clear;
-+		BN_clear_bit;
-+		BN_clear_free;
-+		BN_cmp;
-+		BN_copy;
-+		BN_div;
-+		BN_div_word;
-+		BN_dup;
-+		BN_free;
-+		BN_from_montgomery;
-+		BN_gcd;
-+		BN_generate_prime;
-+		BN_get_word;
-+		BN_is_bit_set;
-+		BN_is_prime;
-+		BN_lshift;
-+		BN_lshift1;
-+		BN_mask_bits;
-+		BN_mod;
-+		BN_mod_exp;
-+		BN_mod_exp_mont;
-+		BN_mod_exp_simple;
-+		BN_mod_inverse;
-+		BN_mod_mul;
-+		BN_mod_mul_montgomery;
-+		BN_mod_word;
-+		BN_mul;
-+		BN_new;
-+		BN_num_bits;
-+		BN_num_bits_word;
-+		BN_options;
-+		BN_print;
-+		BN_print_fp;
-+		BN_rand;
-+		BN_reciprocal;
-+		BN_rshift;
-+		BN_rshift1;
-+		BN_set_bit;
-+		BN_set_word;
-+		BN_sqr;
-+		BN_sub;
-+		BN_to_ASN1_INTEGER;
-+		BN_ucmp;
-+		BN_value_one;
-+		BUF_MEM_free;
-+		BUF_MEM_grow;
-+		BUF_MEM_new;
-+		BUF_strdup;
-+		CONF_free;
-+		CONF_get_number;
-+		CONF_get_section;
-+		CONF_get_string;
-+		CONF_load;
-+		CRYPTO_add_lock;
-+		CRYPTO_dbg_free;
-+		CRYPTO_dbg_malloc;
-+		CRYPTO_dbg_realloc;
-+		CRYPTO_dbg_remalloc;
-+		CRYPTO_free;
-+		CRYPTO_get_add_lock_callback;
-+		CRYPTO_get_id_callback;
-+		CRYPTO_get_lock_name;
-+		CRYPTO_get_locking_callback;
-+		CRYPTO_get_mem_functions;
-+		CRYPTO_lock;
-+		CRYPTO_malloc;
-+		CRYPTO_mem_ctrl;
-+		CRYPTO_mem_leaks;
-+		CRYPTO_mem_leaks_cb;
-+		CRYPTO_mem_leaks_fp;
-+		CRYPTO_realloc;
-+		CRYPTO_remalloc;
-+		CRYPTO_set_add_lock_callback;
-+		CRYPTO_set_id_callback;
-+		CRYPTO_set_locking_callback;
-+		CRYPTO_set_mem_functions;
-+		CRYPTO_thread_id;
-+		DH_check;
-+		DH_compute_key;
-+		DH_free;
-+		DH_generate_key;
-+		DH_generate_parameters;
-+		DH_new;
-+		DH_size;
-+		DHparams_print;
-+		DHparams_print_fp;
-+		DSA_free;
-+		DSA_generate_key;
-+		DSA_generate_parameters;
-+		DSA_is_prime;
-+		DSA_new;
-+		DSA_print;
-+		DSA_print_fp;
-+		DSA_sign;
-+		DSA_sign_setup;
-+		DSA_size;
-+		DSA_verify;
-+		DSAparams_print;
-+		DSAparams_print_fp;
-+		ERR_clear_error;
-+		ERR_error_string;
-+		ERR_free_strings;
-+		ERR_func_error_string;
-+		ERR_get_err_state_table;
-+		ERR_get_error;
-+		ERR_get_error_line;
-+		ERR_get_state;
-+		ERR_get_string_table;
-+		ERR_lib_error_string;
-+		ERR_load_ASN1_strings;
-+		ERR_load_BIO_strings;
-+		ERR_load_BN_strings;
-+		ERR_load_BUF_strings;
-+		ERR_load_CONF_strings;
-+		ERR_load_DH_strings;
-+		ERR_load_DSA_strings;
-+		ERR_load_ERR_strings;
-+		ERR_load_EVP_strings;
-+		ERR_load_OBJ_strings;
-+		ERR_load_PEM_strings;
-+		ERR_load_PROXY_strings;
-+		ERR_load_RSA_strings;
-+		ERR_load_X509_strings;
-+		ERR_load_crypto_strings;
-+		ERR_load_strings;
-+		ERR_peek_error;
-+		ERR_peek_error_line;
-+		ERR_print_errors;
-+		ERR_print_errors_fp;
-+		ERR_put_error;
-+		ERR_reason_error_string;
-+		ERR_remove_state;
-+		EVP_BytesToKey;
-+		EVP_CIPHER_CTX_cleanup;
-+		EVP_CipherFinal;
-+		EVP_CipherInit;
-+		EVP_CipherUpdate;
-+		EVP_DecodeBlock;
-+		EVP_DecodeFinal;
-+		EVP_DecodeInit;
-+		EVP_DecodeUpdate;
-+		EVP_DecryptFinal;
-+		EVP_DecryptInit;
-+		EVP_DecryptUpdate;
-+		EVP_DigestFinal;
-+		EVP_DigestInit;
-+		EVP_DigestUpdate;
-+		EVP_EncodeBlock;
-+		EVP_EncodeFinal;
-+		EVP_EncodeInit;
-+		EVP_EncodeUpdate;
-+		EVP_EncryptFinal;
-+		EVP_EncryptInit;
-+		EVP_EncryptUpdate;
-+		EVP_OpenFinal;
-+		EVP_OpenInit;
-+		EVP_PKEY_assign;
-+		EVP_PKEY_copy_parameters;
-+		EVP_PKEY_free;
-+		EVP_PKEY_missing_parameters;
-+		EVP_PKEY_new;
-+		EVP_PKEY_save_parameters;
-+		EVP_PKEY_size;
-+		EVP_PKEY_type;
-+		EVP_SealFinal;
-+		EVP_SealInit;
-+		EVP_SignFinal;
-+		EVP_VerifyFinal;
-+		EVP_add_alias;
-+		EVP_add_cipher;
-+		EVP_add_digest;
-+		EVP_bf_cbc;
-+		EVP_bf_cfb64;
-+		EVP_bf_ecb;
-+		EVP_bf_ofb;
-+		EVP_cleanup;
-+		EVP_des_cbc;
-+		EVP_des_cfb64;
-+		EVP_des_ecb;
-+		EVP_des_ede;
-+		EVP_des_ede3;
-+		EVP_des_ede3_cbc;
-+		EVP_des_ede3_cfb64;
-+		EVP_des_ede3_ofb;
-+		EVP_des_ede_cbc;
-+		EVP_des_ede_cfb64;
-+		EVP_des_ede_ofb;
-+		EVP_des_ofb;
-+		EVP_desx_cbc;
-+		EVP_dss;
-+		EVP_dss1;
-+		EVP_enc_null;
-+		EVP_get_cipherbyname;
-+		EVP_get_digestbyname;
-+		EVP_get_pw_prompt;
-+		EVP_idea_cbc;
-+		EVP_idea_cfb64;
-+		EVP_idea_ecb;
-+		EVP_idea_ofb;
-+		EVP_md2;
-+		EVP_md5;
-+		EVP_md_null;
-+		EVP_rc2_cbc;
-+		EVP_rc2_cfb64;
-+		EVP_rc2_ecb;
-+		EVP_rc2_ofb;
-+		EVP_rc4;
-+		EVP_read_pw_string;
-+		EVP_set_pw_prompt;
-+		EVP_sha;
-+		EVP_sha1;
-+		MD2;
-+		MD2_Final;
-+		MD2_Init;
-+		MD2_Update;
-+		MD2_options;
-+		MD5;
-+		MD5_Final;
-+		MD5_Init;
-+		MD5_Update;
-+		MDC2;
-+		MDC2_Final;
-+		MDC2_Init;
-+		MDC2_Update;
-+		NETSCAPE_SPKAC_free;
-+		NETSCAPE_SPKAC_new;
-+		NETSCAPE_SPKI_free;
-+		NETSCAPE_SPKI_new;
-+		NETSCAPE_SPKI_sign;
-+		NETSCAPE_SPKI_verify;
-+		OBJ_add_object;
-+		OBJ_bsearch;
-+		OBJ_cleanup;
-+		OBJ_cmp;
-+		OBJ_create;
-+		OBJ_dup;
-+		OBJ_ln2nid;
-+		OBJ_new_nid;
-+		OBJ_nid2ln;
-+		OBJ_nid2obj;
-+		OBJ_nid2sn;
-+		OBJ_obj2nid;
-+		OBJ_sn2nid;
-+		OBJ_txt2nid;
-+		PEM_ASN1_read;
-+		PEM_ASN1_read_bio;
-+		PEM_ASN1_write;
-+		PEM_ASN1_write_bio;
-+		PEM_SealFinal;
-+		PEM_SealInit;
-+		PEM_SealUpdate;
-+		PEM_SignFinal;
-+		PEM_SignInit;
-+		PEM_SignUpdate;
-+		PEM_X509_INFO_read;
-+		PEM_X509_INFO_read_bio;
-+		PEM_X509_INFO_write_bio;
-+		PEM_dek_info;
-+		PEM_do_header;
-+		PEM_get_EVP_CIPHER_INFO;
-+		PEM_proc_type;
-+		PEM_read;
-+		PEM_read_DHparams;
-+		PEM_read_DSAPrivateKey;
-+		PEM_read_DSAparams;
-+		PEM_read_PKCS7;
-+		PEM_read_PrivateKey;
-+		PEM_read_RSAPrivateKey;
-+		PEM_read_X509;
-+		PEM_read_X509_CRL;
-+		PEM_read_X509_REQ;
-+		PEM_read_bio;
-+		PEM_read_bio_DHparams;
-+		PEM_read_bio_DSAPrivateKey;
-+		PEM_read_bio_DSAparams;
-+		PEM_read_bio_PKCS7;
-+		PEM_read_bio_PrivateKey;
-+		PEM_read_bio_RSAPrivateKey;
-+		PEM_read_bio_X509;
-+		PEM_read_bio_X509_CRL;
-+		PEM_read_bio_X509_REQ;
-+		PEM_write;
-+		PEM_write_DHparams;
-+		PEM_write_DSAPrivateKey;
-+		PEM_write_DSAparams;
-+		PEM_write_PKCS7;
-+		PEM_write_PrivateKey;
-+		PEM_write_RSAPrivateKey;
-+		PEM_write_X509;
-+		PEM_write_X509_CRL;
-+		PEM_write_X509_REQ;
-+		PEM_write_bio;
-+		PEM_write_bio_DHparams;
-+		PEM_write_bio_DSAPrivateKey;
-+		PEM_write_bio_DSAparams;
-+		PEM_write_bio_PKCS7;
-+		PEM_write_bio_PrivateKey;
-+		PEM_write_bio_RSAPrivateKey;
-+		PEM_write_bio_X509;
-+		PEM_write_bio_X509_CRL;
-+		PEM_write_bio_X509_REQ;
-+		PKCS7_DIGEST_free;
-+		PKCS7_DIGEST_new;
-+		PKCS7_ENCRYPT_free;
-+		PKCS7_ENCRYPT_new;
-+		PKCS7_ENC_CONTENT_free;
-+		PKCS7_ENC_CONTENT_new;
-+		PKCS7_ENVELOPE_free;
-+		PKCS7_ENVELOPE_new;
-+		PKCS7_ISSUER_AND_SERIAL_digest;
-+		PKCS7_ISSUER_AND_SERIAL_free;
-+		PKCS7_ISSUER_AND_SERIAL_new;
-+		PKCS7_RECIP_INFO_free;
-+		PKCS7_RECIP_INFO_new;
-+		PKCS7_SIGNED_free;
-+		PKCS7_SIGNED_new;
-+		PKCS7_SIGNER_INFO_free;
-+		PKCS7_SIGNER_INFO_new;
-+		PKCS7_SIGN_ENVELOPE_free;
-+		PKCS7_SIGN_ENVELOPE_new;
-+		PKCS7_dup;
-+		PKCS7_free;
-+		PKCS7_new;
-+		PROXY_ENTRY_add_noproxy;
-+		PROXY_ENTRY_clear_noproxy;
-+		PROXY_ENTRY_free;
-+		PROXY_ENTRY_get_noproxy;
-+		PROXY_ENTRY_new;
-+		PROXY_ENTRY_set_server;
-+		PROXY_add_noproxy;
-+		PROXY_add_server;
-+		PROXY_check_by_host;
-+		PROXY_check_url;
-+		PROXY_clear_noproxy;
-+		PROXY_free;
-+		PROXY_get_noproxy;
-+		PROXY_get_proxies;
-+		PROXY_get_proxy_entry;
-+		PROXY_load_conf;
-+		PROXY_new;
-+		PROXY_print;
-+		RAND_bytes;
-+		RAND_cleanup;
-+		RAND_file_name;
-+		RAND_load_file;
-+		RAND_screen;
-+		RAND_seed;
-+		RAND_write_file;
-+		RC2_cbc_encrypt;
-+		RC2_cfb64_encrypt;
-+		RC2_ecb_encrypt;
-+		RC2_encrypt;
-+		RC2_ofb64_encrypt;
-+		RC2_set_key;
-+		RC4;
-+		RC4_options;
-+		RC4_set_key;
-+		RSAPrivateKey_asn1_meth;
-+		RSAPrivateKey_dup;
-+		RSAPublicKey_dup;
-+		RSA_PKCS1_SSLeay;
-+		RSA_free;
-+		RSA_generate_key;
-+		RSA_new;
-+		RSA_new_method;
-+		RSA_print;
-+		RSA_print_fp;
-+		RSA_private_decrypt;
-+		RSA_private_encrypt;
-+		RSA_public_decrypt;
-+		RSA_public_encrypt;
-+		RSA_set_default_method;
-+		RSA_sign;
-+		RSA_sign_ASN1_OCTET_STRING;
-+		RSA_size;
-+		RSA_verify;
-+		RSA_verify_ASN1_OCTET_STRING;
-+		SHA;
-+		SHA1;
-+		SHA1_Final;
-+		SHA1_Init;
-+		SHA1_Update;
-+		SHA_Final;
-+		SHA_Init;
-+		SHA_Update;
-+		OpenSSL_add_all_algorithms;
-+		OpenSSL_add_all_ciphers;
-+		OpenSSL_add_all_digests;
-+		TXT_DB_create_index;
-+		TXT_DB_free;
-+		TXT_DB_get_by_index;
-+		TXT_DB_insert;
-+		TXT_DB_read;
-+		TXT_DB_write;
-+		X509_ALGOR_free;
-+		X509_ALGOR_new;
-+		X509_ATTRIBUTE_free;
-+		X509_ATTRIBUTE_new;
-+		X509_CINF_free;
-+		X509_CINF_new;
-+		X509_CRL_INFO_free;
-+		X509_CRL_INFO_new;
-+		X509_CRL_add_ext;
-+		X509_CRL_cmp;
-+		X509_CRL_delete_ext;
-+		X509_CRL_dup;
-+		X509_CRL_free;
-+		X509_CRL_get_ext;
-+		X509_CRL_get_ext_by_NID;
-+		X509_CRL_get_ext_by_OBJ;
-+		X509_CRL_get_ext_by_critical;
-+		X509_CRL_get_ext_count;
-+		X509_CRL_new;
-+		X509_CRL_sign;
-+		X509_CRL_verify;
-+		X509_EXTENSION_create_by_NID;
-+		X509_EXTENSION_create_by_OBJ;
-+		X509_EXTENSION_dup;
-+		X509_EXTENSION_free;
-+		X509_EXTENSION_get_critical;
-+		X509_EXTENSION_get_data;
-+		X509_EXTENSION_get_object;
-+		X509_EXTENSION_new;
-+		X509_EXTENSION_set_critical;
-+		X509_EXTENSION_set_data;
-+		X509_EXTENSION_set_object;
-+		X509_INFO_free;
-+		X509_INFO_new;
-+		X509_LOOKUP_by_alias;
-+		X509_LOOKUP_by_fingerprint;
-+		X509_LOOKUP_by_issuer_serial;
-+		X509_LOOKUP_by_subject;
-+		X509_LOOKUP_ctrl;
-+		X509_LOOKUP_file;
-+		X509_LOOKUP_free;
-+		X509_LOOKUP_hash_dir;
-+		X509_LOOKUP_init;
-+		X509_LOOKUP_new;
-+		X509_LOOKUP_shutdown;
-+		X509_NAME_ENTRY_create_by_NID;
-+		X509_NAME_ENTRY_create_by_OBJ;
-+		X509_NAME_ENTRY_dup;
-+		X509_NAME_ENTRY_free;
-+		X509_NAME_ENTRY_get_data;
-+		X509_NAME_ENTRY_get_object;
-+		X509_NAME_ENTRY_new;
-+		X509_NAME_ENTRY_set_data;
-+		X509_NAME_ENTRY_set_object;
-+		X509_NAME_add_entry;
-+		X509_NAME_cmp;
-+		X509_NAME_delete_entry;
-+		X509_NAME_digest;
-+		X509_NAME_dup;
-+		X509_NAME_entry_count;
-+		X509_NAME_free;
-+		X509_NAME_get_entry;
-+		X509_NAME_get_index_by_NID;
-+		X509_NAME_get_index_by_OBJ;
-+		X509_NAME_get_text_by_NID;
-+		X509_NAME_get_text_by_OBJ;
-+		X509_NAME_hash;
-+		X509_NAME_new;
-+		X509_NAME_oneline;
-+		X509_NAME_print;
-+		X509_NAME_set;
-+		X509_OBJECT_free_contents;
-+		X509_OBJECT_retrieve_by_subject;
-+		X509_OBJECT_up_ref_count;
-+		X509_PKEY_free;
-+		X509_PKEY_new;
-+		X509_PUBKEY_free;
-+		X509_PUBKEY_get;
-+		X509_PUBKEY_new;
-+		X509_PUBKEY_set;
-+		X509_REQ_INFO_free;
-+		X509_REQ_INFO_new;
-+		X509_REQ_dup;
-+		X509_REQ_free;
-+		X509_REQ_get_pubkey;
-+		X509_REQ_new;
-+		X509_REQ_print;
-+		X509_REQ_print_fp;
-+		X509_REQ_set_pubkey;
-+		X509_REQ_set_subject_name;
-+		X509_REQ_set_version;
-+		X509_REQ_sign;
-+		X509_REQ_to_X509;
-+		X509_REQ_verify;
-+		X509_REVOKED_add_ext;
-+		X509_REVOKED_delete_ext;
-+		X509_REVOKED_free;
-+		X509_REVOKED_get_ext;
-+		X509_REVOKED_get_ext_by_NID;
-+		X509_REVOKED_get_ext_by_OBJ;
-+		X509_REVOKED_get_ext_by_critical;
-+		X509_REVOKED_get_ext_by_critic;
-+		X509_REVOKED_get_ext_count;
-+		X509_REVOKED_new;
-+		X509_SIG_free;
-+		X509_SIG_new;
-+		X509_STORE_CTX_cleanup;
-+		X509_STORE_CTX_init;
-+		X509_STORE_add_cert;
-+		X509_STORE_add_lookup;
-+		X509_STORE_free;
-+		X509_STORE_get_by_subject;
-+		X509_STORE_load_locations;
-+		X509_STORE_new;
-+		X509_STORE_set_default_paths;
-+		X509_VAL_free;
-+		X509_VAL_new;
-+		X509_add_ext;
-+		X509_asn1_meth;
-+		X509_certificate_type;
-+		X509_check_private_key;
-+		X509_cmp_current_time;
-+		X509_delete_ext;
-+		X509_digest;
-+		X509_dup;
-+		X509_free;
-+		X509_get_default_cert_area;
-+		X509_get_default_cert_dir;
-+		X509_get_default_cert_dir_env;
-+		X509_get_default_cert_file;
-+		X509_get_default_cert_file_env;
-+		X509_get_default_private_dir;
-+		X509_get_ext;
-+		X509_get_ext_by_NID;
-+		X509_get_ext_by_OBJ;
-+		X509_get_ext_by_critical;
-+		X509_get_ext_count;
-+		X509_get_issuer_name;
-+		X509_get_pubkey;
-+		X509_get_pubkey_parameters;
-+		X509_get_serialNumber;
-+		X509_get_subject_name;
-+		X509_gmtime_adj;
-+		X509_issuer_and_serial_cmp;
-+		X509_issuer_and_serial_hash;
-+		X509_issuer_name_cmp;
-+		X509_issuer_name_hash;
-+		X509_load_cert_file;
-+		X509_new;
-+		X509_print;
-+		X509_print_fp;
-+		X509_set_issuer_name;
-+		X509_set_notAfter;
-+		X509_set_notBefore;
-+		X509_set_pubkey;
-+		X509_set_serialNumber;
-+		X509_set_subject_name;
-+		X509_set_version;
-+		X509_sign;
-+		X509_subject_name_cmp;
-+		X509_subject_name_hash;
-+		X509_to_X509_REQ;
-+		X509_verify;
-+		X509_verify_cert;
-+		X509_verify_cert_error_string;
-+		X509v3_add_ext;
-+		X509v3_add_extension;
-+		X509v3_add_netscape_extensions;
-+		X509v3_add_standard_extensions;
-+		X509v3_cleanup_extensions;
-+		X509v3_data_type_by_NID;
-+		X509v3_data_type_by_OBJ;
-+		X509v3_delete_ext;
-+		X509v3_get_ext;
-+		X509v3_get_ext_by_NID;
-+		X509v3_get_ext_by_OBJ;
-+		X509v3_get_ext_by_critical;
-+		X509v3_get_ext_count;
-+		X509v3_pack_string;
-+		X509v3_pack_type_by_NID;
-+		X509v3_pack_type_by_OBJ;
-+		X509v3_unpack_string;
-+		_des_crypt;
-+		a2d_ASN1_OBJECT;
-+		a2i_ASN1_INTEGER;
-+		a2i_ASN1_STRING;
-+		asn1_Finish;
-+		asn1_GetSequence;
-+		bn_div_words;
-+		bn_expand2;
-+		bn_mul_add_words;
-+		bn_mul_words;
-+		BN_uadd;
-+		BN_usub;
-+		bn_sqr_words;
-+		_ossl_old_crypt;
-+		d2i_ASN1_BIT_STRING;
-+		d2i_ASN1_BOOLEAN;
-+		d2i_ASN1_HEADER;
-+		d2i_ASN1_IA5STRING;
-+		d2i_ASN1_INTEGER;
-+		d2i_ASN1_OBJECT;
-+		d2i_ASN1_OCTET_STRING;
-+		d2i_ASN1_PRINTABLE;
-+		d2i_ASN1_PRINTABLESTRING;
-+		d2i_ASN1_SET;
-+		d2i_ASN1_T61STRING;
-+		d2i_ASN1_TYPE;
-+		d2i_ASN1_UTCTIME;
-+		d2i_ASN1_bytes;
-+		d2i_ASN1_type_bytes;
-+		d2i_DHparams;
-+		d2i_DSAPrivateKey;
-+		d2i_DSAPrivateKey_bio;
-+		d2i_DSAPrivateKey_fp;
-+		d2i_DSAPublicKey;
-+		d2i_DSAparams;
-+		d2i_NETSCAPE_SPKAC;
-+		d2i_NETSCAPE_SPKI;
-+		d2i_Netscape_RSA;
-+		d2i_PKCS7;
-+		d2i_PKCS7_DIGEST;
-+		d2i_PKCS7_ENCRYPT;
-+		d2i_PKCS7_ENC_CONTENT;
-+		d2i_PKCS7_ENVELOPE;
-+		d2i_PKCS7_ISSUER_AND_SERIAL;
-+		d2i_PKCS7_RECIP_INFO;
-+		d2i_PKCS7_SIGNED;
-+		d2i_PKCS7_SIGNER_INFO;
-+		d2i_PKCS7_SIGN_ENVELOPE;
-+		d2i_PKCS7_bio;
-+		d2i_PKCS7_fp;
-+		d2i_PrivateKey;
-+		d2i_PublicKey;
-+		d2i_RSAPrivateKey;
-+		d2i_RSAPrivateKey_bio;
-+		d2i_RSAPrivateKey_fp;
-+		d2i_RSAPublicKey;
-+		d2i_X509;
-+		d2i_X509_ALGOR;
-+		d2i_X509_ATTRIBUTE;
-+		d2i_X509_CINF;
-+		d2i_X509_CRL;
-+		d2i_X509_CRL_INFO;
-+		d2i_X509_CRL_bio;
-+		d2i_X509_CRL_fp;
-+		d2i_X509_EXTENSION;
-+		d2i_X509_NAME;
-+		d2i_X509_NAME_ENTRY;
-+		d2i_X509_PKEY;
-+		d2i_X509_PUBKEY;
-+		d2i_X509_REQ;
-+		d2i_X509_REQ_INFO;
-+		d2i_X509_REQ_bio;
-+		d2i_X509_REQ_fp;
-+		d2i_X509_REVOKED;
-+		d2i_X509_SIG;
-+		d2i_X509_VAL;
-+		d2i_X509_bio;
-+		d2i_X509_fp;
-+		DES_cbc_cksum;
-+		DES_cbc_encrypt;
-+		DES_cblock_print_file;
-+		DES_cfb64_encrypt;
-+		DES_cfb_encrypt;
-+		DES_decrypt3;
-+		DES_ecb3_encrypt;
-+		DES_ecb_encrypt;
-+		DES_ede3_cbc_encrypt;
-+		DES_ede3_cfb64_encrypt;
-+		DES_ede3_ofb64_encrypt;
-+		DES_enc_read;
-+		DES_enc_write;
-+		DES_encrypt1;
-+		DES_encrypt2;
-+		DES_encrypt3;
-+		DES_fcrypt;
-+		DES_is_weak_key;
-+		DES_key_sched;
-+		DES_ncbc_encrypt;
-+		DES_ofb64_encrypt;
-+		DES_ofb_encrypt;
-+		DES_options;
-+		DES_pcbc_encrypt;
-+		DES_quad_cksum;
-+		DES_random_key;
-+		_ossl_old_des_random_seed;
-+		_ossl_old_des_read_2passwords;
-+		_ossl_old_des_read_password;
-+		_ossl_old_des_read_pw;
-+		_ossl_old_des_read_pw_string;
-+		DES_set_key;
-+		DES_set_odd_parity;
-+		DES_string_to_2keys;
-+		DES_string_to_key;
-+		DES_xcbc_encrypt;
-+		DES_xwhite_in2out;
-+		fcrypt_body;
-+		i2a_ASN1_INTEGER;
-+		i2a_ASN1_OBJECT;
-+		i2a_ASN1_STRING;
-+		i2d_ASN1_BIT_STRING;
-+		i2d_ASN1_BOOLEAN;
-+		i2d_ASN1_HEADER;
-+		i2d_ASN1_IA5STRING;
-+		i2d_ASN1_INTEGER;
-+		i2d_ASN1_OBJECT;
-+		i2d_ASN1_OCTET_STRING;
-+		i2d_ASN1_PRINTABLE;
-+		i2d_ASN1_SET;
-+		i2d_ASN1_TYPE;
-+		i2d_ASN1_UTCTIME;
-+		i2d_ASN1_bytes;
-+		i2d_DHparams;
-+		i2d_DSAPrivateKey;
-+		i2d_DSAPrivateKey_bio;
-+		i2d_DSAPrivateKey_fp;
-+		i2d_DSAPublicKey;
-+		i2d_DSAparams;
-+		i2d_NETSCAPE_SPKAC;
-+		i2d_NETSCAPE_SPKI;
-+		i2d_Netscape_RSA;
-+		i2d_PKCS7;
-+		i2d_PKCS7_DIGEST;
-+		i2d_PKCS7_ENCRYPT;
-+		i2d_PKCS7_ENC_CONTENT;
-+		i2d_PKCS7_ENVELOPE;
-+		i2d_PKCS7_ISSUER_AND_SERIAL;
-+		i2d_PKCS7_RECIP_INFO;
-+		i2d_PKCS7_SIGNED;
-+		i2d_PKCS7_SIGNER_INFO;
-+		i2d_PKCS7_SIGN_ENVELOPE;
-+		i2d_PKCS7_bio;
-+		i2d_PKCS7_fp;
-+		i2d_PrivateKey;
-+		i2d_PublicKey;
-+		i2d_RSAPrivateKey;
-+		i2d_RSAPrivateKey_bio;
-+		i2d_RSAPrivateKey_fp;
-+		i2d_RSAPublicKey;
-+		i2d_X509;
-+		i2d_X509_ALGOR;
-+		i2d_X509_ATTRIBUTE;
-+		i2d_X509_CINF;
-+		i2d_X509_CRL;
-+		i2d_X509_CRL_INFO;
-+		i2d_X509_CRL_bio;
-+		i2d_X509_CRL_fp;
-+		i2d_X509_EXTENSION;
-+		i2d_X509_NAME;
-+		i2d_X509_NAME_ENTRY;
-+		i2d_X509_PKEY;
-+		i2d_X509_PUBKEY;
-+		i2d_X509_REQ;
-+		i2d_X509_REQ_INFO;
-+		i2d_X509_REQ_bio;
-+		i2d_X509_REQ_fp;
-+		i2d_X509_REVOKED;
-+		i2d_X509_SIG;
-+		i2d_X509_VAL;
-+		i2d_X509_bio;
-+		i2d_X509_fp;
-+		idea_cbc_encrypt;
-+		idea_cfb64_encrypt;
-+		idea_ecb_encrypt;
-+		idea_encrypt;
-+		idea_ofb64_encrypt;
-+		idea_options;
-+		idea_set_decrypt_key;
-+		idea_set_encrypt_key;
-+		lh_delete;
-+		lh_doall;
-+		lh_doall_arg;
-+		lh_free;
-+		lh_insert;
-+		lh_new;
-+		lh_node_stats;
-+		lh_node_stats_bio;
-+		lh_node_usage_stats;
-+		lh_node_usage_stats_bio;
-+		lh_retrieve;
-+		lh_stats;
-+		lh_stats_bio;
-+		lh_strhash;
-+		sk_delete;
-+		sk_delete_ptr;
-+		sk_dup;
-+		sk_find;
-+		sk_free;
-+		sk_insert;
-+		sk_new;
-+		sk_pop;
-+		sk_pop_free;
-+		sk_push;
-+		sk_set_cmp_func;
-+		sk_shift;
-+		sk_unshift;
-+		sk_zero;
-+		BIO_f_nbio_test;
-+		ASN1_TYPE_get;
-+		ASN1_TYPE_set;
-+		PKCS7_content_free;
-+		ERR_load_PKCS7_strings;
-+		X509_find_by_issuer_and_serial;
-+		X509_find_by_subject;
-+		PKCS7_ctrl;
-+		PKCS7_set_type;
-+		PKCS7_set_content;
-+		PKCS7_SIGNER_INFO_set;
-+		PKCS7_add_signer;
-+		PKCS7_add_certificate;
-+		PKCS7_add_crl;
-+		PKCS7_content_new;
-+		PKCS7_dataSign;
-+		PKCS7_dataVerify;
-+		PKCS7_dataInit;
-+		PKCS7_add_signature;
-+		PKCS7_cert_from_signer_info;
-+		PKCS7_get_signer_info;
-+		EVP_delete_alias;
-+		EVP_mdc2;
-+		PEM_read_bio_RSAPublicKey;
-+		PEM_write_bio_RSAPublicKey;
-+		d2i_RSAPublicKey_bio;
-+		i2d_RSAPublicKey_bio;
-+		PEM_read_RSAPublicKey;
-+		PEM_write_RSAPublicKey;
-+		d2i_RSAPublicKey_fp;
-+		i2d_RSAPublicKey_fp;
-+		BIO_copy_next_retry;
-+		RSA_flags;
-+		X509_STORE_add_crl;
-+		X509_load_crl_file;
-+		EVP_rc2_40_cbc;
-+		EVP_rc4_40;
-+		EVP_CIPHER_CTX_init;
-+		HMAC;
-+		HMAC_Init;
-+		HMAC_Update;
-+		HMAC_Final;
-+		ERR_get_next_error_library;
-+		EVP_PKEY_cmp_parameters;
-+		HMAC_cleanup;
-+		BIO_ptr_ctrl;
-+		BIO_new_file_internal;
-+		BIO_new_fp_internal;
-+		BIO_s_file_internal;
-+		BN_BLINDING_convert;
-+		BN_BLINDING_invert;
-+		BN_BLINDING_update;
-+		RSA_blinding_on;
-+		RSA_blinding_off;
-+		i2t_ASN1_OBJECT;
-+		BN_BLINDING_new;
-+		BN_BLINDING_free;
-+		EVP_cast5_cbc;
-+		EVP_cast5_cfb64;
-+		EVP_cast5_ecb;
-+		EVP_cast5_ofb;
-+		BF_decrypt;
-+		CAST_set_key;
-+		CAST_encrypt;
-+		CAST_decrypt;
-+		CAST_ecb_encrypt;
-+		CAST_cbc_encrypt;
-+		CAST_cfb64_encrypt;
-+		CAST_ofb64_encrypt;
-+		RC2_decrypt;
-+		OBJ_create_objects;
-+		BN_exp;
-+		BN_mul_word;
-+		BN_sub_word;
-+		BN_dec2bn;
-+		BN_bn2dec;
-+		BIO_ghbn_ctrl;
-+		CRYPTO_free_ex_data;
-+		CRYPTO_get_ex_data;
-+		CRYPTO_set_ex_data;
-+		ERR_load_CRYPTO_strings;
-+		ERR_load_CRYPTOlib_strings;
-+		EVP_PKEY_bits;
-+		MD5_Transform;
-+		SHA1_Transform;
-+		SHA_Transform;
-+		X509_STORE_CTX_get_chain;
-+		X509_STORE_CTX_get_current_cert;
-+		X509_STORE_CTX_get_error;
-+		X509_STORE_CTX_get_error_depth;
-+		X509_STORE_CTX_get_ex_data;
-+		X509_STORE_CTX_set_cert;
-+		X509_STORE_CTX_set_chain;
-+		X509_STORE_CTX_set_error;
-+		X509_STORE_CTX_set_ex_data;
-+		CRYPTO_dup_ex_data;
-+		CRYPTO_get_new_lockid;
-+		CRYPTO_new_ex_data;
-+		RSA_set_ex_data;
-+		RSA_get_ex_data;
-+		RSA_get_ex_new_index;
-+		RSA_padding_add_PKCS1_type_1;
-+		RSA_padding_add_PKCS1_type_2;
-+		RSA_padding_add_SSLv23;
-+		RSA_padding_add_none;
-+		RSA_padding_check_PKCS1_type_1;
-+		RSA_padding_check_PKCS1_type_2;
-+		RSA_padding_check_SSLv23;
-+		RSA_padding_check_none;
-+		bn_add_words;
-+		d2i_Netscape_RSA_2;
-+		CRYPTO_get_ex_new_index;
-+		RIPEMD160_Init;
-+		RIPEMD160_Update;
-+		RIPEMD160_Final;
-+		RIPEMD160;
-+		RIPEMD160_Transform;
-+		RC5_32_set_key;
-+		RC5_32_ecb_encrypt;
-+		RC5_32_encrypt;
-+		RC5_32_decrypt;
-+		RC5_32_cbc_encrypt;
-+		RC5_32_cfb64_encrypt;
-+		RC5_32_ofb64_encrypt;
-+		BN_bn2mpi;
-+		BN_mpi2bn;
-+		ASN1_BIT_STRING_get_bit;
-+		ASN1_BIT_STRING_set_bit;
-+		BIO_get_ex_data;
-+		BIO_get_ex_new_index;
-+		BIO_set_ex_data;
-+		X509v3_get_key_usage;
-+		X509v3_set_key_usage;
-+		a2i_X509v3_key_usage;
-+		i2a_X509v3_key_usage;
-+		EVP_PKEY_decrypt;
-+		EVP_PKEY_encrypt;
-+		PKCS7_RECIP_INFO_set;
-+		PKCS7_add_recipient;
-+		PKCS7_add_recipient_info;
-+		PKCS7_set_cipher;
-+		ASN1_TYPE_get_int_octetstring;
-+		ASN1_TYPE_get_octetstring;
-+		ASN1_TYPE_set_int_octetstring;
-+		ASN1_TYPE_set_octetstring;
-+		ASN1_UTCTIME_set_string;
-+		ERR_add_error_data;
-+		ERR_set_error_data;
-+		EVP_CIPHER_asn1_to_param;
-+		EVP_CIPHER_param_to_asn1;
-+		EVP_CIPHER_get_asn1_iv;
-+		EVP_CIPHER_set_asn1_iv;
-+		EVP_rc5_32_12_16_cbc;
-+		EVP_rc5_32_12_16_cfb64;
-+		EVP_rc5_32_12_16_ecb;
-+		EVP_rc5_32_12_16_ofb;
-+		asn1_add_error;
-+		d2i_ASN1_BMPSTRING;
-+		i2d_ASN1_BMPSTRING;
-+		BIO_f_ber;
-+		BN_init;
-+		COMP_CTX_new;
-+		COMP_CTX_free;
-+		COMP_CTX_compress_block;
-+		COMP_CTX_expand_block;
-+		X509_STORE_CTX_get_ex_new_index;
-+		OBJ_NAME_add;
-+		BIO_socket_nbio;
-+		EVP_rc2_64_cbc;
-+		OBJ_NAME_cleanup;
-+		OBJ_NAME_get;
-+		OBJ_NAME_init;
-+		OBJ_NAME_new_index;
-+		OBJ_NAME_remove;
-+		BN_MONT_CTX_copy;
-+		BIO_new_socks4a_connect;
-+		BIO_s_socks4a_connect;
-+		PROXY_set_connect_mode;
-+		RAND_SSLeay;
-+		RAND_set_rand_method;
-+		RSA_memory_lock;
-+		bn_sub_words;
-+		bn_mul_normal;
-+		bn_mul_comba8;
-+		bn_mul_comba4;
-+		bn_sqr_normal;
-+		bn_sqr_comba8;
-+		bn_sqr_comba4;
-+		bn_cmp_words;
-+		bn_mul_recursive;
-+		bn_mul_part_recursive;
-+		bn_sqr_recursive;
-+		bn_mul_low_normal;
-+		BN_RECP_CTX_init;
-+		BN_RECP_CTX_new;
-+		BN_RECP_CTX_free;
-+		BN_RECP_CTX_set;
-+		BN_mod_mul_reciprocal;
-+		BN_mod_exp_recp;
-+		BN_div_recp;
-+		BN_CTX_init;
-+		BN_MONT_CTX_init;
-+		RAND_get_rand_method;
-+		PKCS7_add_attribute;
-+		PKCS7_add_signed_attribute;
-+		PKCS7_digest_from_attributes;
-+		PKCS7_get_attribute;
-+		PKCS7_get_issuer_and_serial;
-+		PKCS7_get_signed_attribute;
-+		COMP_compress_block;
-+		COMP_expand_block;
-+		COMP_rle;
-+		COMP_zlib;
-+		ms_time_diff;
-+		ms_time_new;
-+		ms_time_free;
-+		ms_time_cmp;
-+		ms_time_get;
-+		PKCS7_set_attributes;
-+		PKCS7_set_signed_attributes;
-+		X509_ATTRIBUTE_create;
-+		X509_ATTRIBUTE_dup;
-+		ASN1_GENERALIZEDTIME_check;
-+		ASN1_GENERALIZEDTIME_print;
-+		ASN1_GENERALIZEDTIME_set;
-+		ASN1_GENERALIZEDTIME_set_string;
-+		ASN1_TIME_print;
-+		BASIC_CONSTRAINTS_free;
-+		BASIC_CONSTRAINTS_new;
-+		ERR_load_X509V3_strings;
-+		NETSCAPE_CERT_SEQUENCE_free;
-+		NETSCAPE_CERT_SEQUENCE_new;
-+		OBJ_txt2obj;
-+		PEM_read_NETSCAPE_CERT_SEQUENCE;
-+		PEM_read_NS_CERT_SEQ;
-+		PEM_read_bio_NETSCAPE_CERT_SEQUENCE;
-+		PEM_read_bio_NS_CERT_SEQ;
-+		PEM_write_NETSCAPE_CERT_SEQUENCE;
-+		PEM_write_NS_CERT_SEQ;
-+		PEM_write_bio_NETSCAPE_CERT_SEQUENCE;
-+		PEM_write_bio_NS_CERT_SEQ;
-+		X509V3_EXT_add;
-+		X509V3_EXT_add_alias;
-+		X509V3_EXT_add_conf;
-+		X509V3_EXT_cleanup;
-+		X509V3_EXT_conf;
-+		X509V3_EXT_conf_nid;
-+		X509V3_EXT_get;
-+		X509V3_EXT_get_nid;
-+		X509V3_EXT_print;
-+		X509V3_EXT_print_fp;
-+		X509V3_add_standard_extensions;
-+		X509V3_add_value;
-+		X509V3_add_value_bool;
-+		X509V3_add_value_int;
-+		X509V3_conf_free;
-+		X509V3_get_value_bool;
-+		X509V3_get_value_int;
-+		X509V3_parse_list;
-+		d2i_ASN1_GENERALIZEDTIME;
-+		d2i_ASN1_TIME;
-+		d2i_BASIC_CONSTRAINTS;
-+		d2i_NETSCAPE_CERT_SEQUENCE;
-+		d2i_ext_ku;
-+		ext_ku_free;
-+		ext_ku_new;
-+		i2d_ASN1_GENERALIZEDTIME;
-+		i2d_ASN1_TIME;
-+		i2d_BASIC_CONSTRAINTS;
-+		i2d_NETSCAPE_CERT_SEQUENCE;
-+		i2d_ext_ku;
-+		EVP_MD_CTX_copy;
-+		i2d_ASN1_ENUMERATED;
-+		d2i_ASN1_ENUMERATED;
-+		ASN1_ENUMERATED_set;
-+		ASN1_ENUMERATED_get;
-+		BN_to_ASN1_ENUMERATED;
-+		ASN1_ENUMERATED_to_BN;
-+		i2a_ASN1_ENUMERATED;
-+		a2i_ASN1_ENUMERATED;
-+		i2d_GENERAL_NAME;
-+		d2i_GENERAL_NAME;
-+		GENERAL_NAME_new;
-+		GENERAL_NAME_free;
-+		GENERAL_NAMES_new;
-+		GENERAL_NAMES_free;
-+		d2i_GENERAL_NAMES;
-+		i2d_GENERAL_NAMES;
-+		i2v_GENERAL_NAMES;
-+		i2s_ASN1_OCTET_STRING;
-+		s2i_ASN1_OCTET_STRING;
-+		X509V3_EXT_check_conf;
-+		hex_to_string;
-+		string_to_hex;
-+		DES_ede3_cbcm_encrypt;
-+		RSA_padding_add_PKCS1_OAEP;
-+		RSA_padding_check_PKCS1_OAEP;
-+		X509_CRL_print_fp;
-+		X509_CRL_print;
-+		i2v_GENERAL_NAME;
-+		v2i_GENERAL_NAME;
-+		i2d_PKEY_USAGE_PERIOD;
-+		d2i_PKEY_USAGE_PERIOD;
-+		PKEY_USAGE_PERIOD_new;
-+		PKEY_USAGE_PERIOD_free;
-+		v2i_GENERAL_NAMES;
-+		i2s_ASN1_INTEGER;
-+		X509V3_EXT_d2i;
-+		name_cmp;
-+		str_dup;
-+		i2s_ASN1_ENUMERATED;
-+		i2s_ASN1_ENUMERATED_TABLE;
-+		BIO_s_log;
-+		BIO_f_reliable;
-+		PKCS7_dataFinal;
-+		PKCS7_dataDecode;
-+		X509V3_EXT_CRL_add_conf;
-+		BN_set_params;
-+		BN_get_params;
-+		BIO_get_ex_num;
-+		BIO_set_ex_free_func;
-+		EVP_ripemd160;
-+		ASN1_TIME_set;
-+		i2d_AUTHORITY_KEYID;
-+		d2i_AUTHORITY_KEYID;
-+		AUTHORITY_KEYID_new;
-+		AUTHORITY_KEYID_free;
-+		ASN1_seq_unpack;
-+		ASN1_seq_pack;
-+		ASN1_unpack_string;
-+		ASN1_pack_string;
-+		PKCS12_pack_safebag;
-+		PKCS12_MAKE_KEYBAG;
-+		PKCS8_encrypt;
-+		PKCS12_MAKE_SHKEYBAG;
-+		PKCS12_pack_p7data;
-+		PKCS12_pack_p7encdata;
-+		PKCS12_add_localkeyid;
-+		PKCS12_add_friendlyname_asc;
-+		PKCS12_add_friendlyname_uni;
-+		PKCS12_get_friendlyname;
-+		PKCS12_pbe_crypt;
-+		PKCS12_decrypt_d2i;
-+		PKCS12_i2d_encrypt;
-+		PKCS12_init;
-+		PKCS12_key_gen_asc;
-+		PKCS12_key_gen_uni;
-+		PKCS12_gen_mac;
-+		PKCS12_verify_mac;
-+		PKCS12_set_mac;
-+		PKCS12_setup_mac;
-+		OPENSSL_asc2uni;
-+		OPENSSL_uni2asc;
-+		i2d_PKCS12_BAGS;
-+		PKCS12_BAGS_new;
-+		d2i_PKCS12_BAGS;
-+		PKCS12_BAGS_free;
-+		i2d_PKCS12;
-+		d2i_PKCS12;
-+		PKCS12_new;
-+		PKCS12_free;
-+		i2d_PKCS12_MAC_DATA;
-+		PKCS12_MAC_DATA_new;
-+		d2i_PKCS12_MAC_DATA;
-+		PKCS12_MAC_DATA_free;
-+		i2d_PKCS12_SAFEBAG;
-+		PKCS12_SAFEBAG_new;
-+		d2i_PKCS12_SAFEBAG;
-+		PKCS12_SAFEBAG_free;
-+		ERR_load_PKCS12_strings;
-+		PKCS12_PBE_add;
-+		PKCS8_add_keyusage;
-+		PKCS12_get_attr_gen;
-+		PKCS12_parse;
-+		PKCS12_create;
-+		i2d_PKCS12_bio;
-+		i2d_PKCS12_fp;
-+		d2i_PKCS12_bio;
-+		d2i_PKCS12_fp;
-+		i2d_PBEPARAM;
-+		PBEPARAM_new;
-+		d2i_PBEPARAM;
-+		PBEPARAM_free;
-+		i2d_PKCS8_PRIV_KEY_INFO;
-+		PKCS8_PRIV_KEY_INFO_new;
-+		d2i_PKCS8_PRIV_KEY_INFO;
-+		PKCS8_PRIV_KEY_INFO_free;
-+		EVP_PKCS82PKEY;
-+		EVP_PKEY2PKCS8;
-+		PKCS8_set_broken;
-+		EVP_PBE_ALGOR_CipherInit;
-+		EVP_PBE_alg_add;
-+		PKCS5_pbe_set;
-+		EVP_PBE_cleanup;
-+		i2d_SXNET;
-+		d2i_SXNET;
-+		SXNET_new;
-+		SXNET_free;
-+		i2d_SXNETID;
-+		d2i_SXNETID;
-+		SXNETID_new;
-+		SXNETID_free;
-+		DSA_SIG_new;
-+		DSA_SIG_free;
-+		DSA_do_sign;
-+		DSA_do_verify;
-+		d2i_DSA_SIG;
-+		i2d_DSA_SIG;
-+		i2d_ASN1_VISIBLESTRING;
-+		d2i_ASN1_VISIBLESTRING;
-+		i2d_ASN1_UTF8STRING;
-+		d2i_ASN1_UTF8STRING;
-+		i2d_DIRECTORYSTRING;
-+		d2i_DIRECTORYSTRING;
-+		i2d_DISPLAYTEXT;
-+		d2i_DISPLAYTEXT;
-+		d2i_ASN1_SET_OF_X509;
-+		i2d_ASN1_SET_OF_X509;
-+		i2d_PBKDF2PARAM;
-+		PBKDF2PARAM_new;
-+		d2i_PBKDF2PARAM;
-+		PBKDF2PARAM_free;
-+		i2d_PBE2PARAM;
-+		PBE2PARAM_new;
-+		d2i_PBE2PARAM;
-+		PBE2PARAM_free;
-+		d2i_ASN1_SET_OF_GENERAL_NAME;
-+		i2d_ASN1_SET_OF_GENERAL_NAME;
-+		d2i_ASN1_SET_OF_SXNETID;
-+		i2d_ASN1_SET_OF_SXNETID;
-+		d2i_ASN1_SET_OF_POLICYQUALINFO;
-+		i2d_ASN1_SET_OF_POLICYQUALINFO;
-+		d2i_ASN1_SET_OF_POLICYINFO;
-+		i2d_ASN1_SET_OF_POLICYINFO;
-+		SXNET_add_id_asc;
-+		SXNET_add_id_ulong;
-+		SXNET_add_id_INTEGER;
-+		SXNET_get_id_asc;
-+		SXNET_get_id_ulong;
-+		SXNET_get_id_INTEGER;
-+		X509V3_set_conf_lhash;
-+		i2d_CERTIFICATEPOLICIES;
-+		CERTIFICATEPOLICIES_new;
-+		CERTIFICATEPOLICIES_free;
-+		d2i_CERTIFICATEPOLICIES;
-+		i2d_POLICYINFO;
-+		POLICYINFO_new;
-+		d2i_POLICYINFO;
-+		POLICYINFO_free;
-+		i2d_POLICYQUALINFO;
-+		POLICYQUALINFO_new;
-+		d2i_POLICYQUALINFO;
-+		POLICYQUALINFO_free;
-+		i2d_USERNOTICE;
-+		USERNOTICE_new;
-+		d2i_USERNOTICE;
-+		USERNOTICE_free;
-+		i2d_NOTICEREF;
-+		NOTICEREF_new;
-+		d2i_NOTICEREF;
-+		NOTICEREF_free;
-+		X509V3_get_string;
-+		X509V3_get_section;
-+		X509V3_string_free;
-+		X509V3_section_free;
-+		X509V3_set_ctx;
-+		s2i_ASN1_INTEGER;
-+		CRYPTO_set_locked_mem_functions;
-+		CRYPTO_get_locked_mem_functions;
-+		CRYPTO_malloc_locked;
-+		CRYPTO_free_locked;
-+		BN_mod_exp2_mont;
-+		ERR_get_error_line_data;
-+		ERR_peek_error_line_data;
-+		PKCS12_PBE_keyivgen;
-+		X509_ALGOR_dup;
-+		d2i_ASN1_SET_OF_DIST_POINT;
-+		i2d_ASN1_SET_OF_DIST_POINT;
-+		i2d_CRL_DIST_POINTS;
-+		CRL_DIST_POINTS_new;
-+		CRL_DIST_POINTS_free;
-+		d2i_CRL_DIST_POINTS;
-+		i2d_DIST_POINT;
-+		DIST_POINT_new;
-+		d2i_DIST_POINT;
-+		DIST_POINT_free;
-+		i2d_DIST_POINT_NAME;
-+		DIST_POINT_NAME_new;
-+		DIST_POINT_NAME_free;
-+		d2i_DIST_POINT_NAME;
-+		X509V3_add_value_uchar;
-+		d2i_ASN1_SET_OF_X509_ATTRIBUTE;
-+		i2d_ASN1_SET_OF_ASN1_TYPE;
-+		d2i_ASN1_SET_OF_X509_EXTENSION;
-+		d2i_ASN1_SET_OF_X509_NAME_ENTRY;
-+		d2i_ASN1_SET_OF_ASN1_TYPE;
-+		i2d_ASN1_SET_OF_X509_ATTRIBUTE;
-+		i2d_ASN1_SET_OF_X509_EXTENSION;
-+		i2d_ASN1_SET_OF_X509_NAME_ENTRY;
-+		X509V3_EXT_i2d;
-+		X509V3_EXT_val_prn;
-+		X509V3_EXT_add_list;
-+		EVP_CIPHER_type;
-+		EVP_PBE_CipherInit;
-+		X509V3_add_value_bool_nf;
-+		d2i_ASN1_UINTEGER;
-+		sk_value;
-+		sk_num;
-+		sk_set;
-+		i2d_ASN1_SET_OF_X509_REVOKED;
-+		sk_sort;
-+		d2i_ASN1_SET_OF_X509_REVOKED;
-+		i2d_ASN1_SET_OF_X509_ALGOR;
-+		i2d_ASN1_SET_OF_X509_CRL;
-+		d2i_ASN1_SET_OF_X509_ALGOR;
-+		d2i_ASN1_SET_OF_X509_CRL;
-+		i2d_ASN1_SET_OF_PKCS7_SIGNER_INFO;
-+		i2d_ASN1_SET_OF_PKCS7_RECIP_INFO;
-+		d2i_ASN1_SET_OF_PKCS7_SIGNER_INFO;
-+		d2i_ASN1_SET_OF_PKCS7_RECIP_INFO;
-+		PKCS5_PBE_add;
-+		PEM_write_bio_PKCS8;
-+		i2d_PKCS8_fp;
-+		PEM_read_bio_PKCS8_PRIV_KEY_INFO;
-+		PEM_read_bio_P8_PRIV_KEY_INFO;
-+		d2i_PKCS8_bio;
-+		d2i_PKCS8_PRIV_KEY_INFO_fp;
-+		PEM_write_bio_PKCS8_PRIV_KEY_INFO;
-+		PEM_write_bio_P8_PRIV_KEY_INFO;
-+		PEM_read_PKCS8;
-+		d2i_PKCS8_PRIV_KEY_INFO_bio;
-+		d2i_PKCS8_fp;
-+		PEM_write_PKCS8;
-+		PEM_read_PKCS8_PRIV_KEY_INFO;
-+		PEM_read_P8_PRIV_KEY_INFO;
-+		PEM_read_bio_PKCS8;
-+		PEM_write_PKCS8_PRIV_KEY_INFO;
-+		PEM_write_P8_PRIV_KEY_INFO;
-+		PKCS5_PBE_keyivgen;
-+		i2d_PKCS8_bio;
-+		i2d_PKCS8_PRIV_KEY_INFO_fp;
-+		i2d_PKCS8_PRIV_KEY_INFO_bio;
-+		BIO_s_bio;
-+		PKCS5_pbe2_set;
-+		PKCS5_PBKDF2_HMAC_SHA1;
-+		PKCS5_v2_PBE_keyivgen;
-+		PEM_write_bio_PKCS8PrivateKey;
-+		PEM_write_PKCS8PrivateKey;
-+		BIO_ctrl_get_read_request;
-+		BIO_ctrl_pending;
-+		BIO_ctrl_wpending;
-+		BIO_new_bio_pair;
-+		BIO_ctrl_get_write_guarantee;
-+		CRYPTO_num_locks;
-+		CONF_load_bio;
-+		CONF_load_fp;
-+		i2d_ASN1_SET_OF_ASN1_OBJECT;
-+		d2i_ASN1_SET_OF_ASN1_OBJECT;
-+		PKCS7_signatureVerify;
-+		RSA_set_method;
-+		RSA_get_method;
-+		RSA_get_default_method;
-+		RSA_check_key;
-+		OBJ_obj2txt;
-+		DSA_dup_DH;
-+		X509_REQ_get_extensions;
-+		X509_REQ_set_extension_nids;
-+		BIO_nwrite;
-+		X509_REQ_extension_nid;
-+		BIO_nread;
-+		X509_REQ_get_extension_nids;
-+		BIO_nwrite0;
-+		X509_REQ_add_extensions_nid;
-+		BIO_nread0;
-+		X509_REQ_add_extensions;
-+		BIO_new_mem_buf;
-+		DH_set_ex_data;
-+		DH_set_method;
-+		DSA_OpenSSL;
-+		DH_get_ex_data;
-+		DH_get_ex_new_index;
-+		DSA_new_method;
-+		DH_new_method;
-+		DH_OpenSSL;
-+		DSA_get_ex_new_index;
-+		DH_get_default_method;
-+		DSA_set_ex_data;
-+		DH_set_default_method;
-+		DSA_get_ex_data;
-+		X509V3_EXT_REQ_add_conf;
-+		NETSCAPE_SPKI_print;
-+		NETSCAPE_SPKI_set_pubkey;
-+		NETSCAPE_SPKI_b64_encode;
-+		NETSCAPE_SPKI_get_pubkey;
-+		NETSCAPE_SPKI_b64_decode;
-+		UTF8_putc;
-+		UTF8_getc;
-+		RSA_null_method;
-+		ASN1_tag2str;
-+		BIO_ctrl_reset_read_request;
-+		DISPLAYTEXT_new;
-+		ASN1_GENERALIZEDTIME_free;
-+		X509_REVOKED_get_ext_d2i;
-+		X509_set_ex_data;
-+		X509_reject_set_bit_asc;
-+		X509_NAME_add_entry_by_txt;
-+		X509_NAME_add_entry_by_NID;
-+		X509_PURPOSE_get0;
-+		PEM_read_X509_AUX;
-+		d2i_AUTHORITY_INFO_ACCESS;
-+		PEM_write_PUBKEY;
-+		ACCESS_DESCRIPTION_new;
-+		X509_CERT_AUX_free;
-+		d2i_ACCESS_DESCRIPTION;
-+		X509_trust_clear;
-+		X509_TRUST_add;
-+		ASN1_VISIBLESTRING_new;
-+		X509_alias_set1;
-+		ASN1_PRINTABLESTRING_free;
-+		EVP_PKEY_get1_DSA;
-+		ASN1_BMPSTRING_new;
-+		ASN1_mbstring_copy;
-+		ASN1_UTF8STRING_new;
-+		DSA_get_default_method;
-+		i2d_ASN1_SET_OF_ACCESS_DESCRIPTION;
-+		ASN1_T61STRING_free;
-+		DSA_set_method;
-+		X509_get_ex_data;
-+		ASN1_STRING_type;
-+		X509_PURPOSE_get_by_sname;
-+		ASN1_TIME_free;
-+		ASN1_OCTET_STRING_cmp;
-+		ASN1_BIT_STRING_new;
-+		X509_get_ext_d2i;
-+		PEM_read_bio_X509_AUX;
-+		ASN1_STRING_set_default_mask_asc;
-+		ASN1_STRING_set_def_mask_asc;
-+		PEM_write_bio_RSA_PUBKEY;
-+		ASN1_INTEGER_cmp;
-+		d2i_RSA_PUBKEY_fp;
-+		X509_trust_set_bit_asc;
-+		PEM_write_bio_DSA_PUBKEY;
-+		X509_STORE_CTX_free;
-+		EVP_PKEY_set1_DSA;
-+		i2d_DSA_PUBKEY_fp;
-+		X509_load_cert_crl_file;
-+		ASN1_TIME_new;
-+		i2d_RSA_PUBKEY;
-+		X509_STORE_CTX_purpose_inherit;
-+		PEM_read_RSA_PUBKEY;
-+		d2i_X509_AUX;
-+		i2d_DSA_PUBKEY;
-+		X509_CERT_AUX_print;
-+		PEM_read_DSA_PUBKEY;
-+		i2d_RSA_PUBKEY_bio;
-+		ASN1_BIT_STRING_num_asc;
-+		i2d_PUBKEY;
-+		ASN1_UTCTIME_free;
-+		DSA_set_default_method;
-+		X509_PURPOSE_get_by_id;
-+		ACCESS_DESCRIPTION_free;
-+		PEM_read_bio_PUBKEY;
-+		ASN1_STRING_set_by_NID;
-+		X509_PURPOSE_get_id;
-+		DISPLAYTEXT_free;
-+		OTHERNAME_new;
-+		X509_CERT_AUX_new;
-+		X509_TRUST_cleanup;
-+		X509_NAME_add_entry_by_OBJ;
-+		X509_CRL_get_ext_d2i;
-+		X509_PURPOSE_get0_name;
-+		PEM_read_PUBKEY;
-+		i2d_DSA_PUBKEY_bio;
-+		i2d_OTHERNAME;
-+		ASN1_OCTET_STRING_free;
-+		ASN1_BIT_STRING_set_asc;
-+		X509_get_ex_new_index;
-+		ASN1_STRING_TABLE_cleanup;
-+		X509_TRUST_get_by_id;
-+		X509_PURPOSE_get_trust;
-+		ASN1_STRING_length;
-+		d2i_ASN1_SET_OF_ACCESS_DESCRIPTION;
-+		ASN1_PRINTABLESTRING_new;
-+		X509V3_get_d2i;
-+		ASN1_ENUMERATED_free;
-+		i2d_X509_CERT_AUX;
-+		X509_STORE_CTX_set_trust;
-+		ASN1_STRING_set_default_mask;
-+		X509_STORE_CTX_new;
-+		EVP_PKEY_get1_RSA;
-+		DIRECTORYSTRING_free;
-+		PEM_write_X509_AUX;
-+		ASN1_OCTET_STRING_set;
-+		d2i_DSA_PUBKEY_fp;
-+		d2i_RSA_PUBKEY;
-+		X509_TRUST_get0_name;
-+		X509_TRUST_get0;
-+		AUTHORITY_INFO_ACCESS_free;
-+		ASN1_IA5STRING_new;
-+		d2i_DSA_PUBKEY;
-+		X509_check_purpose;
-+		ASN1_ENUMERATED_new;
-+		d2i_RSA_PUBKEY_bio;
-+		d2i_PUBKEY;
-+		X509_TRUST_get_trust;
-+		X509_TRUST_get_flags;
-+		ASN1_BMPSTRING_free;
-+		ASN1_T61STRING_new;
-+		ASN1_UTCTIME_new;
-+		i2d_AUTHORITY_INFO_ACCESS;
-+		EVP_PKEY_set1_RSA;
-+		X509_STORE_CTX_set_purpose;
-+		ASN1_IA5STRING_free;
-+		PEM_write_bio_X509_AUX;
-+		X509_PURPOSE_get_count;
-+		CRYPTO_add_info;
-+		X509_NAME_ENTRY_create_by_txt;
-+		ASN1_STRING_get_default_mask;
-+		X509_alias_get0;
-+		ASN1_STRING_data;
-+		i2d_ACCESS_DESCRIPTION;
-+		X509_trust_set_bit;
-+		ASN1_BIT_STRING_free;
-+		PEM_read_bio_RSA_PUBKEY;
-+		X509_add1_reject_object;
-+		X509_check_trust;
-+		PEM_read_bio_DSA_PUBKEY;
-+		X509_PURPOSE_add;
-+		ASN1_STRING_TABLE_get;
-+		ASN1_UTF8STRING_free;
-+		d2i_DSA_PUBKEY_bio;
-+		PEM_write_RSA_PUBKEY;
-+		d2i_OTHERNAME;
-+		X509_reject_set_bit;
-+		PEM_write_DSA_PUBKEY;
-+		X509_PURPOSE_get0_sname;
-+		EVP_PKEY_set1_DH;
-+		ASN1_OCTET_STRING_dup;
-+		ASN1_BIT_STRING_set;
-+		X509_TRUST_get_count;
-+		ASN1_INTEGER_free;
-+		OTHERNAME_free;
-+		i2d_RSA_PUBKEY_fp;
-+		ASN1_INTEGER_dup;
-+		d2i_X509_CERT_AUX;
-+		PEM_write_bio_PUBKEY;
-+		ASN1_VISIBLESTRING_free;
-+		X509_PURPOSE_cleanup;
-+		ASN1_mbstring_ncopy;
-+		ASN1_GENERALIZEDTIME_new;
-+		EVP_PKEY_get1_DH;
-+		ASN1_OCTET_STRING_new;
-+		ASN1_INTEGER_new;
-+		i2d_X509_AUX;
-+		ASN1_BIT_STRING_name_print;
-+		X509_cmp;
-+		ASN1_STRING_length_set;
-+		DIRECTORYSTRING_new;
-+		X509_add1_trust_object;
-+		PKCS12_newpass;
-+		SMIME_write_PKCS7;
-+		SMIME_read_PKCS7;
-+		DES_set_key_checked;
-+		PKCS7_verify;
-+		PKCS7_encrypt;
-+		DES_set_key_unchecked;
-+		SMIME_crlf_copy;
-+		i2d_ASN1_PRINTABLESTRING;
-+		PKCS7_get0_signers;
-+		PKCS7_decrypt;
-+		SMIME_text;
-+		PKCS7_simple_smimecap;
-+		PKCS7_get_smimecap;
-+		PKCS7_sign;
-+		PKCS7_add_attrib_smimecap;
-+		CRYPTO_dbg_set_options;
-+		CRYPTO_remove_all_info;
-+		CRYPTO_get_mem_debug_functions;
-+		CRYPTO_is_mem_check_on;
-+		CRYPTO_set_mem_debug_functions;
-+		CRYPTO_pop_info;
-+		CRYPTO_push_info_;
-+		CRYPTO_set_mem_debug_options;
-+		PEM_write_PKCS8PrivateKey_nid;
-+		PEM_write_bio_PKCS8PrivateKey_nid;
-+		PEM_write_bio_PKCS8PrivKey_nid;
-+		d2i_PKCS8PrivateKey_bio;
-+		ASN1_NULL_free;
-+		d2i_ASN1_NULL;
-+		ASN1_NULL_new;
-+		i2d_PKCS8PrivateKey_bio;
-+		i2d_PKCS8PrivateKey_fp;
-+		i2d_ASN1_NULL;
-+		i2d_PKCS8PrivateKey_nid_fp;
-+		d2i_PKCS8PrivateKey_fp;
-+		i2d_PKCS8PrivateKey_nid_bio;
-+		i2d_PKCS8PrivateKeyInfo_fp;
-+		i2d_PKCS8PrivateKeyInfo_bio;
-+		PEM_cb;
-+		i2d_PrivateKey_fp;
-+		d2i_PrivateKey_bio;
-+		d2i_PrivateKey_fp;
-+		i2d_PrivateKey_bio;
-+		X509_reject_clear;
-+		X509_TRUST_set_default;
-+		d2i_AutoPrivateKey;
-+		X509_ATTRIBUTE_get0_type;
-+		X509_ATTRIBUTE_set1_data;
-+		X509at_get_attr;
-+		X509at_get_attr_count;
-+		X509_ATTRIBUTE_create_by_NID;
-+		X509_ATTRIBUTE_set1_object;
-+		X509_ATTRIBUTE_count;
-+		X509_ATTRIBUTE_create_by_OBJ;
-+		X509_ATTRIBUTE_get0_object;
-+		X509at_get_attr_by_NID;
-+		X509at_add1_attr;
-+		X509_ATTRIBUTE_get0_data;
-+		X509at_delete_attr;
-+		X509at_get_attr_by_OBJ;
-+		RAND_add;
-+		BIO_number_written;
-+		BIO_number_read;
-+		X509_STORE_CTX_get1_chain;
-+		ERR_load_RAND_strings;
-+		RAND_pseudo_bytes;
-+		X509_REQ_get_attr_by_NID;
-+		X509_REQ_get_attr;
-+		X509_REQ_add1_attr_by_NID;
-+		X509_REQ_get_attr_by_OBJ;
-+		X509at_add1_attr_by_NID;
-+		X509_REQ_add1_attr_by_OBJ;
-+		X509_REQ_get_attr_count;
-+		X509_REQ_add1_attr;
-+		X509_REQ_delete_attr;
-+		X509at_add1_attr_by_OBJ;
-+		X509_REQ_add1_attr_by_txt;
-+		X509_ATTRIBUTE_create_by_txt;
-+		X509at_add1_attr_by_txt;
-+		BN_pseudo_rand;
-+		BN_is_prime_fasttest;
-+		BN_CTX_end;
-+		BN_CTX_start;
-+		BN_CTX_get;
-+		EVP_PKEY2PKCS8_broken;
-+		ASN1_STRING_TABLE_add;
-+		CRYPTO_dbg_get_options;
-+		AUTHORITY_INFO_ACCESS_new;
-+		CRYPTO_get_mem_debug_options;
-+		DES_crypt;
-+		PEM_write_bio_X509_REQ_NEW;
-+		PEM_write_X509_REQ_NEW;
-+		BIO_callback_ctrl;
-+		RAND_egd;
-+		RAND_status;
-+		bn_dump1;
-+		DES_check_key_parity;
-+		lh_num_items;
-+		RAND_event;
-+		DSO_new;
-+		DSO_new_method;
-+		DSO_free;
-+		DSO_flags;
-+		DSO_up;
-+		DSO_set_default_method;
-+		DSO_get_default_method;
-+		DSO_get_method;
-+		DSO_set_method;
-+		DSO_load;
-+		DSO_bind_var;
-+		DSO_METHOD_null;
-+		DSO_METHOD_openssl;
-+		DSO_METHOD_dlfcn;
-+		DSO_METHOD_win32;
-+		ERR_load_DSO_strings;
-+		DSO_METHOD_dl;
-+		NCONF_load;
-+		NCONF_load_fp;
-+		NCONF_new;
-+		NCONF_get_string;
-+		NCONF_free;
-+		NCONF_get_number;
-+		CONF_dump_fp;
-+		NCONF_load_bio;
-+		NCONF_dump_fp;
-+		NCONF_get_section;
-+		NCONF_dump_bio;
-+		CONF_dump_bio;
-+		NCONF_free_data;
-+		CONF_set_default_method;
-+		ERR_error_string_n;
-+		BIO_snprintf;
-+		DSO_ctrl;
-+		i2d_ASN1_SET_OF_ASN1_INTEGER;
-+		i2d_ASN1_SET_OF_PKCS12_SAFEBAG;
-+		i2d_ASN1_SET_OF_PKCS7;
-+		BIO_vfree;
-+		d2i_ASN1_SET_OF_ASN1_INTEGER;
-+		d2i_ASN1_SET_OF_PKCS12_SAFEBAG;
-+		ASN1_UTCTIME_get;
-+		X509_REQ_digest;
-+		X509_CRL_digest;
-+		d2i_ASN1_SET_OF_PKCS7;
-+		EVP_CIPHER_CTX_set_key_length;
-+		EVP_CIPHER_CTX_ctrl;
-+		BN_mod_exp_mont_word;
-+		RAND_egd_bytes;
-+		X509_REQ_get1_email;
-+		X509_get1_email;
-+		X509_email_free;
-+		i2d_RSA_NET;
-+		d2i_RSA_NET_2;
-+		d2i_RSA_NET;
-+		DSO_bind_func;
-+		CRYPTO_get_new_dynlockid;
-+		sk_new_null;
-+		CRYPTO_set_dynlock_destroy_callback;
-+		CRYPTO_set_dynlock_destroy_cb;
-+		CRYPTO_destroy_dynlockid;
-+		CRYPTO_set_dynlock_size;
-+		CRYPTO_set_dynlock_create_callback;
-+		CRYPTO_set_dynlock_create_cb;
-+		CRYPTO_set_dynlock_lock_callback;
-+		CRYPTO_set_dynlock_lock_cb;
-+		CRYPTO_get_dynlock_lock_callback;
-+		CRYPTO_get_dynlock_lock_cb;
-+		CRYPTO_get_dynlock_destroy_callback;
-+		CRYPTO_get_dynlock_destroy_cb;
-+		CRYPTO_get_dynlock_value;
-+		CRYPTO_get_dynlock_create_callback;
-+		CRYPTO_get_dynlock_create_cb;
-+		c2i_ASN1_BIT_STRING;
-+		i2c_ASN1_BIT_STRING;
-+		RAND_poll;
-+		c2i_ASN1_INTEGER;
-+		i2c_ASN1_INTEGER;
-+		BIO_dump_indent;
-+		ASN1_parse_dump;
-+		c2i_ASN1_OBJECT;
-+		X509_NAME_print_ex_fp;
-+		ASN1_STRING_print_ex_fp;
-+		X509_NAME_print_ex;
-+		ASN1_STRING_print_ex;
-+		MD4;
-+		MD4_Transform;
-+		MD4_Final;
-+		MD4_Update;
-+		MD4_Init;
-+		EVP_md4;
-+		i2d_PUBKEY_bio;
-+		i2d_PUBKEY_fp;
-+		d2i_PUBKEY_bio;
-+		ASN1_STRING_to_UTF8;
-+		BIO_vprintf;
-+		BIO_vsnprintf;
-+		d2i_PUBKEY_fp;
-+		X509_cmp_time;
-+		X509_STORE_CTX_set_time;
-+		X509_STORE_CTX_get1_issuer;
-+		X509_OBJECT_retrieve_match;
-+		X509_OBJECT_idx_by_subject;
-+		X509_STORE_CTX_set_flags;
-+		X509_STORE_CTX_trusted_stack;
-+		X509_time_adj;
-+		X509_check_issued;
-+		ASN1_UTCTIME_cmp_time_t;
-+		DES_set_weak_key_flag;
-+		DES_check_key;
-+		DES_rw_mode;
-+		RSA_PKCS1_RSAref;
-+		X509_keyid_set1;
-+		BIO_next;
-+		DSO_METHOD_vms;
-+		BIO_f_linebuffer;
-+		BN_bntest_rand;
-+		OPENSSL_issetugid;
-+		BN_rand_range;
-+		ERR_load_ENGINE_strings;
-+		ENGINE_set_DSA;
-+		ENGINE_get_finish_function;
-+		ENGINE_get_default_RSA;
-+		ENGINE_get_BN_mod_exp;
-+		DSA_get_default_openssl_method;
-+		ENGINE_set_DH;
-+		ENGINE_set_def_BN_mod_exp_crt;
-+		ENGINE_set_default_BN_mod_exp_crt;
-+		ENGINE_init;
-+		DH_get_default_openssl_method;
-+		RSA_set_default_openssl_method;
-+		ENGINE_finish;
-+		ENGINE_load_public_key;
-+		ENGINE_get_DH;
-+		ENGINE_ctrl;
-+		ENGINE_get_init_function;
-+		ENGINE_set_init_function;
-+		ENGINE_set_default_DSA;
-+		ENGINE_get_name;
-+		ENGINE_get_last;
-+		ENGINE_get_prev;
-+		ENGINE_get_default_DH;
-+		ENGINE_get_RSA;
-+		ENGINE_set_default;
-+		ENGINE_get_RAND;
-+		ENGINE_get_first;
-+		ENGINE_by_id;
-+		ENGINE_set_finish_function;
-+		ENGINE_get_def_BN_mod_exp_crt;
-+		ENGINE_get_default_BN_mod_exp_crt;
-+		RSA_get_default_openssl_method;
-+		ENGINE_set_RSA;
-+		ENGINE_load_private_key;
-+		ENGINE_set_default_RAND;
-+		ENGINE_set_BN_mod_exp;
-+		ENGINE_remove;
-+		ENGINE_free;
-+		ENGINE_get_BN_mod_exp_crt;
-+		ENGINE_get_next;
-+		ENGINE_set_name;
-+		ENGINE_get_default_DSA;
-+		ENGINE_set_default_BN_mod_exp;
-+		ENGINE_set_default_RSA;
-+		ENGINE_get_default_RAND;
-+		ENGINE_get_default_BN_mod_exp;
-+		ENGINE_set_RAND;
-+		ENGINE_set_id;
-+		ENGINE_set_BN_mod_exp_crt;
-+		ENGINE_set_default_DH;
-+		ENGINE_new;
-+		ENGINE_get_id;
-+		DSA_set_default_openssl_method;
-+		ENGINE_add;
-+		DH_set_default_openssl_method;
-+		ENGINE_get_DSA;
-+		ENGINE_get_ctrl_function;
-+		ENGINE_set_ctrl_function;
-+		BN_pseudo_rand_range;
-+		X509_STORE_CTX_set_verify_cb;
-+		ERR_load_COMP_strings;
-+		PKCS12_item_decrypt_d2i;
-+		ASN1_UTF8STRING_it;
-+		ENGINE_unregister_ciphers;
-+		ENGINE_get_ciphers;
-+		d2i_OCSP_BASICRESP;
-+		KRB5_CHECKSUM_it;
-+		EC_POINT_add;
-+		ASN1_item_ex_i2d;
-+		OCSP_CERTID_it;
-+		d2i_OCSP_RESPBYTES;
-+		X509V3_add1_i2d;
-+		PKCS7_ENVELOPE_it;
-+		UI_add_input_boolean;
-+		ENGINE_unregister_RSA;
-+		X509V3_EXT_nconf;
-+		ASN1_GENERALSTRING_free;
-+		d2i_OCSP_CERTSTATUS;
-+		X509_REVOKED_set_serialNumber;
-+		X509_print_ex;
-+		OCSP_ONEREQ_get1_ext_d2i;
-+		ENGINE_register_all_RAND;
-+		ENGINE_load_dynamic;
-+		PBKDF2PARAM_it;
-+		EXTENDED_KEY_USAGE_new;
-+		EC_GROUP_clear_free;
-+		OCSP_sendreq_bio;
-+		ASN1_item_digest;
-+		OCSP_BASICRESP_delete_ext;
-+		OCSP_SIGNATURE_it;
-+		X509_CRL_it;
-+		OCSP_BASICRESP_add_ext;
-+		KRB5_ENCKEY_it;
-+		UI_method_set_closer;
-+		X509_STORE_set_purpose;
-+		i2d_ASN1_GENERALSTRING;
-+		OCSP_response_status;
-+		i2d_OCSP_SERVICELOC;
-+		ENGINE_get_digest_engine;
-+		EC_GROUP_set_curve_GFp;
-+		OCSP_REQUEST_get_ext_by_OBJ;
-+		_ossl_old_des_random_key;
-+		ASN1_T61STRING_it;
-+		EC_GROUP_method_of;
-+		i2d_KRB5_APREQ;
-+		_ossl_old_des_encrypt;
-+		ASN1_PRINTABLE_new;
-+		HMAC_Init_ex;
-+		d2i_KRB5_AUTHENT;
-+		OCSP_archive_cutoff_new;
-+		EC_POINT_set_Jprojective_coordinates_GFp;
-+		EC_POINT_set_Jproj_coords_GFp;
-+		_ossl_old_des_is_weak_key;
-+		OCSP_BASICRESP_get_ext_by_OBJ;
-+		EC_POINT_oct2point;
-+		OCSP_SINGLERESP_get_ext_count;
-+		UI_ctrl;
-+		_shadow_DES_rw_mode;
-+		asn1_do_adb;
-+		ASN1_template_i2d;
-+		ENGINE_register_DH;
-+		UI_construct_prompt;
-+		X509_STORE_set_trust;
-+		UI_dup_input_string;
-+		d2i_KRB5_APREQ;
-+		EVP_MD_CTX_copy_ex;
-+		OCSP_request_is_signed;
-+		i2d_OCSP_REQINFO;
-+		KRB5_ENCKEY_free;
-+		OCSP_resp_get0;
-+		GENERAL_NAME_it;
-+		ASN1_GENERALIZEDTIME_it;
-+		X509_STORE_set_flags;
-+		EC_POINT_set_compressed_coordinates_GFp;
-+		EC_POINT_set_compr_coords_GFp;
-+		OCSP_response_status_str;
-+		d2i_OCSP_REVOKEDINFO;
-+		OCSP_basic_add1_cert;
-+		ERR_get_implementation;
-+		EVP_CipherFinal_ex;
-+		OCSP_CERTSTATUS_new;
-+		CRYPTO_cleanup_all_ex_data;
-+		OCSP_resp_find;
-+		BN_nnmod;
-+		X509_CRL_sort;
-+		X509_REVOKED_set_revocationDate;
-+		ENGINE_register_RAND;
-+		OCSP_SERVICELOC_new;
-+		EC_POINT_set_affine_coordinates_GFp;
-+		EC_POINT_set_affine_coords_GFp;
-+		_ossl_old_des_options;
-+		SXNET_it;
-+		UI_dup_input_boolean;
-+		PKCS12_add_CSPName_asc;
-+		EC_POINT_is_at_infinity;
-+		ENGINE_load_cryptodev;
-+		DSO_convert_filename;
-+		POLICYQUALINFO_it;
-+		ENGINE_register_ciphers;
-+		BN_mod_lshift_quick;
-+		DSO_set_filename;
-+		ASN1_item_free;
-+		KRB5_TKTBODY_free;
-+		AUTHORITY_KEYID_it;
-+		KRB5_APREQBODY_new;
-+		X509V3_EXT_REQ_add_nconf;
-+		ENGINE_ctrl_cmd_string;
-+		i2d_OCSP_RESPDATA;
-+		EVP_MD_CTX_init;
-+		EXTENDED_KEY_USAGE_free;
-+		PKCS7_ATTR_SIGN_it;
-+		UI_add_error_string;
-+		KRB5_CHECKSUM_free;
-+		OCSP_REQUEST_get_ext;
-+		ENGINE_load_ubsec;
-+		ENGINE_register_all_digests;
-+		PKEY_USAGE_PERIOD_it;
-+		PKCS12_unpack_authsafes;
-+		ASN1_item_unpack;
-+		NETSCAPE_SPKAC_it;
-+		X509_REVOKED_it;
-+		ASN1_STRING_encode;
-+		EVP_aes_128_ecb;
-+		KRB5_AUTHENT_free;
-+		OCSP_BASICRESP_get_ext_by_critical;
-+		OCSP_BASICRESP_get_ext_by_crit;
-+		OCSP_cert_status_str;
-+		d2i_OCSP_REQUEST;
-+		UI_dup_info_string;
-+		_ossl_old_des_xwhite_in2out;
-+		PKCS12_it;
-+		OCSP_SINGLERESP_get_ext_by_critical;
-+		OCSP_SINGLERESP_get_ext_by_crit;
-+		OCSP_CERTSTATUS_free;
-+		_ossl_old_des_crypt;
-+		ASN1_item_i2d;
-+		EVP_DecryptFinal_ex;
-+		ENGINE_load_openssl;
-+		ENGINE_get_cmd_defns;
-+		ENGINE_set_load_privkey_function;
-+		ENGINE_set_load_privkey_fn;
-+		EVP_EncryptFinal_ex;
-+		ENGINE_set_default_digests;
-+		X509_get0_pubkey_bitstr;
-+		asn1_ex_i2c;
-+		ENGINE_register_RSA;
-+		ENGINE_unregister_DSA;
-+		_ossl_old_des_key_sched;
-+		X509_EXTENSION_it;
-+		i2d_KRB5_AUTHENT;
-+		SXNETID_it;
-+		d2i_OCSP_SINGLERESP;
-+		EDIPARTYNAME_new;
-+		PKCS12_certbag2x509;
-+		_ossl_old_des_ofb64_encrypt;
-+		d2i_EXTENDED_KEY_USAGE;
-+		ERR_print_errors_cb;
-+		ENGINE_set_ciphers;
-+		d2i_KRB5_APREQBODY;
-+		UI_method_get_flusher;
-+		X509_PUBKEY_it;
-+		_ossl_old_des_enc_read;
-+		PKCS7_ENCRYPT_it;
-+		i2d_OCSP_RESPONSE;
-+		EC_GROUP_get_cofactor;
-+		PKCS12_unpack_p7data;
-+		d2i_KRB5_AUTHDATA;
-+		OCSP_copy_nonce;
-+		KRB5_AUTHDATA_new;
-+		OCSP_RESPDATA_new;
-+		EC_GFp_mont_method;
-+		OCSP_REVOKEDINFO_free;
-+		UI_get_ex_data;
-+		KRB5_APREQBODY_free;
-+		EC_GROUP_get0_generator;
-+		UI_get_default_method;
-+		X509V3_set_nconf;
-+		PKCS12_item_i2d_encrypt;
-+		X509_add1_ext_i2d;
-+		PKCS7_SIGNER_INFO_it;
-+		KRB5_PRINCNAME_new;
-+		PKCS12_SAFEBAG_it;
-+		EC_GROUP_get_order;
-+		d2i_OCSP_RESPID;
-+		OCSP_request_verify;
-+		NCONF_get_number_e;
-+		_ossl_old_des_decrypt3;
-+		X509_signature_print;
-+		OCSP_SINGLERESP_free;
-+		ENGINE_load_builtin_engines;
-+		i2d_OCSP_ONEREQ;
-+		OCSP_REQUEST_add_ext;
-+		OCSP_RESPBYTES_new;
-+		EVP_MD_CTX_create;
-+		OCSP_resp_find_status;
-+		X509_ALGOR_it;
-+		ASN1_TIME_it;
-+		OCSP_request_set1_name;
-+		OCSP_ONEREQ_get_ext_count;
-+		UI_get0_result;
-+		PKCS12_AUTHSAFES_it;
-+		EVP_aes_256_ecb;
-+		PKCS12_pack_authsafes;
-+		ASN1_IA5STRING_it;
-+		UI_get_input_flags;
-+		EC_GROUP_set_generator;
-+		_ossl_old_des_string_to_2keys;
-+		OCSP_CERTID_free;
-+		X509_CERT_AUX_it;
-+		CERTIFICATEPOLICIES_it;
-+		_ossl_old_des_ede3_cbc_encrypt;
-+		RAND_set_rand_engine;
-+		DSO_get_loaded_filename;
-+		X509_ATTRIBUTE_it;
-+		OCSP_ONEREQ_get_ext_by_NID;
-+		PKCS12_decrypt_skey;
-+		KRB5_AUTHENT_it;
-+		UI_dup_error_string;
-+		RSAPublicKey_it;
-+		i2d_OCSP_REQUEST;
-+		PKCS12_x509crl2certbag;
-+		OCSP_SERVICELOC_it;
-+		ASN1_item_sign;
-+		X509_CRL_set_issuer_name;
-+		OBJ_NAME_do_all_sorted;
-+		i2d_OCSP_BASICRESP;
-+		i2d_OCSP_RESPBYTES;
-+		PKCS12_unpack_p7encdata;
-+		HMAC_CTX_init;
-+		ENGINE_get_digest;
-+		OCSP_RESPONSE_print;
-+		KRB5_TKTBODY_it;
-+		ACCESS_DESCRIPTION_it;
-+		PKCS7_ISSUER_AND_SERIAL_it;
-+		PBE2PARAM_it;
-+		PKCS12_certbag2x509crl;
-+		PKCS7_SIGNED_it;
-+		ENGINE_get_cipher;
-+		i2d_OCSP_CRLID;
-+		OCSP_SINGLERESP_new;
-+		ENGINE_cmd_is_executable;
-+		RSA_up_ref;
-+		ASN1_GENERALSTRING_it;
-+		ENGINE_register_DSA;
-+		X509V3_EXT_add_nconf_sk;
-+		ENGINE_set_load_pubkey_function;
-+		PKCS8_decrypt;
-+		PEM_bytes_read_bio;
-+		DIRECTORYSTRING_it;
-+		d2i_OCSP_CRLID;
-+		EC_POINT_is_on_curve;
-+		CRYPTO_set_locked_mem_ex_functions;
-+		CRYPTO_set_locked_mem_ex_funcs;
-+		d2i_KRB5_CHECKSUM;
-+		ASN1_item_dup;
-+		X509_it;
-+		BN_mod_add;
-+		KRB5_AUTHDATA_free;
-+		_ossl_old_des_cbc_cksum;
-+		ASN1_item_verify;
-+		CRYPTO_set_mem_ex_functions;
-+		EC_POINT_get_Jprojective_coordinates_GFp;
-+		EC_POINT_get_Jproj_coords_GFp;
-+		ZLONG_it;
-+		CRYPTO_get_locked_mem_ex_functions;
-+		CRYPTO_get_locked_mem_ex_funcs;
-+		ASN1_TIME_check;
-+		UI_get0_user_data;
-+		HMAC_CTX_cleanup;
-+		DSA_up_ref;
-+		_ossl_old_des_ede3_cfb64_encrypt;
-+		_ossl_odes_ede3_cfb64_encrypt;
-+		ASN1_BMPSTRING_it;
-+		ASN1_tag2bit;
-+		UI_method_set_flusher;
-+		X509_ocspid_print;
-+		KRB5_ENCDATA_it;
-+		ENGINE_get_load_pubkey_function;
-+		UI_add_user_data;
-+		OCSP_REQUEST_delete_ext;
-+		UI_get_method;
-+		OCSP_ONEREQ_free;
-+		ASN1_PRINTABLESTRING_it;
-+		X509_CRL_set_nextUpdate;
-+		OCSP_REQUEST_it;
-+		OCSP_BASICRESP_it;
-+		AES_ecb_encrypt;
-+		BN_mod_sqr;
-+		NETSCAPE_CERT_SEQUENCE_it;
-+		GENERAL_NAMES_it;
-+		AUTHORITY_INFO_ACCESS_it;
-+		ASN1_FBOOLEAN_it;
-+		UI_set_ex_data;
-+		_ossl_old_des_string_to_key;
-+		ENGINE_register_all_RSA;
-+		d2i_KRB5_PRINCNAME;
-+		OCSP_RESPBYTES_it;
-+		X509_CINF_it;
-+		ENGINE_unregister_digests;
-+		d2i_EDIPARTYNAME;
-+		d2i_OCSP_SERVICELOC;
-+		ENGINE_get_digests;
-+		_ossl_old_des_set_odd_parity;
-+		OCSP_RESPDATA_free;
-+		d2i_KRB5_TICKET;
-+		OTHERNAME_it;
-+		EVP_MD_CTX_cleanup;
-+		d2i_ASN1_GENERALSTRING;
-+		X509_CRL_set_version;
-+		BN_mod_sub;
-+		OCSP_SINGLERESP_get_ext_by_NID;
-+		ENGINE_get_ex_new_index;
-+		OCSP_REQUEST_free;
-+		OCSP_REQUEST_add1_ext_i2d;
-+		X509_VAL_it;
-+		EC_POINTs_make_affine;
-+		EC_POINT_mul;
-+		X509V3_EXT_add_nconf;
-+		X509_TRUST_set;
-+		X509_CRL_add1_ext_i2d;
-+		_ossl_old_des_fcrypt;
-+		DISPLAYTEXT_it;
-+		X509_CRL_set_lastUpdate;
-+		OCSP_BASICRESP_free;
-+		OCSP_BASICRESP_add1_ext_i2d;
-+		d2i_KRB5_AUTHENTBODY;
-+		CRYPTO_set_ex_data_implementation;
-+		CRYPTO_set_ex_data_impl;
-+		KRB5_ENCDATA_new;
-+		DSO_up_ref;
-+		OCSP_crl_reason_str;
-+		UI_get0_result_string;
-+		ASN1_GENERALSTRING_new;
-+		X509_SIG_it;
-+		ERR_set_implementation;
-+		ERR_load_EC_strings;
-+		UI_get0_action_string;
-+		OCSP_ONEREQ_get_ext;
-+		EC_POINT_method_of;
-+		i2d_KRB5_APREQBODY;
-+		_ossl_old_des_ecb3_encrypt;
-+		CRYPTO_get_mem_ex_functions;
-+		ENGINE_get_ex_data;
-+		UI_destroy_method;
-+		ASN1_item_i2d_bio;
-+		OCSP_ONEREQ_get_ext_by_OBJ;
-+		ASN1_primitive_new;
-+		ASN1_PRINTABLE_it;
-+		EVP_aes_192_ecb;
-+		OCSP_SIGNATURE_new;
-+		LONG_it;
-+		ASN1_VISIBLESTRING_it;
-+		OCSP_SINGLERESP_add1_ext_i2d;
-+		d2i_OCSP_CERTID;
-+		ASN1_item_d2i_fp;
-+		CRL_DIST_POINTS_it;
-+		GENERAL_NAME_print;
-+		OCSP_SINGLERESP_delete_ext;
-+		PKCS12_SAFEBAGS_it;
-+		d2i_OCSP_SIGNATURE;
-+		OCSP_request_add1_nonce;
-+		ENGINE_set_cmd_defns;
-+		OCSP_SERVICELOC_free;
-+		EC_GROUP_free;
-+		ASN1_BIT_STRING_it;
-+		X509_REQ_it;
-+		_ossl_old_des_cbc_encrypt;
-+		ERR_unload_strings;
-+		PKCS7_SIGN_ENVELOPE_it;
-+		EDIPARTYNAME_free;
-+		OCSP_REQINFO_free;
-+		EC_GROUP_new_curve_GFp;
-+		OCSP_REQUEST_get1_ext_d2i;
-+		PKCS12_item_pack_safebag;
-+		asn1_ex_c2i;
-+		ENGINE_register_digests;
-+		i2d_OCSP_REVOKEDINFO;
-+		asn1_enc_restore;
-+		UI_free;
-+		UI_new_method;
-+		EVP_EncryptInit_ex;
-+		X509_pubkey_digest;
-+		EC_POINT_invert;
-+		OCSP_basic_sign;
-+		i2d_OCSP_RESPID;
-+		OCSP_check_nonce;
-+		ENGINE_ctrl_cmd;
-+		d2i_KRB5_ENCKEY;
-+		OCSP_parse_url;
-+		OCSP_SINGLERESP_get_ext;
-+		OCSP_CRLID_free;
-+		OCSP_BASICRESP_get1_ext_d2i;
-+		RSAPrivateKey_it;
-+		ENGINE_register_all_DH;
-+		i2d_EDIPARTYNAME;
-+		EC_POINT_get_affine_coordinates_GFp;
-+		EC_POINT_get_affine_coords_GFp;
-+		OCSP_CRLID_new;
-+		ENGINE_get_flags;
-+		OCSP_ONEREQ_it;
-+		UI_process;
-+		ASN1_INTEGER_it;
-+		EVP_CipherInit_ex;
-+		UI_get_string_type;
-+		ENGINE_unregister_DH;
-+		ENGINE_register_all_DSA;
-+		OCSP_ONEREQ_get_ext_by_critical;
-+		bn_dup_expand;
-+		OCSP_cert_id_new;
-+		BASIC_CONSTRAINTS_it;
-+		BN_mod_add_quick;
-+		EC_POINT_new;
-+		EVP_MD_CTX_destroy;
-+		OCSP_RESPBYTES_free;
-+		EVP_aes_128_cbc;
-+		OCSP_SINGLERESP_get1_ext_d2i;
-+		EC_POINT_free;
-+		DH_up_ref;
-+		X509_NAME_ENTRY_it;
-+		UI_get_ex_new_index;
-+		BN_mod_sub_quick;
-+		OCSP_ONEREQ_add_ext;
-+		OCSP_request_sign;
-+		EVP_DigestFinal_ex;
-+		ENGINE_set_digests;
-+		OCSP_id_issuer_cmp;
-+		OBJ_NAME_do_all;
-+		EC_POINTs_mul;
-+		ENGINE_register_complete;
-+		X509V3_EXT_nconf_nid;
-+		ASN1_SEQUENCE_it;
-+		UI_set_default_method;
-+		RAND_query_egd_bytes;
-+		UI_method_get_writer;
-+		UI_OpenSSL;
-+		PEM_def_callback;
-+		ENGINE_cleanup;
-+		DIST_POINT_it;
-+		OCSP_SINGLERESP_it;
-+		d2i_KRB5_TKTBODY;
-+		EC_POINT_cmp;
-+		OCSP_REVOKEDINFO_new;
-+		i2d_OCSP_CERTSTATUS;
-+		OCSP_basic_add1_nonce;
-+		ASN1_item_ex_d2i;
-+		BN_mod_lshift1_quick;
-+		UI_set_method;
-+		OCSP_id_get0_info;
-+		BN_mod_sqrt;
-+		EC_GROUP_copy;
-+		KRB5_ENCDATA_free;
-+		_ossl_old_des_cfb_encrypt;
-+		OCSP_SINGLERESP_get_ext_by_OBJ;
-+		OCSP_cert_to_id;
-+		OCSP_RESPID_new;
-+		OCSP_RESPDATA_it;
-+		d2i_OCSP_RESPDATA;
-+		ENGINE_register_all_complete;
-+		OCSP_check_validity;
-+		PKCS12_BAGS_it;
-+		OCSP_url_svcloc_new;
-+		ASN1_template_free;
-+		OCSP_SINGLERESP_add_ext;
-+		KRB5_AUTHENTBODY_it;
-+		X509_supported_extension;
-+		i2d_KRB5_AUTHDATA;
-+		UI_method_get_opener;
-+		ENGINE_set_ex_data;
-+		OCSP_REQUEST_print;
-+		CBIGNUM_it;
-+		KRB5_TICKET_new;
-+		KRB5_APREQ_new;
-+		EC_GROUP_get_curve_GFp;
-+		KRB5_ENCKEY_new;
-+		ASN1_template_d2i;
-+		_ossl_old_des_quad_cksum;
-+		OCSP_single_get0_status;
-+		BN_swap;
-+		POLICYINFO_it;
-+		ENGINE_set_destroy_function;
-+		asn1_enc_free;
-+		OCSP_RESPID_it;
-+		EC_GROUP_new;
-+		EVP_aes_256_cbc;
-+		i2d_KRB5_PRINCNAME;
-+		_ossl_old_des_encrypt2;
-+		_ossl_old_des_encrypt3;
-+		PKCS8_PRIV_KEY_INFO_it;
-+		OCSP_REQINFO_it;
-+		PBEPARAM_it;
-+		KRB5_AUTHENTBODY_new;
-+		X509_CRL_add0_revoked;
-+		EDIPARTYNAME_it;
-+		NETSCAPE_SPKI_it;
-+		UI_get0_test_string;
-+		ENGINE_get_cipher_engine;
-+		ENGINE_register_all_ciphers;
-+		EC_POINT_copy;
-+		BN_kronecker;
-+		_ossl_old_des_ede3_ofb64_encrypt;
-+		_ossl_odes_ede3_ofb64_encrypt;
-+		UI_method_get_reader;
-+		OCSP_BASICRESP_get_ext_count;
-+		ASN1_ENUMERATED_it;
-+		UI_set_result;
-+		i2d_KRB5_TICKET;
-+		X509_print_ex_fp;
-+		EVP_CIPHER_CTX_set_padding;
-+		d2i_OCSP_RESPONSE;
-+		ASN1_UTCTIME_it;
-+		_ossl_old_des_enc_write;
-+		OCSP_RESPONSE_new;
-+		AES_set_encrypt_key;
-+		OCSP_resp_count;
-+		KRB5_CHECKSUM_new;
-+		ENGINE_load_cswift;
-+		OCSP_onereq_get0_id;
-+		ENGINE_set_default_ciphers;
-+		NOTICEREF_it;
-+		X509V3_EXT_CRL_add_nconf;
-+		OCSP_REVOKEDINFO_it;
-+		AES_encrypt;
-+		OCSP_REQUEST_new;
-+		ASN1_ANY_it;
-+		CRYPTO_ex_data_new_class;
-+		_ossl_old_des_ncbc_encrypt;
-+		i2d_KRB5_TKTBODY;
-+		EC_POINT_clear_free;
-+		AES_decrypt;
-+		asn1_enc_init;
-+		UI_get_result_maxsize;
-+		OCSP_CERTID_new;
-+		ENGINE_unregister_RAND;
-+		UI_method_get_closer;
-+		d2i_KRB5_ENCDATA;
-+		OCSP_request_onereq_count;
-+		OCSP_basic_verify;
-+		KRB5_AUTHENTBODY_free;
-+		ASN1_item_d2i;
-+		ASN1_primitive_free;
-+		i2d_EXTENDED_KEY_USAGE;
-+		i2d_OCSP_SIGNATURE;
-+		asn1_enc_save;
-+		ENGINE_load_nuron;
-+		_ossl_old_des_pcbc_encrypt;
-+		PKCS12_MAC_DATA_it;
-+		OCSP_accept_responses_new;
-+		asn1_do_lock;
-+		PKCS7_ATTR_VERIFY_it;
-+		KRB5_APREQBODY_it;
-+		i2d_OCSP_SINGLERESP;
-+		ASN1_item_ex_new;
-+		UI_add_verify_string;
-+		_ossl_old_des_set_key;
-+		KRB5_PRINCNAME_it;
-+		EVP_DecryptInit_ex;
-+		i2d_OCSP_CERTID;
-+		ASN1_item_d2i_bio;
-+		EC_POINT_dbl;
-+		asn1_get_choice_selector;
-+		i2d_KRB5_CHECKSUM;
-+		ENGINE_set_table_flags;
-+		AES_options;
-+		ENGINE_load_chil;
-+		OCSP_id_cmp;
-+		OCSP_BASICRESP_new;
-+		OCSP_REQUEST_get_ext_by_NID;
-+		KRB5_APREQ_it;
-+		ENGINE_get_destroy_function;
-+		CONF_set_nconf;
-+		ASN1_PRINTABLE_free;
-+		OCSP_BASICRESP_get_ext_by_NID;
-+		DIST_POINT_NAME_it;
-+		X509V3_extensions_print;
-+		_ossl_old_des_cfb64_encrypt;
-+		X509_REVOKED_add1_ext_i2d;
-+		_ossl_old_des_ofb_encrypt;
-+		KRB5_TKTBODY_new;
-+		ASN1_OCTET_STRING_it;
-+		ERR_load_UI_strings;
-+		i2d_KRB5_ENCKEY;
-+		ASN1_template_new;
-+		OCSP_SIGNATURE_free;
-+		ASN1_item_i2d_fp;
-+		KRB5_PRINCNAME_free;
-+		PKCS7_RECIP_INFO_it;
-+		EXTENDED_KEY_USAGE_it;
-+		EC_GFp_simple_method;
-+		EC_GROUP_precompute_mult;
-+		OCSP_request_onereq_get0;
-+		UI_method_set_writer;
-+		KRB5_AUTHENT_new;
-+		X509_CRL_INFO_it;
-+		DSO_set_name_converter;
-+		AES_set_decrypt_key;
-+		PKCS7_DIGEST_it;
-+		PKCS12_x5092certbag;
-+		EVP_DigestInit_ex;
-+		i2a_ACCESS_DESCRIPTION;
-+		OCSP_RESPONSE_it;
-+		PKCS7_ENC_CONTENT_it;
-+		OCSP_request_add0_id;
-+		EC_POINT_make_affine;
-+		DSO_get_filename;
-+		OCSP_CERTSTATUS_it;
-+		OCSP_request_add1_cert;
-+		UI_get0_output_string;
-+		UI_dup_verify_string;
-+		BN_mod_lshift;
-+		KRB5_AUTHDATA_it;
-+		asn1_set_choice_selector;
-+		OCSP_basic_add1_status;
-+		OCSP_RESPID_free;
-+		asn1_get_field_ptr;
-+		UI_add_input_string;
-+		OCSP_CRLID_it;
-+		i2d_KRB5_AUTHENTBODY;
-+		OCSP_REQUEST_get_ext_count;
-+		ENGINE_load_atalla;
-+		X509_NAME_it;
-+		USERNOTICE_it;
-+		OCSP_REQINFO_new;
-+		OCSP_BASICRESP_get_ext;
-+		CRYPTO_get_ex_data_implementation;
-+		CRYPTO_get_ex_data_impl;
-+		ASN1_item_pack;
-+		i2d_KRB5_ENCDATA;
-+		X509_PURPOSE_set;
-+		X509_REQ_INFO_it;
-+		UI_method_set_opener;
-+		ASN1_item_ex_free;
-+		ASN1_BOOLEAN_it;
-+		ENGINE_get_table_flags;
-+		UI_create_method;
-+		OCSP_ONEREQ_add1_ext_i2d;
-+		_shadow_DES_check_key;
-+		d2i_OCSP_REQINFO;
-+		UI_add_info_string;
-+		UI_get_result_minsize;
-+		ASN1_NULL_it;
-+		BN_mod_lshift1;
-+		d2i_OCSP_ONEREQ;
-+		OCSP_ONEREQ_new;
-+		KRB5_TICKET_it;
-+		EVP_aes_192_cbc;
-+		KRB5_TICKET_free;
-+		UI_new;
-+		OCSP_response_create;
-+		_ossl_old_des_xcbc_encrypt;
-+		PKCS7_it;
-+		OCSP_REQUEST_get_ext_by_critical;
-+		OCSP_REQUEST_get_ext_by_crit;
-+		ENGINE_set_flags;
-+		_ossl_old_des_ecb_encrypt;
-+		OCSP_response_get1_basic;
-+		EVP_Digest;
-+		OCSP_ONEREQ_delete_ext;
-+		ASN1_TBOOLEAN_it;
-+		ASN1_item_new;
-+		ASN1_TIME_to_generalizedtime;
-+		BIGNUM_it;
-+		AES_cbc_encrypt;
-+		ENGINE_get_load_privkey_function;
-+		ENGINE_get_load_privkey_fn;
-+		OCSP_RESPONSE_free;
-+		UI_method_set_reader;
-+		i2d_ASN1_T61STRING;
-+		EC_POINT_set_to_infinity;
-+		ERR_load_OCSP_strings;
-+		EC_POINT_point2oct;
-+		KRB5_APREQ_free;
-+		ASN1_OBJECT_it;
-+		OCSP_crlID_new;
-+		OCSP_crlID2_new;
-+		CONF_modules_load_file;
-+		CONF_imodule_set_usr_data;
-+		ENGINE_set_default_string;
-+		CONF_module_get_usr_data;
-+		ASN1_add_oid_module;
-+		CONF_modules_finish;
-+		OPENSSL_config;
-+		CONF_modules_unload;
-+		CONF_imodule_get_value;
-+		CONF_module_set_usr_data;
-+		CONF_parse_list;
-+		CONF_module_add;
-+		CONF_get1_default_config_file;
-+		CONF_imodule_get_flags;
-+		CONF_imodule_get_module;
-+		CONF_modules_load;
-+		CONF_imodule_get_name;
-+		ERR_peek_top_error;
-+		CONF_imodule_get_usr_data;
-+		CONF_imodule_set_flags;
-+		ENGINE_add_conf_module;
-+		ERR_peek_last_error_line;
-+		ERR_peek_last_error_line_data;
-+		ERR_peek_last_error;
-+		DES_read_2passwords;
-+		DES_read_password;
-+		UI_UTIL_read_pw;
-+		UI_UTIL_read_pw_string;
-+		ENGINE_load_aep;
-+		ENGINE_load_sureware;
-+		OPENSSL_add_all_algorithms_noconf;
-+		OPENSSL_add_all_algo_noconf;
-+		OPENSSL_add_all_algorithms_conf;
-+		OPENSSL_add_all_algo_conf;
-+		OPENSSL_load_builtin_modules;
-+		AES_ofb128_encrypt;
-+		AES_ctr128_encrypt;
-+		AES_cfb128_encrypt;
-+		ENGINE_load_4758cca;
-+		_ossl_096_des_random_seed;
-+		EVP_aes_256_ofb;
-+		EVP_aes_192_ofb;
-+		EVP_aes_128_cfb128;
-+		EVP_aes_256_cfb128;
-+		EVP_aes_128_ofb;
-+		EVP_aes_192_cfb128;
-+		CONF_modules_free;
-+		NCONF_default;
-+		OPENSSL_no_config;
-+		NCONF_WIN32;
-+		ASN1_UNIVERSALSTRING_new;
-+		EVP_des_ede_ecb;
-+		i2d_ASN1_UNIVERSALSTRING;
-+		ASN1_UNIVERSALSTRING_free;
-+		ASN1_UNIVERSALSTRING_it;
-+		d2i_ASN1_UNIVERSALSTRING;
-+		EVP_des_ede3_ecb;
-+		X509_REQ_print_ex;
-+		ENGINE_up_ref;
-+		BUF_MEM_grow_clean;
-+		CRYPTO_realloc_clean;
-+		BUF_strlcat;
-+		BIO_indent;
-+		BUF_strlcpy;
-+		OpenSSLDie;
-+		OPENSSL_cleanse;
-+		ENGINE_setup_bsd_cryptodev;
-+		ERR_release_err_state_table;
-+		EVP_aes_128_cfb8;
-+		FIPS_corrupt_rsa;
-+		FIPS_selftest_des;
-+		EVP_aes_128_cfb1;
-+		EVP_aes_192_cfb8;
-+		FIPS_mode_set;
-+		FIPS_selftest_dsa;
-+		EVP_aes_256_cfb8;
-+		FIPS_allow_md5;
-+		DES_ede3_cfb_encrypt;
-+		EVP_des_ede3_cfb8;
-+		FIPS_rand_seeded;
-+		AES_cfbr_encrypt_block;
-+		AES_cfb8_encrypt;
-+		FIPS_rand_seed;
-+		FIPS_corrupt_des;
-+		EVP_aes_192_cfb1;
-+		FIPS_selftest_aes;
-+		FIPS_set_prng_key;
-+		EVP_des_cfb8;
-+		FIPS_corrupt_dsa;
-+		FIPS_test_mode;
-+		FIPS_rand_method;
-+		EVP_aes_256_cfb1;
-+		ERR_load_FIPS_strings;
-+		FIPS_corrupt_aes;
-+		FIPS_selftest_sha1;
-+		FIPS_selftest_rsa;
-+		FIPS_corrupt_sha1;
-+		EVP_des_cfb1;
-+		FIPS_dsa_check;
-+		AES_cfb1_encrypt;
-+		EVP_des_ede3_cfb1;
-+		FIPS_rand_check;
-+		FIPS_md5_allowed;
-+		FIPS_mode;
-+		FIPS_selftest_failed;
-+		sk_is_sorted;
-+		X509_check_ca;
-+		HMAC_CTX_set_flags;
-+		d2i_PROXY_CERT_INFO_EXTENSION;
-+		PROXY_POLICY_it;
-+		i2d_PROXY_POLICY;
-+		i2d_PROXY_CERT_INFO_EXTENSION;
-+		d2i_PROXY_POLICY;
-+		PROXY_CERT_INFO_EXTENSION_new;
-+		PROXY_CERT_INFO_EXTENSION_free;
-+		PROXY_CERT_INFO_EXTENSION_it;
-+		PROXY_POLICY_free;
-+		PROXY_POLICY_new;
-+		BN_MONT_CTX_set_locked;
-+		FIPS_selftest_rng;
-+		EVP_sha384;
-+		EVP_sha512;
-+		EVP_sha224;
-+		EVP_sha256;
-+		FIPS_selftest_hmac;
-+		FIPS_corrupt_rng;
-+		BN_mod_exp_mont_consttime;
-+		RSA_X931_hash_id;
-+		RSA_padding_check_X931;
-+		RSA_verify_PKCS1_PSS;
-+		RSA_padding_add_X931;
-+		RSA_padding_add_PKCS1_PSS;
-+		PKCS1_MGF1;
-+		BN_X931_generate_Xpq;
-+		RSA_X931_generate_key;
-+		BN_X931_derive_prime;
-+		BN_X931_generate_prime;
-+		RSA_X931_derive;
-+		BIO_new_dgram;
-+		BN_get0_nist_prime_384;
-+		ERR_set_mark;
-+		X509_STORE_CTX_set0_crls;
-+		ENGINE_set_STORE;
-+		ENGINE_register_ECDSA;
-+		STORE_meth_set_list_start_fn;
-+		STORE_method_set_list_start_function;
-+		BN_BLINDING_invert_ex;
-+		NAME_CONSTRAINTS_free;
-+		STORE_ATTR_INFO_set_number;
-+		BN_BLINDING_get_thread_id;
-+		X509_STORE_CTX_set0_param;
-+		POLICY_MAPPING_it;
-+		STORE_parse_attrs_start;
-+		POLICY_CONSTRAINTS_free;
-+		EVP_PKEY_add1_attr_by_NID;
-+		BN_nist_mod_192;
-+		EC_GROUP_get_trinomial_basis;
-+		STORE_set_method;
-+		GENERAL_SUBTREE_free;
-+		NAME_CONSTRAINTS_it;
-+		ECDH_get_default_method;
-+		PKCS12_add_safe;
-+		EC_KEY_new_by_curve_name;
-+		STORE_meth_get_update_store_fn;
-+		STORE_method_get_update_store_function;
-+		ENGINE_register_ECDH;
-+		SHA512_Update;
-+		i2d_ECPrivateKey;
-+		BN_get0_nist_prime_192;
-+		STORE_modify_certificate;
-+		EC_POINT_set_affine_coordinates_GF2m;
-+		EC_POINT_set_affine_coords_GF2m;
-+		BN_GF2m_mod_exp_arr;
-+		STORE_ATTR_INFO_modify_number;
-+		X509_keyid_get0;
-+		ENGINE_load_gmp;
-+		pitem_new;
-+		BN_GF2m_mod_mul_arr;
-+		STORE_list_public_key_endp;
-+		o2i_ECPublicKey;
-+		EC_KEY_copy;
-+		BIO_dump_fp;
-+		X509_policy_node_get0_parent;
-+		EC_GROUP_check_discriminant;
-+		i2o_ECPublicKey;
-+		EC_KEY_precompute_mult;
-+		a2i_IPADDRESS;
-+		STORE_meth_set_initialise_fn;
-+		STORE_method_set_initialise_function;
-+		X509_STORE_CTX_set_depth;
-+		X509_VERIFY_PARAM_inherit;
-+		EC_POINT_point2bn;
-+		STORE_ATTR_INFO_set_dn;
-+		X509_policy_tree_get0_policies;
-+		EC_GROUP_new_curve_GF2m;
-+		STORE_destroy_method;
-+		ENGINE_unregister_STORE;
-+		EVP_PKEY_get1_EC_KEY;
-+		STORE_ATTR_INFO_get0_number;
-+		ENGINE_get_default_ECDH;
-+		EC_KEY_get_conv_form;
-+		ASN1_OCTET_STRING_NDEF_it;
-+		STORE_delete_public_key;
-+		STORE_get_public_key;
-+		STORE_modify_arbitrary;
-+		ENGINE_get_static_state;
-+		pqueue_iterator;
-+		ECDSA_SIG_new;
-+		OPENSSL_DIR_end;
-+		BN_GF2m_mod_sqr;
-+		EC_POINT_bn2point;
-+		X509_VERIFY_PARAM_set_depth;
-+		EC_KEY_set_asn1_flag;
-+		STORE_get_method;
-+		EC_KEY_get_key_method_data;
-+		ECDSA_sign_ex;
-+		STORE_parse_attrs_end;
-+		EC_GROUP_get_point_conversion_form;
-+		EC_GROUP_get_point_conv_form;
-+		STORE_method_set_store_function;
-+		STORE_ATTR_INFO_in;
-+		PEM_read_bio_ECPKParameters;
-+		EC_GROUP_get_pentanomial_basis;
-+		EVP_PKEY_add1_attr_by_txt;
-+		BN_BLINDING_set_flags;
-+		X509_VERIFY_PARAM_set1_policies;
-+		X509_VERIFY_PARAM_set1_name;
-+		X509_VERIFY_PARAM_set_purpose;
-+		STORE_get_number;
-+		ECDSA_sign_setup;
-+		BN_GF2m_mod_solve_quad_arr;
-+		EC_KEY_up_ref;
-+		POLICY_MAPPING_free;
-+		BN_GF2m_mod_div;
-+		X509_VERIFY_PARAM_set_flags;
-+		EC_KEY_free;
-+		STORE_meth_set_list_next_fn;
-+		STORE_method_set_list_next_function;
-+		PEM_write_bio_ECPrivateKey;
-+		d2i_EC_PUBKEY;
-+		STORE_meth_get_generate_fn;
-+		STORE_method_get_generate_function;
-+		STORE_meth_set_list_end_fn;
-+		STORE_method_set_list_end_function;
-+		pqueue_print;
-+		EC_GROUP_have_precompute_mult;
-+		EC_KEY_print_fp;
-+		BN_GF2m_mod_arr;
-+		PEM_write_bio_X509_CERT_PAIR;
-+		EVP_PKEY_cmp;
-+		X509_policy_level_node_count;
-+		STORE_new_engine;
-+		STORE_list_public_key_start;
-+		X509_VERIFY_PARAM_new;
-+		ECDH_get_ex_data;
-+		EVP_PKEY_get_attr;
-+		ECDSA_do_sign;
-+		ENGINE_unregister_ECDH;
-+		ECDH_OpenSSL;
-+		EC_KEY_set_conv_form;
-+		EC_POINT_dup;
-+		GENERAL_SUBTREE_new;
-+		STORE_list_crl_endp;
-+		EC_get_builtin_curves;
-+		X509_policy_node_get0_qualifiers;
-+		X509_pcy_node_get0_qualifiers;
-+		STORE_list_crl_end;
-+		EVP_PKEY_set1_EC_KEY;
-+		BN_GF2m_mod_sqrt_arr;
-+		i2d_ECPrivateKey_bio;
-+		ECPKParameters_print_fp;
-+		pqueue_find;
-+		ECDSA_SIG_free;
-+		PEM_write_bio_ECPKParameters;
-+		STORE_method_set_ctrl_function;
-+		STORE_list_public_key_end;
-+		EC_KEY_set_private_key;
-+		pqueue_peek;
-+		STORE_get_arbitrary;
-+		STORE_store_crl;
-+		X509_policy_node_get0_policy;
-+		PKCS12_add_safes;
-+		BN_BLINDING_convert_ex;
-+		X509_policy_tree_free;
-+		OPENSSL_ia32cap_loc;
-+		BN_GF2m_poly2arr;
-+		STORE_ctrl;
-+		STORE_ATTR_INFO_compare;
-+		BN_get0_nist_prime_224;
-+		i2d_ECParameters;
-+		i2d_ECPKParameters;
-+		BN_GENCB_call;
-+		d2i_ECPKParameters;
-+		STORE_meth_set_generate_fn;
-+		STORE_method_set_generate_function;
-+		ENGINE_set_ECDH;
-+		NAME_CONSTRAINTS_new;
-+		SHA256_Init;
-+		EC_KEY_get0_public_key;
-+		PEM_write_bio_EC_PUBKEY;
-+		STORE_ATTR_INFO_set_cstr;
-+		STORE_list_crl_next;
-+		STORE_ATTR_INFO_in_range;
-+		ECParameters_print;
-+		STORE_meth_set_delete_fn;
-+		STORE_method_set_delete_function;
-+		STORE_list_certificate_next;
-+		ASN1_generate_nconf;
-+		BUF_memdup;
-+		BN_GF2m_mod_mul;
-+		STORE_meth_get_list_next_fn;
-+		STORE_method_get_list_next_function;
-+		STORE_ATTR_INFO_get0_dn;
-+		STORE_list_private_key_next;
-+		EC_GROUP_set_seed;
-+		X509_VERIFY_PARAM_set_trust;
-+		STORE_ATTR_INFO_free;
-+		STORE_get_private_key;
-+		EVP_PKEY_get_attr_count;
-+		STORE_ATTR_INFO_new;
-+		EC_GROUP_get_curve_GF2m;
-+		STORE_meth_set_revoke_fn;
-+		STORE_method_set_revoke_function;
-+		STORE_store_number;
-+		BN_is_prime_ex;
-+		STORE_revoke_public_key;
-+		X509_STORE_CTX_get0_param;
-+		STORE_delete_arbitrary;
-+		PEM_read_X509_CERT_PAIR;
-+		X509_STORE_set_depth;
-+		ECDSA_get_ex_data;
-+		SHA224;
-+		BIO_dump_indent_fp;
-+		EC_KEY_set_group;
-+		BUF_strndup;
-+		STORE_list_certificate_start;
-+		BN_GF2m_mod;
-+		X509_REQ_check_private_key;
-+		EC_GROUP_get_seed_len;
-+		ERR_load_STORE_strings;
-+		PEM_read_bio_EC_PUBKEY;
-+		STORE_list_private_key_end;
-+		i2d_EC_PUBKEY;
-+		ECDSA_get_default_method;
-+		ASN1_put_eoc;
-+		X509_STORE_CTX_get_explicit_policy;
-+		X509_STORE_CTX_get_expl_policy;
-+		X509_VERIFY_PARAM_table_cleanup;
-+		STORE_modify_private_key;
-+		X509_VERIFY_PARAM_free;
-+		EC_METHOD_get_field_type;
-+		EC_GFp_nist_method;
-+		STORE_meth_set_modify_fn;
-+		STORE_method_set_modify_function;
-+		STORE_parse_attrs_next;
-+		ENGINE_load_padlock;
-+		EC_GROUP_set_curve_name;
-+		X509_CERT_PAIR_it;
-+		STORE_meth_get_revoke_fn;
-+		STORE_method_get_revoke_function;
-+		STORE_method_set_get_function;
-+		STORE_modify_number;
-+		STORE_method_get_store_function;
-+		STORE_store_private_key;
-+		BN_GF2m_mod_sqr_arr;
-+		RSA_setup_blinding;
-+		BIO_s_datagram;
-+		STORE_Memory;
-+		sk_find_ex;
-+		EC_GROUP_set_curve_GF2m;
-+		ENGINE_set_default_ECDSA;
-+		POLICY_CONSTRAINTS_new;
-+		BN_GF2m_mod_sqrt;
-+		ECDH_set_default_method;
-+		EC_KEY_generate_key;
-+		SHA384_Update;
-+		BN_GF2m_arr2poly;
-+		STORE_method_get_get_function;
-+		STORE_meth_set_cleanup_fn;
-+		STORE_method_set_cleanup_function;
-+		EC_GROUP_check;
-+		d2i_ECPrivateKey_bio;
-+		EC_KEY_insert_key_method_data;
-+		STORE_meth_get_lock_store_fn;
-+		STORE_method_get_lock_store_function;
-+		X509_VERIFY_PARAM_get_depth;
-+		SHA224_Final;
-+		STORE_meth_set_update_store_fn;
-+		STORE_method_set_update_store_function;
-+		SHA224_Update;
-+		d2i_ECPrivateKey;
-+		ASN1_item_ndef_i2d;
-+		STORE_delete_private_key;
-+		ERR_pop_to_mark;
-+		ENGINE_register_all_STORE;
-+		X509_policy_level_get0_node;
-+		i2d_PKCS7_NDEF;
-+		EC_GROUP_get_degree;
-+		ASN1_generate_v3;
-+		STORE_ATTR_INFO_modify_cstr;
-+		X509_policy_tree_level_count;
-+		BN_GF2m_add;
-+		EC_KEY_get0_group;
-+		STORE_generate_crl;
-+		STORE_store_public_key;
-+		X509_CERT_PAIR_free;
-+		STORE_revoke_private_key;
-+		BN_nist_mod_224;
-+		SHA512_Final;
-+		STORE_ATTR_INFO_modify_dn;
-+		STORE_meth_get_initialise_fn;
-+		STORE_method_get_initialise_function;
-+		STORE_delete_number;
-+		i2d_EC_PUBKEY_bio;
-+		BIO_dgram_non_fatal_error;
-+		EC_GROUP_get_asn1_flag;
-+		STORE_ATTR_INFO_in_ex;
-+		STORE_list_crl_start;
-+		ECDH_get_ex_new_index;
-+		STORE_meth_get_modify_fn;
-+		STORE_method_get_modify_function;
-+		v2i_ASN1_BIT_STRING;
-+		STORE_store_certificate;
-+		OBJ_bsearch_ex;
-+		X509_STORE_CTX_set_default;
-+		STORE_ATTR_INFO_set_sha1str;
-+		BN_GF2m_mod_inv;
-+		BN_GF2m_mod_exp;
-+		STORE_modify_public_key;
-+		STORE_meth_get_list_start_fn;
-+		STORE_method_get_list_start_function;
-+		EC_GROUP_get0_seed;
-+		STORE_store_arbitrary;
-+		STORE_meth_set_unlock_store_fn;
-+		STORE_method_set_unlock_store_function;
-+		BN_GF2m_mod_div_arr;
-+		ENGINE_set_ECDSA;
-+		STORE_create_method;
-+		ECPKParameters_print;
-+		EC_KEY_get0_private_key;
-+		PEM_write_EC_PUBKEY;
-+		X509_VERIFY_PARAM_set1;
-+		ECDH_set_method;
-+		v2i_GENERAL_NAME_ex;
-+		ECDH_set_ex_data;
-+		STORE_generate_key;
-+		BN_nist_mod_521;
-+		X509_policy_tree_get0_level;
-+		EC_GROUP_set_point_conversion_form;
-+		EC_GROUP_set_point_conv_form;
-+		PEM_read_EC_PUBKEY;
-+		i2d_ECDSA_SIG;
-+		ECDSA_OpenSSL;
-+		STORE_delete_crl;
-+		EC_KEY_get_enc_flags;
-+		ASN1_const_check_infinite_end;
-+		EVP_PKEY_delete_attr;
-+		ECDSA_set_default_method;
-+		EC_POINT_set_compressed_coordinates_GF2m;
-+		EC_POINT_set_compr_coords_GF2m;
-+		EC_GROUP_cmp;
-+		STORE_revoke_certificate;
-+		BN_get0_nist_prime_256;
-+		STORE_meth_get_delete_fn;
-+		STORE_method_get_delete_function;
-+		SHA224_Init;
-+		PEM_read_ECPrivateKey;
-+		SHA512_Init;
-+		STORE_parse_attrs_endp;
-+		BN_set_negative;
-+		ERR_load_ECDSA_strings;
-+		EC_GROUP_get_basis_type;
-+		STORE_list_public_key_next;
-+		i2v_ASN1_BIT_STRING;
-+		STORE_OBJECT_free;
-+		BN_nist_mod_384;
-+		i2d_X509_CERT_PAIR;
-+		PEM_write_ECPKParameters;
-+		ECDH_compute_key;
-+		STORE_ATTR_INFO_get0_sha1str;
-+		ENGINE_register_all_ECDH;
-+		pqueue_pop;
-+		STORE_ATTR_INFO_get0_cstr;
-+		POLICY_CONSTRAINTS_it;
-+		STORE_get_ex_new_index;
-+		EVP_PKEY_get_attr_by_OBJ;
-+		X509_VERIFY_PARAM_add0_policy;
-+		BN_GF2m_mod_solve_quad;
-+		SHA256;
-+		i2d_ECPrivateKey_fp;
-+		X509_policy_tree_get0_user_policies;
-+		X509_pcy_tree_get0_usr_policies;
-+		OPENSSL_DIR_read;
-+		ENGINE_register_all_ECDSA;
-+		X509_VERIFY_PARAM_lookup;
-+		EC_POINT_get_affine_coordinates_GF2m;
-+		EC_POINT_get_affine_coords_GF2m;
-+		EC_GROUP_dup;
-+		ENGINE_get_default_ECDSA;
-+		EC_KEY_new;
-+		SHA256_Transform;
-+		EC_KEY_set_enc_flags;
-+		ECDSA_verify;
-+		EC_POINT_point2hex;
-+		ENGINE_get_STORE;
-+		SHA512;
-+		STORE_get_certificate;
-+		ECDSA_do_sign_ex;
-+		ECDSA_do_verify;
-+		d2i_ECPrivateKey_fp;
-+		STORE_delete_certificate;
-+		SHA512_Transform;
-+		X509_STORE_set1_param;
-+		STORE_method_get_ctrl_function;
-+		STORE_free;
-+		PEM_write_ECPrivateKey;
-+		STORE_meth_get_unlock_store_fn;
-+		STORE_method_get_unlock_store_function;
-+		STORE_get_ex_data;
-+		EC_KEY_set_public_key;
-+		PEM_read_ECPKParameters;
-+		X509_CERT_PAIR_new;
-+		ENGINE_register_STORE;
-+		RSA_generate_key_ex;
-+		DSA_generate_parameters_ex;
-+		ECParameters_print_fp;
-+		X509V3_NAME_from_section;
-+		EVP_PKEY_add1_attr;
-+		STORE_modify_crl;
-+		STORE_list_private_key_start;
-+		POLICY_MAPPINGS_it;
-+		GENERAL_SUBTREE_it;
-+		EC_GROUP_get_curve_name;
-+		PEM_write_X509_CERT_PAIR;
-+		BIO_dump_indent_cb;
-+		d2i_X509_CERT_PAIR;
-+		STORE_list_private_key_endp;
-+		asn1_const_Finish;
-+		i2d_EC_PUBKEY_fp;
-+		BN_nist_mod_256;
-+		X509_VERIFY_PARAM_add0_table;
-+		pqueue_free;
-+		BN_BLINDING_create_param;
-+		ECDSA_size;
-+		d2i_EC_PUBKEY_bio;
-+		BN_get0_nist_prime_521;
-+		STORE_ATTR_INFO_modify_sha1str;
-+		BN_generate_prime_ex;
-+		EC_GROUP_new_by_curve_name;
-+		SHA256_Final;
-+		DH_generate_parameters_ex;
-+		PEM_read_bio_ECPrivateKey;
-+		STORE_meth_get_cleanup_fn;
-+		STORE_method_get_cleanup_function;
-+		ENGINE_get_ECDH;
-+		d2i_ECDSA_SIG;
-+		BN_is_prime_fasttest_ex;
-+		ECDSA_sign;
-+		X509_policy_check;
-+		EVP_PKEY_get_attr_by_NID;
-+		STORE_set_ex_data;
-+		ENGINE_get_ECDSA;
-+		EVP_ecdsa;
-+		BN_BLINDING_get_flags;
-+		PKCS12_add_cert;
-+		STORE_OBJECT_new;
-+		ERR_load_ECDH_strings;
-+		EC_KEY_dup;
-+		EVP_CIPHER_CTX_rand_key;
-+		ECDSA_set_method;
-+		a2i_IPADDRESS_NC;
-+		d2i_ECParameters;
-+		STORE_list_certificate_end;
-+		STORE_get_crl;
-+		X509_POLICY_NODE_print;
-+		SHA384_Init;
-+		EC_GF2m_simple_method;
-+		ECDSA_set_ex_data;
-+		SHA384_Final;
-+		PKCS7_set_digest;
-+		EC_KEY_print;
-+		STORE_meth_set_lock_store_fn;
-+		STORE_method_set_lock_store_function;
-+		ECDSA_get_ex_new_index;
-+		SHA384;
-+		POLICY_MAPPING_new;
-+		STORE_list_certificate_endp;
-+		X509_STORE_CTX_get0_policy_tree;
-+		EC_GROUP_set_asn1_flag;
-+		EC_KEY_check_key;
-+		d2i_EC_PUBKEY_fp;
-+		PKCS7_set0_type_other;
-+		PEM_read_bio_X509_CERT_PAIR;
-+		pqueue_next;
-+		STORE_meth_get_list_end_fn;
-+		STORE_method_get_list_end_function;
-+		EVP_PKEY_add1_attr_by_OBJ;
-+		X509_VERIFY_PARAM_set_time;
-+		pqueue_new;
-+		ENGINE_set_default_ECDH;
-+		STORE_new_method;
-+		PKCS12_add_key;
-+		DSO_merge;
-+		EC_POINT_hex2point;
-+		BIO_dump_cb;
-+		SHA256_Update;
-+		pqueue_insert;
-+		pitem_free;
-+		BN_GF2m_mod_inv_arr;
-+		ENGINE_unregister_ECDSA;
-+		BN_BLINDING_set_thread_id;
-+		get_rfc3526_prime_8192;
-+		X509_VERIFY_PARAM_clear_flags;
-+		get_rfc2409_prime_1024;
-+		DH_check_pub_key;
-+		get_rfc3526_prime_2048;
-+		get_rfc3526_prime_6144;
-+		get_rfc3526_prime_1536;
-+		get_rfc3526_prime_3072;
-+		get_rfc3526_prime_4096;
-+		get_rfc2409_prime_768;
-+		X509_VERIFY_PARAM_get_flags;
-+		EVP_CIPHER_CTX_new;
-+		EVP_CIPHER_CTX_free;
-+		Camellia_cbc_encrypt;
-+		Camellia_cfb128_encrypt;
-+		Camellia_cfb1_encrypt;
-+		Camellia_cfb8_encrypt;
-+		Camellia_ctr128_encrypt;
-+		Camellia_cfbr_encrypt_block;
-+		Camellia_decrypt;
-+		Camellia_ecb_encrypt;
-+		Camellia_encrypt;
-+		Camellia_ofb128_encrypt;
-+		Camellia_set_key;
-+		EVP_camellia_128_cbc;
-+		EVP_camellia_128_cfb128;
-+		EVP_camellia_128_cfb1;
-+		EVP_camellia_128_cfb8;
-+		EVP_camellia_128_ecb;
-+		EVP_camellia_128_ofb;
-+		EVP_camellia_192_cbc;
-+		EVP_camellia_192_cfb128;
-+		EVP_camellia_192_cfb1;
-+		EVP_camellia_192_cfb8;
-+		EVP_camellia_192_ecb;
-+		EVP_camellia_192_ofb;
-+		EVP_camellia_256_cbc;
-+		EVP_camellia_256_cfb128;
-+		EVP_camellia_256_cfb1;
-+		EVP_camellia_256_cfb8;
-+		EVP_camellia_256_ecb;
-+		EVP_camellia_256_ofb;
-+		a2i_ipadd;
-+		ASIdentifiers_free;
-+		i2d_ASIdOrRange;
-+		EVP_CIPHER_block_size;
-+		v3_asid_is_canonical;
-+		IPAddressChoice_free;
-+		EVP_CIPHER_CTX_set_app_data;
-+		BIO_set_callback_arg;
-+		v3_addr_add_prefix;
-+		IPAddressOrRange_it;
-+		BIO_set_flags;
-+		ASIdentifiers_it;
-+		v3_addr_get_range;
-+		BIO_method_type;
-+		v3_addr_inherits;
-+		IPAddressChoice_it;
-+		AES_ige_encrypt;
-+		v3_addr_add_range;
-+		EVP_CIPHER_CTX_nid;
-+		d2i_ASRange;
-+		v3_addr_add_inherit;
-+		v3_asid_add_id_or_range;
-+		v3_addr_validate_resource_set;
-+		EVP_CIPHER_iv_length;
-+		EVP_MD_type;
-+		v3_asid_canonize;
-+		IPAddressRange_free;
-+		v3_asid_add_inherit;
-+		EVP_CIPHER_CTX_key_length;
-+		IPAddressRange_new;
-+		ASIdOrRange_new;
-+		EVP_MD_size;
-+		EVP_MD_CTX_test_flags;
-+		BIO_clear_flags;
-+		i2d_ASRange;
-+		IPAddressRange_it;
-+		IPAddressChoice_new;
-+		ASIdentifierChoice_new;
-+		ASRange_free;
-+		EVP_MD_pkey_type;
-+		EVP_MD_CTX_clear_flags;
-+		IPAddressFamily_free;
-+		i2d_IPAddressFamily;
-+		IPAddressOrRange_new;
-+		EVP_CIPHER_flags;
-+		v3_asid_validate_resource_set;
-+		d2i_IPAddressRange;
-+		AES_bi_ige_encrypt;
-+		BIO_get_callback;
-+		IPAddressOrRange_free;
-+		v3_addr_subset;
-+		d2i_IPAddressFamily;
-+		v3_asid_subset;
-+		BIO_test_flags;
-+		i2d_ASIdentifierChoice;
-+		ASRange_it;
-+		d2i_ASIdentifiers;
-+		ASRange_new;
-+		d2i_IPAddressChoice;
-+		v3_addr_get_afi;
-+		EVP_CIPHER_key_length;
-+		EVP_Cipher;
-+		i2d_IPAddressOrRange;
-+		ASIdOrRange_it;
-+		EVP_CIPHER_nid;
-+		i2d_IPAddressChoice;
-+		EVP_CIPHER_CTX_block_size;
-+		ASIdentifiers_new;
-+		v3_addr_validate_path;
-+		IPAddressFamily_new;
-+		EVP_MD_CTX_set_flags;
-+		v3_addr_is_canonical;
-+		i2d_IPAddressRange;
-+		IPAddressFamily_it;
-+		v3_asid_inherits;
-+		EVP_CIPHER_CTX_cipher;
-+		EVP_CIPHER_CTX_get_app_data;
-+		EVP_MD_block_size;
-+		EVP_CIPHER_CTX_flags;
-+		v3_asid_validate_path;
-+		d2i_IPAddressOrRange;
-+		v3_addr_canonize;
-+		ASIdentifierChoice_it;
-+		EVP_MD_CTX_md;
-+		d2i_ASIdentifierChoice;
-+		BIO_method_name;
-+		EVP_CIPHER_CTX_iv_length;
-+		ASIdOrRange_free;
-+		ASIdentifierChoice_free;
-+		BIO_get_callback_arg;
-+		BIO_set_callback;
-+		d2i_ASIdOrRange;
-+		i2d_ASIdentifiers;
-+		SEED_decrypt;
-+		SEED_encrypt;
-+		SEED_cbc_encrypt;
-+		EVP_seed_ofb;
-+		SEED_cfb128_encrypt;
-+		SEED_ofb128_encrypt;
-+		EVP_seed_cbc;
-+		SEED_ecb_encrypt;
-+		EVP_seed_ecb;
-+		SEED_set_key;
-+		EVP_seed_cfb128;
-+		X509_EXTENSIONS_it;
-+		X509_get1_ocsp;
-+		OCSP_REQ_CTX_free;
-+		i2d_X509_EXTENSIONS;
-+		OCSP_sendreq_nbio;
-+		OCSP_sendreq_new;
-+		d2i_X509_EXTENSIONS;
-+		X509_ALGORS_it;
-+		X509_ALGOR_get0;
-+		X509_ALGOR_set0;
-+		AES_unwrap_key;
-+		AES_wrap_key;
-+		X509at_get0_data_by_OBJ;
-+		ASN1_TYPE_set1;
-+		ASN1_STRING_set0;
-+		i2d_X509_ALGORS;
-+		BIO_f_zlib;
-+		COMP_zlib_cleanup;
-+		d2i_X509_ALGORS;
-+		CMS_ReceiptRequest_free;
-+		PEM_write_CMS;
-+		CMS_add0_CertificateChoices;
-+		CMS_unsigned_add1_attr_by_OBJ;
-+		ERR_load_CMS_strings;
-+		CMS_sign_receipt;
-+		i2d_CMS_ContentInfo;
-+		CMS_signed_delete_attr;
-+		d2i_CMS_bio;
-+		CMS_unsigned_get_attr_by_NID;
-+		CMS_verify;
-+		SMIME_read_CMS;
-+		CMS_decrypt_set1_key;
-+		CMS_SignerInfo_get0_algs;
-+		CMS_add1_cert;
-+		CMS_set_detached;
-+		CMS_encrypt;
-+		CMS_EnvelopedData_create;
-+		CMS_uncompress;
-+		CMS_add0_crl;
-+		CMS_SignerInfo_verify_content;
-+		CMS_unsigned_get0_data_by_OBJ;
-+		PEM_write_bio_CMS;
-+		CMS_unsigned_get_attr;
-+		CMS_RecipientInfo_ktri_cert_cmp;
-+		CMS_RecipientInfo_ktri_get0_algs;
-+		CMS_RecipInfo_ktri_get0_algs;
-+		CMS_ContentInfo_free;
-+		CMS_final;
-+		CMS_add_simple_smimecap;
-+		CMS_SignerInfo_verify;
-+		CMS_data;
-+		CMS_ContentInfo_it;
-+		d2i_CMS_ReceiptRequest;
-+		CMS_compress;
-+		CMS_digest_create;
-+		CMS_SignerInfo_cert_cmp;
-+		CMS_SignerInfo_sign;
-+		CMS_data_create;
-+		i2d_CMS_bio;
-+		CMS_EncryptedData_set1_key;
-+		CMS_decrypt;
-+		int_smime_write_ASN1;
-+		CMS_unsigned_delete_attr;
-+		CMS_unsigned_get_attr_count;
-+		CMS_add_smimecap;
-+		PEM_read_CMS;
-+		CMS_signed_get_attr_by_OBJ;
-+		d2i_CMS_ContentInfo;
-+		CMS_add_standard_smimecap;
-+		CMS_ContentInfo_new;
-+		CMS_RecipientInfo_type;
-+		CMS_get0_type;
-+		CMS_is_detached;
-+		CMS_sign;
-+		CMS_signed_add1_attr;
-+		CMS_unsigned_get_attr_by_OBJ;
-+		SMIME_write_CMS;
-+		CMS_EncryptedData_decrypt;
-+		CMS_get0_RecipientInfos;
-+		CMS_add0_RevocationInfoChoice;
-+		CMS_decrypt_set1_pkey;
-+		CMS_SignerInfo_set1_signer_cert;
-+		CMS_get0_signers;
-+		CMS_ReceiptRequest_get0_values;
-+		CMS_signed_get0_data_by_OBJ;
-+		CMS_get0_SignerInfos;
-+		CMS_add0_cert;
-+		CMS_EncryptedData_encrypt;
-+		CMS_digest_verify;
-+		CMS_set1_signers_certs;
-+		CMS_signed_get_attr;
-+		CMS_RecipientInfo_set0_key;
-+		CMS_SignedData_init;
-+		CMS_RecipientInfo_kekri_get0_id;
-+		CMS_verify_receipt;
-+		CMS_ReceiptRequest_it;
-+		PEM_read_bio_CMS;
-+		CMS_get1_crls;
-+		CMS_add0_recipient_key;
-+		SMIME_read_ASN1;
-+		CMS_ReceiptRequest_new;
-+		CMS_get0_content;
-+		CMS_get1_ReceiptRequest;
-+		CMS_signed_add1_attr_by_OBJ;
-+		CMS_RecipientInfo_kekri_id_cmp;
-+		CMS_add1_ReceiptRequest;
-+		CMS_SignerInfo_get0_signer_id;
-+		CMS_unsigned_add1_attr_by_NID;
-+		CMS_unsigned_add1_attr;
-+		CMS_signed_get_attr_by_NID;
-+		CMS_get1_certs;
-+		CMS_signed_add1_attr_by_NID;
-+		CMS_unsigned_add1_attr_by_txt;
-+		CMS_dataFinal;
-+		CMS_RecipientInfo_ktri_get0_signer_id;
-+		CMS_RecipInfo_ktri_get0_sigr_id;
-+		i2d_CMS_ReceiptRequest;
-+		CMS_add1_recipient_cert;
-+		CMS_dataInit;
-+		CMS_signed_add1_attr_by_txt;
-+		CMS_RecipientInfo_decrypt;
-+		CMS_signed_get_attr_count;
-+		CMS_get0_eContentType;
-+		CMS_set1_eContentType;
-+		CMS_ReceiptRequest_create0;
-+		CMS_add1_signer;
-+		CMS_RecipientInfo_set0_pkey;
-+		ENGINE_set_load_ssl_client_cert_function;
-+		ENGINE_set_ld_ssl_clnt_cert_fn;
-+		ENGINE_get_ssl_client_cert_function;
-+		ENGINE_get_ssl_client_cert_fn;
-+		ENGINE_load_ssl_client_cert;
-+		ENGINE_load_capi;
-+		OPENSSL_isservice;
-+		FIPS_dsa_sig_decode;
-+		EVP_CIPHER_CTX_clear_flags;
-+		FIPS_rand_status;
-+		FIPS_rand_set_key;
-+		CRYPTO_set_mem_info_functions;
-+		RSA_X931_generate_key_ex;
-+		int_ERR_set_state_func;
-+		int_EVP_MD_set_engine_callbacks;
-+		int_CRYPTO_set_do_dynlock_callback;
-+		FIPS_rng_stick;
-+		EVP_CIPHER_CTX_set_flags;
-+		BN_X931_generate_prime_ex;
-+		FIPS_selftest_check;
-+		FIPS_rand_set_dt;
-+		CRYPTO_dbg_pop_info;
-+		FIPS_dsa_free;
-+		RSA_X931_derive_ex;
-+		FIPS_rsa_new;
-+		FIPS_rand_bytes;
-+		fips_cipher_test;
-+		EVP_CIPHER_CTX_test_flags;
-+		CRYPTO_malloc_debug_init;
-+		CRYPTO_dbg_push_info;
-+		FIPS_corrupt_rsa_keygen;
-+		FIPS_dh_new;
-+		FIPS_corrupt_dsa_keygen;
-+		FIPS_dh_free;
-+		fips_pkey_signature_test;
-+		EVP_add_alg_module;
-+		int_RAND_init_engine_callbacks;
-+		int_EVP_CIPHER_set_engine_callbacks;
-+		int_EVP_MD_init_engine_callbacks;
-+		FIPS_rand_test_mode;
-+		FIPS_rand_reset;
-+		FIPS_dsa_new;
-+		int_RAND_set_callbacks;
-+		BN_X931_derive_prime_ex;
-+		int_ERR_lib_init;
-+		int_EVP_CIPHER_init_engine_callbacks;
-+		FIPS_rsa_free;
-+		FIPS_dsa_sig_encode;
-+		CRYPTO_dbg_remove_all_info;
-+		OPENSSL_init;
-+		CRYPTO_strdup;
-+		JPAKE_STEP3A_process;
-+		JPAKE_STEP1_release;
-+		JPAKE_get_shared_key;
-+		JPAKE_STEP3B_init;
-+		JPAKE_STEP1_generate;
-+		JPAKE_STEP1_init;
-+		JPAKE_STEP3B_process;
-+		JPAKE_STEP2_generate;
-+		JPAKE_CTX_new;
-+		JPAKE_CTX_free;
-+		JPAKE_STEP3B_release;
-+		JPAKE_STEP3A_release;
-+		JPAKE_STEP2_process;
-+		JPAKE_STEP3B_generate;
-+		JPAKE_STEP1_process;
-+		JPAKE_STEP3A_generate;
-+		JPAKE_STEP2_release;
-+		JPAKE_STEP3A_init;
-+		ERR_load_JPAKE_strings;
-+		JPAKE_STEP2_init;
-+		pqueue_size;
-+		i2d_TS_ACCURACY;
-+		i2d_TS_MSG_IMPRINT_fp;
-+		i2d_TS_MSG_IMPRINT;
-+		EVP_PKEY_print_public;
-+		EVP_PKEY_CTX_new;
-+		i2d_TS_TST_INFO;
-+		EVP_PKEY_asn1_find;
-+		DSO_METHOD_beos;
-+		TS_CONF_load_cert;
-+		TS_REQ_get_ext;
-+		EVP_PKEY_sign_init;
-+		ASN1_item_print;
-+		TS_TST_INFO_set_nonce;
-+		TS_RESP_dup;
-+		ENGINE_register_pkey_meths;
-+		EVP_PKEY_asn1_add0;
-+		PKCS7_add0_attrib_signing_time;
-+		i2d_TS_TST_INFO_fp;
-+		BIO_asn1_get_prefix;
-+		TS_TST_INFO_set_time;
-+		EVP_PKEY_meth_set_decrypt;
-+		EVP_PKEY_set_type_str;
-+		EVP_PKEY_CTX_get_keygen_info;
-+		TS_REQ_set_policy_id;
-+		d2i_TS_RESP_fp;
-+		ENGINE_get_pkey_asn1_meth_engine;
-+		ENGINE_get_pkey_asn1_meth_eng;
-+		WHIRLPOOL_Init;
-+		TS_RESP_set_status_info;
-+		EVP_PKEY_keygen;
-+		EVP_DigestSignInit;
-+		TS_ACCURACY_set_millis;
-+		TS_REQ_dup;
-+		GENERAL_NAME_dup;
-+		ASN1_SEQUENCE_ANY_it;
-+		WHIRLPOOL;
-+		X509_STORE_get1_crls;
-+		ENGINE_get_pkey_asn1_meth;
-+		EVP_PKEY_asn1_new;
-+		BIO_new_NDEF;
-+		ENGINE_get_pkey_meth;
-+		TS_MSG_IMPRINT_set_algo;
-+		i2d_TS_TST_INFO_bio;
-+		TS_TST_INFO_set_ordering;
-+		TS_TST_INFO_get_ext_by_OBJ;
-+		CRYPTO_THREADID_set_pointer;
-+		TS_CONF_get_tsa_section;
-+		SMIME_write_ASN1;
-+		TS_RESP_CTX_set_signer_key;
-+		EVP_PKEY_encrypt_old;
-+		EVP_PKEY_encrypt_init;
-+		CRYPTO_THREADID_cpy;
-+		ASN1_PCTX_get_cert_flags;
-+		i2d_ESS_SIGNING_CERT;
-+		TS_CONF_load_key;
-+		i2d_ASN1_SEQUENCE_ANY;
-+		d2i_TS_MSG_IMPRINT_bio;
-+		EVP_PKEY_asn1_set_public;
-+		b2i_PublicKey_bio;
-+		BIO_asn1_set_prefix;
-+		EVP_PKEY_new_mac_key;
-+		BIO_new_CMS;
-+		CRYPTO_THREADID_cmp;
-+		TS_REQ_ext_free;
-+		EVP_PKEY_asn1_set_free;
-+		EVP_PKEY_get0_asn1;
-+		d2i_NETSCAPE_X509;
-+		EVP_PKEY_verify_recover_init;
-+		EVP_PKEY_CTX_set_data;
-+		EVP_PKEY_keygen_init;
-+		TS_RESP_CTX_set_status_info;
-+		TS_MSG_IMPRINT_get_algo;
-+		TS_REQ_print_bio;
-+		EVP_PKEY_CTX_ctrl_str;
-+		EVP_PKEY_get_default_digest_nid;
-+		PEM_write_bio_PKCS7_stream;
-+		TS_MSG_IMPRINT_print_bio;
-+		BN_asc2bn;
-+		TS_REQ_get_policy_id;
-+		ENGINE_set_default_pkey_asn1_meths;
-+		ENGINE_set_def_pkey_asn1_meths;
-+		d2i_TS_ACCURACY;
-+		DSO_global_lookup;
-+		TS_CONF_set_tsa_name;
-+		i2d_ASN1_SET_ANY;
-+		ENGINE_load_gost;
-+		WHIRLPOOL_BitUpdate;
-+		ASN1_PCTX_get_flags;
-+		TS_TST_INFO_get_ext_by_NID;
-+		TS_RESP_new;
-+		ESS_CERT_ID_dup;
-+		TS_STATUS_INFO_dup;
-+		TS_REQ_delete_ext;
-+		EVP_DigestVerifyFinal;
-+		EVP_PKEY_print_params;
-+		i2d_CMS_bio_stream;
-+		TS_REQ_get_msg_imprint;
-+		OBJ_find_sigid_by_algs;
-+		TS_TST_INFO_get_serial;
-+		TS_REQ_get_nonce;
-+		X509_PUBKEY_set0_param;
-+		EVP_PKEY_CTX_set0_keygen_info;
-+		DIST_POINT_set_dpname;
-+		i2d_ISSUING_DIST_POINT;
-+		ASN1_SET_ANY_it;
-+		EVP_PKEY_CTX_get_data;
-+		TS_STATUS_INFO_print_bio;
-+		EVP_PKEY_derive_init;
-+		d2i_TS_TST_INFO;
-+		EVP_PKEY_asn1_add_alias;
-+		d2i_TS_RESP_bio;
-+		OTHERNAME_cmp;
-+		GENERAL_NAME_set0_value;
-+		PKCS7_RECIP_INFO_get0_alg;
-+		TS_RESP_CTX_new;
-+		TS_RESP_set_tst_info;
-+		PKCS7_final;
-+		EVP_PKEY_base_id;
-+		TS_RESP_CTX_set_signer_cert;
-+		TS_REQ_set_msg_imprint;
-+		EVP_PKEY_CTX_ctrl;
-+		TS_CONF_set_digests;
-+		d2i_TS_MSG_IMPRINT;
-+		EVP_PKEY_meth_set_ctrl;
-+		TS_REQ_get_ext_by_NID;
-+		PKCS5_pbe_set0_algor;
-+		BN_BLINDING_thread_id;
-+		TS_ACCURACY_new;
-+		X509_CRL_METHOD_free;
-+		ASN1_PCTX_get_nm_flags;
-+		EVP_PKEY_meth_set_sign;
-+		CRYPTO_THREADID_current;
-+		EVP_PKEY_decrypt_init;
-+		NETSCAPE_X509_free;
-+		i2b_PVK_bio;
-+		EVP_PKEY_print_private;
-+		GENERAL_NAME_get0_value;
-+		b2i_PVK_bio;
-+		ASN1_UTCTIME_adj;
-+		TS_TST_INFO_new;
-+		EVP_MD_do_all_sorted;
-+		TS_CONF_set_default_engine;
-+		TS_ACCURACY_set_seconds;
-+		TS_TST_INFO_get_time;
-+		PKCS8_pkey_get0;
-+		EVP_PKEY_asn1_get0;
-+		OBJ_add_sigid;
-+		PKCS7_SIGNER_INFO_sign;
-+		EVP_PKEY_paramgen_init;
-+		EVP_PKEY_sign;
-+		OBJ_sigid_free;
-+		EVP_PKEY_meth_set_init;
-+		d2i_ESS_ISSUER_SERIAL;
-+		ISSUING_DIST_POINT_new;
-+		ASN1_TIME_adj;
-+		TS_OBJ_print_bio;
-+		EVP_PKEY_meth_set_verify_recover;
-+		EVP_PKEY_meth_set_vrfy_recover;
-+		TS_RESP_get_status_info;
-+		CMS_stream;
-+		EVP_PKEY_CTX_set_cb;
-+		PKCS7_to_TS_TST_INFO;
-+		ASN1_PCTX_get_oid_flags;
-+		TS_TST_INFO_add_ext;
-+		EVP_PKEY_meth_set_derive;
-+		i2d_TS_RESP_fp;
-+		i2d_TS_MSG_IMPRINT_bio;
-+		TS_RESP_CTX_set_accuracy;
-+		TS_REQ_set_nonce;
-+		ESS_CERT_ID_new;
-+		ENGINE_pkey_asn1_find_str;
-+		TS_REQ_get_ext_count;
-+		BUF_reverse;
-+		TS_TST_INFO_print_bio;
-+		d2i_ISSUING_DIST_POINT;
-+		ENGINE_get_pkey_meths;
-+		i2b_PrivateKey_bio;
-+		i2d_TS_RESP;
-+		b2i_PublicKey;
-+		TS_VERIFY_CTX_cleanup;
-+		TS_STATUS_INFO_free;
-+		TS_RESP_verify_token;
-+		OBJ_bsearch_ex_;
-+		ASN1_bn_print;
-+		EVP_PKEY_asn1_get_count;
-+		ENGINE_register_pkey_asn1_meths;
-+		ASN1_PCTX_set_nm_flags;
-+		EVP_DigestVerifyInit;
-+		ENGINE_set_default_pkey_meths;
-+		TS_TST_INFO_get_policy_id;
-+		TS_REQ_get_cert_req;
-+		X509_CRL_set_meth_data;
-+		PKCS8_pkey_set0;
-+		ASN1_STRING_copy;
-+		d2i_TS_TST_INFO_fp;
-+		X509_CRL_match;
-+		EVP_PKEY_asn1_set_private;
-+		TS_TST_INFO_get_ext_d2i;
-+		TS_RESP_CTX_add_policy;
-+		d2i_TS_RESP;
-+		TS_CONF_load_certs;
-+		TS_TST_INFO_get_msg_imprint;
-+		ERR_load_TS_strings;
-+		TS_TST_INFO_get_version;
-+		EVP_PKEY_CTX_dup;
-+		EVP_PKEY_meth_set_verify;
-+		i2b_PublicKey_bio;
-+		TS_CONF_set_certs;
-+		EVP_PKEY_asn1_get0_info;
-+		TS_VERIFY_CTX_free;
-+		TS_REQ_get_ext_by_critical;
-+		TS_RESP_CTX_set_serial_cb;
-+		X509_CRL_get_meth_data;
-+		TS_RESP_CTX_set_time_cb;
-+		TS_MSG_IMPRINT_get_msg;
-+		TS_TST_INFO_ext_free;
-+		TS_REQ_get_version;
-+		TS_REQ_add_ext;
-+		EVP_PKEY_CTX_set_app_data;
-+		OBJ_bsearch_;
-+		EVP_PKEY_meth_set_verifyctx;
-+		i2d_PKCS7_bio_stream;
-+		CRYPTO_THREADID_set_numeric;
-+		PKCS7_sign_add_signer;
-+		d2i_TS_TST_INFO_bio;
-+		TS_TST_INFO_get_ordering;
-+		TS_RESP_print_bio;
-+		TS_TST_INFO_get_exts;
-+		HMAC_CTX_copy;
-+		PKCS5_pbe2_set_iv;
-+		ENGINE_get_pkey_asn1_meths;
-+		b2i_PrivateKey;
-+		EVP_PKEY_CTX_get_app_data;
-+		TS_REQ_set_cert_req;
-+		CRYPTO_THREADID_set_callback;
-+		TS_CONF_set_serial;
-+		TS_TST_INFO_free;
-+		d2i_TS_REQ_fp;
-+		TS_RESP_verify_response;
-+		i2d_ESS_ISSUER_SERIAL;
-+		TS_ACCURACY_get_seconds;
-+		EVP_CIPHER_do_all;
-+		b2i_PrivateKey_bio;
-+		OCSP_CERTID_dup;
-+		X509_PUBKEY_get0_param;
-+		TS_MSG_IMPRINT_dup;
-+		PKCS7_print_ctx;
-+		i2d_TS_REQ_bio;
-+		EVP_whirlpool;
-+		EVP_PKEY_asn1_set_param;
-+		EVP_PKEY_meth_set_encrypt;
-+		ASN1_PCTX_set_flags;
-+		i2d_ESS_CERT_ID;
-+		TS_VERIFY_CTX_new;
-+		TS_RESP_CTX_set_extension_cb;
-+		ENGINE_register_all_pkey_meths;
-+		TS_RESP_CTX_set_status_info_cond;
-+		TS_RESP_CTX_set_stat_info_cond;
-+		EVP_PKEY_verify;
-+		WHIRLPOOL_Final;
-+		X509_CRL_METHOD_new;
-+		EVP_DigestSignFinal;
-+		TS_RESP_CTX_set_def_policy;
-+		NETSCAPE_X509_it;
-+		TS_RESP_create_response;
-+		PKCS7_SIGNER_INFO_get0_algs;
-+		TS_TST_INFO_get_nonce;
-+		EVP_PKEY_decrypt_old;
-+		TS_TST_INFO_set_policy_id;
-+		TS_CONF_set_ess_cert_id_chain;
-+		EVP_PKEY_CTX_get0_pkey;
-+		d2i_TS_REQ;
-+		EVP_PKEY_asn1_find_str;
-+		BIO_f_asn1;
-+		ESS_SIGNING_CERT_new;
-+		EVP_PBE_find;
-+		X509_CRL_get0_by_cert;
-+		EVP_PKEY_derive;
-+		i2d_TS_REQ;
-+		TS_TST_INFO_delete_ext;
-+		ESS_ISSUER_SERIAL_free;
-+		ASN1_PCTX_set_str_flags;
-+		ENGINE_get_pkey_asn1_meth_str;
-+		TS_CONF_set_signer_key;
-+		TS_ACCURACY_get_millis;
-+		TS_RESP_get_token;
-+		TS_ACCURACY_dup;
-+		ENGINE_register_all_pkey_asn1_meths;
-+		ENGINE_reg_all_pkey_asn1_meths;
-+		X509_CRL_set_default_method;
-+		CRYPTO_THREADID_hash;
-+		CMS_ContentInfo_print_ctx;
-+		TS_RESP_free;
-+		ISSUING_DIST_POINT_free;
-+		ESS_ISSUER_SERIAL_new;
-+		CMS_add1_crl;
-+		PKCS7_add1_attrib_digest;
-+		TS_RESP_CTX_add_md;
-+		TS_TST_INFO_dup;
-+		ENGINE_set_pkey_asn1_meths;
-+		PEM_write_bio_Parameters;
-+		TS_TST_INFO_get_accuracy;
-+		X509_CRL_get0_by_serial;
-+		TS_TST_INFO_set_version;
-+		TS_RESP_CTX_get_tst_info;
-+		TS_RESP_verify_signature;
-+		CRYPTO_THREADID_get_callback;
-+		TS_TST_INFO_get_tsa;
-+		TS_STATUS_INFO_new;
-+		EVP_PKEY_CTX_get_cb;
-+		TS_REQ_get_ext_d2i;
-+		GENERAL_NAME_set0_othername;
-+		TS_TST_INFO_get_ext_count;
-+		TS_RESP_CTX_get_request;
-+		i2d_NETSCAPE_X509;
-+		ENGINE_get_pkey_meth_engine;
-+		EVP_PKEY_meth_set_signctx;
-+		EVP_PKEY_asn1_copy;
-+		ASN1_TYPE_cmp;
-+		EVP_CIPHER_do_all_sorted;
-+		EVP_PKEY_CTX_free;
-+		ISSUING_DIST_POINT_it;
-+		d2i_TS_MSG_IMPRINT_fp;
-+		X509_STORE_get1_certs;
-+		EVP_PKEY_CTX_get_operation;
-+		d2i_ESS_SIGNING_CERT;
-+		TS_CONF_set_ordering;
-+		EVP_PBE_alg_add_type;
-+		TS_REQ_set_version;
-+		EVP_PKEY_get0;
-+		BIO_asn1_set_suffix;
-+		i2d_TS_STATUS_INFO;
-+		EVP_MD_do_all;
-+		TS_TST_INFO_set_accuracy;
-+		PKCS7_add_attrib_content_type;
-+		ERR_remove_thread_state;
-+		EVP_PKEY_meth_add0;
-+		TS_TST_INFO_set_tsa;
-+		EVP_PKEY_meth_new;
-+		WHIRLPOOL_Update;
-+		TS_CONF_set_accuracy;
-+		ASN1_PCTX_set_oid_flags;
-+		ESS_SIGNING_CERT_dup;
-+		d2i_TS_REQ_bio;
-+		X509_time_adj_ex;
-+		TS_RESP_CTX_add_flags;
-+		d2i_TS_STATUS_INFO;
-+		TS_MSG_IMPRINT_set_msg;
-+		BIO_asn1_get_suffix;
-+		TS_REQ_free;
-+		EVP_PKEY_meth_free;
-+		TS_REQ_get_exts;
-+		TS_RESP_CTX_set_clock_precision_digits;
-+		TS_RESP_CTX_set_clk_prec_digits;
-+		TS_RESP_CTX_add_failure_info;
-+		i2d_TS_RESP_bio;
-+		EVP_PKEY_CTX_get0_peerkey;
-+		PEM_write_bio_CMS_stream;
-+		TS_REQ_new;
-+		TS_MSG_IMPRINT_new;
-+		EVP_PKEY_meth_find;
-+		EVP_PKEY_id;
-+		TS_TST_INFO_set_serial;
-+		a2i_GENERAL_NAME;
-+		TS_CONF_set_crypto_device;
-+		EVP_PKEY_verify_init;
-+		TS_CONF_set_policies;
-+		ASN1_PCTX_new;
-+		ESS_CERT_ID_free;
-+		ENGINE_unregister_pkey_meths;
-+		TS_MSG_IMPRINT_free;
-+		TS_VERIFY_CTX_init;
-+		PKCS7_stream;
-+		TS_RESP_CTX_set_certs;
-+		TS_CONF_set_def_policy;
-+		ASN1_GENERALIZEDTIME_adj;
-+		NETSCAPE_X509_new;
-+		TS_ACCURACY_free;
-+		TS_RESP_get_tst_info;
-+		EVP_PKEY_derive_set_peer;
-+		PEM_read_bio_Parameters;
-+		TS_CONF_set_clock_precision_digits;
-+		TS_CONF_set_clk_prec_digits;
-+		ESS_ISSUER_SERIAL_dup;
-+		TS_ACCURACY_get_micros;
-+		ASN1_PCTX_get_str_flags;
-+		NAME_CONSTRAINTS_check;
-+		ASN1_BIT_STRING_check;
-+		X509_check_akid;
-+		ENGINE_unregister_pkey_asn1_meths;
-+		ENGINE_unreg_pkey_asn1_meths;
-+		ASN1_PCTX_free;
-+		PEM_write_bio_ASN1_stream;
-+		i2d_ASN1_bio_stream;
-+		TS_X509_ALGOR_print_bio;
-+		EVP_PKEY_meth_set_cleanup;
-+		EVP_PKEY_asn1_free;
-+		ESS_SIGNING_CERT_free;
-+		TS_TST_INFO_set_msg_imprint;
-+		GENERAL_NAME_cmp;
-+		d2i_ASN1_SET_ANY;
-+		ENGINE_set_pkey_meths;
-+		i2d_TS_REQ_fp;
-+		d2i_ASN1_SEQUENCE_ANY;
-+		GENERAL_NAME_get0_otherName;
-+		d2i_ESS_CERT_ID;
-+		OBJ_find_sigid_algs;
-+		EVP_PKEY_meth_set_keygen;
-+		PKCS5_PBKDF2_HMAC;
-+		EVP_PKEY_paramgen;
-+		EVP_PKEY_meth_set_paramgen;
-+		BIO_new_PKCS7;
-+		EVP_PKEY_verify_recover;
-+		TS_ext_print_bio;
-+		TS_ASN1_INTEGER_print_bio;
-+		check_defer;
-+		DSO_pathbyaddr;
-+		EVP_PKEY_set_type;
-+		TS_ACCURACY_set_micros;
-+		TS_REQ_to_TS_VERIFY_CTX;
-+		EVP_PKEY_meth_set_copy;
-+		ASN1_PCTX_set_cert_flags;
-+		TS_TST_INFO_get_ext;
-+		EVP_PKEY_asn1_set_ctrl;
-+		TS_TST_INFO_get_ext_by_critical;
-+		EVP_PKEY_CTX_new_id;
-+		TS_REQ_get_ext_by_OBJ;
-+		TS_CONF_set_signer_cert;
-+		X509_NAME_hash_old;
-+		ASN1_TIME_set_string;
-+		EVP_MD_flags;
-+		TS_RESP_CTX_free;
-+		DSAparams_dup;
-+		DHparams_dup;
-+		OCSP_REQ_CTX_add1_header;
-+		OCSP_REQ_CTX_set1_req;
-+		X509_STORE_set_verify_cb;
-+		X509_STORE_CTX_get0_current_crl;
-+		X509_STORE_CTX_get0_parent_ctx;
-+		X509_STORE_CTX_get0_current_issuer;
-+		X509_STORE_CTX_get0_cur_issuer;
-+		X509_issuer_name_hash_old;
-+		X509_subject_name_hash_old;
-+		EVP_CIPHER_CTX_copy;
-+		UI_method_get_prompt_constructor;
-+		UI_method_get_prompt_constructr;
-+		UI_method_set_prompt_constructor;
-+		UI_method_set_prompt_constructr;
-+		EVP_read_pw_string_min;
-+		CRYPTO_cts128_encrypt;
-+		CRYPTO_cts128_decrypt_block;
-+		CRYPTO_cfb128_1_encrypt;
-+		CRYPTO_cbc128_encrypt;
-+		CRYPTO_ctr128_encrypt;
-+		CRYPTO_ofb128_encrypt;
-+		CRYPTO_cts128_decrypt;
-+		CRYPTO_cts128_encrypt_block;
-+		CRYPTO_cbc128_decrypt;
-+		CRYPTO_cfb128_encrypt;
-+		CRYPTO_cfb128_8_encrypt;
-+
-+	local:
-+		*;
-+};
-+
-+
-+OPENSSL_1.0.1 {
-+	global:
-+		SSL_renegotiate_abbreviated;
-+		TLSv1_1_method;
-+		TLSv1_1_client_method;
-+		TLSv1_1_server_method;
-+		SSL_CTX_set_srp_client_pwd_callback;
-+		SSL_CTX_set_srp_client_pwd_cb;
-+		SSL_get_srp_g;
-+		SSL_CTX_set_srp_username_callback;
-+		SSL_CTX_set_srp_un_cb;
-+		SSL_get_srp_userinfo;
-+		SSL_set_srp_server_param;
-+		SSL_set_srp_server_param_pw;
-+		SSL_get_srp_N;
-+		SSL_get_srp_username;
-+		SSL_CTX_set_srp_password;
-+		SSL_CTX_set_srp_strength;
-+		SSL_CTX_set_srp_verify_param_callback;
-+		SSL_CTX_set_srp_vfy_param_cb;
-+		SSL_CTX_set_srp_cb_arg;
-+		SSL_CTX_set_srp_username;
-+		SSL_CTX_SRP_CTX_init;
-+		SSL_SRP_CTX_init;
-+		SRP_Calc_A_param;
-+		SRP_generate_server_master_secret;
-+		SRP_gen_server_master_secret;
-+		SSL_CTX_SRP_CTX_free;
-+		SRP_generate_client_master_secret;
-+		SRP_gen_client_master_secret;
-+		SSL_srp_server_param_with_username;
-+		SSL_srp_server_param_with_un;
-+		SSL_SRP_CTX_free;
-+		SSL_set_debug;
-+		SSL_SESSION_get0_peer;
-+		TLSv1_2_client_method;
-+		SSL_SESSION_set1_id_context;
-+		TLSv1_2_server_method;
-+		SSL_cache_hit;
-+		SSL_get0_kssl_ctx;
-+		SSL_set0_kssl_ctx;
-+		SSL_set_state;
-+		SSL_CIPHER_get_id;
-+		TLSv1_2_method;
-+		kssl_ctx_get0_client_princ;
-+		SSL_export_keying_material;
-+		SSL_set_tlsext_use_srtp;
-+		SSL_CTX_set_next_protos_advertised_cb;
-+		SSL_CTX_set_next_protos_adv_cb;
-+		SSL_get0_next_proto_negotiated;
-+		SSL_get_selected_srtp_profile;
-+		SSL_CTX_set_tlsext_use_srtp;
-+		SSL_select_next_proto;
-+		SSL_get_srtp_profiles;
-+		SSL_CTX_set_next_proto_select_cb;
-+		SSL_CTX_set_next_proto_sel_cb;
-+		SSL_SESSION_get_compress_id;
-+
-+		SRP_VBASE_get_by_user;
-+		SRP_Calc_server_key;
-+		SRP_create_verifier;
-+		SRP_create_verifier_BN;
-+		SRP_Calc_u;
-+		SRP_VBASE_free;
-+		SRP_Calc_client_key;
-+		SRP_get_default_gN;
-+		SRP_Calc_x;
-+		SRP_Calc_B;
-+		SRP_VBASE_new;
-+		SRP_check_known_gN_param;
-+		SRP_Calc_A;
-+		SRP_Verify_A_mod_N;
-+		SRP_VBASE_init;
-+		SRP_Verify_B_mod_N;
-+		EC_KEY_set_public_key_affine_coordinates;
-+		EC_KEY_set_pub_key_aff_coords;
-+		EVP_aes_192_ctr;
-+		EVP_PKEY_meth_get0_info;
-+		EVP_PKEY_meth_copy;
-+		ERR_add_error_vdata;
-+		EVP_aes_128_ctr;
-+		EVP_aes_256_ctr;
-+		EC_GFp_nistp224_method;
-+		EC_KEY_get_flags;
-+		RSA_padding_add_PKCS1_PSS_mgf1;
-+		EVP_aes_128_xts;
-+		EVP_aes_256_xts;
-+		EVP_aes_128_gcm;
-+		EC_KEY_clear_flags;
-+		EC_KEY_set_flags;
-+		EVP_aes_256_ccm;
-+		RSA_verify_PKCS1_PSS_mgf1;
-+		EVP_aes_128_ccm;
-+		EVP_aes_192_gcm;
-+		X509_ALGOR_set_md;
-+		RAND_init_fips;
-+		EVP_aes_256_gcm;
-+		EVP_aes_192_ccm;
-+		CMAC_CTX_copy;
-+		CMAC_CTX_free;
-+		CMAC_CTX_get0_cipher_ctx;
-+		CMAC_CTX_cleanup;
-+		CMAC_Init;
-+		CMAC_Update;
-+		CMAC_resume;
-+		CMAC_CTX_new;
-+		CMAC_Final;
-+		CRYPTO_ctr128_encrypt_ctr32;
-+		CRYPTO_gcm128_release;
-+		CRYPTO_ccm128_decrypt_ccm64;
-+		CRYPTO_ccm128_encrypt;
-+		CRYPTO_gcm128_encrypt;
-+		CRYPTO_xts128_encrypt;
-+		EVP_rc4_hmac_md5;
-+		CRYPTO_nistcts128_decrypt_block;
-+		CRYPTO_gcm128_setiv;
-+		CRYPTO_nistcts128_encrypt;
-+		EVP_aes_128_cbc_hmac_sha1;
-+		CRYPTO_gcm128_tag;
-+		CRYPTO_ccm128_encrypt_ccm64;
-+		ENGINE_load_rdrand;
-+		CRYPTO_ccm128_setiv;
-+		CRYPTO_nistcts128_encrypt_block;
-+		CRYPTO_gcm128_aad;
-+		CRYPTO_ccm128_init;
-+		CRYPTO_nistcts128_decrypt;
-+		CRYPTO_gcm128_new;
-+		CRYPTO_ccm128_tag;
-+		CRYPTO_ccm128_decrypt;
-+		CRYPTO_ccm128_aad;
-+		CRYPTO_gcm128_init;
-+		CRYPTO_gcm128_decrypt;
-+		ENGINE_load_rsax;
-+		CRYPTO_gcm128_decrypt_ctr32;
-+		CRYPTO_gcm128_encrypt_ctr32;
-+		CRYPTO_gcm128_finish;
-+		EVP_aes_256_cbc_hmac_sha1;
-+		PKCS5_pbkdf2_set;
-+		CMS_add0_recipient_password;
-+		CMS_decrypt_set1_password;
-+		CMS_RecipientInfo_set0_password;
-+		RAND_set_fips_drbg_type;
-+		X509_REQ_sign_ctx;
-+		RSA_PSS_PARAMS_new;
-+		X509_CRL_sign_ctx;
-+		X509_signature_dump;
-+		d2i_RSA_PSS_PARAMS;
-+		RSA_PSS_PARAMS_it;
-+		RSA_PSS_PARAMS_free;
-+		X509_sign_ctx;
-+		i2d_RSA_PSS_PARAMS;
-+		ASN1_item_sign_ctx;
-+		EC_GFp_nistp521_method;
-+		EC_GFp_nistp256_method;
-+		OPENSSL_stderr;
-+		OPENSSL_cpuid_setup;
-+		OPENSSL_showfatal;
-+		BIO_new_dgram_sctp;
-+		BIO_dgram_sctp_msg_waiting;
-+		BIO_dgram_sctp_wait_for_dry;
-+		BIO_s_datagram_sctp;
-+		BIO_dgram_is_sctp;
-+		BIO_dgram_sctp_notification_cb;
-+} OPENSSL_1.0.0;
-+
-+OPENSSL_1.0.1d {
-+	global:
-+		CRYPTO_memcmp;
-+} OPENSSL_1.0.1;
-+
-+OPENSSL_1.0.2 {
-+	global:
-+		SSL_CTX_set_alpn_protos;
-+		SSL_set_alpn_protos;
-+		SSL_CTX_set_alpn_select_cb;
-+		SSL_get0_alpn_selected;
-+		SSL_CTX_set_custom_cli_ext;
-+		SSL_CTX_set_custom_srv_ext;
-+		SSL_CTX_set_srv_supp_data;
-+		SSL_CTX_set_cli_supp_data;
-+		SSL_set_cert_cb;
-+		SSL_CTX_use_serverinfo;
-+		SSL_CTX_use_serverinfo_file;
-+		SSL_CTX_set_cert_cb;
-+		SSL_CTX_get0_param;
-+		SSL_get0_param;
-+		SSL_certs_clear;
-+		DTLSv1_2_method;
-+		DTLSv1_2_server_method;
-+		DTLSv1_2_client_method;
-+		DTLS_method;
-+		DTLS_server_method;
-+		DTLS_client_method;
-+		SSL_CTX_get_ssl_method;
-+		SSL_CTX_get0_certificate;
-+		SSL_CTX_get0_privatekey;
-+		SSL_COMP_set0_compression_methods;
-+		SSL_COMP_free_compression_methods;
-+		SSL_CIPHER_find;
-+		SSL_is_server;
-+		SSL_CONF_CTX_new;
-+		SSL_CONF_CTX_finish;
-+		SSL_CONF_CTX_free;
-+		SSL_CONF_CTX_set_flags;
-+		SSL_CONF_CTX_clear_flags;
-+		SSL_CONF_CTX_set1_prefix;
-+		SSL_CONF_CTX_set_ssl;
-+		SSL_CONF_CTX_set_ssl_ctx;
-+		SSL_CONF_cmd;
-+		SSL_CONF_cmd_argv;
-+		SSL_CONF_cmd_value_type;
-+		SSL_trace;
-+		SSL_CIPHER_standard_name;
-+		SSL_get_tlsa_record_byname;
-+		ASN1_TIME_diff;
-+		BIO_hex_string;
-+		CMS_RecipientInfo_get0_pkey_ctx;
-+		CMS_RecipientInfo_encrypt;
-+		CMS_SignerInfo_get0_pkey_ctx;
-+		CMS_SignerInfo_get0_md_ctx;
-+		CMS_SignerInfo_get0_signature;
-+		CMS_RecipientInfo_kari_get0_alg;
-+		CMS_RecipientInfo_kari_get0_reks;
-+		CMS_RecipientInfo_kari_get0_orig_id;
-+		CMS_RecipientInfo_kari_orig_id_cmp;
-+		CMS_RecipientEncryptedKey_get0_id;
-+		CMS_RecipientEncryptedKey_cert_cmp;
-+		CMS_RecipientInfo_kari_set0_pkey;
-+		CMS_RecipientInfo_kari_get0_ctx;
-+		CMS_RecipientInfo_kari_decrypt;
-+		CMS_SharedInfo_encode;
-+		DH_compute_key_padded;
-+		d2i_DHxparams;
-+		i2d_DHxparams;
-+		DH_get_1024_160;
-+		DH_get_2048_224;
-+		DH_get_2048_256;
-+		DH_KDF_X9_42;
-+		ECDH_KDF_X9_62;
-+		ECDSA_METHOD_new;
-+		ECDSA_METHOD_free;
-+		ECDSA_METHOD_set_app_data;
-+		ECDSA_METHOD_get_app_data;
-+		ECDSA_METHOD_set_sign;
-+		ECDSA_METHOD_set_sign_setup;
-+		ECDSA_METHOD_set_verify;
-+		ECDSA_METHOD_set_flags;
-+		ECDSA_METHOD_set_name;
-+		EVP_des_ede3_wrap;
-+		EVP_aes_128_wrap;
-+		EVP_aes_192_wrap;
-+		EVP_aes_256_wrap;
-+		EVP_aes_128_cbc_hmac_sha256;
-+		EVP_aes_256_cbc_hmac_sha256;
-+		CRYPTO_128_wrap;
-+		CRYPTO_128_unwrap;
-+		OCSP_REQ_CTX_nbio;
-+		OCSP_REQ_CTX_new;
-+		OCSP_set_max_response_length;
-+		OCSP_REQ_CTX_i2d;
-+		OCSP_REQ_CTX_nbio_d2i;
-+		OCSP_REQ_CTX_get0_mem_bio;
-+		OCSP_REQ_CTX_http;
-+		RSA_padding_add_PKCS1_OAEP_mgf1;
-+		RSA_padding_check_PKCS1_OAEP_mgf1;
-+		RSA_OAEP_PARAMS_free;
-+		RSA_OAEP_PARAMS_it;
-+		RSA_OAEP_PARAMS_new;
-+		SSL_get_sigalgs;
-+		SSL_get_shared_sigalgs;
-+		SSL_check_chain;
-+		X509_chain_up_ref;
-+		X509_http_nbio;
-+		X509_CRL_http_nbio;
-+		X509_REVOKED_dup;
-+		i2d_re_X509_tbs;
-+		X509_get0_signature;
-+		X509_get_signature_nid;
-+		X509_CRL_diff;
-+		X509_chain_check_suiteb;
-+		X509_CRL_check_suiteb;
-+		X509_check_host;
-+		X509_check_email;
-+		X509_check_ip;
-+		X509_check_ip_asc;
-+		X509_STORE_set_lookup_crls_cb;
-+		X509_STORE_CTX_get0_store;
-+		X509_VERIFY_PARAM_set1_host;
-+		X509_VERIFY_PARAM_add1_host;
-+		X509_VERIFY_PARAM_set_hostflags;
-+		X509_VERIFY_PARAM_get0_peername;
-+		X509_VERIFY_PARAM_set1_email;
-+		X509_VERIFY_PARAM_set1_ip;
-+		X509_VERIFY_PARAM_set1_ip_asc;
-+		X509_VERIFY_PARAM_get0_name;
-+		X509_VERIFY_PARAM_get_count;
-+		X509_VERIFY_PARAM_get0;
-+		X509V3_EXT_free;
-+		EC_GROUP_get_mont_data;
-+		EC_curve_nid2nist;
-+		EC_curve_nist2nid;
-+		PEM_write_bio_DHxparams;
-+		PEM_write_DHxparams;
-+		SSL_CTX_add_client_custom_ext;
-+		SSL_CTX_add_server_custom_ext;
-+		SSL_extension_supported;
-+		BUF_strnlen;
-+		sk_deep_copy;
-+		SSL_test_functions;
-+} OPENSSL_1.0.1d;
-+
-Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/openssl.ld
-===================================================================
---- /dev/null	1970-01-01 00:00:00.000000000 +0000
-+++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/openssl.ld	2014-02-24 21:02:30.000000000 +0100
-@@ -0,0 +1,10 @@
-+OPENSSL_1.0.0 {
-+	global:
-+		bind_engine;
-+		v_check;
-+		OPENSSL_init;
-+		OPENSSL_finish;
-+	local:
-+		*;
-+};
-+
-Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/ccgost/openssl.ld
-===================================================================
---- /dev/null	1970-01-01 00:00:00.000000000 +0000
-+++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/ccgost/openssl.ld	2014-02-24 21:02:30.000000000 +0100
-@@ -0,0 +1,10 @@
-+OPENSSL_1.0.0 {
-+	global:
-+		bind_engine;
-+		v_check;
-+		OPENSSL_init;
-+		OPENSSL_finish;
-+	local:
-+		*;
-+};
-+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian1.0.2/soname.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian1.0.2/soname.patch
deleted file mode 100644
index f9cdfec..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian1.0.2/soname.patch
+++ /dev/null
@@ -1,13 +0,0 @@
-Index: openssl-1.0.2d/crypto/opensslv.h
-===================================================================
---- openssl-1.0.2d.orig/crypto/opensslv.h
-+++ openssl-1.0.2d/crypto/opensslv.h
-@@ -88,7 +88,7 @@ extern "C" {
-  * should only keep the versions that are binary compatible with the current.
-  */
- # define SHLIB_VERSION_HISTORY ""
--# define SHLIB_VERSION_NUMBER "1.0.0"
-+# define SHLIB_VERSION_NUMBER "1.0.2"
- 
- 
- #ifdef  __cplusplus
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian1.0.2/version-script.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian1.0.2/version-script.patch
deleted file mode 100644
index 29f11a2..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/debian1.0.2/version-script.patch
+++ /dev/null
@@ -1,4656 +0,0 @@
-Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/Configure
-===================================================================
---- openssl-1.0.2~beta1.obsolete.0.0498436515490575.orig/Configure	2014-02-24 21:02:30.000000000 +0100
-+++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/Configure	2014-02-24 21:02:30.000000000 +0100
-@@ -1651,6 +1651,8 @@
- 		}
- 	}
- 
-+$shared_ldflag .= " -Wl,--version-script=openssl.ld";
-+
- open(IN,'<Makefile.org') || die "unable to read Makefile.org:$!\n";
- unlink("$Makefile.new") || die "unable to remove old $Makefile.new:$!\n" if -e "$Makefile.new";
- open(OUT,">$Makefile.new") || die "unable to create $Makefile.new:$!\n";
-Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/openssl.ld
-===================================================================
---- /dev/null	1970-01-01 00:00:00.000000000 +0000
-+++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/openssl.ld	2014-02-24 22:19:08.601827266 +0100
-@@ -0,0 +1,4608 @@
-+OPENSSL_1.0.2d {
-+	global:
-+		BIO_f_ssl;
-+		BIO_new_buffer_ssl_connect;
-+		BIO_new_ssl;
-+		BIO_new_ssl_connect;
-+		BIO_proxy_ssl_copy_session_id;
-+		BIO_ssl_copy_session_id;
-+		BIO_ssl_shutdown;
-+		d2i_SSL_SESSION;
-+		DTLSv1_client_method;
-+		DTLSv1_method;
-+		DTLSv1_server_method;
-+		ERR_load_SSL_strings;
-+		i2d_SSL_SESSION;
-+		kssl_build_principal_2;
-+		kssl_cget_tkt;
-+		kssl_check_authent;
-+		kssl_ctx_free;
-+		kssl_ctx_new;
-+		kssl_ctx_setkey;
-+		kssl_ctx_setprinc;
-+		kssl_ctx_setstring;
-+		kssl_ctx_show;
-+		kssl_err_set;
-+		kssl_krb5_free_data_contents;
-+		kssl_sget_tkt;
-+		kssl_skip_confound;
-+		kssl_validate_times;
-+		PEM_read_bio_SSL_SESSION;
-+		PEM_read_SSL_SESSION;
-+		PEM_write_bio_SSL_SESSION;
-+		PEM_write_SSL_SESSION;
-+		SSL_accept;
-+		SSL_add_client_CA;
-+		SSL_add_dir_cert_subjects_to_stack;
-+		SSL_add_dir_cert_subjs_to_stk;
-+		SSL_add_file_cert_subjects_to_stack;
-+		SSL_add_file_cert_subjs_to_stk;
-+		SSL_alert_desc_string;
-+		SSL_alert_desc_string_long;
-+		SSL_alert_type_string;
-+		SSL_alert_type_string_long;
-+		SSL_callback_ctrl;
-+		SSL_check_private_key;
-+		SSL_CIPHER_description;
-+		SSL_CIPHER_get_bits;
-+		SSL_CIPHER_get_name;
-+		SSL_CIPHER_get_version;
-+		SSL_clear;
-+		SSL_COMP_add_compression_method;
-+		SSL_COMP_get_compression_methods;
-+		SSL_COMP_get_compress_methods;
-+		SSL_COMP_get_name;
-+		SSL_connect;
-+		SSL_copy_session_id;
-+		SSL_ctrl;
-+		SSL_CTX_add_client_CA;
-+		SSL_CTX_add_session;
-+		SSL_CTX_callback_ctrl;
-+		SSL_CTX_check_private_key;
-+		SSL_CTX_ctrl;
-+		SSL_CTX_flush_sessions;
-+		SSL_CTX_free;
-+		SSL_CTX_get_cert_store;
-+		SSL_CTX_get_client_CA_list;
-+		SSL_CTX_get_client_cert_cb;
-+		SSL_CTX_get_ex_data;
-+		SSL_CTX_get_ex_new_index;
-+		SSL_CTX_get_info_callback;
-+		SSL_CTX_get_quiet_shutdown;
-+		SSL_CTX_get_timeout;
-+		SSL_CTX_get_verify_callback;
-+		SSL_CTX_get_verify_depth;
-+		SSL_CTX_get_verify_mode;
-+		SSL_CTX_load_verify_locations;
-+		SSL_CTX_new;
-+		SSL_CTX_remove_session;
-+		SSL_CTX_sess_get_get_cb;
-+		SSL_CTX_sess_get_new_cb;
-+		SSL_CTX_sess_get_remove_cb;
-+		SSL_CTX_sessions;
-+		SSL_CTX_sess_set_get_cb;
-+		SSL_CTX_sess_set_new_cb;
-+		SSL_CTX_sess_set_remove_cb;
-+		SSL_CTX_set1_param;
-+		SSL_CTX_set_cert_store;
-+		SSL_CTX_set_cert_verify_callback;
-+		SSL_CTX_set_cert_verify_cb;
-+		SSL_CTX_set_cipher_list;
-+		SSL_CTX_set_client_CA_list;
-+		SSL_CTX_set_client_cert_cb;
-+		SSL_CTX_set_client_cert_engine;
-+		SSL_CTX_set_cookie_generate_cb;
-+		SSL_CTX_set_cookie_verify_cb;
-+		SSL_CTX_set_default_passwd_cb;
-+		SSL_CTX_set_default_passwd_cb_userdata;
-+		SSL_CTX_set_default_verify_paths;
-+		SSL_CTX_set_def_passwd_cb_ud;
-+		SSL_CTX_set_def_verify_paths;
-+		SSL_CTX_set_ex_data;
-+		SSL_CTX_set_generate_session_id;
-+		SSL_CTX_set_info_callback;
-+		SSL_CTX_set_msg_callback;
-+		SSL_CTX_set_psk_client_callback;
-+		SSL_CTX_set_psk_server_callback;
-+		SSL_CTX_set_purpose;
-+		SSL_CTX_set_quiet_shutdown;
-+		SSL_CTX_set_session_id_context;
-+		SSL_CTX_set_ssl_version;
-+		SSL_CTX_set_timeout;
-+		SSL_CTX_set_tmp_dh_callback;
-+		SSL_CTX_set_tmp_ecdh_callback;
-+		SSL_CTX_set_tmp_rsa_callback;
-+		SSL_CTX_set_trust;
-+		SSL_CTX_set_verify;
-+		SSL_CTX_set_verify_depth;
-+		SSL_CTX_use_cert_chain_file;
-+		SSL_CTX_use_certificate;
-+		SSL_CTX_use_certificate_ASN1;
-+		SSL_CTX_use_certificate_chain_file;
-+		SSL_CTX_use_certificate_file;
-+		SSL_CTX_use_PrivateKey;
-+		SSL_CTX_use_PrivateKey_ASN1;
-+		SSL_CTX_use_PrivateKey_file;
-+		SSL_CTX_use_psk_identity_hint;
-+		SSL_CTX_use_RSAPrivateKey;
-+		SSL_CTX_use_RSAPrivateKey_ASN1;
-+		SSL_CTX_use_RSAPrivateKey_file;
-+		SSL_do_handshake;
-+		SSL_dup;
-+		SSL_dup_CA_list;
-+		SSLeay_add_ssl_algorithms;
-+		SSL_free;
-+		SSL_get1_session;
-+		SSL_get_certificate;
-+		SSL_get_cipher_list;
-+		SSL_get_ciphers;
-+		SSL_get_client_CA_list;
-+		SSL_get_current_cipher;
-+		SSL_get_current_compression;
-+		SSL_get_current_expansion;
-+		SSL_get_default_timeout;
-+		SSL_get_error;
-+		SSL_get_ex_data;
-+		SSL_get_ex_data_X509_STORE_CTX_idx;
-+		SSL_get_ex_d_X509_STORE_CTX_idx;
-+		SSL_get_ex_new_index;
-+		SSL_get_fd;
-+		SSL_get_finished;
-+		SSL_get_info_callback;
-+		SSL_get_peer_cert_chain;
-+		SSL_get_peer_certificate;
-+		SSL_get_peer_finished;
-+		SSL_get_privatekey;
-+		SSL_get_psk_identity;
-+		SSL_get_psk_identity_hint;
-+		SSL_get_quiet_shutdown;
-+		SSL_get_rbio;
-+		SSL_get_read_ahead;
-+		SSL_get_rfd;
-+		SSL_get_servername;
-+		SSL_get_servername_type;
-+		SSL_get_session;
-+		SSL_get_shared_ciphers;
-+		SSL_get_shutdown;
-+		SSL_get_SSL_CTX;
-+		SSL_get_ssl_method;
-+		SSL_get_verify_callback;
-+		SSL_get_verify_depth;
-+		SSL_get_verify_mode;
-+		SSL_get_verify_result;
-+		SSL_get_version;
-+		SSL_get_wbio;
-+		SSL_get_wfd;
-+		SSL_has_matching_session_id;
-+		SSL_library_init;
-+		SSL_load_client_CA_file;
-+		SSL_load_error_strings;
-+		SSL_new;
-+		SSL_peek;
-+		SSL_pending;
-+		SSL_read;
-+		SSL_renegotiate;
-+		SSL_renegotiate_pending;
-+		SSL_rstate_string;
-+		SSL_rstate_string_long;
-+		SSL_SESSION_cmp;
-+		SSL_SESSION_free;
-+		SSL_SESSION_get_ex_data;
-+		SSL_SESSION_get_ex_new_index;
-+		SSL_SESSION_get_id;
-+		SSL_SESSION_get_time;
-+		SSL_SESSION_get_timeout;
-+		SSL_SESSION_hash;
-+		SSL_SESSION_new;
-+		SSL_SESSION_print;
-+		SSL_SESSION_print_fp;
-+		SSL_SESSION_set_ex_data;
-+		SSL_SESSION_set_time;
-+		SSL_SESSION_set_timeout;
-+		SSL_set1_param;
-+		SSL_set_accept_state;
-+		SSL_set_bio;
-+		SSL_set_cipher_list;
-+		SSL_set_client_CA_list;
-+		SSL_set_connect_state;
-+		SSL_set_ex_data;
-+		SSL_set_fd;
-+		SSL_set_generate_session_id;
-+		SSL_set_info_callback;
-+		SSL_set_msg_callback;
-+		SSL_set_psk_client_callback;
-+		SSL_set_psk_server_callback;
-+		SSL_set_purpose;
-+		SSL_set_quiet_shutdown;
-+		SSL_set_read_ahead;
-+		SSL_set_rfd;
-+		SSL_set_session;
-+		SSL_set_session_id_context;
-+		SSL_set_session_secret_cb;
-+		SSL_set_session_ticket_ext;
-+		SSL_set_session_ticket_ext_cb;
-+		SSL_set_shutdown;
-+		SSL_set_SSL_CTX;
-+		SSL_set_ssl_method;
-+		SSL_set_tmp_dh_callback;
-+		SSL_set_tmp_ecdh_callback;
-+		SSL_set_tmp_rsa_callback;
-+		SSL_set_trust;
-+		SSL_set_verify;
-+		SSL_set_verify_depth;
-+		SSL_set_verify_result;
-+		SSL_set_wfd;
-+		SSL_shutdown;
-+		SSL_state;
-+		SSL_state_string;
-+		SSL_state_string_long;
-+		SSL_use_certificate;
-+		SSL_use_certificate_ASN1;
-+		SSL_use_certificate_file;
-+		SSL_use_PrivateKey;
-+		SSL_use_PrivateKey_ASN1;
-+		SSL_use_PrivateKey_file;
-+		SSL_use_psk_identity_hint;
-+		SSL_use_RSAPrivateKey;
-+		SSL_use_RSAPrivateKey_ASN1;
-+		SSL_use_RSAPrivateKey_file;
-+		SSLv23_client_method;
-+		SSLv23_method;
-+		SSLv23_server_method;
-+		SSLv2_client_method;
-+		SSLv2_method;
-+		SSLv2_server_method;
-+		SSLv3_client_method;
-+		SSLv3_method;
-+		SSLv3_server_method;
-+		SSL_version;
-+		SSL_want;
-+		SSL_write;
-+		TLSv1_client_method;
-+		TLSv1_method;
-+		TLSv1_server_method;
-+
-+
-+		SSLeay;
-+		SSLeay_version;
-+		ASN1_BIT_STRING_asn1_meth;
-+		ASN1_HEADER_free;
-+		ASN1_HEADER_new;
-+		ASN1_IA5STRING_asn1_meth;
-+		ASN1_INTEGER_get;
-+		ASN1_INTEGER_set;
-+		ASN1_INTEGER_to_BN;
-+		ASN1_OBJECT_create;
-+		ASN1_OBJECT_free;
-+		ASN1_OBJECT_new;
-+		ASN1_PRINTABLE_type;
-+		ASN1_STRING_cmp;
-+		ASN1_STRING_dup;
-+		ASN1_STRING_free;
-+		ASN1_STRING_new;
-+		ASN1_STRING_print;
-+		ASN1_STRING_set;
-+		ASN1_STRING_type_new;
-+		ASN1_TYPE_free;
-+		ASN1_TYPE_new;
-+		ASN1_UNIVERSALSTRING_to_string;
-+		ASN1_UTCTIME_check;
-+		ASN1_UTCTIME_print;
-+		ASN1_UTCTIME_set;
-+		ASN1_check_infinite_end;
-+		ASN1_d2i_bio;
-+		ASN1_d2i_fp;
-+		ASN1_digest;
-+		ASN1_dup;
-+		ASN1_get_object;
-+		ASN1_i2d_bio;
-+		ASN1_i2d_fp;
-+		ASN1_object_size;
-+		ASN1_parse;
-+		ASN1_put_object;
-+		ASN1_sign;
-+		ASN1_verify;
-+		BF_cbc_encrypt;
-+		BF_cfb64_encrypt;
-+		BF_ecb_encrypt;
-+		BF_encrypt;
-+		BF_ofb64_encrypt;
-+		BF_options;
-+		BF_set_key;
-+		BIO_CONNECT_free;
-+		BIO_CONNECT_new;
-+		BIO_accept;
-+		BIO_ctrl;
-+		BIO_int_ctrl;
-+		BIO_debug_callback;
-+		BIO_dump;
-+		BIO_dup_chain;
-+		BIO_f_base64;
-+		BIO_f_buffer;
-+		BIO_f_cipher;
-+		BIO_f_md;
-+		BIO_f_null;
-+		BIO_f_proxy_server;
-+		BIO_fd_non_fatal_error;
-+		BIO_fd_should_retry;
-+		BIO_find_type;
-+		BIO_free;
-+		BIO_free_all;
-+		BIO_get_accept_socket;
-+		BIO_get_filter_bio;
-+		BIO_get_host_ip;
-+		BIO_get_port;
-+		BIO_get_retry_BIO;
-+		BIO_get_retry_reason;
-+		BIO_gethostbyname;
-+		BIO_gets;
-+		BIO_new;
-+		BIO_new_accept;
-+		BIO_new_connect;
-+		BIO_new_fd;
-+		BIO_new_file;
-+		BIO_new_fp;
-+		BIO_new_socket;
-+		BIO_pop;
-+		BIO_printf;
-+		BIO_push;
-+		BIO_puts;
-+		BIO_read;
-+		BIO_s_accept;
-+		BIO_s_connect;
-+		BIO_s_fd;
-+		BIO_s_file;
-+		BIO_s_mem;
-+		BIO_s_null;
-+		BIO_s_proxy_client;
-+		BIO_s_socket;
-+		BIO_set;
-+		BIO_set_cipher;
-+		BIO_set_tcp_ndelay;
-+		BIO_sock_cleanup;
-+		BIO_sock_error;
-+		BIO_sock_init;
-+		BIO_sock_non_fatal_error;
-+		BIO_sock_should_retry;
-+		BIO_socket_ioctl;
-+		BIO_write;
-+		BN_CTX_free;
-+		BN_CTX_new;
-+		BN_MONT_CTX_free;
-+		BN_MONT_CTX_new;
-+		BN_MONT_CTX_set;
-+		BN_add;
-+		BN_add_word;
-+		BN_hex2bn;
-+		BN_bin2bn;
-+		BN_bn2hex;
-+		BN_bn2bin;
-+		BN_clear;
-+		BN_clear_bit;
-+		BN_clear_free;
-+		BN_cmp;
-+		BN_copy;
-+		BN_div;
-+		BN_div_word;
-+		BN_dup;
-+		BN_free;
-+		BN_from_montgomery;
-+		BN_gcd;
-+		BN_generate_prime;
-+		BN_get_word;
-+		BN_is_bit_set;
-+		BN_is_prime;
-+		BN_lshift;
-+		BN_lshift1;
-+		BN_mask_bits;
-+		BN_mod;
-+		BN_mod_exp;
-+		BN_mod_exp_mont;
-+		BN_mod_exp_simple;
-+		BN_mod_inverse;
-+		BN_mod_mul;
-+		BN_mod_mul_montgomery;
-+		BN_mod_word;
-+		BN_mul;
-+		BN_new;
-+		BN_num_bits;
-+		BN_num_bits_word;
-+		BN_options;
-+		BN_print;
-+		BN_print_fp;
-+		BN_rand;
-+		BN_reciprocal;
-+		BN_rshift;
-+		BN_rshift1;
-+		BN_set_bit;
-+		BN_set_word;
-+		BN_sqr;
-+		BN_sub;
-+		BN_to_ASN1_INTEGER;
-+		BN_ucmp;
-+		BN_value_one;
-+		BUF_MEM_free;
-+		BUF_MEM_grow;
-+		BUF_MEM_new;
-+		BUF_strdup;
-+		CONF_free;
-+		CONF_get_number;
-+		CONF_get_section;
-+		CONF_get_string;
-+		CONF_load;
-+		CRYPTO_add_lock;
-+		CRYPTO_dbg_free;
-+		CRYPTO_dbg_malloc;
-+		CRYPTO_dbg_realloc;
-+		CRYPTO_dbg_remalloc;
-+		CRYPTO_free;
-+		CRYPTO_get_add_lock_callback;
-+		CRYPTO_get_id_callback;
-+		CRYPTO_get_lock_name;
-+		CRYPTO_get_locking_callback;
-+		CRYPTO_get_mem_functions;
-+		CRYPTO_lock;
-+		CRYPTO_malloc;
-+		CRYPTO_mem_ctrl;
-+		CRYPTO_mem_leaks;
-+		CRYPTO_mem_leaks_cb;
-+		CRYPTO_mem_leaks_fp;
-+		CRYPTO_realloc;
-+		CRYPTO_remalloc;
-+		CRYPTO_set_add_lock_callback;
-+		CRYPTO_set_id_callback;
-+		CRYPTO_set_locking_callback;
-+		CRYPTO_set_mem_functions;
-+		CRYPTO_thread_id;
-+		DH_check;
-+		DH_compute_key;
-+		DH_free;
-+		DH_generate_key;
-+		DH_generate_parameters;
-+		DH_new;
-+		DH_size;
-+		DHparams_print;
-+		DHparams_print_fp;
-+		DSA_free;
-+		DSA_generate_key;
-+		DSA_generate_parameters;
-+		DSA_is_prime;
-+		DSA_new;
-+		DSA_print;
-+		DSA_print_fp;
-+		DSA_sign;
-+		DSA_sign_setup;
-+		DSA_size;
-+		DSA_verify;
-+		DSAparams_print;
-+		DSAparams_print_fp;
-+		ERR_clear_error;
-+		ERR_error_string;
-+		ERR_free_strings;
-+		ERR_func_error_string;
-+		ERR_get_err_state_table;
-+		ERR_get_error;
-+		ERR_get_error_line;
-+		ERR_get_state;
-+		ERR_get_string_table;
-+		ERR_lib_error_string;
-+		ERR_load_ASN1_strings;
-+		ERR_load_BIO_strings;
-+		ERR_load_BN_strings;
-+		ERR_load_BUF_strings;
-+		ERR_load_CONF_strings;
-+		ERR_load_DH_strings;
-+		ERR_load_DSA_strings;
-+		ERR_load_ERR_strings;
-+		ERR_load_EVP_strings;
-+		ERR_load_OBJ_strings;
-+		ERR_load_PEM_strings;
-+		ERR_load_PROXY_strings;
-+		ERR_load_RSA_strings;
-+		ERR_load_X509_strings;
-+		ERR_load_crypto_strings;
-+		ERR_load_strings;
-+		ERR_peek_error;
-+		ERR_peek_error_line;
-+		ERR_print_errors;
-+		ERR_print_errors_fp;
-+		ERR_put_error;
-+		ERR_reason_error_string;
-+		ERR_remove_state;
-+		EVP_BytesToKey;
-+		EVP_CIPHER_CTX_cleanup;
-+		EVP_CipherFinal;
-+		EVP_CipherInit;
-+		EVP_CipherUpdate;
-+		EVP_DecodeBlock;
-+		EVP_DecodeFinal;
-+		EVP_DecodeInit;
-+		EVP_DecodeUpdate;
-+		EVP_DecryptFinal;
-+		EVP_DecryptInit;
-+		EVP_DecryptUpdate;
-+		EVP_DigestFinal;
-+		EVP_DigestInit;
-+		EVP_DigestUpdate;
-+		EVP_EncodeBlock;
-+		EVP_EncodeFinal;
-+		EVP_EncodeInit;
-+		EVP_EncodeUpdate;
-+		EVP_EncryptFinal;
-+		EVP_EncryptInit;
-+		EVP_EncryptUpdate;
-+		EVP_OpenFinal;
-+		EVP_OpenInit;
-+		EVP_PKEY_assign;
-+		EVP_PKEY_copy_parameters;
-+		EVP_PKEY_free;
-+		EVP_PKEY_missing_parameters;
-+		EVP_PKEY_new;
-+		EVP_PKEY_save_parameters;
-+		EVP_PKEY_size;
-+		EVP_PKEY_type;
-+		EVP_SealFinal;
-+		EVP_SealInit;
-+		EVP_SignFinal;
-+		EVP_VerifyFinal;
-+		EVP_add_alias;
-+		EVP_add_cipher;
-+		EVP_add_digest;
-+		EVP_bf_cbc;
-+		EVP_bf_cfb64;
-+		EVP_bf_ecb;
-+		EVP_bf_ofb;
-+		EVP_cleanup;
-+		EVP_des_cbc;
-+		EVP_des_cfb64;
-+		EVP_des_ecb;
-+		EVP_des_ede;
-+		EVP_des_ede3;
-+		EVP_des_ede3_cbc;
-+		EVP_des_ede3_cfb64;
-+		EVP_des_ede3_ofb;
-+		EVP_des_ede_cbc;
-+		EVP_des_ede_cfb64;
-+		EVP_des_ede_ofb;
-+		EVP_des_ofb;
-+		EVP_desx_cbc;
-+		EVP_dss;
-+		EVP_dss1;
-+		EVP_enc_null;
-+		EVP_get_cipherbyname;
-+		EVP_get_digestbyname;
-+		EVP_get_pw_prompt;
-+		EVP_idea_cbc;
-+		EVP_idea_cfb64;
-+		EVP_idea_ecb;
-+		EVP_idea_ofb;
-+		EVP_md2;
-+		EVP_md5;
-+		EVP_md_null;
-+		EVP_rc2_cbc;
-+		EVP_rc2_cfb64;
-+		EVP_rc2_ecb;
-+		EVP_rc2_ofb;
-+		EVP_rc4;
-+		EVP_read_pw_string;
-+		EVP_set_pw_prompt;
-+		EVP_sha;
-+		EVP_sha1;
-+		MD2;
-+		MD2_Final;
-+		MD2_Init;
-+		MD2_Update;
-+		MD2_options;
-+		MD5;
-+		MD5_Final;
-+		MD5_Init;
-+		MD5_Update;
-+		MDC2;
-+		MDC2_Final;
-+		MDC2_Init;
-+		MDC2_Update;
-+		NETSCAPE_SPKAC_free;
-+		NETSCAPE_SPKAC_new;
-+		NETSCAPE_SPKI_free;
-+		NETSCAPE_SPKI_new;
-+		NETSCAPE_SPKI_sign;
-+		NETSCAPE_SPKI_verify;
-+		OBJ_add_object;
-+		OBJ_bsearch;
-+		OBJ_cleanup;
-+		OBJ_cmp;
-+		OBJ_create;
-+		OBJ_dup;
-+		OBJ_ln2nid;
-+		OBJ_new_nid;
-+		OBJ_nid2ln;
-+		OBJ_nid2obj;
-+		OBJ_nid2sn;
-+		OBJ_obj2nid;
-+		OBJ_sn2nid;
-+		OBJ_txt2nid;
-+		PEM_ASN1_read;
-+		PEM_ASN1_read_bio;
-+		PEM_ASN1_write;
-+		PEM_ASN1_write_bio;
-+		PEM_SealFinal;
-+		PEM_SealInit;
-+		PEM_SealUpdate;
-+		PEM_SignFinal;
-+		PEM_SignInit;
-+		PEM_SignUpdate;
-+		PEM_X509_INFO_read;
-+		PEM_X509_INFO_read_bio;
-+		PEM_X509_INFO_write_bio;
-+		PEM_dek_info;
-+		PEM_do_header;
-+		PEM_get_EVP_CIPHER_INFO;
-+		PEM_proc_type;
-+		PEM_read;
-+		PEM_read_DHparams;
-+		PEM_read_DSAPrivateKey;
-+		PEM_read_DSAparams;
-+		PEM_read_PKCS7;
-+		PEM_read_PrivateKey;
-+		PEM_read_RSAPrivateKey;
-+		PEM_read_X509;
-+		PEM_read_X509_CRL;
-+		PEM_read_X509_REQ;
-+		PEM_read_bio;
-+		PEM_read_bio_DHparams;
-+		PEM_read_bio_DSAPrivateKey;
-+		PEM_read_bio_DSAparams;
-+		PEM_read_bio_PKCS7;
-+		PEM_read_bio_PrivateKey;
-+		PEM_read_bio_RSAPrivateKey;
-+		PEM_read_bio_X509;
-+		PEM_read_bio_X509_CRL;
-+		PEM_read_bio_X509_REQ;
-+		PEM_write;
-+		PEM_write_DHparams;
-+		PEM_write_DSAPrivateKey;
-+		PEM_write_DSAparams;
-+		PEM_write_PKCS7;
-+		PEM_write_PrivateKey;
-+		PEM_write_RSAPrivateKey;
-+		PEM_write_X509;
-+		PEM_write_X509_CRL;
-+		PEM_write_X509_REQ;
-+		PEM_write_bio;
-+		PEM_write_bio_DHparams;
-+		PEM_write_bio_DSAPrivateKey;
-+		PEM_write_bio_DSAparams;
-+		PEM_write_bio_PKCS7;
-+		PEM_write_bio_PrivateKey;
-+		PEM_write_bio_RSAPrivateKey;
-+		PEM_write_bio_X509;
-+		PEM_write_bio_X509_CRL;
-+		PEM_write_bio_X509_REQ;
-+		PKCS7_DIGEST_free;
-+		PKCS7_DIGEST_new;
-+		PKCS7_ENCRYPT_free;
-+		PKCS7_ENCRYPT_new;
-+		PKCS7_ENC_CONTENT_free;
-+		PKCS7_ENC_CONTENT_new;
-+		PKCS7_ENVELOPE_free;
-+		PKCS7_ENVELOPE_new;
-+		PKCS7_ISSUER_AND_SERIAL_digest;
-+		PKCS7_ISSUER_AND_SERIAL_free;
-+		PKCS7_ISSUER_AND_SERIAL_new;
-+		PKCS7_RECIP_INFO_free;
-+		PKCS7_RECIP_INFO_new;
-+		PKCS7_SIGNED_free;
-+		PKCS7_SIGNED_new;
-+		PKCS7_SIGNER_INFO_free;
-+		PKCS7_SIGNER_INFO_new;
-+		PKCS7_SIGN_ENVELOPE_free;
-+		PKCS7_SIGN_ENVELOPE_new;
-+		PKCS7_dup;
-+		PKCS7_free;
-+		PKCS7_new;
-+		PROXY_ENTRY_add_noproxy;
-+		PROXY_ENTRY_clear_noproxy;
-+		PROXY_ENTRY_free;
-+		PROXY_ENTRY_get_noproxy;
-+		PROXY_ENTRY_new;
-+		PROXY_ENTRY_set_server;
-+		PROXY_add_noproxy;
-+		PROXY_add_server;
-+		PROXY_check_by_host;
-+		PROXY_check_url;
-+		PROXY_clear_noproxy;
-+		PROXY_free;
-+		PROXY_get_noproxy;
-+		PROXY_get_proxies;
-+		PROXY_get_proxy_entry;
-+		PROXY_load_conf;
-+		PROXY_new;
-+		PROXY_print;
-+		RAND_bytes;
-+		RAND_cleanup;
-+		RAND_file_name;
-+		RAND_load_file;
-+		RAND_screen;
-+		RAND_seed;
-+		RAND_write_file;
-+		RC2_cbc_encrypt;
-+		RC2_cfb64_encrypt;
-+		RC2_ecb_encrypt;
-+		RC2_encrypt;
-+		RC2_ofb64_encrypt;
-+		RC2_set_key;
-+		RC4;
-+		RC4_options;
-+		RC4_set_key;
-+		RSAPrivateKey_asn1_meth;
-+		RSAPrivateKey_dup;
-+		RSAPublicKey_dup;
-+		RSA_PKCS1_SSLeay;
-+		RSA_free;
-+		RSA_generate_key;
-+		RSA_new;
-+		RSA_new_method;
-+		RSA_print;
-+		RSA_print_fp;
-+		RSA_private_decrypt;
-+		RSA_private_encrypt;
-+		RSA_public_decrypt;
-+		RSA_public_encrypt;
-+		RSA_set_default_method;
-+		RSA_sign;
-+		RSA_sign_ASN1_OCTET_STRING;
-+		RSA_size;
-+		RSA_verify;
-+		RSA_verify_ASN1_OCTET_STRING;
-+		SHA;
-+		SHA1;
-+		SHA1_Final;
-+		SHA1_Init;
-+		SHA1_Update;
-+		SHA_Final;
-+		SHA_Init;
-+		SHA_Update;
-+		OpenSSL_add_all_algorithms;
-+		OpenSSL_add_all_ciphers;
-+		OpenSSL_add_all_digests;
-+		TXT_DB_create_index;
-+		TXT_DB_free;
-+		TXT_DB_get_by_index;
-+		TXT_DB_insert;
-+		TXT_DB_read;
-+		TXT_DB_write;
-+		X509_ALGOR_free;
-+		X509_ALGOR_new;
-+		X509_ATTRIBUTE_free;
-+		X509_ATTRIBUTE_new;
-+		X509_CINF_free;
-+		X509_CINF_new;
-+		X509_CRL_INFO_free;
-+		X509_CRL_INFO_new;
-+		X509_CRL_add_ext;
-+		X509_CRL_cmp;
-+		X509_CRL_delete_ext;
-+		X509_CRL_dup;
-+		X509_CRL_free;
-+		X509_CRL_get_ext;
-+		X509_CRL_get_ext_by_NID;
-+		X509_CRL_get_ext_by_OBJ;
-+		X509_CRL_get_ext_by_critical;
-+		X509_CRL_get_ext_count;
-+		X509_CRL_new;
-+		X509_CRL_sign;
-+		X509_CRL_verify;
-+		X509_EXTENSION_create_by_NID;
-+		X509_EXTENSION_create_by_OBJ;
-+		X509_EXTENSION_dup;
-+		X509_EXTENSION_free;
-+		X509_EXTENSION_get_critical;
-+		X509_EXTENSION_get_data;
-+		X509_EXTENSION_get_object;
-+		X509_EXTENSION_new;
-+		X509_EXTENSION_set_critical;
-+		X509_EXTENSION_set_data;
-+		X509_EXTENSION_set_object;
-+		X509_INFO_free;
-+		X509_INFO_new;
-+		X509_LOOKUP_by_alias;
-+		X509_LOOKUP_by_fingerprint;
-+		X509_LOOKUP_by_issuer_serial;
-+		X509_LOOKUP_by_subject;
-+		X509_LOOKUP_ctrl;
-+		X509_LOOKUP_file;
-+		X509_LOOKUP_free;
-+		X509_LOOKUP_hash_dir;
-+		X509_LOOKUP_init;
-+		X509_LOOKUP_new;
-+		X509_LOOKUP_shutdown;
-+		X509_NAME_ENTRY_create_by_NID;
-+		X509_NAME_ENTRY_create_by_OBJ;
-+		X509_NAME_ENTRY_dup;
-+		X509_NAME_ENTRY_free;
-+		X509_NAME_ENTRY_get_data;
-+		X509_NAME_ENTRY_get_object;
-+		X509_NAME_ENTRY_new;
-+		X509_NAME_ENTRY_set_data;
-+		X509_NAME_ENTRY_set_object;
-+		X509_NAME_add_entry;
-+		X509_NAME_cmp;
-+		X509_NAME_delete_entry;
-+		X509_NAME_digest;
-+		X509_NAME_dup;
-+		X509_NAME_entry_count;
-+		X509_NAME_free;
-+		X509_NAME_get_entry;
-+		X509_NAME_get_index_by_NID;
-+		X509_NAME_get_index_by_OBJ;
-+		X509_NAME_get_text_by_NID;
-+		X509_NAME_get_text_by_OBJ;
-+		X509_NAME_hash;
-+		X509_NAME_new;
-+		X509_NAME_oneline;
-+		X509_NAME_print;
-+		X509_NAME_set;
-+		X509_OBJECT_free_contents;
-+		X509_OBJECT_retrieve_by_subject;
-+		X509_OBJECT_up_ref_count;
-+		X509_PKEY_free;
-+		X509_PKEY_new;
-+		X509_PUBKEY_free;
-+		X509_PUBKEY_get;
-+		X509_PUBKEY_new;
-+		X509_PUBKEY_set;
-+		X509_REQ_INFO_free;
-+		X509_REQ_INFO_new;
-+		X509_REQ_dup;
-+		X509_REQ_free;
-+		X509_REQ_get_pubkey;
-+		X509_REQ_new;
-+		X509_REQ_print;
-+		X509_REQ_print_fp;
-+		X509_REQ_set_pubkey;
-+		X509_REQ_set_subject_name;
-+		X509_REQ_set_version;
-+		X509_REQ_sign;
-+		X509_REQ_to_X509;
-+		X509_REQ_verify;
-+		X509_REVOKED_add_ext;
-+		X509_REVOKED_delete_ext;
-+		X509_REVOKED_free;
-+		X509_REVOKED_get_ext;
-+		X509_REVOKED_get_ext_by_NID;
-+		X509_REVOKED_get_ext_by_OBJ;
-+		X509_REVOKED_get_ext_by_critical;
-+		X509_REVOKED_get_ext_by_critic;
-+		X509_REVOKED_get_ext_count;
-+		X509_REVOKED_new;
-+		X509_SIG_free;
-+		X509_SIG_new;
-+		X509_STORE_CTX_cleanup;
-+		X509_STORE_CTX_init;
-+		X509_STORE_add_cert;
-+		X509_STORE_add_lookup;
-+		X509_STORE_free;
-+		X509_STORE_get_by_subject;
-+		X509_STORE_load_locations;
-+		X509_STORE_new;
-+		X509_STORE_set_default_paths;
-+		X509_VAL_free;
-+		X509_VAL_new;
-+		X509_add_ext;
-+		X509_asn1_meth;
-+		X509_certificate_type;
-+		X509_check_private_key;
-+		X509_cmp_current_time;
-+		X509_delete_ext;
-+		X509_digest;
-+		X509_dup;
-+		X509_free;
-+		X509_get_default_cert_area;
-+		X509_get_default_cert_dir;
-+		X509_get_default_cert_dir_env;
-+		X509_get_default_cert_file;
-+		X509_get_default_cert_file_env;
-+		X509_get_default_private_dir;
-+		X509_get_ext;
-+		X509_get_ext_by_NID;
-+		X509_get_ext_by_OBJ;
-+		X509_get_ext_by_critical;
-+		X509_get_ext_count;
-+		X509_get_issuer_name;
-+		X509_get_pubkey;
-+		X509_get_pubkey_parameters;
-+		X509_get_serialNumber;
-+		X509_get_subject_name;
-+		X509_gmtime_adj;
-+		X509_issuer_and_serial_cmp;
-+		X509_issuer_and_serial_hash;
-+		X509_issuer_name_cmp;
-+		X509_issuer_name_hash;
-+		X509_load_cert_file;
-+		X509_new;
-+		X509_print;
-+		X509_print_fp;
-+		X509_set_issuer_name;
-+		X509_set_notAfter;
-+		X509_set_notBefore;
-+		X509_set_pubkey;
-+		X509_set_serialNumber;
-+		X509_set_subject_name;
-+		X509_set_version;
-+		X509_sign;
-+		X509_subject_name_cmp;
-+		X509_subject_name_hash;
-+		X509_to_X509_REQ;
-+		X509_verify;
-+		X509_verify_cert;
-+		X509_verify_cert_error_string;
-+		X509v3_add_ext;
-+		X509v3_add_extension;
-+		X509v3_add_netscape_extensions;
-+		X509v3_add_standard_extensions;
-+		X509v3_cleanup_extensions;
-+		X509v3_data_type_by_NID;
-+		X509v3_data_type_by_OBJ;
-+		X509v3_delete_ext;
-+		X509v3_get_ext;
-+		X509v3_get_ext_by_NID;
-+		X509v3_get_ext_by_OBJ;
-+		X509v3_get_ext_by_critical;
-+		X509v3_get_ext_count;
-+		X509v3_pack_string;
-+		X509v3_pack_type_by_NID;
-+		X509v3_pack_type_by_OBJ;
-+		X509v3_unpack_string;
-+		_des_crypt;
-+		a2d_ASN1_OBJECT;
-+		a2i_ASN1_INTEGER;
-+		a2i_ASN1_STRING;
-+		asn1_Finish;
-+		asn1_GetSequence;
-+		bn_div_words;
-+		bn_expand2;
-+		bn_mul_add_words;
-+		bn_mul_words;
-+		BN_uadd;
-+		BN_usub;
-+		bn_sqr_words;
-+		_ossl_old_crypt;
-+		d2i_ASN1_BIT_STRING;
-+		d2i_ASN1_BOOLEAN;
-+		d2i_ASN1_HEADER;
-+		d2i_ASN1_IA5STRING;
-+		d2i_ASN1_INTEGER;
-+		d2i_ASN1_OBJECT;
-+		d2i_ASN1_OCTET_STRING;
-+		d2i_ASN1_PRINTABLE;
-+		d2i_ASN1_PRINTABLESTRING;
-+		d2i_ASN1_SET;
-+		d2i_ASN1_T61STRING;
-+		d2i_ASN1_TYPE;
-+		d2i_ASN1_UTCTIME;
-+		d2i_ASN1_bytes;
-+		d2i_ASN1_type_bytes;
-+		d2i_DHparams;
-+		d2i_DSAPrivateKey;
-+		d2i_DSAPrivateKey_bio;
-+		d2i_DSAPrivateKey_fp;
-+		d2i_DSAPublicKey;
-+		d2i_DSAparams;
-+		d2i_NETSCAPE_SPKAC;
-+		d2i_NETSCAPE_SPKI;
-+		d2i_Netscape_RSA;
-+		d2i_PKCS7;
-+		d2i_PKCS7_DIGEST;
-+		d2i_PKCS7_ENCRYPT;
-+		d2i_PKCS7_ENC_CONTENT;
-+		d2i_PKCS7_ENVELOPE;
-+		d2i_PKCS7_ISSUER_AND_SERIAL;
-+		d2i_PKCS7_RECIP_INFO;
-+		d2i_PKCS7_SIGNED;
-+		d2i_PKCS7_SIGNER_INFO;
-+		d2i_PKCS7_SIGN_ENVELOPE;
-+		d2i_PKCS7_bio;
-+		d2i_PKCS7_fp;
-+		d2i_PrivateKey;
-+		d2i_PublicKey;
-+		d2i_RSAPrivateKey;
-+		d2i_RSAPrivateKey_bio;
-+		d2i_RSAPrivateKey_fp;
-+		d2i_RSAPublicKey;
-+		d2i_X509;
-+		d2i_X509_ALGOR;
-+		d2i_X509_ATTRIBUTE;
-+		d2i_X509_CINF;
-+		d2i_X509_CRL;
-+		d2i_X509_CRL_INFO;
-+		d2i_X509_CRL_bio;
-+		d2i_X509_CRL_fp;
-+		d2i_X509_EXTENSION;
-+		d2i_X509_NAME;
-+		d2i_X509_NAME_ENTRY;
-+		d2i_X509_PKEY;
-+		d2i_X509_PUBKEY;
-+		d2i_X509_REQ;
-+		d2i_X509_REQ_INFO;
-+		d2i_X509_REQ_bio;
-+		d2i_X509_REQ_fp;
-+		d2i_X509_REVOKED;
-+		d2i_X509_SIG;
-+		d2i_X509_VAL;
-+		d2i_X509_bio;
-+		d2i_X509_fp;
-+		DES_cbc_cksum;
-+		DES_cbc_encrypt;
-+		DES_cblock_print_file;
-+		DES_cfb64_encrypt;
-+		DES_cfb_encrypt;
-+		DES_decrypt3;
-+		DES_ecb3_encrypt;
-+		DES_ecb_encrypt;
-+		DES_ede3_cbc_encrypt;
-+		DES_ede3_cfb64_encrypt;
-+		DES_ede3_ofb64_encrypt;
-+		DES_enc_read;
-+		DES_enc_write;
-+		DES_encrypt1;
-+		DES_encrypt2;
-+		DES_encrypt3;
-+		DES_fcrypt;
-+		DES_is_weak_key;
-+		DES_key_sched;
-+		DES_ncbc_encrypt;
-+		DES_ofb64_encrypt;
-+		DES_ofb_encrypt;
-+		DES_options;
-+		DES_pcbc_encrypt;
-+		DES_quad_cksum;
-+		DES_random_key;
-+		_ossl_old_des_random_seed;
-+		_ossl_old_des_read_2passwords;
-+		_ossl_old_des_read_password;
-+		_ossl_old_des_read_pw;
-+		_ossl_old_des_read_pw_string;
-+		DES_set_key;
-+		DES_set_odd_parity;
-+		DES_string_to_2keys;
-+		DES_string_to_key;
-+		DES_xcbc_encrypt;
-+		DES_xwhite_in2out;
-+		fcrypt_body;
-+		i2a_ASN1_INTEGER;
-+		i2a_ASN1_OBJECT;
-+		i2a_ASN1_STRING;
-+		i2d_ASN1_BIT_STRING;
-+		i2d_ASN1_BOOLEAN;
-+		i2d_ASN1_HEADER;
-+		i2d_ASN1_IA5STRING;
-+		i2d_ASN1_INTEGER;
-+		i2d_ASN1_OBJECT;
-+		i2d_ASN1_OCTET_STRING;
-+		i2d_ASN1_PRINTABLE;
-+		i2d_ASN1_SET;
-+		i2d_ASN1_TYPE;
-+		i2d_ASN1_UTCTIME;
-+		i2d_ASN1_bytes;
-+		i2d_DHparams;
-+		i2d_DSAPrivateKey;
-+		i2d_DSAPrivateKey_bio;
-+		i2d_DSAPrivateKey_fp;
-+		i2d_DSAPublicKey;
-+		i2d_DSAparams;
-+		i2d_NETSCAPE_SPKAC;
-+		i2d_NETSCAPE_SPKI;
-+		i2d_Netscape_RSA;
-+		i2d_PKCS7;
-+		i2d_PKCS7_DIGEST;
-+		i2d_PKCS7_ENCRYPT;
-+		i2d_PKCS7_ENC_CONTENT;
-+		i2d_PKCS7_ENVELOPE;
-+		i2d_PKCS7_ISSUER_AND_SERIAL;
-+		i2d_PKCS7_RECIP_INFO;
-+		i2d_PKCS7_SIGNED;
-+		i2d_PKCS7_SIGNER_INFO;
-+		i2d_PKCS7_SIGN_ENVELOPE;
-+		i2d_PKCS7_bio;
-+		i2d_PKCS7_fp;
-+		i2d_PrivateKey;
-+		i2d_PublicKey;
-+		i2d_RSAPrivateKey;
-+		i2d_RSAPrivateKey_bio;
-+		i2d_RSAPrivateKey_fp;
-+		i2d_RSAPublicKey;
-+		i2d_X509;
-+		i2d_X509_ALGOR;
-+		i2d_X509_ATTRIBUTE;
-+		i2d_X509_CINF;
-+		i2d_X509_CRL;
-+		i2d_X509_CRL_INFO;
-+		i2d_X509_CRL_bio;
-+		i2d_X509_CRL_fp;
-+		i2d_X509_EXTENSION;
-+		i2d_X509_NAME;
-+		i2d_X509_NAME_ENTRY;
-+		i2d_X509_PKEY;
-+		i2d_X509_PUBKEY;
-+		i2d_X509_REQ;
-+		i2d_X509_REQ_INFO;
-+		i2d_X509_REQ_bio;
-+		i2d_X509_REQ_fp;
-+		i2d_X509_REVOKED;
-+		i2d_X509_SIG;
-+		i2d_X509_VAL;
-+		i2d_X509_bio;
-+		i2d_X509_fp;
-+		idea_cbc_encrypt;
-+		idea_cfb64_encrypt;
-+		idea_ecb_encrypt;
-+		idea_encrypt;
-+		idea_ofb64_encrypt;
-+		idea_options;
-+		idea_set_decrypt_key;
-+		idea_set_encrypt_key;
-+		lh_delete;
-+		lh_doall;
-+		lh_doall_arg;
-+		lh_free;
-+		lh_insert;
-+		lh_new;
-+		lh_node_stats;
-+		lh_node_stats_bio;
-+		lh_node_usage_stats;
-+		lh_node_usage_stats_bio;
-+		lh_retrieve;
-+		lh_stats;
-+		lh_stats_bio;
-+		lh_strhash;
-+		sk_delete;
-+		sk_delete_ptr;
-+		sk_dup;
-+		sk_find;
-+		sk_free;
-+		sk_insert;
-+		sk_new;
-+		sk_pop;
-+		sk_pop_free;
-+		sk_push;
-+		sk_set_cmp_func;
-+		sk_shift;
-+		sk_unshift;
-+		sk_zero;
-+		BIO_f_nbio_test;
-+		ASN1_TYPE_get;
-+		ASN1_TYPE_set;
-+		PKCS7_content_free;
-+		ERR_load_PKCS7_strings;
-+		X509_find_by_issuer_and_serial;
-+		X509_find_by_subject;
-+		PKCS7_ctrl;
-+		PKCS7_set_type;
-+		PKCS7_set_content;
-+		PKCS7_SIGNER_INFO_set;
-+		PKCS7_add_signer;
-+		PKCS7_add_certificate;
-+		PKCS7_add_crl;
-+		PKCS7_content_new;
-+		PKCS7_dataSign;
-+		PKCS7_dataVerify;
-+		PKCS7_dataInit;
-+		PKCS7_add_signature;
-+		PKCS7_cert_from_signer_info;
-+		PKCS7_get_signer_info;
-+		EVP_delete_alias;
-+		EVP_mdc2;
-+		PEM_read_bio_RSAPublicKey;
-+		PEM_write_bio_RSAPublicKey;
-+		d2i_RSAPublicKey_bio;
-+		i2d_RSAPublicKey_bio;
-+		PEM_read_RSAPublicKey;
-+		PEM_write_RSAPublicKey;
-+		d2i_RSAPublicKey_fp;
-+		i2d_RSAPublicKey_fp;
-+		BIO_copy_next_retry;
-+		RSA_flags;
-+		X509_STORE_add_crl;
-+		X509_load_crl_file;
-+		EVP_rc2_40_cbc;
-+		EVP_rc4_40;
-+		EVP_CIPHER_CTX_init;
-+		HMAC;
-+		HMAC_Init;
-+		HMAC_Update;
-+		HMAC_Final;
-+		ERR_get_next_error_library;
-+		EVP_PKEY_cmp_parameters;
-+		HMAC_cleanup;
-+		BIO_ptr_ctrl;
-+		BIO_new_file_internal;
-+		BIO_new_fp_internal;
-+		BIO_s_file_internal;
-+		BN_BLINDING_convert;
-+		BN_BLINDING_invert;
-+		BN_BLINDING_update;
-+		RSA_blinding_on;
-+		RSA_blinding_off;
-+		i2t_ASN1_OBJECT;
-+		BN_BLINDING_new;
-+		BN_BLINDING_free;
-+		EVP_cast5_cbc;
-+		EVP_cast5_cfb64;
-+		EVP_cast5_ecb;
-+		EVP_cast5_ofb;
-+		BF_decrypt;
-+		CAST_set_key;
-+		CAST_encrypt;
-+		CAST_decrypt;
-+		CAST_ecb_encrypt;
-+		CAST_cbc_encrypt;
-+		CAST_cfb64_encrypt;
-+		CAST_ofb64_encrypt;
-+		RC2_decrypt;
-+		OBJ_create_objects;
-+		BN_exp;
-+		BN_mul_word;
-+		BN_sub_word;
-+		BN_dec2bn;
-+		BN_bn2dec;
-+		BIO_ghbn_ctrl;
-+		CRYPTO_free_ex_data;
-+		CRYPTO_get_ex_data;
-+		CRYPTO_set_ex_data;
-+		ERR_load_CRYPTO_strings;
-+		ERR_load_CRYPTOlib_strings;
-+		EVP_PKEY_bits;
-+		MD5_Transform;
-+		SHA1_Transform;
-+		SHA_Transform;
-+		X509_STORE_CTX_get_chain;
-+		X509_STORE_CTX_get_current_cert;
-+		X509_STORE_CTX_get_error;
-+		X509_STORE_CTX_get_error_depth;
-+		X509_STORE_CTX_get_ex_data;
-+		X509_STORE_CTX_set_cert;
-+		X509_STORE_CTX_set_chain;
-+		X509_STORE_CTX_set_error;
-+		X509_STORE_CTX_set_ex_data;
-+		CRYPTO_dup_ex_data;
-+		CRYPTO_get_new_lockid;
-+		CRYPTO_new_ex_data;
-+		RSA_set_ex_data;
-+		RSA_get_ex_data;
-+		RSA_get_ex_new_index;
-+		RSA_padding_add_PKCS1_type_1;
-+		RSA_padding_add_PKCS1_type_2;
-+		RSA_padding_add_SSLv23;
-+		RSA_padding_add_none;
-+		RSA_padding_check_PKCS1_type_1;
-+		RSA_padding_check_PKCS1_type_2;
-+		RSA_padding_check_SSLv23;
-+		RSA_padding_check_none;
-+		bn_add_words;
-+		d2i_Netscape_RSA_2;
-+		CRYPTO_get_ex_new_index;
-+		RIPEMD160_Init;
-+		RIPEMD160_Update;
-+		RIPEMD160_Final;
-+		RIPEMD160;
-+		RIPEMD160_Transform;
-+		RC5_32_set_key;
-+		RC5_32_ecb_encrypt;
-+		RC5_32_encrypt;
-+		RC5_32_decrypt;
-+		RC5_32_cbc_encrypt;
-+		RC5_32_cfb64_encrypt;
-+		RC5_32_ofb64_encrypt;
-+		BN_bn2mpi;
-+		BN_mpi2bn;
-+		ASN1_BIT_STRING_get_bit;
-+		ASN1_BIT_STRING_set_bit;
-+		BIO_get_ex_data;
-+		BIO_get_ex_new_index;
-+		BIO_set_ex_data;
-+		X509v3_get_key_usage;
-+		X509v3_set_key_usage;
-+		a2i_X509v3_key_usage;
-+		i2a_X509v3_key_usage;
-+		EVP_PKEY_decrypt;
-+		EVP_PKEY_encrypt;
-+		PKCS7_RECIP_INFO_set;
-+		PKCS7_add_recipient;
-+		PKCS7_add_recipient_info;
-+		PKCS7_set_cipher;
-+		ASN1_TYPE_get_int_octetstring;
-+		ASN1_TYPE_get_octetstring;
-+		ASN1_TYPE_set_int_octetstring;
-+		ASN1_TYPE_set_octetstring;
-+		ASN1_UTCTIME_set_string;
-+		ERR_add_error_data;
-+		ERR_set_error_data;
-+		EVP_CIPHER_asn1_to_param;
-+		EVP_CIPHER_param_to_asn1;
-+		EVP_CIPHER_get_asn1_iv;
-+		EVP_CIPHER_set_asn1_iv;
-+		EVP_rc5_32_12_16_cbc;
-+		EVP_rc5_32_12_16_cfb64;
-+		EVP_rc5_32_12_16_ecb;
-+		EVP_rc5_32_12_16_ofb;
-+		asn1_add_error;
-+		d2i_ASN1_BMPSTRING;
-+		i2d_ASN1_BMPSTRING;
-+		BIO_f_ber;
-+		BN_init;
-+		COMP_CTX_new;
-+		COMP_CTX_free;
-+		COMP_CTX_compress_block;
-+		COMP_CTX_expand_block;
-+		X509_STORE_CTX_get_ex_new_index;
-+		OBJ_NAME_add;
-+		BIO_socket_nbio;
-+		EVP_rc2_64_cbc;
-+		OBJ_NAME_cleanup;
-+		OBJ_NAME_get;
-+		OBJ_NAME_init;
-+		OBJ_NAME_new_index;
-+		OBJ_NAME_remove;
-+		BN_MONT_CTX_copy;
-+		BIO_new_socks4a_connect;
-+		BIO_s_socks4a_connect;
-+		PROXY_set_connect_mode;
-+		RAND_SSLeay;
-+		RAND_set_rand_method;
-+		RSA_memory_lock;
-+		bn_sub_words;
-+		bn_mul_normal;
-+		bn_mul_comba8;
-+		bn_mul_comba4;
-+		bn_sqr_normal;
-+		bn_sqr_comba8;
-+		bn_sqr_comba4;
-+		bn_cmp_words;
-+		bn_mul_recursive;
-+		bn_mul_part_recursive;
-+		bn_sqr_recursive;
-+		bn_mul_low_normal;
-+		BN_RECP_CTX_init;
-+		BN_RECP_CTX_new;
-+		BN_RECP_CTX_free;
-+		BN_RECP_CTX_set;
-+		BN_mod_mul_reciprocal;
-+		BN_mod_exp_recp;
-+		BN_div_recp;
-+		BN_CTX_init;
-+		BN_MONT_CTX_init;
-+		RAND_get_rand_method;
-+		PKCS7_add_attribute;
-+		PKCS7_add_signed_attribute;
-+		PKCS7_digest_from_attributes;
-+		PKCS7_get_attribute;
-+		PKCS7_get_issuer_and_serial;
-+		PKCS7_get_signed_attribute;
-+		COMP_compress_block;
-+		COMP_expand_block;
-+		COMP_rle;
-+		COMP_zlib;
-+		ms_time_diff;
-+		ms_time_new;
-+		ms_time_free;
-+		ms_time_cmp;
-+		ms_time_get;
-+		PKCS7_set_attributes;
-+		PKCS7_set_signed_attributes;
-+		X509_ATTRIBUTE_create;
-+		X509_ATTRIBUTE_dup;
-+		ASN1_GENERALIZEDTIME_check;
-+		ASN1_GENERALIZEDTIME_print;
-+		ASN1_GENERALIZEDTIME_set;
-+		ASN1_GENERALIZEDTIME_set_string;
-+		ASN1_TIME_print;
-+		BASIC_CONSTRAINTS_free;
-+		BASIC_CONSTRAINTS_new;
-+		ERR_load_X509V3_strings;
-+		NETSCAPE_CERT_SEQUENCE_free;
-+		NETSCAPE_CERT_SEQUENCE_new;
-+		OBJ_txt2obj;
-+		PEM_read_NETSCAPE_CERT_SEQUENCE;
-+		PEM_read_NS_CERT_SEQ;
-+		PEM_read_bio_NETSCAPE_CERT_SEQUENCE;
-+		PEM_read_bio_NS_CERT_SEQ;
-+		PEM_write_NETSCAPE_CERT_SEQUENCE;
-+		PEM_write_NS_CERT_SEQ;
-+		PEM_write_bio_NETSCAPE_CERT_SEQUENCE;
-+		PEM_write_bio_NS_CERT_SEQ;
-+		X509V3_EXT_add;
-+		X509V3_EXT_add_alias;
-+		X509V3_EXT_add_conf;
-+		X509V3_EXT_cleanup;
-+		X509V3_EXT_conf;
-+		X509V3_EXT_conf_nid;
-+		X509V3_EXT_get;
-+		X509V3_EXT_get_nid;
-+		X509V3_EXT_print;
-+		X509V3_EXT_print_fp;
-+		X509V3_add_standard_extensions;
-+		X509V3_add_value;
-+		X509V3_add_value_bool;
-+		X509V3_add_value_int;
-+		X509V3_conf_free;
-+		X509V3_get_value_bool;
-+		X509V3_get_value_int;
-+		X509V3_parse_list;
-+		d2i_ASN1_GENERALIZEDTIME;
-+		d2i_ASN1_TIME;
-+		d2i_BASIC_CONSTRAINTS;
-+		d2i_NETSCAPE_CERT_SEQUENCE;
-+		d2i_ext_ku;
-+		ext_ku_free;
-+		ext_ku_new;
-+		i2d_ASN1_GENERALIZEDTIME;
-+		i2d_ASN1_TIME;
-+		i2d_BASIC_CONSTRAINTS;
-+		i2d_NETSCAPE_CERT_SEQUENCE;
-+		i2d_ext_ku;
-+		EVP_MD_CTX_copy;
-+		i2d_ASN1_ENUMERATED;
-+		d2i_ASN1_ENUMERATED;
-+		ASN1_ENUMERATED_set;
-+		ASN1_ENUMERATED_get;
-+		BN_to_ASN1_ENUMERATED;
-+		ASN1_ENUMERATED_to_BN;
-+		i2a_ASN1_ENUMERATED;
-+		a2i_ASN1_ENUMERATED;
-+		i2d_GENERAL_NAME;
-+		d2i_GENERAL_NAME;
-+		GENERAL_NAME_new;
-+		GENERAL_NAME_free;
-+		GENERAL_NAMES_new;
-+		GENERAL_NAMES_free;
-+		d2i_GENERAL_NAMES;
-+		i2d_GENERAL_NAMES;
-+		i2v_GENERAL_NAMES;
-+		i2s_ASN1_OCTET_STRING;
-+		s2i_ASN1_OCTET_STRING;
-+		X509V3_EXT_check_conf;
-+		hex_to_string;
-+		string_to_hex;
-+		DES_ede3_cbcm_encrypt;
-+		RSA_padding_add_PKCS1_OAEP;
-+		RSA_padding_check_PKCS1_OAEP;
-+		X509_CRL_print_fp;
-+		X509_CRL_print;
-+		i2v_GENERAL_NAME;
-+		v2i_GENERAL_NAME;
-+		i2d_PKEY_USAGE_PERIOD;
-+		d2i_PKEY_USAGE_PERIOD;
-+		PKEY_USAGE_PERIOD_new;
-+		PKEY_USAGE_PERIOD_free;
-+		v2i_GENERAL_NAMES;
-+		i2s_ASN1_INTEGER;
-+		X509V3_EXT_d2i;
-+		name_cmp;
-+		str_dup;
-+		i2s_ASN1_ENUMERATED;
-+		i2s_ASN1_ENUMERATED_TABLE;
-+		BIO_s_log;
-+		BIO_f_reliable;
-+		PKCS7_dataFinal;
-+		PKCS7_dataDecode;
-+		X509V3_EXT_CRL_add_conf;
-+		BN_set_params;
-+		BN_get_params;
-+		BIO_get_ex_num;
-+		BIO_set_ex_free_func;
-+		EVP_ripemd160;
-+		ASN1_TIME_set;
-+		i2d_AUTHORITY_KEYID;
-+		d2i_AUTHORITY_KEYID;
-+		AUTHORITY_KEYID_new;
-+		AUTHORITY_KEYID_free;
-+		ASN1_seq_unpack;
-+		ASN1_seq_pack;
-+		ASN1_unpack_string;
-+		ASN1_pack_string;
-+		PKCS12_pack_safebag;
-+		PKCS12_MAKE_KEYBAG;
-+		PKCS8_encrypt;
-+		PKCS12_MAKE_SHKEYBAG;
-+		PKCS12_pack_p7data;
-+		PKCS12_pack_p7encdata;
-+		PKCS12_add_localkeyid;
-+		PKCS12_add_friendlyname_asc;
-+		PKCS12_add_friendlyname_uni;
-+		PKCS12_get_friendlyname;
-+		PKCS12_pbe_crypt;
-+		PKCS12_decrypt_d2i;
-+		PKCS12_i2d_encrypt;
-+		PKCS12_init;
-+		PKCS12_key_gen_asc;
-+		PKCS12_key_gen_uni;
-+		PKCS12_gen_mac;
-+		PKCS12_verify_mac;
-+		PKCS12_set_mac;
-+		PKCS12_setup_mac;
-+		OPENSSL_asc2uni;
-+		OPENSSL_uni2asc;
-+		i2d_PKCS12_BAGS;
-+		PKCS12_BAGS_new;
-+		d2i_PKCS12_BAGS;
-+		PKCS12_BAGS_free;
-+		i2d_PKCS12;
-+		d2i_PKCS12;
-+		PKCS12_new;
-+		PKCS12_free;
-+		i2d_PKCS12_MAC_DATA;
-+		PKCS12_MAC_DATA_new;
-+		d2i_PKCS12_MAC_DATA;
-+		PKCS12_MAC_DATA_free;
-+		i2d_PKCS12_SAFEBAG;
-+		PKCS12_SAFEBAG_new;
-+		d2i_PKCS12_SAFEBAG;
-+		PKCS12_SAFEBAG_free;
-+		ERR_load_PKCS12_strings;
-+		PKCS12_PBE_add;
-+		PKCS8_add_keyusage;
-+		PKCS12_get_attr_gen;
-+		PKCS12_parse;
-+		PKCS12_create;
-+		i2d_PKCS12_bio;
-+		i2d_PKCS12_fp;
-+		d2i_PKCS12_bio;
-+		d2i_PKCS12_fp;
-+		i2d_PBEPARAM;
-+		PBEPARAM_new;
-+		d2i_PBEPARAM;
-+		PBEPARAM_free;
-+		i2d_PKCS8_PRIV_KEY_INFO;
-+		PKCS8_PRIV_KEY_INFO_new;
-+		d2i_PKCS8_PRIV_KEY_INFO;
-+		PKCS8_PRIV_KEY_INFO_free;
-+		EVP_PKCS82PKEY;
-+		EVP_PKEY2PKCS8;
-+		PKCS8_set_broken;
-+		EVP_PBE_ALGOR_CipherInit;
-+		EVP_PBE_alg_add;
-+		PKCS5_pbe_set;
-+		EVP_PBE_cleanup;
-+		i2d_SXNET;
-+		d2i_SXNET;
-+		SXNET_new;
-+		SXNET_free;
-+		i2d_SXNETID;
-+		d2i_SXNETID;
-+		SXNETID_new;
-+		SXNETID_free;
-+		DSA_SIG_new;
-+		DSA_SIG_free;
-+		DSA_do_sign;
-+		DSA_do_verify;
-+		d2i_DSA_SIG;
-+		i2d_DSA_SIG;
-+		i2d_ASN1_VISIBLESTRING;
-+		d2i_ASN1_VISIBLESTRING;
-+		i2d_ASN1_UTF8STRING;
-+		d2i_ASN1_UTF8STRING;
-+		i2d_DIRECTORYSTRING;
-+		d2i_DIRECTORYSTRING;
-+		i2d_DISPLAYTEXT;
-+		d2i_DISPLAYTEXT;
-+		d2i_ASN1_SET_OF_X509;
-+		i2d_ASN1_SET_OF_X509;
-+		i2d_PBKDF2PARAM;
-+		PBKDF2PARAM_new;
-+		d2i_PBKDF2PARAM;
-+		PBKDF2PARAM_free;
-+		i2d_PBE2PARAM;
-+		PBE2PARAM_new;
-+		d2i_PBE2PARAM;
-+		PBE2PARAM_free;
-+		d2i_ASN1_SET_OF_GENERAL_NAME;
-+		i2d_ASN1_SET_OF_GENERAL_NAME;
-+		d2i_ASN1_SET_OF_SXNETID;
-+		i2d_ASN1_SET_OF_SXNETID;
-+		d2i_ASN1_SET_OF_POLICYQUALINFO;
-+		i2d_ASN1_SET_OF_POLICYQUALINFO;
-+		d2i_ASN1_SET_OF_POLICYINFO;
-+		i2d_ASN1_SET_OF_POLICYINFO;
-+		SXNET_add_id_asc;
-+		SXNET_add_id_ulong;
-+		SXNET_add_id_INTEGER;
-+		SXNET_get_id_asc;
-+		SXNET_get_id_ulong;
-+		SXNET_get_id_INTEGER;
-+		X509V3_set_conf_lhash;
-+		i2d_CERTIFICATEPOLICIES;
-+		CERTIFICATEPOLICIES_new;
-+		CERTIFICATEPOLICIES_free;
-+		d2i_CERTIFICATEPOLICIES;
-+		i2d_POLICYINFO;
-+		POLICYINFO_new;
-+		d2i_POLICYINFO;
-+		POLICYINFO_free;
-+		i2d_POLICYQUALINFO;
-+		POLICYQUALINFO_new;
-+		d2i_POLICYQUALINFO;
-+		POLICYQUALINFO_free;
-+		i2d_USERNOTICE;
-+		USERNOTICE_new;
-+		d2i_USERNOTICE;
-+		USERNOTICE_free;
-+		i2d_NOTICEREF;
-+		NOTICEREF_new;
-+		d2i_NOTICEREF;
-+		NOTICEREF_free;
-+		X509V3_get_string;
-+		X509V3_get_section;
-+		X509V3_string_free;
-+		X509V3_section_free;
-+		X509V3_set_ctx;
-+		s2i_ASN1_INTEGER;
-+		CRYPTO_set_locked_mem_functions;
-+		CRYPTO_get_locked_mem_functions;
-+		CRYPTO_malloc_locked;
-+		CRYPTO_free_locked;
-+		BN_mod_exp2_mont;
-+		ERR_get_error_line_data;
-+		ERR_peek_error_line_data;
-+		PKCS12_PBE_keyivgen;
-+		X509_ALGOR_dup;
-+		d2i_ASN1_SET_OF_DIST_POINT;
-+		i2d_ASN1_SET_OF_DIST_POINT;
-+		i2d_CRL_DIST_POINTS;
-+		CRL_DIST_POINTS_new;
-+		CRL_DIST_POINTS_free;
-+		d2i_CRL_DIST_POINTS;
-+		i2d_DIST_POINT;
-+		DIST_POINT_new;
-+		d2i_DIST_POINT;
-+		DIST_POINT_free;
-+		i2d_DIST_POINT_NAME;
-+		DIST_POINT_NAME_new;
-+		DIST_POINT_NAME_free;
-+		d2i_DIST_POINT_NAME;
-+		X509V3_add_value_uchar;
-+		d2i_ASN1_SET_OF_X509_ATTRIBUTE;
-+		i2d_ASN1_SET_OF_ASN1_TYPE;
-+		d2i_ASN1_SET_OF_X509_EXTENSION;
-+		d2i_ASN1_SET_OF_X509_NAME_ENTRY;
-+		d2i_ASN1_SET_OF_ASN1_TYPE;
-+		i2d_ASN1_SET_OF_X509_ATTRIBUTE;
-+		i2d_ASN1_SET_OF_X509_EXTENSION;
-+		i2d_ASN1_SET_OF_X509_NAME_ENTRY;
-+		X509V3_EXT_i2d;
-+		X509V3_EXT_val_prn;
-+		X509V3_EXT_add_list;
-+		EVP_CIPHER_type;
-+		EVP_PBE_CipherInit;
-+		X509V3_add_value_bool_nf;
-+		d2i_ASN1_UINTEGER;
-+		sk_value;
-+		sk_num;
-+		sk_set;
-+		i2d_ASN1_SET_OF_X509_REVOKED;
-+		sk_sort;
-+		d2i_ASN1_SET_OF_X509_REVOKED;
-+		i2d_ASN1_SET_OF_X509_ALGOR;
-+		i2d_ASN1_SET_OF_X509_CRL;
-+		d2i_ASN1_SET_OF_X509_ALGOR;
-+		d2i_ASN1_SET_OF_X509_CRL;
-+		i2d_ASN1_SET_OF_PKCS7_SIGNER_INFO;
-+		i2d_ASN1_SET_OF_PKCS7_RECIP_INFO;
-+		d2i_ASN1_SET_OF_PKCS7_SIGNER_INFO;
-+		d2i_ASN1_SET_OF_PKCS7_RECIP_INFO;
-+		PKCS5_PBE_add;
-+		PEM_write_bio_PKCS8;
-+		i2d_PKCS8_fp;
-+		PEM_read_bio_PKCS8_PRIV_KEY_INFO;
-+		PEM_read_bio_P8_PRIV_KEY_INFO;
-+		d2i_PKCS8_bio;
-+		d2i_PKCS8_PRIV_KEY_INFO_fp;
-+		PEM_write_bio_PKCS8_PRIV_KEY_INFO;
-+		PEM_write_bio_P8_PRIV_KEY_INFO;
-+		PEM_read_PKCS8;
-+		d2i_PKCS8_PRIV_KEY_INFO_bio;
-+		d2i_PKCS8_fp;
-+		PEM_write_PKCS8;
-+		PEM_read_PKCS8_PRIV_KEY_INFO;
-+		PEM_read_P8_PRIV_KEY_INFO;
-+		PEM_read_bio_PKCS8;
-+		PEM_write_PKCS8_PRIV_KEY_INFO;
-+		PEM_write_P8_PRIV_KEY_INFO;
-+		PKCS5_PBE_keyivgen;
-+		i2d_PKCS8_bio;
-+		i2d_PKCS8_PRIV_KEY_INFO_fp;
-+		i2d_PKCS8_PRIV_KEY_INFO_bio;
-+		BIO_s_bio;
-+		PKCS5_pbe2_set;
-+		PKCS5_PBKDF2_HMAC_SHA1;
-+		PKCS5_v2_PBE_keyivgen;
-+		PEM_write_bio_PKCS8PrivateKey;
-+		PEM_write_PKCS8PrivateKey;
-+		BIO_ctrl_get_read_request;
-+		BIO_ctrl_pending;
-+		BIO_ctrl_wpending;
-+		BIO_new_bio_pair;
-+		BIO_ctrl_get_write_guarantee;
-+		CRYPTO_num_locks;
-+		CONF_load_bio;
-+		CONF_load_fp;
-+		i2d_ASN1_SET_OF_ASN1_OBJECT;
-+		d2i_ASN1_SET_OF_ASN1_OBJECT;
-+		PKCS7_signatureVerify;
-+		RSA_set_method;
-+		RSA_get_method;
-+		RSA_get_default_method;
-+		RSA_check_key;
-+		OBJ_obj2txt;
-+		DSA_dup_DH;
-+		X509_REQ_get_extensions;
-+		X509_REQ_set_extension_nids;
-+		BIO_nwrite;
-+		X509_REQ_extension_nid;
-+		BIO_nread;
-+		X509_REQ_get_extension_nids;
-+		BIO_nwrite0;
-+		X509_REQ_add_extensions_nid;
-+		BIO_nread0;
-+		X509_REQ_add_extensions;
-+		BIO_new_mem_buf;
-+		DH_set_ex_data;
-+		DH_set_method;
-+		DSA_OpenSSL;
-+		DH_get_ex_data;
-+		DH_get_ex_new_index;
-+		DSA_new_method;
-+		DH_new_method;
-+		DH_OpenSSL;
-+		DSA_get_ex_new_index;
-+		DH_get_default_method;
-+		DSA_set_ex_data;
-+		DH_set_default_method;
-+		DSA_get_ex_data;
-+		X509V3_EXT_REQ_add_conf;
-+		NETSCAPE_SPKI_print;
-+		NETSCAPE_SPKI_set_pubkey;
-+		NETSCAPE_SPKI_b64_encode;
-+		NETSCAPE_SPKI_get_pubkey;
-+		NETSCAPE_SPKI_b64_decode;
-+		UTF8_putc;
-+		UTF8_getc;
-+		RSA_null_method;
-+		ASN1_tag2str;
-+		BIO_ctrl_reset_read_request;
-+		DISPLAYTEXT_new;
-+		ASN1_GENERALIZEDTIME_free;
-+		X509_REVOKED_get_ext_d2i;
-+		X509_set_ex_data;
-+		X509_reject_set_bit_asc;
-+		X509_NAME_add_entry_by_txt;
-+		X509_NAME_add_entry_by_NID;
-+		X509_PURPOSE_get0;
-+		PEM_read_X509_AUX;
-+		d2i_AUTHORITY_INFO_ACCESS;
-+		PEM_write_PUBKEY;
-+		ACCESS_DESCRIPTION_new;
-+		X509_CERT_AUX_free;
-+		d2i_ACCESS_DESCRIPTION;
-+		X509_trust_clear;
-+		X509_TRUST_add;
-+		ASN1_VISIBLESTRING_new;
-+		X509_alias_set1;
-+		ASN1_PRINTABLESTRING_free;
-+		EVP_PKEY_get1_DSA;
-+		ASN1_BMPSTRING_new;
-+		ASN1_mbstring_copy;
-+		ASN1_UTF8STRING_new;
-+		DSA_get_default_method;
-+		i2d_ASN1_SET_OF_ACCESS_DESCRIPTION;
-+		ASN1_T61STRING_free;
-+		DSA_set_method;
-+		X509_get_ex_data;
-+		ASN1_STRING_type;
-+		X509_PURPOSE_get_by_sname;
-+		ASN1_TIME_free;
-+		ASN1_OCTET_STRING_cmp;
-+		ASN1_BIT_STRING_new;
-+		X509_get_ext_d2i;
-+		PEM_read_bio_X509_AUX;
-+		ASN1_STRING_set_default_mask_asc;
-+		ASN1_STRING_set_def_mask_asc;
-+		PEM_write_bio_RSA_PUBKEY;
-+		ASN1_INTEGER_cmp;
-+		d2i_RSA_PUBKEY_fp;
-+		X509_trust_set_bit_asc;
-+		PEM_write_bio_DSA_PUBKEY;
-+		X509_STORE_CTX_free;
-+		EVP_PKEY_set1_DSA;
-+		i2d_DSA_PUBKEY_fp;
-+		X509_load_cert_crl_file;
-+		ASN1_TIME_new;
-+		i2d_RSA_PUBKEY;
-+		X509_STORE_CTX_purpose_inherit;
-+		PEM_read_RSA_PUBKEY;
-+		d2i_X509_AUX;
-+		i2d_DSA_PUBKEY;
-+		X509_CERT_AUX_print;
-+		PEM_read_DSA_PUBKEY;
-+		i2d_RSA_PUBKEY_bio;
-+		ASN1_BIT_STRING_num_asc;
-+		i2d_PUBKEY;
-+		ASN1_UTCTIME_free;
-+		DSA_set_default_method;
-+		X509_PURPOSE_get_by_id;
-+		ACCESS_DESCRIPTION_free;
-+		PEM_read_bio_PUBKEY;
-+		ASN1_STRING_set_by_NID;
-+		X509_PURPOSE_get_id;
-+		DISPLAYTEXT_free;
-+		OTHERNAME_new;
-+		X509_CERT_AUX_new;
-+		X509_TRUST_cleanup;
-+		X509_NAME_add_entry_by_OBJ;
-+		X509_CRL_get_ext_d2i;
-+		X509_PURPOSE_get0_name;
-+		PEM_read_PUBKEY;
-+		i2d_DSA_PUBKEY_bio;
-+		i2d_OTHERNAME;
-+		ASN1_OCTET_STRING_free;
-+		ASN1_BIT_STRING_set_asc;
-+		X509_get_ex_new_index;
-+		ASN1_STRING_TABLE_cleanup;
-+		X509_TRUST_get_by_id;
-+		X509_PURPOSE_get_trust;
-+		ASN1_STRING_length;
-+		d2i_ASN1_SET_OF_ACCESS_DESCRIPTION;
-+		ASN1_PRINTABLESTRING_new;
-+		X509V3_get_d2i;
-+		ASN1_ENUMERATED_free;
-+		i2d_X509_CERT_AUX;
-+		X509_STORE_CTX_set_trust;
-+		ASN1_STRING_set_default_mask;
-+		X509_STORE_CTX_new;
-+		EVP_PKEY_get1_RSA;
-+		DIRECTORYSTRING_free;
-+		PEM_write_X509_AUX;
-+		ASN1_OCTET_STRING_set;
-+		d2i_DSA_PUBKEY_fp;
-+		d2i_RSA_PUBKEY;
-+		X509_TRUST_get0_name;
-+		X509_TRUST_get0;
-+		AUTHORITY_INFO_ACCESS_free;
-+		ASN1_IA5STRING_new;
-+		d2i_DSA_PUBKEY;
-+		X509_check_purpose;
-+		ASN1_ENUMERATED_new;
-+		d2i_RSA_PUBKEY_bio;
-+		d2i_PUBKEY;
-+		X509_TRUST_get_trust;
-+		X509_TRUST_get_flags;
-+		ASN1_BMPSTRING_free;
-+		ASN1_T61STRING_new;
-+		ASN1_UTCTIME_new;
-+		i2d_AUTHORITY_INFO_ACCESS;
-+		EVP_PKEY_set1_RSA;
-+		X509_STORE_CTX_set_purpose;
-+		ASN1_IA5STRING_free;
-+		PEM_write_bio_X509_AUX;
-+		X509_PURPOSE_get_count;
-+		CRYPTO_add_info;
-+		X509_NAME_ENTRY_create_by_txt;
-+		ASN1_STRING_get_default_mask;
-+		X509_alias_get0;
-+		ASN1_STRING_data;
-+		i2d_ACCESS_DESCRIPTION;
-+		X509_trust_set_bit;
-+		ASN1_BIT_STRING_free;
-+		PEM_read_bio_RSA_PUBKEY;
-+		X509_add1_reject_object;
-+		X509_check_trust;
-+		PEM_read_bio_DSA_PUBKEY;
-+		X509_PURPOSE_add;
-+		ASN1_STRING_TABLE_get;
-+		ASN1_UTF8STRING_free;
-+		d2i_DSA_PUBKEY_bio;
-+		PEM_write_RSA_PUBKEY;
-+		d2i_OTHERNAME;
-+		X509_reject_set_bit;
-+		PEM_write_DSA_PUBKEY;
-+		X509_PURPOSE_get0_sname;
-+		EVP_PKEY_set1_DH;
-+		ASN1_OCTET_STRING_dup;
-+		ASN1_BIT_STRING_set;
-+		X509_TRUST_get_count;
-+		ASN1_INTEGER_free;
-+		OTHERNAME_free;
-+		i2d_RSA_PUBKEY_fp;
-+		ASN1_INTEGER_dup;
-+		d2i_X509_CERT_AUX;
-+		PEM_write_bio_PUBKEY;
-+		ASN1_VISIBLESTRING_free;
-+		X509_PURPOSE_cleanup;
-+		ASN1_mbstring_ncopy;
-+		ASN1_GENERALIZEDTIME_new;
-+		EVP_PKEY_get1_DH;
-+		ASN1_OCTET_STRING_new;
-+		ASN1_INTEGER_new;
-+		i2d_X509_AUX;
-+		ASN1_BIT_STRING_name_print;
-+		X509_cmp;
-+		ASN1_STRING_length_set;
-+		DIRECTORYSTRING_new;
-+		X509_add1_trust_object;
-+		PKCS12_newpass;
-+		SMIME_write_PKCS7;
-+		SMIME_read_PKCS7;
-+		DES_set_key_checked;
-+		PKCS7_verify;
-+		PKCS7_encrypt;
-+		DES_set_key_unchecked;
-+		SMIME_crlf_copy;
-+		i2d_ASN1_PRINTABLESTRING;
-+		PKCS7_get0_signers;
-+		PKCS7_decrypt;
-+		SMIME_text;
-+		PKCS7_simple_smimecap;
-+		PKCS7_get_smimecap;
-+		PKCS7_sign;
-+		PKCS7_add_attrib_smimecap;
-+		CRYPTO_dbg_set_options;
-+		CRYPTO_remove_all_info;
-+		CRYPTO_get_mem_debug_functions;
-+		CRYPTO_is_mem_check_on;
-+		CRYPTO_set_mem_debug_functions;
-+		CRYPTO_pop_info;
-+		CRYPTO_push_info_;
-+		CRYPTO_set_mem_debug_options;
-+		PEM_write_PKCS8PrivateKey_nid;
-+		PEM_write_bio_PKCS8PrivateKey_nid;
-+		PEM_write_bio_PKCS8PrivKey_nid;
-+		d2i_PKCS8PrivateKey_bio;
-+		ASN1_NULL_free;
-+		d2i_ASN1_NULL;
-+		ASN1_NULL_new;
-+		i2d_PKCS8PrivateKey_bio;
-+		i2d_PKCS8PrivateKey_fp;
-+		i2d_ASN1_NULL;
-+		i2d_PKCS8PrivateKey_nid_fp;
-+		d2i_PKCS8PrivateKey_fp;
-+		i2d_PKCS8PrivateKey_nid_bio;
-+		i2d_PKCS8PrivateKeyInfo_fp;
-+		i2d_PKCS8PrivateKeyInfo_bio;
-+		PEM_cb;
-+		i2d_PrivateKey_fp;
-+		d2i_PrivateKey_bio;
-+		d2i_PrivateKey_fp;
-+		i2d_PrivateKey_bio;
-+		X509_reject_clear;
-+		X509_TRUST_set_default;
-+		d2i_AutoPrivateKey;
-+		X509_ATTRIBUTE_get0_type;
-+		X509_ATTRIBUTE_set1_data;
-+		X509at_get_attr;
-+		X509at_get_attr_count;
-+		X509_ATTRIBUTE_create_by_NID;
-+		X509_ATTRIBUTE_set1_object;
-+		X509_ATTRIBUTE_count;
-+		X509_ATTRIBUTE_create_by_OBJ;
-+		X509_ATTRIBUTE_get0_object;
-+		X509at_get_attr_by_NID;
-+		X509at_add1_attr;
-+		X509_ATTRIBUTE_get0_data;
-+		X509at_delete_attr;
-+		X509at_get_attr_by_OBJ;
-+		RAND_add;
-+		BIO_number_written;
-+		BIO_number_read;
-+		X509_STORE_CTX_get1_chain;
-+		ERR_load_RAND_strings;
-+		RAND_pseudo_bytes;
-+		X509_REQ_get_attr_by_NID;
-+		X509_REQ_get_attr;
-+		X509_REQ_add1_attr_by_NID;
-+		X509_REQ_get_attr_by_OBJ;
-+		X509at_add1_attr_by_NID;
-+		X509_REQ_add1_attr_by_OBJ;
-+		X509_REQ_get_attr_count;
-+		X509_REQ_add1_attr;
-+		X509_REQ_delete_attr;
-+		X509at_add1_attr_by_OBJ;
-+		X509_REQ_add1_attr_by_txt;
-+		X509_ATTRIBUTE_create_by_txt;
-+		X509at_add1_attr_by_txt;
-+		BN_pseudo_rand;
-+		BN_is_prime_fasttest;
-+		BN_CTX_end;
-+		BN_CTX_start;
-+		BN_CTX_get;
-+		EVP_PKEY2PKCS8_broken;
-+		ASN1_STRING_TABLE_add;
-+		CRYPTO_dbg_get_options;
-+		AUTHORITY_INFO_ACCESS_new;
-+		CRYPTO_get_mem_debug_options;
-+		DES_crypt;
-+		PEM_write_bio_X509_REQ_NEW;
-+		PEM_write_X509_REQ_NEW;
-+		BIO_callback_ctrl;
-+		RAND_egd;
-+		RAND_status;
-+		bn_dump1;
-+		DES_check_key_parity;
-+		lh_num_items;
-+		RAND_event;
-+		DSO_new;
-+		DSO_new_method;
-+		DSO_free;
-+		DSO_flags;
-+		DSO_up;
-+		DSO_set_default_method;
-+		DSO_get_default_method;
-+		DSO_get_method;
-+		DSO_set_method;
-+		DSO_load;
-+		DSO_bind_var;
-+		DSO_METHOD_null;
-+		DSO_METHOD_openssl;
-+		DSO_METHOD_dlfcn;
-+		DSO_METHOD_win32;
-+		ERR_load_DSO_strings;
-+		DSO_METHOD_dl;
-+		NCONF_load;
-+		NCONF_load_fp;
-+		NCONF_new;
-+		NCONF_get_string;
-+		NCONF_free;
-+		NCONF_get_number;
-+		CONF_dump_fp;
-+		NCONF_load_bio;
-+		NCONF_dump_fp;
-+		NCONF_get_section;
-+		NCONF_dump_bio;
-+		CONF_dump_bio;
-+		NCONF_free_data;
-+		CONF_set_default_method;
-+		ERR_error_string_n;
-+		BIO_snprintf;
-+		DSO_ctrl;
-+		i2d_ASN1_SET_OF_ASN1_INTEGER;
-+		i2d_ASN1_SET_OF_PKCS12_SAFEBAG;
-+		i2d_ASN1_SET_OF_PKCS7;
-+		BIO_vfree;
-+		d2i_ASN1_SET_OF_ASN1_INTEGER;
-+		d2i_ASN1_SET_OF_PKCS12_SAFEBAG;
-+		ASN1_UTCTIME_get;
-+		X509_REQ_digest;
-+		X509_CRL_digest;
-+		d2i_ASN1_SET_OF_PKCS7;
-+		EVP_CIPHER_CTX_set_key_length;
-+		EVP_CIPHER_CTX_ctrl;
-+		BN_mod_exp_mont_word;
-+		RAND_egd_bytes;
-+		X509_REQ_get1_email;
-+		X509_get1_email;
-+		X509_email_free;
-+		i2d_RSA_NET;
-+		d2i_RSA_NET_2;
-+		d2i_RSA_NET;
-+		DSO_bind_func;
-+		CRYPTO_get_new_dynlockid;
-+		sk_new_null;
-+		CRYPTO_set_dynlock_destroy_callback;
-+		CRYPTO_set_dynlock_destroy_cb;
-+		CRYPTO_destroy_dynlockid;
-+		CRYPTO_set_dynlock_size;
-+		CRYPTO_set_dynlock_create_callback;
-+		CRYPTO_set_dynlock_create_cb;
-+		CRYPTO_set_dynlock_lock_callback;
-+		CRYPTO_set_dynlock_lock_cb;
-+		CRYPTO_get_dynlock_lock_callback;
-+		CRYPTO_get_dynlock_lock_cb;
-+		CRYPTO_get_dynlock_destroy_callback;
-+		CRYPTO_get_dynlock_destroy_cb;
-+		CRYPTO_get_dynlock_value;
-+		CRYPTO_get_dynlock_create_callback;
-+		CRYPTO_get_dynlock_create_cb;
-+		c2i_ASN1_BIT_STRING;
-+		i2c_ASN1_BIT_STRING;
-+		RAND_poll;
-+		c2i_ASN1_INTEGER;
-+		i2c_ASN1_INTEGER;
-+		BIO_dump_indent;
-+		ASN1_parse_dump;
-+		c2i_ASN1_OBJECT;
-+		X509_NAME_print_ex_fp;
-+		ASN1_STRING_print_ex_fp;
-+		X509_NAME_print_ex;
-+		ASN1_STRING_print_ex;
-+		MD4;
-+		MD4_Transform;
-+		MD4_Final;
-+		MD4_Update;
-+		MD4_Init;
-+		EVP_md4;
-+		i2d_PUBKEY_bio;
-+		i2d_PUBKEY_fp;
-+		d2i_PUBKEY_bio;
-+		ASN1_STRING_to_UTF8;
-+		BIO_vprintf;
-+		BIO_vsnprintf;
-+		d2i_PUBKEY_fp;
-+		X509_cmp_time;
-+		X509_STORE_CTX_set_time;
-+		X509_STORE_CTX_get1_issuer;
-+		X509_OBJECT_retrieve_match;
-+		X509_OBJECT_idx_by_subject;
-+		X509_STORE_CTX_set_flags;
-+		X509_STORE_CTX_trusted_stack;
-+		X509_time_adj;
-+		X509_check_issued;
-+		ASN1_UTCTIME_cmp_time_t;
-+		DES_set_weak_key_flag;
-+		DES_check_key;
-+		DES_rw_mode;
-+		RSA_PKCS1_RSAref;
-+		X509_keyid_set1;
-+		BIO_next;
-+		DSO_METHOD_vms;
-+		BIO_f_linebuffer;
-+		BN_bntest_rand;
-+		OPENSSL_issetugid;
-+		BN_rand_range;
-+		ERR_load_ENGINE_strings;
-+		ENGINE_set_DSA;
-+		ENGINE_get_finish_function;
-+		ENGINE_get_default_RSA;
-+		ENGINE_get_BN_mod_exp;
-+		DSA_get_default_openssl_method;
-+		ENGINE_set_DH;
-+		ENGINE_set_def_BN_mod_exp_crt;
-+		ENGINE_set_default_BN_mod_exp_crt;
-+		ENGINE_init;
-+		DH_get_default_openssl_method;
-+		RSA_set_default_openssl_method;
-+		ENGINE_finish;
-+		ENGINE_load_public_key;
-+		ENGINE_get_DH;
-+		ENGINE_ctrl;
-+		ENGINE_get_init_function;
-+		ENGINE_set_init_function;
-+		ENGINE_set_default_DSA;
-+		ENGINE_get_name;
-+		ENGINE_get_last;
-+		ENGINE_get_prev;
-+		ENGINE_get_default_DH;
-+		ENGINE_get_RSA;
-+		ENGINE_set_default;
-+		ENGINE_get_RAND;
-+		ENGINE_get_first;
-+		ENGINE_by_id;
-+		ENGINE_set_finish_function;
-+		ENGINE_get_def_BN_mod_exp_crt;
-+		ENGINE_get_default_BN_mod_exp_crt;
-+		RSA_get_default_openssl_method;
-+		ENGINE_set_RSA;
-+		ENGINE_load_private_key;
-+		ENGINE_set_default_RAND;
-+		ENGINE_set_BN_mod_exp;
-+		ENGINE_remove;
-+		ENGINE_free;
-+		ENGINE_get_BN_mod_exp_crt;
-+		ENGINE_get_next;
-+		ENGINE_set_name;
-+		ENGINE_get_default_DSA;
-+		ENGINE_set_default_BN_mod_exp;
-+		ENGINE_set_default_RSA;
-+		ENGINE_get_default_RAND;
-+		ENGINE_get_default_BN_mod_exp;
-+		ENGINE_set_RAND;
-+		ENGINE_set_id;
-+		ENGINE_set_BN_mod_exp_crt;
-+		ENGINE_set_default_DH;
-+		ENGINE_new;
-+		ENGINE_get_id;
-+		DSA_set_default_openssl_method;
-+		ENGINE_add;
-+		DH_set_default_openssl_method;
-+		ENGINE_get_DSA;
-+		ENGINE_get_ctrl_function;
-+		ENGINE_set_ctrl_function;
-+		BN_pseudo_rand_range;
-+		X509_STORE_CTX_set_verify_cb;
-+		ERR_load_COMP_strings;
-+		PKCS12_item_decrypt_d2i;
-+		ASN1_UTF8STRING_it;
-+		ENGINE_unregister_ciphers;
-+		ENGINE_get_ciphers;
-+		d2i_OCSP_BASICRESP;
-+		KRB5_CHECKSUM_it;
-+		EC_POINT_add;
-+		ASN1_item_ex_i2d;
-+		OCSP_CERTID_it;
-+		d2i_OCSP_RESPBYTES;
-+		X509V3_add1_i2d;
-+		PKCS7_ENVELOPE_it;
-+		UI_add_input_boolean;
-+		ENGINE_unregister_RSA;
-+		X509V3_EXT_nconf;
-+		ASN1_GENERALSTRING_free;
-+		d2i_OCSP_CERTSTATUS;
-+		X509_REVOKED_set_serialNumber;
-+		X509_print_ex;
-+		OCSP_ONEREQ_get1_ext_d2i;
-+		ENGINE_register_all_RAND;
-+		ENGINE_load_dynamic;
-+		PBKDF2PARAM_it;
-+		EXTENDED_KEY_USAGE_new;
-+		EC_GROUP_clear_free;
-+		OCSP_sendreq_bio;
-+		ASN1_item_digest;
-+		OCSP_BASICRESP_delete_ext;
-+		OCSP_SIGNATURE_it;
-+		X509_CRL_it;
-+		OCSP_BASICRESP_add_ext;
-+		KRB5_ENCKEY_it;
-+		UI_method_set_closer;
-+		X509_STORE_set_purpose;
-+		i2d_ASN1_GENERALSTRING;
-+		OCSP_response_status;
-+		i2d_OCSP_SERVICELOC;
-+		ENGINE_get_digest_engine;
-+		EC_GROUP_set_curve_GFp;
-+		OCSP_REQUEST_get_ext_by_OBJ;
-+		_ossl_old_des_random_key;
-+		ASN1_T61STRING_it;
-+		EC_GROUP_method_of;
-+		i2d_KRB5_APREQ;
-+		_ossl_old_des_encrypt;
-+		ASN1_PRINTABLE_new;
-+		HMAC_Init_ex;
-+		d2i_KRB5_AUTHENT;
-+		OCSP_archive_cutoff_new;
-+		EC_POINT_set_Jprojective_coordinates_GFp;
-+		EC_POINT_set_Jproj_coords_GFp;
-+		_ossl_old_des_is_weak_key;
-+		OCSP_BASICRESP_get_ext_by_OBJ;
-+		EC_POINT_oct2point;
-+		OCSP_SINGLERESP_get_ext_count;
-+		UI_ctrl;
-+		_shadow_DES_rw_mode;
-+		asn1_do_adb;
-+		ASN1_template_i2d;
-+		ENGINE_register_DH;
-+		UI_construct_prompt;
-+		X509_STORE_set_trust;
-+		UI_dup_input_string;
-+		d2i_KRB5_APREQ;
-+		EVP_MD_CTX_copy_ex;
-+		OCSP_request_is_signed;
-+		i2d_OCSP_REQINFO;
-+		KRB5_ENCKEY_free;
-+		OCSP_resp_get0;
-+		GENERAL_NAME_it;
-+		ASN1_GENERALIZEDTIME_it;
-+		X509_STORE_set_flags;
-+		EC_POINT_set_compressed_coordinates_GFp;
-+		EC_POINT_set_compr_coords_GFp;
-+		OCSP_response_status_str;
-+		d2i_OCSP_REVOKEDINFO;
-+		OCSP_basic_add1_cert;
-+		ERR_get_implementation;
-+		EVP_CipherFinal_ex;
-+		OCSP_CERTSTATUS_new;
-+		CRYPTO_cleanup_all_ex_data;
-+		OCSP_resp_find;
-+		BN_nnmod;
-+		X509_CRL_sort;
-+		X509_REVOKED_set_revocationDate;
-+		ENGINE_register_RAND;
-+		OCSP_SERVICELOC_new;
-+		EC_POINT_set_affine_coordinates_GFp;
-+		EC_POINT_set_affine_coords_GFp;
-+		_ossl_old_des_options;
-+		SXNET_it;
-+		UI_dup_input_boolean;
-+		PKCS12_add_CSPName_asc;
-+		EC_POINT_is_at_infinity;
-+		ENGINE_load_cryptodev;
-+		DSO_convert_filename;
-+		POLICYQUALINFO_it;
-+		ENGINE_register_ciphers;
-+		BN_mod_lshift_quick;
-+		DSO_set_filename;
-+		ASN1_item_free;
-+		KRB5_TKTBODY_free;
-+		AUTHORITY_KEYID_it;
-+		KRB5_APREQBODY_new;
-+		X509V3_EXT_REQ_add_nconf;
-+		ENGINE_ctrl_cmd_string;
-+		i2d_OCSP_RESPDATA;
-+		EVP_MD_CTX_init;
-+		EXTENDED_KEY_USAGE_free;
-+		PKCS7_ATTR_SIGN_it;
-+		UI_add_error_string;
-+		KRB5_CHECKSUM_free;
-+		OCSP_REQUEST_get_ext;
-+		ENGINE_load_ubsec;
-+		ENGINE_register_all_digests;
-+		PKEY_USAGE_PERIOD_it;
-+		PKCS12_unpack_authsafes;
-+		ASN1_item_unpack;
-+		NETSCAPE_SPKAC_it;
-+		X509_REVOKED_it;
-+		ASN1_STRING_encode;
-+		EVP_aes_128_ecb;
-+		KRB5_AUTHENT_free;
-+		OCSP_BASICRESP_get_ext_by_critical;
-+		OCSP_BASICRESP_get_ext_by_crit;
-+		OCSP_cert_status_str;
-+		d2i_OCSP_REQUEST;
-+		UI_dup_info_string;
-+		_ossl_old_des_xwhite_in2out;
-+		PKCS12_it;
-+		OCSP_SINGLERESP_get_ext_by_critical;
-+		OCSP_SINGLERESP_get_ext_by_crit;
-+		OCSP_CERTSTATUS_free;
-+		_ossl_old_des_crypt;
-+		ASN1_item_i2d;
-+		EVP_DecryptFinal_ex;
-+		ENGINE_load_openssl;
-+		ENGINE_get_cmd_defns;
-+		ENGINE_set_load_privkey_function;
-+		ENGINE_set_load_privkey_fn;
-+		EVP_EncryptFinal_ex;
-+		ENGINE_set_default_digests;
-+		X509_get0_pubkey_bitstr;
-+		asn1_ex_i2c;
-+		ENGINE_register_RSA;
-+		ENGINE_unregister_DSA;
-+		_ossl_old_des_key_sched;
-+		X509_EXTENSION_it;
-+		i2d_KRB5_AUTHENT;
-+		SXNETID_it;
-+		d2i_OCSP_SINGLERESP;
-+		EDIPARTYNAME_new;
-+		PKCS12_certbag2x509;
-+		_ossl_old_des_ofb64_encrypt;
-+		d2i_EXTENDED_KEY_USAGE;
-+		ERR_print_errors_cb;
-+		ENGINE_set_ciphers;
-+		d2i_KRB5_APREQBODY;
-+		UI_method_get_flusher;
-+		X509_PUBKEY_it;
-+		_ossl_old_des_enc_read;
-+		PKCS7_ENCRYPT_it;
-+		i2d_OCSP_RESPONSE;
-+		EC_GROUP_get_cofactor;
-+		PKCS12_unpack_p7data;
-+		d2i_KRB5_AUTHDATA;
-+		OCSP_copy_nonce;
-+		KRB5_AUTHDATA_new;
-+		OCSP_RESPDATA_new;
-+		EC_GFp_mont_method;
-+		OCSP_REVOKEDINFO_free;
-+		UI_get_ex_data;
-+		KRB5_APREQBODY_free;
-+		EC_GROUP_get0_generator;
-+		UI_get_default_method;
-+		X509V3_set_nconf;
-+		PKCS12_item_i2d_encrypt;
-+		X509_add1_ext_i2d;
-+		PKCS7_SIGNER_INFO_it;
-+		KRB5_PRINCNAME_new;
-+		PKCS12_SAFEBAG_it;
-+		EC_GROUP_get_order;
-+		d2i_OCSP_RESPID;
-+		OCSP_request_verify;
-+		NCONF_get_number_e;
-+		_ossl_old_des_decrypt3;
-+		X509_signature_print;
-+		OCSP_SINGLERESP_free;
-+		ENGINE_load_builtin_engines;
-+		i2d_OCSP_ONEREQ;
-+		OCSP_REQUEST_add_ext;
-+		OCSP_RESPBYTES_new;
-+		EVP_MD_CTX_create;
-+		OCSP_resp_find_status;
-+		X509_ALGOR_it;
-+		ASN1_TIME_it;
-+		OCSP_request_set1_name;
-+		OCSP_ONEREQ_get_ext_count;
-+		UI_get0_result;
-+		PKCS12_AUTHSAFES_it;
-+		EVP_aes_256_ecb;
-+		PKCS12_pack_authsafes;
-+		ASN1_IA5STRING_it;
-+		UI_get_input_flags;
-+		EC_GROUP_set_generator;
-+		_ossl_old_des_string_to_2keys;
-+		OCSP_CERTID_free;
-+		X509_CERT_AUX_it;
-+		CERTIFICATEPOLICIES_it;
-+		_ossl_old_des_ede3_cbc_encrypt;
-+		RAND_set_rand_engine;
-+		DSO_get_loaded_filename;
-+		X509_ATTRIBUTE_it;
-+		OCSP_ONEREQ_get_ext_by_NID;
-+		PKCS12_decrypt_skey;
-+		KRB5_AUTHENT_it;
-+		UI_dup_error_string;
-+		RSAPublicKey_it;
-+		i2d_OCSP_REQUEST;
-+		PKCS12_x509crl2certbag;
-+		OCSP_SERVICELOC_it;
-+		ASN1_item_sign;
-+		X509_CRL_set_issuer_name;
-+		OBJ_NAME_do_all_sorted;
-+		i2d_OCSP_BASICRESP;
-+		i2d_OCSP_RESPBYTES;
-+		PKCS12_unpack_p7encdata;
-+		HMAC_CTX_init;
-+		ENGINE_get_digest;
-+		OCSP_RESPONSE_print;
-+		KRB5_TKTBODY_it;
-+		ACCESS_DESCRIPTION_it;
-+		PKCS7_ISSUER_AND_SERIAL_it;
-+		PBE2PARAM_it;
-+		PKCS12_certbag2x509crl;
-+		PKCS7_SIGNED_it;
-+		ENGINE_get_cipher;
-+		i2d_OCSP_CRLID;
-+		OCSP_SINGLERESP_new;
-+		ENGINE_cmd_is_executable;
-+		RSA_up_ref;
-+		ASN1_GENERALSTRING_it;
-+		ENGINE_register_DSA;
-+		X509V3_EXT_add_nconf_sk;
-+		ENGINE_set_load_pubkey_function;
-+		PKCS8_decrypt;
-+		PEM_bytes_read_bio;
-+		DIRECTORYSTRING_it;
-+		d2i_OCSP_CRLID;
-+		EC_POINT_is_on_curve;
-+		CRYPTO_set_locked_mem_ex_functions;
-+		CRYPTO_set_locked_mem_ex_funcs;
-+		d2i_KRB5_CHECKSUM;
-+		ASN1_item_dup;
-+		X509_it;
-+		BN_mod_add;
-+		KRB5_AUTHDATA_free;
-+		_ossl_old_des_cbc_cksum;
-+		ASN1_item_verify;
-+		CRYPTO_set_mem_ex_functions;
-+		EC_POINT_get_Jprojective_coordinates_GFp;
-+		EC_POINT_get_Jproj_coords_GFp;
-+		ZLONG_it;
-+		CRYPTO_get_locked_mem_ex_functions;
-+		CRYPTO_get_locked_mem_ex_funcs;
-+		ASN1_TIME_check;
-+		UI_get0_user_data;
-+		HMAC_CTX_cleanup;
-+		DSA_up_ref;
-+		_ossl_old_des_ede3_cfb64_encrypt;
-+		_ossl_odes_ede3_cfb64_encrypt;
-+		ASN1_BMPSTRING_it;
-+		ASN1_tag2bit;
-+		UI_method_set_flusher;
-+		X509_ocspid_print;
-+		KRB5_ENCDATA_it;
-+		ENGINE_get_load_pubkey_function;
-+		UI_add_user_data;
-+		OCSP_REQUEST_delete_ext;
-+		UI_get_method;
-+		OCSP_ONEREQ_free;
-+		ASN1_PRINTABLESTRING_it;
-+		X509_CRL_set_nextUpdate;
-+		OCSP_REQUEST_it;
-+		OCSP_BASICRESP_it;
-+		AES_ecb_encrypt;
-+		BN_mod_sqr;
-+		NETSCAPE_CERT_SEQUENCE_it;
-+		GENERAL_NAMES_it;
-+		AUTHORITY_INFO_ACCESS_it;
-+		ASN1_FBOOLEAN_it;
-+		UI_set_ex_data;
-+		_ossl_old_des_string_to_key;
-+		ENGINE_register_all_RSA;
-+		d2i_KRB5_PRINCNAME;
-+		OCSP_RESPBYTES_it;
-+		X509_CINF_it;
-+		ENGINE_unregister_digests;
-+		d2i_EDIPARTYNAME;
-+		d2i_OCSP_SERVICELOC;
-+		ENGINE_get_digests;
-+		_ossl_old_des_set_odd_parity;
-+		OCSP_RESPDATA_free;
-+		d2i_KRB5_TICKET;
-+		OTHERNAME_it;
-+		EVP_MD_CTX_cleanup;
-+		d2i_ASN1_GENERALSTRING;
-+		X509_CRL_set_version;
-+		BN_mod_sub;
-+		OCSP_SINGLERESP_get_ext_by_NID;
-+		ENGINE_get_ex_new_index;
-+		OCSP_REQUEST_free;
-+		OCSP_REQUEST_add1_ext_i2d;
-+		X509_VAL_it;
-+		EC_POINTs_make_affine;
-+		EC_POINT_mul;
-+		X509V3_EXT_add_nconf;
-+		X509_TRUST_set;
-+		X509_CRL_add1_ext_i2d;
-+		_ossl_old_des_fcrypt;
-+		DISPLAYTEXT_it;
-+		X509_CRL_set_lastUpdate;
-+		OCSP_BASICRESP_free;
-+		OCSP_BASICRESP_add1_ext_i2d;
-+		d2i_KRB5_AUTHENTBODY;
-+		CRYPTO_set_ex_data_implementation;
-+		CRYPTO_set_ex_data_impl;
-+		KRB5_ENCDATA_new;
-+		DSO_up_ref;
-+		OCSP_crl_reason_str;
-+		UI_get0_result_string;
-+		ASN1_GENERALSTRING_new;
-+		X509_SIG_it;
-+		ERR_set_implementation;
-+		ERR_load_EC_strings;
-+		UI_get0_action_string;
-+		OCSP_ONEREQ_get_ext;
-+		EC_POINT_method_of;
-+		i2d_KRB5_APREQBODY;
-+		_ossl_old_des_ecb3_encrypt;
-+		CRYPTO_get_mem_ex_functions;
-+		ENGINE_get_ex_data;
-+		UI_destroy_method;
-+		ASN1_item_i2d_bio;
-+		OCSP_ONEREQ_get_ext_by_OBJ;
-+		ASN1_primitive_new;
-+		ASN1_PRINTABLE_it;
-+		EVP_aes_192_ecb;
-+		OCSP_SIGNATURE_new;
-+		LONG_it;
-+		ASN1_VISIBLESTRING_it;
-+		OCSP_SINGLERESP_add1_ext_i2d;
-+		d2i_OCSP_CERTID;
-+		ASN1_item_d2i_fp;
-+		CRL_DIST_POINTS_it;
-+		GENERAL_NAME_print;
-+		OCSP_SINGLERESP_delete_ext;
-+		PKCS12_SAFEBAGS_it;
-+		d2i_OCSP_SIGNATURE;
-+		OCSP_request_add1_nonce;
-+		ENGINE_set_cmd_defns;
-+		OCSP_SERVICELOC_free;
-+		EC_GROUP_free;
-+		ASN1_BIT_STRING_it;
-+		X509_REQ_it;
-+		_ossl_old_des_cbc_encrypt;
-+		ERR_unload_strings;
-+		PKCS7_SIGN_ENVELOPE_it;
-+		EDIPARTYNAME_free;
-+		OCSP_REQINFO_free;
-+		EC_GROUP_new_curve_GFp;
-+		OCSP_REQUEST_get1_ext_d2i;
-+		PKCS12_item_pack_safebag;
-+		asn1_ex_c2i;
-+		ENGINE_register_digests;
-+		i2d_OCSP_REVOKEDINFO;
-+		asn1_enc_restore;
-+		UI_free;
-+		UI_new_method;
-+		EVP_EncryptInit_ex;
-+		X509_pubkey_digest;
-+		EC_POINT_invert;
-+		OCSP_basic_sign;
-+		i2d_OCSP_RESPID;
-+		OCSP_check_nonce;
-+		ENGINE_ctrl_cmd;
-+		d2i_KRB5_ENCKEY;
-+		OCSP_parse_url;
-+		OCSP_SINGLERESP_get_ext;
-+		OCSP_CRLID_free;
-+		OCSP_BASICRESP_get1_ext_d2i;
-+		RSAPrivateKey_it;
-+		ENGINE_register_all_DH;
-+		i2d_EDIPARTYNAME;
-+		EC_POINT_get_affine_coordinates_GFp;
-+		EC_POINT_get_affine_coords_GFp;
-+		OCSP_CRLID_new;
-+		ENGINE_get_flags;
-+		OCSP_ONEREQ_it;
-+		UI_process;
-+		ASN1_INTEGER_it;
-+		EVP_CipherInit_ex;
-+		UI_get_string_type;
-+		ENGINE_unregister_DH;
-+		ENGINE_register_all_DSA;
-+		OCSP_ONEREQ_get_ext_by_critical;
-+		bn_dup_expand;
-+		OCSP_cert_id_new;
-+		BASIC_CONSTRAINTS_it;
-+		BN_mod_add_quick;
-+		EC_POINT_new;
-+		EVP_MD_CTX_destroy;
-+		OCSP_RESPBYTES_free;
-+		EVP_aes_128_cbc;
-+		OCSP_SINGLERESP_get1_ext_d2i;
-+		EC_POINT_free;
-+		DH_up_ref;
-+		X509_NAME_ENTRY_it;
-+		UI_get_ex_new_index;
-+		BN_mod_sub_quick;
-+		OCSP_ONEREQ_add_ext;
-+		OCSP_request_sign;
-+		EVP_DigestFinal_ex;
-+		ENGINE_set_digests;
-+		OCSP_id_issuer_cmp;
-+		OBJ_NAME_do_all;
-+		EC_POINTs_mul;
-+		ENGINE_register_complete;
-+		X509V3_EXT_nconf_nid;
-+		ASN1_SEQUENCE_it;
-+		UI_set_default_method;
-+		RAND_query_egd_bytes;
-+		UI_method_get_writer;
-+		UI_OpenSSL;
-+		PEM_def_callback;
-+		ENGINE_cleanup;
-+		DIST_POINT_it;
-+		OCSP_SINGLERESP_it;
-+		d2i_KRB5_TKTBODY;
-+		EC_POINT_cmp;
-+		OCSP_REVOKEDINFO_new;
-+		i2d_OCSP_CERTSTATUS;
-+		OCSP_basic_add1_nonce;
-+		ASN1_item_ex_d2i;
-+		BN_mod_lshift1_quick;
-+		UI_set_method;
-+		OCSP_id_get0_info;
-+		BN_mod_sqrt;
-+		EC_GROUP_copy;
-+		KRB5_ENCDATA_free;
-+		_ossl_old_des_cfb_encrypt;
-+		OCSP_SINGLERESP_get_ext_by_OBJ;
-+		OCSP_cert_to_id;
-+		OCSP_RESPID_new;
-+		OCSP_RESPDATA_it;
-+		d2i_OCSP_RESPDATA;
-+		ENGINE_register_all_complete;
-+		OCSP_check_validity;
-+		PKCS12_BAGS_it;
-+		OCSP_url_svcloc_new;
-+		ASN1_template_free;
-+		OCSP_SINGLERESP_add_ext;
-+		KRB5_AUTHENTBODY_it;
-+		X509_supported_extension;
-+		i2d_KRB5_AUTHDATA;
-+		UI_method_get_opener;
-+		ENGINE_set_ex_data;
-+		OCSP_REQUEST_print;
-+		CBIGNUM_it;
-+		KRB5_TICKET_new;
-+		KRB5_APREQ_new;
-+		EC_GROUP_get_curve_GFp;
-+		KRB5_ENCKEY_new;
-+		ASN1_template_d2i;
-+		_ossl_old_des_quad_cksum;
-+		OCSP_single_get0_status;
-+		BN_swap;
-+		POLICYINFO_it;
-+		ENGINE_set_destroy_function;
-+		asn1_enc_free;
-+		OCSP_RESPID_it;
-+		EC_GROUP_new;
-+		EVP_aes_256_cbc;
-+		i2d_KRB5_PRINCNAME;
-+		_ossl_old_des_encrypt2;
-+		_ossl_old_des_encrypt3;
-+		PKCS8_PRIV_KEY_INFO_it;
-+		OCSP_REQINFO_it;
-+		PBEPARAM_it;
-+		KRB5_AUTHENTBODY_new;
-+		X509_CRL_add0_revoked;
-+		EDIPARTYNAME_it;
-+		NETSCAPE_SPKI_it;
-+		UI_get0_test_string;
-+		ENGINE_get_cipher_engine;
-+		ENGINE_register_all_ciphers;
-+		EC_POINT_copy;
-+		BN_kronecker;
-+		_ossl_old_des_ede3_ofb64_encrypt;
-+		_ossl_odes_ede3_ofb64_encrypt;
-+		UI_method_get_reader;
-+		OCSP_BASICRESP_get_ext_count;
-+		ASN1_ENUMERATED_it;
-+		UI_set_result;
-+		i2d_KRB5_TICKET;
-+		X509_print_ex_fp;
-+		EVP_CIPHER_CTX_set_padding;
-+		d2i_OCSP_RESPONSE;
-+		ASN1_UTCTIME_it;
-+		_ossl_old_des_enc_write;
-+		OCSP_RESPONSE_new;
-+		AES_set_encrypt_key;
-+		OCSP_resp_count;
-+		KRB5_CHECKSUM_new;
-+		ENGINE_load_cswift;
-+		OCSP_onereq_get0_id;
-+		ENGINE_set_default_ciphers;
-+		NOTICEREF_it;
-+		X509V3_EXT_CRL_add_nconf;
-+		OCSP_REVOKEDINFO_it;
-+		AES_encrypt;
-+		OCSP_REQUEST_new;
-+		ASN1_ANY_it;
-+		CRYPTO_ex_data_new_class;
-+		_ossl_old_des_ncbc_encrypt;
-+		i2d_KRB5_TKTBODY;
-+		EC_POINT_clear_free;
-+		AES_decrypt;
-+		asn1_enc_init;
-+		UI_get_result_maxsize;
-+		OCSP_CERTID_new;
-+		ENGINE_unregister_RAND;
-+		UI_method_get_closer;
-+		d2i_KRB5_ENCDATA;
-+		OCSP_request_onereq_count;
-+		OCSP_basic_verify;
-+		KRB5_AUTHENTBODY_free;
-+		ASN1_item_d2i;
-+		ASN1_primitive_free;
-+		i2d_EXTENDED_KEY_USAGE;
-+		i2d_OCSP_SIGNATURE;
-+		asn1_enc_save;
-+		ENGINE_load_nuron;
-+		_ossl_old_des_pcbc_encrypt;
-+		PKCS12_MAC_DATA_it;
-+		OCSP_accept_responses_new;
-+		asn1_do_lock;
-+		PKCS7_ATTR_VERIFY_it;
-+		KRB5_APREQBODY_it;
-+		i2d_OCSP_SINGLERESP;
-+		ASN1_item_ex_new;
-+		UI_add_verify_string;
-+		_ossl_old_des_set_key;
-+		KRB5_PRINCNAME_it;
-+		EVP_DecryptInit_ex;
-+		i2d_OCSP_CERTID;
-+		ASN1_item_d2i_bio;
-+		EC_POINT_dbl;
-+		asn1_get_choice_selector;
-+		i2d_KRB5_CHECKSUM;
-+		ENGINE_set_table_flags;
-+		AES_options;
-+		ENGINE_load_chil;
-+		OCSP_id_cmp;
-+		OCSP_BASICRESP_new;
-+		OCSP_REQUEST_get_ext_by_NID;
-+		KRB5_APREQ_it;
-+		ENGINE_get_destroy_function;
-+		CONF_set_nconf;
-+		ASN1_PRINTABLE_free;
-+		OCSP_BASICRESP_get_ext_by_NID;
-+		DIST_POINT_NAME_it;
-+		X509V3_extensions_print;
-+		_ossl_old_des_cfb64_encrypt;
-+		X509_REVOKED_add1_ext_i2d;
-+		_ossl_old_des_ofb_encrypt;
-+		KRB5_TKTBODY_new;
-+		ASN1_OCTET_STRING_it;
-+		ERR_load_UI_strings;
-+		i2d_KRB5_ENCKEY;
-+		ASN1_template_new;
-+		OCSP_SIGNATURE_free;
-+		ASN1_item_i2d_fp;
-+		KRB5_PRINCNAME_free;
-+		PKCS7_RECIP_INFO_it;
-+		EXTENDED_KEY_USAGE_it;
-+		EC_GFp_simple_method;
-+		EC_GROUP_precompute_mult;
-+		OCSP_request_onereq_get0;
-+		UI_method_set_writer;
-+		KRB5_AUTHENT_new;
-+		X509_CRL_INFO_it;
-+		DSO_set_name_converter;
-+		AES_set_decrypt_key;
-+		PKCS7_DIGEST_it;
-+		PKCS12_x5092certbag;
-+		EVP_DigestInit_ex;
-+		i2a_ACCESS_DESCRIPTION;
-+		OCSP_RESPONSE_it;
-+		PKCS7_ENC_CONTENT_it;
-+		OCSP_request_add0_id;
-+		EC_POINT_make_affine;
-+		DSO_get_filename;
-+		OCSP_CERTSTATUS_it;
-+		OCSP_request_add1_cert;
-+		UI_get0_output_string;
-+		UI_dup_verify_string;
-+		BN_mod_lshift;
-+		KRB5_AUTHDATA_it;
-+		asn1_set_choice_selector;
-+		OCSP_basic_add1_status;
-+		OCSP_RESPID_free;
-+		asn1_get_field_ptr;
-+		UI_add_input_string;
-+		OCSP_CRLID_it;
-+		i2d_KRB5_AUTHENTBODY;
-+		OCSP_REQUEST_get_ext_count;
-+		ENGINE_load_atalla;
-+		X509_NAME_it;
-+		USERNOTICE_it;
-+		OCSP_REQINFO_new;
-+		OCSP_BASICRESP_get_ext;
-+		CRYPTO_get_ex_data_implementation;
-+		CRYPTO_get_ex_data_impl;
-+		ASN1_item_pack;
-+		i2d_KRB5_ENCDATA;
-+		X509_PURPOSE_set;
-+		X509_REQ_INFO_it;
-+		UI_method_set_opener;
-+		ASN1_item_ex_free;
-+		ASN1_BOOLEAN_it;
-+		ENGINE_get_table_flags;
-+		UI_create_method;
-+		OCSP_ONEREQ_add1_ext_i2d;
-+		_shadow_DES_check_key;
-+		d2i_OCSP_REQINFO;
-+		UI_add_info_string;
-+		UI_get_result_minsize;
-+		ASN1_NULL_it;
-+		BN_mod_lshift1;
-+		d2i_OCSP_ONEREQ;
-+		OCSP_ONEREQ_new;
-+		KRB5_TICKET_it;
-+		EVP_aes_192_cbc;
-+		KRB5_TICKET_free;
-+		UI_new;
-+		OCSP_response_create;
-+		_ossl_old_des_xcbc_encrypt;
-+		PKCS7_it;
-+		OCSP_REQUEST_get_ext_by_critical;
-+		OCSP_REQUEST_get_ext_by_crit;
-+		ENGINE_set_flags;
-+		_ossl_old_des_ecb_encrypt;
-+		OCSP_response_get1_basic;
-+		EVP_Digest;
-+		OCSP_ONEREQ_delete_ext;
-+		ASN1_TBOOLEAN_it;
-+		ASN1_item_new;
-+		ASN1_TIME_to_generalizedtime;
-+		BIGNUM_it;
-+		AES_cbc_encrypt;
-+		ENGINE_get_load_privkey_function;
-+		ENGINE_get_load_privkey_fn;
-+		OCSP_RESPONSE_free;
-+		UI_method_set_reader;
-+		i2d_ASN1_T61STRING;
-+		EC_POINT_set_to_infinity;
-+		ERR_load_OCSP_strings;
-+		EC_POINT_point2oct;
-+		KRB5_APREQ_free;
-+		ASN1_OBJECT_it;
-+		OCSP_crlID_new;
-+		OCSP_crlID2_new;
-+		CONF_modules_load_file;
-+		CONF_imodule_set_usr_data;
-+		ENGINE_set_default_string;
-+		CONF_module_get_usr_data;
-+		ASN1_add_oid_module;
-+		CONF_modules_finish;
-+		OPENSSL_config;
-+		CONF_modules_unload;
-+		CONF_imodule_get_value;
-+		CONF_module_set_usr_data;
-+		CONF_parse_list;
-+		CONF_module_add;
-+		CONF_get1_default_config_file;
-+		CONF_imodule_get_flags;
-+		CONF_imodule_get_module;
-+		CONF_modules_load;
-+		CONF_imodule_get_name;
-+		ERR_peek_top_error;
-+		CONF_imodule_get_usr_data;
-+		CONF_imodule_set_flags;
-+		ENGINE_add_conf_module;
-+		ERR_peek_last_error_line;
-+		ERR_peek_last_error_line_data;
-+		ERR_peek_last_error;
-+		DES_read_2passwords;
-+		DES_read_password;
-+		UI_UTIL_read_pw;
-+		UI_UTIL_read_pw_string;
-+		ENGINE_load_aep;
-+		ENGINE_load_sureware;
-+		OPENSSL_add_all_algorithms_noconf;
-+		OPENSSL_add_all_algo_noconf;
-+		OPENSSL_add_all_algorithms_conf;
-+		OPENSSL_add_all_algo_conf;
-+		OPENSSL_load_builtin_modules;
-+		AES_ofb128_encrypt;
-+		AES_ctr128_encrypt;
-+		AES_cfb128_encrypt;
-+		ENGINE_load_4758cca;
-+		_ossl_096_des_random_seed;
-+		EVP_aes_256_ofb;
-+		EVP_aes_192_ofb;
-+		EVP_aes_128_cfb128;
-+		EVP_aes_256_cfb128;
-+		EVP_aes_128_ofb;
-+		EVP_aes_192_cfb128;
-+		CONF_modules_free;
-+		NCONF_default;
-+		OPENSSL_no_config;
-+		NCONF_WIN32;
-+		ASN1_UNIVERSALSTRING_new;
-+		EVP_des_ede_ecb;
-+		i2d_ASN1_UNIVERSALSTRING;
-+		ASN1_UNIVERSALSTRING_free;
-+		ASN1_UNIVERSALSTRING_it;
-+		d2i_ASN1_UNIVERSALSTRING;
-+		EVP_des_ede3_ecb;
-+		X509_REQ_print_ex;
-+		ENGINE_up_ref;
-+		BUF_MEM_grow_clean;
-+		CRYPTO_realloc_clean;
-+		BUF_strlcat;
-+		BIO_indent;
-+		BUF_strlcpy;
-+		OpenSSLDie;
-+		OPENSSL_cleanse;
-+		ENGINE_setup_bsd_cryptodev;
-+		ERR_release_err_state_table;
-+		EVP_aes_128_cfb8;
-+		FIPS_corrupt_rsa;
-+		FIPS_selftest_des;
-+		EVP_aes_128_cfb1;
-+		EVP_aes_192_cfb8;
-+		FIPS_mode_set;
-+		FIPS_selftest_dsa;
-+		EVP_aes_256_cfb8;
-+		FIPS_allow_md5;
-+		DES_ede3_cfb_encrypt;
-+		EVP_des_ede3_cfb8;
-+		FIPS_rand_seeded;
-+		AES_cfbr_encrypt_block;
-+		AES_cfb8_encrypt;
-+		FIPS_rand_seed;
-+		FIPS_corrupt_des;
-+		EVP_aes_192_cfb1;
-+		FIPS_selftest_aes;
-+		FIPS_set_prng_key;
-+		EVP_des_cfb8;
-+		FIPS_corrupt_dsa;
-+		FIPS_test_mode;
-+		FIPS_rand_method;
-+		EVP_aes_256_cfb1;
-+		ERR_load_FIPS_strings;
-+		FIPS_corrupt_aes;
-+		FIPS_selftest_sha1;
-+		FIPS_selftest_rsa;
-+		FIPS_corrupt_sha1;
-+		EVP_des_cfb1;
-+		FIPS_dsa_check;
-+		AES_cfb1_encrypt;
-+		EVP_des_ede3_cfb1;
-+		FIPS_rand_check;
-+		FIPS_md5_allowed;
-+		FIPS_mode;
-+		FIPS_selftest_failed;
-+		sk_is_sorted;
-+		X509_check_ca;
-+		HMAC_CTX_set_flags;
-+		d2i_PROXY_CERT_INFO_EXTENSION;
-+		PROXY_POLICY_it;
-+		i2d_PROXY_POLICY;
-+		i2d_PROXY_CERT_INFO_EXTENSION;
-+		d2i_PROXY_POLICY;
-+		PROXY_CERT_INFO_EXTENSION_new;
-+		PROXY_CERT_INFO_EXTENSION_free;
-+		PROXY_CERT_INFO_EXTENSION_it;
-+		PROXY_POLICY_free;
-+		PROXY_POLICY_new;
-+		BN_MONT_CTX_set_locked;
-+		FIPS_selftest_rng;
-+		EVP_sha384;
-+		EVP_sha512;
-+		EVP_sha224;
-+		EVP_sha256;
-+		FIPS_selftest_hmac;
-+		FIPS_corrupt_rng;
-+		BN_mod_exp_mont_consttime;
-+		RSA_X931_hash_id;
-+		RSA_padding_check_X931;
-+		RSA_verify_PKCS1_PSS;
-+		RSA_padding_add_X931;
-+		RSA_padding_add_PKCS1_PSS;
-+		PKCS1_MGF1;
-+		BN_X931_generate_Xpq;
-+		RSA_X931_generate_key;
-+		BN_X931_derive_prime;
-+		BN_X931_generate_prime;
-+		RSA_X931_derive;
-+		BIO_new_dgram;
-+		BN_get0_nist_prime_384;
-+		ERR_set_mark;
-+		X509_STORE_CTX_set0_crls;
-+		ENGINE_set_STORE;
-+		ENGINE_register_ECDSA;
-+		STORE_meth_set_list_start_fn;
-+		STORE_method_set_list_start_function;
-+		BN_BLINDING_invert_ex;
-+		NAME_CONSTRAINTS_free;
-+		STORE_ATTR_INFO_set_number;
-+		BN_BLINDING_get_thread_id;
-+		X509_STORE_CTX_set0_param;
-+		POLICY_MAPPING_it;
-+		STORE_parse_attrs_start;
-+		POLICY_CONSTRAINTS_free;
-+		EVP_PKEY_add1_attr_by_NID;
-+		BN_nist_mod_192;
-+		EC_GROUP_get_trinomial_basis;
-+		STORE_set_method;
-+		GENERAL_SUBTREE_free;
-+		NAME_CONSTRAINTS_it;
-+		ECDH_get_default_method;
-+		PKCS12_add_safe;
-+		EC_KEY_new_by_curve_name;
-+		STORE_meth_get_update_store_fn;
-+		STORE_method_get_update_store_function;
-+		ENGINE_register_ECDH;
-+		SHA512_Update;
-+		i2d_ECPrivateKey;
-+		BN_get0_nist_prime_192;
-+		STORE_modify_certificate;
-+		EC_POINT_set_affine_coordinates_GF2m;
-+		EC_POINT_set_affine_coords_GF2m;
-+		BN_GF2m_mod_exp_arr;
-+		STORE_ATTR_INFO_modify_number;
-+		X509_keyid_get0;
-+		ENGINE_load_gmp;
-+		pitem_new;
-+		BN_GF2m_mod_mul_arr;
-+		STORE_list_public_key_endp;
-+		o2i_ECPublicKey;
-+		EC_KEY_copy;
-+		BIO_dump_fp;
-+		X509_policy_node_get0_parent;
-+		EC_GROUP_check_discriminant;
-+		i2o_ECPublicKey;
-+		EC_KEY_precompute_mult;
-+		a2i_IPADDRESS;
-+		STORE_meth_set_initialise_fn;
-+		STORE_method_set_initialise_function;
-+		X509_STORE_CTX_set_depth;
-+		X509_VERIFY_PARAM_inherit;
-+		EC_POINT_point2bn;
-+		STORE_ATTR_INFO_set_dn;
-+		X509_policy_tree_get0_policies;
-+		EC_GROUP_new_curve_GF2m;
-+		STORE_destroy_method;
-+		ENGINE_unregister_STORE;
-+		EVP_PKEY_get1_EC_KEY;
-+		STORE_ATTR_INFO_get0_number;
-+		ENGINE_get_default_ECDH;
-+		EC_KEY_get_conv_form;
-+		ASN1_OCTET_STRING_NDEF_it;
-+		STORE_delete_public_key;
-+		STORE_get_public_key;
-+		STORE_modify_arbitrary;
-+		ENGINE_get_static_state;
-+		pqueue_iterator;
-+		ECDSA_SIG_new;
-+		OPENSSL_DIR_end;
-+		BN_GF2m_mod_sqr;
-+		EC_POINT_bn2point;
-+		X509_VERIFY_PARAM_set_depth;
-+		EC_KEY_set_asn1_flag;
-+		STORE_get_method;
-+		EC_KEY_get_key_method_data;
-+		ECDSA_sign_ex;
-+		STORE_parse_attrs_end;
-+		EC_GROUP_get_point_conversion_form;
-+		EC_GROUP_get_point_conv_form;
-+		STORE_method_set_store_function;
-+		STORE_ATTR_INFO_in;
-+		PEM_read_bio_ECPKParameters;
-+		EC_GROUP_get_pentanomial_basis;
-+		EVP_PKEY_add1_attr_by_txt;
-+		BN_BLINDING_set_flags;
-+		X509_VERIFY_PARAM_set1_policies;
-+		X509_VERIFY_PARAM_set1_name;
-+		X509_VERIFY_PARAM_set_purpose;
-+		STORE_get_number;
-+		ECDSA_sign_setup;
-+		BN_GF2m_mod_solve_quad_arr;
-+		EC_KEY_up_ref;
-+		POLICY_MAPPING_free;
-+		BN_GF2m_mod_div;
-+		X509_VERIFY_PARAM_set_flags;
-+		EC_KEY_free;
-+		STORE_meth_set_list_next_fn;
-+		STORE_method_set_list_next_function;
-+		PEM_write_bio_ECPrivateKey;
-+		d2i_EC_PUBKEY;
-+		STORE_meth_get_generate_fn;
-+		STORE_method_get_generate_function;
-+		STORE_meth_set_list_end_fn;
-+		STORE_method_set_list_end_function;
-+		pqueue_print;
-+		EC_GROUP_have_precompute_mult;
-+		EC_KEY_print_fp;
-+		BN_GF2m_mod_arr;
-+		PEM_write_bio_X509_CERT_PAIR;
-+		EVP_PKEY_cmp;
-+		X509_policy_level_node_count;
-+		STORE_new_engine;
-+		STORE_list_public_key_start;
-+		X509_VERIFY_PARAM_new;
-+		ECDH_get_ex_data;
-+		EVP_PKEY_get_attr;
-+		ECDSA_do_sign;
-+		ENGINE_unregister_ECDH;
-+		ECDH_OpenSSL;
-+		EC_KEY_set_conv_form;
-+		EC_POINT_dup;
-+		GENERAL_SUBTREE_new;
-+		STORE_list_crl_endp;
-+		EC_get_builtin_curves;
-+		X509_policy_node_get0_qualifiers;
-+		X509_pcy_node_get0_qualifiers;
-+		STORE_list_crl_end;
-+		EVP_PKEY_set1_EC_KEY;
-+		BN_GF2m_mod_sqrt_arr;
-+		i2d_ECPrivateKey_bio;
-+		ECPKParameters_print_fp;
-+		pqueue_find;
-+		ECDSA_SIG_free;
-+		PEM_write_bio_ECPKParameters;
-+		STORE_method_set_ctrl_function;
-+		STORE_list_public_key_end;
-+		EC_KEY_set_private_key;
-+		pqueue_peek;
-+		STORE_get_arbitrary;
-+		STORE_store_crl;
-+		X509_policy_node_get0_policy;
-+		PKCS12_add_safes;
-+		BN_BLINDING_convert_ex;
-+		X509_policy_tree_free;
-+		OPENSSL_ia32cap_loc;
-+		BN_GF2m_poly2arr;
-+		STORE_ctrl;
-+		STORE_ATTR_INFO_compare;
-+		BN_get0_nist_prime_224;
-+		i2d_ECParameters;
-+		i2d_ECPKParameters;
-+		BN_GENCB_call;
-+		d2i_ECPKParameters;
-+		STORE_meth_set_generate_fn;
-+		STORE_method_set_generate_function;
-+		ENGINE_set_ECDH;
-+		NAME_CONSTRAINTS_new;
-+		SHA256_Init;
-+		EC_KEY_get0_public_key;
-+		PEM_write_bio_EC_PUBKEY;
-+		STORE_ATTR_INFO_set_cstr;
-+		STORE_list_crl_next;
-+		STORE_ATTR_INFO_in_range;
-+		ECParameters_print;
-+		STORE_meth_set_delete_fn;
-+		STORE_method_set_delete_function;
-+		STORE_list_certificate_next;
-+		ASN1_generate_nconf;
-+		BUF_memdup;
-+		BN_GF2m_mod_mul;
-+		STORE_meth_get_list_next_fn;
-+		STORE_method_get_list_next_function;
-+		STORE_ATTR_INFO_get0_dn;
-+		STORE_list_private_key_next;
-+		EC_GROUP_set_seed;
-+		X509_VERIFY_PARAM_set_trust;
-+		STORE_ATTR_INFO_free;
-+		STORE_get_private_key;
-+		EVP_PKEY_get_attr_count;
-+		STORE_ATTR_INFO_new;
-+		EC_GROUP_get_curve_GF2m;
-+		STORE_meth_set_revoke_fn;
-+		STORE_method_set_revoke_function;
-+		STORE_store_number;
-+		BN_is_prime_ex;
-+		STORE_revoke_public_key;
-+		X509_STORE_CTX_get0_param;
-+		STORE_delete_arbitrary;
-+		PEM_read_X509_CERT_PAIR;
-+		X509_STORE_set_depth;
-+		ECDSA_get_ex_data;
-+		SHA224;
-+		BIO_dump_indent_fp;
-+		EC_KEY_set_group;
-+		BUF_strndup;
-+		STORE_list_certificate_start;
-+		BN_GF2m_mod;
-+		X509_REQ_check_private_key;
-+		EC_GROUP_get_seed_len;
-+		ERR_load_STORE_strings;
-+		PEM_read_bio_EC_PUBKEY;
-+		STORE_list_private_key_end;
-+		i2d_EC_PUBKEY;
-+		ECDSA_get_default_method;
-+		ASN1_put_eoc;
-+		X509_STORE_CTX_get_explicit_policy;
-+		X509_STORE_CTX_get_expl_policy;
-+		X509_VERIFY_PARAM_table_cleanup;
-+		STORE_modify_private_key;
-+		X509_VERIFY_PARAM_free;
-+		EC_METHOD_get_field_type;
-+		EC_GFp_nist_method;
-+		STORE_meth_set_modify_fn;
-+		STORE_method_set_modify_function;
-+		STORE_parse_attrs_next;
-+		ENGINE_load_padlock;
-+		EC_GROUP_set_curve_name;
-+		X509_CERT_PAIR_it;
-+		STORE_meth_get_revoke_fn;
-+		STORE_method_get_revoke_function;
-+		STORE_method_set_get_function;
-+		STORE_modify_number;
-+		STORE_method_get_store_function;
-+		STORE_store_private_key;
-+		BN_GF2m_mod_sqr_arr;
-+		RSA_setup_blinding;
-+		BIO_s_datagram;
-+		STORE_Memory;
-+		sk_find_ex;
-+		EC_GROUP_set_curve_GF2m;
-+		ENGINE_set_default_ECDSA;
-+		POLICY_CONSTRAINTS_new;
-+		BN_GF2m_mod_sqrt;
-+		ECDH_set_default_method;
-+		EC_KEY_generate_key;
-+		SHA384_Update;
-+		BN_GF2m_arr2poly;
-+		STORE_method_get_get_function;
-+		STORE_meth_set_cleanup_fn;
-+		STORE_method_set_cleanup_function;
-+		EC_GROUP_check;
-+		d2i_ECPrivateKey_bio;
-+		EC_KEY_insert_key_method_data;
-+		STORE_meth_get_lock_store_fn;
-+		STORE_method_get_lock_store_function;
-+		X509_VERIFY_PARAM_get_depth;
-+		SHA224_Final;
-+		STORE_meth_set_update_store_fn;
-+		STORE_method_set_update_store_function;
-+		SHA224_Update;
-+		d2i_ECPrivateKey;
-+		ASN1_item_ndef_i2d;
-+		STORE_delete_private_key;
-+		ERR_pop_to_mark;
-+		ENGINE_register_all_STORE;
-+		X509_policy_level_get0_node;
-+		i2d_PKCS7_NDEF;
-+		EC_GROUP_get_degree;
-+		ASN1_generate_v3;
-+		STORE_ATTR_INFO_modify_cstr;
-+		X509_policy_tree_level_count;
-+		BN_GF2m_add;
-+		EC_KEY_get0_group;
-+		STORE_generate_crl;
-+		STORE_store_public_key;
-+		X509_CERT_PAIR_free;
-+		STORE_revoke_private_key;
-+		BN_nist_mod_224;
-+		SHA512_Final;
-+		STORE_ATTR_INFO_modify_dn;
-+		STORE_meth_get_initialise_fn;
-+		STORE_method_get_initialise_function;
-+		STORE_delete_number;
-+		i2d_EC_PUBKEY_bio;
-+		BIO_dgram_non_fatal_error;
-+		EC_GROUP_get_asn1_flag;
-+		STORE_ATTR_INFO_in_ex;
-+		STORE_list_crl_start;
-+		ECDH_get_ex_new_index;
-+		STORE_meth_get_modify_fn;
-+		STORE_method_get_modify_function;
-+		v2i_ASN1_BIT_STRING;
-+		STORE_store_certificate;
-+		OBJ_bsearch_ex;
-+		X509_STORE_CTX_set_default;
-+		STORE_ATTR_INFO_set_sha1str;
-+		BN_GF2m_mod_inv;
-+		BN_GF2m_mod_exp;
-+		STORE_modify_public_key;
-+		STORE_meth_get_list_start_fn;
-+		STORE_method_get_list_start_function;
-+		EC_GROUP_get0_seed;
-+		STORE_store_arbitrary;
-+		STORE_meth_set_unlock_store_fn;
-+		STORE_method_set_unlock_store_function;
-+		BN_GF2m_mod_div_arr;
-+		ENGINE_set_ECDSA;
-+		STORE_create_method;
-+		ECPKParameters_print;
-+		EC_KEY_get0_private_key;
-+		PEM_write_EC_PUBKEY;
-+		X509_VERIFY_PARAM_set1;
-+		ECDH_set_method;
-+		v2i_GENERAL_NAME_ex;
-+		ECDH_set_ex_data;
-+		STORE_generate_key;
-+		BN_nist_mod_521;
-+		X509_policy_tree_get0_level;
-+		EC_GROUP_set_point_conversion_form;
-+		EC_GROUP_set_point_conv_form;
-+		PEM_read_EC_PUBKEY;
-+		i2d_ECDSA_SIG;
-+		ECDSA_OpenSSL;
-+		STORE_delete_crl;
-+		EC_KEY_get_enc_flags;
-+		ASN1_const_check_infinite_end;
-+		EVP_PKEY_delete_attr;
-+		ECDSA_set_default_method;
-+		EC_POINT_set_compressed_coordinates_GF2m;
-+		EC_POINT_set_compr_coords_GF2m;
-+		EC_GROUP_cmp;
-+		STORE_revoke_certificate;
-+		BN_get0_nist_prime_256;
-+		STORE_meth_get_delete_fn;
-+		STORE_method_get_delete_function;
-+		SHA224_Init;
-+		PEM_read_ECPrivateKey;
-+		SHA512_Init;
-+		STORE_parse_attrs_endp;
-+		BN_set_negative;
-+		ERR_load_ECDSA_strings;
-+		EC_GROUP_get_basis_type;
-+		STORE_list_public_key_next;
-+		i2v_ASN1_BIT_STRING;
-+		STORE_OBJECT_free;
-+		BN_nist_mod_384;
-+		i2d_X509_CERT_PAIR;
-+		PEM_write_ECPKParameters;
-+		ECDH_compute_key;
-+		STORE_ATTR_INFO_get0_sha1str;
-+		ENGINE_register_all_ECDH;
-+		pqueue_pop;
-+		STORE_ATTR_INFO_get0_cstr;
-+		POLICY_CONSTRAINTS_it;
-+		STORE_get_ex_new_index;
-+		EVP_PKEY_get_attr_by_OBJ;
-+		X509_VERIFY_PARAM_add0_policy;
-+		BN_GF2m_mod_solve_quad;
-+		SHA256;
-+		i2d_ECPrivateKey_fp;
-+		X509_policy_tree_get0_user_policies;
-+		X509_pcy_tree_get0_usr_policies;
-+		OPENSSL_DIR_read;
-+		ENGINE_register_all_ECDSA;
-+		X509_VERIFY_PARAM_lookup;
-+		EC_POINT_get_affine_coordinates_GF2m;
-+		EC_POINT_get_affine_coords_GF2m;
-+		EC_GROUP_dup;
-+		ENGINE_get_default_ECDSA;
-+		EC_KEY_new;
-+		SHA256_Transform;
-+		EC_KEY_set_enc_flags;
-+		ECDSA_verify;
-+		EC_POINT_point2hex;
-+		ENGINE_get_STORE;
-+		SHA512;
-+		STORE_get_certificate;
-+		ECDSA_do_sign_ex;
-+		ECDSA_do_verify;
-+		d2i_ECPrivateKey_fp;
-+		STORE_delete_certificate;
-+		SHA512_Transform;
-+		X509_STORE_set1_param;
-+		STORE_method_get_ctrl_function;
-+		STORE_free;
-+		PEM_write_ECPrivateKey;
-+		STORE_meth_get_unlock_store_fn;
-+		STORE_method_get_unlock_store_function;
-+		STORE_get_ex_data;
-+		EC_KEY_set_public_key;
-+		PEM_read_ECPKParameters;
-+		X509_CERT_PAIR_new;
-+		ENGINE_register_STORE;
-+		RSA_generate_key_ex;
-+		DSA_generate_parameters_ex;
-+		ECParameters_print_fp;
-+		X509V3_NAME_from_section;
-+		EVP_PKEY_add1_attr;
-+		STORE_modify_crl;
-+		STORE_list_private_key_start;
-+		POLICY_MAPPINGS_it;
-+		GENERAL_SUBTREE_it;
-+		EC_GROUP_get_curve_name;
-+		PEM_write_X509_CERT_PAIR;
-+		BIO_dump_indent_cb;
-+		d2i_X509_CERT_PAIR;
-+		STORE_list_private_key_endp;
-+		asn1_const_Finish;
-+		i2d_EC_PUBKEY_fp;
-+		BN_nist_mod_256;
-+		X509_VERIFY_PARAM_add0_table;
-+		pqueue_free;
-+		BN_BLINDING_create_param;
-+		ECDSA_size;
-+		d2i_EC_PUBKEY_bio;
-+		BN_get0_nist_prime_521;
-+		STORE_ATTR_INFO_modify_sha1str;
-+		BN_generate_prime_ex;
-+		EC_GROUP_new_by_curve_name;
-+		SHA256_Final;
-+		DH_generate_parameters_ex;
-+		PEM_read_bio_ECPrivateKey;
-+		STORE_meth_get_cleanup_fn;
-+		STORE_method_get_cleanup_function;
-+		ENGINE_get_ECDH;
-+		d2i_ECDSA_SIG;
-+		BN_is_prime_fasttest_ex;
-+		ECDSA_sign;
-+		X509_policy_check;
-+		EVP_PKEY_get_attr_by_NID;
-+		STORE_set_ex_data;
-+		ENGINE_get_ECDSA;
-+		EVP_ecdsa;
-+		BN_BLINDING_get_flags;
-+		PKCS12_add_cert;
-+		STORE_OBJECT_new;
-+		ERR_load_ECDH_strings;
-+		EC_KEY_dup;
-+		EVP_CIPHER_CTX_rand_key;
-+		ECDSA_set_method;
-+		a2i_IPADDRESS_NC;
-+		d2i_ECParameters;
-+		STORE_list_certificate_end;
-+		STORE_get_crl;
-+		X509_POLICY_NODE_print;
-+		SHA384_Init;
-+		EC_GF2m_simple_method;
-+		ECDSA_set_ex_data;
-+		SHA384_Final;
-+		PKCS7_set_digest;
-+		EC_KEY_print;
-+		STORE_meth_set_lock_store_fn;
-+		STORE_method_set_lock_store_function;
-+		ECDSA_get_ex_new_index;
-+		SHA384;
-+		POLICY_MAPPING_new;
-+		STORE_list_certificate_endp;
-+		X509_STORE_CTX_get0_policy_tree;
-+		EC_GROUP_set_asn1_flag;
-+		EC_KEY_check_key;
-+		d2i_EC_PUBKEY_fp;
-+		PKCS7_set0_type_other;
-+		PEM_read_bio_X509_CERT_PAIR;
-+		pqueue_next;
-+		STORE_meth_get_list_end_fn;
-+		STORE_method_get_list_end_function;
-+		EVP_PKEY_add1_attr_by_OBJ;
-+		X509_VERIFY_PARAM_set_time;
-+		pqueue_new;
-+		ENGINE_set_default_ECDH;
-+		STORE_new_method;
-+		PKCS12_add_key;
-+		DSO_merge;
-+		EC_POINT_hex2point;
-+		BIO_dump_cb;
-+		SHA256_Update;
-+		pqueue_insert;
-+		pitem_free;
-+		BN_GF2m_mod_inv_arr;
-+		ENGINE_unregister_ECDSA;
-+		BN_BLINDING_set_thread_id;
-+		get_rfc3526_prime_8192;
-+		X509_VERIFY_PARAM_clear_flags;
-+		get_rfc2409_prime_1024;
-+		DH_check_pub_key;
-+		get_rfc3526_prime_2048;
-+		get_rfc3526_prime_6144;
-+		get_rfc3526_prime_1536;
-+		get_rfc3526_prime_3072;
-+		get_rfc3526_prime_4096;
-+		get_rfc2409_prime_768;
-+		X509_VERIFY_PARAM_get_flags;
-+		EVP_CIPHER_CTX_new;
-+		EVP_CIPHER_CTX_free;
-+		Camellia_cbc_encrypt;
-+		Camellia_cfb128_encrypt;
-+		Camellia_cfb1_encrypt;
-+		Camellia_cfb8_encrypt;
-+		Camellia_ctr128_encrypt;
-+		Camellia_cfbr_encrypt_block;
-+		Camellia_decrypt;
-+		Camellia_ecb_encrypt;
-+		Camellia_encrypt;
-+		Camellia_ofb128_encrypt;
-+		Camellia_set_key;
-+		EVP_camellia_128_cbc;
-+		EVP_camellia_128_cfb128;
-+		EVP_camellia_128_cfb1;
-+		EVP_camellia_128_cfb8;
-+		EVP_camellia_128_ecb;
-+		EVP_camellia_128_ofb;
-+		EVP_camellia_192_cbc;
-+		EVP_camellia_192_cfb128;
-+		EVP_camellia_192_cfb1;
-+		EVP_camellia_192_cfb8;
-+		EVP_camellia_192_ecb;
-+		EVP_camellia_192_ofb;
-+		EVP_camellia_256_cbc;
-+		EVP_camellia_256_cfb128;
-+		EVP_camellia_256_cfb1;
-+		EVP_camellia_256_cfb8;
-+		EVP_camellia_256_ecb;
-+		EVP_camellia_256_ofb;
-+		a2i_ipadd;
-+		ASIdentifiers_free;
-+		i2d_ASIdOrRange;
-+		EVP_CIPHER_block_size;
-+		v3_asid_is_canonical;
-+		IPAddressChoice_free;
-+		EVP_CIPHER_CTX_set_app_data;
-+		BIO_set_callback_arg;
-+		v3_addr_add_prefix;
-+		IPAddressOrRange_it;
-+		BIO_set_flags;
-+		ASIdentifiers_it;
-+		v3_addr_get_range;
-+		BIO_method_type;
-+		v3_addr_inherits;
-+		IPAddressChoice_it;
-+		AES_ige_encrypt;
-+		v3_addr_add_range;
-+		EVP_CIPHER_CTX_nid;
-+		d2i_ASRange;
-+		v3_addr_add_inherit;
-+		v3_asid_add_id_or_range;
-+		v3_addr_validate_resource_set;
-+		EVP_CIPHER_iv_length;
-+		EVP_MD_type;
-+		v3_asid_canonize;
-+		IPAddressRange_free;
-+		v3_asid_add_inherit;
-+		EVP_CIPHER_CTX_key_length;
-+		IPAddressRange_new;
-+		ASIdOrRange_new;
-+		EVP_MD_size;
-+		EVP_MD_CTX_test_flags;
-+		BIO_clear_flags;
-+		i2d_ASRange;
-+		IPAddressRange_it;
-+		IPAddressChoice_new;
-+		ASIdentifierChoice_new;
-+		ASRange_free;
-+		EVP_MD_pkey_type;
-+		EVP_MD_CTX_clear_flags;
-+		IPAddressFamily_free;
-+		i2d_IPAddressFamily;
-+		IPAddressOrRange_new;
-+		EVP_CIPHER_flags;
-+		v3_asid_validate_resource_set;
-+		d2i_IPAddressRange;
-+		AES_bi_ige_encrypt;
-+		BIO_get_callback;
-+		IPAddressOrRange_free;
-+		v3_addr_subset;
-+		d2i_IPAddressFamily;
-+		v3_asid_subset;
-+		BIO_test_flags;
-+		i2d_ASIdentifierChoice;
-+		ASRange_it;
-+		d2i_ASIdentifiers;
-+		ASRange_new;
-+		d2i_IPAddressChoice;
-+		v3_addr_get_afi;
-+		EVP_CIPHER_key_length;
-+		EVP_Cipher;
-+		i2d_IPAddressOrRange;
-+		ASIdOrRange_it;
-+		EVP_CIPHER_nid;
-+		i2d_IPAddressChoice;
-+		EVP_CIPHER_CTX_block_size;
-+		ASIdentifiers_new;
-+		v3_addr_validate_path;
-+		IPAddressFamily_new;
-+		EVP_MD_CTX_set_flags;
-+		v3_addr_is_canonical;
-+		i2d_IPAddressRange;
-+		IPAddressFamily_it;
-+		v3_asid_inherits;
-+		EVP_CIPHER_CTX_cipher;
-+		EVP_CIPHER_CTX_get_app_data;
-+		EVP_MD_block_size;
-+		EVP_CIPHER_CTX_flags;
-+		v3_asid_validate_path;
-+		d2i_IPAddressOrRange;
-+		v3_addr_canonize;
-+		ASIdentifierChoice_it;
-+		EVP_MD_CTX_md;
-+		d2i_ASIdentifierChoice;
-+		BIO_method_name;
-+		EVP_CIPHER_CTX_iv_length;
-+		ASIdOrRange_free;
-+		ASIdentifierChoice_free;
-+		BIO_get_callback_arg;
-+		BIO_set_callback;
-+		d2i_ASIdOrRange;
-+		i2d_ASIdentifiers;
-+		SEED_decrypt;
-+		SEED_encrypt;
-+		SEED_cbc_encrypt;
-+		EVP_seed_ofb;
-+		SEED_cfb128_encrypt;
-+		SEED_ofb128_encrypt;
-+		EVP_seed_cbc;
-+		SEED_ecb_encrypt;
-+		EVP_seed_ecb;
-+		SEED_set_key;
-+		EVP_seed_cfb128;
-+		X509_EXTENSIONS_it;
-+		X509_get1_ocsp;
-+		OCSP_REQ_CTX_free;
-+		i2d_X509_EXTENSIONS;
-+		OCSP_sendreq_nbio;
-+		OCSP_sendreq_new;
-+		d2i_X509_EXTENSIONS;
-+		X509_ALGORS_it;
-+		X509_ALGOR_get0;
-+		X509_ALGOR_set0;
-+		AES_unwrap_key;
-+		AES_wrap_key;
-+		X509at_get0_data_by_OBJ;
-+		ASN1_TYPE_set1;
-+		ASN1_STRING_set0;
-+		i2d_X509_ALGORS;
-+		BIO_f_zlib;
-+		COMP_zlib_cleanup;
-+		d2i_X509_ALGORS;
-+		CMS_ReceiptRequest_free;
-+		PEM_write_CMS;
-+		CMS_add0_CertificateChoices;
-+		CMS_unsigned_add1_attr_by_OBJ;
-+		ERR_load_CMS_strings;
-+		CMS_sign_receipt;
-+		i2d_CMS_ContentInfo;
-+		CMS_signed_delete_attr;
-+		d2i_CMS_bio;
-+		CMS_unsigned_get_attr_by_NID;
-+		CMS_verify;
-+		SMIME_read_CMS;
-+		CMS_decrypt_set1_key;
-+		CMS_SignerInfo_get0_algs;
-+		CMS_add1_cert;
-+		CMS_set_detached;
-+		CMS_encrypt;
-+		CMS_EnvelopedData_create;
-+		CMS_uncompress;
-+		CMS_add0_crl;
-+		CMS_SignerInfo_verify_content;
-+		CMS_unsigned_get0_data_by_OBJ;
-+		PEM_write_bio_CMS;
-+		CMS_unsigned_get_attr;
-+		CMS_RecipientInfo_ktri_cert_cmp;
-+		CMS_RecipientInfo_ktri_get0_algs;
-+		CMS_RecipInfo_ktri_get0_algs;
-+		CMS_ContentInfo_free;
-+		CMS_final;
-+		CMS_add_simple_smimecap;
-+		CMS_SignerInfo_verify;
-+		CMS_data;
-+		CMS_ContentInfo_it;
-+		d2i_CMS_ReceiptRequest;
-+		CMS_compress;
-+		CMS_digest_create;
-+		CMS_SignerInfo_cert_cmp;
-+		CMS_SignerInfo_sign;
-+		CMS_data_create;
-+		i2d_CMS_bio;
-+		CMS_EncryptedData_set1_key;
-+		CMS_decrypt;
-+		int_smime_write_ASN1;
-+		CMS_unsigned_delete_attr;
-+		CMS_unsigned_get_attr_count;
-+		CMS_add_smimecap;
-+		PEM_read_CMS;
-+		CMS_signed_get_attr_by_OBJ;
-+		d2i_CMS_ContentInfo;
-+		CMS_add_standard_smimecap;
-+		CMS_ContentInfo_new;
-+		CMS_RecipientInfo_type;
-+		CMS_get0_type;
-+		CMS_is_detached;
-+		CMS_sign;
-+		CMS_signed_add1_attr;
-+		CMS_unsigned_get_attr_by_OBJ;
-+		SMIME_write_CMS;
-+		CMS_EncryptedData_decrypt;
-+		CMS_get0_RecipientInfos;
-+		CMS_add0_RevocationInfoChoice;
-+		CMS_decrypt_set1_pkey;
-+		CMS_SignerInfo_set1_signer_cert;
-+		CMS_get0_signers;
-+		CMS_ReceiptRequest_get0_values;
-+		CMS_signed_get0_data_by_OBJ;
-+		CMS_get0_SignerInfos;
-+		CMS_add0_cert;
-+		CMS_EncryptedData_encrypt;
-+		CMS_digest_verify;
-+		CMS_set1_signers_certs;
-+		CMS_signed_get_attr;
-+		CMS_RecipientInfo_set0_key;
-+		CMS_SignedData_init;
-+		CMS_RecipientInfo_kekri_get0_id;
-+		CMS_verify_receipt;
-+		CMS_ReceiptRequest_it;
-+		PEM_read_bio_CMS;
-+		CMS_get1_crls;
-+		CMS_add0_recipient_key;
-+		SMIME_read_ASN1;
-+		CMS_ReceiptRequest_new;
-+		CMS_get0_content;
-+		CMS_get1_ReceiptRequest;
-+		CMS_signed_add1_attr_by_OBJ;
-+		CMS_RecipientInfo_kekri_id_cmp;
-+		CMS_add1_ReceiptRequest;
-+		CMS_SignerInfo_get0_signer_id;
-+		CMS_unsigned_add1_attr_by_NID;
-+		CMS_unsigned_add1_attr;
-+		CMS_signed_get_attr_by_NID;
-+		CMS_get1_certs;
-+		CMS_signed_add1_attr_by_NID;
-+		CMS_unsigned_add1_attr_by_txt;
-+		CMS_dataFinal;
-+		CMS_RecipientInfo_ktri_get0_signer_id;
-+		CMS_RecipInfo_ktri_get0_sigr_id;
-+		i2d_CMS_ReceiptRequest;
-+		CMS_add1_recipient_cert;
-+		CMS_dataInit;
-+		CMS_signed_add1_attr_by_txt;
-+		CMS_RecipientInfo_decrypt;
-+		CMS_signed_get_attr_count;
-+		CMS_get0_eContentType;
-+		CMS_set1_eContentType;
-+		CMS_ReceiptRequest_create0;
-+		CMS_add1_signer;
-+		CMS_RecipientInfo_set0_pkey;
-+		ENGINE_set_load_ssl_client_cert_function;
-+		ENGINE_set_ld_ssl_clnt_cert_fn;
-+		ENGINE_get_ssl_client_cert_function;
-+		ENGINE_get_ssl_client_cert_fn;
-+		ENGINE_load_ssl_client_cert;
-+		ENGINE_load_capi;
-+		OPENSSL_isservice;
-+		FIPS_dsa_sig_decode;
-+		EVP_CIPHER_CTX_clear_flags;
-+		FIPS_rand_status;
-+		FIPS_rand_set_key;
-+		CRYPTO_set_mem_info_functions;
-+		RSA_X931_generate_key_ex;
-+		int_ERR_set_state_func;
-+		int_EVP_MD_set_engine_callbacks;
-+		int_CRYPTO_set_do_dynlock_callback;
-+		FIPS_rng_stick;
-+		EVP_CIPHER_CTX_set_flags;
-+		BN_X931_generate_prime_ex;
-+		FIPS_selftest_check;
-+		FIPS_rand_set_dt;
-+		CRYPTO_dbg_pop_info;
-+		FIPS_dsa_free;
-+		RSA_X931_derive_ex;
-+		FIPS_rsa_new;
-+		FIPS_rand_bytes;
-+		fips_cipher_test;
-+		EVP_CIPHER_CTX_test_flags;
-+		CRYPTO_malloc_debug_init;
-+		CRYPTO_dbg_push_info;
-+		FIPS_corrupt_rsa_keygen;
-+		FIPS_dh_new;
-+		FIPS_corrupt_dsa_keygen;
-+		FIPS_dh_free;
-+		fips_pkey_signature_test;
-+		EVP_add_alg_module;
-+		int_RAND_init_engine_callbacks;
-+		int_EVP_CIPHER_set_engine_callbacks;
-+		int_EVP_MD_init_engine_callbacks;
-+		FIPS_rand_test_mode;
-+		FIPS_rand_reset;
-+		FIPS_dsa_new;
-+		int_RAND_set_callbacks;
-+		BN_X931_derive_prime_ex;
-+		int_ERR_lib_init;
-+		int_EVP_CIPHER_init_engine_callbacks;
-+		FIPS_rsa_free;
-+		FIPS_dsa_sig_encode;
-+		CRYPTO_dbg_remove_all_info;
-+		OPENSSL_init;
-+		CRYPTO_strdup;
-+		JPAKE_STEP3A_process;
-+		JPAKE_STEP1_release;
-+		JPAKE_get_shared_key;
-+		JPAKE_STEP3B_init;
-+		JPAKE_STEP1_generate;
-+		JPAKE_STEP1_init;
-+		JPAKE_STEP3B_process;
-+		JPAKE_STEP2_generate;
-+		JPAKE_CTX_new;
-+		JPAKE_CTX_free;
-+		JPAKE_STEP3B_release;
-+		JPAKE_STEP3A_release;
-+		JPAKE_STEP2_process;
-+		JPAKE_STEP3B_generate;
-+		JPAKE_STEP1_process;
-+		JPAKE_STEP3A_generate;
-+		JPAKE_STEP2_release;
-+		JPAKE_STEP3A_init;
-+		ERR_load_JPAKE_strings;
-+		JPAKE_STEP2_init;
-+		pqueue_size;
-+		i2d_TS_ACCURACY;
-+		i2d_TS_MSG_IMPRINT_fp;
-+		i2d_TS_MSG_IMPRINT;
-+		EVP_PKEY_print_public;
-+		EVP_PKEY_CTX_new;
-+		i2d_TS_TST_INFO;
-+		EVP_PKEY_asn1_find;
-+		DSO_METHOD_beos;
-+		TS_CONF_load_cert;
-+		TS_REQ_get_ext;
-+		EVP_PKEY_sign_init;
-+		ASN1_item_print;
-+		TS_TST_INFO_set_nonce;
-+		TS_RESP_dup;
-+		ENGINE_register_pkey_meths;
-+		EVP_PKEY_asn1_add0;
-+		PKCS7_add0_attrib_signing_time;
-+		i2d_TS_TST_INFO_fp;
-+		BIO_asn1_get_prefix;
-+		TS_TST_INFO_set_time;
-+		EVP_PKEY_meth_set_decrypt;
-+		EVP_PKEY_set_type_str;
-+		EVP_PKEY_CTX_get_keygen_info;
-+		TS_REQ_set_policy_id;
-+		d2i_TS_RESP_fp;
-+		ENGINE_get_pkey_asn1_meth_engine;
-+		ENGINE_get_pkey_asn1_meth_eng;
-+		WHIRLPOOL_Init;
-+		TS_RESP_set_status_info;
-+		EVP_PKEY_keygen;
-+		EVP_DigestSignInit;
-+		TS_ACCURACY_set_millis;
-+		TS_REQ_dup;
-+		GENERAL_NAME_dup;
-+		ASN1_SEQUENCE_ANY_it;
-+		WHIRLPOOL;
-+		X509_STORE_get1_crls;
-+		ENGINE_get_pkey_asn1_meth;
-+		EVP_PKEY_asn1_new;
-+		BIO_new_NDEF;
-+		ENGINE_get_pkey_meth;
-+		TS_MSG_IMPRINT_set_algo;
-+		i2d_TS_TST_INFO_bio;
-+		TS_TST_INFO_set_ordering;
-+		TS_TST_INFO_get_ext_by_OBJ;
-+		CRYPTO_THREADID_set_pointer;
-+		TS_CONF_get_tsa_section;
-+		SMIME_write_ASN1;
-+		TS_RESP_CTX_set_signer_key;
-+		EVP_PKEY_encrypt_old;
-+		EVP_PKEY_encrypt_init;
-+		CRYPTO_THREADID_cpy;
-+		ASN1_PCTX_get_cert_flags;
-+		i2d_ESS_SIGNING_CERT;
-+		TS_CONF_load_key;
-+		i2d_ASN1_SEQUENCE_ANY;
-+		d2i_TS_MSG_IMPRINT_bio;
-+		EVP_PKEY_asn1_set_public;
-+		b2i_PublicKey_bio;
-+		BIO_asn1_set_prefix;
-+		EVP_PKEY_new_mac_key;
-+		BIO_new_CMS;
-+		CRYPTO_THREADID_cmp;
-+		TS_REQ_ext_free;
-+		EVP_PKEY_asn1_set_free;
-+		EVP_PKEY_get0_asn1;
-+		d2i_NETSCAPE_X509;
-+		EVP_PKEY_verify_recover_init;
-+		EVP_PKEY_CTX_set_data;
-+		EVP_PKEY_keygen_init;
-+		TS_RESP_CTX_set_status_info;
-+		TS_MSG_IMPRINT_get_algo;
-+		TS_REQ_print_bio;
-+		EVP_PKEY_CTX_ctrl_str;
-+		EVP_PKEY_get_default_digest_nid;
-+		PEM_write_bio_PKCS7_stream;
-+		TS_MSG_IMPRINT_print_bio;
-+		BN_asc2bn;
-+		TS_REQ_get_policy_id;
-+		ENGINE_set_default_pkey_asn1_meths;
-+		ENGINE_set_def_pkey_asn1_meths;
-+		d2i_TS_ACCURACY;
-+		DSO_global_lookup;
-+		TS_CONF_set_tsa_name;
-+		i2d_ASN1_SET_ANY;
-+		ENGINE_load_gost;
-+		WHIRLPOOL_BitUpdate;
-+		ASN1_PCTX_get_flags;
-+		TS_TST_INFO_get_ext_by_NID;
-+		TS_RESP_new;
-+		ESS_CERT_ID_dup;
-+		TS_STATUS_INFO_dup;
-+		TS_REQ_delete_ext;
-+		EVP_DigestVerifyFinal;
-+		EVP_PKEY_print_params;
-+		i2d_CMS_bio_stream;
-+		TS_REQ_get_msg_imprint;
-+		OBJ_find_sigid_by_algs;
-+		TS_TST_INFO_get_serial;
-+		TS_REQ_get_nonce;
-+		X509_PUBKEY_set0_param;
-+		EVP_PKEY_CTX_set0_keygen_info;
-+		DIST_POINT_set_dpname;
-+		i2d_ISSUING_DIST_POINT;
-+		ASN1_SET_ANY_it;
-+		EVP_PKEY_CTX_get_data;
-+		TS_STATUS_INFO_print_bio;
-+		EVP_PKEY_derive_init;
-+		d2i_TS_TST_INFO;
-+		EVP_PKEY_asn1_add_alias;
-+		d2i_TS_RESP_bio;
-+		OTHERNAME_cmp;
-+		GENERAL_NAME_set0_value;
-+		PKCS7_RECIP_INFO_get0_alg;
-+		TS_RESP_CTX_new;
-+		TS_RESP_set_tst_info;
-+		PKCS7_final;
-+		EVP_PKEY_base_id;
-+		TS_RESP_CTX_set_signer_cert;
-+		TS_REQ_set_msg_imprint;
-+		EVP_PKEY_CTX_ctrl;
-+		TS_CONF_set_digests;
-+		d2i_TS_MSG_IMPRINT;
-+		EVP_PKEY_meth_set_ctrl;
-+		TS_REQ_get_ext_by_NID;
-+		PKCS5_pbe_set0_algor;
-+		BN_BLINDING_thread_id;
-+		TS_ACCURACY_new;
-+		X509_CRL_METHOD_free;
-+		ASN1_PCTX_get_nm_flags;
-+		EVP_PKEY_meth_set_sign;
-+		CRYPTO_THREADID_current;
-+		EVP_PKEY_decrypt_init;
-+		NETSCAPE_X509_free;
-+		i2b_PVK_bio;
-+		EVP_PKEY_print_private;
-+		GENERAL_NAME_get0_value;
-+		b2i_PVK_bio;
-+		ASN1_UTCTIME_adj;
-+		TS_TST_INFO_new;
-+		EVP_MD_do_all_sorted;
-+		TS_CONF_set_default_engine;
-+		TS_ACCURACY_set_seconds;
-+		TS_TST_INFO_get_time;
-+		PKCS8_pkey_get0;
-+		EVP_PKEY_asn1_get0;
-+		OBJ_add_sigid;
-+		PKCS7_SIGNER_INFO_sign;
-+		EVP_PKEY_paramgen_init;
-+		EVP_PKEY_sign;
-+		OBJ_sigid_free;
-+		EVP_PKEY_meth_set_init;
-+		d2i_ESS_ISSUER_SERIAL;
-+		ISSUING_DIST_POINT_new;
-+		ASN1_TIME_adj;
-+		TS_OBJ_print_bio;
-+		EVP_PKEY_meth_set_verify_recover;
-+		EVP_PKEY_meth_set_vrfy_recover;
-+		TS_RESP_get_status_info;
-+		CMS_stream;
-+		EVP_PKEY_CTX_set_cb;
-+		PKCS7_to_TS_TST_INFO;
-+		ASN1_PCTX_get_oid_flags;
-+		TS_TST_INFO_add_ext;
-+		EVP_PKEY_meth_set_derive;
-+		i2d_TS_RESP_fp;
-+		i2d_TS_MSG_IMPRINT_bio;
-+		TS_RESP_CTX_set_accuracy;
-+		TS_REQ_set_nonce;
-+		ESS_CERT_ID_new;
-+		ENGINE_pkey_asn1_find_str;
-+		TS_REQ_get_ext_count;
-+		BUF_reverse;
-+		TS_TST_INFO_print_bio;
-+		d2i_ISSUING_DIST_POINT;
-+		ENGINE_get_pkey_meths;
-+		i2b_PrivateKey_bio;
-+		i2d_TS_RESP;
-+		b2i_PublicKey;
-+		TS_VERIFY_CTX_cleanup;
-+		TS_STATUS_INFO_free;
-+		TS_RESP_verify_token;
-+		OBJ_bsearch_ex_;
-+		ASN1_bn_print;
-+		EVP_PKEY_asn1_get_count;
-+		ENGINE_register_pkey_asn1_meths;
-+		ASN1_PCTX_set_nm_flags;
-+		EVP_DigestVerifyInit;
-+		ENGINE_set_default_pkey_meths;
-+		TS_TST_INFO_get_policy_id;
-+		TS_REQ_get_cert_req;
-+		X509_CRL_set_meth_data;
-+		PKCS8_pkey_set0;
-+		ASN1_STRING_copy;
-+		d2i_TS_TST_INFO_fp;
-+		X509_CRL_match;
-+		EVP_PKEY_asn1_set_private;
-+		TS_TST_INFO_get_ext_d2i;
-+		TS_RESP_CTX_add_policy;
-+		d2i_TS_RESP;
-+		TS_CONF_load_certs;
-+		TS_TST_INFO_get_msg_imprint;
-+		ERR_load_TS_strings;
-+		TS_TST_INFO_get_version;
-+		EVP_PKEY_CTX_dup;
-+		EVP_PKEY_meth_set_verify;
-+		i2b_PublicKey_bio;
-+		TS_CONF_set_certs;
-+		EVP_PKEY_asn1_get0_info;
-+		TS_VERIFY_CTX_free;
-+		TS_REQ_get_ext_by_critical;
-+		TS_RESP_CTX_set_serial_cb;
-+		X509_CRL_get_meth_data;
-+		TS_RESP_CTX_set_time_cb;
-+		TS_MSG_IMPRINT_get_msg;
-+		TS_TST_INFO_ext_free;
-+		TS_REQ_get_version;
-+		TS_REQ_add_ext;
-+		EVP_PKEY_CTX_set_app_data;
-+		OBJ_bsearch_;
-+		EVP_PKEY_meth_set_verifyctx;
-+		i2d_PKCS7_bio_stream;
-+		CRYPTO_THREADID_set_numeric;
-+		PKCS7_sign_add_signer;
-+		d2i_TS_TST_INFO_bio;
-+		TS_TST_INFO_get_ordering;
-+		TS_RESP_print_bio;
-+		TS_TST_INFO_get_exts;
-+		HMAC_CTX_copy;
-+		PKCS5_pbe2_set_iv;
-+		ENGINE_get_pkey_asn1_meths;
-+		b2i_PrivateKey;
-+		EVP_PKEY_CTX_get_app_data;
-+		TS_REQ_set_cert_req;
-+		CRYPTO_THREADID_set_callback;
-+		TS_CONF_set_serial;
-+		TS_TST_INFO_free;
-+		d2i_TS_REQ_fp;
-+		TS_RESP_verify_response;
-+		i2d_ESS_ISSUER_SERIAL;
-+		TS_ACCURACY_get_seconds;
-+		EVP_CIPHER_do_all;
-+		b2i_PrivateKey_bio;
-+		OCSP_CERTID_dup;
-+		X509_PUBKEY_get0_param;
-+		TS_MSG_IMPRINT_dup;
-+		PKCS7_print_ctx;
-+		i2d_TS_REQ_bio;
-+		EVP_whirlpool;
-+		EVP_PKEY_asn1_set_param;
-+		EVP_PKEY_meth_set_encrypt;
-+		ASN1_PCTX_set_flags;
-+		i2d_ESS_CERT_ID;
-+		TS_VERIFY_CTX_new;
-+		TS_RESP_CTX_set_extension_cb;
-+		ENGINE_register_all_pkey_meths;
-+		TS_RESP_CTX_set_status_info_cond;
-+		TS_RESP_CTX_set_stat_info_cond;
-+		EVP_PKEY_verify;
-+		WHIRLPOOL_Final;
-+		X509_CRL_METHOD_new;
-+		EVP_DigestSignFinal;
-+		TS_RESP_CTX_set_def_policy;
-+		NETSCAPE_X509_it;
-+		TS_RESP_create_response;
-+		PKCS7_SIGNER_INFO_get0_algs;
-+		TS_TST_INFO_get_nonce;
-+		EVP_PKEY_decrypt_old;
-+		TS_TST_INFO_set_policy_id;
-+		TS_CONF_set_ess_cert_id_chain;
-+		EVP_PKEY_CTX_get0_pkey;
-+		d2i_TS_REQ;
-+		EVP_PKEY_asn1_find_str;
-+		BIO_f_asn1;
-+		ESS_SIGNING_CERT_new;
-+		EVP_PBE_find;
-+		X509_CRL_get0_by_cert;
-+		EVP_PKEY_derive;
-+		i2d_TS_REQ;
-+		TS_TST_INFO_delete_ext;
-+		ESS_ISSUER_SERIAL_free;
-+		ASN1_PCTX_set_str_flags;
-+		ENGINE_get_pkey_asn1_meth_str;
-+		TS_CONF_set_signer_key;
-+		TS_ACCURACY_get_millis;
-+		TS_RESP_get_token;
-+		TS_ACCURACY_dup;
-+		ENGINE_register_all_pkey_asn1_meths;
-+		ENGINE_reg_all_pkey_asn1_meths;
-+		X509_CRL_set_default_method;
-+		CRYPTO_THREADID_hash;
-+		CMS_ContentInfo_print_ctx;
-+		TS_RESP_free;
-+		ISSUING_DIST_POINT_free;
-+		ESS_ISSUER_SERIAL_new;
-+		CMS_add1_crl;
-+		PKCS7_add1_attrib_digest;
-+		TS_RESP_CTX_add_md;
-+		TS_TST_INFO_dup;
-+		ENGINE_set_pkey_asn1_meths;
-+		PEM_write_bio_Parameters;
-+		TS_TST_INFO_get_accuracy;
-+		X509_CRL_get0_by_serial;
-+		TS_TST_INFO_set_version;
-+		TS_RESP_CTX_get_tst_info;
-+		TS_RESP_verify_signature;
-+		CRYPTO_THREADID_get_callback;
-+		TS_TST_INFO_get_tsa;
-+		TS_STATUS_INFO_new;
-+		EVP_PKEY_CTX_get_cb;
-+		TS_REQ_get_ext_d2i;
-+		GENERAL_NAME_set0_othername;
-+		TS_TST_INFO_get_ext_count;
-+		TS_RESP_CTX_get_request;
-+		i2d_NETSCAPE_X509;
-+		ENGINE_get_pkey_meth_engine;
-+		EVP_PKEY_meth_set_signctx;
-+		EVP_PKEY_asn1_copy;
-+		ASN1_TYPE_cmp;
-+		EVP_CIPHER_do_all_sorted;
-+		EVP_PKEY_CTX_free;
-+		ISSUING_DIST_POINT_it;
-+		d2i_TS_MSG_IMPRINT_fp;
-+		X509_STORE_get1_certs;
-+		EVP_PKEY_CTX_get_operation;
-+		d2i_ESS_SIGNING_CERT;
-+		TS_CONF_set_ordering;
-+		EVP_PBE_alg_add_type;
-+		TS_REQ_set_version;
-+		EVP_PKEY_get0;
-+		BIO_asn1_set_suffix;
-+		i2d_TS_STATUS_INFO;
-+		EVP_MD_do_all;
-+		TS_TST_INFO_set_accuracy;
-+		PKCS7_add_attrib_content_type;
-+		ERR_remove_thread_state;
-+		EVP_PKEY_meth_add0;
-+		TS_TST_INFO_set_tsa;
-+		EVP_PKEY_meth_new;
-+		WHIRLPOOL_Update;
-+		TS_CONF_set_accuracy;
-+		ASN1_PCTX_set_oid_flags;
-+		ESS_SIGNING_CERT_dup;
-+		d2i_TS_REQ_bio;
-+		X509_time_adj_ex;
-+		TS_RESP_CTX_add_flags;
-+		d2i_TS_STATUS_INFO;
-+		TS_MSG_IMPRINT_set_msg;
-+		BIO_asn1_get_suffix;
-+		TS_REQ_free;
-+		EVP_PKEY_meth_free;
-+		TS_REQ_get_exts;
-+		TS_RESP_CTX_set_clock_precision_digits;
-+		TS_RESP_CTX_set_clk_prec_digits;
-+		TS_RESP_CTX_add_failure_info;
-+		i2d_TS_RESP_bio;
-+		EVP_PKEY_CTX_get0_peerkey;
-+		PEM_write_bio_CMS_stream;
-+		TS_REQ_new;
-+		TS_MSG_IMPRINT_new;
-+		EVP_PKEY_meth_find;
-+		EVP_PKEY_id;
-+		TS_TST_INFO_set_serial;
-+		a2i_GENERAL_NAME;
-+		TS_CONF_set_crypto_device;
-+		EVP_PKEY_verify_init;
-+		TS_CONF_set_policies;
-+		ASN1_PCTX_new;
-+		ESS_CERT_ID_free;
-+		ENGINE_unregister_pkey_meths;
-+		TS_MSG_IMPRINT_free;
-+		TS_VERIFY_CTX_init;
-+		PKCS7_stream;
-+		TS_RESP_CTX_set_certs;
-+		TS_CONF_set_def_policy;
-+		ASN1_GENERALIZEDTIME_adj;
-+		NETSCAPE_X509_new;
-+		TS_ACCURACY_free;
-+		TS_RESP_get_tst_info;
-+		EVP_PKEY_derive_set_peer;
-+		PEM_read_bio_Parameters;
-+		TS_CONF_set_clock_precision_digits;
-+		TS_CONF_set_clk_prec_digits;
-+		ESS_ISSUER_SERIAL_dup;
-+		TS_ACCURACY_get_micros;
-+		ASN1_PCTX_get_str_flags;
-+		NAME_CONSTRAINTS_check;
-+		ASN1_BIT_STRING_check;
-+		X509_check_akid;
-+		ENGINE_unregister_pkey_asn1_meths;
-+		ENGINE_unreg_pkey_asn1_meths;
-+		ASN1_PCTX_free;
-+		PEM_write_bio_ASN1_stream;
-+		i2d_ASN1_bio_stream;
-+		TS_X509_ALGOR_print_bio;
-+		EVP_PKEY_meth_set_cleanup;
-+		EVP_PKEY_asn1_free;
-+		ESS_SIGNING_CERT_free;
-+		TS_TST_INFO_set_msg_imprint;
-+		GENERAL_NAME_cmp;
-+		d2i_ASN1_SET_ANY;
-+		ENGINE_set_pkey_meths;
-+		i2d_TS_REQ_fp;
-+		d2i_ASN1_SEQUENCE_ANY;
-+		GENERAL_NAME_get0_otherName;
-+		d2i_ESS_CERT_ID;
-+		OBJ_find_sigid_algs;
-+		EVP_PKEY_meth_set_keygen;
-+		PKCS5_PBKDF2_HMAC;
-+		EVP_PKEY_paramgen;
-+		EVP_PKEY_meth_set_paramgen;
-+		BIO_new_PKCS7;
-+		EVP_PKEY_verify_recover;
-+		TS_ext_print_bio;
-+		TS_ASN1_INTEGER_print_bio;
-+		check_defer;
-+		DSO_pathbyaddr;
-+		EVP_PKEY_set_type;
-+		TS_ACCURACY_set_micros;
-+		TS_REQ_to_TS_VERIFY_CTX;
-+		EVP_PKEY_meth_set_copy;
-+		ASN1_PCTX_set_cert_flags;
-+		TS_TST_INFO_get_ext;
-+		EVP_PKEY_asn1_set_ctrl;
-+		TS_TST_INFO_get_ext_by_critical;
-+		EVP_PKEY_CTX_new_id;
-+		TS_REQ_get_ext_by_OBJ;
-+		TS_CONF_set_signer_cert;
-+		X509_NAME_hash_old;
-+		ASN1_TIME_set_string;
-+		EVP_MD_flags;
-+		TS_RESP_CTX_free;
-+		DSAparams_dup;
-+		DHparams_dup;
-+		OCSP_REQ_CTX_add1_header;
-+		OCSP_REQ_CTX_set1_req;
-+		X509_STORE_set_verify_cb;
-+		X509_STORE_CTX_get0_current_crl;
-+		X509_STORE_CTX_get0_parent_ctx;
-+		X509_STORE_CTX_get0_current_issuer;
-+		X509_STORE_CTX_get0_cur_issuer;
-+		X509_issuer_name_hash_old;
-+		X509_subject_name_hash_old;
-+		EVP_CIPHER_CTX_copy;
-+		UI_method_get_prompt_constructor;
-+		UI_method_get_prompt_constructr;
-+		UI_method_set_prompt_constructor;
-+		UI_method_set_prompt_constructr;
-+		EVP_read_pw_string_min;
-+		CRYPTO_cts128_encrypt;
-+		CRYPTO_cts128_decrypt_block;
-+		CRYPTO_cfb128_1_encrypt;
-+		CRYPTO_cbc128_encrypt;
-+		CRYPTO_ctr128_encrypt;
-+		CRYPTO_ofb128_encrypt;
-+		CRYPTO_cts128_decrypt;
-+		CRYPTO_cts128_encrypt_block;
-+		CRYPTO_cbc128_decrypt;
-+		CRYPTO_cfb128_encrypt;
-+		CRYPTO_cfb128_8_encrypt;
-+		SSL_renegotiate_abbreviated;
-+		TLSv1_1_method;
-+		TLSv1_1_client_method;
-+		TLSv1_1_server_method;
-+		SSL_CTX_set_srp_client_pwd_callback;
-+		SSL_CTX_set_srp_client_pwd_cb;
-+		SSL_get_srp_g;
-+		SSL_CTX_set_srp_username_callback;
-+		SSL_CTX_set_srp_un_cb;
-+		SSL_get_srp_userinfo;
-+		SSL_set_srp_server_param;
-+		SSL_set_srp_server_param_pw;
-+		SSL_get_srp_N;
-+		SSL_get_srp_username;
-+		SSL_CTX_set_srp_password;
-+		SSL_CTX_set_srp_strength;
-+		SSL_CTX_set_srp_verify_param_callback;
-+		SSL_CTX_set_srp_vfy_param_cb;
-+		SSL_CTX_set_srp_cb_arg;
-+		SSL_CTX_set_srp_username;
-+		SSL_CTX_SRP_CTX_init;
-+		SSL_SRP_CTX_init;
-+		SRP_Calc_A_param;
-+		SRP_generate_server_master_secret;
-+		SRP_gen_server_master_secret;
-+		SSL_CTX_SRP_CTX_free;
-+		SRP_generate_client_master_secret;
-+		SRP_gen_client_master_secret;
-+		SSL_srp_server_param_with_username;
-+		SSL_srp_server_param_with_un;
-+		SSL_SRP_CTX_free;
-+		SSL_set_debug;
-+		SSL_SESSION_get0_peer;
-+		TLSv1_2_client_method;
-+		SSL_SESSION_set1_id_context;
-+		TLSv1_2_server_method;
-+		SSL_cache_hit;
-+		SSL_get0_kssl_ctx;
-+		SSL_set0_kssl_ctx;
-+		SSL_set_state;
-+		SSL_CIPHER_get_id;
-+		TLSv1_2_method;
-+		kssl_ctx_get0_client_princ;
-+		SSL_export_keying_material;
-+		SSL_set_tlsext_use_srtp;
-+		SSL_CTX_set_next_protos_advertised_cb;
-+		SSL_CTX_set_next_protos_adv_cb;
-+		SSL_get0_next_proto_negotiated;
-+		SSL_get_selected_srtp_profile;
-+		SSL_CTX_set_tlsext_use_srtp;
-+		SSL_select_next_proto;
-+		SSL_get_srtp_profiles;
-+		SSL_CTX_set_next_proto_select_cb;
-+		SSL_CTX_set_next_proto_sel_cb;
-+		SSL_SESSION_get_compress_id;
-+
-+		SRP_VBASE_get_by_user;
-+		SRP_Calc_server_key;
-+		SRP_create_verifier;
-+		SRP_create_verifier_BN;
-+		SRP_Calc_u;
-+		SRP_VBASE_free;
-+		SRP_Calc_client_key;
-+		SRP_get_default_gN;
-+		SRP_Calc_x;
-+		SRP_Calc_B;
-+		SRP_VBASE_new;
-+		SRP_check_known_gN_param;
-+		SRP_Calc_A;
-+		SRP_Verify_A_mod_N;
-+		SRP_VBASE_init;
-+		SRP_Verify_B_mod_N;
-+		EC_KEY_set_public_key_affine_coordinates;
-+		EC_KEY_set_pub_key_aff_coords;
-+		EVP_aes_192_ctr;
-+		EVP_PKEY_meth_get0_info;
-+		EVP_PKEY_meth_copy;
-+		ERR_add_error_vdata;
-+		EVP_aes_128_ctr;
-+		EVP_aes_256_ctr;
-+		EC_GFp_nistp224_method;
-+		EC_KEY_get_flags;
-+		RSA_padding_add_PKCS1_PSS_mgf1;
-+		EVP_aes_128_xts;
-+		EVP_aes_256_xts;
-+		EVP_aes_128_gcm;
-+		EC_KEY_clear_flags;
-+		EC_KEY_set_flags;
-+		EVP_aes_256_ccm;
-+		RSA_verify_PKCS1_PSS_mgf1;
-+		EVP_aes_128_ccm;
-+		EVP_aes_192_gcm;
-+		X509_ALGOR_set_md;
-+		RAND_init_fips;
-+		EVP_aes_256_gcm;
-+		EVP_aes_192_ccm;
-+		CMAC_CTX_copy;
-+		CMAC_CTX_free;
-+		CMAC_CTX_get0_cipher_ctx;
-+		CMAC_CTX_cleanup;
-+		CMAC_Init;
-+		CMAC_Update;
-+		CMAC_resume;
-+		CMAC_CTX_new;
-+		CMAC_Final;
-+		CRYPTO_ctr128_encrypt_ctr32;
-+		CRYPTO_gcm128_release;
-+		CRYPTO_ccm128_decrypt_ccm64;
-+		CRYPTO_ccm128_encrypt;
-+		CRYPTO_gcm128_encrypt;
-+		CRYPTO_xts128_encrypt;
-+		EVP_rc4_hmac_md5;
-+		CRYPTO_nistcts128_decrypt_block;
-+		CRYPTO_gcm128_setiv;
-+		CRYPTO_nistcts128_encrypt;
-+		EVP_aes_128_cbc_hmac_sha1;
-+		CRYPTO_gcm128_tag;
-+		CRYPTO_ccm128_encrypt_ccm64;
-+		ENGINE_load_rdrand;
-+		CRYPTO_ccm128_setiv;
-+		CRYPTO_nistcts128_encrypt_block;
-+		CRYPTO_gcm128_aad;
-+		CRYPTO_ccm128_init;
-+		CRYPTO_nistcts128_decrypt;
-+		CRYPTO_gcm128_new;
-+		CRYPTO_ccm128_tag;
-+		CRYPTO_ccm128_decrypt;
-+		CRYPTO_ccm128_aad;
-+		CRYPTO_gcm128_init;
-+		CRYPTO_gcm128_decrypt;
-+		ENGINE_load_rsax;
-+		CRYPTO_gcm128_decrypt_ctr32;
-+		CRYPTO_gcm128_encrypt_ctr32;
-+		CRYPTO_gcm128_finish;
-+		EVP_aes_256_cbc_hmac_sha1;
-+		PKCS5_pbkdf2_set;
-+		CMS_add0_recipient_password;
-+		CMS_decrypt_set1_password;
-+		CMS_RecipientInfo_set0_password;
-+		RAND_set_fips_drbg_type;
-+		X509_REQ_sign_ctx;
-+		RSA_PSS_PARAMS_new;
-+		X509_CRL_sign_ctx;
-+		X509_signature_dump;
-+		d2i_RSA_PSS_PARAMS;
-+		RSA_PSS_PARAMS_it;
-+		RSA_PSS_PARAMS_free;
-+		X509_sign_ctx;
-+		i2d_RSA_PSS_PARAMS;
-+		ASN1_item_sign_ctx;
-+		EC_GFp_nistp521_method;
-+		EC_GFp_nistp256_method;
-+		OPENSSL_stderr;
-+		OPENSSL_cpuid_setup;
-+		OPENSSL_showfatal;
-+		BIO_new_dgram_sctp;
-+		BIO_dgram_sctp_msg_waiting;
-+		BIO_dgram_sctp_wait_for_dry;
-+		BIO_s_datagram_sctp;
-+		BIO_dgram_is_sctp;
-+		BIO_dgram_sctp_notification_cb;
-+		CRYPTO_memcmp;
-+		SSL_CTX_set_alpn_protos;
-+		SSL_set_alpn_protos;
-+		SSL_CTX_set_alpn_select_cb;
-+		SSL_get0_alpn_selected;
-+		SSL_CTX_set_custom_cli_ext;
-+		SSL_CTX_set_custom_srv_ext;
-+		SSL_CTX_set_srv_supp_data;
-+		SSL_CTX_set_cli_supp_data;
-+		SSL_set_cert_cb;
-+		SSL_CTX_use_serverinfo;
-+		SSL_CTX_use_serverinfo_file;
-+		SSL_CTX_set_cert_cb;
-+		SSL_CTX_get0_param;
-+		SSL_get0_param;
-+		SSL_certs_clear;
-+		DTLSv1_2_method;
-+		DTLSv1_2_server_method;
-+		DTLSv1_2_client_method;
-+		DTLS_method;
-+		DTLS_server_method;
-+		DTLS_client_method;
-+		SSL_CTX_get_ssl_method;
-+		SSL_CTX_get0_certificate;
-+		SSL_CTX_get0_privatekey;
-+		SSL_COMP_set0_compression_methods;
-+		SSL_COMP_free_compression_methods;
-+		SSL_CIPHER_find;
-+		SSL_is_server;
-+		SSL_CONF_CTX_new;
-+		SSL_CONF_CTX_finish;
-+		SSL_CONF_CTX_free;
-+		SSL_CONF_CTX_set_flags;
-+		SSL_CONF_CTX_clear_flags;
-+		SSL_CONF_CTX_set1_prefix;
-+		SSL_CONF_CTX_set_ssl;
-+		SSL_CONF_CTX_set_ssl_ctx;
-+		SSL_CONF_cmd;
-+		SSL_CONF_cmd_argv;
-+		SSL_CONF_cmd_value_type;
-+		SSL_trace;
-+		SSL_CIPHER_standard_name;
-+		SSL_get_tlsa_record_byname;
-+		ASN1_TIME_diff;
-+		BIO_hex_string;
-+		CMS_RecipientInfo_get0_pkey_ctx;
-+		CMS_RecipientInfo_encrypt;
-+		CMS_SignerInfo_get0_pkey_ctx;
-+		CMS_SignerInfo_get0_md_ctx;
-+		CMS_SignerInfo_get0_signature;
-+		CMS_RecipientInfo_kari_get0_alg;
-+		CMS_RecipientInfo_kari_get0_reks;
-+		CMS_RecipientInfo_kari_get0_orig_id;
-+		CMS_RecipientInfo_kari_orig_id_cmp;
-+		CMS_RecipientEncryptedKey_get0_id;
-+		CMS_RecipientEncryptedKey_cert_cmp;
-+		CMS_RecipientInfo_kari_set0_pkey;
-+		CMS_RecipientInfo_kari_get0_ctx;
-+		CMS_RecipientInfo_kari_decrypt;
-+		CMS_SharedInfo_encode;
-+		DH_compute_key_padded;
-+		d2i_DHxparams;
-+		i2d_DHxparams;
-+		DH_get_1024_160;
-+		DH_get_2048_224;
-+		DH_get_2048_256;
-+		DH_KDF_X9_42;
-+		ECDH_KDF_X9_62;
-+		ECDSA_METHOD_new;
-+		ECDSA_METHOD_free;
-+		ECDSA_METHOD_set_app_data;
-+		ECDSA_METHOD_get_app_data;
-+		ECDSA_METHOD_set_sign;
-+		ECDSA_METHOD_set_sign_setup;
-+		ECDSA_METHOD_set_verify;
-+		ECDSA_METHOD_set_flags;
-+		ECDSA_METHOD_set_name;
-+		EVP_des_ede3_wrap;
-+		EVP_aes_128_wrap;
-+		EVP_aes_192_wrap;
-+		EVP_aes_256_wrap;
-+		EVP_aes_128_cbc_hmac_sha256;
-+		EVP_aes_256_cbc_hmac_sha256;
-+		CRYPTO_128_wrap;
-+		CRYPTO_128_unwrap;
-+		OCSP_REQ_CTX_nbio;
-+		OCSP_REQ_CTX_new;
-+		OCSP_set_max_response_length;
-+		OCSP_REQ_CTX_i2d;
-+		OCSP_REQ_CTX_nbio_d2i;
-+		OCSP_REQ_CTX_get0_mem_bio;
-+		OCSP_REQ_CTX_http;
-+		RSA_padding_add_PKCS1_OAEP_mgf1;
-+		RSA_padding_check_PKCS1_OAEP_mgf1;
-+		RSA_OAEP_PARAMS_free;
-+		RSA_OAEP_PARAMS_it;
-+		RSA_OAEP_PARAMS_new;
-+		SSL_get_sigalgs;
-+		SSL_get_shared_sigalgs;
-+		SSL_check_chain;
-+		X509_chain_up_ref;
-+		X509_http_nbio;
-+		X509_CRL_http_nbio;
-+		X509_REVOKED_dup;
-+		i2d_re_X509_tbs;
-+		X509_get0_signature;
-+		X509_get_signature_nid;
-+		X509_CRL_diff;
-+		X509_chain_check_suiteb;
-+		X509_CRL_check_suiteb;
-+		X509_check_host;
-+		X509_check_email;
-+		X509_check_ip;
-+		X509_check_ip_asc;
-+		X509_STORE_set_lookup_crls_cb;
-+		X509_STORE_CTX_get0_store;
-+		X509_VERIFY_PARAM_set1_host;
-+		X509_VERIFY_PARAM_add1_host;
-+		X509_VERIFY_PARAM_set_hostflags;
-+		X509_VERIFY_PARAM_get0_peername;
-+		X509_VERIFY_PARAM_set1_email;
-+		X509_VERIFY_PARAM_set1_ip;
-+		X509_VERIFY_PARAM_set1_ip_asc;
-+		X509_VERIFY_PARAM_get0_name;
-+		X509_VERIFY_PARAM_get_count;
-+		X509_VERIFY_PARAM_get0;
-+		X509V3_EXT_free;
-+		EC_GROUP_get_mont_data;
-+		EC_curve_nid2nist;
-+		EC_curve_nist2nid;
-+		PEM_write_bio_DHxparams;
-+		PEM_write_DHxparams;
-+		SSL_CTX_add_client_custom_ext;
-+		SSL_CTX_add_server_custom_ext;
-+		SSL_extension_supported;
-+		BUF_strnlen;
-+		sk_deep_copy;
-+		SSL_test_functions;
-+
-+	local:
-+		*;
-+};
-+
-+OPENSSL_1.0.2g {
-+       global:
-+               SRP_VBASE_get1_by_user;
-+               SRP_user_pwd_free;
-+} OPENSSL_1.0.2d;
-+
-Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/openssl.ld
-===================================================================
---- /dev/null	1970-01-01 00:00:00.000000000 +0000
-+++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/openssl.ld	2014-02-24 21:02:30.000000000 +0100
-@@ -0,0 +1,10 @@
-+OPENSSL_1.0.2 {
-+	global:
-+		bind_engine;
-+		v_check;
-+		OPENSSL_init;
-+		OPENSSL_finish;
-+	local:
-+		*;
-+};
-+
-Index: openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/ccgost/openssl.ld
-===================================================================
---- /dev/null	1970-01-01 00:00:00.000000000 +0000
-+++ openssl-1.0.2~beta1.obsolete.0.0498436515490575/engines/ccgost/openssl.ld	2014-02-24 21:02:30.000000000 +0100
-@@ -0,0 +1,10 @@
-+OPENSSL_1.0.2 {
-+	global:
-+		bind_engine;
-+		v_check;
-+		OPENSSL_init;
-+		OPENSSL_finish;
-+	local:
-+		*;
-+};
-+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/fix-cipher-des-ede3-cfb1.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/fix-cipher-des-ede3-cfb1.patch
deleted file mode 100644
index 2a318a4..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/fix-cipher-des-ede3-cfb1.patch
+++ /dev/null
@@ -1,21 +0,0 @@
-Upstream-Status: Submitted
-
-This patch adds the fix for one of the ciphers used in openssl, namely
-the cipher des-ede3-cfb1. Complete bug log and patch is present here:
-http://rt.openssl.org/Ticket/Display.html?id=2867
-
-Signed-off-by: Muhammad Shakeel <muhammad_shakeel@mentor.com>
-
-Index: openssl-1.0.2/crypto/evp/e_des3.c
-===================================================================
---- openssl-1.0.2.orig/crypto/evp/e_des3.c
-+++ openssl-1.0.2/crypto/evp/e_des3.c
-@@ -211,7 +211,7 @@ static int des_ede3_cfb1_cipher(EVP_CIPH
-     size_t n;
-     unsigned char c[1], d[1];
- 
--    for (n = 0; n < inl; ++n) {
-+    for (n = 0; n * 8 < inl; ++n) {
-         c[0] = (in[n / 8] & (1 << (7 - n % 8))) ? 0x80 : 0;
-         DES_ede3_cfb_encrypt(c, d, 1, 1,
-                              &data(ctx)->ks1, &data(ctx)->ks2,
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/openssl-avoid-NULL-pointer-dereference-in-EVP_DigestInit_ex.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/openssl-avoid-NULL-pointer-dereference-in-EVP_DigestInit_ex.patch
deleted file mode 100644
index f736e5c..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/openssl-avoid-NULL-pointer-dereference-in-EVP_DigestInit_ex.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-openssl: avoid NULL pointer dereference in EVP_DigestInit_ex()
-
-We should avoid accessing the type pointer if it's NULL,
-this could happen if ctx->digest is not NULL.
-
-Upstream-Status: Submitted
-http://www.mail-archive.com/openssl-dev@openssl.org/msg32860.html
-
-Signed-off-by: Xufeng Zhang <xufeng.zhang@windriver.com>
----
-Index: openssl-1.0.2h/crypto/evp/digest.c
-===================================================================
---- openssl-1.0.2h.orig/crypto/evp/digest.c
-+++ openssl-1.0.2h/crypto/evp/digest.c
-@@ -211,7 +211,7 @@ int EVP_DigestInit_ex(EVP_MD_CTX *ctx, c
-         type = ctx->digest;
-     }
- #endif
--    if (ctx->digest != type) {
-+    if (type && (ctx->digest != type)) {
-         if (ctx->digest && ctx->digest->ctx_size) {
-             OPENSSL_free(ctx->md_data);
-             ctx->md_data = NULL;
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/parallel.patch b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/parallel.patch
deleted file mode 100644
index f3f4c99..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/parallel.patch
+++ /dev/null
@@ -1,337 +0,0 @@
-Fix the parallel races in the Makefiles.
-
-This patch was taken from the Gentoo packaging:
-https://gitweb.gentoo.org/repo/gentoo.git/plain/dev-libs/openssl/files/openssl-1.0.2g-parallel-build.patch
-
-Upstream-Status: Pending
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-Refreshed for 1.0.2i
-Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
-
---- openssl-1.0.2g/crypto/Makefile
-+++ openssl-1.0.2g/crypto/Makefile
-@@ -85,11 +85,11 @@
- 	@if [ -z "$(THIS)" ]; then $(MAKE) -f $(TOP)/Makefile reflect THIS=$@; fi
- 
- subdirs:
--	@target=all; $(RECURSIVE_MAKE)
-+	+@target=all; $(RECURSIVE_MAKE)
- 
- files:
- 	$(PERL) $(TOP)/util/files.pl "CPUID_OBJ=$(CPUID_OBJ)" Makefile >> $(TOP)/MINFO
--	@target=files; $(RECURSIVE_MAKE)
-+	+@target=files; $(RECURSIVE_MAKE)
- 
- links:
- 	@$(PERL) $(TOP)/util/mklink.pl ../include/openssl $(EXHEADER)
-@@ -100,7 +100,7 @@
- # lib: $(LIB): are splitted to avoid end-less loop
- lib:	$(LIB)
- 	@touch lib
--$(LIB):	$(LIBOBJ)
-+$(LIB):	$(LIBOBJ) | subdirs
- 	$(AR) $(LIB) $(LIBOBJ)
- 	test -z "$(FIPSLIBDIR)" || $(AR) $(LIB) $(FIPSLIBDIR)fipscanister.o
- 	$(RANLIB) $(LIB) || echo Never mind.
-@@ -111,7 +111,7 @@
- 	fi
- 
- libs:
--	@target=lib; $(RECURSIVE_MAKE)
-+	+@target=lib; $(RECURSIVE_MAKE)
- 
- install:
- 	@[ -n "$(INSTALLTOP)" ] # should be set by top Makefile...
-@@ -120,7 +120,7 @@
- 	(cp $$i $(INSTALL_PREFIX)$(INSTALLTOP)/include/openssl/$$i; \
- 	chmod 644 $(INSTALL_PREFIX)$(INSTALLTOP)/include/openssl/$$i ); \
- 	done;
--	@target=install; $(RECURSIVE_MAKE)
-+	+@target=install; $(RECURSIVE_MAKE)
- 
- lint:
- 	@target=lint; $(RECURSIVE_MAKE)
---- openssl-1.0.2g/engines/Makefile
-+++ openssl-1.0.2g/engines/Makefile
-@@ -72,7 +72,7 @@
- 
- all:	lib subdirs
- 
--lib:	$(LIBOBJ)
-+lib:	$(LIBOBJ) | subdirs
- 	@if [ -n "$(SHARED_LIBS)" ]; then \
- 		set -e; \
- 		for l in $(LIBNAMES); do \
-@@ -89,7 +89,7 @@
- 
- subdirs:
- 	echo $(EDIRS)
--	@target=all; $(RECURSIVE_MAKE)
-+	+@target=all; $(RECURSIVE_MAKE)
- 
- files:
- 	$(PERL) $(TOP)/util/files.pl Makefile >> $(TOP)/MINFO
-@@ -128,7 +128,7 @@
- 			  mv -f $(INSTALL_PREFIX)$(INSTALLTOP)/$(LIBDIR)/engines/$$pfx$$l$$sfx.new $(INSTALL_PREFIX)$(INSTALLTOP)/$(LIBDIR)/engines/$$pfx$$l$$sfx ); \
- 		done; \
- 	fi
--	@target=install; $(RECURSIVE_MAKE)
-+	+@target=install; $(RECURSIVE_MAKE)
- 
- tags:
- 	ctags $(SRC)
---- openssl-1.0.2g/Makefile.org
-+++ openssl-1.0.2g/Makefile.org
-@@ -279,17 +279,17 @@
- build_libssl: build_ssl libssl.pc
- 
- build_crypto:
--	@dir=crypto; target=all; $(BUILD_ONE_CMD)
-+	+@dir=crypto; target=all; $(BUILD_ONE_CMD)
- build_ssl: build_crypto
--	@dir=ssl; target=all; $(BUILD_ONE_CMD)
-+	+@dir=ssl; target=all; $(BUILD_ONE_CMD)
- build_engines: build_crypto
--	@dir=engines; target=all; $(BUILD_ONE_CMD)
-+	+@dir=engines; target=all; $(BUILD_ONE_CMD)
- build_apps: build_libs
--	@dir=apps; target=all; $(BUILD_ONE_CMD)
-+	+@dir=apps; target=all; $(BUILD_ONE_CMD)
- build_tests: build_libs
--	@dir=test; target=all; $(BUILD_ONE_CMD)
-+	+@dir=test; target=all; $(BUILD_ONE_CMD)
- build_tools: build_libs
--	@dir=tools; target=all; $(BUILD_ONE_CMD)
-+	+@dir=tools; target=all; $(BUILD_ONE_CMD)
- 
- all_testapps: build_libs build_testapps
- build_testapps:
-@@ -544,7 +544,7 @@
- 	(cp $$i $(INSTALL_PREFIX)$(INSTALLTOP)/include/openssl/$$i; \
- 	chmod 644 $(INSTALL_PREFIX)$(INSTALLTOP)/include/openssl/$$i ); \
- 	done;
--	@set -e; target=install; $(RECURSIVE_BUILD_CMD)
-+	+@set -e; target=install; $(RECURSIVE_BUILD_CMD)
- 	@set -e; liblist="$(LIBS)"; for i in $$liblist ;\
- 	do \
- 		if [ -f "$$i" ]; then \
---- openssl-1.0.2g/Makefile.shared
-+++ openssl-1.0.2g/Makefile.shared
-@@ -105,6 +105,7 @@
-     SHAREDFLAGS="$${SHAREDFLAGS:-$(CFLAGS) $(SHARED_LDFLAGS)}"; \
-     LIBPATH=`for x in $$LIBDEPS; do echo $$x; done | sed -e 's/^ *-L//;t' -e d | uniq`; \
-     LIBPATH=`echo $$LIBPATH | sed -e 's/ /:/g'`; \
-+    [ -e $$SHLIB$$SHLIB_SOVER$$SHLIB_SUFFIX ] && exit 0; \
-     LD_LIBRARY_PATH=$$LIBPATH:$$LD_LIBRARY_PATH \
-     $${SHAREDCMD} $${SHAREDFLAGS} \
- 	-o $$SHLIB$$SHLIB_SOVER$$SHLIB_SUFFIX \
-@@ -122,6 +123,7 @@
- 			done; \
- 		fi; \
- 		if [ -n "$$SHLIB_SOVER" ]; then \
-+			[ -e "$$SHLIB$$SHLIB_SUFFIX" ] || \
- 			( $(SET_X); rm -f $$SHLIB$$SHLIB_SUFFIX; \
- 			  ln -s $$prev $$SHLIB$$SHLIB_SUFFIX ); \
- 		fi; \
---- openssl-1.0.2g/test/Makefile
-+++ openssl-1.0.2g/test/Makefile
-@@ -144,7 +144,7 @@
- tags:
- 	ctags $(SRC)
- 
--tests:	exe apps $(TESTS)
-+tests:	exe $(TESTS)
- 
- apps:
- 	@(cd ..; $(MAKE) DIRS=apps all)
-@@ -438,136 +438,136 @@
- 		link_app.$${shlib_target}
- 
- $(RSATEST)$(EXE_EXT): $(RSATEST).o $(DLIBCRYPTO)
--	@target=$(RSATEST); $(BUILD_CMD)
-+	+@target=$(RSATEST); $(BUILD_CMD)
- 
- $(BNTEST)$(EXE_EXT): $(BNTEST).o $(DLIBCRYPTO)
--	@target=$(BNTEST); $(BUILD_CMD)
-+	+@target=$(BNTEST); $(BUILD_CMD)
- 
- $(ECTEST)$(EXE_EXT): $(ECTEST).o $(DLIBCRYPTO)
--	@target=$(ECTEST); $(BUILD_CMD)
-+	+@target=$(ECTEST); $(BUILD_CMD)
- 
- $(EXPTEST)$(EXE_EXT): $(EXPTEST).o $(DLIBCRYPTO)
--	@target=$(EXPTEST); $(BUILD_CMD)
-+	+@target=$(EXPTEST); $(BUILD_CMD)
- 
- $(IDEATEST)$(EXE_EXT): $(IDEATEST).o $(DLIBCRYPTO)
--	@target=$(IDEATEST); $(BUILD_CMD)
-+	+@target=$(IDEATEST); $(BUILD_CMD)
- 
- $(MD2TEST)$(EXE_EXT): $(MD2TEST).o $(DLIBCRYPTO)
--	@target=$(MD2TEST); $(BUILD_CMD)
-+	+@target=$(MD2TEST); $(BUILD_CMD)
- 
- $(SHATEST)$(EXE_EXT): $(SHATEST).o $(DLIBCRYPTO)
--	@target=$(SHATEST); $(BUILD_CMD)
-+	+@target=$(SHATEST); $(BUILD_CMD)
- 
- $(SHA1TEST)$(EXE_EXT): $(SHA1TEST).o $(DLIBCRYPTO)
--	@target=$(SHA1TEST); $(BUILD_CMD)
-+	+@target=$(SHA1TEST); $(BUILD_CMD)
- 
- $(SHA256TEST)$(EXE_EXT): $(SHA256TEST).o $(DLIBCRYPTO)
--	@target=$(SHA256TEST); $(BUILD_CMD)
-+	+@target=$(SHA256TEST); $(BUILD_CMD)
- 
- $(SHA512TEST)$(EXE_EXT): $(SHA512TEST).o $(DLIBCRYPTO)
--	@target=$(SHA512TEST); $(BUILD_CMD)
-+	+@target=$(SHA512TEST); $(BUILD_CMD)
- 
- $(RMDTEST)$(EXE_EXT): $(RMDTEST).o $(DLIBCRYPTO)
--	@target=$(RMDTEST); $(BUILD_CMD)
-+	+@target=$(RMDTEST); $(BUILD_CMD)
- 
- $(MDC2TEST)$(EXE_EXT): $(MDC2TEST).o $(DLIBCRYPTO)
--	@target=$(MDC2TEST); $(BUILD_CMD)
-+	+@target=$(MDC2TEST); $(BUILD_CMD)
- 
- $(MD4TEST)$(EXE_EXT): $(MD4TEST).o $(DLIBCRYPTO)
--	@target=$(MD4TEST); $(BUILD_CMD)
-+	+@target=$(MD4TEST); $(BUILD_CMD)
- 
- $(MD5TEST)$(EXE_EXT): $(MD5TEST).o $(DLIBCRYPTO)
--	@target=$(MD5TEST); $(BUILD_CMD)
-+	+@target=$(MD5TEST); $(BUILD_CMD)
- 
- $(HMACTEST)$(EXE_EXT): $(HMACTEST).o $(DLIBCRYPTO)
--	@target=$(HMACTEST); $(BUILD_CMD)
-+	+@target=$(HMACTEST); $(BUILD_CMD)
- 
- $(WPTEST)$(EXE_EXT): $(WPTEST).o $(DLIBCRYPTO)
--	@target=$(WPTEST); $(BUILD_CMD)
-+	+@target=$(WPTEST); $(BUILD_CMD)
- 
- $(RC2TEST)$(EXE_EXT): $(RC2TEST).o $(DLIBCRYPTO)
--	@target=$(RC2TEST); $(BUILD_CMD)
-+	+@target=$(RC2TEST); $(BUILD_CMD)
- 
- $(BFTEST)$(EXE_EXT): $(BFTEST).o $(DLIBCRYPTO)
--	@target=$(BFTEST); $(BUILD_CMD)
-+	+@target=$(BFTEST); $(BUILD_CMD)
- 
- $(CASTTEST)$(EXE_EXT): $(CASTTEST).o $(DLIBCRYPTO)
--	@target=$(CASTTEST); $(BUILD_CMD)
-+	+@target=$(CASTTEST); $(BUILD_CMD)
- 
- $(RC4TEST)$(EXE_EXT): $(RC4TEST).o $(DLIBCRYPTO)
--	@target=$(RC4TEST); $(BUILD_CMD)
-+	+@target=$(RC4TEST); $(BUILD_CMD)
- 
- $(RC5TEST)$(EXE_EXT): $(RC5TEST).o $(DLIBCRYPTO)
--	@target=$(RC5TEST); $(BUILD_CMD)
-+	+@target=$(RC5TEST); $(BUILD_CMD)
- 
- $(DESTEST)$(EXE_EXT): $(DESTEST).o $(DLIBCRYPTO)
--	@target=$(DESTEST); $(BUILD_CMD)
-+	+@target=$(DESTEST); $(BUILD_CMD)
- 
- $(RANDTEST)$(EXE_EXT): $(RANDTEST).o $(DLIBCRYPTO)
--	@target=$(RANDTEST); $(BUILD_CMD)
-+	+@target=$(RANDTEST); $(BUILD_CMD)
- 
- $(DHTEST)$(EXE_EXT): $(DHTEST).o $(DLIBCRYPTO)
--	@target=$(DHTEST); $(BUILD_CMD)
-+	+@target=$(DHTEST); $(BUILD_CMD)
- 
- $(DSATEST)$(EXE_EXT): $(DSATEST).o $(DLIBCRYPTO)
--	@target=$(DSATEST); $(BUILD_CMD)
-+	+@target=$(DSATEST); $(BUILD_CMD)
- 
- $(METHTEST)$(EXE_EXT): $(METHTEST).o $(DLIBCRYPTO)
--	@target=$(METHTEST); $(BUILD_CMD)
-+	+@target=$(METHTEST); $(BUILD_CMD)
- 
- $(SSLTEST)$(EXE_EXT): $(SSLTEST).o $(DLIBSSL) $(DLIBCRYPTO)
--	@target=$(SSLTEST); $(FIPS_BUILD_CMD)
-+	+@target=$(SSLTEST); $(FIPS_BUILD_CMD)
- 
- $(ENGINETEST)$(EXE_EXT): $(ENGINETEST).o $(DLIBCRYPTO)
--	@target=$(ENGINETEST); $(BUILD_CMD)
-+	+@target=$(ENGINETEST); $(BUILD_CMD)
- 
- $(EVPTEST)$(EXE_EXT): $(EVPTEST).o $(DLIBCRYPTO)
--	@target=$(EVPTEST); $(BUILD_CMD)
-+	+@target=$(EVPTEST); $(BUILD_CMD)
- 
- $(EVPEXTRATEST)$(EXE_EXT): $(EVPEXTRATEST).o $(DLIBCRYPTO)
--	@target=$(EVPEXTRATEST); $(BUILD_CMD)
-+	+@target=$(EVPEXTRATEST); $(BUILD_CMD)
- 
- $(ECDSATEST)$(EXE_EXT): $(ECDSATEST).o $(DLIBCRYPTO)
--	@target=$(ECDSATEST); $(BUILD_CMD)
-+	+@target=$(ECDSATEST); $(BUILD_CMD)
- 
- $(ECDHTEST)$(EXE_EXT): $(ECDHTEST).o $(DLIBCRYPTO)
--	@target=$(ECDHTEST); $(BUILD_CMD)
-+	+@target=$(ECDHTEST); $(BUILD_CMD)
- 
- $(IGETEST)$(EXE_EXT): $(IGETEST).o $(DLIBCRYPTO)
--	@target=$(IGETEST); $(BUILD_CMD)
-+	+@target=$(IGETEST); $(BUILD_CMD)
- 
- $(JPAKETEST)$(EXE_EXT): $(JPAKETEST).o $(DLIBCRYPTO)
--	@target=$(JPAKETEST); $(BUILD_CMD)
-+	+@target=$(JPAKETEST); $(BUILD_CMD)
- 
- $(ASN1TEST)$(EXE_EXT): $(ASN1TEST).o $(DLIBCRYPTO)
--	@target=$(ASN1TEST); $(BUILD_CMD)
-+	+@target=$(ASN1TEST); $(BUILD_CMD)
- 
- $(SRPTEST)$(EXE_EXT): $(SRPTEST).o $(DLIBCRYPTO)
--	@target=$(SRPTEST); $(BUILD_CMD)
-+	+@target=$(SRPTEST); $(BUILD_CMD)
- 
- $(V3NAMETEST)$(EXE_EXT): $(V3NAMETEST).o $(DLIBCRYPTO)
--	@target=$(V3NAMETEST); $(BUILD_CMD)
-+	+@target=$(V3NAMETEST); $(BUILD_CMD)
- 
- $(HEARTBEATTEST)$(EXE_EXT): $(HEARTBEATTEST).o $(DLIBCRYPTO)
--	@target=$(HEARTBEATTEST); $(BUILD_CMD_STATIC)
-+	+@target=$(HEARTBEATTEST); $(BUILD_CMD_STATIC)
- 
- $(CONSTTIMETEST)$(EXE_EXT): $(CONSTTIMETEST).o
--	@target=$(CONSTTIMETEST) $(BUILD_CMD)
-+	+@target=$(CONSTTIMETEST) $(BUILD_CMD)
- 
- $(VERIFYEXTRATEST)$(EXE_EXT): $(VERIFYEXTRATEST).o
--	@target=$(VERIFYEXTRATEST) $(BUILD_CMD)
-+	+@target=$(VERIFYEXTRATEST) $(BUILD_CMD)
- 
- $(CLIENTHELLOTEST)$(EXE_EXT): $(CLIENTHELLOTEST).o
--	@target=$(CLIENTHELLOTEST) $(BUILD_CMD)
-+	+@target=$(CLIENTHELLOTEST) $(BUILD_CMD)
- 
- $(BADDTLSTEST)$(EXE_EXT): $(BADDTLSTEST).o
--	@target=$(BADDTLSTEST) $(BUILD_CMD)
-+	+@target=$(BADDTLSTEST) $(BUILD_CMD)
- 
- $(SSLV2CONFTEST)$(EXE_EXT): $(SSLV2CONFTEST).o
--	@target=$(SSLV2CONFTEST) $(BUILD_CMD)
-+	+@target=$(SSLV2CONFTEST) $(BUILD_CMD)
- 
- $(DTLSTEST)$(EXE_EXT): $(DTLSTEST).o ssltestlib.o $(DLIBSSL) $(DLIBCRYPTO)
--	@target=$(DTLSTEST); exobj=ssltestlib.o; $(BUILD_CMD)
-+	+@target=$(DTLSTEST); exobj=ssltestlib.o; $(BUILD_CMD)
- 
- #$(AESTEST).o: $(AESTEST).c
- #	$(CC) -c $(CFLAGS) -DINTERMEDIATE_VALUE_KAT -DTRACE_KAT_MCT $(AESTEST).c
-@@ -580,6 +580,6 @@
- #	fi
- 
- dummytest$(EXE_EXT): dummytest.o $(DLIBCRYPTO)
--	@target=dummytest; $(BUILD_CMD)
-+	+@target=dummytest; $(BUILD_CMD)
- 
- # DO NOT DELETE THIS LINE -- make depend depends on it.
- 
\ No newline at end of file
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/run-ptest b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/run-ptest
old mode 100755
new mode 100644
index 3b20fce..65c6cc7
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/run-ptest
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl/run-ptest
@@ -1,2 +1,4 @@
 #!/bin/sh
-make -k runtest
+cd test
+OPENSSL_ENGINES=../engines BLDTOP=.. SRCTOP=.. perl run_tests.pl
+cd ..
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl10.inc b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl10.inc
new file mode 100644
index 0000000..23f97d7
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl10.inc
@@ -0,0 +1,275 @@
+SUMMARY = "Secure Socket Layer"
+DESCRIPTION = "Secure Socket Layer (SSL) binary and related cryptographic tools."
+HOMEPAGE = "http://www.openssl.org/"
+BUGTRACKER = "http://www.openssl.org/news/vulnerabilities.html"
+SECTION = "libs/network"
+
+# "openssl | SSLeay" dual license
+LICENSE = "openssl"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=f9a8f968107345e0b75aa8c2ecaa7ec8"
+
+DEPENDS = "makedepend-native hostperl-runtime-native"
+DEPENDS_append_class-target = " openssl-native"
+
+PROVIDES += "openssl10"
+
+SRC_URI = "http://www.openssl.org/source/openssl-${PV}.tar.gz \
+          "
+S = "${WORKDIR}/openssl-${PV}"
+
+PACKAGECONFIG ?= "cryptodev-linux"
+PACKAGECONFIG[perl] = ",,,"
+PACKAGECONFIG[cryptodev-linux] = "-DHAVE_CRYPTODEV -DUSE_CRYPTODEV_DIGESTS,,cryptodev-linux"
+
+TERMIO_libc-musl = "-DTERMIOS"
+TERMIO ?= "-DTERMIO"
+# Avoid binaries being marked as requiring an executable stack, since they
+# don't need one (and this causes issues with SELinux)
+CFLAG = "${@base_conditional('SITEINFO_ENDIANNESS', 'le', '-DL_ENDIAN', '-DB_ENDIAN', d)} \
+	 ${TERMIO} ${CFLAGS} -Wall -Wa,--noexecstack"
+
+export DIRS = "crypto ssl apps"
+export EX_LIBS = "-lgcc -ldl"
+export AS = "${CC} -c"
+
+inherit pkgconfig siteinfo multilib_header ptest relative_symlinks
+
+PACKAGES =+ "libcrypto libssl ${PN}-misc openssl-conf"
+FILES_libcrypto = "${libdir}/libcrypto${SOLIBS}"
+FILES_libssl = "${libdir}/libssl${SOLIBS}"
+FILES_${PN} =+ " ${libdir}/ssl/*"
+FILES_${PN}-misc = "${libdir}/ssl/misc"
+RDEPENDS_${PN}-misc = "${@bb.utils.filter('PACKAGECONFIG', 'perl', d)}"
+
+# Add the openssl.cnf file to the openssl-conf package.  Make the libcrypto
+# package RRECOMMENDS on this package.  This will enable the configuration
+# file to be installed for both the base openssl package and the libcrypto
+# package since the base openssl package depends on the libcrypto package.
+FILES_openssl-conf = "${sysconfdir}/ssl/openssl.cnf"
+CONFFILES_openssl-conf = "${sysconfdir}/ssl/openssl.cnf"
+RRECOMMENDS_libcrypto += "openssl-conf"
+RDEPENDS_${PN}-ptest += "${PN}-misc make perl perl-module-filehandle bc"
+
+# Remove this to enable SSLv3. SSLv3 is disabled by default due to the POODLE
+# vulnerability
+EXTRA_OECONF = " -no-ssl3"
+
+do_configure_prepend_darwin () {
+	sed -i -e '/version-script=openssl\.ld/d' Configure
+}
+
+do_configure () {
+	cd util
+	perl perlpath.pl ${STAGING_BINDIR_NATIVE}
+	cd ..
+	ln -sf apps/openssl.pod crypto/crypto.pod ssl/ssl.pod doc/
+
+	os=${HOST_OS}
+	case $os in
+	linux-gnueabi |\
+	linux-gnuspe |\
+	linux-musleabi |\
+	linux-muslspe |\
+	linux-musl )
+		os=linux
+		;;
+		*)
+		;;
+	esac
+	target="$os-${HOST_ARCH}"
+	case $target in
+	linux-arm)
+		target=linux-armv4
+		;;
+	linux-armeb)
+		target=linux-elf-armeb
+		;;
+	linux-aarch64*)
+		target=linux-aarch64
+		;;
+	linux-sh3)
+		target=debian-sh3
+		;;
+	linux-sh4)
+		target=debian-sh4
+		;;
+	linux-i486)
+		target=debian-i386-i486
+		;;
+	linux-i586 | linux-viac3)
+		target=debian-i386-i586
+		;;
+	linux-i686)
+		target=debian-i386-i686/cmov
+		;;
+	linux-gnux32-x86_64 | linux-muslx32-x86_64 )
+		target=linux-x32
+		;;
+	linux-gnu64-x86_64)
+		target=linux-x86_64
+		;;
+	linux-gnun32-mips*el)
+		target=debian-mipsn32el
+		;;
+	linux-gnun32-mips*)
+		target=debian-mipsn32
+		;;
+	linux-mips*64*el)
+		target=debian-mips64el
+		;;
+	linux-mips*64*)
+		target=debian-mips64
+		;;
+	linux-mips*el)
+		target=debian-mipsel
+		;;
+	linux-mips*)
+		target=debian-mips
+		;;
+	linux-microblaze*|linux-nios2*|linux-gnu*ilp32**)
+		target=linux-generic32
+		;;
+	linux-powerpc)
+		target=linux-ppc
+		;;
+	linux-powerpc64)
+		target=linux-ppc64
+		;;
+	linux-supersparc)
+		target=linux-sparcv8
+		;;
+	linux-sparc)
+		target=linux-sparcv8
+		;;
+	darwin-i386)
+		target=darwin-i386-cc
+		;;
+	esac
+	# inject machine-specific flags
+	sed -i -e "s|^\(\"$target\",\s*\"[^:]\+\):\([^:]\+\)|\1:${CFLAG}|g" Configure
+        useprefix=${prefix}
+        if [ "x$useprefix" = "x" ]; then
+                useprefix=/
+        fi        
+	perl ./Configure ${EXTRA_OECONF} shared --prefix=$useprefix --openssldir=${libdir}/ssl --libdir=`basename ${libdir}` $target
+}
+
+do_compile_prepend_class-target () {
+    sed -i 's/\((OPENSSL=\)".*"/\1"openssl"/' Makefile
+}
+
+do_compile () {
+	oe_runmake depend
+	oe_runmake
+}
+
+do_compile_ptest () {
+	# build dependencies for test directory too
+	export DIRS="$DIRS test"
+	oe_runmake depend
+	oe_runmake buildtest
+}
+
+do_install () {
+	# Create ${D}/${prefix} to fix parallel issues
+	mkdir -p ${D}/${prefix}/
+
+	oe_runmake INSTALL_PREFIX="${D}" MANDIR="${mandir}" install
+
+	oe_libinstall -so libcrypto ${D}${libdir}
+	oe_libinstall -so libssl ${D}${libdir}
+
+	install -d ${D}${includedir}
+	cp --dereference -R include/openssl ${D}${includedir}
+
+	install -Dm 0755 ${WORKDIR}/openssl-c_rehash.sh ${D}${bindir}/c_rehash
+	sed -i -e 's,/etc/openssl,${sysconfdir}/ssl,g' ${D}${bindir}/c_rehash
+
+	oe_multilib_header openssl/opensslconf.h
+	if [ "${@bb.utils.filter('PACKAGECONFIG', 'perl', d)}" ]; then
+		sed -i -e '1s,.*,#!${bindir}/env perl,' ${D}${libdir}/ssl/misc/CA.pl
+		sed -i -e '1s,.*,#!${bindir}/env perl,' ${D}${libdir}/ssl/misc/tsget
+	else
+		rm -f ${D}${libdir}/ssl/misc/CA.pl ${D}${libdir}/ssl/misc/tsget
+	fi
+
+	# Create SSL structure
+	install -d ${D}${sysconfdir}/ssl/
+	mv ${D}${libdir}/ssl/openssl.cnf \
+	   ${D}${libdir}/ssl/certs \
+	   ${D}${libdir}/ssl/private \
+	   \
+	   ${D}${sysconfdir}/ssl/
+	ln -sf ${sysconfdir}/ssl/certs ${D}${libdir}/ssl/certs
+	ln -sf ${sysconfdir}/ssl/private ${D}${libdir}/ssl/private
+	ln -sf ${sysconfdir}/ssl/openssl.cnf ${D}${libdir}/ssl/openssl.cnf
+
+	# Rename man pages to prefix openssl10-*
+	for f in `find ${D}${mandir} -type f`; do
+	    mv $f $(dirname $f)/openssl10-$(basename $f)
+	done
+	for f in `find ${D}${mandir} -type l`; do
+	    ln_f=`readlink $f`
+	    rm -f $f
+	    ln -s openssl10-$ln_f $(dirname $f)/openssl10-$(basename $f)
+	done
+}
+
+do_install_ptest () {
+	cp -r -L Makefile.org Makefile test ${D}${PTEST_PATH}
+
+        # Replace the path to native perl with the path to target perl
+        sed -i 's,^PERL=.*,PERL=${bindir}/perl,' ${D}${PTEST_PATH}/Makefile
+
+	cp Configure config e_os.h ${D}${PTEST_PATH}
+	cp -r -L include ${D}${PTEST_PATH}
+	ln -sf ${libdir}/libcrypto.a ${D}${PTEST_PATH}
+	ln -sf ${libdir}/libssl.a ${D}${PTEST_PATH}
+	mkdir -p ${D}${PTEST_PATH}/crypto
+	cp crypto/constant_time_locl.h ${D}${PTEST_PATH}/crypto
+	cp -r certs ${D}${PTEST_PATH}
+	mkdir -p ${D}${PTEST_PATH}/apps
+	ln -sf ${libdir}/ssl/misc/CA.sh  ${D}${PTEST_PATH}/apps
+	ln -sf ${sysconfdir}/ssl/openssl.cnf ${D}${PTEST_PATH}/apps
+	ln -sf ${bindir}/openssl         ${D}${PTEST_PATH}/apps
+	cp apps/server.pem              ${D}${PTEST_PATH}/apps
+	cp apps/server2.pem             ${D}${PTEST_PATH}/apps
+	mkdir -p ${D}${PTEST_PATH}/util
+	install util/opensslwrap.sh    ${D}${PTEST_PATH}/util
+	install util/shlib_wrap.sh     ${D}${PTEST_PATH}/util
+	# Time stamps are relevant for "make alltests", otherwise
+	# make may try to recompile binaries. Not only must the
+	# binary files be newer than the sources, they also must
+	# be more recent than the header files in /usr/include.
+	#
+	# Using "cp -a" is not sufficient, because do_install
+	# does not preserve the original time stamps.
+	#
+	# So instead of using the original file stamps, we set
+	# the current time for all files. Binaries will get
+	# modified again later when stripping them, but that's okay.
+	touch ${D}${PTEST_PATH}
+	find ${D}${PTEST_PATH} -type f -print0 | xargs --verbose -0 touch -r ${D}${PTEST_PATH}
+
+	# exclude binary files or the package won't install
+	for d in ssltest_old v3ext x509aux; do
+		rm -rf ${D}${libdir}/${BPN}/ptest/test/$d
+	done
+
+	# Remove build host references
+	sed -i \
+       -e 's,--sysroot=${STAGING_DIR_TARGET},,g' \
+       -e 's|${DEBUG_PREFIX_MAP}||g' \
+       ${D}${PTEST_PATH}/Makefile ${D}${PTEST_PATH}/Configure
+}
+
+do_install_append_class-native() {
+	create_wrapper ${D}${bindir}/openssl \
+	    OPENSSL_CONF=${libdir}/ssl/openssl.cnf \
+	    SSL_CERT_DIR=${libdir}/ssl/certs \
+	    SSL_CERT_FILE=${libdir}/ssl/cert.pem \
+	    OPENSSL_ENGINES=${libdir}/ssl/engines
+}
+
+BBCLASSEXTEND = "native nativesdk"
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl_1.0.2k.bb b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl_1.0.2k.bb
deleted file mode 100644
index 83d1a50..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl_1.0.2k.bb
+++ /dev/null
@@ -1,62 +0,0 @@
-require openssl.inc
-
-# For target side versions of openssl enable support for OCF Linux driver
-# if they are available.
-DEPENDS += "cryptodev-linux"
-
-CFLAG += "-DHAVE_CRYPTODEV -DUSE_CRYPTODEV_DIGESTS"
-CFLAG_append_class-native = " -fPIC"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=27ffa5d74bb5a337056c14b2ef93fbf6"
-
-export DIRS = "crypto ssl apps engines"
-export OE_LDFLAGS="${LDFLAGS}"
-
-SRC_URI += "file://find.pl;subdir=${BP}/util/ \
-            file://run-ptest \
-            file://openssl-c_rehash.sh \
-            file://configure-targets.patch \
-            file://shared-libs.patch \
-            file://oe-ldflags.patch \
-            file://engines-install-in-libdir-ssl.patch \
-            file://debian1.0.2/block_diginotar.patch \
-            file://debian1.0.2/block_digicert_malaysia.patch \
-            file://debian/ca.patch \
-            file://debian/c_rehash-compat.patch \
-            file://debian/debian-targets.patch \
-            file://debian/man-dir.patch \
-            file://debian/man-section.patch \
-            file://debian/no-rpath.patch \
-            file://debian/no-symbolic.patch \
-            file://debian/pic.patch \
-            file://debian1.0.2/version-script.patch \
-            file://debian1.0.2/soname.patch \
-            file://openssl_fix_for_x32.patch \
-            file://fix-cipher-des-ede3-cfb1.patch \
-            file://openssl-avoid-NULL-pointer-dereference-in-EVP_DigestInit_ex.patch \
-            file://openssl-fix-des.pod-error.patch \
-            file://Makefiles-ptest.patch \
-            file://ptest-deps.patch \
-            file://openssl-1.0.2a-x32-asm.patch \
-            file://ptest_makefile_deps.patch \
-            file://configure-musl-target.patch \
-            file://parallel.patch \
-            file://openssl-util-perlpath.pl-cwd.patch \
-            file://Use-SHA256-not-MD5-as-default-digest.patch \
-            file://0001-Fix-build-with-clang-using-external-assembler.patch \
-            "
-SRC_URI[md5sum] = "f965fc0bf01bf882b31314b61391ae65"
-SRC_URI[sha256sum] = "6b3977c61f2aedf0f96367dcfb5c6e578cf37e7b8d913b4ecb6643c3cb88d8c0"
-
-PACKAGES =+ "${PN}-engines"
-FILES_${PN}-engines = "${libdir}/ssl/engines/*.so ${libdir}/engines"
-
-# The crypto_use_bigint patch means that perl's bignum module needs to be
-# installed, but some distributions (for example Fedora 23) don't ship it by
-# default.  As the resulting error is very misleading check for bignum before
-# building.
-do_configure_prepend() {
-	if ! perl -Mbigint -e true; then
-		bbfatal "The perl module 'bignum' was not found but this is required to build openssl.  Please install this module (often packaged as perl-bignum) and re-run bitbake."
-	fi
-}
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl_1.0.2n.bb b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl_1.0.2n.bb
new file mode 100644
index 0000000..32444c6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl_1.0.2n.bb
@@ -0,0 +1,60 @@
+require openssl10.inc
+
+# For target side versions of openssl enable support for OCF Linux driver
+# if they are available.
+
+CFLAG += "-DHAVE_CRYPTODEV -DUSE_CRYPTODEV_DIGESTS"
+CFLAG_append_class-native = " -fPIC"
+
+LIC_FILES_CHKSUM = "file://LICENSE;md5=057d9218c6180e1d9ee407572b2dd225"
+
+export DIRS = "crypto ssl apps engines"
+export OE_LDFLAGS="${LDFLAGS}"
+
+SRC_URI += "file://find.pl;subdir=openssl-${PV}/util/ \
+           file://run-ptest \
+           file://openssl-c_rehash.sh \
+           file://configure-targets.patch \
+           file://shared-libs.patch \
+           file://oe-ldflags.patch \
+           file://engines-install-in-libdir-ssl.patch \
+           file://debian1.0.2/block_diginotar.patch \
+           file://debian1.0.2/block_digicert_malaysia.patch \
+           file://debian/ca.patch \
+           file://debian/c_rehash-compat.patch \
+           file://debian/debian-targets.patch \
+           file://debian/man-dir.patch \
+           file://debian/man-section.patch \
+           file://debian/no-rpath.patch \
+           file://debian/no-symbolic.patch \
+           file://debian/pic.patch \
+           file://debian1.0.2/version-script.patch \
+           file://debian1.0.2/soname.patch \
+           file://openssl_fix_for_x32.patch \
+           file://openssl-fix-des.pod-error.patch \
+           file://Makefiles-ptest.patch \
+           file://ptest-deps.patch \
+           file://openssl-1.0.2a-x32-asm.patch \
+           file://ptest_makefile_deps.patch \
+           file://configure-musl-target.patch \
+           file://parallel.patch \
+           file://openssl-util-perlpath.pl-cwd.patch \
+           file://Use-SHA256-not-MD5-as-default-digest.patch \
+           file://0001-Fix-build-with-clang-using-external-assembler.patch \
+           file://0001-openssl-force-soft-link-to-avoid-rare-race.patch \
+           "
+SRC_URI[md5sum] = "13bdc1b1d1ff39b6fd42a255e74676a4"
+SRC_URI[sha256sum] = "370babb75f278c39e0c50e8c4e7493bc0f18db6867478341a832a982fd15a8fe"
+
+PACKAGES =+ "${PN}-engines"
+FILES_${PN}-engines = "${libdir}/ssl/engines/*.so ${libdir}/engines"
+
+# The crypto_use_bigint patch means that perl's bignum module needs to be
+# installed, but some distributions (for example Fedora 23) don't ship it by
+# default.  As the resulting error is very misleading check for bignum before
+# building.
+do_configure_prepend() {
+	if ! perl -Mbigint -e true; then
+		bbfatal "The perl module 'bignum' was not found but this is required to build openssl.  Please install this module (often packaged as perl-bignum) and re-run bitbake."
+	fi
+}
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl_1.1.0g.bb b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl_1.1.0g.bb
new file mode 100644
index 0000000..1649bff
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/openssl/openssl_1.1.0g.bb
@@ -0,0 +1,156 @@
+SUMMARY = "Secure Socket Layer"
+DESCRIPTION = "Secure Socket Layer (SSL) binary and related cryptographic tools."
+HOMEPAGE = "http://www.openssl.org/"
+BUGTRACKER = "http://www.openssl.org/news/vulnerabilities.html"
+SECTION = "libs/network"
+
+# "openssl | SSLeay" dual license
+LICENSE = "openssl"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=cae6da10f4ffd9703214776d2aabce32"
+
+BBCLASSEXTEND = "native nativesdk"
+
+SRC_URI[md5sum] = "ba5f1b8b835b88cadbce9b35ed9531a6"
+SRC_URI[sha256sum] = "de4d501267da39310905cb6dc8c6121f7a2cad45a7707f76df828fe1b85073af"
+
+SRC_URI = "http://www.openssl.org/source/openssl-${PV}.tar.gz \
+           file://run-ptest \
+           file://openssl-c_rehash.sh \
+           file://0001-Take-linking-flags-from-LDFLAGS-env-var.patch \
+           file://0001-Remove-test-that-requires-running-as-non-root.patch \
+           file://0001-aes-asm-aes-armv4-bsaes-armv7-.pl-make-it-work-with-.patch \
+          "
+
+S = "${WORKDIR}/openssl-${PV}"
+
+inherit lib_package multilib_header ptest
+
+do_configure () {
+	os=${HOST_OS}
+	case $os in
+	linux-uclibc |\
+	linux-uclibceabi |\
+	linux-gnueabi |\
+	linux-uclibcspe |\
+	linux-gnuspe |\
+	linux-musl*)
+		os=linux
+		;;
+		*)
+		;;
+	esac
+	target="$os-${HOST_ARCH}"
+	case $target in
+	linux-arm)
+		target=linux-armv4
+		;;
+	linux-armeb)
+		target=linux-armv4
+		;;
+	linux-aarch64*)
+		target=linux-aarch64
+		;;
+	linux-sh3)
+		target=linux-generic32
+		;;
+	linux-sh4)
+		target=linux-generic32
+		;;
+	linux-i486)
+		target=linux-elf
+		;;
+	linux-i586 | linux-viac3)
+		target=linux-elf
+		;;
+	linux-i686)
+		target=linux-elf
+		;;
+	linux-gnux32-x86_64)
+		target=linux-x32
+		;;
+	linux-gnu64-x86_64)
+		target=linux-x86_64
+		;;
+	linux-mips)
+                # specifying TARGET_CC_ARCH prevents openssl from (incorrectly) adding target architecture flags
+		target="linux-mips32 ${TARGET_CC_ARCH}"
+		;;
+	linux-mipsel)
+		target="linux-mips32 ${TARGET_CC_ARCH}"
+		;;
+        linux-gnun32-mips*)
+               target=linux-mips64
+                ;;
+        linux-*-mips64 | linux-mips64)
+               target=linux64-mips64
+                ;;
+        linux-*-mips64el | linux-mips64el)
+               target=linux64-mips64
+                ;;
+	linux-microblaze*|linux-nios2*)
+		target=linux-generic32
+		;;
+	linux-powerpc)
+		target=linux-ppc
+		;;
+	linux-powerpc64)
+		target=linux-ppc64
+		;;
+	linux-supersparc)
+		target=linux-sparcv9
+		;;
+	linux-sparc)
+		target=linux-sparcv9
+		;;
+	darwin-i386)
+		target=darwin-i386-cc
+		;;
+	esac
+        useprefix=${prefix}
+        if [ "x$useprefix" = "x" ]; then
+                useprefix=/
+        fi
+	perl ./Configure ${EXTRA_OECONF} --prefix=$useprefix --openssldir=${libdir}/ssl-1.1 --libdir=`basename ${libdir}` $target
+}
+
+#| engines/afalg/e_afalg.c: In function 'eventfd':
+#| engines/afalg/e_afalg.c:110:20: error: '__NR_eventfd' undeclared (first use in this function)
+#|      return syscall(__NR_eventfd, n);
+#|                     ^~~~~~~~~~~~
+EXTRA_OECONF_aarch64 += "no-afalgeng"
+
+#| ./libcrypto.so: undefined reference to `getcontext'
+#| ./libcrypto.so: undefined reference to `setcontext'
+#| ./libcrypto.so: undefined reference to `makecontext'
+EXTRA_OECONF_libc-musl += "-DOPENSSL_NO_ASYNC"
+
+do_install () {
+        oe_runmake DESTDIR="${D}" MANDIR="${mandir}" MANSUFFIX=ssl install
+        oe_multilib_header openssl/opensslconf.h
+}
+
+do_install_append_class-native () {
+        # Install a custom version of c_rehash that can handle sysroots properly.
+        # This version is used for example when installing ca-certificates during
+        # image creation.
+        install -Dm 0755 ${WORKDIR}/openssl-c_rehash.sh ${D}${bindir}/c_rehash
+        sed -i -e 's,/etc/openssl,${sysconfdir}/ssl,g' ${D}${bindir}/c_rehash
+}
+
+do_install_ptest() {
+        cp -r * ${D}${PTEST_PATH}
+
+        # Putting .so files in ptest package will mess up the dependencies of the main openssl package
+        # so we rename them to .so.ptest and patch the test accordingly
+        mv ${D}${PTEST_PATH}/libcrypto.so ${D}${PTEST_PATH}/libcrypto.so.ptest
+        mv ${D}${PTEST_PATH}/libssl.so ${D}${PTEST_PATH}/libssl.so.ptest
+        sed -i 's/$target{shared_extension_simple}/".so.ptest"/' ${D}${PTEST_PATH}/test/recipes/90-test_shlibload.t
+}
+
+RDEPENDS_${PN}-ptest += "perl-module-file-spec-functions bash python"
+
+FILES_${PN} =+ " ${libdir}/ssl-1.1/*"
+
+PACKAGES =+ "${PN}-engines"
+FILES_${PN}-engines = "${libdir}/engines-1.1"
+
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap.inc b/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap.inc
deleted file mode 100644
index 338af33..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap.inc
+++ /dev/null
@@ -1,17 +0,0 @@
-SUMMARY = "RPC program number mapper"
-HOMEPAGE = "http://neil.brown.name/portmap/"
-SECTION = "console/network"
-LICENSE = "BSD"
-LIC_FILES_CHKSUM = "file://portmap.c;beginline=2;endline=31;md5=51ff67e66ec84b2009b017b1f94afbf4 \
-                    file://from_local.c;beginline=9;endline=35;md5=1bec938a2268b8b423c58801ace3adc1"
-
-INITSCRIPT_NAME = "portmap"
-INITSCRIPT_PARAMS = "start 10 2 3 4 5 . stop 32 0 1 6 ."
-
-inherit update-rc.d systemd
-
-SYSTEMD_SERVICE_${PN} = "portmap.service"
-
-PACKAGES =+ "portmap-utils"
-FILES_portmap-utils = "${base_sbindir}/pmap_set ${base_sbindir}/pmap_dump"
-FILES_${PN}-doc += "${docdir}"
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap/destdir-no-strip.patch b/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap/destdir-no-strip.patch
deleted file mode 100644
index 2fbf784..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap/destdir-no-strip.patch
+++ /dev/null
@@ -1,46 +0,0 @@
-Upstream-Status: Backport
-
-From: Mike Frysinger <vapier@gentoo.org>
-Date: Sun, 13 May 2007 21:15:12 +0000 (-0400)
-Subject: respect DESTDIR and dont use -s with install
-X-Git-Url: http://neil.brown.name/git?p=portmap;a=commitdiff_plain;h=603c59b978c04df2354f68d4a2dc676a758ff46d
-
-respect DESTDIR and dont use -s with install
-
-$(DESTDIR) is the standard for installing into other trees, not $(BASEDIR) ...
-so I've converted the Makefile to use that.  I've also left in $(BASEDIR) as a
-default to support old installs; not sure if you'd just cut it.
-
-Stripping should be left to the person to handle, not automatically done by
-the install step.  Also, `install -s` always calls `strip` which is
-wrong/undesired in cross-compiling scenarios.
-
-Signed-off-by: Mike Frysinger <vapier@gentoo.org>
-Signed-off-by: Neil Brown <neilb@suse.de>
----
-
-diff --git a/Makefile b/Makefile
-index 9e9a4b4..5343428 100644
---- a/Makefile
-+++ b/Makefile
-@@ -135,13 +135,14 @@ from_local: CPPFLAGS += -DTEST
- portmap.man : portmap.8
- 	sed $(MAN_SED) < portmap.8 > portmap.man
- 
-+DESTDIR = $(BASEDIR)
- install: all
--	install -o root -g root -m 0755 -s portmap ${BASEDIR}/sbin
--	install -o root -g root -m 0755 -s pmap_dump ${BASEDIR}/sbin
--	install -o root -g root -m 0755 -s pmap_set ${BASEDIR}/sbin
--	install -o root -g root -m 0644 portmap.man ${BASEDIR}/usr/share/man/man8/portmap.8
--	install -o root -g root -m 0644 pmap_dump.8 ${BASEDIR}/usr/share/man/man8
--	install -o root -g root -m 0644 pmap_set.8 ${BASEDIR}/usr/share/man/man8
-+	install -o root -g root -m 0755 portmap $(DESTDIR)/sbin
-+	install -o root -g root -m 0755 pmap_dump $(DESTDIR)/sbin
-+	install -o root -g root -m 0755 pmap_set $(DESTDIR)/sbin
-+	install -o root -g root -m 0644 portmap.man $(DESTDIR)/usr/share/man/man8/portmap.8
-+	install -o root -g root -m 0644 pmap_dump.8 $(DESTDIR)/usr/share/man/man8
-+	install -o root -g root -m 0644 pmap_set.8 $(DESTDIR)/usr/share/man/man8
- 
- clean:
- 	rm -f *.o portmap pmap_dump pmap_set from_local \
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap/portmap.init b/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap/portmap.init
deleted file mode 100755
index 621aa17..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap/portmap.init
+++ /dev/null
@@ -1,67 +0,0 @@
-#!/bin/sh
-#
-### BEGIN INIT INFO
-# Provides:          portmap
-# Required-Start:    $network
-# Required-Stop:     $network
-# Default-Start:     S 2 3 4 5
-# Default-Stop:      0 1 6
-# Short-Description: The RPC portmapper
-# Description:       Portmap is a server that converts RPC (Remote
-#                    Procedure Call) program numbers into DARPA
-#                    protocol port numbers. It must be running in
-#                    order to make RPC calls. Services that use
-#                    RPC include NFS and NIS.
-### END INIT INFO
-
-test -f /sbin/portmap || exit 0
-
-case "$1" in
-    start)
-	echo "Starting portmap daemon..."
-        start-stop-daemon --start --quiet --exec /sbin/portmap
-
-	if [ -f /var/run/portmap.upgrade-state ]; then
-          echo "Restoring old RPC service information..."
-          sleep 1 # needs a short pause or pmap_set won't work. :(
-	  pmap_set </var/run/portmap.upgrade-state
-	  rm -f /var/run/portmap.upgrade-state
-          echo "done."
-        fi
-
-	;;
-    stop)
-        echo "Stopping portmap daemon..."
-        start-stop-daemon --stop --quiet --exec /sbin/portmap
-	;;
-    reload)
-	;;
-    force-reload)
-        $0 restart
-	;;
-    restart)
-	# pmap_dump and pmap_set may be in a different package and not installed...
-	if [ -f /sbin/pmap_dump -a -f /sbin/pmap_set ]; then
-		do_state=1
-	else
-		do_state=0
-	fi
-	[ $do_state -eq 1 ] && pmap_dump >/var/run/portmap.state
-        $0 stop
-        $0 start
-	if [ $do_state -eq 1 ]; then
-	  if [ ! -f /var/run/portmap.upgrade-state ]; then
-            sleep 1
-	    pmap_set </var/run/portmap.state
-	  fi
-	  rm -f /var/run/portmap.state
-	fi
-	;;
-    *)
-	echo "Usage: /etc/init.d/portmap {start|stop|reload|restart}"
-	exit 1
-	;;
-esac
-
-exit 0
-
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap/portmap.service b/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap/portmap.service
deleted file mode 100644
index 7ef9d7b..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap/portmap.service
+++ /dev/null
@@ -1,10 +0,0 @@
-[Unit]
-Description=The RPC portmapper
-After=network.target
-
-[Service]
-Type=forking
-ExecStart=@BASE_SBINDIR@/portmap
-
-[Install]
-WantedBy=multi-user.target
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap/tcpd-config.patch b/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap/tcpd-config.patch
deleted file mode 100644
index 2f25058..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap/tcpd-config.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-Upstream-Status: Backport
-
-From: Mike Frysinger <vapier@gentoo.org>
-Date: Sun, 13 May 2007 21:17:32 +0000 (-0400)
-Subject: fix building with tcpd support disabled
-X-Git-Url: http://neil.brown.name/git?p=portmap;a=commitdiff_plain;h=7847207aed1b44faf077eed14a9ac9c68244eba5
-
-fix building with tcpd support disabled
-
-Make sure pmap_check.c only includes tcpd.h when HOSTS_ACCESS is defined.
-
-Signed-off-by: Timothy Redaelli <drizzt@gentoo.org>
-Signed-off-by: Mike Frysinger <vapier@gentoo.org>
-Signed-off-by: Neil Brown <neilb@suse.de>
----
-
-diff --git a/pmap_check.c b/pmap_check.c
-index 84f2c12..443a822 100644
---- a/pmap_check.c
-+++ b/pmap_check.c
-@@ -44,7 +44,9 @@
- #include <netinet/in.h>
- #include <rpc/rpcent.h>
- #endif
-+#ifdef HOSTS_ACCESS
- #include <tcpd.h>
-+#endif
- #include <arpa/inet.h>
- #include <grp.h>
- 
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap_6.0.bb b/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap_6.0.bb
deleted file mode 100644
index d970095..0000000
--- a/import-layers/yocto-poky/meta/recipes-connectivity/portmap/portmap_6.0.bb
+++ /dev/null
@@ -1,35 +0,0 @@
-require portmap.inc
-
-DEPENDS_append_libc-musl = " libtirpc "
-
-PR = "r9"
-
-SRC_URI = "https://fossies.org/linux/misc/old/portmap-6.0.tgz \
-           file://destdir-no-strip.patch \
-           file://tcpd-config.patch \
-           file://portmap.init \
-           file://portmap.service"
-
-SRC_URI[md5sum] = "ac108ab68bf0f34477f8317791aaf1ff"
-SRC_URI[sha256sum] = "02c820d39f3e6e729d1bea3287a2d8a6c684f1006fb9612f97dcad4a281d41de"
-
-S = "${WORKDIR}/${BPN}_${PV}/"
-
-PACKAGECONFIG ??= "tcp-wrappers"
-PACKAGECONFIG[tcp-wrappers] = ",,tcp-wrappers"
-
-CPPFLAGS += "-DFACILITY=LOG_DAEMON -DENABLE_DNS -DHOSTS_ACCESS"
-CFLAGS += "-Wall -Wstrict-prototypes -fPIC"
-EXTRA_OEMAKE += "'NO_TCP_WRAPPER=${@bb.utils.contains('PACKAGECONFIG', 'tcp-wrappers', '', '1', d)}'"
-CFLAGS_append_libc-musl = " -I${STAGING_INCDIR}/tirpc "
-LDFLAGS_append_libc-musl = " -ltirpc "
-
-do_install() {
-    install -d ${D}${mandir}/man8/ ${D}${base_sbindir} ${D}${sysconfdir}/init.d
-    install -m 0755 ${WORKDIR}/portmap.init ${D}${sysconfdir}/init.d/portmap
-    oe_runmake install DESTDIR=${D}
-
-    install -d ${D}${systemd_unitdir}/system
-    install -m 0644 ${WORKDIR}/portmap.service ${D}${systemd_unitdir}/system
-    sed -i -e 's,@BASE_SBINDIR@,${base_sbindir},g' ${D}${systemd_unitdir}/system/portmap.service
-}
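The portmap recipe and its support files are removed above; in oe-core the portmapper role is covered by rpcbind (recipes-extended). As a rough migration sketch for an image that previously pulled portmap in explicitly -- package names assumed from oe-core, adjust to the image recipe actually in use:

    # image recipe or local.conf sketch: swap the removed portmap for rpcbind
    IMAGE_INSTALL_remove = "portmap"
    IMAGE_INSTALL_append = " rpcbind"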
diff --git a/import-layers/yocto-poky/meta/recipes-connectivity/wpa-supplicant/wpa-supplicant/wpa-supplicant.sh b/import-layers/yocto-poky/meta/recipes-connectivity/wpa-supplicant/wpa-supplicant/wpa-supplicant.sh
index 5c9e5d3..35a1aa6 100644
--- a/import-layers/yocto-poky/meta/recipes-connectivity/wpa-supplicant/wpa-supplicant/wpa-supplicant.sh
+++ b/import-layers/yocto-poky/meta/recipes-connectivity/wpa-supplicant/wpa-supplicant/wpa-supplicant.sh
@@ -4,6 +4,7 @@
 WPA_SUP_BIN="/usr/sbin/wpa_supplicant"
 WPA_SUP_PNAME="wpa_supplicant"
 WPA_SUP_PIDFILE="/var/run/wpa_supplicant.$IFACE.pid"
+WPA_COMMON_CTRL_IFACE="/var/run/wpa_supplicant"
 WPA_SUP_OPTIONS="-B -P $WPA_SUP_PIDFILE -i $IFACE"
 
 VERBOSITY=0
diff --git a/import-layers/yocto-poky/meta/recipes-core/base-files/base-files/profile b/import-layers/yocto-poky/meta/recipes-core/base-files/base-files/profile
index ceaf15f..a062028 100644
--- a/import-layers/yocto-poky/meta/recipes-core/base-files/base-files/profile
+++ b/import-layers/yocto-poky/meta/recipes-core/base-files/base-files/profile
@@ -3,15 +3,13 @@
 
 PATH="/usr/local/bin:/usr/bin:/bin"
 EDITOR="vi"			# needed for packages like cron, git-commit
-test -z "$TERM" && TERM="vt100"	# Basic terminal capab. For screen etc.
+[ "$TERM" ] || TERM="vt100"	# Basic terminal capab. For screen etc.
 
-if [ "$HOME" = "ROOTHOME" ]; then
-	PATH=$PATH:/usr/local/sbin:/usr/sbin:/sbin
-fi
-if [ "$PS1" ]; then
-	# works for bash and ash (no other shells known to be in use here)
-	PS1='\u@\h:\w\$ '
-fi
+# Add /sbin & co to $PATH for the root user
+[ "$HOME" != "ROOTHOME" ] || PATH=$PATH:/usr/local/sbin:/usr/sbin:/sbin
+
+# Set the prompt for bash and ash (no other shells known to be in use here)
+[ -z "$PS1" ] || PS1='\u@\h:\w\$ '
 
 if [ -d /etc/profile.d ]; then
 	for i in /etc/profile.d/*.sh; do
diff --git a/import-layers/yocto-poky/meta/recipes-core/base-files/base-files/share/dot.profile b/import-layers/yocto-poky/meta/recipes-core/base-files/base-files/share/dot.profile
index 979793e..a873160 100644
--- a/import-layers/yocto-poky/meta/recipes-core/base-files/base-files/share/dot.profile
+++ b/import-layers/yocto-poky/meta/recipes-core/base-files/base-files/share/dot.profile
@@ -7,4 +7,5 @@
 # path set by /etc/profile
 # export PATH
 
-mesg n
+# Might fail after "su - myuser" when /dev/tty* is not writable by "myuser".
+mesg n 2>/dev/null
diff --git a/import-layers/yocto-poky/meta/recipes-core/base-files/base-files_3.0.14.bb b/import-layers/yocto-poky/meta/recipes-core/base-files/base-files_3.0.14.bb
index ca7bf06..1c0863b 100644
--- a/import-layers/yocto-poky/meta/recipes-core/base-files/base-files_3.0.14.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/base-files/base-files_3.0.14.bb
@@ -42,7 +42,7 @@
            ${localstatedir}/backups ${localstatedir}/lib \
            /sys ${localstatedir}/lib/misc ${localstatedir}/spool \
            ${localstatedir}/volatile \
-           ${localstatedir}/volatile/log \
+           ${localstatedir}/${@'volatile/' if oe.types.boolean('${VOLATILE_LOG_DIR}') else ''}log \
            /home ${prefix}/src ${localstatedir}/local \
            /media"
 
@@ -53,7 +53,7 @@
                ${prefix}/lib/locale"
 dirs2775-lsb = "/var/mail"
 
-volatiles = "log tmp"
+volatiles = "${@'log' if oe.types.boolean('${VOLATILE_LOG_DIR}') else ''} tmp"
 conffiles = "${sysconfdir}/debian_version ${sysconfdir}/host.conf \
              ${sysconfdir}/issue /${sysconfdir}/issue.net \
              ${sysconfdir}/nsswitch.conf ${sysconfdir}/profile \
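The base-files hunk above keys the volatile /var/log handling off the boolean VOLATILE_LOG_DIR variable (defaulting to "yes" in bitbake.conf). A minimal sketch of opting into persistent logs, assuming that default, is a single line in local.conf or a distro config:

    # keep /var/log on persistent storage instead of the volatile tmpfs
    VOLATILE_LOG_DIR = "no"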
diff --git a/import-layers/yocto-poky/meta/recipes-core/busybox/busybox.inc b/import-layers/yocto-poky/meta/recipes-core/busybox/busybox.inc
index adc6e9a..48910ca 100644
--- a/import-layers/yocto-poky/meta/recipes-core/busybox/busybox.inc
+++ b/import-layers/yocto-poky/meta/recipes-core/busybox/busybox.inc
@@ -39,9 +39,11 @@
 INITSCRIPT_NAME_${PN}-udhcpd = "busybox-udhcpd"
 
 SYSTEMD_PACKAGES = "${PN}-syslog"
-SYSTEMD_SERVICE_${PN}-syslog = "busybox-syslog.service"
+SYSTEMD_SERVICE_${PN}-syslog = "${@bb.utils.contains('SRC_URI', 'file://syslog.cfg', 'busybox-syslog.service', '', d)}"
 
-CONFFILES_${PN}-syslog = "${sysconfdir}/syslog-startup.conf.${BPN}"
+CONFFILES_${PN}-syslog = "${sysconfdir}/syslog-startup.conf"
+RCONFLICTS_${PN}-syslog = "rsyslog sysklogd syslog-ng"
+
 CONFFILES_${PN}-mdev = "${sysconfdir}/mdev.conf"
 
 RRECOMMENDS_${PN} = "${PN}-syslog ${PN}-udhcpc"
@@ -71,7 +73,7 @@
 	busybox_cfg(bb.utils.contains('DISTRO_FEATURES', 'bluetooth', True, False, d), 'CONFIG_RFKILL', cnf, rem)
 	return "\n".join(cnf), "\n".join(rem)
 
-# X, Y = ${@features_to_uclibc_settings(d)}
+# X, Y = ${@features_to_busybox_settings(d)}
 # unfortunately doesn't seem to work with bitbake, workaround:
 def features_to_busybox_conf(d):
 	cnf, rem = features_to_busybox_settings(d)
@@ -102,6 +104,9 @@
 }
 
 do_prepare_config () {
+	if [ "$BUILD_REPRODUCIBLE_BINARIES" = "1" ]; then
+		export KCONFIG_NOTIMESTAMP=1
+	fi
 	sed -e '/CONFIG_STATIC/d' \
 		< ${WORKDIR}/defconfig > ${S}/.config
 	echo "# CONFIG_STATIC is not set" >> .config
@@ -118,6 +123,7 @@
 		  ${S}/.config.oe-tmp > ${S}/.config
 	fi
 	sed -i 's/CONFIG_IFUPDOWN_UDHCPC_CMD_OPTIONS="-R -n"/CONFIG_IFUPDOWN_UDHCPC_CMD_OPTIONS="-R -b"/' ${S}/.config
+	sed -i 's|${DEBUG_PREFIX_MAP}||g' ${S}/.config
 }
 
 # returns all the elements from the src uri that are .cfg files
@@ -138,6 +144,9 @@
 
 do_compile() {
 	unset CFLAGS CPPFLAGS CXXFLAGS LDFLAGS
+	if [ "$BUILD_REPRODUCIBLE_BINARIES" = "1" ]; then
+		export KCONFIG_NOTIMESTAMP=1
+	fi
 	if [ "${BUSYBOX_SPLIT_SUID}" = "1" -a x`grep "CONFIG_FEATURE_INDIVIDUAL=y" .config` = x ]; then
 	# split the .config into two parts, and make two busybox binaries
 		if [ -e .config.orig ]; then
@@ -241,9 +250,9 @@
 	fi
 
 	if grep -q "CONFIG_SYSLOGD=y" ${B}/.config; then
-		install -m 0755 ${WORKDIR}/syslog ${D}${sysconfdir}/init.d/syslog.${BPN}
-		install -m 644 ${WORKDIR}/syslog-startup.conf ${D}${sysconfdir}/syslog-startup.conf.${BPN}
-		install -m 644 ${WORKDIR}/syslog.conf ${D}${sysconfdir}/syslog.conf.${BPN}
+		install -m 0755 ${WORKDIR}/syslog ${D}${sysconfdir}/init.d/syslog
+		install -m 644 ${WORKDIR}/syslog-startup.conf ${D}${sysconfdir}/syslog-startup.conf
+		install -m 644 ${WORKDIR}/syslog.conf ${D}${sysconfdir}/syslog.conf
 	fi
 	if grep "CONFIG_CROND=y" ${B}/.config; then
 		install -m 0755 ${WORKDIR}/busybox-cron ${D}${sysconfdir}/init.d/
@@ -304,7 +313,6 @@
 		install -d ${D}${sysconfdir}/default
 		install -m 0644 ${WORKDIR}/busybox-syslog.default ${D}${sysconfdir}/default/busybox-syslog
             fi
-            ln -sf /dev/null ${D}${systemd_unitdir}/system/syslog.service
         fi
         if grep -q "CONFIG_KLOGD=y" ${B}/.config; then
             install -d ${D}${systemd_unitdir}/system
@@ -315,7 +323,7 @@
 
     # Remove the sysvinit specific configuration file for systemd systems to avoid confusion
     if ${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', 'false', 'true', d)}; then
-	rm -f ${D}${sysconfdir}/syslog-startup.conf.${BPN}
+	rm -f ${D}${sysconfdir}/syslog-startup.conf
     fi
 }
 
@@ -329,20 +337,6 @@
 
 ALTERNATIVE_PRIORITY = "50"
 
-ALTERNATIVE_${PN}-syslog += "syslog-conf"
-ALTERNATIVE_LINK_NAME[syslog-conf] = "${sysconfdir}/syslog.conf"
-
-python () {
-    if bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d):
-        pn = d.getVar('PN')
-        d.appendVar('ALTERNATIVE_%s-syslog' % (pn), ' syslog-init')
-        d.setVarFlag('ALTERNATIVE_LINK_NAME', 'syslog-init', '%s/init.d/syslog' % (d.getVar('sysconfdir')))
-        d.setVarFlag('ALTERNATIVE_TARGET', 'syslog-init', '%s/init.d/syslog.%s' % (d.getVar('sysconfdir'), d.getVar('BPN')))
-        d.appendVar('ALTERNATIVE_%s-syslog' % (pn), ' syslog-startup-conf')
-        d.setVarFlag('ALTERNATIVE_LINK_NAME', 'syslog-startup-conf', '%s/syslog-startup.conf' % (d.getVar('sysconfdir')))
-        d.setVarFlag('ALTERNATIVE_TARGET', 'syslog-startup-conf', '%s/syslog-startup.conf.%s' % (d.getVar('sysconfdir'), d.getVar('BPN')))
-}
-
 python do_package_prepend () {
     # We need to load the full set of busybox provides from the /etc/busybox.links
     # Use this to see the update-alternatives with the right information
@@ -439,3 +433,5 @@
 		fi
 	fi
 }
+
+RPROVIDES_${PN} += "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', '/bin/sh', '', d)}"
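With syslog support now carried in a syslog.cfg fragment and the SYSTEMD_SERVICE/CONFFILES entries keyed on it, a distro that prefers rsyslog, sysklogd or syslog-ng can drop the busybox syslog by removing the fragment. A hypothetical bbappend sketch (the _remove override is standard OE syntax; nothing beyond that is implied by the recipe):

    # busybox_%.bbappend: let another syslog daemon own /etc/syslog.conf
    SRC_URI_remove = "file://syslog.cfg"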
diff --git a/import-layers/yocto-poky/meta/recipes-core/busybox/busybox/defconfig b/import-layers/yocto-poky/meta/recipes-core/busybox/busybox/defconfig
index 8803b52..cc68bea 100644
--- a/import-layers/yocto-poky/meta/recipes-core/busybox/busybox/defconfig
+++ b/import-layers/yocto-poky/meta/recipes-core/busybox/busybox/defconfig
@@ -820,9 +820,9 @@
 # CONFIG_IFPLUGD is not set
 CONFIG_IFUPDOWN=y
 CONFIG_IFUPDOWN_IFSTATE_PATH="/var/run/ifstate"
-# CONFIG_FEATURE_IFUPDOWN_IP is not set
-# CONFIG_FEATURE_IFUPDOWN_IP_BUILTIN is not set
-CONFIG_FEATURE_IFUPDOWN_IFCONFIG_BUILTIN=y
+CONFIG_FEATURE_IFUPDOWN_IP=y
+CONFIG_FEATURE_IFUPDOWN_IP_BUILTIN=y
+# CONFIG_FEATURE_IFUPDOWN_IFCONFIG_BUILTIN is not set
 CONFIG_FEATURE_IFUPDOWN_IPV4=y
 CONFIG_FEATURE_IFUPDOWN_IPV6=y
 CONFIG_FEATURE_IFUPDOWN_MAPPING=y
@@ -1053,17 +1053,15 @@
 #
 # System Logging Utilities
 #
-CONFIG_SYSLOGD=y
-CONFIG_FEATURE_ROTATE_LOGFILE=y
-CONFIG_FEATURE_REMOTE_LOG=y
-CONFIG_FEATURE_SYSLOGD_DUP=y
-CONFIG_FEATURE_SYSLOGD_CFG=y
-CONFIG_FEATURE_SYSLOGD_READ_BUFFER_SIZE=256
-CONFIG_FEATURE_IPC_SYSLOG=y
-CONFIG_FEATURE_IPC_SYSLOG_BUFFER_SIZE=64
-CONFIG_LOGREAD=y
-CONFIG_FEATURE_LOGREAD_REDUCED_LOCKING=y
-CONFIG_FEATURE_KMSG_SYSLOG=y
+# CONFIG_SYSLOGD is not set
+# CONFIG_FEATURE_ROTATE_LOGFILE is not set
+# CONFIG_FEATURE_REMOTE_LOG is not set
+# CONFIG_FEATURE_SYSLOGD_DUP is not set
+# CONFIG_FEATURE_SYSLOGD_CFG is not set
+# CONFIG_FEATURE_IPC_SYSLOG is not set
+# CONFIG_LOGREAD is not set
+# CONFIG_FEATURE_LOGREAD_REDUCED_LOCKING is not set
+# CONFIG_FEATURE_KMSG_SYSLOG is not set
 CONFIG_KLOGD=y
 
 #
diff --git a/import-layers/yocto-poky/meta/recipes-core/busybox/busybox/syslog.cfg b/import-layers/yocto-poky/meta/recipes-core/busybox/busybox/syslog.cfg
new file mode 100644
index 0000000..e2425ed
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/busybox/busybox/syslog.cfg
@@ -0,0 +1,11 @@
+CONFIG_SYSLOGD=y
+CONFIG_FEATURE_ROTATE_LOGFILE=y
+CONFIG_FEATURE_REMOTE_LOG=y
+CONFIG_FEATURE_SYSLOGD_DUP=y
+CONFIG_FEATURE_SYSLOGD_CFG=y
+CONFIG_FEATURE_SYSLOGD_READ_BUFFER_SIZE=256
+CONFIG_FEATURE_IPC_SYSLOG=y
+CONFIG_FEATURE_IPC_SYSLOG_BUFFER_SIZE=64
+CONFIG_LOGREAD=y
+CONFIG_FEATURE_LOGREAD_REDUCED_LOCKING=y
+CONFIG_FEATURE_KMSG_SYSLOG=y
diff --git a/import-layers/yocto-poky/meta/recipes-core/busybox/busybox_1.24.1.bb b/import-layers/yocto-poky/meta/recipes-core/busybox/busybox_1.24.1.bb
index 6ccbffd..1c85808 100644
--- a/import-layers/yocto-poky/meta/recipes-core/busybox/busybox_1.24.1.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/busybox/busybox_1.24.1.bb
@@ -40,6 +40,7 @@
            file://resize.cfg \
            ${@["", "file://init.cfg"][(d.getVar('VIRTUAL-RUNTIME_init_manager') == 'busybox')]} \
            ${@["", "file://mdev.cfg"][(d.getVar('VIRTUAL-RUNTIME_dev_manager') == 'busybox-mdev')]} \
+           file://syslog.cfg \
            file://inittab \
            file://rcS \
            file://rcK \
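busybox assembles its final .config from the defconfig plus any .cfg fragments listed in SRC_URI (as the syslog.cfg addition above shows), so other layers can extend the configuration the same way. A small illustrative sketch, with the fragment name and option chosen only as an example:

    # busybox_%.bbappend
    FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
    SRC_URI += "file://ftpd.cfg"

    # ftpd.cfg -- enable the busybox ftpd applet
    CONFIG_FTPD=y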
diff --git a/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/0001-Unset-need_charset_alias-when-building-for-musl.patch b/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/0001-Unset-need_charset_alias-when-building-for-musl.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/0001-Unset-need_charset_alias-when-building-for-musl.patch
rename to import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/0001-Unset-need_charset_alias-when-building-for-musl.patch
diff --git a/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/0001-local.mk-fix-cross-compiling-problem.patch b/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/0001-local.mk-fix-cross-compiling-problem.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/0001-local.mk-fix-cross-compiling-problem.patch
rename to import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/0001-local.mk-fix-cross-compiling-problem.patch
diff --git a/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/0001-uname-report-processor-and-hardware-correctly.patch b/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/0001-uname-report-processor-and-hardware-correctly.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/0001-uname-report-processor-and-hardware-correctly.patch
rename to import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/0001-uname-report-processor-and-hardware-correctly.patch
diff --git a/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/disable-ls-output-quoting.patch b/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/disable-ls-output-quoting.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/disable-ls-output-quoting.patch
rename to import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/disable-ls-output-quoting.patch
diff --git a/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/fix-selinux-flask.patch b/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/fix-selinux-flask.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/fix-selinux-flask.patch
rename to import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/fix-selinux-flask.patch
diff --git a/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/man-decouple-manpages-from-build.patch b/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/man-decouple-manpages-from-build.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/man-decouple-manpages-from-build.patch
rename to import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/man-decouple-manpages-from-build.patch
diff --git a/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/remove-usr-local-lib-from-m4.patch b/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/remove-usr-local-lib-from-m4.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils-8.26/remove-usr-local-lib-from-m4.patch
rename to import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils/remove-usr-local-lib-from-m4.patch
diff --git a/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils_8.26.bb b/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils_8.26.bb
deleted file mode 100644
index 52ef101..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils_8.26.bb
+++ /dev/null
@@ -1,142 +0,0 @@
-SUMMARY = "The basic file, shell and text manipulation utilities"
-DESCRIPTION = "The GNU Core Utilities provide the basic file, shell and text \
-manipulation utilities. These are the core utilities which are expected to exist on \
-every system."
-HOMEPAGE = "http://www.gnu.org/software/coreutils/"
-BUGTRACKER = "http://debbugs.gnu.org/coreutils"
-LICENSE = "GPLv3+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504\
-                    file://src/ls.c;beginline=5;endline=16;md5=38b79785ca88537b75871782a2a3c6b8"
-DEPENDS = "gmp libcap"
-DEPENDS_class-native = ""
-
-inherit autotools gettext texinfo
-
-SRC_URI = "${GNU_MIRROR}/coreutils/${BP}.tar.xz;name=tarball \
-           http://distfiles.gentoo.org/distfiles/${BP}-man.tar.xz;name=manpages \
-           file://man-decouple-manpages-from-build.patch \
-           file://remove-usr-local-lib-from-m4.patch \
-           file://fix-selinux-flask.patch \
-           file://0001-Unset-need_charset_alias-when-building-for-musl.patch \
-           file://0001-uname-report-processor-and-hardware-correctly.patch \
-           file://disable-ls-output-quoting.patch \
-           file://0001-local.mk-fix-cross-compiling-problem.patch \
-          "
-
-SRC_URI[tarball.md5sum] = "d5aa2072f662d4118b9f4c63b94601a6"
-SRC_URI[tarball.sha256sum] = "155e94d748f8e2bc327c66e0cbebdb8d6ab265d2f37c3c928f7bf6c3beba9a8e"
-SRC_URI[manpages.md5sum] = "b58107f532f7beffcb2f38e2ac1f2da3"
-SRC_URI[manpages.sha256sum] = "9324ec412ffca3b0431e6299720c33ac98e749e430f72a7c6e65f3635c86aa29"
-
-EXTRA_OECONF_class-native = "--without-gmp"
-EXTRA_OECONF_class-target = "--enable-install-program=arch --libexecdir=${libdir}"
-EXTRA_OECONF_class-nativesdk = "--enable-install-program=arch"
-
-# acl and xattr are not default features
-#
-PACKAGECONFIG_class-target ??= "\
-    ${@bb.utils.filter('DISTRO_FEATURES', 'acl xattr', d)} \
-"
-
-# The lib/oe/path.py requires xattr
-PACKAGECONFIG_class-native ??= "xattr"
-
-# with, without, depends, rdepends
-#
-PACKAGECONFIG[acl] = "--enable-acl,--disable-acl,acl,"
-PACKAGECONFIG[xattr] = "--enable-xattr,--disable-xattr,attr,"
-
-# [ df mktemp base64 gets a special treatment and is not included in this
-bindir_progs = "arch basename chcon cksum comm csplit cut dir dircolors dirname du \
-                env expand expr factor fmt fold groups head hostid id install \
-                join link logname md5sum mkfifo nice nl nohup nproc od paste pathchk \
-                pinky pr printenv printf ptx readlink realpath runcon seq sha1sum sha224sum sha256sum \
-                sha384sum sha512sum shred shuf sort split stdbuf sum tac tail tee test timeout\
-                tr truncate tsort tty unexpand uniq unlink uptime users vdir wc who whoami yes"
-
-# hostname gets a special treatment and is not included in this
-base_bindir_progs = "cat chgrp chmod chown cp date dd echo false kill ln ls mkdir \
-                     mknod mv pwd rm rmdir sleep stty sync touch true uname stat"
-
-sbindir_progs= "chroot"
-
-# Let aclocal use the relative path for the m4 file rather than the
-# absolute since coreutils has a lot of m4 files, otherwise there might
-# be an "Argument list too long" error when it is built in a long/deep
-# directory.
-acpaths = "-I ./m4"
-
-# Deal with a separate builddir failure if src doesn't exist when creating version.c/version.h
-do_compile_prepend () {
-	mkdir -p ${B}/src
-}
-
-do_install_class-native() {
-	autotools_do_install
-	# remove groups to fix conflict with shadow-native
-	rm -f ${D}${STAGING_BINDIR_NATIVE}/groups
-	# The return is a must since native doesn't need the
-	# do_install_append() in the below.
-	return
-}
-
-do_install_append() {
-	for i in df mktemp base64; do mv ${D}${bindir}/$i ${D}${bindir}/$i.${BPN}; done
-
-	install -d ${D}${base_bindir}
-	[ "${base_bindir}" != "${bindir}" ] && for i in ${base_bindir_progs}; do mv ${D}${bindir}/$i ${D}${base_bindir}/$i.${BPN}; done
-
-	install -d ${D}${sbindir}
-	[ "${sbindir}" != "${bindir}" ] && for i in ${sbindir_progs}; do mv ${D}${bindir}/$i ${D}${sbindir}/$i.${BPN}; done
-
-	# [ requires special handling because [.coreutils will cause the sed stuff
-	# in update-alternatives to fail, therefore use lbracket - the name used
-	# for the actual source file.
-	mv ${D}${bindir}/[ ${D}${bindir}/lbracket.${BPN}
-
-	# prebuilt man pages
-	install -d ${D}/${mandir}/man1
-	install -t ${D}/${mandir}/man1 ${S}/man/*.1
-	# prebuilt man pages don't do a separate man page for [ vs test.
-	# see comment above r.e. sed and update-alternatives
-	cp -a ${D}${mandir}/man1/test.1 ${D}${mandir}/man1/lbracket.1.${BPN}
-}
-
-inherit update-alternatives
-
-ALTERNATIVE_PRIORITY = "100"
-ALTERNATIVE_${PN} = "lbracket ${bindir_progs} ${base_bindir_progs} ${sbindir_progs} base64 mktemp df"
-ALTERNATIVE_${PN}-doc = "base64.1 mktemp.1 df.1 lbracket.1 groups.1 kill.1 uptime.1 stat.1  hostname.1"
-
-ALTERNATIVE_LINK_NAME[hostname.1] = "${mandir}/man1/hostname.1"
-
-ALTERNATIVE_LINK_NAME[base64] = "${base_bindir}/base64"
-ALTERNATIVE_TARGET[base64] = "${bindir}/base64.${BPN}"
-ALTERNATIVE_LINK_NAME[base64.1] = "${mandir}/man1/base64.1"
-
-ALTERNATIVE_LINK_NAME[mktemp] = "${base_bindir}/mktemp"
-ALTERNATIVE_TARGET[mktemp] = "${bindir}/mktemp.${BPN}"
-ALTERNATIVE_LINK_NAME[mktemp.1] = "${mandir}/man1/mktemp.1"
-
-ALTERNATIVE_LINK_NAME[df] = "${base_bindir}/df"
-ALTERNATIVE_TARGET[df] = "${bindir}/df.${BPN}"
-ALTERNATIVE_LINK_NAME[df.1] = "${mandir}/man1/df.1"
-
-ALTERNATIVE_LINK_NAME[lbracket] = "${bindir}/["
-ALTERNATIVE_TARGET[lbracket] = "${bindir}/lbracket.${BPN}"
-ALTERNATIVE_LINK_NAME[lbracket.1] = "${mandir}/man1/lbracket.1"
-
-ALTERNATIVE_LINK_NAME[groups.1] = "${mandir}/man1/groups.1"
-ALTERNATIVE_LINK_NAME[uptime.1] = "${mandir}/man1/uptime.1"
-ALTERNATIVE_LINK_NAME[kill.1] = "${mandir}/man1/kill.1"
-ALTERNATIVE_LINK_NAME[stat.1] = "${mandir}/man1/stat.1"
-
-python __anonymous() {
-	for prog in d.getVar('base_bindir_progs').split():
-		d.setVarFlag('ALTERNATIVE_LINK_NAME', prog, '%s/%s' % (d.getVar('base_bindir'), prog))
-
-	for prog in d.getVar('sbindir_progs').split():
-		d.setVarFlag('ALTERNATIVE_LINK_NAME', prog, '%s/%s' % (d.getVar('sbindir'), prog))
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils_8.27.bb b/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils_8.27.bb
new file mode 100644
index 0000000..ea8740a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/coreutils/coreutils_8.27.bb
@@ -0,0 +1,142 @@
+SUMMARY = "The basic file, shell and text manipulation utilities"
+DESCRIPTION = "The GNU Core Utilities provide the basic file, shell and text \
+manipulation utilities. These are the core utilities which are expected to exist on \
+every system."
+HOMEPAGE = "http://www.gnu.org/software/coreutils/"
+BUGTRACKER = "http://debbugs.gnu.org/coreutils"
+LICENSE = "GPLv3+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504\
+                    file://src/ls.c;beginline=5;endline=16;md5=38b79785ca88537b75871782a2a3c6b8"
+DEPENDS = "gmp libcap"
+DEPENDS_class-native = ""
+
+inherit autotools gettext texinfo
+
+SRC_URI = "${GNU_MIRROR}/coreutils/${BP}.tar.xz;name=tarball \
+           http://distfiles.gentoo.org/distfiles/${BP}-man.tar.xz;name=manpages \
+           file://man-decouple-manpages-from-build.patch \
+           file://remove-usr-local-lib-from-m4.patch \
+           file://fix-selinux-flask.patch \
+           file://0001-Unset-need_charset_alias-when-building-for-musl.patch \
+           file://0001-uname-report-processor-and-hardware-correctly.patch \
+           file://disable-ls-output-quoting.patch \
+           file://0001-local.mk-fix-cross-compiling-problem.patch \
+          "
+
+SRC_URI[tarball.md5sum] = "502795792c212932365e077946d353ae"
+SRC_URI[tarball.sha256sum] = "8891d349ee87b9ff7870f52b6d9312a9db672d2439d289bc57084771ca21656b"
+SRC_URI[manpages.md5sum] = "1b31a688d06764e0e94aa20b7ea08222"
+SRC_URI[manpages.sha256sum] = "1f615819e9167646c731636b6c5ecbe79837e82a18666bacc82c3fb1dfcfaea3"
+
+EXTRA_OECONF_class-native = "--without-gmp"
+EXTRA_OECONF_class-target = "--enable-install-program=arch --libexecdir=${libdir}"
+EXTRA_OECONF_class-nativesdk = "--enable-install-program=arch"
+
+# acl and xattr are not default features
+#
+PACKAGECONFIG_class-target ??= "\
+    ${@bb.utils.filter('DISTRO_FEATURES', 'acl xattr', d)} \
+"
+
+# The lib/oe/path.py requires xattr
+PACKAGECONFIG_class-native ??= "xattr"
+
+# with, without, depends, rdepends
+#
+PACKAGECONFIG[acl] = "--enable-acl,--disable-acl,acl,"
+PACKAGECONFIG[xattr] = "--enable-xattr,--disable-xattr,attr,"
+
+# [ df mktemp base64 gets a special treatment and is not included in this
+bindir_progs = "arch basename chcon cksum comm csplit cut dir dircolors dirname du \
+                env expand expr factor fmt fold groups head hostid id install \
+                join link logname md5sum mkfifo nice nl nohup nproc od paste pathchk \
+                pinky pr printenv printf ptx readlink realpath runcon seq sha1sum sha224sum sha256sum \
+                sha384sum sha512sum shred shuf sort split stdbuf sum tac tail tee test timeout\
+                tr truncate tsort tty unexpand uniq unlink uptime users vdir wc who whoami yes"
+
+# hostname gets a special treatment and is not included in this
+base_bindir_progs = "cat chgrp chmod chown cp date dd echo false kill ln ls mkdir \
+                     mknod mv pwd rm rmdir sleep stty sync touch true uname stat"
+
+sbindir_progs= "chroot"
+
+# Let aclocal use the relative path for the m4 file rather than the
+# absolute since coreutils has a lot of m4 files, otherwise there might
+# be an "Argument list too long" error when it is built in a long/deep
+# directory.
+acpaths = "-I ./m4"
+
+# Deal with a separate builddir failure if src doesn't exist when creating version.c/version.h
+do_compile_prepend () {
+	mkdir -p ${B}/src
+}
+
+do_install_class-native() {
+	autotools_do_install
+	# remove groups to fix conflict with shadow-native
+	rm -f ${D}${STAGING_BINDIR_NATIVE}/groups
+	# The return is a must since native doesn't need the
+	# do_install_append() in the below.
+	return
+}
+
+do_install_append() {
+	for i in df mktemp base64; do mv ${D}${bindir}/$i ${D}${bindir}/$i.${BPN}; done
+
+	install -d ${D}${base_bindir}
+	[ "${base_bindir}" != "${bindir}" ] && for i in ${base_bindir_progs}; do mv ${D}${bindir}/$i ${D}${base_bindir}/$i.${BPN}; done
+
+	install -d ${D}${sbindir}
+	[ "${sbindir}" != "${bindir}" ] && for i in ${sbindir_progs}; do mv ${D}${bindir}/$i ${D}${sbindir}/$i.${BPN}; done
+
+	# [ requires special handling because [.coreutils will cause the sed stuff
+	# in update-alternatives to fail, therefore use lbracket - the name used
+	# for the actual source file.
+	mv ${D}${bindir}/[ ${D}${bindir}/lbracket.${BPN}
+
+	# prebuilt man pages
+	install -d ${D}/${mandir}/man1
+	install -t ${D}/${mandir}/man1 ${S}/man/*.1
+	# prebuilt man pages don't do a separate man page for [ vs test.
+	# see comment above r.e. sed and update-alternatives
+	cp -R --no-dereference --preserve=mode,links -v ${D}${mandir}/man1/test.1 ${D}${mandir}/man1/lbracket.1.${BPN}
+}
+
+inherit update-alternatives
+
+ALTERNATIVE_PRIORITY = "100"
+ALTERNATIVE_${PN} = "lbracket ${bindir_progs} ${base_bindir_progs} ${sbindir_progs} base64 mktemp df"
+ALTERNATIVE_${PN}-doc = "base64.1 mktemp.1 df.1 lbracket.1 groups.1 kill.1 uptime.1 stat.1  hostname.1"
+
+ALTERNATIVE_LINK_NAME[hostname.1] = "${mandir}/man1/hostname.1"
+
+ALTERNATIVE_LINK_NAME[base64] = "${base_bindir}/base64"
+ALTERNATIVE_TARGET[base64] = "${bindir}/base64.${BPN}"
+ALTERNATIVE_LINK_NAME[base64.1] = "${mandir}/man1/base64.1"
+
+ALTERNATIVE_LINK_NAME[mktemp] = "${base_bindir}/mktemp"
+ALTERNATIVE_TARGET[mktemp] = "${bindir}/mktemp.${BPN}"
+ALTERNATIVE_LINK_NAME[mktemp.1] = "${mandir}/man1/mktemp.1"
+
+ALTERNATIVE_LINK_NAME[df] = "${base_bindir}/df"
+ALTERNATIVE_TARGET[df] = "${bindir}/df.${BPN}"
+ALTERNATIVE_LINK_NAME[df.1] = "${mandir}/man1/df.1"
+
+ALTERNATIVE_LINK_NAME[lbracket] = "${bindir}/["
+ALTERNATIVE_TARGET[lbracket] = "${bindir}/lbracket.${BPN}"
+ALTERNATIVE_LINK_NAME[lbracket.1] = "${mandir}/man1/lbracket.1"
+
+ALTERNATIVE_LINK_NAME[groups.1] = "${mandir}/man1/groups.1"
+ALTERNATIVE_LINK_NAME[uptime.1] = "${mandir}/man1/uptime.1"
+ALTERNATIVE_LINK_NAME[kill.1] = "${mandir}/man1/kill.1"
+ALTERNATIVE_LINK_NAME[stat.1] = "${mandir}/man1/stat.1"
+
+python __anonymous() {
+	for prog in d.getVar('base_bindir_progs').split():
+		d.setVarFlag('ALTERNATIVE_LINK_NAME', prog, '%s/%s' % (d.getVar('base_bindir'), prog))
+
+	for prog in d.getVar('sbindir_progs').split():
+		d.setVarFlag('ALTERNATIVE_LINK_NAME', prog, '%s/%s' % (d.getVar('sbindir'), prog))
+}
+
+BBCLASSEXTEND = "native nativesdk"
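The new coreutils recipe keeps acl and xattr as PACKAGECONFIG options seeded from DISTRO_FEATURES. A build that wants ACL support in coreutils without touching DISTRO_FEATURES could, as a sketch, append the option from local.conf:

    # local.conf sketch: enable the acl PACKAGECONFIG for coreutils only
    PACKAGECONFIG_append_pn-coreutils = " acl"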
diff --git a/import-layers/yocto-poky/meta/recipes-core/dbus-wait/dbus-wait_git.bb b/import-layers/yocto-poky/meta/recipes-core/dbus-wait/dbus-wait_git.bb
index 691dc86..4afb90c 100644
--- a/import-layers/yocto-poky/meta/recipes-core/dbus-wait/dbus-wait_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/dbus-wait/dbus-wait_git.bb
@@ -11,6 +11,7 @@
 PR = "r2"
 
 SRC_URI = "git://git.yoctoproject.org/${BPN}"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 S = "${WORKDIR}/git"
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/dbus/dbus-test_1.10.14.bb b/import-layers/yocto-poky/meta/recipes-core/dbus/dbus-test_1.10.14.bb
deleted file mode 100644
index 5394814..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/dbus/dbus-test_1.10.14.bb
+++ /dev/null
@@ -1,58 +0,0 @@
-SUMMARY = "D-Bus test package (for D-bus functionality testing only)"
-HOMEPAGE = "http://dbus.freedesktop.org"
-SECTION = "base"
-LICENSE = "AFL-2 | GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=10dded3b58148f3f1fd804b26354af3e \
-                    file://dbus/dbus.h;beginline=6;endline=20;md5=7755c9d7abccd5dbd25a6a974538bb3c"
-
-DEPENDS = "dbus glib-2.0"
-
-RDEPENDS_${PN} += "make"
-RDEPENDS_${PN}-dev = ""
-
-SRC_URI = "http://dbus.freedesktop.org/releases/dbus/dbus-${PV}.tar.gz \
-           file://tmpdir.patch \
-           file://run-ptest \
-           file://python-config.patch \
-           file://clear-guid_from_server-if-send_negotiate_unix_f.patch \
-           "
-
-SRC_URI[md5sum] = "3f7b013ce8f641cd4c897acda0ef3467"
-SRC_URI[sha256sum] = "23238f70353e38ce5ca183ebc9525c0d97ac00ef640ad29cf794782af6e6a083"
-
-S="${WORKDIR}/dbus-${PV}"
-FILESEXTRAPATHS =. "${FILE_DIRNAME}/dbus:"
-
-inherit autotools pkgconfig gettext ptest upstream-version-is-even
-
-EXTRA_OECONF_X = "${@bb.utils.contains('DISTRO_FEATURES', 'x11', '--with-x', '--without-x', d)}"
-EXTRA_OECONF_X_class-native = "--without-x"
-
-EXTRA_OECONF = "--enable-tests \
-                --enable-modular-tests \
-                --enable-installed-tests \
-                --enable-checks \
-                --enable-asserts \
-                --enable-verbose-mode \
-                --disable-xml-docs \
-                --disable-doxygen-docs \
-                --disable-libaudit \
-                --disable-systemd \
-                --without-systemdsystemunitdir \
-                --with-dbus-test-dir=${PTEST_PATH} \
-                ${EXTRA_OECONF_X}"
-
-do_install() {
-    :
-}
-
-do_install_ptest() {
-	install -d ${D}${PTEST_PATH}/test
-	l="shell printf refs syslog marshal syntax corrupt dbus-daemon dbus-daemon-eavesdrop loopback relay"
-	for i in $l; do install ${B}/test/.libs/test-$i ${D}${PTEST_PATH}/test; done
-	l="bus bus-system bus-launch-helper"
-	for i in $l; do install ${B}/bus/.libs/test-$i ${D}${PTEST_PATH}/test; done
-	install ${B}/dbus/.libs/test-dbus ${D}${PTEST_PATH}/test
-	cp -r ${B}/test/data ${D}${PTEST_PATH}/test
-}
-RDEPENDS_${PN}-ptest += "bash"
diff --git a/import-layers/yocto-poky/meta/recipes-core/dbus/dbus-test_1.10.20.bb b/import-layers/yocto-poky/meta/recipes-core/dbus/dbus-test_1.10.20.bb
new file mode 100644
index 0000000..eeadb7d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/dbus/dbus-test_1.10.20.bb
@@ -0,0 +1,58 @@
+SUMMARY = "D-Bus test package (for D-bus functionality testing only)"
+HOMEPAGE = "http://dbus.freedesktop.org"
+SECTION = "base"
+LICENSE = "AFL-2 | GPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=10dded3b58148f3f1fd804b26354af3e \
+                    file://dbus/dbus.h;beginline=6;endline=20;md5=7755c9d7abccd5dbd25a6a974538bb3c"
+
+DEPENDS = "dbus glib-2.0"
+
+RDEPENDS_${PN} += "make"
+RDEPENDS_${PN}-dev = ""
+
+SRC_URI = "http://dbus.freedesktop.org/releases/dbus/dbus-${PV}.tar.gz \
+           file://tmpdir.patch \
+           file://run-ptest \
+           file://python-config.patch \
+           file://clear-guid_from_server-if-send_negotiate_unix_f.patch \
+           "
+
+SRC_URI[md5sum] = "94c991e763d4f9f13690416b2dcd9411"
+SRC_URI[sha256sum] = "e574b9780b5425fde4d973bb596e7ea0f09e00fe2edd662da9016e976c460b48"
+
+S="${WORKDIR}/dbus-${PV}"
+FILESEXTRAPATHS =. "${FILE_DIRNAME}/dbus:"
+
+inherit autotools pkgconfig gettext ptest upstream-version-is-even
+
+EXTRA_OECONF_X = "${@bb.utils.contains('DISTRO_FEATURES', 'x11', '--with-x', '--without-x', d)}"
+EXTRA_OECONF_X_class-native = "--without-x"
+
+EXTRA_OECONF = "--enable-tests \
+                --enable-modular-tests \
+                --enable-installed-tests \
+                --enable-checks \
+                --enable-asserts \
+                --enable-verbose-mode \
+                --disable-xml-docs \
+                --disable-doxygen-docs \
+                --disable-libaudit \
+                --disable-systemd \
+                --without-systemdsystemunitdir \
+                --with-dbus-test-dir=${PTEST_PATH} \
+                ${EXTRA_OECONF_X}"
+
+do_install() {
+    :
+}
+
+do_install_ptest() {
+	install -d ${D}${PTEST_PATH}/test
+	l="shell printf refs syslog marshal syntax corrupt dbus-daemon dbus-daemon-eavesdrop loopback relay"
+	for i in $l; do install ${B}/test/.libs/test-$i ${D}${PTEST_PATH}/test; done
+	l="bus bus-system bus-launch-helper"
+	for i in $l; do install ${B}/bus/.libs/test-$i ${D}${PTEST_PATH}/test; done
+	install ${B}/dbus/.libs/test-dbus ${D}${PTEST_PATH}/test
+	cp -r ${B}/test/data ${D}${PTEST_PATH}/test
+}
+RDEPENDS_${PN}-ptest += "bash"
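dbus keeps its test suite in this separate dbus-test recipe and ships it as a ptest package. One plausible way to exercise it on a target image, assuming the ptest distro feature and ptest-runner are available (package names taken from the recipes above and oe-core defaults):

    # local.conf sketch
    DISTRO_FEATURES_append = " ptest"
    IMAGE_INSTALL_append = " dbus-ptest ptest-runner"
    # then on the running target, something like:
    #   ptest-runner dbus-test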
diff --git a/import-layers/yocto-poky/meta/recipes-core/dbus/dbus_1.10.14.bb b/import-layers/yocto-poky/meta/recipes-core/dbus/dbus_1.10.14.bb
deleted file mode 100644
index e1d73563..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/dbus/dbus_1.10.14.bb
+++ /dev/null
@@ -1,180 +0,0 @@
-SUMMARY = "D-Bus message bus"
-DESCRIPTION = "D-Bus is a message bus system, a simple way for applications to talk to one another. In addition to interprocess communication, D-Bus helps coordinate process lifecycle; it makes it simple and reliable to code a \"single instance\" application or daemon, and to launch applications and daemons on demand when their services are needed."
-HOMEPAGE = "http://dbus.freedesktop.org"
-SECTION = "base"
-LICENSE = "AFL-2 | GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=10dded3b58148f3f1fd804b26354af3e \
-                    file://dbus/dbus.h;beginline=6;endline=20;md5=7755c9d7abccd5dbd25a6a974538bb3c"
-DEPENDS = "expat virtual/libintl"
-RDEPENDS_dbus_class-native = ""
-RDEPENDS_dbus_class-nativesdk = ""
-PACKAGES += "${@bb.utils.contains('DISTRO_FEATURES', 'ptest', '${PN}-ptest', '', d)}"
-ALLOW_EMPTY_dbus-ptest = "1"
-RDEPENDS_dbus-ptest_class-target = "dbus-test-ptest"
-
-SRC_URI = "http://dbus.freedesktop.org/releases/dbus/dbus-${PV}.tar.gz \
-           file://tmpdir.patch \
-           file://dbus-1.init \
-           file://os-test.patch \
-           file://clear-guid_from_server-if-send_negotiate_unix_f.patch \
-           file://0001-configure.ac-explicitely-check-stdint.h.patch \
-"
-
-SRC_URI[md5sum] = "3f7b013ce8f641cd4c897acda0ef3467"
-SRC_URI[sha256sum] = "23238f70353e38ce5ca183ebc9525c0d97ac00ef640ad29cf794782af6e6a083"
-
-inherit useradd autotools pkgconfig gettext update-rc.d upstream-version-is-even
-
-INITSCRIPT_NAME = "dbus-1"
-INITSCRIPT_PARAMS = "start 02 5 3 2 . stop 20 0 1 6 ."
-
-python __anonymous() {
-    if not bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d):
-        d.setVar("INHIBIT_UPDATERCD_BBCLASS", "1")
-}
-
-USERADD_PACKAGES = "${PN}"
-GROUPADD_PARAM_${PN} = "-r netdev"
-USERADD_PARAM_${PN} = "--system --home ${localstatedir}/lib/dbus \
-                       --no-create-home --shell /bin/false \
-                       --user-group messagebus"
-
-CONFFILES_${PN} = "${sysconfdir}/dbus-1/system.conf ${sysconfdir}/dbus-1/session.conf"
-
-DEBIANNAME_${PN} = "dbus-1"
-
-PACKAGES =+ "${PN}-lib"
-
-OLDPKGNAME = "dbus-x11"
-OLDPKGNAME_class-nativesdk = ""
-
-# for compatibility
-RPROVIDES_${PN} = "${OLDPKGNAME}"
-RREPLACES_${PN} += "${OLDPKGNAME}"
-
-FILES_${PN} = "${bindir}/dbus-daemon* \
-               ${bindir}/dbus-uuidgen \
-               ${bindir}/dbus-cleanup-sockets \
-               ${bindir}/dbus-send \
-               ${bindir}/dbus-monitor \
-               ${bindir}/dbus-launch \
-               ${bindir}/dbus-run-session \
-               ${bindir}/dbus-update-activation-environment \
-               ${libexecdir}/dbus* \
-               ${sysconfdir} \
-               ${localstatedir} \
-               ${datadir}/dbus-1/services \
-               ${datadir}/dbus-1/system-services \
-               ${datadir}/dbus-1/session.d \
-               ${datadir}/dbus-1/session.conf \
-               ${datadir}/dbus-1/system.d \
-               ${datadir}/dbus-1/system.conf \
-               ${systemd_system_unitdir} \
-               ${systemd_user_unitdir} \
-"
-FILES_${PN}-lib = "${libdir}/lib*.so.*"
-RRECOMMENDS_${PN}-lib = "${PN}"
-FILES_${PN}-dev += "${libdir}/dbus-1.0/include ${bindir}/dbus-test-tool"
-
-PACKAGE_WRITE_DEPS += "${@bb.utils.contains('DISTRO_FEATURES','systemd sysvinit','systemd-systemctl-native','',d)}"
-pkg_postinst_dbus() {
-	# If both systemd and sysvinit are enabled, mask the dbus-1 init script
-        if ${@bb.utils.contains('DISTRO_FEATURES','systemd sysvinit','true','false',d)}; then
-		if [ -n "$D" ]; then
-			OPTS="--root=$D"
-		fi
-		systemctl $OPTS mask dbus-1.service
-	fi
-
-	if [ -z "$D" ] && [ -e /etc/init.d/populate-volatile.sh ] ; then
-		/etc/init.d/populate-volatile.sh update
-	fi
-}
-
-EXTRA_OECONF = "--disable-tests \
-                --disable-xml-docs \
-                --disable-doxygen-docs \
-                --disable-libaudit \
-                --enable-largefile \
-                "
-
-EXTRA_OECONF_append_class-target = " SYSTEMCTL=${base_bindir}/systemctl"
-EXTRA_OECONF_append_class-native = " --disable-selinux"
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'systemd x11', d)}"
-PACKAGECONFIG_class-native = ""
-PACKAGECONFIG_class-nativesdk = ""
-
-PACKAGECONFIG[systemd] = "--enable-systemd --with-systemdsystemunitdir=${systemd_system_unitdir},--disable-systemd --without-systemdsystemunitdir,systemd"
-PACKAGECONFIG[x11] = "--with-x --enable-x11-autolaunch,--without-x --disable-x11-autolaunch, virtual/libx11 libsm"
-PACKAGECONFIG[user-session] = "--enable-user-session --with-systemduserunitdir=${systemd_user_unitdir},--disable-user-session"
-
-do_install() {
-	autotools_do_install
-
-	if ${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', 'true', 'false', d)}; then
-		install -d ${D}${sysconfdir}/init.d
-		sed 's:@bindir@:${bindir}:' < ${WORKDIR}/dbus-1.init >${WORKDIR}/dbus-1.init.sh
-		install -m 0755 ${WORKDIR}/dbus-1.init.sh ${D}${sysconfdir}/init.d/dbus-1
-	fi
-
-	if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
-		for i in dbus.target.wants sockets.target.wants multi-user.target.wants; do \
-			install -d ${D}${systemd_system_unitdir}/$i; done
-		install -m 0644 ${B}/bus/dbus.service ${B}/bus/dbus.socket ${D}${systemd_system_unitdir}/
-		ln -fs ../dbus.socket ${D}${systemd_system_unitdir}/dbus.target.wants/dbus.socket
-		ln -fs ../dbus.socket ${D}${systemd_system_unitdir}/sockets.target.wants/dbus.socket
-		ln -fs ../dbus.service ${D}${systemd_system_unitdir}/multi-user.target.wants/dbus.service
-	fi
-
-	install -d ${D}${sysconfdir}/default/volatiles
-	echo "d messagebus messagebus 0755 ${localstatedir}/run/dbus none" \
-	     > ${D}${sysconfdir}/default/volatiles/99_dbus
-
-
-	mkdir -p ${D}${localstatedir}/lib/dbus
-
-	chown messagebus:messagebus ${D}${localstatedir}/lib/dbus
-
-	chown root:messagebus ${D}${libexecdir}/dbus-daemon-launch-helper
-	chmod 4755 ${D}${libexecdir}/dbus-daemon-launch-helper
-
-	# Remove Red Hat initscript
-	rm -rf ${D}${sysconfdir}/rc.d
-
-	# Remove empty testexec directory as we don't build tests
-	rm -rf ${D}${libdir}/dbus-1.0/test
-
-	# Remove /var/run as it is created on startup
-	rm -rf ${D}${localstatedir}/run
-}
-
-do_install_class-native() {
-	autotools_do_install
-
-	# for dbus-glib-native introspection generation
-	install -d ${D}${STAGING_DATADIR_NATIVE}/dbus/
-	# N.B. is below install actually required?
-	install -m 0644 bus/session.conf ${D}${STAGING_DATADIR_NATIVE}/dbus/session.conf
-
-	# dbus-glib-native and dbus-glib need this xml file
-	./bus/dbus-daemon --introspect > ${D}${STAGING_DATADIR_NATIVE}/dbus/dbus-bus-introspect.xml
-	
-	# dbus-launch has no X support so lets not install it in case the host
-	# has a more featured and useful version
-	rm -f ${D}${bindir}/dbus-launch
-}
-
-do_install_class-nativesdk() {
-	autotools_do_install
-
-	# dbus-launch has no X support so lets not install it in case the host
-	# has a more featured and useful version
-	rm -f ${D}${bindir}/dbus-launch
-
-	# Remove /var/run to avoid QA error
-	rm -rf ${D}${localstatedir}/run
-}
-BBCLASSEXTEND = "native nativesdk"
-
-INSANE_SKIP_${PN}-ptest += "build-deps"
diff --git a/import-layers/yocto-poky/meta/recipes-core/dbus/dbus_1.10.20.bb b/import-layers/yocto-poky/meta/recipes-core/dbus/dbus_1.10.20.bb
new file mode 100644
index 0000000..9ddedc1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/dbus/dbus_1.10.20.bb
@@ -0,0 +1,180 @@
+SUMMARY = "D-Bus message bus"
+DESCRIPTION = "D-Bus is a message bus system, a simple way for applications to talk to one another. In addition to interprocess communication, D-Bus helps coordinate process lifecycle; it makes it simple and reliable to code a \"single instance\" application or daemon, and to launch applications and daemons on demand when their services are needed."
+HOMEPAGE = "http://dbus.freedesktop.org"
+SECTION = "base"
+LICENSE = "AFL-2 | GPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=10dded3b58148f3f1fd804b26354af3e \
+                    file://dbus/dbus.h;beginline=6;endline=20;md5=7755c9d7abccd5dbd25a6a974538bb3c"
+DEPENDS = "expat virtual/libintl"
+RDEPENDS_dbus_class-native = ""
+RDEPENDS_dbus_class-nativesdk = ""
+PACKAGES += "${@bb.utils.contains('DISTRO_FEATURES', 'ptest', '${PN}-ptest', '', d)}"
+ALLOW_EMPTY_dbus-ptest = "1"
+RDEPENDS_dbus-ptest_class-target = "dbus-test-ptest"
+
+SRC_URI = "http://dbus.freedesktop.org/releases/dbus/dbus-${PV}.tar.gz \
+           file://tmpdir.patch \
+           file://dbus-1.init \
+           file://os-test.patch \
+           file://clear-guid_from_server-if-send_negotiate_unix_f.patch \
+           file://0001-configure.ac-explicitely-check-stdint.h.patch \
+"
+
+SRC_URI[md5sum] = "94c991e763d4f9f13690416b2dcd9411"
+SRC_URI[sha256sum] = "e574b9780b5425fde4d973bb596e7ea0f09e00fe2edd662da9016e976c460b48"
+
+inherit useradd autotools pkgconfig gettext update-rc.d upstream-version-is-even
+
+INITSCRIPT_NAME = "dbus-1"
+INITSCRIPT_PARAMS = "start 02 5 3 2 . stop 20 0 1 6 ."
+
+python __anonymous() {
+    if not bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d):
+        d.setVar("INHIBIT_UPDATERCD_BBCLASS", "1")
+}
+
+USERADD_PACKAGES = "${PN}"
+GROUPADD_PARAM_${PN} = "-r netdev"
+USERADD_PARAM_${PN} = "--system --home ${localstatedir}/lib/dbus \
+                       --no-create-home --shell /bin/false \
+                       --user-group messagebus"
+
+CONFFILES_${PN} = "${sysconfdir}/dbus-1/system.conf ${sysconfdir}/dbus-1/session.conf"
+
+DEBIANNAME_${PN} = "dbus-1"
+
+PACKAGES =+ "${PN}-lib"
+
+OLDPKGNAME = "dbus-x11"
+OLDPKGNAME_class-nativesdk = ""
+
+# for compatibility
+RPROVIDES_${PN} = "${OLDPKGNAME}"
+RREPLACES_${PN} += "${OLDPKGNAME}"
+
+FILES_${PN} = "${bindir}/dbus-daemon* \
+               ${bindir}/dbus-uuidgen \
+               ${bindir}/dbus-cleanup-sockets \
+               ${bindir}/dbus-send \
+               ${bindir}/dbus-monitor \
+               ${bindir}/dbus-launch \
+               ${bindir}/dbus-run-session \
+               ${bindir}/dbus-update-activation-environment \
+               ${libexecdir}/dbus* \
+               ${sysconfdir} \
+               ${localstatedir} \
+               ${datadir}/dbus-1/services \
+               ${datadir}/dbus-1/system-services \
+               ${datadir}/dbus-1/session.d \
+               ${datadir}/dbus-1/session.conf \
+               ${datadir}/dbus-1/system.d \
+               ${datadir}/dbus-1/system.conf \
+               ${systemd_system_unitdir} \
+               ${systemd_user_unitdir} \
+"
+FILES_${PN}-lib = "${libdir}/lib*.so.*"
+RRECOMMENDS_${PN}-lib = "${PN}"
+FILES_${PN}-dev += "${libdir}/dbus-1.0/include ${bindir}/dbus-test-tool"
+
+PACKAGE_WRITE_DEPS += "${@bb.utils.contains('DISTRO_FEATURES','systemd sysvinit','systemd-systemctl-native','',d)}"
+pkg_postinst_dbus() {
+	# If both systemd and sysvinit are enabled, mask the dbus-1 init script
+        if ${@bb.utils.contains('DISTRO_FEATURES','systemd sysvinit','true','false',d)}; then
+		if [ -n "$D" ]; then
+			OPTS="--root=$D"
+		fi
+		systemctl $OPTS mask dbus-1.service
+	fi
+
+	if [ -z "$D" ] && [ -e /etc/init.d/populate-volatile.sh ] ; then
+		/etc/init.d/populate-volatile.sh update
+	fi
+}
+
+EXTRA_OECONF = "--disable-tests \
+                --disable-xml-docs \
+                --disable-doxygen-docs \
+                --disable-libaudit \
+                --enable-largefile \
+                "
+
+EXTRA_OECONF_append_class-target = " SYSTEMCTL=${base_bindir}/systemctl"
+EXTRA_OECONF_append_class-native = " --disable-selinux"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'systemd x11', d)}"
+PACKAGECONFIG_class-native = ""
+PACKAGECONFIG_class-nativesdk = ""
+
+PACKAGECONFIG[systemd] = "--enable-systemd --with-systemdsystemunitdir=${systemd_system_unitdir},--disable-systemd --without-systemdsystemunitdir,systemd"
+PACKAGECONFIG[x11] = "--with-x --enable-x11-autolaunch,--without-x --disable-x11-autolaunch, virtual/libx11 libsm"
+PACKAGECONFIG[user-session] = "--enable-user-session --with-systemduserunitdir=${systemd_user_unitdir},--disable-user-session"
+
+do_install() {
+	autotools_do_install
+
+	if ${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', 'true', 'false', d)}; then
+		install -d ${D}${sysconfdir}/init.d
+		sed 's:@bindir@:${bindir}:' < ${WORKDIR}/dbus-1.init >${WORKDIR}/dbus-1.init.sh
+		install -m 0755 ${WORKDIR}/dbus-1.init.sh ${D}${sysconfdir}/init.d/dbus-1
+	fi
+
+	if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
+		for i in dbus.target.wants sockets.target.wants multi-user.target.wants; do \
+			install -d ${D}${systemd_system_unitdir}/$i; done
+		install -m 0644 ${B}/bus/dbus.service ${B}/bus/dbus.socket ${D}${systemd_system_unitdir}/
+		ln -fs ../dbus.socket ${D}${systemd_system_unitdir}/dbus.target.wants/dbus.socket
+		ln -fs ../dbus.socket ${D}${systemd_system_unitdir}/sockets.target.wants/dbus.socket
+		ln -fs ../dbus.service ${D}${systemd_system_unitdir}/multi-user.target.wants/dbus.service
+	fi
+
+	install -d ${D}${sysconfdir}/default/volatiles
+	echo "d messagebus messagebus 0755 ${localstatedir}/run/dbus none" \
+	     > ${D}${sysconfdir}/default/volatiles/99_dbus
+
+
+	mkdir -p ${D}${localstatedir}/lib/dbus
+
+	chown messagebus:messagebus ${D}${localstatedir}/lib/dbus
+
+	chown root:messagebus ${D}${libexecdir}/dbus-daemon-launch-helper
+	chmod 4755 ${D}${libexecdir}/dbus-daemon-launch-helper
+
+	# Remove Red Hat initscript
+	rm -rf ${D}${sysconfdir}/rc.d
+
+	# Remove empty testexec directory as we don't build tests
+	rm -rf ${D}${libdir}/dbus-1.0/test
+
+	# Remove /var/run as it is created on startup
+	rm -rf ${D}${localstatedir}/run
+}
+
+do_install_class-native() {
+	autotools_do_install
+
+	# for dbus-glib-native introspection generation
+	install -d ${D}${STAGING_DATADIR_NATIVE}/dbus/
+	# N.B. is below install actually required?
+	install -m 0644 bus/session.conf ${D}${STAGING_DATADIR_NATIVE}/dbus/session.conf
+
+	# dbus-glib-native and dbus-glib need this xml file
+	./bus/dbus-daemon --introspect > ${D}${STAGING_DATADIR_NATIVE}/dbus/dbus-bus-introspect.xml
+	
+	# dbus-launch has no X support so lets not install it in case the host
+	# has a more featured and useful version
+	rm -f ${D}${bindir}/dbus-launch
+}
+
+do_install_class-nativesdk() {
+	autotools_do_install
+
+	# dbus-launch has no X support so lets not install it in case the host
+	# has a more featured and useful version
+	rm -f ${D}${bindir}/dbus-launch
+
+	# Remove /var/run to avoid QA error
+	rm -rf ${D}${localstatedir}/run
+}
+BBCLASSEXTEND = "native nativesdk"
+
+INSANE_SKIP_${PN}-ptest += "build-deps"
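The upgraded dbus recipe still leaves systemd user sessions off by default; the user-session PACKAGECONFIG above wires in --enable-user-session and the systemd user unit directory. A distro wanting per-user session buses could enable it, for example:

    # distro .conf or local.conf sketch
    PACKAGECONFIG_append_pn-dbus = " user-session"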
diff --git a/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear/0003-configure.patch b/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear/0003-configure.patch
index c53ab01..8469a50 100644
--- a/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear/0003-configure.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear/0003-configure.patch
@@ -1,19 +1,20 @@
-From c5f5c5054c1b15539dccf866e2c3faba7ed68456 Mon Sep 17 00:00:00 2001
+From 58dd24a80ca0f400d0761afd9ce2b7f684fc9125 Mon Sep 17 00:00:00 2001
 From: =?UTF-8?q?Eric=20B=C3=A9nard?= <eric@eukrea.com>
 Date: Thu, 25 Apr 2013 00:27:25 +0200
-Subject: [PATCH 3/6] configure: add a variable to allow openpty check to be cached
+Subject: [PATCH] configure: add a variable to allow openpty check to be cached
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [ https://github.com/mkj/dropbear/pull/48 ]
 
+Signed-off-by: Dengke Du <dengke.du@windriver.com>
 ---
  configure.ac | 11 ++++++++---
  1 file changed, 8 insertions(+), 3 deletions(-)
 
 diff --git a/configure.ac b/configure.ac
-index 05461f3..9c16d90 100644
+index 893b904..245408d 100644
 --- a/configure.ac
 +++ b/configure.ac
-@@ -166,15 +166,20 @@ AC_ARG_ENABLE(openpty,
+@@ -177,15 +177,20 @@ AC_ARG_ENABLE(openpty,
  			AC_MSG_NOTICE(Not using openpty)
  		else
  			AC_MSG_NOTICE(Using openpty if available)
@@ -38,5 +39,5 @@
  AC_ARG_ENABLE(syslog,
  	[  --disable-syslog        Don't include syslog support],
 -- 
-1.7.11.7
+2.8.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear/fix-libtomcrypt-libtommath-ordering.patch b/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear/fix-libtomcrypt-libtommath-ordering.patch
index de930f2..2b05e18 100644
--- a/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear/fix-libtomcrypt-libtommath-ordering.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear/fix-libtomcrypt-libtommath-ordering.patch
@@ -1,4 +1,4 @@
-From 2fd8d2aedad0c50cdf1e43edd2387874b720ad4c Mon Sep 17 00:00:00 2001
+From f37fa9a41f248fa41dd74a41c66cb41a291c03d2 Mon Sep 17 00:00:00 2001
 From: Andre McCurdy <armccurdy@gmail.com>
 Date: Fri, 16 Sep 2016 12:18:23 -0700
 Subject: [PATCH] fix libtomcrypt/libtommath ordering
@@ -11,18 +11,19 @@
 Note that LIBTOM_LIBS is not used when linking with the bundled
 libtom libs.
 
-Upstream-Status: Pending
+Upstream-Status: Backport [ https://github.com/mkj/dropbear/commit/f9e6bc2aecab0f4b5b529e07a92cc63c8a66cd4b ]
 
 Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
+Signed-off-by: Dengke Du <dengke.du@windriver.com>
 ---
  configure.ac | 8 ++++----
  1 file changed, 4 insertions(+), 4 deletions(-)
 
 diff --git a/configure.ac b/configure.ac
-index b6abe4c..85bb8bc 100644
+index 245408d..d624853 100644
 --- a/configure.ac
 +++ b/configure.ac
-@@ -390,16 +390,16 @@ AC_ARG_ENABLE(bundled-libtom,
+@@ -393,16 +393,16 @@ AC_ARG_ENABLE(bundled-libtom,
  			AC_MSG_NOTICE(Forcing bundled libtom*)
  		else
  			BUNDLED_LIBTOM=0
@@ -44,5 +45,5 @@
  )
  
 -- 
-1.9.1
+2.8.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear/support-out-of-tree-builds.patch b/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear/support-out-of-tree-builds.patch
deleted file mode 100644
index df6efb4..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear/support-out-of-tree-builds.patch
+++ /dev/null
@@ -1,43 +0,0 @@
-From: =?UTF-8?q?Henrik=20Nordstr=C3=B6m?= <henrik@knc.nu>
-Date: Wed, 11 May 2016 12:35:06 +0200
-Subject: [PATCH] Support out-of-tree builds usign bundled libtom
-
-When building out-of-tree we need both source and generated
-folders in include paths to find both distributed and generated
-headers.
-
-
-
-Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
-Upstream-Status: Backport
----
- libtomcrypt/Makefile.in | 2 +-
- libtommath/Makefile.in  | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/libtomcrypt/Makefile.in b/libtomcrypt/Makefile.in
-index 3056ef0..7970700 100644
---- a/libtomcrypt/Makefile.in
-+++ b/libtomcrypt/Makefile.in
-@@ -19,7 +19,7 @@ srcdir=@srcdir@
- 
- # Compilation flags. Note the += does not write over the user's CFLAGS!
- # The rest of the flags come from the parent Dropbear makefile
--CFLAGS += -c -I$(srcdir)/src/headers/ -I$(srcdir)/../ -DLTC_SOURCE -I$(srcdir)/../libtommath/
-+CFLAGS += -c -Isrc/headers/ -I$(srcdir)/src/headers/ -I../ -I$(srcdir)/../ -DLTC_SOURCE -I../libtommath/ -I$(srcdir)/../libtommath/
- 
- # additional warnings (newer GCC 3.4 and higher)
- ifdef GCC_34
-diff --git a/libtommath/Makefile.in b/libtommath/Makefile.in
-index 06aba68..019c50b 100644
---- a/libtommath/Makefile.in
-+++ b/libtommath/Makefile.in
-@@ -9,7 +9,7 @@ VPATH=@srcdir@
- srcdir=@srcdir@
- 
- # So that libtommath can include Dropbear headers for options and m_burn()
--CFLAGS += -I$(srcdir)/../libtomcrypt/src/headers/ -I$(srcdir)/../
-+CFLAGS += -I. -I$(srcdir) -I../libtomcrypt/src/headers/ -I$(srcdir)/../libtomcrypt/src/headers/ -I../ -I$(srcdir)/../
- 
- ifndef IGNORE_SPEED
- 
diff --git a/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear_2016.74.bb b/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear_2016.74.bb
deleted file mode 100644
index a702097..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear_2016.74.bb
+++ /dev/null
@@ -1,7 +0,0 @@
-require dropbear.inc
-
-SRC_URI += "file://support-out-of-tree-builds.patch"
-
-SRC_URI[md5sum] = "9ad0172731e0f16623937804643b5bd8"
-SRC_URI[sha256sum] = "2720ea54ed009af812701bcc290a2a601d5c107d12993e5d92c0f5f81f718891"
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear_2017.75.bb b/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear_2017.75.bb
new file mode 100644
index 0000000..cfb0d19
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/dropbear/dropbear_2017.75.bb
@@ -0,0 +1,5 @@
+require dropbear.inc
+
+SRC_URI[md5sum] = "e57e9b9d25705dcb073ba15c416424fd"
+SRC_URI[sha256sum] = "6cbc1dcb1c9709d226dff669e5604172a18cf5dbf9a201474d5618ae4465098c"
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/expat/expat.inc b/import-layers/yocto-poky/meta/recipes-core/expat/expat.inc
index 9fa0ca2..0ee6c27 100644
--- a/import-layers/yocto-poky/meta/recipes-core/expat/expat.inc
+++ b/import-layers/yocto-poky/meta/recipes-core/expat/expat.inc
@@ -6,7 +6,11 @@
 
 SRC_URI = "${SOURCEFORGE_MIRROR}/expat/expat-${PV}.tar.bz2 \
            file://autotools.patch \
+           file://libtool-tag.patch \
 	  "
+
+SRC_URI_append_class-native = " file://no_getrandom.patch"
+
 inherit autotools lib_package
 
 # This package uses an archive format known to have issue with some
diff --git a/import-layers/yocto-poky/meta/recipes-core/expat/expat/libtool-tag.patch b/import-layers/yocto-poky/meta/recipes-core/expat/expat/libtool-tag.patch
new file mode 100644
index 0000000..3ef4197
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/expat/expat/libtool-tag.patch
@@ -0,0 +1,18 @@
+Add CC tag to build
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Index: expat-2.2.2/Makefile.in
+===================================================================
+--- expat-2.2.2.orig/Makefile.in
++++ expat-2.2.2/Makefile.in
+@@ -109,7 +109,7 @@ mkdir-init:
+ 
+ CC = @CC@
+ CXX = @CXX@
+-LIBTOOL = @LIBTOOL@
++LIBTOOL = @LIBTOOL@ --tag CC
+ 
+ INCLUDES = -I$(srcdir)/lib -I.
+ LDFLAGS = @LDFLAGS@
diff --git a/import-layers/yocto-poky/meta/recipes-core/expat/expat/no_getrandom.patch b/import-layers/yocto-poky/meta/recipes-core/expat/expat/no_getrandom.patch
new file mode 100644
index 0000000..d64f1bf
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/expat/expat/no_getrandom.patch
@@ -0,0 +1,23 @@
+The native version of expat may be used on older systems which don't have glibc 2.25
+and hence don't have getrandom() thanks to uninative. Disable the libc call and
+use the syscall instead to avoid a compatibility issue until we have 2.25 everywhere
+we support with uninative.
+
+RP
+2017/8/14
+
+Upstream-Status: Inappropriate
+
+Index: expat-2.2.3/configure.ac
+===================================================================
+--- expat-2.2.3.orig/configure.ac
++++ expat-2.2.3/configure.ac
+@@ -151,7 +151,7 @@ AC_LINK_IFELSE([AC_LANG_SOURCE([
+   #include <stdlib.h>  /* for NULL */
+   #include <sys/random.h>
+   int main() {
+-    return getrandom(NULL, 0U, 0U);
++    return getrandomBREAKME(NULL, 0U, 0U);
+   }
+ ])], [
+     AC_DEFINE([HAVE_GETRANDOM], [1],
diff --git a/import-layers/yocto-poky/meta/recipes-core/expat/expat_2.2.0.bb b/import-layers/yocto-poky/meta/recipes-core/expat/expat_2.2.0.bb
deleted file mode 100644
index ef21a11..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/expat/expat_2.2.0.bb
+++ /dev/null
@@ -1,5 +0,0 @@
-require expat.inc
-LIC_FILES_CHKSUM = "file://COPYING;md5=9c3ee559c6f9dcee1043ead112139f4f"
-
-SRC_URI[md5sum] = "2f47841c829facb346eb6e3fab5212e2"
-SRC_URI[sha256sum] = "d9e50ff2d19b3538bd2127902a89987474e1a4db8e43a66a4d1a712ab9a504ff"
diff --git a/import-layers/yocto-poky/meta/recipes-core/expat/expat_2.2.3.bb b/import-layers/yocto-poky/meta/recipes-core/expat/expat_2.2.3.bb
new file mode 100644
index 0000000..abf8450
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/expat/expat_2.2.3.bb
@@ -0,0 +1,4 @@
+require expat.inc
+LIC_FILES_CHKSUM = "file://COPYING;md5=5b8620d98e49772d95fc1d291c26aa79"
+SRC_URI[md5sum] = "f053af63ef5f39bd9b78d01fbc203334"
+SRC_URI[sha256sum] = "b31890fb02f85c002a67491923f89bda5028a880fd6c374f707193ad81aace5f"
diff --git a/import-layers/yocto-poky/meta/recipes-core/fts/fts.bb b/import-layers/yocto-poky/meta/recipes-core/fts/fts.bb
index de9297e..02f5408 100644
--- a/import-layers/yocto-poky/meta/recipes-core/fts/fts.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/fts/fts.bb
@@ -3,38 +3,18 @@
 
 SUMMARY = "POSIX file tree stream operations library"
 HOMEPAGE = "https://sites.google.com/a/bostic.com/keithbostic"
-LICENSE = "BSD-4-Clause"
-LIC_FILES_CHKSUM = "file://fts.h;beginline=1;endline=36;md5=2532eddb3d1a21905723a4011ec4e085"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://COPYING;md5=5ffe358174aad383f1b69ce3b53da982"
 SECTION = "libs"
 
-SRC_URI = "https://sites.google.com/a/bostic.com/keithbostic/files/fts.tar.gz \
-           file://fts-header-correctness.patch \
-           file://fts-uclibc.patch \
-           file://remove_cdefs.patch \
-           file://stdint.patch \
-           file://gcc5.patch \
+SRCREV = "944333aed9dc24cfa76cc64bfe70c75d25652753"
+PV = "1.2+git${SRCPV}"
+
+SRC_URI = "git://github.com/voidlinux/musl-fts \
 "
+S = "${WORKDIR}/git"
 
-SRC_URI[md5sum] = "120c14715485ec6ced14f494d059d20a"
-SRC_URI[sha256sum] = "3df9b9b5a45aeaf16f33bb84e692a10dc662e22ec8a51748f98767d67fb6f342"
-
-S = "${WORKDIR}/${BPN}"
-
-do_configure[noexec] = "1"
-
-HASHSTYLE_mipsarch = "sysv"
-HASHSTYLE = "gnu"
-
-VER = "0"
-do_compile () {
-    ${CC} -I${S} -fPIC -shared -Wl,--hash-style=${HASHSTYLE} -o libfts.so.${VER} -Wl,-soname,libfts.so.${VER} ${S}/fts.c
-}
-
-do_install() {
-    install -Dm755 ${B}/libfts.so.${VER} ${D}${libdir}/libfts.so.${VER}
-    ln -sf libfts.so.${VER} ${D}${libdir}/libfts.so
-    install -Dm644 ${S}/fts.h ${D}${includedir}/fts.h
-}
+inherit autotools pkgconfig
 #
 # We will skip parsing for non-musl systems
 #
diff --git a/import-layers/yocto-poky/meta/recipes-core/fts/fts/fts-header-correctness.patch b/import-layers/yocto-poky/meta/recipes-core/fts/fts/fts-header-correctness.patch
deleted file mode 100644
index c73ddc9..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/fts/fts/fts-header-correctness.patch
+++ /dev/null
@@ -1,25 +0,0 @@
-Included needed headers for compiling with musl
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Inappropriate
-
---- fts.orig/fts.h
-+++ fts/fts.h
-@@ -38,6 +38,17 @@
- #ifndef	_FTS_H_
- #define	_FTS_H_
- 
-+#include <sys/types.h>
-+#include <sys/param.h>
-+#include <sys/stat.h>
-+
-+#include <dirent.h>
-+#include <errno.h>
-+#include <fcntl.h>
-+#include <stdlib.h>
-+#include <string.h>
-+#include <unistd.h>
-+
- typedef struct {
- 	struct _ftsent *fts_cur;	/* current node */
- 	struct _ftsent *fts_child;	/* linked list of children */
diff --git a/import-layers/yocto-poky/meta/recipes-core/fts/fts/fts-uclibc.patch b/import-layers/yocto-poky/meta/recipes-core/fts/fts/fts-uclibc.patch
deleted file mode 100644
index 397654b..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/fts/fts/fts-uclibc.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-Add missing defines for uclibc
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Inappropriate
-
---- fts.orig/fts.c
-+++ fts/fts.c
-@@ -31,6 +31,10 @@
-  * SUCH DAMAGE.
-  */
- 
-+#define	alignof(TYPE)   ((int) &((struct { char dummy1; TYPE dummy2; } *) 0)->dummy2)
-+#define	ALIGNBYTES	(alignof(long double) - 1)
-+#define	ALIGN(p)	(((uintptr_t)(p) + ALIGNBYTES) & ~ALIGNBYTES) 
-+
- #if defined(LIBC_SCCS) && !defined(lint)
- static char sccsid[] = "@(#)fts.c	8.6 (Berkeley) 8/14/94";
- #endif /* LIBC_SCCS and not lint */
-@@ -652,10 +656,10 @@
- 		if (!ISSET(FTS_SEEDOT) && ISDOT(dp->d_name))
- 			continue;
- 
--		if ((p = fts_alloc(sp, dp->d_name, (int)dp->d_namlen)) == NULL)
-+		if ((p = fts_alloc(sp, dp->d_name, (int)dp->d_reclen)) == NULL)
- 			goto mem1;
--		if (dp->d_namlen > maxlen) {
--			if (fts_palloc(sp, (size_t)dp->d_namlen)) {
-+		if (dp->d_reclen > maxlen) {
-+			if (fts_palloc(sp, (size_t)dp->d_reclen)) {
- 				/*
- 				 * No more memory for path or structures.  Save
- 				 * errno, free up the current structure and the
-@@ -675,7 +679,7 @@
- 			maxlen = sp->fts_pathlen - sp->fts_cur->fts_pathlen - 1;
- 		}
- 
--		p->fts_pathlen = len + dp->d_namlen + 1;
-+		p->fts_pathlen = len + dp->d_reclen + 1;
- 		p->fts_parent = sp->fts_cur;
- 		p->fts_level = level;
- 
-@@ -784,7 +788,7 @@
- 	/* If user needs stat info, stat buffer already allocated. */
- 	sbp = ISSET(FTS_NOSTAT) ? &sb : p->fts_statp;
- 
--#ifdef DT_WHT
-+#ifdef S_IFWHT
- 	/*
- 	 * Whited-out files don't really exist.  However, there's stat(2) file
- 	 * mask for them, so we set it so that programs (i.e., find) don't have
diff --git a/import-layers/yocto-poky/meta/recipes-core/fts/fts/gcc5.patch b/import-layers/yocto-poky/meta/recipes-core/fts/fts/gcc5.patch
deleted file mode 100644
index f5b948e..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/fts/fts/gcc5.patch
+++ /dev/null
@@ -1,1368 +0,0 @@
-Forward port the sources to be able to compile with c99/gcc5
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Inappropriate
-
-Index: fts/fts.c
-===================================================================
---- fts.orig/fts.c
-+++ fts/fts.c
-@@ -51,16 +51,6 @@ static char sccsid[] = "@(#)fts.c	8.6 (B
- #include <string.h>
- #include <unistd.h>
- 
--static FTSENT	*fts_alloc __P(FTS *, char *, int);
--static FTSENT	*fts_build __P(FTS *, int);
--static void	 fts_lfree __P(FTSENT *);
--static void	 fts_load __P(FTS *, FTSENT *);
--static size_t	 fts_maxarglen __P(char * const *);
--static void	 fts_padjust __P(FTS *, void *);
--static int	 fts_palloc __P(FTS *, size_t);
--static FTSENT	*fts_sort __P(FTS *, FTSENT *, int);
--static u_short	 fts_stat __P(FTS *, struct dirent *, FTSENT *, int);
--
- #define	ISDOT(a)	(a[0] == '.' && (!a[1] || a[1] == '.' && !a[2]))
- 
- #define	ISSET(opt)	(sp->fts_options & opt)
-@@ -73,119 +63,16 @@ static u_short	 fts_stat __P(FTS *, stru
- #define	BCHILD		1		/* fts_children */
- #define	BNAMES		2		/* fts_children, names only */
- #define	BREAD		3		/* fts_read */
--
--FTS *
--fts_open(argv, options, compar)
--	char * const *argv;
--	register int options;
--	int (*compar)();
--{
--	register FTS *sp;
--	register FTSENT *p, *root;
--	register int nitems;
--	FTSENT *parent, *tmp;
--	int len;
--
--	/* Options check. */
--	if (options & ~FTS_OPTIONMASK) {
--		errno = EINVAL;
--		return (NULL);
--	}
--
--	/* Allocate/initialize the stream */
--	if ((sp = malloc((u_int)sizeof(FTS))) == NULL)
--		return (NULL);
--	memset(sp, 0, sizeof(FTS));
--	sp->fts_compar = compar;
--	sp->fts_options = options;
--
--	/* Logical walks turn on NOCHDIR; symbolic links are too hard. */
--	if (ISSET(FTS_LOGICAL))
--		SET(FTS_NOCHDIR);
--
--	/*
--	 * Start out with 1K of path space, and enough, in any case,
--	 * to hold the user's paths.
--	 */
--	if (fts_palloc(sp, MAX(fts_maxarglen(argv), MAXPATHLEN)))
--		goto mem1;
--
--	/* Allocate/initialize root's parent. */
--	if ((parent = fts_alloc(sp, "", 0)) == NULL)
--		goto mem2;
--	parent->fts_level = FTS_ROOTPARENTLEVEL;
--
--	/* Allocate/initialize root(s). */
--	for (root = NULL, nitems = 0; *argv; ++argv, ++nitems) {
--		/* Don't allow zero-length paths. */
--		if ((len = strlen(*argv)) == 0) {
--			errno = EINVAL;
--			goto mem3;
--		}
--
--		p = fts_alloc(sp, *argv, len);
--		p->fts_level = FTS_ROOTLEVEL;
--		p->fts_parent = parent;
--		p->fts_accpath = p->fts_name;
--		p->fts_info = fts_stat(sp, NULL, p, ISSET(FTS_COMFOLLOW));
--
--		/* Command-line "." and ".." are real directories. */
--		if (p->fts_info == FTS_DOT)
--			p->fts_info = FTS_D;
--
--		/*
--		 * If comparison routine supplied, traverse in sorted
--		 * order; otherwise traverse in the order specified.
--		 */
--		if (compar) {
--			p->fts_link = root;
--			root = p;
--		} else {
--			p->fts_link = NULL;
--			if (root == NULL)
--				tmp = root = p;
--			else {
--				tmp->fts_link = p;
--				tmp = p;
--			}
--		}
--	}
--	if (compar && nitems > 1)
--		root = fts_sort(sp, root, nitems);
--
--	/*
--	 * Allocate a dummy pointer and make fts_read think that we've just
--	 * finished the node before the root(s); set p->fts_info to FTS_INIT
--	 * so that everything about the "current" node is ignored.
--	 */
--	if ((sp->fts_cur = fts_alloc(sp, "", 0)) == NULL)
--		goto mem3;
--	sp->fts_cur->fts_link = root;
--	sp->fts_cur->fts_info = FTS_INIT;
--
--	/*
--	 * If using chdir(2), grab a file descriptor pointing to dot to insure
--	 * that we can get back here; this could be avoided for some paths,
--	 * but almost certainly not worth the effort.  Slashes, symbolic links,
--	 * and ".." are all fairly nasty problems.  Note, if we can't get the
--	 * descriptor we run anyway, just more slowly.
--	 */
--	if (!ISSET(FTS_NOCHDIR) && (sp->fts_rfd = open(".", O_RDONLY, 0)) < 0)
--		SET(FTS_NOCHDIR);
--
--	return (sp);
--
--mem3:	fts_lfree(root);
--	free(parent);
--mem2:	free(sp->fts_path);
--mem1:	free(sp);
--	return (NULL);
--}
-+/*
-+ * Special case a root of "/" so that slashes aren't appended which would
-+ * cause paths to be written as "//foo".
-+ */
-+#define	NAPPEND(p)							\
-+	(p->fts_level == FTS_ROOTLEVEL && p->fts_pathlen == 1 &&	\
-+	    p->fts_path[0] == '/' ? 0 : p->fts_pathlen)
- 
- static void
--fts_load(sp, p)
--	FTS *sp;
--	register FTSENT *p;
-+fts_load(FTS *sp, register FTSENT *p)
- {
- 	register int len;
- 	register char *cp;
-@@ -208,332 +95,214 @@ fts_load(sp, p)
- 	sp->fts_dev = p->fts_dev;
- }
- 
--int
--fts_close(sp)
--	FTS *sp;
-+static void
-+fts_lfree(register FTSENT *head)
- {
--	register FTSENT *freep, *p;
--	int saved_errno;
-+	register FTSENT *p;
- 
--	/*
--	 * This still works if we haven't read anything -- the dummy structure
--	 * points to the root list, so we step through to the end of the root
--	 * list which has a valid parent pointer.
--	 */
--	if (sp->fts_cur) {
--		for (p = sp->fts_cur; p->fts_level >= FTS_ROOTLEVEL;) {
--			freep = p;
--			p = p->fts_link ? p->fts_link : p->fts_parent;
--			free(freep);
--		}
-+	/* Free a linked list of structures. */
-+	while (p = head) {
-+		head = head->fts_link;
- 		free(p);
- 	}
-+}
- 
--	/* Free up child linked list, sort array, path buffer. */
--	if (sp->fts_child)
--		fts_lfree(sp->fts_child);
--	if (sp->fts_array)
--		free(sp->fts_array);
--	free(sp->fts_path);
-+static size_t
-+fts_maxarglen(char * const *argv)
-+{
-+	size_t len, max;
- 
--	/* Return to original directory, save errno if necessary. */
--	if (!ISSET(FTS_NOCHDIR)) {
--		saved_errno = fchdir(sp->fts_rfd) ? errno : 0;
--		(void)close(sp->fts_rfd);
--	}
-+	for (max = 0; *argv; ++argv)
-+		if ((len = strlen(*argv)) > max)
-+			max = len;
-+	return (max);
-+}
- 
--	/* Free up the stream pointer. */
--	free(sp);
- 
--	/* Set errno and return. */
--	if (!ISSET(FTS_NOCHDIR) && saved_errno) {
--		errno = saved_errno;
--		return (-1);
-+/*
-+ * When the path is realloc'd, have to fix all of the pointers in structures
-+ * already returned.
-+ */
-+static void
-+fts_padjust(FTS *sp, void *addr)
-+{
-+	FTSENT *p;
-+
-+#define	ADJUST(p) {							\
-+	(p)->fts_accpath =						\
-+	    (char *)addr + ((p)->fts_accpath - (p)->fts_path);		\
-+	(p)->fts_path = addr;						\
-+}
-+	/* Adjust the current set of children. */
-+	for (p = sp->fts_child; p; p = p->fts_link)
-+		ADJUST(p);
-+
-+	/* Adjust the rest of the tree. */
-+	for (p = sp->fts_cur; p->fts_level >= FTS_ROOTLEVEL;) {
-+		ADJUST(p);
-+		p = p->fts_link ? p->fts_link : p->fts_parent;
- 	}
--	return (0);
- }
- 
- /*
-- * Special case a root of "/" so that slashes aren't appended which would
-- * cause paths to be written as "//foo".
-+ * Allow essentially unlimited paths; find, rm, ls should all work on any tree.
-+ * Most systems will allow creation of paths much longer than MAXPATHLEN, even
-+ * though the kernel won't resolve them.  Add the size (not just what's needed)
-+ * plus 256 bytes so don't realloc the path 2 bytes at a time.
-  */
--#define	NAPPEND(p)							\
--	(p->fts_level == FTS_ROOTLEVEL && p->fts_pathlen == 1 &&	\
--	    p->fts_path[0] == '/' ? 0 : p->fts_pathlen)
-+static int
-+fts_palloc(FTS *sp, size_t more)
-+{
-+	sp->fts_pathlen += more + 256;
-+	sp->fts_path = realloc(sp->fts_path, (size_t)sp->fts_pathlen);
-+	return (sp->fts_path == NULL);
-+}
- 
--FTSENT *
--fts_read(sp)
--	register FTS *sp;
-+static FTSENT *
-+fts_alloc(FTS *sp, char *name, register int namelen)
- {
--	register FTSENT *p, *tmp;
--	register int instr;
--	register char *t;
--	int saved_errno;
-+	register FTSENT *p;
-+	size_t len;
- 
--	/* If finished or unrecoverable error, return NULL. */
--	if (sp->fts_cur == NULL || ISSET(FTS_STOP))
-+	/*
-+	 * The file name is a variable length array and no stat structure is
-+	 * necessary if the user has set the nostat bit.  Allocate the FTSENT
-+	 * structure, the file name and the stat structure in one chunk, but
-+	 * be careful that the stat structure is reasonably aligned.  Since the
-+	 * fts_name field is declared to be of size 1, the fts_name pointer is
-+	 * namelen + 2 before the first possible address of the stat structure.
-+	 */
-+	len = sizeof(FTSENT) + namelen;
-+	if (!ISSET(FTS_NOSTAT))
-+		len += sizeof(struct stat) + ALIGNBYTES;
-+	if ((p = malloc(len)) == NULL)
- 		return (NULL);
- 
--	/* Set current node pointer. */
--	p = sp->fts_cur;
-+	/* Copy the name plus the trailing NULL. */
-+	memmove(p->fts_name, name, namelen + 1);
- 
--	/* Save and zero out user instructions. */
--	instr = p->fts_instr;
-+	if (!ISSET(FTS_NOSTAT))
-+		p->fts_statp = (struct stat *)ALIGN(p->fts_name + namelen + 2);
-+	p->fts_namelen = namelen;
-+	p->fts_path = sp->fts_path;
-+	p->fts_errno = 0;
-+	p->fts_flags = 0;
- 	p->fts_instr = FTS_NOINSTR;
-+	p->fts_number = 0;
-+	p->fts_pointer = NULL;
-+	return (p);
-+}
- 
--	/* Any type of file may be re-visited; re-stat and re-turn. */
--	if (instr == FTS_AGAIN) {
--		p->fts_info = fts_stat(sp, NULL, p, 0);
--		return (p);
--	}
- 
-+static u_short
-+fts_stat(FTS *sp, register FTSENT *p, struct dirent *dp, int follow)
-+{
-+	register FTSENT *t;
-+	register dev_t dev;
-+	register ino_t ino;
-+	struct stat *sbp, sb;
-+	int saved_errno;
-+
-+	/* If user needs stat info, stat buffer already allocated. */
-+	sbp = ISSET(FTS_NOSTAT) ? &sb : p->fts_statp;
-+
-+#ifdef S_IFWHT
- 	/*
--	 * Following a symlink -- SLNONE test allows application to see
--	 * SLNONE and recover.  If indirecting through a symlink, have
--	 * keep a pointer to current location.  If unable to get that
--	 * pointer, follow fails.
--	 */
--	if (instr == FTS_FOLLOW &&
--	    (p->fts_info == FTS_SL || p->fts_info == FTS_SLNONE)) {
--		p->fts_info = fts_stat(sp, NULL, p, 1);
--		if (p->fts_info == FTS_D && !ISSET(FTS_NOCHDIR))
--			if ((p->fts_symfd = open(".", O_RDONLY, 0)) < 0) {
--				p->fts_errno = errno;
--				p->fts_info = FTS_ERR;
--			} else
--				p->fts_flags |= FTS_SYMFOLLOW;
--		return (p);
-+	 * Whited-out files don't really exist.  However, there's stat(2) file
-+	 * mask for them, so we set it so that programs (i.e., find) don't have
-+	 * to test FTS_W separately from other file types.
-+	 */
-+	if (dp != NULL && dp->d_type == DT_WHT) {
-+		memset(sbp, 0, sizeof(struct stat));
-+		sbp->st_mode = S_IFWHT;
-+		return (FTS_W);
- 	}
--
--	/* Directory in pre-order. */
--	if (p->fts_info == FTS_D) {
--		/* If skipped or crossed mount point, do post-order visit. */
--		if (instr == FTS_SKIP ||
--		    ISSET(FTS_XDEV) && p->fts_dev != sp->fts_dev) {
--			if (p->fts_flags & FTS_SYMFOLLOW)
--				(void)close(p->fts_symfd);
--			if (sp->fts_child) {
--				fts_lfree(sp->fts_child);
--				sp->fts_child = NULL;
--			}
--			p->fts_info = FTS_DP;
--			return (p);
--		} 
--
--		/* Rebuild if only read the names and now traversing. */
--		if (sp->fts_child && sp->fts_options & FTS_NAMEONLY) {
--			sp->fts_options &= ~FTS_NAMEONLY;
--			fts_lfree(sp->fts_child);
--			sp->fts_child = NULL;
--		}
--
--		/*
--		 * Cd to the subdirectory.
--		 *
--		 * If have already read and now fail to chdir, whack the list
--		 * to make the names come out right, and set the parent errno
--		 * so the application will eventually get an error condition.
--		 * Set the FTS_DONTCHDIR flag so that when we logically change
--		 * directories back to the parent we don't do a chdir.
--		 *
--		 * If haven't read do so.  If the read fails, fts_build sets
--		 * FTS_STOP or the fts_info field of the node.
--		 */
--		if (sp->fts_child) {
--			if (CHDIR(sp, p->fts_accpath)) {
--				p->fts_errno = errno;
--				p->fts_flags |= FTS_DONTCHDIR;
--				for (p = sp->fts_child; p; p = p->fts_link)
--					p->fts_accpath =
--					    p->fts_parent->fts_accpath;
--			}
--		} else if ((sp->fts_child = fts_build(sp, BREAD)) == NULL) {
--			if (ISSET(FTS_STOP))
--				return (NULL);
--			return (p);
-+#endif
-+
-+	/*
-+	 * If doing a logical walk, or application requested FTS_FOLLOW, do
-+	 * a stat(2).  If that fails, check for a non-existent symlink.  If
-+	 * fail, set the errno from the stat call.
-+	 */
-+	if (ISSET(FTS_LOGICAL) || follow) {
-+		if (stat(p->fts_accpath, sbp)) {
-+			saved_errno = errno;
-+			if (!lstat(p->fts_accpath, sbp)) {
-+				errno = 0;
-+				return (FTS_SLNONE);
-+			}
-+			p->fts_errno = saved_errno;
-+			goto err;
- 		}
--		p = sp->fts_child;
--		sp->fts_child = NULL;
--		goto name;
-+	} else if (lstat(p->fts_accpath, sbp)) {
-+		p->fts_errno = errno;
-+err:		memset(sbp, 0, sizeof(struct stat));
-+		return (FTS_NS);
- 	}
- 
--	/* Move to the next node on this level. */
--next:	tmp = p;
--	if (p = p->fts_link) {
--		free(tmp);
--
--		/*
--		 * If reached the top, return to the original directory, and
--		 * load the paths for the next root.
--		 */
--		if (p->fts_level == FTS_ROOTLEVEL) {
--			if (!ISSET(FTS_NOCHDIR) && FCHDIR(sp, sp->fts_rfd)) {
--				SET(FTS_STOP);
--				return (NULL);
--			}
--			fts_load(sp, p);
--			return (sp->fts_cur = p);
--		}
--
-+	if (S_ISDIR(sbp->st_mode)) {
- 		/*
--		 * User may have called fts_set on the node.  If skipped,
--		 * ignore.  If followed, get a file descriptor so we can
--		 * get back if necessary.
-+		 * Set the device/inode.  Used to find cycles and check for
-+		 * crossing mount points.  Also remember the link count, used
-+		 * in fts_build to limit the number of stat calls.  It is
-+		 * understood that these fields are only referenced if fts_info
-+		 * is set to FTS_D.
- 		 */
--		if (p->fts_instr == FTS_SKIP)
--			goto next;
--		if (p->fts_instr == FTS_FOLLOW) {
--			p->fts_info = fts_stat(sp, NULL, p, 1);
--			if (p->fts_info == FTS_D && !ISSET(FTS_NOCHDIR))
--				if ((p->fts_symfd =
--				    open(".", O_RDONLY, 0)) < 0) {
--					p->fts_errno = errno;
--					p->fts_info = FTS_ERR;
--				} else
--					p->fts_flags |= FTS_SYMFOLLOW;
--			p->fts_instr = FTS_NOINSTR;
--		}
--
--name:		t = sp->fts_path + NAPPEND(p->fts_parent);
--		*t++ = '/';
--		memmove(t, p->fts_name, p->fts_namelen + 1);
--		return (sp->fts_cur = p);
--	}
-+		dev = p->fts_dev = sbp->st_dev;
-+		ino = p->fts_ino = sbp->st_ino;
-+		p->fts_nlink = sbp->st_nlink;
- 
--	/* Move up to the parent node. */
--	p = tmp->fts_parent;
--	free(tmp);
-+		if (ISDOT(p->fts_name))
-+			return (FTS_DOT);
- 
--	if (p->fts_level == FTS_ROOTPARENTLEVEL) {
- 		/*
--		 * Done; free everything up and set errno to 0 so the user
--		 * can distinguish between error and EOF.
-+		 * Cycle detection is done by brute force when the directory
-+		 * is first encountered.  If the tree gets deep enough or the
-+		 * number of symbolic links to directories is high enough,
-+		 * something faster might be worthwhile.
- 		 */
--		free(p);
--		errno = 0;
--		return (sp->fts_cur = NULL);
--	}
--
--	/* Nul terminate the pathname. */
--	sp->fts_path[p->fts_pathlen] = '\0';
--
--	/*
--	 * Return to the parent directory.  If at a root node or came through
--	 * a symlink, go back through the file descriptor.  Otherwise, cd up
--	 * one directory.
--	 */
--	if (p->fts_level == FTS_ROOTLEVEL) {
--		if (!ISSET(FTS_NOCHDIR) && FCHDIR(sp, sp->fts_rfd)) {
--			SET(FTS_STOP);
--			return (NULL);
--		}
--	} else if (p->fts_flags & FTS_SYMFOLLOW) {
--		if (FCHDIR(sp, p->fts_symfd)) {
--			saved_errno = errno;
--			(void)close(p->fts_symfd);
--			errno = saved_errno;
--			SET(FTS_STOP);
--			return (NULL);
--		}
--		(void)close(p->fts_symfd);
--	} else if (!(p->fts_flags & FTS_DONTCHDIR)) {
--		if (CHDIR(sp, "..")) {
--			SET(FTS_STOP);
--			return (NULL);
--		}
--	}
--	p->fts_info = p->fts_errno ? FTS_ERR : FTS_DP;
--	return (sp->fts_cur = p);
--}
--
--/*
-- * Fts_set takes the stream as an argument although it's not used in this
-- * implementation; it would be necessary if anyone wanted to add global
-- * semantics to fts using fts_set.  An error return is allowed for similar
-- * reasons.
-- */
--/* ARGSUSED */
--int
--fts_set(sp, p, instr)
--	FTS *sp;
--	FTSENT *p;
--	int instr;
--{
--	if (instr && instr != FTS_AGAIN && instr != FTS_FOLLOW &&
--	    instr != FTS_NOINSTR && instr != FTS_SKIP) {
--		errno = EINVAL;
--		return (1);
-+		for (t = p->fts_parent;
-+		    t->fts_level >= FTS_ROOTLEVEL; t = t->fts_parent)
-+			if (ino == t->fts_ino && dev == t->fts_dev) {
-+				p->fts_cycle = t;
-+				return (FTS_DC);
-+			}
-+		return (FTS_D);
- 	}
--	p->fts_instr = instr;
--	return (0);
-+	if (S_ISLNK(sbp->st_mode))
-+		return (FTS_SL);
-+	if (S_ISREG(sbp->st_mode))
-+		return (FTS_F);
-+	return (FTS_DEFAULT);
- }
- 
--FTSENT *
--fts_children(sp, instr)
--	register FTS *sp;
--	int instr;
-+static FTSENT *
-+fts_sort(FTS *sp, FTSENT *head, register int nitems)
- {
--	register FTSENT *p;
--	int fd;
--
--	if (instr && instr != FTS_NAMEONLY) {
--		errno = EINVAL;
--		return (NULL);
--	}
--
--	/* Set current node pointer. */
--	p = sp->fts_cur;
--
--	/*
--	 * Errno set to 0 so user can distinguish empty directory from
--	 * an error.
--	 */
--	errno = 0;
--
--	/* Fatal errors stop here. */
--	if (ISSET(FTS_STOP))
--		return (NULL);
--
--	/* Return logical hierarchy of user's arguments. */
--	if (p->fts_info == FTS_INIT)
--		return (p->fts_link);
--
--	/*
--	 * If not a directory being visited in pre-order, stop here.  Could
--	 * allow FTS_DNR, assuming the user has fixed the problem, but the
--	 * same effect is available with FTS_AGAIN.
--	 */
--	if (p->fts_info != FTS_D /* && p->fts_info != FTS_DNR */)
--		return (NULL);
--
--	/* Free up any previous child list. */
--	if (sp->fts_child)
--		fts_lfree(sp->fts_child);
--
--	if (instr == FTS_NAMEONLY) {
--		sp->fts_options |= FTS_NAMEONLY;
--		instr = BNAMES;
--	} else 
--		instr = BCHILD;
-+	register FTSENT **ap, *p;
- 
- 	/*
--	 * If using chdir on a relative path and called BEFORE fts_read does
--	 * its chdir to the root of a traversal, we can lose -- we need to
--	 * chdir into the subdirectory, and we don't know where the current
--	 * directory is, so we can't get back so that the upcoming chdir by
--	 * fts_read will work.
-+	 * Construct an array of pointers to the structures and call qsort(3).
-+	 * Reassemble the array in the order returned by qsort.  If unable to
-+	 * sort for memory reasons, return the directory entries in their
-+	 * current order.  Allocate enough space for the current needs plus
-+	 * 40 so don't realloc one entry at a time.
- 	 */
--	if (p->fts_level != FTS_ROOTLEVEL || p->fts_accpath[0] == '/' ||
--	    ISSET(FTS_NOCHDIR))
--		return (sp->fts_child = fts_build(sp, instr));
--
--	if ((fd = open(".", O_RDONLY, 0)) < 0)
--		return (NULL);
--	sp->fts_child = fts_build(sp, instr);
--	if (fchdir(fd))
--		return (NULL);
--	(void)close(fd);
--	return (sp->fts_child);
-+	if (nitems > sp->fts_nitems) {
-+		sp->fts_nitems = nitems + 40;
-+		if ((sp->fts_array = realloc(sp->fts_array,
-+		    (size_t)(sp->fts_nitems * sizeof(FTSENT *)))) == NULL) {
-+			sp->fts_nitems = 0;
-+			return (head);
-+		}
-+	}
-+	for (ap = sp->fts_array, p = head; p; p = p->fts_link)
-+		*ap++ = p;
-+	qsort((void *)sp->fts_array, nitems, sizeof(FTSENT *), sp->fts_compar);
-+	for (head = *(ap = sp->fts_array); --nitems; ++ap)
-+		ap[0]->fts_link = ap[1];
-+	ap[0]->fts_link = NULL;
-+	return (head);
- }
- 
- /*
-@@ -551,9 +320,7 @@ fts_children(sp, instr)
-  * been found, cutting the stat calls by about 2/3.
-  */
- static FTSENT *
--fts_build(sp, type)
--	register FTS *sp;
--	int type;
-+fts_build(register FTS *sp, int type)
- {
- 	register struct dirent *dp;
- 	register FTSENT *p, *head;
-@@ -716,283 +483,479 @@ mem1:				saved_errno = errno;
- 				--nlinks;
- 		}
- 
--		/* We walk in directory order so "ls -f" doesn't get upset. */
--		p->fts_link = NULL;
--		if (head == NULL)
--			head = tail = p;
--		else {
--			tail->fts_link = p;
--			tail = p;
-+		/* We walk in directory order so "ls -f" doesn't get upset. */
-+		p->fts_link = NULL;
-+		if (head == NULL)
-+			head = tail = p;
-+		else {
-+			tail->fts_link = p;
-+			tail = p;
-+		}
-+		++nitems;
-+	}
-+	(void)closedir(dirp);
-+
-+	/*
-+	 * If had to realloc the path, adjust the addresses for the rest
-+	 * of the tree.
-+	 */
-+	if (adjaddr)
-+		fts_padjust(sp, adjaddr);
-+
-+	/*
-+	 * If not changing directories, reset the path back to original
-+	 * state.
-+	 */
-+	if (ISSET(FTS_NOCHDIR)) {
-+		if (cp - 1 > sp->fts_path)
-+			--cp;
-+		*cp = '\0';
-+	}
-+
-+	/*
-+	 * If descended after called from fts_children or after called from
-+	 * fts_read and nothing found, get back.  At the root level we use
-+	 * the saved fd; if one of fts_open()'s arguments is a relative path
-+	 * to an empty directory, we wind up here with no other way back.  If
-+	 * can't get back, we're done.
-+	 */
-+	if (descend && (type == BCHILD || !nitems) &&
-+	    (cur->fts_level == FTS_ROOTLEVEL ?
-+	    FCHDIR(sp, sp->fts_rfd) : CHDIR(sp, ".."))) {
-+		cur->fts_info = FTS_ERR;
-+		SET(FTS_STOP);
-+		return (NULL);
-+	}
-+
-+	/* If didn't find anything, return NULL. */
-+	if (!nitems) {
-+		if (type == BREAD)
-+			cur->fts_info = FTS_DP;
-+		return (NULL);
-+	}
-+
-+	/* Sort the entries. */
-+	if (sp->fts_compar && nitems > 1)
-+		head = fts_sort(sp, head, nitems);
-+	return (head);
-+}
-+
-+
-+FTS *
-+fts_open(char * const *argv, register int options, int (*compar)())
-+{
-+	register FTS *sp;
-+	register FTSENT *p, *root;
-+	register int nitems;
-+	FTSENT *parent, *tmp;
-+	int len;
-+
-+	/* Options check. */
-+	if (options & ~FTS_OPTIONMASK) {
-+		errno = EINVAL;
-+		return (NULL);
-+	}
-+
-+	/* Allocate/initialize the stream */
-+	if ((sp = malloc((u_int)sizeof(FTS))) == NULL)
-+		return (NULL);
-+	memset(sp, 0, sizeof(FTS));
-+	sp->fts_compar = compar;
-+	sp->fts_options = options;
-+
-+	/* Logical walks turn on NOCHDIR; symbolic links are too hard. */
-+	if (ISSET(FTS_LOGICAL))
-+		SET(FTS_NOCHDIR);
-+
-+	/*
-+	 * Start out with 1K of path space, and enough, in any case,
-+	 * to hold the user's paths.
-+	 */
-+	if (fts_palloc(sp, MAX(fts_maxarglen(argv), MAXPATHLEN)))
-+		goto mem1;
-+
-+	/* Allocate/initialize root's parent. */
-+	if ((parent = fts_alloc(sp, "", 0)) == NULL)
-+		goto mem2;
-+	parent->fts_level = FTS_ROOTPARENTLEVEL;
-+
-+	/* Allocate/initialize root(s). */
-+	for (root = NULL, nitems = 0; *argv; ++argv, ++nitems) {
-+		/* Don't allow zero-length paths. */
-+		if ((len = strlen(*argv)) == 0) {
-+			errno = EINVAL;
-+			goto mem3;
-+		}
-+
-+		p = fts_alloc(sp, *argv, len);
-+		p->fts_level = FTS_ROOTLEVEL;
-+		p->fts_parent = parent;
-+		p->fts_accpath = p->fts_name;
-+		p->fts_info = fts_stat(sp, NULL, p, ISSET(FTS_COMFOLLOW));
-+
-+		/* Command-line "." and ".." are real directories. */
-+		if (p->fts_info == FTS_DOT)
-+			p->fts_info = FTS_D;
-+
-+		/*
-+		 * If comparison routine supplied, traverse in sorted
-+		 * order; otherwise traverse in the order specified.
-+		 */
-+		if (compar) {
-+			p->fts_link = root;
-+			root = p;
-+		} else {
-+			p->fts_link = NULL;
-+			if (root == NULL)
-+				tmp = root = p;
-+			else {
-+				tmp->fts_link = p;
-+				tmp = p;
-+			}
- 		}
--		++nitems;
- 	}
--	(void)closedir(dirp);
--
--	/*
--	 * If had to realloc the path, adjust the addresses for the rest
--	 * of the tree.
--	 */
--	if (adjaddr)
--		fts_padjust(sp, adjaddr);
-+	if (compar && nitems > 1)
-+		root = fts_sort(sp, root, nitems);
- 
- 	/*
--	 * If not changing directories, reset the path back to original
--	 * state.
-+	 * Allocate a dummy pointer and make fts_read think that we've just
-+	 * finished the node before the root(s); set p->fts_info to FTS_INIT
-+	 * so that everything about the "current" node is ignored.
- 	 */
--	if (ISSET(FTS_NOCHDIR)) {
--		if (cp - 1 > sp->fts_path)
--			--cp;
--		*cp = '\0';
--	}
-+	if ((sp->fts_cur = fts_alloc(sp, "", 0)) == NULL)
-+		goto mem3;
-+	sp->fts_cur->fts_link = root;
-+	sp->fts_cur->fts_info = FTS_INIT;
- 
- 	/*
--	 * If descended after called from fts_children or after called from
--	 * fts_read and nothing found, get back.  At the root level we use
--	 * the saved fd; if one of fts_open()'s arguments is a relative path
--	 * to an empty directory, we wind up here with no other way back.  If
--	 * can't get back, we're done.
-+	 * If using chdir(2), grab a file descriptor pointing to dot to insure
-+	 * that we can get back here; this could be avoided for some paths,
-+	 * but almost certainly not worth the effort.  Slashes, symbolic links,
-+	 * and ".." are all fairly nasty problems.  Note, if we can't get the
-+	 * descriptor we run anyway, just more slowly.
- 	 */
--	if (descend && (type == BCHILD || !nitems) &&
--	    (cur->fts_level == FTS_ROOTLEVEL ?
--	    FCHDIR(sp, sp->fts_rfd) : CHDIR(sp, ".."))) {
--		cur->fts_info = FTS_ERR;
--		SET(FTS_STOP);
--		return (NULL);
--	}
-+	if (!ISSET(FTS_NOCHDIR) && (sp->fts_rfd = open(".", O_RDONLY, 0)) < 0)
-+		SET(FTS_NOCHDIR);
- 
--	/* If didn't find anything, return NULL. */
--	if (!nitems) {
--		if (type == BREAD)
--			cur->fts_info = FTS_DP;
--		return (NULL);
--	}
-+	return (sp);
- 
--	/* Sort the entries. */
--	if (sp->fts_compar && nitems > 1)
--		head = fts_sort(sp, head, nitems);
--	return (head);
-+mem3:	fts_lfree(root);
-+	free(parent);
-+mem2:	free(sp->fts_path);
-+mem1:	free(sp);
-+	return (NULL);
- }
- 
--static u_short
--fts_stat(sp, dp, p, follow)
--	FTS *sp;
--	register FTSENT *p;
--	struct dirent *dp;
--	int follow;
-+FTSENT *
-+fts_read(register FTS *sp)
- {
--	register FTSENT *t;
--	register dev_t dev;
--	register ino_t ino;
--	struct stat *sbp, sb;
-+	register FTSENT *p, *tmp;
-+	register int instr;
-+	register char *t;
- 	int saved_errno;
- 
--	/* If user needs stat info, stat buffer already allocated. */
--	sbp = ISSET(FTS_NOSTAT) ? &sb : p->fts_statp;
-+	/* If finished or unrecoverable error, return NULL. */
-+	if (sp->fts_cur == NULL || ISSET(FTS_STOP))
-+		return (NULL);
- 
--#ifdef S_IFWHT
--	/*
--	 * Whited-out files don't really exist.  However, there's stat(2) file
--	 * mask for them, so we set it so that programs (i.e., find) don't have
--	 * to test FTS_W separately from other file types.
--	 */
--	if (dp != NULL && dp->d_type == DT_WHT) {
--		memset(sbp, 0, sizeof(struct stat));
--		sbp->st_mode = S_IFWHT;
--		return (FTS_W);
-+	/* Set current node pointer. */
-+	p = sp->fts_cur;
-+
-+	/* Save and zero out user instructions. */
-+	instr = p->fts_instr;
-+	p->fts_instr = FTS_NOINSTR;
-+
-+	/* Any type of file may be re-visited; re-stat and re-turn. */
-+	if (instr == FTS_AGAIN) {
-+		p->fts_info = fts_stat(sp, NULL, p, 0);
-+		return (p);
- 	}
--#endif
--	
-+
- 	/*
--	 * If doing a logical walk, or application requested FTS_FOLLOW, do
--	 * a stat(2).  If that fails, check for a non-existent symlink.  If
--	 * fail, set the errno from the stat call.
-+	 * Following a symlink -- SLNONE test allows application to see
-+	 * SLNONE and recover.  If indirecting through a symlink, have
-+	 * keep a pointer to current location.  If unable to get that
-+	 * pointer, follow fails.
- 	 */
--	if (ISSET(FTS_LOGICAL) || follow) {
--		if (stat(p->fts_accpath, sbp)) {
--			saved_errno = errno;
--			if (!lstat(p->fts_accpath, sbp)) {
--				errno = 0;
--				return (FTS_SLNONE);
--			} 
--			p->fts_errno = saved_errno;
--			goto err;
-+	if (instr == FTS_FOLLOW &&
-+	    (p->fts_info == FTS_SL || p->fts_info == FTS_SLNONE)) {
-+		p->fts_info = fts_stat(sp, NULL, p, 1);
-+		if (p->fts_info == FTS_D && !ISSET(FTS_NOCHDIR))
-+			if ((p->fts_symfd = open(".", O_RDONLY, 0)) < 0) {
-+				p->fts_errno = errno;
-+				p->fts_info = FTS_ERR;
-+			} else
-+				p->fts_flags |= FTS_SYMFOLLOW;
-+		return (p);
-+	}
-+
-+	/* Directory in pre-order. */
-+	if (p->fts_info == FTS_D) {
-+		/* If skipped or crossed mount point, do post-order visit. */
-+		if (instr == FTS_SKIP ||
-+		    ISSET(FTS_XDEV) && p->fts_dev != sp->fts_dev) {
-+			if (p->fts_flags & FTS_SYMFOLLOW)
-+				(void)close(p->fts_symfd);
-+			if (sp->fts_child) {
-+				fts_lfree(sp->fts_child);
-+				sp->fts_child = NULL;
-+			}
-+			p->fts_info = FTS_DP;
-+			return (p);
- 		}
--	} else if (lstat(p->fts_accpath, sbp)) {
--		p->fts_errno = errno;
--err:		memset(sbp, 0, sizeof(struct stat));
--		return (FTS_NS);
-+
-+		/* Rebuild if only read the names and now traversing. */
-+		if (sp->fts_child && sp->fts_options & FTS_NAMEONLY) {
-+			sp->fts_options &= ~FTS_NAMEONLY;
-+			fts_lfree(sp->fts_child);
-+			sp->fts_child = NULL;
-+		}
-+
-+		/*
-+		 * Cd to the subdirectory.
-+		 *
-+		 * If have already read and now fail to chdir, whack the list
-+		 * to make the names come out right, and set the parent errno
-+		 * so the application will eventually get an error condition.
-+		 * Set the FTS_DONTCHDIR flag so that when we logically change
-+		 * directories back to the parent we don't do a chdir.
-+		 *
-+		 * If haven't read do so.  If the read fails, fts_build sets
-+		 * FTS_STOP or the fts_info field of the node.
-+		 */
-+		if (sp->fts_child) {
-+			if (CHDIR(sp, p->fts_accpath)) {
-+				p->fts_errno = errno;
-+				p->fts_flags |= FTS_DONTCHDIR;
-+				for (p = sp->fts_child; p; p = p->fts_link)
-+					p->fts_accpath =
-+					    p->fts_parent->fts_accpath;
-+			}
-+		} else if ((sp->fts_child = fts_build(sp, BREAD)) == NULL) {
-+			if (ISSET(FTS_STOP))
-+				return (NULL);
-+			return (p);
-+		}
-+		p = sp->fts_child;
-+		sp->fts_child = NULL;
-+		goto name;
- 	}
- 
--	if (S_ISDIR(sbp->st_mode)) {
-+	/* Move to the next node on this level. */
-+next:	tmp = p;
-+	if (p = p->fts_link) {
-+		free(tmp);
-+
- 		/*
--		 * Set the device/inode.  Used to find cycles and check for
--		 * crossing mount points.  Also remember the link count, used
--		 * in fts_build to limit the number of stat calls.  It is
--		 * understood that these fields are only referenced if fts_info
--		 * is set to FTS_D.
-+		 * If reached the top, return to the original directory, and
-+		 * load the paths for the next root.
- 		 */
--		dev = p->fts_dev = sbp->st_dev;
--		ino = p->fts_ino = sbp->st_ino;
--		p->fts_nlink = sbp->st_nlink;
-+		if (p->fts_level == FTS_ROOTLEVEL) {
-+			if (!ISSET(FTS_NOCHDIR) && FCHDIR(sp, sp->fts_rfd)) {
-+				SET(FTS_STOP);
-+				return (NULL);
-+			}
-+			fts_load(sp, p);
-+			return (sp->fts_cur = p);
-+		}
-+
-+		/*
-+		 * User may have called fts_set on the node.  If skipped,
-+		 * ignore.  If followed, get a file descriptor so we can
-+		 * get back if necessary.
-+		 */
-+		if (p->fts_instr == FTS_SKIP)
-+			goto next;
-+		if (p->fts_instr == FTS_FOLLOW) {
-+			p->fts_info = fts_stat(sp, NULL, p, 1);
-+			if (p->fts_info == FTS_D && !ISSET(FTS_NOCHDIR))
-+				if ((p->fts_symfd =
-+				    open(".", O_RDONLY, 0)) < 0) {
-+					p->fts_errno = errno;
-+					p->fts_info = FTS_ERR;
-+				} else
-+					p->fts_flags |= FTS_SYMFOLLOW;
-+			p->fts_instr = FTS_NOINSTR;
-+		}
- 
--		if (ISDOT(p->fts_name))
--			return (FTS_DOT);
-+name:		t = sp->fts_path + NAPPEND(p->fts_parent);
-+		*t++ = '/';
-+		memmove(t, p->fts_name, p->fts_namelen + 1);
-+		return (sp->fts_cur = p);
-+	}
- 
-+	/* Move up to the parent node. */
-+	p = tmp->fts_parent;
-+	free(tmp);
-+
-+	if (p->fts_level == FTS_ROOTPARENTLEVEL) {
- 		/*
--		 * Cycle detection is done by brute force when the directory
--		 * is first encountered.  If the tree gets deep enough or the
--		 * number of symbolic links to directories is high enough,
--		 * something faster might be worthwhile.
-+		 * Done; free everything up and set errno to 0 so the user
-+		 * can distinguish between error and EOF.
- 		 */
--		for (t = p->fts_parent;
--		    t->fts_level >= FTS_ROOTLEVEL; t = t->fts_parent)
--			if (ino == t->fts_ino && dev == t->fts_dev) {
--				p->fts_cycle = t;
--				return (FTS_DC);
--			}
--		return (FTS_D);
-+		free(p);
-+		errno = 0;
-+		return (sp->fts_cur = NULL);
- 	}
--	if (S_ISLNK(sbp->st_mode))
--		return (FTS_SL);
--	if (S_ISREG(sbp->st_mode))
--		return (FTS_F);
--	return (FTS_DEFAULT);
--}
- 
--static FTSENT *
--fts_sort(sp, head, nitems)
--	FTS *sp;
--	FTSENT *head;
--	register int nitems;
--{
--	register FTSENT **ap, *p;
-+	/* Nul terminate the pathname. */
-+	sp->fts_path[p->fts_pathlen] = '\0';
- 
- 	/*
--	 * Construct an array of pointers to the structures and call qsort(3).
--	 * Reassemble the array in the order returned by qsort.  If unable to
--	 * sort for memory reasons, return the directory entries in their
--	 * current order.  Allocate enough space for the current needs plus
--	 * 40 so don't realloc one entry at a time.
-+	 * Return to the parent directory.  If at a root node or came through
-+	 * a symlink, go back through the file descriptor.  Otherwise, cd up
-+	 * one directory.
- 	 */
--	if (nitems > sp->fts_nitems) {
--		sp->fts_nitems = nitems + 40;
--		if ((sp->fts_array = realloc(sp->fts_array,
--		    (size_t)(sp->fts_nitems * sizeof(FTSENT *)))) == NULL) {
--			sp->fts_nitems = 0;
--			return (head);
-+	if (p->fts_level == FTS_ROOTLEVEL) {
-+		if (!ISSET(FTS_NOCHDIR) && FCHDIR(sp, sp->fts_rfd)) {
-+			SET(FTS_STOP);
-+			return (NULL);
-+		}
-+	} else if (p->fts_flags & FTS_SYMFOLLOW) {
-+		if (FCHDIR(sp, p->fts_symfd)) {
-+			saved_errno = errno;
-+			(void)close(p->fts_symfd);
-+			errno = saved_errno;
-+			SET(FTS_STOP);
-+			return (NULL);
-+		}
-+		(void)close(p->fts_symfd);
-+	} else if (!(p->fts_flags & FTS_DONTCHDIR)) {
-+		if (CHDIR(sp, "..")) {
-+			SET(FTS_STOP);
-+			return (NULL);
- 		}
- 	}
--	for (ap = sp->fts_array, p = head; p; p = p->fts_link)
--		*ap++ = p;
--	qsort((void *)sp->fts_array, nitems, sizeof(FTSENT *), sp->fts_compar);
--	for (head = *(ap = sp->fts_array); --nitems; ++ap)
--		ap[0]->fts_link = ap[1];
--	ap[0]->fts_link = NULL;
--	return (head);
-+	p->fts_info = p->fts_errno ? FTS_ERR : FTS_DP;
-+	return (sp->fts_cur = p);
- }
- 
--static FTSENT *
--fts_alloc(sp, name, namelen)
--	FTS *sp;
--	char *name;
--	register int namelen;
-+/*
-+ * Fts_set takes the stream as an argument although it's not used in this
-+ * implementation; it would be necessary if anyone wanted to add global
-+ * semantics to fts using fts_set.  An error return is allowed for similar
-+ * reasons.
-+ */
-+/* ARGSUSED */
-+int
-+fts_set(FTS *sp, FTSENT *p, int instr)
-+{
-+	if (instr && instr != FTS_AGAIN && instr != FTS_FOLLOW &&
-+	    instr != FTS_NOINSTR && instr != FTS_SKIP) {
-+		errno = EINVAL;
-+		return (1);
-+	}
-+	p->fts_instr = instr;
-+	return (0);
-+}
-+
-+FTSENT *
-+fts_children(register FTS *sp, int instr)
- {
- 	register FTSENT *p;
--	size_t len;
-+	int fd;
-+
-+	if (instr && instr != FTS_NAMEONLY) {
-+		errno = EINVAL;
-+		return (NULL);
-+	}
-+
-+	/* Set current node pointer. */
-+	p = sp->fts_cur;
- 
- 	/*
--	 * The file name is a variable length array and no stat structure is
--	 * necessary if the user has set the nostat bit.  Allocate the FTSENT
--	 * structure, the file name and the stat structure in one chunk, but
--	 * be careful that the stat structure is reasonably aligned.  Since the
--	 * fts_name field is declared to be of size 1, the fts_name pointer is
--	 * namelen + 2 before the first possible address of the stat structure.
-+	 * Errno set to 0 so user can distinguish empty directory from
-+	 * an error.
- 	 */
--	len = sizeof(FTSENT) + namelen;
--	if (!ISSET(FTS_NOSTAT))
--		len += sizeof(struct stat) + ALIGNBYTES;
--	if ((p = malloc(len)) == NULL)
-+	errno = 0;
-+
-+	/* Fatal errors stop here. */
-+	if (ISSET(FTS_STOP))
- 		return (NULL);
- 
--	/* Copy the name plus the trailing NULL. */
--	memmove(p->fts_name, name, namelen + 1);
-+	/* Return logical hierarchy of user's arguments. */
-+	if (p->fts_info == FTS_INIT)
-+		return (p->fts_link);
- 
--	if (!ISSET(FTS_NOSTAT))
--		p->fts_statp = (struct stat *)ALIGN(p->fts_name + namelen + 2);
--	p->fts_namelen = namelen;
--	p->fts_path = sp->fts_path;
--	p->fts_errno = 0;
--	p->fts_flags = 0;
--	p->fts_instr = FTS_NOINSTR;
--	p->fts_number = 0;
--	p->fts_pointer = NULL;
--	return (p);
-+	/*
-+	 * If not a directory being visited in pre-order, stop here.  Could
-+	 * allow FTS_DNR, assuming the user has fixed the problem, but the
-+	 * same effect is available with FTS_AGAIN.
-+	 */
-+	if (p->fts_info != FTS_D /* && p->fts_info != FTS_DNR */)
-+		return (NULL);
-+
-+	/* Free up any previous child list. */
-+	if (sp->fts_child)
-+		fts_lfree(sp->fts_child);
-+
-+	if (instr == FTS_NAMEONLY) {
-+		sp->fts_options |= FTS_NAMEONLY;
-+		instr = BNAMES;
-+	} else
-+		instr = BCHILD;
-+
-+	/*
-+	 * If using chdir on a relative path and called BEFORE fts_read does
-+	 * its chdir to the root of a traversal, we can lose -- we need to
-+	 * chdir into the subdirectory, and we don't know where the current
-+	 * directory is, so we can't get back so that the upcoming chdir by
-+	 * fts_read will work.
-+	 */
-+	if (p->fts_level != FTS_ROOTLEVEL || p->fts_accpath[0] == '/' ||
-+	    ISSET(FTS_NOCHDIR))
-+		return (sp->fts_child = fts_build(sp, instr));
-+
-+	if ((fd = open(".", O_RDONLY, 0)) < 0)
-+		return (NULL);
-+	sp->fts_child = fts_build(sp, instr);
-+	if (fchdir(fd))
-+		return (NULL);
-+	(void)close(fd);
-+	return (sp->fts_child);
- }
- 
--static void
--fts_lfree(head)
--	register FTSENT *head;
-+int
-+fts_close(FTS *sp)
- {
--	register FTSENT *p;
-+	register FTSENT *freep, *p;
-+	int saved_errno;
- 
--	/* Free a linked list of structures. */
--	while (p = head) {
--		head = head->fts_link;
-+	/*
-+	 * This still works if we haven't read anything -- the dummy structure
-+	 * points to the root list, so we step through to the end of the root
-+	 * list which has a valid parent pointer.
-+	 */
-+	if (sp->fts_cur) {
-+		for (p = sp->fts_cur; p->fts_level >= FTS_ROOTLEVEL;) {
-+			freep = p;
-+			p = p->fts_link ? p->fts_link : p->fts_parent;
-+			free(freep);
-+		}
- 		free(p);
- 	}
--}
- 
--/*
-- * Allow essentially unlimited paths; find, rm, ls should all work on any tree.
-- * Most systems will allow creation of paths much longer than MAXPATHLEN, even
-- * though the kernel won't resolve them.  Add the size (not just what's needed)
-- * plus 256 bytes so don't realloc the path 2 bytes at a time. 
-- */
--static int
--fts_palloc(sp, more)
--	FTS *sp;
--	size_t more;
--{
--	sp->fts_pathlen += more + 256;
--	sp->fts_path = realloc(sp->fts_path, (size_t)sp->fts_pathlen);
--	return (sp->fts_path == NULL);
--}
-+	/* Free up child linked list, sort array, path buffer. */
-+	if (sp->fts_child)
-+		fts_lfree(sp->fts_child);
-+	if (sp->fts_array)
-+		free(sp->fts_array);
-+	free(sp->fts_path);
- 
--/*
-- * When the path is realloc'd, have to fix all of the pointers in structures
-- * already returned.
-- */
--static void
--fts_padjust(sp, addr)
--	FTS *sp;
--	void *addr;
--{
--	FTSENT *p;
-+	/* Return to original directory, save errno if necessary. */
-+	if (!ISSET(FTS_NOCHDIR)) {
-+		saved_errno = fchdir(sp->fts_rfd) ? errno : 0;
-+		(void)close(sp->fts_rfd);
-+	}
- 
--#define	ADJUST(p) {							\
--	(p)->fts_accpath =						\
--	    (char *)addr + ((p)->fts_accpath - (p)->fts_path);		\
--	(p)->fts_path = addr;						\
--}
--	/* Adjust the current set of children. */
--	for (p = sp->fts_child; p; p = p->fts_link)
--		ADJUST(p);
-+	/* Free up the stream pointer. */
-+	free(sp);
- 
--	/* Adjust the rest of the tree. */
--	for (p = sp->fts_cur; p->fts_level >= FTS_ROOTLEVEL;) {
--		ADJUST(p);
--		p = p->fts_link ? p->fts_link : p->fts_parent;
-+	/* Set errno and return. */
-+	if (!ISSET(FTS_NOCHDIR) && saved_errno) {
-+		errno = saved_errno;
-+		return (-1);
- 	}
-+	return (0);
- }
- 
--static size_t
--fts_maxarglen(argv)
--	char * const *argv;
--{
--	size_t len, max;
--
--	for (max = 0; *argv; ++argv)
--		if ((len = strlen(*argv)) > max)
--			max = len;
--	return (max);
--}
diff --git a/import-layers/yocto-poky/meta/recipes-core/fts/fts/remove_cdefs.patch b/import-layers/yocto-poky/meta/recipes-core/fts/fts/remove_cdefs.patch
deleted file mode 100644
index c152704..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/fts/fts/remove_cdefs.patch
+++ /dev/null
@@ -1,69 +0,0 @@
-Replace use of macros from sys/cdefs.h since cdefs.h is missing on musl
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Inappropriate
-
-Index: fts/fts.h
-===================================================================
---- fts.orig/fts.h
-+++ fts/fts.h
-@@ -126,15 +126,21 @@ typedef struct _ftsent {
- 	char fts_name[1];		/* file name */
- } FTSENT;
- 
--#include <sys/cdefs.h>
-+#ifdef __cplusplus
-+extern "C" {
-+#endif
- 
--__BEGIN_DECLS
--FTSENT	*fts_children __P((FTS *, int));
--int	 fts_close __P((FTS *));
--FTS	*fts_open __P((char * const *, int,
--	    int (*)(const FTSENT **, const FTSENT **)));
--FTSENT	*fts_read __P((FTS *));
--int	 fts_set __P((FTS *, FTSENT *, int));
--__END_DECLS
-+#ifndef __P
-+#define __P
-+#endif
-+FTSENT	*fts_children (FTS *p, int opts);
-+int	 fts_close (FTS *p);
-+FTS	*fts_open (char * const * path, int opts,
-+	    int (*compfn)(const FTSENT **, const FTSENT **));
-+FTSENT	*fts_read (FTS *p);
-+int	 fts_set (FTS *p, FTSENT *f, int opts);
- 
-+#ifdef __cplusplus
-+}
-+#endif
- #endif /* !_FTS_H_ */
-Index: fts/fts.c
-===================================================================
---- fts.orig/fts.c
-+++ fts/fts.c
-@@ -50,15 +50,15 @@ static char sccsid[] = "@(#)fts.c	8.6 (B
- #include <string.h>
- #include <unistd.h>
- 
--static FTSENT	*fts_alloc __P((FTS *, char *, int));
--static FTSENT	*fts_build __P((FTS *, int));
--static void	 fts_lfree __P((FTSENT *));
--static void	 fts_load __P((FTS *, FTSENT *));
--static size_t	 fts_maxarglen __P((char * const *));
--static void	 fts_padjust __P((FTS *, void *));
--static int	 fts_palloc __P((FTS *, size_t));
--static FTSENT	*fts_sort __P((FTS *, FTSENT *, int));
--static u_short	 fts_stat __P((FTS *, struct dirent *, FTSENT *, int));
-+static FTSENT	*fts_alloc __P(FTS *, char *, int);
-+static FTSENT	*fts_build __P(FTS *, int);
-+static void	 fts_lfree __P(FTSENT *);
-+static void	 fts_load __P(FTS *, FTSENT *);
-+static size_t	 fts_maxarglen __P(char * const *);
-+static void	 fts_padjust __P(FTS *, void *);
-+static int	 fts_palloc __P(FTS *, size_t);
-+static FTSENT	*fts_sort __P(FTS *, FTSENT *, int);
-+static u_short	 fts_stat __P(FTS *, struct dirent *, FTSENT *, int);
- 
- #define	ISDOT(a)	(a[0] == '.' && (!a[1] || a[1] == '.' && !a[2]))
- 
diff --git a/import-layers/yocto-poky/meta/recipes-core/fts/fts/stdint.patch b/import-layers/yocto-poky/meta/recipes-core/fts/fts/stdint.patch
deleted file mode 100644
index 89e6097..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/fts/fts/stdint.patch
+++ /dev/null
@@ -1,15 +0,0 @@
-Include stdint.h for u_* typedefs
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Inappropriate
-
---- ./fts.c.orig
-+++ ./fts.c
-@@ -46,6 +46,7 @@
- #include <errno.h>
- #include <fcntl.h>
- #include <fts.h>
-+#include <stdint.h>
- #include <stdlib.h>
- #include <string.h>
- #include <unistd.h>
diff --git a/import-layers/yocto-poky/meta/recipes-core/gettext/gettext_0.19.8.1.bb b/import-layers/yocto-poky/meta/recipes-core/gettext/gettext_0.19.8.1.bb
index 83edffe..c2059e6 100644
--- a/import-layers/yocto-poky/meta/recipes-core/gettext/gettext_0.19.8.1.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/gettext/gettext_0.19.8.1.bb
@@ -24,8 +24,6 @@
 
 PACKAGECONFIG[msgcat-curses] = "--with-libncurses-prefix=${STAGING_LIBDIR}/..,--disable-curses,ncurses,"
 
-LDFLAGS_prepend_libc-uclibc = " -lrt -lpthread "
-
 inherit autotools texinfo
 
 EXTRA_OECONF += "--without-lispdir \
@@ -86,15 +84,9 @@
                          ${libdir}/libasprintf.so* \
                          ${libdir}/GNU.Gettext.dll \
                         "
-FILES_gettext-runtime_append_libc-uclibc = " ${libdir}/libintl.so.* \
-                                             ${libdir}/charset.alias \
-                                           "
 FILES_gettext-runtime-dev += "${libdir}/libasprintf.a \
                       ${includedir}/autosprintf.h \
                      "
-FILES_gettext-runtime-dev_append_libc-uclibc = " ${libdir}/libintl.so \
-                                                 ${includedir}/libintl.h \
-                                               "
 FILES_gettext-runtime-doc = "${mandir}/man1/gettext.* \
                              ${mandir}/man1/ngettext.* \
                              ${mandir}/man1/envsubst.* \
@@ -119,6 +111,10 @@
 	rm ${D}${datadir}/gettext/config.rpath
 	rm ${D}${datadir}/gettext/po/Makefile.in.in
 	rm ${D}${datadir}/gettext/po/remove-potcdate.sin
+
+        create_wrapper ${D}${bindir}/msgfmt \
+                GETTEXTDATADIR="${STAGING_DATADIR_NATIVE}/gettext-0.19.8/"
+
 }
 
 BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib-2.0/fix-conflicting-rand.patch b/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib-2.0/fix-conflicting-rand.patch
deleted file mode 100644
index 1571112..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib-2.0/fix-conflicting-rand.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-Rename 'rand' variable to avoid conflict.
-
-Upstream-Status: pending
-Signed-off-by: Björn Stenberg <bjst@enea.com>
-
-diff -u glib-2.34.3/tests/refcount/signals.c~ glib-2.34.3/tests/refcount/signals.c
---- glib-2.34.3/tests/refcount/signals.c	2012-11-26 17:52:48.000000000 +0100
-+++ glib-2.34.3/tests/refcount/signals.c	2013-02-08 14:24:10.052477546 +0100
-@@ -9,7 +9,7 @@
- #define MY_IS_TEST_CLASS(tclass)   (G_TYPE_CHECK_CLASS_TYPE ((tclass), G_TYPE_TEST))
- #define MY_TEST_GET_CLASS(test)    (G_TYPE_INSTANCE_GET_CLASS ((test), G_TYPE_TEST, GTestClass))
- 
--static GRand *rand;
-+static GRand *grand;
- 
- typedef struct _GTest GTest;
- typedef struct _GTestClass GTestClass;
-@@ -84,7 +84,7 @@
-       NULL
-     };
- 
--    rand = g_rand_new();
-+    grand = g_rand_new();
- 
-     test_type = g_type_register_static (G_TYPE_OBJECT, "GTest",
-         &test_info, 0);
-@@ -218,7 +218,7 @@
- static void
- my_test_do_prop (GTest * test)
- {
--  test->value = g_rand_int (rand);
-+  test->value = g_rand_int (grand);
-   g_object_notify (G_OBJECT (test), "test-prop");
- }
- 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib-2.0/glib-gettextize-dir.patch b/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib-2.0/glib-gettextize-dir.patch
deleted file mode 100644
index ee43511..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib-2.0/glib-gettextize-dir.patch
+++ /dev/null
@@ -1,24 +0,0 @@
-# an very old patch cherry-picked in every glib-2.0 patch directory. The earliest container
-# for it is 2.6.5 in OE. The earliest commit for it is c8e5702127e507e82e6f68a4b8c546803accea9d
-# in OE side which ports from previous bitkeeper SCM. In OE side it's only used til 2.12.4.
-#
-# keep it since it's always cleaner to not hardcode destination path. Use @datadir@ is more
-# portable here. mark for upstream
-#
-# by Kevin Tian <kevin.tian@intel.com>, 06/25/2010
-# Rebased by Dongxiao Xu <dongxiao.xu@intel.com>, 11/16/2010
-
-Upstream-Status: Inappropriate [configuration]
-
-diff -ruN glib-2.27.3-orig/glib-gettextize.in glib-2.27.3/glib-gettextize.in
---- glib-2.27.3-orig/glib-gettextize.in	2009-04-01 07:04:20.000000000 +0800
-+++ glib-2.27.3/glib-gettextize.in	2010-11-16 12:55:06.874605916 +0800
-@@ -52,7 +52,7 @@
- datadir=@datadir@
- datarootdir=@datarootdir@
- 
--gettext_dir=$prefix/share/glib-2.0/gettext
-+gettext_dir=@datadir@/glib-2.0/gettext
- 
- while test $# -gt 0; do
-   case "$1" in
diff --git a/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib-2.0_2.50.3.bb b/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib-2.0_2.50.3.bb
deleted file mode 100644
index 22ea347..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib-2.0_2.50.3.bb
+++ /dev/null
@@ -1,25 +0,0 @@
-require glib.inc
-
-PE = "1"
-
-SHRT_VER = "${@oe.utils.trim_version("${PV}", 2)}"
-
-SRC_URI = "${GNOME_MIRROR}/glib/${SHRT_VER}/glib-${PV}.tar.xz \
-           file://configure-libtool.patch \
-           file://fix-conflicting-rand.patch \
-           file://run-ptest \
-           file://ptest-paths.patch \
-           file://uclibc_musl_translation.patch \
-           file://allow-run-media-sdX-drive-mount-if-username-root.patch \
-           file://0001-Remove-the-warning-about-deprecated-paths-in-schemas.patch \
-           file://Enable-more-tests-while-cross-compiling.patch \
-           file://0001-Install-gio-querymodules-as-libexec_PROGRAM.patch \
-           file://0001-Do-not-ignore-return-value-of-write.patch \
-           file://0001-Test-for-pthread_getname_np-before-using-it.patch \
-           "
-
-SRC_URI_append_class-native = " file://glib-gettextize-dir.patch \
-                                file://relocate-modules.patch"
-
-SRC_URI[md5sum] = "381ab22934f296750d036aa55a397ded"
-SRC_URI[sha256sum] = "82ee94bf4c01459b6b00cb9db0545c2237921e3060c0b74cff13fbc020cfd999"
diff --git a/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib-2.0_2.52.3.bb b/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib-2.0_2.52.3.bb
new file mode 100644
index 0000000..b1fe600
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib-2.0_2.52.3.bb
@@ -0,0 +1,23 @@
+require glib.inc
+
+PE = "1"
+
+SHRT_VER = "${@oe.utils.trim_version("${PV}", 2)}"
+
+SRC_URI = "${GNOME_MIRROR}/glib/${SHRT_VER}/glib-${PV}.tar.xz \
+           file://configure-libtool.patch \
+           file://run-ptest \
+           file://ptest-paths.patch \
+           file://uclibc_musl_translation.patch \
+           file://allow-run-media-sdX-drive-mount-if-username-root.patch \
+           file://0001-Remove-the-warning-about-deprecated-paths-in-schemas.patch \
+           file://Enable-more-tests-while-cross-compiling.patch \
+           file://0001-Install-gio-querymodules-as-libexec_PROGRAM.patch \
+           file://0001-Do-not-ignore-return-value-of-write.patch \
+           file://0001-Test-for-pthread_getname_np-before-using-it.patch \
+           "
+
+SRC_URI_append_class-native = " file://relocate-modules.patch"
+
+SRC_URI[md5sum] = "89265d0289a436e99cad54491eb21ef4"
+SRC_URI[sha256sum] = "25ee7635a7c0fcd4ec91cbc3ae07c7f8f5ce621d8183511f414ded09e7e4e128"
diff --git a/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib.inc b/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib.inc
index 2b30e37..4cdf141 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib.inc
+++ b/import-layers/yocto-poky/meta/recipes-core/glib-2.0/glib.inc
@@ -45,7 +45,6 @@
 PRINTF_mingw32 = "--enable-included-printf=yes"
 EXTRA_OECONF = "${PRINTF} ${CORECONF}"
 EXTRA_OECONF_class-native = "${CORECONF} --disable-selinux"
-EXTRA_OECONF_append_libc-uclibc = " --with-libiconv=gnu"
 
 # Tell configure that we'll have dbus-daemon on the target for the tests
 EXTRA_OECONF_class-target_append = " ${@bb.utils.contains('PTEST_ENABLED', '1', ' ac_cv_prog_DBUS_DAEMON=dbus-daemon', '', d)}"
@@ -67,12 +66,17 @@
                     ${bindir}/glib-compile-resources \
                     ${datadir}/glib-2.0/gettext/po/Makefile.in.in \
                     ${datadir}/glib-2.0/schemas/gschema.dtd \
+                    ${datadir}/glib-2.0/valgrind/glib.supp \
                     ${datadir}/gettext/its"
 FILES_${PN}-dbg += "${datadir}/glib-2.0/gdb ${datadir}/gdb"
 FILES_${PN}-codegen = "${datadir}/glib-2.0/codegen/*.py \
                        ${bindir}/gdbus-codegen"
 FILES_${PN}-utils = "${bindir}/*"
 
+RRECOMMENDS_${PN} += "shared-mime-info"
+# When cross compiling for Windows we don't want to include this
+RRECOMMENDS_${PN}_remove_mingw32 = "shared-mime-info"
+
 ARM_INSTRUCTION_SET_armv4 = "arm"
 ARM_INSTRUCTION_SET_armv5 = "arm"
 # Valgrind runtime detection works using hand-written assembly, which
@@ -117,6 +121,12 @@
 	fi
 }
 
+RDEPENDS_${PN}-codegen += "\
+            python3 \
+            python3-distutils \
+            python3-xml \
+           "
+
 RDEPENDS_${PN}-ptest += "\
             dbus \
             gnome-desktop-testing \
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/cross-localedef-native_2.25.bb b/import-layers/yocto-poky/meta/recipes-core/glibc/cross-localedef-native_2.25.bb
deleted file mode 100644
index fae8683..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/cross-localedef-native_2.25.bb
+++ /dev/null
@@ -1,53 +0,0 @@
-SUMMARY = "Cross locale generation tool for glibc"
-HOMEPAGE = "http://www.gnu.org/software/libc/libc.html"
-SECTION = "libs"
-LICENSE = "LGPL-2.1"
-
-LIC_FILES_CHKSUM = "file://LICENSES;md5=e9a558e243b36d3209f380deb394b213 \
-      file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-      file://posix/rxspencer/COPYRIGHT;md5=dc5485bb394a13b2332ec1c785f5d83a \
-      file://COPYING.LIB;md5=4fbd65380cdd255951079008b364516c"
-
-# Tell autotools that we're working in the localedef directory
-#
-AUTOTOOLS_SCRIPT_PATH = "${S}/localedef"
-
-inherit native
-inherit autotools
-
-FILESEXTRAPATHS =. "${FILE_DIRNAME}/${PN}:${FILE_DIRNAME}/glibc:"
-
-SRCBRANCH ?= "release/${PV}/master"
-GLIBC_GIT_URI ?= "git://sourceware.org/git/glibc.git"
-UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>\d+\.\d+(\.\d+)*)"
-
-SRCREV_glibc ?= "db0242e3023436757bbc7c488a779e6e3343db04"
-SRCREV_localedef ?= "29869b6dc11427c5bab839bdb155c85a7c644c71"
-
-SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \
-           git://github.com/kraj/localedef;branch=master;name=localedef;destsuffix=git/localedef \
-           file://0016-timezone-re-written-tzselect-as-posix-sh.patch \
-           file://0017-Remove-bash-dependency-for-nscd-init-script.patch \
-           file://0018-eglibc-Cross-building-and-testing-instructions.patch \
-           file://0019-eglibc-Help-bootstrap-cross-toolchain.patch \
-           file://0020-eglibc-cherry-picked-from.patch \
-           file://0021-eglibc-Clear-cache-lines-on-ppc8xx.patch \
-           file://0022-eglibc-Resolve-__fpscr_values-on-SH4.patch \
-           file://0023-eglibc-Install-PIC-archives.patch \
-           file://0024-eglibc-Forward-port-cross-locale-generation-support.patch \
-           file://0025-Define-DUMMY_LOCALE_T-if-not-defined.patch \
-           file://0001-Include-locale_t.h-compatibility-header.patch \
-"
-# Makes for a rather long rev (22 characters), but...
-#
-SRCREV_FORMAT = "glibc_localedef"
-
-S = "${WORKDIR}/git"
-
-EXTRA_OECONF = "--with-glibc=${S}"
-CFLAGS += "-fgnu89-inline -std=gnu99 -DIS_IN\(x\)='0'"
-
-do_install() {
-	install -d ${D}${bindir}
-	install -m 0755 ${B}/localedef ${D}${bindir}/cross-localedef
-}
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/cross-localedef-native_2.26.bb b/import-layers/yocto-poky/meta/recipes-core/glibc/cross-localedef-native_2.26.bb
new file mode 100644
index 0000000..fc5d70d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/cross-localedef-native_2.26.bb
@@ -0,0 +1,51 @@
+SUMMARY = "Cross locale generation tool for glibc"
+HOMEPAGE = "http://www.gnu.org/software/libc/libc.html"
+SECTION = "libs"
+LICENSE = "LGPL-2.1"
+
+LIC_FILES_CHKSUM = "file://LICENSES;md5=e9a558e243b36d3209f380deb394b213 \
+      file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+      file://posix/rxspencer/COPYRIGHT;md5=dc5485bb394a13b2332ec1c785f5d83a \
+      file://COPYING.LIB;md5=4fbd65380cdd255951079008b364516c"
+
+# Tell autotools that we're working in the localedef directory
+#
+AUTOTOOLS_SCRIPT_PATH = "${S}/localedef"
+
+inherit native
+inherit autotools
+
+FILESEXTRAPATHS =. "${FILE_DIRNAME}/${PN}:${FILE_DIRNAME}/glibc:"
+
+SRCBRANCH ?= "release/${PV}/master"
+GLIBC_GIT_URI ?= "git://sourceware.org/git/glibc.git"
+UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>\d+\.\d+(\.\d+)*)"
+
+SRCREV_glibc ?= "1c9a5c270d8b66f30dcfaf1cb2d6cf39d3e18369"
+SRCREV_localedef ?= "dfb4afe551c6c6e94f9cc85417bd1f582168c843"
+
+SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \
+           git://github.com/kraj/localedef;branch=master;name=localedef;destsuffix=git/localedef \
+           file://0015-timezone-re-written-tzselect-as-posix-sh.patch \
+           file://0016-Remove-bash-dependency-for-nscd-init-script.patch \
+           file://0017-eglibc-Cross-building-and-testing-instructions.patch \
+           file://0018-eglibc-Help-bootstrap-cross-toolchain.patch \
+           file://0019-eglibc-Clear-cache-lines-on-ppc8xx.patch \
+           file://0020-eglibc-Resolve-__fpscr_values-on-SH4.patch \
+           file://0021-eglibc-Install-PIC-archives.patch \
+           file://0022-eglibc-Forward-port-cross-locale-generation-support.patch \
+           file://0023-Define-DUMMY_LOCALE_T-if-not-defined.patch \
+"
+# Makes for a rather long rev (22 characters), but...
+#
+SRCREV_FORMAT = "glibc_localedef"
+
+S = "${WORKDIR}/git"
+
+EXTRA_OECONF = "--with-glibc=${S}"
+CFLAGS += "-fgnu89-inline -std=gnu99 -DIS_IN\(x\)='0'"
+
+do_install() {
+	install -d ${D}${bindir}
+	install -m 0755 ${B}/localedef ${D}${bindir}/cross-localedef
+}
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-collateral.inc b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-collateral.inc
index 37f27ca..de859d5 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-collateral.inc
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-collateral.inc
@@ -18,5 +18,4 @@
 do_install[depends] += "virtual/${MLPREFIX}libc:do_stash_locale"
 
 COMPATIBLE_HOST_libc-musl_class-target = "null"
-COMPATIBLE_HOST_libc-uclibc_class-target = "null"
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-initial_2.25.bb b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-initial_2.26.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-core/glibc/glibc-initial_2.25.bb
rename to import-layers/yocto-poky/meta/recipes-core/glibc/glibc-initial_2.26.bb
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-locale.inc b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-locale.inc
index 75ababe..b3cb10b 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-locale.inc
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-locale.inc
@@ -12,7 +12,7 @@
 BINUTILSDEP_class-nativesdk = "virtual/${TARGET_PREFIX}binutils-crosssdk:do_populate_sysroot"
 do_package[depends] += "${BINUTILSDEP}"
 
-# localedef links with libc.so and glibc-collateral.incinhibits all default deps
+# localedef links with libc.so and glibc-collateral.inc inhibits all default deps
 # cannot add virtual/libc to DEPENDS, because it would conflict with libc-initial in RSS
 RDEPENDS_localedef += "glibc"
 
@@ -39,7 +39,6 @@
 
 PACKAGES_DYNAMIC = "^locale-base-.* \
                     ^glibc-gconv-.* ^glibc-charmap-.* ^glibc-localedata-.* ^glibc-binary-localedata-.* \
-                    ^glibc-gconv-.*  ^glibc-charmap-.*  ^glibc-localedata-.*  ^glibc-binary-localedata-.* \
                     ^${MLPREFIX}glibc-gconv$"
 
 # Create a glibc-binaries package
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-locale_2.25.bb b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-locale_2.26.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-core/glibc/glibc-locale_2.25.bb
rename to import-layers/yocto-poky/meta/recipes-core/glibc/glibc-locale_2.26.bb
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-mtrace_2.25.bb b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-mtrace_2.26.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-core/glibc/glibc-mtrace_2.25.bb
rename to import-layers/yocto-poky/meta/recipes-core/glibc/glibc-mtrace_2.26.bb
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-package.inc b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-package.inc
index 9f7fa62..df3db2c 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-package.inc
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-package.inc
@@ -1,19 +1,3 @@
-#
-# For now, we will skip building of a gcc package if it is a uclibc one
-# and our build is not a uclibc one, and we skip a glibc one if our build
-# is a uclibc build.
-#
-# See the note in gcc/gcc_3.4.0.oe
-#
-
-python __anonymous () {
-    import bb, re
-    uc_os = (re.match('.*uclibc*', d.getVar('TARGET_OS')) != None)
-    if uc_os:
-        raise bb.parse.SkipPackage("incompatible with target %s" %
-                                   d.getVar('TARGET_OS'))
-}
-
 INHIBIT_SYSROOT_STRIP = "1"
 
 PACKAGES = "${PN}-dbg catchsegv sln nscd ldd tzcode glibc-thread-db ${PN}-pic libcidn libmemusage libsegfault ${PN}-pcprofile libsotruss ${PN} ${PN}-utils glibc-extra-nss ${PN}-dev ${PN}-staticdev ${PN}-doc"
@@ -147,11 +131,15 @@
 	do_install_armmultilib
 }
 
+do_install_append_armeb () {
+	do_install_armmultilib
+}
+
 do_install_armmultilib () {
 
 	oe_multilib_header bits/endian.h bits/fcntl.h bits/fenv.h bits/fp-fast.h bits/hwcap.h bits/ipc.h bits/link.h bits/wordsize.h
-	oe_multilib_header bits/local_lim.h bits/mman.h bits/msq.h bits/pthreadtypes.h  bits/sem.h  bits/semaphore.h bits/setjmp.h
-	oe_multilib_header bits/shm.h bits/sigstack.h bits/stat.h bits/statfs.h bits/string.h bits/typesizes.h
+	oe_multilib_header bits/local_lim.h bits/mman.h bits/msq.h bits/pthreadtypes.h bits/pthreadtypes-arch.h  bits/sem.h  bits/semaphore.h bits/setjmp.h
+	oe_multilib_header bits/shm.h bits/sigstack.h bits/stat.h bits/statfs.h bits/typesizes.h
 
 	oe_multilib_header fpu_control.h gnu/lib-names.h gnu/stubs.h ieee754.h
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-scripts_2.25.bb b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc-scripts_2.26.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-core/glibc/glibc-scripts_2.25.bb
rename to import-layers/yocto-poky/meta/recipes-core/glibc/glibc-scripts_2.26.bb
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0001-Include-locale_t.h-compatibility-header.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0001-Include-locale_t.h-compatibility-header.patch
deleted file mode 100644
index a13c428..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0001-Include-locale_t.h-compatibility-header.patch
+++ /dev/null
@@ -1,29 +0,0 @@
-From abfeb0cf4e3261a66a7a23abc9aed33c034c850d Mon Sep 17 00:00:00 2001
-From: Joshua Watt <Joshua.Watt@garmin.com>
-Date: Wed, 6 Dec 2017 13:26:19 -0600
-Subject: [PATCH] Include locale_t.h compatibility header
-
-Newer versions of glibc (since 2.26) moved the locale typedefs from
-xlocale.h to bits/types/locale_t.h. Create a compatibility header for
-these newer versions of glibc
-
-See f0be25b6336db7492e47d2e8e72eb8af53b5506d in glibc
-
-Upstream-Status: Inappropriate compatibility with newer host glibc
-Signed-off-by: Joshua Watt <JPEWhacker@gmail.com>
-
----
- locale/bits/types/locale_t.h | 1 +
- 1 file changed, 1 insertion(+)
- create mode 100644 locale/bits/types/locale_t.h
-
-diff --git a/locale/bits/types/locale_t.h b/locale/bits/types/locale_t.h
-new file mode 100644
-index 0000000000..b519a6c5f8
---- /dev/null
-+++ b/locale/bits/types/locale_t.h
-@@ -0,0 +1 @@
-+#include <xlocale.h>
--- 
-2.14.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0001-nativesdk-glibc-Look-for-host-system-ld.so.cache-as-.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0001-nativesdk-glibc-Look-for-host-system-ld.so.cache-as-.patch
index 0553f8a..19c1d9b 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0001-nativesdk-glibc-Look-for-host-system-ld.so.cache-as-.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0001-nativesdk-glibc-Look-for-host-system-ld.so.cache-as-.patch
@@ -1,7 +1,7 @@
-From 2727e58d1d269994de17cadb12195001b14585e7 Mon Sep 17 00:00:00 2001
+From 81346b2f7735698078d5bf919a78b6c0269d6fee Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 01:48:24 +0000
-Subject: [PATCH 01/26] nativesdk-glibc: Look for host system ld.so.cache as
+Subject: [PATCH 01/25] nativesdk-glibc: Look for host system ld.so.cache as
  well
 
 Upstream-Status: Inappropriate [embedded specific]
@@ -31,7 +31,7 @@
  1 file changed, 8 insertions(+), 8 deletions(-)
 
 diff --git a/elf/dl-load.c b/elf/dl-load.c
-index 51fb0d0..f503dbc 100644
+index c1b6d4ba0f..d7af9ebcbc 100644
 --- a/elf/dl-load.c
 +++ b/elf/dl-load.c
 @@ -2054,6 +2054,14 @@ _dl_map_object (struct link_map *loader, const char *name,
@@ -65,5 +65,5 @@
        if (__glibc_unlikely (GLRO(dl_debug_mask) & DL_DEBUG_LIBS))
  	_dl_debug_printf ("\n");
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0002-nativesdk-glibc-Fix-buffer-overrun-with-a-relocated-.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0002-nativesdk-glibc-Fix-buffer-overrun-with-a-relocated-.patch
index e5ef341..2ce240b 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0002-nativesdk-glibc-Fix-buffer-overrun-with-a-relocated-.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0002-nativesdk-glibc-Fix-buffer-overrun-with-a-relocated-.patch
@@ -1,7 +1,7 @@
-From 1578f52647ec8804186d1944d4cd2095132efc39 Mon Sep 17 00:00:00 2001
+From 82f2e910ec0e2de6a9e2b007825bddfc5850575d Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 01:50:00 +0000
-Subject: [PATCH 02/26] nativesdk-glibc: Fix buffer overrun with a relocated
+Subject: [PATCH 02/25] nativesdk-glibc: Fix buffer overrun with a relocated
  SDK
 
 When ld-linux-*.so.2 is relocated to a path that is longer than the
@@ -22,7 +22,7 @@
  1 file changed, 12 insertions(+)
 
 diff --git a/elf/dl-load.c b/elf/dl-load.c
-index f503dbc..3a3d112 100644
+index d7af9ebcbc..19c1db9948 100644
 --- a/elf/dl-load.c
 +++ b/elf/dl-load.c
 @@ -1753,7 +1753,19 @@ open_path (const char *name, size_t namelen, int mode,
@@ -46,5 +46,5 @@
      {
        struct r_search_path_elem *this_dir = *dirs;
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0003-nativesdk-glibc-Raise-the-size-of-arrays-containing-.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0003-nativesdk-glibc-Raise-the-size-of-arrays-containing-.patch
index 9e207e4..397e8b3 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0003-nativesdk-glibc-Raise-the-size-of-arrays-containing-.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0003-nativesdk-glibc-Raise-the-size-of-arrays-containing-.patch
@@ -1,7 +1,7 @@
-From e53968d61804b6bab32ec6e13cc0b3cd57214796 Mon Sep 17 00:00:00 2001
+From 490a0eb4da1af726ea5d68e3efc0d18ba94c4054 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 01:51:38 +0000
-Subject: [PATCH 03/26] nativesdk-glibc: Raise the size of arrays containing dl
+Subject: [PATCH 03/25] nativesdk-glibc: Raise the size of arrays containing dl
  paths
 
 This patch puts the dynamic loader path in the binaries, SYSTEM_DIRS strings
@@ -26,10 +26,10 @@
  7 files changed, 14 insertions(+), 10 deletions(-)
 
 diff --git a/elf/dl-cache.c b/elf/dl-cache.c
-index cfa335e..daa12ec 100644
+index e9632da0b3..4de529d2cf 100644
 --- a/elf/dl-cache.c
 +++ b/elf/dl-cache.c
-@@ -132,6 +132,10 @@ do									      \
+@@ -133,6 +133,10 @@ do									      \
  while (0)
  
  
@@ -41,7 +41,7 @@
  internal_function
  _dl_cache_libcmp (const char *p1, const char *p2)
 diff --git a/elf/dl-load.c b/elf/dl-load.c
-index 3a3d112..a1410e4 100644
+index 19c1db9948..70c259b400 100644
 --- a/elf/dl-load.c
 +++ b/elf/dl-load.c
 @@ -106,8 +106,8 @@ static size_t max_capstrlen attribute_relro;
@@ -56,7 +56,7 @@
    SYSTEM_DIRS_LEN
  };
 diff --git a/elf/interp.c b/elf/interp.c
-index 9448802..e7e8c70 100644
+index b6e8f04444..47c20415bc 100644
 --- a/elf/interp.c
 +++ b/elf/interp.c
 @@ -18,5 +18,5 @@
@@ -67,7 +67,7 @@
 +const char __invoke_dynamic_linker__[4096] __attribute__ ((section (".interp")))
    = RUNTIME_LINKER;
 diff --git a/elf/ldconfig.c b/elf/ldconfig.c
-index 467ca82..631a2a9 100644
+index 99caf9e9bb..36ea5df5f1 100644
 --- a/elf/ldconfig.c
 +++ b/elf/ldconfig.c
 @@ -168,6 +168,9 @@ static struct argp argp =
@@ -81,18 +81,18 @@
     a platform.  */
  static int
 diff --git a/elf/rtld.c b/elf/rtld.c
-index 4ec25d7..e159c12 100644
+index 65647fb1c8..cd8381cb33 100644
 --- a/elf/rtld.c
 +++ b/elf/rtld.c
-@@ -99,6 +99,7 @@ uintptr_t __pointer_chk_guard_local
- strong_alias (__pointer_chk_guard_local, __pointer_chk_guard)
- #endif
- 
+@@ -128,6 +128,7 @@ dso_name_valid_for_suid (const char *p)
+     }
+   return *p != '\0';
+ }
 +extern const char LD_SO_CACHE[4096] __attribute__ ((section (".ldsocache")));
  
- /* List of auditing DSOs.  */
- static struct audit_list
-@@ -854,12 +855,12 @@ of this helper program; chances are you did not intend to run this program.\n\
+ /* LD_AUDIT variable contents.  Must be processed before the
+    audit_list below.  */
+@@ -999,12 +1000,12 @@ of this helper program; chances are you did not intend to run this program.\n\
    --list                list all dependencies and how they are resolved\n\
    --verify              verify that given object really is a dynamically linked\n\
  			object we can handle\n\
@@ -108,7 +108,7 @@
        ++_dl_skip_args;
        --_dl_argc;
 diff --git a/iconv/gconv_conf.c b/iconv/gconv_conf.c
-index e235188..569f72e 100644
+index 5aa055de6e..b9a14b9bd3 100644
 --- a/iconv/gconv_conf.c
 +++ b/iconv/gconv_conf.c
 @@ -36,7 +36,7 @@
@@ -121,7 +121,7 @@
  /* The path elements, as determined by the __gconv_get_path function.
     All path elements end in a slash.  */
 diff --git a/sysdeps/generic/dl-cache.h b/sysdeps/generic/dl-cache.h
-index eb2f900..505804e 100644
+index 1f0b8f629d..acbe68399d 100644
 --- a/sysdeps/generic/dl-cache.h
 +++ b/sysdeps/generic/dl-cache.h
 @@ -27,10 +27,6 @@
@@ -136,5 +136,5 @@
  # define add_system_dir(dir) add_dir (dir)
  #endif
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0004-nativesdk-glibc-Allow-64-bit-atomics-for-x86.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0004-nativesdk-glibc-Allow-64-bit-atomics-for-x86.patch
index b981f7b..8db47bc 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0004-nativesdk-glibc-Allow-64-bit-atomics-for-x86.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0004-nativesdk-glibc-Allow-64-bit-atomics-for-x86.patch
@@ -1,12 +1,14 @@
-From 0b95f34207ffed3aa53fa949662bfbccc7c864a4 Mon Sep 17 00:00:00 2001
+From 8fe1b56180c30d237cc2ab9a5a9c97a0311f41da Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Thu, 31 Dec 2015 14:35:35 -0800
-Subject: [PATCH 04/26] nativesdk-glibc: Allow 64 bit atomics for x86
+Subject: [PATCH 04/25] nativesdk-glibc: Allow 64 bit atomics for x86
 
 The fix consist of allowing 64bit atomic ops for x86.
 This should be safe for i586 and newer CPUs.
 It also makes the synchronization more efficient.
 
+Upstream-Status: Inappropriate [OE-Specific]
+
 Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
 Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
@@ -15,7 +17,7 @@
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/sysdeps/i386/atomic-machine.h b/sysdeps/i386/atomic-machine.h
-index ce62b33..4fe44ea 100644
+index 0e24200617..1532f52dec 100644
 --- a/sysdeps/i386/atomic-machine.h
 +++ b/sysdeps/i386/atomic-machine.h
 @@ -54,7 +54,7 @@ typedef uintmax_t uatomic_max_t;
@@ -25,8 +27,8 @@
 -#define __HAVE_64B_ATOMICS 0
 +#define __HAVE_64B_ATOMICS 1
  #define USE_ATOMIC_COMPILER_BUILTINS 0
- 
+ #define ATOMIC_EXCHANGE_USES_CAS 0
  
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0005-fsl-e500-e5500-e6500-603e-fsqrt-implementation.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0005-fsl-e500-e5500-e6500-603e-fsqrt-implementation.patch
index ee50000..956b2aa 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0005-fsl-e500-e5500-e6500-603e-fsqrt-implementation.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0005-fsl-e500-e5500-e6500-603e-fsqrt-implementation.patch
@@ -1,7 +1,7 @@
-From 77a7495376c7d0c5507c0ec99bf1568150339ef4 Mon Sep 17 00:00:00 2001
+From b9edcc845641956b7286c60c833f05a9f70cfab9 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 00:01:50 +0000
-Subject: [PATCH 05/26] fsl e500/e5500/e6500/603e fsqrt implementation
+Subject: [PATCH 05/25] fsl e500/e5500/e6500/603e fsqrt implementation
 
 Upstream-Status: Pending
 Signed-off-by: Edmar Wienskoski <edmar@freescale.com>
@@ -49,7 +49,7 @@
 
 diff --git a/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrt.c
 new file mode 100644
-index 0000000..71e516d
+index 0000000000..71e516d1c8
 --- /dev/null
 +++ b/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrt.c
 @@ -0,0 +1,134 @@
@@ -189,7 +189,7 @@
 +}
 diff --git a/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrtf.c
 new file mode 100644
-index 0000000..26fa067
+index 0000000000..26fa067abf
 --- /dev/null
 +++ b/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrtf.c
 @@ -0,0 +1,101 @@
@@ -296,7 +296,7 @@
 +}
 diff --git a/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrt.c
 new file mode 100644
-index 0000000..71e516d
+index 0000000000..71e516d1c8
 --- /dev/null
 +++ b/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrt.c
 @@ -0,0 +1,134 @@
@@ -436,7 +436,7 @@
 +}
 diff --git a/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrtf.c
 new file mode 100644
-index 0000000..26fa067
+index 0000000000..26fa067abf
 --- /dev/null
 +++ b/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrtf.c
 @@ -0,0 +1,101 @@
@@ -543,7 +543,7 @@
 +}
 diff --git a/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrt.c
 new file mode 100644
-index 0000000..71e516d
+index 0000000000..71e516d1c8
 --- /dev/null
 +++ b/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrt.c
 @@ -0,0 +1,134 @@
@@ -683,7 +683,7 @@
 +}
 diff --git a/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrtf.c
 new file mode 100644
-index 0000000..26fa067
+index 0000000000..26fa067abf
 --- /dev/null
 +++ b/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrtf.c
 @@ -0,0 +1,101 @@
@@ -790,7 +790,7 @@
 +}
 diff --git a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c
 new file mode 100644
-index 0000000..71e516d
+index 0000000000..71e516d1c8
 --- /dev/null
 +++ b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c
 @@ -0,0 +1,134 @@
@@ -930,7 +930,7 @@
 +}
 diff --git a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c
 new file mode 100644
-index 0000000..26fa067
+index 0000000000..26fa067abf
 --- /dev/null
 +++ b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c
 @@ -0,0 +1,101 @@
@@ -1037,7 +1037,7 @@
 +}
 diff --git a/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrt.c
 new file mode 100644
-index 0000000..71e516d
+index 0000000000..71e516d1c8
 --- /dev/null
 +++ b/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrt.c
 @@ -0,0 +1,134 @@
@@ -1177,7 +1177,7 @@
 +}
 diff --git a/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrtf.c
 new file mode 100644
-index 0000000..26fa067
+index 0000000000..26fa067abf
 --- /dev/null
 +++ b/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrtf.c
 @@ -0,0 +1,101 @@
@@ -1284,7 +1284,7 @@
 +}
 diff --git a/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrt.c
 new file mode 100644
-index 0000000..71e516d
+index 0000000000..71e516d1c8
 --- /dev/null
 +++ b/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrt.c
 @@ -0,0 +1,134 @@
@@ -1424,7 +1424,7 @@
 +}
 diff --git a/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrtf.c
 new file mode 100644
-index 0000000..26fa067
+index 0000000000..26fa067abf
 --- /dev/null
 +++ b/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrtf.c
 @@ -0,0 +1,101 @@
@@ -1531,14 +1531,14 @@
 +}
 diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc32/603e/fpu/Implies b/sysdeps/unix/sysv/linux/powerpc/powerpc32/603e/fpu/Implies
 new file mode 100644
-index 0000000..b103b4d
+index 0000000000..b103b4dea5
 --- /dev/null
 +++ b/sysdeps/unix/sysv/linux/powerpc/powerpc32/603e/fpu/Implies
 @@ -0,0 +1 @@
 +powerpc/powerpc32/603e/fpu
 diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc32/e300c3/fpu/Implies b/sysdeps/unix/sysv/linux/powerpc/powerpc32/e300c3/fpu/Implies
 new file mode 100644
-index 0000000..64db17f
+index 0000000000..64db17fada
 --- /dev/null
 +++ b/sysdeps/unix/sysv/linux/powerpc/powerpc32/e300c3/fpu/Implies
 @@ -0,0 +1,2 @@
@@ -1546,39 +1546,39 @@
 +powerpc/powerpc32/603e/fpu
 diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc32/e500mc/fpu/Implies b/sysdeps/unix/sysv/linux/powerpc/powerpc32/e500mc/fpu/Implies
 new file mode 100644
-index 0000000..7eac5fc
+index 0000000000..7eac5fcf02
 --- /dev/null
 +++ b/sysdeps/unix/sysv/linux/powerpc/powerpc32/e500mc/fpu/Implies
 @@ -0,0 +1 @@
 +powerpc/powerpc32/e500mc/fpu
 diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc32/e5500/fpu/Implies b/sysdeps/unix/sysv/linux/powerpc/powerpc32/e5500/fpu/Implies
 new file mode 100644
-index 0000000..264b2a7
+index 0000000000..264b2a7700
 --- /dev/null
 +++ b/sysdeps/unix/sysv/linux/powerpc/powerpc32/e5500/fpu/Implies
 @@ -0,0 +1 @@
 +powerpc/powerpc32/e5500/fpu
 diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc32/e6500/fpu/Implies b/sysdeps/unix/sysv/linux/powerpc/powerpc32/e6500/fpu/Implies
 new file mode 100644
-index 0000000..a259344
+index 0000000000..a25934467b
 --- /dev/null
 +++ b/sysdeps/unix/sysv/linux/powerpc/powerpc32/e6500/fpu/Implies
 @@ -0,0 +1 @@
 +powerpc/powerpc32/e6500/fpu
 diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc64/e5500/fpu/Implies b/sysdeps/unix/sysv/linux/powerpc/powerpc64/e5500/fpu/Implies
 new file mode 100644
-index 0000000..a7bc854
+index 0000000000..a7bc854be8
 --- /dev/null
 +++ b/sysdeps/unix/sysv/linux/powerpc/powerpc64/e5500/fpu/Implies
 @@ -0,0 +1 @@
 +powerpc/powerpc64/e5500/fpu
 diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc64/e6500/fpu/Implies b/sysdeps/unix/sysv/linux/powerpc/powerpc64/e6500/fpu/Implies
 new file mode 100644
-index 0000000..04ff8cc
+index 0000000000..04ff8cc181
 --- /dev/null
 +++ b/sysdeps/unix/sysv/linux/powerpc/powerpc64/e6500/fpu/Implies
 @@ -0,0 +1 @@
 +powerpc/powerpc64/e6500/fpu
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0006-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0006-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch
index 9088d29..c74fead 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0006-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0006-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch
@@ -1,7 +1,7 @@
-From 520cb9e746af637cf01fea385b7f4ee4aadbdfdd Mon Sep 17 00:00:00 2001
+From 324202488a1c2439be345745722f5cb04c0e0847 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 00:11:22 +0000
-Subject: [PATCH 06/26] readlib: Add OECORE_KNOWN_INTERPRETER_NAMES to known
+Subject: [PATCH 06/25] readlib: Add OECORE_KNOWN_INTERPRETER_NAMES to known
  names
 
 This bolts in a hook for OE to pass its own version of interpreter
@@ -17,7 +17,7 @@
  1 file changed, 1 insertion(+)
 
 diff --git a/elf/readlib.c b/elf/readlib.c
-index 8a66ffe..08d56fc 100644
+index d278a189b2..a84cb85158 100644
 --- a/elf/readlib.c
 +++ b/elf/readlib.c
 @@ -51,6 +51,7 @@ static struct known_names interpreters[] =
@@ -29,5 +29,5 @@
  
  static struct known_names known_libs[] =
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0007-ppc-sqrt-Fix-undefined-reference-to-__sqrt_finite.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0007-ppc-sqrt-Fix-undefined-reference-to-__sqrt_finite.patch
index f33defe..b643276 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0007-ppc-sqrt-Fix-undefined-reference-to-__sqrt_finite.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0007-ppc-sqrt-Fix-undefined-reference-to-__sqrt_finite.patch
@@ -1,7 +1,7 @@
-From 64130262787d54e2e6695ae4ed8783bfec14ffef Mon Sep 17 00:00:00 2001
+From cf00bf9de8128171e79a019de809e35f3aeed281 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 00:15:07 +0000
-Subject: [PATCH 07/26] ppc/sqrt: Fix undefined reference to `__sqrt_finite'
+Subject: [PATCH 07/25] ppc/sqrt: Fix undefined reference to `__sqrt_finite'
 
 on ppc fixes the errors like below
 | ./.libs/libpulsecore-1.1.so: undefined reference to `__sqrt_finite'
@@ -36,7 +36,7 @@
  12 files changed, 12 insertions(+), 24 deletions(-)
 
 diff --git a/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrt.c
-index 71e516d..1795fd6 100644
+index 71e516d1c8..1795fd6c3e 100644
 --- a/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrt.c
 @@ -39,14 +39,8 @@ static const float half = 0.5;
@@ -60,7 +60,7 @@
  }
 +strong_alias (__ieee754_sqrt, __sqrt_finite)
 diff --git a/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrtf.c
-index 26fa067..a917f31 100644
+index 26fa067abf..a917f313ab 100644
 --- a/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrtf.c
 @@ -37,14 +37,8 @@ static const float threehalf = 1.5;
@@ -84,7 +84,7 @@
  }
 +strong_alias (__ieee754_sqrtf, __sqrtf_finite)
 diff --git a/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrt.c
-index 71e516d..fc4a749 100644
+index 71e516d1c8..fc4a74990e 100644
 --- a/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrt.c
 @@ -132,3 +132,4 @@ __ieee754_sqrt (b)
@@ -93,7 +93,7 @@
  }
 +strong_alias (__ieee754_sqrt, __sqrt_finite)
 diff --git a/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrtf.c
-index 26fa067..9d17512 100644
+index 26fa067abf..9d175122a8 100644
 --- a/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrtf.c
 @@ -99,3 +99,4 @@ __ieee754_sqrtf (b)
@@ -102,7 +102,7 @@
  }
 +strong_alias (__ieee754_sqrtf, __sqrtf_finite)
 diff --git a/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrt.c
-index 71e516d..fc4a749 100644
+index 71e516d1c8..fc4a74990e 100644
 --- a/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrt.c
 @@ -132,3 +132,4 @@ __ieee754_sqrt (b)
@@ -111,7 +111,7 @@
  }
 +strong_alias (__ieee754_sqrt, __sqrt_finite)
 diff --git a/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrtf.c
-index 26fa067..9d17512 100644
+index 26fa067abf..9d175122a8 100644
 --- a/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrtf.c
 @@ -99,3 +99,4 @@ __ieee754_sqrtf (b)
@@ -120,7 +120,7 @@
  }
 +strong_alias (__ieee754_sqrtf, __sqrtf_finite)
 diff --git a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c
-index 71e516d..fc4a749 100644
+index 71e516d1c8..fc4a74990e 100644
 --- a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c
 @@ -132,3 +132,4 @@ __ieee754_sqrt (b)
@@ -129,7 +129,7 @@
  }
 +strong_alias (__ieee754_sqrt, __sqrt_finite)
 diff --git a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c
-index 26fa067..9d17512 100644
+index 26fa067abf..9d175122a8 100644
 --- a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c
 @@ -99,3 +99,4 @@ __ieee754_sqrtf (b)
@@ -138,7 +138,7 @@
  }
 +strong_alias (__ieee754_sqrtf, __sqrtf_finite)
 diff --git a/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrt.c
-index 71e516d..1795fd6 100644
+index 71e516d1c8..1795fd6c3e 100644
 --- a/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrt.c
 @@ -39,14 +39,8 @@ static const float half = 0.5;
@@ -162,7 +162,7 @@
  }
 +strong_alias (__ieee754_sqrt, __sqrt_finite)
 diff --git a/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrtf.c
-index 26fa067..a917f31 100644
+index 26fa067abf..a917f313ab 100644
 --- a/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrtf.c
 @@ -37,14 +37,8 @@ static const float threehalf = 1.5;
@@ -186,7 +186,7 @@
  }
 +strong_alias (__ieee754_sqrtf, __sqrtf_finite)
 diff --git a/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrt.c
-index 71e516d..fc4a749 100644
+index 71e516d1c8..fc4a74990e 100644
 --- a/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrt.c
 @@ -132,3 +132,4 @@ __ieee754_sqrt (b)
@@ -195,7 +195,7 @@
  }
 +strong_alias (__ieee754_sqrt, __sqrt_finite)
 diff --git a/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrtf.c
-index 26fa067..9d17512 100644
+index 26fa067abf..9d175122a8 100644
 --- a/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrtf.c
 @@ -99,3 +99,4 @@ __ieee754_sqrtf (b)
@@ -204,5 +204,5 @@
  }
 +strong_alias (__ieee754_sqrtf, __sqrtf_finite)
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0008-__ieee754_sqrt-f-are-now-inline-functions-and-call-o.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0008-__ieee754_sqrt-f-are-now-inline-functions-and-call-o.patch
index 26f65c5..3aeec52 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0008-__ieee754_sqrt-f-are-now-inline-functions-and-call-o.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0008-__ieee754_sqrt-f-are-now-inline-functions-and-call-o.patch
@@ -1,7 +1,7 @@
-From 5afb0147e3e49c3b474404524014efe51b2bca5a Mon Sep 17 00:00:00 2001
+From babe311deca9ee2730278f13b061b914b5286dc3 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 00:16:38 +0000
-Subject: [PATCH 08/26] __ieee754_sqrt{,f} are now inline functions and call
+Subject: [PATCH 08/25] __ieee754_sqrt{,f} are now inline functions and call
  out __slow versions
 
 Upstream-Status: Pending
@@ -23,7 +23,7 @@
  12 files changed, 114 insertions(+), 21 deletions(-)
 
 diff --git a/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrt.c
-index 1795fd6..daa83f3 100644
+index 1795fd6c3e..daa83f3fe8 100644
 --- a/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrt.c
 @@ -40,7 +40,7 @@ static const float half = 0.5;
@@ -58,7 +58,7 @@
 +
  strong_alias (__ieee754_sqrt, __sqrt_finite)
 diff --git a/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrtf.c
-index a917f31..b812cf1 100644
+index a917f313ab..b812cf1705 100644
 --- a/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc32/603e/fpu/e_sqrtf.c
 @@ -38,7 +38,7 @@ static const float threehalf = 1.5;
@@ -82,7 +82,7 @@
 +}
  strong_alias (__ieee754_sqrtf, __sqrtf_finite)
 diff --git a/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrt.c
-index fc4a749..7038a70 100644
+index fc4a74990e..7038a70b47 100644
 --- a/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrt.c
 @@ -41,10 +41,10 @@ static const float half = 0.5;
@@ -121,7 +121,7 @@
 +
  strong_alias (__ieee754_sqrt, __sqrt_finite)
 diff --git a/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrtf.c
-index 9d17512..10de1f0 100644
+index 9d175122a8..10de1f0cc3 100644
 --- a/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc32/e500mc/fpu/e_sqrtf.c
 @@ -39,10 +39,10 @@ static const float threehalf = 1.5;
@@ -151,7 +151,7 @@
 +
  strong_alias (__ieee754_sqrtf, __sqrtf_finite)
 diff --git a/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrt.c
-index fc4a749..7038a70 100644
+index fc4a74990e..7038a70b47 100644
 --- a/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrt.c
 @@ -41,10 +41,10 @@ static const float half = 0.5;
@@ -190,7 +190,7 @@
 +
  strong_alias (__ieee754_sqrt, __sqrt_finite)
 diff --git a/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrtf.c
-index 9d17512..10de1f0 100644
+index 9d175122a8..10de1f0cc3 100644
 --- a/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc32/e5500/fpu/e_sqrtf.c
 @@ -39,10 +39,10 @@ static const float threehalf = 1.5;
@@ -220,7 +220,7 @@
 +
  strong_alias (__ieee754_sqrtf, __sqrtf_finite)
 diff --git a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c
-index fc4a749..1c34244 100644
+index fc4a74990e..1c34244bd8 100644
 --- a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c
 @@ -132,4 +132,12 @@ __ieee754_sqrt (b)
@@ -237,7 +237,7 @@
 +
  strong_alias (__ieee754_sqrt, __sqrt_finite)
 diff --git a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c
-index 9d17512..8126535 100644
+index 9d175122a8..812653558f 100644
 --- a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c
 @@ -99,4 +99,12 @@ __ieee754_sqrtf (b)
@@ -254,7 +254,7 @@
 +
  strong_alias (__ieee754_sqrtf, __sqrtf_finite)
 diff --git a/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrt.c
-index 1795fd6..13a8197 100644
+index 1795fd6c3e..13a81973e3 100644
 --- a/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrt.c
 @@ -40,7 +40,7 @@ static const float half = 0.5;
@@ -289,7 +289,7 @@
 +
  strong_alias (__ieee754_sqrt, __sqrt_finite)
 diff --git a/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrtf.c
-index a917f31..fae2d81 100644
+index a917f313ab..fae2d81210 100644
 --- a/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc64/e5500/fpu/e_sqrtf.c
 @@ -38,7 +38,7 @@ static const float threehalf = 1.5;
@@ -314,7 +314,7 @@
 +
  strong_alias (__ieee754_sqrtf, __sqrtf_finite)
 diff --git a/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrt.c
-index fc4a749..7038a70 100644
+index fc4a74990e..7038a70b47 100644
 --- a/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrt.c
 @@ -41,10 +41,10 @@ static const float half = 0.5;
@@ -353,7 +353,7 @@
 +
  strong_alias (__ieee754_sqrt, __sqrt_finite)
 diff --git a/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrtf.c
-index 9d17512..10de1f0 100644
+index 9d175122a8..10de1f0cc3 100644
 --- a/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc64/e6500/fpu/e_sqrtf.c
 @@ -39,10 +39,10 @@ static const float threehalf = 1.5;
@@ -383,5 +383,5 @@
 +
  strong_alias (__ieee754_sqrtf, __sqrtf_finite)
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0009-Quote-from-bug-1443-which-explains-what-the-patch-do.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0009-Quote-from-bug-1443-which-explains-what-the-patch-do.patch
index d416acd..7d5c2e3 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0009-Quote-from-bug-1443-which-explains-what-the-patch-do.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0009-Quote-from-bug-1443-which-explains-what-the-patch-do.patch
@@ -1,7 +1,7 @@
-From ddd51bb4e005432cb3c0f8f33822954408a9fee1 Mon Sep 17 00:00:00 2001
+From 93b5d6bed19939039031c45b777d29619db06184 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 00:20:09 +0000
-Subject: [PATCH 09/26] Quote from bug 1443 which explains what the patch does
+Subject: [PATCH 09/25] Quote from bug 1443 which explains what the patch does
  :
 
   We build some random program and link it with -lust.  When we run it,
@@ -45,10 +45,10 @@
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/sysdeps/arm/dl-machine.h b/sysdeps/arm/dl-machine.h
-index 60eee00..7d54d5e 100644
+index 7053ead16e..0b1e1716b0 100644
 --- a/sysdeps/arm/dl-machine.h
 +++ b/sysdeps/arm/dl-machine.h
-@@ -499,7 +499,7 @@ elf_machine_rel (struct link_map *map, const Elf32_Rel *reloc,
+@@ -500,7 +500,7 @@ elf_machine_rel (struct link_map *map, const Elf32_Rel *reloc,
  
  	case R_ARM_TLS_DTPOFF32:
  	  if (sym != NULL)
@@ -58,5 +58,5 @@
  
  	case R_ARM_TLS_TPOFF32:
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0010-eglibc-run-libm-err-tab.pl-with-specific-dirs-in-S.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0010-eglibc-run-libm-err-tab.pl-with-specific-dirs-in-S.patch
index 276f1fa..7275c3e 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0010-eglibc-run-libm-err-tab.pl-with-specific-dirs-in-S.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0010-eglibc-run-libm-err-tab.pl-with-specific-dirs-in-S.patch
@@ -1,7 +1,7 @@
-From d7e74670825330f5421a55f5aa2a1ce6fda7d7fb Mon Sep 17 00:00:00 2001
+From 1a6e0f4ee8584b04226156df1a3de3e467f0ef6f Mon Sep 17 00:00:00 2001
 From: Ting Liu <b28495@freescale.com>
 Date: Wed, 19 Dec 2012 04:39:57 -0600
-Subject: [PATCH 10/26] eglibc: run libm-err-tab.pl with specific dirs in ${S}
+Subject: [PATCH 10/25] eglibc: run libm-err-tab.pl with specific dirs in ${S}
 
 libm-err-tab.pl will parse all the files named "libm-test-ulps"
 in the given dir recursively. To avoid parsing the one in
@@ -18,7 +18,7 @@
  1 file changed, 2 insertions(+), 1 deletion(-)
 
 diff --git a/manual/Makefile b/manual/Makefile
-index f2f694f..e062833 100644
+index 4ed63a8ef3..e89919eb19 100644
 --- a/manual/Makefile
 +++ b/manual/Makefile
 @@ -105,7 +105,8 @@ $(objpfx)libm-err.texi: $(objpfx)stamp-libm-err
@@ -32,5 +32,5 @@
  	touch $@
  
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0011-__ieee754_sqrt-f-are-now-inline-functions-and-call-o.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0011-__ieee754_sqrt-f-are-now-inline-functions-and-call-o.patch
index 096dab5..84f2ca5 100644
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0011-__ieee754_sqrt-f-are-now-inline-functions-and-call-o.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0011-__ieee754_sqrt-f-are-now-inline-functions-and-call-o.patch
@@ -1,7 +1,7 @@
-From d6e2076571263e45c48889896d3d94ff576df2be Mon Sep 17 00:00:00 2001
+From 9b2af6cbf68d3353d72519e7f6c46becb7bd1d0f Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 18 Mar 2015 00:24:46 +0000
-Subject: [PATCH 11/26] __ieee754_sqrt{,f} are now inline functions and call
+Subject: [PATCH 11/25] __ieee754_sqrt{,f} are now inline functions and call
  out __slow versions
 
 Upstream-Status: Pending
@@ -14,7 +14,7 @@
  2 files changed, 5 insertions(+), 5 deletions(-)
 
 diff --git a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c
-index 1c34244..7038a70 100644
+index 1c34244bd8..7038a70b47 100644
 --- a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c
 +++ b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrt.c
 @@ -41,10 +41,10 @@ static const float half = 0.5;
@@ -40,7 +40,7 @@
  #define FMADD(a_, c_, b_)                                               \
            ({ double __r;                                                \
 diff --git a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c
-index 8126535..10de1f0 100644
+index 812653558f..10de1f0cc3 100644
 --- a/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c
 +++ b/sysdeps/powerpc/powerpc32/e6500/fpu/e_sqrtf.c
 @@ -39,10 +39,10 @@ static const float threehalf = 1.5;
@@ -57,5 +57,5 @@
  #endif
  {
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0012-Make-ld-version-output-matching-grok-gold-s-output.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0012-Make-ld-version-output-matching-grok-gold-s-output.patch
deleted file mode 100644
index 7728c61..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0012-Make-ld-version-output-matching-grok-gold-s-output.patch
+++ /dev/null
@@ -1,44 +0,0 @@
-From c0974c746e026650bef5d1940eb3f519765c77af Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 18 Mar 2015 00:25:45 +0000
-Subject: [PATCH 12/26] Make ld --version output matching grok gold's output
-
-adapted from from upstream branch roland/gold-vs-libc
-
-Upstream-Status: Backport
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- configure    | 2 +-
- configure.ac | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/configure b/configure
-index 5cf3230..391f29d 100755
---- a/configure
-+++ b/configure
-@@ -4555,7 +4555,7 @@ else
-   # Found it, now check the version.
-   { $as_echo "$as_me:${as_lineno-$LINENO}: checking version of $LD" >&5
- $as_echo_n "checking version of $LD... " >&6; }
--  ac_prog_version=`$LD --version 2>&1 | sed -n 's/^.*GNU ld.* \([0-9][0-9]*\.[0-9.]*\).*$/\1/p'`
-+  ac_prog_version=`$LD --version 2>&1 | sed -n 's/^.*GNU [Bbinutilsd][^.]* \([0-9][0-9]*\.[0-9.]*\).*$/\1/p'`
-   case $ac_prog_version in
-     '') ac_prog_version="v. ?.??, bad"; ac_verc_fail=yes;;
-     2.1[0-9][0-9]*|2.2[2-9]*|2.[3-9][0-9]*|[3-9].*|[1-9][0-9]*)
-diff --git a/configure.ac b/configure.ac
-index d719fad..5b5877c 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -990,7 +990,7 @@ AC_CHECK_PROG_VER(AS, $AS, --version,
- 		  [2.1[0-9][0-9]*|2.2[2-9]*|2.[3-9][0-9]*|[3-9].*|[1-9][0-9]*],
- 		  AS=: critic_missing="$critic_missing as")
- AC_CHECK_PROG_VER(LD, $LD, --version,
--		  [GNU ld.* \([0-9][0-9]*\.[0-9.]*\)],
-+		  [GNU [Bbinutilsd][^.]* \([0-9][0-9]*\.[0-9.]*\)],
- 		  [2.1[0-9][0-9]*|2.2[2-9]*|2.[3-9][0-9]*|[3-9].*|[1-9][0-9]*],
- 		  LD=: critic_missing="$critic_missing ld")
- 
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0012-sysdeps-gnu-configure.ac-handle-correctly-libc_cv_ro.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0012-sysdeps-gnu-configure.ac-handle-correctly-libc_cv_ro.patch
new file mode 100644
index 0000000..2bf6b23
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0012-sysdeps-gnu-configure.ac-handle-correctly-libc_cv_ro.patch
@@ -0,0 +1,42 @@
+From ffd3c5a04d8f2f26fea71fed4ce41e88b6f51086 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 18 Mar 2015 00:27:10 +0000
+Subject: [PATCH 12/25] sysdeps/gnu/configure.ac: handle correctly
+ $libc_cv_rootsbindir
+
+Upstream-Status:Pending
+
+Signed-off-by: Matthieu Crapet <Matthieu.Crapet@ingenico.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ sysdeps/gnu/configure    | 2 +-
+ sysdeps/gnu/configure.ac | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/sysdeps/gnu/configure b/sysdeps/gnu/configure
+index 71243ad0c6..f578187576 100644
+--- a/sysdeps/gnu/configure
++++ b/sysdeps/gnu/configure
+@@ -32,6 +32,6 @@ case "$prefix" in
+   else
+     libc_cv_localstatedir=$localstatedir
+    fi
+-  libc_cv_rootsbindir=/sbin
++  test -n "$libc_cv_rootsbindir" || libc_cv_rootsbindir=/sbin
+   ;;
+ esac
+diff --git a/sysdeps/gnu/configure.ac b/sysdeps/gnu/configure.ac
+index 634fe4de2a..3db1697f4f 100644
+--- a/sysdeps/gnu/configure.ac
++++ b/sysdeps/gnu/configure.ac
+@@ -21,6 +21,6 @@ case "$prefix" in
+   else
+     libc_cv_localstatedir=$localstatedir
+    fi
+-  libc_cv_rootsbindir=/sbin
++  test -n "$libc_cv_rootsbindir" || libc_cv_rootsbindir=/sbin
+   ;;
+ esac
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0013-Add-unused-attribute.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0013-Add-unused-attribute.patch
new file mode 100644
index 0000000..099fe50
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0013-Add-unused-attribute.patch
@@ -0,0 +1,34 @@
+From 049cce82f35e0d864d98075b83888dbba4d68afd Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 18 Mar 2015 00:28:41 +0000
+Subject: [PATCH 13/25] Add unused attribute
+
+Helps in avoiding gcc warning when header is is included in
+a source file which does not use both functions
+
+        * iconv/gconv_charset.h (strip):
+        Add unused attribute.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
+---
+ iconv/gconv_charset.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/iconv/gconv_charset.h b/iconv/gconv_charset.h
+index 18d8bd6ae7..eb729da5d3 100644
+--- a/iconv/gconv_charset.h
++++ b/iconv/gconv_charset.h
+@@ -21,7 +21,7 @@
+ #include <locale.h>
+ 
+ 
+-static void
++static void __attribute__ ((unused))
+ strip (char *wp, const char *s)
+ {
+   int slash_count = 0;
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0013-sysdeps-gnu-configure.ac-handle-correctly-libc_cv_ro.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0013-sysdeps-gnu-configure.ac-handle-correctly-libc_cv_ro.patch
deleted file mode 100644
index 1c81c72..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0013-sysdeps-gnu-configure.ac-handle-correctly-libc_cv_ro.patch
+++ /dev/null
@@ -1,42 +0,0 @@
-From 2a12eadfd7940b6b0913de8e95d851254cce7953 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 18 Mar 2015 00:27:10 +0000
-Subject: [PATCH 13/26] sysdeps/gnu/configure.ac: handle correctly
- $libc_cv_rootsbindir
-
-Upstream-Status:Pending
-
-Signed-off-by: Matthieu Crapet <Matthieu.Crapet@ingenico.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- sysdeps/gnu/configure    | 2 +-
- sysdeps/gnu/configure.ac | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/sysdeps/gnu/configure b/sysdeps/gnu/configure
-index 71243ad..f578187 100644
---- a/sysdeps/gnu/configure
-+++ b/sysdeps/gnu/configure
-@@ -32,6 +32,6 @@ case "$prefix" in
-   else
-     libc_cv_localstatedir=$localstatedir
-    fi
--  libc_cv_rootsbindir=/sbin
-+  test -n "$libc_cv_rootsbindir" || libc_cv_rootsbindir=/sbin
-   ;;
- esac
-diff --git a/sysdeps/gnu/configure.ac b/sysdeps/gnu/configure.ac
-index 634fe4d..3db1697 100644
---- a/sysdeps/gnu/configure.ac
-+++ b/sysdeps/gnu/configure.ac
-@@ -21,6 +21,6 @@ case "$prefix" in
-   else
-     libc_cv_localstatedir=$localstatedir
-    fi
--  libc_cv_rootsbindir=/sbin
-+  test -n "$libc_cv_rootsbindir" || libc_cv_rootsbindir=/sbin
-   ;;
- esac
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0014-Add-unused-attribute.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0014-Add-unused-attribute.patch
deleted file mode 100644
index b23e104..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0014-Add-unused-attribute.patch
+++ /dev/null
@@ -1,34 +0,0 @@
-From ec4f7763b30603b7ba0b70bd7750e34d442821b3 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 18 Mar 2015 00:28:41 +0000
-Subject: [PATCH 14/26] Add unused attribute
-
-Helps in avoiding gcc warning when header is is included in
-a source file which does not use both functions
-
-        * iconv/gconv_charset.h (strip):
-        Add unused attribute.
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- iconv/gconv_charset.h | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/iconv/gconv_charset.h b/iconv/gconv_charset.h
-index 95cbce7..191a0dd 100644
---- a/iconv/gconv_charset.h
-+++ b/iconv/gconv_charset.h
-@@ -21,7 +21,7 @@
- #include <locale.h>
- 
- 
--static void
-+static void __attribute__ ((unused))
- strip (char *wp, const char *s)
- {
-   int slash_count = 0;
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0014-yes-within-the-path-sets-wrong-config-variables.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0014-yes-within-the-path-sets-wrong-config-variables.patch
new file mode 100644
index 0000000..ddc70e0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0014-yes-within-the-path-sets-wrong-config-variables.patch
@@ -0,0 +1,263 @@
+From 3b904bee81a1cfe81e3f437b5f3296efd54a51ac Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 18 Mar 2015 00:31:06 +0000
+Subject: [PATCH 14/25] 'yes' within the path sets wrong config variables
+
+It seems that the 'AC_EGREP_CPP(yes...' example is quite popular
+but being such a short word to grep it is likely to produce
+false-positive matches with the path it is configured into.
+
+The change is to use a more elaborated string to grep for.
+
+Upstream-Status: Submitted [libc-alpha@sourceware.org]
+
+Signed-off-by: Benjamin Esquivel <benjamin.esquivel@linux.intel.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ sysdeps/aarch64/configure                              | 4 ++--
+ sysdeps/aarch64/configure.ac                           | 4 ++--
+ sysdeps/arm/configure                                  | 4 ++--
+ sysdeps/arm/configure.ac                               | 4 ++--
+ sysdeps/mips/configure                                 | 4 ++--
+ sysdeps/mips/configure.ac                              | 4 ++--
+ sysdeps/nios2/configure                                | 4 ++--
+ sysdeps/nios2/configure.ac                             | 4 ++--
+ sysdeps/unix/sysv/linux/mips/configure                 | 4 ++--
+ sysdeps/unix/sysv/linux/mips/configure.ac              | 4 ++--
+ sysdeps/unix/sysv/linux/powerpc/powerpc64/configure    | 8 ++++----
+ sysdeps/unix/sysv/linux/powerpc/powerpc64/configure.ac | 8 ++++----
+ 12 files changed, 28 insertions(+), 28 deletions(-)
+
+diff --git a/sysdeps/aarch64/configure b/sysdeps/aarch64/configure
+index 5bd355a691..3bc5537bc0 100644
+--- a/sysdeps/aarch64/configure
++++ b/sysdeps/aarch64/configure
+@@ -148,12 +148,12 @@ else
+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ #ifdef __AARCH64EB__
+-                      yes
++                      is_aarch64_be
+                      #endif
+ 
+ _ACEOF
+ if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+-  $EGREP "yes" >/dev/null 2>&1; then :
++  $EGREP "is_aarch64_be" >/dev/null 2>&1; then :
+   libc_cv_aarch64_be=yes
+ else
+   libc_cv_aarch64_be=no
+diff --git a/sysdeps/aarch64/configure.ac b/sysdeps/aarch64/configure.ac
+index 7851dd4dac..6e9238171f 100644
+--- a/sysdeps/aarch64/configure.ac
++++ b/sysdeps/aarch64/configure.ac
+@@ -10,8 +10,8 @@ GLIBC_PROVIDES dnl See aclocal.m4 in the top level source directory.
+ # the dynamic linker via %ifdef.
+ AC_CACHE_CHECK([for big endian],
+   [libc_cv_aarch64_be],
+-  [AC_EGREP_CPP(yes,[#ifdef __AARCH64EB__
+-                      yes
++  [AC_EGREP_CPP(is_aarch64_be,[#ifdef __AARCH64EB__
++                      is_aarch64_be
+                      #endif
+   ], libc_cv_aarch64_be=yes, libc_cv_aarch64_be=no)])
+ if test $libc_cv_aarch64_be = yes; then
+diff --git a/sysdeps/arm/configure b/sysdeps/arm/configure
+index 431e843b2b..e152461138 100644
+--- a/sysdeps/arm/configure
++++ b/sysdeps/arm/configure
+@@ -151,12 +151,12 @@ else
+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ #ifdef __ARM_PCS_VFP
+-		      yes
++		      use_arm_pcs_vfp
+ 		     #endif
+ 
+ _ACEOF
+ if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+-  $EGREP "yes" >/dev/null 2>&1; then :
++  $EGREP "use_arm_pcs_vfp" >/dev/null 2>&1; then :
+   libc_cv_arm_pcs_vfp=yes
+ else
+   libc_cv_arm_pcs_vfp=no
+diff --git a/sysdeps/arm/configure.ac b/sysdeps/arm/configure.ac
+index 90cdd69c75..05a262ba00 100644
+--- a/sysdeps/arm/configure.ac
++++ b/sysdeps/arm/configure.ac
+@@ -15,8 +15,8 @@ AC_DEFINE(PI_STATIC_AND_HIDDEN)
+ # the dynamic linker via %ifdef.
+ AC_CACHE_CHECK([whether the compiler is using the ARM hard-float ABI],
+   [libc_cv_arm_pcs_vfp],
+-  [AC_EGREP_CPP(yes,[#ifdef __ARM_PCS_VFP
+-		      yes
++  [AC_EGREP_CPP(use_arm_pcs_vfp,[#ifdef __ARM_PCS_VFP
++		      use_arm_pcs_vfp
+ 		     #endif
+   ], libc_cv_arm_pcs_vfp=yes, libc_cv_arm_pcs_vfp=no)])
+ if test $libc_cv_arm_pcs_vfp = yes; then
+diff --git a/sysdeps/mips/configure b/sysdeps/mips/configure
+index 4e13248c03..f14af952d0 100644
+--- a/sysdeps/mips/configure
++++ b/sysdeps/mips/configure
+@@ -143,11 +143,11 @@ else
+ /* end confdefs.h.  */
+ dnl
+ #ifdef __mips_nan2008
+-yes
++use_mips_nan2008
+ #endif
+ _ACEOF
+ if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+-  $EGREP "yes" >/dev/null 2>&1; then :
++  $EGREP "use_mips_nan2008" >/dev/null 2>&1; then :
+   libc_cv_mips_nan2008=yes
+ else
+   libc_cv_mips_nan2008=no
+diff --git a/sysdeps/mips/configure.ac b/sysdeps/mips/configure.ac
+index bcbdaffd9f..ad3057f4cc 100644
+--- a/sysdeps/mips/configure.ac
++++ b/sysdeps/mips/configure.ac
+@@ -6,9 +6,9 @@ dnl position independent way.
+ dnl AC_DEFINE(PI_STATIC_AND_HIDDEN)
+ 
+ AC_CACHE_CHECK([whether the compiler is using the 2008 NaN encoding],
+-  libc_cv_mips_nan2008, [AC_EGREP_CPP(yes, [dnl
++  libc_cv_mips_nan2008, [AC_EGREP_CPP(use_mips_nan2008, [dnl
+ #ifdef __mips_nan2008
+-yes
++use_mips_nan2008
+ #endif], libc_cv_mips_nan2008=yes, libc_cv_mips_nan2008=no)])
+ if test x$libc_cv_mips_nan2008 = xyes; then
+   AC_DEFINE(HAVE_MIPS_NAN2008)
+diff --git a/sysdeps/nios2/configure b/sysdeps/nios2/configure
+index 14c8a3a014..dde3814ef2 100644
+--- a/sysdeps/nios2/configure
++++ b/sysdeps/nios2/configure
+@@ -142,12 +142,12 @@ else
+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ #ifdef __nios2_big_endian__
+-                      yes
++                      is_nios2_be
+                      #endif
+ 
+ _ACEOF
+ if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+-  $EGREP "yes" >/dev/null 2>&1; then :
++  $EGREP "is_nios2_be" >/dev/null 2>&1; then :
+   libc_cv_nios2_be=yes
+ else
+   libc_cv_nios2_be=no
+diff --git a/sysdeps/nios2/configure.ac b/sysdeps/nios2/configure.ac
+index f05f43802b..dc8639902d 100644
+--- a/sysdeps/nios2/configure.ac
++++ b/sysdeps/nios2/configure.ac
+@@ -4,8 +4,8 @@ GLIBC_PROVIDES dnl See aclocal.m4 in the top level source directory.
+ # Nios II big endian is not yet supported.
+ AC_CACHE_CHECK([for big endian],
+   [libc_cv_nios2_be],
+-  [AC_EGREP_CPP(yes,[#ifdef __nios2_big_endian__
+-                      yes
++  [AC_EGREP_CPP(is_nios2_be,[#ifdef __nios2_big_endian__
++                      is_nios2_be
+                      #endif
+   ], libc_cv_nios2_be=yes, libc_cv_nios2_be=no)])
+ if test $libc_cv_nios2_be = yes; then
+diff --git a/sysdeps/unix/sysv/linux/mips/configure b/sysdeps/unix/sysv/linux/mips/configure
+index a5513fad48..283b293ff3 100644
+--- a/sysdeps/unix/sysv/linux/mips/configure
++++ b/sysdeps/unix/sysv/linux/mips/configure
+@@ -414,11 +414,11 @@ else
+ /* end confdefs.h.  */
+ dnl
+ #ifdef __mips_nan2008
+-yes
++use_mips_nan2008
+ #endif
+ _ACEOF
+ if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+-  $EGREP "yes" >/dev/null 2>&1; then :
++  $EGREP "use_mips_nan2008" >/dev/null 2>&1; then :
+   libc_cv_mips_nan2008=yes
+ else
+   libc_cv_mips_nan2008=no
+diff --git a/sysdeps/unix/sysv/linux/mips/configure.ac b/sysdeps/unix/sysv/linux/mips/configure.ac
+index 9147aa4582..7898e24738 100644
+--- a/sysdeps/unix/sysv/linux/mips/configure.ac
++++ b/sysdeps/unix/sysv/linux/mips/configure.ac
+@@ -105,9 +105,9 @@ AC_COMPILE_IFELSE(
+ LIBC_CONFIG_VAR([mips-mode-switch],[${libc_mips_mode_switch}])
+ 
+ AC_CACHE_CHECK([whether the compiler is using the 2008 NaN encoding],
+-  libc_cv_mips_nan2008, [AC_EGREP_CPP(yes, [dnl
++  libc_cv_mips_nan2008, [AC_EGREP_CPP(use_mips_nan2008, [dnl
+ #ifdef __mips_nan2008
+-yes
++use_mips_nan2008
+ #endif], libc_cv_mips_nan2008=yes, libc_cv_mips_nan2008=no)])
+ 
+ libc_mips_nan=
+diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure b/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure
+index 4e7fcf1d97..44a9cb3791 100644
+--- a/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure
++++ b/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure
+@@ -155,12 +155,12 @@ else
+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ #if _CALL_ELF == 2
+-                      yes
++                      use_ppc_elfv2_abi
+                      #endif
+ 
+ _ACEOF
+ if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+-  $EGREP "yes" >/dev/null 2>&1; then :
++  $EGREP "use_ppc_elfv2_abi" >/dev/null 2>&1; then :
+   libc_cv_ppc64_elfv2_abi=yes
+ else
+   libc_cv_ppc64_elfv2_abi=no
+@@ -188,12 +188,12 @@ else
+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ #ifdef _CALL_ELF
+-                         yes
++                         is_def_call_elf
+                        #endif
+ 
+ _ACEOF
+ if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
+-  $EGREP "yes" >/dev/null 2>&1; then :
++  $EGREP "is_def_call_elf" >/dev/null 2>&1; then :
+   libc_cv_ppc64_def_call_elf=yes
+ else
+   libc_cv_ppc64_def_call_elf=no
+diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure.ac b/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure.ac
+index f9cba6e15d..b21f72f1e4 100644
+--- a/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure.ac
++++ b/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure.ac
+@@ -6,8 +6,8 @@ LIBC_SLIBDIR_RTLDDIR([lib64], [lib64])
+ # Define default-abi according to compiler flags.
+ AC_CACHE_CHECK([whether the compiler is using the PowerPC64 ELFv2 ABI],
+   [libc_cv_ppc64_elfv2_abi],
+-  [AC_EGREP_CPP(yes,[#if _CALL_ELF == 2
+-                      yes
++  [AC_EGREP_CPP(use_ppc_elfv2_abi,[#if _CALL_ELF == 2
++                      use_ppc_elfv2_abi
+                      #endif
+   ], libc_cv_ppc64_elfv2_abi=yes, libc_cv_ppc64_elfv2_abi=no)])
+ if test $libc_cv_ppc64_elfv2_abi = yes; then
+@@ -19,8 +19,8 @@ else
+   # Compiler that do not support ELFv2 ABI does not define _CALL_ELF
+   AC_CACHE_CHECK([whether the compiler defines _CALL_ELF],
+     [libc_cv_ppc64_def_call_elf],
+-    [AC_EGREP_CPP(yes,[#ifdef _CALL_ELF
+-                         yes
++    [AC_EGREP_CPP(is_def_call_elf,[#ifdef _CALL_ELF
++                         is_def_call_elf
+                        #endif
+     ], libc_cv_ppc64_def_call_elf=yes, libc_cv_ppc64_def_call_elf=no)])
+   if test $libc_cv_ppc64_def_call_elf = no; then
+-- 
+2.13.2
+
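The new patch above replaces every AC_EGREP_CPP(yes, ...) probe with a token
that cannot occur by accident in preprocessor output. The sketch below shows
the shape of the check the configure fragments expand to; the compiler
invocation and the cache variable are illustrative only, not copied from
glibc. The false positive arises because the literal string "yes" may appear
elsewhere in the preprocessed output, for example in the line markers of
automatically included headers when the sysroot or build path happens to
contain "yes"; a distinctive token such as use_arm_pcs_vfp cannot match that
way.

    #!/bin/sh
    # Sketch of the AC_EGREP_CPP-style probe: preprocess, then grep for the token.
    cat > conftest.c <<'EOF'
    #ifdef __ARM_PCS_VFP
    use_arm_pcs_vfp
    #endif
    EOF
    if ${CC:-cc} -E conftest.c 2>/dev/null | grep use_arm_pcs_vfp >/dev/null; then
        libc_cv_arm_pcs_vfp=yes
    else
        libc_cv_arm_pcs_vfp=no
    fi
    echo "libc_cv_arm_pcs_vfp=$libc_cv_arm_pcs_vfp"
    rm -f conftest.c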
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0015-timezone-re-written-tzselect-as-posix-sh.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0015-timezone-re-written-tzselect-as-posix-sh.patch
new file mode 100644
index 0000000..b5feffa
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0015-timezone-re-written-tzselect-as-posix-sh.patch
@@ -0,0 +1,45 @@
+From b8cb8cb242cb751d888feb1ada5c4d0f05cbc1d7 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 18 Mar 2015 00:33:03 +0000
+Subject: [PATCH 15/25] timezone: re-written tzselect as posix sh
+
+To avoid the bash dependency.
+
+Upstream-Status: Pending
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ timezone/Makefile     | 2 +-
+ timezone/tzselect.ksh | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/timezone/Makefile b/timezone/Makefile
+index d6cc7ba357..e4ead6e1a7 100644
+--- a/timezone/Makefile
++++ b/timezone/Makefile
+@@ -122,7 +122,7 @@ $(testdata)/XT%: testdata/XT%
+ 	cp $< $@
+ 
+ $(objpfx)tzselect: tzselect.ksh $(common-objpfx)config.make
+-	sed -e 's|/bin/bash|$(BASH)|' \
++	sed -e 's|/bin/bash|/bin/sh|' \
+ 	    -e 's|TZDIR=[^}]*|TZDIR=$(zonedir)|' \
+ 	    -e '/TZVERSION=/s|see_Makefile|"$(version)"|' \
+ 	    -e '/PKGVERSION=/s|=.*|="$(PKGVERSION)"|' \
+diff --git a/timezone/tzselect.ksh b/timezone/tzselect.ksh
+index d2c3a6d1dd..089679f306 100755
+--- a/timezone/tzselect.ksh
++++ b/timezone/tzselect.ksh
+@@ -35,7 +35,7 @@ REPORT_BUGS_TO=tz@iana.org
+ 
+ # Specify default values for environment variables if they are unset.
+ : ${AWK=awk}
+-: ${TZDIR=`pwd`}
++: ${TZDIR=$(pwd)}
+ 
+ # Output one argument as-is to standard output.
+ # Safer than 'echo', which can mishandle '\' or leading '-'.
+-- 
+2.13.2
+
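Both constructs the re-written tzselect relies on are plain POSIX sh, so bash
is no longer needed at run time. A minimal sketch of the idioms used in the
hunk above (the final echo is only for illustration):

    #!/bin/sh
    : ${AWK=awk}           # assign a default only if AWK is unset
    : ${TZDIR=$(pwd)}      # $(...) command substitution replaces the `pwd` backticks
    echo "AWK=$AWK TZDIR=$TZDIR"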
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0015-yes-within-the-path-sets-wrong-config-variables.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0015-yes-within-the-path-sets-wrong-config-variables.patch
deleted file mode 100644
index 98d425a..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0015-yes-within-the-path-sets-wrong-config-variables.patch
+++ /dev/null
@@ -1,263 +0,0 @@
-From 18d64951cbb68d8d75e8ef347cbd0e0a5c14604b Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 18 Mar 2015 00:31:06 +0000
-Subject: [PATCH 15/26] 'yes' within the path sets wrong config variables
-
-It seems that the 'AC_EGREP_CPP(yes...' example is quite popular
-but being such a short word to grep it is likely to produce
-false-positive matches with the path it is configured into.
-
-The change is to use a more elaborated string to grep for.
-
-Upstream-Status: Submitted [libc-alpha@sourceware.org]
-
-Signed-off-by: Benjamin Esquivel <benjamin.esquivel@linux.intel.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- sysdeps/aarch64/configure                              | 4 ++--
- sysdeps/aarch64/configure.ac                           | 4 ++--
- sysdeps/arm/configure                                  | 4 ++--
- sysdeps/arm/configure.ac                               | 4 ++--
- sysdeps/mips/configure                                 | 4 ++--
- sysdeps/mips/configure.ac                              | 4 ++--
- sysdeps/nios2/configure                                | 4 ++--
- sysdeps/nios2/configure.ac                             | 4 ++--
- sysdeps/unix/sysv/linux/mips/configure                 | 4 ++--
- sysdeps/unix/sysv/linux/mips/configure.ac              | 4 ++--
- sysdeps/unix/sysv/linux/powerpc/powerpc64/configure    | 8 ++++----
- sysdeps/unix/sysv/linux/powerpc/powerpc64/configure.ac | 8 ++++----
- 12 files changed, 28 insertions(+), 28 deletions(-)
-
-diff --git a/sysdeps/aarch64/configure b/sysdeps/aarch64/configure
-index 5bd355a..3bc5537 100644
---- a/sysdeps/aarch64/configure
-+++ b/sysdeps/aarch64/configure
-@@ -148,12 +148,12 @@ else
-   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- #ifdef __AARCH64EB__
--                      yes
-+                      is_aarch64_be
-                      #endif
- 
- _ACEOF
- if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
--  $EGREP "yes" >/dev/null 2>&1; then :
-+  $EGREP "is_aarch64_be" >/dev/null 2>&1; then :
-   libc_cv_aarch64_be=yes
- else
-   libc_cv_aarch64_be=no
-diff --git a/sysdeps/aarch64/configure.ac b/sysdeps/aarch64/configure.ac
-index 7851dd4..6e92381 100644
---- a/sysdeps/aarch64/configure.ac
-+++ b/sysdeps/aarch64/configure.ac
-@@ -10,8 +10,8 @@ GLIBC_PROVIDES dnl See aclocal.m4 in the top level source directory.
- # the dynamic linker via %ifdef.
- AC_CACHE_CHECK([for big endian],
-   [libc_cv_aarch64_be],
--  [AC_EGREP_CPP(yes,[#ifdef __AARCH64EB__
--                      yes
-+  [AC_EGREP_CPP(is_aarch64_be,[#ifdef __AARCH64EB__
-+                      is_aarch64_be
-                      #endif
-   ], libc_cv_aarch64_be=yes, libc_cv_aarch64_be=no)])
- if test $libc_cv_aarch64_be = yes; then
-diff --git a/sysdeps/arm/configure b/sysdeps/arm/configure
-index 431e843..e152461 100644
---- a/sysdeps/arm/configure
-+++ b/sysdeps/arm/configure
-@@ -151,12 +151,12 @@ else
-   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- #ifdef __ARM_PCS_VFP
--		      yes
-+		      use_arm_pcs_vfp
- 		     #endif
- 
- _ACEOF
- if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
--  $EGREP "yes" >/dev/null 2>&1; then :
-+  $EGREP "use_arm_pcs_vfp" >/dev/null 2>&1; then :
-   libc_cv_arm_pcs_vfp=yes
- else
-   libc_cv_arm_pcs_vfp=no
-diff --git a/sysdeps/arm/configure.ac b/sysdeps/arm/configure.ac
-index 90cdd69..05a262b 100644
---- a/sysdeps/arm/configure.ac
-+++ b/sysdeps/arm/configure.ac
-@@ -15,8 +15,8 @@ AC_DEFINE(PI_STATIC_AND_HIDDEN)
- # the dynamic linker via %ifdef.
- AC_CACHE_CHECK([whether the compiler is using the ARM hard-float ABI],
-   [libc_cv_arm_pcs_vfp],
--  [AC_EGREP_CPP(yes,[#ifdef __ARM_PCS_VFP
--		      yes
-+  [AC_EGREP_CPP(use_arm_pcs_vfp,[#ifdef __ARM_PCS_VFP
-+		      use_arm_pcs_vfp
- 		     #endif
-   ], libc_cv_arm_pcs_vfp=yes, libc_cv_arm_pcs_vfp=no)])
- if test $libc_cv_arm_pcs_vfp = yes; then
-diff --git a/sysdeps/mips/configure b/sysdeps/mips/configure
-index 4e13248..f14af95 100644
---- a/sysdeps/mips/configure
-+++ b/sysdeps/mips/configure
-@@ -143,11 +143,11 @@ else
- /* end confdefs.h.  */
- dnl
- #ifdef __mips_nan2008
--yes
-+use_mips_nan2008
- #endif
- _ACEOF
- if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
--  $EGREP "yes" >/dev/null 2>&1; then :
-+  $EGREP "use_mips_nan2008" >/dev/null 2>&1; then :
-   libc_cv_mips_nan2008=yes
- else
-   libc_cv_mips_nan2008=no
-diff --git a/sysdeps/mips/configure.ac b/sysdeps/mips/configure.ac
-index bcbdaff..ad3057f 100644
---- a/sysdeps/mips/configure.ac
-+++ b/sysdeps/mips/configure.ac
-@@ -6,9 +6,9 @@ dnl position independent way.
- dnl AC_DEFINE(PI_STATIC_AND_HIDDEN)
- 
- AC_CACHE_CHECK([whether the compiler is using the 2008 NaN encoding],
--  libc_cv_mips_nan2008, [AC_EGREP_CPP(yes, [dnl
-+  libc_cv_mips_nan2008, [AC_EGREP_CPP(use_mips_nan2008, [dnl
- #ifdef __mips_nan2008
--yes
-+use_mips_nan2008
- #endif], libc_cv_mips_nan2008=yes, libc_cv_mips_nan2008=no)])
- if test x$libc_cv_mips_nan2008 = xyes; then
-   AC_DEFINE(HAVE_MIPS_NAN2008)
-diff --git a/sysdeps/nios2/configure b/sysdeps/nios2/configure
-index 14c8a3a..dde3814 100644
---- a/sysdeps/nios2/configure
-+++ b/sysdeps/nios2/configure
-@@ -142,12 +142,12 @@ else
-   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- #ifdef __nios2_big_endian__
--                      yes
-+                      is_nios2_be
-                      #endif
- 
- _ACEOF
- if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
--  $EGREP "yes" >/dev/null 2>&1; then :
-+  $EGREP "is_nios2_be" >/dev/null 2>&1; then :
-   libc_cv_nios2_be=yes
- else
-   libc_cv_nios2_be=no
-diff --git a/sysdeps/nios2/configure.ac b/sysdeps/nios2/configure.ac
-index f05f438..dc86399 100644
---- a/sysdeps/nios2/configure.ac
-+++ b/sysdeps/nios2/configure.ac
-@@ -4,8 +4,8 @@ GLIBC_PROVIDES dnl See aclocal.m4 in the top level source directory.
- # Nios II big endian is not yet supported.
- AC_CACHE_CHECK([for big endian],
-   [libc_cv_nios2_be],
--  [AC_EGREP_CPP(yes,[#ifdef __nios2_big_endian__
--                      yes
-+  [AC_EGREP_CPP(is_nios2_be,[#ifdef __nios2_big_endian__
-+                      is_nios2_be
-                      #endif
-   ], libc_cv_nios2_be=yes, libc_cv_nios2_be=no)])
- if test $libc_cv_nios2_be = yes; then
-diff --git a/sysdeps/unix/sysv/linux/mips/configure b/sysdeps/unix/sysv/linux/mips/configure
-index a5513fa..283b293 100644
---- a/sysdeps/unix/sysv/linux/mips/configure
-+++ b/sysdeps/unix/sysv/linux/mips/configure
-@@ -414,11 +414,11 @@ else
- /* end confdefs.h.  */
- dnl
- #ifdef __mips_nan2008
--yes
-+use_mips_nan2008
- #endif
- _ACEOF
- if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
--  $EGREP "yes" >/dev/null 2>&1; then :
-+  $EGREP "use_mips_nan2008" >/dev/null 2>&1; then :
-   libc_cv_mips_nan2008=yes
- else
-   libc_cv_mips_nan2008=no
-diff --git a/sysdeps/unix/sysv/linux/mips/configure.ac b/sysdeps/unix/sysv/linux/mips/configure.ac
-index 9147aa4..7898e24 100644
---- a/sysdeps/unix/sysv/linux/mips/configure.ac
-+++ b/sysdeps/unix/sysv/linux/mips/configure.ac
-@@ -105,9 +105,9 @@ AC_COMPILE_IFELSE(
- LIBC_CONFIG_VAR([mips-mode-switch],[${libc_mips_mode_switch}])
- 
- AC_CACHE_CHECK([whether the compiler is using the 2008 NaN encoding],
--  libc_cv_mips_nan2008, [AC_EGREP_CPP(yes, [dnl
-+  libc_cv_mips_nan2008, [AC_EGREP_CPP(use_mips_nan2008, [dnl
- #ifdef __mips_nan2008
--yes
-+use_mips_nan2008
- #endif], libc_cv_mips_nan2008=yes, libc_cv_mips_nan2008=no)])
- 
- libc_mips_nan=
-diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure b/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure
-index af06970..27b8c1b 100644
---- a/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure
-+++ b/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure
-@@ -155,12 +155,12 @@ else
-   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- #if _CALL_ELF == 2
--                      yes
-+                      use_ppc_elfv2_abi
-                      #endif
- 
- _ACEOF
- if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
--  $EGREP "yes" >/dev/null 2>&1; then :
-+  $EGREP "use_ppc_elfv2_abi" >/dev/null 2>&1; then :
-   libc_cv_ppc64_elfv2_abi=yes
- else
-   libc_cv_ppc64_elfv2_abi=no
-@@ -188,12 +188,12 @@ else
-   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- #ifdef _CALL_ELF
--                         yes
-+                         is_def_call_elf
-                        #endif
- 
- _ACEOF
- if (eval "$ac_cpp conftest.$ac_ext") 2>&5 |
--  $EGREP "yes" >/dev/null 2>&1; then :
-+  $EGREP "is_def_call_elf" >/dev/null 2>&1; then :
-   libc_cv_ppc64_def_call_elf=yes
- else
-   libc_cv_ppc64_def_call_elf=no
-diff --git a/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure.ac b/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure.ac
-index 0822915..9a32fdd 100644
---- a/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure.ac
-+++ b/sysdeps/unix/sysv/linux/powerpc/powerpc64/configure.ac
-@@ -6,8 +6,8 @@ LIBC_SLIBDIR_RTLDDIR([lib64], [lib64])
- # Define default-abi according to compiler flags.
- AC_CACHE_CHECK([whether the compiler is using the PowerPC64 ELFv2 ABI],
-   [libc_cv_ppc64_elfv2_abi],
--  [AC_EGREP_CPP(yes,[#if _CALL_ELF == 2
--                      yes
-+  [AC_EGREP_CPP(use_ppc_elfv2_abi,[#if _CALL_ELF == 2
-+                      use_ppc_elfv2_abi
-                      #endif
-   ], libc_cv_ppc64_elfv2_abi=yes, libc_cv_ppc64_elfv2_abi=no)])
- if test $libc_cv_ppc64_elfv2_abi = yes; then
-@@ -19,8 +19,8 @@ else
-   # Compiler that do not support ELFv2 ABI does not define _CALL_ELF
-   AC_CACHE_CHECK([whether the compiler defines _CALL_ELF],
-     [libc_cv_ppc64_def_call_elf],
--    [AC_EGREP_CPP(yes,[#ifdef _CALL_ELF
--                         yes
-+    [AC_EGREP_CPP(is_def_call_elf,[#ifdef _CALL_ELF
-+                         is_def_call_elf
-                        #endif
-     ], libc_cv_ppc64_def_call_elf=yes, libc_cv_ppc64_def_call_elf=no)])
-   if test $libc_cv_ppc64_def_call_elf = no; then
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0016-Remove-bash-dependency-for-nscd-init-script.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0016-Remove-bash-dependency-for-nscd-init-script.patch
new file mode 100644
index 0000000..1d9983b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0016-Remove-bash-dependency-for-nscd-init-script.patch
@@ -0,0 +1,75 @@
+From 69d378001adfe9a359d2f4b069c1ed2d36de4480 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Thu, 31 Dec 2015 14:33:02 -0800
+Subject: [PATCH 16/25] Remove bash dependency for nscd init script
+
+The nscd init script uses #!/bin/bash but only really relies on one bashism
+(translated strings), so drop it and switch the shell to #!/bin/sh.
+
+Upstream-Status: Pending
+
+Signed-off-by: Ross Burton <ross.burton@intel.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ nscd/nscd.init | 14 +++++++-------
+ 1 file changed, 7 insertions(+), 7 deletions(-)
+
+diff --git a/nscd/nscd.init b/nscd/nscd.init
+index a882da7d8b..b02986ec15 100644
+--- a/nscd/nscd.init
++++ b/nscd/nscd.init
+@@ -1,4 +1,4 @@
+-#!/bin/bash
++#!/bin/sh
+ #
+ # nscd:		Starts the Name Switch Cache Daemon
+ #
+@@ -49,7 +49,7 @@ prog=nscd
+ start () {
+     [ -d /var/run/nscd ] || mkdir /var/run/nscd
+     [ -d /var/db/nscd ] || mkdir /var/db/nscd
+-    echo -n $"Starting $prog: "
++    echo -n "Starting $prog: "
+     daemon /usr/sbin/nscd
+     RETVAL=$?
+     echo
+@@ -58,7 +58,7 @@ start () {
+ }
+ 
+ stop () {
+-    echo -n $"Stopping $prog: "
++    echo -n "Stopping $prog: "
+     /usr/sbin/nscd -K
+     RETVAL=$?
+     if [ $RETVAL -eq 0 ]; then
+@@ -67,9 +67,9 @@ stop () {
+ 	# a non-privileged user
+ 	rm -f /var/run/nscd/nscd.pid
+ 	rm -f /var/run/nscd/socket
+-       	success $"$prog shutdown"
++	success "$prog shutdown"
+     else
+-       	failure $"$prog shutdown"
++	failure "$prog shutdown"
+     fi
+     echo
+     return $RETVAL
+@@ -103,13 +103,13 @@ case "$1" in
+ 	RETVAL=$?
+ 	;;
+     force-reload | reload)
+-    	echo -n $"Reloading $prog: "
++	echo -n "Reloading $prog: "
+ 	killproc /usr/sbin/nscd -HUP
+ 	RETVAL=$?
+ 	echo
+ 	;;
+     *)
+-	echo $"Usage: $0 {start|stop|status|restart|reload|condrestart}"
++	echo "Usage: $0 {start|stop|status|restart|reload|condrestart}"
+ 	RETVAL=1
+ 	;;
+ esac
+-- 
+2.13.2
+
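The one bashism the init script used was the $"..." form, which bash
translates through the locale's message catalog. In a plain POSIX shell the
dollar sign before the opening quote is not special and is printed literally,
which is why the patch simply drops it. A small sketch, hypothetical and not
part of the patch:

    #!/bin/sh
    prog=nscd
    # bash would translate $"Starting $prog: " if a message catalog existed;
    # /bin/sh would print a stray leading '$'.  The untranslated plain form
    # below behaves the same under either shell.
    printf "Starting %s: " "$prog"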
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0016-timezone-re-written-tzselect-as-posix-sh.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0016-timezone-re-written-tzselect-as-posix-sh.patch
deleted file mode 100644
index 426a2c0..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0016-timezone-re-written-tzselect-as-posix-sh.patch
+++ /dev/null
@@ -1,45 +0,0 @@
-From 2bed515b9f9f613ae0db9b9607d8fa60a4afca5b Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 18 Mar 2015 00:33:03 +0000
-Subject: [PATCH 16/26] timezone: re-written tzselect as posix sh
-
-To avoid the bash dependency.
-
-Upstream-Status: Pending
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- timezone/Makefile     | 2 +-
- timezone/tzselect.ksh | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/timezone/Makefile b/timezone/Makefile
-index dee7568..66a50be 100644
---- a/timezone/Makefile
-+++ b/timezone/Makefile
-@@ -120,7 +120,7 @@ $(testdata)/XT%: testdata/XT%
- 	cp $< $@
- 
- $(objpfx)tzselect: tzselect.ksh $(common-objpfx)config.make
--	sed -e 's|/bin/bash|$(BASH)|' \
-+	sed -e 's|/bin/bash|/bin/sh|' \
- 	    -e 's|TZDIR=[^}]*|TZDIR=$(zonedir)|' \
- 	    -e '/TZVERSION=/s|see_Makefile|"$(version)"|' \
- 	    -e '/PKGVERSION=/s|=.*|="$(PKGVERSION)"|' \
-diff --git a/timezone/tzselect.ksh b/timezone/tzselect.ksh
-index 2c3b2f4..0c04a61 100755
---- a/timezone/tzselect.ksh
-+++ b/timezone/tzselect.ksh
-@@ -35,7 +35,7 @@ REPORT_BUGS_TO=tz@iana.org
- 
- # Specify default values for environment variables if they are unset.
- : ${AWK=awk}
--: ${TZDIR=`pwd`}
-+: ${TZDIR=$(pwd)}
- 
- # Output one argument as-is to standard output.
- # Safer than 'echo', which can mishandle '\' or leading '-'.
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0017-Remove-bash-dependency-for-nscd-init-script.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0017-Remove-bash-dependency-for-nscd-init-script.patch
deleted file mode 100644
index 6c2506c..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0017-Remove-bash-dependency-for-nscd-init-script.patch
+++ /dev/null
@@ -1,73 +0,0 @@
-From c8814875b362efbfd778345d0d2777478bf11a30 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Thu, 31 Dec 2015 14:33:02 -0800
-Subject: [PATCH 17/26] Remove bash dependency for nscd init script
-
-The nscd init script uses #! /bin/bash but only really uses one bashism
-(translated strings), so remove them and switch the shell to #!/bin/sh.
-
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- nscd/nscd.init | 14 +++++++-------
- 1 file changed, 7 insertions(+), 7 deletions(-)
-
-diff --git a/nscd/nscd.init b/nscd/nscd.init
-index a882da7..b02986e 100644
---- a/nscd/nscd.init
-+++ b/nscd/nscd.init
-@@ -1,4 +1,4 @@
--#!/bin/bash
-+#!/bin/sh
- #
- # nscd:		Starts the Name Switch Cache Daemon
- #
-@@ -49,7 +49,7 @@ prog=nscd
- start () {
-     [ -d /var/run/nscd ] || mkdir /var/run/nscd
-     [ -d /var/db/nscd ] || mkdir /var/db/nscd
--    echo -n $"Starting $prog: "
-+    echo -n "Starting $prog: "
-     daemon /usr/sbin/nscd
-     RETVAL=$?
-     echo
-@@ -58,7 +58,7 @@ start () {
- }
- 
- stop () {
--    echo -n $"Stopping $prog: "
-+    echo -n "Stopping $prog: "
-     /usr/sbin/nscd -K
-     RETVAL=$?
-     if [ $RETVAL -eq 0 ]; then
-@@ -67,9 +67,9 @@ stop () {
- 	# a non-privileged user
- 	rm -f /var/run/nscd/nscd.pid
- 	rm -f /var/run/nscd/socket
--       	success $"$prog shutdown"
-+	success "$prog shutdown"
-     else
--       	failure $"$prog shutdown"
-+	failure "$prog shutdown"
-     fi
-     echo
-     return $RETVAL
-@@ -103,13 +103,13 @@ case "$1" in
- 	RETVAL=$?
- 	;;
-     force-reload | reload)
--    	echo -n $"Reloading $prog: "
-+	echo -n "Reloading $prog: "
- 	killproc /usr/sbin/nscd -HUP
- 	RETVAL=$?
- 	echo
- 	;;
-     *)
--	echo $"Usage: $0 {start|stop|status|restart|reload|condrestart}"
-+	echo "Usage: $0 {start|stop|status|restart|reload|condrestart}"
- 	RETVAL=1
- 	;;
- esac
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0017-eglibc-Cross-building-and-testing-instructions.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0017-eglibc-Cross-building-and-testing-instructions.patch
new file mode 100644
index 0000000..3e39d74
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0017-eglibc-Cross-building-and-testing-instructions.patch
@@ -0,0 +1,619 @@
+From cdc88dffa226815e3a218604655459e33dc86483 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 18 Mar 2015 00:42:58 +0000
+Subject: [PATCH 17/25] eglibc: Cross building and testing instructions
+
+Ported from eglibc
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ GLIBC.cross-building | 383 +++++++++++++++++++++++++++++++++++++++++++++++++++
+ GLIBC.cross-testing  | 205 +++++++++++++++++++++++++++
+ 2 files changed, 588 insertions(+)
+ create mode 100644 GLIBC.cross-building
+ create mode 100644 GLIBC.cross-testing
+
+diff --git a/GLIBC.cross-building b/GLIBC.cross-building
+new file mode 100644
+index 0000000000..e6e0da1aaf
+--- /dev/null
++++ b/GLIBC.cross-building
+@@ -0,0 +1,383 @@
++                                                        -*- mode: text -*-
++
++                        Cross-Compiling GLIBC
++                  Jim Blandy <jimb@codesourcery.com>
++
++
++Introduction
++
++Most GNU tools have a simple build procedure: you run their
++'configure' script, and then you run 'make'.  Unfortunately, the
++process of cross-compiling the GNU C library is quite a bit more
++involved:
++
++1) Build a cross-compiler, with certain facilities disabled.
++
++2) Configure the C library using the compiler you built in step 1).
++   Build a few of the C run-time object files, but not the rest of the
++   library.  Install the library's header files and the run-time
++   object files, and create a dummy libc.so.
++
++3) Build a second cross-compiler, using the header files and object
++   files you installed in step 2.
++
++4) Configure, build, and install a fresh C library, using the compiler
++   built in step 3.
++
++5) Build a third cross-compiler, based on the C library built in step 4.
++
++The reason for this complexity is that, although GCC and the GNU C
++library are distributed separately, they are not actually independent
++of each other: GCC requires the C library's headers and some object
++files to compile its own libraries, while the C library depends on
++GCC's libraries.  GLIBC includes features and bug fixes to the stock
++GNU C library that simplify this process, but the fundamental
++interdependency stands.
++
++In this document, we explain how to cross-compile a GLIBC/GCC pair
++from source.  Our intended audience is developers who are already
++familiar with the GNU toolchain and comfortable working with
++cross-development tools.  While we do present a worked example to
++accompany the explanation, for clarity's sake we do not cover many of
++the options available to cross-toolchain users.
++
++
++Preparation
++
++GLIBC requires recent versions of the GNU binutils, GCC, and the
++Linux kernel.  The web page <http://www.eglibc.org/prerequisites>
++documents the current requirements, and lists patches needed for
++certain target architectures.  As of this writing, these build
++instructions have been tested with binutils 2.22.51, GCC 4.6.2,
++and Linux 3.1.
++
++First, let's set some variables, to simplify later commands.  We'll
++build GLIBC and GCC for an ARM target, known to the Linux kernel
++as 'arm', and we'll do the build on an Intel x86_64 Linux box:
++
++    $ build=x86_64-pc-linux-gnu
++    $ host=$build
++    $ target=arm-none-linux-gnueabi
++    $ linux_arch=arm
++
++We're using the aforementioned versions of Binutils, GCC, and Linux:
++
++    $ binutilsv=binutils-2.22.51
++    $ gccv=gcc-4.6.2
++    $ linuxv=linux-3.1
++
++We're carrying out the entire process under '~/cross-build', which
++contains unpacked source trees for binutils, gcc, and linux kernel,
++along with GLIBC svn trunk (which can be checked-out with
++'svn co http://www.eglibc.org/svn/trunk eglibc'):
++
++    $ top=$HOME/cross-build/$target
++    $ src=$HOME/cross-build/src
++    $ ls $src
++    binutils-2.22.51  glibc  gcc-4.6.2  linux-3.1
++
++We're going to place our build directories in a subdirectory 'obj',
++we'll install the cross-development toolchain in 'tools', and we'll
++place our sysroot (containing files to be installed on the target
++system) in 'sysroot':
++
++    $ obj=$top/obj
++    $ tools=$top/tools
++    $ sysroot=$top/sysroot
++
++
++Binutils
++
++Configuring and building binutils for the target is straightforward:
++
++    $ mkdir -p $obj/binutils
++    $ cd $obj/binutils
++    $ $src/$binutilsv/configure \
++    >     --target=$target \
++    >     --prefix=$tools \
++    >     --with-sysroot=$sysroot
++    $ make
++    $ make install
++
++
++The First GCC
++
++For our work, we need a cross-compiler targeting an ARM Linux
++system.  However, that configuration includes the shared library
++'libgcc_s.so', which is compiled against the GLIBC headers (which we
++haven't installed yet) and linked against 'libc.so' (which we haven't
++built yet).
++
++Fortunately, there are configuration options for GCC which tell it not
++to build 'libgcc_s.so'.  The '--without-headers' option is supposed to
++take care of this, but its implementation is incomplete, so you must
++also configure with the '--with-newlib' option.  While '--with-newlib'
++appears to mean "Use the Newlib C library", its effect is to tell the
++GCC build machinery, "Don't assume there is a C library available."
++
++We also need to disable some of the libraries that would normally be
++built along with GCC, and specify that only the compiler for the C
++language is needed.
++
++So, we create a build directory, configure, make, and install.
++
++    $ mkdir -p $obj/gcc1
++    $ cd $obj/gcc1
++    $ $src/$gccv/configure \
++    >     --target=$target \
++    >     --prefix=$tools \
++    >     --without-headers --with-newlib \
++    >     --disable-shared --disable-threads --disable-libssp \
++    >     --disable-libgomp --disable-libmudflap --disable-libquadmath \
++    >     --disable-decimal-float --disable-libffi \
++    >     --enable-languages=c
++    $ PATH=$tools/bin:$PATH make
++    $ PATH=$tools/bin:$PATH make install
++
++
++Linux Kernel Headers
++
++To configure GLIBC, we also need Linux kernel headers in place.
++Fortunately, the Linux makefiles have a target that installs them for
++us.  Since the process does modify the source tree a bit, we make a
++copy first:
++
++    $ cp -r $src/$linuxv $obj/linux
++    $ cd $obj/linux
++
++Now we're ready to install the headers into the sysroot:
++
++    $ PATH=$tools/bin:$PATH \
++    > make headers_install \
++    >      ARCH=$linux_arch CROSS_COMPILE=$target- \
++    >      INSTALL_HDR_PATH=$sysroot/usr
++
++
++GLIBC Headers and Preliminary Objects
++
++Using the cross-compiler we've just built, we can now configure GLIBC
++well enough to install the headers and build the object files that the
++full cross-compiler will need:
++
++    $ mkdir -p $obj/glibc-headers
++    $ cd $obj/glibc-headers
++    $ BUILD_CC=gcc \
++    > CC=$tools/bin/$target-gcc \
++    > CXX=$tools/bin/$target-g++ \
++    > AR=$tools/bin/$target-ar \
++    > RANLIB=$tools/bin/$target-ranlib \
++    > $src/glibc/libc/configure \
++    >     --prefix=/usr \
++    >     --with-headers=$sysroot/usr/include \
++    >     --build=$build \
++    >     --host=$target \
++    >     --disable-profile --without-gd --without-cvs \
++    >     --enable-add-ons=nptl,libidn,../ports
++
++The option '--prefix=/usr' may look strange, but you should never
++configure GLIBC with a prefix other than '/usr': in various places,
++GLIBC's build system checks whether the prefix is '/usr', and does
++special handling only if that is the case.  Unless you use this
++prefix, you will get a sysroot that does not use the standard Linux
++directory layouts and cannot be used as a basis for the root
++filesystem on your target system compatibly with normal GLIBC
++installations.
++
++The '--with-headers' option tells GLIBC where the Linux headers have
++been installed.
++
++The '--enable-add-ons=nptl,libidn,../ports' option tells GLIBC to look
++for the listed glibc add-ons. Most notably the ports add-on (located
++just above the libc sources in the GLIBC svn tree) is required to
++support ARM targets.
++
++We can now use the 'install-headers' makefile target to install the
++headers:
++
++    $ make install-headers install_root=$sysroot \
++    >                      install-bootstrap-headers=yes
++
++The 'install_root' variable indicates where the files should actually
++be installed; its value is treated as the parent of the '--prefix'
++directory we passed to the configure script, so the headers will go in
++'$sysroot/usr/include'.  The 'install-bootstrap-headers' variable
++requests special handling for certain tricky header files.
++
++Next, there are a few object files needed to link shared libraries,
++which we build and install by hand:
++
++    $ mkdir -p $sysroot/usr/lib
++    $ make csu/subdir_lib
++    $ cp csu/crt1.o csu/crti.o csu/crtn.o $sysroot/usr/lib
++
++Finally, 'libgcc_s.so' requires a 'libc.so' to link against.  However,
++since we will never actually execute its code, it doesn't matter what
++it contains.  So, treating '/dev/null' as a C source file, we produce
++a dummy 'libc.so' in one step:
++
++    $ $tools/bin/$target-gcc -nostdlib -nostartfiles -shared -x c /dev/null \
++    >                        -o $sysroot/usr/lib/libc.so
++
++
++The Second GCC
++
++With the GLIBC headers and selected object files installed, we can
++now build a GCC that is capable of compiling GLIBC.  We configure,
++build, and install the second GCC, again building only the C compiler,
++and avoiding libraries we won't use:
++
++    $ mkdir -p $obj/gcc2
++    $ cd $obj/gcc2
++    $ $src/$gccv/configure \
++    >     --target=$target \
++    >     --prefix=$tools \
++    >     --with-sysroot=$sysroot \
++    >     --disable-libssp --disable-libgomp --disable-libmudflap \
++    >     --disable-libffi --disable-libquadmath \
++    >     --enable-languages=c
++    $ PATH=$tools/bin:$PATH make
++    $ PATH=$tools/bin:$PATH make install
++
++
++GLIBC, Complete
++
++With the second compiler built and installed, we're now ready for the
++full GLIBC build:
++
++    $ mkdir -p $obj/glibc
++    $ cd $obj/glibc
++    $ BUILD_CC=gcc \
++    > CC=$tools/bin/$target-gcc \
++    > CXX=$tools/bin/$target-g++ \
++    > AR=$tools/bin/$target-ar \
++    > RANLIB=$tools/bin/$target-ranlib \
++    > $src/glibc/libc/configure \
++    >     --prefix=/usr \
++    >     --with-headers=$sysroot/usr/include \
++    >     --with-kconfig=$obj/linux/scripts/kconfig \
++    >     --build=$build \
++    >     --host=$target \
++    >     --disable-profile --without-gd --without-cvs \
++    >     --enable-add-ons=nptl,libidn,../ports
++
++Note the additional '--with-kconfig' option. This tells GLIBC where to
++find the host config tools used by the kernel 'make config' and 'make
++menuconfig'.  These tools can be re-used by GLIBC for its own 'make
++*config' support, which will create 'option-groups.config' for you.
++But first make sure those tools have been built by running some
++dummy 'make *config' calls in the kernel directory:
++
++    $ cd $obj/linux
++    $ PATH=$tools/bin:$PATH make config \
++    >      ARCH=$linux_arch CROSS_COMPILE=$target- \
++    $ PATH=$tools/bin:$PATH make menuconfig \
++    >      ARCH=$linux_arch CROSS_COMPILE=$target- \
++
++Now we can configure and build the full GLIBC:
++
++    $ cd $obj/glibc
++    $ PATH=$tools/bin:$PATH make defconfig
++    $ PATH=$tools/bin:$PATH make menuconfig
++    $ PATH=$tools/bin:$PATH make
++    $ PATH=$tools/bin:$PATH make install install_root=$sysroot
++
++At this point, we have a complete GLIBC installation in '$sysroot',
++with header files, library files, and most of the C runtime startup
++files in place.
++
++
++The Third GCC
++
++Finally, we recompile GCC against this full installation, enabling
++whatever languages and libraries we would like to use:
++
++    $ mkdir -p $obj/gcc3
++    $ cd $obj/gcc3
++    $ $src/$gccv/configure \
++    >     --target=$target \
++    >     --prefix=$tools \
++    >     --with-sysroot=$sysroot \
++    >     --enable-__cxa_atexit \
++    >     --disable-libssp --disable-libgomp --disable-libmudflap \
++    >     --enable-languages=c,c++
++    $ PATH=$tools/bin:$PATH make
++    $ PATH=$tools/bin:$PATH make install
++
++The '--enable-__cxa_atexit' option tells GCC what sort of C++
++destructor support to expect from the C library; it's required with
++GLIBC.
++
++And since GCC's installation process isn't designed to help construct
++sysroot trees, we must manually copy certain libraries into place in
++the sysroot.
++
++    $ cp -d $tools/$target/lib/libgcc_s.so* $sysroot/lib
++    $ cp -d $tools/$target/lib/libstdc++.so* $sysroot/usr/lib
++
++
++Trying Things Out
++
++At this point, '$tools' contains a cross toolchain ready to use
++the GLIBC installation in '$sysroot':
++
++    $ cat > hello.c <<EOF
++    > #include <stdio.h>
++    > int
++    > main (int argc, char **argv)
++    > {
++    >   puts ("Hello, world!");
++    >   return 0;
++    > }
++    > EOF
++    $ $tools/bin/$target-gcc -Wall hello.c -o hello
++    $ cat > c++-hello.cc <<EOF
++    > #include <iostream>
++    > int
++    > main (int argc, char **argv)
++    > {
++    >   std::cout << "Hello, C++ world!" << std::endl;
++    >   return 0;
++    > }
++    > EOF
++    $ $tools/bin/$target-g++ -Wall c++-hello.cc -o c++-hello
++
++
++We can use 'readelf' to verify that these are indeed executables for
++our target, using our dynamic linker:
++
++    $ $tools/bin/$target-readelf -hl hello
++    ELF Header:
++    ...
++      Type:                              EXEC (Executable file)
++      Machine:                           ARM
++
++    ...
++    Program Headers:
++      Type           Offset   VirtAddr   PhysAddr   FileSiz MemSiz  Flg Align
++      PHDR           0x000034 0x10000034 0x10000034 0x00100 0x00100 R E 0x4
++      INTERP         0x000134 0x00008134 0x00008134 0x00013 0x00013 R   0x1
++          [Requesting program interpreter: /lib/ld-linux.so.3]
++      LOAD           0x000000 0x00008000 0x00008000 0x0042c 0x0042c R E 0x8000
++    ...
++
++Looking at the dynamic section of the installed 'libgcc_s.so', we see
++that the 'NEEDED' entry for the C library does include the '.6'
++suffix, indicating that it was linked against our fully built GLIBC, and
++not our dummy 'libc.so':
++
++    $ $tools/bin/$target-readelf -d $sysroot/lib/libgcc_s.so.1
++    Dynamic section at offset 0x1083c contains 24 entries:
++      Tag        Type                         Name/Value
++     0x00000001 (NEEDED)                     Shared library: [libc.so.6]
++     0x0000000e (SONAME)                     Library soname: [libgcc_s.so.1]
++    ...
++
++
++And on the target machine, we can run our programs:
++
++    $ $sysroot/lib/ld.so.1 --library-path $sysroot/lib:$sysroot/usr/lib \
++    > ./hello
++    Hello, world!
++    $ $sysroot/lib/ld.so.1 --library-path $sysroot/lib:$sysroot/usr/lib \
++    > ./c++-hello
++    Hello, C++ world!
+diff --git a/GLIBC.cross-testing b/GLIBC.cross-testing
+new file mode 100644
+index 0000000000..b67b468466
+--- /dev/null
++++ b/GLIBC.cross-testing
+@@ -0,0 +1,205 @@
++                                                        -*- mode: text -*-
++
++                      Cross-Testing With GLIBC
++                  Jim Blandy <jimb@codesourcery.com>
++
++
++Introduction
++
++Developers writing software for embedded systems often use a desktop
++or other similarly capable computer for development, but need to run
++tests on the embedded system, or perhaps on a simulator.  When
++configured for cross-compilation, the stock GNU C library simply
++disables running tests altogether: the command 'make tests' builds
++test programs, but does not run them.  GLIBC, however, provides
++facilities for compiling tests and generating data files on the build
++system, but running the test programs themselves on a remote system or
++simulator.
++
++
++Test environment requirements
++
++The test environment must meet certain conditions for GLIBC's
++cross-testing facilities to work:
++
++- Shared filesystems.  The 'build' system, on which you configure and
++  compile GLIBC, and the 'host' system, on which you intend to run
++  GLIBC, must share a filesystem containing the GLIBC build and
++  source trees.  Files must appear at the same paths on both systems.
++
++- Remote-shell like invocation.  There must be a way to run a program
++  on the host system from the build system, passing it properly quoted
++  command-line arguments, setting environment variables, and
++  inheriting the caller's standard input and output.
++
++
++Usage
++
++To use GLIBC's cross-testing support, provide values for the
++following Make variables when you invoke 'make':
++
++- cross-test-wrapper
++
++  This should be the name of the cross-testing wrapper command, along
++  with any arguments.
++
++- cross-localedef
++
++  This should be the name of a cross-capable localedef program, like
++  that included in the GLIBC 'localedef' module, along with any
++  arguments needed.
++
++These are each explained in detail below.
++
++
++The Cross-Testing Wrapper
++
++To run test programs reliably, the stock GNU C library takes care to
++ensure that test programs use the newly compiled dynamic linker and
++shared libraries, and never the host system's installed libraries.  To
++accomplish this, it runs the tests by explicitly invoking the dynamic
++linker from the build tree, passing it a list of build tree
++directories to search for shared libraries, followed by the name of
++the executable to run and its arguments.
++
++For example, where one might normally run a test program like this:
++
++    $ ./tst-foo arg1 arg2
++
++the GNU C library might run that program like this:
++
++    $ $objdir/elf/ld-linux.so.3 --library-path $objdir \
++      ./tst-foo arg1 arg2
++
++(where $objdir is the path to the top of the build tree, and the
++trailing backslash indicates a continuation of the command).  In other
++words, each test program invocation is 'wrapped up' inside an explicit
++invocation of the dynamic linker, which must itself execute the test
++program, having loaded shared libraries from the appropriate
++directories.
++
++To support cross-testing, GLIBC allows the developer to optionally
++set the 'cross-test-wrapper' Make variable to another wrapper command,
++to which it passes the entire dynamic linker invocation shown above as
++arguments.  For example, if the developer supplies a wrapper of
++'my-wrapper hostname', then GLIBC would run the test above as
++follows:
++
++    $ my-wrapper hostname \
++      $objdir/elf/ld-linux.so.3 --library-path $objdir \
++      ./tst-foo arg1 arg2
++
++The 'my-wrapper' command is responsible for executing the command
++given on the host system.
++
++Since tests are run in varying directories, the wrapper should either
++be in your command search path, or 'cross-test-wrapper' should give an
++absolute path for the wrapper.
++
++The wrapper must meet several requirements:
++
++- It must preserve the current directory.  As explained above, the
++  build directory tree must be visible on both the build and host
++  systems, at the same path.  The test wrapper must ensure that the
++  current directory it inherits is also inherited by the dynamic
++  linker (and thus the test program itself).
++
++- It must preserve environment variables' values.  Many GLIBC tests
++  set environment variables for test runs; in native testing, it
++  invokes programs like this:
++
++    $ GCONV_PATH=$objdir/iconvdata \
++      $objdir/elf/ld-linux.so.3 --library-path $objdir \
++      ./tst-foo arg1 arg2
++
++  With the cross-testing wrapper, that invocation becomes:
++
++    $ GCONV_PATH=$objdir/iconvdata \
++      my-wrapper hostname \
++      $objdir/elf/ld-linux.so.3 --library-path $objdir \
++      ./tst-foo arg1 arg2
++
++  Here, 'my-wrapper' must ensure that the value it sees for
++  'GCONV_PATH' will be seen by the dynamic linker, and thus 'tst-foo'
++  itself.  (The wrapper supplied with GLIBC simply preserves the
++  values of *all* environment variables, with a fixed set of
++  exceptions.)
++
++  If your wrapper is a shell script, take care to correctly propagate
++  environment variables whose values contain spaces and shell
++  metacharacters.
++
++- It must pass the command's arguments, unmodified.  The arguments
++  seen by the test program should be exactly those seen by the wrapper
++  (after whatever arguments are given to the wrapper itself).  The
++  GLIBC test framework performs all needed shell word splitting and
++  expansion (wildcard expansion, parameter substitution, and so on)
++  before invoking the wrapper; further expansion may break the tests.
++
++
++The 'cross-test-ssh.sh' script
++
++If you want to use 'ssh' (or something sufficiently similar) to run
++test programs on your host system, GLIBC includes a shell script,
++'scripts/cross-test-ssh.sh', which you can use as your wrapper
++command.  This script takes care of setting the test command's current
++directory, propagating environment variable values, and carrying
++command-line arguments, all across an 'ssh' connection.  You may even
++supply an alternative to 'ssh' on the command line, if needed.
++
++For more details, pass 'cross-test-ssh.sh' the '--help' option.
++
++
++The Cross-Compiling Locale Definition Command
++
++Some GLIBC tests rely on locales generated especially for the test
++process.  In a native configuration, these tests simply run the
++'localedef' command built by the normal GLIBC build process,
++'locale/localedef', to process and install their locales.  However, in
++a cross-compiling configuration, this 'localedef' is built for the
++host system, not the build system, and since it requires quite a bit
++of memory to run (we have seen it fail on systems with 64MiB of
++memory), it may not be practical to run it on the host system.
++
++If set, GLIBC uses the 'cross-localedef' Make variable as the command
++to run on the build system to process and install locales.  The
++localedef program built from the GLIBC 'localedef' module is
++suitable.
++
++The value of 'cross-localedef' may also include command-line arguments
++to be passed to the program; if you are using GLIBC's 'localedef',
++you may include endianness and 'uint32_t' alignment arguments here.
++
++
++Example
++
++In developing GLIBC's cross-testing facility, we invoked 'make' with
++the following script:
++
++    #!/bin/sh
++
++    srcdir=...
++    test_hostname=...
++    localedefdir=...
++    cross_gxx=...-g++
++
++    wrapper="$srcdir/scripts/cross-test-ssh.sh $test_hostname"
++    localedef="$localedefdir/localedef --little-endian --uint32-align=4"
++
++    make cross-test-wrapper="$wrapper" \
++         cross-localedef="$localedef" \
++         CXX="$cross_gxx" \
++         "$@"
++
++
++Other Cross-Testing Concerns
++
++Here are notes on some other issues which you may encounter in running
++the GLIBC tests in a cross-compiling environment:
++
++- Some tests require a C++ cross-compiler; you should set the 'CXX'
++  Make variable to the name of an appropriate cross-compiler.
++
++- Some tests require access to libstdc++.so.6 and libgcc_s.so.1; we
++  simply place copies of these libraries in the top GLIBC build
++  directory.
+-- 
+2.13.2
+
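For comparison with the requirements listed in GLIBC.cross-testing above, a
deliberately minimal wrapper might look like the sketch below. This is
hypothetical and is not the bundled scripts/cross-test-ssh.sh: it preserves
the current directory and forwards the command over ssh, but it neither
quotes arguments containing spaces nor propagates environment variables, both
of which the bundled script handles.

    #!/bin/sh
    # usage: my-wrapper <hostname> <command> [args...]
    host=$1; shift
    exec ssh "$host" "cd '$(pwd)' && $*"

The test harness would then invoke it roughly as:

    my-wrapper target-board \
      $objdir/elf/ld-linux.so.3 --library-path $objdir ./tst-foo arg1 arg2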
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0018-eglibc-Cross-building-and-testing-instructions.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0018-eglibc-Cross-building-and-testing-instructions.patch
deleted file mode 100644
index 2ec01f0..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0018-eglibc-Cross-building-and-testing-instructions.patch
+++ /dev/null
@@ -1,619 +0,0 @@
-From df96d6b61bb60f13cd3d4989d1afc56d705f4a33 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 18 Mar 2015 00:42:58 +0000
-Subject: [PATCH 18/26] eglibc: Cross building and testing instructions
-
-Ported from eglibc
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- GLIBC.cross-building | 383 +++++++++++++++++++++++++++++++++++++++++++++++++++
- GLIBC.cross-testing  | 205 +++++++++++++++++++++++++++
- 2 files changed, 588 insertions(+)
- create mode 100644 GLIBC.cross-building
- create mode 100644 GLIBC.cross-testing
-
-diff --git a/GLIBC.cross-building b/GLIBC.cross-building
-new file mode 100644
-index 0000000..e6e0da1
---- /dev/null
-+++ b/GLIBC.cross-building
-@@ -0,0 +1,383 @@
-+                                                        -*- mode: text -*-
-+
-+                        Cross-Compiling GLIBC
-+                  Jim Blandy <jimb@codesourcery.com>
-+
-+
-+Introduction
-+
-+Most GNU tools have a simple build procedure: you run their
-+'configure' script, and then you run 'make'.  Unfortunately, the
-+process of cross-compiling the GNU C library is quite a bit more
-+involved:
-+
-+1) Build a cross-compiler, with certain facilities disabled.
-+
-+2) Configure the C library using the compiler you built in step 1).
-+   Build a few of the C run-time object files, but not the rest of the
-+   library.  Install the library's header files and the run-time
-+   object files, and create a dummy libc.so.
-+
-+3) Build a second cross-compiler, using the header files and object
-+   files you installed in step 2.
-+
-+4) Configure, build, and install a fresh C library, using the compiler
-+   built in step 3.
-+
-+5) Build a third cross-compiler, based on the C library built in step 4.
-+
-+The reason for this complexity is that, although GCC and the GNU C
-+library are distributed separately, they are not actually independent
-+of each other: GCC requires the C library's headers and some object
-+files to compile its own libraries, while the C library depends on
-+GCC's libraries.  GLIBC includes features and bug fixes to the stock
-+GNU C library that simplify this process, but the fundamental
-+interdependency stands.
-+
-+In this document, we explain how to cross-compile an GLIBC/GCC pair
-+from source.  Our intended audience is developers who are already
-+familiar with the GNU toolchain and comfortable working with
-+cross-development tools.  While we do present a worked example to
-+accompany the explanation, for clarity's sake we do not cover many of
-+the options available to cross-toolchain users.
-+
-+
-+Preparation
-+
-+GLIBC requires recent versions of the GNU binutils, GCC, and the
-+Linux kernel.  The web page <http://www.eglibc.org/prerequisites>
-+documents the current requirements, and lists patches needed for
-+certain target architectures.  As of this writing, these build
-+instructions have been tested with binutils 2.22.51, GCC 4.6.2,
-+and Linux 3.1.
-+
-+First, let's set some variables, to simplify later commands.  We'll
-+build GLIBC and GCC for an ARM target, known to the Linux kernel
-+as 'arm', and we'll do the build on an Intel x86_64 Linux box:
-+
-+    $ build=x86_64-pc-linux-gnu
-+    $ host=$build
-+    $ target=arm-none-linux-gnueabi
-+    $ linux_arch=arm
-+
-+We're using the aforementioned versions of Binutils, GCC, and Linux:
-+
-+    $ binutilsv=binutils-2.22.51
-+    $ gccv=gcc-4.6.2
-+    $ linuxv=linux-3.1
-+
-+We're carrying out the entire process under '~/cross-build', which
-+contains unpacked source trees for binutils, gcc, and linux kernel,
-+along with GLIBC svn trunk (which can be checked-out with
-+'svn co http://www.eglibc.org/svn/trunk eglibc'):
-+
-+    $ top=$HOME/cross-build/$target
-+    $ src=$HOME/cross-build/src
-+    $ ls $src
-+    binutils-2.22.51  glibc  gcc-4.6.2  linux-3.1
-+
-+We're going to place our build directories in a subdirectory 'obj',
-+we'll install the cross-development toolchain in 'tools', and we'll
-+place our sysroot (containing files to be installed on the target
-+system) in 'sysroot':
-+
-+    $ obj=$top/obj
-+    $ tools=$top/tools
-+    $ sysroot=$top/sysroot
-+
-+
-+Binutils
-+
-+Configuring and building binutils for the target is straightforward:
-+
-+    $ mkdir -p $obj/binutils
-+    $ cd $obj/binutils
-+    $ $src/$binutilsv/configure \
-+    >     --target=$target \
-+    >     --prefix=$tools \
-+    >     --with-sysroot=$sysroot
-+    $ make
-+    $ make install
-+
-+
-+The First GCC
-+
-+For our work, we need a cross-compiler targeting an ARM Linux
-+system.  However, that configuration includes the shared library
-+'libgcc_s.so', which is compiled against the GLIBC headers (which we
-+haven't installed yet) and linked against 'libc.so' (which we haven't
-+built yet).
-+
-+Fortunately, there are configuration options for GCC which tell it not
-+to build 'libgcc_s.so'.  The '--without-headers' option is supposed to
-+take care of this, but its implementation is incomplete, so you must
-+also configure with the '--with-newlib' option.  While '--with-newlib'
-+appears to mean "Use the Newlib C library", its effect is to tell the
-+GCC build machinery, "Don't assume there is a C library available."
-+
-+We also need to disable some of the libraries that would normally be
-+built along with GCC, and specify that only the compiler for the C
-+language is needed.
-+
-+So, we create a build directory, configure, make, and install.
-+
-+    $ mkdir -p $obj/gcc1
-+    $ cd $obj/gcc1
-+    $ $src/$gccv/configure \
-+    >     --target=$target \
-+    >     --prefix=$tools \
-+    >     --without-headers --with-newlib \
-+    >     --disable-shared --disable-threads --disable-libssp \
-+    >     --disable-libgomp --disable-libmudflap --disable-libquadmath \
-+    >     --disable-decimal-float --disable-libffi \
-+    >     --enable-languages=c
-+    $ PATH=$tools/bin:$PATH make
-+    $ PATH=$tools/bin:$PATH make install
-+
-+
-+Linux Kernel Headers
-+
-+To configure GLIBC, we also need Linux kernel headers in place.
-+Fortunately, the Linux makefiles have a target that installs them for
-+us.  Since the process does modify the source tree a bit, we make a
-+copy first:
-+
-+    $ cp -r $src/$linuxv $obj/linux
-+    $ cd $obj/linux
-+
-+Now we're ready to install the headers into the sysroot:
-+
-+    $ PATH=$tools/bin:$PATH \
-+    > make headers_install \
-+    >      ARCH=$linux_arch CROSS_COMPILE=$target- \
-+    >      INSTALL_HDR_PATH=$sysroot/usr
-+
-+
-+GLIBC Headers and Preliminary Objects
-+
-+Using the cross-compiler we've just built, we can now configure GLIBC
-+well enough to install the headers and build the object files that the
-+full cross-compiler will need:
-+
-+    $ mkdir -p $obj/glibc-headers
-+    $ cd $obj/glibc-headers
-+    $ BUILD_CC=gcc \
-+    > CC=$tools/bin/$target-gcc \
-+    > CXX=$tools/bin/$target-g++ \
-+    > AR=$tools/bin/$target-ar \
-+    > RANLIB=$tools/bin/$target-ranlib \
-+    > $src/glibc/libc/configure \
-+    >     --prefix=/usr \
-+    >     --with-headers=$sysroot/usr/include \
-+    >     --build=$build \
-+    >     --host=$target \
-+    >     --disable-profile --without-gd --without-cvs \
-+    >     --enable-add-ons=nptl,libidn,../ports
-+
-+The option '--prefix=/usr' may look strange, but you should never
-+configure GLIBC with a prefix other than '/usr': in various places,
-+GLIBC's build system checks whether the prefix is '/usr', and does
-+special handling only if that is the case.  Unless you use this
-+prefix, you will get a sysroot that does not use the standard Linux
-+directory layouts and cannot be used as a basis for the root
-+filesystem on your target system compatibly with normal GLIBC
-+installations.
-+
-+The '--with-headers' option tells GLIBC where the Linux headers have
-+been installed.
-+
-+The '--enable-add-ons=nptl,libidn,../ports' option tells GLIBC to look
-+for the listed glibc add-ons. Most notably the ports add-on (located
-+just above the libc sources in the GLIBC svn tree) is required to
-+support ARM targets.
-+
-+We can now use the 'install-headers' makefile target to install the
-+headers:
-+
-+    $ make install-headers install_root=$sysroot \
-+    >                      install-bootstrap-headers=yes
-+
-+The 'install_root' variable indicates where the files should actually
-+be installed; its value is treated as the parent of the '--prefix'
-+directory we passed to the configure script, so the headers will go in
-+'$sysroot/usr/include'.  The 'install-bootstrap-headers' variable
-+requests special handling for certain tricky header files.
-+
-+Next, there are a few object files needed to link shared libraries,
-+which we build and install by hand:
-+
-+    $ mkdir -p $sysroot/usr/lib
-+    $ make csu/subdir_lib
-+    $ cp csu/crt1.o csu/crti.o csu/crtn.o $sysroot/usr/lib
-+
-+Finally, 'libgcc_s.so' requires a 'libc.so' to link against.  However,
-+since we will never actually execute its code, it doesn't matter what
-+it contains.  So, treating '/dev/null' as a C source file, we produce
-+a dummy 'libc.so' in one step:
-+
-+    $ $tools/bin/$target-gcc -nostdlib -nostartfiles -shared -x c /dev/null \
-+    >                        -o $sysroot/usr/lib/libc.so
-+
-+
-+The Second GCC
-+
-+With the GLIBC headers and selected object files installed, we can
-+now build a GCC that is capable of compiling GLIBC.  We configure,
-+build, and install the second GCC, again building only the C compiler,
-+and avoiding libraries we won't use:
-+
-+    $ mkdir -p $obj/gcc2
-+    $ cd $obj/gcc2
-+    $ $src/$gccv/configure \
-+    >     --target=$target \
-+    >     --prefix=$tools \
-+    >     --with-sysroot=$sysroot \
-+    >     --disable-libssp --disable-libgomp --disable-libmudflap \
-+    >     --disable-libffi --disable-libquadmath \
-+    >     --enable-languages=c
-+    $ PATH=$tools/bin:$PATH make
-+    $ PATH=$tools/bin:$PATH make install
-+
-+
-+GLIBC, Complete
-+
-+With the second compiler built and installed, we're now ready for the
-+full GLIBC build:
-+
-+    $ mkdir -p $obj/glibc
-+    $ cd $obj/glibc
-+    $ BUILD_CC=gcc \
-+    > CC=$tools/bin/$target-gcc \
-+    > CXX=$tools/bin/$target-g++ \
-+    > AR=$tools/bin/$target-ar \
-+    > RANLIB=$tools/bin/$target-ranlib \
-+    > $src/glibc/libc/configure \
-+    >     --prefix=/usr \
-+    >     --with-headers=$sysroot/usr/include \
-+    >     --with-kconfig=$obj/linux/scripts/kconfig \
-+    >     --build=$build \
-+    >     --host=$target \
-+    >     --disable-profile --without-gd --without-cvs \
-+    >     --enable-add-ons=nptl,libidn,../ports
-+
-+Note the additional '--with-kconfig' option. This tells GLIBC where to
-+find the host config tools used by the kernel 'make config' and 'make
-+menuconfig'.  These tools can be re-used by GLIBC for its own 'make
-+*config' support, which will create 'option-groups.config' for you.
-+But first make sure those tools have been built by running some
-+dummy 'make *config' calls in the kernel directory:
-+
-+    $ cd $obj/linux
-+    $ PATH=$tools/bin:$PATH make config \
-+    >      ARCH=$linux_arch CROSS_COMPILE=$target-
-+    $ PATH=$tools/bin:$PATH make menuconfig \
-+    >      ARCH=$linux_arch CROSS_COMPILE=$target-
-+
-+Now we can configure and build the full GLIBC:
-+
-+    $ cd $obj/glibc
-+    $ PATH=$tools/bin:$PATH make defconfig
-+    $ PATH=$tools/bin:$PATH make menuconfig
-+    $ PATH=$tools/bin:$PATH make
-+    $ PATH=$tools/bin:$PATH make install install_root=$sysroot
-+
-+At this point, we have a complete GLIBC installation in '$sysroot',
-+with header files, library files, and most of the C runtime startup
-+files in place.
-+
-+
-+The Third GCC
-+
-+Finally, we recompile GCC against this full installation, enabling
-+whatever languages and libraries we would like to use:
-+
-+    $ mkdir -p $obj/gcc3
-+    $ cd $obj/gcc3
-+    $ $src/$gccv/configure \
-+    >     --target=$target \
-+    >     --prefix=$tools \
-+    >     --with-sysroot=$sysroot \
-+    >     --enable-__cxa_atexit \
-+    >     --disable-libssp --disable-libgomp --disable-libmudflap \
-+    >     --enable-languages=c,c++
-+    $ PATH=$tools/bin:$PATH make
-+    $ PATH=$tools/bin:$PATH make install
-+
-+The '--enable-__cxa_atexit' option tells GCC what sort of C++
-+destructor support to expect from the C library; it's required with
-+GLIBC.
-+
-+And since GCC's installation process isn't designed to help construct
-+sysroot trees, we must manually copy certain libraries into place in
-+the sysroot.
-+
-+    $ cp -d $tools/$target/lib/libgcc_s.so* $sysroot/lib
-+    $ cp -d $tools/$target/lib/libstdc++.so* $sysroot/usr/lib
-+
-+
-+Trying Things Out
-+
-+At this point, '$tools' contains a cross toolchain ready to use
-+the GLIBC installation in '$sysroot':
-+
-+    $ cat > hello.c <<EOF
-+    > #include <stdio.h>
-+    > int
-+    > main (int argc, char **argv)
-+    > {
-+    >   puts ("Hello, world!");
-+    >   return 0;
-+    > }
-+    > EOF
-+    $ $tools/bin/$target-gcc -Wall hello.c -o hello
-+    $ cat > c++-hello.cc <<EOF
-+    > #include <iostream>
-+    > int
-+    > main (int argc, char **argv)
-+    > {
-+    >   std::cout << "Hello, C++ world!" << std::endl;
-+    >   return 0;
-+    > }
-+    > EOF
-+    $ $tools/bin/$target-g++ -Wall c++-hello.cc -o c++-hello
-+
-+
-+We can use 'readelf' to verify that these are indeed executables for
-+our target, using our dynamic linker:
-+
-+    $ $tools/bin/$target-readelf -hl hello
-+    ELF Header:
-+    ...
-+      Type:                              EXEC (Executable file)
-+      Machine:                           ARM
-+
-+    ...
-+    Program Headers:
-+      Type           Offset   VirtAddr   PhysAddr   FileSiz MemSiz  Flg Align
-+      PHDR           0x000034 0x10000034 0x10000034 0x00100 0x00100 R E 0x4
-+      INTERP         0x000134 0x00008134 0x00008134 0x00013 0x00013 R   0x1
-+          [Requesting program interpreter: /lib/ld-linux.so.3]
-+      LOAD           0x000000 0x00008000 0x00008000 0x0042c 0x0042c R E 0x8000
-+    ...
-+
-+Looking at the dynamic section of the installed 'libgcc_s.so', we see
-+that the 'NEEDED' entry for the C library does include the '.6'
-+suffix, indicating that it was linked against our fully built GLIBC, and
-+not our dummy 'libc.so':
-+
-+    $ $tools/bin/$target-readelf -d $sysroot/lib/libgcc_s.so.1
-+    Dynamic section at offset 0x1083c contains 24 entries:
-+      Tag        Type                         Name/Value
-+     0x00000001 (NEEDED)                     Shared library: [libc.so.6]
-+     0x0000000e (SONAME)                     Library soname: [libgcc_s.so.1]
-+    ...
-+
-+
-+And on the target machine, we can run our programs:
-+
-+    $ $sysroot/lib/ld.so.1 --library-path $sysroot/lib:$sysroot/usr/lib \
-+    > ./hello
-+    Hello, world!
-+    $ $sysroot/lib/ld.so.1 --library-path $sysroot/lib:$sysroot/usr/lib \
-+    > ./c++-hello
-+    Hello, C++ world!
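-+
-+If the target hardware isn't at hand, QEMU's user-mode emulator can
-+often run these binaries directly on the build machine; its '-L'
-+option points path lookups (including the dynamic linker) at our
-+sysroot.  This is only an illustrative aside, assuming qemu-arm is
-+installed and supports the target; it is not part of the toolchain
-+build itself:
-+
-+    $ qemu-arm -L $sysroot ./hello
-+    $ qemu-arm -L $sysroot ./c++-hello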
-diff --git a/GLIBC.cross-testing b/GLIBC.cross-testing
-new file mode 100644
-index 0000000..b67b468
---- /dev/null
-+++ b/GLIBC.cross-testing
-@@ -0,0 +1,205 @@
-+                                                        -*- mode: text -*-
-+
-+                      Cross-Testing With GLIBC
-+                  Jim Blandy <jimb@codesourcery.com>
-+
-+
-+Introduction
-+
-+Developers writing software for embedded systems often use a desktop
-+or other similarly capable computer for development, but need to run
-+tests on the embedded system, or perhaps on a simulator.  When
-+configured for cross-compilation, the stock GNU C library simply
-+disables running tests altogether: the command 'make tests' builds
-+test programs, but does not run them.  GLIBC, however, provides
-+facilities for compiling tests and generating data files on the build
-+system, while running the test programs themselves on a remote system or
-+simulator.
-+
-+
-+Test environment requirements
-+
-+The test environment must meet certain conditions for GLIBC's
-+cross-testing facilities to work:
-+
-+- Shared filesystems.  The 'build' system, on which you configure and
-+  compile GLIBC, and the 'host' system, on which you intend to run
-+  GLIBC, must share a filesystem containing the GLIBC build and
-+  source trees.  Files must appear at the same paths on both systems.
-+
-+- Remote-shell-like invocation.  There must be a way to run a program
-+  on the host system from the build system, passing it properly quoted
-+  command-line arguments, setting environment variables, and
-+  inheriting the caller's standard input and output.
-+
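-+A quick sanity check for both requirements (the 'test-host' name and
-+the use of ssh and NFS below are only illustrative assumptions) is to
-+confirm, from the top of the build tree, that the same path exists on
-+the host system and that a remote command lands in it:
-+
-+    $ # e.g. export the build system's disk over NFS and mount it on
-+    $ # the host system at the identical path, then:
-+    $ ssh test-host "cd $PWD && pwd"
-+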
-+
-+Usage
-+
-+To use GLIBC's cross-testing support, provide values for the
-+following Make variables when you invoke 'make':
-+
-+- cross-test-wrapper
-+
-+  This should be the name of the cross-testing wrapper command, along
-+  with any arguments.
-+
-+- cross-localedef
-+
-+  This should be the name of a cross-capable localedef program, like
-+  that included in the GLIBC 'localedef' module, along with any
-+  arguments needed.
-+
-+These are each explained in detail below.
-+
-+
-+The Cross-Testing Wrapper
-+
-+To run test programs reliably, the stock GNU C library takes care to
-+ensure that test programs use the newly compiled dynamic linker and
-+shared libraries, and never the host system's installed libraries.  To
-+accomplish this, it runs the tests by explicitly invoking the dynamic
-+linker from the build tree, passing it a list of build tree
-+directories to search for shared libraries, followed by the name of
-+the executable to run and its arguments.
-+
-+For example, where one might normally run a test program like this:
-+
-+    $ ./tst-foo arg1 arg2
-+
-+the GNU C library might run that program like this:
-+
-+    $ $objdir/elf/ld-linux.so.3 --library-path $objdir \
-+      ./tst-foo arg1 arg2
-+
-+(where $objdir is the path to the top of the build tree, and the
-+trailing backslash indicates a continuation of the command).  In other
-+words, each test program invocation is 'wrapped up' inside an explicit
-+invocation of the dynamic linker, which must itself execute the test
-+program, having loaded shared libraries from the appropriate
-+directories.
-+
-+To support cross-testing, GLIBC allows the developer to optionally
-+set the 'cross-test-wrapper' Make variable to another wrapper command,
-+to which it passes the entire dynamic linker invocation shown above as
-+arguments.  For example, if the developer supplies a wrapper of
-+'my-wrapper hostname', then GLIBC would run the test above as
-+follows:
-+
-+    $ my-wrapper hostname \
-+      $objdir/elf/ld-linux.so.3 --library-path $objdir \
-+      ./tst-foo arg1 arg2
-+
-+The 'my-wrapper' command is responsible for executing the command
-+given on the host system.
-+
-+Since tests are run in varying directories, the wrapper should either
-+be in your command search path, or 'cross-test-wrapper' should give an
-+absolute path for the wrapper.
-+
-+The wrapper must meet several requirements:
-+
-+- It must preserve the current directory.  As explained above, the
-+  build directory tree must be visible on both the build and host
-+  systems, at the same path.  The test wrapper must ensure that the
-+  current directory it inherits is also inherited by the dynamic
-+  linker (and thus the test program itself).
-+
-+- It must preserve environment variables' values.  Many GLIBC tests
-+  set environment variables for test runs; in native testing, the
-+  test framework invokes programs like this:
-+
-+    $ GCONV_PATH=$objdir/iconvdata \
-+      $objdir/elf/ld-linux.so.3 --library-path $objdir \
-+      ./tst-foo arg1 arg2
-+
-+  With the cross-testing wrapper, that invocation becomes:
-+
-+    $ GCONV_PATH=$objdir/iconvdata \
-+      my-wrapper hostname \
-+      $objdir/elf/ld-linux.so.3 --library-path $objdir \
-+      ./tst-foo arg1 arg2
-+
-+  Here, 'my-wrapper' must ensure that the value it sees for
-+  'GCONV_PATH' will be seen by the dynamic linker, and thus 'tst-foo'
-+  itself.  (The wrapper supplied with GLIBC simply preserves the
-+  values of *all* environment variables, with a fixed set of
-+  exceptions.)
-+
-+  If your wrapper is a shell script, take care to correctly propagate
-+  environment variables whose values contain spaces and shell
-+  metacharacters (see the sketch after this list).
-+
-+- It must pass the command's arguments, unmodified.  The arguments
-+  seen by the test program should be exactly those seen by the wrapper
-+  (after whatever arguments are given to the wrapper itself).  The
-+  GLIBC test framework performs all needed shell word splitting and
-+  expansion (wildcard expansion, parameter substitution, and so on)
-+  before invoking the wrapper; further expansion may break the tests.
-+
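-+As an illustration only, a bare-bones wrapper along these lines would
-+satisfy the requirements for simple cases.  It is bash-specific, the
-+host name is passed as the first argument, and the list of forwarded
-+variables is just an example; the 'cross-test-ssh.sh' script described
-+below is the real, much more careful implementation:
-+
-+    #!/bin/bash
-+    # my-wrapper: minimal illustrative cross-test wrapper.
-+    # Usage: my-wrapper HOST COMMAND [ARG...]
-+    host="$1"; shift
-+
-+    # Recreate the current directory and a few environment variables
-+    # on the remote side; printf %q re-quotes each word so spaces and
-+    # shell metacharacters survive the trip through ssh.
-+    remote="cd $(printf '%q' "$PWD")"
-+    for var in GCONV_PATH LOCPATH LC_ALL; do
-+        if [ -n "${!var-}" ]; then
-+            remote+=" && export $var=$(printf '%q' "${!var}")"
-+        fi
-+    done
-+    remote+=" &&"
-+    for word in "$@"; do
-+        remote+=" $(printf '%q' "$word")"
-+    done
-+
-+    exec ssh "$host" "$remote"
-+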
-+
-+The 'cross-test-ssh.sh' script
-+
-+If you want to use 'ssh' (or something sufficiently similar) to run
-+test programs on your host system, GLIBC includes a shell script,
-+'scripts/cross-test-ssh.sh', which you can use as your wrapper
-+command.  This script takes care of setting the test command's current
-+directory, propagating environment variable values, and carrying
-+command-line arguments, all across an 'ssh' connection.  You may even
-+supply an alternative to 'ssh' on the command line, if needed.
-+
-+For more details, pass 'cross-test-ssh.sh' the '--help' option.
-+
-+
-+The Cross-Compiling Locale Definition Command
-+
-+Some GLIBC tests rely on locales generated especially for the test
-+process.  In a native configuration, these tests simply run the
-+'localedef' command built by the normal GLIBC build process,
-+'locale/localedef', to process and install their locales.  However, in
-+a cross-compiling configuration, this 'localedef' is built for the
-+host system, not the build system, and since it requires quite a bit
-+of memory to run (we have seen it fail on systems with 64MiB of
-+memory), it may not be practical to run it on the host system.
-+
-+If set, GLIBC uses the 'cross-localedef' Make variable as the command
-+to run on the build system to process and install locales.  The
-+localedef program built from the GLIBC 'localedef' module is
-+suitable.
-+
-+The value of 'cross-localedef' may also include command-line arguments
-+to be passed to the program; if you are using GLIBC's 'localedef',
-+you may include endianness and 'uint32_t' alignment arguments here.
-+
-+
-+Example
-+
-+In developing GLIBC's cross-testing facility, we invoked 'make' with
-+the following script:
-+
-+    #!/bin/sh
-+
-+    srcdir=...
-+    test_hostname=...
-+    localedefdir=...
-+    cross_gxx=...-g++
-+
-+    wrapper="$srcdir/scripts/cross-test-ssh.sh $test_hostname"
-+    localedef="$localedefdir/localedef --little-endian --uint32-align=4"
-+
-+    make cross-test-wrapper="$wrapper" \
-+         cross-localedef="$localedef" \
-+         CXX="$cross_gxx" \
-+         "$@"
-+
-+
-+Other Cross-Testing Concerns
-+
-+Here are notes on some other issues which you may encounter in running
-+the GLIBC tests in a cross-compiling environment:
-+
-+- Some tests require a C++ cross-compiler; you should set the 'CXX'
-+  Make variable to the name of an appropriate cross-compiler.
-+
-+- Some tests require access to libstdc++.so.6 and libgcc_s.so.1; we
-+  simply place copies of these libraries in the top GLIBC build
-+  directory, as sketched below.
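-+
-+As a purely illustrative example of that last step ('$tools' and
-+'$target' are placeholders in the spirit of the cross-building notes,
-+and '$objdir' is the top of the GLIBC build tree; adjust the paths for
-+your setup):
-+
-+    $ cp $tools/$target/lib/libstdc++.so.6 $objdir/
-+    $ cp $tools/$target/lib/libgcc_s.so.1 $objdir/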
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0018-eglibc-Help-bootstrap-cross-toolchain.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0018-eglibc-Help-bootstrap-cross-toolchain.patch
new file mode 100644
index 0000000..02f35f4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0018-eglibc-Help-bootstrap-cross-toolchain.patch
@@ -0,0 +1,100 @@
+From 1161cd1c683547d29a03626d9d7de7f9cc03b74a Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 18 Mar 2015 00:49:28 +0000
+Subject: [PATCH 18/25] eglibc: Help bootstrap cross toolchain
+
+Taken from EGLIBC, r1484 + r1525
+
+        2007-02-20  Jim Blandy  <jimb@codesourcery.com>
+
+                * Makefile (install-headers): Preserve old behavior: depend on
+                $(inst_includedir)/gnu/stubs.h only if install-bootstrap-headers
+                is set; otherwise, place gnu/stubs.h on the 'install-others' list.
+
+        2007-02-16  Jim Blandy  <jimb@codesourcery.com>
+
+                * Makefile: Amend make install-headers to install everything
+                necessary for building a cross-compiler.  Install gnu/stubs.h as
+                part of 'install-headers', not 'install-others'.
+                If install-bootstrap-headers is 'yes', install a dummy copy of
+                gnu/stubs.h, instead of computing the real thing.
+                * include/stubs-bootstrap.h: New file.
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ Makefile                  | 22 +++++++++++++++++++++-
+ include/stubs-bootstrap.h | 12 ++++++++++++
+ 2 files changed, 33 insertions(+), 1 deletion(-)
+ create mode 100644 include/stubs-bootstrap.h
+
+diff --git a/Makefile b/Makefile
+index 3e0ae6f43b..24dc66d17c 100644
+--- a/Makefile
++++ b/Makefile
+@@ -70,9 +70,18 @@ subdir-dirs = include
+ vpath %.h $(subdir-dirs)
+ 
+ # What to install.
+-install-others = $(inst_includedir)/gnu/stubs.h
+ install-bin-script =
+ 
++# If we're bootstrapping, install a dummy gnu/stubs.h along with the
++# other headers, so 'make install-headers' produces a useable include
++# tree.  Otherwise, install gnu/stubs.h later, after the rest of the
++# build is done.
++ifeq ($(install-bootstrap-headers),yes)
++install-headers: $(inst_includedir)/gnu/stubs.h
++else
++install-others = $(inst_includedir)/gnu/stubs.h
++endif
++
+ ifeq (yes,$(build-shared))
+ headers += gnu/lib-names.h
+ endif
+@@ -152,6 +161,16 @@ others: $(common-objpfx)testrun.sh
+ 
+ subdir-stubs := $(foreach dir,$(subdirs),$(common-objpfx)$(dir)/stubs)
+ 
++# gnu/stubs.h depends (via the subdir 'stubs' targets) on all the .o
++# files in EGLIBC.  For bootstrapping a GCC/EGLIBC pair, an empty
++# gnu/stubs.h is good enough.
++ifeq ($(install-bootstrap-headers),yes)
++$(inst_includedir)/gnu/stubs.h: include/stubs-bootstrap.h $(+force)
++	$(make-target-directory)
++	$(INSTALL_DATA) $< $@
++
++installed-stubs =
++else
+ ifndef abi-variants
+ installed-stubs = $(inst_includedir)/gnu/stubs.h
+ else
+@@ -178,6 +197,7 @@ $(inst_includedir)/gnu/stubs.h: $(+force)
+ 
+ install-others-nosubdir: $(installed-stubs)
+ endif
++endif
+ 
+ 
+ # Since stubs.h is never needed when building the library, we simplify the
+diff --git a/include/stubs-bootstrap.h b/include/stubs-bootstrap.h
+new file mode 100644
+index 0000000000..1d2b669aff
+--- /dev/null
++++ b/include/stubs-bootstrap.h
+@@ -0,0 +1,12 @@
++/* Placeholder stubs.h file for bootstrapping.
++
++   When bootstrapping a GCC/EGLIBC pair, GCC requires that the EGLIBC
++   headers be installed, but we can't fully build EGLIBC without that
++   GCC.  So we run the command:
++
++      make install-headers install-bootstrap-headers=yes
++
++   to install the headers GCC needs, but avoid building certain
++   difficult headers.  The <gnu/stubs.h> header depends, via the
++   EGLIBC subdir 'stubs' make targets, on every .o file in EGLIBC, but
++   an empty stubs.h like this will do fine for GCC.  */
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0019-eglibc-Clear-cache-lines-on-ppc8xx.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0019-eglibc-Clear-cache-lines-on-ppc8xx.patch
new file mode 100644
index 0000000..adb28cf
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0019-eglibc-Clear-cache-lines-on-ppc8xx.patch
@@ -0,0 +1,83 @@
+From 1732c7f25453c879c17701839ef34876a7357008 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Thu, 31 Dec 2015 15:15:09 -0800
+Subject: [PATCH 19/25] eglibc: Clear cache lines on ppc8xx
+
+2007-06-13  Nathan Sidwell  <nathan@codesourcery.com>
+            Mark Shinwell  <shinwell@codesourcery.com>
+
+        * sysdeps/unix/sysv/linux/powerpc/libc-start.c
+        (__libc_start_main): Detect 8xx parts and clear
+        __cache_line_size if detected.
+        * sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c
+        (DL_PLATFORM_AUXV): Likewise.
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c  | 14 +++++++++++++-
+ sysdeps/unix/sysv/linux/powerpc/libc-start.c | 16 +++++++++++++++-
+ 2 files changed, 28 insertions(+), 2 deletions(-)
+
+diff --git a/sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c b/sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c
+index 23f5d5d388..7e45288db7 100644
+--- a/sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c
++++ b/sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c
+@@ -24,9 +24,21 @@ int __cache_line_size attribute_hidden;
+ /* Scan the Aux Vector for the "Data Cache Block Size" entry.  If found
+    verify that the static extern __cache_line_size is defined by checking
+    for not NULL.  If it is defined then assign the cache block size
+-   value to __cache_line_size.  */
++   value to __cache_line_size.  This is used by memset to
++   optimize setting to zero.  We have to detect 8xx processors, which
++   have buggy dcbz implementations that cannot report page faults
++   correctly.  That requires reading SPR, which is a privileged
++   operation.  Fortunately 2.2.18 and later emulates PowerPC mfspr
++   reads from the PVR register.   */
+ #define DL_PLATFORM_AUXV						      \
+       case AT_DCACHEBSIZE:						      \
++	if (__LINUX_KERNEL_VERSION >= 0x020218)				      \
++	  {								      \
++	    unsigned pvr = 0;						      \
++	    asm ("mfspr %0, 287" : "=r" (pvr));				      \
++	    if ((pvr & 0xffff0000) == 0x00500000)			      \
++	      break;							      \
++	  }								      \
+ 	__cache_line_size = av->a_un.a_val;				      \
+ 	break;
+ 
+diff --git a/sysdeps/unix/sysv/linux/powerpc/libc-start.c b/sysdeps/unix/sysv/linux/powerpc/libc-start.c
+index ad036c1e4b..afee56a3da 100644
+--- a/sysdeps/unix/sysv/linux/powerpc/libc-start.c
++++ b/sysdeps/unix/sysv/linux/powerpc/libc-start.c
+@@ -73,11 +73,25 @@ __libc_start_main (int argc, char **argv,
+ 
+   /* Initialize the __cache_line_size variable from the aux vector.  For the
+      static case, we also need _dl_hwcap, _dl_hwcap2 and _dl_platform, so we
+-     can call __tcb_parse_hwcap_and_convert_at_platform ().  */
++     can call __tcb_parse_hwcap_and_convert_at_platform ().
++
++     This is used by memset to optimize setting to zero.  We have to
++     detect 8xx processors, which have buggy dcbz implementations that
++     cannot report page faults correctly.  That requires reading SPR,
++     which is a privileged operation.  Fortunately 2.2.18 and later
++     emulates PowerPC mfspr reads from the PVR register.  */
+   for (ElfW (auxv_t) * av = auxvec; av->a_type != AT_NULL; ++av)
+     switch (av->a_type)
+       {
+       case AT_DCACHEBSIZE:
++	if (__LINUX_KERNEL_VERSION >= 0x020218)
++	  {
++	    unsigned pvr = 0;
++
++	    asm ("mfspr %0, 287" : "=r" (pvr) :);
++	    if ((pvr & 0xffff0000) == 0x00500000)
++	      break;
++	  }
+ 	__cache_line_size = av->a_un.a_val;
+ 	break;
+ #ifndef SHARED
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0019-eglibc-Help-bootstrap-cross-toolchain.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0019-eglibc-Help-bootstrap-cross-toolchain.patch
deleted file mode 100644
index f5921bb..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0019-eglibc-Help-bootstrap-cross-toolchain.patch
+++ /dev/null
@@ -1,100 +0,0 @@
-From 2cb7e3cae4020f431d426ad1740bb25506cde899 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 18 Mar 2015 00:49:28 +0000
-Subject: [PATCH 19/26] eglibc: Help bootstrap cross toolchain
-
-Taken from EGLIBC, r1484 + r1525
-
-        2007-02-20  Jim Blandy  <jimb@codesourcery.com>
-
-                * Makefile (install-headers): Preserve old behavior: depend on
-                $(inst_includedir)/gnu/stubs.h only if install-bootstrap-headers
-                is set; otherwise, place gnu/stubs.h on the 'install-others' list.
-
-        2007-02-16  Jim Blandy  <jimb@codesourcery.com>
-
-                * Makefile: Amend make install-headers to install everything
-                necessary for building a cross-compiler.  Install gnu/stubs.h as
-                part of 'install-headers', not 'install-others'.
-                If install-bootstrap-headers is 'yes', install a dummy copy of
-                gnu/stubs.h, instead of computing the real thing.
-                * include/stubs-bootstrap.h: New file.
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- Makefile                  | 22 +++++++++++++++++++++-
- include/stubs-bootstrap.h | 12 ++++++++++++
- 2 files changed, 33 insertions(+), 1 deletion(-)
- create mode 100644 include/stubs-bootstrap.h
-
-diff --git a/Makefile b/Makefile
-index 1ae3281..26ab7bf 100644
---- a/Makefile
-+++ b/Makefile
-@@ -70,9 +70,18 @@ subdir-dirs = include
- vpath %.h $(subdir-dirs)
- 
- # What to install.
--install-others = $(inst_includedir)/gnu/stubs.h
- install-bin-script =
- 
-+# If we're bootstrapping, install a dummy gnu/stubs.h along with the
-+# other headers, so 'make install-headers' produces a useable include
-+# tree.  Otherwise, install gnu/stubs.h later, after the rest of the
-+# build is done.
-+ifeq ($(install-bootstrap-headers),yes)
-+install-headers: $(inst_includedir)/gnu/stubs.h
-+else
-+install-others = $(inst_includedir)/gnu/stubs.h
-+endif
-+
- ifeq (yes,$(build-shared))
- headers += gnu/lib-names.h
- endif
-@@ -152,6 +161,16 @@ others: $(common-objpfx)testrun.sh
- 
- subdir-stubs := $(foreach dir,$(subdirs),$(common-objpfx)$(dir)/stubs)
- 
-+# gnu/stubs.h depends (via the subdir 'stubs' targets) on all the .o
-+# files in EGLIBC.  For bootstrapping a GCC/EGLIBC pair, an empty
-+# gnu/stubs.h is good enough.
-+ifeq ($(install-bootstrap-headers),yes)
-+$(inst_includedir)/gnu/stubs.h: include/stubs-bootstrap.h $(+force)
-+	$(make-target-directory)
-+	$(INSTALL_DATA) $< $@
-+
-+installed-stubs =
-+else
- ifndef abi-variants
- installed-stubs = $(inst_includedir)/gnu/stubs.h
- else
-@@ -178,6 +197,7 @@ $(inst_includedir)/gnu/stubs.h: $(+force)
- 
- install-others-nosubdir: $(installed-stubs)
- endif
-+endif
- 
- 
- # Since stubs.h is never needed when building the library, we simplify the
-diff --git a/include/stubs-bootstrap.h b/include/stubs-bootstrap.h
-new file mode 100644
-index 0000000..1d2b669
---- /dev/null
-+++ b/include/stubs-bootstrap.h
-@@ -0,0 +1,12 @@
-+/* Placeholder stubs.h file for bootstrapping.
-+
-+   When bootstrapping a GCC/EGLIBC pair, GCC requires that the EGLIBC
-+   headers be installed, but we can't fully build EGLIBC without that
-+   GCC.  So we run the command:
-+
-+      make install-headers install-bootstrap-headers=yes
-+
-+   to install the headers GCC needs, but avoid building certain
-+   difficult headers.  The <gnu/stubs.h> header depends, via the
-+   EGLIBC subdir 'stubs' make targets, on every .o file in EGLIBC, but
-+   an empty stubs.h like this will do fine for GCC.  */
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0020-eglibc-Resolve-__fpscr_values-on-SH4.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0020-eglibc-Resolve-__fpscr_values-on-SH4.patch
new file mode 100644
index 0000000..f835d87
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0020-eglibc-Resolve-__fpscr_values-on-SH4.patch
@@ -0,0 +1,56 @@
+From 108b3a1df96a85522c52a0dec032fc2c106f5f2d Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 18 Mar 2015 00:55:53 +0000
+Subject: [PATCH 20/25] eglibc: Resolve __fpscr_values on SH4
+
+2010-09-29  Nobuhiro Iwamatsu  <iwamatsu@nigauri.org>
+            Andrew Stubbs  <ams@codesourcery.com>
+
+        Resolve SH's __fpscr_values to symbol in libc.so.
+
+        * sysdeps/sh/sh4/fpu/fpu_control.h: Add C++ __set_fpscr prototype.
+        * sysdeps/unix/sysv/linux/sh/Versions (GLIBC_2.2): Add __fpscr_values.
+        * sysdeps/unix/sysv/linux/sh/sysdep.S (___fpscr_values): New constant.
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ sysdeps/unix/sysv/linux/sh/Versions |  1 +
+ sysdeps/unix/sysv/linux/sh/sysdep.S | 11 +++++++++++
+ 2 files changed, 12 insertions(+)
+
+diff --git a/sysdeps/unix/sysv/linux/sh/Versions b/sysdeps/unix/sysv/linux/sh/Versions
+index e0938c4165..ca1d7da339 100644
+--- a/sysdeps/unix/sysv/linux/sh/Versions
++++ b/sysdeps/unix/sysv/linux/sh/Versions
+@@ -2,6 +2,7 @@ libc {
+   GLIBC_2.2 {
+     # functions used in other libraries
+     __xstat64; __fxstat64; __lxstat64;
++    __fpscr_values;
+ 
+     # a*
+     alphasort64;
+diff --git a/sysdeps/unix/sysv/linux/sh/sysdep.S b/sysdeps/unix/sysv/linux/sh/sysdep.S
+index 5f11bc737b..2fd217b00b 100644
+--- a/sysdeps/unix/sysv/linux/sh/sysdep.S
++++ b/sysdeps/unix/sysv/linux/sh/sysdep.S
+@@ -30,3 +30,14 @@ ENTRY (__syscall_error)
+ 
+ #define __syscall_error __syscall_error_1
+ #include <sysdeps/unix/sh/sysdep.S>
++
++       .data
++       .align 3
++       .globl ___fpscr_values
++       .type ___fpscr_values, @object
++       .size ___fpscr_values, 8
++___fpscr_values:
++       .long 0
++       .long 0x80000
++weak_alias (___fpscr_values, __fpscr_values)
++
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0020-eglibc-cherry-picked-from.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0020-eglibc-cherry-picked-from.patch
deleted file mode 100644
index 4344573..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0020-eglibc-cherry-picked-from.patch
+++ /dev/null
@@ -1,64 +0,0 @@
-From b2ed906ec864583b43379ef9ad2b5630c1232565 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Thu, 31 Dec 2015 15:10:33 -0800
-Subject: [PATCH 20/26] eglibc: cherry-picked from
-
-http://www.eglibc.org/archives/patches/msg00772.html
-
-Not yet merged into glibc
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- resolv/res_libc.c | 14 +++++++++++++-
- 1 file changed, 13 insertions(+), 1 deletion(-)
-
-diff --git a/resolv/res_libc.c b/resolv/res_libc.c
-index a4b376f..3256e12 100644
---- a/resolv/res_libc.c
-+++ b/resolv/res_libc.c
-@@ -21,11 +21,13 @@
- #include <atomic.h>
- #include <limits.h>
- #include <sys/types.h>
-+#include <sys/stat.h>
- #include <netinet/in.h>
- #include <arpa/nameser.h>
- #include <resolv.h>
- #include <libc-lock.h>
- 
-+__libc_lock_define_initialized (static, lock);
- extern unsigned long long int __res_initstamp attribute_hidden;
- /* We have atomic increment operations on 64-bit platforms.  */
- #if __WORDSIZE == 64
-@@ -33,7 +35,6 @@ extern unsigned long long int __res_initstamp attribute_hidden;
- # define atomicincunlock(lock) (void) 0
- # define atomicinc(var) catomic_increment (&(var))
- #else
--__libc_lock_define_initialized (static, lock);
- # define atomicinclock(lock) __libc_lock_lock (lock)
- # define atomicincunlock(lock) __libc_lock_unlock (lock)
- # define atomicinc(var) ++var
-@@ -92,7 +93,18 @@ res_init(void) {
- int
- __res_maybe_init (res_state resp, int preinit)
- {
-+	static time_t last_mtime;
-+	struct stat statbuf;
-+	int ret;
-+
- 	if (resp->options & RES_INIT) {
-+		ret = stat (_PATH_RESCONF, &statbuf);
-+		__libc_lock_lock (lock);
-+		if ((ret == 0) && (last_mtime != statbuf.st_mtime)) {
-+			last_mtime = statbuf.st_mtime;
-+			atomicinc (__res_initstamp);
-+		}
-+		__libc_lock_unlock (lock);
- 		if (__res_initstamp != resp->_u._ext.initstamp) {
- 			if (resp->nscount > 0)
- 				__res_iclose (resp, true);
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0021-eglibc-Clear-cache-lines-on-ppc8xx.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0021-eglibc-Clear-cache-lines-on-ppc8xx.patch
deleted file mode 100644
index a9a7391..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0021-eglibc-Clear-cache-lines-on-ppc8xx.patch
+++ /dev/null
@@ -1,83 +0,0 @@
-From 000ab518aa1269714bc0a9a4633b0a538fae91d9 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Thu, 31 Dec 2015 15:15:09 -0800
-Subject: [PATCH 21/26] eglibc: Clear cache lines on ppc8xx
-
-2007-06-13  Nathan Sidwell  <nathan@codesourcery.com>
-            Mark Shinwell  <shinwell@codesourcery.com>
-
-        * sysdeps/unix/sysv/linux/powerpc/libc-start.c
-        (__libc_start_main): Detect 8xx parts and clear
-        __cache_line_size if detected.
-        * sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c
-        (DL_PLATFORM_AUXV): Likewise.
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c  | 14 +++++++++++++-
- sysdeps/unix/sysv/linux/powerpc/libc-start.c | 16 +++++++++++++++-
- 2 files changed, 28 insertions(+), 2 deletions(-)
-
-diff --git a/sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c b/sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c
-index 98ec2b3..b384ae0 100644
---- a/sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c
-+++ b/sysdeps/unix/sysv/linux/powerpc/dl-sysdep.c
-@@ -24,9 +24,21 @@ int __cache_line_size attribute_hidden;
- /* Scan the Aux Vector for the "Data Cache Block Size" entry.  If found
-    verify that the static extern __cache_line_size is defined by checking
-    for not NULL.  If it is defined then assign the cache block size
--   value to __cache_line_size.  */
-+   value to __cache_line_size.  This is used by memset to
-+   optimize setting to zero.  We have to detect 8xx processors, which
-+   have buggy dcbz implementations that cannot report page faults
-+   correctly.  That requires reading SPR, which is a privileged
-+   operation.  Fortunately 2.2.18 and later emulates PowerPC mfspr
-+   reads from the PVR register.   */
- #define DL_PLATFORM_AUXV						      \
-       case AT_DCACHEBSIZE:						      \
-+	if (__LINUX_KERNEL_VERSION >= 0x020218)				      \
-+	  {								      \
-+	    unsigned pvr = 0;						      \
-+	    asm ("mfspr %0, 287" : "=r" (pvr));				      \
-+	    if ((pvr & 0xffff0000) == 0x00500000)			      \
-+	      break;							      \
-+	  }								      \
- 	__cache_line_size = av->a_un.a_val;				      \
- 	break;
- 
-diff --git a/sysdeps/unix/sysv/linux/powerpc/libc-start.c b/sysdeps/unix/sysv/linux/powerpc/libc-start.c
-index 0efd297..8cc0ef8 100644
---- a/sysdeps/unix/sysv/linux/powerpc/libc-start.c
-+++ b/sysdeps/unix/sysv/linux/powerpc/libc-start.c
-@@ -73,11 +73,25 @@ __libc_start_main (int argc, char **argv,
- 
-   /* Initialize the __cache_line_size variable from the aux vector.  For the
-      static case, we also need _dl_hwcap, _dl_hwcap2 and _dl_platform, so we
--     can call __tcb_parse_hwcap_and_convert_at_platform ().  */
-+     can call __tcb_parse_hwcap_and_convert_at_platform ().
-+
-+     This is used by memset to optimize setting to zero.  We have to
-+     detect 8xx processors, which have buggy dcbz implementations that
-+     cannot report page faults correctly.  That requires reading SPR,
-+     which is a privileged operation.  Fortunately 2.2.18 and later
-+     emulates PowerPC mfspr reads from the PVR register.  */
-   for (ElfW (auxv_t) * av = auxvec; av->a_type != AT_NULL; ++av)
-     switch (av->a_type)
-       {
-       case AT_DCACHEBSIZE:
-+	if (__LINUX_KERNEL_VERSION >= 0x020218)
-+	  {
-+	    unsigned pvr = 0;
-+
-+	    asm ("mfspr %0, 287" : "=r" (pvr) :);
-+	    if ((pvr & 0xffff0000) == 0x00500000)
-+	      break;
-+	  }
- 	__cache_line_size = av->a_un.a_val;
- 	break;
- #ifndef SHARED
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0021-eglibc-Install-PIC-archives.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0021-eglibc-Install-PIC-archives.patch
new file mode 100644
index 0000000..6ee397b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0021-eglibc-Install-PIC-archives.patch
@@ -0,0 +1,123 @@
+From 3392ee83b0132c089dffb1e9892b4b252ce1ec0e Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 18 Mar 2015 01:57:01 +0000
+Subject: [PATCH 21/25] eglibc: Install PIC archives
+
+Forward port from eglibc
+
+2008-02-07  Joseph Myers  <joseph@codesourcery.com>
+
+        * Makerules (install-extras, install-map): New variables.
+        (installed-libcs): Add libc_pic.a.
+        (install-lib): Include _pic.a files for versioned shared
+        libraries.
+        (install-map-nosubdir, install-extras-nosubdir): Add rules for
+        installing extra files.
+        (install-no-libc.a-nosubdir): Depend on install-map-nosubdir and
+        install-extras-nosubdir.
+
+2008-04-01  Maxim Kuvyrkov  <maxim@codesourcery.com>
+
+        * Makerules (install-lib): Don't install libpthread_pic.a.
+        (install-map): Don't install libpthread_pic.map.
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ Makerules | 42 ++++++++++++++++++++++++++++++++++++++++--
+ 1 file changed, 40 insertions(+), 2 deletions(-)
+
+diff --git a/Makerules b/Makerules
+index 9bb707c168..74cbefb9ba 100644
+--- a/Makerules
++++ b/Makerules
+@@ -775,6 +775,9 @@ ifeq ($(build-shared),yes)
+ $(common-objpfx)libc.so: $(common-objpfx)libc.map
+ endif
+ common-generated += libc.so libc_pic.os
++ifndef subdir
++install-extras := soinit.o sofini.o
++endif
+ ifdef libc.so-version
+ $(common-objpfx)libc.so$(libc.so-version): $(common-objpfx)libc.so
+ 	$(make-link)
+@@ -1026,6 +1029,7 @@ endif
+ 
+ install: check-install-supported
+ 
++installed-libcs := $(installed-libcs) $(inst_libdir)/libc_pic.a
+ install: $(installed-libcs)
+ $(installed-libcs): $(inst_libdir)/lib$(libprefix)%: lib $(+force)
+ 	$(make-target-directory)
+@@ -1054,6 +1058,22 @@ versioned := $(strip $(foreach so,$(install-lib.so),\
+ install-lib.so-versioned := $(filter $(versioned), $(install-lib.so))
+ install-lib.so-unversioned := $(filter-out $(versioned), $(install-lib.so))
+ 
++# Install the _pic.a files for versioned libraries, and corresponding
++# .map files.
++# libpthread_pic.a breaks mklibs, so don't install it and its map.
++install-lib := $(install-lib) $(install-lib.so-versioned:%.so=%_pic.a)
++install-lib := $(filter-out libpthread_pic.a,$(install-lib))
++# Despite having a soname libhurduser and libmachuser do not use symbol
++# versioning, so don't install the corresponding .map files.
++ifeq ($(build-shared),yes)
++install-map := $(patsubst %.so,%.map,\
++			$(foreach L,$(install-lib.so-versioned),$(notdir $L)))
++install-map := $(filter-out libhurduser.map libmachuser.map libpthread.map,$(install-map))
++ifndef subdir
++install-map := $(install-map) libc.map
++endif
++endif
++
+ # For versioned libraries, we install three files:
+ #	$(inst_libdir)/libfoo.so	-- for linking, symlink or ld script
+ #	$(inst_slibdir)/libfoo.so.NN	-- for loading by SONAME, symlink
+@@ -1298,9 +1318,22 @@ $(addprefix $(inst_includedir)/,$(headers-nonh)): $(inst_includedir)/%: \
+ endif	# headers-nonh
+ endif	# headers
+ 
++ifdef install-map
++$(addprefix $(inst_libdir)/,$(patsubst lib%.map,lib%_pic.map,$(install-map))): \
++  $(inst_libdir)/lib%_pic.map: $(common-objpfx)lib%.map $(+force)
++	$(do-install)
++endif
++
++ifdef install-extras
++$(addprefix $(inst_libdir)/libc_pic/,$(install-extras)): \
++  $(inst_libdir)/libc_pic/%.o: $(elf-objpfx)%.os $(+force)
++	$(do-install)
++endif
++
+ .PHONY: install-bin-nosubdir install-bin-script-nosubdir \
+ 	install-rootsbin-nosubdir install-sbin-nosubdir install-lib-nosubdir \
+-	install-data-nosubdir install-headers-nosubdir
++	install-data-nosubdir install-headers-nosubdir install-map-nosubdir \
++	install-extras-nosubdir
+ install-bin-nosubdir: $(addprefix $(inst_bindir)/,$(install-bin))
+ install-bin-script-nosubdir: $(addprefix $(inst_bindir)/,$(install-bin-script))
+ install-rootsbin-nosubdir: \
+@@ -1313,6 +1346,10 @@ install-data-nosubdir: $(addprefix $(inst_datadir)/,$(install-data))
+ install-headers-nosubdir: $(addprefix $(inst_includedir)/,$(headers))
+ install-others-nosubdir: $(install-others)
+ install-others-programs-nosubdir: $(install-others-programs)
++install-map-nosubdir: $(addprefix $(inst_libdir)/,\
++		       $(patsubst lib%.map,lib%_pic.map,$(install-map)))
++install-extras-nosubdir: $(addprefix $(inst_libdir)/libc_pic/,\
++		       $(install-extras))
+ 
+ # We need all the `-nosubdir' targets so that `install' in the parent
+ # doesn't depend on several things which each iterate over the subdirs.
+@@ -1322,7 +1359,8 @@ install-%:: install-%-nosubdir ;
+ 
+ .PHONY: install install-no-libc.a-nosubdir
+ install-no-libc.a-nosubdir: install-headers-nosubdir install-data-nosubdir \
+-			    install-lib-nosubdir install-others-nosubdir
++			    install-lib-nosubdir install-others-nosubdir \
++			    install-map-nosubdir install-extras-nosubdir
+ ifeq ($(build-programs),yes)
+ install-no-libc.a-nosubdir: install-bin-nosubdir install-bin-script-nosubdir \
+ 			    install-rootsbin-nosubdir install-sbin-nosubdir \
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0022-eglibc-Forward-port-cross-locale-generation-support.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0022-eglibc-Forward-port-cross-locale-generation-support.patch
new file mode 100644
index 0000000..2a8a20a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0022-eglibc-Forward-port-cross-locale-generation-support.patch
@@ -0,0 +1,566 @@
+From d97533dc201cfd863765b1a67a27fde3e2622da7 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 18 Mar 2015 01:33:49 +0000
+Subject: [PATCH 22/25] eglibc: Forward port cross locale generation support
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ locale/Makefile               |  3 ++-
+ locale/catnames.c             | 48 +++++++++++++++++++++++++++++++++++
+ locale/localeinfo.h           |  2 +-
+ locale/programs/charmap-dir.c |  6 +++++
+ locale/programs/ld-collate.c  | 17 ++++++-------
+ locale/programs/ld-ctype.c    | 27 ++++++++++----------
+ locale/programs/ld-time.c     | 31 +++++++++++++++--------
+ locale/programs/linereader.c  |  2 +-
+ locale/programs/localedef.c   |  8 ++++++
+ locale/programs/locfile.c     |  5 +++-
+ locale/programs/locfile.h     | 59 +++++++++++++++++++++++++++++++++++++++++--
+ locale/setlocale.c            | 30 ----------------------
+ 12 files changed, 169 insertions(+), 69 deletions(-)
+ create mode 100644 locale/catnames.c
+
+diff --git a/locale/Makefile b/locale/Makefile
+index 98ee76272d..bc3afb2248 100644
+--- a/locale/Makefile
++++ b/locale/Makefile
+@@ -26,7 +26,8 @@ headers		= langinfo.h locale.h bits/locale.h \
+ 		  bits/types/locale_t.h bits/types/__locale_t.h
+ routines	= setlocale findlocale loadlocale loadarchive \
+ 		  localeconv nl_langinfo nl_langinfo_l mb_cur_max \
+-		  newlocale duplocale freelocale uselocale
++		  newlocale duplocale freelocale uselocale \
++		  catnames
+ tests		= tst-C-locale tst-locname tst-duplocale
+ categories	= ctype messages monetary numeric time paper name \
+ 		  address telephone measurement identification collate
+diff --git a/locale/catnames.c b/locale/catnames.c
+new file mode 100644
+index 0000000000..9fad357db1
+--- /dev/null
++++ b/locale/catnames.c
+@@ -0,0 +1,48 @@
++/* Copyright (C) 2006  Free Software Foundation, Inc.
++   This file is part of the GNU C Library.
++
++   The GNU C Library is free software; you can redistribute it and/or
++   modify it under the terms of the GNU Lesser General Public
++   License as published by the Free Software Foundation; either
++   version 2.1 of the License, or (at your option) any later version.
++
++   The GNU C Library is distributed in the hope that it will be useful,
++   but WITHOUT ANY WARRANTY; without even the implied warranty of
++   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++   Lesser General Public License for more details.
++
++   You should have received a copy of the GNU Lesser General Public
++   License along with the GNU C Library; if not, write to the Free
++   Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
++   02111-1307 USA.  */
++
++#include "localeinfo.h"
++
++/* Define an array of category names (also the environment variable names).  */
++const union catnamestr_t _nl_category_names attribute_hidden =
++  {
++    {
++#define DEFINE_CATEGORY(category, category_name, items, a) \
++      category_name,
++#include "categories.def"
++#undef DEFINE_CATEGORY
++    }
++  };
++
++const uint8_t _nl_category_name_idxs[__LC_LAST] attribute_hidden =
++  {
++#define DEFINE_CATEGORY(category, category_name, items, a) \
++    [category] = offsetof (union catnamestr_t, CATNAMEMF (__LINE__)),
++#include "categories.def"
++#undef DEFINE_CATEGORY
++  };
++
++/* An array of their lengths, for convenience.  */
++const uint8_t _nl_category_name_sizes[] attribute_hidden =
++  {
++#define DEFINE_CATEGORY(category, category_name, items, a) \
++    [category] = sizeof (category_name) - 1,
++#include "categories.def"
++#undef	DEFINE_CATEGORY
++    [LC_ALL] = sizeof ("LC_ALL") - 1
++  };
+diff --git a/locale/localeinfo.h b/locale/localeinfo.h
+index 4e1c8c568a..f7ed946f1c 100644
+--- a/locale/localeinfo.h
++++ b/locale/localeinfo.h
+@@ -224,7 +224,7 @@ __libc_tsd_define (extern, locale_t, LOCALE)
+    unused.  We can manage this playing some tricks with weak references.
+    But with thread-local locale settings, it becomes quite ungainly unless
+    we can use __thread variables.  So only in that case do we attempt this.  */
+-#ifndef SHARED
++#if !defined SHARED && !defined IN_GLIBC_LOCALEDEF
+ # include <tls.h>
+ # define NL_CURRENT_INDIRECT	1
+ #endif
+diff --git a/locale/programs/charmap-dir.c b/locale/programs/charmap-dir.c
+index e55ab86e28..0f87e6dd28 100644
+--- a/locale/programs/charmap-dir.c
++++ b/locale/programs/charmap-dir.c
+@@ -19,7 +19,9 @@
+ #include <error.h>
+ #include <fcntl.h>
+ #include <libintl.h>
++#ifndef NO_UNCOMPRESS
+ #include <spawn.h>
++#endif
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
+@@ -156,6 +158,7 @@ charmap_closedir (CHARMAP_DIR *cdir)
+   return closedir (dir);
+ }
+ 
++#ifndef NO_UNCOMPRESS
+ /* Creates a subprocess decompressing the given pathname, and returns
+    a stream reading its output (the decompressed data).  */
+ static
+@@ -204,6 +207,7 @@ fopen_uncompressed (const char *pathname, const char *compressor)
+     }
+   return NULL;
+ }
++#endif
+ 
+ /* Opens a charmap for reading, given its name (not an alias name).  */
+ FILE *
+@@ -226,6 +230,7 @@ charmap_open (const char *directory, const char *name)
+   if (stream != NULL)
+     return stream;
+ 
++#ifndef NO_UNCOMPRESS
+   memcpy (p, ".gz", 4);
+   stream = fopen_uncompressed (pathname, "gzip");
+   if (stream != NULL)
+@@ -235,6 +240,7 @@ charmap_open (const char *directory, const char *name)
+   stream = fopen_uncompressed (pathname, "bzip2");
+   if (stream != NULL)
+     return stream;
++#endif
+ 
+   return NULL;
+ }
+diff --git a/locale/programs/ld-collate.c b/locale/programs/ld-collate.c
+index cec848cb7c..fcd768eb7d 100644
+--- a/locale/programs/ld-collate.c
++++ b/locale/programs/ld-collate.c
+@@ -350,7 +350,7 @@ new_element (struct locale_collate_t *collate, const char *mbs, size_t mbslen,
+     }
+   if (wcs != NULL)
+     {
+-      size_t nwcs = wcslen ((wchar_t *) wcs);
++      size_t nwcs = wcslen_uint32 (wcs);
+       uint32_t zero = 0;
+       /* Handle <U0000> as a single character.  */
+       if (nwcs == 0)
+@@ -1776,8 +1776,7 @@ symbol `%s' has the same encoding as"), (*eptr)->name);
+ 
+ 	      if ((*eptr)->nwcs == runp->nwcs)
+ 		{
+-		  int c = wmemcmp ((wchar_t *) (*eptr)->wcs,
+-				   (wchar_t *) runp->wcs, runp->nwcs);
++		  int c = wmemcmp_uint32 ((*eptr)->wcs, runp->wcs, runp->nwcs);
+ 
+ 		  if (c == 0)
+ 		    {
+@@ -2010,9 +2009,9 @@ add_to_tablewc (uint32_t ch, struct element_t *runp)
+ 	     one consecutive entry.  */
+ 	  if (runp->wcnext != NULL
+ 	      && runp->nwcs == runp->wcnext->nwcs
+-	      && wmemcmp ((wchar_t *) runp->wcs,
+-			  (wchar_t *)runp->wcnext->wcs,
+-			  runp->nwcs - 1) == 0
++	      && wmemcmp_uint32 (runp->wcs,
++				 runp->wcnext->wcs,
++				 runp->nwcs - 1) == 0
+ 	      && (runp->wcs[runp->nwcs - 1]
+ 		  == runp->wcnext->wcs[runp->nwcs - 1] + 1))
+ 	    {
+@@ -2036,9 +2035,9 @@ add_to_tablewc (uint32_t ch, struct element_t *runp)
+ 		runp = runp->wcnext;
+ 	      while (runp->wcnext != NULL
+ 		     && runp->nwcs == runp->wcnext->nwcs
+-		     && wmemcmp ((wchar_t *) runp->wcs,
+-				 (wchar_t *)runp->wcnext->wcs,
+-				 runp->nwcs - 1) == 0
++		     && wmemcmp_uint32 (runp->wcs,
++					runp->wcnext->wcs,
++					runp->nwcs - 1) == 0
+ 		     && (runp->wcs[runp->nwcs - 1]
+ 			 == runp->wcnext->wcs[runp->nwcs - 1] + 1));
+ 
+diff --git a/locale/programs/ld-ctype.c b/locale/programs/ld-ctype.c
+index df266c20d6..05c0152ec9 100644
+--- a/locale/programs/ld-ctype.c
++++ b/locale/programs/ld-ctype.c
+@@ -926,7 +926,7 @@ ctype_output (struct localedef_t *locale, const struct charmap_t *charmap,
+   allocate_arrays (ctype, charmap, ctype->repertoire);
+ 
+   default_missing_len = (ctype->default_missing
+-			 ? wcslen ((wchar_t *) ctype->default_missing)
++			 ? wcslen_uint32 (ctype->default_missing)
+ 			 : 0);
+ 
+   init_locale_data (&file, nelems);
+@@ -1937,7 +1937,7 @@ read_translit_entry (struct linereader *ldfile, struct locale_ctype_t *ctype,
+ 	    ignore = 1;
+ 	  else
+ 	    /* This value is usable.  */
+-	    obstack_grow (ob, to_wstr, wcslen ((wchar_t *) to_wstr) * 4);
++	    obstack_grow (ob, to_wstr, wcslen_uint32 (to_wstr) * 4);
+ 
+ 	  first = 0;
+ 	}
+@@ -2471,8 +2471,8 @@ with character code range values one must use the absolute ellipsis `...'"));
+ 	    }
+ 
+ 	handle_tok_digit:
+-	  class_bit = _ISwdigit;
+-	  class256_bit = _ISdigit;
++	  class_bit = BITw (tok_digit);
++	  class256_bit = BIT (tok_digit);
+ 	  handle_digits = 1;
+ 	  goto read_charclass;
+ 
+@@ -3929,8 +3929,7 @@ allocate_arrays (struct locale_ctype_t *ctype, const struct charmap_t *charmap,
+ 
+ 	  while (idx < number)
+ 	    {
+-	      int res = wcscmp ((const wchar_t *) sorted[idx]->from,
+-				(const wchar_t *) runp->from);
++	      int res = wcscmp_uint32 (sorted[idx]->from, runp->from);
+ 	      if (res == 0)
+ 		{
+ 		  replace = 1;
+@@ -3967,11 +3966,11 @@ allocate_arrays (struct locale_ctype_t *ctype, const struct charmap_t *charmap,
+       for (size_t cnt = 0; cnt < number; ++cnt)
+ 	{
+ 	  struct translit_to_t *srunp;
+-	  from_len += wcslen ((const wchar_t *) sorted[cnt]->from) + 1;
++	  from_len += wcslen_uint32 (sorted[cnt]->from) + 1;
+ 	  srunp = sorted[cnt]->to;
+ 	  while (srunp != NULL)
+ 	    {
+-	      to_len += wcslen ((const wchar_t *) srunp->str) + 1;
++	      to_len += wcslen_uint32 (srunp->str) + 1;
+ 	      srunp = srunp->next;
+ 	    }
+ 	  /* Plus one for the extra NUL character marking the end of
+@@ -3995,18 +3994,18 @@ allocate_arrays (struct locale_ctype_t *ctype, const struct charmap_t *charmap,
+ 	  ctype->translit_from_idx[cnt] = from_len;
+ 	  ctype->translit_to_idx[cnt] = to_len;
+ 
+-	  len = wcslen ((const wchar_t *) sorted[cnt]->from) + 1;
+-	  wmemcpy ((wchar_t *) &ctype->translit_from_tbl[from_len],
+-		   (const wchar_t *) sorted[cnt]->from, len);
++	  len = wcslen_uint32 (sorted[cnt]->from) + 1;
++	  wmemcpy_uint32 (&ctype->translit_from_tbl[from_len],
++			  sorted[cnt]->from, len);
+ 	  from_len += len;
+ 
+ 	  ctype->translit_to_idx[cnt] = to_len;
+ 	  srunp = sorted[cnt]->to;
+ 	  while (srunp != NULL)
+ 	    {
+-	      len = wcslen ((const wchar_t *) srunp->str) + 1;
+-	      wmemcpy ((wchar_t *) &ctype->translit_to_tbl[to_len],
+-		       (const wchar_t *) srunp->str, len);
++	      len = wcslen_uint32 (srunp->str) + 1;
++	      wmemcpy_uint32 (&ctype->translit_to_tbl[to_len],
++			      srunp->str, len);
+ 	      to_len += len;
+ 	      srunp = srunp->next;
+ 	    }
+diff --git a/locale/programs/ld-time.c b/locale/programs/ld-time.c
+index 32e9c41e35..6a61fcedeb 100644
+--- a/locale/programs/ld-time.c
++++ b/locale/programs/ld-time.c
+@@ -215,8 +215,10 @@ No definition for %s category found"), "LC_TIME"));
+ 	}
+       else
+ 	{
++	  static const uint32_t wt_fmt_ampm[]
++	    = { '%','I',':','%','M',':','%','S',' ','%','p',0 };
+ 	  time->t_fmt_ampm = "%I:%M:%S %p";
+-	  time->wt_fmt_ampm = (const uint32_t *) L"%I:%M:%S %p";
++	  time->wt_fmt_ampm = wt_fmt_ampm;
+ 	}
+     }
+ 
+@@ -226,7 +228,7 @@ No definition for %s category found"), "LC_TIME"));
+       const int days_per_month[12] = { 31, 29, 31, 30, 31, 30,
+ 				       31, 31, 30, 31 ,30, 31 };
+       size_t idx;
+-      wchar_t *wstr;
++      uint32_t *wstr;
+ 
+       time->era_entries =
+ 	(struct era_data *) xmalloc (time->num_era
+@@ -464,18 +466,18 @@ No definition for %s category found"), "LC_TIME"));
+ 	    }
+ 
+ 	  /* Now generate the wide character name and format.  */
+-	  wstr = wcschr ((wchar_t *) time->wera[idx], L':');/* end direction */
+-	  wstr = wstr ? wcschr (wstr + 1, L':') : NULL;	/* end offset */
+-	  wstr = wstr ? wcschr (wstr + 1, L':') : NULL;	/* end start */
+-	  wstr = wstr ? wcschr (wstr + 1, L':') : NULL;	/* end end */
++	  wstr = wcschr_uint32 (time->wera[idx], L':'); /* end direction */
++	  wstr = wstr ? wcschr_uint32 (wstr + 1, L':') : NULL; /* end offset */
++	  wstr = wstr ? wcschr_uint32 (wstr + 1, L':') : NULL; /* end start */
++	  wstr = wstr ? wcschr_uint32 (wstr + 1, L':') : NULL; /* end end */
+ 	  if (wstr != NULL)
+ 	    {
+-	      time->era_entries[idx].wname = (uint32_t *) wstr + 1;
+-	      wstr = wcschr (wstr + 1, L':');	/* end name */
++	      time->era_entries[idx].wname = wstr + 1;
++	      wstr = wcschr_uint32 (wstr + 1, L':'); /* end name */
+ 	      if (wstr != NULL)
+ 		{
+ 		  *wstr = L'\0';
+-		  time->era_entries[idx].wformat = (uint32_t *) wstr + 1;
++		  time->era_entries[idx].wformat = wstr + 1;
+ 		}
+ 	      else
+ 		time->era_entries[idx].wname =
+@@ -534,7 +536,16 @@ No definition for %s category found"), "LC_TIME"));
+   if (time->date_fmt == NULL)
+     time->date_fmt = "%a %b %e %H:%M:%S %Z %Y";
+   if (time->wdate_fmt == NULL)
+-    time->wdate_fmt = (const uint32_t *) L"%a %b %e %H:%M:%S %Z %Y";
++    {
++      static const uint32_t wdate_fmt[] =
++	{ '%','a',' ',
++	  '%','b',' ',
++	  '%','e',' ',
++	  '%','H',':','%','M',':','%','S',' ',
++	  '%','Z',' ',
++	  '%','Y',0 };
++      time->wdate_fmt = wdate_fmt;
++    }
+ }
+ 
+ 
+diff --git a/locale/programs/linereader.c b/locale/programs/linereader.c
+index 52b340963a..1a8bce17b4 100644
+--- a/locale/programs/linereader.c
++++ b/locale/programs/linereader.c
+@@ -595,7 +595,7 @@ get_string (struct linereader *lr, const struct charmap_t *charmap,
+ {
+   int return_widestr = lr->return_widestr;
+   char *buf;
+-  wchar_t *buf2 = NULL;
++  uint32_t *buf2 = NULL;
+   size_t bufact;
+   size_t bufmax = 56;
+ 
+diff --git a/locale/programs/localedef.c b/locale/programs/localedef.c
+index 6acc1342c7..df87740f8b 100644
+--- a/locale/programs/localedef.c
++++ b/locale/programs/localedef.c
+@@ -108,6 +108,7 @@ void (*argp_program_version_hook) (FILE *, struct argp_state *) = print_version;
+ #define OPT_LIST_ARCHIVE 309
+ #define OPT_LITTLE_ENDIAN 400
+ #define OPT_BIG_ENDIAN 401
++#define OPT_UINT32_ALIGN 402
+ 
+ /* Definitions of arguments for argp functions.  */
+ static const struct argp_option options[] =
+@@ -143,6 +144,8 @@ static const struct argp_option options[] =
+     N_("Generate little-endian output") },
+   { "big-endian", OPT_BIG_ENDIAN, NULL, 0,
+     N_("Generate big-endian output") },
++  { "uint32-align", OPT_UINT32_ALIGN, "ALIGNMENT", 0,
++    N_("Set the target's uint32_t alignment in bytes (default 4)") },
+   { NULL, 0, NULL, 0, NULL }
+ };
+ 
+@@ -232,12 +235,14 @@ main (int argc, char *argv[])
+      ctype locale.  (P1003.2 4.35.5.2)  */
+   setlocale (LC_CTYPE, "POSIX");
+ 
++#ifndef NO_SYSCONF
+   /* Look whether the system really allows locale definitions.  POSIX
+      defines error code 3 for this situation so I think it must be
+      a fatal error (see P1003.2 4.35.8).  */
+   if (sysconf (_SC_2_LOCALEDEF) < 0)
+     WITH_CUR_LOCALE (error (3, 0, _("\
+ FATAL: system does not define `_POSIX2_LOCALEDEF'")));
++#endif
+ 
+   /* Process charmap file.  */
+   charmap = charmap_read (charmap_file, verbose, 1, be_quiet, 1);
+@@ -328,6 +333,9 @@ parse_opt (int key, char *arg, struct argp_state *state)
+     case OPT_BIG_ENDIAN:
+       set_big_endian (true);
+       break;
++    case OPT_UINT32_ALIGN:
++      uint32_align_mask = strtol (arg, NULL, 0) - 1;
++      break;
+     case 'c':
+       force_output = 1;
+       break;
+diff --git a/locale/programs/locfile.c b/locale/programs/locfile.c
+index 0990ef11be..683422c908 100644
+--- a/locale/programs/locfile.c
++++ b/locale/programs/locfile.c
+@@ -544,6 +544,9 @@ compare_files (const char *filename1, const char *filename2, size_t size,
+    machine running localedef.  */
+ bool swap_endianness_p;
+ 
++/* The target's value of __align__(uint32_t) - 1.  */
++unsigned int uint32_align_mask = 3;
++
+ /* When called outside a start_locale_structure/end_locale_structure
+    or start_locale_prelude/end_locale_prelude block, record that the
+    next byte in FILE's obstack will be the first byte of a new element.
+@@ -621,7 +624,7 @@ add_locale_string (struct locale_file *file, const char *string)
+ void
+ add_locale_wstring (struct locale_file *file, const uint32_t *string)
+ {
+-  add_locale_uint32_array (file, string, wcslen ((const wchar_t *) string) + 1);
++  add_locale_uint32_array (file, string, wcslen_uint32 (string) + 1);
+ }
+ 
+ /* Record that FILE's next element is the 32-bit integer VALUE.  */
+diff --git a/locale/programs/locfile.h b/locale/programs/locfile.h
+index 3407e13c13..0bb556caf8 100644
+--- a/locale/programs/locfile.h
++++ b/locale/programs/locfile.h
+@@ -71,6 +71,8 @@ extern void write_all_categories (struct localedef_t *definitions,
+ 
+ extern bool swap_endianness_p;
+ 
++extern unsigned int uint32_align_mask;
++
+ /* Change the output to be big-endian if BIG_ENDIAN is true and
+    little-endian otherwise.  */
+ static inline void
+@@ -89,7 +91,8 @@ maybe_swap_uint32 (uint32_t value)
+ }
+ 
+ /* Likewise, but munge an array of N uint32_ts starting at ARRAY.  */
+-static inline void
++static void
++__attribute__ ((unused))
+ maybe_swap_uint32_array (uint32_t *array, size_t n)
+ {
+   if (swap_endianness_p)
+@@ -99,7 +102,8 @@ maybe_swap_uint32_array (uint32_t *array, size_t n)
+ 
+ /* Like maybe_swap_uint32_array, but the array of N elements is at
+    the end of OBSTACK's current object.  */
+-static inline void
++static void
++__attribute__ ((unused))
+ maybe_swap_uint32_obstack (struct obstack *obstack, size_t n)
+ {
+   maybe_swap_uint32_array ((uint32_t *) obstack_next_free (obstack) - n, n);
+@@ -276,4 +280,55 @@ extern void identification_output (struct localedef_t *locale,
+ 				   const struct charmap_t *charmap,
+ 				   const char *output_path);
+ 
++static size_t wcslen_uint32 (const uint32_t *str) __attribute__ ((unused));
++static uint32_t * wmemcpy_uint32 (uint32_t *s1, const uint32_t *s2, size_t n) __attribute__ ((unused));
++static uint32_t * wcschr_uint32 (const uint32_t *s, uint32_t ch) __attribute__ ((unused));
++static int wcscmp_uint32 (const uint32_t *s1, const uint32_t *s2) __attribute__ ((unused));
++static int wmemcmp_uint32 (const uint32_t *s1, const uint32_t *s2, size_t n) __attribute__ ((unused));
++
++static size_t
++wcslen_uint32 (const uint32_t *str)
++{
++  size_t len = 0;
++  while (str[len] != 0)
++    len++;
++  return len;
++}
++
++static  int
++wmemcmp_uint32 (const uint32_t *s1, const uint32_t *s2, size_t n)
++{
++  while (n-- != 0)
++    {
++      int diff = *s1++ - *s2++;
++      if (diff != 0)
++	return diff;
++    }
++  return 0;
++}
++
++static int
++wcscmp_uint32 (const uint32_t *s1, const uint32_t *s2)
++{
++  while (*s1 != 0 && *s1 == *s2)
++    s1++, s2++;
++  return *s1 - *s2;
++}
++
++static uint32_t *
++wmemcpy_uint32 (uint32_t *s1, const uint32_t *s2, size_t n)
++{
++  return memcpy (s1, s2, n * sizeof (uint32_t));
++}
++
++static uint32_t *
++wcschr_uint32 (const uint32_t *s, uint32_t ch)
++{
++  do
++    if (*s == ch)
++      return (uint32_t *) s;
++  while (*s++ != 0);
++  return 0;
++}
++
+ #endif /* locfile.h */
+diff --git a/locale/setlocale.c b/locale/setlocale.c
+index 19acc4b2c7..c89d3b87ad 100644
+--- a/locale/setlocale.c
++++ b/locale/setlocale.c
+@@ -64,36 +64,6 @@ static char *const _nl_current_used[] =
+ #endif
+ 
+ 
+-/* Define an array of category names (also the environment variable names).  */
+-const union catnamestr_t _nl_category_names attribute_hidden =
+-  {
+-    {
+-#define DEFINE_CATEGORY(category, category_name, items, a) \
+-      category_name,
+-#include "categories.def"
+-#undef DEFINE_CATEGORY
+-    }
+-  };
+-
+-const uint8_t _nl_category_name_idxs[__LC_LAST] attribute_hidden =
+-  {
+-#define DEFINE_CATEGORY(category, category_name, items, a) \
+-    [category] = offsetof (union catnamestr_t, CATNAMEMF (__LINE__)),
+-#include "categories.def"
+-#undef DEFINE_CATEGORY
+-  };
+-
+-/* An array of their lengths, for convenience.  */
+-const uint8_t _nl_category_name_sizes[] attribute_hidden =
+-  {
+-#define DEFINE_CATEGORY(category, category_name, items, a) \
+-    [category] = sizeof (category_name) - 1,
+-#include "categories.def"
+-#undef	DEFINE_CATEGORY
+-    [LC_ALL] = sizeof ("LC_ALL") - 1
+-  };
+-
+-
+ #ifdef NL_CURRENT_INDIRECT
+ # define WEAK_POSTLOAD(postload) weak_extern (postload)
+ #else
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0022-eglibc-Resolve-__fpscr_values-on-SH4.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0022-eglibc-Resolve-__fpscr_values-on-SH4.patch
deleted file mode 100644
index c0cd5b0..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0022-eglibc-Resolve-__fpscr_values-on-SH4.patch
+++ /dev/null
@@ -1,56 +0,0 @@
-From a50c6e80543fb4cbc589978c11fe846bf4a94492 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 18 Mar 2015 00:55:53 +0000
-Subject: [PATCH 22/26] eglibc: Resolve __fpscr_values on SH4
-
-2010-09-29  Nobuhiro Iwamatsu  <iwamatsu@nigauri.org>
-            Andrew Stubbs  <ams@codesourcery.com>
-
-        Resolve SH's __fpscr_values to symbol in libc.so.
-
-        * sysdeps/sh/sh4/fpu/fpu_control.h: Add C++ __set_fpscr prototype.
-        * sysdeps/unix/sysv/linux/sh/Versions (GLIBC_2.2): Add __fpscr_values.
-        * sysdeps/unix/sysv/linux/sh/sysdep.S (___fpscr_values): New constant.
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- sysdeps/unix/sysv/linux/sh/Versions |  1 +
- sysdeps/unix/sysv/linux/sh/sysdep.S | 11 +++++++++++
- 2 files changed, 12 insertions(+)
-
-diff --git a/sysdeps/unix/sysv/linux/sh/Versions b/sysdeps/unix/sysv/linux/sh/Versions
-index e0938c4..ca1d7da 100644
---- a/sysdeps/unix/sysv/linux/sh/Versions
-+++ b/sysdeps/unix/sysv/linux/sh/Versions
-@@ -2,6 +2,7 @@ libc {
-   GLIBC_2.2 {
-     # functions used in other libraries
-     __xstat64; __fxstat64; __lxstat64;
-+    __fpscr_values;
- 
-     # a*
-     alphasort64;
-diff --git a/sysdeps/unix/sysv/linux/sh/sysdep.S b/sysdeps/unix/sysv/linux/sh/sysdep.S
-index 0024d79..d1db7e4 100644
---- a/sysdeps/unix/sysv/linux/sh/sysdep.S
-+++ b/sysdeps/unix/sysv/linux/sh/sysdep.S
-@@ -30,3 +30,14 @@ ENTRY (__syscall_error)
- 
- #define __syscall_error __syscall_error_1
- #include <sysdeps/unix/sh/sysdep.S>
-+
-+       .data
-+       .align 3
-+       .globl ___fpscr_values
-+       .type ___fpscr_values, @object
-+       .size ___fpscr_values, 8
-+___fpscr_values:
-+       .long 0
-+       .long 0x80000
-+weak_alias (___fpscr_values, __fpscr_values)
-+
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0023-Define-DUMMY_LOCALE_T-if-not-defined.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0023-Define-DUMMY_LOCALE_T-if-not-defined.patch
new file mode 100644
index 0000000..9e580d4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0023-Define-DUMMY_LOCALE_T-if-not-defined.patch
@@ -0,0 +1,32 @@
+From cb4d00eac7f84092314de593626eea40f9529038 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 20 Apr 2016 21:11:00 -0700
+Subject: [PATCH 23/25] Define DUMMY_LOCALE_T if not defined
+
+This is a hack to fix building the locale bits on an older
+CentOS 5.X machine.
+
+Upstream-Status: Inappropriate [other]
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ locale/programs/config.h | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/locale/programs/config.h b/locale/programs/config.h
+index 5b416be0d8..79e66eed5e 100644
+--- a/locale/programs/config.h
++++ b/locale/programs/config.h
+@@ -19,6 +19,9 @@
+ #ifndef _LD_CONFIG_H
+ #define _LD_CONFIG_H	1
+ 
++#ifndef DUMMY_LOCALE_T
++#define DUMMY_LOCALE_T
++#endif
+ /* Use the internal textdomain used for libc messages.  */
+ #define PACKAGE _libc_intl_domainname
+ #ifndef VERSION
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0023-eglibc-Install-PIC-archives.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0023-eglibc-Install-PIC-archives.patch
deleted file mode 100644
index c3e571f..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0023-eglibc-Install-PIC-archives.patch
+++ /dev/null
@@ -1,123 +0,0 @@
-From 101568daf48d99e71b280a2fdd85460fe740d583 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 18 Mar 2015 01:57:01 +0000
-Subject: [PATCH 23/26] eglibc: Install PIC archives
-
-Forward port from eglibc
-
-2008-02-07  Joseph Myers  <joseph@codesourcery.com>
-
-        * Makerules (install-extras, install-map): New variables.
-        (installed-libcs): Add libc_pic.a.
-        (install-lib): Include _pic.a files for versioned shared
-        libraries.
-        (install-map-nosubdir, install-extras-nosubdir): Add rules for
-        installing extra files.
-        (install-no-libc.a-nosubdir): Depend on install-map-nosubdir and
-        install-extras-nosubdir.
-
-2008-04-01  Maxim Kuvyrkov  <maxim@codesourcery.com>
-
-        * Makerules (install-lib): Don't install libpthread_pic.a.
-        (install-map): Don't install libpthread_pic.map.
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- Makerules | 42 ++++++++++++++++++++++++++++++++++++++++--
- 1 file changed, 40 insertions(+), 2 deletions(-)
-
-diff --git a/Makerules b/Makerules
-index 61a0240..373e628 100644
---- a/Makerules
-+++ b/Makerules
-@@ -762,6 +762,9 @@ ifeq ($(build-shared),yes)
- $(common-objpfx)libc.so: $(common-objpfx)libc.map
- endif
- common-generated += libc.so libc_pic.os
-+ifndef subdir
-+install-extras := soinit.o sofini.o
-+endif
- ifdef libc.so-version
- $(common-objpfx)libc.so$(libc.so-version): $(common-objpfx)libc.so
- 	$(make-link)
-@@ -1004,6 +1007,7 @@ endif
- 
- install: check-install-supported
- 
-+installed-libcs := $(installed-libcs) $(inst_libdir)/libc_pic.a
- install: $(installed-libcs)
- $(installed-libcs): $(inst_libdir)/lib$(libprefix)%: lib $(+force)
- 	$(make-target-directory)
-@@ -1032,6 +1036,22 @@ versioned := $(strip $(foreach so,$(install-lib.so),\
- install-lib.so-versioned := $(filter $(versioned), $(install-lib.so))
- install-lib.so-unversioned := $(filter-out $(versioned), $(install-lib.so))
- 
-+# Install the _pic.a files for versioned libraries, and corresponding
-+# .map files.
-+# libpthread_pic.a breaks mklibs, so don't install it and its map.
-+install-lib := $(install-lib) $(install-lib.so-versioned:%.so=%_pic.a)
-+install-lib := $(filter-out libpthread_pic.a,$(install-lib))
-+# Despite having a soname libhurduser and libmachuser do not use symbol
-+# versioning, so don't install the corresponding .map files.
-+ifeq ($(build-shared),yes)
-+install-map := $(patsubst %.so,%.map,\
-+			$(foreach L,$(install-lib.so-versioned),$(notdir $L)))
-+install-map := $(filter-out libhurduser.map libmachuser.map libpthread.map,$(install-map))
-+ifndef subdir
-+install-map := $(install-map) libc.map
-+endif
-+endif
-+
- # For versioned libraries, we install three files:
- #	$(inst_libdir)/libfoo.so	-- for linking, symlink or ld script
- #	$(inst_slibdir)/libfoo.so.NN	-- for loading by SONAME, symlink
-@@ -1275,9 +1295,22 @@ $(addprefix $(inst_includedir)/,$(headers-nonh)): $(inst_includedir)/%: \
- endif	# headers-nonh
- endif	# headers
- 
-+ifdef install-map
-+$(addprefix $(inst_libdir)/,$(patsubst lib%.map,lib%_pic.map,$(install-map))): \
-+  $(inst_libdir)/lib%_pic.map: $(common-objpfx)lib%.map $(+force)
-+	$(do-install)
-+endif
-+
-+ifdef install-extras
-+$(addprefix $(inst_libdir)/libc_pic/,$(install-extras)): \
-+  $(inst_libdir)/libc_pic/%.o: $(elf-objpfx)%.os $(+force)
-+	$(do-install)
-+endif
-+
- .PHONY: install-bin-nosubdir install-bin-script-nosubdir \
- 	install-rootsbin-nosubdir install-sbin-nosubdir install-lib-nosubdir \
--	install-data-nosubdir install-headers-nosubdir
-+	install-data-nosubdir install-headers-nosubdir install-map-nosubdir \
-+	install-extras-nosubdir
- install-bin-nosubdir: $(addprefix $(inst_bindir)/,$(install-bin))
- install-bin-script-nosubdir: $(addprefix $(inst_bindir)/,$(install-bin-script))
- install-rootsbin-nosubdir: \
-@@ -1290,6 +1323,10 @@ install-data-nosubdir: $(addprefix $(inst_datadir)/,$(install-data))
- install-headers-nosubdir: $(addprefix $(inst_includedir)/,$(headers))
- install-others-nosubdir: $(install-others)
- install-others-programs-nosubdir: $(install-others-programs)
-+install-map-nosubdir: $(addprefix $(inst_libdir)/,\
-+		       $(patsubst lib%.map,lib%_pic.map,$(install-map)))
-+install-extras-nosubdir: $(addprefix $(inst_libdir)/libc_pic/,\
-+		       $(install-extras))
- 
- # We need all the `-nosubdir' targets so that `install' in the parent
- # doesn't depend on several things which each iterate over the subdirs.
-@@ -1299,7 +1336,8 @@ install-%:: install-%-nosubdir ;
- 
- .PHONY: install install-no-libc.a-nosubdir
- install-no-libc.a-nosubdir: install-headers-nosubdir install-data-nosubdir \
--			    install-lib-nosubdir install-others-nosubdir
-+			    install-lib-nosubdir install-others-nosubdir \
-+			    install-map-nosubdir install-extras-nosubdir
- ifeq ($(build-programs),yes)
- install-no-libc.a-nosubdir: install-bin-nosubdir install-bin-script-nosubdir \
- 			    install-rootsbin-nosubdir install-sbin-nosubdir \
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0024-eglibc-Forward-port-cross-locale-generation-support.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0024-eglibc-Forward-port-cross-locale-generation-support.patch
deleted file mode 100644
index 3399890..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0024-eglibc-Forward-port-cross-locale-generation-support.patch
+++ /dev/null
@@ -1,566 +0,0 @@
-From 82516e3ed372f618c886a2de4f9498f597aa8a8b Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 18 Mar 2015 01:33:49 +0000
-Subject: [PATCH 24/26] eglibc: Forward port cross locale generation support
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- locale/Makefile               |  3 ++-
- locale/catnames.c             | 48 +++++++++++++++++++++++++++++++++++
- locale/localeinfo.h           |  2 +-
- locale/programs/charmap-dir.c |  6 +++++
- locale/programs/ld-collate.c  | 17 ++++++-------
- locale/programs/ld-ctype.c    | 27 ++++++++++----------
- locale/programs/ld-time.c     | 31 +++++++++++++++--------
- locale/programs/linereader.c  |  2 +-
- locale/programs/localedef.c   |  8 ++++++
- locale/programs/locfile.c     |  5 +++-
- locale/programs/locfile.h     | 59 +++++++++++++++++++++++++++++++++++++++++--
- locale/setlocale.c            | 30 ----------------------
- 12 files changed, 169 insertions(+), 69 deletions(-)
- create mode 100644 locale/catnames.c
-
-diff --git a/locale/Makefile b/locale/Makefile
-index c5379e6..c98c675 100644
---- a/locale/Makefile
-+++ b/locale/Makefile
-@@ -25,7 +25,8 @@ include ../Makeconfig
- headers		= locale.h bits/locale.h langinfo.h xlocale.h
- routines	= setlocale findlocale loadlocale loadarchive \
- 		  localeconv nl_langinfo nl_langinfo_l mb_cur_max \
--		  newlocale duplocale freelocale uselocale
-+		  newlocale duplocale freelocale uselocale \
-+		  catnames
- tests		= tst-C-locale tst-locname tst-duplocale
- categories	= ctype messages monetary numeric time paper name \
- 		  address telephone measurement identification collate
-diff --git a/locale/catnames.c b/locale/catnames.c
-new file mode 100644
-index 0000000..9fad357
---- /dev/null
-+++ b/locale/catnames.c
-@@ -0,0 +1,48 @@
-+/* Copyright (C) 2006  Free Software Foundation, Inc.
-+   This file is part of the GNU C Library.
-+
-+   The GNU C Library is free software; you can redistribute it and/or
-+   modify it under the terms of the GNU Lesser General Public
-+   License as published by the Free Software Foundation; either
-+   version 2.1 of the License, or (at your option) any later version.
-+
-+   The GNU C Library is distributed in the hope that it will be useful,
-+   but WITHOUT ANY WARRANTY; without even the implied warranty of
-+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+   Lesser General Public License for more details.
-+
-+   You should have received a copy of the GNU Lesser General Public
-+   License along with the GNU C Library; if not, write to the Free
-+   Software Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA
-+   02111-1307 USA.  */
-+
-+#include "localeinfo.h"
-+
-+/* Define an array of category names (also the environment variable names).  */
-+const union catnamestr_t _nl_category_names attribute_hidden =
-+  {
-+    {
-+#define DEFINE_CATEGORY(category, category_name, items, a) \
-+      category_name,
-+#include "categories.def"
-+#undef DEFINE_CATEGORY
-+    }
-+  };
-+
-+const uint8_t _nl_category_name_idxs[__LC_LAST] attribute_hidden =
-+  {
-+#define DEFINE_CATEGORY(category, category_name, items, a) \
-+    [category] = offsetof (union catnamestr_t, CATNAMEMF (__LINE__)),
-+#include "categories.def"
-+#undef DEFINE_CATEGORY
-+  };
-+
-+/* An array of their lengths, for convenience.  */
-+const uint8_t _nl_category_name_sizes[] attribute_hidden =
-+  {
-+#define DEFINE_CATEGORY(category, category_name, items, a) \
-+    [category] = sizeof (category_name) - 1,
-+#include "categories.def"
-+#undef	DEFINE_CATEGORY
-+    [LC_ALL] = sizeof ("LC_ALL") - 1
-+  };
-diff --git a/locale/localeinfo.h b/locale/localeinfo.h
-index 1f4da92..7f68935 100644
---- a/locale/localeinfo.h
-+++ b/locale/localeinfo.h
-@@ -224,7 +224,7 @@ __libc_tsd_define (extern, __locale_t, LOCALE)
-    unused.  We can manage this playing some tricks with weak references.
-    But with thread-local locale settings, it becomes quite ungainly unless
-    we can use __thread variables.  So only in that case do we attempt this.  */
--#ifndef SHARED
-+#if !defined SHARED && !defined IN_GLIBC_LOCALEDEF
- # include <tls.h>
- # define NL_CURRENT_INDIRECT	1
- #endif
-diff --git a/locale/programs/charmap-dir.c b/locale/programs/charmap-dir.c
-index 99fcd35..5e528dc 100644
---- a/locale/programs/charmap-dir.c
-+++ b/locale/programs/charmap-dir.c
-@@ -19,7 +19,9 @@
- #include <error.h>
- #include <fcntl.h>
- #include <libintl.h>
-+#ifndef NO_UNCOMPRESS
- #include <spawn.h>
-+#endif
- #include <stdio.h>
- #include <stdlib.h>
- #include <string.h>
-@@ -156,6 +158,7 @@ charmap_closedir (CHARMAP_DIR *cdir)
-   return closedir (dir);
- }
- 
-+#ifndef NO_UNCOMPRESS
- /* Creates a subprocess decompressing the given pathname, and returns
-    a stream reading its output (the decompressed data).  */
- static
-@@ -204,6 +207,7 @@ fopen_uncompressed (const char *pathname, const char *compressor)
-     }
-   return NULL;
- }
-+#endif
- 
- /* Opens a charmap for reading, given its name (not an alias name).  */
- FILE *
-@@ -226,6 +230,7 @@ charmap_open (const char *directory, const char *name)
-   if (stream != NULL)
-     return stream;
- 
-+#ifndef NO_UNCOMPRESS
-   memcpy (p, ".gz", 4);
-   stream = fopen_uncompressed (pathname, "gzip");
-   if (stream != NULL)
-@@ -235,6 +240,7 @@ charmap_open (const char *directory, const char *name)
-   stream = fopen_uncompressed (pathname, "bzip2");
-   if (stream != NULL)
-     return stream;
-+#endif
- 
-   return NULL;
- }
-diff --git a/locale/programs/ld-collate.c b/locale/programs/ld-collate.c
-index 1e125f6..3b2867f 100644
---- a/locale/programs/ld-collate.c
-+++ b/locale/programs/ld-collate.c
-@@ -350,7 +350,7 @@ new_element (struct locale_collate_t *collate, const char *mbs, size_t mbslen,
-     }
-   if (wcs != NULL)
-     {
--      size_t nwcs = wcslen ((wchar_t *) wcs);
-+      size_t nwcs = wcslen_uint32 (wcs);
-       uint32_t zero = 0;
-       /* Handle <U0000> as a single character.  */
-       if (nwcs == 0)
-@@ -1776,8 +1776,7 @@ symbol `%s' has the same encoding as"), (*eptr)->name);
- 
- 	      if ((*eptr)->nwcs == runp->nwcs)
- 		{
--		  int c = wmemcmp ((wchar_t *) (*eptr)->wcs,
--				   (wchar_t *) runp->wcs, runp->nwcs);
-+		  int c = wmemcmp_uint32 ((*eptr)->wcs, runp->wcs, runp->nwcs);
- 
- 		  if (c == 0)
- 		    {
-@@ -2010,9 +2009,9 @@ add_to_tablewc (uint32_t ch, struct element_t *runp)
- 	     one consecutive entry.  */
- 	  if (runp->wcnext != NULL
- 	      && runp->nwcs == runp->wcnext->nwcs
--	      && wmemcmp ((wchar_t *) runp->wcs,
--			  (wchar_t *)runp->wcnext->wcs,
--			  runp->nwcs - 1) == 0
-+	      && wmemcmp_uint32 (runp->wcs,
-+				 runp->wcnext->wcs,
-+				 runp->nwcs - 1) == 0
- 	      && (runp->wcs[runp->nwcs - 1]
- 		  == runp->wcnext->wcs[runp->nwcs - 1] + 1))
- 	    {
-@@ -2036,9 +2035,9 @@ add_to_tablewc (uint32_t ch, struct element_t *runp)
- 		runp = runp->wcnext;
- 	      while (runp->wcnext != NULL
- 		     && runp->nwcs == runp->wcnext->nwcs
--		     && wmemcmp ((wchar_t *) runp->wcs,
--				 (wchar_t *)runp->wcnext->wcs,
--				 runp->nwcs - 1) == 0
-+		     && wmemcmp_uint32 (runp->wcs,
-+					runp->wcnext->wcs,
-+					runp->nwcs - 1) == 0
- 		     && (runp->wcs[runp->nwcs - 1]
- 			 == runp->wcnext->wcs[runp->nwcs - 1] + 1));
- 
-diff --git a/locale/programs/ld-ctype.c b/locale/programs/ld-ctype.c
-index 0fd141c..68136e6 100644
---- a/locale/programs/ld-ctype.c
-+++ b/locale/programs/ld-ctype.c
-@@ -926,7 +926,7 @@ ctype_output (struct localedef_t *locale, const struct charmap_t *charmap,
-   allocate_arrays (ctype, charmap, ctype->repertoire);
- 
-   default_missing_len = (ctype->default_missing
--			 ? wcslen ((wchar_t *) ctype->default_missing)
-+			 ? wcslen_uint32 (ctype->default_missing)
- 			 : 0);
- 
-   init_locale_data (&file, nelems);
-@@ -1937,7 +1937,7 @@ read_translit_entry (struct linereader *ldfile, struct locale_ctype_t *ctype,
- 	    ignore = 1;
- 	  else
- 	    /* This value is usable.  */
--	    obstack_grow (ob, to_wstr, wcslen ((wchar_t *) to_wstr) * 4);
-+	    obstack_grow (ob, to_wstr, wcslen_uint32 (to_wstr) * 4);
- 
- 	  first = 0;
- 	}
-@@ -2471,8 +2471,8 @@ with character code range values one must use the absolute ellipsis `...'"));
- 	    }
- 
- 	handle_tok_digit:
--	  class_bit = _ISwdigit;
--	  class256_bit = _ISdigit;
-+	  class_bit = BITw (tok_digit);
-+	  class256_bit = BIT (tok_digit);
- 	  handle_digits = 1;
- 	  goto read_charclass;
- 
-@@ -3929,8 +3929,7 @@ allocate_arrays (struct locale_ctype_t *ctype, const struct charmap_t *charmap,
- 
- 	  while (idx < number)
- 	    {
--	      int res = wcscmp ((const wchar_t *) sorted[idx]->from,
--				(const wchar_t *) runp->from);
-+	      int res = wcscmp_uint32 (sorted[idx]->from, runp->from);
- 	      if (res == 0)
- 		{
- 		  replace = 1;
-@@ -3967,11 +3966,11 @@ allocate_arrays (struct locale_ctype_t *ctype, const struct charmap_t *charmap,
-       for (size_t cnt = 0; cnt < number; ++cnt)
- 	{
- 	  struct translit_to_t *srunp;
--	  from_len += wcslen ((const wchar_t *) sorted[cnt]->from) + 1;
-+	  from_len += wcslen_uint32 (sorted[cnt]->from) + 1;
- 	  srunp = sorted[cnt]->to;
- 	  while (srunp != NULL)
- 	    {
--	      to_len += wcslen ((const wchar_t *) srunp->str) + 1;
-+	      to_len += wcslen_uint32 (srunp->str) + 1;
- 	      srunp = srunp->next;
- 	    }
- 	  /* Plus one for the extra NUL character marking the end of
-@@ -3995,18 +3994,18 @@ allocate_arrays (struct locale_ctype_t *ctype, const struct charmap_t *charmap,
- 	  ctype->translit_from_idx[cnt] = from_len;
- 	  ctype->translit_to_idx[cnt] = to_len;
- 
--	  len = wcslen ((const wchar_t *) sorted[cnt]->from) + 1;
--	  wmemcpy ((wchar_t *) &ctype->translit_from_tbl[from_len],
--		   (const wchar_t *) sorted[cnt]->from, len);
-+	  len = wcslen_uint32 (sorted[cnt]->from) + 1;
-+	  wmemcpy_uint32 (&ctype->translit_from_tbl[from_len],
-+			  sorted[cnt]->from, len);
- 	  from_len += len;
- 
- 	  ctype->translit_to_idx[cnt] = to_len;
- 	  srunp = sorted[cnt]->to;
- 	  while (srunp != NULL)
- 	    {
--	      len = wcslen ((const wchar_t *) srunp->str) + 1;
--	      wmemcpy ((wchar_t *) &ctype->translit_to_tbl[to_len],
--		       (const wchar_t *) srunp->str, len);
-+	      len = wcslen_uint32 (srunp->str) + 1;
-+	      wmemcpy_uint32 (&ctype->translit_to_tbl[to_len],
-+			      srunp->str, len);
- 	      to_len += len;
- 	      srunp = srunp->next;
- 	    }
-diff --git a/locale/programs/ld-time.c b/locale/programs/ld-time.c
-index 87531bc..5f2c266 100644
---- a/locale/programs/ld-time.c
-+++ b/locale/programs/ld-time.c
-@@ -215,8 +215,10 @@ No definition for %s category found"), "LC_TIME"));
- 	}
-       else
- 	{
-+	  static const uint32_t wt_fmt_ampm[]
-+	    = { '%','I',':','%','M',':','%','S',' ','%','p',0 };
- 	  time->t_fmt_ampm = "%I:%M:%S %p";
--	  time->wt_fmt_ampm = (const uint32_t *) L"%I:%M:%S %p";
-+	  time->wt_fmt_ampm = wt_fmt_ampm;
- 	}
-     }
- 
-@@ -226,7 +228,7 @@ No definition for %s category found"), "LC_TIME"));
-       const int days_per_month[12] = { 31, 29, 31, 30, 31, 30,
- 				       31, 31, 30, 31 ,30, 31 };
-       size_t idx;
--      wchar_t *wstr;
-+      uint32_t *wstr;
- 
-       time->era_entries =
- 	(struct era_data *) xmalloc (time->num_era
-@@ -464,18 +466,18 @@ No definition for %s category found"), "LC_TIME"));
- 	    }
- 
- 	  /* Now generate the wide character name and format.  */
--	  wstr = wcschr ((wchar_t *) time->wera[idx], L':');/* end direction */
--	  wstr = wstr ? wcschr (wstr + 1, L':') : NULL;	/* end offset */
--	  wstr = wstr ? wcschr (wstr + 1, L':') : NULL;	/* end start */
--	  wstr = wstr ? wcschr (wstr + 1, L':') : NULL;	/* end end */
-+	  wstr = wcschr_uint32 (time->wera[idx], L':'); /* end direction */
-+	  wstr = wstr ? wcschr_uint32 (wstr + 1, L':') : NULL; /* end offset */
-+	  wstr = wstr ? wcschr_uint32 (wstr + 1, L':') : NULL; /* end start */
-+	  wstr = wstr ? wcschr_uint32 (wstr + 1, L':') : NULL; /* end end */
- 	  if (wstr != NULL)
- 	    {
--	      time->era_entries[idx].wname = (uint32_t *) wstr + 1;
--	      wstr = wcschr (wstr + 1, L':');	/* end name */
-+	      time->era_entries[idx].wname = wstr + 1;
-+	      wstr = wcschr_uint32 (wstr + 1, L':'); /* end name */
- 	      if (wstr != NULL)
- 		{
- 		  *wstr = L'\0';
--		  time->era_entries[idx].wformat = (uint32_t *) wstr + 1;
-+		  time->era_entries[idx].wformat = wstr + 1;
- 		}
- 	      else
- 		time->era_entries[idx].wname =
-@@ -534,7 +536,16 @@ No definition for %s category found"), "LC_TIME"));
-   if (time->date_fmt == NULL)
-     time->date_fmt = "%a %b %e %H:%M:%S %Z %Y";
-   if (time->wdate_fmt == NULL)
--    time->wdate_fmt = (const uint32_t *) L"%a %b %e %H:%M:%S %Z %Y";
-+    {
-+      static const uint32_t wdate_fmt[] =
-+	{ '%','a',' ',
-+	  '%','b',' ',
-+	  '%','e',' ',
-+	  '%','H',':','%','M',':','%','S',' ',
-+	  '%','Z',' ',
-+	  '%','Y',0 };
-+      time->wdate_fmt = wdate_fmt;
-+    }
- }
- 
- 
-diff --git a/locale/programs/linereader.c b/locale/programs/linereader.c
-index b885f65..0afb631 100644
---- a/locale/programs/linereader.c
-+++ b/locale/programs/linereader.c
-@@ -595,7 +595,7 @@ get_string (struct linereader *lr, const struct charmap_t *charmap,
- {
-   int return_widestr = lr->return_widestr;
-   char *buf;
--  wchar_t *buf2 = NULL;
-+  uint32_t *buf2 = NULL;
-   size_t bufact;
-   size_t bufmax = 56;
- 
-diff --git a/locale/programs/localedef.c b/locale/programs/localedef.c
-index b4c48f1..ed08d48 100644
---- a/locale/programs/localedef.c
-+++ b/locale/programs/localedef.c
-@@ -108,6 +108,7 @@ void (*argp_program_version_hook) (FILE *, struct argp_state *) = print_version;
- #define OPT_LIST_ARCHIVE 309
- #define OPT_LITTLE_ENDIAN 400
- #define OPT_BIG_ENDIAN 401
-+#define OPT_UINT32_ALIGN 402
- 
- /* Definitions of arguments for argp functions.  */
- static const struct argp_option options[] =
-@@ -143,6 +144,8 @@ static const struct argp_option options[] =
-     N_("Generate little-endian output") },
-   { "big-endian", OPT_BIG_ENDIAN, NULL, 0,
-     N_("Generate big-endian output") },
-+  { "uint32-align", OPT_UINT32_ALIGN, "ALIGNMENT", 0,
-+    N_("Set the target's uint32_t alignment in bytes (default 4)") },
-   { NULL, 0, NULL, 0, NULL }
- };
- 
-@@ -232,12 +235,14 @@ main (int argc, char *argv[])
-      ctype locale.  (P1003.2 4.35.5.2)  */
-   setlocale (LC_CTYPE, "POSIX");
- 
-+#ifndef NO_SYSCONF
-   /* Look whether the system really allows locale definitions.  POSIX
-      defines error code 3 for this situation so I think it must be
-      a fatal error (see P1003.2 4.35.8).  */
-   if (sysconf (_SC_2_LOCALEDEF) < 0)
-     WITH_CUR_LOCALE (error (3, 0, _("\
- FATAL: system does not define `_POSIX2_LOCALEDEF'")));
-+#endif
- 
-   /* Process charmap file.  */
-   charmap = charmap_read (charmap_file, verbose, 1, be_quiet, 1);
-@@ -328,6 +333,9 @@ parse_opt (int key, char *arg, struct argp_state *state)
-     case OPT_BIG_ENDIAN:
-       set_big_endian (true);
-       break;
-+    case OPT_UINT32_ALIGN:
-+      uint32_align_mask = strtol (arg, NULL, 0) - 1;
-+      break;
-     case 'c':
-       force_output = 1;
-       break;
-diff --git a/locale/programs/locfile.c b/locale/programs/locfile.c
-index 32f5cd2..02967b0 100644
---- a/locale/programs/locfile.c
-+++ b/locale/programs/locfile.c
-@@ -544,6 +544,9 @@ compare_files (const char *filename1, const char *filename2, size_t size,
-    machine running localedef.  */
- bool swap_endianness_p;
- 
-+/* The target's value of __align__(uint32_t) - 1.  */
-+unsigned int uint32_align_mask = 3;
-+
- /* When called outside a start_locale_structure/end_locale_structure
-    or start_locale_prelude/end_locale_prelude block, record that the
-    next byte in FILE's obstack will be the first byte of a new element.
-@@ -621,7 +624,7 @@ add_locale_string (struct locale_file *file, const char *string)
- void
- add_locale_wstring (struct locale_file *file, const uint32_t *string)
- {
--  add_locale_uint32_array (file, string, wcslen ((const wchar_t *) string) + 1);
-+  add_locale_uint32_array (file, string, wcslen_uint32 (string) + 1);
- }
- 
- /* Record that FILE's next element is the 32-bit integer VALUE.  */
-diff --git a/locale/programs/locfile.h b/locale/programs/locfile.h
-index a3dd904..2c7763a 100644
---- a/locale/programs/locfile.h
-+++ b/locale/programs/locfile.h
-@@ -71,6 +71,8 @@ extern void write_all_categories (struct localedef_t *definitions,
- 
- extern bool swap_endianness_p;
- 
-+extern unsigned int uint32_align_mask;
-+
- /* Change the output to be big-endian if BIG_ENDIAN is true and
-    little-endian otherwise.  */
- static inline void
-@@ -89,7 +91,8 @@ maybe_swap_uint32 (uint32_t value)
- }
- 
- /* Likewise, but munge an array of N uint32_ts starting at ARRAY.  */
--static inline void
-+static void
-+__attribute__ ((unused))
- maybe_swap_uint32_array (uint32_t *array, size_t n)
- {
-   if (swap_endianness_p)
-@@ -99,7 +102,8 @@ maybe_swap_uint32_array (uint32_t *array, size_t n)
- 
- /* Like maybe_swap_uint32_array, but the array of N elements is at
-    the end of OBSTACK's current object.  */
--static inline void
-+static void
-+__attribute__ ((unused))
- maybe_swap_uint32_obstack (struct obstack *obstack, size_t n)
- {
-   maybe_swap_uint32_array ((uint32_t *) obstack_next_free (obstack) - n, n);
-@@ -276,4 +280,55 @@ extern void identification_output (struct localedef_t *locale,
- 				   const struct charmap_t *charmap,
- 				   const char *output_path);
- 
-+static size_t wcslen_uint32 (const uint32_t *str) __attribute__ ((unused));
-+static uint32_t * wmemcpy_uint32 (uint32_t *s1, const uint32_t *s2, size_t n) __attribute__ ((unused));
-+static uint32_t * wcschr_uint32 (const uint32_t *s, uint32_t ch) __attribute__ ((unused));
-+static int wcscmp_uint32 (const uint32_t *s1, const uint32_t *s2) __attribute__ ((unused));
-+static int wmemcmp_uint32 (const uint32_t *s1, const uint32_t *s2, size_t n) __attribute__ ((unused));
-+
-+static size_t
-+wcslen_uint32 (const uint32_t *str)
-+{
-+  size_t len = 0;
-+  while (str[len] != 0)
-+    len++;
-+  return len;
-+}
-+
-+static  int
-+wmemcmp_uint32 (const uint32_t *s1, const uint32_t *s2, size_t n)
-+{
-+  while (n-- != 0)
-+    {
-+      int diff = *s1++ - *s2++;
-+      if (diff != 0)
-+	return diff;
-+    }
-+  return 0;
-+}
-+
-+static int
-+wcscmp_uint32 (const uint32_t *s1, const uint32_t *s2)
-+{
-+  while (*s1 != 0 && *s1 == *s2)
-+    s1++, s2++;
-+  return *s1 - *s2;
-+}
-+
-+static uint32_t *
-+wmemcpy_uint32 (uint32_t *s1, const uint32_t *s2, size_t n)
-+{
-+  return memcpy (s1, s2, n * sizeof (uint32_t));
-+}
-+
-+static uint32_t *
-+wcschr_uint32 (const uint32_t *s, uint32_t ch)
-+{
-+  do
-+    if (*s == ch)
-+      return (uint32_t *) s;
-+  while (*s++ != 0);
-+  return 0;
-+}
-+
- #endif /* locfile.h */
-diff --git a/locale/setlocale.c b/locale/setlocale.c
-index 69b3141..1cef0be 100644
---- a/locale/setlocale.c
-+++ b/locale/setlocale.c
-@@ -64,36 +64,6 @@ static char *const _nl_current_used[] =
- #endif
- 
- 
--/* Define an array of category names (also the environment variable names).  */
--const union catnamestr_t _nl_category_names attribute_hidden =
--  {
--    {
--#define DEFINE_CATEGORY(category, category_name, items, a) \
--      category_name,
--#include "categories.def"
--#undef DEFINE_CATEGORY
--    }
--  };
--
--const uint8_t _nl_category_name_idxs[__LC_LAST] attribute_hidden =
--  {
--#define DEFINE_CATEGORY(category, category_name, items, a) \
--    [category] = offsetof (union catnamestr_t, CATNAMEMF (__LINE__)),
--#include "categories.def"
--#undef DEFINE_CATEGORY
--  };
--
--/* An array of their lengths, for convenience.  */
--const uint8_t _nl_category_name_sizes[] attribute_hidden =
--  {
--#define DEFINE_CATEGORY(category, category_name, items, a) \
--    [category] = sizeof (category_name) - 1,
--#include "categories.def"
--#undef	DEFINE_CATEGORY
--    [LC_ALL] = sizeof ("LC_ALL") - 1
--  };
--
--
- #ifdef NL_CURRENT_INDIRECT
- # define WEAK_POSTLOAD(postload) weak_extern (postload)
- #else
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0024-elf-dl-deps.c-Make-_dl_build_local_scope-breadth-fir.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0024-elf-dl-deps.c-Make-_dl_build_local_scope-breadth-fir.patch
new file mode 100644
index 0000000..0b59352
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0024-elf-dl-deps.c-Make-_dl_build_local_scope-breadth-fir.patch
@@ -0,0 +1,56 @@
+From a784742739c90eea0d4ccbbd073a067d55ca95e8 Mon Sep 17 00:00:00 2001
+From: Mark Hatle <mark.hatle@windriver.com>
+Date: Thu, 18 Aug 2016 14:07:58 -0500
+Subject: [PATCH 24/25] elf/dl-deps.c: Make _dl_build_local_scope breadth first
+
+According to the ELF specification:
+
+When resolving symbolic references, the dynamic linker examines the symbol
+tables with a breadth-first search.
+
+This function was using a depth first search.  By doing so the conflict
+resolution reported to the prelinker (when LD_TRACE_PRELINKING=1 is set)
+was incorrect.  This caused problems when there were various circular
+dependencies between libraries.  The problem usually manifested itself by
+the wrong IFUNC being executed.
+
+[BZ# 20488]
+
+Upstream-Status: Submitted [libc-alpha]
+
+Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
+---
+ elf/dl-deps.c | 14 ++++++++++----
+ 1 file changed, 10 insertions(+), 4 deletions(-)
+
+diff --git a/elf/dl-deps.c b/elf/dl-deps.c
+index 1b8bac6593..c616808f31 100644
+--- a/elf/dl-deps.c
++++ b/elf/dl-deps.c
+@@ -73,13 +73,19 @@ _dl_build_local_scope (struct link_map **list, struct link_map *map)
+ {
+   struct link_map **p = list;
+   struct link_map **q;
++  struct link_map **r;
+ 
+   *p++ = map;
+   map->l_reserved = 1;
+-  if (map->l_initfini)
+-    for (q = map->l_initfini + 1; *q; ++q)
+-      if (! (*q)->l_reserved)
+-	p += _dl_build_local_scope (p, *q);
++
++  for (r = list; r < p; ++r)
++    if ((*r)->l_initfini)
++      for (q = (*r)->l_initfini + 1; *q; ++q)
++	if (! (*q)->l_reserved)
++	  {
++	    *p++ = *q;
++	    (*q)->l_reserved = 1;
++	  }
+   return p - list;
+ }
+ 
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0025-Define-DUMMY_LOCALE_T-if-not-defined.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0025-Define-DUMMY_LOCALE_T-if-not-defined.patch
deleted file mode 100644
index 1f0f5d4..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0025-Define-DUMMY_LOCALE_T-if-not-defined.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From c2d8cdeab116caacdfedb35eeb3e743b44807bec Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 20 Apr 2016 21:11:00 -0700
-Subject: [PATCH 25/26] Define DUMMY_LOCALE_T if not defined
-
-This is a hack to fix building the locale bits on an older
-CentOs 5.X machine
-
-Upstream-Status: Inappropriate [other]
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- locale/programs/config.h | 3 +++
- 1 file changed, 3 insertions(+)
-
-diff --git a/locale/programs/config.h b/locale/programs/config.h
-index f606365..0e5f8c3 100644
---- a/locale/programs/config.h
-+++ b/locale/programs/config.h
-@@ -19,6 +19,9 @@
- #ifndef _LD_CONFIG_H
- #define _LD_CONFIG_H	1
- 
-+#ifndef DUMMY_LOCALE_T
-+#define DUMMY_LOCALE_T
-+#endif
- /* Use the internal textdomain used for libc messages.  */
- #define PACKAGE _libc_intl_domainname
- #ifndef VERSION
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0025-locale-fix-hard-coded-reference-to-gcc-E.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0025-locale-fix-hard-coded-reference-to-gcc-E.patch
new file mode 100644
index 0000000..09ad04a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0025-locale-fix-hard-coded-reference-to-gcc-E.patch
@@ -0,0 +1,38 @@
+From f3a670496c8fe6d4acf045f5b167a19cf41b044e Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?J=C3=A9r=C3=A9my=20Rosen?= <jeremy.rosen@smile.fr>
+Date: Mon, 22 Aug 2016 16:09:25 +0200
+Subject: [PATCH 25/25] locale: fix hard-coded reference to gcc -E
+
+When new versions of compilers are published, they may not be compatible with
+older versions of software. This is particularly common when software is built
+with -Werror.
+
+Autotools provides a way for a user to specify the name of his compiler using a
+set of variables ($CC $CXX $CPP etc.). Those variables are used correctly when
+compiling glibc but the script used to generate transliterations in the locale/
+subdirectory directly calls the gcc binary to get the output of the
+preprocessor instead of using the $CPP variable provided by the build
+environment.
+
+This patch replaces the hard-coded reference to the gcc binary with the proper
+environment variable, thus allowing a user to override it.
+
+Upstream-Status: Submitted [https://sourceware.org/ml/libc-alpha/2016-08/msg00746.html]
+---
+ locale/gen-translit.pl | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/locale/gen-translit.pl b/locale/gen-translit.pl
+index 30d3f2f195..e97653017c 100644
+--- a/locale/gen-translit.pl
++++ b/locale/gen-translit.pl
+@@ -1,5 +1,5 @@
+ #!/usr/bin/perl -w
+-open F, "cat C-translit.h.in | gcc -E - |" || die "Cannot preprocess input file";
++open F, 'cat C-translit.h.in | ${CPP:-gcc -E} - |' || die "Cannot preprocess input file";
+ 
+ 
+ sub cstrlen {
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0026-assert-Suppress-pedantic-warning-caused-by-statement.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0026-assert-Suppress-pedantic-warning-caused-by-statement.patch
new file mode 100644
index 0000000..b2bb96b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0026-assert-Suppress-pedantic-warning-caused-by-statement.patch
@@ -0,0 +1,90 @@
+From 037283cbc74739b72f36dfec827d120faa243406 Mon Sep 17 00:00:00 2001
+From: Florian Weimer <fweimer at redhat dot com>
+Date: Thu, 6 Jul 2017 11:50:55 +0200
+Subject: [PATCH 26/26] assert: Suppress pedantic warning caused by statement
+ expression [BZ# 21242]
+
+On 07/05/2017 10:15 PM, Zack Weinberg wrote:
+> On Wed, Jul 5, 2017 at 11:51 AM, Florian Weimer <fweimer@redhat.com> wrote:
+>> On 07/05/2017 05:46 PM, Zack Weinberg wrote:
+>>> A problem occurs to me: expressions involving VLAs _are_ evaluated
+>>> inside sizeof.
+>>
+>> The type of the sizeof argument would still be int (due to the
+>> comparison against 0), so this doesn't actually occur.
+>
+> I rechecked what C99 says about sizeof and VLAs, and you're right -
+> the operand of sizeof is only evaluated when sizeof is _directly_
+> applied to a VLA.  So this is indeed safe, but I think this wrinkle
+> should be mentioned in the comment.  Perhaps
+>
+> /* The first occurrence of EXPR is not evaluated due to the sizeof,
+>    but will trigger any pedantic warnings masked by the __extension__
+>    for the second occurrence.  The explicit comparison against zero
+>    ensures that sizeof is not directly applied to a function pointer or
+>    bit-field (which would be ill-formed) or VLA (which would be evaluated).  */
+>
+> zw
+
+What about the attached patch?
+
+Siddhesh, is this okay during the freeze?  I'd like to backport it to
+2.25 as well.
+
+Thanks,
+Florian
+
+assert: Suppress pedantic warning caused by statement expression
+
+2017-07-06  Florian Weimer  <fweimer@redhat.com>
+
+	[BZ #21242]
+	* assert/assert.h [__GNUC__ && !__STRICT_ANSI__] (assert):
+	Suppress pedantic warning resulting from statement expression.
+	(__ASSERT_FUNCTION): Add missing __extension__.
+---
+
+Upstream-Status: Submitted
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+ assert/assert.h | 12 +++++++++---
+ 1 file changed, 9 insertions(+), 3 deletions(-)
+
+diff --git a/assert/assert.h b/assert/assert.h
+index 22f019537c..6801cfeb10 100644
+--- a/assert/assert.h
++++ b/assert/assert.h
+@@ -91,13 +91,19 @@ __END_DECLS
+      ? __ASSERT_VOID_CAST (0)						\
+      : __assert_fail (#expr, __FILE__, __LINE__, __ASSERT_FUNCTION))
+ # else
++/* The first occurrence of EXPR is not evaluated due to the sizeof,
++   but will trigger any pedantic warnings masked by the __extension__
++   for the second occurrence.  The explicit comparison against zero is
++   required to support function pointers and bit fields in this
++   context, and to suppress the evaluation of variable length
++   arrays.  */
+ #  define assert(expr)							\
+-    ({									\
++  ((void) sizeof ((expr) == 0), __extension__ ({			\
+       if (expr)								\
+         ; /* empty */							\
+       else								\
+         __assert_fail (#expr, __FILE__, __LINE__, __ASSERT_FUNCTION);	\
+-    })
++    }))
+ # endif
+ 
+ # ifdef	__USE_GNU
+@@ -113,7 +119,7 @@ __END_DECLS
+    C9x has a similar variable called __func__, but prefer the GCC one since
+    it demangles C++ function names.  */
+ # if defined __cplusplus ? __GNUC_PREREQ (2, 6) : __GNUC_PREREQ (2, 4)
+-#   define __ASSERT_FUNCTION	__PRETTY_FUNCTION__
++#   define __ASSERT_FUNCTION	__extension__ __PRETTY_FUNCTION__
+ # else
+ #  if defined __STDC_VERSION__ && __STDC_VERSION__ >= 199901L
+ #   define __ASSERT_FUNCTION	__func__
+-- 
+2.13.3
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0026-elf-dl-deps.c-Make-_dl_build_local_scope-breadth-fir.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0026-elf-dl-deps.c-Make-_dl_build_local_scope-breadth-fir.patch
deleted file mode 100644
index 852f530..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0026-elf-dl-deps.c-Make-_dl_build_local_scope-breadth-fir.patch
+++ /dev/null
@@ -1,56 +0,0 @@
-From fb315c197cca61299a6f6588ea3460145c255d06 Mon Sep 17 00:00:00 2001
-From: Mark Hatle <mark.hatle@windriver.com>
-Date: Thu, 18 Aug 2016 14:07:58 -0500
-Subject: [PATCH 26/26] elf/dl-deps.c: Make _dl_build_local_scope breadth first
-
-According to the ELF specification:
-
-When resolving symbolic references, the dynamic linker examines the symbol
-tables with a breadth-first search.
-
-This function was using a depth first search.  By doing so the conflict
-resolution reported to the prelinker (when LD_TRACE_PRELINKING=1 is set)
-was incorrect.  This caused problems when their were various circular
-dependencies between libraries.  The problem usually manifested itself by
-the wrong IFUNC being executed.
-
-[BZ# 20488]
-
-Upstream-Status: Submitted [libc-alpha]
-
-Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
----
- elf/dl-deps.c | 14 ++++++++++----
- 1 file changed, 10 insertions(+), 4 deletions(-)
-
-diff --git a/elf/dl-deps.c b/elf/dl-deps.c
-index 6a82987..53be824 100644
---- a/elf/dl-deps.c
-+++ b/elf/dl-deps.c
-@@ -73,13 +73,19 @@ _dl_build_local_scope (struct link_map **list, struct link_map *map)
- {
-   struct link_map **p = list;
-   struct link_map **q;
-+  struct link_map **r;
- 
-   *p++ = map;
-   map->l_reserved = 1;
--  if (map->l_initfini)
--    for (q = map->l_initfini + 1; *q; ++q)
--      if (! (*q)->l_reserved)
--	p += _dl_build_local_scope (p, *q);
-+
-+  for (r = list; r < p; ++r)
-+    if ((*r)->l_initfini)
-+      for (q = (*r)->l_initfini + 1; *q; ++q)
-+	if (! (*q)->l_reserved)
-+	  {
-+	    *p++ = *q;
-+	    (*q)->l_reserved = 1;
-+	  }
-   return p - list;
- }
- 
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0027-glibc-reset-dl-load-write-lock-after-forking.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0027-glibc-reset-dl-load-write-lock-after-forking.patch
new file mode 100644
index 0000000..777b253
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0027-glibc-reset-dl-load-write-lock-after-forking.patch
@@ -0,0 +1,37 @@
+From a6bb73d1cfd20a73fbbe6076008376fb87879d1b Mon Sep 17 00:00:00 2001
+From: Yuanjie Huang <yuanjie.huang@windriver.com>
+Date: Thu, 18 Aug 2016 17:59:13 +0800
+Subject: [PATCH] reset dl_load_write_lock after forking
+
+The patch in this Bugzilla entry was requested by a customer:
+
+  https://www.sourceware.org/bugzilla/show_bug.cgi?id=19282
+
+The __libc_fork() code reset dl_load_lock, but it also needed to reset
+dl_load_write_lock.  The patch has not yet been integrated upstream.
+
+Upstream-Status: Pending [ Not Author See bugzilla]
+
+Signed-off-by: Damodar Sonone <damodar.sonone@kpit.com>
+Signed-off-by: Yuanjie Huang <yuanjie.huang@windriver.com>
+---
+ sysdeps/nptl/fork.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/sysdeps/nptl/fork.c b/sysdeps/nptl/fork.c
+index 2b9ae4b..3d0b8da 100644
+--- a/sysdeps/nptl/fork.c
++++ b/sysdeps/nptl/fork.c
+@@ -174,8 +174,9 @@ __libc_fork (void)
+       /* Reset locks in the I/O code.  */
+       _IO_list_resetlock ();
+ 
+-      /* Reset the lock the dynamic loader uses to protect its data.  */
++      /* Reset the locks the dynamic loader uses to protect its data.  */
+       __rtld_lock_initialize (GL(dl_load_lock));
++      __rtld_lock_initialize (GL(dl_load_write_lock));
+ 
+       /* Run the handlers registered for the child.  */
+       while (allp != NULL)
+-- 
+1.9.1
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0027-locale-fix-hard-coded-reference-to-gcc-E.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0027-locale-fix-hard-coded-reference-to-gcc-E.patch
deleted file mode 100644
index 71c0bdc..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0027-locale-fix-hard-coded-reference-to-gcc-E.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-From a2fc86cb8d0366171f100ebd033aeb9609fa40de Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?J=C3=A9r=C3=A9my=20Rosen?= <jeremy.rosen@smile.fr>
-Date: Mon, 22 Aug 2016 16:09:25 +0200
-Subject: [PATCH 27/27] locale: fix hard-coded reference to gcc -E
-
-When new version of compilers are published, they may not be compatible with
-older versions of software. This is particularly common when software is built
-with -Werror.
-
-Autotools provides a way for a user to specify the name of his compiler using a
-set of variables ($CC $CXX $CPP etc.). Those variables are used correctly when
-compiling glibc but the script used to generate transliterations in the locale/
-subdirectory directly calls the gcc binary to get the output of the
-preprocessor instead of using the $CPP variable provided by the build
-environment.
-
-This patch replaces the hard-coded reference to the gcc binary with the proper
-environment variable, thus allowing a user to override it.
-
-Upstream-Status : Submitted [https://sourceware.org/ml/libc-alpha/2016-08/msg00746.html]
----
- locale/gen-translit.pl | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/locale/gen-translit.pl b/locale/gen-translit.pl
-index 30d3f2f..e976530 100644
---- a/locale/gen-translit.pl
-+++ b/locale/gen-translit.pl
-@@ -1,5 +1,5 @@
- #!/usr/bin/perl -w
--open F, "cat C-translit.h.in | gcc -E - |" || die "Cannot preprocess input file";
-+open F, 'cat C-translit.h.in | ${CPP:-gcc -E} - |' || die "Cannot preprocess input file";
- 
- 
- sub cstrlen {
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0028-Bug-4578-add-ld.so-lock-while-fork.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0028-Bug-4578-add-ld.so-lock-while-fork.patch
new file mode 100644
index 0000000..f76237a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0028-Bug-4578-add-ld.so-lock-while-fork.patch
@@ -0,0 +1,57 @@
+The patch in this Bugzilla entry was requested by a customer:
+  https://sourceware.org/bugzilla/show_bug.cgi?id=4578
+
+If a thread happens to hold dl_load_lock and have r_state set to RT_ADD or
+RT_DELETE at the time another thread calls fork(), then the child exit code
+from fork (in nptl/sysdeps/unix/sysv/linux/fork.c in our case) re-initializes
+dl_load_lock but does not restore r_state to RT_CONSISTENT. If the child
+subsequently requires ld.so functionality before calling exec(), then the
+assertion will fire.
+
+The patch acquires dl_load_lock on entry to fork() and releases it on exit
+from the parent path.  The child path is initialized as currently done.
+This is essentially pthreads_atfork, but forced to be first because the
+acquisition of dl_load_lock must happen before malloc_atfork is active
+to avoid a deadlock.
+The patch has not yet been integrated upstream.
+
+Upstream-Status: Pending [ Not Author See bugzilla]
+
+Signed-off-by: Raghunath Lolur <Raghunath.Lolur@kpit.com>
+Signed-off-by: Yuanjie Huang <yuanjie.huang@windriver.com>
+Signed-off-by: Zhixiong Chi <zhixiong.chi@windriver.com>
+
+Index: git/sysdeps/nptl/fork.c
+===================================================================
+--- git.orig/sysdeps/nptl/fork.c       2017-08-03 16:02:15.674704080 +0800
++++ git/sysdeps/nptl/fork.c    2017-08-04 18:15:02.463362015 +0800
+@@ -25,6 +25,7 @@
+ #include <tls.h>
+ #include <hp-timing.h>
+ #include <ldsodefs.h>
++#include <libc-lock.h>
+ #include <stdio-lock.h>
+ #include <atomic.h>
+ #include <nptl/pthreadP.h>
+@@ -60,6 +61,10 @@
+      but our current fork implementation is not.  */
+   bool multiple_threads = THREAD_GETMEM (THREAD_SELF, header.multiple_threads);
+ 
++  /* grab ld.so lock BEFORE switching to malloc_atfork */
++  __rtld_lock_lock_recursive (GL(dl_load_lock));
++  __rtld_lock_lock_recursive (GL(dl_load_write_lock));
++
+   /* Run all the registered preparation handlers.  In reverse order.
+      While doing this we build up a list of all the entries.  */
+   struct fork_handler *runp;
+@@ -247,6 +252,10 @@
+ 
+ 	  allp = allp->next;
+ 	}
++
++      /* unlock ld.so last, because we locked it first */
++      __rtld_lock_unlock_recursive (GL(dl_load_write_lock));
++      __rtld_lock_unlock_recursive (GL(dl_load_lock));
+     }
+ 
+   return pid;
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0028-Rework-fno-omit-frame-pointer-support-on-i386.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0028-Rework-fno-omit-frame-pointer-support-on-i386.patch
deleted file mode 100644
index 7ed2b90..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0028-Rework-fno-omit-frame-pointer-support-on-i386.patch
+++ /dev/null
@@ -1,268 +0,0 @@
-From 1ea003d4fccc4646fd1848a182405a1c7000ab18 Mon Sep 17 00:00:00 2001
-From: Adhemerval Zanella <adhemerval.zanella@linaro.org>
-Date: Sun, 8 Jan 2017 11:38:23 -0200
-Subject: [PATCH 28/28] Rework -fno-omit-frame-pointer support on i386
-
-Commit 6b1df8b27f fixed the -OS build issue on i386 (BZ#20729) by
-expliciting disabling frame pointer (-fomit-frame-pointer) on the
-faulty objects.  Although it does fix the issue, it is a subpar
-workaround that adds complexity in build process (a rule for each
-object to add the required compiler option and pontentially more
-rules for objects that call {INLINE,INTERNAL}_SYSCALL) and does not
-allow the implementations to get all the possible debug/calltrack
-information possible (used mainly in debuggers and performance
-measurement tools).
-
-This patch fixes it by adding an explicit configure check to see
-if -fno-omit-frame-pointer is set and to act accordingly (set or
-not OPTIMIZE_FOR_GCC_5).  The make rules is simplified and only
-one is required: to add libc-do-syscall on loader due mmap
-(which will be empty anyway for default build with
--fomit-frame-pointer).
-
-Checked on i386-linux-gnu with GCC 6.2.1 with CFLAGS sets as
-'-Os', '-O2 -fno-omit-frame-pointer', and '-O2 -fomit-frame-pointer'.
-For '-Os' the testsuite issues described by BZ#19463 and BZ#15105
-still applied.
-
-It fixes BZ #21029, although it is marked as duplicated of #20729
-(I reopened to track this cleanup).
-
-	[BZ #21029]
-	* config.h.in [CAN_USE_REGISTER_ASM_EBP]: New define.
-	* sysdeps/unix/sysv/linux/i386/Makefile
-	[$(subdir) = elf] (sysdep-dl-routines): Add libc-do-syscall.
-	(uses-6-syscall-arguments): Remove.
-	[$(subdir) = misc] (CFLAGS-epoll_pwait.o): Likewise.
-	[$(subdir) = misc] (CFLAGS-epoll_pwait.os): Likewise.
-	[$(subdir) = misc] (CFLAGS-mmap.o): Likewise.
-	[$(subdir) = misc] (CFLAGS-mmap.os): Likewise.
-	[$(subdir) = misc] (CFLAGS-mmap64.o): Likewise.
-	[$(subdir) = misc] (CFLAGS-mmap64.os): Likewise.
-	[$(subdir) = misc] (CFLAGS-pselect.o): Likewise.
-	[$(subdir) = misc] (cflags-pselect.o): Likewise.
-	[$(subdir) = misc] (cflags-pselect.os): Likewise.
-	[$(subdir) = misc] (cflags-rtld-mmap.os): Likewise.
-	[$(subdir) = sysvipc] (cflags-semtimedop.o): Likewise.
-	[$(subdir) = sysvipc] (cflags-semtimedop.os): Likewise.
-	[$(subdir) = io] (CFLAGS-posix_fadvise64.o): Likewise.
-	[$(subdir) = io] (CFLAGS-posix_fadvise64.os): Likewise.
-	[$(subdir) = io] (CFLAGS-posix_fallocate.o): Likewise.
-	[$(subdir) = io] (CFLAGS-posix_fallocate.os): Likewise.
-	[$(subdir) = io] (CFLAGS-posix_fallocate64.o): Likewise.
-	[$(subdir) = io] (CFLAGS-posix_fallocate64.os): Likewise.
-	[$(subdir) = io] (CFLAGS-sync_file_range.o): Likewise.
-	[$(subdir) = io] (CFLAGS-sync_file_range.os): Likewise.
-	[$(subdir) = io] (CFLAGS-fallocate.o): Likewise.
-	[$(subdir) = io] (CFLAGS-fallocate.os): Likewise.
-	[$(subdir) = io] (CFLAGS-fallocate64.o): Likewise.
-	[$(subdir) = io] (CFLAGS-fallocate64.os): Likewise.
-	[$(subdir) = nptl] (CFLAGS-pthread_rwlock_timedrdlock.o):
-	Likewise.
-	[$(subdir) = nptl] (CFLAGS-pthread_rwlock_timedrdlock.os):
-	Likewise.
-	[$(subdir) = nptl] (CFLAGS-pthread_rwlock_timedrwlock.o):
-	Likewise.
-	[$(subdir) = nptl] (CFLAGS-pthread_rwlock_timedrwlock.os):
-	Likewise.
-	[$(subdir) = nptl] (CFLAGS-sem_wait.o): Likewise.
-	[$(subdir) = nptl] (CFLAGS-sem_wait.os): Likewise.
-	[$(subdir) = nptl] (CFLAGS-sem_timedwait.o): Likewise.
-	[$(subdir) = nptl] (CFLAGS-sem_timedwait.os): Likewise.
-	* sysdeps/unix/sysv/linux/i386/configure.ac: Add check if compiler allows
-	ebp on inline assembly.
-	* sysdeps/unix/sysv/linux/i386/configure: Regenerate.
-	* sysdeps/unix/sysv/linux/i386/sysdep.h (OPTIMIZE_FOR_GCC_5):
-	Set if CAN_USE_REGISTER_ASM_EBP is set.
-	(check_consistency): Likewise.
-
-Upstream-Status: Backport
-
-  https://sourceware.org/git/?p=glibc.git;a=commitdiff;h=3b33d6ed6096c1d20d05a650b06026d673f7399a
-
-Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
----
- config.h.in                               |  4 ++++
- sysdeps/unix/sysv/linux/i386/Makefile     | 39 +------------------------------
- sysdeps/unix/sysv/linux/i386/configure    | 39 +++++++++++++++++++++++++++++++
- sysdeps/unix/sysv/linux/i386/configure.ac | 17 ++++++++++++++
- sysdeps/unix/sysv/linux/i386/sysdep.h     |  6 ++---
- 5 files changed, 64 insertions(+), 41 deletions(-)
-
-diff --git a/config.h.in b/config.h.in
-index 7bfe923..fb2cc51 100644
---- a/config.h.in
-+++ b/config.h.in
-@@ -259,4 +259,8 @@
- /* Build glibc with tunables support.  */
- #define HAVE_TUNABLES 0
- 
-+/* Some compiler options may now allow to use ebp in __asm__ (used mainly
-+   in i386 6 argument syscall issue).  */
-+#define CAN_USE_REGISTER_ASM_EBP 0
-+
- #endif
-diff --git a/sysdeps/unix/sysv/linux/i386/Makefile b/sysdeps/unix/sysv/linux/i386/Makefile
-index 9609752..6aac0df 100644
---- a/sysdeps/unix/sysv/linux/i386/Makefile
-+++ b/sysdeps/unix/sysv/linux/i386/Makefile
-@@ -1,47 +1,18 @@
- # The default ABI is 32.
- default-abi := 32
- 
--# %ebp is used to pass the 6th argument to system calls, so these
--# system calls are incompatible with a frame pointer.
--uses-6-syscall-arguments = -fomit-frame-pointer
--
- ifeq ($(subdir),misc)
- sysdep_routines += ioperm iopl vm86
--CFLAGS-epoll_pwait.o += $(uses-6-syscall-arguments)
--CFLAGS-epoll_pwait.os += $(uses-6-syscall-arguments)
--CFLAGS-mmap.o += $(uses-6-syscall-arguments)
--CFLAGS-mmap.os += $(uses-6-syscall-arguments)
--CFLAGS-mmap64.o += $(uses-6-syscall-arguments)
--CFLAGS-mmap64.os += $(uses-6-syscall-arguments)
--CFLAGS-pselect.o += $(uses-6-syscall-arguments)
--CFLAGS-pselect.os += $(uses-6-syscall-arguments)
--CFLAGS-rtld-mmap.os += $(uses-6-syscall-arguments)
--endif
--
--ifeq ($(subdir),sysvipc)
--CFLAGS-semtimedop.o += $(uses-6-syscall-arguments)
--CFLAGS-semtimedop.os += $(uses-6-syscall-arguments)
- endif
- 
- ifeq ($(subdir),elf)
-+sysdep-dl-routines += libc-do-syscall
- sysdep-others += lddlibc4
- install-bin += lddlibc4
- endif
- 
- ifeq ($(subdir),io)
- sysdep_routines += libc-do-syscall
--CFLAGS-posix_fadvise64.o += $(uses-6-syscall-arguments)
--CFLAGS-posix_fadvise64.os += $(uses-6-syscall-arguments)
--CFLAGS-posix_fallocate.o += $(uses-6-syscall-arguments)
--CFLAGS-posix_fallocate.os += $(uses-6-syscall-arguments)
--CFLAGS-posix_fallocate64.o += $(uses-6-syscall-arguments)
--CFLAGS-posix_fallocate64.os += $(uses-6-syscall-arguments)
--CFLAGS-sync_file_range.o += $(uses-6-syscall-arguments)
--CFLAGS-sync_file_range.os += $(uses-6-syscall-arguments)
--CFLAGS-fallocate.o += $(uses-6-syscall-arguments)
--CFLAGS-fallocate.os += $(uses-6-syscall-arguments)
--CFLAGS-fallocate64.o += $(uses-6-syscall-arguments)
--CFLAGS-fallocate64.os += $(uses-6-syscall-arguments)
- endif
- 
- ifeq ($(subdir),nptl)
-@@ -61,14 +32,6 @@ ifeq ($(subdir),nptl)
- # pull in __syscall_error routine
- libpthread-routines += sysdep
- libpthread-shared-only-routines += sysdep
--CFLAGS-pthread_rwlock_timedrdlock.o += $(uses-6-syscall-arguments)
--CFLAGS-pthread_rwlock_timedrdlock.os += $(uses-6-syscall-arguments)
--CFLAGS-pthread_rwlock_timedwrlock.o += $(uses-6-syscall-arguments)
--CFLAGS-pthread_rwlock_timedwrlock.os += $(uses-6-syscall-arguments)
--CFLAGS-sem_wait.o += $(uses-6-syscall-arguments)
--CFLAGS-sem_wait.os += $(uses-6-syscall-arguments)
--CFLAGS-sem_timedwait.o += $(uses-6-syscall-arguments)
--CFLAGS-sem_timedwait.os += $(uses-6-syscall-arguments)
- endif
- 
- ifeq ($(subdir),rt)
-diff --git a/sysdeps/unix/sysv/linux/i386/configure b/sysdeps/unix/sysv/linux/i386/configure
-index eb72659..ae2c356 100644
---- a/sysdeps/unix/sysv/linux/i386/configure
-+++ b/sysdeps/unix/sysv/linux/i386/configure
-@@ -3,5 +3,44 @@
- 
- arch_minimum_kernel=2.6.32
- 
-+# Check if CFLAGS allows compiler to use ebp register in inline assembly.
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if compiler flags allows ebp in inline assembly" >&5
-+$as_echo_n "checking if compiler flags allows ebp in inline assembly... " >&6; }
-+if ${libc_cv_can_use_register_asm_ebp+:} false; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+
-+cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+    void foo (int i)
-+    {
-+      register int reg asm ("ebp") = i;
-+      asm ("# %0" : : "r" (reg));
-+    }
-+int
-+main ()
-+{
-+
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  libc_cv_can_use_register_asm_ebp=yes
-+else
-+  libc_cv_can_use_register_asm_ebp=no
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $libc_cv_can_use_register_asm_ebp" >&5
-+$as_echo "$libc_cv_can_use_register_asm_ebp" >&6; }
-+if test $libc_cv_can_use_register_asm_ebp = yes; then
-+  $as_echo "#define CAN_USE_REGISTER_ASM_EBP 1" >>confdefs.h
-+
-+fi
-+
- libc_cv_gcc_unwind_find_fde=yes
- ldd_rewrite_script=sysdeps/unix/sysv/linux/ldd-rewrite.sed
-diff --git a/sysdeps/unix/sysv/linux/i386/configure.ac b/sysdeps/unix/sysv/linux/i386/configure.ac
-index 1a11da6..1cd632e 100644
---- a/sysdeps/unix/sysv/linux/i386/configure.ac
-+++ b/sysdeps/unix/sysv/linux/i386/configure.ac
-@@ -3,5 +3,22 @@ GLIBC_PROVIDES dnl See aclocal.m4 in the top level source directory.
- 
- arch_minimum_kernel=2.6.32
- 
-+# Check if CFLAGS allows compiler to use ebp register in inline assembly.
-+AC_CACHE_CHECK([if compiler flags allows ebp in inline assembly],
-+                libc_cv_can_use_register_asm_ebp, [
-+AC_COMPILE_IFELSE(
-+  [AC_LANG_PROGRAM([
-+    void foo (int i)
-+    {
-+      register int reg asm ("ebp") = i;
-+      asm ("# %0" : : "r" (reg));
-+    }])],
-+  [libc_cv_can_use_register_asm_ebp=yes],
-+  [libc_cv_can_use_register_asm_ebp=no])
-+])
-+if test $libc_cv_can_use_register_asm_ebp = yes; then
-+  AC_DEFINE(CAN_USE_REGISTER_ASM_EBP)
-+fi
-+
- libc_cv_gcc_unwind_find_fde=yes
- ldd_rewrite_script=sysdeps/unix/sysv/linux/ldd-rewrite.sed
-diff --git a/sysdeps/unix/sysv/linux/i386/sysdep.h b/sysdeps/unix/sysv/linux/i386/sysdep.h
-index baf4642..449b23e 100644
---- a/sysdeps/unix/sysv/linux/i386/sysdep.h
-+++ b/sysdeps/unix/sysv/linux/i386/sysdep.h
-@@ -44,9 +44,9 @@
- /* Since GCC 5 and above can properly spill %ebx with PIC when needed,
-    we can inline syscalls with 6 arguments if GCC 5 or above is used
-    to compile glibc.  Disable GCC 5 optimization when compiling for
--   profiling since asm ("ebp") can't be used to put the 6th argument
--   in %ebp for syscall.  */
--#if __GNUC_PREREQ (5,0) && !defined PROF
-+   profiling or when -fno-omit-frame-pointer is used since asm ("ebp")
-+   can't be used to put the 6th argument in %ebp for syscall.  */
-+#if __GNUC_PREREQ (5,0) && !defined PROF && CAN_USE_REGISTER_ASM_EBP
- # define OPTIMIZE_FOR_GCC_5
- #endif
- 
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0029-assert-Support-types-without-operator-int-BZ-21972.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0029-assert-Support-types-without-operator-int-BZ-21972.patch
new file mode 100644
index 0000000..3c7050f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/0029-assert-Support-types-without-operator-int-BZ-21972.patch
@@ -0,0 +1,194 @@
+Upstream-Status: Backport
+
+* fixes "lambda-expression in unevaluated context" compile failures such as
+  https://github.com/nlohmann/json/issues/705
+
+* fixes "no match for 'operator=='" compile failures such as
+  https://bugzilla.redhat.com/show_bug.cgi?id=1482990
+
+* the ChangeLog edit was removed from the upstream commit because it caused a conflict
+
+Signed-off-by: S. Lockwood-Childs <sjl@vctlabs.com>
+
+From b5889d25e9bf944a89fdd7bcabf3b6c6f6bb6f7c Mon Sep 17 00:00:00 2001
+From: Florian Weimer <fweimer@redhat.com>
+Date: Mon, 21 Aug 2017 13:03:29 +0200
+Subject: [PATCH] assert: Support types without operator== (int) [BZ #21972]
+
+---
+ assert/Makefile          | 11 ++++++-
+ assert/assert.h          | 16 ++++++----
+ assert/tst-assert-c++.cc | 78 ++++++++++++++++++++++++++++++++++++++++++++++++
+ assert/tst-assert-g++.cc | 19 ++++++++++++
+ 4 files changed, 128 insertions(+), 7 deletions(-)
+ create mode 100644 assert/tst-assert-c++.cc
+ create mode 100644 assert/tst-assert-g++.cc
+
+diff --git a/assert/Makefile b/assert/Makefile
+index 1c3be9b..9ec1be8 100644
+--- a/assert/Makefile
++++ b/assert/Makefile
+@@ -25,6 +25,15 @@ include ../Makeconfig
+ headers	:= assert.h
+ 
+ routines := assert assert-perr __assert
+-tests := test-assert test-assert-perr
++tests := test-assert test-assert-perr tst-assert-c++ tst-assert-g++
+ 
+ include ../Rules
++
++ifeq ($(have-cxx-thread_local),yes)
++CFLAGS-tst-assert-c++.o = -std=c++11
++LDLIBS-tst-assert-c++ = -lstdc++
++CFLAGS-tst-assert-g++.o = -std=gnu++11
++LDLIBS-tst-assert-g++ = -lstdc++
++else
++tests-unsupported += tst-assert-c++ tst-assert-g++
++endif
+diff --git a/assert/assert.h b/assert/assert.h
+index 6801cfe..640c95c 100644
+--- a/assert/assert.h
++++ b/assert/assert.h
+@@ -85,7 +85,12 @@ __END_DECLS
+ /* When possible, define assert so that it does not add extra
+    parentheses around EXPR.  Otherwise, those added parentheses would
+    suppress warnings we'd expect to be detected by gcc's -Wparentheses.  */
+-# if !defined __GNUC__ || defined __STRICT_ANSI__
++# if defined __cplusplus
++#  define assert(expr)							\
++     (static_cast <bool> (expr)						\
++      ? void (0)							\
++      : __assert_fail (#expr, __FILE__, __LINE__, __ASSERT_FUNCTION))
++# elif !defined __GNUC__ || defined __STRICT_ANSI__
+ #  define assert(expr)							\
+     ((expr)								\
+      ? __ASSERT_VOID_CAST (0)						\
+@@ -93,12 +98,11 @@ __END_DECLS
+ # else
+ /* The first occurrence of EXPR is not evaluated due to the sizeof,
+    but will trigger any pedantic warnings masked by the __extension__
+-   for the second occurrence.  The explicit comparison against zero is
+-   required to support function pointers and bit fields in this
+-   context, and to suppress the evaluation of variable length
+-   arrays.  */
++   for the second occurrence.  The ternary operator is required to
++   support function pointers and bit fields in this context, and to
++   suppress the evaluation of variable length arrays.  */
+ #  define assert(expr)							\
+-  ((void) sizeof ((expr) == 0), __extension__ ({			\
++  ((void) sizeof ((expr) ? 1 : 0), __extension__ ({			\
+       if (expr)								\
+         ; /* empty */							\
+       else								\
+diff --git a/assert/tst-assert-c++.cc b/assert/tst-assert-c++.cc
+new file mode 100644
+index 0000000..12a5e69
+--- /dev/null
++++ b/assert/tst-assert-c++.cc
+@@ -0,0 +1,78 @@
++/* Tests for interactions between C++ and assert.
++   Copyright (C) 2017 Free Software Foundation, Inc.
++   This file is part of the GNU C Library.
++
++   The GNU C Library is free software; you can redistribute it and/or
++   modify it under the terms of the GNU Lesser General Public
++   License as published by the Free Software Foundation; either
++   version 2.1 of the License, or (at your option) any later version.
++
++   The GNU C Library is distributed in the hope that it will be useful,
++   but WITHOUT ANY WARRANTY; without even the implied warranty of
++   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++   Lesser General Public License for more details.
++
++   You should have received a copy of the GNU Lesser General Public
++   License along with the GNU C Library; if not, see
++   <http://www.gnu.org/licenses/>.  */
++
++#include <assert.h>
++
++/* The C++ standard requires that if the assert argument is a constant
++   subexpression, then the assert itself is one, too.  */
++constexpr int
++check_constexpr ()
++{
++  return (assert (true), 1);
++}
++
++/* Objects of this class can be contextually converted to bool, but
++   cannot be compared to int.  */
++struct no_int
++{
++  no_int () = default;
++  no_int (const no_int &) = delete;
++
++  explicit operator bool () const
++  {
++    return true;
++  }
++
++  bool operator! () const; /* No definition.  */
++  template <class T> bool operator== (T) const; /* No definition.  */
++  template <class T> bool operator!= (T) const; /* No definition.  */
++};
++
++/* This class tests that operator== is not used by assert.  */
++struct bool_and_int
++{
++  bool_and_int () = default;
++  bool_and_int (const no_int &) = delete;
++
++  explicit operator bool () const
++  {
++    return true;
++  }
++
++  bool operator! () const; /* No definition.  */
++  template <class T> bool operator== (T) const; /* No definition.  */
++  template <class T> bool operator!= (T) const; /* No definition.  */
++};
++
++static int
++do_test ()
++{
++  {
++    no_int value;
++    assert (value);
++  }
++
++  {
++    bool_and_int value;
++    assert (value);
++  }
++
++  return 0;
++}
++
++#include <support/test-driver.c>
+diff --git a/assert/tst-assert-g++.cc b/assert/tst-assert-g++.cc
+new file mode 100644
+index 0000000..8c06402
+--- /dev/null
++++ b/assert/tst-assert-g++.cc
+@@ -0,0 +1,19 @@
++/* Tests for interactions between C++ and assert.  GNU C++11 version.
++   Copyright (C) 2017 Free Software Foundation, Inc.
++   This file is part of the GNU C Library.
++
++   The GNU C Library is free software; you can redistribute it and/or
++   modify it under the terms of the GNU Lesser General Public
++   License as published by the Free Software Foundation; either
++   version 2.1 of the License, or (at your option) any later version.
++
++   The GNU C Library is distributed in the hope that it will be useful,
++   but WITHOUT ANY WARRANTY; without even the implied warranty of
++   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++   Lesser General Public License for more details.
++
++   You should have received a copy of the GNU Lesser General Public
++   License along with the GNU C Library; if not, see
++   <http://www.gnu.org/licenses/>.  */
++
++#include <tst-assert-c++.cc>
+-- 
+1.9.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/CVE-2017-15670.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/CVE-2017-15670.patch
new file mode 100644
index 0000000..ae050a5
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/CVE-2017-15670.patch
@@ -0,0 +1,61 @@
+From a76376df7c07e577a9515c3faa5dbd50bda5da07 Mon Sep 17 00:00:00 2001
+From: Paul Eggert <eggert@cs.ucla.edu>
+Date: Fri, 20 Oct 2017 18:41:14 +0200
+Subject: [PATCH] CVE-2017-15670: glob: Fix one-byte overflow [BZ #22320]
+
+(cherry picked from commit c369d66e5426a30e4725b100d5cd28e372754f90)
+
+Upstream-Status: Backport
+CVE: CVE-2017-15670
+Affects: glibc < 2.27
+Signed-off-by: Armin Kuster <akuster@mvista.com>
+
+---
+ ChangeLog    | 6 ++++++
+ NEWS         | 5 +++++
+ posix/glob.c | 2 +-
+ 3 files changed, 12 insertions(+), 1 deletion(-)
+
+Index: git/NEWS
+===================================================================
+--- git.orig/NEWS
++++ git/NEWS
+@@ -206,6 +206,11 @@ Security related changes:
+ * A use-after-free vulnerability in clntudp_call in the Sun RPC system has been
+   fixed (CVE-2017-12133).
+ 
++  CVE-2017-15670: The glob function, when invoked with GLOB_TILDE,
++  suffered from a one-byte overflow during ~ operator processing (either
++  on the stack or the heap, depending on the length of the user name).
++  Reported by Tim Rühsen.
++
+ The following bugs are resolved with this release:
+ 
+   [984] network: Respond to changed resolv.conf in gethostbyname
+Index: git/posix/glob.c
+===================================================================
+--- git.orig/posix/glob.c
++++ git/posix/glob.c
+@@ -843,7 +843,7 @@ glob (const char *pattern, int flags, in
+ 		  *p = '\0';
+ 		}
+ 	      else
+-		*((char *) mempcpy (newp, dirname + 1, end_name - dirname))
++		*((char *) mempcpy (newp, dirname + 1, end_name - dirname - 1))
+ 		  = '\0';
+ 	      user_name = newp;
+ 	    }
+Index: git/ChangeLog
+===================================================================
+--- git.orig/ChangeLog
++++ git/ChangeLog
+@@ -1,3 +1,9 @@
++2017-10-20  Paul Eggert <eggert@cs.ucla.edu>
++
++       [BZ #22320]
++       CVE-2017-15670
++       * posix/glob.c (__glob): Fix one-byte overflow.
++
+ 2017-08-02  Siddhesh Poyarekar  <siddhesh@sourceware.org>
+ 
+ 	* version.h (RELEASE): Set to "stable"
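
As an aside, not part of this commit: a small standalone C sketch of the length
arithmetic behind the one-byte-overflow fix above. For "~user/...", the user
name spans dirname+1 .. end_name-1, so its length is (end_name - dirname) - 1;
copying one byte more writes the terminating NUL past the space reserved for
the name. The strings and buffer size here are made up for the example; mempcpy
is the GNU extension the real code uses.

    #define _GNU_SOURCE            /* for mempcpy */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *dirname = "~tim/docs";
        const char *end_name = strchr(dirname, '/');          /* points at '/' */
        size_t name_len = (size_t) (end_name - dirname) - 1;  /* 3, for "tim" */

        char user_name[16];
        *((char *) mempcpy(user_name, dirname + 1, name_len)) = '\0';
        printf("user_name = \"%s\" (%zu bytes + NUL)\n", user_name, name_len);
        return 0;
    }
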
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/CVE-2017-15671.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/CVE-2017-15671.patch
new file mode 100644
index 0000000..3569282
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/CVE-2017-15671.patch
@@ -0,0 +1,66 @@
+From f1cf98b583787cfb6278baea46e286a0ee7567fd Mon Sep 17 00:00:00 2001
+From: Paul Eggert <eggert@cs.ucla.edu>
+Date: Sun, 22 Oct 2017 10:00:57 +0200
+Subject: [PATCH] glob: Fix buffer overflow during GLOB_TILDE unescaping [BZ
+ #22332]
+
+(cherry picked from commit a159b53fa059947cc2548e3b0d5bdcf7b9630ba8)
+
+Upstream-Status: Backport
+CVE: CVE-2017-15671
+Signed-off-by: Armin Kuster <akuster@mvista.com>
+
+---
+ ChangeLog    | 6 ++++++
+ NEWS         | 4 ++++
+ posix/glob.c | 4 ++--
+ 3 files changed, 12 insertions(+), 2 deletions(-)
+
+Index: git/NEWS
+===================================================================
+--- git.orig/NEWS
++++ git/NEWS
+@@ -211,6 +211,10 @@ Security related changes:
+   on the stack or the heap, depending on the length of the user name).
+   Reported by Tim Rühsen.
+ 
++  The glob function, when invoked with GLOB_TILDE and without
++  GLOB_NOESCAPE, could write past the end of a buffer while
++  unescaping user names.  Reported by Tim Rühsen.
++
+ The following bugs are resolved with this release:
+ 
+   [984] network: Respond to changed resolv.conf in gethostbyname
+Index: git/posix/glob.c
+===================================================================
+--- git.orig/posix/glob.c
++++ git/posix/glob.c
+@@ -823,11 +823,11 @@ glob (const char *pattern, int flags, in
+ 		  char *p = mempcpy (newp, dirname + 1,
+ 				     unescape - dirname - 1);
+ 		  char *q = unescape;
+-		  while (*q != '\0')
++		  while (q != end_name)
+ 		    {
+ 		      if (*q == '\\')
+ 			{
+-			  if (q[1] == '\0')
++			  if (q + 1 == end_name)
+ 			    {
+ 			      /* "~fo\\o\\" unescape to user_name "foo\\",
+ 				 but "~fo\\o\\/" unescape to user_name
+Index: git/ChangeLog
+===================================================================
+--- git.orig/ChangeLog
++++ git/ChangeLog
+@@ -1,5 +1,10 @@
++
+ 2017-10-20  Paul Eggert <eggert@cs.ucla.edu>
+ 
++       [BZ #22332]
++       * posix/glob.c (__glob): Fix buffer overflow during GLOB_TILDE
++       unescaping.
++
+        [BZ #22320]
+        CVE-2017-15670
+        * posix/glob.c (__glob): Fix one-byte overflow.
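
Again purely illustrative, not part of this commit: a standalone C sketch of the
bounded unescaping the fix above switches to, walking only up to end_name (the
'/' that ends the user name) instead of scanning for a terminating '\0'. The
pattern string and output buffer are assumptions made for the example.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *dirname = "~fo\\o/bar*";       /* the pattern ~fo\o/bar* */
        const char *end_name = strchr(dirname, '/');
        char user_name[32], *d = user_name;

        for (const char *q = dirname + 1; q != end_name; q++) {
            if (*q == '\\' && q + 1 != end_name)
                q++;                   /* drop the backslash, keep next char */
            *d++ = *q;
        }
        *d = '\0';
        printf("user_name = \"%s\"\n", user_name);  /* prints "foo" */
        return 0;
    }
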
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/CVE-2017-16997.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/CVE-2017-16997.patch
new file mode 100644
index 0000000..38731e4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/CVE-2017-16997.patch
@@ -0,0 +1,150 @@
+From 4ebd0c4191c6073cc8a7c5fdcf1d182c4719bcbb Mon Sep 17 00:00:00 2001
+From: Aurelien Jarno <aurelien@aurel32.net>
+Date: Sat, 30 Dec 2017 10:54:23 +0100
+Subject: [PATCH] elf: Check for empty tokens before dynamic string token
+ expansion [BZ #22625]
+
+The fillin_rpath function in elf/dl-load.c loops over the RPATH or
+RUNPATH tokens and interprets empty tokens as the current directory
+("./"). In practice the check for an empty token is done *after* the
+dynamic string token expansion. The expansion process can return an
+empty string for the $ORIGIN token if __libc_enable_secure is set
+or if the path of the binary cannot be determined (/proc not mounted).
+
+Fix that by moving the check for empty tokens before the dynamic string
+token expansion. In addition, check for a NULL pointer or an empty string
+returned by expand_dynamic_string_token.
+
+The above changes highlighted a bug in decompose_rpath: an empty array
+is represented by the first element being NULL at the fillin_rpath
+level, but by a -1 pointer in decompose_rpath and other functions.
+
+Changelog:
+	[BZ #22625]
+	* elf/dl-load.c (fillin_rpath): Check for empty tokens before dynamic
+	string token expansion. Check for NULL pointer or empty string possibly
+	returned by expand_dynamic_string_token.
+	(decompose_rpath): Check for empty path after dynamic string
+	token expansion.
+(cherry picked from commit 3e3c904daef69b8bf7d5cc07f793c9f07c3553ef)
+
+Upstream-Status: Backport
+CVE: CVE-2017-16997
+Signed-off-by: Armin Kuster <akuster@mvista.com>
+
+---
+ ChangeLog     | 10 ++++++++++
+ NEWS          |  4 ++++
+ elf/dl-load.c | 49 +++++++++++++++++++++++++++++++++----------------
+ 3 files changed, 47 insertions(+), 16 deletions(-)
+
+Index: git/NEWS
+===================================================================
+--- git.orig/NEWS
++++ git/NEWS
+@@ -215,6 +215,10 @@ Security related changes:
+   GLOB_NOESCAPE, could write past the end of a buffer while
+   unescaping user names.  Reported by Tim Rühsen.
+ 
++  CVE-2017-16997: Incorrect handling of RPATH or RUNPATH containing $ORIGIN
++  for AT_SECURE or SUID binaries could be used to load libraries from the
++  current directory.
++
+ The following bugs are resolved with this release:
+ 
+   [984] network: Respond to changed resolv.conf in gethostbyname
+Index: git/elf/dl-load.c
+===================================================================
+--- git.orig/elf/dl-load.c
++++ git/elf/dl-load.c
+@@ -433,32 +433,41 @@ fillin_rpath (char *rpath, struct r_sear
+ {
+   char *cp;
+   size_t nelems = 0;
+-  char *to_free;
+ 
+   while ((cp = __strsep (&rpath, sep)) != NULL)
+     {
+       struct r_search_path_elem *dirp;
++      char *to_free = NULL;
++      size_t len = 0;
+ 
+-      to_free = cp = expand_dynamic_string_token (l, cp, 1);
++      /* `strsep' can pass an empty string.  */
++      if (*cp != '\0')
++	{
++	  to_free = cp = expand_dynamic_string_token (l, cp, 1);
+ 
+-      size_t len = strlen (cp);
++	  /* expand_dynamic_string_token can return NULL in case of empty
++	     path or memory allocation failure.  */
++	  if (cp == NULL)
++	    continue;
++
++	  /* Compute the length after dynamic string token expansion and
++	     ignore empty paths.  */
++	  len = strlen (cp);
++	  if (len == 0)
++	    {
++	      free (to_free);
++	      continue;
++	    }
+ 
+-      /* `strsep' can pass an empty string.  This has to be
+-	 interpreted as `use the current directory'. */
+-      if (len == 0)
+-	{
+-	  static const char curwd[] = "./";
+-	  cp = (char *) curwd;
++	  /* Remove trailing slashes (except for "/").  */
++	  while (len > 1 && cp[len - 1] == '/')
++	    --len;
++
++	  /* Now add one if there is none so far.  */
++	  if (len > 0 && cp[len - 1] != '/')
++	    cp[len++] = '/';
+ 	}
+ 
+-      /* Remove trailing slashes (except for "/").  */
+-      while (len > 1 && cp[len - 1] == '/')
+-	--len;
+-
+-      /* Now add one if there is none so far.  */
+-      if (len > 0 && cp[len - 1] != '/')
+-	cp[len++] = '/';
+-
+       /* Make sure we don't use untrusted directories if we run SUID.  */
+       if (__glibc_unlikely (check_trusted) && !is_trusted_path (cp, len))
+ 	{
+@@ -621,6 +630,14 @@ decompose_rpath (struct r_search_path_st
+      necessary.  */
+   free (copy);
+ 
++  /* There is no path after expansion.  */
++  if (result[0] == NULL)
++    {
++      free (result);
++      sps->dirs = (struct r_search_path_elem **) -1;
++      return false;
++    }
++
+   sps->dirs = result;
+   /* The caller will change this value if we haven't used a real malloc.  */
+   sps->malloced = 1;
+Index: git/ChangeLog
+===================================================================
+--- git.orig/ChangeLog
++++ git/ChangeLog
+@@ -1,3 +1,12 @@
++2017-12-30  Aurelien Jarno  <aurelien@aurel32.net>
++           Dmitry V. Levin  <ldv@altlinux.org>
++
++       [BZ #22625]
++       * elf/dl-load.c (fillin_rpath): Check for empty tokens before dynamic
++       string token expansion. Check for NULL pointer or empty string possibly
++       returned by expand_dynamic_string_token.
++       (decompose_rpath): Check for empty path after dynamic string
++       token expansion.
+ 
+ 2017-10-20  Paul Eggert <eggert@cs.ucla.edu>
+ 
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/CVE-2017-17426.patch b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/CVE-2017-17426.patch
new file mode 100644
index 0000000..c7d1cb8
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc/CVE-2017-17426.patch
@@ -0,0 +1,80 @@
+From df8c219cb987cfe85c550efa693a1383a11e38aa Mon Sep 17 00:00:00 2001
+From: Arjun Shankar <arjun@redhat.com>
+Date: Thu, 30 Nov 2017 13:31:45 +0100
+Subject: [PATCH] Fix integer overflow in malloc when tcache is enabled [BZ
+ #22375]
+
+When the per-thread cache is enabled, __libc_malloc uses request2size (which
+does not perform an overflow check) to calculate the chunk size from the
+requested allocation size. This leads to an integer overflow causing malloc
+to incorrectly return the last successfully allocated block when called with
+a very large size argument (close to SIZE_MAX).
+
+This commit uses checked_request2size instead, removing the overflow.
+
+(cherry picked from commit 34697694e8a93b325b18f25f7dcded55d6baeaf6)
+
+Upstream-Status: Backport
+CVE: CVE-2017-17426
+Signed-off-by: Armin Kuster <akuster@mvista.com>
+
+---
+ ChangeLog       | 7 +++++++
+ NEWS            | 6 ++++++
+ malloc/malloc.c | 3 ++-
+ 3 files changed, 15 insertions(+), 1 deletion(-)
+
+Index: git/NEWS
+===================================================================
+--- git.orig/NEWS
++++ git/NEWS
+@@ -4,6 +4,8 @@ See the end for copying conditions.
+ 
+ Please send GNU C library bug reports via <http://sourceware.org/bugzilla/>
+ using `glibc' in the "product" field.
++
++[22375] malloc returns pointer from tcache instead of NULL (CVE-2017-17426)
+ 
+ Version 2.26
+ 
+@@ -215,6 +217,11 @@ Security related changes:
+   for AT_SECURE or SUID binaries could be used to load libraries from the
+   current directory.
+ 
++  CVE-2017-17426: The malloc function, when called with an object size near
++  the value SIZE_MAX, would return a pointer to a buffer which is too small,
++  instead of NULL.  This was a regression introduced with the new malloc
++  thread cache in glibc 2.26.  Reported by Iain Buclaw.
++
+ The following bugs are resolved with this release:
+ 
+   [984] network: Respond to changed resolv.conf in gethostbyname
+Index: git/malloc/malloc.c
+===================================================================
+--- git.orig/malloc/malloc.c
++++ git/malloc/malloc.c
+@@ -3050,7 +3050,8 @@ __libc_malloc (size_t bytes)
+     return (*hook)(bytes, RETURN_ADDRESS (0));
+ #if USE_TCACHE
+   /* int_free also calls request2size, be careful to not pad twice.  */
+-  size_t tbytes = request2size (bytes);
++  size_t tbytes;
++  checked_request2size (bytes, tbytes);
+   size_t tc_idx = csize2tidx (tbytes);
+ 
+   MAYBE_INIT_TCACHE ();
+Index: git/ChangeLog
+===================================================================
+--- git.orig/ChangeLog
++++ git/ChangeLog
+@@ -1,3 +1,10 @@
++2017-11-30  Arjun Shankar  <arjun@redhat.com>
++
++       [BZ #22375]
++       CVE-2017-17426
++       * malloc/malloc.c (__libc_malloc): Use checked_request2size
++       instead of request2size.
++
+ 2017-12-30  Aurelien Jarno  <aurelien@aurel32.net>
+            Dmitry V. Levin  <ldv@altlinux.org>
+ 
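
One more illustrative aside, not part of this commit: a standalone C sketch of
why the padding step needs an overflow check. checked_pad() only mirrors the
idea of checked_request2size, not its implementation, and the pad value is an
assumption for the example.

    #include <stdint.h>
    #include <stdio.h>

    static int checked_pad(size_t request, size_t pad, size_t *out)
    {
        if (request > SIZE_MAX - pad)
            return 0;                  /* would wrap: caller must fail (NULL) */
        *out = request + pad;
        return 1;
    }

    int main(void)
    {
        size_t request = SIZE_MAX - 8; /* absurd request close to SIZE_MAX */
        size_t pad = 16;               /* header/alignment overhead */

        size_t unchecked = request + pad;  /* unsigned wraparound: becomes 7 */
        printf("unchecked padded size: %zu\n", unchecked);

        size_t checked;
        if (!checked_pad(request, pad, &checked))
            printf("checked version rejects the request, so malloc returns NULL\n");
        return 0;
    }
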
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc_2.25.bb b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc_2.25.bb
deleted file mode 100644
index 0f1ec0c..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc_2.25.bb
+++ /dev/null
@@ -1,141 +0,0 @@
-require glibc.inc
-
-LIC_FILES_CHKSUM = "file://LICENSES;md5=e9a558e243b36d3209f380deb394b213 \
-      file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-      file://posix/rxspencer/COPYRIGHT;md5=dc5485bb394a13b2332ec1c785f5d83a \
-      file://COPYING.LIB;md5=4fbd65380cdd255951079008b364516c"
-
-DEPENDS += "gperf-native"
-
-SRCREV ?= "db0242e3023436757bbc7c488a779e6e3343db04"
-
-SRCBRANCH ?= "release/${PV}/master"
-
-GLIBC_GIT_URI ?= "git://sourceware.org/git/glibc.git"
-UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>\d+\.\d+(\.\d+)*)"
-
-SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \
-           file://etc/ld.so.conf \
-           file://generate-supported.mk \
-           \
-           ${NATIVESDKFIXES} \
-           file://0005-fsl-e500-e5500-e6500-603e-fsqrt-implementation.patch \
-           file://0006-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch \
-           file://0007-ppc-sqrt-Fix-undefined-reference-to-__sqrt_finite.patch \
-           file://0008-__ieee754_sqrt-f-are-now-inline-functions-and-call-o.patch \
-           file://0009-Quote-from-bug-1443-which-explains-what-the-patch-do.patch \
-           file://0010-eglibc-run-libm-err-tab.pl-with-specific-dirs-in-S.patch \
-           file://0011-__ieee754_sqrt-f-are-now-inline-functions-and-call-o.patch \
-           file://0012-Make-ld-version-output-matching-grok-gold-s-output.patch \
-           file://0013-sysdeps-gnu-configure.ac-handle-correctly-libc_cv_ro.patch \
-           file://0014-Add-unused-attribute.patch \
-           file://0015-yes-within-the-path-sets-wrong-config-variables.patch \
-           file://0016-timezone-re-written-tzselect-as-posix-sh.patch \
-           file://0017-Remove-bash-dependency-for-nscd-init-script.patch \
-           file://0018-eglibc-Cross-building-and-testing-instructions.patch \
-           file://0019-eglibc-Help-bootstrap-cross-toolchain.patch \
-           file://0020-eglibc-cherry-picked-from.patch \
-           file://0021-eglibc-Clear-cache-lines-on-ppc8xx.patch \
-           file://0022-eglibc-Resolve-__fpscr_values-on-SH4.patch \
-           file://0023-eglibc-Install-PIC-archives.patch \
-           file://0024-eglibc-Forward-port-cross-locale-generation-support.patch \
-           file://0025-Define-DUMMY_LOCALE_T-if-not-defined.patch \
-           file://0026-elf-dl-deps.c-Make-_dl_build_local_scope-breadth-fir.patch \
-           file://0027-locale-fix-hard-coded-reference-to-gcc-E.patch \
-           file://0028-Rework-fno-omit-frame-pointer-support-on-i386.patch \
-"
-
-NATIVESDKFIXES ?= ""
-NATIVESDKFIXES_class-nativesdk = "\
-           file://0001-nativesdk-glibc-Look-for-host-system-ld.so.cache-as-.patch \
-           file://0002-nativesdk-glibc-Fix-buffer-overrun-with-a-relocated-.patch \
-           file://0003-nativesdk-glibc-Raise-the-size-of-arrays-containing-.patch \
-           file://0004-nativesdk-glibc-Allow-64-bit-atomics-for-x86.patch \
-"
-
-S = "${WORKDIR}/git"
-B = "${WORKDIR}/build-${TARGET_SYS}"
-
-PACKAGES_DYNAMIC = ""
-
-# the -isystem in bitbake.conf screws up glibc do_stage
-BUILD_CPPFLAGS = "-I${STAGING_INCDIR_NATIVE}"
-TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir}"
-
-GLIBC_BROKEN_LOCALES = ""
-#
-# We will skip parsing glibc when target system C library selection is not glibc
-# this helps in easing out parsing for non-glibc system libraries
-#
-COMPATIBLE_HOST_libc-musl_class-target = "null"
-COMPATIBLE_HOST_libc-uclibc_class-target = "null"
-
-EXTRA_OECONF = "--enable-kernel=${OLDEST_KERNEL} \
-                --without-cvs --disable-profile \
-                --disable-debug --without-gd \
-                --enable-clocale=gnu \
-                --enable-add-ons \
-                --with-headers=${STAGING_INCDIR} \
-                --without-selinux \
-                --enable-obsolete-rpc \
-                ${GLIBC_EXTRA_OECONF}"
-
-EXTRA_OECONF += "${@get_libc_fpu_setting(bb, d)}"
-EXTRA_OECONF += "${@bb.utils.contains('DISTRO_FEATURES', 'libc-inet-anl', '--enable-nscd', '--disable-nscd', d)}"
-
-
-do_patch_append() {
-    bb.build.exec_func('do_fix_readlib_c', d)
-}
-
-do_fix_readlib_c () {
-	sed -i -e 's#OECORE_KNOWN_INTERPRETER_NAMES#${EGLIBC_KNOWN_INTERPRETER_NAMES}#' ${S}/elf/readlib.c
-}
-
-do_configure () {
-# override this function to avoid the autoconf/automake/aclocal/autoheader
-# calls for now
-# don't pass CPPFLAGS into configure, since it upsets the kernel-headers
-# version check and doesn't really help with anything
-        (cd ${S} && gnu-configize) || die "failure in running gnu-configize"
-        find ${S} -name "configure" | xargs touch
-        CPPFLAGS="" oe_runconf
-}
-
-rpcsvc = "bootparam_prot.x nlm_prot.x rstat.x \
-	  yppasswd.x klm_prot.x rex.x sm_inter.x mount.x \
-	  rusers.x spray.x nfs_prot.x rquota.x key_prot.x"
-
-do_compile () {
-	# -Wl,-rpath-link <staging>/lib in LDFLAGS can cause breakage if another glibc is in staging
-	unset LDFLAGS
-	base_do_compile
-	(
-		cd ${S}/sunrpc/rpcsvc
-		for r in ${rpcsvc}; do
-			h=`echo $r|sed -e's,\.x$,.h,'`
-			rm -f $h
-			${B}/sunrpc/cross-rpcgen -h $r -o $h || bbwarn "${PN}: unable to generate header for $r"
-		done
-	)
-	echo "Adjust ldd script"
-	if [ -n "${RTLDLIST}" ]
-	then
-		prevrtld=`cat ${B}/elf/ldd | grep "^RTLDLIST=" | sed 's#^RTLDLIST="\?\([^"]*\)"\?$#\1#'`
-		if [ "${prevrtld}" != "${RTLDLIST}" ]
-		then
-			sed -i ${B}/elf/ldd -e "s#^RTLDLIST=.*\$#RTLDLIST=\"${prevrtld} ${RTLDLIST}\"#"
-		fi
-	fi
-
-}
-
-# Use the host locale archive when built for nativesdk so that we don't need to
-# ship a complete (100MB) locale set.
-do_compile_prepend_class-nativesdk() {
-    echo "complocaledir=/usr/lib/locale" >> ${S}/configparms
-}
-
-require glibc-package.inc
-
-BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-core/glibc/glibc_2.26.bb b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc_2.26.bb
new file mode 100644
index 0000000..8c0eb98
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/glibc/glibc_2.26.bb
@@ -0,0 +1,154 @@
+require glibc.inc
+
+LIC_FILES_CHKSUM = "file://LICENSES;md5=e9a558e243b36d3209f380deb394b213 \
+      file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+      file://posix/rxspencer/COPYRIGHT;md5=dc5485bb394a13b2332ec1c785f5d83a \
+      file://COPYING.LIB;md5=4fbd65380cdd255951079008b364516c"
+
+DEPENDS += "gperf-native bison-native"
+
+SRCREV ?= "1c9a5c270d8b66f30dcfaf1cb2d6cf39d3e18369"
+
+SRCBRANCH ?= "release/${PV}/master"
+
+GLIBC_GIT_URI ?= "git://sourceware.org/git/glibc.git"
+UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>\d+\.\d+(\.\d+)*)"
+
+SRC_URI = "${GLIBC_GIT_URI};branch=${SRCBRANCH};name=glibc \
+           file://etc/ld.so.conf \
+           file://generate-supported.mk \
+           \
+           ${NATIVESDKFIXES} \
+           file://0005-fsl-e500-e5500-e6500-603e-fsqrt-implementation.patch \
+           file://0006-readlib-Add-OECORE_KNOWN_INTERPRETER_NAMES-to-known-.patch \
+           file://0007-ppc-sqrt-Fix-undefined-reference-to-__sqrt_finite.patch \
+           file://0008-__ieee754_sqrt-f-are-now-inline-functions-and-call-o.patch \
+           file://0009-Quote-from-bug-1443-which-explains-what-the-patch-do.patch \
+           file://0010-eglibc-run-libm-err-tab.pl-with-specific-dirs-in-S.patch \
+           file://0011-__ieee754_sqrt-f-are-now-inline-functions-and-call-o.patch \
+           file://0012-sysdeps-gnu-configure.ac-handle-correctly-libc_cv_ro.patch \
+           file://0013-Add-unused-attribute.patch \
+           file://0014-yes-within-the-path-sets-wrong-config-variables.patch \
+           file://0015-timezone-re-written-tzselect-as-posix-sh.patch \
+           file://0016-Remove-bash-dependency-for-nscd-init-script.patch \
+           file://0017-eglibc-Cross-building-and-testing-instructions.patch \
+           file://0018-eglibc-Help-bootstrap-cross-toolchain.patch \
+           file://0019-eglibc-Clear-cache-lines-on-ppc8xx.patch \
+           file://0020-eglibc-Resolve-__fpscr_values-on-SH4.patch \
+           file://0021-eglibc-Install-PIC-archives.patch \
+           file://0022-eglibc-Forward-port-cross-locale-generation-support.patch \
+           file://0023-Define-DUMMY_LOCALE_T-if-not-defined.patch \
+           file://0024-elf-dl-deps.c-Make-_dl_build_local_scope-breadth-fir.patch \
+           file://0025-locale-fix-hard-coded-reference-to-gcc-E.patch \
+           file://0026-assert-Suppress-pedantic-warning-caused-by-statement.patch \
+           file://0027-glibc-reset-dl-load-write-lock-after-forking.patch \
+           file://0028-Bug-4578-add-ld.so-lock-while-fork.patch \
+           file://CVE-2017-15670.patch \
+           file://CVE-2017-15671.patch \
+           file://0029-assert-Support-types-without-operator-int-BZ-21972.patch \
+           file://CVE-2017-16997.patch \
+           file://CVE-2017-17426.patch \
+"
+
+NATIVESDKFIXES ?= ""
+NATIVESDKFIXES_class-nativesdk = "\
+           file://0001-nativesdk-glibc-Look-for-host-system-ld.so.cache-as-.patch \
+           file://0002-nativesdk-glibc-Fix-buffer-overrun-with-a-relocated-.patch \
+           file://0003-nativesdk-glibc-Raise-the-size-of-arrays-containing-.patch \
+           file://0004-nativesdk-glibc-Allow-64-bit-atomics-for-x86.patch \
+"
+
+S = "${WORKDIR}/git"
+B = "${WORKDIR}/build-${TARGET_SYS}"
+
+PACKAGES_DYNAMIC = ""
+
+# the -isystem in bitbake.conf screws up glibc do_stage
+BUILD_CPPFLAGS = "-I${STAGING_INCDIR_NATIVE}"
+TARGET_CPPFLAGS = "-I${STAGING_DIR_TARGET}${includedir}"
+
+GLIBC_BROKEN_LOCALES = ""
+#
+# We will skip parsing glibc when target system C library selection is not glibc
+# this helps in easing out parsing for non-glibc system libraries
+#
+COMPATIBLE_HOST_libc-musl_class-target = "null"
+
+EXTRA_OECONF = "--enable-kernel=${OLDEST_KERNEL} \
+                --without-cvs --disable-profile \
+                --disable-debug --without-gd \
+                --enable-clocale=gnu \
+                --enable-add-ons=libidn \
+                --with-headers=${STAGING_INCDIR} \
+                --without-selinux \
+                --enable-obsolete-rpc \
+                --enable-obsolete-nsl \
+                --enable-tunables \
+                --enable-bind-now \
+                --enable-stack-protector=strong \
+                --enable-stackguard-randomization \
+                ${GLIBC_EXTRA_OECONF}"
+
+EXTRA_OECONF += "${@get_libc_fpu_setting(bb, d)}"
+EXTRA_OECONF += "${@bb.utils.contains('DISTRO_FEATURES', 'libc-inet-anl', '--enable-nscd', '--disable-nscd', d)}"
+
+
+do_patch_append() {
+    bb.build.exec_func('do_fix_readlib_c', d)
+}
+
+do_fix_readlib_c () {
+	sed -i -e 's#OECORE_KNOWN_INTERPRETER_NAMES#${EGLIBC_KNOWN_INTERPRETER_NAMES}#' ${S}/elf/readlib.c
+}
+
+do_configure () {
+# override this function to avoid the autoconf/automake/aclocal/autoheader
+# calls for now
+# don't pass CPPFLAGS into configure, since it upsets the kernel-headers
+# version check and doesn't really help with anything
+        (cd ${S} && gnu-configize) || die "failure in running gnu-configize"
+        find ${S} -name "configure" | xargs touch
+        # "plural.c" may or may not get regenerated from "plural.y" so we
+        # touch "plural.y" to make sure it does. (This should not be needed
+        # for glibc version 2.26+)
+        find ${S}/intl -name "plural.y" | xargs touch
+        CPPFLAGS="" oe_runconf
+}
+
+rpcsvc = "bootparam_prot.x nlm_prot.x rstat.x \
+	  yppasswd.x klm_prot.x rex.x sm_inter.x mount.x \
+	  rusers.x spray.x nfs_prot.x rquota.x key_prot.x"
+
+do_compile () {
+	# -Wl,-rpath-link <staging>/lib in LDFLAGS can cause breakage if another glibc is in staging
+	unset LDFLAGS
+	base_do_compile
+	(
+		cd ${S}/sunrpc/rpcsvc
+		for r in ${rpcsvc}; do
+			h=`echo $r|sed -e's,\.x$,.h,'`
+			rm -f $h
+			${B}/sunrpc/cross-rpcgen -h $r -o $h || bbwarn "${PN}: unable to generate header for $r"
+		done
+	)
+	echo "Adjust ldd script"
+	if [ -n "${RTLDLIST}" ]
+	then
+		prevrtld=`cat ${B}/elf/ldd | grep "^RTLDLIST=" | sed 's#^RTLDLIST="\?\([^"]*\)"\?$#\1#'`
+		if [ "${prevrtld}" != "${RTLDLIST}" ]
+		then
+			sed -i ${B}/elf/ldd -e "s#^RTLDLIST=.*\$#RTLDLIST=\"${prevrtld} ${RTLDLIST}\"#"
+		fi
+	fi
+
+}
+
+# Use the host locale archive when built for nativesdk so that we don't need to
+# ship a complete (100MB) locale set.
+do_compile_prepend_class-nativesdk() {
+    echo "complocaledir=/usr/lib/locale" >> ${S}/configparms
+}
+
+require glibc-package.inc
+
+BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-core/ifupdown/files/defn2-c-man-don-t-rely-on-dpkg-architecture-to-set-a.patch b/import-layers/yocto-poky/meta/recipes-core/ifupdown/files/defn2-c-man-don-t-rely-on-dpkg-architecture-to-set-a.patch
index 8c4d953..a24b8cd 100644
--- a/import-layers/yocto-poky/meta/recipes-core/ifupdown/files/defn2-c-man-don-t-rely-on-dpkg-architecture-to-set-a.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/ifupdown/files/defn2-c-man-don-t-rely-on-dpkg-architecture-to-set-a.patch
@@ -12,6 +12,7 @@
 like the loopback device not being configured/enabled.
 
 Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
+Upstream-Status: Pending
 ---
  defn2c.pl   | 6 +++---
  defn2man.pl | 6 +++---
diff --git a/import-layers/yocto-poky/meta/recipes-core/images/build-appliance-image_15.0.0.bb b/import-layers/yocto-poky/meta/recipes-core/images/build-appliance-image_15.0.0.bb
index 045781c..bd441ae 100644
--- a/import-layers/yocto-poky/meta/recipes-core/images/build-appliance-image_15.0.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/images/build-appliance-image_15.0.0.bb
@@ -3,8 +3,7 @@
 HOMEPAGE = "http://www.yoctoproject.org/documentation/build-appliance"
 
 LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://${COREBASE}/LICENSE;md5=4d92cd373abda3937c2bc47fbc49d690 \
-                    file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
+LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
 
 IMAGE_INSTALL = "packagegroup-core-boot packagegroup-core-ssh-openssh packagegroup-self-hosted \
                  kernel-dev kernel-devsrc connman connman-plugin-ethernet dhcp-client \
@@ -19,12 +18,12 @@
 APPEND += "rootfstype=ext4 quiet"
 
 DEPENDS = "zip-native python3-pip-native"
-IMAGE_FSTYPES = "vmdk"
+IMAGE_FSTYPES = "wic.vmdk"
 
 inherit core-image module-base setuptools3
 
-SRCREV ?= "b859272ad4053185d4980cac05481b430e05345f"
-SRC_URI = "git://git.yoctoproject.org/poky;branch=pyro \
+SRCREV ?= "a9588646fcec17e53199e1ea7e7b8dccf140817e"
+SRC_URI = "git://git.yoctoproject.org/poky;branch=rocko \
            file://Yocto_Build_Appliance.vmx \
            file://Yocto_Build_Appliance.vmxf \
            file://README_VirtualBox_Guest_Additions.txt \
@@ -101,7 +100,11 @@
 	export STAGING_INCDIR=${STAGING_INCDIR_NATIVE}
 	export HOME=${IMAGE_ROOTFS}/home/builder
 	mkdir -p ${IMAGE_ROOTFS}/home/builder/.cache/pip
-	pip3 install --user -I -U -v -r ${IMAGE_ROOTFS}/home/builder/poky/bitbake/toaster-requirements.txt
+	pip3_install_params="--user -I -U -v -r ${IMAGE_ROOTFS}/home/builder/poky/bitbake/toaster-requirements.txt"
+	if [ -n "${http_proxy}" ]; then
+	   pip3_install_params="${pip3_install_params} --proxy ${http_proxy}"
+	fi
+	pip3 install ${pip3_install_params}
 	chown -R builder.builder ${IMAGE_ROOTFS}/home/builder/.local
 	chown -R builder.builder ${IMAGE_ROOTFS}/home/builder/.cache
 }
@@ -120,7 +123,7 @@
 	cd ${WORKDIR}
 	mkdir -p Yocto_Build_Appliance
 	cp *.vmx* Yocto_Build_Appliance
-	ln -sf ${IMGDEPLOYDIR}/${IMAGE_NAME}.vmdk Yocto_Build_Appliance/Yocto_Build_Appliance.vmdk
+	ln -sf ${IMGDEPLOYDIR}/${IMAGE_NAME}${IMAGE_NAME_SUFFIX}.wic.vmdk Yocto_Build_Appliance/Yocto_Build_Appliance.vmdk
 	zip -r ${IMGDEPLOYDIR}/Yocto_Build_Appliance-${DATETIME}.zip Yocto_Build_Appliance
 	ln -sf Yocto_Build_Appliance-${DATETIME}.zip ${IMGDEPLOYDIR}/Yocto_Build_Appliance.zip
 }
@@ -130,4 +133,4 @@
     bb.build.exec_func('create_bundle_files', d)
 }
 
-addtask bundle_files after do_vmimg before do_image_complete
+addtask bundle_files after do_image_wic before do_image_complete
diff --git a/import-layers/yocto-poky/meta/recipes-core/images/core-image-minimal-initramfs.bb b/import-layers/yocto-poky/meta/recipes-core/images/core-image-minimal-initramfs.bb
index 5794a25..c446e87 100644
--- a/import-layers/yocto-poky/meta/recipes-core/images/core-image-minimal-initramfs.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/images/core-image-minimal-initramfs.bb
@@ -8,7 +8,7 @@
 # Do not pollute the initrd image with rootfs features
 IMAGE_FEATURES = ""
 
-export IMAGE_BASENAME = "core-image-minimal-initramfs"
+export IMAGE_BASENAME = "${MLPREFIX}core-image-minimal-initramfs"
 IMAGE_LINGUAS = ""
 
 LICENSE = "MIT"
diff --git a/import-layers/yocto-poky/meta/recipes-core/images/core-image-tiny-initramfs.bb b/import-layers/yocto-poky/meta/recipes-core/images/core-image-tiny-initramfs.bb
index 184727d..16995e6 100644
--- a/import-layers/yocto-poky/meta/recipes-core/images/core-image-tiny-initramfs.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/images/core-image-tiny-initramfs.bb
@@ -1,10 +1,9 @@
 # Simple initramfs image artifact generation for tiny images.
 DESCRIPTION = "Tiny image capable of booting a device. The kernel includes \
 the Minimal RAM-based Initial Root Filesystem (initramfs), which finds the \
-first 'init' program more efficiently.  core-image-tiny-initramfs doesn't \
+first 'init' program more efficiently. core-image-tiny-initramfs doesn't \
 actually generate an image but rather generates boot and rootfs artifacts \
-into a common location that can subsequently be picked up by external image \
-generation tools such as wic."
+that can subsequently be picked up by external image generation tools such as wic."
 
 PACKAGE_INSTALL = "initramfs-live-boot packagegroup-core-boot dropbear ${VIRTUAL-RUNTIME_base-utils} udev base-passwd ${ROOTFS_BOOTSTRAP_INSTALL}"
 
@@ -17,7 +16,7 @@
 LICENSE = "MIT"
 
 # don't actually generate an image, just the artifacts needed for one
-IMAGE_FSTYPES = "${INITRAMFS_FSTYPES} wic"
+IMAGE_FSTYPES = "${INITRAMFS_FSTYPES}"
 
 inherit core-image
 
@@ -40,3 +39,5 @@
 }
 
 IMAGE_PREPROCESS_COMMAND += "tinyinitrd;"
+
+QB_KERNEL_CMDLINE_APPEND += "debugshell=3 init=/bin/busybox sh init"
diff --git a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/files/init-install-efi.sh b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/files/init-install-efi.sh
index 5ad3a60..706418f 100644
--- a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/files/init-install-efi.sh
+++ b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/files/init-install-efi.sh
@@ -186,6 +186,13 @@
 
 parted ${device} print
 
+echo "Waiting for device nodes..."
+C=0
+while [ $C -ne 3 ] && [ ! -e $bootfs  -o ! -e $rootfs -o ! -e $swap ]; do
+    C=$(( C + 1 ))
+    sleep 1
+done
+
 echo "Formatting $bootfs to vfat..."
 mkfs.vfat $bootfs
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/files/init-install.sh b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/files/init-install.sh
index 572613e..dade059 100644
--- a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/files/init-install.sh
+++ b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/files/init-install.sh
@@ -132,7 +132,7 @@
 
 disk_size=$(parted ${device} unit mb print | grep '^Disk .*: .*MB' | cut -d" " -f 3 | sed -e "s/MB//")
 
-grub_version=$(grub-install -v|sed 's/.* \([0-9]\).*/\1/')
+grub_version=$(grub-install -V|sed 's/.* \([0-9]\).*/\1/')
 
 if [ $grub_version -eq 0 ] ; then
     bios_boot_size=0
@@ -211,6 +211,13 @@
 
 parted ${device} print
 
+echo "Waiting for device nodes..."
+C=0
+while [ $C -ne 3 ] && [ ! -e $bootfs  -o ! -e $rootfs -o ! -e $swap ]; do
+    C=$(( C + 1 ))
+    sleep 1
+done
+
 echo "Formatting $bootfs to ext3..."
 mkfs.ext3 $bootfs
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/files/init-live.sh b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/files/init-live.sh
index 441b41c..46cab6c 100644
--- a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/files/init-live.sh
+++ b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/files/init-live.sh
@@ -84,6 +84,10 @@
     # device node creation events were handled, to avoid unexpected behavior
     killall -9 "${_UDEV_DAEMON##*/}" 2>/dev/null
 
+    # Don't run systemd-update-done on systemd-based live systems
+    # because it triggers a slow rebuild of ldconfig caches.
+    touch ${ROOT_MOUNT}/etc/.updated ${ROOT_MOUNT}/var/.updated
+
     # Allow for identification of the real root even after boot
     mkdir -p  ${ROOT_MOUNT}/media/realroot
     mount -n --move "/run/media/${ROOT_DISK}" ${ROOT_MOUNT}/media/realroot
diff --git a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-framework/mdev b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-framework/mdev
index a5df1d7..9814d97 100644
--- a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-framework/mdev
+++ b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-framework/mdev
@@ -1,5 +1,5 @@
 #!/bin/sh
-# Copyright (C) 2011 O.S. Systems Software LTDA.
+# Copyright (C) 2011, 2017 O.S. Systems Software LTDA.
 # Licensed on MIT
 
 mdev_enabled() {
@@ -25,6 +25,6 @@
 
 	# load modules for devices
 	find /sys -name modalias | while read m; do
-		load_kernel_module $(cat $m)
+		load_kernel_module $(cat "$m")
 	done
 }
diff --git a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-framework/setup-live b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-framework/setup-live
new file mode 100644
index 0000000..4c79f41
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-framework/setup-live
@@ -0,0 +1,64 @@
+#!/bin/sh
+# Copyright (C) 2011 O.S. Systems Software LTDA.
+# Licensed on MIT
+
+setup_enabled() {
+	return 0
+}
+
+setup_run() {
+ROOT_IMAGE="rootfs.img"
+ISOLINUX=""
+ROOT_DISK=""
+shelltimeout=30
+
+	if [ -z "$bootparam_root" -o "$bootparam_root" = "/dev/ram0" ]; then
+		echo "Waiting for removable media..."
+		C=0
+		while true
+		do
+		  for i in `ls /run/media 2>/dev/null`; do
+		      if [ -f /run/media/$i/$ROOT_IMAGE ] ; then
+				found="yes"
+				ROOT_DISK="$i"
+				break
+			  elif [ -f /run/media/$i/isolinux/$ROOT_IMAGE ]; then
+				found="yes"
+				ISOLINUX="isolinux"
+				ROOT_DISK="$i"
+				break
+		      fi
+		  done
+		  if [ "$found" = "yes" ]; then
+		      break;
+		  fi
+		  # don't wait for more than $shelltimeout seconds, if it's set
+		  if [ -n "$shelltimeout" ]; then
+		      echo -n " " $(( $shelltimeout - $C ))
+		      if [ $C -ge $shelltimeout ]; then
+		           echo "..."
+			   echo "Mounted filesystems"
+		           mount | grep media
+		           echo "Available block devices"
+		           cat /proc/partitions
+		           fatal "Cannot find $ROOT_IMAGE file in /run/media/* , dropping to a shell "
+		      fi
+		      C=$(( C + 1 ))
+		  fi
+		  sleep 1
+		done
+		# The existing rootfs module has no support for rootfs images. Assign the rootfs image.
+		bootparam_root="/run/media/$ROOT_DISK/$ISOLINUX/$ROOT_IMAGE"
+	fi
+
+	if [ "$bootparam_LABEL" != "boot" -a -f /init.d/$bootparam_LABEL.sh ] ; then
+		if [ -f /run/media/$i/$ISOLINUX/$ROOT_IMAGE ] ; then
+		    ./init.d/$bootparam_LABEL.sh $i/$ISOLINUX $ROOT_IMAGE $video_mode $vga_mode $console_params
+		else
+		    fatal "Could not find $bootparam_LABEL script"
+		fi
+
+		# If we're getting here, we failed...
+		fatal "Target $bootparam_LABEL failed"
+	fi
+}
diff --git a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-framework_1.0.bb b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-framework_1.0.bb
index 67a1b04..2afc37e 100644
--- a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-framework_1.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-framework_1.0.bb
@@ -3,7 +3,7 @@
 LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
 RDEPENDS_${PN} += "${VIRTUAL-RUNTIME_base-utils}"
 
-PR = "r2"
+PR = "r4"
 
 inherit allarch
 
@@ -13,7 +13,8 @@
            file://mdev \
            file://udev \
            file://e2fs \
-           file://debug"
+           file://debug \
+          "
 
 S = "${WORKDIR}"
 
@@ -48,7 +49,8 @@
             initramfs-module-udev \
             initramfs-module-e2fs \
             initramfs-module-rootfs \
-            initramfs-module-debug"
+            initramfs-module-debug \
+           "
 
 FILES_${PN}-base = "/init /init.d/99-finish /dev"
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-live-install-efi_1.0.bb b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-live-install-efi_1.0.bb
index 32c1fce..2a7f84d 100644
--- a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-live-install-efi_1.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-live-install-efi_1.0.bb
@@ -21,4 +21,4 @@
 
 FILES_${PN} = " /install-efi.sh "
 
-COMPATIBLE_HOST = "(i.86|x86_64).*-linux"
+COMPATIBLE_HOST = "(i.86.*|x86_64.*|aarch64.*)-linux"
diff --git a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-live-install_1.0.bb b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-live-install_1.0.bb
index 88b3b30..a553a0d 100644
--- a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-live-install_1.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-live-install_1.0.bb
@@ -21,4 +21,4 @@
 
 FILES_${PN} = " /install.sh "
 
-COMPATIBLE_HOST = "(i.86|x86_64).*-linux"
+COMPATIBLE_HOST = "(i.86.*|x86_64.*|aarch64.*)-linux"
diff --git a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-module-install-efi_1.0.bb b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-module-install-efi_1.0.bb
new file mode 100644
index 0000000..1e7f76f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-module-install-efi_1.0.bb
@@ -0,0 +1,17 @@
+SUMMARY = "initramfs-framework module for EFI installation option"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
+RDEPENDS_${PN} = "initramfs-framework-base parted e2fsprogs-mke2fs dosfstools util-linux-blkid"
+
+PR = "r4"
+
+SRC_URI = "file://init-install-efi.sh"
+
+S = "${WORKDIR}"
+
+do_install() {
+    install -d ${D}/init.d
+    install -m 0755 ${WORKDIR}/init-install-efi.sh ${D}/init.d/install-efi.sh
+}
+
+FILES_${PN} = "/init.d/install-efi.sh"
diff --git a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-module-install_1.0.bb b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-module-install_1.0.bb
new file mode 100644
index 0000000..02b69f3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-module-install_1.0.bb
@@ -0,0 +1,22 @@
+SUMMARY = "initramfs-framework module for installation option"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
+RDEPENDS_${PN} = "initramfs-framework-base grub parted e2fsprogs-mke2fs util-linux-blkid"
+
+# The same restriction as grub
+COMPATIBLE_HOST = '(x86_64.*|i.86.*|arm.*|aarch64.*)-(linux.*|freebsd.*)'
+COMPATIBLE_HOST_armv7a = 'null'
+COMPATIBLE_HOST_armv7ve = 'null'
+
+PR = "r1"
+
+SRC_URI = "file://init-install.sh"
+
+S = "${WORKDIR}"
+
+do_install() {
+    install -d ${D}/init.d
+    install -m 0755 ${WORKDIR}/init-install.sh ${D}/init.d/install.sh
+}
+
+FILES_${PN} = "/init.d/install.sh"
diff --git a/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-module-setup-live_1.0.bb b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-module-setup-live_1.0.bb
new file mode 100644
index 0000000..4d2fe9d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/initrdscripts/initramfs-module-setup-live_1.0.bb
@@ -0,0 +1,20 @@
+SUMMARY = "initramfs-framework module for live booting"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
+RDEPENDS_${PN} = "initramfs-framework-base udev-extraconf"
+
+PR = "r4"
+
+inherit allarch
+
+FILESEXTRAPATHS_prepend := "${THISDIR}/initramfs-framework:"
+SRC_URI = "file://setup-live"
+
+S = "${WORKDIR}"
+
+do_install() {
+    install -d ${D}/init.d
+    install -m 0755 ${WORKDIR}/setup-live ${D}/init.d/80-setup-live
+}
+
+FILES_${PN} = "/init.d/80-setup-live"
diff --git a/import-layers/yocto-poky/meta/recipes-core/initscripts/initscripts-1.0/populate-volatile.sh b/import-layers/yocto-poky/meta/recipes-core/initscripts/initscripts-1.0/populate-volatile.sh
index 22a71ec..35316ec 100755
--- a/import-layers/yocto-poky/meta/recipes-core/initscripts/initscripts-1.0/populate-volatile.sh
+++ b/import-layers/yocto-poky/meta/recipes-core/initscripts/initscripts-1.0/populate-volatile.sh
@@ -25,8 +25,18 @@
 [ "${VERBOSE}" != "no" ] && echo "Populating volatile Filesystems."
 
 create_file() {
+	EXEC=""
+	[ -z "$2" ] && {
+		EXEC="
+		touch \"$1\";
+		"
+	} || {
+		EXEC="
+		cp \"$2\" \"$1\";
+		"
+	}
 	EXEC="
-	touch \"$1\";
+	${EXEC}
 	chown ${TUSER}.${TGROUP} $1 || echo \"Failed to set owner -${TUSER}- for -$1-.\" >/dev/tty0 2>&1;
 	chmod ${TMODE} $1 || echo \"Failed to set mode -${TMODE}- for -$1-.\" >/dev/tty0 2>&1 "
 
@@ -187,7 +197,9 @@
 
 		case "${TTYPE}" in
 			"f")  [ "${VERBOSE}" != "no" ] && echo "Creating file -${TNAME}-."
-				create_file "${TNAME}"
+				TSOURCE="$TLTARGET"
+				[ "${TSOURCE}" = "none" ] && TSOURCE=""
+				create_file "${TNAME}" "${TSOURCE}" &
 				;;
 			"d")  [ "${VERBOSE}" != "no" ] && echo "Creating directory -${TNAME}-."
 				mk_dir "${TNAME}"
diff --git a/import-layers/yocto-poky/meta/recipes-core/initscripts/initscripts-1.0/volatiles b/import-layers/yocto-poky/meta/recipes-core/initscripts/initscripts-1.0/volatiles
index bc17c45..2011066 100644
--- a/import-layers/yocto-poky/meta/recipes-core/initscripts/initscripts-1.0/volatiles
+++ b/import-layers/yocto-poky/meta/recipes-core/initscripts/initscripts-1.0/volatiles
@@ -27,7 +27,6 @@
 d root root 0755 /var/volatile/log none
 d root root 1777 /var/volatile/tmp none
 l root root 1777 /var/lock /run/lock
-l root root 0755 /var/log /var/volatile/log
 l root root 0755 /var/run /run
 l root root 1777 /var/tmp /var/volatile/tmp
 l root root 1777 /tmp /var/tmp
diff --git a/import-layers/yocto-poky/meta/recipes-core/initscripts/initscripts_1.0.bb b/import-layers/yocto-poky/meta/recipes-core/initscripts/initscripts_1.0.bb
index 2e4f7e4..91eea4b 100644
--- a/import-layers/yocto-poky/meta/recipes-core/initscripts/initscripts_1.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/initscripts/initscripts_1.0.bb
@@ -1,4 +1,5 @@
 SUMMARY = "SysV init scripts"
+HOMEPAGE = "https://github.com/fedora-sysv/initscripts"
 DESCRIPTION = "Initscripts provide the basic system startup initialization scripts for the system.  These scripts include actions such as filesystem mounting, fsck, RTC manipulation and other actions routinely performed at system startup.  In addition, the scripts are also used during system shutdown to reverse the actions performed at startup."
 SECTION = "base"
 LICENSE = "GPLv2"
@@ -42,19 +43,19 @@
 
 KERNEL_VERSION = ""
 
-inherit update-alternatives
 DEPENDS_append = " update-rc.d-native"
 PACKAGE_WRITE_DEPS_append = " ${@bb.utils.contains('DISTRO_FEATURES','systemd','systemd-systemctl-native','',d)}"
 
-PACKAGES =+ "${PN}-functions"
-RDEPENDS_${PN} = "${PN}-functions \
-                  ${@bb.utils.contains('DISTRO_FEATURES','selinux','bash','',d)} \
+PACKAGES =+ "${PN}-functions ${PN}-sushell"
+RDEPENDS_${PN} = "initd-functions \
+                  ${@bb.utils.contains('DISTRO_FEATURES','selinux','${PN}-sushell','',d)} \
 		 "
+# Recommend pn-functions so that it will be a preferred default provider for initd-functions
+RRECOMMENDS_${PN} = "${PN}-functions"
+RPROVIDES_${PN}-functions = "initd-functions"
+RCONFLICTS_${PN}-functions = "lsbinitscripts"
 FILES_${PN}-functions = "${sysconfdir}/init.d/functions*"
-
-ALTERNATIVE_PRIORITY_${PN}-functions = "90"
-ALTERNATIVE_${PN}-functions = "functions"
-ALTERNATIVE_LINK_NAME[functions] = "${sysconfdir}/init.d/functions"
+FILES_${PN}-sushell = "${base_sbindir}/sushell"
 
 HALTARGS ?= "-d -f"
 
@@ -102,6 +103,9 @@
 	install -m 0755    ${WORKDIR}/read-only-rootfs-hook.sh ${D}${sysconfdir}/init.d
 	install -m 0755    ${WORKDIR}/save-rtc.sh	${D}${sysconfdir}/init.d
 	install -m 0644    ${WORKDIR}/volatiles		${D}${sysconfdir}/default/volatiles/00_core
+	if [ ${@ oe.types.boolean('${VOLATILE_LOG_DIR}') } = True ]; then
+		echo "l root root 0755 /var/log /var/volatile/log" >> ${D}${sysconfdir}/default/volatiles/00_core
+	fi
 	install -m 0755    ${WORKDIR}/dmesg.sh		${D}${sysconfdir}/init.d
 	install -m 0644    ${WORKDIR}/logrotate-dmesg.conf ${D}${sysconfdir}/
 
@@ -134,7 +138,7 @@
 	update-rc.d -r ${D} mountall.sh start 03 S .
 	update-rc.d -r ${D} hostname.sh start 39 S .
 	update-rc.d -r ${D} mountnfs.sh start 15 2 3 4 5 .
-	update-rc.d -r ${D} bootmisc.sh start 55 S .
+	update-rc.d -r ${D} bootmisc.sh start 36 S .
 	update-rc.d -r ${D} sysfs.sh start 02 S .
 	update-rc.d -r ${D} populate-volatile.sh start 37 S .
 	update-rc.d -r ${D} read-only-rootfs-hook.sh start 29 S .
diff --git a/import-layers/yocto-poky/meta/recipes-core/kbd/kbd_2.0.4.bb b/import-layers/yocto-poky/meta/recipes-core/kbd/kbd_2.0.4.bb
index 65325c0..423b47a 100644
--- a/import-layers/yocto-poky/meta/recipes-core/kbd/kbd_2.0.4.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/kbd/kbd_2.0.4.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Keytable files and keyboard utilities"
+HOMEPAGE = "http://www.kbd-project.org/"
 # everything minus console-fonts is GPLv2+
 LICENSE = "GPLv2+"
 LIC_FILES_CHKSUM = "file://COPYING;md5=a5fcc36121d93e1f69d96a313078c8b5"
diff --git a/import-layers/yocto-poky/meta/recipes-core/libcgroup/libcgroup_0.41.bb b/import-layers/yocto-poky/meta/recipes-core/libcgroup/libcgroup_0.41.bb
index 9597963..e4b1782 100644
--- a/import-layers/yocto-poky/meta/recipes-core/libcgroup/libcgroup_0.41.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/libcgroup/libcgroup_0.41.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Linux control group abstraction library"
+HOMEPAGE = "http://libcg.sourceforge.net/"
 DESCRIPTION = "libcgroup is a library that abstracts the control group file system \
 in Linux. Control groups allow you to limit, account and isolate resource usage \
 (CPU, memory, disk I/O, etc.) of groups of processes."
diff --git a/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/libxml2-CVE-2016-4658.patch b/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/libxml2-CVE-2016-4658.patch
index 5412e8c..bb55eed 100644
--- a/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/libxml2-CVE-2016-4658.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/libxml2-CVE-2016-4658.patch
@@ -8,7 +8,7 @@
 But they don't necessarily have a physical representation in a
 document, so simply disallow them in XPointer ranges.
 
-Upstream-Status: Backported 
+Upstream-Status: Backport
  - [https://git.gnome.org/browse/libxml2/commit/?id=c1d1f7121194036608bf555f08d3062a36fd344b]
  - [https://git.gnome.org/browse/libxml2/commit/?id=3f8a91036d338e51c059d54397a42d645f019c65]
 CVE: CVE-2016-4658
diff --git a/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/libxml2-fix_NULL_pointer_derefs.patch b/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/libxml2-fix_NULL_pointer_derefs.patch
index 83552ca..c60e32f 100644
--- a/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/libxml2-fix_NULL_pointer_derefs.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/libxml2-fix_NULL_pointer_derefs.patch
@@ -2,8 +2,7 @@
 
 xpointer: Fix more NULL pointer derefs
 
-Upstream-Status: Backported [https://git.gnome.org/browse/libxml2/commit/?id=e905f08123e4a6e7731549e6f09dadff4cab65bd]
-CVE: -
+Upstream-Status: Backport [https://git.gnome.org/browse/libxml2/commit/?id=e905f08123e4a6e7731549e6f09dadff4cab65bd]
 Signed-off-by: Andrej Valek <andrej.valek@siemens.com>
 Signed-off-by: Pascal Bach <pascal.bach@siemens.com>
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/libxml2-fix_node_comparison.patch b/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/libxml2-fix_node_comparison.patch
index 11718bb..65f6bef 100644
--- a/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/libxml2-fix_node_comparison.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/libxml2-fix_node_comparison.patch
@@ -5,10 +5,10 @@
  - Add sanity check for empty stack.
  - Include comparation in changes from xmlXPathCmpNodesExt to xmlXPathCmpNodes
 
-Upstream-Status: Backported
+Upstream-Status: Backport
  - [https://git.gnome.org/browse/libxml2/commit/?id=c1d1f7121194036608bf555f08d3062a36fd344b]
  - [https://git.gnome.org/browse/libxml2/commit/?id=a005199330b86dada19d162cae15ef9bdcb6baa8]
-CVE: necessary changes for fixing CVE-2016-5131
+CVE: CVE-2016-5131
 Signed-off-by: Andrej Valek <andrej.valek@siemens.com>
 Signed-off-by: Pascal Bach <pascal.bach@siemens.com>
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/run-ptest b/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/run-ptest
index 473d0b6..c313d83 100644
--- a/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/run-ptest
+++ b/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2/run-ptest
@@ -1,3 +1,4 @@
 #!/bin/sh
 
+export LC_ALL=en_US.UTF-8
 make -k runtests
diff --git a/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2_2.9.4.bb b/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2_2.9.4.bb
index 107539b..9adb29c 100644
--- a/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2_2.9.4.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/libxml/libxml2_2.9.4.bb
@@ -11,7 +11,7 @@
 
 DEPENDS = "zlib virtual/libiconv"
 
-SRC_URI = "ftp://xmlsoft.org/libxml2/libxml2-${PV}.tar.gz;name=libtar \
+SRC_URI = "http://www.xmlsoft.org/sources/libxml2-${PV}.tar.gz;name=libtar \
            http://www.w3.org/XML/Test/xmlts20080827.tar.gz;name=testtar \
            file://libxml-64bit.patch \
            file://ansidecl.patch \
@@ -53,11 +53,11 @@
 
 RDEPENDS_${PN}-python += "${@bb.utils.contains('PACKAGECONFIG', 'python', 'python3-core', '', d)}"
 
-RDEPENDS_${PN}-ptest_append_libc-glibc = " glibc-gconv-ebcdic-us glibc-gconv-ibm1141"
+RDEPENDS_${PN}-ptest_append_libc-glibc = " glibc-gconv-ebcdic-us glibc-gconv-ibm1141 glibc-gconv-iso8859-5"
 
 export PYTHON_SITE_PACKAGES="${PYTHON_SITEPACKAGES_DIR}"
 
-# WARNING: zlib is require for RPM use
+# WARNING: zlib is required for RPM use
 EXTRA_OECONF = "--without-debug --without-legacy --with-catalog --without-docbook --with-c14n --without-lzma --with-fexceptions"
 EXTRA_OECONF_class-native = "--without-legacy --without-docbook --with-c14n --without-lzma --with-zlib"
 EXTRA_OECONF_class-nativesdk = "--without-legacy --without-docbook --with-c14n --without-lzma --with-zlib"
@@ -89,6 +89,17 @@
 		grep -lrZ '#!/usr/bin/python' ${D}${PTEST_PATH}/python |
 			xargs -0 sed -i -e 's|/usr/bin/python|${USRBINPATH}/${PYTHON_PN}|'
 	fi
+	#Remove build host references from various Makefiles
+	find "${D}${PTEST_PATH}" -name Makefile -type f -exec \
+	    sed -i \
+	    -e 's,--sysroot=${STAGING_DIR_TARGET},,g' \
+	    -e 's|${DEBUG_PREFIX_MAP}||g' \
+	    -e 's:${HOSTTOOLS_DIR}/::g' \
+	    -e 's:${RECIPE_SYSROOT_NATIVE}::g' \
+	    -e 's:${RECIPE_SYSROOT}::g' \
+	    -e 's:${BASE_WORKDIR}/${MULTIMACH_TARGET_SYS}::g' \
+	    -e '/^RELDATE/d' \
+	    {} +
 }
 
 do_install_append_class-native () {
diff --git a/import-layers/yocto-poky/meta/recipes-core/meta/buildtools-tarball.bb b/import-layers/yocto-poky/meta/recipes-core/meta/buildtools-tarball.bb
index abdc7fe..be37c44 100644
--- a/import-layers/yocto-poky/meta/recipes-core/meta/buildtools-tarball.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/meta/buildtools-tarball.bb
@@ -46,8 +46,6 @@
 inherit nopackages
 
 deltask install
-deltask package
-deltask packagedata
 deltask populate_sysroot
 
 do_populate_sdk[stamp-extra-info] = "${PACKAGE_ARCH}"
diff --git a/import-layers/yocto-poky/meta/recipes-core/meta/meta-go-toolchain.bb b/import-layers/yocto-poky/meta/recipes-core/meta/meta-go-toolchain.bb
new file mode 100644
index 0000000..dde385c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/meta/meta-go-toolchain.bb
@@ -0,0 +1,12 @@
+SUMMARY = "Meta package for building a installable Go toolchain"
+LICENSE = "MIT"
+
+inherit populate_sdk
+
+TOOLCHAIN_HOST_TASK_append = " \
+    packagegroup-go-cross-canadian-${MACHINE} \
+"
+
+TOOLCHAIN_TARGET_TASK_append = " \
+    ${@multilib_pkg_extend(d, 'packagegroup-go-sdk-target')} \
+"
diff --git a/import-layers/yocto-poky/meta/recipes-core/meta/package-index.bb b/import-layers/yocto-poky/meta/recipes-core/meta/package-index.bb
index fe022ff..a4123b7 100644
--- a/import-layers/yocto-poky/meta/recipes-core/meta/package-index.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/meta/package-index.bb
@@ -4,17 +4,14 @@
 INHIBIT_DEFAULT_DEPS = "1"
 PACKAGES = ""
 
+inherit nopackages
+
 deltask do_fetch
 deltask do_unpack
 deltask do_patch
 deltask do_configure
 deltask do_compile
 deltask do_install
-deltask do_package
-deltask do_packagedata
-deltask do_package_write_ipk
-deltask do_package_write_rpm
-deltask do_package_write_deb
 deltask do_populate_sysroot
 
 do_package_index[nostamp] = "1"
diff --git a/import-layers/yocto-poky/meta/recipes-core/meta/signing-keys.bb b/import-layers/yocto-poky/meta/recipes-core/meta/signing-keys.bb
index aaa01d0..2c1cc38 100644
--- a/import-layers/yocto-poky/meta/recipes-core/meta/signing-keys.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/meta/signing-keys.bb
@@ -44,25 +44,25 @@
 
 do_install () {
     if [ -f "${B}/rpm-key" ]; then
-        install -D -m 0644 "${B}/rpm-key" "${D}${sysconfdir}/pki/rpm-gpg/RPM-GPG-KEY-${DISTRO_VERSION}"
+        install -D -m 0644 "${B}/rpm-key" "${D}${sysconfdir}/pki/rpm-gpg/RPM-GPG-KEY-${DISTRO}-${DISTRO_CODENAME}"
     fi
     if [ -f "${B}/ipk-key" ]; then
-        install -D -m 0644 "${B}/ipk-key" "${D}${sysconfdir}/pki/ipk-gpg/IPK-GPG-KEY-${DISTRO_VERSION}"
+        install -D -m 0644 "${B}/ipk-key" "${D}${sysconfdir}/pki/ipk-gpg/IPK-GPG-KEY-${DISTRO}-${DISTRO_CODENAME}"
     fi
     if [ -f "${B}/pf-key" ]; then
-        install -D -m 0644 "${B}/pf-key" "${D}${sysconfdir}/pki/packagefeed-gpg/PACKAGEFEED-GPG-KEY-${DISTRO_VERSION}"
+        install -D -m 0644 "${B}/pf-key" "${D}${sysconfdir}/pki/packagefeed-gpg/PACKAGEFEED-GPG-KEY-${DISTRO}-${DISTRO_CODENAME}"
     fi
 }
 
 do_deploy () {
     if [ -f "${B}/rpm-key" ]; then
-        install -D -m 0644 "${B}/rpm-key" "${DEPLOYDIR}/RPM-GPG-KEY-${DISTRO_VERSION}"
+        install -D -m 0644 "${B}/rpm-key" "${DEPLOYDIR}/RPM-GPG-KEY-${DISTRO}-${DISTRO_CODENAME}"
     fi
     if [ -f "${B}/ipk-key" ]; then
-        install -D -m 0644 "${B}/ipk-key" "${DEPLOYDIR}/IPK-GPG-KEY-${DISTRO_VERSION}"
+        install -D -m 0644 "${B}/ipk-key" "${DEPLOYDIR}/IPK-GPG-KEY-${DISTRO}-${DISTRO_CODENAME}"
     fi
     if [ -f "${B}/pf-key" ]; then
-        install -D -m 0644 "${B}/pf-key" "${DEPLOYDIR}/PACKAGEFEED-GPG-KEY-${DISTRO_VERSION}"
+        install -D -m 0644 "${B}/pf-key" "${DEPLOYDIR}/PACKAGEFEED-GPG-KEY-${DISTRO}-${DISTRO_CODENAME}"
     fi
 }
 do_deploy[sstate-outputdirs] = "${DEPLOY_DIR_RPM}"
@@ -71,3 +71,11 @@
 # clear stamp-extra-info since MACHINE is normally put there by deploy.bbclass
 do_deploy[stamp-extra-info] = ""
 addtask deploy after do_get_public_keys
+
+# Delete unnecessary tasks. In particular, "do_unpack" _must_ be deleted because
+# it cleans ${B} and will wipe any keys exported by do_get_public_keys.
+deltask do_fetch
+deltask do_unpack
+deltask do_patch
+deltask do_configure
+deltask do_compile
diff --git a/import-layers/yocto-poky/meta/recipes-core/meta/uninative-tarball.bb b/import-layers/yocto-poky/meta/recipes-core/meta/uninative-tarball.bb
index f3fc1eb..5fabf7f 100644
--- a/import-layers/yocto-poky/meta/recipes-core/meta/uninative-tarball.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/meta/uninative-tarball.bb
@@ -34,8 +34,6 @@
 inherit nopackages
 
 deltask install
-deltask package
-deltask packagedata
 deltask populate_sysroot
 
 do_populate_sdk[stamp-extra-info] = "${PACKAGE_ARCH}"
diff --git a/import-layers/yocto-poky/meta/recipes-core/meta/wic-tools.bb b/import-layers/yocto-poky/meta/recipes-core/meta/wic-tools.bb
index cd494ec..09eb409 100644
--- a/import-layers/yocto-poky/meta/recipes-core/meta/wic-tools.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/meta/wic-tools.bb
@@ -5,12 +5,15 @@
 DEPENDS = "\
            parted-native syslinux-native gptfdisk-native dosfstools-native \
            mtools-native bmap-tools-native grub-efi-native cdrtools-native \
-           btrfs-tools-native squashfs-tools-native \
+           btrfs-tools-native squashfs-tools-native pseudo-native \
+           e2fsprogs-native util-linux-native \
            "
 DEPENDS_append_x86 = " syslinux grub-efi systemd-boot"
 DEPENDS_append_x86-64 = " syslinux grub-efi systemd-boot"
+DEPENDS_append_x86-x32 = " syslinux grub-efi"
 
 INHIBIT_DEFAULT_DEPS = "1"
+
 inherit nopackages
 
 # The sysroot of wic-tools is needed for wic, but if rm_work is enabled, it will
@@ -19,14 +22,5 @@
 
 python do_build_sysroot () {
     bb.build.exec_func("extend_recipe_sysroot", d)
-
-    # Write environment variables used by wic
-    # to tmp/sysroots/<machine>/imgdata/wictools.env
-    outdir = os.path.join(d.getVar('STAGING_DIR'), d.getVar('MACHINE'), 'imgdata')
-    bb.utils.mkdirhier(outdir)
-    with open(os.path.join(outdir, "wic-tools.env"), 'w') as envf:
-        for var in ('RECIPE_SYSROOT_NATIVE', 'STAGING_DATADIR', 'STAGING_LIBDIR'):
-            envf.write('%s="%s"\n' % (var, d.getVar(var).strip()))
-
 }
 addtask do_build_sysroot after do_prepare_recipe_sysroot before do_build
diff --git a/import-layers/yocto-poky/meta/recipes-core/musl/musl.inc b/import-layers/yocto-poky/meta/recipes-core/musl/musl.inc
index 56c9d7f..9af1172 100644
--- a/import-layers/yocto-poky/meta/recipes-core/musl/musl.inc
+++ b/import-layers/yocto-poky/meta/recipes-core/musl/musl.inc
@@ -26,3 +26,8 @@
 # Doesn't compile in MIPS16e mode due to use of hand-written
 # assembly
 MIPS_INSTRUCTION_SET = "mips"
+
+# thumb1 is unsupported
+ARM_INSTRUCTION_SET_armv5 = "arm"
+ARM_INSTRUCTION_SET_armv4 = "arm"
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/musl/musl_git.bb b/import-layers/yocto-poky/meta/recipes-core/musl/musl_git.bb
index a88bc4d..db26b4f 100644
--- a/import-layers/yocto-poky/meta/recipes-core/musl/musl_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/musl/musl_git.bb
@@ -3,9 +3,9 @@
 
 require musl.inc
 
-SRCREV = "54807d47acecab778498ced88ce8f62bfa16e379"
+SRCREV = "eb03bde2f24582874cb72b56c7811bf51da0c817"
 
-PV = "1.1.16+git${SRCPV}"
+PV = "1.1.18+git${SRCPV}"
 
 # mirror is at git://github.com/kraj/musl.git
 
@@ -28,6 +28,14 @@
 
 LDFLAGS += "-Wl,-soname,libc.so"
 
+# When compiling for Thumb or Thumb2, frame pointers _must_ be disabled since the
+# Thumb frame pointer in r7 clashes with musl's use of inline asm to make syscalls
+# (where r7 is used for the syscall NR). In most cases, frame pointers will be
+# disabled automatically due to the optimisation level, but append an explicit
+# -fomit-frame-pointer to handle cases where optimisation is set to -O0 or frame
+# pointers have been enabled by -fno-omit-frame-pointer earlier in CFLAGS, etc.
+CFLAGS_append_arm = " ${@bb.utils.contains('TUNE_CCARGS', '-mthumb', '-fomit-frame-pointer', '', d)}"
+
 CONFIGUREOPTS = " \
     --prefix=${prefix} \
     --exec-prefix=${exec_prefix} \
@@ -49,10 +57,11 @@
 	oe_runmake install DESTDIR='${D}'
 
 	install -d ${D}${bindir}
+	rm -f ${D}${bindir}/ldd
 	lnr ${D}${libdir}/libc.so ${D}${bindir}/ldd
 	for l in crypt dl m pthread resolv rt util xnet
 	do
-		ln -s libc.so ${D}${libdir}/lib$l.so
+		ln -sf libc.so ${D}${libdir}/lib$l.so
 	done
 }
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/ncurses/files/0001-tic-hang.patch b/import-layers/yocto-poky/meta/recipes-core/ncurses/files/0001-tic-hang.patch
new file mode 100644
index 0000000..4a97056
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/ncurses/files/0001-tic-hang.patch
@@ -0,0 +1,43 @@
+From a95590f676209832fe0b27226e6de3cb50e2b97c Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Wed, 16 Aug 2017 14:31:51 +0800
+Subject: [PATCH 1/2] tic hang
+
+Upstream-Status: Inappropriate [configuration]
+
+'tic' of some linux distributions (e.g. fedora 11) hang in an infinite
+loop when processing the original file.
+
+Signed-off-by: anonymous
+
+Rebase to 6.0+20170715
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ misc/terminfo.src | 11 +++++------
+ 1 file changed, 5 insertions(+), 6 deletions(-)
+
+diff --git a/misc/terminfo.src b/misc/terminfo.src
+index ee3fab3..176d593 100644
+--- a/misc/terminfo.src
++++ b/misc/terminfo.src
+@@ -5177,12 +5177,11 @@ konsole-xf3x|KDE console window with keyboard for XFree86 3.x xterm,
+ # The value for kbs reflects local customization rather than the settings used
+ # for XFree86 xterm.
+ konsole-xf4x|KDE console window with keyboard for XFree86 4.x xterm,
+-	kend=\EOF, khome=\EOH, use=konsole+pcfkeys,
+-	use=konsole-vt100,
+-# Konsole does not implement shifted cursor-keys.
+-konsole+pcfkeys|konsole subset of xterm+pcfkeys,
+-	kLFT@, kRIT@, kcbt=\E[Z, kind@, kri@, kDN@, kUP@, use=xterm+pcc2,
+-	use=xterm+pcf0,
++	kend=\EOF, kf1=\EOP, kf13=\EO2P, kf14=\EO2Q, kf15=\EO2R,
++	kf16=\EO2S, kf17=\E[15;2~, kf18=\E[17;2~, kf19=\E[18;2~,
++	kf2=\EOQ, kf20=\E[19;2~, kf21=\E[20;2~, kf22=\E[21;2~,
++	kf23=\E[23;2~, kf24=\E[24;2~, kf3=\EOR, kf4=\EOS,
++	khome=\EOH, use=konsole-vt100,
+ # KDE's "vt100" keyboard has no relationship to any terminal that DEC made, but
+ # it is still useful for deriving the other entries.
+ konsole-vt100|KDE console window with vt100 (sic) keyboard,
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/ncurses/files/0002-configure-reproducible.patch b/import-layers/yocto-poky/meta/recipes-core/ncurses/files/0002-configure-reproducible.patch
new file mode 100644
index 0000000..c47ce6a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/ncurses/files/0002-configure-reproducible.patch
@@ -0,0 +1,35 @@
+From 939c994f3756c2d6d3cab2e6a04d05fa7c2b1d56 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Wed, 16 Aug 2017 14:45:27 +0800
+Subject: [PATCH 2/2] configure: reproducible
+
+"configure" enforces -U for ar flags, breaking deterministic builds.
+The flag was added to fix some vaguely specified "recent POSIX binutil
+build problems" in 2015.
+
+Upstream-Status: Pending
+Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
+
+Rebase to 6.0+20170715
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ configure | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/configure b/configure
+index 7d7d2c1..f444354 100755
+--- a/configure
++++ b/configure
+@@ -4458,7 +4458,7 @@ if test "${cf_cv_ar_flags+set}" = set; then
+ else
+ 
+ 	cf_cv_ar_flags=unknown
+-	for cf_ar_flags in -curvU -curv curv -crv crv -cqv cqv -rv rv
++	for cf_ar_flags in -curv curv -crv crv -cqv cqv -rv rv
+ 	do
+ 
+ 		# check if $ARFLAGS already contains this choice
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/ncurses/files/CVE-2017-13732-CVE-2017-13734-CVE-2017-13730-CVE-2017-13729-CVE-2017-13728-CVE-2017-13731.patch b/import-layers/yocto-poky/meta/recipes-core/ncurses/files/CVE-2017-13732-CVE-2017-13734-CVE-2017-13730-CVE-2017-13729-CVE-2017-13728-CVE-2017-13731.patch
new file mode 100644
index 0000000..a19332c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/ncurses/files/CVE-2017-13732-CVE-2017-13734-CVE-2017-13730-CVE-2017-13729-CVE-2017-13728-CVE-2017-13731.patch
@@ -0,0 +1,541 @@
+From 4bf72cb8f1d3aa5f33c31eb817a5f0338f4aaf6f Mon Sep 17 00:00:00 2001
+From: Ovidiu Panait <ovidiu.panait@windriver.com>
+Date: Wed, 20 Sep 2017 05:02:00 +0000
+Subject: [PATCH] Import upstream patch 20170826
+
+20170826
+	+ fixes for "iterm2" (report by Leonardo Brondani Schenkel) -TD
+	+ corrected a warning from tic about keys which are the same, to skip
+	  over missing/cancelled values.
+	+ add check in tic for unnecessary use of "2" to denote a shifted
+	  special key.
+	+ improve checks in trim_sgr0, comp_parse.c and parse_entry.c, for
+	  cancelled string capabilities.
+	+ add check in _nc_parse_entry() for invalid entry name, setting the
+	  name to "invalid" to avoid problems storing entries.
+	+ add/improve checks in tic's parser to address invalid input
+	  + add a check in comp_scan.c to handle the special case where a
+	    nontext file ending with a NUL rather than newline is given to tic
+	    as input (Redhat #1484274).
+	  + allow for cancelled capabilities in _nc_save_str (Redhat #1484276).
+	  + add validity checks for "use=" target in _nc_parse_entry (Redhat
+	    #1484284).
+	  + check for invalid strings in postprocess_termcap (Redhat #1484285)
+	  + reset secondary pointers on EOF in next_char() (Redhat #1484287).
+	  + guard _nc_safe_strcpy() and _nc_safe_strcat() against calls using
+	    cancelled strings (Redhat #1484291).
+	+ correct typo in curs_memleaks.3x (Sven Joachim).
+	+ improve test/configure checks for some curses variants not based on
+	  X/Open Curses.
+	+ add options for test/configure to disable checks for form, menu and
+	  panel libraries.
+
+Upstream-Status: Backport
+CVE: CVE-2017-13732, CVE-2017-13734, CVE-2017-13730, CVE-2017-13729, CVE-2017-13728, CVE-2017-13731
+ 
+
+Author: Sven Joachim <svenjoac@gmx.de>
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
+---
+ dist.mk                     |  4 +-
+ include/ncurses_defs        |  4 +-
+ ncurses/tinfo/alloc_entry.c |  4 +-
+ ncurses/tinfo/comp_parse.c  | 10 ++---
+ ncurses/tinfo/comp_scan.c   |  6 ++-
+ ncurses/tinfo/parse_entry.c | 91 ++++++++++++++++++++++++++++++---------------
+ ncurses/tinfo/strings.c     |  9 +++--
+ ncurses/tinfo/trim_sgr0.c   |  4 +-
+ progs/tic.c                 | 75 ++++++++++++++++++++++++++++++++++++-
+ 9 files changed, 157 insertions(+), 50 deletions(-)
+
+diff --git a/dist.mk b/dist.mk
+index 9af2699..2c70472 100644
+--- a/dist.mk
++++ b/dist.mk
+@@ -25,7 +25,7 @@
+ # use or other dealings in this Software without prior written               #
+ # authorization.                                                             #
+ ##############################################################################
+-# $Id: dist.mk,v 1.1172 2017/07/13 00:15:27 tom Exp $
++# $Id: dist.mk,v 1.1179 2017/08/20 15:33:41 tom Exp $
+ # Makefile for creating ncurses distributions.
+ #
+ # This only needs to be used directly as a makefile by developers, but
+@@ -37,7 +37,7 @@ SHELL = /bin/sh
+ # These define the major/minor/patch versions of ncurses.
+ NCURSES_MAJOR = 6
+ NCURSES_MINOR = 0
+-NCURSES_PATCH = 20170715
++NCURSES_PATCH = 20170826
+ 
+ # We don't append the patch to the version, since this only applies to releases
+ VERSION = $(NCURSES_MAJOR).$(NCURSES_MINOR)
+diff --git a/include/ncurses_defs b/include/ncurses_defs
+index e6611b7..d237db1 100644
+--- a/include/ncurses_defs
++++ b/include/ncurses_defs
+@@ -1,4 +1,4 @@
+-# $Id: ncurses_defs,v 1.73 2017/06/24 14:20:57 tom Exp $
++# $Id: ncurses_defs,v 1.75 2017/08/20 16:50:04 tom Exp $
+ ##############################################################################
+ # Copyright (c) 2000-2016,2017 Free Software Foundation, Inc.                #
+ #                                                                            #
+@@ -50,7 +50,9 @@ HAVE_BSD_STRING_H
+ HAVE_BTOWC 
+ HAVE_BUILTIN_H
+ HAVE_CHGAT	1
++HAVE_COLOR_CONTENT	1
+ HAVE_COLOR_SET	1
++HAVE_CURSCR	1
+ HAVE_DIRENT_H
+ HAVE_ERRNO
+ HAVE_FCNTL_H
+diff --git a/ncurses/tinfo/alloc_entry.c b/ncurses/tinfo/alloc_entry.c
+index 5de09f1..09374d6 100644
+--- a/ncurses/tinfo/alloc_entry.c
++++ b/ncurses/tinfo/alloc_entry.c
+@@ -47,7 +47,7 @@
+ 
+ #include <tic.h>
+ 
+-MODULE_ID("$Id: alloc_entry.c,v 1.60 2017/06/27 23:48:55 tom Exp $")
++MODULE_ID("$Id: alloc_entry.c,v 1.61 2017/08/25 09:09:08 tom Exp $")
+ 
+ #define ABSENT_OFFSET    -1
+ #define CANCELLED_OFFSET -2
+@@ -98,7 +98,7 @@ _nc_save_str(const char *const string)
+     size_t old_next_free = next_free;
+     size_t len;
+ 
+-    if (string == 0)
++    if (!VALID_STRING(string))
+ 	return _nc_save_str("");
+     len = strlen(string) + 1;
+ 
+diff --git a/ncurses/tinfo/comp_parse.c b/ncurses/tinfo/comp_parse.c
+index 34e6216..580d4df 100644
+--- a/ncurses/tinfo/comp_parse.c
++++ b/ncurses/tinfo/comp_parse.c
+@@ -47,7 +47,7 @@
+ 
+ #include <tic.h>
+ 
+-MODULE_ID("$Id: comp_parse.c,v 1.96 2017/04/15 15:36:58 tom Exp $")
++MODULE_ID("$Id: comp_parse.c,v 1.99 2017/08/26 16:15:50 tom Exp $")
+ 
+ static void sanity_check2(TERMTYPE2 *, bool);
+ NCURSES_IMPEXP void NCURSES_API(*_nc_check_termtype2) (TERMTYPE2 *, bool) = sanity_check2;
+@@ -510,9 +510,9 @@ static void
+ fixup_acsc(TERMTYPE2 *tp, int literal)
+ {
+     if (!literal) {
+-	if (acs_chars == 0
+-	    && enter_alt_charset_mode != 0
+-	    && exit_alt_charset_mode != 0)
++	if (acs_chars == ABSENT_STRING
++	    && PRESENT(enter_alt_charset_mode)
++	    && PRESENT(exit_alt_charset_mode))
+ 	    acs_chars = strdup(VT_ACSC);
+     }
+ }
+@@ -568,9 +568,7 @@ sanity_check2(TERMTYPE2 *tp, bool literal)
+     PAIRED(enter_xon_mode, exit_xon_mode);
+     PAIRED(enter_am_mode, exit_am_mode);
+     ANDMISSING(label_off, label_on);
+-#ifdef remove_clock
+     PAIRED(display_clock, remove_clock);
+-#endif
+     ANDMISSING(set_color_pair, initialize_pair);
+ }
+ 
+diff --git a/ncurses/tinfo/comp_scan.c b/ncurses/tinfo/comp_scan.c
+index 40d7f6a..b207257 100644
+--- a/ncurses/tinfo/comp_scan.c
++++ b/ncurses/tinfo/comp_scan.c
+@@ -50,7 +50,7 @@
+ #include <ctype.h>
+ #include <tic.h>
+ 
+-MODULE_ID("$Id: comp_scan.c,v 1.106 2017/04/22 11:41:12 tom Exp $")
++MODULE_ID("$Id: comp_scan.c,v 1.108 2017/08/25 22:57:21 tom Exp $")
+ 
+ /*
+  * Maximum length of string capability we'll accept before raising an error.
+@@ -168,6 +168,8 @@ next_char(void)
+ 	if (result != 0) {
+ 	    FreeAndNull(result);
+ 	    FreeAndNull(pushname);
++	    bufptr = 0;
++	    bufstart = 0;
+ 	    allocated = 0;
+ 	}
+ 	/*
+@@ -222,6 +224,8 @@ next_char(void)
+ 		}
+ 		if ((bufptr = bufstart) != 0) {
+ 		    used = strlen(bufptr);
++		    if (used == 0)
++			return (EOF);
+ 		    while (iswhite(*bufptr)) {
+ 			if (*bufptr == '\t') {
+ 			    _nc_curr_col = (_nc_curr_col | 7) + 1;
+diff --git a/ncurses/tinfo/parse_entry.c b/ncurses/tinfo/parse_entry.c
+index 3fa2f25..bbbfcb2 100644
+--- a/ncurses/tinfo/parse_entry.c
++++ b/ncurses/tinfo/parse_entry.c
+@@ -47,7 +47,7 @@
+ #include <ctype.h>
+ #include <tic.h>
+ 
+-MODULE_ID("$Id: parse_entry.c,v 1.86 2017/06/28 00:53:12 tom Exp $")
++MODULE_ID("$Id: parse_entry.c,v 1.91 2017/08/26 16:13:34 tom Exp $")
+ 
+ #ifdef LINT
+ static short const parametrized[] =
+@@ -180,6 +180,20 @@ _nc_extend_names(ENTRY * entryp, char *name, int token_type)
+ }
+ #endif /* NCURSES_XNAMES */
+ 
++static bool
++valid_entryname(const char *name)
++{
++    bool result = TRUE;
++    int ch;
++    while ((ch = UChar(*name++)) != '\0') {
++	if (ch <= ' ' || ch > '~' || ch == '/') {
++	    result = FALSE;
++	    break;
++	}
++    }
++    return result;
++}
++
+ /*
+  *	int
+  *	_nc_parse_entry(entry, literal, silent)
+@@ -211,6 +225,7 @@ _nc_parse_entry(ENTRY * entryp, int literal, bool silent)
+     int token_type;
+     struct name_table_entry const *entry_ptr;
+     char *ptr, *base;
++    const char *name;
+     bool bad_tc_usage = FALSE;
+ 
+     token_type = _nc_get_token(silent);
+@@ -261,7 +276,12 @@ _nc_parse_entry(ENTRY * entryp, int literal, bool silent)
+      * results in the terminal type getting prematurely set to correspond
+      * to that of the next entry.
+      */
+-    _nc_set_type(_nc_first_name(entryp->tterm.term_names));
++    name = _nc_first_name(entryp->tterm.term_names);
++    if (!valid_entryname(name)) {
++	_nc_warning("invalid entry name \"%s\"", name);
++	name = "invalid";
++    }
++    _nc_set_type(name);
+ 
+     /* check for overly-long names and aliases */
+     for (base = entryp->tterm.term_names; (ptr = strchr(base, '|')) != 0;
+@@ -283,13 +303,24 @@ _nc_parse_entry(ENTRY * entryp, int literal, bool silent)
+ 	bool is_use = (strcmp(_nc_curr_token.tk_name, "use") == 0);
+ 	bool is_tc = !is_use && (strcmp(_nc_curr_token.tk_name, "tc") == 0);
+ 	if (is_use || is_tc) {
++	    if (!VALID_STRING(_nc_curr_token.tk_valstring)
++		|| _nc_curr_token.tk_valstring[0] == '\0') {
++		_nc_warning("missing name for use-clause");
++		continue;
++	    } else if (!valid_entryname(_nc_curr_token.tk_valstring)) {
++		_nc_warning("invalid name for use-clause \"%s\"",
++			    _nc_curr_token.tk_valstring);
++		continue;
++	    } else if (entryp->nuses >= MAX_USES) {
++		_nc_warning("too many use-clauses, ignored \"%s\"",
++			    _nc_curr_token.tk_valstring);
++		continue;
++	    }
+ 	    entryp->uses[entryp->nuses].name = _nc_save_str(_nc_curr_token.tk_valstring);
+ 	    entryp->uses[entryp->nuses].line = _nc_curr_line;
+-	    if (VALID_STRING(entryp->uses[entryp->nuses].name)) {
+-		entryp->nuses++;
+-		if (entryp->nuses > 1 && is_tc) {
+-		    BAD_TC_USAGE
+-		}
++	    entryp->nuses++;
++	    if (entryp->nuses > 1 && is_tc) {
++		BAD_TC_USAGE
+ 	    }
+ 	} else {
+ 	    /* normal token lookup */
+@@ -641,13 +672,6 @@ static const char C_BS[] = "\b";
+ static const char C_HT[] = "\t";
+ 
+ /*
+- * Note that WANTED and PRESENT are not simple inverses!  If a capability
+- * has been explicitly cancelled, it's not considered WANTED.
+- */
+-#define WANTED(s)	((s) == ABSENT_STRING)
+-#define PRESENT(s)	(((s) != ABSENT_STRING) && ((s) != CANCELLED_STRING))
+-
+-/*
+  * This bit of legerdemain turns all the terminfo variable names into
+  * references to locations in the arrays Booleans, Numbers, and Strings ---
+  * precisely what's needed.
+@@ -672,10 +696,10 @@ postprocess_termcap(TERMTYPE2 *tp, bool has_base)
+ 
+     /* if there was a tc entry, assume we picked up defaults via that */
+     if (!has_base) {
+-	if (WANTED(init_3string) && termcap_init2)
++	if (WANTED(init_3string) && PRESENT(termcap_init2))
+ 	    init_3string = _nc_save_str(termcap_init2);
+ 
+-	if (WANTED(reset_2string) && termcap_reset)
++	if (WANTED(reset_2string) && PRESENT(termcap_reset))
+ 	    reset_2string = _nc_save_str(termcap_reset);
+ 
+ 	if (WANTED(carriage_return)) {
+@@ -790,7 +814,7 @@ postprocess_termcap(TERMTYPE2 *tp, bool has_base)
+ 	if (init_tabs != 8 && init_tabs != ABSENT_NUMERIC)
+ 	    _nc_warning("hardware tabs with a width other than 8: %d", init_tabs);
+ 	else {
+-	    if (tab && _nc_capcmp(tab, C_HT))
++	    if (PRESENT(tab) && _nc_capcmp(tab, C_HT))
+ 		_nc_warning("hardware tabs with a non-^I tab string %s",
+ 			    _nc_visbuf(tab));
+ 	    else {
+@@ -867,17 +891,22 @@ postprocess_termcap(TERMTYPE2 *tp, bool has_base)
+ 	     * The magic moment -- copy the mapped key string over,
+ 	     * stripping out padding.
+ 	     */
+-	    for (dp = buf2, bp = tp->Strings[from_ptr->nte_index]; *bp; bp++) {
+-		if (bp[0] == '$' && bp[1] == '<') {
+-		    while (*bp && *bp != '>') {
+-			++bp;
+-		    }
+-		} else
+-		    *dp++ = *bp;
+-	    }
+-	    *dp = '\0';
++	    bp = tp->Strings[from_ptr->nte_index];
++	    if (VALID_STRING(bp)) {
++		for (dp = buf2; *bp; bp++) {
++		    if (bp[0] == '$' && bp[1] == '<') {
++			while (*bp && *bp != '>') {
++			    ++bp;
++			}
++		    } else
++			*dp++ = *bp;
++		}
++		*dp = '\0';
+ 
+-	    tp->Strings[to_ptr->nte_index] = _nc_save_str(buf2);
++		tp->Strings[to_ptr->nte_index] = _nc_save_str(buf2);
++	    } else {
++		tp->Strings[to_ptr->nte_index] = bp;
++	    }
+ 	}
+ 
+ 	/*
+@@ -886,7 +915,7 @@ postprocess_termcap(TERMTYPE2 *tp, bool has_base)
+ 	 * got mapped to kich1 and im to kIC to avoid a collision.
+ 	 * If the description has im but not ic, hack kIC back to kich1.
+ 	 */
+-	if (foundim && WANTED(key_ic) && key_sic) {
++	if (foundim && WANTED(key_ic) && PRESENT(key_sic)) {
+ 	    key_ic = key_sic;
+ 	    key_sic = ABSENT_STRING;
+ 	}
+@@ -938,9 +967,9 @@ postprocess_termcap(TERMTYPE2 *tp, bool has_base)
+ 	    acs_chars = _nc_save_str(buf2);
+ 	    _nc_warning("acsc string synthesized from XENIX capabilities");
+ 	}
+-    } else if (acs_chars == 0
+-	       && enter_alt_charset_mode != 0
+-	       && exit_alt_charset_mode != 0) {
++    } else if (acs_chars == ABSENT_STRING
++	       && PRESENT(enter_alt_charset_mode)
++	       && PRESENT(exit_alt_charset_mode)) {
+ 	acs_chars = _nc_save_str(VT_ACSC);
+     }
+ }
+diff --git a/ncurses/tinfo/strings.c b/ncurses/tinfo/strings.c
+index 393d8e7..10ec6c8 100644
+--- a/ncurses/tinfo/strings.c
++++ b/ncurses/tinfo/strings.c
+@@ -1,5 +1,5 @@
+ /****************************************************************************
+- * Copyright (c) 2000-2007,2012 Free Software Foundation, Inc.              *
++ * Copyright (c) 2000-2012,2017 Free Software Foundation, Inc.              *
+  *                                                                          *
+  * Permission is hereby granted, free of charge, to any person obtaining a  *
+  * copy of this software and associated documentation files (the            *
+@@ -35,8 +35,9 @@
+ **/
+ 
+ #include <curses.priv.h>
++#include <tic.h>
+ 
+-MODULE_ID("$Id: strings.c,v 1.8 2012/02/22 22:34:31 tom Exp $")
++MODULE_ID("$Id: strings.c,v 1.9 2017/08/26 13:16:11 tom Exp $")
+ 
+ /****************************************************************************
+  * Useful string functions (especially for mvcur)
+@@ -105,7 +106,7 @@ _nc_str_copy(string_desc * dst, string_desc * src)
+ NCURSES_EXPORT(bool)
+ _nc_safe_strcat(string_desc * dst, const char *src)
+ {
+-    if (src != 0) {
++    if (PRESENT(src)) {
+ 	size_t len = strlen(src);
+ 
+ 	if (len < dst->s_size) {
+@@ -126,7 +127,7 @@ _nc_safe_strcat(string_desc * dst, const char *src)
+ NCURSES_EXPORT(bool)
+ _nc_safe_strcpy(string_desc * dst, const char *src)
+ {
+-    if (src != 0) {
++    if (PRESENT(src)) {
+ 	size_t len = strlen(src);
+ 
+ 	if (len < dst->s_size) {
+diff --git a/ncurses/tinfo/trim_sgr0.c b/ncurses/tinfo/trim_sgr0.c
+index 4cbcb65..4d92d15 100644
+--- a/ncurses/tinfo/trim_sgr0.c
++++ b/ncurses/tinfo/trim_sgr0.c
+@@ -36,7 +36,7 @@
+ 
+ #include <tic.h>
+ 
+-MODULE_ID("$Id: trim_sgr0.c,v 1.16 2017/04/05 22:33:07 tom Exp $")
++MODULE_ID("$Id: trim_sgr0.c,v 1.17 2017/08/26 14:54:16 tom Exp $")
+ 
+ #undef CUR
+ #define CUR tp->
+@@ -263,7 +263,7 @@ _nc_trim_sgr0(TERMTYPE2 *tp)
+ 	    /*
+ 	     * If rmacs is a substring of sgr(0), remove that chunk.
+ 	     */
+-	    if (exit_alt_charset_mode != 0) {
++	    if (PRESENT(exit_alt_charset_mode)) {
+ 		TR(TRACE_DATABASE, ("scan for rmacs %s", _nc_visbuf(exit_alt_charset_mode)));
+ 		j = strlen(off);
+ 		k = strlen(exit_alt_charset_mode);
+diff --git a/progs/tic.c b/progs/tic.c
+index c5d78e5..6dd4678 100644
+--- a/progs/tic.c
++++ b/progs/tic.c
+@@ -48,7 +48,7 @@
+ #include <parametrized.h>
+ #include <transform.h>
+ 
+-MODULE_ID("$Id: tic.c,v 1.233 2017/07/15 17:40:19 tom Exp $")
++MODULE_ID("$Id: tic.c,v 1.243 2017/08/26 20:56:55 tom Exp $")
+ 
+ #define STDIN_NAME "<stdin>"
+ 
+@@ -62,6 +62,10 @@ static bool showsummary = FALSE;
+ static char **namelst = 0;
+ static const char *to_remove;
+ 
++#if NCURSES_XNAMES
++static bool using_extensions = FALSE;
++#endif
++
+ static void (*save_check_termtype) (TERMTYPE2 *, bool);
+ static void check_termtype(TERMTYPE2 *tt, bool);
+ 
+@@ -850,6 +854,7 @@ main(int argc, char *argv[])
+ 	    /* FALLTHRU */
+ 	case 'x':
+ 	    use_extended_names(TRUE);
++	    using_extensions = TRUE;
+ 	    break;
+ #endif
+ 	default:
+@@ -2405,10 +2410,17 @@ check_conflict(TERMTYPE2 *tp)
+ 	    const char *a = given[j].value;
+ 	    bool first = TRUE;
+ 
++	    if (!VALID_STRING(a))
++		continue;
++
+ 	    for (k = j + 1; given[k].keycode; k++) {
+ 		const char *b = given[k].value;
++
++		if (!VALID_STRING(b))
++		    continue;
+ 		if (check[k])
+ 		    continue;
++
+ 		if (!_nc_capcmp(a, b)) {
+ 		    check[j] = 1;
+ 		    check[k] = 1;
+@@ -2431,6 +2443,67 @@ check_conflict(TERMTYPE2 *tp)
+ 	    if (!first)
+ 		fprintf(stderr, "\n");
+ 	}
++#if NCURSES_XNAMES
++	if (using_extensions) {
++	    /* *INDENT-OFF* */
++	    static struct {
++		const char *xcurses;
++		const char *shifted;
++	    } table[] = {
++		{ "kDC",  NULL },
++		{ "kDN",  "kind" },
++		{ "kEND", NULL },
++		{ "kHOM", NULL },
++		{ "kLFT", NULL },
++		{ "kNXT", NULL },
++		{ "kPRV", NULL },
++		{ "kRIT", NULL },
++		{ "kUP",  "kri" },
++		{ NULL,   NULL },
++	    };
++	    /* *INDENT-ON* */
++
++	    /*
++	     * SVr4 curses defines the "xcurses" names listed above except for
++	     * the special cases in the "shifted" column.  When using these
++	     * names for xterm's extensions, that was confusing, and resulted
++	     * in adding extended capabilities with "2" (shift) suffix.  This
++	     * check warns about unnecessary use of extensions for this quirk.
++	     */
++	    for (j = 0; given[j].keycode; ++j) {
++		const char *find = given[j].name;
++		int value;
++		char ch;
++
++		if (!VALID_STRING(given[j].value))
++		    continue;
++
++		for (k = 0; table[k].xcurses; ++k) {
++		    const char *test = table[k].xcurses;
++		    size_t size = strlen(test);
++
++		    if (!strncmp(find, test, size) && strcmp(find, test)) {
++			switch (sscanf(find + size, "%d%c", &value, &ch)) {
++			case 1:
++			    if (value == 2) {
++				_nc_warning("expected '%s' rather than '%s'",
++					    (table[k].shifted
++					     ? table[k].shifted
++					     : test), find);
++			    } else if (value < 2 || value > 15) {
++				_nc_warning("expected numeric 2..15 '%s'", find);
++			    }
++			    break;
++			default:
++			    _nc_warning("expected numeric suffix for '%s'", find);
++			    break;
++			}
++			break;
++		    }
++		}
++	    }
++	}
++#endif
+ 	free(given);
+ 	free(check);
+     }
+-- 
+2.10.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/ncurses/files/configure-reproducible.patch b/import-layers/yocto-poky/meta/recipes-core/ncurses/files/configure-reproducible.patch
deleted file mode 100644
index 54a8bdc..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/ncurses/files/configure-reproducible.patch
+++ /dev/null
@@ -1,20 +0,0 @@
-"configure" enforces -U for ar flags, breaking deterministic builds.
-The flag was added to fix some vaguely specified "recent POSIX binutil
-build problems" in 2015. 
-
-Upstream-Status: Pending
-Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
-
-diff --git a/configure b/configure
-index 7f31208..aa80911 100755
---- a/configure
-+++ b/configure
-@@ -4428,7 +4428,7 @@ if test "${cf_cv_ar_flags+set}" = set; then
- else
- 
- 	cf_cv_ar_flags=unknown
--	for cf_ar_flags in -curvU -curv curv -crv crv -cqv cqv -rv rv
-+	for cf_ar_flags in -curv curv -crv crv -cqv cqv -rv rv
- 	do
- 
- 		# check if $ARFLAGS already contains this choice
diff --git a/import-layers/yocto-poky/meta/recipes-core/ncurses/files/fix-cflags-mangle.patch b/import-layers/yocto-poky/meta/recipes-core/ncurses/files/fix-cflags-mangle.patch
deleted file mode 100644
index e9447c5..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/ncurses/files/fix-cflags-mangle.patch
+++ /dev/null
@@ -1,18 +0,0 @@
-configure has a piece of logic to detect users "abusing" CC to hold compiler
-flags (which we do).  It also has logic to "correct" this by moving the flags
-from CC to CFLAGS, but the sed only handles a single argument in CC.
-
-Replace the sed with awk to filter out all words that start with a hyphen.
-
-Upstream-Status: Pending
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-diff --git a/configure b/configure
-index 7f31208..1a29cfc 100755
---- a/configure
-+++ b/configure
-@@ -2191,2 +2191,2 @@ echo "$as_me: WARNING: your environment misuses the CC variable to hold CFLAGS/C
--	cf_flags=`echo "$CC" | sed -e 's/^.*[ 	]\(-[^ 	]\)/\1/'`
--	CC=`echo "$CC " | sed -e 's/[ 	]-[^ 	].*$//' -e 's/[ 	]*$//'`
-+	cf_flags=`echo "$CC" | awk  'BEGIN{ORS=" ";RS=" "} /^-.+/ {print $1}'`
-+	CC=`echo "$CC " | awk  'BEGIN{ORS=" ";RS=" "} /^[^-].+/ {print $1}'`
diff --git a/import-layers/yocto-poky/meta/recipes-core/ncurses/files/tic-hang.patch b/import-layers/yocto-poky/meta/recipes-core/ncurses/files/tic-hang.patch
deleted file mode 100644
index cba89d2..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/ncurses/files/tic-hang.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-Upstream-Status: Inappropriate [configuration]
-
-'tic' of some linux distributions (e.g. fedora 11) hang in an infinite
-loop when processing the original file.
-
-Index: ncurses-5.7/misc/terminfo.src
-===================================================================
---- ncurses-5.7.orig/misc/terminfo.src
-+++ ncurses-5.7/misc/terminfo.src
-@@ -3706,12 +3706,11 @@ konsole-xf3x|KDE console window with key
- # The value for kbs reflects local customization rather than the settings used
- # for XFree86 xterm.
- konsole-xf4x|KDE console window with keyboard for XFree86 4.x xterm,
--	kend=\EOF, khome=\EOH, use=konsole+pcfkeys,
--	use=konsole-vt100,
--# Konsole does not implement shifted cursor-keys.
--konsole+pcfkeys|konsole subset of xterm+pcfkeys,
--	kLFT@, kRIT@, kcbt=\E[Z, kind@, kri@, kDN@, kUP@, use=xterm+pcc2,
--	use=xterm+pcf0,
-+	kend=\EOF, kf1=\EOP, kf13=\EO2P, kf14=\EO2Q, kf15=\EO2R,
-+	kf16=\EO2S, kf17=\E[15;2~, kf18=\E[17;2~, kf19=\E[18;2~,
-+	kf2=\EOQ, kf20=\E[19;2~, kf21=\E[20;2~, kf22=\E[21;2~,
-+	kf23=\E[23;2~, kf24=\E[24;2~, kf3=\EOR, kf4=\EOS,
-+	khome=\EOH, use=konsole-vt100,
- # KDE's "vt100" keyboard has no relationship to any terminal that DEC made, but
- # it is still useful for deriving the other entries.
- konsole-vt100|KDE console window with vt100 (sic) keyboard,
diff --git a/import-layers/yocto-poky/meta/recipes-core/ncurses/ncurses_6.0+20161126.bb b/import-layers/yocto-poky/meta/recipes-core/ncurses/ncurses_6.0+20161126.bb
deleted file mode 100644
index ace3108..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/ncurses/ncurses_6.0+20161126.bb
+++ /dev/null
@@ -1,12 +0,0 @@
-require ncurses.inc
-
-SRC_URI += "file://tic-hang.patch \
-            file://fix-cflags-mangle.patch \
-            file://config.cache \
-            file://configure-reproducible.patch \
-"
-# commit id corresponds to the revision in package version
-SRCREV = "3db0bd19cb50e3d9b4f2cf15b7a102fe11302068"
-S = "${WORKDIR}/git"
-EXTRA_OECONF += "--with-abi-version=5"
-UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>\d+(\.\d+)+(\+\d+)*)"
diff --git a/import-layers/yocto-poky/meta/recipes-core/ncurses/ncurses_6.0+20170715.bb b/import-layers/yocto-poky/meta/recipes-core/ncurses/ncurses_6.0+20170715.bb
new file mode 100644
index 0000000..d1da5d1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/ncurses/ncurses_6.0+20170715.bb
@@ -0,0 +1,12 @@
+require ncurses.inc
+
+SRC_URI += "file://0001-tic-hang.patch \
+            file://0002-configure-reproducible.patch \
+            file://config.cache \
+            file://CVE-2017-13732-CVE-2017-13734-CVE-2017-13730-CVE-2017-13729-CVE-2017-13728-CVE-2017-13731.patch \
+"
+# commit id corresponds to the revision in package version
+SRCREV = "52681a6a1a18b4d6eb1a716512d0dd827bd71c87"
+S = "${WORKDIR}/git"
+EXTRA_OECONF += "--with-abi-version=5"
+UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>\d+(\.\d+)+(\+\d+)*)"
diff --git a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf-shell-image.bb b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf-shell-image.bb
index 029547b..0d2b8bf 100644
--- a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf-shell-image.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf-shell-image.bb
@@ -1,10 +1,13 @@
 DESCRIPTION = "boot image with UEFI shell and tools"
 
 # For this image recipe, only the wic format with a
-# single vfat partition makes sense.
+# single vfat partition makes sense. Because we have no
+# boot loader and no rootfs partition, no additional
+# tools are needed for this .wks file.
 IMAGE_FSTYPES_forcevariable = 'wic'
-
 WKS_FILE = "ovmf/ovmf-shell-image.wks"
+WKS_FILE_DEPENDS = ""
+
 inherit image
 
 # We want a minimal image with just ovmf-shell-efi unpacked in it. We
diff --git a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0001-MdeModulePkg-UefiHiiLib-Fix-incorrect-comparison-exp.patch b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0001-MdeModulePkg-UefiHiiLib-Fix-incorrect-comparison-exp.patch
deleted file mode 100644
index fcd7a46..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0001-MdeModulePkg-UefiHiiLib-Fix-incorrect-comparison-exp.patch
+++ /dev/null
@@ -1,39 +0,0 @@
-From 73692710d50da1f421b0e6ddff784ca3135389b3 Mon Sep 17 00:00:00 2001
-From: Dandan Bi <dandan.bi@intel.com>
-Date: Sat, 1 Apr 2017 10:31:14 +0800
-Subject: [PATCH] MdeModulePkg/UefiHiiLib:Fix incorrect comparison expression
-
-Fix the incorrect comparison between pointer and constant zero character.
-
-https://bugzilla.tianocore.org/show_bug.cgi?id=416
-
-V2: The pointer StringPtr points to a string returned
-by ExtractConfig/ExportConfig, if it is NULL, function
-InternalHiiIfrValueAction will return FALSE. So in
-current usage model, the StringPtr can not be NULL before
-using it, so we can add ASSERT here.
-
-Cc: Eric Dong <eric.dong@intel.com>
-Cc: Liming Gao <liming.gao@intel.com>
-Contributed-under: TianoCore Contribution Agreement 1.0
-Signed-off-by: Dandan Bi <dandan.bi@intel.com>
-Reviewed-by: Eric Dong <eric.dong@intel.com>
----
-Upstream-Status: Backport
-
- MdeModulePkg/Library/UefiHiiLib/HiiLib.c | 5 +++--
- 1 file changed, 3 insertions(+), 2 deletions(-)
-
-Index: git/MdeModulePkg/Library/UefiHiiLib/HiiLib.c
-===================================================================
---- git.orig/MdeModulePkg/Library/UefiHiiLib/HiiLib.c
-+++ git/MdeModulePkg/Library/UefiHiiLib/HiiLib.c
-@@ -2180,6 +2180,8 @@ InternalHiiIfrValueAction (
-   }

-   

-   StringPtr = ConfigAltResp;

-+
-+  ASSERT (StringPtr != NULL);

-   

-   while (StringPtr != L'\0') {

-     //

diff --git a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0001-ia32-Dont-use-pie.patch b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0001-ia32-Dont-use-pie.patch
new file mode 100644
index 0000000..5bb418b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0001-ia32-Dont-use-pie.patch
@@ -0,0 +1,46 @@
+From f65e9cc025278387b494c2383c5d9ff3bed98687 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 11 Jun 2017 00:47:24 -0700
+Subject: [PATCH] ia32: Dont use -pie
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ BaseTools/Conf/tools_def.template | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+Index: git/BaseTools/Conf/tools_def.template
+===================================================================
+--- git.orig/BaseTools/Conf/tools_def.template
++++ git/BaseTools/Conf/tools_def.template
+@@ -4336,7 +4336,7 @@ RELEASE_*_*_OBJCOPY_ADDDEBUGFLAG   =
+ NOOPT_*_*_OBJCOPY_ADDDEBUGFLAG     = --add-gnu-debuglink=$(DEBUG_DIR)/$(MODULE_NAME).debug

+ 

+ DEFINE GCC_ALL_CC_FLAGS            = -g -Os -fshort-wchar -fno-builtin -fno-strict-aliasing -Wall -Werror -Wno-array-bounds -include AutoGen.h -fno-common

+-DEFINE GCC_IA32_CC_FLAGS           = DEF(GCC_ALL_CC_FLAGS) -m32 -malign-double -freorder-blocks -freorder-blocks-and-partition -O2 -mno-stack-arg-probe

++DEFINE GCC_IA32_CC_FLAGS           = DEF(GCC_ALL_CC_FLAGS) -m32 -malign-double -freorder-blocks -freorder-blocks-and-partition -O2 -mno-stack-arg-probe -fno-PIE -no-pie

+ DEFINE GCC_X64_CC_FLAGS            = DEF(GCC_ALL_CC_FLAGS) -mno-red-zone -Wno-address -mno-stack-arg-probe

+ DEFINE GCC_IPF_CC_FLAGS            = DEF(GCC_ALL_CC_FLAGS) -minline-int-divide-min-latency

+ DEFINE GCC_ARM_CC_FLAGS            = DEF(GCC_ALL_CC_FLAGS) -mlittle-endian -mabi=aapcs -fno-short-enums -funsigned-char -ffunction-sections -fdata-sections -fomit-frame-pointer -Wno-address -mthumb -mfloat-abi=soft -fno-pic -fno-pie

+@@ -4369,9 +4369,9 @@ DEFINE GCC_ARM_RC_FLAGS            = -I
+ DEFINE GCC_AARCH64_RC_FLAGS        = -I binary -O elf64-littleaarch64 -B aarch64 --rename-section .data=.hii

+ 

+ DEFINE GCC44_ALL_CC_FLAGS            = -g -fshort-wchar -fno-builtin -fno-strict-aliasing -Wall -Werror -Wno-array-bounds -ffunction-sections -fdata-sections -include AutoGen.h -fno-common -DSTRING_ARRAY_NAME=$(BASE_NAME)Strings

+-DEFINE GCC44_IA32_CC_FLAGS           = DEF(GCC44_ALL_CC_FLAGS) -m32 -march=i586 -malign-double -fno-stack-protector -D EFI32 -fno-asynchronous-unwind-tables

++DEFINE GCC44_IA32_CC_FLAGS           = DEF(GCC44_ALL_CC_FLAGS) -m32 -march=i586 -malign-double -fno-stack-protector -D EFI32 -fno-asynchronous-unwind-tables -fno-PIE -no-pie

+ DEFINE GCC44_X64_CC_FLAGS            = DEF(GCC44_ALL_CC_FLAGS) -m64 -fno-stack-protector "-DEFIAPI=__attribute__((ms_abi))" -maccumulate-outgoing-args -mno-red-zone -Wno-address -mcmodel=small -fpie -fno-asynchronous-unwind-tables

+-DEFINE GCC44_IA32_X64_DLINK_COMMON   = -nostdlib -Wl,-n,-q,--gc-sections -z common-page-size=0x20

++DEFINE GCC44_IA32_X64_DLINK_COMMON   = -nostdlib -Wl,-n,-q,--gc-sections -z common-page-size=0x20 -no-pie

+ DEFINE GCC44_IA32_X64_ASLDLINK_FLAGS = DEF(GCC44_IA32_X64_DLINK_COMMON) -Wl,--entry,ReferenceAcpiTable -u ReferenceAcpiTable

+ DEFINE GCC44_IA32_X64_DLINK_FLAGS    = DEF(GCC44_IA32_X64_DLINK_COMMON) -Wl,--entry,$(IMAGE_ENTRY_POINT) -u $(IMAGE_ENTRY_POINT) -Wl,-Map,$(DEST_DIR_DEBUG)/$(BASE_NAME).map

+ DEFINE GCC44_IA32_DLINK2_FLAGS       = -Wl,--defsym=PECOFF_HEADER_SIZE=0x220 DEF(GCC_DLINK2_FLAGS_COMMON)

+@@ -4451,7 +4451,7 @@ DEFINE GCC48_AARCH64_ASLDLINK_FLAGS  = D
+ 

+ DEFINE GCC49_IA32_CC_FLAGS           = DEF(GCC48_IA32_CC_FLAGS)

+ DEFINE GCC49_X64_CC_FLAGS            = DEF(GCC48_X64_CC_FLAGS)

+-DEFINE GCC49_IA32_X64_DLINK_COMMON   = -nostdlib -Wl,-n,-q,--gc-sections -z common-page-size=0x40

++DEFINE GCC49_IA32_X64_DLINK_COMMON   = -nostdlib -Wl,-n,-q,--gc-sections -z common-page-size=0x40 -no-pie

+ DEFINE GCC49_IA32_X64_ASLDLINK_FLAGS = DEF(GCC49_IA32_X64_DLINK_COMMON) -Wl,--entry,ReferenceAcpiTable -u ReferenceAcpiTable

+ DEFINE GCC49_IA32_X64_DLINK_FLAGS    = DEF(GCC49_IA32_X64_DLINK_COMMON) -Wl,--entry,$(IMAGE_ENTRY_POINT) -u $(IMAGE_ENTRY_POINT) -Wl,-Map,$(DEST_DIR_DEBUG)/$(BASE_NAME).map

+ DEFINE GCC49_IA32_DLINK2_FLAGS       = DEF(GCC48_IA32_DLINK2_FLAGS)
diff --git a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0002-ovmf-update-path-to-native-BaseTools.patch b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0002-ovmf-update-path-to-native-BaseTools.patch
index 94029a5..94ae5d4 100644
--- a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0002-ovmf-update-path-to-native-BaseTools.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0002-ovmf-update-path-to-native-BaseTools.patch
@@ -10,6 +10,7 @@
 with the appropriate location before building.
 
 Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com>
+Upstream-Status: Pending
 ---
  OvmfPkg/build.sh | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0003-BaseTools-makefile-adjust-to-build-in-under-bitbake.patch b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0003-BaseTools-makefile-adjust-to-build-in-under-bitbake.patch
index 0fdc278..65b5c16 100644
--- a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0003-BaseTools-makefile-adjust-to-build-in-under-bitbake.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0003-BaseTools-makefile-adjust-to-build-in-under-bitbake.patch
@@ -7,33 +7,33 @@
 using the bitbake native sysroot include and library directories.
 
 Signed-off-by: Ricardo Neri <ricardo.neri@linux.intel.com>
+Upstream-Status: Pending
 ---
  BaseTools/Source/C/Makefiles/header.makefile | 8 ++++----
  1 file changed, 4 insertions(+), 4 deletions(-)
 
-diff --git a/BaseTools/Source/C/Makefiles/header.makefile b/BaseTools/Source/C/Makefiles/header.makefile
-index 821d114..fe0f08b 100644
---- a/BaseTools/Source/C/Makefiles/header.makefile
-+++ b/BaseTools/Source/C/Makefiles/header.makefile
-@@ -44,14 +44,14 @@ ARCH_INCLUDE = -I $(MAKEROOT)/Include/AArch64/
+Index: git/BaseTools/Source/C/Makefiles/header.makefile
+===================================================================
+--- git.orig/BaseTools/Source/C/Makefiles/header.makefile
++++ git/BaseTools/Source/C/Makefiles/header.makefile
+@@ -44,15 +44,15 @@ ARCH_INCLUDE = -I $(MAKEROOT)/Include/AA
  endif
  
  INCLUDE = $(TOOL_INCLUDE) -I $(MAKEROOT) -I $(MAKEROOT)/Include/Common -I $(MAKEROOT)/Include/ -I $(MAKEROOT)/Include/IndustryStandard -I $(MAKEROOT)/Common/ -I .. -I . $(ARCH_INCLUDE) 
 -BUILD_CPPFLAGS = $(INCLUDE) -O2
-+BUILD_CPPFLAGS := $(BUILD_CPPFLAGS) $(INCLUDE) -O2
++BUILD_CPPFLAGS += $(INCLUDE) -O2
  ifeq ($(DARWIN),Darwin)
  # assume clang or clang compatible flags on OS X
 -BUILD_CFLAGS = -MD -fshort-wchar -fno-strict-aliasing -Wall -Werror -Wno-deprecated-declarations -Wno-self-assign -Wno-unused-result -nostdlib -c -g
-+BUILD_CFLAGS := $(BUILD_CFLAGS) -MD -fshort-wchar -fno-strict-aliasing -Wall -Werror -Wno-deprecated-declarations -Wno-self-assign -Wno-unused-result -nostdlib -c -g
++BUILD_CFLAGS += -MD -fshort-wchar -fno-strict-aliasing -Wall -Werror -Wno-deprecated-declarations -Wno-self-assign -Wno-unused-result -nostdlib -c -g
  else
 -BUILD_CFLAGS = -MD -fshort-wchar -fno-strict-aliasing -Wall -Werror -Wno-deprecated-declarations -Wno-unused-result -nostdlib -c -g
-+BUILD_CFLAGS := $(BUILD_CFLAGS) -MD -fshort-wchar -fno-strict-aliasing -Wall -Werror -Wno-deprecated-declarations -Wno-unused-result -nostdlib -c -g
++BUILD_CFLAGS += -MD -fshort-wchar -fno-strict-aliasing -Wall -Werror -Wno-deprecated-declarations -Wno-unused-result -nostdlib -c -g
  endif
 -BUILD_LFLAGS =
-+BUILD_LFLAGS := $(LDFLAGS)
- BUILD_CXXFLAGS =
+-BUILD_CXXFLAGS = -Wno-unused-result
++BUILD_LFLAGS = $(LDFLAGS)
++BUILD_CXXFLAGS += -Wno-unused-result
  
  ifeq ($(ARCH), IA32)
--- 
-2.9.3
-
+ #
diff --git a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0004-ovmf-enable-long-path-file.patch b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0004-ovmf-enable-long-path-file.patch
new file mode 100644
index 0000000..d954fbe
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/0004-ovmf-enable-long-path-file.patch
@@ -0,0 +1,18 @@
+From 032fc6b1f7691bd537fd2a6bd13821fcf3c45e64 Mon Sep 17 00:00:00 2001
+From: Dengke Du <dengke.du@windriver.com>
+Date: Mon, 11 Sep 2017 02:21:55 -0400
+Subject: [PATCH] ovmf: enable long path file
+
+Upstream-Status: Pending
+Signed-off-by: Dengke Du <dengke.du@windriver.com>
+---
+ BaseTools/Source/C/Common/CommonLib.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/BaseTools/Source/C/Common/CommonLib.h b/BaseTools/Source/C/Common/CommonLib.h
+index 2041b89e2d..8116aa2e35 100644
+--- a/BaseTools/Source/C/Common/CommonLib.h
++++ b/BaseTools/Source/C/Common/CommonLib.h
+@@ -22 +22 @@ WITHOUT WARRANTIES OR REPRESENTATIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED.
+-#define MAX_LONG_FILE_PATH 500
++#define MAX_LONG_FILE_PATH 1023
diff --git a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/no-stack-protector-all-archs.patch b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/no-stack-protector-all-archs.patch
new file mode 100644
index 0000000..959b1c6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf/no-stack-protector-all-archs.patch
@@ -0,0 +1,20 @@
+Author: Steve Langasek <steve.langasek@ubuntu.com>
+Description: pass -fno-stack-protector to all GCC toolchains
+ The upstream build rules inexplicably pass -fno-stack-protector only
+ when building for i386 and amd64.  Add this essential argument to the
+ generic rules for gcc 4.4 and later.
+Last-Updated: 2016-04-12
+Upstream-Status: Pending
+Index: git/BaseTools/Conf/tools_def.template
+===================================================================
+--- git.orig/BaseTools/Conf/tools_def.template
++++ git/BaseTools/Conf/tools_def.template
+@@ -4368,7 +4368,7 @@ DEFINE GCC_IPF_RC_FLAGS            = -I
+ DEFINE GCC_ARM_RC_FLAGS            = -I binary -O elf32-littlearm     -B arm     --rename-section .data=.hii
+ DEFINE GCC_AARCH64_RC_FLAGS        = -I binary -O elf64-littleaarch64 -B aarch64 --rename-section .data=.hii
+ 
+-DEFINE GCC44_ALL_CC_FLAGS            = -g -fshort-wchar -fno-builtin -fno-strict-aliasing -Wall -Werror -Wno-array-bounds -ffunction-sections -fdata-sections -include AutoGen.h -fno-common -DSTRING_ARRAY_NAME=$(BASE_NAME)Strings
++DEFINE GCC44_ALL_CC_FLAGS            = -g -fshort-wchar -fno-builtin -fno-strict-aliasing -Wall -Werror -Wno-array-bounds -ffunction-sections -fdata-sections -fno-stack-protector -include AutoGen.h -fno-common -DSTRING_ARRAY_NAME=$(BASE_NAME)Strings
+ DEFINE GCC44_IA32_CC_FLAGS           = DEF(GCC44_ALL_CC_FLAGS) -m32 -march=i586 -malign-double -fno-stack-protector -D EFI32 -fno-asynchronous-unwind-tables -fno-PIE -no-pie
+ DEFINE GCC44_X64_CC_FLAGS            = DEF(GCC44_ALL_CC_FLAGS) -m64 -fno-stack-protector "-DEFIAPI=__attribute__((ms_abi))" -maccumulate-outgoing-args -mno-red-zone -Wno-address -mcmodel=small -fpie -fno-asynchronous-unwind-tables
+ DEFINE GCC44_IA32_X64_DLINK_COMMON   = -nostdlib -Wl,-n,-q,--gc-sections -z common-page-size=0x20
diff --git a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf_git.bb b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf_git.bb
index 9d988e9..fa0d662 100644
--- a/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/ovmf/ovmf_git.bb
@@ -1,5 +1,7 @@
-DESCRIPTION = "OVMF - UEFI firmware for Qemu and KVM"
-HOMEPAGE = "http://sourceforge.net/apps/mediawiki/tianocore/index.php?title=OVMF"
+SUMMARY = "OVMF - UEFI firmware for Qemu and KVM"
+DESCRIPTION = "OVMF is an EDK II based project to enable UEFI support for \
+Virtual Machines. OVMF contains sample UEFI firmware for QEMU and KVM"
+HOMEPAGE = "https://github.com/tianocore/tianocore.github.io/wiki/OVMF"
 LICENSE = "BSD"
 LICENSE_class-target = "${@bb.utils.contains('PACKAGECONFIG', 'secureboot', 'BSD & OpenSSL', 'BSD', d)}"
 LIC_FILES_CHKSUM = "file://OvmfPkg/License.txt;md5=343dc88e82ff33d042074f62050c3496"
@@ -11,20 +13,25 @@
 PACKAGECONFIG[secureboot] = ",,,"
 
 SRC_URI = "git://github.com/tianocore/edk2.git;branch=master \
+	file://0001-ia32-Dont-use-pie.patch \
 	file://0002-ovmf-update-path-to-native-BaseTools.patch \
 	file://0003-BaseTools-makefile-adjust-to-build-in-under-bitbake.patch \
+	file://0004-ovmf-enable-long-path-file.patch \
 	file://VfrCompile-increase-path-length-limit.patch \
-        file://0001-MdeModulePkg-UefiHiiLib-Fix-incorrect-comparison-exp.patch \
+	file://no-stack-protector-all-archs.patch \
         "
+UPSTREAM_VERSION_UNKNOWN = "1"
+
+OPENSSL_RELEASE = "openssl-1.1.0e"
 
 SRC_URI_append_class-target = " \
-	${@bb.utils.contains('PACKAGECONFIG', 'secureboot', 'http://www.openssl.org/source/openssl-1.0.2j.tar.gz;name=openssl;subdir=${S}/CryptoPkg/Library/OpensslLib', '', d)} \
+	${@bb.utils.contains('PACKAGECONFIG', 'secureboot', 'http://www.openssl.org/source/${OPENSSL_RELEASE}.tar.gz;name=openssl;subdir=${S}/CryptoPkg/Library/OpensslLib', '', d)} \
 	file://0007-OvmfPkg-EnrollDefaultKeys-application-for-enrolling-.patch \
 "
 
-SRCREV="4575a602ca6072ee9d04150b38bfb143cbff8588"
-SRC_URI[openssl.md5sum] = "96322138f0b69e61b7212bc53d5e912b"
-SRC_URI[openssl.sha256sum] = "e7aff292be21c259c6af26469c7a9b3ba26e9abaaffd325e3dccc9785256c431"
+SRCREV="ec4910cd3336565fdb61dafdd9ec4ae7a6160ba3"
+SRC_URI[openssl.md5sum] = "51c42d152122e474754aea96f66928c6"
+SRC_URI[openssl.sha256sum] = "57be8618979d80c910728cfc99369bf97b2a1abd8f366ab6ebdee8975ad3874c"
 
 inherit deploy
 
@@ -144,7 +151,7 @@
 
 do_compile_class-target() {
     export LFLAGS="${LDFLAGS}"
-    PARALLEL_JOBS="${@ '${PARALLEL_MAKE}'.replace('-j', '-n')}"
+    PARALLEL_JOBS="${@ '${PARALLEL_MAKE}'.replace('-j', '-n ')}"
     OVMF_ARCH="X64"
     if [ "${TARGET_ARCH}" != "x86_64" ] ; then
         OVMF_ARCH="IA32"
@@ -186,10 +193,7 @@
         # building with Secure Boot enabled.
         bbnote "Building with Secure Boot."
         rm -rf ${S}/Build/Ovmf$OVMF_DIR_SUFFIX
-        if ! [ -f ${S}/CryptoPkg/Library/OpensslLib/openssl-*/edk2-patch-applied ]; then
-            ( cd ${S}/CryptoPkg/Library/OpensslLib/openssl-* && patch -p1 <$(echo ../EDKII_openssl-*.patch) && touch edk2-patch-applied )
-        fi
-        ( cd ${S}/CryptoPkg/Library/OpensslLib/ && ./Install.sh )
+        ln -sf ${OPENSSL_RELEASE} ${S}/CryptoPkg/Library/OpensslLib/openssl
         ${S}/OvmfPkg/build.sh $PARALLEL_JOBS -a $OVMF_ARCH -b RELEASE -t ${FIXED_GCCVER} ${OVMF_SECURE_BOOT_FLAGS}
         ln ${build_dir}/FV/OVMF.fd ${WORKDIR}/ovmf/ovmf.secboot.fd
         ln ${build_dir}/FV/OVMF_CODE.fd ${WORKDIR}/ovmf/ovmf.secboot.code.fd
@@ -241,3 +245,4 @@
 addtask do_deploy after do_compile before do_build
 
 BBCLASSEXTEND = "native"
+TOOLCHAIN = "gcc"
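The PARALLEL_JOBS line changed in do_compile_class-target above only gains a trailing space in the replacement string, but that is what lets it cope with both spellings of PARALLEL_MAKE. A minimal sketch of the substitution, assuming example values of "-j 8" and "-j8" (OVMF's build.sh takes the job count as "-n <count>"):

    # Hypothetical PARALLEL_MAKE values; bitbake may set it with or without a space.
    for parallel_make in ("-j 8", "-j8"):
        old = parallel_make.replace('-j', '-n')     # "-n 8"  / "-n8"
        new = parallel_make.replace('-j', '-n ')    # "-n  8" / "-n 8"
        print(parallel_make, "->", old, "|", new)
    # Without the space, "-j8" became "-n8", which build.sh may not split into an
    # option and its value; the doubled space in "-n  8" is harmless after shell
    # word splitting.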
diff --git a/import-layers/yocto-poky/meta/recipes-core/packagegroups/nativesdk-packagegroup-sdk-host.bb b/import-layers/yocto-poky/meta/recipes-core/packagegroups/nativesdk-packagegroup-sdk-host.bb
index 2ca6392..aee4a03 100644
--- a/import-layers/yocto-poky/meta/recipes-core/packagegroups/nativesdk-packagegroup-sdk-host.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/packagegroups/nativesdk-packagegroup-sdk-host.bb
@@ -4,7 +4,6 @@
 
 SUMMARY = "Host packages for the standalone SDK or external toolchain"
 PR = "r12"
-LICENSE = "MIT"
 
 inherit packagegroup nativesdk
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-base.bb b/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-base.bb
index 0069e3e..f9e6e2e 100644
--- a/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-base.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-base.bb
@@ -204,7 +204,7 @@
 SUMMARY_packagegroup-base-bluetooth = "Bluetooth support"
 RDEPENDS_packagegroup-base-bluetooth = "\
     ${BLUEZ} \
-    ${@bb.utils.contains('COMBINED_FEATURES', 'alsa', 'libasound-module-bluez', '',d)} \
+    ${@bb.utils.contains('COMBINED_FEATURES', 'alsa', bb.utils.contains('BLUEZ', 'bluez4', 'libasound-module-bluez', '', d), '',d)} \
     "
 
 RRECOMMENDS_packagegroup-base-bluetooth = "\
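The bluetooth RDEPENDS change above nests one bb.utils.contains() call inside another, so libasound-module-bluez is pulled in only when 'alsa' is in COMBINED_FEATURES and BLUEZ is set to 'bluez4' (presumably because that ALSA plugin only exists for BlueZ 4). A rough sketch of the logic with a simplified contains() that takes values directly rather than a datastore; the sample values are made up:

    def contains(value, item, truevalue, falsevalue):
        # Simplified stand-in for bb.utils.contains(): the real helper looks the
        # variable up in the bitbake datastore and accepts several check values.
        return truevalue if item in (value or "").split() else falsevalue

    combined_features = "alsa bluetooth usbhost"      # example COMBINED_FEATURES
    for bluez in ("bluez4", "bluez5"):
        extra = contains(combined_features, "alsa",
                         contains(bluez, "bluez4", "libasound-module-bluez", ""),
                         "")
        print(bluez, "->", repr(extra))               # only bluez4 adds the plugin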
diff --git a/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-core-sdk.bb b/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-core-sdk.bb
index 7d6d414..af0ce20 100644
--- a/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-core-sdk.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-core-sdk.bb
@@ -33,7 +33,6 @@
 SANITIZERS_powerpc64 = ""
 SANITIZERS_sparc = ""
 SANITIZERS_libc-musl = ""
-SANITIZERS_libc-uclibc = ""
 
 RRECOMMENDS_packagegroup-core-sdk = "\
     libgomp \
diff --git a/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-core-tools-profile.bb b/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-core-tools-profile.bb
index 946c947..a8e47da 100644
--- a/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-core-tools-profile.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-core-tools-profile.bb
@@ -31,23 +31,15 @@
 PERF = "perf"
 PERF_libc-musl = ""
 
-# systemtap needs elfutils which is not fully buildable on uclibc
-# hence we exclude it from uclibc based builds
+# systemtap needs elfutils which is not fully buildable on some arches/libcs
 SYSTEMTAP = "systemtap"
-SYSTEMTAP_libc-uclibc = ""
 SYSTEMTAP_libc-musl = ""
 SYSTEMTAP_mipsarch = ""
 SYSTEMTAP_nios2 = ""
 SYSTEMTAP_aarch64 = ""
 
-# lttng-ust uses sched_getcpu() which is not there on uclibc
-# for some of the architectures it can be patched to call the
-# syscall directly but for x86_64 __NR_getcpu is a vsyscall
-# which means we can not use syscall() to call it. So we ignore
-# it for x86_64/uclibc
-
+# lttng-ust uses sched_getcpu() which is not there on some platforms.
 LTTNGUST = "lttng-ust"
-LTTNGUST_libc-uclibc = ""
 LTTNGUST_libc-musl = ""
 
 LTTNGTOOLS = "lttng-tools"
@@ -60,13 +52,13 @@
 # valgrind does not work on the following configurations/architectures
 
 VALGRIND = "valgrind"
-VALGRIND_libc-uclibc = ""
 VALGRIND_libc-musl = ""
 VALGRIND_mipsarch = ""
 VALGRIND_nios2 = ""
 VALGRIND_armv4 = ""
 VALGRIND_armv5 = ""
 VALGRIND_armv6 = ""
+VALGRIND_armeb = ""
 VALGRIND_aarch64 = ""
 VALGRIND_linux-gnux32 = ""
 
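The VALGRIND_armeb line added above follows the same pattern as the surrounding assignments: the base value names the package and a machine- or libc-specific override blanks it, so valgrind simply drops out of the profiling tools on configurations where it does not build. A rough model of that resolution, simplified from bitbake's OVERRIDES handling (the override lists are invented for illustration):

    def resolve(base, blanked_overrides, active_overrides):
        # Later entries stand in for higher-priority overrides in this sketch.
        value = base
        for override in active_overrides:
            if override in blanked_overrides:
                value = ""
        return value

    blanked = {"libc-musl", "mipsarch", "nios2", "armv4", "armv5", "armv6",
               "armeb", "aarch64", "linux-gnux32"}
    print(repr(resolve("valgrind", blanked, ["linux", "arm", "armeb"])))  # ''
    print(repr(resolve("valgrind", blanked, ["linux", "x86-64"])))        # 'valgrind'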
diff --git a/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-go-cross-canadian.bb b/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-go-cross-canadian.bb
new file mode 100644
index 0000000..3daace1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-go-cross-canadian.bb
@@ -0,0 +1,12 @@
+SUMMARY = "Host SDK package for Go cross canadian toolchain"
+PN = "packagegroup-go-cross-canadian-${MACHINE}"
+
+inherit cross-canadian packagegroup
+
+PACKAGEGROUP_DISABLE_COMPLEMENTARY = "1"
+
+GO = "go-cross-canadian-${TRANSLATED_TARGET_ARCH}"
+
+RDEPENDS_${PN} = " \
+    ${@all_multilib_tune_values(d, 'GO')} \
+"
diff --git a/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-go-sdk-target.bb b/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-go-sdk-target.bb
new file mode 100644
index 0000000..3e19077
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-go-sdk-target.bb
@@ -0,0 +1,8 @@
+SUMMARY = "Target packages for the Go SDK"
+
+inherit packagegroup goarch
+
+RDEPENDS_${PN} = " \
+    go-runtime \
+    go-runtime-dev \
+"
diff --git a/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-self-hosted.bb b/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-self-hosted.bb
index c1bbdfc..ff42866 100644
--- a/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-self-hosted.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/packagegroups/packagegroup-self-hosted.bb
@@ -42,7 +42,6 @@
     mc-fish \
     mc-helpers \
     mc-helpers-perl \
-    mc-helpers-python \
     parted \
     ${PSEUDO} \
     screen \
@@ -141,17 +140,19 @@
     nfs-utils \
     nfs-utils-client \
     openssl \
+    openssh-scp \
     openssh-sftp-server \
+    openssh-ssh \
     opkg \
     opkg-utils \
     patch \
     perl \
     perl-dev \
+    perl-misc \
     perl-modules \
     perl-pod \
     python \
     python-modules \
-    python-git \
     python3 \
     python3-modules \
     python3-git \
diff --git a/import-layers/yocto-poky/meta/recipes-core/psplash/files/psplash-init b/import-layers/yocto-poky/meta/recipes-core/psplash/files/psplash-init
index 66c85e9..0bce1de 100755
--- a/import-layers/yocto-poky/meta/recipes-core/psplash/files/psplash-init
+++ b/import-layers/yocto-poky/meta/recipes-core/psplash/files/psplash-init
@@ -7,6 +7,12 @@
 # Default-Stop:
 ### END INIT INFO
 
+if [ ! -e /dev/fb0 ]; then
+    echo "Framebuffer /dev/fb0 not detected"
+    echo "Boot splashscreen disabled"
+    exit 0;
+fi
+
 read CMDLINE < /proc/cmdline
 for x in $CMDLINE; do
         case $x in
diff --git a/import-layers/yocto-poky/meta/recipes-core/psplash/psplash_git.bb b/import-layers/yocto-poky/meta/recipes-core/psplash/psplash_git.bb
index 44297e1..3b7f818 100644
--- a/import-layers/yocto-poky/meta/recipes-core/psplash/psplash_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/psplash/psplash_git.bb
@@ -5,13 +5,14 @@
 LICENSE = "GPLv2+"
 LIC_FILES_CHKSUM = "file://psplash.h;beginline=1;endline=16;md5=840fb2356b10a85bed78dd09dc7745c6"
 
-SRCREV = "88343ad23c90fa1dd8d79ac0d784a691aa0c6d2b"
+SRCREV = "2015f7073e98dd9562db0936a254af5ef56356cf"
 PV = "0.1+git${SRCPV}"
 PR = "r15"
 
 SRC_URI = "git://git.yoctoproject.org/${BPN} \
            file://psplash-init \
            ${SPLASH_IMAGES}"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 SPLASH_IMAGES = "file://psplash-poky-img.h;outsuffix=default"
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/readline/readline.inc b/import-layers/yocto-poky/meta/recipes-core/readline/readline.inc
index 1a0a155..e966522 100644
--- a/import-layers/yocto-poky/meta/recipes-core/readline/readline.inc
+++ b/import-layers/yocto-poky/meta/recipes-core/readline/readline.inc
@@ -4,6 +4,7 @@
 additional functions to maintain a list of previously-entered command lines, to recall and perhaps reedit those   \
 lines, and perform csh-like history expansion on previous commands."
 SECTION = "libs"
+HOMEPAGE = "https://cnswww.cns.cwru.edu/php/chet/readline/rltop.html"
 
 # GPLv2+ (< 6.0), GPLv3+ (>= 6.0)
 LICENSE = "GPLv3+"
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-boot_234.bb b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-boot_234.bb
new file mode 100644
index 0000000..7b18b25
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-boot_234.bb
@@ -0,0 +1,43 @@
+require systemd.inc
+FILESEXTRAPATHS =. "${FILE_DIRNAME}/systemd:"
+
+DEPENDS = "intltool-native libcap util-linux gnu-efi gperf-native"
+
+SRC_URI += "file://0007-use-lnr-wrapper-instead-of-looking-for-relative-opti.patch"
+
+inherit autotools pkgconfig gettext
+inherit deploy
+
+EFI_CC ?= "${CC}"
+# Man pages are packaged through the main systemd recipe
+EXTRA_OECONF = " --enable-gnuefi \
+                 --with-efi-includedir=${STAGING_INCDIR} \
+                 --with-efi-ldsdir=${STAGING_LIBDIR} \
+                 --with-efi-libdir=${STAGING_LIBDIR} \
+                 --disable-manpages \
+                 EFI_CC='${EFI_CC}' \
+               "
+
+# Imported from the old gummiboot recipe
+TUNE_CCARGS_remove = "-mfpmath=sse"
+COMPATIBLE_HOST = "(x86_64.*|i.86.*)-linux"
+COMPATIBLE_HOST_linux-gnux32 = "null"
+
+do_compile() {
+	SYSTEMD_BOOT_EFI_ARCH="ia32"
+	if [ "${TARGET_ARCH}" = "x86_64" ]; then
+		SYSTEMD_BOOT_EFI_ARCH="x64"
+	fi
+
+	oe_runmake systemd-boot${SYSTEMD_BOOT_EFI_ARCH}.efi
+}
+
+do_install() {
+	# Bypass systemd installation with a NOP
+	:
+}
+
+do_deploy () {
+	install ${B}/systemd-boot*.efi ${DEPLOYDIR}
+}
+addtask deploy before do_build after do_compile
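The new systemd-boot recipe above builds a single EFI binary whose name depends on the target architecture, and COMPATIBLE_HOST already restricts it to x86 targets, so only two outcomes are possible. The same selection as a small sketch:

    def systemd_boot_efi_binary(target_arch):
        # Mirrors do_compile(): x86_64 builds systemd-bootx64.efi, the remaining
        # allowed machines (i.86) fall back to systemd-bootia32.efi.
        arch = "x64" if target_arch == "x86_64" else "ia32"
        return "systemd-boot%s.efi" % arch

    print(systemd_boot_efi_binary("x86_64"))   # systemd-bootx64.efi
    print(systemd_boot_efi_binary("i586"))     # systemd-bootia32.efi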
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-compat-units.bb b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-compat-units.bb
index fe9a521..d228a51 100644
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-compat-units.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-compat-units.bb
@@ -1,5 +1,5 @@
 SUMMARY = "Enhances systemd compatibility with existing SysVinit scripts"
-
+HOMEPAGE = "http://www.freedesktop.org/wiki/Software/systemd"
 LICENSE = "MIT"
 
 PR = "r29"
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-machine-units_1.0.bb b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-machine-units_1.0.bb
new file mode 100644
index 0000000..02756f4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-machine-units_1.0.bb
@@ -0,0 +1,13 @@
+SUMMARY = "Machine specific systemd units"
+
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
+
+PACKAGE_ARCH = "${MACHINE_ARCH}"
+
+PR = "r19"
+
+inherit systemd
+SYSTEMD_SERVICE_${PN} = ""
+
+ALLOW_EMPTY_${PN} = "1"
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-serialgetty.bb b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-serialgetty.bb
index 768b130..d934716 100644
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-serialgetty.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-serialgetty.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Serial terminal support for systemd"
+HOMEPAGE = "https://www.freedesktop.org/wiki/Software/systemd/"
 LICENSE = "GPLv2+"
 LIC_FILES_CHKSUM = "file://${COREBASE}/meta/files/common-licenses/GPL-2.0;md5=801f80980d171dd6425610833a22dbe6"
 
@@ -38,8 +39,6 @@
 	fi
 }
 
-RDEPENDS_${PN} = "systemd"
-
 # This is a machine specific file
 FILES_${PN} = "${systemd_unitdir}/system/*.service ${sysconfdir}"
 PACKAGE_ARCH = "${MACHINE_ARCH}"
@@ -49,3 +48,5 @@
     if not bb.utils.contains ('DISTRO_FEATURES', 'systemd', True, False, d):
         raise bb.parse.SkipPackage("'systemd' not in DISTRO_FEATURES")
 }
+
+ALLOW_EMPTY_${PN} = "1"
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-systemctl/systemctl b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-systemctl/systemctl
index efad14c..6e5a1b7 100755
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-systemctl/systemctl
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd-systemctl/systemctl
@@ -108,7 +108,7 @@
 
 	# If any new unit types are added to systemd they should be added
 	# to this regular expression.
-	unit_types_re='\.\(service\|socket\|device\|mount\|automount\|swap\|target\|path\|timer\|snapshot\)\s*$'
+	unit_types_re='\.\(service\|socket\|device\|mount\|automount\|swap\|target\|target\.wants\|path\|timer\|snapshot\)\s*$'
 	if [ "$action" = "preset" ]; then
 		action=`egrep -sh  $service $ROOT/etc/systemd/user-preset/*.preset | cut -f1 -d' '`
 		if [ -z "$action" ]; then
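The systemctl wrapper change above extends unit_types_re so that names ending in .target.wants (the directories used to enable wanted units) are recognised alongside the plain unit suffixes. The script itself uses a POSIX basic regular expression; a rough Python translation, for illustration only:

    import re

    unit_types_re = re.compile(
        r'\.(service|socket|device|mount|automount|swap|target|target\.wants'
        r'|path|timer|snapshot)\s*$')

    for name in ("getty@tty1.service", "multi-user.target",
                 "multi-user.target.wants", "journald.conf"):
        print(name, "->", bool(unit_types_re.search(name)))
    # Only the target\.wants alternative added above makes the third name match;
    # the pre-change expression skipped .target.wants directories entirely.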
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd.inc b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd.inc
index 29e0be6..d99d150 100644
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd.inc
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd.inc
@@ -14,10 +14,8 @@
 LIC_FILES_CHKSUM = "file://LICENSE.GPL2;md5=751419260aa954499f7abaabaa882bbe \
                     file://LICENSE.LGPL2.1;md5=4fbd65380cdd255951079008b364516c"
 
-SRCREV = "a1e2ef7ec912902d8142e7cb5830cbfb47dba86c"
+SRCREV = "c1edab7ad1e7ccc9be693bedfd464cd1cbffb395"
 
 SRC_URI = "git://github.com/systemd/systemd.git;protocol=git"
 
 S = "${WORKDIR}/git"
-
-LDFLAGS_append_libc-uclibc = " -lrt -lssp_nonshared -lssp "
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-Define-_PATH_WTMPX-and-_PATH_UTMPX-if-not-defined.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-Define-_PATH_WTMPX-and-_PATH_UTMPX-if-not-defined.patch
new file mode 100644
index 0000000..35599d4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-Define-_PATH_WTMPX-and-_PATH_UTMPX-if-not-defined.patch
@@ -0,0 +1,43 @@
+From 3ca5326485cb19e775af6de615c17be66e44e472 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 24 Oct 2017 23:08:24 -0700
+Subject: [PATCH] Define _PATH_WTMPX and _PATH_UTMPX if not defined
+
+Musl needs these defines
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/shared/utmp-wtmp.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
+
+diff --git a/src/shared/utmp-wtmp.c b/src/shared/utmp-wtmp.c
+index 9750dcd81..bd55d74a1 100644
+--- a/src/shared/utmp-wtmp.c
++++ b/src/shared/utmp-wtmp.c
+@@ -27,6 +27,7 @@
+ #include <sys/time.h>
+ #include <sys/utsname.h>
+ #include <unistd.h>
++#include <utmp.h>
+ #include <utmpx.h>
+ 
+ #include "alloc-util.h"
+@@ -41,6 +42,13 @@
+ #include "util.h"
+ #include "utmp-wtmp.h"
+ 
++#if defined _PATH_UTMP && !defined _PATH_UTMPX
++# define _PATH_UTMPX _PATH_UTMP
++#endif
++#if defined _PATH_WTMP && !defined _PATH_WTMPX
++# define _PATH_WTMPX _PATH_WTMP
++#endif
++
+ int utmp_get_runlevel(int *runlevel, int *previous) {
+         struct utmpx *found, lookup = { .ut_type = RUN_LVL };
+         int r;
+-- 
+2.14.3
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-Use-uintmax_t-for-handling-rlim_t.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-Use-uintmax_t-for-handling-rlim_t.patch
new file mode 100644
index 0000000..779dc78
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-Use-uintmax_t-for-handling-rlim_t.patch
@@ -0,0 +1,89 @@
+From b2d4171c6e521cf1e70331fb769234d63a4a6d44 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 27 Oct 2017 13:00:41 -0700
+Subject: [PATCH] Use uintmax_t for handling rlim_t
+
+PRIu{32,64} is not the right format to represent rlim_t type
+therefore use %ju and typecast the rlim_t variables to
+uintmax_t.
+
+Fixes portability errors like
+
+execute.c:3446:36: error: format '%lu' expects argument of type 'long unsigned int', but argument 5 has type 'rlim_t {aka long long unsigned int}' [-Werror=format=]
+|                          fprintf(f, "%s%s: " RLIM_FMT "\n",
+|                                     ^~~~~~~~
+|                                  prefix, rlimit_to_string(i), c->rlimit[i]->rlim_max);
+|                                                               ~~~~~~~~~~~~~~~~~~~~~~
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Submitted [https://github.com/systemd/systemd/pull/7199]
+
+ src/basic/format-util.h | 8 --------
+ src/basic/rlimit-util.c | 8 ++++----
+ src/core/execute.c      | 8 ++++----
+ 3 files changed, 8 insertions(+), 16 deletions(-)
+
+diff --git a/src/basic/format-util.h b/src/basic/format-util.h
+index ae42a8f89..144249cd6 100644
+--- a/src/basic/format-util.h
++++ b/src/basic/format-util.h
+@@ -60,14 +60,6 @@
+ #  define PRI_TIMEX "li"
+ #endif
+ 
+-#if SIZEOF_RLIM_T == 8
+-#  define RLIM_FMT "%" PRIu64
+-#elif SIZEOF_RLIM_T == 4
+-#  define RLIM_FMT "%" PRIu32
+-#else
+-#  error Unknown rlim_t size
+-#endif
+-
+ #if SIZEOF_DEV_T == 8
+ #  define DEV_FMT "%" PRIu64
+ #elif SIZEOF_DEV_T == 4
+diff --git a/src/basic/rlimit-util.c b/src/basic/rlimit-util.c
+index ca834df62..41fcebb74 100644
+--- a/src/basic/rlimit-util.c
++++ b/src/basic/rlimit-util.c
+@@ -284,13 +284,13 @@ int rlimit_format(const struct rlimit *rl, char **ret) {
+         if (rl->rlim_cur >= RLIM_INFINITY && rl->rlim_max >= RLIM_INFINITY)
+                 s = strdup("infinity");
+         else if (rl->rlim_cur >= RLIM_INFINITY)
+-                (void) asprintf(&s, "infinity:" RLIM_FMT, rl->rlim_max);
++                (void) asprintf(&s, "infinity:%ju", (uintmax_t)rl->rlim_max);
+         else if (rl->rlim_max >= RLIM_INFINITY)
+-                (void) asprintf(&s, RLIM_FMT ":infinity", rl->rlim_cur);
++                (void) asprintf(&s, "%ju:infinity", (uintmax_t)rl->rlim_cur);
+         else if (rl->rlim_cur == rl->rlim_max)
+-                (void) asprintf(&s, RLIM_FMT, rl->rlim_cur);
++                (void) asprintf(&s, "%ju", (uintmax_t)rl->rlim_cur);
+         else
+-                (void) asprintf(&s, RLIM_FMT ":" RLIM_FMT, rl->rlim_cur, rl->rlim_max);
++                (void) asprintf(&s, "%ju:%ju", (uintmax_t)rl->rlim_cur, (uintmax_t)rl->rlim_max);
+ 
+         if (!s)
+                 return -ENOMEM;
+diff --git a/src/core/execute.c b/src/core/execute.c
+index d72e5bf08..d38946002 100644
+--- a/src/core/execute.c
++++ b/src/core/execute.c
+@@ -3443,10 +3443,10 @@ void exec_context_dump(ExecContext *c, FILE* f, const char *prefix) {
+ 
+         for (i = 0; i < RLIM_NLIMITS; i++)
+                 if (c->rlimit[i]) {
+-                        fprintf(f, "%s%s: " RLIM_FMT "\n",
+-                                prefix, rlimit_to_string(i), c->rlimit[i]->rlim_max);
+-                        fprintf(f, "%s%sSoft: " RLIM_FMT "\n",
+-                                prefix, rlimit_to_string(i), c->rlimit[i]->rlim_cur);
++                        fprintf(f, "%s%s: %ju\n",
++                                prefix, rlimit_to_string(i), (uintmax_t)c->rlimit[i]->rlim_max);
++                        fprintf(f, "%s%sSoft: %ju\n",
++                                prefix, rlimit_to_string(i), (uintmax_t)c->rlimit[i]->rlim_cur);
+                 }
+ 
+         if (c->ioprio_set) {
+-- 
+2.14.3
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-add-fallback-parse_printf_format-implementation.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-add-fallback-parse_printf_format-implementation.patch
new file mode 100644
index 0000000..e2f7458
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-add-fallback-parse_printf_format-implementation.patch
@@ -0,0 +1,433 @@
+From 0933ca6251808f856b92b0ce8da8696d5febc333 Mon Sep 17 00:00:00 2001
+From: Emil Renner Berthing <systemd@esmil.dk>
+Date: Mon, 23 Oct 2017 10:41:39 -0700
+Subject: [PATCH 01/12] add fallback parse_printf_format implementation
+
+Signed-off-by: Emil Renner Berthing <systemd@esmil.dk>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ Makefile.am                     |   4 +
+ configure.ac                    |   2 +
+ src/basic/parse-printf-format.c | 273 ++++++++++++++++++++++++++++++++++++++++
+ src/basic/parse-printf-format.h |  57 +++++++++
+ src/basic/stdio-util.h          |   2 +-
+ src/journal/journal-send.c      |   2 +-
+ 6 files changed, 338 insertions(+), 2 deletions(-)
+ create mode 100644 src/basic/parse-printf-format.c
+ create mode 100644 src/basic/parse-printf-format.h
+
+diff --git a/Makefile.am b/Makefile.am
+index 692d7bb95..3cc8f3451 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -997,6 +997,10 @@ libbasic_la_SOURCES = \
+ 	src/basic/journal-importer.h \
+ 	src/basic/journal-importer.c
+ 
++if !HAVE_PRINTF_H
++libbasic_la_SOURCES += src/basic/parse-printf-format.c
++endif
++
+ nodist_libbasic_la_SOURCES = \
+ 	src/basic/errno-from-name.h \
+ 	src/basic/errno-to-name.h \
+diff --git a/configure.ac b/configure.ac
+index 60e7df5ee..efcdc6c16 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -308,8 +308,10 @@ AC_CHECK_HEADERS([uchar.h], [], [])
+ AC_CHECK_HEADERS([sys/capability.h], [], [AC_MSG_ERROR([*** POSIX caps headers not found])])
+ AC_CHECK_HEADERS([linux/btrfs.h], [], [])
+ AC_CHECK_HEADERS([linux/memfd.h], [], [])
++AC_CHECK_HEADERS([printf.h], [], [])
+ AC_CHECK_HEADERS([linux/vm_sockets.h], [], [], [#include <sys/socket.h>])
+ 
++AM_CONDITIONAL(HAVE_PRINTF_H, [test "x$ac_cv_header_printf_h" = xyes])
+ # unconditionally pull-in librt with old glibc versions
+ AC_SEARCH_LIBS([clock_gettime], [rt], [], [])
+ 
+diff --git a/src/basic/parse-printf-format.c b/src/basic/parse-printf-format.c
+new file mode 100644
+index 000000000..49437e544
+--- /dev/null
++++ b/src/basic/parse-printf-format.c
+@@ -0,0 +1,273 @@
++/*-*- Mode: C; c-basic-offset: 8; indent-tabs-mode: nil -*-*/
++
++/***
++  This file is part of systemd.
++
++  Copyright 2014 Emil Renner Berthing <systemd@esmil.dk>
++
++  With parts from the musl C library
++  Copyright 2005-2014 Rich Felker, et al.
++
++  systemd is free software; you can redistribute it and/or modify it
++  under the terms of the GNU Lesser General Public License as published by
++  the Free Software Foundation; either version 2.1 of the License, or
++  (at your option) any later version.
++
++  systemd is distributed in the hope that it will be useful, but
++  WITHOUT ANY WARRANTY; without even the implied warranty of
++  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
++  Lesser General Public License for more details.
++
++  You should have received a copy of the GNU Lesser General Public License
++  along with systemd; If not, see <http://www.gnu.org/licenses/>.
++***/
++
++#include <stddef.h>
++#include <string.h>
++
++#include "parse-printf-format.h"
++
++static const char *consume_nonarg(const char *fmt)
++{
++        do {
++                if (*fmt == '\0')
++                        return fmt;
++        } while (*fmt++ != '%');
++        return fmt;
++}
++
++static const char *consume_num(const char *fmt)
++{
++        for (;*fmt >= '0' && *fmt <= '9'; fmt++)
++                /* do nothing */;
++        return fmt;
++}
++
++static const char *consume_argn(const char *fmt, size_t *arg)
++{
++        const char *p = fmt;
++        size_t val = 0;
++
++        if (*p < '1' || *p > '9')
++                return fmt;
++        do {
++                val = 10*val + (*p++ - '0');
++        } while (*p >= '0' && *p <= '9');
++
++        if (*p != '$')
++                return fmt;
++        *arg = val;
++        return p+1;
++}
++
++static const char *consume_flags(const char *fmt)
++{
++        while (1) {
++                switch (*fmt) {
++                case '#':
++                case '0':
++                case '-':
++                case ' ':
++                case '+':
++                case '\'':
++                case 'I':
++                        fmt++;
++                        continue;
++                }
++                return fmt;
++        }
++}
++
++enum state {
++        BARE,
++        LPRE,
++        LLPRE,
++        HPRE,
++        HHPRE,
++        BIGLPRE,
++        ZTPRE,
++        JPRE,
++        STOP
++};
++
++enum type {
++        NONE,
++        PTR,
++        INT,
++        UINT,
++        ULLONG,
++        LONG,
++        ULONG,
++        SHORT,
++        USHORT,
++        CHAR,
++        UCHAR,
++        LLONG,
++        SIZET,
++        IMAX,
++        UMAX,
++        PDIFF,
++        UIPTR,
++        DBL,
++        LDBL,
++        MAXTYPE
++};
++
++static const short pa_types[MAXTYPE] = {
++        [NONE]   = PA_INT,
++        [PTR]    = PA_POINTER,
++        [INT]    = PA_INT,
++        [UINT]   = PA_INT,
++        [ULLONG] = PA_INT | PA_FLAG_LONG_LONG,
++        [LONG]   = PA_INT | PA_FLAG_LONG,
++        [ULONG]  = PA_INT | PA_FLAG_LONG,
++        [SHORT]  = PA_INT | PA_FLAG_SHORT,
++        [USHORT] = PA_INT | PA_FLAG_SHORT,
++        [CHAR]   = PA_CHAR,
++        [UCHAR]  = PA_CHAR,
++        [LLONG]  = PA_INT | PA_FLAG_LONG_LONG,
++        [SIZET]  = PA_INT | PA_FLAG_LONG,
++        [IMAX]   = PA_INT | PA_FLAG_LONG_LONG,
++        [UMAX]   = PA_INT | PA_FLAG_LONG_LONG,
++        [PDIFF]  = PA_INT | PA_FLAG_LONG_LONG,
++        [UIPTR]  = PA_INT | PA_FLAG_LONG,
++        [DBL]    = PA_DOUBLE,
++        [LDBL]   = PA_DOUBLE | PA_FLAG_LONG_DOUBLE
++};
++
++#define S(x) [(x)-'A']
++#define E(x) (STOP + (x))
++
++static const unsigned char states[]['z'-'A'+1] = {
++        { /* 0: bare types */
++                S('d') = E(INT), S('i') = E(INT),
++                S('o') = E(UINT),S('u') = E(UINT),S('x') = E(UINT), S('X') = E(UINT),
++                S('e') = E(DBL), S('f') = E(DBL), S('g') = E(DBL),  S('a') = E(DBL),
++                S('E') = E(DBL), S('F') = E(DBL), S('G') = E(DBL),  S('A') = E(DBL),
++                S('c') = E(CHAR),S('C') = E(INT),
++                S('s') = E(PTR), S('S') = E(PTR), S('p') = E(UIPTR),S('n') = E(PTR),
++                S('m') = E(NONE),
++                S('l') = LPRE,   S('h') = HPRE, S('L') = BIGLPRE,
++                S('z') = ZTPRE,  S('j') = JPRE, S('t') = ZTPRE
++        }, { /* 1: l-prefixed */
++                S('d') = E(LONG), S('i') = E(LONG),
++                S('o') = E(ULONG),S('u') = E(ULONG),S('x') = E(ULONG),S('X') = E(ULONG),
++                S('e') = E(DBL),  S('f') = E(DBL),  S('g') = E(DBL),  S('a') = E(DBL),
++                S('E') = E(DBL),  S('F') = E(DBL),  S('G') = E(DBL),  S('A') = E(DBL),
++                S('c') = E(INT),  S('s') = E(PTR),  S('n') = E(PTR),
++                S('l') = LLPRE
++        }, { /* 2: ll-prefixed */
++                S('d') = E(LLONG), S('i') = E(LLONG),
++                S('o') = E(ULLONG),S('u') = E(ULLONG),
++                S('x') = E(ULLONG),S('X') = E(ULLONG),
++                S('n') = E(PTR)
++        }, { /* 3: h-prefixed */
++                S('d') = E(SHORT), S('i') = E(SHORT),
++                S('o') = E(USHORT),S('u') = E(USHORT),
++                S('x') = E(USHORT),S('X') = E(USHORT),
++                S('n') = E(PTR),
++                S('h') = HHPRE
++        }, { /* 4: hh-prefixed */
++                S('d') = E(CHAR), S('i') = E(CHAR),
++                S('o') = E(UCHAR),S('u') = E(UCHAR),
++                S('x') = E(UCHAR),S('X') = E(UCHAR),
++                S('n') = E(PTR)
++        }, { /* 5: L-prefixed */
++                S('e') = E(LDBL),S('f') = E(LDBL),S('g') = E(LDBL), S('a') = E(LDBL),
++                S('E') = E(LDBL),S('F') = E(LDBL),S('G') = E(LDBL), S('A') = E(LDBL),
++                S('n') = E(PTR)
++        }, { /* 6: z- or t-prefixed (assumed to be same size) */
++                S('d') = E(PDIFF),S('i') = E(PDIFF),
++                S('o') = E(SIZET),S('u') = E(SIZET),
++                S('x') = E(SIZET),S('X') = E(SIZET),
++                S('n') = E(PTR)
++        }, { /* 7: j-prefixed */
++                S('d') = E(IMAX), S('i') = E(IMAX),
++                S('o') = E(UMAX), S('u') = E(UMAX),
++                S('x') = E(UMAX), S('X') = E(UMAX),
++                S('n') = E(PTR)
++        }
++};
++
++size_t parse_printf_format(const char *fmt, size_t n, int *types)
++{
++        size_t i = 0;
++        size_t last = 0;
++
++        memset(types, 0, n);
++
++        while (1) {
++                size_t arg;
++                unsigned int state;
++
++                fmt = consume_nonarg(fmt);
++                if (*fmt == '\0')
++                        break;
++                if (*fmt == '%') {
++                        fmt++;
++                        continue;
++                }
++                arg = 0;
++                fmt = consume_argn(fmt, &arg);
++                /* flags */
++                fmt = consume_flags(fmt);
++                /* width */
++                if (*fmt == '*') {
++                        size_t warg = 0;
++                        fmt = consume_argn(fmt+1, &warg);
++                        if (warg == 0)
++                                warg = ++i;
++                        if (warg > last)
++                                last = warg;
++                        if (warg <= n && types[warg-1] == NONE)
++                                types[warg-1] = INT;
++                } else
++                        fmt = consume_num(fmt);
++                /* precision */
++                if (*fmt == '.') {
++                        fmt++;
++                        if (*fmt == '*') {
++                                size_t parg = 0;
++                                fmt = consume_argn(fmt+1, &parg);
++                                if (parg == 0)
++                                        parg = ++i;
++                                if (parg > last)
++                                        last = parg;
++                                if (parg <= n && types[parg-1] == NONE)
++                                        types[parg-1] = INT;
++                        } else {
++                                if (*fmt == '-')
++                                        fmt++;
++                                fmt = consume_num(fmt);
++                        }
++                }
++                /* length modifier and conversion specifier */
++                state = BARE;
++                do {
++                        unsigned char c = *fmt++;
++
++                        if (c < 'A' || c > 'z')
++                                continue;
++                        state = states[state]S(c);
++                        if (state == 0)
++                                continue;
++                } while (state < STOP);
++
++                if (state == E(NONE))
++                        continue;
++
++                if (arg == 0)
++                        arg = ++i;
++                if (arg > last)
++                        last = arg;
++                if (arg <= n)
++                        types[arg-1] = state - STOP;
++        }
++
++        if (last > n)
++                last = n;
++        for (i = 0; i < last; i++)
++                types[i] = pa_types[types[i]];
++
++        return last;
++}
+diff --git a/src/basic/parse-printf-format.h b/src/basic/parse-printf-format.h
+new file mode 100644
+index 000000000..4371177b0
+--- /dev/null
++++ b/src/basic/parse-printf-format.h
+@@ -0,0 +1,57 @@
++/*-*- Mode: C; c-basic-offset: 8; indent-tabs-mode: nil -*-*/
++
++/***
++  This file is part of systemd.
++
++  Copyright 2014 Emil Renner Berthing <systemd@esmil.dk>
++
++  With parts from the GNU C Library
++  Copyright 1991-2014 Free Software Foundation, Inc.
++
++  systemd is free software; you can redistribute it and/or modify it
++  under the terms of the GNU Lesser General Public License as published by
++  the Free Software Foundation; either version 2.1 of the License, or
++  (at your option) any later version.
++
++  systemd is distributed in the hope that it will be useful, but
++  WITHOUT ANY WARRANTY; without even the implied warranty of
++  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
++  Lesser General Public License for more details.
++
++  You should have received a copy of the GNU Lesser General Public License
++  along with systemd; If not, see <http://www.gnu.org/licenses/>.
++***/
++
++#pragma once
++
++#include "config.h"
++
++#ifdef HAVE_PRINTF_H
++#include <printf.h>
++#else
++
++#include <stddef.h>
++
++enum {				/* C type: */
++  PA_INT,			/* int */
++  PA_CHAR,			/* int, cast to char */
++  PA_WCHAR,			/* wide char */
++  PA_STRING,			/* const char *, a '\0'-terminated string */
++  PA_WSTRING,			/* const wchar_t *, wide character string */
++  PA_POINTER,			/* void * */
++  PA_FLOAT,			/* float */
++  PA_DOUBLE,			/* double */
++  PA_LAST
++};
++
++/* Flag bits that can be set in a type returned by `parse_printf_format'.  */
++#define	PA_FLAG_MASK		0xff00
++#define	PA_FLAG_LONG_LONG	(1 << 8)
++#define	PA_FLAG_LONG_DOUBLE	PA_FLAG_LONG_LONG
++#define	PA_FLAG_LONG		(1 << 9)
++#define	PA_FLAG_SHORT		(1 << 10)
++#define	PA_FLAG_PTR		(1 << 11)
++
++size_t parse_printf_format(const char *fmt, size_t n, int *types);
++
++#endif /* HAVE_PRINTF_H */
+diff --git a/src/basic/stdio-util.h b/src/basic/stdio-util.h
+index bd1144b4c..c9c95eb54 100644
+--- a/src/basic/stdio-util.h
++++ b/src/basic/stdio-util.h
+@@ -19,12 +19,12 @@
+   along with systemd; If not, see <http://www.gnu.org/licenses/>.
+ ***/
+ 
+-#include <printf.h>
+ #include <stdarg.h>
+ #include <stdio.h>
+ #include <sys/types.h>
+ 
+ #include "macro.h"
++#include "parse-printf-format.h"
+ 
+ #define xsprintf(buf, fmt, ...) \
+         assert_message_se((size_t) snprintf(buf, ELEMENTSOF(buf), fmt, __VA_ARGS__) < ELEMENTSOF(buf), "xsprintf: " #buf "[] must be big enough")
+diff --git a/src/journal/journal-send.c b/src/journal/journal-send.c
+index 440fba67c..0236c43c4 100644
+--- a/src/journal/journal-send.c
++++ b/src/journal/journal-send.c
+@@ -19,7 +19,6 @@
+ 
+ #include <errno.h>
+ #include <fcntl.h>
+-#include <printf.h>
+ #include <stddef.h>
+ #include <sys/socket.h>
+ #include <sys/un.h>
+@@ -38,6 +37,7 @@
+ #include "stdio-util.h"
+ #include "string-util.h"
+ #include "util.h"
++#include "parse-printf-format.h"
+ 
+ #define SNDBUF_SIZE (8*1024*1024)
+ 
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-core-device.c-Change-the-default-device-timeout-to-2.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-core-device.c-Change-the-default-device-timeout-to-2.patch
index ee2cd6c..7f1bc44 100644
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-core-device.c-Change-the-default-device-timeout-to-2.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-core-device.c-Change-the-default-device-timeout-to-2.patch
@@ -1,7 +1,7 @@
-From a544d6d15f5c418084f322349aafe341128d5fca Mon Sep 17 00:00:00 2001
+From f1b5a6f717bda6f80a6b5e3e4d50b450f6cc7b09 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Mon, 14 Dec 2015 04:09:19 +0000
-Subject: [PATCH 01/19] core/device.c: Change the default device timeout to 240
+Subject: [PATCH 14/14] core/device.c: Change the default device timeout to 240
  sec.
 MIME-Version: 1.0
 Content-Type: text/plain; charset=UTF-8
@@ -11,23 +11,24 @@
 
 Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
 ---
  src/core/device.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/src/core/device.c b/src/core/device.c
-index c572a67..f90774e 100644
+index 77601c552..98bf49ba2 100644
 --- a/src/core/device.c
 +++ b/src/core/device.c
 @@ -112,7 +112,7 @@ static void device_init(Unit *u) {
           * indefinitely for plugged in devices, something which cannot
           * happen for the other units since their operations time out
           * anyway. */
--        u->job_timeout = u->manager->default_timeout_start_usec;
-+        u->job_timeout = (240 * USEC_PER_SEC);
+-        u->job_running_timeout = u->manager->default_timeout_start_usec;
++        u->job_running_timeout = (240 * USEC_PER_SEC);
  
          u->ignore_on_isolate = true;
  }
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-core-evaluate-presets-after-generators-have-run-6526.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-core-evaluate-presets-after-generators-have-run-6526.patch
new file mode 100644
index 0000000..df100e5
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-core-evaluate-presets-after-generators-have-run-6526.patch
@@ -0,0 +1,69 @@
+From 28dd66ecfce743b1ea9046c7bb501e0fcaeff724 Mon Sep 17 00:00:00 2001
+From: Luca Bruno <luca.bruno@coreos.com>
+Date: Sun, 6 Aug 2017 13:24:24 +0000
+Subject: [PATCH] core: evaluate presets after generators have run (#6526)
+
+This commit moves the first-boot system preset-settings evaluation out
+of main and into the manager startup logic itself. Notably, it reverses
+the order between generators and presets evaluation, so that any changes
+performed by first-boot generators are taken into account by presets
+logic.
+
+After this change, units created by a generator can be enabled as part
+of a preset.
+
+Upstream-Status: Backport
+
+Signed-off-by: Catalin Enache <catalin.enache@windriver.com>
+---
+ src/core/main.c    | 12 ++----------
+ src/core/manager.c |  8 ++++++++
+ 2 files changed, 10 insertions(+), 10 deletions(-)
+
+diff --git a/src/core/main.c b/src/core/main.c
+index dfedc3d..11ac9cf 100644
+--- a/src/core/main.c
++++ b/src/core/main.c
+@@ -1809,18 +1809,10 @@ int main(int argc, char *argv[]) {
+                 if (prctl(PR_SET_CHILD_SUBREAPER, 1) < 0)
+                         log_warning_errno(errno, "Failed to make us a subreaper: %m");
+ 
+-        if (arg_system) {
++        if (arg_system)
++                /* Bump up RLIMIT_NOFILE for systemd itself */
+                 (void) bump_rlimit_nofile(&saved_rlimit_nofile);
+ 
+-                if (empty_etc) {
+-                        r = unit_file_preset_all(UNIT_FILE_SYSTEM, 0, NULL, UNIT_FILE_PRESET_ENABLE_ONLY, NULL, 0);
+-                        if (r < 0)
+-                                log_full_errno(r == -EEXIST ? LOG_NOTICE : LOG_WARNING, r, "Failed to populate /etc with preset unit settings, ignoring: %m");
+-                        else
+-                                log_info("Populated /etc with preset unit settings.");
+-                }
+-        }
+-
+         r = manager_new(arg_system ? UNIT_FILE_SYSTEM : UNIT_FILE_USER, arg_action == ACTION_TEST, &m);
+         if (r < 0) {
+                 log_emergency_errno(r, "Failed to allocate manager object: %m");
+diff --git a/src/core/manager.c b/src/core/manager.c
+index 1aadb70..fb5e2b5 100644
+--- a/src/core/manager.c
++++ b/src/core/manager.c
+@@ -1328,6 +1328,14 @@ int manager_startup(Manager *m, FILE *serialization, FDSet *fds) {
+         if (r < 0)
+                 return r;
+ 
++        if (m->first_boot && m->unit_file_scope == UNIT_FILE_SYSTEM) {
++                q = unit_file_preset_all(UNIT_FILE_SYSTEM, 0, NULL, UNIT_FILE_PRESET_ENABLE_ONLY, NULL, 0);
++                if (q < 0)
++                        log_full_errno(q == -EEXIST ? LOG_NOTICE : LOG_WARNING, q, "Failed to populate /etc with preset unit settings, ignoring: %m");
++                else
++                        log_info("Populated /etc with preset unit settings.");
++        }
++
+         lookup_paths_reduce(&m->lookup_paths);
+         manager_build_unit_path_cache(m);
+ 
+-- 
+2.10.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-core-load-fragment-refuse-units-with-errors-in-certa.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-core-load-fragment-refuse-units-with-errors-in-certa.patch
deleted file mode 100644
index 80948b2..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-core-load-fragment-refuse-units-with-errors-in-certa.patch
+++ /dev/null
@@ -1,329 +0,0 @@
-If a user is created with a strictly-speaking invalid name such as '0day' and a
-unit created to run as that user, systemd rejects the username and runs the unit
-as root.
-
-CVE: CVE-2017-1000082
-Upstream-Status: Backport
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-From d8e1310e1ed7b6f122bc7eb8ba061fbd088783c0 Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?Zbigniew=20J=C4=99drzejewski-Szmek?= <zbyszek@in.waw.pl>
-Date: Thu, 6 Jul 2017 13:28:19 -0400
-Subject: [PATCH] core/load-fragment: refuse units with errors in certain
- directives
-
-If an error is encountered in any of the Exec* lines, WorkingDirectory,
-SELinuxContext, ApparmorProfile, SmackProcessLabel, Service (in .socket
-units), User, or Group, refuse to load the unit. If the config stanza
-has support, ignore the failure if '-' is present.
-
-For those configuration directives, even if we started the unit, it's
-pretty likely that it'll do something unexpected (like write files
-in a wrong place, or with a wrong context, or run with wrong permissions,
-etc). It seems better to refuse to start the unit and have the admin
-clean up the configuration without giving the service a chance to mess
-up stuff.
-
-Note that all "security" options that restrict what the unit can do
-(Capabilities, AmbientCapabilities, Restrict*, SystemCallFilter, Limit*,
-PrivateDevices, Protect*, etc) are _not_ treated like this. Such options are
-only supplementary, and are not always available depending on the architecture
-and compilation options, so unit authors have to make sure that the service
-runs correctly without them anyway.
-
-Fixes #6237, #6277.
-
-Signed-off-by: Ross Burton <ross.burton@intel.com>
----
- src/core/load-fragment.c  | 104 ++++++++++++++++++++++++++++------------------
- src/test/test-unit-file.c |  14 +++----
- 2 files changed, 70 insertions(+), 48 deletions(-)
-
-diff --git a/src/core/load-fragment.c b/src/core/load-fragment.c
-index cbc826809..2047974f4 100644
---- a/src/core/load-fragment.c
-+++ b/src/core/load-fragment.c
-@@ -630,20 +630,28 @@ int config_parse_exec(
- 
-                 if (isempty(f)) {
-                         /* First word is either "-" or "@" with no command. */
--                        log_syntax(unit, LOG_ERR, filename, line, 0, "Empty path in command line, ignoring: \"%s\"", rvalue);
--                        return 0;
-+                        log_syntax(unit, LOG_ERR, filename, line, 0,
-+                                   "Empty path in command line%s: \"%s\"",
-+                                   ignore ? ", ignoring" : "", rvalue);
-+                        return ignore ? 0 : -ENOEXEC;
-                 }
-                 if (!string_is_safe(f)) {
--                        log_syntax(unit, LOG_ERR, filename, line, 0, "Executable path contains special characters, ignoring: %s", rvalue);
--                        return 0;
-+                        log_syntax(unit, LOG_ERR, filename, line, 0,
-+                                   "Executable path contains special characters%s: %s",
-+                                   ignore ? ", ignoring" : "", rvalue);
-+                        return ignore ? 0 : -ENOEXEC;
-                 }
-                 if (!path_is_absolute(f)) {
--                        log_syntax(unit, LOG_ERR, filename, line, 0, "Executable path is not absolute, ignoring: %s", rvalue);
--                        return 0;
-+                        log_syntax(unit, LOG_ERR, filename, line, 0,
-+                                   "Executable path is not absolute%s: %s",
-+                                   ignore ? ", ignoring" : "", rvalue);
-+                        return ignore ? 0 : -ENOEXEC;
-                 }
-                 if (endswith(f, "/")) {
--                        log_syntax(unit, LOG_ERR, filename, line, 0, "Executable path specifies a directory, ignoring: %s", rvalue);
--                        return 0;
-+                        log_syntax(unit, LOG_ERR, filename, line, 0,
-+                                   "Executable path specifies a directory%s: %s",
-+                                   ignore ? ", ignoring" : "", rvalue);
-+                        return ignore ? 0 : -ENOEXEC;
-                 }
- 
-                 if (f == firstword) {
-@@ -699,7 +707,7 @@ int config_parse_exec(
-                         if (r == 0)
-                                 break;
-                         else if (r < 0)
--                                return 0;
-+                                return ignore ? 0 : -ENOEXEC;
- 
-                         if (!GREEDY_REALLOC(n, nbufsize, nlen + 2))
-                                 return log_oom();
-@@ -709,8 +717,10 @@ int config_parse_exec(
-                 }
- 
-                 if (!n || !n[0]) {
--                        log_syntax(unit, LOG_ERR, filename, line, 0, "Empty executable name or zeroeth argument, ignoring: %s", rvalue);
--                        return 0;
-+                        log_syntax(unit, LOG_ERR, filename, line, 0,
-+                                   "Empty executable name or zeroeth argument%s: %s",
-+                                   ignore ? ", ignoring" : "", rvalue);
-+                        return ignore ? 0 : -ENOEXEC;
-                 }
- 
-                 nce = new0(ExecCommand, 1);
-@@ -1315,8 +1325,10 @@ int config_parse_exec_selinux_context(
- 
-         r = unit_name_printf(u, rvalue, &k);
-         if (r < 0) {
--                log_syntax(unit, LOG_ERR, filename, line, r, "Failed to resolve specifiers, ignoring: %m");
--                return 0;
-+                log_syntax(unit, LOG_ERR, filename, line, r,
-+                           "Failed to resolve specifiers%s: %m",
-+                           ignore ? ", ignoring" : "");
-+                return ignore ? 0 : -ENOEXEC;
-         }
- 
-         free(c->selinux_context);
-@@ -1363,8 +1375,10 @@ int config_parse_exec_apparmor_profile(
- 
-         r = unit_name_printf(u, rvalue, &k);
-         if (r < 0) {
--                log_syntax(unit, LOG_ERR, filename, line, r, "Failed to resolve specifiers, ignoring: %m");
--                return 0;
-+                log_syntax(unit, LOG_ERR, filename, line, r,
-+                           "Failed to resolve specifiers%s: %m",
-+                           ignore ? ", ignoring" : "");
-+                return ignore ? 0 : -ENOEXEC;
-         }
- 
-         free(c->apparmor_profile);
-@@ -1411,8 +1425,10 @@ int config_parse_exec_smack_process_label(
- 
-         r = unit_name_printf(u, rvalue, &k);
-         if (r < 0) {
--                log_syntax(unit, LOG_ERR, filename, line, r, "Failed to resolve specifiers, ignoring: %m");
--                return 0;
-+                log_syntax(unit, LOG_ERR, filename, line, r,
-+                           "Failed to resolve specifiers%s: %m",
-+                           ignore ? ", ignoring" : "");
-+                return ignore ? 0 : -ENOEXEC;
-         }
- 
-         free(c->smack_process_label);
-@@ -1630,19 +1646,19 @@ int config_parse_socket_service(
- 
-         r = unit_name_printf(UNIT(s), rvalue, &p);
-         if (r < 0) {
--                log_syntax(unit, LOG_ERR, filename, line, r, "Failed to resolve specifiers, ignoring: %s", rvalue);
--                return 0;
-+                log_syntax(unit, LOG_ERR, filename, line, r, "Failed to resolve specifiers: %s", rvalue);
-+                return -ENOEXEC;
-         }
- 
-         if (!endswith(p, ".service")) {
--                log_syntax(unit, LOG_ERR, filename, line, 0, "Unit must be of type service, ignoring: %s", rvalue);
--                return 0;
-+                log_syntax(unit, LOG_ERR, filename, line, 0, "Unit must be of type service: %s", rvalue);
-+                return -ENOEXEC;
-         }
- 
-         r = manager_load_unit(UNIT(s)->manager, p, NULL, &error, &x);
-         if (r < 0) {
--                log_syntax(unit, LOG_ERR, filename, line, r, "Failed to load unit %s, ignoring: %s", rvalue, bus_error_message(&error, r));
--                return 0;
-+                log_syntax(unit, LOG_ERR, filename, line, r, "Failed to load unit %s: %s", rvalue, bus_error_message(&error, r));
-+                return -ENOEXEC;
-         }
- 
-         unit_ref_set(&s->service, x);
-@@ -1893,13 +1909,13 @@ int config_parse_user_group(
- 
-                 r = unit_full_printf(u, rvalue, &k);
-                 if (r < 0) {
--                        log_syntax(unit, LOG_ERR, filename, line, r, "Failed to resolve unit specifiers in %s, ignoring: %m", rvalue);
--                        return 0;
-+                        log_syntax(unit, LOG_ERR, filename, line, r, "Failed to resolve unit specifiers in %s: %m", rvalue);
-+                        return -ENOEXEC;
-                 }
- 
-                 if (!valid_user_group_name_or_id(k)) {
--                        log_syntax(unit, LOG_ERR, filename, line, 0, "Invalid user/group name or numeric ID, ignoring: %s", k);
--                        return 0;
-+                        log_syntax(unit, LOG_ERR, filename, line, 0, "Invalid user/group name or numeric ID: %s", k);
-+                        return -ENOEXEC;
-                 }
- 
-                 n = k;
-@@ -1957,19 +1973,19 @@ int config_parse_user_group_strv(
-                 if (r == -ENOMEM)
-                         return log_oom();
-                 if (r < 0) {
--                        log_syntax(unit, LOG_ERR, filename, line, r, "Invalid syntax, ignoring: %s", rvalue);
--                        break;
-+                        log_syntax(unit, LOG_ERR, filename, line, r, "Invalid syntax: %s", rvalue);
-+                        return -ENOEXEC;
-                 }
- 
-                 r = unit_full_printf(u, word, &k);
-                 if (r < 0) {
--                        log_syntax(unit, LOG_ERR, filename, line, r, "Failed to resolve unit specifiers in %s, ignoring: %m", word);
--                        continue;
-+                        log_syntax(unit, LOG_ERR, filename, line, r, "Failed to resolve unit specifiers in %s: %m", word);
-+                        return -ENOEXEC;
-                 }
- 
-                 if (!valid_user_group_name_or_id(k)) {
--                        log_syntax(unit, LOG_ERR, filename, line, 0, "Invalid user/group name or numeric ID, ignoring: %s", k);
--                        continue;
-+                        log_syntax(unit, LOG_ERR, filename, line, 0, "Invalid user/group name or numeric ID: %s", k);
-+                        return -ENOEXEC;
-                 }
- 
-                 r = strv_push(users, k);
-@@ -2128,25 +2144,28 @@ int config_parse_working_directory(
- 
-                 r = unit_full_printf(u, rvalue, &k);
-                 if (r < 0) {
--                        log_syntax(unit, LOG_ERR, filename, line, r, "Failed to resolve unit specifiers in working directory path '%s', ignoring: %m", rvalue);
--                        return 0;
-+                        log_syntax(unit, LOG_ERR, filename, line, r,
-+                                   "Failed to resolve unit specifiers in working directory path '%s'%s: %m",
-+                                   rvalue, missing_ok ? ", ignoring" : "");
-+                        return missing_ok ? 0 : -ENOEXEC;
-                 }
- 
-                 path_kill_slashes(k);
- 
-                 if (!utf8_is_valid(k)) {
-                         log_syntax_invalid_utf8(unit, LOG_ERR, filename, line, rvalue);
--                        return 0;
-+                        return missing_ok ? 0 : -ENOEXEC;
-                 }
- 
-                 if (!path_is_absolute(k)) {
--                        log_syntax(unit, LOG_ERR, filename, line, 0, "Working directory path '%s' is not absolute, ignoring.", rvalue);
--                        return 0;
-+                        log_syntax(unit, LOG_ERR, filename, line, 0,
-+                                   "Working directory path '%s' is not absolute%s.",
-+                                   rvalue, missing_ok ? ", ignoring" : "");
-+                        return missing_ok ? 0 : -ENOEXEC;
-                 }
- 
--                free_and_replace(c->working_directory, k);
--
-                 c->working_directory_home = false;
-+                free_and_replace(c->working_directory, k);
-         }
- 
-         c->working_directory_missing_ok = missing_ok;
-@@ -4228,8 +4247,11 @@ int unit_load_fragment(Unit *u) {
-                         return r;
- 
-                 r = load_from_path(u, k);
--                if (r < 0)
-+                if (r < 0) {
-+                        if (r == -ENOEXEC)
-+                                log_unit_notice(u, "Unit configuration has fatal error, unit will not be started.");
-                         return r;
-+                }
- 
-                 if (u->load_state == UNIT_STUB) {
-                         SET_FOREACH(t, u->names, i) {
-diff --git a/src/test/test-unit-file.c b/src/test/test-unit-file.c
-index 12f48bf43..fd797b587 100644
---- a/src/test/test-unit-file.c
-+++ b/src/test/test-unit-file.c
-@@ -146,7 +146,7 @@ static void test_config_parse_exec(void) {
-         r = config_parse_exec(NULL, "fake", 4, "section", 1,
-                               "LValue", 0, "/RValue/ argv0 r1",
-                               &c, u);
--        assert_se(r == 0);
-+        assert_se(r == -ENOEXEC);
-         assert_se(c1->command_next == NULL);
- 
-         log_info("/* honour_argv0 */");
-@@ -161,7 +161,7 @@ static void test_config_parse_exec(void) {
-         r = config_parse_exec(NULL, "fake", 3, "section", 1,
-                               "LValue", 0, "@/RValue",
-                               &c, u);
--        assert_se(r == 0);
-+        assert_se(r == -ENOEXEC);
-         assert_se(c1->command_next == NULL);
- 
-         log_info("/* no command, whitespace only, reset */");
-@@ -220,7 +220,7 @@ static void test_config_parse_exec(void) {
-                               "-@/RValue argv0 r1 ; ; "
-                               "/goo/goo boo",
-                               &c, u);
--        assert_se(r >= 0);
-+        assert_se(r == -ENOEXEC);
-         c1 = c1->command_next;
-         check_execcommand(c1, "/RValue", "argv0", "r1", NULL, true);
- 
-@@ -374,7 +374,7 @@ static void test_config_parse_exec(void) {
-                 r = config_parse_exec(NULL, "fake", 4, "section", 1,
-                                       "LValue", 0, path,
-                                       &c, u);
--                assert_se(r == 0);
-+                assert_se(r == -ENOEXEC);
-                 assert_se(c1->command_next == NULL);
-         }
- 
-@@ -401,21 +401,21 @@ static void test_config_parse_exec(void) {
-         r = config_parse_exec(NULL, "fake", 4, "section", 1,
-                               "LValue", 0, "/path\\",
-                               &c, u);
--        assert_se(r == 0);
-+        assert_se(r == -ENOEXEC);
-         assert_se(c1->command_next == NULL);
- 
-         log_info("/* missing ending ' */");
-         r = config_parse_exec(NULL, "fake", 4, "section", 1,
-                               "LValue", 0, "/path 'foo",
-                               &c, u);
--        assert_se(r == 0);
-+        assert_se(r == -ENOEXEC);
-         assert_se(c1->command_next == NULL);
- 
-         log_info("/* missing ending ' with trailing backslash */");
-         r = config_parse_exec(NULL, "fake", 4, "section", 1,
-                               "LValue", 0, "/path 'foo\\",
-                               &c, u);
--        assert_se(r == 0);
-+        assert_se(r == -ENOEXEC);
-         assert_se(c1->command_next == NULL);
- 
-         log_info("/* invalid space between modifiers */");
--- 
-2.11.0
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-main-skip-many-initialization-steps-when-running-in-.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-main-skip-many-initialization-steps-when-running-in-.patch
new file mode 100644
index 0000000..a033b04
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0001-main-skip-many-initialization-steps-when-running-in-.patch
@@ -0,0 +1,163 @@
+From dea374e898a749a0474b72b2015cca9009b1432b Mon Sep 17 00:00:00 2001
+From: Lennart Poettering <lennart@poettering.net>
+Date: Wed, 13 Sep 2017 10:31:40 +0200
+Subject: [PATCH] main: skip many initialization steps when running in --test
+ mode
+
+Most importantly, don't collect open socket activation fds when in
+--test mode. This specifically created a problem because we invoke
+pager_open() beforehand (which these days makes copies of the original
+stdout/stderr in order to be able to restore them when the pager goes
+away) and we might mistakenly treat the fd copies it creates as socket
+activation fds.
+
+Fixes: #6383
+
+Upstream-Status: Backport
+
+Signed-off-by: Catalin Enache <catalin.enache@windriver.com>
+---
+ src/core/main.c | 108 +++++++++++++++++++++++++++++---------------------------
+ 1 file changed, 56 insertions(+), 52 deletions(-)
+
+diff --git a/src/core/main.c b/src/core/main.c
+index 11ac9cf..d1a53a5 100644
+--- a/src/core/main.c
++++ b/src/core/main.c
+@@ -1679,20 +1679,22 @@ int main(int argc, char *argv[]) {
+         log_close();
+ 
+         /* Remember open file descriptors for later deserialization */
+-        r = fdset_new_fill(&fds);
+-        if (r < 0) {
+-                log_emergency_errno(r, "Failed to allocate fd set: %m");
+-                error_message = "Failed to allocate fd set";
+-                goto finish;
+-        } else
+-                fdset_cloexec(fds, true);
++        if (arg_action == ACTION_RUN) {
++                r = fdset_new_fill(&fds);
++                if (r < 0) {
++                        log_emergency_errno(r, "Failed to allocate fd set: %m");
++                        error_message = "Failed to allocate fd set";
++                        goto finish;
++                } else
++                        fdset_cloexec(fds, true);
+ 
+-        if (arg_serialization)
+-                assert_se(fdset_remove(fds, fileno(arg_serialization)) >= 0);
++                if (arg_serialization)
++                        assert_se(fdset_remove(fds, fileno(arg_serialization)) >= 0);
+ 
+-        if (arg_system)
+-                /* Become a session leader if we aren't one yet. */
+-                setsid();
++                if (arg_system)
++                        /* Become a session leader if we aren't one yet. */
++                        setsid();
++        }
+ 
+         /* Move out of the way, so that we won't block unmounts */
+         assert_se(chdir("/") == 0);
+@@ -1762,56 +1764,58 @@ int main(int argc, char *argv[]) {
+                           arg_action == ACTION_TEST ? " test" : "", getuid(), t);
+         }
+ 
+-        if (arg_system && !skip_setup) {
+-                if (arg_show_status > 0)
+-                        status_welcome();
++        if (arg_action == ACTION_RUN) {
++                if (arg_system && !skip_setup) {
++                        if (arg_show_status > 0)
++                                status_welcome();
+ 
+-                hostname_setup();
+-                machine_id_setup(NULL, arg_machine_id, NULL);
+-                loopback_setup();
+-                bump_unix_max_dgram_qlen();
++                        hostname_setup();
++                        machine_id_setup(NULL, arg_machine_id, NULL);
++                        loopback_setup();
++                        bump_unix_max_dgram_qlen();
+ 
+-                test_usr();
+-        }
++                        test_usr();
++                }
+ 
+-        if (arg_system && arg_runtime_watchdog > 0 && arg_runtime_watchdog != USEC_INFINITY)
+-                watchdog_set_timeout(&arg_runtime_watchdog);
++                if (arg_system && arg_runtime_watchdog > 0 && arg_runtime_watchdog != USEC_INFINITY)
++                        watchdog_set_timeout(&arg_runtime_watchdog);
+ 
+-        if (arg_timer_slack_nsec != NSEC_INFINITY)
+-                if (prctl(PR_SET_TIMERSLACK, arg_timer_slack_nsec) < 0)
+-                        log_error_errno(errno, "Failed to adjust timer slack: %m");
++                if (arg_timer_slack_nsec != NSEC_INFINITY)
++                        if (prctl(PR_SET_TIMERSLACK, arg_timer_slack_nsec) < 0)
++                                log_error_errno(errno, "Failed to adjust timer slack: %m");
+ 
+-        if (arg_system && !cap_test_all(arg_capability_bounding_set)) {
+-                r = capability_bounding_set_drop_usermode(arg_capability_bounding_set);
+-                if (r < 0) {
+-                        log_emergency_errno(r, "Failed to drop capability bounding set of usermode helpers: %m");
+-                        error_message = "Failed to drop capability bounding set of usermode helpers";
+-                        goto finish;
+-                }
+-                r = capability_bounding_set_drop(arg_capability_bounding_set, true);
+-                if (r < 0) {
+-                        log_emergency_errno(r, "Failed to drop capability bounding set: %m");
+-                        error_message = "Failed to drop capability bounding set";
+-                        goto finish;
++                if (arg_system && !cap_test_all(arg_capability_bounding_set)) {
++                        r = capability_bounding_set_drop_usermode(arg_capability_bounding_set);
++                        if (r < 0) {
++                                log_emergency_errno(r, "Failed to drop capability bounding set of usermode helpers: %m");
++                                error_message = "Failed to drop capability bounding set of usermode helpers";
++                                goto finish;
++                        }
++                        r = capability_bounding_set_drop(arg_capability_bounding_set, true);
++                        if (r < 0) {
++                                log_emergency_errno(r, "Failed to drop capability bounding set: %m");
++                                error_message = "Failed to drop capability bounding set";
++                                goto finish;
++                        }
+                 }
+-        }
+ 
+-        if (arg_syscall_archs) {
+-                r = enforce_syscall_archs(arg_syscall_archs);
+-                if (r < 0) {
+-                        error_message = "Failed to set syscall architectures";
+-                        goto finish;
++                if (arg_syscall_archs) {
++                        r = enforce_syscall_archs(arg_syscall_archs);
++                        if (r < 0) {
++                                error_message = "Failed to set syscall architectures";
++                                goto finish;
++                        }
+                 }
+-        }
+ 
+-        if (!arg_system)
+-                /* Become reaper of our children */
+-                if (prctl(PR_SET_CHILD_SUBREAPER, 1) < 0)
+-                        log_warning_errno(errno, "Failed to make us a subreaper: %m");
++                if (!arg_system)
++                        /* Become reaper of our children */
++                        if (prctl(PR_SET_CHILD_SUBREAPER, 1) < 0)
++                                log_warning_errno(errno, "Failed to make us a subreaper: %m");
+ 
+-        if (arg_system)
+-                /* Bump up RLIMIT_NOFILE for systemd itself */
+-                (void) bump_rlimit_nofile(&saved_rlimit_nofile);
++                if (arg_system)
++                        /* Bump up RLIMIT_NOFILE for systemd itself */
++                        (void) bump_rlimit_nofile(&saved_rlimit_nofile);
++        }
+ 
+         r = manager_new(arg_system ? UNIT_FILE_SYSTEM : UNIT_FILE_USER, arg_action == ACTION_TEST, &m);
+         if (r < 0) {
+-- 
+2.10.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0002-src-basic-missing.h-check-for-missing-strndupa.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0002-src-basic-missing.h-check-for-missing-strndupa.patch
new file mode 100644
index 0000000..94c136b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0002-src-basic-missing.h-check-for-missing-strndupa.patch
@@ -0,0 +1,104 @@
+From 585abd891a56409915314304101cac26b42c076b Mon Sep 17 00:00:00 2001
+From: Emil Renner Berthing <systemd@esmil.dk>
+Date: Mon, 23 Oct 2017 10:45:46 -0700
+Subject: [PATCH 02/12] src/basic/missing.h: check for missing strndupa
+
+include missing.h  for definition of strndupa
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ configure.ac           |  1 +
+ src/basic/missing.h    | 11 +++++++++++
+ src/basic/mkdir.c      |  1 +
+ src/basic/parse-util.c |  1 +
+ src/shared/pager.c     |  1 +
+ src/shared/uid-range.c |  1 +
+ 6 files changed, 16 insertions(+)
+
+diff --git a/configure.ac b/configure.ac
+index efcdc6c16..cd035a971 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -329,6 +329,7 @@ AC_CHECK_DECLS([
+         pivot_root,
+         name_to_handle_at,
+         setns,
++        strndupa,
+         renameat2,
+         kcmp,
+         keyctl,
+diff --git a/src/basic/missing.h b/src/basic/missing.h
+index 04912bf52..8009888ad 100644
+--- a/src/basic/missing.h
++++ b/src/basic/missing.h
+@@ -1104,6 +1104,17 @@ typedef int32_t key_serial_t;
+ #define KEYCTL_DESCRIBE 6
+ #endif
+ 
++#if !HAVE_DECL_STRNDUPA
++#define strndupa(s, n) \
++  ({ \
++    const char *__old = (s); \
++    size_t __len = strnlen(__old, (n)); \
++    char *__new = (char *)alloca(__len + 1); \
++    __new[__len] = '\0'; \
++    (char *)memcpy(__new, __old, __len); \
++  })
++#endif
++
+ #ifndef KEYCTL_READ
+ #define KEYCTL_READ 11
+ #endif
+diff --git a/src/basic/mkdir.c b/src/basic/mkdir.c
+index 6b1a98402..d1388df48 100644
+--- a/src/basic/mkdir.c
++++ b/src/basic/mkdir.c
+@@ -28,6 +28,7 @@
+ #include "path-util.h"
+ #include "stat-util.h"
+ #include "user-util.h"
++#include "missing.h"
+ 
+ int mkdir_safe_internal(const char *path, mode_t mode, uid_t uid, gid_t gid, mkdir_func_t _mkdir) {
+         struct stat st;
+diff --git a/src/basic/parse-util.c b/src/basic/parse-util.c
+index 4532f222c..7a30a0e06 100644
+--- a/src/basic/parse-util.c
++++ b/src/basic/parse-util.c
+@@ -30,6 +30,7 @@
+ #include "parse-util.h"
+ #include "process-util.h"
+ #include "string-util.h"
++#include "missing.h"
+ 
+ int parse_boolean(const char *v) {
+         assert(v);
+diff --git a/src/shared/pager.c b/src/shared/pager.c
+index 4d7b02c63..854efc0c9 100644
+--- a/src/shared/pager.c
++++ b/src/shared/pager.c
+@@ -38,6 +38,7 @@
+ #include "string-util.h"
+ #include "strv.h"
+ #include "terminal-util.h"
++#include "missing.h"
+ 
+ static pid_t pager_pid = 0;
+ 
+diff --git a/src/shared/uid-range.c b/src/shared/uid-range.c
+index b6ec47439..91ce9fb7f 100644
+--- a/src/shared/uid-range.c
++++ b/src/shared/uid-range.c
+@@ -24,6 +24,7 @@
+ #include "macro.h"
+ #include "uid-range.h"
+ #include "user-util.h"
++#include "missing.h"
+ 
+ static bool uid_range_intersect(UidRange *range, uid_t start, uid_t nr) {
+         assert(range);
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0002-units-Prefer-getty-to-agetty-in-console-setup-system.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0002-units-Prefer-getty-to-agetty-in-console-setup-system.patch
deleted file mode 100644
index 951a28d..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0002-units-Prefer-getty-to-agetty-in-console-setup-system.patch
+++ /dev/null
@@ -1,44 +0,0 @@
-From 8736d9b9bb492f60e8f3a1a7fb5a05ba3201d86b Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 20 Feb 2015 05:29:15 +0000
-Subject: [PATCH 02/19] units: Prefer getty to agetty in console setup systemd
- units
-
-Upstream-Status: Inappropriate [configuration specific]
-
-Signed-off-by: Maxime Ripard <maxime.ripard@free-electrons.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- units/getty@.service.m4        | 2 +-
- units/serial-getty@.service.m4 | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/units/getty@.service.m4 b/units/getty@.service.m4
-index 5b82c13..e729469 100644
---- a/units/getty@.service.m4
-+++ b/units/getty@.service.m4
-@@ -33,7 +33,7 @@ ConditionPathExists=/dev/tty0
- 
- [Service]
- # the VT is cleared by TTYVTDisallocate
--ExecStart=-/sbin/agetty --noclear %I $TERM
-+ExecStart=-/sbin/getty -L %I $TERM
- Type=idle
- Restart=always
- RestartSec=0
-diff --git a/units/serial-getty@.service.m4 b/units/serial-getty@.service.m4
-index 4522d0d..e6d499d 100644
---- a/units/serial-getty@.service.m4
-+++ b/units/serial-getty@.service.m4
-@@ -22,7 +22,7 @@ Before=getty.target
- IgnoreOnIsolate=yes
- 
- [Service]
--ExecStart=-/sbin/agetty --keep-baud 115200,38400,9600 %I $TERM
-+ExecStart=-/sbin/getty -L 115200 %I $TERM
- Type=idle
- Restart=always
- UtmpIdentifier=%I
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0003-define-exp10-if-missing.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0003-define-exp10-if-missing.patch
deleted file mode 100644
index 37c6ac5..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0003-define-exp10-if-missing.patch
+++ /dev/null
@@ -1,34 +0,0 @@
-From b383c286f58184575216b2bf6f185ba2ad648956 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 9 Nov 2016 19:25:45 -0800
-Subject: [PATCH 03/19] define exp10 if missing
-
-Inspired by: http://peter.korsgaard.com/patches/alsa-utils/alsamixer-fix-build-on-uClibc-exp10.patch
-
-exp10 extension is not part of uClibc, so compute it.
-
-Upstream-Status: Pending
-
-Signed-off-by: Samuel Martin <s.martin49@gmail.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- src/basic/missing.h | 5 +++++
- 1 file changed, 5 insertions(+)
-
-diff --git a/src/basic/missing.h b/src/basic/missing.h
-index 4c013be..4a3fd9c 100644
---- a/src/basic/missing.h
-+++ b/src/basic/missing.h
-@@ -1078,4 +1078,9 @@ typedef int32_t key_serial_t;
- 
- #endif
- 
-+#ifdef __UCLIBC__
-+/* 10^x = 10^(log e^x) = (e^x)^log10 = e^(x * log 10) */
-+#define exp10(x) (exp((x) * log(10)))
-+#endif /* __UCLIBC__ */
-+
- #include "missing_syscall.h"
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0003-don-t-fail-if-GLOB_BRACE-and-GLOB_ALTDIRFUNC-is-not-.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0003-don-t-fail-if-GLOB_BRACE-and-GLOB_ALTDIRFUNC-is-not-.patch
new file mode 100644
index 0000000..9a2d2c8
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0003-don-t-fail-if-GLOB_BRACE-and-GLOB_ALTDIRFUNC-is-not-.patch
@@ -0,0 +1,157 @@
+From 5bbbc2a08a3b4283ec04af0e77e25fb205aa8b82 Mon Sep 17 00:00:00 2001
+From: Emil Renner Berthing <systemd@esmil.dk>
+Date: Mon, 23 Oct 2017 10:50:14 -0700
+Subject: [PATCH 03/12] don't fail if GLOB_BRACE and GLOB_ALTDIRFUNC is not
+ defined
+
+If the standard library doesn't provide brace
+expansion, users just won't get it.
+
+Don't use GNU GLOB extensions on non-glibc systems
+
+Conditionalize use of GLOB_ALTDIRFUNC
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/basic/glob-util.c     | 20 +++++++++++++++++---
+ src/test/test-glob-util.c | 17 +++++++++++++++--
+ src/tmpfiles/tmpfiles.c   |  8 ++++++++
+ 3 files changed, 40 insertions(+), 5 deletions(-)
+
+diff --git a/src/basic/glob-util.c b/src/basic/glob-util.c
+index f611c42e4..ad6e2be8d 100644
+--- a/src/basic/glob-util.c
++++ b/src/basic/glob-util.c
+@@ -27,13 +27,18 @@
+ #include "macro.h"
+ #include "path-util.h"
+ #include "strv.h"
++/* Don't fail if the standard library
++ * doesn't provide brace expansion */
++#ifndef GLOB_BRACE
++#define GLOB_BRACE 0
++#endif
+ 
+ int safe_glob(const char *path, int flags, glob_t *pglob) {
+         int k;
+ 
++#ifdef GLOB_ALTDIRFUNC
+         /* We want to set GLOB_ALTDIRFUNC ourselves, don't allow it to be set. */
+         assert(!(flags & GLOB_ALTDIRFUNC));
+-
+         if (!pglob->gl_closedir)
+                 pglob->gl_closedir = (void (*)(void *)) closedir;
+         if (!pglob->gl_readdir)
+@@ -44,10 +49,13 @@ int safe_glob(const char *path, int flags, glob_t *pglob) {
+                 pglob->gl_lstat = lstat;
+         if (!pglob->gl_stat)
+                 pglob->gl_stat = stat;
+-
++#endif
+         errno = 0;
++#ifdef GLOB_ALTDIRFUNC
+         k = glob(path, flags | GLOB_ALTDIRFUNC, NULL, pglob);
+-
++#else
++        k = glob(path, flags, NULL, pglob);
++#endif
+         if (k == GLOB_NOMATCH)
+                 return -ENOENT;
+         if (k == GLOB_NOSPACE)
+@@ -60,6 +68,12 @@ int safe_glob(const char *path, int flags, glob_t *pglob) {
+         return 0;
+ }
+ 
++/* Don't fail if the standard library
++ * doesn't provide brace expansion */
++#ifndef GLOB_BRACE
++#define GLOB_BRACE 0
++#endif
++
+ int glob_exists(const char *path) {
+         _cleanup_globfree_ glob_t g = {};
+         int k;
+diff --git a/src/test/test-glob-util.c b/src/test/test-glob-util.c
+index af866e004..3afa09ada 100644
+--- a/src/test/test-glob-util.c
++++ b/src/test/test-glob-util.c
+@@ -29,6 +29,11 @@
+ #include "glob-util.h"
+ #include "macro.h"
+ #include "rm-rf.h"
++/* Don't fail if the standard library
++ * doesn't provide brace expansion */
++#ifndef GLOB_BRACE
++#define GLOB_BRACE 0
++#endif
+ 
+ static void test_glob_exists(void) {
+         char name[] = "/tmp/test-glob_exists.XXXXXX";
+@@ -51,25 +56,33 @@ static void test_glob_exists(void) {
+ static void test_glob_no_dot(void) {
+         char template[] = "/tmp/test-glob-util.XXXXXXX";
+         const char *fn;
+-
+         _cleanup_globfree_ glob_t g = {
++#ifdef GLOB_ALTDIRFUNC
+                 .gl_closedir = (void (*)(void *)) closedir,
+                 .gl_readdir = (struct dirent *(*)(void *)) readdir_no_dot,
+                 .gl_opendir = (void *(*)(const char *)) opendir,
+                 .gl_lstat = lstat,
+                 .gl_stat = stat,
++#endif
+         };
+-
+         int r;
+ 
+         assert_se(mkdtemp(template));
+ 
+         fn = strjoina(template, "/*");
++#ifdef GLOB_ALTDIRFUNC
+         r = glob(fn, GLOB_NOSORT|GLOB_BRACE|GLOB_ALTDIRFUNC, NULL, &g);
++#else
++        r = glob(fn, GLOB_NOSORT|GLOB_BRACE, NULL, &g);
++#endif
+         assert_se(r == GLOB_NOMATCH);
+ 
+         fn = strjoina(template, "/.*");
++#ifdef GLOB_ALTDIRFUNC
+         r = glob(fn, GLOB_NOSORT|GLOB_BRACE|GLOB_ALTDIRFUNC, NULL, &g);
++#else
++        r = glob(fn, GLOB_NOSORT|GLOB_BRACE, NULL, &g);
++#endif
+         assert_se(r == GLOB_NOMATCH);
+ 
+         (void) rm_rf(template, REMOVE_ROOT|REMOVE_PHYSICAL);
+diff --git a/src/tmpfiles/tmpfiles.c b/src/tmpfiles/tmpfiles.c
+index 9419c99e2..07027a765 100644
+--- a/src/tmpfiles/tmpfiles.c
++++ b/src/tmpfiles/tmpfiles.c
+@@ -71,6 +71,12 @@
+ #include "umask-util.h"
+ #include "user-util.h"
+ #include "util.h"
++/* Don't fail if the standard library
++ * doesn't provide brace expansion */
++#ifndef GLOB_BRACE
++#define GLOB_BRACE 0
++#endif
++
+ 
+ /* This reads all files listed in /etc/tmpfiles.d/?*.conf and creates
+  * them in the file system. This is intended to be used to create
+@@ -1092,7 +1098,9 @@ static int item_do_children(Item *i, const char *path, action_t action) {
+ 
+ static int glob_item(Item *i, action_t action, bool recursive) {
+         _cleanup_globfree_ glob_t g = {
++#ifdef GLOB_ALTDIRFUNC
+                 .gl_opendir = (void *(*)(const char *)) opendir_nomod,
++#endif
+         };
+         int r = 0, k;
+         char **fn;
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0004-src-basic-missing.h-check-for-missing-__compar_fn_t-.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0004-src-basic-missing.h-check-for-missing-__compar_fn_t-.patch
new file mode 100644
index 0000000..cb5ae99
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0004-src-basic-missing.h-check-for-missing-__compar_fn_t-.patch
@@ -0,0 +1,47 @@
+From c850b654e71677e0d6292f1345207b9b5acffc33 Mon Sep 17 00:00:00 2001
+From: Emil Renner Berthing <systemd@esmil.dk>
+Date: Mon, 23 Oct 2017 11:31:03 -0700
+Subject: [PATCH 04/12] src/basic/missing.h: check for missing __compar_fn_t
+ typedef
+
+include missing.h for missing __compar_fn_t
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/basic/missing.h | 5 +++++
+ src/basic/strbuf.c  | 1 +
+ 2 files changed, 6 insertions(+)
+
+diff --git a/src/basic/missing.h b/src/basic/missing.h
+index 8009888ad..671f341c6 100644
+--- a/src/basic/missing.h
++++ b/src/basic/missing.h
+@@ -1063,6 +1063,11 @@ struct input_mask {
+ #define RENAME_NOREPLACE (1 << 0)
+ #endif
+ 
++#ifndef __COMPAR_FN_T
++#define __COMPAR_FN_T
++typedef int (*__compar_fn_t)(const void *, const void *);
++#endif
++
+ #ifndef KCMP_FILE
+ #define KCMP_FILE 0
+ #endif
+diff --git a/src/basic/strbuf.c b/src/basic/strbuf.c
+index 00aaf9e62..9dc4a584a 100644
+--- a/src/basic/strbuf.c
++++ b/src/basic/strbuf.c
+@@ -23,6 +23,7 @@
+ 
+ #include "alloc-util.h"
+ #include "strbuf.h"
++#include "missing.h"
+ 
+ /*
+  * Strbuf stores given strings in a single continuous allocated memory
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0006-Include-netinet-if_ether.h.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0006-Include-netinet-if_ether.h.patch
new file mode 100644
index 0000000..55887ee
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0006-Include-netinet-if_ether.h.patch
@@ -0,0 +1,86 @@
+From 21080b6a40d0a4ddd2db8f0fa37686f6fa885d1c Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 23 Oct 2017 11:38:33 -0700
+Subject: [PATCH 06/12] Include netinet/if_ether.h
+
+Fixes
+/mnt/a/oe/build/tmp/work/mips32r2-bec-linux-musl/systemd/1_234-r0/recipe-sysroot/usr/include/netinet/if_ether.h:101:8: error: redefinition of 'struct ethhdr'
+ struct ethhdr {
+        ^~~~~~
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/libsystemd/sd-netlink/netlink-types.c | 1 +
+ src/network/netdev/tuntap.c               | 1 +
+ src/network/networkd-brvlan.c             | 1 +
+ src/udev/net/ethtool-util.c               | 2 +-
+ src/udev/udev-builtin-net_setup_link.c    | 2 +-
+ 5 files changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/src/libsystemd/sd-netlink/netlink-types.c b/src/libsystemd/sd-netlink/netlink-types.c
+index 923f7dd10..b95b1e4b2 100644
+--- a/src/libsystemd/sd-netlink/netlink-types.c
++++ b/src/libsystemd/sd-netlink/netlink-types.c
+@@ -19,6 +19,7 @@
+ 
+ #include <stdint.h>
+ #include <sys/socket.h>
++#include <netinet/if_ether.h>
+ #include <linux/netlink.h>
+ #include <linux/rtnetlink.h>
+ #include <linux/can/netlink.h>
+diff --git a/src/network/netdev/tuntap.c b/src/network/netdev/tuntap.c
+index 3d6280884..40e58c38f 100644
+--- a/src/network/netdev/tuntap.c
++++ b/src/network/netdev/tuntap.c
+@@ -18,6 +18,7 @@
+ ***/
+ 
+ #include <fcntl.h>
++#include <netinet/if_ether.h>
+ #include <linux/if_tun.h>
+ #include <net/if.h>
+ #include <netinet/if_ether.h>
+diff --git a/src/network/networkd-brvlan.c b/src/network/networkd-brvlan.c
+index fa5d3ee7f..e0828962a 100644
+--- a/src/network/networkd-brvlan.c
++++ b/src/network/networkd-brvlan.c
+@@ -18,6 +18,7 @@
+ ***/
+ 
+ #include <netinet/in.h>
++#include <netinet/if_ether.h>
+ #include <linux/if_bridge.h>
+ #include <stdbool.h>
+ 
+diff --git a/src/udev/net/ethtool-util.c b/src/udev/net/ethtool-util.c
+index 201fc2343..5f7cc2a0a 100644
+--- a/src/udev/net/ethtool-util.c
++++ b/src/udev/net/ethtool-util.c
+@@ -16,7 +16,7 @@
+   You should have received a copy of the GNU Lesser General Public License
+   along with systemd; If not, see <http://www.gnu.org/licenses/>.
+ ***/
+-
++#include <netinet/if_ether.h>
+ #include <net/if.h>
+ #include <sys/ioctl.h>
+ #include <linux/ethtool.h>
+diff --git a/src/udev/udev-builtin-net_setup_link.c b/src/udev/udev-builtin-net_setup_link.c
+index 8e4777513..d01fff2a4 100644
+--- a/src/udev/udev-builtin-net_setup_link.c
++++ b/src/udev/udev-builtin-net_setup_link.c
+@@ -16,7 +16,7 @@
+   You should have received a copy of the GNU Lesser General Public License
+   along with systemd; If not, see <http://www.gnu.org/licenses/>.
+ ***/
+-
++#include <netinet/if_ether.h>
+ #include "alloc-util.h"
+ #include "link-config.h"
+ #include "log.h"
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0006-configure-Check-for-additional-features-that-uclibc-.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0006-configure-Check-for-additional-features-that-uclibc-.patch
deleted file mode 100644
index 43a0d3f..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0006-configure-Check-for-additional-features-that-uclibc-.patch
+++ /dev/null
@@ -1,48 +0,0 @@
-From 82d837b76618a773485b96e38b7b91083a7437e8 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 20 Feb 2015 05:05:45 +0000
-Subject: [PATCH 06/19] configure: Check for additional features that uclibc
- doesnt support
-
-This helps in supporting uclibc which does not have all features that
-glibc might have
-
-Upstream-Status: Denied [no desire for uclibc support]
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- configure.ac | 18 ++++++++++++++++++
- 1 file changed, 18 insertions(+)
-
-diff --git a/configure.ac b/configure.ac
-index 7f6b3b9..7c4b5a2 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -110,6 +110,24 @@ AC_PATH_PROG([UMOUNT_PATH], [umount], [/usr/bin/umount], [$PATH:/usr/sbin:/sbin]
- 
- AS_IF([! ln --relative --help > /dev/null 2>&1], [AC_MSG_ERROR([*** ln doesn't support --relative ***])])
- 
-+# check for few functions not implemented in uClibc
-+
-+AC_CHECK_FUNCS_ONCE(mkostemp execvpe posix_fallocate)
-+
-+# check for %ms format support - assume always no if cross compiling
-+
-+AC_MSG_CHECKING([whether %ms format is supported by *scanf])
-+
-+AC_LINK_IFELSE(
-+	[AC_LANG_PROGRAM([
-+	 #include <stdio.h>
-+	 ],[
-+	    char *buf1, *buf2, *buf3, str="1 2.3 abcde" ;
-+	    int rc = sscanf(str, "%ms %ms %ms", &buf1, &buf2, &buf3) ;
-+	    return (rc==3)?0:1;])],
-+	[AC_DEFINE([HAVE_MSFORMAT], [1], [Define if %ms format is supported by *scanf.])],
-+	[AC_MSG_RESULT([no])])
-+
- M4_DEFINES=
- 
- AC_CHECK_TOOL(OBJCOPY, objcopy)
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0007-check-for-missing-canonicalize_file_name.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0007-check-for-missing-canonicalize_file_name.patch
new file mode 100644
index 0000000..5234c59
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0007-check-for-missing-canonicalize_file_name.patch
@@ -0,0 +1,63 @@
+From 05dffe67919ffc72be5c017bc6cf82f164b2e8f9 Mon Sep 17 00:00:00 2001
+From: Emil Renner Berthing <systemd@esmil.dk>
+Date: Mon, 23 Oct 2017 11:42:03 -0700
+Subject: [PATCH 07/12] check for missing canonicalize_file_name
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ configure.ac                | 2 ++
+ src/basic/missing.h         | 1 +
+ src/basic/missing_syscall.h | 6 ++++++
+ 3 files changed, 9 insertions(+)
+
+diff --git a/configure.ac b/configure.ac
+index cd035a971..3674190fb 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -333,6 +333,7 @@ AC_CHECK_DECLS([
+         renameat2,
+         kcmp,
+         keyctl,
++        canonicalize_file_name,
+         LO_FLAGS_PARTSCAN,
+         copy_file_range,
+         explicit_bzero],
+@@ -343,6 +344,7 @@ AC_CHECK_DECLS([
+ #include <fcntl.h>
+ #include <sched.h>
+ #include <string.h>
++#include <stdlib.h>
+ #include <linux/loop.h>
+ ]])
+ 
+diff --git a/src/basic/missing.h b/src/basic/missing.h
+index 671f341c6..8ae4964e1 100644
+--- a/src/basic/missing.h
++++ b/src/basic/missing.h
+@@ -1246,3 +1246,4 @@ struct ethtool_link_settings {
+ #endif
+ 
+ #include "missing_syscall.h"
++
+diff --git a/src/basic/missing_syscall.h b/src/basic/missing_syscall.h
+index 898116c7b..4d44ee4fa 100644
+--- a/src/basic/missing_syscall.h
++++ b/src/basic/missing_syscall.h
+@@ -28,6 +28,12 @@ static inline int pivot_root(const char *new_root, const char *put_old) {
+ }
+ #endif
+ 
++#if !HAVE_DECL_CANONICALIZE_FILE_NAME
++static inline char *canonicalize_file_name(const char *path) {
++        return realpath(path, NULL);
++}
++#endif
++
+ /* ======================================================================= */
+ 
+ #if !HAVE_DECL_MEMFD_CREATE
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0007-use-lnr-wrapper-instead-of-looking-for-relative-opti.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0007-use-lnr-wrapper-instead-of-looking-for-relative-opti.patch
index fad69a5..bc92db7 100644
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0007-use-lnr-wrapper-instead-of-looking-for-relative-opti.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0007-use-lnr-wrapper-instead-of-looking-for-relative-opti.patch
@@ -12,10 +12,10 @@
  configure.ac | 2 --
  2 files changed, 1 insertion(+), 3 deletions(-)
 
-diff --git a/Makefile.am b/Makefile.am
-index 29ed1dd..02f4017 100644
---- a/Makefile.am
-+++ b/Makefile.am
+Index: git/Makefile.am
+===================================================================
+--- git.orig/Makefile.am
++++ git/Makefile.am
 @@ -320,7 +320,7 @@ define install-relative-aliases
  	while [ -n "$$1" ]; do \
  		$(MKDIR_P) `dirname $(DESTDIR)$$dir/$$2` && \
@@ -25,19 +25,16 @@
  		shift 2 || exit $$?; \
  	done
  endef
-diff --git a/configure.ac b/configure.ac
-index 7c4b5a2..b10c952 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -108,8 +108,6 @@ AC_PATH_PROG([SULOGIN], [sulogin], [/usr/sbin/sulogin], [$PATH:/usr/sbin:/sbin])
+Index: git/configure.ac
+===================================================================
+--- git.orig/configure.ac
++++ git/configure.ac
+@@ -110,8 +110,6 @@ AC_PATH_PROG([SULOGIN], [sulogin], [/usr
  AC_PATH_PROG([MOUNT_PATH], [mount], [/usr/bin/mount], [$PATH:/usr/sbin:/sbin])
  AC_PATH_PROG([UMOUNT_PATH], [umount], [/usr/bin/umount], [$PATH:/usr/sbin:/sbin])
  
 -AS_IF([! ln --relative --help > /dev/null 2>&1], [AC_MSG_ERROR([*** ln doesn't support --relative ***])])
 -
- # check for few functions not implemented in uClibc
+ M4_DEFINES=
  
- AC_CHECK_FUNCS_ONCE(mkostemp execvpe posix_fallocate)
--- 
-2.10.2
-
+ AC_CHECK_TOOL(OBJCOPY, objcopy)
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0008-Do-not-enable-nss-tests.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0008-Do-not-enable-nss-tests.patch
new file mode 100644
index 0000000..67a4f8e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0008-Do-not-enable-nss-tests.patch
@@ -0,0 +1,35 @@
+From 48e7c0f5b2f5d777a16ac5584dc4f50f1dfa832c Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 23 Oct 2017 12:27:53 -0700
+Subject: [PATCH 08/12] Do not enable nss tests
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ Makefile.am | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Makefile.am b/Makefile.am
+index 3cc8f3451..df20a9a11 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -5290,6 +5290,7 @@ EXTRA_DIST += \
+ 	src/timesync/timesyncd.conf.in
+ 
+ # ------------------------------------------------------------------------------
++if ENABLE_NSS_SYSTEMD
+ test_nss_SOURCES = \
+ 	src/test/test-nss.c
+ 
+@@ -5302,7 +5303,6 @@ manual_tests += \
+ 	test-nss
+ 
+ # ------------------------------------------------------------------------------
+-if ENABLE_NSS_SYSTEMD
+ libnss_systemd_la_SOURCES = \
+ 	src/nss-systemd/nss-systemd.sym \
+ 	src/nss-systemd/nss-systemd.c
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0008-nspawn-Use-execvpe-only-when-libc-supports-it.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0008-nspawn-Use-execvpe-only-when-libc-supports-it.patch
deleted file mode 100644
index 586b5aa..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0008-nspawn-Use-execvpe-only-when-libc-supports-it.patch
+++ /dev/null
@@ -1,41 +0,0 @@
-From 96026a3763264eb41a2c3e374f232f6e543284a8 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 9 Nov 2016 19:33:49 -0800
-Subject: [PATCH 08/19] nspawn: Use execvpe only when libc supports it
-
-Upstream-Status: Denied [no desire for uclibc support]
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- src/nspawn/nspawn.c | 7 +++++++
- 1 file changed, 7 insertions(+)
-
-diff --git a/src/nspawn/nspawn.c b/src/nspawn/nspawn.c
-index 9b9ae90..19b47cd 100644
---- a/src/nspawn/nspawn.c
-+++ b/src/nspawn/nspawn.c
-@@ -123,6 +123,8 @@ typedef enum LinkJournal {
-         LINK_GUEST
- } LinkJournal;
- 
-+#include "config.h"
-+
- static char *arg_directory = NULL;
- static char *arg_template = NULL;
- static char *arg_chdir = NULL;
-@@ -2871,7 +2873,12 @@ static int inner_child(
-                 a[0] = (char*) "/sbin/init";
-                 execve(a[0], a, env_use);
-         } else if (!strv_isempty(arg_parameters))
-+#ifdef HAVE_EXECVPE
-                 execvpe(arg_parameters[0], arg_parameters, env_use);
-+#else
-+                environ = env_use;
-+                execvp(arg_parameters[0], arg_parameters);
-+#endif /* HAVE_EXECVPE */
-         else {
-                 if (!arg_chdir)
-                         /* If we cannot change the directory, we'll end up in /, that is expected. */
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0009-test-hexdecoct.c-Include-missing.h-form-strndupa.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0009-test-hexdecoct.c-Include-missing.h-form-strndupa.patch
new file mode 100644
index 0000000..d3694dc
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0009-test-hexdecoct.c-Include-missing.h-form-strndupa.patch
@@ -0,0 +1,27 @@
+From 75f4e7f167de533a160ee1af2a03fba4c5a5ffc6 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 23 Oct 2017 12:33:22 -0700
+Subject: [PATCH 09/12] test-hexdecoct.c: Include missing.h for strndupa
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/test/test-hexdecoct.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/src/test/test-hexdecoct.c b/src/test/test-hexdecoct.c
+index fcae427e7..5eb5e2ed7 100644
+--- a/src/test/test-hexdecoct.c
++++ b/src/test/test-hexdecoct.c
+@@ -21,6 +21,7 @@
+ #include "hexdecoct.h"
+ #include "macro.h"
+ #include "string-util.h"
++#include "missing.h"
+ 
+ static void test_hexchar(void) {
+         assert_se(hexchar(0xa) == 'a');
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0009-util-bypass-unimplemented-_SC_PHYS_PAGES-system-conf.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0009-util-bypass-unimplemented-_SC_PHYS_PAGES-system-conf.patch
deleted file mode 100644
index f150bb0..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0009-util-bypass-unimplemented-_SC_PHYS_PAGES-system-conf.patch
+++ /dev/null
@@ -1,49 +0,0 @@
-From 085c8b6f253726ad547e7be84ff3f2b99701488b Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 9 Nov 2016 19:38:07 -0800
-Subject: [PATCH 09/19] util: bypass unimplemented _SC_PHYS_PAGES system
- configuration API on uclibc
-
-Upstream-Status: Inappropriate [uclibc-specific]
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- src/basic/util.c | 15 +++++++++++++++
- 1 file changed, 15 insertions(+)
-
-diff --git a/src/basic/util.c b/src/basic/util.c
-index c1b5ca1..4c62d43 100644
---- a/src/basic/util.c
-+++ b/src/basic/util.c
-@@ -742,6 +742,20 @@ uint64_t physical_memory(void) {
-          * In order to support containers nicely that have a configured memory limit we'll take the minimum of the
-          * physically reported amount of memory and the limit configured for the root cgroup, if there is any. */
- 
-+#ifdef __UCLIBC__
-+        char line[128];
-+        FILE *f = fopen("/proc/meminfo", "r");
-+        if (f == NULL)
-+                return 0;
-+        while (!feof(f) && fgets(line, sizeof(line)-1, f)) {
-+                if (sscanf(line, "MemTotal: %li kB", &mem) == 1) {
-+                        mem *= 1024;
-+                        break;
-+                }
-+        }
-+        fclose(f);
-+        return (uint64_t) mem;
-+#else
-         sc = sysconf(_SC_PHYS_PAGES);
-         assert(sc > 0);
- 
-@@ -762,6 +776,7 @@ uint64_t physical_memory(void) {
-         lim *= ps;
- 
-         return MIN(mem, lim);
-+#endif
- }
- 
- uint64_t physical_memory_scale(uint64_t v, uint64_t max) {
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0010-test-sizeof.c-Disable-tests-for-missing-typedefs-in-.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0010-test-sizeof.c-Disable-tests-for-missing-typedefs-in-.patch
new file mode 100644
index 0000000..808c83a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0010-test-sizeof.c-Disable-tests-for-missing-typedefs-in-.patch
@@ -0,0 +1,49 @@
+From 6e9d2bcaa6f886b2384c1c35a04e4ebc148aea68 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 23 Oct 2017 12:40:25 -0700
+Subject: [PATCH 10/12] test-sizeof.c: Disable tests for missing typedefs in
+ musl
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/test/test-sizeof.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/src/test/test-sizeof.c b/src/test/test-sizeof.c
+index 269adfd18..ba7855dff 100644
+--- a/src/test/test-sizeof.c
++++ b/src/test/test-sizeof.c
+@@ -18,7 +18,6 @@
+ ***/
+ 
+ #include <stdio.h>
+-
+ #include "time-util.h"
+ 
+ /* Print information about various types. Useful when diagnosing
+@@ -48,8 +47,10 @@ int main(void) {
+         info(unsigned);
+         info(long unsigned);
+         info(long long unsigned);
++#ifdef __GLIBC__
+         info(__syscall_ulong_t);
+         info(__syscall_slong_t);
++#endif
+ 
+         info(float);
+         info(double);
+@@ -59,7 +60,9 @@ int main(void) {
+         info(ssize_t);
+         info(time_t);
+         info(usec_t);
++#ifdef __GLIBC__
+         info(__time_t);
++#endif
+ 
+         info(enum Enum);
+         info(enum BigEnum);
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0011-don-t-use-glibc-specific-qsort_r.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0011-don-t-use-glibc-specific-qsort_r.patch
new file mode 100644
index 0000000..7cfe829
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0011-don-t-use-glibc-specific-qsort_r.patch
@@ -0,0 +1,105 @@
+From 2eb45f5a0a8bfb8bdca084587ad28e5001f3cc4b Mon Sep 17 00:00:00 2001
+From: Emil Renner Berthing <systemd@esmil.dk>
+Date: Thu, 18 Sep 2014 15:24:56 +0200
+Subject: [PATCH 11/12] don't use glibc-specific qsort_r
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/hwdb/hwdb.c         | 18 +++++++++++-------
+ src/udev/udevadm-hwdb.c | 16 ++++++++++------
+ 2 files changed, 21 insertions(+), 13 deletions(-)
+
+diff --git a/src/hwdb/hwdb.c b/src/hwdb/hwdb.c
+index 793398ca6..669b00818 100644
+--- a/src/hwdb/hwdb.c
++++ b/src/hwdb/hwdb.c
+@@ -151,13 +151,12 @@ static void trie_free(struct trie *trie) {
+ 
+ DEFINE_TRIVIAL_CLEANUP_FUNC(struct trie*, trie_free);
+ 
+-static int trie_values_cmp(const void *v1, const void *v2, void *arg) {
++static struct trie *trie_node_add_value_trie;
++static int trie_values_cmp(const void *v1, const void *v2) {
+         const struct trie_value_entry *val1 = v1;
+         const struct trie_value_entry *val2 = v2;
+-        struct trie *trie = arg;
+-
+-        return strcmp(trie->strings->buf + val1->key_off,
+-                      trie->strings->buf + val2->key_off);
++        return strcmp(trie_node_add_value_trie->strings->buf + val1->key_off,
++                      trie_node_add_value_trie->strings->buf + val2->key_off);
+ }
+ 
+ static int trie_node_add_value(struct trie *trie, struct trie_node *node,
+@@ -182,7 +181,10 @@ static int trie_node_add_value(struct trie *trie, struct trie_node *node,
+                         .value_off = v,
+                 };
+ 
+-                val = xbsearch_r(&search, node->values, node->values_count, sizeof(struct trie_value_entry), trie_values_cmp, trie);
++                trie_node_add_value_trie = trie;
++                val = bsearch(&search, node->values, node->values_count, sizeof(struct trie_value_entry), trie_values_cmp);
++                trie_node_add_value_trie = NULL;
++
+                 if (val) {
+                         /* At this point we have 2 identical properties on the same match-string.
+                          * Since we process files in order, we just replace the previous value.
+@@ -207,7 +209,9 @@ static int trie_node_add_value(struct trie *trie, struct trie_node *node,
+         node->values[node->values_count].file_priority = file_priority;
+         node->values[node->values_count].line_number = line_number;
+         node->values_count++;
+-        qsort_r(node->values, node->values_count, sizeof(struct trie_value_entry), trie_values_cmp, trie);
++        trie_node_add_value_trie = trie;
++        qsort(node->values, node->values_count, sizeof(struct trie_value_entry), trie_values_cmp);
++        trie_node_add_value_trie = NULL;
+         return 0;
+ }
+ 
+diff --git a/src/udev/udevadm-hwdb.c b/src/udev/udevadm-hwdb.c
+index 69b0b9025..fbd213300 100644
+--- a/src/udev/udevadm-hwdb.c
++++ b/src/udev/udevadm-hwdb.c
+@@ -128,13 +128,13 @@ static void trie_node_cleanup(struct trie_node *node) {
+         free(node);
+ }
+ 
+-static int trie_values_cmp(const void *v1, const void *v2, void *arg) {
++static struct trie *trie_node_add_value_trie;
++static int trie_values_cmp(const void *v1, const void *v2) {
+         const struct trie_value_entry *val1 = v1;
+         const struct trie_value_entry *val2 = v2;
+-        struct trie *trie = arg;
+ 
+-        return strcmp(trie->strings->buf + val1->key_off,
+-                      trie->strings->buf + val2->key_off);
++        return strcmp(trie_node_add_value_trie->strings->buf + val1->key_off,
++                      trie_node_add_value_trie->strings->buf + val2->key_off);
+ }
+ 
+ static int trie_node_add_value(struct trie *trie, struct trie_node *node,
+@@ -155,7 +155,9 @@ static int trie_node_add_value(struct trie *trie, struct trie_node *node,
+                         .value_off = v,
+                 };
+ 
+-                val = xbsearch_r(&search, node->values, node->values_count, sizeof(struct trie_value_entry), trie_values_cmp, trie);
++                trie_node_add_value_trie = trie;
++                val = bsearch(&search, node->values, node->values_count, sizeof(struct trie_value_entry), trie_values_cmp);
++                trie_node_add_value_trie = NULL;
+                 if (val) {
+                         /* replace existing earlier key with new value */
+                         val->value_off = v;
+@@ -172,7 +174,9 @@ static int trie_node_add_value(struct trie *trie, struct trie_node *node,
+         node->values[node->values_count].key_off = k;
+         node->values[node->values_count].value_off = v;
+         node->values_count++;
+-        qsort_r(node->values, node->values_count, sizeof(struct trie_value_entry), trie_values_cmp, trie);
++        trie_node_add_value_trie = trie;
++        qsort(node->values, node->values_count, sizeof(struct trie_value_entry), trie_values_cmp);
++        trie_node_add_value_trie = NULL;
+         return 0;
+ }
+ 
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0012-don-t-pass-AT_SYMLINK_NOFOLLOW-flag-to-faccessat.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0012-don-t-pass-AT_SYMLINK_NOFOLLOW-flag-to-faccessat.patch
new file mode 100644
index 0000000..1a6db65
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0012-don-t-pass-AT_SYMLINK_NOFOLLOW-flag-to-faccessat.patch
@@ -0,0 +1,99 @@
+From 9621618c701a2d5eb3e26f40c68354d4dfb8f872 Mon Sep 17 00:00:00 2001
+From: Andre McCurdy <armccurdy@gmail.com>
+Date: Tue, 10 Oct 2017 14:33:30 -0700
+Subject: [PATCH 12/12] don't pass AT_SYMLINK_NOFOLLOW flag to faccessat()
+
+Avoid using AT_SYMLINK_NOFOLLOW flag. It doesn't seem like the right
+thing to do and it's not portable (not supported by musl). See:
+
+  http://lists.landley.net/pipermail/toybox-landley.net/2014-September/003610.html
+  http://www.openwall.com/lists/musl/2015/02/05/2
+
+Note that laccess() is never passing AT_EACCESS so a lot of the
+discussion in the links above doesn't apply. Note also that
+(currently) all systemd callers of laccess() pass mode as F_OK, so
+only check for existence of a file, not access permissions.
+Therefore, in this case, the only distinction between faccessat()
+with (flag == 0) and (flag == AT_SYMLINK_NOFOLLOW) is the behaviour
+for broken symlinks; laccess() on a broken symlink will succeed with
+(flag == AT_SYMLINK_NOFOLLOW) and fail (flag == 0).
+
+The laccess() macro was added to systemd some time ago and it's not
+clear if or why it needs to return success for broken symlinks. Maybe
+just historical and not actually necessary or desired behaviour?
+
+Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/basic/fs-util.h          | 22 +++++++++++++++++++++-
+ src/shared/base-filesystem.c |  6 +++---
+ 2 files changed, 24 insertions(+), 4 deletions(-)
+
+diff --git a/src/basic/fs-util.h b/src/basic/fs-util.h
+index 094acf179..cdbc0ae72 100644
+--- a/src/basic/fs-util.h
++++ b/src/basic/fs-util.h
+@@ -48,7 +48,27 @@ int fchmod_umask(int fd, mode_t mode);
+ 
+ int fd_warn_permissions(const char *path, int fd);
+ 
+-#define laccess(path, mode) faccessat(AT_FDCWD, (path), (mode), AT_SYMLINK_NOFOLLOW)
++/*
++   Avoid using AT_SYMLINK_NOFOLLOW flag. It doesn't seem like the right thing to
++   do and it's not portable (not supported by musl). See:
++
++     http://lists.landley.net/pipermail/toybox-landley.net/2014-September/003610.html
++     http://www.openwall.com/lists/musl/2015/02/05/2
++
++   Note that laccess() is never passing AT_EACCESS so a lot of the discussion in
++   the links above doesn't apply. Note also that (currently) all systemd callers
++   of laccess() pass mode as F_OK, so only check for existence of a file, not
++   access permissions. Therefore, in this case, the only distinction between
++   faccessat() with (flag == 0) and (flag == AT_SYMLINK_NOFOLLOW) is the
++   behaviour for broken symlinks; laccess() on a broken symlink will succeed
++   with (flag == AT_SYMLINK_NOFOLLOW) and fail (flag == 0).
++
++   The laccess() macro was added to systemd some time ago and it's not clear if
++   or why it needs to return success for broken symlinks. Maybe just historical
++   and not actually necessary or desired behaviour?
++*/
++
++#define laccess(path, mode) faccessat(AT_FDCWD, (path), (mode), 0)
+ 
+ int touch_file(const char *path, bool parents, usec_t stamp, uid_t uid, gid_t gid, mode_t mode);
+ int touch(const char *path);
+diff --git a/src/shared/base-filesystem.c b/src/shared/base-filesystem.c
+index 903a18786..2f6052ee7 100644
+--- a/src/shared/base-filesystem.c
++++ b/src/shared/base-filesystem.c
+@@ -70,7 +70,7 @@ int base_filesystem_create(const char *root, uid_t uid, gid_t gid) {
+                 return log_error_errno(errno, "Failed to open root file system: %m");
+ 
+         for (i = 0; i < ELEMENTSOF(table); i ++) {
+-                if (faccessat(fd, table[i].dir, F_OK, AT_SYMLINK_NOFOLLOW) >= 0)
++                if (faccessat(fd, table[i].dir, F_OK, 0) >= 0)
+                         continue;
+ 
+                 if (table[i].target) {
+@@ -78,7 +78,7 @@ int base_filesystem_create(const char *root, uid_t uid, gid_t gid) {
+ 
+                         /* check if one of the targets exists */
+                         NULSTR_FOREACH(s, table[i].target) {
+-                                if (faccessat(fd, s, F_OK, AT_SYMLINK_NOFOLLOW) < 0)
++                                if (faccessat(fd, s, F_OK, 0) < 0)
+                                         continue;
+ 
+                                 /* check if a specific file exists at the target path */
+@@ -89,7 +89,7 @@ int base_filesystem_create(const char *root, uid_t uid, gid_t gid) {
+                                         if (!p)
+                                                 return log_oom();
+ 
+-                                        if (faccessat(fd, p, F_OK, AT_SYMLINK_NOFOLLOW) < 0)
++                                        if (faccessat(fd, p, F_OK, 0) < 0)
+                                                 continue;
+                                 }
+ 
+-- 
+2.14.2
+
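The behavioural difference described in the patch is easy to reproduce in isolation. The sketch below uses hypothetical laccess_nofollow()/laccess_follow() macros, not systemd helpers: it creates a dangling symlink and probes it both ways. On glibc the AT_SYMLINK_NOFOLLOW variant reports the link as existing while the flag-0 variant follows it and fails with ENOENT; musl does not support the flag at all, which is the portability problem the patch removes.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative variants of the laccess() macro from the patch. */
    #define laccess_nofollow(path, mode) faccessat(AT_FDCWD, (path), (mode), AT_SYMLINK_NOFOLLOW)
    #define laccess_follow(path, mode)   faccessat(AT_FDCWD, (path), (mode), 0)

    int main(void) {
            unlink("dangling");
            if (symlink("/no/such/target", "dangling") < 0)
                    return 1;

            /* glibc: prints 0 (the dangling link itself "exists") */
            printf("nofollow: %d\n", laccess_nofollow("dangling", F_OK));
            /* glibc: prints -1, errno == ENOENT (the link target is missing) */
            printf("follow:   %d\n", laccess_follow("dangling", F_OK));

            unlink("dangling");
            return 0;
    }
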
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0012-rules-whitelist-hd-devices.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0012-rules-whitelist-hd-devices.patch
index 8666bdc..eb380ce 100644
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0012-rules-whitelist-hd-devices.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0012-rules-whitelist-hd-devices.patch
@@ -1,7 +1,7 @@
-From 8cc1ae11f54dcc38ee2168b0f99703b835dd3942 Mon Sep 17 00:00:00 2001
+From ab5a27040133f7cdf062ac8cfeb94e081d3567b3 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 9 Nov 2016 19:41:13 -0800
-Subject: [PATCH 12/19] rules: whitelist hd* devices
+Subject: [PATCH 07/14] rules: whitelist hd* devices
 
 qemu by default emulates IDE and the linux-yocto kernel(s) use
 CONFIG_IDE instead of the more modern libsata, so disks appear as
@@ -11,23 +11,24 @@
 
 Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
 ---
  rules/60-persistent-storage.rules | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/rules/60-persistent-storage.rules b/rules/60-persistent-storage.rules
-index c13d05c..b14fbed 100644
+index d2745f65f..63f472be8 100644
 --- a/rules/60-persistent-storage.rules
 +++ b/rules/60-persistent-storage.rules
 @@ -7,7 +7,7 @@ ACTION=="remove", GOTO="persistent_storage_end"
  ENV{UDEV_DISABLE_PERSISTENT_STORAGE_RULES_FLAG}=="1", GOTO="persistent_storage_end"
  
  SUBSYSTEM!="block", GOTO="persistent_storage_end"
--KERNEL!="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|sr*|vd*|xvd*|bcache*|cciss*|dasd*|ubd*|scm*|pmem*", GOTO="persistent_storage_end"
-+KERNEL!="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|hd*|sd*|sr*|vd*|xvd*|bcache*|cciss*|dasd*|ubd*|scm*|pmem*", GOTO="persistent_storage_end"
+-KERNEL!="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|sr*|vd*|xvd*|bcache*|cciss*|dasd*|ubd*|scm*|pmem*|nbd*", GOTO="persistent_storage_end"
++KERNEL!="loop*|mmcblk*[0-9]|msblk*[0-9]|mspblk*[0-9]|nvme*|sd*|sr*|vd*|xvd*|bcache*|cciss*|dasd*|ubd*|scm*|pmem*|nbd*|hd*", GOTO="persistent_storage_end"
  
  # ignore partitions that span the entire disk
  TEST=="whole_disk", GOTO="persistent_storage_end"
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0013-Make-root-s-home-directory-configurable.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0013-Make-root-s-home-directory-configurable.patch
index 2b33337..aeebbfb 100644
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0013-Make-root-s-home-directory-configurable.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0013-Make-root-s-home-directory-configurable.patch
@@ -1,7 +1,7 @@
-From 79e64a07840e0d97d66e46111f1c086bf83981b7 Mon Sep 17 00:00:00 2001
+From 479e1f4aa2b9f1c911a4d0dd18e222d241a978ea Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 9 Nov 2016 20:35:30 -0800
-Subject: [PATCH 13/19] Make root's home directory configurable
+Subject: [PATCH 42/48] Make root's home directory configurable
 
 OpenEmbedded has a configurable home directory for root. Allow
 systemd to be built using its idea of what root's home directory
@@ -14,6 +14,7 @@
 
 Signed-off-by: Dan McGregor <dan.mcgregor@usask.ca>
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
 ---
  Makefile.am                | 2 ++
  configure.ac               | 7 +++++++
@@ -24,18 +25,18 @@
  6 files changed, 17 insertions(+), 8 deletions(-)
 
 diff --git a/Makefile.am b/Makefile.am
-index 420e0e0..3010b01 100644
+index 1bcd932c2..c2b4a99d2 100644
 --- a/Makefile.am
 +++ b/Makefile.am
-@@ -213,6 +213,7 @@ AM_CPPFLAGS = \
+@@ -226,6 +226,7 @@ AM_CPPFLAGS = \
  	-DLIBDIR=\"$(libdir)\" \
  	-DROOTLIBDIR=\"$(rootlibdir)\" \
  	-DROOTLIBEXECDIR=\"$(rootlibexecdir)\" \
 +	-DROOTHOMEDIR=\"$(roothomedir)\" \
- 	-DTEST_DIR=\"$(abs_top_srcdir)/test\" \
  	-I $(top_srcdir)/src \
  	-I $(top_builddir)/src/basic \
-@@ -6057,6 +6058,7 @@ substitutions = \
+ 	-I $(top_srcdir)/src/basic \
+@@ -6356,6 +6357,7 @@ substitutions = \
         '|rootlibdir=$(rootlibdir)|' \
         '|rootlibexecdir=$(rootlibexecdir)|' \
         '|rootbindir=$(rootbindir)|' \
@@ -44,10 +45,10 @@
         '|SYSTEMCTL=$(rootbindir)/systemctl|' \
         '|SYSTEMD_NOTIFY=$(rootbindir)/systemd-notify|' \
 diff --git a/configure.ac b/configure.ac
-index b10c952..dfc0bd3 100644
+index 0354ffe6a..b53ca1f1a 100644
 --- a/configure.ac
 +++ b/configure.ac
-@@ -1513,6 +1513,11 @@ AC_ARG_WITH([rootlibdir],
+@@ -1641,6 +1641,11 @@ AC_ARG_WITH([rootlibdir],
          [with_rootlibdir=${libdir}])
  AX_NORMALIZE_PATH([with_rootlibdir])
  
@@ -57,29 +58,29 @@
 +        [with_roothomedir=/root])
 +
  AC_ARG_WITH([pamlibdir],
-         AS_HELP_STRING([--with-pamlibdir=DIR], [Directory for PAM modules]),
+         AS_HELP_STRING([--with-pamlibdir=DIR], [directory for PAM modules]),
          [],
-@@ -1598,6 +1603,7 @@ AC_SUBST([pamlibdir], [$with_pamlibdir])
- AC_SUBST([pamconfdir], [$with_pamconfdir])
+@@ -1733,6 +1738,7 @@ AC_SUBST([pamconfdir], [$with_pamconfdir])
+ AC_SUBST([rpmmacrosdir], [$with_rpmmacrosdir])
  AC_SUBST([rootprefix], [$with_rootprefix])
  AC_SUBST([rootlibdir], [$with_rootlibdir])
 +AC_SUBST([roothomedir], [$with_roothomedir])
  
  AC_CONFIG_FILES([
          Makefile
-@@ -1688,6 +1694,7 @@ AC_MSG_RESULT([
+@@ -1829,6 +1835,7 @@ AC_MSG_RESULT([
          includedir:                        ${includedir}
          lib dir:                           ${libdir}
          rootlib dir:                       ${with_rootlibdir}
 +        root home dir:                     ${with_roothomedir}
          SysV init scripts:                 ${SYSTEM_SYSVINIT_PATH}
          SysV rc?.d directories:            ${SYSTEM_SYSVRCND_PATH}
-         Build Python:                      ${PYTHON}
+         build Python:                      ${PYTHON}
 diff --git a/src/basic/user-util.c b/src/basic/user-util.c
-index 938533d..3f9fdc4 100644
+index c619dad52..662682adf 100644
 --- a/src/basic/user-util.c
 +++ b/src/basic/user-util.c
-@@ -127,7 +127,7 @@ int get_user_creds(
+@@ -129,7 +129,7 @@ int get_user_creds(
                          *gid = 0;
  
                  if (home)
@@ -88,7 +89,7 @@
  
                  if (shell)
                          *shell = "/bin/sh";
-@@ -387,7 +387,7 @@ int get_home_dir(char **_h) {
+@@ -389,7 +389,7 @@ int get_home_dir(char **_h) {
          /* Hardcode home directory for root to avoid NSS */
          u = getuid();
          if (u == 0) {
@@ -98,10 +99,10 @@
                          return -ENOMEM;
  
 diff --git a/src/nspawn/nspawn.c b/src/nspawn/nspawn.c
-index 19b47cd..e42bf19 100644
+index 8a5fedd4b..7b01ec078 100644
 --- a/src/nspawn/nspawn.c
 +++ b/src/nspawn/nspawn.c
-@@ -2798,7 +2798,7 @@ static int inner_child(
+@@ -2291,7 +2291,7 @@ static int inner_child(
          if (envp[n_env])
                  n_env++;
  
@@ -110,8 +111,8 @@
              (asprintf((char**)(envp + n_env++), "USER=%s", arg_user ? arg_user : "root") < 0) ||
              (asprintf((char**)(envp + n_env++), "LOGNAME=%s", arg_user ? arg_user : "root") < 0))
                  return log_oom();
-@@ -2882,7 +2882,7 @@ static int inner_child(
-         else {
+@@ -2373,7 +2373,7 @@ static int inner_child(
+         } else {
                  if (!arg_chdir)
                          /* If we cannot change the directory, we'll end up in /, that is expected. */
 -                        (void) chdir(home ?: "/root");
@@ -120,7 +121,7 @@
                  execle("/bin/bash", "-bash", NULL, env_use);
                  execle("/bin/sh", "-sh", NULL, env_use);
 diff --git a/units/emergency.service.in b/units/emergency.service.in
-index da68eb8..e25f879 100644
+index e9eb238b9..32588e48a 100644
 --- a/units/emergency.service.in
 +++ b/units/emergency.service.in
 @@ -15,8 +15,8 @@ Conflicts=syslog.socket
@@ -131,11 +132,11 @@
 -WorkingDirectory=-/root
 +Environment=HOME=@roothomedir@
 +WorkingDirectory=-@roothomedir@
- ExecStartPre=-/bin/plymouth --wait quit
- ExecStartPre=-/bin/echo -e 'You are in emergency mode. After logging in, type "journalctl -xb" to view\\nsystem logs, "systemctl reboot" to reboot, "systemctl default" or ^D to\\ntry again to boot into default mode.'
- ExecStart=-/bin/sh -c "@SULOGIN@; @SYSTEMCTL@ --job-mode=fail --no-block default"
+ ExecStart=-@rootlibexecdir@/systemd-sulogin-shell emergency
+ Type=idle
+ StandardInput=tty-force
 diff --git a/units/rescue.service.in b/units/rescue.service.in
-index 5feff69..a83439e 100644
+index 4ab66f485..bd9898f2c 100644
 --- a/units/rescue.service.in
 +++ b/units/rescue.service.in
 @@ -14,8 +14,8 @@ After=sysinit.target plymouth-start.service
@@ -146,9 +147,9 @@
 -WorkingDirectory=-/root
 +Environment=HOME=@roothomedir@
 +WorkingDirectory=-@roothomedir@
- ExecStartPre=-/bin/plymouth --wait quit
- ExecStartPre=-/bin/echo -e 'You are in rescue mode. After logging in, type "journalctl -xb" to view\\nsystem logs, "systemctl reboot" to reboot, "systemctl default" or ^D to\\nboot into default mode.'
- ExecStart=-/bin/sh -c "@SULOGIN@; @SYSTEMCTL@ --job-mode=fail --no-block default"
+ ExecStart=-@rootlibexecdir@/systemd-sulogin-shell rescue
+ Type=idle
+ StandardInput=tty-force
 -- 
-2.10.2
+2.13.2
 
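For reference, the configure option added above flows into the build as a -DROOTHOMEDIR="..." preprocessor define (see the Makefile.am hunk), which the C code can consult instead of a hard-coded "/root". A minimal sketch of how such a define is typically consumed; the fallback default exists only so the sketch builds on its own:

    #include <stdio.h>

    /* Normally injected by the build system via -DROOTHOMEDIR with the value
     * passed to --with-roothomedir; this default is only for the standalone sketch. */
    #ifndef ROOTHOMEDIR
    #define ROOTHOMEDIR "/root"
    #endif

    int main(void) {
            printf("root's home directory: %s\n", ROOTHOMEDIR);
            return 0;
    }
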
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0013-comparison_fn_t-is-glibc-specific-use-raw-signature-.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0013-comparison_fn_t-is-glibc-specific-use-raw-signature-.patch
new file mode 100644
index 0000000..e219981
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0013-comparison_fn_t-is-glibc-specific-use-raw-signature-.patch
@@ -0,0 +1,31 @@
+From 4b6733544beb662a0f77310302fae1fb7b76d167 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 12 Sep 2015 18:53:31 +0000
+Subject: [PATCH 13/13] comparison_fn_t is glibc specific, use raw signature in
+ function pointer
+
+Make it work with musl, where comparison_fn_t is not provided.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/basic/util.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/basic/util.h b/src/basic/util.h
+index c7da6c39b..87f62b891 100644
+--- a/src/basic/util.h
++++ b/src/basic/util.h
+@@ -98,7 +98,7 @@ void *xbsearch_r(const void *key, const void *base, size_t nmemb, size_t size,
+  * Normal qsort requires base to be nonnull. Here were require
+  * that only if nmemb > 0.
+  */
+-static inline void qsort_safe(void *base, size_t nmemb, size_t size, comparison_fn_t compar) {
++static inline void qsort_safe(void *base, size_t nmemb, size_t size, int (*compar)(const void *, const void *)) {
+         if (nmemb <= 1)
+                 return;
+ 
+-- 
+2.14.2
+
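For context, comparison_fn_t is a glibc-only typedef for int (*)(const void *, const void *); musl's headers do not define it, so the patch spells the signature out. A self-contained sketch of the same idea, using an illustrative qsort_safe_sketch() in place of the real systemd helper:

    #include <stdlib.h>
    #include <string.h>

    /* Spell out the comparator type instead of relying on glibc's
     * comparison_fn_t typedef, which musl does not provide. */
    static inline void qsort_safe_sketch(void *base, size_t nmemb, size_t size,
                                         int (*compar)(const void *, const void *)) {
            /* Plain qsort() requires base to be non-NULL; skip trivial inputs. */
            if (nmemb <= 1)
                    return;
            qsort(base, nmemb, size, compar);
    }

    static int cmp_str(const void *a, const void *b) {
            return strcmp(*(const char *const *)a, *(const char *const *)b);
    }

    int main(void) {
            const char *names[] = { "udev", "systemd", "bitbake" };
            qsort_safe_sketch(names, 3, sizeof(names[0]), cmp_str);
            return 0;
    }
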
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0015-Revert-udev-remove-userspace-firmware-loading-suppor.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0015-Revert-udev-remove-userspace-firmware-loading-suppor.patch
index f31d211..95871bb 100644
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0015-Revert-udev-remove-userspace-firmware-loading-suppor.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0015-Revert-udev-remove-userspace-firmware-loading-suppor.patch
@@ -1,7 +1,7 @@
-From 4d28d9a7d8d69fb429955d770e53e7a81640da24 Mon Sep 17 00:00:00 2001
+From 7883985a3a78677e9a1d5d61fe7fa8badf39f565 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 9 Nov 2016 20:45:23 -0800
-Subject: [PATCH 15/19] Revert "udev: remove userspace firmware loading
+Subject: [PATCH 10/14] Revert "udev: remove userspace firmware loading
  support"
 
 This reverts commit be2ea723b1d023b3d385d3b791ee4607cbfb20ca.
@@ -11,23 +11,24 @@
 
 Signed-off-by: Jonathan Liu <net147@gmail.com>
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
 ---
  Makefile.am                      |  12 +++
- README                           |   6 +-
+ README                           |   4 +-
  TODO                             |   1 +
  configure.ac                     |  18 +++++
  src/udev/udev-builtin-firmware.c | 154 +++++++++++++++++++++++++++++++++++++++
  src/udev/udev-builtin.c          |   3 +
  src/udev/udev.h                  |   6 ++
  src/udev/udevd.c                 |  13 ++++
- 8 files changed, 210 insertions(+), 3 deletions(-)
+ 8 files changed, 209 insertions(+), 2 deletions(-)
  create mode 100644 src/udev/udev-builtin-firmware.c
 
 diff --git a/Makefile.am b/Makefile.am
-index 3010b01..229492a 100644
+index c2b4a99d2..692d7bb95 100644
 --- a/Makefile.am
 +++ b/Makefile.am
-@@ -3791,6 +3791,18 @@ libudev_core_la_LIBADD = \
+@@ -3985,6 +3985,18 @@ libudev_core_la_LIBADD = \
  	$(BLKID_LIBS) \
  	$(KMOD_LIBS)
  
@@ -47,17 +48,10 @@
  libudev_core_la_SOURCES += \
  	src/udev/udev-builtin-kmod.c
 diff --git a/README b/README
-index 9f5bc93..f60ae11 100644
+index 60388eebe..e21976393 100644
 --- a/README
 +++ b/README
-@@ -50,14 +50,14 @@ REQUIREMENTS:
-           CONFIG_PROC_FS
-           CONFIG_FHANDLE (libudev, mount and bind mount handling)
- 
--        udev will fail to work with the legacy sysfs layout:
-+        Udev will fail to work with the legacy sysfs layout:
-           CONFIG_SYSFS_DEPRECATED=n
- 
+@@ -61,8 +61,8 @@ REQUIREMENTS:
          Legacy hotplug slows down the system and confuses udev:
            CONFIG_UEVENT_HELPER_PATH=""
  
@@ -69,10 +63,10 @@
  
          Some udev rules and virtualization detection relies on it:
 diff --git a/TODO b/TODO
-index baaac94..1ab1691 100644
+index 61efa5e9f..67ccac224 100644
 --- a/TODO
 +++ b/TODO
-@@ -658,6 +658,7 @@ Features:
+@@ -740,6 +740,7 @@ Features:
  * initialize the hostname from the fs label of /, if /etc/hostname does not exist?
  
  * udev:
@@ -81,11 +75,11 @@
    - kill scsi_id
    - add trigger --subsystem-match=usb/usb_device device
 diff --git a/configure.ac b/configure.ac
-index dfc0bd3..1de0066 100644
+index b53ca1f1a..1150ca50e 100644
 --- a/configure.ac
 +++ b/configure.ac
-@@ -1394,6 +1394,23 @@ AM_CONDITIONAL(HAVE_MYHOSTNAME, [test "$have_myhostname" = "yes"])
- AC_ARG_ENABLE(hwdb, [AC_HELP_STRING([--disable-hwdb], [disable hardware database support])],
+@@ -1522,6 +1522,23 @@ AM_CONDITIONAL(HAVE_MYHOSTNAME, [test "$have_myhostname" = "yes"])
+ AC_ARG_ENABLE(hwdb, [AS_HELP_STRING([--disable-hwdb], [disable hardware database support])],
         enable_hwdb=$enableval, enable_hwdb=yes)
  AM_CONDITIONAL(ENABLE_HWDB, [test x$enable_hwdb = xyes])
 +AC_ARG_WITH(firmware-path,
@@ -108,17 +102,17 @@
  
  # ------------------------------------------------------------------------------
  have_manpages=no
-@@ -1698,6 +1715,7 @@ AC_MSG_RESULT([
+@@ -1839,6 +1856,7 @@ AC_MSG_RESULT([
          SysV init scripts:                 ${SYSTEM_SYSVINIT_PATH}
          SysV rc?.d directories:            ${SYSTEM_SYSVRCND_PATH}
-         Build Python:                      ${PYTHON}
+         build Python:                      ${PYTHON}
 +        firmware path:                     ${FIRMWARE_PATH}
          PAM modules dir:                   ${with_pamlibdir}
          PAM configuration dir:             ${with_pamconfdir}
-         D-Bus policy dir:                  ${with_dbuspolicydir}
+         RPM macros dir:                    ${with_rpmmacrosdir}
 diff --git a/src/udev/udev-builtin-firmware.c b/src/udev/udev-builtin-firmware.c
 new file mode 100644
-index 0000000..bd8c2fb
+index 000000000..bd8c2fb96
 --- /dev/null
 +++ b/src/udev/udev-builtin-firmware.c
 @@ -0,0 +1,154 @@
@@ -277,7 +271,7 @@
 +        .run_once = true,
 +};
 diff --git a/src/udev/udev-builtin.c b/src/udev/udev-builtin.c
-index e6b36f1..cd9947e 100644
+index e6b36f124..cd9947e2a 100644
 --- a/src/udev/udev-builtin.c
 +++ b/src/udev/udev-builtin.c
 @@ -31,6 +31,9 @@ static const struct udev_builtin *builtins[] = {
@@ -291,10 +285,10 @@
          [UDEV_BUILTIN_INPUT_ID] = &udev_builtin_input_id,
          [UDEV_BUILTIN_KEYBOARD] = &udev_builtin_keyboard,
 diff --git a/src/udev/udev.h b/src/udev/udev.h
-index 8433e8d..d32366d 100644
+index c0cb7eae8..9f0f1cf13 100644
 --- a/src/udev/udev.h
 +++ b/src/udev/udev.h
-@@ -148,6 +148,9 @@ enum udev_builtin_cmd {
+@@ -150,6 +150,9 @@ enum udev_builtin_cmd {
          UDEV_BUILTIN_BLKID,
  #endif
          UDEV_BUILTIN_BTRFS,
@@ -304,7 +298,7 @@
          UDEV_BUILTIN_HWDB,
          UDEV_BUILTIN_INPUT_ID,
          UDEV_BUILTIN_KEYBOARD,
-@@ -176,6 +179,9 @@ struct udev_builtin {
+@@ -178,6 +181,9 @@ struct udev_builtin {
  extern const struct udev_builtin udev_builtin_blkid;
  #endif
  extern const struct udev_builtin udev_builtin_btrfs;
@@ -315,7 +309,7 @@
  extern const struct udev_builtin udev_builtin_input_id;
  extern const struct udev_builtin udev_builtin_keyboard;
 diff --git a/src/udev/udevd.c b/src/udev/udevd.c
-index d336ee0..81e5dc5 100644
+index acbddd418..20347b402 100644
 --- a/src/udev/udevd.c
 +++ b/src/udev/udevd.c
 @@ -125,6 +125,9 @@ struct event {
@@ -353,5 +347,5 @@
                  if (event->devpath[common] == '/') {
                          event->delaying_seqnum = loop_event->seqnum;
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0016-make-test-dir-configurable.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0016-make-test-dir-configurable.patch
deleted file mode 100644
index 10d1df5..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0016-make-test-dir-configurable.patch
+++ /dev/null
@@ -1,65 +0,0 @@
-From 218bbc555a37f9373fbb7f03c744eb65109d3470 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 9 Nov 2016 20:47:37 -0800
-Subject: [PATCH 16/19] make test dir configurable
-
-Upstream-Status: Pending
-
-test maybe be run on target in cross-compile environment, and test dir
-is not the compilation dir, so make it configurable
-
-Signed-off-by: Roy Li <rongqing.li@windriver.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- Makefile.am  | 2 +-
- configure.ac | 7 +++++++
- 2 files changed, 8 insertions(+), 1 deletion(-)
-
-diff --git a/Makefile.am b/Makefile.am
-index 229492a..e997d82 100644
---- a/Makefile.am
-+++ b/Makefile.am
-@@ -214,7 +214,7 @@ AM_CPPFLAGS = \
- 	-DROOTLIBDIR=\"$(rootlibdir)\" \
- 	-DROOTLIBEXECDIR=\"$(rootlibexecdir)\" \
- 	-DROOTHOMEDIR=\"$(roothomedir)\" \
--	-DTEST_DIR=\"$(abs_top_srcdir)/test\" \
-+	-DTEST_DIR=\"$(testdir)/test\" \
- 	-I $(top_srcdir)/src \
- 	-I $(top_builddir)/src/basic \
- 	-I $(top_srcdir)/src/basic \
-diff --git a/configure.ac b/configure.ac
-index 1de0066..b12e320 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -1535,6 +1535,11 @@ AC_ARG_WITH([roothomedir],
-         [],
-         [with_roothomedir=/root])
- 
-+AC_ARG_WITH([testdir],
-+        AS_HELP_STRING([--with-testdir=DIR], [test file directory]),
-+        [],
-+        [with_testdir=${abs_top_srcdir}])
-+
- AC_ARG_WITH([pamlibdir],
-         AS_HELP_STRING([--with-pamlibdir=DIR], [Directory for PAM modules]),
-         [],
-@@ -1621,6 +1626,7 @@ AC_SUBST([pamconfdir], [$with_pamconfdir])
- AC_SUBST([rootprefix], [$with_rootprefix])
- AC_SUBST([rootlibdir], [$with_rootlibdir])
- AC_SUBST([roothomedir], [$with_roothomedir])
-+AC_SUBST([testdir], [$with_testdir])
- 
- AC_CONFIG_FILES([
-         Makefile
-@@ -1712,6 +1718,7 @@ AC_MSG_RESULT([
-         lib dir:                           ${libdir}
-         rootlib dir:                       ${with_rootlibdir}
-         root home dir:                     ${with_roothomedir}
-+        test dir:                          ${with_testdir}
-         SysV init scripts:                 ${SYSTEM_SYSVINIT_PATH}
-         SysV rc?.d directories:            ${SYSTEM_SYSVRCND_PATH}
-         Build Python:                      ${PYTHON}
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0017-remove-duplicate-include-uchar.h.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0017-remove-duplicate-include-uchar.h.patch
index 77dbd6e..d200635 100644
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0017-remove-duplicate-include-uchar.h.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0017-remove-duplicate-include-uchar.h.patch
@@ -6,6 +6,7 @@
 missing.h already includes it
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Upstream-Status: Pending
 ---
  src/basic/escape.h | 1 -
  src/basic/utf8.h   | 1 -
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0018-check-for-uchar.h-in-configure.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0018-check-for-uchar.h-in-configure.patch
index 5824033..067b73f 100644
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0018-check-for-uchar.h-in-configure.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0018-check-for-uchar.h-in-configure.patch
@@ -1,21 +1,23 @@
-From 7cc0b19d244023c7b3e557765b03b7971e047f29 Mon Sep 17 00:00:00 2001
+From 1355457092b02a15c646fc1c72e68b694a86dd99 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Mon, 22 Feb 2016 06:02:38 +0000
-Subject: [PATCH 18/19] check for uchar.h in configure
+Subject: [PATCH 12/14] check for uchar.h in configure
 
 Use ifdef to include uchar.h
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Upstream-Status: Pending
+
 ---
  configure.ac        | 1 +
  src/basic/missing.h | 2 ++
  2 files changed, 3 insertions(+)
 
 diff --git a/configure.ac b/configure.ac
-index b12e320..4e6dfdf 100644
+index 1150ca50e..60e7df5ee 100644
 --- a/configure.ac
 +++ b/configure.ac
-@@ -298,6 +298,7 @@ AM_CONDITIONAL([HAVE_PYTHON], [test "x$have_python" = "xyes"])
+@@ -304,6 +304,7 @@ AM_CONDITIONAL([HAVE_PYTHON], [test "x$have_python" = "xyes"])
  
  # ------------------------------------------------------------------------------
  
@@ -24,12 +26,12 @@
  AC_CHECK_HEADERS([linux/btrfs.h], [], [])
  AC_CHECK_HEADERS([linux/memfd.h], [], [])
 diff --git a/src/basic/missing.h b/src/basic/missing.h
-index 4936873..ce79404 100644
+index 25a11f351..d631b7e3e 100644
 --- a/src/basic/missing.h
 +++ b/src/basic/missing.h
-@@ -35,7 +35,9 @@
- #include <stdlib.h>
+@@ -37,7 +37,9 @@
  #include <sys/resource.h>
+ #include <sys/socket.h>
  #include <sys/syscall.h>
 +#ifdef HAVE_UCHAR_H
  #include <uchar.h>
@@ -38,5 +40,5 @@
  
  #ifdef HAVE_AUDIT
 -- 
-2.10.2
+2.13.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0019-socket-util-don-t-fail-if-libc-doesn-t-support-IDN.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0019-socket-util-don-t-fail-if-libc-doesn-t-support-IDN.patch
index 66aa4ca..b609276 100644
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0019-socket-util-don-t-fail-if-libc-doesn-t-support-IDN.patch
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0019-socket-util-don-t-fail-if-libc-doesn-t-support-IDN.patch
@@ -1,18 +1,19 @@
-From 289554d87e4fd96cae08c0fb449bf41d5641cd24 Mon Sep 17 00:00:00 2001
+From b7c6bfe2ec5ae426e586e1d6ecadb52a97128a3f Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 9 Nov 2016 20:49:53 -0800
-Subject: [PATCH 19/19] socket-util: don't fail if libc doesn't support IDN
+Subject: [PATCH 13/14] socket-util: don't fail if libc doesn't support IDN
 
 Upstream-Status: Pending
 
 Signed-off-by: Emil Renner Berthing <systemd@esmil.dk>
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
 ---
  src/basic/socket-util.c | 9 +++++++++
  1 file changed, 9 insertions(+)
 
 diff --git a/src/basic/socket-util.c b/src/basic/socket-util.c
-index 4ebf106..53b9a12 100644
+index 016e64aa0..d4658826e 100644
 --- a/src/basic/socket-util.c
 +++ b/src/basic/socket-util.c
 @@ -47,6 +47,15 @@
@@ -29,8 +30,8 @@
 +#define NI_IDN_USE_STD3_ASCII_RULES 0
 +#endif
  
- int socket_address_parse(SocketAddress *a, const char *s) {
-         char *e, *n;
+ #ifdef ENABLE_IDN
+ #  define IDN_FLAGS (NI_IDN|NI_IDN_USE_STD3_ASCII_RULES)
 -- 
-2.10.2
+2.13.2
 
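The fallback defines above let the IDN-related getnameinfo() flags compile even when the libc headers (musl's, for instance) do not declare them. A standalone sketch of that compile-time fallback pattern, with an illustrative LOOKUP_FLAGS macro standing in for the patch's IDN_FLAGS:

    #define _GNU_SOURCE 1   /* ask glibc for NI_IDN; harmless on other libcs */
    #include <netdb.h>
    #include <stdio.h>

    /* If the headers don't provide the glibc-specific IDN flag, define it to 0
     * so OR-ing it into a flag set becomes a no-op. */
    #ifndef NI_IDN
    #define NI_IDN 0
    #endif

    /* Illustrative macro; usable with getnameinfo() on glibc (IDN requested)
     * and on musl (the IDN bit collapses to 0). */
    #define LOOKUP_FLAGS (NI_NUMERICSERV | NI_IDN)

    int main(void) {
            printf("getnameinfo flags: %#x\n", (unsigned)LOOKUP_FLAGS);
            return 0;
    }
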
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0020-back-port-233-don-t-use-the-unified-hierarchy-for-the-systemd.patch b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0020-back-port-233-don-t-use-the-unified-hierarchy-for-the-systemd.patch
deleted file mode 100644
index ef2d868..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd/0020-back-port-233-don-t-use-the-unified-hierarchy-for-the-systemd.patch
+++ /dev/null
@@ -1,51 +0,0 @@
-This is a direct backport from systemd-233's stream to fix lxc/docker.
-
-% lxc-start -n container -F
-lxc-start: cgfsng.c: parse_hierarchies: 825 Failed to find current cgroup for controller 'name=systemd'
-lxc-start: cgfsng.c: all_controllers_found: 431 no systemd controller mountpoint found
-lxc-start: start.c: lxc_spawn: 1082 failed initializing cgroup support
-lxc-start: start.c: __lxc_start: 1332 failed to spawn 'container'
-lxc-start: lxc_start.c: main: 344 The container failed to start.
-lxc-start: lxc_start.c: main: 348 Additional information can be obtained by setting the --logfile and --logpriority options.
-
-## begin backport ##
-
-From 843d5baf6aad6c53fc00ea8d95d83209a4f92de1 Mon Sep 17 00:00:00 2001
-From: Martin Pitt <martin.pitt@ubuntu.com>
-Date: Thu, 10 Nov 2016 05:33:13 +0100
-Subject: [PATCH] core: don't use the unified hierarchy for the systemd cgroup
- yet (#4628)
-
-Too many things don't get along with the unified hierarchy yet:
-
- * https://github.com/opencontainers/runc/issues/1175
- * https://github.com/docker/docker/issues/28109
- * https://github.com/lxc/lxc/issues/1280
-
-So revert the default to the legacy hierarchy for now. Developers of the above
-software can opt into the unified hierarchy with
-"systemd.legacy_systemd_cgroup_controller=0".
----
- src/basic/cgroup-util.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/src/basic/cgroup-util.c b/src/basic/cgroup-util.c
-index 5e73753..dc13025 100644
---- a/src/basic/cgroup-util.c
-+++ b/src/basic/cgroup-util.c
-@@ -2423,10 +2423,10 @@ bool cg_is_unified_systemd_controller_wanted(void) {
- 
-                 r = get_proc_cmdline_key("systemd.legacy_systemd_cgroup_controller=", &value);
-                 if (r < 0)
--                        return true;
-+                        return false;
- 
-                 if (r == 0)
--                        wanted = true;
-+                        wanted = false;
-                 else
-                         wanted = parse_boolean(value) <= 0;
-         }
--- 
-2.10.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd_232.bb b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd_232.bb
deleted file mode 100644
index 25fe496..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd_232.bb
+++ /dev/null
@@ -1,599 +0,0 @@
-require systemd.inc
-
-PROVIDES = "udev"
-
-PE = "1"
-
-DEPENDS = "kmod intltool-native gperf-native acl readline libcap libcgroup util-linux"
-
-SECTION = "base/shell"
-
-inherit useradd pkgconfig autotools perlnative update-rc.d update-alternatives qemu systemd ptest gettext bash-completion manpages
-
-SRC_URI += " \
-           file://touchscreen.rules \
-           file://00-create-volatile.conf \
-           file://init \
-           file://run-ptest \
-           file://0003-define-exp10-if-missing.patch \
-           file://0004-Use-getenv-when-secure-versions-are-not-available.patch \
-           file://0005-binfmt-Don-t-install-dependency-links-at-install-tim.patch \
-           file://0006-configure-Check-for-additional-features-that-uclibc-.patch \
-           file://0007-use-lnr-wrapper-instead-of-looking-for-relative-opti.patch \
-           file://0008-nspawn-Use-execvpe-only-when-libc-supports-it.patch \
-           file://0009-util-bypass-unimplemented-_SC_PHYS_PAGES-system-conf.patch \
-           file://0010-implment-systemd-sysv-install-for-OE.patch \
-           file://0011-nss-mymachines-Build-conditionally-when-HAVE_MYHOSTN.patch \
-           file://0012-rules-whitelist-hd-devices.patch \
-           file://0013-Make-root-s-home-directory-configurable.patch \
-           file://0014-Revert-rules-remove-firmware-loading-rules.patch \
-           file://0015-Revert-udev-remove-userspace-firmware-loading-suppor.patch \
-           file://0016-make-test-dir-configurable.patch \
-           file://0017-remove-duplicate-include-uchar.h.patch \
-           file://0018-check-for-uchar.h-in-configure.patch \
-           file://0019-socket-util-don-t-fail-if-libc-doesn-t-support-IDN.patch \
-           file://0020-back-port-233-don-t-use-the-unified-hierarchy-for-the-systemd.patch \
-           file://0001-core-load-fragment-refuse-units-with-errors-in-certa.patch \
-"
-SRC_URI_append_libc-uclibc = "\
-           file://0002-units-Prefer-getty-to-agetty-in-console-setup-system.patch \
-"
-SRC_URI_append_qemuall = " file://0001-core-device.c-Change-the-default-device-timeout-to-2.patch"
-
-PACKAGECONFIG ??= "xz \
-                   ${@bb.utils.filter('DISTRO_FEATURES', 'efi pam selinux ldconfig', d)} \
-                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'xkbcommon', '', d)} \
-                   ${@bb.utils.contains('DISTRO_FEATURES', 'wifi', 'rfkill', '', d)} \
-                   binfmt \
-                   randomseed \
-                   machined \
-                   backlight \
-                   vconsole \
-                   quotacheck \
-                   hostnamed \
-                   ${@bb.utils.contains('TCLIBC', 'glibc', 'myhostname sysusers', '', d)} \
-                   hibernate \
-                   timedated \
-                   timesyncd \
-                   localed \
-                   ima \
-                   smack \
-                   logind \
-                   firstboot \
-                   utmp \
-                   polkit \
-"
-PACKAGECONFIG_remove_libc-musl = "selinux"
-PACKAGECONFIG_remove_libc-musl = "smack"
-
-# Use the upstream systemd serial-getty@.service and rely on
-# systemd-getty-generator instead of using the OE-core specific
-# systemd-serialgetty.bb - not enabled by default.
-PACKAGECONFIG[serial-getty-generator] = ""
-
-PACKAGECONFIG[journal-upload] = "--enable-libcurl,--disable-libcurl,curl"
-# Sign the journal for anti-tampering
-PACKAGECONFIG[gcrypt] = "--enable-gcrypt,--disable-gcrypt,libgcrypt"
-PACKAGECONFIG[cryptsetup] = "--enable-libcryptsetup,--disable-libcryptsetup,cryptsetup"
-PACKAGECONFIG[microhttpd] = "--enable-microhttpd,--disable-microhttpd,libmicrohttpd"
-PACKAGECONFIG[elfutils] = "--enable-elfutils,--disable-elfutils,elfutils"
-PACKAGECONFIG[resolved] = "--enable-resolved,--disable-resolved"
-PACKAGECONFIG[networkd] = "--enable-networkd,--disable-networkd"
-PACKAGECONFIG[machined] = "--enable-machined,--disable-machined"
-PACKAGECONFIG[backlight] = "--enable-backlight,--disable-backlight"
-PACKAGECONFIG[vconsole] = "--enable-vconsole,--disable-vconsole,,${PN}-vconsole-setup"
-PACKAGECONFIG[quotacheck] = "--enable-quotacheck,--disable-quotacheck"
-PACKAGECONFIG[hostnamed] = "--enable-hostnamed,--disable-hostnamed"
-PACKAGECONFIG[myhostname] = "--enable-myhostname,--disable-myhostname"
-PACKAGECONFIG[rfkill] = "--enable-rfkill,--disable-rfkill"
-PACKAGECONFIG[hibernate] = "--enable-hibernate,--disable-hibernate"
-PACKAGECONFIG[timedated] = "--enable-timedated,--disable-timedated"
-PACKAGECONFIG[timesyncd] = "--enable-timesyncd,--disable-timesyncd"
-PACKAGECONFIG[localed] = "--enable-localed,--disable-localed"
-PACKAGECONFIG[efi] = "--enable-efi,--disable-efi"
-PACKAGECONFIG[ima] = "--enable-ima,--disable-ima"
-PACKAGECONFIG[smack] = "--enable-smack,--disable-smack"
-# libseccomp is found in meta-security
-PACKAGECONFIG[seccomp] = "--enable-seccomp,--disable-seccomp,libseccomp"
-PACKAGECONFIG[logind] = "--enable-logind,--disable-logind"
-PACKAGECONFIG[sysusers] = "--enable-sysusers,--disable-sysusers"
-PACKAGECONFIG[firstboot] = "--enable-firstboot,--disable-firstboot"
-PACKAGECONFIG[randomseed] = "--enable-randomseed,--disable-randomseed"
-PACKAGECONFIG[binfmt] = "--enable-binfmt,--disable-binfmt"
-PACKAGECONFIG[utmp] = "--enable-utmp,--disable-utmp"
-PACKAGECONFIG[polkit] = "--enable-polkit,--disable-polkit"
-# importd requires curl/xz/zlib/bzip2/gcrypt
-PACKAGECONFIG[importd] = "--enable-importd,--disable-importd"
-PACKAGECONFIG[libidn] = "--enable-libidn,--disable-libidn,libidn"
-PACKAGECONFIG[audit] = "--enable-audit,--disable-audit,audit"
-PACKAGECONFIG[manpages] = "--enable-manpages,--disable-manpages,libxslt-native xmlto-native docbook-xml-dtd4-native docbook-xsl-stylesheets-native"
-PACKAGECONFIG[pam] = "--enable-pam,--disable-pam,libpam"
-# Verify keymaps on locale change
-PACKAGECONFIG[xkbcommon] = "--enable-xkbcommon,--disable-xkbcommon,libxkbcommon"
-# Update NAT firewall rules
-PACKAGECONFIG[iptc] = "--enable-libiptc,--disable-libiptc,iptables"
-PACKAGECONFIG[ldconfig] = "--enable-ldconfig,--disable-ldconfig,,"
-PACKAGECONFIG[selinux] = "--enable-selinux,--disable-selinux,libselinux"
-PACKAGECONFIG[valgrind] = "ac_cv_header_valgrind_memcheck_h=yes ac_cv_header_valgrind_valgrind_h=yes ,ac_cv_header_valgrind_memcheck_h=no ac_cv_header_valgrind_valgrind_h=no ,valgrind"
-PACKAGECONFIG[qrencode] = "--enable-qrencode,--disable-qrencode,qrencode"
-PACKAGECONFIG[dbus] = "--enable-dbus,--disable-dbus,dbus"
-PACKAGECONFIG[coredump] = "--enable-coredump,--disable-coredump"
-PACKAGECONFIG[bzip2] = "--enable-bzip2,--disable-bzip2,bzip2"
-PACKAGECONFIG[lz4] = "--enable-lz4,--disable-lz4,lz4"
-PACKAGECONFIG[xz] = "--enable-xz,--disable-xz,xz"
-PACKAGECONFIG[zlib] = "--enable-zlib,--disable-zlib,zlib"
-
-CACHED_CONFIGUREVARS += "ac_cv_path_KILL=${base_bindir}/kill"
-CACHED_CONFIGUREVARS += "ac_cv_path_KMOD=${base_bindir}/kmod"
-CACHED_CONFIGUREVARS += "ac_cv_path_QUOTACHECK=${sbindir}/quotacheck"
-CACHED_CONFIGUREVARS += "ac_cv_path_QUOTAON=${sbindir}/quotaon"
-CACHED_CONFIGUREVARS += "ac_cv_path_SULOGIN=${base_sbindir}/sulogin"
-
-# Helper variables to clarify locations.  This mirrors the logic in systemd's
-# build system.
-rootprefix ?= "${base_prefix}"
-rootlibdir ?= "${base_libdir}"
-rootlibexecdir = "${rootprefix}/lib"
-
-CACHED_CONFIGUREVARS_class-target = "\
-                         ac_cv_path_MOUNT_PATH=${base_bindir}/mount \
-                         ac_cv_path_UMOUNT_PATH=${base_bindir}/umount \
-                         ac_cv_path_KMOD=${base_bindir}/kmod \
-                         ac_cv_path_KILL=${base_bindir}/kill \
-                         ac_cv_path_SULOGIN=${base_sbindir}/sulogin \
-                         ac_cv_path_KEXEC=${sbindir}/kexec \
-                         ac_cv_path_QUOTACHECK=${sbindir}/quotacheck \
-                         ac_cv_path_QUOTAON=${sbindir}/quotaon \
-			 "
-
-EXTRA_OECONF = " --with-rootprefix=${rootprefix} \
-                 --with-rootlibdir=${rootlibdir} \
-                 --with-roothomedir=${ROOT_HOME} \
-                 --enable-split-usr \
-                 --without-python \
-                 --with-sysvrcnd-path=${sysconfdir} \
-                 --with-firmware-path=/lib/firmware \
-                 --with-testdir=${PTEST_PATH} \
-               "
-# per the systemd README, define VALGRIND=1 to run under valgrind
-CFLAGS .= "${@bb.utils.contains('PACKAGECONFIG', 'valgrind', ' -DVALGRIND=1', '', d)}"
-
-# disable problematic GCC 5.2 optimizations [YOCTO #8291]
-FULL_OPTIMIZATION_append_arm = " -fno-schedule-insns -fno-schedule-insns2"
-
-# Avoid login failure on qemumips64 when pam is enabled
-FULL_OPTIMIZATION_append_mips64 = " -fno-tree-switch-conversion -fno-tree-tail-merge"
-
-do_configure_prepend() {
-	export NM="${HOST_PREFIX}gcc-nm"
-	export AR="${HOST_PREFIX}gcc-ar"
-	export RANLIB="${HOST_PREFIX}gcc-ranlib"
-	export KMOD="${base_bindir}/kmod"
-	if [ -d ${S}/units.pre_sed ] ; then
-		cp -r ${S}/units.pre_sed ${S}/units
-	else
-		cp -r ${S}/units ${S}/units.pre_sed
-	fi
-	sed -i -e 's:-DTEST_DIR=\\\".*\\\":-DTEST_DIR=\\\"${PTEST_PATH}/tests/test\\\":' ${S}/Makefile.am
-	sed -i -e 's:-DCATALOG_DIR=\\\".*\\\":-DCATALOG_DIR=\\\"${PTEST_PATH}/tests/catalog\\\":' ${S}/Makefile.am
-}
-
-do_install() {
-	autotools_do_install
-	install -d ${D}/${base_sbindir}
-	if ${@bb.utils.contains('PACKAGECONFIG', 'serial-getty-generator', 'false', 'true', d)}; then
-		# Provided by a separate recipe
-		rm ${D}${systemd_unitdir}/system/serial-getty* -f
-	fi
-
-	# Provide support for initramfs
-	[ ! -e ${D}/init ] && ln -s ${rootlibexecdir}/systemd/systemd ${D}/init
-	[ ! -e ${D}/${base_sbindir}/udevd ] && ln -s ${rootlibexecdir}/systemd/systemd-udevd ${D}/${base_sbindir}/udevd
-
-	# Create machine-id
-	# 20:12 < mezcalero> koen: you have three options: a) run systemd-machine-id-setup at install time, b) have / read-only and an empty file there (for stateless) and c) boot with / writable
-	touch ${D}${sysconfdir}/machine-id
-
-
-	install -d ${D}${sysconfdir}/udev/rules.d/
-	install -d ${D}${sysconfdir}/tmpfiles.d
-	install -m 0644 ${WORKDIR}/*.rules ${D}${sysconfdir}/udev/rules.d/
-	install -d ${D}${libdir}/pkgconfig
-	install -m 0644 ${B}/src/udev/udev.pc ${D}${libdir}/pkgconfig/
-
-	install -m 0644 ${WORKDIR}/00-create-volatile.conf ${D}${sysconfdir}/tmpfiles.d/
-
-	if ${@bb.utils.contains('DISTRO_FEATURES','sysvinit','true','false',d)}; then
-		install -d ${D}${sysconfdir}/init.d
-		install -m 0755 ${WORKDIR}/init ${D}${sysconfdir}/init.d/systemd-udevd
-		sed -i s%@UDEVD@%${rootlibexecdir}/systemd/systemd-udevd% ${D}${sysconfdir}/init.d/systemd-udevd
-	fi
-
-	chown root:systemd-journal ${D}/${localstatedir}/log/journal
-
-	# Delete journal README, as log can be symlinked inside volatile.
-	rm -f ${D}/${localstatedir}/log/README
-
-	install -d ${D}${systemd_unitdir}/system/graphical.target.wants
-	install -d ${D}${systemd_unitdir}/system/multi-user.target.wants
-	install -d ${D}${systemd_unitdir}/system/poweroff.target.wants
-	install -d ${D}${systemd_unitdir}/system/reboot.target.wants
-	install -d ${D}${systemd_unitdir}/system/rescue.target.wants
-
-	# Create symlinks for systemd-update-utmp-runlevel.service
-	if ${@bb.utils.contains('PACKAGECONFIG', 'utmp', 'true', 'false', d)}; then
-		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_unitdir}/system/graphical.target.wants/systemd-update-utmp-runlevel.service
-		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_unitdir}/system/multi-user.target.wants/systemd-update-utmp-runlevel.service
-		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_unitdir}/system/poweroff.target.wants/systemd-update-utmp-runlevel.service
-		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_unitdir}/system/reboot.target.wants/systemd-update-utmp-runlevel.service
-		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_unitdir}/system/rescue.target.wants/systemd-update-utmp-runlevel.service
-	fi
-
-	# Enable journal to forward message to syslog daemon
-	sed -i -e 's/.*ForwardToSyslog.*/ForwardToSyslog=yes/' ${D}${sysconfdir}/systemd/journald.conf
-	# Set the maximium size of runtime journal to 64M as default
-	sed -i -e 's/.*RuntimeMaxUse.*/RuntimeMaxUse=64M/' ${D}${sysconfdir}/systemd/journald.conf
-
-	# this file is needed to exist if networkd is disabled but timesyncd is still in use since timesyncd checks it
-	# for existence else it fails
-	if [ -s ${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf ]; then
-		${@bb.utils.contains('PACKAGECONFIG', 'networkd', ':', 'sed -i -e "\$ad /run/systemd/netif/links 0755 root root -" ${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf', d)}
-	fi
-	if ! ${@bb.utils.contains('PACKAGECONFIG', 'resolved', 'true', 'false', d)}; then
-		echo 'L! ${sysconfdir}/resolv.conf - - - - ../run/systemd/resolve/resolv.conf' >>${D}${exec_prefix}/lib/tmpfiles.d/etc.conf
-		echo 'd /run/systemd/resolve 0755 root root -' >>${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf
-		echo 'f /run/systemd/resolve/resolv.conf 0644 root root' >>${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf
-		ln -s ../run/systemd/resolve/resolv.conf ${D}${sysconfdir}/resolv.conf
-	else
-		sed -i -e "s%^L! /etc/resolv.conf.*$%L! /etc/resolv.conf - - - - ../run/systemd/resolve/resolv.conf%g" ${D}${exec_prefix}/lib/tmpfiles.d/etc.conf
-	fi
-	install -Dm 0755 ${S}/src/systemctl/systemd-sysv-install.SKELETON ${D}${systemd_unitdir}/systemd-sysv-install
-}
-
-do_install_ptest () {
-       # install data files needed for tests
-       install -d ${D}${PTEST_PATH}/tests/test
-       cp -rfL ${S}/test/* ${D}${PTEST_PATH}/tests/test
-       # python is disabled for systemd, thus removing these python testing scripts
-       rm ${D}${PTEST_PATH}/tests/test/*.py
-       sed -i 's/"tree"/"ls"/' ${D}${PTEST_PATH}/tests/test/udev-test.pl
-
-       install -d ${D}${PTEST_PATH}/tests/catalog
-       install ${S}/catalog/* ${D}${PTEST_PATH}/tests/catalog/
-
-       install -D ${S}/build-aux/test-driver ${D}${PTEST_PATH}/tests/build-aux/test-driver
-
-       install -d ${D}${PTEST_PATH}/tests/rules
-       install ${B}/rules/* ${D}${PTEST_PATH}/tests/rules/
-
-       # This directory needs to be there for udev-test.pl to work.
-       install -d ${D}${libdir}/udev/rules.d
-
-       # install actual test binaries
-       install -m 0755 ${B}/test-* ${D}${PTEST_PATH}/tests/
-       install -m 0755 ${B}/.libs/test-* ${D}${PTEST_PATH}/tests/
-
-       install ${B}/Makefile ${D}${PTEST_PATH}/tests/
-}
-
-python populate_packages_prepend (){
-    systemdlibdir = d.getVar("rootlibdir")
-    do_split_packages(d, systemdlibdir, '^lib(.*)\.so\.*', 'lib%s', 'Systemd %s library', extra_depends='', allow_links=True)
-}
-PACKAGES_DYNAMIC += "^lib(udev|systemd|nss).*"
-
-PACKAGES =+ "\
-    ${PN}-gui \
-    ${PN}-vconsole-setup \
-    ${PN}-initramfs \
-    ${PN}-analyze \
-    ${PN}-kernel-install \
-    ${PN}-rpm-macros \
-    ${PN}-binfmt \
-    ${PN}-pam \
-    ${PN}-zsh-completion \
-    ${PN}-xorg-xinitrc \
-    ${PN}-container \
-    ${PN}-extra-utils \
-"
-
-SUMMARY_${PN}-container = "Tools for containers and VMs"
-DESCRIPTION_${PN}-container = "Systemd tools to spawn and manage containers and virtual machines."
-
-SYSTEMD_PACKAGES = "${@bb.utils.contains('PACKAGECONFIG', 'binfmt', '${PN}-binfmt', '', d)}"
-SYSTEMD_SERVICE_${PN}-binfmt = "systemd-binfmt.service"
-
-USERADD_PACKAGES = "${PN} ${PN}-extra-utils"
-USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'microhttpd', '--system -d / -M --shell /bin/nologin systemd-journal-gateway;', '', d)}"
-USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'microhttpd', '--system -d / -M --shell /bin/nologin systemd-journal-remote;', '', d)}"
-USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'journal-upload', '--system -d / -M --shell /bin/nologin systemd-journal-upload;', '', d)}"
-USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'timesyncd', '--system -d / -M --shell /bin/nologin systemd-timesync;', '', d)}"
-USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'networkd', '--system -d / -M --shell /bin/nologin systemd-network;', '', d)}"
-USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'coredump', '--system -d / -M --shell /bin/nologin systemd-coredump;', '', d)}"
-USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'resolved', '--system -d / -M --shell /bin/nologin systemd-resolve;', '', d)}"
-GROUPADD_PARAM_${PN} = "-r lock; -r systemd-journal"
-USERADD_PARAM_${PN}-extra-utils += "--system -d / -M --shell /bin/nologin systemd-bus-proxy;"
-
-FILES_${PN}-analyze = "${bindir}/systemd-analyze"
-
-FILES_${PN}-initramfs = "/init"
-RDEPENDS_${PN}-initramfs = "${PN}"
-
-RDEPENDS_${PN}-ptest += "gawk make perl bash xz \
-                         tzdata tzdata-americas tzdata-asia \
-                         tzdata-europe tzdata-africa tzdata-antarctica \
-                         tzdata-arctic tzdata-atlantic tzdata-australia \
-                         tzdata-pacific tzdata-posix"
-
-FILES_${PN}-ptest += "${libdir}/udev/rules.d"
-
-FILES_${PN}-gui = "${bindir}/systemadm"
-
-FILES_${PN}-vconsole-setup = "${rootlibexecdir}/systemd/systemd-vconsole-setup \
-                              ${systemd_unitdir}/system/systemd-vconsole-setup.service \
-                              ${systemd_unitdir}/system/sysinit.target.wants/systemd-vconsole-setup.service"
-
-RDEPENDS_${PN}-kernel-install += "bash"
-FILES_${PN}-kernel-install = "${bindir}/kernel-install \
-                              ${sysconfdir}/kernel/ \
-                              ${exec_prefix}/lib/kernel \
-                             "
-FILES_${PN}-rpm-macros = "${exec_prefix}/lib/rpm \
-                         "
-
-FILES_${PN}-xorg-xinitrc = "${sysconfdir}/X11/xinit/xinitrc.d/*"
-
-FILES_${PN}-zsh-completion = "${datadir}/zsh/site-functions"
-
-FILES_${PN}-binfmt = "${sysconfdir}/binfmt.d/ \
-                      ${exec_prefix}/lib/binfmt.d \
-                      ${rootlibexecdir}/systemd/systemd-binfmt \
-                      ${systemd_unitdir}/system/proc-sys-fs-binfmt_misc.* \
-                      ${systemd_unitdir}/system/systemd-binfmt.service"
-RRECOMMENDS_${PN}-binfmt = "kernel-module-binfmt-misc"
-
-RRECOMMENDS_${PN}-vconsole-setup = "kbd kbd-consolefonts kbd-keymaps"
-
-FILES_${PN}-container = "${sysconfdir}/dbus-1/system.d/org.freedesktop.import1.conf \
-                         ${sysconfdir}/dbus-1/system.d/org.freedesktop.machine1.conf \
-                         ${base_bindir}/machinectl \
-                         ${bindir}/systemd-nspawn \
-                         ${nonarch_libdir}/systemd/import-pubring.gpg \
-                         ${systemd_system_unitdir}/busnames.target.wants/org.freedesktop.import1.busname \
-                         ${systemd_system_unitdir}/busnames.target.wants/org.freedesktop.machine1.busname \
-                         ${systemd_system_unitdir}/local-fs.target.wants/var-lib-machines.mount \
-                         ${systemd_system_unitdir}/machine.slice \
-                         ${systemd_system_unitdir}/machines.target \
-                         ${systemd_system_unitdir}/org.freedesktop.import1.busname \
-                         ${systemd_system_unitdir}/org.freedesktop.machine1.busname \
-                         ${systemd_system_unitdir}/systemd-importd.service \
-                         ${systemd_system_unitdir}/systemd-machined.service \
-                         ${systemd_system_unitdir}/dbus-org.freedesktop.machine1.service \
-                         ${systemd_system_unitdir}/var-lib-machines.mount \
-                         ${rootlibexecdir}/systemd/systemd-import \
-                         ${rootlibexecdir}/systemd/systemd-importd \
-                         ${rootlibexecdir}/systemd/systemd-journal-gatewayd \
-                         ${rootlibexecdir}/systemd/systemd-journal-remote \
-                         ${rootlibexecdir}/systemd/systemd-journal-upload \
-                         ${rootlibexecdir}/systemd/systemd-machined \
-                         ${rootlibexecdir}/systemd/systemd-pull \
-                         ${exec_prefix}/lib/tmpfiles.d/systemd-nspawn.conf \
-                         ${systemd_system_unitdir}/systemd-nspawn@.service \
-                         ${libdir}/libnss_mymachines.so.2 \
-                         ${datadir}/dbus-1/system-services/org.freedesktop.import1.service \
-                         ${datadir}/dbus-1/system-services/org.freedesktop.machine1.service \
-                         ${datadir}/polkit-1/actions/org.freedesktop.import1.policy \
-                         ${datadir}/polkit-1/actions/org.freedesktop.machine1.policy \
-                        "
-
-FILES_${PN}-extra-utils = "\
-                        ${base_bindir}/systemd-escape \
-                        ${base_bindir}/systemd-inhibit \
-                        ${bindir}/systemd-detect-virt \
-                        ${bindir}/systemd-path \
-                        ${bindir}/systemd-run \
-                        ${bindir}/systemd-cat \
-                        ${bindir}/systemd-delta \
-                        ${bindir}/systemd-cgls \
-                        ${bindir}/systemd-cgtop \
-                        ${bindir}/systemd-stdio-bridge \
-                        ${base_bindir}/systemd-ask-password \
-                        ${base_bindir}/systemd-tty-ask-password-agent \
-                        ${systemd_unitdir}/system/systemd-ask-password-console.path \
-                        ${systemd_unitdir}/system/systemd-ask-password-console.service \
-                        ${systemd_unitdir}/system/systemd-ask-password-wall.path \
-                        ${systemd_unitdir}/system/systemd-ask-password-wall.service \
-                        ${systemd_unitdir}/system/sysinit.target.wants/systemd-ask-password-console.path \
-                        ${systemd_unitdir}/system/sysinit.target.wants/systemd-ask-password-wall.path \
-                        ${systemd_unitdir}/system/multi-user.target.wants/systemd-ask-password-wall.path \
-                        ${rootlibexecdir}/systemd/systemd-resolve-host \
-                        ${rootlibexecdir}/systemd/systemd-ac-power \
-                        ${rootlibexecdir}/systemd/systemd-activate \
-                        ${rootlibexecdir}/systemd/systemd-bus-proxyd \
-                        ${systemd_unitdir}/system/systemd-bus-proxyd.service \
-                        ${systemd_unitdir}/system/systemd-bus-proxyd.socket \
-                        ${rootlibexecdir}/systemd/systemd-socket-proxyd \
-                        ${rootlibexecdir}/systemd/systemd-reply-password \
-                        ${rootlibexecdir}/systemd/systemd-sleep \
-                        ${rootlibexecdir}/systemd/system-sleep \
-                        ${systemd_unitdir}/system/systemd-hibernate.service \
-                        ${systemd_unitdir}/system/systemd-hybrid-sleep.service \
-                        ${systemd_unitdir}/system/systemd-suspend.service \
-                        ${systemd_unitdir}/system/sleep.target \
-                        ${rootlibexecdir}/systemd/systemd-initctl \
-                        ${systemd_unitdir}/system/systemd-initctl.service \
-                        ${systemd_unitdir}/system/systemd-initctl.socket \
-                        ${systemd_unitdir}/system/sockets.target.wants/systemd-initctl.socket \
-                        ${rootlibexecdir}/systemd/system-generators/systemd-gpt-auto-generator \
-                        ${rootlibexecdir}/systemd/systemd-cgroups-agent \
-"
-
-CONFFILES_${PN} = "${sysconfdir}/machine-id \
-                ${sysconfdir}/systemd/coredump.conf \
-                ${sysconfdir}/systemd/journald.conf \
-                ${sysconfdir}/systemd/logind.conf \
-                ${sysconfdir}/systemd/system.conf \
-                ${sysconfdir}/systemd/user.conf"
-
-FILES_${PN} = " ${base_bindir}/* \
-                ${datadir}/dbus-1/services \
-                ${datadir}/dbus-1/system-services \
-                ${datadir}/polkit-1 \
-                ${datadir}/${BPN} \
-                ${datadir}/factory \
-                ${sysconfdir}/dbus-1/ \
-                ${sysconfdir}/machine-id \
-                ${sysconfdir}/modules-load.d/ \
-                ${sysconfdir}/pam.d/ \
-                ${sysconfdir}/sysctl.d/ \
-                ${sysconfdir}/systemd/ \
-                ${sysconfdir}/tmpfiles.d/ \
-                ${sysconfdir}/xdg/ \
-                ${sysconfdir}/init.d/README \
-                ${sysconfdir}/resolv.conf \
-                ${rootlibexecdir}/systemd/* \
-                ${systemd_unitdir}/* \
-                ${base_libdir}/security/*.so \
-                /cgroup \
-                ${bindir}/systemd* \
-                ${bindir}/busctl \
-                ${bindir}/coredumpctl \
-                ${bindir}/localectl \
-                ${bindir}/hostnamectl \
-                ${bindir}/timedatectl \
-                ${bindir}/bootctl \
-                ${bindir}/kernel-install \
-                ${exec_prefix}/lib/tmpfiles.d/*.conf \
-                ${exec_prefix}/lib/systemd \
-                ${exec_prefix}/lib/modules-load.d \
-                ${exec_prefix}/lib/sysctl.d \
-                ${exec_prefix}/lib/sysusers.d \
-                ${localstatedir} \
-                ${nonarch_base_libdir}/udev/rules.d/70-uaccess.rules \
-                ${nonarch_base_libdir}/udev/rules.d/71-seat.rules \
-                ${nonarch_base_libdir}/udev/rules.d/73-seat-late.rules \
-                ${nonarch_base_libdir}/udev/rules.d/99-systemd.rules \
-               "
-
-FILES_${PN}-dev += "${base_libdir}/security/*.la ${datadir}/dbus-1/interfaces/ ${sysconfdir}/rpm/macros.systemd"
-
-RDEPENDS_${PN} += "kmod dbus util-linux-mount udev (= ${EXTENDPKGV})"
-RDEPENDS_${PN} += "volatile-binds update-rc.d"
-
-RRECOMMENDS_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'serial-getty-generator', '', 'systemd-serialgetty', d)} \
-                      systemd-extra-utils \
-                      systemd-compat-units udev-hwdb \
-                      util-linux-agetty  util-linux-fsck e2fsprogs-e2fsck \
-                      kernel-module-autofs4 kernel-module-unix kernel-module-ipv6 \
-                      os-release \
-"
-
-INSANE_SKIP_${PN} += "dev-so libdir"
-INSANE_SKIP_${PN}-dbg += "libdir"
-INSANE_SKIP_${PN}-doc += " libdir"
-
-PACKAGES =+ "udev udev-hwdb"
-
-RPROVIDES_udev = "hotplug"
-
-RDEPENDS_udev-hwdb += "udev"
-
-FILES_udev += "${base_sbindir}/udevd \
-               ${rootlibexecdir}/systemd/systemd-udevd \
-               ${rootlibexecdir}/udev/accelerometer \
-               ${rootlibexecdir}/udev/ata_id \
-               ${rootlibexecdir}/udev/cdrom_id \
-               ${rootlibexecdir}/udev/collect \
-               ${rootlibexecdir}/udev/findkeyboards \
-               ${rootlibexecdir}/udev/keyboard-force-release.sh \
-               ${rootlibexecdir}/udev/keymap \
-               ${rootlibexecdir}/udev/mtd_probe \
-               ${rootlibexecdir}/udev/scsi_id \
-               ${rootlibexecdir}/udev/v4l_id \
-               ${rootlibexecdir}/udev/keymaps \
-               ${rootlibexecdir}/udev/rules.d/*.rules \
-               ${sysconfdir}/udev \
-               ${sysconfdir}/init.d/systemd-udevd \
-               ${systemd_unitdir}/system/*udev* \
-               ${systemd_unitdir}/system/*.wants/*udev* \
-               ${base_bindir}/udevadm \
-               ${datadir}/bash-completion/completions/udevadm \
-              "
-
-FILES_udev-hwdb = "${rootlibexecdir}/udev/hwdb.d"
-
-INITSCRIPT_PACKAGES = "udev"
-INITSCRIPT_NAME_udev = "systemd-udevd"
-INITSCRIPT_PARAMS_udev = "start 03 S ."
-
-python __anonymous() {
-    if not bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d):
-        d.setVar("INHIBIT_UPDATERCD_BBCLASS", "1")
-}
-
-# TODO:
-# u-a for runlevel and telinit
-
-ALTERNATIVE_${PN} = "init halt reboot shutdown poweroff runlevel"
-
-ALTERNATIVE_TARGET[init] = "${rootlibexecdir}/systemd/systemd"
-ALTERNATIVE_LINK_NAME[init] = "${base_sbindir}/init"
-ALTERNATIVE_PRIORITY[init] ?= "300"
-
-ALTERNATIVE_TARGET[halt] = "${base_bindir}/systemctl"
-ALTERNATIVE_LINK_NAME[halt] = "${base_sbindir}/halt"
-ALTERNATIVE_PRIORITY[halt] ?= "300"
-
-ALTERNATIVE_TARGET[reboot] = "${base_bindir}/systemctl"
-ALTERNATIVE_LINK_NAME[reboot] = "${base_sbindir}/reboot"
-ALTERNATIVE_PRIORITY[reboot] ?= "300"
-
-ALTERNATIVE_TARGET[shutdown] = "${base_bindir}/systemctl"
-ALTERNATIVE_LINK_NAME[shutdown] = "${base_sbindir}/shutdown"
-ALTERNATIVE_PRIORITY[shutdown] ?= "300"
-
-ALTERNATIVE_TARGET[poweroff] = "${base_bindir}/systemctl"
-ALTERNATIVE_LINK_NAME[poweroff] = "${base_sbindir}/poweroff"
-ALTERNATIVE_PRIORITY[poweroff] ?= "300"
-
-ALTERNATIVE_TARGET[runlevel] = "${base_bindir}/systemctl"
-ALTERNATIVE_LINK_NAME[runlevel] = "${base_sbindir}/runlevel"
-ALTERNATIVE_PRIORITY[runlevel] ?= "300"
-
-pkg_postinst_${PN} () {
-	sed -e '/^hosts:/s/\s*\<myhostname\>//' \
-		-e 's/\(^hosts:.*\)\(\<files\>\)\(.*\)\(\<dns\>\)\(.*\)/\1\2 myhostname \3\4\5/' \
-		-i $D${sysconfdir}/nsswitch.conf
-}
-
-pkg_prerm_${PN} () {
-	sed -e '/^hosts:/s/\s*\<myhostname\>//' \
-		-e '/^hosts:/s/\s*myhostname//' \
-		-i $D${sysconfdir}/nsswitch.conf
-}
-
-PACKAGE_WRITE_DEPS += "qemu-native"
-pkg_postinst_udev-hwdb () {
-	if test -n "$D"; then
-		${@qemu_run_binary(d, '$D', '${base_bindir}/udevadm')} hwdb --update \
-			--root $D
-		chown root:root $D${sysconfdir}/udev/hwdb.bin
-	else
-		udevadm hwdb --update
-	fi
-}
-
-pkg_prerm_udev-hwdb () {
-	rm -f $D${sysconfdir}/udev/hwdb.bin
-}
-
-# As this recipe builds udev, respect systemd being in DISTRO_FEATURES so
-# that we don't build both udev and systemd in world builds.
-python () {
-    if not bb.utils.contains ('DISTRO_FEATURES', 'systemd', True, False, d):
-        raise bb.parse.SkipPackage("'systemd' not in DISTRO_FEATURES")
-
-    import re
-    if re.match('.*musl*', d.getVar('TARGET_OS')) != None:
-        raise bb.parse.SkipPackage("Not _yet_ supported on musl based targets")
-}
diff --git a/import-layers/yocto-poky/meta/recipes-core/systemd/systemd_234.bb b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd_234.bb
new file mode 100644
index 0000000..9ce27bf
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/systemd/systemd_234.bb
@@ -0,0 +1,641 @@
+require systemd.inc
+
+PROVIDES = "udev"
+
+PE = "1"
+
+DEPENDS = "kmod intltool-native gperf-native acl readline libcap libcgroup util-linux"
+
+SECTION = "base/shell"
+
+inherit useradd pkgconfig autotools perlnative update-rc.d update-alternatives qemu systemd ptest gettext bash-completion manpages
+
+SRC_URI = "git://github.com/systemd/systemd.git;protocol=git \
+           file://touchscreen.rules \
+           file://00-create-volatile.conf \
+           file://init \
+           file://run-ptest \
+           file://0004-Use-getenv-when-secure-versions-are-not-available.patch \
+           file://0005-binfmt-Don-t-install-dependency-links-at-install-tim.patch \
+           file://0007-use-lnr-wrapper-instead-of-looking-for-relative-opti.patch \
+           file://0010-implment-systemd-sysv-install-for-OE.patch \
+           file://0011-nss-mymachines-Build-conditionally-when-HAVE_MYHOSTN.patch \
+           file://0012-rules-whitelist-hd-devices.patch \
+           file://0013-Make-root-s-home-directory-configurable.patch \
+           file://0014-Revert-rules-remove-firmware-loading-rules.patch \
+           file://0015-Revert-udev-remove-userspace-firmware-loading-suppor.patch \
+           file://0017-remove-duplicate-include-uchar.h.patch \
+           file://0018-check-for-uchar.h-in-configure.patch \
+           file://0019-socket-util-don-t-fail-if-libc-doesn-t-support-IDN.patch \
+           file://0001-add-fallback-parse_printf_format-implementation.patch \
+           file://0002-src-basic-missing.h-check-for-missing-strndupa.patch \
+           file://0003-don-t-fail-if-GLOB_BRACE-and-GLOB_ALTDIRFUNC-is-not-.patch \
+           file://0004-src-basic-missing.h-check-for-missing-__compar_fn_t-.patch \
+           file://0006-Include-netinet-if_ether.h.patch \
+           file://0007-check-for-missing-canonicalize_file_name.patch \
+           file://0008-Do-not-enable-nss-tests.patch \
+           file://0009-test-hexdecoct.c-Include-missing.h-form-strndupa.patch \
+           file://0010-test-sizeof.c-Disable-tests-for-missing-typedefs-in-.patch \
+           file://0011-don-t-use-glibc-specific-qsort_r.patch \
+           file://0012-don-t-pass-AT_SYMLINK_NOFOLLOW-flag-to-faccessat.patch \
+           file://0013-comparison_fn_t-is-glibc-specific-use-raw-signature-.patch \
+           file://0001-Define-_PATH_WTMPX-and-_PATH_UTMPX-if-not-defined.patch \
+           file://0001-Use-uintmax_t-for-handling-rlim_t.patch \
+           file://0001-core-evaluate-presets-after-generators-have-run-6526.patch \
+           file://0001-main-skip-many-initialization-steps-when-running-in-.patch \
+           "
+SRC_URI_append_qemuall = " file://0001-core-device.c-Change-the-default-device-timeout-to-2.patch"
+
+PAM_PLUGINS = " \
+    pam-plugin-unix \
+    pam-plugin-loginuid \
+    pam-plugin-keyinit \
+"
+
+PACKAGECONFIG ??= " \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'efi ldconfig pam selinux usrmerge', d)} \
+    ${@bb.utils.contains('DISTRO_FEATURES', 'wifi', 'rfkill', '', d)} \
+    ${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'xkbcommon', '', d)} \
+    backlight \
+    binfmt \
+    firstboot \
+    hibernate \
+    hostnamed \
+    ima \
+    localed \
+    logind \
+    machined \
+    myhostname \
+    networkd \
+    nss \
+    polkit \
+    quotacheck \
+    randomseed \
+    resolved \
+    smack \
+    sysusers \
+    timedated \
+    timesyncd \
+    utmp \
+    vconsole \
+    xz \
+"
+
+PACKAGECONFIG_remove_libc-musl = " \
+    localed \
+    myhostname \
+    nss \
+    resolved \
+    selinux \
+    smack \
+    sysusers \
+    utmp \
+"
+
+# Use the upstream systemd serial-getty@.service and rely on
+# systemd-getty-generator instead of using the OE-core specific
+# systemd-serialgetty.bb - not enabled by default.
+PACKAGECONFIG[serial-getty-generator] = ""
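Because the option above is defined empty and left out of PACKAGECONFIG by default, an image that wants the generator-based getty has to opt in explicitly. A minimal sketch, assuming a local.conf or distro fragment and the old-style override syntax used throughout this recipe:

    # hypothetical local.conf fragment: use systemd-getty-generator instead of systemd-serialgetty
    PACKAGECONFIG_append_pn-systemd = " serial-getty-generator"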
+
+PACKAGECONFIG[audit] = "--enable-audit,--disable-audit,audit"
+PACKAGECONFIG[backlight] = "--enable-backlight,--disable-backlight"
+PACKAGECONFIG[binfmt] = "--enable-binfmt,--disable-binfmt"
+PACKAGECONFIG[bzip2] = "--enable-bzip2,--disable-bzip2,bzip2"
+PACKAGECONFIG[coredump] = "--enable-coredump,--disable-coredump"
+PACKAGECONFIG[cryptsetup] = "--enable-libcryptsetup,--disable-libcryptsetup,cryptsetup"
+PACKAGECONFIG[dbus] = "--enable-dbus,--disable-dbus,dbus"
+PACKAGECONFIG[efi] = "--enable-efi,--disable-efi"
+PACKAGECONFIG[elfutils] = "--enable-elfutils,--disable-elfutils,elfutils"
+PACKAGECONFIG[firstboot] = "--enable-firstboot,--disable-firstboot"
+# Sign the journal for anti-tampering
+PACKAGECONFIG[gcrypt] = "--enable-gcrypt,--disable-gcrypt,libgcrypt"
+PACKAGECONFIG[hibernate] = "--enable-hibernate,--disable-hibernate"
+PACKAGECONFIG[hostnamed] = "--enable-hostnamed,--disable-hostnamed"
+PACKAGECONFIG[ima] = "--enable-ima,--disable-ima"
+# importd requires curl/xz/zlib/bzip2/gcrypt
+PACKAGECONFIG[importd] = "--enable-importd,--disable-importd"
+# Update NAT firewall rules
+PACKAGECONFIG[iptc] = "--enable-libiptc,--disable-libiptc,iptables"
+PACKAGECONFIG[journal-upload] = "--enable-libcurl,--disable-libcurl,curl"
+PACKAGECONFIG[ldconfig] = "--enable-ldconfig,--disable-ldconfig"
+PACKAGECONFIG[libidn] = "--enable-libidn,--disable-libidn,libidn"
+PACKAGECONFIG[localed] = "--enable-localed,--disable-localed"
+PACKAGECONFIG[logind] = "--enable-logind,--disable-logind"
+PACKAGECONFIG[lz4] = "--enable-lz4,--disable-lz4,lz4"
+PACKAGECONFIG[machined] = "--enable-machined,--disable-machined"
+PACKAGECONFIG[manpages] = "--enable-manpages,--disable-manpages,libxslt-native xmlto-native docbook-xml-dtd4-native docbook-xsl-stylesheets-native"
+PACKAGECONFIG[microhttpd] = "--enable-microhttpd,--disable-microhttpd,libmicrohttpd"
+PACKAGECONFIG[myhostname] = "--enable-myhostname,--disable-myhostname"
+PACKAGECONFIG[networkd] = "--enable-networkd,--disable-networkd"
+PACKAGECONFIG[nss] = "--enable-nss-systemd,--disable-nss-systemd"
+PACKAGECONFIG[pam] = "--enable-pam,--disable-pam,libpam,${PAM_PLUGINS}"
+PACKAGECONFIG[polkit] = "--enable-polkit,--disable-polkit"
+PACKAGECONFIG[qrencode] = "--enable-qrencode,--disable-qrencode,qrencode"
+PACKAGECONFIG[quotacheck] = "--enable-quotacheck,--disable-quotacheck"
+PACKAGECONFIG[randomseed] = "--enable-randomseed,--disable-randomseed"
+PACKAGECONFIG[resolved] = "--enable-resolved,--disable-resolved"
+PACKAGECONFIG[rfkill] = "--enable-rfkill,--disable-rfkill"
+# libseccomp is found in meta-security
+PACKAGECONFIG[seccomp] = "--enable-seccomp,--disable-seccomp,libseccomp"
+PACKAGECONFIG[selinux] = "--enable-selinux,--disable-selinux,libselinux,initscripts-sushell"
+PACKAGECONFIG[smack] = "--enable-smack,--disable-smack"
+PACKAGECONFIG[sysusers] = "--enable-sysusers,--disable-sysusers"
+PACKAGECONFIG[timedated] = "--enable-timedated,--disable-timedated"
+PACKAGECONFIG[timesyncd] = "--enable-timesyncd,--disable-timesyncd"
+PACKAGECONFIG[usrmerge] = "--disable-split-usr,--enable-split-usr"
+PACKAGECONFIG[utmp] = "--enable-utmp,--disable-utmp"
+PACKAGECONFIG[valgrind] = "ac_cv_header_valgrind_memcheck_h=yes ac_cv_header_valgrind_valgrind_h=yes,ac_cv_header_valgrind_memcheck_h=no ac_cv_header_valgrind_valgrind_h=no,valgrind"
+PACKAGECONFIG[vconsole] = "--enable-vconsole,--disable-vconsole,,${PN}-vconsole-setup"
+# Verify keymaps on locale change
+PACKAGECONFIG[xkbcommon] = "--enable-xkbcommon,--disable-xkbcommon,libxkbcommon"
+PACKAGECONFIG[xz] = "--enable-xz,--disable-xz,xz"
+PACKAGECONFIG[zlib] = "--enable-zlib,--disable-zlib,zlib"
+
+# Hardcode target binary paths so that the AC_PATH_PROG checks in the systemd
+# configure script do not detect and record paths from the sysroot or the build host.
+CACHED_CONFIGUREVARS_class-target = " \
+    ac_cv_path_KEXEC=${sbindir}/kexec \
+    ac_cv_path_KILL=${base_bindir}/kill \
+    ac_cv_path_KMOD=${base_bindir}/kmod \
+    ac_cv_path_MOUNT_PATH=${base_bindir}/mount \
+    ac_cv_path_QUOTACHECK=${sbindir}/quotacheck \
+    ac_cv_path_QUOTAON=${sbindir}/quotaon \
+    ac_cv_path_SULOGIN=${base_sbindir}/sulogin \
+    ac_cv_path_UMOUNT_PATH=${base_bindir}/umount \
+"
+
+# Helper variables to clarify locations.  This mirrors the logic in systemd's
+# build system.
+rootprefix ?= "${root_prefix}"
+rootlibdir ?= "${base_libdir}"
+rootlibexecdir = "${rootprefix}/lib"
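With default OE settings (no usrmerge in DISTRO_FEATURES) these helpers typically resolve as sketched below; treat the concrete values as an assumption, since a distro can override root_prefix or base_libdir:

    # typical expansion without usrmerge:
    #   rootprefix     -> ""          rootlibexecdir -> "/lib"
    #   rootlibdir     -> "/lib"
    # with usrmerge, root_prefix follows ${exec_prefix}, so rootlibexecdir -> "/usr/lib"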
+
+EXTRA_OECONF = " \
+    --without-python \
+    --with-roothomedir=${ROOT_HOME} \
+    --with-rootlibdir=${rootlibdir} \
+    --with-rootprefix=${rootprefix} \
+    --with-sysvrcnd-path=${sysconfdir} \
+    --with-firmware-path=${nonarch_base_libdir}/firmware \
+"
+
+# per the systemd README, define VALGRIND=1 to run under valgrind
+CFLAGS .= "${@bb.utils.contains('PACKAGECONFIG', 'valgrind', ' -DVALGRIND=1', '', d)}"
+
+# disable problematic GCC 5.2 optimizations [YOCTO #8291]
+FULL_OPTIMIZATION_append_arm = " -fno-schedule-insns -fno-schedule-insns2"
+
+COMPILER_NM ?= "${HOST_PREFIX}gcc-nm"
+COMPILER_AR ?= "${HOST_PREFIX}gcc-ar"
+COMPILER_RANLIB ?= "${HOST_PREFIX}gcc-ranlib"
+
+do_configure_prepend() {
+	export NM="${COMPILER_NM}"
+	export AR="${COMPILER_AR}"
+	export RANLIB="${COMPILER_RANLIB}"
+	export KMOD="${base_bindir}/kmod"
+	if [ -d ${S}/units.pre_sed ] ; then
+		cp -r ${S}/units.pre_sed ${S}/units
+	else
+		cp -r ${S}/units ${S}/units.pre_sed
+	fi
+	sed -i -e 's:-DTEST_DIR=\\\".*\\\":-DTEST_DIR=\\\"${PTEST_PATH}/tests/test\\\":' ${S}/Makefile.am
+	sed -i -e 's:-DCATALOG_DIR=\\\".*\\\":-DCATALOG_DIR=\\\"${PTEST_PATH}/tests/catalog\\\":' ${S}/Makefile.am
+}
+
+do_install() {
+	autotools_do_install
+	install -d ${D}/${base_sbindir}
+	if ${@bb.utils.contains('PACKAGECONFIG', 'serial-getty-generator', 'false', 'true', d)}; then
+		# Provided by a separate recipe
+		rm ${D}${systemd_unitdir}/system/serial-getty* -f
+	fi
+
+	# Provide support for initramfs
+	[ ! -e ${D}/init ] && ln -s ${rootlibexecdir}/systemd/systemd ${D}/init
+	[ ! -e ${D}/${base_sbindir}/udevd ] && ln -s ${rootlibexecdir}/systemd/systemd-udevd ${D}/${base_sbindir}/udevd
+
+	# Create machine-id
+	# 20:12 < mezcalero> koen: you have three options: a) run systemd-machine-id-setup at install time, b) have / read-only and an empty file there (for stateless) and c) boot with / writable
+	touch ${D}${sysconfdir}/machine-id
+
+	install -d ${D}${sysconfdir}/udev/rules.d/
+	install -d ${D}${sysconfdir}/tmpfiles.d
+	install -m 0644 ${WORKDIR}/*.rules ${D}${sysconfdir}/udev/rules.d/
+	install -d ${D}${libdir}/pkgconfig
+	install -m 0644 ${B}/src/udev/udev.pc ${D}${libdir}/pkgconfig/
+
+	install -m 0644 ${WORKDIR}/00-create-volatile.conf ${D}${sysconfdir}/tmpfiles.d/
+
+	if ${@bb.utils.contains('DISTRO_FEATURES','sysvinit','true','false',d)}; then
+		install -d ${D}${sysconfdir}/init.d
+		install -m 0755 ${WORKDIR}/init ${D}${sysconfdir}/init.d/systemd-udevd
+		sed -i s%@UDEVD@%${rootlibexecdir}/systemd/systemd-udevd% ${D}${sysconfdir}/init.d/systemd-udevd
+	fi
+
+	chown root:systemd-journal ${D}/${localstatedir}/log/journal
+
+	# Delete journal README, as log can be symlinked inside volatile.
+	rm -f ${D}/${localstatedir}/log/README
+
+	install -d ${D}${systemd_unitdir}/system/graphical.target.wants
+	install -d ${D}${systemd_unitdir}/system/multi-user.target.wants
+	install -d ${D}${systemd_unitdir}/system/poweroff.target.wants
+	install -d ${D}${systemd_unitdir}/system/reboot.target.wants
+	install -d ${D}${systemd_unitdir}/system/rescue.target.wants
+
+	# Create symlinks for systemd-update-utmp-runlevel.service
+	if ${@bb.utils.contains('PACKAGECONFIG', 'utmp', 'true', 'false', d)}; then
+		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_unitdir}/system/graphical.target.wants/systemd-update-utmp-runlevel.service
+		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_unitdir}/system/multi-user.target.wants/systemd-update-utmp-runlevel.service
+		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_unitdir}/system/poweroff.target.wants/systemd-update-utmp-runlevel.service
+		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_unitdir}/system/reboot.target.wants/systemd-update-utmp-runlevel.service
+		ln -sf ../systemd-update-utmp-runlevel.service ${D}${systemd_unitdir}/system/rescue.target.wants/systemd-update-utmp-runlevel.service
+	fi
+
+	# Enable the journal to forward messages to the syslog daemon
+	sed -i -e 's/.*ForwardToSyslog.*/ForwardToSyslog=yes/' ${D}${sysconfdir}/systemd/journald.conf
+	# Set the maximum size of the runtime journal to 64M by default
+	sed -i -e 's/.*RuntimeMaxUse.*/RuntimeMaxUse=64M/' ${D}${sysconfdir}/systemd/journald.conf
+
+	# This file needs to exist if networkd is disabled but timesyncd is still in use,
+	# because timesyncd checks for its existence and fails if it is missing.
+	if [ -s ${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf ]; then
+		${@bb.utils.contains('PACKAGECONFIG', 'networkd', ':', 'sed -i -e "\$ad /run/systemd/netif/links 0755 root root -" ${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf', d)}
+	fi
+	if ! ${@bb.utils.contains('PACKAGECONFIG', 'resolved', 'true', 'false', d)}; then
+		echo 'L! ${sysconfdir}/resolv.conf - - - - ../run/systemd/resolve/resolv.conf' >>${D}${exec_prefix}/lib/tmpfiles.d/etc.conf
+		echo 'd /run/systemd/resolve 0755 root root -' >>${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf
+		echo 'f /run/systemd/resolve/resolv.conf 0644 root root' >>${D}${exec_prefix}/lib/tmpfiles.d/systemd.conf
+		ln -s ../run/systemd/resolve/resolv.conf ${D}${sysconfdir}/resolv-conf.systemd
+	else
+		sed -i -e "s%^L! /etc/resolv.conf.*$%L! /etc/resolv.conf - - - - ../run/systemd/resolve/resolv.conf%g" ${D}${exec_prefix}/lib/tmpfiles.d/etc.conf
+		ln -s ../run/systemd/resolve/resolv.conf ${D}${sysconfdir}/resolv-conf.systemd
+	fi
+	install -Dm 0755 ${S}/src/systemctl/systemd-sysv-install.SKELETON ${D}${systemd_unitdir}/systemd-sysv-install
+
+	# If polkit is set up, fix up permissions and ownership
+	if ${@bb.utils.contains('PACKAGECONFIG', 'polkit', 'true', 'false', d)}; then
+		if [ -d ${D}${datadir}/polkit-1/rules.d ]; then
+			chmod 700 ${D}${datadir}/polkit-1/rules.d
+			chown polkitd:root ${D}${datadir}/polkit-1/rules.d
+		fi
+	fi
+}
+
+do_install_ptest () {
+	# install data files needed for tests
+	install -d ${D}${PTEST_PATH}/tests/test
+	cp -rfL ${S}/test/* ${D}${PTEST_PATH}/tests/test
+	# python support is disabled for systemd, so remove these python test scripts
+	rm ${D}${PTEST_PATH}/tests/test/*.py
+	sed -i 's/"tree"/"ls"/' ${D}${PTEST_PATH}/tests/test/udev-test.pl
+
+	install -d ${D}${PTEST_PATH}/tests/catalog
+	install ${S}/catalog/* ${D}${PTEST_PATH}/tests/catalog/
+
+	install -D ${S}/build-aux/test-driver ${D}${PTEST_PATH}/tests/build-aux/test-driver
+
+	install -d ${D}${PTEST_PATH}/tests/rules
+	install ${B}/rules/* ${D}${PTEST_PATH}/tests/rules/
+
+	# This directory needs to be there for udev-test.pl to work.
+	install -d ${D}${libdir}/udev/rules.d
+
+	# install actual test binaries
+	install -m 0755 ${B}/test-* ${D}${PTEST_PATH}/tests/
+	install -m 0755 ${B}/.libs/test-* ${D}${PTEST_PATH}/tests/
+
+	install ${B}/Makefile ${D}${PTEST_PATH}/tests/
+}
+
+python populate_packages_prepend (){
+    systemdlibdir = d.getVar("rootlibdir")
+    do_split_packages(d, systemdlibdir, '^lib(.*)\.so\.*', 'lib%s', 'Systemd %s library', extra_depends='', allow_links=True)
+}
+PACKAGES_DYNAMIC += "^lib(udev|systemd|nss).*"
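The split above gives every shared library under ${rootlibdir} its own runtime package, which is what the PACKAGES_DYNAMIC pattern then matches. A quick host-side illustration of the resulting names, assuming typical sonames shipped by systemd 234:

    for f in libsystemd.so.0 libudev.so.1 libnss_myhostname.so.2; do
        # approximates the '^lib(.*)\.so\.*' pattern passed to do_split_packages above
        echo "$f -> lib$(echo "$f" | sed -n 's/^lib\(.*\)\.so.*/\1/p')"
    done
    # -> libsystemd, libudev, libnss_myhostname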
+
+PACKAGES =+ "\
+    ${PN}-gui \
+    ${PN}-vconsole-setup \
+    ${PN}-initramfs \
+    ${PN}-analyze \
+    ${PN}-kernel-install \
+    ${PN}-rpm-macros \
+    ${PN}-binfmt \
+    ${PN}-zsh-completion \
+    ${PN}-xorg-xinitrc \
+    ${PN}-container \
+    ${PN}-extra-utils \
+"
+
+SUMMARY_${PN}-container = "Tools for containers and VMs"
+DESCRIPTION_${PN}-container = "Systemd tools to spawn and manage containers and virtual machines."
+
+SYSTEMD_PACKAGES = "${@bb.utils.contains('PACKAGECONFIG', 'binfmt', '${PN}-binfmt', '', d)}"
+SYSTEMD_SERVICE_${PN}-binfmt = "systemd-binfmt.service"
+
+USERADD_PACKAGES = "${PN} ${PN}-extra-utils"
+USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'microhttpd', '--system -d / -M --shell /bin/nologin systemd-journal-gateway;', '', d)}"
+USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'microhttpd', '--system -d / -M --shell /bin/nologin systemd-journal-remote;', '', d)}"
+USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'journal-upload', '--system -d / -M --shell /bin/nologin systemd-journal-upload;', '', d)}"
+USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'timesyncd', '--system -d / -M --shell /bin/nologin systemd-timesync;', '', d)}"
+USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'networkd', '--system -d / -M --shell /bin/nologin systemd-network;', '', d)}"
+USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'coredump', '--system -d / -M --shell /bin/nologin systemd-coredump;', '', d)}"
+USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'resolved', '--system -d / -M --shell /bin/nologin systemd-resolve;', '', d)}"
+USERADD_PARAM_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'polkit', '--system --no-create-home --user-group --home-dir ${sysconfdir}/polkit-1 polkitd;', '', d)}"
+GROUPADD_PARAM_${PN} = "-r lock; -r systemd-journal"
+USERADD_PARAM_${PN}-extra-utils += "--system -d / -M --shell /bin/nologin systemd-bus-proxy;"
+
+FILES_${PN}-analyze = "${bindir}/systemd-analyze"
+
+FILES_${PN}-initramfs = "/init"
+RDEPENDS_${PN}-initramfs = "${PN}"
+
+RDEPENDS_${PN}-ptest += "gawk make perl bash xz \
+                         tzdata tzdata-americas tzdata-asia \
+                         tzdata-europe tzdata-africa tzdata-antarctica \
+                         tzdata-arctic tzdata-atlantic tzdata-australia \
+                         tzdata-pacific tzdata-posix"
+
+FILES_${PN}-ptest += "${libdir}/udev/rules.d"
+
+FILES_${PN}-gui = "${bindir}/systemadm"
+
+FILES_${PN}-vconsole-setup = "${rootlibexecdir}/systemd/systemd-vconsole-setup \
+                              ${systemd_unitdir}/system/systemd-vconsole-setup.service \
+                              ${systemd_unitdir}/system/sysinit.target.wants/systemd-vconsole-setup.service"
+
+RDEPENDS_${PN}-kernel-install += "bash"
+FILES_${PN}-kernel-install = "${bindir}/kernel-install \
+                              ${sysconfdir}/kernel/ \
+                              ${exec_prefix}/lib/kernel \
+                             "
+FILES_${PN}-rpm-macros = "${exec_prefix}/lib/rpm \
+                         "
+
+FILES_${PN}-xorg-xinitrc = "${sysconfdir}/X11/xinit/xinitrc.d/*"
+
+FILES_${PN}-zsh-completion = "${datadir}/zsh/site-functions"
+
+FILES_${PN}-binfmt = "${sysconfdir}/binfmt.d/ \
+                      ${exec_prefix}/lib/binfmt.d \
+                      ${rootlibexecdir}/systemd/systemd-binfmt \
+                      ${systemd_unitdir}/system/proc-sys-fs-binfmt_misc.* \
+                      ${systemd_unitdir}/system/systemd-binfmt.service"
+RRECOMMENDS_${PN}-binfmt = "kernel-module-binfmt-misc"
+
+RRECOMMENDS_${PN}-vconsole-setup = "kbd kbd-consolefonts kbd-keymaps"
+
+FILES_${PN}-container = "${sysconfdir}/dbus-1/system.d/org.freedesktop.import1.conf \
+                         ${sysconfdir}/dbus-1/system.d/org.freedesktop.machine1.conf \
+                         ${base_bindir}/machinectl \
+                         ${bindir}/systemd-nspawn \
+                         ${nonarch_libdir}/systemd/import-pubring.gpg \
+                         ${systemd_system_unitdir}/busnames.target.wants/org.freedesktop.import1.busname \
+                         ${systemd_system_unitdir}/busnames.target.wants/org.freedesktop.machine1.busname \
+                         ${systemd_system_unitdir}/local-fs.target.wants/var-lib-machines.mount \
+                         ${systemd_system_unitdir}/machine.slice \
+                         ${systemd_system_unitdir}/machines.target \
+                         ${systemd_system_unitdir}/org.freedesktop.import1.busname \
+                         ${systemd_system_unitdir}/org.freedesktop.machine1.busname \
+                         ${systemd_system_unitdir}/systemd-importd.service \
+                         ${systemd_system_unitdir}/systemd-machined.service \
+                         ${systemd_system_unitdir}/dbus-org.freedesktop.machine1.service \
+                         ${systemd_system_unitdir}/var-lib-machines.mount \
+                         ${rootlibexecdir}/systemd/systemd-import \
+                         ${rootlibexecdir}/systemd/systemd-importd \
+                         ${rootlibexecdir}/systemd/systemd-journal-gatewayd \
+                         ${rootlibexecdir}/systemd/systemd-journal-remote \
+                         ${rootlibexecdir}/systemd/systemd-journal-upload \
+                         ${rootlibexecdir}/systemd/systemd-machined \
+                         ${rootlibexecdir}/systemd/systemd-pull \
+                         ${exec_prefix}/lib/tmpfiles.d/systemd-nspawn.conf \
+                         ${systemd_system_unitdir}/systemd-nspawn@.service \
+                         ${libdir}/libnss_mymachines.so.2 \
+                         ${datadir}/dbus-1/system-services/org.freedesktop.import1.service \
+                         ${datadir}/dbus-1/system-services/org.freedesktop.machine1.service \
+                         ${datadir}/dbus-1/system.d/org.freedesktop.machine1.conf \
+                         ${datadir}/polkit-1/actions/org.freedesktop.import1.policy \
+                         ${datadir}/polkit-1/actions/org.freedesktop.machine1.policy \
+                        "
+
+FILES_${PN}-extra-utils = "\
+                        ${base_bindir}/systemd-escape \
+                        ${base_bindir}/systemd-inhibit \
+                        ${bindir}/systemd-detect-virt \
+                        ${bindir}/systemd-path \
+                        ${bindir}/systemd-run \
+                        ${bindir}/systemd-cat \
+                        ${bindir}/systemd-delta \
+                        ${bindir}/systemd-cgls \
+                        ${bindir}/systemd-cgtop \
+                        ${bindir}/systemd-stdio-bridge \
+                        ${base_bindir}/systemd-ask-password \
+                        ${base_bindir}/systemd-tty-ask-password-agent \
+                        ${systemd_unitdir}/system/systemd-ask-password-console.path \
+                        ${systemd_unitdir}/system/systemd-ask-password-console.service \
+                        ${systemd_unitdir}/system/systemd-ask-password-wall.path \
+                        ${systemd_unitdir}/system/systemd-ask-password-wall.service \
+                        ${systemd_unitdir}/system/sysinit.target.wants/systemd-ask-password-console.path \
+                        ${systemd_unitdir}/system/sysinit.target.wants/systemd-ask-password-wall.path \
+                        ${systemd_unitdir}/system/multi-user.target.wants/systemd-ask-password-wall.path \
+                        ${rootlibexecdir}/systemd/systemd-resolve-host \
+                        ${rootlibexecdir}/systemd/systemd-ac-power \
+                        ${rootlibexecdir}/systemd/systemd-activate \
+                        ${rootlibexecdir}/systemd/systemd-bus-proxyd \
+                        ${systemd_unitdir}/system/systemd-bus-proxyd.service \
+                        ${systemd_unitdir}/system/systemd-bus-proxyd.socket \
+                        ${rootlibexecdir}/systemd/systemd-socket-proxyd \
+                        ${rootlibexecdir}/systemd/systemd-reply-password \
+                        ${rootlibexecdir}/systemd/systemd-sleep \
+                        ${rootlibexecdir}/systemd/system-sleep \
+                        ${systemd_unitdir}/system/systemd-hibernate.service \
+                        ${systemd_unitdir}/system/systemd-hybrid-sleep.service \
+                        ${systemd_unitdir}/system/systemd-suspend.service \
+                        ${systemd_unitdir}/system/sleep.target \
+                        ${rootlibexecdir}/systemd/systemd-initctl \
+                        ${systemd_unitdir}/system/systemd-initctl.service \
+                        ${systemd_unitdir}/system/systemd-initctl.socket \
+                        ${systemd_unitdir}/system/sockets.target.wants/systemd-initctl.socket \
+                        ${rootlibexecdir}/systemd/system-generators/systemd-gpt-auto-generator \
+                        ${rootlibexecdir}/systemd/systemd-cgroups-agent \
+"
+
+CONFFILES_${PN} = "${sysconfdir}/machine-id \
+                ${sysconfdir}/systemd/coredump.conf \
+                ${sysconfdir}/systemd/journald.conf \
+                ${sysconfdir}/systemd/logind.conf \
+                ${sysconfdir}/systemd/system.conf \
+                ${sysconfdir}/systemd/user.conf"
+
+FILES_${PN} = " ${base_bindir}/* \
+                ${datadir}/dbus-1/services \
+                ${datadir}/dbus-1/system-services \
+                ${datadir}/polkit-1 \
+                ${datadir}/${BPN} \
+                ${datadir}/factory \
+                ${sysconfdir}/dbus-1/ \
+                ${sysconfdir}/machine-id \
+                ${sysconfdir}/modules-load.d/ \
+                ${sysconfdir}/pam.d/ \
+                ${sysconfdir}/sysctl.d/ \
+                ${sysconfdir}/systemd/ \
+                ${sysconfdir}/tmpfiles.d/ \
+                ${sysconfdir}/xdg/ \
+                ${sysconfdir}/init.d/README \
+                ${sysconfdir}/resolv-conf.systemd \
+                ${rootlibexecdir}/systemd/* \
+                ${systemd_unitdir}/* \
+                ${base_libdir}/security/*.so \
+                /cgroup \
+                ${bindir}/systemd* \
+                ${bindir}/busctl \
+                ${bindir}/coredumpctl \
+                ${bindir}/localectl \
+                ${bindir}/hostnamectl \
+                ${bindir}/timedatectl \
+                ${bindir}/bootctl \
+                ${bindir}/kernel-install \
+                ${exec_prefix}/lib/tmpfiles.d/*.conf \
+                ${exec_prefix}/lib/systemd \
+                ${exec_prefix}/lib/modules-load.d \
+                ${exec_prefix}/lib/sysctl.d \
+                ${exec_prefix}/lib/sysusers.d \
+                ${exec_prefix}/lib/environment.d \
+                ${localstatedir} \
+                ${nonarch_base_libdir}/udev/rules.d/70-uaccess.rules \
+                ${nonarch_base_libdir}/udev/rules.d/71-seat.rules \
+                ${nonarch_base_libdir}/udev/rules.d/73-seat-late.rules \
+                ${nonarch_base_libdir}/udev/rules.d/99-systemd.rules \
+                ${datadir}/dbus-1/system.d/org.freedesktop.timedate1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.locale1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.network1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.resolve1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.systemd1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.hostname1.conf \
+                ${datadir}/dbus-1/system.d/org.freedesktop.login1.conf \
+               "
+
+FILES_${PN}-dev += "${base_libdir}/security/*.la ${datadir}/dbus-1/interfaces/ ${sysconfdir}/rpm/macros.systemd"
+
+RDEPENDS_${PN} += "kmod dbus util-linux-mount udev (= ${EXTENDPKGV}) util-linux-agetty"
+RDEPENDS_${PN} += "${@bb.utils.contains('PACKAGECONFIG', 'serial-getty-generator', '', 'systemd-serialgetty', d)}"
+RDEPENDS_${PN} += "volatile-binds update-rc.d"
+
+RRECOMMENDS_${PN} += "systemd-extra-utils \
+                      systemd-compat-units udev-hwdb \
+                      util-linux-fsck e2fsprogs-e2fsck \
+                      kernel-module-autofs4 kernel-module-unix kernel-module-ipv6 \
+                      os-release \
+"
+
+INSANE_SKIP_${PN} += "dev-so libdir"
+INSANE_SKIP_${PN}-dbg += "libdir"
+INSANE_SKIP_${PN}-doc += " libdir"
+
+PACKAGES =+ "udev udev-hwdb"
+
+RPROVIDES_udev = "hotplug"
+
+RDEPENDS_udev-hwdb += "udev"
+
+FILES_udev += "${base_sbindir}/udevd \
+               ${rootlibexecdir}/systemd/systemd-udevd \
+               ${rootlibexecdir}/udev/accelerometer \
+               ${rootlibexecdir}/udev/ata_id \
+               ${rootlibexecdir}/udev/cdrom_id \
+               ${rootlibexecdir}/udev/collect \
+               ${rootlibexecdir}/udev/findkeyboards \
+               ${rootlibexecdir}/udev/keyboard-force-release.sh \
+               ${rootlibexecdir}/udev/keymap \
+               ${rootlibexecdir}/udev/mtd_probe \
+               ${rootlibexecdir}/udev/scsi_id \
+               ${rootlibexecdir}/udev/v4l_id \
+               ${rootlibexecdir}/udev/keymaps \
+               ${rootlibexecdir}/udev/rules.d/*.rules \
+               ${sysconfdir}/udev \
+               ${sysconfdir}/init.d/systemd-udevd \
+               ${systemd_unitdir}/system/*udev* \
+               ${systemd_unitdir}/system/*.wants/*udev* \
+               ${base_bindir}/udevadm \
+               ${datadir}/bash-completion/completions/udevadm \
+              "
+
+FILES_udev-hwdb = "${rootlibexecdir}/udev/hwdb.d"
+
+INITSCRIPT_PACKAGES = "udev"
+INITSCRIPT_NAME_udev = "systemd-udevd"
+INITSCRIPT_PARAMS_udev = "start 03 S ."
+
+python __anonymous() {
+    if not bb.utils.contains('DISTRO_FEATURES', 'sysvinit', True, False, d):
+        d.setVar("INHIBIT_UPDATERCD_BBCLASS", "1")
+}
+
+# TODO:
+# u-a for runlevel and telinit
+
+ALTERNATIVE_${PN} = "init halt reboot shutdown poweroff runlevel resolv-conf"
+
+ALTERNATIVE_TARGET[init] = "${rootlibexecdir}/systemd/systemd"
+ALTERNATIVE_LINK_NAME[init] = "${base_sbindir}/init"
+ALTERNATIVE_PRIORITY[init] ?= "300"
+
+ALTERNATIVE_TARGET[halt] = "${base_bindir}/systemctl"
+ALTERNATIVE_LINK_NAME[halt] = "${base_sbindir}/halt"
+ALTERNATIVE_PRIORITY[halt] ?= "300"
+
+ALTERNATIVE_TARGET[reboot] = "${base_bindir}/systemctl"
+ALTERNATIVE_LINK_NAME[reboot] = "${base_sbindir}/reboot"
+ALTERNATIVE_PRIORITY[reboot] ?= "300"
+
+ALTERNATIVE_TARGET[shutdown] = "${base_bindir}/systemctl"
+ALTERNATIVE_LINK_NAME[shutdown] = "${base_sbindir}/shutdown"
+ALTERNATIVE_PRIORITY[shutdown] ?= "300"
+
+ALTERNATIVE_TARGET[poweroff] = "${base_bindir}/systemctl"
+ALTERNATIVE_LINK_NAME[poweroff] = "${base_sbindir}/poweroff"
+ALTERNATIVE_PRIORITY[poweroff] ?= "300"
+
+ALTERNATIVE_TARGET[runlevel] = "${base_bindir}/systemctl"
+ALTERNATIVE_LINK_NAME[runlevel] = "${base_sbindir}/runlevel"
+ALTERNATIVE_PRIORITY[runlevel] ?= "300"
+
+ALTERNATIVE_TARGET[resolv-conf] = "${sysconfdir}/resolv-conf.systemd"
+ALTERNATIVE_LINK_NAME[resolv-conf] = "${sysconfdir}/resolv.conf"
+ALTERNATIVE_PRIORITY[resolv-conf] ?= "50"
+
+pkg_postinst_${PN} () {
+	sed -e '/^hosts:/s/\s*\<myhostname\>//' \
+		-e 's/\(^hosts:.*\)\(\<files\>\)\(.*\)\(\<dns\>\)\(.*\)/\1\2 myhostname \3\4\5/' \
+		-i $D${sysconfdir}/nsswitch.conf
+}
+
+pkg_prerm_${PN} () {
+	sed -e '/^hosts:/s/\s*\<myhostname\>//' \
+		-e '/^hosts:/s/\s*myhostname//' \
+		-i $D${sysconfdir}/nsswitch.conf
+}
+
+PACKAGE_WRITE_DEPS += "qemu-native"
+pkg_postinst_udev-hwdb () {
+	if test -n "$D"; then
+		${@qemu_run_binary(d, '$D', '${base_bindir}/udevadm')} hwdb --update \
+			--root $D
+		chown root:root $D${sysconfdir}/udev/hwdb.bin
+	else
+		udevadm hwdb --update
+	fi
+}
+
+pkg_prerm_udev-hwdb () {
+	rm -f $D${sysconfdir}/udev/hwdb.bin
+}
+
+# As this recipe builds udev, respect systemd being in DISTRO_FEATURES so
+# that we don't build both udev and systemd in world builds.
+python () {
+    if not bb.utils.contains ('DISTRO_FEATURES', 'systemd', True, False, d):
+        raise bb.parse.SkipPackage("'systemd' not in DISTRO_FEATURES")
+}
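The check above is the counterpart of the one in the eudev recipe further down in this patch, which skips itself when systemd is in DISTRO_FEATURES, so exactly one udev provider gets built. For reference, the usual way a distro or local.conf selects systemd for this release series looks roughly like the sketch below (standard variable names, not a required configuration):

    DISTRO_FEATURES_append = " systemd"
    VIRTUAL-RUNTIME_init_manager = "systemd"
    DISTRO_FEATURES_BACKFILL_CONSIDERED = "sysvinit"
    VIRTUAL-RUNTIME_initscripts = ""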
diff --git a/import-layers/yocto-poky/meta/recipes-core/sysvinit/sysvinit-inittab/start_getty b/import-layers/yocto-poky/meta/recipes-core/sysvinit/sysvinit-inittab/start_getty
index e3d052a..e15ae35 100644
--- a/import-layers/yocto-poky/meta/recipes-core/sysvinit/sysvinit-inittab/start_getty
+++ b/import-layers/yocto-poky/meta/recipes-core/sysvinit/sysvinit-inittab/start_getty
@@ -1,5 +1,43 @@
 #!/bin/sh
-if [ -c /dev/$2 ]
-then 
-	/sbin/getty -L $1 $2 $3
-fi
+###############################################################################
+# This script is used to automatically set up the serial console(s) on startup.
+# The variable SERIAL_CONSOLES can be set in meta/conf/machine/*.conf.
+# The script was enhanced as part of Bug YOCTO #10844.
+# Most of the information is retrieved from the /proc virtual filesystem, which
+# contains runtime system information (e.g. system memory, mounted devices, etc.).
+###############################################################################
+
+# Get the active serial driver name(s) from /proc/tty/drivers; they double as filenames under /proc/tty/driver/.
+active_serial=$(grep "serial" /proc/tty/drivers | cut -d/ -f1 | sed "s/ *$//")
+
+# Extract the numeric index from the requested tty name (ttyS1, ttyS2, ttyAMA0, etc.).
+runtime_tty=$(echo $2 | grep -oh '[0-9]')
+
+# Backup $IFS.
+DEFAULT_IFS=$IFS
+# Customize Internal Field Separator.
+IFS="$(printf '\n\t')"
+
+for line in $active_serial; do
+	# Check that the file listing the currently active serial devices exists.
+	if [ -e "/proc/tty/driver/$line" ]
+	then
+		# Remove all unknown entries and discard the first line (desc).
+		activetty=$(grep -v "unknown" "/proc/tty/driver/$line" \
+			    | tail -n +2 | grep -oh "^\s*\S*[0-9]")
+		for active in $activetty; do
+			# If the indexes match, start a getty on the serial console.
+			if [ $active -eq $runtime_tty ]
+			then
+				if [ -c /dev/$2 ]
+				then
+				    /sbin/getty -L $1 $2 $3
+				fi
+				break
+			fi
+		done
+	fi
+done
+
+# Restore $IFS.
+IFS=$DEFAULT_IFS
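To see what the loop above is filtering, it helps to run the same pipeline against a sample of /proc/tty/driver/serial. The sample contents below are an assumption (the real format depends on the kernel and UART driver), but the commands are the ones the script itself uses:

    # hypothetical contents of /proc/tty/driver/serial fed through the script's filter
    printf '%s\n' 'serinfo:1.0 driver revision:' \
        '0: uart:16550A port:000003F8 irq:4 tx:12 rx:0' \
        '1: uart:unknown port:000002F8 irq:3' \
        | grep -v "unknown" | tail -n +2 | grep -oh "^\s*\S*[0-9]"
    # -> 0, so a getty would be started for ttyS0 but not for ttyS1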
diff --git a/import-layers/yocto-poky/meta/recipes-core/sysvinit/sysvinit_2.88dsf.bb b/import-layers/yocto-poky/meta/recipes-core/sysvinit/sysvinit_2.88dsf.bb
index 884857a..22a0ecf 100644
--- a/import-layers/yocto-poky/meta/recipes-core/sysvinit/sysvinit_2.88dsf.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/sysvinit/sysvinit_2.88dsf.bb
@@ -68,7 +68,7 @@
 FILES_sysvinit-pidof = "${base_bindir}/pidof.sysvinit ${base_sbindir}/killall5"
 FILES_sysvinit-sulogin = "${base_sbindir}/sulogin.sysvinit"
 
-RDEPENDS_${PN} += "sysvinit-pidof initscripts-functions"
+RDEPENDS_${PN} += "sysvinit-pidof initd-functions"
 
 CFLAGS_prepend = "-D_GNU_SOURCE "
 export LCRYPT = "-lcrypt"
diff --git a/import-layers/yocto-poky/meta/recipes-core/udev/eudev/init b/import-layers/yocto-poky/meta/recipes-core/udev/eudev/init
index 0ab028b..0455ade 100644
--- a/import-layers/yocto-poky/meta/recipes-core/udev/eudev/init
+++ b/import-layers/yocto-poky/meta/recipes-core/udev/eudev/init
@@ -14,25 +14,7 @@
 [ -d /sys/class ] || exit 1
 [ -r /proc/mounts ] || exit 1
 [ -x @UDEVD@ ] || exit 1
-SYSCONF_CACHED="/etc/udev/cache.data"
-SYSCONF_TMP="/dev/shm/udev.cache"
-DEVCACHE_REGEN="/dev/shm/udev-regen" # create to request cache regen
 
-# A list of files which are used as a criteria to judge whether the udev cache could be reused.
-CMP_FILE_LIST="/proc/version /proc/cmdline /proc/devices"
-[ -f /proc/atags ] && CMP_FILE_LIST="$CMP_FILE_LIST /proc/atags"
-
-# List of files whose metadata (size/mtime/name) will be included in cached
-# system state.
-META_FILE_LIST="lib/udev/rules.d/* etc/udev/rules.d/*"
-
-# Command to compute system configuration.
-sysconf_cmd () {
-	cat -- $CMP_FILE_LIST
-	stat -c '%s %Y %n' -- $META_FILE_LIST | awk -F/ '{print $1 " " $NF;}'
-}
-
-[ -f /etc/default/udev-cache ] && . /etc/default/udev-cache
 [ -f /etc/udev/udev.conf ] && . /etc/udev/udev.conf
 [ -f /etc/default/rcS ] && . /etc/default/rcS
 
@@ -66,37 +48,6 @@
     # /var/volatile/tmp directory to be available.
     mkdir -m 1777 -p /var/volatile/tmp
 
-    # Cache handling.
-    if [ "$DEVCACHE" != "" ]; then
-            if [ -e $DEVCACHE ]; then
-		    sysconf_cmd > "$SYSCONF_TMP"
-		    if cmp $SYSCONF_CACHED $SYSCONF_TMP >/dev/null; then
-                            tar xmf $DEVCACHE -C / -m
-                            not_first_boot=1
-                            [ "$VERBOSE" != "no" ] && echo "udev: using cache file $DEVCACHE"
-                            [ -e $SYSCONF_TMP ] && rm -f "$SYSCONF_TMP"
-                            [ -e "$DEVCACHE_REGEN" ] && rm -f "$DEVCACHE_REGEN"
-                    else
-			    # Output detailed reason why the cached /dev is not used
-			    cat <<EOF
-udev: Not using udev cache because of changes detected in the following files:
-udev:     $CMP_FILE_LIST
-udev:     $META_FILE_LIST
-udev: The udev cache will be regenerated. To identify the detected changes,
-udev: compare the cached sysconf at   $SYSCONF_CACHED
-udev: against the current sysconf at  $SYSCONF_TMP
-EOF
-			    touch "$DEVCACHE_REGEN"
-                    fi
-	    else
-		    if [ "$ROOTFS_READ_ONLY" != "yes" ]; then
-			    # If rootfs is not read-only, it's possible that a new udev cache would be generated;
-			    # otherwise, we do not bother to read files.
-			    touch "$DEVCACHE_REGEN"
-		    fi
-            fi
-    fi
-
     # make_extra_nodes
     kill_udevd > "/dev/null" 2>&1
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/udev/eudev/udev-cache b/import-layers/yocto-poky/meta/recipes-core/udev/eudev/udev-cache
deleted file mode 100644
index dcfff1c..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/udev/eudev/udev-cache
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/bin/sh -e
-
-### BEGIN INIT INFO
-# Provides:          udev-cache
-# Required-Start:    mountall
-# Required-Stop:
-# Default-Start:     S
-# Default-Stop:
-# Short-Description: cache /dev to speedup the udev next boot
-### END INIT INFO
-
-export TZ=/etc/localtime
-
-[ -r /proc/mounts ] || exit 1
-[ -x @UDEVD@ ] || exit 1
-[ -d /sys/class ] || exit 1
-
-[ -f /etc/default/rcS ] && . /etc/default/rcS
-DEVCACHE_TMP="/dev/shm/udev-cache-tmp.tar"
-SYSCONF_CACHED="/etc/udev/cache.data"
-SYSCONF_TMP="/dev/shm/udev.cache"
-DEVCACHE_REGEN="/dev/shm/udev-regen" # create to request cache regen
-
-# A list of files which are used as a criteria to judge whether the udev cache could be reused.
-CMP_FILE_LIST="/proc/version /proc/cmdline /proc/devices"
-[ -f /proc/atags ] && CMP_FILE_LIST="$CMP_FILE_LIST /proc/atags"
-
-# List of files whose metadata (size/mtime/name) will be included in cached
-# system state.
-META_FILE_LIST="lib/udev/rules.d/* etc/udev/rules.d/*"
-
-# Command to compute system configuration.
-sysconf_cmd () {
-	cat -- $CMP_FILE_LIST
-	stat -c '%s %Y %n' -- $META_FILE_LIST | awk -F/ '{print $1 " " $NF;}'
-}
-
-[ -f /etc/default/udev-cache ] && . /etc/default/udev-cache
-
-if [ "$ROOTFS_READ_ONLY" = "yes" ]; then
-    [ "$VERBOSE" != "no" ] && echo "udev-cache: read-only rootfs, skip generating udev-cache"
-    exit 0
-fi
-
-[ "$DEVCACHE" != "" ] || exit 0
-[ "${VERBOSE}" == "no" ] || echo -n "udev-cache: checking for ${DEVCACHE_REGEN}... "
-if ! [ -e "$DEVCACHE_REGEN" ]; then
-	[ "${VERBOSE}" == "no" ] || echo "not found."
-	exit 0
-fi
-[ "${VERBOSE}" == "no" ] || echo "found."
-echo "Populating dev cache"
-
-err_cleanup () {
-        echo "udev-cache: update failed!"
-        udevadm control --start-exec-queue
-	rm -f -- "$SYSCONF_TMP" "$DEVCACHE_TMP" "$DEVCACHE" "$SYSCONF_CACHED"
-}
-
-(
-	set -e
-	trap 'err_cleanup' EXIT
-	udevadm control --stop-exec-queue
-	sysconf_cmd > "$SYSCONF_TMP"
-	find /dev -xdev \( -type b -o -type c -o -type l \) | cut -c 2- \
-		| xargs tar cf "${DEVCACHE_TMP}"
-	gzip < "${DEVCACHE_TMP}" > "$DEVCACHE"
-	rm -f "${DEVCACHE_TMP}"
-	mv "$SYSCONF_TMP" "$SYSCONF_CACHED"
-	udevadm control --start-exec-queue
-	rm -f "$DEVCACHE_REGEN"
-	trap - EXIT
-) &
-
-exit 0
diff --git a/import-layers/yocto-poky/meta/recipes-core/udev/eudev/udev-cache.default b/import-layers/yocto-poky/meta/recipes-core/udev/eudev/udev-cache.default
deleted file mode 100644
index a3b7326..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/udev/eudev/udev-cache.default
+++ /dev/null
@@ -1,5 +0,0 @@
-# Default for /etc/init.d/udev
-
-# Comment this out to disable device cache
-DEVCACHE="/etc/udev-cache.tar.gz"
-PROBE_PLATFORM_BUS="yes"
diff --git a/import-layers/yocto-poky/meta/recipes-core/udev/eudev_3.2.1.bb b/import-layers/yocto-poky/meta/recipes-core/udev/eudev_3.2.1.bb
deleted file mode 100644
index bdfb544..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/udev/eudev_3.2.1.bb
+++ /dev/null
@@ -1,106 +0,0 @@
-SUMMARY = "eudev is a fork of systemd's udev"
-HOMEPAGE = "https://wiki.gentoo.org/wiki/Eudev"
-LICENSE = "GPLv2.0+ & LGPL-2.1+"
-LICENSE_libudev = "LGPL-2.1+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
-
-DEPENDS = "glib-2.0 glib-2.0-native gperf-native kmod libxslt-native util-linux"
-
-PROVIDES = "udev"
-
-SRC_URI = "https://github.com/gentoo/${BPN}/archive/v${PV}.tar.gz;downloadfilename=${BP}.tar.gz \
-           file://0014-Revert-rules-remove-firmware-loading-rules.patch \
-           file://Revert-udev-remove-userspace-firmware-loading-suppor.patch \
-           file://devfs-udev.rules \
-           file://init \
-           file://links.conf \
-           file://local.rules \
-           file://permissions.rules \
-           file://run.rules \
-           file://udev-cache \
-           file://udev-cache.default \
-           file://udev.rules \
-"
-UPSTREAM_CHECK_URI = "https://github.com/gentoo/eudev/releases"
-
-SRC_URI[md5sum] = "a2aae16bc432eac0e71c1267c384e295"
-SRC_URI[sha256sum] = "88f530c1540750e6daa91b5eaeebf88e761e6f0c86515c1c28eedfd871f027c6"
-
-inherit autotools update-rc.d qemu pkgconfig
-
-EXTRA_OECONF = " \
-    --sbindir=${base_sbindir} \
-    --with-rootlibdir=${base_libdir} \
-    --with-rootprefix= \
-"
-
-PACKAGECONFIG ??= "hwdb"
-PACKAGECONFIG[hwdb] = "--enable-hwdb,--disable-hwdb"
-
-do_install_append() {
-	install -d ${D}${sysconfdir}/init.d
-	install -m 0755 ${WORKDIR}/init ${D}${sysconfdir}/init.d/udev
-	install -m 0755 ${WORKDIR}/udev-cache ${D}${sysconfdir}/init.d/udev-cache
-	sed -i s%@UDEVD@%${base_sbindir}/udevd% ${D}${sysconfdir}/init.d/udev
-	sed -i s%@UDEVD@%${base_sbindir}/udevd% ${D}${sysconfdir}/init.d/udev-cache
-
-	install -d ${D}${sysconfdir}/default
-	install -m 0755 ${WORKDIR}/udev-cache.default ${D}${sysconfdir}/default/udev-cache
-
-	touch ${D}${sysconfdir}/udev/cache.data
-
-	install -d ${D}${sysconfdir}/udev/rules.d
-	install -m 0644 ${WORKDIR}/local.rules ${D}${sysconfdir}/udev/rules.d/local.rules
-
-	# Use classic network interface naming scheme
-	touch ${D}${sysconfdir}/udev/rules.d/80-net-name-slot.rules
-
-	# hid2hci has moved to bluez4. removed in udev as of version 169
-	rm -f ${D}${base_libdir}/udev/hid2hci
-}
-
-INITSCRIPT_PACKAGES = "eudev udev-cache"
-INITSCRIPT_NAME_eudev = "udev"
-INITSCRIPT_PARAMS_eudev = "start 04 S ."
-INITSCRIPT_NAME_udev-cache = "udev-cache"
-INITSCRIPT_PARAMS_udev-cache = "start 36 S ."
-
-PACKAGES =+ "libudev"
-PACKAGES =+ "udev-cache"
-PACKAGES =+ "eudev-hwdb"
-
-
-FILES_${PN} += "${libexecdir} ${base_libdir}/udev ${bindir}/udevadm"
-FILES_${PN}-dev = "${datadir}/pkgconfig/udev.pc \
-                   ${includedir}/libudev.h ${libdir}/libudev.so \
-                   ${includedir}/udev.h ${libdir}/libudev.la \
-                   ${libdir}/libudev.a ${libdir}/pkgconfig/libudev.pc"
-FILES_libudev = "${base_libdir}/libudev.so.*"
-FILES_udev-cache = "${sysconfdir}/init.d/udev-cache ${sysconfdir}/default/udev-cache"
-FILES_eudev-hwdb = "${sysconfdir}/udev/hwdb.d"
-
-RDEPENDS_eudev-hwdb += "eudev"
-
-RRECOMMENDS_${PN} += "udev-cache"
-
-RPROVIDES_${PN} = "hotplug udev"
-RPROVIDES_eudev-hwdb += "udev-hwdb"
-
-python () {
-    if bb.utils.contains ('DISTRO_FEATURES', 'systemd', True, False, d):
-        raise bb.parse.SkipPackage("'systemd' in DISTRO_FEATURES")
-}
-
-PACKAGE_WRITE_DEPS += "qemu-native"
-pkg_postinst_eudev-hwdb () {
-    if test -n "$D"; then
-        ${@qemu_run_binary(d, '$D', '${bindir}/udevadm')} hwdb --update --root $D
-        chown root:root $D${sysconfdir}/udev/hwdb.bin
-    else
-        udevadm hwdb --update
-    fi
-}
-
-pkg_prerm_eudev-hwdb () {
-        rm -f $D${sysconfdir}/udev/hwdb.bin
-}
diff --git a/import-layers/yocto-poky/meta/recipes-core/udev/eudev_3.2.2.bb b/import-layers/yocto-poky/meta/recipes-core/udev/eudev_3.2.2.bb
new file mode 100644
index 0000000..02fb23a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/udev/eudev_3.2.2.bb
@@ -0,0 +1,96 @@
+SUMMARY = "eudev is a fork of systemd's udev"
+HOMEPAGE = "https://wiki.gentoo.org/wiki/Eudev"
+LICENSE = "GPLv2.0+ & LGPL-2.1+"
+LICENSE_libudev = "LGPL-2.1+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
+
+DEPENDS = "glib-2.0 glib-2.0-native gperf-native kmod libxslt-native util-linux"
+
+PROVIDES = "udev"
+
+SRC_URI = "http://dev.gentoo.org/~blueness/${BPN}/${BP}.tar.gz \
+           file://0014-Revert-rules-remove-firmware-loading-rules.patch \
+           file://Revert-udev-remove-userspace-firmware-loading-suppor.patch \
+           file://devfs-udev.rules \
+           file://init \
+           file://links.conf \
+           file://local.rules \
+           file://permissions.rules \
+           file://run.rules \
+           file://udev.rules \
+"
+UPSTREAM_CHECK_URI = "https://github.com/gentoo/eudev/releases"
+
+SRC_URI[md5sum] = "41e19b70462692fefd072a3f38818b6e"
+SRC_URI[sha256sum] = "3e4c56ec2fc1854afd0a31f3affa48f922c62d40ee12a0c1a4b4f152ef5b0f63"
+
+inherit autotools update-rc.d qemu pkgconfig
+
+EXTRA_OECONF = " \
+    --sbindir=${base_sbindir} \
+    --with-rootlibdir=${base_libdir} \
+    --with-rootprefix= \
+"
+
+PACKAGECONFIG ??= "hwdb"
+PACKAGECONFIG[hwdb] = "--enable-hwdb,--disable-hwdb"
+
+do_install_append() {
+	install -d ${D}${sysconfdir}/init.d
+	install -m 0755 ${WORKDIR}/init ${D}${sysconfdir}/init.d/udev
+	sed -i s%@UDEVD@%${base_sbindir}/udevd% ${D}${sysconfdir}/init.d/udev
+
+	install -d ${D}${sysconfdir}/udev/rules.d
+	install -m 0644 ${WORKDIR}/local.rules ${D}${sysconfdir}/udev/rules.d/local.rules
+
+	# Use classic network interface naming scheme
+	touch ${D}${sysconfdir}/udev/rules.d/80-net-name-slot.rules
+
+	# hid2hci has moved to bluez4; it was removed from udev as of version 169
+	rm -f ${D}${base_libdir}/udev/hid2hci
+}
+
+do_install_prepend_class-target () {
+	# Remove references to buildmachine
+	sed -i -e 's:${RECIPE_SYSROOT_NATIVE}::g' \
+		${B}/src/udev/keyboard-keys-from-name.h
+}
+
+INITSCRIPT_NAME = "udev"
+INITSCRIPT_PARAMS = "start 04 S ."
+
+PACKAGES =+ "libudev"
+PACKAGES =+ "eudev-hwdb"
+
+
+FILES_${PN} += "${libexecdir} ${base_libdir}/udev ${bindir}/udevadm"
+FILES_${PN}-dev = "${datadir}/pkgconfig/udev.pc \
+                   ${includedir}/libudev.h ${libdir}/libudev.so \
+                   ${includedir}/udev.h ${libdir}/libudev.la \
+                   ${libdir}/libudev.a ${libdir}/pkgconfig/libudev.pc"
+FILES_libudev = "${base_libdir}/libudev.so.*"
+FILES_eudev-hwdb = "${sysconfdir}/udev/hwdb.d"
+
+RDEPENDS_eudev-hwdb += "eudev"
+
+RPROVIDES_${PN} = "hotplug udev"
+RPROVIDES_eudev-hwdb += "udev-hwdb"
+
+python () {
+    if bb.utils.contains ('DISTRO_FEATURES', 'systemd', True, False, d):
+        raise bb.parse.SkipPackage("'systemd' in DISTRO_FEATURES")
+}
+
+PACKAGE_WRITE_DEPS += "qemu-native"
+pkg_postinst_eudev-hwdb () {
+    if test -n "$D"; then
+        ${@qemu_run_binary(d, '$D', '${bindir}/udevadm')} hwdb --update --root $D
+        chown root:root $D${sysconfdir}/udev/hwdb.bin
+    else
+        udevadm hwdb --update
+    fi
+}
+
+pkg_prerm_eudev-hwdb () {
+        rm -f $D${sysconfdir}/udev/hwdb.bin
+}
diff --git a/import-layers/yocto-poky/meta/recipes-core/udev/udev-extraconf_1.1.bb b/import-layers/yocto-poky/meta/recipes-core/udev/udev-extraconf_1.1.bb
index ae12550..43a1cff 100644
--- a/import-layers/yocto-poky/meta/recipes-core/udev/udev-extraconf_1.1.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/udev/udev-extraconf_1.1.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Extra machine specific configuration files"
+HOMEPAGE = "https://wiki.gentoo.org/wiki/Eudev"
 DESCRIPTION = "Extra machine specific configuration files for udev, specifically blacklist information."
 LICENSE = "MIT"
 LIC_FILES_CHKSUM = "file://${COREBASE}/meta/COPYING.MIT;md5=3da9cfbcb788c80a0384361b4de20420"
diff --git a/import-layers/yocto-poky/meta/recipes-core/update-rc.d/update-rc.d_0.7.bb b/import-layers/yocto-poky/meta/recipes-core/update-rc.d/update-rc.d_0.7.bb
index 3b965c5..6fc6f6e 100644
--- a/import-layers/yocto-poky/meta/recipes-core/update-rc.d/update-rc.d_0.7.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/update-rc.d/update-rc.d_0.7.bb
@@ -1,4 +1,5 @@
 SUMMARY = "manage symlinks in /etc/rcN.d"
+HOMEPAGE = "http://github.com/philb/update-rc.d/"
 DESCRIPTION = "update-rc.d is a utility that allows the management of symlinks to the initscripts in the /etc/rcN.d directory structure."
 SECTION = "base"
 
@@ -15,6 +16,7 @@
            file://check-if-symlinks-are-valid.patch \
            file://fix-to-handle-priority-numbers-correctly.patch \
           "
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 S = "${WORKDIR}/git"
 
diff --git a/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux.inc b/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux.inc
index 63302a9..f0ffd25 100644
--- a/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux.inc
+++ b/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux.inc
@@ -1,4 +1,5 @@
 SUMMARY = "A suite of basic system administration utilities"
+HOMEPAGE = "http://userweb.kernel.org/~kzak/util-linux/"
 DESCRIPTION = "Util-linux includes a suite of basic system administration utilities \
 commonly found on most Linux systems.  Some of the more important utilities include \
 disk partitioning, kernel message management, filesystem creation, and system login."
@@ -33,9 +34,10 @@
              util-linux-blkid util-linux-mkfs util-linux-mcookie util-linux-reset \
              util-linux-lsblk util-linux-mkfs.cramfs util-linux-fstrim \
              util-linux-partx util-linux-hwclock util-linux-mountpoint \
-             util-linux-findfs util-linux-getopt util-linux-sulogin util-linux-prlimit"
+             util-linux-findfs util-linux-getopt util-linux-sulogin util-linux-prlimit \
+             util-linux-ionice util-linux-switch-root"
 PACKAGES += "${@bb.utils.contains('PACKAGECONFIG', 'pylibmount', 'util-linux-pylibmount', '', d)}"
-PACKAGES =+ "${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'util-linux-runuser', '', d)}"
+PACKAGES =+ "${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'util-linux-runuser util-linux-su', '', d)}"
 
 PACKAGES_DYNAMIC = "^util-linux-lib.*"
 
@@ -91,6 +93,9 @@
 FILES_util-linux-getopt = "${base_bindir}/getopt.${BPN}"
 FILES_util-linux-runuser = "${sbindir}/runuser"
 FILES_util-linux-prlimit = "${bindir}/prlimit"
+FILES_util-linux-ionice = "${bindir}/ionice"
+FILES_util-linux-su = "${bindir}/su.util-linux ${sysconfdir}/pam.d/su-l"
+CONFFILES_util-linux-su = "${sysconfdir}/pam.d/su-l"
 
 FILES_util-linux-pylibmount = "${PYTHON_SITEPACKAGES_DIR}/libmount/pylibmount.so \
                                ${PYTHON_SITEPACKAGES_DIR}/libmount/__init__.* \
@@ -107,6 +112,8 @@
 FILES_util-linux-sulogin = "${base_sbindir}/sulogin*"
 FILES_util-linux-mountpoint = "${base_bindir}/mountpoint.${BPN}"
 
+FILES_util-linux-switch-root = "${base_sbindir}/switch_root.${BPN}"
+
 # Util-linux' blkid replaces the e2fsprogs one
 FILES_util-linux-blkid = "${base_sbindir}/blkid*"
 RCONFLICTS_util-linux-blkid = "e2fsprogs-blkid"
@@ -116,11 +123,12 @@
 RDEPENDS_util-linux-reset += "ncurses"
 
 RDEPENDS_util-linux-runuser += "libpam"
+RDEPENDS_util-linux-su += "libpam"
 
 RDEPENDS_${PN} = "util-linux-umount util-linux-swaponoff util-linux-losetup util-linux-sulogin util-linux-lsblk"
-RDEPENDS_${PN} += "${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'util-linux-runuser', '', d)}"
+RDEPENDS_${PN} += "${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'util-linux-runuser util-linux-su', '', d)}"
 
-RRECOMMENDS_${PN} = "util-linux-fdisk util-linux-cfdisk util-linux-sfdisk util-linux-mount util-linux-readprofile util-linux-mkfs util-linux-mountpoint util-linux-prlimit"
+RRECOMMENDS_${PN} = "util-linux-fdisk util-linux-cfdisk util-linux-sfdisk util-linux-mount util-linux-readprofile util-linux-mkfs util-linux-mountpoint util-linux-prlimit util-linux-ionice util-linux-switch-root"
 
 RRECOMMENDS_${PN}_class-native = ""
 RRECOMMENDS_${PN}_class-nativesdk = ""
@@ -182,6 +190,12 @@
 		install -m 0644 ${WORKDIR}/runuser.pamd ${D}${sysconfdir}/pam.d/runuser
 		install -m 0644 ${WORKDIR}/runuser-l.pamd ${D}${sysconfdir}/pam.d/runuser-l
 	fi
+	if [ "${@bb.utils.filter('PACKAGECONFIG', 'pam', d)}" ]; then
+		# Required for "su -" aka "su --login" because
+		# otherwise it uses "other", which has "auth pam_deny.so"
+		# and thus prevents the operation.
+		ln -s su ${D}${sysconfdir}/pam.d/su-l
+	fi
 }
 
 # reset and nologin causes a conflict with ncurses-native and shadow-native
@@ -290,7 +304,7 @@
 }
 
 RDEPENDS_${PN}-bash-completion += "util-linux-lsblk"
-RDEPENDS_${PN}-ptest = "bash grep coreutils"
+RDEPENDS_${PN}-ptest = "bash grep coreutils which util-linux-blkid util-linux-fsck btrfs-tools"
 
 do_compile_ptest() {
     oe_runmake buildtest-TESTS
@@ -298,23 +312,30 @@
 
 do_install_ptest() {
     mkdir -p ${D}${PTEST_PATH}/tests/ts
-    find . -maxdepth 1 -type f -perm -111 -exec cp {} ${D}${PTEST_PATH} \;
-    cp ${S}/tests/functions.sh ${D}${PTEST_PATH}/tests/
-    cp ${S}/tests/commands.sh ${D}${PTEST_PATH}/tests/
-    cp ${S}/tests/run.sh ${D}${PTEST_PATH}/tests/
-    cp -pR ${S}/tests/expected ${D}${PTEST_PATH}/tests/expected
+    find . -name 'test*' -maxdepth 1 -type f -perm -111 -exec cp {} ${D}${PTEST_PATH} \;
+    find ./.libs -name 'sample*' -maxdepth 1 -type f -perm -111 -exec cp {} ${D}${PTEST_PATH} \;
+    find ./.libs -name 'test*' -maxdepth 1 -type f -perm -111 -exec cp {} ${D}${PTEST_PATH} \;
 
-    list="bitops build-sys cal col colrm column dmesg fsck hexdump hwclock ipcs isosize login look md5 misc more namei paths schedutils script swapon tailf"
-    # The following tests are not installed  yet:
-    # blkid scsi_debug module dependent
-    # cramfs gcc dependent
-    # eject gcc dependent
-    # fdisk scsi_debug module and gcc dependent
-    # lscpu gcc dependant
-    # libmount uuidgen dependent
-    # mount gcc dependant
-    # partx blkid dependant
-    for d in $list; do
-        cp -pR ${S}/tests/ts/$d ${D}${PTEST_PATH}/tests/ts/
-    done
+    cp ${S}/tests/*.sh ${D}${PTEST_PATH}/tests/
+    cp -pR ${S}/tests/expected ${D}${PTEST_PATH}/tests/expected
+    cp -pR ${S}/tests/ts ${D}${PTEST_PATH}/tests/
+    cp ${WORKDIR}/build/config.h ${D}${PTEST_PATH}
+
+    # The original paths of executables to be tested point to a local folder containing
+    # the executables. We want to test the installed executables, not the local copies.
+    # So strip the paths; the executables will then be located via "which".
+    sed  -i \
+         -e '/^TS_CMD/ s|$top_builddir/||g' \
+         -e '/^TS_HELPER/ s|$top_builddir|${PTEST_PATH}|g' \
+         ${D}${PTEST_PATH}/tests/commands.sh
+
+    # Change 'if [ ! -x "$1" ]' to 'if [ ! -x "`which $1 2>/dev/null`"]'
+    sed -i -e \
+        '/^\tif[[:space:]]\[[[:space:]]![[:space:]]-x[[:space:]]"$1"/s|$1|`which $1 2>/dev/null`|g' \
+         ${D}${PTEST_PATH}/tests/functions.sh
+
+    # "kill -L" behaves differently than "/bin/kill -L" so we need an additional fix
+    sed -i -e \
+         '/^TS_CMD_KILL/ s|kill|/bin/kill|g' \
+         ${D}${PTEST_PATH}/tests/commands.sh
 }
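
The sed edits above retarget tests/commands.sh so that TS_CMD_* tools are resolved from PATH (via "which") while TS_HELPER_* binaries come from the installed ptest tree, and the last rule pins kill to /bin/kill because the shell builtin handles -L differently. A small Python sketch of the first two rewrites, operating on hypothetical lines in the spirit of commands.sh and assuming the usual ${libdir}/${BPN}/ptest install location:

    PTEST_PATH = "/usr/lib/util-linux/ptest"   # assumed, not taken from the recipe

    def rewrite(line):
        if line.startswith("TS_CMD"):
            # drop the build-tree prefix so the installed tool is found via "which"
            return line.replace("$top_builddir/", "")
        if line.startswith("TS_HELPER"):
            # helpers ship inside the ptest package, so anchor them there
            return line.replace("$top_builddir", PTEST_PATH)
        return line

    for line in ['TS_CMD_MOUNT=${TS_CMD_MOUNT:-"$top_builddir/mount"}',
                 'TS_HELPER_SYSINFO="$top_builddir/test_sysinfo"']:
        print(rewrite(line))
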
diff --git a/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux/no_getrandom.patch b/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux/no_getrandom.patch
new file mode 100644
index 0000000..b9fa1ca
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux/no_getrandom.patch
@@ -0,0 +1,21 @@
+getrandom() is only available in glibc 2.25+ and uninative may relocate 
+binaries onto systems that don't have this function. For now, force the 
+code to the older codepath until we can come up with a better solution 
+for this kind of issue.
+
+Upstream-Status: Inappropriate
+RP
+2016/8/15
+
+Index: util-linux-2.30/configure.ac
+===================================================================
+--- util-linux-2.30.orig/configure.ac
++++ util-linux-2.30/configure.ac
+@@ -399,7 +399,6 @@ AC_CHECK_FUNCS([ \
+ 	getdtablesize \
+ 	getexecname \
+ 	getmntinfo \
+-	getrandom \
+ 	getrlimit \
+ 	getsgnam \
+ 	inotify_init \
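
For context on the patch above: getrandom() first appeared in glibc 2.25, so a uninative-relocated binary can land on a build host whose libc lacks the symbol, which is why the configure check is dropped for native builds. A quick diagnostic sketch (purely illustrative, not part of the build) to see whether a glibc host exports it:

    import ctypes
    import ctypes.util

    libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6", use_errno=True)
    try:
        libc.getrandom          # symbol lookup happens on attribute access
        print("host libc exports getrandom()")
    except AttributeError:
        print("host libc predates glibc 2.25; getrandom() is unavailable")
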
diff --git a/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux/uuid-test-error-api.patch b/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux/uuid-test-error-api.patch
deleted file mode 100644
index a6fde5d..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux/uuid-test-error-api.patch
+++ /dev/null
@@ -1,99 +0,0 @@
-This patch adds error() API implementation for non-glibc system C libs
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
----
- misc-utils/test_uuidd.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++++-
- 1 file changed, 61 insertions(+), 1 deletion(-)
-
-diff --git a/misc-utils/test_uuidd.c b/misc-utils/test_uuidd.c
-index 36f3b3d..7d579ce 100644
---- a/misc-utils/test_uuidd.c
-+++ b/misc-utils/test_uuidd.c
-@@ -23,7 +23,6 @@
-  *
-  *	make uuidd uuidgen localstatedir=/var
-  */
--#include <error.h>
- #include <pthread.h>
- #include <stdio.h>
- #include <stdlib.h>
-@@ -38,6 +37,17 @@
- #include "xalloc.h"
- #include "strutils.h"
- 
-+#ifdef __GLIBC__
-+#include <error.h>
-+#else
-+extern void (*error_print_progname)(void);
-+extern unsigned int error_message_count;
-+extern int error_one_per_line;
-+
-+void error(int, int, const char *, ...);
-+void error_at_line(int, int, const char *, unsigned int, const char *, ...);
-+#endif
-+
- #define LOG(level,args) if (loglev >= level) { fprintf args; }
- 
- size_t nprocesses = 4;
-@@ -256,6 +266,56 @@ static void object_dump(size_t idx, object_t *obj)
- 	fprintf(stderr, "}\n");
- }
- 
-+#ifndef __GLIBC__
-+extern char *__progname;
-+
-+void (*error_print_progname)(void) = 0;
-+unsigned int error_message_count = 0;
-+int error_one_per_line = 0;
-+
-+static void eprint(int status, int e, const char *file, unsigned int line, const char *fmt, va_list ap)
-+{
-+	if (file && error_one_per_line) {
-+		static const char *oldfile;
-+		static unsigned int oldline;
-+		if (line == oldline && strcmp(file, oldfile) == 0)
-+			return;
-+		oldfile = file;
-+		oldline = line;
-+	}
-+	if (error_print_progname)
-+		error_print_progname();
-+	else
-+		fprintf(stderr, "%s: ", __progname);
-+	if (file)
-+		fprintf(stderr, "%s:%u: ", file, line);
-+	vfprintf(stderr, fmt, ap);
-+	if (e)
-+		fprintf(stderr, ": %s", strerror(e));
-+	putc('\n', stderr);
-+	fflush(stderr);
-+	error_message_count++;
-+	if (status)
-+		exit(status);
-+}
-+
-+void error(int status, int e, const char *fmt, ...)
-+{
-+	va_list ap;
-+	va_start(ap,fmt);
-+	eprint(status, e, 0, 0, fmt, ap);
-+	va_end(ap);
-+}
-+
-+void error_at_line(int status, int e, const char *file, unsigned int line, const char *fmt, ...)
-+{
-+	va_list ap;
-+	va_start(ap,fmt);
-+	eprint(status, e, file, line, fmt, ap);
-+	va_end(ap);
-+}
-+#endif /* __GLIBC__ */
-+
- int main(int argc, char *argv[])
- {
- 	size_t i, nfailed = 0, nignored = 0;
--- 
-2.8.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux_2.29.1.bb b/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux_2.29.1.bb
deleted file mode 100644
index 1395b47..0000000
--- a/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux_2.29.1.bb
+++ /dev/null
@@ -1,31 +0,0 @@
-MAJOR_VERSION = "2.29"
-require util-linux.inc
-
-# To support older hosts, we need to patch and/or revert
-# some upstream changes.  Only do this for native packages.
-OLDHOST = ""
-OLDHOST_class-native = "file://util-linux-native-qsort.patch"
-
-SRC_URI += "file://configure-sbindir.patch \
-            file://runuser.pamd \
-            file://runuser-l.pamd \
-            ${OLDHOST} \
-            file://ptest.patch \
-            file://run-ptest \
-            file://display_testname_for_subtest.patch \
-            file://avoid_parallel_tests.patch \
-            file://uuid-test-error-api.patch \
-"
-SRC_URI[md5sum] = "0cbb6d16ab9c5736e5649ef1264bee6e"
-SRC_URI[sha256sum] = "0ce40600b934ec2fecfa6bfc4efe6982d051ba96c2832b05201347aec582f54f"
-
-CACHED_CONFIGUREVARS += "scanf_cv_alloc_modifier=ms"
-
-EXTRA_OECONF_class-native = "${SHARED_EXTRA_OECONF} \
-                             --disable-fallocate \
-			     --disable-use-tty-group \
-"
-EXTRA_OECONF_class-nativesdk = "${SHARED_EXTRA_OECONF} \
-                                --disable-fallocate \
-				--disable-use-tty-group \
-"
diff --git a/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux_2.30.bb b/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux_2.30.bb
new file mode 100644
index 0000000..39449d9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-core/util-linux/util-linux_2.30.bb
@@ -0,0 +1,31 @@
+MAJOR_VERSION = "2.30"
+require util-linux.inc
+
+# To support older hosts, we need to patch and/or revert
+# some upstream changes.  Only do this for native packages.
+OLDHOST = ""
+OLDHOST_class-native = "file://util-linux-native-qsort.patch"
+
+SRC_URI += "file://configure-sbindir.patch \
+            file://runuser.pamd \
+            file://runuser-l.pamd \
+            ${OLDHOST} \
+            file://ptest.patch \
+            file://run-ptest \
+            file://display_testname_for_subtest.patch \
+            file://avoid_parallel_tests.patch \
+"
+SRC_URI_append_class-native = " file://no_getrandom.patch"
+SRC_URI[md5sum] = "eaa3429150268027908a1b8ae6ee9a62"
+SRC_URI[sha256sum] = "c208a4ff6906cb7f57940aa5bc3a6eed146e50a7cc0a092f52ef2ab65057a08d"
+
+CACHED_CONFIGUREVARS += "scanf_cv_alloc_modifier=ms"
+
+EXTRA_OECONF_class-native = "${SHARED_EXTRA_OECONF} \
+                             --disable-fallocate \
+			     --disable-use-tty-group \
+"
+EXTRA_OECONF_class-nativesdk = "${SHARED_EXTRA_OECONF} \
+                                --disable-fallocate \
+				--disable-use-tty-group \
+"
diff --git a/import-layers/yocto-poky/meta/recipes-core/zlib/zlib_1.2.11.bb b/import-layers/yocto-poky/meta/recipes-core/zlib/zlib_1.2.11.bb
index ba216f6..6410519 100644
--- a/import-layers/yocto-poky/meta/recipes-core/zlib/zlib_1.2.11.bb
+++ b/import-layers/yocto-poky/meta/recipes-core/zlib/zlib_1.2.11.bb
@@ -45,6 +45,11 @@
 	install ${B}/minigzip   ${D}${PTEST_PATH}
 	install ${B}/examplesh  ${D}${PTEST_PATH}
 	install ${B}/minigzipsh ${D}${PTEST_PATH}
+
+	# Remove buildhost references...
+	sed -i -e "s,--sysroot=${STAGING_DIR_TARGET},,g" \
+		-e 's|${DEBUG_PREFIX_MAP}||g' \
+	 ${D}${PTEST_PATH}/Makefile
 }
 
 # Move zlib shared libraries for target builds to $base_libdir so the library
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt-native_1.2.12.bb b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt-native_1.2.12.bb
deleted file mode 100644
index 1e660da..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt-native_1.2.12.bb
+++ /dev/null
@@ -1,4 +0,0 @@
-require apt-native.inc
-
-SRC_URI += "file://noconfigure.patch \
-            file://no-curl.patch"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt-native_1.2.24.bb b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt-native_1.2.24.bb
new file mode 100644
index 0000000..5b16b50
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt-native_1.2.24.bb
@@ -0,0 +1,7 @@
+require apt-native.inc
+
+SRC_URI += "file://noconfigure.patch \
+            file://no-curl.patch \
+            file://gcc_4.x_apt-pkg-contrib-strutl.cc-Include-array-header.patch \
+            file://gcc_4.x_Revert-avoid-changing-the-global-LC_TIME-for-Release.patch \
+            file://gcc_4.x_Revert-use-de-localed-std-put_time-instead-rolling-o.patch"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt.inc b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt.inc
index 3026370..f1cde30 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt.inc
@@ -2,7 +2,7 @@
 LICENSE = "GPLv2.0+"
 SECTION = "base"
 
-SRC_URI = "http://snapshot.debian.org/archive/debian/20160526T162943Z/pool/main/a/${BPN}/${BPN}_${PV}.tar.xz \
+SRC_URI = "http://archive.ubuntu.com/ubuntu/pool/main/a/${BPN}/${BPN}_${PV}.tar.xz \
            file://use-host.patch \
            file://makerace.patch \
            file://no-nls-dpkg.patch \
@@ -14,9 +14,9 @@
            file://0001-environment.mak-musl-based-systems-can-generate-shar.patch \
            file://0001-apt-1.2.12-Fix-musl-build.patch \
            "
-SRC_URI[md5sum] = "80f6f0ef110a45a7e5af8a9d233fb0e7"
-SRC_URI[sha256sum] = "e820d27cba73476df4abcff27dadd1b5847474bfe85f7e9202a9a07526973ea6"
-LIC_FILES_CHKSUM = "file://COPYING.GPL;md5=0636e73ff0215e8d672dc4c32c317bb3"
+SRC_URI[md5sum] = "ce8f9ab11f4fd0a08ec73eaffd75c8f0"
+SRC_URI[sha256sum] = "fa1311a9ce00e72379a0a3bc6d240ba30c0968cfbbb3472859e50b99e24e9598"
+LIC_FILES_CHKSUM = "file://COPYING.GPL;md5=b234ee4d69f5fce4486a80fdaf4a4263"
 
 # the package is taken from snapshots.debian.org; that source is static and goes stale
 # so we check the latest upstream from a directory that does get updated
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/0001-Revert-always-run-dpkg-configure-a-at-the-end-of-our.patch b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/0001-Revert-always-run-dpkg-configure-a-at-the-end-of-our.patch
index b3a883b..734ba00 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/0001-Revert-always-run-dpkg-configure-a-at-the-end-of-our.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/0001-Revert-always-run-dpkg-configure-a-at-the-end-of-our.patch
@@ -22,10 +22,10 @@
  3 files changed, 11 insertions(+), 18 deletions(-)
 
 diff --git a/apt-pkg/deb/dpkgpm.cc b/apt-pkg/deb/dpkgpm.cc
-index 834cb0e..84ded3a 100644
+index 533d9b367..6ce81bbd9 100644
 --- a/apt-pkg/deb/dpkgpm.cc
 +++ b/apt-pkg/deb/dpkgpm.cc
-@@ -1037,12 +1037,6 @@ void pkgDPkgPM::BuildPackagesProgressMap()
+@@ -1041,12 +1041,6 @@ void pkgDPkgPM::BuildPackagesProgressMap()
  	 PackagesTotal++;
        }
     }
@@ -38,7 +38,7 @@
  }
                                                                          /*}}}*/
  bool pkgDPkgPM::Go(int StatusFd)
-@@ -1250,8 +1244,9 @@ bool pkgDPkgPM::Go(APT::Progress::PackageManager *progress)
+@@ -1268,8 +1262,9 @@ bool pkgDPkgPM::Go(APT::Progress::PackageManager *progress)
  
     // support subpressing of triggers processing for special
     // cases like d-i that runs the triggers handling manually
@@ -50,7 +50,7 @@
  
     // for the progress
 diff --git a/test/integration/test-apt-progress-fd-deb822 b/test/integration/test-apt-progress-fd-deb822
-index 58fd732..3359762 100755
+index a8d59608d..0c6a9bbbf 100755
 --- a/test/integration/test-apt-progress-fd-deb822
 +++ b/test/integration/test-apt-progress-fd-deb822
 @@ -27,36 +27,36 @@ Message: Installing testing (amd64)
@@ -69,19 +69,19 @@
  
  Status: progress
  Package: testing:amd64
--Percent: 50
-+Percent: 60
+-Percent: 50.0000
++Percent: 60.0000
  Message: Preparing to configure testing (amd64)
  
  Status: progress
--Percent: 50
-+Percent: 60
+-Percent: 50.0000
++Percent: 60.0000
  Message: Running dpkg
  
  Status: progress
  Package: testing:amd64
--Percent: 50
-+Percent: 60
+-Percent: 50.0000
++Percent: 60.0000
  Message: Configuring testing (amd64)
  
  Status: progress
@@ -98,7 +98,7 @@
  
  Status: progress
 diff --git a/test/integration/test-no-fds-leaked-to-maintainer-scripts b/test/integration/test-no-fds-leaked-to-maintainer-scripts
-index d86e638..ef6d23b 100755
+index d86e638cd..ef6d23be2 100755
 --- a/test/integration/test-no-fds-leaked-to-maintainer-scripts
 +++ b/test/integration/test-no-fds-leaked-to-maintainer-scripts
 @@ -59,8 +59,7 @@ startup packages configure
@@ -122,5 +122,5 @@
  checkpurge
  
 -- 
-2.1.4
+2.11.0
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/0001-apt-1.2.12-Fix-musl-build.patch b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/0001-apt-1.2.12-Fix-musl-build.patch
index 04b0406..f7ac19b 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/0001-apt-1.2.12-Fix-musl-build.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/0001-apt-1.2.12-Fix-musl-build.patch
@@ -11,7 +11,7 @@
 apt-pkg/contrib/srvrec.h: Add explicity include of sys/types.h
 to avoid errors in types u_int_SIZE.
 
-Upstream-status: Pending
+Upstream-Status: Pending
 
 Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
 ---
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/gcc_4.x_Revert-avoid-changing-the-global-LC_TIME-for-Release.patch b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/gcc_4.x_Revert-avoid-changing-the-global-LC_TIME-for-Release.patch
new file mode 100644
index 0000000..438de20
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/gcc_4.x_Revert-avoid-changing-the-global-LC_TIME-for-Release.patch
@@ -0,0 +1,80 @@
+From 7ef2b2dba0e0bee450da3c8450ea782a3e7d6429 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?An=C3=ADbal=20Lim=C3=B3n?= <anibal.limon@linux.intel.com>
+Date: Tue, 22 Aug 2017 11:49:01 -0500
+Subject: [PATCH 3/3] Revert "avoid changing the global LC_TIME for Release
+ writing"
+
+This reverts commit 78e7b683c645e907db12658405a4b201a6243ea8.
+
+Once we drop the Debian 8 and CentOS 7 versions, which ship gcc < 5 (std::put_time
+not available), this patch can be removed.
+
+Signed-off-by: Anibal Limon <limon.anibal@gmail.com>
+
+Upstream-Status: Inappropriate [embedded specific]
+---
+ ftparchive/writer.cc | 29 +++++++++++++++++------------
+ 1 file changed, 17 insertions(+), 12 deletions(-)
+
+diff --git a/ftparchive/writer.cc b/ftparchive/writer.cc
+index 2596382..e43a643 100644
+--- a/ftparchive/writer.cc
++++ b/ftparchive/writer.cc
+@@ -37,7 +37,6 @@
+ #include <unistd.h>
+ #include <ctime>
+ #include <iostream>
+-#include <iomanip>
+ #include <sstream>
+ #include <memory>
+ #include <utility>
+@@ -984,29 +983,35 @@ ReleaseWriter::ReleaseWriter(FileFd * const GivenOutput, string const &/*DB*/) :
+    AddPatterns(_config->FindVector("APT::FTPArchive::Release::Patterns"));
+ 
+    time_t const now = time(NULL);
+-   auto const posix = std::locale("C.UTF-8");
+ 
+-   // FIXME: use TimeRFC1123 here? But that uses GMT to satisfy HTTP/1.1
+-   std::ostringstream datestr;
+-   datestr.imbue(posix);
+-   datestr << std::put_time(gmtime(&now), "%a, %d %b %Y %H:%M:%S UTC");
++   setlocale(LC_TIME, "C");
++
++   char datestr[128];
++   if (strftime(datestr, sizeof(datestr), "%a, %d %b %Y %H:%M:%S UTC",
++                gmtime(&now)) == 0)
++   {
++      datestr[0] = '\0';
++   }
+ 
+    time_t const validuntil = now + _config->FindI("APT::FTPArchive::Release::ValidTime", 0);
+-   std::ostringstream validstr;
+-   if (validuntil != now)
++   char validstr[128];
++   if (now == validuntil ||
++       strftime(validstr, sizeof(validstr), "%a, %d %b %Y %H:%M:%S UTC",
++                gmtime(&validuntil)) == 0)
+    {
+-      datestr.imbue(posix);
+-      validstr << std::put_time(gmtime(&validuntil), "%a, %d %b %Y %H:%M:%S UTC");
++      validstr[0] = '\0';
+    }
+ 
++   setlocale(LC_TIME, "");
++
+    map<string,string> Fields;
+    Fields["Origin"] = "";
+    Fields["Label"] = "";
+    Fields["Suite"] = "";
+    Fields["Version"] = "";
+    Fields["Codename"] = "";
+-   Fields["Date"] = datestr.str();
+-   Fields["Valid-Until"] = validstr.str();
++   Fields["Date"] = datestr;
++   Fields["Valid-Until"] = validstr;
+    Fields["Architectures"] = "";
+    Fields["Components"] = "";
+    Fields["Description"] = "";
+-- 
+2.1.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/gcc_4.x_Revert-use-de-localed-std-put_time-instead-rolling-o.patch b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/gcc_4.x_Revert-use-de-localed-std-put_time-instead-rolling-o.patch
new file mode 100644
index 0000000..088a66a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/gcc_4.x_Revert-use-de-localed-std-put_time-instead-rolling-o.patch
@@ -0,0 +1,46 @@
+From c72ef9b6ae83a0a2fbbefd5c050335f65f0d2bc9 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?An=C3=ADbal=20Lim=C3=B3n?= <anibal.limon@linux.intel.com>
+Date: Tue, 22 Aug 2017 11:48:46 -0500
+Subject: [PATCH 2/3] Revert "use de-localed std::put_time instead rolling our
+ own"
+
+This reverts commit 4ed2a17ab4334f019c00512aa54a162f0bf083c4.
+
+Once we drop the Debian 8 and CentOS 7 versions, which ship gcc < 5 (std::put_time
+not available), this patch can be removed.
+
+Signed-off-by: Anibal Limon <limon.anibal@gmail.com>
+
+Upstream-Status: Inappropriate [embedded specific]
+---
+ apt-pkg/contrib/strutl.cc | 14 +++++++++-----
+ 1 file changed, 9 insertions(+), 5 deletions(-)
+
+diff --git a/apt-pkg/contrib/strutl.cc b/apt-pkg/contrib/strutl.cc
+index c2ff01d..e9ef2be 100644
+--- a/apt-pkg/contrib/strutl.cc
++++ b/apt-pkg/contrib/strutl.cc
+@@ -760,11 +760,15 @@ string TimeRFC1123(time_t Date)
+    if (gmtime_r(&Date, &Conv) == NULL)
+       return "";
+ 
+-   auto const posix = std::locale::classic();
+-   std::ostringstream datestr;
+-   datestr.imbue(posix);
+-   datestr << std::put_time(&Conv, "%a, %d %b %Y %H:%M:%S GMT");
+-   return datestr.str();
++   char Buf[300];
++   const char *Day[] = {"Sun","Mon","Tue","Wed","Thu","Fri","Sat"};
++   const char *Month[] = {"Jan","Feb","Mar","Apr","May","Jun","Jul",
++                          "Aug","Sep","Oct","Nov","Dec"};
++
++   snprintf(Buf, sizeof(Buf), "%s, %02i %s %i %02i:%02i:%02i GMT",Day[Conv.tm_wday],
++	   Conv.tm_mday,Month[Conv.tm_mon],Conv.tm_year+1900,Conv.tm_hour,
++	   Conv.tm_min,Conv.tm_sec);
++   return Buf;
+ }
+ 									/*}}}*/
+ // ReadMessages - Read messages from the FD				/*{{{*/
+-- 
+2.1.4
+
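
The revert above trades std::put_time (missing from libstdc++ before GCC 5) for snprintf over fixed English day and month tables, which keeps TimeRFC1123 output independent of the locale. The same idea in Python, shown only as an illustration of the format being produced:

    import time

    # Python's tm_wday starts at Monday, unlike the Sunday-based tm_wday in the C code
    DAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
    MONTHS = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
              "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]

    def time_rfc1123(epoch):
        t = time.gmtime(epoch)
        # fixed tables instead of %a/%b so the result never depends on LC_TIME
        return "%s, %02d %s %d %02d:%02d:%02d GMT" % (
            DAYS[t.tm_wday], t.tm_mday, MONTHS[t.tm_mon - 1],
            t.tm_year, t.tm_hour, t.tm_min, t.tm_sec)

    print(time_rfc1123(0))   # -> "Thu, 01 Jan 1970 00:00:00 GMT"
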
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/gcc_4.x_apt-pkg-contrib-strutl.cc-Include-array-header.patch b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/gcc_4.x_apt-pkg-contrib-strutl.cc-Include-array-header.patch
new file mode 100644
index 0000000..cb32591
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt/gcc_4.x_apt-pkg-contrib-strutl.cc-Include-array-header.patch
@@ -0,0 +1,33 @@
+From ff8562f7724c4db4b83635af9e627f3495222327 Mon Sep 17 00:00:00 2001
+From: Anibal Limon <limon.anibal@gmail.com>
+Date: Tue, 22 Aug 2017 04:41:31 -0500
+Subject: [PATCH 1/3] apt-pkg/contrib/strutl.cc: Include array header
+
+If GCC version is less than 5 the array header needs to be included
+to support std::array.
+
+Once we drop the Debian 8 and CentOS 7 versions, which ship gcc < 5, this patch
+can be removed.
+
+Signed-off-by: Anibal Limon <limon.anibal@gmail.com>
+
+Upstream-Status: Inappropriate [embedded specific]
+---
+ apt-pkg/contrib/strutl.cc | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/apt-pkg/contrib/strutl.cc b/apt-pkg/contrib/strutl.cc
+index 60d0ca8..c2ff01d 100644
+--- a/apt-pkg/contrib/strutl.cc
++++ b/apt-pkg/contrib/strutl.cc
+@@ -27,6 +27,7 @@
+ #include <sstream>
+ #include <string>
+ #include <vector>
++#include <array>
+ 
+ #include <stddef.h>
+ #include <stdlib.h>
+-- 
+2.1.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/apt/apt_1.2.12.bb b/import-layers/yocto-poky/meta/recipes-devtools/apt/apt_1.2.24.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/apt/apt_1.2.12.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/apt/apt_1.2.24.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/autoconf-archive/autoconf-archive_2016.09.16.bb b/import-layers/yocto-poky/meta/recipes-devtools/autoconf-archive/autoconf-archive_2016.09.16.bb
new file mode 100644
index 0000000..104dc38
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/autoconf-archive/autoconf-archive_2016.09.16.bb
@@ -0,0 +1,14 @@
+SUMMARY = "a collection of freely re-usable Autoconf macros"
+HOMEPAGE = "http://www.gnu.org/software/autoconf-archive/"
+SECTION = "devel"
+LICENSE = "GPL-3.0-with-autoconf-exception"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504 \
+    file://COPYING.EXCEPTION;md5=fdef168ebff3bc2f13664c365a5fb515"
+
+SRC_URI = "${GNU_MIRROR}/${BPN}/${BPN}-${PV}.tar.xz"
+SRC_URI[md5sum] = "bf19d4cddce260b3c3e1d51d42509071"
+SRC_URI[sha256sum] = "e8f2efd235f842bad2f6938bf4a72240a5e5fcd248e8444335e63beb60fabd82"
+
+inherit autotools
+
+BBCLASSEXTEND = "native nativesdk"
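
The SRC_URI here is assembled purely from variable expansion, so the fetch target can be sanity-checked by hand; a quick sketch (the mirror URL is just an example value, not necessarily poky's default):

    GNU_MIRROR = "https://ftp.gnu.org/gnu"   # example mirror
    BPN = "autoconf-archive"
    PV = "2016.09.16"

    print("%s/%s/%s-%s.tar.xz" % (GNU_MIRROR, BPN, BPN, PV))
    # -> https://ftp.gnu.org/gnu/autoconf-archive/autoconf-archive-2016.09.16.tar.xz
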
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/autoconf/autoconf.inc b/import-layers/yocto-poky/meta/recipes-devtools/autoconf/autoconf.inc
index f1b2dfc..ea62df8 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/autoconf/autoconf.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/autoconf/autoconf.inc
@@ -1,4 +1,4 @@
-SUMMARY = "A GNU tool that procude shell scripts to automatically configure software"
+SUMMARY = "A GNU tool that produce shell scripts to automatically configure software"
 DESCRIPTION = "Autoconf is an extensible package of M4 macros that produce shell scripts to automatically \ 
 configure software source code packages. Autoconf creates a configuration script for a package from a template \
 file that lists the operating system features that the package can use, in the form of M4 macro calls."
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen-native_5.18.12.bb b/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen-native_5.18.12.bb
deleted file mode 100644
index 853477c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen-native_5.18.12.bb
+++ /dev/null
@@ -1,40 +0,0 @@
-SUMMARY = "Automated text and program generation tool"
-DESCRIPTION = "AutoGen is a tool designed to simplify the creation and\
- maintenance of programs that contain large amounts of repetitious text.\
- It is especially valuable in programs that have several blocks of text\
- that must be kept synchronized."
-HOMEPAGE = "http://www.gnu.org/software/autogen/"
-SECTION = "devel"
-LICENSE = "GPLv3"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
-
-SRC_URI = "${GNU_MIRROR}/autogen/rel${PV}/autogen-${PV}.tar.gz \
-           file://increase-timeout-limit.patch \
-           file://mk-tpl-config.sh-force-exit-value-to-be-0-in-subproc.patch \
-           file://fix-script-err-when-processing-libguile.patch \
-           file://0001-config-libopts.m4-regenerate-it-from-config-libopts..patch \
-           file://0002-autoopts-mk-tpl-config.sh-fix-perl-path.patch \
-"
-
-SRC_URI[md5sum] = "551d15ccbf5b5fc5658da375d5003389"
-SRC_URI[sha256sum] = "805c20182f3cb0ebf1571d3b01972851c56fb34348dfdc38799fd0ec3b2badbe"
-
-UPSTREAM_CHECK_URI = "http://ftp.gnu.org/gnu/autogen/"
-UPSTREAM_CHECK_REGEX = "rel(?P<pver>\d+(\.\d+)+)/"
-
-DEPENDS = "guile-native libtool-native libxml2-native"
-
-inherit autotools texinfo native pkgconfig
-
-# autogen-native links against libguile which may have been relocated with sstate
-# these environment variables ensure there isn't a relocation issue
-export GUILE_LOAD_PATH = "${STAGING_DATADIR_NATIVE}/guile/2.0"
-export GUILE_LOAD_COMPILED_PATH = "${STAGING_LIBDIR_NATIVE}/guile/2.0/ccache"
-
-export POSIX_SHELL = "/usr/bin/env sh"
-
-do_install_append () {
-	create_wrapper ${D}/${bindir}/autogen \
-		GUILE_LOAD_PATH=${STAGING_DATADIR_NATIVE}/guile/2.0 \
-		GUILE_LOAD_COMPILED_PATH=${STAGING_LIBDIR_NATIVE}/guile/2.0/ccache
-}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/0001-config-libopts.m4-regenerate-it-from-config-libopts..patch b/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/0001-config-libopts.m4-regenerate-it-from-config-libopts..patch
deleted file mode 100644
index a14018e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/0001-config-libopts.m4-regenerate-it-from-config-libopts..patch
+++ /dev/null
@@ -1,39 +0,0 @@
-From 45040e7d268329ebc40e6cb237c64a6637cfab5c Mon Sep 17 00:00:00 2001
-From: Robert Yang <liezhi.yang@windriver.com>
-Date: Mon, 13 Mar 2017 20:22:10 -0700
-Subject: [PATCH] config/libopts.m4: regenerate it from config/libopts.def
-
-It was out of date compared to config/libopts.def, so regenerate it via
-"autogen config/libopts.def" command.
-
-Upstream-Status: Pending
-
-Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
----
- config/libopts.m4 | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/config/libopts.m4 b/config/libopts.m4
-index c7ba4f3..51e6a39 100644
---- a/config/libopts.m4
-+++ b/config/libopts.m4
-@@ -2,7 +2,7 @@ dnl  -*- buffer-read-only: t -*- vi: set ro:
- dnl
- dnl DO NOT EDIT THIS FILE   (libopts.m4)
- dnl
--dnl It has been AutoGen-ed
-+dnl It has been AutoGen-ed  March 13, 2017 at 08:21:21 PM by AutoGen 5.18
- dnl From the definitions    libopts.def
- dnl and the template file   conftest.tpl
- dnl
-@@ -114,6 +114,7 @@ AC_DEFUN([INVOKE_LIBOPTS_MACROS_FIRST],[
-   AC_PROG_SED
-   [while :
-   do
-+      test -x "$POSIX_SHELL" && break
-       POSIX_SHELL=`which bash`
-       test -x "$POSIX_SHELL" && break
-       POSIX_SHELL=`which dash`
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/0002-autoopts-mk-tpl-config.sh-fix-perl-path.patch b/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/0002-autoopts-mk-tpl-config.sh-fix-perl-path.patch
deleted file mode 100644
index d5fe143..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/0002-autoopts-mk-tpl-config.sh-fix-perl-path.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From 9f69f3f5ef22bf1bcffb0e651efc260889cfaa46 Mon Sep 17 00:00:00 2001
-From: Robert Yang <liezhi.yang@windriver.com>
-Date: Mon, 13 Mar 2017 20:33:30 -0700
-Subject: [PATCH] autoopts/mk-tpl-config.sh: fix perl path
-
-Use "which perl" as shebang doesn't work when it is longer than
-BINPRM_BUF_SIZE which is 128 usually. So use "/usr/bin/env perl" to
-instead of.
-
-Upstream-Status: Pending
-
-Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
----
- autoopts/mk-tpl-config.sh | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/autoopts/mk-tpl-config.sh b/autoopts/mk-tpl-config.sh
-index 093e808..8dfc6dd 100755
---- a/autoopts/mk-tpl-config.sh
-+++ b/autoopts/mk-tpl-config.sh
-@@ -98,7 +98,7 @@ fix_scripts() {
-         st=`sed 1q $f`
- 
-         case "$st" in
--        *perl ) echo '#!' `which perl`
-+        *perl ) echo '#!/usr/bin/env perl'
-                  sed 1d $f
-                  ;;
- 
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/fix-script-err-when-processing-libguile.patch b/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/fix-script-err-when-processing-libguile.patch
deleted file mode 100644
index 694a395..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/fix-script-err-when-processing-libguile.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-autogen-native: fix script err when processing libguile
-
-do_configure for autogen will fail if project directory path
-contains '-I' character, which is caused by the unsuitable sed
-script when processing libguile.
-
-Upstream-Status: Pending
-
-Signed-off-by: Zhenbo Gao <zhenbo.gao@windriver.com>
-Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
-
-diff --git a/config/ag_macros.m4 b/config/ag_macros.m4
-index 58186b6..58ed2ad 100644
---- a/config/ag_macros.m4
-+++ b/config/ag_macros.m4
-@@ -32,7 +32,7 @@ AC_DEFUN([INVOKE_AG_MACROS_LAST],[
-   GUILE_FLAGS
-   [ag_gv=`gdir=\`pkg-config --cflags-only-I \
-   guile-${GUILE_EFFECTIVE_VERSION} | \
--  sed 's/-I *//;s/ *-I.*/ /g'\`
-+  sed 's/ *-I *\// \//g'\`
-   for d in $gdir
-   do  test -f "$d/libguile/version.h" && gdir=$d && break
-   done
-diff --git a/config/misc.def b/config/misc.def
-index 490d361..6e183ef 100644
---- a/config/misc.def
-+++ b/config/misc.def
-@@ -342,7 +342,7 @@ do-always = <<- _END_ALWAYS_
- 	GUILE_FLAGS
- 	[ag_gv=`gdir=\`pkg-config --cflags-only-I \
- 			guile-${GUILE_EFFECTIVE_VERSION} | \
--			sed 's/-I *//;s/ *-I.*/ /g'\`
-+			sed 's/ *-I *\// \//g'\`
- 		test -z "$gdir" && gdir=/usr/include
- 		for d in $gdir
- 		do  test -f "$d/libguile/version.h" && gdir=$d && break
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/increase-timeout-limit.patch b/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/increase-timeout-limit.patch
deleted file mode 100644
index 9efd7e5..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/increase-timeout-limit.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-Subject: [PATCH] autogen: increase timeout limit for shell commands
-
-On some overloaded hosts, shell commands of autogen may can not
-finish in 5 secs. This has caused many build failures, so increase
-the timeout limit to fix this.
-
-Upstream-Status: Inappropriate [configuration]
-
-Signed-off-by: Xin Ouyang <Xin.Ouyang@windriver.com>
----
- configure.ac |    6 +++---
- 1 file changed, 3 insertions(+), 3 deletions(-)
-
-diff --git a/configure.ac b/configure.ac
-index 58a848b..170dd9e 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -178,9 +178,9 @@ time_delta=`expr ${config_end_time} - ${config_start_time} 2>/dev/null`
- if test -z "${AG_TIMEOUT}"
- then
-   if test -z "${time_delta}"
--  then time_delta=10
--  elif test ${time_delta} -lt 5
--  then time_delta=5 ; fi
-+  then time_delta=60
-+  elif test ${time_delta} -lt 30
-+  then time_delta=30 ; fi
- 
-   AG_TIMEOUT=${time_delta}
- fi
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/mk-tpl-config.sh-force-exit-value-to-be-0-in-subproc.patch b/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/mk-tpl-config.sh-force-exit-value-to-be-0-in-subproc.patch
deleted file mode 100644
index e56da7b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/autogen/autogen/mk-tpl-config.sh-force-exit-value-to-be-0-in-subproc.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-Upstream-Status: Pending
-
-mk-tpl-config.sh: force exit value to be 0 in subprocess
-
-The return value of statement list=`<subcommands>` is the exit value of the
-subcommands. So if the subcommands fails, the compilation fails. This is obviously
-not intended. In the normal case, we expect the grep command to fail as there should
-be no 'noreturn' word in the libguile files.
-
-Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
----
- autoopts/mk-tpl-config.sh |    2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/autoopts/mk-tpl-config.sh b/autoopts/mk-tpl-config.sh
-index 926f5ab..6b4a0fb 100755
---- a/autoopts/mk-tpl-config.sh
-+++ b/autoopts/mk-tpl-config.sh
-@@ -202,7 +202,7 @@ fix_guile() {
- 
-     list=`set +e ; exec 2>/dev/null
-         find ${libguiledir}/libguile* -type f | \
--            xargs grep -l -E '\<noreturn\>'`
-+            xargs grep -l -E '\<noreturn\>' ; exit 0`
- 
-     test -z "$list" && exit 0
- 
--- 
-1.7.9.5
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/automake/automake/0001-automake-Add-default-libtool_tag-to-cppasm.patch b/import-layers/yocto-poky/meta/recipes-devtools/automake/automake/0001-automake-Add-default-libtool_tag-to-cppasm.patch
new file mode 100644
index 0000000..1221f13
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/automake/automake/0001-automake-Add-default-libtool_tag-to-cppasm.patch
@@ -0,0 +1,27 @@
+From 25a8ac30486798d23f516722d73eb622e6264f28 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 26 Jul 2017 11:19:56 -0700
+Subject: [PATCH] automake: Add default libtool_tag to cppasm.
+
+    * bin/automake.in (register_language): Define default libtool tag to be CC
+    since CPPASCOMPILE uses CC to invoke the assembler
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Submitted
+
+ bin/automake.in | 1 +
+ 1 file changed, 1 insertion(+)
+
+Index: automake-1.15.1/bin/automake.in
+===================================================================
+--- automake-1.15.1.orig/bin/automake.in
++++ automake-1.15.1/bin/automake.in
+@@ -831,6 +831,7 @@ register_language ('name' => 'cppasm',
+ 		   'compiler' => 'CPPASCOMPILE',
+ 		   'compile_flag' => '-c',
+ 		   'output_flag' => '-o',
++		   'libtool_tag' => 'CC',
+ 		   'extensions' => ['.S', '.sx']);
+ 
+ # Fortran 77
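
The hunk above tags the cppasm language with libtool_tag 'CC' because CPPASCOMPILE drives the assembler through the C compiler, and without a tag libtool cannot work out how to handle .S/.sx sources. A loose Python sketch of the registration idea (automake's real register_language is Perl and carries many more attributes):

    LANGUAGES = {}

    def register_language(name, **attrs):
        LANGUAGES[name] = attrs

    # mirrors the patched cppasm entry, including the new libtool_tag
    register_language("cppasm",
                      compiler="CPPASCOMPILE",
                      compile_flag="-c",
                      output_flag="-o",
                      libtool_tag="CC",
                      extensions=[".S", ".sx"])

    print(LANGUAGES["cppasm"]["libtool_tag"])   # -> CC
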
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/automake/automake/0001-automake-port-to-Perl-5.22-and-later.patch b/import-layers/yocto-poky/meta/recipes-devtools/automake/automake/0001-automake-port-to-Perl-5.22-and-later.patch
deleted file mode 100644
index 0e6895f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/automake/automake/0001-automake-port-to-Perl-5.22-and-later.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From 13f00eb4493c217269b76614759e452d8302955e Mon Sep 17 00:00:00 2001
-From: Paul Eggert <eggert@cs.ucla.edu>
-Date: Thu, 31 Mar 2016 16:35:29 -0700
-Subject: [PATCH] automake: port to Perl 5.22 and later
-
-Without this change, Perl 5.22 complains "Unescaped left brace in
-regex is deprecated" and this is planned to become a hard error in
-Perl 5.26.  See:
-http://search.cpan.org/dist/perl-5.22.0/pod/perldelta.pod#A_literal_%22{%22_should_now_be_escaped_in_a_pattern
-* bin/automake.in (substitute_ac_subst_variables): Escape left brace.
-
-Upstream-Status: Backport [13f00eb4493c217269b76614759e452d8302955e]
----
- bin/automake.in | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/bin/automake.in b/bin/automake.in
-index a3a0aa318..2c8f31e14 100644
---- a/bin/automake.in
-+++ b/bin/automake.in
-@@ -3878,7 +3878,7 @@ sub substitute_ac_subst_variables_worker
- sub substitute_ac_subst_variables
- {
-   my ($text) = @_;
--  $text =~ s/\${([^ \t=:+{}]+)}/substitute_ac_subst_variables_worker ($1)/ge;
-+  $text =~ s/\$[{]([^ \t=:+{}]+)}/substitute_ac_subst_variables_worker ($1)/ge;
-   return $text;
- }
- 
--- 
-2.11.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/automake/automake_1.15.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/automake/automake_1.15.1.bb
new file mode 100644
index 0000000..4f9b616
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/automake/automake_1.15.1.bb
@@ -0,0 +1,43 @@
+require automake.inc
+LICENSE = "GPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
+DEPENDS_class-native = "autoconf-native"
+
+NAMEVER = "${@oe.utils.trim_version("${PV}", 2)}"
+
+RDEPENDS_${PN} += "\
+    autoconf \
+    perl \
+    perl-module-bytes \
+    perl-module-data-dumper \
+    perl-module-strict \
+    perl-module-text-parsewords \
+    perl-module-thread-queue \
+    perl-module-threads \
+    perl-module-vars "
+
+RDEPENDS_${PN}_class-native = "autoconf-native hostperl-runtime-native"
+RDEPENDS_${PN}_class-nativesdk = "nativesdk-autoconf"
+
+SRC_URI += "file://python-libdir.patch \
+            file://buildtest.patch \
+            file://performance.patch \
+            file://new_rt_path_for_test-driver.patch \
+            file://automake-replace-w-option-in-shebangs-with-modern-use-warnings.patch \
+            file://0001-automake-Add-default-libtool_tag-to-cppasm.patch \
+            "
+
+SRC_URI[md5sum] = "95df3f2d6eb8f81e70b8cb63a93c8853"
+SRC_URI[sha256sum] = "988e32527abe052307d21c8ca000aa238b914df363a617e38f4fb89f5abf6260"
+
+PERL = "${USRBINPATH}/perl"
+PERL_class-native = "${USRBINPATH}/env perl"
+PERL_class-nativesdk = "${USRBINPATH}/env perl"
+
+CACHED_CONFIGUREVARS += "ac_cv_path_PERL='${PERL}'"
+
+do_install_append () {
+    install -d ${D}${datadir}
+}
+
+BBCLASSEXTEND = "native nativesdk"
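
NAMEVER above relies on oe.utils.trim_version() to cut PV down to its first two components, so automake 1.15.1 keeps installing its data under 1.15. A sketch of what that helper effectively does, assuming it simply keeps the leading N dot-separated fields:

    def trim_version(version, num_parts=2):
        # assumed behaviour of oe.utils.trim_version
        if num_parts < 1:
            raise ValueError("num_parts must be at least 1")
        return ".".join(version.split(".")[:num_parts])

    print(trim_version("1.15.1", 2))   # -> "1.15"
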
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/automake/automake_1.15.bb b/import-layers/yocto-poky/meta/recipes-devtools/automake/automake_1.15.bb
deleted file mode 100644
index 902dd63..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/automake/automake_1.15.bb
+++ /dev/null
@@ -1,43 +0,0 @@
-require automake.inc
-LICENSE = "GPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
-DEPENDS_class-native = "autoconf-native"
-
-NAMEVER = "${@oe.utils.trim_version("${PV}", 2)}"
-
-RDEPENDS_${PN} += "\
-    autoconf \
-    perl \
-    perl-module-bytes \
-    perl-module-data-dumper \
-    perl-module-strict \
-    perl-module-text-parsewords \
-    perl-module-thread-queue \
-    perl-module-threads \
-    perl-module-vars "
-
-RDEPENDS_${PN}_class-native = "autoconf-native hostperl-runtime-native"
-RDEPENDS_${PN}_class-nativesdk = "nativesdk-autoconf"
-
-SRC_URI += "file://python-libdir.patch \
-            file://buildtest.patch \
-            file://performance.patch \
-            file://new_rt_path_for_test-driver.patch \
-            file://automake-replace-w-option-in-shebangs-with-modern-use-warnings.patch \
-            file://0001-automake-port-to-Perl-5.22-and-later.patch \
-            "
-
-SRC_URI[md5sum] = "716946a105ca228ab545fc37a70df3a3"
-SRC_URI[sha256sum] = "7946e945a96e28152ba5a6beb0625ca715c6e32ac55f2e353ef54def0c8ed924"
-
-PERL = "${USRBINPATH}/perl"
-PERL_class-native = "${USRBINPATH}/env perl"
-PERL_class-nativesdk = "${USRBINPATH}/env perl"
-
-CACHED_CONFIGUREVARS += "ac_cv_path_PERL='${PERL}'"
-
-do_install_append () {
-    install -d ${D}${datadir}
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-2.28.inc b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-2.28.inc
deleted file mode 100644
index 1784c52..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-2.28.inc
+++ /dev/null
@@ -1,85 +0,0 @@
-LIC_FILES_CHKSUM="\
-    file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552\
-    file://COPYING.LIB;md5=9f604d8a4f8e74f4f5140845a21b6674\
-    file://COPYING3;md5=d32239bcb673463ab874e80d47fae504\
-    file://COPYING3.LIB;md5=6a6a8e020838b23406c81b19c1d46df6\
-    file://gas/COPYING;md5=d32239bcb673463ab874e80d47fae504\
-    file://include/COPYING;md5=59530bdf33659b29e73d4adb9f9f6552\
-    file://include/COPYING3;md5=d32239bcb673463ab874e80d47fae504\
-    file://libiberty/COPYING.LIB;md5=a916467b91076e631dd8edb7424769c7\
-    file://bfd/COPYING;md5=d32239bcb673463ab874e80d47fae504\
-    "
-
-def binutils_branch_version(d):
-    pvsplit = d.getVar('PV').split('.')
-    return pvsplit[0] + "_" + pvsplit[1]
-
-BINUPV = "${@binutils_branch_version(d)}"
-
-UPSTREAM_CHECK_GITTAGREGEX = "binutils-(?P<pver>\d+_(\d_?)*)"
-
-SRCREV = "354199c7692c1bed53a2a15f0e4d531457e95f17"
-SRC_URI = "\
-     git://sourceware.org/git/binutils-gdb.git;branch=binutils-${BINUPV}-branch;protocol=git \
-     file://0003-gprof-add-uclibc-support-to-configure.patch \
-     file://0004-Point-scripts-location-to-libdir.patch \
-     file://0005-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch \
-     file://0006-Explicitly-link-with-libm-on-uclibc.patch \
-     file://0007-Use-libtool-2.4.patch \
-     file://0008-Add-the-armv5e-architecture-to-binutils.patch \
-     file://0009-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch \
-     file://0010-warn-for-uses-of-system-directories-when-cross-linki.patch \
-     file://0011-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch \
-     file://0012-Change-default-emulation-for-mips64-linux.patch \
-     file://0013-Add-support-for-Netlogic-XLP.patch \
-     file://0014-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch \
-     file://0015-sync-with-OE-libtool-changes.patch \
-     file://0016-Detect-64-bit-MIPS-targets.patch \
-     file://CVE-2017-6965.patch \
-     file://CVE-2017-6966.patch \
-     file://0017-bfd-Improve-lookup-of-file-line-information-for-erro.patch \
-     file://0018-PR-21409-segfault-in-_bfd_dwarf2_find_nearest_line.patch \
-     file://CVE-2017-6969.patch \
-     file://CVE-2017-6969_2.patch \
-     file://CVE-2017-7209.patch \
-     file://CVE-2017-7210.patch \
-     file://CVE-2017-7223.patch \
-     file://CVE-2017-7614.patch \
-     file://CVE-2017-8393.patch \
-     file://CVE-2017-8394.patch \
-     file://CVE-2017-8395.patch \
-     file://CVE-2017-8396_8397.patch \
-     file://CVE-2017-8398.patch \
-     file://CVE-2017-8421.patch \
-     file://CVE-2017-9038_9044.patch \
-     file://CVE-2017-9039.patch \
-     file://CVE-2017-9040_9042.patch \
-     file://CVE-2017-9742.patch \
-     file://CVE-2017-9744.patch \
-     file://CVE-2017-9745.patch \
-     file://CVE-2017-9746.patch \
-     file://CVE-2017-9747.patch \
-     file://CVE-2017-9748.patch \
-     file://CVE-2017-9749.patch \
-     file://CVE-2017-9750.patch \
-     file://CVE-2017-9751.patch \
-     file://CVE-2017-9752.patch \
-     file://CVE-2017-9753.patch \
-     file://CVE-2017-9755.patch \
-     file://CVE-2017-9756.patch \
-     file://CVE-2017-9954.patch \
-     file://CVE-2017-9955_1.patch \
-     file://CVE-2017-9955_2.patch \
-     file://CVE-2017-9955_3.patch \
-     file://CVE-2017-9955_4.patch \
-     file://CVE-2017-9955_5.patch \
-     file://CVE-2017-9955_6.patch \
-     file://CVE-2017-9955_7.patch \
-     file://CVE-2017-9955_8.patch \
-     file://CVE-2017-9955_9.patch \
-"
-S  = "${WORKDIR}/git"
-
-do_configure_prepend () {
-        rm -rf ${S}/gdb ${S}/libdecnumber ${S}/readline ${S}/sim
-}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-2.29.1.inc b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-2.29.1.inc
new file mode 100644
index 0000000..07a72e2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-2.29.1.inc
@@ -0,0 +1,43 @@
+LIC_FILES_CHKSUM="\
+    file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552\
+    file://COPYING.LIB;md5=9f604d8a4f8e74f4f5140845a21b6674\
+    file://COPYING3;md5=d32239bcb673463ab874e80d47fae504\
+    file://COPYING3.LIB;md5=6a6a8e020838b23406c81b19c1d46df6\
+    file://gas/COPYING;md5=d32239bcb673463ab874e80d47fae504\
+    file://include/COPYING;md5=59530bdf33659b29e73d4adb9f9f6552\
+    file://include/COPYING3;md5=d32239bcb673463ab874e80d47fae504\
+    file://libiberty/COPYING.LIB;md5=a916467b91076e631dd8edb7424769c7\
+    file://bfd/COPYING;md5=d32239bcb673463ab874e80d47fae504\
+    "
+
+def binutils_branch_version(d):
+    pvsplit = d.getVar('PV').split('.')
+    return pvsplit[0] + "_" + pvsplit[1]
+
+BINUPV = "${@binutils_branch_version(d)}"
+
+UPSTREAM_CHECK_GITTAGREGEX = "binutils-(?P<pver>\d+_(\d_?)*)"
+
+SRCREV ?= "90276f15379d380761fc499da2ba24cfb3c12a94"
+BINUTILS_GIT_URI ?= "git://sourceware.org/git/binutils-gdb.git;branch=binutils-${BINUPV}-branch;protocol=git"
+SRC_URI = "\
+     ${BINUTILS_GIT_URI} \
+     file://0003-configure-widen-the-regexp-for-SH-architectures.patch \
+     file://0004-Point-scripts-location-to-libdir.patch \
+     file://0005-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch \
+     file://0006-Use-libtool-2.4.patch \
+     file://0007-Add-the-armv5e-architecture-to-binutils.patch \
+     file://0008-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch \
+     file://0009-warn-for-uses-of-system-directories-when-cross-linki.patch \
+     file://0010-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch \
+     file://0011-Change-default-emulation-for-mips64-linux.patch \
+     file://0012-Add-support-for-Netlogic-XLP.patch \
+     file://0013-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch \
+     file://0014-Detect-64-bit-MIPS-targets.patch \
+     file://0015-sync-with-OE-libtool-changes.patch \
+"
+S  = "${WORKDIR}/git"
+
+do_configure_prepend () {
+        rm -rf ${S}/gdb ${S}/libdecnumber ${S}/readline ${S}/sim
+}
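
binutils_branch_version() above simply joins the first two PV components with an underscore, and BINUPV splices the result into the upstream branch name used by BINUTILS_GIT_URI. The same logic outside the datastore (the real function reads PV via d.getVar):

    def binutils_branch_version(pv):
        pvsplit = pv.split('.')
        return pvsplit[0] + "_" + pvsplit[1]

    pv = "2.29.1"
    print(binutils_branch_version(pv))                          # -> "2_29"
    print("binutils-%s-branch" % binutils_branch_version(pv))   # -> "binutils-2_29-branch"
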
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-cross-canadian_2.28.bb b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-cross-canadian_2.29.1.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-cross-canadian_2.28.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-cross-canadian_2.29.1.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-cross_2.28.bb b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-cross_2.29.1.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-cross_2.28.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-cross_2.29.1.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-crosssdk_2.28.bb b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-crosssdk_2.29.1.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-crosssdk_2.28.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils-crosssdk_2.29.1.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0001-binutils-crosssdk-Generate-relocatable-SDKs.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0001-binutils-crosssdk-Generate-relocatable-SDKs.patch
index 8fb1b4e..0b515d8 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0001-binutils-crosssdk-Generate-relocatable-SDKs.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0001-binutils-crosssdk-Generate-relocatable-SDKs.patch
@@ -1,4 +1,4 @@
-From 689d011688b5ff9481d4367bef3dea7a7b2867fb Mon Sep 17 00:00:00 2001
+From 58ac9f95a3d83c29efaf7a8906fb6aefea8c8e79 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Mon, 2 Mar 2015 01:58:54 +0000
 Subject: [PATCH 01/15] binutils-crosssdk: Generate relocatable SDKs
@@ -43,7 +43,7 @@
  LD_FLAG=
  DATA_ALIGNMENT=${DATA_ALIGNMENT_}
 diff --git a/ld/scripttempl/elf.sc b/ld/scripttempl/elf.sc
-index e65f9a3ccf..d99d2c1d2a 100644
+index d9138bc059..e48faeca43 100644
 --- a/ld/scripttempl/elf.sc
 +++ b/ld/scripttempl/elf.sc
 @@ -138,8 +138,8 @@ if test -z "$DATA_SEGMENT_ALIGN"; then
@@ -58,5 +58,5 @@
  if test -z "$PLT"; then
    IPLT=".iplt         ${RELOCATING-0} : { *(.iplt) }"
 -- 
-2.12.0
+2.14.0
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0002-binutils-cross-Do-not-generate-linker-script-directo.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0002-binutils-cross-Do-not-generate-linker-script-directo.patch
index 14299fd..370333d 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0002-binutils-cross-Do-not-generate-linker-script-directo.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0002-binutils-cross-Do-not-generate-linker-script-directo.patch
@@ -1,4 +1,4 @@
-From 7c7de107b4b0a507d2aeca3e3a86d01cb4b51360 Mon Sep 17 00:00:00 2001
+From 8f929c616208351d0971d7dfd7574d48d3144603 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Mon, 6 Mar 2017 23:37:05 -0800
 Subject: [PATCH 02/15] binutils-cross: Do not generate linker script
@@ -57,5 +57,5 @@
    libs=${NATIVE_LIB_DIRS}
    if [ "x${NATIVE}" = "xyes" ] ; then
 -- 
-2.12.0
+2.14.0
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0003-configure-widen-the-regexp-for-SH-architectures.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0003-configure-widen-the-regexp-for-SH-architectures.patch
new file mode 100644
index 0000000..b6c09cc
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0003-configure-widen-the-regexp-for-SH-architectures.patch
@@ -0,0 +1,47 @@
+From e5a806aae02a10290c71deb72f6294c98068368d Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 2 Mar 2015 01:07:33 +0000
+Subject: [PATCH 03/15] configure: widen the regexp for SH architectures
+
+gprof needs to know about uclibc
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ configure       | 2 +-
+ gprof/configure | 5 +++++
+ 2 files changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/configure b/configure
+index be9dd89d9b..d8af155ab5 100755
+--- a/configure
++++ b/configure
+@@ -3844,7 +3844,7 @@ case "${target}" in
+   or1k*-*-*)
+     noconfigdirs="$noconfigdirs gdb"
+     ;;
+-  sh-*-*)
++  sh*-*-* | sh64-*-*)
+     case "${target}" in
+       sh*-*-elf)
+          ;;
+diff --git a/gprof/configure b/gprof/configure
+index e71fe8b9e4..679e0dce77 100755
+--- a/gprof/configure
++++ b/gprof/configure
+@@ -5874,6 +5874,11 @@ linux* | k*bsd*-gnu | kopensolaris*-gnu)
+   lt_cv_deplibs_check_method=pass_all
+   ;;
+ 
++linux-uclibc*)
++  lt_cv_deplibs_check_method=pass_all
++  lt_cv_file_magic_test_file=`echo /lib/libuClibc-*.so`
++  ;;
++
+ netbsd*)
+   if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then
+     lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$'
+-- 
+2.14.0
+
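
The configure hunk above widens the case pattern from sh-*-* to sh*-*-* | sh64-*-* so that triples such as sh4-poky-linux are matched as well. Shell case patterns behave like globs, which a quick fnmatch illustration makes concrete (the target triples are just examples):

    from fnmatch import fnmatch

    for target in ["sh-unknown-elf", "sh4-poky-linux", "sh64-unknown-linux-gnu"]:
        old = fnmatch(target, "sh-*-*")
        new = fnmatch(target, "sh*-*-*") or fnmatch(target, "sh64-*-*")
        print(target, "old:", old, "new:", new)
    # only the widened pattern matches sh4-poky-linux and sh64-unknown-linux-gnu
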
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0003-gprof-add-uclibc-support-to-configure.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0003-gprof-add-uclibc-support-to-configure.patch
deleted file mode 100644
index eddb42b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0003-gprof-add-uclibc-support-to-configure.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-From 7893d2b24d0303bda3a0049846489619ffd1387b Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 2 Mar 2015 01:07:33 +0000
-Subject: [PATCH 03/15] gprof: add uclibc support to configure
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gprof/configure | 5 +++++
- 1 file changed, 5 insertions(+)
-
-diff --git a/gprof/configure b/gprof/configure
-index 9e6b8f3525..38a4c0b0e5 100755
---- a/gprof/configure
-+++ b/gprof/configure
-@@ -5874,6 +5874,11 @@ linux* | k*bsd*-gnu | kopensolaris*-gnu)
-   lt_cv_deplibs_check_method=pass_all
-   ;;
- 
-+linux-uclibc*)
-+  lt_cv_deplibs_check_method=pass_all
-+  lt_cv_file_magic_test_file=`echo /lib/libuClibc-*.so`
-+  ;;
-+
- netbsd*)
-   if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then
-     lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$'
--- 
-2.12.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0004-Point-scripts-location-to-libdir.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0004-Point-scripts-location-to-libdir.patch
index c6b9de7..38eee30 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0004-Point-scripts-location-to-libdir.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0004-Point-scripts-location-to-libdir.patch
@@ -1,4 +1,4 @@
-From e34650c50574a8a39d694567ed607a63006b6f99 Mon Sep 17 00:00:00 2001
+From 3634ec3547bc0f8a5d1b8ad15365e2f836cda642 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Mon, 2 Mar 2015 01:09:58 +0000
 Subject: [PATCH 04/15] Point scripts location to libdir
@@ -12,7 +12,7 @@
  2 files changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/ld/Makefile.am b/ld/Makefile.am
-index 15beaa7021..bbf9c671d8 100644
+index 625347ff62..d5334d2681 100644
 --- a/ld/Makefile.am
 +++ b/ld/Makefile.am
 @@ -57,7 +57,7 @@ endif
@@ -25,10 +25,10 @@
  EMUL = @EMUL@
  EMULATION_OFILES = @EMULATION_OFILES@
 diff --git a/ld/Makefile.in b/ld/Makefile.in
-index 042b690ed6..37e7b25e9a 100644
+index ba251777b0..a2cf2282b5 100644
 --- a/ld/Makefile.in
 +++ b/ld/Makefile.in
-@@ -452,7 +452,7 @@ AM_CFLAGS = $(WARN_CFLAGS) $(ELF_CLFAGS)
+@@ -446,7 +446,7 @@ AM_CFLAGS = $(WARN_CFLAGS) $(ELF_CLFAGS)
  # We put the scripts in the directory $(scriptdir)/ldscripts.
  # We can't put the scripts in $(datadir) because the SEARCH_DIR
  # directives need to be different for native and cross linkers.
@@ -38,5 +38,5 @@
  BFDDIR = $(BASEDIR)/bfd
  INCDIR = $(BASEDIR)/include
 -- 
-2.12.0
+2.14.0
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0005-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0005-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch
index 726f702..59150a2 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0005-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0005-Only-generate-an-RPATH-entry-if-LD_RUN_PATH-is-not-e.patch
@@ -1,4 +1,4 @@
-From 42292f5533bca904f230a8e03ceee1f84ef0c4ec Mon Sep 17 00:00:00 2001
+From 9d37c8f68c07da63186cb993f1221f6c11eca422 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Mon, 2 Mar 2015 01:27:17 +0000
 Subject: [PATCH 05/15] Only generate an RPATH entry if LD_RUN_PATH is not
@@ -15,19 +15,19 @@
  1 file changed, 4 insertions(+)
 
 diff --git a/ld/emultempl/elf32.em b/ld/emultempl/elf32.em
-index 84adaef6df..ab8c74257e 100644
+index 9ac1840316..9dc4c149bc 100644
 --- a/ld/emultempl/elf32.em
 +++ b/ld/emultempl/elf32.em
-@@ -1411,6 +1411,8 @@ fragment <<EOF
+@@ -1463,6 +1463,8 @@ fragment <<EOF
  	      && command_line.rpath == NULL)
  	    {
- 	      lib_path = (const char *) getenv ("LD_RUN_PATH");
-+	      if ((lib_path) && (strlen (lib_path) == 0))
-+		  lib_path = NULL;
- 	      if (gld${EMULATION_NAME}_search_needed (lib_path, &n,
- 						      force))
+ 	      path = (const char *) getenv ("LD_RUN_PATH");
++	      if ((path) && (strlen (path) == 0))
++		  path = NULL;
+ 	      if (path
+ 		  && gld${EMULATION_NAME}_search_needed (path, &n, force))
  		break;
-@@ -1692,6 +1694,8 @@ gld${EMULATION_NAME}_before_allocation (void)
+@@ -1740,6 +1742,8 @@ gld${EMULATION_NAME}_before_allocation (void)
    rpath = command_line.rpath;
    if (rpath == NULL)
      rpath = (const char *) getenv ("LD_RUN_PATH");
@@ -37,5 +37,5 @@
    for (abfd = link_info.input_bfds; abfd; abfd = abfd->link.next)
      if (bfd_get_flavour (abfd) == bfd_target_elf_flavour)
 -- 
-2.12.0
+2.14.0
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0006-Explicitly-link-with-libm-on-uclibc.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0006-Explicitly-link-with-libm-on-uclibc.patch
deleted file mode 100644
index 9770ca7..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0006-Explicitly-link-with-libm-on-uclibc.patch
+++ /dev/null
@@ -1,52 +0,0 @@
-From 6a46bf151d7e53df8b5e7645a2d241967688368a Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 2 Mar 2015 01:32:49 +0000
-Subject: [PATCH 06/15] Explicitly link with libm on uclibc
-
-Description:
-
-We do not need to have the libtool patch anymore for binutils after
-libtool has been updated upstream it include support for it. However
-for building gas natively on uclibc systems we have to link it with
--lm so that it picks up missing symbols.
-
-/local/build_area/BUILD/arm_v5t_le_uclibc/binutils-2.17.50/objdir/libiberty/pic/libiberty.a(floatformat.o):
-In function `floatformat_from_double':
-floatformat.c:(.text+0x1ec): undefined reference to `frexp'
-floatformat.c:(.text+0x2f8): undefined reference to `ldexp'
-/local/build_area/BUILD/arm_v5t_le_uclibc/binutils-2.17.50/objdir/libiberty/pic/libiberty.a(floatformat.o):
-In function `floatformat_to_double':
-floatformat.c:(.text+0x38a): undefined reference to `ldexp'
-floatformat.c:(.text+0x3d2): undefined reference to `ldexp'
-floatformat.c:(.text+0x43e): undefined reference to `ldexp'
-floatformat.c:(.text+0x4e2): undefined reference to `ldexp'
-collect2: ld returned 1 exit status
-make[4]: *** [as-new] Error 1
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gas/configure.tgt | 6 ++++++
- 1 file changed, 6 insertions(+)
-
-diff --git a/gas/configure.tgt b/gas/configure.tgt
-index 711d537e95..7cd2dc176a 100644
---- a/gas/configure.tgt
-+++ b/gas/configure.tgt
-@@ -494,6 +494,12 @@ case ${generic_target} in
-   *-*-netware)				fmt=elf em=netware ;;
- esac
- 
-+case ${generic_target} in
-+  arm-*-*uclibc*)
-+    need_libm=yes
-+    ;;
-+esac
-+
- case ${cpu_type} in
-   aarch64 | alpha | arm | i386 | ia64 | microblaze | mips | ns32k | or1k | or1knd | pdp11 | ppc | riscv | sparc | z80 | z8k)
-     bfd_gas=yes
--- 
-2.12.0
-
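
The patch removed above existed because libiberty's floatformat code calls frexp() and ldexp(), which some C libraries (uclibc among them) provide only in libm, so a native gas build there had to link with -lm explicitly. The short sketch below is not part of any imported patch; it is a hypothetical reproduction of that dependency, assuming a toolchain where libm is not pulled in implicitly (build it once with and once without -lm to see the undefined-reference failure quoted in the dropped patch description).

    /* frexp_ldexp_demo.c - illustrates the libm dependency behind the
       removed need_libm=yes case for arm-*-*uclibc* in gas/configure.tgt.
       Build: cc frexp_ldexp_demo.c -lm */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        int exponent = 0;
        double mantissa = frexp(6.75, &exponent);  /* 6.75 == 0.84375 * 2^3 */
        double rebuilt  = ldexp(mantissa, exponent);

        printf("mantissa=%f exponent=%d rebuilt=%f\n",
               mantissa, exponent, rebuilt);
        return 0;
    }
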
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0006-Use-libtool-2.4.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0006-Use-libtool-2.4.patch
new file mode 100644
index 0000000..e87efe6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0006-Use-libtool-2.4.patch
@@ -0,0 +1,21157 @@
+From 71c734bb3754319029dcfc898cedbade42274dcb Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 14 Feb 2016 17:04:07 +0000
+Subject: [PATCH 06/15] Use libtool 2.4
+
+get libtool sysroot support
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ bfd/configure        | 1318 +++++++++++++++++------
+ bfd/configure.ac     |    2 +-
+ binutils/configure   | 1316 +++++++++++++++++------
+ configure            |    2 +-
+ gas/configure        | 1314 +++++++++++++++++------
+ gprof/configure      | 1321 +++++++++++++++++------
+ ld/configure         | 1691 +++++++++++++++++++++--------
+ libtool.m4           | 1080 +++++++++++++------
+ ltmain.sh            | 2925 +++++++++++++++++++++++++++++++++-----------------
+ ltoptions.m4         |    2 +-
+ ltversion.m4         |   12 +-
+ lt~obsolete.m4       |    2 +-
+ opcodes/configure    | 1318 +++++++++++++++++------
+ opcodes/configure.ac |    2 +-
+ zlib/configure       | 1316 +++++++++++++++++------
+ 15 files changed, 9927 insertions(+), 3694 deletions(-)
+
+diff --git a/bfd/configure b/bfd/configure
+index 48276594ed..3ece8943f3 100755
+--- a/bfd/configure
++++ b/bfd/configure
+@@ -672,6 +672,9 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
++ac_ct_AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -785,6 +788,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_plugins
+ enable_largefile
+@@ -1461,6 +1465,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+   --with-mmap             try using mmap for BFD input files if available
+   --with-separate-debug-dir=DIR
+                           Look for global separate debug info in DIR
+@@ -5393,8 +5399,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -5434,7 +5440,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -6120,8 +6126,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -6170,6 +6176,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if test "${lt_cv_to_host_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if test "${lt_cv_ld_reload_flag+set}" = set; then :
+@@ -6186,6 +6266,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -6354,7 +6439,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -6508,6 +6594,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -6521,11 +6622,164 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
+ 
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_AR+set}" = set; then :
+@@ -6541,7 +6795,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6561,11 +6815,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
+@@ -6581,7 +6839,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6600,6 +6858,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -6611,16 +6873,72 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if test "${lt_cv_ar_at_file+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
+ 
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
+ 
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+ 
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
+ 
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
+ 
+ 
+ 
+@@ -6962,8 +7280,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -6999,6 +7317,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -7040,6 +7359,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -7051,7 +7382,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -7077,8 +7408,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -7088,8 +7419,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -7126,6 +7457,16 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
+ 
+ 
+ 
+@@ -7142,6 +7483,45 @@ fi
+ 
+ 
+ 
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -7353,6 +7733,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if test "${lt_cv_path_mainfest_tool+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -7916,6 +8413,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -8080,7 +8579,8 @@ fi
+ LIBTOOL_DEPS="$ltmain"
+ 
+ # Always use our own libtool.
+-LIBTOOL='$(SHELL) $(top_builddir)/libtool'
++LIBTOOL='$(SHELL) $(top_builddir)'
++LIBTOOL="$LIBTOOL/${host_alias}-libtool"
+ 
+ 
+ 
+@@ -8169,7 +8669,7 @@ aix3*)
+ esac
+ 
+ # Global variables:
+-ofile=libtool
++ofile=${host_alias}-libtool
+ can_build_shared=yes
+ 
+ # All known linkers require a `.a' archive for static linking (except MSVC,
+@@ -8467,8 +8967,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -8634,6 +9132,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -8696,7 +9200,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8753,13 +9257,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if test "${lt_cv_prog_compiler_pic+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8820,6 +9328,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -9170,7 +9683,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -9269,12 +9783,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -9288,8 +9802,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -9307,8 +9821,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9354,8 +9868,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9485,7 +9999,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9498,22 +10018,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -9525,7 +10052,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9538,22 +10071,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -9598,20 +10138,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -9672,7 +10255,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -9680,7 +10263,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -9696,7 +10279,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9720,10 +10303,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -9802,23 +10385,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if test "${lt_cv_irix_exported_symbol+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -9903,7 +10499,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -9922,9 +10518,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -10500,8 +11096,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -10534,13 +11131,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -10632,7 +11287,7 @@ haiku*)
+   soname_spec='${libname}${release}${shared_ext}$major'
+   shlibpath_var=LIBRARY_PATH
+   shlibpath_overrides_runpath=yes
+-  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
++  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
+   hardcode_into_libs=yes
+   ;;
+ 
+@@ -11472,10 +12127,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11578,10 +12233,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -14172,7 +14827,7 @@ SHARED_LDFLAGS=
+ if test "$enable_shared" = "yes"; then
+   x=`sed -n -e 's/^[ 	]*PICFLAG[ 	]*=[ 	]*//p' < ../libiberty/Makefile | sed -n '$p'`
+   if test -n "$x"; then
+-    SHARED_LIBADD="-L`pwd`/../libiberty/pic -liberty"
++    SHARED_LIBADD="`pwd`/../libiberty/pic/libiberty.a"
+   fi
+ 
+ # More hacks to build DLLs on Windows.
+@@ -16879,13 +17534,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -16900,14 +17562,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -16940,12 +17605,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -17000,8 +17665,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -17011,12 +17681,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -17032,7 +17704,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -17068,6 +17739,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -17847,7 +18519,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -17950,19 +18623,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -17992,6 +18688,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -18001,6 +18703,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -18115,12 +18820,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -18207,9 +18912,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -18225,6 +18927,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -18257,210 +18962,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+diff --git a/bfd/configure.ac b/bfd/configure.ac
+index 8fd03a7ea8..8816c3286c 100644
+--- a/bfd/configure.ac
++++ b/bfd/configure.ac
+@@ -254,7 +254,7 @@ changequote(,)dnl
+   x=`sed -n -e 's/^[ 	]*PICFLAG[ 	]*=[ 	]*//p' < ../libiberty/Makefile | sed -n '$p'`
+ changequote([,])dnl
+   if test -n "$x"; then
+-    SHARED_LIBADD="-L`pwd`/../libiberty/pic -liberty"
++    SHARED_LIBADD="`pwd`/../libiberty/pic/libiberty.a"
+   fi
+ 
+ # More hacks to build DLLs on Windows.
+diff --git a/binutils/configure b/binutils/configure
+index 22e1b1736e..321b63535b 100755
+--- a/binutils/configure
++++ b/binutils/configure
+@@ -659,8 +659,11 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
+ RANLIB
++ac_ct_AR
+ AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -772,6 +775,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_plugins
+ enable_largefile
+@@ -1444,6 +1448,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+   --with-system-zlib      use installed libz
+   --with-gnu-ld           assume the C compiler uses GNU ld default=no
+   --with-libiconv-prefix[=DIR]  search for libiconv in DIR/include and DIR/lib
+@@ -5160,8 +5166,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -5201,7 +5207,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -5887,8 +5893,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -5937,6 +5943,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if test "${lt_cv_to_host_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if test "${lt_cv_ld_reload_flag+set}" = set; then :
+@@ -5953,6 +6033,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -6121,7 +6206,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -6275,6 +6361,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -6290,9 +6391,162 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_AR+set}" = set; then :
+@@ -6308,7 +6562,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6328,11 +6582,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
+@@ -6348,7 +6606,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6367,6 +6625,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -6378,12 +6640,10 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++: ${AR=ar}
++: ${AR_FLAGS=cru}
+ 
+ 
+ 
+@@ -6395,6 +6655,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if test "${lt_cv_ar_at_file+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++
++
++
++
++
++
++
+ if test -n "$ac_tool_prefix"; then
+   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+ set dummy ${ac_tool_prefix}strip; ac_word=$2
+@@ -6729,8 +7047,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -6766,6 +7084,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -6807,6 +7126,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -6818,7 +7149,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -6844,8 +7175,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -6855,8 +7186,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -6893,6 +7224,21 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
++
++
++
++
++
+ 
+ 
+ 
+@@ -6908,6 +7254,40 @@ fi
+ 
+ 
+ 
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -7120,6 +7500,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if test "${lt_cv_path_mainfest_tool+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -7683,6 +8180,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -7878,7 +8377,8 @@ fi
+ LIBTOOL_DEPS="$ltmain"
+ 
+ # Always use our own libtool.
+-LIBTOOL='$(SHELL) $(top_builddir)/libtool'
++LIBTOOL='$(SHELL) $(top_builddir)'
++LIBTOOL="$LIBTOOL/${host_alias}-libtool"
+ 
+ 
+ 
+@@ -7967,7 +8467,7 @@ aix3*)
+ esac
+ 
+ # Global variables:
+-ofile=libtool
++ofile=${host_alias}-libtool
+ can_build_shared=yes
+ 
+ # All known linkers require a `.a' archive for static linking (except MSVC,
+@@ -8265,8 +8765,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -8432,6 +8930,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -8494,7 +8998,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8551,13 +9055,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if test "${lt_cv_prog_compiler_pic+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8618,6 +9126,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -8968,7 +9481,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -9067,12 +9581,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -9086,8 +9600,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -9105,8 +9619,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9152,8 +9666,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9283,7 +9797,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9296,22 +9816,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -9323,7 +9850,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9336,22 +9869,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -9396,20 +9936,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -9470,7 +10053,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -9478,7 +10061,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -9494,7 +10077,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9518,10 +10101,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -9600,23 +10183,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if test "${lt_cv_irix_exported_symbol+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -9701,7 +10297,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -9720,9 +10316,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -10298,8 +10894,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -10332,13 +10929,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -10430,7 +11085,7 @@ haiku*)
+   soname_spec='${libname}${release}${shared_ext}$major'
+   shlibpath_var=LIBRARY_PATH
+   shlibpath_overrides_runpath=yes
+-  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
++  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
+   hardcode_into_libs=yes
+   ;;
+ 
+@@ -11270,10 +11925,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11376,10 +12031,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -15446,13 +16101,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -15467,14 +16129,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -15507,12 +16172,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -15567,8 +16232,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -15578,12 +16248,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -15599,7 +16271,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -15635,6 +16306,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -16392,7 +17064,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -16495,19 +17168,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -16537,6 +17233,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -16546,6 +17248,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -16660,12 +17365,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -16752,9 +17457,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -16770,6 +17472,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -16802,210 +17507,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+diff --git a/configure b/configure
+index d8af155ab5..005ed827ab 100755
+--- a/configure
++++ b/configure
+@@ -3844,7 +3844,7 @@ case "${target}" in
+   or1k*-*-*)
+     noconfigdirs="$noconfigdirs gdb"
+     ;;
+-  sh*-*-* | sh64-*-*)
++  sh-*-* | sh64-*-*)
+     case "${target}" in
+       sh*-*-elf)
+          ;;
+diff --git a/gas/configure b/gas/configure
+index 93afb20c8f..81dd4cbd97 100755
+--- a/gas/configure
++++ b/gas/configure
+@@ -650,8 +650,11 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
+ RANLIB
++ac_ct_AR
+ AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -763,6 +766,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_plugins
+ enable_largefile
+@@ -4921,8 +4925,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -4962,7 +4966,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -5648,8 +5652,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -5698,6 +5702,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if test "${lt_cv_to_host_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if test "${lt_cv_ld_reload_flag+set}" = set; then :
+@@ -5714,6 +5792,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -5882,7 +5965,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -6036,6 +6120,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -6051,9 +6150,162 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_AR+set}" = set; then :
+@@ -6069,7 +6321,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6089,11 +6341,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
+@@ -6109,7 +6365,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6128,6 +6384,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -6139,12 +6399,10 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++: ${AR=ar}
++: ${AR_FLAGS=cru}
+ 
+ 
+ 
+@@ -6156,6 +6414,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if test "${lt_cv_ar_at_file+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++
++
++
++
++
++
++
+ if test -n "$ac_tool_prefix"; then
+   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+ set dummy ${ac_tool_prefix}strip; ac_word=$2
+@@ -6490,8 +6806,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -6527,6 +6843,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -6568,6 +6885,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -6579,7 +6908,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -6605,8 +6934,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -6616,8 +6945,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -6654,6 +6983,21 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
++
++
++
++
++
+ 
+ 
+ 
+@@ -6669,6 +7013,40 @@ fi
+ 
+ 
+ 
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -6881,6 +7259,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if test "${lt_cv_path_mainfest_tool+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -7444,6 +7939,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -7639,7 +8136,8 @@ fi
+ LIBTOOL_DEPS="$ltmain"
+ 
+ # Always use our own libtool.
+-LIBTOOL='$(SHELL) $(top_builddir)/libtool'
++LIBTOOL='$(SHELL) $(top_builddir)'
++LIBTOOL="$LIBTOOL/${host_alias}-libtool"
+ 
+ 
+ 
+@@ -7728,7 +8226,7 @@ aix3*)
+ esac
+ 
+ # Global variables:
+-ofile=libtool
++ofile=${host_alias}-libtool
+ can_build_shared=yes
+ 
+ # All known linkers require a `.a' archive for static linking (except MSVC,
+@@ -8026,8 +8524,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -8193,6 +8689,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -8255,7 +8757,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8312,13 +8814,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if test "${lt_cv_prog_compiler_pic+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8379,6 +8885,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -8729,7 +9240,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -8828,12 +9340,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -8847,8 +9359,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -8866,8 +9378,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8913,8 +9425,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9044,7 +9556,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9057,22 +9575,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -9084,7 +9609,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9097,22 +9628,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -9157,20 +9695,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -9231,7 +9812,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -9239,7 +9820,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -9255,7 +9836,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9279,10 +9860,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -9361,23 +9942,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if test "${lt_cv_irix_exported_symbol+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -9462,7 +10056,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -9481,9 +10075,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -10059,8 +10653,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -10093,13 +10688,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -10191,7 +10844,7 @@ haiku*)
+   soname_spec='${libname}${release}${shared_ext}$major'
+   shlibpath_var=LIBRARY_PATH
+   shlibpath_overrides_runpath=yes
+-  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
++  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
+   hardcode_into_libs=yes
+   ;;
+ 
+@@ -11031,10 +11684,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11137,10 +11790,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -15436,13 +16089,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -15457,14 +16117,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -15497,12 +16160,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -15557,8 +16220,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -15568,12 +16236,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -15589,7 +16259,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -15625,6 +16294,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -16389,7 +17059,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -16492,19 +17163,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -16534,6 +17228,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -16543,6 +17243,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -16657,12 +17360,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -16749,9 +17452,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -16767,6 +17467,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -16799,210 +17502,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+diff --git a/gprof/configure b/gprof/configure
+index 679e0dce77..ac4c016a63 100755
+--- a/gprof/configure
++++ b/gprof/configure
+@@ -631,8 +631,11 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
+ RANLIB
++ac_ct_AR
+ AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -744,6 +747,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_plugins
+ enable_largefile
+@@ -1402,6 +1406,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+ 
+ Some influential environment variables:
+   CC          C compiler command
+@@ -4836,8 +4842,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -4877,7 +4883,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -5563,8 +5569,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -5613,6 +5619,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if test "${lt_cv_to_host_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if test "${lt_cv_ld_reload_flag+set}" = set; then :
+@@ -5629,6 +5709,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -5797,7 +5882,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -5874,11 +5960,6 @@ linux* | k*bsd*-gnu | kopensolaris*-gnu)
+   lt_cv_deplibs_check_method=pass_all
+   ;;
+ 
+-linux-uclibc*)
+-  lt_cv_deplibs_check_method=pass_all
+-  lt_cv_file_magic_test_file=`echo /lib/libuClibc-*.so`
+-  ;;
+-
+ netbsd*)
+   if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then
+     lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$'
+@@ -5956,6 +6037,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -5971,9 +6067,162 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_AR+set}" = set; then :
+@@ -5989,7 +6238,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6009,11 +6258,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
+@@ -6029,7 +6282,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6048,6 +6301,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -6059,12 +6316,10 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++: ${AR=ar}
++: ${AR_FLAGS=cru}
+ 
+ 
+ 
+@@ -6076,6 +6331,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if test "${lt_cv_ar_at_file+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++
++
++
++
++
++
++
+ if test -n "$ac_tool_prefix"; then
+   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
+ set dummy ${ac_tool_prefix}strip; ac_word=$2
+@@ -6410,8 +6723,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -6447,6 +6760,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -6488,6 +6802,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -6499,7 +6825,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -6525,8 +6851,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -6536,8 +6862,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -6574,6 +6900,18 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
++
++
+ 
+ 
+ 
+@@ -6590,6 +6928,43 @@ fi
+ 
+ 
+ 
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -6801,6 +7176,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if test "${lt_cv_path_mainfest_tool+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -7364,6 +7856,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -7559,7 +8053,8 @@ fi
+ LIBTOOL_DEPS="$ltmain"
+ 
+ # Always use our own libtool.
+-LIBTOOL='$(SHELL) $(top_builddir)/libtool'
++LIBTOOL='$(SHELL) $(top_builddir)'
++LIBTOOL="$LIBTOOL/${host_alias}-libtool"
+ 
+ 
+ 
+@@ -7648,7 +8143,7 @@ aix3*)
+ esac
+ 
+ # Global variables:
+-ofile=libtool
++ofile=${host_alias}-libtool
+ can_build_shared=yes
+ 
+ # All known linkers require a `.a' archive for static linking (except MSVC,
+@@ -7946,8 +8441,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -8113,6 +8606,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -8175,7 +8674,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8232,13 +8731,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if test "${lt_cv_prog_compiler_pic+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8299,6 +8802,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -8649,7 +9157,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -8748,12 +9257,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -8767,8 +9276,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -8786,8 +9295,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8833,8 +9342,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8964,7 +9473,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -8977,22 +9492,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -9004,7 +9526,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9017,22 +9545,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -9077,20 +9612,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -9151,7 +9729,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -9159,7 +9737,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -9175,7 +9753,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9199,10 +9777,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -9281,23 +9859,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if test "${lt_cv_irix_exported_symbol+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -9382,7 +9973,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -9401,9 +9992,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -9979,8 +10570,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -10013,13 +10605,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -10111,7 +10761,7 @@ haiku*)
+   soname_spec='${libname}${release}${shared_ext}$major'
+   shlibpath_var=LIBRARY_PATH
+   shlibpath_overrides_runpath=yes
+-  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
++  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
+   hardcode_into_libs=yes
+   ;;
+ 
+@@ -10951,10 +11601,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11057,10 +11707,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -13005,13 +13655,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -13026,14 +13683,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -13066,12 +13726,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -13126,8 +13786,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -13137,12 +13802,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -13158,7 +13825,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -13194,6 +13860,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -13950,7 +14617,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -14053,19 +14721,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -14095,6 +14786,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -14104,6 +14801,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -14218,12 +14918,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -14310,9 +15010,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -14328,6 +15025,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -14360,210 +15060,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+diff --git a/ld/configure b/ld/configure
+index d7f66f8cdc..4e71511bd1 100755
+--- a/ld/configure
++++ b/ld/configure
+@@ -655,8 +655,11 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
+ RANLIB
++ac_ct_AR
+ AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -778,6 +781,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_plugins
+ enable_largefile
+@@ -1464,6 +1468,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+   --with-lib-path=dir1:dir2...  set default LIB_PATH
+   --with-sysroot=DIR Search for usr/lib et al within DIR.
+ 
+@@ -5658,8 +5664,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -5699,7 +5705,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -6385,8 +6391,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -6435,6 +6441,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if test "${lt_cv_to_host_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if test "${lt_cv_ld_reload_flag+set}" = set; then :
+@@ -6451,6 +6531,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -6619,7 +6704,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -6773,6 +6859,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -6788,9 +6889,162 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_AR+set}" = set; then :
+@@ -6806,7 +7060,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6826,11 +7080,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
+@@ -6846,7 +7104,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6865,6 +7123,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -6876,12 +7138,12 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++
++
+ 
+ 
+ 
+@@ -6891,6 +7153,62 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if test "${lt_cv_ar_at_file+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
++
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++
++
++
++
++
+ 
+ 
+ if test -n "$ac_tool_prefix"; then
+@@ -7227,8 +7545,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -7264,6 +7582,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -7305,6 +7624,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -7316,7 +7647,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -7342,8 +7673,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -7353,8 +7684,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -7391,6 +7722,19 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
++
++
++
+ 
+ 
+ 
+@@ -7404,6 +7748,42 @@ fi
+ 
+ 
+ 
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -7618,6 +7998,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if test "${lt_cv_path_mainfest_tool+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -8181,6 +8678,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -8249,6 +8748,16 @@ done
+ 
+ 
+ 
++func_stripname_cnf ()
++{
++  case ${2} in
++  .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
++  *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
++  esac
++} # func_stripname_cnf
++
++
++
+ 
+ 
+ # Set options
+@@ -8377,7 +8886,8 @@ fi
+ LIBTOOL_DEPS="$ltmain"
+ 
+ # Always use our own libtool.
+-LIBTOOL='$(SHELL) $(top_builddir)/libtool'
++LIBTOOL='$(SHELL) $(top_builddir)'
++LIBTOOL="$LIBTOOL/${host_alias}-libtool"
+ 
+ 
+ 
+@@ -8466,7 +8976,7 @@ aix3*)
+ esac
+ 
+ # Global variables:
+-ofile=libtool
++ofile=${host_alias}-libtool
+ can_build_shared=yes
+ 
+ # All known linkers require a `.a' archive for static linking (except MSVC,
+@@ -8764,8 +9274,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -8931,6 +9439,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -8993,7 +9507,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -9050,13 +9564,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if test "${lt_cv_prog_compiler_pic+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -9117,6 +9635,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -9467,7 +9990,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -9566,12 +10090,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -9585,8 +10109,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -9604,8 +10128,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9651,8 +10175,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9782,7 +10306,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9795,22 +10325,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -9822,7 +10359,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9835,22 +10378,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -9894,21 +10444,64 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # When not using gcc, we currently assume that we are using
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+-      # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      # no search path for DLLs.
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -9969,7 +10562,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -9977,7 +10570,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -9993,7 +10586,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -10017,10 +10610,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -10099,23 +10692,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if test "${lt_cv_irix_exported_symbol+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -10200,7 +10806,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -10219,9 +10825,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -10797,8 +11403,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -10831,13 +11438,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -10929,7 +11594,7 @@ haiku*)
+   soname_spec='${libname}${release}${shared_ext}$major'
+   shlibpath_var=LIBRARY_PATH
+   shlibpath_overrides_runpath=yes
+-  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
++  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
+   hardcode_into_libs=yes
+   ;;
+ 
+@@ -11769,10 +12434,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11875,10 +12540,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -12270,6 +12935,7 @@ $RM -r conftest*
+ 
+   # Allow CC to be a program name with arguments.
+   lt_save_CC=$CC
++  lt_save_CFLAGS=$CFLAGS
+   lt_save_LD=$LD
+   lt_save_GCC=$GCC
+   GCC=$GXX
+@@ -12287,6 +12953,7 @@ $RM -r conftest*
+   fi
+   test -z "${LDCXX+set}" || LD=$LDCXX
+   CC=${CXX-"c++"}
++  CFLAGS=$CXXFLAGS
+   compiler=$CC
+   compiler_CXX=$CC
+   for cc_temp in $compiler""; do
+@@ -12569,7 +13236,13 @@ $as_echo_n "checking whether the $compiler linker ($LD) supports shared librarie
+           allow_undefined_flag_CXX='-berok'
+           # Determine the default libpath from the value encoded in an empty
+           # executable.
+-          cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++          if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath__CXX+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -12582,22 +13255,29 @@ main ()
+ _ACEOF
+ if ac_fn_cxx_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath__CXX
++fi
+ 
+           hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 
+@@ -12610,7 +13290,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+           else
+ 	    # Determine the default libpath from the value encoded in an
+ 	    # empty executable.
+-	    cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	    if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath__CXX+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -12623,22 +13309,29 @@ main ()
+ _ACEOF
+ if ac_fn_cxx_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath__CXX"; then
++    lt_cv_aix_libpath__CXX="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath__CXX
++fi
+ 
+ 	    hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	    # Warning - without using the other run time loading flags,
+@@ -12681,29 +13374,75 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+         ;;
+ 
+       cygwin* | mingw* | pw32* | cegcc*)
+-        # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
+-        # as there is no search path for DLLs.
+-        hardcode_libdir_flag_spec_CXX='-L$libdir'
+-        export_dynamic_flag_spec_CXX='${wl}--export-all-symbols'
+-        allow_undefined_flag_CXX=unsupported
+-        always_export_symbols_CXX=no
+-        enable_shared_with_static_runtimes_CXX=yes
+-
+-        if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+-          archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+-          # If the export-symbols file already is a .def file (1st line
+-          # is EXPORTS), use it as is; otherwise, prepend...
+-          archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
+-	    cp $export_symbols $output_objdir/$soname.def;
+-          else
+-	    echo EXPORTS > $output_objdir/$soname.def;
+-	    cat $export_symbols >> $output_objdir/$soname.def;
+-          fi~
+-          $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+-        else
+-          ld_shlibs_CXX=no
+-        fi
+-        ;;
++	case $GXX,$cc_basename in
++	,cl* | no,cl*)
++	  # Native MSVC
++	  # hardcode_libdir_flag_spec is actually meaningless, as there is
++	  # no search path for DLLs.
++	  hardcode_libdir_flag_spec_CXX=' '
++	  allow_undefined_flag_CXX=unsupported
++	  always_export_symbols_CXX=yes
++	  file_list_spec_CXX='@'
++	  # Tell ltmain to make .lib files, not .a files.
++	  libext=lib
++	  # Tell ltmain to make .dll files, not .so files.
++	  shrext_cmds=".dll"
++	  # FIXME: Setting linknames here is a bad hack.
++	  archive_cmds_CXX='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	  archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	      $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	    else
++	      $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	    fi~
++	    $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	    linknames='
++	  # The linker will not automatically build a static lib if we build a DLL.
++	  # _LT_TAGVAR(old_archive_from_new_cmds, CXX)='true'
++	  enable_shared_with_static_runtimes_CXX=yes
++	  # Don't use ranlib
++	  old_postinstall_cmds_CXX='chmod 644 $oldlib'
++	  postlink_cmds_CXX='lt_outputfile="@OUTPUT@"~
++	    lt_tool_outputfile="@TOOL_OUTPUT@"~
++	    case $lt_outputfile in
++	      *.exe|*.EXE) ;;
++	      *)
++		lt_outputfile="$lt_outputfile.exe"
++		lt_tool_outputfile="$lt_tool_outputfile.exe"
++		;;
++	    esac~
++	    func_to_tool_file "$lt_outputfile"~
++	    if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	      $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	      $RM "$lt_outputfile.manifest";
++	    fi'
++	  ;;
++	*)
++	  # g++
++	  # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
++	  # as there is no search path for DLLs.
++	  hardcode_libdir_flag_spec_CXX='-L$libdir'
++	  export_dynamic_flag_spec_CXX='${wl}--export-all-symbols'
++	  allow_undefined_flag_CXX=unsupported
++	  always_export_symbols_CXX=no
++	  enable_shared_with_static_runtimes_CXX=yes
++
++	  if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
++	    archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
++	    # If the export-symbols file already is a .def file (1st line
++	    # is EXPORTS), use it as is; otherwise, prepend...
++	    archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	      cp $export_symbols $output_objdir/$soname.def;
++	    else
++	      echo EXPORTS > $output_objdir/$soname.def;
++	      cat $export_symbols >> $output_objdir/$soname.def;
++	    fi~
++	    $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
++	  else
++	    ld_shlibs_CXX=no
++	  fi
++	  ;;
++	esac
++	;;
+       darwin* | rhapsody*)
+ 
+ 
+@@ -12809,7 +13548,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+             ;;
+           *)
+             if test "$GXX" = yes; then
+-              archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++              archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+             else
+               # FIXME: insert proper C++ library support
+               ld_shlibs_CXX=no
+@@ -12880,10 +13619,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	          ia64*)
+-	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
++	            archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	          *)
+-	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
++	            archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	        esac
+ 	      fi
+@@ -12924,9 +13663,9 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+           *)
+ 	    if test "$GXX" = yes; then
+ 	      if test "$with_gnu_ld" = no; then
+-	        archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	      else
+-	        archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
++	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
+ 	      fi
+ 	    fi
+ 	    link_all_deplibs_CXX=yes
+@@ -12996,20 +13735,20 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	      prelink_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
+-		compile_command="$compile_command `find $tpldir -name \*.o | $NL2SP`"'
++		compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
+ 	      old_archive_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
+-		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | $NL2SP`~
++		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
+ 		$RANLIB $oldlib'
+ 	      archive_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
++		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+ 	      archive_expsym_cmds_CXX='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
++		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
+ 	      ;;
+ 	    *) # Version 6 and above use weak symbols
+ 	      archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+@@ -13204,7 +13943,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	          archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 		  ;;
+ 	        *)
+-	          archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	          archive_cmds_CXX='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 		  ;;
+ 	      esac
+ 
+@@ -13250,7 +13989,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+       solaris*)
+         case $cc_basename in
+-          CC*)
++          CC* | sunCC*)
+ 	    # Sun C++ 4.2, 5.x and Centerline C++
+             archive_cmds_need_lc_CXX=yes
+ 	    no_undefined_flag_CXX=' -zdefs'
+@@ -13291,9 +14030,9 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	    if test "$GXX" = yes && test "$with_gnu_ld" = no; then
+ 	      no_undefined_flag_CXX=' ${wl}-z ${wl}defs'
+ 	      if $CC --version | $GREP -v '^2\.7' > /dev/null; then
+-	        archive_cmds_CXX='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
++	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+ 	        archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-		  $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
++		  $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
+ 
+ 	        # Commands to make compiler produce verbose output that lists
+ 	        # what "hidden" libraries, object files and flags are used when
+@@ -13428,6 +14167,13 @@ private:
+ };
+ _LT_EOF
+ 
++
++_lt_libdeps_save_CFLAGS=$CFLAGS
++case "$CC $CFLAGS " in #(
++*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
++*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
++esac
++
+ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+   (eval $ac_compile) 2>&5
+   ac_status=$?
+@@ -13441,7 +14187,7 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+   pre_test_object_deps_done=no
+ 
+   for p in `eval "$output_verbose_link_cmd"`; do
+-    case $p in
++    case ${prev}${p} in
+ 
+     -L* | -R* | -l*)
+        # Some compilers place space between "-{L,R}" and the path.
+@@ -13450,13 +14196,22 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+           test $p = "-R"; then
+ 	 prev=$p
+ 	 continue
+-       else
+-	 prev=
+        fi
+ 
++       # Expand the sysroot to ease extracting the directories later.
++       if test -z "$prev"; then
++         case $p in
++         -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
++         -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
++         -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
++         esac
++       fi
++       case $p in
++       =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
++       esac
+        if test "$pre_test_object_deps_done" = no; then
+-	 case $p in
+-	 -L* | -R*)
++	 case ${prev} in
++	 -L | -R)
+ 	   # Internal compiler library paths should come after those
+ 	   # provided the user.  The postdeps already come after the
+ 	   # user supplied libs so there is no need to process them.
+@@ -13476,8 +14231,10 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
+ 	   postdeps_CXX="${postdeps_CXX} ${prev}${p}"
+ 	 fi
+        fi
++       prev=
+        ;;
+ 
++    *.lto.$objext) ;; # Ignore GCC LTO objects
+     *.$objext)
+        # This assumes that the test object file only shows up
+        # once in the compiler output.
+@@ -13513,6 +14270,7 @@ else
+ fi
+ 
+ $RM -f confest.$objext
++CFLAGS=$_lt_libdeps_save_CFLAGS
+ 
+ # PORTME: override above test on systems where it is broken
+ case $host_os in
+@@ -13548,7 +14306,7 @@ linux*)
+ 
+ solaris*)
+   case $cc_basename in
+-  CC*)
++  CC* | sunCC*)
+     # The more standards-conforming stlport4 library is
+     # incompatible with the Cstd library. Avoid specifying
+     # it if it's in CXXFLAGS. Ignore libCrun as
+@@ -13613,8 +14371,6 @@ fi
+ lt_prog_compiler_pic_CXX=
+ lt_prog_compiler_static_CXX=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   # C++ specific cases for pic, static, wl, etc.
+   if test "$GXX" = yes; then
+@@ -13719,6 +14475,11 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	  ;;
+ 	esac
+ 	;;
++      mingw* | cygwin* | os2* | pw32* | cegcc*)
++	# This hack is so that the source file can tell whether it is being
++	# built for inclusion in a dll (and should export symbols for example).
++	lt_prog_compiler_pic_CXX='-DDLL_EXPORT'
++	;;
+       dgux*)
+ 	case $cc_basename in
+ 	  ec++*)
+@@ -13871,7 +14632,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	;;
+       solaris*)
+ 	case $cc_basename in
+-	  CC*)
++	  CC* | sunCC*)
+ 	    # Sun C++ 4.2, 5.x and Centerline C++
+ 	    lt_prog_compiler_pic_CXX='-KPIC'
+ 	    lt_prog_compiler_static_CXX='-Bstatic'
+@@ -13936,10 +14697,17 @@ case $host_os in
+     lt_prog_compiler_pic_CXX="$lt_prog_compiler_pic_CXX -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic_CXX" >&5
+-$as_echo "$lt_prog_compiler_pic_CXX" >&6; }
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if test "${lt_cv_prog_compiler_pic_CXX+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic_CXX=$lt_prog_compiler_pic_CXX
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_CXX" >&5
++$as_echo "$lt_cv_prog_compiler_pic_CXX" >&6; }
++lt_prog_compiler_pic_CXX=$lt_cv_prog_compiler_pic_CXX
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -13997,6 +14765,8 @@ fi
+ 
+ 
+ 
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -14174,6 +14944,7 @@ fi
+ $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; }
+ 
+   export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
++  exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
+   case $host_os in
+   aix[4-9]*)
+     # If we're using GNU nm, then we don't want the "-C" option.
+@@ -14188,15 +14959,20 @@ $as_echo_n "checking whether the $compiler linker ($LD) supports shared librarie
+     ;;
+   pw32*)
+     export_symbols_cmds_CXX="$ltdll_cmds"
+-  ;;
++    ;;
+   cygwin* | mingw* | cegcc*)
+-    export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;/^.*[ ]__nm__/s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
+-  ;;
++    case $cc_basename in
++    cl*) ;;
++    *)
++      export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms_CXX='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
++      ;;
++    esac
++    ;;
+   *)
+     export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+-  ;;
++    ;;
+   esac
+-  exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
+ 
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs_CXX" >&5
+ $as_echo "$ld_shlibs_CXX" >&6; }
+@@ -14459,8 +15235,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -14492,13 +15269,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -14589,7 +15424,7 @@ haiku*)
+   soname_spec='${libname}${release}${shared_ext}$major'
+   shlibpath_var=LIBRARY_PATH
+   shlibpath_overrides_runpath=yes
+-  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
++  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
+   hardcode_into_libs=yes
+   ;;
+ 
+@@ -15048,6 +15883,7 @@ fi
+   fi # test -n "$compiler"
+ 
+   CC=$lt_save_CC
++  CFLAGS=$lt_save_CFLAGS
+   LDCXX=$LD
+   LD=$lt_save_LD
+   GCC=$lt_save_GCC
+@@ -18083,13 +18919,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -18104,14 +18947,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -18144,12 +18990,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -18188,8 +19034,8 @@ old_archive_cmds_CXX='`$ECHO "$old_archive_cmds_CXX" | $SED "$delay_single_quote
+ compiler_CXX='`$ECHO "$compiler_CXX" | $SED "$delay_single_quote_subst"`'
+ GCC_CXX='`$ECHO "$GCC_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag_CXX='`$ECHO "$lt_prog_compiler_no_builtin_flag_CXX" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic_CXX='`$ECHO "$lt_prog_compiler_pic_CXX" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static_CXX='`$ECHO "$lt_prog_compiler_static_CXX" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o_CXX='`$ECHO "$lt_cv_prog_compiler_c_o_CXX" | $SED "$delay_single_quote_subst"`'
+ archive_cmds_need_lc_CXX='`$ECHO "$archive_cmds_need_lc_CXX" | $SED "$delay_single_quote_subst"`'
+@@ -18216,12 +19062,12 @@ hardcode_shlibpath_var_CXX='`$ECHO "$hardcode_shlibpath_var_CXX" | $SED "$delay_
+ hardcode_automatic_CXX='`$ECHO "$hardcode_automatic_CXX" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath_CXX='`$ECHO "$inherit_rpath_CXX" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs_CXX='`$ECHO "$link_all_deplibs_CXX" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path_CXX='`$ECHO "$fix_srcfile_path_CXX" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols_CXX='`$ECHO "$always_export_symbols_CXX" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds_CXX='`$ECHO "$export_symbols_cmds_CXX" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms_CXX='`$ECHO "$exclude_expsyms_CXX" | $SED "$delay_single_quote_subst"`'
+ include_expsyms_CXX='`$ECHO "$include_expsyms_CXX" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds_CXX='`$ECHO "$prelink_cmds_CXX" | $SED "$delay_single_quote_subst"`'
++postlink_cmds_CXX='`$ECHO "$postlink_cmds_CXX" | $SED "$delay_single_quote_subst"`'
+ file_list_spec_CXX='`$ECHO "$file_list_spec_CXX" | $SED "$delay_single_quote_subst"`'
+ hardcode_action_CXX='`$ECHO "$hardcode_action_CXX" | $SED "$delay_single_quote_subst"`'
+ compiler_lib_search_dirs_CXX='`$ECHO "$compiler_lib_search_dirs_CXX" | $SED "$delay_single_quote_subst"`'
+@@ -18259,8 +19105,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -18270,12 +19121,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -18291,7 +19144,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -18313,8 +19165,8 @@ LD_CXX \
+ reload_flag_CXX \
+ compiler_CXX \
+ lt_prog_compiler_no_builtin_flag_CXX \
+-lt_prog_compiler_wl_CXX \
+ lt_prog_compiler_pic_CXX \
++lt_prog_compiler_wl_CXX \
+ lt_prog_compiler_static_CXX \
+ lt_cv_prog_compiler_c_o_CXX \
+ export_dynamic_flag_spec_CXX \
+@@ -18326,7 +19178,6 @@ no_undefined_flag_CXX \
+ hardcode_libdir_flag_spec_CXX \
+ hardcode_libdir_flag_spec_ld_CXX \
+ hardcode_libdir_separator_CXX \
+-fix_srcfile_path_CXX \
+ exclude_expsyms_CXX \
+ include_expsyms_CXX \
+ file_list_spec_CXX \
+@@ -18360,6 +19211,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -18374,7 +19226,8 @@ archive_expsym_cmds_CXX \
+ module_cmds_CXX \
+ module_expsym_cmds_CXX \
+ export_symbols_cmds_CXX \
+-prelink_cmds_CXX; do
++prelink_cmds_CXX \
++postlink_cmds_CXX; do
+     case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in
+     *[\\\\\\\`\\"\\\$]*)
+       eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\""
+@@ -19167,7 +20020,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -19270,19 +20124,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -19312,6 +20189,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -19321,6 +20204,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -19435,12 +20321,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -19527,9 +20413,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -19545,6 +20428,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -19591,210 +20477,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+@@ -19822,12 +20667,12 @@ with_gcc=$GCC_CXX
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_CXX
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl_CXX
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic_CXX
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl_CXX
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static_CXX
+ 
+@@ -19914,9 +20759,6 @@ inherit_rpath=$inherit_rpath_CXX
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs_CXX
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path_CXX
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols_CXX
+ 
+@@ -19932,6 +20774,9 @@ include_expsyms=$lt_include_expsyms_CXX
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds_CXX
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds_CXX
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec_CXX
+ 
+diff --git a/libtool.m4 b/libtool.m4
+index 24d13f3440..e45fdc6998 100644
+--- a/libtool.m4
++++ b/libtool.m4
+@@ -1,7 +1,8 @@
+ # libtool.m4 - Configure libtool for the host system. -*-Autoconf-*-
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ # This file is free software; the Free Software Foundation gives
+@@ -10,7 +11,8 @@
+ 
+ m4_define([_LT_COPYING], [dnl
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -37,7 +39,7 @@ m4_define([_LT_COPYING], [dnl
+ # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
+ ])
+ 
+-# serial 56 LT_INIT
++# serial 57 LT_INIT
+ 
+ 
+ # LT_PREREQ(VERSION)
+@@ -92,7 +94,8 @@ _LT_SET_OPTIONS([$0], [$1])
+ LIBTOOL_DEPS="$ltmain"
+ 
+ # Always use our own libtool.
+-LIBTOOL='$(SHELL) $(top_builddir)/libtool'
++LIBTOOL='$(SHELL) $(top_builddir)'
++LIBTOOL="$LIBTOOL/${host_alias}-libtool"
+ AC_SUBST(LIBTOOL)dnl
+ 
+ _LT_SETUP
+@@ -166,10 +169,13 @@ _LT_DECL([], [exeext], [0], [Executable file suffix (normally "")])dnl
+ dnl
+ m4_require([_LT_FILEUTILS_DEFAULTS])dnl
+ m4_require([_LT_CHECK_SHELL_FEATURES])dnl
++m4_require([_LT_PATH_CONVERSION_FUNCTIONS])dnl
+ m4_require([_LT_CMD_RELOAD])dnl
+ m4_require([_LT_CHECK_MAGIC_METHOD])dnl
++m4_require([_LT_CHECK_SHAREDLIB_FROM_LINKLIB])dnl
+ m4_require([_LT_CMD_OLD_ARCHIVE])dnl
+ m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl
++m4_require([_LT_WITH_SYSROOT])dnl
+ 
+ _LT_CONFIG_LIBTOOL_INIT([
+ # See if we are running on zsh, and set the options which allow our
+@@ -199,7 +205,7 @@ aix3*)
+ esac
+ 
+ # Global variables:
+-ofile=libtool
++ofile=${host_alias}-libtool
+ can_build_shared=yes
+ 
+ # All known linkers require a `.a' archive for static linking (except MSVC,
+@@ -632,7 +638,7 @@ m4_ifset([AC_PACKAGE_NAME], [AC_PACKAGE_NAME ])config.lt[]dnl
+ m4_ifset([AC_PACKAGE_VERSION], [ AC_PACKAGE_VERSION])
+ configured by $[0], generated by m4_PACKAGE_STRING.
+ 
+-Copyright (C) 2009 Free Software Foundation, Inc.
++Copyright (C) 2010 Free Software Foundation, Inc.
+ This config.lt script is free software; the Free Software Foundation
+ gives unlimited permision to copy, distribute and modify it."
+ 
+@@ -746,15 +752,12 @@ _LT_EOF
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
+ 
+-  _LT_PROG_XSI_SHELLFNS
++  _LT_PROG_REPLACE_SHELLFNS
+ 
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ ],
+@@ -980,6 +983,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&AS_MESSAGE_LOG_FD
+       echo "$AR cru libconftest.a conftest.o" >&AS_MESSAGE_LOG_FD
+       $AR cru libconftest.a conftest.o 2>&AS_MESSAGE_LOG_FD
++      echo "$RANLIB libconftest.a" >&AS_MESSAGE_LOG_FD
++      $RANLIB libconftest.a 2>&AS_MESSAGE_LOG_FD
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -1069,30 +1074,41 @@ m4_defun([_LT_DARWIN_LINKER_FEATURES],
+   fi
+ ])
+ 
+-# _LT_SYS_MODULE_PATH_AIX
+-# -----------------------
++# _LT_SYS_MODULE_PATH_AIX([TAGNAME])
++# ----------------------------------
+ # Links a minimal program and checks the executable
+ # for the system default hardcoded library path. In most cases,
+ # this is /usr/lib:/lib, but when the MPI compilers are used
+ # the location of the communication and MPI libs are included too.
+ # If we don't find anything, use the default library path according
+ # to the aix ld manual.
++# Store the results from the different compilers for each TAGNAME.
++# Allow to override them for all tags through lt_cv_aix_libpath.
+ m4_defun([_LT_SYS_MODULE_PATH_AIX],
+ [m4_require([_LT_DECL_SED])dnl
+-AC_LINK_IFELSE(AC_LANG_PROGRAM,[
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi],[])
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  AC_CACHE_VAL([_LT_TAGVAR([lt_cv_aix_libpath_], [$1])],
++  [AC_LINK_IFELSE([AC_LANG_PROGRAM],[
++  lt_aix_libpath_sed='[
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }]'
++  _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
++    _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi],[])
++  if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
++    _LT_TAGVAR([lt_cv_aix_libpath_], [$1])="/usr/lib:/lib"
++  fi
++  ])
++  aix_libpath=$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])
++fi
+ ])# _LT_SYS_MODULE_PATH_AIX
+ 
+ 
+@@ -1117,7 +1133,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ 
+ AC_MSG_CHECKING([how to print strings])
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -1161,6 +1177,39 @@ _LT_DECL([], [ECHO], [1], [An echo program that protects backslashes])
+ ])# _LT_PROG_ECHO_BACKSLASH
+ 
+ 
++# _LT_WITH_SYSROOT
++# ----------------
++AC_DEFUN([_LT_WITH_SYSROOT],
++[AC_MSG_CHECKING([for sysroot])
++AC_ARG_WITH([libtool-sysroot],
++[  --with-libtool-sysroot[=DIR] Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).],
++[], [with_libtool_sysroot=no])
++
++dnl lt_sysroot will always be passed unquoted.  We quote it here
++dnl in case the user passed a directory name.
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   AC_MSG_RESULT([${with_libtool_sysroot}])
++   AC_MSG_ERROR([The sysroot must be an absolute path.])
++   ;;
++esac
++
++ AC_MSG_RESULT([${lt_sysroot:-no}])
++_LT_DECL([], [lt_sysroot], [0], [The root where to search for ]dnl
++[dependent libraries, and in which our libraries should be installed.])])
++
+ # _LT_ENABLE_LOCK
+ # ---------------
+ m4_defun([_LT_ENABLE_LOCK],
+@@ -1320,14 +1369,47 @@ need_locks="$enable_libtool_lock"
+ ])# _LT_ENABLE_LOCK
+ 
+ 
++# _LT_PROG_AR
++# -----------
++m4_defun([_LT_PROG_AR],
++[AC_CHECK_TOOLS(AR, [ar], false)
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++_LT_DECL([], [AR], [1], [The archiver])
++_LT_DECL([], [AR_FLAGS], [1], [Flags to create an archive])
++
++AC_CACHE_CHECK([for archiver @FILE support], [lt_cv_ar_at_file],
++  [lt_cv_ar_at_file=no
++   AC_COMPILE_IFELSE([AC_LANG_PROGRAM],
++     [echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&AS_MESSAGE_LOG_FD'
++      AC_TRY_EVAL([lt_ar_try])
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	AC_TRY_EVAL([lt_ar_try])
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
++     ])
++  ])
++
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
++_LT_DECL([], [archiver_list_spec], [1],
++  [How to feed a file listing to the archiver])
++])# _LT_PROG_AR
++
++
+ # _LT_CMD_OLD_ARCHIVE
+ # -------------------
+ m4_defun([_LT_CMD_OLD_ARCHIVE],
+-[AC_CHECK_TOOL(AR, ar, false)
+-test -z "$AR" && AR=ar
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
+-_LT_DECL([], [AR], [1], [The archiver])
+-_LT_DECL([], [AR_FLAGS], [1])
++[_LT_PROG_AR
+ 
+ AC_CHECK_TOOL(STRIP, strip, :)
+ test -z "$STRIP" && STRIP=:
+@@ -1623,7 +1705,7 @@ else
+   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
+   lt_status=$lt_dlunknown
+   cat > conftest.$ac_ext <<_LT_EOF
+-[#line __oline__ "configure"
++[#line $LINENO "configure"
+ #include "confdefs.h"
+ 
+ #if HAVE_DLFCN_H
+@@ -1667,10 +1749,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -2210,8 +2292,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -2244,13 +2327,71 @@ m4_if([$1], [],[
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([[a-zA-Z]]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | [$GREP ';[c-zC-Z]:/' >/dev/null]; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -2342,7 +2483,7 @@ haiku*)
+   soname_spec='${libname}${release}${shared_ext}$major'
+   shlibpath_var=LIBRARY_PATH
+   shlibpath_overrides_runpath=yes
+-  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
++  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
+   hardcode_into_libs=yes
+   ;;
+ 
+@@ -2950,6 +3091,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -3016,7 +3162,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -3167,6 +3314,21 @@ tpf*)
+   ;;
+ esac
+ ])
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[[\1]]\/[[\1]]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -3174,7 +3336,11 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ _LT_DECL([], [deplibs_check_method], [1],
+     [Method to check whether dependent libraries are shared objects])
+ _LT_DECL([], [file_magic_cmd], [1],
+-    [Command to use when deplibs_check_method == "file_magic"])
++    [Command to use when deplibs_check_method = "file_magic"])
++_LT_DECL([], [file_magic_glob], [1],
++    [How to find potential files when deplibs_check_method = "file_magic"])
++_LT_DECL([], [want_nocaseglob], [1],
++    [Find potential files using nocaseglob when deplibs_check_method = "file_magic"])
+ ])# _LT_CHECK_MAGIC_METHOD
+ 
+ 
+@@ -3277,6 +3443,67 @@ dnl aclocal-1.4 backwards compatibility:
+ dnl AC_DEFUN([AM_PROG_NM], [])
+ dnl AC_DEFUN([AC_PROG_NM], [])
+ 
++# _LT_CHECK_SHAREDLIB_FROM_LINKLIB
++# --------------------------------
++# how to determine the name of the shared library
++# associated with a specific link library.
++#  -- PORTME fill in with the dynamic library characteristics
++m4_defun([_LT_CHECK_SHAREDLIB_FROM_LINKLIB],
++[m4_require([_LT_DECL_EGREP])
++m4_require([_LT_DECL_OBJDUMP])
++m4_require([_LT_DECL_DLLTOOL])
++AC_CACHE_CHECK([how to associate runtime and link libraries],
++lt_cv_sharedlib_from_linklib_cmd,
++[lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++])
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++_LT_DECL([], [sharedlib_from_linklib_cmd], [1],
++    [Command to associate shared and link libraries])
++])# _LT_CHECK_SHAREDLIB_FROM_LINKLIB
++
++
++# _LT_PATH_MANIFEST_TOOL
++# ----------------------
++# locate the manifest tool
++m4_defun([_LT_PATH_MANIFEST_TOOL],
++[AC_CHECK_TOOL(MANIFEST_TOOL, mt, :)
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++AC_CACHE_CHECK([if $MANIFEST_TOOL is a manifest tool], [lt_cv_path_mainfest_tool],
++  [lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&AS_MESSAGE_LOG_FD
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&AS_MESSAGE_LOG_FD
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*])
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++_LT_DECL([], [MANIFEST_TOOL], [1], [Manifest tool])dnl
++])# _LT_PATH_MANIFEST_TOOL
++
+ 
+ # LT_LIB_M
+ # --------
+@@ -3403,8 +3630,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([[^ ]]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \(lib[[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \(lib[[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -3440,6 +3667,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[[	 ]]\($symcode$symcode*\)[[	 ]][[	 ]]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -3473,6 +3701,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT@&t@_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT@&t@_DLSYM_CONST
++#else
++# define LT@&t@_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -3484,7 +3724,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT@&t@_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -3510,15 +3750,15 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)"
+ 	  if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&AS_MESSAGE_LOG_FD
+ 	fi
+@@ -3551,6 +3791,13 @@ else
+   AC_MSG_RESULT(ok)
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[[@]]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
+ _LT_DECL([global_symbol_pipe], [lt_cv_sys_global_symbol_pipe], [1],
+     [Take the output of nm and produce a listing of raw symbols and C names])
+ _LT_DECL([global_symbol_to_cdecl], [lt_cv_sys_global_symbol_to_cdecl], [1],
+@@ -3561,6 +3808,8 @@ _LT_DECL([global_symbol_to_c_name_address],
+ _LT_DECL([global_symbol_to_c_name_address_lib_prefix],
+     [lt_cv_sys_global_symbol_to_c_name_address_lib_prefix], [1],
+     [Transform the output of nm in a C name address pair when lib prefix is needed])
++_LT_DECL([], [nm_file_list_spec], [1],
++    [Specify filename containing input files for $NM])
+ ]) # _LT_CMD_GLOBAL_SYMBOLS
+ 
+ 
+@@ -3572,7 +3821,6 @@ _LT_TAGVAR(lt_prog_compiler_wl, $1)=
+ _LT_TAGVAR(lt_prog_compiler_pic, $1)=
+ _LT_TAGVAR(lt_prog_compiler_static, $1)=
+ 
+-AC_MSG_CHECKING([for $compiler option to produce PIC])
+ m4_if([$1], [CXX], [
+   # C++ specific cases for pic, static, wl, etc.
+   if test "$GXX" = yes; then
+@@ -3678,6 +3926,12 @@ m4_if([$1], [CXX], [
+ 	  ;;
+ 	esac
+ 	;;
++      mingw* | cygwin* | os2* | pw32* | cegcc*)
++	# This hack is so that the source file can tell whether it is being
++	# built for inclusion in a dll (and should export symbols for example).
++	m4_if([$1], [GCJ], [],
++	  [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
++	;;
+       dgux*)
+ 	case $cc_basename in
+ 	  ec++*)
+@@ -3830,7 +4084,7 @@ m4_if([$1], [CXX], [
+ 	;;
+       solaris*)
+ 	case $cc_basename in
+-	  CC*)
++	  CC* | sunCC*)
+ 	    # Sun C++ 4.2, 5.x and Centerline C++
+ 	    _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
+ 	    _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+@@ -4053,6 +4307,12 @@ m4_if([$1], [CXX], [
+ 	_LT_TAGVAR(lt_prog_compiler_pic, $1)='--shared'
+ 	_LT_TAGVAR(lt_prog_compiler_static, $1)='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,'
++	_LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
++	_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -4115,7 +4375,7 @@ m4_if([$1], [CXX], [
+       _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
+       _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ';;
+       *)
+ 	_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,';;
+@@ -4172,9 +4432,11 @@ case $host_os in
+     _LT_TAGVAR(lt_prog_compiler_pic, $1)="$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])"
+     ;;
+ esac
+-AC_MSG_RESULT([$_LT_TAGVAR(lt_prog_compiler_pic, $1)])
+-_LT_TAGDECL([wl], [lt_prog_compiler_wl], [1],
+-	[How to pass a linker flag through the compiler])
++
++AC_CACHE_CHECK([for $compiler option to produce PIC],
++  [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)],
++  [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_prog_compiler_pic, $1)])
++_LT_TAGVAR(lt_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -4193,6 +4455,8 @@ fi
+ _LT_TAGDECL([pic_flag], [lt_prog_compiler_pic], [1],
+ 	[Additional compiler flags for building library objects])
+ 
++_LT_TAGDECL([wl], [lt_prog_compiler_wl], [1],
++	[How to pass a linker flag through the compiler])
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -4213,6 +4477,7 @@ _LT_TAGDECL([link_static_flag], [lt_prog_compiler_static], [1],
+ m4_defun([_LT_LINKER_SHLIBS],
+ [AC_REQUIRE([LT_PATH_LD])dnl
+ AC_REQUIRE([LT_PATH_NM])dnl
++m4_require([_LT_PATH_MANIFEST_TOOL])dnl
+ m4_require([_LT_FILEUTILS_DEFAULTS])dnl
+ m4_require([_LT_DECL_EGREP])dnl
+ m4_require([_LT_DECL_SED])dnl
+@@ -4221,6 +4486,7 @@ m4_require([_LT_TAG_COMPILER])dnl
+ AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
+ m4_if([$1], [CXX], [
+   _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
++  _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
+   case $host_os in
+   aix[[4-9]]*)
+     # If we're using GNU nm, then we don't want the "-C" option.
+@@ -4235,15 +4501,20 @@ m4_if([$1], [CXX], [
+     ;;
+   pw32*)
+     _LT_TAGVAR(export_symbols_cmds, $1)="$ltdll_cmds"
+-  ;;
++    ;;
+   cygwin* | mingw* | cegcc*)
+-    _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;/^.*[[ ]]__nm__/s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
+-  ;;
++    case $cc_basename in
++    cl*) ;;
++    *)
++      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
++      _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
++      ;;
++    esac
++    ;;
+   *)
+     _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
+-  ;;
++    ;;
+   esac
+-  _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
+ ], [
+   runpath_var=
+   _LT_TAGVAR(allow_undefined_flag, $1)=
+@@ -4411,7 +4682,8 @@ _LT_EOF
+       _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
+       _LT_TAGVAR(always_export_symbols, $1)=no
+       _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
+-      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols'
++      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
++      _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -4510,12 +4782,12 @@ _LT_EOF
+ 	  _LT_TAGVAR(whole_archive_flag_spec, $1)='--whole-archive$convenience --no-whole-archive'
+ 	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
+ 	  _LT_TAGVAR(hardcode_libdir_flag_spec_ld, $1)='-rpath $libdir'
+-	  _LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  _LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -4529,8 +4801,8 @@ _LT_EOF
+ 	_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -4548,8 +4820,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	_LT_TAGVAR(ld_shlibs, $1)=no
+       fi
+@@ -4595,8 +4867,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	_LT_TAGVAR(ld_shlibs, $1)=no
+       fi
+@@ -4726,7 +4998,7 @@ _LT_EOF
+ 	_LT_TAGVAR(allow_undefined_flag, $1)='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        _LT_SYS_MODULE_PATH_AIX
++        _LT_SYS_MODULE_PATH_AIX([$1])
+         _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+         _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+       else
+@@ -4737,7 +5009,7 @@ _LT_EOF
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 _LT_SYS_MODULE_PATH_AIX
++	 _LT_SYS_MODULE_PATH_AIX([$1])
+ 	 _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+ 	  # -berok will link without error, but may produce a broken library.
+@@ -4781,20 +5053,63 @@ _LT_EOF
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
+-      _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      _LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
+-      # FIXME: Should let the user specify the lib program.
+-      _LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      _LT_TAGVAR(fix_srcfile_path, $1)='`cygpath -w "$srcfile"`'
+-      _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
++	_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
++	_LT_TAGVAR(always_export_symbols, $1)=yes
++	_LT_TAGVAR(file_list_spec, $1)='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	_LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	_LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
++	_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
++	_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1,DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	_LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
++	_LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
++	_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	_LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	_LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
++	# FIXME: Should let the user specify the lib program.
++	_LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -4828,7 +5143,7 @@ _LT_EOF
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      _LT_TAGVAR(archive_cmds, $1)='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
+       _LT_TAGVAR(hardcode_direct, $1)=yes
+       _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
+@@ -4836,7 +5151,7 @@ _LT_EOF
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -4852,7 +5167,7 @@ _LT_EOF
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	_LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -4876,10 +5191,10 @@ _LT_EOF
+ 	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -4926,16 +5241,31 @@ _LT_EOF
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        AC_LINK_IFELSE(int foo(void) {},
+-          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-        )
+-        LDFLAGS="$save_LDFLAGS"
++	# This should be the same for all languages, so no per-tag cache variable.
++	AC_CACHE_CHECK([whether the $host_os linker accepts -exported_symbol],
++	  [lt_cv_irix_exported_symbol],
++	  [save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   AC_LINK_IFELSE(
++	     [AC_LANG_SOURCE(
++	        [AC_LANG_CASE([C], [[int foo (void) { return 0; }]],
++			      [C++], [[int foo (void) { return 0; }]],
++			      [Fortran 77], [[
++      subroutine foo
++      end]],
++			      [Fortran], [[
++      subroutine foo
++      end]])])],
++	      [lt_cv_irix_exported_symbol=yes],
++	      [lt_cv_irix_exported_symbol=no])
++           LDFLAGS="$save_LDFLAGS"])
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -5020,7 +5350,7 @@ _LT_EOF
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	_LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+       else
+ 	_LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
+@@ -5039,9 +5369,9 @@ _LT_EOF
+       _LT_TAGVAR(no_undefined_flag, $1)=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -5313,8 +5643,6 @@ _LT_TAGDECL([], [inherit_rpath], [0],
+     to runtime path list])
+ _LT_TAGDECL([], [link_all_deplibs], [0],
+     [Whether libtool must link a program against all its dependency libraries])
+-_LT_TAGDECL([], [fix_srcfile_path], [1],
+-    [Fix the shell variable $srcfile for the compiler])
+ _LT_TAGDECL([], [always_export_symbols], [0],
+     [Set to "yes" if exported symbols are required])
+ _LT_TAGDECL([], [export_symbols_cmds], [2],
+@@ -5325,6 +5653,8 @@ _LT_TAGDECL([], [include_expsyms], [1],
+     [Symbols that must always be exported])
+ _LT_TAGDECL([], [prelink_cmds], [2],
+     [Commands necessary for linking programs (against libraries) with templates])
++_LT_TAGDECL([], [postlink_cmds], [2],
++    [Commands necessary for finishing linking programs])
+ _LT_TAGDECL([], [file_list_spec], [1],
+     [Specify filename containing input files])
+ dnl FIXME: Not yet implemented
+@@ -5426,6 +5756,7 @@ CC="$lt_save_CC"
+ m4_defun([_LT_LANG_CXX_CONFIG],
+ [m4_require([_LT_FILEUTILS_DEFAULTS])dnl
+ m4_require([_LT_DECL_EGREP])dnl
++m4_require([_LT_PATH_MANIFEST_TOOL])dnl
+ if test -n "$CXX" && ( test "X$CXX" != "Xno" &&
+     ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) ||
+     (test "X$CXX" != "Xg++"))) ; then
+@@ -5487,6 +5818,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+ 
+   # Allow CC to be a program name with arguments.
+   lt_save_CC=$CC
++  lt_save_CFLAGS=$CFLAGS
+   lt_save_LD=$LD
+   lt_save_GCC=$GCC
+   GCC=$GXX
+@@ -5504,6 +5836,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+   fi
+   test -z "${LDCXX+set}" || LD=$LDCXX
+   CC=${CXX-"c++"}
++  CFLAGS=$CXXFLAGS
+   compiler=$CC
+   _LT_TAGVAR(compiler, $1)=$CC
+   _LT_CC_BASENAME([$compiler])
+@@ -5667,7 +6000,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+           _LT_TAGVAR(allow_undefined_flag, $1)='-berok'
+           # Determine the default libpath from the value encoded in an empty
+           # executable.
+-          _LT_SYS_MODULE_PATH_AIX
++          _LT_SYS_MODULE_PATH_AIX([$1])
+           _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 
+           _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -5679,7 +6012,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+           else
+ 	    # Determine the default libpath from the value encoded in an
+ 	    # empty executable.
+-	    _LT_SYS_MODULE_PATH_AIX
++	    _LT_SYS_MODULE_PATH_AIX([$1])
+ 	    _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	    # Warning - without using the other run time loading flags,
+ 	    # -berok will link without error, but may produce a broken library.
+@@ -5721,29 +6054,75 @@ if test "$_lt_caught_CXX_error" != yes; then
+         ;;
+ 
+       cygwin* | mingw* | pw32* | cegcc*)
+-        # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
+-        # as there is no search path for DLLs.
+-        _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+-        _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols'
+-        _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
+-        _LT_TAGVAR(always_export_symbols, $1)=no
+-        _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
+-
+-        if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+-          _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+-          # If the export-symbols file already is a .def file (1st line
+-          # is EXPORTS), use it as is; otherwise, prepend...
+-          _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
+-	    cp $export_symbols $output_objdir/$soname.def;
+-          else
+-	    echo EXPORTS > $output_objdir/$soname.def;
+-	    cat $export_symbols >> $output_objdir/$soname.def;
+-          fi~
+-          $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+-        else
+-          _LT_TAGVAR(ld_shlibs, $1)=no
+-        fi
+-        ;;
++	case $GXX,$cc_basename in
++	,cl* | no,cl*)
++	  # Native MSVC
++	  # hardcode_libdir_flag_spec is actually meaningless, as there is
++	  # no search path for DLLs.
++	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
++	  _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
++	  _LT_TAGVAR(always_export_symbols, $1)=yes
++	  _LT_TAGVAR(file_list_spec, $1)='@'
++	  # Tell ltmain to make .lib files, not .a files.
++	  libext=lib
++	  # Tell ltmain to make .dll files, not .so files.
++	  shrext_cmds=".dll"
++	  # FIXME: Setting linknames here is a bad hack.
++	  _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	  _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	      $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	    else
++	      $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	    fi~
++	    $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	    linknames='
++	  # The linker will not automatically build a static lib if we build a DLL.
++	  # _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
++	  _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
++	  # Don't use ranlib
++	  _LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
++	  _LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
++	    lt_tool_outputfile="@TOOL_OUTPUT@"~
++	    case $lt_outputfile in
++	      *.exe|*.EXE) ;;
++	      *)
++		lt_outputfile="$lt_outputfile.exe"
++		lt_tool_outputfile="$lt_tool_outputfile.exe"
++		;;
++	    esac~
++	    func_to_tool_file "$lt_outputfile"~
++	    if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	      $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	      $RM "$lt_outputfile.manifest";
++	    fi'
++	  ;;
++	*)
++	  # g++
++	  # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
++	  # as there is no search path for DLLs.
++	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
++	  _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols'
++	  _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
++	  _LT_TAGVAR(always_export_symbols, $1)=no
++	  _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
++
++	  if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
++	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
++	    # If the export-symbols file already is a .def file (1st line
++	    # is EXPORTS), use it as is; otherwise, prepend...
++	    _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	      cp $export_symbols $output_objdir/$soname.def;
++	    else
++	      echo EXPORTS > $output_objdir/$soname.def;
++	      cat $export_symbols >> $output_objdir/$soname.def;
++	    fi~
++	    $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
++	  else
++	    _LT_TAGVAR(ld_shlibs, $1)=no
++	  fi
++	  ;;
++	esac
++	;;
+       darwin* | rhapsody*)
+         _LT_DARWIN_LINKER_FEATURES($1)
+ 	;;
+@@ -5818,7 +6197,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+             ;;
+           *)
+             if test "$GXX" = yes; then
+-              _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++              _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+             else
+               # FIXME: insert proper C++ library support
+               _LT_TAGVAR(ld_shlibs, $1)=no
+@@ -5889,10 +6268,10 @@ if test "$_lt_caught_CXX_error" != yes; then
+ 	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	          ia64*)
+-	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
++	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	          *)
+-	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
++	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+ 	            ;;
+ 	        esac
+ 	      fi
+@@ -5933,9 +6312,9 @@ if test "$_lt_caught_CXX_error" != yes; then
+           *)
+ 	    if test "$GXX" = yes; then
+ 	      if test "$with_gnu_ld" = no; then
+-	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	      else
+-	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
++	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
+ 	      fi
+ 	    fi
+ 	    _LT_TAGVAR(link_all_deplibs, $1)=yes
+@@ -6005,20 +6384,20 @@ if test "$_lt_caught_CXX_error" != yes; then
+ 	      _LT_TAGVAR(prelink_cmds, $1)='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
+-		compile_command="$compile_command `find $tpldir -name \*.o | $NL2SP`"'
++		compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
+ 	      _LT_TAGVAR(old_archive_cmds, $1)='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
+-		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | $NL2SP`~
++		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
+ 		$RANLIB $oldlib'
+ 	      _LT_TAGVAR(archive_cmds, $1)='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
++		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+ 	      _LT_TAGVAR(archive_expsym_cmds, $1)='tpldir=Template.dir~
+ 		rm -rf $tpldir~
+ 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
++		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
+ 	      ;;
+ 	    *) # Version 6 and above use weak symbols
+ 	      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+@@ -6213,7 +6592,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+ 	          _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 		  ;;
+ 	        *)
+-	          _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	          _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 		  ;;
+ 	      esac
+ 
+@@ -6259,7 +6638,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+ 
+       solaris*)
+         case $cc_basename in
+-          CC*)
++          CC* | sunCC*)
+ 	    # Sun C++ 4.2, 5.x and Centerline C++
+             _LT_TAGVAR(archive_cmds_need_lc,$1)=yes
+ 	    _LT_TAGVAR(no_undefined_flag, $1)=' -zdefs'
+@@ -6300,9 +6679,9 @@ if test "$_lt_caught_CXX_error" != yes; then
+ 	    if test "$GXX" = yes && test "$with_gnu_ld" = no; then
+ 	      _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-z ${wl}defs'
+ 	      if $CC --version | $GREP -v '^2\.7' > /dev/null; then
+-	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
++	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+ 	        _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-		  $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
++		  $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
+ 
+ 	        # Commands to make compiler produce verbose output that lists
+ 	        # what "hidden" libraries, object files and flags are used when
+@@ -6431,6 +6810,7 @@ if test "$_lt_caught_CXX_error" != yes; then
+   fi # test -n "$compiler"
+ 
+   CC=$lt_save_CC
++  CFLAGS=$lt_save_CFLAGS
+   LDCXX=$LD
+   LD=$lt_save_LD
+   GCC=$lt_save_GCC
+@@ -6445,6 +6825,29 @@ AC_LANG_POP
+ ])# _LT_LANG_CXX_CONFIG
+ 
+ 
++# _LT_FUNC_STRIPNAME_CNF
++# ----------------------
++# func_stripname_cnf prefix suffix name
++# strip PREFIX and SUFFIX off of NAME.
++# PREFIX and SUFFIX must not contain globbing or regex special
++# characters, hashes, percent signs, but SUFFIX may contain a leading
++# dot (in which case that matches only a dot).
++#
++# This function is identical to the (non-XSI) version of func_stripname,
++# except this one can be used by m4 code that may be executed by configure,
++# rather than the libtool script.
++m4_defun([_LT_FUNC_STRIPNAME_CNF],[dnl
++AC_REQUIRE([_LT_DECL_SED])
++AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])
++func_stripname_cnf ()
++{
++  case ${2} in
++  .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
++  *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
++  esac
++} # func_stripname_cnf
++])# _LT_FUNC_STRIPNAME_CNF
++
+ # _LT_SYS_HIDDEN_LIBDEPS([TAGNAME])
+ # ---------------------------------
+ # Figure out "hidden" library dependencies from verbose
+@@ -6453,6 +6856,7 @@ AC_LANG_POP
+ # objects, libraries and library flags.
+ m4_defun([_LT_SYS_HIDDEN_LIBDEPS],
+ [m4_require([_LT_FILEUTILS_DEFAULTS])dnl
++AC_REQUIRE([_LT_FUNC_STRIPNAME_CNF])dnl
+ # Dependencies to place before and after the object being linked:
+ _LT_TAGVAR(predep_objects, $1)=
+ _LT_TAGVAR(postdep_objects, $1)=
+@@ -6503,6 +6907,13 @@ public class foo {
+ };
+ _LT_EOF
+ ])
++
++_lt_libdeps_save_CFLAGS=$CFLAGS
++case "$CC $CFLAGS " in #(
++*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
++*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
++esac
++
+ dnl Parse the compiler output and extract the necessary
+ dnl objects, libraries and library flags.
+ if AC_TRY_EVAL(ac_compile); then
+@@ -6514,7 +6925,7 @@ if AC_TRY_EVAL(ac_compile); then
+   pre_test_object_deps_done=no
+ 
+   for p in `eval "$output_verbose_link_cmd"`; do
+-    case $p in
++    case ${prev}${p} in
+ 
+     -L* | -R* | -l*)
+        # Some compilers place space between "-{L,R}" and the path.
+@@ -6523,13 +6934,22 @@ if AC_TRY_EVAL(ac_compile); then
+           test $p = "-R"; then
+ 	 prev=$p
+ 	 continue
+-       else
+-	 prev=
+        fi
+ 
++       # Expand the sysroot to ease extracting the directories later.
++       if test -z "$prev"; then
++         case $p in
++         -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
++         -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
++         -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
++         esac
++       fi
++       case $p in
++       =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
++       esac
+        if test "$pre_test_object_deps_done" = no; then
+-	 case $p in
+-	 -L* | -R*)
++	 case ${prev} in
++	 -L | -R)
+ 	   # Internal compiler library paths should come after those
+ 	   # provided the user.  The postdeps already come after the
+ 	   # user supplied libs so there is no need to process them.
+@@ -6549,8 +6969,10 @@ if AC_TRY_EVAL(ac_compile); then
+ 	   _LT_TAGVAR(postdeps, $1)="${_LT_TAGVAR(postdeps, $1)} ${prev}${p}"
+ 	 fi
+        fi
++       prev=
+        ;;
+ 
++    *.lto.$objext) ;; # Ignore GCC LTO objects
+     *.$objext)
+        # This assumes that the test object file only shows up
+        # once in the compiler output.
+@@ -6586,6 +7008,7 @@ else
+ fi
+ 
+ $RM -f confest.$objext
++CFLAGS=$_lt_libdeps_save_CFLAGS
+ 
+ # PORTME: override above test on systems where it is broken
+ m4_if([$1], [CXX],
+@@ -6622,7 +7045,7 @@ linux*)
+ 
+ solaris*)
+   case $cc_basename in
+-  CC*)
++  CC* | sunCC*)
+     # The more standards-conforming stlport4 library is
+     # incompatible with the Cstd library. Avoid specifying
+     # it if it's in CXXFLAGS. Ignore libCrun as
+@@ -6735,7 +7158,9 @@ if test "$_lt_disable_F77" != yes; then
+   # Allow CC to be a program name with arguments.
+   lt_save_CC="$CC"
+   lt_save_GCC=$GCC
++  lt_save_CFLAGS=$CFLAGS
+   CC=${F77-"f77"}
++  CFLAGS=$FFLAGS
+   compiler=$CC
+   _LT_TAGVAR(compiler, $1)=$CC
+   _LT_CC_BASENAME([$compiler])
+@@ -6789,6 +7214,7 @@ if test "$_lt_disable_F77" != yes; then
+ 
+   GCC=$lt_save_GCC
+   CC="$lt_save_CC"
++  CFLAGS="$lt_save_CFLAGS"
+ fi # test "$_lt_disable_F77" != yes
+ 
+ AC_LANG_POP
+@@ -6865,7 +7291,9 @@ if test "$_lt_disable_FC" != yes; then
+   # Allow CC to be a program name with arguments.
+   lt_save_CC="$CC"
+   lt_save_GCC=$GCC
++  lt_save_CFLAGS=$CFLAGS
+   CC=${FC-"f95"}
++  CFLAGS=$FCFLAGS
+   compiler=$CC
+   GCC=$ac_cv_fc_compiler_gnu
+ 
+@@ -6921,7 +7349,8 @@ if test "$_lt_disable_FC" != yes; then
+   fi # test -n "$compiler"
+ 
+   GCC=$lt_save_GCC
+-  CC="$lt_save_CC"
++  CC=$lt_save_CC
++  CFLAGS=$lt_save_CFLAGS
+ fi # test "$_lt_disable_FC" != yes
+ 
+ AC_LANG_POP
+@@ -6958,10 +7387,12 @@ _LT_COMPILER_BOILERPLATE
+ _LT_LINKER_BOILERPLATE
+ 
+ # Allow CC to be a program name with arguments.
+-lt_save_CC="$CC"
++lt_save_CC=$CC
++lt_save_CFLAGS=$CFLAGS
+ lt_save_GCC=$GCC
+ GCC=yes
+ CC=${GCJ-"gcj"}
++CFLAGS=$GCJFLAGS
+ compiler=$CC
+ _LT_TAGVAR(compiler, $1)=$CC
+ _LT_TAGVAR(LD, $1)="$LD"
+@@ -6992,7 +7423,8 @@ fi
+ AC_LANG_RESTORE
+ 
+ GCC=$lt_save_GCC
+-CC="$lt_save_CC"
++CC=$lt_save_CC
++CFLAGS=$lt_save_CFLAGS
+ ])# _LT_LANG_GCJ_CONFIG
+ 
+ 
+@@ -7027,9 +7459,11 @@ _LT_LINKER_BOILERPLATE
+ 
+ # Allow CC to be a program name with arguments.
+ lt_save_CC="$CC"
++lt_save_CFLAGS=$CFLAGS
+ lt_save_GCC=$GCC
+ GCC=
+ CC=${RC-"windres"}
++CFLAGS=
+ compiler=$CC
+ _LT_TAGVAR(compiler, $1)=$CC
+ _LT_CC_BASENAME([$compiler])
+@@ -7042,7 +7476,8 @@ fi
+ 
+ GCC=$lt_save_GCC
+ AC_LANG_RESTORE
+-CC="$lt_save_CC"
++CC=$lt_save_CC
++CFLAGS=$lt_save_CFLAGS
+ ])# _LT_LANG_RC_CONFIG
+ 
+ 
+@@ -7101,6 +7536,15 @@ _LT_DECL([], [OBJDUMP], [1], [An object symbol dumper])
+ AC_SUBST([OBJDUMP])
+ ])
+ 
++# _LT_DECL_DLLTOOL
++# ----------------
++# Ensure DLLTOOL variable is set.
++m4_defun([_LT_DECL_DLLTOOL],
++[AC_CHECK_TOOL(DLLTOOL, dlltool, false)
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++_LT_DECL([], [DLLTOOL], [1], [DLL creation program])
++AC_SUBST([DLLTOOL])
++])
+ 
+ # _LT_DECL_SED
+ # ------------
+@@ -7194,8 +7638,8 @@ m4_defun([_LT_CHECK_SHELL_FEATURES],
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -7234,206 +7678,162 @@ _LT_DECL([NL2SP], [lt_NL2SP], [1], [turn newlines into spaces])dnl
+ ])# _LT_CHECK_SHELL_FEATURES
+ 
+ 
+-# _LT_PROG_XSI_SHELLFNS
+-# ---------------------
+-# Bourne and XSI compatible variants of some useful shell functions.
+-m4_defun([_LT_PROG_XSI_SHELLFNS],
+-[case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $[*] ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
++# _LT_PROG_FUNCTION_REPLACE (FUNCNAME, REPLACEMENT-BODY)
++# ------------------------------------------------------
++# In `$cfgfile', look for function FUNCNAME delimited by `^FUNCNAME ()$' and
++# '^} FUNCNAME ', and replace its body with REPLACEMENT-BODY.
++m4_defun([_LT_PROG_FUNCTION_REPLACE],
++[dnl {
++sed -e '/^$1 ()$/,/^} # $1 /c\
++$1 ()\
++{\
++m4_bpatsubsts([$2], [$], [\\], [^\([	 ]\)], [\\\1])
++} # Extended-shell $1 implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++])
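Once m4 has expanded it, the macro is just a ranged sed c\ (change) command keyed on the "FUNCNAME ()" and "} # FUNCNAME " delimiters; a throwaway sketch of the same edit (demo.sh is an invented file name, and the Bourne func_len body is copied from later in this patch):

cat > demo.sh <<'EOF'
func_len ()
{
    func_len_result=`expr "${1}" : ".*" 2>/dev/null || echo $max_cmd_len`
} # func_len may be replaced by extended shell implementation
EOF
sed -e '/^func_len ()$/,/^} # func_len /c\
func_len ()\
{\
\    func_len_result=${#1}\
} # Extended-shell func_len implementation' demo.sh
# prints demo.sh with the expr-based body swapped for the ${#1} one
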
+ 
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+ 
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
++# _LT_PROG_REPLACE_SHELLFNS
++# -------------------------
++# Replace existing portable implementations of several shell functions with
++# equivalent extended shell implementations where those features are available.
++m4_defun([_LT_PROG_REPLACE_SHELLFNS],
++[if test x"$xsi_shell" = xyes; then
++  _LT_PROG_FUNCTION_REPLACE([func_dirname], [dnl
++    case ${1} in
++      */*) func_dirname_result="${1%/*}${2}" ;;
++      *  ) func_dirname_result="${3}" ;;
++    esac])
++
++  _LT_PROG_FUNCTION_REPLACE([func_basename], [dnl
++    func_basename_result="${1##*/}"])
++
++  _LT_PROG_FUNCTION_REPLACE([func_dirname_and_basename], [dnl
++    case ${1} in
++      */*) func_dirname_result="${1%/*}${2}" ;;
++      *  ) func_dirname_result="${3}" ;;
++    esac
++    func_basename_result="${1##*/}"])
+ 
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
++  _LT_PROG_FUNCTION_REPLACE([func_stripname], [dnl
++    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
++    # positional parameters, so assign one to ordinary parameter first.
++    func_stripname_result=${3}
++    func_stripname_result=${func_stripname_result#"${1}"}
++    func_stripname_result=${func_stripname_result%"${2}"}])
+ 
+-dnl func_dirname_and_basename
+-dnl A portable version of this function is already defined in general.m4sh
+-dnl so there is no need for it here.
++  _LT_PROG_FUNCTION_REPLACE([func_split_long_opt], [dnl
++    func_split_long_opt_name=${1%%=*}
++    func_split_long_opt_arg=${1#*=}])
+ 
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
++  _LT_PROG_FUNCTION_REPLACE([func_split_short_opt], [dnl
++    func_split_short_opt_arg=${1#??}
++    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}])
+ 
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[[^=]]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[[^=]]*=//'
++  _LT_PROG_FUNCTION_REPLACE([func_lo2o], [dnl
++    case ${1} in
++      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
++      *)    func_lo2o_result=${1} ;;
++    esac])
+ 
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
++  _LT_PROG_FUNCTION_REPLACE([func_xform], [    func_xform_result=${1%.*}.lo])
+ 
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
++  _LT_PROG_FUNCTION_REPLACE([func_arith], [    func_arith_result=$(( $[*] ))])
+ 
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[[^.]]*$/.lo/'`
+-}
++  _LT_PROG_FUNCTION_REPLACE([func_len], [    func_len_result=${#1}])
++fi
+ 
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$[@]"`
+-}
++if test x"$lt_shell_append" = xyes; then
++  _LT_PROG_FUNCTION_REPLACE([func_append], [    eval "${1}+=\\${2}"])
+ 
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$[1]" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
++  _LT_PROG_FUNCTION_REPLACE([func_append_quoted], [dnl
++    func_quote_for_eval "${2}"
++dnl m4 expansion turns \\\\ into \\, and then the shell eval turns that into \
++    eval "${1}+=\\\\ \\$func_quote_for_eval_result"])
+ 
+-_LT_EOF
+-esac
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
+ 
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
++if test x"$_lt_function_replace_fail" = x":"; then
++  AC_MSG_WARN([Unable to substitute extended shell functions in $ofile])
++fi
++])
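After m4 strips one level of bracket quoting, the func_append rewrite above is a plain sed substitution; illustrated here with an invented variable name:

echo 'func_append deplibs " -lfoo"' \
  | sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g'
# prints: deplibs+=" -lfoo"
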
+ 
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$[1]+=\$[2]"
+-}
+-_LT_EOF
++# _LT_PATH_CONVERSION_FUNCTIONS
++# -----------------------------
++# Determine which file name conversion functions should be used by
++# func_to_host_file (and, implicitly, by func_to_host_path).  These are needed
++# for certain cross-compile configurations and native mingw.
++m4_defun([_LT_PATH_CONVERSION_FUNCTIONS],
++[AC_REQUIRE([AC_CANONICAL_HOST])dnl
++AC_REQUIRE([AC_CANONICAL_BUILD])dnl
++AC_MSG_CHECKING([how to convert $build file names to $host format])
++AC_CACHE_VAL(lt_cv_to_host_file_cmd,
++[case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
+     ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$[1]=\$$[1]\$[2]"
+-}
+-
+-_LT_EOF
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
+     ;;
+-  esac
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++])
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++AC_MSG_RESULT([$lt_cv_to_host_file_cmd])
++_LT_DECL([to_host_file_cmd], [lt_cv_to_host_file_cmd],
++         [0], [convert $build file names to $host format])dnl
++
++AC_MSG_CHECKING([how to convert $build file names to toolchain format])
++AC_CACHE_VAL(lt_cv_to_tool_file_cmd,
++[#assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
+ ])
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++AC_MSG_RESULT([$lt_cv_to_tool_file_cmd])
++_LT_DECL([to_tool_file_cmd], [lt_cv_to_tool_file_cmd],
++         [0], [convert $build files to toolchain format])dnl
++])# _LT_PATH_CONVERSION_FUNCTIONS
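A reduced sketch of the dispatch above, with the build/host triplets filled in purely as assumed examples (the cygwin host branch is omitted for brevity):

build=x86_64-pc-linux-gnu            # assumed $build
host=i686-w64-mingw32                # assumed $host
case $host in
  *-*-mingw* )
    case $build in
      *-*-mingw* )  to_host_file_cmd=func_convert_file_msys_to_w32 ;;
      *-*-cygwin* ) to_host_file_cmd=func_convert_file_cygwin_to_w32 ;;
      * )           to_host_file_cmd=func_convert_file_nix_to_w32 ;;
    esac ;;
  * ) to_host_file_cmd=func_convert_file_noop ;;
esac
echo "$to_host_file_cmd"             # -> func_convert_file_nix_to_w32
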
+diff --git a/ltmain.sh b/ltmain.sh
+index 9503ec85d7..70e856e065 100644
+--- a/ltmain.sh
++++ b/ltmain.sh
+@@ -1,10 +1,9 @@
+-# Generated from ltmain.m4sh.
+ 
+-# libtool (GNU libtool 1.3134 2009-11-29) 2.2.7a
++# libtool (GNU libtool) 2.4
+ # Written by Gordon Matzigkeit <gord@gnu.ai.mit.edu>, 1996
+ 
+ # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, 2006,
+-# 2007, 2008, 2009 Free Software Foundation, Inc.
++# 2007, 2008, 2009, 2010 Free Software Foundation, Inc.
+ # This is free software; see the source for copying conditions.  There is NO
+ # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ 
+@@ -38,7 +37,6 @@
+ #   -n, --dry-run            display commands without modifying any files
+ #       --features           display basic configuration information and exit
+ #       --mode=MODE          use operation mode MODE
+-#       --no-finish          let install mode avoid finish commands
+ #       --preserve-dup-deps  don't remove duplicate dependency libraries
+ #       --quiet, --silent    don't print informational messages
+ #       --no-quiet, --no-silent
+@@ -71,17 +69,19 @@
+ #         compiler:		$LTCC
+ #         compiler flags:		$LTCFLAGS
+ #         linker:		$LD (gnu? $with_gnu_ld)
+-#         $progname:	(GNU libtool 1.3134 2009-11-29) 2.2.7a
++#         $progname:	(GNU libtool) 2.4
+ #         automake:	$automake_version
+ #         autoconf:	$autoconf_version
+ #
+ # Report bugs to <bug-libtool@gnu.org>.
++# GNU libtool home page: <http://www.gnu.org/software/libtool/>.
++# General help using GNU software: <http://www.gnu.org/gethelp/>.
+ 
+ PROGRAM=libtool
+ PACKAGE=libtool
+-VERSION=2.2.7a
+-TIMESTAMP=" 1.3134 2009-11-29"
+-package_revision=1.3134
++VERSION=2.4
++TIMESTAMP=""
++package_revision=1.3293
+ 
+ # Be Bourne compatible
+ if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then
+@@ -106,9 +106,6 @@ _LTECHO_EOF'
+ }
+ 
+ # NLS nuisances: We save the old values to restore during execute mode.
+-# Only set LANG and LC_ALL to C if already set.
+-# These must not be set unconditionally because not all systems understand
+-# e.g. LANG=C (notably SCO).
+ lt_user_locale=
+ lt_safe_locale=
+ for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES
+@@ -121,15 +118,13 @@ do
+ 	  lt_safe_locale=\"$lt_var=C; \$lt_safe_locale\"
+ 	fi"
+ done
++LC_ALL=C
++LANGUAGE=C
++export LANGUAGE LC_ALL
+ 
+ $lt_unset CDPATH
+ 
+ 
+-
+-
+-
+-
+-
+ # Work around backward compatibility issue on IRIX 6.5. On IRIX 6.4+, sh
+ # is ksh but when the shell is invoked as "sh" and the current value of
+ # the _XPG environment variable is not equal to 1 (one), the special
+@@ -140,7 +135,7 @@ progpath="$0"
+ 
+ 
+ : ${CP="cp -f"}
+-: ${ECHO=$as_echo}
++test "${ECHO+set}" = set || ECHO=${as_echo-'printf %s\n'}
+ : ${EGREP="/bin/grep -E"}
+ : ${FGREP="/bin/grep -F"}
+ : ${GREP="/bin/grep"}
+@@ -149,7 +144,7 @@ progpath="$0"
+ : ${MKDIR="mkdir"}
+ : ${MV="mv -f"}
+ : ${RM="rm -f"}
+-: ${SED="/mount/endor/wildenhu/local-x86_64/bin/sed"}
++: ${SED="/bin/sed"}
+ : ${SHELL="${CONFIG_SHELL-/bin/sh}"}
+ : ${Xsed="$SED -e 1s/^X//"}
+ 
+@@ -169,6 +164,27 @@ IFS=" 	$lt_nl"
+ dirname="s,/[^/]*$,,"
+ basename="s,^.*/,,"
+ 
++# func_dirname file append nondir_replacement
++# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
++# otherwise set result to NONDIR_REPLACEMENT.
++func_dirname ()
++{
++    func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
++    if test "X$func_dirname_result" = "X${1}"; then
++      func_dirname_result="${3}"
++    else
++      func_dirname_result="$func_dirname_result${2}"
++    fi
++} # func_dirname may be replaced by extended shell implementation
++
++
++# func_basename file
++func_basename ()
++{
++    func_basename_result=`$ECHO "${1}" | $SED "$basename"`
++} # func_basename may be replaced by extended shell implementation
++
++
+ # func_dirname_and_basename file append nondir_replacement
+ # perform func_basename and func_dirname in a single function
+ # call:
+@@ -183,17 +199,31 @@ basename="s,^.*/,,"
+ # those functions but instead duplicate the functionality here.
+ func_dirname_and_basename ()
+ {
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED -e "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-  func_basename_result=`$ECHO "${1}" | $SED -e "$basename"`
+-}
++    # Extract subdirectory from the argument.
++    func_dirname_result=`$ECHO "${1}" | $SED -e "$dirname"`
++    if test "X$func_dirname_result" = "X${1}"; then
++      func_dirname_result="${3}"
++    else
++      func_dirname_result="$func_dirname_result${2}"
++    fi
++    func_basename_result=`$ECHO "${1}" | $SED -e "$basename"`
++} # func_dirname_and_basename may be replaced by extended shell implementation
++
++
++# func_stripname prefix suffix name
++# strip PREFIX and SUFFIX off of NAME.
++# PREFIX and SUFFIX must not contain globbing or regex special
++# characters, hashes, percent signs, but SUFFIX may contain a leading
++# dot (in which case that matches only a dot).
++# func_strip_suffix prefix name
++func_stripname ()
++{
++    case ${2} in
++      .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
++      *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
++    esac
++} # func_stripname may be replaced by extended shell implementation
+ 
+-# Generated shell functions inserted here.
+ 
+ # These SED scripts presuppose an absolute path with a trailing slash.
+ pathcar='s,^/\([^/]*\).*$,\1,'
+@@ -376,6 +406,15 @@ sed_quote_subst='s/\([`"$\\]\)/\\\1/g'
+ # Same as above, but do not quote variable references.
+ double_quote_subst='s/\(["`\\]\)/\\\1/g'
+ 
++# Sed substitution that turns a string into a regex matching for the
++# string literally.
++sed_make_literal_regex='s,[].[^$\\*\/],\\&,g'
++
++# Sed substitution that converts a w32 file name or path
++# which contains forward slashes, into one that contains
++# (escaped) backslashes.  A very naive implementation.
++lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
++
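For reference, the "naive backslashify" expression converts forward slashes into doubled (escaped) backslashes; a quick standalone illustration with an invented w32-style path:

printf '%s\n' 'C:/msys/1.0/home' \
  | sed -e 's|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
# prints: C:\\msys\\1.0\\home
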
+ # Re-`\' parameter expansions in output of double_quote_subst that were
+ # `\'-ed in input to the same.  If an odd number of `\' preceded a '$'
+ # in input to double_quote_subst, that '$' was protected from expansion.
+@@ -404,7 +443,7 @@ opt_warning=:
+ # name if it has been set yet.
+ func_echo ()
+ {
+-    $ECHO "$progname${mode+: }$mode: $*"
++    $ECHO "$progname: ${opt_mode+$opt_mode: }$*"
+ }
+ 
+ # func_verbose arg...
+@@ -430,14 +469,14 @@ func_echo_all ()
+ # Echo program name prefixed message to standard error.
+ func_error ()
+ {
+-    $ECHO "$progname${mode+: }$mode: "${1+"$@"} 1>&2
++    $ECHO "$progname: ${opt_mode+$opt_mode: }"${1+"$@"} 1>&2
+ }
+ 
+ # func_warning arg...
+ # Echo program name prefixed warning message to standard error.
+ func_warning ()
+ {
+-    $opt_warning && $ECHO "$progname${mode+: }$mode: warning: "${1+"$@"} 1>&2
++    $opt_warning && $ECHO "$progname: ${opt_mode+$opt_mode: }warning: "${1+"$@"} 1>&2
+ 
+     # bash bug again:
+     :
+@@ -656,19 +695,35 @@ func_show_eval_locale ()
+     fi
+ }
+ 
+-
+-
++# func_tr_sh
++# Turn $1 into a string suitable for a shell variable name.
++# Result is stored in $func_tr_sh_result.  All characters
++# not in the set a-zA-Z0-9_ are replaced with '_'. Further,
++# if $1 begins with a digit, a '_' is prepended as well.
++func_tr_sh ()
++{
++  case $1 in
++  [0-9]* | *[!a-zA-Z0-9_]*)
++    func_tr_sh_result=`$ECHO "$1" | $SED 's/^\([0-9]\)/_\1/; s/[^a-zA-Z0-9_]/_/g'`
++    ;;
++  * )
++    func_tr_sh_result=$1
++    ;;
++  esac
++}
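A standalone sketch of the sanitising (ECHO and SED bound locally, input string invented; the body is copied from the function above):

ECHO='printf %s\n'
SED=sed
func_tr_sh ()
{
  case $1 in
  [0-9]* | *[!a-zA-Z0-9_]*)
    func_tr_sh_result=`$ECHO "$1" | $SED 's/^\([0-9]\)/_\1/; s/[^a-zA-Z0-9_]/_/g'`
    ;;
  * )
    func_tr_sh_result=$1
    ;;
  esac
}
func_tr_sh "2.4-rc/libfoo"
echo "$func_tr_sh_result"            # -> _2_4_rc_libfoo
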
+ 
+ 
+ # func_version
+ # Echo version message to standard output and exit.
+ func_version ()
+ {
++    $opt_debug
++
+     $SED -n '/(C)/!b go
+ 	:more
+ 	/\./!{
+ 	  N
+-	  s/\n# //
++	  s/\n# / /
+ 	  b more
+ 	}
+ 	:go
+@@ -685,7 +740,9 @@ func_version ()
+ # Echo short help message to standard output and exit.
+ func_usage ()
+ {
+-    $SED -n '/^# Usage:/,/^#  *-h/ {
++    $opt_debug
++
++    $SED -n '/^# Usage:/,/^#  *.*--help/ {
+         s/^# //
+ 	s/^# *$//
+ 	s/\$progname/'$progname'/
+@@ -701,7 +758,10 @@ func_usage ()
+ # unless 'noexit' is passed as argument.
+ func_help ()
+ {
++    $opt_debug
++
+     $SED -n '/^# Usage:/,/# Report bugs to/ {
++	:print
+         s/^# //
+ 	s/^# *$//
+ 	s*\$progname*'$progname'*
+@@ -714,7 +774,11 @@ func_help ()
+ 	s/\$automake_version/'"`(automake --version) 2>/dev/null |$SED 1q`"'/
+ 	s/\$autoconf_version/'"`(autoconf --version) 2>/dev/null |$SED 1q`"'/
+ 	p
+-     }' < "$progpath"
++	d
++     }
++     /^# .* home page:/b print
++     /^# General help using/b print
++     ' < "$progpath"
+     ret=$?
+     if test -z "$1"; then
+       exit $ret
+@@ -726,12 +790,39 @@ func_help ()
+ # exit_cmd.
+ func_missing_arg ()
+ {
+-    func_error "missing argument for $1"
++    $opt_debug
++
++    func_error "missing argument for $1."
+     exit_cmd=exit
+ }
+ 
+-exit_cmd=:
+ 
++# func_split_short_opt shortopt
++# Set func_split_short_opt_name and func_split_short_opt_arg shell
++# variables after splitting SHORTOPT after the 2nd character.
++func_split_short_opt ()
++{
++    my_sed_short_opt='1s/^\(..\).*$/\1/;q'
++    my_sed_short_rest='1s/^..\(.*\)$/\1/;q'
++
++    func_split_short_opt_name=`$ECHO "$1" | $SED "$my_sed_short_opt"`
++    func_split_short_opt_arg=`$ECHO "$1" | $SED "$my_sed_short_rest"`
++} # func_split_short_opt may be replaced by extended shell implementation
++
++
++# func_split_long_opt longopt
++# Set func_split_long_opt_name and func_split_long_opt_arg shell
++# variables after splitting LONGOPT at the `=' sign.
++func_split_long_opt ()
++{
++    my_sed_long_opt='1s/^\(--[^=]*\)=.*/\1/;q'
++    my_sed_long_arg='1s/^--[^=]*=//'
++
++    func_split_long_opt_name=`$ECHO "$1" | $SED "$my_sed_long_opt"`
++    func_split_long_opt_arg=`$ECHO "$1" | $SED "$my_sed_long_arg"`
++} # func_split_long_opt may be replaced by extended shell implementation
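Taken out of context (with ECHO and SED assumed to be printf and sed), the two splitters behave as follows; this is what later lets the option loop turn "--mode=link" into "--mode link" and "-vx" into "-v x":

ECHO='printf %s\n'
SED=sed
my_sed_long_opt='1s/^\(--[^=]*\)=.*/\1/;q'
my_sed_long_arg='1s/^--[^=]*=//'
$ECHO "--mode=link" | $SED "$my_sed_long_opt"   # -> --mode
$ECHO "--mode=link" | $SED "$my_sed_long_arg"   # -> link
my_sed_short_opt='1s/^\(..\).*$/\1/;q'
my_sed_short_rest='1s/^..\(.*\)$/\1/;q'
$ECHO "-vx" | $SED "$my_sed_short_opt"          # -> -v
$ECHO "-vx" | $SED "$my_sed_short_rest"         # -> x
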
++
++exit_cmd=:
+ 
+ 
+ 
+@@ -741,26 +832,64 @@ magic="%%%MAGIC variable%%%"
+ magic_exe="%%%MAGIC EXE variable%%%"
+ 
+ # Global variables.
+-# $mode is unset
+ nonopt=
+-execute_dlfiles=
+ preserve_args=
+ lo2o="s/\\.lo\$/.${objext}/"
+ o2lo="s/\\.${objext}\$/.lo/"
+ extracted_archives=
+ extracted_serial=0
+ 
+-opt_dry_run=false
+-opt_finish=:
+-opt_duplicate_deps=false
+-opt_silent=false
+-opt_debug=:
+-
+ # If this variable is set in any of the actions, the command in it
+ # will be execed at the end.  This prevents here-documents from being
+ # left over by shells.
+ exec_cmd=
+ 
++# func_append var value
++# Append VALUE to the end of shell variable VAR.
++func_append ()
++{
++    eval "${1}=\$${1}\${2}"
++} # func_append may be replaced by extended shell implementation
++
++# func_append_quoted var value
++# Quote VALUE and append to the end of shell variable VAR, separated
++# by a space.
++func_append_quoted ()
++{
++    func_quote_for_eval "${2}"
++    eval "${1}=\$${1}\\ \$func_quote_for_eval_result"
++} # func_append_quoted may be replaced by extended shell implementation
++
++
++# func_arith arithmetic-term...
++func_arith ()
++{
++    func_arith_result=`expr "${@}"`
++} # func_arith may be replaced by extended shell implementation
++
++
++# func_len string
++# STRING may not start with a hyphen.
++func_len ()
++{
++    func_len_result=`expr "${1}" : ".*" 2>/dev/null || echo $max_cmd_len`
++} # func_len may be replaced by extended shell implementation
++
++
++# func_lo2o object
++func_lo2o ()
++{
++    func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
++} # func_lo2o may be replaced by extended shell implementation
++
++
++# func_xform libobj-or-source
++func_xform ()
++{
++    func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
++} # func_xform may be replaced by extended shell implementation
++
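A few throwaway invocations of the fallbacks above (objext assumed to be "o", as for a typical ELF toolchain; lo2o copied from the globals earlier in this file):

ECHO='printf %s\n'
SED=sed
objext=o                                          # assumed object extension
lo2o="s/\\.lo\$/.${objext}/"
func_lo2o () { func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`; }
func_xform () { func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`; }
func_arith () { func_arith_result=`expr "${@}"`; }
func_lo2o "foo.lo";     echo "$func_lo2o_result"    # -> foo.o
func_xform "src/bar.c"; echo "$func_xform_result"   # -> src/bar.lo
func_arith 2 + 3;       echo "$func_arith_result"   # -> 5
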
++
+ # func_fatal_configuration arg...
+ # Echo program name prefixed message to standard error, followed by
+ # a configuration failure hint, and exit.
+@@ -850,130 +979,204 @@ func_enable_tag ()
+   esac
+ }
+ 
+-# Parse options once, thoroughly.  This comes as soon as possible in
+-# the script to make things like `libtool --version' happen quickly.
++# func_check_version_match
++# Ensure that we are using m4 macros, and libtool script from the same
++# release of libtool.
++func_check_version_match ()
+ {
++  if test "$package_revision" != "$macro_revision"; then
++    if test "$VERSION" != "$macro_version"; then
++      if test -z "$macro_version"; then
++        cat >&2 <<_LT_EOF
++$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
++$progname: definition of this LT_INIT comes from an older release.
++$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
++$progname: and run autoconf again.
++_LT_EOF
++      else
++        cat >&2 <<_LT_EOF
++$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
++$progname: definition of this LT_INIT comes from $PACKAGE $macro_version.
++$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
++$progname: and run autoconf again.
++_LT_EOF
++      fi
++    else
++      cat >&2 <<_LT_EOF
++$progname: Version mismatch error.  This is $PACKAGE $VERSION, revision $package_revision,
++$progname: but the definition of this LT_INIT comes from revision $macro_revision.
++$progname: You should recreate aclocal.m4 with macros from revision $package_revision
++$progname: of $PACKAGE $VERSION and run autoconf again.
++_LT_EOF
++    fi
+ 
+-  # Shorthand for --mode=foo, only valid as the first argument
+-  case $1 in
+-  clean|clea|cle|cl)
+-    shift; set dummy --mode clean ${1+"$@"}; shift
+-    ;;
+-  compile|compil|compi|comp|com|co|c)
+-    shift; set dummy --mode compile ${1+"$@"}; shift
+-    ;;
+-  execute|execut|execu|exec|exe|ex|e)
+-    shift; set dummy --mode execute ${1+"$@"}; shift
+-    ;;
+-  finish|finis|fini|fin|fi|f)
+-    shift; set dummy --mode finish ${1+"$@"}; shift
+-    ;;
+-  install|instal|insta|inst|ins|in|i)
+-    shift; set dummy --mode install ${1+"$@"}; shift
+-    ;;
+-  link|lin|li|l)
+-    shift; set dummy --mode link ${1+"$@"}; shift
+-    ;;
+-  uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u)
+-    shift; set dummy --mode uninstall ${1+"$@"}; shift
+-    ;;
+-  esac
++    exit $EXIT_MISMATCH
++  fi
++}
++
++
++# Shorthand for --mode=foo, only valid as the first argument
++case $1 in
++clean|clea|cle|cl)
++  shift; set dummy --mode clean ${1+"$@"}; shift
++  ;;
++compile|compil|compi|comp|com|co|c)
++  shift; set dummy --mode compile ${1+"$@"}; shift
++  ;;
++execute|execut|execu|exec|exe|ex|e)
++  shift; set dummy --mode execute ${1+"$@"}; shift
++  ;;
++finish|finis|fini|fin|fi|f)
++  shift; set dummy --mode finish ${1+"$@"}; shift
++  ;;
++install|instal|insta|inst|ins|in|i)
++  shift; set dummy --mode install ${1+"$@"}; shift
++  ;;
++link|lin|li|l)
++  shift; set dummy --mode link ${1+"$@"}; shift
++  ;;
++uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u)
++  shift; set dummy --mode uninstall ${1+"$@"}; shift
++  ;;
++esac
+ 
+-  # Parse non-mode specific arguments:
+-  while test "$#" -gt 0; do
++
++
++# Option defaults:
++opt_debug=:
++opt_dry_run=false
++opt_config=false
++opt_preserve_dup_deps=false
++opt_features=false
++opt_finish=false
++opt_help=false
++opt_help_all=false
++opt_silent=:
++opt_verbose=:
++opt_silent=false
++opt_verbose=false
++
++
++# Parse options once, thoroughly.  This comes as soon as possible in the
++# script to make things like `--version' happen as quickly as we can.
++{
++  # this just eases exit handling
++  while test $# -gt 0; do
+     opt="$1"
+     shift
+-
+     case $opt in
+-      --config)		func_config					;;
+-
+-      --debug)		preserve_args="$preserve_args $opt"
++      --debug|-x)	opt_debug='set -x'
+ 			func_echo "enabling shell trace mode"
+-			opt_debug='set -x'
+ 			$opt_debug
+ 			;;
+-
+-      -dlopen)		test "$#" -eq 0 && func_missing_arg "$opt" && break
+-			execute_dlfiles="$execute_dlfiles $1"
+-			shift
++      --dry-run|--dryrun|-n)
++			opt_dry_run=:
+ 			;;
+-
+-      --dry-run | -n)	opt_dry_run=:					;;
+-      --features)       func_features					;;
+-      --finish)		mode="finish"					;;
+-      --no-finish)	opt_finish=false				;;
+-
+-      --mode)		test "$#" -eq 0 && func_missing_arg "$opt" && break
+-			case $1 in
+-			  # Valid mode arguments:
+-			  clean)	;;
+-			  compile)	;;
+-			  execute)	;;
+-			  finish)	;;
+-			  install)	;;
+-			  link)		;;
+-			  relink)	;;
+-			  uninstall)	;;
+-
+-			  # Catch anything else as an error
+-			  *) func_error "invalid argument for $opt"
+-			     exit_cmd=exit
+-			     break
+-			     ;;
+-		        esac
+-
+-			mode="$1"
++      --config)
++			opt_config=:
++func_config
++			;;
++      --dlopen|-dlopen)
++			optarg="$1"
++			opt_dlopen="${opt_dlopen+$opt_dlopen
++}$optarg"
+ 			shift
+ 			;;
+-
+       --preserve-dup-deps)
+-			opt_duplicate_deps=:				;;
+-
+-      --quiet|--silent)	preserve_args="$preserve_args $opt"
+-			opt_silent=:
+-			opt_verbose=false
++			opt_preserve_dup_deps=:
+ 			;;
+-
+-      --no-quiet|--no-silent)
+-			preserve_args="$preserve_args $opt"
+-			opt_silent=false
++      --features)
++			opt_features=:
++func_features
+ 			;;
+-
+-      --verbose| -v)	preserve_args="$preserve_args $opt"
++      --finish)
++			opt_finish=:
++set dummy --mode finish ${1+"$@"}; shift
++			;;
++      --help)
++			opt_help=:
++			;;
++      --help-all)
++			opt_help_all=:
++opt_help=': help-all'
++			;;
++      --mode)
++			test $# = 0 && func_missing_arg $opt && break
++			optarg="$1"
++			opt_mode="$optarg"
++case $optarg in
++  # Valid mode arguments:
++  clean|compile|execute|finish|install|link|relink|uninstall) ;;
++
++  # Catch anything else as an error
++  *) func_error "invalid argument for $opt"
++     exit_cmd=exit
++     break
++     ;;
++esac
++			shift
++			;;
++      --no-silent|--no-quiet)
+ 			opt_silent=false
+-			opt_verbose=:
++func_append preserve_args " $opt"
+ 			;;
+-
+-      --no-verbose)	preserve_args="$preserve_args $opt"
++      --no-verbose)
+ 			opt_verbose=false
++func_append preserve_args " $opt"
+ 			;;
+-
+-      --tag)		test "$#" -eq 0 && func_missing_arg "$opt" && break
+-			preserve_args="$preserve_args $opt $1"
+-			func_enable_tag "$1"	# tagname is set here
++      --silent|--quiet)
++			opt_silent=:
++func_append preserve_args " $opt"
++        opt_verbose=false
++			;;
++      --verbose|-v)
++			opt_verbose=:
++func_append preserve_args " $opt"
++opt_silent=false
++			;;
++      --tag)
++			test $# = 0 && func_missing_arg $opt && break
++			optarg="$1"
++			opt_tag="$optarg"
++func_append preserve_args " $opt $optarg"
++func_enable_tag "$optarg"
+ 			shift
+ 			;;
+ 
++      -\?|-h)		func_usage				;;
++      --help)		func_help				;;
++      --version)	func_version				;;
++
+       # Separate optargs to long options:
+-      -dlopen=*|--mode=*|--tag=*)
+-			func_opt_split "$opt"
+-			set dummy "$func_opt_split_opt" "$func_opt_split_arg" ${1+"$@"}
++      --*=*)
++			func_split_long_opt "$opt"
++			set dummy "$func_split_long_opt_name" "$func_split_long_opt_arg" ${1+"$@"}
+ 			shift
+ 			;;
+ 
+-      -\?|-h)		func_usage					;;
+-      --help)		opt_help=:					;;
+-      --help-all)	opt_help=': help-all'				;;
+-      --version)	func_version					;;
+-
+-      -*)		func_fatal_help "unrecognized option \`$opt'"	;;
+-
+-      *)		nonopt="$opt"
+-			break
++      # Separate non-argument short options:
++      -\?*|-h*|-n*|-v*)
++			func_split_short_opt "$opt"
++			set dummy "$func_split_short_opt_name" "-$func_split_short_opt_arg" ${1+"$@"}
++			shift
+ 			;;
++
++      --)		break					;;
++      -*)		func_fatal_help "unrecognized option \`$opt'" ;;
++      *)		set dummy "$opt" ${1+"$@"};	shift; break  ;;
+     esac
+   done
+ 
++  # Validate options:
++
++  # save first non-option argument
++  if test "$#" -gt 0; then
++    nonopt="$opt"
++    shift
++  fi
++
++  # preserve --debug
++  test "$opt_debug" = : || func_append preserve_args " --debug"
+ 
+   case $host in
+     *cygwin* | *mingw* | *pw32* | *cegcc* | *solaris2* )
+@@ -981,82 +1184,44 @@ func_enable_tag ()
+       opt_duplicate_compiler_generated_deps=:
+       ;;
+     *)
+-      opt_duplicate_compiler_generated_deps=$opt_duplicate_deps
++      opt_duplicate_compiler_generated_deps=$opt_preserve_dup_deps
+       ;;
+   esac
+ 
+-  # Having warned about all mis-specified options, bail out if
+-  # anything was wrong.
+-  $exit_cmd $EXIT_FAILURE
+-}
++  $opt_help || {
++    # Sanity checks first:
++    func_check_version_match
+ 
+-# func_check_version_match
+-# Ensure that we are using m4 macros, and libtool script from the same
+-# release of libtool.
+-func_check_version_match ()
+-{
+-  if test "$package_revision" != "$macro_revision"; then
+-    if test "$VERSION" != "$macro_version"; then
+-      if test -z "$macro_version"; then
+-        cat >&2 <<_LT_EOF
+-$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
+-$progname: definition of this LT_INIT comes from an older release.
+-$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
+-$progname: and run autoconf again.
+-_LT_EOF
+-      else
+-        cat >&2 <<_LT_EOF
+-$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
+-$progname: definition of this LT_INIT comes from $PACKAGE $macro_version.
+-$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
+-$progname: and run autoconf again.
+-_LT_EOF
+-      fi
+-    else
+-      cat >&2 <<_LT_EOF
+-$progname: Version mismatch error.  This is $PACKAGE $VERSION, revision $package_revision,
+-$progname: but the definition of this LT_INIT comes from revision $macro_revision.
+-$progname: You should recreate aclocal.m4 with macros from revision $package_revision
+-$progname: of $PACKAGE $VERSION and run autoconf again.
+-_LT_EOF
++    if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then
++      func_fatal_configuration "not configured to build any kind of library"
+     fi
+ 
+-    exit $EXIT_MISMATCH
+-  fi
+-}
+-
++    # Darwin sucks
++    eval std_shrext=\"$shrext_cmds\"
+ 
+-## ----------- ##
+-##    Main.    ##
+-## ----------- ##
+-
+-$opt_help || {
+-  # Sanity checks first:
+-  func_check_version_match
+-
+-  if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then
+-    func_fatal_configuration "not configured to build any kind of library"
+-  fi
++    # Only execute mode is allowed to have -dlopen flags.
++    if test -n "$opt_dlopen" && test "$opt_mode" != execute; then
++      func_error "unrecognized option \`-dlopen'"
++      $ECHO "$help" 1>&2
++      exit $EXIT_FAILURE
++    fi
+ 
+-  test -z "$mode" && func_fatal_error "error: you must specify a MODE."
++    # Change the help message to a mode-specific one.
++    generic_help="$help"
++    help="Try \`$progname --help --mode=$opt_mode' for more information."
++  }
+ 
+ 
+-  # Darwin sucks
+-  eval "std_shrext=\"$shrext_cmds\""
++  # Bail if the options were screwed
++  $exit_cmd $EXIT_FAILURE
++}
+ 
+ 
+-  # Only execute mode is allowed to have -dlopen flags.
+-  if test -n "$execute_dlfiles" && test "$mode" != execute; then
+-    func_error "unrecognized option \`-dlopen'"
+-    $ECHO "$help" 1>&2
+-    exit $EXIT_FAILURE
+-  fi
+ 
+-  # Change the help message to a mode-specific one.
+-  generic_help="$help"
+-  help="Try \`$progname --help --mode=$mode' for more information."
+-}
+ 
++## ----------- ##
++##    Main.    ##
++## ----------- ##
+ 
+ # func_lalib_p file
+ # True iff FILE is a libtool `.la' library or `.lo' object file.
+@@ -1121,12 +1286,9 @@ func_ltwrapper_executable_p ()
+ # temporary ltwrapper_script.
+ func_ltwrapper_scriptname ()
+ {
+-    func_ltwrapper_scriptname_result=""
+-    if func_ltwrapper_executable_p "$1"; then
+-	func_dirname_and_basename "$1" "" "."
+-	func_stripname '' '.exe' "$func_basename_result"
+-	func_ltwrapper_scriptname_result="$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper"
+-    fi
++    func_dirname_and_basename "$1" "" "."
++    func_stripname '' '.exe' "$func_basename_result"
++    func_ltwrapper_scriptname_result="$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper"
+ }
+ 
+ # func_ltwrapper_p file
+@@ -1149,7 +1311,7 @@ func_execute_cmds ()
+     save_ifs=$IFS; IFS='~'
+     for cmd in $1; do
+       IFS=$save_ifs
+-      eval "cmd=\"$cmd\""
++      eval cmd=\"$cmd\"
+       func_show_eval "$cmd" "${2-:}"
+     done
+     IFS=$save_ifs
+@@ -1172,6 +1334,37 @@ func_source ()
+ }
+ 
+ 
++# func_resolve_sysroot PATH
++# Replace a leading = in PATH with a sysroot.  Store the result into
++# func_resolve_sysroot_result
++func_resolve_sysroot ()
++{
++  func_resolve_sysroot_result=$1
++  case $func_resolve_sysroot_result in
++  =*)
++    func_stripname '=' '' "$func_resolve_sysroot_result"
++    func_resolve_sysroot_result=$lt_sysroot$func_stripname_result
++    ;;
++  esac
++}
++
++# func_replace_sysroot PATH
++# If PATH begins with the sysroot, replace it with = and
++# store the result into func_replace_sysroot_result.
++func_replace_sysroot ()
++{
++  case "$lt_sysroot:$1" in
++  ?*:"$lt_sysroot"*)
++    func_stripname "$lt_sysroot" '' "$1"
++    func_replace_sysroot_result="=$func_stripname_result"
++    ;;
++  *)
++    # Including no sysroot.
++    func_replace_sysroot_result=$1
++    ;;
++  esac
++}
++
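A standalone sketch of the sysroot handling (sysroot path invented; func_stripname supplied here as the extended-shell variant installed earlier in this patch):

lt_sysroot=/opt/sdk/sysroot                       # assumed sysroot
func_stripname ()
{
  func_stripname_result=${3}
  func_stripname_result=${func_stripname_result#"${1}"}
  func_stripname_result=${func_stripname_result%"${2}"}
}
func_resolve_sysroot ()
{
  func_resolve_sysroot_result=$1
  case $func_resolve_sysroot_result in
  =*)
    func_stripname '=' '' "$func_resolve_sysroot_result"
    func_resolve_sysroot_result=$lt_sysroot$func_stripname_result
    ;;
  esac
}
func_resolve_sysroot "=/usr/lib/libz.la"
echo "$func_resolve_sysroot_result"   # -> /opt/sdk/sysroot/usr/lib/libz.la
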
+ # func_infer_tag arg
+ # Infer tagged configuration to use if any are available and
+ # if one wasn't chosen via the "--tag" command line option.
+@@ -1184,8 +1377,7 @@ func_infer_tag ()
+     if test -n "$available_tags" && test -z "$tagname"; then
+       CC_quoted=
+       for arg in $CC; do
+-        func_quote_for_eval "$arg"
+-	CC_quoted="$CC_quoted $func_quote_for_eval_result"
++	func_append_quoted CC_quoted "$arg"
+       done
+       CC_expanded=`func_echo_all $CC`
+       CC_quoted_expanded=`func_echo_all $CC_quoted`
+@@ -1204,8 +1396,7 @@ func_infer_tag ()
+ 	    CC_quoted=
+ 	    for arg in $CC; do
+ 	      # Double-quote args containing other shell metacharacters.
+-	      func_quote_for_eval "$arg"
+-	      CC_quoted="$CC_quoted $func_quote_for_eval_result"
++	      func_append_quoted CC_quoted "$arg"
+ 	    done
+ 	    CC_expanded=`func_echo_all $CC`
+ 	    CC_quoted_expanded=`func_echo_all $CC_quoted`
+@@ -1274,6 +1465,486 @@ EOF
+     }
+ }
+ 
++
++##################################################
++# FILE NAME AND PATH CONVERSION HELPER FUNCTIONS #
++##################################################
++
++# func_convert_core_file_wine_to_w32 ARG
++# Helper function used by file name conversion functions when $build is *nix,
++# and $host is mingw, cygwin, or some other w32 environment. Relies on a
++# correctly configured wine environment available, with the winepath program
++# in $build's $PATH.
++#
++# ARG is the $build file name to be converted to w32 format.
++# Result is available in $func_convert_core_file_wine_to_w32_result, and will
++# be empty on error (or when ARG is empty)
++func_convert_core_file_wine_to_w32 ()
++{
++  $opt_debug
++  func_convert_core_file_wine_to_w32_result="$1"
++  if test -n "$1"; then
++    # Unfortunately, winepath does not exit with a non-zero error code, so we
++    # are forced to check the contents of stdout. On the other hand, if the
++    # command is not found, the shell will set an exit code of 127 and print
++    # *an error message* to stdout. So we must check for both error code of
++    # zero AND non-empty stdout, which explains the odd construction:
++    func_convert_core_file_wine_to_w32_tmp=`winepath -w "$1" 2>/dev/null`
++    if test "$?" -eq 0 && test -n "${func_convert_core_file_wine_to_w32_tmp}"; then
++      func_convert_core_file_wine_to_w32_result=`$ECHO "$func_convert_core_file_wine_to_w32_tmp" |
++        $SED -e "$lt_sed_naive_backslashify"`
++    else
++      func_convert_core_file_wine_to_w32_result=
++    fi
++  fi
++}
++# end: func_convert_core_file_wine_to_w32
++
++
++# func_convert_core_path_wine_to_w32 ARG
++# Helper function used by path conversion functions when $build is *nix, and
++# $host is mingw, cygwin, or some other w32 environment. Relies on a correctly
++# configured wine environment available, with the winepath program in $build's
++# $PATH. Assumes ARG has no leading or trailing path separator characters.
++#
++# ARG is path to be converted from $build format to win32.
++# Result is available in $func_convert_core_path_wine_to_w32_result.
++# Unconvertible file (directory) names in ARG are skipped; if no directory names
++# are convertible, then the result may be empty.
++func_convert_core_path_wine_to_w32 ()
++{
++  $opt_debug
++  # unfortunately, winepath doesn't convert paths, only file names
++  func_convert_core_path_wine_to_w32_result=""
++  if test -n "$1"; then
++    oldIFS=$IFS
++    IFS=:
++    for func_convert_core_path_wine_to_w32_f in $1; do
++      IFS=$oldIFS
++      func_convert_core_file_wine_to_w32 "$func_convert_core_path_wine_to_w32_f"
++      if test -n "$func_convert_core_file_wine_to_w32_result" ; then
++        if test -z "$func_convert_core_path_wine_to_w32_result"; then
++          func_convert_core_path_wine_to_w32_result="$func_convert_core_file_wine_to_w32_result"
++        else
++          func_append func_convert_core_path_wine_to_w32_result ";$func_convert_core_file_wine_to_w32_result"
++        fi
++      fi
++    done
++    IFS=$oldIFS
++  fi
++}
++# end: func_convert_core_path_wine_to_w32
++
++
++# func_cygpath ARGS...
++# Wrapper around calling the cygpath program via LT_CYGPATH. This is used
++# when (1) $build is *nix and Cygwin is hosted via a wine environment; or (2)
++# $build is MSYS and $host is Cygwin, or (3) $build is Cygwin. In case (1) or
++# (2), returns the Cygwin file name or path in func_cygpath_result (input
++# file name or path is assumed to be in w32 format, as previously converted
++# from $build's *nix or MSYS format). In case (3), returns the w32 file name
++# or path in func_cygpath_result (input file name or path is assumed to be in
++# Cygwin format). Returns an empty string on error.
++#
++# ARGS are passed to cygpath, with the last one being the file name or path to
++# be converted.
++#
++# Specify the absolute *nix (or w32) name to cygpath in the LT_CYGPATH
++# environment variable; do not put it in $PATH.
++func_cygpath ()
++{
++  $opt_debug
++  if test -n "$LT_CYGPATH" && test -f "$LT_CYGPATH"; then
++    func_cygpath_result=`$LT_CYGPATH "$@" 2>/dev/null`
++    if test "$?" -ne 0; then
++      # on failure, ensure result is empty
++      func_cygpath_result=
++    fi
++  else
++    func_cygpath_result=
++    func_error "LT_CYGPATH is empty or specifies non-existent file: \`$LT_CYGPATH'"
++  fi
++}
++#end: func_cygpath
++
++
++# func_convert_core_msys_to_w32 ARG
++# Convert file name or path ARG from MSYS format to w32 format.  Return
++# result in func_convert_core_msys_to_w32_result.
++func_convert_core_msys_to_w32 ()
++{
++  $opt_debug
++  # awkward: cmd appends spaces to result
++  func_convert_core_msys_to_w32_result=`( cmd //c echo "$1" ) 2>/dev/null |
++    $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
++}
++#end: func_convert_core_msys_to_w32
++
++
++# func_convert_file_check ARG1 ARG2
++# Verify that ARG1 (a file name in $build format) was converted to $host
++# format in ARG2. Otherwise, emit an error message, but continue (resetting
++# func_to_host_file_result to ARG1).
++func_convert_file_check ()
++{
++  $opt_debug
++  if test -z "$2" && test -n "$1" ; then
++    func_error "Could not determine host file name corresponding to"
++    func_error "  \`$1'"
++    func_error "Continuing, but uninstalled executables may not work."
++    # Fallback:
++    func_to_host_file_result="$1"
++  fi
++}
++# end func_convert_file_check
++
++
++# func_convert_path_check FROM_PATHSEP TO_PATHSEP FROM_PATH TO_PATH
++# Verify that FROM_PATH (a path in $build format) was converted to $host
++# format in TO_PATH. Otherwise, emit an error message, but continue, resetting
++# func_to_host_file_result to a simplistic fallback value (see below).
++func_convert_path_check ()
++{
++  $opt_debug
++  if test -z "$4" && test -n "$3"; then
++    func_error "Could not determine the host path corresponding to"
++    func_error "  \`$3'"
++    func_error "Continuing, but uninstalled executables may not work."
++    # Fallback.  This is a deliberately simplistic "conversion" and
++    # should not be "improved".  See libtool.info.
++    if test "x$1" != "x$2"; then
++      lt_replace_pathsep_chars="s|$1|$2|g"
++      func_to_host_path_result=`echo "$3" |
++        $SED -e "$lt_replace_pathsep_chars"`
++    else
++      func_to_host_path_result="$3"
++    fi
++  fi
++}
++# end func_convert_path_check
++
++
++# func_convert_path_front_back_pathsep FRONTPAT BACKPAT REPL ORIG
++# Modifies func_to_host_path_result by prepending REPL if ORIG matches FRONTPAT
++# and appending REPL if ORIG matches BACKPAT.
++func_convert_path_front_back_pathsep ()
++{
++  $opt_debug
++  case $4 in
++  $1 ) func_to_host_path_result="$3$func_to_host_path_result"
++    ;;
++  esac
++  case $4 in
++  $2 ) func_append func_to_host_path_result "$3"
++    ;;
++  esac
++}
++# end func_convert_path_front_back_pathsep
++
++
++##################################################
++# $build to $host FILE NAME CONVERSION FUNCTIONS #
++##################################################
++# invoked via `$to_host_file_cmd ARG'
++#
++# In each case, ARG is the path to be converted from $build to $host format.
++# Result will be available in $func_to_host_file_result.
++
++
++# func_to_host_file ARG
++# Converts the file name ARG from $build format to $host format. Return result
++# in func_to_host_file_result.
++func_to_host_file ()
++{
++  $opt_debug
++  $to_host_file_cmd "$1"
++}
++# end func_to_host_file
++
++
++# func_to_tool_file ARG LAZY
++# converts the file name ARG from $build format to toolchain format. Return
++# result in func_to_tool_file_result.  If the conversion in use is listed
++# in (the comma separated) LAZY, no conversion takes place.
++func_to_tool_file ()
++{
++  $opt_debug
++  case ,$2, in
++    *,"$to_tool_file_cmd",*)
++      func_to_tool_file_result=$1
++      ;;
++    *)
++      $to_tool_file_cmd "$1"
++      func_to_tool_file_result=$func_to_host_file_result
++      ;;
++  esac
++}
++# end func_to_tool_file
++
++
++# func_convert_file_noop ARG
++# Copy ARG to func_to_host_file_result.
++func_convert_file_noop ()
++{
++  func_to_host_file_result="$1"
++}
++# end func_convert_file_noop
++
++
++# func_convert_file_msys_to_w32 ARG
++# Convert file name ARG from (mingw) MSYS to (mingw) w32 format; automatic
++# conversion to w32 is not available inside the cwrapper.  Returns result in
++# func_to_host_file_result.
++func_convert_file_msys_to_w32 ()
++{
++  $opt_debug
++  func_to_host_file_result="$1"
++  if test -n "$1"; then
++    func_convert_core_msys_to_w32 "$1"
++    func_to_host_file_result="$func_convert_core_msys_to_w32_result"
++  fi
++  func_convert_file_check "$1" "$func_to_host_file_result"
++}
++# end func_convert_file_msys_to_w32
++
++
++# func_convert_file_cygwin_to_w32 ARG
++# Convert file name ARG from Cygwin to w32 format.  Returns result in
++# func_to_host_file_result.
++func_convert_file_cygwin_to_w32 ()
++{
++  $opt_debug
++  func_to_host_file_result="$1"
++  if test -n "$1"; then
++    # because $build is cygwin, we call "the" cygpath in $PATH; no need to use
++    # LT_CYGPATH in this case.
++    func_to_host_file_result=`cygpath -m "$1"`
++  fi
++  func_convert_file_check "$1" "$func_to_host_file_result"
++}
++# end func_convert_file_cygwin_to_w32
++
++
++# func_convert_file_nix_to_w32 ARG
++# Convert file name ARG from *nix to w32 format.  Requires a wine environment
++# and a working winepath. Returns result in func_to_host_file_result.
++func_convert_file_nix_to_w32 ()
++{
++  $opt_debug
++  func_to_host_file_result="$1"
++  if test -n "$1"; then
++    func_convert_core_file_wine_to_w32 "$1"
++    func_to_host_file_result="$func_convert_core_file_wine_to_w32_result"
++  fi
++  func_convert_file_check "$1" "$func_to_host_file_result"
++}
++# end func_convert_file_nix_to_w32
++
++
++# func_convert_file_msys_to_cygwin ARG
++# Convert file name ARG from MSYS to Cygwin format.  Requires LT_CYGPATH set.
++# Returns result in func_to_host_file_result.
++func_convert_file_msys_to_cygwin ()
++{
++  $opt_debug
++  func_to_host_file_result="$1"
++  if test -n "$1"; then
++    func_convert_core_msys_to_w32 "$1"
++    func_cygpath -u "$func_convert_core_msys_to_w32_result"
++    func_to_host_file_result="$func_cygpath_result"
++  fi
++  func_convert_file_check "$1" "$func_to_host_file_result"
++}
++# end func_convert_file_msys_to_cygwin
++
++
++# func_convert_file_nix_to_cygwin ARG
++# Convert file name ARG from *nix to Cygwin format.  Requires Cygwin installed
++# in a wine environment, working winepath, and LT_CYGPATH set.  Returns result
++# in func_to_host_file_result.
++func_convert_file_nix_to_cygwin ()
++{
++  $opt_debug
++  func_to_host_file_result="$1"
++  if test -n "$1"; then
++    # convert from *nix to w32, then use cygpath to convert from w32 to cygwin.
++    func_convert_core_file_wine_to_w32 "$1"
++    func_cygpath -u "$func_convert_core_file_wine_to_w32_result"
++    func_to_host_file_result="$func_cygpath_result"
++  fi
++  func_convert_file_check "$1" "$func_to_host_file_result"
++}
++# end func_convert_file_nix_to_cygwin
++
++
++#############################################
++# $build to $host PATH CONVERSION FUNCTIONS #
++#############################################
++# invoked via `$to_host_path_cmd ARG'
++#
++# In each case, ARG is the path to be converted from $build to $host format.
++# The result will be available in $func_to_host_path_result.
++#
++# Path separators are also converted from $build format to $host format.  If
++# ARG begins or ends with a path separator character, it is preserved (but
++# converted to $host format) on output.
++#
++# All path conversion functions are named using the following convention:
++#   file name conversion function    : func_convert_file_X_to_Y ()
++#   path conversion function         : func_convert_path_X_to_Y ()
++# where, for any given $build/$host combination the 'X_to_Y' value is the
++# same.  If conversion functions are added for new $build/$host combinations,
++# the two new functions must follow this pattern, or func_init_to_host_path_cmd
++# will break.
++
++
++# func_init_to_host_path_cmd
++# Ensures that function "pointer" variable $to_host_path_cmd is set to the
++# appropriate value, based on the value of $to_host_file_cmd.
++to_host_path_cmd=
++func_init_to_host_path_cmd ()
++{
++  $opt_debug
++  if test -z "$to_host_path_cmd"; then
++    func_stripname 'func_convert_file_' '' "$to_host_file_cmd"
++    to_host_path_cmd="func_convert_path_${func_stripname_result}"
++  fi
++}
++
++
++# func_to_host_path ARG
++# Converts the path ARG from $build format to $host format. Return result
++# in func_to_host_path_result.
++func_to_host_path ()
++{
++  $opt_debug
++  func_init_to_host_path_cmd
++  $to_host_path_cmd "$1"
++}
++# end func_to_host_path
++
++
++# func_convert_path_noop ARG
++# Copy ARG to func_to_host_path_result.
++func_convert_path_noop ()
++{
++  func_to_host_path_result="$1"
++}
++# end func_convert_path_noop
++
++
++# func_convert_path_msys_to_w32 ARG
++# Convert path ARG from (mingw) MSYS to (mingw) w32 format; automatic
++# conversion to w32 is not available inside the cwrapper.  Returns result in
++# func_to_host_path_result.
++func_convert_path_msys_to_w32 ()
++{
++  $opt_debug
++  func_to_host_path_result="$1"
++  if test -n "$1"; then
++    # Remove leading and trailing path separator characters from ARG.  MSYS
++    # behavior is inconsistent here; cygpath turns them into '.;' and ';.';
++    # and winepath ignores them completely.
++    func_stripname : : "$1"
++    func_to_host_path_tmp1=$func_stripname_result
++    func_convert_core_msys_to_w32 "$func_to_host_path_tmp1"
++    func_to_host_path_result="$func_convert_core_msys_to_w32_result"
++    func_convert_path_check : ";" \
++      "$func_to_host_path_tmp1" "$func_to_host_path_result"
++    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
++  fi
++}
++# end func_convert_path_msys_to_w32
++
++
++# func_convert_path_cygwin_to_w32 ARG
++# Convert path ARG from Cygwin to w32 format.  Returns result in
++# func_to_host_path_result.
++func_convert_path_cygwin_to_w32 ()
++{
++  $opt_debug
++  func_to_host_path_result="$1"
++  if test -n "$1"; then
++    # See func_convert_path_msys_to_w32:
++    func_stripname : : "$1"
++    func_to_host_path_tmp1=$func_stripname_result
++    func_to_host_path_result=`cygpath -m -p "$func_to_host_path_tmp1"`
++    func_convert_path_check : ";" \
++      "$func_to_host_path_tmp1" "$func_to_host_path_result"
++    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
++  fi
++}
++# end func_convert_path_cygwin_to_w32
++
++
++# func_convert_path_nix_to_w32 ARG
++# Convert path ARG from *nix to w32 format.  Requires a wine environment and
++# a working winepath.  Returns result in func_to_host_path_result.
++func_convert_path_nix_to_w32 ()
++{
++  $opt_debug
++  func_to_host_path_result="$1"
++  if test -n "$1"; then
++    # See func_convert_path_msys_to_w32:
++    func_stripname : : "$1"
++    func_to_host_path_tmp1=$func_stripname_result
++    func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1"
++    func_to_host_path_result="$func_convert_core_path_wine_to_w32_result"
++    func_convert_path_check : ";" \
++      "$func_to_host_path_tmp1" "$func_to_host_path_result"
++    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
++  fi
++}
++# end func_convert_path_nix_to_w32
++
++
++# func_convert_path_msys_to_cygwin ARG
++# Convert path ARG from MSYS to Cygwin format.  Requires LT_CYGPATH set.
++# Returns result in func_to_host_path_result.
++func_convert_path_msys_to_cygwin ()
++{
++  $opt_debug
++  func_to_host_path_result="$1"
++  if test -n "$1"; then
++    # See func_convert_path_msys_to_w32:
++    func_stripname : : "$1"
++    func_to_host_path_tmp1=$func_stripname_result
++    func_convert_core_msys_to_w32 "$func_to_host_path_tmp1"
++    func_cygpath -u -p "$func_convert_core_msys_to_w32_result"
++    func_to_host_path_result="$func_cygpath_result"
++    func_convert_path_check : : \
++      "$func_to_host_path_tmp1" "$func_to_host_path_result"
++    func_convert_path_front_back_pathsep ":*" "*:" : "$1"
++  fi
++}
++# end func_convert_path_msys_to_cygwin
++
++
++# func_convert_path_nix_to_cygwin ARG
++# Convert path ARG from *nix to Cygwin format.  Requires Cygwin installed in
++# a wine environment, working winepath, and LT_CYGPATH set.  Returns result in
++# func_to_host_path_result.
++func_convert_path_nix_to_cygwin ()
++{
++  $opt_debug
++  func_to_host_path_result="$1"
++  if test -n "$1"; then
++    # Remove leading and trailing path separator characters from
++    # ARG. msys behavior is inconsistent here, cygpath turns them
++    # into '.;' and ';.', and winepath ignores them completely.
++    func_stripname : : "$1"
++    func_to_host_path_tmp1=$func_stripname_result
++    func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1"
++    func_cygpath -u -p "$func_convert_core_path_wine_to_w32_result"
++    func_to_host_path_result="$func_cygpath_result"
++    func_convert_path_check : : \
++      "$func_to_host_path_tmp1" "$func_to_host_path_result"
++    func_convert_path_front_back_pathsep ":*" "*:" : "$1"
++  fi
++}
++# end func_convert_path_nix_to_cygwin
++
++
+ # func_mode_compile arg...
+ func_mode_compile ()
+ {
+@@ -1314,12 +1985,12 @@ func_mode_compile ()
+ 	  ;;
+ 
+ 	-pie | -fpie | -fPIE)
+-          pie_flag="$pie_flag $arg"
++          func_append pie_flag " $arg"
+ 	  continue
+ 	  ;;
+ 
+ 	-shared | -static | -prefer-pic | -prefer-non-pic)
+-	  later="$later $arg"
++	  func_append later " $arg"
+ 	  continue
+ 	  ;;
+ 
+@@ -1340,15 +2011,14 @@ func_mode_compile ()
+ 	  save_ifs="$IFS"; IFS=','
+ 	  for arg in $args; do
+ 	    IFS="$save_ifs"
+-	    func_quote_for_eval "$arg"
+-	    lastarg="$lastarg $func_quote_for_eval_result"
++	    func_append_quoted lastarg "$arg"
+ 	  done
+ 	  IFS="$save_ifs"
+ 	  func_stripname ' ' '' "$lastarg"
+ 	  lastarg=$func_stripname_result
+ 
+ 	  # Add the arguments to base_compile.
+-	  base_compile="$base_compile $lastarg"
++	  func_append base_compile " $lastarg"
+ 	  continue
+ 	  ;;
+ 
+@@ -1364,8 +2034,7 @@ func_mode_compile ()
+       esac    #  case $arg_mode
+ 
+       # Aesthetically quote the previous argument.
+-      func_quote_for_eval "$lastarg"
+-      base_compile="$base_compile $func_quote_for_eval_result"
++      func_append_quoted base_compile "$lastarg"
+     done # for arg
+ 
+     case $arg_mode in
+@@ -1496,17 +2165,16 @@ compiler."
+ 	$opt_dry_run || $RM $removelist
+ 	exit $EXIT_FAILURE
+       fi
+-      removelist="$removelist $output_obj"
++      func_append removelist " $output_obj"
+       $ECHO "$srcfile" > "$lockfile"
+     fi
+ 
+     $opt_dry_run || $RM $removelist
+-    removelist="$removelist $lockfile"
++    func_append removelist " $lockfile"
+     trap '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' 1 2 15
+ 
+-    if test -n "$fix_srcfile_path"; then
+-      eval "srcfile=\"$fix_srcfile_path\""
+-    fi
++    func_to_tool_file "$srcfile" func_convert_file_msys_to_w32
++    srcfile=$func_to_tool_file_result
+     func_quote_for_eval "$srcfile"
+     qsrcfile=$func_quote_for_eval_result
+ 
+@@ -1526,7 +2194,7 @@ compiler."
+ 
+       if test -z "$output_obj"; then
+ 	# Place PIC objects in $objdir
+-	command="$command -o $lobj"
++	func_append command " -o $lobj"
+       fi
+ 
+       func_show_eval_locale "$command"	\
+@@ -1573,11 +2241,11 @@ compiler."
+ 	command="$base_compile $qsrcfile $pic_flag"
+       fi
+       if test "$compiler_c_o" = yes; then
+-	command="$command -o $obj"
++	func_append command " -o $obj"
+       fi
+ 
+       # Suppress compiler output if we already did a PIC compilation.
+-      command="$command$suppress_output"
++      func_append command "$suppress_output"
+       func_show_eval_locale "$command" \
+         '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE'
+ 
+@@ -1622,13 +2290,13 @@ compiler."
+ }
+ 
+ $opt_help || {
+-  test "$mode" = compile && func_mode_compile ${1+"$@"}
++  test "$opt_mode" = compile && func_mode_compile ${1+"$@"}
+ }
+ 
+ func_mode_help ()
+ {
+     # We need to display help for each of the modes.
+-    case $mode in
++    case $opt_mode in
+       "")
+         # Generic help is extracted from the usage comments
+         # at the start of this file.
+@@ -1659,8 +2327,8 @@ This mode accepts the following additional options:
+ 
+   -o OUTPUT-FILE    set the output file name to OUTPUT-FILE
+   -no-suppress      do not suppress compiler output for multiple passes
+-  -prefer-pic       try to building PIC objects only
+-  -prefer-non-pic   try to building non-PIC objects only
++  -prefer-pic       try to build PIC objects only
++  -prefer-non-pic   try to build non-PIC objects only
+   -shared           do not build a \`.o' file suitable for static linking
+   -static           only build a \`.o' file suitable for static linking
+   -Wc,FLAG          pass FLAG directly to the compiler
+@@ -1804,7 +2472,7 @@ Otherwise, only FILE itself is deleted using RM."
+         ;;
+ 
+       *)
+-        func_fatal_help "invalid operation mode \`$mode'"
++        func_fatal_help "invalid operation mode \`$opt_mode'"
+         ;;
+     esac
+ 
+@@ -1819,13 +2487,13 @@ if $opt_help; then
+   else
+     {
+       func_help noexit
+-      for mode in compile link execute install finish uninstall clean; do
++      for opt_mode in compile link execute install finish uninstall clean; do
+ 	func_mode_help
+       done
+     } | sed -n '1p; 2,$s/^Usage:/  or: /p'
+     {
+       func_help noexit
+-      for mode in compile link execute install finish uninstall clean; do
++      for opt_mode in compile link execute install finish uninstall clean; do
+ 	echo
+ 	func_mode_help
+       done
+@@ -1854,13 +2522,16 @@ func_mode_execute ()
+       func_fatal_help "you must specify a COMMAND"
+ 
+     # Handle -dlopen flags immediately.
+-    for file in $execute_dlfiles; do
++    for file in $opt_dlopen; do
+       test -f "$file" \
+ 	|| func_fatal_help "\`$file' is not a file"
+ 
+       dir=
+       case $file in
+       *.la)
++	func_resolve_sysroot "$file"
++	file=$func_resolve_sysroot_result
++
+ 	# Check to see that this really is a libtool archive.
+ 	func_lalib_unsafe_p "$file" \
+ 	  || func_fatal_help "\`$lib' is not a valid libtool archive"
+@@ -1882,7 +2553,7 @@ func_mode_execute ()
+ 	dir="$func_dirname_result"
+ 
+ 	if test -f "$dir/$objdir/$dlname"; then
+-	  dir="$dir/$objdir"
++	  func_append dir "/$objdir"
+ 	else
+ 	  if test ! -f "$dir/$dlname"; then
+ 	    func_fatal_error "cannot find \`$dlname' in \`$dir' or \`$dir/$objdir'"
+@@ -1907,10 +2578,10 @@ func_mode_execute ()
+       test -n "$absdir" && dir="$absdir"
+ 
+       # Now add the directory to shlibpath_var.
+-      if eval test -z \"\$$shlibpath_var\"; then
+-	eval $shlibpath_var=\$dir
++      if eval "test -z \"\$$shlibpath_var\""; then
++	eval "$shlibpath_var=\"\$dir\""
+       else
+-	eval $shlibpath_var=\$dir:\$$shlibpath_var
++	eval "$shlibpath_var=\"\$dir:\$$shlibpath_var\""
+       fi
+     done
+ 
+@@ -1939,8 +2610,7 @@ func_mode_execute ()
+ 	;;
+       esac
+       # Quote arguments (to preserve shell metacharacters).
+-      func_quote_for_eval "$file"
+-      args="$args $func_quote_for_eval_result"
++      func_append_quoted args "$file"
+     done
+ 
+     if test "X$opt_dry_run" = Xfalse; then
+@@ -1972,22 +2642,59 @@ func_mode_execute ()
+     fi
+ }
+ 
+-test "$mode" = execute && func_mode_execute ${1+"$@"}
++test "$opt_mode" = execute && func_mode_execute ${1+"$@"}
+ 
+ 
+ # func_mode_finish arg...
+ func_mode_finish ()
+ {
+     $opt_debug
+-    libdirs="$nonopt"
++    libs=
++    libdirs=
+     admincmds=
+ 
+-    if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
+-      for dir
+-      do
+-	libdirs="$libdirs $dir"
+-      done
++    for opt in "$nonopt" ${1+"$@"}
++    do
++      if test -d "$opt"; then
++	func_append libdirs " $opt"
+ 
++      elif test -f "$opt"; then
++	if func_lalib_unsafe_p "$opt"; then
++	  func_append libs " $opt"
++	else
++	  func_warning "\`$opt' is not a valid libtool archive"
++	fi
++
++      else
++	func_fatal_error "invalid argument \`$opt'"
++      fi
++    done
++
++    if test -n "$libs"; then
++      if test -n "$lt_sysroot"; then
++        sysroot_regex=`$ECHO "$lt_sysroot" | $SED "$sed_make_literal_regex"`
++        sysroot_cmd="s/\([ ']\)$sysroot_regex/\1/g;"
++      else
++        sysroot_cmd=
++      fi
++
++      # Remove sysroot references
++      if $opt_dry_run; then
++        for lib in $libs; do
++          echo "removing references to $lt_sysroot and \`=' prefixes from $lib"
++        done
++      else
++        tmpdir=`func_mktempdir`
++        for lib in $libs; do
++	  sed -e "${sysroot_cmd} s/\([ ']-[LR]\)=/\1/g; s/\([ ']\)=/\1/g" $lib \
++	    > $tmpdir/tmp-la
++	  mv -f $tmpdir/tmp-la $lib
++	done
++        ${RM}r "$tmpdir"
++      fi
++    fi
++
++    if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
+       for libdir in $libdirs; do
+ 	if test -n "$finish_cmds"; then
+ 	  # Do each command in the finish commands.
+@@ -1997,7 +2704,7 @@ func_mode_finish ()
+ 	if test -n "$finish_eval"; then
+ 	  # Do the single finish_eval.
+ 	  eval cmds=\"$finish_eval\"
+-	  $opt_dry_run || eval "$cmds" || admincmds="$admincmds
++	  $opt_dry_run || eval "$cmds" || func_append admincmds "
+        $cmds"
+ 	fi
+       done
+@@ -2006,53 +2713,55 @@ func_mode_finish ()
+     # Exit here if they wanted silent mode.
+     $opt_silent && exit $EXIT_SUCCESS
+ 
+-    echo "----------------------------------------------------------------------"
+-    echo "Libraries have been installed in:"
+-    for libdir in $libdirs; do
+-      $ECHO "   $libdir"
+-    done
+-    echo
+-    echo "If you ever happen to want to link against installed libraries"
+-    echo "in a given directory, LIBDIR, you must either use libtool, and"
+-    echo "specify the full pathname of the library, or use the \`-LLIBDIR'"
+-    echo "flag during linking and do at least one of the following:"
+-    if test -n "$shlibpath_var"; then
+-      echo "   - add LIBDIR to the \`$shlibpath_var' environment variable"
+-      echo "     during execution"
+-    fi
+-    if test -n "$runpath_var"; then
+-      echo "   - add LIBDIR to the \`$runpath_var' environment variable"
+-      echo "     during linking"
+-    fi
+-    if test -n "$hardcode_libdir_flag_spec"; then
+-      libdir=LIBDIR
+-      eval "flag=\"$hardcode_libdir_flag_spec\""
++    if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
++      echo "----------------------------------------------------------------------"
++      echo "Libraries have been installed in:"
++      for libdir in $libdirs; do
++	$ECHO "   $libdir"
++      done
++      echo
++      echo "If you ever happen to want to link against installed libraries"
++      echo "in a given directory, LIBDIR, you must either use libtool, and"
++      echo "specify the full pathname of the library, or use the \`-LLIBDIR'"
++      echo "flag during linking and do at least one of the following:"
++      if test -n "$shlibpath_var"; then
++	echo "   - add LIBDIR to the \`$shlibpath_var' environment variable"
++	echo "     during execution"
++      fi
++      if test -n "$runpath_var"; then
++	echo "   - add LIBDIR to the \`$runpath_var' environment variable"
++	echo "     during linking"
++      fi
++      if test -n "$hardcode_libdir_flag_spec"; then
++	libdir=LIBDIR
++	eval flag=\"$hardcode_libdir_flag_spec\"
+ 
+-      $ECHO "   - use the \`$flag' linker flag"
+-    fi
+-    if test -n "$admincmds"; then
+-      $ECHO "   - have your system administrator run these commands:$admincmds"
+-    fi
+-    if test -f /etc/ld.so.conf; then
+-      echo "   - have your system administrator add LIBDIR to \`/etc/ld.so.conf'"
+-    fi
+-    echo
++	$ECHO "   - use the \`$flag' linker flag"
++      fi
++      if test -n "$admincmds"; then
++	$ECHO "   - have your system administrator run these commands:$admincmds"
++      fi
++      if test -f /etc/ld.so.conf; then
++	echo "   - have your system administrator add LIBDIR to \`/etc/ld.so.conf'"
++      fi
++      echo
+ 
+-    echo "See any operating system documentation about shared libraries for"
+-    case $host in
+-      solaris2.[6789]|solaris2.1[0-9])
+-        echo "more information, such as the ld(1), crle(1) and ld.so(8) manual"
+-	echo "pages."
+-	;;
+-      *)
+-        echo "more information, such as the ld(1) and ld.so(8) manual pages."
+-        ;;
+-    esac
+-    echo "----------------------------------------------------------------------"
++      echo "See any operating system documentation about shared libraries for"
++      case $host in
++	solaris2.[6789]|solaris2.1[0-9])
++	  echo "more information, such as the ld(1), crle(1) and ld.so(8) manual"
++	  echo "pages."
++	  ;;
++	*)
++	  echo "more information, such as the ld(1) and ld.so(8) manual pages."
++	  ;;
++      esac
++      echo "----------------------------------------------------------------------"
++    fi
+     exit $EXIT_SUCCESS
+ }
+ 
+-test "$mode" = finish && func_mode_finish ${1+"$@"}
++test "$opt_mode" = finish && func_mode_finish ${1+"$@"}
+ 
+ 
+ # func_mode_install arg...
+@@ -2077,7 +2786,7 @@ func_mode_install ()
+     # The real first argument should be the name of the installation program.
+     # Aesthetically quote it.
+     func_quote_for_eval "$arg"
+-    install_prog="$install_prog$func_quote_for_eval_result"
++    func_append install_prog "$func_quote_for_eval_result"
+     install_shared_prog=$install_prog
+     case " $install_prog " in
+       *[\\\ /]cp\ *) install_cp=: ;;
+@@ -2097,7 +2806,7 @@ func_mode_install ()
+     do
+       arg2=
+       if test -n "$dest"; then
+-	files="$files $dest"
++	func_append files " $dest"
+ 	dest=$arg
+ 	continue
+       fi
+@@ -2135,11 +2844,11 @@ func_mode_install ()
+ 
+       # Aesthetically quote the argument.
+       func_quote_for_eval "$arg"
+-      install_prog="$install_prog $func_quote_for_eval_result"
++      func_append install_prog " $func_quote_for_eval_result"
+       if test -n "$arg2"; then
+ 	func_quote_for_eval "$arg2"
+       fi
+-      install_shared_prog="$install_shared_prog $func_quote_for_eval_result"
++      func_append install_shared_prog " $func_quote_for_eval_result"
+     done
+ 
+     test -z "$install_prog" && \
+@@ -2151,7 +2860,7 @@ func_mode_install ()
+     if test -n "$install_override_mode" && $no_mode; then
+       if $install_cp; then :; else
+ 	func_quote_for_eval "$install_override_mode"
+-	install_shared_prog="$install_shared_prog -m $func_quote_for_eval_result"
++	func_append install_shared_prog " -m $func_quote_for_eval_result"
+       fi
+     fi
+ 
+@@ -2209,10 +2918,13 @@ func_mode_install ()
+       case $file in
+       *.$libext)
+ 	# Do the static libraries later.
+-	staticlibs="$staticlibs $file"
++	func_append staticlibs " $file"
+ 	;;
+ 
+       *.la)
++	func_resolve_sysroot "$file"
++	file=$func_resolve_sysroot_result
++
+ 	# Check to see that this really is a libtool archive.
+ 	func_lalib_unsafe_p "$file" \
+ 	  || func_fatal_help "\`$file' is not a valid libtool archive"
+@@ -2226,23 +2938,30 @@ func_mode_install ()
+ 	if test "X$destdir" = "X$libdir"; then
+ 	  case "$current_libdirs " in
+ 	  *" $libdir "*) ;;
+-	  *) current_libdirs="$current_libdirs $libdir" ;;
++	  *) func_append current_libdirs " $libdir" ;;
+ 	  esac
+ 	else
+ 	  # Note the libdir as a future libdir.
+ 	  case "$future_libdirs " in
+ 	  *" $libdir "*) ;;
+-	  *) future_libdirs="$future_libdirs $libdir" ;;
++	  *) func_append future_libdirs " $libdir" ;;
+ 	  esac
+ 	fi
+ 
+ 	func_dirname "$file" "/" ""
+ 	dir="$func_dirname_result"
+-	dir="$dir$objdir"
++	func_append dir "$objdir"
+ 
+ 	if test -n "$relink_command"; then
++      # Strip any trailing slash from the destination.
++      func_stripname '' '/' "$libdir"
++      destlibdir=$func_stripname_result
++
++      func_stripname '' '/' "$destdir"
++      s_destdir=$func_stripname_result
++
+ 	  # Determine the prefix the user has applied to our future dir.
+-	  inst_prefix_dir=`$ECHO "$destdir" | $SED -e "s%$libdir\$%%"`
++	  inst_prefix_dir=`$ECHO "X$s_destdir" | $Xsed -e "s%$destlibdir\$%%"`
+ 
+ 	  # Don't allow the user to place us outside of our expected
+ 	  # location b/c this prevents finding dependent libraries that
+@@ -2315,7 +3034,7 @@ func_mode_install ()
+ 	func_show_eval "$install_prog $instname $destdir/$name" 'exit $?'
+ 
+ 	# Maybe install the static library, too.
+-	test -n "$old_library" && staticlibs="$staticlibs $dir/$old_library"
++	test -n "$old_library" && func_append staticlibs " $dir/$old_library"
+ 	;;
+ 
+       *.lo)
+@@ -2503,7 +3222,7 @@ func_mode_install ()
+     test -n "$future_libdirs" && \
+       func_warning "remember to run \`$progname --finish$future_libdirs'"
+ 
+-    if test -n "$current_libdirs" && $opt_finish; then
++    if test -n "$current_libdirs"; then
+       # Maybe just do a dry run.
+       $opt_dry_run && current_libdirs=" -n$current_libdirs"
+       exec_cmd='$SHELL $progpath $preserve_args --finish$current_libdirs'
+@@ -2512,7 +3231,7 @@ func_mode_install ()
+     fi
+ }
+ 
+-test "$mode" = install && func_mode_install ${1+"$@"}
++test "$opt_mode" = install && func_mode_install ${1+"$@"}
+ 
+ 
+ # func_generate_dlsyms outputname originator pic_p
+@@ -2559,6 +3278,18 @@ extern \"C\" {
+ #pragma GCC diagnostic ignored \"-Wstrict-prototypes\"
+ #endif
+ 
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ /* External symbol declarations for the compiler. */\
+ "
+ 
+@@ -2570,21 +3301,22 @@ extern \"C\" {
+ 	  # Add our own program objects to the symbol list.
+ 	  progfiles=`$ECHO "$objs$old_deplibs" | $SP2NL | $SED "$lo2o" | $NL2SP`
+ 	  for progfile in $progfiles; do
+-	    func_verbose "extracting global C symbols from \`$progfile'"
+-	    $opt_dry_run || eval "$NM $progfile | $global_symbol_pipe >> '$nlist'"
++	    func_to_tool_file "$progfile" func_convert_file_msys_to_w32
++	    func_verbose "extracting global C symbols from \`$func_to_tool_file_result'"
++	    $opt_dry_run || eval "$NM $func_to_tool_file_result | $global_symbol_pipe >> '$nlist'"
+ 	  done
+ 
+ 	  if test -n "$exclude_expsyms"; then
+ 	    $opt_dry_run || {
+-	      $EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T
+-	      $MV "$nlist"T "$nlist"
++	      eval '$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T'
++	      eval '$MV "$nlist"T "$nlist"'
+ 	    }
+ 	  fi
+ 
+ 	  if test -n "$export_symbols_regex"; then
+ 	    $opt_dry_run || {
+-	      $EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T
+-	      $MV "$nlist"T "$nlist"
++	      eval '$EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T'
++	      eval '$MV "$nlist"T "$nlist"'
+ 	    }
+ 	  fi
+ 
+@@ -2593,23 +3325,23 @@ extern \"C\" {
+ 	    export_symbols="$output_objdir/$outputname.exp"
+ 	    $opt_dry_run || {
+ 	      $RM $export_symbols
+-	      ${SED} -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' < "$nlist" > "$export_symbols"
++	      eval "${SED} -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' "'< "$nlist" > "$export_symbols"'
+ 	      case $host in
+ 	      *cygwin* | *mingw* | *cegcc* )
+-                echo EXPORTS > "$output_objdir/$outputname.def"
+-                cat "$export_symbols" >> "$output_objdir/$outputname.def"
++                eval "echo EXPORTS "'> "$output_objdir/$outputname.def"'
++                eval 'cat "$export_symbols" >> "$output_objdir/$outputname.def"'
+ 	        ;;
+ 	      esac
+ 	    }
+ 	  else
+ 	    $opt_dry_run || {
+-	      ${SED} -e 's/\([].[*^$]\)/\\\1/g' -e 's/^/ /' -e 's/$/$/' < "$export_symbols" > "$output_objdir/$outputname.exp"
+-	      $GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T
+-	      $MV "$nlist"T "$nlist"
++	      eval "${SED} -e 's/\([].[*^$]\)/\\\\\1/g' -e 's/^/ /' -e 's/$/$/'"' < "$export_symbols" > "$output_objdir/$outputname.exp"'
++	      eval '$GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T'
++	      eval '$MV "$nlist"T "$nlist"'
+ 	      case $host in
+ 	        *cygwin* | *mingw* | *cegcc* )
+-	          echo EXPORTS > "$output_objdir/$outputname.def"
+-	          cat "$nlist" >> "$output_objdir/$outputname.def"
++	          eval "echo EXPORTS "'> "$output_objdir/$outputname.def"'
++	          eval 'cat "$nlist" >> "$output_objdir/$outputname.def"'
+ 	          ;;
+ 	      esac
+ 	    }
+@@ -2620,10 +3352,52 @@ extern \"C\" {
+ 	  func_verbose "extracting global C symbols from \`$dlprefile'"
+ 	  func_basename "$dlprefile"
+ 	  name="$func_basename_result"
+-	  $opt_dry_run || {
+-	    $ECHO ": $name " >> "$nlist"
+-	    eval "$NM $dlprefile 2>/dev/null | $global_symbol_pipe >> '$nlist'"
+-	  }
++          case $host in
++	    *cygwin* | *mingw* | *cegcc* )
++	      # if an import library, we need to obtain dlname
++	      if func_win32_import_lib_p "$dlprefile"; then
++	        func_tr_sh "$dlprefile"
++	        eval "curr_lafile=\$libfile_$func_tr_sh_result"
++	        dlprefile_dlbasename=""
++	        if test -n "$curr_lafile" && func_lalib_p "$curr_lafile"; then
++	          # Use subshell, to avoid clobbering current variable values
++	          dlprefile_dlname=`source "$curr_lafile" && echo "$dlname"`
++	          if test -n "$dlprefile_dlname" ; then
++	            func_basename "$dlprefile_dlname"
++	            dlprefile_dlbasename="$func_basename_result"
++	          else
++	            # no lafile. user explicitly requested -dlpreopen <import library>.
++	            $sharedlib_from_linklib_cmd "$dlprefile"
++	            dlprefile_dlbasename=$sharedlib_from_linklib_result
++	          fi
++	        fi
++	        $opt_dry_run || {
++	          if test -n "$dlprefile_dlbasename" ; then
++	            eval '$ECHO ": $dlprefile_dlbasename" >> "$nlist"'
++	          else
++	            func_warning "Could not compute DLL name from $name"
++	            eval '$ECHO ": $name " >> "$nlist"'
++	          fi
++	          func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
++	          eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe |
++	            $SED -e '/I __imp/d' -e 's/I __nm_/D /;s/_nm__//' >> '$nlist'"
++	        }
++	      else # not an import lib
++	        $opt_dry_run || {
++	          eval '$ECHO ": $name " >> "$nlist"'
++	          func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
++	          eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'"
++	        }
++	      fi
++	    ;;
++	    *)
++	      $opt_dry_run || {
++	        eval '$ECHO ": $name " >> "$nlist"'
++	        func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
++	        eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'"
++	      }
++	    ;;
++          esac
+ 	done
+ 
+ 	$opt_dry_run || {
+@@ -2661,26 +3435,9 @@ typedef struct {
+   const char *name;
+   void *address;
+ } lt_dlsymlist;
+-"
+-	  case $host in
+-	  *cygwin* | *mingw* | *cegcc* )
+-	    echo >> "$output_objdir/$my_dlsyms" "\
+-/* DATA imports from DLLs on WIN32 con't be const, because
+-   runtime relocations are performed -- see ld's documentation
+-   on pseudo-relocs.  */"
+-	    lt_dlsym_const= ;;
+-	  *osf5*)
+-	    echo >> "$output_objdir/$my_dlsyms" "\
+-/* This system does not cope well with relocations in const data */"
+-	    lt_dlsym_const= ;;
+-	  *)
+-	    lt_dlsym_const=const ;;
+-	  esac
+-
+-	  echo >> "$output_objdir/$my_dlsyms" "\
+-extern $lt_dlsym_const lt_dlsymlist
++extern LT_DLSYM_CONST lt_dlsymlist
+ lt_${my_prefix}_LTX_preloaded_symbols[];
+-$lt_dlsym_const lt_dlsymlist
++LT_DLSYM_CONST lt_dlsymlist
+ lt_${my_prefix}_LTX_preloaded_symbols[] =
+ {\
+   { \"$my_originator\", (void *) 0 },"
+@@ -2736,7 +3493,7 @@ static const void *lt_preloaded_setup() {
+ 	for arg in $LTCFLAGS; do
+ 	  case $arg in
+ 	  -pie | -fpie | -fPIE) ;;
+-	  *) symtab_cflags="$symtab_cflags $arg" ;;
++	  *) func_append symtab_cflags " $arg" ;;
+ 	  esac
+ 	done
+ 
+@@ -2796,9 +3553,11 @@ func_win32_libid ()
+     win32_libid_type="x86 archive import"
+     ;;
+   *ar\ archive*) # could be an import, or static
+-    if $OBJDUMP -f "$1" | $SED -e '10q' 2>/dev/null |
+-       $EGREP 'file format (pe-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then
+-      win32_nmres=`$NM -f posix -A "$1" |
++    # Keep the egrep pattern in sync with the one in _LT_CHECK_MAGIC_METHOD.
++    if eval $OBJDUMP -f $1 | $SED -e '10q' 2>/dev/null |
++       $EGREP 'file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then
++      func_to_tool_file "$1" func_convert_file_msys_to_w32
++      win32_nmres=`eval $NM -f posix -A \"$func_to_tool_file_result\" |
+ 	$SED -n -e '
+ 	    1,100{
+ 		/ I /{
+@@ -2827,6 +3586,131 @@ func_win32_libid ()
+   $ECHO "$win32_libid_type"
+ }
+ 
++# func_cygming_dll_for_implib ARG
++#
++# Platform-specific function to extract the
++# name of the DLL associated with the specified
++# import library ARG.
++# Invoked by eval'ing the libtool variable
++#    $sharedlib_from_linklib_cmd
++# Result is available in the variable
++#    $sharedlib_from_linklib_result
++func_cygming_dll_for_implib ()
++{
++  $opt_debug
++  sharedlib_from_linklib_result=`$DLLTOOL --identify-strict --identify "$1"`
++}
++
++# func_cygming_dll_for_implib_fallback_core SECTION_NAME LIBNAMEs
++#
++# This is the core of a fallback implementation of a
++# platform-specific function to extract the name of the
++# DLL associated with the specified import library LIBNAME.
++#
++# SECTION_NAME is either .idata$6 or .idata$7, depending
++# on the platform and compiler that created the implib.
++#
++# Echos the name of the DLL associated with the
++# specified import library.
++func_cygming_dll_for_implib_fallback_core ()
++{
++  $opt_debug
++  match_literal=`$ECHO "$1" | $SED "$sed_make_literal_regex"`
++  $OBJDUMP -s --section "$1" "$2" 2>/dev/null |
++    $SED '/^Contents of section '"$match_literal"':/{
++      # Place marker at beginning of archive member dllname section
++      s/.*/====MARK====/
++      p
++      d
++    }
++    # These lines can sometimes be longer than 43 characters, but
++    # are always uninteresting
++    /:[	 ]*file format pe[i]\{,1\}-/d
++    /^In archive [^:]*:/d
++    # Ensure marker is printed
++    /^====MARK====/p
++    # Remove all lines with less than 43 characters
++    /^.\{43\}/!d
++    # From remaining lines, remove first 43 characters
++    s/^.\{43\}//' |
++    $SED -n '
++      # Join marker and all lines until next marker into a single line
++      /^====MARK====/ b para
++      H
++      $ b para
++      b
++      :para
++      x
++      s/\n//g
++      # Remove the marker
++      s/^====MARK====//
++      # Remove trailing dots and whitespace
++      s/[\. \t]*$//
++      # Print
++      /./p' |
++    # we now have a list, one entry per line, of the stringified
++    # contents of the appropriate section of all members of the
++    # archive which possess that section. Heuristic: eliminate
++    # all those which have a first or second character that is
++    # a '.' (that is, objdump's representation of an unprintable
++    # character.) This should work for all archives with less than
++    # 0x302f exports -- but will fail for DLLs whose name actually
++    # begins with a literal '.' or a single character followed by
++    # a '.'.
++    #
++    # Of those that remain, print the first one.
++    $SED -e '/^\./d;/^.\./d;q'
++}
++
++# func_cygming_gnu_implib_p ARG
++# This predicate returns with zero status (TRUE) if
++# ARG is a GNU/binutils-style import library. Returns
++# with nonzero status (FALSE) otherwise.
++func_cygming_gnu_implib_p ()
++{
++  $opt_debug
++  func_to_tool_file "$1" func_convert_file_msys_to_w32
++  func_cygming_gnu_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $EGREP ' (_head_[A-Za-z0-9_]+_[ad]l*|[A-Za-z0-9_]+_[ad]l*_iname)$'`
++  test -n "$func_cygming_gnu_implib_tmp"
++}
++
++# func_cygming_ms_implib_p ARG
++# This predicate returns with zero status (TRUE) if
++# ARG is an MS-style import library. Returns
++# with nonzero status (FALSE) otherwise.
++func_cygming_ms_implib_p ()
++{
++  $opt_debug
++  func_to_tool_file "$1" func_convert_file_msys_to_w32
++  func_cygming_ms_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $GREP '_NULL_IMPORT_DESCRIPTOR'`
++  test -n "$func_cygming_ms_implib_tmp"
++}
++
++# func_cygming_dll_for_implib_fallback ARG
++# Platform-specific function to extract the
++# name of the DLL associated with the specified
++# import library ARG.
++#
++# This fallback implementation is for use when $DLLTOOL
++# does not support the --identify-strict option.
++# Invoked by eval'ing the libtool variable
++#    $sharedlib_from_linklib_cmd
++# Result is available in the variable
++#    $sharedlib_from_linklib_result
++func_cygming_dll_for_implib_fallback ()
++{
++  $opt_debug
++  if func_cygming_gnu_implib_p "$1" ; then
++    # binutils import library
++    sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$7' "$1"`
++  elif func_cygming_ms_implib_p "$1" ; then
++    # ms-generated import library
++    sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$6' "$1"`
++  else
++    # unknown
++    sharedlib_from_linklib_result=""
++  fi
++}
+ 
+ 
+ # func_extract_an_archive dir oldlib
+@@ -2917,7 +3801,7 @@ func_extract_archives ()
+ 	    darwin_file=
+ 	    darwin_files=
+ 	    for darwin_file in $darwin_filelist; do
+-	      darwin_files=`find unfat-$$ -name $darwin_file -print | $NL2SP`
++	      darwin_files=`find unfat-$$ -name $darwin_file -print | sort | $NL2SP`
+ 	      $LIPO -create -output "$darwin_file" $darwin_files
+ 	    done # $darwin_filelist
+ 	    $RM -rf unfat-$$
+@@ -2932,7 +3816,7 @@ func_extract_archives ()
+         func_extract_an_archive "$my_xdir" "$my_xabs"
+ 	;;
+       esac
+-      my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | $NL2SP`
++      my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | sort | $NL2SP`
+     done
+ 
+     func_extract_archives_result="$my_oldobjs"
+@@ -3014,7 +3898,110 @@ func_fallback_echo ()
+ _LTECHO_EOF'
+ }
+     ECHO=\"$qECHO\"
+-  fi\
++  fi
++
++# Very basic option parsing. These options are (a) specific to
++# the libtool wrapper, (b) are identical between the wrapper
++# /script/ and the wrapper /executable/ which is used only on
++# windows platforms, and (c) all begin with the string "--lt-"
++# (application programs are unlikely to have options which match
++# this pattern).
++#
++# There are only two supported options: --lt-debug and
++# --lt-dump-script. There is, deliberately, no --lt-help.
++#
++# The first argument to this parsing function should be the
++# script's $0 value, followed by "$@".
++lt_option_debug=
++func_parse_lt_options ()
++{
++  lt_script_arg0=\$0
++  shift
++  for lt_opt
++  do
++    case \"\$lt_opt\" in
++    --lt-debug) lt_option_debug=1 ;;
++    --lt-dump-script)
++        lt_dump_D=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%/[^/]*$%%'\`
++        test \"X\$lt_dump_D\" = \"X\$lt_script_arg0\" && lt_dump_D=.
++        lt_dump_F=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%^.*/%%'\`
++        cat \"\$lt_dump_D/\$lt_dump_F\"
++        exit 0
++      ;;
++    --lt-*)
++        \$ECHO \"Unrecognized --lt- option: '\$lt_opt'\" 1>&2
++        exit 1
++      ;;
++    esac
++  done
++
++  # Print the debug banner immediately:
++  if test -n \"\$lt_option_debug\"; then
++    echo \"${outputname}:${output}:\${LINENO}: libtool wrapper (GNU $PACKAGE$TIMESTAMP) $VERSION\" 1>&2
++  fi
++}
++
++# Used when --lt-debug. Prints its arguments to stdout
++# (redirection is the responsibility of the caller)
++func_lt_dump_args ()
++{
++  lt_dump_args_N=1;
++  for lt_arg
++  do
++    \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[\$lt_dump_args_N]: \$lt_arg\"
++    lt_dump_args_N=\`expr \$lt_dump_args_N + 1\`
++  done
++}
++
++# Core function for launching the target application
++func_exec_program_core ()
++{
++"
++  case $host in
++  # Backslashes separate directories on plain windows
++  *-*-mingw | *-*-os2* | *-cegcc*)
++    $ECHO "\
++      if test -n \"\$lt_option_debug\"; then
++        \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir\\\\\$program\" 1>&2
++        func_lt_dump_args \${1+\"\$@\"} 1>&2
++      fi
++      exec \"\$progdir\\\\\$program\" \${1+\"\$@\"}
++"
++    ;;
++
++  *)
++    $ECHO "\
++      if test -n \"\$lt_option_debug\"; then
++        \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir/\$program\" 1>&2
++        func_lt_dump_args \${1+\"\$@\"} 1>&2
++      fi
++      exec \"\$progdir/\$program\" \${1+\"\$@\"}
++"
++    ;;
++  esac
++  $ECHO "\
++      \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2
++      exit 1
++}
++
++# A function to encapsulate launching the target application
++# Strips options in the --lt-* namespace from \$@ and
++# launches target application with the remaining arguments.
++func_exec_program ()
++{
++  for lt_wr_arg
++  do
++    case \$lt_wr_arg in
++    --lt-*) ;;
++    *) set x \"\$@\" \"\$lt_wr_arg\"; shift;;
++    esac
++    shift
++  done
++  func_exec_program_core \${1+\"\$@\"}
++}
++
++  # Parse options
++  func_parse_lt_options \"\$0\" \${1+\"\$@\"}
+ 
+   # Find the directory that this script lives in.
+   thisdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*$%%'\`
+@@ -3078,7 +4065,7 @@ _LTECHO_EOF'
+ 
+     # relink executable if necessary
+     if test -n \"\$relink_command\"; then
+-      if relink_command_output=\`eval \"\$relink_command\" 2>&1\`; then :
++      if relink_command_output=\`eval \$relink_command 2>&1\`; then :
+       else
+ 	$ECHO \"\$relink_command_output\" >&2
+ 	$RM \"\$progdir/\$file\"
+@@ -3102,6 +4089,18 @@ _LTECHO_EOF'
+ 
+   if test -f \"\$progdir/\$program\"; then"
+ 
++	# fixup the dll searchpath if we need to.
++	#
++	# Fix the DLL searchpath if we need to.  Do this before prepending
++	# to shlibpath, because on Windows, both are PATH and uninstalled
++	# libraries must come first.
++	if test -n "$dllsearchpath"; then
++	  $ECHO "\
++    # Add the dll search path components to the executable PATH
++    PATH=$dllsearchpath:\$PATH
++"
++	fi
++
+ 	# Export our shlibpath_var if we have one.
+ 	if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then
+ 	  $ECHO "\
+@@ -3116,35 +4115,10 @@ _LTECHO_EOF'
+ "
+ 	fi
+ 
+-	# fixup the dll searchpath if we need to.
+-	if test -n "$dllsearchpath"; then
+-	  $ECHO "\
+-    # Add the dll search path components to the executable PATH
+-    PATH=$dllsearchpath:\$PATH
+-"
+-	fi
+-
+ 	$ECHO "\
+     if test \"\$libtool_execute_magic\" != \"$magic\"; then
+       # Run the actual program with our arguments.
+-"
+-	case $host in
+-	# Backslashes separate directories on plain windows
+-	*-*-mingw | *-*-os2* | *-cegcc*)
+-	  $ECHO "\
+-      exec \"\$progdir\\\\\$program\" \${1+\"\$@\"}
+-"
+-	  ;;
+-
+-	*)
+-	  $ECHO "\
+-      exec \"\$progdir/\$program\" \${1+\"\$@\"}
+-"
+-	  ;;
+-	esac
+-	$ECHO "\
+-      \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2
+-      exit 1
++      func_exec_program \${1+\"\$@\"}
+     fi
+   else
+     # The program doesn't exist.
+@@ -3158,166 +4132,6 @@ fi\
+ }
+ 
+ 
+-# func_to_host_path arg
+-#
+-# Convert paths to host format when used with build tools.
+-# Intended for use with "native" mingw (where libtool itself
+-# is running under the msys shell), or in the following cross-
+-# build environments:
+-#    $build          $host
+-#    mingw (msys)    mingw  [e.g. native]
+-#    cygwin          mingw
+-#    *nix + wine     mingw
+-# where wine is equipped with the `winepath' executable.
+-# In the native mingw case, the (msys) shell automatically
+-# converts paths for any non-msys applications it launches,
+-# but that facility isn't available from inside the cwrapper.
+-# Similar accommodations are necessary for $host mingw and
+-# $build cygwin.  Calling this function does no harm for other
+-# $host/$build combinations not listed above.
+-#
+-# ARG is the path (on $build) that should be converted to
+-# the proper representation for $host. The result is stored
+-# in $func_to_host_path_result.
+-func_to_host_path ()
+-{
+-  func_to_host_path_result="$1"
+-  if test -n "$1"; then
+-    case $host in
+-      *mingw* )
+-        lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
+-        case $build in
+-          *mingw* ) # actually, msys
+-            # awkward: cmd appends spaces to result
+-            func_to_host_path_result=`( cmd //c echo "$1" ) 2>/dev/null |
+-              $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
+-            ;;
+-          *cygwin* )
+-            func_to_host_path_result=`cygpath -w "$1" |
+-	      $SED -e "$lt_sed_naive_backslashify"`
+-            ;;
+-          * )
+-            # Unfortunately, winepath does not exit with a non-zero
+-            # error code, so we are forced to check the contents of
+-            # stdout. On the other hand, if the command is not
+-            # found, the shell will set an exit code of 127 and print
+-            # *an error message* to stdout. So we must check for both
+-            # error code of zero AND non-empty stdout, which explains
+-            # the odd construction:
+-            func_to_host_path_tmp1=`winepath -w "$1" 2>/dev/null`
+-            if test "$?" -eq 0 && test -n "${func_to_host_path_tmp1}"; then
+-              func_to_host_path_result=`$ECHO "$func_to_host_path_tmp1" |
+-                $SED -e "$lt_sed_naive_backslashify"`
+-            else
+-              # Allow warning below.
+-              func_to_host_path_result=
+-            fi
+-            ;;
+-        esac
+-        if test -z "$func_to_host_path_result" ; then
+-          func_error "Could not determine host path corresponding to"
+-          func_error "  \`$1'"
+-          func_error "Continuing, but uninstalled executables may not work."
+-          # Fallback:
+-          func_to_host_path_result="$1"
+-        fi
+-        ;;
+-    esac
+-  fi
+-}
+-# end: func_to_host_path
+-
+-# func_to_host_pathlist arg
+-#
+-# Convert pathlists to host format when used with build tools.
+-# See func_to_host_path(), above. This function supports the
+-# following $build/$host combinations (but does no harm for
+-# combinations not listed here):
+-#    $build          $host
+-#    mingw (msys)    mingw  [e.g. native]
+-#    cygwin          mingw
+-#    *nix + wine     mingw
+-#
+-# Path separators are also converted from $build format to
+-# $host format. If ARG begins or ends with a path separator
+-# character, it is preserved (but converted to $host format)
+-# on output.
+-#
+-# ARG is a pathlist (on $build) that should be converted to
+-# the proper representation on $host. The result is stored
+-# in $func_to_host_pathlist_result.
+-func_to_host_pathlist ()
+-{
+-  func_to_host_pathlist_result="$1"
+-  if test -n "$1"; then
+-    case $host in
+-      *mingw* )
+-        lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
+-        # Remove leading and trailing path separator characters from
+-        # ARG. msys behavior is inconsistent here, cygpath turns them
+-        # into '.;' and ';.', and winepath ignores them completely.
+-	func_stripname : : "$1"
+-        func_to_host_pathlist_tmp1=$func_stripname_result
+-        case $build in
+-          *mingw* ) # Actually, msys.
+-            # Awkward: cmd appends spaces to result.
+-            func_to_host_pathlist_result=`
+-	      ( cmd //c echo "$func_to_host_pathlist_tmp1" ) 2>/dev/null |
+-	      $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
+-            ;;
+-          *cygwin* )
+-            func_to_host_pathlist_result=`cygpath -w -p "$func_to_host_pathlist_tmp1" |
+-              $SED -e "$lt_sed_naive_backslashify"`
+-            ;;
+-          * )
+-            # unfortunately, winepath doesn't convert pathlists
+-            func_to_host_pathlist_result=""
+-            func_to_host_pathlist_oldIFS=$IFS
+-            IFS=:
+-            for func_to_host_pathlist_f in $func_to_host_pathlist_tmp1 ; do
+-              IFS=$func_to_host_pathlist_oldIFS
+-              if test -n "$func_to_host_pathlist_f" ; then
+-                func_to_host_path "$func_to_host_pathlist_f"
+-                if test -n "$func_to_host_path_result" ; then
+-                  if test -z "$func_to_host_pathlist_result" ; then
+-                    func_to_host_pathlist_result="$func_to_host_path_result"
+-                  else
+-                    func_append func_to_host_pathlist_result ";$func_to_host_path_result"
+-                  fi
+-                fi
+-              fi
+-            done
+-            IFS=$func_to_host_pathlist_oldIFS
+-            ;;
+-        esac
+-        if test -z "$func_to_host_pathlist_result"; then
+-          func_error "Could not determine the host path(s) corresponding to"
+-          func_error "  \`$1'"
+-          func_error "Continuing, but uninstalled executables may not work."
+-          # Fallback. This may break if $1 contains DOS-style drive
+-          # specifications. The fix is not to complicate the expression
+-          # below, but for the user to provide a working wine installation
+-          # with winepath so that path translation in the cross-to-mingw
+-          # case works properly.
+-          lt_replace_pathsep_nix_to_dos="s|:|;|g"
+-          func_to_host_pathlist_result=`echo "$func_to_host_pathlist_tmp1" |\
+-            $SED -e "$lt_replace_pathsep_nix_to_dos"`
+-        fi
+-        # Now, add the leading and trailing path separators back
+-        case "$1" in
+-          :* ) func_to_host_pathlist_result=";$func_to_host_pathlist_result"
+-            ;;
+-        esac
+-        case "$1" in
+-          *: ) func_append func_to_host_pathlist_result ";"
+-            ;;
+-        esac
+-        ;;
+-    esac
+-  fi
+-}
+-# end: func_to_host_pathlist
+-
+ # func_emit_cwrapperexe_src
+ # emit the source code for a wrapper executable on stdout
+ # Must ONLY be called from within func_mode_link because
+@@ -3334,10 +4148,6 @@ func_emit_cwrapperexe_src ()
+ 
+    This wrapper executable should never be moved out of the build directory.
+    If it is, it will not operate correctly.
+-
+-   Currently, it simply execs the wrapper *script* "$SHELL $output",
+-   but could eventually absorb all of the scripts functionality and
+-   exec $objdir/$outputname directly.
+ */
+ EOF
+ 	    cat <<"EOF"
+@@ -3462,22 +4272,13 @@ int setenv (const char *, const char *, int);
+   if (stale) { free ((void *) stale); stale = 0; } \
+ } while (0)
+ 
+-#undef LTWRAPPER_DEBUGPRINTF
+-#if defined LT_DEBUGWRAPPER
+-# define LTWRAPPER_DEBUGPRINTF(args) ltwrapper_debugprintf args
+-static void
+-ltwrapper_debugprintf (const char *fmt, ...)
+-{
+-    va_list args;
+-    va_start (args, fmt);
+-    (void) vfprintf (stderr, fmt, args);
+-    va_end (args);
+-}
++#if defined(LT_DEBUGWRAPPER)
++static int lt_debug = 1;
+ #else
+-# define LTWRAPPER_DEBUGPRINTF(args)
++static int lt_debug = 0;
+ #endif
+ 
+-const char *program_name = NULL;
++const char *program_name = "libtool-wrapper"; /* in case xstrdup fails */
+ 
+ void *xmalloc (size_t num);
+ char *xstrdup (const char *string);
+@@ -3487,7 +4288,10 @@ char *chase_symlinks (const char *pathspec);
+ int make_executable (const char *path);
+ int check_executable (const char *path);
+ char *strendzap (char *str, const char *pat);
+-void lt_fatal (const char *message, ...);
++void lt_debugprintf (const char *file, int line, const char *fmt, ...);
++void lt_fatal (const char *file, int line, const char *message, ...);
++static const char *nonnull (const char *s);
++static const char *nonempty (const char *s);
+ void lt_setenv (const char *name, const char *value);
+ char *lt_extend_str (const char *orig_value, const char *add, int to_end);
+ void lt_update_exe_path (const char *name, const char *value);
+@@ -3497,14 +4301,14 @@ void lt_dump_script (FILE *f);
+ EOF
+ 
+ 	    cat <<EOF
+-const char * MAGIC_EXE = "$magic_exe";
++volatile const char * MAGIC_EXE = "$magic_exe";
+ const char * LIB_PATH_VARNAME = "$shlibpath_var";
+ EOF
+ 
+ 	    if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then
+-              func_to_host_pathlist "$temp_rpath"
++              func_to_host_path "$temp_rpath"
+ 	      cat <<EOF
+-const char * LIB_PATH_VALUE   = "$func_to_host_pathlist_result";
++const char * LIB_PATH_VALUE   = "$func_to_host_path_result";
+ EOF
+ 	    else
+ 	      cat <<"EOF"
+@@ -3513,10 +4317,10 @@ EOF
+ 	    fi
+ 
+ 	    if test -n "$dllsearchpath"; then
+-              func_to_host_pathlist "$dllsearchpath:"
++              func_to_host_path "$dllsearchpath:"
+ 	      cat <<EOF
+ const char * EXE_PATH_VARNAME = "PATH";
+-const char * EXE_PATH_VALUE   = "$func_to_host_pathlist_result";
++const char * EXE_PATH_VALUE   = "$func_to_host_path_result";
+ EOF
+ 	    else
+ 	      cat <<"EOF"
+@@ -3539,12 +4343,10 @@ EOF
+ 	    cat <<"EOF"
+ 
+ #define LTWRAPPER_OPTION_PREFIX         "--lt-"
+-#define LTWRAPPER_OPTION_PREFIX_LENGTH  5
+ 
+-static const size_t opt_prefix_len         = LTWRAPPER_OPTION_PREFIX_LENGTH;
+ static const char *ltwrapper_option_prefix = LTWRAPPER_OPTION_PREFIX;
+-
+ static const char *dumpscript_opt       = LTWRAPPER_OPTION_PREFIX "dump-script";
++static const char *debug_opt            = LTWRAPPER_OPTION_PREFIX "debug";
+ 
+ int
+ main (int argc, char *argv[])
+@@ -3561,10 +4363,13 @@ main (int argc, char *argv[])
+   int i;
+ 
+   program_name = (char *) xstrdup (base_name (argv[0]));
+-  LTWRAPPER_DEBUGPRINTF (("(main) argv[0]      : %s\n", argv[0]));
+-  LTWRAPPER_DEBUGPRINTF (("(main) program_name : %s\n", program_name));
++  newargz = XMALLOC (char *, argc + 1);
+ 
+-  /* very simple arg parsing; don't want to rely on getopt */
++  /* very simple arg parsing; don't want to rely on getopt
++   * also, copy all non cwrapper options to newargz, except
++   * argz[0], which is handled differently
++   */
++  newargc=0;
+   for (i = 1; i < argc; i++)
+     {
+       if (strcmp (argv[i], dumpscript_opt) == 0)
+@@ -3581,21 +4386,54 @@ EOF
+ 	  lt_dump_script (stdout);
+ 	  return 0;
+ 	}
++      if (strcmp (argv[i], debug_opt) == 0)
++	{
++          lt_debug = 1;
++          continue;
++	}
++      if (strcmp (argv[i], ltwrapper_option_prefix) == 0)
++        {
++          /* however, if there is an option in the LTWRAPPER_OPTION_PREFIX
++             namespace, but it is not one of the ones we know about and
++             have already dealt with, above (including dump-script), then
++             report an error. Otherwise, targets might begin to believe
++             they are allowed to use options in the LTWRAPPER_OPTION_PREFIX
++             namespace. The first time any user complains about this, we'll
++             need to make LTWRAPPER_OPTION_PREFIX a configure-time option
++             or a configure.ac-settable value.
++           */
++          lt_fatal (__FILE__, __LINE__,
++		    "unrecognized %s option: '%s'",
++                    ltwrapper_option_prefix, argv[i]);
++        }
++      /* otherwise ... */
++      newargz[++newargc] = xstrdup (argv[i]);
+     }
++  newargz[++newargc] = NULL;
++
++EOF
++	    cat <<EOF
++  /* The GNU banner must be the first non-error debug message */
++  lt_debugprintf (__FILE__, __LINE__, "libtool wrapper (GNU $PACKAGE$TIMESTAMP) $VERSION\n");
++EOF
++	    cat <<"EOF"
++  lt_debugprintf (__FILE__, __LINE__, "(main) argv[0]: %s\n", argv[0]);
++  lt_debugprintf (__FILE__, __LINE__, "(main) program_name: %s\n", program_name);
+ 
+-  newargz = XMALLOC (char *, argc + 1);
+   tmp_pathspec = find_executable (argv[0]);
+   if (tmp_pathspec == NULL)
+-    lt_fatal ("Couldn't find %s", argv[0]);
+-  LTWRAPPER_DEBUGPRINTF (("(main) found exe (before symlink chase) at : %s\n",
+-			  tmp_pathspec));
++    lt_fatal (__FILE__, __LINE__, "couldn't find %s", argv[0]);
++  lt_debugprintf (__FILE__, __LINE__,
++                  "(main) found exe (before symlink chase) at: %s\n",
++		  tmp_pathspec);
+ 
+   actual_cwrapper_path = chase_symlinks (tmp_pathspec);
+-  LTWRAPPER_DEBUGPRINTF (("(main) found exe (after symlink chase) at : %s\n",
+-			  actual_cwrapper_path));
++  lt_debugprintf (__FILE__, __LINE__,
++                  "(main) found exe (after symlink chase) at: %s\n",
++		  actual_cwrapper_path);
+   XFREE (tmp_pathspec);
+ 
+-  actual_cwrapper_name = xstrdup( base_name (actual_cwrapper_path));
++  actual_cwrapper_name = xstrdup (base_name (actual_cwrapper_path));
+   strendzap (actual_cwrapper_path, actual_cwrapper_name);
+ 
+   /* wrapper name transforms */
+@@ -3613,8 +4451,9 @@ EOF
+   target_name = tmp_pathspec;
+   tmp_pathspec = 0;
+ 
+-  LTWRAPPER_DEBUGPRINTF (("(main) libtool target name: %s\n",
+-			  target_name));
++  lt_debugprintf (__FILE__, __LINE__,
++		  "(main) libtool target name: %s\n",
++		  target_name);
+ EOF
+ 
+ 	    cat <<EOF
+@@ -3664,35 +4503,19 @@ EOF
+ 
+   lt_setenv ("BIN_SH", "xpg4"); /* for Tru64 */
+   lt_setenv ("DUALCASE", "1");  /* for MSK sh */
+-  lt_update_lib_path (LIB_PATH_VARNAME, LIB_PATH_VALUE);
++  /* Update the DLL searchpath.  EXE_PATH_VALUE ($dllsearchpath) must
++     be prepended before (that is, appear after) LIB_PATH_VALUE ($temp_rpath)
++     because on Windows, both *_VARNAMEs are PATH but uninstalled
++     libraries must come first. */
+   lt_update_exe_path (EXE_PATH_VARNAME, EXE_PATH_VALUE);
++  lt_update_lib_path (LIB_PATH_VARNAME, LIB_PATH_VALUE);
+ 
+-  newargc=0;
+-  for (i = 1; i < argc; i++)
+-    {
+-      if (strncmp (argv[i], ltwrapper_option_prefix, opt_prefix_len) == 0)
+-        {
+-          /* however, if there is an option in the LTWRAPPER_OPTION_PREFIX
+-             namespace, but it is not one of the ones we know about and
+-             have already dealt with, above (inluding dump-script), then
+-             report an error. Otherwise, targets might begin to believe
+-             they are allowed to use options in the LTWRAPPER_OPTION_PREFIX
+-             namespace. The first time any user complains about this, we'll
+-             need to make LTWRAPPER_OPTION_PREFIX a configure-time option
+-             or a configure.ac-settable value.
+-           */
+-          lt_fatal ("Unrecognized option in %s namespace: '%s'",
+-                    ltwrapper_option_prefix, argv[i]);
+-        }
+-      /* otherwise ... */
+-      newargz[++newargc] = xstrdup (argv[i]);
+-    }
+-  newargz[++newargc] = NULL;
+-
+-  LTWRAPPER_DEBUGPRINTF     (("(main) lt_argv_zero : %s\n", (lt_argv_zero ? lt_argv_zero : "<NULL>")));
++  lt_debugprintf (__FILE__, __LINE__, "(main) lt_argv_zero: %s\n",
++		  nonnull (lt_argv_zero));
+   for (i = 0; i < newargc; i++)
+     {
+-      LTWRAPPER_DEBUGPRINTF (("(main) newargz[%d]   : %s\n", i, (newargz[i] ? newargz[i] : "<NULL>")));
++      lt_debugprintf (__FILE__, __LINE__, "(main) newargz[%d]: %s\n",
++		      i, nonnull (newargz[i]));
+     }
+ 
+ EOF
+@@ -3706,7 +4529,9 @@ EOF
+   if (rval == -1)
+     {
+       /* failed to start process */
+-      LTWRAPPER_DEBUGPRINTF (("(main) failed to launch target \"%s\": errno = %d\n", lt_argv_zero, errno));
++      lt_debugprintf (__FILE__, __LINE__,
++		      "(main) failed to launch target \"%s\": %s\n",
++		      lt_argv_zero, nonnull (strerror (errno)));
+       return 127;
+     }
+   return rval;
+@@ -3728,7 +4553,7 @@ xmalloc (size_t num)
+ {
+   void *p = (void *) malloc (num);
+   if (!p)
+-    lt_fatal ("Memory exhausted");
++    lt_fatal (__FILE__, __LINE__, "memory exhausted");
+ 
+   return p;
+ }
+@@ -3762,8 +4587,8 @@ check_executable (const char *path)
+ {
+   struct stat st;
+ 
+-  LTWRAPPER_DEBUGPRINTF (("(check_executable)  : %s\n",
+-			  path ? (*path ? path : "EMPTY!") : "NULL!"));
++  lt_debugprintf (__FILE__, __LINE__, "(check_executable): %s\n",
++                  nonempty (path));
+   if ((!path) || (!*path))
+     return 0;
+ 
+@@ -3780,8 +4605,8 @@ make_executable (const char *path)
+   int rval = 0;
+   struct stat st;
+ 
+-  LTWRAPPER_DEBUGPRINTF (("(make_executable)   : %s\n",
+-			  path ? (*path ? path : "EMPTY!") : "NULL!"));
++  lt_debugprintf (__FILE__, __LINE__, "(make_executable): %s\n",
++                  nonempty (path));
+   if ((!path) || (!*path))
+     return 0;
+ 
+@@ -3807,8 +4632,8 @@ find_executable (const char *wrapper)
+   int tmp_len;
+   char *concat_name;
+ 
+-  LTWRAPPER_DEBUGPRINTF (("(find_executable)   : %s\n",
+-			  wrapper ? (*wrapper ? wrapper : "EMPTY!") : "NULL!"));
++  lt_debugprintf (__FILE__, __LINE__, "(find_executable): %s\n",
++                  nonempty (wrapper));
+ 
+   if ((wrapper == NULL) || (*wrapper == '\0'))
+     return NULL;
+@@ -3861,7 +4686,8 @@ find_executable (const char *wrapper)
+ 		{
+ 		  /* empty path: current directory */
+ 		  if (getcwd (tmp, LT_PATHMAX) == NULL)
+-		    lt_fatal ("getcwd failed");
++		    lt_fatal (__FILE__, __LINE__, "getcwd failed: %s",
++                              nonnull (strerror (errno)));
+ 		  tmp_len = strlen (tmp);
+ 		  concat_name =
+ 		    XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1);
+@@ -3886,7 +4712,8 @@ find_executable (const char *wrapper)
+     }
+   /* Relative path | not found in path: prepend cwd */
+   if (getcwd (tmp, LT_PATHMAX) == NULL)
+-    lt_fatal ("getcwd failed");
++    lt_fatal (__FILE__, __LINE__, "getcwd failed: %s",
++              nonnull (strerror (errno)));
+   tmp_len = strlen (tmp);
+   concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1);
+   memcpy (concat_name, tmp, tmp_len);
+@@ -3912,8 +4739,9 @@ chase_symlinks (const char *pathspec)
+   int has_symlinks = 0;
+   while (strlen (tmp_pathspec) && !has_symlinks)
+     {
+-      LTWRAPPER_DEBUGPRINTF (("checking path component for symlinks: %s\n",
+-			      tmp_pathspec));
++      lt_debugprintf (__FILE__, __LINE__,
++		      "checking path component for symlinks: %s\n",
++		      tmp_pathspec);
+       if (lstat (tmp_pathspec, &s) == 0)
+ 	{
+ 	  if (S_ISLNK (s.st_mode) != 0)
+@@ -3935,8 +4763,9 @@ chase_symlinks (const char *pathspec)
+ 	}
+       else
+ 	{
+-	  char *errstr = strerror (errno);
+-	  lt_fatal ("Error accessing file %s (%s)", tmp_pathspec, errstr);
++	  lt_fatal (__FILE__, __LINE__,
++		    "error accessing file \"%s\": %s",
++		    tmp_pathspec, nonnull (strerror (errno)));
+ 	}
+     }
+   XFREE (tmp_pathspec);
+@@ -3949,7 +4778,8 @@ chase_symlinks (const char *pathspec)
+   tmp_pathspec = realpath (pathspec, buf);
+   if (tmp_pathspec == 0)
+     {
+-      lt_fatal ("Could not follow symlinks for %s", pathspec);
++      lt_fatal (__FILE__, __LINE__,
++		"could not follow symlinks for %s", pathspec);
+     }
+   return xstrdup (tmp_pathspec);
+ #endif
+@@ -3975,11 +4805,25 @@ strendzap (char *str, const char *pat)
+   return str;
+ }
+ 
++void
++lt_debugprintf (const char *file, int line, const char *fmt, ...)
++{
++  va_list args;
++  if (lt_debug)
++    {
++      (void) fprintf (stderr, "%s:%s:%d: ", program_name, file, line);
++      va_start (args, fmt);
++      (void) vfprintf (stderr, fmt, args);
++      va_end (args);
++    }
++}
++
+ static void
+-lt_error_core (int exit_status, const char *mode,
++lt_error_core (int exit_status, const char *file,
++	       int line, const char *mode,
+ 	       const char *message, va_list ap)
+ {
+-  fprintf (stderr, "%s: %s: ", program_name, mode);
++  fprintf (stderr, "%s:%s:%d: %s: ", program_name, file, line, mode);
+   vfprintf (stderr, message, ap);
+   fprintf (stderr, ".\n");
+ 
+@@ -3988,20 +4832,32 @@ lt_error_core (int exit_status, const char *mode,
+ }
+ 
+ void
+-lt_fatal (const char *message, ...)
++lt_fatal (const char *file, int line, const char *message, ...)
+ {
+   va_list ap;
+   va_start (ap, message);
+-  lt_error_core (EXIT_FAILURE, "FATAL", message, ap);
++  lt_error_core (EXIT_FAILURE, file, line, "FATAL", message, ap);
+   va_end (ap);
+ }
+ 
++static const char *
++nonnull (const char *s)
++{
++  return s ? s : "(null)";
++}
++
++static const char *
++nonempty (const char *s)
++{
++  return (s && !*s) ? "(empty)" : nonnull (s);
++}
++
+ void
+ lt_setenv (const char *name, const char *value)
+ {
+-  LTWRAPPER_DEBUGPRINTF (("(lt_setenv) setting '%s' to '%s'\n",
+-                          (name ? name : "<NULL>"),
+-                          (value ? value : "<NULL>")));
++  lt_debugprintf (__FILE__, __LINE__,
++		  "(lt_setenv) setting '%s' to '%s'\n",
++                  nonnull (name), nonnull (value));
+   {
+ #ifdef HAVE_SETENV
+     /* always make a copy, for consistency with !HAVE_SETENV */
+@@ -4049,9 +4905,9 @@ lt_extend_str (const char *orig_value, const char *add, int to_end)
+ void
+ lt_update_exe_path (const char *name, const char *value)
+ {
+-  LTWRAPPER_DEBUGPRINTF (("(lt_update_exe_path) modifying '%s' by prepending '%s'\n",
+-                          (name ? name : "<NULL>"),
+-                          (value ? value : "<NULL>")));
++  lt_debugprintf (__FILE__, __LINE__,
++		  "(lt_update_exe_path) modifying '%s' by prepending '%s'\n",
++                  nonnull (name), nonnull (value));
+ 
+   if (name && *name && value && *value)
+     {
+@@ -4070,9 +4926,9 @@ lt_update_exe_path (const char *name, const char *value)
+ void
+ lt_update_lib_path (const char *name, const char *value)
+ {
+-  LTWRAPPER_DEBUGPRINTF (("(lt_update_lib_path) modifying '%s' by prepending '%s'\n",
+-                          (name ? name : "<NULL>"),
+-                          (value ? value : "<NULL>")));
++  lt_debugprintf (__FILE__, __LINE__,
++		  "(lt_update_lib_path) modifying '%s' by prepending '%s'\n",
++                  nonnull (name), nonnull (value));
+ 
+   if (name && *name && value && *value)
+     {
+@@ -4222,7 +5078,7 @@ EOF
+ func_win32_import_lib_p ()
+ {
+     $opt_debug
+-    case `eval "$file_magic_cmd \"\$1\" 2>/dev/null" | $SED -e 10q` in
++    case `eval $file_magic_cmd \"\$1\" 2>/dev/null | $SED -e 10q` in
+     *import*) : ;;
+     *) false ;;
+     esac
+@@ -4401,9 +5257,9 @@ func_mode_link ()
+ 	    ;;
+ 	  *)
+ 	    if test "$prev" = dlfiles; then
+-	      dlfiles="$dlfiles $arg"
++	      func_append dlfiles " $arg"
+ 	    else
+-	      dlprefiles="$dlprefiles $arg"
++	      func_append dlprefiles " $arg"
+ 	    fi
+ 	    prev=
+ 	    continue
+@@ -4427,7 +5283,7 @@ func_mode_link ()
+ 	    *-*-darwin*)
+ 	      case "$deplibs " in
+ 		*" $qarg.ltframework "*) ;;
+-		*) deplibs="$deplibs $qarg.ltframework" # this is fixed later
++		*) func_append deplibs " $qarg.ltframework" # this is fixed later
+ 		   ;;
+ 	      esac
+ 	      ;;
+@@ -4446,7 +5302,7 @@ func_mode_link ()
+ 	    moreargs=
+ 	    for fil in `cat "$save_arg"`
+ 	    do
+-#	      moreargs="$moreargs $fil"
++#	      func_append moreargs " $fil"
+ 	      arg=$fil
+ 	      # A libtool-controlled object.
+ 
+@@ -4475,7 +5331,7 @@ func_mode_link ()
+ 
+ 		  if test "$prev" = dlfiles; then
+ 		    if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then
+-		      dlfiles="$dlfiles $pic_object"
++		      func_append dlfiles " $pic_object"
+ 		      prev=
+ 		      continue
+ 		    else
+@@ -4487,7 +5343,7 @@ func_mode_link ()
+ 		  # CHECK ME:  I think I busted this.  -Ossama
+ 		  if test "$prev" = dlprefiles; then
+ 		    # Preload the old-style object.
+-		    dlprefiles="$dlprefiles $pic_object"
++		    func_append dlprefiles " $pic_object"
+ 		    prev=
+ 		  fi
+ 
+@@ -4557,12 +5413,12 @@ func_mode_link ()
+ 	  if test "$prev" = rpath; then
+ 	    case "$rpath " in
+ 	    *" $arg "*) ;;
+-	    *) rpath="$rpath $arg" ;;
++	    *) func_append rpath " $arg" ;;
+ 	    esac
+ 	  else
+ 	    case "$xrpath " in
+ 	    *" $arg "*) ;;
+-	    *) xrpath="$xrpath $arg" ;;
++	    *) func_append xrpath " $arg" ;;
+ 	    esac
+ 	  fi
+ 	  prev=
+@@ -4574,28 +5430,28 @@ func_mode_link ()
+ 	  continue
+ 	  ;;
+ 	weak)
+-	  weak_libs="$weak_libs $arg"
++	  func_append weak_libs " $arg"
+ 	  prev=
+ 	  continue
+ 	  ;;
+ 	xcclinker)
+-	  linker_flags="$linker_flags $qarg"
+-	  compiler_flags="$compiler_flags $qarg"
++	  func_append linker_flags " $qarg"
++	  func_append compiler_flags " $qarg"
+ 	  prev=
+ 	  func_append compile_command " $qarg"
+ 	  func_append finalize_command " $qarg"
+ 	  continue
+ 	  ;;
+ 	xcompiler)
+-	  compiler_flags="$compiler_flags $qarg"
++	  func_append compiler_flags " $qarg"
+ 	  prev=
+ 	  func_append compile_command " $qarg"
+ 	  func_append finalize_command " $qarg"
+ 	  continue
+ 	  ;;
+ 	xlinker)
+-	  linker_flags="$linker_flags $qarg"
+-	  compiler_flags="$compiler_flags $wl$qarg"
++	  func_append linker_flags " $qarg"
++	  func_append compiler_flags " $wl$qarg"
+ 	  prev=
+ 	  func_append compile_command " $wl$qarg"
+ 	  func_append finalize_command " $wl$qarg"
+@@ -4686,15 +5542,16 @@ func_mode_link ()
+ 	;;
+ 
+       -L*)
+-	func_stripname '-L' '' "$arg"
+-	dir=$func_stripname_result
+-	if test -z "$dir"; then
++	func_stripname "-L" '' "$arg"
++	if test -z "$func_stripname_result"; then
+ 	  if test "$#" -gt 0; then
+ 	    func_fatal_error "require no space between \`-L' and \`$1'"
+ 	  else
+ 	    func_fatal_error "need path for \`-L' option"
+ 	  fi
+ 	fi
++	func_resolve_sysroot "$func_stripname_result"
++	dir=$func_resolve_sysroot_result
+ 	# We need an absolute path.
+ 	case $dir in
+ 	[\\/]* | [A-Za-z]:[\\/]*) ;;
+@@ -4706,10 +5563,16 @@ func_mode_link ()
+ 	  ;;
+ 	esac
+ 	case "$deplibs " in
+-	*" -L$dir "*) ;;
++	*" -L$dir "* | *" $arg "*)
++	  # Will only happen for absolute or sysroot arguments
++	  ;;
+ 	*)
+-	  deplibs="$deplibs -L$dir"
+-	  lib_search_path="$lib_search_path $dir"
++	  # Preserve sysroot, but never include relative directories
++	  case $dir in
++	    [\\/]* | [A-Za-z]:[\\/]* | =*) func_append deplibs " $arg" ;;
++	    *) func_append deplibs " -L$dir" ;;
++	  esac
++	  func_append lib_search_path " $dir"
+ 	  ;;
+ 	esac
+ 	case $host in
+@@ -4718,12 +5581,12 @@ func_mode_link ()
+ 	  case :$dllsearchpath: in
+ 	  *":$dir:"*) ;;
+ 	  ::) dllsearchpath=$dir;;
+-	  *) dllsearchpath="$dllsearchpath:$dir";;
++	  *) func_append dllsearchpath ":$dir";;
+ 	  esac
+ 	  case :$dllsearchpath: in
+ 	  *":$testbindir:"*) ;;
+ 	  ::) dllsearchpath=$testbindir;;
+-	  *) dllsearchpath="$dllsearchpath:$testbindir";;
++	  *) func_append dllsearchpath ":$testbindir";;
+ 	  esac
+ 	  ;;
+ 	esac
+@@ -4747,7 +5610,7 @@ func_mode_link ()
+ 	    ;;
+ 	  *-*-rhapsody* | *-*-darwin1.[012])
+ 	    # Rhapsody C and math libraries are in the System framework
+-	    deplibs="$deplibs System.ltframework"
++	    func_append deplibs " System.ltframework"
+ 	    continue
+ 	    ;;
+ 	  *-*-sco3.2v5* | *-*-sco5v6*)
+@@ -4758,9 +5621,6 @@ func_mode_link ()
+ 	    # Compiler inserts libc in the correct place for threads to work
+ 	    test "X$arg" = "X-lc" && continue
+ 	    ;;
+-	  *-*-linux*)
+-	    test "X$arg" = "X-lc" && continue
+-	    ;;
+ 	  esac
+ 	elif test "X$arg" = "X-lc_r"; then
+ 	 case $host in
+@@ -4770,7 +5630,7 @@ func_mode_link ()
+ 	   ;;
+ 	 esac
+ 	fi
+-	deplibs="$deplibs $arg"
++	func_append deplibs " $arg"
+ 	continue
+ 	;;
+ 
+@@ -4782,8 +5642,8 @@ func_mode_link ()
+       # Tru64 UNIX uses -model [arg] to determine the layout of C++
+       # classes, name mangling, and exception handling.
+       # Darwin uses the -arch flag to determine output architecture.
+-      -model|-arch|-isysroot)
+-	compiler_flags="$compiler_flags $arg"
++      -model|-arch|-isysroot|--sysroot)
++	func_append compiler_flags " $arg"
+ 	func_append compile_command " $arg"
+ 	func_append finalize_command " $arg"
+ 	prev=xcompiler
+@@ -4791,12 +5651,12 @@ func_mode_link ()
+ 	;;
+ 
+       -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe|-threads)
+-	compiler_flags="$compiler_flags $arg"
++	func_append compiler_flags " $arg"
+ 	func_append compile_command " $arg"
+ 	func_append finalize_command " $arg"
+ 	case "$new_inherited_linker_flags " in
+ 	    *" $arg "*) ;;
+-	    * ) new_inherited_linker_flags="$new_inherited_linker_flags $arg" ;;
++	    * ) func_append new_inherited_linker_flags " $arg" ;;
+ 	esac
+ 	continue
+ 	;;
+@@ -4863,13 +5723,17 @@ func_mode_link ()
+ 	# We need an absolute path.
+ 	case $dir in
+ 	[\\/]* | [A-Za-z]:[\\/]*) ;;
++	=*)
++	  func_stripname '=' '' "$dir"
++	  dir=$lt_sysroot$func_stripname_result
++	  ;;
+ 	*)
+ 	  func_fatal_error "only absolute run-paths are allowed"
+ 	  ;;
+ 	esac
+ 	case "$xrpath " in
+ 	*" $dir "*) ;;
+-	*) xrpath="$xrpath $dir" ;;
++	*) func_append xrpath " $dir" ;;
+ 	esac
+ 	continue
+ 	;;
+@@ -4922,8 +5786,8 @@ func_mode_link ()
+ 	for flag in $args; do
+ 	  IFS="$save_ifs"
+           func_quote_for_eval "$flag"
+-	  arg="$arg $func_quote_for_eval_result"
+-	  compiler_flags="$compiler_flags $func_quote_for_eval_result"
++	  func_append arg " $func_quote_for_eval_result"
++	  func_append compiler_flags " $func_quote_for_eval_result"
+ 	done
+ 	IFS="$save_ifs"
+ 	func_stripname ' ' '' "$arg"
+@@ -4938,9 +5802,9 @@ func_mode_link ()
+ 	for flag in $args; do
+ 	  IFS="$save_ifs"
+           func_quote_for_eval "$flag"
+-	  arg="$arg $wl$func_quote_for_eval_result"
+-	  compiler_flags="$compiler_flags $wl$func_quote_for_eval_result"
+-	  linker_flags="$linker_flags $func_quote_for_eval_result"
++	  func_append arg " $wl$func_quote_for_eval_result"
++	  func_append compiler_flags " $wl$func_quote_for_eval_result"
++	  func_append linker_flags " $func_quote_for_eval_result"
+ 	done
+ 	IFS="$save_ifs"
+ 	func_stripname ' ' '' "$arg"
+@@ -4968,24 +5832,27 @@ func_mode_link ()
+ 	arg="$func_quote_for_eval_result"
+ 	;;
+ 
+-      # -64, -mips[0-9] enable 64-bit mode on the SGI compiler
+-      # -r[0-9][0-9]* specifies the processor on the SGI compiler
+-      # -xarch=*, -xtarget=* enable 64-bit mode on the Sun compiler
+-      # +DA*, +DD* enable 64-bit mode on the HP compiler
+-      # -q* pass through compiler args for the IBM compiler
+-      # -m*, -t[45]*, -txscale* pass through architecture-specific
+-      # compiler args for GCC
+-      # -F/path gives path to uninstalled frameworks, gcc on darwin
+-      # -p, -pg, --coverage, -fprofile-* pass through profiling flag for GCC
+-      # @file GCC response files
+-      # -tp=* Portland pgcc target processor selection
++      # Flags to be passed through unchanged, with rationale:
++      # -64, -mips[0-9]      enable 64-bit mode for the SGI compiler
++      # -r[0-9][0-9]*        specify processor for the SGI compiler
++      # -xarch=*, -xtarget=* enable 64-bit mode for the Sun compiler
++      # +DA*, +DD*           enable 64-bit mode for the HP compiler
++      # -q*                  compiler args for the IBM compiler
++      # -m*, -t[45]*, -txscale* architecture-specific flags for GCC
++      # -F/path              path to uninstalled frameworks, gcc on darwin
++      # -p, -pg, --coverage, -fprofile-*  profiling flags for GCC
++      # @file                GCC response files
++      # -tp=*                Portland pgcc target processor selection
++      # --sysroot=*          for sysroot support
++      # -O*, -flto*, -fwhopr*, -fuse-linker-plugin GCC link-time optimization
+       -64|-mips[0-9]|-r[0-9][0-9]*|-xarch=*|-xtarget=*|+DA*|+DD*|-q*|-m*| \
+-      -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*)
++      -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*|--sysroot=*| \
++      -O*|-flto*|-fwhopr*|-fuse-linker-plugin)
+         func_quote_for_eval "$arg"
+ 	arg="$func_quote_for_eval_result"
+         func_append compile_command " $arg"
+         func_append finalize_command " $arg"
+-        compiler_flags="$compiler_flags $arg"
++        func_append compiler_flags " $arg"
+         continue
+         ;;
+ 
+@@ -4997,7 +5864,7 @@ func_mode_link ()
+ 
+       *.$objext)
+ 	# A standard object.
+-	objs="$objs $arg"
++	func_append objs " $arg"
+ 	;;
+ 
+       *.lo)
+@@ -5028,7 +5895,7 @@ func_mode_link ()
+ 
+ 	    if test "$prev" = dlfiles; then
+ 	      if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then
+-		dlfiles="$dlfiles $pic_object"
++		func_append dlfiles " $pic_object"
+ 		prev=
+ 		continue
+ 	      else
+@@ -5040,7 +5907,7 @@ func_mode_link ()
+ 	    # CHECK ME:  I think I busted this.  -Ossama
+ 	    if test "$prev" = dlprefiles; then
+ 	      # Preload the old-style object.
+-	      dlprefiles="$dlprefiles $pic_object"
++	      func_append dlprefiles " $pic_object"
+ 	      prev=
+ 	    fi
+ 
+@@ -5085,24 +5952,25 @@ func_mode_link ()
+ 
+       *.$libext)
+ 	# An archive.
+-	deplibs="$deplibs $arg"
+-	old_deplibs="$old_deplibs $arg"
++	func_append deplibs " $arg"
++	func_append old_deplibs " $arg"
+ 	continue
+ 	;;
+ 
+       *.la)
+ 	# A libtool-controlled library.
+ 
++	func_resolve_sysroot "$arg"
+ 	if test "$prev" = dlfiles; then
+ 	  # This library was specified with -dlopen.
+-	  dlfiles="$dlfiles $arg"
++	  func_append dlfiles " $func_resolve_sysroot_result"
+ 	  prev=
+ 	elif test "$prev" = dlprefiles; then
+ 	  # The library was specified with -dlpreopen.
+-	  dlprefiles="$dlprefiles $arg"
++	  func_append dlprefiles " $func_resolve_sysroot_result"
+ 	  prev=
+ 	else
+-	  deplibs="$deplibs $arg"
++	  func_append deplibs " $func_resolve_sysroot_result"
+ 	fi
+ 	continue
+ 	;;
+@@ -5127,7 +5995,7 @@ func_mode_link ()
+       func_fatal_help "the \`$prevarg' option requires an argument"
+ 
+     if test "$export_dynamic" = yes && test -n "$export_dynamic_flag_spec"; then
+-      eval "arg=\"$export_dynamic_flag_spec\""
++      eval arg=\"$export_dynamic_flag_spec\"
+       func_append compile_command " $arg"
+       func_append finalize_command " $arg"
+     fi
+@@ -5144,11 +6012,13 @@ func_mode_link ()
+     else
+       shlib_search_path=
+     fi
+-    eval "sys_lib_search_path=\"$sys_lib_search_path_spec\""
+-    eval "sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\""
++    eval sys_lib_search_path=\"$sys_lib_search_path_spec\"
++    eval sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\"
+ 
+     func_dirname "$output" "/" ""
+     output_objdir="$func_dirname_result$objdir"
++    func_to_tool_file "$output_objdir/"
++    tool_output_objdir=$func_to_tool_file_result
+     # Create the object directory.
+     func_mkdir_p "$output_objdir"
+ 
+@@ -5169,12 +6039,12 @@ func_mode_link ()
+     # Find all interdependent deplibs by searching for libraries
+     # that are linked more than once (e.g. -la -lb -la)
+     for deplib in $deplibs; do
+-      if $opt_duplicate_deps ; then
++      if $opt_preserve_dup_deps ; then
+ 	case "$libs " in
+-	*" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
++	*" $deplib "*) func_append specialdeplibs " $deplib" ;;
+ 	esac
+       fi
+-      libs="$libs $deplib"
++      func_append libs " $deplib"
+     done
+ 
+     if test "$linkmode" = lib; then
+@@ -5187,9 +6057,9 @@ func_mode_link ()
+       if $opt_duplicate_compiler_generated_deps; then
+ 	for pre_post_dep in $predeps $postdeps; do
+ 	  case "$pre_post_deps " in
+-	  *" $pre_post_dep "*) specialdeplibs="$specialdeplibs $pre_post_deps" ;;
++	  *" $pre_post_dep "*) func_append specialdeplibs " $pre_post_deps" ;;
+ 	  esac
+-	  pre_post_deps="$pre_post_deps $pre_post_dep"
++	  func_append pre_post_deps " $pre_post_dep"
+ 	done
+       fi
+       pre_post_deps=
+@@ -5256,8 +6126,9 @@ func_mode_link ()
+ 	for lib in $dlprefiles; do
+ 	  # Ignore non-libtool-libs
+ 	  dependency_libs=
++	  func_resolve_sysroot "$lib"
+ 	  case $lib in
+-	  *.la)	func_source "$lib" ;;
++	  *.la)	func_source "$func_resolve_sysroot_result" ;;
+ 	  esac
+ 
+ 	  # Collect preopened libtool deplibs, except any this library
+@@ -5267,7 +6138,7 @@ func_mode_link ()
+             deplib_base=$func_basename_result
+ 	    case " $weak_libs " in
+ 	    *" $deplib_base "*) ;;
+-	    *) deplibs="$deplibs $deplib" ;;
++	    *) func_append deplibs " $deplib" ;;
+ 	    esac
+ 	  done
+ 	done
+@@ -5288,11 +6159,11 @@ func_mode_link ()
+ 	    compile_deplibs="$deplib $compile_deplibs"
+ 	    finalize_deplibs="$deplib $finalize_deplibs"
+ 	  else
+-	    compiler_flags="$compiler_flags $deplib"
++	    func_append compiler_flags " $deplib"
+ 	    if test "$linkmode" = lib ; then
+ 		case "$new_inherited_linker_flags " in
+ 		    *" $deplib "*) ;;
+-		    * ) new_inherited_linker_flags="$new_inherited_linker_flags $deplib" ;;
++		    * ) func_append new_inherited_linker_flags " $deplib" ;;
+ 		esac
+ 	    fi
+ 	  fi
+@@ -5377,7 +6248,7 @@ func_mode_link ()
+ 	    if test "$linkmode" = lib ; then
+ 		case "$new_inherited_linker_flags " in
+ 		    *" $deplib "*) ;;
+-		    * ) new_inherited_linker_flags="$new_inherited_linker_flags $deplib" ;;
++		    * ) func_append new_inherited_linker_flags " $deplib" ;;
+ 		esac
+ 	    fi
+ 	  fi
+@@ -5390,7 +6261,8 @@ func_mode_link ()
+ 	    test "$pass" = conv && continue
+ 	    newdependency_libs="$deplib $newdependency_libs"
+ 	    func_stripname '-L' '' "$deplib"
+-	    newlib_search_path="$newlib_search_path $func_stripname_result"
++	    func_resolve_sysroot "$func_stripname_result"
++	    func_append newlib_search_path " $func_resolve_sysroot_result"
+ 	    ;;
+ 	  prog)
+ 	    if test "$pass" = conv; then
+@@ -5404,7 +6276,8 @@ func_mode_link ()
+ 	      finalize_deplibs="$deplib $finalize_deplibs"
+ 	    fi
+ 	    func_stripname '-L' '' "$deplib"
+-	    newlib_search_path="$newlib_search_path $func_stripname_result"
++	    func_resolve_sysroot "$func_stripname_result"
++	    func_append newlib_search_path " $func_resolve_sysroot_result"
+ 	    ;;
+ 	  *)
+ 	    func_warning "\`-L' is ignored for archives/objects"
+@@ -5415,17 +6288,21 @@ func_mode_link ()
+ 	-R*)
+ 	  if test "$pass" = link; then
+ 	    func_stripname '-R' '' "$deplib"
+-	    dir=$func_stripname_result
++	    func_resolve_sysroot "$func_stripname_result"
++	    dir=$func_resolve_sysroot_result
+ 	    # Make sure the xrpath contains only unique directories.
+ 	    case "$xrpath " in
+ 	    *" $dir "*) ;;
+-	    *) xrpath="$xrpath $dir" ;;
++	    *) func_append xrpath " $dir" ;;
+ 	    esac
+ 	  fi
+ 	  deplibs="$deplib $deplibs"
+ 	  continue
+ 	  ;;
+-	*.la) lib="$deplib" ;;
++	*.la)
++	  func_resolve_sysroot "$deplib"
++	  lib=$func_resolve_sysroot_result
++	  ;;
+ 	*.$libext)
+ 	  if test "$pass" = conv; then
+ 	    deplibs="$deplib $deplibs"
+@@ -5488,11 +6365,11 @@ func_mode_link ()
+ 	    if test "$pass" = dlpreopen || test "$dlopen_support" != yes || test "$build_libtool_libs" = no; then
+ 	      # If there is no dlopen support or we're linking statically,
+ 	      # we need to preload.
+-	      newdlprefiles="$newdlprefiles $deplib"
++	      func_append newdlprefiles " $deplib"
+ 	      compile_deplibs="$deplib $compile_deplibs"
+ 	      finalize_deplibs="$deplib $finalize_deplibs"
+ 	    else
+-	      newdlfiles="$newdlfiles $deplib"
++	      func_append newdlfiles " $deplib"
+ 	    fi
+ 	  fi
+ 	  continue
+@@ -5538,7 +6415,7 @@ func_mode_link ()
+ 	  for tmp_inherited_linker_flag in $tmp_inherited_linker_flags; do
+ 	    case " $new_inherited_linker_flags " in
+ 	      *" $tmp_inherited_linker_flag "*) ;;
+-	      *) new_inherited_linker_flags="$new_inherited_linker_flags $tmp_inherited_linker_flag";;
++	      *) func_append new_inherited_linker_flags " $tmp_inherited_linker_flag";;
+ 	    esac
+ 	  done
+ 	fi
+@@ -5546,8 +6423,8 @@ func_mode_link ()
+ 	if test "$linkmode,$pass" = "lib,link" ||
+ 	   test "$linkmode,$pass" = "prog,scan" ||
+ 	   { test "$linkmode" != prog && test "$linkmode" != lib; }; then
+-	  test -n "$dlopen" && dlfiles="$dlfiles $dlopen"
+-	  test -n "$dlpreopen" && dlprefiles="$dlprefiles $dlpreopen"
++	  test -n "$dlopen" && func_append dlfiles " $dlopen"
++	  test -n "$dlpreopen" && func_append dlprefiles " $dlpreopen"
+ 	fi
+ 
+ 	if test "$pass" = conv; then
+@@ -5558,20 +6435,20 @@ func_mode_link ()
+ 	      func_fatal_error "cannot find name of link library for \`$lib'"
+ 	    fi
+ 	    # It is a libtool convenience library, so add in its objects.
+-	    convenience="$convenience $ladir/$objdir/$old_library"
+-	    old_convenience="$old_convenience $ladir/$objdir/$old_library"
++	    func_append convenience " $ladir/$objdir/$old_library"
++	    func_append old_convenience " $ladir/$objdir/$old_library"
+ 	  elif test "$linkmode" != prog && test "$linkmode" != lib; then
+ 	    func_fatal_error "\`$lib' is not a convenience library"
+ 	  fi
+ 	  tmp_libs=
+ 	  for deplib in $dependency_libs; do
+ 	    deplibs="$deplib $deplibs"
+-	    if $opt_duplicate_deps ; then
++	    if $opt_preserve_dup_deps ; then
+ 	      case "$tmp_libs " in
+-	      *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
++	      *" $deplib "*) func_append specialdeplibs " $deplib" ;;
+ 	      esac
+ 	    fi
+-	    tmp_libs="$tmp_libs $deplib"
++	    func_append tmp_libs " $deplib"
+ 	  done
+ 	  continue
+ 	fi # $pass = conv
+@@ -5579,9 +6456,15 @@ func_mode_link ()
+ 
+ 	# Get the name of the library we link against.
+ 	linklib=
+-	for l in $old_library $library_names; do
+-	  linklib="$l"
+-	done
++	if test -n "$old_library" &&
++	   { test "$prefer_static_libs" = yes ||
++	     test "$prefer_static_libs,$installed" = "built,no"; }; then
++	  linklib=$old_library
++	else
++	  for l in $old_library $library_names; do
++	    linklib="$l"
++	  done
++	fi
+ 	if test -z "$linklib"; then
+ 	  func_fatal_error "cannot find name of link library for \`$lib'"
+ 	fi
+@@ -5598,9 +6481,9 @@ func_mode_link ()
+ 	    # statically, we need to preload.  We also need to preload any
+ 	    # dependent libraries so libltdl's deplib preloader doesn't
+ 	    # bomb out in the load deplibs phase.
+-	    dlprefiles="$dlprefiles $lib $dependency_libs"
++	    func_append dlprefiles " $lib $dependency_libs"
+ 	  else
+-	    newdlfiles="$newdlfiles $lib"
++	    func_append newdlfiles " $lib"
+ 	  fi
+ 	  continue
+ 	fi # $pass = dlopen
+@@ -5622,14 +6505,14 @@ func_mode_link ()
+ 
+ 	# Find the relevant object directory and library name.
+ 	if test "X$installed" = Xyes; then
+-	  if test ! -f "$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then
++	  if test ! -f "$lt_sysroot$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then
+ 	    func_warning "library \`$lib' was moved."
+ 	    dir="$ladir"
+ 	    absdir="$abs_ladir"
+ 	    libdir="$abs_ladir"
+ 	  else
+-	    dir="$libdir"
+-	    absdir="$libdir"
++	    dir="$lt_sysroot$libdir"
++	    absdir="$lt_sysroot$libdir"
+ 	  fi
+ 	  test "X$hardcode_automatic" = Xyes && avoidtemprpath=yes
+ 	else
+@@ -5637,12 +6520,12 @@ func_mode_link ()
+ 	    dir="$ladir"
+ 	    absdir="$abs_ladir"
+ 	    # Remove this search path later
+-	    notinst_path="$notinst_path $abs_ladir"
++	    func_append notinst_path " $abs_ladir"
+ 	  else
+ 	    dir="$ladir/$objdir"
+ 	    absdir="$abs_ladir/$objdir"
+ 	    # Remove this search path later
+-	    notinst_path="$notinst_path $abs_ladir"
++	    func_append notinst_path " $abs_ladir"
+ 	  fi
+ 	fi # $installed = yes
+ 	func_stripname 'lib' '.la' "$laname"
+@@ -5653,20 +6536,46 @@ func_mode_link ()
+ 	  if test -z "$libdir" && test "$linkmode" = prog; then
+ 	    func_fatal_error "only libraries may -dlpreopen a convenience library: \`$lib'"
+ 	  fi
+-	  # Prefer using a static library (so that no silly _DYNAMIC symbols
+-	  # are required to link).
+-	  if test -n "$old_library"; then
+-	    newdlprefiles="$newdlprefiles $dir/$old_library"
+-	    # Keep a list of preopened convenience libraries to check
+-	    # that they are being used correctly in the link pass.
+-	    test -z "$libdir" && \
+-		dlpreconveniencelibs="$dlpreconveniencelibs $dir/$old_library"
+-	  # Otherwise, use the dlname, so that lt_dlopen finds it.
+-	  elif test -n "$dlname"; then
+-	    newdlprefiles="$newdlprefiles $dir/$dlname"
+-	  else
+-	    newdlprefiles="$newdlprefiles $dir/$linklib"
+-	  fi
++	  case "$host" in
++	    # special handling for platforms with PE-DLLs.
++	    *cygwin* | *mingw* | *cegcc* )
++	      # Linker will automatically link against shared library if both
++	      # static and shared are present.  Therefore, ensure we extract
++	      # symbols from the import library if a shared library is present
++	      # (otherwise, the dlopen module name will be incorrect).  We do
++	      # this by putting the import library name into $newdlprefiles.
++	      # We recover the dlopen module name by 'saving' the la file
++	      # name in a special purpose variable, and (later) extracting the
++	      # dlname from the la file.
++	      if test -n "$dlname"; then
++	        func_tr_sh "$dir/$linklib"
++	        eval "libfile_$func_tr_sh_result=\$abs_ladir/\$laname"
++	        func_append newdlprefiles " $dir/$linklib"
++	      else
++	        func_append newdlprefiles " $dir/$old_library"
++	        # Keep a list of preopened convenience libraries to check
++	        # that they are being used correctly in the link pass.
++	        test -z "$libdir" && \
++	          func_append dlpreconveniencelibs " $dir/$old_library"
++	      fi
++	    ;;
++	    * )
++	      # Prefer using a static library (so that no silly _DYNAMIC symbols
++	      # are required to link).
++	      if test -n "$old_library"; then
++	        func_append newdlprefiles " $dir/$old_library"
++	        # Keep a list of preopened convenience libraries to check
++	        # that they are being used correctly in the link pass.
++	        test -z "$libdir" && \
++	          func_append dlpreconveniencelibs " $dir/$old_library"
++	      # Otherwise, use the dlname, so that lt_dlopen finds it.
++	      elif test -n "$dlname"; then
++	        func_append newdlprefiles " $dir/$dlname"
++	      else
++	        func_append newdlprefiles " $dir/$linklib"
++	      fi
++	    ;;
++	  esac
+ 	fi # $pass = dlpreopen
+ 
+ 	if test -z "$libdir"; then
+@@ -5684,7 +6593,7 @@ func_mode_link ()
+ 
+ 
+ 	if test "$linkmode" = prog && test "$pass" != link; then
+-	  newlib_search_path="$newlib_search_path $ladir"
++	  func_append newlib_search_path " $ladir"
+ 	  deplibs="$lib $deplibs"
+ 
+ 	  linkalldeplibs=no
+@@ -5697,7 +6606,8 @@ func_mode_link ()
+ 	  for deplib in $dependency_libs; do
+ 	    case $deplib in
+ 	    -L*) func_stripname '-L' '' "$deplib"
+-	         newlib_search_path="$newlib_search_path $func_stripname_result"
++	         func_resolve_sysroot "$func_stripname_result"
++	         func_append newlib_search_path " $func_resolve_sysroot_result"
+ 		 ;;
+ 	    esac
+ 	    # Need to link against all dependency_libs?
+@@ -5708,12 +6618,12 @@ func_mode_link ()
+ 	      # or/and link against static libraries
+ 	      newdependency_libs="$deplib $newdependency_libs"
+ 	    fi
+-	    if $opt_duplicate_deps ; then
++	    if $opt_preserve_dup_deps ; then
+ 	      case "$tmp_libs " in
+-	      *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
++	      *" $deplib "*) func_append specialdeplibs " $deplib" ;;
+ 	      esac
+ 	    fi
+-	    tmp_libs="$tmp_libs $deplib"
++	    func_append tmp_libs " $deplib"
+ 	  done # for deplib
+ 	  continue
+ 	fi # $linkmode = prog...
+@@ -5728,7 +6638,7 @@ func_mode_link ()
+ 	      # Make sure the rpath contains only unique directories.
+ 	      case "$temp_rpath:" in
+ 	      *"$absdir:"*) ;;
+-	      *) temp_rpath="$temp_rpath$absdir:" ;;
++	      *) func_append temp_rpath "$absdir:" ;;
+ 	      esac
+ 	    fi
+ 
+@@ -5740,7 +6650,7 @@ func_mode_link ()
+ 	    *)
+ 	      case "$compile_rpath " in
+ 	      *" $absdir "*) ;;
+-	      *) compile_rpath="$compile_rpath $absdir"
++	      *) func_append compile_rpath " $absdir" ;;
+ 	      esac
+ 	      ;;
+ 	    esac
+@@ -5749,7 +6659,7 @@ func_mode_link ()
+ 	    *)
+ 	      case "$finalize_rpath " in
+ 	      *" $libdir "*) ;;
+-	      *) finalize_rpath="$finalize_rpath $libdir"
++	      *) func_append finalize_rpath " $libdir" ;;
+ 	      esac
+ 	      ;;
+ 	    esac
+@@ -5774,12 +6684,12 @@ func_mode_link ()
+ 	  case $host in
+ 	  *cygwin* | *mingw* | *cegcc*)
+ 	      # No point in relinking DLLs because paths are not encoded
+-	      notinst_deplibs="$notinst_deplibs $lib"
++	      func_append notinst_deplibs " $lib"
+ 	      need_relink=no
+ 	    ;;
+ 	  *)
+ 	    if test "$installed" = no; then
+-	      notinst_deplibs="$notinst_deplibs $lib"
++	      func_append notinst_deplibs " $lib"
+ 	      need_relink=yes
+ 	    fi
+ 	    ;;
+@@ -5814,7 +6724,7 @@ func_mode_link ()
+ 	    *)
+ 	      case "$compile_rpath " in
+ 	      *" $absdir "*) ;;
+-	      *) compile_rpath="$compile_rpath $absdir"
++	      *) func_append compile_rpath " $absdir" ;;
+ 	      esac
+ 	      ;;
+ 	    esac
+@@ -5823,7 +6733,7 @@ func_mode_link ()
+ 	    *)
+ 	      case "$finalize_rpath " in
+ 	      *" $libdir "*) ;;
+-	      *) finalize_rpath="$finalize_rpath $libdir"
++	      *) func_append finalize_rpath " $libdir" ;;
+ 	      esac
+ 	      ;;
+ 	    esac
+@@ -5835,7 +6745,7 @@ func_mode_link ()
+ 	    shift
+ 	    realname="$1"
+ 	    shift
+-	    eval "libname=\"$libname_spec\""
++	    libname=`eval "\\$ECHO \"$libname_spec\""`
+ 	    # use dlname if we got it. it's perfectly good, no?
+ 	    if test -n "$dlname"; then
+ 	      soname="$dlname"
+@@ -5848,7 +6758,7 @@ func_mode_link ()
+ 		versuffix="-$major"
+ 		;;
+ 	      esac
+-	      eval "soname=\"$soname_spec\""
++	      eval soname=\"$soname_spec\"
+ 	    else
+ 	      soname="$realname"
+ 	    fi
+@@ -5877,7 +6787,7 @@ func_mode_link ()
+ 	    linklib=$newlib
+ 	  fi # test -n "$old_archive_from_expsyms_cmds"
+ 
+-	  if test "$linkmode" = prog || test "$mode" != relink; then
++	  if test "$linkmode" = prog || test "$opt_mode" != relink; then
+ 	    add_shlibpath=
+ 	    add_dir=
+ 	    add=
+@@ -5933,7 +6843,7 @@ func_mode_link ()
+ 		if test -n "$inst_prefix_dir"; then
+ 		  case $libdir in
+ 		    [\\/]*)
+-		      add_dir="$add_dir -L$inst_prefix_dir$libdir"
++		      func_append add_dir " -L$inst_prefix_dir$libdir"
+ 		      ;;
+ 		  esac
+ 		fi
+@@ -5955,7 +6865,7 @@ func_mode_link ()
+ 	    if test -n "$add_shlibpath"; then
+ 	      case :$compile_shlibpath: in
+ 	      *":$add_shlibpath:"*) ;;
+-	      *) compile_shlibpath="$compile_shlibpath$add_shlibpath:" ;;
++	      *) func_append compile_shlibpath "$add_shlibpath:" ;;
+ 	      esac
+ 	    fi
+ 	    if test "$linkmode" = prog; then
+@@ -5969,13 +6879,13 @@ func_mode_link ()
+ 		 test "$hardcode_shlibpath_var" = yes; then
+ 		case :$finalize_shlibpath: in
+ 		*":$libdir:"*) ;;
+-		*) finalize_shlibpath="$finalize_shlibpath$libdir:" ;;
++		*) func_append finalize_shlibpath "$libdir:" ;;
+ 		esac
+ 	      fi
+ 	    fi
+ 	  fi
+ 
+-	  if test "$linkmode" = prog || test "$mode" = relink; then
++	  if test "$linkmode" = prog || test "$opt_mode" = relink; then
+ 	    add_shlibpath=
+ 	    add_dir=
+ 	    add=
+@@ -5989,7 +6899,7 @@ func_mode_link ()
+ 	    elif test "$hardcode_shlibpath_var" = yes; then
+ 	      case :$finalize_shlibpath: in
+ 	      *":$libdir:"*) ;;
+-	      *) finalize_shlibpath="$finalize_shlibpath$libdir:" ;;
++	      *) func_append finalize_shlibpath "$libdir:" ;;
+ 	      esac
+ 	      add="-l$name"
+ 	    elif test "$hardcode_automatic" = yes; then
+@@ -6001,12 +6911,12 @@ func_mode_link ()
+ 	      fi
+ 	    else
+ 	      # We cannot seem to hardcode it, guess we'll fake it.
+-	      add_dir="-L$libdir"
++	      add_dir="-L$lt_sysroot$libdir"
+ 	      # Try looking first in the location we're being installed to.
+ 	      if test -n "$inst_prefix_dir"; then
+ 		case $libdir in
+ 		  [\\/]*)
+-		    add_dir="$add_dir -L$inst_prefix_dir$libdir"
++		    func_append add_dir " -L$inst_prefix_dir$libdir"
+ 		    ;;
+ 		esac
+ 	      fi
+@@ -6083,27 +6993,33 @@ func_mode_link ()
+ 	           temp_xrpath=$func_stripname_result
+ 		   case " $xrpath " in
+ 		   *" $temp_xrpath "*) ;;
+-		   *) xrpath="$xrpath $temp_xrpath";;
++		   *) func_append xrpath " $temp_xrpath";;
+ 		   esac;;
+-	      *) temp_deplibs="$temp_deplibs $libdir";;
++	      *) func_append temp_deplibs " $libdir";;
+ 	      esac
+ 	    done
+ 	    dependency_libs="$temp_deplibs"
+ 	  fi
+ 
+-	  newlib_search_path="$newlib_search_path $absdir"
++	  func_append newlib_search_path " $absdir"
+ 	  # Link against this library
+ 	  test "$link_static" = no && newdependency_libs="$abs_ladir/$laname $newdependency_libs"
+ 	  # ... and its dependency_libs
+ 	  tmp_libs=
+ 	  for deplib in $dependency_libs; do
+ 	    newdependency_libs="$deplib $newdependency_libs"
+-	    if $opt_duplicate_deps ; then
++	    case $deplib in
++              -L*) func_stripname '-L' '' "$deplib"
++                   func_resolve_sysroot "$func_stripname_result";;
++              *) func_resolve_sysroot "$deplib" ;;
++            esac
++	    if $opt_preserve_dup_deps ; then
+ 	      case "$tmp_libs " in
+-	      *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
++	      *" $func_resolve_sysroot_result "*)
++                func_append specialdeplibs " $func_resolve_sysroot_result" ;;
+ 	      esac
+ 	    fi
+-	    tmp_libs="$tmp_libs $deplib"
++	    func_append tmp_libs " $func_resolve_sysroot_result"
+ 	  done
+ 
+ 	  if test "$link_all_deplibs" != no; then
+@@ -6113,8 +7029,10 @@ func_mode_link ()
+ 	      case $deplib in
+ 	      -L*) path="$deplib" ;;
+ 	      *.la)
++	        func_resolve_sysroot "$deplib"
++	        deplib=$func_resolve_sysroot_result
+ 	        func_dirname "$deplib" "" "."
+-		dir="$func_dirname_result"
++		dir=$func_dirname_result
+ 		# We need an absolute path.
+ 		case $dir in
+ 		[\\/]* | [A-Za-z]:[\\/]*) absdir="$dir" ;;
+@@ -6130,7 +7048,7 @@ func_mode_link ()
+ 		case $host in
+ 		*-*-darwin*)
+ 		  depdepl=
+-		  deplibrary_names=`${SED} -n -e 's/^library_names=\(.*\)$/\1/p' $deplib`
++		  eval deplibrary_names=`${SED} -n -e 's/^library_names=\(.*\)$/\1/p' $deplib`
+ 		  if test -n "$deplibrary_names" ; then
+ 		    for tmp in $deplibrary_names ; do
+ 		      depdepl=$tmp
+@@ -6141,8 +7059,8 @@ func_mode_link ()
+                       if test -z "$darwin_install_name"; then
+                           darwin_install_name=`${OTOOL64} -L $depdepl  | awk '{if (NR == 2) {print $1;exit}}'`
+                       fi
+-		      compiler_flags="$compiler_flags ${wl}-dylib_file ${wl}${darwin_install_name}:${depdepl}"
+-		      linker_flags="$linker_flags -dylib_file ${darwin_install_name}:${depdepl}"
++		      func_append compiler_flags " ${wl}-dylib_file ${wl}${darwin_install_name}:${depdepl}"
++		      func_append linker_flags " -dylib_file ${darwin_install_name}:${depdepl}"
+ 		      path=
+ 		    fi
+ 		  fi
+@@ -6152,7 +7070,7 @@ func_mode_link ()
+ 		  ;;
+ 		esac
+ 		else
+-		  libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
++		  eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
+ 		  test -z "$libdir" && \
+ 		    func_fatal_error "\`$deplib' is not a valid libtool archive"
+ 		  test "$absdir" != "$libdir" && \
+@@ -6192,7 +7110,7 @@ func_mode_link ()
+ 	  for dir in $newlib_search_path; do
+ 	    case "$lib_search_path " in
+ 	    *" $dir "*) ;;
+-	    *) lib_search_path="$lib_search_path $dir" ;;
++	    *) func_append lib_search_path " $dir" ;;
+ 	    esac
+ 	  done
+ 	  newlib_search_path=
+@@ -6205,7 +7123,7 @@ func_mode_link ()
+ 	fi
+ 	for var in $vars dependency_libs; do
+ 	  # Add libraries to $var in reverse order
+-	  eval tmp_libs=\$$var
++	  eval tmp_libs=\"\$$var\"
+ 	  new_libs=
+ 	  for deplib in $tmp_libs; do
+ 	    # FIXME: Pedantically, this is the right thing to do, so
+@@ -6250,13 +7168,13 @@ func_mode_link ()
+ 	    -L*)
+ 	      case " $tmp_libs " in
+ 	      *" $deplib "*) ;;
+-	      *) tmp_libs="$tmp_libs $deplib" ;;
++	      *) func_append tmp_libs " $deplib" ;;
+ 	      esac
+ 	      ;;
+-	    *) tmp_libs="$tmp_libs $deplib" ;;
++	    *) func_append tmp_libs " $deplib" ;;
+ 	    esac
+ 	  done
+-	  eval $var=\$tmp_libs
++	  eval $var=\"$tmp_libs\"
+ 	done # for var
+       fi
+       # Last step: remove runtime libs from dependency_libs
+@@ -6269,7 +7187,7 @@ func_mode_link ()
+ 	  ;;
+ 	esac
+ 	if test -n "$i" ; then
+-	  tmp_libs="$tmp_libs $i"
++	  func_append tmp_libs " $i"
+ 	fi
+       done
+       dependency_libs=$tmp_libs
+@@ -6310,7 +7228,7 @@ func_mode_link ()
+       # Now set the variables for building old libraries.
+       build_libtool_libs=no
+       oldlibs="$output"
+-      objs="$objs$old_deplibs"
++      func_append objs "$old_deplibs"
+       ;;
+ 
+     lib)
+@@ -6319,8 +7237,8 @@ func_mode_link ()
+       lib*)
+ 	func_stripname 'lib' '.la' "$outputname"
+ 	name=$func_stripname_result
+-	eval "shared_ext=\"$shrext_cmds\""
+-	eval "libname=\"$libname_spec\""
++	eval shared_ext=\"$shrext_cmds\"
++	eval libname=\"$libname_spec\"
+ 	;;
+       *)
+ 	test "$module" = no && \
+@@ -6330,8 +7248,8 @@ func_mode_link ()
+ 	  # Add the "lib" prefix for modules if required
+ 	  func_stripname '' '.la' "$outputname"
+ 	  name=$func_stripname_result
+-	  eval "shared_ext=\"$shrext_cmds\""
+-	  eval "libname=\"$libname_spec\""
++	  eval shared_ext=\"$shrext_cmds\"
++	  eval libname=\"$libname_spec\"
+ 	else
+ 	  func_stripname '' '.la' "$outputname"
+ 	  libname=$func_stripname_result
+@@ -6346,7 +7264,7 @@ func_mode_link ()
+ 	  echo
+ 	  $ECHO "*** Warning: Linking the shared library $output against the non-libtool"
+ 	  $ECHO "*** objects $objs is not portable!"
+-	  libobjs="$libobjs $objs"
++	  func_append libobjs " $objs"
+ 	fi
+       fi
+ 
+@@ -6544,7 +7462,7 @@ func_mode_link ()
+ 	  done
+ 
+ 	  # Make executables depend on our current version.
+-	  verstring="$verstring:${current}.0"
++	  func_append verstring ":${current}.0"
+ 	  ;;
+ 
+ 	qnx)
+@@ -6612,10 +7530,10 @@ func_mode_link ()
+       fi
+ 
+       func_generate_dlsyms "$libname" "$libname" "yes"
+-      libobjs="$libobjs $symfileobj"
++      func_append libobjs " $symfileobj"
+       test "X$libobjs" = "X " && libobjs=
+ 
+-      if test "$mode" != relink; then
++      if test "$opt_mode" != relink; then
+ 	# Remove our outputs, but don't remove object files since they
+ 	# may have been created when compiling PIC objects.
+ 	removelist=
+@@ -6631,7 +7549,7 @@ func_mode_link ()
+ 		   continue
+ 		 fi
+ 	       fi
+-	       removelist="$removelist $p"
++	       func_append removelist " $p"
+ 	       ;;
+ 	    *) ;;
+ 	  esac
+@@ -6642,7 +7560,7 @@ func_mode_link ()
+ 
+       # Now set the variables for building old libraries.
+       if test "$build_old_libs" = yes && test "$build_libtool_libs" != convenience ; then
+-	oldlibs="$oldlibs $output_objdir/$libname.$libext"
++	func_append oldlibs " $output_objdir/$libname.$libext"
+ 
+ 	# Transform .lo files to .o files.
+ 	oldobjs="$objs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; $lo2o" | $NL2SP`
+@@ -6659,10 +7577,11 @@ func_mode_link ()
+ 	# If the user specified any rpath flags, then add them.
+ 	temp_xrpath=
+ 	for libdir in $xrpath; do
+-	  temp_xrpath="$temp_xrpath -R$libdir"
++	  func_replace_sysroot "$libdir"
++	  func_append temp_xrpath " -R$func_replace_sysroot_result"
+ 	  case "$finalize_rpath " in
+ 	  *" $libdir "*) ;;
+-	  *) finalize_rpath="$finalize_rpath $libdir" ;;
++	  *) func_append finalize_rpath " $libdir" ;;
+ 	  esac
+ 	done
+ 	if test "$hardcode_into_libs" != yes || test "$build_old_libs" = yes; then
+@@ -6676,7 +7595,7 @@ func_mode_link ()
+       for lib in $old_dlfiles; do
+ 	case " $dlprefiles $dlfiles " in
+ 	*" $lib "*) ;;
+-	*) dlfiles="$dlfiles $lib" ;;
++	*) func_append dlfiles " $lib" ;;
+ 	esac
+       done
+ 
+@@ -6686,7 +7605,7 @@ func_mode_link ()
+       for lib in $old_dlprefiles; do
+ 	case "$dlprefiles " in
+ 	*" $lib "*) ;;
+-	*) dlprefiles="$dlprefiles $lib" ;;
++	*) func_append dlprefiles " $lib" ;;
+ 	esac
+       done
+ 
+@@ -6698,7 +7617,7 @@ func_mode_link ()
+ 	    ;;
+ 	  *-*-rhapsody* | *-*-darwin1.[012])
+ 	    # Rhapsody C library is in the System framework
+-	    deplibs="$deplibs System.ltframework"
++	    func_append deplibs " System.ltframework"
+ 	    ;;
+ 	  *-*-netbsd*)
+ 	    # Don't link with libc until the a.out ld.so is fixed.
+@@ -6715,7 +7634,7 @@ func_mode_link ()
+ 	  *)
+ 	    # Add libc to deplibs on all other systems if necessary.
+ 	    if test "$build_libtool_need_lc" = "yes"; then
+-	      deplibs="$deplibs -lc"
++	      func_append deplibs " -lc"
+ 	    fi
+ 	    ;;
+ 	  esac
+@@ -6764,18 +7683,18 @@ EOF
+ 		if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ 		  case " $predeps $postdeps " in
+ 		  *" $i "*)
+-		    newdeplibs="$newdeplibs $i"
++		    func_append newdeplibs " $i"
+ 		    i=""
+ 		    ;;
+ 		  esac
+ 		fi
+ 		if test -n "$i" ; then
+-		  eval "libname=\"$libname_spec\""
+-		  eval "deplib_matches=\"$library_names_spec\""
++		  libname=`eval "\\$ECHO \"$libname_spec\""`
++		  deplib_matches=`eval "\\$ECHO \"$library_names_spec\""`
+ 		  set dummy $deplib_matches; shift
+ 		  deplib_match=$1
+ 		  if test `expr "$ldd_output" : ".*$deplib_match"` -ne 0 ; then
+-		    newdeplibs="$newdeplibs $i"
++		    func_append newdeplibs " $i"
+ 		  else
+ 		    droppeddeps=yes
+ 		    echo
+@@ -6789,7 +7708,7 @@ EOF
+ 		fi
+ 		;;
+ 	      *)
+-		newdeplibs="$newdeplibs $i"
++		func_append newdeplibs " $i"
+ 		;;
+ 	      esac
+ 	    done
+@@ -6807,18 +7726,18 @@ EOF
+ 		  if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ 		    case " $predeps $postdeps " in
+ 		    *" $i "*)
+-		      newdeplibs="$newdeplibs $i"
++		      func_append newdeplibs " $i"
+ 		      i=""
+ 		      ;;
+ 		    esac
+ 		  fi
+ 		  if test -n "$i" ; then
+-		    eval "libname=\"$libname_spec\""
+-		    eval "deplib_matches=\"$library_names_spec\""
++		    libname=`eval "\\$ECHO \"$libname_spec\""`
++		    deplib_matches=`eval "\\$ECHO \"$library_names_spec\""`
+ 		    set dummy $deplib_matches; shift
+ 		    deplib_match=$1
+ 		    if test `expr "$ldd_output" : ".*$deplib_match"` -ne 0 ; then
+-		      newdeplibs="$newdeplibs $i"
++		      func_append newdeplibs " $i"
+ 		    else
+ 		      droppeddeps=yes
+ 		      echo
+@@ -6840,7 +7759,7 @@ EOF
+ 		fi
+ 		;;
+ 	      *)
+-		newdeplibs="$newdeplibs $i"
++		func_append newdeplibs " $i"
+ 		;;
+ 	      esac
+ 	    done
+@@ -6857,15 +7776,27 @@ EOF
+ 	      if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ 		case " $predeps $postdeps " in
+ 		*" $a_deplib "*)
+-		  newdeplibs="$newdeplibs $a_deplib"
++		  func_append newdeplibs " $a_deplib"
+ 		  a_deplib=""
+ 		  ;;
+ 		esac
+ 	      fi
+ 	      if test -n "$a_deplib" ; then
+-		eval "libname=\"$libname_spec\""
++		libname=`eval "\\$ECHO \"$libname_spec\""`
++		if test -n "$file_magic_glob"; then
++		  libnameglob=`func_echo_all "$libname" | $SED -e $file_magic_glob`
++		else
++		  libnameglob=$libname
++		fi
++		test "$want_nocaseglob" = yes && nocaseglob=`shopt -p nocaseglob`
+ 		for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do
+-		  potential_libs=`ls $i/$libname[.-]* 2>/dev/null`
++		  if test "$want_nocaseglob" = yes; then
++		    shopt -s nocaseglob
++		    potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null`
++		    $nocaseglob
++		  else
++		    potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null`
++		  fi
+ 		  for potent_lib in $potential_libs; do
+ 		      # Follow soft links.
+ 		      if ls -lLd "$potent_lib" 2>/dev/null |
+@@ -6885,10 +7816,10 @@ EOF
+ 			*) potlib=`$ECHO "$potlib" | $SED 's,[^/]*$,,'`"$potliblink";;
+ 			esac
+ 		      done
+-		      if eval "$file_magic_cmd \"\$potlib\"" 2>/dev/null |
++		      if eval $file_magic_cmd \"\$potlib\" 2>/dev/null |
+ 			 $SED -e 10q |
+ 			 $EGREP "$file_magic_regex" > /dev/null; then
+-			newdeplibs="$newdeplibs $a_deplib"
++			func_append newdeplibs " $a_deplib"
+ 			a_deplib=""
+ 			break 2
+ 		      fi
+@@ -6913,7 +7844,7 @@ EOF
+ 	      ;;
+ 	    *)
+ 	      # Add a -L argument.
+-	      newdeplibs="$newdeplibs $a_deplib"
++	      func_append newdeplibs " $a_deplib"
+ 	      ;;
+ 	    esac
+ 	  done # Gone through all deplibs.
+@@ -6929,20 +7860,20 @@ EOF
+ 	      if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
+ 		case " $predeps $postdeps " in
+ 		*" $a_deplib "*)
+-		  newdeplibs="$newdeplibs $a_deplib"
++		  func_append newdeplibs " $a_deplib"
+ 		  a_deplib=""
+ 		  ;;
+ 		esac
+ 	      fi
+ 	      if test -n "$a_deplib" ; then
+-		eval "libname=\"$libname_spec\""
++		libname=`eval "\\$ECHO \"$libname_spec\""`
+ 		for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do
+ 		  potential_libs=`ls $i/$libname[.-]* 2>/dev/null`
+ 		  for potent_lib in $potential_libs; do
+ 		    potlib="$potent_lib" # see symlink-check above in file_magic test
+ 		    if eval "\$ECHO \"$potent_lib\"" 2>/dev/null | $SED 10q | \
+ 		       $EGREP "$match_pattern_regex" > /dev/null; then
+-		      newdeplibs="$newdeplibs $a_deplib"
++		      func_append newdeplibs " $a_deplib"
+ 		      a_deplib=""
+ 		      break 2
+ 		    fi
+@@ -6967,7 +7898,7 @@ EOF
+ 	      ;;
+ 	    *)
+ 	      # Add a -L argument.
+-	      newdeplibs="$newdeplibs $a_deplib"
++	      func_append newdeplibs " $a_deplib"
+ 	      ;;
+ 	    esac
+ 	  done # Gone through all deplibs.
+@@ -7071,7 +8002,7 @@ EOF
+ 	*)
+ 	  case " $deplibs " in
+ 	  *" -L$path/$objdir "*)
+-	    new_libs="$new_libs -L$path/$objdir" ;;
++	    func_append new_libs " -L$path/$objdir" ;;
+ 	  esac
+ 	  ;;
+ 	esac
+@@ -7081,10 +8012,10 @@ EOF
+ 	-L*)
+ 	  case " $new_libs " in
+ 	  *" $deplib "*) ;;
+-	  *) new_libs="$new_libs $deplib" ;;
++	  *) func_append new_libs " $deplib" ;;
+ 	  esac
+ 	  ;;
+-	*) new_libs="$new_libs $deplib" ;;
++	*) func_append new_libs " $deplib" ;;
+ 	esac
+       done
+       deplibs="$new_libs"
+@@ -7101,10 +8032,12 @@ EOF
+ 	  hardcode_libdirs=
+ 	  dep_rpath=
+ 	  rpath="$finalize_rpath"
+-	  test "$mode" != relink && rpath="$compile_rpath$rpath"
++	  test "$opt_mode" != relink && rpath="$compile_rpath$rpath"
+ 	  for libdir in $rpath; do
+ 	    if test -n "$hardcode_libdir_flag_spec"; then
+ 	      if test -n "$hardcode_libdir_separator"; then
++		func_replace_sysroot "$libdir"
++		libdir=$func_replace_sysroot_result
+ 		if test -z "$hardcode_libdirs"; then
+ 		  hardcode_libdirs="$libdir"
+ 		else
+@@ -7113,18 +8046,18 @@ EOF
+ 		  *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
+ 		    ;;
+ 		  *)
+-		    hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
++		    func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
+ 		    ;;
+ 		  esac
+ 		fi
+ 	      else
+-		eval "flag=\"$hardcode_libdir_flag_spec\""
+-		dep_rpath="$dep_rpath $flag"
++		eval flag=\"$hardcode_libdir_flag_spec\"
++		func_append dep_rpath " $flag"
+ 	      fi
+ 	    elif test -n "$runpath_var"; then
+ 	      case "$perm_rpath " in
+ 	      *" $libdir "*) ;;
+-	      *) perm_rpath="$perm_rpath $libdir" ;;
++	      *) func_append perm_rpath " $libdir" ;;
+ 	      esac
+ 	    fi
+ 	  done
+@@ -7133,40 +8066,38 @@ EOF
+ 	     test -n "$hardcode_libdirs"; then
+ 	    libdir="$hardcode_libdirs"
+ 	    if test -n "$hardcode_libdir_flag_spec_ld"; then
+-	      eval "dep_rpath=\"$hardcode_libdir_flag_spec_ld\""
++	      eval dep_rpath=\"$hardcode_libdir_flag_spec_ld\"
+ 	    else
+-	      eval "dep_rpath=\"$hardcode_libdir_flag_spec\""
++	      eval dep_rpath=\"$hardcode_libdir_flag_spec\"
+ 	    fi
+ 	  fi
+ 	  if test -n "$runpath_var" && test -n "$perm_rpath"; then
+ 	    # We should set the runpath_var.
+ 	    rpath=
+ 	    for dir in $perm_rpath; do
+-	      rpath="$rpath$dir:"
++	      func_append rpath "$dir:"
+ 	    done
+-	    eval $runpath_var=\$rpath\$$runpath_var
+-	    export $runpath_var
++	    eval "$runpath_var='$rpath\$$runpath_var'; export $runpath_var"
+ 	  fi
+ 	  test -n "$dep_rpath" && deplibs="$dep_rpath $deplibs"
+ 	fi
+ 
+ 	shlibpath="$finalize_shlibpath"
+-	test "$mode" != relink && shlibpath="$compile_shlibpath$shlibpath"
++	test "$opt_mode" != relink && shlibpath="$compile_shlibpath$shlibpath"
+ 	if test -n "$shlibpath"; then
+-	  eval $shlibpath_var=\$shlibpath\$$shlibpath_var
+-	  export $shlibpath_var
++	  eval "$shlibpath_var='$shlibpath\$$shlibpath_var'; export $shlibpath_var"
+ 	fi
+ 
+ 	# Get the real and link names of the library.
+-	eval "shared_ext=\"$shrext_cmds\""
+-	eval "library_names=\"$library_names_spec\""
++	eval shared_ext=\"$shrext_cmds\"
++	eval library_names=\"$library_names_spec\"
+ 	set dummy $library_names
+ 	shift
+ 	realname="$1"
+ 	shift
+ 
+ 	if test -n "$soname_spec"; then
+-	  eval "soname=\"$soname_spec\""
++	  eval soname=\"$soname_spec\"
+ 	else
+ 	  soname="$realname"
+ 	fi
+@@ -7178,7 +8109,7 @@ EOF
+ 	linknames=
+ 	for link
+ 	do
+-	  linknames="$linknames $link"
++	  func_append linknames " $link"
+ 	done
+ 
+ 	# Use standard objects if they are pic
+@@ -7189,7 +8120,7 @@ EOF
+ 	if test -n "$export_symbols" && test -n "$include_expsyms"; then
+ 	  $opt_dry_run || cp "$export_symbols" "$output_objdir/$libname.uexp"
+ 	  export_symbols="$output_objdir/$libname.uexp"
+-	  delfiles="$delfiles $export_symbols"
++	  func_append delfiles " $export_symbols"
+ 	fi
+ 
+ 	orig_export_symbols=
+@@ -7220,13 +8151,45 @@ EOF
+ 	    $opt_dry_run || $RM $export_symbols
+ 	    cmds=$export_symbols_cmds
+ 	    save_ifs="$IFS"; IFS='~'
+-	    for cmd in $cmds; do
++	    for cmd1 in $cmds; do
+ 	      IFS="$save_ifs"
+-	      eval "cmd=\"$cmd\""
+-	      func_len " $cmd"
+-	      len=$func_len_result
+-	      if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then
++	      # Take the normal branch if the nm_file_list_spec branch
++	      # doesn't work or if tool conversion is not needed.
++	      case $nm_file_list_spec~$to_tool_file_cmd in
++		*~func_convert_file_noop | *~func_convert_file_msys_to_w32 | ~*)
++		  try_normal_branch=yes
++		  eval cmd=\"$cmd1\"
++		  func_len " $cmd"
++		  len=$func_len_result
++		  ;;
++		*)
++		  try_normal_branch=no
++		  ;;
++	      esac
++	      if test "$try_normal_branch" = yes \
++		 && { test "$len" -lt "$max_cmd_len" \
++		      || test "$max_cmd_len" -le -1; }
++	      then
++		func_show_eval "$cmd" 'exit $?'
++		skipped_export=false
++	      elif test -n "$nm_file_list_spec"; then
++		func_basename "$output"
++		output_la=$func_basename_result
++		save_libobjs=$libobjs
++		save_output=$output
++		output=${output_objdir}/${output_la}.nm
++		func_to_tool_file "$output"
++		libobjs=$nm_file_list_spec$func_to_tool_file_result
++		func_append delfiles " $output"
++		func_verbose "creating $NM input file list: $output"
++		for obj in $save_libobjs; do
++		  func_to_tool_file "$obj"
++		  $ECHO "$func_to_tool_file_result"
++		done > "$output"
++		eval cmd=\"$cmd1\"
+ 		func_show_eval "$cmd" 'exit $?'
++		output=$save_output
++		libobjs=$save_libobjs
+ 		skipped_export=false
+ 	      else
+ 		# The command line is too long to execute in one step.
+@@ -7248,7 +8211,7 @@ EOF
+ 	if test -n "$export_symbols" && test -n "$include_expsyms"; then
+ 	  tmp_export_symbols="$export_symbols"
+ 	  test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols"
+-	  $opt_dry_run || $ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"
++	  $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"'
+ 	fi
+ 
+ 	if test "X$skipped_export" != "X:" && test -n "$orig_export_symbols"; then
+@@ -7260,7 +8223,7 @@ EOF
+ 	  # global variables. join(1) would be nice here, but unfortunately
+ 	  # isn't a blessed tool.
+ 	  $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter
+-	  delfiles="$delfiles $export_symbols $output_objdir/$libname.filter"
++	  func_append delfiles " $export_symbols $output_objdir/$libname.filter"
+ 	  export_symbols=$output_objdir/$libname.def
+ 	  $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols
+ 	fi
+@@ -7270,7 +8233,7 @@ EOF
+ 	  case " $convenience " in
+ 	  *" $test_deplib "*) ;;
+ 	  *)
+-	    tmp_deplibs="$tmp_deplibs $test_deplib"
++	    func_append tmp_deplibs " $test_deplib"
+ 	    ;;
+ 	  esac
+ 	done
+@@ -7286,43 +8249,43 @@ EOF
+ 	  fi
+ 	  if test -n "$whole_archive_flag_spec"; then
+ 	    save_libobjs=$libobjs
+-	    eval "libobjs=\"\$libobjs $whole_archive_flag_spec\""
++	    eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
+ 	    test "X$libobjs" = "X " && libobjs=
+ 	  else
+ 	    gentop="$output_objdir/${outputname}x"
+-	    generated="$generated $gentop"
++	    func_append generated " $gentop"
+ 
+ 	    func_extract_archives $gentop $convenience
+-	    libobjs="$libobjs $func_extract_archives_result"
++	    func_append libobjs " $func_extract_archives_result"
+ 	    test "X$libobjs" = "X " && libobjs=
+ 	  fi
+ 	fi
+ 
+ 	if test "$thread_safe" = yes && test -n "$thread_safe_flag_spec"; then
+-	  eval "flag=\"$thread_safe_flag_spec\""
+-	  linker_flags="$linker_flags $flag"
++	  eval flag=\"$thread_safe_flag_spec\"
++	  func_append linker_flags " $flag"
+ 	fi
+ 
+ 	# Make a backup of the uninstalled library when relinking
+-	if test "$mode" = relink; then
+-	  $opt_dry_run || (cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U) || exit $?
++	if test "$opt_mode" = relink; then
++	  $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U)' || exit $?
+ 	fi
+ 
+ 	# Do each of the archive commands.
+ 	if test "$module" = yes && test -n "$module_cmds" ; then
+ 	  if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then
+-	    eval "test_cmds=\"$module_expsym_cmds\""
++	    eval test_cmds=\"$module_expsym_cmds\"
+ 	    cmds=$module_expsym_cmds
+ 	  else
+-	    eval "test_cmds=\"$module_cmds\""
++	    eval test_cmds=\"$module_cmds\"
+ 	    cmds=$module_cmds
+ 	  fi
+ 	else
+ 	  if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then
+-	    eval "test_cmds=\"$archive_expsym_cmds\""
++	    eval test_cmds=\"$archive_expsym_cmds\"
+ 	    cmds=$archive_expsym_cmds
+ 	  else
+-	    eval "test_cmds=\"$archive_cmds\""
++	    eval test_cmds=\"$archive_cmds\"
+ 	    cmds=$archive_cmds
+ 	  fi
+ 	fi
+@@ -7366,10 +8329,13 @@ EOF
+ 	    echo 'INPUT (' > $output
+ 	    for obj in $save_libobjs
+ 	    do
+-	      $ECHO "$obj" >> $output
++	      func_to_tool_file "$obj"
++	      $ECHO "$func_to_tool_file_result" >> $output
+ 	    done
+ 	    echo ')' >> $output
+-	    delfiles="$delfiles $output"
++	    func_append delfiles " $output"
++	    func_to_tool_file "$output"
++	    output=$func_to_tool_file_result
+ 	  elif test -n "$save_libobjs" && test "X$skipped_export" != "X:" && test "X$file_list_spec" != X; then
+ 	    output=${output_objdir}/${output_la}.lnk
+ 	    func_verbose "creating linker input file list: $output"
+@@ -7383,15 +8349,17 @@ EOF
+ 	    fi
+ 	    for obj
+ 	    do
+-	      $ECHO "$obj" >> $output
++	      func_to_tool_file "$obj"
++	      $ECHO "$func_to_tool_file_result" >> $output
+ 	    done
+-	    delfiles="$delfiles $output"
+-	    output=$firstobj\"$file_list_spec$output\"
++	    func_append delfiles " $output"
++	    func_to_tool_file "$output"
++	    output=$firstobj\"$file_list_spec$func_to_tool_file_result\"
+ 	  else
+ 	    if test -n "$save_libobjs"; then
+ 	      func_verbose "creating reloadable object files..."
+ 	      output=$output_objdir/$output_la-${k}.$objext
+-	      eval "test_cmds=\"$reload_cmds\""
++	      eval test_cmds=\"$reload_cmds\"
+ 	      func_len " $test_cmds"
+ 	      len0=$func_len_result
+ 	      len=$len0
+@@ -7411,12 +8379,12 @@ EOF
+ 		  if test "$k" -eq 1 ; then
+ 		    # The first file doesn't have a previous command to add.
+ 		    reload_objs=$objlist
+-		    eval "concat_cmds=\"$reload_cmds\""
++		    eval concat_cmds=\"$reload_cmds\"
+ 		  else
+ 		    # All subsequent reloadable object files will link in
+ 		    # the last one created.
+ 		    reload_objs="$objlist $last_robj"
+-		    eval "concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\""
++		    eval concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\"
+ 		  fi
+ 		  last_robj=$output_objdir/$output_la-${k}.$objext
+ 		  func_arith $k + 1
+@@ -7433,11 +8401,11 @@ EOF
+ 	      # files will link in the last one created.
+ 	      test -z "$concat_cmds" || concat_cmds=$concat_cmds~
+ 	      reload_objs="$objlist $last_robj"
+-	      eval "concat_cmds=\"\${concat_cmds}$reload_cmds\""
++	      eval concat_cmds=\"\${concat_cmds}$reload_cmds\"
+ 	      if test -n "$last_robj"; then
+-	        eval "concat_cmds=\"\${concat_cmds}~\$RM $last_robj\""
++	        eval concat_cmds=\"\${concat_cmds}~\$RM $last_robj\"
+ 	      fi
+-	      delfiles="$delfiles $output"
++	      func_append delfiles " $output"
+ 
+ 	    else
+ 	      output=
+@@ -7450,9 +8418,9 @@ EOF
+ 	      libobjs=$output
+ 	      # Append the command to create the export file.
+ 	      test -z "$concat_cmds" || concat_cmds=$concat_cmds~
+-	      eval "concat_cmds=\"\$concat_cmds$export_symbols_cmds\""
++	      eval concat_cmds=\"\$concat_cmds$export_symbols_cmds\"
+ 	      if test -n "$last_robj"; then
+-		eval "concat_cmds=\"\$concat_cmds~\$RM $last_robj\""
++		eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\"
+ 	      fi
+ 	    fi
+ 
+@@ -7471,7 +8439,7 @@ EOF
+ 		lt_exit=$?
+ 
+ 		# Restore the uninstalled library and exit
+-		if test "$mode" = relink; then
++		if test "$opt_mode" = relink; then
+ 		  ( cd "$output_objdir" && \
+ 		    $RM "${realname}T" && \
+ 		    $MV "${realname}U" "$realname" )
+@@ -7492,7 +8460,7 @@ EOF
+ 	    if test -n "$export_symbols" && test -n "$include_expsyms"; then
+ 	      tmp_export_symbols="$export_symbols"
+ 	      test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols"
+-	      $opt_dry_run || $ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"
++	      $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"'
+ 	    fi
+ 
+ 	    if test -n "$orig_export_symbols"; then
+@@ -7504,7 +8472,7 @@ EOF
+ 	      # global variables. join(1) would be nice here, but unfortunately
+ 	      # isn't a blessed tool.
+ 	      $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter
+-	      delfiles="$delfiles $export_symbols $output_objdir/$libname.filter"
++	      func_append delfiles " $export_symbols $output_objdir/$libname.filter"
+ 	      export_symbols=$output_objdir/$libname.def
+ 	      $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols
+ 	    fi
+@@ -7515,7 +8483,7 @@ EOF
+ 	  output=$save_output
+ 
+ 	  if test -n "$convenience" && test -n "$whole_archive_flag_spec"; then
+-	    eval "libobjs=\"\$libobjs $whole_archive_flag_spec\""
++	    eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
+ 	    test "X$libobjs" = "X " && libobjs=
+ 	  fi
+ 	  # Expand the library linking commands again to reset the
+@@ -7539,23 +8507,23 @@ EOF
+ 
+ 	if test -n "$delfiles"; then
+ 	  # Append the command to remove temporary files to $cmds.
+-	  eval "cmds=\"\$cmds~\$RM $delfiles\""
++	  eval cmds=\"\$cmds~\$RM $delfiles\"
+ 	fi
+ 
+ 	# Add any objects from preloaded convenience libraries
+ 	if test -n "$dlprefiles"; then
+ 	  gentop="$output_objdir/${outputname}x"
+-	  generated="$generated $gentop"
++	  func_append generated " $gentop"
+ 
+ 	  func_extract_archives $gentop $dlprefiles
+-	  libobjs="$libobjs $func_extract_archives_result"
++	  func_append libobjs " $func_extract_archives_result"
+ 	  test "X$libobjs" = "X " && libobjs=
+ 	fi
+ 
+ 	save_ifs="$IFS"; IFS='~'
+ 	for cmd in $cmds; do
+ 	  IFS="$save_ifs"
+-	  eval "cmd=\"$cmd\""
++	  eval cmd=\"$cmd\"
+ 	  $opt_silent || {
+ 	    func_quote_for_expand "$cmd"
+ 	    eval "func_echo $func_quote_for_expand_result"
+@@ -7564,7 +8532,7 @@ EOF
+ 	    lt_exit=$?
+ 
+ 	    # Restore the uninstalled library and exit
+-	    if test "$mode" = relink; then
++	    if test "$opt_mode" = relink; then
+ 	      ( cd "$output_objdir" && \
+ 	        $RM "${realname}T" && \
+ 		$MV "${realname}U" "$realname" )
+@@ -7576,8 +8544,8 @@ EOF
+ 	IFS="$save_ifs"
+ 
+ 	# Restore the uninstalled library and exit
+-	if test "$mode" = relink; then
+-	  $opt_dry_run || (cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname) || exit $?
++	if test "$opt_mode" = relink; then
++	  $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname)' || exit $?
+ 
+ 	  if test -n "$convenience"; then
+ 	    if test -z "$whole_archive_flag_spec"; then
+@@ -7656,17 +8624,20 @@ EOF
+ 
+       if test -n "$convenience"; then
+ 	if test -n "$whole_archive_flag_spec"; then
+-	  eval "tmp_whole_archive_flags=\"$whole_archive_flag_spec\""
++	  eval tmp_whole_archive_flags=\"$whole_archive_flag_spec\"
+ 	  reload_conv_objs=$reload_objs\ `$ECHO "$tmp_whole_archive_flags" | $SED 's|,| |g'`
+ 	else
+ 	  gentop="$output_objdir/${obj}x"
+-	  generated="$generated $gentop"
++	  func_append generated " $gentop"
+ 
+ 	  func_extract_archives $gentop $convenience
+ 	  reload_conv_objs="$reload_objs $func_extract_archives_result"
+ 	fi
+       fi
+ 
++      # If we're not building shared, we need to use non_pic_objs
++      test "$build_libtool_libs" != yes && libobjs="$non_pic_objects"
++
+       # Create the old-style object.
+       reload_objs="$objs$old_deplibs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; /\.lib$/d; $lo2o" | $NL2SP`" $reload_conv_objs" ### testsuite: skip nested quoting test
+ 
+@@ -7690,7 +8661,7 @@ EOF
+ 	# Create an invalid libtool object if no PIC, so that we don't
+ 	# accidentally link it into a program.
+ 	# $show "echo timestamp > $libobj"
+-	# $opt_dry_run || echo timestamp > $libobj || exit $?
++	# $opt_dry_run || eval "echo timestamp > $libobj" || exit $?
+ 	exit $EXIT_SUCCESS
+       fi
+ 
+@@ -7740,8 +8711,8 @@ EOF
+ 	if test "$tagname" = CXX ; then
+ 	  case ${MACOSX_DEPLOYMENT_TARGET-10.0} in
+ 	    10.[0123])
+-	      compile_command="$compile_command ${wl}-bind_at_load"
+-	      finalize_command="$finalize_command ${wl}-bind_at_load"
++	      func_append compile_command " ${wl}-bind_at_load"
++	      func_append finalize_command " ${wl}-bind_at_load"
+ 	    ;;
+ 	  esac
+ 	fi
+@@ -7761,7 +8732,7 @@ EOF
+ 	*)
+ 	  case " $compile_deplibs " in
+ 	  *" -L$path/$objdir "*)
+-	    new_libs="$new_libs -L$path/$objdir" ;;
++	    func_append new_libs " -L$path/$objdir" ;;
+ 	  esac
+ 	  ;;
+ 	esac
+@@ -7771,17 +8742,17 @@ EOF
+ 	-L*)
+ 	  case " $new_libs " in
+ 	  *" $deplib "*) ;;
+-	  *) new_libs="$new_libs $deplib" ;;
++	  *) func_append new_libs " $deplib" ;;
+ 	  esac
+ 	  ;;
+-	*) new_libs="$new_libs $deplib" ;;
++	*) func_append new_libs " $deplib" ;;
+ 	esac
+       done
+       compile_deplibs="$new_libs"
+ 
+ 
+-      compile_command="$compile_command $compile_deplibs"
+-      finalize_command="$finalize_command $finalize_deplibs"
++      func_append compile_command " $compile_deplibs"
++      func_append finalize_command " $finalize_deplibs"
+ 
+       if test -n "$rpath$xrpath"; then
+ 	# If the user specified any rpath flags, then add them.
+@@ -7789,7 +8760,7 @@ EOF
+ 	  # This is the magic to use -rpath.
+ 	  case "$finalize_rpath " in
+ 	  *" $libdir "*) ;;
+-	  *) finalize_rpath="$finalize_rpath $libdir" ;;
++	  *) func_append finalize_rpath " $libdir" ;;
+ 	  esac
+ 	done
+       fi
+@@ -7808,18 +8779,18 @@ EOF
+ 	      *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
+ 		;;
+ 	      *)
+-		hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
++		func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
+ 		;;
+ 	      esac
+ 	    fi
+ 	  else
+-	    eval "flag=\"$hardcode_libdir_flag_spec\""
+-	    rpath="$rpath $flag"
++	    eval flag=\"$hardcode_libdir_flag_spec\"
++	    func_append rpath " $flag"
+ 	  fi
+ 	elif test -n "$runpath_var"; then
+ 	  case "$perm_rpath " in
+ 	  *" $libdir "*) ;;
+-	  *) perm_rpath="$perm_rpath $libdir" ;;
++	  *) func_append perm_rpath " $libdir" ;;
+ 	  esac
+ 	fi
+ 	case $host in
+@@ -7828,12 +8799,12 @@ EOF
+ 	  case :$dllsearchpath: in
+ 	  *":$libdir:"*) ;;
+ 	  ::) dllsearchpath=$libdir;;
+-	  *) dllsearchpath="$dllsearchpath:$libdir";;
++	  *) func_append dllsearchpath ":$libdir";;
+ 	  esac
+ 	  case :$dllsearchpath: in
+ 	  *":$testbindir:"*) ;;
+ 	  ::) dllsearchpath=$testbindir;;
+-	  *) dllsearchpath="$dllsearchpath:$testbindir";;
++	  *) func_append dllsearchpath ":$testbindir";;
+ 	  esac
+ 	  ;;
+ 	esac
+@@ -7842,7 +8813,7 @@ EOF
+       if test -n "$hardcode_libdir_separator" &&
+ 	 test -n "$hardcode_libdirs"; then
+ 	libdir="$hardcode_libdirs"
+-	eval "rpath=\" $hardcode_libdir_flag_spec\""
++	eval rpath=\" $hardcode_libdir_flag_spec\"
+       fi
+       compile_rpath="$rpath"
+ 
+@@ -7859,18 +8830,18 @@ EOF
+ 	      *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
+ 		;;
+ 	      *)
+-		hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
++		func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
+ 		;;
+ 	      esac
+ 	    fi
+ 	  else
+-	    eval "flag=\"$hardcode_libdir_flag_spec\""
+-	    rpath="$rpath $flag"
++	    eval flag=\"$hardcode_libdir_flag_spec\"
++	    func_append rpath " $flag"
+ 	  fi
+ 	elif test -n "$runpath_var"; then
+ 	  case "$finalize_perm_rpath " in
+ 	  *" $libdir "*) ;;
+-	  *) finalize_perm_rpath="$finalize_perm_rpath $libdir" ;;
++	  *) func_append finalize_perm_rpath " $libdir" ;;
+ 	  esac
+ 	fi
+       done
+@@ -7878,7 +8849,7 @@ EOF
+       if test -n "$hardcode_libdir_separator" &&
+ 	 test -n "$hardcode_libdirs"; then
+ 	libdir="$hardcode_libdirs"
+-	eval "rpath=\" $hardcode_libdir_flag_spec\""
++	eval rpath=\" $hardcode_libdir_flag_spec\"
+       fi
+       finalize_rpath="$rpath"
+ 
+@@ -7921,6 +8892,12 @@ EOF
+ 	exit_status=0
+ 	func_show_eval "$link_command" 'exit_status=$?'
+ 
++	if test -n "$postlink_cmds"; then
++	  func_to_tool_file "$output"
++	  postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
++	  func_execute_cmds "$postlink_cmds" 'exit $?'
++	fi
++
+ 	# Delete the generated files.
+ 	if test -f "$output_objdir/${outputname}S.${objext}"; then
+ 	  func_show_eval '$RM "$output_objdir/${outputname}S.${objext}"'
+@@ -7943,7 +8920,7 @@ EOF
+ 	  # We should set the runpath_var.
+ 	  rpath=
+ 	  for dir in $perm_rpath; do
+-	    rpath="$rpath$dir:"
++	    func_append rpath "$dir:"
+ 	  done
+ 	  compile_var="$runpath_var=\"$rpath\$$runpath_var\" "
+ 	fi
+@@ -7951,7 +8928,7 @@ EOF
+ 	  # We should set the runpath_var.
+ 	  rpath=
+ 	  for dir in $finalize_perm_rpath; do
+-	    rpath="$rpath$dir:"
++	    func_append rpath "$dir:"
+ 	  done
+ 	  finalize_var="$runpath_var=\"$rpath\$$runpath_var\" "
+ 	fi
+@@ -7966,6 +8943,13 @@ EOF
+ 	$opt_dry_run || $RM $output
+ 	# Link the executable and exit
+ 	func_show_eval "$link_command" 'exit $?'
++
++	if test -n "$postlink_cmds"; then
++	  func_to_tool_file "$output"
++	  postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
++	  func_execute_cmds "$postlink_cmds" 'exit $?'
++	fi
++
+ 	exit $EXIT_SUCCESS
+       fi
+ 
+@@ -7999,6 +8983,12 @@ EOF
+ 
+       func_show_eval "$link_command" 'exit $?'
+ 
++      if test -n "$postlink_cmds"; then
++	func_to_tool_file "$output_objdir/$outputname"
++	postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output_objdir/$outputname"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
++	func_execute_cmds "$postlink_cmds" 'exit $?'
++      fi
++
+       # Now create the wrapper script.
+       func_verbose "creating $output"
+ 
+@@ -8096,7 +9086,7 @@ EOF
+ 	else
+ 	  oldobjs="$old_deplibs $non_pic_objects"
+ 	  if test "$preload" = yes && test -f "$symfileobj"; then
+-	    oldobjs="$oldobjs $symfileobj"
++	    func_append oldobjs " $symfileobj"
+ 	  fi
+ 	fi
+ 	addlibs="$old_convenience"
+@@ -8104,10 +9094,10 @@ EOF
+ 
+       if test -n "$addlibs"; then
+ 	gentop="$output_objdir/${outputname}x"
+-	generated="$generated $gentop"
++	func_append generated " $gentop"
+ 
+ 	func_extract_archives $gentop $addlibs
+-	oldobjs="$oldobjs $func_extract_archives_result"
++	func_append oldobjs " $func_extract_archives_result"
+       fi
+ 
+       # Do each command in the archive commands.
+@@ -8118,10 +9108,10 @@ EOF
+ 	# Add any objects from preloaded convenience libraries
+ 	if test -n "$dlprefiles"; then
+ 	  gentop="$output_objdir/${outputname}x"
+-	  generated="$generated $gentop"
++	  func_append generated " $gentop"
+ 
+ 	  func_extract_archives $gentop $dlprefiles
+-	  oldobjs="$oldobjs $func_extract_archives_result"
++	  func_append oldobjs " $func_extract_archives_result"
+ 	fi
+ 
+ 	# POSIX demands no paths to be encoded in archives.  We have
+@@ -8139,7 +9129,7 @@ EOF
+ 	else
+ 	  echo "copying selected object files to avoid basename conflicts..."
+ 	  gentop="$output_objdir/${outputname}x"
+-	  generated="$generated $gentop"
++	  func_append generated " $gentop"
+ 	  func_mkdir_p "$gentop"
+ 	  save_oldobjs=$oldobjs
+ 	  oldobjs=
+@@ -8163,18 +9153,28 @@ EOF
+ 		esac
+ 	      done
+ 	      func_show_eval "ln $obj $gentop/$newobj || cp $obj $gentop/$newobj"
+-	      oldobjs="$oldobjs $gentop/$newobj"
++	      func_append oldobjs " $gentop/$newobj"
+ 	      ;;
+-	    *) oldobjs="$oldobjs $obj" ;;
++	    *) func_append oldobjs " $obj" ;;
+ 	    esac
+ 	  done
+ 	fi
+-	eval "cmds=\"$old_archive_cmds\""
++	eval cmds=\"$old_archive_cmds\"
+ 
+ 	func_len " $cmds"
+ 	len=$func_len_result
+ 	if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then
+ 	  cmds=$old_archive_cmds
++	elif test -n "$archiver_list_spec"; then
++	  func_verbose "using command file archive linking..."
++	  for obj in $oldobjs
++	  do
++	    func_to_tool_file "$obj"
++	    $ECHO "$func_to_tool_file_result"
++	  done > $output_objdir/$libname.libcmd
++	  func_to_tool_file "$output_objdir/$libname.libcmd"
++	  oldobjs=" $archiver_list_spec$func_to_tool_file_result"
++	  cmds=$old_archive_cmds
+ 	else
+ 	  # the command line is too long to link in one step, link in parts
+ 	  func_verbose "using piecewise archive linking..."
+@@ -8189,7 +9189,7 @@ EOF
+ 	  do
+ 	    last_oldobj=$obj
+ 	  done
+-	  eval "test_cmds=\"$old_archive_cmds\""
++	  eval test_cmds=\"$old_archive_cmds\"
+ 	  func_len " $test_cmds"
+ 	  len0=$func_len_result
+ 	  len=$len0
+@@ -8208,7 +9208,7 @@ EOF
+ 		RANLIB=$save_RANLIB
+ 	      fi
+ 	      test -z "$concat_cmds" || concat_cmds=$concat_cmds~
+-	      eval "concat_cmds=\"\${concat_cmds}$old_archive_cmds\""
++	      eval concat_cmds=\"\${concat_cmds}$old_archive_cmds\"
+ 	      objlist=
+ 	      len=$len0
+ 	    fi
+@@ -8216,9 +9216,9 @@ EOF
+ 	  RANLIB=$save_RANLIB
+ 	  oldobjs=$objlist
+ 	  if test "X$oldobjs" = "X" ; then
+-	    eval "cmds=\"\$concat_cmds\""
++	    eval cmds=\"\$concat_cmds\"
+ 	  else
+-	    eval "cmds=\"\$concat_cmds~\$old_archive_cmds\""
++	    eval cmds=\"\$concat_cmds~\$old_archive_cmds\"
+ 	  fi
+ 	fi
+       fi
+@@ -8268,12 +9268,23 @@ EOF
+ 	      *.la)
+ 		func_basename "$deplib"
+ 		name="$func_basename_result"
+-		libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
++		func_resolve_sysroot "$deplib"
++		eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
+ 		test -z "$libdir" && \
+ 		  func_fatal_error "\`$deplib' is not a valid libtool archive"
+-		newdependency_libs="$newdependency_libs $libdir/$name"
++		func_append newdependency_libs " ${lt_sysroot:+=}$libdir/$name"
++		;;
++	      -L*)
++		func_stripname -L '' "$deplib"
++		func_replace_sysroot "$func_stripname_result"
++		func_append newdependency_libs " -L$func_replace_sysroot_result"
+ 		;;
+-	      *) newdependency_libs="$newdependency_libs $deplib" ;;
++	      -R*)
++		func_stripname -R '' "$deplib"
++		func_replace_sysroot "$func_stripname_result"
++		func_append newdependency_libs " -R$func_replace_sysroot_result"
++		;;
++	      *) func_append newdependency_libs " $deplib" ;;
+ 	      esac
+ 	    done
+ 	    dependency_libs="$newdependency_libs"
+@@ -8284,12 +9295,14 @@ EOF
+ 	      *.la)
+ 	        func_basename "$lib"
+ 		name="$func_basename_result"
+-		libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
++		func_resolve_sysroot "$lib"
++		eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
++
+ 		test -z "$libdir" && \
+ 		  func_fatal_error "\`$lib' is not a valid libtool archive"
+-		newdlfiles="$newdlfiles $libdir/$name"
++		func_append newdlfiles " ${lt_sysroot:+=}$libdir/$name"
+ 		;;
+-	      *) newdlfiles="$newdlfiles $lib" ;;
++	      *) func_append newdlfiles " $lib" ;;
+ 	      esac
+ 	    done
+ 	    dlfiles="$newdlfiles"
+@@ -8303,10 +9316,11 @@ EOF
+ 		# the library:
+ 		func_basename "$lib"
+ 		name="$func_basename_result"
+-		libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
++		func_resolve_sysroot "$lib"
++		eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
+ 		test -z "$libdir" && \
+ 		  func_fatal_error "\`$lib' is not a valid libtool archive"
+-		newdlprefiles="$newdlprefiles $libdir/$name"
++		func_append newdlprefiles " ${lt_sysroot:+=}$libdir/$name"
+ 		;;
+ 	      esac
+ 	    done
+@@ -8318,7 +9332,7 @@ EOF
+ 		[\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;;
+ 		*) abs=`pwd`"/$lib" ;;
+ 	      esac
+-	      newdlfiles="$newdlfiles $abs"
++	      func_append newdlfiles " $abs"
+ 	    done
+ 	    dlfiles="$newdlfiles"
+ 	    newdlprefiles=
+@@ -8327,7 +9341,7 @@ EOF
+ 		[\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;;
+ 		*) abs=`pwd`"/$lib" ;;
+ 	      esac
+-	      newdlprefiles="$newdlprefiles $abs"
++	      func_append newdlprefiles " $abs"
+ 	    done
+ 	    dlprefiles="$newdlprefiles"
+ 	  fi
+@@ -8412,7 +9426,7 @@ relink_command=\"$relink_command\""
+     exit $EXIT_SUCCESS
+ }
+ 
+-{ test "$mode" = link || test "$mode" = relink; } &&
++{ test "$opt_mode" = link || test "$opt_mode" = relink; } &&
+     func_mode_link ${1+"$@"}
+ 
+ 
+@@ -8432,9 +9446,9 @@ func_mode_uninstall ()
+     for arg
+     do
+       case $arg in
+-      -f) RM="$RM $arg"; rmforce=yes ;;
+-      -*) RM="$RM $arg" ;;
+-      *) files="$files $arg" ;;
++      -f) func_append RM " $arg"; rmforce=yes ;;
++      -*) func_append RM " $arg" ;;
++      *) func_append files " $arg" ;;
+       esac
+     done
+ 
+@@ -8443,24 +9457,23 @@ func_mode_uninstall ()
+ 
+     rmdirs=
+ 
+-    origobjdir="$objdir"
+     for file in $files; do
+       func_dirname "$file" "" "."
+       dir="$func_dirname_result"
+       if test "X$dir" = X.; then
+-	objdir="$origobjdir"
++	odir="$objdir"
+       else
+-	objdir="$dir/$origobjdir"
++	odir="$dir/$objdir"
+       fi
+       func_basename "$file"
+       name="$func_basename_result"
+-      test "$mode" = uninstall && objdir="$dir"
++      test "$opt_mode" = uninstall && odir="$dir"
+ 
+-      # Remember objdir for removal later, being careful to avoid duplicates
+-      if test "$mode" = clean; then
++      # Remember odir for removal later, being careful to avoid duplicates
++      if test "$opt_mode" = clean; then
+ 	case " $rmdirs " in
+-	  *" $objdir "*) ;;
+-	  *) rmdirs="$rmdirs $objdir" ;;
++	  *" $odir "*) ;;
++	  *) func_append rmdirs " $odir" ;;
+ 	esac
+       fi
+ 
+@@ -8486,18 +9499,17 @@ func_mode_uninstall ()
+ 
+ 	  # Delete the libtool libraries and symlinks.
+ 	  for n in $library_names; do
+-	    rmfiles="$rmfiles $objdir/$n"
++	    func_append rmfiles " $odir/$n"
+ 	  done
+-	  test -n "$old_library" && rmfiles="$rmfiles $objdir/$old_library"
++	  test -n "$old_library" && func_append rmfiles " $odir/$old_library"
+ 
+-	  case "$mode" in
++	  case "$opt_mode" in
+ 	  clean)
+-	    case "  $library_names " in
+-	    # "  " in the beginning catches empty $dlname
++	    case " $library_names " in
+ 	    *" $dlname "*) ;;
+-	    *) rmfiles="$rmfiles $objdir/$dlname" ;;
++	    *) test -n "$dlname" && func_append rmfiles " $odir/$dlname" ;;
+ 	    esac
+-	    test -n "$libdir" && rmfiles="$rmfiles $objdir/$name $objdir/${name}i"
++	    test -n "$libdir" && func_append rmfiles " $odir/$name $odir/${name}i"
+ 	    ;;
+ 	  uninstall)
+ 	    if test -n "$library_names"; then
+@@ -8525,19 +9537,19 @@ func_mode_uninstall ()
+ 	  # Add PIC object to the list of files to remove.
+ 	  if test -n "$pic_object" &&
+ 	     test "$pic_object" != none; then
+-	    rmfiles="$rmfiles $dir/$pic_object"
++	    func_append rmfiles " $dir/$pic_object"
+ 	  fi
+ 
+ 	  # Add non-PIC object to the list of files to remove.
+ 	  if test -n "$non_pic_object" &&
+ 	     test "$non_pic_object" != none; then
+-	    rmfiles="$rmfiles $dir/$non_pic_object"
++	    func_append rmfiles " $dir/$non_pic_object"
+ 	  fi
+ 	fi
+ 	;;
+ 
+       *)
+-	if test "$mode" = clean ; then
++	if test "$opt_mode" = clean ; then
+ 	  noexename=$name
+ 	  case $file in
+ 	  *.exe)
+@@ -8547,7 +9559,7 @@ func_mode_uninstall ()
+ 	    noexename=$func_stripname_result
+ 	    # $file with .exe has already been added to rmfiles,
+ 	    # add $file without .exe
+-	    rmfiles="$rmfiles $file"
++	    func_append rmfiles " $file"
+ 	    ;;
+ 	  esac
+ 	  # Do a test to see if this is a libtool program.
+@@ -8556,7 +9568,7 @@ func_mode_uninstall ()
+ 	      func_ltwrapper_scriptname "$file"
+ 	      relink_command=
+ 	      func_source $func_ltwrapper_scriptname_result
+-	      rmfiles="$rmfiles $func_ltwrapper_scriptname_result"
++	      func_append rmfiles " $func_ltwrapper_scriptname_result"
+ 	    else
+ 	      relink_command=
+ 	      func_source $dir/$noexename
+@@ -8564,12 +9576,12 @@ func_mode_uninstall ()
+ 
+ 	    # note $name still contains .exe if it was in $file originally
+ 	    # as does the version of $file that was added into $rmfiles
+-	    rmfiles="$rmfiles $objdir/$name $objdir/${name}S.${objext}"
++	    func_append rmfiles " $odir/$name $odir/${name}S.${objext}"
+ 	    if test "$fast_install" = yes && test -n "$relink_command"; then
+-	      rmfiles="$rmfiles $objdir/lt-$name"
++	      func_append rmfiles " $odir/lt-$name"
+ 	    fi
+ 	    if test "X$noexename" != "X$name" ; then
+-	      rmfiles="$rmfiles $objdir/lt-${noexename}.c"
++	      func_append rmfiles " $odir/lt-${noexename}.c"
+ 	    fi
+ 	  fi
+ 	fi
+@@ -8577,7 +9589,6 @@ func_mode_uninstall ()
+       esac
+       func_show_eval "$RM $rmfiles" 'exit_status=1'
+     done
+-    objdir="$origobjdir"
+ 
+     # Try to remove the ${objdir}s in the directories where we deleted files
+     for dir in $rmdirs; do
+@@ -8589,16 +9600,16 @@ func_mode_uninstall ()
+     exit $exit_status
+ }
+ 
+-{ test "$mode" = uninstall || test "$mode" = clean; } &&
++{ test "$opt_mode" = uninstall || test "$opt_mode" = clean; } &&
+     func_mode_uninstall ${1+"$@"}
+ 
+-test -z "$mode" && {
++test -z "$opt_mode" && {
+   help="$generic_help"
+   func_fatal_help "you must specify a MODE"
+ }
+ 
+ test -z "$exec_cmd" && \
+-  func_fatal_help "invalid operation mode \`$mode'"
++  func_fatal_help "invalid operation mode \`$opt_mode'"
+ 
+ if test -n "$exec_cmd"; then
+   eval exec "$exec_cmd"
+diff --git a/ltoptions.m4 b/ltoptions.m4
+index 5ef12ced2a..17cfd51c0b 100644
+--- a/ltoptions.m4
++++ b/ltoptions.m4
+@@ -8,7 +8,7 @@
+ # unlimited permission to copy and/or distribute it, with or without
+ # modifications, as long as this notice is preserved.
+ 
+-# serial 6 ltoptions.m4
++# serial 7 ltoptions.m4
+ 
+ # This is to help aclocal find these macros, as it can't see m4_define.
+ AC_DEFUN([LTOPTIONS_VERSION], [m4_if([1])])
+diff --git a/ltversion.m4 b/ltversion.m4
+index bf87f77132..9c7b5d4118 100644
+--- a/ltversion.m4
++++ b/ltversion.m4
+@@ -7,17 +7,17 @@
+ # unlimited permission to copy and/or distribute it, with or without
+ # modifications, as long as this notice is preserved.
+ 
+-# Generated from ltversion.in.
++# @configure_input@
+ 
+-# serial 3134 ltversion.m4
++# serial 3293 ltversion.m4
+ # This file is part of GNU Libtool
+ 
+-m4_define([LT_PACKAGE_VERSION], [2.2.7a])
+-m4_define([LT_PACKAGE_REVISION], [1.3134])
++m4_define([LT_PACKAGE_VERSION], [2.4])
++m4_define([LT_PACKAGE_REVISION], [1.3293])
+ 
+ AC_DEFUN([LTVERSION_VERSION],
+-[macro_version='2.2.7a'
+-macro_revision='1.3134'
++[macro_version='2.4'
++macro_revision='1.3293'
+ _LT_DECL(, macro_version, 0, [Which release of libtool.m4 was used?])
+ _LT_DECL(, macro_revision, 0)
+ ])
+diff --git a/lt~obsolete.m4 b/lt~obsolete.m4
+index bf92b5e079..c573da90c5 100644
+--- a/lt~obsolete.m4
++++ b/lt~obsolete.m4
+@@ -7,7 +7,7 @@
+ # unlimited permission to copy and/or distribute it, with or without
+ # modifications, as long as this notice is preserved.
+ 
+-# serial 4 lt~obsolete.m4
++# serial 5 lt~obsolete.m4
+ 
+ # These exist entirely to fool aclocal when bootstrapping libtool.
+ #
+diff --git a/opcodes/configure b/opcodes/configure
+index 17530f54b9..79b39611c2 100755
+--- a/opcodes/configure
++++ b/opcodes/configure
+@@ -650,6 +650,9 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
++ac_ct_AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -763,6 +766,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_targets
+ enable_werror
+@@ -1423,6 +1427,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+ 
+ Some influential environment variables:
+   CC          C compiler command
+@@ -5115,8 +5121,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -5156,7 +5162,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -5842,8 +5848,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -5892,6 +5898,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if test "${lt_cv_to_host_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if test "${lt_cv_ld_reload_flag+set}" = set; then :
+@@ -5908,6 +5988,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -6076,7 +6161,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -6230,6 +6316,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -6243,11 +6344,164 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
+ 
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_AR+set}" = set; then :
+@@ -6263,7 +6517,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6283,11 +6537,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
+@@ -6303,7 +6561,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -6322,6 +6580,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -6333,16 +6595,72 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if test "${lt_cv_ar_at_file+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
+ 
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
+ 
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+ 
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
+ 
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
+ 
+ 
+ 
+@@ -6684,8 +7002,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -6721,6 +7039,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -6762,6 +7081,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -6773,7 +7104,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -6799,8 +7130,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -6810,8 +7141,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -6848,6 +7179,16 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
+ 
+ 
+ 
+@@ -6864,6 +7205,45 @@ fi
+ 
+ 
+ 
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -7075,6 +7455,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if test "${lt_cv_path_mainfest_tool+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -7638,6 +8135,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -7803,7 +8302,8 @@ fi
+ LIBTOOL_DEPS="$ltmain"
+ 
+ # Always use our own libtool.
+-LIBTOOL='$(SHELL) $(top_builddir)/libtool'
++LIBTOOL='$(SHELL) $(top_builddir)'
++LIBTOOL="$LIBTOOL/${host_alias}-libtool"
+ 
+ 
+ 
+@@ -7892,7 +8392,7 @@ aix3*)
+ esac
+ 
+ # Global variables:
+-ofile=libtool
++ofile=${host_alias}-libtool
+ can_build_shared=yes
+ 
+ # All known linkers require a `.a' archive for static linking (except MSVC,
+@@ -8190,8 +8690,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -8357,6 +8855,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -8419,7 +8923,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -8476,13 +8980,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if test "${lt_cv_prog_compiler_pic+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -8543,6 +9051,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -8893,7 +9406,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -8992,12 +9506,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -9011,8 +9525,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -9030,8 +9544,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9077,8 +9591,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -9208,7 +9722,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9221,22 +9741,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -9248,7 +9775,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+ 
+ int
+@@ -9261,22 +9794,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -9321,20 +9861,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -9395,7 +9978,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -9403,7 +9986,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -9419,7 +10002,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -9443,10 +10026,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -9525,23 +10108,36 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if test "${lt_cv_irix_exported_symbol+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -9626,7 +10222,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -9645,9 +10241,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -10223,8 +10819,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -10257,13 +10854,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -10355,7 +11010,7 @@ haiku*)
+   soname_spec='${libname}${release}${shared_ext}$major'
+   shlibpath_var=LIBRARY_PATH
+   shlibpath_overrides_runpath=yes
+-  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
++  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
+   hardcode_into_libs=yes
+   ;;
+ 
+@@ -11195,10 +11850,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11301,10 +11956,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -12543,7 +13198,7 @@ if test "$enable_shared" = "yes"; then
+ # since libbfd may not pull in the entirety of libiberty.
+   x=`sed -n -e 's/^[ 	]*PICFLAG[ 	]*=[ 	]*//p' < ../libiberty/Makefile | sed -n '$p'`
+   if test -n "$x"; then
+-    SHARED_LIBADD="-L`pwd`/../libiberty/pic -liberty"
++    SHARED_LIBADD="`pwd`/../libiberty/pic/libiberty.a"
+   fi
+ 
+   case "${host}" in
+@@ -13520,13 +14175,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -13541,14 +14203,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -13581,12 +14246,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -13641,8 +14306,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -13652,12 +14322,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -13673,7 +14345,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -13709,6 +14380,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -14465,7 +15137,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -14568,19 +15241,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -14610,6 +15306,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -14619,6 +15321,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -14733,12 +15438,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -14825,9 +15530,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -14843,6 +15545,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -14875,210 +15580,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+diff --git a/opcodes/configure.ac b/opcodes/configure.ac
+index a9fbfd61f1..c43780f64d 100644
+--- a/opcodes/configure.ac
++++ b/opcodes/configure.ac
+@@ -167,7 +167,7 @@ changequote(,)dnl
+   x=`sed -n -e 's/^[ 	]*PICFLAG[ 	]*=[ 	]*//p' < ../libiberty/Makefile | sed -n '$p'`
+ changequote([,])dnl
+   if test -n "$x"; then
+-    SHARED_LIBADD="-L`pwd`/../libiberty/pic -liberty"
++    SHARED_LIBADD="`pwd`/../libiberty/pic/libiberty.a"
+   fi
+ 
+   case "${host}" in
+diff --git a/zlib/configure b/zlib/configure
+index dc2d9ed383..ed9a492f5c 100755
+--- a/zlib/configure
++++ b/zlib/configure
+@@ -614,8 +614,11 @@ OTOOL
+ LIPO
+ NMEDIT
+ DSYMUTIL
++MANIFEST_TOOL
+ RANLIB
++ac_ct_AR
+ AR
++DLLTOOL
+ OBJDUMP
+ LN_S
+ NM
+@@ -737,6 +740,7 @@ enable_static
+ with_pic
+ enable_fast_install
+ with_gnu_ld
++with_libtool_sysroot
+ enable_libtool_lock
+ enable_host_shared
+ '
+@@ -1385,6 +1389,8 @@ Optional Packages:
+   --with-pic              try to use only PIC/non-PIC objects [default=use
+                           both]
+   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
++  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
++                        (or the compiler's sysroot if not specified).
+ 
+ Some influential environment variables:
+   CC          C compiler command
+@@ -3910,8 +3916,8 @@ esac
+ 
+ 
+ 
+-macro_version='2.2.7a'
+-macro_revision='1.3134'
++macro_version='2.4'
++macro_revision='1.3293'
+ 
+ 
+ 
+@@ -3951,7 +3957,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
+ $as_echo_n "checking how to print strings... " >&6; }
+ # Test print first, because it will be a builtin if present.
+-if test "X`print -r -- -n 2>/dev/null`" = X-n && \
++if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
+    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
+   ECHO='print -r --'
+ elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
+@@ -4767,8 +4773,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
+ # Try some XSI features
+ xsi_shell=no
+ ( _lt_dummy="a/b/c"
+-  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
+-      = c,a/b,, \
++  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
++      = c,a/b,b/c, \
+     && eval 'test $(( 1 + 1 )) -eq 2 \
+     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
+   && xsi_shell=yes
+@@ -4817,6 +4823,80 @@ esac
+ 
+ 
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
++$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
++if test "${lt_cv_to_host_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
++        ;;
++    esac
++    ;;
++  *-*-cygwin* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
++        ;;
++      *-*-cygwin* )
++        lt_cv_to_host_file_cmd=func_convert_file_noop
++        ;;
++      * ) # otherwise, assume *nix
++        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
++        ;;
++    esac
++    ;;
++  * ) # unhandled hosts (and "normal" native builds)
++    lt_cv_to_host_file_cmd=func_convert_file_noop
++    ;;
++esac
++
++fi
++
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
++$as_echo "$lt_cv_to_host_file_cmd" >&6; }
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
++$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
++if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  #assume ordinary cross tools, or native build.
++lt_cv_to_tool_file_cmd=func_convert_file_noop
++case $host in
++  *-*-mingw* )
++    case $build in
++      *-*-mingw* ) # actually msys
++        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
++        ;;
++    esac
++    ;;
++esac
++
++fi
++
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
++$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
++
++
++
++
++
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
+ $as_echo_n "checking for $LD option to reload object files... " >&6; }
+ if test "${lt_cv_ld_reload_flag+set}" = set; then :
+@@ -4833,6 +4913,11 @@ case $reload_flag in
+ esac
+ reload_cmds='$LD$reload_flag -o $output$reload_objs'
+ case $host_os in
++  cygwin* | mingw* | pw32* | cegcc*)
++    if test "$GCC" != yes; then
++      reload_cmds=false
++    fi
++    ;;
+   darwin*)
+     if test "$GCC" = yes; then
+       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
+@@ -5001,7 +5086,8 @@ mingw* | pw32*)
+     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
+     lt_cv_file_magic_cmd='func_win32_libid'
+   else
+-    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
++    # Keep this pattern in sync with the one in func_win32_libid.
++    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
+     lt_cv_file_magic_cmd='$OBJDUMP -f'
+   fi
+   ;;
+@@ -5155,6 +5241,21 @@ esac
+ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
+ $as_echo "$lt_cv_deplibs_check_method" >&6; }
++
++file_magic_glob=
++want_nocaseglob=no
++if test "$build" = "$host"; then
++  case $host_os in
++  mingw* | pw32*)
++    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
++      want_nocaseglob=yes
++    else
++      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
++    fi
++    ;;
++  esac
++fi
++
+ file_magic_cmd=$lt_cv_file_magic_cmd
+ deplibs_check_method=$lt_cv_deplibs_check_method
+ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+@@ -5168,11 +5269,165 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
+ 
+ 
+ 
++
++
++
++
++
++
++
++
++
++
+ 
+ 
+ if test -n "$ac_tool_prefix"; then
+-  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
+-set dummy ${ac_tool_prefix}ar; ac_word=$2
++  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
++set dummy ${ac_tool_prefix}dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$DLLTOOL"; then
++  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++DLLTOOL=$ac_cv_prog_DLLTOOL
++if test -n "$DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
++$as_echo "$DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_DLLTOOL"; then
++  ac_ct_DLLTOOL=$DLLTOOL
++  # Extract the first word of "dlltool", so it can be a program name with args.
++set dummy dlltool; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_DLLTOOL"; then
++  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
++if test -n "$ac_ct_DLLTOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
++$as_echo "$ac_ct_DLLTOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_DLLTOOL" = x; then
++    DLLTOOL="false"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    DLLTOOL=$ac_ct_DLLTOOL
++  fi
++else
++  DLLTOOL="$ac_cv_prog_DLLTOOL"
++fi
++
++test -z "$DLLTOOL" && DLLTOOL=dlltool
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
++$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
++if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_sharedlib_from_linklib_cmd='unknown'
++
++case $host_os in
++cygwin* | mingw* | pw32* | cegcc*)
++  # two different shell functions defined in ltmain.sh
++  # decide which to use based on capabilities of $DLLTOOL
++  case `$DLLTOOL --help 2>&1` in
++  *--identify-strict*)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
++    ;;
++  *)
++    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
++    ;;
++  esac
++  ;;
++*)
++  # fallback: assume linklib IS sharedlib
++  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
++  ;;
++esac
++
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
++$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
++sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
++test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
++
++
++
++
++
++
++
++
++if test -n "$ac_tool_prefix"; then
++  for ac_prog in ar
++  do
++    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
++set dummy $ac_tool_prefix$ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_AR+set}" = set; then :
+@@ -5188,7 +5443,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_AR="${ac_tool_prefix}ar"
++    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -5208,11 +5463,15 @@ $as_echo "no" >&6; }
+ fi
+ 
+ 
++    test -n "$AR" && break
++  done
+ fi
+-if test -z "$ac_cv_prog_AR"; then
++if test -z "$AR"; then
+   ac_ct_AR=$AR
+-  # Extract the first word of "ar", so it can be a program name with args.
+-set dummy ar; ac_word=$2
++  for ac_prog in ar
++do
++  # Extract the first word of "$ac_prog", so it can be a program name with args.
++set dummy $ac_prog; ac_word=$2
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+ $as_echo_n "checking for $ac_word... " >&6; }
+ if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
+@@ -5228,7 +5487,7 @@ do
+   test -z "$as_dir" && as_dir=.
+     for ac_exec_ext in '' $ac_executable_extensions; do
+   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
+-    ac_cv_prog_ac_ct_AR="ar"
++    ac_cv_prog_ac_ct_AR="$ac_prog"
+     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+     break 2
+   fi
+@@ -5247,6 +5506,10 @@ else
+ $as_echo "no" >&6; }
+ fi
+ 
++
++  test -n "$ac_ct_AR" && break
++done
++
+   if test "x$ac_ct_AR" = x; then
+     AR="false"
+   else
+@@ -5258,16 +5521,72 @@ ac_tool_warned=yes ;;
+ esac
+     AR=$ac_ct_AR
+   fi
+-else
+-  AR="$ac_cv_prog_AR"
+ fi
+ 
+-test -z "$AR" && AR=ar
+-test -z "$AR_FLAGS" && AR_FLAGS=cru
++: ${AR=ar}
++: ${AR_FLAGS=cru}
++
++
++
++
++
++
++
++
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
++$as_echo_n "checking for archiver @FILE support... " >&6; }
++if test "${lt_cv_ar_at_file+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_ar_at_file=no
++   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
++/* end confdefs.h.  */
++
++int
++main ()
++{
+ 
++  ;
++  return 0;
++}
++_ACEOF
++if ac_fn_c_try_compile "$LINENO"; then :
++  echo conftest.$ac_objext > conftest.lst
++      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
++      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++      if test "$ac_status" -eq 0; then
++	# Ensure the archiver fails upon bogus file names.
++	rm -f conftest.$ac_objext libconftest.a
++	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
++  (eval $lt_ar_try) 2>&5
++  ac_status=$?
++  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
++  test $ac_status = 0; }
++	if test "$ac_status" -ne 0; then
++          lt_cv_ar_at_file=@
++        fi
++      fi
++      rm -f conftest.* libconftest.a
+ 
++fi
++rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
+ 
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
++$as_echo "$lt_cv_ar_at_file" >&6; }
+ 
++if test "x$lt_cv_ar_at_file" = xno; then
++  archiver_list_spec=
++else
++  archiver_list_spec=$lt_cv_ar_at_file
++fi
+ 
+ 
+ 
+@@ -5609,8 +5928,8 @@ esac
+ lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
+ 
+ # Transform an extracted symbol line into symbol name and symbol address
+-lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
+-lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
++lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
+ 
+ # Handle CRLF in mingw tool chain
+ opt_cr=
+@@ -5646,6 +5965,7 @@ for ac_symprfx in "" "_"; do
+   else
+     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
+   fi
++  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
+ 
+   # Check to see that the pipe works correctly.
+   pipe_works=no
+@@ -5687,6 +6007,18 @@ _LT_EOF
+       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
+ 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
+ 	  cat <<_LT_EOF > conftest.$ac_ext
++/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
++#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
++/* DATA imports from DLLs on WIN32 con't be const, because runtime
++   relocations are performed -- see ld's documentation on pseudo-relocs.  */
++# define LT_DLSYM_CONST
++#elif defined(__osf__)
++/* This system does not cope well with relocations in const data.  */
++# define LT_DLSYM_CONST
++#else
++# define LT_DLSYM_CONST const
++#endif
++
+ #ifdef __cplusplus
+ extern "C" {
+ #endif
+@@ -5698,7 +6030,7 @@ _LT_EOF
+ 	  cat <<_LT_EOF >> conftest.$ac_ext
+ 
+ /* The mapping between symbol names and symbols.  */
+-const struct {
++LT_DLSYM_CONST struct {
+   const char *name;
+   void       *address;
+ }
+@@ -5724,8 +6056,8 @@ static const void *lt_preloaded_setup() {
+ _LT_EOF
+ 	  # Now try linking the two files.
+ 	  mv conftest.$ac_objext conftstm.$ac_objext
+-	  lt_save_LIBS="$LIBS"
+-	  lt_save_CFLAGS="$CFLAGS"
++	  lt_globsym_save_LIBS=$LIBS
++	  lt_globsym_save_CFLAGS=$CFLAGS
+ 	  LIBS="conftstm.$ac_objext"
+ 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
+ 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
+@@ -5735,8 +6067,8 @@ _LT_EOF
+   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
+ 	    pipe_works=yes
+ 	  fi
+-	  LIBS="$lt_save_LIBS"
+-	  CFLAGS="$lt_save_CFLAGS"
++	  LIBS=$lt_globsym_save_LIBS
++	  CFLAGS=$lt_globsym_save_CFLAGS
+ 	else
+ 	  echo "cannot find nm_test_func in $nlist" >&5
+ 	fi
+@@ -5773,6 +6105,19 @@ else
+ $as_echo "ok" >&6; }
+ fi
+ 
++# Response file support.
++if test "$lt_cv_nm_interface" = "MS dumpbin"; then
++  nm_file_list_spec='@'
++elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
++  nm_file_list_spec='@'
++fi
++
++
++
++
++
++
++
+ 
+ 
+ 
+@@ -5790,6 +6135,41 @@ fi
+ 
+ 
+ 
++
++
++
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
++$as_echo_n "checking for sysroot... " >&6; }
++
++# Check whether --with-libtool-sysroot was given.
++if test "${with_libtool_sysroot+set}" = set; then :
++  withval=$with_libtool_sysroot;
++else
++  with_libtool_sysroot=no
++fi
++
++
++lt_sysroot=
++case ${with_libtool_sysroot} in #(
++ yes)
++   if test "$GCC" = yes; then
++     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
++   fi
++   ;; #(
++ /*)
++   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
++   ;; #(
++ no|'')
++   ;; #(
++ *)
++   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
++$as_echo "${with_libtool_sysroot}" >&6; }
++   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
++   ;;
++esac
++
++ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
++$as_echo "${lt_sysroot:-no}" >&6; }
+ 
+ 
+ 
+@@ -6004,6 +6384,123 @@ esac
+ 
+ need_locks="$enable_libtool_lock"
+ 
++if test -n "$ac_tool_prefix"; then
++  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
++set dummy ${ac_tool_prefix}mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$MANIFEST_TOOL"; then
++  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
++if test -n "$MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
++$as_echo "$MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++
++fi
++if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
++  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
++  # Extract the first word of "mt", so it can be a program name with args.
++set dummy mt; ac_word=$2
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
++$as_echo_n "checking for $ac_word... " >&6; }
++if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test -n "$ac_ct_MANIFEST_TOOL"; then
++  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
++else
++as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
++for as_dir in $PATH
++do
++  IFS=$as_save_IFS
++  test -z "$as_dir" && as_dir=.
++    for ac_exec_ext in '' $ac_executable_extensions; do
++  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
++    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
++    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
++    break 2
++  fi
++done
++  done
++IFS=$as_save_IFS
++
++fi
++fi
++ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
++if test -n "$ac_ct_MANIFEST_TOOL"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
++$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
++else
++  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
++$as_echo "no" >&6; }
++fi
++
++  if test "x$ac_ct_MANIFEST_TOOL" = x; then
++    MANIFEST_TOOL=":"
++  else
++    case $cross_compiling:$ac_tool_warned in
++yes:)
++{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
++$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
++ac_tool_warned=yes ;;
++esac
++    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
++  fi
++else
++  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
++fi
++
++test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
++$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
++if test "${lt_cv_path_mainfest_tool+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_path_mainfest_tool=no
++  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
++  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
++  cat conftest.err >&5
++  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
++    lt_cv_path_mainfest_tool=yes
++  fi
++  rm -f conftest*
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
++$as_echo "$lt_cv_path_mainfest_tool" >&6; }
++if test "x$lt_cv_path_mainfest_tool" != xyes; then
++  MANIFEST_TOOL=:
++fi
++
++
++
++
++
+ 
+   case $host_os in
+     rhapsody* | darwin*)
+@@ -6570,6 +7067,8 @@ _LT_EOF
+       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
+       echo "$AR cru libconftest.a conftest.o" >&5
+       $AR cru libconftest.a conftest.o 2>&5
++      echo "$RANLIB libconftest.a" >&5
++      $RANLIB libconftest.a 2>&5
+       cat > conftest.c << _LT_EOF
+ int main() { return 0;}
+ _LT_EOF
+@@ -7033,7 +7532,8 @@ fi
+ LIBTOOL_DEPS="$ltmain"
+ 
+ # Always use our own libtool.
+-LIBTOOL='$(SHELL) $(top_builddir)/libtool'
++LIBTOOL='$(SHELL) $(top_builddir)'
++LIBTOOL="$LIBTOOL/${host_alias}-libtool"
+ 
+ 
+ 
+@@ -7122,7 +7622,7 @@ aix3*)
+ esac
+ 
+ # Global variables:
+-ofile=libtool
++ofile=${host_alias}-libtool
+ can_build_shared=yes
+ 
+ # All known linkers require a `.a' archive for static linking (except MSVC,
+@@ -7420,8 +7920,6 @@ fi
+ lt_prog_compiler_pic=
+ lt_prog_compiler_static=
+ 
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
+-$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 
+   if test "$GCC" = yes; then
+     lt_prog_compiler_wl='-Wl,'
+@@ -7587,6 +8085,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+ 	lt_prog_compiler_pic='--shared'
+ 	lt_prog_compiler_static='--static'
+ 	;;
++      nagfor*)
++	# NAG Fortran compiler
++	lt_prog_compiler_wl='-Wl,-Wl,,'
++	lt_prog_compiler_pic='-PIC'
++	lt_prog_compiler_static='-Bstatic'
++	;;
+       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
+         # Portland Group compilers (*not* the Pentium gcc compiler,
+ 	# which looks to be a dead project)
+@@ -7649,7 +8153,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
+       lt_prog_compiler_pic='-KPIC'
+       lt_prog_compiler_static='-Bstatic'
+       case $cc_basename in
+-      f77* | f90* | f95*)
++      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
+ 	lt_prog_compiler_wl='-Qoption ld ';;
+       *)
+ 	lt_prog_compiler_wl='-Wl,';;
+@@ -7706,13 +8210,17 @@ case $host_os in
+     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
+     ;;
+ esac
+-{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
+-$as_echo "$lt_prog_compiler_pic" >&6; }
+-
+-
+-
+-
+ 
++{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
++$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
++if test "${lt_cv_prog_compiler_pic+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
++$as_echo "$lt_cv_prog_compiler_pic" >&6; }
++lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
+ 
+ #
+ # Check to make sure the PIC flag actually works.
+@@ -7773,6 +8281,11 @@ fi
+ 
+ 
+ 
++
++
++
++
++
+ #
+ # Check to make sure the static flag actually works.
+ #
+@@ -8123,7 +8636,8 @@ _LT_EOF
+       allow_undefined_flag=unsupported
+       always_export_symbols=no
+       enable_shared_with_static_runtimes=yes
+-      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
++      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
+ 
+       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
+         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+@@ -8222,12 +8736,12 @@ _LT_EOF
+ 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
+ 	  hardcode_libdir_flag_spec=
+ 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
+-	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
++	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
+ 	  if test "x$supports_anon_versioning" = xyes; then
+ 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
+ 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+ 	      echo "local: *; };" >> $output_objdir/$libname.ver~
+-	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
++	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+ 	  fi
+ 	  ;;
+ 	esac
+@@ -8241,8 +8755,8 @@ _LT_EOF
+ 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
+ 	wlarc=
+       else
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       fi
+       ;;
+ 
+@@ -8260,8 +8774,8 @@ _LT_EOF
+ 
+ _LT_EOF
+       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8307,8 +8821,8 @@ _LT_EOF
+ 
+     *)
+       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+-	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
++	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+       else
+ 	ld_shlibs=no
+       fi
+@@ -8438,7 +8952,13 @@ _LT_EOF
+ 	allow_undefined_flag='-berok'
+         # Determine the default libpath from the value encoded in an
+         # empty executable.
+-        if test x$gcc_no_link = xyes; then
++        if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test x$gcc_no_link = xyes; then
+   as_fn_error "Link tests are not allowed after GCC_NO_EXECUTABLES." "$LINENO" 5
+ fi
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+@@ -8454,22 +8974,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+@@ -8481,7 +9008,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	else
+ 	 # Determine the default libpath from the value encoded in an
+ 	 # empty executable.
+-	 if test x$gcc_no_link = xyes; then
++	 if test "${lt_cv_aix_libpath+set}" = set; then
++  aix_libpath=$lt_cv_aix_libpath
++else
++  if test "${lt_cv_aix_libpath_+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  if test x$gcc_no_link = xyes; then
+   as_fn_error "Link tests are not allowed after GCC_NO_EXECUTABLES." "$LINENO" 5
+ fi
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+@@ -8497,22 +9030,29 @@ main ()
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+ 
+-lt_aix_libpath_sed='
+-    /Import File Strings/,/^$/ {
+-	/^0/ {
+-	    s/^0  *\(.*\)$/\1/
+-	    p
+-	}
+-    }'
+-aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-# Check for a 64-bit object if we didn't find anything.
+-if test -z "$aix_libpath"; then
+-  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
+-fi
++  lt_aix_libpath_sed='
++      /Import File Strings/,/^$/ {
++	  /^0/ {
++	      s/^0  *\([^ ]*\) *$/\1/
++	      p
++	  }
++      }'
++  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  # Check for a 64-bit object if we didn't find anything.
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
++  fi
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
++  if test -z "$lt_cv_aix_libpath_"; then
++    lt_cv_aix_libpath_="/usr/lib:/lib"
++  fi
++
++fi
++
++  aix_libpath=$lt_cv_aix_libpath_
++fi
+ 
+ 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
+ 	  # Warning - without using the other run time loading flags,
+@@ -8557,20 +9097,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+       # Microsoft Visual C++.
+       # hardcode_libdir_flag_spec is actually meaningless, as there is
+       # no search path for DLLs.
+-      hardcode_libdir_flag_spec=' '
+-      allow_undefined_flag=unsupported
+-      # Tell ltmain to make .lib files, not .a files.
+-      libext=lib
+-      # Tell ltmain to make .dll files, not .so files.
+-      shrext_cmds=".dll"
+-      # FIXME: Setting linknames here is a bad hack.
+-      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
+-      # The linker will automatically build a .lib file if we build a DLL.
+-      old_archive_from_new_cmds='true'
+-      # FIXME: Should let the user specify the lib program.
+-      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
+-      fix_srcfile_path='`cygpath -w "$srcfile"`'
+-      enable_shared_with_static_runtimes=yes
++      case $cc_basename in
++      cl*)
++	# Native MSVC
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	always_export_symbols=yes
++	file_list_spec='@'
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
++	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
++	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
++	  else
++	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
++	  fi~
++	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
++	  linknames='
++	# The linker will not automatically build a static lib if we build a DLL.
++	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
++	enable_shared_with_static_runtimes=yes
++	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
++	# Don't use ranlib
++	old_postinstall_cmds='chmod 644 $oldlib'
++	postlink_cmds='lt_outputfile="@OUTPUT@"~
++	  lt_tool_outputfile="@TOOL_OUTPUT@"~
++	  case $lt_outputfile in
++	    *.exe|*.EXE) ;;
++	    *)
++	      lt_outputfile="$lt_outputfile.exe"
++	      lt_tool_outputfile="$lt_tool_outputfile.exe"
++	      ;;
++	  esac~
++	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
++	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
++	    $RM "$lt_outputfile.manifest";
++	  fi'
++	;;
++      *)
++	# Assume MSVC wrapper
++	hardcode_libdir_flag_spec=' '
++	allow_undefined_flag=unsupported
++	# Tell ltmain to make .lib files, not .a files.
++	libext=lib
++	# Tell ltmain to make .dll files, not .so files.
++	shrext_cmds=".dll"
++	# FIXME: Setting linknames here is a bad hack.
++	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
++	# The linker will automatically build a .lib file if we build a DLL.
++	old_archive_from_new_cmds='true'
++	# FIXME: Should let the user specify the lib program.
++	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
++	enable_shared_with_static_runtimes=yes
++	;;
++      esac
+       ;;
+ 
+     darwin* | rhapsody*)
+@@ -8631,7 +9214,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
+     freebsd* | dragonfly*)
+-      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
++      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
+       hardcode_libdir_flag_spec='-R$libdir'
+       hardcode_direct=yes
+       hardcode_shlibpath_var=no
+@@ -8639,7 +9222,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux9*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
++	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       else
+ 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
+       fi
+@@ -8655,7 +9238,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 
+     hpux10*)
+       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
+-	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+       else
+ 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
+       fi
+@@ -8679,10 +9262,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
+ 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	ia64*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	*)
+-	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
++	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
+ 	  ;;
+ 	esac
+       else
+@@ -8761,26 +9344,39 @@ fi
+ 
+     irix5* | irix6* | nonstopux*)
+       if test "$GCC" = yes; then
+-	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	# Try to use the -exported_symbol ld option, if it does not
+ 	# work, assume that -exports_file does not work either and
+ 	# implicitly export all symbols.
+-        save_LDFLAGS="$LDFLAGS"
+-        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
+-        if test x$gcc_no_link = xyes; then
++	# This should be the same for all languages, so no per-tag cache variable.
++	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
++$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
++if test "${lt_cv_irix_exported_symbol+set}" = set; then :
++  $as_echo_n "(cached) " >&6
++else
++  save_LDFLAGS="$LDFLAGS"
++	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
++	   if test x$gcc_no_link = xyes; then
+   as_fn_error "Link tests are not allowed after GCC_NO_EXECUTABLES." "$LINENO" 5
+ fi
+ cat confdefs.h - <<_ACEOF >conftest.$ac_ext
+ /* end confdefs.h.  */
+-int foo(void) {}
++int foo (void) { return 0; }
+ _ACEOF
+ if ac_fn_c_try_link "$LINENO"; then :
+-  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
+-
++  lt_cv_irix_exported_symbol=yes
++else
++  lt_cv_irix_exported_symbol=no
+ fi
+ rm -f core conftest.err conftest.$ac_objext \
+     conftest$ac_exeext conftest.$ac_ext
+-        LDFLAGS="$save_LDFLAGS"
++           LDFLAGS="$save_LDFLAGS"
++fi
++{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
++$as_echo "$lt_cv_irix_exported_symbol" >&6; }
++	if test "$lt_cv_irix_exported_symbol" = yes; then
++          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
++	fi
+       else
+ 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+ 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
+@@ -8865,7 +9461,7 @@ rm -f core conftest.err conftest.$ac_objext \
+     osf4* | osf5*)	# as osf3* with the addition of -msym flag
+       if test "$GCC" = yes; then
+ 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
+-	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
++	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+ 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
+       else
+ 	allow_undefined_flag=' -expect_unresolved \*'
+@@ -8884,9 +9480,9 @@ rm -f core conftest.err conftest.$ac_objext \
+       no_undefined_flag=' -z defs'
+       if test "$GCC" = yes; then
+ 	wlarc='${wl}'
+-	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
++	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
+ 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
+-	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
++	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
+       else
+ 	case `$CC -V 2>&1` in
+ 	*"Compilers 5.0"*)
+@@ -9462,8 +10058,9 @@ cygwin* | mingw* | pw32* | cegcc*)
+   need_version=no
+   need_lib_prefix=no
+ 
+-  case $GCC,$host_os in
+-  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
++  case $GCC,$cc_basename in
++  yes,*)
++    # gcc
+     library_names_spec='$libname.dll.a'
+     # DLL is installed to $(libdir)/../bin by postinstall_cmds
+     postinstall_cmds='base_file=`basename \${file}`~
+@@ -9496,13 +10093,71 @@ cygwin* | mingw* | pw32* | cegcc*)
+       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
+       ;;
+     esac
++    dynamic_linker='Win32 ld.exe'
++    ;;
++
++  *,cl*)
++    # Native MSVC
++    libname_spec='$name'
++    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
++    library_names_spec='${libname}.dll.lib'
++
++    case $build_os in
++    mingw*)
++      sys_lib_search_path_spec=
++      lt_save_ifs=$IFS
++      IFS=';'
++      for lt_path in $LIB
++      do
++        IFS=$lt_save_ifs
++        # Let DOS variable expansion print the short 8.3 style file name.
++        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
++        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
++      done
++      IFS=$lt_save_ifs
++      # Convert to MSYS style.
++      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
++      ;;
++    cygwin*)
++      # Convert to unix form, then to dos form, then back to unix form
++      # but this time dos style (no spaces!) so that the unix form looks
++      # like /cygdrive/c/PROGRA~1:/cygdr...
++      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
++      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
++      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      ;;
++    *)
++      sys_lib_search_path_spec="$LIB"
++      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
++        # It is most probably a Windows format PATH.
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
++      else
++        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
++      fi
++      # FIXME: find the short name or the path components, as spaces are
++      # common. (e.g. "Program Files" -> "PROGRA~1")
++      ;;
++    esac
++
++    # DLL is installed to $(libdir)/../bin by postinstall_cmds
++    postinstall_cmds='base_file=`basename \${file}`~
++      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
++      dldir=$destdir/`dirname \$dlpath`~
++      test -d \$dldir || mkdir -p \$dldir~
++      $install_prog $dir/$dlname \$dldir/$dlname'
++    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
++      dlpath=$dir/\$dldll~
++       $RM \$dlpath'
++    shlibpath_overrides_runpath=yes
++    dynamic_linker='Win32 link.exe'
+     ;;
+ 
+   *)
++    # Assume MSVC wrapper
+     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
++    dynamic_linker='Win32 ld.exe'
+     ;;
+   esac
+-  dynamic_linker='Win32 ld.exe'
+   # FIXME: first we should search . and the directory the executable is in
+   shlibpath_var=PATH
+   ;;
+@@ -9594,7 +10249,7 @@ haiku*)
+   soname_spec='${libname}${release}${shared_ext}$major'
+   shlibpath_var=LIBRARY_PATH
+   shlibpath_overrides_runpath=yes
+-  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
++  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
+   hardcode_into_libs=yes
+   ;;
+ 
+@@ -10452,10 +11107,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -10558,10 +11213,10 @@ else
+ /* When -fvisbility=hidden is used, assume the code has been annotated
+    correspondingly for the symbols needed.  */
+ #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
+-void fnord () __attribute__((visibility("default")));
++int fnord () __attribute__((visibility("default")));
+ #endif
+ 
+-void fnord () { int i=42; }
++int fnord () { return 42; }
+ int main ()
+ {
+   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
+@@ -11992,13 +12647,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
+ lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
+ lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
+ lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
++lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
+ reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
+ reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
+ OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
+ deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
+ file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
++file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
++want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
++DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
++sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
+ AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
+ AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
++archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
+ STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
+ RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
+ old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
+@@ -12013,14 +12675,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
+ lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
++nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
++lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
+ objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
+ MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
+-lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
++lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
+ lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
+ lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
+ need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
++MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
+ DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
+ NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
+ LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
+@@ -12053,12 +12718,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
+ hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
+ inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
+ link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
+-fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
+ always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
+ export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
+ exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
+ include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
+ prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
++postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
+ file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
+ variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
+ need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
+@@ -12113,8 +12778,13 @@ reload_flag \
+ OBJDUMP \
+ deplibs_check_method \
+ file_magic_cmd \
++file_magic_glob \
++want_nocaseglob \
++DLLTOOL \
++sharedlib_from_linklib_cmd \
+ AR \
+ AR_FLAGS \
++archiver_list_spec \
+ STRIP \
+ RANLIB \
+ CC \
+@@ -12124,12 +12794,14 @@ lt_cv_sys_global_symbol_pipe \
+ lt_cv_sys_global_symbol_to_cdecl \
+ lt_cv_sys_global_symbol_to_c_name_address \
+ lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
++nm_file_list_spec \
+ lt_prog_compiler_no_builtin_flag \
+-lt_prog_compiler_wl \
+ lt_prog_compiler_pic \
++lt_prog_compiler_wl \
+ lt_prog_compiler_static \
+ lt_cv_prog_compiler_c_o \
+ need_locks \
++MANIFEST_TOOL \
+ DSYMUTIL \
+ NMEDIT \
+ LIPO \
+@@ -12145,7 +12817,6 @@ no_undefined_flag \
+ hardcode_libdir_flag_spec \
+ hardcode_libdir_flag_spec_ld \
+ hardcode_libdir_separator \
+-fix_srcfile_path \
+ exclude_expsyms \
+ include_expsyms \
+ file_list_spec \
+@@ -12181,6 +12852,7 @@ module_cmds \
+ module_expsym_cmds \
+ export_symbols_cmds \
+ prelink_cmds \
++postlink_cmds \
+ postinstall_cmds \
+ postuninstall_cmds \
+ finish_cmds \
+@@ -12770,7 +13442,8 @@ $as_echo X"$file" |
+ # NOTE: Changes made to this file will be lost: look at ltmain.sh.
+ #
+ #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
+-#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
++#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
++#                 Inc.
+ #   Written by Gordon Matzigkeit, 1996
+ #
+ #   This file is part of GNU Libtool.
+@@ -12873,19 +13546,42 @@ SP2NL=$lt_lt_SP2NL
+ # turn newlines into spaces.
+ NL2SP=$lt_lt_NL2SP
+ 
++# convert \$build file names to \$host format.
++to_host_file_cmd=$lt_cv_to_host_file_cmd
++
++# convert \$build files to toolchain format.
++to_tool_file_cmd=$lt_cv_to_tool_file_cmd
++
+ # An object symbol dumper.
+ OBJDUMP=$lt_OBJDUMP
+ 
+ # Method to check whether dependent libraries are shared objects.
+ deplibs_check_method=$lt_deplibs_check_method
+ 
+-# Command to use when deplibs_check_method == "file_magic".
++# Command to use when deplibs_check_method = "file_magic".
+ file_magic_cmd=$lt_file_magic_cmd
+ 
++# How to find potential files when deplibs_check_method = "file_magic".
++file_magic_glob=$lt_file_magic_glob
++
++# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
++want_nocaseglob=$lt_want_nocaseglob
++
++# DLL creation program.
++DLLTOOL=$lt_DLLTOOL
++
++# Command to associate shared and link libraries.
++sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
++
+ # The archiver.
+ AR=$lt_AR
++
++# Flags to create an archive.
+ AR_FLAGS=$lt_AR_FLAGS
+ 
++# How to feed a file listing to the archiver.
++archiver_list_spec=$lt_archiver_list_spec
++
+ # A symbol stripping program.
+ STRIP=$lt_STRIP
+ 
+@@ -12915,6 +13611,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
+ # Transform the output of nm in a C name address pair when lib prefix is needed.
+ global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
+ 
++# Specify filename containing input files for \$NM.
++nm_file_list_spec=$lt_nm_file_list_spec
++
++# The root where to search for dependent libraries,and in which our libraries should be installed.
++lt_sysroot=$lt_sysroot
++
+ # The name of the directory that contains temporary libtool files.
+ objdir=$objdir
+ 
+@@ -12924,6 +13626,9 @@ MAGIC_CMD=$MAGIC_CMD
+ # Must we lock files when doing compilation?
+ need_locks=$lt_need_locks
+ 
++# Manifest tool.
++MANIFEST_TOOL=$lt_MANIFEST_TOOL
++
+ # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
+ DSYMUTIL=$lt_DSYMUTIL
+ 
+@@ -13038,12 +13743,12 @@ with_gcc=$GCC
+ # Compiler flag to turn off builtin functions.
+ no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
+ 
+-# How to pass a linker flag through the compiler.
+-wl=$lt_lt_prog_compiler_wl
+-
+ # Additional compiler flags for building library objects.
+ pic_flag=$lt_lt_prog_compiler_pic
+ 
++# How to pass a linker flag through the compiler.
++wl=$lt_lt_prog_compiler_wl
++
+ # Compiler flag to prevent dynamic linking.
+ link_static_flag=$lt_lt_prog_compiler_static
+ 
+@@ -13130,9 +13835,6 @@ inherit_rpath=$inherit_rpath
+ # Whether libtool must link a program against all its dependency libraries.
+ link_all_deplibs=$link_all_deplibs
+ 
+-# Fix the shell variable \$srcfile for the compiler.
+-fix_srcfile_path=$lt_fix_srcfile_path
+-
+ # Set to "yes" if exported symbols are required.
+ always_export_symbols=$always_export_symbols
+ 
+@@ -13148,6 +13850,9 @@ include_expsyms=$lt_include_expsyms
+ # Commands necessary for linking programs (against libraries) with templates.
+ prelink_cmds=$lt_prelink_cmds
+ 
++# Commands necessary for finishing linking programs.
++postlink_cmds=$lt_postlink_cmds
++
+ # Specify filename containing input files.
+ file_list_spec=$lt_file_list_spec
+ 
+@@ -13180,210 +13885,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
+   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
+   # text mode, it properly converts lines to CR/LF.  This bash problem
+   # is reportedly fixed, but why not run on old versions too?
+-  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  case $xsi_shell in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_dirname_and_basename file append nondir_replacement
+-# perform func_basename and func_dirname in a single function
+-# call:
+-#   dirname:  Compute the dirname of FILE.  If nonempty,
+-#             add APPEND to the result, otherwise set result
+-#             to NONDIR_REPLACEMENT.
+-#             value returned in "$func_dirname_result"
+-#   basename: Compute filename of FILE.
+-#             value retuned in "$func_basename_result"
+-# Implementation must be kept synchronized with func_dirname
+-# and func_basename. For efficiency, we do not delegate to
+-# those functions but instead duplicate the functionality here.
+-func_dirname_and_basename ()
+-{
+-  case ${1} in
+-    */*) func_dirname_result="${1%/*}${2}" ;;
+-    *  ) func_dirname_result="${3}" ;;
+-  esac
+-  func_basename_result="${1##*/}"
+-}
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-func_stripname ()
+-{
+-  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
+-  # positional parameters, so assign one to ordinary parameter first.
+-  func_stripname_result=${3}
+-  func_stripname_result=${func_stripname_result#"${1}"}
+-  func_stripname_result=${func_stripname_result%"${2}"}
+-}
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=${1%%=*}
+-  func_opt_split_arg=${1#*=}
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  case ${1} in
+-    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
+-    *)    func_lo2o_result=${1} ;;
+-  esac
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=${1%.*}.lo
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=$(( $* ))
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=${#1}
+-}
+-
+-_LT_EOF
+-    ;;
+-  *) # Bourne compatible functions.
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_dirname file append nondir_replacement
+-# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
+-# otherwise set result to NONDIR_REPLACEMENT.
+-func_dirname ()
+-{
+-  # Extract subdirectory from the argument.
+-  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
+-  if test "X$func_dirname_result" = "X${1}"; then
+-    func_dirname_result="${3}"
+-  else
+-    func_dirname_result="$func_dirname_result${2}"
+-  fi
+-}
+-
+-# func_basename file
+-func_basename ()
+-{
+-  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
+-}
+-
+-
+-# func_stripname prefix suffix name
+-# strip PREFIX and SUFFIX off of NAME.
+-# PREFIX and SUFFIX must not contain globbing or regex special
+-# characters, hashes, percent signs, but SUFFIX may contain a leading
+-# dot (in which case that matches only a dot).
+-# func_strip_suffix prefix name
+-func_stripname ()
+-{
+-  case ${2} in
+-    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
+-    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+-  esac
+-}
+-
+-# sed scripts:
+-my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
+-my_sed_long_arg='1s/^-[^=]*=//'
+-
+-# func_opt_split
+-func_opt_split ()
+-{
+-  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
+-  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
+-}
+-
+-# func_lo2o object
+-func_lo2o ()
+-{
+-  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
+-}
+-
+-# func_xform libobj-or-source
+-func_xform ()
+-{
+-  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
+-}
+-
+-# func_arith arithmetic-term...
+-func_arith ()
+-{
+-  func_arith_result=`expr "$@"`
+-}
+-
+-# func_len string
+-# STRING may not start with a hyphen.
+-func_len ()
+-{
+-  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
+-}
+-
+-_LT_EOF
+-esac
+-
+-case $lt_shell_append in
+-  yes)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1+=\$2"
+-}
+-_LT_EOF
+-    ;;
+-  *)
+-    cat << \_LT_EOF >> "$cfgfile"
+-
+-# func_append var value
+-# Append VALUE to the end of shell variable VAR.
+-func_append ()
+-{
+-  eval "$1=\$$1\$2"
+-}
+-
+-_LT_EOF
+-    ;;
+-  esac
+-
+-
+-  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
+-    || (rm -f "$cfgfile"; exit 1)
+-
+-  mv -f "$cfgfile" "$ofile" ||
++  sed '$q' "$ltmain" >> "$cfgfile" \
++     || (rm -f "$cfgfile"; exit 1)
++
++  if test x"$xsi_shell" = xyes; then
++  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
++func_dirname ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_basename ()$/,/^} # func_basename /c\
++func_basename ()\
++{\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
++func_dirname_and_basename ()\
++{\
++\    case ${1} in\
++\      */*) func_dirname_result="${1%/*}${2}" ;;\
++\      *  ) func_dirname_result="${3}" ;;\
++\    esac\
++\    func_basename_result="${1##*/}"\
++} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
++func_stripname ()\
++{\
++\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
++\    # positional parameters, so assign one to ordinary parameter first.\
++\    func_stripname_result=${3}\
++\    func_stripname_result=${func_stripname_result#"${1}"}\
++\    func_stripname_result=${func_stripname_result%"${2}"}\
++} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
++func_split_long_opt ()\
++{\
++\    func_split_long_opt_name=${1%%=*}\
++\    func_split_long_opt_arg=${1#*=}\
++} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
++func_split_short_opt ()\
++{\
++\    func_split_short_opt_arg=${1#??}\
++\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
++} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
++func_lo2o ()\
++{\
++\    case ${1} in\
++\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
++\      *)    func_lo2o_result=${1} ;;\
++\    esac\
++} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_xform ()$/,/^} # func_xform /c\
++func_xform ()\
++{\
++    func_xform_result=${1%.*}.lo\
++} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_arith ()$/,/^} # func_arith /c\
++func_arith ()\
++{\
++    func_arith_result=$(( $* ))\
++} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_len ()$/,/^} # func_len /c\
++func_len ()\
++{\
++    func_len_result=${#1}\
++} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++fi
++
++if test x"$lt_shell_append" = xyes; then
++  sed -e '/^func_append ()$/,/^} # func_append /c\
++func_append ()\
++{\
++    eval "${1}+=\\${2}"\
++} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
++func_append_quoted ()\
++{\
++\    func_quote_for_eval "${2}"\
++\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
++} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
++  && mv -f "$cfgfile.tmp" "$cfgfile" \
++    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++test 0 -eq $? || _lt_function_replace_fail=:
++
++
++  # Save a `func_append' function call where possible by direct use of '+='
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++else
++  # Save a `func_append' function call even when '+=' is not available
++  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
++    && mv -f "$cfgfile.tmp" "$cfgfile" \
++      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
++  test 0 -eq $? || _lt_function_replace_fail=:
++fi
++
++if test x"$_lt_function_replace_fail" = x":"; then
++  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
++$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
++fi
++
++
++   mv -f "$cfgfile" "$ofile" ||
+     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
+   chmod +x "$ofile"
+ 
+-- 
+2.14.0
+
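Aside: the hunks above give the regenerated configure scripts a --with-libtool-sysroot option. Passing "yes" makes configure ask the cross compiler for its sysroot via $CC --print-sysroot, an absolute path is used as given, and anything else is rejected. A minimal usage sketch follows; the host triplet and directory are illustrative assumptions, not values taken from this change:

    # Let configure derive the sysroot from the cross compiler:
    ./configure --host=arm-poky-linux-gnueabi --with-libtool-sysroot=yes
    # ...which the generated code resolves as: lt_sysroot=`$CC --print-sysroot`

    # Or point libtool at an explicit (absolute) staging directory:
    ./configure --host=arm-poky-linux-gnueabi \
                --with-libtool-sysroot=/path/to/recipe-sysroot
    # A relative path fails with: "The sysroot must be an absolute path."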
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0007-Add-the-armv5e-architecture-to-binutils.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0007-Add-the-armv5e-architecture-to-binutils.patch
new file mode 100644
index 0000000..8801960
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0007-Add-the-armv5e-architecture-to-binutils.patch
@@ -0,0 +1,35 @@
+From 2b87aad1741bc481dd0982f100ad5ea7f937bb61 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 2 Mar 2015 01:37:10 +0000
+Subject: [PATCH 07/15] Add the armv5e architecture to binutils
+
+Binutils has a comment that indicates it is supposed to match gcc for
+all of the supported "-march=" settings, but it was lacking the armv5e setting.
+This was a simple way to add it, as thumb instructions shouldn't be generated
+by the compiler anyway.
+
+Upstream-Status: Denied
+Upstream maintainer indicated that we should not be using armv5e, even
+though it is a legal architecture defined by our gcc.
+
+Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ gas/config/tc-arm.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/gas/config/tc-arm.c b/gas/config/tc-arm.c
+index a885efe4fc..735eaa7447 100644
+--- a/gas/config/tc-arm.c
++++ b/gas/config/tc-arm.c
+@@ -25990,6 +25990,7 @@ static const struct arm_arch_option_table arm_archs[] =
+   ARM_ARCH_OPT ("armv4t",	ARM_ARCH_V4T,	 FPU_ARCH_FPA),
+   ARM_ARCH_OPT ("armv4txm",	ARM_ARCH_V4TxM,	 FPU_ARCH_FPA),
+   ARM_ARCH_OPT ("armv5",	ARM_ARCH_V5,	 FPU_ARCH_VFP),
++  ARM_ARCH_OPT ("armv5e",	ARM_ARCH_V5TE,	 FPU_ARCH_VFP),
+   ARM_ARCH_OPT ("armv5t",	ARM_ARCH_V5T,	 FPU_ARCH_VFP),
+   ARM_ARCH_OPT ("armv5txm",	ARM_ARCH_V5TxM,	 FPU_ARCH_VFP),
+   ARM_ARCH_OPT ("armv5te",	ARM_ARCH_V5TE,	 FPU_ARCH_VFP),
+-- 
+2.14.0
+
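Aside: the patch above only adds an ARM_ARCH_OPT table entry, so gas recognizes -march=armv5e and maps it to the ARMv5TE feature set it already supports, matching what gcc accepts. A minimal sketch of the effect; the cross-tool prefix and file names are illustrative assumptions:

    # gcc already treats armv5e as a valid -march value; with this patch
    # applied the assembler accepts the same flag instead of rejecting it:
    arm-poky-linux-gnueabi-gcc -march=armv5e -c main.c -o main.o
    arm-poky-linux-gnueabi-as  -march=armv5e start.s -o start.o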
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0007-Use-libtool-2.4.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0007-Use-libtool-2.4.patch
deleted file mode 100644
index 6b7f753..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0007-Use-libtool-2.4.patch
+++ /dev/null
@@ -1,21137 +0,0 @@
-From 9a3651e120261c72090689ad770ad048b0baf506 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 6 Mar 2017 23:28:33 -0800
-Subject: [PATCH 07/15] Use libtool 2.4
-
-get libtool sysroot support
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
-Upstream-Status: Inappropriate [ OE configuration Specific]
-
- bfd/configure        | 1318 +++++++++++++++++------
- bfd/configure.ac     |    2 +-
- binutils/configure   | 1316 +++++++++++++++++------
- gas/configure        | 1314 +++++++++++++++++------
- gprof/configure      | 1321 +++++++++++++++++------
- ld/configure         | 1691 +++++++++++++++++++++--------
- libtool.m4           | 1080 +++++++++++++------
- ltmain.sh            | 2925 +++++++++++++++++++++++++++++++++-----------------
- ltoptions.m4         |    2 +-
- ltversion.m4         |   12 +-
- lt~obsolete.m4       |    2 +-
- opcodes/configure    | 1318 +++++++++++++++++------
- opcodes/configure.ac |    2 +-
- zlib/configure       | 1316 +++++++++++++++++------
- 14 files changed, 9926 insertions(+), 3693 deletions(-)
-
-diff --git a/bfd/configure b/bfd/configure
-index f30bfabef3..fa1a545e9d 100755
---- a/bfd/configure
-+++ b/bfd/configure
-@@ -672,6 +672,9 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
-+ac_ct_AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -785,6 +788,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_plugins
- enable_largefile
-@@ -1461,6 +1465,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
-   --with-mmap             try using mmap for BFD input files if available
-   --with-separate-debug-dir=DIR
-                           Look for global separate debug info in DIR
-@@ -5393,8 +5399,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -5434,7 +5440,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -6120,8 +6126,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -6170,6 +6176,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if test "${lt_cv_to_host_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if test "${lt_cv_ld_reload_flag+set}" = set; then :
-@@ -6186,6 +6266,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -6354,7 +6439,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -6508,6 +6594,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -6523,9 +6624,162 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_AR+set}" = set; then :
-@@ -6541,7 +6795,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6561,11 +6815,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
-@@ -6581,7 +6839,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6600,6 +6858,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -6611,16 +6873,72 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if test "${lt_cv_ar_at_file+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
- 
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
- 
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
- 
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
- 
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
- 
- 
- 
-@@ -6962,8 +7280,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -6999,6 +7317,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -7040,6 +7359,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -7051,7 +7382,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -7077,8 +7408,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -7088,8 +7419,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -7126,6 +7457,16 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
- 
- 
- 
-@@ -7147,6 +7488,45 @@ fi
- 
- 
- 
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
-+
-+
-+
-+
-+
- # Check whether --enable-libtool-lock was given.
- if test "${enable_libtool_lock+set}" = set; then :
-   enableval=$enable_libtool_lock;
-@@ -7353,6 +7733,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if test "${lt_cv_path_mainfest_tool+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -7916,6 +8413,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -8080,7 +8579,8 @@ fi
- LIBTOOL_DEPS="$ltmain"
- 
- # Always use our own libtool.
--LIBTOOL='$(SHELL) $(top_builddir)/libtool'
-+LIBTOOL='$(SHELL) $(top_builddir)'
-+LIBTOOL="$LIBTOOL/${host_alias}-libtool"
- 
- 
- 
-@@ -8169,7 +8669,7 @@ aix3*)
- esac
- 
- # Global variables:
--ofile=libtool
-+ofile=${host_alias}-libtool
- can_build_shared=yes
- 
- # All known linkers require a `.a' archive for static linking (except MSVC,
-@@ -8467,8 +8967,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -8634,6 +9132,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -8696,7 +9200,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8753,13 +9257,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if test "${lt_cv_prog_compiler_pic+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8820,6 +9328,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -9170,7 +9683,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -9269,12 +9783,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -9288,8 +9802,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -9307,8 +9821,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9354,8 +9868,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9485,7 +9999,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9498,22 +10018,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -9525,7 +10052,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9538,22 +10071,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -9598,20 +10138,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -9672,7 +10255,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -9680,7 +10263,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -9696,7 +10279,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -9720,10 +10303,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -9802,23 +10385,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if test "${lt_cv_irix_exported_symbol+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9903,7 +10499,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -9922,9 +10518,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -10500,8 +11096,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -10534,13 +11131,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -10632,7 +11287,7 @@ haiku*)
-   soname_spec='${libname}${release}${shared_ext}$major'
-   shlibpath_var=LIBRARY_PATH
-   shlibpath_overrides_runpath=yes
--  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
-+  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
-   hardcode_into_libs=yes
-   ;;
- 
-@@ -11472,10 +12127,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11578,10 +12233,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -14122,7 +14777,7 @@ SHARED_LDFLAGS=
- if test "$enable_shared" = "yes"; then
-   x=`sed -n -e 's/^[ 	]*PICFLAG[ 	]*=[ 	]*//p' < ../libiberty/Makefile | sed -n '$p'`
-   if test -n "$x"; then
--    SHARED_LIBADD="-L`pwd`/../libiberty/pic -liberty"
-+    SHARED_LIBADD="`pwd`/../libiberty/pic/libiberty.a"
-   fi
- 
- # More hacks to build DLLs on Windows.
-@@ -16826,13 +17481,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -16847,14 +17509,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -16887,12 +17552,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -16947,8 +17612,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -16958,12 +17628,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -16979,7 +17651,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -17015,6 +17686,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -17794,7 +18466,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -17897,19 +18570,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -17939,6 +18635,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -17948,6 +18650,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -18062,12 +18767,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -18154,9 +18859,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -18172,6 +18874,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -18204,210 +18909,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-diff --git a/bfd/configure.ac b/bfd/configure.ac
-index 9a183c1628..3d8ea07836 100644
---- a/bfd/configure.ac
-+++ b/bfd/configure.ac
-@@ -253,7 +253,7 @@ changequote(,)dnl
-   x=`sed -n -e 's/^[ 	]*PICFLAG[ 	]*=[ 	]*//p' < ../libiberty/Makefile | sed -n '$p'`
- changequote([,])dnl
-   if test -n "$x"; then
--    SHARED_LIBADD="-L`pwd`/../libiberty/pic -liberty"
-+    SHARED_LIBADD="`pwd`/../libiberty/pic/libiberty.a"
-   fi
- 
- # More hacks to build DLLs on Windows.
-diff --git a/binutils/configure b/binutils/configure
-index 82119efe72..4a98918ce1 100755
---- a/binutils/configure
-+++ b/binutils/configure
-@@ -659,8 +659,11 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
- RANLIB
-+ac_ct_AR
- AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -772,6 +775,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_plugins
- enable_largefile
-@@ -1444,6 +1448,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
-   --with-system-zlib      use installed libz
-   --with-gnu-ld           assume the C compiler uses GNU ld default=no
-   --with-libiconv-prefix[=DIR]  search for libiconv in DIR/include and DIR/lib
-@@ -5160,8 +5166,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -5201,7 +5207,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -5887,8 +5893,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -5937,6 +5943,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if test "${lt_cv_to_host_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if test "${lt_cv_ld_reload_flag+set}" = set; then :
-@@ -5953,6 +6033,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -6121,7 +6206,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -6275,6 +6361,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -6290,9 +6391,162 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_AR+set}" = set; then :
-@@ -6308,7 +6562,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6328,11 +6582,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
-@@ -6348,7 +6606,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6367,6 +6625,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -6378,12 +6640,10 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
- 
- 
- 
-@@ -6395,6 +6655,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if test "${lt_cv_ar_at_file+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
-+
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
-   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
- set dummy ${ac_tool_prefix}strip; ac_word=$2
-@@ -6729,8 +7047,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -6766,6 +7084,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -6807,6 +7126,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -6818,7 +7149,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -6844,8 +7175,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -6855,8 +7186,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -6893,6 +7224,21 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
-+
-+
-+
-+
-+
- 
- 
- 
-@@ -6910,6 +7256,40 @@ fi
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
-+
-+
- 
- 
- 
-@@ -7120,6 +7500,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if test "${lt_cv_path_mainfest_tool+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -7683,6 +8180,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -7878,7 +8377,8 @@ fi
- LIBTOOL_DEPS="$ltmain"
- 
- # Always use our own libtool.
--LIBTOOL='$(SHELL) $(top_builddir)/libtool'
-+LIBTOOL='$(SHELL) $(top_builddir)'
-+LIBTOOL="$LIBTOOL/${host_alias}-libtool"
- 
- 
- 
-@@ -7967,7 +8467,7 @@ aix3*)
- esac
- 
- # Global variables:
--ofile=libtool
-+ofile=${host_alias}-libtool
- can_build_shared=yes
- 
- # All known linkers require a `.a' archive for static linking (except MSVC,
-@@ -8265,8 +8765,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -8432,6 +8930,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -8494,7 +8998,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8551,13 +9055,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if test "${lt_cv_prog_compiler_pic+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8618,6 +9126,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -8968,7 +9481,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -9067,12 +9581,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -9086,8 +9600,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -9105,8 +9619,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9152,8 +9666,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9283,7 +9797,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9296,22 +9816,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -9323,7 +9850,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9336,22 +9869,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -9396,20 +9936,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -9470,7 +10053,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -9478,7 +10061,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -9494,7 +10077,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -9518,10 +10101,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -9600,23 +10183,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if test "${lt_cv_irix_exported_symbol+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9701,7 +10297,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -9720,9 +10316,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -10298,8 +10894,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -10332,13 +10929,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -10430,7 +11085,7 @@ haiku*)
-   soname_spec='${libname}${release}${shared_ext}$major'
-   shlibpath_var=LIBRARY_PATH
-   shlibpath_overrides_runpath=yes
--  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
-+  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
-   hardcode_into_libs=yes
-   ;;
- 
-@@ -11270,10 +11925,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11376,10 +12031,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -15436,13 +16091,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -15457,14 +16119,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -15497,12 +16162,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -15557,8 +16222,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -15568,12 +16238,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -15589,7 +16261,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -15625,6 +16296,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -16382,7 +17054,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -16485,19 +17158,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -16527,6 +17223,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -16536,6 +17238,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -16650,12 +17355,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -16742,9 +17447,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -16760,6 +17462,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -16792,210 +17497,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-diff --git a/gas/configure b/gas/configure
-index e574cb8514..a36f1ae161 100755
---- a/gas/configure
-+++ b/gas/configure
-@@ -650,8 +650,11 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
- RANLIB
-+ac_ct_AR
- AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -763,6 +766,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_plugins
- enable_largefile
-@@ -4921,8 +4925,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -4962,7 +4966,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -5648,8 +5652,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -5698,6 +5702,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if test "${lt_cv_to_host_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if test "${lt_cv_ld_reload_flag+set}" = set; then :
-@@ -5714,6 +5792,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -5882,7 +5965,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -6036,6 +6120,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -6051,9 +6150,162 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_AR+set}" = set; then :
-@@ -6069,7 +6321,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6089,11 +6341,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
-@@ -6109,7 +6365,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6128,6 +6384,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -6139,12 +6399,10 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
- 
- 
- 
-@@ -6156,6 +6414,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if test "${lt_cv_ar_at_file+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
-+
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
-   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
- set dummy ${ac_tool_prefix}strip; ac_word=$2
-@@ -6490,8 +6806,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -6527,6 +6843,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -6568,6 +6885,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -6579,7 +6908,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -6605,8 +6934,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -6616,8 +6945,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -6654,6 +6983,21 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
-+
-+
-+
-+
-+
- 
- 
- 
-@@ -6671,6 +7015,40 @@ fi
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
-+
-+
- 
- 
- 
-@@ -6881,6 +7259,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if test "${lt_cv_path_mainfest_tool+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -7444,6 +7939,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -7639,7 +8136,8 @@ fi
- LIBTOOL_DEPS="$ltmain"
- 
- # Always use our own libtool.
--LIBTOOL='$(SHELL) $(top_builddir)/libtool'
-+LIBTOOL='$(SHELL) $(top_builddir)'
-+LIBTOOL="$LIBTOOL/${host_alias}-libtool"
- 
- 
- 
-@@ -7728,7 +8226,7 @@ aix3*)
- esac
- 
- # Global variables:
--ofile=libtool
-+ofile=${host_alias}-libtool
- can_build_shared=yes
- 
- # All known linkers require a `.a' archive for static linking (except MSVC,
-@@ -8026,8 +8524,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -8193,6 +8689,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -8255,7 +8757,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8312,13 +8814,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if test "${lt_cv_prog_compiler_pic+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8379,6 +8885,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -8729,7 +9240,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -8828,12 +9340,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -8847,8 +9359,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -8866,8 +9378,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8913,8 +9425,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9044,7 +9556,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9057,22 +9575,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -9084,7 +9609,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9097,22 +9628,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -9157,20 +9695,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -9231,7 +9812,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -9239,7 +9820,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -9255,7 +9836,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -9279,10 +9860,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -9361,23 +9942,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if test "${lt_cv_irix_exported_symbol+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9462,7 +10056,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -9481,9 +10075,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -10059,8 +10653,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -10093,13 +10688,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -10191,7 +10844,7 @@ haiku*)
-   soname_spec='${libname}${release}${shared_ext}$major'
-   shlibpath_var=LIBRARY_PATH
-   shlibpath_overrides_runpath=yes
--  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
-+  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
-   hardcode_into_libs=yes
-   ;;
- 
-@@ -11031,10 +11684,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11137,10 +11790,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -15425,13 +16078,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -15446,14 +16106,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -15486,12 +16149,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -15546,8 +16209,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -15557,12 +16225,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -15578,7 +16248,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -15614,6 +16283,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -16378,7 +17048,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -16481,19 +17152,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -16523,6 +17217,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -16532,6 +17232,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -16646,12 +17349,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -16738,9 +17441,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -16756,6 +17456,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -16788,210 +17491,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-diff --git a/gprof/configure b/gprof/configure
-index 38a4c0b0e5..38d1f699c7 100755
---- a/gprof/configure
-+++ b/gprof/configure
-@@ -631,8 +631,11 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
- RANLIB
-+ac_ct_AR
- AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -744,6 +747,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_plugins
- enable_largefile
-@@ -1402,6 +1406,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
- 
- Some influential environment variables:
-   CC          C compiler command
-@@ -4836,8 +4842,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -4877,7 +4883,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -5563,8 +5569,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -5613,6 +5619,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if test "${lt_cv_to_host_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if test "${lt_cv_ld_reload_flag+set}" = set; then :
-@@ -5629,6 +5709,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -5797,7 +5882,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -5874,11 +5960,6 @@ linux* | k*bsd*-gnu | kopensolaris*-gnu)
-   lt_cv_deplibs_check_method=pass_all
-   ;;
- 
--linux-uclibc*)
--  lt_cv_deplibs_check_method=pass_all
--  lt_cv_file_magic_test_file=`echo /lib/libuClibc-*.so`
--  ;;
--
- netbsd*)
-   if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then
-     lt_cv_deplibs_check_method='match_pattern /lib[^/]+(\.so\.[0-9]+\.[0-9]+|_pic\.a)$'
-@@ -5956,6 +6037,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -5971,9 +6067,162 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_AR+set}" = set; then :
-@@ -5989,7 +6238,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6009,11 +6258,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
-@@ -6029,7 +6282,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6048,6 +6301,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -6059,12 +6316,10 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
- 
- 
- 
-@@ -6076,6 +6331,64 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if test "${lt_cv_ar_at_file+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
-+
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
-   # Extract the first word of "${ac_tool_prefix}strip", so it can be a program name with args.
- set dummy ${ac_tool_prefix}strip; ac_word=$2
-@@ -6410,8 +6723,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -6447,6 +6760,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -6488,6 +6802,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -6499,7 +6825,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -6525,8 +6851,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -6536,8 +6862,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -6574,6 +6900,18 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
-+
-+
- 
- 
- 
-@@ -6595,6 +6933,43 @@ fi
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
-+
-+
-+
-+
-+
- # Check whether --enable-libtool-lock was given.
- if test "${enable_libtool_lock+set}" = set; then :
-   enableval=$enable_libtool_lock;
-@@ -6801,6 +7176,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if test "${lt_cv_path_mainfest_tool+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -7364,6 +7856,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -7559,7 +8053,8 @@ fi
- LIBTOOL_DEPS="$ltmain"
- 
- # Always use our own libtool.
--LIBTOOL='$(SHELL) $(top_builddir)/libtool'
-+LIBTOOL='$(SHELL) $(top_builddir)'
-+LIBTOOL="$LIBTOOL/${host_alias}-libtool"
- 
- 
- 
-@@ -7648,7 +8143,7 @@ aix3*)
- esac
- 
- # Global variables:
--ofile=libtool
-+ofile=${host_alias}-libtool
- can_build_shared=yes
- 
- # All known linkers require a `.a' archive for static linking (except MSVC,
-@@ -7946,8 +8441,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -8113,6 +8606,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -8175,7 +8674,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8232,13 +8731,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if test "${lt_cv_prog_compiler_pic+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8299,6 +8802,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -8649,7 +9157,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -8748,12 +9257,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -8767,8 +9276,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -8786,8 +9295,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8833,8 +9342,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8964,7 +9473,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -8977,22 +9492,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -9004,7 +9526,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9017,22 +9545,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -9077,20 +9612,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -9151,7 +9729,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -9159,7 +9737,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -9175,7 +9753,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -9199,10 +9777,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -9281,23 +9859,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if test "${lt_cv_irix_exported_symbol+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9382,7 +9973,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -9401,9 +9992,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -9979,8 +10570,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -10013,13 +10605,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -10111,7 +10761,7 @@ haiku*)
-   soname_spec='${libname}${release}${shared_ext}$major'
-   shlibpath_var=LIBRARY_PATH
-   shlibpath_overrides_runpath=yes
--  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
-+  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
-   hardcode_into_libs=yes
-   ;;
- 
-@@ -10951,10 +11601,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11057,10 +11707,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -13005,13 +13655,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -13026,14 +13683,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -13066,12 +13726,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -13126,8 +13786,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -13137,12 +13802,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -13158,7 +13825,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -13194,6 +13860,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -13950,7 +14617,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -14053,19 +14721,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -14095,6 +14786,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -14104,6 +14801,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -14218,12 +14918,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -14310,9 +15010,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -14328,6 +15025,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -14360,210 +15060,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-diff --git a/ld/configure b/ld/configure
-index a16c6db059..4277b74bad 100755
---- a/ld/configure
-+++ b/ld/configure
-@@ -659,8 +659,11 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
- RANLIB
-+ac_ct_AR
- AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -782,6 +785,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_plugins
- enable_largefile
-@@ -1463,6 +1467,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
-   --with-lib-path=dir1:dir2...  set default LIB_PATH
-   --with-sysroot=DIR Search for usr/lib et al within DIR.
- 
-@@ -5657,8 +5663,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -5698,7 +5704,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -6384,8 +6390,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -6434,6 +6440,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if test "${lt_cv_to_host_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if test "${lt_cv_ld_reload_flag+set}" = set; then :
-@@ -6450,6 +6530,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -6618,7 +6703,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -6772,6 +6858,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -6787,9 +6888,162 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_AR+set}" = set; then :
-@@ -6805,7 +7059,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6825,11 +7079,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
-@@ -6845,7 +7103,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6864,6 +7122,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -6875,12 +7137,12 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
-+
-+
- 
- 
- 
-@@ -6890,6 +7152,62 @@ test -z "$AR_FLAGS" && AR_FLAGS=cru
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if test "${lt_cv_ar_at_file+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
-+
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+
-+
-+
-+
-+
- 
- 
- if test -n "$ac_tool_prefix"; then
-@@ -7226,8 +7544,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -7263,6 +7581,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -7304,6 +7623,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -7315,7 +7646,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -7341,8 +7672,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -7352,8 +7683,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -7390,6 +7721,19 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
-+
-+
-+
- 
- 
- 
-@@ -7410,6 +7754,42 @@ fi
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
-+
-+
-+
-+
- 
- # Check whether --enable-libtool-lock was given.
- if test "${enable_libtool_lock+set}" = set; then :
-@@ -7617,6 +7997,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if test "${lt_cv_path_mainfest_tool+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -8180,6 +8677,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -8248,6 +8747,16 @@ done
- 
- 
- 
-+func_stripname_cnf ()
-+{
-+  case ${2} in
-+  .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
-+  *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
-+  esac
-+} # func_stripname_cnf
-+
-+
-+
- 
- 
- # Set options
-@@ -8376,7 +8885,8 @@ fi
- LIBTOOL_DEPS="$ltmain"
- 
- # Always use our own libtool.
--LIBTOOL='$(SHELL) $(top_builddir)/libtool'
-+LIBTOOL='$(SHELL) $(top_builddir)'
-+LIBTOOL="$LIBTOOL/${host_alias}-libtool"
- 
- 
- 
-@@ -8465,7 +8975,7 @@ aix3*)
- esac
- 
- # Global variables:
--ofile=libtool
-+ofile=${host_alias}-libtool
- can_build_shared=yes
- 
- # All known linkers require a `.a' archive for static linking (except MSVC,
-@@ -8763,8 +9273,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -8930,6 +9438,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -8992,7 +9506,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -9049,13 +9563,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if test "${lt_cv_prog_compiler_pic+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -9116,6 +9634,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -9466,7 +9989,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -9565,12 +10089,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -9584,8 +10108,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -9603,8 +10127,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9650,8 +10174,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9781,7 +10305,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9794,22 +10324,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -9821,7 +10358,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9834,22 +10377,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -9893,21 +10443,64 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # When not using gcc, we currently assume that we are using
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
--      # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      # no search path for DLLs.
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -9968,7 +10561,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -9976,7 +10569,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -9992,7 +10585,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -10016,10 +10609,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -10098,23 +10691,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if test "${lt_cv_irix_exported_symbol+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -10199,7 +10805,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -10218,9 +10824,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -10796,8 +11402,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -10830,13 +11437,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -10928,7 +11593,7 @@ haiku*)
-   soname_spec='${libname}${release}${shared_ext}$major'
-   shlibpath_var=LIBRARY_PATH
-   shlibpath_overrides_runpath=yes
--  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
-+  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
-   hardcode_into_libs=yes
-   ;;
- 
-@@ -11768,10 +12433,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11874,10 +12539,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -12269,6 +12934,7 @@ $RM -r conftest*
- 
-   # Allow CC to be a program name with arguments.
-   lt_save_CC=$CC
-+  lt_save_CFLAGS=$CFLAGS
-   lt_save_LD=$LD
-   lt_save_GCC=$GCC
-   GCC=$GXX
-@@ -12286,6 +12952,7 @@ $RM -r conftest*
-   fi
-   test -z "${LDCXX+set}" || LD=$LDCXX
-   CC=${CXX-"c++"}
-+  CFLAGS=$CXXFLAGS
-   compiler=$CC
-   compiler_CXX=$CC
-   for cc_temp in $compiler""; do
-@@ -12568,7 +13235,13 @@ $as_echo_n "checking whether the $compiler linker ($LD) supports shared librarie
-           allow_undefined_flag_CXX='-berok'
-           # Determine the default libpath from the value encoded in an empty
-           # executable.
--          cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+          if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath__CXX+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -12581,22 +13254,29 @@ main ()
- _ACEOF
- if ac_fn_cxx_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath__CXX"; then
-+    lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath__CXX"; then
-+    lt_cv_aix_libpath__CXX="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath__CXX
-+fi
- 
-           hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
- 
-@@ -12609,7 +13289,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-           else
- 	    # Determine the default libpath from the value encoded in an
- 	    # empty executable.
--	    cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	    if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath__CXX+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -12622,22 +13308,29 @@ main ()
- _ACEOF
- if ac_fn_cxx_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath__CXX=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath__CXX"; then
-+    lt_cv_aix_libpath__CXX=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath__CXX"; then
-+    lt_cv_aix_libpath__CXX="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath__CXX
-+fi
- 
- 	    hardcode_libdir_flag_spec_CXX='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	    # Warning - without using the other run time loading flags,
-@@ -12680,29 +13373,75 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-         ;;
- 
-       cygwin* | mingw* | pw32* | cegcc*)
--        # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
--        # as there is no search path for DLLs.
--        hardcode_libdir_flag_spec_CXX='-L$libdir'
--        export_dynamic_flag_spec_CXX='${wl}--export-all-symbols'
--        allow_undefined_flag_CXX=unsupported
--        always_export_symbols_CXX=no
--        enable_shared_with_static_runtimes_CXX=yes
--
--        if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
--          archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
--          # If the export-symbols file already is a .def file (1st line
--          # is EXPORTS), use it as is; otherwise, prepend...
--          archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
--	    cp $export_symbols $output_objdir/$soname.def;
--          else
--	    echo EXPORTS > $output_objdir/$soname.def;
--	    cat $export_symbols >> $output_objdir/$soname.def;
--          fi~
--          $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
--        else
--          ld_shlibs_CXX=no
--        fi
--        ;;
-+	case $GXX,$cc_basename in
-+	,cl* | no,cl*)
-+	  # Native MSVC
-+	  # hardcode_libdir_flag_spec is actually meaningless, as there is
-+	  # no search path for DLLs.
-+	  hardcode_libdir_flag_spec_CXX=' '
-+	  allow_undefined_flag_CXX=unsupported
-+	  always_export_symbols_CXX=yes
-+	  file_list_spec_CXX='@'
-+	  # Tell ltmain to make .lib files, not .a files.
-+	  libext=lib
-+	  # Tell ltmain to make .dll files, not .so files.
-+	  shrext_cmds=".dll"
-+	  # FIXME: Setting linknames here is a bad hack.
-+	  archive_cmds_CXX='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	  archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	      $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	    else
-+	      $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	    fi~
-+	    $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	    linknames='
-+	  # The linker will not automatically build a static lib if we build a DLL.
-+	  # _LT_TAGVAR(old_archive_from_new_cmds, CXX)='true'
-+	  enable_shared_with_static_runtimes_CXX=yes
-+	  # Don't use ranlib
-+	  old_postinstall_cmds_CXX='chmod 644 $oldlib'
-+	  postlink_cmds_CXX='lt_outputfile="@OUTPUT@"~
-+	    lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	    case $lt_outputfile in
-+	      *.exe|*.EXE) ;;
-+	      *)
-+		lt_outputfile="$lt_outputfile.exe"
-+		lt_tool_outputfile="$lt_tool_outputfile.exe"
-+		;;
-+	    esac~
-+	    func_to_tool_file "$lt_outputfile"~
-+	    if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	      $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	      $RM "$lt_outputfile.manifest";
-+	    fi'
-+	  ;;
-+	*)
-+	  # g++
-+	  # _LT_TAGVAR(hardcode_libdir_flag_spec, CXX) is actually meaningless,
-+	  # as there is no search path for DLLs.
-+	  hardcode_libdir_flag_spec_CXX='-L$libdir'
-+	  export_dynamic_flag_spec_CXX='${wl}--export-all-symbols'
-+	  allow_undefined_flag_CXX=unsupported
-+	  always_export_symbols_CXX=no
-+	  enable_shared_with_static_runtimes_CXX=yes
-+
-+	  if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-+	    archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-+	    # If the export-symbols file already is a .def file (1st line
-+	    # is EXPORTS), use it as is; otherwise, prepend...
-+	    archive_expsym_cmds_CXX='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	      cp $export_symbols $output_objdir/$soname.def;
-+	    else
-+	      echo EXPORTS > $output_objdir/$soname.def;
-+	      cat $export_symbols >> $output_objdir/$soname.def;
-+	    fi~
-+	    $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-+	  else
-+	    ld_shlibs_CXX=no
-+	  fi
-+	  ;;
-+	esac
-+	;;
-       darwin* | rhapsody*)
- 
- 
-@@ -12808,7 +13547,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-             ;;
-           *)
-             if test "$GXX" = yes; then
--              archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+              archive_cmds_CXX='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-             else
-               # FIXME: insert proper C++ library support
-               ld_shlibs_CXX=no
-@@ -12879,10 +13618,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
- 	            ;;
- 	          ia64*)
--	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
-+	            archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
- 	            ;;
- 	          *)
--	            archive_cmds_CXX='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
-+	            archive_cmds_CXX='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
- 	            ;;
- 	        esac
- 	      fi
-@@ -12923,9 +13662,9 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-           *)
- 	    if test "$GXX" = yes; then
- 	      if test "$with_gnu_ld" = no; then
--	        archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	      else
--	        archive_cmds_CXX='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
-+	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
- 	      fi
- 	    fi
- 	    link_all_deplibs_CXX=yes
-@@ -12995,20 +13734,20 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	      prelink_cmds_CXX='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
--		compile_command="$compile_command `find $tpldir -name \*.o | $NL2SP`"'
-+		compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
- 	      old_archive_cmds_CXX='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
--		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | $NL2SP`~
-+		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
- 		$RANLIB $oldlib'
- 	      archive_cmds_CXX='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
--		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
-+		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
- 	      archive_expsym_cmds_CXX='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
--		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
-+		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
- 	      ;;
- 	    *) # Version 6 and above use weak symbols
- 	      archive_cmds_CXX='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
-@@ -13203,7 +13942,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	          archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 		  ;;
- 	        *)
--	          archive_cmds_CXX='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	          archive_cmds_CXX='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 		  ;;
- 	      esac
- 
-@@ -13249,7 +13988,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-       solaris*)
-         case $cc_basename in
--          CC*)
-+          CC* | sunCC*)
- 	    # Sun C++ 4.2, 5.x and Centerline C++
-             archive_cmds_need_lc_CXX=yes
- 	    no_undefined_flag_CXX=' -zdefs'
-@@ -13290,9 +14029,9 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	    if test "$GXX" = yes && test "$with_gnu_ld" = no; then
- 	      no_undefined_flag_CXX=' ${wl}-z ${wl}defs'
- 	      if $CC --version | $GREP -v '^2\.7' > /dev/null; then
--	        archive_cmds_CXX='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
-+	        archive_cmds_CXX='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
- 	        archive_expsym_cmds_CXX='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--		  $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
-+		  $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
- 
- 	        # Commands to make compiler produce verbose output that lists
- 	        # what "hidden" libraries, object files and flags are used when
-@@ -13427,6 +14166,13 @@ private:
- };
- _LT_EOF
- 
-+
-+_lt_libdeps_save_CFLAGS=$CFLAGS
-+case "$CC $CFLAGS " in #(
-+*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
-+*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
-+esac
-+
- if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
-   (eval $ac_compile) 2>&5
-   ac_status=$?
-@@ -13440,7 +14186,7 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
-   pre_test_object_deps_done=no
- 
-   for p in `eval "$output_verbose_link_cmd"`; do
--    case $p in
-+    case ${prev}${p} in
- 
-     -L* | -R* | -l*)
-        # Some compilers place space between "-{L,R}" and the path.
-@@ -13449,13 +14195,22 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
-           test $p = "-R"; then
- 	 prev=$p
- 	 continue
--       else
--	 prev=
-        fi
- 
-+       # Expand the sysroot to ease extracting the directories later.
-+       if test -z "$prev"; then
-+         case $p in
-+         -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
-+         -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
-+         -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
-+         esac
-+       fi
-+       case $p in
-+       =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
-+       esac
-        if test "$pre_test_object_deps_done" = no; then
--	 case $p in
--	 -L* | -R*)
-+	 case ${prev} in
-+	 -L | -R)
- 	   # Internal compiler library paths should come after those
- 	   # provided the user.  The postdeps already come after the
- 	   # user supplied libs so there is no need to process them.
-@@ -13475,8 +14230,10 @@ if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_compile\""; } >&5
- 	   postdeps_CXX="${postdeps_CXX} ${prev}${p}"
- 	 fi
-        fi
-+       prev=
-        ;;
- 
-+    *.lto.$objext) ;; # Ignore GCC LTO objects
-     *.$objext)
-        # This assumes that the test object file only shows up
-        # once in the compiler output.
-@@ -13512,6 +14269,7 @@ else
- fi
- 
- $RM -f confest.$objext
-+CFLAGS=$_lt_libdeps_save_CFLAGS
- 
- # PORTME: override above test on systems where it is broken
- case $host_os in
-@@ -13547,7 +14305,7 @@ linux*)
- 
- solaris*)
-   case $cc_basename in
--  CC*)
-+  CC* | sunCC*)
-     # The more standards-conforming stlport4 library is
-     # incompatible with the Cstd library. Avoid specifying
-     # it if it's in CXXFLAGS. Ignore libCrun as
-@@ -13612,8 +14370,6 @@ fi
- lt_prog_compiler_pic_CXX=
- lt_prog_compiler_static_CXX=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   # C++ specific cases for pic, static, wl, etc.
-   if test "$GXX" = yes; then
-@@ -13718,6 +14474,11 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	  ;;
- 	esac
- 	;;
-+      mingw* | cygwin* | os2* | pw32* | cegcc*)
-+	# This hack is so that the source file can tell whether it is being
-+	# built for inclusion in a dll (and should export symbols for example).
-+	lt_prog_compiler_pic_CXX='-DDLL_EXPORT'
-+	;;
-       dgux*)
- 	case $cc_basename in
- 	  ec++*)
-@@ -13870,7 +14631,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	;;
-       solaris*)
- 	case $cc_basename in
--	  CC*)
-+	  CC* | sunCC*)
- 	    # Sun C++ 4.2, 5.x and Centerline C++
- 	    lt_prog_compiler_pic_CXX='-KPIC'
- 	    lt_prog_compiler_static_CXX='-Bstatic'
-@@ -13935,10 +14696,17 @@ case $host_os in
-     lt_prog_compiler_pic_CXX="$lt_prog_compiler_pic_CXX -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic_CXX" >&5
--$as_echo "$lt_prog_compiler_pic_CXX" >&6; }
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if test "${lt_cv_prog_compiler_pic_CXX+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic_CXX=$lt_prog_compiler_pic_CXX
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic_CXX" >&5
-+$as_echo "$lt_cv_prog_compiler_pic_CXX" >&6; }
-+lt_prog_compiler_pic_CXX=$lt_cv_prog_compiler_pic_CXX
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -13996,6 +14764,8 @@ fi
- 
- 
- 
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -14173,6 +14943,7 @@ fi
- $as_echo_n "checking whether the $compiler linker ($LD) supports shared libraries... " >&6; }
- 
-   export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
-+  exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
-   case $host_os in
-   aix[4-9]*)
-     # If we're using GNU nm, then we don't want the "-C" option.
-@@ -14187,15 +14958,20 @@ $as_echo_n "checking whether the $compiler linker ($LD) supports shared librarie
-     ;;
-   pw32*)
-     export_symbols_cmds_CXX="$ltdll_cmds"
--  ;;
-+    ;;
-   cygwin* | mingw* | cegcc*)
--    export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;/^.*[ ]__nm__/s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
--  ;;
-+    case $cc_basename in
-+    cl*) ;;
-+    *)
-+      export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms_CXX='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
-+      ;;
-+    esac
-+    ;;
-   *)
-     export_symbols_cmds_CXX='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
--  ;;
-+    ;;
-   esac
--  exclude_expsyms_CXX='_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*'
- 
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ld_shlibs_CXX" >&5
- $as_echo "$ld_shlibs_CXX" >&6; }
-@@ -14458,8 +15234,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -14491,13 +15268,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -14588,7 +15423,7 @@ haiku*)
-   soname_spec='${libname}${release}${shared_ext}$major'
-   shlibpath_var=LIBRARY_PATH
-   shlibpath_overrides_runpath=yes
--  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
-+  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
-   hardcode_into_libs=yes
-   ;;
- 
-@@ -15047,6 +15882,7 @@ fi
-   fi # test -n "$compiler"
- 
-   CC=$lt_save_CC
-+  CFLAGS=$lt_save_CFLAGS
-   LDCXX=$LD
-   LD=$lt_save_LD
-   GCC=$lt_save_GCC
-@@ -18026,13 +18862,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -18047,14 +18890,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -18087,12 +18933,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -18131,8 +18977,8 @@ old_archive_cmds_CXX='`$ECHO "$old_archive_cmds_CXX" | $SED "$delay_single_quote
- compiler_CXX='`$ECHO "$compiler_CXX" | $SED "$delay_single_quote_subst"`'
- GCC_CXX='`$ECHO "$GCC_CXX" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag_CXX='`$ECHO "$lt_prog_compiler_no_builtin_flag_CXX" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic_CXX='`$ECHO "$lt_prog_compiler_pic_CXX" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl_CXX='`$ECHO "$lt_prog_compiler_wl_CXX" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static_CXX='`$ECHO "$lt_prog_compiler_static_CXX" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o_CXX='`$ECHO "$lt_cv_prog_compiler_c_o_CXX" | $SED "$delay_single_quote_subst"`'
- archive_cmds_need_lc_CXX='`$ECHO "$archive_cmds_need_lc_CXX" | $SED "$delay_single_quote_subst"`'
-@@ -18159,12 +19005,12 @@ hardcode_shlibpath_var_CXX='`$ECHO "$hardcode_shlibpath_var_CXX" | $SED "$delay_
- hardcode_automatic_CXX='`$ECHO "$hardcode_automatic_CXX" | $SED "$delay_single_quote_subst"`'
- inherit_rpath_CXX='`$ECHO "$inherit_rpath_CXX" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs_CXX='`$ECHO "$link_all_deplibs_CXX" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path_CXX='`$ECHO "$fix_srcfile_path_CXX" | $SED "$delay_single_quote_subst"`'
- always_export_symbols_CXX='`$ECHO "$always_export_symbols_CXX" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds_CXX='`$ECHO "$export_symbols_cmds_CXX" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms_CXX='`$ECHO "$exclude_expsyms_CXX" | $SED "$delay_single_quote_subst"`'
- include_expsyms_CXX='`$ECHO "$include_expsyms_CXX" | $SED "$delay_single_quote_subst"`'
- prelink_cmds_CXX='`$ECHO "$prelink_cmds_CXX" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds_CXX='`$ECHO "$postlink_cmds_CXX" | $SED "$delay_single_quote_subst"`'
- file_list_spec_CXX='`$ECHO "$file_list_spec_CXX" | $SED "$delay_single_quote_subst"`'
- hardcode_action_CXX='`$ECHO "$hardcode_action_CXX" | $SED "$delay_single_quote_subst"`'
- compiler_lib_search_dirs_CXX='`$ECHO "$compiler_lib_search_dirs_CXX" | $SED "$delay_single_quote_subst"`'
-@@ -18202,8 +19048,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -18213,12 +19064,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -18234,7 +19087,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -18256,8 +19108,8 @@ LD_CXX \
- reload_flag_CXX \
- compiler_CXX \
- lt_prog_compiler_no_builtin_flag_CXX \
--lt_prog_compiler_wl_CXX \
- lt_prog_compiler_pic_CXX \
-+lt_prog_compiler_wl_CXX \
- lt_prog_compiler_static_CXX \
- lt_cv_prog_compiler_c_o_CXX \
- export_dynamic_flag_spec_CXX \
-@@ -18269,7 +19121,6 @@ no_undefined_flag_CXX \
- hardcode_libdir_flag_spec_CXX \
- hardcode_libdir_flag_spec_ld_CXX \
- hardcode_libdir_separator_CXX \
--fix_srcfile_path_CXX \
- exclude_expsyms_CXX \
- include_expsyms_CXX \
- file_list_spec_CXX \
-@@ -18303,6 +19154,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -18317,7 +19169,8 @@ archive_expsym_cmds_CXX \
- module_cmds_CXX \
- module_expsym_cmds_CXX \
- export_symbols_cmds_CXX \
--prelink_cmds_CXX; do
-+prelink_cmds_CXX \
-+postlink_cmds_CXX; do
-     case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in
-     *[\\\\\\\`\\"\\\$]*)
-       eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\""
-@@ -19110,7 +19963,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -19213,19 +20067,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -19255,6 +20132,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -19264,6 +20147,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -19378,12 +20264,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -19470,9 +20356,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -19488,6 +20371,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -19534,210 +20420,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-@@ -19765,12 +20610,12 @@ with_gcc=$GCC_CXX
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag_CXX
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl_CXX
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic_CXX
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl_CXX
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static_CXX
- 
-@@ -19857,9 +20702,6 @@ inherit_rpath=$inherit_rpath_CXX
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs_CXX
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path_CXX
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols_CXX
- 
-@@ -19875,6 +20717,9 @@ include_expsyms=$lt_include_expsyms_CXX
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds_CXX
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds_CXX
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec_CXX
- 
-diff --git a/libtool.m4 b/libtool.m4
-index 24d13f3440..e45fdc6998 100644
---- a/libtool.m4
-+++ b/libtool.m4
-@@ -1,7 +1,8 @@
- # libtool.m4 - Configure libtool for the host system. -*-Autoconf-*-
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- # This file is free software; the Free Software Foundation gives
-@@ -10,7 +11,8 @@
- 
- m4_define([_LT_COPYING], [dnl
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -37,7 +39,7 @@ m4_define([_LT_COPYING], [dnl
- # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
- ])
- 
--# serial 56 LT_INIT
-+# serial 57 LT_INIT
- 
- 
- # LT_PREREQ(VERSION)
-@@ -92,7 +94,8 @@ _LT_SET_OPTIONS([$0], [$1])
- LIBTOOL_DEPS="$ltmain"
- 
- # Always use our own libtool.
--LIBTOOL='$(SHELL) $(top_builddir)/libtool'
-+LIBTOOL='$(SHELL) $(top_builddir)'
-+LIBTOOL="$LIBTOOL/${host_alias}-libtool"
- AC_SUBST(LIBTOOL)dnl
- 
- _LT_SETUP
-@@ -166,10 +169,13 @@ _LT_DECL([], [exeext], [0], [Executable file suffix (normally "")])dnl
- dnl
- m4_require([_LT_FILEUTILS_DEFAULTS])dnl
- m4_require([_LT_CHECK_SHELL_FEATURES])dnl
-+m4_require([_LT_PATH_CONVERSION_FUNCTIONS])dnl
- m4_require([_LT_CMD_RELOAD])dnl
- m4_require([_LT_CHECK_MAGIC_METHOD])dnl
-+m4_require([_LT_CHECK_SHAREDLIB_FROM_LINKLIB])dnl
- m4_require([_LT_CMD_OLD_ARCHIVE])dnl
- m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl
-+m4_require([_LT_WITH_SYSROOT])dnl
- 
- _LT_CONFIG_LIBTOOL_INIT([
- # See if we are running on zsh, and set the options which allow our
-@@ -199,7 +205,7 @@ aix3*)
- esac
- 
- # Global variables:
--ofile=libtool
-+ofile=${host_alias}-libtool
- can_build_shared=yes
- 
- # All known linkers require a `.a' archive for static linking (except MSVC,
-@@ -632,7 +638,7 @@ m4_ifset([AC_PACKAGE_NAME], [AC_PACKAGE_NAME ])config.lt[]dnl
- m4_ifset([AC_PACKAGE_VERSION], [ AC_PACKAGE_VERSION])
- configured by $[0], generated by m4_PACKAGE_STRING.
- 
--Copyright (C) 2009 Free Software Foundation, Inc.
-+Copyright (C) 2010 Free Software Foundation, Inc.
- This config.lt script is free software; the Free Software Foundation
- gives unlimited permision to copy, distribute and modify it."
- 
-@@ -746,15 +752,12 @@ _LT_EOF
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
- 
--  _LT_PROG_XSI_SHELLFNS
-+  _LT_PROG_REPLACE_SHELLFNS
- 
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- ],
-@@ -980,6 +983,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&AS_MESSAGE_LOG_FD
-       echo "$AR cru libconftest.a conftest.o" >&AS_MESSAGE_LOG_FD
-       $AR cru libconftest.a conftest.o 2>&AS_MESSAGE_LOG_FD
-+      echo "$RANLIB libconftest.a" >&AS_MESSAGE_LOG_FD
-+      $RANLIB libconftest.a 2>&AS_MESSAGE_LOG_FD
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -1069,30 +1074,41 @@ m4_defun([_LT_DARWIN_LINKER_FEATURES],
-   fi
- ])
- 
--# _LT_SYS_MODULE_PATH_AIX
--# -----------------------
-+# _LT_SYS_MODULE_PATH_AIX([TAGNAME])
-+# ----------------------------------
- # Links a minimal program and checks the executable
- # for the system default hardcoded library path. In most cases,
- # this is /usr/lib:/lib, but when the MPI compilers are used
- # the location of the communication and MPI libs are included too.
- # If we don't find anything, use the default library path according
- # to the aix ld manual.
-+# Store the results from the different compilers for each TAGNAME.
-+# Allow to override them for all tags through lt_cv_aix_libpath.
- m4_defun([_LT_SYS_MODULE_PATH_AIX],
- [m4_require([_LT_DECL_SED])dnl
--AC_LINK_IFELSE(AC_LANG_PROGRAM,[
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi],[])
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  AC_CACHE_VAL([_LT_TAGVAR([lt_cv_aix_libpath_], [$1])],
-+  [AC_LINK_IFELSE([AC_LANG_PROGRAM],[
-+  lt_aix_libpath_sed='[
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }]'
-+  _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
-+    _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi],[])
-+  if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
-+    _LT_TAGVAR([lt_cv_aix_libpath_], [$1])="/usr/lib:/lib"
-+  fi
-+  ])
-+  aix_libpath=$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])
-+fi
- ])# _LT_SYS_MODULE_PATH_AIX
- 
- 
-@@ -1117,7 +1133,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- 
- AC_MSG_CHECKING([how to print strings])
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -1161,6 +1177,39 @@ _LT_DECL([], [ECHO], [1], [An echo program that protects backslashes])
- ])# _LT_PROG_ECHO_BACKSLASH
- 
- 
-+# _LT_WITH_SYSROOT
-+# ----------------
-+AC_DEFUN([_LT_WITH_SYSROOT],
-+[AC_MSG_CHECKING([for sysroot])
-+AC_ARG_WITH([libtool-sysroot],
-+[  --with-libtool-sysroot[=DIR] Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).],
-+[], [with_libtool_sysroot=no])
-+
-+dnl lt_sysroot will always be passed unquoted.  We quote it here
-+dnl in case the user passed a directory name.
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   AC_MSG_RESULT([${with_libtool_sysroot}])
-+   AC_MSG_ERROR([The sysroot must be an absolute path.])
-+   ;;
-+esac
-+
-+ AC_MSG_RESULT([${lt_sysroot:-no}])
-+_LT_DECL([], [lt_sysroot], [0], [The root where to search for ]dnl
-+[dependent libraries, and in which our libraries should be installed.])])
-+
- # _LT_ENABLE_LOCK
- # ---------------
- m4_defun([_LT_ENABLE_LOCK],
-@@ -1320,14 +1369,47 @@ need_locks="$enable_libtool_lock"
- ])# _LT_ENABLE_LOCK
- 
- 
-+# _LT_PROG_AR
-+# -----------
-+m4_defun([_LT_PROG_AR],
-+[AC_CHECK_TOOLS(AR, [ar], false)
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
-+_LT_DECL([], [AR], [1], [The archiver])
-+_LT_DECL([], [AR_FLAGS], [1], [Flags to create an archive])
-+
-+AC_CACHE_CHECK([for archiver @FILE support], [lt_cv_ar_at_file],
-+  [lt_cv_ar_at_file=no
-+   AC_COMPILE_IFELSE([AC_LANG_PROGRAM],
-+     [echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&AS_MESSAGE_LOG_FD'
-+      AC_TRY_EVAL([lt_ar_try])
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	AC_TRY_EVAL([lt_ar_try])
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
-+     ])
-+  ])
-+
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
-+_LT_DECL([], [archiver_list_spec], [1],
-+  [How to feed a file listing to the archiver])
-+])# _LT_PROG_AR
-+
-+
- # _LT_CMD_OLD_ARCHIVE
- # -------------------
- m4_defun([_LT_CMD_OLD_ARCHIVE],
--[AC_CHECK_TOOL(AR, ar, false)
--test -z "$AR" && AR=ar
--test -z "$AR_FLAGS" && AR_FLAGS=cru
--_LT_DECL([], [AR], [1], [The archiver])
--_LT_DECL([], [AR_FLAGS], [1])
-+[_LT_PROG_AR
- 
- AC_CHECK_TOOL(STRIP, strip, :)
- test -z "$STRIP" && STRIP=:
-@@ -1623,7 +1705,7 @@ else
-   lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
-   lt_status=$lt_dlunknown
-   cat > conftest.$ac_ext <<_LT_EOF
--[#line __oline__ "configure"
-+[#line $LINENO "configure"
- #include "confdefs.h"
- 
- #if HAVE_DLFCN_H
-@@ -1667,10 +1749,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -2210,8 +2292,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -2244,13 +2327,71 @@ m4_if([$1], [],[
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([[a-zA-Z]]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | [$GREP ';[c-zC-Z]:/' >/dev/null]; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
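For the native MSVC branch above, the $LIB search path has to be turned into something the configure shell can split on spaces. A rough sketch of the cygpath round trip, with a hypothetical $LIB value:

    LIB='C:\Program Files\Foo\lib;C:\tools\lib'
    cygpath --path --unix "$LIB"
    #  -> /cygdrive/c/Program Files/Foo/lib:/cygdrive/c/tools/lib
    cygpath --path --dos "$(cygpath --path --unix "$LIB")"
    #  -> C:\PROGRA~1\Foo\lib;C:\tools\lib   (8.3 short names, so no spaces)
    # A final --unix pass, with the path separator replaced by spaces, gives the
    # space-separated list stored in sys_lib_search_path_spec.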
-@@ -2342,7 +2483,7 @@ haiku*)
-   soname_spec='${libname}${release}${shared_ext}$major'
-   shlibpath_var=LIBRARY_PATH
-   shlibpath_overrides_runpath=yes
--  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
-+  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
-   hardcode_into_libs=yes
-   ;;
- 
-@@ -2950,6 +3091,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -3016,7 +3162,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -3167,6 +3314,21 @@ tpf*)
-   ;;
- esac
- ])
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[[\1]]\/[[\1]]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -3174,7 +3336,11 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- _LT_DECL([], [deplibs_check_method], [1],
-     [Method to check whether dependent libraries are shared objects])
- _LT_DECL([], [file_magic_cmd], [1],
--    [Command to use when deplibs_check_method == "file_magic"])
-+    [Command to use when deplibs_check_method = "file_magic"])
-+_LT_DECL([], [file_magic_glob], [1],
-+    [How to find potential files when deplibs_check_method = "file_magic"])
-+_LT_DECL([], [want_nocaseglob], [1],
-+    [Find potential files using nocaseglob when deplibs_check_method = "file_magic"])
- ])# _LT_CHECK_MAGIC_METHOD
- 
- 
-@@ -3277,6 +3443,67 @@ dnl aclocal-1.4 backwards compatibility:
- dnl AC_DEFUN([AM_PROG_NM], [])
- dnl AC_DEFUN([AC_PROG_NM], [])
- 
-+# _LT_CHECK_SHAREDLIB_FROM_LINKLIB
-+# --------------------------------
-+# how to determine the name of the shared library
-+# associated with a specific link library.
-+#  -- PORTME fill in with the dynamic library characteristics
-+m4_defun([_LT_CHECK_SHAREDLIB_FROM_LINKLIB],
-+[m4_require([_LT_DECL_EGREP])
-+m4_require([_LT_DECL_OBJDUMP])
-+m4_require([_LT_DECL_DLLTOOL])
-+AC_CACHE_CHECK([how to associate runtime and link libraries],
-+lt_cv_sharedlib_from_linklib_cmd,
-+[lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+])
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+_LT_DECL([], [sharedlib_from_linklib_cmd], [1],
-+    [Command to associate shared and link libraries])
-+])# _LT_CHECK_SHAREDLIB_FROM_LINKLIB
-+
-+
-+# _LT_PATH_MANIFEST_TOOL
-+# ----------------------
-+# locate the manifest tool
-+m4_defun([_LT_PATH_MANIFEST_TOOL],
-+[AC_CHECK_TOOL(MANIFEST_TOOL, mt, :)
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+AC_CACHE_CHECK([if $MANIFEST_TOOL is a manifest tool], [lt_cv_path_mainfest_tool],
-+  [lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&AS_MESSAGE_LOG_FD
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&AS_MESSAGE_LOG_FD
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*])
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+_LT_DECL([], [MANIFEST_TOOL], [1], [Manifest tool])dnl
-+])# _LT_PATH_MANIFEST_TOOL
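The reason for the extra "is it really a manifest tool" check above: Unix systems usually ship an unrelated mt(1) (the magnetic-tape control utility), so merely finding a program named mt is not enough. In essence the probe is:

    mt '-?' >conftest.out 2>conftest.err
    if grep 'Manifest Tool' conftest.out >/dev/null; then
      : # keep MANIFEST_TOOL
    else
      MANIFEST_TOOL=:   # disable manifest embedding
    fi
    rm -f conftest.out conftest.err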
-+
- 
- # LT_LIB_M
- # --------
-@@ -3403,8 +3630,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([[^ ]]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \(lib[[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \(lib[[^ ]]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -3440,6 +3667,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[[	 ]]\($symcode$symcode*\)[[	 ]][[	 ]]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -3473,6 +3701,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 can't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT@&t@_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT@&t@_DLSYM_CONST
-+#else
-+# define LT@&t@_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -3484,7 +3724,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT@&t@_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -3510,15 +3750,15 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)"
- 	  if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&AS_MESSAGE_LOG_FD
- 	fi
-@@ -3551,6 +3791,13 @@ else
-   AC_MSG_RESULT(ok)
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[[@]]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
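The response-file check above mirrors the archiver one: if $NM is MS dumpbin, or its --help output mentions an @FILE-style option (the doubled [[@]] in the macro is m4 quoting for a plain [@] bracket expression), libtool may pass long object lists through a file. Sketch:

    if nm --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
      nm_file_list_spec='@'     # later used as: $NM ... "@objects.lst"
    fi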
- _LT_DECL([global_symbol_pipe], [lt_cv_sys_global_symbol_pipe], [1],
-     [Take the output of nm and produce a listing of raw symbols and C names])
- _LT_DECL([global_symbol_to_cdecl], [lt_cv_sys_global_symbol_to_cdecl], [1],
-@@ -3561,6 +3808,8 @@ _LT_DECL([global_symbol_to_c_name_address],
- _LT_DECL([global_symbol_to_c_name_address_lib_prefix],
-     [lt_cv_sys_global_symbol_to_c_name_address_lib_prefix], [1],
-     [Transform the output of nm in a C name address pair when lib prefix is needed])
-+_LT_DECL([], [nm_file_list_spec], [1],
-+    [Specify filename containing input files for $NM])
- ]) # _LT_CMD_GLOBAL_SYMBOLS
- 
- 
-@@ -3572,7 +3821,6 @@ _LT_TAGVAR(lt_prog_compiler_wl, $1)=
- _LT_TAGVAR(lt_prog_compiler_pic, $1)=
- _LT_TAGVAR(lt_prog_compiler_static, $1)=
- 
--AC_MSG_CHECKING([for $compiler option to produce PIC])
- m4_if([$1], [CXX], [
-   # C++ specific cases for pic, static, wl, etc.
-   if test "$GXX" = yes; then
-@@ -3678,6 +3926,12 @@ m4_if([$1], [CXX], [
- 	  ;;
- 	esac
- 	;;
-+      mingw* | cygwin* | os2* | pw32* | cegcc*)
-+	# This hack is so that the source file can tell whether it is being
-+	# built for inclusion in a dll (and should export symbols for example).
-+	m4_if([$1], [GCJ], [],
-+	  [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
-+	;;
-       dgux*)
- 	case $cc_basename in
- 	  ec++*)
-@@ -3830,7 +4084,7 @@ m4_if([$1], [CXX], [
- 	;;
-       solaris*)
- 	case $cc_basename in
--	  CC*)
-+	  CC* | sunCC*)
- 	    # Sun C++ 4.2, 5.x and Centerline C++
- 	    _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
- 	    _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
-@@ -4053,6 +4307,12 @@ m4_if([$1], [CXX], [
- 	_LT_TAGVAR(lt_prog_compiler_pic, $1)='--shared'
- 	_LT_TAGVAR(lt_prog_compiler_static, $1)='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,'
-+	_LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
-+	_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -4115,7 +4375,7 @@ m4_if([$1], [CXX], [
-       _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
-       _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ';;
-       *)
- 	_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,';;
-@@ -4172,9 +4432,11 @@ case $host_os in
-     _LT_TAGVAR(lt_prog_compiler_pic, $1)="$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])"
-     ;;
- esac
--AC_MSG_RESULT([$_LT_TAGVAR(lt_prog_compiler_pic, $1)])
--_LT_TAGDECL([wl], [lt_prog_compiler_wl], [1],
--	[How to pass a linker flag through the compiler])
-+
-+AC_CACHE_CHECK([for $compiler option to produce PIC],
-+  [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)],
-+  [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_prog_compiler_pic, $1)])
-+_LT_TAGVAR(lt_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -4193,6 +4455,8 @@ fi
- _LT_TAGDECL([pic_flag], [lt_prog_compiler_pic], [1],
- 	[Additional compiler flags for building library objects])
- 
-+_LT_TAGDECL([wl], [lt_prog_compiler_wl], [1],
-+	[How to pass a linker flag through the compiler])
- #
- # Check to make sure the static flag actually works.
- #
-@@ -4213,6 +4477,7 @@ _LT_TAGDECL([link_static_flag], [lt_prog_compiler_static], [1],
- m4_defun([_LT_LINKER_SHLIBS],
- [AC_REQUIRE([LT_PATH_LD])dnl
- AC_REQUIRE([LT_PATH_NM])dnl
-+m4_require([_LT_PATH_MANIFEST_TOOL])dnl
- m4_require([_LT_FILEUTILS_DEFAULTS])dnl
- m4_require([_LT_DECL_EGREP])dnl
- m4_require([_LT_DECL_SED])dnl
-@@ -4221,6 +4486,7 @@ m4_require([_LT_TAG_COMPILER])dnl
- AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
- m4_if([$1], [CXX], [
-   _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
-+  _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
-   case $host_os in
-   aix[[4-9]]*)
-     # If we're using GNU nm, then we don't want the "-C" option.
-@@ -4235,15 +4501,20 @@ m4_if([$1], [CXX], [
-     ;;
-   pw32*)
-     _LT_TAGVAR(export_symbols_cmds, $1)="$ltdll_cmds"
--  ;;
-+    ;;
-   cygwin* | mingw* | cegcc*)
--    _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;/^.*[[ ]]__nm__/s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
--  ;;
-+    case $cc_basename in
-+    cl*) ;;
-+    *)
-+      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
-+      ;;
-+    esac
-+    ;;
-   *)
-     _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
--  ;;
-+    ;;
-   esac
--  _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
- ], [
-   runpath_var=
-   _LT_TAGVAR(allow_undefined_flag, $1)=
-@@ -4411,7 +4682,8 @@ _LT_EOF
-       _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
-       _LT_TAGVAR(always_export_symbols, $1)=no
-       _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
--      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols'
-+      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -4510,12 +4782,12 @@ _LT_EOF
- 	  _LT_TAGVAR(whole_archive_flag_spec, $1)='--whole-archive$convenience --no-whole-archive'
- 	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
- 	  _LT_TAGVAR(hardcode_libdir_flag_spec_ld, $1)='-rpath $libdir'
--	  _LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  _LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -4529,8 +4801,8 @@ _LT_EOF
- 	_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -4548,8 +4820,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	_LT_TAGVAR(ld_shlibs, $1)=no
-       fi
-@@ -4595,8 +4867,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	_LT_TAGVAR(ld_shlibs, $1)=no
-       fi
-@@ -4726,7 +4998,7 @@ _LT_EOF
- 	_LT_TAGVAR(allow_undefined_flag, $1)='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        _LT_SYS_MODULE_PATH_AIX
-+        _LT_SYS_MODULE_PATH_AIX([$1])
-         _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
-         _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-       else
-@@ -4737,7 +5009,7 @@ _LT_EOF
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 _LT_SYS_MODULE_PATH_AIX
-+	 _LT_SYS_MODULE_PATH_AIX([$1])
- 	 _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
- 	  # -berok will link without error, but may produce a broken library.
-@@ -4781,20 +5053,63 @@ _LT_EOF
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
--      _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      _LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
--      # FIXME: Should let the user specify the lib program.
--      _LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      _LT_TAGVAR(fix_srcfile_path, $1)='`cygpath -w "$srcfile"`'
--      _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
-+	_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
-+	_LT_TAGVAR(always_export_symbols, $1)=yes
-+	_LT_TAGVAR(file_list_spec, $1)='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	_LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
-+	_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
-+	_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1,DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	_LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
-+	_LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
-+	_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	_LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
-+	# FIXME: Should let the user specify the lib program.
-+	_LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -4828,7 +5143,7 @@ _LT_EOF
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      _LT_TAGVAR(archive_cmds, $1)='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
-       _LT_TAGVAR(hardcode_direct, $1)=yes
-       _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
-@@ -4836,7 +5151,7 @@ _LT_EOF
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -4852,7 +5167,7 @@ _LT_EOF
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	_LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -4876,10 +5191,10 @@ _LT_EOF
- 	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -4926,16 +5241,31 @@ _LT_EOF
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        AC_LINK_IFELSE(int foo(void) {},
--          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--        )
--        LDFLAGS="$save_LDFLAGS"
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	AC_CACHE_CHECK([whether the $host_os linker accepts -exported_symbol],
-+	  [lt_cv_irix_exported_symbol],
-+	  [save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   AC_LINK_IFELSE(
-+	     [AC_LANG_SOURCE(
-+	        [AC_LANG_CASE([C], [[int foo (void) { return 0; }]],
-+			      [C++], [[int foo (void) { return 0; }]],
-+			      [Fortran 77], [[
-+      subroutine foo
-+      end]],
-+			      [Fortran], [[
-+      subroutine foo
-+      end]])])],
-+	      [lt_cv_irix_exported_symbol=yes],
-+	      [lt_cv_irix_exported_symbol=no])
-+           LDFLAGS="$save_LDFLAGS"])
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -5020,7 +5350,7 @@ _LT_EOF
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	_LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
-       else
- 	_LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
-@@ -5039,9 +5369,9 @@ _LT_EOF
-       _LT_TAGVAR(no_undefined_flag, $1)=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	_LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -5313,8 +5643,6 @@ _LT_TAGDECL([], [inherit_rpath], [0],
-     to runtime path list])
- _LT_TAGDECL([], [link_all_deplibs], [0],
-     [Whether libtool must link a program against all its dependency libraries])
--_LT_TAGDECL([], [fix_srcfile_path], [1],
--    [Fix the shell variable $srcfile for the compiler])
- _LT_TAGDECL([], [always_export_symbols], [0],
-     [Set to "yes" if exported symbols are required])
- _LT_TAGDECL([], [export_symbols_cmds], [2],
-@@ -5325,6 +5653,8 @@ _LT_TAGDECL([], [include_expsyms], [1],
-     [Symbols that must always be exported])
- _LT_TAGDECL([], [prelink_cmds], [2],
-     [Commands necessary for linking programs (against libraries) with templates])
-+_LT_TAGDECL([], [postlink_cmds], [2],
-+    [Commands necessary for finishing linking programs])
- _LT_TAGDECL([], [file_list_spec], [1],
-     [Specify filename containing input files])
- dnl FIXME: Not yet implemented
-@@ -5426,6 +5756,7 @@ CC="$lt_save_CC"
- m4_defun([_LT_LANG_CXX_CONFIG],
- [m4_require([_LT_FILEUTILS_DEFAULTS])dnl
- m4_require([_LT_DECL_EGREP])dnl
-+m4_require([_LT_PATH_MANIFEST_TOOL])dnl
- if test -n "$CXX" && ( test "X$CXX" != "Xno" &&
-     ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) ||
-     (test "X$CXX" != "Xg++"))) ; then
-@@ -5487,6 +5818,7 @@ if test "$_lt_caught_CXX_error" != yes; then
- 
-   # Allow CC to be a program name with arguments.
-   lt_save_CC=$CC
-+  lt_save_CFLAGS=$CFLAGS
-   lt_save_LD=$LD
-   lt_save_GCC=$GCC
-   GCC=$GXX
-@@ -5504,6 +5836,7 @@ if test "$_lt_caught_CXX_error" != yes; then
-   fi
-   test -z "${LDCXX+set}" || LD=$LDCXX
-   CC=${CXX-"c++"}
-+  CFLAGS=$CXXFLAGS
-   compiler=$CC
-   _LT_TAGVAR(compiler, $1)=$CC
-   _LT_CC_BASENAME([$compiler])
-@@ -5667,7 +6000,7 @@ if test "$_lt_caught_CXX_error" != yes; then
-           _LT_TAGVAR(allow_undefined_flag, $1)='-berok'
-           # Determine the default libpath from the value encoded in an empty
-           # executable.
--          _LT_SYS_MODULE_PATH_AIX
-+          _LT_SYS_MODULE_PATH_AIX([$1])
-           _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
- 
-           _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -5679,7 +6012,7 @@ if test "$_lt_caught_CXX_error" != yes; then
-           else
- 	    # Determine the default libpath from the value encoded in an
- 	    # empty executable.
--	    _LT_SYS_MODULE_PATH_AIX
-+	    _LT_SYS_MODULE_PATH_AIX([$1])
- 	    _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	    # Warning - without using the other run time loading flags,
- 	    # -berok will link without error, but may produce a broken library.
-@@ -5721,29 +6054,75 @@ if test "$_lt_caught_CXX_error" != yes; then
-         ;;
- 
-       cygwin* | mingw* | pw32* | cegcc*)
--        # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
--        # as there is no search path for DLLs.
--        _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
--        _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols'
--        _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
--        _LT_TAGVAR(always_export_symbols, $1)=no
--        _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
--
--        if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
--          _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
--          # If the export-symbols file already is a .def file (1st line
--          # is EXPORTS), use it as is; otherwise, prepend...
--          _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
--	    cp $export_symbols $output_objdir/$soname.def;
--          else
--	    echo EXPORTS > $output_objdir/$soname.def;
--	    cat $export_symbols >> $output_objdir/$soname.def;
--          fi~
--          $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
--        else
--          _LT_TAGVAR(ld_shlibs, $1)=no
--        fi
--        ;;
-+	case $GXX,$cc_basename in
-+	,cl* | no,cl*)
-+	  # Native MSVC
-+	  # hardcode_libdir_flag_spec is actually meaningless, as there is
-+	  # no search path for DLLs.
-+	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
-+	  _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
-+	  _LT_TAGVAR(always_export_symbols, $1)=yes
-+	  _LT_TAGVAR(file_list_spec, $1)='@'
-+	  # Tell ltmain to make .lib files, not .a files.
-+	  libext=lib
-+	  # Tell ltmain to make .dll files, not .so files.
-+	  shrext_cmds=".dll"
-+	  # FIXME: Setting linknames here is a bad hack.
-+	  _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	  _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	      $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	    else
-+	      $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	    fi~
-+	    $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	    linknames='
-+	  # The linker will not automatically build a static lib if we build a DLL.
-+	  # _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
-+	  _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
-+	  # Don't use ranlib
-+	  _LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
-+	  _LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
-+	    lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	    case $lt_outputfile in
-+	      *.exe|*.EXE) ;;
-+	      *)
-+		lt_outputfile="$lt_outputfile.exe"
-+		lt_tool_outputfile="$lt_tool_outputfile.exe"
-+		;;
-+	    esac~
-+	    func_to_tool_file "$lt_outputfile"~
-+	    if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	      $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	      $RM "$lt_outputfile.manifest";
-+	    fi'
-+	  ;;
-+	*)
-+	  # g++
-+	  # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
-+	  # as there is no search path for DLLs.
-+	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
-+	  _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols'
-+	  _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
-+	  _LT_TAGVAR(always_export_symbols, $1)=no
-+	  _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
-+
-+	  if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-+	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-+	    # If the export-symbols file already is a .def file (1st line
-+	    # is EXPORTS), use it as is; otherwise, prepend...
-+	    _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	      cp $export_symbols $output_objdir/$soname.def;
-+	    else
-+	      echo EXPORTS > $output_objdir/$soname.def;
-+	      cat $export_symbols >> $output_objdir/$soname.def;
-+	    fi~
-+	    $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-+	  else
-+	    _LT_TAGVAR(ld_shlibs, $1)=no
-+	  fi
-+	  ;;
-+	esac
-+	;;
-       darwin* | rhapsody*)
-         _LT_DARWIN_LINKER_FEATURES($1)
- 	;;
-@@ -5818,7 +6197,7 @@ if test "$_lt_caught_CXX_error" != yes; then
-             ;;
-           *)
-             if test "$GXX" = yes; then
--              _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+              _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-             else
-               # FIXME: insert proper C++ library support
-               _LT_TAGVAR(ld_shlibs, $1)=no
-@@ -5889,10 +6268,10 @@ if test "$_lt_caught_CXX_error" != yes; then
- 	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
- 	            ;;
- 	          ia64*)
--	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
-+	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
- 	            ;;
- 	          *)
--	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
-+	            _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
- 	            ;;
- 	        esac
- 	      fi
-@@ -5933,9 +6312,9 @@ if test "$_lt_caught_CXX_error" != yes; then
-           *)
- 	    if test "$GXX" = yes; then
- 	      if test "$with_gnu_ld" = no; then
--	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	      else
--	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
-+	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib'
- 	      fi
- 	    fi
- 	    _LT_TAGVAR(link_all_deplibs, $1)=yes
-@@ -6005,20 +6384,20 @@ if test "$_lt_caught_CXX_error" != yes; then
- 	      _LT_TAGVAR(prelink_cmds, $1)='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
--		compile_command="$compile_command `find $tpldir -name \*.o | $NL2SP`"'
-+		compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
- 	      _LT_TAGVAR(old_archive_cmds, $1)='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
--		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | $NL2SP`~
-+		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
- 		$RANLIB $oldlib'
- 	      _LT_TAGVAR(archive_cmds, $1)='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
--		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
-+		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
- 	      _LT_TAGVAR(archive_expsym_cmds, $1)='tpldir=Template.dir~
- 		rm -rf $tpldir~
- 		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
--		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
-+		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
- 	      ;;
- 	    *) # Version 6 and above use weak symbols
- 	      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
-@@ -6213,7 +6592,7 @@ if test "$_lt_caught_CXX_error" != yes; then
- 	          _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 		  ;;
- 	        *)
--	          _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	          _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 		  ;;
- 	      esac
- 
-@@ -6259,7 +6638,7 @@ if test "$_lt_caught_CXX_error" != yes; then
- 
-       solaris*)
-         case $cc_basename in
--          CC*)
-+          CC* | sunCC*)
- 	    # Sun C++ 4.2, 5.x and Centerline C++
-             _LT_TAGVAR(archive_cmds_need_lc,$1)=yes
- 	    _LT_TAGVAR(no_undefined_flag, $1)=' -zdefs'
-@@ -6300,9 +6679,9 @@ if test "$_lt_caught_CXX_error" != yes; then
- 	    if test "$GXX" = yes && test "$with_gnu_ld" = no; then
- 	      _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-z ${wl}defs'
- 	      if $CC --version | $GREP -v '^2\.7' > /dev/null; then
--	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
-+	        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
- 	        _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--		  $CC -shared -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
-+		  $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
- 
- 	        # Commands to make compiler produce verbose output that lists
- 	        # what "hidden" libraries, object files and flags are used when
-@@ -6431,6 +6810,7 @@ if test "$_lt_caught_CXX_error" != yes; then
-   fi # test -n "$compiler"
- 
-   CC=$lt_save_CC
-+  CFLAGS=$lt_save_CFLAGS
-   LDCXX=$LD
-   LD=$lt_save_LD
-   GCC=$lt_save_GCC
-@@ -6445,6 +6825,29 @@ AC_LANG_POP
- ])# _LT_LANG_CXX_CONFIG
- 
- 
-+# _LT_FUNC_STRIPNAME_CNF
-+# ----------------------
-+# func_stripname_cnf prefix suffix name
-+# strip PREFIX and SUFFIX off of NAME.
-+# PREFIX and SUFFIX must not contain globbing or regex special
-+# characters, hashes, percent signs, but SUFFIX may contain a leading
-+# dot (in which case that matches only a dot).
-+#
-+# This function is identical to the (non-XSI) version of func_stripname,
-+# except this one can be used by m4 code that may be executed by configure,
-+# rather than the libtool script.
-+m4_defun([_LT_FUNC_STRIPNAME_CNF],[dnl
-+AC_REQUIRE([_LT_DECL_SED])
-+AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])
-+func_stripname_cnf ()
-+{
-+  case ${2} in
-+  .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
-+  *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
-+  esac
-+} # func_stripname_cnf
-+])# _LT_FUNC_STRIPNAME_CNF
-+
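Usage sketch for the helper just defined (illustrative values):

    func_stripname_cnf '-L' ''     '-L/usr/lib'   # func_stripname_result=/usr/lib
    func_stripname_cnf 'lib' '.la' 'libfoo.la'    # func_stripname_result=foo

Because the suffix may begin with a dot, the .* branch backslash-escapes it so that '.la' matches only a literal dot.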
- # _LT_SYS_HIDDEN_LIBDEPS([TAGNAME])
- # ---------------------------------
- # Figure out "hidden" library dependencies from verbose
-@@ -6453,6 +6856,7 @@ AC_LANG_POP
- # objects, libraries and library flags.
- m4_defun([_LT_SYS_HIDDEN_LIBDEPS],
- [m4_require([_LT_FILEUTILS_DEFAULTS])dnl
-+AC_REQUIRE([_LT_FUNC_STRIPNAME_CNF])dnl
- # Dependencies to place before and after the object being linked:
- _LT_TAGVAR(predep_objects, $1)=
- _LT_TAGVAR(postdep_objects, $1)=
-@@ -6503,6 +6907,13 @@ public class foo {
- };
- _LT_EOF
- ])
-+
-+_lt_libdeps_save_CFLAGS=$CFLAGS
-+case "$CC $CFLAGS " in #(
-+*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
-+*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
-+esac
-+
- dnl Parse the compiler output and extract the necessary
- dnl objects, libraries and library flags.
- if AC_TRY_EVAL(ac_compile); then
-@@ -6514,7 +6925,7 @@ if AC_TRY_EVAL(ac_compile); then
-   pre_test_object_deps_done=no
- 
-   for p in `eval "$output_verbose_link_cmd"`; do
--    case $p in
-+    case ${prev}${p} in
- 
-     -L* | -R* | -l*)
-        # Some compilers place space between "-{L,R}" and the path.
-@@ -6523,13 +6934,22 @@ if AC_TRY_EVAL(ac_compile); then
-           test $p = "-R"; then
- 	 prev=$p
- 	 continue
--       else
--	 prev=
-        fi
- 
-+       # Expand the sysroot to ease extracting the directories later.
-+       if test -z "$prev"; then
-+         case $p in
-+         -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
-+         -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
-+         -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
-+         esac
-+       fi
-+       case $p in
-+       =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
-+       esac
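The '=*' case just handled is what resolves sysroot-relative search paths in the compiler's verbose output. With a hypothetical sysroot the effect is:

    lt_sysroot=/opt/sysroot
    p='=/usr/lib/gcc-cross/arm-linux-gnueabi/6'
    func_stripname_cnf '=' '' "$p"
    p=$lt_sysroot$func_stripname_result
    # p is now /opt/sysroot/usr/lib/gcc-cross/arm-linux-gnueabi/6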
-        if test "$pre_test_object_deps_done" = no; then
--	 case $p in
--	 -L* | -R*)
-+	 case ${prev} in
-+	 -L | -R)
- 	   # Internal compiler library paths should come after those
- 	   # provided the user.  The postdeps already come after the
- 	   # user supplied libs so there is no need to process them.
-@@ -6549,8 +6969,10 @@ if AC_TRY_EVAL(ac_compile); then
- 	   _LT_TAGVAR(postdeps, $1)="${_LT_TAGVAR(postdeps, $1)} ${prev}${p}"
- 	 fi
-        fi
-+       prev=
-        ;;
- 
-+    *.lto.$objext) ;; # Ignore GCC LTO objects
-     *.$objext)
-        # This assumes that the test object file only shows up
-        # once in the compiler output.
-@@ -6586,6 +7008,7 @@ else
- fi
- 
- $RM -f confest.$objext
-+CFLAGS=$_lt_libdeps_save_CFLAGS
- 
- # PORTME: override above test on systems where it is broken
- m4_if([$1], [CXX],
-@@ -6622,7 +7045,7 @@ linux*)
- 
- solaris*)
-   case $cc_basename in
--  CC*)
-+  CC* | sunCC*)
-     # The more standards-conforming stlport4 library is
-     # incompatible with the Cstd library. Avoid specifying
-     # it if it's in CXXFLAGS. Ignore libCrun as
-@@ -6735,7 +7158,9 @@ if test "$_lt_disable_F77" != yes; then
-   # Allow CC to be a program name with arguments.
-   lt_save_CC="$CC"
-   lt_save_GCC=$GCC
-+  lt_save_CFLAGS=$CFLAGS
-   CC=${F77-"f77"}
-+  CFLAGS=$FFLAGS
-   compiler=$CC
-   _LT_TAGVAR(compiler, $1)=$CC
-   _LT_CC_BASENAME([$compiler])
-@@ -6789,6 +7214,7 @@ if test "$_lt_disable_F77" != yes; then
- 
-   GCC=$lt_save_GCC
-   CC="$lt_save_CC"
-+  CFLAGS="$lt_save_CFLAGS"
- fi # test "$_lt_disable_F77" != yes
- 
- AC_LANG_POP
-@@ -6865,7 +7291,9 @@ if test "$_lt_disable_FC" != yes; then
-   # Allow CC to be a program name with arguments.
-   lt_save_CC="$CC"
-   lt_save_GCC=$GCC
-+  lt_save_CFLAGS=$CFLAGS
-   CC=${FC-"f95"}
-+  CFLAGS=$FCFLAGS
-   compiler=$CC
-   GCC=$ac_cv_fc_compiler_gnu
- 
-@@ -6921,7 +7349,8 @@ if test "$_lt_disable_FC" != yes; then
-   fi # test -n "$compiler"
- 
-   GCC=$lt_save_GCC
--  CC="$lt_save_CC"
-+  CC=$lt_save_CC
-+  CFLAGS=$lt_save_CFLAGS
- fi # test "$_lt_disable_FC" != yes
- 
- AC_LANG_POP
-@@ -6958,10 +7387,12 @@ _LT_COMPILER_BOILERPLATE
- _LT_LINKER_BOILERPLATE
- 
- # Allow CC to be a program name with arguments.
--lt_save_CC="$CC"
-+lt_save_CC=$CC
-+lt_save_CFLAGS=$CFLAGS
- lt_save_GCC=$GCC
- GCC=yes
- CC=${GCJ-"gcj"}
-+CFLAGS=$GCJFLAGS
- compiler=$CC
- _LT_TAGVAR(compiler, $1)=$CC
- _LT_TAGVAR(LD, $1)="$LD"
-@@ -6992,7 +7423,8 @@ fi
- AC_LANG_RESTORE
- 
- GCC=$lt_save_GCC
--CC="$lt_save_CC"
-+CC=$lt_save_CC
-+CFLAGS=$lt_save_CFLAGS
- ])# _LT_LANG_GCJ_CONFIG
- 
- 
-@@ -7027,9 +7459,11 @@ _LT_LINKER_BOILERPLATE
- 
- # Allow CC to be a program name with arguments.
- lt_save_CC="$CC"
-+lt_save_CFLAGS=$CFLAGS
- lt_save_GCC=$GCC
- GCC=
- CC=${RC-"windres"}
-+CFLAGS=
- compiler=$CC
- _LT_TAGVAR(compiler, $1)=$CC
- _LT_CC_BASENAME([$compiler])
-@@ -7042,7 +7476,8 @@ fi
- 
- GCC=$lt_save_GCC
- AC_LANG_RESTORE
--CC="$lt_save_CC"
-+CC=$lt_save_CC
-+CFLAGS=$lt_save_CFLAGS
- ])# _LT_LANG_RC_CONFIG
- 
- 
-@@ -7101,6 +7536,15 @@ _LT_DECL([], [OBJDUMP], [1], [An object symbol dumper])
- AC_SUBST([OBJDUMP])
- ])
- 
-+# _LT_DECL_DLLTOOL
-+# ----------------
-+# Ensure DLLTOOL variable is set.
-+m4_defun([_LT_DECL_DLLTOOL],
-+[AC_CHECK_TOOL(DLLTOOL, dlltool, false)
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+_LT_DECL([], [DLLTOOL], [1], [DLL creation program])
-+AC_SUBST([DLLTOOL])
-+])
- 
- # _LT_DECL_SED
- # ------------
-@@ -7194,8 +7638,8 @@ m4_defun([_LT_CHECK_SHELL_FEATURES],
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -7234,206 +7678,162 @@ _LT_DECL([NL2SP], [lt_NL2SP], [1], [turn newlines into spaces])dnl
- ])# _LT_CHECK_SHELL_FEATURES
- 
- 
--# _LT_PROG_XSI_SHELLFNS
--# ---------------------
--# Bourne and XSI compatible variants of some useful shell functions.
--m4_defun([_LT_PROG_XSI_SHELLFNS],
--[case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $[*] ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
-+# _LT_PROG_FUNCTION_REPLACE (FUNCNAME, REPLACEMENT-BODY)
-+# ------------------------------------------------------
-+# In `$cfgfile', look for function FUNCNAME delimited by `^FUNCNAME ()$' and
-+# '^} FUNCNAME ', and replace its body with REPLACEMENT-BODY.
-+m4_defun([_LT_PROG_FUNCTION_REPLACE],
-+[dnl {
-+sed -e '/^$1 ()$/,/^} # $1 /c\
-+$1 ()\
-+{\
-+m4_bpatsubsts([$2], [$], [\\], [^\([	 ]\)], [\\\1])
-+} # Extended-shell $1 implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+])
- 
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
- 
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
-+# _LT_PROG_REPLACE_SHELLFNS
-+# -------------------------
-+# Replace existing portable implementations of several shell functions with
-+# equivalent extended shell implementations where those features are available..
-+m4_defun([_LT_PROG_REPLACE_SHELLFNS],
-+[if test x"$xsi_shell" = xyes; then
-+  _LT_PROG_FUNCTION_REPLACE([func_dirname], [dnl
-+    case ${1} in
-+      */*) func_dirname_result="${1%/*}${2}" ;;
-+      *  ) func_dirname_result="${3}" ;;
-+    esac])
-+
-+  _LT_PROG_FUNCTION_REPLACE([func_basename], [dnl
-+    func_basename_result="${1##*/}"])
-+
-+  _LT_PROG_FUNCTION_REPLACE([func_dirname_and_basename], [dnl
-+    case ${1} in
-+      */*) func_dirname_result="${1%/*}${2}" ;;
-+      *  ) func_dirname_result="${3}" ;;
-+    esac
-+    func_basename_result="${1##*/}"])
- 
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
-+  _LT_PROG_FUNCTION_REPLACE([func_stripname], [dnl
-+    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
-+    # positional parameters, so assign one to ordinary parameter first.
-+    func_stripname_result=${3}
-+    func_stripname_result=${func_stripname_result#"${1}"}
-+    func_stripname_result=${func_stripname_result%"${2}"}])
- 
--dnl func_dirname_and_basename
--dnl A portable version of this function is already defined in general.m4sh
--dnl so there is no need for it here.
-+  _LT_PROG_FUNCTION_REPLACE([func_split_long_opt], [dnl
-+    func_split_long_opt_name=${1%%=*}
-+    func_split_long_opt_arg=${1#*=}])
- 
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
-+  _LT_PROG_FUNCTION_REPLACE([func_split_short_opt], [dnl
-+    func_split_short_opt_arg=${1#??}
-+    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}])
- 
--# sed scripts:
--my_sed_long_opt='1s/^\(-[[^=]]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[[^=]]*=//'
-+  _LT_PROG_FUNCTION_REPLACE([func_lo2o], [dnl
-+    case ${1} in
-+      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
-+      *)    func_lo2o_result=${1} ;;
-+    esac])
- 
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
-+  _LT_PROG_FUNCTION_REPLACE([func_xform], [    func_xform_result=${1%.*}.lo])
- 
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
-+  _LT_PROG_FUNCTION_REPLACE([func_arith], [    func_arith_result=$(( $[*] ))])
- 
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[[^.]]*$/.lo/'`
--}
-+  _LT_PROG_FUNCTION_REPLACE([func_len], [    func_len_result=${#1}])
-+fi
- 
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$[@]"`
--}
-+if test x"$lt_shell_append" = xyes; then
-+  _LT_PROG_FUNCTION_REPLACE([func_append], [    eval "${1}+=\\${2}"])
- 
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$[1]" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
-+  _LT_PROG_FUNCTION_REPLACE([func_append_quoted], [dnl
-+    func_quote_for_eval "${2}"
-+dnl m4 expansion turns \\\\ into \\, and then the shell eval turns that into \
-+    eval "${1}+=\\\\ \\$func_quote_for_eval_result"])
- 
--_LT_EOF
--esac
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
- 
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  AC_MSG_WARN([Unable to substitute extended shell functions in $ofile])
-+fi
-+])
- 
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$[1]+=\$[2]"
--}
--_LT_EOF
-+# _LT_PATH_CONVERSION_FUNCTIONS
-+# -----------------------------
-+# Determine which file name conversion functions should be used by
-+# func_to_host_file (and, implicitly, by func_to_host_path).  These are needed
-+# for certain cross-compile configurations and native mingw.
-+m4_defun([_LT_PATH_CONVERSION_FUNCTIONS],
-+[AC_REQUIRE([AC_CANONICAL_HOST])dnl
-+AC_REQUIRE([AC_CANONICAL_BUILD])dnl
-+AC_MSG_CHECKING([how to convert $build file names to $host format])
-+AC_CACHE_VAL(lt_cv_to_host_file_cmd,
-+[case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-     ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$[1]=\$$[1]\$[2]"
--}
--
--_LT_EOF
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-     ;;
--  esac
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+])
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+AC_MSG_RESULT([$lt_cv_to_host_file_cmd])
-+_LT_DECL([to_host_file_cmd], [lt_cv_to_host_file_cmd],
-+         [0], [convert $build file names to $host format])dnl
-+
-+AC_MSG_CHECKING([how to convert $build file names to toolchain format])
-+AC_CACHE_VAL(lt_cv_to_tool_file_cmd,
-+[#assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
- ])
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+AC_MSG_RESULT([$lt_cv_to_tool_file_cmd])
-+_LT_DECL([to_tool_file_cmd], [lt_cv_to_tool_file_cmd],
-+         [0], [convert $build files to toolchain format])dnl
-+])# _LT_PATH_CONVERSION_FUNCTIONS
-diff --git a/ltmain.sh b/ltmain.sh
-index 9503ec85d7..70e856e065 100644
---- a/ltmain.sh
-+++ b/ltmain.sh
-@@ -1,10 +1,9 @@
--# Generated from ltmain.m4sh.
- 
--# libtool (GNU libtool 1.3134 2009-11-29) 2.2.7a
-+# libtool (GNU libtool) 2.4
- # Written by Gordon Matzigkeit <gord@gnu.ai.mit.edu>, 1996
- 
- # Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, 2006,
--# 2007, 2008, 2009 Free Software Foundation, Inc.
-+# 2007, 2008, 2009, 2010 Free Software Foundation, Inc.
- # This is free software; see the source for copying conditions.  There is NO
- # warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
- 
-@@ -38,7 +37,6 @@
- #   -n, --dry-run            display commands without modifying any files
- #       --features           display basic configuration information and exit
- #       --mode=MODE          use operation mode MODE
--#       --no-finish          let install mode avoid finish commands
- #       --preserve-dup-deps  don't remove duplicate dependency libraries
- #       --quiet, --silent    don't print informational messages
- #       --no-quiet, --no-silent
-@@ -71,17 +69,19 @@
- #         compiler:		$LTCC
- #         compiler flags:		$LTCFLAGS
- #         linker:		$LD (gnu? $with_gnu_ld)
--#         $progname:	(GNU libtool 1.3134 2009-11-29) 2.2.7a
-+#         $progname:	(GNU libtool) 2.4
- #         automake:	$automake_version
- #         autoconf:	$autoconf_version
- #
- # Report bugs to <bug-libtool@gnu.org>.
-+# GNU libtool home page: <http://www.gnu.org/software/libtool/>.
-+# General help using GNU software: <http://www.gnu.org/gethelp/>.
- 
- PROGRAM=libtool
- PACKAGE=libtool
--VERSION=2.2.7a
--TIMESTAMP=" 1.3134 2009-11-29"
--package_revision=1.3134
-+VERSION=2.4
-+TIMESTAMP=""
-+package_revision=1.3293
- 
- # Be Bourne compatible
- if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then
-@@ -106,9 +106,6 @@ _LTECHO_EOF'
- }
- 
- # NLS nuisances: We save the old values to restore during execute mode.
--# Only set LANG and LC_ALL to C if already set.
--# These must not be set unconditionally because not all systems understand
--# e.g. LANG=C (notably SCO).
- lt_user_locale=
- lt_safe_locale=
- for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES
-@@ -121,15 +118,13 @@ do
- 	  lt_safe_locale=\"$lt_var=C; \$lt_safe_locale\"
- 	fi"
- done
-+LC_ALL=C
-+LANGUAGE=C
-+export LANGUAGE LC_ALL
- 
- $lt_unset CDPATH
- 
- 
--
--
--
--
--
- # Work around backward compatibility issue on IRIX 6.5. On IRIX 6.4+, sh
- # is ksh but when the shell is invoked as "sh" and the current value of
- # the _XPG environment variable is not equal to 1 (one), the special
-@@ -140,7 +135,7 @@ progpath="$0"
- 
- 
- : ${CP="cp -f"}
--: ${ECHO=$as_echo}
-+test "${ECHO+set}" = set || ECHO=${as_echo-'printf %s\n'}
- : ${EGREP="/bin/grep -E"}
- : ${FGREP="/bin/grep -F"}
- : ${GREP="/bin/grep"}
-@@ -149,7 +144,7 @@ progpath="$0"
- : ${MKDIR="mkdir"}
- : ${MV="mv -f"}
- : ${RM="rm -f"}
--: ${SED="/mount/endor/wildenhu/local-x86_64/bin/sed"}
-+: ${SED="/bin/sed"}
- : ${SHELL="${CONFIG_SHELL-/bin/sh}"}
- : ${Xsed="$SED -e 1s/^X//"}
- 
-@@ -169,6 +164,27 @@ IFS=" 	$lt_nl"
- dirname="s,/[^/]*$,,"
- basename="s,^.*/,,"
- 
-+# func_dirname file append nondir_replacement
-+# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
-+# otherwise set result to NONDIR_REPLACEMENT.
-+func_dirname ()
-+{
-+    func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
-+    if test "X$func_dirname_result" = "X${1}"; then
-+      func_dirname_result="${3}"
-+    else
-+      func_dirname_result="$func_dirname_result${2}"
-+    fi
-+} # func_dirname may be replaced by extended shell implementation
-+
-+
-+# func_basename file
-+func_basename ()
-+{
-+    func_basename_result=`$ECHO "${1}" | $SED "$basename"`
-+} # func_basename may be replaced by extended shell implementation
-+
-+
- # func_dirname_and_basename file append nondir_replacement
- # perform func_basename and func_dirname in a single function
- # call:
-@@ -183,17 +199,31 @@ basename="s,^.*/,,"
- # those functions but instead duplicate the functionality here.
- func_dirname_and_basename ()
- {
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED -e "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--  func_basename_result=`$ECHO "${1}" | $SED -e "$basename"`
--}
-+    # Extract subdirectory from the argument.
-+    func_dirname_result=`$ECHO "${1}" | $SED -e "$dirname"`
-+    if test "X$func_dirname_result" = "X${1}"; then
-+      func_dirname_result="${3}"
-+    else
-+      func_dirname_result="$func_dirname_result${2}"
-+    fi
-+    func_basename_result=`$ECHO "${1}" | $SED -e "$basename"`
-+} # func_dirname_and_basename may be replaced by extended shell implementation
-+
-+
-+# func_stripname prefix suffix name
-+# strip PREFIX and SUFFIX off of NAME.
-+# PREFIX and SUFFIX must not contain globbing or regex special
-+# characters, hashes, percent signs, but SUFFIX may contain a leading
-+# dot (in which case that matches only a dot).
-+# func_strip_suffix prefix name
-+func_stripname ()
-+{
-+    case ${2} in
-+      .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
-+      *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
-+    esac
-+} # func_stripname may be replaced by extended shell implementation
- 
--# Generated shell functions inserted here.
- 
- # These SED scripts presuppose an absolute path with a trailing slash.
- pathcar='s,^/\([^/]*\).*$,\1,'
-@@ -376,6 +406,15 @@ sed_quote_subst='s/\([`"$\\]\)/\\\1/g'
- # Same as above, but do not quote variable references.
- double_quote_subst='s/\(["`\\]\)/\\\1/g'
- 
-+# Sed substitution that turns a string into a regex matching for the
-+# string literally.
-+sed_make_literal_regex='s,[].[^$\\*\/],\\&,g'
-+
-+# Sed substitution that converts a w32 file name or path
-+# which contains forward slashes, into one that contains
-+# (escaped) backslashes.  A very naive implementation.
-+lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
-+
- # Re-`\' parameter expansions in output of double_quote_subst that were
- # `\'-ed in input to the same.  If an odd number of `\' preceded a '$'
- # in input to double_quote_subst, that '$' was protected from expansion.
-@@ -404,7 +443,7 @@ opt_warning=:
- # name if it has been set yet.
- func_echo ()
- {
--    $ECHO "$progname${mode+: }$mode: $*"
-+    $ECHO "$progname: ${opt_mode+$opt_mode: }$*"
- }
- 
- # func_verbose arg...
-@@ -430,14 +469,14 @@ func_echo_all ()
- # Echo program name prefixed message to standard error.
- func_error ()
- {
--    $ECHO "$progname${mode+: }$mode: "${1+"$@"} 1>&2
-+    $ECHO "$progname: ${opt_mode+$opt_mode: }"${1+"$@"} 1>&2
- }
- 
- # func_warning arg...
- # Echo program name prefixed warning message to standard error.
- func_warning ()
- {
--    $opt_warning && $ECHO "$progname${mode+: }$mode: warning: "${1+"$@"} 1>&2
-+    $opt_warning && $ECHO "$progname: ${opt_mode+$opt_mode: }warning: "${1+"$@"} 1>&2
- 
-     # bash bug again:
-     :
-@@ -656,19 +695,35 @@ func_show_eval_locale ()
-     fi
- }
- 
--
--
-+# func_tr_sh
-+# Turn $1 into a string suitable for a shell variable name.
-+# Result is stored in $func_tr_sh_result.  All characters
-+# not in the set a-zA-Z0-9_ are replaced with '_'. Further,
-+# if $1 begins with a digit, a '_' is prepended as well.
-+func_tr_sh ()
-+{
-+  case $1 in
-+  [0-9]* | *[!a-zA-Z0-9_]*)
-+    func_tr_sh_result=`$ECHO "$1" | $SED 's/^\([0-9]\)/_\1/; s/[^a-zA-Z0-9_]/_/g'`
-+    ;;
-+  * )
-+    func_tr_sh_result=$1
-+    ;;
-+  esac
-+}
- 
- 
- # func_version
- # Echo version message to standard output and exit.
- func_version ()
- {
-+    $opt_debug
-+
-     $SED -n '/(C)/!b go
- 	:more
- 	/\./!{
- 	  N
--	  s/\n# //
-+	  s/\n# / /
- 	  b more
- 	}
- 	:go
-@@ -685,7 +740,9 @@ func_version ()
- # Echo short help message to standard output and exit.
- func_usage ()
- {
--    $SED -n '/^# Usage:/,/^#  *-h/ {
-+    $opt_debug
-+
-+    $SED -n '/^# Usage:/,/^#  *.*--help/ {
-         s/^# //
- 	s/^# *$//
- 	s/\$progname/'$progname'/
-@@ -701,7 +758,10 @@ func_usage ()
- # unless 'noexit' is passed as argument.
- func_help ()
- {
-+    $opt_debug
-+
-     $SED -n '/^# Usage:/,/# Report bugs to/ {
-+	:print
-         s/^# //
- 	s/^# *$//
- 	s*\$progname*'$progname'*
-@@ -714,7 +774,11 @@ func_help ()
- 	s/\$automake_version/'"`(automake --version) 2>/dev/null |$SED 1q`"'/
- 	s/\$autoconf_version/'"`(autoconf --version) 2>/dev/null |$SED 1q`"'/
- 	p
--     }' < "$progpath"
-+	d
-+     }
-+     /^# .* home page:/b print
-+     /^# General help using/b print
-+     ' < "$progpath"
-     ret=$?
-     if test -z "$1"; then
-       exit $ret
-@@ -726,12 +790,39 @@ func_help ()
- # exit_cmd.
- func_missing_arg ()
- {
--    func_error "missing argument for $1"
-+    $opt_debug
-+
-+    func_error "missing argument for $1."
-     exit_cmd=exit
- }
- 
--exit_cmd=:
- 
-+# func_split_short_opt shortopt
-+# Set func_split_short_opt_name and func_split_short_opt_arg shell
-+# variables after splitting SHORTOPT after the 2nd character.
-+func_split_short_opt ()
-+{
-+    my_sed_short_opt='1s/^\(..\).*$/\1/;q'
-+    my_sed_short_rest='1s/^..\(.*\)$/\1/;q'
-+
-+    func_split_short_opt_name=`$ECHO "$1" | $SED "$my_sed_short_opt"`
-+    func_split_short_opt_arg=`$ECHO "$1" | $SED "$my_sed_short_rest"`
-+} # func_split_short_opt may be replaced by extended shell implementation
-+
-+
-+# func_split_long_opt longopt
-+# Set func_split_long_opt_name and func_split_long_opt_arg shell
-+# variables after splitting LONGOPT at the `=' sign.
-+func_split_long_opt ()
-+{
-+    my_sed_long_opt='1s/^\(--[^=]*\)=.*/\1/;q'
-+    my_sed_long_arg='1s/^--[^=]*=//'
-+
-+    func_split_long_opt_name=`$ECHO "$1" | $SED "$my_sed_long_opt"`
-+    func_split_long_opt_arg=`$ECHO "$1" | $SED "$my_sed_long_arg"`
-+} # func_split_long_opt may be replaced by extended shell implementation
-+
-+exit_cmd=:
- 
- 
- 
-@@ -741,26 +832,64 @@ magic="%%%MAGIC variable%%%"
- magic_exe="%%%MAGIC EXE variable%%%"
- 
- # Global variables.
--# $mode is unset
- nonopt=
--execute_dlfiles=
- preserve_args=
- lo2o="s/\\.lo\$/.${objext}/"
- o2lo="s/\\.${objext}\$/.lo/"
- extracted_archives=
- extracted_serial=0
- 
--opt_dry_run=false
--opt_finish=:
--opt_duplicate_deps=false
--opt_silent=false
--opt_debug=:
--
- # If this variable is set in any of the actions, the command in it
- # will be execed at the end.  This prevents here-documents from being
- # left over by shells.
- exec_cmd=
- 
-+# func_append var value
-+# Append VALUE to the end of shell variable VAR.
-+func_append ()
-+{
-+    eval "${1}=\$${1}\${2}"
-+} # func_append may be replaced by extended shell implementation
-+
-+# func_append_quoted var value
-+# Quote VALUE and append to the end of shell variable VAR, separated
-+# by a space.
-+func_append_quoted ()
-+{
-+    func_quote_for_eval "${2}"
-+    eval "${1}=\$${1}\\ \$func_quote_for_eval_result"
-+} # func_append_quoted may be replaced by extended shell implementation
-+
-+
-+# func_arith arithmetic-term...
-+func_arith ()
-+{
-+    func_arith_result=`expr "${@}"`
-+} # func_arith may be replaced by extended shell implementation
-+
-+
-+# func_len string
-+# STRING may not start with a hyphen.
-+func_len ()
-+{
-+    func_len_result=`expr "${1}" : ".*" 2>/dev/null || echo $max_cmd_len`
-+} # func_len may be replaced by extended shell implementation
-+
-+
-+# func_lo2o object
-+func_lo2o ()
-+{
-+    func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
-+} # func_lo2o may be replaced by extended shell implementation
-+
-+
-+# func_xform libobj-or-source
-+func_xform ()
-+{
-+    func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
-+} # func_xform may be replaced by extended shell implementation
-+
-+
- # func_fatal_configuration arg...
- # Echo program name prefixed message to standard error, followed by
- # a configuration failure hint, and exit.
-@@ -850,130 +979,204 @@ func_enable_tag ()
-   esac
- }
- 
--# Parse options once, thoroughly.  This comes as soon as possible in
--# the script to make things like `libtool --version' happen quickly.
-+# func_check_version_match
-+# Ensure that we are using m4 macros, and libtool script from the same
-+# release of libtool.
-+func_check_version_match ()
- {
-+  if test "$package_revision" != "$macro_revision"; then
-+    if test "$VERSION" != "$macro_version"; then
-+      if test -z "$macro_version"; then
-+        cat >&2 <<_LT_EOF
-+$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
-+$progname: definition of this LT_INIT comes from an older release.
-+$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
-+$progname: and run autoconf again.
-+_LT_EOF
-+      else
-+        cat >&2 <<_LT_EOF
-+$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
-+$progname: definition of this LT_INIT comes from $PACKAGE $macro_version.
-+$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
-+$progname: and run autoconf again.
-+_LT_EOF
-+      fi
-+    else
-+      cat >&2 <<_LT_EOF
-+$progname: Version mismatch error.  This is $PACKAGE $VERSION, revision $package_revision,
-+$progname: but the definition of this LT_INIT comes from revision $macro_revision.
-+$progname: You should recreate aclocal.m4 with macros from revision $package_revision
-+$progname: of $PACKAGE $VERSION and run autoconf again.
-+_LT_EOF
-+    fi
- 
--  # Shorthand for --mode=foo, only valid as the first argument
--  case $1 in
--  clean|clea|cle|cl)
--    shift; set dummy --mode clean ${1+"$@"}; shift
--    ;;
--  compile|compil|compi|comp|com|co|c)
--    shift; set dummy --mode compile ${1+"$@"}; shift
--    ;;
--  execute|execut|execu|exec|exe|ex|e)
--    shift; set dummy --mode execute ${1+"$@"}; shift
--    ;;
--  finish|finis|fini|fin|fi|f)
--    shift; set dummy --mode finish ${1+"$@"}; shift
--    ;;
--  install|instal|insta|inst|ins|in|i)
--    shift; set dummy --mode install ${1+"$@"}; shift
--    ;;
--  link|lin|li|l)
--    shift; set dummy --mode link ${1+"$@"}; shift
--    ;;
--  uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u)
--    shift; set dummy --mode uninstall ${1+"$@"}; shift
--    ;;
--  esac
-+    exit $EXIT_MISMATCH
-+  fi
-+}
-+
-+
-+# Shorthand for --mode=foo, only valid as the first argument
-+case $1 in
-+clean|clea|cle|cl)
-+  shift; set dummy --mode clean ${1+"$@"}; shift
-+  ;;
-+compile|compil|compi|comp|com|co|c)
-+  shift; set dummy --mode compile ${1+"$@"}; shift
-+  ;;
-+execute|execut|execu|exec|exe|ex|e)
-+  shift; set dummy --mode execute ${1+"$@"}; shift
-+  ;;
-+finish|finis|fini|fin|fi|f)
-+  shift; set dummy --mode finish ${1+"$@"}; shift
-+  ;;
-+install|instal|insta|inst|ins|in|i)
-+  shift; set dummy --mode install ${1+"$@"}; shift
-+  ;;
-+link|lin|li|l)
-+  shift; set dummy --mode link ${1+"$@"}; shift
-+  ;;
-+uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u)
-+  shift; set dummy --mode uninstall ${1+"$@"}; shift
-+  ;;
-+esac
- 
--  # Parse non-mode specific arguments:
--  while test "$#" -gt 0; do
-+
-+
-+# Option defaults:
-+opt_debug=:
-+opt_dry_run=false
-+opt_config=false
-+opt_preserve_dup_deps=false
-+opt_features=false
-+opt_finish=false
-+opt_help=false
-+opt_help_all=false
-+opt_silent=:
-+opt_verbose=:
-+opt_silent=false
-+opt_verbose=false
-+
-+
-+# Parse options once, thoroughly.  This comes as soon as possible in the
-+# script to make things like `--version' happen as quickly as we can.
-+{
-+  # this just eases exit handling
-+  while test $# -gt 0; do
-     opt="$1"
-     shift
--
-     case $opt in
--      --config)		func_config					;;
--
--      --debug)		preserve_args="$preserve_args $opt"
-+      --debug|-x)	opt_debug='set -x'
- 			func_echo "enabling shell trace mode"
--			opt_debug='set -x'
- 			$opt_debug
- 			;;
--
--      -dlopen)		test "$#" -eq 0 && func_missing_arg "$opt" && break
--			execute_dlfiles="$execute_dlfiles $1"
--			shift
-+      --dry-run|--dryrun|-n)
-+			opt_dry_run=:
- 			;;
--
--      --dry-run | -n)	opt_dry_run=:					;;
--      --features)       func_features					;;
--      --finish)		mode="finish"					;;
--      --no-finish)	opt_finish=false				;;
--
--      --mode)		test "$#" -eq 0 && func_missing_arg "$opt" && break
--			case $1 in
--			  # Valid mode arguments:
--			  clean)	;;
--			  compile)	;;
--			  execute)	;;
--			  finish)	;;
--			  install)	;;
--			  link)		;;
--			  relink)	;;
--			  uninstall)	;;
--
--			  # Catch anything else as an error
--			  *) func_error "invalid argument for $opt"
--			     exit_cmd=exit
--			     break
--			     ;;
--		        esac
--
--			mode="$1"
-+      --config)
-+			opt_config=:
-+func_config
-+			;;
-+      --dlopen|-dlopen)
-+			optarg="$1"
-+			opt_dlopen="${opt_dlopen+$opt_dlopen
-+}$optarg"
- 			shift
- 			;;
--
-       --preserve-dup-deps)
--			opt_duplicate_deps=:				;;
--
--      --quiet|--silent)	preserve_args="$preserve_args $opt"
--			opt_silent=:
--			opt_verbose=false
-+			opt_preserve_dup_deps=:
- 			;;
--
--      --no-quiet|--no-silent)
--			preserve_args="$preserve_args $opt"
--			opt_silent=false
-+      --features)
-+			opt_features=:
-+func_features
- 			;;
--
--      --verbose| -v)	preserve_args="$preserve_args $opt"
-+      --finish)
-+			opt_finish=:
-+set dummy --mode finish ${1+"$@"}; shift
-+			;;
-+      --help)
-+			opt_help=:
-+			;;
-+      --help-all)
-+			opt_help_all=:
-+opt_help=': help-all'
-+			;;
-+      --mode)
-+			test $# = 0 && func_missing_arg $opt && break
-+			optarg="$1"
-+			opt_mode="$optarg"
-+case $optarg in
-+  # Valid mode arguments:
-+  clean|compile|execute|finish|install|link|relink|uninstall) ;;
-+
-+  # Catch anything else as an error
-+  *) func_error "invalid argument for $opt"
-+     exit_cmd=exit
-+     break
-+     ;;
-+esac
-+			shift
-+			;;
-+      --no-silent|--no-quiet)
- 			opt_silent=false
--			opt_verbose=:
-+func_append preserve_args " $opt"
- 			;;
--
--      --no-verbose)	preserve_args="$preserve_args $opt"
-+      --no-verbose)
- 			opt_verbose=false
-+func_append preserve_args " $opt"
- 			;;
--
--      --tag)		test "$#" -eq 0 && func_missing_arg "$opt" && break
--			preserve_args="$preserve_args $opt $1"
--			func_enable_tag "$1"	# tagname is set here
-+      --silent|--quiet)
-+			opt_silent=:
-+func_append preserve_args " $opt"
-+        opt_verbose=false
-+			;;
-+      --verbose|-v)
-+			opt_verbose=:
-+func_append preserve_args " $opt"
-+opt_silent=false
-+			;;
-+      --tag)
-+			test $# = 0 && func_missing_arg $opt && break
-+			optarg="$1"
-+			opt_tag="$optarg"
-+func_append preserve_args " $opt $optarg"
-+func_enable_tag "$optarg"
- 			shift
- 			;;
- 
-+      -\?|-h)		func_usage				;;
-+      --help)		func_help				;;
-+      --version)	func_version				;;
-+
-       # Separate optargs to long options:
--      -dlopen=*|--mode=*|--tag=*)
--			func_opt_split "$opt"
--			set dummy "$func_opt_split_opt" "$func_opt_split_arg" ${1+"$@"}
-+      --*=*)
-+			func_split_long_opt "$opt"
-+			set dummy "$func_split_long_opt_name" "$func_split_long_opt_arg" ${1+"$@"}
- 			shift
- 			;;
- 
--      -\?|-h)		func_usage					;;
--      --help)		opt_help=:					;;
--      --help-all)	opt_help=': help-all'				;;
--      --version)	func_version					;;
--
--      -*)		func_fatal_help "unrecognized option \`$opt'"	;;
--
--      *)		nonopt="$opt"
--			break
-+      # Separate non-argument short options:
-+      -\?*|-h*|-n*|-v*)
-+			func_split_short_opt "$opt"
-+			set dummy "$func_split_short_opt_name" "-$func_split_short_opt_arg" ${1+"$@"}
-+			shift
- 			;;
-+
-+      --)		break					;;
-+      -*)		func_fatal_help "unrecognized option \`$opt'" ;;
-+      *)		set dummy "$opt" ${1+"$@"};	shift; break  ;;
-     esac
-   done
- 
-+  # Validate options:
-+
-+  # save first non-option argument
-+  if test "$#" -gt 0; then
-+    nonopt="$opt"
-+    shift
-+  fi
-+
-+  # preserve --debug
-+  test "$opt_debug" = : || func_append preserve_args " --debug"
- 
-   case $host in
-     *cygwin* | *mingw* | *pw32* | *cegcc* | *solaris2* )
-@@ -981,82 +1184,44 @@ func_enable_tag ()
-       opt_duplicate_compiler_generated_deps=:
-       ;;
-     *)
--      opt_duplicate_compiler_generated_deps=$opt_duplicate_deps
-+      opt_duplicate_compiler_generated_deps=$opt_preserve_dup_deps
-       ;;
-   esac
- 
--  # Having warned about all mis-specified options, bail out if
--  # anything was wrong.
--  $exit_cmd $EXIT_FAILURE
--}
-+  $opt_help || {
-+    # Sanity checks first:
-+    func_check_version_match
- 
--# func_check_version_match
--# Ensure that we are using m4 macros, and libtool script from the same
--# release of libtool.
--func_check_version_match ()
--{
--  if test "$package_revision" != "$macro_revision"; then
--    if test "$VERSION" != "$macro_version"; then
--      if test -z "$macro_version"; then
--        cat >&2 <<_LT_EOF
--$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
--$progname: definition of this LT_INIT comes from an older release.
--$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
--$progname: and run autoconf again.
--_LT_EOF
--      else
--        cat >&2 <<_LT_EOF
--$progname: Version mismatch error.  This is $PACKAGE $VERSION, but the
--$progname: definition of this LT_INIT comes from $PACKAGE $macro_version.
--$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION
--$progname: and run autoconf again.
--_LT_EOF
--      fi
--    else
--      cat >&2 <<_LT_EOF
--$progname: Version mismatch error.  This is $PACKAGE $VERSION, revision $package_revision,
--$progname: but the definition of this LT_INIT comes from revision $macro_revision.
--$progname: You should recreate aclocal.m4 with macros from revision $package_revision
--$progname: of $PACKAGE $VERSION and run autoconf again.
--_LT_EOF
-+    if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then
-+      func_fatal_configuration "not configured to build any kind of library"
-     fi
- 
--    exit $EXIT_MISMATCH
--  fi
--}
--
-+    # Darwin sucks
-+    eval std_shrext=\"$shrext_cmds\"
- 
--## ----------- ##
--##    Main.    ##
--## ----------- ##
--
--$opt_help || {
--  # Sanity checks first:
--  func_check_version_match
--
--  if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then
--    func_fatal_configuration "not configured to build any kind of library"
--  fi
-+    # Only execute mode is allowed to have -dlopen flags.
-+    if test -n "$opt_dlopen" && test "$opt_mode" != execute; then
-+      func_error "unrecognized option \`-dlopen'"
-+      $ECHO "$help" 1>&2
-+      exit $EXIT_FAILURE
-+    fi
- 
--  test -z "$mode" && func_fatal_error "error: you must specify a MODE."
-+    # Change the help message to a mode-specific one.
-+    generic_help="$help"
-+    help="Try \`$progname --help --mode=$opt_mode' for more information."
-+  }
- 
- 
--  # Darwin sucks
--  eval "std_shrext=\"$shrext_cmds\""
-+  # Bail if the options were screwed
-+  $exit_cmd $EXIT_FAILURE
-+}
- 
- 
--  # Only execute mode is allowed to have -dlopen flags.
--  if test -n "$execute_dlfiles" && test "$mode" != execute; then
--    func_error "unrecognized option \`-dlopen'"
--    $ECHO "$help" 1>&2
--    exit $EXIT_FAILURE
--  fi
- 
--  # Change the help message to a mode-specific one.
--  generic_help="$help"
--  help="Try \`$progname --help --mode=$mode' for more information."
--}
- 
-+## ----------- ##
-+##    Main.    ##
-+## ----------- ##
- 
- # func_lalib_p file
- # True iff FILE is a libtool `.la' library or `.lo' object file.
-@@ -1121,12 +1286,9 @@ func_ltwrapper_executable_p ()
- # temporary ltwrapper_script.
- func_ltwrapper_scriptname ()
- {
--    func_ltwrapper_scriptname_result=""
--    if func_ltwrapper_executable_p "$1"; then
--	func_dirname_and_basename "$1" "" "."
--	func_stripname '' '.exe' "$func_basename_result"
--	func_ltwrapper_scriptname_result="$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper"
--    fi
-+    func_dirname_and_basename "$1" "" "."
-+    func_stripname '' '.exe' "$func_basename_result"
-+    func_ltwrapper_scriptname_result="$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper"
- }
- 
- # func_ltwrapper_p file
-@@ -1149,7 +1311,7 @@ func_execute_cmds ()
-     save_ifs=$IFS; IFS='~'
-     for cmd in $1; do
-       IFS=$save_ifs
--      eval "cmd=\"$cmd\""
-+      eval cmd=\"$cmd\"
-       func_show_eval "$cmd" "${2-:}"
-     done
-     IFS=$save_ifs
-@@ -1172,6 +1334,37 @@ func_source ()
- }
- 
- 
-+# func_resolve_sysroot PATH
-+# Replace a leading = in PATH with a sysroot.  Store the result into
-+# func_resolve_sysroot_result
-+func_resolve_sysroot ()
-+{
-+  func_resolve_sysroot_result=$1
-+  case $func_resolve_sysroot_result in
-+  =*)
-+    func_stripname '=' '' "$func_resolve_sysroot_result"
-+    func_resolve_sysroot_result=$lt_sysroot$func_stripname_result
-+    ;;
-+  esac
-+}
-+
-+# func_replace_sysroot PATH
-+# If PATH begins with the sysroot, replace it with = and
-+# store the result into func_replace_sysroot_result.
-+func_replace_sysroot ()
-+{
-+  case "$lt_sysroot:$1" in
-+  ?*:"$lt_sysroot"*)
-+    func_stripname "$lt_sysroot" '' "$1"
-+    func_replace_sysroot_result="=$func_stripname_result"
-+    ;;
-+  *)
-+    # Including no sysroot.
-+    func_replace_sysroot_result=$1
-+    ;;
-+  esac
-+}
-+
- # func_infer_tag arg
- # Infer tagged configuration to use if any are available and
- # if one wasn't chosen via the "--tag" command line option.
-@@ -1184,8 +1377,7 @@ func_infer_tag ()
-     if test -n "$available_tags" && test -z "$tagname"; then
-       CC_quoted=
-       for arg in $CC; do
--        func_quote_for_eval "$arg"
--	CC_quoted="$CC_quoted $func_quote_for_eval_result"
-+	func_append_quoted CC_quoted "$arg"
-       done
-       CC_expanded=`func_echo_all $CC`
-       CC_quoted_expanded=`func_echo_all $CC_quoted`
-@@ -1204,8 +1396,7 @@ func_infer_tag ()
- 	    CC_quoted=
- 	    for arg in $CC; do
- 	      # Double-quote args containing other shell metacharacters.
--	      func_quote_for_eval "$arg"
--	      CC_quoted="$CC_quoted $func_quote_for_eval_result"
-+	      func_append_quoted CC_quoted "$arg"
- 	    done
- 	    CC_expanded=`func_echo_all $CC`
- 	    CC_quoted_expanded=`func_echo_all $CC_quoted`
-@@ -1274,6 +1465,486 @@ EOF
-     }
- }
- 
-+
-+##################################################
-+# FILE NAME AND PATH CONVERSION HELPER FUNCTIONS #
-+##################################################
-+
-+# func_convert_core_file_wine_to_w32 ARG
-+# Helper function used by file name conversion functions when $build is *nix,
-+# and $host is mingw, cygwin, or some other w32 environment. Relies on a
-+# correctly configured wine environment available, with the winepath program
-+# in $build's $PATH.
-+#
-+# ARG is the $build file name to be converted to w32 format.
-+# Result is available in $func_convert_core_file_wine_to_w32_result, and will
-+# be empty on error (or when ARG is empty)
-+func_convert_core_file_wine_to_w32 ()
-+{
-+  $opt_debug
-+  func_convert_core_file_wine_to_w32_result="$1"
-+  if test -n "$1"; then
-+    # Unfortunately, winepath does not exit with a non-zero error code, so we
-+    # are forced to check the contents of stdout. On the other hand, if the
-+    # command is not found, the shell will set an exit code of 127 and print
-+    # *an error message* to stdout. So we must check for both error code of
-+    # zero AND non-empty stdout, which explains the odd construction:
-+    func_convert_core_file_wine_to_w32_tmp=`winepath -w "$1" 2>/dev/null`
-+    if test "$?" -eq 0 && test -n "${func_convert_core_file_wine_to_w32_tmp}"; then
-+      func_convert_core_file_wine_to_w32_result=`$ECHO "$func_convert_core_file_wine_to_w32_tmp" |
-+        $SED -e "$lt_sed_naive_backslashify"`
-+    else
-+      func_convert_core_file_wine_to_w32_result=
-+    fi
-+  fi
-+}
-+# end: func_convert_core_file_wine_to_w32
-+
-+
-+# func_convert_core_path_wine_to_w32 ARG
-+# Helper function used by path conversion functions when $build is *nix, and
-+# $host is mingw, cygwin, or some other w32 environment. Relies on a correctly
-+# configured wine environment available, with the winepath program in $build's
-+# $PATH. Assumes ARG has no leading or trailing path separator characters.
-+#
-+# ARG is path to be converted from $build format to win32.
-+# Result is available in $func_convert_core_path_wine_to_w32_result.
-+# Unconvertible file (directory) names in ARG are skipped; if no directory names
-+# are convertible, then the result may be empty.
-+func_convert_core_path_wine_to_w32 ()
-+{
-+  $opt_debug
-+  # unfortunately, winepath doesn't convert paths, only file names
-+  func_convert_core_path_wine_to_w32_result=""
-+  if test -n "$1"; then
-+    oldIFS=$IFS
-+    IFS=:
-+    for func_convert_core_path_wine_to_w32_f in $1; do
-+      IFS=$oldIFS
-+      func_convert_core_file_wine_to_w32 "$func_convert_core_path_wine_to_w32_f"
-+      if test -n "$func_convert_core_file_wine_to_w32_result" ; then
-+        if test -z "$func_convert_core_path_wine_to_w32_result"; then
-+          func_convert_core_path_wine_to_w32_result="$func_convert_core_file_wine_to_w32_result"
-+        else
-+          func_append func_convert_core_path_wine_to_w32_result ";$func_convert_core_file_wine_to_w32_result"
-+        fi
-+      fi
-+    done
-+    IFS=$oldIFS
-+  fi
-+}
-+# end: func_convert_core_path_wine_to_w32
-+
-+
-+# func_cygpath ARGS...
-+# Wrapper around calling the cygpath program via LT_CYGPATH. This is used when
-+# when (1) $build is *nix and Cygwin is hosted via a wine environment; or (2)
-+# $build is MSYS and $host is Cygwin, or (3) $build is Cygwin. In case (1) or
-+# (2), returns the Cygwin file name or path in func_cygpath_result (input
-+# file name or path is assumed to be in w32 format, as previously converted
-+# from $build's *nix or MSYS format). In case (3), returns the w32 file name
-+# or path in func_cygpath_result (input file name or path is assumed to be in
-+# Cygwin format). Returns an empty string on error.
-+#
-+# ARGS are passed to cygpath, with the last one being the file name or path to
-+# be converted.
-+#
-+# Specify the absolute *nix (or w32) name to cygpath in the LT_CYGPATH
-+# environment variable; do not put it in $PATH.
-+func_cygpath ()
-+{
-+  $opt_debug
-+  if test -n "$LT_CYGPATH" && test -f "$LT_CYGPATH"; then
-+    func_cygpath_result=`$LT_CYGPATH "$@" 2>/dev/null`
-+    if test "$?" -ne 0; then
-+      # on failure, ensure result is empty
-+      func_cygpath_result=
-+    fi
-+  else
-+    func_cygpath_result=
-+    func_error "LT_CYGPATH is empty or specifies non-existent file: \`$LT_CYGPATH'"
-+  fi
-+}
-+#end: func_cygpath
-+
-+
-+# func_convert_core_msys_to_w32 ARG
-+# Convert file name or path ARG from MSYS format to w32 format.  Return
-+# result in func_convert_core_msys_to_w32_result.
-+func_convert_core_msys_to_w32 ()
-+{
-+  $opt_debug
-+  # awkward: cmd appends spaces to result
-+  func_convert_core_msys_to_w32_result=`( cmd //c echo "$1" ) 2>/dev/null |
-+    $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
-+}
-+#end: func_convert_core_msys_to_w32
-+
-+
-+# func_convert_file_check ARG1 ARG2
-+# Verify that ARG1 (a file name in $build format) was converted to $host
-+# format in ARG2. Otherwise, emit an error message, but continue (resetting
-+# func_to_host_file_result to ARG1).
-+func_convert_file_check ()
-+{
-+  $opt_debug
-+  if test -z "$2" && test -n "$1" ; then
-+    func_error "Could not determine host file name corresponding to"
-+    func_error "  \`$1'"
-+    func_error "Continuing, but uninstalled executables may not work."
-+    # Fallback:
-+    func_to_host_file_result="$1"
-+  fi
-+}
-+# end func_convert_file_check
-+
-+
-+# func_convert_path_check FROM_PATHSEP TO_PATHSEP FROM_PATH TO_PATH
-+# Verify that FROM_PATH (a path in $build format) was converted to $host
-+# format in TO_PATH. Otherwise, emit an error message, but continue, resetting
-+# func_to_host_file_result to a simplistic fallback value (see below).
-+func_convert_path_check ()
-+{
-+  $opt_debug
-+  if test -z "$4" && test -n "$3"; then
-+    func_error "Could not determine the host path corresponding to"
-+    func_error "  \`$3'"
-+    func_error "Continuing, but uninstalled executables may not work."
-+    # Fallback.  This is a deliberately simplistic "conversion" and
-+    # should not be "improved".  See libtool.info.
-+    if test "x$1" != "x$2"; then
-+      lt_replace_pathsep_chars="s|$1|$2|g"
-+      func_to_host_path_result=`echo "$3" |
-+        $SED -e "$lt_replace_pathsep_chars"`
-+    else
-+      func_to_host_path_result="$3"
-+    fi
-+  fi
-+}
-+# end func_convert_path_check
-+
-+
-+# func_convert_path_front_back_pathsep FRONTPAT BACKPAT REPL ORIG
-+# Modifies func_to_host_path_result by prepending REPL if ORIG matches FRONTPAT
-+# and appending REPL if ORIG matches BACKPAT.
-+func_convert_path_front_back_pathsep ()
-+{
-+  $opt_debug
-+  case $4 in
-+  $1 ) func_to_host_path_result="$3$func_to_host_path_result"
-+    ;;
-+  esac
-+  case $4 in
-+  $2 ) func_append func_to_host_path_result "$3"
-+    ;;
-+  esac
-+}
-+# end func_convert_path_front_back_pathsep
-+
-+
-+##################################################
-+# $build to $host FILE NAME CONVERSION FUNCTIONS #
-+##################################################
-+# invoked via `$to_host_file_cmd ARG'
-+#
-+# In each case, ARG is the path to be converted from $build to $host format.
-+# Result will be available in $func_to_host_file_result.
-+
-+
-+# func_to_host_file ARG
-+# Converts the file name ARG from $build format to $host format. Return result
-+# in func_to_host_file_result.
-+func_to_host_file ()
-+{
-+  $opt_debug
-+  $to_host_file_cmd "$1"
-+}
-+# end func_to_host_file
-+
-+
-+# func_to_tool_file ARG LAZY
-+# converts the file name ARG from $build format to toolchain format. Return
-+# result in func_to_tool_file_result.  If the conversion in use is listed
-+# in (the comma separated) LAZY, no conversion takes place.
-+func_to_tool_file ()
-+{
-+  $opt_debug
-+  case ,$2, in
-+    *,"$to_tool_file_cmd",*)
-+      func_to_tool_file_result=$1
-+      ;;
-+    *)
-+      $to_tool_file_cmd "$1"
-+      func_to_tool_file_result=$func_to_host_file_result
-+      ;;
-+  esac
-+}
-+# end func_to_tool_file
-+
-+
-+# func_convert_file_noop ARG
-+# Copy ARG to func_to_host_file_result.
-+func_convert_file_noop ()
-+{
-+  func_to_host_file_result="$1"
-+}
-+# end func_convert_file_noop
-+
-+
-+# func_convert_file_msys_to_w32 ARG
-+# Convert file name ARG from (mingw) MSYS to (mingw) w32 format; automatic
-+# conversion to w32 is not available inside the cwrapper.  Returns result in
-+# func_to_host_file_result.
-+func_convert_file_msys_to_w32 ()
-+{
-+  $opt_debug
-+  func_to_host_file_result="$1"
-+  if test -n "$1"; then
-+    func_convert_core_msys_to_w32 "$1"
-+    func_to_host_file_result="$func_convert_core_msys_to_w32_result"
-+  fi
-+  func_convert_file_check "$1" "$func_to_host_file_result"
-+}
-+# end func_convert_file_msys_to_w32
-+
-+
-+# func_convert_file_cygwin_to_w32 ARG
-+# Convert file name ARG from Cygwin to w32 format.  Returns result in
-+# func_to_host_file_result.
-+func_convert_file_cygwin_to_w32 ()
-+{
-+  $opt_debug
-+  func_to_host_file_result="$1"
-+  if test -n "$1"; then
-+    # because $build is cygwin, we call "the" cygpath in $PATH; no need to use
-+    # LT_CYGPATH in this case.
-+    func_to_host_file_result=`cygpath -m "$1"`
-+  fi
-+  func_convert_file_check "$1" "$func_to_host_file_result"
-+}
-+# end func_convert_file_cygwin_to_w32
-+
-+
-+# func_convert_file_nix_to_w32 ARG
-+# Convert file name ARG from *nix to w32 format.  Requires a wine environment
-+# and a working winepath. Returns result in func_to_host_file_result.
-+func_convert_file_nix_to_w32 ()
-+{
-+  $opt_debug
-+  func_to_host_file_result="$1"
-+  if test -n "$1"; then
-+    func_convert_core_file_wine_to_w32 "$1"
-+    func_to_host_file_result="$func_convert_core_file_wine_to_w32_result"
-+  fi
-+  func_convert_file_check "$1" "$func_to_host_file_result"
-+}
-+# end func_convert_file_nix_to_w32
-+
-+
-+# func_convert_file_msys_to_cygwin ARG
-+# Convert file name ARG from MSYS to Cygwin format.  Requires LT_CYGPATH set.
-+# Returns result in func_to_host_file_result.
-+func_convert_file_msys_to_cygwin ()
-+{
-+  $opt_debug
-+  func_to_host_file_result="$1"
-+  if test -n "$1"; then
-+    func_convert_core_msys_to_w32 "$1"
-+    func_cygpath -u "$func_convert_core_msys_to_w32_result"
-+    func_to_host_file_result="$func_cygpath_result"
-+  fi
-+  func_convert_file_check "$1" "$func_to_host_file_result"
-+}
-+# end func_convert_file_msys_to_cygwin
-+
-+
-+# func_convert_file_nix_to_cygwin ARG
-+# Convert file name ARG from *nix to Cygwin format.  Requires Cygwin installed
-+# in a wine environment, working winepath, and LT_CYGPATH set.  Returns result
-+# in func_to_host_file_result.
-+func_convert_file_nix_to_cygwin ()
-+{
-+  $opt_debug
-+  func_to_host_file_result="$1"
-+  if test -n "$1"; then
-+    # convert from *nix to w32, then use cygpath to convert from w32 to cygwin.
-+    func_convert_core_file_wine_to_w32 "$1"
-+    func_cygpath -u "$func_convert_core_file_wine_to_w32_result"
-+    func_to_host_file_result="$func_cygpath_result"
-+  fi
-+  func_convert_file_check "$1" "$func_to_host_file_result"
-+}
-+# end func_convert_file_nix_to_cygwin
-+
-+
-+#############################################
-+# $build to $host PATH CONVERSION FUNCTIONS #
-+#############################################
-+# invoked via `$to_host_path_cmd ARG'
-+#
-+# In each case, ARG is the path to be converted from $build to $host format.
-+# The result will be available in $func_to_host_path_result.
-+#
-+# Path separators are also converted from $build format to $host format.  If
-+# ARG begins or ends with a path separator character, it is preserved (but
-+# converted to $host format) on output.
-+#
-+# All path conversion functions are named using the following convention:
-+#   file name conversion function    : func_convert_file_X_to_Y ()
-+#   path conversion function         : func_convert_path_X_to_Y ()
-+# where, for any given $build/$host combination the 'X_to_Y' value is the
-+# same.  If conversion functions are added for new $build/$host combinations,
-+# the two new functions must follow this pattern, or func_init_to_host_path_cmd
-+# will break.
-+
-+
-+# func_init_to_host_path_cmd
-+# Ensures that function "pointer" variable $to_host_path_cmd is set to the
-+# appropriate value, based on the value of $to_host_file_cmd.
-+to_host_path_cmd=
-+func_init_to_host_path_cmd ()
-+{
-+  $opt_debug
-+  if test -z "$to_host_path_cmd"; then
-+    func_stripname 'func_convert_file_' '' "$to_host_file_cmd"
-+    to_host_path_cmd="func_convert_path_${func_stripname_result}"
-+  fi
-+}
-+
-+
-+# func_to_host_path ARG
-+# Converts the path ARG from $build format to $host format. Return result
-+# in func_to_host_path_result.
-+func_to_host_path ()
-+{
-+  $opt_debug
-+  func_init_to_host_path_cmd
-+  $to_host_path_cmd "$1"
-+}
-+# end func_to_host_path
-+
-+
-+# func_convert_path_noop ARG
-+# Copy ARG to func_to_host_path_result.
-+func_convert_path_noop ()
-+{
-+  func_to_host_path_result="$1"
-+}
-+# end func_convert_path_noop
-+
-+
-+# func_convert_path_msys_to_w32 ARG
-+# Convert path ARG from (mingw) MSYS to (mingw) w32 format; automatic
-+# conversion to w32 is not available inside the cwrapper.  Returns result in
-+# func_to_host_path_result.
-+func_convert_path_msys_to_w32 ()
-+{
-+  $opt_debug
-+  func_to_host_path_result="$1"
-+  if test -n "$1"; then
-+    # Remove leading and trailing path separator characters from ARG.  MSYS
-+    # behavior is inconsistent here; cygpath turns them into '.;' and ';.';
-+    # and winepath ignores them completely.
-+    func_stripname : : "$1"
-+    func_to_host_path_tmp1=$func_stripname_result
-+    func_convert_core_msys_to_w32 "$func_to_host_path_tmp1"
-+    func_to_host_path_result="$func_convert_core_msys_to_w32_result"
-+    func_convert_path_check : ";" \
-+      "$func_to_host_path_tmp1" "$func_to_host_path_result"
-+    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
-+  fi
-+}
-+# end func_convert_path_msys_to_w32
-+
-+
-+# func_convert_path_cygwin_to_w32 ARG
-+# Convert path ARG from Cygwin to w32 format.  Returns result in
-+# func_to_host_file_result.
-+func_convert_path_cygwin_to_w32 ()
-+{
-+  $opt_debug
-+  func_to_host_path_result="$1"
-+  if test -n "$1"; then
-+    # See func_convert_path_msys_to_w32:
-+    func_stripname : : "$1"
-+    func_to_host_path_tmp1=$func_stripname_result
-+    func_to_host_path_result=`cygpath -m -p "$func_to_host_path_tmp1"`
-+    func_convert_path_check : ";" \
-+      "$func_to_host_path_tmp1" "$func_to_host_path_result"
-+    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
-+  fi
-+}
-+# end func_convert_path_cygwin_to_w32
-+
-+
-+# func_convert_path_nix_to_w32 ARG
-+# Convert path ARG from *nix to w32 format.  Requires a wine environment and
-+# a working winepath.  Returns result in func_to_host_file_result.
-+func_convert_path_nix_to_w32 ()
-+{
-+  $opt_debug
-+  func_to_host_path_result="$1"
-+  if test -n "$1"; then
-+    # See func_convert_path_msys_to_w32:
-+    func_stripname : : "$1"
-+    func_to_host_path_tmp1=$func_stripname_result
-+    func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1"
-+    func_to_host_path_result="$func_convert_core_path_wine_to_w32_result"
-+    func_convert_path_check : ";" \
-+      "$func_to_host_path_tmp1" "$func_to_host_path_result"
-+    func_convert_path_front_back_pathsep ":*" "*:" ";" "$1"
-+  fi
-+}
-+# end func_convert_path_nix_to_w32
-+
-+
-+# func_convert_path_msys_to_cygwin ARG
-+# Convert path ARG from MSYS to Cygwin format.  Requires LT_CYGPATH set.
-+# Returns result in func_to_host_file_result.
-+func_convert_path_msys_to_cygwin ()
-+{
-+  $opt_debug
-+  func_to_host_path_result="$1"
-+  if test -n "$1"; then
-+    # See func_convert_path_msys_to_w32:
-+    func_stripname : : "$1"
-+    func_to_host_path_tmp1=$func_stripname_result
-+    func_convert_core_msys_to_w32 "$func_to_host_path_tmp1"
-+    func_cygpath -u -p "$func_convert_core_msys_to_w32_result"
-+    func_to_host_path_result="$func_cygpath_result"
-+    func_convert_path_check : : \
-+      "$func_to_host_path_tmp1" "$func_to_host_path_result"
-+    func_convert_path_front_back_pathsep ":*" "*:" : "$1"
-+  fi
-+}
-+# end func_convert_path_msys_to_cygwin
-+
-+
-+# func_convert_path_nix_to_cygwin ARG
-+# Convert path ARG from *nix to Cygwin format.  Requires Cygwin installed in a
-+# a wine environment, working winepath, and LT_CYGPATH set.  Returns result in
-+# func_to_host_file_result.
-+func_convert_path_nix_to_cygwin ()
-+{
-+  $opt_debug
-+  func_to_host_path_result="$1"
-+  if test -n "$1"; then
-+    # Remove leading and trailing path separator characters from
-+    # ARG. msys behavior is inconsistent here, cygpath turns them
-+    # into '.;' and ';.', and winepath ignores them completely.
-+    func_stripname : : "$1"
-+    func_to_host_path_tmp1=$func_stripname_result
-+    func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1"
-+    func_cygpath -u -p "$func_convert_core_path_wine_to_w32_result"
-+    func_to_host_path_result="$func_cygpath_result"
-+    func_convert_path_check : : \
-+      "$func_to_host_path_tmp1" "$func_to_host_path_result"
-+    func_convert_path_front_back_pathsep ":*" "*:" : "$1"
-+  fi
-+}
-+# end func_convert_path_nix_to_cygwin
-+
-+
- # func_mode_compile arg...
- func_mode_compile ()
- {
-@@ -1314,12 +1985,12 @@ func_mode_compile ()
- 	  ;;
- 
- 	-pie | -fpie | -fPIE)
--          pie_flag="$pie_flag $arg"
-+          func_append pie_flag " $arg"
- 	  continue
- 	  ;;
- 
- 	-shared | -static | -prefer-pic | -prefer-non-pic)
--	  later="$later $arg"
-+	  func_append later " $arg"
- 	  continue
- 	  ;;
- 
-@@ -1340,15 +2011,14 @@ func_mode_compile ()
- 	  save_ifs="$IFS"; IFS=','
- 	  for arg in $args; do
- 	    IFS="$save_ifs"
--	    func_quote_for_eval "$arg"
--	    lastarg="$lastarg $func_quote_for_eval_result"
-+	    func_append_quoted lastarg "$arg"
- 	  done
- 	  IFS="$save_ifs"
- 	  func_stripname ' ' '' "$lastarg"
- 	  lastarg=$func_stripname_result
- 
- 	  # Add the arguments to base_compile.
--	  base_compile="$base_compile $lastarg"
-+	  func_append base_compile " $lastarg"
- 	  continue
- 	  ;;
- 
-@@ -1364,8 +2034,7 @@ func_mode_compile ()
-       esac    #  case $arg_mode
- 
-       # Aesthetically quote the previous argument.
--      func_quote_for_eval "$lastarg"
--      base_compile="$base_compile $func_quote_for_eval_result"
-+      func_append_quoted base_compile "$lastarg"
-     done # for arg
- 
-     case $arg_mode in
-@@ -1496,17 +2165,16 @@ compiler."
- 	$opt_dry_run || $RM $removelist
- 	exit $EXIT_FAILURE
-       fi
--      removelist="$removelist $output_obj"
-+      func_append removelist " $output_obj"
-       $ECHO "$srcfile" > "$lockfile"
-     fi
- 
-     $opt_dry_run || $RM $removelist
--    removelist="$removelist $lockfile"
-+    func_append removelist " $lockfile"
-     trap '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' 1 2 15
- 
--    if test -n "$fix_srcfile_path"; then
--      eval "srcfile=\"$fix_srcfile_path\""
--    fi
-+    func_to_tool_file "$srcfile" func_convert_file_msys_to_w32
-+    srcfile=$func_to_tool_file_result
-     func_quote_for_eval "$srcfile"
-     qsrcfile=$func_quote_for_eval_result
- 
-@@ -1526,7 +2194,7 @@ compiler."
- 
-       if test -z "$output_obj"; then
- 	# Place PIC objects in $objdir
--	command="$command -o $lobj"
-+	func_append command " -o $lobj"
-       fi
- 
-       func_show_eval_locale "$command"	\
-@@ -1573,11 +2241,11 @@ compiler."
- 	command="$base_compile $qsrcfile $pic_flag"
-       fi
-       if test "$compiler_c_o" = yes; then
--	command="$command -o $obj"
-+	func_append command " -o $obj"
-       fi
- 
-       # Suppress compiler output if we already did a PIC compilation.
--      command="$command$suppress_output"
-+      func_append command "$suppress_output"
-       func_show_eval_locale "$command" \
-         '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE'
- 
-@@ -1622,13 +2290,13 @@ compiler."
- }
- 
- $opt_help || {
--  test "$mode" = compile && func_mode_compile ${1+"$@"}
-+  test "$opt_mode" = compile && func_mode_compile ${1+"$@"}
- }
- 
- func_mode_help ()
- {
-     # We need to display help for each of the modes.
--    case $mode in
-+    case $opt_mode in
-       "")
-         # Generic help is extracted from the usage comments
-         # at the start of this file.
-@@ -1659,8 +2327,8 @@ This mode accepts the following additional options:
- 
-   -o OUTPUT-FILE    set the output file name to OUTPUT-FILE
-   -no-suppress      do not suppress compiler output for multiple passes
--  -prefer-pic       try to building PIC objects only
--  -prefer-non-pic   try to building non-PIC objects only
-+  -prefer-pic       try to build PIC objects only
-+  -prefer-non-pic   try to build non-PIC objects only
-   -shared           do not build a \`.o' file suitable for static linking
-   -static           only build a \`.o' file suitable for static linking
-   -Wc,FLAG          pass FLAG directly to the compiler
-@@ -1804,7 +2472,7 @@ Otherwise, only FILE itself is deleted using RM."
-         ;;
- 
-       *)
--        func_fatal_help "invalid operation mode \`$mode'"
-+        func_fatal_help "invalid operation mode \`$opt_mode'"
-         ;;
-     esac
- 
-@@ -1819,13 +2487,13 @@ if $opt_help; then
-   else
-     {
-       func_help noexit
--      for mode in compile link execute install finish uninstall clean; do
-+      for opt_mode in compile link execute install finish uninstall clean; do
- 	func_mode_help
-       done
-     } | sed -n '1p; 2,$s/^Usage:/  or: /p'
-     {
-       func_help noexit
--      for mode in compile link execute install finish uninstall clean; do
-+      for opt_mode in compile link execute install finish uninstall clean; do
- 	echo
- 	func_mode_help
-       done
-@@ -1854,13 +2522,16 @@ func_mode_execute ()
-       func_fatal_help "you must specify a COMMAND"
- 
-     # Handle -dlopen flags immediately.
--    for file in $execute_dlfiles; do
-+    for file in $opt_dlopen; do
-       test -f "$file" \
- 	|| func_fatal_help "\`$file' is not a file"
- 
-       dir=
-       case $file in
-       *.la)
-+	func_resolve_sysroot "$file"
-+	file=$func_resolve_sysroot_result
-+
- 	# Check to see that this really is a libtool archive.
- 	func_lalib_unsafe_p "$file" \
- 	  || func_fatal_help "\`$lib' is not a valid libtool archive"
-@@ -1882,7 +2553,7 @@ func_mode_execute ()
- 	dir="$func_dirname_result"
- 
- 	if test -f "$dir/$objdir/$dlname"; then
--	  dir="$dir/$objdir"
-+	  func_append dir "/$objdir"
- 	else
- 	  if test ! -f "$dir/$dlname"; then
- 	    func_fatal_error "cannot find \`$dlname' in \`$dir' or \`$dir/$objdir'"
-@@ -1907,10 +2578,10 @@ func_mode_execute ()
-       test -n "$absdir" && dir="$absdir"
- 
-       # Now add the directory to shlibpath_var.
--      if eval test -z \"\$$shlibpath_var\"; then
--	eval $shlibpath_var=\$dir
-+      if eval "test -z \"\$$shlibpath_var\""; then
-+	eval "$shlibpath_var=\"\$dir\""
-       else
--	eval $shlibpath_var=\$dir:\$$shlibpath_var
-+	eval "$shlibpath_var=\"\$dir:\$$shlibpath_var\""
-       fi
-     done
- 
-@@ -1939,8 +2610,7 @@ func_mode_execute ()
- 	;;
-       esac
-       # Quote arguments (to preserve shell metacharacters).
--      func_quote_for_eval "$file"
--      args="$args $func_quote_for_eval_result"
-+      func_append_quoted args "$file"
-     done
- 
-     if test "X$opt_dry_run" = Xfalse; then
-@@ -1972,22 +2642,59 @@ func_mode_execute ()
-     fi
- }
- 
--test "$mode" = execute && func_mode_execute ${1+"$@"}
-+test "$opt_mode" = execute && func_mode_execute ${1+"$@"}
- 
- 
- # func_mode_finish arg...
- func_mode_finish ()
- {
-     $opt_debug
--    libdirs="$nonopt"
-+    libs=
-+    libdirs=
-     admincmds=
- 
--    if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
--      for dir
--      do
--	libdirs="$libdirs $dir"
--      done
-+    for opt in "$nonopt" ${1+"$@"}
-+    do
-+      if test -d "$opt"; then
-+	func_append libdirs " $opt"
- 
-+      elif test -f "$opt"; then
-+	if func_lalib_unsafe_p "$opt"; then
-+	  func_append libs " $opt"
-+	else
-+	  func_warning "\`$opt' is not a valid libtool archive"
-+	fi
-+
-+      else
-+	func_fatal_error "invalid argument \`$opt'"
-+      fi
-+    done
-+
-+    if test -n "$libs"; then
-+      if test -n "$lt_sysroot"; then
-+        sysroot_regex=`$ECHO "$lt_sysroot" | $SED "$sed_make_literal_regex"`
-+        sysroot_cmd="s/\([ ']\)$sysroot_regex/\1/g;"
-+      else
-+        sysroot_cmd=
-+      fi
-+
-+      # Remove sysroot references
-+      if $opt_dry_run; then
-+        for lib in $libs; do
-+          echo "removing references to $lt_sysroot and \`=' prefixes from $lib"
-+        done
-+      else
-+        tmpdir=`func_mktempdir`
-+        for lib in $libs; do
-+	  sed -e "${sysroot_cmd} s/\([ ']-[LR]\)=/\1/g; s/\([ ']\)=/\1/g" $lib \
-+	    > $tmpdir/tmp-la
-+	  mv -f $tmpdir/tmp-la $lib
-+	done
-+        ${RM}r "$tmpdir"
-+      fi
-+    fi
-+
-+    if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
-       for libdir in $libdirs; do
- 	if test -n "$finish_cmds"; then
- 	  # Do each command in the finish commands.
-@@ -1997,7 +2704,7 @@ func_mode_finish ()
- 	if test -n "$finish_eval"; then
- 	  # Do the single finish_eval.
- 	  eval cmds=\"$finish_eval\"
--	  $opt_dry_run || eval "$cmds" || admincmds="$admincmds
-+	  $opt_dry_run || eval "$cmds" || func_append admincmds "
-        $cmds"
- 	fi
-       done
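The new sysroot-stripping pass in func_mode_finish rewrites installed .la files so cross-compilation sysroot markers do not leak onto the target. As a rough before/after illustration on a hypothetical libfoo.la, using the same sed expression as above minus the optional $lt_sysroot substitution:

    # before:  dependency_libs=' -L=/usr/lib -lz'
    # after:   dependency_libs=' -L/usr/lib -lz'
    sed -e "s/\([ ']-[LR]\)=/\1/g; s/\([ ']\)=/\1/g" libfoo.la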
-@@ -2006,53 +2713,55 @@ func_mode_finish ()
-     # Exit here if they wanted silent mode.
-     $opt_silent && exit $EXIT_SUCCESS
- 
--    echo "----------------------------------------------------------------------"
--    echo "Libraries have been installed in:"
--    for libdir in $libdirs; do
--      $ECHO "   $libdir"
--    done
--    echo
--    echo "If you ever happen to want to link against installed libraries"
--    echo "in a given directory, LIBDIR, you must either use libtool, and"
--    echo "specify the full pathname of the library, or use the \`-LLIBDIR'"
--    echo "flag during linking and do at least one of the following:"
--    if test -n "$shlibpath_var"; then
--      echo "   - add LIBDIR to the \`$shlibpath_var' environment variable"
--      echo "     during execution"
--    fi
--    if test -n "$runpath_var"; then
--      echo "   - add LIBDIR to the \`$runpath_var' environment variable"
--      echo "     during linking"
--    fi
--    if test -n "$hardcode_libdir_flag_spec"; then
--      libdir=LIBDIR
--      eval "flag=\"$hardcode_libdir_flag_spec\""
-+    if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then
-+      echo "----------------------------------------------------------------------"
-+      echo "Libraries have been installed in:"
-+      for libdir in $libdirs; do
-+	$ECHO "   $libdir"
-+      done
-+      echo
-+      echo "If you ever happen to want to link against installed libraries"
-+      echo "in a given directory, LIBDIR, you must either use libtool, and"
-+      echo "specify the full pathname of the library, or use the \`-LLIBDIR'"
-+      echo "flag during linking and do at least one of the following:"
-+      if test -n "$shlibpath_var"; then
-+	echo "   - add LIBDIR to the \`$shlibpath_var' environment variable"
-+	echo "     during execution"
-+      fi
-+      if test -n "$runpath_var"; then
-+	echo "   - add LIBDIR to the \`$runpath_var' environment variable"
-+	echo "     during linking"
-+      fi
-+      if test -n "$hardcode_libdir_flag_spec"; then
-+	libdir=LIBDIR
-+	eval flag=\"$hardcode_libdir_flag_spec\"
- 
--      $ECHO "   - use the \`$flag' linker flag"
--    fi
--    if test -n "$admincmds"; then
--      $ECHO "   - have your system administrator run these commands:$admincmds"
--    fi
--    if test -f /etc/ld.so.conf; then
--      echo "   - have your system administrator add LIBDIR to \`/etc/ld.so.conf'"
--    fi
--    echo
-+	$ECHO "   - use the \`$flag' linker flag"
-+      fi
-+      if test -n "$admincmds"; then
-+	$ECHO "   - have your system administrator run these commands:$admincmds"
-+      fi
-+      if test -f /etc/ld.so.conf; then
-+	echo "   - have your system administrator add LIBDIR to \`/etc/ld.so.conf'"
-+      fi
-+      echo
- 
--    echo "See any operating system documentation about shared libraries for"
--    case $host in
--      solaris2.[6789]|solaris2.1[0-9])
--        echo "more information, such as the ld(1), crle(1) and ld.so(8) manual"
--	echo "pages."
--	;;
--      *)
--        echo "more information, such as the ld(1) and ld.so(8) manual pages."
--        ;;
--    esac
--    echo "----------------------------------------------------------------------"
-+      echo "See any operating system documentation about shared libraries for"
-+      case $host in
-+	solaris2.[6789]|solaris2.1[0-9])
-+	  echo "more information, such as the ld(1), crle(1) and ld.so(8) manual"
-+	  echo "pages."
-+	  ;;
-+	*)
-+	  echo "more information, such as the ld(1) and ld.so(8) manual pages."
-+	  ;;
-+      esac
-+      echo "----------------------------------------------------------------------"
-+    fi
-     exit $EXIT_SUCCESS
- }
- 
--test "$mode" = finish && func_mode_finish ${1+"$@"}
-+test "$opt_mode" = finish && func_mode_finish ${1+"$@"}
- 
- 
- # func_mode_install arg...
-@@ -2077,7 +2786,7 @@ func_mode_install ()
-     # The real first argument should be the name of the installation program.
-     # Aesthetically quote it.
-     func_quote_for_eval "$arg"
--    install_prog="$install_prog$func_quote_for_eval_result"
-+    func_append install_prog "$func_quote_for_eval_result"
-     install_shared_prog=$install_prog
-     case " $install_prog " in
-       *[\\\ /]cp\ *) install_cp=: ;;
-@@ -2097,7 +2806,7 @@ func_mode_install ()
-     do
-       arg2=
-       if test -n "$dest"; then
--	files="$files $dest"
-+	func_append files " $dest"
- 	dest=$arg
- 	continue
-       fi
-@@ -2135,11 +2844,11 @@ func_mode_install ()
- 
-       # Aesthetically quote the argument.
-       func_quote_for_eval "$arg"
--      install_prog="$install_prog $func_quote_for_eval_result"
-+      func_append install_prog " $func_quote_for_eval_result"
-       if test -n "$arg2"; then
- 	func_quote_for_eval "$arg2"
-       fi
--      install_shared_prog="$install_shared_prog $func_quote_for_eval_result"
-+      func_append install_shared_prog " $func_quote_for_eval_result"
-     done
- 
-     test -z "$install_prog" && \
-@@ -2151,7 +2860,7 @@ func_mode_install ()
-     if test -n "$install_override_mode" && $no_mode; then
-       if $install_cp; then :; else
- 	func_quote_for_eval "$install_override_mode"
--	install_shared_prog="$install_shared_prog -m $func_quote_for_eval_result"
-+	func_append install_shared_prog " -m $func_quote_for_eval_result"
-       fi
-     fi
- 
-@@ -2209,10 +2918,13 @@ func_mode_install ()
-       case $file in
-       *.$libext)
- 	# Do the static libraries later.
--	staticlibs="$staticlibs $file"
-+	func_append staticlibs " $file"
- 	;;
- 
-       *.la)
-+	func_resolve_sysroot "$file"
-+	file=$func_resolve_sysroot_result
-+
- 	# Check to see that this really is a libtool archive.
- 	func_lalib_unsafe_p "$file" \
- 	  || func_fatal_help "\`$file' is not a valid libtool archive"
-@@ -2226,23 +2938,30 @@ func_mode_install ()
- 	if test "X$destdir" = "X$libdir"; then
- 	  case "$current_libdirs " in
- 	  *" $libdir "*) ;;
--	  *) current_libdirs="$current_libdirs $libdir" ;;
-+	  *) func_append current_libdirs " $libdir" ;;
- 	  esac
- 	else
- 	  # Note the libdir as a future libdir.
- 	  case "$future_libdirs " in
- 	  *" $libdir "*) ;;
--	  *) future_libdirs="$future_libdirs $libdir" ;;
-+	  *) func_append future_libdirs " $libdir" ;;
- 	  esac
- 	fi
- 
- 	func_dirname "$file" "/" ""
- 	dir="$func_dirname_result"
--	dir="$dir$objdir"
-+	func_append dir "$objdir"
- 
- 	if test -n "$relink_command"; then
-+      # Strip any trailing slash from the destination.
-+      func_stripname '' '/' "$libdir"
-+      destlibdir=$func_stripname_result
-+
-+      func_stripname '' '/' "$destdir"
-+      s_destdir=$func_stripname_result
-+
- 	  # Determine the prefix the user has applied to our future dir.
--	  inst_prefix_dir=`$ECHO "$destdir" | $SED -e "s%$libdir\$%%"`
-+	  inst_prefix_dir=`$ECHO "X$s_destdir" | $Xsed -e "s%$destlibdir\$%%"`
- 
- 	  # Don't allow the user to place us outside of our expected
- 	  # location b/c this prevents finding dependent libraries that
-@@ -2315,7 +3034,7 @@ func_mode_install ()
- 	func_show_eval "$install_prog $instname $destdir/$name" 'exit $?'
- 
- 	# Maybe install the static library, too.
--	test -n "$old_library" && staticlibs="$staticlibs $dir/$old_library"
-+	test -n "$old_library" && func_append staticlibs " $dir/$old_library"
- 	;;
- 
-       *.lo)
-@@ -2503,7 +3222,7 @@ func_mode_install ()
-     test -n "$future_libdirs" && \
-       func_warning "remember to run \`$progname --finish$future_libdirs'"
- 
--    if test -n "$current_libdirs" && $opt_finish; then
-+    if test -n "$current_libdirs"; then
-       # Maybe just do a dry run.
-       $opt_dry_run && current_libdirs=" -n$current_libdirs"
-       exec_cmd='$SHELL $progpath $preserve_args --finish$current_libdirs'
-@@ -2512,7 +3231,7 @@ func_mode_install ()
-     fi
- }
- 
--test "$mode" = install && func_mode_install ${1+"$@"}
-+test "$opt_mode" = install && func_mode_install ${1+"$@"}
- 
- 
- # func_generate_dlsyms outputname originator pic_p
-@@ -2559,6 +3278,18 @@ extern \"C\" {
- #pragma GCC diagnostic ignored \"-Wstrict-prototypes\"
- #endif
- 
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- /* External symbol declarations for the compiler. */\
- "
- 
-@@ -2570,21 +3301,22 @@ extern \"C\" {
- 	  # Add our own program objects to the symbol list.
- 	  progfiles=`$ECHO "$objs$old_deplibs" | $SP2NL | $SED "$lo2o" | $NL2SP`
- 	  for progfile in $progfiles; do
--	    func_verbose "extracting global C symbols from \`$progfile'"
--	    $opt_dry_run || eval "$NM $progfile | $global_symbol_pipe >> '$nlist'"
-+	    func_to_tool_file "$progfile" func_convert_file_msys_to_w32
-+	    func_verbose "extracting global C symbols from \`$func_to_tool_file_result'"
-+	    $opt_dry_run || eval "$NM $func_to_tool_file_result | $global_symbol_pipe >> '$nlist'"
- 	  done
- 
- 	  if test -n "$exclude_expsyms"; then
- 	    $opt_dry_run || {
--	      $EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T
--	      $MV "$nlist"T "$nlist"
-+	      eval '$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T'
-+	      eval '$MV "$nlist"T "$nlist"'
- 	    }
- 	  fi
- 
- 	  if test -n "$export_symbols_regex"; then
- 	    $opt_dry_run || {
--	      $EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T
--	      $MV "$nlist"T "$nlist"
-+	      eval '$EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T'
-+	      eval '$MV "$nlist"T "$nlist"'
- 	    }
- 	  fi
- 
-@@ -2593,23 +3325,23 @@ extern \"C\" {
- 	    export_symbols="$output_objdir/$outputname.exp"
- 	    $opt_dry_run || {
- 	      $RM $export_symbols
--	      ${SED} -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' < "$nlist" > "$export_symbols"
-+	      eval "${SED} -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' "'< "$nlist" > "$export_symbols"'
- 	      case $host in
- 	      *cygwin* | *mingw* | *cegcc* )
--                echo EXPORTS > "$output_objdir/$outputname.def"
--                cat "$export_symbols" >> "$output_objdir/$outputname.def"
-+                eval "echo EXPORTS "'> "$output_objdir/$outputname.def"'
-+                eval 'cat "$export_symbols" >> "$output_objdir/$outputname.def"'
- 	        ;;
- 	      esac
- 	    }
- 	  else
- 	    $opt_dry_run || {
--	      ${SED} -e 's/\([].[*^$]\)/\\\1/g' -e 's/^/ /' -e 's/$/$/' < "$export_symbols" > "$output_objdir/$outputname.exp"
--	      $GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T
--	      $MV "$nlist"T "$nlist"
-+	      eval "${SED} -e 's/\([].[*^$]\)/\\\\\1/g' -e 's/^/ /' -e 's/$/$/'"' < "$export_symbols" > "$output_objdir/$outputname.exp"'
-+	      eval '$GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T'
-+	      eval '$MV "$nlist"T "$nlist"'
- 	      case $host in
- 	        *cygwin* | *mingw* | *cegcc* )
--	          echo EXPORTS > "$output_objdir/$outputname.def"
--	          cat "$nlist" >> "$output_objdir/$outputname.def"
-+	          eval "echo EXPORTS "'> "$output_objdir/$outputname.def"'
-+	          eval 'cat "$nlist" >> "$output_objdir/$outputname.def"'
- 	          ;;
- 	      esac
- 	    }
-@@ -2620,10 +3352,52 @@ extern \"C\" {
- 	  func_verbose "extracting global C symbols from \`$dlprefile'"
- 	  func_basename "$dlprefile"
- 	  name="$func_basename_result"
--	  $opt_dry_run || {
--	    $ECHO ": $name " >> "$nlist"
--	    eval "$NM $dlprefile 2>/dev/null | $global_symbol_pipe >> '$nlist'"
--	  }
-+          case $host in
-+	    *cygwin* | *mingw* | *cegcc* )
-+	      # if an import library, we need to obtain dlname
-+	      if func_win32_import_lib_p "$dlprefile"; then
-+	        func_tr_sh "$dlprefile"
-+	        eval "curr_lafile=\$libfile_$func_tr_sh_result"
-+	        dlprefile_dlbasename=""
-+	        if test -n "$curr_lafile" && func_lalib_p "$curr_lafile"; then
-+	          # Use subshell, to avoid clobbering current variable values
-+	          dlprefile_dlname=`source "$curr_lafile" && echo "$dlname"`
-+	          if test -n "$dlprefile_dlname" ; then
-+	            func_basename "$dlprefile_dlname"
-+	            dlprefile_dlbasename="$func_basename_result"
-+	          else
-+	            # no lafile. user explicitly requested -dlpreopen <import library>.
-+	            $sharedlib_from_linklib_cmd "$dlprefile"
-+	            dlprefile_dlbasename=$sharedlib_from_linklib_result
-+	          fi
-+	        fi
-+	        $opt_dry_run || {
-+	          if test -n "$dlprefile_dlbasename" ; then
-+	            eval '$ECHO ": $dlprefile_dlbasename" >> "$nlist"'
-+	          else
-+	            func_warning "Could not compute DLL name from $name"
-+	            eval '$ECHO ": $name " >> "$nlist"'
-+	          fi
-+	          func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
-+	          eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe |
-+	            $SED -e '/I __imp/d' -e 's/I __nm_/D /;s/_nm__//' >> '$nlist'"
-+	        }
-+	      else # not an import lib
-+	        $opt_dry_run || {
-+	          eval '$ECHO ": $name " >> "$nlist"'
-+	          func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
-+	          eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'"
-+	        }
-+	      fi
-+	    ;;
-+	    *)
-+	      $opt_dry_run || {
-+	        eval '$ECHO ": $name " >> "$nlist"'
-+	        func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32
-+	        eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'"
-+	      }
-+	    ;;
-+          esac
- 	done
- 
- 	$opt_dry_run || {
-@@ -2661,26 +3435,9 @@ typedef struct {
-   const char *name;
-   void *address;
- } lt_dlsymlist;
--"
--	  case $host in
--	  *cygwin* | *mingw* | *cegcc* )
--	    echo >> "$output_objdir/$my_dlsyms" "\
--/* DATA imports from DLLs on WIN32 con't be const, because
--   runtime relocations are performed -- see ld's documentation
--   on pseudo-relocs.  */"
--	    lt_dlsym_const= ;;
--	  *osf5*)
--	    echo >> "$output_objdir/$my_dlsyms" "\
--/* This system does not cope well with relocations in const data */"
--	    lt_dlsym_const= ;;
--	  *)
--	    lt_dlsym_const=const ;;
--	  esac
--
--	  echo >> "$output_objdir/$my_dlsyms" "\
--extern $lt_dlsym_const lt_dlsymlist
-+extern LT_DLSYM_CONST lt_dlsymlist
- lt_${my_prefix}_LTX_preloaded_symbols[];
--$lt_dlsym_const lt_dlsymlist
-+LT_DLSYM_CONST lt_dlsymlist
- lt_${my_prefix}_LTX_preloaded_symbols[] =
- {\
-   { \"$my_originator\", (void *) 0 },"
-@@ -2736,7 +3493,7 @@ static const void *lt_preloaded_setup() {
- 	for arg in $LTCFLAGS; do
- 	  case $arg in
- 	  -pie | -fpie | -fPIE) ;;
--	  *) symtab_cflags="$symtab_cflags $arg" ;;
-+	  *) func_append symtab_cflags " $arg" ;;
- 	  esac
- 	done
- 
-@@ -2796,9 +3553,11 @@ func_win32_libid ()
-     win32_libid_type="x86 archive import"
-     ;;
-   *ar\ archive*) # could be an import, or static
--    if $OBJDUMP -f "$1" | $SED -e '10q' 2>/dev/null |
--       $EGREP 'file format (pe-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then
--      win32_nmres=`$NM -f posix -A "$1" |
-+    # Keep the egrep pattern in sync with the one in _LT_CHECK_MAGIC_METHOD.
-+    if eval $OBJDUMP -f $1 | $SED -e '10q' 2>/dev/null |
-+       $EGREP 'file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then
-+      func_to_tool_file "$1" func_convert_file_msys_to_w32
-+      win32_nmres=`eval $NM -f posix -A \"$func_to_tool_file_result\" |
- 	$SED -n -e '
- 	    1,100{
- 		/ I /{
-@@ -2827,6 +3586,131 @@ func_win32_libid ()
-   $ECHO "$win32_libid_type"
- }
- 
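The magic check above widens the objdump pattern from pe-i386 to pei*-i386 (so the `pei-i386' spelling is also accepted) and routes the archive through func_to_tool_file before handing it to $NM. A stand-alone approximation of just the detection step, against a hypothetical libfoo.dll.a and assuming binutils objdump:

    objdump -f libfoo.dll.a | sed -e 10q |
      egrep 'file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' \
        >/dev/null && echo "Win32 archive (import or static)"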
-+# func_cygming_dll_for_implib ARG
-+#
-+# Platform-specific function to extract the
-+# name of the DLL associated with the specified
-+# import library ARG.
-+# Invoked by eval'ing the libtool variable
-+#    $sharedlib_from_linklib_cmd
-+# Result is available in the variable
-+#    $sharedlib_from_linklib_result
-+func_cygming_dll_for_implib ()
-+{
-+  $opt_debug
-+  sharedlib_from_linklib_result=`$DLLTOOL --identify-strict --identify "$1"`
-+}
-+
-+# func_cygming_dll_for_implib_fallback_core SECTION_NAME LIBNAMEs
-+#
-+# The is the core of a fallback implementation of a
-+# platform-specific function to extract the name of the
-+# DLL associated with the specified import library LIBNAME.
-+#
-+# SECTION_NAME is either .idata$6 or .idata$7, depending
-+# on the platform and compiler that created the implib.
-+#
-+# Echos the name of the DLL associated with the
-+# specified import library.
-+func_cygming_dll_for_implib_fallback_core ()
-+{
-+  $opt_debug
-+  match_literal=`$ECHO "$1" | $SED "$sed_make_literal_regex"`
-+  $OBJDUMP -s --section "$1" "$2" 2>/dev/null |
-+    $SED '/^Contents of section '"$match_literal"':/{
-+      # Place marker at beginning of archive member dllname section
-+      s/.*/====MARK====/
-+      p
-+      d
-+    }
-+    # These lines can sometimes be longer than 43 characters, but
-+    # are always uninteresting
-+    /:[	 ]*file format pe[i]\{,1\}-/d
-+    /^In archive [^:]*:/d
-+    # Ensure marker is printed
-+    /^====MARK====/p
-+    # Remove all lines with less than 43 characters
-+    /^.\{43\}/!d
-+    # From remaining lines, remove first 43 characters
-+    s/^.\{43\}//' |
-+    $SED -n '
-+      # Join marker and all lines until next marker into a single line
-+      /^====MARK====/ b para
-+      H
-+      $ b para
-+      b
-+      :para
-+      x
-+      s/\n//g
-+      # Remove the marker
-+      s/^====MARK====//
-+      # Remove trailing dots and whitespace
-+      s/[\. \t]*$//
-+      # Print
-+      /./p' |
-+    # we now have a list, one entry per line, of the stringified
-+    # contents of the appropriate section of all members of the
-+    # archive which possess that section. Heuristic: eliminate
-+    # all those which have a first or second character that is
-+    # a '.' (that is, objdump's representation of an unprintable
-+    # character.) This should work for all archives with less than
-+    # 0x302f exports -- but will fail for DLLs whose name actually
-+    # begins with a literal '.' or a single character followed by
-+    # a '.'.
-+    #
-+    # Of those that remain, print the first one.
-+    $SED -e '/^\./d;/^.\./d;q'
-+}
-+
-+# func_cygming_gnu_implib_p ARG
-+# This predicate returns with zero status (TRUE) if
-+# ARG is a GNU/binutils-style import library. Returns
-+# with nonzero status (FALSE) otherwise.
-+func_cygming_gnu_implib_p ()
-+{
-+  $opt_debug
-+  func_to_tool_file "$1" func_convert_file_msys_to_w32
-+  func_cygming_gnu_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $EGREP ' (_head_[A-Za-z0-9_]+_[ad]l*|[A-Za-z0-9_]+_[ad]l*_iname)$'`
-+  test -n "$func_cygming_gnu_implib_tmp"
-+}
-+
-+# func_cygming_ms_implib_p ARG
-+# This predicate returns with zero status (TRUE) if
-+# ARG is an MS-style import library. Returns
-+# with nonzero status (FALSE) otherwise.
-+func_cygming_ms_implib_p ()
-+{
-+  $opt_debug
-+  func_to_tool_file "$1" func_convert_file_msys_to_w32
-+  func_cygming_ms_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $GREP '_NULL_IMPORT_DESCRIPTOR'`
-+  test -n "$func_cygming_ms_implib_tmp"
-+}
-+
-+# func_cygming_dll_for_implib_fallback ARG
-+# Platform-specific function to extract the
-+# name of the DLL associated with the specified
-+# import library ARG.
-+#
-+# This fallback implementation is for use when $DLLTOOL
-+# does not support the --identify-strict option.
-+# Invoked by eval'ing the libtool variable
-+#    $sharedlib_from_linklib_cmd
-+# Result is available in the variable
-+#    $sharedlib_from_linklib_result
-+func_cygming_dll_for_implib_fallback ()
-+{
-+  $opt_debug
-+  if func_cygming_gnu_implib_p "$1" ; then
-+    # binutils import library
-+    sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$7' "$1"`
-+  elif func_cygming_ms_implib_p "$1" ; then
-+    # ms-generated import library
-+    sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$6' "$1"`
-+  else
-+    # unknown
-+    sharedlib_from_linklib_result=""
-+  fi
-+}
- 
- 
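Taken together, the helpers added above give libtool two ways to map a Win32 import library back to the DLL it refers to: ask dlltool directly, or fall back to scraping the relevant .idata section with objdump when --identify-strict is unsupported. Roughly, for a hypothetical import library libfoo.dll.a:

    # preferred path (the command func_cygming_dll_for_implib runs):
    dlltool --identify-strict --identify libfoo.dll.a   # prints e.g. libfoo.dll
    # fallback inspects the stringified section contents:
    #   objdump -s --section '.idata$7' libfoo.dll.a    # GNU/binutils implibs
    #   objdump -s --section '.idata$6' libfoo.dll.a    # MS-style implibs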
- # func_extract_an_archive dir oldlib
-@@ -2917,7 +3801,7 @@ func_extract_archives ()
- 	    darwin_file=
- 	    darwin_files=
- 	    for darwin_file in $darwin_filelist; do
--	      darwin_files=`find unfat-$$ -name $darwin_file -print | $NL2SP`
-+	      darwin_files=`find unfat-$$ -name $darwin_file -print | sort | $NL2SP`
- 	      $LIPO -create -output "$darwin_file" $darwin_files
- 	    done # $darwin_filelist
- 	    $RM -rf unfat-$$
-@@ -2932,7 +3816,7 @@ func_extract_archives ()
-         func_extract_an_archive "$my_xdir" "$my_xabs"
- 	;;
-       esac
--      my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | $NL2SP`
-+      my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | sort | $NL2SP`
-     done
- 
-     func_extract_archives_result="$my_oldobjs"
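The two `| sort' insertions above make the object lists produced by find deterministic, so extracting the same archives twice yields the same link order (which helps reproducible builds). A trimmed-down equivalent, with tr standing in for the $NL2SP substitution:

    # member order is now stable across filesystems and runs
    my_oldobjs=`find "$my_xdir" -name '*.o' -print -o -name '*.lo' -print | sort | tr '\n' ' '`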
-@@ -3014,7 +3898,110 @@ func_fallback_echo ()
- _LTECHO_EOF'
- }
-     ECHO=\"$qECHO\"
--  fi\
-+  fi
-+
-+# Very basic option parsing. These options are (a) specific to
-+# the libtool wrapper, (b) are identical between the wrapper
-+# /script/ and the wrapper /executable/ which is used only on
-+# windows platforms, and (c) all begin with the string "--lt-"
-+# (application programs are unlikely to have options which match
-+# this pattern).
-+#
-+# There are only two supported options: --lt-debug and
-+# --lt-dump-script. There is, deliberately, no --lt-help.
-+#
-+# The first argument to this parsing function should be the
-+# script's $0 value, followed by "$@".
-+lt_option_debug=
-+func_parse_lt_options ()
-+{
-+  lt_script_arg0=\$0
-+  shift
-+  for lt_opt
-+  do
-+    case \"\$lt_opt\" in
-+    --lt-debug) lt_option_debug=1 ;;
-+    --lt-dump-script)
-+        lt_dump_D=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%/[^/]*$%%'\`
-+        test \"X\$lt_dump_D\" = \"X\$lt_script_arg0\" && lt_dump_D=.
-+        lt_dump_F=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%^.*/%%'\`
-+        cat \"\$lt_dump_D/\$lt_dump_F\"
-+        exit 0
-+      ;;
-+    --lt-*)
-+        \$ECHO \"Unrecognized --lt- option: '\$lt_opt'\" 1>&2
-+        exit 1
-+      ;;
-+    esac
-+  done
-+
-+  # Print the debug banner immediately:
-+  if test -n \"\$lt_option_debug\"; then
-+    echo \"${outputname}:${output}:\${LINENO}: libtool wrapper (GNU $PACKAGE$TIMESTAMP) $VERSION\" 1>&2
-+  fi
-+}
-+
-+# Used when --lt-debug. Prints its arguments to stdout
-+# (redirection is the responsibility of the caller)
-+func_lt_dump_args ()
-+{
-+  lt_dump_args_N=1;
-+  for lt_arg
-+  do
-+    \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[\$lt_dump_args_N]: \$lt_arg\"
-+    lt_dump_args_N=\`expr \$lt_dump_args_N + 1\`
-+  done
-+}
-+
-+# Core function for launching the target application
-+func_exec_program_core ()
-+{
-+"
-+  case $host in
-+  # Backslashes separate directories on plain windows
-+  *-*-mingw | *-*-os2* | *-cegcc*)
-+    $ECHO "\
-+      if test -n \"\$lt_option_debug\"; then
-+        \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir\\\\\$program\" 1>&2
-+        func_lt_dump_args \${1+\"\$@\"} 1>&2
-+      fi
-+      exec \"\$progdir\\\\\$program\" \${1+\"\$@\"}
-+"
-+    ;;
-+
-+  *)
-+    $ECHO "\
-+      if test -n \"\$lt_option_debug\"; then
-+        \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir/\$program\" 1>&2
-+        func_lt_dump_args \${1+\"\$@\"} 1>&2
-+      fi
-+      exec \"\$progdir/\$program\" \${1+\"\$@\"}
-+"
-+    ;;
-+  esac
-+  $ECHO "\
-+      \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2
-+      exit 1
-+}
-+
-+# A function to encapsulate launching the target application
-+# Strips options in the --lt-* namespace from \$@ and
-+# launches target application with the remaining arguments.
-+func_exec_program ()
-+{
-+  for lt_wr_arg
-+  do
-+    case \$lt_wr_arg in
-+    --lt-*) ;;
-+    *) set x \"\$@\" \"\$lt_wr_arg\"; shift;;
-+    esac
-+    shift
-+  done
-+  func_exec_program_core \${1+\"\$@\"}
-+}
-+
-+  # Parse options
-+  func_parse_lt_options \"\$0\" \${1+\"\$@\"}
- 
-   # Find the directory that this script lives in.
-   thisdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*$%%'\`
-@@ -3078,7 +4065,7 @@ _LTECHO_EOF'
- 
-     # relink executable if necessary
-     if test -n \"\$relink_command\"; then
--      if relink_command_output=\`eval \"\$relink_command\" 2>&1\`; then :
-+      if relink_command_output=\`eval \$relink_command 2>&1\`; then :
-       else
- 	$ECHO \"\$relink_command_output\" >&2
- 	$RM \"\$progdir/\$file\"
-@@ -3102,6 +4089,18 @@ _LTECHO_EOF'
- 
-   if test -f \"\$progdir/\$program\"; then"
- 
-+	# fixup the dll searchpath if we need to.
-+	#
-+	# Fix the DLL searchpath if we need to.  Do this before prepending
-+	# to shlibpath, because on Windows, both are PATH and uninstalled
-+	# libraries must come first.
-+	if test -n "$dllsearchpath"; then
-+	  $ECHO "\
-+    # Add the dll search path components to the executable PATH
-+    PATH=$dllsearchpath:\$PATH
-+"
-+	fi
-+
- 	# Export our shlibpath_var if we have one.
- 	if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then
- 	  $ECHO "\
-@@ -3116,35 +4115,10 @@ _LTECHO_EOF'
- "
- 	fi
- 
--	# fixup the dll searchpath if we need to.
--	if test -n "$dllsearchpath"; then
--	  $ECHO "\
--    # Add the dll search path components to the executable PATH
--    PATH=$dllsearchpath:\$PATH
--"
--	fi
--
- 	$ECHO "\
-     if test \"\$libtool_execute_magic\" != \"$magic\"; then
-       # Run the actual program with our arguments.
--"
--	case $host in
--	# Backslashes separate directories on plain windows
--	*-*-mingw | *-*-os2* | *-cegcc*)
--	  $ECHO "\
--      exec \"\$progdir\\\\\$program\" \${1+\"\$@\"}
--"
--	  ;;
--
--	*)
--	  $ECHO "\
--      exec \"\$progdir/\$program\" \${1+\"\$@\"}
--"
--	  ;;
--	esac
--	$ECHO "\
--      \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2
--      exit 1
-+      func_exec_program \${1+\"\$@\"}
-     fi
-   else
-     # The program doesn't exist.
-@@ -3158,166 +4132,6 @@ fi\
- }
- 
- 
--# func_to_host_path arg
--#
--# Convert paths to host format when used with build tools.
--# Intended for use with "native" mingw (where libtool itself
--# is running under the msys shell), or in the following cross-
--# build environments:
--#    $build          $host
--#    mingw (msys)    mingw  [e.g. native]
--#    cygwin          mingw
--#    *nix + wine     mingw
--# where wine is equipped with the `winepath' executable.
--# In the native mingw case, the (msys) shell automatically
--# converts paths for any non-msys applications it launches,
--# but that facility isn't available from inside the cwrapper.
--# Similar accommodations are necessary for $host mingw and
--# $build cygwin.  Calling this function does no harm for other
--# $host/$build combinations not listed above.
--#
--# ARG is the path (on $build) that should be converted to
--# the proper representation for $host. The result is stored
--# in $func_to_host_path_result.
--func_to_host_path ()
--{
--  func_to_host_path_result="$1"
--  if test -n "$1"; then
--    case $host in
--      *mingw* )
--        lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
--        case $build in
--          *mingw* ) # actually, msys
--            # awkward: cmd appends spaces to result
--            func_to_host_path_result=`( cmd //c echo "$1" ) 2>/dev/null |
--              $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
--            ;;
--          *cygwin* )
--            func_to_host_path_result=`cygpath -w "$1" |
--	      $SED -e "$lt_sed_naive_backslashify"`
--            ;;
--          * )
--            # Unfortunately, winepath does not exit with a non-zero
--            # error code, so we are forced to check the contents of
--            # stdout. On the other hand, if the command is not
--            # found, the shell will set an exit code of 127 and print
--            # *an error message* to stdout. So we must check for both
--            # error code of zero AND non-empty stdout, which explains
--            # the odd construction:
--            func_to_host_path_tmp1=`winepath -w "$1" 2>/dev/null`
--            if test "$?" -eq 0 && test -n "${func_to_host_path_tmp1}"; then
--              func_to_host_path_result=`$ECHO "$func_to_host_path_tmp1" |
--                $SED -e "$lt_sed_naive_backslashify"`
--            else
--              # Allow warning below.
--              func_to_host_path_result=
--            fi
--            ;;
--        esac
--        if test -z "$func_to_host_path_result" ; then
--          func_error "Could not determine host path corresponding to"
--          func_error "  \`$1'"
--          func_error "Continuing, but uninstalled executables may not work."
--          # Fallback:
--          func_to_host_path_result="$1"
--        fi
--        ;;
--    esac
--  fi
--}
--# end: func_to_host_path
--
--# func_to_host_pathlist arg
--#
--# Convert pathlists to host format when used with build tools.
--# See func_to_host_path(), above. This function supports the
--# following $build/$host combinations (but does no harm for
--# combinations not listed here):
--#    $build          $host
--#    mingw (msys)    mingw  [e.g. native]
--#    cygwin          mingw
--#    *nix + wine     mingw
--#
--# Path separators are also converted from $build format to
--# $host format. If ARG begins or ends with a path separator
--# character, it is preserved (but converted to $host format)
--# on output.
--#
--# ARG is a pathlist (on $build) that should be converted to
--# the proper representation on $host. The result is stored
--# in $func_to_host_pathlist_result.
--func_to_host_pathlist ()
--{
--  func_to_host_pathlist_result="$1"
--  if test -n "$1"; then
--    case $host in
--      *mingw* )
--        lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g'
--        # Remove leading and trailing path separator characters from
--        # ARG. msys behavior is inconsistent here, cygpath turns them
--        # into '.;' and ';.', and winepath ignores them completely.
--	func_stripname : : "$1"
--        func_to_host_pathlist_tmp1=$func_stripname_result
--        case $build in
--          *mingw* ) # Actually, msys.
--            # Awkward: cmd appends spaces to result.
--            func_to_host_pathlist_result=`
--	      ( cmd //c echo "$func_to_host_pathlist_tmp1" ) 2>/dev/null |
--	      $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"`
--            ;;
--          *cygwin* )
--            func_to_host_pathlist_result=`cygpath -w -p "$func_to_host_pathlist_tmp1" |
--              $SED -e "$lt_sed_naive_backslashify"`
--            ;;
--          * )
--            # unfortunately, winepath doesn't convert pathlists
--            func_to_host_pathlist_result=""
--            func_to_host_pathlist_oldIFS=$IFS
--            IFS=:
--            for func_to_host_pathlist_f in $func_to_host_pathlist_tmp1 ; do
--              IFS=$func_to_host_pathlist_oldIFS
--              if test -n "$func_to_host_pathlist_f" ; then
--                func_to_host_path "$func_to_host_pathlist_f"
--                if test -n "$func_to_host_path_result" ; then
--                  if test -z "$func_to_host_pathlist_result" ; then
--                    func_to_host_pathlist_result="$func_to_host_path_result"
--                  else
--                    func_append func_to_host_pathlist_result ";$func_to_host_path_result"
--                  fi
--                fi
--              fi
--            done
--            IFS=$func_to_host_pathlist_oldIFS
--            ;;
--        esac
--        if test -z "$func_to_host_pathlist_result"; then
--          func_error "Could not determine the host path(s) corresponding to"
--          func_error "  \`$1'"
--          func_error "Continuing, but uninstalled executables may not work."
--          # Fallback. This may break if $1 contains DOS-style drive
--          # specifications. The fix is not to complicate the expression
--          # below, but for the user to provide a working wine installation
--          # with winepath so that path translation in the cross-to-mingw
--          # case works properly.
--          lt_replace_pathsep_nix_to_dos="s|:|;|g"
--          func_to_host_pathlist_result=`echo "$func_to_host_pathlist_tmp1" |\
--            $SED -e "$lt_replace_pathsep_nix_to_dos"`
--        fi
--        # Now, add the leading and trailing path separators back
--        case "$1" in
--          :* ) func_to_host_pathlist_result=";$func_to_host_pathlist_result"
--            ;;
--        esac
--        case "$1" in
--          *: ) func_append func_to_host_pathlist_result ";"
--            ;;
--        esac
--        ;;
--    esac
--  fi
--}
--# end: func_to_host_pathlist
--
- # func_emit_cwrapperexe_src
- # emit the source code for a wrapper executable on stdout
- # Must ONLY be called from within func_mode_link because
-@@ -3334,10 +4148,6 @@ func_emit_cwrapperexe_src ()
- 
-    This wrapper executable should never be moved out of the build directory.
-    If it is, it will not operate correctly.
--
--   Currently, it simply execs the wrapper *script* "$SHELL $output",
--   but could eventually absorb all of the scripts functionality and
--   exec $objdir/$outputname directly.
- */
- EOF
- 	    cat <<"EOF"
-@@ -3462,22 +4272,13 @@ int setenv (const char *, const char *, int);
-   if (stale) { free ((void *) stale); stale = 0; } \
- } while (0)
- 
--#undef LTWRAPPER_DEBUGPRINTF
--#if defined LT_DEBUGWRAPPER
--# define LTWRAPPER_DEBUGPRINTF(args) ltwrapper_debugprintf args
--static void
--ltwrapper_debugprintf (const char *fmt, ...)
--{
--    va_list args;
--    va_start (args, fmt);
--    (void) vfprintf (stderr, fmt, args);
--    va_end (args);
--}
-+#if defined(LT_DEBUGWRAPPER)
-+static int lt_debug = 1;
- #else
--# define LTWRAPPER_DEBUGPRINTF(args)
-+static int lt_debug = 0;
- #endif
- 
--const char *program_name = NULL;
-+const char *program_name = "libtool-wrapper"; /* in case xstrdup fails */
- 
- void *xmalloc (size_t num);
- char *xstrdup (const char *string);
-@@ -3487,7 +4288,10 @@ char *chase_symlinks (const char *pathspec);
- int make_executable (const char *path);
- int check_executable (const char *path);
- char *strendzap (char *str, const char *pat);
--void lt_fatal (const char *message, ...);
-+void lt_debugprintf (const char *file, int line, const char *fmt, ...);
-+void lt_fatal (const char *file, int line, const char *message, ...);
-+static const char *nonnull (const char *s);
-+static const char *nonempty (const char *s);
- void lt_setenv (const char *name, const char *value);
- char *lt_extend_str (const char *orig_value, const char *add, int to_end);
- void lt_update_exe_path (const char *name, const char *value);
-@@ -3497,14 +4301,14 @@ void lt_dump_script (FILE *f);
- EOF
- 
- 	    cat <<EOF
--const char * MAGIC_EXE = "$magic_exe";
-+volatile const char * MAGIC_EXE = "$magic_exe";
- const char * LIB_PATH_VARNAME = "$shlibpath_var";
- EOF
- 
- 	    if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then
--              func_to_host_pathlist "$temp_rpath"
-+              func_to_host_path "$temp_rpath"
- 	      cat <<EOF
--const char * LIB_PATH_VALUE   = "$func_to_host_pathlist_result";
-+const char * LIB_PATH_VALUE   = "$func_to_host_path_result";
- EOF
- 	    else
- 	      cat <<"EOF"
-@@ -3513,10 +4317,10 @@ EOF
- 	    fi
- 
- 	    if test -n "$dllsearchpath"; then
--              func_to_host_pathlist "$dllsearchpath:"
-+              func_to_host_path "$dllsearchpath:"
- 	      cat <<EOF
- const char * EXE_PATH_VARNAME = "PATH";
--const char * EXE_PATH_VALUE   = "$func_to_host_pathlist_result";
-+const char * EXE_PATH_VALUE   = "$func_to_host_path_result";
- EOF
- 	    else
- 	      cat <<"EOF"
-@@ -3539,12 +4343,10 @@ EOF
- 	    cat <<"EOF"
- 
- #define LTWRAPPER_OPTION_PREFIX         "--lt-"
--#define LTWRAPPER_OPTION_PREFIX_LENGTH  5
- 
--static const size_t opt_prefix_len         = LTWRAPPER_OPTION_PREFIX_LENGTH;
- static const char *ltwrapper_option_prefix = LTWRAPPER_OPTION_PREFIX;
--
- static const char *dumpscript_opt       = LTWRAPPER_OPTION_PREFIX "dump-script";
-+static const char *debug_opt            = LTWRAPPER_OPTION_PREFIX "debug";
- 
- int
- main (int argc, char *argv[])
-@@ -3561,10 +4363,13 @@ main (int argc, char *argv[])
-   int i;
- 
-   program_name = (char *) xstrdup (base_name (argv[0]));
--  LTWRAPPER_DEBUGPRINTF (("(main) argv[0]      : %s\n", argv[0]));
--  LTWRAPPER_DEBUGPRINTF (("(main) program_name : %s\n", program_name));
-+  newargz = XMALLOC (char *, argc + 1);
- 
--  /* very simple arg parsing; don't want to rely on getopt */
-+  /* very simple arg parsing; don't want to rely on getopt
-+   * also, copy all non cwrapper options to newargz, except
-+   * argz[0], which is handled differently
-+   */
-+  newargc=0;
-   for (i = 1; i < argc; i++)
-     {
-       if (strcmp (argv[i], dumpscript_opt) == 0)
-@@ -3581,21 +4386,54 @@ EOF
- 	  lt_dump_script (stdout);
- 	  return 0;
- 	}
-+      if (strcmp (argv[i], debug_opt) == 0)
-+	{
-+          lt_debug = 1;
-+          continue;
-+	}
-+      if (strcmp (argv[i], ltwrapper_option_prefix) == 0)
-+        {
-+          /* however, if there is an option in the LTWRAPPER_OPTION_PREFIX
-+             namespace, but it is not one of the ones we know about and
-+             have already dealt with, above (inluding dump-script), then
-+             report an error. Otherwise, targets might begin to believe
-+             they are allowed to use options in the LTWRAPPER_OPTION_PREFIX
-+             namespace. The first time any user complains about this, we'll
-+             need to make LTWRAPPER_OPTION_PREFIX a configure-time option
-+             or a configure.ac-settable value.
-+           */
-+          lt_fatal (__FILE__, __LINE__,
-+		    "unrecognized %s option: '%s'",
-+                    ltwrapper_option_prefix, argv[i]);
-+        }
-+      /* otherwise ... */
-+      newargz[++newargc] = xstrdup (argv[i]);
-     }
-+  newargz[++newargc] = NULL;
-+
-+EOF
-+	    cat <<EOF
-+  /* The GNU banner must be the first non-error debug message */
-+  lt_debugprintf (__FILE__, __LINE__, "libtool wrapper (GNU $PACKAGE$TIMESTAMP) $VERSION\n");
-+EOF
-+	    cat <<"EOF"
-+  lt_debugprintf (__FILE__, __LINE__, "(main) argv[0]: %s\n", argv[0]);
-+  lt_debugprintf (__FILE__, __LINE__, "(main) program_name: %s\n", program_name);
- 
--  newargz = XMALLOC (char *, argc + 1);
-   tmp_pathspec = find_executable (argv[0]);
-   if (tmp_pathspec == NULL)
--    lt_fatal ("Couldn't find %s", argv[0]);
--  LTWRAPPER_DEBUGPRINTF (("(main) found exe (before symlink chase) at : %s\n",
--			  tmp_pathspec));
-+    lt_fatal (__FILE__, __LINE__, "couldn't find %s", argv[0]);
-+  lt_debugprintf (__FILE__, __LINE__,
-+                  "(main) found exe (before symlink chase) at: %s\n",
-+		  tmp_pathspec);
- 
-   actual_cwrapper_path = chase_symlinks (tmp_pathspec);
--  LTWRAPPER_DEBUGPRINTF (("(main) found exe (after symlink chase) at : %s\n",
--			  actual_cwrapper_path));
-+  lt_debugprintf (__FILE__, __LINE__,
-+                  "(main) found exe (after symlink chase) at: %s\n",
-+		  actual_cwrapper_path);
-   XFREE (tmp_pathspec);
- 
--  actual_cwrapper_name = xstrdup( base_name (actual_cwrapper_path));
-+  actual_cwrapper_name = xstrdup (base_name (actual_cwrapper_path));
-   strendzap (actual_cwrapper_path, actual_cwrapper_name);
- 
-   /* wrapper name transforms */
-@@ -3613,8 +4451,9 @@ EOF
-   target_name = tmp_pathspec;
-   tmp_pathspec = 0;
- 
--  LTWRAPPER_DEBUGPRINTF (("(main) libtool target name: %s\n",
--			  target_name));
-+  lt_debugprintf (__FILE__, __LINE__,
-+		  "(main) libtool target name: %s\n",
-+		  target_name);
- EOF
- 
- 	    cat <<EOF
-@@ -3664,35 +4503,19 @@ EOF
- 
-   lt_setenv ("BIN_SH", "xpg4"); /* for Tru64 */
-   lt_setenv ("DUALCASE", "1");  /* for MSK sh */
--  lt_update_lib_path (LIB_PATH_VARNAME, LIB_PATH_VALUE);
-+  /* Update the DLL searchpath.  EXE_PATH_VALUE ($dllsearchpath) must
-+     be prepended before (that is, appear after) LIB_PATH_VALUE ($temp_rpath)
-+     because on Windows, both *_VARNAMEs are PATH but uninstalled
-+     libraries must come first. */
-   lt_update_exe_path (EXE_PATH_VARNAME, EXE_PATH_VALUE);
-+  lt_update_lib_path (LIB_PATH_VARNAME, LIB_PATH_VALUE);
- 
--  newargc=0;
--  for (i = 1; i < argc; i++)
--    {
--      if (strncmp (argv[i], ltwrapper_option_prefix, opt_prefix_len) == 0)
--        {
--          /* however, if there is an option in the LTWRAPPER_OPTION_PREFIX
--             namespace, but it is not one of the ones we know about and
--             have already dealt with, above (inluding dump-script), then
--             report an error. Otherwise, targets might begin to believe
--             they are allowed to use options in the LTWRAPPER_OPTION_PREFIX
--             namespace. The first time any user complains about this, we'll
--             need to make LTWRAPPER_OPTION_PREFIX a configure-time option
--             or a configure.ac-settable value.
--           */
--          lt_fatal ("Unrecognized option in %s namespace: '%s'",
--                    ltwrapper_option_prefix, argv[i]);
--        }
--      /* otherwise ... */
--      newargz[++newargc] = xstrdup (argv[i]);
--    }
--  newargz[++newargc] = NULL;
--
--  LTWRAPPER_DEBUGPRINTF     (("(main) lt_argv_zero : %s\n", (lt_argv_zero ? lt_argv_zero : "<NULL>")));
-+  lt_debugprintf (__FILE__, __LINE__, "(main) lt_argv_zero: %s\n",
-+		  nonnull (lt_argv_zero));
-   for (i = 0; i < newargc; i++)
-     {
--      LTWRAPPER_DEBUGPRINTF (("(main) newargz[%d]   : %s\n", i, (newargz[i] ? newargz[i] : "<NULL>")));
-+      lt_debugprintf (__FILE__, __LINE__, "(main) newargz[%d]: %s\n",
-+		      i, nonnull (newargz[i]));
-     }
- 
- EOF
-@@ -3706,7 +4529,9 @@ EOF
-   if (rval == -1)
-     {
-       /* failed to start process */
--      LTWRAPPER_DEBUGPRINTF (("(main) failed to launch target \"%s\": errno = %d\n", lt_argv_zero, errno));
-+      lt_debugprintf (__FILE__, __LINE__,
-+		      "(main) failed to launch target \"%s\": %s\n",
-+		      lt_argv_zero, nonnull (strerror (errno)));
-       return 127;
-     }
-   return rval;
-@@ -3728,7 +4553,7 @@ xmalloc (size_t num)
- {
-   void *p = (void *) malloc (num);
-   if (!p)
--    lt_fatal ("Memory exhausted");
-+    lt_fatal (__FILE__, __LINE__, "memory exhausted");
- 
-   return p;
- }
-@@ -3762,8 +4587,8 @@ check_executable (const char *path)
- {
-   struct stat st;
- 
--  LTWRAPPER_DEBUGPRINTF (("(check_executable)  : %s\n",
--			  path ? (*path ? path : "EMPTY!") : "NULL!"));
-+  lt_debugprintf (__FILE__, __LINE__, "(check_executable): %s\n",
-+                  nonempty (path));
-   if ((!path) || (!*path))
-     return 0;
- 
-@@ -3780,8 +4605,8 @@ make_executable (const char *path)
-   int rval = 0;
-   struct stat st;
- 
--  LTWRAPPER_DEBUGPRINTF (("(make_executable)   : %s\n",
--			  path ? (*path ? path : "EMPTY!") : "NULL!"));
-+  lt_debugprintf (__FILE__, __LINE__, "(make_executable): %s\n",
-+                  nonempty (path));
-   if ((!path) || (!*path))
-     return 0;
- 
-@@ -3807,8 +4632,8 @@ find_executable (const char *wrapper)
-   int tmp_len;
-   char *concat_name;
- 
--  LTWRAPPER_DEBUGPRINTF (("(find_executable)   : %s\n",
--			  wrapper ? (*wrapper ? wrapper : "EMPTY!") : "NULL!"));
-+  lt_debugprintf (__FILE__, __LINE__, "(find_executable): %s\n",
-+                  nonempty (wrapper));
- 
-   if ((wrapper == NULL) || (*wrapper == '\0'))
-     return NULL;
-@@ -3861,7 +4686,8 @@ find_executable (const char *wrapper)
- 		{
- 		  /* empty path: current directory */
- 		  if (getcwd (tmp, LT_PATHMAX) == NULL)
--		    lt_fatal ("getcwd failed");
-+		    lt_fatal (__FILE__, __LINE__, "getcwd failed: %s",
-+                              nonnull (strerror (errno)));
- 		  tmp_len = strlen (tmp);
- 		  concat_name =
- 		    XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1);
-@@ -3886,7 +4712,8 @@ find_executable (const char *wrapper)
-     }
-   /* Relative path | not found in path: prepend cwd */
-   if (getcwd (tmp, LT_PATHMAX) == NULL)
--    lt_fatal ("getcwd failed");
-+    lt_fatal (__FILE__, __LINE__, "getcwd failed: %s",
-+              nonnull (strerror (errno)));
-   tmp_len = strlen (tmp);
-   concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1);
-   memcpy (concat_name, tmp, tmp_len);
-@@ -3912,8 +4739,9 @@ chase_symlinks (const char *pathspec)
-   int has_symlinks = 0;
-   while (strlen (tmp_pathspec) && !has_symlinks)
-     {
--      LTWRAPPER_DEBUGPRINTF (("checking path component for symlinks: %s\n",
--			      tmp_pathspec));
-+      lt_debugprintf (__FILE__, __LINE__,
-+		      "checking path component for symlinks: %s\n",
-+		      tmp_pathspec);
-       if (lstat (tmp_pathspec, &s) == 0)
- 	{
- 	  if (S_ISLNK (s.st_mode) != 0)
-@@ -3935,8 +4763,9 @@ chase_symlinks (const char *pathspec)
- 	}
-       else
- 	{
--	  char *errstr = strerror (errno);
--	  lt_fatal ("Error accessing file %s (%s)", tmp_pathspec, errstr);
-+	  lt_fatal (__FILE__, __LINE__,
-+		    "error accessing file \"%s\": %s",
-+		    tmp_pathspec, nonnull (strerror (errno)));
- 	}
-     }
-   XFREE (tmp_pathspec);
-@@ -3949,7 +4778,8 @@ chase_symlinks (const char *pathspec)
-   tmp_pathspec = realpath (pathspec, buf);
-   if (tmp_pathspec == 0)
-     {
--      lt_fatal ("Could not follow symlinks for %s", pathspec);
-+      lt_fatal (__FILE__, __LINE__,
-+		"could not follow symlinks for %s", pathspec);
-     }
-   return xstrdup (tmp_pathspec);
- #endif
-@@ -3975,11 +4805,25 @@ strendzap (char *str, const char *pat)
-   return str;
- }
- 
-+void
-+lt_debugprintf (const char *file, int line, const char *fmt, ...)
-+{
-+  va_list args;
-+  if (lt_debug)
-+    {
-+      (void) fprintf (stderr, "%s:%s:%d: ", program_name, file, line);
-+      va_start (args, fmt);
-+      (void) vfprintf (stderr, fmt, args);
-+      va_end (args);
-+    }
-+}
-+
- static void
--lt_error_core (int exit_status, const char *mode,
-+lt_error_core (int exit_status, const char *file,
-+	       int line, const char *mode,
- 	       const char *message, va_list ap)
- {
--  fprintf (stderr, "%s: %s: ", program_name, mode);
-+  fprintf (stderr, "%s:%s:%d: %s: ", program_name, file, line, mode);
-   vfprintf (stderr, message, ap);
-   fprintf (stderr, ".\n");
- 
-@@ -3988,20 +4832,32 @@ lt_error_core (int exit_status, const char *mode,
- }
- 
- void
--lt_fatal (const char *message, ...)
-+lt_fatal (const char *file, int line, const char *message, ...)
- {
-   va_list ap;
-   va_start (ap, message);
--  lt_error_core (EXIT_FAILURE, "FATAL", message, ap);
-+  lt_error_core (EXIT_FAILURE, file, line, "FATAL", message, ap);
-   va_end (ap);
- }
- 
-+static const char *
-+nonnull (const char *s)
-+{
-+  return s ? s : "(null)";
-+}
-+
-+static const char *
-+nonempty (const char *s)
-+{
-+  return (s && !*s) ? "(empty)" : nonnull (s);
-+}
-+
- void
- lt_setenv (const char *name, const char *value)
- {
--  LTWRAPPER_DEBUGPRINTF (("(lt_setenv) setting '%s' to '%s'\n",
--                          (name ? name : "<NULL>"),
--                          (value ? value : "<NULL>")));
-+  lt_debugprintf (__FILE__, __LINE__,
-+		  "(lt_setenv) setting '%s' to '%s'\n",
-+                  nonnull (name), nonnull (value));
-   {
- #ifdef HAVE_SETENV
-     /* always make a copy, for consistency with !HAVE_SETENV */
-@@ -4049,9 +4905,9 @@ lt_extend_str (const char *orig_value, const char *add, int to_end)
- void
- lt_update_exe_path (const char *name, const char *value)
- {
--  LTWRAPPER_DEBUGPRINTF (("(lt_update_exe_path) modifying '%s' by prepending '%s'\n",
--                          (name ? name : "<NULL>"),
--                          (value ? value : "<NULL>")));
-+  lt_debugprintf (__FILE__, __LINE__,
-+		  "(lt_update_exe_path) modifying '%s' by prepending '%s'\n",
-+                  nonnull (name), nonnull (value));
- 
-   if (name && *name && value && *value)
-     {
-@@ -4070,9 +4926,9 @@ lt_update_exe_path (const char *name, const char *value)
- void
- lt_update_lib_path (const char *name, const char *value)
- {
--  LTWRAPPER_DEBUGPRINTF (("(lt_update_lib_path) modifying '%s' by prepending '%s'\n",
--                          (name ? name : "<NULL>"),
--                          (value ? value : "<NULL>")));
-+  lt_debugprintf (__FILE__, __LINE__,
-+		  "(lt_update_lib_path) modifying '%s' by prepending '%s'\n",
-+                  nonnull (name), nonnull (value));
- 
-   if (name && *name && value && *value)
-     {
-@@ -4222,7 +5078,7 @@ EOF
- func_win32_import_lib_p ()
- {
-     $opt_debug
--    case `eval "$file_magic_cmd \"\$1\" 2>/dev/null" | $SED -e 10q` in
-+    case `eval $file_magic_cmd \"\$1\" 2>/dev/null | $SED -e 10q` in
-     *import*) : ;;
-     *) false ;;
-     esac
-@@ -4401,9 +5257,9 @@ func_mode_link ()
- 	    ;;
- 	  *)
- 	    if test "$prev" = dlfiles; then
--	      dlfiles="$dlfiles $arg"
-+	      func_append dlfiles " $arg"
- 	    else
--	      dlprefiles="$dlprefiles $arg"
-+	      func_append dlprefiles " $arg"
- 	    fi
- 	    prev=
- 	    continue
-@@ -4427,7 +5283,7 @@ func_mode_link ()
- 	    *-*-darwin*)
- 	      case "$deplibs " in
- 		*" $qarg.ltframework "*) ;;
--		*) deplibs="$deplibs $qarg.ltframework" # this is fixed later
-+		*) func_append deplibs " $qarg.ltframework" # this is fixed later
- 		   ;;
- 	      esac
- 	      ;;
-@@ -4446,7 +5302,7 @@ func_mode_link ()
- 	    moreargs=
- 	    for fil in `cat "$save_arg"`
- 	    do
--#	      moreargs="$moreargs $fil"
-+#	      func_append moreargs " $fil"
- 	      arg=$fil
- 	      # A libtool-controlled object.
- 
-@@ -4475,7 +5331,7 @@ func_mode_link ()
- 
- 		  if test "$prev" = dlfiles; then
- 		    if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then
--		      dlfiles="$dlfiles $pic_object"
-+		      func_append dlfiles " $pic_object"
- 		      prev=
- 		      continue
- 		    else
-@@ -4487,7 +5343,7 @@ func_mode_link ()
- 		  # CHECK ME:  I think I busted this.  -Ossama
- 		  if test "$prev" = dlprefiles; then
- 		    # Preload the old-style object.
--		    dlprefiles="$dlprefiles $pic_object"
-+		    func_append dlprefiles " $pic_object"
- 		    prev=
- 		  fi
- 
-@@ -4557,12 +5413,12 @@ func_mode_link ()
- 	  if test "$prev" = rpath; then
- 	    case "$rpath " in
- 	    *" $arg "*) ;;
--	    *) rpath="$rpath $arg" ;;
-+	    *) func_append rpath " $arg" ;;
- 	    esac
- 	  else
- 	    case "$xrpath " in
- 	    *" $arg "*) ;;
--	    *) xrpath="$xrpath $arg" ;;
-+	    *) func_append xrpath " $arg" ;;
- 	    esac
- 	  fi
- 	  prev=
-@@ -4574,28 +5430,28 @@ func_mode_link ()
- 	  continue
- 	  ;;
- 	weak)
--	  weak_libs="$weak_libs $arg"
-+	  func_append weak_libs " $arg"
- 	  prev=
- 	  continue
- 	  ;;
- 	xcclinker)
--	  linker_flags="$linker_flags $qarg"
--	  compiler_flags="$compiler_flags $qarg"
-+	  func_append linker_flags " $qarg"
-+	  func_append compiler_flags " $qarg"
- 	  prev=
- 	  func_append compile_command " $qarg"
- 	  func_append finalize_command " $qarg"
- 	  continue
- 	  ;;
- 	xcompiler)
--	  compiler_flags="$compiler_flags $qarg"
-+	  func_append compiler_flags " $qarg"
- 	  prev=
- 	  func_append compile_command " $qarg"
- 	  func_append finalize_command " $qarg"
- 	  continue
- 	  ;;
- 	xlinker)
--	  linker_flags="$linker_flags $qarg"
--	  compiler_flags="$compiler_flags $wl$qarg"
-+	  func_append linker_flags " $qarg"
-+	  func_append compiler_flags " $wl$qarg"
- 	  prev=
- 	  func_append compile_command " $wl$qarg"
- 	  func_append finalize_command " $wl$qarg"
-@@ -4686,15 +5542,16 @@ func_mode_link ()
- 	;;
- 
-       -L*)
--	func_stripname '-L' '' "$arg"
--	dir=$func_stripname_result
--	if test -z "$dir"; then
-+	func_stripname "-L" '' "$arg"
-+	if test -z "$func_stripname_result"; then
- 	  if test "$#" -gt 0; then
- 	    func_fatal_error "require no space between \`-L' and \`$1'"
- 	  else
- 	    func_fatal_error "need path for \`-L' option"
- 	  fi
- 	fi
-+	func_resolve_sysroot "$func_stripname_result"
-+	dir=$func_resolve_sysroot_result
- 	# We need an absolute path.
- 	case $dir in
- 	[\\/]* | [A-Za-z]:[\\/]*) ;;
-@@ -4706,10 +5563,16 @@ func_mode_link ()
- 	  ;;
- 	esac
- 	case "$deplibs " in
--	*" -L$dir "*) ;;
-+	*" -L$dir "* | *" $arg "*)
-+	  # Will only happen for absolute or sysroot arguments
-+	  ;;
- 	*)
--	  deplibs="$deplibs -L$dir"
--	  lib_search_path="$lib_search_path $dir"
-+	  # Preserve sysroot, but never include relative directories
-+	  case $dir in
-+	    [\\/]* | [A-Za-z]:[\\/]* | =*) func_append deplibs " $arg" ;;
-+	    *) func_append deplibs " -L$dir" ;;
-+	  esac
-+	  func_append lib_search_path " $dir"
- 	  ;;
- 	esac
- 	case $host in
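Several hunks here route library paths through `func_resolve_sysroot`/`func_replace_sysroot` before appending them; the convention the patch relies on is that a leading `=` marks a path to be re-rooted under `$lt_sysroot`. A small illustrative sketch of that behaviour (hypothetical `resolve_sysroot_demo`, not the shipped helper):

    # A leading '=' means "relative to the configured sysroot"; anything else
    # passes through unchanged.  $lt_sysroot is assumed to come from configure.
    resolve_sysroot_demo ()
    {
      case $1 in
        =*) printf '%s\n' "$lt_sysroot${1#=}" ;;
        *)  printf '%s\n' "$1" ;;
      esac
    }

    lt_sysroot=/opt/sdk/sysroots/armv7a
    resolve_sysroot_demo =/usr/lib/libz.la   # -> /opt/sdk/sysroots/armv7a/usr/lib/libz.la
    resolve_sysroot_demo /usr/lib/libz.la    # -> /usr/lib/libz.la (unchanged)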
-@@ -4718,12 +5581,12 @@ func_mode_link ()
- 	  case :$dllsearchpath: in
- 	  *":$dir:"*) ;;
- 	  ::) dllsearchpath=$dir;;
--	  *) dllsearchpath="$dllsearchpath:$dir";;
-+	  *) func_append dllsearchpath ":$dir";;
- 	  esac
- 	  case :$dllsearchpath: in
- 	  *":$testbindir:"*) ;;
- 	  ::) dllsearchpath=$testbindir;;
--	  *) dllsearchpath="$dllsearchpath:$testbindir";;
-+	  *) func_append dllsearchpath ":$testbindir";;
- 	  esac
- 	  ;;
- 	esac
-@@ -4747,7 +5610,7 @@ func_mode_link ()
- 	    ;;
- 	  *-*-rhapsody* | *-*-darwin1.[012])
- 	    # Rhapsody C and math libraries are in the System framework
--	    deplibs="$deplibs System.ltframework"
-+	    func_append deplibs " System.ltframework"
- 	    continue
- 	    ;;
- 	  *-*-sco3.2v5* | *-*-sco5v6*)
-@@ -4758,9 +5621,6 @@ func_mode_link ()
- 	    # Compiler inserts libc in the correct place for threads to work
- 	    test "X$arg" = "X-lc" && continue
- 	    ;;
--	  *-*-linux*)
--	    test "X$arg" = "X-lc" && continue
--	    ;;
- 	  esac
- 	elif test "X$arg" = "X-lc_r"; then
- 	 case $host in
-@@ -4770,7 +5630,7 @@ func_mode_link ()
- 	   ;;
- 	 esac
- 	fi
--	deplibs="$deplibs $arg"
-+	func_append deplibs " $arg"
- 	continue
- 	;;
- 
-@@ -4782,8 +5642,8 @@ func_mode_link ()
-       # Tru64 UNIX uses -model [arg] to determine the layout of C++
-       # classes, name mangling, and exception handling.
-       # Darwin uses the -arch flag to determine output architecture.
--      -model|-arch|-isysroot)
--	compiler_flags="$compiler_flags $arg"
-+      -model|-arch|-isysroot|--sysroot)
-+	func_append compiler_flags " $arg"
- 	func_append compile_command " $arg"
- 	func_append finalize_command " $arg"
- 	prev=xcompiler
-@@ -4791,12 +5651,12 @@ func_mode_link ()
- 	;;
- 
-       -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe|-threads)
--	compiler_flags="$compiler_flags $arg"
-+	func_append compiler_flags " $arg"
- 	func_append compile_command " $arg"
- 	func_append finalize_command " $arg"
- 	case "$new_inherited_linker_flags " in
- 	    *" $arg "*) ;;
--	    * ) new_inherited_linker_flags="$new_inherited_linker_flags $arg" ;;
-+	    * ) func_append new_inherited_linker_flags " $arg" ;;
- 	esac
- 	continue
- 	;;
-@@ -4863,13 +5723,17 @@ func_mode_link ()
- 	# We need an absolute path.
- 	case $dir in
- 	[\\/]* | [A-Za-z]:[\\/]*) ;;
-+	=*)
-+	  func_stripname '=' '' "$dir"
-+	  dir=$lt_sysroot$func_stripname_result
-+	  ;;
- 	*)
- 	  func_fatal_error "only absolute run-paths are allowed"
- 	  ;;
- 	esac
- 	case "$xrpath " in
- 	*" $dir "*) ;;
--	*) xrpath="$xrpath $dir" ;;
-+	*) func_append xrpath " $dir" ;;
- 	esac
- 	continue
- 	;;
-@@ -4922,8 +5786,8 @@ func_mode_link ()
- 	for flag in $args; do
- 	  IFS="$save_ifs"
-           func_quote_for_eval "$flag"
--	  arg="$arg $func_quote_for_eval_result"
--	  compiler_flags="$compiler_flags $func_quote_for_eval_result"
-+	  func_append arg " $func_quote_for_eval_result"
-+	  func_append compiler_flags " $func_quote_for_eval_result"
- 	done
- 	IFS="$save_ifs"
- 	func_stripname ' ' '' "$arg"
-@@ -4938,9 +5802,9 @@ func_mode_link ()
- 	for flag in $args; do
- 	  IFS="$save_ifs"
-           func_quote_for_eval "$flag"
--	  arg="$arg $wl$func_quote_for_eval_result"
--	  compiler_flags="$compiler_flags $wl$func_quote_for_eval_result"
--	  linker_flags="$linker_flags $func_quote_for_eval_result"
-+	  func_append arg " $wl$func_quote_for_eval_result"
-+	  func_append compiler_flags " $wl$func_quote_for_eval_result"
-+	  func_append linker_flags " $func_quote_for_eval_result"
- 	done
- 	IFS="$save_ifs"
- 	func_stripname ' ' '' "$arg"
-@@ -4968,24 +5832,27 @@ func_mode_link ()
- 	arg="$func_quote_for_eval_result"
- 	;;
- 
--      # -64, -mips[0-9] enable 64-bit mode on the SGI compiler
--      # -r[0-9][0-9]* specifies the processor on the SGI compiler
--      # -xarch=*, -xtarget=* enable 64-bit mode on the Sun compiler
--      # +DA*, +DD* enable 64-bit mode on the HP compiler
--      # -q* pass through compiler args for the IBM compiler
--      # -m*, -t[45]*, -txscale* pass through architecture-specific
--      # compiler args for GCC
--      # -F/path gives path to uninstalled frameworks, gcc on darwin
--      # -p, -pg, --coverage, -fprofile-* pass through profiling flag for GCC
--      # @file GCC response files
--      # -tp=* Portland pgcc target processor selection
-+      # Flags to be passed through unchanged, with rationale:
-+      # -64, -mips[0-9]      enable 64-bit mode for the SGI compiler
-+      # -r[0-9][0-9]*        specify processor for the SGI compiler
-+      # -xarch=*, -xtarget=* enable 64-bit mode for the Sun compiler
-+      # +DA*, +DD*           enable 64-bit mode for the HP compiler
-+      # -q*                  compiler args for the IBM compiler
-+      # -m*, -t[45]*, -txscale* architecture-specific flags for GCC
-+      # -F/path              path to uninstalled frameworks, gcc on darwin
-+      # -p, -pg, --coverage, -fprofile-*  profiling flags for GCC
-+      # @file                GCC response files
-+      # -tp=*                Portland pgcc target processor selection
-+      # --sysroot=*          for sysroot support
-+      # -O*, -flto*, -fwhopr*, -fuse-linker-plugin GCC link-time optimization
-       -64|-mips[0-9]|-r[0-9][0-9]*|-xarch=*|-xtarget=*|+DA*|+DD*|-q*|-m*| \
--      -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*)
-+      -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*|--sysroot=*| \
-+      -O*|-flto*|-fwhopr*|-fuse-linker-plugin)
-         func_quote_for_eval "$arg"
- 	arg="$func_quote_for_eval_result"
-         func_append compile_command " $arg"
-         func_append finalize_command " $arg"
--        compiler_flags="$compiler_flags $arg"
-+        func_append compiler_flags " $arg"
-         continue
-         ;;
- 
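The reworked comment block above widens the set of arguments libtool forwards untouched, notably `--sysroot=*` and the GCC link-time-optimization flags. As a rough usage illustration (library name, object and sysroot path are made up), a link line that now survives the pass-through filter might look like:

    # Hypothetical invocation: --sysroot= and -flto land in compiler_flags as
    # well as compile_command and finalize_command instead of being rejected.
    ./libtool --mode=link ${CC:-gcc} --sysroot=/opt/sdk/sysroot -O2 -flto \
        -o libdemo.la demo.lo -rpath /usr/lib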
-@@ -4997,7 +5864,7 @@ func_mode_link ()
- 
-       *.$objext)
- 	# A standard object.
--	objs="$objs $arg"
-+	func_append objs " $arg"
- 	;;
- 
-       *.lo)
-@@ -5028,7 +5895,7 @@ func_mode_link ()
- 
- 	    if test "$prev" = dlfiles; then
- 	      if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then
--		dlfiles="$dlfiles $pic_object"
-+		func_append dlfiles " $pic_object"
- 		prev=
- 		continue
- 	      else
-@@ -5040,7 +5907,7 @@ func_mode_link ()
- 	    # CHECK ME:  I think I busted this.  -Ossama
- 	    if test "$prev" = dlprefiles; then
- 	      # Preload the old-style object.
--	      dlprefiles="$dlprefiles $pic_object"
-+	      func_append dlprefiles " $pic_object"
- 	      prev=
- 	    fi
- 
-@@ -5085,24 +5952,25 @@ func_mode_link ()
- 
-       *.$libext)
- 	# An archive.
--	deplibs="$deplibs $arg"
--	old_deplibs="$old_deplibs $arg"
-+	func_append deplibs " $arg"
-+	func_append old_deplibs " $arg"
- 	continue
- 	;;
- 
-       *.la)
- 	# A libtool-controlled library.
- 
-+	func_resolve_sysroot "$arg"
- 	if test "$prev" = dlfiles; then
- 	  # This library was specified with -dlopen.
--	  dlfiles="$dlfiles $arg"
-+	  func_append dlfiles " $func_resolve_sysroot_result"
- 	  prev=
- 	elif test "$prev" = dlprefiles; then
- 	  # The library was specified with -dlpreopen.
--	  dlprefiles="$dlprefiles $arg"
-+	  func_append dlprefiles " $func_resolve_sysroot_result"
- 	  prev=
- 	else
--	  deplibs="$deplibs $arg"
-+	  func_append deplibs " $func_resolve_sysroot_result"
- 	fi
- 	continue
- 	;;
-@@ -5127,7 +5995,7 @@ func_mode_link ()
-       func_fatal_help "the \`$prevarg' option requires an argument"
- 
-     if test "$export_dynamic" = yes && test -n "$export_dynamic_flag_spec"; then
--      eval "arg=\"$export_dynamic_flag_spec\""
-+      eval arg=\"$export_dynamic_flag_spec\"
-       func_append compile_command " $arg"
-       func_append finalize_command " $arg"
-     fi
-@@ -5144,11 +6012,13 @@ func_mode_link ()
-     else
-       shlib_search_path=
-     fi
--    eval "sys_lib_search_path=\"$sys_lib_search_path_spec\""
--    eval "sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\""
-+    eval sys_lib_search_path=\"$sys_lib_search_path_spec\"
-+    eval sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\"
- 
-     func_dirname "$output" "/" ""
-     output_objdir="$func_dirname_result$objdir"
-+    func_to_tool_file "$output_objdir/"
-+    tool_output_objdir=$func_to_tool_file_result
-     # Create the object directory.
-     func_mkdir_p "$output_objdir"
- 
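`func_to_tool_file`, used above to derive `tool_output_objdir` and again later when writing linker and nm input lists, converts a build-shell path into the form the native toolchain expects; the `func_convert_file_msys_to_w32` name seen further down suggests the MSYS-to-Windows case. A hedged sketch of the idea, with a hypothetical `to_tool_file_demo` doing only the drive-letter rewrite:

    # On an MSYS-style build shell, /c/work/obj.o must become c:/work/obj.o
    # before a native Windows tool sees it; elsewhere this is a no-op.
    to_tool_file_demo ()
    {
      case $1 in
        /[A-Za-z]/*)
          _rest=${1#/}                                  # c/work/obj.o
          printf '%s\n' "${_rest%%/*}:/${_rest#*/}"     # c:/work/obj.o
          ;;
        *)
          printf '%s\n' "$1"
          ;;
      esac
    }

    to_tool_file_demo /c/build/foo.o     # -> c:/build/foo.o
    to_tool_file_demo /home/user/foo.o   # -> /home/user/foo.o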
-@@ -5169,12 +6039,12 @@ func_mode_link ()
-     # Find all interdependent deplibs by searching for libraries
-     # that are linked more than once (e.g. -la -lb -la)
-     for deplib in $deplibs; do
--      if $opt_duplicate_deps ; then
-+      if $opt_preserve_dup_deps ; then
- 	case "$libs " in
--	*" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
-+	*" $deplib "*) func_append specialdeplibs " $deplib" ;;
- 	esac
-       fi
--      libs="$libs $deplib"
-+      func_append libs " $deplib"
-     done
- 
-     if test "$linkmode" = lib; then
-@@ -5187,9 +6057,9 @@ func_mode_link ()
-       if $opt_duplicate_compiler_generated_deps; then
- 	for pre_post_dep in $predeps $postdeps; do
- 	  case "$pre_post_deps " in
--	  *" $pre_post_dep "*) specialdeplibs="$specialdeplibs $pre_post_deps" ;;
-+	  *" $pre_post_dep "*) func_append specialdeplibs " $pre_post_deps" ;;
- 	  esac
--	  pre_post_deps="$pre_post_deps $pre_post_dep"
-+	  func_append pre_post_deps " $pre_post_dep"
- 	done
-       fi
-       pre_post_deps=
-@@ -5256,8 +6126,9 @@ func_mode_link ()
- 	for lib in $dlprefiles; do
- 	  # Ignore non-libtool-libs
- 	  dependency_libs=
-+	  func_resolve_sysroot "$lib"
- 	  case $lib in
--	  *.la)	func_source "$lib" ;;
-+	  *.la)	func_source "$func_resolve_sysroot_result" ;;
- 	  esac
- 
- 	  # Collect preopened libtool deplibs, except any this library
-@@ -5267,7 +6138,7 @@ func_mode_link ()
-             deplib_base=$func_basename_result
- 	    case " $weak_libs " in
- 	    *" $deplib_base "*) ;;
--	    *) deplibs="$deplibs $deplib" ;;
-+	    *) func_append deplibs " $deplib" ;;
- 	    esac
- 	  done
- 	done
-@@ -5288,11 +6159,11 @@ func_mode_link ()
- 	    compile_deplibs="$deplib $compile_deplibs"
- 	    finalize_deplibs="$deplib $finalize_deplibs"
- 	  else
--	    compiler_flags="$compiler_flags $deplib"
-+	    func_append compiler_flags " $deplib"
- 	    if test "$linkmode" = lib ; then
- 		case "$new_inherited_linker_flags " in
- 		    *" $deplib "*) ;;
--		    * ) new_inherited_linker_flags="$new_inherited_linker_flags $deplib" ;;
-+		    * ) func_append new_inherited_linker_flags " $deplib" ;;
- 		esac
- 	    fi
- 	  fi
-@@ -5377,7 +6248,7 @@ func_mode_link ()
- 	    if test "$linkmode" = lib ; then
- 		case "$new_inherited_linker_flags " in
- 		    *" $deplib "*) ;;
--		    * ) new_inherited_linker_flags="$new_inherited_linker_flags $deplib" ;;
-+		    * ) func_append new_inherited_linker_flags " $deplib" ;;
- 		esac
- 	    fi
- 	  fi
-@@ -5390,7 +6261,8 @@ func_mode_link ()
- 	    test "$pass" = conv && continue
- 	    newdependency_libs="$deplib $newdependency_libs"
- 	    func_stripname '-L' '' "$deplib"
--	    newlib_search_path="$newlib_search_path $func_stripname_result"
-+	    func_resolve_sysroot "$func_stripname_result"
-+	    func_append newlib_search_path " $func_resolve_sysroot_result"
- 	    ;;
- 	  prog)
- 	    if test "$pass" = conv; then
-@@ -5404,7 +6276,8 @@ func_mode_link ()
- 	      finalize_deplibs="$deplib $finalize_deplibs"
- 	    fi
- 	    func_stripname '-L' '' "$deplib"
--	    newlib_search_path="$newlib_search_path $func_stripname_result"
-+	    func_resolve_sysroot "$func_stripname_result"
-+	    func_append newlib_search_path " $func_resolve_sysroot_result"
- 	    ;;
- 	  *)
- 	    func_warning "\`-L' is ignored for archives/objects"
-@@ -5415,17 +6288,21 @@ func_mode_link ()
- 	-R*)
- 	  if test "$pass" = link; then
- 	    func_stripname '-R' '' "$deplib"
--	    dir=$func_stripname_result
-+	    func_resolve_sysroot "$func_stripname_result"
-+	    dir=$func_resolve_sysroot_result
- 	    # Make sure the xrpath contains only unique directories.
- 	    case "$xrpath " in
- 	    *" $dir "*) ;;
--	    *) xrpath="$xrpath $dir" ;;
-+	    *) func_append xrpath " $dir" ;;
- 	    esac
- 	  fi
- 	  deplibs="$deplib $deplibs"
- 	  continue
- 	  ;;
--	*.la) lib="$deplib" ;;
-+	*.la)
-+	  func_resolve_sysroot "$deplib"
-+	  lib=$func_resolve_sysroot_result
-+	  ;;
- 	*.$libext)
- 	  if test "$pass" = conv; then
- 	    deplibs="$deplib $deplibs"
-@@ -5488,11 +6365,11 @@ func_mode_link ()
- 	    if test "$pass" = dlpreopen || test "$dlopen_support" != yes || test "$build_libtool_libs" = no; then
- 	      # If there is no dlopen support or we're linking statically,
- 	      # we need to preload.
--	      newdlprefiles="$newdlprefiles $deplib"
-+	      func_append newdlprefiles " $deplib"
- 	      compile_deplibs="$deplib $compile_deplibs"
- 	      finalize_deplibs="$deplib $finalize_deplibs"
- 	    else
--	      newdlfiles="$newdlfiles $deplib"
-+	      func_append newdlfiles " $deplib"
- 	    fi
- 	  fi
- 	  continue
-@@ -5538,7 +6415,7 @@ func_mode_link ()
- 	  for tmp_inherited_linker_flag in $tmp_inherited_linker_flags; do
- 	    case " $new_inherited_linker_flags " in
- 	      *" $tmp_inherited_linker_flag "*) ;;
--	      *) new_inherited_linker_flags="$new_inherited_linker_flags $tmp_inherited_linker_flag";;
-+	      *) func_append new_inherited_linker_flags " $tmp_inherited_linker_flag";;
- 	    esac
- 	  done
- 	fi
-@@ -5546,8 +6423,8 @@ func_mode_link ()
- 	if test "$linkmode,$pass" = "lib,link" ||
- 	   test "$linkmode,$pass" = "prog,scan" ||
- 	   { test "$linkmode" != prog && test "$linkmode" != lib; }; then
--	  test -n "$dlopen" && dlfiles="$dlfiles $dlopen"
--	  test -n "$dlpreopen" && dlprefiles="$dlprefiles $dlpreopen"
-+	  test -n "$dlopen" && func_append dlfiles " $dlopen"
-+	  test -n "$dlpreopen" && func_append dlprefiles " $dlpreopen"
- 	fi
- 
- 	if test "$pass" = conv; then
-@@ -5558,20 +6435,20 @@ func_mode_link ()
- 	      func_fatal_error "cannot find name of link library for \`$lib'"
- 	    fi
- 	    # It is a libtool convenience library, so add in its objects.
--	    convenience="$convenience $ladir/$objdir/$old_library"
--	    old_convenience="$old_convenience $ladir/$objdir/$old_library"
-+	    func_append convenience " $ladir/$objdir/$old_library"
-+	    func_append old_convenience " $ladir/$objdir/$old_library"
- 	  elif test "$linkmode" != prog && test "$linkmode" != lib; then
- 	    func_fatal_error "\`$lib' is not a convenience library"
- 	  fi
- 	  tmp_libs=
- 	  for deplib in $dependency_libs; do
- 	    deplibs="$deplib $deplibs"
--	    if $opt_duplicate_deps ; then
-+	    if $opt_preserve_dup_deps ; then
- 	      case "$tmp_libs " in
--	      *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
-+	      *" $deplib "*) func_append specialdeplibs " $deplib" ;;
- 	      esac
- 	    fi
--	    tmp_libs="$tmp_libs $deplib"
-+	    func_append tmp_libs " $deplib"
- 	  done
- 	  continue
- 	fi # $pass = conv
-@@ -5579,9 +6456,15 @@ func_mode_link ()
- 
- 	# Get the name of the library we link against.
- 	linklib=
--	for l in $old_library $library_names; do
--	  linklib="$l"
--	done
-+	if test -n "$old_library" &&
-+	   { test "$prefer_static_libs" = yes ||
-+	     test "$prefer_static_libs,$installed" = "built,no"; }; then
-+	  linklib=$old_library
-+	else
-+	  for l in $old_library $library_names; do
-+	    linklib="$l"
-+	  done
-+	fi
- 	if test -z "$linklib"; then
- 	  func_fatal_error "cannot find name of link library for \`$lib'"
- 	fi
-@@ -5598,9 +6481,9 @@ func_mode_link ()
- 	    # statically, we need to preload.  We also need to preload any
- 	    # dependent libraries so libltdl's deplib preloader doesn't
- 	    # bomb out in the load deplibs phase.
--	    dlprefiles="$dlprefiles $lib $dependency_libs"
-+	    func_append dlprefiles " $lib $dependency_libs"
- 	  else
--	    newdlfiles="$newdlfiles $lib"
-+	    func_append newdlfiles " $lib"
- 	  fi
- 	  continue
- 	fi # $pass = dlopen
-@@ -5622,14 +6505,14 @@ func_mode_link ()
- 
- 	# Find the relevant object directory and library name.
- 	if test "X$installed" = Xyes; then
--	  if test ! -f "$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then
-+	  if test ! -f "$lt_sysroot$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then
- 	    func_warning "library \`$lib' was moved."
- 	    dir="$ladir"
- 	    absdir="$abs_ladir"
- 	    libdir="$abs_ladir"
- 	  else
--	    dir="$libdir"
--	    absdir="$libdir"
-+	    dir="$lt_sysroot$libdir"
-+	    absdir="$lt_sysroot$libdir"
- 	  fi
- 	  test "X$hardcode_automatic" = Xyes && avoidtemprpath=yes
- 	else
-@@ -5637,12 +6520,12 @@ func_mode_link ()
- 	    dir="$ladir"
- 	    absdir="$abs_ladir"
- 	    # Remove this search path later
--	    notinst_path="$notinst_path $abs_ladir"
-+	    func_append notinst_path " $abs_ladir"
- 	  else
- 	    dir="$ladir/$objdir"
- 	    absdir="$abs_ladir/$objdir"
- 	    # Remove this search path later
--	    notinst_path="$notinst_path $abs_ladir"
-+	    func_append notinst_path " $abs_ladir"
- 	  fi
- 	fi # $installed = yes
- 	func_stripname 'lib' '.la' "$laname"
-@@ -5653,20 +6536,46 @@ func_mode_link ()
- 	  if test -z "$libdir" && test "$linkmode" = prog; then
- 	    func_fatal_error "only libraries may -dlpreopen a convenience library: \`$lib'"
- 	  fi
--	  # Prefer using a static library (so that no silly _DYNAMIC symbols
--	  # are required to link).
--	  if test -n "$old_library"; then
--	    newdlprefiles="$newdlprefiles $dir/$old_library"
--	    # Keep a list of preopened convenience libraries to check
--	    # that they are being used correctly in the link pass.
--	    test -z "$libdir" && \
--		dlpreconveniencelibs="$dlpreconveniencelibs $dir/$old_library"
--	  # Otherwise, use the dlname, so that lt_dlopen finds it.
--	  elif test -n "$dlname"; then
--	    newdlprefiles="$newdlprefiles $dir/$dlname"
--	  else
--	    newdlprefiles="$newdlprefiles $dir/$linklib"
--	  fi
-+	  case "$host" in
-+	    # special handling for platforms with PE-DLLs.
-+	    *cygwin* | *mingw* | *cegcc* )
-+	      # Linker will automatically link against shared library if both
-+	      # static and shared are present.  Therefore, ensure we extract
-+	      # symbols from the import library if a shared library is present
-+	      # (otherwise, the dlopen module name will be incorrect).  We do
-+	      # this by putting the import library name into $newdlprefiles.
-+	      # We recover the dlopen module name by 'saving' the la file
-+	      # name in a special purpose variable, and (later) extracting the
-+	      # dlname from the la file.
-+	      if test -n "$dlname"; then
-+	        func_tr_sh "$dir/$linklib"
-+	        eval "libfile_$func_tr_sh_result=\$abs_ladir/\$laname"
-+	        func_append newdlprefiles " $dir/$linklib"
-+	      else
-+	        func_append newdlprefiles " $dir/$old_library"
-+	        # Keep a list of preopened convenience libraries to check
-+	        # that they are being used correctly in the link pass.
-+	        test -z "$libdir" && \
-+	          func_append dlpreconveniencelibs " $dir/$old_library"
-+	      fi
-+	    ;;
-+	    * )
-+	      # Prefer using a static library (so that no silly _DYNAMIC symbols
-+	      # are required to link).
-+	      if test -n "$old_library"; then
-+	        func_append newdlprefiles " $dir/$old_library"
-+	        # Keep a list of preopened convenience libraries to check
-+	        # that they are being used correctly in the link pass.
-+	        test -z "$libdir" && \
-+	          func_append dlpreconveniencelibs " $dir/$old_library"
-+	      # Otherwise, use the dlname, so that lt_dlopen finds it.
-+	      elif test -n "$dlname"; then
-+	        func_append newdlprefiles " $dir/$dlname"
-+	      else
-+	        func_append newdlprefiles " $dir/$linklib"
-+	      fi
-+	    ;;
-+	  esac
- 	fi # $pass = dlpreopen
- 
- 	if test -z "$libdir"; then
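The cygwin/mingw/cegcc branch added above records which `.la` file a given import library came from by stashing it in a dynamically named variable, `libfile_<key>`, where the key comes from `func_tr_sh`. A sketch of that record/lookup pattern, assuming `func_tr_sh` simply maps non-identifier characters to underscores (the `tr_sh_demo` name and paths below are hypothetical):

    # Turn an arbitrary path into a shell-safe variable name, then use it as a
    # key so the dlname can be recovered from the .la file in a later pass.
    tr_sh_demo ()
    {
      printf '%s\n' "$1" | sed 's/[^A-Za-z0-9_]/_/g'
    }

    dir=/usr/lib linklib=libfoo.dll.a laname=libfoo.la abs_ladir=/usr/lib
    key=$(tr_sh_demo "$dir/$linklib")            # _usr_lib_libfoo_dll_a
    eval "libfile_$key=\$abs_ladir/\$laname"     # record: import lib -> .la file
    eval "printf '%s\n' \"\$libfile_$key\""      # lookup -> /usr/lib/libfoo.la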
-@@ -5684,7 +6593,7 @@ func_mode_link ()
- 
- 
- 	if test "$linkmode" = prog && test "$pass" != link; then
--	  newlib_search_path="$newlib_search_path $ladir"
-+	  func_append newlib_search_path " $ladir"
- 	  deplibs="$lib $deplibs"
- 
- 	  linkalldeplibs=no
-@@ -5697,7 +6606,8 @@ func_mode_link ()
- 	  for deplib in $dependency_libs; do
- 	    case $deplib in
- 	    -L*) func_stripname '-L' '' "$deplib"
--	         newlib_search_path="$newlib_search_path $func_stripname_result"
-+	         func_resolve_sysroot "$func_stripname_result"
-+	         func_append newlib_search_path " $func_resolve_sysroot_result"
- 		 ;;
- 	    esac
- 	    # Need to link against all dependency_libs?
-@@ -5708,12 +6618,12 @@ func_mode_link ()
- 	      # or/and link against static libraries
- 	      newdependency_libs="$deplib $newdependency_libs"
- 	    fi
--	    if $opt_duplicate_deps ; then
-+	    if $opt_preserve_dup_deps ; then
- 	      case "$tmp_libs " in
--	      *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
-+	      *" $deplib "*) func_append specialdeplibs " $deplib" ;;
- 	      esac
- 	    fi
--	    tmp_libs="$tmp_libs $deplib"
-+	    func_append tmp_libs " $deplib"
- 	  done # for deplib
- 	  continue
- 	fi # $linkmode = prog...
-@@ -5728,7 +6638,7 @@ func_mode_link ()
- 	      # Make sure the rpath contains only unique directories.
- 	      case "$temp_rpath:" in
- 	      *"$absdir:"*) ;;
--	      *) temp_rpath="$temp_rpath$absdir:" ;;
-+	      *) func_append temp_rpath "$absdir:" ;;
- 	      esac
- 	    fi
- 
-@@ -5740,7 +6650,7 @@ func_mode_link ()
- 	    *)
- 	      case "$compile_rpath " in
- 	      *" $absdir "*) ;;
--	      *) compile_rpath="$compile_rpath $absdir"
-+	      *) func_append compile_rpath " $absdir" ;;
- 	      esac
- 	      ;;
- 	    esac
-@@ -5749,7 +6659,7 @@ func_mode_link ()
- 	    *)
- 	      case "$finalize_rpath " in
- 	      *" $libdir "*) ;;
--	      *) finalize_rpath="$finalize_rpath $libdir"
-+	      *) func_append finalize_rpath " $libdir" ;;
- 	      esac
- 	      ;;
- 	    esac
-@@ -5774,12 +6684,12 @@ func_mode_link ()
- 	  case $host in
- 	  *cygwin* | *mingw* | *cegcc*)
- 	      # No point in relinking DLLs because paths are not encoded
--	      notinst_deplibs="$notinst_deplibs $lib"
-+	      func_append notinst_deplibs " $lib"
- 	      need_relink=no
- 	    ;;
- 	  *)
- 	    if test "$installed" = no; then
--	      notinst_deplibs="$notinst_deplibs $lib"
-+	      func_append notinst_deplibs " $lib"
- 	      need_relink=yes
- 	    fi
- 	    ;;
-@@ -5814,7 +6724,7 @@ func_mode_link ()
- 	    *)
- 	      case "$compile_rpath " in
- 	      *" $absdir "*) ;;
--	      *) compile_rpath="$compile_rpath $absdir"
-+	      *) func_append compile_rpath " $absdir" ;;
- 	      esac
- 	      ;;
- 	    esac
-@@ -5823,7 +6733,7 @@ func_mode_link ()
- 	    *)
- 	      case "$finalize_rpath " in
- 	      *" $libdir "*) ;;
--	      *) finalize_rpath="$finalize_rpath $libdir"
-+	      *) func_append finalize_rpath " $libdir" ;;
- 	      esac
- 	      ;;
- 	    esac
-@@ -5835,7 +6745,7 @@ func_mode_link ()
- 	    shift
- 	    realname="$1"
- 	    shift
--	    eval "libname=\"$libname_spec\""
-+	    libname=`eval "\\$ECHO \"$libname_spec\""`
- 	    # use dlname if we got it. it's perfectly good, no?
- 	    if test -n "$dlname"; then
- 	      soname="$dlname"
-@@ -5848,7 +6758,7 @@ func_mode_link ()
- 		versuffix="-$major"
- 		;;
- 	      esac
--	      eval "soname=\"$soname_spec\""
-+	      eval soname=\"$soname_spec\"
- 	    else
- 	      soname="$realname"
- 	    fi
-@@ -5877,7 +6787,7 @@ func_mode_link ()
- 	    linklib=$newlib
- 	  fi # test -n "$old_archive_from_expsyms_cmds"
- 
--	  if test "$linkmode" = prog || test "$mode" != relink; then
-+	  if test "$linkmode" = prog || test "$opt_mode" != relink; then
- 	    add_shlibpath=
- 	    add_dir=
- 	    add=
-@@ -5933,7 +6843,7 @@ func_mode_link ()
- 		if test -n "$inst_prefix_dir"; then
- 		  case $libdir in
- 		    [\\/]*)
--		      add_dir="$add_dir -L$inst_prefix_dir$libdir"
-+		      func_append add_dir " -L$inst_prefix_dir$libdir"
- 		      ;;
- 		  esac
- 		fi
-@@ -5955,7 +6865,7 @@ func_mode_link ()
- 	    if test -n "$add_shlibpath"; then
- 	      case :$compile_shlibpath: in
- 	      *":$add_shlibpath:"*) ;;
--	      *) compile_shlibpath="$compile_shlibpath$add_shlibpath:" ;;
-+	      *) func_append compile_shlibpath "$add_shlibpath:" ;;
- 	      esac
- 	    fi
- 	    if test "$linkmode" = prog; then
-@@ -5969,13 +6879,13 @@ func_mode_link ()
- 		 test "$hardcode_shlibpath_var" = yes; then
- 		case :$finalize_shlibpath: in
- 		*":$libdir:"*) ;;
--		*) finalize_shlibpath="$finalize_shlibpath$libdir:" ;;
-+		*) func_append finalize_shlibpath "$libdir:" ;;
- 		esac
- 	      fi
- 	    fi
- 	  fi
- 
--	  if test "$linkmode" = prog || test "$mode" = relink; then
-+	  if test "$linkmode" = prog || test "$opt_mode" = relink; then
- 	    add_shlibpath=
- 	    add_dir=
- 	    add=
-@@ -5989,7 +6899,7 @@ func_mode_link ()
- 	    elif test "$hardcode_shlibpath_var" = yes; then
- 	      case :$finalize_shlibpath: in
- 	      *":$libdir:"*) ;;
--	      *) finalize_shlibpath="$finalize_shlibpath$libdir:" ;;
-+	      *) func_append finalize_shlibpath "$libdir:" ;;
- 	      esac
- 	      add="-l$name"
- 	    elif test "$hardcode_automatic" = yes; then
-@@ -6001,12 +6911,12 @@ func_mode_link ()
- 	      fi
- 	    else
- 	      # We cannot seem to hardcode it, guess we'll fake it.
--	      add_dir="-L$libdir"
-+	      add_dir="-L$lt_sysroot$libdir"
- 	      # Try looking first in the location we're being installed to.
- 	      if test -n "$inst_prefix_dir"; then
- 		case $libdir in
- 		  [\\/]*)
--		    add_dir="$add_dir -L$inst_prefix_dir$libdir"
-+		    func_append add_dir " -L$inst_prefix_dir$libdir"
- 		    ;;
- 		esac
- 	      fi
-@@ -6083,27 +6993,33 @@ func_mode_link ()
- 	           temp_xrpath=$func_stripname_result
- 		   case " $xrpath " in
- 		   *" $temp_xrpath "*) ;;
--		   *) xrpath="$xrpath $temp_xrpath";;
-+		   *) func_append xrpath " $temp_xrpath";;
- 		   esac;;
--	      *) temp_deplibs="$temp_deplibs $libdir";;
-+	      *) func_append temp_deplibs " $libdir";;
- 	      esac
- 	    done
- 	    dependency_libs="$temp_deplibs"
- 	  fi
- 
--	  newlib_search_path="$newlib_search_path $absdir"
-+	  func_append newlib_search_path " $absdir"
- 	  # Link against this library
- 	  test "$link_static" = no && newdependency_libs="$abs_ladir/$laname $newdependency_libs"
- 	  # ... and its dependency_libs
- 	  tmp_libs=
- 	  for deplib in $dependency_libs; do
- 	    newdependency_libs="$deplib $newdependency_libs"
--	    if $opt_duplicate_deps ; then
-+	    case $deplib in
-+              -L*) func_stripname '-L' '' "$deplib"
-+                   func_resolve_sysroot "$func_stripname_result";;
-+              *) func_resolve_sysroot "$deplib" ;;
-+            esac
-+	    if $opt_preserve_dup_deps ; then
- 	      case "$tmp_libs " in
--	      *" $deplib "*) specialdeplibs="$specialdeplibs $deplib" ;;
-+	      *" $func_resolve_sysroot_result "*)
-+                func_append specialdeplibs " $func_resolve_sysroot_result" ;;
- 	      esac
- 	    fi
--	    tmp_libs="$tmp_libs $deplib"
-+	    func_append tmp_libs " $func_resolve_sysroot_result"
- 	  done
- 
- 	  if test "$link_all_deplibs" != no; then
-@@ -6113,8 +7029,10 @@ func_mode_link ()
- 	      case $deplib in
- 	      -L*) path="$deplib" ;;
- 	      *.la)
-+	        func_resolve_sysroot "$deplib"
-+	        deplib=$func_resolve_sysroot_result
- 	        func_dirname "$deplib" "" "."
--		dir="$func_dirname_result"
-+		dir=$func_dirname_result
- 		# We need an absolute path.
- 		case $dir in
- 		[\\/]* | [A-Za-z]:[\\/]*) absdir="$dir" ;;
-@@ -6130,7 +7048,7 @@ func_mode_link ()
- 		case $host in
- 		*-*-darwin*)
- 		  depdepl=
--		  deplibrary_names=`${SED} -n -e 's/^library_names=\(.*\)$/\1/p' $deplib`
-+		  eval deplibrary_names=`${SED} -n -e 's/^library_names=\(.*\)$/\1/p' $deplib`
- 		  if test -n "$deplibrary_names" ; then
- 		    for tmp in $deplibrary_names ; do
- 		      depdepl=$tmp
-@@ -6141,8 +7059,8 @@ func_mode_link ()
-                       if test -z "$darwin_install_name"; then
-                           darwin_install_name=`${OTOOL64} -L $depdepl  | awk '{if (NR == 2) {print $1;exit}}'`
-                       fi
--		      compiler_flags="$compiler_flags ${wl}-dylib_file ${wl}${darwin_install_name}:${depdepl}"
--		      linker_flags="$linker_flags -dylib_file ${darwin_install_name}:${depdepl}"
-+		      func_append compiler_flags " ${wl}-dylib_file ${wl}${darwin_install_name}:${depdepl}"
-+		      func_append linker_flags " -dylib_file ${darwin_install_name}:${depdepl}"
- 		      path=
- 		    fi
- 		  fi
-@@ -6152,7 +7070,7 @@ func_mode_link ()
- 		  ;;
- 		esac
- 		else
--		  libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
-+		  eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
- 		  test -z "$libdir" && \
- 		    func_fatal_error "\`$deplib' is not a valid libtool archive"
- 		  test "$absdir" != "$libdir" && \
-@@ -6192,7 +7110,7 @@ func_mode_link ()
- 	  for dir in $newlib_search_path; do
- 	    case "$lib_search_path " in
- 	    *" $dir "*) ;;
--	    *) lib_search_path="$lib_search_path $dir" ;;
-+	    *) func_append lib_search_path " $dir" ;;
- 	    esac
- 	  done
- 	  newlib_search_path=
-@@ -6205,7 +7123,7 @@ func_mode_link ()
- 	fi
- 	for var in $vars dependency_libs; do
- 	  # Add libraries to $var in reverse order
--	  eval tmp_libs=\$$var
-+	  eval tmp_libs=\"\$$var\"
- 	  new_libs=
- 	  for deplib in $tmp_libs; do
- 	    # FIXME: Pedantically, this is the right thing to do, so
-@@ -6250,13 +7168,13 @@ func_mode_link ()
- 	    -L*)
- 	      case " $tmp_libs " in
- 	      *" $deplib "*) ;;
--	      *) tmp_libs="$tmp_libs $deplib" ;;
-+	      *) func_append tmp_libs " $deplib" ;;
- 	      esac
- 	      ;;
--	    *) tmp_libs="$tmp_libs $deplib" ;;
-+	    *) func_append tmp_libs " $deplib" ;;
- 	    esac
- 	  done
--	  eval $var=\$tmp_libs
-+	  eval $var=\"$tmp_libs\"
- 	done # for var
-       fi
-       # Last step: remove runtime libs from dependency_libs
-@@ -6269,7 +7187,7 @@ func_mode_link ()
- 	  ;;
- 	esac
- 	if test -n "$i" ; then
--	  tmp_libs="$tmp_libs $i"
-+	  func_append tmp_libs " $i"
- 	fi
-       done
-       dependency_libs=$tmp_libs
-@@ -6310,7 +7228,7 @@ func_mode_link ()
-       # Now set the variables for building old libraries.
-       build_libtool_libs=no
-       oldlibs="$output"
--      objs="$objs$old_deplibs"
-+      func_append objs "$old_deplibs"
-       ;;
- 
-     lib)
-@@ -6319,8 +7237,8 @@ func_mode_link ()
-       lib*)
- 	func_stripname 'lib' '.la' "$outputname"
- 	name=$func_stripname_result
--	eval "shared_ext=\"$shrext_cmds\""
--	eval "libname=\"$libname_spec\""
-+	eval shared_ext=\"$shrext_cmds\"
-+	eval libname=\"$libname_spec\"
- 	;;
-       *)
- 	test "$module" = no && \
-@@ -6330,8 +7248,8 @@ func_mode_link ()
- 	  # Add the "lib" prefix for modules if required
- 	  func_stripname '' '.la' "$outputname"
- 	  name=$func_stripname_result
--	  eval "shared_ext=\"$shrext_cmds\""
--	  eval "libname=\"$libname_spec\""
-+	  eval shared_ext=\"$shrext_cmds\"
-+	  eval libname=\"$libname_spec\"
- 	else
- 	  func_stripname '' '.la' "$outputname"
- 	  libname=$func_stripname_result
-@@ -6346,7 +7264,7 @@ func_mode_link ()
- 	  echo
- 	  $ECHO "*** Warning: Linking the shared library $output against the non-libtool"
- 	  $ECHO "*** objects $objs is not portable!"
--	  libobjs="$libobjs $objs"
-+	  func_append libobjs " $objs"
- 	fi
-       fi
- 
-@@ -6544,7 +7462,7 @@ func_mode_link ()
- 	  done
- 
- 	  # Make executables depend on our current version.
--	  verstring="$verstring:${current}.0"
-+	  func_append verstring ":${current}.0"
- 	  ;;
- 
- 	qnx)
-@@ -6612,10 +7530,10 @@ func_mode_link ()
-       fi
- 
-       func_generate_dlsyms "$libname" "$libname" "yes"
--      libobjs="$libobjs $symfileobj"
-+      func_append libobjs " $symfileobj"
-       test "X$libobjs" = "X " && libobjs=
- 
--      if test "$mode" != relink; then
-+      if test "$opt_mode" != relink; then
- 	# Remove our outputs, but don't remove object files since they
- 	# may have been created when compiling PIC objects.
- 	removelist=
-@@ -6631,7 +7549,7 @@ func_mode_link ()
- 		   continue
- 		 fi
- 	       fi
--	       removelist="$removelist $p"
-+	       func_append removelist " $p"
- 	       ;;
- 	    *) ;;
- 	  esac
-@@ -6642,7 +7560,7 @@ func_mode_link ()
- 
-       # Now set the variables for building old libraries.
-       if test "$build_old_libs" = yes && test "$build_libtool_libs" != convenience ; then
--	oldlibs="$oldlibs $output_objdir/$libname.$libext"
-+	func_append oldlibs " $output_objdir/$libname.$libext"
- 
- 	# Transform .lo files to .o files.
- 	oldobjs="$objs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; $lo2o" | $NL2SP`
-@@ -6659,10 +7577,11 @@ func_mode_link ()
- 	# If the user specified any rpath flags, then add them.
- 	temp_xrpath=
- 	for libdir in $xrpath; do
--	  temp_xrpath="$temp_xrpath -R$libdir"
-+	  func_replace_sysroot "$libdir"
-+	  func_append temp_xrpath " -R$func_replace_sysroot_result"
- 	  case "$finalize_rpath " in
- 	  *" $libdir "*) ;;
--	  *) finalize_rpath="$finalize_rpath $libdir" ;;
-+	  *) func_append finalize_rpath " $libdir" ;;
- 	  esac
- 	done
- 	if test "$hardcode_into_libs" != yes || test "$build_old_libs" = yes; then
-@@ -6676,7 +7595,7 @@ func_mode_link ()
-       for lib in $old_dlfiles; do
- 	case " $dlprefiles $dlfiles " in
- 	*" $lib "*) ;;
--	*) dlfiles="$dlfiles $lib" ;;
-+	*) func_append dlfiles " $lib" ;;
- 	esac
-       done
- 
-@@ -6686,7 +7605,7 @@ func_mode_link ()
-       for lib in $old_dlprefiles; do
- 	case "$dlprefiles " in
- 	*" $lib "*) ;;
--	*) dlprefiles="$dlprefiles $lib" ;;
-+	*) func_append dlprefiles " $lib" ;;
- 	esac
-       done
- 
-@@ -6698,7 +7617,7 @@ func_mode_link ()
- 	    ;;
- 	  *-*-rhapsody* | *-*-darwin1.[012])
- 	    # Rhapsody C library is in the System framework
--	    deplibs="$deplibs System.ltframework"
-+	    func_append deplibs " System.ltframework"
- 	    ;;
- 	  *-*-netbsd*)
- 	    # Don't link with libc until the a.out ld.so is fixed.
-@@ -6715,7 +7634,7 @@ func_mode_link ()
- 	  *)
- 	    # Add libc to deplibs on all other systems if necessary.
- 	    if test "$build_libtool_need_lc" = "yes"; then
--	      deplibs="$deplibs -lc"
-+	      func_append deplibs " -lc"
- 	    fi
- 	    ;;
- 	  esac
-@@ -6764,18 +7683,18 @@ EOF
- 		if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
- 		  case " $predeps $postdeps " in
- 		  *" $i "*)
--		    newdeplibs="$newdeplibs $i"
-+		    func_append newdeplibs " $i"
- 		    i=""
- 		    ;;
- 		  esac
- 		fi
- 		if test -n "$i" ; then
--		  eval "libname=\"$libname_spec\""
--		  eval "deplib_matches=\"$library_names_spec\""
-+		  libname=`eval "\\$ECHO \"$libname_spec\""`
-+		  deplib_matches=`eval "\\$ECHO \"$library_names_spec\""`
- 		  set dummy $deplib_matches; shift
- 		  deplib_match=$1
- 		  if test `expr "$ldd_output" : ".*$deplib_match"` -ne 0 ; then
--		    newdeplibs="$newdeplibs $i"
-+		    func_append newdeplibs " $i"
- 		  else
- 		    droppeddeps=yes
- 		    echo
-@@ -6789,7 +7708,7 @@ EOF
- 		fi
- 		;;
- 	      *)
--		newdeplibs="$newdeplibs $i"
-+		func_append newdeplibs " $i"
- 		;;
- 	      esac
- 	    done
-@@ -6807,18 +7726,18 @@ EOF
- 		  if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
- 		    case " $predeps $postdeps " in
- 		    *" $i "*)
--		      newdeplibs="$newdeplibs $i"
-+		      func_append newdeplibs " $i"
- 		      i=""
- 		      ;;
- 		    esac
- 		  fi
- 		  if test -n "$i" ; then
--		    eval "libname=\"$libname_spec\""
--		    eval "deplib_matches=\"$library_names_spec\""
-+		    libname=`eval "\\$ECHO \"$libname_spec\""`
-+		    deplib_matches=`eval "\\$ECHO \"$library_names_spec\""`
- 		    set dummy $deplib_matches; shift
- 		    deplib_match=$1
- 		    if test `expr "$ldd_output" : ".*$deplib_match"` -ne 0 ; then
--		      newdeplibs="$newdeplibs $i"
-+		      func_append newdeplibs " $i"
- 		    else
- 		      droppeddeps=yes
- 		      echo
-@@ -6840,7 +7759,7 @@ EOF
- 		fi
- 		;;
- 	      *)
--		newdeplibs="$newdeplibs $i"
-+		func_append newdeplibs " $i"
- 		;;
- 	      esac
- 	    done
-@@ -6857,15 +7776,27 @@ EOF
- 	      if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
- 		case " $predeps $postdeps " in
- 		*" $a_deplib "*)
--		  newdeplibs="$newdeplibs $a_deplib"
-+		  func_append newdeplibs " $a_deplib"
- 		  a_deplib=""
- 		  ;;
- 		esac
- 	      fi
- 	      if test -n "$a_deplib" ; then
--		eval "libname=\"$libname_spec\""
-+		libname=`eval "\\$ECHO \"$libname_spec\""`
-+		if test -n "$file_magic_glob"; then
-+		  libnameglob=`func_echo_all "$libname" | $SED -e $file_magic_glob`
-+		else
-+		  libnameglob=$libname
-+		fi
-+		test "$want_nocaseglob" = yes && nocaseglob=`shopt -p nocaseglob`
- 		for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do
--		  potential_libs=`ls $i/$libname[.-]* 2>/dev/null`
-+		  if test "$want_nocaseglob" = yes; then
-+		    shopt -s nocaseglob
-+		    potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null`
-+		    $nocaseglob
-+		  else
-+		    potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null`
-+		  fi
- 		  for potent_lib in $potential_libs; do
- 		      # Follow soft links.
- 		      if ls -lLd "$potent_lib" 2>/dev/null |
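The `$want_nocaseglob` branch above needs case-insensitive globbing only while it lists candidate library names, so it saves and restores the shell option around the `ls`. The trick is bash-specific: `shopt -p nocaseglob` prints the command that restores the option's current state. A standalone illustration (the glob pattern is made up):

    # bash only: capture the restore command, flip the option, do the work,
    # then run the captured command to put the option back as it was.
    saved=$(shopt -p nocaseglob)         # e.g. "shopt -u nocaseglob"
    shopt -s nocaseglob
    ls /usr/lib/LibFoo[.-]* 2>/dev/null  # also matches libfoo.* while the option is on
    $saved                               # restore the previous setting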
-@@ -6885,10 +7816,10 @@ EOF
- 			*) potlib=`$ECHO "$potlib" | $SED 's,[^/]*$,,'`"$potliblink";;
- 			esac
- 		      done
--		      if eval "$file_magic_cmd \"\$potlib\"" 2>/dev/null |
-+		      if eval $file_magic_cmd \"\$potlib\" 2>/dev/null |
- 			 $SED -e 10q |
- 			 $EGREP "$file_magic_regex" > /dev/null; then
--			newdeplibs="$newdeplibs $a_deplib"
-+			func_append newdeplibs " $a_deplib"
- 			a_deplib=""
- 			break 2
- 		      fi
-@@ -6913,7 +7844,7 @@ EOF
- 	      ;;
- 	    *)
- 	      # Add a -L argument.
--	      newdeplibs="$newdeplibs $a_deplib"
-+	      func_append newdeplibs " $a_deplib"
- 	      ;;
- 	    esac
- 	  done # Gone through all deplibs.
-@@ -6929,20 +7860,20 @@ EOF
- 	      if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then
- 		case " $predeps $postdeps " in
- 		*" $a_deplib "*)
--		  newdeplibs="$newdeplibs $a_deplib"
-+		  func_append newdeplibs " $a_deplib"
- 		  a_deplib=""
- 		  ;;
- 		esac
- 	      fi
- 	      if test -n "$a_deplib" ; then
--		eval "libname=\"$libname_spec\""
-+		libname=`eval "\\$ECHO \"$libname_spec\""`
- 		for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do
- 		  potential_libs=`ls $i/$libname[.-]* 2>/dev/null`
- 		  for potent_lib in $potential_libs; do
- 		    potlib="$potent_lib" # see symlink-check above in file_magic test
- 		    if eval "\$ECHO \"$potent_lib\"" 2>/dev/null | $SED 10q | \
- 		       $EGREP "$match_pattern_regex" > /dev/null; then
--		      newdeplibs="$newdeplibs $a_deplib"
-+		      func_append newdeplibs " $a_deplib"
- 		      a_deplib=""
- 		      break 2
- 		    fi
-@@ -6967,7 +7898,7 @@ EOF
- 	      ;;
- 	    *)
- 	      # Add a -L argument.
--	      newdeplibs="$newdeplibs $a_deplib"
-+	      func_append newdeplibs " $a_deplib"
- 	      ;;
- 	    esac
- 	  done # Gone through all deplibs.
-@@ -7071,7 +8002,7 @@ EOF
- 	*)
- 	  case " $deplibs " in
- 	  *" -L$path/$objdir "*)
--	    new_libs="$new_libs -L$path/$objdir" ;;
-+	    func_append new_libs " -L$path/$objdir" ;;
- 	  esac
- 	  ;;
- 	esac
-@@ -7081,10 +8012,10 @@ EOF
- 	-L*)
- 	  case " $new_libs " in
- 	  *" $deplib "*) ;;
--	  *) new_libs="$new_libs $deplib" ;;
-+	  *) func_append new_libs " $deplib" ;;
- 	  esac
- 	  ;;
--	*) new_libs="$new_libs $deplib" ;;
-+	*) func_append new_libs " $deplib" ;;
- 	esac
-       done
-       deplibs="$new_libs"
-@@ -7101,10 +8032,12 @@ EOF
- 	  hardcode_libdirs=
- 	  dep_rpath=
- 	  rpath="$finalize_rpath"
--	  test "$mode" != relink && rpath="$compile_rpath$rpath"
-+	  test "$opt_mode" != relink && rpath="$compile_rpath$rpath"
- 	  for libdir in $rpath; do
- 	    if test -n "$hardcode_libdir_flag_spec"; then
- 	      if test -n "$hardcode_libdir_separator"; then
-+		func_replace_sysroot "$libdir"
-+		libdir=$func_replace_sysroot_result
- 		if test -z "$hardcode_libdirs"; then
- 		  hardcode_libdirs="$libdir"
- 		else
-@@ -7113,18 +8046,18 @@ EOF
- 		  *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
- 		    ;;
- 		  *)
--		    hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
-+		    func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
- 		    ;;
- 		  esac
- 		fi
- 	      else
--		eval "flag=\"$hardcode_libdir_flag_spec\""
--		dep_rpath="$dep_rpath $flag"
-+		eval flag=\"$hardcode_libdir_flag_spec\"
-+		func_append dep_rpath " $flag"
- 	      fi
- 	    elif test -n "$runpath_var"; then
- 	      case "$perm_rpath " in
- 	      *" $libdir "*) ;;
--	      *) perm_rpath="$perm_rpath $libdir" ;;
-+	      *) func_append perm_rpath " $libdir" ;;
- 	      esac
- 	    fi
- 	  done
-@@ -7133,40 +8066,38 @@ EOF
- 	     test -n "$hardcode_libdirs"; then
- 	    libdir="$hardcode_libdirs"
- 	    if test -n "$hardcode_libdir_flag_spec_ld"; then
--	      eval "dep_rpath=\"$hardcode_libdir_flag_spec_ld\""
-+	      eval dep_rpath=\"$hardcode_libdir_flag_spec_ld\"
- 	    else
--	      eval "dep_rpath=\"$hardcode_libdir_flag_spec\""
-+	      eval dep_rpath=\"$hardcode_libdir_flag_spec\"
- 	    fi
- 	  fi
- 	  if test -n "$runpath_var" && test -n "$perm_rpath"; then
- 	    # We should set the runpath_var.
- 	    rpath=
- 	    for dir in $perm_rpath; do
--	      rpath="$rpath$dir:"
-+	      func_append rpath "$dir:"
- 	    done
--	    eval $runpath_var=\$rpath\$$runpath_var
--	    export $runpath_var
-+	    eval "$runpath_var='$rpath\$$runpath_var'; export $runpath_var"
- 	  fi
- 	  test -n "$dep_rpath" && deplibs="$dep_rpath $deplibs"
- 	fi
- 
- 	shlibpath="$finalize_shlibpath"
--	test "$mode" != relink && shlibpath="$compile_shlibpath$shlibpath"
-+	test "$opt_mode" != relink && shlibpath="$compile_shlibpath$shlibpath"
- 	if test -n "$shlibpath"; then
--	  eval $shlibpath_var=\$shlibpath\$$shlibpath_var
--	  export $shlibpath_var
-+	  eval "$shlibpath_var='$shlibpath\$$shlibpath_var'; export $shlibpath_var"
- 	fi
- 
- 	# Get the real and link names of the library.
--	eval "shared_ext=\"$shrext_cmds\""
--	eval "library_names=\"$library_names_spec\""
-+	eval shared_ext=\"$shrext_cmds\"
-+	eval library_names=\"$library_names_spec\"
- 	set dummy $library_names
- 	shift
- 	realname="$1"
- 	shift
- 
- 	if test -n "$soname_spec"; then
--	  eval "soname=\"$soname_spec\""
-+	  eval soname=\"$soname_spec\"
- 	else
- 	  soname="$realname"
- 	fi
-@@ -7178,7 +8109,7 @@ EOF
- 	linknames=
- 	for link
- 	do
--	  linknames="$linknames $link"
-+	  func_append linknames " $link"
- 	done
- 
- 	# Use standard objects if they are pic
-@@ -7189,7 +8120,7 @@ EOF
- 	if test -n "$export_symbols" && test -n "$include_expsyms"; then
- 	  $opt_dry_run || cp "$export_symbols" "$output_objdir/$libname.uexp"
- 	  export_symbols="$output_objdir/$libname.uexp"
--	  delfiles="$delfiles $export_symbols"
-+	  func_append delfiles " $export_symbols"
- 	fi
- 
- 	orig_export_symbols=
-@@ -7220,13 +8151,45 @@ EOF
- 	    $opt_dry_run || $RM $export_symbols
- 	    cmds=$export_symbols_cmds
- 	    save_ifs="$IFS"; IFS='~'
--	    for cmd in $cmds; do
-+	    for cmd1 in $cmds; do
- 	      IFS="$save_ifs"
--	      eval "cmd=\"$cmd\""
--	      func_len " $cmd"
--	      len=$func_len_result
--	      if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then
-+	      # Take the normal branch if the nm_file_list_spec branch
-+	      # doesn't work or if tool conversion is not needed.
-+	      case $nm_file_list_spec~$to_tool_file_cmd in
-+		*~func_convert_file_noop | *~func_convert_file_msys_to_w32 | ~*)
-+		  try_normal_branch=yes
-+		  eval cmd=\"$cmd1\"
-+		  func_len " $cmd"
-+		  len=$func_len_result
-+		  ;;
-+		*)
-+		  try_normal_branch=no
-+		  ;;
-+	      esac
-+	      if test "$try_normal_branch" = yes \
-+		 && { test "$len" -lt "$max_cmd_len" \
-+		      || test "$max_cmd_len" -le -1; }
-+	      then
-+		func_show_eval "$cmd" 'exit $?'
-+		skipped_export=false
-+	      elif test -n "$nm_file_list_spec"; then
-+		func_basename "$output"
-+		output_la=$func_basename_result
-+		save_libobjs=$libobjs
-+		save_output=$output
-+		output=${output_objdir}/${output_la}.nm
-+		func_to_tool_file "$output"
-+		libobjs=$nm_file_list_spec$func_to_tool_file_result
-+		func_append delfiles " $output"
-+		func_verbose "creating $NM input file list: $output"
-+		for obj in $save_libobjs; do
-+		  func_to_tool_file "$obj"
-+		  $ECHO "$func_to_tool_file_result"
-+		done > "$output"
-+		eval cmd=\"$cmd1\"
- 		func_show_eval "$cmd" 'exit $?'
-+		output=$save_output
-+		libobjs=$save_libobjs
- 		skipped_export=false
- 	      else
- 		# The command line is too long to execute in one step.
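The new `nm_file_list_spec` branch above sidesteps the command-line length limit by writing the object list to a file, converting each path with `func_to_tool_file`, and letting `$NM` read the list from that file. The shape of that fallback, sketched with GNU binutils' `@file` response-file syntax (the real prefix is the configure-detected `$nm_file_list_spec`, and the object names below are assumed):

    # One object path per line, then a single short nm command however long
    # the list is; a.o/b.o/c.o stand in for $save_libobjs from a real build.
    output_objdir=.libs output_la=libdemo.la
    mkdir -p "$output_objdir"
    listfile=$output_objdir/$output_la.nm
    for obj in a.o b.o c.o; do
      printf '%s\n' "$obj"
    done > "$listfile"
    nm @"$listfile"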
-@@ -7248,7 +8211,7 @@ EOF
- 	if test -n "$export_symbols" && test -n "$include_expsyms"; then
- 	  tmp_export_symbols="$export_symbols"
- 	  test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols"
--	  $opt_dry_run || $ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"
-+	  $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"'
- 	fi
- 
- 	if test "X$skipped_export" != "X:" && test -n "$orig_export_symbols"; then
-@@ -7260,7 +8223,7 @@ EOF
- 	  # global variables. join(1) would be nice here, but unfortunately
- 	  # isn't a blessed tool.
- 	  $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter
--	  delfiles="$delfiles $export_symbols $output_objdir/$libname.filter"
-+	  func_append delfiles " $export_symbols $output_objdir/$libname.filter"
- 	  export_symbols=$output_objdir/$libname.def
- 	  $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols
- 	fi
-@@ -7270,7 +8233,7 @@ EOF
- 	  case " $convenience " in
- 	  *" $test_deplib "*) ;;
- 	  *)
--	    tmp_deplibs="$tmp_deplibs $test_deplib"
-+	    func_append tmp_deplibs " $test_deplib"
- 	    ;;
- 	  esac
- 	done
-@@ -7286,43 +8249,43 @@ EOF
- 	  fi
- 	  if test -n "$whole_archive_flag_spec"; then
- 	    save_libobjs=$libobjs
--	    eval "libobjs=\"\$libobjs $whole_archive_flag_spec\""
-+	    eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
- 	    test "X$libobjs" = "X " && libobjs=
- 	  else
- 	    gentop="$output_objdir/${outputname}x"
--	    generated="$generated $gentop"
-+	    func_append generated " $gentop"
- 
- 	    func_extract_archives $gentop $convenience
--	    libobjs="$libobjs $func_extract_archives_result"
-+	    func_append libobjs " $func_extract_archives_result"
- 	    test "X$libobjs" = "X " && libobjs=
- 	  fi
- 	fi
- 
- 	if test "$thread_safe" = yes && test -n "$thread_safe_flag_spec"; then
--	  eval "flag=\"$thread_safe_flag_spec\""
--	  linker_flags="$linker_flags $flag"
-+	  eval flag=\"$thread_safe_flag_spec\"
-+	  func_append linker_flags " $flag"
- 	fi
- 
- 	# Make a backup of the uninstalled library when relinking
--	if test "$mode" = relink; then
--	  $opt_dry_run || (cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U) || exit $?
-+	if test "$opt_mode" = relink; then
-+	  $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U)' || exit $?
- 	fi
- 
- 	# Do each of the archive commands.
- 	if test "$module" = yes && test -n "$module_cmds" ; then
- 	  if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then
--	    eval "test_cmds=\"$module_expsym_cmds\""
-+	    eval test_cmds=\"$module_expsym_cmds\"
- 	    cmds=$module_expsym_cmds
- 	  else
--	    eval "test_cmds=\"$module_cmds\""
-+	    eval test_cmds=\"$module_cmds\"
- 	    cmds=$module_cmds
- 	  fi
- 	else
- 	  if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then
--	    eval "test_cmds=\"$archive_expsym_cmds\""
-+	    eval test_cmds=\"$archive_expsym_cmds\"
- 	    cmds=$archive_expsym_cmds
- 	  else
--	    eval "test_cmds=\"$archive_cmds\""
-+	    eval test_cmds=\"$archive_cmds\"
- 	    cmds=$archive_cmds
- 	  fi
- 	fi
-@@ -7366,10 +8329,13 @@ EOF
- 	    echo 'INPUT (' > $output
- 	    for obj in $save_libobjs
- 	    do
--	      $ECHO "$obj" >> $output
-+	      func_to_tool_file "$obj"
-+	      $ECHO "$func_to_tool_file_result" >> $output
- 	    done
- 	    echo ')' >> $output
--	    delfiles="$delfiles $output"
-+	    func_append delfiles " $output"
-+	    func_to_tool_file "$output"
-+	    output=$func_to_tool_file_result
- 	  elif test -n "$save_libobjs" && test "X$skipped_export" != "X:" && test "X$file_list_spec" != X; then
- 	    output=${output_objdir}/${output_la}.lnk
- 	    func_verbose "creating linker input file list: $output"
-@@ -7383,15 +8349,17 @@ EOF
- 	    fi
- 	    for obj
- 	    do
--	      $ECHO "$obj" >> $output
-+	      func_to_tool_file "$obj"
-+	      $ECHO "$func_to_tool_file_result" >> $output
- 	    done
--	    delfiles="$delfiles $output"
--	    output=$firstobj\"$file_list_spec$output\"
-+	    func_append delfiles " $output"
-+	    func_to_tool_file "$output"
-+	    output=$firstobj\"$file_list_spec$func_to_tool_file_result\"
- 	  else
- 	    if test -n "$save_libobjs"; then
- 	      func_verbose "creating reloadable object files..."
- 	      output=$output_objdir/$output_la-${k}.$objext
--	      eval "test_cmds=\"$reload_cmds\""
-+	      eval test_cmds=\"$reload_cmds\"
- 	      func_len " $test_cmds"
- 	      len0=$func_len_result
- 	      len=$len0
-@@ -7411,12 +8379,12 @@ EOF
- 		  if test "$k" -eq 1 ; then
- 		    # The first file doesn't have a previous command to add.
- 		    reload_objs=$objlist
--		    eval "concat_cmds=\"$reload_cmds\""
-+		    eval concat_cmds=\"$reload_cmds\"
- 		  else
- 		    # All subsequent reloadable object files will link in
- 		    # the last one created.
- 		    reload_objs="$objlist $last_robj"
--		    eval "concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\""
-+		    eval concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\"
- 		  fi
- 		  last_robj=$output_objdir/$output_la-${k}.$objext
- 		  func_arith $k + 1
-@@ -7433,11 +8401,11 @@ EOF
- 	      # files will link in the last one created.
- 	      test -z "$concat_cmds" || concat_cmds=$concat_cmds~
- 	      reload_objs="$objlist $last_robj"
--	      eval "concat_cmds=\"\${concat_cmds}$reload_cmds\""
-+	      eval concat_cmds=\"\${concat_cmds}$reload_cmds\"
- 	      if test -n "$last_robj"; then
--	        eval "concat_cmds=\"\${concat_cmds}~\$RM $last_robj\""
-+	        eval concat_cmds=\"\${concat_cmds}~\$RM $last_robj\"
- 	      fi
--	      delfiles="$delfiles $output"
-+	      func_append delfiles " $output"
- 
- 	    else
- 	      output=
-@@ -7450,9 +8418,9 @@ EOF
- 	      libobjs=$output
- 	      # Append the command to create the export file.
- 	      test -z "$concat_cmds" || concat_cmds=$concat_cmds~
--	      eval "concat_cmds=\"\$concat_cmds$export_symbols_cmds\""
-+	      eval concat_cmds=\"\$concat_cmds$export_symbols_cmds\"
- 	      if test -n "$last_robj"; then
--		eval "concat_cmds=\"\$concat_cmds~\$RM $last_robj\""
-+		eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\"
- 	      fi
- 	    fi
- 
-@@ -7471,7 +8439,7 @@ EOF
- 		lt_exit=$?
- 
- 		# Restore the uninstalled library and exit
--		if test "$mode" = relink; then
-+		if test "$opt_mode" = relink; then
- 		  ( cd "$output_objdir" && \
- 		    $RM "${realname}T" && \
- 		    $MV "${realname}U" "$realname" )
-@@ -7492,7 +8460,7 @@ EOF
- 	    if test -n "$export_symbols" && test -n "$include_expsyms"; then
- 	      tmp_export_symbols="$export_symbols"
- 	      test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols"
--	      $opt_dry_run || $ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"
-+	      $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"'
- 	    fi
- 
- 	    if test -n "$orig_export_symbols"; then
-@@ -7504,7 +8472,7 @@ EOF
- 	      # global variables. join(1) would be nice here, but unfortunately
- 	      # isn't a blessed tool.
- 	      $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter
--	      delfiles="$delfiles $export_symbols $output_objdir/$libname.filter"
-+	      func_append delfiles " $export_symbols $output_objdir/$libname.filter"
- 	      export_symbols=$output_objdir/$libname.def
- 	      $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols
- 	    fi
-@@ -7515,7 +8483,7 @@ EOF
- 	  output=$save_output
- 
- 	  if test -n "$convenience" && test -n "$whole_archive_flag_spec"; then
--	    eval "libobjs=\"\$libobjs $whole_archive_flag_spec\""
-+	    eval libobjs=\"\$libobjs $whole_archive_flag_spec\"
- 	    test "X$libobjs" = "X " && libobjs=
- 	  fi
- 	  # Expand the library linking commands again to reset the
-@@ -7539,23 +8507,23 @@ EOF
- 
- 	if test -n "$delfiles"; then
- 	  # Append the command to remove temporary files to $cmds.
--	  eval "cmds=\"\$cmds~\$RM $delfiles\""
-+	  eval cmds=\"\$cmds~\$RM $delfiles\"
- 	fi
- 
- 	# Add any objects from preloaded convenience libraries
- 	if test -n "$dlprefiles"; then
- 	  gentop="$output_objdir/${outputname}x"
--	  generated="$generated $gentop"
-+	  func_append generated " $gentop"
- 
- 	  func_extract_archives $gentop $dlprefiles
--	  libobjs="$libobjs $func_extract_archives_result"
-+	  func_append libobjs " $func_extract_archives_result"
- 	  test "X$libobjs" = "X " && libobjs=
- 	fi
- 
- 	save_ifs="$IFS"; IFS='~'
- 	for cmd in $cmds; do
- 	  IFS="$save_ifs"
--	  eval "cmd=\"$cmd\""
-+	  eval cmd=\"$cmd\"
- 	  $opt_silent || {
- 	    func_quote_for_expand "$cmd"
- 	    eval "func_echo $func_quote_for_expand_result"
-@@ -7564,7 +8532,7 @@ EOF
- 	    lt_exit=$?
- 
- 	    # Restore the uninstalled library and exit
--	    if test "$mode" = relink; then
-+	    if test "$opt_mode" = relink; then
- 	      ( cd "$output_objdir" && \
- 	        $RM "${realname}T" && \
- 		$MV "${realname}U" "$realname" )
-@@ -7576,8 +8544,8 @@ EOF
- 	IFS="$save_ifs"
- 
- 	# Restore the uninstalled library and exit
--	if test "$mode" = relink; then
--	  $opt_dry_run || (cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname) || exit $?
-+	if test "$opt_mode" = relink; then
-+	  $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname)' || exit $?
- 
- 	  if test -n "$convenience"; then
- 	    if test -z "$whole_archive_flag_spec"; then
-@@ -7656,17 +8624,20 @@ EOF
- 
-       if test -n "$convenience"; then
- 	if test -n "$whole_archive_flag_spec"; then
--	  eval "tmp_whole_archive_flags=\"$whole_archive_flag_spec\""
-+	  eval tmp_whole_archive_flags=\"$whole_archive_flag_spec\"
- 	  reload_conv_objs=$reload_objs\ `$ECHO "$tmp_whole_archive_flags" | $SED 's|,| |g'`
- 	else
- 	  gentop="$output_objdir/${obj}x"
--	  generated="$generated $gentop"
-+	  func_append generated " $gentop"
- 
- 	  func_extract_archives $gentop $convenience
- 	  reload_conv_objs="$reload_objs $func_extract_archives_result"
- 	fi
-       fi
- 
-+      # If we're not building shared, we need to use non_pic_objs
-+      test "$build_libtool_libs" != yes && libobjs="$non_pic_objects"
-+
-       # Create the old-style object.
-       reload_objs="$objs$old_deplibs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; /\.lib$/d; $lo2o" | $NL2SP`" $reload_conv_objs" ### testsuite: skip nested quoting test
- 
-@@ -7690,7 +8661,7 @@ EOF
- 	# Create an invalid libtool object if no PIC, so that we don't
- 	# accidentally link it into a program.
- 	# $show "echo timestamp > $libobj"
--	# $opt_dry_run || echo timestamp > $libobj || exit $?
-+	# $opt_dry_run || eval "echo timestamp > $libobj" || exit $?
- 	exit $EXIT_SUCCESS
-       fi
- 
-@@ -7740,8 +8711,8 @@ EOF
- 	if test "$tagname" = CXX ; then
- 	  case ${MACOSX_DEPLOYMENT_TARGET-10.0} in
- 	    10.[0123])
--	      compile_command="$compile_command ${wl}-bind_at_load"
--	      finalize_command="$finalize_command ${wl}-bind_at_load"
-+	      func_append compile_command " ${wl}-bind_at_load"
-+	      func_append finalize_command " ${wl}-bind_at_load"
- 	    ;;
- 	  esac
- 	fi
-@@ -7761,7 +8732,7 @@ EOF
- 	*)
- 	  case " $compile_deplibs " in
- 	  *" -L$path/$objdir "*)
--	    new_libs="$new_libs -L$path/$objdir" ;;
-+	    func_append new_libs " -L$path/$objdir" ;;
- 	  esac
- 	  ;;
- 	esac
-@@ -7771,17 +8742,17 @@ EOF
- 	-L*)
- 	  case " $new_libs " in
- 	  *" $deplib "*) ;;
--	  *) new_libs="$new_libs $deplib" ;;
-+	  *) func_append new_libs " $deplib" ;;
- 	  esac
- 	  ;;
--	*) new_libs="$new_libs $deplib" ;;
-+	*) func_append new_libs " $deplib" ;;
- 	esac
-       done
-       compile_deplibs="$new_libs"
- 
- 
--      compile_command="$compile_command $compile_deplibs"
--      finalize_command="$finalize_command $finalize_deplibs"
-+      func_append compile_command " $compile_deplibs"
-+      func_append finalize_command " $finalize_deplibs"
- 
-       if test -n "$rpath$xrpath"; then
- 	# If the user specified any rpath flags, then add them.
-@@ -7789,7 +8760,7 @@ EOF
- 	  # This is the magic to use -rpath.
- 	  case "$finalize_rpath " in
- 	  *" $libdir "*) ;;
--	  *) finalize_rpath="$finalize_rpath $libdir" ;;
-+	  *) func_append finalize_rpath " $libdir" ;;
- 	  esac
- 	done
-       fi
-@@ -7808,18 +8779,18 @@ EOF
- 	      *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
- 		;;
- 	      *)
--		hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
-+		func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
- 		;;
- 	      esac
- 	    fi
- 	  else
--	    eval "flag=\"$hardcode_libdir_flag_spec\""
--	    rpath="$rpath $flag"
-+	    eval flag=\"$hardcode_libdir_flag_spec\"
-+	    func_append rpath " $flag"
- 	  fi
- 	elif test -n "$runpath_var"; then
- 	  case "$perm_rpath " in
- 	  *" $libdir "*) ;;
--	  *) perm_rpath="$perm_rpath $libdir" ;;
-+	  *) func_append perm_rpath " $libdir" ;;
- 	  esac
- 	fi
- 	case $host in
-@@ -7828,12 +8799,12 @@ EOF
- 	  case :$dllsearchpath: in
- 	  *":$libdir:"*) ;;
- 	  ::) dllsearchpath=$libdir;;
--	  *) dllsearchpath="$dllsearchpath:$libdir";;
-+	  *) func_append dllsearchpath ":$libdir";;
- 	  esac
- 	  case :$dllsearchpath: in
- 	  *":$testbindir:"*) ;;
- 	  ::) dllsearchpath=$testbindir;;
--	  *) dllsearchpath="$dllsearchpath:$testbindir";;
-+	  *) func_append dllsearchpath ":$testbindir";;
- 	  esac
- 	  ;;
- 	esac
-@@ -7842,7 +8813,7 @@ EOF
-       if test -n "$hardcode_libdir_separator" &&
- 	 test -n "$hardcode_libdirs"; then
- 	libdir="$hardcode_libdirs"
--	eval "rpath=\" $hardcode_libdir_flag_spec\""
-+	eval rpath=\" $hardcode_libdir_flag_spec\"
-       fi
-       compile_rpath="$rpath"
- 
-@@ -7859,18 +8830,18 @@ EOF
- 	      *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*)
- 		;;
- 	      *)
--		hardcode_libdirs="$hardcode_libdirs$hardcode_libdir_separator$libdir"
-+		func_append hardcode_libdirs "$hardcode_libdir_separator$libdir"
- 		;;
- 	      esac
- 	    fi
- 	  else
--	    eval "flag=\"$hardcode_libdir_flag_spec\""
--	    rpath="$rpath $flag"
-+	    eval flag=\"$hardcode_libdir_flag_spec\"
-+	    func_append rpath " $flag"
- 	  fi
- 	elif test -n "$runpath_var"; then
- 	  case "$finalize_perm_rpath " in
- 	  *" $libdir "*) ;;
--	  *) finalize_perm_rpath="$finalize_perm_rpath $libdir" ;;
-+	  *) func_append finalize_perm_rpath " $libdir" ;;
- 	  esac
- 	fi
-       done
-@@ -7878,7 +8849,7 @@ EOF
-       if test -n "$hardcode_libdir_separator" &&
- 	 test -n "$hardcode_libdirs"; then
- 	libdir="$hardcode_libdirs"
--	eval "rpath=\" $hardcode_libdir_flag_spec\""
-+	eval rpath=\" $hardcode_libdir_flag_spec\"
-       fi
-       finalize_rpath="$rpath"
- 
-@@ -7921,6 +8892,12 @@ EOF
- 	exit_status=0
- 	func_show_eval "$link_command" 'exit_status=$?'
- 
-+	if test -n "$postlink_cmds"; then
-+	  func_to_tool_file "$output"
-+	  postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
-+	  func_execute_cmds "$postlink_cmds" 'exit $?'
-+	fi
-+
- 	# Delete the generated files.
- 	if test -f "$output_objdir/${outputname}S.${objext}"; then
- 	  func_show_eval '$RM "$output_objdir/${outputname}S.${objext}"'
-@@ -7943,7 +8920,7 @@ EOF
- 	  # We should set the runpath_var.
- 	  rpath=
- 	  for dir in $perm_rpath; do
--	    rpath="$rpath$dir:"
-+	    func_append rpath "$dir:"
- 	  done
- 	  compile_var="$runpath_var=\"$rpath\$$runpath_var\" "
- 	fi
-@@ -7951,7 +8928,7 @@ EOF
- 	  # We should set the runpath_var.
- 	  rpath=
- 	  for dir in $finalize_perm_rpath; do
--	    rpath="$rpath$dir:"
-+	    func_append rpath "$dir:"
- 	  done
- 	  finalize_var="$runpath_var=\"$rpath\$$runpath_var\" "
- 	fi
-@@ -7966,6 +8943,13 @@ EOF
- 	$opt_dry_run || $RM $output
- 	# Link the executable and exit
- 	func_show_eval "$link_command" 'exit $?'
-+
-+	if test -n "$postlink_cmds"; then
-+	  func_to_tool_file "$output"
-+	  postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
-+	  func_execute_cmds "$postlink_cmds" 'exit $?'
-+	fi
-+
- 	exit $EXIT_SUCCESS
-       fi
- 
-@@ -7999,6 +8983,12 @@ EOF
- 
-       func_show_eval "$link_command" 'exit $?'
- 
-+      if test -n "$postlink_cmds"; then
-+	func_to_tool_file "$output_objdir/$outputname"
-+	postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output_objdir/$outputname"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'`
-+	func_execute_cmds "$postlink_cmds" 'exit $?'
-+      fi
-+
-       # Now create the wrapper script.
-       func_verbose "creating $output"
- 
-@@ -8096,7 +9086,7 @@ EOF
- 	else
- 	  oldobjs="$old_deplibs $non_pic_objects"
- 	  if test "$preload" = yes && test -f "$symfileobj"; then
--	    oldobjs="$oldobjs $symfileobj"
-+	    func_append oldobjs " $symfileobj"
- 	  fi
- 	fi
- 	addlibs="$old_convenience"
-@@ -8104,10 +9094,10 @@ EOF
- 
-       if test -n "$addlibs"; then
- 	gentop="$output_objdir/${outputname}x"
--	generated="$generated $gentop"
-+	func_append generated " $gentop"
- 
- 	func_extract_archives $gentop $addlibs
--	oldobjs="$oldobjs $func_extract_archives_result"
-+	func_append oldobjs " $func_extract_archives_result"
-       fi
- 
-       # Do each command in the archive commands.
-@@ -8118,10 +9108,10 @@ EOF
- 	# Add any objects from preloaded convenience libraries
- 	if test -n "$dlprefiles"; then
- 	  gentop="$output_objdir/${outputname}x"
--	  generated="$generated $gentop"
-+	  func_append generated " $gentop"
- 
- 	  func_extract_archives $gentop $dlprefiles
--	  oldobjs="$oldobjs $func_extract_archives_result"
-+	  func_append oldobjs " $func_extract_archives_result"
- 	fi
- 
- 	# POSIX demands no paths to be encoded in archives.  We have
-@@ -8139,7 +9129,7 @@ EOF
- 	else
- 	  echo "copying selected object files to avoid basename conflicts..."
- 	  gentop="$output_objdir/${outputname}x"
--	  generated="$generated $gentop"
-+	  func_append generated " $gentop"
- 	  func_mkdir_p "$gentop"
- 	  save_oldobjs=$oldobjs
- 	  oldobjs=
-@@ -8163,18 +9153,28 @@ EOF
- 		esac
- 	      done
- 	      func_show_eval "ln $obj $gentop/$newobj || cp $obj $gentop/$newobj"
--	      oldobjs="$oldobjs $gentop/$newobj"
-+	      func_append oldobjs " $gentop/$newobj"
- 	      ;;
--	    *) oldobjs="$oldobjs $obj" ;;
-+	    *) func_append oldobjs " $obj" ;;
- 	    esac
- 	  done
- 	fi
--	eval "cmds=\"$old_archive_cmds\""
-+	eval cmds=\"$old_archive_cmds\"
- 
- 	func_len " $cmds"
- 	len=$func_len_result
- 	if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then
- 	  cmds=$old_archive_cmds
-+	elif test -n "$archiver_list_spec"; then
-+	  func_verbose "using command file archive linking..."
-+	  for obj in $oldobjs
-+	  do
-+	    func_to_tool_file "$obj"
-+	    $ECHO "$func_to_tool_file_result"
-+	  done > $output_objdir/$libname.libcmd
-+	  func_to_tool_file "$output_objdir/$libname.libcmd"
-+	  oldobjs=" $archiver_list_spec$func_to_tool_file_result"
-+	  cmds=$old_archive_cmds
- 	else
- 	  # the command line is too long to link in one step, link in parts
- 	  func_verbose "using piecewise archive linking..."
-@@ -8189,7 +9189,7 @@ EOF
- 	  do
- 	    last_oldobj=$obj
- 	  done
--	  eval "test_cmds=\"$old_archive_cmds\""
-+	  eval test_cmds=\"$old_archive_cmds\"
- 	  func_len " $test_cmds"
- 	  len0=$func_len_result
- 	  len=$len0
-@@ -8208,7 +9208,7 @@ EOF
- 		RANLIB=$save_RANLIB
- 	      fi
- 	      test -z "$concat_cmds" || concat_cmds=$concat_cmds~
--	      eval "concat_cmds=\"\${concat_cmds}$old_archive_cmds\""
-+	      eval concat_cmds=\"\${concat_cmds}$old_archive_cmds\"
- 	      objlist=
- 	      len=$len0
- 	    fi
-@@ -8216,9 +9216,9 @@ EOF
- 	  RANLIB=$save_RANLIB
- 	  oldobjs=$objlist
- 	  if test "X$oldobjs" = "X" ; then
--	    eval "cmds=\"\$concat_cmds\""
-+	    eval cmds=\"\$concat_cmds\"
- 	  else
--	    eval "cmds=\"\$concat_cmds~\$old_archive_cmds\""
-+	    eval cmds=\"\$concat_cmds~\$old_archive_cmds\"
- 	  fi
- 	fi
-       fi
-@@ -8268,12 +9268,23 @@ EOF
- 	      *.la)
- 		func_basename "$deplib"
- 		name="$func_basename_result"
--		libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib`
-+		func_resolve_sysroot "$deplib"
-+		eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
- 		test -z "$libdir" && \
- 		  func_fatal_error "\`$deplib' is not a valid libtool archive"
--		newdependency_libs="$newdependency_libs $libdir/$name"
-+		func_append newdependency_libs " ${lt_sysroot:+=}$libdir/$name"
-+		;;
-+	      -L*)
-+		func_stripname -L '' "$deplib"
-+		func_replace_sysroot "$func_stripname_result"
-+		func_append newdependency_libs " -L$func_replace_sysroot_result"
- 		;;
--	      *) newdependency_libs="$newdependency_libs $deplib" ;;
-+	      -R*)
-+		func_stripname -R '' "$deplib"
-+		func_replace_sysroot "$func_stripname_result"
-+		func_append newdependency_libs " -R$func_replace_sysroot_result"
-+		;;
-+	      *) func_append newdependency_libs " $deplib" ;;
- 	      esac
- 	    done
- 	    dependency_libs="$newdependency_libs"
-@@ -8284,12 +9295,14 @@ EOF
- 	      *.la)
- 	        func_basename "$lib"
- 		name="$func_basename_result"
--		libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
-+		func_resolve_sysroot "$lib"
-+		eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
-+
- 		test -z "$libdir" && \
- 		  func_fatal_error "\`$lib' is not a valid libtool archive"
--		newdlfiles="$newdlfiles $libdir/$name"
-+		func_append newdlfiles " ${lt_sysroot:+=}$libdir/$name"
- 		;;
--	      *) newdlfiles="$newdlfiles $lib" ;;
-+	      *) func_append newdlfiles " $lib" ;;
- 	      esac
- 	    done
- 	    dlfiles="$newdlfiles"
-@@ -8303,10 +9316,11 @@ EOF
- 		# the library:
- 		func_basename "$lib"
- 		name="$func_basename_result"
--		libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib`
-+		func_resolve_sysroot "$lib"
-+		eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result`
- 		test -z "$libdir" && \
- 		  func_fatal_error "\`$lib' is not a valid libtool archive"
--		newdlprefiles="$newdlprefiles $libdir/$name"
-+		func_append newdlprefiles " ${lt_sysroot:+=}$libdir/$name"
- 		;;
- 	      esac
- 	    done
-@@ -8318,7 +9332,7 @@ EOF
- 		[\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;;
- 		*) abs=`pwd`"/$lib" ;;
- 	      esac
--	      newdlfiles="$newdlfiles $abs"
-+	      func_append newdlfiles " $abs"
- 	    done
- 	    dlfiles="$newdlfiles"
- 	    newdlprefiles=
-@@ -8327,7 +9341,7 @@ EOF
- 		[\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;;
- 		*) abs=`pwd`"/$lib" ;;
- 	      esac
--	      newdlprefiles="$newdlprefiles $abs"
-+	      func_append newdlprefiles " $abs"
- 	    done
- 	    dlprefiles="$newdlprefiles"
- 	  fi
-@@ -8412,7 +9426,7 @@ relink_command=\"$relink_command\""
-     exit $EXIT_SUCCESS
- }
- 
--{ test "$mode" = link || test "$mode" = relink; } &&
-+{ test "$opt_mode" = link || test "$opt_mode" = relink; } &&
-     func_mode_link ${1+"$@"}
- 
- 
-@@ -8432,9 +9446,9 @@ func_mode_uninstall ()
-     for arg
-     do
-       case $arg in
--      -f) RM="$RM $arg"; rmforce=yes ;;
--      -*) RM="$RM $arg" ;;
--      *) files="$files $arg" ;;
-+      -f) func_append RM " $arg"; rmforce=yes ;;
-+      -*) func_append RM " $arg" ;;
-+      *) func_append files " $arg" ;;
-       esac
-     done
- 
-@@ -8443,24 +9457,23 @@ func_mode_uninstall ()
- 
-     rmdirs=
- 
--    origobjdir="$objdir"
-     for file in $files; do
-       func_dirname "$file" "" "."
-       dir="$func_dirname_result"
-       if test "X$dir" = X.; then
--	objdir="$origobjdir"
-+	odir="$objdir"
-       else
--	objdir="$dir/$origobjdir"
-+	odir="$dir/$objdir"
-       fi
-       func_basename "$file"
-       name="$func_basename_result"
--      test "$mode" = uninstall && objdir="$dir"
-+      test "$opt_mode" = uninstall && odir="$dir"
- 
--      # Remember objdir for removal later, being careful to avoid duplicates
--      if test "$mode" = clean; then
-+      # Remember odir for removal later, being careful to avoid duplicates
-+      if test "$opt_mode" = clean; then
- 	case " $rmdirs " in
--	  *" $objdir "*) ;;
--	  *) rmdirs="$rmdirs $objdir" ;;
-+	  *" $odir "*) ;;
-+	  *) func_append rmdirs " $odir" ;;
- 	esac
-       fi
- 
-@@ -8486,18 +9499,17 @@ func_mode_uninstall ()
- 
- 	  # Delete the libtool libraries and symlinks.
- 	  for n in $library_names; do
--	    rmfiles="$rmfiles $objdir/$n"
-+	    func_append rmfiles " $odir/$n"
- 	  done
--	  test -n "$old_library" && rmfiles="$rmfiles $objdir/$old_library"
-+	  test -n "$old_library" && func_append rmfiles " $odir/$old_library"
- 
--	  case "$mode" in
-+	  case "$opt_mode" in
- 	  clean)
--	    case "  $library_names " in
--	    # "  " in the beginning catches empty $dlname
-+	    case " $library_names " in
- 	    *" $dlname "*) ;;
--	    *) rmfiles="$rmfiles $objdir/$dlname" ;;
-+	    *) test -n "$dlname" && func_append rmfiles " $odir/$dlname" ;;
- 	    esac
--	    test -n "$libdir" && rmfiles="$rmfiles $objdir/$name $objdir/${name}i"
-+	    test -n "$libdir" && func_append rmfiles " $odir/$name $odir/${name}i"
- 	    ;;
- 	  uninstall)
- 	    if test -n "$library_names"; then
-@@ -8525,19 +9537,19 @@ func_mode_uninstall ()
- 	  # Add PIC object to the list of files to remove.
- 	  if test -n "$pic_object" &&
- 	     test "$pic_object" != none; then
--	    rmfiles="$rmfiles $dir/$pic_object"
-+	    func_append rmfiles " $dir/$pic_object"
- 	  fi
- 
- 	  # Add non-PIC object to the list of files to remove.
- 	  if test -n "$non_pic_object" &&
- 	     test "$non_pic_object" != none; then
--	    rmfiles="$rmfiles $dir/$non_pic_object"
-+	    func_append rmfiles " $dir/$non_pic_object"
- 	  fi
- 	fi
- 	;;
- 
-       *)
--	if test "$mode" = clean ; then
-+	if test "$opt_mode" = clean ; then
- 	  noexename=$name
- 	  case $file in
- 	  *.exe)
-@@ -8547,7 +9559,7 @@ func_mode_uninstall ()
- 	    noexename=$func_stripname_result
- 	    # $file with .exe has already been added to rmfiles,
- 	    # add $file without .exe
--	    rmfiles="$rmfiles $file"
-+	    func_append rmfiles " $file"
- 	    ;;
- 	  esac
- 	  # Do a test to see if this is a libtool program.
-@@ -8556,7 +9568,7 @@ func_mode_uninstall ()
- 	      func_ltwrapper_scriptname "$file"
- 	      relink_command=
- 	      func_source $func_ltwrapper_scriptname_result
--	      rmfiles="$rmfiles $func_ltwrapper_scriptname_result"
-+	      func_append rmfiles " $func_ltwrapper_scriptname_result"
- 	    else
- 	      relink_command=
- 	      func_source $dir/$noexename
-@@ -8564,12 +9576,12 @@ func_mode_uninstall ()
- 
- 	    # note $name still contains .exe if it was in $file originally
- 	    # as does the version of $file that was added into $rmfiles
--	    rmfiles="$rmfiles $objdir/$name $objdir/${name}S.${objext}"
-+	    func_append rmfiles " $odir/$name $odir/${name}S.${objext}"
- 	    if test "$fast_install" = yes && test -n "$relink_command"; then
--	      rmfiles="$rmfiles $objdir/lt-$name"
-+	      func_append rmfiles " $odir/lt-$name"
- 	    fi
- 	    if test "X$noexename" != "X$name" ; then
--	      rmfiles="$rmfiles $objdir/lt-${noexename}.c"
-+	      func_append rmfiles " $odir/lt-${noexename}.c"
- 	    fi
- 	  fi
- 	fi
-@@ -8577,7 +9589,6 @@ func_mode_uninstall ()
-       esac
-       func_show_eval "$RM $rmfiles" 'exit_status=1'
-     done
--    objdir="$origobjdir"
- 
-     # Try to remove the ${objdir}s in the directories where we deleted files
-     for dir in $rmdirs; do
-@@ -8589,16 +9600,16 @@ func_mode_uninstall ()
-     exit $exit_status
- }
- 
--{ test "$mode" = uninstall || test "$mode" = clean; } &&
-+{ test "$opt_mode" = uninstall || test "$opt_mode" = clean; } &&
-     func_mode_uninstall ${1+"$@"}
- 
--test -z "$mode" && {
-+test -z "$opt_mode" && {
-   help="$generic_help"
-   func_fatal_help "you must specify a MODE"
- }
- 
- test -z "$exec_cmd" && \
--  func_fatal_help "invalid operation mode \`$mode'"
-+  func_fatal_help "invalid operation mode \`$opt_mode'"
- 
- if test -n "$exec_cmd"; then
-   eval exec "$exec_cmd"
-diff --git a/ltoptions.m4 b/ltoptions.m4
-index 5ef12ced2a..17cfd51c0b 100644
---- a/ltoptions.m4
-+++ b/ltoptions.m4
-@@ -8,7 +8,7 @@
- # unlimited permission to copy and/or distribute it, with or without
- # modifications, as long as this notice is preserved.
- 
--# serial 6 ltoptions.m4
-+# serial 7 ltoptions.m4
- 
- # This is to help aclocal find these macros, as it can't see m4_define.
- AC_DEFUN([LTOPTIONS_VERSION], [m4_if([1])])
-diff --git a/ltversion.m4 b/ltversion.m4
-index bf87f77132..9c7b5d4118 100644
---- a/ltversion.m4
-+++ b/ltversion.m4
-@@ -7,17 +7,17 @@
- # unlimited permission to copy and/or distribute it, with or without
- # modifications, as long as this notice is preserved.
- 
--# Generated from ltversion.in.
-+# @configure_input@
- 
--# serial 3134 ltversion.m4
-+# serial 3293 ltversion.m4
- # This file is part of GNU Libtool
- 
--m4_define([LT_PACKAGE_VERSION], [2.2.7a])
--m4_define([LT_PACKAGE_REVISION], [1.3134])
-+m4_define([LT_PACKAGE_VERSION], [2.4])
-+m4_define([LT_PACKAGE_REVISION], [1.3293])
- 
- AC_DEFUN([LTVERSION_VERSION],
--[macro_version='2.2.7a'
--macro_revision='1.3134'
-+[macro_version='2.4'
-+macro_revision='1.3293'
- _LT_DECL(, macro_version, 0, [Which release of libtool.m4 was used?])
- _LT_DECL(, macro_revision, 0)
- ])
-diff --git a/lt~obsolete.m4 b/lt~obsolete.m4
-index bf92b5e079..c573da90c5 100644
---- a/lt~obsolete.m4
-+++ b/lt~obsolete.m4
-@@ -7,7 +7,7 @@
- # unlimited permission to copy and/or distribute it, with or without
- # modifications, as long as this notice is preserved.
- 
--# serial 4 lt~obsolete.m4
-+# serial 5 lt~obsolete.m4
- 
- # These exist entirely to fool aclocal when bootstrapping libtool.
- #
-diff --git a/opcodes/configure b/opcodes/configure
-index 0b352a454d..7eaea7db73 100755
---- a/opcodes/configure
-+++ b/opcodes/configure
-@@ -650,6 +650,9 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
-+ac_ct_AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -763,6 +766,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_targets
- enable_werror
-@@ -1423,6 +1427,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
- 
- Some influential environment variables:
-   CC          C compiler command
-@@ -5115,8 +5121,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -5156,7 +5162,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -5842,8 +5848,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -5892,6 +5898,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if test "${lt_cv_to_host_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if test "${lt_cv_ld_reload_flag+set}" = set; then :
-@@ -5908,6 +5988,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -6076,7 +6161,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -6230,6 +6316,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -6245,9 +6346,162 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_AR+set}" = set; then :
-@@ -6263,7 +6517,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6283,11 +6537,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
-@@ -6303,7 +6561,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -6322,6 +6580,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -6333,16 +6595,72 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if test "${lt_cv_ar_at_file+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
- 
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
- 
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
- 
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
- 
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
- 
- 
- 
-@@ -6684,8 +7002,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -6721,6 +7039,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -6762,6 +7081,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -6773,7 +7104,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -6799,8 +7130,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -6810,8 +7141,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -6848,6 +7179,16 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
- 
- 
- 
-@@ -6869,6 +7210,45 @@ fi
- 
- 
- 
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
-+
-+
-+
-+
-+
- # Check whether --enable-libtool-lock was given.
- if test "${enable_libtool_lock+set}" = set; then :
-   enableval=$enable_libtool_lock;
-@@ -7075,6 +7455,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if test "${lt_cv_path_mainfest_tool+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -7638,6 +8135,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -7803,7 +8302,8 @@ fi
- LIBTOOL_DEPS="$ltmain"
- 
- # Always use our own libtool.
--LIBTOOL='$(SHELL) $(top_builddir)/libtool'
-+LIBTOOL='$(SHELL) $(top_builddir)'
-+LIBTOOL="$LIBTOOL/${host_alias}-libtool"
- 
- 
- 
-@@ -7892,7 +8392,7 @@ aix3*)
- esac
- 
- # Global variables:
--ofile=libtool
-+ofile=${host_alias}-libtool
- can_build_shared=yes
- 
- # All known linkers require a `.a' archive for static linking (except MSVC,
-@@ -8190,8 +8690,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -8357,6 +8855,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -8419,7 +8923,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -8476,13 +8980,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if test "${lt_cv_prog_compiler_pic+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -8543,6 +9051,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -8893,7 +9406,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -8992,12 +9506,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -9011,8 +9525,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -9030,8 +9544,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9077,8 +9591,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -9208,7 +9722,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9221,22 +9741,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -9248,7 +9775,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
- 
- int
-@@ -9261,22 +9794,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -9321,20 +9861,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -9395,7 +9978,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -9403,7 +9986,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -9419,7 +10002,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -9443,10 +10026,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -9525,23 +10108,36 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if test "${lt_cv_irix_exported_symbol+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -9626,7 +10222,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -9645,9 +10241,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -10223,8 +10819,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -10257,13 +10854,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -10355,7 +11010,7 @@ haiku*)
-   soname_spec='${libname}${release}${shared_ext}$major'
-   shlibpath_var=LIBRARY_PATH
-   shlibpath_overrides_runpath=yes
--  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
-+  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
-   hardcode_into_libs=yes
-   ;;
- 
-@@ -11195,10 +11850,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11301,10 +11956,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -12543,7 +13198,7 @@ if test "$enable_shared" = "yes"; then
- # since libbfd may not pull in the entirety of libiberty.
-   x=`sed -n -e 's/^[ 	]*PICFLAG[ 	]*=[ 	]*//p' < ../libiberty/Makefile | sed -n '$p'`
-   if test -n "$x"; then
--    SHARED_LIBADD="-L`pwd`/../libiberty/pic -liberty"
-+    SHARED_LIBADD="`pwd`/../libiberty/pic/libiberty.a"
-   fi
- 
-   case "${host}" in
-@@ -13518,13 +14173,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -13539,14 +14201,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -13579,12 +14244,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -13639,8 +14304,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -13650,12 +14320,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -13671,7 +14343,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -13707,6 +14378,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -14463,7 +15135,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -14566,19 +15239,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -14608,6 +15304,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -14617,6 +15319,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -14731,12 +15436,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -14823,9 +15528,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -14841,6 +15543,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -14873,210 +15578,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
-diff --git a/opcodes/configure.ac b/opcodes/configure.ac
-index b9f5eb8a4f..a31b66a2f0 100644
---- a/opcodes/configure.ac
-+++ b/opcodes/configure.ac
-@@ -167,7 +167,7 @@ changequote(,)dnl
-   x=`sed -n -e 's/^[ 	]*PICFLAG[ 	]*=[ 	]*//p' < ../libiberty/Makefile | sed -n '$p'`
- changequote([,])dnl
-   if test -n "$x"; then
--    SHARED_LIBADD="-L`pwd`/../libiberty/pic -liberty"
-+    SHARED_LIBADD="`pwd`/../libiberty/pic/libiberty.a"
-   fi
- 
-   case "${host}" in
-diff --git a/zlib/configure b/zlib/configure
-index bed9e3ea2b..caef0b674e 100755
---- a/zlib/configure
-+++ b/zlib/configure
-@@ -614,8 +614,11 @@ OTOOL
- LIPO
- NMEDIT
- DSYMUTIL
-+MANIFEST_TOOL
- RANLIB
-+ac_ct_AR
- AR
-+DLLTOOL
- OBJDUMP
- LN_S
- NM
-@@ -737,6 +740,7 @@ enable_static
- with_pic
- enable_fast_install
- with_gnu_ld
-+with_libtool_sysroot
- enable_libtool_lock
- enable_host_shared
- '
-@@ -1385,6 +1389,8 @@ Optional Packages:
-   --with-pic              try to use only PIC/non-PIC objects [default=use
-                           both]
-   --with-gnu-ld           assume the C compiler uses GNU ld [default=no]
-+  --with-libtool-sysroot=DIR Search for dependent libraries within DIR
-+                        (or the compiler's sysroot if not specified).
- 
- Some influential environment variables:
-   CC          C compiler command
-@@ -3910,8 +3916,8 @@ esac
- 
- 
- 
--macro_version='2.2.7a'
--macro_revision='1.3134'
-+macro_version='2.4'
-+macro_revision='1.3293'
- 
- 
- 
-@@ -3951,7 +3957,7 @@ ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking how to print strings" >&5
- $as_echo_n "checking how to print strings... " >&6; }
- # Test print first, because it will be a builtin if present.
--if test "X`print -r -- -n 2>/dev/null`" = X-n && \
-+if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
-    test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
-   ECHO='print -r --'
- elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
-@@ -4767,8 +4773,8 @@ $as_echo_n "checking whether the shell understands some XSI constructs... " >&6;
- # Try some XSI features
- xsi_shell=no
- ( _lt_dummy="a/b/c"
--  test "${_lt_dummy##*/},${_lt_dummy%/*},"${_lt_dummy%"$_lt_dummy"}, \
--      = c,a/b,, \
-+  test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \
-+      = c,a/b,b/c, \
-     && eval 'test $(( 1 + 1 )) -eq 2 \
-     && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \
-   && xsi_shell=yes
-@@ -4817,6 +4823,80 @@ esac
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to $host format" >&5
-+$as_echo_n "checking how to convert $build file names to $host format... " >&6; }
-+if test "${lt_cv_to_host_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
-+        ;;
-+    esac
-+    ;;
-+  *-*-cygwin* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
-+        ;;
-+      *-*-cygwin* )
-+        lt_cv_to_host_file_cmd=func_convert_file_noop
-+        ;;
-+      * ) # otherwise, assume *nix
-+        lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
-+        ;;
-+    esac
-+    ;;
-+  * ) # unhandled hosts (and "normal" native builds)
-+    lt_cv_to_host_file_cmd=func_convert_file_noop
-+    ;;
-+esac
-+
-+fi
-+
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_host_file_cmd" >&5
-+$as_echo "$lt_cv_to_host_file_cmd" >&6; }
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to convert $build file names to toolchain format" >&5
-+$as_echo_n "checking how to convert $build file names to toolchain format... " >&6; }
-+if test "${lt_cv_to_tool_file_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  #assume ordinary cross tools, or native build.
-+lt_cv_to_tool_file_cmd=func_convert_file_noop
-+case $host in
-+  *-*-mingw* )
-+    case $build in
-+      *-*-mingw* ) # actually msys
-+        lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
-+        ;;
-+    esac
-+    ;;
-+esac
-+
-+fi
-+
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_to_tool_file_cmd" >&5
-+$as_echo "$lt_cv_to_tool_file_cmd" >&6; }
-+
-+
-+
-+
-+
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $LD option to reload object files" >&5
- $as_echo_n "checking for $LD option to reload object files... " >&6; }
- if test "${lt_cv_ld_reload_flag+set}" = set; then :
-@@ -4833,6 +4913,11 @@ case $reload_flag in
- esac
- reload_cmds='$LD$reload_flag -o $output$reload_objs'
- case $host_os in
-+  cygwin* | mingw* | pw32* | cegcc*)
-+    if test "$GCC" != yes; then
-+      reload_cmds=false
-+    fi
-+    ;;
-   darwin*)
-     if test "$GCC" = yes; then
-       reload_cmds='$LTCC $LTCFLAGS -nostdlib ${wl}-r -o $output$reload_objs'
-@@ -5001,7 +5086,8 @@ mingw* | pw32*)
-     lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
-     lt_cv_file_magic_cmd='func_win32_libid'
-   else
--    lt_cv_deplibs_check_method='file_magic file format pei*-i386(.*architecture: i386)?'
-+    # Keep this pattern in sync with the one in func_win32_libid.
-+    lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
-     lt_cv_file_magic_cmd='$OBJDUMP -f'
-   fi
-   ;;
-@@ -5155,6 +5241,21 @@ esac
- fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_deplibs_check_method" >&5
- $as_echo "$lt_cv_deplibs_check_method" >&6; }
-+
-+file_magic_glob=
-+want_nocaseglob=no
-+if test "$build" = "$host"; then
-+  case $host_os in
-+  mingw* | pw32*)
-+    if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
-+      want_nocaseglob=yes
-+    else
-+      file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[\1]\/[\1]\/g;/g"`
-+    fi
-+    ;;
-+  esac
-+fi
-+
- file_magic_cmd=$lt_cv_file_magic_cmd
- deplibs_check_method=$lt_cv_deplibs_check_method
- test -z "$deplibs_check_method" && deplibs_check_method=unknown
-@@ -5170,9 +5271,163 @@ test -z "$deplibs_check_method" && deplibs_check_method=unknown
- 
- 
- 
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
- if test -n "$ac_tool_prefix"; then
--  # Extract the first word of "${ac_tool_prefix}ar", so it can be a program name with args.
--set dummy ${ac_tool_prefix}ar; ac_word=$2
-+  # Extract the first word of "${ac_tool_prefix}dlltool", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$DLLTOOL"; then
-+  ac_cv_prog_DLLTOOL="$DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_DLLTOOL="${ac_tool_prefix}dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+DLLTOOL=$ac_cv_prog_DLLTOOL
-+if test -n "$DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DLLTOOL" >&5
-+$as_echo "$DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_DLLTOOL"; then
-+  ac_ct_DLLTOOL=$DLLTOOL
-+  # Extract the first word of "dlltool", so it can be a program name with args.
-+set dummy dlltool; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_DLLTOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_DLLTOOL"; then
-+  ac_cv_prog_ac_ct_DLLTOOL="$ac_ct_DLLTOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_DLLTOOL="dlltool"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_DLLTOOL=$ac_cv_prog_ac_ct_DLLTOOL
-+if test -n "$ac_ct_DLLTOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_DLLTOOL" >&5
-+$as_echo "$ac_ct_DLLTOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_DLLTOOL" = x; then
-+    DLLTOOL="false"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    DLLTOOL=$ac_ct_DLLTOOL
-+  fi
-+else
-+  DLLTOOL="$ac_cv_prog_DLLTOOL"
-+fi
-+
-+test -z "$DLLTOOL" && DLLTOOL=dlltool
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking how to associate runtime and link libraries" >&5
-+$as_echo_n "checking how to associate runtime and link libraries... " >&6; }
-+if test "${lt_cv_sharedlib_from_linklib_cmd+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_sharedlib_from_linklib_cmd='unknown'
-+
-+case $host_os in
-+cygwin* | mingw* | pw32* | cegcc*)
-+  # two different shell functions defined in ltmain.sh
-+  # decide which to use based on capabilities of $DLLTOOL
-+  case `$DLLTOOL --help 2>&1` in
-+  *--identify-strict*)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
-+    ;;
-+  *)
-+    lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
-+    ;;
-+  esac
-+  ;;
-+*)
-+  # fallback: assume linklib IS sharedlib
-+  lt_cv_sharedlib_from_linklib_cmd="$ECHO"
-+  ;;
-+esac
-+
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_sharedlib_from_linklib_cmd" >&5
-+$as_echo "$lt_cv_sharedlib_from_linklib_cmd" >&6; }
-+sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
-+test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
-+
-+
-+
-+
-+
-+
-+
-+
-+if test -n "$ac_tool_prefix"; then
-+  for ac_prog in ar
-+  do
-+    # Extract the first word of "$ac_tool_prefix$ac_prog", so it can be a program name with args.
-+set dummy $ac_tool_prefix$ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_AR+set}" = set; then :
-@@ -5188,7 +5443,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_AR="${ac_tool_prefix}ar"
-+    ac_cv_prog_AR="$ac_tool_prefix$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -5208,11 +5463,15 @@ $as_echo "no" >&6; }
- fi
- 
- 
-+    test -n "$AR" && break
-+  done
- fi
--if test -z "$ac_cv_prog_AR"; then
-+if test -z "$AR"; then
-   ac_ct_AR=$AR
--  # Extract the first word of "ar", so it can be a program name with args.
--set dummy ar; ac_word=$2
-+  for ac_prog in ar
-+do
-+  # Extract the first word of "$ac_prog", so it can be a program name with args.
-+set dummy $ac_prog; ac_word=$2
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
- $as_echo_n "checking for $ac_word... " >&6; }
- if test "${ac_cv_prog_ac_ct_AR+set}" = set; then :
-@@ -5228,7 +5487,7 @@ do
-   test -z "$as_dir" && as_dir=.
-     for ac_exec_ext in '' $ac_executable_extensions; do
-   if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
--    ac_cv_prog_ac_ct_AR="ar"
-+    ac_cv_prog_ac_ct_AR="$ac_prog"
-     $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-     break 2
-   fi
-@@ -5247,6 +5506,10 @@ else
- $as_echo "no" >&6; }
- fi
- 
-+
-+  test -n "$ac_ct_AR" && break
-+done
-+
-   if test "x$ac_ct_AR" = x; then
-     AR="false"
-   else
-@@ -5258,16 +5521,72 @@ ac_tool_warned=yes ;;
- esac
-     AR=$ac_ct_AR
-   fi
--else
--  AR="$ac_cv_prog_AR"
- fi
- 
--test -z "$AR" && AR=ar
--test -z "$AR_FLAGS" && AR_FLAGS=cru
-+: ${AR=ar}
-+: ${AR_FLAGS=cru}
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for archiver @FILE support" >&5
-+$as_echo_n "checking for archiver @FILE support... " >&6; }
-+if test "${lt_cv_ar_at_file+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_ar_at_file=no
-+   cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-+/* end confdefs.h.  */
-+
-+int
-+main ()
-+{
- 
-+  ;
-+  return 0;
-+}
-+_ACEOF
-+if ac_fn_c_try_compile "$LINENO"; then :
-+  echo conftest.$ac_objext > conftest.lst
-+      lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&5'
-+      { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+      if test "$ac_status" -eq 0; then
-+	# Ensure the archiver fails upon bogus file names.
-+	rm -f conftest.$ac_objext libconftest.a
-+	{ { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$lt_ar_try\""; } >&5
-+  (eval $lt_ar_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }
-+	if test "$ac_status" -ne 0; then
-+          lt_cv_ar_at_file=@
-+        fi
-+      fi
-+      rm -f conftest.* libconftest.a
- 
-+fi
-+rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
- 
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_ar_at_file" >&5
-+$as_echo "$lt_cv_ar_at_file" >&6; }
- 
-+if test "x$lt_cv_ar_at_file" = xno; then
-+  archiver_list_spec=
-+else
-+  archiver_list_spec=$lt_cv_ar_at_file
-+fi
- 
- 
- 
-@@ -5609,8 +5928,8 @@ esac
- lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'"
- 
- # Transform an extracted symbol line into symbol name and symbol address
--lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
--lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\) $/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"\2\", (void *) \&\2},/p'"
-+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([^ ]*\)[ ]*$/  {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([^ ]*\) \(lib[^ ]*\)$/  {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([^ ]*\) \([^ ]*\)$/  {\"lib\2\", (void *) \&\2},/p'"
- 
- # Handle CRLF in mingw tool chain
- opt_cr=
-@@ -5646,6 +5965,7 @@ for ac_symprfx in "" "_"; do
-   else
-     lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[	 ]\($symcode$symcode*\)[	 ][	 ]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
-   fi
-+  lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
- 
-   # Check to see that the pipe works correctly.
-   pipe_works=no
-@@ -5687,6 +6007,18 @@ _LT_EOF
-       if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
- 	if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
- 	  cat <<_LT_EOF > conftest.$ac_ext
-+/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests.  */
-+#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE)
-+/* DATA imports from DLLs on WIN32 con't be const, because runtime
-+   relocations are performed -- see ld's documentation on pseudo-relocs.  */
-+# define LT_DLSYM_CONST
-+#elif defined(__osf__)
-+/* This system does not cope well with relocations in const data.  */
-+# define LT_DLSYM_CONST
-+#else
-+# define LT_DLSYM_CONST const
-+#endif
-+
- #ifdef __cplusplus
- extern "C" {
- #endif
-@@ -5698,7 +6030,7 @@ _LT_EOF
- 	  cat <<_LT_EOF >> conftest.$ac_ext
- 
- /* The mapping between symbol names and symbols.  */
--const struct {
-+LT_DLSYM_CONST struct {
-   const char *name;
-   void       *address;
- }
-@@ -5724,8 +6056,8 @@ static const void *lt_preloaded_setup() {
- _LT_EOF
- 	  # Now try linking the two files.
- 	  mv conftest.$ac_objext conftstm.$ac_objext
--	  lt_save_LIBS="$LIBS"
--	  lt_save_CFLAGS="$CFLAGS"
-+	  lt_globsym_save_LIBS=$LIBS
-+	  lt_globsym_save_CFLAGS=$CFLAGS
- 	  LIBS="conftstm.$ac_objext"
- 	  CFLAGS="$CFLAGS$lt_prog_compiler_no_builtin_flag"
- 	  if { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_link\""; } >&5
-@@ -5735,8 +6067,8 @@ _LT_EOF
-   test $ac_status = 0; } && test -s conftest${ac_exeext}; then
- 	    pipe_works=yes
- 	  fi
--	  LIBS="$lt_save_LIBS"
--	  CFLAGS="$lt_save_CFLAGS"
-+	  LIBS=$lt_globsym_save_LIBS
-+	  CFLAGS=$lt_globsym_save_CFLAGS
- 	else
- 	  echo "cannot find nm_test_func in $nlist" >&5
- 	fi
-@@ -5773,6 +6105,19 @@ else
- $as_echo "ok" >&6; }
- fi
- 
-+# Response file support.
-+if test "$lt_cv_nm_interface" = "MS dumpbin"; then
-+  nm_file_list_spec='@'
-+elif $NM --help 2>/dev/null | grep '[@]FILE' >/dev/null; then
-+  nm_file_list_spec='@'
-+fi
-+
-+
-+
-+
-+
-+
-+
- 
- 
- 
-@@ -5793,6 +6138,41 @@ fi
- 
- 
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for sysroot" >&5
-+$as_echo_n "checking for sysroot... " >&6; }
-+
-+# Check whether --with-libtool-sysroot was given.
-+if test "${with_libtool_sysroot+set}" = set; then :
-+  withval=$with_libtool_sysroot;
-+else
-+  with_libtool_sysroot=no
-+fi
-+
-+
-+lt_sysroot=
-+case ${with_libtool_sysroot} in #(
-+ yes)
-+   if test "$GCC" = yes; then
-+     lt_sysroot=`$CC --print-sysroot 2>/dev/null`
-+   fi
-+   ;; #(
-+ /*)
-+   lt_sysroot=`echo "$with_libtool_sysroot" | sed -e "$sed_quote_subst"`
-+   ;; #(
-+ no|'')
-+   ;; #(
-+ *)
-+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${with_libtool_sysroot}" >&5
-+$as_echo "${with_libtool_sysroot}" >&6; }
-+   as_fn_error "The sysroot must be an absolute path." "$LINENO" 5
-+   ;;
-+esac
-+
-+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: ${lt_sysroot:-no}" >&5
-+$as_echo "${lt_sysroot:-no}" >&6; }
-+
-+
-+
- 
- 
- # Check whether --enable-libtool-lock was given.
-@@ -6004,6 +6384,123 @@ esac
- 
- need_locks="$enable_libtool_lock"
- 
-+if test -n "$ac_tool_prefix"; then
-+  # Extract the first word of "${ac_tool_prefix}mt", so it can be a program name with args.
-+set dummy ${ac_tool_prefix}mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$MANIFEST_TOOL"; then
-+  ac_cv_prog_MANIFEST_TOOL="$MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_MANIFEST_TOOL="${ac_tool_prefix}mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+MANIFEST_TOOL=$ac_cv_prog_MANIFEST_TOOL
-+if test -n "$MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $MANIFEST_TOOL" >&5
-+$as_echo "$MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+fi
-+if test -z "$ac_cv_prog_MANIFEST_TOOL"; then
-+  ac_ct_MANIFEST_TOOL=$MANIFEST_TOOL
-+  # Extract the first word of "mt", so it can be a program name with args.
-+set dummy mt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_ac_ct_MANIFEST_TOOL+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  ac_cv_prog_ac_ct_MANIFEST_TOOL="$ac_ct_MANIFEST_TOOL" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_ac_ct_MANIFEST_TOOL="mt"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+fi
-+fi
-+ac_ct_MANIFEST_TOOL=$ac_cv_prog_ac_ct_MANIFEST_TOOL
-+if test -n "$ac_ct_MANIFEST_TOOL"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $ac_ct_MANIFEST_TOOL" >&5
-+$as_echo "$ac_ct_MANIFEST_TOOL" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+  if test "x$ac_ct_MANIFEST_TOOL" = x; then
-+    MANIFEST_TOOL=":"
-+  else
-+    case $cross_compiling:$ac_tool_warned in
-+yes:)
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: using cross tools not prefixed with host triplet" >&5
-+$as_echo "$as_me: WARNING: using cross tools not prefixed with host triplet" >&2;}
-+ac_tool_warned=yes ;;
-+esac
-+    MANIFEST_TOOL=$ac_ct_MANIFEST_TOOL
-+  fi
-+else
-+  MANIFEST_TOOL="$ac_cv_prog_MANIFEST_TOOL"
-+fi
-+
-+test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking if $MANIFEST_TOOL is a manifest tool" >&5
-+$as_echo_n "checking if $MANIFEST_TOOL is a manifest tool... " >&6; }
-+if test "${lt_cv_path_mainfest_tool+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_path_mainfest_tool=no
-+  echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&5
-+  $MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
-+  cat conftest.err >&5
-+  if $GREP 'Manifest Tool' conftest.out > /dev/null; then
-+    lt_cv_path_mainfest_tool=yes
-+  fi
-+  rm -f conftest*
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_path_mainfest_tool" >&5
-+$as_echo "$lt_cv_path_mainfest_tool" >&6; }
-+if test "x$lt_cv_path_mainfest_tool" != xyes; then
-+  MANIFEST_TOOL=:
-+fi
-+
-+
-+
-+
-+
- 
-   case $host_os in
-     rhapsody* | darwin*)
-@@ -6570,6 +7067,8 @@ _LT_EOF
-       $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&5
-       echo "$AR cru libconftest.a conftest.o" >&5
-       $AR cru libconftest.a conftest.o 2>&5
-+      echo "$RANLIB libconftest.a" >&5
-+      $RANLIB libconftest.a 2>&5
-       cat > conftest.c << _LT_EOF
- int main() { return 0;}
- _LT_EOF
-@@ -7033,7 +7532,8 @@ fi
- LIBTOOL_DEPS="$ltmain"
- 
- # Always use our own libtool.
--LIBTOOL='$(SHELL) $(top_builddir)/libtool'
-+LIBTOOL='$(SHELL) $(top_builddir)'
-+LIBTOOL="$LIBTOOL/${host_alias}-libtool"
- 
- 
- 
-@@ -7122,7 +7622,7 @@ aix3*)
- esac
- 
- # Global variables:
--ofile=libtool
-+ofile=${host_alias}-libtool
- can_build_shared=yes
- 
- # All known linkers require a `.a' archive for static linking (except MSVC,
-@@ -7420,8 +7920,6 @@ fi
- lt_prog_compiler_pic=
- lt_prog_compiler_static=
- 
--{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
--$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 
-   if test "$GCC" = yes; then
-     lt_prog_compiler_wl='-Wl,'
-@@ -7587,6 +8085,12 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
- 	lt_prog_compiler_pic='--shared'
- 	lt_prog_compiler_static='--static'
- 	;;
-+      nagfor*)
-+	# NAG Fortran compiler
-+	lt_prog_compiler_wl='-Wl,-Wl,,'
-+	lt_prog_compiler_pic='-PIC'
-+	lt_prog_compiler_static='-Bstatic'
-+	;;
-       pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
-         # Portland Group compilers (*not* the Pentium gcc compiler,
- 	# which looks to be a dead project)
-@@ -7649,7 +8153,7 @@ $as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-       lt_prog_compiler_pic='-KPIC'
-       lt_prog_compiler_static='-Bstatic'
-       case $cc_basename in
--      f77* | f90* | f95*)
-+      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
- 	lt_prog_compiler_wl='-Qoption ld ';;
-       *)
- 	lt_prog_compiler_wl='-Wl,';;
-@@ -7706,13 +8210,17 @@ case $host_os in
-     lt_prog_compiler_pic="$lt_prog_compiler_pic -DPIC"
-     ;;
- esac
--{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_prog_compiler_pic" >&5
--$as_echo "$lt_prog_compiler_pic" >&6; }
--
--
--
--
- 
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $compiler option to produce PIC" >&5
-+$as_echo_n "checking for $compiler option to produce PIC... " >&6; }
-+if test "${lt_cv_prog_compiler_pic+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  lt_cv_prog_compiler_pic=$lt_prog_compiler_pic
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_prog_compiler_pic" >&5
-+$as_echo "$lt_cv_prog_compiler_pic" >&6; }
-+lt_prog_compiler_pic=$lt_cv_prog_compiler_pic
- 
- #
- # Check to make sure the PIC flag actually works.
-@@ -7773,6 +8281,11 @@ fi
- 
- 
- 
-+
-+
-+
-+
-+
- #
- # Check to make sure the static flag actually works.
- #
-@@ -8123,7 +8636,8 @@ _LT_EOF
-       allow_undefined_flag=unsupported
-       always_export_symbols=no
-       enable_shared_with_static_runtimes=yes
--      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+      export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1 DATA/;s/^.*[ ]__nm__\([^ ]*\)[ ][^ ]*/\1 DATA/;/^I[ ]/d;/^[AITW][ ]/s/.* //'\'' | sort | uniq > $export_symbols'
-+      exclude_expsyms='[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname'
- 
-       if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-         archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-@@ -8222,12 +8736,12 @@ _LT_EOF
- 	  whole_archive_flag_spec='--whole-archive$convenience --no-whole-archive'
- 	  hardcode_libdir_flag_spec=
- 	  hardcode_libdir_flag_spec_ld='-rpath $libdir'
--	  archive_cmds='$LD -shared $libobjs $deplibs $compiler_flags -soname $soname -o $lib'
-+	  archive_cmds='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
- 	  if test "x$supports_anon_versioning" = xyes; then
- 	    archive_expsym_cmds='echo "{ global:" > $output_objdir/$libname.ver~
- 	      cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
- 	      echo "local: *; };" >> $output_objdir/$libname.ver~
--	      $LD -shared $libobjs $deplibs $compiler_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
-+	      $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
- 	  fi
- 	  ;;
- 	esac
-@@ -8241,8 +8755,8 @@ _LT_EOF
- 	archive_cmds='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
- 	wlarc=
-       else
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       fi
-       ;;
- 
-@@ -8260,8 +8774,8 @@ _LT_EOF
- 
- _LT_EOF
-       elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8307,8 +8821,8 @@ _LT_EOF
- 
-     *)
-       if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
--	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-+	archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
-       else
- 	ld_shlibs=no
-       fi
-@@ -8438,7 +8952,13 @@ _LT_EOF
- 	allow_undefined_flag='-berok'
-         # Determine the default libpath from the value encoded in an
-         # empty executable.
--        if test x$gcc_no_link = xyes; then
-+        if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test x$gcc_no_link = xyes; then
-   as_fn_error "Link tests are not allowed after GCC_NO_EXECUTABLES." "$LINENO" 5
- fi
- cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-@@ -8454,22 +8974,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
-         hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
-         archive_expsym_cmds='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
-@@ -8481,7 +9008,13 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	else
- 	 # Determine the default libpath from the value encoded in an
- 	 # empty executable.
--	 if test x$gcc_no_link = xyes; then
-+	 if test "${lt_cv_aix_libpath+set}" = set; then
-+  aix_libpath=$lt_cv_aix_libpath
-+else
-+  if test "${lt_cv_aix_libpath_+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test x$gcc_no_link = xyes; then
-   as_fn_error "Link tests are not allowed after GCC_NO_EXECUTABLES." "$LINENO" 5
- fi
- cat confdefs.h - <<_ACEOF >conftest.$ac_ext
-@@ -8497,22 +9030,29 @@ main ()
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
- 
--lt_aix_libpath_sed='
--    /Import File Strings/,/^$/ {
--	/^0/ {
--	    s/^0  *\(.*\)$/\1/
--	    p
--	}
--    }'
--aix_libpath=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--# Check for a 64-bit object if we didn't find anything.
--if test -z "$aix_libpath"; then
--  aix_libpath=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
--fi
-+  lt_aix_libpath_sed='
-+      /Import File Strings/,/^$/ {
-+	  /^0/ {
-+	      s/^0  *\([^ ]*\) *$/\1/
-+	      p
-+	  }
-+      }'
-+  lt_cv_aix_libpath_=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  # Check for a 64-bit object if we didn't find anything.
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
-+  fi
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-+  if test -z "$lt_cv_aix_libpath_"; then
-+    lt_cv_aix_libpath_="/usr/lib:/lib"
-+  fi
-+
-+fi
-+
-+  aix_libpath=$lt_cv_aix_libpath_
-+fi
- 
- 	 hardcode_libdir_flag_spec='${wl}-blibpath:$libdir:'"$aix_libpath"
- 	  # Warning - without using the other run time loading flags,
-@@ -8557,20 +9097,63 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
-       # Microsoft Visual C++.
-       # hardcode_libdir_flag_spec is actually meaningless, as there is
-       # no search path for DLLs.
--      hardcode_libdir_flag_spec=' '
--      allow_undefined_flag=unsupported
--      # Tell ltmain to make .lib files, not .a files.
--      libext=lib
--      # Tell ltmain to make .dll files, not .so files.
--      shrext_cmds=".dll"
--      # FIXME: Setting linknames here is a bad hack.
--      archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
--      # The linker will automatically build a .lib file if we build a DLL.
--      old_archive_from_new_cmds='true'
--      # FIXME: Should let the user specify the lib program.
--      old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
--      fix_srcfile_path='`cygpath -w "$srcfile"`'
--      enable_shared_with_static_runtimes=yes
-+      case $cc_basename in
-+      cl*)
-+	# Native MSVC
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	always_export_symbols=yes
-+	file_list_spec='@'
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-+	archive_expsym_cmds='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-+	    sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-+	  else
-+	    sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-+	  fi~
-+	  $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-+	  linknames='
-+	# The linker will not automatically build a static lib if we build a DLL.
-+	# _LT_TAGVAR(old_archive_from_new_cmds, )='true'
-+	enable_shared_with_static_runtimes=yes
-+	export_symbols_cmds='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[BCDGRS][ ]/s/.*[ ]\([^ ]*\)/\1,DATA/'\'' | $SED -e '\''/^[AITW][ ]/s/.*[ ]//'\'' | sort | uniq > $export_symbols'
-+	# Don't use ranlib
-+	old_postinstall_cmds='chmod 644 $oldlib'
-+	postlink_cmds='lt_outputfile="@OUTPUT@"~
-+	  lt_tool_outputfile="@TOOL_OUTPUT@"~
-+	  case $lt_outputfile in
-+	    *.exe|*.EXE) ;;
-+	    *)
-+	      lt_outputfile="$lt_outputfile.exe"
-+	      lt_tool_outputfile="$lt_tool_outputfile.exe"
-+	      ;;
-+	  esac~
-+	  if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then
-+	    $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
-+	    $RM "$lt_outputfile.manifest";
-+	  fi'
-+	;;
-+      *)
-+	# Assume MSVC wrapper
-+	hardcode_libdir_flag_spec=' '
-+	allow_undefined_flag=unsupported
-+	# Tell ltmain to make .lib files, not .a files.
-+	libext=lib
-+	# Tell ltmain to make .dll files, not .so files.
-+	shrext_cmds=".dll"
-+	# FIXME: Setting linknames here is a bad hack.
-+	archive_cmds='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
-+	# The linker will automatically build a .lib file if we build a DLL.
-+	old_archive_from_new_cmds='true'
-+	# FIXME: Should let the user specify the lib program.
-+	old_archive_cmds='lib -OUT:$oldlib$oldobjs$old_deplibs'
-+	enable_shared_with_static_runtimes=yes
-+	;;
-+      esac
-       ;;
- 
-     darwin* | rhapsody*)
-@@ -8631,7 +9214,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     # FreeBSD 3 and greater uses gcc -shared to do shared libraries.
-     freebsd* | dragonfly*)
--      archive_cmds='$CC -shared -o $lib $libobjs $deplibs $compiler_flags'
-+      archive_cmds='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
-       hardcode_libdir_flag_spec='-R$libdir'
-       hardcode_direct=yes
-       hardcode_shlibpath_var=no
-@@ -8639,7 +9222,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux9*)
-       if test "$GCC" = yes; then
--	archive_cmds='$RM $output_objdir/$soname~$CC -shared -fPIC ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-+	archive_cmds='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       else
- 	archive_cmds='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib'
-       fi
-@@ -8655,7 +9238,7 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 
-     hpux10*)
-       if test "$GCC" = yes && test "$with_gnu_ld" = no; then
--	archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-       else
- 	archive_cmds='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
-       fi
-@@ -8679,10 +9262,10 @@ if test -z "$aix_libpath"; then aix_libpath="/usr/lib:/lib"; fi
- 	  archive_cmds='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	ia64*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	*)
--	  archive_cmds='$CC -shared -fPIC ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
-+	  archive_cmds='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
- 	  ;;
- 	esac
-       else
-@@ -8761,26 +9344,39 @@ fi
- 
-     irix5* | irix6* | nonstopux*)
-       if test "$GCC" = yes; then
--	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	# Try to use the -exported_symbol ld option, if it does not
- 	# work, assume that -exports_file does not work either and
- 	# implicitly export all symbols.
--        save_LDFLAGS="$LDFLAGS"
--        LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
--        if test x$gcc_no_link = xyes; then
-+	# This should be the same for all languages, so no per-tag cache variable.
-+	{ $as_echo "$as_me:${as_lineno-$LINENO}: checking whether the $host_os linker accepts -exported_symbol" >&5
-+$as_echo_n "checking whether the $host_os linker accepts -exported_symbol... " >&6; }
-+if test "${lt_cv_irix_exported_symbol+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  save_LDFLAGS="$LDFLAGS"
-+	   LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null"
-+	   if test x$gcc_no_link = xyes; then
-   as_fn_error "Link tests are not allowed after GCC_NO_EXECUTABLES." "$LINENO" 5
- fi
- cat confdefs.h - <<_ACEOF >conftest.$ac_ext
- /* end confdefs.h.  */
--int foo(void) {}
-+int foo (void) { return 0; }
- _ACEOF
- if ac_fn_c_try_link "$LINENO"; then :
--  archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
--
-+  lt_cv_irix_exported_symbol=yes
-+else
-+  lt_cv_irix_exported_symbol=no
- fi
- rm -f core conftest.err conftest.$ac_objext \
-     conftest$ac_exeext conftest.$ac_ext
--        LDFLAGS="$save_LDFLAGS"
-+           LDFLAGS="$save_LDFLAGS"
-+fi
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: result: $lt_cv_irix_exported_symbol" >&5
-+$as_echo "$lt_cv_irix_exported_symbol" >&6; }
-+	if test "$lt_cv_irix_exported_symbol" = yes; then
-+          archive_expsym_cmds='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib'
-+	fi
-       else
- 	archive_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
- 	archive_expsym_cmds='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib'
-@@ -8865,7 +9461,7 @@ rm -f core conftest.err conftest.$ac_objext \
-     osf4* | osf5*)	# as osf3* with the addition of -msym flag
-       if test "$GCC" = yes; then
- 	allow_undefined_flag=' ${wl}-expect_unresolved ${wl}\*'
--	archive_cmds='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
-+	archive_cmds='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
- 	hardcode_libdir_flag_spec='${wl}-rpath ${wl}$libdir'
-       else
- 	allow_undefined_flag=' -expect_unresolved \*'
-@@ -8884,9 +9480,9 @@ rm -f core conftest.err conftest.$ac_objext \
-       no_undefined_flag=' -z defs'
-       if test "$GCC" = yes; then
- 	wlarc='${wl}'
--	archive_cmds='$CC -shared ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
-+	archive_cmds='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags'
- 	archive_expsym_cmds='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
--	  $CC -shared ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-+	  $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
-       else
- 	case `$CC -V 2>&1` in
- 	*"Compilers 5.0"*)
-@@ -9462,8 +10058,9 @@ cygwin* | mingw* | pw32* | cegcc*)
-   need_version=no
-   need_lib_prefix=no
- 
--  case $GCC,$host_os in
--  yes,cygwin* | yes,mingw* | yes,pw32* | yes,cegcc*)
-+  case $GCC,$cc_basename in
-+  yes,*)
-+    # gcc
-     library_names_spec='$libname.dll.a'
-     # DLL is installed to $(libdir)/../bin by postinstall_cmds
-     postinstall_cmds='base_file=`basename \${file}`~
-@@ -9496,13 +10093,71 @@ cygwin* | mingw* | pw32* | cegcc*)
-       library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-       ;;
-     esac
-+    dynamic_linker='Win32 ld.exe'
-+    ;;
-+
-+  *,cl*)
-+    # Native MSVC
-+    libname_spec='$name'
-+    soname_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext}'
-+    library_names_spec='${libname}.dll.lib'
-+
-+    case $build_os in
-+    mingw*)
-+      sys_lib_search_path_spec=
-+      lt_save_ifs=$IFS
-+      IFS=';'
-+      for lt_path in $LIB
-+      do
-+        IFS=$lt_save_ifs
-+        # Let DOS variable expansion print the short 8.3 style file name.
-+        lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
-+        sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
-+      done
-+      IFS=$lt_save_ifs
-+      # Convert to MSYS style.
-+      sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([a-zA-Z]\\):| /\\1|g' -e 's|^ ||'`
-+      ;;
-+    cygwin*)
-+      # Convert to unix form, then to dos form, then back to unix form
-+      # but this time dos style (no spaces!) so that the unix form looks
-+      # like /cygdrive/c/PROGRA~1:/cygdr...
-+      sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
-+      sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
-+      sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      ;;
-+    *)
-+      sys_lib_search_path_spec="$LIB"
-+      if $ECHO "$sys_lib_search_path_spec" | $GREP ';[c-zC-Z]:/' >/dev/null; then
-+        # It is most probably a Windows format PATH.
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
-+      else
-+        sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
-+      fi
-+      # FIXME: find the short name or the path components, as spaces are
-+      # common. (e.g. "Program Files" -> "PROGRA~1")
-+      ;;
-+    esac
-+
-+    # DLL is installed to $(libdir)/../bin by postinstall_cmds
-+    postinstall_cmds='base_file=`basename \${file}`~
-+      dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~
-+      dldir=$destdir/`dirname \$dlpath`~
-+      test -d \$dldir || mkdir -p \$dldir~
-+      $install_prog $dir/$dlname \$dldir/$dlname'
-+    postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
-+      dlpath=$dir/\$dldll~
-+       $RM \$dlpath'
-+    shlibpath_overrides_runpath=yes
-+    dynamic_linker='Win32 link.exe'
-     ;;
- 
-   *)
-+    # Assume MSVC wrapper
-     library_names_spec='${libname}`echo ${release} | $SED -e 's/[.]/-/g'`${versuffix}${shared_ext} $libname.lib'
-+    dynamic_linker='Win32 ld.exe'
-     ;;
-   esac
--  dynamic_linker='Win32 ld.exe'
-   # FIXME: first we should search . and the directory the executable is in
-   shlibpath_var=PATH
-   ;;
-@@ -9594,7 +10249,7 @@ haiku*)
-   soname_spec='${libname}${release}${shared_ext}$major'
-   shlibpath_var=LIBRARY_PATH
-   shlibpath_overrides_runpath=yes
--  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/beos/system/lib'
-+  sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
-   hardcode_into_libs=yes
-   ;;
- 
-@@ -10452,10 +11107,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -10558,10 +11213,10 @@ else
- /* When -fvisbility=hidden is used, assume the code has been annotated
-    correspondingly for the symbols needed.  */
- #if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
--void fnord () __attribute__((visibility("default")));
-+int fnord () __attribute__((visibility("default")));
- #endif
- 
--void fnord () { int i=42; }
-+int fnord () { return 42; }
- int main ()
- {
-   void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
-@@ -11992,13 +12647,20 @@ exeext='`$ECHO "$exeext" | $SED "$delay_single_quote_subst"`'
- lt_unset='`$ECHO "$lt_unset" | $SED "$delay_single_quote_subst"`'
- lt_SP2NL='`$ECHO "$lt_SP2NL" | $SED "$delay_single_quote_subst"`'
- lt_NL2SP='`$ECHO "$lt_NL2SP" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_host_file_cmd='`$ECHO "$lt_cv_to_host_file_cmd" | $SED "$delay_single_quote_subst"`'
-+lt_cv_to_tool_file_cmd='`$ECHO "$lt_cv_to_tool_file_cmd" | $SED "$delay_single_quote_subst"`'
- reload_flag='`$ECHO "$reload_flag" | $SED "$delay_single_quote_subst"`'
- reload_cmds='`$ECHO "$reload_cmds" | $SED "$delay_single_quote_subst"`'
- OBJDUMP='`$ECHO "$OBJDUMP" | $SED "$delay_single_quote_subst"`'
- deplibs_check_method='`$ECHO "$deplibs_check_method" | $SED "$delay_single_quote_subst"`'
- file_magic_cmd='`$ECHO "$file_magic_cmd" | $SED "$delay_single_quote_subst"`'
-+file_magic_glob='`$ECHO "$file_magic_glob" | $SED "$delay_single_quote_subst"`'
-+want_nocaseglob='`$ECHO "$want_nocaseglob" | $SED "$delay_single_quote_subst"`'
-+DLLTOOL='`$ECHO "$DLLTOOL" | $SED "$delay_single_quote_subst"`'
-+sharedlib_from_linklib_cmd='`$ECHO "$sharedlib_from_linklib_cmd" | $SED "$delay_single_quote_subst"`'
- AR='`$ECHO "$AR" | $SED "$delay_single_quote_subst"`'
- AR_FLAGS='`$ECHO "$AR_FLAGS" | $SED "$delay_single_quote_subst"`'
-+archiver_list_spec='`$ECHO "$archiver_list_spec" | $SED "$delay_single_quote_subst"`'
- STRIP='`$ECHO "$STRIP" | $SED "$delay_single_quote_subst"`'
- RANLIB='`$ECHO "$RANLIB" | $SED "$delay_single_quote_subst"`'
- old_postinstall_cmds='`$ECHO "$old_postinstall_cmds" | $SED "$delay_single_quote_subst"`'
-@@ -12013,14 +12675,17 @@ lt_cv_sys_global_symbol_pipe='`$ECHO "$lt_cv_sys_global_symbol_pipe" | $SED "$de
- lt_cv_sys_global_symbol_to_cdecl='`$ECHO "$lt_cv_sys_global_symbol_to_cdecl" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address" | $SED "$delay_single_quote_subst"`'
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix='`$ECHO "$lt_cv_sys_global_symbol_to_c_name_address_lib_prefix" | $SED "$delay_single_quote_subst"`'
-+nm_file_list_spec='`$ECHO "$nm_file_list_spec" | $SED "$delay_single_quote_subst"`'
-+lt_sysroot='`$ECHO "$lt_sysroot" | $SED "$delay_single_quote_subst"`'
- objdir='`$ECHO "$objdir" | $SED "$delay_single_quote_subst"`'
- MAGIC_CMD='`$ECHO "$MAGIC_CMD" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_no_builtin_flag='`$ECHO "$lt_prog_compiler_no_builtin_flag" | $SED "$delay_single_quote_subst"`'
--lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_pic='`$ECHO "$lt_prog_compiler_pic" | $SED "$delay_single_quote_subst"`'
-+lt_prog_compiler_wl='`$ECHO "$lt_prog_compiler_wl" | $SED "$delay_single_quote_subst"`'
- lt_prog_compiler_static='`$ECHO "$lt_prog_compiler_static" | $SED "$delay_single_quote_subst"`'
- lt_cv_prog_compiler_c_o='`$ECHO "$lt_cv_prog_compiler_c_o" | $SED "$delay_single_quote_subst"`'
- need_locks='`$ECHO "$need_locks" | $SED "$delay_single_quote_subst"`'
-+MANIFEST_TOOL='`$ECHO "$MANIFEST_TOOL" | $SED "$delay_single_quote_subst"`'
- DSYMUTIL='`$ECHO "$DSYMUTIL" | $SED "$delay_single_quote_subst"`'
- NMEDIT='`$ECHO "$NMEDIT" | $SED "$delay_single_quote_subst"`'
- LIPO='`$ECHO "$LIPO" | $SED "$delay_single_quote_subst"`'
-@@ -12053,12 +12718,12 @@ hardcode_shlibpath_var='`$ECHO "$hardcode_shlibpath_var" | $SED "$delay_single_q
- hardcode_automatic='`$ECHO "$hardcode_automatic" | $SED "$delay_single_quote_subst"`'
- inherit_rpath='`$ECHO "$inherit_rpath" | $SED "$delay_single_quote_subst"`'
- link_all_deplibs='`$ECHO "$link_all_deplibs" | $SED "$delay_single_quote_subst"`'
--fix_srcfile_path='`$ECHO "$fix_srcfile_path" | $SED "$delay_single_quote_subst"`'
- always_export_symbols='`$ECHO "$always_export_symbols" | $SED "$delay_single_quote_subst"`'
- export_symbols_cmds='`$ECHO "$export_symbols_cmds" | $SED "$delay_single_quote_subst"`'
- exclude_expsyms='`$ECHO "$exclude_expsyms" | $SED "$delay_single_quote_subst"`'
- include_expsyms='`$ECHO "$include_expsyms" | $SED "$delay_single_quote_subst"`'
- prelink_cmds='`$ECHO "$prelink_cmds" | $SED "$delay_single_quote_subst"`'
-+postlink_cmds='`$ECHO "$postlink_cmds" | $SED "$delay_single_quote_subst"`'
- file_list_spec='`$ECHO "$file_list_spec" | $SED "$delay_single_quote_subst"`'
- variables_saved_for_relink='`$ECHO "$variables_saved_for_relink" | $SED "$delay_single_quote_subst"`'
- need_lib_prefix='`$ECHO "$need_lib_prefix" | $SED "$delay_single_quote_subst"`'
-@@ -12113,8 +12778,13 @@ reload_flag \
- OBJDUMP \
- deplibs_check_method \
- file_magic_cmd \
-+file_magic_glob \
-+want_nocaseglob \
-+DLLTOOL \
-+sharedlib_from_linklib_cmd \
- AR \
- AR_FLAGS \
-+archiver_list_spec \
- STRIP \
- RANLIB \
- CC \
-@@ -12124,12 +12794,14 @@ lt_cv_sys_global_symbol_pipe \
- lt_cv_sys_global_symbol_to_cdecl \
- lt_cv_sys_global_symbol_to_c_name_address \
- lt_cv_sys_global_symbol_to_c_name_address_lib_prefix \
-+nm_file_list_spec \
- lt_prog_compiler_no_builtin_flag \
--lt_prog_compiler_wl \
- lt_prog_compiler_pic \
-+lt_prog_compiler_wl \
- lt_prog_compiler_static \
- lt_cv_prog_compiler_c_o \
- need_locks \
-+MANIFEST_TOOL \
- DSYMUTIL \
- NMEDIT \
- LIPO \
-@@ -12145,7 +12817,6 @@ no_undefined_flag \
- hardcode_libdir_flag_spec \
- hardcode_libdir_flag_spec_ld \
- hardcode_libdir_separator \
--fix_srcfile_path \
- exclude_expsyms \
- include_expsyms \
- file_list_spec \
-@@ -12181,6 +12852,7 @@ module_cmds \
- module_expsym_cmds \
- export_symbols_cmds \
- prelink_cmds \
-+postlink_cmds \
- postinstall_cmds \
- postuninstall_cmds \
- finish_cmds \
-@@ -12770,7 +13442,8 @@ $as_echo X"$file" |
- # NOTE: Changes made to this file will be lost: look at ltmain.sh.
- #
- #   Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005,
--#                 2006, 2007, 2008, 2009 Free Software Foundation, Inc.
-+#                 2006, 2007, 2008, 2009, 2010 Free Software Foundation,
-+#                 Inc.
- #   Written by Gordon Matzigkeit, 1996
- #
- #   This file is part of GNU Libtool.
-@@ -12873,19 +13546,42 @@ SP2NL=$lt_lt_SP2NL
- # turn newlines into spaces.
- NL2SP=$lt_lt_NL2SP
- 
-+# convert \$build file names to \$host format.
-+to_host_file_cmd=$lt_cv_to_host_file_cmd
-+
-+# convert \$build files to toolchain format.
-+to_tool_file_cmd=$lt_cv_to_tool_file_cmd
-+
- # An object symbol dumper.
- OBJDUMP=$lt_OBJDUMP
- 
- # Method to check whether dependent libraries are shared objects.
- deplibs_check_method=$lt_deplibs_check_method
- 
--# Command to use when deplibs_check_method == "file_magic".
-+# Command to use when deplibs_check_method = "file_magic".
- file_magic_cmd=$lt_file_magic_cmd
- 
-+# How to find potential files when deplibs_check_method = "file_magic".
-+file_magic_glob=$lt_file_magic_glob
-+
-+# Find potential files using nocaseglob when deplibs_check_method = "file_magic".
-+want_nocaseglob=$lt_want_nocaseglob
-+
-+# DLL creation program.
-+DLLTOOL=$lt_DLLTOOL
-+
-+# Command to associate shared and link libraries.
-+sharedlib_from_linklib_cmd=$lt_sharedlib_from_linklib_cmd
-+
- # The archiver.
- AR=$lt_AR
-+
-+# Flags to create an archive.
- AR_FLAGS=$lt_AR_FLAGS
- 
-+# How to feed a file listing to the archiver.
-+archiver_list_spec=$lt_archiver_list_spec
-+
- # A symbol stripping program.
- STRIP=$lt_STRIP
- 
-@@ -12915,6 +13611,12 @@ global_symbol_to_c_name_address=$lt_lt_cv_sys_global_symbol_to_c_name_address
- # Transform the output of nm in a C name address pair when lib prefix is needed.
- global_symbol_to_c_name_address_lib_prefix=$lt_lt_cv_sys_global_symbol_to_c_name_address_lib_prefix
- 
-+# Specify filename containing input files for \$NM.
-+nm_file_list_spec=$lt_nm_file_list_spec
-+
-+# The root where to search for dependent libraries,and in which our libraries should be installed.
-+lt_sysroot=$lt_sysroot
-+
- # The name of the directory that contains temporary libtool files.
- objdir=$objdir
- 
-@@ -12924,6 +13626,9 @@ MAGIC_CMD=$MAGIC_CMD
- # Must we lock files when doing compilation?
- need_locks=$lt_need_locks
- 
-+# Manifest tool.
-+MANIFEST_TOOL=$lt_MANIFEST_TOOL
-+
- # Tool to manipulate archived DWARF debug symbol files on Mac OS X.
- DSYMUTIL=$lt_DSYMUTIL
- 
-@@ -13038,12 +13743,12 @@ with_gcc=$GCC
- # Compiler flag to turn off builtin functions.
- no_builtin_flag=$lt_lt_prog_compiler_no_builtin_flag
- 
--# How to pass a linker flag through the compiler.
--wl=$lt_lt_prog_compiler_wl
--
- # Additional compiler flags for building library objects.
- pic_flag=$lt_lt_prog_compiler_pic
- 
-+# How to pass a linker flag through the compiler.
-+wl=$lt_lt_prog_compiler_wl
-+
- # Compiler flag to prevent dynamic linking.
- link_static_flag=$lt_lt_prog_compiler_static
- 
-@@ -13130,9 +13835,6 @@ inherit_rpath=$inherit_rpath
- # Whether libtool must link a program against all its dependency libraries.
- link_all_deplibs=$link_all_deplibs
- 
--# Fix the shell variable \$srcfile for the compiler.
--fix_srcfile_path=$lt_fix_srcfile_path
--
- # Set to "yes" if exported symbols are required.
- always_export_symbols=$always_export_symbols
- 
-@@ -13148,6 +13850,9 @@ include_expsyms=$lt_include_expsyms
- # Commands necessary for linking programs (against libraries) with templates.
- prelink_cmds=$lt_prelink_cmds
- 
-+# Commands necessary for finishing linking programs.
-+postlink_cmds=$lt_postlink_cmds
-+
- # Specify filename containing input files.
- file_list_spec=$lt_file_list_spec
- 
-@@ -13180,210 +13885,169 @@ ltmain="$ac_aux_dir/ltmain.sh"
-   # if finds mixed CR/LF and LF-only lines.  Since sed operates in
-   # text mode, it properly converts lines to CR/LF.  This bash problem
-   # is reportedly fixed, but why not run on old versions too?
--  sed '/^# Generated shell functions inserted here/q' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  case $xsi_shell in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result="${1##*/}"
--}
--
--# func_dirname_and_basename file append nondir_replacement
--# perform func_basename and func_dirname in a single function
--# call:
--#   dirname:  Compute the dirname of FILE.  If nonempty,
--#             add APPEND to the result, otherwise set result
--#             to NONDIR_REPLACEMENT.
--#             value returned in "$func_dirname_result"
--#   basename: Compute filename of FILE.
--#             value retuned in "$func_basename_result"
--# Implementation must be kept synchronized with func_dirname
--# and func_basename. For efficiency, we do not delegate to
--# those functions but instead duplicate the functionality here.
--func_dirname_and_basename ()
--{
--  case ${1} in
--    */*) func_dirname_result="${1%/*}${2}" ;;
--    *  ) func_dirname_result="${3}" ;;
--  esac
--  func_basename_result="${1##*/}"
--}
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--func_stripname ()
--{
--  # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are
--  # positional parameters, so assign one to ordinary parameter first.
--  func_stripname_result=${3}
--  func_stripname_result=${func_stripname_result#"${1}"}
--  func_stripname_result=${func_stripname_result%"${2}"}
--}
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=${1%%=*}
--  func_opt_split_arg=${1#*=}
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  case ${1} in
--    *.lo) func_lo2o_result=${1%.lo}.${objext} ;;
--    *)    func_lo2o_result=${1} ;;
--  esac
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=${1%.*}.lo
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=$(( $* ))
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=${#1}
--}
--
--_LT_EOF
--    ;;
--  *) # Bourne compatible functions.
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_dirname file append nondir_replacement
--# Compute the dirname of FILE.  If nonempty, add APPEND to the result,
--# otherwise set result to NONDIR_REPLACEMENT.
--func_dirname ()
--{
--  # Extract subdirectory from the argument.
--  func_dirname_result=`$ECHO "${1}" | $SED "$dirname"`
--  if test "X$func_dirname_result" = "X${1}"; then
--    func_dirname_result="${3}"
--  else
--    func_dirname_result="$func_dirname_result${2}"
--  fi
--}
--
--# func_basename file
--func_basename ()
--{
--  func_basename_result=`$ECHO "${1}" | $SED "$basename"`
--}
--
--
--# func_stripname prefix suffix name
--# strip PREFIX and SUFFIX off of NAME.
--# PREFIX and SUFFIX must not contain globbing or regex special
--# characters, hashes, percent signs, but SUFFIX may contain a leading
--# dot (in which case that matches only a dot).
--# func_strip_suffix prefix name
--func_stripname ()
--{
--  case ${2} in
--    .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
--    *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
--  esac
--}
--
--# sed scripts:
--my_sed_long_opt='1s/^\(-[^=]*\)=.*/\1/;q'
--my_sed_long_arg='1s/^-[^=]*=//'
--
--# func_opt_split
--func_opt_split ()
--{
--  func_opt_split_opt=`$ECHO "${1}" | $SED "$my_sed_long_opt"`
--  func_opt_split_arg=`$ECHO "${1}" | $SED "$my_sed_long_arg"`
--}
--
--# func_lo2o object
--func_lo2o ()
--{
--  func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"`
--}
--
--# func_xform libobj-or-source
--func_xform ()
--{
--  func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'`
--}
--
--# func_arith arithmetic-term...
--func_arith ()
--{
--  func_arith_result=`expr "$@"`
--}
--
--# func_len string
--# STRING may not start with a hyphen.
--func_len ()
--{
--  func_len_result=`expr "$1" : ".*" 2>/dev/null || echo $max_cmd_len`
--}
--
--_LT_EOF
--esac
--
--case $lt_shell_append in
--  yes)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1+=\$2"
--}
--_LT_EOF
--    ;;
--  *)
--    cat << \_LT_EOF >> "$cfgfile"
--
--# func_append var value
--# Append VALUE to the end of shell variable VAR.
--func_append ()
--{
--  eval "$1=\$$1\$2"
--}
--
--_LT_EOF
--    ;;
--  esac
--
--
--  sed -n '/^# Generated shell functions inserted here/,$p' "$ltmain" >> "$cfgfile" \
--    || (rm -f "$cfgfile"; exit 1)
--
--  mv -f "$cfgfile" "$ofile" ||
-+  sed '$q' "$ltmain" >> "$cfgfile" \
-+     || (rm -f "$cfgfile"; exit 1)
-+
-+  if test x"$xsi_shell" = xyes; then
-+  sed -e '/^func_dirname ()$/,/^} # func_dirname /c\
-+func_dirname ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+} # Extended-shell func_dirname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_basename ()$/,/^} # func_basename /c\
-+func_basename ()\
-+{\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_dirname_and_basename ()$/,/^} # func_dirname_and_basename /c\
-+func_dirname_and_basename ()\
-+{\
-+\    case ${1} in\
-+\      */*) func_dirname_result="${1%/*}${2}" ;;\
-+\      *  ) func_dirname_result="${3}" ;;\
-+\    esac\
-+\    func_basename_result="${1##*/}"\
-+} # Extended-shell func_dirname_and_basename implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_stripname ()$/,/^} # func_stripname /c\
-+func_stripname ()\
-+{\
-+\    # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are\
-+\    # positional parameters, so assign one to ordinary parameter first.\
-+\    func_stripname_result=${3}\
-+\    func_stripname_result=${func_stripname_result#"${1}"}\
-+\    func_stripname_result=${func_stripname_result%"${2}"}\
-+} # Extended-shell func_stripname implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_long_opt ()$/,/^} # func_split_long_opt /c\
-+func_split_long_opt ()\
-+{\
-+\    func_split_long_opt_name=${1%%=*}\
-+\    func_split_long_opt_arg=${1#*=}\
-+} # Extended-shell func_split_long_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_split_short_opt ()$/,/^} # func_split_short_opt /c\
-+func_split_short_opt ()\
-+{\
-+\    func_split_short_opt_arg=${1#??}\
-+\    func_split_short_opt_name=${1%"$func_split_short_opt_arg"}\
-+} # Extended-shell func_split_short_opt implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_lo2o ()$/,/^} # func_lo2o /c\
-+func_lo2o ()\
-+{\
-+\    case ${1} in\
-+\      *.lo) func_lo2o_result=${1%.lo}.${objext} ;;\
-+\      *)    func_lo2o_result=${1} ;;\
-+\    esac\
-+} # Extended-shell func_lo2o implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_xform ()$/,/^} # func_xform /c\
-+func_xform ()\
-+{\
-+    func_xform_result=${1%.*}.lo\
-+} # Extended-shell func_xform implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_arith ()$/,/^} # func_arith /c\
-+func_arith ()\
-+{\
-+    func_arith_result=$(( $* ))\
-+} # Extended-shell func_arith implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_len ()$/,/^} # func_len /c\
-+func_len ()\
-+{\
-+    func_len_result=${#1}\
-+} # Extended-shell func_len implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+fi
-+
-+if test x"$lt_shell_append" = xyes; then
-+  sed -e '/^func_append ()$/,/^} # func_append /c\
-+func_append ()\
-+{\
-+    eval "${1}+=\\${2}"\
-+} # Extended-shell func_append implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  sed -e '/^func_append_quoted ()$/,/^} # func_append_quoted /c\
-+func_append_quoted ()\
-+{\
-+\    func_quote_for_eval "${2}"\
-+\    eval "${1}+=\\\\ \\$func_quote_for_eval_result"\
-+} # Extended-shell func_append_quoted implementation' "$cfgfile" > $cfgfile.tmp \
-+  && mv -f "$cfgfile.tmp" "$cfgfile" \
-+    || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+test 0 -eq $? || _lt_function_replace_fail=:
-+
-+
-+  # Save a `func_append' function call where possible by direct use of '+='
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+else
-+  # Save a `func_append' function call even when '+=' is not available
-+  sed -e 's%func_append \([a-zA-Z_]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \
-+    && mv -f "$cfgfile.tmp" "$cfgfile" \
-+      || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp")
-+  test 0 -eq $? || _lt_function_replace_fail=:
-+fi
-+
-+if test x"$_lt_function_replace_fail" = x":"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: Unable to substitute extended shell functions in $ofile" >&5
-+$as_echo "$as_me: WARNING: Unable to substitute extended shell functions in $ofile" >&2;}
-+fi
-+
-+
-+   mv -f "$cfgfile" "$ofile" ||
-     (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
-   chmod +x "$ofile"
- 
--- 
-2.12.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0008-Add-the-armv5e-architecture-to-binutils.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0008-Add-the-armv5e-architecture-to-binutils.patch
deleted file mode 100644
index 449225a..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0008-Add-the-armv5e-architecture-to-binutils.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-From 9c313e8a15a7e7c5c0f2906e3218ed211563ac2a Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 2 Mar 2015 01:37:10 +0000
-Subject: [PATCH 08/15] Add the armv5e architecture to binutils
-
-Binutils has a comment that indicates it is supposed to match gcc for
-all of the supported "-march=" settings, but it was lacking the armv5e setting.
-This was a simple way to add it, as thumb instructions shouldn't be generated
-by the compiler anyway.
-
-Upstream-Status: Denied
-Upstream maintainer indicated that we should not be using armv5e, even
-though it is a legal architecture defined by our gcc.
-
-Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gas/config/tc-arm.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/gas/config/tc-arm.c b/gas/config/tc-arm.c
-index 60bda51070..eb6d0afd6e 100644
---- a/gas/config/tc-arm.c
-+++ b/gas/config/tc-arm.c
-@@ -25633,6 +25633,7 @@ static const struct arm_arch_option_table arm_archs[] =
-   ARM_ARCH_OPT ("armv4t",	ARM_ARCH_V4T,	 FPU_ARCH_FPA),
-   ARM_ARCH_OPT ("armv4txm",	ARM_ARCH_V4TxM,	 FPU_ARCH_FPA),
-   ARM_ARCH_OPT ("armv5",	ARM_ARCH_V5,	 FPU_ARCH_VFP),
-+  ARM_ARCH_OPT ("armv5e",	ARM_ARCH_V5TE,	 FPU_ARCH_VFP),
-   ARM_ARCH_OPT ("armv5t",	ARM_ARCH_V5T,	 FPU_ARCH_VFP),
-   ARM_ARCH_OPT ("armv5txm",	ARM_ARCH_V5TxM,	 FPU_ARCH_VFP),
-   ARM_ARCH_OPT ("armv5te",	ARM_ARCH_V5TE,	 FPU_ARCH_VFP),
--- 
-2.12.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0008-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0008-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch
new file mode 100644
index 0000000..30a22b5
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0008-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch
@@ -0,0 +1,35 @@
+From 331443a87a31ec504e5652fc099d9129a9a4deb8 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 2 Mar 2015 01:39:01 +0000
+Subject: [PATCH 08/15] don't let the distro compiler point to the wrong
+ installation location
+
+Thanks to RP for helping find the source code causing the issue.
+
+2010/08/13
+Nitin A Kamble <nitin.a.kamble@intel.com>
+
+Upstream-Status: Inappropriate [embedded specific]
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ libiberty/Makefile.in | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/libiberty/Makefile.in b/libiberty/Makefile.in
+index 25cfa29ad5..ce67a710e3 100644
+--- a/libiberty/Makefile.in
++++ b/libiberty/Makefile.in
+@@ -364,7 +364,8 @@ install-strip: install
+ # multilib-specific flags, it's overridden by FLAGS_TO_PASS from the
+ # default multilib, so we have to take CFLAGS into account as well,
+ # since it will be passed the multilib flags.
+-MULTIOSDIR = `$(CC) $(CFLAGS) -print-multi-os-directory`
++#MULTIOSDIR = `$(CC) $(CFLAGS) -print-multi-os-directory`
++MULTIOSDIR = ""
+ install_to_libdir: all
+ 	if test -n "${target_header_dir}"; then \
+ 		${mkinstalldirs} $(DESTDIR)$(libdir)/$(MULTIOSDIR); \
+-- 
+2.14.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0009-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0009-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch
deleted file mode 100644
index 1c40593..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0009-don-t-let-the-distro-compiler-point-to-the-wrong-ins.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-From 2be9b44a4a308e3ea42a027c4c3211170f10c9c0 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 2 Mar 2015 01:39:01 +0000
-Subject: [PATCH 09/15] don't let the distro compiler point to the wrong
- installation location
-
-Thanks to RP for helping find the source code causing the issue.
-
-2010/08/13
-Nitin A Kamble <nitin.a.kamble@intel.com>
-
-Upstream-Status: Inappropriate [embedded specific]
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- libiberty/Makefile.in | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/libiberty/Makefile.in b/libiberty/Makefile.in
-index 0ff9e45e45..42c32642b2 100644
---- a/libiberty/Makefile.in
-+++ b/libiberty/Makefile.in
-@@ -366,7 +366,8 @@ install-strip: install
- # multilib-specific flags, it's overridden by FLAGS_TO_PASS from the
- # default multilib, so we have to take CFLAGS into account as well,
- # since it will be passed the multilib flags.
--MULTIOSDIR = `$(CC) $(CFLAGS) -print-multi-os-directory`
-+#MULTIOSDIR = `$(CC) $(CFLAGS) -print-multi-os-directory`
-+MULTIOSDIR = ""
- install_to_libdir: all
- 	if test -n "${target_header_dir}"; then \
- 		${mkinstalldirs} $(DESTDIR)$(libdir)/$(MULTIOSDIR); \
--- 
-2.12.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0009-warn-for-uses-of-system-directories-when-cross-linki.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0009-warn-for-uses-of-system-directories-when-cross-linki.patch
new file mode 100644
index 0000000..e0e2578
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0009-warn-for-uses-of-system-directories-when-cross-linki.patch
@@ -0,0 +1,273 @@
+From 0a4afdcf0700efd45963568e2d0049127cdf4434 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 15 Jan 2016 06:31:09 +0000
+Subject: [PATCH 09/15] warn for uses of system directories when cross linking
+
+2008-07-02  Joseph Myers  <joseph@codesourcery.com>
+
+    ld/
+    * ld.h (args_type): Add error_poison_system_directories.
+    * ld.texinfo (--error-poison-system-directories): Document.
+    * ldfile.c (ldfile_add_library_path): Check
+    command_line.error_poison_system_directories.
+    * ldmain.c (main): Initialize
+    command_line.error_poison_system_directories.
+    * lexsup.c (enum option_values): Add
+    OPTION_ERROR_POISON_SYSTEM_DIRECTORIES.
+    (ld_options): Add --error-poison-system-directories.
+    (parse_args): Handle new option.
+
+2007-06-13  Joseph Myers  <joseph@codesourcery.com>
+
+    ld/
+    * config.in: Regenerate.
+    * ld.h (args_type): Add poison_system_directories.
+    * ld.texinfo (--no-poison-system-directories): Document.
+    * ldfile.c (ldfile_add_library_path): Check
+    command_line.poison_system_directories.
+    * ldmain.c (main): Initialize
+    command_line.poison_system_directories.
+    * lexsup.c (enum option_values): Add
+    OPTION_NO_POISON_SYSTEM_DIRECTORIES.
+    (ld_options): Add --no-poison-system-directories.
+    (parse_args): Handle new option.
+
+2007-04-20  Joseph Myers  <joseph@codesourcery.com>
+
+    Merge from Sourcery G++ binutils 2.17:
+
+    2007-03-20  Joseph Myers  <joseph@codesourcery.com>
+    Based on patch by Mark Hatle <mark.hatle@windriver.com>.
+    ld/
+    * configure.in (--enable-poison-system-directories): New option.
+    * configure, config.in: Regenerate.
+    * ldfile.c (ldfile_add_library_path): If
+    ENABLE_POISON_SYSTEM_DIRECTORIES defined, warn for use of /lib,
+    /usr/lib, /usr/local/lib or /usr/X11R6/lib.
+
+Upstream-Status: Pending
+
+Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
+Signed-off-by: Scott Garman <scott.a.garman@intel.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ld/config.in    |  3 +++
+ ld/configure    | 16 ++++++++++++++++
+ ld/configure.ac | 10 ++++++++++
+ ld/ld.h         |  8 ++++++++
+ ld/ld.texinfo   | 12 ++++++++++++
+ ld/ldfile.c     | 17 +++++++++++++++++
+ ld/ldlex.h      |  2 ++
+ ld/ldmain.c     |  2 ++
+ ld/lexsup.c     | 16 ++++++++++++++++
+ 9 files changed, 86 insertions(+)
+
+diff --git a/ld/config.in b/ld/config.in
+index a846743da6..df3cd5fb60 100644
+--- a/ld/config.in
++++ b/ld/config.in
+@@ -27,6 +27,9 @@
+    language is requested. */
+ #undef ENABLE_NLS
+ 
++/* Define to warn for use of native system library directories */
++#undef ENABLE_POISON_SYSTEM_DIRECTORIES
++
+ /* Additional extension a shared object might have. */
+ #undef EXTRA_SHLIB_EXTENSION
+ 
+diff --git a/ld/configure b/ld/configure
+index 4e71511bd1..71c6ad1fd1 100755
+--- a/ld/configure
++++ b/ld/configure
+@@ -789,6 +789,7 @@ with_lib_path
+ enable_targets
+ enable_64_bit_bfd
+ with_sysroot
++enable_poison_system_directories
+ enable_gold
+ enable_got
+ enable_compressed_debug_sections
+@@ -1448,6 +1449,8 @@ Optional Features:
+   --disable-largefile     omit support for large files
+   --enable-targets        alternative target configurations
+   --enable-64-bit-bfd     64-bit support (on hosts with narrower word sizes)
++  --enable-poison-system-directories
++                          warn for use of native system library directories
+   --enable-gold[=ARG]     build gold [ARG={default,yes,no}]
+   --enable-got=<type>     GOT handling scheme (target, single, negative,
+                           multigot)
+@@ -16315,6 +16318,19 @@ fi
+ 
+ 
+ 
++# Check whether --enable-poison-system-directories was given.
++if test "${enable_poison_system_directories+set}" = set; then :
++  enableval=$enable_poison_system_directories;
++else
++  enable_poison_system_directories=no
++fi
++
++if test "x${enable_poison_system_directories}" = "xyes"; then
++
++$as_echo "#define ENABLE_POISON_SYSTEM_DIRECTORIES 1" >>confdefs.h
++
++fi
++
+ # Check whether --enable-gold was given.
+ if test "${enable_gold+set}" = set; then :
+   enableval=$enable_gold; case "${enableval}" in
+diff --git a/ld/configure.ac b/ld/configure.ac
+index 00080f85fd..3aa98e37fb 100644
+--- a/ld/configure.ac
++++ b/ld/configure.ac
+@@ -95,6 +95,16 @@ AC_SUBST(use_sysroot)
+ AC_SUBST(TARGET_SYSTEM_ROOT)
+ AC_SUBST(TARGET_SYSTEM_ROOT_DEFINE)
+ 
++AC_ARG_ENABLE([poison-system-directories],
++         AS_HELP_STRING([--enable-poison-system-directories],
++                [warn for use of native system library directories]),,
++         [enable_poison_system_directories=no])
++if test "x${enable_poison_system_directories}" = "xyes"; then
++  AC_DEFINE([ENABLE_POISON_SYSTEM_DIRECTORIES],
++       [1],
++       [Define to warn for use of native system library directories])
++fi
++
+ dnl Use --enable-gold to decide if this linker should be the default.
+ dnl "install_as_default" is set to false if gold is the default linker.
+ dnl "installed_linker" is the installed BFD linker name.
+diff --git a/ld/ld.h b/ld/ld.h
+index c6fa1247f0..01c373498f 100644
+--- a/ld/ld.h
++++ b/ld/ld.h
+@@ -174,6 +174,14 @@ typedef struct
+      in the linker script.  */
+   bfd_boolean force_group_allocation;
+ 
++  /* If TRUE (the default) warn for uses of system directories when
++     cross linking.  */
++  bfd_boolean poison_system_directories;
++
++  /* If TRUE (default FALSE) give an error for uses of system
++     directories when cross linking instead of a warning.  */
++  bfd_boolean error_poison_system_directories;
++
+   /* Big or little endian as set on command line.  */
+   enum endian_enum endian;
+ 
+diff --git a/ld/ld.texinfo b/ld/ld.texinfo
+index ebe7e7b7bd..33aa2c62fa 100644
+--- a/ld/ld.texinfo
++++ b/ld/ld.texinfo
+@@ -2480,6 +2480,18 @@ string identifying the original linked file does not change.
+ 
+ Passing @code{none} for @var{style} disables the setting from any
+ @code{--build-id} options earlier on the command line.
++
++@kindex --no-poison-system-directories
++@item --no-poison-system-directories
++Do not warn for @option{-L} options using system directories such as
++@file{/usr/lib} when cross linking.  This option is intended for use
++in chroot environments when such directories contain the correct
++libraries for the target system rather than the host.
++
++@kindex --error-poison-system-directories
++@item --error-poison-system-directories
++Give an error instead of a warning for @option{-L} options using
++system directories when cross linking.
+ @end table
+ 
+ @c man end
+diff --git a/ld/ldfile.c b/ld/ldfile.c
+index 3b37a0a3e2..5c85b01849 100644
+--- a/ld/ldfile.c
++++ b/ld/ldfile.c
+@@ -116,6 +116,23 @@ ldfile_add_library_path (const char *name, bfd_boolean cmdline)
+     new_dirs->name = concat (ld_sysroot, name + strlen ("$SYSROOT"), (const char *) NULL);
+   else
+     new_dirs->name = xstrdup (name);
++
++#ifdef ENABLE_POISON_SYSTEM_DIRECTORIES
++  if (command_line.poison_system_directories
++  && ((!strncmp (name, "/lib", 4))
++      || (!strncmp (name, "/usr/lib", 8))
++      || (!strncmp (name, "/usr/local/lib", 14))
++      || (!strncmp (name, "/usr/X11R6/lib", 14))))
++   {
++     if (command_line.error_poison_system_directories)
++       einfo (_("%X%P: error: library search path \"%s\" is unsafe for "
++            "cross-compilation\n"), name);
++     else
++       einfo (_("%P: warning: library search path \"%s\" is unsafe for "
++            "cross-compilation\n"), name);
++   }
++#endif
++
+ }
+ 
+ /* Try to open a BFD for a lang_input_statement.  */
+diff --git a/ld/ldlex.h b/ld/ldlex.h
+index 5aa7f6bc3e..cb655e0399 100644
+--- a/ld/ldlex.h
++++ b/ld/ldlex.h
+@@ -147,6 +147,8 @@ enum option_values
+   OPTION_REQUIRE_DEFINED_SYMBOL,
+   OPTION_ORPHAN_HANDLING,
+   OPTION_FORCE_GROUP_ALLOCATION,
++  OPTION_NO_POISON_SYSTEM_DIRECTORIES,
++  OPTION_ERROR_POISON_SYSTEM_DIRECTORIES,
+ };
+ 
+ /* The initial parser states.  */
+diff --git a/ld/ldmain.c b/ld/ldmain.c
+index 2b09f20413..89e2a3a805 100644
+--- a/ld/ldmain.c
++++ b/ld/ldmain.c
+@@ -261,6 +261,8 @@ main (int argc, char **argv)
+   command_line.warn_mismatch = TRUE;
+   command_line.warn_search_mismatch = TRUE;
+   command_line.check_section_addresses = -1;
++  command_line.poison_system_directories = TRUE;
++  command_line.error_poison_system_directories = FALSE;
+ 
+   /* We initialize DEMANGLING based on the environment variable
+      COLLECT_NO_DEMANGLE.  The gcc collect2 program will demangle the
+diff --git a/ld/lexsup.c b/ld/lexsup.c
+index effa277b16..e4929607e9 100644
+--- a/ld/lexsup.c
++++ b/ld/lexsup.c
+@@ -538,6 +538,14 @@ static const struct ld_option ld_options[] =
+   { {"orphan-handling", required_argument, NULL, OPTION_ORPHAN_HANDLING},
+     '\0', N_("=MODE"), N_("Control how orphan sections are handled."),
+     TWO_DASHES },
++  { {"no-poison-system-directories", no_argument, NULL,
++     OPTION_NO_POISON_SYSTEM_DIRECTORIES},
++    '\0', NULL, N_("Do not warn for -L options using system directories"),
++    TWO_DASHES },
++  { {"error-poison-system-directories", no_argument, NULL,
++    +     OPTION_ERROR_POISON_SYSTEM_DIRECTORIES},
++    '\0', NULL, N_("Give an error for -L options using system directories"),
++    TWO_DASHES },
+ };
+ 
+ #define OPTION_COUNT ARRAY_SIZE (ld_options)
+@@ -1568,6 +1576,14 @@ parse_args (unsigned argc, char **argv)
+ 	    einfo (_("%P%F: invalid argument to option"
+ 		     " \"--orphan-handling\"\n"));
+ 	  break;
++
++	case OPTION_NO_POISON_SYSTEM_DIRECTORIES:
++	  command_line.poison_system_directories = FALSE;
++	  break;
++
++	case OPTION_ERROR_POISON_SYSTEM_DIRECTORIES:
++	  command_line.error_poison_system_directories = TRUE;
++	  break;
+ 	}
+     }
+ 
+-- 
+2.14.0
+
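Not part of the imported patch itself: a minimal usage sketch of the options added above, assuming a cross linker named mips-poky-linux-ld built with --enable-poison-system-directories (the triplet and object names are hypothetical; the option names and warning text come from the lexsup.c and ldfile.c hunks).

    # A -L pointing at a host system directory now triggers the new diagnostic:
    $ mips-poky-linux-ld -L/usr/lib -o image main.o
    mips-poky-linux-ld: warning: library search path "/usr/lib" is unsafe for cross-compilation

    # Promote the warning to a hard error:
    $ mips-poky-linux-ld --error-poison-system-directories -L/usr/lib -o image main.o

    # Or silence it inside a chroot whose /usr/lib really holds target libraries:
    $ mips-poky-linux-ld --no-poison-system-directories -L/usr/lib -o image main.o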
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0010-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0010-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch
new file mode 100644
index 0000000..496242e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0010-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch
@@ -0,0 +1,52 @@
+From 88fac08f1c472c612f381cbb9408756f2f58b4ff Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 2 Mar 2015 01:42:38 +0000
+Subject: [PATCH 10/15] Fix rpath in libtool when sysroot is enabled
+
+Enabling sysroot support in libtool exposed a bug where the final
+library had an RPATH encoded into it which still pointed to the
+sysroot. This works around the issue until it gets sorted out
+upstream.
+
+Fix suggested by Richard Purdie <richard.purdie@linuxfoundation.org>
+
+Upstream-Status: Inappropriate [embedded specific]
+
+Signed-off-by: Scott Garman <scott.a.garman@intel.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ltmain.sh | 10 ++++++++--
+ 1 file changed, 8 insertions(+), 2 deletions(-)
+
+diff --git a/ltmain.sh b/ltmain.sh
+index 70e856e065..11ee684ccc 100644
+--- a/ltmain.sh
++++ b/ltmain.sh
+@@ -8035,9 +8035,11 @@ EOF
+ 	  test "$opt_mode" != relink && rpath="$compile_rpath$rpath"
+ 	  for libdir in $rpath; do
+ 	    if test -n "$hardcode_libdir_flag_spec"; then
++		  func_replace_sysroot "$libdir"
++		  libdir=$func_replace_sysroot_result
++		  func_stripname '=' '' "$libdir"
++		  libdir=$func_stripname_result
+ 	      if test -n "$hardcode_libdir_separator"; then
+-		func_replace_sysroot "$libdir"
+-		libdir=$func_replace_sysroot_result
+ 		if test -z "$hardcode_libdirs"; then
+ 		  hardcode_libdirs="$libdir"
+ 		else
+@@ -8770,6 +8772,10 @@ EOF
+       hardcode_libdirs=
+       for libdir in $compile_rpath $finalize_rpath; do
+ 	if test -n "$hardcode_libdir_flag_spec"; then
++	  func_replace_sysroot "$libdir"
++	  libdir=$func_replace_sysroot_result
++	  func_stripname '=' '' "$libdir"
++	  libdir=$func_stripname_result
+ 	  if test -n "$hardcode_libdir_separator"; then
+ 	    if test -z "$hardcode_libdirs"; then
+ 	      hardcode_libdirs="$libdir"
+-- 
+2.14.0
+
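Not part of the imported patch: a quick way to observe the problem the workaround above addresses, assuming a libtool-built shared library libexample.so staged under a sysroot (the library name and the sysroot path are hypothetical placeholders; the output shape is only approximate).

    # Without the fix, the hardcoded rpath can retain the build sysroot prefix:
    $ readelf -d libexample.so | grep -E 'RPATH|RUNPATH'
     0x0000000f (RPATH)    Library rpath: [/build/tmp/sysroots/target/usr/lib]
    # With func_replace_sysroot applied before the rpath is hardcoded, the same
    # entry should show only the target path, e.g. [/usr/lib].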
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0010-warn-for-uses-of-system-directories-when-cross-linki.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0010-warn-for-uses-of-system-directories-when-cross-linki.patch
deleted file mode 100644
index 0774ad6..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0010-warn-for-uses-of-system-directories-when-cross-linki.patch
+++ /dev/null
@@ -1,273 +0,0 @@
-From b1ab17abe4128684f19775448545176fb2a5e27e Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 15 Jan 2016 06:31:09 +0000
-Subject: [PATCH 10/15] warn for uses of system directories when cross linking
-
-2008-07-02  Joseph Myers  <joseph@codesourcery.com>
-
-    ld/
-    * ld.h (args_type): Add error_poison_system_directories.
-    * ld.texinfo (--error-poison-system-directories): Document.
-    * ldfile.c (ldfile_add_library_path): Check
-    command_line.error_poison_system_directories.
-    * ldmain.c (main): Initialize
-    command_line.error_poison_system_directories.
-    * lexsup.c (enum option_values): Add
-    OPTION_ERROR_POISON_SYSTEM_DIRECTORIES.
-    (ld_options): Add --error-poison-system-directories.
-    (parse_args): Handle new option.
-
-2007-06-13  Joseph Myers  <joseph@codesourcery.com>
-
-    ld/
-    * config.in: Regenerate.
-    * ld.h (args_type): Add poison_system_directories.
-    * ld.texinfo (--no-poison-system-directories): Document.
-    * ldfile.c (ldfile_add_library_path): Check
-    command_line.poison_system_directories.
-    * ldmain.c (main): Initialize
-    command_line.poison_system_directories.
-    * lexsup.c (enum option_values): Add
-    OPTION_NO_POISON_SYSTEM_DIRECTORIES.
-    (ld_options): Add --no-poison-system-directories.
-    (parse_args): Handle new option.
-
-2007-04-20  Joseph Myers  <joseph@codesourcery.com>
-
-    Merge from Sourcery G++ binutils 2.17:
-
-    2007-03-20  Joseph Myers  <joseph@codesourcery.com>
-    Based on patch by Mark Hatle <mark.hatle@windriver.com>.
-    ld/
-    * configure.in (--enable-poison-system-directories): New option.
-    * configure, config.in: Regenerate.
-    * ldfile.c (ldfile_add_library_path): If
-    ENABLE_POISON_SYSTEM_DIRECTORIES defined, warn for use of /lib,
-    /usr/lib, /usr/local/lib or /usr/X11R6/lib.
-
-Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
-Signed-off-by: Scott Garman <scott.a.garman@intel.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
-Upstream-Status: Pending
-
- ld/config.in    |  3 +++
- ld/configure    | 16 ++++++++++++++++
- ld/configure.ac | 10 ++++++++++
- ld/ld.h         |  8 ++++++++
- ld/ld.texinfo   | 12 ++++++++++++
- ld/ldfile.c     | 17 +++++++++++++++++
- ld/ldlex.h      |  2 ++
- ld/ldmain.c     |  2 ++
- ld/lexsup.c     | 16 ++++++++++++++++
- 9 files changed, 86 insertions(+)
-
-diff --git a/ld/config.in b/ld/config.in
-index 2c6d698b6c..d3cb7e882d 100644
---- a/ld/config.in
-+++ b/ld/config.in
-@@ -17,6 +17,9 @@
-    language is requested. */
- #undef ENABLE_NLS
- 
-+/* Define to warn for use of native system library directories */
-+#undef ENABLE_POISON_SYSTEM_DIRECTORIES
-+
- /* Additional extension a shared object might have. */
- #undef EXTRA_SHLIB_EXTENSION
- 
-diff --git a/ld/configure b/ld/configure
-index 4277b74bad..63109644b6 100755
---- a/ld/configure
-+++ b/ld/configure
-@@ -793,6 +793,7 @@ with_lib_path
- enable_targets
- enable_64_bit_bfd
- with_sysroot
-+enable_poison_system_directories
- enable_gold
- enable_got
- enable_compressed_debug_sections
-@@ -1450,6 +1451,8 @@ Optional Features:
-   --disable-largefile     omit support for large files
-   --enable-targets        alternative target configurations
-   --enable-64-bit-bfd     64-bit support (on hosts with narrower word sizes)
-+  --enable-poison-system-directories
-+                          warn for use of native system library directories
-   --enable-gold[=ARG]     build gold [ARG={default,yes,no}]
-   --enable-got=<type>     GOT handling scheme (target, single, negative,
-                           multigot)
-@@ -16314,6 +16317,19 @@ fi
- 
- 
- 
-+# Check whether --enable-poison-system-directories was given.
-+if test "${enable_poison_system_directories+set}" = set; then :
-+  enableval=$enable_poison_system_directories;
-+else
-+  enable_poison_system_directories=no
-+fi
-+
-+if test "x${enable_poison_system_directories}" = "xyes"; then
-+
-+$as_echo "#define ENABLE_POISON_SYSTEM_DIRECTORIES 1" >>confdefs.h
-+
-+fi
-+
- # Check whether --enable-gold was given.
- if test "${enable_gold+set}" = set; then :
-   enableval=$enable_gold; case "${enableval}" in
-diff --git a/ld/configure.ac b/ld/configure.ac
-index 36a9f5083a..47f1d33fa5 100644
---- a/ld/configure.ac
-+++ b/ld/configure.ac
-@@ -95,6 +95,16 @@ AC_SUBST(use_sysroot)
- AC_SUBST(TARGET_SYSTEM_ROOT)
- AC_SUBST(TARGET_SYSTEM_ROOT_DEFINE)
- 
-+AC_ARG_ENABLE([poison-system-directories],
-+         AS_HELP_STRING([--enable-poison-system-directories],
-+                [warn for use of native system library directories]),,
-+         [enable_poison_system_directories=no])
-+if test "x${enable_poison_system_directories}" = "xyes"; then
-+  AC_DEFINE([ENABLE_POISON_SYSTEM_DIRECTORIES],
-+       [1],
-+       [Define to warn for use of native system library directories])
-+fi
-+
- dnl Use --enable-gold to decide if this linker should be the default.
- dnl "install_as_default" is set to false if gold is the default linker.
- dnl "installed_linker" is the installed BFD linker name.
-diff --git a/ld/ld.h b/ld/ld.h
-index 104bb8e237..74c914bdd5 100644
---- a/ld/ld.h
-+++ b/ld/ld.h
-@@ -172,6 +172,14 @@ typedef struct
-   /* If set, display the target memory usage (per memory region).  */
-   bfd_boolean print_memory_usage;
- 
-+  /* If TRUE (the default) warn for uses of system directories when
-+     cross linking.  */
-+  bfd_boolean poison_system_directories;
-+
-+  /* If TRUE (default FALSE) give an error for uses of system
-+     directories when cross linking instead of a warning.  */
-+  bfd_boolean error_poison_system_directories;
-+
-   /* Big or little endian as set on command line.  */
-   enum endian_enum endian;
- 
-diff --git a/ld/ld.texinfo b/ld/ld.texinfo
-index d393acdd94..ba995b1e3a 100644
---- a/ld/ld.texinfo
-+++ b/ld/ld.texinfo
-@@ -2403,6 +2403,18 @@ string identifying the original linked file does not change.
- 
- Passing @code{none} for @var{style} disables the setting from any
- @code{--build-id} options earlier on the command line.
-+
-+@kindex --no-poison-system-directories
-+@item --no-poison-system-directories
-+Do not warn for @option{-L} options using system directories such as
-+@file{/usr/lib} when cross linking.  This option is intended for use
-+in chroot environments when such directories contain the correct
-+libraries for the target system rather than the host.
-+
-+@kindex --error-poison-system-directories
-+@item --error-poison-system-directories
-+Give an error instead of a warning for @option{-L} options using
-+system directories when cross linking.
- @end table
- 
- @c man end
-diff --git a/ld/ldfile.c b/ld/ldfile.c
-index 0943bb2dfa..95874c75de 100644
---- a/ld/ldfile.c
-+++ b/ld/ldfile.c
-@@ -114,6 +114,23 @@ ldfile_add_library_path (const char *name, bfd_boolean cmdline)
-     new_dirs->name = concat (ld_sysroot, name + 1, (const char *) NULL);
-   else
-     new_dirs->name = xstrdup (name);
-+
-+#ifdef ENABLE_POISON_SYSTEM_DIRECTORIES
-+  if (command_line.poison_system_directories
-+  && ((!strncmp (name, "/lib", 4))
-+      || (!strncmp (name, "/usr/lib", 8))
-+      || (!strncmp (name, "/usr/local/lib", 14))
-+      || (!strncmp (name, "/usr/X11R6/lib", 14))))
-+   {
-+     if (command_line.error_poison_system_directories)
-+       einfo (_("%X%P: error: library search path \"%s\" is unsafe for "
-+            "cross-compilation\n"), name);
-+     else
-+       einfo (_("%P: warning: library search path \"%s\" is unsafe for "
-+            "cross-compilation\n"), name);
-+   }
-+#endif
-+
- }
- 
- /* Try to open a BFD for a lang_input_statement.  */
-diff --git a/ld/ldlex.h b/ld/ldlex.h
-index 3ecac2bc86..34117f43a5 100644
---- a/ld/ldlex.h
-+++ b/ld/ldlex.h
-@@ -146,6 +146,8 @@ enum option_values
-   OPTION_PRINT_MEMORY_USAGE,
-   OPTION_REQUIRE_DEFINED_SYMBOL,
-   OPTION_ORPHAN_HANDLING,
-+  OPTION_NO_POISON_SYSTEM_DIRECTORIES,
-+  OPTION_ERROR_POISON_SYSTEM_DIRECTORIES,
- };
- 
- /* The initial parser states.  */
-diff --git a/ld/ldmain.c b/ld/ldmain.c
-index 1e48b1a2db..21f27bacf1 100644
---- a/ld/ldmain.c
-+++ b/ld/ldmain.c
-@@ -270,6 +270,8 @@ main (int argc, char **argv)
-   command_line.warn_mismatch = TRUE;
-   command_line.warn_search_mismatch = TRUE;
-   command_line.check_section_addresses = -1;
-+  command_line.poison_system_directories = TRUE;
-+  command_line.error_poison_system_directories = FALSE;
- 
-   /* We initialize DEMANGLING based on the environment variable
-      COLLECT_NO_DEMANGLE.  The gcc collect2 program will demangle the
-diff --git a/ld/lexsup.c b/ld/lexsup.c
-index 0b7d4976ac..dedc07a143 100644
---- a/ld/lexsup.c
-+++ b/ld/lexsup.c
-@@ -535,6 +535,14 @@ static const struct ld_option ld_options[] =
-   { {"orphan-handling", required_argument, NULL, OPTION_ORPHAN_HANDLING},
-     '\0', N_("=MODE"), N_("Control how orphan sections are handled."),
-     TWO_DASHES },
-+  { {"no-poison-system-directories", no_argument, NULL,
-+     OPTION_NO_POISON_SYSTEM_DIRECTORIES},
-+    '\0', NULL, N_("Do not warn for -L options using system directories"),
-+    TWO_DASHES },
-+  { {"error-poison-system-directories", no_argument, NULL,
-+    +     OPTION_ERROR_POISON_SYSTEM_DIRECTORIES},
-+    '\0', NULL, N_("Give an error for -L options using system directories"),
-+    TWO_DASHES },
- };
- 
- #define OPTION_COUNT ARRAY_SIZE (ld_options)
-@@ -1562,6 +1570,14 @@ parse_args (unsigned argc, char **argv)
- 	    einfo (_("%P%F: invalid argument to option"
- 		     " \"--orphan-handling\"\n"));
- 	  break;
-+
-+	case OPTION_NO_POISON_SYSTEM_DIRECTORIES:
-+	  command_line.poison_system_directories = FALSE;
-+	  break;
-+
-+	case OPTION_ERROR_POISON_SYSTEM_DIRECTORIES:
-+	  command_line.error_poison_system_directories = TRUE;
-+	  break;
- 	}
-     }
- 
--- 
-2.12.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0011-Change-default-emulation-for-mips64-linux.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0011-Change-default-emulation-for-mips64-linux.patch
new file mode 100644
index 0000000..ac87a2d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0011-Change-default-emulation-for-mips64-linux.patch
@@ -0,0 +1,59 @@
+From 497660bdbeb6788786553a5d733105f7f898dc62 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 2 Mar 2015 01:44:14 +0000
+Subject: [PATCH 11/15] Change default emulation for mips64*-*-linux
+
+we change the default emulations to be N64 instead of N32
+
+Upstream-Status: Inappropriate [ OE configuration Specific]
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ bfd/config.bfd   | 8 ++++----
+ ld/configure.tgt | 8 ++++----
+ 2 files changed, 8 insertions(+), 8 deletions(-)
+
+diff --git a/bfd/config.bfd b/bfd/config.bfd
+index dc24aabad5..4511024f22 100644
+--- a/bfd/config.bfd
++++ b/bfd/config.bfd
+@@ -1186,12 +1186,12 @@ case "${targ}" in
+     targ_selvecs="mips_elf32_le_vec mips_elf64_be_vec mips_elf64_le_vec mips_ecoff_be_vec mips_ecoff_le_vec"
+     ;;
+   mips64*el-*-linux*)
+-    targ_defvec=mips_elf32_ntrad_le_vec
+-    targ_selvecs="mips_elf32_ntrad_be_vec mips_elf32_trad_le_vec mips_elf32_trad_be_vec mips_elf64_trad_le_vec mips_elf64_trad_be_vec"
++    targ_defvec=mips_elf64_trad_le_vec
++    targ_selvecs="mips_elf32_ntrad_be_vec mips_elf32_ntrad_le_vec mips_elf32_trad_le_vec mips_elf32_trad_be_vec mips_elf64_trad_be_vec"
+     ;;
+   mips64*-*-linux*)
+-    targ_defvec=mips_elf32_ntrad_be_vec
+-    targ_selvecs="mips_elf32_ntrad_le_vec mips_elf32_trad_be_vec mips_elf32_trad_le_vec mips_elf64_trad_be_vec mips_elf64_trad_le_vec"
++    targ_defvec=mips_elf64_trad_be_vec
++    targ_selvecs="mips_elf32_ntrad_be_vec mips_elf32_ntrad_be_vec mips_elf32_trad_be_vec mips_elf32_trad_le_vec mips_elf64_trad_le_vec"
+     ;;
+   mips*el-*-linux*)
+     targ_defvec=mips_elf32_trad_le_vec
+diff --git a/ld/configure.tgt b/ld/configure.tgt
+index 47c719cd05..fe7b9238b2 100644
+--- a/ld/configure.tgt
++++ b/ld/configure.tgt
+@@ -530,11 +530,11 @@ mips*el-*-vxworks*)	targ_emul=elf32elmipvxworks
+ mips*-*-vxworks*)	targ_emul=elf32ebmipvxworks
+ 		        targ_extra_emuls="elf32elmipvxworks" ;;
+ mips*-*-windiss)	targ_emul=elf32mipswindiss ;;
+-mips64*el-*-linux-*)	targ_emul=elf32ltsmipn32
+-			targ_extra_emuls="elf32btsmipn32 elf32ltsmip elf32btsmip elf64ltsmip elf64btsmip"
++mips64*el-*-linux-*)	targ_emul=elf64ltsmip
++			targ_extra_emuls="elf32btsmipn32 elf32ltsmipn32 elf32ltsmip elf32btsmip elf64btsmip"
+ 			targ_extra_libpath=$targ_extra_emuls ;;
+-mips64*-*-linux-*)	targ_emul=elf32btsmipn32
+-			targ_extra_emuls="elf32ltsmipn32 elf32btsmip elf32ltsmip elf64btsmip elf64ltsmip"
++mips64*-*-linux-*)	targ_emul=elf64btsmip
++			targ_extra_emuls="elf32btsmipn32 elf32ltsmipn32 elf32btsmip elf32ltsmip elf64ltsmip"
+ 			targ_extra_libpath=$targ_extra_emuls ;;
+ mips*el-*-linux-*)	targ_emul=elf32ltsmip
+ 			targ_extra_emuls="elf32btsmip elf32ltsmipn32 elf64ltsmip elf32btsmipn32 elf64btsmip"
+-- 
+2.14.0
+
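Not part of the imported patch: a short sketch of the behavioural change, assuming a mips64-poky-linux-ld cross linker (the triplet is hypothetical); the emulation names are taken from the configure.tgt hunk above.

    # With this patch a bare invocation defaults to the N64 ABI (elf64btsmip):
    $ mips64-poky-linux-ld -o app app.o
    # The previous N32 default is still available by selecting it explicitly:
    $ mips64-poky-linux-ld -m elf32btsmipn32 -o app app.o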
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0011-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0011-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch
deleted file mode 100644
index 949ef51..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0011-Fix-rpath-in-libtool-when-sysroot-is-enabled.patch
+++ /dev/null
@@ -1,52 +0,0 @@
-From 4fe13a36997253a5c91bcb086aeb392ab2095f67 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 2 Mar 2015 01:42:38 +0000
-Subject: [PATCH 11/15] Fix rpath in libtool when sysroot is enabled
-
-Enabling sysroot support in libtool exposed a bug where the final
-library had an RPATH encoded into it which still pointed to the
-sysroot. This works around the issue until it gets sorted out
-upstream.
-
-Fix suggested by Richard Purdie <richard.purdie@linuxfoundation.org>
-
-Upstream-Status: Inappropriate [embedded specific]
-
-Signed-off-by: Scott Garman <scott.a.garman@intel.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- ltmain.sh | 10 ++++++++--
- 1 file changed, 8 insertions(+), 2 deletions(-)
-
-diff --git a/ltmain.sh b/ltmain.sh
-index 70e856e065..11ee684ccc 100644
---- a/ltmain.sh
-+++ b/ltmain.sh
-@@ -8035,9 +8035,11 @@ EOF
- 	  test "$opt_mode" != relink && rpath="$compile_rpath$rpath"
- 	  for libdir in $rpath; do
- 	    if test -n "$hardcode_libdir_flag_spec"; then
-+		  func_replace_sysroot "$libdir"
-+		  libdir=$func_replace_sysroot_result
-+		  func_stripname '=' '' "$libdir"
-+		  libdir=$func_stripname_result
- 	      if test -n "$hardcode_libdir_separator"; then
--		func_replace_sysroot "$libdir"
--		libdir=$func_replace_sysroot_result
- 		if test -z "$hardcode_libdirs"; then
- 		  hardcode_libdirs="$libdir"
- 		else
-@@ -8770,6 +8772,10 @@ EOF
-       hardcode_libdirs=
-       for libdir in $compile_rpath $finalize_rpath; do
- 	if test -n "$hardcode_libdir_flag_spec"; then
-+	  func_replace_sysroot "$libdir"
-+	  libdir=$func_replace_sysroot_result
-+	  func_stripname '=' '' "$libdir"
-+	  libdir=$func_stripname_result
- 	  if test -n "$hardcode_libdir_separator"; then
- 	    if test -z "$hardcode_libdirs"; then
- 	      hardcode_libdirs="$libdir"
--- 
-2.12.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0012-Add-support-for-Netlogic-XLP.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0012-Add-support-for-Netlogic-XLP.patch
new file mode 100644
index 0000000..dc5e580
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0012-Add-support-for-Netlogic-XLP.patch
@@ -0,0 +1,393 @@
+From 8c60a55d3678589d93739bd27fec216911d80968 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 14 Feb 2016 17:06:19 +0000
+Subject: [PATCH 12/15] Add support for Netlogic XLP
+
+Patch From: Nebu Philips <nphilips@netlogicmicro.com>
+
+Using the mipsisa64r2nlm target, add support for XLP from
+Netlogic. Also, update vendor name to NLM wherever applicable.
+
+Use 0x00000080 for INSN_XLP, the value 0x00000040 has already been
+assigned to INSN_OCTEON3
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Signed-off-by: Baoshan Pang <baoshan.pang@windriver.com>
+Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
+---
+ bfd/aoutx.h           |  1 +
+ bfd/archures.c        |  1 +
+ bfd/bfd-in2.h         |  1 +
+ bfd/config.bfd        |  5 +++++
+ bfd/cpu-mips.c        |  6 ++++--
+ bfd/elfxx-mips.c      |  8 ++++++++
+ binutils/readelf.c    |  1 +
+ gas/config/tc-mips.c  |  4 +++-
+ gas/configure         |  3 +++
+ include/elf/mips.h    |  1 +
+ include/opcode/mips.h |  6 ++++++
+ ld/configure.tgt      |  2 ++
+ opcodes/mips-dis.c    | 12 +++++-------
+ opcodes/mips-opc.c    | 31 ++++++++++++++++++++-----------
+ 14 files changed, 61 insertions(+), 21 deletions(-)
+
+diff --git a/bfd/aoutx.h b/bfd/aoutx.h
+index 3d38fda14b..0aec49bbb3 100644
+--- a/bfd/aoutx.h
++++ b/bfd/aoutx.h
+@@ -814,6 +814,7 @@ NAME (aout, machine_type) (enum bfd_architecture arch,
+ 	case bfd_mach_mipsisa64r6:
+ 	case bfd_mach_mips_sb1:
+ 	case bfd_mach_mips_xlr:
++	case bfd_mach_mips_xlp:
+ 	  /* FIXME: These should be MIPS3, MIPS4, MIPS16, MIPS32, etc.  */
+ 	  arch_flags = M_MIPS2;
+ 	  break;
+diff --git a/bfd/archures.c b/bfd/archures.c
+index 433b95fa08..063b7943a1 100644
+--- a/bfd/archures.c
++++ b/bfd/archures.c
+@@ -201,6 +201,7 @@ DESCRIPTION
+ .#define bfd_mach_mips_octeon3          6503
+ .#define bfd_mach_mips_xlr              887682   {* decimal 'XLR'  *}
+ .#define bfd_mach_mips_interaptiv_mr2   736550   {* decimal 'IA2'  *}
++.#define bfd_mach_mips_xlp              887680   {* decimal 'XLP'  *}
+ .#define bfd_mach_mipsisa32             32
+ .#define bfd_mach_mipsisa32r2           33
+ .#define bfd_mach_mipsisa32r3           34
+diff --git a/bfd/bfd-in2.h b/bfd/bfd-in2.h
+index d126aed086..2b753b3a93 100644
+--- a/bfd/bfd-in2.h
++++ b/bfd/bfd-in2.h
+@@ -2060,6 +2060,7 @@ enum bfd_architecture
+ #define bfd_mach_mips_octeon3          6503
+ #define bfd_mach_mips_xlr              887682   /* decimal 'XLR'  */
+ #define bfd_mach_mips_interaptiv_mr2   736550   /* decimal 'IA2'  */
++#define bfd_mach_mips_xlp              887680   /* decimal 'XLP'  */
+ #define bfd_mach_mipsisa32             32
+ #define bfd_mach_mipsisa32r2           33
+ #define bfd_mach_mipsisa32r3           34
+diff --git a/bfd/config.bfd b/bfd/config.bfd
+index 4511024f22..f0f9072f10 100644
+--- a/bfd/config.bfd
++++ b/bfd/config.bfd
+@@ -1169,6 +1169,11 @@ case "${targ}" in
+     targ_defvec=mips_elf32_le_vec
+     targ_selvecs="mips_elf32_be_vec mips_elf64_be_vec mips_elf64_le_vec"
+     ;;
++  mipsisa64*-*-elf*)
++	targ_defvec=mips_elf32_trad_be_vec
++	targ_selvecs="mips_elf32_trad_le_vec mips_elf64_trad_be_vec mips_elf64_trad_le_vec"
++	want64=true
++	;;
+   mips*-*-elf* | mips*-*-rtems* | mips*-*-windiss | mips*-*-none)
+     targ_defvec=mips_elf32_be_vec
+     targ_selvecs="mips_elf32_le_vec mips_elf64_be_vec mips_elf64_le_vec"
+diff --git a/bfd/cpu-mips.c b/bfd/cpu-mips.c
+index 2493094bef..8375d1ae96 100644
+--- a/bfd/cpu-mips.c
++++ b/bfd/cpu-mips.c
+@@ -105,7 +105,8 @@ enum
+   I_mipsocteon3,
+   I_xlr,
+   I_interaptiv_mr2,
+-  I_micromips
++  I_micromips,
++  I_xlp
+ };
+ 
+ #define NN(index) (&arch_info_struct[(index) + 1])
+@@ -158,7 +159,8 @@ static const bfd_arch_info_type arch_info_struct[] =
+   N (64, 64, bfd_mach_mips_xlr, "mips:xlr",       FALSE, NN(I_xlr)),
+   N (32, 32, bfd_mach_mips_interaptiv_mr2, "mips:interaptiv-mr2", FALSE,
+      NN(I_interaptiv_mr2)),
+-  N (64, 64, bfd_mach_mips_micromips,"mips:micromips",FALSE,0)
++  N (64, 64, bfd_mach_mips_micromips,"mips:micromips",FALSE,NN(I_micromips)),
++  N (64, 64, bfd_mach_mips_xlp, "mips:xlp",      FALSE, 0)
+ };
+ 
+ /* The default architecture is mips:3000, but with a machine number of
+diff --git a/bfd/elfxx-mips.c b/bfd/elfxx-mips.c
+index fddf68c816..354c85d00b 100644
+--- a/bfd/elfxx-mips.c
++++ b/bfd/elfxx-mips.c
+@@ -6796,6 +6796,9 @@ _bfd_elf_mips_mach (flagword flags)
+     case E_MIPS_MACH_IAMR2:
+       return bfd_mach_mips_interaptiv_mr2;
+ 
++    case E_MIPS_MACH_XLP:
++      return bfd_mach_mips_xlp;
++
+     default:
+       switch (flags & EF_MIPS_ARCH)
+ 	{
+@@ -11956,6 +11959,10 @@ mips_set_isa_flags (bfd *abfd)
+       val = E_MIPS_ARCH_64R2 | E_MIPS_MACH_OCTEON2;
+       break;
+ 
++	case bfd_mach_mips_xlp:
++	  val = E_MIPS_ARCH_64R2 | E_MIPS_MACH_XLP;
++	  break;
++
+     case bfd_mach_mipsisa32:
+       val = E_MIPS_ARCH_32;
+       break;
+@@ -13989,6 +13996,7 @@ static const struct mips_mach_extension mips_mach_extensions[] =
+   { bfd_mach_mips_octeonp, bfd_mach_mips_octeon },
+   { bfd_mach_mips_octeon, bfd_mach_mipsisa64r2 },
+   { bfd_mach_mips_loongson_3a, bfd_mach_mipsisa64r2 },
++  { bfd_mach_mips_xlp, bfd_mach_mipsisa64r2 },
+ 
+   /* MIPS64 extensions.  */
+   { bfd_mach_mipsisa64r2, bfd_mach_mipsisa64 },
+diff --git a/binutils/readelf.c b/binutils/readelf.c
+index 2b15f0f2cb..092744708e 100644
+--- a/binutils/readelf.c
++++ b/binutils/readelf.c
+@@ -3335,6 +3335,7 @@ get_machine_flags (unsigned e_flags, unsigned e_machine)
+ 	    case E_MIPS_MACH_OCTEON3: strcat (buf, ", octeon3"); break;
+ 	    case E_MIPS_MACH_XLR:  strcat (buf, ", xlr"); break;
+ 	    case E_MIPS_MACH_IAMR2:  strcat (buf, ", interaptiv-mr2"); break;
++	    case E_MIPS_MACH_XLP:  strcat (buf, ", xlp"); break;
+ 	    case 0:
+ 	    /* We simply ignore the field in this case to avoid confusion:
+ 	       MIPS ELF does not specify EF_MIPS_MACH, it is a GNU
+diff --git a/gas/config/tc-mips.c b/gas/config/tc-mips.c
+index 3804df2958..9576c986db 100644
+--- a/gas/config/tc-mips.c
++++ b/gas/config/tc-mips.c
+@@ -552,6 +552,7 @@ static int mips_32bitmode = 0;
+    || mips_opts.arch == CPU_RM7000                    \
+    || mips_opts.arch == CPU_VR5500                    \
+    || mips_opts.micromips                             \
++   || mips_opts.arch == CPU_XLP                       \
+    )
+ 
+ /* Whether the processor uses hardware interlocks to protect reads
+@@ -581,6 +582,7 @@ static int mips_32bitmode = 0;
+     && mips_opts.isa != ISA_MIPS3)                    \
+    || mips_opts.arch == CPU_R4300                     \
+    || mips_opts.micromips                             \
++   || mips_opts.arch == CPU_XLP                       \
+    )
+ 
+ /* Whether the processor uses hardware interlocks to protect reads
+@@ -19738,7 +19740,7 @@ static const struct mips_cpu_info mips_cpu_info_table[] =
+   /* Broadcom XLP.
+      XLP is mostly like XLR, with the prominent exception that it is
+      MIPS64R2 rather than MIPS64.  */
+-  { "xlp",	      0, 0,			ISA_MIPS64R2, CPU_XLR },
++  { "xlp",	      0, 0,			ISA_MIPS64R2, CPU_XLP },
+ 
+   /* MIPS 64 Release 6 */
+   { "i6400",	      0, ASE_MSA,		ISA_MIPS64R6, CPU_MIPS64R6},
+diff --git a/gas/configure b/gas/configure
+index 81dd4cbd97..95bdf3b19b 100755
+--- a/gas/configure
++++ b/gas/configure
+@@ -12989,6 +12989,9 @@ _ACEOF
+ 	  mipsisa64r6 | mipsisa64r6el)
+ 	    mips_cpu=mips64r6
+ 	    ;;
++	  mipsisa64r2nlm | mipsisa64r2nlmel)
++		mips_cpu=xlp
++		;;
+ 	  mipstx39 | mipstx39el)
+ 	    mips_cpu=r3900
+ 	    ;;
+diff --git a/include/elf/mips.h b/include/elf/mips.h
+index a4bea43ff8..73d904e25f 100644
+--- a/include/elf/mips.h
++++ b/include/elf/mips.h
+@@ -290,6 +290,7 @@ END_RELOC_NUMBERS (R_MIPS_maxext)
+ #define E_MIPS_MACH_SB1         0x008a0000
+ #define E_MIPS_MACH_OCTEON	0x008b0000
+ #define E_MIPS_MACH_XLR     	0x008c0000
++#define E_MIPS_MACH_XLP         0x008f0000
+ #define E_MIPS_MACH_OCTEON2	0x008d0000
+ #define E_MIPS_MACH_OCTEON3	0x008e0000
+ #define E_MIPS_MACH_5400	0x00910000
+diff --git a/include/opcode/mips.h b/include/opcode/mips.h
+index ceae9ec50a..276ee3c6c1 100644
+--- a/include/opcode/mips.h
++++ b/include/opcode/mips.h
+@@ -1259,6 +1259,8 @@ static const unsigned int mips_isa_table[] = {
+ #define INSN_XLR                 0x00000020
+ /* Imagination interAptiv MR2.  */
+ #define INSN_INTERAPTIV_MR2	  0x04000000
++/* Netlogic XlP instruction */
++#define INSN_XLP		0x00000080
+ 
+ /* DSP ASE */
+ #define ASE_DSP			0x00000001
+@@ -1365,6 +1367,7 @@ static const unsigned int mips_isa_table[] = {
+ #define CPU_OCTEON3	6503
+ #define CPU_XLR     	887682   	/* decimal 'XLR'   */
+ #define CPU_INTERAPTIV_MR2 736550	/* decimal 'IA2'  */
++#define CPU_XLP         887680      /* decimal 'XLP'   */
+ 
+ /* Return true if the given CPU is included in INSN_* mask MASK.  */
+ 
+@@ -1445,6 +1448,9 @@ cpu_is_member (int cpu, unsigned int mask)
+       return ((mask & INSN_ISA_MASK) == INSN_ISA32R6)
+ 	     || ((mask & INSN_ISA_MASK) == INSN_ISA64R6);
+ 
++    case CPU_XLP:
++      return (mask & INSN_XLP) != 0;
++
+     default:
+       return FALSE;
+     }
+diff --git a/ld/configure.tgt b/ld/configure.tgt
+index fe7b9238b2..2adf108b17 100644
+--- a/ld/configure.tgt
++++ b/ld/configure.tgt
+@@ -516,6 +516,8 @@ mips*el-sde-elf* | mips*el-mti-elf* | mips*el-img-elf*)
+ mips*-sde-elf* | mips*-mti-elf* | mips*-img-elf*)
+ 			targ_emul=elf32btsmip
+ 			targ_extra_emuls="elf32ltsmip elf32btsmipn32 elf64btsmip elf32ltsmipn32 elf64ltsmip" ;;
++mipsisa64*-*-elf*)	targ_emul=elf32btsmip
++			targ_extra_emuls="elf32ltsmip elf64btsmip elf64ltsmip" ;;
+ mips64*el-ps2-elf*)	targ_emul=elf32lr5900n32
+ 			targ_extra_emuls="elf32lr5900"
+ 			targ_extra_libpath=$targ_extra_emuls ;;
+diff --git a/opcodes/mips-dis.c b/opcodes/mips-dis.c
+index 45195007c1..4a80a05d19 100644
+--- a/opcodes/mips-dis.c
++++ b/opcodes/mips-dis.c
+@@ -655,13 +655,11 @@ const struct mips_arch_choice mips_arch_choices[] =
+     mips_cp0sel_names_xlr, ARRAY_SIZE (mips_cp0sel_names_xlr),
+     mips_cp1_names_mips3264, mips_hwr_names_numeric },
+ 
+-  /* XLP is mostly like XLR, with the prominent exception it is being
+-     MIPS64R2.  */
+-  { "xlp", 1, bfd_mach_mips_xlr, CPU_XLR,
+-    ISA_MIPS64R2 | INSN_XLR, 0,
+-    mips_cp0_names_xlr,
+-    mips_cp0sel_names_xlr, ARRAY_SIZE (mips_cp0sel_names_xlr),
+-    mips_cp1_names_mips3264, mips_hwr_names_numeric },
++  { "xlp", 1, bfd_mach_mips_xlp, CPU_XLP,
++    ISA_MIPS64R2 | INSN_XLP, 0,
++    mips_cp0_names_mips3264r2,
++    mips_cp0sel_names_mips3264r2, ARRAY_SIZE (mips_cp0sel_names_mips3264r2),
++    mips_cp1_names_mips3264, mips_hwr_names_mips3264r2 },
+ 
+   /* This entry, mips16, is here only for ISA/processor selection; do
+      not print its name.  */
+diff --git a/opcodes/mips-opc.c b/opcodes/mips-opc.c
+index 19fca408c9..d02069c528 100644
+--- a/opcodes/mips-opc.c
++++ b/opcodes/mips-opc.c
+@@ -328,6 +328,7 @@ decode_mips_operand (const char *p)
+ #define IOCT3	INSN_OCTEON3
+ #define XLR     INSN_XLR
+ #define IAMR2	INSN_INTERAPTIV_MR2
++#define XLP	INSN_XLP
+ #define IVIRT	ASE_VIRT
+ #define IVIRT64	ASE_VIRT64
+ 
+@@ -966,6 +967,7 @@ const struct mips_opcode mips_builtin_opcodes[] =
+ {"clo",			"U,s",		0x70000021, 0xfc0007ff, WR_1|RD_2,		0,		I32|N55,	0,	I37 },
+ {"clz",			"d,s",		0x00000050, 0xfc1f07ff, WR_1|RD_2,		0,		I37,		0,	0 },
+ {"clz",			"U,s",		0x70000020, 0xfc0007ff, WR_1|RD_2,		0,		I32|N55,	0,	I37 },
++{"crc",			"d,s,t",	0x7000001c, 0xfc0007ff,	WR_1|RD_2|RD_3,	0,		XLP, 		0,	0 },
+ /* ctc0 is at the bottom of the table.  */
+ {"ctc1",		"t,G",		0x44c00000, 0xffe007ff,	RD_1|WR_CC|CM,		0,		I1,		0,	0 },
+ {"ctc1",		"t,S",		0x44c00000, 0xffe007ff,	RD_1|WR_CC|CM,		0,		I1,		0,	0 },
+@@ -998,12 +1000,13 @@ const struct mips_opcode mips_builtin_opcodes[] =
+ {"daddiu",		"t,r,j",	0x64000000, 0xfc000000, WR_1|RD_2,		0,		I3,		0,	0 },
+ {"daddu",		"d,v,t",	0x0000002d, 0xfc0007ff, WR_1|RD_2|RD_3,		0,		I3,		0,	0 },
+ {"daddu",		"t,r,I",	0,    (int) M_DADDU_I,	INSN_MACRO,		0,		I3,		0,	0 },
+-{"daddwc",		"d,s,t", 	0x70000038, 0xfc0007ff, WR_1|RD_2|RD_3|WR_C0|RD_C0, 0,		XLR,		0,	0 },
++{"daddwc",		"d,s,t", 	0x70000038, 0xfc0007ff, WR_1|RD_2|RD_3|WR_C0|RD_C0, 0,		XLR|XLP,	0,	0 },
+ {"dbreak",		"",		0x7000003f, 0xffffffff,	0,			0,		N5,		0,	0 },
+ {"dclo",		"d,s",		0x00000053, 0xfc1f07ff, WR_1|RD_2,		0,		I69,		0,	0 },
+ {"dclo",		"U,s",	 	0x70000025, 0xfc0007ff, WR_1|RD_2, 	0,		I64|N55,	0,	I69 },
+ {"dclz",		"d,s",		0x00000052, 0xfc1f07ff, WR_1|RD_2,		0,		I69,		0,	0 },
+ {"dclz",		"U,s",	 	0x70000024, 0xfc0007ff, WR_1|RD_2, 	0,		I64|N55,	0,	I69 },
++{"dcrc",		"d,s,t",	0x7000001d, 0xfc0007ff, WR_1|RD_2|RD_3,	0,		XLP, 		0,	0 },
+ /* dctr and dctw are used on the r5000.  */
+ {"dctr",		"o(b)",	 	0xbc050000, 0xfc1f0000, RD_2,			0,		I3,		0,	0 },
+ {"dctw",		"o(b)",		0xbc090000, 0xfc1f0000, RD_2,			0,		I3,		0,	0 },
+@@ -1075,6 +1078,7 @@ const struct mips_opcode mips_builtin_opcodes[] =
+ {"dmfc0",		"t,G,H",	0x40200000, 0xffe007f8,	WR_1|RD_C0|LC,		0,		I64,		0,	0 },
+ {"dmfgc0",		"t,G",		0x40600100, 0xffe007ff, WR_1|RD_C0|LC,		0,		0,		IVIRT64, 0 },
+ {"dmfgc0",		"t,G,H",	0x40600100, 0xffe007f8, WR_1|RD_C0|LC,		0,		0,		IVIRT64, 0 },
++{"dmfur",		"t,d",		0x7000001e, 0xffe007ff, WR_1,			0,		XLP,		0,	0 },
+ {"dmt",			"",		0x41600bc1, 0xffffffff, TRAP,			0,		0,		MT32,	0 },
+ {"dmt",			"t",		0x41600bc1, 0xffe0ffff, WR_1|TRAP,		0,		0,		MT32,	0 },
+ {"dmtc0",		"t,G",		0x40a00000, 0xffe007ff,	RD_1|WR_C0|WR_CC|CM,	0,		I3,		0,	EE },
+@@ -1090,6 +1094,8 @@ const struct mips_opcode mips_builtin_opcodes[] =
+ /* dmfc3 is at the bottom of the table.  */
+ /* dmtc3 is at the bottom of the table.  */
+ {"dmuh",		"d,s,t",	0x000000dc, 0xfc0007ff, WR_1|RD_2|RD_3,		0,		I69,		0,	0 },
++{"dmtur",		"t,d",		0x7000001f, 0xffe007ff,	RD_1,			0,		XLP,		0,	0 },
++{"dmul",		"d,s,t",	0x70000006, 0xfc0007ff,	WR_1|RD_2|RD_3,		0,		XLP,		0,	0 },
+ {"dmul",		"d,s,t",	0x0000009c, 0xfc0007ff, WR_1|RD_2|RD_3,		0,		I69,		0,	0 },
+ {"dmul",		"d,v,t",	0x70000003, 0xfc0007ff, WR_1|RD_2|RD_3|WR_HILO,	0,		IOCT,		0,	0 },
+ {"dmul",		"d,v,t",	0,    (int) M_DMUL,	INSN_MACRO,		0,		I3,		0,	M32|I69 },
+@@ -1243,9 +1249,9 @@ const struct mips_opcode mips_builtin_opcodes[] =
+ {"ld",			"s,-b(+R)",	0xec180000, 0xfc1c0000, WR_1,			RD_pc,		I69,		0,	0 },
+ {"ld",			"t,A(b)",	0,    (int) M_LD_AB,	INSN_MACRO,		0,		I1,		0,	0 },
+ {"ld",			"t,o(b)",	0xdc000000, 0xfc000000, WR_1|RD_3|LM,		0,		I3,		0,	0 },
+-{"ldaddw",		"t,b",		0x70000010, 0xfc00ffff,	MOD_1|RD_2|LM|SM,	0,		XLR,		0,	0 },
+-{"ldaddwu",		"t,b",		0x70000011, 0xfc00ffff,	MOD_1|RD_2|LM|SM,	0,		XLR,		0,	0 },
+-{"ldaddd",		"t,b",		0x70000012, 0xfc00ffff,	MOD_1|RD_2|LM|SM,	0,		XLR,		0,	0 },
++{"ldaddw",		"t,b",		0x70000010, 0xfc00ffff,	MOD_1|RD_2|SM,		0,		XLR|XLP,	0,	0 },
++{"ldaddwu",		"t,b",		0x70000011, 0xfc00ffff,	MOD_1|RD_2|SM,		0,		XLR|XLP,	0,	0 },
++{"ldaddd",		"t,b",		0x70000012, 0xfc00ffff,	MOD_1|RD_2|SM,		0,		XLR|XLP,	0,	0 },
+ {"ldc1",		"T,o(b)",	0xd4000000, 0xfc000000, WR_1|RD_3|CLD|FP_D,	0,		I2,		0,	SF },
+ {"ldc1",		"E,o(b)",	0xd4000000, 0xfc000000, WR_1|RD_3|CLD|FP_D,	0,		I2,		0,	SF },
+ {"ldc1",		"T,A(b)",	0,    (int) M_LDC1_AB,	INSN_MACRO,		INSN2_M_FP_D,	I2,		0,	SF },
+@@ -1410,7 +1416,7 @@ const struct mips_opcode mips_builtin_opcodes[] =
+ {"mflo",		"d,9",		0x00000012, 0xff9f07ff, WR_1|RD_LO,		0,		0,		D32,	0 },
+ {"mflo1",		"d",		0x70000012, 0xffff07ff,	WR_1|RD_LO,		0,		EE,		0,	0 },
+ {"mflhxu",		"d",		0x00000052, 0xffff07ff,	WR_1|MOD_HILO,		0,		0,		SMT,	0 },
+-{"mfcr",		"t,s",		0x70000018, 0xfc00ffff, WR_1|RD_2,		0,		XLR,		0,	0 },
++{"mfcr",		"t,s",		0x70000018, 0xfc00ffff, WR_1,			0,		XLR|XLP,	0,	0 },
+ {"mfsa",		"d",		0x00000028, 0xffff07ff,	WR_1,			0,		EE,		0,	0 },
+ {"min.ob",		"X,Y,Q",	0x78000006, 0xfc20003f,	WR_1|RD_2|RD_3|FP_D,	0,		SB1,		MX,	0 },
+ {"min.ob",		"D,S,Q",	0x48000006, 0xfc20003f,	WR_1|RD_2|RD_3|FP_D,	0,		N54,		0,	0 },
+@@ -1455,10 +1461,13 @@ const struct mips_opcode mips_builtin_opcodes[] =
+ /* move is at the top of the table.  */
+ {"msgn.qh",		"X,Y,Q",	0x78200000, 0xfc20003f,	WR_1|RD_2|RD_3|FP_D,	0,		0,		MX,	0 },
+ {"msgsnd",		"t",		0,    (int) M_MSGSND,	INSN_MACRO,		0,		XLR,		0,	0 },
++{"msgsnds",		"d,t",		0x4a000001, 0xffe007ff,	WR_1|RD_2|RD_C0|WR_C0,	0,		XLP,		0,	0 },
+ {"msgld",		"", 		0,    (int) M_MSGLD,	INSN_MACRO,		0,		XLR,		0,	0 },
+ {"msgld",		"t",		0,    (int) M_MSGLD_T,	INSN_MACRO,		0,		XLR,		0,	0 },
+-{"msgwait",		"", 		0,    (int) M_MSGWAIT,	INSN_MACRO,		0,		XLR,		0,	0 },
+-{"msgwait",		"t",		0,    (int) M_MSGWAIT_T,INSN_MACRO,		0,		XLR,		0,	0 },
++{"msglds",		"d,t",		0x4a000002, 0xffe007ff,	WR_1|RD_2|RD_C0|WR_C0,	0,		XLP,		0,	0 },
++{"msgwait",		"",		0,    (int) M_MSGWAIT,  INSN_MACRO,		0,		XLR|XLP,	0,	0 },
++{"msgwait",		"t",		0,    (int) M_MSGWAIT_T,INSN_MACRO,		0,		XLR|XLP,	0,	0 },
++{"msgsync",		"",		0x4a000004, 0xffffffff,0,			0,		XLP,		0,	0 },
+ {"msub.d",		"D,R,S,T",	0x4c000029, 0xfc00003f, WR_1|RD_2|RD_3|RD_4|FP_D, 0,		I4_33,		0,	I37 },
+ {"msub.d",		"D,S,T",	0x46200019, 0xffe0003f,	WR_1|RD_2|RD_3|FP_D,	0,		IL2E,		0,	0 },
+ {"msub.d",		"D,S,T",	0x72200019, 0xffe0003f,	WR_1|RD_2|RD_3|FP_D,	0,		IL2F,		0,	0 },
+@@ -1508,7 +1517,7 @@ const struct mips_opcode mips_builtin_opcodes[] =
+ {"mtlo",		"s,7",		0x00000013, 0xfc1fe7ff, RD_1|WR_LO,		0,		0,		D32,	0 },
+ {"mtlo1",		"s",		0x70000013, 0xfc1fffff,	RD_1|WR_LO,		0,		EE,		0,	0 },
+ {"mtlhx",		"s",		0x00000053, 0xfc1fffff,	RD_1|MOD_HILO,		0,		0,		SMT,	0 },
+-{"mtcr",		"t,s",		0x70000019, 0xfc00ffff, RD_1|RD_2,		0,		XLR,		0,	0 },
++{"mtcr",		"t,s",		0x70000019, 0xfc00ffff, RD_1,			0,		XLR|XLP,	0,	0 },
+ {"mtm0",		"s",		0x70000008, 0xfc1fffff, RD_1,			0,		IOCT,		0,	0 },
+ {"mtm0",    		"s,t",		0x70000008, 0xfc00ffff, RD_1|RD_2,		0,		IOCT3,		0,	0 },
+ {"mtm1",		"s",		0x7000000c, 0xfc1fffff, RD_1,			0,		IOCT,		0,	0 },
+@@ -1945,9 +1954,9 @@ const struct mips_opcode mips_builtin_opcodes[] =
+ {"suxc1",		"S,t(b)",	0x4c00000d, 0xfc0007ff, RD_1|RD_2|RD_3|SM|FP_D,	0,		I5_33|N55,	0,	I37},
+ {"sw",			"t,o(b)",	0xac000000, 0xfc000000,	RD_1|RD_3|SM,		0,		I1,		0,	0 },
+ {"sw",			"t,A(b)",	0,    (int) M_SW_AB,	INSN_MACRO,		0,		I1,		0,	0 },
+-{"swapw",		"t,b",		0x70000014, 0xfc00ffff, MOD_1|RD_2|LM|SM,	0,		XLR,		0,	0 },
+-{"swapwu",		"t,b",		0x70000015, 0xfc00ffff, MOD_1|RD_2|LM|SM,	0,		XLR,		0,	0 },
+-{"swapd",		"t,b",		0x70000016, 0xfc00ffff, MOD_1|RD_2|LM|SM,	0,		XLR,		0,	0 },
++{"swapw",		"t,b",		0x70000014, 0xfc00ffff, MOD_1|RD_2|SM,		0,		XLR|XLP,	0,	0 },
++{"swapwu",		"t,b",		0x70000015, 0xfc00ffff, MOD_1|RD_2|SM,		0,		XLR|XLP,	0,	0 },
++{"swapd",		"t,b",		0x70000016, 0xfc00ffff, MOD_1|RD_2|SM,		0,		XLR|XLP,	0,	0 },
+ {"swc0",		"E,o(b)",	0xe0000000, 0xfc000000,	RD_3|RD_C0|SM,		0,		I1,		0,	IOCT|IOCTP|IOCT2|I37 },
+ {"swc0",		"E,A(b)",	0,    (int) M_SWC0_AB,	INSN_MACRO,		0,		I1,		0,	IOCT|IOCTP|IOCT2|I37 },
+ {"swc1",		"T,o(b)",	0xe4000000, 0xfc000000,	RD_1|RD_3|SM|FP_S,	0,		I1,		0,	0 },
+-- 
+2.14.0
+
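Not part of the imported patch: a hedged sketch of exercising the new CPU entry with an assembler and disassembler built from these sources (the mips64-poky-linux- prefix and test.s are hypothetical); the xlp CPU name and the mips:xlp BFD architecture string come from the tc-mips.c and cpu-mips.c hunks above.

    # Assemble for XLP, i.e. MIPS64R2 plus the XLP-only opcodes such as crc and dmtur:
    $ mips64-poky-linux-as -march=xlp -o test.o test.s
    # Disassemble, forcing the XLP machine so the added mnemonics decode:
    $ mips64-poky-linux-objdump -d -m mips:xlp test.o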
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0012-Change-default-emulation-for-mips64-linux.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0012-Change-default-emulation-for-mips64-linux.patch
deleted file mode 100644
index 2ac101c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0012-Change-default-emulation-for-mips64-linux.patch
+++ /dev/null
@@ -1,59 +0,0 @@
-From f43f832e0009caea6a3d5bcaa8f0a64d943072ea Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 2 Mar 2015 01:44:14 +0000
-Subject: [PATCH 12/15] Change default emulation for mips64*-*-linux
-
-we change the default emulations to be N64 instead of N32
-
-Upstream-Status: Inappropriate [ OE configuration Specific]
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- bfd/config.bfd   | 8 ++++----
- ld/configure.tgt | 8 ++++----
- 2 files changed, 8 insertions(+), 8 deletions(-)
-
-diff --git a/bfd/config.bfd b/bfd/config.bfd
-index 1b28016b91..63596c2ebc 100644
---- a/bfd/config.bfd
-+++ b/bfd/config.bfd
-@@ -1183,12 +1183,12 @@ case "${targ}" in
-     targ_selvecs="mips_elf32_le_vec mips_elf64_be_vec mips_elf64_le_vec mips_ecoff_be_vec mips_ecoff_le_vec"
-     ;;
-   mips64*el-*-linux*)
--    targ_defvec=mips_elf32_ntrad_le_vec
--    targ_selvecs="mips_elf32_ntrad_be_vec mips_elf32_trad_le_vec mips_elf32_trad_be_vec mips_elf64_trad_le_vec mips_elf64_trad_be_vec"
-+    targ_defvec=mips_elf64_trad_le_vec
-+    targ_selvecs="mips_elf32_ntrad_be_vec mips_elf32_ntrad_le_vec mips_elf32_trad_le_vec mips_elf32_trad_be_vec mips_elf64_trad_be_vec"
-     ;;
-   mips64*-*-linux*)
--    targ_defvec=mips_elf32_ntrad_be_vec
--    targ_selvecs="mips_elf32_ntrad_le_vec mips_elf32_trad_be_vec mips_elf32_trad_le_vec mips_elf64_trad_be_vec mips_elf64_trad_le_vec"
-+    targ_defvec=mips_elf64_trad_be_vec
-+    targ_selvecs="mips_elf32_ntrad_be_vec mips_elf32_ntrad_be_vec mips_elf32_trad_be_vec mips_elf32_trad_le_vec mips_elf64_trad_le_vec"
-     ;;
-   mips*el-*-linux*)
-     targ_defvec=mips_elf32_trad_le_vec
-diff --git a/ld/configure.tgt b/ld/configure.tgt
-index b85c6bb35a..4e77383a19 100644
---- a/ld/configure.tgt
-+++ b/ld/configure.tgt
-@@ -518,11 +518,11 @@ mips*el-*-vxworks*)	targ_emul=elf32elmipvxworks
- mips*-*-vxworks*)	targ_emul=elf32ebmipvxworks
- 		        targ_extra_emuls="elf32elmipvxworks" ;;
- mips*-*-windiss)	targ_emul=elf32mipswindiss ;;
--mips64*el-*-linux-*)	targ_emul=elf32ltsmipn32
--			targ_extra_emuls="elf32btsmipn32 elf32ltsmip elf32btsmip elf64ltsmip elf64btsmip"
-+mips64*el-*-linux-*)	targ_emul=elf64ltsmip
-+			targ_extra_emuls="elf32btsmipn32 elf32ltsmipn32 elf32ltsmip elf32btsmip elf64btsmip"
- 			targ_extra_libpath=$targ_extra_emuls ;;
--mips64*-*-linux-*)	targ_emul=elf32btsmipn32
--			targ_extra_emuls="elf32ltsmipn32 elf32btsmip elf32ltsmip elf64btsmip elf64ltsmip"
-+mips64*-*-linux-*)	targ_emul=elf64btsmip
-+			targ_extra_emuls="elf32btsmipn32 elf32ltsmipn32 elf32btsmip elf32ltsmip elf64ltsmip"
- 			targ_extra_libpath=$targ_extra_emuls ;;
- mips*el-*-linux-*)	targ_emul=elf32ltsmip
- 			targ_extra_emuls="elf32btsmip elf32ltsmipn32 elf64ltsmip elf32btsmipn32 elf64btsmip"
--- 
-2.12.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0013-Add-support-for-Netlogic-XLP.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0013-Add-support-for-Netlogic-XLP.patch
deleted file mode 100644
index b03e046..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0013-Add-support-for-Netlogic-XLP.patch
+++ /dev/null
@@ -1,399 +0,0 @@
-From fc6fa6a6e6e9e6e5ad7080785af31b4ea68f60c4 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sun, 14 Feb 2016 17:06:19 +0000
-Subject: [PATCH 13/15] Add support for Netlogic XLP
-
-Patch From: Nebu Philips <nphilips@netlogicmicro.com>
-
-Using the mipsisa64r2nlm target, add support for XLP from
-Netlogic. Also, update vendor name to NLM wherever applicable.
-
-Use 0x00000080 for INSN_XLP, the value 0x00000040 has already been
-assigned to INSN_OCTEON3
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Signed-off-by: Baoshan Pang <baoshan.pang@windriver.com>
-Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
----
-Upstream-Status: Pending
-
- bfd/aoutx.h           |  1 +
- bfd/archures.c        |  1 +
- bfd/bfd-in2.h         |  1 +
- bfd/config.bfd        |  5 +++++
- bfd/cpu-mips.c        |  6 ++++--
- bfd/elfxx-mips.c      |  8 ++++++++
- binutils/readelf.c    |  1 +
- gas/config/tc-mips.c  |  4 +++-
- gas/configure         |  3 +++
- include/elf/mips.h    |  1 +
- include/opcode/mips.h | 10 ++++++++--
- ld/configure.tgt      |  2 ++
- opcodes/mips-dis.c    | 12 +++++-------
- opcodes/mips-opc.c    | 33 +++++++++++++++++++++------------
- 14 files changed, 64 insertions(+), 24 deletions(-)
-
-diff --git a/bfd/aoutx.h b/bfd/aoutx.h
-index d30e8b8fbc..913b499744 100644
---- a/bfd/aoutx.h
-+++ b/bfd/aoutx.h
-@@ -812,6 +812,7 @@ NAME (aout, machine_type) (enum bfd_architecture arch,
- 	case bfd_mach_mipsisa64r6:
- 	case bfd_mach_mips_sb1:
- 	case bfd_mach_mips_xlr:
-+	case bfd_mach_mips_xlp:
- 	  /* FIXME: These should be MIPS3, MIPS4, MIPS16, MIPS32, etc.  */
- 	  arch_flags = M_MIPS2;
- 	  break;
-diff --git a/bfd/archures.c b/bfd/archures.c
-index 6f35a5b2a7..d12cdf609a 100644
---- a/bfd/archures.c
-+++ b/bfd/archures.c
-@@ -197,6 +197,7 @@ DESCRIPTION
- .#define bfd_mach_mips_octeon2		6502
- .#define bfd_mach_mips_octeon3          6503
- .#define bfd_mach_mips_xlr              887682   {* decimal 'XLR'  *}
-+.#define bfd_mach_mips_xlp              887680   {* decimal 'XLP'  *}
- .#define bfd_mach_mipsisa32             32
- .#define bfd_mach_mipsisa32r2           33
- .#define bfd_mach_mipsisa32r3           34
-diff --git a/bfd/bfd-in2.h b/bfd/bfd-in2.h
-index 6288c3bb4a..e9f9859a7b 100644
---- a/bfd/bfd-in2.h
-+++ b/bfd/bfd-in2.h
-@@ -2041,6 +2041,7 @@ enum bfd_architecture
- #define bfd_mach_mips_octeon2          6502
- #define bfd_mach_mips_octeon3          6503
- #define bfd_mach_mips_xlr              887682   /* decimal 'XLR'  */
-+#define bfd_mach_mips_xlp              887680   /* decimal 'XLP'  */
- #define bfd_mach_mipsisa32             32
- #define bfd_mach_mipsisa32r2           33
- #define bfd_mach_mipsisa32r3           34
-diff --git a/bfd/config.bfd b/bfd/config.bfd
-index 63596c2ebc..6e923fb0ed 100644
---- a/bfd/config.bfd
-+++ b/bfd/config.bfd
-@@ -1166,6 +1166,11 @@ case "${targ}" in
-     targ_defvec=mips_elf32_le_vec
-     targ_selvecs="mips_elf32_be_vec mips_elf64_be_vec mips_elf64_le_vec"
-     ;;
-+  mipsisa64*-*-elf*)
-+	targ_defvec=mips_elf32_trad_be_vec
-+	targ_selvecs="mips_elf32_trad_le_vec mips_elf64_trad_be_vec mips_elf64_trad_le_vec"
-+	want64=true
-+	;;
-   mips*-*-elf* | mips*-*-rtems* | mips*-*-windiss | mips*-*-none)
-     targ_defvec=mips_elf32_be_vec
-     targ_selvecs="mips_elf32_le_vec mips_elf64_be_vec mips_elf64_le_vec"
-diff --git a/bfd/cpu-mips.c b/bfd/cpu-mips.c
-index b9ecdd6e55..df1bffc25b 100644
---- a/bfd/cpu-mips.c
-+++ b/bfd/cpu-mips.c
-@@ -104,7 +104,8 @@ enum
-   I_mipsocteon2,
-   I_mipsocteon3,
-   I_xlr,
--  I_micromips
-+  I_micromips,
-+  I_xlp
- };
- 
- #define NN(index) (&arch_info_struct[(index) + 1])
-@@ -155,7 +156,8 @@ static const bfd_arch_info_type arch_info_struct[] =
-   N (64, 64, bfd_mach_mips_octeon2,"mips:octeon2",  FALSE, NN(I_mipsocteon2)),
-   N (64, 64, bfd_mach_mips_octeon3, "mips:octeon3",  FALSE, NN(I_mipsocteon3)),
-   N (64, 64, bfd_mach_mips_xlr, "mips:xlr",       FALSE, NN(I_xlr)),
--  N (64, 64, bfd_mach_mips_micromips,"mips:micromips",FALSE,0)
-+  N (64, 64, bfd_mach_mips_micromips,"mips:micromips",FALSE,NN(I_micromips)),
-+  N (64, 64, bfd_mach_mips_xlp, "mips:xlp",      FALSE, 0)
- };
- 
- /* The default architecture is mips:3000, but with a machine number of
-diff --git a/bfd/elfxx-mips.c b/bfd/elfxx-mips.c
-index 723853f821..7b464211c3 100644
---- a/bfd/elfxx-mips.c
-+++ b/bfd/elfxx-mips.c
-@@ -6787,6 +6787,9 @@ _bfd_elf_mips_mach (flagword flags)
-     case E_MIPS_MACH_XLR:
-       return bfd_mach_mips_xlr;
- 
-+	case E_MIPS_MACH_XLP:
-+      return bfd_mach_mips_xlp;
-+
-     default:
-       switch (flags & EF_MIPS_ARCH)
- 	{
-@@ -12106,6 +12109,10 @@ mips_set_isa_flags (bfd *abfd)
-       val = E_MIPS_ARCH_64R2 | E_MIPS_MACH_OCTEON2;
-       break;
- 
-+	case bfd_mach_mips_xlp:
-+	  val = E_MIPS_ARCH_64R2 | E_MIPS_MACH_XLP;
-+	  break;
-+
-     case bfd_mach_mipsisa32:
-       val = E_MIPS_ARCH_32;
-       break;
-@@ -14135,6 +14142,7 @@ static const struct mips_mach_extension mips_mach_extensions[] =
-   { bfd_mach_mips_octeonp, bfd_mach_mips_octeon },
-   { bfd_mach_mips_octeon, bfd_mach_mipsisa64r2 },
-   { bfd_mach_mips_loongson_3a, bfd_mach_mipsisa64r2 },
-+  { bfd_mach_mips_xlp, bfd_mach_mipsisa64r2 },
- 
-   /* MIPS64 extensions.  */
-   { bfd_mach_mipsisa64r2, bfd_mach_mipsisa64 },
-diff --git a/binutils/readelf.c b/binutils/readelf.c
-index 8dca490226..b5f577f5a1 100644
---- a/binutils/readelf.c
-+++ b/binutils/readelf.c
-@@ -3261,6 +3261,7 @@ get_machine_flags (unsigned e_flags, unsigned e_machine)
- 	    case E_MIPS_MACH_OCTEON2: strcat (buf, ", octeon2"); break;
- 	    case E_MIPS_MACH_OCTEON3: strcat (buf, ", octeon3"); break;
- 	    case E_MIPS_MACH_XLR:  strcat (buf, ", xlr"); break;
-+		case E_MIPS_MACH_XLP:  strcat (buf, ", xlp"); break;
- 	    case 0:
- 	    /* We simply ignore the field in this case to avoid confusion:
- 	       MIPS ELF does not specify EF_MIPS_MACH, it is a GNU
-diff --git a/gas/config/tc-mips.c b/gas/config/tc-mips.c
-index e24e84df54..baf84e419d 100644
---- a/gas/config/tc-mips.c
-+++ b/gas/config/tc-mips.c
-@@ -552,6 +552,7 @@ static int mips_32bitmode = 0;
-    || mips_opts.arch == CPU_RM7000                    \
-    || mips_opts.arch == CPU_VR5500                    \
-    || mips_opts.micromips                             \
-+   || mips_opts.arch == CPU_XLP                       \
-    )
- 
- /* Whether the processor uses hardware interlocks to protect reads
-@@ -581,6 +582,7 @@ static int mips_32bitmode = 0;
-     && mips_opts.isa != ISA_MIPS3)                    \
-    || mips_opts.arch == CPU_R4300                     \
-    || mips_opts.micromips                             \
-+   || mips_opts.arch == CPU_XLP                       \
-    )
- 
- /* Whether the processor uses hardware interlocks to protect reads
-@@ -19409,7 +19411,7 @@ static const struct mips_cpu_info mips_cpu_info_table[] =
-   /* Broadcom XLP.
-      XLP is mostly like XLR, with the prominent exception that it is
-      MIPS64R2 rather than MIPS64.  */
--  { "xlp",	      0, 0,			ISA_MIPS64R2, CPU_XLR },
-+  { "xlp",	      0, 0,			ISA_MIPS64R2, CPU_XLP },
- 
-   /* MIPS 64 Release 6 */
-   { "i6400",	      0, ASE_MSA,		ISA_MIPS64R6, CPU_MIPS64R6},
-diff --git a/gas/configure b/gas/configure
-index a36f1ae161..99f0a94e20 100755
---- a/gas/configure
-+++ b/gas/configure
-@@ -12989,6 +12989,9 @@ _ACEOF
- 	  mipsisa64r6 | mipsisa64r6el)
- 	    mips_cpu=mips64r6
- 	    ;;
-+	  mipsisa64r2nlm | mipsisa64r2nlmel)
-+		mips_cpu=xlp
-+		;;
- 	  mipstx39 | mipstx39el)
- 	    mips_cpu=r3900
- 	    ;;
-diff --git a/include/elf/mips.h b/include/elf/mips.h
-index 3e27b05122..81ea78a817 100644
---- a/include/elf/mips.h
-+++ b/include/elf/mips.h
-@@ -290,6 +290,7 @@ END_RELOC_NUMBERS (R_MIPS_maxext)
- #define E_MIPS_MACH_SB1         0x008a0000
- #define E_MIPS_MACH_OCTEON	0x008b0000
- #define E_MIPS_MACH_XLR     	0x008c0000
-+#define E_MIPS_MACH_XLP         0x008f0000
- #define E_MIPS_MACH_OCTEON2	0x008d0000
- #define E_MIPS_MACH_OCTEON3	0x008e0000
- #define E_MIPS_MACH_5400	0x00910000
-diff --git a/include/opcode/mips.h b/include/opcode/mips.h
-index 0d043d9520..450e9c2d67 100644
---- a/include/opcode/mips.h
-+++ b/include/opcode/mips.h
-@@ -1244,8 +1244,10 @@ static const unsigned int mips_isa_table[] = {
- #define INSN_LOONGSON_2F          0x80000000
- /* Loongson 3A.  */
- #define INSN_LOONGSON_3A          0x00000400
--/* RMI Xlr instruction */
--#define INSN_XLR                 0x00000020
-+/* Netlogic Xlr instruction */
-+#define INSN_XLR		0x00000020
-+/* Netlogic XlP instruction */
-+#define INSN_XLP		0x00000080
- 
- /* DSP ASE */
- #define ASE_DSP			0x00000001
-@@ -1344,6 +1346,7 @@ static const unsigned int mips_isa_table[] = {
- #define CPU_OCTEON2	6502
- #define CPU_OCTEON3	6503
- #define CPU_XLR     	887682   	/* decimal 'XLR'   */
-+#define CPU_XLP         887680      /* decimal 'XLP'   */
- 
- /* Return true if the given CPU is included in INSN_* mask MASK.  */
- 
-@@ -1421,6 +1424,9 @@ cpu_is_member (int cpu, unsigned int mask)
-       return ((mask & INSN_ISA_MASK) == INSN_ISA32R6)
- 	     || ((mask & INSN_ISA_MASK) == INSN_ISA64R6);
- 
-+    case CPU_XLP:
-+      return (mask & INSN_XLP) != 0;
-+
-     default:
-       return FALSE;
-     }
-diff --git a/ld/configure.tgt b/ld/configure.tgt
-index 4e77383a19..8a81f7ac39 100644
---- a/ld/configure.tgt
-+++ b/ld/configure.tgt
-@@ -504,6 +504,8 @@ mips*el-sde-elf* | mips*el-mti-elf* | mips*el-img-elf*)
- mips*-sde-elf* | mips*-mti-elf* | mips*-img-elf*)
- 			targ_emul=elf32btsmip
- 			targ_extra_emuls="elf32ltsmip elf32btsmipn32 elf64btsmip elf32ltsmipn32 elf64ltsmip" ;;
-+mipsisa64*-*-elf*)	targ_emul=elf32btsmip
-+			targ_extra_emuls="elf32ltsmip elf64btsmip elf64ltsmip" ;;
- mips64*el-ps2-elf*)	targ_emul=elf32lr5900n32
- 			targ_extra_emuls="elf32lr5900"
- 			targ_extra_libpath=$targ_extra_emuls ;;
-diff --git a/opcodes/mips-dis.c b/opcodes/mips-dis.c
-index bb9912e462..70ecc51717 100644
---- a/opcodes/mips-dis.c
-+++ b/opcodes/mips-dis.c
-@@ -648,13 +648,11 @@ const struct mips_arch_choice mips_arch_choices[] =
-     mips_cp0sel_names_xlr, ARRAY_SIZE (mips_cp0sel_names_xlr),
-     mips_cp1_names_mips3264, mips_hwr_names_numeric },
- 
--  /* XLP is mostly like XLR, with the prominent exception it is being
--     MIPS64R2.  */
--  { "xlp", 1, bfd_mach_mips_xlr, CPU_XLR,
--    ISA_MIPS64R2 | INSN_XLR, 0,
--    mips_cp0_names_xlr,
--    mips_cp0sel_names_xlr, ARRAY_SIZE (mips_cp0sel_names_xlr),
--    mips_cp1_names_mips3264, mips_hwr_names_numeric },
-+  { "xlp", 1, bfd_mach_mips_xlp, CPU_XLP,
-+    ISA_MIPS64R2 | INSN_XLP, 0,
-+    mips_cp0_names_mips3264r2,
-+    mips_cp0sel_names_mips3264r2, ARRAY_SIZE (mips_cp0sel_names_mips3264r2),
-+    mips_cp1_names_mips3264, mips_hwr_names_mips3264r2 },
- 
-   /* This entry, mips16, is here only for ISA/processor selection; do
-      not print its name.  */
-diff --git a/opcodes/mips-opc.c b/opcodes/mips-opc.c
-index 5cb8e7365f..f2074856a2 100644
---- a/opcodes/mips-opc.c
-+++ b/opcodes/mips-opc.c
-@@ -320,7 +320,8 @@ decode_mips_operand (const char *p)
- #define IOCTP	(INSN_OCTEONP | INSN_OCTEON2 | INSN_OCTEON3)
- #define IOCT2	(INSN_OCTEON2 | INSN_OCTEON3)
- #define IOCT3	INSN_OCTEON3
--#define XLR     INSN_XLR
-+#define XLR	INSN_XLR
-+#define XLP	INSN_XLP
- #define IVIRT	ASE_VIRT
- #define IVIRT64	ASE_VIRT64
- 
-@@ -958,6 +959,7 @@ const struct mips_opcode mips_builtin_opcodes[] =
- {"clo",			"U,s",		0x70000021, 0xfc0007ff, WR_1|RD_2,		0,		I32|N55,	0,	I37 },
- {"clz",			"d,s",		0x00000050, 0xfc1f07ff, WR_1|RD_2,		0,		I37,		0,	0 },
- {"clz",			"U,s",		0x70000020, 0xfc0007ff, WR_1|RD_2,		0,		I32|N55,	0,	I37 },
-+{"crc",			"d,s,t",	0x7000001c, 0xfc0007ff,	WR_1|RD_2|RD_3,	0,		XLP, 		0,	0 },
- /* ctc0 is at the bottom of the table.  */
- {"ctc1",		"t,G",		0x44c00000, 0xffe007ff,	RD_1|WR_CC|CM,		0,		I1,		0,	0 },
- {"ctc1",		"t,S",		0x44c00000, 0xffe007ff,	RD_1|WR_CC|CM,		0,		I1,		0,	0 },
-@@ -990,12 +992,13 @@ const struct mips_opcode mips_builtin_opcodes[] =
- {"daddiu",		"t,r,j",	0x64000000, 0xfc000000, WR_1|RD_2,		0,		I3,		0,	0 },
- {"daddu",		"d,v,t",	0x0000002d, 0xfc0007ff, WR_1|RD_2|RD_3,		0,		I3,		0,	0 },
- {"daddu",		"t,r,I",	0,    (int) M_DADDU_I,	INSN_MACRO,		0,		I3,		0,	0 },
--{"daddwc",		"d,s,t", 	0x70000038, 0xfc0007ff, WR_1|RD_2|RD_3|WR_C0|RD_C0, 0,		XLR,		0,	0 },
-+{"daddwc",		"d,s,t", 	0x70000038, 0xfc0007ff, WR_1|RD_2|RD_3|WR_C0|RD_C0, 0,		XLR|XLP,	0,	0 },
- {"dbreak",		"",		0x7000003f, 0xffffffff,	0,			0,		N5,		0,	0 },
- {"dclo",		"d,s",		0x00000053, 0xfc1f07ff, WR_1|RD_2,		0,		I69,		0,	0 },
- {"dclo",		"U,s",	 	0x70000025, 0xfc0007ff, WR_1|RD_2, 	0,		I64|N55,	0,	I69 },
- {"dclz",		"d,s",		0x00000052, 0xfc1f07ff, WR_1|RD_2,		0,		I69,		0,	0 },
- {"dclz",		"U,s",	 	0x70000024, 0xfc0007ff, WR_1|RD_2, 	0,		I64|N55,	0,	I69 },
-+{"dcrc",		"d,s,t",	0x7000001d, 0xfc0007ff, WR_1|RD_2|RD_3,	0,		XLP, 		0,	0 },
- /* dctr and dctw are used on the r5000.  */
- {"dctr",		"o(b)",	 	0xbc050000, 0xfc1f0000, RD_2,			0,		I3,		0,	0 },
- {"dctw",		"o(b)",		0xbc090000, 0xfc1f0000, RD_2,			0,		I3,		0,	0 },
-@@ -1067,6 +1070,7 @@ const struct mips_opcode mips_builtin_opcodes[] =
- {"dmfc0",		"t,G,H",	0x40200000, 0xffe007f8,	WR_1|RD_C0|LC,		0,		I64,		0,	0 },
- {"dmfgc0",		"t,G",		0x40600100, 0xffe007ff, WR_1|RD_C0|LC,		0,		0,		IVIRT64, 0 },
- {"dmfgc0",		"t,G,H",	0x40600100, 0xffe007f8, WR_1|RD_C0|LC,		0,		0,		IVIRT64, 0 },
-+{"dmfur",		"t,d",		0x7000001e, 0xffe007ff, WR_1,			0,		XLP,		0,	0 },
- {"dmt",			"",		0x41600bc1, 0xffffffff, TRAP,			0,		0,		MT32,	0 },
- {"dmt",			"t",		0x41600bc1, 0xffe0ffff, WR_1|TRAP,		0,		0,		MT32,	0 },
- {"dmtc0",		"t,G",		0x40a00000, 0xffe007ff,	RD_1|WR_C0|WR_CC|CM,	0,		I3,		0,	EE },
-@@ -1082,6 +1086,8 @@ const struct mips_opcode mips_builtin_opcodes[] =
- /* dmfc3 is at the bottom of the table.  */
- /* dmtc3 is at the bottom of the table.  */
- {"dmuh",		"d,s,t",	0x000000dc, 0xfc0007ff, WR_1|RD_2|RD_3,		0,		I69,		0,	0 },
-+{"dmtur",		"t,d",		0x7000001f, 0xffe007ff,	RD_1,			0,		XLP,		0,	0 },
-+{"dmul",		"d,s,t",	0x70000006, 0xfc0007ff,	WR_1|RD_2|RD_3,		0,		XLP,		0,	0 },
- {"dmul",		"d,s,t",	0x0000009c, 0xfc0007ff, WR_1|RD_2|RD_3,		0,		I69,		0,	0 },
- {"dmul",		"d,v,t",	0x70000003, 0xfc0007ff, WR_1|RD_2|RD_3|WR_HILO,	0,		IOCT,		0,	0 },
- {"dmul",		"d,v,t",	0,    (int) M_DMUL,	INSN_MACRO,		0,		I3,		0,	M32|I69 },
-@@ -1235,9 +1241,9 @@ const struct mips_opcode mips_builtin_opcodes[] =
- {"ld",			"s,-b(+R)",	0xec180000, 0xfc1c0000, WR_1,			RD_pc,		I69,		0,	0 },
- {"ld",			"t,A(b)",	0,    (int) M_LD_AB,	INSN_MACRO,		0,		I1,		0,	0 },
- {"ld",			"t,o(b)",	0xdc000000, 0xfc000000, WR_1|RD_3|LM,		0,		I3,		0,	0 },
--{"ldaddw",		"t,b",		0x70000010, 0xfc00ffff,	MOD_1|RD_2|LM|SM,	0,		XLR,		0,	0 },
--{"ldaddwu",		"t,b",		0x70000011, 0xfc00ffff,	MOD_1|RD_2|LM|SM,	0,		XLR,		0,	0 },
--{"ldaddd",		"t,b",		0x70000012, 0xfc00ffff,	MOD_1|RD_2|LM|SM,	0,		XLR,		0,	0 },
-+{"ldaddw",		"t,b",		0x70000010, 0xfc00ffff,	MOD_1|RD_2|SM,		0,		XLR|XLP,	0,	0 },
-+{"ldaddwu",		"t,b",		0x70000011, 0xfc00ffff,	MOD_1|RD_2|SM,		0,		XLR|XLP,	0,	0 },
-+{"ldaddd",		"t,b",		0x70000012, 0xfc00ffff,	MOD_1|RD_2|SM,		0,		XLR|XLP,	0,	0 },
- {"ldc1",		"T,o(b)",	0xd4000000, 0xfc000000, WR_1|RD_3|CLD|FP_D,	0,		I2,		0,	SF },
- {"ldc1",		"E,o(b)",	0xd4000000, 0xfc000000, WR_1|RD_3|CLD|FP_D,	0,		I2,		0,	SF },
- {"ldc1",		"T,A(b)",	0,    (int) M_LDC1_AB,	INSN_MACRO,		INSN2_M_FP_D,	I2,		0,	SF },
-@@ -1402,7 +1408,7 @@ const struct mips_opcode mips_builtin_opcodes[] =
- {"mflo",		"d,9",		0x00000012, 0xff9f07ff, WR_1|RD_LO,		0,		0,		D32,	0 },
- {"mflo1",		"d",		0x70000012, 0xffff07ff,	WR_1|RD_LO,		0,		EE,		0,	0 },
- {"mflhxu",		"d",		0x00000052, 0xffff07ff,	WR_1|MOD_HILO,		0,		0,		SMT,	0 },
--{"mfcr",		"t,s",		0x70000018, 0xfc00ffff, WR_1|RD_2,		0,		XLR,		0,	0 },
-+{"mfcr",		"t,s",		0x70000018, 0xfc00ffff, WR_1,			0,		XLR|XLP,	0,	0 },
- {"mfsa",		"d",		0x00000028, 0xffff07ff,	WR_1,			0,		EE,		0,	0 },
- {"min.ob",		"X,Y,Q",	0x78000006, 0xfc20003f,	WR_1|RD_2|RD_3|FP_D,	0,		SB1,		MX,	0 },
- {"min.ob",		"D,S,Q",	0x48000006, 0xfc20003f,	WR_1|RD_2|RD_3|FP_D,	0,		N54,		0,	0 },
-@@ -1447,10 +1453,13 @@ const struct mips_opcode mips_builtin_opcodes[] =
- /* move is at the top of the table.  */
- {"msgn.qh",		"X,Y,Q",	0x78200000, 0xfc20003f,	WR_1|RD_2|RD_3|FP_D,	0,		0,		MX,	0 },
- {"msgsnd",		"t",		0,    (int) M_MSGSND,	INSN_MACRO,		0,		XLR,		0,	0 },
-+{"msgsnds",		"d,t",		0x4a000001, 0xffe007ff,	WR_1|RD_2|RD_C0|WR_C0,	0,		XLP,		0,	0 },
- {"msgld",		"", 		0,    (int) M_MSGLD,	INSN_MACRO,		0,		XLR,		0,	0 },
- {"msgld",		"t",		0,    (int) M_MSGLD_T,	INSN_MACRO,		0,		XLR,		0,	0 },
--{"msgwait",		"", 		0,    (int) M_MSGWAIT,	INSN_MACRO,		0,		XLR,		0,	0 },
--{"msgwait",		"t",		0,    (int) M_MSGWAIT_T,INSN_MACRO,		0,		XLR,		0,	0 },
-+{"msglds",		"d,t",		0x4a000002, 0xffe007ff,	WR_1|RD_2|RD_C0|WR_C0,	0,		XLP,		0,	0 },
-+{"msgwait",		"",		0,    (int) M_MSGWAIT,  INSN_MACRO,		0,		XLR|XLP,	0,	0 },
-+{"msgwait",		"t",		0,    (int) M_MSGWAIT_T,INSN_MACRO,		0,		XLR|XLP,	0,	0 },
-+{"msgsync",		"",		0x4a000004, 0xffffffff,0,			0,		XLP,		0,	0 },
- {"msub.d",		"D,R,S,T",	0x4c000029, 0xfc00003f, WR_1|RD_2|RD_3|RD_4|FP_D, 0,		I4_33,		0,	I37 },
- {"msub.d",		"D,S,T",	0x46200019, 0xffe0003f,	WR_1|RD_2|RD_3|FP_D,	0,		IL2E,		0,	0 },
- {"msub.d",		"D,S,T",	0x72200019, 0xffe0003f,	WR_1|RD_2|RD_3|FP_D,	0,		IL2F,		0,	0 },
-@@ -1500,7 +1509,7 @@ const struct mips_opcode mips_builtin_opcodes[] =
- {"mtlo",		"s,7",		0x00000013, 0xfc1fe7ff, RD_1|WR_LO,		0,		0,		D32,	0 },
- {"mtlo1",		"s",		0x70000013, 0xfc1fffff,	RD_1|WR_LO,		0,		EE,		0,	0 },
- {"mtlhx",		"s",		0x00000053, 0xfc1fffff,	RD_1|MOD_HILO,		0,		0,		SMT,	0 },
--{"mtcr",		"t,s",		0x70000019, 0xfc00ffff, RD_1|RD_2,		0,		XLR,		0,	0 },
-+{"mtcr",		"t,s",		0x70000019, 0xfc00ffff, RD_1,			0,		XLR|XLP,	0,	0 },
- {"mtm0",		"s",		0x70000008, 0xfc1fffff, RD_1,			0,		IOCT,		0,	0 },
- {"mtm0",    		"s,t",		0x70000008, 0xfc00ffff, RD_1|RD_2,		0,		IOCT3,		0,	0 },
- {"mtm1",		"s",		0x7000000c, 0xfc1fffff, RD_1,			0,		IOCT,		0,	0 },
-@@ -1937,9 +1946,9 @@ const struct mips_opcode mips_builtin_opcodes[] =
- {"suxc1",		"S,t(b)",	0x4c00000d, 0xfc0007ff, RD_1|RD_2|RD_3|SM|FP_D,	0,		I5_33|N55,	0,	I37},
- {"sw",			"t,o(b)",	0xac000000, 0xfc000000,	RD_1|RD_3|SM,		0,		I1,		0,	0 },
- {"sw",			"t,A(b)",	0,    (int) M_SW_AB,	INSN_MACRO,		0,		I1,		0,	0 },
--{"swapw",		"t,b",		0x70000014, 0xfc00ffff, MOD_1|RD_2|LM|SM,	0,		XLR,		0,	0 },
--{"swapwu",		"t,b",		0x70000015, 0xfc00ffff, MOD_1|RD_2|LM|SM,	0,		XLR,		0,	0 },
--{"swapd",		"t,b",		0x70000016, 0xfc00ffff, MOD_1|RD_2|LM|SM,	0,		XLR,		0,	0 },
-+{"swapw",		"t,b",		0x70000014, 0xfc00ffff, MOD_1|RD_2|SM,		0,		XLR|XLP,	0,	0 },
-+{"swapwu",		"t,b",		0x70000015, 0xfc00ffff, MOD_1|RD_2|SM,		0,		XLR|XLP,	0,	0 },
-+{"swapd",		"t,b",		0x70000016, 0xfc00ffff, MOD_1|RD_2|SM,		0,		XLR|XLP,	0,	0 },
- {"swc0",		"E,o(b)",	0xe0000000, 0xfc000000,	RD_3|RD_C0|SM,		0,		I1,		0,	IOCT|IOCTP|IOCT2|I37 },
- {"swc0",		"E,A(b)",	0,    (int) M_SWC0_AB,	INSN_MACRO,		0,		I1,		0,	IOCT|IOCTP|IOCT2|I37 },
- {"swc1",		"T,o(b)",	0xe4000000, 0xfc000000,	RD_1|RD_3|SM|FP_S,	0,		I1,		0,	0 },
--- 
-2.12.0
-
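The MIPS patch removed above taught binutils about the Netlogic XLP core: it adds an INSN_XLP membership bit and a CPU_XLP id, wires them into cpu_is_member(), and retags the opcodes shared between XLR and XLP with XLR|XLP. The membership test it extends boils down to a mask check, roughly as in this sketch (the constants, enum, and main() below are simplified stand-ins, not the binutils sources):

#include <stdbool.h>
#include <stdio.h>

/* Simplified membership flags, mirroring the INSN_XLR/INSN_XLP idea. */
#define INSN_XLR 0x00000020u
#define INSN_XLP 0x00000080u

enum cpu { CPU_XLR, CPU_XLP, CPU_OTHER };

/* Return true if an opcode whose membership mask is MASK is available on CPU. */
static bool cpu_is_member(enum cpu cpu, unsigned int mask)
{
  switch (cpu)
    {
    case CPU_XLR: return (mask & INSN_XLR) != 0;
    case CPU_XLP: return (mask & INSN_XLP) != 0;
    default:      return false;
    }
}

int main(void)
{
  /* After the patch, "ldaddw" carries XLR|XLP, so both cores accept it,
     while the new "crc" entry carries only XLP.  */
  unsigned int ldaddw = INSN_XLR | INSN_XLP;
  unsigned int crc = INSN_XLP;

  printf("ldaddw on XLR: %d, on XLP: %d\n",
         cpu_is_member(CPU_XLR, ldaddw), cpu_is_member(CPU_XLP, ldaddw));
  printf("crc    on XLR: %d, on XLP: %d\n",
         cpu_is_member(CPU_XLR, crc), cpu_is_member(CPU_XLP, crc));
  return 0;
}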
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0013-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0013-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch
new file mode 100644
index 0000000..247376b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0013-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch
@@ -0,0 +1,36 @@
+From e46202becab625c6c08caf91e08ccbbc1bf799c4 Mon Sep 17 00:00:00 2001
+From: Zhenhua Luo <zhenhua.luo@nxp.com>
+Date: Sat, 11 Jun 2016 22:08:29 -0500
+Subject: [PATCH 13/15] fix the incorrect assembling for ppc wait mnemonic
+
+Signed-off-by: Zhenhua Luo <zhenhua.luo@nxp.com>
+
+Upstream-Status: Pending
+---
+ opcodes/ppc-opc.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/opcodes/ppc-opc.c b/opcodes/ppc-opc.c
+index 426261ab21..0d35916cdd 100644
+--- a/opcodes/ppc-opc.c
++++ b/opcodes/ppc-opc.c
+@@ -4881,7 +4881,6 @@ const struct powerpc_opcode powerpc_opcodes[] = {
+ {"ldepx",	X(31,29),	X_MASK,	  E500MC|PPCA2, 0,		{RT, RA0, RB}},
+ 
+ {"waitasec",	X(31,30),      XRTRARB_MASK, POWER8,	POWER9,		{0}},
+-{"wait",	X(31,30),	XWC_MASK,    POWER9,	0,		{WC}},
+ 
+ {"lwepx",	X(31,31),	X_MASK,	  E500MC|PPCA2, 0,		{RT, RA0, RB}},
+ 
+@@ -4935,7 +4934,7 @@ const struct powerpc_opcode powerpc_opcodes[] = {
+ 
+ {"waitrsv",	X(31,62)|(1<<21), 0xffffffff, E500MC|PPCA2, 0,		{0}},
+ {"waitimpl",	X(31,62)|(2<<21), 0xffffffff, E500MC|PPCA2, 0,		{0}},
+-{"wait",	X(31,62),	XWC_MASK,    E500MC|PPCA2, 0,		{WC}},
++{"wait",	X(31,62),	XWC_MASK,    E500MC|PPCA2|POWER9, 0,	{WC}},
+ 
+ {"dcbstep",	XRT(31,63,0),	XRT_MASK,    E500MC|PPCA2, 0,		{RA0, RB}},
+ 
+-- 
+2.14.0
+
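The new 0013 patch drops the POWER9-only "wait" row, X(31,30), and instead tags the existing E500MC|PPCA2 row, X(31,62), as also valid for POWER9, so on POWER9 "wait" now picks up the X(31,62) form. Flag-gated opcode rows of this kind can be modelled as a first-match table; the sketch below is only that model (the table layout, flag values, and lookup loop are illustrative, not the gas/opcodes internals):

#include <stdio.h>
#include <string.h>

/* Illustrative CPU-availability flags. */
#define E500MC  0x1u
#define PPCA2   0x2u
#define POWER9  0x4u

struct opcode { const char *name; unsigned int encoding; unsigned int flags; };

/* First matching row whose flags include the target CPU wins. */
static unsigned int assemble(const struct opcode *table, size_t n,
                             const char *mnemonic, unsigned int cpu)
{
  for (size_t i = 0; i < n; i++)
    if (strcmp(table[i].name, mnemonic) == 0 && (table[i].flags & cpu))
      return table[i].encoding;
  return 0;
}

int main(void)
{
  /* After the fix a single "wait" row remains, the X(31,62) one, now also
     tagged POWER9. */
  const struct opcode fixed[] = {
    { "wait", (31u << 26) | (62u << 1) /* X(31,62) */, E500MC | PPCA2 | POWER9 },
  };

  printf("wait on POWER9 -> 0x%08x\n", assemble(fixed, 1, "wait", POWER9));
  return 0;
}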
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0014-Detect-64-bit-MIPS-targets.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0014-Detect-64-bit-MIPS-targets.patch
new file mode 100644
index 0000000..42b1065
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0014-Detect-64-bit-MIPS-targets.patch
@@ -0,0 +1,50 @@
+From bf20d5823662d1f2eb47de2cdfd173627a205b17 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 31 Mar 2017 11:42:03 -0700
+Subject: [PATCH 14/15] Detect 64-bit MIPS targets
+
+Add mips64 target triplets and default to N64
+
+Upstream-Status: Submitted
+https://sourceware.org/ml/binutils/2016-08/msg00048.html
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ gold/configure.tgt | 14 ++++++++++++++
+ 1 file changed, 14 insertions(+)
+
+diff --git a/gold/configure.tgt b/gold/configure.tgt
+index 3d63027297..c1f92a1360 100644
+--- a/gold/configure.tgt
++++ b/gold/configure.tgt
+@@ -153,6 +153,13 @@ aarch64*-*)
+  targ_big_endian=false
+  targ_extra_big_endian=true
+  ;;
++mips*64*el*-*-*|mips*64*le*-*-*)
++ targ_obj=mips
++ targ_machine=EM_MIPS_RS3_LE
++ targ_size=64
++ targ_big_endian=false
++ targ_extra_big_endian=true
++ ;;
+ mips*el*-*-*|mips*le*-*-*)
+  targ_obj=mips
+  targ_machine=EM_MIPS_RS3_LE
+@@ -160,6 +167,13 @@ mips*el*-*-*|mips*le*-*-*)
+  targ_big_endian=false
+  targ_extra_big_endian=true
+  ;;
++mips*64*-*-*)
++ targ_obj=mips
++ targ_machine=EM_MIPS
++ targ_size=64
++ targ_big_endian=true
++ targ_extra_big_endian=false
++ ;;
+ mips*-*-*)
+  targ_obj=mips
+  targ_machine=EM_MIPS
+-- 
+2.14.0
+
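The new gold patch inserts the mips*64* triplet patterns ahead of the generic mips* entries because configure.tgt is a shell case statement and the first matching pattern wins; placed any later, a mips64el triplet would fall through to the 32-bit rules. The ordering effect can be reproduced with POSIX fnmatch(3) (the triplets are examples and the one-pattern-per-row table is a simplification of the patch):

#include <fnmatch.h>
#include <stdio.h>

/* Patterns in the order the patch adds them: 64-bit globs before the
   generic mips* catch-alls, because this loop, like a case statement,
   takes the first match. */
static const struct { const char *pattern; int size; } targets[] = {
  { "mips*64*el*-*-*", 64 },
  { "mips*el*-*-*",    32 },
  { "mips*64*-*-*",    64 },
  { "mips*-*-*",       32 },
};

static int classify(const char *triplet)
{
  for (size_t i = 0; i < sizeof targets / sizeof targets[0]; i++)
    if (fnmatch(targets[i].pattern, triplet, 0) == 0)
      return targets[i].size;
  return 0;
}

int main(void)
{
  /* Example triplets; the vendor/os fields are illustrative. */
  printf("mips64el-poky-linux -> %d-bit\n", classify("mips64el-poky-linux"));
  printf("mipsel-poky-linux   -> %d-bit\n", classify("mipsel-poky-linux"));
  printf("mips64-poky-linux   -> %d-bit\n", classify("mips64-poky-linux"));
  return 0;
}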
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0014-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0014-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch
deleted file mode 100644
index bb95a0c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0014-fix-the-incorrect-assembling-for-ppc-wait-mnemonic.patch
+++ /dev/null
@@ -1,36 +0,0 @@
-From a9177150f808d7e6285e1011c85d0ce89037b553 Mon Sep 17 00:00:00 2001
-From: Zhenhua Luo <zhenhua.luo@nxp.com>
-Date: Sat, 11 Jun 2016 22:08:29 -0500
-Subject: [PATCH 14/15] fix the incorrect assembling for ppc wait mnemonic
-
-Signed-off-by: Zhenhua Luo <zhenhua.luo@nxp.com>
-
-Upstream-Status: Pending
----
- opcodes/ppc-opc.c | 3 +--
- 1 file changed, 1 insertion(+), 2 deletions(-)
-
-diff --git a/opcodes/ppc-opc.c b/opcodes/ppc-opc.c
-index 30fd789182..f2708e2276 100644
---- a/opcodes/ppc-opc.c
-+++ b/opcodes/ppc-opc.c
-@@ -4876,7 +4876,6 @@ const struct powerpc_opcode powerpc_opcodes[] = {
- {"ldepx",	X(31,29),	X_MASK,	  E500MC|PPCA2, 0,		{RT, RA0, RB}},
- 
- {"waitasec",	X(31,30),      XRTRARB_MASK, POWER8,	POWER9,		{0}},
--{"wait",	X(31,30),	XWC_MASK,    POWER9,	0,		{WC}},
- 
- {"lwepx",	X(31,31),	X_MASK,	  E500MC|PPCA2, 0,		{RT, RA0, RB}},
- 
-@@ -4930,7 +4929,7 @@ const struct powerpc_opcode powerpc_opcodes[] = {
- 
- {"waitrsv",	X(31,62)|(1<<21), 0xffffffff, E500MC|PPCA2, 0,		{0}},
- {"waitimpl",	X(31,62)|(2<<21), 0xffffffff, E500MC|PPCA2, 0,		{0}},
--{"wait",	X(31,62),	XWC_MASK,    E500MC|PPCA2, 0,		{WC}},
-+{"wait",	X(31,62),	XWC_MASK,    E500MC|PPCA2|POWER9, 0,	{WC}},
- 
- {"dcbstep",	XRT(31,63,0),	XRT_MASK,    E500MC|PPCA2, 0,		{RA0, RB}},
- 
--- 
-2.12.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0015-sync-with-OE-libtool-changes.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0015-sync-with-OE-libtool-changes.patch
index 1559038..2c8900c 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0015-sync-with-OE-libtool-changes.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0015-sync-with-OE-libtool-changes.patch
@@ -1,4 +1,4 @@
-From 58cdb28ed71cb57b4a0ea1b412a708fdb0f84c27 Mon Sep 17 00:00:00 2001
+From 9b456a0e4f284fd41ac36595144ed44dc82410ee Mon Sep 17 00:00:00 2001
 From: Ross Burton <ross.burton@intel.com>
 Date: Mon, 6 Mar 2017 23:33:27 -0800
 Subject: [PATCH 15/15] sync with OE libtool changes
@@ -85,5 +85,5 @@
  	elif test -n "$runpath_var"; then
  	  case "$finalize_perm_rpath " in
 -- 
-2.12.0
+2.14.0
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0016-Detect-64-bit-MIPS-targets.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0016-Detect-64-bit-MIPS-targets.patch
deleted file mode 100644
index 1b2eb84..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0016-Detect-64-bit-MIPS-targets.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-From c3ebde5d8cc3b0092966b4d725cad7cfd074fd8d Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 31 Mar 2017 11:42:03 -0700
-Subject: [PATCH 16/16] Detect 64-bit MIPS targets
-
-Add mips64 target triplets and default to N64
-
-Upstream-Status: Submitted
-https://sourceware.org/ml/binutils/2016-08/msg00048.html
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gold/configure.tgt | 14 ++++++++++++++
- 1 file changed, 14 insertions(+)
-
-diff --git a/gold/configure.tgt b/gold/configure.tgt
-index 3d63027297..c1f92a1360 100644
---- a/gold/configure.tgt
-+++ b/gold/configure.tgt
-@@ -153,6 +153,13 @@ aarch64*-*)
-  targ_big_endian=false
-  targ_extra_big_endian=true
-  ;;
-+mips*64*el*-*-*|mips*64*le*-*-*)
-+ targ_obj=mips
-+ targ_machine=EM_MIPS_RS3_LE
-+ targ_size=64
-+ targ_big_endian=false
-+ targ_extra_big_endian=true
-+ ;;
- mips*el*-*-*|mips*le*-*-*)
-  targ_obj=mips
-  targ_machine=EM_MIPS_RS3_LE
-@@ -160,6 +167,13 @@ mips*el*-*-*|mips*le*-*-*)
-  targ_big_endian=false
-  targ_extra_big_endian=true
-  ;;
-+mips*64*-*-*)
-+ targ_obj=mips
-+ targ_machine=EM_MIPS
-+ targ_size=64
-+ targ_big_endian=true
-+ targ_extra_big_endian=false
-+ ;;
- mips*-*-*)
-  targ_obj=mips
-  targ_machine=EM_MIPS
--- 
-2.12.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0017-bfd-Improve-lookup-of-file-line-information-for-erro.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0017-bfd-Improve-lookup-of-file-line-information-for-erro.patch
deleted file mode 100644
index 23ad10a..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0017-bfd-Improve-lookup-of-file-line-information-for-erro.patch
+++ /dev/null
@@ -1,75 +0,0 @@
-From 3239a4231ff79bf8b67b8faaf414b1667486167c Mon Sep 17 00:00:00 2001
-From: Andrew Burgess <andrew.burgess@embecosm.com>
-Date: Mon, 19 Dec 2016 15:27:59 +0000
-Subject: [PATCH] bfd: Improve lookup of file / line information for errors
-
-When looking up file and line information (used from the linker to
-report error messages) if no symbol is passed in, then use the symbol
-list to look for a matching symbol.
-
-If a matching symbol is found then use this to look up the file / line
-information.
-
-This should improve errors when looking up file / line information for
-data sections.  Hopefully we should find a matching data symbol, which
-should, in turn (we hope) match a DW_TAG_variable in the DWARF, this
-should allow us to give accurate file / line errors for data symbols.
-
-As the hope is to find a matching DW_TAG_variable in the DWARF then we
-ignore section symbols, and prefer global symbols to locals.
-
-CVE: CVE-2017-8392
-Upstream-Status: Accepted
-
-Signed-off-by: Fan Xin <fan.xin@jp.fujitsu.com>
----
- bfd/dwarf2.c                   | 32 ++++++++++++++++++++++++++++++++
- 1 files changed, 32 insertions(+)
-
-
-diff --git a/bfd/dwarf2.c b/bfd/dwarf2.c
-index 03447a9..9bb8126 100644
---- a/bfd/dwarf2.c
-+++ b/bfd/dwarf2.c
-@@ -4155,6 +4155,38 @@ _bfd_dwarf2_find_nearest_line (bfd *abfd,
-     {
-       BFD_ASSERT (section != NULL && functionname_ptr != NULL);
-       addr = offset;
-+
-+      /* If we have no SYMBOL but the section we're looking at is not a
-+         code section, then take a look through the list of symbols to see
-+         if we have a symbol at the address we're looking for.  If we do
-+         then use this to look up line information.  This will allow us to
-+         give file and line results for data symbols.  We exclude code
-+         symbols here, if we look up a function symbol and then look up the
-+         line information we'll actually return the line number for the
-+         opening '{' rather than the function definition line.  This is
-+         because looking up by symbol uses the line table, in which the
-+         first line for a function is usually the opening '{', while
-+         looking up the function by section + offset uses the
-+         DW_AT_decl_line from the function DW_TAG_subprogram for the line,
-+         which will be the line of the function name.  */
-+      if ((section->flags & SEC_CODE) == 0)
-+	{
-+	  asymbol **tmp;
-+
-+	  for (tmp = symbols; (*tmp) != NULL; ++tmp)
-+	    if ((*tmp)->the_bfd == abfd
-+		&& (*tmp)->section == section
-+		&& (*tmp)->value == offset
-+		&& ((*tmp)->flags & BSF_SECTION_SYM) == 0)
-+	      {
-+		symbol = *tmp;
-+		do_line = TRUE;
-+                /* For local symbols, keep going in the hope we find a
-+                   global.  */
-+                if ((symbol->flags & BSF_GLOBAL) != 0)
-+                  break;
-+	      }
-+	}
-     }
- 
-   if (section->output_section)
--- 
-1.9.1
-
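The removed CVE-2017-8392 patch made _bfd_dwarf2_find_nearest_line, when called without a symbol for a non-code section, scan the symbol list for a symbol at the requested offset, skip section symbols, and prefer a global over a local so the later DWARF lookup has a better chance of landing on a matching DW_TAG_variable. The selection loop in isolation looks roughly like this (the struct and helper are illustrative, not the bfd types):

#include <stdbool.h>
#include <stdio.h>

struct sym { const char *name; int section; unsigned long value;
             bool is_section_sym; bool is_global; };

/* Pick a symbol at OFFSET in SECTION, ignoring section symbols and
   preferring a global over a local, as in the dropped patch. */
static const struct sym *find_data_symbol(const struct sym *syms, size_t n,
                                          int section, unsigned long offset)
{
  const struct sym *best = NULL;

  for (size_t i = 0; i < n; i++)
    {
      if (syms[i].section != section || syms[i].value != offset
          || syms[i].is_section_sym)
        continue;
      best = &syms[i];
      if (best->is_global)   /* keep going only in the hope of a global */
        break;
    }
  return best;
}

int main(void)
{
  const struct sym table[] = {
    { ".data",     1, 0x10, true,  false },
    { "local_buf", 1, 0x10, false, false },
    { "GlobalBuf", 1, 0x10, false, true  },
  };
  const struct sym *s = find_data_symbol(table, 3, 1, 0x10);

  printf("picked: %s\n", s ? s->name : "(none)");
  return 0;
}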
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0018-PR-21409-segfault-in-_bfd_dwarf2_find_nearest_line.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0018-PR-21409-segfault-in-_bfd_dwarf2_find_nearest_line.patch
deleted file mode 100644
index acb37df..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/0018-PR-21409-segfault-in-_bfd_dwarf2_find_nearest_line.patch
+++ /dev/null
@@ -1,33 +0,0 @@
-From 97e83a100aa8250be783304bfe0429761c6e6b6b Mon Sep 17 00:00:00 2001
-From: Alan Modra <amodra@gmail.com>
-Date: Sun, 23 Apr 2017 13:55:49 +0930
-Subject: [PATCH] PR 21409, segfault in _bfd_dwarf2_find_nearest_line
-
-	PR 21409
-	* dwarf2.c (_bfd_dwarf2_find_nearest_line): Don't segfault when
-	no symbols.
-
-CVE: CVE-2017-8392
-Upstream-Status: Accepted
-
-Signed-off-by: Fan Xin <fan.xin@jp.fujitsu.com>
----
- bfd/dwarf2.c  | 2 +-
- 1 files changed, 1 insertions(+), 1 deletion(-)
-
-diff --git a/bfd/dwarf2.c b/bfd/dwarf2.c
-index 132a674..0ef3e1f 100644
---- a/bfd/dwarf2.c
-+++ b/bfd/dwarf2.c
-@@ -4205,7 +4205,7 @@ _bfd_dwarf2_find_nearest_line (bfd *abfd,
-          looking up the function by section + offset uses the
-          DW_AT_decl_line from the function DW_TAG_subprogram for the line,
-          which will be the line of the function name.  */
--      if ((section->flags & SEC_CODE) == 0)
-+      if (symbols != NULL && (section->flags & SEC_CODE) == 0)
- 	{
- 	  asymbol **tmp;
- 
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-6965.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-6965.patch
deleted file mode 100644
index 1334c94..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-6965.patch
+++ /dev/null
@@ -1,124 +0,0 @@
-From bdc5166c274b842f83f8328e7cfaaf80fd29934e Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Mon, 13 Feb 2017 13:08:32 +0000
-Subject: [PATCH 1/2] Fix readelf writing to illegal addresses whilst
- processing corrupt input files containing symbol-difference relocations.
-
-	PR binutils/21137
-	* readelf.c (target_specific_reloc_handling): Add end parameter.
-	Check for buffer overflow before writing relocated values.
-	(apply_relocations): Pass end to target_specific_reloc_handling.
-
-(cherry pick from commit 03f7786e2f440b9892b1c34a58fb26222ce1b493)
-Upstream-Status: Backport [master]
-CVE: CVE-2017-6965
-
-Signed-off-by: Yuanjie Huang <yuanjie.huang@windriver.com>
----
- binutils/ChangeLog |  7 +++++++
- binutils/readelf.c | 30 +++++++++++++++++++++++++-----
- 2 files changed, 32 insertions(+), 5 deletions(-)
-
-diff --git a/binutils/ChangeLog b/binutils/ChangeLog
-index f21867f98c..e789a3b99b 100644
---- a/binutils/ChangeLog
-+++ b/binutils/ChangeLog
-@@ -1,3 +1,10 @@
-+2017-02-13  Nick Clifton  <nickc@redhat.com>
-+
-+	PR binutils/21137
-+	* readelf.c (target_specific_reloc_handling): Add end parameter.
-+	Check for buffer overflow before writing relocated values.
-+	(apply_relocations): Pass end to target_specific_reloc_handling.
-+
- 2017-03-02  Tristan Gingold  <gingold@adacore.com>
- 
- 	* configure: Regenerate.
-diff --git a/binutils/readelf.c b/binutils/readelf.c
-index b5f577f5a1..8cdaae3b8c 100644
---- a/binutils/readelf.c
-+++ b/binutils/readelf.c
-@@ -11585,6 +11585,7 @@ process_syminfo (FILE * file ATTRIBUTE_UNUSED)
- static bfd_boolean
- target_specific_reloc_handling (Elf_Internal_Rela * reloc,
- 				unsigned char *     start,
-+				unsigned char *     end,
- 				Elf_Internal_Sym *  symtab)
- {
-   unsigned int reloc_type = get_reloc_type (reloc->r_info);
-@@ -11625,13 +11626,19 @@ target_specific_reloc_handling (Elf_Internal_Rela * reloc,
- 	  handle_sym_diff:
- 	    if (saved_sym != NULL)
- 	      {
-+		int reloc_size = reloc_type == 1 ? 4 : 2;
- 		bfd_vma value;
- 
- 		value = reloc->r_addend
- 		  + (symtab[get_reloc_symindex (reloc->r_info)].st_value
- 		     - saved_sym->st_value);
- 
--		byte_put (start + reloc->r_offset, value, reloc_type == 1 ? 4 : 2);
-+		if (start + reloc->r_offset + reloc_size >= end)
-+		  /* PR 21137 */
-+		  error (_("MSP430 sym diff reloc writes past end of section (%p vs %p)\n"),
-+			 start + reloc->r_offset + reloc_size, end);
-+		else
-+		  byte_put (start + reloc->r_offset, value, reloc_size);
- 
- 		saved_sym = NULL;
- 		return TRUE;
-@@ -11662,13 +11669,18 @@ target_specific_reloc_handling (Elf_Internal_Rela * reloc,
- 	  case 2: /* R_MN10300_16 */
- 	    if (saved_sym != NULL)
- 	      {
-+		int reloc_size = reloc_type == 1 ? 4 : 2;
- 		bfd_vma value;
- 
- 		value = reloc->r_addend
- 		  + (symtab[get_reloc_symindex (reloc->r_info)].st_value
- 		     - saved_sym->st_value);
- 
--		byte_put (start + reloc->r_offset, value, reloc_type == 1 ? 4 : 2);
-+		if (start + reloc->r_offset + reloc_size >= end)
-+		  error (_("MN10300 sym diff reloc writes past end of section (%p vs %p)\n"),
-+			 start + reloc->r_offset + reloc_size, end);
-+		else
-+		  byte_put (start + reloc->r_offset, value, reloc_size);
- 
- 		saved_sym = NULL;
- 		return TRUE;
-@@ -11703,12 +11715,20 @@ target_specific_reloc_handling (Elf_Internal_Rela * reloc,
- 	    break;
- 
- 	  case 0x41: /* R_RL78_ABS32.  */
--	    byte_put (start + reloc->r_offset, value, 4);
-+	    if (start + reloc->r_offset + 4 >= end)
-+	      error (_("RL78 sym diff reloc writes past end of section (%p vs %p)\n"),
-+		     start + reloc->r_offset + 2, end);
-+	    else
-+	      byte_put (start + reloc->r_offset, value, 4);
- 	    value = 0;
- 	    return TRUE;
- 
- 	  case 0x43: /* R_RL78_ABS16.  */
--	    byte_put (start + reloc->r_offset, value, 2);
-+	    if (start + reloc->r_offset + 2 >= end)
-+	      error (_("RL78 sym diff reloc writes past end of section (%p vs %p)\n"),
-+		     start + reloc->r_offset + 2, end);
-+	    else
-+	      byte_put (start + reloc->r_offset, value, 2);
- 	    value = 0;
- 	    return TRUE;
- 
-@@ -12325,7 +12345,7 @@ apply_relocations (void *                     file,
- 
- 	  reloc_type = get_reloc_type (rp->r_info);
- 
--	  if (target_specific_reloc_handling (rp, start, symtab))
-+	  if (target_specific_reloc_handling (rp, start, end, symtab))
- 	    continue;
- 	  else if (is_none_reloc (reloc_type))
- 	    continue;
--- 
-2.11.0
-
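The removed CVE-2017-6965 backport wraps each byte_put() in readelf's target-specific relocation handling with a bounds check so corrupt sym-diff relocations cannot write past the end of the loaded section. The same guard in a self-contained form (size-based here rather than the pointer comparison used in the patch, with illustrative names):

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Refuse to apply a relocation whose target would fall outside the
   section buffer; otherwise store VALUE, byte_put style. */
static int apply_reloc(unsigned char *section, size_t section_size,
                       size_t r_offset, uint32_t value, size_t reloc_size)
{
  if (r_offset + reloc_size > section_size)
    {
      fprintf(stderr, "reloc writes past end of section\n");
      return -1;
    }
  memcpy(section + r_offset, &value, reloc_size);
  return 0;
}

int main(void)
{
  unsigned char section[16] = { 0 };

  /* An in-range write succeeds; an out-of-range write is rejected. */
  printf("in range: %d\n", apply_reloc(section, sizeof section, 4, 0xdeadbeef, 4));
  printf("overflow: %d\n", apply_reloc(section, sizeof section, 14, 0xdeadbeef, 4));
  return 0;
}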
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-6966.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-6966.patch
deleted file mode 100644
index dd58df5..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-6966.patch
+++ /dev/null
@@ -1,241 +0,0 @@
-From 383ec757d27652448d1511169e1133f486abf54f Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Mon, 13 Feb 2017 14:03:22 +0000
-Subject: [PATCH] Fix read-after-free error in readelf when processing
- multiple, relocated sections in an MSP430 binary.
-
-	PR binutils/21139
-	* readelf.c (target_specific_reloc_handling): Add num_syms
-	parameter.  Check for symbol table overflow before accessing
-	symbol value.  If reloc pointer is NULL, discard all saved state.
-	(apply_relocations): Pass num_syms to target_specific_reloc_handling.
-	Call target_specific_reloc_handling with a NULL reloc pointer
-	after processing all of the relocs.
-
-(cherry pick from commit f84ce13b6708801ca1d6289b7c4003e2f5a6d7f9)
-Upstream-Status: Backport [master]
-CVE: CVE-2017-6966
-
-Signed-off-by: Yuanjie Huang <yuanjie.huang@windriver.com>
----
- binutils/ChangeLog |  10 +++++
- binutils/readelf.c | 109 +++++++++++++++++++++++++++++++++++++++++------------
- 2 files changed, 94 insertions(+), 25 deletions(-)
-
-diff --git a/binutils/ChangeLog b/binutils/ChangeLog
-index e789a3b99b..bd63c8a0d8 100644
---- a/binutils/ChangeLog
-+++ b/binutils/ChangeLog
-@@ -1,5 +1,15 @@
- 2017-02-13  Nick Clifton  <nickc@redhat.com>
- 
-+	PR binutils/21139
-+	* readelf.c (target_specific_reloc_handling): Add num_syms
-+	parameter.  Check for symbol table overflow before accessing
-+	symbol value.  If reloc pointer is NULL, discard all saved state.
-+	(apply_relocations): Pass num_syms to target_specific_reloc_handling.
-+	Call target_specific_reloc_handling with a NULL reloc pointer
-+	after processing all of the relocs.
-+
-+2017-02-13  Nick Clifton  <nickc@redhat.com>
-+
- 	PR binutils/21137
- 	* readelf.c (target_specific_reloc_handling): Add end parameter.
- 	Check for buffer overflow before writing relocated values.
-diff --git a/binutils/readelf.c b/binutils/readelf.c
-index 8cdaae3b8c..7c158c6342 100644
---- a/binutils/readelf.c
-+++ b/binutils/readelf.c
-@@ -11580,15 +11580,27 @@ process_syminfo (FILE * file ATTRIBUTE_UNUSED)
- 
- /* Check to see if the given reloc needs to be handled in a target specific
-    manner.  If so then process the reloc and return TRUE otherwise return
--   FALSE.  */
-+   FALSE.
-+
-+   If called with reloc == NULL, then this is a signal that reloc processing
-+   for the current section has finished, and any saved state should be
-+   discarded.  */
- 
- static bfd_boolean
- target_specific_reloc_handling (Elf_Internal_Rela * reloc,
- 				unsigned char *     start,
- 				unsigned char *     end,
--				Elf_Internal_Sym *  symtab)
-+				Elf_Internal_Sym *  symtab,
-+				unsigned long       num_syms)
- {
--  unsigned int reloc_type = get_reloc_type (reloc->r_info);
-+  unsigned int reloc_type = 0;
-+  unsigned long sym_index = 0;
-+
-+  if (reloc)
-+    {
-+      reloc_type = get_reloc_type (reloc->r_info);
-+      sym_index = get_reloc_symindex (reloc->r_info);
-+    }
- 
-   switch (elf_header.e_machine)
-     {
-@@ -11597,6 +11609,12 @@ target_specific_reloc_handling (Elf_Internal_Rela * reloc,
-       {
- 	static Elf_Internal_Sym * saved_sym = NULL;
- 
-+	if (reloc == NULL)
-+	  {
-+	    saved_sym = NULL;
-+	    return TRUE;
-+	  }
-+
- 	switch (reloc_type)
- 	  {
- 	  case 10: /* R_MSP430_SYM_DIFF */
-@@ -11604,7 +11622,12 @@ target_specific_reloc_handling (Elf_Internal_Rela * reloc,
- 	      break;
- 	    /* Fall through.  */
- 	  case 21: /* R_MSP430X_SYM_DIFF */
--	    saved_sym = symtab + get_reloc_symindex (reloc->r_info);
-+	    /* PR 21139.  */
-+	    if (sym_index >= num_syms)
-+	      error (_("MSP430 SYM_DIFF reloc contains invalid symbol index %lu\n"),
-+		     sym_index);
-+	    else
-+	      saved_sym = symtab + sym_index;
- 	    return TRUE;
- 
- 	  case 1: /* R_MSP430_32 or R_MSP430_ABS32 */
-@@ -11629,16 +11652,21 @@ target_specific_reloc_handling (Elf_Internal_Rela * reloc,
- 		int reloc_size = reloc_type == 1 ? 4 : 2;
- 		bfd_vma value;
- 
--		value = reloc->r_addend
--		  + (symtab[get_reloc_symindex (reloc->r_info)].st_value
--		     - saved_sym->st_value);
--
--		if (start + reloc->r_offset + reloc_size >= end)
--		  /* PR 21137 */
--		  error (_("MSP430 sym diff reloc writes past end of section (%p vs %p)\n"),
--			 start + reloc->r_offset + reloc_size, end);
-+		if (sym_index >= num_syms)
-+		  error (_("MSP430 reloc contains invalid symbol index %lu\n"),
-+			 sym_index);
- 		else
--		  byte_put (start + reloc->r_offset, value, reloc_size);
-+		  {
-+		    value = reloc->r_addend + (symtab[sym_index].st_value
-+					       - saved_sym->st_value);
-+
-+		    if (start + reloc->r_offset + reloc_size >= end)
-+		      /* PR 21137 */
-+		      error (_("MSP430 sym diff reloc writes past end of section (%p vs %p)\n"),
-+			     start + reloc->r_offset + reloc_size, end);
-+		    else
-+		      byte_put (start + reloc->r_offset, value, reloc_size);
-+		  }
- 
- 		saved_sym = NULL;
- 		return TRUE;
-@@ -11658,13 +11686,24 @@ target_specific_reloc_handling (Elf_Internal_Rela * reloc,
-       {
- 	static Elf_Internal_Sym * saved_sym = NULL;
- 
-+	if (reloc == NULL)
-+	  {
-+	    saved_sym = NULL;
-+	    return TRUE;
-+	  }
-+
- 	switch (reloc_type)
- 	  {
- 	  case 34: /* R_MN10300_ALIGN */
- 	    return TRUE;
- 	  case 33: /* R_MN10300_SYM_DIFF */
--	    saved_sym = symtab + get_reloc_symindex (reloc->r_info);
-+	    if (sym_index >= num_syms)
-+	      error (_("MN10300_SYM_DIFF reloc contains invalid symbol index %lu\n"),
-+		     sym_index);
-+	    else
-+	      saved_sym = symtab + sym_index;
- 	    return TRUE;
-+
- 	  case 1: /* R_MN10300_32 */
- 	  case 2: /* R_MN10300_16 */
- 	    if (saved_sym != NULL)
-@@ -11672,15 +11711,20 @@ target_specific_reloc_handling (Elf_Internal_Rela * reloc,
- 		int reloc_size = reloc_type == 1 ? 4 : 2;
- 		bfd_vma value;
- 
--		value = reloc->r_addend
--		  + (symtab[get_reloc_symindex (reloc->r_info)].st_value
--		     - saved_sym->st_value);
--
--		if (start + reloc->r_offset + reloc_size >= end)
--		  error (_("MN10300 sym diff reloc writes past end of section (%p vs %p)\n"),
--			 start + reloc->r_offset + reloc_size, end);
-+		if (sym_index >= num_syms)
-+		  error (_("MN10300 reloc contains invalid symbol index %lu\n"),
-+			 sym_index);
- 		else
--		  byte_put (start + reloc->r_offset, value, reloc_size);
-+		  {
-+		    value = reloc->r_addend + (symtab[sym_index].st_value
-+					       - saved_sym->st_value);
-+
-+		    if (start + reloc->r_offset + reloc_size >= end)
-+		      error (_("MN10300 sym diff reloc writes past end of section (%p vs %p)\n"),
-+			     start + reloc->r_offset + reloc_size, end);
-+		    else
-+		      byte_put (start + reloc->r_offset, value, reloc_size);
-+		  }
- 
- 		saved_sym = NULL;
- 		return TRUE;
-@@ -11700,12 +11744,24 @@ target_specific_reloc_handling (Elf_Internal_Rela * reloc,
- 	static bfd_vma saved_sym2 = 0;
- 	static bfd_vma value;
- 
-+	if (reloc == NULL)
-+	  {
-+	    saved_sym1 = saved_sym2 = 0;
-+	    return TRUE;
-+	  }
-+
- 	switch (reloc_type)
- 	  {
- 	  case 0x80: /* R_RL78_SYM.  */
- 	    saved_sym1 = saved_sym2;
--	    saved_sym2 = symtab[get_reloc_symindex (reloc->r_info)].st_value;
--	    saved_sym2 += reloc->r_addend;
-+	    if (sym_index >= num_syms)
-+	      error (_("RL78_SYM reloc contains invalid symbol index %lu\n"),
-+		     sym_index);
-+	    else
-+	      {
-+		saved_sym2 = symtab[sym_index].st_value;
-+		saved_sym2 += reloc->r_addend;
-+	      }
- 	    return TRUE;
- 
- 	  case 0x83: /* R_RL78_OPsub.  */
-@@ -12345,7 +12401,7 @@ apply_relocations (void *                     file,
- 
- 	  reloc_type = get_reloc_type (rp->r_info);
- 
--	  if (target_specific_reloc_handling (rp, start, end, symtab))
-+	  if (target_specific_reloc_handling (rp, start, end, symtab, num_syms))
- 	    continue;
- 	  else if (is_none_reloc (reloc_type))
- 	    continue;
-@@ -12441,6 +12497,9 @@ apply_relocations (void *                     file,
- 	}
- 
-       free (symtab);
-+      /* Let the target specific reloc processing code know that
-+	 we have finished with these relocs.  */
-+      target_specific_reloc_handling (NULL, NULL, NULL, NULL, 0);
- 
-       if (relocs_return)
- 	{
--- 
-2.11.0
-
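The removed CVE-2017-6966 backport adds two defenses: a symbol index taken from a relocation is validated against the symbol count before symtab[sym_index] is read, and the handler's saved per-section symbol pointer is discarded when the caller signals end-of-section with a NULL reloc, so it is never reused after the symbol table has been freed. A minimal sketch of the index check (types and names are illustrative, not the readelf code):

#include <stdio.h>

struct sym { unsigned long st_value; };

/* Reject a reloc whose symbol index points outside the symbol table
   before dereferencing symtab[sym_index]. */
static int sym_value(const struct sym *symtab, unsigned long num_syms,
                     unsigned long sym_index, unsigned long *value)
{
  if (sym_index >= num_syms)
    {
      fprintf(stderr, "reloc contains invalid symbol index %lu\n", sym_index);
      return -1;
    }
  *value = symtab[sym_index].st_value;
  return 0;
}

int main(void)
{
  const struct sym symtab[] = { { 0x1000 }, { 0x2000 } };
  unsigned long v;

  if (sym_value(symtab, 2, 1, &v) == 0)
    printf("index 1 -> 0x%lx\n", v);
  if (sym_value(symtab, 2, 7, &v) != 0)
    printf("index 7 -> rejected\n");
  return 0;
}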
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-6969.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-6969.patch
deleted file mode 100644
index ed54034..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-6969.patch
+++ /dev/null
@@ -1,57 +0,0 @@
-From 1d9a2696903fc59d6a936f4ab4e4407ef329d066 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Fri, 17 Feb 2017 15:59:45 +0000
-Subject: Fix illegal memory accesses in readelf when parsing
- a corrupt binary.
-
-	PR binutils/21156
-	* readelf.c (find_section_in_set): Test for invalid section
-	indicies.
-
-CVE: CVE-2017-6969
-Upstream-Status: Backport [master]
-
-Signed-off-by: Yuanjie Huang <yuanjie.huang@windriver.com>
----
- binutils/ChangeLog |  6 ++++++
- binutils/readelf.c | 10 ++++++++--
- 2 files changed, 14 insertions(+), 2 deletions(-)
-
-diff --git a/binutils/ChangeLog b/binutils/ChangeLog
-index bd63c8a0d8..1d840b42f9 100644
---- a/binutils/ChangeLog
-+++ b/binutils/ChangeLog
-@@ -1,3 +1,9 @@
-+2017-02-17  Nick Clifton  <nickc@redhat.com>
-+
-+	PR binutils/21156
-+	* readelf.c (find_section_in_set): Test for invalid section
-+	indicies.
-+
- 2017-02-13  Nick Clifton  <nickc@redhat.com>
- 
- 	PR binutils/21139
-diff --git a/binutils/readelf.c b/binutils/readelf.c
-index 7c158c6342..4960491c5c 100644
---- a/binutils/readelf.c
-+++ b/binutils/readelf.c
-@@ -675,8 +675,14 @@ find_section_in_set (const char * name, unsigned int * set)
-   if (set != NULL)
-     {
-       while ((i = *set++) > 0)
--	if (streq (SECTION_NAME (section_headers + i), name))
--	  return section_headers + i;
-+	{
-+	  /* See PR 21156 for a reproducer.  */
-+	  if (i >= elf_header.e_shnum)
-+	    continue; /* FIXME: Should we issue an error message ?  */
-+
-+	  if (streq (SECTION_NAME (section_headers + i), name))
-+	    return section_headers + i;
-+	}
-     }
- 
-   return find_section (name);
--- 
-2.11.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-6969_2.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-6969_2.patch
deleted file mode 100644
index 59a5dec..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-6969_2.patch
+++ /dev/null
@@ -1,122 +0,0 @@
-From ef81126314f67472a46db9581530fbf5ccb6b3f2 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Mon, 20 Feb 2017 14:40:39 +0000
-Subject: Fix another memory access error in readelf when
- parsing a corrupt binary.
-
-	PR binutils/21156
-	* dwarf.c (cu_tu_indexes_read): Move into...
-	(load_cu_tu_indexes): ... here.  Change the variable into
-	tri-state.  Change the function into boolean, returning
-	false if the indicies could not be loaded.
-	(find_cu_tu_set): Return NULL if the indicies could not be
-	loaded.
-
-CVE: CVE-2017-6969
-Upstream-Status: Backport [master]
-
-Signed-off-by: Yuanjie Huang <yuanjie.huang@windriver.com>
----
- binutils/ChangeLog | 10 ++++++++++
- binutils/dwarf.c   | 34 ++++++++++++++++++++--------------
- 2 files changed, 30 insertions(+), 14 deletions(-)
-
-diff --git a/binutils/ChangeLog b/binutils/ChangeLog
-index 1d840b42f9..53352c1801 100644
---- a/binutils/ChangeLog
-+++ b/binutils/ChangeLog
-@@ -1,3 +1,13 @@
-+2017-02-20  Nick Clifton  <nickc@redhat.com>
-+
-+	PR binutils/21156
-+	* dwarf.c (cu_tu_indexes_read): Move into...
-+	(load_cu_tu_indexes): ... here.  Change the variable into
-+	tri-state.  Change the function into boolean, returning
-+	false if the indicies could not be loaded.
-+	(find_cu_tu_set): Return NULL if the indicies could not be
-+	loaded.
-+
- 2017-02-17  Nick Clifton  <nickc@redhat.com>
- 
- 	PR binutils/21156
-diff --git a/binutils/dwarf.c b/binutils/dwarf.c
-index 0184a7ab2e..6d879c9b61 100644
---- a/binutils/dwarf.c
-+++ b/binutils/dwarf.c
-@@ -76,7 +76,6 @@ int dwarf_check = 0;
-    as a zero-terminated list of section indexes comprising one set of debug
-    sections from a .dwo file.  */
- 
--static int cu_tu_indexes_read = 0;
- static unsigned int *shndx_pool = NULL;
- static unsigned int shndx_pool_size = 0;
- static unsigned int shndx_pool_used = 0;
-@@ -99,7 +98,7 @@ static int tu_count = 0;
- static struct cu_tu_set *cu_sets = NULL;
- static struct cu_tu_set *tu_sets = NULL;
- 
--static void load_cu_tu_indexes (void *file);
-+static bfd_boolean load_cu_tu_indexes (void *);
- 
- /* Values for do_debug_lines.  */
- #define FLAG_DEBUG_LINES_RAW	 1
-@@ -2715,7 +2714,7 @@ load_debug_info (void * file)
-     return num_debug_info_entries;
- 
-   /* If this is a DWARF package file, load the CU and TU indexes.  */
--  load_cu_tu_indexes (file);
-+  (void) load_cu_tu_indexes (file);
- 
-   if (load_debug_section (info, file)
-       && process_debug_info (&debug_displays [info].section, file, abbrev, 1, 0))
-@@ -7378,21 +7377,27 @@ process_cu_tu_index (struct dwarf_section *section, int do_display)
-    section sets that we can use to associate a .debug_info.dwo section
-    with its associated .debug_abbrev.dwo section in a .dwp file.  */
- 
--static void
-+static bfd_boolean
- load_cu_tu_indexes (void *file)
- {
-+  static int cu_tu_indexes_read = -1; /* Tri-state variable.  */
-+
-   /* If we have already loaded (or tried to load) the CU and TU indexes
-      then do not bother to repeat the task.  */
--  if (cu_tu_indexes_read)
--    return;
--
--  if (load_debug_section (dwp_cu_index, file))
--    process_cu_tu_index (&debug_displays [dwp_cu_index].section, 0);
--
--  if (load_debug_section (dwp_tu_index, file))
--    process_cu_tu_index (&debug_displays [dwp_tu_index].section, 0);
-+  if (cu_tu_indexes_read == -1)
-+    {
-+      cu_tu_indexes_read = TRUE;
-+  
-+      if (load_debug_section (dwp_cu_index, file))
-+	if (! process_cu_tu_index (&debug_displays [dwp_cu_index].section, 0))
-+	  cu_tu_indexes_read = FALSE;
-+
-+      if (load_debug_section (dwp_tu_index, file))
-+	if (! process_cu_tu_index (&debug_displays [dwp_tu_index].section, 0))
-+	  cu_tu_indexes_read = FALSE;
-+    }
- 
--  cu_tu_indexes_read = 1;
-+  return (bfd_boolean) cu_tu_indexes_read;
- }
- 
- /* Find the set of sections that includes section SHNDX.  */
-@@ -7402,7 +7407,8 @@ find_cu_tu_set (void *file, unsigned int shndx)
- {
-   unsigned int i;
- 
--  load_cu_tu_indexes (file);
-+  if (! load_cu_tu_indexes (file))
-+    return NULL;
- 
-   /* Find SHNDX in the shndx pool.  */
-   for (i = 0; i < shndx_pool_used; i++)
--- 
-2.11.0
-
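The removed follow-up for CVE-2017-6969 turns dwarf.c's one-shot cu_tu_indexes_read flag into a tri-state: untried, loaded, or failed. A failed parse of the .dwp index sections is then remembered, and find_cu_tu_set() can return NULL instead of searching the shndx pool when the indexes never loaded. A minimal model of that lazy, tri-state load (the loader is a stand-in for process_cu_tu_index):

#include <stdbool.h>
#include <stdio.h>

/* Pretend the .dwp index sections are corrupt and fail to parse. */
static bool try_load_indexes(void)
{
  return false;
}

static bool load_cu_tu_indexes(void)
{
  static int cu_tu_indexes_read = -1;   /* -1: untried, 0: failed, 1: loaded */

  if (cu_tu_indexes_read == -1)
    cu_tu_indexes_read = try_load_indexes() ? 1 : 0;

  return cu_tu_indexes_read == 1;
}

int main(void)
{
  /* Both calls report failure; the parse is attempted only once. */
  printf("first lookup:  %s\n", load_cu_tu_indexes() ? "ok" : "no indexes");
  printf("second lookup: %s\n", load_cu_tu_indexes() ? "ok" : "no indexes");
  return 0;
}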
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-7209.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-7209.patch
deleted file mode 100644
index 2357a12..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-7209.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-From b2706ceadac7239e7b02d43f05100fc6538b0d65 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Mon, 13 Feb 2017 15:04:37 +0000
-Subject: Fix invalid read of section contents whilst processing a corrupt binary.
-
-	PR binutils/21135
-	* readelf.c (dump_section_as_bytes): Handle the case where
-	uncompress_section_contents returns false.
-
-CVE: CVE-2017-7209
-Upstream-Status: Backport[master]
-
-Signed-off-by: Yuanjie Huang <yuanjie.huang@windriver.com>
----
- binutils/ChangeLog |  6 ++++++
- binutils/readelf.c | 16 ++++++++++++----
- 2 files changed, 18 insertions(+), 4 deletions(-)
-
-diff --git a/binutils/ChangeLog b/binutils/ChangeLog
-index 53352c1801..cf92744c12 100644
---- a/binutils/ChangeLog
-+++ b/binutils/ChangeLog
-@@ -1,3 +1,9 @@
-+2017-02-13  Nick Clifton  <nickc@redhat.com>
-+
-+	PR binutils/21135
-+	* readelf.c (dump_section_as_bytes): Handle the case where
-+	uncompress_section_contents returns false.
-+
- 2017-02-20  Nick Clifton  <nickc@redhat.com>
- 
- 	PR binutils/21156
-diff --git a/binutils/readelf.c b/binutils/readelf.c
-index 4960491c5c..f0e7b080e8 100644
---- a/binutils/readelf.c
-+++ b/binutils/readelf.c
-@@ -12803,10 +12803,18 @@ dump_section_as_bytes (Elf_Internal_Shdr * section,
- 	  new_size -= 12;
- 	}
- 
--      if (uncompressed_size
--	  && uncompress_section_contents (& start, uncompressed_size,
--					  & new_size))
--	section_size = new_size;
-+      if (uncompressed_size)
-+	{
-+	  if (uncompress_section_contents (& start, uncompressed_size,
-+					   & new_size))
-+	    section_size = new_size;
-+	  else
-+	    {
-+	      error (_("Unable to decompress section %s\n"),
-+		     printable_section_name (section));
-+	      return;
-+	    }
-+	}
-     }
- 
-   if (relocate)
--- 
-2.11.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-7210.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-7210.patch
deleted file mode 100644
index 8791792..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-7210.patch
+++ /dev/null
@@ -1,71 +0,0 @@
-From 4da598a472e1d298825035e452e3bc68f714311c Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Tue, 14 Feb 2017 14:07:29 +0000
-Subject: Fix handling of corrupt STABS enum type strings.
-
-	PR binutils/21157
-	* stabs.c (parse_stab_enum_type): Check for corrupt NAME:VALUE
-	pairs.
-	(parse_number): Exit early if passed an empty string.
-
-CVE: CVE-2017-7210
-Upstream-Status: Backport [master]
-
-Signed-off-by: Yuanjie Huang <yuanjie.huang@windriver.com>
----
- binutils/ChangeLog |  7 +++++++
- binutils/stabs.c   | 14 +++++++++++++-
- 2 files changed, 20 insertions(+), 1 deletion(-)
-
-diff --git a/binutils/ChangeLog b/binutils/ChangeLog
-index cf92744c12..0045fbaaa6 100644
---- a/binutils/ChangeLog
-+++ b/binutils/ChangeLog
-@@ -1,3 +1,10 @@
-+2017-02-14  Nick Clifton  <nickc@redhat.com>
-+
-+	PR binutils/21157
-+	* stabs.c (parse_stab_enum_type): Check for corrupt NAME:VALUE
-+	pairs.
-+	(parse_number): Exit early if passed an empty string.
-+
- 2017-02-13  Nick Clifton  <nickc@redhat.com>
- 
- 	PR binutils/21135
-diff --git a/binutils/stabs.c b/binutils/stabs.c
-index f5c5d2d8e0..5d013cc361 100644
---- a/binutils/stabs.c
-+++ b/binutils/stabs.c
-@@ -232,6 +232,10 @@ parse_number (const char **pp, bfd_boolean *poverflow)
- 
-   orig = *pp;
- 
-+  /* Stop early if we are passed an empty string.  */
-+  if (*orig == 0)
-+    return (bfd_vma) 0;
-+
-   errno = 0;
-   ul = strtoul (*pp, (char **) pp, 0);
-   if (ul + 1 != 0 || errno == 0)
-@@ -1975,9 +1979,17 @@ parse_stab_enum_type (void *dhandle, const char **pp)
-       bfd_signed_vma val;
- 
-       p = *pp;
--      while (*p != ':')
-+      while (*p != ':' && *p != 0)
- 	++p;
- 
-+      if (*p == 0)
-+	{
-+	  bad_stab (orig);
-+	  free (names);
-+	  free (values);
-+	  return DEBUG_TYPE_NULL;
-+	}
-+
-       name = savestring (*pp, p - *pp);
- 
-       *pp = p + 1;
--- 
-2.11.0
-
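The removed CVE-2017-7210 backport hardens STABS parsing: the NAME:VALUE scan in parse_stab_enum_type() also stops at the terminating NUL and treats a missing ':' as corrupt input, and parse_number() returns early for an empty string. The same defensive shape in a stand-alone helper (simplified, not the binutils stabs.c code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Parse "NAME:VALUE"; reject strings with no ':' or an empty value
   instead of scanning past the end of the buffer. */
static int parse_name_value(const char *p, char *name, size_t name_len,
                            long *value)
{
  const char *colon = p;

  while (*colon != ':' && *colon != '\0')
    ++colon;
  if (*colon == '\0')
    return -1;                      /* corrupt pair: no ':' before the NUL */

  if ((size_t)(colon - p) >= name_len)
    return -1;
  memcpy(name, p, colon - p);
  name[colon - p] = '\0';

  if (colon[1] == '\0')             /* empty value: refuse, like parse_number */
    return -1;
  *value = strtol(colon + 1, NULL, 0);
  return 0;
}

int main(void)
{
  char name[32];
  long value;

  printf("good pair:   %d\n", parse_name_value("red:1", name, sizeof name, &value));
  printf("missing ':': %d\n", parse_name_value("red", name, sizeof name, &value));
  printf("empty value: %d\n", parse_name_value("red:", name, sizeof name, &value));
  return 0;
}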
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-7223.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-7223.patch
deleted file mode 100644
index c78c8bf..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-7223.patch
+++ /dev/null
@@ -1,52 +0,0 @@
-From 69ace2200106348a1b00d509a6a234337c104c17 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Thu, 1 Dec 2016 15:20:19 +0000
-Subject: [PATCH] Fix seg fault attempting to unget an EOF character.
-
-	PR gas/20898
-	* app.c (do_scrub_chars): Do not attempt to unget EOF.
-
-Affects: <= 2.28
-Upstream-Status: Backport
-CVE: CVE-2017-7223
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- gas/ChangeLog | 3 +++
- gas/app.c     | 2 +-
- 2 files changed, 4 insertions(+), 1 deletion(-)
-
-Index: git/gas/ChangeLog
-===================================================================
---- git.orig/gas/ChangeLog
-+++ git/gas/ChangeLog
-@@ -1,3 +1,8 @@
-+2016-12-01  Nick Clifton  <nickc@redhat.com>
-+ 
-+       PR gas/20898
-+       * app.c (do_scrub_chars): Do not attempt to unget EOF.
-+
- 2017-03-02  Tristan Gingold  <gingold@adacore.com>
- 
- 	* configure: Regenerate.
-@@ -198,7 +203,6 @@
- 	* config/tc-pru.c (md_number_to_chars): Fix parameter to be
- 	valueT, as declared in tc.h.
- 	(md_apply_fix): Fix to work on 32-bit hosts.
-->>>>>>> 0115611... RISC-V/GAS: Correct branch relaxation for weak symbols.
- 
- 2017-01-02  Alan Modra  <amodra@gmail.com>
- 
-Index: git/gas/app.c
-===================================================================
---- git.orig/gas/app.c
-+++ git/gas/app.c
-@@ -1350,7 +1350,7 @@ do_scrub_chars (size_t (*get) (char *, s
- 		  PUT (ch);
- 		  break;
- 		}
--	      else
-+	      else if (ch2 != EOF)
- 		{
- 		  state = 9;
- 		  if (ch == EOF || !IS_SYMBOL_COMPONENT (ch))
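The removed CVE-2017-7223 fix is a single guard in gas's scrubber: the lookahead character is pushed back only when it is not EOF, since per the patch description (PR gas/20898) attempting to unget EOF could crash do_scrub_chars(). A toy scanner with a one-slot pushback shows where that guard sits (the names and the pushback scheme are illustrative):

#include <stdio.h>

static const char *input;
static int pushback = -2;               /* -2: empty slot */

static int get_char(void)
{
  if (pushback != -2)
    {
      int c = pushback;
      pushback = -2;
      return c;
    }
  return *input ? (unsigned char)*input++ : EOF;
}

static void unget_char(int c)
{
  pushback = c;
}

int main(void)
{
  input = "a";

  int ch = get_char();                  /* 'a' */
  int ch2 = get_char();                 /* lookahead hits EOF */

  if (ch2 != EOF)                       /* the fix: never unget EOF */
    unget_char(ch2);

  printf("ch=%c, next=%d (EOF=%d)\n", ch, get_char(), EOF);
  return 0;
}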
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-7614.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-7614.patch
deleted file mode 100644
index be8631a..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-7614.patch
+++ /dev/null
@@ -1,103 +0,0 @@
-From ad32986fdf9da1c8748e47b8b45100398223dba8 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Tue, 4 Apr 2017 11:23:36 +0100
-Subject: [PATCH] Fix null pointer dereferences when using a link built with
- clang.
-
-	PR binutils/21342
-	* elflink.c (_bfd_elf_define_linkage_sym): Prevent null pointer
-	dereference.
-	(bfd_elf_final_link): Only initialize the extended symbol index
-	section if there are extended symbol tables to list.
-
-Upstream-Status: Backport
-CVE: CVE-2017-7614
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog |  8 ++++++++
- bfd/elflink.c | 35 +++++++++++++++++++++--------------
- 2 files changed, 29 insertions(+), 14 deletions(-)
-
-Index: git/bfd/elflink.c
-===================================================================
---- git.orig/bfd/elflink.c
-+++ git/bfd/elflink.c
-@@ -119,15 +119,18 @@ _bfd_elf_define_linkage_sym (bfd *abfd,
- 	 defined in shared libraries can't be overridden, because we
- 	 lose the link to the bfd which is via the symbol section.  */
-       h->root.type = bfd_link_hash_new;
-+      bh = &h->root;
-     }
-+  else
-+    bh = NULL;
- 
--  bh = &h->root;
-   bed = get_elf_backend_data (abfd);
-   if (!_bfd_generic_link_add_one_symbol (info, abfd, name, BSF_GLOBAL,
- 					 sec, 0, NULL, FALSE, bed->collect,
- 					 &bh))
-     return NULL;
-   h = (struct elf_link_hash_entry *) bh;
-+  BFD_ASSERT (h != NULL);
-   h->def_regular = 1;
-   h->non_elf = 0;
-   h->root.linker_def = 1;
-@@ -11973,24 +11976,28 @@ bfd_elf_final_link (bfd *abfd, struct bf
-     {
-       /* Finish up and write out the symbol string table (.strtab)
- 	 section.  */
--      Elf_Internal_Shdr *symstrtab_hdr;
-+      Elf_Internal_Shdr *symstrtab_hdr = NULL;
-       file_ptr off = symtab_hdr->sh_offset + symtab_hdr->sh_size;
- 
--      symtab_shndx_hdr = & elf_symtab_shndx_list (abfd)->hdr;
--      if (symtab_shndx_hdr != NULL && symtab_shndx_hdr->sh_name != 0)
-+      if (elf_symtab_shndx_list (abfd))
- 	{
--	  symtab_shndx_hdr->sh_type = SHT_SYMTAB_SHNDX;
--	  symtab_shndx_hdr->sh_entsize = sizeof (Elf_External_Sym_Shndx);
--	  symtab_shndx_hdr->sh_addralign = sizeof (Elf_External_Sym_Shndx);
--	  amt = bfd_get_symcount (abfd) * sizeof (Elf_External_Sym_Shndx);
--	  symtab_shndx_hdr->sh_size = amt;
-+	  symtab_shndx_hdr = & elf_symtab_shndx_list (abfd)->hdr;
- 
--	  off = _bfd_elf_assign_file_position_for_section (symtab_shndx_hdr,
--							   off, TRUE);
-+	  if (symtab_shndx_hdr != NULL && symtab_shndx_hdr->sh_name != 0)
-+	    {
-+	      symtab_shndx_hdr->sh_type = SHT_SYMTAB_SHNDX;
-+	      symtab_shndx_hdr->sh_entsize = sizeof (Elf_External_Sym_Shndx);
-+	      symtab_shndx_hdr->sh_addralign = sizeof (Elf_External_Sym_Shndx);
-+	      amt = bfd_get_symcount (abfd) * sizeof (Elf_External_Sym_Shndx);
-+	      symtab_shndx_hdr->sh_size = amt;
- 
--	  if (bfd_seek (abfd, symtab_shndx_hdr->sh_offset, SEEK_SET) != 0
--	      || (bfd_bwrite (flinfo.symshndxbuf, amt, abfd) != amt))
--	    return FALSE;
-+	      off = _bfd_elf_assign_file_position_for_section (symtab_shndx_hdr,
-+							       off, TRUE);
-+
-+	      if (bfd_seek (abfd, symtab_shndx_hdr->sh_offset, SEEK_SET) != 0
-+		  || (bfd_bwrite (flinfo.symshndxbuf, amt, abfd) != amt))
-+		return FALSE;
-+	    }
- 	}
- 
-       symstrtab_hdr = &elf_tdata (abfd)->strtab_hdr;
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,3 +1,11 @@
-+2017-04-04  Nick Clifton  <nickc@redhat.com>
-+
-+       PR binutils/21342
-+       * elflink.c (_bfd_elf_define_linkage_sym): Prevent null pointer
-+       dereference.
-+       (bfd_elf_final_link): Only initialize the extended symbol index
-+       section if there are extended symbol tables to list.
-+
- 2017-03-07  Alan Modra  <amodra@gmail.com>
- 
- 	PR 21224
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8393.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8393.patch
deleted file mode 100644
index 8500a03..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8393.patch
+++ /dev/null
@@ -1,205 +0,0 @@
-From bce964aa6c777d236fbd641f2bc7bb931cfe4bf3 Mon Sep 17 00:00:00 2001
-From: Alan Modra <amodra@gmail.com>
-Date: Sun, 23 Apr 2017 11:03:34 +0930
-Subject: [PATCH] PR 21412, get_reloc_section assumes .rel/.rela name for
- SHT_REL/RELA.
-
-This patch fixes an assumption made by code that runs for objcopy and
-strip, that SHT_REL/SHR_RELA sections are always named starting with a
-.rel/.rela prefix.  I'm also modifying the interface for
-elf_backend_get_reloc_section, so any backend function just needs to
-handle name mapping.
-
-	PR 21412
-	* elf-bfd.h (struct elf_backend_data <get_reloc_section>): Change
-	parameters and comment.
-	(_bfd_elf_get_reloc_section): Delete.
-	(_bfd_elf_plt_get_reloc_section): Declare.
-	* elf.c (_bfd_elf_plt_get_reloc_section, elf_get_reloc_section):
-	New functions.  Don't blindly skip over assumed .rel/.rela prefix.
-	Extracted from..
-	(_bfd_elf_get_reloc_section): ..here.  Delete.
-	(assign_section_numbers): Call elf_get_reloc_section.
-	* elf64-ppc.c (elf_backend_get_reloc_section): Define.
-	* elfxx-target.h (elf_backend_get_reloc_section): Update.
-
-Upstream-Status: Backport
-CVE: CVE-2017-8393
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog      | 15 ++++++++++++++
- bfd/elf-bfd.h      |  8 ++++---
- bfd/elf.c          | 61 +++++++++++++++++++++++++++++++-----------------------
- bfd/elf64-ppc.c    |  1 +
- bfd/elfxx-target.h |  2 +-
- 5 files changed, 57 insertions(+), 30 deletions(-)
-
-Index: git/bfd/elf-bfd.h
-===================================================================
---- git.orig/bfd/elf-bfd.h
-+++ git/bfd/elf-bfd.h
-@@ -1322,8 +1322,10 @@ struct elf_backend_data
-   bfd_size_type (*maybe_function_sym) (const asymbol *sym, asection *sec,
- 				       bfd_vma *code_off);
- 
--  /* Return the section which RELOC_SEC applies to.  */
--  asection *(*get_reloc_section) (asection *reloc_sec);
-+  /* Given NAME, the name of a relocation section stripped of its
-+     .rel/.rela prefix, return the section in ABFD to which the
-+     relocations apply.  */
-+  asection *(*get_reloc_section) (bfd *abfd, const char *name);
- 
-   /* Called to set the sh_flags, sh_link and sh_info fields of OSECTION which
-      has a type >= SHT_LOOS.  Returns TRUE if the fields were initialised,
-@@ -2392,7 +2394,7 @@ extern bfd_boolean _bfd_elf_is_function_
- extern bfd_size_type _bfd_elf_maybe_function_sym (const asymbol *, asection *,
- 						  bfd_vma *);
- 
--extern asection *_bfd_elf_get_reloc_section (asection *);
-+extern asection *_bfd_elf_plt_get_reloc_section (bfd *, const char *);
- 
- extern int bfd_elf_get_default_section_type (flagword);
- 
-Index: git/bfd/elf.c
-===================================================================
---- git.orig/bfd/elf.c
-+++ git/bfd/elf.c
-@@ -3532,17 +3532,39 @@ bfd_elf_set_group_contents (bfd *abfd, a
-   H_PUT_32 (abfd, sec->flags & SEC_LINK_ONCE ? GRP_COMDAT : 0, loc);
- }
- 
--/* Return the section which RELOC_SEC applies to.  */
-+/* Given NAME, the name of a relocation section stripped of its
-+   .rel/.rela prefix, return the section in ABFD to which the
-+   relocations apply.  */
- 
- asection *
--_bfd_elf_get_reloc_section (asection *reloc_sec)
-+_bfd_elf_plt_get_reloc_section (bfd *abfd, const char *name)
-+{
-+  /* If a target needs .got.plt section, relocations in rela.plt/rel.plt
-+     section likely apply to .got.plt or .got section.  */
-+  if (get_elf_backend_data (abfd)->want_got_plt
-+      && strcmp (name, ".plt") == 0)
-+    {
-+      asection *sec;
-+
-+      name = ".got.plt";
-+      sec = bfd_get_section_by_name (abfd, name);
-+      if (sec != NULL)
-+	return sec;
-+      name = ".got";
-+    }
-+
-+  return bfd_get_section_by_name (abfd, name);
-+}
-+
-+/* Return the section to which RELOC_SEC applies.  */
-+
-+static asection *
-+elf_get_reloc_section (asection *reloc_sec)
- {
-   const char *name;
-   unsigned int type;
-   bfd *abfd;
--
--  if (reloc_sec == NULL)
--    return NULL;
-+  const struct elf_backend_data *bed;
- 
-   type = elf_section_data (reloc_sec)->this_hdr.sh_type;
-   if (type != SHT_REL && type != SHT_RELA)
-@@ -3550,28 +3572,15 @@ _bfd_elf_get_reloc_section (asection *re
- 
-   /* We look up the section the relocs apply to by name.  */
-   name = reloc_sec->name;
--  if (type == SHT_REL)
--    name += 4;
--  else
--    name += 5;
-+  if (strncmp (name, ".rel", 4) != 0)
-+    return NULL;
-+  name += 4;
-+  if (type == SHT_RELA && *name++ != 'a')
-+    return NULL;
- 
--  /* If a target needs .got.plt section, relocations in rela.plt/rel.plt
--     section apply to .got.plt section.  */
-   abfd = reloc_sec->owner;
--  if (get_elf_backend_data (abfd)->want_got_plt
--      && strcmp (name, ".plt") == 0)
--    {
--      /* .got.plt is a linker created input section.  It may be mapped
--	 to some other output section.  Try two likely sections.  */
--      name = ".got.plt";
--      reloc_sec = bfd_get_section_by_name (abfd, name);
--      if (reloc_sec != NULL)
--	return reloc_sec;
--      name = ".got";
--    }
--
--  reloc_sec = bfd_get_section_by_name (abfd, name);
--  return reloc_sec;
-+  bed = get_elf_backend_data (abfd);
-+  return bed->get_reloc_section (abfd, name);
- }
- 
- /* Assign all ELF section numbers.  The dummy first section is handled here
-@@ -3833,7 +3842,7 @@ assign_section_numbers (bfd *abfd, struc
- 	  if (s != NULL)
- 	    d->this_hdr.sh_link = elf_section_data (s)->this_idx;
- 
--	  s = get_elf_backend_data (abfd)->get_reloc_section (sec);
-+	  s = elf_get_reloc_section (sec);
- 	  if (s != NULL)
- 	    {
- 	      d->this_hdr.sh_info = elf_section_data (s)->this_idx;
-Index: git/bfd/elf64-ppc.c
-===================================================================
---- git.orig/bfd/elf64-ppc.c
-+++ git/bfd/elf64-ppc.c
-@@ -121,6 +121,7 @@ static bfd_vma opd_entry_value
- #define elf_backend_special_sections	      ppc64_elf_special_sections
- #define elf_backend_merge_symbol_attribute    ppc64_elf_merge_symbol_attribute
- #define elf_backend_merge_symbol	      ppc64_elf_merge_symbol
-+#define elf_backend_get_reloc_section	      bfd_get_section_by_name
- 
- /* The name of the dynamic interpreter.  This is put in the .interp
-    section.  */
-Index: git/bfd/elfxx-target.h
-===================================================================
---- git.orig/bfd/elfxx-target.h
-+++ git/bfd/elfxx-target.h
-@@ -706,7 +706,7 @@
- #endif
- 
- #ifndef elf_backend_get_reloc_section
--#define elf_backend_get_reloc_section _bfd_elf_get_reloc_section
-+#define elf_backend_get_reloc_section _bfd_elf_plt_get_reloc_section
- #endif
- 
- #ifndef elf_backend_copy_special_section_fields
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,3 +1,18 @@
-+2017-04-23  Alan Modra  <amodra@gmail.com>
-+ 
-+       PR 21412
-+       * elf-bfd.h (struct elf_backend_data <get_reloc_section>): Change
-+       parameters and comment.
-+       (_bfd_elf_get_reloc_section): Delete.
-+       (_bfd_elf_plt_get_reloc_section): Declare.
-+       * elf.c (_bfd_elf_plt_get_reloc_section, elf_get_reloc_section):
-+       New functions.  Don't blindly skip over assumed .rel/.rela prefix.
-+       Extracted from..
-+       (_bfd_elf_get_reloc_section): ..here.  Delete.
-+       (assign_section_numbers): Call elf_get_reloc_section.
-+       * elf64-ppc.c (elf_backend_get_reloc_section): Define.
-+       * elfxx-target.h (elf_backend_get_reloc_section): Update.
-+
- 2017-04-04  Nick Clifton  <nickc@redhat.com>
- 
-        PR binutils/21342
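
The PR 21412 fix deleted above stops objcopy/strip from blindly skipping an assumed ".rel"/".rela" prefix and instead validates it before using the remainder of the name. A rough standalone sketch of that prefix-validation pattern, in plain C with made-up names rather than the real BFD interfaces:

#include <stdio.h>
#include <string.h>

/* Return the target-section name for a reloc-section name, or NULL if
   the name does not carry the expected ".rel"/".rela" prefix (as a
   corrupt object might).  */
static const char *
strip_reloc_prefix (const char *name, int is_rela)
{
  if (strncmp (name, ".rel", 4) != 0)
    return NULL;
  name += 4;
  if (is_rela && *name++ != 'a')
    return NULL;
  return name;
}

int
main (void)
{
  printf ("%s\n", strip_reloc_prefix (".rela.text", 1));      /* .text */
  printf ("%p\n", (void *) strip_reloc_prefix (".data", 0));  /* NULL  */
  return 0;
}
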
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8394.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8394.patch
deleted file mode 100644
index e6c6b17..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8394.patch
+++ /dev/null
@@ -1,118 +0,0 @@
-From 7eacd66b086cabb1daab20890d5481894d4f56b2 Mon Sep 17 00:00:00 2001
-From: Alan Modra <amodra@gmail.com>
-Date: Sun, 23 Apr 2017 15:21:11 +0930
-Subject: [PATCH] PR 21414, null pointer deref of _bfd_elf_large_com_section
- sym
-
-	PR 21414
-	* section.c (GLOBAL_SYM_INIT): Make available in bfd.h.
-	* elf.c (lcomm_sym): New.
-	(_bfd_elf_large_com_section): Use lcomm_sym section symbol.
-	* bfd-in2.h: Regenerate.
-
-Upstream-Status: Backport
-CVE: CVE-2017-8394
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog |  8 ++++++++
- bfd/bfd-in2.h | 12 ++++++++++++
- bfd/elf.c     |  6 ++++--
- bfd/section.c | 24 ++++++++++++------------
- 4 files changed, 36 insertions(+), 14 deletions(-)
-
-Index: git/bfd/bfd-in2.h
-===================================================================
---- git.orig/bfd/bfd-in2.h
-+++ git/bfd/bfd-in2.h
-@@ -1838,6 +1838,18 @@ extern asection _bfd_std_section[4];
-      { NULL }, { NULL }                                                \
-     }
- 
-+/* We use a macro to initialize the static asymbol structures because
-+   traditional C does not permit us to initialize a union member while
-+   gcc warns if we don't initialize it.
-+   the_bfd, name, value, attr, section [, udata]  */
-+#ifdef __STDC__
-+#define GLOBAL_SYM_INIT(NAME, SECTION) \
-+  { 0, NAME, 0, BSF_SECTION_SYM, SECTION, { 0 }}
-+#else
-+#define GLOBAL_SYM_INIT(NAME, SECTION) \
-+  { 0, NAME, 0, BSF_SECTION_SYM, SECTION }
-+#endif
-+
- void bfd_section_list_clear (bfd *);
- 
- asection *bfd_get_section_by_name (bfd *abfd, const char *name);
-Index: git/bfd/elf.c
-===================================================================
---- git.orig/bfd/elf.c
-+++ git/bfd/elf.c
-@@ -11164,9 +11164,11 @@ _bfd_elf_get_synthetic_symtab (bfd *abfd
- 
- /* It is only used by x86-64 so far.
-    ??? This repeats *COM* id of zero.  sec->id is supposed to be unique,
--   but current usage would allow all of _bfd_std_section to be zero.  t*/
-+   but current usage would allow all of _bfd_std_section to be zero.  */
-+static const asymbol lcomm_sym
-+  = GLOBAL_SYM_INIT ("LARGE_COMMON", &_bfd_elf_large_com_section);
- asection _bfd_elf_large_com_section
--  = BFD_FAKE_SECTION (_bfd_elf_large_com_section, NULL,
-+  = BFD_FAKE_SECTION (_bfd_elf_large_com_section, &lcomm_sym,
- 		      "LARGE_COMMON", 0, SEC_IS_COMMON);
- 
- void
-Index: git/bfd/section.c
-===================================================================
---- git.orig/bfd/section.c
-+++ git/bfd/section.c
-@@ -738,20 +738,20 @@ CODE_FRAGMENT
- .     { NULL }, { NULL }						\
- .    }
- .
-+.{* We use a macro to initialize the static asymbol structures because
-+.   traditional C does not permit us to initialize a union member while
-+.   gcc warns if we don't initialize it.
-+.   the_bfd, name, value, attr, section [, udata]  *}
-+.#ifdef __STDC__
-+.#define GLOBAL_SYM_INIT(NAME, SECTION) \
-+.  { 0, NAME, 0, BSF_SECTION_SYM, SECTION, { 0 }}
-+.#else
-+.#define GLOBAL_SYM_INIT(NAME, SECTION) \
-+.  { 0, NAME, 0, BSF_SECTION_SYM, SECTION }
-+.#endif
-+.
- */
- 
--/* We use a macro to initialize the static asymbol structures because
--   traditional C does not permit us to initialize a union member while
--   gcc warns if we don't initialize it.  */
-- /* the_bfd, name, value, attr, section [, udata] */
--#ifdef __STDC__
--#define GLOBAL_SYM_INIT(NAME, SECTION) \
--  { 0, NAME, 0, BSF_SECTION_SYM, SECTION, { 0 }}
--#else
--#define GLOBAL_SYM_INIT(NAME, SECTION) \
--  { 0, NAME, 0, BSF_SECTION_SYM, SECTION }
--#endif
--
- /* These symbols are global, not specific to any BFD.  Therefore, anything
-    that tries to change them is broken, and should be repaired.  */
- 
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,4 +1,12 @@
-+
- 2017-04-23  Alan Modra  <amodra@gmail.com>
-+       PR 21414
-+       * section.c (GLOBAL_SYM_INIT): Make available in bfd.h.
-+       * elf.c (lcomm_sym): New.
-+       (_bfd_elf_large_com_section): Use lcomm_sym section symbol.
-+       * bfd-in2.h: Regenerate.
-+
-++2017-04-23  Alan Modra  <amodra@gmail.com>
-  
-        PR 21412
-        * elf-bfd.h (struct elf_backend_data <get_reloc_section>): Change
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8395.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8395.patch
deleted file mode 100644
index 0a9bce3..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8395.patch
+++ /dev/null
@@ -1,72 +0,0 @@
-From e63d123268f23a4cbc45ee55fb6dbc7d84729da3 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Wed, 26 Apr 2017 13:07:49 +0100
-Subject: [PATCH] Fix seg-fault attempting to compress a debug section in a
- corrupt binary.
-
-	PR binutils/21431
-	* compress.c (bfd_init_section_compress_status): Check the return
-	value from bfd_malloc.
-
-Upstream-Status: Backport
-CVE: CVE-2017-8395
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog  |  6 ++++++
- bfd/compress.c | 19 +++++++++----------
- 2 files changed, 15 insertions(+), 10 deletions(-)
-
-Index: git/bfd/compress.c
-===================================================================
---- git.orig/bfd/compress.c
-+++ git/bfd/compress.c
-@@ -542,7 +542,6 @@ bfd_init_section_compress_status (bfd *a
- {
-   bfd_size_type uncompressed_size;
-   bfd_byte *uncompressed_buffer;
--  bfd_boolean ret;
- 
-   /* Error if not opened for read.  */
-   if (abfd->direction != read_direction
-@@ -558,18 +557,18 @@ bfd_init_section_compress_status (bfd *a
-   /* Read in the full section contents and compress it.  */
-   uncompressed_size = sec->size;
-   uncompressed_buffer = (bfd_byte *) bfd_malloc (uncompressed_size);
-+  /* PR 21431 */
-+  if (uncompressed_buffer == NULL)
-+    return FALSE;
-+
-   if (!bfd_get_section_contents (abfd, sec, uncompressed_buffer,
- 				 0, uncompressed_size))
--    ret = FALSE;
--  else
--    {
--      uncompressed_size = bfd_compress_section_contents (abfd, sec,
--							 uncompressed_buffer,
--							 uncompressed_size);
--      ret = uncompressed_size != 0;
--    }
-+    return FALSE;
- 
--  return ret;
-+  uncompressed_size = bfd_compress_section_contents (abfd, sec,
-+						     uncompressed_buffer,
-+						     uncompressed_size);
-+  return uncompressed_size != 0;
- }
- 
- /*
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,3 +1,8 @@
-+2017-04-26  Nick Clifton  <nickc@redhat.com>
-+
-+       PR binutils/21431
-+       * compress.c (bfd_init_section_compress_status): Check the return
-+       value from bfd_malloc.
- 
- 2017-04-23  Alan Modra  <amodra@gmail.com>
-        PR 21414
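
The PR 21431 fix deleted above boils down to checking the allocation before touching the buffer and returning early on failure. A simplified sketch of the same pattern using only the standard library (illustrative names, not the BFD API):

#include <stdlib.h>
#include <string.h>

/* Copy SIZE bytes into a fresh buffer and process them, failing cleanly
   when a (possibly corrupt, enormous) SIZE makes the allocation fail
   instead of dereferencing a NULL pointer.  */
static int
process_copy (const unsigned char *src, size_t size)
{
  unsigned char *buf = malloc (size);

  if (buf == NULL)
    return 0;

  memcpy (buf, src, size);
  /* ... inspect or compress buf here ... */
  free (buf);
  return 1;
}

int
main (void)
{
  static const unsigned char blob[] = { 1, 2, 3, 4 };

  return process_copy (blob, sizeof blob) ? 0 : 1;
}
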
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8396_8397.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8396_8397.patch
deleted file mode 100644
index 14f4282..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8396_8397.patch
+++ /dev/null
@@ -1,102 +0,0 @@
-From a941291cab71b9ac356e1c03968c177c03e602ab Mon Sep 17 00:00:00 2001
-From: Alan Modra <amodra@gmail.com>
-Date: Sat, 29 Apr 2017 14:48:16 +0930
-Subject: [PATCH] PR21432, buffer overflow in perform_relocation
-
-The existing reloc offset range tests didn't catch small negative
-offsets less than the size of the reloc field.
-
-	PR 21432
-	* reloc.c (reloc_offset_in_range): New function.
-	(bfd_perform_relocation, bfd_install_relocation): Use it.
-	(_bfd_final_link_relocate): Likewise.
-
-Upstream-Status: Backport
-CVE: CVE-2017-8396
-CVE: CVE-2017-8397
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog |  7 +++++++
- bfd/reloc.c   | 32 ++++++++++++++++++++------------
- 2 files changed, 27 insertions(+), 12 deletions(-)
-
-Index: git/bfd/reloc.c
-===================================================================
---- git.orig/bfd/reloc.c
-+++ git/bfd/reloc.c
-@@ -538,6 +538,22 @@ bfd_check_overflow (enum complain_overfl
-   return flag;
- }
- 
-+/* HOWTO describes a relocation, at offset OCTET.  Return whether the
-+   relocation field is within SECTION of ABFD.  */
-+
-+static bfd_boolean
-+reloc_offset_in_range (reloc_howto_type *howto, bfd *abfd,
-+		       asection *section, bfd_size_type octet)
-+{
-+  bfd_size_type octet_end = bfd_get_section_limit_octets (abfd, section);
-+  bfd_size_type reloc_size = bfd_get_reloc_size (howto);
-+
-+  /* The reloc field must be contained entirely within the section.
-+     Allow zero length fields (marker relocs or NONE relocs where no
-+     relocation will be performed) at the end of the section.  */
-+  return octet <= octet_end && octet + reloc_size <= octet_end;
-+}
-+
- /*
- FUNCTION
- 	bfd_perform_relocation
-@@ -618,13 +634,10 @@ bfd_perform_relocation (bfd *abfd,
-   /* PR 17512: file: 0f67f69d.  */
-   if (howto == NULL)
-     return bfd_reloc_undefined;
--
--  /* Is the address of the relocation really within the section?
--     Include the size of the reloc in the test for out of range addresses.
--     PR 17512: file: c146ab8b, 46dff27f, 38e53ebf.  */
-+  
-+  /* Is the address of the relocation really within the section?  */
-   octets = reloc_entry->address * bfd_octets_per_byte (abfd);
--  if (octets + bfd_get_reloc_size (howto)
--      > bfd_get_section_limit_octets (abfd, input_section))
-+  if (!reloc_offset_in_range (howto, abfd, input_section, octets))
-     return bfd_reloc_outofrange;
- 
-   /* Work out which section the relocation is targeted at and the
-@@ -1012,8 +1025,7 @@ bfd_install_relocation (bfd *abfd,
- 
-   /* Is the address of the relocation really within the section?  */
-   octets = reloc_entry->address * bfd_octets_per_byte (abfd);
--  if (octets + bfd_get_reloc_size (howto)
--      > bfd_get_section_limit_octets (abfd, input_section))
-+  if (!reloc_offset_in_range (howto, abfd, input_section, octets))
-     return bfd_reloc_outofrange;
- 
-   /* Work out which section the relocation is targeted at and the
-@@ -1351,8 +1363,7 @@ _bfd_final_link_relocate (reloc_howto_ty
-   bfd_size_type octets = address * bfd_octets_per_byte (input_bfd);
- 
-   /* Sanity check the address.  */
--  if (octets + bfd_get_reloc_size (howto)
--      > bfd_get_section_limit_octets (input_bfd, input_section))
-+  if (!reloc_offset_in_range (howto, input_bfd, input_section, octets))
-     return bfd_reloc_outofrange;
- 
-   /* This function assumes that we are dealing with a basic relocation
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,3 +1,10 @@
-+2017-04-29  Alan Modra  <amodra@gmail.com>
-+
-+       PR 21432
-+       * reloc.c (reloc_offset_in_range): New function.
-+       (bfd_perform_relocation, bfd_install_relocation): Use it.
-+       (_bfd_final_link_relocate): Likewise.
-+
- 2017-04-26  Nick Clifton  <nickc@redhat.com>
- 
-        PR binutils/21431
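
The PR 21432 fix deleted above tightens the reloc range test so that bogus, effectively negative offsets cannot wrap around the end-of-section check. A minimal sketch of that overflow-safe bounds test, with invented names:

#include <stdio.h>
#include <stdint.h>

/* Is the field [offset, offset + field_size) contained in a buffer of
   LIMIT bytes?  The naive "offset + field_size > limit" test can wrap
   for huge (effectively negative) offsets; checking both parts against
   LIMIT avoids that.  */
static int
field_in_range (uint64_t offset, uint64_t field_size, uint64_t limit)
{
  return offset <= limit && offset + field_size <= limit;
}

int
main (void)
{
  /* A corrupt reloc whose offset of -2 was stored as unsigned.  */
  uint64_t bad = (uint64_t) -2;

  printf ("%d\n", field_in_range (bad, 4, 0x1000));     /* 0: rejected */
  printf ("%d\n", field_in_range (0x0ffc, 4, 0x1000));  /* 1: accepted */
  return 0;
}
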
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8398.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8398.patch
deleted file mode 100644
index 5b9acc8..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8398.patch
+++ /dev/null
@@ -1,147 +0,0 @@
-From d949ff5607b9f595e0eed2ff15fbe5eb84eb3a34 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Fri, 28 Apr 2017 10:28:04 +0100
-Subject: [PATCH] Fix heap-buffer overflow bugs caused when dumping debug
- information from a corrupt binary.
-
-	PR binutils/21438
-	* dwarf.c (process_extended_line_op): Do not assume that the
-	string extracted from the section is NUL terminated.
-	(fetch_indirect_string): If the string retrieved from the section
-	is not NUL terminated, return an error message.
-	(fetch_indirect_line_string): Likewise.
-	(fetch_indexed_string): Likewise.
-
-Upstream-Status: Backport
-CVE: CVE-2017-8398
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- binutils/ChangeLog | 10 +++++++++
- binutils/dwarf.c   | 66 +++++++++++++++++++++++++++++++++++++++++-------------
- 2 files changed, 60 insertions(+), 16 deletions(-)
-
-Index: git/binutils/dwarf.c
-===================================================================
---- git.orig/binutils/dwarf.c
-+++ git/binutils/dwarf.c
-@@ -472,15 +472,20 @@ process_extended_line_op (unsigned char
-       printf (_("  Entry\tDir\tTime\tSize\tName\n"));
-       printf ("   %d\t", ++state_machine_regs.last_file_entry);
- 
--      name = data;
--      data += strnlen ((char *) data, end - data) + 1;
--      printf ("%s\t", dwarf_vmatoa ("u", read_uleb128 (data, & bytes_read, end)));
--      data += bytes_read;
--      printf ("%s\t", dwarf_vmatoa ("u", read_uleb128 (data, & bytes_read, end)));
--      data += bytes_read;
--      printf ("%s\t", dwarf_vmatoa ("u", read_uleb128 (data, & bytes_read, end)));
--      data += bytes_read;
--      printf ("%s\n\n", name);
-+      {
-+	size_t l;
-+
-+	name = data;
-+	l = strnlen ((char *) data, end - data);
-+	data += len + 1;
-+	printf ("%s\t", dwarf_vmatoa ("u", read_uleb128 (data, & bytes_read, end)));
-+	data += bytes_read;
-+	printf ("%s\t", dwarf_vmatoa ("u", read_uleb128 (data, & bytes_read, end)));
-+	data += bytes_read;
-+	printf ("%s\t", dwarf_vmatoa ("u", read_uleb128 (data, & bytes_read, end)));
-+	data += bytes_read;
-+	printf ("%.*s\n\n", (int) l, name);
-+      }
- 
-       if (((unsigned int) (data - orig_data) != len) || data == end)
- 	warn (_("DW_LNE_define_file: Bad opcode length\n"));
-@@ -597,18 +602,27 @@ static const unsigned char *
- fetch_indirect_string (dwarf_vma offset)
- {
-   struct dwarf_section *section = &debug_displays [str].section;
-+  const unsigned char * ret;
- 
-   if (section->start == NULL)
-     return (const unsigned char *) _("<no .debug_str section>");
- 
--  if (offset > section->size)
-+  if (offset >= section->size)
-     {
-       warn (_("DW_FORM_strp offset too big: %s\n"),
- 	    dwarf_vmatoa ("x", offset));
-       return (const unsigned char *) _("<offset is too big>");
-     }
-+  ret = section->start + offset;
-+  /* Unfortunately we cannot rely upon the .debug_str section ending with a
-+     NUL byte.  Since our caller is expecting to receive a well formed C
-+     string we test for the lack of a terminating byte here.  */
-+  if (strnlen ((const char *) ret, section->size - offset)
-+      == section->size - offset)
-+    ret = (const unsigned char *)
-+      _("<no NUL byte at end of .debug_str section>");
- 
--  return (const unsigned char *) section->start + offset;
-+  return ret; 
- }
- 
- static const char *
-@@ -621,6 +635,7 @@ fetch_indexed_string (dwarf_vma idx, str
-   struct dwarf_section *str_section = &debug_displays [str_sec_idx].section;
-   dwarf_vma index_offset = idx * offset_size;
-   dwarf_vma str_offset;
-+  const char * ret;
- 
-   if (index_section->start == NULL)
-     return (dwo ? _("<no .debug_str_offsets.dwo section>")
-@@ -628,7 +643,7 @@ fetch_indexed_string (dwarf_vma idx, str
- 
-   if (this_set != NULL)
-     index_offset += this_set->section_offsets [DW_SECT_STR_OFFSETS];
--  if (index_offset > index_section->size)
-+  if (index_offset >= index_section->size)
-     {
-       warn (_("DW_FORM_GNU_str_index offset too big: %s\n"),
- 	    dwarf_vmatoa ("x", index_offset));
-@@ -641,14 +656,22 @@ fetch_indexed_string (dwarf_vma idx, str
- 
-   str_offset = byte_get (index_section->start + index_offset, offset_size);
-   str_offset -= str_section->address;
--  if (str_offset > str_section->size)
-+  if (str_offset >= str_section->size)
-     {
-       warn (_("DW_FORM_GNU_str_index indirect offset too big: %s\n"),
- 	    dwarf_vmatoa ("x", str_offset));
-       return _("<indirect index offset is too big>");
-     }
- 
--  return (const char *) str_section->start + str_offset;
-+  ret = (const char *) str_section->start + str_offset;
-+  /* Unfortunately we cannot rely upon str_section ending with a NUL byte.
-+     Since our caller is expecting to receive a well formed C string we test
-+     for the lack of a terminating byte here.  */
-+  if (strnlen (ret, str_section->size - str_offset)
-+      == str_section->size - str_offset)
-+    ret = (const char *) _("<no NUL byte at end of section>");
-+
-+  return ret;
- }
- 
- static const char *
-Index: git/binutils/ChangeLog
-===================================================================
---- git.orig/binutils/ChangeLog
-+++ git/binutils/ChangeLog
-@@ -1,3 +1,13 @@
-+2017-04-28  Nick Clifton  <nickc@redhat.com>
-+
-+       PR binutils/21438
-+       * dwarf.c (process_extended_line_op): Do not assume that the
-+       string extracted from the section is NUL terminated.
-+       (fetch_indirect_string): If the string retrieved from the section
-+       is not NUL terminated, return an error message.
-+       (fetch_indirect_line_string): Likewise.
-+       (fetch_indexed_string): Likewise.
-+
- 2017-02-14  Nick Clifton  <nickc@redhat.com>
- 
- 	PR binutils/21157
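
The PR 21438 fix deleted above never trusts a string pulled out of a section to be NUL-terminated. A small standalone sketch of the strnlen guard it relies on (hypothetical names, not the actual dwarf.c code):

#include <stdio.h>
#include <string.h>

/* Fetch a string that starts at OFFSET inside a SIZE-byte section.
   A corrupt file may omit the terminating NUL, so verify one exists
   before handing the pointer to printf()/strcmp()-style consumers.  */
static const char *
fetch_string (const char *section, size_t size, size_t offset)
{
  if (offset >= size)
    return "<offset is too big>";
  if (strnlen (section + offset, size - offset) == size - offset)
    return "<no NUL byte at end of section>";
  return section + offset;
}

int
main (void)
{
  char section[4] = { 'a', 'b', 'c', 'd' };   /* no terminator */

  puts (fetch_string (section, sizeof section, 1));
  return 0;
}
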
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8421.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8421.patch
deleted file mode 100644
index 7969c66..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-8421.patch
+++ /dev/null
@@ -1,52 +0,0 @@
-From 39ff1b79f687b65f4144ddb379f22587003443fb Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Tue, 2 May 2017 11:54:53 +0100
-Subject: [PATCH] Prevent memory exhaustion from a corrupt PE binary with an
- overlarge number of relocs.
-
-	PR 21440
-	* objdump.c (dump_relocs_in_section): Check for an excessive
-	number of relocs before attempting to dump them.
-
-Upstream-Status: Backport
-CVE: CVE-2017-8421
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- binutils/ChangeLog | 6 ++++++
- binutils/objdump.c | 8 ++++++++
- 2 files changed, 14 insertions(+)
-
-Index: git/binutils/objdump.c
-===================================================================
---- git.orig/binutils/objdump.c
-+++ git/binutils/objdump.c
-@@ -3311,6 +3311,14 @@ dump_relocs_in_section (bfd *abfd,
-       return;
-     }
- 
-+  if ((bfd_get_file_flags (abfd) & (BFD_IN_MEMORY | BFD_LINKER_CREATED)) == 0
-+      && relsize > get_file_size (bfd_get_filename (abfd)))
-+    {
-+      printf (" (too many: 0x%x)\n", section->reloc_count);
-+      bfd_set_error (bfd_error_file_truncated);
-+      bfd_fatal (bfd_get_filename (abfd));
-+    }
-+
-   relpp = (arelent **) xmalloc (relsize);
-   relcount = bfd_canonicalize_reloc (abfd, section, relpp, syms);
- 
-Index: git/binutils/ChangeLog
-===================================================================
---- git.orig/binutils/ChangeLog
-+++ git/binutils/ChangeLog
-@@ -1,3 +1,9 @@
-+2017-05-02  Nick Clifton  <nickc@redhat.com>
-+
-+       PR 21440
-+       * objdump.c (dump_relocs_in_section): Check for an excessive
-+       number of relocs before attempting to dump them.
-+
- 2017-04-28  Nick Clifton  <nickc@redhat.com>
- 
-        PR binutils/21438
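
The PR 21440 fix deleted above rejects a reloc table whose claimed size exceeds the file it came from before trying to allocate it. A compact sketch of that plausibility check, with illustrative names:

#include <stdint.h>
#include <stdio.h>

/* A table of COUNT entries of ENTRY_SIZE bytes each cannot be larger
   than the file it was read from; reject it before allocating.
   Dividing rather than multiplying also sidesteps overflow in
   count * entry_size.  */
static int
table_size_plausible (uint64_t count, uint64_t entry_size,
                      uint64_t file_size)
{
  return entry_size == 0 || count <= file_size / entry_size;
}

int
main (void)
{
  printf ("%d\n", table_size_plausible (0x20000000, 16, 4096)); /* 0 */
  printf ("%d\n", table_size_plausible (8, 16, 4096));          /* 1 */
  return 0;
}
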
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9038_9044.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9038_9044.patch
deleted file mode 100644
index 535efc3..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9038_9044.patch
+++ /dev/null
@@ -1,51 +0,0 @@
-From f32ba72991d2406b21ab17edc234a2f3fa7fb23d Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Mon, 3 Apr 2017 11:01:45 +0100
-Subject: [PATCH] readelf: Update check for invalid word offsets in ARM unwind
- information.
-
-	PR binutils/21343
-	* readelf.c (get_unwind_section_word): Fix snafu checking for
-	invalid word offsets in ARM unwind information.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9038
-CVE: CVE-2017-9044
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- binutils/ChangeLog | 6 ++++++
- binutils/readelf.c | 6 +++---
- 2 files changed, 9 insertions(+), 3 deletions(-)
-
-Index: git/binutils/readelf.c
-===================================================================
---- git.orig/binutils/readelf.c
-+++ git/binutils/readelf.c
-@@ -7972,9 +7972,9 @@ get_unwind_section_word (struct arm_unw_
-     return FALSE;
- 
-   /* If the offset is invalid then fail.  */
--  if (word_offset > (sec->sh_size - 4)
--      /* PR 18879 */
--      || (sec->sh_size < 5 && word_offset >= sec->sh_size)
-+  if (/* PR 21343 *//* PR 18879 */
-+      sec->sh_size < 4
-+      || word_offset > (sec->sh_size - 4)
-       || ((bfd_signed_vma) word_offset) < 0)
-     return FALSE;
- 
-Index: git/binutils/ChangeLog
-===================================================================
---- git.orig/binutils/ChangeLog
-+++ git/binutils/ChangeLog
-@@ -1,3 +1,9 @@
-+2017-04-03  Nick Clifton  <nickc@redhat.com>
-+
-+       PR binutils/21343
-+       * readelf.c (get_unwind_section_word): Fix snafu checking for
-+       invalid word offsets in ARM unwind information.
-+
- 2017-05-02  Nick Clifton  <nickc@redhat.com>
- 
-        PR 21440
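
The PR 21343 fix deleted above reorders the unwind-word test so that sh_size - 4 is never computed for a section smaller than four bytes, where the unsigned subtraction would wrap. A minimal sketch of that ordering, assuming 64-bit unsigned sizes:

#include <stdio.h>
#include <stdint.h>

/* Is a 4-byte word at WORD_OFFSET inside a section of SIZE bytes?
   "offset > size - 4" underflows when SIZE < 4 (both operands are
   unsigned), so test the size first.  */
static int
word_offset_valid (uint64_t word_offset, uint64_t size)
{
  if (size < 4)
    return 0;
  return word_offset <= size - 4;
}

int
main (void)
{
  printf ("%d\n", word_offset_valid (0, 2));    /* 0: section too small  */
  printf ("%d\n", word_offset_valid (4, 16));   /* 1: word at bytes 4..7 */
  return 0;
}
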
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9039.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9039.patch
deleted file mode 100644
index aed8f7f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9039.patch
+++ /dev/null
@@ -1,61 +0,0 @@
-From 82156ab704b08b124d319c0decdbd48b3ca2dac5 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Mon, 3 Apr 2017 12:14:06 +0100
-Subject: [PATCH] readelf: Fix overlarge memory allocation when reading a
- binary with an excessive number of program headers.
-
-	PR binutils/21345
-	* readelf.c (get_program_headers): Check for there being too many
-	program headers before attempting to allocate space for them.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9039
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- binutils/ChangeLog |  6 ++++++
- binutils/readelf.c | 17 ++++++++++++++---
- 2 files changed, 20 insertions(+), 3 deletions(-)
-
-Index: git/binutils/readelf.c
-===================================================================
---- git.orig/binutils/readelf.c
-+++ git/binutils/readelf.c
-@@ -4765,9 +4765,19 @@ get_program_headers (FILE * file)
-   if (program_headers != NULL)
-     return 1;
- 
--  phdrs = (Elf_Internal_Phdr *) cmalloc (elf_header.e_phnum,
--                                         sizeof (Elf_Internal_Phdr));
-+  /* Be kind to memory checkers by looking for
-+     e_phnum values which we know must be invalid.  */
-+  if (elf_header.e_phnum
-+      * (is_32bit_elf ? sizeof (Elf32_External_Phdr) : sizeof (Elf64_External_Phdr))
-+      >= current_file_size)
-+    {
-+      error (_("Too many program headers - %#x - the file is not that big\n"),
-+	     elf_header.e_phnum);
-+      return FALSE;
-+    }
- 
-+  phdrs = (Elf_Internal_Phdr *) cmalloc (elf_header.e_phnum,
-+					 sizeof (Elf_Internal_Phdr));
-   if (phdrs == NULL)
-     {
-       error (_("Out of memory reading %u program headers\n"),
-Index: git/binutils/ChangeLog
-===================================================================
---- git.orig/binutils/ChangeLog
-+++ git/binutils/ChangeLog
-@@ -1,5 +1,11 @@
- 2017-04-03  Nick Clifton  <nickc@redhat.com>
- 
-+       PR binutils/21345
-+       * readelf.c (get_program_headers): Check for there being too many
-+       program headers before attempting to allocate space for them.
-+
-+2017-04-03  Nick Clifton  <nickc@redhat.com>
-+
-        PR binutils/21343
-        * readelf.c (get_unwind_section_word): Fix snafu checking for
-        invalid word offsets in ARM unwind information.
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9040_9042.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9040_9042.patch
deleted file mode 100644
index 79c6a7d..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9040_9042.patch
+++ /dev/null
@@ -1,57 +0,0 @@
-From 7296a62a2a237f6b1ad8db8c38b090e9f592c8cf Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Thu, 13 Apr 2017 16:06:30 +0100
-Subject: [PATCH] readelf: fix out of range subtraction, seg fault from a NULL
- pointer and memory exhaustion, all from parsing corrupt binaries.
-
-	PR binutils/21379
-	* readelf.c (process_dynamic_section): Detect over large section
-	offsets in the DT_SYMTAB entry.
-
-	PR binutils/21345
-	* readelf.c (process_mips_specific): Catch an unfeasible memory
-	allocation before it happens and print a suitable error message.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9040
-CVE: CVE-2017-9042
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- binutils/ChangeLog | 12 ++++++++++++
- binutils/readelf.c | 26 +++++++++++++++++++++-----
- 2 files changed, 33 insertions(+), 5 deletions(-)
-
-Index: git/binutils/readelf.c
-===================================================================
---- git.orig/binutils/readelf.c
-+++ git/binutils/readelf.c
-@@ -9306,6 +9306,12 @@ process_dynamic_section (FILE * file)
- 	     processing that.  This is overkill, I know, but it
- 	     should work.  */
- 	  section.sh_offset = offset_from_vma (file, entry->d_un.d_val, 0);
-+	  if ((bfd_size_type) section.sh_offset > current_file_size)
-+	    {
-+	      /* See PR 21379 for a reproducer.  */
-+	      error (_("Invalid DT_SYMTAB entry: %lx"), (long) section.sh_offset);
-+	      return FALSE;
-+	    }
- 
- 	  if (archive_file_offset != 0)
- 	    section.sh_size = archive_file_size - section.sh_offset;
-@@ -15175,6 +15181,15 @@ process_mips_specific (FILE * file)
- 	  return 0;
- 	}
- 
-+      /* PR 21345 - print a slightly more helpful error message
-+	 if we are sure that the cmalloc will fail.  */
-+      if (conflictsno * sizeof (* iconf) > current_file_size)
-+	{
-+	  error (_("Overlarge number of conflicts detected: %lx\n"),
-+		 (long) conflictsno);
-+	  return FALSE;
-+	}
-+
-       iconf = (Elf32_Conflict *) cmalloc (conflictsno, sizeof (* iconf));
-       if (iconf == NULL)
- 	{
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9742.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9742.patch
deleted file mode 100644
index 0c9ed0d..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9742.patch
+++ /dev/null
@@ -1,45 +0,0 @@
-From e64519d1ed7fd8f990f05a5562d5b5c0c44b7d7e Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Wed, 14 Jun 2017 17:10:28 +0100
-Subject: [PATCH] Fix seg-fault when trying to disassemble a corrupt score
- binary.
-
-	PR binutils/21576
-	* score7-dis.c (score_opcodes): Add sentinel.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9742
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- opcodes/ChangeLog    | 5 +++++
- opcodes/score7-dis.c | 3 ++-
- 2 files changed, 7 insertions(+), 1 deletion(-)
-
-Index: git/opcodes/score7-dis.c
-===================================================================
---- git.orig/opcodes/score7-dis.c
-+++ git/opcodes/score7-dis.c
-@@ -513,7 +513,8 @@ static struct score_opcode score_opcodes
-   {0x00000d05, 0x00007f0f, "tvc!"},
-   {0x00000026, 0x3e0003ff, "xor\t\t%20-24r, %15-19r, %10-14r"},
-   {0x00000027, 0x3e0003ff, "xor.c\t\t%20-24r, %15-19r, %10-14r"},
--  {0x00002007, 0x0000700f, "xor!\t\t%8-11r, %4-7r"}
-+  {0x00002007, 0x0000700f, "xor!\t\t%8-11r, %4-7r"},
-+  { 0, 0, NULL }
- };
- 
- typedef struct
-Index: git/opcodes/ChangeLog
-===================================================================
---- git.orig/opcodes/ChangeLog
-+++ git/opcodes/ChangeLog
-@@ -1,3 +1,8 @@
-+2017-06-14  Nick Clifton  <nickc@redhat.com>
-+
-+       PR binutils/21576
-+       * score7-dis.c (score_opcodes): Add sentinel.
-+
- 2017-03-07  Alan Modra  <amodra@gmail.com>
- 
- 	Apply from master
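
The PR 21576 fix deleted above terminates the score7 opcode table with a sentinel entry so a scan over it cannot run off the end. A tiny self-contained sketch of a sentinel-terminated lookup (made-up opcodes, not the real table):

#include <stdio.h>

struct opcode { unsigned value; unsigned mask; const char *name; };

/* A { 0, 0, NULL } sentinel lets the scan stop even when no entry
   matches a (possibly corrupt) instruction word.  */
static const struct opcode opcodes[] =
{
  { 0x00000026, 0x3e0003ff, "xor"  },
  { 0x00002007, 0x0000700f, "xor!" },
  { 0, 0, NULL }                        /* sentinel */
};

static const char *
lookup (unsigned insn)
{
  const struct opcode *op;

  for (op = opcodes; op->name != NULL; op++)
    if ((insn & op->mask) == op->value)
      return op->name;
  return "<unknown>";
}

int
main (void)
{
  puts (lookup (0xdeadbeef));   /* prints "<unknown>" instead of overrunning */
  return 0;
}
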
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9744.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9744.patch
deleted file mode 100644
index c34a5a6..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9744.patch
+++ /dev/null
@@ -1,46 +0,0 @@
-From f461bbd847f15657f3dd2f317c30c75a7520da1f Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Wed, 14 Jun 2017 17:01:54 +0100
-Subject: [PATCH] Fix address violation bug when disassembling a corrupt SH
- binary.
-
-	PR binutils/21578
-	* elf32-sh.c (sh_elf_set_mach_from_flags): Fix check for invalid
-	flag value.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9744
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog  | 6 ++++++
- bfd/elf32-sh.c | 2 +-
- 2 files changed, 7 insertions(+), 1 deletion(-)
-
-Index: git/bfd/elf32-sh.c
-===================================================================
---- git.orig/bfd/elf32-sh.c
-+++ git/bfd/elf32-sh.c
-@@ -6344,7 +6344,7 @@ sh_elf_set_mach_from_flags (bfd *abfd)
- {
-   flagword flags = elf_elfheader (abfd)->e_flags & EF_SH_MACH_MASK;
- 
--  if (flags >= sizeof(sh_ef_bfd_table))
-+  if (flags >= ARRAY_SIZE (sh_ef_bfd_table))
-     return FALSE;
- 
-   if (sh_ef_bfd_table[flags] == 0)
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,3 +1,9 @@
-+2017-06-14  Nick Clifton  <nickc@redhat.com>
-+ 
-+       PR binutils/21578
-+       * elf32-sh.c (sh_elf_set_mach_from_flags): Fix check for invalid
-+       flag value.
-+
- 2017-04-29  Alan Modra  <amodra@gmail.com>
- 
-        PR 21432
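
The PR 21578 fix deleted above replaces sizeof (array) with ARRAY_SIZE in a bounds check, since sizeof counts bytes rather than elements. A minimal standalone demonstration of the difference:

#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof (a) / sizeof ((a)[0]))

static const int table[] = { 10, 20, 30, 40 };

int
main (void)
{
  unsigned idx = 9;

  /* sizeof (table) is 16 bytes here, so "idx >= sizeof (table)" would
     wrongly accept idx == 9; the element count is what matters.  */
  if (idx >= ARRAY_SIZE (table))
    puts ("index rejected");
  else
    printf ("%d\n", table[idx]);
  return 0;
}
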
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9745.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9745.patch
deleted file mode 100644
index 0b3885b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9745.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-From 76800cba595efc3fe95a446c2d664e42ae4ee869 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Thu, 15 Jun 2017 12:08:57 +0100
-Subject: [PATCH] Handle EITR records in VMS Alpha binaries with overlarge
- command length parameters.
-
-	PR binutils/21579
-	* vms-alpha.c (_bfd_vms_slurp_etir): Extend check of cmd_length.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9745
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog   |  5 +++++
- bfd/vms-alpha.c | 16 ++++++++--------
- 2 files changed, 13 insertions(+), 8 deletions(-)
-
-Index: git/bfd/vms-alpha.c
-===================================================================
---- git.orig/bfd/vms-alpha.c
-+++ git/bfd/vms-alpha.c
-@@ -1741,6 +1741,12 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
-       _bfd_hexdump (8, ptr, cmd_length - 4, 0);
- #endif
- 
-+#if VMS_DEBUG
-+      _bfd_vms_debug (4, "etir: %s(%d)\n",
-+                      _bfd_vms_etir_name (cmd), cmd);
-+      _bfd_hexdump (8, ptr, cmd_length - 4, 0);
-+#endif
-+
-       switch (cmd)
-         {
-           /* Stack global
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9746.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9746.patch
deleted file mode 100644
index bd4a40c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9746.patch
+++ /dev/null
@@ -1,91 +0,0 @@
-From ae87f7e73eba29bd38b3a9684a10b948ed715612 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Wed, 14 Jun 2017 16:50:03 +0100
-Subject: [PATCH] Fix address violation when disassembling a corrupt binary.
-
-	PR binutils/21580
-binutils * objdump.c (disassemble_bytes): Check for buffer overrun when
-	printing out rae insns.
-
-ld	* testsuite/ld-nds32/diff.d: Adjust expected output.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9746
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- binutils/objdump.c           | 27 +++++++++++++++------------
- ld/ChangeLog                 |  5 +++++
- ld/testsuite/ld-nds32/diff.d |  6 +++---
- 3 files changed, 23 insertions(+), 15 deletions(-)
-
-Index: git/binutils/objdump.c
-===================================================================
---- git.orig/binutils/objdump.c
-+++ git/binutils/objdump.c
-@@ -1855,20 +1855,23 @@ disassemble_bytes (struct disassemble_in
- 
- 	      for (j = addr_offset * opb; j < addr_offset * opb + pb; j += bpc)
- 		{
--		  int k;
--
--		  if (bpc > 1 && inf->display_endian == BFD_ENDIAN_LITTLE)
--		    {
--		      for (k = bpc - 1; k >= 0; k--)
--			printf ("%02x", (unsigned) data[j + k]);
--		      putchar (' ');
--		    }
--		  else
-+		  /* PR 21580: Check for a buffer ending early.  */
-+		  if (j + bpc <= stop_offset * opb)
- 		    {
--		      for (k = 0; k < bpc; k++)
--			printf ("%02x", (unsigned) data[j + k]);
--		      putchar (' ');
-+		      int k;
-+
-+		      if (inf->display_endian == BFD_ENDIAN_LITTLE)
-+			{
-+			  for (k = bpc - 1; k >= 0; k--)
-+			    printf ("%02x", (unsigned) data[j + k]);
-+			}
-+		      else
-+			{
-+			  for (k = 0; k < bpc; k++)
-+			    printf ("%02x", (unsigned) data[j + k]);
-+			}
- 		    }
-+		  putchar (' ');
- 		}
- 
- 	      for (; pb < octets_per_line; pb += bpc)
-Index: git/ld/testsuite/ld-nds32/diff.d
-===================================================================
---- git.orig/ld/testsuite/ld-nds32/diff.d
-+++ git/ld/testsuite/ld-nds32/diff.d
-@@ -7,9 +7,9 @@
- 
- Disassembly of section .data:
- 00008000 <WORD> (7e 00 00 00|00 00 00 7e).*
--00008004 <HALF> (7e 00 7e fe|00 7e 7e fe).*
--00008006 <BYTE> 7e fe 00 fe.*
--00008007 <ULEB128> fe 00.*
-+00008004 <HALF> (7e 00|00 7e).*
-+00008006 <BYTE> 7e.*
-+00008007 <ULEB128> fe.*
- 	...
- 00008009 <ULEB128_2> fe 00.*
- .*
-Index: git/ld/ChangeLog
-===================================================================
---- git.orig/ld/ChangeLog
-+++ git/ld/ChangeLog
-@@ -1,3 +1,8 @@
-+2017-06-14  Nick Clifton  <nickc@redhat.com>
-+
-+       PR binutils/21580
-+       * testsuite/ld-nds32/diff.d: Adjust expected output.
-+
- 2017-03-07  Alan Modra  <amodra@gmail.com>
- 
- 	* ldlang.c (open_input_bfds): Check that lang_assignment_statement
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9747.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9747.patch
deleted file mode 100644
index 41ead54..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9747.patch
+++ /dev/null
@@ -1,43 +0,0 @@
-From 62b76e4b6e0b4cb5b3e0053d1de4097b32577049 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Thu, 15 Jun 2017 13:08:47 +0100
-Subject: [PATCH] Fix address violation parsing a corrupt ieee binary.
-
-	PR binutils/21581
-	(ieee_archive_p): Use a static buffer to avoid compiler bugs.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9747
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog | 2 ++
- bfd/ieee.c    | 2 +-
- 2 files changed, 3 insertions(+), 1 deletion(-)
-
-Index: git/bfd/ieee.c
-===================================================================
---- git.orig/bfd/ieee.c
-+++ git/bfd/ieee.c
-@@ -1357,7 +1357,7 @@ ieee_archive_p (bfd *abfd)
- {
-   char *library;
-   unsigned int i;
--  unsigned char buffer[512];
-+  static unsigned char buffer[512];
-   file_ptr buffer_offset = 0;
-   ieee_ar_data_type *save = abfd->tdata.ieee_ar_data;
-   ieee_ar_data_type *ieee;
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,3 +1,8 @@
-+2017-06-15  Nick Clifton  <nickc@redhat.com>
-+
-+       PR binutils/21581
-+       (ieee_archive_p): Likewise.
-+
- 2017-06-14  Nick Clifton  <nickc@redhat.com>
-  
-        PR binutils/21578
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9748.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9748.patch
deleted file mode 100644
index 0207023..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9748.patch
+++ /dev/null
@@ -1,46 +0,0 @@
-From 63634bb4a107877dd08b6282e28e11cfd1a1649e Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Thu, 15 Jun 2017 12:44:23 +0100
-Subject: [PATCH] Avoid a possible compiler bug by using a static buffer
- instead of a stack local buffer.
-
-	PR binutils/21582
-	* ieee.c (ieee_object_p): Use a static buffer to avoid compiler
-	bugs.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9748
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog | 6 ++++++
- bfd/ieee.c    | 2 +-
- 2 files changed, 7 insertions(+), 1 deletion(-)
-
-Index: git/bfd/ieee.c
-===================================================================
---- git.orig/bfd/ieee.c
-+++ git/bfd/ieee.c
-@@ -1875,7 +1875,7 @@ ieee_object_p (bfd *abfd)
-   char *processor;
-   unsigned int part;
-   ieee_data_type *ieee;
--  unsigned char buffer[300];
-+  static unsigned char buffer[300];
-   ieee_data_type *save = IEEE_DATA (abfd);
-   bfd_size_type amt;
- 
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,5 +1,9 @@
- 2017-06-15  Nick Clifton  <nickc@redhat.com>
- 
-+       PR binutils/21582
-+       * ieee.c (ieee_object_p): Use a static buffer to avoid compiler
-+       bugs.
-+
-        PR binutils/21581
-        (ieee_archive_p): Likewise.
- 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9749.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9749.patch
deleted file mode 100644
index 3cc2afc..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9749.patch
+++ /dev/null
@@ -1,77 +0,0 @@
-From 08c7881b814c546efc3996fd1decdf0877f7a779 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Thu, 15 Jun 2017 11:52:02 +0100
-Subject: [PATCH] Prevent invalid array accesses when disassembling a corrupt
- bfin binary.
-
-	PR binutils/21586
-	* bfin-dis.c (gregs): Clip index to prevent overflow.
-	(regs): Likewise.
-	(regs_lo): Likewise.
-	(regs_hi): Likewise.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9749
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- opcodes/ChangeLog  | 8 ++++++++
- opcodes/bfin-dis.c | 8 ++++----
- 2 files changed, 12 insertions(+), 4 deletions(-)
-
-Index: git/opcodes/ChangeLog
-===================================================================
---- git.orig/opcodes/ChangeLog
-+++ git/opcodes/ChangeLog
-@@ -1,3 +1,11 @@
-+2017-06-15  Nick Clifton  <nickc@redhat.com>
-+
-+	PR binutils/21586
-+	* bfin-dis.c (gregs): Clip index to prevent overflow.
-+	(regs): Likewise.
-+	(regs_lo): Likewise.
-+	(regs_hi): Likewise.
-+
- 2017-06-14  Nick Clifton  <nickc@redhat.com>
- 
-        PR binutils/21576
-Index: git/opcodes/bfin-dis.c
-===================================================================
---- git.orig/opcodes/bfin-dis.c
-+++ git/opcodes/bfin-dis.c
-@@ -350,7 +350,7 @@ static const enum machine_registers deco
-   REG_P0, REG_P1, REG_P2, REG_P3, REG_P4, REG_P5, REG_SP, REG_FP,
- };
- 
--#define gregs(x, i) REGNAME (decode_gregs[((i) << 3) | (x)])
-+#define gregs(x, i) REGNAME (decode_gregs[(((i) << 3) | (x)) & 15])
- 
- /* [dregs pregs (iregs mregs) (bregs lregs)].  */
- static const enum machine_registers decode_regs[] =
-@@ -361,7 +361,7 @@ static const enum machine_registers deco
-   REG_B0, REG_B1, REG_B2, REG_B3, REG_L0, REG_L1, REG_L2, REG_L3,
- };
- 
--#define regs(x, i) REGNAME (decode_regs[((i) << 3) | (x)])
-+#define regs(x, i) REGNAME (decode_regs[(((i) << 3) | (x)) & 31])
- 
- /* [dregs pregs (iregs mregs) (bregs lregs) Low Half].  */
- static const enum machine_registers decode_regs_lo[] =
-@@ -372,7 +372,7 @@ static const enum machine_registers deco
-   REG_BL0, REG_BL1, REG_BL2, REG_BL3, REG_LL0, REG_LL1, REG_LL2, REG_LL3,
- };
- 
--#define regs_lo(x, i) REGNAME (decode_regs_lo[((i) << 3) | (x)])
-+#define regs_lo(x, i) REGNAME (decode_regs_lo[(((i) << 3) | (x)) & 31])
- 
- /* [dregs pregs (iregs mregs) (bregs lregs) High Half].  */
- static const enum machine_registers decode_regs_hi[] =
-@@ -383,7 +383,7 @@ static const enum machine_registers deco
-   REG_BH0, REG_BH1, REG_BH2, REG_BH3, REG_LH0, REG_LH1, REG_LH2, REG_LH3,
- };
- 
--#define regs_hi(x, i) REGNAME (decode_regs_hi[((i) << 3) | (x)])
-+#define regs_hi(x, i) REGNAME (decode_regs_hi[(((i) << 3) | (x)) & 31])
- 
- static const enum machine_registers decode_statbits[] =
- {
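
The PR 21586 fix deleted above clips decoder table indices with a mask sized to the table. A short sketch of that clamping idea, with invented register names:

#include <stdio.h>

/* Decoder register table: 16 entries, so a 4-bit mask keeps any
   combination of attacker-controlled bit-fields inside the array.
   Masking trades an out-of-bounds read for a wrong-but-harmless
   register name on corrupt input.  */
static const char *const regs[16] =
{
  "r0", "r1", "r2", "r3", "r4", "r5", "r6", "r7",
  "p0", "p1", "p2", "p3", "p4", "p5", "sp", "fp"
};

#define GREG(x, i) (regs[(((i) << 3) | (x)) & 15])

int
main (void)
{
  puts (GREG (7, 3));   /* bogus i=3 would index 31 without the mask */
  return 0;
}
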
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9750.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9750.patch
deleted file mode 100644
index fe8fa69..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9750.patch
+++ /dev/null
@@ -1,247 +0,0 @@
-From db5fa770268baf8cc82cf9b141d69799fd485fe2 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Wed, 14 Jun 2017 13:35:06 +0100
-Subject: [PATCH] Fix address violation problems when disassembling a corrupt
- RX binary.
-
-	PR binutils/21587
-	* rx-decode.opc: Include libiberty.h
-	(GET_SCALE): New macro - validates access to SCALE array.
-	(GET_PSCALE): New macro - validates access to PSCALE array.
-	(DIs, SIs, S2Is, rx_disp): Use new macros.
-	* rx-decode.c: Regenerate.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9750
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- opcodes/ChangeLog     |  9 +++++++++
- opcodes/rx-decode.c   | 24 ++++++++++++++----------
- opcodes/rx-decode.opc | 24 ++++++++++++++----------
- 3 files changed, 37 insertions(+), 20 deletions(-)
-
-Index: git/opcodes/rx-decode.c
-===================================================================
---- git.orig/opcodes/rx-decode.c
-+++ git/opcodes/rx-decode.c
-@@ -27,6 +27,7 @@
- #include <string.h>
- #include "ansidecl.h"
- #include "opcode/rx.h"
-+#include "libiberty.h"
- 
- #define RX_OPCODE_BIG_ENDIAN 0
- 
-@@ -45,7 +46,7 @@ static int trace = 0;
- #define LSIZE 2
- 
- /* These are for when the upper bits are "don't care" or "undefined".  */
--static int bwl[] =
-+static int bwl[4] =
- {
-   RX_Byte,
-   RX_Word,
-@@ -53,7 +54,7 @@ static int bwl[] =
-   RX_Bad_Size /* Bogus instructions can have a size field set to 3.  */
- };
- 
--static int sbwl[] =
-+static int sbwl[4] =
- {
-   RX_SByte,
-   RX_SWord,
-@@ -61,7 +62,7 @@ static int sbwl[] =
-   RX_Bad_Size /* Bogus instructions can have a size field set to 3.  */
- };
- 
--static int ubw[] =
-+static int ubw[4] =
- {
-   RX_UByte,
-   RX_UWord,
-@@ -69,7 +70,7 @@ static int ubw[] =
-   RX_Bad_Size /* Bogus instructions can have a size field set to 3.  */
- };
- 
--static int memex[] =
-+static int memex[4] =
- {
-   RX_SByte,
-   RX_SWord,
-@@ -89,6 +90,9 @@ static int SCALE[] = { 1, 2, 4, 0 };
- /* This is for the prefix size enum.  */
- static int PSCALE[] = { 4, 1, 1, 1, 2, 2, 2, 3, 4 };
- 
-+#define GET_SCALE(_indx)  ((unsigned)(_indx) < ARRAY_SIZE (SCALE) ? SCALE[(_indx)] : 0)
-+#define GET_PSCALE(_indx) ((unsigned)(_indx) < ARRAY_SIZE (PSCALE) ? PSCALE[(_indx)] : 0)
-+
- static int flagmap[] = {0, 1, 2, 3, 0, 0, 0, 0,
- 		       16, 17, 0, 0, 0, 0, 0, 0 };
- 
-@@ -107,7 +111,7 @@ static int dsp3map[] = { 8, 9, 10, 3, 4,
- #define DC(c)       OP (0, RX_Operand_Immediate, 0, c)
- #define DR(r)       OP (0, RX_Operand_Register,  r, 0)
- #define DI(r,a)     OP (0, RX_Operand_Indirect,  r, a)
--#define DIs(r,a,s)  OP (0, RX_Operand_Indirect,  r, (a) * SCALE[s])
-+#define DIs(r,a,s)  OP (0, RX_Operand_Indirect,  r, (a) * GET_SCALE (s))
- #define DD(t,r,s)   rx_disp (0, t, r, bwl[s], ld);
- #define DF(r)       OP (0, RX_Operand_Flag,  flagmap[r], 0)
- 
-@@ -115,7 +119,7 @@ static int dsp3map[] = { 8, 9, 10, 3, 4,
- #define SR(r)       OP (1, RX_Operand_Register,  r, 0)
- #define SRR(r)      OP (1, RX_Operand_TwoReg,  r, 0)
- #define SI(r,a)     OP (1, RX_Operand_Indirect,  r, a)
--#define SIs(r,a,s)  OP (1, RX_Operand_Indirect,  r, (a) * SCALE[s])
-+#define SIs(r,a,s)  OP (1, RX_Operand_Indirect,  r, (a) * GET_SCALE (s))
- #define SD(t,r,s)   rx_disp (1, t, r, bwl[s], ld);
- #define SP(t,r)     rx_disp (1, t, r, (t!=3) ? RX_UByte : RX_Long, ld); P(t, 1);
- #define SPm(t,r,m)  rx_disp (1, t, r, memex[m], ld); rx->op[1].size = memex[m];
-@@ -124,7 +128,7 @@ static int dsp3map[] = { 8, 9, 10, 3, 4,
- #define S2C(i)      OP (2, RX_Operand_Immediate, 0, i)
- #define S2R(r)      OP (2, RX_Operand_Register,  r, 0)
- #define S2I(r,a)    OP (2, RX_Operand_Indirect,  r, a)
--#define S2Is(r,a,s) OP (2, RX_Operand_Indirect,  r, (a) * SCALE[s])
-+#define S2Is(r,a,s) OP (2, RX_Operand_Indirect,  r, (a) * GET_SCALE (s))
- #define S2D(t,r,s)  rx_disp (2, t, r, bwl[s], ld);
- #define S2P(t,r)    rx_disp (2, t, r, (t!=3) ? RX_UByte : RX_Long, ld); P(t, 2);
- #define S2Pm(t,r,m) rx_disp (2, t, r, memex[m], ld); rx->op[2].size = memex[m];
-@@ -211,7 +215,7 @@ immediate (int sfield, int ex, LocalData
- }
- 
- static void
--rx_disp (int n, int type, int reg, int size, LocalData * ld)
-+rx_disp (int n, int type, int reg, unsigned int size, LocalData * ld)
- {
-   int disp;
- 
-@@ -228,7 +232,7 @@ rx_disp (int n, int type, int reg, int s
-     case 1:
-       ld->rx->op[n].type = RX_Operand_Indirect;
-       disp = GETBYTE ();
--      ld->rx->op[n].addend = disp * PSCALE[size];
-+      ld->rx->op[n].addend = disp * GET_PSCALE (size);
-       break;
-     case 2:
-       ld->rx->op[n].type = RX_Operand_Indirect;
-@@ -238,7 +242,7 @@ rx_disp (int n, int type, int reg, int s
- #else
-       disp = disp + GETBYTE () * 256;
- #endif
--      ld->rx->op[n].addend = disp * PSCALE[size];
-+      ld->rx->op[n].addend = disp * GET_PSCALE (size);
-       break;
-     default:
-       abort ();
-Index: git/opcodes/rx-decode.opc
-===================================================================
---- git.orig/opcodes/rx-decode.opc
-+++ git/opcodes/rx-decode.opc
-@@ -26,6 +26,7 @@
- #include <string.h>
- #include "ansidecl.h"
- #include "opcode/rx.h"
-+#include "libiberty.h"
- 
- #define RX_OPCODE_BIG_ENDIAN 0
- 
-@@ -44,7 +45,7 @@ static int trace = 0;
- #define LSIZE 2
- 
- /* These are for when the upper bits are "don't care" or "undefined".  */
--static int bwl[] =
-+static int bwl[4] =
- {
-   RX_Byte,
-   RX_Word,
-@@ -52,7 +53,7 @@ static int bwl[] =
-   RX_Bad_Size /* Bogus instructions can have a size field set to 3.  */
- };
- 
--static int sbwl[] =
-+static int sbwl[4] =
- {
-   RX_SByte,
-   RX_SWord,
-@@ -60,7 +61,7 @@ static int sbwl[] =
-   RX_Bad_Size /* Bogus instructions can have a size field set to 3.  */
- };
- 
--static int ubw[] =
-+static int ubw[4] =
- {
-   RX_UByte,
-   RX_UWord,
-@@ -68,7 +69,7 @@ static int ubw[] =
-   RX_Bad_Size /* Bogus instructions can have a size field set to 3.  */
- };
- 
--static int memex[] =
-+static int memex[4] =
- {
-   RX_SByte,
-   RX_SWord,
-@@ -88,6 +89,9 @@ static int SCALE[] = { 1, 2, 4, 0 };
- /* This is for the prefix size enum.  */
- static int PSCALE[] = { 4, 1, 1, 1, 2, 2, 2, 3, 4 };
- 
-+#define GET_SCALE(_indx)  ((unsigned)(_indx) < ARRAY_SIZE (SCALE) ? SCALE[(_indx)] : 0)
-+#define GET_PSCALE(_indx) ((unsigned)(_indx) < ARRAY_SIZE (PSCALE) ? PSCALE[(_indx)] : 0)
-+
- static int flagmap[] = {0, 1, 2, 3, 0, 0, 0, 0,
- 		       16, 17, 0, 0, 0, 0, 0, 0 };
- 
-@@ -106,7 +110,7 @@ static int dsp3map[] = { 8, 9, 10, 3, 4,
- #define DC(c)       OP (0, RX_Operand_Immediate, 0, c)
- #define DR(r)       OP (0, RX_Operand_Register,  r, 0)
- #define DI(r,a)     OP (0, RX_Operand_Indirect,  r, a)
--#define DIs(r,a,s)  OP (0, RX_Operand_Indirect,  r, (a) * SCALE[s])
-+#define DIs(r,a,s)  OP (0, RX_Operand_Indirect,  r, (a) * GET_SCALE (s))
- #define DD(t,r,s)   rx_disp (0, t, r, bwl[s], ld);
- #define DF(r)       OP (0, RX_Operand_Flag,  flagmap[r], 0)
- 
-@@ -114,7 +118,7 @@ static int dsp3map[] = { 8, 9, 10, 3, 4,
- #define SR(r)       OP (1, RX_Operand_Register,  r, 0)
- #define SRR(r)      OP (1, RX_Operand_TwoReg,  r, 0)
- #define SI(r,a)     OP (1, RX_Operand_Indirect,  r, a)
--#define SIs(r,a,s)  OP (1, RX_Operand_Indirect,  r, (a) * SCALE[s])
-+#define SIs(r,a,s)  OP (1, RX_Operand_Indirect,  r, (a) * GET_SCALE (s))
- #define SD(t,r,s)   rx_disp (1, t, r, bwl[s], ld);
- #define SP(t,r)     rx_disp (1, t, r, (t!=3) ? RX_UByte : RX_Long, ld); P(t, 1);
- #define SPm(t,r,m)  rx_disp (1, t, r, memex[m], ld); rx->op[1].size = memex[m];
-@@ -123,7 +127,7 @@ static int dsp3map[] = { 8, 9, 10, 3, 4,
- #define S2C(i)      OP (2, RX_Operand_Immediate, 0, i)
- #define S2R(r)      OP (2, RX_Operand_Register,  r, 0)
- #define S2I(r,a)    OP (2, RX_Operand_Indirect,  r, a)
--#define S2Is(r,a,s) OP (2, RX_Operand_Indirect,  r, (a) * SCALE[s])
-+#define S2Is(r,a,s) OP (2, RX_Operand_Indirect,  r, (a) * GET_SCALE (s))
- #define S2D(t,r,s)  rx_disp (2, t, r, bwl[s], ld);
- #define S2P(t,r)    rx_disp (2, t, r, (t!=3) ? RX_UByte : RX_Long, ld); P(t, 2);
- #define S2Pm(t,r,m) rx_disp (2, t, r, memex[m], ld); rx->op[2].size = memex[m];
-@@ -210,7 +214,7 @@ immediate (int sfield, int ex, LocalData
- }
- 
- static void
--rx_disp (int n, int type, int reg, int size, LocalData * ld)
-+rx_disp (int n, int type, int reg, unsigned int size, LocalData * ld)
- {
-   int disp;
- 
-@@ -227,7 +231,7 @@ rx_disp (int n, int type, int reg, int s
-     case 1:
-       ld->rx->op[n].type = RX_Operand_Indirect;
-       disp = GETBYTE ();
--      ld->rx->op[n].addend = disp * PSCALE[size];
-+      ld->rx->op[n].addend = disp * GET_PSCALE (size);
-       break;
-     case 2:
-       ld->rx->op[n].type = RX_Operand_Indirect;
-@@ -237,7 +241,7 @@ rx_disp (int n, int type, int reg, int s
- #else
-       disp = disp + GETBYTE () * 256;
- #endif
--      ld->rx->op[n].addend = disp * PSCALE[size];
-+      ld->rx->op[n].addend = disp * GET_PSCALE (size);
-       break;
-     default:
-       abort ();
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9751.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9751.patch
deleted file mode 100644
index d7c18cf..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9751.patch
+++ /dev/null
@@ -1,3748 +0,0 @@
-From 63323b5b23bd83fa7b04ea00dff593c933e9b0e3 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Thu, 15 Jun 2017 12:37:01 +0100
-Subject: [PATCH] Fix address violation when disassembling a corrupt RL78
- binary.
-
-	PR binutils/21588
-	* rl78-decode.opc (OP_BUF_LEN): Define.
-	(GETBYTE): Check for the index exceeding OP_BUF_LEN.
-	(rl78_decode_opcode): Use OP_BUF_LEN as the length of the op_buf
-	array.
-	* rl78-decode.c: Regenerate.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9751
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- opcodes/ChangeLog       |   9 +
- opcodes/rl78-decode.c   | 820 ++++++++++++++++++++++++------------------------
- opcodes/rl78-decode.opc |   6 +-
- 3 files changed, 424 insertions(+), 411 deletions(-)
-
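[Editorial note, not part of the patch: the commit message above describes the hardening approach taken for PR binutils/21588 — size the decoder's opcode buffer with a named constant (OP_BUF_LEN) and make GETBYTE() refuse to advance past it, returning 0 instead of writing out of bounds. The following is a minimal, self-contained C sketch of that bounds-checked accessor pattern; the struct layout and names here are simplified stand-ins for the decoder's LocalData, not the actual binutils code.]

/* Simplified stand-in for the decoder's per-call state: an opcode
   buffer, a write index, and a callback that fetches the next
   input byte (hypothetical names, for illustration only).  */

#define OP_BUF_LEN 20

struct local_data
{
  unsigned char op[OP_BUF_LEN];
  int n_bytes;
  int (*getbyte) (void *ptr);
  void *ptr;
};

/* Bounds-checked byte fetch: a corrupt instruction stream can no
   longer push the index past op[]; once the buffer is full the
   decoder just sees 0 rather than overrunning memory.  */
static int
get_op_byte (struct local_data *ld)
{
  if (ld->n_bytes < OP_BUF_LEN - 1)
    return ld->op[ld->n_bytes++] = ld->getbyte (ld->ptr);
  return 0;
}

[The regenerated rl78-decode.c hunks that follow apply exactly this change through the GETBYTE() macro and the op_buf declaration; the remaining hunks are #line-number churn from regeneration.]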
-diff --git a/opcodes/ChangeLog b/opcodes/ChangeLog
-index 34b1844..c77f00a 100644
---- a/opcodes/ChangeLog
-+++ b/opcodes/ChangeLog
-@@ -1,5 +1,14 @@
- 2017-06-15  Nick Clifton  <nickc@redhat.com>
- 
-+	PR binutils/21588
-+	* rl78-decode.opc (OP_BUF_LEN): Define.
-+	(GETBYTE): Check for the index exceeding OP_BUF_LEN.
-+	(rl78_decode_opcode): Use OP_BUF_LEN as the length of the op_buf
-+	array.
-+	* rl78-decode.c: Regenerate.
-+
-+2017-06-15  Nick Clifton  <nickc@redhat.com>
-+
- 	PR binutils/21586
- 	* bfin-dis.c (gregs): Clip index to prevent overflow.
- 	(regs): Likewise.
-diff --git a/opcodes/rl78-decode.c b/opcodes/rl78-decode.c
-index d0566ea..b2d4bd6 100644
---- a/opcodes/rl78-decode.c
-+++ b/opcodes/rl78-decode.c
-@@ -51,7 +51,9 @@ typedef struct
- #define W() rl78->size = RL78_Word
- 
- #define AU ATTRIBUTE_UNUSED
--#define GETBYTE() (ld->op [ld->rl78->n_bytes++] = ld->getbyte (ld->ptr))
-+
-+#define OP_BUF_LEN 20
-+#define GETBYTE() (ld->rl78->n_bytes < (OP_BUF_LEN - 1) ? ld->op [ld->rl78->n_bytes++] = ld->getbyte (ld->ptr): 0)
- #define B ((unsigned long) GETBYTE())
- 
- #define SYNTAX(x) rl78->syntax = x
-@@ -169,7 +171,7 @@ rl78_decode_opcode (unsigned long pc AU,
- 		  RL78_Dis_Isa isa)
- {
-   LocalData lds, * ld = &lds;
--  unsigned char op_buf[20] = {0};
-+  unsigned char op_buf[OP_BUF_LEN] = {0};
-   unsigned char *op = op_buf;
-   int op0, op1;
- 
-@@ -201,7 +203,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("nop");
--#line 911 "rl78-decode.opc"
-+#line 913 "rl78-decode.opc"
-           ID(nop);
- 
-         /*----------------------------------------------------------------------*/
-@@ -214,7 +216,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0x07:
-         {
-           /** 0000 0rw1			addw	%0, %1				*/
--#line 274 "rl78-decode.opc"
-+#line 276 "rl78-decode.opc"
-           int rw AU = (op[0] >> 1) & 0x03;
-           if (trace)
-             {
-@@ -224,7 +226,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  rw = 0x%x\n", rw);
-             }
-           SYNTAX("addw	%0, %1");
--#line 274 "rl78-decode.opc"
-+#line 276 "rl78-decode.opc"
-           ID(add); W(); DR(AX); SRW(rw); Fzac;
- 
-         }
-@@ -239,7 +241,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("addw	%0, %e!1");
--#line 265 "rl78-decode.opc"
-+#line 267 "rl78-decode.opc"
-           ID(add); W(); DR(AX); SM(None, IMMU(2)); Fzac;
- 
-         }
-@@ -254,7 +256,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("addw	%0, #%1");
--#line 271 "rl78-decode.opc"
-+#line 273 "rl78-decode.opc"
-           ID(add); W(); DR(AX); SC(IMMU(2)); Fzac;
- 
-         }
-@@ -269,7 +271,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("addw	%0, %1");
--#line 277 "rl78-decode.opc"
-+#line 279 "rl78-decode.opc"
-           ID(add); W(); DR(AX); SM(None, SADDR); Fzac;
- 
-         }
-@@ -284,7 +286,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("xch	a, x");
--#line 1234 "rl78-decode.opc"
-+#line 1236 "rl78-decode.opc"
-           ID(xch); DR(A); SR(X);
- 
-         /*----------------------------------------------------------------------*/
-@@ -301,7 +303,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %e1");
--#line 678 "rl78-decode.opc"
-+#line 680 "rl78-decode.opc"
-           ID(mov); DR(A); SM(B, IMMU(2));
- 
-         }
-@@ -316,7 +318,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("add	%0, #%1");
--#line 228 "rl78-decode.opc"
-+#line 230 "rl78-decode.opc"
-           ID(add); DM(None, SADDR); SC(IMMU(1)); Fzac;
- 
-         /*----------------------------------------------------------------------*/
-@@ -333,7 +335,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("add	%0, %1");
--#line 222 "rl78-decode.opc"
-+#line 224 "rl78-decode.opc"
-           ID(add); DR(A); SM(None, SADDR); Fzac;
- 
-         }
-@@ -348,7 +350,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("add	%0, #%1");
--#line 216 "rl78-decode.opc"
-+#line 218 "rl78-decode.opc"
-           ID(add); DR(A); SC(IMMU(1)); Fzac;
- 
-         }
-@@ -363,7 +365,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("add	%0, %e1");
--#line 204 "rl78-decode.opc"
-+#line 206 "rl78-decode.opc"
-           ID(add); DR(A); SM(HL, 0); Fzac;
- 
-         }
-@@ -378,7 +380,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("add	%0, %ea1");
--#line 210 "rl78-decode.opc"
-+#line 212 "rl78-decode.opc"
-           ID(add); DR(A); SM(HL, IMMU(1)); Fzac;
- 
-         }
-@@ -393,7 +395,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("add	%0, %e!1");
--#line 201 "rl78-decode.opc"
-+#line 203 "rl78-decode.opc"
-           ID(add); DR(A); SM(None, IMMU(2)); Fzac;
- 
-         }
-@@ -408,7 +410,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("addw	%0, #%1");
--#line 280 "rl78-decode.opc"
-+#line 282 "rl78-decode.opc"
-           ID(add); W(); DR(SP); SC(IMMU(1)); Fzac;
- 
-         /*----------------------------------------------------------------------*/
-@@ -425,7 +427,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("es:");
--#line 193 "rl78-decode.opc"
-+#line 195 "rl78-decode.opc"
-           DE(); SE();
-           op ++;
-           pc ++;
-@@ -440,7 +442,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0x16:
-         {
-           /** 0001 0ra0			movw	%0, %1				*/
--#line 859 "rl78-decode.opc"
-+#line 861 "rl78-decode.opc"
-           int ra AU = (op[0] >> 1) & 0x03;
-           if (trace)
-             {
-@@ -450,7 +452,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  ra = 0x%x\n", ra);
-             }
-           SYNTAX("movw	%0, %1");
--#line 859 "rl78-decode.opc"
-+#line 861 "rl78-decode.opc"
-           ID(mov); W(); DRW(ra); SR(AX);
- 
-         }
-@@ -460,7 +462,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0x17:
-         {
-           /** 0001 0ra1			movw	%0, %1				*/
--#line 856 "rl78-decode.opc"
-+#line 858 "rl78-decode.opc"
-           int ra AU = (op[0] >> 1) & 0x03;
-           if (trace)
-             {
-@@ -470,7 +472,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  ra = 0x%x\n", ra);
-             }
-           SYNTAX("movw	%0, %1");
--#line 856 "rl78-decode.opc"
-+#line 858 "rl78-decode.opc"
-           ID(mov); W(); DR(AX); SRW(ra);
- 
-         }
-@@ -485,7 +487,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%e0, %1");
--#line 729 "rl78-decode.opc"
-+#line 731 "rl78-decode.opc"
-           ID(mov); DM(B, IMMU(2)); SR(A);
- 
-         }
-@@ -500,7 +502,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%e0, #%1");
--#line 726 "rl78-decode.opc"
-+#line 728 "rl78-decode.opc"
-           ID(mov); DM(B, IMMU(2)); SC(IMMU(1));
- 
-         }
-@@ -515,7 +517,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("addc	%0, #%1");
--#line 260 "rl78-decode.opc"
-+#line 262 "rl78-decode.opc"
-           ID(addc); DM(None, SADDR); SC(IMMU(1)); Fzac;
- 
-         /*----------------------------------------------------------------------*/
-@@ -532,7 +534,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("addc	%0, %1");
--#line 257 "rl78-decode.opc"
-+#line 259 "rl78-decode.opc"
-           ID(addc); DR(A); SM(None, SADDR); Fzac;
- 
-         }
-@@ -547,7 +549,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("addc	%0, #%1");
--#line 248 "rl78-decode.opc"
-+#line 250 "rl78-decode.opc"
-           ID(addc); DR(A); SC(IMMU(1)); Fzac;
- 
-         }
-@@ -562,7 +564,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("addc	%0, %e1");
--#line 236 "rl78-decode.opc"
-+#line 238 "rl78-decode.opc"
-           ID(addc); DR(A); SM(HL, 0); Fzac;
- 
-         }
-@@ -577,7 +579,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("addc	%0, %ea1");
--#line 245 "rl78-decode.opc"
-+#line 247 "rl78-decode.opc"
-           ID(addc); DR(A); SM(HL, IMMU(1)); Fzac;
- 
-         }
-@@ -592,7 +594,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("addc	%0, %e!1");
--#line 233 "rl78-decode.opc"
-+#line 235 "rl78-decode.opc"
-           ID(addc); DR(A); SM(None, IMMU(2)); Fzac;
- 
-         }
-@@ -607,7 +609,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("subw	%0, #%1");
--#line 1198 "rl78-decode.opc"
-+#line 1200 "rl78-decode.opc"
-           ID(sub); W(); DR(SP); SC(IMMU(1)); Fzac;
- 
-         /*----------------------------------------------------------------------*/
-@@ -620,7 +622,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0x27:
-         {
-           /** 0010 0rw1			subw	%0, %1				*/
--#line 1192 "rl78-decode.opc"
-+#line 1194 "rl78-decode.opc"
-           int rw AU = (op[0] >> 1) & 0x03;
-           if (trace)
-             {
-@@ -630,7 +632,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  rw = 0x%x\n", rw);
-             }
-           SYNTAX("subw	%0, %1");
--#line 1192 "rl78-decode.opc"
-+#line 1194 "rl78-decode.opc"
-           ID(sub); W(); DR(AX); SRW(rw); Fzac;
- 
-         }
-@@ -645,7 +647,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("subw	%0, %e!1");
--#line 1183 "rl78-decode.opc"
-+#line 1185 "rl78-decode.opc"
-           ID(sub); W(); DR(AX); SM(None, IMMU(2)); Fzac;
- 
-         }
-@@ -660,7 +662,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("subw	%0, #%1");
--#line 1189 "rl78-decode.opc"
-+#line 1191 "rl78-decode.opc"
-           ID(sub); W(); DR(AX); SC(IMMU(2)); Fzac;
- 
-         }
-@@ -675,7 +677,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("subw	%0, %1");
--#line 1195 "rl78-decode.opc"
-+#line 1197 "rl78-decode.opc"
-           ID(sub); W(); DR(AX); SM(None, SADDR); Fzac;
- 
-         }
-@@ -690,7 +692,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%e0, %1");
--#line 741 "rl78-decode.opc"
-+#line 743 "rl78-decode.opc"
-           ID(mov); DM(C, IMMU(2)); SR(A);
- 
-         }
-@@ -705,7 +707,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %e1");
--#line 684 "rl78-decode.opc"
-+#line 686 "rl78-decode.opc"
-           ID(mov); DR(A); SM(C, IMMU(2));
- 
-         }
-@@ -720,7 +722,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("sub	%0, #%1");
--#line 1146 "rl78-decode.opc"
-+#line 1148 "rl78-decode.opc"
-           ID(sub); DM(None, SADDR); SC(IMMU(1)); Fzac;
- 
-         /*----------------------------------------------------------------------*/
-@@ -737,7 +739,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("sub	%0, %1");
--#line 1140 "rl78-decode.opc"
-+#line 1142 "rl78-decode.opc"
-           ID(sub); DR(A); SM(None, SADDR); Fzac;
- 
-         }
-@@ -752,7 +754,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("sub	%0, #%1");
--#line 1134 "rl78-decode.opc"
-+#line 1136 "rl78-decode.opc"
-           ID(sub); DR(A); SC(IMMU(1)); Fzac;
- 
-         }
-@@ -767,7 +769,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("sub	%0, %e1");
--#line 1122 "rl78-decode.opc"
-+#line 1124 "rl78-decode.opc"
-           ID(sub); DR(A); SM(HL, 0); Fzac;
- 
-         }
-@@ -782,7 +784,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("sub	%0, %ea1");
--#line 1128 "rl78-decode.opc"
-+#line 1130 "rl78-decode.opc"
-           ID(sub); DR(A); SM(HL, IMMU(1)); Fzac;
- 
-         }
-@@ -797,7 +799,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("sub	%0, %e!1");
--#line 1119 "rl78-decode.opc"
-+#line 1121 "rl78-decode.opc"
-           ID(sub); DR(A); SM(None, IMMU(2)); Fzac;
- 
-         }
-@@ -808,7 +810,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0x36:
-         {
-           /** 0011 0rg0			movw	%0, #%1				*/
--#line 853 "rl78-decode.opc"
-+#line 855 "rl78-decode.opc"
-           int rg AU = (op[0] >> 1) & 0x03;
-           if (trace)
-             {
-@@ -818,7 +820,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  rg = 0x%x\n", rg);
-             }
-           SYNTAX("movw	%0, #%1");
--#line 853 "rl78-decode.opc"
-+#line 855 "rl78-decode.opc"
-           ID(mov); W(); DRW(rg); SC(IMMU(2));
- 
-         }
-@@ -830,7 +832,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x00:
-               {
-                 /** 0011 0001 0bit 0000		btclr	%s1, $%a0			*/
--#line 416 "rl78-decode.opc"
-+#line 418 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -840,7 +842,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("btclr	%s1, $%a0");
--#line 416 "rl78-decode.opc"
-+#line 418 "rl78-decode.opc"
-                 ID(branch_cond_clear); SM(None, SADDR); SB(bit); DC(pc+IMMS(1)+4); COND(T);
- 
-               /*----------------------------------------------------------------------*/
-@@ -850,7 +852,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x01:
-               {
-                 /** 0011 0001 0bit 0001		btclr	%1, $%a0			*/
--#line 410 "rl78-decode.opc"
-+#line 412 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -860,7 +862,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("btclr	%1, $%a0");
--#line 410 "rl78-decode.opc"
-+#line 412 "rl78-decode.opc"
-                 ID(branch_cond_clear); DC(pc+IMMS(1)+3); SR(A); SB(bit); COND(T);
- 
-               }
-@@ -868,7 +870,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x02:
-               {
-                 /** 0011 0001 0bit 0010		bt	%s1, $%a0			*/
--#line 402 "rl78-decode.opc"
-+#line 404 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -878,7 +880,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("bt	%s1, $%a0");
--#line 402 "rl78-decode.opc"
-+#line 404 "rl78-decode.opc"
-                 ID(branch_cond); SM(None, SADDR); SB(bit); DC(pc+IMMS(1)+4); COND(T);
- 
-               /*----------------------------------------------------------------------*/
-@@ -888,7 +890,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x03:
-               {
-                 /** 0011 0001 0bit 0011		bt	%1, $%a0			*/
--#line 396 "rl78-decode.opc"
-+#line 398 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -898,7 +900,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("bt	%1, $%a0");
--#line 396 "rl78-decode.opc"
-+#line 398 "rl78-decode.opc"
-                 ID(branch_cond); DC(pc+IMMS(1)+3); SR(A); SB(bit); COND(T);
- 
-               }
-@@ -906,7 +908,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x04:
-               {
-                 /** 0011 0001 0bit 0100		bf	%s1, $%a0			*/
--#line 363 "rl78-decode.opc"
-+#line 365 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -916,7 +918,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("bf	%s1, $%a0");
--#line 363 "rl78-decode.opc"
-+#line 365 "rl78-decode.opc"
-                 ID(branch_cond); SM(None, SADDR); SB(bit); DC(pc+IMMS(1)+4); COND(F);
- 
-               /*----------------------------------------------------------------------*/
-@@ -926,7 +928,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x05:
-               {
-                 /** 0011 0001 0bit 0101		bf	%1, $%a0			*/
--#line 357 "rl78-decode.opc"
-+#line 359 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -936,7 +938,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("bf	%1, $%a0");
--#line 357 "rl78-decode.opc"
-+#line 359 "rl78-decode.opc"
-                 ID(branch_cond); DC(pc+IMMS(1)+3); SR(A); SB(bit); COND(F);
- 
-               }
-@@ -944,7 +946,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x07:
-               {
-                 /** 0011 0001 0cnt 0111		shl	%0, %1				*/
--#line 1075 "rl78-decode.opc"
-+#line 1077 "rl78-decode.opc"
-                 int cnt AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -954,7 +956,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  cnt = 0x%x\n", cnt);
-                   }
-                 SYNTAX("shl	%0, %1");
--#line 1075 "rl78-decode.opc"
-+#line 1077 "rl78-decode.opc"
-                 ID(shl); DR(C); SC(cnt);
- 
-               }
-@@ -962,7 +964,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x08:
-               {
-                 /** 0011 0001 0cnt 1000		shl	%0, %1				*/
--#line 1072 "rl78-decode.opc"
-+#line 1074 "rl78-decode.opc"
-                 int cnt AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -972,7 +974,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  cnt = 0x%x\n", cnt);
-                   }
-                 SYNTAX("shl	%0, %1");
--#line 1072 "rl78-decode.opc"
-+#line 1074 "rl78-decode.opc"
-                 ID(shl); DR(B); SC(cnt);
- 
-               }
-@@ -980,7 +982,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x09:
-               {
-                 /** 0011 0001 0cnt 1001		shl	%0, %1				*/
--#line 1069 "rl78-decode.opc"
-+#line 1071 "rl78-decode.opc"
-                 int cnt AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -990,7 +992,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  cnt = 0x%x\n", cnt);
-                   }
-                 SYNTAX("shl	%0, %1");
--#line 1069 "rl78-decode.opc"
-+#line 1071 "rl78-decode.opc"
-                 ID(shl); DR(A); SC(cnt);
- 
-               }
-@@ -998,7 +1000,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x0a:
-               {
-                 /** 0011 0001 0cnt 1010		shr	%0, %1				*/
--#line 1086 "rl78-decode.opc"
-+#line 1088 "rl78-decode.opc"
-                 int cnt AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -1008,7 +1010,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  cnt = 0x%x\n", cnt);
-                   }
-                 SYNTAX("shr	%0, %1");
--#line 1086 "rl78-decode.opc"
-+#line 1088 "rl78-decode.opc"
-                 ID(shr); DR(A); SC(cnt);
- 
-               }
-@@ -1016,7 +1018,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x0b:
-               {
-                 /** 0011 0001 0cnt 1011		sar	%0, %1				*/
--#line 1033 "rl78-decode.opc"
-+#line 1035 "rl78-decode.opc"
-                 int cnt AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -1026,7 +1028,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  cnt = 0x%x\n", cnt);
-                   }
-                 SYNTAX("sar	%0, %1");
--#line 1033 "rl78-decode.opc"
-+#line 1035 "rl78-decode.opc"
-                 ID(sar); DR(A); SC(cnt);
- 
-               }
-@@ -1035,7 +1037,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x8c:
-               {
-                 /** 0011 0001 wcnt 1100		shlw	%0, %1				*/
--#line 1081 "rl78-decode.opc"
-+#line 1083 "rl78-decode.opc"
-                 int wcnt AU = (op[1] >> 4) & 0x0f;
-                 if (trace)
-                   {
-@@ -1045,7 +1047,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  wcnt = 0x%x\n", wcnt);
-                   }
-                 SYNTAX("shlw	%0, %1");
--#line 1081 "rl78-decode.opc"
-+#line 1083 "rl78-decode.opc"
-                 ID(shl); W(); DR(BC); SC(wcnt);
- 
-               /*----------------------------------------------------------------------*/
-@@ -1056,7 +1058,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x8d:
-               {
-                 /** 0011 0001 wcnt 1101		shlw	%0, %1				*/
--#line 1078 "rl78-decode.opc"
-+#line 1080 "rl78-decode.opc"
-                 int wcnt AU = (op[1] >> 4) & 0x0f;
-                 if (trace)
-                   {
-@@ -1066,7 +1068,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  wcnt = 0x%x\n", wcnt);
-                   }
-                 SYNTAX("shlw	%0, %1");
--#line 1078 "rl78-decode.opc"
-+#line 1080 "rl78-decode.opc"
-                 ID(shl); W(); DR(AX); SC(wcnt);
- 
-               }
-@@ -1075,7 +1077,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x8e:
-               {
-                 /** 0011 0001 wcnt 1110		shrw	%0, %1				*/
--#line 1089 "rl78-decode.opc"
-+#line 1091 "rl78-decode.opc"
-                 int wcnt AU = (op[1] >> 4) & 0x0f;
-                 if (trace)
-                   {
-@@ -1085,7 +1087,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  wcnt = 0x%x\n", wcnt);
-                   }
-                 SYNTAX("shrw	%0, %1");
--#line 1089 "rl78-decode.opc"
-+#line 1091 "rl78-decode.opc"
-                 ID(shr); W(); DR(AX); SC(wcnt);
- 
-               /*----------------------------------------------------------------------*/
-@@ -1096,7 +1098,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x8f:
-               {
-                 /** 0011 0001 wcnt 1111		sarw	%0, %1				*/
--#line 1036 "rl78-decode.opc"
-+#line 1038 "rl78-decode.opc"
-                 int wcnt AU = (op[1] >> 4) & 0x0f;
-                 if (trace)
-                   {
-@@ -1106,7 +1108,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  wcnt = 0x%x\n", wcnt);
-                   }
-                 SYNTAX("sarw	%0, %1");
--#line 1036 "rl78-decode.opc"
-+#line 1038 "rl78-decode.opc"
-                 ID(sar); W(); DR(AX); SC(wcnt);
- 
-               /*----------------------------------------------------------------------*/
-@@ -1116,7 +1118,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x80:
-               {
-                 /** 0011 0001 1bit 0000		btclr	%s1, $%a0			*/
--#line 413 "rl78-decode.opc"
-+#line 415 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -1126,7 +1128,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("btclr	%s1, $%a0");
--#line 413 "rl78-decode.opc"
-+#line 415 "rl78-decode.opc"
-                 ID(branch_cond_clear); SM(None, SFR); SB(bit); DC(pc+IMMS(1)+4); COND(T);
- 
-               }
-@@ -1134,7 +1136,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x81:
-               {
-                 /** 0011 0001 1bit 0001		btclr	%e1, $%a0			*/
--#line 407 "rl78-decode.opc"
-+#line 409 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -1144,7 +1146,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("btclr	%e1, $%a0");
--#line 407 "rl78-decode.opc"
-+#line 409 "rl78-decode.opc"
-                 ID(branch_cond_clear); DC(pc+IMMS(1)+3); SM(HL,0); SB(bit); COND(T);
- 
-               }
-@@ -1152,7 +1154,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x82:
-               {
-                 /** 0011 0001 1bit 0010		bt	%s1, $%a0			*/
--#line 399 "rl78-decode.opc"
-+#line 401 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -1162,7 +1164,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("bt	%s1, $%a0");
--#line 399 "rl78-decode.opc"
-+#line 401 "rl78-decode.opc"
-                 ID(branch_cond); SM(None, SFR); SB(bit); DC(pc+IMMS(1)+4); COND(T);
- 
-               }
-@@ -1170,7 +1172,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x83:
-               {
-                 /** 0011 0001 1bit 0011		bt	%e1, $%a0			*/
--#line 393 "rl78-decode.opc"
-+#line 395 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -1180,7 +1182,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("bt	%e1, $%a0");
--#line 393 "rl78-decode.opc"
-+#line 395 "rl78-decode.opc"
-                 ID(branch_cond); DC(pc+IMMS(1)+3); SM(HL,0); SB(bit); COND(T);
- 
-               }
-@@ -1188,7 +1190,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x84:
-               {
-                 /** 0011 0001 1bit 0100		bf	%s1, $%a0			*/
--#line 360 "rl78-decode.opc"
-+#line 362 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -1198,7 +1200,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("bf	%s1, $%a0");
--#line 360 "rl78-decode.opc"
-+#line 362 "rl78-decode.opc"
-                 ID(branch_cond); SM(None, SFR); SB(bit); DC(pc+IMMS(1)+4); COND(F);
- 
-               }
-@@ -1206,7 +1208,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x85:
-               {
-                 /** 0011 0001 1bit 0101		bf	%e1, $%a0			*/
--#line 354 "rl78-decode.opc"
-+#line 356 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -1216,7 +1218,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("bf	%e1, $%a0");
--#line 354 "rl78-decode.opc"
-+#line 356 "rl78-decode.opc"
-                 ID(branch_cond); DC(pc+IMMS(1)+3); SM(HL,0); SB(bit); COND(F);
- 
-               }
-@@ -1229,7 +1231,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0x37:
-         {
-           /** 0011 0ra1			xchw	%0, %1				*/
--#line 1239 "rl78-decode.opc"
-+#line 1241 "rl78-decode.opc"
-           int ra AU = (op[0] >> 1) & 0x03;
-           if (trace)
-             {
-@@ -1239,7 +1241,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  ra = 0x%x\n", ra);
-             }
-           SYNTAX("xchw	%0, %1");
--#line 1239 "rl78-decode.opc"
-+#line 1241 "rl78-decode.opc"
-           ID(xch); W(); DR(AX); SRW(ra);
- 
-         /*----------------------------------------------------------------------*/
-@@ -1256,7 +1258,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%e0, #%1");
--#line 738 "rl78-decode.opc"
-+#line 740 "rl78-decode.opc"
-           ID(mov); DM(C, IMMU(2)); SC(IMMU(1));
- 
-         }
-@@ -1271,7 +1273,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%e0, #%1");
--#line 732 "rl78-decode.opc"
-+#line 734 "rl78-decode.opc"
-           ID(mov); DM(BC, IMMU(2)); SC(IMMU(1));
- 
-         }
-@@ -1286,7 +1288,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("subc	%0, #%1");
--#line 1178 "rl78-decode.opc"
-+#line 1180 "rl78-decode.opc"
-           ID(subc); DM(None, SADDR); SC(IMMU(1)); Fzac;
- 
-         /*----------------------------------------------------------------------*/
-@@ -1303,7 +1305,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("subc	%0, %1");
--#line 1175 "rl78-decode.opc"
-+#line 1177 "rl78-decode.opc"
-           ID(subc); DR(A); SM(None, SADDR); Fzac;
- 
-         }
-@@ -1318,7 +1320,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("subc	%0, #%1");
--#line 1166 "rl78-decode.opc"
-+#line 1168 "rl78-decode.opc"
-           ID(subc); DR(A); SC(IMMU(1)); Fzac;
- 
-         }
-@@ -1333,7 +1335,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("subc	%0, %e1");
--#line 1154 "rl78-decode.opc"
-+#line 1156 "rl78-decode.opc"
-           ID(subc); DR(A); SM(HL, 0); Fzac;
- 
-         }
-@@ -1348,7 +1350,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("subc	%0, %ea1");
--#line 1163 "rl78-decode.opc"
-+#line 1165 "rl78-decode.opc"
-           ID(subc); DR(A); SM(HL, IMMU(1)); Fzac;
- 
-         }
-@@ -1363,7 +1365,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("subc	%0, %e!1");
--#line 1151 "rl78-decode.opc"
-+#line 1153 "rl78-decode.opc"
-           ID(subc); DR(A); SM(None, IMMU(2)); Fzac;
- 
-         }
-@@ -1378,7 +1380,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("cmp	%e!0, #%1");
--#line 480 "rl78-decode.opc"
-+#line 482 "rl78-decode.opc"
-           ID(cmp); DM(None, IMMU(2)); SC(IMMU(1)); Fzac;
- 
-         }
-@@ -1393,7 +1395,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, #%1");
--#line 717 "rl78-decode.opc"
-+#line 719 "rl78-decode.opc"
-           ID(mov); DR(ES); SC(IMMU(1));
- 
-         }
-@@ -1408,7 +1410,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("cmpw	%0, %e!1");
--#line 531 "rl78-decode.opc"
-+#line 533 "rl78-decode.opc"
-           ID(cmp); W(); DR(AX); SM(None, IMMU(2)); Fzac;
- 
-         }
-@@ -1418,7 +1420,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0x47:
-         {
-           /** 0100 0ra1			cmpw	%0, %1				*/
--#line 540 "rl78-decode.opc"
-+#line 542 "rl78-decode.opc"
-           int ra AU = (op[0] >> 1) & 0x03;
-           if (trace)
-             {
-@@ -1428,7 +1430,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  ra = 0x%x\n", ra);
-             }
-           SYNTAX("cmpw	%0, %1");
--#line 540 "rl78-decode.opc"
-+#line 542 "rl78-decode.opc"
-           ID(cmp); W(); DR(AX); SRW(ra); Fzac;
- 
-         }
-@@ -1443,7 +1445,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("cmpw	%0, #%1");
--#line 537 "rl78-decode.opc"
-+#line 539 "rl78-decode.opc"
-           ID(cmp); W(); DR(AX); SC(IMMU(2)); Fzac;
- 
-         }
-@@ -1458,7 +1460,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("cmpw	%0, %1");
--#line 543 "rl78-decode.opc"
-+#line 545 "rl78-decode.opc"
-           ID(cmp); W(); DR(AX); SM(None, SADDR); Fzac;
- 
-         /*----------------------------------------------------------------------*/
-@@ -1475,7 +1477,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%e0, %1");
--#line 735 "rl78-decode.opc"
-+#line 737 "rl78-decode.opc"
-           ID(mov); DM(BC, IMMU(2)); SR(A);
- 
-         }
-@@ -1490,7 +1492,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %e1");
--#line 681 "rl78-decode.opc"
-+#line 683 "rl78-decode.opc"
-           ID(mov); DR(A); SM(BC, IMMU(2));
- 
-         }
-@@ -1505,7 +1507,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("cmp	%0, #%1");
--#line 483 "rl78-decode.opc"
-+#line 485 "rl78-decode.opc"
-           ID(cmp); DM(None, SADDR); SC(IMMU(1)); Fzac;
- 
-         }
-@@ -1520,7 +1522,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("cmp	%0, %1");
--#line 510 "rl78-decode.opc"
-+#line 512 "rl78-decode.opc"
-           ID(cmp); DR(A); SM(None, SADDR); Fzac;
- 
-         /*----------------------------------------------------------------------*/
-@@ -1537,7 +1539,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("cmp	%0, #%1");
--#line 501 "rl78-decode.opc"
-+#line 503 "rl78-decode.opc"
-           ID(cmp); DR(A); SC(IMMU(1)); Fzac;
- 
-         }
-@@ -1552,7 +1554,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("cmp	%0, %e1");
--#line 489 "rl78-decode.opc"
-+#line 491 "rl78-decode.opc"
-           ID(cmp); DR(A); SM(HL, 0); Fzac;
- 
-         }
-@@ -1567,7 +1569,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("cmp	%0, %ea1");
--#line 498 "rl78-decode.opc"
-+#line 500 "rl78-decode.opc"
-           ID(cmp); DR(A); SM(HL, IMMU(1)); Fzac;
- 
-         }
-@@ -1582,7 +1584,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("cmp	%0, %e!1");
--#line 486 "rl78-decode.opc"
-+#line 488 "rl78-decode.opc"
-           ID(cmp); DR(A); SM(None, IMMU(2)); Fzac;
- 
-         }
-@@ -1597,7 +1599,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0x57:
-         {
-           /** 0101 0reg			mov	%0, #%1				*/
--#line 669 "rl78-decode.opc"
-+#line 671 "rl78-decode.opc"
-           int reg AU = op[0] & 0x07;
-           if (trace)
-             {
-@@ -1607,7 +1609,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  reg = 0x%x\n", reg);
-             }
-           SYNTAX("mov	%0, #%1");
--#line 669 "rl78-decode.opc"
-+#line 671 "rl78-decode.opc"
-           ID(mov); DRB(reg); SC(IMMU(1));
- 
-         }
-@@ -1622,7 +1624,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%e0, %1");
--#line 871 "rl78-decode.opc"
-+#line 873 "rl78-decode.opc"
-           ID(mov); W(); DM(B, IMMU(2)); SR(AX);
- 
-         }
-@@ -1637,7 +1639,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, %e1");
--#line 862 "rl78-decode.opc"
-+#line 864 "rl78-decode.opc"
-           ID(mov); W(); DR(AX); SM(B, IMMU(2));
- 
-         }
-@@ -1652,7 +1654,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("and	%0, #%1");
--#line 312 "rl78-decode.opc"
-+#line 314 "rl78-decode.opc"
-           ID(and); DM(None, SADDR); SC(IMMU(1)); Fz;
- 
-         /*----------------------------------------------------------------------*/
-@@ -1669,7 +1671,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("and	%0, %1");
--#line 309 "rl78-decode.opc"
-+#line 311 "rl78-decode.opc"
-           ID(and); DR(A); SM(None, SADDR); Fz;
- 
-         }
-@@ -1684,7 +1686,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("and	%0, #%1");
--#line 300 "rl78-decode.opc"
-+#line 302 "rl78-decode.opc"
-           ID(and); DR(A); SC(IMMU(1)); Fz;
- 
-         }
-@@ -1699,7 +1701,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("and	%0, %e1");
--#line 288 "rl78-decode.opc"
-+#line 290 "rl78-decode.opc"
-           ID(and); DR(A); SM(HL, 0); Fz;
- 
-         }
-@@ -1714,7 +1716,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("and	%0, %ea1");
--#line 294 "rl78-decode.opc"
-+#line 296 "rl78-decode.opc"
-           ID(and); DR(A); SM(HL, IMMU(1)); Fz;
- 
-         }
-@@ -1729,7 +1731,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("and	%0, %e!1");
--#line 285 "rl78-decode.opc"
-+#line 287 "rl78-decode.opc"
-           ID(and); DR(A); SM(None, IMMU(2)); Fz;
- 
-         }
-@@ -1743,7 +1745,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0x67:
-         {
-           /** 0110 0rba			mov	%0, %1				*/
--#line 672 "rl78-decode.opc"
-+#line 674 "rl78-decode.opc"
-           int rba AU = op[0] & 0x07;
-           if (trace)
-             {
-@@ -1753,7 +1755,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  rba = 0x%x\n", rba);
-             }
-           SYNTAX("mov	%0, %1");
--#line 672 "rl78-decode.opc"
-+#line 674 "rl78-decode.opc"
-           ID(mov); DR(A); SRB(rba);
- 
-         }
-@@ -1772,7 +1774,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x07:
-               {
-                 /** 0110 0001 0000 0reg		add	%0, %1				*/
--#line 225 "rl78-decode.opc"
-+#line 227 "rl78-decode.opc"
-                 int reg AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -1782,7 +1784,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  reg = 0x%x\n", reg);
-                   }
-                 SYNTAX("add	%0, %1");
--#line 225 "rl78-decode.opc"
-+#line 227 "rl78-decode.opc"
-                 ID(add); DRB(reg); SR(A); Fzac;
- 
-               }
-@@ -1796,7 +1798,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x0f:
-               {
-                 /** 0110 0001 0000 1rba		add	%0, %1				*/
--#line 219 "rl78-decode.opc"
-+#line 221 "rl78-decode.opc"
-                 int rba AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -1806,7 +1808,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  rba = 0x%x\n", rba);
-                   }
-                 SYNTAX("add	%0, %1");
--#line 219 "rl78-decode.opc"
-+#line 221 "rl78-decode.opc"
-                 ID(add); DR(A); SRB(rba); Fzac;
- 
-               }
-@@ -1821,7 +1823,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("addw	%0, %ea1");
--#line 268 "rl78-decode.opc"
-+#line 270 "rl78-decode.opc"
-                 ID(add); W(); DR(AX); SM(HL, IMMU(1)); Fzac;
- 
-               }
-@@ -1836,7 +1838,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x17:
-               {
-                 /** 0110 0001 0001 0reg		addc	%0, %1				*/
--#line 254 "rl78-decode.opc"
-+#line 256 "rl78-decode.opc"
-                 int reg AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -1846,7 +1848,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  reg = 0x%x\n", reg);
-                   }
-                 SYNTAX("addc	%0, %1");
--#line 254 "rl78-decode.opc"
-+#line 256 "rl78-decode.opc"
-                 ID(addc); DRB(reg); SR(A); Fzac;
- 
-               }
-@@ -1860,7 +1862,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x1f:
-               {
-                 /** 0110 0001 0001 1rba		addc	%0, %1				*/
--#line 251 "rl78-decode.opc"
-+#line 253 "rl78-decode.opc"
-                 int rba AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -1870,7 +1872,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  rba = 0x%x\n", rba);
-                   }
-                 SYNTAX("addc	%0, %1");
--#line 251 "rl78-decode.opc"
-+#line 253 "rl78-decode.opc"
-                 ID(addc); DR(A); SRB(rba); Fzac;
- 
-               }
-@@ -1885,7 +1887,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x27:
-               {
-                 /** 0110 0001 0010 0reg		sub	%0, %1				*/
--#line 1143 "rl78-decode.opc"
-+#line 1145 "rl78-decode.opc"
-                 int reg AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -1895,7 +1897,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  reg = 0x%x\n", reg);
-                   }
-                 SYNTAX("sub	%0, %1");
--#line 1143 "rl78-decode.opc"
-+#line 1145 "rl78-decode.opc"
-                 ID(sub); DRB(reg); SR(A); Fzac;
- 
-               }
-@@ -1909,7 +1911,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x2f:
-               {
-                 /** 0110 0001 0010 1rba		sub	%0, %1				*/
--#line 1137 "rl78-decode.opc"
-+#line 1139 "rl78-decode.opc"
-                 int rba AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -1919,7 +1921,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  rba = 0x%x\n", rba);
-                   }
-                 SYNTAX("sub	%0, %1");
--#line 1137 "rl78-decode.opc"
-+#line 1139 "rl78-decode.opc"
-                 ID(sub); DR(A); SRB(rba); Fzac;
- 
-               }
-@@ -1934,7 +1936,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("subw	%0, %ea1");
--#line 1186 "rl78-decode.opc"
-+#line 1188 "rl78-decode.opc"
-                 ID(sub); W(); DR(AX); SM(HL, IMMU(1)); Fzac;
- 
-               }
-@@ -1949,7 +1951,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x37:
-               {
-                 /** 0110 0001 0011 0reg		subc	%0, %1				*/
--#line 1172 "rl78-decode.opc"
-+#line 1174 "rl78-decode.opc"
-                 int reg AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -1959,7 +1961,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  reg = 0x%x\n", reg);
-                   }
-                 SYNTAX("subc	%0, %1");
--#line 1172 "rl78-decode.opc"
-+#line 1174 "rl78-decode.opc"
-                 ID(subc); DRB(reg); SR(A); Fzac;
- 
-               }
-@@ -1973,7 +1975,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x3f:
-               {
-                 /** 0110 0001 0011 1rba		subc	%0, %1				*/
--#line 1169 "rl78-decode.opc"
-+#line 1171 "rl78-decode.opc"
-                 int rba AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -1983,7 +1985,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  rba = 0x%x\n", rba);
-                   }
-                 SYNTAX("subc	%0, %1");
--#line 1169 "rl78-decode.opc"
-+#line 1171 "rl78-decode.opc"
-                 ID(subc); DR(A); SRB(rba); Fzac;
- 
-               }
-@@ -1998,7 +2000,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x47:
-               {
-                 /** 0110 0001 0100 0reg		cmp	%0, %1				*/
--#line 507 "rl78-decode.opc"
-+#line 509 "rl78-decode.opc"
-                 int reg AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -2008,7 +2010,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  reg = 0x%x\n", reg);
-                   }
-                 SYNTAX("cmp	%0, %1");
--#line 507 "rl78-decode.opc"
-+#line 509 "rl78-decode.opc"
-                 ID(cmp); DRB(reg); SR(A); Fzac;
- 
-               }
-@@ -2022,7 +2024,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x4f:
-               {
-                 /** 0110 0001 0100 1rba		cmp	%0, %1				*/
--#line 504 "rl78-decode.opc"
-+#line 506 "rl78-decode.opc"
-                 int rba AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -2032,7 +2034,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  rba = 0x%x\n", rba);
-                   }
-                 SYNTAX("cmp	%0, %1");
--#line 504 "rl78-decode.opc"
-+#line 506 "rl78-decode.opc"
-                 ID(cmp); DR(A); SRB(rba); Fzac;
- 
-               }
-@@ -2047,7 +2049,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("cmpw	%0, %ea1");
--#line 534 "rl78-decode.opc"
-+#line 536 "rl78-decode.opc"
-                 ID(cmp); W(); DR(AX); SM(HL, IMMU(1)); Fzac;
- 
-               }
-@@ -2062,7 +2064,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x57:
-               {
-                 /** 0110 0001 0101 0reg		and	%0, %1				*/
--#line 306 "rl78-decode.opc"
-+#line 308 "rl78-decode.opc"
-                 int reg AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -2072,7 +2074,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  reg = 0x%x\n", reg);
-                   }
-                 SYNTAX("and	%0, %1");
--#line 306 "rl78-decode.opc"
-+#line 308 "rl78-decode.opc"
-                 ID(and); DRB(reg); SR(A); Fz;
- 
-               }
-@@ -2086,7 +2088,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x5f:
-               {
-                 /** 0110 0001 0101 1rba		and	%0, %1				*/
--#line 303 "rl78-decode.opc"
-+#line 305 "rl78-decode.opc"
-                 int rba AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -2096,7 +2098,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  rba = 0x%x\n", rba);
-                   }
-                 SYNTAX("and	%0, %1");
--#line 303 "rl78-decode.opc"
-+#line 305 "rl78-decode.opc"
-                 ID(and); DR(A); SRB(rba); Fz;
- 
-               }
-@@ -2111,7 +2113,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("inc	%ea0");
--#line 584 "rl78-decode.opc"
-+#line 586 "rl78-decode.opc"
-                 ID(add); DM(HL, IMMU(1)); SC(1); Fza;
- 
-               }
-@@ -2126,7 +2128,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x67:
-               {
-                 /** 0110 0001 0110 0reg		or	%0, %1				*/
--#line 961 "rl78-decode.opc"
-+#line 963 "rl78-decode.opc"
-                 int reg AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -2136,7 +2138,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  reg = 0x%x\n", reg);
-                   }
-                 SYNTAX("or	%0, %1");
--#line 961 "rl78-decode.opc"
-+#line 963 "rl78-decode.opc"
-                 ID(or); DRB(reg); SR(A); Fz;
- 
-               }
-@@ -2150,7 +2152,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x6f:
-               {
-                 /** 0110 0001 0110 1rba		or	%0, %1				*/
--#line 958 "rl78-decode.opc"
-+#line 960 "rl78-decode.opc"
-                 int rba AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -2160,7 +2162,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  rba = 0x%x\n", rba);
-                   }
-                 SYNTAX("or	%0, %1");
--#line 958 "rl78-decode.opc"
-+#line 960 "rl78-decode.opc"
-                 ID(or); DR(A); SRB(rba); Fz;
- 
-               }
-@@ -2175,7 +2177,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("dec	%ea0");
--#line 551 "rl78-decode.opc"
-+#line 553 "rl78-decode.opc"
-                 ID(sub); DM(HL, IMMU(1)); SC(1); Fza;
- 
-               }
-@@ -2190,7 +2192,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x77:
-               {
-                 /** 0110 0001 0111 0reg		xor	%0, %1				*/
--#line 1265 "rl78-decode.opc"
-+#line 1267 "rl78-decode.opc"
-                 int reg AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -2200,7 +2202,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  reg = 0x%x\n", reg);
-                   }
-                 SYNTAX("xor	%0, %1");
--#line 1265 "rl78-decode.opc"
-+#line 1267 "rl78-decode.opc"
-                 ID(xor); DRB(reg); SR(A); Fz;
- 
-               }
-@@ -2214,7 +2216,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x7f:
-               {
-                 /** 0110 0001 0111 1rba		xor	%0, %1				*/
--#line 1262 "rl78-decode.opc"
-+#line 1264 "rl78-decode.opc"
-                 int rba AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -2224,7 +2226,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  rba = 0x%x\n", rba);
-                   }
-                 SYNTAX("xor	%0, %1");
--#line 1262 "rl78-decode.opc"
-+#line 1264 "rl78-decode.opc"
-                 ID(xor); DR(A); SRB(rba); Fz;
- 
-               }
-@@ -2239,7 +2241,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("incw	%ea0");
--#line 598 "rl78-decode.opc"
-+#line 600 "rl78-decode.opc"
-                 ID(add); W(); DM(HL, IMMU(1)); SC(1);
- 
-               }
-@@ -2255,7 +2257,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("add	%0, %e1");
--#line 207 "rl78-decode.opc"
-+#line 209 "rl78-decode.opc"
-                 ID(add); DR(A); SM2(HL, B, 0); Fzac;
- 
-               }
-@@ -2270,7 +2272,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("add	%0, %e1");
--#line 213 "rl78-decode.opc"
-+#line 215 "rl78-decode.opc"
-                 ID(add); DR(A); SM2(HL, C, 0); Fzac;
- 
-               }
-@@ -2309,9 +2311,9 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xf7:
-               {
-                 /** 0110 0001 1nnn 01mm		callt	[%x0]				*/
--#line 433 "rl78-decode.opc"
-+#line 435 "rl78-decode.opc"
-                 int nnn AU = (op[1] >> 4) & 0x07;
--#line 433 "rl78-decode.opc"
-+#line 435 "rl78-decode.opc"
-                 int mm AU = op[1] & 0x03;
-                 if (trace)
-                   {
-@@ -2322,7 +2324,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  mm = 0x%x\n", mm);
-                   }
-                 SYNTAX("callt	[%x0]");
--#line 433 "rl78-decode.opc"
-+#line 435 "rl78-decode.opc"
-                 ID(call); DM(None, 0x80 + mm*16 + nnn*2);
- 
-               /*----------------------------------------------------------------------*/
-@@ -2338,7 +2340,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x8f:
-               {
-                 /** 0110 0001 1000 1reg		xch	%0, %1				*/
--#line 1224 "rl78-decode.opc"
-+#line 1226 "rl78-decode.opc"
-                 int reg AU = op[1] & 0x07;
-                 if (trace)
-                   {
-@@ -2348,7 +2350,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  reg = 0x%x\n", reg);
-                   }
-                 SYNTAX("xch	%0, %1");
--#line 1224 "rl78-decode.opc"
-+#line 1226 "rl78-decode.opc"
-                 /* Note: DECW uses reg == X, so this must follow DECW */
-                 ID(xch); DR(A); SRB(reg);
- 
-@@ -2364,7 +2366,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("decw	%ea0");
--#line 565 "rl78-decode.opc"
-+#line 567 "rl78-decode.opc"
-                 ID(sub); W(); DM(HL, IMMU(1)); SC(1);
- 
-               }
-@@ -2379,7 +2381,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("addc	%0, %e1");
--#line 239 "rl78-decode.opc"
-+#line 241 "rl78-decode.opc"
-                 ID(addc); DR(A); SM2(HL, B, 0); Fzac;
- 
-               }
-@@ -2394,7 +2396,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("addc	%0, %e1");
--#line 242 "rl78-decode.opc"
-+#line 244 "rl78-decode.opc"
-                 ID(addc); DR(A); SM2(HL, C, 0); Fzac;
- 
-               }
-@@ -2410,7 +2412,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("sub	%0, %e1");
--#line 1125 "rl78-decode.opc"
-+#line 1127 "rl78-decode.opc"
-                 ID(sub); DR(A); SM2(HL, B, 0); Fzac;
- 
-               }
-@@ -2425,7 +2427,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("sub	%0, %e1");
--#line 1131 "rl78-decode.opc"
-+#line 1133 "rl78-decode.opc"
-                 ID(sub); DR(A); SM2(HL, C, 0); Fzac;
- 
-               }
-@@ -2440,7 +2442,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("xch	%0, %1");
--#line 1228 "rl78-decode.opc"
-+#line 1230 "rl78-decode.opc"
-                 ID(xch); DR(A); SM(None, SADDR);
- 
-               }
-@@ -2455,7 +2457,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("xch	%0, %e1");
--#line 1221 "rl78-decode.opc"
-+#line 1223 "rl78-decode.opc"
-                 ID(xch); DR(A); SM2(HL, C, 0);
- 
-               }
-@@ -2470,7 +2472,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("xch	%0, %e!1");
--#line 1203 "rl78-decode.opc"
-+#line 1205 "rl78-decode.opc"
-                 ID(xch); DR(A); SM(None, IMMU(2));
- 
-               }
-@@ -2485,7 +2487,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("xch	%0, %s1");
--#line 1231 "rl78-decode.opc"
-+#line 1233 "rl78-decode.opc"
-                 ID(xch); DR(A); SM(None, SFR);
- 
-               }
-@@ -2500,7 +2502,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("xch	%0, %e1");
--#line 1212 "rl78-decode.opc"
-+#line 1214 "rl78-decode.opc"
-                 ID(xch); DR(A); SM(HL, 0);
- 
-               }
-@@ -2515,7 +2517,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("xch	%0, %ea1");
--#line 1218 "rl78-decode.opc"
-+#line 1220 "rl78-decode.opc"
-                 ID(xch); DR(A); SM(HL, IMMU(1));
- 
-               }
-@@ -2530,7 +2532,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("xch	%0, %e1");
--#line 1206 "rl78-decode.opc"
-+#line 1208 "rl78-decode.opc"
-                 ID(xch); DR(A); SM(DE, 0);
- 
-               }
-@@ -2545,7 +2547,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("xch	%0, %ea1");
--#line 1209 "rl78-decode.opc"
-+#line 1211 "rl78-decode.opc"
-                 ID(xch); DR(A); SM(DE, IMMU(1));
- 
-               }
-@@ -2560,7 +2562,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("subc	%0, %e1");
--#line 1157 "rl78-decode.opc"
-+#line 1159 "rl78-decode.opc"
-                 ID(subc); DR(A); SM2(HL, B, 0); Fzac;
- 
-               }
-@@ -2575,7 +2577,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("subc	%0, %e1");
--#line 1160 "rl78-decode.opc"
-+#line 1162 "rl78-decode.opc"
-                 ID(subc); DR(A); SM2(HL, C, 0); Fzac;
- 
-               }
-@@ -2590,7 +2592,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("mov	%0, %1");
--#line 723 "rl78-decode.opc"
-+#line 725 "rl78-decode.opc"
-                 ID(mov); DR(ES); SM(None, SADDR);
- 
-               }
-@@ -2605,7 +2607,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("xch	%0, %e1");
--#line 1215 "rl78-decode.opc"
-+#line 1217 "rl78-decode.opc"
-                 ID(xch); DR(A); SM2(HL, B, 0);
- 
-               }
-@@ -2620,7 +2622,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("cmp	%0, %e1");
--#line 492 "rl78-decode.opc"
-+#line 494 "rl78-decode.opc"
-                 ID(cmp); DR(A); SM2(HL, B, 0); Fzac;
- 
-               }
-@@ -2635,7 +2637,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("cmp	%0, %e1");
--#line 495 "rl78-decode.opc"
-+#line 497 "rl78-decode.opc"
-                 ID(cmp); DR(A); SM2(HL, C, 0); Fzac;
- 
-               }
-@@ -2650,7 +2652,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("bh	$%a0");
--#line 340 "rl78-decode.opc"
-+#line 342 "rl78-decode.opc"
-                 ID(branch_cond); DC(pc+IMMS(1)+3); SR(None); COND(H);
- 
-               }
-@@ -2665,7 +2667,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("sk%c1");
--#line 1094 "rl78-decode.opc"
-+#line 1096 "rl78-decode.opc"
-                 ID(skip); COND(C);
- 
-               }
-@@ -2680,7 +2682,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("mov	%0, %e1");
--#line 660 "rl78-decode.opc"
-+#line 662 "rl78-decode.opc"
-                 ID(mov); DR(A); SM2(HL, B, 0);
- 
-               }
-@@ -2691,7 +2693,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xfa:
-               {
-                 /** 0110 0001 11rg 1010		call	%0				*/
--#line 430 "rl78-decode.opc"
-+#line 432 "rl78-decode.opc"
-                 int rg AU = (op[1] >> 4) & 0x03;
-                 if (trace)
-                   {
-@@ -2701,7 +2703,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  rg = 0x%x\n", rg);
-                   }
-                 SYNTAX("call	%0");
--#line 430 "rl78-decode.opc"
-+#line 432 "rl78-decode.opc"
-                 ID(call); DRW(rg);
- 
-               }
-@@ -2716,7 +2718,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("br	ax");
--#line 380 "rl78-decode.opc"
-+#line 382 "rl78-decode.opc"
-                 ID(branch); DR(AX);
- 
-               /*----------------------------------------------------------------------*/
-@@ -2733,7 +2735,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("brk");
--#line 388 "rl78-decode.opc"
-+#line 390 "rl78-decode.opc"
-                 ID(break);
- 
-               /*----------------------------------------------------------------------*/
-@@ -2750,7 +2752,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("pop	%s0");
--#line 989 "rl78-decode.opc"
-+#line 991 "rl78-decode.opc"
-                 ID(mov); W(); DR(PSW); SPOP();
- 
-               /*----------------------------------------------------------------------*/
-@@ -2767,7 +2769,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("movs	%ea0, %1");
--#line 811 "rl78-decode.opc"
-+#line 813 "rl78-decode.opc"
-                 ID(mov); DM(HL, IMMU(1)); SR(X); Fzc;
- 
-               /*----------------------------------------------------------------------*/
-@@ -2780,7 +2782,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xff:
-               {
-                 /** 0110 0001 11rb 1111		sel	rb%1				*/
--#line 1041 "rl78-decode.opc"
-+#line 1043 "rl78-decode.opc"
-                 int rb AU = (op[1] >> 4) & 0x03;
-                 if (trace)
-                   {
-@@ -2790,7 +2792,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  rb = 0x%x\n", rb);
-                   }
-                 SYNTAX("sel	rb%1");
--#line 1041 "rl78-decode.opc"
-+#line 1043 "rl78-decode.opc"
-                 ID(sel); SC(rb);
- 
-               /*----------------------------------------------------------------------*/
-@@ -2807,7 +2809,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("and	%0, %e1");
--#line 291 "rl78-decode.opc"
-+#line 293 "rl78-decode.opc"
-                 ID(and); DR(A); SM2(HL, B, 0); Fz;
- 
-               }
-@@ -2822,7 +2824,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("and	%0, %e1");
--#line 297 "rl78-decode.opc"
-+#line 299 "rl78-decode.opc"
-                 ID(and); DR(A); SM2(HL, C, 0); Fz;
- 
-               }
-@@ -2837,7 +2839,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("bnh	$%a0");
--#line 343 "rl78-decode.opc"
-+#line 345 "rl78-decode.opc"
-                 ID(branch_cond); DC(pc+IMMS(1)+3); SR(None); COND(NH);
- 
-               }
-@@ -2852,7 +2854,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("sk%c1");
--#line 1100 "rl78-decode.opc"
-+#line 1102 "rl78-decode.opc"
-                 ID(skip); COND(NC);
- 
-               }
-@@ -2867,7 +2869,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("mov	%e0, %1");
--#line 627 "rl78-decode.opc"
-+#line 629 "rl78-decode.opc"
-                 ID(mov); DM2(HL, B, 0); SR(A);
- 
-               }
-@@ -2882,7 +2884,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("ror	%0, %1");
--#line 1022 "rl78-decode.opc"
-+#line 1024 "rl78-decode.opc"
-                 ID(ror); DR(A); SC(1);
- 
-               }
-@@ -2897,7 +2899,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("rolc	%0, %1");
--#line 1016 "rl78-decode.opc"
-+#line 1018 "rl78-decode.opc"
-                 ID(rolc); DR(A); SC(1);
- 
-               }
-@@ -2912,7 +2914,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("push	%s1");
--#line 997 "rl78-decode.opc"
-+#line 999 "rl78-decode.opc"
-                 ID(mov); W(); DPUSH(); SR(PSW);
- 
-               /*----------------------------------------------------------------------*/
-@@ -2929,7 +2931,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("cmps	%0, %ea1");
--#line 526 "rl78-decode.opc"
-+#line 528 "rl78-decode.opc"
-                 ID(cmp); DR(X); SM(HL, IMMU(1)); Fzac;
- 
-               /*----------------------------------------------------------------------*/
-@@ -2946,7 +2948,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("or	%0, %e1");
--#line 946 "rl78-decode.opc"
-+#line 948 "rl78-decode.opc"
-                 ID(or); DR(A); SM2(HL, B, 0); Fz;
- 
-               }
-@@ -2961,7 +2963,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("or	%0, %e1");
--#line 952 "rl78-decode.opc"
-+#line 954 "rl78-decode.opc"
-                 ID(or); DR(A); SM2(HL, C, 0); Fz;
- 
-               }
-@@ -2976,7 +2978,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("sk%c1");
--#line 1097 "rl78-decode.opc"
-+#line 1099 "rl78-decode.opc"
-                 ID(skip); COND(H);
- 
-               }
-@@ -2991,7 +2993,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("sk%c1");
--#line 1109 "rl78-decode.opc"
-+#line 1111 "rl78-decode.opc"
-                 ID(skip); COND(Z);
- 
-               /*----------------------------------------------------------------------*/
-@@ -3008,7 +3010,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("mov	%0, %e1");
--#line 663 "rl78-decode.opc"
-+#line 665 "rl78-decode.opc"
-                 ID(mov); DR(A); SM2(HL, C, 0);
- 
-               }
-@@ -3023,7 +3025,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("rol	%0, %1");
--#line 1013 "rl78-decode.opc"
-+#line 1015 "rl78-decode.opc"
-                 ID(rol); DR(A); SC(1);
- 
-               }
-@@ -3038,7 +3040,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("retb");
--#line 1008 "rl78-decode.opc"
-+#line 1010 "rl78-decode.opc"
-                 ID(reti);
- 
-               /*----------------------------------------------------------------------*/
-@@ -3055,7 +3057,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("halt");
--#line 576 "rl78-decode.opc"
-+#line 578 "rl78-decode.opc"
-                 ID(halt);
- 
-               /*----------------------------------------------------------------------*/
-@@ -3066,7 +3068,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xfe:
-               {
-                 /** 0110 0001 111r 1110		rolwc	%0, %1				*/
--#line 1019 "rl78-decode.opc"
-+#line 1021 "rl78-decode.opc"
-                 int r AU = (op[1] >> 4) & 0x01;
-                 if (trace)
-                   {
-@@ -3076,7 +3078,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  r = 0x%x\n", r);
-                   }
-                 SYNTAX("rolwc	%0, %1");
--#line 1019 "rl78-decode.opc"
-+#line 1021 "rl78-decode.opc"
-                 ID(rolc); W(); DRW(r); SC(1);
- 
-               }
-@@ -3091,7 +3093,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("xor	%0, %e1");
--#line 1250 "rl78-decode.opc"
-+#line 1252 "rl78-decode.opc"
-                 ID(xor); DR(A); SM2(HL, B, 0); Fz;
- 
-               }
-@@ -3106,7 +3108,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("xor	%0, %e1");
--#line 1256 "rl78-decode.opc"
-+#line 1258 "rl78-decode.opc"
-                 ID(xor); DR(A); SM2(HL, C, 0); Fz;
- 
-               }
-@@ -3121,7 +3123,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("sk%c1");
--#line 1103 "rl78-decode.opc"
-+#line 1105 "rl78-decode.opc"
-                 ID(skip); COND(NH);
- 
-               }
-@@ -3136,7 +3138,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("sk%c1");
--#line 1106 "rl78-decode.opc"
-+#line 1108 "rl78-decode.opc"
-                 ID(skip); COND(NZ);
- 
-               }
-@@ -3151,7 +3153,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("mov	%e0, %1");
--#line 636 "rl78-decode.opc"
-+#line 638 "rl78-decode.opc"
-                 ID(mov); DM2(HL, C, 0); SR(A);
- 
-               }
-@@ -3166,7 +3168,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("rorc	%0, %1");
--#line 1025 "rl78-decode.opc"
-+#line 1027 "rl78-decode.opc"
-                 ID(rorc); DR(A); SC(1);
- 
-               /*----------------------------------------------------------------------*/
-@@ -3186,7 +3188,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("reti");
--#line 1005 "rl78-decode.opc"
-+#line 1007 "rl78-decode.opc"
-                 ID(reti);
- 
-               }
-@@ -3201,7 +3203,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("stop");
--#line 1114 "rl78-decode.opc"
-+#line 1116 "rl78-decode.opc"
-                 ID(stop);
- 
-               /*----------------------------------------------------------------------*/
-@@ -3221,7 +3223,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%e0, %1");
--#line 874 "rl78-decode.opc"
-+#line 876 "rl78-decode.opc"
-           ID(mov); W(); DM(C, IMMU(2)); SR(AX);
- 
-         }
-@@ -3236,7 +3238,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, %e1");
--#line 865 "rl78-decode.opc"
-+#line 867 "rl78-decode.opc"
-           ID(mov); W(); DR(AX); SM(C, IMMU(2));
- 
-         }
-@@ -3251,7 +3253,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("or	%0, #%1");
--#line 967 "rl78-decode.opc"
-+#line 969 "rl78-decode.opc"
-           ID(or); DM(None, SADDR); SC(IMMU(1)); Fz;
- 
-         /*----------------------------------------------------------------------*/
-@@ -3268,7 +3270,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("or	%0, %1");
--#line 964 "rl78-decode.opc"
-+#line 966 "rl78-decode.opc"
-           ID(or); DR(A); SM(None, SADDR); Fz;
- 
-         }
-@@ -3283,7 +3285,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("or	%0, #%1");
--#line 955 "rl78-decode.opc"
-+#line 957 "rl78-decode.opc"
-           ID(or); DR(A); SC(IMMU(1)); Fz;
- 
-         }
-@@ -3298,7 +3300,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("or	%0, %e1");
--#line 943 "rl78-decode.opc"
-+#line 945 "rl78-decode.opc"
-           ID(or); DR(A); SM(HL, 0); Fz;
- 
-         }
-@@ -3313,7 +3315,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("or	%0, %ea1");
--#line 949 "rl78-decode.opc"
-+#line 951 "rl78-decode.opc"
-           ID(or); DR(A); SM(HL, IMMU(1)); Fz;
- 
-         }
-@@ -3328,7 +3330,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("or	%0, %e!1");
--#line 940 "rl78-decode.opc"
-+#line 942 "rl78-decode.opc"
-           ID(or); DR(A); SM(None, IMMU(2)); Fz;
- 
-         }
-@@ -3342,7 +3344,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0x77:
-         {
-           /** 0111 0rba			mov	%0, %1				*/
--#line 696 "rl78-decode.opc"
-+#line 698 "rl78-decode.opc"
-           int rba AU = op[0] & 0x07;
-           if (trace)
-             {
-@@ -3352,7 +3354,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  rba = 0x%x\n", rba);
-             }
-           SYNTAX("mov	%0, %1");
--#line 696 "rl78-decode.opc"
-+#line 698 "rl78-decode.opc"
-           ID(mov); DRB(rba); SR(A);
- 
-         }
-@@ -3371,7 +3373,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x70:
-               {
-                 /** 0111 0001 0bit 0000		set1	%e!0				*/
--#line 1046 "rl78-decode.opc"
-+#line 1048 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3381,7 +3383,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("set1	%e!0");
--#line 1046 "rl78-decode.opc"
-+#line 1048 "rl78-decode.opc"
-                 ID(mov); DM(None, IMMU(2)); DB(bit); SC(1);
- 
-               }
-@@ -3396,7 +3398,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x71:
-               {
-                 /** 0111 0001 0bit 0001		mov1	%0, cy				*/
--#line 803 "rl78-decode.opc"
-+#line 805 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3406,7 +3408,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("mov1	%0, cy");
--#line 803 "rl78-decode.opc"
-+#line 805 "rl78-decode.opc"
-                 ID(mov); DM(None, SADDR); DB(bit); SCY();
- 
-               }
-@@ -3421,7 +3423,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x72:
-               {
-                 /** 0111 0001 0bit 0010		set1	%0				*/
--#line 1064 "rl78-decode.opc"
-+#line 1066 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3431,7 +3433,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("set1	%0");
--#line 1064 "rl78-decode.opc"
-+#line 1066 "rl78-decode.opc"
-                 ID(mov); DM(None, SADDR); DB(bit); SC(1);
- 
-               /*----------------------------------------------------------------------*/
-@@ -3448,7 +3450,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x73:
-               {
-                 /** 0111 0001 0bit 0011		clr1	%0				*/
--#line 456 "rl78-decode.opc"
-+#line 458 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3458,7 +3460,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("clr1	%0");
--#line 456 "rl78-decode.opc"
-+#line 458 "rl78-decode.opc"
-                 ID(mov); DM(None, SADDR); DB(bit); SC(0);
- 
-               /*----------------------------------------------------------------------*/
-@@ -3475,7 +3477,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x74:
-               {
-                 /** 0111 0001 0bit 0100		mov1	cy, %1				*/
--#line 797 "rl78-decode.opc"
-+#line 799 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3485,7 +3487,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("mov1	cy, %1");
--#line 797 "rl78-decode.opc"
-+#line 799 "rl78-decode.opc"
-                 ID(mov); DCY(); SM(None, SADDR); SB(bit);
- 
-               }
-@@ -3500,7 +3502,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x75:
-               {
-                 /** 0111 0001 0bit 0101		and1	cy, %s1				*/
--#line 326 "rl78-decode.opc"
-+#line 328 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3510,7 +3512,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("and1	cy, %s1");
--#line 326 "rl78-decode.opc"
-+#line 328 "rl78-decode.opc"
-                 ID(and); DCY(); SM(None, SADDR); SB(bit);
- 
-               /*----------------------------------------------------------------------*/
-@@ -3530,7 +3532,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x76:
-               {
-                 /** 0111 0001 0bit 0110		or1	cy, %s1				*/
--#line 981 "rl78-decode.opc"
-+#line 983 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3540,7 +3542,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("or1	cy, %s1");
--#line 981 "rl78-decode.opc"
-+#line 983 "rl78-decode.opc"
-                 ID(or); DCY(); SM(None, SADDR); SB(bit);
- 
-               /*----------------------------------------------------------------------*/
-@@ -3557,7 +3559,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x77:
-               {
-                 /** 0111 0001 0bit 0111		xor1	cy, %s1				*/
--#line 1285 "rl78-decode.opc"
-+#line 1287 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3567,7 +3569,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("xor1	cy, %s1");
--#line 1285 "rl78-decode.opc"
-+#line 1287 "rl78-decode.opc"
-                 ID(xor); DCY(); SM(None, SADDR); SB(bit);
- 
-               /*----------------------------------------------------------------------*/
-@@ -3584,7 +3586,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x78:
-               {
-                 /** 0111 0001 0bit 1000		clr1	%e!0				*/
--#line 438 "rl78-decode.opc"
-+#line 440 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3594,7 +3596,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("clr1	%e!0");
--#line 438 "rl78-decode.opc"
-+#line 440 "rl78-decode.opc"
-                 ID(mov); DM(None, IMMU(2)); DB(bit); SC(0);
- 
-               }
-@@ -3609,7 +3611,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x79:
-               {
-                 /** 0111 0001 0bit 1001		mov1	%s0, cy				*/
--#line 806 "rl78-decode.opc"
-+#line 808 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3619,7 +3621,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("mov1	%s0, cy");
--#line 806 "rl78-decode.opc"
-+#line 808 "rl78-decode.opc"
-                 ID(mov); DM(None, SFR); DB(bit); SCY();
- 
-               /*----------------------------------------------------------------------*/
-@@ -3636,7 +3638,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x7a:
-               {
-                 /** 0111 0001 0bit 1010		set1	%s0				*/
--#line 1058 "rl78-decode.opc"
-+#line 1060 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3646,7 +3648,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("set1	%s0");
--#line 1058 "rl78-decode.opc"
-+#line 1060 "rl78-decode.opc"
-                 op0 = SFR;
-                 ID(mov); DM(None, op0); DB(bit); SC(1);
-                 if (op0 == RL78_SFR_PSW && bit == 7)
-@@ -3664,7 +3666,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x7b:
-               {
-                 /** 0111 0001 0bit 1011		clr1	%s0				*/
--#line 450 "rl78-decode.opc"
-+#line 452 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3674,7 +3676,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("clr1	%s0");
--#line 450 "rl78-decode.opc"
-+#line 452 "rl78-decode.opc"
-                 op0 = SFR;
-                 ID(mov); DM(None, op0); DB(bit); SC(0);
-                 if (op0 == RL78_SFR_PSW && bit == 7)
-@@ -3692,7 +3694,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x7c:
-               {
-                 /** 0111 0001 0bit 1100		mov1	cy, %s1				*/
--#line 800 "rl78-decode.opc"
-+#line 802 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3702,7 +3704,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("mov1	cy, %s1");
--#line 800 "rl78-decode.opc"
-+#line 802 "rl78-decode.opc"
-                 ID(mov); DCY(); SM(None, SFR); SB(bit);
- 
-               }
-@@ -3717,7 +3719,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x7d:
-               {
-                 /** 0111 0001 0bit 1101		and1	cy, %s1				*/
--#line 323 "rl78-decode.opc"
-+#line 325 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3727,7 +3729,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("and1	cy, %s1");
--#line 323 "rl78-decode.opc"
-+#line 325 "rl78-decode.opc"
-                 ID(and); DCY(); SM(None, SFR); SB(bit);
- 
-               }
-@@ -3742,7 +3744,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x7e:
-               {
-                 /** 0111 0001 0bit 1110		or1	cy, %s1				*/
--#line 978 "rl78-decode.opc"
-+#line 980 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3752,7 +3754,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("or1	cy, %s1");
--#line 978 "rl78-decode.opc"
-+#line 980 "rl78-decode.opc"
-                 ID(or); DCY(); SM(None, SFR); SB(bit);
- 
-               }
-@@ -3767,7 +3769,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0x7f:
-               {
-                 /** 0111 0001 0bit 1111		xor1	cy, %s1				*/
--#line 1282 "rl78-decode.opc"
-+#line 1284 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3777,7 +3779,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("xor1	cy, %s1");
--#line 1282 "rl78-decode.opc"
-+#line 1284 "rl78-decode.opc"
-                 ID(xor); DCY(); SM(None, SFR); SB(bit);
- 
-               }
-@@ -3792,7 +3794,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("set1	cy");
--#line 1055 "rl78-decode.opc"
-+#line 1057 "rl78-decode.opc"
-                 ID(mov); DCY(); SC(1);
- 
-               }
-@@ -3807,7 +3809,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xf1:
-               {
-                 /** 0111 0001 1bit 0001		mov1	%e0, cy				*/
--#line 785 "rl78-decode.opc"
-+#line 787 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3817,7 +3819,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("mov1	%e0, cy");
--#line 785 "rl78-decode.opc"
-+#line 787 "rl78-decode.opc"
-                 ID(mov); DM(HL, 0); DB(bit); SCY();
- 
-               }
-@@ -3832,7 +3834,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xf2:
-               {
-                 /** 0111 0001 1bit 0010		set1	%e0				*/
--#line 1049 "rl78-decode.opc"
-+#line 1051 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3842,7 +3844,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("set1	%e0");
--#line 1049 "rl78-decode.opc"
-+#line 1051 "rl78-decode.opc"
-                 ID(mov); DM(HL, 0); DB(bit); SC(1);
- 
-               }
-@@ -3857,7 +3859,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xf3:
-               {
-                 /** 0111 0001 1bit 0011		clr1	%e0				*/
--#line 441 "rl78-decode.opc"
-+#line 443 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3867,7 +3869,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("clr1	%e0");
--#line 441 "rl78-decode.opc"
-+#line 443 "rl78-decode.opc"
-                 ID(mov); DM(HL, 0); DB(bit); SC(0);
- 
-               }
-@@ -3882,7 +3884,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xf4:
-               {
-                 /** 0111 0001 1bit 0100		mov1	cy, %e1				*/
--#line 791 "rl78-decode.opc"
-+#line 793 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3892,7 +3894,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("mov1	cy, %e1");
--#line 791 "rl78-decode.opc"
-+#line 793 "rl78-decode.opc"
-                 ID(mov); DCY(); SM(HL, 0); SB(bit);
- 
-               }
-@@ -3907,7 +3909,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xf5:
-               {
-                 /** 0111 0001 1bit 0101		and1	cy, %e1			*/
--#line 317 "rl78-decode.opc"
-+#line 319 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3917,7 +3919,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("and1	cy, %e1");
--#line 317 "rl78-decode.opc"
-+#line 319 "rl78-decode.opc"
-                 ID(and); DCY(); SM(HL, 0); SB(bit);
- 
-               }
-@@ -3932,7 +3934,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xf6:
-               {
-                 /** 0111 0001 1bit 0110		or1	cy, %e1				*/
--#line 972 "rl78-decode.opc"
-+#line 974 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3942,7 +3944,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("or1	cy, %e1");
--#line 972 "rl78-decode.opc"
-+#line 974 "rl78-decode.opc"
-                 ID(or); DCY(); SM(HL, 0); SB(bit);
- 
-               }
-@@ -3957,7 +3959,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xf7:
-               {
-                 /** 0111 0001 1bit 0111		xor1	cy, %e1				*/
--#line 1276 "rl78-decode.opc"
-+#line 1278 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -3967,7 +3969,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("xor1	cy, %e1");
--#line 1276 "rl78-decode.opc"
-+#line 1278 "rl78-decode.opc"
-                 ID(xor); DCY(); SM(HL, 0); SB(bit);
- 
-               }
-@@ -3982,7 +3984,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("clr1	cy");
--#line 447 "rl78-decode.opc"
-+#line 449 "rl78-decode.opc"
-                 ID(mov); DCY(); SC(0);
- 
-               }
-@@ -3997,7 +3999,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xf9:
-               {
-                 /** 0111 0001 1bit 1001		mov1	%e0, cy				*/
--#line 788 "rl78-decode.opc"
-+#line 790 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -4007,7 +4009,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("mov1	%e0, cy");
--#line 788 "rl78-decode.opc"
-+#line 790 "rl78-decode.opc"
-                 ID(mov); DR(A); DB(bit); SCY();
- 
-               }
-@@ -4022,7 +4024,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xfa:
-               {
-                 /** 0111 0001 1bit 1010		set1	%0				*/
--#line 1052 "rl78-decode.opc"
-+#line 1054 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -4032,7 +4034,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("set1	%0");
--#line 1052 "rl78-decode.opc"
-+#line 1054 "rl78-decode.opc"
-                 ID(mov); DR(A); DB(bit); SC(1);
- 
-               }
-@@ -4047,7 +4049,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xfb:
-               {
-                 /** 0111 0001 1bit 1011		clr1	%0				*/
--#line 444 "rl78-decode.opc"
-+#line 446 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -4057,7 +4059,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("clr1	%0");
--#line 444 "rl78-decode.opc"
-+#line 446 "rl78-decode.opc"
-                 ID(mov); DR(A); DB(bit); SC(0);
- 
-               }
-@@ -4072,7 +4074,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xfc:
-               {
-                 /** 0111 0001 1bit 1100		mov1	cy, %e1				*/
--#line 794 "rl78-decode.opc"
-+#line 796 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -4082,7 +4084,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("mov1	cy, %e1");
--#line 794 "rl78-decode.opc"
-+#line 796 "rl78-decode.opc"
-                 ID(mov); DCY(); SR(A); SB(bit);
- 
-               }
-@@ -4097,7 +4099,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xfd:
-               {
-                 /** 0111 0001 1bit 1101		and1	cy, %1				*/
--#line 320 "rl78-decode.opc"
-+#line 322 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -4107,7 +4109,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("and1	cy, %1");
--#line 320 "rl78-decode.opc"
-+#line 322 "rl78-decode.opc"
-                 ID(and); DCY(); SR(A); SB(bit);
- 
-               }
-@@ -4122,7 +4124,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xfe:
-               {
-                 /** 0111 0001 1bit 1110		or1	cy, %1				*/
--#line 975 "rl78-decode.opc"
-+#line 977 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -4132,7 +4134,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("or1	cy, %1");
--#line 975 "rl78-decode.opc"
-+#line 977 "rl78-decode.opc"
-                 ID(or); DCY(); SR(A); SB(bit);
- 
-               }
-@@ -4147,7 +4149,7 @@ rl78_decode_opcode (unsigned long pc AU,
-           case 0xff:
-               {
-                 /** 0111 0001 1bit 1111		xor1	cy, %1				*/
--#line 1279 "rl78-decode.opc"
-+#line 1281 "rl78-decode.opc"
-                 int bit AU = (op[1] >> 4) & 0x07;
-                 if (trace)
-                   {
-@@ -4157,7 +4159,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                     printf ("  bit = 0x%x\n", bit);
-                   }
-                 SYNTAX("xor1	cy, %1");
--#line 1279 "rl78-decode.opc"
-+#line 1281 "rl78-decode.opc"
-                 ID(xor); DCY(); SR(A); SB(bit);
- 
-               }
-@@ -4172,7 +4174,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                            op[0], op[1]);
-                   }
-                 SYNTAX("not1	cy");
--#line 916 "rl78-decode.opc"
-+#line 918 "rl78-decode.opc"
-                 ID(xor); DCY(); SC(1);
- 
-               /*----------------------------------------------------------------------*/
-@@ -4192,7 +4194,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%e0, %1");
--#line 877 "rl78-decode.opc"
-+#line 879 "rl78-decode.opc"
-           ID(mov); W(); DM(BC, IMMU(2)); SR(AX);
- 
-         }
-@@ -4207,7 +4209,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, %e1");
--#line 868 "rl78-decode.opc"
-+#line 870 "rl78-decode.opc"
-           ID(mov); W(); DR(AX); SM(BC, IMMU(2));
- 
-         }
-@@ -4222,7 +4224,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("xor	%0, #%1");
--#line 1271 "rl78-decode.opc"
-+#line 1273 "rl78-decode.opc"
-           ID(xor); DM(None, SADDR); SC(IMMU(1)); Fz;
- 
-         /*----------------------------------------------------------------------*/
-@@ -4239,7 +4241,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("xor	%0, %1");
--#line 1268 "rl78-decode.opc"
-+#line 1270 "rl78-decode.opc"
-           ID(xor); DR(A); SM(None, SADDR); Fz;
- 
-         }
-@@ -4254,7 +4256,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("xor	%0, #%1");
--#line 1259 "rl78-decode.opc"
-+#line 1261 "rl78-decode.opc"
-           ID(xor); DR(A); SC(IMMU(1)); Fz;
- 
-         }
-@@ -4269,7 +4271,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("xor	%0, %e1");
--#line 1247 "rl78-decode.opc"
-+#line 1249 "rl78-decode.opc"
-           ID(xor); DR(A); SM(HL, 0); Fz;
- 
-         }
-@@ -4284,7 +4286,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("xor	%0, %ea1");
--#line 1253 "rl78-decode.opc"
-+#line 1255 "rl78-decode.opc"
-           ID(xor); DR(A); SM(HL, IMMU(1)); Fz;
- 
-         }
-@@ -4299,7 +4301,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("xor	%0, %e!1");
--#line 1244 "rl78-decode.opc"
-+#line 1246 "rl78-decode.opc"
-           ID(xor); DR(A); SM(None, IMMU(2)); Fz;
- 
-         }
-@@ -4314,7 +4316,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0x87:
-         {
-           /** 1000 0reg			inc	%0				*/
--#line 587 "rl78-decode.opc"
-+#line 589 "rl78-decode.opc"
-           int reg AU = op[0] & 0x07;
-           if (trace)
-             {
-@@ -4324,7 +4326,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  reg = 0x%x\n", reg);
-             }
-           SYNTAX("inc	%0");
--#line 587 "rl78-decode.opc"
-+#line 589 "rl78-decode.opc"
-           ID(add); DRB(reg); SC(1); Fza;
- 
-         }
-@@ -4339,7 +4341,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %ea1");
--#line 666 "rl78-decode.opc"
-+#line 668 "rl78-decode.opc"
-           ID(mov); DR(A); SM(SP, IMMU(1));
- 
-         }
-@@ -4354,7 +4356,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %e1");
--#line 648 "rl78-decode.opc"
-+#line 650 "rl78-decode.opc"
-           ID(mov); DR(A); SM(DE, 0);
- 
-         }
-@@ -4369,7 +4371,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %ea1");
--#line 651 "rl78-decode.opc"
-+#line 653 "rl78-decode.opc"
-           ID(mov); DR(A); SM(DE, IMMU(1));
- 
-         }
-@@ -4384,7 +4386,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %e1");
--#line 654 "rl78-decode.opc"
-+#line 656 "rl78-decode.opc"
-           ID(mov); DR(A); SM(HL, 0);
- 
-         }
-@@ -4399,7 +4401,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %ea1");
--#line 657 "rl78-decode.opc"
-+#line 659 "rl78-decode.opc"
-           ID(mov); DR(A); SM(HL, IMMU(1));
- 
-         }
-@@ -4414,7 +4416,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %1");
--#line 690 "rl78-decode.opc"
-+#line 692 "rl78-decode.opc"
-           ID(mov); DR(A); SM(None, SADDR);
- 
-         }
-@@ -4429,7 +4431,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %s1");
--#line 687 "rl78-decode.opc"
-+#line 689 "rl78-decode.opc"
-           ID(mov); DR(A); SM(None, SFR);
- 
-         }
-@@ -4444,7 +4446,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %e!1");
--#line 645 "rl78-decode.opc"
-+#line 647 "rl78-decode.opc"
-           ID(mov); DR(A); SM(None, IMMU(2));
- 
-         }
-@@ -4459,7 +4461,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0x97:
-         {
-           /** 1001 0reg			dec	%0				*/
--#line 554 "rl78-decode.opc"
-+#line 556 "rl78-decode.opc"
-           int reg AU = op[0] & 0x07;
-           if (trace)
-             {
-@@ -4469,7 +4471,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  reg = 0x%x\n", reg);
-             }
-           SYNTAX("dec	%0");
--#line 554 "rl78-decode.opc"
-+#line 556 "rl78-decode.opc"
-           ID(sub); DRB(reg); SC(1); Fza;
- 
-         }
-@@ -4484,7 +4486,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%a0, %1");
--#line 642 "rl78-decode.opc"
-+#line 644 "rl78-decode.opc"
-           ID(mov); DM(SP, IMMU(1)); SR(A);
- 
-         }
-@@ -4499,7 +4501,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%e0, %1");
--#line 615 "rl78-decode.opc"
-+#line 617 "rl78-decode.opc"
-           ID(mov); DM(DE, 0); SR(A);
- 
-         }
-@@ -4514,7 +4516,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%ea0, %1");
--#line 621 "rl78-decode.opc"
-+#line 623 "rl78-decode.opc"
-           ID(mov); DM(DE, IMMU(1)); SR(A);
- 
-         }
-@@ -4529,7 +4531,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%e0, %1");
--#line 624 "rl78-decode.opc"
-+#line 626 "rl78-decode.opc"
-           ID(mov); DM(HL, 0); SR(A);
- 
-         }
-@@ -4544,7 +4546,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%ea0, %1");
--#line 633 "rl78-decode.opc"
-+#line 635 "rl78-decode.opc"
-           ID(mov); DM(HL, IMMU(1)); SR(A);
- 
-         }
-@@ -4559,7 +4561,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %1");
--#line 747 "rl78-decode.opc"
-+#line 749 "rl78-decode.opc"
-           ID(mov); DM(None, SADDR); SR(A);
- 
-         }
-@@ -4574,7 +4576,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%s0, %1");
--#line 780 "rl78-decode.opc"
-+#line 782 "rl78-decode.opc"
-           ID(mov); DM(None, SFR); SR(A);
- 
-         /*----------------------------------------------------------------------*/
-@@ -4591,7 +4593,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%e!0, %1");
--#line 612 "rl78-decode.opc"
-+#line 614 "rl78-decode.opc"
-           ID(mov); DM(None, IMMU(2)); SR(A);
- 
-         }
-@@ -4606,7 +4608,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("inc	%e!0");
--#line 581 "rl78-decode.opc"
-+#line 583 "rl78-decode.opc"
-           ID(add); DM(None, IMMU(2)); SC(1); Fza;
- 
-         }
-@@ -4617,7 +4619,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0xa7:
-         {
-           /** 1010 0rg1			incw	%0				*/
--#line 601 "rl78-decode.opc"
-+#line 603 "rl78-decode.opc"
-           int rg AU = (op[0] >> 1) & 0x03;
-           if (trace)
-             {
-@@ -4627,7 +4629,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  rg = 0x%x\n", rg);
-             }
-           SYNTAX("incw	%0");
--#line 601 "rl78-decode.opc"
-+#line 603 "rl78-decode.opc"
-           ID(add); W(); DRW(rg); SC(1);
- 
-         }
-@@ -4642,7 +4644,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("incw	%e!0");
--#line 595 "rl78-decode.opc"
-+#line 597 "rl78-decode.opc"
-           ID(add); W(); DM(None, IMMU(2)); SC(1);
- 
-         }
-@@ -4657,7 +4659,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("inc	%0");
--#line 590 "rl78-decode.opc"
-+#line 592 "rl78-decode.opc"
-           ID(add); DM(None, SADDR); SC(1); Fza;
- 
-         /*----------------------------------------------------------------------*/
-@@ -4674,7 +4676,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("incw	%0");
--#line 604 "rl78-decode.opc"
-+#line 606 "rl78-decode.opc"
-           ID(add); W(); DM(None, SADDR); SC(1);
- 
-         /*----------------------------------------------------------------------*/
-@@ -4691,7 +4693,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, %a1");
--#line 850 "rl78-decode.opc"
-+#line 852 "rl78-decode.opc"
-           ID(mov); W(); DR(AX); SM(SP, IMMU(1));
- 
-         }
-@@ -4706,7 +4708,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, %e1");
--#line 838 "rl78-decode.opc"
-+#line 840 "rl78-decode.opc"
-           ID(mov); W(); DR(AX); SM(DE, 0);
- 
-         }
-@@ -4721,7 +4723,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, %ea1");
--#line 841 "rl78-decode.opc"
-+#line 843 "rl78-decode.opc"
-           ID(mov); W(); DR(AX); SM(DE, IMMU(1));
- 
-         }
-@@ -4736,7 +4738,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, %e1");
--#line 844 "rl78-decode.opc"
-+#line 846 "rl78-decode.opc"
-           ID(mov); W(); DR(AX); SM(HL, 0);
- 
-         }
-@@ -4751,7 +4753,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, %ea1");
--#line 847 "rl78-decode.opc"
-+#line 849 "rl78-decode.opc"
-           ID(mov); W(); DR(AX); SM(HL, IMMU(1));
- 
-         }
-@@ -4766,7 +4768,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, %1");
--#line 880 "rl78-decode.opc"
-+#line 882 "rl78-decode.opc"
-           ID(mov); W(); DR(AX); SM(None, SADDR);
- 
-         }
-@@ -4781,7 +4783,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, %s1");
--#line 883 "rl78-decode.opc"
-+#line 885 "rl78-decode.opc"
-           ID(mov); W(); DR(AX); SM(None, SFR);
- 
-         }
-@@ -4796,7 +4798,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, %e!1");
--#line 834 "rl78-decode.opc"
-+#line 836 "rl78-decode.opc"
-           ID(mov); W(); DR(AX); SM(None, IMMU(2));
- 
- 
-@@ -4812,7 +4814,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("dec	%e!0");
--#line 548 "rl78-decode.opc"
-+#line 550 "rl78-decode.opc"
-           ID(sub); DM(None, IMMU(2)); SC(1); Fza;
- 
-         }
-@@ -4823,7 +4825,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0xb7:
-         {
-           /** 1011 0rg1 			decw	%0				*/
--#line 568 "rl78-decode.opc"
-+#line 570 "rl78-decode.opc"
-           int rg AU = (op[0] >> 1) & 0x03;
-           if (trace)
-             {
-@@ -4833,7 +4835,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  rg = 0x%x\n", rg);
-             }
-           SYNTAX("decw	%0");
--#line 568 "rl78-decode.opc"
-+#line 570 "rl78-decode.opc"
-           ID(sub); W(); DRW(rg); SC(1);
- 
-         }
-@@ -4848,7 +4850,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("decw	%e!0");
--#line 562 "rl78-decode.opc"
-+#line 564 "rl78-decode.opc"
-           ID(sub); W(); DM(None, IMMU(2)); SC(1);
- 
-         }
-@@ -4863,7 +4865,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("dec	%0");
--#line 557 "rl78-decode.opc"
-+#line 559 "rl78-decode.opc"
-           ID(sub); DM(None, SADDR); SC(1); Fza;
- 
-         /*----------------------------------------------------------------------*/
-@@ -4880,7 +4882,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("decw	%0");
--#line 571 "rl78-decode.opc"
-+#line 573 "rl78-decode.opc"
-           ID(sub); W(); DM(None, SADDR); SC(1);
- 
-         /*----------------------------------------------------------------------*/
-@@ -4897,7 +4899,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%a0, %1");
--#line 831 "rl78-decode.opc"
-+#line 833 "rl78-decode.opc"
-           ID(mov); W(); DM(SP, IMMU(1)); SR(AX);
- 
-         }
-@@ -4912,7 +4914,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%e0, %1");
--#line 819 "rl78-decode.opc"
-+#line 821 "rl78-decode.opc"
-           ID(mov); W(); DM(DE, 0); SR(AX);
- 
-         }
-@@ -4927,7 +4929,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%ea0, %1");
--#line 822 "rl78-decode.opc"
-+#line 824 "rl78-decode.opc"
-           ID(mov); W(); DM(DE, IMMU(1)); SR(AX);
- 
-         }
-@@ -4942,7 +4944,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%e0, %1");
--#line 825 "rl78-decode.opc"
-+#line 827 "rl78-decode.opc"
-           ID(mov); W(); DM(HL, 0); SR(AX);
- 
-         }
-@@ -4957,7 +4959,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%ea0, %1");
--#line 828 "rl78-decode.opc"
-+#line 830 "rl78-decode.opc"
-           ID(mov); W(); DM(HL, IMMU(1)); SR(AX);
- 
-         }
-@@ -4972,7 +4974,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, %1");
--#line 895 "rl78-decode.opc"
-+#line 897 "rl78-decode.opc"
-           ID(mov); W(); DM(None, SADDR); SR(AX);
- 
-         }
-@@ -4987,7 +4989,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%s0, %1");
--#line 901 "rl78-decode.opc"
-+#line 903 "rl78-decode.opc"
-           ID(mov); W(); DM(None, SFR); SR(AX);
- 
-         /*----------------------------------------------------------------------*/
-@@ -5004,7 +5006,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%e!0, %1");
--#line 816 "rl78-decode.opc"
-+#line 818 "rl78-decode.opc"
-           ID(mov); W(); DM(None, IMMU(2)); SR(AX);
- 
-         }
-@@ -5015,7 +5017,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0xc6:
-         {
-           /** 1100 0rg0			pop	%0				*/
--#line 986 "rl78-decode.opc"
-+#line 988 "rl78-decode.opc"
-           int rg AU = (op[0] >> 1) & 0x03;
-           if (trace)
-             {
-@@ -5025,7 +5027,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  rg = 0x%x\n", rg);
-             }
-           SYNTAX("pop	%0");
--#line 986 "rl78-decode.opc"
-+#line 988 "rl78-decode.opc"
-           ID(mov); W(); DRW(rg); SPOP();
- 
-         }
-@@ -5036,7 +5038,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0xc7:
-         {
-           /** 1100 0rg1			push	%1				*/
--#line 994 "rl78-decode.opc"
-+#line 996 "rl78-decode.opc"
-           int rg AU = (op[0] >> 1) & 0x03;
-           if (trace)
-             {
-@@ -5046,7 +5048,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  rg = 0x%x\n", rg);
-             }
-           SYNTAX("push	%1");
--#line 994 "rl78-decode.opc"
-+#line 996 "rl78-decode.opc"
-           ID(mov); W(); DPUSH(); SRW(rg);
- 
-         }
-@@ -5061,7 +5063,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%a0, #%1");
--#line 639 "rl78-decode.opc"
-+#line 641 "rl78-decode.opc"
-           ID(mov); DM(SP, IMMU(1)); SC(IMMU(1));
- 
-         }
-@@ -5076,7 +5078,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%0, #%1");
--#line 892 "rl78-decode.opc"
-+#line 894 "rl78-decode.opc"
-           ID(mov); W(); DM(None, SADDR); SC(IMMU(2));
- 
-         }
-@@ -5091,7 +5093,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%ea0, #%1");
--#line 618 "rl78-decode.opc"
-+#line 620 "rl78-decode.opc"
-           ID(mov); DM(DE, IMMU(1)); SC(IMMU(1));
- 
-         }
-@@ -5106,7 +5108,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("movw	%s0, #%1");
--#line 898 "rl78-decode.opc"
-+#line 900 "rl78-decode.opc"
-           ID(mov); W(); DM(None, SFR); SC(IMMU(2));
- 
-         }
-@@ -5121,7 +5123,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%ea0, #%1");
--#line 630 "rl78-decode.opc"
-+#line 632 "rl78-decode.opc"
-           ID(mov); DM(HL, IMMU(1)); SC(IMMU(1));
- 
-         }
-@@ -5136,7 +5138,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, #%1");
--#line 744 "rl78-decode.opc"
-+#line 746 "rl78-decode.opc"
-           ID(mov); DM(None, SADDR); SC(IMMU(1));
- 
-         }
-@@ -5151,7 +5153,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%s0, #%1");
--#line 750 "rl78-decode.opc"
-+#line 752 "rl78-decode.opc"
-           op0 = SFR;
-           op1 = IMMU(1);
-           ID(mov); DM(None, op0); SC(op1);
-@@ -5193,7 +5195,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%e!0, #%1");
--#line 609 "rl78-decode.opc"
-+#line 611 "rl78-decode.opc"
-           ID(mov); DM(None, IMMU(2)); SC(IMMU(1));
- 
-         }
-@@ -5204,7 +5206,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0xd3:
-         {
-           /** 1101 00rg			cmp0	%0				*/
--#line 518 "rl78-decode.opc"
-+#line 520 "rl78-decode.opc"
-           int rg AU = op[0] & 0x03;
-           if (trace)
-             {
-@@ -5214,7 +5216,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  rg = 0x%x\n", rg);
-             }
-           SYNTAX("cmp0	%0");
--#line 518 "rl78-decode.opc"
-+#line 520 "rl78-decode.opc"
-           ID(cmp); DRB(rg); SC(0); Fzac;
- 
-         }
-@@ -5229,7 +5231,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("cmp0	%0");
--#line 521 "rl78-decode.opc"
-+#line 523 "rl78-decode.opc"
-           ID(cmp); DM(None, SADDR); SC(0); Fzac;
- 
-         /*----------------------------------------------------------------------*/
-@@ -5246,7 +5248,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("cmp0	%e!0");
--#line 515 "rl78-decode.opc"
-+#line 517 "rl78-decode.opc"
-           ID(cmp); DM(None, IMMU(2)); SC(0); Fzac;
- 
-         }
-@@ -5261,7 +5263,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mulu	x");
--#line 906 "rl78-decode.opc"
-+#line 908 "rl78-decode.opc"
-           ID(mulu);
- 
-         /*----------------------------------------------------------------------*/
-@@ -5278,7 +5280,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("ret");
--#line 1002 "rl78-decode.opc"
-+#line 1004 "rl78-decode.opc"
-           ID(ret);
- 
-         }
-@@ -5293,7 +5295,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %1");
--#line 711 "rl78-decode.opc"
-+#line 713 "rl78-decode.opc"
-           ID(mov); DR(X); SM(None, SADDR);
- 
-         }
-@@ -5308,7 +5310,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %e!1");
--#line 708 "rl78-decode.opc"
-+#line 710 "rl78-decode.opc"
-           ID(mov); DR(X); SM(None, IMMU(2));
- 
-         }
-@@ -5318,7 +5320,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0xfa:
-         {
-           /** 11ra 1010			movw	%0, %1				*/
--#line 889 "rl78-decode.opc"
-+#line 891 "rl78-decode.opc"
-           int ra AU = (op[0] >> 4) & 0x03;
-           if (trace)
-             {
-@@ -5328,7 +5330,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  ra = 0x%x\n", ra);
-             }
-           SYNTAX("movw	%0, %1");
--#line 889 "rl78-decode.opc"
-+#line 891 "rl78-decode.opc"
-           ID(mov); W(); DRW(ra); SM(None, SADDR);
- 
-         }
-@@ -5338,7 +5340,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0xfb:
-         {
-           /** 11ra 1011			movw	%0, %es!1			*/
--#line 886 "rl78-decode.opc"
-+#line 888 "rl78-decode.opc"
-           int ra AU = (op[0] >> 4) & 0x03;
-           if (trace)
-             {
-@@ -5348,7 +5350,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  ra = 0x%x\n", ra);
-             }
-           SYNTAX("movw	%0, %es!1");
--#line 886 "rl78-decode.opc"
-+#line 888 "rl78-decode.opc"
-           ID(mov); W(); DRW(ra); SM(None, IMMU(2));
- 
-         }
-@@ -5363,7 +5365,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("bc	$%a0");
--#line 334 "rl78-decode.opc"
-+#line 336 "rl78-decode.opc"
-           ID(branch_cond); DC(pc+IMMS(1)+2); SR(None); COND(C);
- 
-         }
-@@ -5378,7 +5380,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("bz	$%a0");
--#line 346 "rl78-decode.opc"
-+#line 348 "rl78-decode.opc"
-           ID(branch_cond); DC(pc+IMMS(1)+2); SR(None); COND(Z);
- 
-         }
-@@ -5393,7 +5395,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("bnc	$%a0");
--#line 337 "rl78-decode.opc"
-+#line 339 "rl78-decode.opc"
-           ID(branch_cond); DC(pc+IMMS(1)+2); SR(None); COND(NC);
- 
-         }
-@@ -5408,7 +5410,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("bnz	$%a0");
--#line 349 "rl78-decode.opc"
-+#line 351 "rl78-decode.opc"
-           ID(branch_cond); DC(pc+IMMS(1)+2); SR(None); COND(NZ);
- 
-         /*----------------------------------------------------------------------*/
-@@ -5421,7 +5423,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0xe3:
-         {
-           /** 1110 00rg			oneb	%0				*/
--#line 924 "rl78-decode.opc"
-+#line 926 "rl78-decode.opc"
-           int rg AU = op[0] & 0x03;
-           if (trace)
-             {
-@@ -5431,7 +5433,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  rg = 0x%x\n", rg);
-             }
-           SYNTAX("oneb	%0");
--#line 924 "rl78-decode.opc"
-+#line 926 "rl78-decode.opc"
-           ID(mov); DRB(rg); SC(1);
- 
-         }
-@@ -5446,7 +5448,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("oneb	%0");
--#line 927 "rl78-decode.opc"
-+#line 929 "rl78-decode.opc"
-           ID(mov); DM(None, SADDR); SC(1);
- 
-         /*----------------------------------------------------------------------*/
-@@ -5463,7 +5465,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("oneb	%e!0");
--#line 921 "rl78-decode.opc"
-+#line 923 "rl78-decode.opc"
-           ID(mov); DM(None, IMMU(2)); SC(1);
- 
-         }
-@@ -5478,7 +5480,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("onew	%0");
--#line 932 "rl78-decode.opc"
-+#line 934 "rl78-decode.opc"
-           ID(mov); DR(AX); SC(1);
- 
-         }
-@@ -5493,7 +5495,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("onew	%0");
--#line 935 "rl78-decode.opc"
-+#line 937 "rl78-decode.opc"
-           ID(mov); DR(BC); SC(1);
- 
-         /*----------------------------------------------------------------------*/
-@@ -5510,7 +5512,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %1");
--#line 699 "rl78-decode.opc"
-+#line 701 "rl78-decode.opc"
-           ID(mov); DR(B); SM(None, SADDR);
- 
-         }
-@@ -5525,7 +5527,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %e!1");
--#line 693 "rl78-decode.opc"
-+#line 695 "rl78-decode.opc"
-           ID(mov); DR(B); SM(None, IMMU(2));
- 
-         }
-@@ -5540,7 +5542,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("br	!%!a0");
--#line 368 "rl78-decode.opc"
-+#line 370 "rl78-decode.opc"
-           ID(branch); DC(IMMU(3));
- 
-         }
-@@ -5555,7 +5557,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("br	%!a0");
--#line 371 "rl78-decode.opc"
-+#line 373 "rl78-decode.opc"
-           ID(branch); DC(IMMU(2));
- 
-         }
-@@ -5570,7 +5572,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("br	$%!a0");
--#line 374 "rl78-decode.opc"
-+#line 376 "rl78-decode.opc"
-           ID(branch); DC(pc+IMMS(2)+3);
- 
-         }
-@@ -5585,7 +5587,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("br	$%a0");
--#line 377 "rl78-decode.opc"
-+#line 379 "rl78-decode.opc"
-           ID(branch); DC(pc+IMMS(1)+2);
- 
-         }
-@@ -5596,7 +5598,7 @@ rl78_decode_opcode (unsigned long pc AU,
-     case 0xf3:
-         {
-           /** 1111 00rg			clrb	%0				*/
--#line 464 "rl78-decode.opc"
-+#line 466 "rl78-decode.opc"
-           int rg AU = op[0] & 0x03;
-           if (trace)
-             {
-@@ -5606,7 +5608,7 @@ rl78_decode_opcode (unsigned long pc AU,
-               printf ("  rg = 0x%x\n", rg);
-             }
-           SYNTAX("clrb	%0");
--#line 464 "rl78-decode.opc"
-+#line 466 "rl78-decode.opc"
-           ID(mov); DRB(rg); SC(0);
- 
-         }
-@@ -5621,7 +5623,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("clrb	%0");
--#line 467 "rl78-decode.opc"
-+#line 469 "rl78-decode.opc"
-           ID(mov); DM(None, SADDR); SC(0);
- 
-         /*----------------------------------------------------------------------*/
-@@ -5638,7 +5640,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("clrb	%e!0");
--#line 461 "rl78-decode.opc"
-+#line 463 "rl78-decode.opc"
-           ID(mov); DM(None, IMMU(2)); SC(0);
- 
-         }
-@@ -5653,7 +5655,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("clrw	%0");
--#line 472 "rl78-decode.opc"
-+#line 474 "rl78-decode.opc"
-           ID(mov); DR(AX); SC(0);
- 
-         }
-@@ -5668,7 +5670,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("clrw	%0");
--#line 475 "rl78-decode.opc"
-+#line 477 "rl78-decode.opc"
-           ID(mov); DR(BC); SC(0);
- 
-         /*----------------------------------------------------------------------*/
-@@ -5685,7 +5687,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %1");
--#line 705 "rl78-decode.opc"
-+#line 707 "rl78-decode.opc"
-           ID(mov); DR(C); SM(None, SADDR);
- 
-         }
-@@ -5700,7 +5702,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("mov	%0, %e!1");
--#line 702 "rl78-decode.opc"
-+#line 704 "rl78-decode.opc"
-           ID(mov); DR(C); SM(None, IMMU(2));
- 
-         }
-@@ -5715,7 +5717,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("call	!%!a0");
--#line 421 "rl78-decode.opc"
-+#line 423 "rl78-decode.opc"
-           ID(call); DC(IMMU(3));
- 
-         }
-@@ -5730,7 +5732,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("call	%!a0");
--#line 424 "rl78-decode.opc"
-+#line 426 "rl78-decode.opc"
-           ID(call); DC(IMMU(2));
- 
-         }
-@@ -5745,7 +5747,7 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("call	$%!a0");
--#line 427 "rl78-decode.opc"
-+#line 429 "rl78-decode.opc"
-           ID(call); DC(pc+IMMS(2)+3);
- 
-         }
-@@ -5760,13 +5762,13 @@ rl78_decode_opcode (unsigned long pc AU,
-                      op[0]);
-             }
-           SYNTAX("brk1");
--#line 385 "rl78-decode.opc"
-+#line 387 "rl78-decode.opc"
-           ID(break);
- 
-         }
-       break;
-   }
--#line 1290 "rl78-decode.opc"
-+#line 1292 "rl78-decode.opc"
- 
-   return rl78->n_bytes;
- }
-diff --git a/opcodes/rl78-decode.opc b/opcodes/rl78-decode.opc
-index 6212f08..b25e441 100644
---- a/opcodes/rl78-decode.opc
-+++ b/opcodes/rl78-decode.opc
-@@ -50,7 +50,9 @@ typedef struct
- #define W() rl78->size = RL78_Word
- 
- #define AU ATTRIBUTE_UNUSED
--#define GETBYTE() (ld->op [ld->rl78->n_bytes++] = ld->getbyte (ld->ptr))
-+
-+#define OP_BUF_LEN 20
-+#define GETBYTE() (ld->rl78->n_bytes < (OP_BUF_LEN - 1) ? ld->op [ld->rl78->n_bytes++] = ld->getbyte (ld->ptr): 0)
- #define B ((unsigned long) GETBYTE())
- 
- #define SYNTAX(x) rl78->syntax = x
-@@ -168,7 +170,7 @@ rl78_decode_opcode (unsigned long pc AU,
- 		  RL78_Dis_Isa isa)
- {
-   LocalData lds, * ld = &lds;
--  unsigned char op_buf[20] = {0};
-+  unsigned char op_buf[OP_BUF_LEN] = {0};
-   unsigned char *op = op_buf;
-   int op0, op1;
- 
--- 
-2.7.4
-
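
The OP_BUF_LEN change above is a plain bounds guard: once the decoder has consumed 19 bytes it stops fetching instead of writing past the fixed op_buf array. A minimal standalone sketch of that pattern follows; decode_state, fetch_byte and DECODE_BUF_LEN are invented names for illustration, not the binutils decoder.

#include <stdio.h>
#include <string.h>

#define DECODE_BUF_LEN 20

struct decode_state
{
  unsigned char buf[DECODE_BUF_LEN];
  unsigned int n_bytes;                 /* bytes consumed so far */
  const unsigned char *input;
  unsigned int input_len;
  unsigned int pos;
};

/* Return the next opcode byte, or 0 once the decode buffer (or the input)
   is exhausted.  */
static unsigned char
fetch_byte (struct decode_state *st)
{
  if (st->n_bytes >= DECODE_BUF_LEN - 1 || st->pos >= st->input_len)
    return 0;                           /* refuse to run off the buffer */
  st->buf[st->n_bytes] = st->input[st->pos++];
  return st->buf[st->n_bytes++];
}

int
main (void)
{
  unsigned char corrupt[64];
  struct decode_state st;

  memset (corrupt, 0x61, sizeof corrupt); /* longer than any real opcode */
  memset (&st, 0, sizeof st);
  st.input = corrupt;
  st.input_len = sizeof corrupt;

  while (fetch_byte (&st) != 0)
    ;                                    /* terminates at DECODE_BUF_LEN - 1 */
  printf ("consumed %u bytes, buffer intact\n", st.n_bytes);
  return 0;
}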
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9752.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9752.patch
deleted file mode 100644
index f63a993..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9752.patch
+++ /dev/null
@@ -1,208 +0,0 @@
-From c53d2e6d744da000aaafe0237bced090aab62818 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Wed, 14 Jun 2017 11:27:15 +0100
-Subject: [PATCH] Fix potential address violations when processing a corrupt
- Alpha VMA binary.
-
-	PR binutils/21589
-	* vms-alpha.c (_bfd_vms_get_value): Add an extra parameter - the
-	maximum value for the ascic pointer.  Check that name processing
-	does not read beyond this value.
-	(_bfd_vms_slurp_etir): Add checks for attempts to read beyond the
-	end of etir record.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9752
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog   |  9 +++++++++
- bfd/vms-alpha.c | 51 +++++++++++++++++++++++++++++++++++++++++----------
- 2 files changed, 50 insertions(+), 10 deletions(-)
-
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -9,6 +9,15 @@
- 
- 2017-06-14  Nick Clifton  <nickc@redhat.com>
-  
-+       PR binutils/21589
-+       * vms-alpha.c (_bfd_vms_get_value): Add an extra parameter - the
-+       maximum value for the ascic pointer.  Check that name processing
-+       does not read beyond this value.
-+       (_bfd_vms_slurp_etir): Add checks for attempts to read beyond the
-+       end of etir record.
-+
-+2017-06-14  Nick Clifton  <nickc@redhat.com>
-+ 
-        PR binutils/21578
-        * elf32-sh.c (sh_elf_set_mach_from_flags): Fix check for invalid
-        flag value.
-Index: git/bfd/vms-alpha.c
-===================================================================
---- git.orig/bfd/vms-alpha.c
-+++ git/bfd/vms-alpha.c
-@@ -1456,7 +1456,7 @@ dst_retrieve_location (bfd *abfd, unsign
- /* Write multiple bytes to section image.  */
- 
- static bfd_boolean
--image_write (bfd *abfd, unsigned char *ptr, int size)
-+image_write (bfd *abfd, unsigned char *ptr, unsigned int size)
- {
- #if VMS_DEBUG
-   _bfd_vms_debug (8, "image_write from (%p, %d) to (%ld)\n", ptr, size,
-@@ -1603,14 +1603,16 @@ _bfd_vms_etir_name (int cmd)
- #define HIGHBIT(op) ((op & 0x80000000L) == 0x80000000L)
- 
- static void
--_bfd_vms_get_value (bfd *abfd, const unsigned char *ascic,
-+_bfd_vms_get_value (bfd *abfd,
-+		    const unsigned char *ascic,
-+		    const unsigned char *max_ascic,
-                     struct bfd_link_info *info,
-                     bfd_vma *vma,
-                     struct alpha_vms_link_hash_entry **hp)
- {
-   char name[257];
--  int len;
--  int i;
-+  unsigned int len;
-+  unsigned int i;
-   struct alpha_vms_link_hash_entry *h;
- 
-   /* Not linking.  Do not try to resolve the symbol.  */
-@@ -1622,6 +1624,14 @@ _bfd_vms_get_value (bfd *abfd, const uns
-     }
- 
-   len = *ascic;
-+  if (ascic + len >= max_ascic)
-+    {
-+      _bfd_error_handler (_("Corrupt vms value"));
-+      *vma = 0;
-+      *hp = NULL;
-+      return;
-+    }
-+
-   for (i = 0; i < len; i++)
-     name[i] = ascic[i + 1];
-   name[i] = 0;
-@@ -1741,6 +1751,15 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
-       _bfd_hexdump (8, ptr, cmd_length - 4, 0);
- #endif
- 
-+      /* PR 21589: Check for a corrupt ETIR record.  */
-+      if (cmd_length < 4)
-+	{
-+	corrupt_etir:
-+	  _bfd_error_handler (_("Corrupt ETIR record encountered"));
-+	  bfd_set_error (bfd_error_bad_value);
-+	  return FALSE;
-+	}
-+
-       switch (cmd)
-         {
-           /* Stack global
-@@ -1748,7 +1767,7 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
- 
-              stack 32 bit value of symbol (high bits set to 0).  */
-         case ETIR__C_STA_GBL:
--          _bfd_vms_get_value (abfd, ptr, info, &op1, &h);
-+          _bfd_vms_get_value (abfd, ptr, maxptr, info, &op1, &h);
-           _bfd_vms_push (abfd, op1, alpha_vms_sym_to_ctxt (h));
-           break;
- 
-@@ -1757,6 +1776,8 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
- 
-              stack 32 bit value, sign extend to 64 bit.  */
-         case ETIR__C_STA_LW:
-+	  if (ptr + 4 >= maxptr)
-+	    goto corrupt_etir;
-           _bfd_vms_push (abfd, bfd_getl32 (ptr), RELC_NONE);
-           break;
- 
-@@ -1765,6 +1786,8 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
- 
-              stack 64 bit value of symbol.  */
-         case ETIR__C_STA_QW:
-+	  if (ptr + 8 >= maxptr)
-+	    goto corrupt_etir;
-           _bfd_vms_push (abfd, bfd_getl64 (ptr), RELC_NONE);
-           break;
- 
-@@ -1778,6 +1801,8 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
-           {
-             int psect;
- 
-+	    if (ptr + 12 >= maxptr)
-+	      goto corrupt_etir;
-             psect = bfd_getl32 (ptr);
-             if ((unsigned int) psect >= PRIV (section_count))
-               {
-@@ -1867,6 +1892,8 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
-           {
-             int size;
- 
-+	    if (ptr + 4 >= maxptr)
-+	      goto corrupt_etir;
-             size = bfd_getl32 (ptr);
-             _bfd_vms_pop (abfd, &op1, &rel1);
-             if (rel1 != RELC_NONE)
-@@ -1879,7 +1906,7 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
-           /* Store global: write symbol value
-              arg: cs	global symbol name.  */
-         case ETIR__C_STO_GBL:
--          _bfd_vms_get_value (abfd, ptr, info, &op1, &h);
-+          _bfd_vms_get_value (abfd, ptr, maxptr, info, &op1, &h);
-           if (h && h->sym)
-             {
-               if (h->sym->typ == EGSD__C_SYMG)
-@@ -1901,7 +1928,7 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
-           /* Store code address: write address of entry point
-              arg: cs	global symbol name (procedure).  */
-         case ETIR__C_STO_CA:
--          _bfd_vms_get_value (abfd, ptr, info, &op1, &h);
-+          _bfd_vms_get_value (abfd, ptr, maxptr, info, &op1, &h);
-           if (h && h->sym)
-             {
-               if (h->sym->flags & EGSY__V_NORM)
-@@ -1946,8 +1973,10 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
-              da	data.  */
-         case ETIR__C_STO_IMM:
-           {
--            int size;
-+            unsigned int size;
- 
-+	    if (ptr + 4 >= maxptr)
-+	      goto corrupt_etir;
-             size = bfd_getl32 (ptr);
-             image_write (abfd, ptr + 4, size);
-           }
-@@ -1960,7 +1989,7 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
-              store global longword: store 32bit value of symbol
-              arg: cs	symbol name.  */
-         case ETIR__C_STO_GBL_LW:
--          _bfd_vms_get_value (abfd, ptr, info, &op1, &h);
-+          _bfd_vms_get_value (abfd, ptr, maxptr, info, &op1, &h);
- #if 0
-           abort ();
- #endif
-@@ -2013,7 +2042,7 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
-              da	signature.  */
- 
-         case ETIR__C_STC_LP_PSB:
--          _bfd_vms_get_value (abfd, ptr + 4, info, &op1, &h);
-+          _bfd_vms_get_value (abfd, ptr + 4, maxptr, info, &op1, &h);
-           if (h && h->sym)
-             {
-               if (h->sym->typ == EGSD__C_SYMG)
-@@ -2109,6 +2138,8 @@ _bfd_vms_slurp_etir (bfd *abfd, struct b
-           /* Augment relocation base: increment image location counter by offset
-              arg: lw	offset value.  */
-         case ETIR__C_CTL_AUGRB:
-+	  if (ptr + 4 >= maxptr)
-+	    goto corrupt_etir;
-           op1 = bfd_getl32 (ptr);
-           image_inc_ptr (abfd, op1);
-           break;
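
The vms-alpha.c hunks above follow one recipe: pass the end of the record (maxptr) down to every reader and reject any length-prefixed name or fixed-size field that would cross it. A self-contained sketch of the counted-name case; parse_counted_name and the test records are invented for illustration and are not the bfd routine.

#include <stdio.h>

/* Copy a counted ("ascic"-style) string into name[257]; return 0 and leave
   name empty when the count would run past endp, mirroring the
   "Corrupt vms value" path in the patch.  */
static int
parse_counted_name (const unsigned char *ascic, const unsigned char *endp,
                    char name[257])
{
  unsigned int len, i;

  name[0] = 0;
  if (ascic >= endp)
    return 0;
  len = *ascic;
  if (ascic + len >= endp)      /* count points beyond the record: corrupt */
    return 0;
  for (i = 0; i < len; i++)
    name[i] = ascic[i + 1];
  name[i] = 0;
  return 1;
}

int
main (void)
{
  unsigned char rec[] = { 3, 'F', 'O', 'O' };
  unsigned char bad[] = { 200, 'F', 'O' };   /* count larger than the record */
  char name[257];

  printf ("good: %d (%s)\n",
          parse_counted_name (rec, rec + sizeof rec, name), name);
  printf ("bad:  %d\n", parse_counted_name (bad, bad + sizeof bad, name));
  return 0;
}

The same comparison against the end pointer guards each fixed-size ETIR operand in the hunks above before bfd_getl32/bfd_getl64 are called.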
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9753.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9753.patch
deleted file mode 100644
index 241142b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9753.patch
+++ /dev/null
@@ -1,79 +0,0 @@
-From 04f963fd489cae724a60140e13984415c205f4ac Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Wed, 14 Jun 2017 10:35:16 +0100
-Subject: [PATCH] Fix seg-faults in objdump when disassembling a corrupt
- versados binary.
-
-	PR binutils/21591
-	* versados.c (versados_mkobject): Zero the allocated tdata structure.
-	(process_otr): Check for an invalid offset in the otr structure.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9753
-CVE: CVE-2017-9754
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog  |  6 ++++++
- bfd/versados.c | 12 ++++++++----
- 2 files changed, 14 insertions(+), 4 deletions(-)
-
-Index: git/bfd/versados.c
-===================================================================
---- git.orig/bfd/versados.c
-+++ git/bfd/versados.c
-@@ -149,7 +149,7 @@ versados_mkobject (bfd *abfd)
-   if (abfd->tdata.versados_data == NULL)
-     {
-       bfd_size_type amt = sizeof (tdata_type);
--      tdata_type *tdata = bfd_alloc (abfd, amt);
-+      tdata_type *tdata = bfd_zalloc (abfd, amt);
- 
-       if (tdata == NULL)
- 	return FALSE;
-@@ -345,13 +345,13 @@ reloc_howto_type versados_howto_table[]
- };
- 
- static int
--get_offset (int len, unsigned char *ptr)
-+get_offset (unsigned int len, unsigned char *ptr)
- {
-   int val = 0;
- 
-   if (len)
-     {
--      int i;
-+      unsigned int i;
- 
-       val = *ptr++;
-       if (val & 0x80)
-@@ -394,9 +394,13 @@ process_otr (bfd *abfd, struct ext_otr *
- 	  int flag = *srcp++;
- 	  int esdids = (flag >> 5) & 0x7;
- 	  int sizeinwords = ((flag >> 3) & 1) ? 2 : 1;
--	  int offsetlen = flag & 0x7;
-+	  unsigned int offsetlen = flag & 0x7;
- 	  int j;
- 
-+	  /* PR 21591: Check for invalid lengths.  */
-+	  if (srcp + esdids + offsetlen >= endp)
-+	    return;
-+
- 	  if (esdids == 0)
- 	    {
- 	      /* A zero esdid means the new pc is the offset given.  */
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -8,6 +8,10 @@
-        (ieee_archive_p): Likewise.
- 
- 2017-06-14  Nick Clifton  <nickc@redhat.com>
-+
-+       PR binutils/21591
-+       * versados.c (versados_mkobject): Zero the allocated tdata structure.
-+       (process_otr): Check for an invalid offset in the otr structure.
-  
-        PR binutils/21589
-        * vms-alpha.c (_bfd_vms_get_value): Add an extra parameter - the
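
Two independent hardening steps are visible in the versados.c hunks: the object data is now allocated zeroed (bfd_zalloc instead of bfd_alloc), and the declared esdid/offset lengths are checked against the end of the record before the parser advances. A toy version of both; obj_data and process_record are names invented for the sketch.

#include <stdio.h>
#include <stdlib.h>

struct obj_data { int pass; int alert; };   /* stand-in for tdata_type */

static int
process_record (const unsigned char *srcp, const unsigned char *endp)
{
  int flag = *srcp++;
  int esdids = (flag >> 5) & 0x7;
  unsigned int offsetlen = flag & 0x7;

  /* Invalid lengths: the ids and the offset must fit before endp.  */
  if (srcp + esdids + offsetlen >= endp)
    return 0;
  /* ... real parsing would continue here ... */
  return 1;
}

int
main (void)
{
  /* Zeroed allocation: fields the parser consults later start at 0, not at
     whatever happened to be on the heap.  */
  struct obj_data *t = calloc (1, sizeof *t);
  unsigned char rec[] = { 0xE7, 1, 2, 3 };  /* flag claims more data than present */

  if (t == NULL)
    return 1;
  t->pass = process_record (rec, rec + sizeof rec);
  printf ("record accepted: %d, alert still %d\n", t->pass, t->alert);
  free (t);
  return 0;
}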
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9755.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9755.patch
deleted file mode 100644
index 15dc909..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9755.patch
+++ /dev/null
@@ -1,63 +0,0 @@
-From 0d96e4df4812c3bad77c229dfef47a9bc115ac12 Mon Sep 17 00:00:00 2001
-From: "H.J. Lu" <hjl.tools@gmail.com>
-Date: Thu, 15 Jun 2017 06:40:17 -0700
-Subject: [PATCH] i386-dis: Check valid bnd register
-
-Since there are only 4 bnd registers, return "(bad)" for register
-number > 3.
-
-	PR binutils/21594
-	* i386-dis.c (OP_E_register): Check valid bnd register.
-	(OP_G): Likewise.
-
-Upstream-Status: Backport 
-CVE: CVE-2017-9755
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- opcodes/ChangeLog  |  6 ++++++
- opcodes/i386-dis.c | 10 ++++++++++
- 2 files changed, 16 insertions(+)
-
-Index: git/opcodes/ChangeLog
-===================================================================
---- git.orig/opcodes/ChangeLog
-+++ git/opcodes/ChangeLog
-@@ -1,3 +1,9 @@
-+2017-06-15  H.J. Lu  <hongjiu.lu@intel.com>
-+
-+	PR binutils/21594
-+	* i386-dis.c (OP_E_register): Check valid bnd register.
-+	(OP_G): Likewise.
-+
- 2017-06-15  Nick Clifton  <nickc@redhat.com>
- 
- 	PR binutils/21588
-Index: git/opcodes/i386-dis.c
-===================================================================
---- git.orig/opcodes/i386-dis.c
-+++ git/opcodes/i386-dis.c
-@@ -14939,6 +14939,11 @@ OP_E_register (int bytemode, int sizefla
-       names = address_mode == mode_64bit ? names64 : names32;
-       break;
-     case bnd_mode:
-+      if (reg > 0x3)
-+	{
-+	  oappend ("(bad)");
-+	  return;
-+	}
-       names = names_bnd;
-       break;
-     case indir_v_mode:
-@@ -15483,6 +15488,11 @@ OP_G (int bytemode, int sizeflag)
-       oappend (names64[modrm.reg + add]);
-       break;
-     case bnd_mode:
-+      if (modrm.reg > 0x3)
-+	{
-+	  oappend ("(bad)");
-+	  return;
-+	}
-       oappend (names_bnd[modrm.reg]);
-       break;
-     case v_mode:
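
The i386-dis.c fix is an index-range guard in front of a four-entry table: any encoded bnd register number above 3 is rendered as "(bad)" rather than used to read past names_bnd. A hypothetical standalone version of the same check:

#include <stdio.h>

static const char *const names_bnd[] = { "%bnd0", "%bnd1", "%bnd2", "%bnd3" };

static const char *
bnd_name (unsigned int reg)
{
  if (reg >= sizeof names_bnd / sizeof names_bnd[0])
    return "(bad)";                    /* corrupt or invalid encoding */
  return names_bnd[reg];
}

int
main (void)
{
  printf ("%s %s\n", bnd_name (2), bnd_name (7));
  return 0;
}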
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9756.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9756.patch
deleted file mode 100644
index 191d0be..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9756.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-From cd3ea7c69acc5045eb28f9bf80d923116e15e4f5 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Thu, 15 Jun 2017 13:26:54 +0100
-Subject: [PATCH] Prevent address violation problem when disassembling corrupt
- aarch64 binary.
-
-	PR binutils/21595
-	* aarch64-dis.c (aarch64_ext_ldst_reglist): Check for an out of
-	range value.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9756
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- opcodes/ChangeLog     | 6 ++++++
- opcodes/aarch64-dis.c | 3 +++
- 2 files changed, 9 insertions(+)
-
-Index: git/opcodes/ChangeLog
-===================================================================
---- git.orig/opcodes/ChangeLog
-+++ git/opcodes/ChangeLog
-@@ -6,6 +6,12 @@
- 
- 2017-06-15  Nick Clifton  <nickc@redhat.com>
- 
-+	PR binutils/21595
-+	* aarch64-dis.c (aarch64_ext_ldst_reglist): Check for an out of
-+	range value.
-+
-+2017-06-15  Nick Clifton  <nickc@redhat.com>
-+
- 	PR binutils/21588
- 	* rl78-decode.opc (OP_BUF_LEN): Define.
- 	(GETBYTE): Check for the index exceeding OP_BUF_LEN.
-Index: git/opcodes/aarch64-dis.c
-===================================================================
---- git.orig/opcodes/aarch64-dis.c
-+++ git/opcodes/aarch64-dis.c
-@@ -409,6 +409,9 @@ aarch64_ext_ldst_reglist (const aarch64_
-   info->reglist.first_regno = extract_field (FLD_Rt, code, 0);
-   /* opcode */
-   value = extract_field (FLD_opcode, code, 0);
-+  /* PR 21595: Check for a bogus value.  */
-+  if (value >= ARRAY_SIZE (data))
-+    return 0;
-   if (expected_num != data[value].num_elements || data[value].is_reserved)
-     return 0;
-   info->reglist.num_regs = data[value].num_regs;
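
The aarch64-dis.c hunk applies the same rule to a decoded bit-field: the opcode value extracted from a possibly corrupt instruction is compared against ARRAY_SIZE of the lookup table before it is used as an index. A small sketch with an invented table (the entries below are made up, not the real aarch64 data).

#include <stdio.h>

struct reglist_info { int num_elements; int is_reserved; };

static const struct reglist_info data[] = {
  { 4, 0 }, { 4, 1 }, { 1, 0 }, { 3, 0 },
};

#define ARRAY_SIZE(a) (sizeof (a) / sizeof (a)[0])

/* Return 0 (reject the instruction) for any out-of-range opcode field.  */
static int
check_opcode_field (unsigned int value)
{
  if (value >= ARRAY_SIZE (data))       /* PR 21595-style bogus value */
    return 0;
  if (data[value].is_reserved)
    return 0;
  return data[value].num_elements;
}

int
main (void)
{
  printf ("%d %d\n", check_opcode_field (0), check_opcode_field (12));
  return 0;
}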
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9954.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9954.patch
deleted file mode 100644
index 8a9d7eb..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9954.patch
+++ /dev/null
@@ -1,58 +0,0 @@
-From 04e15b4a9462cb1ae819e878a6009829aab8020b Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Mon, 26 Jun 2017 15:46:34 +0100
-Subject: [PATCH] Fix address violation parsing a corrupt tekhex format file.
-
-	PR binutils/21670
-	* tekhex.c (getvalue): Check for the source pointer exceeding the
-	end pointer before the first byte is read.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9954
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog | 6 ++++++
- bfd/tekhex.c  | 6 +++++-
- 2 files changed, 11 insertions(+), 1 deletion(-)
-
-Index: git/bfd/tekhex.c
-===================================================================
---- git.orig/bfd/tekhex.c
-+++ git/bfd/tekhex.c
-@@ -273,6 +273,9 @@ getvalue (char **srcp, bfd_vma *valuep,
-   bfd_vma value = 0;
-   unsigned int len;
- 
-+  if (src >= endp)
-+    return FALSE;
-+
-   if (!ISHEX (*src))
-     return FALSE;
- 
-@@ -514,9 +517,10 @@ pass_over (bfd *abfd, bfd_boolean (*func
-   /* To the front of the file.  */
-   if (bfd_seek (abfd, (file_ptr) 0, SEEK_SET) != 0)
-     return FALSE;
-+
-   while (! is_eof)
-     {
--      char src[MAXCHUNK];
-+      static char src[MAXCHUNK];
-       char type;
- 
-       /* Find first '%'.  */
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,3 +1,9 @@
-+2017-06-26  Nick Clifton  <nickc@redhat.com>
-+ 
-+       PR binutils/21670
-+       * tekhex.c (getvalue): Check for the source pointer exceeding the
-+       end pointer before the first byte is read.
-+
- 2017-06-15  Nick Clifton  <nickc@redhat.com>
- 
-        PR binutils/21582
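
The tekhex.c change compares the source pointer with the end pointer before the first byte of a value is read, so a truncated record fails cleanly instead of reading past the buffer. A simplified stand-in parser showing the shape of that guard (this is not the bfd getvalue, which reads a length nibble first).

#include <ctype.h>
#include <stdio.h>

/* Parse hex digits from *srcp up to endp into *valuep; return 0 on failure.  */
static int
getvalue (const char **srcp, const char *endp, unsigned long *valuep)
{
  const char *src = *srcp;
  unsigned long value = 0;

  if (src >= endp)           /* the new check: nothing left to read */
    return 0;
  if (!isxdigit ((unsigned char) *src))
    return 0;
  while (src < endp && isxdigit ((unsigned char) *src))
    {
      value = value * 16 + (isdigit ((unsigned char) *src)
                            ? (unsigned long) (*src - '0')
                            : (unsigned long) (tolower ((unsigned char) *src) - 'a' + 10));
      src++;
    }
  *srcp = src;
  *valuep = value;
  return 1;
}

int
main (void)
{
  const char buf[] = "1A2B";
  const char *p = buf;
  unsigned long v;

  if (getvalue (&p, buf + 4, &v))
    printf ("value = 0x%lx\n", v);            /* 0x1a2b */
  p = buf + 4;                                 /* simulate a truncated record */
  printf ("empty parse ok: %d\n", getvalue (&p, buf + 4, &v));
  return 0;
}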
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_1.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_1.patch
deleted file mode 100644
index 774670f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_1.patch
+++ /dev/null
@@ -1,168 +0,0 @@
-From cfd14a500e0485374596234de4db10e88ebc7618 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Mon, 26 Jun 2017 15:25:08 +0100
-Subject: [PATCH] Fix address violations when attempting to parse fuzzed
- binaries.
-
-	PR binutils/21665
-bfd	* opncls.c (get_build_id): Check that the section is beig enough
-	to contain the whole note.
-	* compress.c (bfd_get_full_section_contents): Check for and reject
-	a section whoes size is greater than the size of the entire file.
-	* elf32-v850.c (v850_elf_copy_notes): Allow for the ouput to not
-	contain a notes section.
-
-binutils* objdump.c (disassemble_section): Skip any section that is bigger
-	than the entire file.
-
-Upstream-Status: Backport 
-CVE: CVE-2017-9955 #1
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog      | 10 ++++++++++
- bfd/compress.c     |  6 ++++++
- bfd/elf32-v850.c   |  4 +++-
- bfd/opncls.c       | 18 ++++++++++++++++--
- binutils/ChangeLog |  6 ++++++
- binutils/objdump.c |  4 ++--
- 6 files changed, 43 insertions(+), 5 deletions(-)
-
-Index: git/bfd/compress.c
-===================================================================
---- git.orig/bfd/compress.c
-+++ git/bfd/compress.c
-@@ -239,6 +239,12 @@ bfd_get_full_section_contents (bfd *abfd
-       *ptr = NULL;
-       return TRUE;
-     }
-+  else if (bfd_get_file_size (abfd) > 0
-+	   && sz > (bfd_size_type) bfd_get_file_size (abfd))
-+    {
-+      *ptr = NULL;
-+      return FALSE;
-+    }
- 
-   switch (sec->compress_status)
-     {
-Index: git/bfd/elf32-v850.c
-===================================================================
---- git.orig/bfd/elf32-v850.c
-+++ git/bfd/elf32-v850.c
-@@ -2450,7 +2450,9 @@ v850_elf_copy_notes (bfd *ibfd, bfd *obf
- 	BFD_ASSERT (bfd_malloc_and_get_section (ibfd, inotes, & icont));
- 
-       if ((ocont = elf_section_data (onotes)->this_hdr.contents) == NULL)
--	BFD_ASSERT (bfd_malloc_and_get_section (obfd, onotes, & ocont));
-+	/* If the output is being stripped then it is possible for
-+	   the notes section to disappear.  In this case do nothing.  */
-+	return;
- 
-       /* Copy/overwrite notes from the input to the output.  */
-       memcpy (ocont, icont, bfd_section_size (obfd, onotes));
-Index: git/bfd/opncls.c
-===================================================================
---- git.orig/bfd/opncls.c
-+++ git/bfd/opncls.c
-@@ -1776,6 +1776,7 @@ get_build_id (bfd *abfd)
-   Elf_External_Note *enote;
-   bfd_byte *contents;
-   asection *sect;
-+  bfd_size_type size;
- 
-   BFD_ASSERT (abfd);
- 
-@@ -1790,8 +1791,9 @@ get_build_id (bfd *abfd)
-       return NULL;
-     }
- 
-+  size = bfd_get_section_size (sect);
-   /* FIXME: Should we support smaller build-id notes ?  */
--  if (bfd_get_section_size (sect) < 0x24)
-+  if (size < 0x24)
-     {
-       bfd_set_error (bfd_error_invalid_operation);
-       return NULL;
-@@ -1804,6 +1806,17 @@ get_build_id (bfd *abfd)
-       return NULL;
-     }
- 
-+  /* FIXME: Paranoia - allow for compressed build-id sections.
-+     Maybe we should complain if this size is different from
-+     the one obtained above...  */
-+  size = bfd_get_section_size (sect);
-+  if (size < sizeof (Elf_External_Note))
-+    {
-+      bfd_set_error (bfd_error_invalid_operation);
-+      free (contents);
-+      return NULL;
-+    }
-+
-   enote = (Elf_External_Note *) contents;
-   inote.type = H_GET_32 (abfd, enote->type);
-   inote.namesz = H_GET_32 (abfd, enote->namesz);
-@@ -1815,7 +1828,8 @@ get_build_id (bfd *abfd)
-   if (inote.descsz == 0
-       || inote.type != NT_GNU_BUILD_ID
-       || inote.namesz != 4 /* sizeof "GNU"  */
--      || strcmp (inote.namedata, "GNU") != 0)
-+      || strncmp (inote.namedata, "GNU", 4) != 0
-+      || size < (12 + BFD_ALIGN (inote.namesz, 4) + inote.descsz))
-     {
-       free (contents);
-       bfd_set_error (bfd_error_invalid_operation);
-Index: git/binutils/objdump.c
-===================================================================
---- git.orig/binutils/objdump.c
-+++ git/binutils/objdump.c
-@@ -2048,7 +2048,7 @@ disassemble_section (bfd *abfd, asection
-     return;
- 
-   datasize = bfd_get_section_size (section);
--  if (datasize == 0)
-+  if (datasize == 0 || datasize >= (bfd_size_type) bfd_get_file_size (abfd))
-     return;
- 
-   if (start_address == (bfd_vma) -1
-@@ -2912,7 +2912,7 @@ dump_target_specific (bfd *abfd)
- static void
- dump_section (bfd *abfd, asection *section, void *dummy ATTRIBUTE_UNUSED)
- {
--  bfd_byte *data = 0;
-+  bfd_byte *data = NULL;
-   bfd_size_type datasize;
-   bfd_vma addr_offset;
-   bfd_vma start_offset;
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,4 +1,14 @@
- 2017-06-26  Nick Clifton  <nickc@redhat.com>
-+
-+       PR binutils/21665
-+       * opncls.c (get_build_id): Check that the section is beig enough
-+       to contain the whole note.
-+       * compress.c (bfd_get_full_section_contents): Check for and reject
-+       a section whoes size is greater than the size of the entire file.
-+       * elf32-v850.c (v850_elf_copy_notes): Allow for the ouput to not
-+       contain a notes section.
-+
-+2017-06-26  Nick Clifton  <nickc@redhat.com>
-  
-        PR binutils/21670
-        * tekhex.c (getvalue): Check for the source pointer exceeding the
-Index: git/binutils/ChangeLog
-===================================================================
---- git.orig/binutils/ChangeLog
-+++ git/binutils/ChangeLog
-@@ -1,3 +1,9 @@
-+2017-06-26  Nick Clifton  <nickc@redhat.com>
-+ 
-+       PR binutils/21665
-+       * objdump.c (disassemble_section): Skip any section that is bigger
-+       than the entire file.
-+
- 2017-04-03  Nick Clifton  <nickc@redhat.com>
- 
-        PR binutils/21345
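
The get_build_id changes boil down to not trusting sizes read from the file: the section must be big enough for a whole note header, and the namesz/descsz fields recorded in that header must themselves fit inside the bytes actually read. A standalone sketch of that validation; build_id_note_ok, get_le32 and the fixed little-endian layout are assumptions made for the example, not bfd code.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ALIGN4(x) (((x) + 3u) & ~3u)

static uint32_t
get_le32 (const unsigned char *p)
{
  return (uint32_t) p[0] | ((uint32_t) p[1] << 8)
         | ((uint32_t) p[2] << 16) | ((uint32_t) p[3] << 24);
}

/* Accept the note only if its declared sizes fit inside SIZE bytes.  */
static int
build_id_note_ok (const unsigned char *contents, uint32_t size)
{
  uint32_t namesz, descsz, type;

  if (size < 12)                        /* not even a complete note header */
    return 0;
  namesz = get_le32 (contents);
  descsz = get_le32 (contents + 4);
  type   = get_le32 (contents + 8);
  if (descsz == 0
      || type != 3                      /* NT_GNU_BUILD_ID */
      || namesz != 4                    /* "GNU" plus NUL */
      || (uint64_t) descsz + ALIGN4 (namesz) + 12 > size  /* claims too much */
      || memcmp (contents + 12, "GNU", 4) != 0)
    return 0;
  return 1;
}

int
main (void)
{
  unsigned char note[36] = { 4, 0, 0, 0, 20, 0, 0, 0, 3, 0, 0, 0, 'G', 'N', 'U', 0 };

  printf ("full note ok:  %d\n", build_id_note_ok (note, sizeof note));
  printf ("truncated ok:  %d\n", build_id_note_ok (note, 16));
  return 0;
}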
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_2.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_2.patch
deleted file mode 100644
index f95295f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_2.patch
+++ /dev/null
@@ -1,122 +0,0 @@
-From 0630b49c470ca2e3c3f74da4c7e4ff63440dd71f Mon Sep 17 00:00:00 2001
-From: "H.J. Lu" <hjl.tools@gmail.com>
-Date: Mon, 26 Jun 2017 09:24:49 -0700
-Subject: [PATCH] Check file size before getting section contents
-
-Don't check the section size in bfd_get_full_section_contents since
-the size of a decompressed section may be larger than the file size.
-Instead, check file size in _bfd_generic_get_section_contents.
-
-	PR binutils/21665
-	* compress.c (bfd_get_full_section_contents): Don't check the
-	file size here.
-	* libbfd.c (_bfd_generic_get_section_contents): Check for and
-	reject a section whoes size + offset is greater than the size
-	of the entire file.
-	(_bfd_generic_get_section_contents_in_window): Likewise.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9955 #2
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog  | 10 +++++++++-
- bfd/compress.c |  8 +-------
- bfd/libbfd.c   | 17 ++++++++++++++++-
- 3 files changed, 26 insertions(+), 9 deletions(-)
-
-Index: git/bfd/compress.c
-===================================================================
---- git.orig/bfd/compress.c
-+++ git/bfd/compress.c
-@@ -239,12 +239,6 @@ bfd_get_full_section_contents (bfd *abfd
-       *ptr = NULL;
-       return TRUE;
-     }
--  else if (bfd_get_file_size (abfd) > 0
--	   && sz > (bfd_size_type) bfd_get_file_size (abfd))
--    {
--      *ptr = NULL;
--      return FALSE;
--    }
- 
-   switch (sec->compress_status)
-     {
-@@ -260,7 +254,7 @@ bfd_get_full_section_contents (bfd *abfd
- 		  /* xgettext:c-format */
- 		  (_("error: %B(%A) is too large (%#lx bytes)"),
- 		  abfd, sec, (long) sz);
--	    return FALSE;
-+	      return FALSE;
- 	    }
- 	}
- 
-Index: git/bfd/libbfd.c
-===================================================================
---- git.orig/bfd/libbfd.c
-+++ git/bfd/libbfd.c
-@@ -780,6 +780,7 @@ _bfd_generic_get_section_contents (bfd *
- 				   bfd_size_type count)
- {
-   bfd_size_type sz;
-+  file_ptr filesz;
-   if (count == 0)
-     return TRUE;
- 
-@@ -802,8 +803,15 @@ _bfd_generic_get_section_contents (bfd *
-     sz = section->rawsize;
-   else
-     sz = section->size;
-+  filesz = bfd_get_file_size (abfd);
-+  if (filesz < 0)
-+    {
-+      /* This should never happen.  */
-+      abort ();
-+    }
-   if (offset + count < count
--      || offset + count > sz)
-+      || offset + count > sz
-+      || (section->filepos + offset + sz) > (bfd_size_type) filesz)
-     {
-       bfd_set_error (bfd_error_invalid_operation);
-       return FALSE;
-@@ -826,6 +834,7 @@ _bfd_generic_get_section_contents_in_win
- {
- #ifdef USE_MMAP
-   bfd_size_type sz;
-+  file_ptr filesz;
- 
-   if (count == 0)
-     return TRUE;
-@@ -858,7 +867,13 @@ _bfd_generic_get_section_contents_in_win
-     sz = section->rawsize;
-   else
-     sz = section->size;
-+  filesz = bfd_get_file_size (abfd);
-+    {
-+      /* This should never happen.  */
-+      abort ();
-+    }
-   if (offset + count > sz
-+      || (section->filepos + offset + sz) > (bfd_size_type) filesz
-       || ! bfd_get_file_window (abfd, section->filepos + offset, count, w,
- 				TRUE))
-     return FALSE;
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,3 +1,13 @@
-+2017-06-26  H.J. Lu  <hongjiu.lu@intel.com>
-+
-+       PR binutils/21665
-+       * compress.c (bfd_get_full_section_contents): Don't check the
-+       file size here.
-+       * libbfd.c (_bfd_generic_get_section_contents): Check for and
-+       reject a section whoes size + offset is greater than the size
-+       of the entire file.
-+       (_bfd_generic_get_section_contents_in_window): Likewise.
-+
- 2017-06-26  Nick Clifton  <nickc@redhat.com>
- 
-        PR binutils/21665
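
The check that moved into _bfd_generic_get_section_contents is, stripped of bfd types, a three-part range test: the requested offset+count must not wrap, must stay inside the section, and (together with the section's position in the file) must stay inside the file. A distilled version with invented parameter names:

#include <stdio.h>

/* filepos: section start in the file; sz: section size; filesz: file size.  */
static int
read_range_ok (unsigned long filepos, unsigned long sz,
               unsigned long offset, unsigned long count,
               unsigned long filesz)
{
  if (count == 0)
    return 1;
  if (offset + count < count)              /* unsigned wrap-around */
    return 0;
  if (offset + count > sz)                 /* past the end of the section */
    return 0;
  if (filepos + offset + count > filesz)   /* past the end of the file */
    return 0;
  return 1;
}

int
main (void)
{
  printf ("%d\n", read_range_ok (0x100, 0x80, 0x10, 0x20, 0x1000));  /* 1 */
  printf ("%d\n", read_range_ok (0x100, 0x8000, 0, 0x8000, 0x1000)); /* 0: lies about size */
  return 0;
}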
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_3.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_3.patch
deleted file mode 100644
index 1b67c4e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_3.patch
+++ /dev/null
@@ -1,48 +0,0 @@
-From 1f473e3d0ad285195934e6a077c7ed32afe66437 Mon Sep 17 00:00:00 2001
-From: "H.J. Lu" <hjl.tools@gmail.com>
-Date: Mon, 26 Jun 2017 15:47:16 -0700
-Subject: [PATCH] Add a missing line to
- _bfd_generic_get_section_contents_in_window
-
-	PR binutils/21665
-	* libbfd.c (_bfd_generic_get_section_contents_in_window): Add
-	a missing line.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9955 #3
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog | 6 ++++++
- bfd/libbfd.c  | 1 +
- 2 files changed, 7 insertions(+)
-
-Index: git/bfd/libbfd.c
-===================================================================
---- git.orig/bfd/libbfd.c
-+++ git/bfd/libbfd.c
-@@ -868,6 +868,7 @@ _bfd_generic_get_section_contents_in_win
-   else
-     sz = section->size;
-   filesz = bfd_get_file_size (abfd);
-+  if (filesz < 0)
-     {
-       /* This should never happen.  */
-       abort ();
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,6 +1,12 @@
- 2017-06-26  H.J. Lu  <hongjiu.lu@intel.com>
- 
-        PR binutils/21665
-+       * libbfd.c (_bfd_generic_get_section_contents_in_window): Add
-+       a missing line.
-+
-+2017-06-26  H.J. Lu  <hongjiu.lu@intel.com>
-+
-+       PR binutils/21665
-        * compress.c (bfd_get_full_section_contents): Don't check the
-        file size here.
-        * libbfd.c (_bfd_generic_get_section_contents): Check for and
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_4.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_4.patch
deleted file mode 100644
index 97d529a..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_4.patch
+++ /dev/null
@@ -1,51 +0,0 @@
-From ab27f80c5dceaa23c4ba7f62c0d5d22a5d5dd7a1 Mon Sep 17 00:00:00 2001
-From: Pedro Alves <palves@redhat.com>
-Date: Tue, 27 Jun 2017 00:21:25 +0100
-Subject: [PATCH] Fix GDB regressions caused by previous
- bfd_get_section_contents changes
-
-Ref: https://sourceware.org/ml/binutils/2017-06/msg00343.html
-
-bfd/ChangeLog:
-2017-06-26  Pedro Alves  <palves@redhat.com>
-
-	PR binutils/21665
-	* libbfd.c (_bfd_generic_get_section_contents): Add "count", not
-	"sz".
-
-Upstream-Status: Backport
-CVE: CVE-2017-9955 #4
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog | 6 ++++++
- bfd/libbfd.c  | 2 +-
- 2 files changed, 7 insertions(+), 1 deletion(-)
-
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,3 +1,9 @@
-+2017-06-26  Pedro Alves  <palves@redhat.com>
-+
-+	PR binutils/21665
-+	* libbfd.c (_bfd_generic_get_section_contents): Add "count", not
-+	"sz".
-+
- 2017-06-26  H.J. Lu  <hongjiu.lu@intel.com>
- 
-        PR binutils/21665
-Index: git/bfd/libbfd.c
-===================================================================
---- git.orig/bfd/libbfd.c
-+++ git/bfd/libbfd.c
-@@ -811,7 +811,7 @@ _bfd_generic_get_section_contents (bfd *
-     }
-   if (offset + count < count
-       || offset + count > sz
--      || (section->filepos + offset + sz) > (bfd_size_type) filesz)
-+      || (section->filepos + offset + count) > (bfd_size_type) filesz)
-     {
-       bfd_set_error (bfd_error_invalid_operation);
-       return FALSE;
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_5.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_5.patch
deleted file mode 100644
index da3bd37..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_5.patch
+++ /dev/null
@@ -1,89 +0,0 @@
-From 7211ae501eb0de1044983f2dfb00091a58fbd66c Mon Sep 17 00:00:00 2001
-From: Alan Modra <amodra@gmail.com>
-Date: Tue, 27 Jun 2017 09:45:04 +0930
-Subject: [PATCH] More fixes for bfd_get_section_contents change
-
-	PR binutils/21665
-	* libbfd.c (_bfd_generic_get_section_contents): Delete abort.
-	Use unsigned file pointer type, and remove cast.
-	* libbfd.c (_bfd_generic_get_section_contents_in_window): Likewise.
-	Add "count", not "sz".
-
-Upstream-Status: Backport
-CVE: CVE-2017-9955 #5
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog |  8 ++++++++
- bfd/libbfd.c  | 18 ++++--------------
- 2 files changed, 12 insertions(+), 14 deletions(-)
-
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,3 +1,11 @@
-+2017-06-27  Alan Modra  <amodra@gmail.com>
-+
-+	PR binutils/21665
-+	* libbfd.c (_bfd_generic_get_section_contents): Delete abort.
-+	Use unsigned file pointer type, and remove cast.
-+	* libbfd.c (_bfd_generic_get_section_contents_in_window): Likewise.
-+	Add "count", not "sz".
-+
- 2017-06-26  Pedro Alves  <palves@redhat.com>
- 
- 	PR binutils/21665
-Index: git/bfd/libbfd.c
-===================================================================
---- git.orig/bfd/libbfd.c
-+++ git/bfd/libbfd.c
-@@ -780,7 +780,7 @@ _bfd_generic_get_section_contents (bfd *
- 				   bfd_size_type count)
- {
-   bfd_size_type sz;
--  file_ptr filesz;
-+  ufile_ptr filesz;
-   if (count == 0)
-     return TRUE;
- 
-@@ -804,14 +804,9 @@ _bfd_generic_get_section_contents (bfd *
-   else
-     sz = section->size;
-   filesz = bfd_get_file_size (abfd);
--  if (filesz < 0)
--    {
--      /* This should never happen.  */
--      abort ();
--    }
-   if (offset + count < count
-       || offset + count > sz
--      || (section->filepos + offset + count) > (bfd_size_type) filesz)
-+      || section->filepos + offset + count > filesz)
-     {
-       bfd_set_error (bfd_error_invalid_operation);
-       return FALSE;
-@@ -834,7 +829,7 @@ _bfd_generic_get_section_contents_in_win
- {
- #ifdef USE_MMAP
-   bfd_size_type sz;
--  file_ptr filesz;
-+  ufile_ptr filesz;
- 
-   if (count == 0)
-     return TRUE;
-@@ -868,13 +863,8 @@ _bfd_generic_get_section_contents_in_win
-   else
-     sz = section->size;
-   filesz = bfd_get_file_size (abfd);
--  if (filesz < 0)
--    {
--      /* This should never happen.  */
--      abort ();
--    }
-   if (offset + count > sz
--      || (section->filepos + offset + sz) > (bfd_size_type) filesz
-+      || section->filepos + offset + count > filesz
-       || ! bfd_get_file_window (abfd, section->filepos + offset, count, w,
- 				TRUE))
-     return FALSE;
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_6.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_6.patch
deleted file mode 100644
index e36429a..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_6.patch
+++ /dev/null
@@ -1,56 +0,0 @@
-From ea9aafc41a764e4e2dbb88a7b031e886b481b99a Mon Sep 17 00:00:00 2001
-From: Alan Modra <amodra@gmail.com>
-Date: Tue, 27 Jun 2017 14:43:49 +0930
-Subject: [PATCH] Warning fix
-
-	PR binutils/21665
-	* libbfd.c (_bfd_generic_get_section_contents): Warning fix.
-	(_bfd_generic_get_section_contents_in_window): Likewise.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9955 #6
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog | 12 +++++++++---
- bfd/libbfd.c  |  4 ++--
- 2 files changed, 11 insertions(+), 5 deletions(-)
-
-Index: git/bfd/libbfd.c
-===================================================================
---- git.orig/bfd/libbfd.c
-+++ git/bfd/libbfd.c
-@@ -806,7 +806,7 @@ _bfd_generic_get_section_contents (bfd *
-   filesz = bfd_get_file_size (abfd);
-   if (offset + count < count
-       || offset + count > sz
--      || section->filepos + offset + count > filesz)
-+      || (ufile_ptr) section->filepos + offset + count > filesz)
-     {
-       bfd_set_error (bfd_error_invalid_operation);
-       return FALSE;
-@@ -864,7 +864,7 @@ _bfd_generic_get_section_contents_in_win
-     sz = section->size;
-   filesz = bfd_get_file_size (abfd);
-   if (offset + count > sz
--      || section->filepos + offset + count > filesz
-+      || (ufile_ptr) section->filepos + offset + count > filesz
-       || ! bfd_get_file_window (abfd, section->filepos + offset, count, w,
- 				TRUE))
-     return FALSE;
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,5 +1,11 @@
- 2017-06-27  Alan Modra  <amodra@gmail.com>
- 
-+       PR binutils/21665
-+       * libbfd.c (_bfd_generic_get_section_contents): Warning fix.
-+       (_bfd_generic_get_section_contents_in_window): Likewise.
-+
-+2017-06-27  Alan Modra  <amodra@gmail.com>
-+
- 	PR binutils/21665
- 	* libbfd.c (_bfd_generic_get_section_contents): Delete abort.
- 	Use unsigned file pointer type, and remove cast.
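
The last two follow-ups are about doing that range test entirely in unsigned arithmetic: the file size lives in ufile_ptr, so the "negative size" abort disappears, and casting section->filepos keeps the comparison free of signed/unsigned surprises. A toy illustration of why the mixed-sign form was fragile (plain C, no bfd types):

#include <stdio.h>

int
main (void)
{
  /* With a signed size, every comparison needs a "could it be negative?"
     branch; keeping the value unsigned from the start removes that branch.  */
  long          maybe_negative = -1;
  unsigned long as_unsigned    = (unsigned long) maybe_negative;

  printf ("as signed: %ld, as unsigned: %lu\n", maybe_negative, as_unsigned);

  /* An explicit cast on the file position makes the whole bound check
     unsigned by construction, so it means what it says and the compiler's
     mixed-sign warning goes away.  */
  long          filepos = 0x100;
  unsigned long offset = 0x10, count = 0x20, filesz = 0x1000;

  if ((unsigned long) filepos + offset + count > filesz)
    printf ("rejected\n");
  else
    printf ("accepted\n");
  return 0;
}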
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_7.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_7.patch
deleted file mode 100644
index 2cae63b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_7.patch
+++ /dev/null
@@ -1,80 +0,0 @@
-From 60a02042bacf8d25814430080adda61ed086bca6 Mon Sep 17 00:00:00 2001
-From: Nick Clifton <nickc@redhat.com>
-Date: Fri, 30 Jun 2017 11:03:37 +0100
-Subject: [PATCH] Fix failures in MMIX linker tests introduced by fix for PR
- 21665.
-
-	PR binutils/21665
-	* objdump.c (disassemble_section): Move check for an overlarge
-	section to just before the allocation of memory.  Do not check
-	section size against file size, but instead use an arbitrary 2Gb
-	limit.  Issue a warning message if the section is too big.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9955 #7
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- binutils/ChangeLog |  8 ++++++++
- binutils/objdump.c | 25 ++++++++++++++++++++++++-
- 2 files changed, 32 insertions(+), 1 deletion(-)
-
-Index: git/binutils/objdump.c
-===================================================================
---- git.orig/binutils/objdump.c
-+++ git/binutils/objdump.c
-@@ -2048,7 +2048,7 @@ disassemble_section (bfd *abfd, asection
-     return;
- 
-   datasize = bfd_get_section_size (section);
--  if (datasize == 0 || datasize >= (bfd_size_type) bfd_get_file_size (abfd))
-+  if (datasize == 0)
-     return;
- 
-   if (start_address == (bfd_vma) -1
-@@ -2112,6 +2112,29 @@ disassemble_section (bfd *abfd, asection
-     }
-   rel_ppend = rel_pp + rel_count;
- 
-+  /* PR 21665: Check for overlarge datasizes.
-+     Note - we used to check for "datasize > bfd_get_file_size (abfd)" but
-+     this fails when using compressed sections or compressed file formats
-+     (eg MMO, tekhex).
-+
-+     The call to xmalloc below will fail if too much memory is requested,
-+     which will catch the problem in the normal use case.  But if a memory
-+     checker is in use, eg valgrind or sanitize, then an exception will
-+     be still generated, so we try to catch the problem first.
-+
-+     Unfortunately there is no simple way to determine how much memory can
-+     be allocated by calling xmalloc.  So instead we use a simple, arbitrary
-+     limit of 2Gb.  Hopefully this should be enough for most users.  If
-+     someone does start trying to disassemble sections larger then 2Gb in
-+     size they will doubtless complain and we can increase the limit.  */
-+#define MAX_XMALLOC (1024 * 1024 * 1024 * 2UL) /* 2Gb */
-+  if (datasize > MAX_XMALLOC)
-+    {
-+      non_fatal (_("Reading section %s failed because it is too big (%#lx)"),
-+		 section->name, (unsigned long) datasize);
-+      return;
-+    }
-+
-   data = (bfd_byte *) xmalloc (datasize);
- 
-   bfd_get_section_contents (abfd, section, data, 0, datasize);
-Index: git/binutils/ChangeLog
-===================================================================
---- git.orig/binutils/ChangeLog
-+++ git/binutils/ChangeLog
-@@ -1,3 +1,11 @@
-+2017-06-30  Nick Clifton  <nickc@redhat.com>
-+
-+       PR binutils/21665
-+       * objdump.c (disassemble_section): Move check for an overlarge
-+       section to just before the allocation of memory.  Do not check
-+       section size against file size, but instead use an arbitrary 2Gb
-+       limit.  Issue a warning message if the section is too big.
-+
- 2017-06-26  Nick Clifton  <nickc@redhat.com>
-  
-        PR binutils/21665
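
Rather than comparing against the file size (which is wrong for compressed sections), the objdump fix simply caps how much it is willing to allocate. The 2 GiB limit below is taken from the patch; checked_alloc and the harness around it are invented for the sketch.

#include <stdio.h>
#include <stdlib.h>

#define MAX_ALLOC (1024 * 1024 * 1024 * 2UL)   /* 2 GiB, as in the patch */

static unsigned char *
checked_alloc (unsigned long datasize)
{
  if (datasize == 0 || datasize > MAX_ALLOC)
    {
      fprintf (stderr, "section too big (%#lx), refusing to allocate\n",
               datasize);
      return NULL;
    }
  return malloc (datasize);
}

int
main (void)
{
  unsigned char *p = checked_alloc (0x10000);       /* fine */
  unsigned char *q = checked_alloc (0xfffffffful);  /* rejected by the cap */

  printf ("p=%s q=%s\n", p ? "ok" : "null", q ? "ok" : "null");
  free (p);
  free (q);
  return 0;
}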
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_8.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_8.patch
deleted file mode 100644
index 45dd974..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_8.patch
+++ /dev/null
@@ -1,187 +0,0 @@
-From bae7501e87ab614115d9d3213b4dd18d96e604db Mon Sep 17 00:00:00 2001
-From: Alan Modra <amodra@gmail.com>
-Date: Sat, 1 Jul 2017 21:58:10 +0930
-Subject: [PATCH] Use bfd_malloc_and_get_section
-
-It's nicer than xmalloc followed by bfd_get_section_contents, since
-xmalloc exits on failure and needs a check that its size_t arg doesn't
-lose high bits when converted from bfd_size_type.
-
-	PR binutils/21665
-	* objdump.c (strtab): Make var a bfd_byte*.
-	(disassemble_section): Don't limit malloc size.  Instead, use
-	bfd_malloc_and_get_section.
-	(read_section_stabs): Use bfd_malloc_and_get_section.  Return
-	bfd_byte*.
-	(find_stabs_section): Remove now unnecessary cast.
-	* objcopy.c (copy_object): Use bfd_malloc_and_get_section.  Free
-	contents on error return.
-	* nlmconv.c (copy_sections): Use bfd_malloc_and_get_section.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9955 #8
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- binutils/ChangeLog | 13 +++++++++++++
- binutils/nlmconv.c |  6 ++----
- binutils/objcopy.c |  5 +++--
- binutils/objdump.c | 44 +++++++-------------------------------------
- 4 files changed, 25 insertions(+), 43 deletions(-)
-
-Index: git/binutils/ChangeLog
-===================================================================
---- git.orig/binutils/ChangeLog
-+++ git/binutils/ChangeLog
-@@ -1,3 +1,16 @@
-+2017-07-01  Alan Modra  <amodra@gmail.com>
-+
-+	PR binutils/21665
-+	* objdump.c (strtab): Make var a bfd_byte*.
-+	(disassemble_section): Don't limit malloc size.  Instead, use
-+	bfd_malloc_and_get_section.
-+	(read_section_stabs): Use bfd_malloc_and_get_section.  Return
-+	bfd_byte*.
-+	(find_stabs_section): Remove now unnecessary cast.
-+	* objcopy.c (copy_object): Use bfd_malloc_and_get_section.  Free
-+	contents on error return.
-+	* nlmconv.c (copy_sections): Use bfd_malloc_and_get_section.
-+
- 2017-06-30  Nick Clifton  <nickc@redhat.com>
- 
-        PR binutils/21665
-Index: git/binutils/nlmconv.c
-===================================================================
---- git.orig/binutils/nlmconv.c
-+++ git/binutils/nlmconv.c
-@@ -1224,7 +1224,7 @@ copy_sections (bfd *inbfd, asection *ins
-   const char *inname;
-   asection *outsec;
-   bfd_size_type size;
--  void *contents;
-+  bfd_byte *contents;
-   long reloc_size;
-   bfd_byte buf[4];
-   bfd_size_type add;
-@@ -1240,9 +1240,7 @@ copy_sections (bfd *inbfd, asection *ins
-     contents = NULL;
-   else
-     {
--      contents = xmalloc (size);
--      if (! bfd_get_section_contents (inbfd, insec, contents,
--				      (file_ptr) 0, size))
-+      if (!bfd_malloc_and_get_section (inbfd, insec, &contents))
- 	bfd_fatal (bfd_get_filename (inbfd));
-     }
- 
-Index: git/binutils/objdump.c
-===================================================================
---- git.orig/binutils/objdump.c
-+++ git/binutils/objdump.c
-@@ -180,7 +180,7 @@ static long dynsymcount = 0;
- static bfd_byte *stabs;
- static bfd_size_type stab_size;
- 
--static char *strtab;
-+static bfd_byte *strtab;
- static bfd_size_type stabstr_size;
- 
- static bfd_boolean is_relocatable = FALSE;
-@@ -2112,29 +2112,6 @@ disassemble_section (bfd *abfd, asection
-     }
-   rel_ppend = rel_pp + rel_count;
- 
--  /* PR 21665: Check for overlarge datasizes.
--     Note - we used to check for "datasize > bfd_get_file_size (abfd)" but
--     this fails when using compressed sections or compressed file formats
--     (eg MMO, tekhex).
--
--     The call to xmalloc below will fail if too much memory is requested,
--     which will catch the problem in the normal use case.  But if a memory
--     checker is in use, eg valgrind or sanitize, then an exception will
--     be still generated, so we try to catch the problem first.
--
--     Unfortunately there is no simple way to determine how much memory can
--     be allocated by calling xmalloc.  So instead we use a simple, arbitrary
--     limit of 2Gb.  Hopefully this should be enough for most users.  If
--     someone does start trying to disassemble sections larger then 2Gb in
--     size they will doubtless complain and we can increase the limit.  */
--#define MAX_XMALLOC (1024 * 1024 * 1024 * 2UL) /* 2Gb */
--  if (datasize > MAX_XMALLOC)
--    {
--      non_fatal (_("Reading section %s failed because it is too big (%#lx)"),
--		 section->name, (unsigned long) datasize);
--      return;
--    }
--
-   data = (bfd_byte *) xmalloc (datasize);
- 
-   bfd_get_section_contents (abfd, section, data, 0, datasize);
-@@ -2652,12 +2629,11 @@ dump_dwarf (bfd *abfd)
- /* Read ABFD's stabs section STABSECT_NAME, and return a pointer to
-    it.  Return NULL on failure.   */
- 
--static char *
-+static bfd_byte *
- read_section_stabs (bfd *abfd, const char *sect_name, bfd_size_type *size_ptr)
- {
-   asection *stabsect;
--  bfd_size_type size;
--  char *contents;
-+  bfd_byte *contents;
- 
-   stabsect = bfd_get_section_by_name (abfd, sect_name);
-   if (stabsect == NULL)
-@@ -2666,10 +2642,7 @@ read_section_stabs (bfd *abfd, const cha
-       return FALSE;
-     }
- 
--  size = bfd_section_size (abfd, stabsect);
--  contents  = (char *) xmalloc (size);
--
--  if (! bfd_get_section_contents (abfd, stabsect, contents, 0, size))
-+  if (!bfd_malloc_and_get_section (abfd, stabsect, &contents))
-     {
-       non_fatal (_("reading %s section of %s failed: %s"),
- 		 sect_name, bfd_get_filename (abfd),
-@@ -2679,7 +2652,7 @@ read_section_stabs (bfd *abfd, const cha
-       return NULL;
-     }
- 
--  *size_ptr = size;
-+  *size_ptr = bfd_section_size (abfd, stabsect);
- 
-   return contents;
- }
-@@ -2806,8 +2779,7 @@ find_stabs_section (bfd *abfd, asection
- 
-       if (strtab)
- 	{
--	  stabs = (bfd_byte *) read_section_stabs (abfd, section->name,
--						   &stab_size);
-+	  stabs = read_section_stabs (abfd, section->name, &stab_size);
- 	  if (stabs)
- 	    print_section_stabs (abfd, section->name, &sought->string_offset);
- 	}
-Index: git/binutils/objcopy.c
-===================================================================
---- git.orig/binutils/objcopy.c
-+++ git/binutils/objcopy.c
-@@ -2186,14 +2186,15 @@ copy_object (bfd *ibfd, bfd *obfd, const
- 	      continue;
- 	    }
- 
--	  bfd_byte * contents = xmalloc (size);
--	  if (bfd_get_section_contents (ibfd, sec, contents, 0, size))
-+	  bfd_byte *contents;
-+          if (bfd_malloc_and_get_section (ibfd, sec, &contents))
- 	    {
- 	      if (fwrite (contents, 1, size, f) != size)
- 		{
- 		  non_fatal (_("error writing section contents to %s (error: %s)"),
- 			     pdump->filename,
- 			     strerror (errno));
-+                  free (contents);
- 		  return FALSE;
- 		}
- 	    }
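
The CVE-2017-9955 backport dropped above as part of the 2.28 to 2.29.1 move replaces the open-coded xmalloc() plus bfd_get_section_contents() pairs in nlmconv, objcopy and objdump with bfd_malloc_and_get_section(), which allocates a buffer sized from the section itself and fails cleanly on corrupt input. A minimal sketch of that pattern, assuming the BFD API of the binutils 2.28/2.29 era; dump_section() and its error handling are illustrative, not binutils code:

    /* Sketch of the bfd_malloc_and_get_section() pattern.  Build against
       an installed libbfd with -lbfd.  */
    #include <stdio.h>
    #include <stdlib.h>
    #define PACKAGE "sketch"          /* the installed bfd.h refuses to   */
    #define PACKAGE_VERSION "0"       /* compile without these defines    */
    #include <bfd.h>

    void
    dump_section (bfd *abfd, asection *sec)
    {
      bfd_byte *contents = NULL;

      /* One call sizes the buffer from the section and reads it, so an
         implausibly large section size fails here instead of driving an
         unchecked xmalloc on corrupt input.  */
      if (!bfd_malloc_and_get_section (abfd, sec, &contents))
        {
          fprintf (stderr, "reading %s failed: %s\n",
                   sec->name, bfd_errmsg (bfd_get_error ()));
          return;
        }

      /* ... inspect contents[0 .. bfd_section_size (abfd, sec) - 1] ... */

      free (contents);
    }
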
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_9.patch b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_9.patch
deleted file mode 100644
index c6353d8..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils/CVE-2017-9955_9.patch
+++ /dev/null
@@ -1,361 +0,0 @@
-From 8e2f54bcee7e3e8315d4a39a302eaf8e4389e07d Mon Sep 17 00:00:00 2001
-From: "H.J. Lu" <hjl.tools@gmail.com>
-Date: Tue, 30 May 2017 06:34:05 -0700
-Subject: [PATCH] Add bfd_get_file_size to get archive element size
-
-We can't use stat() to get archive element size.  Add bfd_get_file_size
-to get size for both normal files and archive elements.
-
-bfd/
-
-	PR binutils/21519
-	* bfdio.c (bfd_get_file_size): New function.
-	* bfd-in2.h: Regenerated.
-
-binutils/
-
-	PR binutils/21519
-	* objdump.c (dump_relocs_in_section): Replace get_file_size
-	with bfd_get_file_size to get archive element size.
-	* testsuite/binutils-all/objdump.exp (test_objdump_f): New
-	proc.
-	(test_objdump_h): Likewise.
-	(test_objdump_t): Likewise.
-	(test_objdump_r): Likewise.
-	(test_objdump_s): Likewise.
-	Add objdump tests on archive.
-
-Upstream-Status: Backport
-CVE: CVE-2017-9955
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
----
- bfd/ChangeLog                               |   6 +
- bfd/bfd-in2.h                               |   2 +
- bfd/bfdio.c                                 |  23 ++++
- binutils/ChangeLog                          |  13 ++
- binutils/objdump.c                          |   2 +-
- binutils/testsuite/binutils-all/objdump.exp | 178 +++++++++++++++++++---------
- 6 files changed, 170 insertions(+), 54 deletions(-)
-
-Index: git/bfd/bfd-in2.h
-===================================================================
---- git.orig/bfd/bfd-in2.h
-+++ git/bfd/bfd-in2.h
-@@ -1241,6 +1241,8 @@ long bfd_get_mtime (bfd *abfd);
- 
- file_ptr bfd_get_size (bfd *abfd);
- 
-+file_ptr bfd_get_file_size (bfd *abfd);
-+
- void *bfd_mmap (bfd *abfd, void *addr, bfd_size_type len,
-     int prot, int flags, file_ptr offset,
-     void **map_addr, bfd_size_type *map_len);
-Index: git/bfd/bfdio.c
-===================================================================
---- git.orig/bfd/bfdio.c
-+++ git/bfd/bfdio.c
-@@ -434,6 +434,29 @@ bfd_get_size (bfd *abfd)
-   return buf.st_size;
- }
- 
-+/*
-+FUNCTION
-+	bfd_get_file_size
-+
-+SYNOPSIS
-+	file_ptr bfd_get_file_size (bfd *abfd);
-+
-+DESCRIPTION
-+	Return the file size (as read from file system) for the file
-+	associated with BFD @var{abfd}.  It supports both normal files
-+	and archive elements.
-+
-+*/
-+
-+file_ptr
-+bfd_get_file_size (bfd *abfd)
-+{
-+  if (abfd->my_archive != NULL
-+      && !bfd_is_thin_archive (abfd->my_archive))
-+    return arelt_size (abfd);
-+
-+  return bfd_get_size (abfd);
-+}
- 
- /*
- FUNCTION
-Index: git/binutils/objdump.c
-===================================================================
---- git.orig/binutils/objdump.c
-+++ git/binutils/objdump.c
-@@ -3310,7 +3310,7 @@ dump_relocs_in_section (bfd *abfd,
-     }
- 
-   if ((bfd_get_file_flags (abfd) & (BFD_IN_MEMORY | BFD_LINKER_CREATED)) == 0
--      && relsize > get_file_size (bfd_get_filename (abfd)))
-+      && relsize > bfd_get_file_size (abfd))
-     {
-       printf (" (too many: 0x%x)\n", section->reloc_count);
-       bfd_set_error (bfd_error_file_truncated);
-Index: git/binutils/testsuite/binutils-all/objdump.exp
-===================================================================
---- git.orig/binutils/testsuite/binutils-all/objdump.exp
-+++ git/binutils/testsuite/binutils-all/objdump.exp
-@@ -64,96 +64,168 @@ if [regexp $want $got] then {
- if {![binutils_assemble $srcdir/$subdir/bintest.s tmpdir/bintest.o]} then {
-     return
- }
-+if {![binutils_assemble $srcdir/$subdir/bintest.s tmpdir/bintest2.o]} then {
-+    return
-+}
- if [is_remote host] {
-     set testfile [remote_download host tmpdir/bintest.o]
-+    set testfile2 [remote_download host tmpdir/bintest2.o]
- } else {
-     set testfile tmpdir/bintest.o
-+    set testfile2 tmpdir/bintest2.o
-+}
-+
-+if { ![istarget "alpha-*-*"] || [is_elf_format] } then {
-+    remote_file host file delete tmpdir/bintest.a
-+    set got [binutils_run $AR "rc tmpdir/bintest.a $testfile2"]
-+    if ![string match "" $got] then {
-+	fail "bintest.a"
-+	remote_file host delete tmpdir/bintest.a
-+    } else {
-+	if [is_remote host] {
-+	    set testarchive [remote_download host tmpdir/bintest.a]
-+	} else {
-+	    set testarchive tmpdir/bintest.a
-+	}
-+    }
-+    remote_file host delete tmpdir/bintest2.o
- }
- 
- # Test objdump -f
- 
--set got [binutils_run $OBJDUMP "$OBJDUMPFLAGS -f $testfile"]
-+proc test_objdump_f { testfile dumpfile } {
-+    global OBJDUMP
-+    global OBJDUMPFLAGS
-+    global cpus_regex
- 
--set want "$testfile:\[ 	\]*file format.*architecture:\[ 	\]*${cpus_regex}.*HAS_RELOC.*HAS_SYMS"
-+    set got [binutils_run $OBJDUMP "$OBJDUMPFLAGS -f $testfile"]
- 
--if ![regexp $want $got] then {
--    fail "objdump -f"
--} else {
--    pass "objdump -f"
-+    set want "$dumpfile:\[ 	\]*file format.*architecture:\[ 	\]*${cpus_regex}.*HAS_RELOC.*HAS_SYMS"
-+
-+    if ![regexp $want $got] then {
-+	fail "objdump -f ($testfile, $dumpfile)"
-+    } else {
-+	pass "objdump -f ($testfile, $dumpfile)"
-+    }
-+}
-+
-+test_objdump_f $testfile $testfile
-+if { [ remote_file host exists $testarchive ] } then {
-+    test_objdump_f $testarchive bintest2.o
- }
- 
- # Test objdump -h
- 
--set got [binutils_run $OBJDUMP "$OBJDUMPFLAGS -h $testfile"]
-+proc test_objdump_h { testfile dumpfile } {
-+    global OBJDUMP
-+    global OBJDUMPFLAGS
- 
--set want "$testfile:\[ 	\]*file format.*Sections.*\[0-9\]+\[ 	\]+\[^ 	\]*(text|TEXT|P|\\\$CODE\\\$)\[^ 	\]*\[ 	\]*(\[0-9a-fA-F\]+).*\[0-9\]+\[ 	\]+\[^ 	\]*(\\.data|DATA|D_1)\[^ 	\]*\[ 	\]*(\[0-9a-fA-F\]+)"
-+    set got [binutils_run $OBJDUMP "$OBJDUMPFLAGS -h $testfile"]
- 
--if ![regexp $want $got all text_name text_size data_name data_size] then {
--    fail "objdump -h"
--} else {
--    verbose "text name is $text_name size is $text_size"
--    verbose "data name is $data_name size is $data_size"
--    set ets 8
--    set eds 4
--    # The [ti]c4x target has the property sizeof(char)=sizeof(long)=1
--    if [istarget *c4x*-*-*] then {
--        set ets 2
--        set eds 1
--    }
--    # c54x section sizes are in bytes, not octets; adjust accordingly
--    if [istarget *c54x*-*-*] then {
--	set ets 4
--	set eds 2
--    }
--    if {[expr "0x$text_size"] < $ets || [expr "0x$data_size"] < $eds} then {
--	send_log "sizes too small\n"
--	fail "objdump -h"
-+    set want "$dumpfile:\[ 	\]*file format.*Sections.*\[0-9\]+\[ 	\]+\[^ 	\]*(text|TEXT|P|\\\$CODE\\\$)\[^ 	\]*\[ 	\]*(\[0-9a-fA-F\]+).*\[0-9\]+\[ 	\]+\[^ 	\]*(\\.data|DATA|D_1)\[^ 	\]*\[ 	\]*(\[0-9a-fA-F\]+)"
-+
-+    if ![regexp $want $got all text_name text_size data_name data_size] then {
-+	fail "objdump -h ($testfile, $dumpfile)"
-     } else {
--	pass "objdump -h"
-+	verbose "text name is $text_name size is $text_size"
-+	verbose "data name is $data_name size is $data_size"
-+	set ets 8
-+	set eds 4
-+	# The [ti]c4x target has the property sizeof(char)=sizeof(long)=1
-+	if [istarget *c4x*-*-*] then {
-+            set ets 2
-+            set eds 1
-+	}
-+	# c54x section sizes are in bytes, not octets; adjust accordingly
-+	if [istarget *c54x*-*-*] then {
-+	    set ets 4
-+	    set eds 2
-+        }
-+	if {[expr "0x$text_size"] < $ets || [expr "0x$data_size"] < $eds} then {
-+	    send_log "sizes too small\n"
-+	    fail "objdump -h ($testfile, $dumpfile)"
-+	} else {
-+	    pass "objdump -h ($testfile, $dumpfile)"
-+	}
-     }
- }
- 
-+test_objdump_h $testfile $testfile
-+if { [ remote_file host exists $testarchive ] } then {
-+    test_objdump_h $testarchive bintest2.o
-+}
-+
- # Test objdump -t
- 
--set got [binutils_run $OBJDUMP "$OBJDUMPFLAGS -t $testfile"]
-+proc test_objdump_t { testfile} {
-+    global OBJDUMP
-+    global OBJDUMPFLAGS
-+
-+    set got [binutils_run $OBJDUMP "$OBJDUMPFLAGS -t $testfile"]
-+
-+    if [info exists vars] then { unset vars }
-+    while {[regexp "(\[a-z\]*_symbol)(.*)" $got all symbol rest]} {
-+	set vars($symbol) 1
-+	set got $rest
-+    }
- 
--if [info exists vars] then { unset vars }
--while {[regexp "(\[a-z\]*_symbol)(.*)" $got all symbol rest]} {
--    set vars($symbol) 1
--    set got $rest
-+    if {![info exists vars(text_symbol)] \
-+	 || ![info exists vars(data_symbol)] \
-+	 || ![info exists vars(common_symbol)] \
-+	 || ![info exists vars(external_symbol)]} then {
-+	fail "objdump -t ($testfile)"
-+    } else {
-+	pass "objdump -t ($testfile)"
-+    }
- }
- 
--if {![info exists vars(text_symbol)] \
--     || ![info exists vars(data_symbol)] \
--     || ![info exists vars(common_symbol)] \
--     || ![info exists vars(external_symbol)]} then {
--    fail "objdump -t"
--} else {
--    pass "objdump -t"
-+test_objdump_t $testfile
-+if { [ remote_file host exists $testarchive ] } then {
-+    test_objdump_t $testarchive
- }
- 
- # Test objdump -r
- 
--set got [binutils_run $OBJDUMP "$OBJDUMPFLAGS -r $testfile"]
-+proc test_objdump_r { testfile dumpfile } {
-+    global OBJDUMP
-+    global OBJDUMPFLAGS
- 
--set want "$testfile:\[ 	\]*file format.*RELOCATION RECORDS FOR \\\[\[^\]\]*(text|TEXT|P|\\\$CODE\\\$)\[^\]\]*\\\].*external_symbol"
-+    set got [binutils_run $OBJDUMP "$OBJDUMPFLAGS -r $testfile"]
- 
--if [regexp $want $got] then {
--    pass "objdump -r"
--} else {
--    fail "objdump -r"
-+    set want "$dumpfile:\[ 	\]*file format.*RELOCATION RECORDS FOR \\\[\[^\]\]*(text|TEXT|P|\\\$CODE\\\$)\[^\]\]*\\\].*external_symbol"
-+
-+    if [regexp $want $got] then {
-+	pass "objdump -r ($testfile, $dumpfile)"
-+    } else {
-+	fail "objdump -r ($testfile, $dumpfile)"
-+    }
-+}
-+
-+test_objdump_r $testfile $testfile
-+if { [ remote_file host exists $testarchive ] } then {
-+    test_objdump_r $testarchive bintest2.o
- }
- 
- # Test objdump -s
- 
--set got [binutils_run $OBJDUMP "$OBJDUMPFLAGS -s $testfile"]
-+proc test_objdump_s { testfile dumpfile } {
-+    global OBJDUMP
-+    global OBJDUMPFLAGS
- 
--set want "$testfile:\[ 	\]*file format.*Contents.*(text|TEXT|P|\\\$CODE\\\$)\[^0-9\]*\[ 	\]*\[0-9a-fA-F\]*\[ 	\]*(00000001|01000000|00000100).*Contents.*(data|DATA|D_1)\[^0-9\]*\[ 	\]*\[0-9a-fA-F\]*\[ 	\]*(00000002|02000000|00000200)"
-+    set got [binutils_run $OBJDUMP "$OBJDUMPFLAGS -s $testfile"]
- 
--if [regexp $want $got] then {
--    pass "objdump -s"
--} else {
--    fail "objdump -s"
-+    set want "$dumpfile:\[ 	\]*file format.*Contents.*(text|TEXT|P|\\\$CODE\\\$)\[^0-9\]*\[ 	\]*\[0-9a-fA-F\]*\[ 	\]*(00000001|01000000|00000100).*Contents.*(data|DATA|D_1)\[^0-9\]*\[ 	\]*\[0-9a-fA-F\]*\[ 	\]*(00000002|02000000|00000200)"
-+
-+    if [regexp $want $got] then {
-+	pass "objdump -s ($testfile, $dumpfile)"
-+    } else {
-+	fail "objdump -s ($testfile, $dumpfile)"
-+    }
-+}
-+
-+test_objdump_s $testfile $testfile
-+if { [ remote_file host exists $testarchive ] } then {
-+    test_objdump_s $testarchive bintest2.o
- }
- 
- # Test objdump -s on a file that contains a compressed .debug section
-Index: git/bfd/ChangeLog
-===================================================================
---- git.orig/bfd/ChangeLog
-+++ git/bfd/ChangeLog
-@@ -1,3 +1,9 @@
-+2017-05-30  H.J. Lu  <hongjiu.lu@intel.com>
-+
-+       PR binutils/21519
-+       * bfdio.c (bfd_get_file_size): New function.
-+       * bfd-in2.h: Regenerated.
-+
- 2017-06-27  Alan Modra  <amodra@gmail.com>
- 
-        PR binutils/21665
-Index: git/binutils/ChangeLog
-===================================================================
---- git.orig/binutils/ChangeLog
-+++ git/binutils/ChangeLog
-@@ -1,3 +1,16 @@
-+2017-05-30  H.J. Lu  <hongjiu.lu@intel.com>
-+
-+       PR binutils/21519
-+       * objdump.c (dump_relocs_in_section): Replace get_file_size
-+       with bfd_get_file_size to get archive element size.
-+       * testsuite/binutils-all/objdump.exp (test_objdump_f): New
-+       proc.
-+       (test_objdump_h): Likewise.
-+       (test_objdump_t): Likewise.
-+       (test_objdump_r): Likewise.
-+       (test_objdump_s): Likewise.
-+       Add objdump tests on archive.
-+
- 2017-07-01  Alan Modra  <amodra@gmail.com>
- 
- 	PR binutils/21665
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils_2.28.bb b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils_2.28.bb
deleted file mode 100644
index b51437b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils_2.28.bb
+++ /dev/null
@@ -1,45 +0,0 @@
-require binutils.inc
-require binutils-${PV}.inc
-
-DEPENDS += "flex bison zlib"
-
-EXTRA_OECONF += "--with-sysroot=/ \
-                --enable-install-libbfd \
-                --enable-install-libiberty \
-                --enable-shared \
-                --with-system-zlib \
-                "
-
-EXTRA_OEMAKE_append_libc-musl = "\
-                                 gt_cv_func_gnugettext1_libc=yes \
-                                 gt_cv_func_gnugettext2_libc=yes \
-                                "
-EXTRA_OECONF_class-native = "--enable-targets=all \
-                             --enable-64-bit-bfd \
-                             --enable-install-libiberty \
-                             --enable-install-libbfd \
-                             --disable-werror"
-
-do_install_class-native () {
-	autotools_do_install
-
-	# Install the libiberty header
-	install -d ${D}${includedir}
-	install -m 644 ${S}/include/ansidecl.h ${D}${includedir}
-	install -m 644 ${S}/include/libiberty.h ${D}${includedir}
-
-	# We only want libiberty, libbfd and libopcodes
-	rm -rf ${D}${bindir}
-	rm -rf ${D}${prefix}/${TARGET_SYS}
-	rm -rf ${D}${prefix}/lib/ldscripts
-	rm -rf ${D}${prefix}/share/info
-	rm -rf ${D}${prefix}/share/locale
-	rm -rf ${D}${prefix}/share/man
-	rmdir ${D}${prefix}/share || :
-	rmdir ${D}/${libdir}/gcc-lib || :
-	rmdir ${D}/${libdir}64/gcc-lib || :
-	rmdir ${D}/${libdir} || :
-	rmdir ${D}/${libdir}64 || :
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils_2.29.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils_2.29.1.bb
new file mode 100644
index 0000000..51a9748
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/binutils/binutils_2.29.1.bb
@@ -0,0 +1,49 @@
+require binutils.inc
+require binutils-${PV}.inc
+
+DEPENDS += "flex bison zlib"
+
+EXTRA_OECONF += "--with-sysroot=/ \
+                --enable-install-libbfd \
+                --enable-install-libiberty \
+                --enable-shared \
+                --with-system-zlib \
+                "
+
+EXTRA_OEMAKE_append_libc-musl = "\
+                                 gt_cv_func_gnugettext1_libc=yes \
+                                 gt_cv_func_gnugettext2_libc=yes \
+                                "
+EXTRA_OECONF_class-native = "--enable-targets=all \
+                             --enable-64-bit-bfd \
+                             --enable-install-libiberty \
+                             --enable-install-libbfd \
+                             --disable-werror"
+
+do_install_class-native () {
+	autotools_do_install
+
+	# Install the libiberty header
+	install -d ${D}${includedir}
+	install -m 644 ${S}/include/ansidecl.h ${D}${includedir}
+	install -m 644 ${S}/include/libiberty.h ${D}${includedir}
+
+	# We only want libiberty, libbfd and libopcodes
+	rm -rf ${D}${bindir}
+	rm -rf ${D}${prefix}/${TARGET_SYS}
+	rm -rf ${D}${prefix}/lib/ldscripts
+	rm -rf ${D}${prefix}/share/info
+	rm -rf ${D}${prefix}/share/locale
+	rm -rf ${D}${prefix}/share/man
+	rmdir ${D}${prefix}/share || :
+	rmdir ${D}/${libdir}/gcc-lib || :
+	rmdir ${D}/${libdir}64/gcc-lib || :
+	rmdir ${D}/${libdir} || :
+	rmdir ${D}/${libdir}64 || :
+}
+
+# Split out libbfd-*.so so including perf doesn't include extra stuff
+PACKAGE_BEFORE_PN += "libbfd"
+FILES_libbfd = "${libdir}/libbfd-*.so"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/bison/bison/0001-src-local.mk-fix-parallel-issue.patch b/import-layers/yocto-poky/meta/recipes-devtools/bison/bison/0001-src-local.mk-fix-parallel-issue.patch
index 9543a56..1e86f55 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/bison/bison/0001-src-local.mk-fix-parallel-issue.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/bison/bison/0001-src-local.mk-fix-parallel-issue.patch
@@ -1,4 +1,4 @@
-From 3a15f7c9ace6c0294edc313a1848cafcc31b2a92 Mon Sep 17 00:00:00 2001
+From 5b9204eee4b06b48d54ecc3ef3a0b56fc5cc84f8 Mon Sep 17 00:00:00 2001
 From: Robert Yang <liezhi.yang@windriver.com>
 Date: Fri, 24 Apr 2015 00:38:32 -0700
 Subject: [PATCH] src/local.mk: fix parallel issue
@@ -9,11 +9,12 @@
 /bin/bash: src/yacc.tmp: No such file or directory
 Makefile:6670: recipe for target 'src/yacc' failed
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [ http://lists.gnu.org/archive/html/bison-patches/2017-07/msg00000.html ]
 
 Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
+Signed-off-by: Dengke Du <dengke.du@windriver.com>
 ---
- src/local.mk |    1 +
+ src/local.mk | 1 +
  1 file changed, 1 insertion(+)
 
 diff --git a/src/local.mk b/src/local.mk
@@ -29,5 +30,5 @@
  	$(AM_V_at)echo "exec '$(bindir)/bison' -y "'"$$@"' >>$@.tmp
  	$(AM_V_at)chmod a+x $@.tmp
 -- 
-1.7.9.5
+2.8.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/bison/bison_3.0.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/bison/bison_3.0.4.bb
index cffcd88..7d066be 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/bison/bison_3.0.4.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/bison/bison_3.0.4.bb
@@ -23,7 +23,6 @@
 SRC_URI[md5sum] = "c342201de104cc9ce0a21e0ad10d4021"
 SRC_URI[sha256sum] = "a72428c7917bdf9fa93cb8181c971b6e22834125848cf1d03ce10b1bb0716fe1"
 
-LDFLAGS_prepend_libc-uclibc = " -lrt "
 DEPENDS_class-native = "gettext-minimal-native"
 
 inherit autotools gettext texinfo
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/bootchart2/bootchart2_0.14.8.bb b/import-layers/yocto-poky/meta/recipes-devtools/bootchart2/bootchart2_0.14.8.bb
index 4f01734..a310135 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/bootchart2/bootchart2_0.14.8.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/bootchart2/bootchart2_0.14.8.bb
@@ -134,7 +134,7 @@
     echo 'EXIT_PROC="$EXIT_PROC matchbox-window-manager"' >> ${D}${sysconfdir}/bootchartd.conf
 
    # Use python 3 instead of python 2
-   sed -i -e '1s,#!.*python.*,#!${bindir}/python3,' ${D}${bindir}/pybootchartgui
+   sed -i -e '1s,#!.*python.*,#!${USRBINPATH}/env python3,' ${D}${bindir}/pybootchartgui
 }
 
 PACKAGES =+ "pybootchartgui"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/btrfs-tools/btrfs-tools/0001-Makefile-build-mktables-using-native-gcc.patch b/import-layers/yocto-poky/meta/recipes-devtools/btrfs-tools/btrfs-tools/0001-Makefile-build-mktables-using-native-gcc.patch
new file mode 100644
index 0000000..a81900e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/btrfs-tools/btrfs-tools/0001-Makefile-build-mktables-using-native-gcc.patch
@@ -0,0 +1,30 @@
+From e58369f6d36bc51eb59d6afa34c1cae3ff0810ef Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Mon, 7 Aug 2017 14:10:38 +0300
+Subject: [PATCH] Makefile: build mktables using native gcc
+
+It's a throwaway helper binary used during build, and so it needs to
+be native.
+
+Upstream-Status: Inappropriate [oe specific]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+ Makefile | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Makefile b/Makefile
+index b3e2b63..347aaf1 100644
+--- a/Makefile
++++ b/Makefile
+@@ -323,7 +323,7 @@ version.h: version.sh version.h.in configure.ac
+ 
+ mktables: kernel-lib/mktables.c
+ 	@echo "    [CC]     $@"
+-	$(Q)$(CC) $(CFLAGS) $< -o $@
++	$(Q)$(BUILD_CC) $(BUILD_CFLAGS) $< -o $@
+ 
+ kernel-lib/tables.c: mktables
+ 	@echo "    [TABLE]  $@"
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/btrfs-tools/btrfs-tools_4.12.bb b/import-layers/yocto-poky/meta/recipes-devtools/btrfs-tools/btrfs-tools_4.12.bb
new file mode 100644
index 0000000..c3cc89c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/btrfs-tools/btrfs-tools_4.12.bb
@@ -0,0 +1,35 @@
+SUMMARY = "Checksumming Copy on Write Filesystem utilities"
+DESCRIPTION = "Btrfs is a new copy on write filesystem for Linux aimed at \
+implementing advanced features while focusing on fault tolerance, repair and \
+easy administration. \
+This package contains utilities (mkfs, fsck, btrfsctl) used to work with \
+btrfs and an utility (btrfs-convert) to make a btrfs filesystem from an ext3."
+
+HOMEPAGE = "https://btrfs.wiki.kernel.org"
+
+LICENSE = "GPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=fcb02dc552a041dee27e4b85c7396067"
+SECTION = "base"
+DEPENDS = "util-linux attr e2fsprogs lzo acl"
+DEPENDS_append_class-target = " udev"
+RDEPENDS_${PN} = "libgcc"
+
+SRCREV = "0607132c3200bcead1426e6dc685432008de95de"
+SRC_URI = "git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git \
+           file://0001-Makefile-build-mktables-using-native-gcc.patch \
+           "
+
+inherit autotools-brokensep pkgconfig manpages
+
+PACKAGECONFIG[manpages] = "--enable-documentation, --disable-documentation, asciidoc-native xmlto-native"
+EXTRA_OECONF_append_libc-musl = " --disable-backtrace "
+
+do_configure_prepend() {
+	# Upstream doesn't ship this and autoreconf won't install it as automake isn't used.
+	mkdir -p ${S}/config
+	cp -f $(automake --print-libdir)/install-sh ${S}/config/
+}
+
+S = "${WORKDIR}/git"
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/btrfs-tools/btrfs-tools_4.9.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/btrfs-tools/btrfs-tools_4.9.1.bb
deleted file mode 100644
index a5e9e22..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/btrfs-tools/btrfs-tools_4.9.1.bb
+++ /dev/null
@@ -1,33 +0,0 @@
-SUMMARY = "Checksumming Copy on Write Filesystem utilities"
-DESCRIPTION = "Btrfs is a new copy on write filesystem for Linux aimed at \
-implementing advanced features while focusing on fault tolerance, repair and \
-easy administration. \
-This package contains utilities (mkfs, fsck, btrfsctl) used to work with \
-btrfs and an utility (btrfs-convert) to make a btrfs filesystem from an ext3."
-
-HOMEPAGE = "https://btrfs.wiki.kernel.org"
-
-LICENSE = "GPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=fcb02dc552a041dee27e4b85c7396067"
-SECTION = "base"
-DEPENDS = "util-linux attr e2fsprogs lzo acl"
-DEPENDS_append_class-target = " udev"
-RDEPENDS_${PN} = "libgcc"
-
-SRCREV = "96485c34ac0329fb0073476f16d2083c64701f29"
-SRC_URI = "git://git.kernel.org/pub/scm/linux/kernel/git/kdave/btrfs-progs.git"
-
-inherit autotools-brokensep pkgconfig manpages
-
-PACKAGECONFIG[manpages] = "--enable-documentation, --disable-documentation, asciidoc-native xmlto-native"
-EXTRA_OECONF_append_libc-musl = " --disable-backtrace "
-
-do_configure_prepend() {
-	# Upstream doesn't ship this and autoreconf won't install it as automake isn't used.
-	mkdir -p ${S}/config
-	cp -f $(automake --print-libdir)/install-sh ${S}/config/
-}
-
-S = "${WORKDIR}/git"
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/build-compare/build-compare_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/build-compare/build-compare_git.bb
index c60c164..84d04cf 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/build-compare/build-compare_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/build-compare/build-compare_git.bb
@@ -22,6 +22,7 @@
 SRCREV = "c5352c054c6ef15735da31b76d6d88620f4aff0a"
 PE = "1"
 PV = "2015.02.10+git${SRCPV}"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 S = "${WORKDIR}/git"
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/chrpath/chrpath/standarddoc.patch b/import-layers/yocto-poky/meta/recipes-devtools/chrpath/chrpath/standarddoc.patch
index f96f104..3d303fb 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/chrpath/chrpath/standarddoc.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/chrpath/chrpath/standarddoc.patch
@@ -1,14 +1,24 @@
-Upstream-Status: Pending
+From 285b5fbf1d6b25ff55d305c17edf4d327bf19dd3 Mon Sep 17 00:00:00 2001
+From: Richard Purdie <richard.purdie@linuxfoundation.org>
+Date: Tue, 5 Jul 2011 23:42:29 +0100
+Subject: [PATCH] chrpath: Ensure the package respects the docdir variable
 
-autoconf/automake set docdir automatically, use their value ensuring 
+autoconf/automake set docdir automatically, use their value ensuring
 doc files are placed into $datadir/doc, not $prefix/doc.
 
 RP 5/7/2011
 
-Index: chrpath-0.13/Makefile.am
-===================================================================
---- chrpath-0.13.orig/Makefile.am	2011-07-05 23:40:14.769920254 +0100
-+++ chrpath-0.13/Makefile.am	2011-07-05 23:40:19.819920635 +0100
+Upstream-Status: Submitted [ http://lists.alioth.debian.org/pipermail/chrpath-devel/Week-of-Mon-20170710/000013.html ]
+
+Signed-off-by: Dengke Du <dengke.du@windriver.com>
+---
+ Makefile.am | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/Makefile.am b/Makefile.am
+index b50ad21..5f7e861 100644
+--- a/Makefile.am
++++ b/Makefile.am
 @@ -1,7 +1,5 @@
  SUBDIRS = testsuite deb
  
@@ -17,3 +27,6 @@
  doc_DATA = AUTHORS COPYING ChangeLog INSTALL NEWS README
  
  bin_PROGRAMS = chrpath
+-- 
+2.8.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake-native_3.7.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake-native_3.7.2.bb
deleted file mode 100644
index 7ad4345..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake-native_3.7.2.bb
+++ /dev/null
@@ -1,37 +0,0 @@
-require cmake.inc
-inherit native
-
-DEPENDS += "bzip2-replacement-native expat-native xz-native zlib-native curl-native"
-
-SRC_URI += "\
-    file://cmlibarchive-disable-ext2fs.patch \
-"
-
-B = "${WORKDIR}/build"
-do_configure[cleandirs] = "${B}"
-
-# Disable ccmake since we don't depend on ncurses
-CMAKE_EXTRACONF = "\
-    -DCMAKE_LIBRARY_PATH=${STAGING_LIBDIR_NATIVE} \
-    -DBUILD_CursesDialog=0 \
-    -DCMAKE_USE_SYSTEM_LIBRARIES=1 \
-    -DCMAKE_USE_SYSTEM_LIBRARY_JSONCPP=0 \
-    -DCMAKE_USE_SYSTEM_LIBRARY_LIBARCHIVE=0 \
-    -DCMAKE_USE_SYSTEM_LIBRARY_LIBUV=0 \
-    -DENABLE_ACL=0 -DHAVE_ACL_LIBACL_H=0 \
-    -DHAVE_SYS_ACL_H=0 \
-"
-
-do_configure () {
-	${S}/configure --verbose --prefix=${prefix} -- ${CMAKE_EXTRACONF}
-}
-
-do_compile() {
-	oe_runmake
-}
-
-do_install() {
-	oe_runmake 'DESTDIR=${D}' install
-}
-
-do_compile[progress] = "percent"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake-native_3.8.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake-native_3.8.2.bb
new file mode 100644
index 0000000..e55e8b1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake-native_3.8.2.bb
@@ -0,0 +1,38 @@
+require cmake.inc
+inherit native
+
+DEPENDS += "bzip2-replacement-native expat-native xz-native zlib-native curl-native"
+
+SRC_URI += "\
+    file://cmlibarchive-disable-ext2fs.patch \
+"
+
+B = "${WORKDIR}/build"
+do_configure[cleandirs] = "${B}"
+
+# Disable ccmake since we don't depend on ncurses
+CMAKE_EXTRACONF = "\
+    -DCMAKE_LIBRARY_PATH=${STAGING_LIBDIR_NATIVE} \
+    -DBUILD_CursesDialog=0 \
+    -DCMAKE_USE_SYSTEM_LIBRARIES=1 \
+    -DCMAKE_USE_SYSTEM_LIBRARY_JSONCPP=0 \
+    -DCMAKE_USE_SYSTEM_LIBRARY_LIBARCHIVE=0 \
+    -DCMAKE_USE_SYSTEM_LIBRARY_LIBUV=0 \
+    -DCMAKE_USE_SYSTEM_LIBRARY_LIBRHASH=0 \
+    -DENABLE_ACL=0 -DHAVE_ACL_LIBACL_H=0 \
+    -DHAVE_SYS_ACL_H=0 \
+"
+
+do_configure () {
+	${S}/configure --verbose --prefix=${prefix} -- ${CMAKE_EXTRACONF}
+}
+
+do_compile() {
+	oe_runmake
+}
+
+do_install() {
+	oe_runmake 'DESTDIR=${D}' install
+}
+
+do_compile[progress] = "percent"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake.inc b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake.inc
index 6c8b36d..6aeb25f 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake.inc
@@ -6,7 +6,7 @@
 BUGTRACKER = "http://public.kitware.com/Bug/my_view_page.php"
 SECTION = "console/utils"
 LICENSE = "BSD"
-LIC_FILES_CHKSUM = "file://Copyright.txt;md5=7a64bc564202bf7401d9a8ef33c9564d \
+LIC_FILES_CHKSUM = "file://Copyright.txt;md5=8d8c7bc32f8797d23f5cf605d9339d2d \
                     file://Source/cmake.h;beginline=1;endline=3;md5=4494dee184212fc89c469c3acd555a14"
 
 CMAKE_MAJOR_VERSION = "${@'.'.join(d.getVar('PV').split('.')[0:2])}"
@@ -14,12 +14,11 @@
 SRC_URI = "https://cmake.org/files/v${CMAKE_MAJOR_VERSION}/cmake-${PV}.tar.gz \
            file://support-oe-qt4-tools-names.patch \
            file://qt4-fail-silent.patch \
-           file://avoid-gcc-warnings-with-Wstrict-prototypes.patch \
-           file://0001-KWIML-tests-Remove-format-security-from-flags.patch \
+           file://0001-FindCUDA-Use-find_program-if-find_host_program-is-no.patch \
            "
 
-SRC_URI[md5sum] = "79bd7e65cd81ea3aa2619484ad6ff25a"
-SRC_URI[sha256sum] = "dc1246c4e6d168ea4d6e042cfba577c1acd65feea27e56f5ff37df920c30cae0"
+SRC_URI[md5sum] = "b5dff61f6a7f1305271ab3f6ae261419"
+SRC_URI[sha256sum] = "da3072794eb4c09f2d782fcee043847b99bb4cf8d4573978d9b2024214d6e92d"
 
 UPSTREAM_CHECK_REGEX = "cmake-(?P<pver>\d+(\.\d+)+)\.tar"
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/0001-FindCUDA-Use-find_program-if-find_host_program-is-no.patch b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/0001-FindCUDA-Use-find_program-if-find_host_program-is-no.patch
new file mode 100644
index 0000000..9b820db
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/0001-FindCUDA-Use-find_program-if-find_host_program-is-no.patch
@@ -0,0 +1,40 @@
+From 46d25e782ebd9b6c50771b6f30433c58fae03a51 Mon Sep 17 00:00:00 2001
+From: Maxime Roussin-Bélanger <maxime.roussinbelanger@gmail.com>
+Date: Mon, 26 Jun 2017 11:30:07 -0400
+Subject: [PATCH] cmake: Use find_program if find_host_program is not
+ available
+
+CMake does not define the `find_host_program` command we've been using
+in the cross-compiling code path.  It was provided by a widely used
+Android toolchain file.  For compatibility, continue to use
+`find_host_program` if available, but otherwise use just `find_program`.
+
+Upstream-Status: Accepted
+[https://gitlab.kitware.com/cmake/cmake/merge_requests/1009]
+        - Will be in 3.10
+
+Signed-off-by: Maxime Roussin-Bélanger <maxime.roussinbelanger@gmail.com>
+---
+ Modules/FindCUDA.cmake | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/Modules/FindCUDA.cmake b/Modules/FindCUDA.cmake
+index a4dca54..77ca351 100644
+--- a/Modules/FindCUDA.cmake
++++ b/Modules/FindCUDA.cmake
+@@ -679,7 +679,11 @@ if(CMAKE_CROSSCOMPILING)
+   # add known CUDA targetr root path to the set of directories we search for programs, libraries and headers
+   set( CMAKE_FIND_ROOT_PATH "${CUDA_TOOLKIT_TARGET_DIR};${CMAKE_FIND_ROOT_PATH}")
+   macro( cuda_find_host_program )
+-    find_host_program( ${ARGN} )
++    if (COMMAND find_host_program)
++      find_host_program( ${ARGN} )
++    else()
++      find_program( ${ARGN} )
++    endif()
+   endmacro()
+ else()
+   # for non-cross-compile, find_host_program == find_program and CUDA_TOOLKIT_TARGET_DIR == CUDA_TOOLKIT_ROOT_DIR
+--
+2.1.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/0001-KWIML-tests-Remove-format-security-from-flags.patch b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/0001-KWIML-tests-Remove-format-security-from-flags.patch
deleted file mode 100644
index 190133b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/0001-KWIML-tests-Remove-format-security-from-flags.patch
+++ /dev/null
@@ -1,33 +0,0 @@
-From 0941395b146804abcd87004589ff6e7a2953412d Mon Sep 17 00:00:00 2001
-From: Jussi Kukkonen <jussi.kukkonen@intel.com>
-Date: Thu, 16 Mar 2017 14:39:04 +0200
-Subject: [PATCH] KWIML tests: Remove format-security from flags
-
-For the tests where "format" is removed from flags, "format-security"
-should be removed as well. Otherwise building cmake with
-"-Wformat -Wformat-security" fails:
-
-| cc1: error: -Wformat-security ignored without -Wformat [-Werror=format-security]
-
-Upstream-Status: Backport [part of commit f77420cfc9]
-Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
----
- Utilities/KWIML/test/CMakeLists.txt | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/Utilities/KWIML/test/CMakeLists.txt b/Utilities/KWIML/test/CMakeLists.txt
-index 4f6f37b..1bf93bb 100644
---- a/Utilities/KWIML/test/CMakeLists.txt
-+++ b/Utilities/KWIML/test/CMakeLists.txt
-@@ -10,7 +10,7 @@ endif()
- # Suppress printf/scanf format warnings; we test if the sizes match.
- foreach(lang C CXX)
-   if(KWIML_LANGUAGE_${lang} AND CMAKE_${lang}_COMPILER_ID STREQUAL "GNU")
--    set(CMAKE_${lang}_FLAGS "${CMAKE_${lang}_FLAGS} -Wno-format")
-+    set(CMAKE_${lang}_FLAGS "${CMAKE_${lang}_FLAGS} -Wno-format -Wno-format-security")
-   endif()
- endforeach()
- 
--- 
-2.1.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/OEToolchainConfig.cmake b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/OEToolchainConfig.cmake
index 6518408..dc8477e 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/OEToolchainConfig.cmake
+++ b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/OEToolchainConfig.cmake
@@ -1,7 +1,7 @@
 set( CMAKE_SYSTEM_NAME Linux )
 set( CMAKE_C_FLAGS $ENV{CFLAGS} CACHE STRING "" FORCE )
 set( CMAKE_CXX_FLAGS $ENV{CXXFLAGS}  CACHE STRING "" FORCE )
-set( CMAKE ASM_FLAGS ${CMAKE_C_FLAGS} CACHE STRING "" FORCE )
+set( CMAKE_ASM_FLAGS ${CMAKE_C_FLAGS} CACHE STRING "" FORCE )
 set( CMAKE_LDFLAGS_FLAGS ${CMAKE_CXX_FLAGS} CACHE STRING "" FORCE )
 
 set( CMAKE_FIND_ROOT_PATH $ENV{OECORE_TARGET_SYSROOT} $ENV{OECORE_NATIVE_SYSROOT} )
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/avoid-gcc-warnings-with-Wstrict-prototypes.patch b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/avoid-gcc-warnings-with-Wstrict-prototypes.patch
deleted file mode 100644
index 8b8d480..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake/avoid-gcc-warnings-with-Wstrict-prototypes.patch
+++ /dev/null
@@ -1,42 +0,0 @@
-From 4bc17345c01ea467099e28c7df30c23ace9e7811 Mon Sep 17 00:00:00 2001
-From: Andre McCurdy <armccurdy@gmail.com>
-Date: Fri, 14 Oct 2016 16:26:58 -0700
-Subject: [PATCH] CheckFunctionExists.c: avoid gcc warnings with
- -Wstrict-prototypes
-
-Avoid warnings (and therefore build failures etc) if a user happens
-to add -Wstrict-prototypes to CFLAGS.
-
- | $ gcc --version
- | gcc (Ubuntu 4.8.4-2ubuntu1~14.04.3) 4.8.4
- |
- | $ gcc -Wstrict-prototypes -Werror -DCHECK_FUNCTION_EXISTS=pthread_create -o foo.o -c Modules/CheckFunctionExists.c
- | Modules/CheckFunctionExists.c:7:3: error: function declaration isn't a prototype [-Werror=strict-prototypes]
- |    CHECK_FUNCTION_EXISTS();
- |    ^
- | cc1: all warnings being treated as errors
- |
-
-Upstream-Status: Pending
-
-Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
----
- Modules/CheckFunctionExists.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/Modules/CheckFunctionExists.c b/Modules/CheckFunctionExists.c
-index 2304000..224e340 100644
---- a/Modules/CheckFunctionExists.c
-+++ b/Modules/CheckFunctionExists.c
-@@ -4,7 +4,7 @@
- extern "C"
- #endif
-   char
--  CHECK_FUNCTION_EXISTS();
-+  CHECK_FUNCTION_EXISTS(void);
- #ifdef __CLASSIC_C__
- int main()
- {
--- 
-1.9.1
-
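
The dropped cmake patch above turns on the C rule that an empty parameter list in a declaration is not a prototype, which is exactly what -Wstrict-prototypes flags. A small standalone illustration of the two forms, assuming GCC; proto_demo.c and the identifiers are hypothetical, not CMake code:

    /* proto_demo.c -- illustrative only.  `gcc -Wstrict-prototypes -Werror
       -c proto_demo.c` rejects the first declaration with "function
       declaration isn't a prototype"; the second form is accepted.  */

    char old_style_decl ();       /* empty list: parameter types unspecified */
    char prototype_decl (void);   /* real prototype: explicitly no arguments */

    int
    main (void)
    {
      return 0;
    }
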
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake_3.7.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake_3.7.2.bb
deleted file mode 100644
index f566a48..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake_3.7.2.bb
+++ /dev/null
@@ -1,49 +0,0 @@
-require cmake.inc
-
-inherit cmake
-
-DEPENDS += "curl expat zlib libarchive xz ncurses bzip2"
-
-SRC_URI_append_class-nativesdk = " \
-    file://OEToolchainConfig.cmake \
-    file://environment.d-cmake.sh"
-
-# Strip ${prefix} from ${docdir}, set result into docdir_stripped
-python () {
-    prefix=d.getVar("prefix")
-    docdir=d.getVar("docdir")
-
-    if not docdir.startswith(prefix):
-        bb.fatal('docdir must contain prefix as its prefix')
-
-    docdir_stripped = docdir[len(prefix):]
-    if len(docdir_stripped) > 0 and docdir_stripped[0] == '/':
-        docdir_stripped = docdir_stripped[1:]
-
-    d.setVar("docdir_stripped", docdir_stripped)
-}
-
-EXTRA_OECMAKE=" \
-    -DCMAKE_DOC_DIR=${docdir_stripped}/cmake-${CMAKE_MAJOR_VERSION} \
-    -DCMAKE_USE_SYSTEM_LIBRARIES=1 \
-    -DCMAKE_USE_SYSTEM_LIBRARY_JSONCPP=0 \
-    -DCMAKE_USE_SYSTEM_LIBRARY_LIBUV=0 \
-    -DKWSYS_CHAR_IS_SIGNED=1 \
-    -DBUILD_CursesDialog=0 \
-    -DKWSYS_LFS_WORKS=1 \
-"
-
-do_install_append_class-nativesdk() {
-    mkdir -p ${D}${datadir}/cmake
-    install -m 644 ${WORKDIR}/OEToolchainConfig.cmake ${D}${datadir}/cmake/
-
-    mkdir -p ${D}${SDKPATHNATIVE}/environment-setup.d
-    install -m 644 ${WORKDIR}/environment.d-cmake.sh ${D}${SDKPATHNATIVE}/environment-setup.d/cmake.sh
-}
-
-FILES_${PN}_append_class-nativesdk = " ${SDKPATHNATIVE}"
-
-FILES_${PN} += "${datadir}/cmake-${CMAKE_MAJOR_VERSION}"
-FILES_${PN}-doc += "${docdir}/cmake-${CMAKE_MAJOR_VERSION}"
-
-BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake_3.8.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake_3.8.2.bb
new file mode 100644
index 0000000..3f8fd7a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/cmake/cmake_3.8.2.bb
@@ -0,0 +1,50 @@
+require cmake.inc
+
+inherit cmake
+
+DEPENDS += "curl expat zlib libarchive xz ncurses bzip2"
+
+SRC_URI_append_class-nativesdk = " \
+    file://OEToolchainConfig.cmake \
+    file://environment.d-cmake.sh"
+
+# Strip ${prefix} from ${docdir}, set result into docdir_stripped
+python () {
+    prefix=d.getVar("prefix")
+    docdir=d.getVar("docdir")
+
+    if not docdir.startswith(prefix):
+        bb.fatal('docdir must contain prefix as its prefix')
+
+    docdir_stripped = docdir[len(prefix):]
+    if len(docdir_stripped) > 0 and docdir_stripped[0] == '/':
+        docdir_stripped = docdir_stripped[1:]
+
+    d.setVar("docdir_stripped", docdir_stripped)
+}
+
+EXTRA_OECMAKE=" \
+    -DCMAKE_DOC_DIR=${docdir_stripped}/cmake-${CMAKE_MAJOR_VERSION} \
+    -DCMAKE_USE_SYSTEM_LIBRARIES=1 \
+    -DCMAKE_USE_SYSTEM_LIBRARY_JSONCPP=0 \
+    -DCMAKE_USE_SYSTEM_LIBRARY_LIBUV=0 \
+    -DCMAKE_USE_SYSTEM_LIBRARY_LIBRHASH=0 \
+    -DKWSYS_CHAR_IS_SIGNED=1 \
+    -DBUILD_CursesDialog=0 \
+    -DKWSYS_LFS_WORKS=1 \
+"
+
+do_install_append_class-nativesdk() {
+    mkdir -p ${D}${datadir}/cmake
+    install -m 644 ${WORKDIR}/OEToolchainConfig.cmake ${D}${datadir}/cmake/
+
+    mkdir -p ${D}${SDKPATHNATIVE}/environment-setup.d
+    install -m 644 ${WORKDIR}/environment.d-cmake.sh ${D}${SDKPATHNATIVE}/environment-setup.d/cmake.sh
+}
+
+FILES_${PN}_append_class-nativesdk = " ${SDKPATHNATIVE}"
+
+FILES_${PN} += "${datadir}/cmake-${CMAKE_MAJOR_VERSION}"
+FILES_${PN}-doc += "${docdir}/cmake-${CMAKE_MAJOR_VERSION}"
+
+BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/createrepo-c/createrepo-c/0001-Correctly-install-the-shared-library.patch b/import-layers/yocto-poky/meta/recipes-devtools/createrepo-c/createrepo-c/0001-Correctly-install-the-shared-library.patch
index 0127124..cd72084 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/createrepo-c/createrepo-c/0001-Correctly-install-the-shared-library.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/createrepo-c/createrepo-c/0001-Correctly-install-the-shared-library.patch
@@ -3,7 +3,7 @@
 Date: Mon, 2 Jan 2017 17:23:59 +0200
 Subject: [PATCH] Correctly install the shared library
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://github.com/rpm-software-management/createrepo_c/pull/78]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 ---
  src/CMakeLists.txt | 3 ++-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/cve-check-tool/cve-check-tool_5.6.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/cve-check-tool/cve-check-tool_5.6.4.bb
index 1f906ee..7b70daa 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/cve-check-tool/cve-check-tool_5.6.4.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/cve-check-tool/cve-check-tool_5.6.4.bb
@@ -11,6 +11,7 @@
            file://0001-print-progress-in-percent-when-downloading-CVE-db.patch \
            file://0001-curl-allow-overriding-default-CA-certificate-file.patch \
            file://0001-update-Compare-computed-vs-expected-sha256-digit-str.patch \
+           file://0001-Fix-freeing-memory-allocated-by-sqlite.patch \
           "
 
 SRC_URI[md5sum] = "c5f4247140fc9be3bf41491d31a34155"
@@ -29,7 +30,7 @@
 
 do_populate_cve_db() {
     if [ "${BB_NO_NETWORK}" = "1" ] ; then
-        bberror "BB_NO_NETWORK is set; Can't update cve-check-tool database, CVEs won't be checked"
+        bbwarn "BB_NO_NETWORK is set; Can't update cve-check-tool database, new CVEs won't be detected"
         return
     fi
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/cve-check-tool/files/0001-Fix-freeing-memory-allocated-by-sqlite.patch b/import-layers/yocto-poky/meta/recipes-devtools/cve-check-tool/files/0001-Fix-freeing-memory-allocated-by-sqlite.patch
new file mode 100644
index 0000000..4a82cf2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/cve-check-tool/files/0001-Fix-freeing-memory-allocated-by-sqlite.patch
@@ -0,0 +1,50 @@
+From a3353429652f83bb8b0316500faa88fa2555542d Mon Sep 17 00:00:00 2001
+From: Peter Marko <peter.marko@siemens.com>
+Date: Thu, 13 Apr 2017 23:09:52 +0200
+Subject: [PATCH] Fix freeing memory allocated by sqlite
+
+Upstream-Status: Backport
+Signed-off-by: Peter Marko <peter.marko@siemens.com>
+---
+ src/core.c | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/src/core.c b/src/core.c
+index 6263031..6788f16 100644
+--- a/src/core.c
++++ b/src/core.c
+@@ -82,7 +82,7 @@ static bool ensure_table(CveDB *self)
+         rc = sqlite3_exec(self->db, query, NULL, NULL, &err);
+         if (rc != SQLITE_OK) {
+                 fprintf(stderr, "ensure_table(): %s\n", err);
+-                free(err);
++                sqlite3_free(err);
+                 return false;
+         }
+         
+@@ -91,7 +91,7 @@ static bool ensure_table(CveDB *self)
+         rc = sqlite3_exec(self->db, query, NULL, NULL, &err);
+         if (rc != SQLITE_OK) {
+                 fprintf(stderr, "ensure_table(): %s\n", err);
+-                free(err);
++                sqlite3_free(err);
+                 return false;
+         }
+ 
+@@ -99,11 +99,11 @@ static bool ensure_table(CveDB *self)
+         rc = sqlite3_exec(self->db, query, NULL, NULL, &err);
+         if (rc != SQLITE_OK) {
+                 fprintf(stderr, "ensure_table(): %s\n", err);
+-                free(err);
++                sqlite3_free(err);
+                 return false;
+         }
+         if (err) {
+-                free(err);
++                sqlite3_free(err);
+         }
+ 
+         return true;
+-- 
+2.1.4
+
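
The cve-check-tool fix above rests on an SQLite ownership rule: the error string that sqlite3_exec() returns through its errmsg argument is allocated by SQLite and must be released with sqlite3_free(), not the C library's free(). A minimal standalone sketch of that rule, assuming libsqlite3; run_ddl() and the table name are illustrative, not cve-check-tool code:

    /* Sketch only: the sqlite3_exec()/sqlite3_free() pairing the patch
       above enforces.  Build with -lsqlite3.  */
    #include <stdio.h>
    #include <sqlite3.h>

    static int
    run_ddl (sqlite3 *db, const char *sql)
    {
      char *err = NULL;

      if (sqlite3_exec (db, sql, NULL, NULL, &err) != SQLITE_OK)
        {
          fprintf (stderr, "sqlite3_exec(): %s\n", err ? err : "unknown error");
          /* The message comes from sqlite3_malloc(), so it must go back
             through sqlite3_free(); handing it to plain free() mixes
             allocators, which is what the patch corrects.  */
          sqlite3_free (err);
          return -1;
        }

      return 0;
    }

    int
    main (void)
    {
      sqlite3 *db = NULL;
      int ret = 1;

      if (sqlite3_open (":memory:", &db) == SQLITE_OK)
        ret = run_ddl (db, "CREATE TABLE IF NOT EXISTS demo (id INTEGER)");

      sqlite3_close (db);
      return ret ? 1 : 0;
    }
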
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/cve-check-tool/files/0001-print-progress-in-percent-when-downloading-CVE-db.patch b/import-layers/yocto-poky/meta/recipes-devtools/cve-check-tool/files/0001-print-progress-in-percent-when-downloading-CVE-db.patch
index 0510e3a..8ea6f68 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/cve-check-tool/files/0001-print-progress-in-percent-when-downloading-CVE-db.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/cve-check-tool/files/0001-print-progress-in-percent-when-downloading-CVE-db.patch
@@ -38,7 +38,7 @@
 +        if (dltotal && percent && percent->end >= percent->start) {
 +                unsigned int diff = percent->end - percent->start;
 +                if (diff) {
-+                        fprintf(stderr,"completed: "CURL_FORMAT_OFF_T"%%\r", percent->start + (diff * dlnow / dltotal));
++                        fprintf(stderr,"completed: %"CURL_FORMAT_CURL_OFF_T"%%\r", percent->start + (diff * dlnow / dltotal));
 +                }
 +        }
 +
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/diffstat/diffstat_1.61.bb b/import-layers/yocto-poky/meta/recipes-devtools/diffstat/diffstat_1.61.bb
index 583b387..f8b7b06 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/diffstat/diffstat_1.61.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/diffstat/diffstat_1.61.bb
@@ -18,14 +18,9 @@
 
 inherit autotools gettext ptest
 
-LDFLAGS += "${TOOLCHAIN_OPTIONS}"
+EXTRA_AUTORECONF += "--exclude=aclocal"
 
-do_configure () {
-	if [ ! -e ${S}/acinclude.m4 ]; then
-		mv ${S}/aclocal.m4 ${S}/acinclude.m4
-	fi
-	autotools_do_configure
-}
+LDFLAGS += "${TOOLCHAIN_OPTIONS}"
 
 do_install_ptest() {
 	cp -r ${S}/testing ${D}${PTEST_PATH}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/distcc/distcc_3.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/distcc/distcc_3.2.bb
index ea3d7c1..b6da65a 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/distcc/distcc_3.2.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/distcc/distcc_3.2.bb
@@ -23,6 +23,7 @@
            file://distcc.service"
 SRCREV = "d8b18df3e9dcbe4f092bed565835d3975e99432c"
 S = "${WORKDIR}/git"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 inherit autotools pkgconfig update-rc.d useradd systemd
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dmidecode/dmidecode_3.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/dmidecode/dmidecode_3.0.bb
deleted file mode 100644
index 109a9d2..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/dmidecode/dmidecode_3.0.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-SUMMARY = "DMI (Desktop Management Interface) table related utilities"
-HOMEPAGE = "http://www.nongnu.org/dmidecode/"
-LICENSE = "GPLv2"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=b234ee4d69f5fce4486a80fdaf4a4263"
-
-SRC_URI = "${SAVANNAH_NONGNU_MIRROR}/dmidecode/${BP}.tar.xz"
-
-COMPATIBLE_HOST = "(i.86|x86_64|aarch64|arm|powerpc|powerpc64).*-linux"
-
-EXTRA_OEMAKE = "-e MAKEFLAGS="
-
-do_install() {
-	oe_runmake DESTDIR="${D}" install
-}
-
-do_unpack_extra() {
-	sed -i -e '/^prefix/s:/usr/local:${exec_prefix}:' ${S}/Makefile
-}
-addtask unpack_extra after do_unpack before do_patch
-
-SRC_URI[md5sum] = "281ee572d45c78eca73a14834c495ffd"
-SRC_URI[sha256sum] = "7ec35bb193729c1d593a1460b59d82d24b89102ab23fd0416e6cf4325d077e45"
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dmidecode/dmidecode_3.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/dmidecode/dmidecode_3.1.bb
new file mode 100644
index 0000000..f83281e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dmidecode/dmidecode_3.1.bb
@@ -0,0 +1,23 @@
+SUMMARY = "DMI (Desktop Management Interface) table related utilities"
+HOMEPAGE = "http://www.nongnu.org/dmidecode/"
+LICENSE = "GPLv2"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=b234ee4d69f5fce4486a80fdaf4a4263"
+
+SRC_URI = "${SAVANNAH_NONGNU_MIRROR}/dmidecode/${BP}.tar.xz"
+
+COMPATIBLE_HOST = "(i.86|x86_64|aarch64|arm|powerpc|powerpc64).*-linux"
+
+EXTRA_OEMAKE = "-e MAKEFLAGS="
+
+do_install() {
+	oe_runmake DESTDIR="${D}" install
+}
+
+do_unpack_extra() {
+	sed -i -e '/^prefix/s:/usr/local:${exec_prefix}:' ${S}/Makefile
+}
+addtask unpack_extra after do_unpack before do_patch
+
+SRC_URI[md5sum] = "679c2c015c515aa6ca5f229aee49c102"
+SRC_URI[sha256sum] = "d766ce9b25548c59b1e7e930505b4cad9a7bb0b904a1a391fbb604d529781ac0"
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf/0001-Check-conf.releasever-instead-of-releasever.patch b/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf/0001-Check-conf.releasever-instead-of-releasever.patch
new file mode 100644
index 0000000..05f3141
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf/0001-Check-conf.releasever-instead-of-releasever.patch
@@ -0,0 +1,31 @@
+From 166833a88a928a574bf9143b9b65f544be482c77 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Fri, 18 Aug 2017 15:55:15 +0300
+Subject: [PATCH] Check conf.releasever instead of releasever
+
+The substitutions may actually set the conf.releasever correctly,
+and so the check should use that instead of the passed-in function
+parameter.
+
+Upstream-Status: Submitted [https://github.com/rpm-software-management/dnf/pull/901]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+ dnf/cli/cli.py | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/dnf/cli/cli.py b/dnf/cli/cli.py
+index 2d63420c..167943b8 100644
+--- a/dnf/cli/cli.py
++++ b/dnf/cli/cli.py
+@@ -914,7 +914,7 @@ class Cli(object):
+         conf.releasever = releasever
+         subst = conf.substitutions
+         subst.update_from_etc(conf.installroot)
+-        if releasever is None:
++        if conf.releasever is None:
+             logger.warning(_("Unable to detect release version (use '--releasever' to specify "
+                              "release version)"))
+ 
+-- 
+2.14.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf/0030-Run-python-scripts-using-env.patch b/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf/0030-Run-python-scripts-using-env.patch
index 61328e6..1abd880 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf/0030-Run-python-scripts-using-env.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf/0030-Run-python-scripts-using-env.patch
@@ -10,8 +10,7 @@
 ---
  bin/dnf-automatic.in | 2 +-
  bin/dnf.in           | 2 +-
- bin/yum.in           | 2 +-
- 3 files changed, 3 insertions(+), 3 deletions(-)
+ 2 files changed, 2 insertions(+), 2 deletions(-)
 
 diff --git a/bin/dnf-automatic.in b/bin/dnf-automatic.in
 index 5b06aa26..9f6f703e 100755
@@ -33,16 +32,6 @@
  # The dnf executable script.
  #
  # Copyright (C) 2012-2016 Red Hat, Inc.
-diff --git a/bin/yum.in b/bin/yum.in
-index f1fee071..013dc8c5 100755
---- a/bin/yum.in
-+++ b/bin/yum.in
-@@ -1,4 +1,4 @@
--#!@PYTHON_EXECUTABLE@
-+#!/usr/bin/env python3
- # The dnf executable script.
- #
- # Copyright (C) 2016 Red Hat, Inc.
 -- 
 2.11.0
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf_2.6.3.bb b/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf_2.6.3.bb
new file mode 100644
index 0000000..3ed6a74
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf_2.6.3.bb
@@ -0,0 +1,52 @@
+SUMMARY = "Package manager forked from Yum, using libsolv as a dependency resolver"
+LICENSE = "GPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://PACKAGE-LICENSING;md5=bfc29916e11321be06924c4fb096fdcc \
+                   "
+
+SRC_URI = "git://github.com/rpm-software-management/dnf.git \
+           file://0029-Do-not-set-PYTHON_INSTALL_DIR-by-running-python.patch \
+           file://0030-Run-python-scripts-using-env.patch \
+           file://0001-Do-not-prepend-installroot-to-logdir.patch \
+           file://0001-Do-not-hardcode-etc-and-systemd-unit-directories.patch \
+           file://0001-Corretly-install-tmpfiles.d-configuration.patch \
+           file://0001-Check-conf.releasever-instead-of-releasever.patch \
+           "
+
+SRCREV = "be2585183ec4485ee4d5e121f242d8669296f065"
+UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>\d+(\.\d+)+)"
+
+S = "${WORKDIR}/git"
+
+inherit cmake gettext bash-completion distutils3-base systemd
+
+DEPENDS += "libdnf librepo libcomps python3-iniparse"
+
+# manpages generation requires http://www.sphinx-doc.org/
+EXTRA_OECMAKE = " -DWITH_MAN=0 -DPYTHON_INSTALL_DIR=${PYTHON_SITEPACKAGES_DIR} -DPYTHON_DESIRED=3"
+
+BBCLASSEXTEND = "native nativesdk"
+RDEPENDS_${PN}_class-target += "python3-core python3-codecs python3-netclient python3-email python3-threading python3-distutils librepo python3-shell python3-subprocess libcomps libdnf python3-sqlite3 python3-compression python3-rpm python3-iniparse python3-json python3-importlib python3-curses python3-argparse python3-misc python3-gpg"
+# Recommend gnupg so that GPG signature check on repository metadata is possible
+RRECOMMENDS_${PN}_class-target += "gnupg"
+
+# Create a symlink called 'dnf' as 'make install' does not do it, but
+# .spec file in dnf source tree does (and then Fedora and dnf documentation
+# says that dnf binary is plain 'dnf').
+do_install_append() {
+        lnr ${D}/${bindir}/dnf-3 ${D}/${bindir}/dnf
+        lnr ${D}/${bindir}/dnf-automatic-3 ${D}/${bindir}/dnf-automatic
+}
+
+# Direct dnf-native to read rpm configuration from our sysroot, not the one it was compiled in
+do_install_append_class-native() {
+        create_wrapper ${D}/${bindir}/dnf \
+                RPM_CONFIGDIR=${STAGING_LIBDIR_NATIVE}/rpm \
+                RPM_NO_CHROOT_FOR_SCRIPTS=1
+}
+
+SYSTEMD_SERVICE_${PN} = "dnf-makecache.service dnf-makecache.timer \
+                         dnf-automatic-download.service dnf-automatic-download.timer \
+                         dnf-automatic-install.service dnf-automatic-install.timer \
+                         dnf-automatic-notifyonly.service dnf-automatic-notifyonly.timer \
+"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf_git.bb
deleted file mode 100644
index 9f814fb..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/dnf/dnf_git.bb
+++ /dev/null
@@ -1,49 +0,0 @@
-SUMMARY = "Package manager forked from Yum, using libsolv as a dependency resolver"
-LICENSE = "GPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://PACKAGE-LICENSING;md5=bfc29916e11321be06924c4fb096fdcc \
-                   "
-
-SRC_URI = "git://github.com/rpm-software-management/dnf.git \
-           file://0029-Do-not-set-PYTHON_INSTALL_DIR-by-running-python.patch \
-           file://0030-Run-python-scripts-using-env.patch \
-           file://0001-Do-not-prepend-installroot-to-logdir.patch \
-           file://0001-Do-not-hardcode-etc-and-systemd-unit-directories.patch \
-           file://0001-Corretly-install-tmpfiles.d-configuration.patch \
-           "
-
-PV = "2.0.0+git${SRCPV}"
-SRCREV = "f0093d672d3069cfee8447973ae70ef615fd8886"
-
-S = "${WORKDIR}/git"
-
-inherit cmake gettext bash-completion distutils3-base systemd
-
-DEPENDS += "libdnf librepo libcomps python3-pygpgme python3-iniparse"
-
-# manpages generation requires http://www.sphinx-doc.org/
-EXTRA_OECMAKE = " -DWITH_MAN=0 -DPYTHON_INSTALL_DIR=${PYTHON_SITEPACKAGES_DIR} -DPYTHON_DESIRED=3"
-
-BBCLASSEXTEND = "native nativesdk"
-RDEPENDS_${PN}_class-target += "python3-core python3-codecs python3-netclient python3-email python3-threading python3-distutils librepo python3-shell python3-subprocess libcomps libdnf python3-sqlite3 python3-compression python3-pygpgme python3-rpm python3-iniparse python3-json python3-importlib python3-curses python3-argparse python3-misc"
-
-# Create a symlink called 'dnf' as 'make install' does not do it, but
-# .spec file in dnf source tree does (and then Fedora and dnf documentation
-# says that dnf binary is plain 'dnf').
-do_install_append() {
-        lnr ${D}/${bindir}/dnf-3 ${D}/${bindir}/dnf
-        lnr ${D}/${bindir}/dnf-automatic-3 ${D}/${bindir}/dnf-automatic
-}
-
-# Direct dnf-native to read rpm configuration from our sysroot, not the one it was compiled in
-do_install_append_class-native() {
-        create_wrapper ${D}/${bindir}/dnf \
-                RPM_CONFIGDIR=${STAGING_LIBDIR_NATIVE}/rpm \
-                RPM_NO_CHROOT_FOR_SCRIPTS=1
-}
-
-SYSTEMD_SERVICE_${PN} = "dnf-makecache.service dnf-makecache.timer \
-                         dnf-automatic-download.service dnf-automatic-download.timer \
-                         dnf-automatic-install.service dnf-automatic-install.timer \
-                         dnf-automatic-notifyonly.service dnf-automatic-notifyonly.timer \
-"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0001-dpkg-Support-muslx32-build.patch b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0001-dpkg-Support-muslx32-build.patch
new file mode 100644
index 0000000..50e6894
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0001-dpkg-Support-muslx32-build.patch
@@ -0,0 +1,41 @@
+From a328c8bec0bf8071ae8f20fee4c7475205064ba1 Mon Sep 17 00:00:00 2001
+From: sweeaun <swee.aun.khor@intel.com>
+Date: Sun, 10 Sep 2017 00:14:15 -0700
+Subject: [PATCH] dpkg: Support muslx32 build
+
+Upstream-Status: Pending
+Changes made to ostable and tupletable to enable the muslx32 build.
+
+Signed-off-by: sweeaun <swee.aun.khor@intel.com>
+---
+ data/ostable    | 1 +
+ data/tupletable | 1 +
+ 2 files changed, 2 insertions(+)
+
+diff --git a/data/ostable b/data/ostable
+index be64342..87db273 100644
+--- a/data/ostable
++++ b/data/ostable
+@@ -19,6 +19,7 @@ base-uclibc-linux	linux-uclibc		linux[^-]*-uclibc
+ eabihf-musl-linux	linux-musleabihf	linux[^-]*-musleabihf
+ eabi-musl-linux		linux-musleabi		linux[^-]*-musleabi
+ base-musl-linux		linux-musl		linux[^-]*-musl
++x32-musl-linux		linux-muslx32		linux[^-]*-muslx32
+ eabihf-gnu-linux	linux-gnueabihf		linux[^-]*-gnueabihf
+ eabi-gnu-linux		linux-gnueabi		linux[^-]*-gnueabi
+ abin32-gnu-linux	linux-gnuabin32		linux[^-]*-gnuabin32
+diff --git a/data/tupletable b/data/tupletable
+index 28f00bf..748ffab 100644
+--- a/data/tupletable
++++ b/data/tupletable
+@@ -10,6 +10,7 @@ base-uclibc-linux-<cpu>		uclibc-linux-<cpu>
+ eabihf-musl-linux-arm		musl-linux-armhf
+ eabi-musl-linux-arm		musl-linux-armel
+ base-musl-linux-<cpu>		musl-linux-<cpu>
++x32-musl-linux-amd64		x32
+ ilp32-gnu-linux-arm64		arm64ilp32
+ eabihf-gnu-linux-arm		armhf
+ eabi-gnu-linux-arm		armel
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0002-Adapt-to-linux-wrs-kernel-version-which-has-characte.patch b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0002-Adapt-to-linux-wrs-kernel-version-which-has-characte.patch
index 231a6a2..9fe0ca7 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0002-Adapt-to-linux-wrs-kernel-version-which-has-characte.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0002-Adapt-to-linux-wrs-kernel-version-which-has-characte.patch
@@ -13,13 +13,13 @@
  1 file changed, 2 insertions(+), 4 deletions(-)
 
 diff --git a/lib/dpkg/parsehelp.c b/lib/dpkg/parsehelp.c
-index 79b2908..7758aa5 100644
+index 453077fd9..f42ea2882 100644
 --- a/lib/dpkg/parsehelp.c
 +++ b/lib/dpkg/parsehelp.c
-@@ -235,14 +235,12 @@ parseversion(struct dpkg_version *rversion, const char *string,
- 
-   /* XXX: Would be faster to use something like cisversion and cisrevision. */
+@@ -243,14 +243,12 @@ parseversion(struct dpkg_version *rversion, const char *string,
    ptr = rversion->version;
+   if (!*ptr)
+     return dpkg_put_error(err, _("version number is empty"));
 -  if (*ptr && !c_isdigit(*ptr++))
 -    return dpkg_put_warn(err, _("version number does not start with digit"));
    for (; *ptr; ptr++) {
@@ -33,6 +33,6 @@
        return dpkg_put_warn(err, _("invalid character in revision number"));
    }
  
--- 
-2.1.4
 
+-- 
+2.11.0
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0006-add-musleabi-to-known-target-tripets.patch b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0006-add-musleabi-to-known-target-tripets.patch
index a6b0088..d929466 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0006-add-musleabi-to-known-target-tripets.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0006-add-musleabi-to-known-target-tripets.patch
@@ -6,37 +6,36 @@
 helps compiling dpkg for musl/arm-softfloat
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
 ---
-Upstream-Status: Pending
-
- ostable      | 1 +
- triplettable | 1 +
+ data/ostable    | 1 +
+ data/tupletable | 1 +
  2 files changed, 2 insertions(+)
 
-diff --git a/ostable b/ostable
-index 3bb6819..d0ffdc7 100644
---- a/ostable
-+++ b/ostable
-@@ -15,6 +15,7 @@
- uclibceabi-linux	linux-uclibceabi	linux[^-]*-uclibceabi
- uclibc-linux		linux-uclibc		linux[^-]*-uclibc
- musleabihf-linux	linux-musleabihf	linux[^-]*-musleabihf
-+musleabi-linux		linux-musleabi		linux[^-]*-musleabi
- musl-linux		linux-musl		linux[^-]*-musl
- gnueabihf-linux		linux-gnueabihf		linux[^-]*-gnueabihf
- gnueabi-linux		linux-gnueabi		linux[^-]*-gnueabi
-diff --git a/triplettable b/triplettable
-index 1213584..70d24c1 100644
---- a/triplettable
-+++ b/triplettable
-@@ -6,6 +6,7 @@
- uclibceabi-linux-arm	uclibc-linux-armel
- uclibc-linux-<cpu>	uclibc-linux-<cpu>
- musleabihf-linux-arm	musl-linux-armhf
-+musleabi-linux-arm	musl-linux-armel
- musl-linux-<cpu>	musl-linux-<cpu>
- gnueabihf-linux-arm	armhf
- gnueabi-linux-arm	armel
+diff --git a/data/ostable b/data/ostable
+index 99c1f889d..be6434271 100644
+--- a/data/ostable
++++ b/data/ostable
+@@ -17,6 +17,7 @@
+ eabi-uclibc-linux	linux-uclibceabi	linux[^-]*-uclibceabi
+ base-uclibc-linux	linux-uclibc		linux[^-]*-uclibc
+ eabihf-musl-linux	linux-musleabihf	linux[^-]*-musleabihf
++eabi-musl-linux		linux-musleabi		linux[^-]*-musleabi
+ base-musl-linux		linux-musl		linux[^-]*-musl
+ eabihf-gnu-linux	linux-gnueabihf		linux[^-]*-gnueabihf
+ eabi-gnu-linux		linux-gnueabi		linux[^-]*-gnueabi
+diff --git a/data/tupletable b/data/tupletable
+index 5f500f6ca..28f00bfe6 100644
+--- a/data/tupletable
++++ b/data/tupletable
+@@ -8,6 +8,7 @@
+ eabi-uclibc-linux-arm		uclibc-linux-armel
+ base-uclibc-linux-<cpu>		uclibc-linux-<cpu>
+ eabihf-musl-linux-arm		musl-linux-armhf
++eabi-musl-linux-arm		musl-linux-armel
+ base-musl-linux-<cpu>		musl-linux-<cpu>
+ ilp32-gnu-linux-arm64		arm64ilp32
+ eabihf-gnu-linux-arm		armhf
 -- 
-2.6.4
+2.11.0
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0007-dpkg-deb-build.c-Remove-usage-of-clamp-mtime-in-tar.patch b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0007-dpkg-deb-build.c-Remove-usage-of-clamp-mtime-in-tar.patch
index 8bfaad1..1b985df 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0007-dpkg-deb-build.c-Remove-usage-of-clamp-mtime-in-tar.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/0007-dpkg-deb-build.c-Remove-usage-of-clamp-mtime-in-tar.patch
@@ -23,18 +23,17 @@
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/dpkg-deb/build.c b/dpkg-deb/build.c
-index 655aa55..927f56f 100644
+index a92b58e..a3d1912 100644
 --- a/dpkg-deb/build.c
 +++ b/dpkg-deb/build.c
-@@ -447,7 +447,7 @@ tarball_pack(const char *dir, filenames_feed_func *tar_filenames_feeder,
-     snprintf(mtime, sizeof(mtime), "@%ld", timestamp);
+@@ -450,7 +450,7 @@ tarball_pack(const char *dir, filenames_feed_func *tar_filenames_feeder,
  
-     execlp(TAR, "tar", "-cf", "-", "--format=gnu",
--                       "--mtime", mtime, "--clamp-mtime",
-+                       "--mtime", mtime,
-                        "--null", "--no-unquote",
-                        "--no-recursion", "-T", "-", NULL);
-     ohshite(_("unable to execute %s (%s)"), "tar -cf", TAR);
+     command_init(&cmd, TAR, "tar -cf");
+     command_add_args(&cmd, "tar", "-cf", "-", "--format=gnu",
+-                           "--mtime", mtime, "--clamp-mtime", NULL);
++                           "--mtime", mtime, NULL);
+     /* Mode might become a positional argument, pass it before -T. */
+     if (mode)
+       command_add_args(&cmd, "--mode", mode, NULL);
 -- 
-2.1.4
-
+2.11.0
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/add_armeb_triplet_entry.patch b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/add_armeb_triplet_entry.patch
index dc69eb2..d165616 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/add_armeb_triplet_entry.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/add_armeb_triplet_entry.patch
@@ -24,23 +24,25 @@
 Upstream-Status: Pending
 
 Signed-off-by: Krishnanjanappa, Jagadeesh <jagadeesh.krishnanjanappa@caviumnetworks.com>
+Signed-off-by: Aníbal Limón <anibal.limon@linux.intel.com>
 
 ---
- triplettable | 1 +
+ data/tupletable | 1 +
  1 file changed, 1 insertion(+)
 
-diff --git a/triplettable b/triplettable
-index abe4726..1e9c247 100644
---- a/triplettable
-+++ b/triplettable
-@@ -11,6 +11,7 @@ gnueabihf-linux-arm	armhf
- gnueabi-linux-arm	armel
- gnuabin32-linux-mips64r6el	mipsn32r6el
- gnuabin32-linux-mips64r6	mipsn32r6
-+gnueabi-linux-armeb	armeb
- gnuabin32-linux-mips64el	mipsn32el
- gnuabin32-linux-mips64	mipsn32
- gnuabi64-linux-mips64r6el	mips64r6el
+diff --git a/data/tupletable b/data/tupletable
+index b7802bec3..5f500f6ca 100644
+--- a/data/tupletable
++++ b/data/tupletable
+@@ -12,6 +12,7 @@ base-musl-linux-<cpu>		musl-linux-<cpu>
+ ilp32-gnu-linux-arm64		arm64ilp32
+ eabihf-gnu-linux-arm		armhf
+ eabi-gnu-linux-arm		armel
++eabi-gnu-linux-armeb		armeb
+ abin32-gnu-linux-mips64r6el	mipsn32r6el
+ abin32-gnu-linux-mips64r6	mipsn32r6
+ abin32-gnu-linux-mips64el	mipsn32el
 -- 
-2.1.4
+2.11.0
+
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/arch_pm.patch b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/arch_pm.patch
index cad4c0f..4e0d22a 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/arch_pm.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/arch_pm.patch
@@ -7,16 +7,23 @@
 
 Signed-off-by: Joe Slater <jslater@windriver.com>
 
+---
+ scripts/Dpkg/Arch.pm | 3 ---
+ 1 file changed, 3 deletions(-)
 
+diff --git a/scripts/Dpkg/Arch.pm b/scripts/Dpkg/Arch.pm
+index 1720847b8..6345ce3b9 100644
 --- a/scripts/Dpkg/Arch.pm
 +++ b/scripts/Dpkg/Arch.pm
-@@ -233,9 +233,6 @@ sub read_triplettable()
- 		    (my $dt = $debtriplet) =~ s/<cpu>/$_cpu/;
+@@ -323,9 +323,6 @@ sub _load_tupletable()
+ 		    (my $dt = $debtuple) =~ s/<cpu>/$_cpu/;
  		    (my $da = $debarch) =~ s/<cpu>/$_cpu/;
  
--		    next if exists $debarch_to_debtriplet{$da}
--		         or exists $debtriplet_to_debarch{$dt};
+-		    next if exists $debarch_to_debtuple{$da}
+-		         or exists $debtuple_to_debarch{$dt};
 -
- 		    $debarch_to_debtriplet{$da} = $dt;
- 		    $debtriplet_to_debarch{$dt} = $da;
+ 		    $debarch_to_debtuple{$da} = $dt;
+ 		    $debtuple_to_debarch{$dt} = $da;
  		}
+-- 
+2.11.0
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/dpkg-configure.service b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/dpkg-configure.service
index f0b0789..9a248cc 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/dpkg-configure.service
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/dpkg-configure.service
@@ -8,7 +8,7 @@
 Type=oneshot
 EnvironmentFile=-@SYSCONFDIR@/default/postinst
 ExecStart=-@BASE_BINDIR@/sh -c " if [ $POSTINST_LOGGING = '1' ]; then @BINDIR@/dpkg --configure -a > $LOGFILE 2>&1; else @BINDIR@/dpkg --configure -a; fi"
-ExecStartPost=@BASE_BINDIR@/systemctl disable dpkg-configure.service
+ExecStartPost=@BASE_BINDIR@/systemctl --no-reload disable dpkg-configure.service
 StandardOutput=syslog
 RemainAfterExit=No
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/noman.patch b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/noman.patch
index d30c150..a7f3cb8 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/noman.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg/noman.patch
@@ -1,14 +1,22 @@
 Upstream-Status: Inappropriate [disable feature]
 
-diff -ruN dpkg-1.15.8.5-orig/Makefile.am dpkg-1.15.8.5/Makefile.am
---- dpkg-1.15.8.5-orig/Makefile.am	2010-10-08 12:27:15.042083703 +0800
-+++ dpkg-1.15.8.5/Makefile.am	2010-10-08 12:27:27.755148228 +0800
-@@ -12,8 +12,7 @@
- 	utils \
+---
+ Makefile.am | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
+
+diff --git a/Makefile.am b/Makefile.am
+index 0da52cb16..a1f79e0a2 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -13,8 +13,7 @@ SUBDIRS = \
  	$(MAYBE_DSELECT) \
  	scripts \
+ 	t-func \
 -	po \
 -	man
 +	po
  
  ACLOCAL_AMFLAGS = -I m4
+ 
+-- 
+2.11.0
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg_1.18.10.bb b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg_1.18.10.bb
deleted file mode 100644
index 21385af..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg_1.18.10.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-require dpkg.inc
-LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
-
-SRC_URI = "http://snapshot.debian.org/archive/debian/20160731T221931Z/pool/main/d/${BPN}/${BPN}_${PV}.tar.xz \
-           file://noman.patch \
-           file://remove-tar-no-timestamp.patch \
-           file://arch_pm.patch \
-           file://dpkg-configure.service \
-           file://add_armeb_triplet_entry.patch \
-           file://0002-Adapt-to-linux-wrs-kernel-version-which-has-characte.patch \
-           file://0003-Our-pre-postinsts-expect-D-to-be-set-when-running-in.patch \
-           file://0004-The-lutimes-function-doesn-t-work-properly-for-all-s.patch \
-           file://0005-dpkg-compiler.m4-remove-Wvla.patch \
-           file://0006-add-musleabi-to-known-target-tripets.patch \
-           file://0007-dpkg-deb-build.c-Remove-usage-of-clamp-mtime-in-tar.patch \
-           "
-SRC_URI_append_class-native = " file://glibc2.5-sync_file_range.patch "
-
-SRC_URI[md5sum] = "ccff17730c0964428fc186ded2f2f401"
-SRC_URI[sha256sum] = "025524da41ba18b183ff11e388eb8686f7cc58ee835ed7d48bd159c46a8b6dc5"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg_1.18.24.bb b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg_1.18.24.bb
new file mode 100644
index 0000000..c0c59f1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/dpkg/dpkg_1.18.24.bb
@@ -0,0 +1,21 @@
+require dpkg.inc
+LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
+
+SRC_URI = "http://snapshot.debian.org/archive/debian/20170518T093838Z/pool/main/d/dpkg/dpkg_1.18.24.tar.xz \
+           file://noman.patch \
+           file://remove-tar-no-timestamp.patch \
+           file://arch_pm.patch \
+           file://dpkg-configure.service \
+           file://add_armeb_triplet_entry.patch \
+           file://0002-Adapt-to-linux-wrs-kernel-version-which-has-characte.patch \
+           file://0003-Our-pre-postinsts-expect-D-to-be-set-when-running-in.patch \
+           file://0004-The-lutimes-function-doesn-t-work-properly-for-all-s.patch \
+           file://0005-dpkg-compiler.m4-remove-Wvla.patch \
+           file://0006-add-musleabi-to-known-target-tripets.patch \
+           file://0007-dpkg-deb-build.c-Remove-usage-of-clamp-mtime-in-tar.patch \
+           file://0001-dpkg-Support-muslx32-build.patch \
+           "
+SRC_URI_append_class-native = " file://glibc2.5-sync_file_range.patch "
+
+SRC_URI[md5sum] = "02e8af8faf1e689228da806c3e8c6882"
+SRC_URI[sha256sum] = "d853081d3e06bfd46a227056e591f094e42e78fa8a5793b0093bad30b710d7b4"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/0001-e2fsck-exit-with-exit-status-0-if-no-errors-were-fix.patch b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/0001-e2fsck-exit-with-exit-status-0-if-no-errors-were-fix.patch
deleted file mode 100644
index 1d17520..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/0001-e2fsck-exit-with-exit-status-0-if-no-errors-were-fix.patch
+++ /dev/null
@@ -1,255 +0,0 @@
-From bf9f3b6d5b10d19218b4ed904c12b22e36ec57dd Mon Sep 17 00:00:00 2001
-From: Theodore Ts'o <tytso@mit.edu>
-Date: Thu, 16 Feb 2017 22:02:35 -0500
-Subject: [PATCH] e2fsck: exit with exit status 0 if no errors were fixed
-
-Previously, e2fsck would exit with a status code of 1 even though the
-only changes that it made to the file system were various
-optimizations and not fixing file system corruption.  Since the man
-page states that an exit status of 1 means "file system errors
-corrected", fix e2fsck to return an exit status of 0.
-
-Upstream-Status: Backport
-
-Signed-off-by: Theodore Ts'o <tytso@mit.edu>
-Signed-off-by: Daniel Schultz <d.schultz@phytec.de>
----
- e2fsck/e2fsck.conf.5.in                     |  7 +++++++
- e2fsck/journal.c                            |  1 +
- e2fsck/problem.c                            |  8 +++++---
- e2fsck/problemP.h                           |  1 +
- e2fsck/unix.c                               | 20 ++++++++++++++++----
- tests/f_collapse_extent_tree/expect.1       |  2 +-
- tests/f_compress_extent_tree_level/expect.1 |  2 +-
- tests/f_convert_bmap/expect.1               |  2 +-
- tests/f_convert_bmap_and_extent/expect.1    |  2 +-
- tests/f_extent_htree/expect.1               |  2 +-
- tests/f_jnl_errno/expect.1                  |  2 +-
- tests/f_journal/expect.1                    |  2 +-
- tests/f_orphan/expect.1                     |  2 +-
- tests/f_orphan_extents_inode/expect.1       |  2 +-
- tests/f_rehash_dir/expect.1                 |  2 +-
- tests/f_unsorted_EAs/expect.1               |  2 +-
- 16 files changed, 41 insertions(+), 18 deletions(-)
-
-diff --git a/e2fsck/e2fsck.conf.5.in b/e2fsck/e2fsck.conf.5.in
-index 1848bdb..0bfc76a 100644
---- a/e2fsck/e2fsck.conf.5.in
-+++ b/e2fsck/e2fsck.conf.5.in
-@@ -303,6 +303,13 @@ of 'should this problem be fixed?'.  The
- option even overrides the
- .B -y
- option given on the command-line (just for the specific problem, of course).
-+.TP
-+.I not_a_fix
-This boolean option, if set to true, marks the problem as
-+one where if the user gives permission to make the requested change,
-+it does not mean that the file system had a problem which has since
-+been fixed.  This is used for requests to optimize the file system's
-+data structure, such as pruning an extent tree.
- @TDB_MAN_COMMENT@.SH THE [scratch_files] STANZA
- @TDB_MAN_COMMENT@The following relations are defined in the
- @TDB_MAN_COMMENT@.I [scratch_files]
-diff --git a/e2fsck/journal.c b/e2fsck/journal.c
-index 46fe7b4..c4f58f1 100644
---- a/e2fsck/journal.c
-+++ b/e2fsck/journal.c
-@@ -572,6 +572,7 @@ static void clear_v2_journal_fields(journal_t *journal)
- 	if (!fix_problem(ctx, PR_0_CLEAR_V2_JOURNAL, &pctx))
- 		return;
- 
-+	ctx->flags |= E2F_FLAG_PROBLEMS_FIXED;
- 	memset(((char *) journal->j_superblock) + V1_SB_SIZE, 0,
- 	       ctx->fs->blocksize-V1_SB_SIZE);
- 	mark_buffer_dirty(journal->j_sb_buffer);
-diff --git a/e2fsck/problem.c b/e2fsck/problem.c
-index 34a671e..4b25069 100644
---- a/e2fsck/problem.c
-+++ b/e2fsck/problem.c
-@@ -1276,12 +1276,12 @@ static struct e2fsck_problem problem_table[] = {
- 	/* Inode extent tree could be shorter */
- 	{ PR_1E_CAN_COLLAPSE_EXTENT_TREE,
- 	  N_("@i %i @x tree (at level %b) could be shorter.  "),
--	  PROMPT_FIX, PR_NO_OK | PR_PREEN_NO | PR_PREEN_OK },
-+	  PROMPT_FIX, PR_NO_OK | PR_PREEN_NO | PR_PREEN_OK | PR_NOT_A_FIX },
- 
- 	/* Inode extent tree could be narrower */
- 	{ PR_1E_CAN_NARROW_EXTENT_TREE,
- 	  N_("@i %i @x tree (at level %b) could be narrower.  "),
--	  PROMPT_FIX, PR_NO_OK | PR_PREEN_NO | PR_PREEN_OK },
-+	  PROMPT_FIX, PR_NO_OK | PR_PREEN_NO | PR_PREEN_OK | PR_NOT_A_FIX },
- 
- 	/* Pass 2 errors */
- 
-@@ -2166,6 +2166,7 @@ int fix_problem(e2fsck_t ctx, problem_t code, struct problem_context *pctx)
- 		reconfigure_bool(ctx, ptr, key, PR_NO_NOMSG, "no_nomsg");
- 		reconfigure_bool(ctx, ptr, key, PR_PREEN_NOHDR, "preen_noheader");
- 		reconfigure_bool(ctx, ptr, key, PR_FORCE_NO, "force_no");
-+		reconfigure_bool(ctx, ptr, key, PR_NOT_A_FIX, "not_a_fix");
- 		profile_get_integer(ctx->profile, "options",
- 				    "max_count_problems", 0, 0,
- 				    &ptr->max_count);
-@@ -2283,7 +2284,8 @@ int fix_problem(e2fsck_t ctx, problem_t code, struct problem_context *pctx)
- 	if (ptr->flags & PR_AFTER_CODE)
- 		answer = fix_problem(ctx, ptr->second_code, pctx);
- 
--	if (answer && (ptr->prompt != PROMPT_NONE))
-+	if (answer && (ptr->prompt != PROMPT_NONE) &&
-+	    !(ptr->flags & PR_NOT_A_FIX))
- 		ctx->flags |= E2F_FLAG_PROBLEMS_FIXED;
- 
- 	return answer;
-diff --git a/e2fsck/problemP.h b/e2fsck/problemP.h
-index 7944cd6..63bb8df 100644
---- a/e2fsck/problemP.h
-+++ b/e2fsck/problemP.h
-@@ -44,3 +44,4 @@ struct latch_descr {
- #define PR_CONFIG	0x080000 /* This problem has been customized
- 				    from the config file */
- #define PR_FORCE_NO	0x100000 /* Force the answer to be no */
-+#define PR_NOT_A_FIX	0x200000 /* Yes doesn't mean a problem was fixed */
-diff --git a/e2fsck/unix.c b/e2fsck/unix.c
-index eb9f311..9e4d31a 100644
---- a/e2fsck/unix.c
-+++ b/e2fsck/unix.c
-@@ -1901,11 +1901,23 @@ no_journal:
- 		fix_problem(ctx, PR_6_IO_FLUSH, &pctx);
- 
- 	if (was_changed) {
--		exit_value |= FSCK_NONDESTRUCT;
--		if (!(ctx->options & E2F_OPT_PREEN))
--			log_out(ctx, _("\n%s: ***** FILE SYSTEM WAS "
--				       "MODIFIED *****\n"),
-+		int fs_fixed = (ctx->flags & E2F_FLAG_PROBLEMS_FIXED);
-+
-+		if (fs_fixed)
-+			exit_value |= FSCK_NONDESTRUCT;
-+		if (!(ctx->options & E2F_OPT_PREEN)) {
-+#if 0	/* Do this later; it breaks too many tests' golden outputs */
-+			log_out(ctx, fs_fixed ?
-+				_("\n%s: ***** FILE SYSTEM ERRORS "
-+				  "CORRECTED *****\n") :
-+				_("%s: File system was modified.\n"),
- 				ctx->device_name);
-+#else
-+			log_out(ctx,
-+				_("\n%s: ***** FILE SYSTEM WAS MODIFIED *****\n"),
-+				ctx->device_name);
-+#endif
-+		}
- 		if (ctx->mount_flags & EXT2_MF_ISROOT) {
- 			log_out(ctx, _("%s: ***** REBOOT SYSTEM *****\n"),
- 				ctx->device_name);
-diff --git a/tests/f_collapse_extent_tree/expect.1 b/tests/f_collapse_extent_tree/expect.1
-index e2eb65e..8165a58 100644
---- a/tests/f_collapse_extent_tree/expect.1
-+++ b/tests/f_collapse_extent_tree/expect.1
-@@ -13,4 +13,4 @@ Pass 5: Checking group summary information
- 
- test_filesys: ***** FILE SYSTEM WAS MODIFIED *****
- test_filesys: 12/128 files (0.0% non-contiguous), 19/512 blocks
--Exit status is 1
-+Exit status is 0
-diff --git a/tests/f_compress_extent_tree_level/expect.1 b/tests/f_compress_extent_tree_level/expect.1
-index a359c99..dd33f63 100644
---- a/tests/f_compress_extent_tree_level/expect.1
-+++ b/tests/f_compress_extent_tree_level/expect.1
-@@ -20,4 +20,4 @@ Pass 5: Checking group summary information
- 
- test_filesys: ***** FILE SYSTEM WAS MODIFIED *****
- test_filesys: 12/128 files (8.3% non-contiguous), 26/512 blocks
--Exit status is 1
-+Exit status is 0
-diff --git a/tests/f_convert_bmap/expect.1 b/tests/f_convert_bmap/expect.1
-index 7d2ca86..c387962 100644
---- a/tests/f_convert_bmap/expect.1
-+++ b/tests/f_convert_bmap/expect.1
-@@ -23,4 +23,4 @@ Pass 5: Checking group summary information
- 
- test_filesys: ***** FILE SYSTEM WAS MODIFIED *****
- test_filesys: 12/128 files (8.3% non-contiguous), 570/2048 blocks
--Exit status is 1
-+Exit status is 0
-diff --git a/tests/f_convert_bmap_and_extent/expect.1 b/tests/f_convert_bmap_and_extent/expect.1
-index 7af91aa..c86c571 100644
---- a/tests/f_convert_bmap_and_extent/expect.1
-+++ b/tests/f_convert_bmap_and_extent/expect.1
-@@ -30,4 +30,4 @@ Pass 5: Checking group summary information
- 
- test_filesys: ***** FILE SYSTEM WAS MODIFIED *****
- test_filesys: 13/128 files (15.4% non-contiguous), 574/2048 blocks
--Exit status is 1
-+Exit status is 0
-diff --git a/tests/f_extent_htree/expect.1 b/tests/f_extent_htree/expect.1
-index 223ca69..ea48405 100644
---- a/tests/f_extent_htree/expect.1
-+++ b/tests/f_extent_htree/expect.1
-@@ -26,4 +26,4 @@ test_filesys: ***** FILE SYSTEM WAS MODIFIED *****
-            0 sockets
- ------------
-          343 files
--Exit status is 1
-+Exit status is 0
-diff --git a/tests/f_jnl_errno/expect.1 b/tests/f_jnl_errno/expect.1
-index c572951..4134234 100644
---- a/tests/f_jnl_errno/expect.1
-+++ b/tests/f_jnl_errno/expect.1
-@@ -6,4 +6,4 @@ Pass 5: Checking group summary information
- 
- test_filesys: ***** FILE SYSTEM WAS MODIFIED *****
- test_filesys: 11/2048 files (9.1% non-contiguous), 1330/8192 blocks
--Exit status is 1
-+Exit status is 0
-diff --git a/tests/f_journal/expect.1 b/tests/f_journal/expect.1
-index a202c80..0a18654 100644
---- a/tests/f_journal/expect.1
-+++ b/tests/f_journal/expect.1
-@@ -59,4 +59,4 @@ Pass 5: Checking group summary information
- 
- test_filesys: ***** FILE SYSTEM WAS MODIFIED *****
- test_filesys: 53/2048 files (1.9% non-contiguous), 1409/8192 blocks
--Exit status is 1
-+Exit status is 0
-diff --git a/tests/f_orphan/expect.1 b/tests/f_orphan/expect.1
-index eddc1f8..087ebee 100644
---- a/tests/f_orphan/expect.1
-+++ b/tests/f_orphan/expect.1
-@@ -11,4 +11,4 @@ Pass 5: Checking group summary information
- 
- test_filesys: ***** FILE SYSTEM WAS MODIFIED *****
- test_filesys: 12/2048 files (0.0% non-contiguous), 1303/8192 blocks
--Exit status is 1
-+Exit status is 0
-diff --git a/tests/f_orphan_extents_inode/expect.1 b/tests/f_orphan_extents_inode/expect.1
-index 2eaab78..5d713b3 100644
---- a/tests/f_orphan_extents_inode/expect.1
-+++ b/tests/f_orphan_extents_inode/expect.1
-@@ -7,4 +7,4 @@ Pass 5: Checking group summary information
- 
- test_filesys: ***** FILE SYSTEM WAS MODIFIED *****
- test_filesys: 12/16 files (0.0% non-contiguous), 21/100 blocks
--Exit status is 1
-+Exit status is 0
-diff --git a/tests/f_rehash_dir/expect.1 b/tests/f_rehash_dir/expect.1
-index 6076765..c1449ba 100644
---- a/tests/f_rehash_dir/expect.1
-+++ b/tests/f_rehash_dir/expect.1
-@@ -7,4 +7,4 @@ Pass 5: Checking group summary information
- 
- test_filesys: ***** FILE SYSTEM WAS MODIFIED *****
- test_filesys: 105/2048 files (2.9% non-contiguous), 336/512 blocks
--Exit status is 1
-+Exit status is 0
-diff --git a/tests/f_unsorted_EAs/expect.1 b/tests/f_unsorted_EAs/expect.1
-index 7d588d7..64b9045 100644
---- a/tests/f_unsorted_EAs/expect.1
-+++ b/tests/f_unsorted_EAs/expect.1
-@@ -8,4 +8,4 @@ Pass 5: Checking group summary information
- 
- test_filesys: ***** FILE SYSTEM WAS MODIFIED *****
- test_filesys: 12/2048 files (0.0% non-contiguous), 1294/2048 blocks
--Exit status is 1
-+Exit status is 0
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/0001-misc-create_inode.c-set-dir-s-mode-correctly.patch b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/0001-misc-create_inode.c-set-dir-s-mode-correctly.patch
new file mode 100644
index 0000000..fc4a540
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/0001-misc-create_inode.c-set-dir-s-mode-correctly.patch
@@ -0,0 +1,41 @@
+From f6d188580c2c9599319076fee22f2424652c711c Mon Sep 17 00:00:00 2001
+From: Robert Yang <liezhi.yang@windriver.com>
+Date: Wed, 13 Sep 2017 19:55:35 -0700
+Subject: [PATCH] misc/create_inode.c: set dir's mode correctly
+
+The dir's mode has been set by ext2fs_mkdir() with umask, so
+reset it to the source's mode in set_inode_extra().
+
+Fixed the case where the source dir's mode is 521 but the one in the tarball
+would be 721, which was incorrect.
+
+Upstream-Status: Submitted
+
+Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
+---
+ misc/create_inode.c | 9 ++++++++-
+ 1 file changed, 8 insertions(+), 1 deletion(-)
+
+diff --git a/misc/create_inode.c b/misc/create_inode.c
+index 8ce3faf..50fbaa8 100644
+--- a/misc/create_inode.c
++++ b/misc/create_inode.c
+@@ -116,7 +116,14 @@ static errcode_t set_inode_extra(ext2_filsys fs, ext2_ino_t ino,
+ 
+ 	inode.i_uid = st->st_uid;
+ 	inode.i_gid = st->st_gid;
+-	inode.i_mode |= st->st_mode;
++	/*
++	 * The dir's mode has been set by ext2fs_mkdir() with umask, so
++	 * reset it to the source's mode
++	 */
++	if S_ISDIR(st->st_mode)
++		inode.i_mode = LINUX_S_IFDIR | st->st_mode;
++	else
++		inode.i_mode |= st->st_mode;
+ 	inode.i_atime = st->st_atime;
+ 	inode.i_mtime = st->st_mtime;
+ 	inode.i_ctime = st->st_ctime;
+-- 
+2.10.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/e2fsprogs-1.43-sysmacros.patch b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/e2fsprogs-1.43-sysmacros.patch
deleted file mode 100644
index abbf2ba..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/e2fsprogs-1.43-sysmacros.patch
+++ /dev/null
@@ -1,130 +0,0 @@
-From 30ef41f68703b6a16027cc8787118b87f1462dff Mon Sep 17 00:00:00 2001
-From: Mike Frysinger <vapier@gentoo.org>
-Date: Mon, 28 Mar 2016 20:31:33 -0400
-Subject: [PATCH e2fsprogs] include sys/sysmacros.h as needed
-
-The minor/major/makedev macros are not entirely standard.  glibc has had
-the definitions in sys/sysmacros.h since the start, and wants to move away
-from always defining them implicitly via sys/types.h (as this pollutes the
-namespace in violation of POSIX).  Other C libraries have already dropped
-them.  Since the configure script already checks for this header, use that
-to pull in the header in files that use these macros.
-
-Signed-off-by: Mike Frysinger <vapier@gentoo.org>
-
-Taken from gentoo portage.
-
-Upstream-Status: Pending
-
----
- debugfs/debugfs.c      | 3 +++
- lib/blkid/devname.c    | 3 +++
- lib/blkid/devno.c      | 3 +++
- lib/ext2fs/finddev.c   | 3 +++
- lib/ext2fs/ismounted.c | 3 +++
- misc/create_inode.c    | 4 ++++
- misc/mk_hugefiles.c    | 3 +++
- 7 files changed, 22 insertions(+)
-
-diff --git a/debugfs/debugfs.c b/debugfs/debugfs.c
-index ba8be40..7d481bc 100644
---- a/debugfs/debugfs.c
-+++ b/debugfs/debugfs.c
-@@ -26,6 +26,9 @@ extern char *optarg;
- #include <errno.h>
- #endif
- #include <fcntl.h>
-+#ifdef HAVE_SYS_SYSMACROS_H
-+#include <sys/sysmacros.h>
-+#endif
- 
- #include "debugfs.h"
- #include "uuid/uuid.h"
-diff --git a/lib/blkid/devname.c b/lib/blkid/devname.c
-index 3e2efa9..671e781 100644
---- a/lib/blkid/devname.c
-+++ b/lib/blkid/devname.c
-@@ -36,6 +36,9 @@
- #if HAVE_SYS_MKDEV_H
- #include <sys/mkdev.h>
- #endif
-+#ifdef HAVE_SYS_SYSMACROS_H
-+#include <sys/sysmacros.h>
-+#endif
- #include <time.h>
- 
- #include "blkidP.h"
-diff --git a/lib/blkid/devno.c b/lib/blkid/devno.c
-index 479d977..61e6fc7 100644
---- a/lib/blkid/devno.c
-+++ b/lib/blkid/devno.c
-@@ -31,6 +31,9 @@
- #if HAVE_SYS_MKDEV_H
- #include <sys/mkdev.h>
- #endif
-+#ifdef HAVE_SYS_SYSMACROS_H
-+#include <sys/sysmacros.h>
-+#endif
- 
- #include "blkidP.h"
- 
-diff --git a/lib/ext2fs/finddev.c b/lib/ext2fs/finddev.c
-index 311608d..62fa0db 100644
---- a/lib/ext2fs/finddev.c
-+++ b/lib/ext2fs/finddev.c
-@@ -31,6 +31,9 @@
- #if HAVE_SYS_MKDEV_H
- #include <sys/mkdev.h>
- #endif
-+#ifdef HAVE_SYS_SYSMACROS_H
-+#include <sys/sysmacros.h>
-+#endif
- 
- #include "ext2_fs.h"
- #include "ext2fs.h"
-diff --git a/lib/ext2fs/ismounted.c b/lib/ext2fs/ismounted.c
-index e0f69dd..7404996 100644
---- a/lib/ext2fs/ismounted.c
-+++ b/lib/ext2fs/ismounted.c
-@@ -49,6 +49,9 @@
- #if HAVE_SYS_TYPES_H
- #include <sys/types.h>
- #endif
-+#ifdef HAVE_SYS_SYSMACROS_H
-+#include <sys/sysmacros.h>
-+#endif
- 
- #include "ext2_fs.h"
- #include "ext2fs.h"
-diff --git a/misc/create_inode.c b/misc/create_inode.c
-index 4dbd8e5..98aeb41 100644
---- a/misc/create_inode.c
-+++ b/misc/create_inode.c
-@@ -22,6 +22,10 @@
- #include <attr/xattr.h>
- #endif
- #include <sys/ioctl.h>
-+#ifdef HAVE_SYS_SYSMACROS_H
-+#include <sys/sysmacros.h>
-+#endif
-+
- #include <ext2fs/ext2fs.h>
- #include <ext2fs/ext2_types.h>
- #include <ext2fs/fiemap.h>
-diff --git a/misc/mk_hugefiles.c b/misc/mk_hugefiles.c
-index 71a15c5..00e95cd 100644
---- a/misc/mk_hugefiles.c
-+++ b/misc/mk_hugefiles.c
-@@ -35,6 +35,9 @@ extern int optind;
- #include <sys/ioctl.h>
- #include <sys/types.h>
- #include <sys/stat.h>
-+#ifdef HAVE_SYS_SYSMACROS_H
-+#include <sys/sysmacros.h>
-+#endif
- #include <libgen.h>
- #include <limits.h>
- #include <blkid/blkid.h>
--- 
-2.8.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/ptest.patch b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/ptest.patch
index 879d936..7df0967 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/ptest.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/ptest.patch
@@ -1,3 +1,5 @@
+Upstream-Status: Inappropriate
+
 diff --git a/tests/Makefile.in b/tests/Makefile.in
 index c130f4a..d2ade03 100644
 --- a/tests/Makefile.in
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/reproducible-doc.patch b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/reproducible-doc.patch
new file mode 100644
index 0000000..8e5d1d3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/reproducible-doc.patch
@@ -0,0 +1,21 @@
+
+Support for binary reproducibility.
+When compressing, do not save the original file name and time stamp.
+
+Upstream-Status: Submitted [Theodore Ts'o tytso@mit.edu (maintainer)]
+
+Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
+
+diff --git a/doc/Makefile.in b/doc/Makefile.in
+index 9cb584c..0bee4e4 100644
+--- a/doc/Makefile.in
++++ b/doc/Makefile.in
+@@ -28,7 +28,7 @@ install-doc-libs: libext2fs.info libext2fs.dvi
+ 		$(INSTALL_DATA) $$i $(DESTDIR)$(infodir)/$$i ; \
+ 	done
+ 	$(E) "	GZIP $(infodir)/libext2fs.info*"
+-	-$(Q) gzip -9 $(DESTDIR)$(infodir)/libext2fs.info*
++	-$(Q) gzip -9 -n $(DESTDIR)$(infodir)/libext2fs.info*
+ 
+ uninstall-doc-libs:
+ 	$(RM) -rf $(DESTDIR)$(infodir)/libext2fs.info*
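The added -n is what makes the compressed info pages reproducible: gzip then omits the original file name and timestamp from the .gz header, so nothing build-specific leaks into the output. A quick check (sketch only, reusing the file name from the rule above):

    # Two runs over the same input now produce byte-identical archives,
    # because no per-run name or mtime is embedded in the gzip header.
    gzip -9 -n -c libext2fs.info > one.gz
    gzip -9 -n -c libext2fs.info > two.gz
    cmp one.gz two.gz    # prints nothing: the outputs are identical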
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/run-ptest b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/run-ptest
index e02fc7f..ef10b08 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/run-ptest
+++ b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs/run-ptest
@@ -1,11 +1,10 @@
 #!/bin/sh
 
 cd ./test
-./test_script &>../test.log
-if [ $? -eq 0 ]
-then
-	echo "PASS: e2fsprogs"
-	rm ../test.log
-else
-	echo "FAIL: e2fsprogs"
-fi
+./test_script | sed -u -e '/:[[:space:]]ok/s/^/PASS: /' -e '/:[[:space:]]failed/s/^/FAIL: /' -e '/:[[:space:]]skipped/s/^/SKIP: /'
+rm -rf /var/volatile/tmp/*e2fsprogs*
+rm -f tmp-*
+rm -f *.tmp
+rm -f *.ok
+rm -f *.failed
+rm -f *.log
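Rather than reporting a single PASS/FAIL for the whole suite, the new run-ptest streams test_script output through sed so every e2fsprogs test becomes its own ptest result line. Roughly, with sample lines (the test names here are only examples):

    # Sample of the transformation done by the sed pipeline above.
    printf 'f_zero_group: ok\nf_unsorted_EAs: failed\nj_recover: skipped\n' |
        sed -u -e '/:[[:space:]]ok/s/^/PASS: /' \
               -e '/:[[:space:]]failed/s/^/FAIL: /' \
               -e '/:[[:space:]]skipped/s/^/SKIP: /'
    # PASS: f_zero_group: ok
    # FAIL: f_unsorted_EAs: failed
    # SKIP: j_recover: skipped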
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs_1.43.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs_1.43.4.bb
deleted file mode 100644
index 5216c70..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs_1.43.4.bb
+++ /dev/null
@@ -1,112 +0,0 @@
-require e2fsprogs.inc
-
-SRC_URI += "file://acinclude.m4 \
-            file://remove.ldconfig.call.patch \
-            file://quiet-debugfs.patch \
-            file://run-ptest \
-            file://ptest.patch \
-            file://mkdir.patch \
-            file://Revert-mke2fs-enable-the-metadata_csum-and-64bit-fea.patch \
-            file://e2fsprogs-1.43-sysmacros.patch \
-            file://mkdir_p.patch \
-            file://0001-e2fsck-exit-with-exit-status-0-if-no-errors-were-fix.patch \
-"
-
-SRC_URI_append_class-native = " file://e2fsprogs-fix-missing-check-for-permission-denied.patch"
-
-SRCREV = "3d66c4b20f09f923078c1e6eb9b549865b549674"
-UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>\d+\.\d+(\.\d+)*)$"
-
-EXTRA_OECONF += "--libdir=${base_libdir} --sbindir=${base_sbindir} \
-                --enable-elf-shlibs --disable-libuuid --disable-uuidd \
-                --disable-libblkid --enable-verbose-makecmds"
-
-EXTRA_OECONF_darwin = "--libdir=${base_libdir} --sbindir=${base_sbindir} --enable-bsd-shlibs"
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[fuse] = '--enable-fuse2fs,--disable-fuse2fs,fuse'
-
-do_configure_prepend () {
-	cp ${WORKDIR}/acinclude.m4 ${S}/
-}
-
-do_install () {
-	oe_runmake 'DESTDIR=${D}' install
-	oe_runmake 'DESTDIR=${D}' install-libs
-	# We use blkid from util-linux now so remove from here
-	rm -f ${D}${base_libdir}/libblkid*
-	rm -rf ${D}${includedir}/blkid
-	rm -f ${D}${base_libdir}/pkgconfig/blkid.pc
-	rm -f ${D}${base_sbindir}/blkid
-	rm -f ${D}${base_sbindir}/fsck
-	rm -f ${D}${base_sbindir}/findfs
-
-	# e2initrd_helper and the pkgconfig files belong in libdir
-	if [ ! ${D}${libdir} -ef ${D}${base_libdir} ]; then
-		install -d ${D}${libdir}
-		mv ${D}${base_libdir}/e2initrd_helper ${D}${libdir}
-		mv ${D}${base_libdir}/pkgconfig ${D}${libdir}
-	fi
-
-	oe_multilib_header ext2fs/ext2_types.h
-	install -d ${D}${base_bindir}
-	mv ${D}${bindir}/chattr ${D}${base_bindir}/chattr.e2fsprogs
-
-	install -v -m 755 ${S}/contrib/populate-extfs.sh ${D}${base_sbindir}/
-
-	# Clean host path (build directory) in compile_et, mk_cmds
-	sed -i -e "s,\(ET_DIR=.*\)${S}/lib/et\(.*\),\1${datadir}/et\2,g" ${D}${bindir}/compile_et
-	sed -i -e "s,\(SS_DIR=.*\)${S}/lib/ss\(.*\),\1${datadir}/ss\2,g" ${D}${bindir}/mk_cmds
-}
-
-# Need to find the right mke2fs.conf file
-e2fsprogs_conf_fixup () {
-	for i in mke2fs mkfs.ext2 mkfs.ext3 mkfs.ext4; do
-		create_wrapper ${D}${base_sbindir}/$i MKE2FS_CONFIG=${sysconfdir}/mke2fs.conf
-	done
-}
-
-do_install_append_class-native() {
-	e2fsprogs_conf_fixup
-}
-
-do_install_append_class-nativesdk() {
-	e2fsprogs_conf_fixup
-}
-
-RDEPENDS_e2fsprogs = "e2fsprogs-badblocks"
-RRECOMMENDS_e2fsprogs = "e2fsprogs-mke2fs e2fsprogs-e2fsck"
-
-PACKAGES =+ "e2fsprogs-e2fsck e2fsprogs-mke2fs e2fsprogs-tune2fs e2fsprogs-badblocks e2fsprogs-resize2fs"
-PACKAGES =+ "libcomerr libss libe2p libext2fs"
-
-FILES_e2fsprogs-resize2fs = "${base_sbindir}/resize2fs*"
-FILES_e2fsprogs-e2fsck = "${base_sbindir}/e2fsck ${base_sbindir}/fsck.ext*"
-FILES_e2fsprogs-mke2fs = "${base_sbindir}/mke2fs ${base_sbindir}/mkfs.ext* ${sysconfdir}/mke2fs.conf"
-FILES_e2fsprogs-tune2fs = "${base_sbindir}/tune2fs ${base_sbindir}/e2label"
-FILES_e2fsprogs-badblocks = "${base_sbindir}/badblocks"
-FILES_libcomerr = "${base_libdir}/libcom_err.so.*"
-FILES_libss = "${base_libdir}/libss.so.*"
-FILES_libe2p = "${base_libdir}/libe2p.so.*"
-FILES_libext2fs = "${libdir}/e2initrd_helper ${base_libdir}/libext2fs.so.*"
-FILES_${PN}-dev += "${datadir}/*/*.awk ${datadir}/*/*.sed ${base_libdir}/*.so ${bindir}/compile_et ${bindir}/mk_cmds"
-
-ALTERNATIVE_${PN} = "chattr"
-ALTERNATIVE_PRIORITY = "100"
-ALTERNATIVE_LINK_NAME[chattr] = "${base_bindir}/chattr"
-ALTERNATIVE_TARGET[chattr] = "${base_bindir}/chattr.e2fsprogs"
-
-ALTERNATIVE_${PN}-doc = "fsck.8"
-ALTERNATIVE_LINK_NAME[fsck.8] = "${mandir}/man8/fsck.8"
-
-RDEPENDS_${PN}-ptest += "${PN} ${PN}-tune2fs coreutils procps bash"
-
-do_compile_ptest() {
-	oe_runmake -C ${B}/tests
-}
-
-do_install_ptest() {
-	cp -a ${B}/tests ${D}${PTEST_PATH}/test
-	cp -a ${S}/tests/* ${D}${PTEST_PATH}/test
-	sed -e 's!../e2fsck/e2fsck!e2fsck!g' -i ${D}${PTEST_PATH}/test/*/expect*
-}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs_1.43.5.bb b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs_1.43.5.bb
new file mode 100644
index 0000000..00093cc
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/e2fsprogs/e2fsprogs_1.43.5.bb
@@ -0,0 +1,117 @@
+require e2fsprogs.inc
+
+SRC_URI += "file://acinclude.m4 \
+            file://remove.ldconfig.call.patch \
+            file://quiet-debugfs.patch \
+            file://run-ptest \
+            file://ptest.patch \
+            file://mkdir.patch \
+            file://Revert-mke2fs-enable-the-metadata_csum-and-64bit-fea.patch \
+            file://mkdir_p.patch \
+            file://reproducible-doc.patch \
+            file://0001-misc-create_inode.c-set-dir-s-mode-correctly.patch \
+"
+
+SRC_URI_append_class-native = " file://e2fsprogs-fix-missing-check-for-permission-denied.patch"
+
+SRCREV = "2a13c84b513aa094d1cda727e92d35a89dd777da"
+UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>\d+\.\d+(\.\d+)*)$"
+
+EXTRA_OECONF += "--libdir=${base_libdir} --sbindir=${base_sbindir} \
+                --enable-elf-shlibs --disable-libuuid --disable-uuidd \
+                --disable-libblkid --enable-verbose-makecmds"
+
+EXTRA_OECONF_darwin = "--libdir=${base_libdir} --sbindir=${base_sbindir} --enable-bsd-shlibs"
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[fuse] = '--enable-fuse2fs,--disable-fuse2fs,fuse'
+
+do_configure_prepend () {
+	cp ${WORKDIR}/acinclude.m4 ${S}/
+}
+
+do_install () {
+	oe_runmake 'DESTDIR=${D}' install
+	oe_runmake 'DESTDIR=${D}' install-libs
+	# We use blkid from util-linux now so remove from here
+	rm -f ${D}${base_libdir}/libblkid*
+	rm -rf ${D}${includedir}/blkid
+	rm -f ${D}${base_libdir}/pkgconfig/blkid.pc
+	rm -f ${D}${base_sbindir}/blkid
+	rm -f ${D}${base_sbindir}/fsck
+	rm -f ${D}${base_sbindir}/findfs
+
+	# e2initrd_helper and the pkgconfig files belong in libdir
+	if [ ! ${D}${libdir} -ef ${D}${base_libdir} ]; then
+		install -d ${D}${libdir}
+		mv ${D}${base_libdir}/e2initrd_helper ${D}${libdir}
+		mv ${D}${base_libdir}/pkgconfig ${D}${libdir}
+	fi
+
+	oe_multilib_header ext2fs/ext2_types.h
+	install -d ${D}${base_bindir}
+	mv ${D}${bindir}/chattr ${D}${base_bindir}/chattr.e2fsprogs
+
+	install -v -m 755 ${S}/contrib/populate-extfs.sh ${D}${base_sbindir}/
+
+	# Clean host path (build directory) in compile_et, mk_cmds
+	sed -i -e "s,\(ET_DIR=.*\)${S}/lib/et\(.*\),\1${datadir}/et\2,g" ${D}${bindir}/compile_et
+	sed -i -e "s,\(SS_DIR=.*\)${S}/lib/ss\(.*\),\1${datadir}/ss\2,g" ${D}${bindir}/mk_cmds
+}
+
+# Need to find the right mke2fs.conf file
+e2fsprogs_conf_fixup () {
+	for i in mke2fs mkfs.ext2 mkfs.ext3 mkfs.ext4; do
+		create_wrapper ${D}${base_sbindir}/$i MKE2FS_CONFIG=${sysconfdir}/mke2fs.conf
+	done
+}
+
+do_install_append_class-native() {
+	e2fsprogs_conf_fixup
+}
+
+do_install_append_class-nativesdk() {
+	e2fsprogs_conf_fixup
+}
+
+RDEPENDS_e2fsprogs = "e2fsprogs-badblocks"
+RRECOMMENDS_e2fsprogs = "e2fsprogs-mke2fs e2fsprogs-e2fsck"
+
+PACKAGES =+ "e2fsprogs-e2fsck e2fsprogs-mke2fs e2fsprogs-tune2fs e2fsprogs-badblocks e2fsprogs-resize2fs"
+PACKAGES =+ "libcomerr libss libe2p libext2fs"
+
+FILES_e2fsprogs-resize2fs = "${base_sbindir}/resize2fs*"
+FILES_e2fsprogs-e2fsck = "${base_sbindir}/e2fsck ${base_sbindir}/fsck.ext*"
+FILES_e2fsprogs-mke2fs = "${base_sbindir}/mke2fs ${base_sbindir}/mkfs.ext* ${sysconfdir}/mke2fs.conf"
+FILES_e2fsprogs-tune2fs = "${base_sbindir}/tune2fs ${base_sbindir}/e2label"
+FILES_e2fsprogs-badblocks = "${base_sbindir}/badblocks"
+FILES_libcomerr = "${base_libdir}/libcom_err.so.*"
+FILES_libss = "${base_libdir}/libss.so.*"
+FILES_libe2p = "${base_libdir}/libe2p.so.*"
+FILES_libext2fs = "${libdir}/e2initrd_helper ${base_libdir}/libext2fs.so.*"
+FILES_${PN}-dev += "${datadir}/*/*.awk ${datadir}/*/*.sed ${base_libdir}/*.so ${bindir}/compile_et ${bindir}/mk_cmds"
+
+ALTERNATIVE_${PN} = "chattr"
+ALTERNATIVE_PRIORITY = "100"
+ALTERNATIVE_LINK_NAME[chattr] = "${base_bindir}/chattr"
+ALTERNATIVE_TARGET[chattr] = "${base_bindir}/chattr.e2fsprogs"
+
+ALTERNATIVE_${PN}-doc = "fsck.8"
+ALTERNATIVE_LINK_NAME[fsck.8] = "${mandir}/man8/fsck.8"
+
+RDEPENDS_${PN}-ptest += "${PN} ${PN}-tune2fs coreutils procps bash"
+
+do_compile_ptest() {
+	oe_runmake -C ${B}/tests
+}
+
+do_install_ptest() {
+	cp -R --no-dereference --preserve=mode,links -v ${B}/tests ${D}${PTEST_PATH}/test
+	cp -R --no-dereference --preserve=mode,links -v ${S}/tests/* ${D}${PTEST_PATH}/test
+	sed -e 's!../e2fsck/e2fsck!e2fsck!g' -i ${D}${PTEST_PATH}/test/*/expect*
+
+	# Remove various files
+	find "${D}${PTEST_PATH}" -type f \
+	    \( -name 'Makefile' -o -name 'Makefile.in' -o -name '*.o' -o -name '*.c' -o -name '*.h' \)\
+	    -exec  rm -f {} +
+}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/0001-build-Provide-alternatives-for-glibc-assumptions-hel.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/0001-build-Provide-alternatives-for-glibc-assumptions-hel.patch
deleted file mode 100644
index 020ffa1..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/0001-build-Provide-alternatives-for-glibc-assumptions-hel.patch
+++ /dev/null
@@ -1,1051 +0,0 @@
-From 054fedda5ab9b84160d40d90cb967f2f5822b889 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Thu, 31 Dec 2015 06:35:34 +0000
-Subject: [PATCH] build: Provide alternatives for glibc assumptions helps
- compiling it on musl
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Rebase to 0.68
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- Makefile.am                      |  2 +-
- lib/color.c                      |  3 ++-
- lib/crc32_file.c                 |  1 +
- lib/fixedsizehash.h              |  1 -
- lib/system.h                     | 10 ++++++++++
- lib/xmalloc.c                    |  2 +-
- libasm/asm_end.c                 |  2 +-
- libasm/asm_newscn.c              |  2 +-
- libcpu/i386_gendis.c             |  2 +-
- libcpu/i386_lex.c                |  2 +-
- libcpu/i386_parse.c              |  2 +-
- libdw/Makefile.am                |  4 +++-
- libdw/libdw_alloc.c              |  2 +-
- libdwfl/dwfl_build_id_find_elf.c |  3 ++-
- libdwfl/dwfl_error.c             |  4 +++-
- libdwfl/dwfl_module_getdwarf.c   |  1 +
- libdwfl/find-debuginfo.c         |  2 +-
- libdwfl/libdwfl_crc32_file.c     | 10 ++++++++++
- libdwfl/linux-kernel-modules.c   |  1 +
- libebl/eblopenbackend.c          |  2 +-
- libelf/elf.h                     |  8 ++++++--
- libelf/libelf.h                  |  1 +
- libelf/libelfP.h                 |  1 +
- src/addr2line.c                  |  2 +-
- src/ar.c                         |  2 +-
- src/arlib.c                      |  2 +-
- src/arlib2.c                     |  2 +-
- src/elfcmp.c                     |  2 +-
- src/elflint.c                    |  2 +-
- src/findtextrel.c                |  2 +-
- src/nm.c                         |  2 +-
- src/objdump.c                    |  2 +-
- src/ranlib.c                     |  2 +-
- src/readelf.c                    |  2 +-
- src/size.c                       |  2 +-
- src/stack.c                      |  2 +-
- src/strings.c                    |  2 +-
- src/strip.c                      |  2 +-
- src/unstrip.c                    |  2 +-
- tests/addrscopes.c               |  2 +-
- tests/allregs.c                  |  2 +-
- tests/backtrace-data.c           |  2 +-
- tests/backtrace-dwarf.c          |  2 +-
- tests/backtrace.c                |  2 +-
- tests/buildid.c                  |  2 +-
- tests/debugaltlink.c             |  2 +-
- tests/debuglink.c                |  2 +-
- tests/deleted.c                  |  2 +-
- tests/dwfl-addr-sect.c           |  2 +-
- tests/dwfl-bug-addr-overflow.c   |  2 +-
- tests/dwfl-bug-fd-leak.c         |  2 +-
- tests/dwfl-bug-getmodules.c      |  2 +-
- tests/dwfl-report-elf-align.c    |  2 +-
- tests/dwfllines.c                |  2 +-
- tests/dwflmodtest.c              |  2 +-
- tests/dwflsyms.c                 |  2 +-
- tests/early-offscn.c             |  2 +-
- tests/ecp.c                      |  2 +-
- tests/find-prologues.c           |  2 +-
- tests/funcretval.c               |  2 +-
- tests/funcscopes.c               |  2 +-
- tests/getsrc_die.c               |  2 +-
- tests/line2addr.c                |  2 +-
- tests/low_high_pc.c              |  2 +-
- tests/md5-sha1-test.c            |  2 +-
- tests/rdwrmmap.c                 |  2 +-
- tests/saridx.c                   |  2 +-
- tests/sectiondump.c              |  2 +-
- tests/varlocs.c                  |  2 +-
- tests/vdsosyms.c                 |  2 +-
- 70 files changed, 98 insertions(+), 64 deletions(-)
-
-diff --git a/Makefile.am b/Makefile.am
-index 2ff444e..41f77df 100644
---- a/Makefile.am
-+++ b/Makefile.am
-@@ -28,7 +28,7 @@ pkginclude_HEADERS = version.h
- 
- # Add doc back when we have some real content.
- SUBDIRS = config m4 lib libelf libebl libdwelf libdwfl libdw libcpu libasm \
--	  backends src po tests
-+	  backends po tests
- 
- EXTRA_DIST = elfutils.spec GPG-KEY NOTES CONTRIBUTING \
- 	     COPYING COPYING-GPLV2 COPYING-LGPLV3
-diff --git a/lib/color.c b/lib/color.c
-index fde2d9d..73292ac 100644
---- a/lib/color.c
-+++ b/lib/color.c
-@@ -32,12 +32,13 @@
- #endif
- 
- #include <argp.h>
--#include <error.h>
-+#include <err.h>
- #include <libintl.h>
- #include <stdlib.h>
- #include <string.h>
- #include <unistd.h>
- #include "libeu.h"
-+#include "system.h"
- 
- 
- /* Prototype for option handler.  */
-diff --git a/lib/crc32_file.c b/lib/crc32_file.c
-index a8434d4..57e4298 100644
---- a/lib/crc32_file.c
-+++ b/lib/crc32_file.c
-@@ -35,6 +35,7 @@
- #include <unistd.h>
- #include <sys/stat.h>
- #include <sys/mman.h>
-+#include "system.h"
- 
- int
- crc32_file (int fd, uint32_t *resp)
-diff --git a/lib/fixedsizehash.h b/lib/fixedsizehash.h
-index dac2a5f..43016fc 100644
---- a/lib/fixedsizehash.h
-+++ b/lib/fixedsizehash.h
-@@ -30,7 +30,6 @@
- #include <errno.h>
- #include <stdlib.h>
- #include <string.h>
--#include <sys/cdefs.h>
- 
- #include <system.h>
- 
-diff --git a/lib/system.h b/lib/system.h
-index ccd99d6..0e93e60 100644
---- a/lib/system.h
-+++ b/lib/system.h
-@@ -55,6 +55,16 @@
- #else
- # error "Unknown byte order"
- #endif
-+#ifndef TEMP_FAILURE_RETRY
-+#define TEMP_FAILURE_RETRY(expression) \
-+  (__extension__							      \
-+    ({ long int __result;						      \
-+       do __result = (long int) (expression);				      \
-+       while (__result == -1L && errno == EINTR);			      \
-+       __result; }))
-+#endif
-+
-+#define error(status, errno, ...) err(status, __VA_ARGS__)
- 
- #ifndef MAX
- #define MAX(m, n) ((m) < (n) ? (n) : (m))
-diff --git a/lib/xmalloc.c b/lib/xmalloc.c
-index 0cde384..217b054 100644
---- a/lib/xmalloc.c
-+++ b/lib/xmalloc.c
-@@ -30,7 +30,7 @@
- # include <config.h>
- #endif
- 
--#include <error.h>
-+#include <err.h>
- #include <libintl.h>
- #include <stddef.h>
- #include <stdlib.h>
-diff --git a/libasm/asm_end.c b/libasm/asm_end.c
-index 191a535..bf5ab06 100644
---- a/libasm/asm_end.c
-+++ b/libasm/asm_end.c
-@@ -32,7 +32,7 @@
- #endif
- 
- #include <assert.h>
--#include <error.h>
-+#include <err.h>
- #include <libintl.h>
- #include <stdio.h>
- #include <stdlib.h>
-diff --git a/libasm/asm_newscn.c b/libasm/asm_newscn.c
-index ddbb25d..74a598d 100644
---- a/libasm/asm_newscn.c
-+++ b/libasm/asm_newscn.c
-@@ -32,7 +32,7 @@
- #endif
- 
- #include <assert.h>
--#include <error.h>
-+#include <err.h>
- #include <libintl.h>
- #include <stdlib.h>
- #include <string.h>
-diff --git a/libcpu/i386_gendis.c b/libcpu/i386_gendis.c
-index aae5eae..6d76016 100644
---- a/libcpu/i386_gendis.c
-+++ b/libcpu/i386_gendis.c
-@@ -31,7 +31,7 @@
- # include <config.h>
- #endif
- 
--#include <error.h>
-+#include <err.h>
- #include <errno.h>
- #include <stdio.h>
- #include <stdlib.h>
-diff --git a/libcpu/i386_lex.c b/libcpu/i386_lex.c
-index b670608..b842c25 100644
---- a/libcpu/i386_lex.c
-+++ b/libcpu/i386_lex.c
-@@ -592,7 +592,7 @@ char *i386_text;
- #endif
- 
- #include <ctype.h>
--#include <error.h>
-+#include <err.h>
- #include <libintl.h>
- 
- #include <libeu.h>
-diff --git a/libcpu/i386_parse.c b/libcpu/i386_parse.c
-index 724addf..5b67802 100644
---- a/libcpu/i386_parse.c
-+++ b/libcpu/i386_parse.c
-@@ -107,7 +107,7 @@
- #include <assert.h>
- #include <ctype.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <inttypes.h>
- #include <libintl.h>
- #include <math.h>
-diff --git a/libdw/Makefile.am b/libdw/Makefile.am
-index 082d96c..51cbea0 100644
---- a/libdw/Makefile.am
-+++ b/libdw/Makefile.am
-@@ -102,6 +102,8 @@ endif
- libdw_pic_a_SOURCES =
- am_libdw_pic_a_OBJECTS = $(libdw_a_SOURCES:.c=.os)
- 
-+fts_LDADD = -lfts
-+
- libdw_so_SOURCES =
- libdw.so$(EXEEXT): $(srcdir)/libdw.map libdw_pic.a ../libdwelf/libdwelf_pic.a \
- 	  ../libdwfl/libdwfl_pic.a ../libebl/libebl.a \
-@@ -112,7 +114,7 @@ libdw.so$(EXEEXT): $(srcdir)/libdw.map libdw_pic.a ../libdwelf/libdwelf_pic.a \
- 		-Wl,--enable-new-dtags,-rpath,$(pkglibdir) \
- 		-Wl,--version-script,$<,--no-undefined \
- 		-Wl,--whole-archive $(filter-out $<,$^) -Wl,--no-whole-archive\
--		-ldl -lz $(argp_LDADD) $(zip_LIBS)
-+		-ldl -lz $(argp_LDADD) $(zip_LIBS) $(fts_LDADD)
- 	@$(textrel_check)
- 	$(AM_V_at)ln -fs $@ $@.$(VERSION)
- 
-diff --git a/libdw/libdw_alloc.c b/libdw/libdw_alloc.c
-index 28a8cf6..29aeb3f 100644
---- a/libdw/libdw_alloc.c
-+++ b/libdw/libdw_alloc.c
-@@ -31,7 +31,7 @@
- # include <config.h>
- #endif
- 
--#include <error.h>
-+#include <err.h>
- #include <errno.h>
- #include <stdlib.h>
- #include "libdwP.h"
-diff --git a/libdwfl/dwfl_build_id_find_elf.c b/libdwfl/dwfl_build_id_find_elf.c
-index 903e193..b00d10c 100644
---- a/libdwfl/dwfl_build_id_find_elf.c
-+++ b/libdwfl/dwfl_build_id_find_elf.c
-@@ -27,6 +27,7 @@
-    not, see <http://www.gnu.org/licenses/>.  */
- 
- #include "libdwflP.h"
-+#include "system.h"
- #include <inttypes.h>
- #include <fcntl.h>
- #include <unistd.h>
-@@ -94,7 +95,7 @@ __libdwfl_open_by_build_id (Dwfl_Module *mod, bool debug, char **file_name,
- 	{
- 	  if (*file_name != NULL)
- 	    free (*file_name);
--	  *file_name = canonicalize_file_name (name);
-+	  *file_name = realpath (name, NULL);
- 	  if (*file_name == NULL)
- 	    {
- 	      *file_name = name;
-diff --git a/libdwfl/dwfl_error.c b/libdwfl/dwfl_error.c
-index 7bcf61c..c345797 100644
---- a/libdwfl/dwfl_error.c
-+++ b/libdwfl/dwfl_error.c
-@@ -140,6 +140,7 @@ __libdwfl_seterrno (Dwfl_Error error)
- const char *
- dwfl_errmsg (int error)
- {
-+  static __thread char s[64] = "";
-   if (error == 0 || error == -1)
-     {
-       int last_error = global_error;
-@@ -154,7 +155,8 @@ dwfl_errmsg (int error)
-   switch (error &~ 0xffff)
-     {
-     case OTHER_ERROR (ERRNO):
--      return strerror_r (error & 0xffff, "bad", 0);
-+      strerror_r (error & 0xffff, s, sizeof(s));
-+      return s;
-     case OTHER_ERROR (LIBELF):
-       return elf_errmsg (error & 0xffff);
-     case OTHER_ERROR (LIBDW):
-diff --git a/libdwfl/dwfl_module_getdwarf.c b/libdwfl/dwfl_module_getdwarf.c
-index 0e8810b..82ad665 100644
---- a/libdwfl/dwfl_module_getdwarf.c
-+++ b/libdwfl/dwfl_module_getdwarf.c
-@@ -31,6 +31,7 @@
- #include <fcntl.h>
- #include <string.h>
- #include <unistd.h>
-+#include "system.h"
- #include "../libdw/libdwP.h"	/* DWARF_E_* values are here.  */
- #include "../libelf/libelfP.h"
- 
-diff --git a/libdwfl/find-debuginfo.c b/libdwfl/find-debuginfo.c
-index 80515db..80b0148 100644
---- a/libdwfl/find-debuginfo.c
-+++ b/libdwfl/find-debuginfo.c
-@@ -385,7 +385,7 @@ dwfl_standard_find_debuginfo (Dwfl_Module *mod,
-       /* If FILE_NAME is a symlink, the debug file might be associated
- 	 with the symlink target name instead.  */
- 
--      char *canon = canonicalize_file_name (file_name);
-+      char *canon = realpath (file_name, NULL);
-       if (canon != NULL && strcmp (file_name, canon))
- 	fd = find_debuginfo_in_path (mod, canon,
- 				     debuglink_file, debuglink_crc,
-diff --git a/libdwfl/libdwfl_crc32_file.c b/libdwfl/libdwfl_crc32_file.c
-index 6b6b7d3..debc4a4 100644
---- a/libdwfl/libdwfl_crc32_file.c
-+++ b/libdwfl/libdwfl_crc32_file.c
-@@ -31,6 +31,16 @@
- 
- #define crc32_file attribute_hidden __libdwfl_crc32_file
- #define crc32 __libdwfl_crc32
-+
-+#ifndef TEMP_FAILURE_RETRY
-+#define TEMP_FAILURE_RETRY(expression) \
-+  (__extension__							      \
-+    ({ long int __result;						      \
-+       do __result = (long int) (expression);				      \
-+       while (__result == -1L && errno == EINTR);			      \
-+       __result; }))
-+#endif
-+
- #define LIB_SYSTEM_H	1
- #include <libdwflP.h>
- #include "../lib/crc32_file.c"
-diff --git a/libdwfl/linux-kernel-modules.c b/libdwfl/linux-kernel-modules.c
-index 9cd8ea9..4dbf4c5 100644
---- a/libdwfl/linux-kernel-modules.c
-+++ b/libdwfl/linux-kernel-modules.c
-@@ -36,6 +36,7 @@
- #include <config.h>
- 
- #include "libdwflP.h"
-+#include "system.h"
- #include <inttypes.h>
- #include <errno.h>
- #include <stdio.h>
-diff --git a/libebl/eblopenbackend.c b/libebl/eblopenbackend.c
-index 34d439a..56d2345 100644
---- a/libebl/eblopenbackend.c
-+++ b/libebl/eblopenbackend.c
-@@ -32,7 +32,7 @@
- 
- #include <assert.h>
- #include <dlfcn.h>
--#include <error.h>
-+#include <err.h>
- #include <libelfP.h>
- #include <dwarf.h>
- #include <stdlib.h>
-diff --git a/libelf/elf.h b/libelf/elf.h
-index 74654d6..81eee8b 100644
---- a/libelf/elf.h
-+++ b/libelf/elf.h
-@@ -21,7 +21,9 @@
- 
- #include <features.h>
- 
--__BEGIN_DECLS
-+#ifdef __cplusplus
-+extern "C" {
-+#endif
- 
- /* Standard ELF types.  */
- 
-@@ -3704,6 +3706,8 @@ enum
- #define R_BPF_NONE		0	/* No reloc */
- #define R_BPF_MAP_FD		1	/* Map fd to pointer */
- 
--__END_DECLS
-+#ifdef __cplusplus
-+}
-+#endif
- 
- #endif	/* elf.h */
-diff --git a/libelf/libelf.h b/libelf/libelf.h
-index c0d6389..38a68fd 100644
---- a/libelf/libelf.h
-+++ b/libelf/libelf.h
-@@ -29,6 +29,7 @@
- #ifndef _LIBELF_H
- #define _LIBELF_H 1
- 
-+#include <fcntl.h>
- #include <stdint.h>
- #include <sys/types.h>
- 
-diff --git a/libelf/libelfP.h b/libelf/libelfP.h
-index 4459982..1296f20 100644
---- a/libelf/libelfP.h
-+++ b/libelf/libelfP.h
-@@ -36,6 +36,7 @@
- 
- #include <ar.h>
- #include <gelf.h>
-+#include <libelf.h>
- 
- #include <errno.h>
- #include <stdbool.h>
-diff --git a/src/addr2line.c b/src/addr2line.c
-index 0222088..cd6a9a6 100644
---- a/src/addr2line.c
-+++ b/src/addr2line.c
-@@ -23,7 +23,7 @@
- #include <argp.h>
- #include <assert.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <inttypes.h>
- #include <libdwfl.h>
-diff --git a/src/ar.c b/src/ar.c
-index f2f322b..6e70031 100644
---- a/src/ar.c
-+++ b/src/ar.c
-@@ -22,7 +22,7 @@
- 
- #include <argp.h>
- #include <assert.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <libintl.h>
-diff --git a/src/arlib.c b/src/arlib.c
-index e0839aa..1143658 100644
---- a/src/arlib.c
-+++ b/src/arlib.c
-@@ -21,7 +21,7 @@
- #endif
- 
- #include <assert.h>
--#include <error.h>
-+#include <err.h>
- #include <gelf.h>
- #include <inttypes.h>
- #include <libintl.h>
-diff --git a/src/arlib2.c b/src/arlib2.c
-index 553fc57..46443d0 100644
---- a/src/arlib2.c
-+++ b/src/arlib2.c
-@@ -20,7 +20,7 @@
- # include <config.h>
- #endif
- 
--#include <error.h>
-+#include <err.h>
- #include <libintl.h>
- #include <limits.h>
- #include <string.h>
-diff --git a/src/elfcmp.c b/src/elfcmp.c
-index 401ab31..873d253 100644
---- a/src/elfcmp.c
-+++ b/src/elfcmp.c
-@@ -23,7 +23,7 @@
- #include <argp.h>
- #include <assert.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <locale.h>
- #include <libintl.h>
-diff --git a/src/elflint.c b/src/elflint.c
-index 7d3f227..074d21c 100644
---- a/src/elflint.c
-+++ b/src/elflint.c
-@@ -24,7 +24,7 @@
- #include <assert.h>
- #include <byteswap.h>
- #include <endian.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <inttypes.h>
-diff --git a/src/findtextrel.c b/src/findtextrel.c
-index dc41502..325888c 100644
---- a/src/findtextrel.c
-+++ b/src/findtextrel.c
-@@ -23,7 +23,7 @@
- #include <argp.h>
- #include <assert.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <libdw.h>
-diff --git a/src/nm.c b/src/nm.c
-index c54e96f..9e031d9 100644
---- a/src/nm.c
-+++ b/src/nm.c
-@@ -26,7 +26,7 @@
- #include <ctype.h>
- #include <dwarf.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <inttypes.h>
-diff --git a/src/objdump.c b/src/objdump.c
-index fff4b81..4b1f966 100644
---- a/src/objdump.c
-+++ b/src/objdump.c
-@@ -21,7 +21,7 @@
- #endif
- 
- #include <argp.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <inttypes.h>
- #include <libintl.h>
-diff --git a/src/ranlib.c b/src/ranlib.c
-index 41a3bcf..0c7da2c 100644
---- a/src/ranlib.c
-+++ b/src/ranlib.c
-@@ -24,7 +24,7 @@
- #include <argp.h>
- #include <assert.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <libintl.h>
-diff --git a/src/readelf.c b/src/readelf.c
-index d18a4b7..a6cfb35 100644
---- a/src/readelf.c
-+++ b/src/readelf.c
-@@ -25,7 +25,7 @@
- #include <ctype.h>
- #include <dwarf.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <inttypes.h>
-diff --git a/src/size.c b/src/size.c
-index de0d791..4639d42 100644
---- a/src/size.c
-+++ b/src/size.c
-@@ -21,7 +21,7 @@
- #endif
- 
- #include <argp.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <inttypes.h>
-diff --git a/src/stack.c b/src/stack.c
-index a5a7beb..4c075bc 100644
---- a/src/stack.c
-+++ b/src/stack.c
-@@ -18,7 +18,7 @@
- #include <config.h>
- #include <assert.h>
- #include <argp.h>
--#include <error.h>
-+#include <err.h>
- #include <stdlib.h>
- #include <inttypes.h>
- #include <stdio.h>
-diff --git a/src/strings.c b/src/strings.c
-index 49aab8b..09d5b1c 100644
---- a/src/strings.c
-+++ b/src/strings.c
-@@ -25,7 +25,7 @@
- #include <ctype.h>
- #include <endian.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <inttypes.h>
-diff --git a/src/strip.c b/src/strip.c
-index a875ddf..fd76f7f 100644
---- a/src/strip.c
-+++ b/src/strip.c
-@@ -24,7 +24,7 @@
- #include <assert.h>
- #include <byteswap.h>
- #include <endian.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <libelf.h>
-diff --git a/src/unstrip.c b/src/unstrip.c
-index d838ae9..0108272 100644
---- a/src/unstrip.c
-+++ b/src/unstrip.c
-@@ -31,7 +31,7 @@
- #include <argp.h>
- #include <assert.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <fnmatch.h>
- #include <libintl.h>
-diff --git a/tests/addrscopes.c b/tests/addrscopes.c
-index 791569f..54f4311 100644
---- a/tests/addrscopes.c
-+++ b/tests/addrscopes.c
-@@ -25,7 +25,7 @@
- #include <stdio_ext.h>
- #include <locale.h>
- #include <stdlib.h>
--#include <error.h>
-+#include <err.h>
- #include <string.h>
- 
- 
-diff --git a/tests/allregs.c b/tests/allregs.c
-index 286f7e3..c9de089 100644
---- a/tests/allregs.c
-+++ b/tests/allregs.c
-@@ -21,7 +21,7 @@
- #include <stdio.h>
- #include <stdlib.h>
- #include <string.h>
--#include <error.h>
-+#include <err.h>
- #include <locale.h>
- #include <argp.h>
- #include <assert.h>
-diff --git a/tests/backtrace-data.c b/tests/backtrace-data.c
-index b7158da..354fa6a 100644
---- a/tests/backtrace-data.c
-+++ b/tests/backtrace-data.c
-@@ -27,7 +27,7 @@
- #include <dirent.h>
- #include <stdlib.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <unistd.h>
- #include <dwarf.h>
- #if defined(__x86_64__) && defined(__linux__)
-diff --git a/tests/backtrace-dwarf.c b/tests/backtrace-dwarf.c
-index a644c8a..b8cbe27 100644
---- a/tests/backtrace-dwarf.c
-+++ b/tests/backtrace-dwarf.c
-@@ -22,7 +22,7 @@
- #include <stdio_ext.h>
- #include <locale.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <unistd.h>
- #include <sys/ptrace.h>
- #include <sys/types.h>
-diff --git a/tests/backtrace.c b/tests/backtrace.c
-index 1ff6353..47e3f7b 100644
---- a/tests/backtrace.c
-+++ b/tests/backtrace.c
-@@ -24,7 +24,7 @@
- #include <dirent.h>
- #include <stdlib.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <unistd.h>
- #include <dwarf.h>
- #ifdef __linux__
-diff --git a/tests/buildid.c b/tests/buildid.c
-index 87c1877..2953e6b 100644
---- a/tests/buildid.c
-+++ b/tests/buildid.c
-@@ -23,7 +23,7 @@
- #include ELFUTILS_HEADER(elf)
- #include ELFUTILS_HEADER(dwelf)
- #include <stdio.h>
--#include <error.h>
-+#include <err.h>
- #include <string.h>
- #include <stdlib.h>
- #include <sys/types.h>
-diff --git a/tests/debugaltlink.c b/tests/debugaltlink.c
-index 6d97d50..ee7e559 100644
---- a/tests/debugaltlink.c
-+++ b/tests/debugaltlink.c
-@@ -23,7 +23,7 @@
- #include ELFUTILS_HEADER(dw)
- #include ELFUTILS_HEADER(dwelf)
- #include <stdio.h>
--#include <error.h>
-+#include <err.h>
- #include <string.h>
- #include <stdlib.h>
- #include <sys/types.h>
-diff --git a/tests/debuglink.c b/tests/debuglink.c
-index 935d102..741cb81 100644
---- a/tests/debuglink.c
-+++ b/tests/debuglink.c
-@@ -21,7 +21,7 @@
- #include <errno.h>
- #include ELFUTILS_HEADER(dwelf)
- #include <stdio.h>
--#include <error.h>
-+#include <err.h>
- #include <string.h>
- #include <stdlib.h>
- #include <sys/types.h>
-diff --git a/tests/deleted.c b/tests/deleted.c
-index 6be35bc..0190711 100644
---- a/tests/deleted.c
-+++ b/tests/deleted.c
-@@ -21,7 +21,7 @@
- #include <unistd.h>
- #include <assert.h>
- #include <stdio.h>
--#include <error.h>
-+#include <err.h>
- #include <errno.h>
- #ifdef __linux__
- #include <sys/prctl.h>
-diff --git a/tests/dwfl-addr-sect.c b/tests/dwfl-addr-sect.c
-index 21e470a..1ea1e3b 100644
---- a/tests/dwfl-addr-sect.c
-+++ b/tests/dwfl-addr-sect.c
-@@ -23,7 +23,7 @@
- #include <stdio_ext.h>
- #include <stdlib.h>
- #include <string.h>
--#include <error.h>
-+#include <err.h>
- #include <locale.h>
- #include <argp.h>
- #include ELFUTILS_HEADER(dwfl)
-diff --git a/tests/dwfl-bug-addr-overflow.c b/tests/dwfl-bug-addr-overflow.c
-index aa8030e..02c8bef 100644
---- a/tests/dwfl-bug-addr-overflow.c
-+++ b/tests/dwfl-bug-addr-overflow.c
-@@ -20,7 +20,7 @@
- #include <inttypes.h>
- #include <stdio.h>
- #include <stdio_ext.h>
--#include <error.h>
-+#include <err.h>
- #include <locale.h>
- #include ELFUTILS_HEADER(dwfl)
- 
-diff --git a/tests/dwfl-bug-fd-leak.c b/tests/dwfl-bug-fd-leak.c
-index 689cdd7..5973da3 100644
---- a/tests/dwfl-bug-fd-leak.c
-+++ b/tests/dwfl-bug-fd-leak.c
-@@ -24,7 +24,7 @@
- #include <dirent.h>
- #include <stdlib.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <unistd.h>
- #include <dwarf.h>
- 
-diff --git a/tests/dwfl-bug-getmodules.c b/tests/dwfl-bug-getmodules.c
-index 1ee989f..fd62e65 100644
---- a/tests/dwfl-bug-getmodules.c
-+++ b/tests/dwfl-bug-getmodules.c
-@@ -18,7 +18,7 @@
- #include <config.h>
- #include ELFUTILS_HEADER(dwfl)
- 
--#include <error.h>
-+#include <err.h>
- 
- static const Dwfl_Callbacks callbacks =
-   {
-diff --git a/tests/dwfl-report-elf-align.c b/tests/dwfl-report-elf-align.c
-index a4e97d3..f471587 100644
---- a/tests/dwfl-report-elf-align.c
-+++ b/tests/dwfl-report-elf-align.c
-@@ -20,7 +20,7 @@
- #include <inttypes.h>
- #include <stdio.h>
- #include <stdio_ext.h>
--#include <error.h>
-+#include <err.h>
- #include <locale.h>
- #include <string.h>
- #include <stdlib.h>
-diff --git a/tests/dwfllines.c b/tests/dwfllines.c
-index 90379dd..cbdf6c4 100644
---- a/tests/dwfllines.c
-+++ b/tests/dwfllines.c
-@@ -27,7 +27,7 @@
- #include <stdio.h>
- #include <stdlib.h>
- #include <string.h>
--#include <error.h>
-+#include <err.h>
- 
- int
- main (int argc, char *argv[])
-diff --git a/tests/dwflmodtest.c b/tests/dwflmodtest.c
-index 0027f96..e68d3bc 100644
---- a/tests/dwflmodtest.c
-+++ b/tests/dwflmodtest.c
-@@ -23,7 +23,7 @@
- #include <stdio_ext.h>
- #include <stdlib.h>
- #include <string.h>
--#include <error.h>
-+#include <err.h>
- #include <locale.h>
- #include <argp.h>
- #include ELFUTILS_HEADER(dwfl)
-diff --git a/tests/dwflsyms.c b/tests/dwflsyms.c
-index 49ac334..cf07830 100644
---- a/tests/dwflsyms.c
-+++ b/tests/dwflsyms.c
-@@ -25,7 +25,7 @@
- #include <stdio.h>
- #include <stdio_ext.h>
- #include <stdlib.h>
--#include <error.h>
-+#include <err.h>
- #include <string.h>
- 
- static const char *
-diff --git a/tests/early-offscn.c b/tests/early-offscn.c
-index 924cb9e..6f60d5a 100644
---- a/tests/early-offscn.c
-+++ b/tests/early-offscn.c
-@@ -19,7 +19,7 @@
- #endif
- 
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <stdio.h>
-diff --git a/tests/ecp.c b/tests/ecp.c
-index 38a6859..743cea5 100644
---- a/tests/ecp.c
-+++ b/tests/ecp.c
-@@ -20,7 +20,7 @@
- #endif
- 
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <stdlib.h>
-diff --git a/tests/find-prologues.c b/tests/find-prologues.c
-index ba8ae37..76f5f04 100644
---- a/tests/find-prologues.c
-+++ b/tests/find-prologues.c
-@@ -25,7 +25,7 @@
- #include <stdio_ext.h>
- #include <locale.h>
- #include <stdlib.h>
--#include <error.h>
-+#include <err.h>
- #include <string.h>
- #include <fnmatch.h>
- 
-diff --git a/tests/funcretval.c b/tests/funcretval.c
-index 8d19d11..c8aaa93 100644
---- a/tests/funcretval.c
-+++ b/tests/funcretval.c
-@@ -25,7 +25,7 @@
- #include <stdio_ext.h>
- #include <locale.h>
- #include <stdlib.h>
--#include <error.h>
-+#include <err.h>
- #include <string.h>
- #include <fnmatch.h>
- 
-diff --git a/tests/funcscopes.c b/tests/funcscopes.c
-index 9c90185..dbccb89 100644
---- a/tests/funcscopes.c
-+++ b/tests/funcscopes.c
-@@ -25,7 +25,7 @@
- #include <stdio_ext.h>
- #include <locale.h>
- #include <stdlib.h>
--#include <error.h>
-+#include <err.h>
- #include <string.h>
- #include <fnmatch.h>
- 
-diff --git a/tests/getsrc_die.c b/tests/getsrc_die.c
-index 055aede..9c394dd 100644
---- a/tests/getsrc_die.c
-+++ b/tests/getsrc_die.c
-@@ -19,7 +19,7 @@
- #endif
- 
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <inttypes.h>
- #include <libelf.h>
-diff --git a/tests/line2addr.c b/tests/line2addr.c
-index e0d65d3..9bf0023 100644
---- a/tests/line2addr.c
-+++ b/tests/line2addr.c
-@@ -26,7 +26,7 @@
- #include <locale.h>
- #include <stdlib.h>
- #include <string.h>
--#include <error.h>
-+#include <err.h>
- 
- 
- static void
-diff --git a/tests/low_high_pc.c b/tests/low_high_pc.c
-index d0f4302..8da4fbd 100644
---- a/tests/low_high_pc.c
-+++ b/tests/low_high_pc.c
-@@ -25,7 +25,7 @@
- #include <stdio_ext.h>
- #include <locale.h>
- #include <stdlib.h>
--#include <error.h>
-+#include <err.h>
- #include <string.h>
- #include <fnmatch.h>
- 
-diff --git a/tests/md5-sha1-test.c b/tests/md5-sha1-test.c
-index d50355e..3c41f40 100644
---- a/tests/md5-sha1-test.c
-+++ b/tests/md5-sha1-test.c
-@@ -19,7 +19,7 @@
- #endif
- 
- #include <string.h>
--#include <error.h>
-+#include <err.h>
- 
- #include "md5.h"
- #include "sha1.h"
-diff --git a/tests/rdwrmmap.c b/tests/rdwrmmap.c
-index 6f027df..1ce5e6e 100644
---- a/tests/rdwrmmap.c
-+++ b/tests/rdwrmmap.c
-@@ -19,7 +19,7 @@
- #endif
- 
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <stdio.h>
- #include <fcntl.h>
- #include <unistd.h>
-diff --git a/tests/saridx.c b/tests/saridx.c
-index 8a450d8..b387801 100644
---- a/tests/saridx.c
-+++ b/tests/saridx.c
-@@ -17,7 +17,7 @@
- 
- #include <config.h>
- 
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <stdio.h>
-diff --git a/tests/sectiondump.c b/tests/sectiondump.c
-index 3033fed..8e888db 100644
---- a/tests/sectiondump.c
-+++ b/tests/sectiondump.c
-@@ -18,7 +18,7 @@
- #include <config.h>
- 
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <fcntl.h>
- #include <gelf.h>
- #include <inttypes.h>
-diff --git a/tests/varlocs.c b/tests/varlocs.c
-index c3fba89..e043ea2 100644
---- a/tests/varlocs.c
-+++ b/tests/varlocs.c
-@@ -25,7 +25,7 @@
- #include <dwarf.h>
- #include <stdio.h>
- #include <stdlib.h>
--#include <error.h>
-+#include <err.h>
- #include <string.h>
- #include <sys/types.h>
- #include <sys/stat.h>
-diff --git a/tests/vdsosyms.c b/tests/vdsosyms.c
-index b876c10..afb2823 100644
---- a/tests/vdsosyms.c
-+++ b/tests/vdsosyms.c
-@@ -18,7 +18,7 @@
- #include <config.h>
- #include <assert.h>
- #include <errno.h>
--#include <error.h>
-+#include <err.h>
- #include <inttypes.h>
- #include <stdio.h>
- #include <string.h>
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/0001-elf_getarsym-Silence-Werror-maybe-uninitialized-fals.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/0001-elf_getarsym-Silence-Werror-maybe-uninitialized-fals.patch
deleted file mode 100644
index 3754c1c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/0001-elf_getarsym-Silence-Werror-maybe-uninitialized-fals.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-From 668accf322fd7185e273bfd50b84320e71d9de5a Mon Sep 17 00:00:00 2001
-From: Martin Jansa <Martin.Jansa@gmail.com>
-Date: Fri, 10 Apr 2015 00:29:18 +0200
-Subject: [PATCH] elf_getarsym: Silence -Werror=maybe-uninitialized false
- positive
-
-Upstream-Status: Pending
-Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
----
- libelf/elf_getarsym.c | 9 +++++++--
- 1 file changed, 7 insertions(+), 2 deletions(-)
-
-diff --git a/libelf/elf_getarsym.c b/libelf/elf_getarsym.c
-index d0bb28a..08954d2 100644
---- a/libelf/elf_getarsym.c
-+++ b/libelf/elf_getarsym.c
-@@ -165,8 +165,13 @@ elf_getarsym (elf, ptr)
-       int w = index64_p ? 8 : 4;
- 
-       /* We have an archive.  The first word in there is the number of
--	 entries in the table.  */
--      uint64_t n;
-+	 entries in the table.
-+	 Set to SIZE_MAX just to silence -Werror=maybe-uninitialized
-+	 elf_getarsym.c:290:9: error: 'n' may be used uninitialized in this function
-+	 The read_number_entries function doesn't initialize n only when returning
-+	 -1 which in turn ensures to jump over usage of this uninitialized variable.
-+	 */
-+      uint64_t n = SIZE_MAX;
-       size_t off = elf->start_offset + SARMAG + sizeof (struct ar_hdr);
-       if (read_number_entries (&n, elf, &off, index64_p) < 0)
- 	{
--- 
-2.3.5
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/0001-fix-a-stack-usage-warning.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/0001-fix-a-stack-usage-warning.patch
deleted file mode 100644
index 6923bf7..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/0001-fix-a-stack-usage-warning.patch
+++ /dev/null
@@ -1,28 +0,0 @@
-[PATCH] fix a stack-usage warning
-
-Upstream-Status: Pending
-
-not use a variable to as a array size, otherwise the warning to error that
-stack usage might be unbounded [-Werror=stack-usage=] will happen
-
-Signed-off-by: Roy Li <rongqing.li@windriver.com>
----
- backends/ppc_initreg.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/backends/ppc_initreg.c b/backends/ppc_initreg.c
-index 64f5379..52dde3e 100644
---- a/backends/ppc_initreg.c
-+++ b/backends/ppc_initreg.c
-@@ -93,7 +93,7 @@ ppc_set_initial_registers_tid (pid_t tid __attribute__ ((unused)),
- 	return false;
-     }
-   const size_t gprs = sizeof (user_regs.r.gpr) / sizeof (*user_regs.r.gpr);
--  Dwarf_Word dwarf_regs[gprs];
-+  Dwarf_Word dwarf_regs[sizeof (user_regs.r.gpr) / sizeof (*user_regs.r.gpr)];
-   for (unsigned gpr = 0; gpr < gprs; gpr++)
-     dwarf_regs[gpr] = user_regs.r.gpr[gpr];
-   if (! setfunc (0, gprs, dwarf_regs, arg))
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/0001-remove-the-unneed-checking.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/0001-remove-the-unneed-checking.patch
deleted file mode 100644
index 5be92d7..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/0001-remove-the-unneed-checking.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-Disable the test to convert euc-jp
-
-Remove the test "Test against HP-UX 11.11 bug:
-No converter from EUC-JP to UTF-8 is provided"
-since we don't support HP-UX and if the euc-jp is not
-installed on the host, the dependence will be built without
-iconv support and will cause guild-native building fail.
-
-Upstream-Status: Inappropriate [OE specific]
-
-Signed-off-by: Roy Li <rongqing.li@windriver.com>
----
- m4/iconv.m4 | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/m4/iconv.m4 b/m4/iconv.m4
-index a503646..299f1eb 100644
---- a/m4/iconv.m4
-+++ b/m4/iconv.m4
-@@ -159,6 +159,7 @@ int main ()
-       }
-   }
- #endif
-+#if 0
-   /* Test against HP-UX 11.11 bug: No converter from EUC-JP to UTF-8 is
-      provided.  */
-   if (/* Try standardized names.  */
-@@ -170,6 +171,7 @@ int main ()
-       /* Try HP-UX names.  */
-       && iconv_open ("utf8", "eucJP") == (iconv_t)(-1))
-     result |= 16;
-+#endif
-   return result;
- }]])],
-         [am_cv_func_iconv_works=yes],
--- 
-2.0.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/Fix_one_GCC7_warning.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/Fix_one_GCC7_warning.patch
deleted file mode 100644
index d88f4eb..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/Fix_one_GCC7_warning.patch
+++ /dev/null
@@ -1,44 +0,0 @@
-From 93c51144c3f664d4e9709da75a1d0fa00ea0fe95 Mon Sep 17 00:00:00 2001
-From: Mark Wielaard <mark@klomp.org>
-Date: Sun, 12 Feb 2017 21:51:34 +0100
-Subject: [PATCH] libasm: Fix one GCC7 -Wformat-truncation=2 warning.
-
-Make sure that if we have really lots of labels the tempsym doesn't get
-truncated because it is too small to hold the whole name.
-
-This doesn't enable -Wformat-truncation=2 or fix other "issues" pointed
-out by enabling this warning because there are currently some issues
-with it. https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79448
-
-Signed-off-by: Mark Wielaard <mark@klomp.org>
-
-Upstream-Status: Backport (https://sourceware.org/git/?p=elfutils.git;a=commit;h=93c51144c3f664d4e9709da75a1d0fa00ea0fe95)
-Signed-off-by: Joshua Lock <joshua.g.lock@intel.com>
-
----
- libasm/ChangeLog    | 6 +++++-
- libasm/asm_newsym.c | 6 ++++--
- 2 files changed, 9 insertions(+), 3 deletions(-)
-
-Index: elfutils-0.168/libasm/asm_newsym.c
-===================================================================
---- elfutils-0.168.orig/libasm/asm_newsym.c
-+++ elfutils-0.168/libasm/asm_newsym.c
-@@ -1,5 +1,5 @@
- /* Define new symbol for current position in given section.
--   Copyright (C) 2002, 2005, 2016 Red Hat, Inc.
-+   Copyright (C) 2002, 2005, 2016, 2017 Red Hat, Inc.
-    This file is part of elfutils.
-    Written by Ulrich Drepper <drepper@redhat.com>, 2002.
- 
-@@ -44,7 +44,9 @@ AsmSym_t *
- asm_newsym (AsmScn_t *asmscn, const char *name, GElf_Xword size,
- 	    int type, int binding)
- {
--#define TEMPSYMLEN 10
-+/* We don't really expect labels with many digits, but in theory it could
-+   be 10 digits (plus ".L" and a zero terminator).  */
-+#define TEMPSYMLEN 13
-   char tempsym[TEMPSYMLEN];
-   AsmSym_t *result;
- 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/aarch64_uio.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/aarch64_uio.patch
deleted file mode 100644
index 38dc57b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/aarch64_uio.patch
+++ /dev/null
@@ -1,47 +0,0 @@
-Fix build on aarch64/musl
-
-Errors
-
-invalid operands to binary & (have 'long double' and 'unsigned int')
-
-error: redefinition
- of 'struct iovec'
- struct iovec { void *iov_base; size_t iov_len; };
-        ^
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Index: elfutils-0.163/backends/aarch64_initreg.c
-===================================================================
---- elfutils-0.163.orig/backends/aarch64_initreg.c
-+++ elfutils-0.163/backends/aarch64_initreg.c
-@@ -33,7 +33,7 @@
- #include "system.h"
- #include <assert.h>
- #ifdef __aarch64__
--# include <linux/uio.h>
-+# include <sys/uio.h>
- # include <sys/user.h>
- # include <sys/ptrace.h>
- /* Deal with old glibc defining user_pt_regs instead of user_regs_struct.  */
-@@ -82,7 +82,7 @@ aarch64_set_initial_registers_tid (pid_t
- 
-   Dwarf_Word dwarf_fregs[32];
-   for (int r = 0; r < 32; r++)
--    dwarf_fregs[r] = fregs.vregs[r] & 0xFFFFFFFF;
-+    dwarf_fregs[r] = (unsigned int)fregs.vregs[r] & 0xFFFFFFFF;
- 
-   if (! setfunc (64, 32, dwarf_fregs, arg))
-     return false;
-Index: elfutils-0.163/backends/arm_initreg.c
-===================================================================
---- elfutils-0.163.orig/backends/arm_initreg.c
-+++ elfutils-0.163/backends/arm_initreg.c
-@@ -37,7 +37,7 @@
- #endif
- 
- #ifdef __aarch64__
--# include <linux/uio.h>
-+# include <sys/uio.h>
- # include <sys/user.h>
- # include <sys/ptrace.h>
- /* Deal with old glibc defining user_pt_regs instead of user_regs_struct.  */
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/hurd_path.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/hurd_path.patch
deleted file mode 100644
index a4d568b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/hurd_path.patch
+++ /dev/null
@@ -1,17 +0,0 @@
-Upstream-Status: Backport [from debian]
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
-
-Index: elfutils-0.165/tests/run-native-test.sh
-===================================================================
---- elfutils-0.165.orig/tests/run-native-test.sh
-+++ elfutils-0.165/tests/run-native-test.sh
-@@ -83,6 +83,9 @@ native_test()
- # "cannot attach to process: Function not implemented".
- [ "$(uname)" = "GNU/kFreeBSD" ] && exit 77
- 
-+# hurd's /proc/$PID/maps does not give paths yet.
-+[ "$(uname)" = "GNU" ] && exit 77
-+
- native_test ${abs_builddir}/allregs
- native_test ${abs_builddir}/funcretval
- 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/ignore_strmerge.diff b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/ignore_strmerge.diff
deleted file mode 100644
index 3570dec..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/ignore_strmerge.diff
+++ /dev/null
@@ -1,14 +0,0 @@
-Upstream-Status: Backport [from debian]
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
-
---- elfutils-0.165.orig/tests/run-strip-strmerge.sh
-+++ elfutils-0.165/tests/run-strip-strmerge.sh
-@@ -30,7 +30,7 @@ remerged=remerged.elf
- tempfiles $merged $stripped $debugfile $remerged
- 
- echo elflint $input
--testrun ${abs_top_builddir}/src/elflint --gnu $input
-+testrun_on_self_skip ${abs_top_builddir}/src/elflint --gnu $input
- echo elfstrmerge
- testrun ${abs_top_builddir}/tests/elfstrmerge -o $merged $input
- echo elflint $merged
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/kfreebsd_path.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/kfreebsd_path.patch
deleted file mode 100644
index 49085d1..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/kfreebsd_path.patch
+++ /dev/null
@@ -1,20 +0,0 @@
-Upstream-Status: Backport [from debian]
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
-
-Index: b/tests/run-native-test.sh
-===================================================================
---- a/tests/run-native-test.sh
-+++ b/tests/run-native-test.sh
-@@ -77,6 +77,12 @@ native_test()
-   test $native -eq 0 || testrun "$@" -p $native > /dev/null
- }
- 
-+# On the Debian buildds, GNU/kFreeBSD linprocfs /proc/$PID/maps does
-+# not give absolute paths due to sbuild's bind mounts (bug #570805)
-+# therefore the next two test programs are expected to fail with
-+# "cannot attach to process: Function not implemented".
-+[ "$(uname)" = "GNU/kFreeBSD" ] && exit 77
-+
- native_test ${abs_builddir}/allregs
- native_test ${abs_builddir}/funcretval
- 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/mips_backend.diff b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/mips_backend.diff
deleted file mode 100644
index a5e76dd..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/mips_backend.diff
+++ /dev/null
@@ -1,686 +0,0 @@
-Upstream-Status: Backport [from debian]
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
-
-Index: b/backends/mips_init.c
-===================================================================
---- /dev/null
-+++ b/backends/mips_init.c
-@@ -0,0 +1,59 @@
-+/* Initialization of mips specific backend library.
-+   Copyright (C) 2006 Red Hat, Inc.
-+   This file is part of Red Hat elfutils.
-+
-+   Red Hat elfutils is free software; you can redistribute it and/or modify
-+   it under the terms of the GNU General Public License as published by the
-+   Free Software Foundation; version 2 of the License.
-+
-+   Red Hat elfutils is distributed in the hope that it will be useful, but
-+   WITHOUT ANY WARRANTY; without even the implied warranty of
-+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+   General Public License for more details.
-+
-+   You should have received a copy of the GNU General Public License along
-+   with Red Hat elfutils; if not, write to the Free Software Foundation,
-+   Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA.
-+
-+   Red Hat elfutils is an included package of the Open Invention Network.
-+   An included package of the Open Invention Network is a package for which
-+   Open Invention Network licensees cross-license their patents.  No patent
-+   license is granted, either expressly or impliedly, by designation as an
-+   included package.  Should you wish to participate in the Open Invention
-+   Network licensing program, please visit www.openinventionnetwork.com
-+   <http://www.openinventionnetwork.com>.  */
-+
-+#ifdef HAVE_CONFIG_H
-+# include <config.h>
-+#endif
-+
-+#define BACKEND		mips_
-+#define RELOC_PREFIX	R_MIPS_
-+#include "libebl_CPU.h"
-+
-+/* This defines the common reloc hooks based on mips_reloc.def.  */
-+#include "common-reloc.c"
-+
-+const char *
-+mips_init (Elf *elf __attribute__ ((unused)),
-+     GElf_Half machine __attribute__ ((unused)),
-+     Ebl *eh,
-+     size_t ehlen)
-+{
-+  /* Check whether the Elf_BH object has a sufficent size.  */
-+  if (ehlen < sizeof (Ebl))
-+    return NULL;
-+
-+  /* We handle it.  */
-+  if (machine == EM_MIPS)
-+    eh->name = "MIPS R3000 big-endian";
-+  else if (machine == EM_MIPS_RS3_LE)
-+    eh->name = "MIPS R3000 little-endian";
-+
-+  mips_init_reloc (eh);
-+  HOOK (eh, reloc_simple_type);
-+  HOOK (eh, return_value_location);
-+  HOOK (eh, register_info);
-+
-+  return MODVERSION;
-+}
-Index: b/backends/mips_regs.c
-===================================================================
---- /dev/null
-+++ b/backends/mips_regs.c
-@@ -0,0 +1,104 @@
-+/* Register names and numbers for MIPS DWARF.
-+   Copyright (C) 2006 Red Hat, Inc.
-+   This file is part of Red Hat elfutils.
-+
-+   Red Hat elfutils is free software; you can redistribute it and/or modify
-+   it under the terms of the GNU General Public License as published by the
-+   Free Software Foundation; version 2 of the License.
-+
-+   Red Hat elfutils is distributed in the hope that it will be useful, but
-+   WITHOUT ANY WARRANTY; without even the implied warranty of
-+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+   General Public License for more details.
-+
-+   You should have received a copy of the GNU General Public License along
-+   with Red Hat elfutils; if not, write to the Free Software Foundation,
-+   Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA.
-+
-+   Red Hat elfutils is an included package of the Open Invention Network.
-+   An included package of the Open Invention Network is a package for which
-+   Open Invention Network licensees cross-license their patents.  No patent
-+   license is granted, either expressly or impliedly, by designation as an
-+   included package.  Should you wish to participate in the Open Invention
-+   Network licensing program, please visit www.openinventionnetwork.com
-+   <http://www.openinventionnetwork.com>.  */
-+
-+#ifdef HAVE_CONFIG_H
-+# include <config.h>
-+#endif
-+
-+#include <string.h>
-+#include <dwarf.h>
-+
-+#define BACKEND mips_
-+#include "libebl_CPU.h"
-+
-+ssize_t
-+mips_register_info (Ebl *ebl __attribute__((unused)),
-+		    int regno, char *name, size_t namelen,
-+		    const char **prefix, const char **setname,
-+		    int *bits, int *type)
-+{
-+  if (name == NULL)
-+    return 66;
-+
-+  if (regno < 0 || regno > 65 || namelen < 4)
-+    return -1;
-+
-+  *prefix = "$";
-+
-+  if (regno < 32)
-+    {
-+      *setname = "integer";
-+      *type = DW_ATE_signed;
-+      *bits = 32;
-+      if (regno < 32 + 10)
-+        {
-+          name[0] = regno + '0';
-+          namelen = 1;
-+        }
-+      else
-+        {
-+          name[0] = (regno / 10) + '0';
-+          name[1] = (regno % 10) + '0';
-+          namelen = 2;
-+        }
-+    }
-+  else if (regno < 64)
-+    {
-+      *setname = "FPU";
-+      *type = DW_ATE_float;
-+      *bits = 32;
-+      name[0] = 'f';
-+      if (regno < 32 + 10)
-+	{
-+	  name[1] = (regno - 32) + '0';
-+	  namelen = 2;
-+	}
-+      else
-+	{
-+	  name[1] = (regno - 32) / 10 + '0';
-+	  name[2] = (regno - 32) % 10 + '0';
-+	  namelen = 3;
-+	}
-+    }
-+  else if (regno == 64)
-+    {
-+      *type = DW_ATE_signed;
-+      *bits = 32;
-+      name[0] = 'h';
-+      name[1] = 'i';
-+      namelen = 2;
-+    }
-+  else
-+    {
-+      *type = DW_ATE_signed;
-+      *bits = 32;
-+      name[0] = 'l';
-+      name[1] = 'o';
-+      namelen = 2;
-+    }
-+
-+  name[namelen++] = '\0';
-+  return namelen;
-+}
-Index: b/backends/mips_reloc.def
-===================================================================
---- /dev/null
-+++ b/backends/mips_reloc.def
-@@ -0,0 +1,79 @@
-+/* List the relocation types for mips.  -*- C -*-
-+   Copyright (C) 2006 Red Hat, Inc.
-+   This file is part of Red Hat elfutils.
-+
-+   Red Hat elfutils is free software; you can redistribute it and/or modify
-+   it under the terms of the GNU General Public License as published by the
-+   Free Software Foundation; version 2 of the License.
-+
-+   Red Hat elfutils is distributed in the hope that it will be useful, but
-+   WITHOUT ANY WARRANTY; without even the implied warranty of
-+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+   General Public License for more details.
-+
-+   You should have received a copy of the GNU General Public License along
-+   with Red Hat elfutils; if not, write to the Free Software Foundation,
-+   Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA.
-+
-+   Red Hat elfutils is an included package of the Open Invention Network.
-+   An included package of the Open Invention Network is a package for which
-+   Open Invention Network licensees cross-license their patents.  No patent
-+   license is granted, either expressly or impliedly, by designation as an
-+   included package.  Should you wish to participate in the Open Invention
-+   Network licensing program, please visit www.openinventionnetwork.com
-+   <http://www.openinventionnetwork.com>.  */
-+
-+/* 	    NAME,		REL|EXEC|DYN	*/
-+
-+RELOC_TYPE (NONE,               0)
-+RELOC_TYPE (16,                 0)
-+RELOC_TYPE (32,                 0)
-+RELOC_TYPE (REL32,              0)
-+RELOC_TYPE (26,                 0)
-+RELOC_TYPE (HI16,               0)
-+RELOC_TYPE (LO16,               0)
-+RELOC_TYPE (GPREL16,            0)
-+RELOC_TYPE (LITERAL,            0)
-+RELOC_TYPE (GOT16,              0)
-+RELOC_TYPE (PC16,               0)
-+RELOC_TYPE (CALL16,             0)
-+RELOC_TYPE (GPREL32,            0)
-+
-+RELOC_TYPE (SHIFT5,             0)
-+RELOC_TYPE (SHIFT6,             0)
-+RELOC_TYPE (64,                 0)
-+RELOC_TYPE (GOT_DISP,           0)
-+RELOC_TYPE (GOT_PAGE,           0)
-+RELOC_TYPE (GOT_OFST,           0)
-+RELOC_TYPE (GOT_HI16,           0)
-+RELOC_TYPE (GOT_LO16,           0)
-+RELOC_TYPE (SUB,                0)
-+RELOC_TYPE (INSERT_A,           0)
-+RELOC_TYPE (INSERT_B,           0)
-+RELOC_TYPE (DELETE,             0)
-+RELOC_TYPE (HIGHER,             0)
-+RELOC_TYPE (HIGHEST,            0)
-+RELOC_TYPE (CALL_HI16,          0)
-+RELOC_TYPE (CALL_LO16,          0)
-+RELOC_TYPE (SCN_DISP,           0)
-+RELOC_TYPE (REL16,              0)
-+RELOC_TYPE (ADD_IMMEDIATE,      0)
-+RELOC_TYPE (PJUMP,              0)
-+RELOC_TYPE (RELGOT,             0)
-+RELOC_TYPE (JALR,               0)
-+RELOC_TYPE (TLS_DTPMOD32,       0)
-+RELOC_TYPE (TLS_DTPREL32,       0)
-+RELOC_TYPE (TLS_DTPMOD64,       0)
-+RELOC_TYPE (TLS_DTPREL64,       0)
-+RELOC_TYPE (TLS_GD,             0)
-+RELOC_TYPE (TLS_LDM,            0)
-+RELOC_TYPE (TLS_DTPREL_HI16,    0)
-+RELOC_TYPE (TLS_DTPREL_LO16,    0)
-+RELOC_TYPE (TLS_GOTTPREL,       0)
-+RELOC_TYPE (TLS_TPREL32,        0)
-+RELOC_TYPE (TLS_TPREL64,        0)
-+RELOC_TYPE (TLS_TPREL_HI16,     0)
-+RELOC_TYPE (TLS_TPREL_LO16,     0)
-+
-+#define NO_COPY_RELOC 1
-+#define NO_RELATIVE_RELOC 1
-Index: b/backends/mips_retval.c
-===================================================================
---- /dev/null
-+++ b/backends/mips_retval.c
-@@ -0,0 +1,321 @@
-+/* Function return value location for Linux/mips ABI.
-+   Copyright (C) 2005 Red Hat, Inc.
-+   This file is part of Red Hat elfutils.
-+
-+   Red Hat elfutils is free software; you can redistribute it and/or modify
-+   it under the terms of the GNU General Public License as published by the
-+   Free Software Foundation; version 2 of the License.
-+
-+   Red Hat elfutils is distributed in the hope that it will be useful, but
-+   WITHOUT ANY WARRANTY; without even the implied warranty of
-+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+   General Public License for more details.
-+
-+   You should have received a copy of the GNU General Public License along
-+   with Red Hat elfutils; if not, write to the Free Software Foundation,
-+   Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA.
-+
-+   Red Hat elfutils is an included package of the Open Invention Network.
-+   An included package of the Open Invention Network is a package for which
-+   Open Invention Network licensees cross-license their patents.  No patent
-+   license is granted, either expressly or impliedly, by designation as an
-+   included package.  Should you wish to participate in the Open Invention
-+   Network licensing program, please visit www.openinventionnetwork.com
-+   <http://www.openinventionnetwork.com>.  */
-+
-+#ifdef HAVE_CONFIG_H
-+# include <config.h>
-+#endif
-+
-+#include <string.h>
-+#include <assert.h>
-+#include <dwarf.h>
-+#include <elf.h>
-+
-+#include "../libebl/libeblP.h"
-+#include "../libdw/libdwP.h"
-+
-+#define BACKEND mips_
-+#include "libebl_CPU.h"
-+
-+/* The ABI of the file.  Also see EF_MIPS_ABI2 above. */
-+#define EF_MIPS_ABI		0x0000F000
-+
-+/* The original o32 abi. */
-+#define E_MIPS_ABI_O32          0x00001000
-+
-+/* O32 extended to work on 64 bit architectures */
-+#define E_MIPS_ABI_O64          0x00002000
-+
-+/* EABI in 32 bit mode */
-+#define E_MIPS_ABI_EABI32       0x00003000
-+
-+/* EABI in 64 bit mode */
-+#define E_MIPS_ABI_EABI64       0x00004000
-+
-+/* All the possible MIPS ABIs. */
-+enum mips_abi
-+  {
-+    MIPS_ABI_UNKNOWN = 0,
-+    MIPS_ABI_N32,
-+    MIPS_ABI_O32,
-+    MIPS_ABI_N64,
-+    MIPS_ABI_O64,
-+    MIPS_ABI_EABI32,
-+    MIPS_ABI_EABI64,
-+    MIPS_ABI_LAST
-+  };
-+
-+/* Find the mips ABI of the current file */
-+enum mips_abi find_mips_abi(Elf *elf)
-+{
-+  GElf_Ehdr ehdr_mem;
-+  GElf_Ehdr *ehdr = gelf_getehdr (elf, &ehdr_mem);
-+
-+  if (ehdr == NULL)
-+    return MIPS_ABI_LAST;
-+
-+  GElf_Word elf_flags = ehdr->e_flags;
-+
-+  /* Check elf_flags to see if it specifies the ABI being used.  */
-+  switch ((elf_flags & EF_MIPS_ABI))
-+    {
-+    case E_MIPS_ABI_O32:
-+      return MIPS_ABI_O32;
-+    case E_MIPS_ABI_O64:
-+      return MIPS_ABI_O64;
-+    case E_MIPS_ABI_EABI32:
-+      return MIPS_ABI_EABI32;
-+    case E_MIPS_ABI_EABI64:
-+      return MIPS_ABI_EABI64;
-+    default:
-+      if ((elf_flags & EF_MIPS_ABI2))
-+	return MIPS_ABI_N32;
-+    }
-+
-+  /* GCC creates a pseudo-section whose name describes the ABI.  */
-+  size_t shstrndx;
-+  if (elf_getshdrstrndx (elf, &shstrndx) < 0)
-+    return MIPS_ABI_LAST;
-+
-+  const char *name;
-+  Elf_Scn *scn = NULL;
-+  while ((scn = elf_nextscn (elf, scn)) != NULL)
-+    {
-+      GElf_Shdr shdr_mem;
-+      GElf_Shdr *shdr = gelf_getshdr (scn, &shdr_mem);
-+      if (shdr == NULL)
-+        return MIPS_ABI_LAST;
-+
-+      name = elf_strptr (elf, shstrndx, shdr->sh_name) ?: "";
-+      if (strncmp (name, ".mdebug.", 8) != 0)
-+        continue;
-+
-+      if (strcmp (name, ".mdebug.abi32") == 0)
-+        return MIPS_ABI_O32;
-+      else if (strcmp (name, ".mdebug.abiN32") == 0)
-+        return MIPS_ABI_N32;
-+      else if (strcmp (name, ".mdebug.abi64") == 0)
-+        return MIPS_ABI_N64;
-+      else if (strcmp (name, ".mdebug.abiO64") == 0)
-+        return MIPS_ABI_O64;
-+      else if (strcmp (name, ".mdebug.eabi32") == 0)
-+        return MIPS_ABI_EABI32;
-+      else if (strcmp (name, ".mdebug.eabi64") == 0)
-+        return MIPS_ABI_EABI64;
-+      else
-+        return MIPS_ABI_UNKNOWN;
-+    }
-+
-+  return MIPS_ABI_UNKNOWN;
-+}
-+
-+unsigned int
-+mips_abi_regsize (enum mips_abi abi)
-+{
-+  switch (abi)
-+    {
-+    case MIPS_ABI_EABI32:
-+    case MIPS_ABI_O32:
-+      return 4;
-+    case MIPS_ABI_N32:
-+    case MIPS_ABI_N64:
-+    case MIPS_ABI_O64:
-+    case MIPS_ABI_EABI64:
-+      return 8;
-+    case MIPS_ABI_UNKNOWN:
-+    case MIPS_ABI_LAST:
-+    default:
-+      return 0;
-+    }
-+}
-+
-+
-+/* $v0 or pair $v0, $v1 */
-+static const Dwarf_Op loc_intreg_o32[] =
-+  {
-+    { .atom = DW_OP_reg2 }, { .atom = DW_OP_piece, .number = 4 },
-+    { .atom = DW_OP_reg3 }, { .atom = DW_OP_piece, .number = 4 },
-+  };
-+
-+static const Dwarf_Op loc_intreg[] =
-+  {
-+    { .atom = DW_OP_reg2 }, { .atom = DW_OP_piece, .number = 8 },
-+    { .atom = DW_OP_reg3 }, { .atom = DW_OP_piece, .number = 8 },
-+  };
-+#define nloc_intreg	1
-+#define nloc_intregpair	4
-+
-+/* $f0 (float), or pair $f0, $f1 (double).
-+ * f2/f3 are used for COMPLEX (= 2 doubles) returns in Fortran */
-+static const Dwarf_Op loc_fpreg_o32[] =
-+  {
-+    { .atom = DW_OP_regx, .number = 32 }, { .atom = DW_OP_piece, .number = 4 },
-+    { .atom = DW_OP_regx, .number = 33 }, { .atom = DW_OP_piece, .number = 4 },
-+    { .atom = DW_OP_regx, .number = 34 }, { .atom = DW_OP_piece, .number = 4 },
-+    { .atom = DW_OP_regx, .number = 35 }, { .atom = DW_OP_piece, .number = 4 },
-+  };
-+
-+/* $f0, or pair $f0, $f2.  */
-+static const Dwarf_Op loc_fpreg[] =
-+  {
-+    { .atom = DW_OP_regx, .number = 32 }, { .atom = DW_OP_piece, .number = 8 },
-+    { .atom = DW_OP_regx, .number = 34 }, { .atom = DW_OP_piece, .number = 8 },
-+  };
-+#define nloc_fpreg  1
-+#define nloc_fpregpair 4
-+#define nloc_fpregquad 8
-+
-+/* The return value is a structure and is actually stored in stack space
-+   passed in a hidden argument by the caller.  But, the compiler
-+   helpfully returns the address of that space in $v0.  */
-+static const Dwarf_Op loc_aggregate[] =
-+  {
-+    { .atom = DW_OP_breg2, .number = 0 }
-+  };
-+#define nloc_aggregate 1
-+
-+int
-+mips_return_value_location (Dwarf_Die *functypedie, const Dwarf_Op **locp)
-+{
-+  /* First find the ABI used by the elf object */
-+  enum mips_abi abi = find_mips_abi(functypedie->cu->dbg->elf);
-+
-+  /* Something went seriously wrong while trying to figure out the ABI */
-+  if (abi == MIPS_ABI_LAST)
-+    return -1;
-+
-+  /* We couldn't identify the ABI, but the file seems valid */
-+  if (abi == MIPS_ABI_UNKNOWN)
-+    return -2;
-+
-+  /* Can't handle EABI variants */
-+  if ((abi == MIPS_ABI_EABI32) || (abi == MIPS_ABI_EABI64))
-+    return -2;
-+
-+  unsigned int regsize = mips_abi_regsize (abi);
-+  if (!regsize)
-+    return -2;
-+
-+  /* Start with the function's type, and get the DW_AT_type attribute,
-+     which is the type of the return value.  */
-+
-+  Dwarf_Attribute attr_mem;
-+  Dwarf_Attribute *attr = dwarf_attr_integrate (functypedie, DW_AT_type, &attr_mem);
-+  if (attr == NULL)
-+    /* The function has no return value, like a `void' function in C.  */
-+    return 0;
-+
-+  Dwarf_Die die_mem;
-+  Dwarf_Die *typedie = dwarf_formref_die (attr, &die_mem);
-+  int tag = dwarf_tag (typedie);
-+
-+  /* Follow typedefs and qualifiers to get to the actual type.  */
-+  while (tag == DW_TAG_typedef
-+	 || tag == DW_TAG_const_type || tag == DW_TAG_volatile_type
-+	 || tag == DW_TAG_restrict_type)
-+    {
-+      attr = dwarf_attr_integrate (typedie, DW_AT_type, &attr_mem);
-+      typedie = dwarf_formref_die (attr, &die_mem);
-+      tag = dwarf_tag (typedie);
-+    }
-+
-+  switch (tag)
-+    {
-+    case -1:
-+      return -1;
-+
-+    case DW_TAG_subrange_type:
-+      if (! dwarf_hasattr_integrate (typedie, DW_AT_byte_size))
-+	{
-+	  attr = dwarf_attr_integrate (typedie, DW_AT_type, &attr_mem);
-+	  typedie = dwarf_formref_die (attr, &die_mem);
-+	  tag = dwarf_tag (typedie);
-+	}
-+      /* Fall through.  */
-+
-+    case DW_TAG_base_type:
-+    case DW_TAG_enumeration_type:
-+    case DW_TAG_pointer_type:
-+    case DW_TAG_ptr_to_member_type:
-+      {
-+        Dwarf_Word size;
-+	if (dwarf_formudata (dwarf_attr_integrate (typedie, DW_AT_byte_size,
-+					 &attr_mem), &size) != 0)
-+	  {
-+	    if (tag == DW_TAG_pointer_type || tag == DW_TAG_ptr_to_member_type)
-+	      size = regsize;
-+	    else
-+	      return -1;
-+	  }
-+	if (tag == DW_TAG_base_type)
-+	  {
-+	    Dwarf_Word encoding;
-+	    if (dwarf_formudata (dwarf_attr_integrate (typedie, DW_AT_encoding,
-+					     &attr_mem), &encoding) != 0)
-+	      return -1;
-+
-+#define ABI_LOC(loc, regsize) ((regsize) == 4 ? (loc ## _o32) : (loc))
-+
-+	    if (encoding == DW_ATE_float)
-+	      {
-+		*locp = ABI_LOC(loc_fpreg, regsize);
-+		if (size <= regsize)
-+		    return nloc_fpreg;
-+
-+		if (size <= 2*regsize)
-+                  return nloc_fpregpair;
-+
-+		if (size <= 4*regsize && abi == MIPS_ABI_O32)
-+                  return nloc_fpregquad;
-+
-+		goto aggregate;
-+	      }
-+	  }
-+	*locp = ABI_LOC(loc_intreg, regsize);
-+	if (size <= regsize)
-+	  return nloc_intreg;
-+	if (size <= 2*regsize)
-+	  return nloc_intregpair;
-+
-+	/* Else fall through. Shouldn't happen though (at least with gcc) */
-+      }
-+
-+    case DW_TAG_structure_type:
-+    case DW_TAG_class_type:
-+    case DW_TAG_union_type:
-+    case DW_TAG_array_type:
-+    aggregate:
-+      /* XXX TODO: Can't handle structure return with other ABI's yet :-/ */
-+      if ((abi != MIPS_ABI_O32) && (abi != MIPS_ABI_O64))
-+        return -2;
-+
-+      *locp = loc_aggregate;
-+      return nloc_aggregate;
-+    }
-+
-+  /* XXX We don't have a good way to return specific errors from ebl calls.
-+     This value means we do not understand the type, but it is well-formed
-+     DWARF and might be valid.  */
-+  return -2;
-+}
-Index: b/backends/mips_symbol.c
-===================================================================
---- /dev/null
-+++ b/backends/mips_symbol.c
-@@ -0,0 +1,52 @@
-+/* MIPS specific symbolic name handling.
-+   Copyright (C) 2002, 2003, 2005 Red Hat, Inc.
-+   This file is part of Red Hat elfutils.
-+   Written by Jakub Jelinek <jakub@redhat.com>, 2002.
-+
-+   Red Hat elfutils is free software; you can redistribute it and/or modify
-+   it under the terms of the GNU General Public License as published by the
-+   Free Software Foundation; version 2 of the License.
-+
-+   Red Hat elfutils is distributed in the hope that it will be useful, but
-+   WITHOUT ANY WARRANTY; without even the implied warranty of
-+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+   General Public License for more details.
-+
-+   You should have received a copy of the GNU General Public License along
-+   with Red Hat elfutils; if not, write to the Free Software Foundation,
-+   Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA.
-+
-+   Red Hat elfutils is an included package of the Open Invention Network.
-+   An included package of the Open Invention Network is a package for which
-+   Open Invention Network licensees cross-license their patents.  No patent
-+   license is granted, either expressly or impliedly, by designation as an
-+   included package.  Should you wish to participate in the Open Invention
-+   Network licensing program, please visit www.openinventionnetwork.com
-+   <http://www.openinventionnetwork.com>.  */
-+
-+#ifdef HAVE_CONFIG_H
-+# include <config.h>
-+#endif
-+
-+#include <elf.h>
-+#include <stddef.h>
-+
-+#define BACKEND		mips_
-+#include "libebl_CPU.h"
-+
-+/* Check for the simple reloc types.  */
-+Elf_Type
-+mips_reloc_simple_type (Ebl *ebl __attribute__ ((unused)), int type)
-+{
-+  switch (type)
-+    {
-+    case R_MIPS_16:
-+      return ELF_T_HALF;
-+    case R_MIPS_32:
-+      return ELF_T_WORD;
-+    case R_MIPS_64:
-+      return ELF_T_XWORD;
-+    default:
-+      return ELF_T_NUM;
-+    }
-+}
-Index: b/libebl/eblopenbackend.c
-===================================================================
---- a/libebl/eblopenbackend.c
-+++ b/libebl/eblopenbackend.c
-@@ -71,6 +71,8 @@ static const struct
-   { "sparc", "elf_sparc", "sparc", 5, EM_SPARC, 0, 0 },
-   { "sparc", "elf_sparcv8plus", "sparc", 5, EM_SPARC32PLUS, 0, 0 },
-   { "s390", "ebl_s390", "s390", 4, EM_S390, 0, 0 },
-+  { "mips", "elf_mips", "mips", 4, EM_MIPS, 0, 0 },
-+  { "mips", "elf_mipsel", "mipsel", 4, EM_MIPS_RS3_LE, 0, 0 },
- 
-   { "m32", "elf_m32", "m32", 3, EM_M32, 0, 0 },
-   { "m68k", "elf_m68k", "m68k", 4, EM_68K, ELFCLASS32, ELFDATA2MSB },
-Index: b/backends/Makefile.am
-===================================================================
---- a/backends/Makefile.am
-+++ b/backends/Makefile.am
-@@ -33,12 +33,12 @@ AM_CPPFLAGS += -I$(top_srcdir)/libebl -I
- 
- 
- modules = i386 sh x86_64 ia64 alpha arm aarch64 sparc ppc ppc64 s390 \
--	  tilegx m68k bpf parisc
-+	  tilegx m68k bpf parisc mips
- libebl_pic = libebl_i386_pic.a libebl_sh_pic.a libebl_x86_64_pic.a    \
- 	     libebl_ia64_pic.a libebl_alpha_pic.a libebl_arm_pic.a    \
- 	     libebl_aarch64_pic.a libebl_sparc_pic.a libebl_ppc_pic.a \
- 	     libebl_ppc64_pic.a libebl_s390_pic.a libebl_tilegx_pic.a \
--	     libebl_m68k_pic.a libebl_bpf_pic.a libebl_parisc_pic.a
-+	     libebl_m68k_pic.a libebl_bpf_pic.a libebl_parisc_pic.a libebl_mips_pic.a
- noinst_LIBRARIES = $(libebl_pic)
- noinst_DATA = $(libebl_pic:_pic.a=.so)
- 
-@@ -132,6 +132,10 @@ parisc_SRCS = parisc_init.c parisc_symbo
- libebl_parisc_pic_a_SOURCES = $(parisc_SRCS)
- am_libebl_parisc_pic_a_OBJECTS = $(parisc_SRCS:.c=.os)
- 
-+mips_SRCS = mips_init.c mips_symbol.c mips_regs.c mips_retval.c
-+libebl_mips_pic_a_SOURCES = $(mips_SRCS)
-+am_libebl_mips_pic_a_OBJECTS = $(mips_SRCS:.c=.os)
-+
- libebl_%.so libebl_%.map: libebl_%_pic.a $(libelf) $(libdw)
- 	@rm -f $(@:.so=.map)
- 	$(AM_V_at)echo 'ELFUTILS_$(PACKAGE_VERSION) { global: $*_init; local: *; };' \
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/testsuite-ignore-elflint.diff b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/testsuite-ignore-elflint.diff
deleted file mode 100644
index 3df3576..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/testsuite-ignore-elflint.diff
+++ /dev/null
@@ -1,42 +0,0 @@
-On many architectures this test fails because binaries/libs produced by
-binutils don't pass elflint. However elfutils shouldn't FTBFS because of this.
-
-So we run the tests on all archs to see what breaks, but if it breaks we ignore
-the result (exitcode 77 means: this test was skipped).
-
-Upstream-Status: Backport [from debian]
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
-
-Index: b/tests/run-elflint-self.sh
-===================================================================
---- a/tests/run-elflint-self.sh
-+++ b/tests/run-elflint-self.sh
-@@ -18,4 +18,4 @@
- 
- . $srcdir/test-subr.sh
- 
--testrun_on_self ${abs_top_builddir}/src/elflint --quiet --gnu-ld
-+testrun_on_self_skip ${abs_top_builddir}/src/elflint --quiet --gnu-ld
-Index: b/tests/test-subr.sh
-===================================================================
---- a/tests/test-subr.sh
-+++ b/tests/test-subr.sh
-@@ -152,3 +152,18 @@ testrun_on_self_quiet()
-   # Only exit if something failed
-   if test $exit_status != 0; then exit $exit_status; fi
- }
-+
-+# Same as testrun_on_self(), but skip on failure.
-+testrun_on_self_skip()
-+{
-+  exit_status=0
-+
-+  for file in $self_test_files; do
-+      testrun $* $file \
-+	  || { echo "*** failure in $* $file"; exit_status=77; }
-+  done
-+
-+  # Only exit if something failed
-+  if test $exit_status != 0; then exit $exit_status; fi
-+}
-+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/fixheadercheck.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/fixheadercheck.patch
deleted file mode 100644
index 5de3b24..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/fixheadercheck.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-For some binaries we can get a invalid section alignment, for example if 
-sh_align = 1 and sh_addralign is 0. In the case of a zero size section like 
-".note.GNU-stack", this is irrelavent as far as I can tell and we shouldn't
-error in this case.
-
-RP 2014/6/11
-
-Upstream-Status: Pending
-
-diff --git a/libelf/elf32_updatenull.c b/libelf/elf32_updatenull.c
---- a/libelf/elf32_updatenull.c
-+++ b/libelf/elf32_updatenull.c
-@@ -339,8 +339,8 @@ __elfw2(LIBELFBITS,updatenull_wrlock) (Elf *elf, int *change_bop, size_t shnum)
- 		     we test for the alignment of the section being large
- 		     enough for the largest alignment required by a data
- 		     block.  */
--		  if (unlikely (! powerof2 (shdr->sh_addralign))
--		      || unlikely ((shdr->sh_addralign ?: 1) < sh_align))
-+		  if (shdr->sh_size && (unlikely (! powerof2 (shdr->sh_addralign))
-+		      || unlikely ((shdr->sh_addralign ?: 1) < sh_align)))
- 		    {
- 		      __libelf_seterrno (ELF_E_INVALID_ALIGN);
- 		      return -1;
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/shadow.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/shadow.patch
deleted file mode 100644
index d31961f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/shadow.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-Fix control path where we have str as uninitialized string
-
-| /home/ubuntu/work/oe/openembedded-core/build/tmp-musl/work/i586-oe-linux-musl/elfutils/0.164-r0/elfutils-0.164/libcpu/i386_disasm.c: In function 'i386_disasm':
-| /home/ubuntu/work/oe/openembedded-core/build/tmp-musl/work/i586-oe-linux-musl/elfutils/0.164-r0/elfutils-0.164/libcpu/i386_disasm.c:310:5: error: 'str' may be used uninitialized in this function [-Werror=maybe-uninitialized]
-|      memcpy (buf + bufcnt, _str, _len);           \
-|      ^
-| /home/ubuntu/work/oe/openembedded-core/build/tmp-musl/work/i586-oe-linux-musl/elfutils/0.164-r0/elfutils-0.164/libcpu/i386_disasm.c:709:17: note: 'str' was declared here
-|      const char *str;
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: elfutils-0.164/libcpu/i386_disasm.c
-===================================================================
---- elfutils-0.164.orig/libcpu/i386_disasm.c
-+++ elfutils-0.164/libcpu/i386_disasm.c
-@@ -821,6 +821,7 @@ i386_disasm (const uint8_t **startp, con
- 			    }
- 
- 			default:
-+			  str = "";
- 			  assert (! "INVALID not handled");
- 			}
- 		    }
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils/Fix_elf_cvt_gunhash.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils/Fix_elf_cvt_gunhash.patch
deleted file mode 100644
index f861e89..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils/Fix_elf_cvt_gunhash.patch
+++ /dev/null
@@ -1,29 +0,0 @@
-Fix elf_cvt_gunhash if dest and src are same.
-
-Upstream-Status: Pending
-
-The 'dest' and 'src' can be the same, so we need to save the value of src32[2]
-before swapping it.
-
-Signed-off-by: Baoshan Pang <BaoShan.Pang@windriver.com>
-diff --git a/libelf/gnuhash_xlate.h b/libelf/gnuhash_xlate.h
-index 6faf113..04d9ca1 100644
---- a/libelf/gnuhash_xlate.h
-+++ b/libelf/gnuhash_xlate.h
-@@ -40,6 +40,7 @@ elf_cvt_gnuhash (void *dest, const void *src, size_t len, int encode)
-      words.  We must detangle them here.   */
-   Elf32_Word *dest32 = dest;
-   const Elf32_Word *src32 = src;
-+  Elf32_Word save_src32_2 = src32[2]; // dest could be equal to src
- 
-   /* First four control words, 32 bits.  */
-   for (unsigned int cnt = 0; cnt < 4; ++cnt)
-@@ -50,7 +51,7 @@ elf_cvt_gnuhash (void *dest, const void *src, size_t len, int encode)
-       len -= 4;
-     }
- 
--  Elf32_Word bitmask_words = encode ? src32[2] : dest32[2];
-+  Elf32_Word bitmask_words = encode ? save_src32_2 : dest32[2];
- 
-   /* Now the 64 bit words.  */
-   Elf64_Xword *dest64 = (Elf64_Xword *) &dest32[4];
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils/dso-link-change.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils/dso-link-change.patch
deleted file mode 100644
index d0cd3ed..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils/dso-link-change.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-Upstream-Status: Pending
-
-# This patch makes the link to the dependencies of libdw explicit, as recent
-# ld no longer implicitly links them. See
-# http://lists.fedoraproject.org/pipermail/devel/2010-March/133601.html as
-# a similar example of the error message you can encounter without this patch,
-# and https://fedoraproject.org/wiki/UnderstandingDSOLinkChange and
-# https://fedoraproject.org/wiki/Features/ChangeInImplicitDSOLinking for more
-# details.
-
---- elfutils-0.148.orig/src/Makefile.am
-+++ elfutils-0.148/src/Makefile.am
-@@ -86,7 +86,7 @@ libdw = ../libdw/libdw.a $(zip_LIBS) $(l
- libelf = ../libelf/libelf.a
- else
- libasm = ../libasm/libasm.so
--libdw = ../libdw/libdw.so
-+libdw = ../libdw/libdw.so $(zip_LIBS) $(libelf) $(libebl) -ldl
- libelf = ../libelf/libelf.so
- endif
- libebl = ../libebl/libebl.a
---- elfutils-0.148.orig/tests/Makefile.am
-+++ elfutils-0.148/tests/Makefile.am
-@@ -172,7 +172,7 @@ libdw = ../libdw/libdw.a $(zip_LIBS) $(l
- libelf = ../libelf/libelf.a
- libasm = ../libasm/libasm.a
- else
--libdw = ../libdw/libdw.so
-+libdw = ../libdw/libdw.so $(zip_LIBS) $(libelf) $(libebl) -ldl
- libelf = ../libelf/libelf.so
- libasm = ../libasm/libasm.so
- endif
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils_0.168.bb b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils_0.168.bb
deleted file mode 100644
index b977ce0..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils_0.168.bb
+++ /dev/null
@@ -1,90 +0,0 @@
-SUMMARY = "Utilities and libraries for handling compiled object files"
-HOMEPAGE = "https://sourceware.org/elfutils"
-SECTION = "base"
-LICENSE = "(GPLv3 & Elfutils-Exception)"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
-DEPENDS = "libtool bzip2 zlib virtual/libintl"
-DEPENDS_append_libc-musl = " argp-standalone fts "
-SRC_URI = "https://sourceware.org/elfutils/ftp/${PV}/${BP}.tar.bz2"
-SRC_URI[md5sum] = "52adfa40758d0d39e5d5c57689bf38d6"
-SRC_URI[sha256sum] = "b88d07893ba1373c7dd69a7855974706d05377766568a7d9002706d5de72c276"
-
-SRC_URI += "\
-        file://dso-link-change.patch \
-        file://Fix_elf_cvt_gunhash.patch \
-        file://fixheadercheck.patch \
-        file://0001-elf_getarsym-Silence-Werror-maybe-uninitialized-fals.patch \
-        file://0001-remove-the-unneed-checking.patch \
-        file://0001-fix-a-stack-usage-warning.patch \
-        file://aarch64_uio.patch \
-        file://Fix_one_GCC7_warning.patch \
-        file://shadow.patch \
-"
-
-# pick the patch from debian
-# http://ftp.de.debian.org/debian/pool/main/e/elfutils/elfutils_0.168-0.2.debian.tar.xz
-SRC_URI += "\
-        file://debian/hppa_backend.diff \
-        file://debian/arm_backend.diff \
-        file://debian/mips_backend.diff \
-        file://debian/testsuite-ignore-elflint.diff \
-        file://debian/mips_readelf_w.patch \
-        file://debian/kfreebsd_path.patch \
-        file://debian/0001-Ignore-differences-between-mips-machine-identifiers.patch \
-        file://debian/0002-Add-support-for-mips64-abis-in-mips_retval.c.patch \
-        file://debian/0003-Add-mips-n64-relocation-format-hack.patch \
-        file://debian/hurd_path.patch \
-        file://debian/ignore_strmerge.diff \
-"
-# Fix the patches from Debian with GCC7
-SRC_URI += "file://fallthrough.patch"
-SRC_URI_append_libc-musl = " file://0001-build-Provide-alternatives-for-glibc-assumptions-hel.patch "
-
-# The buildsystem wants to generate 2 .h files from source using a binary it just built,
-# which can not pass the cross compiling, so let's work around it by adding 2 .h files
-# along with the do_configure_prepend()
-
-inherit autotools gettext
-
-EXTRA_OECONF = "--program-prefix=eu- --without-lzma"
-EXTRA_OECONF_append_class-native = " --without-bzlib"
-EXTRA_OECONF_append_libc-uclibc = " --enable-uclibc"
-
-do_install_append() {
-	if [ "${TARGET_ARCH}" != "x86_64" ] && [ -z `echo "${TARGET_ARCH}"|grep 'i.86'` ];then
-		rm -f ${D}${bindir}/eu-objdump
-	fi
-}
-
-# we can not build complete elfutils when using uclibc
-# but some recipes e.g. gcc 4.5 depends on libelf so we
-# build only libelf for uclibc case
-
-EXTRA_OEMAKE_libc-uclibc = "-C libelf"
-EXTRA_OEMAKE_class-native = ""
-EXTRA_OEMAKE_class-nativesdk = ""
-
-ALLOW_EMPTY_${PN}_libc-musl = "1"
-
-BBCLASSEXTEND = "native nativesdk"
-
-# Package utilities separately
-PACKAGES =+ "${PN}-binutils libelf libasm libdw"
-FILES_${PN}-binutils = "\
-    ${bindir}/eu-addr2line \
-    ${bindir}/eu-ld \
-    ${bindir}/eu-nm \
-    ${bindir}/eu-readelf \
-    ${bindir}/eu-size \
-    ${bindir}/eu-strip"
-
-FILES_libelf = "${libdir}/libelf-${PV}.so ${libdir}/libelf.so.*"
-FILES_libasm = "${libdir}/libasm-${PV}.so ${libdir}/libasm.so.*"
-FILES_libdw  = "${libdir}/libdw-${PV}.so ${libdir}/libdw.so.* ${libdir}/elfutils/lib*"
-# Some packages have the version preceding the .so instead of a properly
-# versioned .so.<version>, so we need to reorder and repackage.
-#FILES_${PN} += "${libdir}/*-${PV}.so ${base_libdir}/*-${PV}.so"
-#FILES_SOLIBSDEV = "${libdir}/libasm.so ${libdir}/libdw.so ${libdir}/libelf.so"
-
-# The package contains symlinks that trip up insane
-INSANE_SKIP_${MLPREFIX}libdw = "dev-so"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils_0.170.bb b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils_0.170.bb
new file mode 100644
index 0000000..3b81e28
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils_0.170.bb
@@ -0,0 +1,79 @@
+SUMMARY = "Utilities and libraries for handling compiled object files"
+HOMEPAGE = "https://sourceware.org/elfutils"
+SECTION = "base"
+LICENSE = "(GPLv3 & Elfutils-Exception)"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
+DEPENDS = "libtool bzip2 zlib virtual/libintl"
+DEPENDS_append_libc-musl = " argp-standalone fts "
+SRC_URI = "https://sourceware.org/elfutils/ftp/${PV}/${BP}.tar.bz2"
+SRC_URI[md5sum] = "03599aee98c9b726c7a732a2dd0245d5"
+SRC_URI[sha256sum] = "1f844775576b79bdc9f9c717a50058d08620323c1e935458223a12f249c9e066"
+
+SRC_URI += "\
+        file://0001-dso-link-change.patch \
+        file://0002-Fix-elf_cvt_gunhash-if-dest-and-src-are-same.patch \
+        file://0003-fixheadercheck.patch \
+        file://0004-Disable-the-test-to-convert-euc-jp.patch \
+        file://0005-fix-a-stack-usage-warning.patch \
+        file://0006-Fix-build-on-aarch64-musl.patch \
+        file://0007-Fix-control-path-where-we-have-str-as-uninitialized-.patch \
+        file://0001-libasm-may-link-with-libbz2-if-found.patch \
+"
+SRC_URI_append_libc-musl = " file://0008-build-Provide-alternatives-for-glibc-assumptions-hel.patch"
+
+# Pick patches from debian
+# http://ftp.de.debian.org/debian/pool/main/e/elfutils/elfutils_0.168-0.2.debian.tar.xz
+SRC_URI += "\
+        file://debian/hppa_backend.diff \
+        file://debian/arm_backend.diff \
+        file://debian/mips_backend.patch \
+        file://debian/mips_readelf_w.patch \
+        file://debian/0001-Ignore-differences-between-mips-machine-identifiers.patch \
+        file://debian/0002-Add-support-for-mips64-abis-in-mips_retval.c.patch \
+        file://debian/0003-Add-mips-n64-relocation-format-hack.patch \
+"
+# Fix the patches from Debian with GCC7
+SRC_URI += "file://debian/fallthrough.patch"
+
+# The build system wants to generate 2 .h files from source using a binary it just built,
+# which does not work when cross-compiling, so work around it by adding the 2 .h files
+# along with the do_configure_prepend()
+
+inherit autotools gettext
+
+EXTRA_OECONF = "--program-prefix=eu- --without-lzma"
+EXTRA_OECONF_append_class-native = " --without-bzlib"
+
+do_install_append() {
+	if [ "${TARGET_ARCH}" != "x86_64" ] && [ -z `echo "${TARGET_ARCH}"|grep 'i.86'` ];then
+		rm -f ${D}${bindir}/eu-objdump
+	fi
+}
+
+EXTRA_OEMAKE_class-native = ""
+EXTRA_OEMAKE_class-nativesdk = ""
+
+ALLOW_EMPTY_${PN}_libc-musl = "1"
+
+BBCLASSEXTEND = "native nativesdk"
+
+# Package utilities separately
+PACKAGES =+ "${PN}-binutils libelf libasm libdw"
+FILES_${PN}-binutils = "\
+    ${bindir}/eu-addr2line \
+    ${bindir}/eu-ld \
+    ${bindir}/eu-nm \
+    ${bindir}/eu-readelf \
+    ${bindir}/eu-size \
+    ${bindir}/eu-strip"
+
+FILES_libelf = "${libdir}/libelf-${PV}.so ${libdir}/libelf.so.*"
+FILES_libasm = "${libdir}/libasm-${PV}.so ${libdir}/libasm.so.*"
+FILES_libdw  = "${libdir}/libdw-${PV}.so ${libdir}/libdw.so.* ${libdir}/elfutils/lib*"
+# Some packages have the version preceding the .so instead of a properly
+# versioned .so.<version>, so we need to reorder and repackage.
+#FILES_${PN} += "${libdir}/*-${PV}.so ${base_libdir}/*-${PV}.so"
+#FILES_SOLIBSDEV = "${libdir}/libasm.so ${libdir}/libdw.so ${libdir}/libelf.so"
+
+# The package contains symlinks that trip up insane
+INSANE_SKIP_${MLPREFIX}libdw = "dev-so"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0001-dso-link-change.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0001-dso-link-change.patch
new file mode 100644
index 0000000..28c57f2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0001-dso-link-change.patch
@@ -0,0 +1,52 @@
+From 0a69a26c9f7487daca900db87cd1195857a4603f Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Tue, 15 Aug 2017 17:10:57 +0800
+Subject: [PATCH 1/7] dso link change
+
+Upstream-Status: Pending
+
+This patch makes the link to the dependencies of libdw explicit, as
+recent ld no longer implicitly links them. See
+http://lists.fedoraproject.org/pipermail/devel/2010-March/133601.html
+as a similar example of the error message you can encounter without this
+patch, and https://fedoraproject.org/wiki/UnderstandingDSOLinkChange and
+https://fedoraproject.org/wiki/Features/ChangeInImplicitDSOLinking for
+more details.
+
+Rebase to 0.170
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ src/Makefile.am   | 2 +-
+ tests/Makefile.am | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/src/Makefile.am b/src/Makefile.am
+index 2b1c0dc..9305b84 100644
+--- a/src/Makefile.am
++++ b/src/Makefile.am
+@@ -44,7 +44,7 @@ libdw = ../libdw/libdw.a -lz $(zip_LIBS) $(libelf) $(libebl) -ldl
+ libelf = ../libelf/libelf.a -lz
+ else
+ libasm = ../libasm/libasm.so
+-libdw = ../libdw/libdw.so
++libdw = ../libdw/libdw.so $(zip_LIBS) $(libelf) $(libebl) -ldl
+ libelf = ../libelf/libelf.so
+ endif
+ libebl = ../libebl/libebl.a
+diff --git a/tests/Makefile.am b/tests/Makefile.am
+index 3735084..528615d 100644
+--- a/tests/Makefile.am
++++ b/tests/Makefile.am
+@@ -400,7 +400,7 @@ libdw = ../libdw/libdw.a -lz $(zip_LIBS) $(libelf) $(libebl) -ldl
+ libelf = ../libelf/libelf.a -lz
+ libasm = ../libasm/libasm.a
+ else
+-libdw = ../libdw/libdw.so
++libdw = ../libdw/libdw.so $(zip_LIBS) $(libelf) $(libebl) -ldl
+ libelf = ../libelf/libelf.so
+ libasm = ../libasm/libasm.so
+ endif
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0001-libasm-may-link-with-libbz2-if-found.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0001-libasm-may-link-with-libbz2-if-found.patch
new file mode 100644
index 0000000..fb0b060
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0001-libasm-may-link-with-libbz2-if-found.patch
@@ -0,0 +1,39 @@
+From 7672e363468271b4c63ff58770c5aac15ab8f722 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 4 Oct 2017 22:30:46 -0700
+Subject: [PATCH] libasm may link with libbz2 if found
+
+This can fail to link binaries like objdump,
+where indirect libraries may not be found by the linker.
+
+| /mnt/a/oe/build/tmp/work/riscv64-bec-linux/elfutils/0.170-r0/recipe-sysroot/usr/lib/libbz2.so.1: error adding symbols: DSO missing from command line
+| collect2: error: ld returned 1 exit status
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/Makefile.am | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/src/Makefile.am b/src/Makefile.am
+index e03bc32..9b7e853 100644
+--- a/src/Makefile.am
++++ b/src/Makefile.am
+@@ -39,11 +39,11 @@ EXTRA_DIST += make-debug-archive.in
+ CLEANFILES += make-debug-archive
+ 
+ if BUILD_STATIC
+-libasm = ../libasm/libasm.a
++libasm = ../libasm/libasm.a $(zip_LIBS)
+ libdw = ../libdw/libdw.a -lz $(zip_LIBS) $(libelf) $(libebl) -ldl
+ libelf = ../libelf/libelf.a -lz
+ else
+-libasm = ../libasm/libasm.so
++libasm = ../libasm/libasm.so $(zip_LIBS)
+ libdw = ../libdw/libdw.so $(zip_LIBS) $(libelf) $(libebl) -ldl
+ libelf = ../libelf/libelf.so
+ endif
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0002-Fix-elf_cvt_gunhash-if-dest-and-src-are-same.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0002-Fix-elf_cvt_gunhash-if-dest-and-src-are-same.patch
new file mode 100644
index 0000000..2f718eb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0002-Fix-elf_cvt_gunhash-if-dest-and-src-are-same.patch
@@ -0,0 +1,42 @@
+From e98670f7c7b4c73fb65534949716fd8d043960d5 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Tue, 15 Aug 2017 17:13:59 +0800
+Subject: [PATCH 2/7] Fix elf_cvt_gunhash if dest and src are same.
+
+Upstream-Status: Pending
+
+The 'dest' and 'src' can be the same, so we need to save the value of src32[2]
+before swapping it.
+
+Signed-off-by: Baoshan Pang <BaoShan.Pang@windriver.com>
+
+Rebase to 0.170
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ libelf/gnuhash_xlate.h | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/libelf/gnuhash_xlate.h b/libelf/gnuhash_xlate.h
+index 6faf113..04d9ca1 100644
+--- a/libelf/gnuhash_xlate.h
++++ b/libelf/gnuhash_xlate.h
+@@ -40,6 +40,7 @@ elf_cvt_gnuhash (void *dest, const void *src, size_t len, int encode)
+      words.  We must detangle them here.   */
+   Elf32_Word *dest32 = dest;
+   const Elf32_Word *src32 = src;
++  Elf32_Word save_src32_2 = src32[2]; // dest could be equal to src
+ 
+   /* First four control words, 32 bits.  */
+   for (unsigned int cnt = 0; cnt < 4; ++cnt)
+@@ -50,7 +51,7 @@ elf_cvt_gnuhash (void *dest, const void *src, size_t len, int encode)
+       len -= 4;
+     }
+ 
+-  Elf32_Word bitmask_words = encode ? src32[2] : dest32[2];
++  Elf32_Word bitmask_words = encode ? save_src32_2 : dest32[2];
+ 
+   /* Now the 64 bit words.  */
+   Elf64_Xword *dest64 = (Elf64_Xword *) &dest32[4];
+-- 
+1.8.3.1
+
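
The change above is easiest to see with a standalone sketch. The snippet below is hypothetical illustration code, not elfutils source: it mimics an in-place conversion (dest == src), byte-swaps the header words, and shows that the original src32[2] value is only available afterwards if it was saved first, which is exactly what the patch does.

    /* Hypothetical illustration, not elfutils code. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t bswap32 (uint32_t v)
    {
      return (v >> 24) | ((v >> 8) & 0x0000ff00u)
             | ((v << 8) & 0x00ff0000u) | (v << 24);
    }

    int main (void)
    {
      uint32_t words[4] = { 3, 1, 2, 0x20 };  /* words[2]: bitmask word count */
      uint32_t *dest = words;                 /* in-place: dest == src */
      const uint32_t *src = words;

      uint32_t saved = src[2];                /* save before swapping, as the patch does */
      for (int i = 0; i < 4; i++)
        dest[i] = bswap32 (src[i]);

      printf ("swapped src[2] = %" PRIu32 ", saved value = %" PRIu32 "\n",
              dest[2], saved);
      return 0;
    }
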
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0003-fixheadercheck.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0003-fixheadercheck.patch
new file mode 100644
index 0000000..7c49fce
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0003-fixheadercheck.patch
@@ -0,0 +1,40 @@
+From 565d5935abf5b58773f9c8385c00189221980d98 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Tue, 15 Aug 2017 17:17:20 +0800
+Subject: [PATCH 3/7] fixheadercheck
+
+For some binaries we can get an invalid section alignment, for example if
+sh_align = 1 and sh_addralign is 0. In the case of a zero-size section
+like ".note.GNU-stack", this is irrelevant as far as I can tell and we
+shouldn't error in this case.
+
+RP 2014/6/11
+
+Upstream-Status: Pending
+
+Rebase to 0.170
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ libelf/elf32_updatenull.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/libelf/elf32_updatenull.c b/libelf/elf32_updatenull.c
+index d83c0b3..a51bf70 100644
+--- a/libelf/elf32_updatenull.c
++++ b/libelf/elf32_updatenull.c
+@@ -339,8 +339,8 @@ __elfw2(LIBELFBITS,updatenull_wrlock) (Elf *elf, int *change_bop, size_t shnum)
+ 		     we test for the alignment of the section being large
+ 		     enough for the largest alignment required by a data
+ 		     block.  */
+-		  if (unlikely (! powerof2 (shdr->sh_addralign))
+-		      || unlikely ((shdr->sh_addralign ?: 1) < sh_align))
++		  if (shdr->sh_size && (unlikely (! powerof2 (shdr->sh_addralign))
++		      || unlikely ((shdr->sh_addralign ?: 1) < sh_align)))
+ 		    {
+ 		      __libelf_seterrno (ELF_E_INVALID_ALIGN);
+ 		      return -1;
+-- 
+1.8.3.1
+
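
As a rough illustration of the relaxed check, the following standalone sketch (assumed names, not elfutils code) reproduces the logic: an empty section such as .note.GNU-stack is accepted regardless of sh_addralign, while a non-empty section with a bogus alignment is still rejected.

    /* Hypothetical sketch of the relaxed alignment check; not elfutils code. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static bool powerof2 (uint64_t x)   /* same meaning as the glibc macro */
    {
      return ((x - 1) & x) == 0;
    }

    static bool alignment_ok (uint64_t sh_size, uint64_t sh_addralign,
                              uint64_t required_align)
    {
      if (sh_size == 0)
        return true;                    /* empty section: nothing to misalign */
      uint64_t effective = sh_addralign ? sh_addralign : 1;
      return powerof2 (sh_addralign) && effective >= required_align;
    }

    int main (void)
    {
      printf ("%d\n", alignment_ok (0, 0, 1));   /* 1: zero-size .note.GNU-stack passes */
      printf ("%d\n", alignment_ok (16, 3, 1));  /* 0: non-power-of-two alignment fails */
      return 0;
    }
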
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0004-Disable-the-test-to-convert-euc-jp.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0004-Disable-the-test-to-convert-euc-jp.patch
new file mode 100644
index 0000000..d893ad6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0004-Disable-the-test-to-convert-euc-jp.patch
@@ -0,0 +1,44 @@
+From bb7ed11950101798aae82f7fda8b3dcb05f755c5 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Tue, 15 Aug 2017 17:24:06 +0800
+Subject: [PATCH 4/7] Disable the test to convert euc-jp
+
+Remove the test "Test against HP-UX 11.11 bug:
+No converter from EUC-JP to UTF-8 is provided"
+since we don't support HP-UX, and if the euc-jp converter is not
+installed on the host, the dependency will be built without
+iconv support and will cause the guild-native build to fail.
+
+Upstream-Status: Inappropriate [OE specific]
+
+Signed-off-by: Roy Li <rongqing.li@windriver.com>
+
+Rebase to 0.170
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ m4/iconv.m4 | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/m4/iconv.m4 b/m4/iconv.m4
+index a503646..299f1eb 100644
+--- a/m4/iconv.m4
++++ b/m4/iconv.m4
+@@ -159,6 +159,7 @@ int main ()
+       }
+   }
+ #endif
++#if 0
+   /* Test against HP-UX 11.11 bug: No converter from EUC-JP to UTF-8 is
+      provided.  */
+   if (/* Try standardized names.  */
+@@ -170,6 +171,7 @@ int main ()
+       /* Try HP-UX names.  */
+       && iconv_open ("utf8", "eucJP") == (iconv_t)(-1))
+     result |= 16;
++#endif
+   return result;
+ }]])],
+         [am_cv_func_iconv_works=yes],
+-- 
+1.8.3.1
+
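
For context, the configure probe that this patch comments out boils down to asking the host iconv for an EUC-JP to UTF-8 converter. A minimal standalone check (illustrative only, not the actual configure test) looks like this:

    /* Illustrative only; not the actual configure test. */
    #include <iconv.h>
    #include <stdio.h>

    int main (void)
    {
      iconv_t cd = iconv_open ("UTF-8", "EUC-JP");
      if (cd == (iconv_t) -1)
        {
          puts ("no EUC-JP to UTF-8 converter on this host");
          return 1;   /* the original probe would mark iconv as broken here */
        }
      puts ("EUC-JP to UTF-8 converter available");
      iconv_close (cd);
      return 0;
    }
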
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0005-fix-a-stack-usage-warning.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0005-fix-a-stack-usage-warning.patch
new file mode 100644
index 0000000..22a01cf
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0005-fix-a-stack-usage-warning.patch
@@ -0,0 +1,35 @@
+From dd6dbf6af396519380f48c0ef1ce6cf4dd77f6d7 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Tue, 15 Aug 2017 17:25:16 +0800
+Subject: [PATCH 5/7] fix a stack-usage warning
+
+Upstream-Status: Pending
+
+Do not use a variable as an array size, otherwise the warning-turned-error
+that stack usage might be unbounded [-Werror=stack-usage=] will be raised.
+
+Signed-off-by: Roy Li <rongqing.li@windriver.com>
+
+Rebase to 0.170
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ backends/ppc_initreg.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/backends/ppc_initreg.c b/backends/ppc_initreg.c
+index 69d623b..de41dec 100644
+--- a/backends/ppc_initreg.c
++++ b/backends/ppc_initreg.c
+@@ -93,7 +93,7 @@ ppc_set_initial_registers_tid (pid_t tid __attribute__ ((unused)),
+ 	return false;
+     }
+   const size_t gprs = sizeof (user_regs.r.gpr) / sizeof (*user_regs.r.gpr);
+-  Dwarf_Word dwarf_regs[gprs];
++  Dwarf_Word dwarf_regs[sizeof (user_regs.r.gpr) / sizeof (*user_regs.r.gpr)];
+   for (unsigned gpr = 0; gpr < gprs; gpr++)
+     dwarf_regs[gpr] = user_regs.r.gpr[gpr];
+   if (! setfunc (0, gprs, dwarf_regs, arg))
+-- 
+1.8.3.1
+
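
The warning being silenced comes from GCC treating an array whose size is a const variable as a VLA. A small standalone sketch (hypothetical names, not the elfutils ppc backend) shows the before and after shape of the code:

    /* Hypothetical sketch; not the elfutils ppc backend. */
    #include <stdio.h>

    struct regs { unsigned long gpr[32]; };

    /* Before: 'n' is a const variable, so 'copy' is formally a VLA and
       -Werror=stack-usage= reports the frame size as potentially unbounded. */
    static unsigned long sum_vla (const struct regs *r)
    {
      const unsigned int n = sizeof r->gpr / sizeof r->gpr[0];
      unsigned long copy[n];
      unsigned long s = 0;
      for (unsigned int i = 0; i < n; i++)
        s += copy[i] = r->gpr[i];
      return s;
    }

    /* After: the size is a constant expression, so 'copy' is an ordinary
       fixed-size array and the frame size is known at compile time. */
    static unsigned long sum_fixed (const struct regs *r)
    {
      unsigned long copy[sizeof r->gpr / sizeof r->gpr[0]];
      unsigned long s = 0;
      for (unsigned int i = 0; i < sizeof r->gpr / sizeof r->gpr[0]; i++)
        s += copy[i] = r->gpr[i];
      return s;
    }

    int main (void)
    {
      struct regs r = { { 1, 2, 3 } };
      printf ("%lu %lu\n", sum_vla (&r), sum_fixed (&r));
      return 0;
    }
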
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0006-Fix-build-on-aarch64-musl.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0006-Fix-build-on-aarch64-musl.patch
new file mode 100644
index 0000000..5f29a03
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0006-Fix-build-on-aarch64-musl.patch
@@ -0,0 +1,61 @@
+From e57ad47fc8549353ca80c23b9b4f38f31fde13e5 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Tue, 15 Aug 2017 17:27:30 +0800
+Subject: [PATCH 6/7] Fix build on aarch64/musl
+
+Errors
+
+invalid operands to binary & (have 'long double' and 'unsigned int')
+
+error: redefinition of 'struct iovec'
+ struct iovec { void *iov_base; size_t iov_len; };
+        ^
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Rebase to 0.170
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ backends/aarch64_initreg.c | 4 ++--
+ backends/arm_initreg.c     | 2 +-
+ 2 files changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/backends/aarch64_initreg.c b/backends/aarch64_initreg.c
+index daf6f37..6445276 100644
+--- a/backends/aarch64_initreg.c
++++ b/backends/aarch64_initreg.c
+@@ -33,7 +33,7 @@
+ #include "system.h"
+ #include <assert.h>
+ #if defined(__aarch64__) && defined(__linux__)
+-# include <linux/uio.h>
++# include <sys/uio.h>
+ # include <sys/user.h>
+ # include <sys/ptrace.h>
+ /* Deal with old glibc defining user_pt_regs instead of user_regs_struct.  */
+@@ -82,7 +82,7 @@ aarch64_set_initial_registers_tid (pid_t tid __attribute__ ((unused)),
+ 
+   Dwarf_Word dwarf_fregs[32];
+   for (int r = 0; r < 32; r++)
+-    dwarf_fregs[r] = fregs.vregs[r] & 0xFFFFFFFF;
++    dwarf_fregs[r] = (unsigned int)fregs.vregs[r] & 0xFFFFFFFF;
+ 
+   if (! setfunc (64, 32, dwarf_fregs, arg))
+     return false;
+diff --git a/backends/arm_initreg.c b/backends/arm_initreg.c
+index efcabaf..062bb9e 100644
+--- a/backends/arm_initreg.c
++++ b/backends/arm_initreg.c
+@@ -38,7 +38,7 @@
+ #endif
+ 
+ #ifdef __aarch64__
+-# include <linux/uio.h>
++# include <sys/uio.h>
+ # include <sys/user.h>
+ # include <sys/ptrace.h>
+ /* Deal with old glibc defining user_pt_regs instead of user_regs_struct.  */
+-- 
+1.8.3.1
+
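
The cast added above addresses the first error: on musl/aarch64 the vregs appear as a floating-point type (note the 'long double' in the quoted error), and C does not define bitwise '&' for floating operands. A tiny standalone sketch of the fix (illustrative, not backend code):

    /* Illustrative only; not the elfutils aarch64 backend. */
    #include <stdio.h>

    int main (void)
    {
      long double vreg = 5.0L;   /* stands in for fregs.vregs[r] */
      /* 'vreg & 0xFFFFFFFF' would not compile: '&' requires integer operands.
         Converting to an integer type first, as the patch does, is well defined. */
      unsigned long low = (unsigned int) vreg & 0xFFFFFFFFu;
      printf ("%lu\n", low);
      return 0;
    }
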
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0007-Fix-control-path-where-we-have-str-as-uninitialized-.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0007-Fix-control-path-where-we-have-str-as-uninitialized-.patch
new file mode 100644
index 0000000..2247704
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0007-Fix-control-path-where-we-have-str-as-uninitialized-.patch
@@ -0,0 +1,45 @@
+From 1e91c1d4e37c05cf95058b4b3c3f352d72886f58 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Tue, 15 Aug 2017 17:31:38 +0800
+Subject: [PATCH 7/7] Fix control path where we have str as uninitialized
+ string
+
+| /home/ubuntu/work/oe/openembedded-core/build/tmp-musl/work/i586-oe-linux-musl/elfutils/0.164-r0/elfutils-0.164/libcpu/i386_disasm.c: In function 'i386_disasm':
+| /home/ubuntu/work/oe/openembedded-core/build/tmp-musl/work/i586-oe-linux-musl/elfutils/0.164-r0/elfutils-0.164/libcpu/i386_disasm.c:310:5: error: 'str' may be used uninitialized in this function [-Werror=maybe-uninitialized]
+|      memcpy (buf + bufcnt, _str, _len);           \
+|      ^
+| /home/ubuntu/work/oe/openembedded-core/build/tmp-musl/work/i586-oe-linux-musl/elfutils/0.164-r0/elfutils-0.164/libcpu/i386_disasm.c:709:17: note: 'str' was declared here
+|      const char *str;
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Upstream-Status: Pending
+
+Rebase to 0.170
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ libcpu/i386_disasm.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/libcpu/i386_disasm.c b/libcpu/i386_disasm.c
+index 831afbe..60fd6d4 100644
+--- a/libcpu/i386_disasm.c
++++ b/libcpu/i386_disasm.c
+@@ -821,6 +821,7 @@ i386_disasm (Ebl *ebl __attribute__((unused)),
+ 			    }
+ 			  /* Fallthrough */
+ 			default:
++			  str = "";
+ 			  assert (! "INVALID not handled");
+ 			}
+ 		    }
+-- 
+1.8.3.1
+
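
The one-line assignment above exists because, with NDEBUG defined, the assert() compiles away and GCC can see a path where 'str' is read unset. A minimal standalone reproduction of the pattern (hypothetical, not the disassembler code):

    /* Hypothetical reproduction of the pattern; not libcpu/i386_disasm.c. */
    #include <assert.h>
    #include <stdio.h>

    static const char *name_for (int op)
    {
      const char *str;
      switch (op)
        {
        case 0: str = "add"; break;
        case 1: str = "sub"; break;
        default:
          str = "";                      /* keeps the NDEBUG path defined */
          assert (! "INVALID not handled");
        }
      return str;
    }

    int main (void)
    {
      puts (name_for (0));
      return 0;
    }
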
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0008-build-Provide-alternatives-for-glibc-assumptions-hel.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0008-build-Provide-alternatives-for-glibc-assumptions-hel.patch
new file mode 100644
index 0000000..8864d44
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/0008-build-Provide-alternatives-for-glibc-assumptions-hel.patch
@@ -0,0 +1,1031 @@
+From 010b0c57e748440eb1ceb3d977875f2488d2b4ce Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 16 Aug 2017 10:06:26 +0800
+Subject: [PATCH] build: Provide alternatives for glibc assumptions helps
+ compiling it on musl
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Rebase to 0.170
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ Makefile.am                      |  2 +-
+ lib/color.c                      |  3 ++-
+ lib/fixedsizehash.h              |  1 -
+ lib/system.h                     | 10 ++++++++++
+ lib/xmalloc.c                    |  2 +-
+ libasm/asm_end.c                 |  2 +-
+ libasm/asm_newscn.c              |  2 +-
+ libcpu/i386_gendis.c             |  2 +-
+ libcpu/i386_lex.c                |  2 +-
+ libcpu/i386_parse.c              |  2 +-
+ libdw/Makefile.am                |  3 ++-
+ libdw/libdw_alloc.c              |  2 +-
+ libdwfl/dwfl_build_id_find_elf.c |  3 ++-
+ libdwfl/dwfl_error.c             |  4 +++-
+ libdwfl/dwfl_module_getdwarf.c   |  1 +
+ libdwfl/find-debuginfo.c         |  2 +-
+ libdwfl/libdwfl_crc32_file.c     |  9 +++++++++
+ libdwfl/linux-kernel-modules.c   |  1 +
+ libebl/eblopenbackend.c          |  2 +-
+ libelf/elf.h                     |  8 ++++++--
+ libelf/libelf.h                  |  1 +
+ libelf/libelfP.h                 |  1 +
+ src/addr2line.c                  |  2 +-
+ src/ar.c                         |  2 +-
+ src/arlib.c                      |  2 +-
+ src/arlib2.c                     |  2 +-
+ src/elfcmp.c                     |  2 +-
+ src/elflint.c                    |  2 +-
+ src/findtextrel.c                |  2 +-
+ src/nm.c                         |  2 +-
+ src/objdump.c                    |  2 +-
+ src/ranlib.c                     |  2 +-
+ src/readelf.c                    |  2 +-
+ src/size.c                       |  2 +-
+ src/stack.c                      |  2 +-
+ src/strings.c                    |  2 +-
+ src/strip.c                      |  2 +-
+ src/unstrip.c                    |  2 +-
+ tests/addrscopes.c               |  2 +-
+ tests/allregs.c                  |  2 +-
+ tests/backtrace-data.c           |  2 +-
+ tests/backtrace-dwarf.c          |  2 +-
+ tests/backtrace.c                |  2 +-
+ tests/buildid.c                  |  2 +-
+ tests/debugaltlink.c             |  2 +-
+ tests/debuglink.c                |  2 +-
+ tests/deleted.c                  |  2 +-
+ tests/dwfl-addr-sect.c           |  2 +-
+ tests/dwfl-bug-addr-overflow.c   |  2 +-
+ tests/dwfl-bug-fd-leak.c         |  2 +-
+ tests/dwfl-bug-getmodules.c      |  2 +-
+ tests/dwfl-report-elf-align.c    |  2 +-
+ tests/dwfllines.c                |  2 +-
+ tests/dwflmodtest.c              |  2 +-
+ tests/dwflsyms.c                 |  2 +-
+ tests/early-offscn.c             |  2 +-
+ tests/ecp.c                      |  2 +-
+ tests/find-prologues.c           |  2 +-
+ tests/funcretval.c               |  2 +-
+ tests/funcscopes.c               |  2 +-
+ tests/getsrc_die.c               |  2 +-
+ tests/line2addr.c                |  2 +-
+ tests/low_high_pc.c              |  2 +-
+ tests/md5-sha1-test.c            |  2 +-
+ tests/rdwrmmap.c                 |  2 +-
+ tests/saridx.c                   |  2 +-
+ tests/sectiondump.c              |  2 +-
+ tests/varlocs.c                  |  2 +-
+ tests/vdsosyms.c                 |  2 +-
+ 69 files changed, 95 insertions(+), 64 deletions(-)
+
+diff --git a/Makefile.am b/Makefile.am
+index 2ff444e..41f77df 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -28,7 +28,7 @@ pkginclude_HEADERS = version.h
+ 
+ # Add doc back when we have some real content.
+ SUBDIRS = config m4 lib libelf libebl libdwelf libdwfl libdw libcpu libasm \
+-	  backends src po tests
++	  backends po tests
+ 
+ EXTRA_DIST = elfutils.spec GPG-KEY NOTES CONTRIBUTING \
+ 	     COPYING COPYING-GPLV2 COPYING-LGPLV3
+diff --git a/lib/color.c b/lib/color.c
+index f62389d..a2a84b4 100644
+--- a/lib/color.c
++++ b/lib/color.c
+@@ -32,13 +32,14 @@
+ #endif
+ 
+ #include <argp.h>
+-#include <error.h>
++#include <err.h>
+ #include <libintl.h>
+ #include <stdlib.h>
+ #include <string.h>
+ #include <unistd.h>
+ #include "libeu.h"
+ #include "color.h"
++#include "system.h"
+ 
+ /* Prototype for option handler.  */
+ static error_t parse_opt (int key, char *arg, struct argp_state *state);
+diff --git a/lib/fixedsizehash.h b/lib/fixedsizehash.h
+index dac2a5f..43016fc 100644
+--- a/lib/fixedsizehash.h
++++ b/lib/fixedsizehash.h
+@@ -30,7 +30,6 @@
+ #include <errno.h>
+ #include <stdlib.h>
+ #include <string.h>
+-#include <sys/cdefs.h>
+ 
+ #include <system.h>
+ 
+diff --git a/lib/system.h b/lib/system.h
+index 9203335..1a60131 100644
+--- a/lib/system.h
++++ b/lib/system.h
+@@ -50,6 +50,16 @@
+ #else
+ # error "Unknown byte order"
+ #endif
++#ifndef TEMP_FAILURE_RETRY
++#define TEMP_FAILURE_RETRY(expression) \
++  (__extension__							      \
++    ({ long int __result;						      \
++       do __result = (long int) (expression);				      \
++       while (__result == -1L && errno == EINTR);			      \
++       __result; }))
++#endif
++
++#define error(status, errno, ...) err(status, __VA_ARGS__)
+ 
+ #ifndef MAX
+ #define MAX(m, n) ((m) < (n) ? (n) : (m))
+diff --git a/lib/xmalloc.c b/lib/xmalloc.c
+index 0cde384..217b054 100644
+--- a/lib/xmalloc.c
++++ b/lib/xmalloc.c
+@@ -30,7 +30,7 @@
+ # include <config.h>
+ #endif
+ 
+-#include <error.h>
++#include <err.h>
+ #include <libintl.h>
+ #include <stddef.h>
+ #include <stdlib.h>
+diff --git a/libasm/asm_end.c b/libasm/asm_end.c
+index ced24f5..4ad918c 100644
+--- a/libasm/asm_end.c
++++ b/libasm/asm_end.c
+@@ -32,7 +32,7 @@
+ #endif
+ 
+ #include <assert.h>
+-#include <error.h>
++#include <err.h>
+ #include <libintl.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+diff --git a/libasm/asm_newscn.c b/libasm/asm_newscn.c
+index ddbb25d..74a598d 100644
+--- a/libasm/asm_newscn.c
++++ b/libasm/asm_newscn.c
+@@ -32,7 +32,7 @@
+ #endif
+ 
+ #include <assert.h>
+-#include <error.h>
++#include <err.h>
+ #include <libintl.h>
+ #include <stdlib.h>
+ #include <string.h>
+diff --git a/libcpu/i386_gendis.c b/libcpu/i386_gendis.c
+index aae5eae..6d76016 100644
+--- a/libcpu/i386_gendis.c
++++ b/libcpu/i386_gendis.c
+@@ -31,7 +31,7 @@
+ # include <config.h>
+ #endif
+ 
+-#include <error.h>
++#include <err.h>
+ #include <errno.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+diff --git a/libcpu/i386_lex.c b/libcpu/i386_lex.c
+index ba5f4aa..b1e4191 100644
+--- a/libcpu/i386_lex.c
++++ b/libcpu/i386_lex.c
+@@ -577,7 +577,7 @@ char *i386_text;
+ #endif
+ 
+ #include <ctype.h>
+-#include <error.h>
++#include <err.h>
+ #include <libintl.h>
+ 
+ #include <libeu.h>
+diff --git a/libcpu/i386_parse.c b/libcpu/i386_parse.c
+index ef1ac35..48f2e64 100644
+--- a/libcpu/i386_parse.c
++++ b/libcpu/i386_parse.c
+@@ -107,7 +107,7 @@
+ #include <assert.h>
+ #include <ctype.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <inttypes.h>
+ #include <libintl.h>
+ #include <math.h>
+diff --git a/libdw/Makefile.am b/libdw/Makefile.am
+index ff8c291..89e792a 100644
+--- a/libdw/Makefile.am
++++ b/libdw/Makefile.am
+@@ -105,7 +105,8 @@ am_libdw_pic_a_OBJECTS = $(libdw_a_SOURCES:.c=.os)
+ libdw_so_LIBS = libdw_pic.a ../libdwelf/libdwelf_pic.a \
+ 	  ../libdwfl/libdwfl_pic.a ../libebl/libebl.a
+ libdw_so_DEPS = ../lib/libeu.a ../libelf/libelf.so
+-libdw_so_LDLIBS = $(libdw_so_DEPS) -ldl -lz $(argp_LDADD) $(zip_LIBS)
++fts_LDADD = -lfts
++libdw_so_LDLIBS = $(libdw_so_DEPS) -ldl -lz $(argp_LDADD) $(zip_LIBS) $(fts_LDADD)
+ libdw_so_SOURCES =
+ libdw.so$(EXEEXT): $(srcdir)/libdw.map $(libdw_so_LIBS) $(libdw_so_DEPS)
+ # The rpath is necessary for libebl because its $ORIGIN use will
+diff --git a/libdw/libdw_alloc.c b/libdw/libdw_alloc.c
+index 28a8cf6..29aeb3f 100644
+--- a/libdw/libdw_alloc.c
++++ b/libdw/libdw_alloc.c
+@@ -31,7 +31,7 @@
+ # include <config.h>
+ #endif
+ 
+-#include <error.h>
++#include <err.h>
+ #include <errno.h>
+ #include <stdlib.h>
+ #include "libdwP.h"
+diff --git a/libdwfl/dwfl_build_id_find_elf.c b/libdwfl/dwfl_build_id_find_elf.c
+index ee0c164..b06ab59 100644
+--- a/libdwfl/dwfl_build_id_find_elf.c
++++ b/libdwfl/dwfl_build_id_find_elf.c
+@@ -31,6 +31,7 @@
+ #endif
+ 
+ #include "libdwflP.h"
++#include "system.h"
+ #include <inttypes.h>
+ #include <fcntl.h>
+ #include <unistd.h>
+@@ -99,7 +100,7 @@ __libdwfl_open_by_build_id (Dwfl_Module *mod, bool debug, char **file_name,
+ 	{
+ 	  if (*file_name != NULL)
+ 	    free (*file_name);
+-	  *file_name = canonicalize_file_name (name);
++	  *file_name = realpath (name, NULL);
+ 	  if (*file_name == NULL)
+ 	    {
+ 	      *file_name = name;
+diff --git a/libdwfl/dwfl_error.c b/libdwfl/dwfl_error.c
+index 7bcf61c..c345797 100644
+--- a/libdwfl/dwfl_error.c
++++ b/libdwfl/dwfl_error.c
+@@ -140,6 +140,7 @@ __libdwfl_seterrno (Dwfl_Error error)
+ const char *
+ dwfl_errmsg (int error)
+ {
++  static __thread char s[64] = "";
+   if (error == 0 || error == -1)
+     {
+       int last_error = global_error;
+@@ -154,7 +155,8 @@ dwfl_errmsg (int error)
+   switch (error &~ 0xffff)
+     {
+     case OTHER_ERROR (ERRNO):
+-      return strerror_r (error & 0xffff, "bad", 0);
++      strerror_r (error & 0xffff, s, sizeof(s));
++      return s;
+     case OTHER_ERROR (LIBELF):
+       return elf_errmsg (error & 0xffff);
+     case OTHER_ERROR (LIBDW):
+diff --git a/libdwfl/dwfl_module_getdwarf.c b/libdwfl/dwfl_module_getdwarf.c
+index 9775ace..511c4a6 100644
+--- a/libdwfl/dwfl_module_getdwarf.c
++++ b/libdwfl/dwfl_module_getdwarf.c
+@@ -35,6 +35,7 @@
+ #include <fcntl.h>
+ #include <string.h>
+ #include <unistd.h>
++#include "system.h"
+ #include "../libdw/libdwP.h"	/* DWARF_E_* values are here.  */
+ #include "../libelf/libelfP.h"
+ #include "system.h"
+diff --git a/libdwfl/find-debuginfo.c b/libdwfl/find-debuginfo.c
+index 6d5a42a..9267788 100644
+--- a/libdwfl/find-debuginfo.c
++++ b/libdwfl/find-debuginfo.c
+@@ -389,7 +389,7 @@ dwfl_standard_find_debuginfo (Dwfl_Module *mod,
+       /* If FILE_NAME is a symlink, the debug file might be associated
+ 	 with the symlink target name instead.  */
+ 
+-      char *canon = canonicalize_file_name (file_name);
++      char *canon = realpath (file_name, NULL);
+       if (canon != NULL && strcmp (file_name, canon))
+ 	fd = find_debuginfo_in_path (mod, canon,
+ 				     debuglink_file, debuglink_crc,
+diff --git a/libdwfl/libdwfl_crc32_file.c b/libdwfl/libdwfl_crc32_file.c
+index f849128..6f0aca1 100644
+--- a/libdwfl/libdwfl_crc32_file.c
++++ b/libdwfl/libdwfl_crc32_file.c
+@@ -29,6 +29,15 @@
+ # include <config.h>
+ #endif
+ 
++#ifndef TEMP_FAILURE_RETRY
++#define TEMP_FAILURE_RETRY(expression) \
++  (__extension__                                                             \
++    ({ long int __result;                                                    \
++       do __result = (long int) (expression);                                \
++       while (__result == -1L && errno == EINTR);                            \
++       __result; }))
++#endif
++
+ #define crc32_file attribute_hidden __libdwfl_crc32_file
+ #define crc32 __libdwfl_crc32
+ #include <libdwflP.h>
+diff --git a/libdwfl/linux-kernel-modules.c b/libdwfl/linux-kernel-modules.c
+index 9d0fef2..9fc09b8 100644
+--- a/libdwfl/linux-kernel-modules.c
++++ b/libdwfl/linux-kernel-modules.c
+@@ -40,6 +40,7 @@
+ #include <system.h>
+ 
+ #include "libdwflP.h"
++#include "system.h"
+ #include <inttypes.h>
+ #include <errno.h>
+ #include <stdio.h>
+diff --git a/libebl/eblopenbackend.c b/libebl/eblopenbackend.c
+index 5371396..2e66dfd 100644
+--- a/libebl/eblopenbackend.c
++++ b/libebl/eblopenbackend.c
+@@ -32,7 +32,7 @@
+ 
+ #include <assert.h>
+ #include <dlfcn.h>
+-#include <error.h>
++#include <err.h>
+ #include <libelfP.h>
+ #include <dwarf.h>
+ #include <stdlib.h>
+diff --git a/libelf/elf.h b/libelf/elf.h
+index 5cf2b93..990b3af 100644
+--- a/libelf/elf.h
++++ b/libelf/elf.h
+@@ -21,7 +21,9 @@
+ 
+ #include <features.h>
+ 
+-__BEGIN_DECLS
++#ifdef __cplusplus
++extern "C" {
++#endif
+ 
+ /* Standard ELF types.  */
+ 
+@@ -3705,6 +3707,8 @@ enum
+ #define R_BPF_NONE		0	/* No reloc */
+ #define R_BPF_MAP_FD		1	/* Map fd to pointer */
+ 
+-__END_DECLS
++#ifdef __cplusplus
++}
++#endif
+ 
+ #endif	/* elf.h */
+diff --git a/libelf/libelf.h b/libelf/libelf.h
+index 547c0f5..dd78799 100644
+--- a/libelf/libelf.h
++++ b/libelf/libelf.h
+@@ -29,6 +29,7 @@
+ #ifndef _LIBELF_H
+ #define _LIBELF_H 1
+ 
++#include <fcntl.h>
+ #include <stdint.h>
+ #include <sys/types.h>
+ 
+diff --git a/libelf/libelfP.h b/libelf/libelfP.h
+index 7ee6625..5840899 100644
+--- a/libelf/libelfP.h
++++ b/libelf/libelfP.h
+@@ -32,6 +32,7 @@
+ 
+ #include <ar.h>
+ #include <gelf.h>
++#include <libelf.h>
+ 
+ #include <errno.h>
+ #include <stdbool.h>
+diff --git a/src/addr2line.c b/src/addr2line.c
+index ba414a7..04b7116 100644
+--- a/src/addr2line.c
++++ b/src/addr2line.c
+@@ -23,7 +23,7 @@
+ #include <argp.h>
+ #include <assert.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <inttypes.h>
+ #include <libdwfl.h>
+diff --git a/src/ar.c b/src/ar.c
+index ec32cee..4efd729 100644
+--- a/src/ar.c
++++ b/src/ar.c
+@@ -22,7 +22,7 @@
+ 
+ #include <argp.h>
+ #include <assert.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <gelf.h>
+ #include <libintl.h>
+diff --git a/src/arlib.c b/src/arlib.c
+index e0839aa..1143658 100644
+--- a/src/arlib.c
++++ b/src/arlib.c
+@@ -21,7 +21,7 @@
+ #endif
+ 
+ #include <assert.h>
+-#include <error.h>
++#include <err.h>
+ #include <gelf.h>
+ #include <inttypes.h>
+ #include <libintl.h>
+diff --git a/src/arlib2.c b/src/arlib2.c
+index 553fc57..46443d0 100644
+--- a/src/arlib2.c
++++ b/src/arlib2.c
+@@ -20,7 +20,7 @@
+ # include <config.h>
+ #endif
+ 
+-#include <error.h>
++#include <err.h>
+ #include <libintl.h>
+ #include <limits.h>
+ #include <string.h>
+diff --git a/src/elfcmp.c b/src/elfcmp.c
+index 5046420..cff183f 100644
+--- a/src/elfcmp.c
++++ b/src/elfcmp.c
+@@ -23,7 +23,7 @@
+ #include <argp.h>
+ #include <assert.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <locale.h>
+ #include <libintl.h>
+diff --git a/src/elflint.c b/src/elflint.c
+index 51e53c2..da0b0dc 100644
+--- a/src/elflint.c
++++ b/src/elflint.c
+@@ -24,7 +24,7 @@
+ #include <assert.h>
+ #include <byteswap.h>
+ #include <endian.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <gelf.h>
+ #include <inttypes.h>
+diff --git a/src/findtextrel.c b/src/findtextrel.c
+index 8f1e239..71463af 100644
+--- a/src/findtextrel.c
++++ b/src/findtextrel.c
+@@ -23,7 +23,7 @@
+ #include <argp.h>
+ #include <assert.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <gelf.h>
+ #include <libdw.h>
+diff --git a/src/nm.c b/src/nm.c
+index 969c6d3..3113c04 100644
+--- a/src/nm.c
++++ b/src/nm.c
+@@ -26,7 +26,7 @@
+ #include <ctype.h>
+ #include <dwarf.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <gelf.h>
+ #include <inttypes.h>
+diff --git a/src/objdump.c b/src/objdump.c
+index 860cfac..61e67bf 100644
+--- a/src/objdump.c
++++ b/src/objdump.c
+@@ -21,7 +21,7 @@
+ #endif
+ 
+ #include <argp.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <inttypes.h>
+ #include <libintl.h>
+diff --git a/src/ranlib.c b/src/ranlib.c
+index cc0ee23..ae851e4 100644
+--- a/src/ranlib.c
++++ b/src/ranlib.c
+@@ -24,7 +24,7 @@
+ #include <argp.h>
+ #include <assert.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <gelf.h>
+ #include <libintl.h>
+diff --git a/src/readelf.c b/src/readelf.c
+index 346eccd..c831aa8 100644
+--- a/src/readelf.c
++++ b/src/readelf.c
+@@ -25,7 +25,7 @@
+ #include <ctype.h>
+ #include <dwarf.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <gelf.h>
+ #include <inttypes.h>
+diff --git a/src/size.c b/src/size.c
+index ad8dbcb..fd83be0 100644
+--- a/src/size.c
++++ b/src/size.c
+@@ -21,7 +21,7 @@
+ #endif
+ 
+ #include <argp.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <gelf.h>
+ #include <inttypes.h>
+diff --git a/src/stack.c b/src/stack.c
+index 6f2ff69..6da0243 100644
+--- a/src/stack.c
++++ b/src/stack.c
+@@ -18,7 +18,7 @@
+ #include <config.h>
+ #include <assert.h>
+ #include <argp.h>
+-#include <error.h>
++#include <err.h>
+ #include <stdlib.h>
+ #include <inttypes.h>
+ #include <stdio.h>
+diff --git a/src/strings.c b/src/strings.c
+index d214356..76cb26b 100644
+--- a/src/strings.c
++++ b/src/strings.c
+@@ -25,7 +25,7 @@
+ #include <ctype.h>
+ #include <endian.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <gelf.h>
+ #include <inttypes.h>
+diff --git a/src/strip.c b/src/strip.c
+index c7830ec..0d7f148 100644
+--- a/src/strip.c
++++ b/src/strip.c
+@@ -24,7 +24,7 @@
+ #include <assert.h>
+ #include <byteswap.h>
+ #include <endian.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <fnmatch.h>
+ #include <gelf.h>
+diff --git a/src/unstrip.c b/src/unstrip.c
+index 5074909..3d4f952 100644
+--- a/src/unstrip.c
++++ b/src/unstrip.c
+@@ -31,7 +31,7 @@
+ #include <argp.h>
+ #include <assert.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <fnmatch.h>
+ #include <libintl.h>
+diff --git a/tests/addrscopes.c b/tests/addrscopes.c
+index 791569f..54f4311 100644
+--- a/tests/addrscopes.c
++++ b/tests/addrscopes.c
+@@ -25,7 +25,7 @@
+ #include <stdio_ext.h>
+ #include <locale.h>
+ #include <stdlib.h>
+-#include <error.h>
++#include <err.h>
+ #include <string.h>
+ 
+ 
+diff --git a/tests/allregs.c b/tests/allregs.c
+index 286f7e3..c9de089 100644
+--- a/tests/allregs.c
++++ b/tests/allregs.c
+@@ -21,7 +21,7 @@
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
+-#include <error.h>
++#include <err.h>
+ #include <locale.h>
+ #include <argp.h>
+ #include <assert.h>
+diff --git a/tests/backtrace-data.c b/tests/backtrace-data.c
+index a387d8f..955c27d 100644
+--- a/tests/backtrace-data.c
++++ b/tests/backtrace-data.c
+@@ -27,7 +27,7 @@
+ #include <dirent.h>
+ #include <stdlib.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <unistd.h>
+ #include <dwarf.h>
+ #if defined(__x86_64__) && defined(__linux__)
+diff --git a/tests/backtrace-dwarf.c b/tests/backtrace-dwarf.c
+index 2dc8a9a..24ca7fb 100644
+--- a/tests/backtrace-dwarf.c
++++ b/tests/backtrace-dwarf.c
+@@ -22,7 +22,7 @@
+ #include <stdio_ext.h>
+ #include <locale.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <unistd.h>
+ #include <sys/types.h>
+ #include <sys/wait.h>
+diff --git a/tests/backtrace.c b/tests/backtrace.c
+index 21abe8a..d733248 100644
+--- a/tests/backtrace.c
++++ b/tests/backtrace.c
+@@ -24,7 +24,7 @@
+ #include <dirent.h>
+ #include <stdlib.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <unistd.h>
+ #include <dwarf.h>
+ #ifdef __linux__
+diff --git a/tests/buildid.c b/tests/buildid.c
+index 87c1877..2953e6b 100644
+--- a/tests/buildid.c
++++ b/tests/buildid.c
+@@ -23,7 +23,7 @@
+ #include ELFUTILS_HEADER(elf)
+ #include ELFUTILS_HEADER(dwelf)
+ #include <stdio.h>
+-#include <error.h>
++#include <err.h>
+ #include <string.h>
+ #include <stdlib.h>
+ #include <sys/types.h>
+diff --git a/tests/debugaltlink.c b/tests/debugaltlink.c
+index 6d97d50..ee7e559 100644
+--- a/tests/debugaltlink.c
++++ b/tests/debugaltlink.c
+@@ -23,7 +23,7 @@
+ #include ELFUTILS_HEADER(dw)
+ #include ELFUTILS_HEADER(dwelf)
+ #include <stdio.h>
+-#include <error.h>
++#include <err.h>
+ #include <string.h>
+ #include <stdlib.h>
+ #include <sys/types.h>
+diff --git a/tests/debuglink.c b/tests/debuglink.c
+index 935d102..741cb81 100644
+--- a/tests/debuglink.c
++++ b/tests/debuglink.c
+@@ -21,7 +21,7 @@
+ #include <errno.h>
+ #include ELFUTILS_HEADER(dwelf)
+ #include <stdio.h>
+-#include <error.h>
++#include <err.h>
+ #include <string.h>
+ #include <stdlib.h>
+ #include <sys/types.h>
+diff --git a/tests/deleted.c b/tests/deleted.c
+index 6be35bc..0190711 100644
+--- a/tests/deleted.c
++++ b/tests/deleted.c
+@@ -21,7 +21,7 @@
+ #include <unistd.h>
+ #include <assert.h>
+ #include <stdio.h>
+-#include <error.h>
++#include <err.h>
+ #include <errno.h>
+ #ifdef __linux__
+ #include <sys/prctl.h>
+diff --git a/tests/dwfl-addr-sect.c b/tests/dwfl-addr-sect.c
+index 21e470a..1ea1e3b 100644
+--- a/tests/dwfl-addr-sect.c
++++ b/tests/dwfl-addr-sect.c
+@@ -23,7 +23,7 @@
+ #include <stdio_ext.h>
+ #include <stdlib.h>
+ #include <string.h>
+-#include <error.h>
++#include <err.h>
+ #include <locale.h>
+ #include <argp.h>
+ #include ELFUTILS_HEADER(dwfl)
+diff --git a/tests/dwfl-bug-addr-overflow.c b/tests/dwfl-bug-addr-overflow.c
+index aa8030e..02c8bef 100644
+--- a/tests/dwfl-bug-addr-overflow.c
++++ b/tests/dwfl-bug-addr-overflow.c
+@@ -20,7 +20,7 @@
+ #include <inttypes.h>
+ #include <stdio.h>
+ #include <stdio_ext.h>
+-#include <error.h>
++#include <err.h>
+ #include <locale.h>
+ #include ELFUTILS_HEADER(dwfl)
+ 
+diff --git a/tests/dwfl-bug-fd-leak.c b/tests/dwfl-bug-fd-leak.c
+index 689cdd7..5973da3 100644
+--- a/tests/dwfl-bug-fd-leak.c
++++ b/tests/dwfl-bug-fd-leak.c
+@@ -24,7 +24,7 @@
+ #include <dirent.h>
+ #include <stdlib.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <unistd.h>
+ #include <dwarf.h>
+ 
+diff --git a/tests/dwfl-bug-getmodules.c b/tests/dwfl-bug-getmodules.c
+index 1ee989f..fd62e65 100644
+--- a/tests/dwfl-bug-getmodules.c
++++ b/tests/dwfl-bug-getmodules.c
+@@ -18,7 +18,7 @@
+ #include <config.h>
+ #include ELFUTILS_HEADER(dwfl)
+ 
+-#include <error.h>
++#include <err.h>
+ 
+ static const Dwfl_Callbacks callbacks =
+   {
+diff --git a/tests/dwfl-report-elf-align.c b/tests/dwfl-report-elf-align.c
+index a4e97d3..f471587 100644
+--- a/tests/dwfl-report-elf-align.c
++++ b/tests/dwfl-report-elf-align.c
+@@ -20,7 +20,7 @@
+ #include <inttypes.h>
+ #include <stdio.h>
+ #include <stdio_ext.h>
+-#include <error.h>
++#include <err.h>
+ #include <locale.h>
+ #include <string.h>
+ #include <stdlib.h>
+diff --git a/tests/dwfllines.c b/tests/dwfllines.c
+index 90379dd..cbdf6c4 100644
+--- a/tests/dwfllines.c
++++ b/tests/dwfllines.c
+@@ -27,7 +27,7 @@
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
+-#include <error.h>
++#include <err.h>
+ 
+ int
+ main (int argc, char *argv[])
+diff --git a/tests/dwflmodtest.c b/tests/dwflmodtest.c
+index 0027f96..e68d3bc 100644
+--- a/tests/dwflmodtest.c
++++ b/tests/dwflmodtest.c
+@@ -23,7 +23,7 @@
+ #include <stdio_ext.h>
+ #include <stdlib.h>
+ #include <string.h>
+-#include <error.h>
++#include <err.h>
+ #include <locale.h>
+ #include <argp.h>
+ #include ELFUTILS_HEADER(dwfl)
+diff --git a/tests/dwflsyms.c b/tests/dwflsyms.c
+index 49ac334..cf07830 100644
+--- a/tests/dwflsyms.c
++++ b/tests/dwflsyms.c
+@@ -25,7 +25,7 @@
+ #include <stdio.h>
+ #include <stdio_ext.h>
+ #include <stdlib.h>
+-#include <error.h>
++#include <err.h>
+ #include <string.h>
+ 
+ static const char *
+diff --git a/tests/early-offscn.c b/tests/early-offscn.c
+index 924cb9e..6f60d5a 100644
+--- a/tests/early-offscn.c
++++ b/tests/early-offscn.c
+@@ -19,7 +19,7 @@
+ #endif
+ 
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <gelf.h>
+ #include <stdio.h>
+diff --git a/tests/ecp.c b/tests/ecp.c
+index 38a6859..743cea5 100644
+--- a/tests/ecp.c
++++ b/tests/ecp.c
+@@ -20,7 +20,7 @@
+ #endif
+ 
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <gelf.h>
+ #include <stdlib.h>
+diff --git a/tests/find-prologues.c b/tests/find-prologues.c
+index ba8ae37..76f5f04 100644
+--- a/tests/find-prologues.c
++++ b/tests/find-prologues.c
+@@ -25,7 +25,7 @@
+ #include <stdio_ext.h>
+ #include <locale.h>
+ #include <stdlib.h>
+-#include <error.h>
++#include <err.h>
+ #include <string.h>
+ #include <fnmatch.h>
+ 
+diff --git a/tests/funcretval.c b/tests/funcretval.c
+index 8d19d11..c8aaa93 100644
+--- a/tests/funcretval.c
++++ b/tests/funcretval.c
+@@ -25,7 +25,7 @@
+ #include <stdio_ext.h>
+ #include <locale.h>
+ #include <stdlib.h>
+-#include <error.h>
++#include <err.h>
+ #include <string.h>
+ #include <fnmatch.h>
+ 
+diff --git a/tests/funcscopes.c b/tests/funcscopes.c
+index 9c90185..dbccb89 100644
+--- a/tests/funcscopes.c
++++ b/tests/funcscopes.c
+@@ -25,7 +25,7 @@
+ #include <stdio_ext.h>
+ #include <locale.h>
+ #include <stdlib.h>
+-#include <error.h>
++#include <err.h>
+ #include <string.h>
+ #include <fnmatch.h>
+ 
+diff --git a/tests/getsrc_die.c b/tests/getsrc_die.c
+index 055aede..9c394dd 100644
+--- a/tests/getsrc_die.c
++++ b/tests/getsrc_die.c
+@@ -19,7 +19,7 @@
+ #endif
+ 
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <inttypes.h>
+ #include <libelf.h>
+diff --git a/tests/line2addr.c b/tests/line2addr.c
+index e0d65d3..9bf0023 100644
+--- a/tests/line2addr.c
++++ b/tests/line2addr.c
+@@ -26,7 +26,7 @@
+ #include <locale.h>
+ #include <stdlib.h>
+ #include <string.h>
+-#include <error.h>
++#include <err.h>
+ 
+ 
+ static void
+diff --git a/tests/low_high_pc.c b/tests/low_high_pc.c
+index d0f4302..8da4fbd 100644
+--- a/tests/low_high_pc.c
++++ b/tests/low_high_pc.c
+@@ -25,7 +25,7 @@
+ #include <stdio_ext.h>
+ #include <locale.h>
+ #include <stdlib.h>
+-#include <error.h>
++#include <err.h>
+ #include <string.h>
+ #include <fnmatch.h>
+ 
+diff --git a/tests/md5-sha1-test.c b/tests/md5-sha1-test.c
+index d50355e..3c41f40 100644
+--- a/tests/md5-sha1-test.c
++++ b/tests/md5-sha1-test.c
+@@ -19,7 +19,7 @@
+ #endif
+ 
+ #include <string.h>
+-#include <error.h>
++#include <err.h>
+ 
+ #include "md5.h"
+ #include "sha1.h"
+diff --git a/tests/rdwrmmap.c b/tests/rdwrmmap.c
+index 6f027df..1ce5e6e 100644
+--- a/tests/rdwrmmap.c
++++ b/tests/rdwrmmap.c
+@@ -19,7 +19,7 @@
+ #endif
+ 
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <stdio.h>
+ #include <fcntl.h>
+ #include <unistd.h>
+diff --git a/tests/saridx.c b/tests/saridx.c
+index 8a450d8..b387801 100644
+--- a/tests/saridx.c
++++ b/tests/saridx.c
+@@ -17,7 +17,7 @@
+ 
+ #include <config.h>
+ 
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <gelf.h>
+ #include <stdio.h>
+diff --git a/tests/sectiondump.c b/tests/sectiondump.c
+index 3033fed..8e888db 100644
+--- a/tests/sectiondump.c
++++ b/tests/sectiondump.c
+@@ -18,7 +18,7 @@
+ #include <config.h>
+ 
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <fcntl.h>
+ #include <gelf.h>
+ #include <inttypes.h>
+diff --git a/tests/varlocs.c b/tests/varlocs.c
+index c3fba89..e043ea2 100644
+--- a/tests/varlocs.c
++++ b/tests/varlocs.c
+@@ -25,7 +25,7 @@
+ #include <dwarf.h>
+ #include <stdio.h>
+ #include <stdlib.h>
+-#include <error.h>
++#include <err.h>
+ #include <string.h>
+ #include <sys/types.h>
+ #include <sys/stat.h>
+diff --git a/tests/vdsosyms.c b/tests/vdsosyms.c
+index b876c10..afb2823 100644
+--- a/tests/vdsosyms.c
++++ b/tests/vdsosyms.c
+@@ -18,7 +18,7 @@
+ #include <config.h>
+ #include <assert.h>
+ #include <errno.h>
+-#include <error.h>
++#include <err.h>
+ #include <inttypes.h>
+ #include <stdio.h>
+ #include <string.h>
+-- 
+1.8.3.1
+
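
Among the changes above, lib/system.h gains a fallback TEMP_FAILURE_RETRY for libcs such as musl that do not provide it. A short standalone usage sketch follows; the macro body is taken from the patch, while the surrounding program is illustrative only.

    /* Usage sketch; macro body copied from the patch, program is illustrative. */
    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    #ifndef TEMP_FAILURE_RETRY
    #define TEMP_FAILURE_RETRY(expression) \
      (__extension__                                                             \
        ({ long int __result;                                                    \
           do __result = (long int) (expression);                                \
           while (__result == -1L && errno == EINTR);                            \
           __result; }))
    #endif

    int main (void)
    {
      char buf[16];
      /* A read interrupted by a signal fails with EINTR; the wrapper retries it. */
      ssize_t n = TEMP_FAILURE_RETRY (read (STDIN_FILENO, buf, sizeof buf));
      printf ("read returned %zd\n", n);
      return 0;
    }
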
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/0001-Ignore-differences-between-mips-machine-identifiers.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/0001-Ignore-differences-between-mips-machine-identifiers.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/0001-Ignore-differences-between-mips-machine-identifiers.patch
rename to import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/0001-Ignore-differences-between-mips-machine-identifiers.patch
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/0002-Add-support-for-mips64-abis-in-mips_retval.c.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/0002-Add-support-for-mips64-abis-in-mips_retval.c.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/0002-Add-support-for-mips64-abis-in-mips_retval.c.patch
rename to import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/0002-Add-support-for-mips64-abis-in-mips_retval.c.patch
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/0003-Add-mips-n64-relocation-format-hack.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/0003-Add-mips-n64-relocation-format-hack.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/0003-Add-mips-n64-relocation-format-hack.patch
rename to import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/0003-Add-mips-n64-relocation-format-hack.patch
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/arm_backend.diff b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/arm_backend.diff
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/arm_backend.diff
rename to import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/arm_backend.diff
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/fallthrough.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/fallthrough.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/fallthrough.patch
rename to import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/fallthrough.patch
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/hppa_backend.diff b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/hppa_backend.diff
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/hppa_backend.diff
rename to import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/hppa_backend.diff
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/mips_backend.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/mips_backend.patch
new file mode 100644
index 0000000..2e0e54b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/mips_backend.patch
@@ -0,0 +1,715 @@
+From 46d0d0ca718093486eeeedf1b44134e9e29b56f7 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Wed, 16 Aug 2017 09:18:59 +0800
+Subject: [PATCH] mips backends
+
+Upstream-Status: Backport [from debian]
+
+Rebase to 0.170
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ backends/Makefile.am    |   8 +-
+ backends/mips_init.c    |  59 +++++++++
+ backends/mips_regs.c    | 104 ++++++++++++++++
+ backends/mips_reloc.def |  79 ++++++++++++
+ backends/mips_retval.c  | 321 ++++++++++++++++++++++++++++++++++++++++++++++++
+ backends/mips_symbol.c  |  52 ++++++++
+ libebl/eblopenbackend.c |   2 +
+ 7 files changed, 623 insertions(+), 2 deletions(-)
+ create mode 100644 backends/mips_init.c
+ create mode 100644 backends/mips_regs.c
+ create mode 100644 backends/mips_reloc.def
+ create mode 100644 backends/mips_retval.c
+ create mode 100644 backends/mips_symbol.c
+
+diff --git a/backends/Makefile.am b/backends/Makefile.am
+index 7f1f5d4..91baf6e 100644
+--- a/backends/Makefile.am
++++ b/backends/Makefile.am
+@@ -33,12 +33,12 @@ AM_CPPFLAGS += -I$(top_srcdir)/libebl -I$(top_srcdir)/libasm \
+ 
+ 
+ modules = i386 sh x86_64 ia64 alpha arm aarch64 sparc ppc ppc64 s390 \
+-	  tilegx m68k bpf parisc
++	  tilegx m68k bpf parisc mips
+ libebl_pic = libebl_i386_pic.a libebl_sh_pic.a libebl_x86_64_pic.a    \
+ 	     libebl_ia64_pic.a libebl_alpha_pic.a libebl_arm_pic.a    \
+ 	     libebl_aarch64_pic.a libebl_sparc_pic.a libebl_ppc_pic.a \
+ 	     libebl_ppc64_pic.a libebl_s390_pic.a libebl_tilegx_pic.a \
+-	     libebl_m68k_pic.a libebl_bpf_pic.a libebl_parisc_pic.a
++	     libebl_m68k_pic.a libebl_bpf_pic.a libebl_parisc_pic.a libebl_mips_pic.a
+ noinst_LIBRARIES = $(libebl_pic)
+ noinst_DATA = $(libebl_pic:_pic.a=.so)
+ 
+@@ -128,6 +128,10 @@ parisc_SRCS = parisc_init.c parisc_symbol.c parisc_regs.c parisc_retval.c
+ libebl_parisc_pic_a_SOURCES = $(parisc_SRCS)
+ am_libebl_parisc_pic_a_OBJECTS = $(parisc_SRCS:.c=.os)
+ 
++mips_SRCS = mips_init.c mips_symbol.c mips_regs.c mips_retval.c
++libebl_mips_pic_a_SOURCES = $(mips_SRCS)
++am_libebl_mips_pic_a_OBJECTS = $(mips_SRCS:.c=.os)
++
+ libebl_%.so libebl_%.map: libebl_%_pic.a $(libelf) $(libdw) $(libeu)
+ 	@rm -f $(@:.so=.map)
+ 	$(AM_V_at)echo 'ELFUTILS_$(PACKAGE_VERSION) { global: $*_init; local: *; };' \
+diff --git a/backends/mips_init.c b/backends/mips_init.c
+new file mode 100644
+index 0000000..975c04e
+--- /dev/null
++++ b/backends/mips_init.c
+@@ -0,0 +1,59 @@
++/* Initialization of mips specific backend library.
++   Copyright (C) 2006 Red Hat, Inc.
++   This file is part of Red Hat elfutils.
++
++   Red Hat elfutils is free software; you can redistribute it and/or modify
++   it under the terms of the GNU General Public License as published by the
++   Free Software Foundation; version 2 of the License.
++
++   Red Hat elfutils is distributed in the hope that it will be useful, but
++   WITHOUT ANY WARRANTY; without even the implied warranty of
++   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++   General Public License for more details.
++
++   You should have received a copy of the GNU General Public License along
++   with Red Hat elfutils; if not, write to the Free Software Foundation,
++   Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA.
++
++   Red Hat elfutils is an included package of the Open Invention Network.
++   An included package of the Open Invention Network is a package for which
++   Open Invention Network licensees cross-license their patents.  No patent
++   license is granted, either expressly or impliedly, by designation as an
++   included package.  Should you wish to participate in the Open Invention
++   Network licensing program, please visit www.openinventionnetwork.com
++   <http://www.openinventionnetwork.com>.  */
++
++#ifdef HAVE_CONFIG_H
++# include <config.h>
++#endif
++
++#define BACKEND		mips_
++#define RELOC_PREFIX	R_MIPS_
++#include "libebl_CPU.h"
++
++/* This defines the common reloc hooks based on mips_reloc.def.  */
++#include "common-reloc.c"
++
++const char *
++mips_init (Elf *elf __attribute__ ((unused)),
++     GElf_Half machine __attribute__ ((unused)),
++     Ebl *eh,
++     size_t ehlen)
++{
++  /* Check whether the Elf_BH object has a sufficient size.  */
++  if (ehlen < sizeof (Ebl))
++    return NULL;
++
++  /* We handle it.  */
++  if (machine == EM_MIPS)
++    eh->name = "MIPS R3000 big-endian";
++  else if (machine == EM_MIPS_RS3_LE)
++    eh->name = "MIPS R3000 little-endian";
++
++  mips_init_reloc (eh);
++  HOOK (eh, reloc_simple_type);
++  HOOK (eh, return_value_location);
++  HOOK (eh, register_info);
++
++  return MODVERSION;
++}
+diff --git a/backends/mips_regs.c b/backends/mips_regs.c
+new file mode 100644
+index 0000000..44f86cb
+--- /dev/null
++++ b/backends/mips_regs.c
+@@ -0,0 +1,104 @@
++/* Register names and numbers for MIPS DWARF.
++   Copyright (C) 2006 Red Hat, Inc.
++   This file is part of Red Hat elfutils.
++
++   Red Hat elfutils is free software; you can redistribute it and/or modify
++   it under the terms of the GNU General Public License as published by the
++   Free Software Foundation; version 2 of the License.
++
++   Red Hat elfutils is distributed in the hope that it will be useful, but
++   WITHOUT ANY WARRANTY; without even the implied warranty of
++   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++   General Public License for more details.
++
++   You should have received a copy of the GNU General Public License along
++   with Red Hat elfutils; if not, write to the Free Software Foundation,
++   Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA.
++
++   Red Hat elfutils is an included package of the Open Invention Network.
++   An included package of the Open Invention Network is a package for which
++   Open Invention Network licensees cross-license their patents.  No patent
++   license is granted, either expressly or impliedly, by designation as an
++   included package.  Should you wish to participate in the Open Invention
++   Network licensing program, please visit www.openinventionnetwork.com
++   <http://www.openinventionnetwork.com>.  */
++
++#ifdef HAVE_CONFIG_H
++# include <config.h>
++#endif
++
++#include <string.h>
++#include <dwarf.h>
++
++#define BACKEND mips_
++#include "libebl_CPU.h"
++
++ssize_t
++mips_register_info (Ebl *ebl __attribute__((unused)),
++		    int regno, char *name, size_t namelen,
++		    const char **prefix, const char **setname,
++		    int *bits, int *type)
++{
++  if (name == NULL)
++    return 66;
++
++  if (regno < 0 || regno > 65 || namelen < 4)
++    return -1;
++
++  *prefix = "$";
++
++  if (regno < 32)
++    {
++      *setname = "integer";
++      *type = DW_ATE_signed;
++      *bits = 32;
++      if (regno < 32 + 10)
++        {
++          name[0] = regno + '0';
++          namelen = 1;
++        }
++      else
++        {
++          name[0] = (regno / 10) + '0';
++          name[1] = (regno % 10) + '0';
++          namelen = 2;
++        }
++    }
++  else if (regno < 64)
++    {
++      *setname = "FPU";
++      *type = DW_ATE_float;
++      *bits = 32;
++      name[0] = 'f';
++      if (regno < 32 + 10)
++	{
++	  name[1] = (regno - 32) + '0';
++	  namelen = 2;
++	}
++      else
++	{
++	  name[1] = (regno - 32) / 10 + '0';
++	  name[2] = (regno - 32) % 10 + '0';
++	  namelen = 3;
++	}
++    }
++  else if (regno == 64)
++    {
++      *type = DW_ATE_signed;
++      *bits = 32;
++      name[0] = 'h';
++      name[1] = 'i';
++      namelen = 2;
++    }
++  else
++    {
++      *type = DW_ATE_signed;
++      *bits = 32;
++      name[0] = 'l';
++      name[1] = 'o';
++      namelen = 2;
++    }
++
++  name[namelen++] = '\0';
++  return namelen;
++}
+diff --git a/backends/mips_reloc.def b/backends/mips_reloc.def
+new file mode 100644
+index 0000000..4579970
+--- /dev/null
++++ b/backends/mips_reloc.def
+@@ -0,0 +1,79 @@
++/* List the relocation types for mips.  -*- C -*-
++   Copyright (C) 2006 Red Hat, Inc.
++   This file is part of Red Hat elfutils.
++
++   Red Hat elfutils is free software; you can redistribute it and/or modify
++   it under the terms of the GNU General Public License as published by the
++   Free Software Foundation; version 2 of the License.
++
++   Red Hat elfutils is distributed in the hope that it will be useful, but
++   WITHOUT ANY WARRANTY; without even the implied warranty of
++   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++   General Public License for more details.
++
++   You should have received a copy of the GNU General Public License along
++   with Red Hat elfutils; if not, write to the Free Software Foundation,
++   Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA.
++
++   Red Hat elfutils is an included package of the Open Invention Network.
++   An included package of the Open Invention Network is a package for which
++   Open Invention Network licensees cross-license their patents.  No patent
++   license is granted, either expressly or impliedly, by designation as an
++   included package.  Should you wish to participate in the Open Invention
++   Network licensing program, please visit www.openinventionnetwork.com
++   <http://www.openinventionnetwork.com>.  */
++
++/* 	    NAME,		REL|EXEC|DYN	*/
++
++RELOC_TYPE (NONE,               0)
++RELOC_TYPE (16,                 0)
++RELOC_TYPE (32,                 0)
++RELOC_TYPE (REL32,              0)
++RELOC_TYPE (26,                 0)
++RELOC_TYPE (HI16,               0)
++RELOC_TYPE (LO16,               0)
++RELOC_TYPE (GPREL16,            0)
++RELOC_TYPE (LITERAL,            0)
++RELOC_TYPE (GOT16,              0)
++RELOC_TYPE (PC16,               0)
++RELOC_TYPE (CALL16,             0)
++RELOC_TYPE (GPREL32,            0)
++
++RELOC_TYPE (SHIFT5,             0)
++RELOC_TYPE (SHIFT6,             0)
++RELOC_TYPE (64,                 0)
++RELOC_TYPE (GOT_DISP,           0)
++RELOC_TYPE (GOT_PAGE,           0)
++RELOC_TYPE (GOT_OFST,           0)
++RELOC_TYPE (GOT_HI16,           0)
++RELOC_TYPE (GOT_LO16,           0)
++RELOC_TYPE (SUB,                0)
++RELOC_TYPE (INSERT_A,           0)
++RELOC_TYPE (INSERT_B,           0)
++RELOC_TYPE (DELETE,             0)
++RELOC_TYPE (HIGHER,             0)
++RELOC_TYPE (HIGHEST,            0)
++RELOC_TYPE (CALL_HI16,          0)
++RELOC_TYPE (CALL_LO16,          0)
++RELOC_TYPE (SCN_DISP,           0)
++RELOC_TYPE (REL16,              0)
++RELOC_TYPE (ADD_IMMEDIATE,      0)
++RELOC_TYPE (PJUMP,              0)
++RELOC_TYPE (RELGOT,             0)
++RELOC_TYPE (JALR,               0)
++RELOC_TYPE (TLS_DTPMOD32,       0)
++RELOC_TYPE (TLS_DTPREL32,       0)
++RELOC_TYPE (TLS_DTPMOD64,       0)
++RELOC_TYPE (TLS_DTPREL64,       0)
++RELOC_TYPE (TLS_GD,             0)
++RELOC_TYPE (TLS_LDM,            0)
++RELOC_TYPE (TLS_DTPREL_HI16,    0)
++RELOC_TYPE (TLS_DTPREL_LO16,    0)
++RELOC_TYPE (TLS_GOTTPREL,       0)
++RELOC_TYPE (TLS_TPREL32,        0)
++RELOC_TYPE (TLS_TPREL64,        0)
++RELOC_TYPE (TLS_TPREL_HI16,     0)
++RELOC_TYPE (TLS_TPREL_LO16,     0)
++
++#define NO_COPY_RELOC 1
++#define NO_RELATIVE_RELOC 1
+diff --git a/backends/mips_retval.c b/backends/mips_retval.c
+new file mode 100644
+index 0000000..656cd1f
+--- /dev/null
++++ b/backends/mips_retval.c
+@@ -0,0 +1,321 @@
++/* Function return value location for Linux/mips ABI.
++   Copyright (C) 2005 Red Hat, Inc.
++   This file is part of Red Hat elfutils.
++
++   Red Hat elfutils is free software; you can redistribute it and/or modify
++   it under the terms of the GNU General Public License as published by the
++   Free Software Foundation; version 2 of the License.
++
++   Red Hat elfutils is distributed in the hope that it will be useful, but
++   WITHOUT ANY WARRANTY; without even the implied warranty of
++   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++   General Public License for more details.
++
++   You should have received a copy of the GNU General Public License along
++   with Red Hat elfutils; if not, write to the Free Software Foundation,
++   Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA.
++
++   Red Hat elfutils is an included package of the Open Invention Network.
++   An included package of the Open Invention Network is a package for which
++   Open Invention Network licensees cross-license their patents.  No patent
++   license is granted, either expressly or impliedly, by designation as an
++   included package.  Should you wish to participate in the Open Invention
++   Network licensing program, please visit www.openinventionnetwork.com
++   <http://www.openinventionnetwork.com>.  */
++
++#ifdef HAVE_CONFIG_H
++# include <config.h>
++#endif
++
++#include <string.h>
++#include <assert.h>
++#include <dwarf.h>
++#include <elf.h>
++
++#include "../libebl/libeblP.h"
++#include "../libdw/libdwP.h"
++
++#define BACKEND mips_
++#include "libebl_CPU.h"
++
++/* The ABI of the file.  Also see EF_MIPS_ABI2 above. */
++#define EF_MIPS_ABI		0x0000F000
++
++/* The original o32 abi. */
++#define E_MIPS_ABI_O32          0x00001000
++
++/* O32 extended to work on 64 bit architectures */
++#define E_MIPS_ABI_O64          0x00002000
++
++/* EABI in 32 bit mode */
++#define E_MIPS_ABI_EABI32       0x00003000
++
++/* EABI in 64 bit mode */
++#define E_MIPS_ABI_EABI64       0x00004000
++
++/* All the possible MIPS ABIs. */
++enum mips_abi
++  {
++    MIPS_ABI_UNKNOWN = 0,
++    MIPS_ABI_N32,
++    MIPS_ABI_O32,
++    MIPS_ABI_N64,
++    MIPS_ABI_O64,
++    MIPS_ABI_EABI32,
++    MIPS_ABI_EABI64,
++    MIPS_ABI_LAST
++  };
++
++/* Find the mips ABI of the current file */
++enum mips_abi find_mips_abi(Elf *elf)
++{
++  GElf_Ehdr ehdr_mem;
++  GElf_Ehdr *ehdr = gelf_getehdr (elf, &ehdr_mem);
++
++  if (ehdr == NULL)
++    return MIPS_ABI_LAST;
++
++  GElf_Word elf_flags = ehdr->e_flags;
++
++  /* Check elf_flags to see if it specifies the ABI being used.  */
++  switch ((elf_flags & EF_MIPS_ABI))
++    {
++    case E_MIPS_ABI_O32:
++      return MIPS_ABI_O32;
++    case E_MIPS_ABI_O64:
++      return MIPS_ABI_O64;
++    case E_MIPS_ABI_EABI32:
++      return MIPS_ABI_EABI32;
++    case E_MIPS_ABI_EABI64:
++      return MIPS_ABI_EABI64;
++    default:
++      if ((elf_flags & EF_MIPS_ABI2))
++	return MIPS_ABI_N32;
++    }
++
++  /* GCC creates a pseudo-section whose name describes the ABI.  */
++  size_t shstrndx;
++  if (elf_getshdrstrndx (elf, &shstrndx) < 0)
++    return MIPS_ABI_LAST;
++
++  const char *name;
++  Elf_Scn *scn = NULL;
++  while ((scn = elf_nextscn (elf, scn)) != NULL)
++    {
++      GElf_Shdr shdr_mem;
++      GElf_Shdr *shdr = gelf_getshdr (scn, &shdr_mem);
++      if (shdr == NULL)
++        return MIPS_ABI_LAST;
++
++      name = elf_strptr (elf, shstrndx, shdr->sh_name) ?: "";
++      if (strncmp (name, ".mdebug.", 8) != 0)
++        continue;
++
++      if (strcmp (name, ".mdebug.abi32") == 0)
++        return MIPS_ABI_O32;
++      else if (strcmp (name, ".mdebug.abiN32") == 0)
++        return MIPS_ABI_N32;
++      else if (strcmp (name, ".mdebug.abi64") == 0)
++        return MIPS_ABI_N64;
++      else if (strcmp (name, ".mdebug.abiO64") == 0)
++        return MIPS_ABI_O64;
++      else if (strcmp (name, ".mdebug.eabi32") == 0)
++        return MIPS_ABI_EABI32;
++      else if (strcmp (name, ".mdebug.eabi64") == 0)
++        return MIPS_ABI_EABI64;
++      else
++        return MIPS_ABI_UNKNOWN;
++    }
++
++  return MIPS_ABI_UNKNOWN;
++}
++
++unsigned int
++mips_abi_regsize (enum mips_abi abi)
++{
++  switch (abi)
++    {
++    case MIPS_ABI_EABI32:
++    case MIPS_ABI_O32:
++      return 4;
++    case MIPS_ABI_N32:
++    case MIPS_ABI_N64:
++    case MIPS_ABI_O64:
++    case MIPS_ABI_EABI64:
++      return 8;
++    case MIPS_ABI_UNKNOWN:
++    case MIPS_ABI_LAST:
++    default:
++      return 0;
++    }
++}
++
++
++/* $v0 or pair $v0, $v1 */
++static const Dwarf_Op loc_intreg_o32[] =
++  {
++    { .atom = DW_OP_reg2 }, { .atom = DW_OP_piece, .number = 4 },
++    { .atom = DW_OP_reg3 }, { .atom = DW_OP_piece, .number = 4 },
++  };
++
++static const Dwarf_Op loc_intreg[] =
++  {
++    { .atom = DW_OP_reg2 }, { .atom = DW_OP_piece, .number = 8 },
++    { .atom = DW_OP_reg3 }, { .atom = DW_OP_piece, .number = 8 },
++  };
++#define nloc_intreg	1
++#define nloc_intregpair	4
++
++/* $f0 (float), or pair $f0, $f1 (double).
++ * f2/f3 are used for COMPLEX (= 2 doubles) returns in Fortran */
++static const Dwarf_Op loc_fpreg_o32[] =
++  {
++    { .atom = DW_OP_regx, .number = 32 }, { .atom = DW_OP_piece, .number = 4 },
++    { .atom = DW_OP_regx, .number = 33 }, { .atom = DW_OP_piece, .number = 4 },
++    { .atom = DW_OP_regx, .number = 34 }, { .atom = DW_OP_piece, .number = 4 },
++    { .atom = DW_OP_regx, .number = 35 }, { .atom = DW_OP_piece, .number = 4 },
++  };
++
++/* $f0, or pair $f0, $f2.  */
++static const Dwarf_Op loc_fpreg[] =
++  {
++    { .atom = DW_OP_regx, .number = 32 }, { .atom = DW_OP_piece, .number = 8 },
++    { .atom = DW_OP_regx, .number = 34 }, { .atom = DW_OP_piece, .number = 8 },
++  };
++#define nloc_fpreg  1
++#define nloc_fpregpair 4
++#define nloc_fpregquad 8
++
++/* The return value is a structure and is actually stored in stack space
++   passed in a hidden argument by the caller.  But, the compiler
++   helpfully returns the address of that space in $v0.  */
++static const Dwarf_Op loc_aggregate[] =
++  {
++    { .atom = DW_OP_breg2, .number = 0 }
++  };
++#define nloc_aggregate 1
++
++int
++mips_return_value_location (Dwarf_Die *functypedie, const Dwarf_Op **locp)
++{
++  /* First find the ABI used by the elf object */
++  enum mips_abi abi = find_mips_abi(functypedie->cu->dbg->elf);
++
++  /* Something went seriously wrong while trying to figure out the ABI */
++  if (abi == MIPS_ABI_LAST)
++    return -1;
++
++  /* We couldn't identify the ABI, but the file seems valid */
++  if (abi == MIPS_ABI_UNKNOWN)
++    return -2;
++
++  /* Can't handle EABI variants */
++  if ((abi == MIPS_ABI_EABI32) || (abi == MIPS_ABI_EABI64))
++    return -2;
++
++  unsigned int regsize = mips_abi_regsize (abi);
++  if (!regsize)
++    return -2;
++
++  /* Start with the function's type, and get the DW_AT_type attribute,
++     which is the type of the return value.  */
++
++  Dwarf_Attribute attr_mem;
++  Dwarf_Attribute *attr = dwarf_attr_integrate (functypedie, DW_AT_type, &attr_mem);
++  if (attr == NULL)
++    /* The function has no return value, like a `void' function in C.  */
++    return 0;
++
++  Dwarf_Die die_mem;
++  Dwarf_Die *typedie = dwarf_formref_die (attr, &die_mem);
++  int tag = dwarf_tag (typedie);
++
++  /* Follow typedefs and qualifiers to get to the actual type.  */
++  while (tag == DW_TAG_typedef
++	 || tag == DW_TAG_const_type || tag == DW_TAG_volatile_type
++	 || tag == DW_TAG_restrict_type)
++    {
++      attr = dwarf_attr_integrate (typedie, DW_AT_type, &attr_mem);
++      typedie = dwarf_formref_die (attr, &die_mem);
++      tag = dwarf_tag (typedie);
++    }
++
++  switch (tag)
++    {
++    case -1:
++      return -1;
++
++    case DW_TAG_subrange_type:
++      if (! dwarf_hasattr_integrate (typedie, DW_AT_byte_size))
++	{
++	  attr = dwarf_attr_integrate (typedie, DW_AT_type, &attr_mem);
++	  typedie = dwarf_formref_die (attr, &die_mem);
++	  tag = dwarf_tag (typedie);
++	}
++      /* Fall through.  */
++
++    case DW_TAG_base_type:
++    case DW_TAG_enumeration_type:
++    case DW_TAG_pointer_type:
++    case DW_TAG_ptr_to_member_type:
++      {
++        Dwarf_Word size;
++	if (dwarf_formudata (dwarf_attr_integrate (typedie, DW_AT_byte_size,
++					 &attr_mem), &size) != 0)
++	  {
++	    if (tag == DW_TAG_pointer_type || tag == DW_TAG_ptr_to_member_type)
++	      size = regsize;
++	    else
++	      return -1;
++	  }
++	if (tag == DW_TAG_base_type)
++	  {
++	    Dwarf_Word encoding;
++	    if (dwarf_formudata (dwarf_attr_integrate (typedie, DW_AT_encoding,
++					     &attr_mem), &encoding) != 0)
++	      return -1;
++
++#define ABI_LOC(loc, regsize) ((regsize) == 4 ? (loc ## _o32) : (loc))
++
++	    if (encoding == DW_ATE_float)
++	      {
++		*locp = ABI_LOC(loc_fpreg, regsize);
++		if (size <= regsize)
++		    return nloc_fpreg;
++
++		if (size <= 2*regsize)
++                  return nloc_fpregpair;
++
++		if (size <= 4*regsize && abi == MIPS_ABI_O32)
++                  return nloc_fpregquad;
++
++		goto aggregate;
++	      }
++	  }
++	*locp = ABI_LOC(loc_intreg, regsize);
++	if (size <= regsize)
++	  return nloc_intreg;
++	if (size <= 2*regsize)
++	  return nloc_intregpair;
++
++	/* Else fall through. Shouldn't happen though (at least with gcc) */
++      }
++
++    case DW_TAG_structure_type:
++    case DW_TAG_class_type:
++    case DW_TAG_union_type:
++    case DW_TAG_array_type:
++    aggregate:
++      /* XXX TODO: Can't handle structure return with other ABI's yet :-/ */
++      if ((abi != MIPS_ABI_O32) && (abi != MIPS_ABI_O64))
++        return -2;
++
++      *locp = loc_aggregate;
++      return nloc_aggregate;
++    }
++
++  /* XXX We don't have a good way to return specific errors from ebl calls.
++     This value means we do not understand the type, but it is well-formed
++     DWARF and might be valid.  */
++  return -2;
++}
+diff --git a/backends/mips_symbol.c b/backends/mips_symbol.c
+new file mode 100644
+index 0000000..ba465fe
+--- /dev/null
++++ b/backends/mips_symbol.c
+@@ -0,0 +1,52 @@
++/* MIPS specific symbolic name handling.
++   Copyright (C) 2002, 2003, 2005 Red Hat, Inc.
++   This file is part of Red Hat elfutils.
++   Written by Jakub Jelinek <jakub@redhat.com>, 2002.
++
++   Red Hat elfutils is free software; you can redistribute it and/or modify
++   it under the terms of the GNU General Public License as published by the
++   Free Software Foundation; version 2 of the License.
++
++   Red Hat elfutils is distributed in the hope that it will be useful, but
++   WITHOUT ANY WARRANTY; without even the implied warranty of
++   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
++   General Public License for more details.
++
++   You should have received a copy of the GNU General Public License along
++   with Red Hat elfutils; if not, write to the Free Software Foundation,
++   Inc., 51 Franklin Street, Fifth Floor, Boston MA 02110-1301 USA.
++
++   Red Hat elfutils is an included package of the Open Invention Network.
++   An included package of the Open Invention Network is a package for which
++   Open Invention Network licensees cross-license their patents.  No patent
++   license is granted, either expressly or impliedly, by designation as an
++   included package.  Should you wish to participate in the Open Invention
++   Network licensing program, please visit www.openinventionnetwork.com
++   <http://www.openinventionnetwork.com>.  */
++
++#ifdef HAVE_CONFIG_H
++# include <config.h>
++#endif
++
++#include <elf.h>
++#include <stddef.h>
++
++#define BACKEND		mips_
++#include "libebl_CPU.h"
++
++/* Check for the simple reloc types.  */
++Elf_Type
++mips_reloc_simple_type (Ebl *ebl __attribute__ ((unused)), int type)
++{
++  switch (type)
++    {
++    case R_MIPS_16:
++      return ELF_T_HALF;
++    case R_MIPS_32:
++      return ELF_T_WORD;
++    case R_MIPS_64:
++      return ELF_T_XWORD;
++    default:
++      return ELF_T_NUM;
++    }
++}
+diff --git a/libebl/eblopenbackend.c b/libebl/eblopenbackend.c
+index 1f81477..5371396 100644
+--- a/libebl/eblopenbackend.c
++++ b/libebl/eblopenbackend.c
+@@ -72,6 +72,8 @@ static const struct
+   { "sparc", "elf_sparc", "sparc", 5, EM_SPARC, 0, 0 },
+   { "sparc", "elf_sparcv8plus", "sparc", 5, EM_SPARC32PLUS, 0, 0 },
+   { "s390", "ebl_s390", "s390", 4, EM_S390, 0, 0 },
++  { "mips", "elf_mips", "mips", 4, EM_MIPS, 0, 0 },
++  { "mips", "elf_mipsel", "mipsel", 4, EM_MIPS_RS3_LE, 0, 0 },
+ 
+   { "m32", "elf_m32", "m32", 3, EM_M32, 0, 0 },
+   { "m68k", "elf_m68k", "m68k", 4, EM_68K, ELFCLASS32, ELFDATA2MSB },
+-- 
+1.8.3.1
+
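
The mips_retval.c hunk above picks register sizes by decoding the EF_MIPS_ABI field of e_flags, falling back to EF_MIPS_ABI2 and then to the GCC .mdebug.* pseudo-sections. A rough standalone sketch of that first step follows; it is not elfutils code, the E_MIPS_ABI_* constants are copied from the patch, and EF_MIPS_ABI2 is assumed to come from the system <elf.h>. Building it with "cc abi_demo.c" and running it shows how the flag values map to ABIs.

    /* Standalone sketch of the e_flags decoding in find_mips_abi() above.
       The constants below are copied from the patch; EF_MIPS_ABI2 comes from <elf.h>. */
    #include <stdio.h>
    #include <elf.h>

    #ifndef EF_MIPS_ABI
    # define EF_MIPS_ABI      0x0000F000   /* mask for the ABI field of e_flags */
    #endif
    #define E_MIPS_ABI_O32    0x00001000
    #define E_MIPS_ABI_O64    0x00002000
    #define E_MIPS_ABI_EABI32 0x00003000
    #define E_MIPS_ABI_EABI64 0x00004000

    /* Map e_flags to an ABI name, mirroring the switch in find_mips_abi().  */
    static const char *abi_name(unsigned int e_flags)
    {
      switch (e_flags & EF_MIPS_ABI)
        {
        case E_MIPS_ABI_O32:    return "o32 (4-byte registers)";
        case E_MIPS_ABI_O64:    return "o64 (8-byte registers)";
        case E_MIPS_ABI_EABI32: return "eabi32";
        case E_MIPS_ABI_EABI64: return "eabi64";
        default:
          /* No ABI field: n32 if EF_MIPS_ABI2 is set; the real backend then
             goes on to inspect the .mdebug.* section names.  */
          return (e_flags & EF_MIPS_ABI2) ? "n32 (8-byte registers)" : "unknown";
        }
    }

    int main(void)
    {
      unsigned int samples[] = { E_MIPS_ABI_O32, E_MIPS_ABI_O64, EF_MIPS_ABI2, 0 };
      for (unsigned int i = 0; i < sizeof samples / sizeof samples[0]; i++)
        printf("e_flags=0x%08x -> %s\n", samples[i], abi_name(samples[i]));
      return 0;
    }
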
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/mips_readelf_w.patch b/import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/mips_readelf_w.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/elfutils/elfutils-0.168/debian/mips_readelf_w.patch
rename to import-layers/yocto-poky/meta/recipes-devtools/elfutils/files/debian/mips_readelf_w.patch
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/expect/expect_5.45.bb b/import-layers/yocto-poky/meta/recipes-devtools/expect/expect_5.45.bb
index 630f2e4..e2d24e8 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/expect/expect_5.45.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/expect/expect_5.45.bb
@@ -16,7 +16,7 @@
 DEPENDS += "tcl"
 RDEPENDS_${PN} = "tcl"
 
-inherit autotools
+inherit autotools update-alternatives
 
 PR = "r1"
 
@@ -57,6 +57,11 @@
                 "
 EXTRA_OEMAKE_install = " 'SCRIPTS=' "
 
+ALTERNATIVE_${PN}  = "mkpasswd"
+ALTERNATIVE_LINK_NAME[mkpasswd] = "${bindir}/mkpasswd"
+# Use lower priority than busybox's mkpasswd (created when built with CONFIG_CRYPTPW)
+ALTERNATIVE_PRIORITY[mkpasswd] = "40"
+
 FILES_${PN}-dev = "${libdir_native}/expect${PV}/libexpect*.so \
                    ${includedir}/expect.h \
                    ${includedir}/expect_tcl.h \
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/fdisk/gptfdisk_1.0.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/fdisk/gptfdisk_1.0.1.bb
deleted file mode 100644
index d62a903..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/fdisk/gptfdisk_1.0.1.bb
+++ /dev/null
@@ -1,26 +0,0 @@
-SUMMARY = "Utility for modifying GPT disk partitioning"
-DESCRIPTION = "GPT fdisk is a disk partitioning tool loosely modeled on Linux fdisk, but used for modifying GUID Partition Table (GPT) disks. The related FixParts utility fixes some common problems on Master Boot Record (MBR) disks."
-
-LICENSE = "GPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552"
-
-DEPENDS = "util-linux popt ncurses"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${PV}/${BP}.tar.gz"
-SRC_URI[md5sum] = "d7f3d306b083123bcc6f5941efade586"
-SRC_URI[sha256sum] = "864c8aee2efdda50346804d7e6230407d5f42a8ae754df70404dd8b2fdfaeac7"
-
-UPSTREAM_CHECK_URI = "http://sourceforge.net/projects/gptfdisk/files/gptfdisk/"
-UPSTREAM_CHECK_REGEX = "/gptfdisk/(?P<pver>(\d+[\.\-_]*)+)/"
-
-EXTRA_OEMAKE = "'CC=${CC}' 'CXX=${CXX}'"
-
-do_install() {
-    install -d ${D}${sbindir}
-    install -m 0755 cgdisk ${D}${sbindir}
-    install -m 0755 gdisk ${D}${sbindir}
-    install -m 0755 sgdisk ${D}${sbindir}
-    install -m 0755 fixparts ${D}${sbindir}
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/fdisk/gptfdisk_1.0.3.bb b/import-layers/yocto-poky/meta/recipes-devtools/fdisk/gptfdisk_1.0.3.bb
new file mode 100644
index 0000000..4d617e3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/fdisk/gptfdisk_1.0.3.bb
@@ -0,0 +1,26 @@
+SUMMARY = "Utility for modifying GPT disk partitioning"
+DESCRIPTION = "GPT fdisk is a disk partitioning tool loosely modeled on Linux fdisk, but used for modifying GUID Partition Table (GPT) disks. The related FixParts utility fixes some common problems on Master Boot Record (MBR) disks."
+
+LICENSE = "GPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552"
+
+DEPENDS = "util-linux popt ncurses"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${PV}/${BP}.tar.gz"
+SRC_URI[md5sum] = "07b625a583b66c8c5840be5923f3e3fe"
+SRC_URI[sha256sum] = "89fd5aec35c409d610a36cb49c65b442058565ed84042f767bba614b8fc91b5c"
+
+UPSTREAM_CHECK_URI = "http://sourceforge.net/projects/gptfdisk/files/gptfdisk/"
+UPSTREAM_CHECK_REGEX = "/gptfdisk/(?P<pver>(\d+[\.\-_]*)+)/"
+
+EXTRA_OEMAKE = "'CC=${CC}' 'CXX=${CXX}'"
+
+do_install() {
+    install -d ${D}${sbindir}
+    install -m 0755 cgdisk ${D}${sbindir}
+    install -m 0755 gdisk ${D}${sbindir}
+    install -m 0755 sgdisk ${D}${sbindir}
+    install -m 0755 fixparts ${D}${sbindir}
+}
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/file/file/debian-742262.patch b/import-layers/yocto-poky/meta/recipes-devtools/file/file/debian-742262.patch
index 1ef485e..d31ac59 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/file/file/debian-742262.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/file/file/debian-742262.patch
@@ -1,19 +1,30 @@
-The awk pattern was checked *before* the Perl pattern, so the perl
-script with BEGIN{...} would be reported as awk, this patch fixes it.
+The awk pattern was checked *before* the Perl pattern, so a
+perl script with BEGIN{...} would be reported as awk; this patch fixes it.
 
 Upstream-Status: Backport [debian]
 
 Signed-off-by: Christoph Biedl <debian.axhn@manchmal.in-ulm.de>
 Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
+
+Rebase on 5.31
+
+Signed-off-by: Fan Xin <fan.xin@jp.fujitsu.com>
 ---
+ magic/Magdir/commands | 1 +
+ 1 file changed, 1 insertion(+)
+
 diff --git a/magic/Magdir/commands b/magic/Magdir/commands
+index 1a46efd..255c04b 100644
 --- a/magic/Magdir/commands
 +++ b/magic/Magdir/commands
 @@ -57,6 +57,7 @@
  0	string/wt	#!\ /usr/bin/awk	awk script text executable
  !:mime	text/x-awk
- 0	regex/4096	=^\\s{0,100}BEGIN\\s{0,100}[{]	awk or perl script text
+ 0	regex/4096	=^[A-Za-z0-9_]{0,100}BEGIN[A-Za-z0-9_]{0,100}[{]	awk or perl script text
 +!:strength - 12
  
  # AT&T Bell Labs' Plan 9 shell
  0	string/wt	#!\ /bin/rc	Plan 9 rc shell script text executable
+-- 
+1.9.1
+
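
The "!:strength - 12" line added above lowers the priority of the generic BEGIN{...} "awk or perl script text" regex so that more specific magic can win. A minimal libmagic sketch of the classification path this influences follows; it is not part of the recipe, and it assumes the libmagic headers plus the default magic database are installed (build with "cc classify.c -lmagic").

    /* Minimal libmagic sketch: classify an ambiguous BEGIN { ... } buffer the
       way file(1) would.  Assumes the default magic database is installed. */
    #include <stdio.h>
    #include <magic.h>

    int main(void)
    {
      /* Matches the generic awk/perl BEGIN pattern the strength tweak targets. */
      static const char script[] = "BEGIN { print \"hello\" }\n";

      magic_t cookie = magic_open(MAGIC_NONE);
      if (cookie == NULL)
        {
          fprintf(stderr, "magic_open failed\n");
          return 1;
        }
      if (magic_load(cookie, NULL) != 0)   /* NULL loads the default magic.mgc */
        {
          fprintf(stderr, "magic_load: %s\n", magic_error(cookie));
          magic_close(cookie);
          return 1;
        }

      const char *desc = magic_buffer(cookie, script, sizeof script - 1);
      printf("classified as: %s\n", desc ? desc : "(no match)");

      magic_close(cookie);
      return 0;
    }
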
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/file/file_5.30.bb b/import-layers/yocto-poky/meta/recipes-devtools/file/file_5.30.bb
deleted file mode 100644
index 0998fcf..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/file/file_5.30.bb
+++ /dev/null
@@ -1,48 +0,0 @@
-SUMMARY = "File classification tool"
-DESCRIPTION = "File attempts to classify files depending \
-on their contents and prints a description if a match is found."
-HOMEPAGE = "http://www.darwinsys.com/file/"
-SECTION = "console/utils"
-
-# two clause BSD
-LICENSE = "BSD"
-LIC_FILES_CHKSUM = "file://COPYING;beginline=2;md5=6a7382872edb68d33e1a9398b6e03188"
-
-DEPENDS = "zlib file-replacement-native"
-DEPENDS_class-native = "zlib-native"
-
-# Blacklist a bogus tag in upstream check
-UPSTREAM_CHECK_GITTAGREGEX = "FILE(?P<pver>(?!6_23).+)"
-
-SRC_URI = "git://github.com/file/file.git \
-        file://debian-742262.patch \
-        file://0001-Add-P-prompt-into-Usage-info.patch \
-        "
-
-SRCREV = "79814950aafb81ecd6a910c2a8a3b8ec12f3e4a6"
-S = "${WORKDIR}/git"
-
-inherit autotools
-
-EXTRA_OEMAKE_append_class-target = "-e FILE_COMPILE=${STAGING_BINDIR_NATIVE}/file-native/file"
-EXTRA_OEMAKE_append_class-nativesdk = "-e FILE_COMPILE=${STAGING_BINDIR_NATIVE}/file-native/file"
-
-CFLAGS_append = " -std=c99"
-
-FILES_${PN} += "${datadir}/misc/*.mgc"
-
-do_install_append_class-native() {
-	create_cmdline_wrapper ${D}/${bindir}/file \
-		--magic-file ${datadir}/misc/magic.mgc
-}
-
-do_install_append_class-nativesdk() {
-	create_cmdline_wrapper ${D}/${bindir}/file \
-		--magic-file ${datadir}/misc/magic.mgc
-}
-
-BBCLASSEXTEND = "native nativesdk"
-PROVIDES_append_class-native = " file-replacement-native"
-# Don't use NATIVE_PACKAGE_PATH_SUFFIX as that hides libmagic from anyone who
-# depends on file-replacement-native.
-bindir_append_class-native = "/file-native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/file/file_5.31.bb b/import-layers/yocto-poky/meta/recipes-devtools/file/file_5.31.bb
new file mode 100644
index 0000000..1b1f502
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/file/file_5.31.bb
@@ -0,0 +1,48 @@
+SUMMARY = "File classification tool"
+DESCRIPTION = "File attempts to classify files depending \
+on their contents and prints a description if a match is found."
+HOMEPAGE = "http://www.darwinsys.com/file/"
+SECTION = "console/utils"
+
+# two clause BSD
+LICENSE = "BSD"
+LIC_FILES_CHKSUM = "file://COPYING;beginline=2;md5=6a7382872edb68d33e1a9398b6e03188"
+
+DEPENDS = "zlib file-replacement-native"
+DEPENDS_class-native = "zlib-native"
+
+# Blacklist a bogus tag in upstream check
+UPSTREAM_CHECK_GITTAGREGEX = "FILE(?P<pver>(?!6_23).+)"
+
+SRC_URI = "git://github.com/file/file.git \
+        file://debian-742262.patch \
+        file://0001-Add-P-prompt-into-Usage-info.patch \
+        "
+
+SRCREV = "70c5f15060c7ad81150177de83a3e64500a54c9f"
+S = "${WORKDIR}/git"
+
+inherit autotools
+
+EXTRA_OEMAKE_append_class-target = "-e FILE_COMPILE=${STAGING_BINDIR_NATIVE}/file-native/file"
+EXTRA_OEMAKE_append_class-nativesdk = "-e FILE_COMPILE=${STAGING_BINDIR_NATIVE}/file-native/file"
+
+CFLAGS_append = " -std=c99"
+
+FILES_${PN} += "${datadir}/misc/*.mgc"
+
+do_install_append_class-native() {
+	create_cmdline_wrapper ${D}/${bindir}/file \
+		--magic-file ${datadir}/misc/magic.mgc
+}
+
+do_install_append_class-nativesdk() {
+	create_cmdline_wrapper ${D}/${bindir}/file \
+		--magic-file ${datadir}/misc/magic.mgc
+}
+
+BBCLASSEXTEND = "native nativesdk"
+PROVIDES_append_class-native = " file-replacement-native"
+# Don't use NATIVE_PACKAGE_PATH_SUFFIX as that hides libmagic from anyone who
+# depends on file-replacement-native.
+bindir_append_class-native = "/file-native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/flex/flex_2.6.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/flex/flex_2.6.0.bb
index ab35b09..a906fe8 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/flex/flex_2.6.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/flex/flex_2.6.0.bb
@@ -55,7 +55,12 @@
 	cp ${S}/build-aux/test-driver ${D}${PTEST_PATH}/build-aux/
 	cp -r ${S}/tests/* ${D}${PTEST_PATH}
 	cp -r ${B}/tests/* ${D}${PTEST_PATH}
-	sed -e 's/^Makefile:/_Makefile:/' \
+	sed -e 's,--sysroot=${STAGING_DIR_TARGET},,g' \
+	    -e 's|${DEBUG_PREFIX_MAP}||g' \
+	    -e 's:${HOSTTOOLS_DIR}/::g' \
+	    -e 's:${RECIPE_SYSROOT_NATIVE}::g' \
+	    -e 's:${BASE_WORKDIR}/${MULTIMACH_TARGET_SYS}::g' \
+	    -e 's/^Makefile:/_Makefile:/' \
 	    -e 's/^srcdir = \(.*\)/srcdir = ./' -e 's/^top_srcdir = \(.*\)/top_srcdir = ./' \
 	    -e 's/^builddir = \(.*\)/builddir = ./' -e 's/^top_builddir = \(.*\)/top_builddir = ./' \
 	    -i ${D}${PTEST_PATH}/Makefile
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4.inc
deleted file mode 100644
index b769675..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4.inc
+++ /dev/null
@@ -1,147 +0,0 @@
-require gcc-common.inc
-
-# Third digit in PV should be incremented after a minor release
-
-PV = "5.4.0"
-
-#SNAP = "5-20150405"
-
-# BINV should be incremented to a revision after a minor gcc release
-
-BINV = "5.4.0"
-
-FILESEXTRAPATHS =. "${FILE_DIRNAME}/gcc-5.4:${FILE_DIRNAME}/gcc-5.4/backport:"
-
-DEPENDS =+ "mpfr gmp libmpc zlib"
-NATIVEDEPS = "mpfr-native gmp-native libmpc-native zlib-native"
-
-LICENSE = "GPL-3.0-with-GCC-exception & GPLv3"
-
-LIC_FILES_CHKSUM = "\
-    file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552 \
-    file://COPYING3;md5=d32239bcb673463ab874e80d47fae504 \
-    file://COPYING3.LIB;md5=6a6a8e020838b23406c81b19c1d46df6 \
-    file://COPYING.LIB;md5=2d5025d4aa3495befef8f17206a5b0a1 \
-    file://COPYING.RUNTIME;md5=fe60d87048567d4fe8c8a0ed2448bcc8 \
-"
-#BASEURI = "http://www.netgull.com/gcc/snapshots/${SNAP}/gcc-${SNAP}.tar.bz2"
-BASEURI = "${GNU_MIRROR}/gcc/gcc-${PV}/gcc-${PV}.tar.bz2"
-
-SRC_URI = "\
-           ${BASEURI} \
-           ${BACKPORTS} \
-           file://0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch \
-           file://0002-uclibc-conf.patch \
-           file://0003-gcc-uclibc-locale-ctype_touplow_t.patch \
-           file://0004-uclibc-locale.patch \
-           file://0005-uclibc-locale-no__x.patch \
-           file://0006-uclibc-locale-wchar_fix.patch \
-           file://0007-uclibc-locale-update.patch \
-           file://0008-missing-execinfo_h.patch \
-           file://0009-c99-snprintf.patch \
-           file://0010-gcc-poison-system-directories.patch \
-           file://0011-gcc-poison-dir-extend.patch \
-           file://0012-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch \
-           file://0013-64-bit-multilib-hack.patch \
-           file://0014-optional-libstdc.patch \
-           file://0015-gcc-disable-MASK_RELAX_PIC_CALLS-bit.patch \
-           file://0016-COLLECT_GCC_OPTIONS.patch \
-           file://0017-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch \
-           file://0018-fortran-cross-compile-hack.patch \
-           file://0019-libgcc-sjlj-check.patch \
-           file://0020-cpp-honor-sysroot.patch \
-           file://0021-MIPS64-Default-to-N64-ABI.patch \
-           file://0022-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch \
-           file://0023-gcc-Fix-argument-list-too-long-error.patch \
-           file://0024-Disable-sdt.patch \
-           file://0025-libtool.patch \
-           file://0026-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch \
-           file://0027-Use-the-multilib-config-files-from-B-instead-of-usin.patch \
-           file://0028-Avoid-using-libdir-from-.la-which-usually-points-to-.patch \
-           file://0029-export-CPP.patch \
-           file://0030-Enable-SPE-AltiVec-generation-on-powepc-linux-target.patch \
-           file://0031-Disable-the-MULTILIB_OSDIRNAMES-and-other-multilib-o.patch \
-           file://0032-Ensure-target-gcc-headers-can-be-included.patch \
-           file://0033-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch \
-           file://0034-Don-t-search-host-directory-during-relink-if-inst_pr.patch \
-           file://0035-Dont-link-the-plugins-with-libgomp-explicitly.patch \
-           file://0036-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch \
-           file://0037-aarch64-Add-support-for-musl-ldso.patch \
-           file://0038-fix-g-sysroot.patch \
-           file://0039-libcc1-fix-libcc1-s-install-path-and-rpath.patch \
-           file://0040-handle-sysroot-support-for-nativesdk-gcc.patch \
-           file://0041-Search-target-sysroot-gcc-version-specific-dirs-with.patch \
-           file://0042-Fix-various-_FOR_BUILD-and-related-variables.patch \
-           file://0043-libstdc-Support-musl.patch \
-           file://0044-Adding-mmusl-as-a-musl-libc-specifier-and-the-necess.patch \
-           file://0045-Support-for-arm-linux-musl.patch \
-           file://0046-Get-rid-of-ever-broken-fixincludes-on-musl.patch \
-           file://0047-nios2-Define-MUSL_DYNAMIC_LINKER.patch \
-           file://0048-ssp_nonshared.patch \
-           file://0049-Disable-the-weak-reference-logic-in-gthr.h-for-os-ge.patch \
-           file://0050-powerpc-pass-secure-plt-to-the-linker.patch \
-           file://0051-Ignore-fdebug-prefix-map-in-producer-string-by-Danie.patch \
-           file://0052-nios2-use-ret-with-r31.patch \
-           file://0053-expr.c-PR-target-65358-Avoid-clobbering-partial-argu.patch \
-           file://0054-support-ffile-prefix-map.patch \
-           file://0055-Reuse-fdebug-prefix-map-to-replace-ffile-prefix-map.patch \
-           file://0056-Enable-libc-provide-ssp-and-gcc_cv_target_dl_iterate.patch \
-           file://0057-unwind-fix-for-musl.patch \
-           file://0058-fdebug-prefix-map-support-to-remap-relative-path.patch \
-           file://0059-libgcc-use-ldflags.patch \
-           file://CVE-2016-6131.patch \
-"
-
-BACKPORTS = ""
-
-SRC_URI[md5sum] = "4c626ac2a83ef30dfb9260e6f59c2b30"
-SRC_URI[sha256sum] = "608df76dec2d34de6558249d8af4cbee21eceddbcb580d666f7a5a583ca3303a"
-
-UPSTREAM_CHECK_REGEX = "gcc-(?P<pver>5\.\d+\.\d+).tar"
-
-#S = "${TMPDIR}/work-shared/gcc-${PV}-${PR}/gcc-${SNAP}"
-S = "${TMPDIR}/work-shared/gcc-${PV}-${PR}/gcc-${PV}"
-B = "${WORKDIR}/gcc-${PV}/build.${HOST_SYS}.${TARGET_SYS}"
-
-# Language Overrides
-FORTRAN = ""
-JAVA = ""
-
-LTO = "--enable-lto"
-
-EXTRA_OECONF_BASE = "\
-    ${LTO} \
-    --enable-libssp \
-    --enable-libitm \
-    --disable-bootstrap \
-    --disable-libmudflap \
-    --with-system-zlib \
-    --with-linker-hash-style=${LINKER_HASH_STYLE} \
-    --enable-linker-build-id \
-    --with-ppl=no \
-    --with-cloog=no \
-    --enable-checking=release \
-    --enable-cheaders=c_global \
-    --without-isl \
-"
-
-EXTRA_OECONF_INITIAL = "\
-    --disable-libmudflap \
-    --disable-libgomp \
-    --disable-libitm \
-    --disable-libquadmath \
-    --with-system-zlib \
-    --disable-lto \
-    --disable-plugin \
-    --enable-decimal-float=no \
-    --without-isl \
-    gcc_cv_libc_provides_ssp=yes \
-"
-
-EXTRA_OECONF_append_libc-uclibc = " --disable-decimal-float "
-
-EXTRA_OECONF_PATHS = "\
-    --with-gxx-include-dir=/not/exist{target_includedir}/c++/${BINV} \
-    --with-sysroot=/not/exist \
-    --with-build-sysroot=${STAGING_DIR_TARGET} \
-"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch
deleted file mode 100644
index 1aead96..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch
+++ /dev/null
@@ -1,42 +0,0 @@
-From 6029bb338305a5d1403ee23427ed8d58eae1ff53 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:37:11 +0400
-Subject: [PATCH 01/46] gcc-4.3.1: ARCH_FLAGS_FOR_TARGET
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Inappropriate [embedded specific]
----
- configure    | 2 +-
- configure.ac | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/configure b/configure
-index a3f66ba..1cba3a9 100755
---- a/configure
-+++ b/configure
-@@ -7464,7 +7464,7 @@ fi
- # for target_alias and gcc doesn't manage it consistently.
- target_configargs="--cache-file=./config.cache ${target_configargs}"
- 
--FLAGS_FOR_TARGET=
-+FLAGS_FOR_TARGET="$ARCH_FLAGS_FOR_TARGET"
- case " $target_configdirs " in
-  *" newlib "*)
-   case " $target_configargs " in
-diff --git a/configure.ac b/configure.ac
-index 987dfab..d3adb95 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -3104,7 +3104,7 @@ fi
- # for target_alias and gcc doesn't manage it consistently.
- target_configargs="--cache-file=./config.cache ${target_configargs}"
- 
--FLAGS_FOR_TARGET=
-+FLAGS_FOR_TARGET="$ARCH_FLAGS_FOR_TARGET"
- case " $target_configdirs " in
-  *" newlib "*)
-   case " $target_configargs " in
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0002-uclibc-conf.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0002-uclibc-conf.patch
deleted file mode 100644
index 8d6aeb5..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0002-uclibc-conf.patch
+++ /dev/null
@@ -1,53 +0,0 @@
-From b67c3a844bccec1766a7ec120e2d18cdcbc5f114 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:38:25 +0400
-Subject: [PATCH 02/46] uclibc-conf
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- contrib/regression/objs-gcc.sh | 4 ++++
- libjava/classpath/ltconfig     | 4 ++--
- 2 files changed, 6 insertions(+), 2 deletions(-)
-
-diff --git a/contrib/regression/objs-gcc.sh b/contrib/regression/objs-gcc.sh
-index 60b0497..6dc7ead 100755
---- a/contrib/regression/objs-gcc.sh
-+++ b/contrib/regression/objs-gcc.sh
-@@ -106,6 +106,10 @@ if [ $H_REAL_TARGET = $H_REAL_HOST -a $H_REAL_TARGET = i686-pc-linux-gnu ]
-  then
-   make all-gdb all-dejagnu all-ld || exit 1
-   make install-gdb install-dejagnu install-ld || exit 1
-+elif [ $H_REAL_TARGET = $H_REAL_HOST -a $H_REAL_TARGET = i686-pc-linux-uclibc ]
-+ then
-+  make all-gdb all-dejagnu all-ld || exit 1
-+  make install-gdb install-dejagnu install-ld || exit 1
- elif [ $H_REAL_TARGET = $H_REAL_HOST ] ; then
-   make bootstrap || exit 1
-   make install || exit 1
-diff --git a/libjava/classpath/ltconfig b/libjava/classpath/ltconfig
-index 743d951..ae4ea60 100755
---- a/libjava/classpath/ltconfig
-+++ b/libjava/classpath/ltconfig
-@@ -603,7 +603,7 @@ host_os=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'`
- 
- # Transform linux* to *-*-linux-gnu*, to support old configure scripts.
- case $host_os in
--linux-gnu*) ;;
-+linux-gnu*|linux-uclibc*) ;;
- linux*) host=`echo $host | sed 's/^\(.*-.*-linux\)\(.*\)$/\1-gnu\2/'`
- esac
- 
-@@ -1247,7 +1247,7 @@ linux-gnuoldld* | linux-gnuaout* | linux-gnucoff*)
-   ;;
- 
- # This must be Linux ELF.
--linux-gnu*)
-+linux*)
-   version_type=linux
-   need_lib_prefix=no
-   need_version=no
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0003-gcc-uclibc-locale-ctype_touplow_t.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0003-gcc-uclibc-locale-ctype_touplow_t.patch
deleted file mode 100644
index bd03263..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0003-gcc-uclibc-locale-ctype_touplow_t.patch
+++ /dev/null
@@ -1,87 +0,0 @@
-From 9bcb3a1848ff0f8990301ca09a25b15c2cf90c6f Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:40:12 +0400
-Subject: [PATCH 03/46] gcc-uclibc-locale-ctype_touplow_t
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- libstdc++-v3/config/locale/generic/c_locale.cc | 5 +++++
- libstdc++-v3/config/locale/generic/c_locale.h  | 9 +++++++++
- libstdc++-v3/config/os/gnu-linux/ctype_base.h  | 9 +++++++++
- 3 files changed, 23 insertions(+)
-
-diff --git a/libstdc++-v3/config/locale/generic/c_locale.cc b/libstdc++-v3/config/locale/generic/c_locale.cc
-index 6da5f22..2f85c2f 100644
---- a/libstdc++-v3/config/locale/generic/c_locale.cc
-+++ b/libstdc++-v3/config/locale/generic/c_locale.cc
-@@ -263,5 +263,10 @@ _GLIBCXX_END_NAMESPACE_VERSION
- #ifdef _GLIBCXX_LONG_DOUBLE_COMPAT
- #define _GLIBCXX_LDBL_COMPAT(dbl, ldbl) \
-   extern "C" void ldbl (void) __attribute__ ((alias (#dbl)))
-+#ifdef __UCLIBC__
-+// This is because __c_locale is of type __ctype_touplow_t* which is short on uclibc. for glibc its int*
-+_GLIBCXX_LDBL_COMPAT(_ZSt14__convert_to_vIdEvPKcRT_RSt12_Ios_IostateRKPs, _ZSt14__convert_to_vIeEvPKcRT_RSt12_Ios_IostateRKPs);
-+#else
- _GLIBCXX_LDBL_COMPAT(_ZSt14__convert_to_vIdEvPKcRT_RSt12_Ios_IostateRKPi, _ZSt14__convert_to_vIeEvPKcRT_RSt12_Ios_IostateRKPi);
-+#endif
- #endif // _GLIBCXX_LONG_DOUBLE_COMPAT
-diff --git a/libstdc++-v3/config/locale/generic/c_locale.h b/libstdc++-v3/config/locale/generic/c_locale.h
-index ee3ef86..7fd5485 100644
---- a/libstdc++-v3/config/locale/generic/c_locale.h
-+++ b/libstdc++-v3/config/locale/generic/c_locale.h
-@@ -40,13 +40,22 @@
- 
- #include <clocale>
- 
-+#ifdef __UCLIBC__
-+#include <features.h>
-+#include <ctype.h>
-+#endif
-+
- #define _GLIBCXX_NUM_CATEGORIES 0
- 
- namespace std _GLIBCXX_VISIBILITY(default)
- {
- _GLIBCXX_BEGIN_NAMESPACE_VERSION
- 
-+#ifdef __UCLIBC__
-+  typedef __ctype_touplow_t*	__c_locale;
-+#else
-   typedef int*			__c_locale;
-+#endif
- 
-   // Convert numeric value of type double and long double to string and
-   // return length of string.  If vsnprintf is available use it, otherwise
-diff --git a/libstdc++-v3/config/os/gnu-linux/ctype_base.h b/libstdc++-v3/config/os/gnu-linux/ctype_base.h
-index fd52b73..2627cf3 100644
---- a/libstdc++-v3/config/os/gnu-linux/ctype_base.h
-+++ b/libstdc++-v3/config/os/gnu-linux/ctype_base.h
-@@ -33,6 +33,11 @@
- 
- // Information as gleaned from /usr/include/ctype.h
- 
-+#ifdef __UCLIBC__
-+#include <features.h>
-+#include <ctype.h>
-+#endif
-+
- namespace std _GLIBCXX_VISIBILITY(default)
- {
- _GLIBCXX_BEGIN_NAMESPACE_VERSION
-@@ -41,7 +46,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
-   struct ctype_base
-   {
-     // Non-standard typedefs.
-+#ifdef __UCLIBC__
-+    typedef const __ctype_touplow_t*	__to_type;
-+#else
-     typedef const int* 		__to_type;
-+#endif
- 
-     // NB: Offsets into ctype<char>::_M_table force a particular size
-     // on the mask type. Because of this, we don't use an enum.
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0004-uclibc-locale.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0004-uclibc-locale.patch
deleted file mode 100644
index 656265a..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0004-uclibc-locale.patch
+++ /dev/null
@@ -1,2862 +0,0 @@
-From bd9dd472d162fc72522f96f70f6391c7c63d2bf7 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:41:39 +0400
-Subject: [PATCH 04/46] uclibc-locale
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- libstdc++-v3/acinclude.m4                          |  37 ++
- .../config/locale/uclibc/c++locale_internal.h      |  63 ++
- libstdc++-v3/config/locale/uclibc/c_locale.cc      | 160 +++++
- libstdc++-v3/config/locale/uclibc/c_locale.h       | 117 ++++
- .../config/locale/uclibc/codecvt_members.cc        | 308 +++++++++
- .../config/locale/uclibc/collate_members.cc        |  80 +++
- libstdc++-v3/config/locale/uclibc/ctype_members.cc | 300 +++++++++
- .../config/locale/uclibc/messages_members.cc       | 100 +++
- .../config/locale/uclibc/messages_members.h        | 118 ++++
- .../config/locale/uclibc/monetary_members.cc       | 692 +++++++++++++++++++++
- .../config/locale/uclibc/numeric_members.cc        | 160 +++++
- libstdc++-v3/config/locale/uclibc/time_members.cc  | 406 ++++++++++++
- libstdc++-v3/config/locale/uclibc/time_members.h   |  68 ++
- libstdc++-v3/configure                             |  75 +++
- libstdc++-v3/include/c_compatibility/wchar.h       |   2 +
- libstdc++-v3/include/c_std/cwchar                  |   2 +
- 16 files changed, 2688 insertions(+)
- create mode 100644 libstdc++-v3/config/locale/uclibc/c++locale_internal.h
- create mode 100644 libstdc++-v3/config/locale/uclibc/c_locale.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/c_locale.h
- create mode 100644 libstdc++-v3/config/locale/uclibc/codecvt_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/collate_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/ctype_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/messages_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/messages_members.h
- create mode 100644 libstdc++-v3/config/locale/uclibc/monetary_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/numeric_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/time_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/time_members.h
-
-diff --git a/libstdc++-v3/acinclude.m4 b/libstdc++-v3/acinclude.m4
-index 826ef86..79cc716 100644
---- a/libstdc++-v3/acinclude.m4
-+++ b/libstdc++-v3/acinclude.m4
-@@ -2029,6 +2029,9 @@ AC_DEFUN([GLIBCXX_ENABLE_CLOCALE], [
-   # Default to "generic".
-   if test $enable_clocale_flag = auto; then
-     case ${target_os} in
-+      *-uclibc*)
-+        enable_clocale_flag=uclibc
-+        ;;
-       linux* | gnu* | kfreebsd*-gnu | knetbsd*-gnu)
- 	enable_clocale_flag=gnu
- 	;;
-@@ -2213,6 +2216,40 @@ AC_DEFUN([GLIBCXX_ENABLE_CLOCALE], [
-       CTIME_CC=config/locale/generic/time_members.cc
-       CLOCALE_INTERNAL_H=config/locale/generic/c++locale_internal.h
-       ;;
-+    uclibc)
-+      AC_MSG_RESULT(uclibc)
-+
-+      # Declare intention to use gettext, and add support for specific
-+      # languages.
-+      # For some reason, ALL_LINGUAS has to be before AM-GNU-GETTEXT
-+      ALL_LINGUAS="de fr"
-+
-+      # Don't call AM-GNU-GETTEXT here. Instead, assume glibc.
-+      AC_CHECK_PROG(check_msgfmt, msgfmt, yes, no)
-+      if test x"$check_msgfmt" = x"yes" && test x"$enable_nls" = x"yes"; then
-+        USE_NLS=yes
-+      fi
-+      # Export the build objects.
-+      for ling in $ALL_LINGUAS; do \
-+        glibcxx_MOFILES="$glibcxx_MOFILES $ling.mo"; \
-+        glibcxx_POFILES="$glibcxx_POFILES $ling.po"; \
-+      done
-+      AC_SUBST(glibcxx_MOFILES)
-+      AC_SUBST(glibcxx_POFILES)
-+
-+      CLOCALE_H=config/locale/uclibc/c_locale.h
-+      CLOCALE_CC=config/locale/uclibc/c_locale.cc
-+      CCODECVT_CC=config/locale/uclibc/codecvt_members.cc
-+      CCOLLATE_CC=config/locale/uclibc/collate_members.cc
-+      CCTYPE_CC=config/locale/uclibc/ctype_members.cc
-+      CMESSAGES_H=config/locale/uclibc/messages_members.h
-+      CMESSAGES_CC=config/locale/uclibc/messages_members.cc
-+      CMONEY_CC=config/locale/uclibc/monetary_members.cc
-+      CNUMERIC_CC=config/locale/uclibc/numeric_members.cc
-+      CTIME_H=config/locale/uclibc/time_members.h
-+      CTIME_CC=config/locale/uclibc/time_members.cc
-+      CLOCALE_INTERNAL_H=config/locale/uclibc/c++locale_internal.h
-+      ;;
-   esac
- 
-   # This is where the testsuite looks for locale catalogs, using the
-diff --git a/libstdc++-v3/config/locale/uclibc/c++locale_internal.h b/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-new file mode 100644
-index 0000000..2ae3e4a
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-@@ -0,0 +1,63 @@
-+// Prototypes for GLIBC thread locale __-prefixed functions -*- C++ -*-
-+
-+// Copyright (C) 2002, 2004, 2005 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+// Written by Jakub Jelinek <jakub@redhat.com>
-+
-+#include <bits/c++config.h>
-+#include <clocale>
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning clean this up
-+#endif
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+
-+extern "C" __typeof(nl_langinfo_l) __nl_langinfo_l;
-+extern "C" __typeof(strcoll_l) __strcoll_l;
-+extern "C" __typeof(strftime_l) __strftime_l;
-+extern "C" __typeof(strtod_l) __strtod_l;
-+extern "C" __typeof(strtof_l) __strtof_l;
-+extern "C" __typeof(strtold_l) __strtold_l;
-+extern "C" __typeof(strxfrm_l) __strxfrm_l;
-+extern "C" __typeof(newlocale) __newlocale;
-+extern "C" __typeof(freelocale) __freelocale;
-+extern "C" __typeof(duplocale) __duplocale;
-+extern "C" __typeof(uselocale) __uselocale;
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+extern "C" __typeof(iswctype_l) __iswctype_l;
-+extern "C" __typeof(towlower_l) __towlower_l;
-+extern "C" __typeof(towupper_l) __towupper_l;
-+extern "C" __typeof(wcscoll_l) __wcscoll_l;
-+extern "C" __typeof(wcsftime_l) __wcsftime_l;
-+extern "C" __typeof(wcsxfrm_l) __wcsxfrm_l;
-+extern "C" __typeof(wctype_l) __wctype_l;
-+#endif
-+
-+#endif // GLIBC 2.3 and later
-diff --git a/libstdc++-v3/config/locale/uclibc/c_locale.cc b/libstdc++-v3/config/locale/uclibc/c_locale.cc
-new file mode 100644
-index 0000000..5081dc1
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/c_locale.cc
-@@ -0,0 +1,160 @@
-+// Wrapper for underlying C-language localization -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.8  Standard locale categories.
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#include <cerrno>  // For errno
-+#include <locale>
-+#include <stdexcept>
-+#include <langinfo.h>
-+#include <bits/c++locale_internal.h>
-+
-+#ifndef __UCLIBC_HAS_XLOCALE__
-+#define __strtol_l(S, E, B, L)      strtol((S), (E), (B))
-+#define __strtoul_l(S, E, B, L)     strtoul((S), (E), (B))
-+#define __strtoll_l(S, E, B, L)     strtoll((S), (E), (B))
-+#define __strtoull_l(S, E, B, L)    strtoull((S), (E), (B))
-+#define __strtof_l(S, E, L)         strtof((S), (E))
-+#define __strtod_l(S, E, L)         strtod((S), (E))
-+#define __strtold_l(S, E, L)        strtold((S), (E))
-+#warning should dummy __newlocale check for C|POSIX ?
-+#define __newlocale(a, b, c)        NULL
-+#define __freelocale(a)             ((void)0)
-+#define __duplocale(a)              __c_locale()
-+#endif
-+
-+namespace std
-+{
-+  template<>
-+    void
-+    __convert_to_v(const char* __s, float& __v, ios_base::iostate& __err,
-+		   const __c_locale& __cloc)
-+    {
-+      if (!(__err & ios_base::failbit))
-+	{
-+	  char* __sanity;
-+	  errno = 0;
-+	  float __f = __strtof_l(__s, &__sanity, __cloc);
-+          if (__sanity != __s && errno != ERANGE)
-+	    __v = __f;
-+	  else
-+	    __err |= ios_base::failbit;
-+	}
-+    }
-+
-+  template<>
-+    void
-+    __convert_to_v(const char* __s, double& __v, ios_base::iostate& __err,
-+		   const __c_locale& __cloc)
-+    {
-+      if (!(__err & ios_base::failbit))
-+	{
-+	  char* __sanity;
-+	  errno = 0;
-+	  double __d = __strtod_l(__s, &__sanity, __cloc);
-+          if (__sanity != __s && errno != ERANGE)
-+	    __v = __d;
-+	  else
-+	    __err |= ios_base::failbit;
-+	}
-+    }
-+
-+  template<>
-+    void
-+    __convert_to_v(const char* __s, long double& __v, ios_base::iostate& __err,
-+		   const __c_locale& __cloc)
-+    {
-+      if (!(__err & ios_base::failbit))
-+	{
-+	  char* __sanity;
-+	  errno = 0;
-+	  long double __ld = __strtold_l(__s, &__sanity, __cloc);
-+          if (__sanity != __s && errno != ERANGE)
-+	    __v = __ld;
-+	  else
-+	    __err |= ios_base::failbit;
-+	}
-+    }
-+
-+  void
-+  locale::facet::_S_create_c_locale(__c_locale& __cloc, const char* __s,
-+				    __c_locale __old)
-+  {
-+    __cloc = __newlocale(1 << LC_ALL, __s, __old);
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    if (!__cloc)
-+      {
-+	// This named locale is not supported by the underlying OS.
-+	__throw_runtime_error(__N("locale::facet::_S_create_c_locale "
-+			      "name not valid"));
-+      }
-+#endif
-+  }
-+
-+  void
-+  locale::facet::_S_destroy_c_locale(__c_locale& __cloc)
-+  {
-+    if (_S_get_c_locale() != __cloc)
-+      __freelocale(__cloc);
-+  }
-+
-+  __c_locale
-+  locale::facet::_S_clone_c_locale(__c_locale& __cloc)
-+  { return __duplocale(__cloc); }
-+} // namespace std
-+
-+namespace __gnu_cxx
-+{
-+  const char* const category_names[6 + _GLIBCXX_NUM_CATEGORIES] =
-+    {
-+      "LC_CTYPE",
-+      "LC_NUMERIC",
-+      "LC_TIME",
-+      "LC_COLLATE",
-+      "LC_MONETARY",
-+      "LC_MESSAGES",
-+#if _GLIBCXX_NUM_CATEGORIES != 0
-+      "LC_PAPER",
-+      "LC_NAME",
-+      "LC_ADDRESS",
-+      "LC_TELEPHONE",
-+      "LC_MEASUREMENT",
-+      "LC_IDENTIFICATION"
-+#endif
-+    };
-+}
-+
-+namespace std
-+{
-+  const char* const* const locale::_S_categories = __gnu_cxx::category_names;
-+}  // namespace std
-diff --git a/libstdc++-v3/config/locale/uclibc/c_locale.h b/libstdc++-v3/config/locale/uclibc/c_locale.h
-new file mode 100644
-index 0000000..da07c1f
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/c_locale.h
-@@ -0,0 +1,117 @@
-+// Wrapper for underlying C-language localization -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.8  Standard locale categories.
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#ifndef _C_LOCALE_H
-+#define _C_LOCALE_H 1
-+
-+#pragma GCC system_header
-+
-+#include <cstring>              // get std::strlen
-+#include <cstdio>               // get std::snprintf or std::sprintf
-+#include <clocale>
-+#include <langinfo.h>		// For codecvt
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix this
-+#endif
-+#ifdef __UCLIBC_HAS_LOCALE__
-+#include <iconv.h>		// For codecvt using iconv, iconv_t
-+#endif
-+#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
-+#include <libintl.h> 		// For messages
-+#endif
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning what is _GLIBCXX_C_LOCALE_GNU for
-+#endif
-+#define _GLIBCXX_C_LOCALE_GNU 1
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix categories
-+#endif
-+// #define _GLIBCXX_NUM_CATEGORIES 6
-+#define _GLIBCXX_NUM_CATEGORIES 0
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+namespace __gnu_cxx
-+{
-+  extern "C" __typeof(uselocale) __uselocale;
-+}
-+#endif
-+
-+namespace std
-+{
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+  typedef __locale_t		__c_locale;
-+#else
-+  typedef int*			__c_locale;
-+#endif
-+
-+  // Convert numeric value of type _Tv to string and return length of
-+  // string.  If snprintf is available use it, otherwise fall back to
-+  // the unsafe sprintf which, in general, can be dangerous and should
-+  // be avoided.
-+  template<typename _Tv>
-+    int
-+    __convert_from_v(char* __out,
-+		     const int __size __attribute__ ((__unused__)),
-+		     const char* __fmt,
-+#ifdef __UCLIBC_HAS_XCLOCALE__
-+		     _Tv __v, const __c_locale& __cloc, int __prec)
-+    {
-+      __c_locale __old = __gnu_cxx::__uselocale(__cloc);
-+#else
-+		     _Tv __v, const __c_locale&, int __prec)
-+    {
-+# ifdef __UCLIBC_HAS_LOCALE__
-+      char* __old = std::setlocale(LC_ALL, NULL);
-+      char* __sav = new char[std::strlen(__old) + 1];
-+      std::strcpy(__sav, __old);
-+      std::setlocale(LC_ALL, "C");
-+# endif
-+#endif
-+
-+      const int __ret = std::snprintf(__out, __size, __fmt, __prec, __v);
-+
-+#ifdef __UCLIBC_HAS_XCLOCALE__
-+      __gnu_cxx::__uselocale(__old);
-+#elif defined __UCLIBC_HAS_LOCALE__
-+      std::setlocale(LC_ALL, __sav);
-+      delete [] __sav;
-+#endif
-+      return __ret;
-+    }
-+}
-+
-+#endif
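When __UCLIBC_HAS_XLOCALE__ is not available, __convert_from_v above falls back to saving the global locale, switching to "C" for the snprintf call, and restoring afterwards. A self-contained sketch of that save/format/restore pattern using only standard setlocale and snprintf (format_in_c_locale and the "%.*g" format string are illustrative choices, not taken from the patch):

    #include <clocale>
    #include <cstdio>
    #include <cstring>

    static int format_in_c_locale(char* out, std::size_t size, double v)
    {
        // setlocale(LC_ALL, NULL) returns a pointer that later calls may
        // overwrite, so take a private copy before switching locales.
        const char* cur = std::setlocale(LC_ALL, NULL);
        char* saved = new char[std::strlen(cur) + 1];
        std::strcpy(saved, cur);

        std::setlocale(LC_ALL, "C");
        const int ret = std::snprintf(out, size, "%.*g", 17, v);
        std::setlocale(LC_ALL, saved);

        delete [] saved;
        return ret;
    }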
-diff --git a/libstdc++-v3/config/locale/uclibc/codecvt_members.cc b/libstdc++-v3/config/locale/uclibc/codecvt_members.cc
-new file mode 100644
-index 0000000..64aa962
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/codecvt_members.cc
-@@ -0,0 +1,308 @@
-+// std::codecvt implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2002, 2003 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.1.5 - Template class codecvt
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#include <locale>
-+#include <cstdlib>  // For MB_CUR_MAX
-+#include <climits>  // For MB_LEN_MAX
-+#include <bits/c++locale_internal.h>
-+
-+namespace std
-+{
-+  // Specializations.
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  codecvt_base::result
-+  codecvt<wchar_t, char, mbstate_t>::
-+  do_out(state_type& __state, const intern_type* __from,
-+	 const intern_type* __from_end, const intern_type*& __from_next,
-+	 extern_type* __to, extern_type* __to_end,
-+	 extern_type*& __to_next) const
-+  {
-+    result __ret = ok;
-+    state_type __tmp_state(__state);
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_codecvt);
-+#endif
-+
-+    // wcsnrtombs is *very* fast but stops if it encounters NUL characters:
-+    // in that case we fall back to wcrtomb and then continue, in a loop.
-+    // NB: wcsnrtombs is a GNU extension
-+    for (__from_next = __from, __to_next = __to;
-+	 __from_next < __from_end && __to_next < __to_end
-+	 && __ret == ok;)
-+      {
-+	const intern_type* __from_chunk_end = wmemchr(__from_next, L'\0',
-+						      __from_end - __from_next);
-+	if (!__from_chunk_end)
-+	  __from_chunk_end = __from_end;
-+
-+	__from = __from_next;
-+	const size_t __conv = wcsnrtombs(__to_next, &__from_next,
-+					 __from_chunk_end - __from_next,
-+					 __to_end - __to_next, &__state);
-+	if (__conv == static_cast<size_t>(-1))
-+	  {
-+	    // In case of error, in order to stop at the exact place we
-+	    // have to start again from the beginning with a series of
-+	    // wcrtomb.
-+	    for (; __from < __from_next; ++__from)
-+	      __to_next += wcrtomb(__to_next, *__from, &__tmp_state);
-+	    __state = __tmp_state;
-+	    __ret = error;
-+	  }
-+	else if (__from_next && __from_next < __from_chunk_end)
-+	  {
-+	    __to_next += __conv;
-+	    __ret = partial;
-+	  }
-+	else
-+	  {
-+	    __from_next = __from_chunk_end;
-+	    __to_next += __conv;
-+	  }
-+
-+	if (__from_next < __from_end && __ret == ok)
-+	  {
-+	    extern_type __buf[MB_LEN_MAX];
-+	    __tmp_state = __state;
-+	    const size_t __conv = wcrtomb(__buf, *__from_next, &__tmp_state);
-+	    if (__conv > static_cast<size_t>(__to_end - __to_next))
-+	      __ret = partial;
-+	    else
-+	      {
-+		memcpy(__to_next, __buf, __conv);
-+		__state = __tmp_state;
-+		__to_next += __conv;
-+		++__from_next;
-+	      }
-+	  }
-+      }
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+
-+    return __ret;
-+  }
-+
-+  codecvt_base::result
-+  codecvt<wchar_t, char, mbstate_t>::
-+  do_in(state_type& __state, const extern_type* __from,
-+	const extern_type* __from_end, const extern_type*& __from_next,
-+	intern_type* __to, intern_type* __to_end,
-+	intern_type*& __to_next) const
-+  {
-+    result __ret = ok;
-+    state_type __tmp_state(__state);
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_codecvt);
-+#endif
-+
-+    // mbsnrtowcs is *very* fast but stops if it encounters NUL characters:
-+    // in that case we store an L'\0' and then continue, in a loop.
-+    // NB: mbsnrtowcs is a GNU extension
-+    for (__from_next = __from, __to_next = __to;
-+	 __from_next < __from_end && __to_next < __to_end
-+	 && __ret == ok;)
-+      {
-+	const extern_type* __from_chunk_end;
-+	__from_chunk_end = static_cast<const extern_type*>(memchr(__from_next, '\0',
-+								  __from_end
-+								  - __from_next));
-+	if (!__from_chunk_end)
-+	  __from_chunk_end = __from_end;
-+
-+	__from = __from_next;
-+	size_t __conv = mbsnrtowcs(__to_next, &__from_next,
-+				   __from_chunk_end - __from_next,
-+				   __to_end - __to_next, &__state);
-+	if (__conv == static_cast<size_t>(-1))
-+	  {
-+	    // In case of error, in order to stop at the exact place we
-+	    // have to start again from the beginning with a series of
-+	    // mbrtowc.
-+	    for (;; ++__to_next, __from += __conv)
-+	      {
-+		__conv = mbrtowc(__to_next, __from, __from_end - __from,
-+				 &__tmp_state);
-+		if (__conv == static_cast<size_t>(-1)
-+		    || __conv == static_cast<size_t>(-2))
-+		  break;
-+	      }
-+	    __from_next = __from;
-+	    __state = __tmp_state;
-+	    __ret = error;
-+	  }
-+	else if (__from_next && __from_next < __from_chunk_end)
-+	  {
-+	    // It is unclear what to return in this case (see DR 382).
-+	    __to_next += __conv;
-+	    __ret = partial;
-+	  }
-+	else
-+	  {
-+	    __from_next = __from_chunk_end;
-+	    __to_next += __conv;
-+	  }
-+
-+	if (__from_next < __from_end && __ret == ok)
-+	  {
-+	    if (__to_next < __to_end)
-+	      {
-+		// XXX Probably wrong for stateful encodings
-+		__tmp_state = __state;
-+		++__from_next;
-+		*__to_next++ = L'\0';
-+	      }
-+	    else
-+	      __ret = partial;
-+	  }
-+      }
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+
-+    return __ret;
-+  }
-+
-+  int
-+  codecvt<wchar_t, char, mbstate_t>::
-+  do_encoding() const throw()
-+  {
-+    // XXX This implementation assumes that the encoding is
-+    // stateless and is either single-byte or variable-width.
-+    int __ret = 0;
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_codecvt);
-+#endif
-+    if (MB_CUR_MAX == 1)
-+      __ret = 1;
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+    return __ret;
-+  }
-+
-+  int
-+  codecvt<wchar_t, char, mbstate_t>::
-+  do_max_length() const throw()
-+  {
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_codecvt);
-+#endif
-+    // XXX Probably wrong for stateful encodings.
-+    int __ret = MB_CUR_MAX;
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+    return __ret;
-+  }
-+
-+  int
-+  codecvt<wchar_t, char, mbstate_t>::
-+  do_length(state_type& __state, const extern_type* __from,
-+	    const extern_type* __end, size_t __max) const
-+  {
-+    int __ret = 0;
-+    state_type __tmp_state(__state);
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_codecvt);
-+#endif
-+
-+    // mbsnrtowcs is *very* fast but stops if it encounters NUL characters:
-+    // in that case we advance past it and then continue, in a loop.
-+    // NB: mbsnrtowcs is a GNU extension
-+
-+    // A dummy internal buffer is needed in order for mbsnrtowcs to consider
-+    // its fourth parameter (it wouldn't with NULL as its first parameter).
-+    wchar_t* __to = static_cast<wchar_t*>(__builtin_alloca(sizeof(wchar_t)
-+							   * __max));
-+    while (__from < __end && __max)
-+      {
-+	const extern_type* __from_chunk_end;
-+	__from_chunk_end = static_cast<const extern_type*>(memchr(__from, '\0',
-+								  __end
-+								  - __from));
-+	if (!__from_chunk_end)
-+	  __from_chunk_end = __end;
-+
-+	const extern_type* __tmp_from = __from;
-+	size_t __conv = mbsnrtowcs(__to, &__from,
-+				   __from_chunk_end - __from,
-+				   __max, &__state);
-+	if (__conv == static_cast<size_t>(-1))
-+	  {
-+	    // In case of error, in order to stop at the exact place we
-+	    // have to start again from the beginning with a series of
-+	    // mbrtowc.
-+	    for (__from = __tmp_from;; __from += __conv)
-+	      {
-+		__conv = mbrtowc(NULL, __from, __end - __from,
-+				 &__tmp_state);
-+		if (__conv == static_cast<size_t>(-1)
-+		    || __conv == static_cast<size_t>(-2))
-+		  break;
-+	      }
-+	    __state = __tmp_state;
-+	    __ret += __from - __tmp_from;
-+	    break;
-+	  }
-+	if (!__from)
-+	  __from = __from_chunk_end;
-+
-+	__ret += __from - __tmp_from;
-+	__max -= __conv;
-+
-+	if (__from < __end && __max)
-+	  {
-+	    // XXX Probably wrong for stateful encodings
-+	    __tmp_state = __state;
-+	    ++__from;
-+	    ++__ret;
-+	    --__max;
-+	  }
-+      }
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+
-+    return __ret;
-+  }
-+#endif
-+}
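do_in and do_length above rely on the GNU mbsnrtowcs extension and restart the conversion around embedded NUL bytes. The same behaviour can be reproduced portably, one character at a time, with mbrtowc; the sketch below assumes an encoding without shift states, so a decoded NUL occupies a single byte (decode_bytes is a hypothetical helper, not part of the patch):

    #include <cwchar>
    #include <vector>

    static std::vector<wchar_t> decode_bytes(const char* buf, std::size_t len)
    {
        std::vector<wchar_t> out;
        std::mbstate_t st = std::mbstate_t();
        const char* p = buf;
        const char* end = buf + len;
        while (p < end)
        {
            wchar_t wc;
            const std::size_t n = std::mbrtowc(&wc, p, end - p, &st);
            if (n == static_cast<std::size_t>(-1)
                || n == static_cast<std::size_t>(-2))
                break;                      // invalid or incomplete sequence
            if (n == 0)
            {
                out.push_back(L'\0');       // embedded NUL: store it and
                p += 1;                     // advance one byte (stateless
                continue;                   // encodings only)
            }
            out.push_back(wc);
            p += n;
        }
        return out;
    }

The facet keeps the bulk mbsnrtowcs conversion for speed and only drops to a per-character mbrtowc loop like this on error.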
-diff --git a/libstdc++-v3/config/locale/uclibc/collate_members.cc b/libstdc++-v3/config/locale/uclibc/collate_members.cc
-new file mode 100644
-index 0000000..c2664a7
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/collate_members.cc
-@@ -0,0 +1,80 @@
-+// std::collate implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.4.1.2  collate virtual functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#include <locale>
-+#include <bits/c++locale_internal.h>
-+
-+#ifndef __UCLIBC_HAS_XLOCALE__
-+#define __strcoll_l(S1, S2, L)      strcoll((S1), (S2))
-+#define __strxfrm_l(S1, S2, N, L)   strxfrm((S1), (S2), (N))
-+#define __wcscoll_l(S1, S2, L)      wcscoll((S1), (S2))
-+#define __wcsxfrm_l(S1, S2, N, L)   wcsxfrm((S1), (S2), (N))
-+#endif
-+
-+namespace std
-+{
-+  // These are basically extensions to char_traits, and perhaps should
-+  // be put there instead of here.
-+  template<>
-+    int
-+    collate<char>::_M_compare(const char* __one, const char* __two) const
-+    {
-+      int __cmp = __strcoll_l(__one, __two, _M_c_locale_collate);
-+      return (__cmp >> (8 * sizeof (int) - 2)) | (__cmp != 0);
-+    }
-+
-+  template<>
-+    size_t
-+    collate<char>::_M_transform(char* __to, const char* __from,
-+				size_t __n) const
-+    { return __strxfrm_l(__to, __from, __n, _M_c_locale_collate); }
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  template<>
-+    int
-+    collate<wchar_t>::_M_compare(const wchar_t* __one,
-+				 const wchar_t* __two) const
-+    {
-+      int __cmp = __wcscoll_l(__one, __two, _M_c_locale_collate);
-+      return (__cmp >> (8 * sizeof (int) - 2)) | (__cmp != 0);
-+    }
-+
-+  template<>
-+    size_t
-+    collate<wchar_t>::_M_transform(wchar_t* __to, const wchar_t* __from,
-+				   size_t __n) const
-+    { return __wcsxfrm_l(__to, __from, __n, _M_c_locale_collate); }
-+#endif
-+}
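Both _M_compare specializations above squeeze the strcoll/wcscoll result into {-1, 0, 1} with (__cmp >> (8 * sizeof (int) - 2)) | (__cmp != 0), which leans on GCC's arithmetic right shift of negative values. A portable way to express the same normalisation, shown as a standalone sketch (normalise and compare_collated are illustrative names):

    #include <cstring>

    // Collapse any strcoll-style result into -1, 0 or 1 without relying on
    // implementation-defined right shifts of negative integers.
    static int normalise(int cmp)
    {
        return (cmp > 0) - (cmp < 0);
    }

    static int compare_collated(const char* a, const char* b)
    {
        return normalise(std::strcoll(a, b));
    }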
-diff --git a/libstdc++-v3/config/locale/uclibc/ctype_members.cc b/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-new file mode 100644
-index 0000000..7294e3a
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-@@ -0,0 +1,300 @@
-+// std::ctype implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.1.1.2  ctype virtual functions.
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#define _LIBC
-+#include <locale>
-+#undef _LIBC
-+#include <bits/c++locale_internal.h>
-+
-+#ifndef __UCLIBC_HAS_XLOCALE__
-+#define __wctype_l(S, L)           wctype((S))
-+#define __towupper_l(C, L)         towupper((C))
-+#define __towlower_l(C, L)         towlower((C))
-+#define __iswctype_l(C, M, L)      iswctype((C), (M))
-+#endif
-+
-+namespace std
-+{
-+  // NB: The other ctype<char> specializations are in src/locale.cc and
-+  // various /config/os/* files.
-+  template<>
-+    ctype_byname<char>::ctype_byname(const char* __s, size_t __refs)
-+    : ctype<char>(0, false, __refs)
-+    {
-+      if (std::strcmp(__s, "C") != 0 && std::strcmp(__s, "POSIX") != 0)
-+	{
-+	  this->_S_destroy_c_locale(this->_M_c_locale_ctype);
-+	  this->_S_create_c_locale(this->_M_c_locale_ctype, __s);
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	  this->_M_toupper = this->_M_c_locale_ctype->__ctype_toupper;
-+	  this->_M_tolower = this->_M_c_locale_ctype->__ctype_tolower;
-+	  this->_M_table = this->_M_c_locale_ctype->__ctype_b;
-+#endif
-+	}
-+    }
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  ctype<wchar_t>::__wmask_type
-+  ctype<wchar_t>::_M_convert_to_wmask(const mask __m) const
-+  {
-+    __wmask_type __ret;
-+    switch (__m)
-+      {
-+      case space:
-+	__ret = __wctype_l("space", _M_c_locale_ctype);
-+	break;
-+      case print:
-+	__ret = __wctype_l("print", _M_c_locale_ctype);
-+	break;
-+      case cntrl:
-+	__ret = __wctype_l("cntrl", _M_c_locale_ctype);
-+	break;
-+      case upper:
-+	__ret = __wctype_l("upper", _M_c_locale_ctype);
-+	break;
-+      case lower:
-+	__ret = __wctype_l("lower", _M_c_locale_ctype);
-+	break;
-+      case alpha:
-+	__ret = __wctype_l("alpha", _M_c_locale_ctype);
-+	break;
-+      case digit:
-+	__ret = __wctype_l("digit", _M_c_locale_ctype);
-+	break;
-+      case punct:
-+	__ret = __wctype_l("punct", _M_c_locale_ctype);
-+	break;
-+      case xdigit:
-+	__ret = __wctype_l("xdigit", _M_c_locale_ctype);
-+	break;
-+      case alnum:
-+	__ret = __wctype_l("alnum", _M_c_locale_ctype);
-+	break;
-+      case graph:
-+	__ret = __wctype_l("graph", _M_c_locale_ctype);
-+	break;
-+      default:
-+	__ret = __wmask_type();
-+      }
-+    return __ret;
-+  }
-+
-+  wchar_t
-+  ctype<wchar_t>::do_toupper(wchar_t __c) const
-+  { return __towupper_l(__c, _M_c_locale_ctype); }
-+
-+  const wchar_t*
-+  ctype<wchar_t>::do_toupper(wchar_t* __lo, const wchar_t* __hi) const
-+  {
-+    while (__lo < __hi)
-+      {
-+        *__lo = __towupper_l(*__lo, _M_c_locale_ctype);
-+        ++__lo;
-+      }
-+    return __hi;
-+  }
-+
-+  wchar_t
-+  ctype<wchar_t>::do_tolower(wchar_t __c) const
-+  { return __towlower_l(__c, _M_c_locale_ctype); }
-+
-+  const wchar_t*
-+  ctype<wchar_t>::do_tolower(wchar_t* __lo, const wchar_t* __hi) const
-+  {
-+    while (__lo < __hi)
-+      {
-+        *__lo = __towlower_l(*__lo, _M_c_locale_ctype);
-+        ++__lo;
-+      }
-+    return __hi;
-+  }
-+
-+  bool
-+  ctype<wchar_t>::
-+  do_is(mask __m, wchar_t __c) const
-+  {
-+    // Highest bitmask in ctype_base == 10, but extra in "C"
-+    // library for blank.
-+    bool __ret = false;
-+    const size_t __bitmasksize = 11;
-+    for (size_t __bitcur = 0; __bitcur <= __bitmasksize; ++__bitcur)
-+      if (__m & _M_bit[__bitcur]
-+	  && __iswctype_l(__c, _M_wmask[__bitcur], _M_c_locale_ctype))
-+	{
-+	  __ret = true;
-+	  break;
-+	}
-+    return __ret;
-+  }
-+
-+  const wchar_t*
-+  ctype<wchar_t>::
-+  do_is(const wchar_t* __lo, const wchar_t* __hi, mask* __vec) const
-+  {
-+    for (; __lo < __hi; ++__vec, ++__lo)
-+      {
-+	// Highest bitmask in ctype_base == 10, but extra in "C"
-+	// library for blank.
-+	const size_t __bitmasksize = 11;
-+	mask __m = 0;
-+	for (size_t __bitcur = 0; __bitcur <= __bitmasksize; ++__bitcur)
-+	  if (__iswctype_l(*__lo, _M_wmask[__bitcur], _M_c_locale_ctype))
-+	    __m |= _M_bit[__bitcur];
-+	*__vec = __m;
-+      }
-+    return __hi;
-+  }
-+
-+  const wchar_t*
-+  ctype<wchar_t>::
-+  do_scan_is(mask __m, const wchar_t* __lo, const wchar_t* __hi) const
-+  {
-+    while (__lo < __hi && !this->do_is(__m, *__lo))
-+      ++__lo;
-+    return __lo;
-+  }
-+
-+  const wchar_t*
-+  ctype<wchar_t>::
-+  do_scan_not(mask __m, const char_type* __lo, const char_type* __hi) const
-+  {
-+    while (__lo < __hi && this->do_is(__m, *__lo) != 0)
-+      ++__lo;
-+    return __lo;
-+  }
-+
-+  wchar_t
-+  ctype<wchar_t>::
-+  do_widen(char __c) const
-+  { return _M_widen[static_cast<unsigned char>(__c)]; }
-+
-+  const char*
-+  ctype<wchar_t>::
-+  do_widen(const char* __lo, const char* __hi, wchar_t* __dest) const
-+  {
-+    while (__lo < __hi)
-+      {
-+	*__dest = _M_widen[static_cast<unsigned char>(*__lo)];
-+	++__lo;
-+	++__dest;
-+      }
-+    return __hi;
-+  }
-+
-+  char
-+  ctype<wchar_t>::
-+  do_narrow(wchar_t __wc, char __dfault) const
-+  {
-+    if (__wc >= 0 && __wc < 128 && _M_narrow_ok)
-+      return _M_narrow[__wc];
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_ctype);
-+#endif
-+    const int __c = wctob(__wc);
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+    return (__c == EOF ? __dfault : static_cast<char>(__c));
-+  }
-+
-+  const wchar_t*
-+  ctype<wchar_t>::
-+  do_narrow(const wchar_t* __lo, const wchar_t* __hi, char __dfault,
-+	    char* __dest) const
-+  {
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_ctype);
-+#endif
-+    if (_M_narrow_ok)
-+      while (__lo < __hi)
-+	{
-+	  if (*__lo >= 0 && *__lo < 128)
-+	    *__dest = _M_narrow[*__lo];
-+	  else
-+	    {
-+	      const int __c = wctob(*__lo);
-+	      *__dest = (__c == EOF ? __dfault : static_cast<char>(__c));
-+	    }
-+	  ++__lo;
-+	  ++__dest;
-+	}
-+    else
-+      while (__lo < __hi)
-+	{
-+	  const int __c = wctob(*__lo);
-+	  *__dest = (__c == EOF ? __dfault : static_cast<char>(__c));
-+	  ++__lo;
-+	  ++__dest;
-+	}
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+    return __hi;
-+  }
-+
-+  void
-+  ctype<wchar_t>::_M_initialize_ctype()
-+  {
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_ctype);
-+#endif
-+    wint_t __i;
-+    for (__i = 0; __i < 128; ++__i)
-+      {
-+	const int __c = wctob(__i);
-+	if (__c == EOF)
-+	  break;
-+	else
-+	  _M_narrow[__i] = static_cast<char>(__c);
-+      }
-+    if (__i == 128)
-+      _M_narrow_ok = true;
-+    else
-+      _M_narrow_ok = false;
-+    for (size_t __j = 0;
-+	 __j < sizeof(_M_widen) / sizeof(wint_t); ++__j)
-+      _M_widen[__j] = btowc(__j);
-+
-+    for (size_t __k = 0; __k <= 11; ++__k)
-+      {
-+	_M_bit[__k] = static_cast<mask>(_ISbit(__k));
-+	_M_wmask[__k] = _M_convert_to_wmask(_M_bit[__k]);
-+      }
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+  }
-+#endif //  _GLIBCXX_USE_WCHAR_T
-+}
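_M_initialize_ctype above builds its narrow/widen tables by probing wctob over the first 128 code points and only trusting the fast narrow path when every one of them round-trips. A minimal sketch of that probe on its own (narrow_table_usable is a hypothetical name; EOF comes from <cstdio>):

    #include <cstdio>
    #include <cwchar>

    // True if wide characters 0..127 all narrow to a single byte in the
    // current locale, i.e. a 128-entry narrow table is sufficient.
    static bool narrow_table_usable()
    {
        for (std::wint_t i = 0; i < 128; ++i)
            if (std::wctob(i) == EOF)
                return false;
        return true;
    }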
-diff --git a/libstdc++-v3/config/locale/uclibc/messages_members.cc b/libstdc++-v3/config/locale/uclibc/messages_members.cc
-new file mode 100644
-index 0000000..13594d9
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/messages_members.cc
-@@ -0,0 +1,100 @@
-+// std::messages implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.7.1.2  messages virtual functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#include <locale>
-+#include <bits/c++locale_internal.h>
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix gettext stuff
-+#endif
-+#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
-+extern "C" char *__dcgettext(const char *domainname,
-+			     const char *msgid, int category);
-+#undef gettext
-+#define gettext(msgid) __dcgettext(NULL, msgid, LC_MESSAGES)
-+#else
-+#undef gettext
-+#define gettext(msgid) (msgid)
-+#endif
-+
-+namespace std
-+{
-+  // Specializations.
-+  template<>
-+    string
-+    messages<char>::do_get(catalog, int, int, const string& __dfault) const
-+    {
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+      __c_locale __old = __uselocale(_M_c_locale_messages);
-+      const char* __msg = const_cast<const char*>(gettext(__dfault.c_str()));
-+      __uselocale(__old);
-+      return string(__msg);
-+#elif defined __UCLIBC_HAS_LOCALE__
-+      char* __old = strdup(setlocale(LC_ALL, NULL));
-+      setlocale(LC_ALL, _M_name_messages);
-+      const char* __msg = gettext(__dfault.c_str());
-+      setlocale(LC_ALL, __old);
-+      free(__old);
-+      return string(__msg);
-+#else
-+      const char* __msg = gettext(__dfault.c_str());
-+      return string(__msg);
-+#endif
-+    }
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  template<>
-+    wstring
-+    messages<wchar_t>::do_get(catalog, int, int, const wstring& __dfault) const
-+    {
-+# ifdef __UCLIBC_HAS_XLOCALE__
-+      __c_locale __old = __uselocale(_M_c_locale_messages);
-+      char* __msg = gettext(_M_convert_to_char(__dfault));
-+      __uselocale(__old);
-+      return _M_convert_from_char(__msg);
-+# elif defined __UCLIBC_HAS_LOCALE__
-+      char* __old = strdup(setlocale(LC_ALL, NULL));
-+      setlocale(LC_ALL, _M_name_messages);
-+      char* __msg = gettext(_M_convert_to_char(__dfault));
-+      setlocale(LC_ALL, __old);
-+      free(__old);
-+      return _M_convert_from_char(__msg);
-+# else
-+      char* __msg = gettext(_M_convert_to_char(__dfault));
-+      return _M_convert_from_char(__msg);
-+# endif
-+    }
-+#endif
-+}
-diff --git a/libstdc++-v3/config/locale/uclibc/messages_members.h b/libstdc++-v3/config/locale/uclibc/messages_members.h
-new file mode 100644
-index 0000000..1424078
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/messages_members.h
-@@ -0,0 +1,118 @@
-+// std::messages implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.7.1.2  messages functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix prototypes for *textdomain funcs
-+#endif
-+#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
-+extern "C" char *__textdomain(const char *domainname);
-+extern "C" char *__bindtextdomain(const char *domainname,
-+				  const char *dirname);
-+#else
-+#undef __textdomain
-+#undef __bindtextdomain
-+#define __textdomain(D)           ((void)0)
-+#define __bindtextdomain(D,P)     ((void)0)
-+#endif
-+
-+  // Non-virtual member functions.
-+  template<typename _CharT>
-+     messages<_CharT>::messages(size_t __refs)
-+     : facet(__refs), _M_c_locale_messages(_S_get_c_locale()),
-+     _M_name_messages(_S_get_c_name())
-+     { }
-+
-+  template<typename _CharT>
-+     messages<_CharT>::messages(__c_locale __cloc, const char* __s,
-+				size_t __refs)
-+     : facet(__refs), _M_c_locale_messages(_S_clone_c_locale(__cloc)),
-+     _M_name_messages(__s)
-+     {
-+       char* __tmp = new char[std::strlen(__s) + 1];
-+       std::strcpy(__tmp, __s);
-+       _M_name_messages = __tmp;
-+     }
-+
-+  template<typename _CharT>
-+    typename messages<_CharT>::catalog
-+    messages<_CharT>::open(const basic_string<char>& __s, const locale& __loc,
-+			   const char* __dir) const
-+    {
-+      __bindtextdomain(__s.c_str(), __dir);
-+      return this->do_open(__s, __loc);
-+    }
-+
-+  // Virtual member functions.
-+  template<typename _CharT>
-+    messages<_CharT>::~messages()
-+    {
-+      if (_M_name_messages != _S_get_c_name())
-+	delete [] _M_name_messages;
-+      _S_destroy_c_locale(_M_c_locale_messages);
-+    }
-+
-+  template<typename _CharT>
-+    typename messages<_CharT>::catalog
-+    messages<_CharT>::do_open(const basic_string<char>& __s,
-+			      const locale&) const
-+    {
-+      // No error checking is done, assume the catalog exists and can
-+      // be used.
-+      __textdomain(__s.c_str());
-+      return 0;
-+    }
-+
-+  template<typename _CharT>
-+    void
-+    messages<_CharT>::do_close(catalog) const
-+    { }
-+
-+   // messages_byname
-+   template<typename _CharT>
-+     messages_byname<_CharT>::messages_byname(const char* __s, size_t __refs)
-+     : messages<_CharT>(__refs)
-+     {
-+       if (this->_M_name_messages != locale::facet::_S_get_c_name())
-+	 delete [] this->_M_name_messages;
-+       char* __tmp = new char[std::strlen(__s) + 1];
-+       std::strcpy(__tmp, __s);
-+       this->_M_name_messages = __tmp;
-+
-+       if (std::strcmp(__s, "C") != 0 && std::strcmp(__s, "POSIX") != 0)
-+	 {
-+	   this->_S_destroy_c_locale(this->_M_c_locale_messages);
-+	   this->_S_create_c_locale(this->_M_c_locale_messages, __s);
-+	 }
-+     }
-diff --git a/libstdc++-v3/config/locale/uclibc/monetary_members.cc b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-new file mode 100644
-index 0000000..aa52731
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-@@ -0,0 +1,692 @@
-+// std::moneypunct implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.6.3.2  moneypunct virtual functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#define _LIBC
-+#include <locale>
-+#undef _LIBC
-+#include <bits/c++locale_internal.h>
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning optimize this for uclibc
-+#warning tailor for stub locale support
-+#endif
-+
-+#ifndef __UCLIBC_HAS_XLOCALE__
-+#define __nl_langinfo_l(N, L)         nl_langinfo((N))
-+#endif
-+
-+namespace std
-+{
-+  // Construct and return valid pattern consisting of some combination of:
-+  // space none symbol sign value
-+  money_base::pattern
-+  money_base::_S_construct_pattern(char __precedes, char __space, char __posn)
-+  {
-+    pattern __ret;
-+
-+    // This insanely complicated routine attempts to construct a valid
-+    // pattern for use with moneypunct. A couple of invariants:
-+
-+    // if (__precedes) symbol -> value
-+    // else value -> symbol
-+
-+    // if (__space) space
-+    // else none
-+
-+    // none == never first
-+    // space never first or last
-+
-+    // Any elegant implementations of this are welcome.
-+    switch (__posn)
-+      {
-+      case 0:
-+      case 1:
-+	// 1 The sign precedes the value and symbol.
-+	__ret.field[0] = sign;
-+	if (__space)
-+	  {
-+	    // Pattern starts with sign.
-+	    if (__precedes)
-+	      {
-+		__ret.field[1] = symbol;
-+		__ret.field[3] = value;
-+	      }
-+	    else
-+	      {
-+		__ret.field[1] = value;
-+		__ret.field[3] = symbol;
-+	      }
-+	    __ret.field[2] = space;
-+	  }
-+	else
-+	  {
-+	    // Pattern starts with sign and ends with none.
-+	    if (__precedes)
-+	      {
-+		__ret.field[1] = symbol;
-+		__ret.field[2] = value;
-+	      }
-+	    else
-+	      {
-+		__ret.field[1] = value;
-+		__ret.field[2] = symbol;
-+	      }
-+	    __ret.field[3] = none;
-+	  }
-+	break;
-+      case 2:
-+	// 2 The sign follows the value and symbol.
-+	if (__space)
-+	  {
-+	    // Pattern either ends with sign.
-+	    if (__precedes)
-+	      {
-+		__ret.field[0] = symbol;
-+		__ret.field[2] = value;
-+	      }
-+	    else
-+	      {
-+		__ret.field[0] = value;
-+		__ret.field[2] = symbol;
-+	      }
-+	    __ret.field[1] = space;
-+	    __ret.field[3] = sign;
-+	  }
-+	else
-+	  {
-+	    // Pattern ends with sign then none.
-+	    if (__precedes)
-+	      {
-+		__ret.field[0] = symbol;
-+		__ret.field[1] = value;
-+	      }
-+	    else
-+	      {
-+		__ret.field[0] = value;
-+		__ret.field[1] = symbol;
-+	      }
-+	    __ret.field[2] = sign;
-+	    __ret.field[3] = none;
-+	  }
-+	break;
-+      case 3:
-+	// 3 The sign immediately precedes the symbol.
-+	if (__precedes)
-+	  {
-+	    __ret.field[0] = sign;
-+	    __ret.field[1] = symbol;
-+	    if (__space)
-+	      {
-+		__ret.field[2] = space;
-+		__ret.field[3] = value;
-+	      }
-+	    else
-+	      {
-+		__ret.field[2] = value;
-+		__ret.field[3] = none;
-+	      }
-+	  }
-+	else
-+	  {
-+	    __ret.field[0] = value;
-+	    if (__space)
-+	      {
-+		__ret.field[1] = space;
-+		__ret.field[2] = sign;
-+		__ret.field[3] = symbol;
-+	      }
-+	    else
-+	      {
-+		__ret.field[1] = sign;
-+		__ret.field[2] = symbol;
-+		__ret.field[3] = none;
-+	      }
-+	  }
-+	break;
-+      case 4:
-+	// 4 The sign immediately follows the symbol.
-+	if (__precedes)
-+	  {
-+	    __ret.field[0] = symbol;
-+	    __ret.field[1] = sign;
-+	    if (__space)
-+	      {
-+		__ret.field[2] = space;
-+		__ret.field[3] = value;
-+	      }
-+	    else
-+	      {
-+		__ret.field[2] = value;
-+		__ret.field[3] = none;
-+	      }
-+	  }
-+	else
-+	  {
-+	    __ret.field[0] = value;
-+	    if (__space)
-+	      {
-+		__ret.field[1] = space;
-+		__ret.field[2] = symbol;
-+		__ret.field[3] = sign;
-+	      }
-+	    else
-+	      {
-+		__ret.field[1] = symbol;
-+		__ret.field[2] = sign;
-+		__ret.field[3] = none;
-+	      }
-+	  }
-+	break;
-+      default:
-+	;
-+      }
-+    return __ret;
-+  }
-+
-+  template<>
-+    void
-+    moneypunct<char, true>::_M_initialize_moneypunct(__c_locale __cloc,
-+						     const char*)
-+    {
-+      if (!_M_data)
-+	_M_data = new __moneypunct_cache<char, true>;
-+
-+      if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_data->_M_decimal_point = '.';
-+	  _M_data->_M_thousands_sep = ',';
-+	  _M_data->_M_grouping = "";
-+	  _M_data->_M_grouping_size = 0;
-+	  _M_data->_M_curr_symbol = "";
-+	  _M_data->_M_curr_symbol_size = 0;
-+	  _M_data->_M_positive_sign = "";
-+	  _M_data->_M_positive_sign_size = 0;
-+	  _M_data->_M_negative_sign = "";
-+	  _M_data->_M_negative_sign_size = 0;
-+	  _M_data->_M_frac_digits = 0;
-+	  _M_data->_M_pos_format = money_base::_S_default_pattern;
-+	  _M_data->_M_neg_format = money_base::_S_default_pattern;
-+
-+	  for (size_t __i = 0; __i < money_base::_S_end; ++__i)
-+	    _M_data->_M_atoms[__i] = money_base::_S_atoms[__i];
-+	}
-+      else
-+	{
-+	  // Named locale.
-+	  _M_data->_M_decimal_point = *(__nl_langinfo_l(__MON_DECIMAL_POINT,
-+							__cloc));
-+	  _M_data->_M_thousands_sep = *(__nl_langinfo_l(__MON_THOUSANDS_SEP,
-+							__cloc));
-+	  _M_data->_M_grouping = __nl_langinfo_l(__MON_GROUPING, __cloc);
-+	  _M_data->_M_grouping_size = strlen(_M_data->_M_grouping);
-+	  _M_data->_M_positive_sign = __nl_langinfo_l(__POSITIVE_SIGN, __cloc);
-+	  _M_data->_M_positive_sign_size = strlen(_M_data->_M_positive_sign);
-+
-+	  char __nposn = *(__nl_langinfo_l(__INT_N_SIGN_POSN, __cloc));
-+	  if (!__nposn)
-+	    _M_data->_M_negative_sign = "()";
-+	  else
-+	    _M_data->_M_negative_sign = __nl_langinfo_l(__NEGATIVE_SIGN,
-+							__cloc);
-+	  _M_data->_M_negative_sign_size = strlen(_M_data->_M_negative_sign);
-+
-+	  // _Intl == true
-+	  _M_data->_M_curr_symbol = __nl_langinfo_l(__INT_CURR_SYMBOL, __cloc);
-+	  _M_data->_M_curr_symbol_size = strlen(_M_data->_M_curr_symbol);
-+	  _M_data->_M_frac_digits = *(__nl_langinfo_l(__INT_FRAC_DIGITS,
-+						      __cloc));
-+	  char __pprecedes = *(__nl_langinfo_l(__INT_P_CS_PRECEDES, __cloc));
-+	  char __pspace = *(__nl_langinfo_l(__INT_P_SEP_BY_SPACE, __cloc));
-+	  char __pposn = *(__nl_langinfo_l(__INT_P_SIGN_POSN, __cloc));
-+	  _M_data->_M_pos_format = _S_construct_pattern(__pprecedes, __pspace,
-+							__pposn);
-+	  char __nprecedes = *(__nl_langinfo_l(__INT_N_CS_PRECEDES, __cloc));
-+	  char __nspace = *(__nl_langinfo_l(__INT_N_SEP_BY_SPACE, __cloc));
-+	  _M_data->_M_neg_format = _S_construct_pattern(__nprecedes, __nspace,
-+							__nposn);
-+	}
-+    }
-+
-+  template<>
-+    void
-+    moneypunct<char, false>::_M_initialize_moneypunct(__c_locale __cloc,
-+						      const char*)
-+    {
-+      if (!_M_data)
-+	_M_data = new __moneypunct_cache<char, false>;
-+
-+      if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_data->_M_decimal_point = '.';
-+	  _M_data->_M_thousands_sep = ',';
-+	  _M_data->_M_grouping = "";
-+	  _M_data->_M_grouping_size = 0;
-+	  _M_data->_M_curr_symbol = "";
-+	  _M_data->_M_curr_symbol_size = 0;
-+	  _M_data->_M_positive_sign = "";
-+	  _M_data->_M_positive_sign_size = 0;
-+	  _M_data->_M_negative_sign = "";
-+	  _M_data->_M_negative_sign_size = 0;
-+	  _M_data->_M_frac_digits = 0;
-+	  _M_data->_M_pos_format = money_base::_S_default_pattern;
-+	  _M_data->_M_neg_format = money_base::_S_default_pattern;
-+
-+	  for (size_t __i = 0; __i < money_base::_S_end; ++__i)
-+	    _M_data->_M_atoms[__i] = money_base::_S_atoms[__i];
-+	}
-+      else
-+	{
-+	  // Named locale.
-+	  _M_data->_M_decimal_point = *(__nl_langinfo_l(__MON_DECIMAL_POINT,
-+							__cloc));
-+	  _M_data->_M_thousands_sep = *(__nl_langinfo_l(__MON_THOUSANDS_SEP,
-+							__cloc));
-+	  _M_data->_M_grouping = __nl_langinfo_l(__MON_GROUPING, __cloc);
-+	  _M_data->_M_grouping_size = strlen(_M_data->_M_grouping);
-+	  _M_data->_M_positive_sign = __nl_langinfo_l(__POSITIVE_SIGN, __cloc);
-+	  _M_data->_M_positive_sign_size = strlen(_M_data->_M_positive_sign);
-+
-+	  char __nposn = *(__nl_langinfo_l(__N_SIGN_POSN, __cloc));
-+	  if (!__nposn)
-+	    _M_data->_M_negative_sign = "()";
-+	  else
-+	    _M_data->_M_negative_sign = __nl_langinfo_l(__NEGATIVE_SIGN,
-+							__cloc);
-+	  _M_data->_M_negative_sign_size = strlen(_M_data->_M_negative_sign);
-+
-+	  // _Intl == false
-+	  _M_data->_M_curr_symbol = __nl_langinfo_l(__CURRENCY_SYMBOL, __cloc);
-+	  _M_data->_M_curr_symbol_size = strlen(_M_data->_M_curr_symbol);
-+	  _M_data->_M_frac_digits = *(__nl_langinfo_l(__FRAC_DIGITS, __cloc));
-+	  char __pprecedes = *(__nl_langinfo_l(__P_CS_PRECEDES, __cloc));
-+	  char __pspace = *(__nl_langinfo_l(__P_SEP_BY_SPACE, __cloc));
-+	  char __pposn = *(__nl_langinfo_l(__P_SIGN_POSN, __cloc));
-+	  _M_data->_M_pos_format = _S_construct_pattern(__pprecedes, __pspace,
-+							__pposn);
-+	  char __nprecedes = *(__nl_langinfo_l(__N_CS_PRECEDES, __cloc));
-+	  char __nspace = *(__nl_langinfo_l(__N_SEP_BY_SPACE, __cloc));
-+	  _M_data->_M_neg_format = _S_construct_pattern(__nprecedes, __nspace,
-+							__nposn);
-+	}
-+    }
-+
-+  template<>
-+    moneypunct<char, true>::~moneypunct()
-+    { delete _M_data; }
-+
-+  template<>
-+    moneypunct<char, false>::~moneypunct()
-+    { delete _M_data; }
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  template<>
-+    void
-+    moneypunct<wchar_t, true>::_M_initialize_moneypunct(__c_locale __cloc,
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+							const char*)
-+#else
-+							const char* __name)
-+#endif
-+    {
-+      if (!_M_data)
-+	_M_data = new __moneypunct_cache<wchar_t, true>;
-+
-+      if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_data->_M_decimal_point = L'.';
-+	  _M_data->_M_thousands_sep = L',';
-+	  _M_data->_M_grouping = "";
-+	  _M_data->_M_grouping_size = 0;
-+	  _M_data->_M_curr_symbol = L"";
-+	  _M_data->_M_curr_symbol_size = 0;
-+	  _M_data->_M_positive_sign = L"";
-+	  _M_data->_M_positive_sign_size = 0;
-+	  _M_data->_M_negative_sign = L"";
-+	  _M_data->_M_negative_sign_size = 0;
-+	  _M_data->_M_frac_digits = 0;
-+	  _M_data->_M_pos_format = money_base::_S_default_pattern;
-+	  _M_data->_M_neg_format = money_base::_S_default_pattern;
-+
-+	  // Use ctype::widen code without the facet...
-+	  for (size_t __i = 0; __i < money_base::_S_end; ++__i)
-+	    _M_data->_M_atoms[__i] =
-+	      static_cast<wchar_t>(money_base::_S_atoms[__i]);
-+	}
-+      else
-+	{
-+	  // Named locale.
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	  __c_locale __old = __uselocale(__cloc);
-+#else
-+	  // Switch to named locale so that mbsrtowcs will work.
-+	  char* __old = strdup(setlocale(LC_ALL, NULL));
-+	  setlocale(LC_ALL, __name);
-+#endif
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix this... should be monetary
-+#endif
-+#ifdef __UCLIBC__
-+# ifdef __UCLIBC_HAS_XLOCALE__
-+	  _M_data->_M_decimal_point = __cloc->decimal_point_wc;
-+	  _M_data->_M_thousands_sep = __cloc->thousands_sep_wc;
-+# else
-+	  _M_data->_M_decimal_point = __global_locale->decimal_point_wc;
-+	  _M_data->_M_thousands_sep = __global_locale->thousands_sep_wc;
-+# endif
-+#else
-+	  union { char *__s; wchar_t __w; } __u;
-+	  __u.__s = __nl_langinfo_l(_NL_MONETARY_DECIMAL_POINT_WC, __cloc);
-+	  _M_data->_M_decimal_point = __u.__w;
-+
-+	  __u.__s = __nl_langinfo_l(_NL_MONETARY_THOUSANDS_SEP_WC, __cloc);
-+	  _M_data->_M_thousands_sep = __u.__w;
-+#endif
-+	  _M_data->_M_grouping = __nl_langinfo_l(__MON_GROUPING, __cloc);
-+	  _M_data->_M_grouping_size = strlen(_M_data->_M_grouping);
-+
-+	  const char* __cpossign = __nl_langinfo_l(__POSITIVE_SIGN, __cloc);
-+	  const char* __cnegsign = __nl_langinfo_l(__NEGATIVE_SIGN, __cloc);
-+	  const char* __ccurr = __nl_langinfo_l(__INT_CURR_SYMBOL, __cloc);
-+
-+	  wchar_t* __wcs_ps = 0;
-+	  wchar_t* __wcs_ns = 0;
-+	  const char __nposn = *(__nl_langinfo_l(__INT_N_SIGN_POSN, __cloc));
-+	  try
-+	    {
-+	      mbstate_t __state;
-+	      size_t __len = strlen(__cpossign);
-+	      if (__len)
-+		{
-+		  ++__len;
-+		  memset(&__state, 0, sizeof(mbstate_t));
-+		  __wcs_ps = new wchar_t[__len];
-+		  mbsrtowcs(__wcs_ps, &__cpossign, __len, &__state);
-+		  _M_data->_M_positive_sign = __wcs_ps;
-+		}
-+	      else
-+		_M_data->_M_positive_sign = L"";
-+	      _M_data->_M_positive_sign_size = wcslen(_M_data->_M_positive_sign);
-+
-+	      __len = strlen(__cnegsign);
-+	      if (!__nposn)
-+		_M_data->_M_negative_sign = L"()";
-+	      else if (__len)
-+		{
-+		  ++__len;
-+		  memset(&__state, 0, sizeof(mbstate_t));
-+		  __wcs_ns = new wchar_t[__len];
-+		  mbsrtowcs(__wcs_ns, &__cnegsign, __len, &__state);
-+		  _M_data->_M_negative_sign = __wcs_ns;
-+		}
-+	      else
-+		_M_data->_M_negative_sign = L"";
-+	      _M_data->_M_negative_sign_size = wcslen(_M_data->_M_negative_sign);
-+
-+	      // _Intl == true.
-+	      __len = strlen(__ccurr);
-+	      if (__len)
-+		{
-+		  ++__len;
-+		  memset(&__state, 0, sizeof(mbstate_t));
-+		  wchar_t* __wcs = new wchar_t[__len];
-+		  mbsrtowcs(__wcs, &__ccurr, __len, &__state);
-+		  _M_data->_M_curr_symbol = __wcs;
-+		}
-+	      else
-+		_M_data->_M_curr_symbol = L"";
-+	      _M_data->_M_curr_symbol_size = wcslen(_M_data->_M_curr_symbol);
-+	    }
-+	  catch (...)
-+	    {
-+	      delete _M_data;
-+	      _M_data = 0;
-+	      delete __wcs_ps;
-+	      delete __wcs_ns;
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	      __uselocale(__old);
-+#else
-+	      setlocale(LC_ALL, __old);
-+	      free(__old);
-+#endif
-+	      __throw_exception_again;
-+	    }
-+
-+	  _M_data->_M_frac_digits = *(__nl_langinfo_l(__INT_FRAC_DIGITS,
-+						      __cloc));
-+	  char __pprecedes = *(__nl_langinfo_l(__INT_P_CS_PRECEDES, __cloc));
-+	  char __pspace = *(__nl_langinfo_l(__INT_P_SEP_BY_SPACE, __cloc));
-+	  char __pposn = *(__nl_langinfo_l(__INT_P_SIGN_POSN, __cloc));
-+	  _M_data->_M_pos_format = _S_construct_pattern(__pprecedes, __pspace,
-+							__pposn);
-+	  char __nprecedes = *(__nl_langinfo_l(__INT_N_CS_PRECEDES, __cloc));
-+	  char __nspace = *(__nl_langinfo_l(__INT_N_SEP_BY_SPACE, __cloc));
-+	  _M_data->_M_neg_format = _S_construct_pattern(__nprecedes, __nspace,
-+							__nposn);
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	  __uselocale(__old);
-+#else
-+	  setlocale(LC_ALL, __old);
-+	  free(__old);
-+#endif
-+	}
-+    }
-+
-+  template<>
-+  void
-+  moneypunct<wchar_t, false>::_M_initialize_moneypunct(__c_locale __cloc,
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+						       const char*)
-+#else
-+                                                       const char* __name)
-+#endif
-+  {
-+    if (!_M_data)
-+      _M_data = new __moneypunct_cache<wchar_t, false>;
-+
-+    if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_data->_M_decimal_point = L'.';
-+	  _M_data->_M_thousands_sep = L',';
-+	  _M_data->_M_grouping = "";
-+          _M_data->_M_grouping_size = 0;
-+	  _M_data->_M_curr_symbol = L"";
-+	  _M_data->_M_curr_symbol_size = 0;
-+	  _M_data->_M_positive_sign = L"";
-+	  _M_data->_M_positive_sign_size = 0;
-+	  _M_data->_M_negative_sign = L"";
-+	  _M_data->_M_negative_sign_size = 0;
-+	  _M_data->_M_frac_digits = 0;
-+	  _M_data->_M_pos_format = money_base::_S_default_pattern;
-+	  _M_data->_M_neg_format = money_base::_S_default_pattern;
-+
-+	  // Use ctype::widen code without the facet...
-+	  for (size_t __i = 0; __i < money_base::_S_end; ++__i)
-+	    _M_data->_M_atoms[__i] =
-+	      static_cast<wchar_t>(money_base::_S_atoms[__i]);
-+	}
-+      else
-+	{
-+	  // Named locale.
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	  __c_locale __old = __uselocale(__cloc);
-+#else
-+	  // Switch to named locale so that mbsrtowcs will work.
-+	  char* __old = strdup(setlocale(LC_ALL, NULL));
-+	  setlocale(LC_ALL, __name);
-+#endif
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix this... should be monetary
-+#endif
-+#ifdef __UCLIBC__
-+# ifdef __UCLIBC_HAS_XLOCALE__
-+	  _M_data->_M_decimal_point = __cloc->decimal_point_wc;
-+	  _M_data->_M_thousands_sep = __cloc->thousands_sep_wc;
-+# else
-+	  _M_data->_M_decimal_point = __global_locale->decimal_point_wc;
-+	  _M_data->_M_thousands_sep = __global_locale->thousands_sep_wc;
-+# endif
-+#else
-+          union { char *__s; wchar_t __w; } __u;
-+	  __u.__s = __nl_langinfo_l(_NL_MONETARY_DECIMAL_POINT_WC, __cloc);
-+	  _M_data->_M_decimal_point = __u.__w;
-+
-+	  __u.__s = __nl_langinfo_l(_NL_MONETARY_THOUSANDS_SEP_WC, __cloc);
-+	  _M_data->_M_thousands_sep = __u.__w;
-+#endif
-+	  _M_data->_M_grouping = __nl_langinfo_l(__MON_GROUPING, __cloc);
-+          _M_data->_M_grouping_size = strlen(_M_data->_M_grouping);
-+
-+	  const char* __cpossign = __nl_langinfo_l(__POSITIVE_SIGN, __cloc);
-+	  const char* __cnegsign = __nl_langinfo_l(__NEGATIVE_SIGN, __cloc);
-+	  const char* __ccurr = __nl_langinfo_l(__CURRENCY_SYMBOL, __cloc);
-+
-+	  wchar_t* __wcs_ps = 0;
-+	  wchar_t* __wcs_ns = 0;
-+	  const char __nposn = *(__nl_langinfo_l(__N_SIGN_POSN, __cloc));
-+	  try
-+            {
-+              mbstate_t __state;
-+              size_t __len;
-+              __len = strlen(__cpossign);
-+              if (__len)
-+                {
-+		  ++__len;
-+		  memset(&__state, 0, sizeof(mbstate_t));
-+		  __wcs_ps = new wchar_t[__len];
-+		  mbsrtowcs(__wcs_ps, &__cpossign, __len, &__state);
-+		  _M_data->_M_positive_sign = __wcs_ps;
-+		}
-+	      else
-+		_M_data->_M_positive_sign = L"";
-+              _M_data->_M_positive_sign_size = wcslen(_M_data->_M_positive_sign);
-+
-+	      __len = strlen(__cnegsign);
-+	      if (!__nposn)
-+		_M_data->_M_negative_sign = L"()";
-+	      else if (__len)
-+		{
-+		  ++__len;
-+		  memset(&__state, 0, sizeof(mbstate_t));
-+		  __wcs_ns = new wchar_t[__len];
-+		  mbsrtowcs(__wcs_ns, &__cnegsign, __len, &__state);
-+		  _M_data->_M_negative_sign = __wcs_ns;
-+		}
-+	      else
-+		_M_data->_M_negative_sign = L"";
-+              _M_data->_M_negative_sign_size = wcslen(_M_data->_M_negative_sign);
-+
-+	      // _Intl == true.
-+	      __len = strlen(__ccurr);
-+	      if (__len)
-+		{
-+		  ++__len;
-+		  memset(&__state, 0, sizeof(mbstate_t));
-+		  wchar_t* __wcs = new wchar_t[__len];
-+		  mbsrtowcs(__wcs, &__ccurr, __len, &__state);
-+		  _M_data->_M_curr_symbol = __wcs;
-+		}
-+	      else
-+		_M_data->_M_curr_symbol = L"";
-+              _M_data->_M_curr_symbol_size = wcslen(_M_data->_M_curr_symbol);
-+	    }
-+          catch (...)
-+	    {
-+	      delete _M_data;
-+              _M_data = 0;
-+	      delete __wcs_ps;
-+	      delete __wcs_ns;
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	      __uselocale(__old);
-+#else
-+	      setlocale(LC_ALL, __old);
-+	      free(__old);
-+#endif
-+              __throw_exception_again;
-+	    }
-+
-+	  _M_data->_M_frac_digits = *(__nl_langinfo_l(__FRAC_DIGITS, __cloc));
-+	  char __pprecedes = *(__nl_langinfo_l(__P_CS_PRECEDES, __cloc));
-+	  char __pspace = *(__nl_langinfo_l(__P_SEP_BY_SPACE, __cloc));
-+	  char __pposn = *(__nl_langinfo_l(__P_SIGN_POSN, __cloc));
-+	  _M_data->_M_pos_format = _S_construct_pattern(__pprecedes, __pspace,
-+	                                                __pposn);
-+	  char __nprecedes = *(__nl_langinfo_l(__N_CS_PRECEDES, __cloc));
-+	  char __nspace = *(__nl_langinfo_l(__N_SEP_BY_SPACE, __cloc));
-+	  _M_data->_M_neg_format = _S_construct_pattern(__nprecedes, __nspace,
-+	                                                __nposn);
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	  __uselocale(__old);
-+#else
-+	  setlocale(LC_ALL, __old);
-+	  free(__old);
-+#endif
-+	}
-+    }
-+
-+  template<>
-+    moneypunct<wchar_t, true>::~moneypunct()
-+    {
-+      if (_M_data->_M_positive_sign_size)
-+	delete [] _M_data->_M_positive_sign;
-+      if (_M_data->_M_negative_sign_size
-+          && wcscmp(_M_data->_M_negative_sign, L"()") != 0)
-+	delete [] _M_data->_M_negative_sign;
-+      if (_M_data->_M_curr_symbol_size)
-+	delete [] _M_data->_M_curr_symbol;
-+      delete _M_data;
-+    }
-+
-+  template<>
-+    moneypunct<wchar_t, false>::~moneypunct()
-+    {
-+      if (_M_data->_M_positive_sign_size)
-+	delete [] _M_data->_M_positive_sign;
-+      if (_M_data->_M_negative_sign_size
-+          && wcscmp(_M_data->_M_negative_sign, L"()") != 0)
-+	delete [] _M_data->_M_negative_sign;
-+      if (_M_data->_M_curr_symbol_size)
-+	delete [] _M_data->_M_curr_symbol;
-+      delete _M_data;
-+    }
-+#endif
-+}
-diff --git a/libstdc++-v3/config/locale/uclibc/numeric_members.cc b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-new file mode 100644
-index 0000000..883ec1a
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-@@ -0,0 +1,160 @@
-+// std::numpunct implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.3.1.2  numpunct virtual functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#define _LIBC
-+#include <locale>
-+#undef _LIBC
-+#include <bits/c++locale_internal.h>
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning tailor for stub locale support
-+#endif
-+#ifndef __UCLIBC_HAS_XLOCALE__
-+#define __nl_langinfo_l(N, L)         nl_langinfo((N))
-+#endif
-+
-+namespace std
-+{
-+  template<>
-+    void
-+    numpunct<char>::_M_initialize_numpunct(__c_locale __cloc)
-+    {
-+      if (!_M_data)
-+	_M_data = new __numpunct_cache<char>;
-+
-+      if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_data->_M_grouping = "";
-+	  _M_data->_M_grouping_size = 0;
-+	  _M_data->_M_use_grouping = false;
-+
-+	  _M_data->_M_decimal_point = '.';
-+	  _M_data->_M_thousands_sep = ',';
-+
-+	  for (size_t __i = 0; __i < __num_base::_S_oend; ++__i)
-+	    _M_data->_M_atoms_out[__i] = __num_base::_S_atoms_out[__i];
-+
-+	  for (size_t __j = 0; __j < __num_base::_S_iend; ++__j)
-+	    _M_data->_M_atoms_in[__j] = __num_base::_S_atoms_in[__j];
-+	}
-+      else
-+	{
-+	  // Named locale.
-+	  _M_data->_M_decimal_point = *(__nl_langinfo_l(DECIMAL_POINT,
-+							__cloc));
-+	  _M_data->_M_thousands_sep = *(__nl_langinfo_l(THOUSANDS_SEP,
-+							__cloc));
-+
-+	  // Check for NULL, which implies no grouping.
-+	  if (_M_data->_M_thousands_sep == '\0')
-+	    _M_data->_M_grouping = "";
-+	  else
-+	    _M_data->_M_grouping = __nl_langinfo_l(GROUPING, __cloc);
-+	  _M_data->_M_grouping_size = strlen(_M_data->_M_grouping);
-+	}
-+
-+      // NB: There is no way to extact this info from posix locales.
-+      // _M_truename = __nl_langinfo_l(YESSTR, __cloc);
-+      _M_data->_M_truename = "true";
-+      _M_data->_M_truename_size = 4;
-+      // _M_falsename = __nl_langinfo_l(NOSTR, __cloc);
-+      _M_data->_M_falsename = "false";
-+      _M_data->_M_falsename_size = 5;
-+    }
-+
-+  template<>
-+    numpunct<char>::~numpunct()
-+    { delete _M_data; }
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  template<>
-+    void
-+    numpunct<wchar_t>::_M_initialize_numpunct(__c_locale __cloc)
-+    {
-+      if (!_M_data)
-+	_M_data = new __numpunct_cache<wchar_t>;
-+
-+      if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_data->_M_grouping = "";
-+	  _M_data->_M_grouping_size = 0;
-+	  _M_data->_M_use_grouping = false;
-+
-+	  _M_data->_M_decimal_point = L'.';
-+	  _M_data->_M_thousands_sep = L',';
-+
-+	  // Use ctype::widen code without the facet...
-+	  for (size_t __i = 0; __i < __num_base::_S_oend; ++__i)
-+	    _M_data->_M_atoms_out[__i] =
-+	      static_cast<wchar_t>(__num_base::_S_atoms_out[__i]);
-+
-+	  for (size_t __j = 0; __j < __num_base::_S_iend; ++__j)
-+	    _M_data->_M_atoms_in[__j] =
-+	      static_cast<wchar_t>(__num_base::_S_atoms_in[__j]);
-+	}
-+      else
-+	{
-+	  // Named locale.
-+	  // NB: In the GNU model wchar_t is always 32 bit wide.
-+	  union { char *__s; wchar_t __w; } __u;
-+	  __u.__s = __nl_langinfo_l(_NL_NUMERIC_DECIMAL_POINT_WC, __cloc);
-+	  _M_data->_M_decimal_point = __u.__w;
-+
-+	  __u.__s = __nl_langinfo_l(_NL_NUMERIC_THOUSANDS_SEP_WC, __cloc);
-+	  _M_data->_M_thousands_sep = __u.__w;
-+
-+	  if (_M_data->_M_thousands_sep == L'\0')
-+	    _M_data->_M_grouping = "";
-+	  else
-+	    _M_data->_M_grouping = __nl_langinfo_l(GROUPING, __cloc);
-+	  _M_data->_M_grouping_size = strlen(_M_data->_M_grouping);
-+	}
-+
-+      // NB: There is no way to extact this info from posix locales.
-+      // _M_truename = __nl_langinfo_l(YESSTR, __cloc);
-+      _M_data->_M_truename = L"true";
-+      _M_data->_M_truename_size = 4;
-+      // _M_falsename = __nl_langinfo_l(NOSTR, __cloc);
-+      _M_data->_M_falsename = L"false";
-+      _M_data->_M_falsename_size = 5;
-+    }
-+
-+  template<>
-+    numpunct<wchar_t>::~numpunct()
-+    { delete _M_data; }
-+ #endif
-+}
-diff --git a/libstdc++-v3/config/locale/uclibc/time_members.cc b/libstdc++-v3/config/locale/uclibc/time_members.cc
-new file mode 100644
-index 0000000..e0707d7
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/time_members.cc
-@@ -0,0 +1,406 @@
-+// std::time_get, std::time_put implementation, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.5.1.2 - time_get virtual functions
-+// ISO C++ 14882: 22.2.5.3.2 - time_put virtual functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#include <locale>
-+#include <bits/c++locale_internal.h>
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning tailor for stub locale support
-+#endif
-+#ifndef __UCLIBC_HAS_XLOCALE__
-+#define __nl_langinfo_l(N, L)         nl_langinfo((N))
-+#endif
-+
-+namespace std
-+{
-+  template<>
-+    void
-+    __timepunct<char>::
-+    _M_put(char* __s, size_t __maxlen, const char* __format,
-+	   const tm* __tm) const
-+    {
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+      const size_t __len = __strftime_l(__s, __maxlen, __format, __tm,
-+					_M_c_locale_timepunct);
-+#else
-+      char* __old = strdup(setlocale(LC_ALL, NULL));
-+      setlocale(LC_ALL, _M_name_timepunct);
-+      const size_t __len = strftime(__s, __maxlen, __format, __tm);
-+      setlocale(LC_ALL, __old);
-+      free(__old);
-+#endif
-+      // Make sure __s is null terminated.
-+      if (__len == 0)
-+	__s[0] = '\0';
-+    }
-+
-+  template<>
-+    void
-+    __timepunct<char>::_M_initialize_timepunct(__c_locale __cloc)
-+    {
-+      if (!_M_data)
-+	_M_data = new __timepunct_cache<char>;
-+
-+      if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_c_locale_timepunct = _S_get_c_locale();
-+
-+	  _M_data->_M_date_format = "%m/%d/%y";
-+	  _M_data->_M_date_era_format = "%m/%d/%y";
-+	  _M_data->_M_time_format = "%H:%M:%S";
-+	  _M_data->_M_time_era_format = "%H:%M:%S";
-+	  _M_data->_M_date_time_format = "";
-+	  _M_data->_M_date_time_era_format = "";
-+	  _M_data->_M_am = "AM";
-+	  _M_data->_M_pm = "PM";
-+	  _M_data->_M_am_pm_format = "";
-+
-+	  // Day names, starting with "C"'s Sunday.
-+	  _M_data->_M_day1 = "Sunday";
-+	  _M_data->_M_day2 = "Monday";
-+	  _M_data->_M_day3 = "Tuesday";
-+	  _M_data->_M_day4 = "Wednesday";
-+	  _M_data->_M_day5 = "Thursday";
-+	  _M_data->_M_day6 = "Friday";
-+	  _M_data->_M_day7 = "Saturday";
-+
-+	  // Abbreviated day names, starting with "C"'s Sun.
-+	  _M_data->_M_aday1 = "Sun";
-+	  _M_data->_M_aday2 = "Mon";
-+	  _M_data->_M_aday3 = "Tue";
-+	  _M_data->_M_aday4 = "Wed";
-+	  _M_data->_M_aday5 = "Thu";
-+	  _M_data->_M_aday6 = "Fri";
-+	  _M_data->_M_aday7 = "Sat";
-+
-+	  // Month names, starting with "C"'s January.
-+	  _M_data->_M_month01 = "January";
-+	  _M_data->_M_month02 = "February";
-+	  _M_data->_M_month03 = "March";
-+	  _M_data->_M_month04 = "April";
-+	  _M_data->_M_month05 = "May";
-+	  _M_data->_M_month06 = "June";
-+	  _M_data->_M_month07 = "July";
-+	  _M_data->_M_month08 = "August";
-+	  _M_data->_M_month09 = "September";
-+	  _M_data->_M_month10 = "October";
-+	  _M_data->_M_month11 = "November";
-+	  _M_data->_M_month12 = "December";
-+
-+	  // Abbreviated month names, starting with "C"'s Jan.
-+	  _M_data->_M_amonth01 = "Jan";
-+	  _M_data->_M_amonth02 = "Feb";
-+	  _M_data->_M_amonth03 = "Mar";
-+	  _M_data->_M_amonth04 = "Apr";
-+	  _M_data->_M_amonth05 = "May";
-+	  _M_data->_M_amonth06 = "Jun";
-+	  _M_data->_M_amonth07 = "Jul";
-+	  _M_data->_M_amonth08 = "Aug";
-+	  _M_data->_M_amonth09 = "Sep";
-+	  _M_data->_M_amonth10 = "Oct";
-+	  _M_data->_M_amonth11 = "Nov";
-+	  _M_data->_M_amonth12 = "Dec";
-+	}
-+      else
-+	{
-+	  _M_c_locale_timepunct = _S_clone_c_locale(__cloc);
-+
-+	  _M_data->_M_date_format = __nl_langinfo_l(D_FMT, __cloc);
-+	  _M_data->_M_date_era_format = __nl_langinfo_l(ERA_D_FMT, __cloc);
-+	  _M_data->_M_time_format = __nl_langinfo_l(T_FMT, __cloc);
-+	  _M_data->_M_time_era_format = __nl_langinfo_l(ERA_T_FMT, __cloc);
-+	  _M_data->_M_date_time_format = __nl_langinfo_l(D_T_FMT, __cloc);
-+	  _M_data->_M_date_time_era_format = __nl_langinfo_l(ERA_D_T_FMT,
-+							     __cloc);
-+	  _M_data->_M_am = __nl_langinfo_l(AM_STR, __cloc);
-+	  _M_data->_M_pm = __nl_langinfo_l(PM_STR, __cloc);
-+	  _M_data->_M_am_pm_format = __nl_langinfo_l(T_FMT_AMPM, __cloc);
-+
-+	  // Day names, starting with "C"'s Sunday.
-+	  _M_data->_M_day1 = __nl_langinfo_l(DAY_1, __cloc);
-+	  _M_data->_M_day2 = __nl_langinfo_l(DAY_2, __cloc);
-+	  _M_data->_M_day3 = __nl_langinfo_l(DAY_3, __cloc);
-+	  _M_data->_M_day4 = __nl_langinfo_l(DAY_4, __cloc);
-+	  _M_data->_M_day5 = __nl_langinfo_l(DAY_5, __cloc);
-+	  _M_data->_M_day6 = __nl_langinfo_l(DAY_6, __cloc);
-+	  _M_data->_M_day7 = __nl_langinfo_l(DAY_7, __cloc);
-+
-+	  // Abbreviated day names, starting with "C"'s Sun.
-+	  _M_data->_M_aday1 = __nl_langinfo_l(ABDAY_1, __cloc);
-+	  _M_data->_M_aday2 = __nl_langinfo_l(ABDAY_2, __cloc);
-+	  _M_data->_M_aday3 = __nl_langinfo_l(ABDAY_3, __cloc);
-+	  _M_data->_M_aday4 = __nl_langinfo_l(ABDAY_4, __cloc);
-+	  _M_data->_M_aday5 = __nl_langinfo_l(ABDAY_5, __cloc);
-+	  _M_data->_M_aday6 = __nl_langinfo_l(ABDAY_6, __cloc);
-+	  _M_data->_M_aday7 = __nl_langinfo_l(ABDAY_7, __cloc);
-+
-+	  // Month names, starting with "C"'s January.
-+	  _M_data->_M_month01 = __nl_langinfo_l(MON_1, __cloc);
-+	  _M_data->_M_month02 = __nl_langinfo_l(MON_2, __cloc);
-+	  _M_data->_M_month03 = __nl_langinfo_l(MON_3, __cloc);
-+	  _M_data->_M_month04 = __nl_langinfo_l(MON_4, __cloc);
-+	  _M_data->_M_month05 = __nl_langinfo_l(MON_5, __cloc);
-+	  _M_data->_M_month06 = __nl_langinfo_l(MON_6, __cloc);
-+	  _M_data->_M_month07 = __nl_langinfo_l(MON_7, __cloc);
-+	  _M_data->_M_month08 = __nl_langinfo_l(MON_8, __cloc);
-+	  _M_data->_M_month09 = __nl_langinfo_l(MON_9, __cloc);
-+	  _M_data->_M_month10 = __nl_langinfo_l(MON_10, __cloc);
-+	  _M_data->_M_month11 = __nl_langinfo_l(MON_11, __cloc);
-+	  _M_data->_M_month12 = __nl_langinfo_l(MON_12, __cloc);
-+
-+	  // Abbreviated month names, starting with "C"'s Jan.
-+	  _M_data->_M_amonth01 = __nl_langinfo_l(ABMON_1, __cloc);
-+	  _M_data->_M_amonth02 = __nl_langinfo_l(ABMON_2, __cloc);
-+	  _M_data->_M_amonth03 = __nl_langinfo_l(ABMON_3, __cloc);
-+	  _M_data->_M_amonth04 = __nl_langinfo_l(ABMON_4, __cloc);
-+	  _M_data->_M_amonth05 = __nl_langinfo_l(ABMON_5, __cloc);
-+	  _M_data->_M_amonth06 = __nl_langinfo_l(ABMON_6, __cloc);
-+	  _M_data->_M_amonth07 = __nl_langinfo_l(ABMON_7, __cloc);
-+	  _M_data->_M_amonth08 = __nl_langinfo_l(ABMON_8, __cloc);
-+	  _M_data->_M_amonth09 = __nl_langinfo_l(ABMON_9, __cloc);
-+	  _M_data->_M_amonth10 = __nl_langinfo_l(ABMON_10, __cloc);
-+	  _M_data->_M_amonth11 = __nl_langinfo_l(ABMON_11, __cloc);
-+	  _M_data->_M_amonth12 = __nl_langinfo_l(ABMON_12, __cloc);
-+	}
-+    }
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  template<>
-+    void
-+    __timepunct<wchar_t>::
-+    _M_put(wchar_t* __s, size_t __maxlen, const wchar_t* __format,
-+	   const tm* __tm) const
-+    {
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+      __wcsftime_l(__s, __maxlen, __format, __tm, _M_c_locale_timepunct);
-+      const size_t __len = __wcsftime_l(__s, __maxlen, __format, __tm,
-+					_M_c_locale_timepunct);
-+#else
-+      char* __old = strdup(setlocale(LC_ALL, NULL));
-+      setlocale(LC_ALL, _M_name_timepunct);
-+      const size_t __len = wcsftime(__s, __maxlen, __format, __tm);
-+      setlocale(LC_ALL, __old);
-+      free(__old);
-+#endif
-+      // Make sure __s is null terminated.
-+      if (__len == 0)
-+	__s[0] = L'\0';
-+    }
-+
-+  template<>
-+    void
-+    __timepunct<wchar_t>::_M_initialize_timepunct(__c_locale __cloc)
-+    {
-+      if (!_M_data)
-+	_M_data = new __timepunct_cache<wchar_t>;
-+
-+#warning wide time stuff
-+//       if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_c_locale_timepunct = _S_get_c_locale();
-+
-+	  _M_data->_M_date_format = L"%m/%d/%y";
-+	  _M_data->_M_date_era_format = L"%m/%d/%y";
-+	  _M_data->_M_time_format = L"%H:%M:%S";
-+	  _M_data->_M_time_era_format = L"%H:%M:%S";
-+	  _M_data->_M_date_time_format = L"";
-+	  _M_data->_M_date_time_era_format = L"";
-+	  _M_data->_M_am = L"AM";
-+	  _M_data->_M_pm = L"PM";
-+	  _M_data->_M_am_pm_format = L"";
-+
-+	  // Day names, starting with "C"'s Sunday.
-+	  _M_data->_M_day1 = L"Sunday";
-+	  _M_data->_M_day2 = L"Monday";
-+	  _M_data->_M_day3 = L"Tuesday";
-+	  _M_data->_M_day4 = L"Wednesday";
-+	  _M_data->_M_day5 = L"Thursday";
-+	  _M_data->_M_day6 = L"Friday";
-+	  _M_data->_M_day7 = L"Saturday";
-+
-+	  // Abbreviated day names, starting with "C"'s Sun.
-+	  _M_data->_M_aday1 = L"Sun";
-+	  _M_data->_M_aday2 = L"Mon";
-+	  _M_data->_M_aday3 = L"Tue";
-+	  _M_data->_M_aday4 = L"Wed";
-+	  _M_data->_M_aday5 = L"Thu";
-+	  _M_data->_M_aday6 = L"Fri";
-+	  _M_data->_M_aday7 = L"Sat";
-+
-+	  // Month names, starting with "C"'s January.
-+	  _M_data->_M_month01 = L"January";
-+	  _M_data->_M_month02 = L"February";
-+	  _M_data->_M_month03 = L"March";
-+	  _M_data->_M_month04 = L"April";
-+	  _M_data->_M_month05 = L"May";
-+	  _M_data->_M_month06 = L"June";
-+	  _M_data->_M_month07 = L"July";
-+	  _M_data->_M_month08 = L"August";
-+	  _M_data->_M_month09 = L"September";
-+	  _M_data->_M_month10 = L"October";
-+	  _M_data->_M_month11 = L"November";
-+	  _M_data->_M_month12 = L"December";
-+
-+	  // Abbreviated month names, starting with "C"'s Jan.
-+	  _M_data->_M_amonth01 = L"Jan";
-+	  _M_data->_M_amonth02 = L"Feb";
-+	  _M_data->_M_amonth03 = L"Mar";
-+	  _M_data->_M_amonth04 = L"Apr";
-+	  _M_data->_M_amonth05 = L"May";
-+	  _M_data->_M_amonth06 = L"Jun";
-+	  _M_data->_M_amonth07 = L"Jul";
-+	  _M_data->_M_amonth08 = L"Aug";
-+	  _M_data->_M_amonth09 = L"Sep";
-+	  _M_data->_M_amonth10 = L"Oct";
-+	  _M_data->_M_amonth11 = L"Nov";
-+	  _M_data->_M_amonth12 = L"Dec";
-+	}
-+#if 0
-+      else
-+	{
-+	  _M_c_locale_timepunct = _S_clone_c_locale(__cloc);
-+
-+	  union { char *__s; wchar_t *__w; } __u;
-+
-+	  __u.__s = __nl_langinfo_l(_NL_WD_FMT, __cloc);
-+	  _M_data->_M_date_format = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WERA_D_FMT, __cloc);
-+	  _M_data->_M_date_era_format = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WT_FMT, __cloc);
-+	  _M_data->_M_time_format = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WERA_T_FMT, __cloc);
-+	  _M_data->_M_time_era_format = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WD_T_FMT, __cloc);
-+	  _M_data->_M_date_time_format = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WERA_D_T_FMT, __cloc);
-+	  _M_data->_M_date_time_era_format = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WAM_STR, __cloc);
-+	  _M_data->_M_am = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WPM_STR, __cloc);
-+	  _M_data->_M_pm = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WT_FMT_AMPM, __cloc);
-+	  _M_data->_M_am_pm_format = __u.__w;
-+
-+	  // Day names, starting with "C"'s Sunday.
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_1, __cloc);
-+	  _M_data->_M_day1 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_2, __cloc);
-+	  _M_data->_M_day2 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_3, __cloc);
-+	  _M_data->_M_day3 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_4, __cloc);
-+	  _M_data->_M_day4 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_5, __cloc);
-+	  _M_data->_M_day5 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_6, __cloc);
-+	  _M_data->_M_day6 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_7, __cloc);
-+	  _M_data->_M_day7 = __u.__w;
-+
-+	  // Abbreviated day names, starting with "C"'s Sun.
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_1, __cloc);
-+	  _M_data->_M_aday1 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_2, __cloc);
-+	  _M_data->_M_aday2 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_3, __cloc);
-+	  _M_data->_M_aday3 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_4, __cloc);
-+	  _M_data->_M_aday4 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_5, __cloc);
-+	  _M_data->_M_aday5 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_6, __cloc);
-+	  _M_data->_M_aday6 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_7, __cloc);
-+	  _M_data->_M_aday7 = __u.__w;
-+
-+	  // Month names, starting with "C"'s January.
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_1, __cloc);
-+	  _M_data->_M_month01 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_2, __cloc);
-+	  _M_data->_M_month02 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_3, __cloc);
-+	  _M_data->_M_month03 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_4, __cloc);
-+	  _M_data->_M_month04 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_5, __cloc);
-+	  _M_data->_M_month05 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_6, __cloc);
-+	  _M_data->_M_month06 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_7, __cloc);
-+	  _M_data->_M_month07 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_8, __cloc);
-+	  _M_data->_M_month08 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_9, __cloc);
-+	  _M_data->_M_month09 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_10, __cloc);
-+	  _M_data->_M_month10 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_11, __cloc);
-+	  _M_data->_M_month11 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_12, __cloc);
-+	  _M_data->_M_month12 = __u.__w;
-+
-+	  // Abbreviated month names, starting with "C"'s Jan.
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_1, __cloc);
-+	  _M_data->_M_amonth01 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_2, __cloc);
-+	  _M_data->_M_amonth02 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_3, __cloc);
-+	  _M_data->_M_amonth03 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_4, __cloc);
-+	  _M_data->_M_amonth04 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_5, __cloc);
-+	  _M_data->_M_amonth05 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_6, __cloc);
-+	  _M_data->_M_amonth06 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_7, __cloc);
-+	  _M_data->_M_amonth07 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_8, __cloc);
-+	  _M_data->_M_amonth08 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_9, __cloc);
-+	  _M_data->_M_amonth09 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_10, __cloc);
-+	  _M_data->_M_amonth10 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_11, __cloc);
-+	  _M_data->_M_amonth11 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_12, __cloc);
-+	  _M_data->_M_amonth12 = __u.__w;
-+	}
-+#endif // 0
-+    }
-+#endif
-+}
-diff --git a/libstdc++-v3/config/locale/uclibc/time_members.h b/libstdc++-v3/config/locale/uclibc/time_members.h
-new file mode 100644
-index 0000000..ba8e858
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/time_members.h
-@@ -0,0 +1,68 @@
-+// std::time_get, std::time_put implementation, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.5.1.2 - time_get functions
-+// ISO C++ 14882: 22.2.5.3.2 - time_put functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+  template<typename _CharT>
-+    __timepunct<_CharT>::__timepunct(size_t __refs)
-+    : facet(__refs), _M_data(NULL), _M_c_locale_timepunct(NULL),
-+    _M_name_timepunct(_S_get_c_name())
-+    { _M_initialize_timepunct(); }
-+
-+  template<typename _CharT>
-+    __timepunct<_CharT>::__timepunct(__cache_type* __cache, size_t __refs)
-+    : facet(__refs), _M_data(__cache), _M_c_locale_timepunct(NULL),
-+    _M_name_timepunct(_S_get_c_name())
-+    { _M_initialize_timepunct(); }
-+
-+  template<typename _CharT>
-+    __timepunct<_CharT>::__timepunct(__c_locale __cloc, const char* __s,
-+				     size_t __refs)
-+    : facet(__refs), _M_data(NULL), _M_c_locale_timepunct(NULL),
-+    _M_name_timepunct(__s)
-+    {
-+      char* __tmp = new char[std::strlen(__s) + 1];
-+      std::strcpy(__tmp, __s);
-+      _M_name_timepunct = __tmp;
-+      _M_initialize_timepunct(__cloc);
-+    }
-+
-+  template<typename _CharT>
-+    __timepunct<_CharT>::~__timepunct()
-+    {
-+      if (_M_name_timepunct != _S_get_c_name())
-+	delete [] _M_name_timepunct;
-+      delete _M_data;
-+      _S_destroy_c_locale(_M_c_locale_timepunct);
-+    }
-diff --git a/libstdc++-v3/configure b/libstdc++-v3/configure
-index 8cd4c76..217012e 100755
---- a/libstdc++-v3/configure
-+++ b/libstdc++-v3/configure
-@@ -15918,6 +15918,9 @@ fi
-   # Default to "generic".
-   if test $enable_clocale_flag = auto; then
-     case ${target_os} in
-+      *-uclibc*)
-+        enable_clocale_flag=uclibc
-+        ;;
-       linux* | gnu* | kfreebsd*-gnu | knetbsd*-gnu)
- 	enable_clocale_flag=gnu
- 	;;
-@@ -16196,6 +16199,78 @@ $as_echo "newlib" >&6; }
-       CTIME_CC=config/locale/generic/time_members.cc
-       CLOCALE_INTERNAL_H=config/locale/generic/c++locale_internal.h
-       ;;
-+    uclibc)
-+      { $as_echo "$as_me:${as_lineno-$LINENO}: result: uclibc" >&5
-+$as_echo "uclibc" >&6; }
-+
-+      # Declare intention to use gettext, and add support for specific
-+      # languages.
-+      # For some reason, ALL_LINGUAS has to be before AM-GNU-GETTEXT
-+      ALL_LINGUAS="de fr"
-+
-+      # Don't call AM-GNU-GETTEXT here. Instead, assume glibc.
-+      # Extract the first word of "msgfmt", so it can be a program name with args.
-+set dummy msgfmt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_check_msgfmt+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$check_msgfmt"; then
-+  ac_cv_prog_check_msgfmt="$check_msgfmt" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_check_msgfmt="yes"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+  test -z "$ac_cv_prog_check_msgfmt" && ac_cv_prog_check_msgfmt="no"
-+fi
-+fi
-+check_msgfmt=$ac_cv_prog_check_msgfmt
-+if test -n "$check_msgfmt"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $check_msgfmt" >&5
-+$as_echo "$check_msgfmt" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+      if test x"$check_msgfmt" = x"yes" && test x"$enable_nls" = x"yes"; then
-+        USE_NLS=yes
-+      fi
-+      # Export the build objects.
-+      for ling in $ALL_LINGUAS; do \
-+        glibcxx_MOFILES="$glibcxx_MOFILES $ling.mo"; \
-+        glibcxx_POFILES="$glibcxx_POFILES $ling.po"; \
-+      done
-+
-+
-+
-+      CLOCALE_H=config/locale/uclibc/c_locale.h
-+      CLOCALE_CC=config/locale/uclibc/c_locale.cc
-+      CCODECVT_CC=config/locale/uclibc/codecvt_members.cc
-+      CCOLLATE_CC=config/locale/uclibc/collate_members.cc
-+      CCTYPE_CC=config/locale/uclibc/ctype_members.cc
-+      CMESSAGES_H=config/locale/uclibc/messages_members.h
-+      CMESSAGES_CC=config/locale/uclibc/messages_members.cc
-+      CMONEY_CC=config/locale/uclibc/monetary_members.cc
-+      CNUMERIC_CC=config/locale/uclibc/numeric_members.cc
-+      CTIME_H=config/locale/uclibc/time_members.h
-+      CTIME_CC=config/locale/uclibc/time_members.cc
-+      CLOCALE_INTERNAL_H=config/locale/uclibc/c++locale_internal.h
-+      ;;
-   esac
- 
-   # This is where the testsuite looks for locale catalogs, using the
-diff --git a/libstdc++-v3/include/c_compatibility/wchar.h b/libstdc++-v3/include/c_compatibility/wchar.h
-index 06b5d47..7d3f835 100644
---- a/libstdc++-v3/include/c_compatibility/wchar.h
-+++ b/libstdc++-v3/include/c_compatibility/wchar.h
-@@ -101,7 +101,9 @@ using std::wmemcmp;
- using std::wmemcpy;
- using std::wmemmove;
- using std::wmemset;
-+#if _GLIBCXX_HAVE_WCSFTIME
- using std::wcsftime;
-+#endif
- 
- #if _GLIBCXX_USE_C99
- using std::wcstold;
-diff --git a/libstdc++-v3/include/c_std/cwchar b/libstdc++-v3/include/c_std/cwchar
-index aa1b2fa..45c4617 100644
---- a/libstdc++-v3/include/c_std/cwchar
-+++ b/libstdc++-v3/include/c_std/cwchar
-@@ -175,7 +175,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
-   using ::wcscoll;
-   using ::wcscpy;
-   using ::wcscspn;
-+#if _GLIBCXX_HAVE_WCSFTIME
-   using ::wcsftime;
-+#endif
-   using ::wcslen;
-   using ::wcsncat;
-   using ::wcsncmp;
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0005-uclibc-locale-no__x.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0005-uclibc-locale-no__x.patch
deleted file mode 100644
index 19a86a4..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0005-uclibc-locale-no__x.patch
+++ /dev/null
@@ -1,257 +0,0 @@
-From 72a1a4af3c18627f59aa5ba1f753a89011b4e4f0 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:42:36 +0400
-Subject: [PATCH 05/46] uclibc-locale-no__x
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- .../config/locale/uclibc/c++locale_internal.h      | 45 ++++++++++++++++++++++
- libstdc++-v3/config/locale/uclibc/c_locale.cc      | 14 -------
- libstdc++-v3/config/locale/uclibc/c_locale.h       |  1 +
- .../config/locale/uclibc/collate_members.cc        |  7 ----
- libstdc++-v3/config/locale/uclibc/ctype_members.cc |  7 ----
- .../config/locale/uclibc/messages_members.cc       |  7 +---
- .../config/locale/uclibc/messages_members.h        | 18 ++++-----
- .../config/locale/uclibc/monetary_members.cc       |  4 --
- .../config/locale/uclibc/numeric_members.cc        |  3 --
- libstdc++-v3/config/locale/uclibc/time_members.cc  |  3 --
- 10 files changed, 55 insertions(+), 54 deletions(-)
-
-diff --git a/libstdc++-v3/config/locale/uclibc/c++locale_internal.h b/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-index 2ae3e4a..e74fddf 100644
---- a/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-+++ b/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-@@ -60,4 +60,49 @@ extern "C" __typeof(wcsxfrm_l) __wcsxfrm_l;
- extern "C" __typeof(wctype_l) __wctype_l;
- #endif
- 
-+# define __nl_langinfo_l nl_langinfo_l
-+# define __strcoll_l strcoll_l
-+# define __strftime_l strftime_l
-+# define __strtod_l strtod_l
-+# define __strtof_l strtof_l
-+# define __strtold_l strtold_l
-+# define __strxfrm_l strxfrm_l
-+# define __newlocale newlocale
-+# define __freelocale freelocale
-+# define __duplocale duplocale
-+# define __uselocale uselocale
-+
-+# ifdef _GLIBCXX_USE_WCHAR_T
-+#  define __iswctype_l iswctype_l
-+#  define __towlower_l towlower_l
-+#  define __towupper_l towupper_l
-+#  define __wcscoll_l wcscoll_l
-+#  define __wcsftime_l wcsftime_l
-+#  define __wcsxfrm_l wcsxfrm_l
-+#  define __wctype_l wctype_l
-+# endif
-+
-+#else
-+# define __nl_langinfo_l(N, L)       nl_langinfo((N))
-+# define __strcoll_l(S1, S2, L)      strcoll((S1), (S2))
-+# define __strtod_l(S, E, L)         strtod((S), (E))
-+# define __strtof_l(S, E, L)         strtof((S), (E))
-+# define __strtold_l(S, E, L)        strtold((S), (E))
-+# define __strxfrm_l(S1, S2, N, L)   strxfrm((S1), (S2), (N))
-+# warning should dummy __newlocale check for C|POSIX ?
-+# define __newlocale(a, b, c)        NULL
-+# define __freelocale(a)             ((void)0)
-+# define __duplocale(a)              __c_locale()
-+//# define __uselocale ?
-+//
-+# ifdef _GLIBCXX_USE_WCHAR_T
-+#  define __iswctype_l(C, M, L)       iswctype((C), (M))
-+#  define __towlower_l(C, L)          towlower((C))
-+#  define __towupper_l(C, L)          towupper((C))
-+#  define __wcscoll_l(S1, S2, L)      wcscoll((S1), (S2))
-+//#  define __wcsftime_l(S, M, F, T, L)  wcsftime((S), (M), (F), (T))
-+#  define __wcsxfrm_l(S1, S2, N, L)   wcsxfrm((S1), (S2), (N))
-+#  define __wctype_l(S, L)            wctype((S))
-+# endif
-+
- #endif // GLIBC 2.3 and later
-diff --git a/libstdc++-v3/config/locale/uclibc/c_locale.cc b/libstdc++-v3/config/locale/uclibc/c_locale.cc
-index 5081dc1..21430d0 100644
---- a/libstdc++-v3/config/locale/uclibc/c_locale.cc
-+++ b/libstdc++-v3/config/locale/uclibc/c_locale.cc
-@@ -39,20 +39,6 @@
- #include <langinfo.h>
- #include <bits/c++locale_internal.h>
- 
--#ifndef __UCLIBC_HAS_XLOCALE__
--#define __strtol_l(S, E, B, L)      strtol((S), (E), (B))
--#define __strtoul_l(S, E, B, L)     strtoul((S), (E), (B))
--#define __strtoll_l(S, E, B, L)     strtoll((S), (E), (B))
--#define __strtoull_l(S, E, B, L)    strtoull((S), (E), (B))
--#define __strtof_l(S, E, L)         strtof((S), (E))
--#define __strtod_l(S, E, L)         strtod((S), (E))
--#define __strtold_l(S, E, L)        strtold((S), (E))
--#warning should dummy __newlocale check for C|POSIX ?
--#define __newlocale(a, b, c)        NULL
--#define __freelocale(a)             ((void)0)
--#define __duplocale(a)              __c_locale()
--#endif
--
- namespace std
- {
-   template<>
-diff --git a/libstdc++-v3/config/locale/uclibc/c_locale.h b/libstdc++-v3/config/locale/uclibc/c_locale.h
-index da07c1f..4bca5f1 100644
---- a/libstdc++-v3/config/locale/uclibc/c_locale.h
-+++ b/libstdc++-v3/config/locale/uclibc/c_locale.h
-@@ -68,6 +68,7 @@ namespace __gnu_cxx
- {
-   extern "C" __typeof(uselocale) __uselocale;
- }
-+#define __uselocale uselocale
- #endif
- 
- namespace std
-diff --git a/libstdc++-v3/config/locale/uclibc/collate_members.cc b/libstdc++-v3/config/locale/uclibc/collate_members.cc
-index c2664a7..ec5c329 100644
---- a/libstdc++-v3/config/locale/uclibc/collate_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/collate_members.cc
-@@ -36,13 +36,6 @@
- #include <locale>
- #include <bits/c++locale_internal.h>
- 
--#ifndef __UCLIBC_HAS_XLOCALE__
--#define __strcoll_l(S1, S2, L)      strcoll((S1), (S2))
--#define __strxfrm_l(S1, S2, N, L)   strxfrm((S1), (S2), (N))
--#define __wcscoll_l(S1, S2, L)      wcscoll((S1), (S2))
--#define __wcsxfrm_l(S1, S2, N, L)   wcsxfrm((S1), (S2), (N))
--#endif
--
- namespace std
- {
-   // These are basically extensions to char_traits, and perhaps should
-diff --git a/libstdc++-v3/config/locale/uclibc/ctype_members.cc b/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-index 7294e3a..7b12861 100644
---- a/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-@@ -38,13 +38,6 @@
- #undef _LIBC
- #include <bits/c++locale_internal.h>
- 
--#ifndef __UCLIBC_HAS_XLOCALE__
--#define __wctype_l(S, L)           wctype((S))
--#define __towupper_l(C, L)         towupper((C))
--#define __towlower_l(C, L)         towlower((C))
--#define __iswctype_l(C, M, L)      iswctype((C), (M))
--#endif
--
- namespace std
- {
-   // NB: The other ctype<char> specializations are in src/locale.cc and
-diff --git a/libstdc++-v3/config/locale/uclibc/messages_members.cc b/libstdc++-v3/config/locale/uclibc/messages_members.cc
-index 13594d9..d7693b4 100644
---- a/libstdc++-v3/config/locale/uclibc/messages_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/messages_members.cc
-@@ -39,13 +39,10 @@
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning fix gettext stuff
- #endif
--#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
--extern "C" char *__dcgettext(const char *domainname,
--			     const char *msgid, int category);
- #undef gettext
--#define gettext(msgid) __dcgettext(NULL, msgid, LC_MESSAGES)
-+#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
-+#define gettext(msgid) dcgettext(NULL, msgid, LC_MESSAGES)
- #else
--#undef gettext
- #define gettext(msgid) (msgid)
- #endif
- 
-diff --git a/libstdc++-v3/config/locale/uclibc/messages_members.h b/libstdc++-v3/config/locale/uclibc/messages_members.h
-index 1424078..d89da33 100644
---- a/libstdc++-v3/config/locale/uclibc/messages_members.h
-+++ b/libstdc++-v3/config/locale/uclibc/messages_members.h
-@@ -36,15 +36,11 @@
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning fix prototypes for *textdomain funcs
- #endif
--#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
--extern "C" char *__textdomain(const char *domainname);
--extern "C" char *__bindtextdomain(const char *domainname,
--				  const char *dirname);
--#else
--#undef __textdomain
--#undef __bindtextdomain
--#define __textdomain(D)           ((void)0)
--#define __bindtextdomain(D,P)     ((void)0)
-+#ifndef __UCLIBC_HAS_GETTEXT_AWARENESS__
-+#undef textdomain
-+#undef bindtextdomain
-+#define textdomain(D)           ((void)0)
-+#define bindtextdomain(D,P)     ((void)0)
- #endif
- 
-   // Non-virtual member functions.
-@@ -70,7 +66,7 @@ extern "C" char *__bindtextdomain(const char *domainname,
-     messages<_CharT>::open(const basic_string<char>& __s, const locale& __loc,
- 			   const char* __dir) const
-     {
--      __bindtextdomain(__s.c_str(), __dir);
-+      bindtextdomain(__s.c_str(), __dir);
-       return this->do_open(__s, __loc);
-     }
- 
-@@ -90,7 +86,7 @@ extern "C" char *__bindtextdomain(const char *domainname,
-     {
-       // No error checking is done, assume the catalog exists and can
-       // be used.
--      __textdomain(__s.c_str());
-+      textdomain(__s.c_str());
-       return 0;
-     }
- 
-diff --git a/libstdc++-v3/config/locale/uclibc/monetary_members.cc b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-index aa52731..2e6f80a 100644
---- a/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-@@ -43,10 +43,6 @@
- #warning tailor for stub locale support
- #endif
- 
--#ifndef __UCLIBC_HAS_XLOCALE__
--#define __nl_langinfo_l(N, L)         nl_langinfo((N))
--#endif
--
- namespace std
- {
-   // Construct and return valid pattern consisting of some combination of:
-diff --git a/libstdc++-v3/config/locale/uclibc/numeric_members.cc b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-index 883ec1a..2c70642 100644
---- a/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-@@ -41,9 +41,6 @@
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning tailor for stub locale support
- #endif
--#ifndef __UCLIBC_HAS_XLOCALE__
--#define __nl_langinfo_l(N, L)         nl_langinfo((N))
--#endif
- 
- namespace std
- {
-diff --git a/libstdc++-v3/config/locale/uclibc/time_members.cc b/libstdc++-v3/config/locale/uclibc/time_members.cc
-index e0707d7..d848ed5 100644
---- a/libstdc++-v3/config/locale/uclibc/time_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/time_members.cc
-@@ -40,9 +40,6 @@
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning tailor for stub locale support
- #endif
--#ifndef __UCLIBC_HAS_XLOCALE__
--#define __nl_langinfo_l(N, L)         nl_langinfo((N))
--#endif
- 
- namespace std
- {
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0006-uclibc-locale-wchar_fix.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0006-uclibc-locale-wchar_fix.patch
deleted file mode 100644
index d7dbe68..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0006-uclibc-locale-wchar_fix.patch
+++ /dev/null
@@ -1,68 +0,0 @@
-From b1298344f0c221c382a95af689f7f3ecd864f531 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:45:57 +0400
-Subject: [PATCH 06/46] uclibc-locale-wchar_fix
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- libstdc++-v3/config/locale/uclibc/monetary_members.cc |  4 ++--
- libstdc++-v3/config/locale/uclibc/numeric_members.cc  | 13 +++++++++++++
- 2 files changed, 15 insertions(+), 2 deletions(-)
-
-diff --git a/libstdc++-v3/config/locale/uclibc/monetary_members.cc b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-index 2e6f80a..31ebb9f 100644
---- a/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-@@ -401,7 +401,7 @@ namespace std
- # ifdef __UCLIBC_HAS_XLOCALE__
- 	  _M_data->_M_decimal_point = __cloc->decimal_point_wc;
- 	  _M_data->_M_thousands_sep = __cloc->thousands_sep_wc;
--# else
-+# elif defined __UCLIBC_HAS_LOCALE__
- 	  _M_data->_M_decimal_point = __global_locale->decimal_point_wc;
- 	  _M_data->_M_thousands_sep = __global_locale->thousands_sep_wc;
- # endif
-@@ -556,7 +556,7 @@ namespace std
- # ifdef __UCLIBC_HAS_XLOCALE__
- 	  _M_data->_M_decimal_point = __cloc->decimal_point_wc;
- 	  _M_data->_M_thousands_sep = __cloc->thousands_sep_wc;
--# else
-+# elif defined __UCLIBC_HAS_LOCALE__
- 	  _M_data->_M_decimal_point = __global_locale->decimal_point_wc;
- 	  _M_data->_M_thousands_sep = __global_locale->thousands_sep_wc;
- # endif
-diff --git a/libstdc++-v3/config/locale/uclibc/numeric_members.cc b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-index 2c70642..d5c8961 100644
---- a/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-@@ -127,12 +127,25 @@ namespace std
- 	{
- 	  // Named locale.
- 	  // NB: In the GNU model wchar_t is always 32 bit wide.
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix this... should be numeric
-+#endif
-+#ifdef __UCLIBC__
-+# ifdef __UCLIBC_HAS_XLOCALE__
-+	  _M_data->_M_decimal_point = __cloc->decimal_point_wc;
-+	  _M_data->_M_thousands_sep = __cloc->thousands_sep_wc;
-+# elif defined __UCLIBC_HAS_LOCALE__
-+	  _M_data->_M_decimal_point = __global_locale->decimal_point_wc;
-+	  _M_data->_M_thousands_sep = __global_locale->thousands_sep_wc;
-+# endif
-+#else
- 	  union { char *__s; wchar_t __w; } __u;
- 	  __u.__s = __nl_langinfo_l(_NL_NUMERIC_DECIMAL_POINT_WC, __cloc);
- 	  _M_data->_M_decimal_point = __u.__w;
- 
- 	  __u.__s = __nl_langinfo_l(_NL_NUMERIC_THOUSANDS_SEP_WC, __cloc);
- 	  _M_data->_M_thousands_sep = __u.__w;
-+#endif
- 
- 	  if (_M_data->_M_thousands_sep == L'\0')
- 	    _M_data->_M_grouping = "";
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0007-uclibc-locale-update.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0007-uclibc-locale-update.patch
deleted file mode 100644
index cde7499..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0007-uclibc-locale-update.patch
+++ /dev/null
@@ -1,542 +0,0 @@
-From e9d7cb62741c22d667fe56e98d50753d89aefdc3 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:46:58 +0400
-Subject: [PATCH 07/46] uclibc-locale-update
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- .../config/locale/uclibc/c++locale_internal.h      |  3 +
- libstdc++-v3/config/locale/uclibc/c_locale.cc      | 74 ++++++++++------------
- libstdc++-v3/config/locale/uclibc/c_locale.h       | 42 ++++++------
- libstdc++-v3/config/locale/uclibc/ctype_members.cc | 51 +++++++++++----
- .../config/locale/uclibc/messages_members.h        | 12 ++--
- .../config/locale/uclibc/monetary_members.cc       | 34 ++++++----
- .../config/locale/uclibc/numeric_members.cc        |  5 ++
- libstdc++-v3/config/locale/uclibc/time_members.cc  | 18 ++++--
- libstdc++-v3/config/locale/uclibc/time_members.h   | 17 +++--
- 9 files changed, 158 insertions(+), 98 deletions(-)
-
-diff --git a/libstdc++-v3/config/locale/uclibc/c++locale_internal.h b/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-index e74fddf..971a6b4 100644
---- a/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-+++ b/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-@@ -31,6 +31,9 @@
- 
- #include <bits/c++config.h>
- #include <clocale>
-+#include <cstdlib>
-+#include <cstring>
-+#include <cstddef>
- 
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning clean this up
-diff --git a/libstdc++-v3/config/locale/uclibc/c_locale.cc b/libstdc++-v3/config/locale/uclibc/c_locale.cc
-index 21430d0..1b9d8e1 100644
---- a/libstdc++-v3/config/locale/uclibc/c_locale.cc
-+++ b/libstdc++-v3/config/locale/uclibc/c_locale.cc
-@@ -39,23 +39,20 @@
- #include <langinfo.h>
- #include <bits/c++locale_internal.h>
- 
--namespace std
--{
-+_GLIBCXX_BEGIN_NAMESPACE(std)
-+
-   template<>
-     void
-     __convert_to_v(const char* __s, float& __v, ios_base::iostate& __err,
- 		   const __c_locale& __cloc)
-     {
--      if (!(__err & ios_base::failbit))
--	{
--	  char* __sanity;
--	  errno = 0;
--	  float __f = __strtof_l(__s, &__sanity, __cloc);
--          if (__sanity != __s && errno != ERANGE)
--	    __v = __f;
--	  else
--	    __err |= ios_base::failbit;
--	}
-+      char* __sanity;
-+      errno = 0;
-+      float __f = __strtof_l(__s, &__sanity, __cloc);
-+      if (__sanity != __s && errno != ERANGE)
-+	__v = __f;
-+      else
-+	__err |= ios_base::failbit;
-     }
- 
-   template<>
-@@ -63,16 +60,13 @@ namespace std
-     __convert_to_v(const char* __s, double& __v, ios_base::iostate& __err,
- 		   const __c_locale& __cloc)
-     {
--      if (!(__err & ios_base::failbit))
--	{
--	  char* __sanity;
--	  errno = 0;
--	  double __d = __strtod_l(__s, &__sanity, __cloc);
--          if (__sanity != __s && errno != ERANGE)
--	    __v = __d;
--	  else
--	    __err |= ios_base::failbit;
--	}
-+      char* __sanity;
-+      errno = 0;
-+      double __d = __strtod_l(__s, &__sanity, __cloc);
-+      if (__sanity != __s && errno != ERANGE)
-+	__v = __d;
-+      else
-+	__err |= ios_base::failbit;
-     }
- 
-   template<>
-@@ -80,16 +74,13 @@ namespace std
-     __convert_to_v(const char* __s, long double& __v, ios_base::iostate& __err,
- 		   const __c_locale& __cloc)
-     {
--      if (!(__err & ios_base::failbit))
--	{
--	  char* __sanity;
--	  errno = 0;
--	  long double __ld = __strtold_l(__s, &__sanity, __cloc);
--          if (__sanity != __s && errno != ERANGE)
--	    __v = __ld;
--	  else
--	    __err |= ios_base::failbit;
--	}
-+      char* __sanity;
-+      errno = 0;
-+      long double __ld = __strtold_l(__s, &__sanity, __cloc);
-+      if (__sanity != __s && errno != ERANGE)
-+	__v = __ld;
-+      else
-+	__err |= ios_base::failbit;
-     }
- 
-   void
-@@ -110,17 +101,18 @@ namespace std
-   void
-   locale::facet::_S_destroy_c_locale(__c_locale& __cloc)
-   {
--    if (_S_get_c_locale() != __cloc)
-+    if (__cloc && _S_get_c_locale() != __cloc)
-       __freelocale(__cloc);
-   }
- 
-   __c_locale
-   locale::facet::_S_clone_c_locale(__c_locale& __cloc)
-   { return __duplocale(__cloc); }
--} // namespace std
- 
--namespace __gnu_cxx
--{
-+_GLIBCXX_END_NAMESPACE
-+
-+_GLIBCXX_BEGIN_NAMESPACE(__gnu_cxx)
-+
-   const char* const category_names[6 + _GLIBCXX_NUM_CATEGORIES] =
-     {
-       "LC_CTYPE",
-@@ -138,9 +130,11 @@ namespace __gnu_cxx
-       "LC_IDENTIFICATION"
- #endif
-     };
--}
- 
--namespace std
--{
-+_GLIBCXX_END_NAMESPACE
-+
-+_GLIBCXX_BEGIN_NAMESPACE(std)
-+
-   const char* const* const locale::_S_categories = __gnu_cxx::category_names;
--}  // namespace std
-+
-+_GLIBCXX_END_NAMESPACE
-diff --git a/libstdc++-v3/config/locale/uclibc/c_locale.h b/libstdc++-v3/config/locale/uclibc/c_locale.h
-index 4bca5f1..64a6d46 100644
---- a/libstdc++-v3/config/locale/uclibc/c_locale.h
-+++ b/libstdc++-v3/config/locale/uclibc/c_locale.h
-@@ -39,21 +39,23 @@
- #pragma GCC system_header
- 
- #include <cstring>              // get std::strlen
--#include <cstdio>               // get std::snprintf or std::sprintf
-+#include <cstdio>               // get std::vsnprintf or std::vsprintf
- #include <clocale>
- #include <langinfo.h>		// For codecvt
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning fix this
- #endif
--#ifdef __UCLIBC_HAS_LOCALE__
-+#ifdef _GLIBCXX_USE_ICONV
- #include <iconv.h>		// For codecvt using iconv, iconv_t
- #endif
--#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
--#include <libintl.h> 		// For messages
-+#ifdef HAVE_LIBINTL_H
-+#include <libintl.h>		// For messages
- #endif
-+#include <cstdarg>
- 
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning what is _GLIBCXX_C_LOCALE_GNU for
-+// psm: used in os/gnu-linux/ctype_noninline.h
- #endif
- #define _GLIBCXX_C_LOCALE_GNU 1
- 
-@@ -78,23 +80,25 @@ namespace std
- #else
-   typedef int*			__c_locale;
- #endif
--
--  // Convert numeric value of type _Tv to string and return length of
--  // string.  If snprintf is available use it, otherwise fall back to
--  // the unsafe sprintf which, in general, can be dangerous and should
-+  // Convert numeric value of type double to string and return length of
-+  // string.  If vsnprintf is available use it, otherwise fall back to
-+  // the unsafe vsprintf which, in general, can be dangerous and should
-   // be avoided.
--  template<typename _Tv>
--    int
--    __convert_from_v(char* __out,
--		     const int __size __attribute__ ((__unused__)),
--		     const char* __fmt,
--#ifdef __UCLIBC_HAS_XCLOCALE__
--		     _Tv __v, const __c_locale& __cloc, int __prec)
-+    inline int
-+    __convert_from_v(const __c_locale&
-+#ifndef __UCLIBC_HAS_XCLOCALE__
-+	__cloc __attribute__ ((__unused__))
-+#endif
-+		     ,
-+		     char* __out,
-+		     const int __size,
-+		     const char* __fmt, ...)
-     {
-+      va_list __args;
-+#ifdef __UCLIBC_HAS_XCLOCALE__
-+
-       __c_locale __old = __gnu_cxx::__uselocale(__cloc);
- #else
--		     _Tv __v, const __c_locale&, int __prec)
--    {
- # ifdef __UCLIBC_HAS_LOCALE__
-       char* __old = std::setlocale(LC_ALL, NULL);
-       char* __sav = new char[std::strlen(__old) + 1];
-@@ -103,7 +107,9 @@ namespace std
- # endif
- #endif
- 
--      const int __ret = std::snprintf(__out, __size, __fmt, __prec, __v);
-+      va_start(__args, __fmt);
-+      const int __ret = std::vsnprintf(__out, __size, __fmt, __args);
-+      va_end(__args);
- 
- #ifdef __UCLIBC_HAS_XCLOCALE__
-       __gnu_cxx::__uselocale(__old);
-diff --git a/libstdc++-v3/config/locale/uclibc/ctype_members.cc b/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-index 7b12861..13e011d 100644
---- a/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-@@ -33,16 +33,20 @@
- 
- // Written by Benjamin Kosnik <bkoz@redhat.com>
- 
-+#include <features.h>
-+#ifdef __UCLIBC_HAS_LOCALE__
- #define _LIBC
- #include <locale>
- #undef _LIBC
-+#else
-+#include <locale>
-+#endif
- #include <bits/c++locale_internal.h>
- 
--namespace std
--{
-+_GLIBCXX_BEGIN_NAMESPACE(std)
-+
-   // NB: The other ctype<char> specializations are in src/locale.cc and
-   // various /config/os/* files.
--  template<>
-     ctype_byname<char>::ctype_byname(const char* __s, size_t __refs)
-     : ctype<char>(0, false, __refs)
-     {
-@@ -57,6 +61,8 @@ namespace std
- #endif
- 	}
-     }
-+    ctype_byname<char>::~ctype_byname()
-+    { }
- 
- #ifdef _GLIBCXX_USE_WCHAR_T
-   ctype<wchar_t>::__wmask_type
-@@ -138,17 +144,33 @@ namespace std
-   ctype<wchar_t>::
-   do_is(mask __m, wchar_t __c) const
-   {
--    // Highest bitmask in ctype_base == 10, but extra in "C"
--    // library for blank.
-+    // The case of __m == ctype_base::space is particularly important,
-+    // due to its use in many istream functions.  Therefore we deal with
-+    // it first, exploiting the knowledge that on GNU systems _M_bit[5]
-+    // is the mask corresponding to ctype_base::space.  NB: an encoding
-+    // change would not affect correctness!
-+
-     bool __ret = false;
--    const size_t __bitmasksize = 11;
--    for (size_t __bitcur = 0; __bitcur <= __bitmasksize; ++__bitcur)
--      if (__m & _M_bit[__bitcur]
--	  && __iswctype_l(__c, _M_wmask[__bitcur], _M_c_locale_ctype))
--	{
--	  __ret = true;
--	  break;
--	}
-+    if (__m == _M_bit[5])
-+      __ret = __iswctype_l(__c, _M_wmask[5], _M_c_locale_ctype);
-+    else
-+      {
-+	// Highest bitmask in ctype_base == 10, but extra in "C"
-+	// library for blank.
-+	const size_t __bitmasksize = 11;
-+	for (size_t __bitcur = 0; __bitcur <= __bitmasksize; ++__bitcur)
-+	  if (__m & _M_bit[__bitcur])
-+	    {
-+	      if (__iswctype_l(__c, _M_wmask[__bitcur], _M_c_locale_ctype))
-+		{
-+		  __ret = true;
-+		  break;
-+		}
-+	      else if (__m == _M_bit[__bitcur])
-+		break;
-+	    }
-+      }
-+
-     return __ret;
-   }
- 
-@@ -290,4 +312,5 @@ namespace std
- #endif
-   }
- #endif //  _GLIBCXX_USE_WCHAR_T
--}
-+
-+_GLIBCXX_END_NAMESPACE
-diff --git a/libstdc++-v3/config/locale/uclibc/messages_members.h b/libstdc++-v3/config/locale/uclibc/messages_members.h
-index d89da33..067657a 100644
---- a/libstdc++-v3/config/locale/uclibc/messages_members.h
-+++ b/libstdc++-v3/config/locale/uclibc/messages_members.h
-@@ -53,12 +53,16 @@
-   template<typename _CharT>
-      messages<_CharT>::messages(__c_locale __cloc, const char* __s,
- 				size_t __refs)
--     : facet(__refs), _M_c_locale_messages(_S_clone_c_locale(__cloc)),
--     _M_name_messages(__s)
-+     : facet(__refs), _M_c_locale_messages(NULL),
-+     _M_name_messages(NULL)
-      {
--       char* __tmp = new char[std::strlen(__s) + 1];
--       std::strcpy(__tmp, __s);
-+       const size_t __len = std::strlen(__s) + 1;
-+       char* __tmp = new char[__len];
-+       std::memcpy(__tmp, __s, __len);
-        _M_name_messages = __tmp;
-+
-+       // Last to avoid leaking memory if new throws.
-+       _M_c_locale_messages = _S_clone_c_locale(__cloc);
-      }
- 
-   template<typename _CharT>
-diff --git a/libstdc++-v3/config/locale/uclibc/monetary_members.cc b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-index 31ebb9f..7679b9c 100644
---- a/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-@@ -33,9 +33,14 @@
- 
- // Written by Benjamin Kosnik <bkoz@redhat.com>
- 
-+#include <features.h>
-+#ifdef __UCLIBC_HAS_LOCALE__
- #define _LIBC
- #include <locale>
- #undef _LIBC
-+#else
-+#include <locale>
-+#endif
- #include <bits/c++locale_internal.h>
- 
- #ifdef __UCLIBC_MJN3_ONLY__
-@@ -206,7 +211,7 @@ namespace std
- 	  }
- 	break;
-       default:
--	;
-+	__ret = pattern();
-       }
-     return __ret;
-   }
-@@ -390,7 +395,9 @@ namespace std
- 	  __c_locale __old = __uselocale(__cloc);
- #else
- 	  // Switch to named locale so that mbsrtowcs will work.
--	  char* __old = strdup(setlocale(LC_ALL, NULL));
-+  	  char* __old = setlocale(LC_ALL, NULL);
-+          const size_t __llen = strlen(__old) + 1;
-+          char* __sav = new char[__llen];
- 	  setlocale(LC_ALL, __name);
- #endif
- 
-@@ -477,8 +484,8 @@ namespace std
- #ifdef __UCLIBC_HAS_XLOCALE__
- 	      __uselocale(__old);
- #else
--	      setlocale(LC_ALL, __old);
--	      free(__old);
-+	      setlocale(LC_ALL, __sav);
-+	      delete [] __sav;
- #endif
- 	      __throw_exception_again;
- 	    }
-@@ -498,8 +505,8 @@ namespace std
- #ifdef __UCLIBC_HAS_XLOCALE__
- 	  __uselocale(__old);
- #else
--	  setlocale(LC_ALL, __old);
--	  free(__old);
-+	  setlocale(LC_ALL, __sav);
-+	  delete [] __sav;
- #endif
- 	}
-     }
-@@ -545,8 +552,11 @@ namespace std
- 	  __c_locale __old = __uselocale(__cloc);
- #else
- 	  // Switch to named locale so that mbsrtowcs will work.
--	  char* __old = strdup(setlocale(LC_ALL, NULL));
--	  setlocale(LC_ALL, __name);
-+          char* __old = setlocale(LC_ALL, NULL);
-+          const size_t __llen = strlen(__old) + 1;
-+          char* __sav = new char[__llen];
-+          memcpy(__sav, __old, __llen);
-+          setlocale(LC_ALL, __name);
- #endif
- 
- #ifdef __UCLIBC_MJN3_ONLY__
-@@ -633,8 +643,8 @@ namespace std
- #ifdef __UCLIBC_HAS_XLOCALE__
- 	      __uselocale(__old);
- #else
--	      setlocale(LC_ALL, __old);
--	      free(__old);
-+	      setlocale(LC_ALL, __sav);
-+	      delete [] __sav;
- #endif
-               __throw_exception_again;
- 	    }
-@@ -653,8 +663,8 @@ namespace std
- #ifdef __UCLIBC_HAS_XLOCALE__
- 	  __uselocale(__old);
- #else
--	  setlocale(LC_ALL, __old);
--	  free(__old);
-+	  setlocale(LC_ALL, __sav);
-+	  delete [] __sav;
- #endif
- 	}
-     }
-diff --git a/libstdc++-v3/config/locale/uclibc/numeric_members.cc b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-index d5c8961..8ae8969 100644
---- a/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-@@ -33,9 +33,14 @@
- 
- // Written by Benjamin Kosnik <bkoz@redhat.com>
- 
-+#include <features.h>
-+#ifdef __UCLIBC_HAS_LOCALE__
- #define _LIBC
- #include <locale>
- #undef _LIBC
-+#else
-+#include <locale>
-+#endif
- #include <bits/c++locale_internal.h>
- 
- #ifdef __UCLIBC_MJN3_ONLY__
-diff --git a/libstdc++-v3/config/locale/uclibc/time_members.cc b/libstdc++-v3/config/locale/uclibc/time_members.cc
-index d848ed5..f24d53e 100644
---- a/libstdc++-v3/config/locale/uclibc/time_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/time_members.cc
-@@ -53,11 +53,14 @@ namespace std
-       const size_t __len = __strftime_l(__s, __maxlen, __format, __tm,
- 					_M_c_locale_timepunct);
- #else
--      char* __old = strdup(setlocale(LC_ALL, NULL));
-+      char* __old = setlocale(LC_ALL, NULL);
-+      const size_t __llen = strlen(__old) + 1;
-+      char* __sav = new char[__llen];
-+      memcpy(__sav, __old, __llen);
-       setlocale(LC_ALL, _M_name_timepunct);
-       const size_t __len = strftime(__s, __maxlen, __format, __tm);
--      setlocale(LC_ALL, __old);
--      free(__old);
-+      setlocale(LC_ALL, __sav);
-+      delete [] __sav;
- #endif
-       // Make sure __s is null terminated.
-       if (__len == 0)
-@@ -207,11 +210,14 @@ namespace std
-       const size_t __len = __wcsftime_l(__s, __maxlen, __format, __tm,
- 					_M_c_locale_timepunct);
- #else
--      char* __old = strdup(setlocale(LC_ALL, NULL));
-+      char* __old = setlocale(LC_ALL, NULL);
-+      const size_t __llen = strlen(__old) + 1;
-+      char* __sav = new char[__llen];
-+      memcpy(__sav, __old, __llen);
-       setlocale(LC_ALL, _M_name_timepunct);
-       const size_t __len = wcsftime(__s, __maxlen, __format, __tm);
--      setlocale(LC_ALL, __old);
--      free(__old);
-+      setlocale(LC_ALL, __sav);
-+      delete [] __sav;
- #endif
-       // Make sure __s is null terminated.
-       if (__len == 0)
-diff --git a/libstdc++-v3/config/locale/uclibc/time_members.h b/libstdc++-v3/config/locale/uclibc/time_members.h
-index ba8e858..1665dde 100644
---- a/libstdc++-v3/config/locale/uclibc/time_members.h
-+++ b/libstdc++-v3/config/locale/uclibc/time_members.h
-@@ -50,12 +50,21 @@
-     __timepunct<_CharT>::__timepunct(__c_locale __cloc, const char* __s,
- 				     size_t __refs)
-     : facet(__refs), _M_data(NULL), _M_c_locale_timepunct(NULL),
--    _M_name_timepunct(__s)
-+    _M_name_timepunct(NULL)
-     {
--      char* __tmp = new char[std::strlen(__s) + 1];
--      std::strcpy(__tmp, __s);
-+      const size_t __len = std::strlen(__s) + 1;
-+      char* __tmp = new char[__len];
-+      std::memcpy(__tmp, __s, __len);
-       _M_name_timepunct = __tmp;
--      _M_initialize_timepunct(__cloc);
-+
-+      try
-+	{ _M_initialize_timepunct(__cloc); }
-+      catch(...)
-+	{
-+	  delete [] _M_name_timepunct;
-+	  __throw_exception_again;
-+	}
-+
-     }
- 
-   template<typename _CharT>
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0008-missing-execinfo_h.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0008-missing-execinfo_h.patch
deleted file mode 100644
index 9f87931..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0008-missing-execinfo_h.patch
+++ /dev/null
@@ -1,28 +0,0 @@
-From 9b7442069eb68a42d2437181c1b2e710dd077e8b Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:48:10 +0400
-Subject: [PATCH 08/46] missing-execinfo_h
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- boehm-gc/include/gc.h | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/boehm-gc/include/gc.h b/boehm-gc/include/gc.h
-index 6b38f2d..fca98ff 100644
---- a/boehm-gc/include/gc.h
-+++ b/boehm-gc/include/gc.h
-@@ -503,7 +503,7 @@ GC_API GC_PTR GC_malloc_atomic_ignore_off_page GC_PROTO((size_t lb));
- #if defined(__linux__) || defined(__GLIBC__)
- # include <features.h>
- # if (__GLIBC__ == 2 && __GLIBC_MINOR__ >= 1 || __GLIBC__ > 2) \
--     && !defined(__ia64__)
-+     && !defined(__ia64__) && !defined(__UCLIBC__)
- #   ifndef GC_HAVE_BUILTIN_BACKTRACE
- #     define GC_HAVE_BUILTIN_BACKTRACE
- #   endif
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0009-c99-snprintf.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0009-c99-snprintf.patch
deleted file mode 100644
index 6b236c6..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0009-c99-snprintf.patch
+++ /dev/null
@@ -1,28 +0,0 @@
-From fde97f80d2d931ed3fa4a86294799366b2359331 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:49:03 +0400
-Subject: [PATCH 09/46] c99-snprintf
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- libstdc++-v3/include/c_std/cstdio | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/libstdc++-v3/include/c_std/cstdio b/libstdc++-v3/include/c_std/cstdio
-index 37f01ca..f00c06d 100644
---- a/libstdc++-v3/include/c_std/cstdio
-+++ b/libstdc++-v3/include/c_std/cstdio
-@@ -144,7 +144,7 @@ namespace std
-   using ::vsprintf;
- } // namespace std
- 
--#if _GLIBCXX_USE_C99
-+#if _GLIBCXX_USE_C99 || defined(__UCLIBC__)
- 
- #undef snprintf
- #undef vfscanf
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0010-gcc-poison-system-directories.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0010-gcc-poison-system-directories.patch
deleted file mode 100644
index 2da8877..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0010-gcc-poison-system-directories.patch
+++ /dev/null
@@ -1,192 +0,0 @@
-From 7a90e62d557c78ae52006dff30c99006e10d9357 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:59:00 +0400
-Subject: [PATCH 10/46] gcc: poison-system-directories
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Inappropriate [distribution: codesourcery]
----
- gcc/common.opt      |  4 ++++
- gcc/config.in       |  6 ++++++
- gcc/configure       | 16 ++++++++++++++++
- gcc/configure.ac    | 10 ++++++++++
- gcc/doc/invoke.texi |  9 +++++++++
- gcc/gcc.c           |  2 ++
- gcc/incpath.c       | 19 +++++++++++++++++++
- 7 files changed, 66 insertions(+)
-
-diff --git a/gcc/common.opt b/gcc/common.opt
-index 1218a71..bfba114 100644
---- a/gcc/common.opt
-+++ b/gcc/common.opt
-@@ -623,6 +623,10 @@ Wreturn-local-addr
- Common Var(warn_return_local_addr) Init(1) Warning
- Warn about returning a pointer/reference to a local or temporary variable.
- 
-+Wpoison-system-directories
-+Common Var(flag_poison_system_directories) Init(1) Warning
-+Warn for -I and -L options using system directories if cross compiling
-+
- Wshadow
- Common Var(warn_shadow) Warning
- Warn when one local variable shadows another
-diff --git a/gcc/config.in b/gcc/config.in
-index 5335258..f079826 100644
---- a/gcc/config.in
-+++ b/gcc/config.in
-@@ -168,6 +168,12 @@
- #endif
- 
- 
-+/* Define to warn for use of native system header directories */
-+#ifndef USED_FOR_TARGET
-+#undef ENABLE_POISON_SYSTEM_DIRECTORIES
-+#endif
-+
-+
- /* Define if you want all operations on RTL (the basic data structure of the
-    optimizer and back end) to be checked for dynamic type safety at runtime.
-    This is quite expensive. */
-diff --git a/gcc/configure b/gcc/configure
-index 3c92795..34371a3 100755
---- a/gcc/configure
-+++ b/gcc/configure
-@@ -933,6 +933,7 @@ with_system_zlib
- enable_maintainer_mode
- enable_link_mutex
- enable_version_specific_runtime_libs
-+enable_poison_system_directories
- enable_plugin
- enable_host_shared
- enable_libquadmath_support
-@@ -1670,6 +1671,8 @@ Optional Features:
-   --enable-version-specific-runtime-libs
-                           specify that runtime libraries should be installed
-                           in a compiler-specific directory
-+  --enable-poison-system-directories
-+                          warn for use of native system header directories
-   --enable-plugin         enable plugin support
-   --enable-host-shared    build host code as shared libraries
-   --disable-libquadmath-support
-@@ -28211,6 +28214,19 @@ if test "${enable_version_specific_runtime_libs+set}" = set; then :
- fi
- 
- 
-+# Check whether --enable-poison-system-directories was given.
-+if test "${enable_poison_system_directories+set}" = set; then :
-+  enableval=$enable_poison_system_directories;
-+else
-+  enable_poison_system_directories=no
-+fi
-+
-+if test "x${enable_poison_system_directories}" = "xyes"; then
-+
-+$as_echo "#define ENABLE_POISON_SYSTEM_DIRECTORIES 1" >>confdefs.h
-+
-+fi
-+
- # Substitute configuration variables
- 
- 
-diff --git a/gcc/configure.ac b/gcc/configure.ac
-index d414081..240d322 100644
---- a/gcc/configure.ac
-+++ b/gcc/configure.ac
-@@ -5654,6 +5654,16 @@ AC_ARG_ENABLE(version-specific-runtime-libs,
-                 [specify that runtime libraries should be
-                  installed in a compiler-specific directory])])
- 
-+AC_ARG_ENABLE([poison-system-directories],
-+             AS_HELP_STRING([--enable-poison-system-directories],
-+                            [warn for use of native system header directories]),,
-+             [enable_poison_system_directories=no])
-+if test "x${enable_poison_system_directories}" = "xyes"; then
-+  AC_DEFINE([ENABLE_POISON_SYSTEM_DIRECTORIES],
-+           [1],
-+           [Define to warn for use of native system header directories])
-+fi
-+
- # Substitute configuration variables
- AC_SUBST(subdirs)
- AC_SUBST(srcdir)
-diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
-index d3be589..c81b55b 100644
---- a/gcc/doc/invoke.texi
-+++ b/gcc/doc/invoke.texi
-@@ -269,6 +269,7 @@ Objective-C and Objective-C++ Dialects}.
- -Woverlength-strings  -Wpacked  -Wpacked-bitfield-compat  -Wpadded @gol
- -Wparentheses  -Wpedantic-ms-format -Wno-pedantic-ms-format @gol
- -Wpointer-arith  -Wno-pointer-to-int-cast @gol
-+-Wno-poison-system-directories @gol
- -Wredundant-decls  -Wno-return-local-addr @gol
- -Wreturn-type  -Wsequence-point  -Wshadow  -Wno-shadow-ivar @gol
- -Wshift-count-negative -Wshift-count-overflow @gol
-@@ -4433,6 +4434,14 @@ made up of data only and thus requires no special treatment.  But, for
- most targets, it is made up of code and thus requires the stack to be
- made executable in order for the program to work properly.
- 
-+@item -Wno-poison-system-directories
-+@opindex Wno-poison-system-directories
-+Do not warn for @option{-I} or @option{-L} options using system
-+directories such as @file{/usr/include} when cross compiling.  This
-+option is intended for use in chroot environments when such
-+directories contain the correct headers and libraries for the target
-+system rather than the host.
-+
- @item -Wfloat-equal
- @opindex Wfloat-equal
- @opindex Wno-float-equal
-diff --git a/gcc/gcc.c b/gcc/gcc.c
-index d956c36..675bcc1 100644
---- a/gcc/gcc.c
-+++ b/gcc/gcc.c
-@@ -835,6 +835,8 @@ proper position among the other output files.  */
-    "%{fuse-ld=*:-fuse-ld=%*} " LINK_COMPRESS_DEBUG_SPEC \
-    "%X %{o*} %{e*} %{N} %{n} %{r}\
-     %{s} %{t} %{u*} %{z} %{Z} %{!nostdlib:%{!nostartfiles:%S}} " VTABLE_VERIFICATION_SPEC " \
-+    %{Wno-poison-system-directories:--no-poison-system-directories}\
-+    %{Werror=poison-system-directories:--error-poison-system-directories}\
-     %{static:} %{L*} %(mfwrap) %(link_libgcc) " SANITIZER_EARLY_SPEC " %o\
-     " CHKP_SPEC " \
-     %{fopenacc|fopenmp|ftree-parallelize-loops=*:%:include(libgomp.spec)%(link_gomp)}\
-diff --git a/gcc/incpath.c b/gcc/incpath.c
-index 6c54ca6..cc0c921 100644
---- a/gcc/incpath.c
-+++ b/gcc/incpath.c
-@@ -28,6 +28,7 @@
- #include "intl.h"
- #include "incpath.h"
- #include "cppdefault.h"
-+#include "diagnostic-core.h"
- 
- /* Microsoft Windows does not natively support inodes.
-    VMS has non-numeric inodes.  */
-@@ -383,6 +384,24 @@ merge_include_chains (const char *sysroot, cpp_reader *pfile, int verbose)
- 	}
-       fprintf (stderr, _("End of search list.\n"));
-     }
-+
-+#ifdef ENABLE_POISON_SYSTEM_DIRECTORIES
-+  if (flag_poison_system_directories)
-+    {
-+       struct cpp_dir *p;
-+
-+       for (p = heads[QUOTE]; p; p = p->next)
-+         {
-+          if ((!strncmp (p->name, "/usr/include", 12))
-+              || (!strncmp (p->name, "/usr/local/include", 18))
-+              || (!strncmp (p->name, "/usr/X11R6/include", 18)))
-+            warning (OPT_Wpoison_system_directories,
-+                     "include location \"%s\" is unsafe for "
-+                     "cross-compilation",
-+                     p->name);
-+         }
-+    }
-+#endif
- }
- 
- /* Use given -I paths for #include "..." but not #include <...>, and
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0011-gcc-poison-dir-extend.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0011-gcc-poison-dir-extend.patch
deleted file mode 100644
index 511e694..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0011-gcc-poison-dir-extend.patch
+++ /dev/null
@@ -1,39 +0,0 @@
-From db406054de6b0967cd76cbb998f05b0c5af0bd94 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:00:34 +0400
-Subject: [PATCH 11/46] gcc-poison-dir-extend
-
-Add /sw/include and /opt/include based on the original
-zecke-no-host-includes.patch patch.  The original patch checked for
-/usr/include, /sw/include and /opt/include and then triggered a failure and
-aborted.
-
-Instead, we add the two missing items to the current scan.  If the user
-wants this to be a failure, they can add "-Werror=poison-system-directories".
-
-Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- gcc/incpath.c | 4 +++-
- 1 file changed, 3 insertions(+), 1 deletion(-)
-
-diff --git a/gcc/incpath.c b/gcc/incpath.c
-index cc0c921..0bc1f67 100644
---- a/gcc/incpath.c
-+++ b/gcc/incpath.c
-@@ -394,7 +394,9 @@ merge_include_chains (const char *sysroot, cpp_reader *pfile, int verbose)
-          {
-           if ((!strncmp (p->name, "/usr/include", 12))
-               || (!strncmp (p->name, "/usr/local/include", 18))
--              || (!strncmp (p->name, "/usr/X11R6/include", 18)))
-+              || (!strncmp (p->name, "/usr/X11R6/include", 18))
-+              || (!strncmp (p->name, "/sw/include", 11))
-+              || (!strncmp (p->name, "/opt/include", 12)))
-             warning (OPT_Wpoison_system_directories,
-                      "include location \"%s\" is unsafe for "
-                      "cross-compilation",
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0012-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0012-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch
deleted file mode 100644
index 750bbc8..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0012-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch
+++ /dev/null
@@ -1,73 +0,0 @@
-From 278d293c4cee9482a23aa3443741861ff2f67212 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:08:31 +0400
-Subject: [PATCH 12/46] gcc-4.3.3: SYSROOT_CFLAGS_FOR_TARGET
-
-Before committing, I noticed that PR/32161 was marked as a dup of PR/32009, but my previous patch did not fix it.
-
-This alternative patch is better because it lets you just use CFLAGS_FOR_TARGET to set the compilation flags for libgcc. Since bootstrapped target libraries are never compiled with the native compiler, it makes little sense to use different flags for stage1 and later stages. And it also makes little sense to use a different variable than CFLAGS_FOR_TARGET.
-
-Other changes I had to do include:
-
-- moving the creation of default CFLAGS_FOR_TARGET from Makefile.am to configure.ac, because otherwise the BOOT_CFLAGS are substituted into CFLAGS_FOR_TARGET (which is "-O2 -g $(CFLAGS)") via $(CFLAGS). It is also cleaner this way though.
-
-- passing the right CFLAGS to configure scripts as exported environment variables
-
-I also stopped passing LIBCFLAGS to configure scripts since they are unused in the whole src tree. And I updated the documentation as H-P reminded me to do.
-
-Bootstrapped/regtested i686-pc-linux-gnu, will commit to 4.4 shortly. Ok for 4.3?
-
-Signed-off-by: Paolo Bonzini  <bonzini@gnu.org>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- configure | 32 ++++++++++++++++++++++++++++++++
- 1 file changed, 32 insertions(+)
-
-diff --git a/configure b/configure
-index 1cba3a9..9cae8da 100755
---- a/configure
-+++ b/configure
-@@ -6733,6 +6733,38 @@ fi
- 
- 
- 
-+# During gcc bootstrap, if we use some random cc for stage1 then CFLAGS
-+# might be empty or "-g".  We don't require a C++ compiler, so CXXFLAGS
-+# might also be empty (or "-g", if a non-GCC C++ compiler is in the path).
-+# We want to ensure that TARGET libraries (which we know are built with
-+# gcc) are built with "-O2 -g", so include those options when setting
-+# CFLAGS_FOR_TARGET and CXXFLAGS_FOR_TARGET.
-+if test "x$CFLAGS_FOR_TARGET" = x; then
-+  CFLAGS_FOR_TARGET=$CFLAGS
-+  case " $CFLAGS " in
-+    *" -O2 "*) ;;
-+    *) CFLAGS_FOR_TARGET="-O2 $CFLAGS" ;;
-+  esac
-+  case " $CFLAGS " in
-+    *" -g "* | *" -g3 "*) ;;
-+    *) CFLAGS_FOR_TARGET="-g $CFLAGS" ;;
-+  esac
-+fi
-+
-+
-+if test "x$CXXFLAGS_FOR_TARGET" = x; then
-+  CXXFLAGS_FOR_TARGET=$CXXFLAGS
-+  case " $CXXFLAGS " in
-+    *" -O2 "*) ;;
-+    *) CXXFLAGS_FOR_TARGET="-O2 $CXXFLAGS" ;;
-+  esac
-+  case " $CXXFLAGS " in
-+    *" -g "* | *" -g3 "*) ;;
-+    *) CXXFLAGS_FOR_TARGET="-g $CXXFLAGS" ;;
-+  esac
-+fi
-+
-+
- # Handle --with-headers=XXX.  If the value is not "yes", the contents of
- # the named directory are copied to $(tooldir)/sys-include.
- if test x"${with_headers}" != x && test x"${with_headers}" != xno ; then
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0013-64-bit-multilib-hack.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0013-64-bit-multilib-hack.patch
deleted file mode 100644
index 45aaf4c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0013-64-bit-multilib-hack.patch
+++ /dev/null
@@ -1,85 +0,0 @@
-From bdde4f5efa3c89ac7ee0bc05f322f27e3dabeab9 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:10:06 +0400
-Subject: [PATCH 13/46] 64-bit multilib hack.
-
-GCC has internal multilib handling code but it assumes a very specific rigid directory
-layout. The build system implementation of multilib layout is very generic and allows
-complete customisation of the library directories.
-
-This patch is a partial solution to allow any custom directories to be passed into gcc
-and handled correctly. It forces gcc to use the base_libdir (which is the current
-directory, "."). We need to do this for each multilib that is configured as we don't
-know which compiler options may be being passed into the compiler. Since we have a compiler
-per mulitlib at this point that isn't an issue.
-
-The one problem is the target compiler is only going to work for the default multlilib at
-this point. Ideally we'd figure out which multilibs were being enabled with which paths
-and be able to patch these entries with a complete set of correct paths but this we
-don't have such code at this point. This is something the target gcc recipe should do
-and override these platform defaults in its build config.
-
-RP 15/8/11
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Signed-off-by: Elvis Dowson <elvis.dowson@gmail.com>
-
-Upstream-Status: Pending
----
- gcc/config/i386/t-linux64   |  6 ++----
- gcc/config/mips/t-linux64   | 10 +++-------
- gcc/config/rs6000/t-linux64 |  5 ++---
- 3 files changed, 7 insertions(+), 14 deletions(-)
-
-diff --git a/gcc/config/i386/t-linux64 b/gcc/config/i386/t-linux64
-index f6dbb78..d770da5 100644
---- a/gcc/config/i386/t-linux64
-+++ b/gcc/config/i386/t-linux64
-@@ -32,7 +32,5 @@
- #
- comma=,
- MULTILIB_OPTIONS    = $(subst $(comma),/,$(TM_MULTILIB_CONFIG))
--MULTILIB_DIRNAMES   = $(patsubst m%, %, $(subst /, ,$(MULTILIB_OPTIONS)))
--MULTILIB_OSDIRNAMES = m64=../lib64$(call if_multiarch,:x86_64-linux-gnu)
--MULTILIB_OSDIRNAMES+= m32=$(if $(wildcard $(shell echo $(SYSTEM_HEADER_DIR))/../../usr/lib32),../lib32,../lib)$(call if_multiarch,:i386-linux-gnu)
--MULTILIB_OSDIRNAMES+= mx32=../libx32$(call if_multiarch,:x86_64-linux-gnux32)
-+MULTILIB_DIRNAMES = . .
-+MULTILIB_OSDIRNAMES = ../$(shell basename $(base_libdir)) ../$(shell basename $(base_libdir))
-diff --git a/gcc/config/mips/t-linux64 b/gcc/config/mips/t-linux64
-index 7e96406..b72dcb0 100644
---- a/gcc/config/mips/t-linux64
-+++ b/gcc/config/mips/t-linux64
-@@ -17,10 +17,6 @@
- # <http://www.gnu.org/licenses/>.
- 
- MULTILIB_OPTIONS = mabi=n32/mabi=32/mabi=64
--MULTILIB_DIRNAMES = n32 32 64
--MIPS_EL = $(if $(filter %el, $(firstword $(subst -, ,$(target)))),el)
--MIPS_SOFT = $(if $(strip $(filter MASK_SOFT_FLOAT_ABI, $(target_cpu_default)) $(filter soft, $(with_float))),soft)
--MULTILIB_OSDIRNAMES = \
--	../lib32$(call if_multiarch,:mips64$(MIPS_EL)-linux-gnuabin32$(MIPS_SOFT)) \
--	../lib$(call if_multiarch,:mips$(MIPS_EL)-linux-gnu$(MIPS_SOFT)) \
--	../lib64$(call if_multiarch,:mips64$(MIPS_EL)-linux-gnuabi64$(MIPS_SOFT))
-+MULTILIB_DIRNAMES = . . .
-+MULTILIB_OSDIRNAMES = ../$(shell basename $(base_libdir)) ../$(shell basename $(base_libdir)) ../$(shell basename $(base_libdir))
-+
-diff --git a/gcc/config/rs6000/t-linux64 b/gcc/config/rs6000/t-linux64
-index b6b351d..1d5b37a 100644
---- a/gcc/config/rs6000/t-linux64
-+++ b/gcc/config/rs6000/t-linux64
-@@ -26,10 +26,9 @@
- # MULTILIB_OSDIRNAMES according to what is found on the target.
- 
- MULTILIB_OPTIONS    := m64/m32
--MULTILIB_DIRNAMES   := 64 32
-+MULTILIB_DIRNAMES   := . .
- MULTILIB_EXTRA_OPTS := 
--MULTILIB_OSDIRNAMES := m64=../lib64$(call if_multiarch,:powerpc64-linux-gnu)
--MULTILIB_OSDIRNAMES += m32=$(if $(wildcard $(shell echo $(SYSTEM_HEADER_DIR))/../../usr/lib32),../lib32,../lib)$(call if_multiarch,:powerpc-linux-gnu)
-+MULTILIB_OSDIRNAMES := ../$(shell basename $(base_libdir)) ../$(shell basename $(base_libdir))
- 
- rs6000-linux.o: $(srcdir)/config/rs6000/rs6000-linux.c
- 	$(COMPILE) $<
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0014-optional-libstdc.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0014-optional-libstdc.patch
deleted file mode 100644
index 73741f8..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0014-optional-libstdc.patch
+++ /dev/null
@@ -1,101 +0,0 @@
-From a13763f8a1d413a432e7b40835a062f86208f29a Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:12:56 +0400
-Subject: [PATCH 14/46] optional libstdc
-
-gcc-runtime builds libstdc++ separately from gcc-cross-*. Its configure tests using g++
-will not run correctly since by default the linker will try to link against libstdc++
-which shouldn't exist yet. We need an option to disable -lstdc++
-option whilst leaving -lc, -lgcc and other automatic library dependencies added by gcc
-driver. This patch adds such an option which only disables the -lstdc++.
-
-A "standard" gcc build uses xgcc and hence avoids this. We should ask upstream how to
-do this officially, the likely answer is don't build libstdc++ separately.
-
-RP 29/6/10
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Inappropriate [embedded specific]
----
- gcc/c-family/c.opt  | 4 ++++
- gcc/cp/g++spec.c    | 1 +
- gcc/doc/invoke.texi | 8 +++++++-
- gcc/gcc.c           | 1 +
- 4 files changed, 13 insertions(+), 1 deletion(-)
-
-diff --git a/gcc/c-family/c.opt b/gcc/c-family/c.opt
-index 4162566..453ec8e 100644
---- a/gcc/c-family/c.opt
-+++ b/gcc/c-family/c.opt
-@@ -1543,6 +1543,10 @@ nostdinc++
- C++ ObjC++
- Do not search standard system include directories for C++
- 
-+nostdlib++
-+Driver
-+Do not link standard C++ runtime library
-+
- o
- C ObjC C++ ObjC++ Joined Separate
- ; Documented in common.opt
-diff --git a/gcc/cp/g++spec.c b/gcc/cp/g++spec.c
-index 6536d7e..f57a5d4 100644
---- a/gcc/cp/g++spec.c
-+++ b/gcc/cp/g++spec.c
-@@ -138,6 +138,7 @@ lang_specific_driver (struct cl_decoded_option **in_decoded_options,
-       switch (decoded_options[i].opt_index)
- 	{
- 	case OPT_nostdlib:
-+	case OPT_nostdlib__:
- 	case OPT_nodefaultlibs:
- 	  library = -1;
- 	  break;
-diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
-index c81b55b..6d3f68c 100644
---- a/gcc/doc/invoke.texi
-+++ b/gcc/doc/invoke.texi
-@@ -195,6 +195,7 @@ in the following sections.
- -fvisibility-inlines-hidden @gol
- -fvtable-verify=@r{[}std@r{|}preinit@r{|}none@r{]} @gol
- -fvtv-counts -fvtv-debug @gol
-+-nostdlib++ @gol
- -fvisibility-ms-compat @gol
- -fext-numeric-literals @gol
- -Wabi=@var{n}  -Wabi-tag  -Wconversion-null  -Wctor-dtor-privacy @gol
-@@ -488,7 +489,7 @@ Objective-C and Objective-C++ Dialects}.
- -s  -static -static-libgcc -static-libstdc++ @gol
- -static-libasan -static-libtsan -static-liblsan -static-libubsan @gol
- -static-libmpx -static-libmpxwrappers @gol
---shared -shared-libgcc  -symbolic @gol
-+-shared -shared-libgcc  -symbolic -nostdlib++ @gol
- -T @var{script}  -Wl,@var{option}  -Xlinker @var{option} @gol
- -u @var{symbol} -z @var{keyword}}
- 
-@@ -11187,6 +11188,11 @@ These entries are usually resolved by entries in
- libc.  These entry points should be supplied through some other
- mechanism when this option is specified.
- 
-+@item -nostdlib++
-+@opindex nostdlib++
-+Do not use the standard system C++ runtime libraries when linking.
-+Only the libraries you specify will be passed to the linker.
-+
- @cindex @option{-lgcc}, use with @option{-nostdlib}
- @cindex @option{-nostdlib} and unresolved references
- @cindex unresolved references and @option{-nostdlib}
-diff --git a/gcc/gcc.c b/gcc/gcc.c
-index 675bcc1..a37ec8b 100644
---- a/gcc/gcc.c
-+++ b/gcc/gcc.c
-@@ -845,6 +845,7 @@ proper position among the other output files.  */
-     %(mflib) " STACK_SPLIT_SPEC "\
-     %{fprofile-arcs|fprofile-generate*|coverage:-lgcov} " SANITIZER_SPEC " \
-     %{!nostdlib:%{!nodefaultlibs:%(link_ssp) %(link_gcc_c_sequence)}}\
-+    %{!nostdlib++:}\
-     %{!nostdlib:%{!nostartfiles:%E}} %{T*} }}}}}}"
- #endif
- 
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0015-gcc-disable-MASK_RELAX_PIC_CALLS-bit.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0015-gcc-disable-MASK_RELAX_PIC_CALLS-bit.patch
deleted file mode 100644
index 1b62ef8..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0015-gcc-disable-MASK_RELAX_PIC_CALLS-bit.patch
+++ /dev/null
@@ -1,59 +0,0 @@
-From 4bd21a1d63cfb14e261d90adc0c5d4f9462dc555 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:14:20 +0400
-Subject: [PATCH 15/46] gcc: disable MASK_RELAX_PIC_CALLS bit
-
-The new feature added after 4.3.3
-"http://www.pubbs.net/200909/gcc/94048-patch-add-support-for-rmipsjalr.html"
-will cause cc1plus eat up all the system memory when build webkit-gtk.
-The function mips_get_pic_call_symbol keeps on recursively calling itself.
-Disable this feature to walk aside the bug.
-
-Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Inappropriate [configuration]
----
- gcc/configure    | 7 -------
- gcc/configure.ac | 7 -------
- 2 files changed, 14 deletions(-)
-
-diff --git a/gcc/configure b/gcc/configure
-index 34371a3..8caecdc 100755
---- a/gcc/configure
-+++ b/gcc/configure
-@@ -26479,13 +26479,6 @@ $as_echo_n "checking assembler and linker for explicit JALR relocation... " >&6;
-         rm -f conftest.*
-       fi
-     fi
--    if test $gcc_cv_as_ld_jalr_reloc = yes; then
--      if test x$target_cpu_default = x; then
--        target_cpu_default=MASK_RELAX_PIC_CALLS
--      else
--        target_cpu_default="($target_cpu_default)|MASK_RELAX_PIC_CALLS"
--      fi
--    fi
-     { $as_echo "$as_me:${as_lineno-$LINENO}: result: $gcc_cv_as_ld_jalr_reloc" >&5
- $as_echo "$gcc_cv_as_ld_jalr_reloc" >&6; }
- 
-diff --git a/gcc/configure.ac b/gcc/configure.ac
-index 240d322..57819ff 100644
---- a/gcc/configure.ac
-+++ b/gcc/configure.ac
-@@ -4384,13 +4384,6 @@ x:
-         rm -f conftest.*
-       fi
-     fi
--    if test $gcc_cv_as_ld_jalr_reloc = yes; then
--      if test x$target_cpu_default = x; then
--        target_cpu_default=MASK_RELAX_PIC_CALLS
--      else
--        target_cpu_default="($target_cpu_default)|MASK_RELAX_PIC_CALLS"
--      fi
--    fi
-     AC_MSG_RESULT($gcc_cv_as_ld_jalr_reloc)
- 
-     AC_CACHE_CHECK([linker for .eh_frame personality relaxation],
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0016-COLLECT_GCC_OPTIONS.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0016-COLLECT_GCC_OPTIONS.patch
deleted file mode 100644
index e6ae262..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0016-COLLECT_GCC_OPTIONS.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-From 92427ebb94dc66f8e64ebf3ffbcd6dd5ce0935c1 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:16:28 +0400
-Subject: [PATCH 16/46] COLLECT_GCC_OPTIONS
-
-This patch adds --sysroot into COLLECT_GCC_OPTIONS which is used to
-invoke collect2.
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- gcc/gcc.c | 9 +++++++++
- 1 file changed, 9 insertions(+)
-
-diff --git a/gcc/gcc.c b/gcc/gcc.c
-index a37ec8b..87b47c5 100644
---- a/gcc/gcc.c
-+++ b/gcc/gcc.c
-@@ -4352,6 +4352,15 @@ set_collect_gcc_options (void)
- 		sizeof ("COLLECT_GCC_OPTIONS=") - 1);
- 
-   first_time = TRUE;
-+#ifdef HAVE_LD_SYSROOT
-+  if (target_system_root_changed && target_system_root)
-+    {
-+      obstack_grow (&collect_obstack, "'--sysroot=", sizeof("'--sysroot=")-1);
-+      obstack_grow (&collect_obstack, target_system_root,strlen(target_system_root));
-+      obstack_grow (&collect_obstack, "'", 1);
-+      first_time = FALSE;
-+    }
-+#endif
-   for (i = 0; (int) i < n_switches; i++)
-     {
-       const char *const *args;
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0017-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0017-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch
deleted file mode 100644
index b89a279..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0017-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch
+++ /dev/null
@@ -1,96 +0,0 @@
-From b1f3118e439459c26a3e19c617b7b0c93745e77a Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:17:25 +0400
-Subject: [PATCH 17/46] Use the defaults.h in ${B} instead of ${S}, and t-oe in
- ${B}
-
-Use the defaults.h in ${B} instead of ${S}, and t-oe in ${B}, so that
-the source can be shared between gcc-cross-initial,
-gcc-cross-intermediate, gcc-cross, gcc-runtime, and also the sdk build.
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
-
-While compiling gcc-crosssdk-initial-x86_64 on some host, there is
-occasionally failure that test the existance of default.h doesn't
-work, the reason is tm_include_list='** defaults.h' rather than
-tm_include_list='** ./defaults.h'
-
-So we add the test condition for this situation.
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- gcc/Makefile.in  | 2 +-
- gcc/configure    | 4 ++--
- gcc/configure.ac | 4 ++--
- gcc/mkconfig.sh  | 4 ++--
- 4 files changed, 7 insertions(+), 7 deletions(-)
-
-diff --git a/gcc/Makefile.in b/gcc/Makefile.in
-index 07c6f0a..e1e63e8 100644
---- a/gcc/Makefile.in
-+++ b/gcc/Makefile.in
-@@ -502,7 +502,7 @@ TARGET_SYSTEM_ROOT = @TARGET_SYSTEM_ROOT@
- TARGET_SYSTEM_ROOT_DEFINE = @TARGET_SYSTEM_ROOT_DEFINE@
- 
- xmake_file=@xmake_file@
--tmake_file=@tmake_file@
-+tmake_file=@tmake_file@ ./t-oe
- TM_ENDIAN_CONFIG=@TM_ENDIAN_CONFIG@
- TM_MULTILIB_CONFIG=@TM_MULTILIB_CONFIG@
- TM_MULTILIB_EXCEPTIONS_CONFIG=@TM_MULTILIB_EXCEPTIONS_CONFIG@
-diff --git a/gcc/configure b/gcc/configure
-index 8caecdc..8ec3d40 100755
---- a/gcc/configure
-+++ b/gcc/configure
-@@ -11850,8 +11850,8 @@ for f in $tm_file; do
-        tm_include_list="${tm_include_list} $f"
-        ;;
-     defaults.h )
--       tm_file_list="${tm_file_list} \$(srcdir)/$f"
--       tm_include_list="${tm_include_list} $f"
-+       tm_file_list="${tm_file_list} ./$f"
-+       tm_include_list="${tm_include_list} ./$f"
-        ;;
-     * )
-        tm_file_list="${tm_file_list} \$(srcdir)/config/$f"
-diff --git a/gcc/configure.ac b/gcc/configure.ac
-index 57819ff..643a47e 100644
---- a/gcc/configure.ac
-+++ b/gcc/configure.ac
-@@ -1832,8 +1832,8 @@ for f in $tm_file; do
-        tm_include_list="${tm_include_list} $f"
-        ;;
-     defaults.h )
--       tm_file_list="${tm_file_list} \$(srcdir)/$f"
--       tm_include_list="${tm_include_list} $f"
-+       tm_file_list="${tm_file_list} ./$f"
-+       tm_include_list="${tm_include_list} ./$f"
-        ;;
-     * )
-        tm_file_list="${tm_file_list} \$(srcdir)/config/$f"
-diff --git a/gcc/mkconfig.sh b/gcc/mkconfig.sh
-index c32f087..6803ab9 100644
---- a/gcc/mkconfig.sh
-+++ b/gcc/mkconfig.sh
-@@ -77,7 +77,7 @@ if [ -n "$HEADERS" ]; then
-     if [ $# -ge 1 ]; then
- 	echo '#ifdef IN_GCC' >> ${output}T
- 	for file in "$@"; do
--	    if test x"$file" = x"defaults.h"; then
-+	    if test x"$file" = x"./defaults.h" -o x"$file" = x"defaults.h"; then
- 		postpone_defaults_h="yes"
- 	    else
- 		echo "# include \"$file\"" >> ${output}T
-@@ -109,7 +109,7 @@ esac
- 
- # If we postponed including defaults.h, add the #include now.
- if test x"$postpone_defaults_h" = x"yes"; then
--    echo "# include \"defaults.h\"" >> ${output}T
-+    echo "# include \"./defaults.h\"" >> ${output}T
- fi
- 
- # Add multiple inclusion protection guard, part two.
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0018-fortran-cross-compile-hack.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0018-fortran-cross-compile-hack.patch
deleted file mode 100644
index e8ba325..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0018-fortran-cross-compile-hack.patch
+++ /dev/null
@@ -1,46 +0,0 @@
-From d11c73c1295565ff3766ae04e2a410a89fc84dd5 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:20:01 +0400
-Subject: [PATCH 18/46] fortran cross-compile hack.
-
-* Fortran would have searched for arm-angstrom-gnueabi-gfortran but would have used
-used gfortan. For gcc_4.2.2.bb we want to use the gfortran compiler from our cross
-directory.
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Inappropriate [embedded specific]
----
- libgfortran/configure    | 2 +-
- libgfortran/configure.ac | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/libgfortran/configure b/libgfortran/configure
-index bb3107b..5d47e65 100755
---- a/libgfortran/configure
-+++ b/libgfortran/configure
-@@ -12747,7 +12747,7 @@ esac
- 
- # We need gfortran to compile parts of the library
- #AC_PROG_FC(gfortran)
--FC="$GFORTRAN"
-+#FC="$GFORTRAN"
- ac_ext=${ac_fc_srcext-f}
- ac_compile='$FC -c $FCFLAGS $ac_fcflags_srcext conftest.$ac_ext >&5'
- ac_link='$FC -o conftest$ac_exeext $FCFLAGS $LDFLAGS $ac_fcflags_srcext conftest.$ac_ext $LIBS >&5'
-diff --git a/libgfortran/configure.ac b/libgfortran/configure.ac
-index adafb3f..b4ade28 100644
---- a/libgfortran/configure.ac
-+++ b/libgfortran/configure.ac
-@@ -240,7 +240,7 @@ AC_SUBST(enable_static)
- 
- # We need gfortran to compile parts of the library
- #AC_PROG_FC(gfortran)
--FC="$GFORTRAN"
-+#FC="$GFORTRAN"
- AC_PROG_FC(gfortran)
- 
- # extra LD Flags which are required for targets
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0019-libgcc-sjlj-check.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0019-libgcc-sjlj-check.patch
deleted file mode 100644
index 01a4d1f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0019-libgcc-sjlj-check.patch
+++ /dev/null
@@ -1,74 +0,0 @@
-From 7b40212ed6c0c9fe4efe51f31bccd3d9f892f238 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:20:50 +0400
-Subject: [PATCH 19/46] libgcc-sjlj-check
-
-ac_fn_c_try_compile doesnt seem to keep the intermediate files
-which are needed for sjlj test to pass since it greps into the
-generated file. So we run the compiler command using AC_TRY_COMMAND
-which then generates the needed .s file
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- libgcc/configure    | 10 ++++++----
- libgcc/configure.ac | 10 ++++------
- 2 files changed, 10 insertions(+), 10 deletions(-)
-
-diff --git a/libgcc/configure b/libgcc/configure
-index 203d384..6aef3e7 100644
---- a/libgcc/configure
-+++ b/libgcc/configure
-@@ -4570,17 +4570,19 @@ void foo ()
- }
- 
- _ACEOF
--CFLAGS_hold=$CFLAGS
--CFLAGS="--save-temps -fexceptions"
- libgcc_cv_lib_sjlj_exceptions=unknown
--if ac_fn_c_try_compile; then :
-+if { ac_try='${CC-cc} -fexceptions -S conftest.c -o conftest.s 1>&5'
-+  { { eval echo "\"\$as_me\":${as_lineno-$LINENO}: \"$ac_try\""; } >&5
-+  (eval $ac_try) 2>&5
-+  ac_status=$?
-+  $as_echo "$as_me:${as_lineno-$LINENO}: \$? = $ac_status" >&5
-+  test $ac_status = 0; }; }; then
-   if grep _Unwind_SjLj_Resume conftest.s >/dev/null 2>&1; then
-     libgcc_cv_lib_sjlj_exceptions=yes
-   elif grep _Unwind_Resume conftest.s >/dev/null 2>&1; then
-     libgcc_cv_lib_sjlj_exceptions=no
-   fi
- fi
--CFLAGS=$CFLAGS_hold
- rm -f conftest*
- 
- fi
-diff --git a/libgcc/configure.ac b/libgcc/configure.ac
-index a10a952..cc324f3 100644
---- a/libgcc/configure.ac
-+++ b/libgcc/configure.ac
-@@ -255,16 +255,14 @@ void foo ()
-   bar();
- }
- ])])
--CFLAGS_hold=$CFLAGS
--CFLAGS="--save-temps -fexceptions"
- libgcc_cv_lib_sjlj_exceptions=unknown
--AS_IF([ac_fn_c_try_compile],
--  [if grep _Unwind_SjLj_Resume conftest.s >/dev/null 2>&1; then
-+if AC_TRY_COMMAND(${CC-cc} -fexceptions -S conftest.c -o conftest.s 1>&AS_MESSAGE_LOG_FD); then
-+  if grep _Unwind_SjLj_Resume conftest.s >/dev/null 2>&1; then
-     libgcc_cv_lib_sjlj_exceptions=yes
-   elif grep _Unwind_Resume conftest.s >/dev/null 2>&1; then
-     libgcc_cv_lib_sjlj_exceptions=no
--  fi])
--CFLAGS=$CFLAGS_hold
-+  fi
-+fi
- rm -f conftest*
- ])
- 
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0020-cpp-honor-sysroot.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0020-cpp-honor-sysroot.patch
deleted file mode 100644
index 13f66d4..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0020-cpp-honor-sysroot.patch
+++ /dev/null
@@ -1,54 +0,0 @@
-From 5f9759a85fff3d78e3f71ae01c970b10d326c406 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:22:00 +0400
-Subject: [PATCH 20/46] cpp: honor sysroot.
-
-Currently, if the gcc toolchain is relocated and installed from sstate, then you try and compile
-preprocessed source (.i or .ii files), the compiler will try and access the builtin sysroot location
-rather than the --sysroot option specified on the commandline. If access to that directory is
-permission denied (unreadable), gcc will error.
-
-This happens when ccache is in use due to the fact it uses preprocessed source files.
-
-The fix below adds %I to the cpp-output spec macro so the default substitutions for -iprefix,
--isystem, -isysroot happen and the correct sysroot is used.
-
-[YOCTO #2074]
-
-RP 2012/04/13
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- gcc/cp/lang-specs.h | 2 +-
- gcc/gcc.c           | 2 +-
- 2 files changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/gcc/cp/lang-specs.h b/gcc/cp/lang-specs.h
-index b0728f0..e69ee7d 100644
---- a/gcc/cp/lang-specs.h
-+++ b/gcc/cp/lang-specs.h
-@@ -63,5 +63,5 @@ along with GCC; see the file COPYING3.  If not see
-   {".ii", "@c++-cpp-output", 0, 0, 0},
-   {"@c++-cpp-output",
-    "%{!M:%{!MM:%{!E:\
--    cc1plus -fpreprocessed %i %(cc1_options) %2\
-+    cc1plus -fpreprocessed %i %I %(cc1_options) %2\
-     %{!fsyntax-only:%(invoke_as)}}}}", 0, 0, 0},
-diff --git a/gcc/gcc.c b/gcc/gcc.c
-index 87b47c5..e6efae7 100644
---- a/gcc/gcc.c
-+++ b/gcc/gcc.c
-@@ -1142,7 +1142,7 @@ static const struct compiler default_compilers[] =
-                     %W{o*:--output-pch=%*}}%V}}}}}}", 0, 0, 0},
-   {".i", "@cpp-output", 0, 0, 0},
-   {"@cpp-output",
--   "%{!M:%{!MM:%{!E:cc1 -fpreprocessed %i %(cc1_options) %{!fsyntax-only:%(invoke_as)}}}}", 0, 0, 0},
-+   "%{!M:%{!MM:%{!E:cc1 -fpreprocessed %i %I %(cc1_options) %{!fsyntax-only:%(invoke_as)}}}}", 0, 0, 0},
-   {".s", "@assembler", 0, 0, 0},
-   {"@assembler",
-    "%{!M:%{!MM:%{!E:%{!S:as %(asm_debug) %(asm_options) %i %A }}}}", 0, 0, 0},
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0021-MIPS64-Default-to-N64-ABI.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0021-MIPS64-Default-to-N64-ABI.patch
deleted file mode 100644
index c7cffe4..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0021-MIPS64-Default-to-N64-ABI.patch
+++ /dev/null
@@ -1,57 +0,0 @@
-From 34d22ab6cfd2dfe6dd171c8d0b31cafcdeb86298 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:23:08 +0400
-Subject: [PATCH 21/46] MIPS64: Default to N64 ABI
-
-MIPS64 defaults to n32 ABI, this patch makes it
-so that it defaults to N64 ABI
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Inappropriate [OE config specific]
----
- gcc/config.gcc | 10 +++++-----
- 1 file changed, 5 insertions(+), 5 deletions(-)
-
-diff --git a/gcc/config.gcc b/gcc/config.gcc
-index c835734..dd0739d 100644
---- a/gcc/config.gcc
-+++ b/gcc/config.gcc
-@@ -2017,29 +2017,29 @@ mips*-*-linux*)				# Linux MIPS, either endian.
- 			default_mips_arch=mips32
- 			;;
- 		mips64el-st-linux-gnu)
--			default_mips_abi=n32
-+			default_mips_abi=64
- 			tm_file="${tm_file} mips/st.h"
- 			tmake_file="${tmake_file} mips/t-st"
- 			enable_mips_multilibs="yes"
- 			;;
- 		mips64octeon*-*-linux*)
--			default_mips_abi=n32
-+			default_mips_abi=64
- 			tm_defines="${tm_defines} MIPS_CPU_STRING_DEFAULT=\\\"octeon\\\""
- 			target_cpu_default=MASK_SOFT_FLOAT_ABI
- 			enable_mips_multilibs="yes"
- 			;;
- 		mipsisa64r6*-*-linux*)
--			default_mips_abi=n32
-+			default_mips_abi=64
- 			default_mips_arch=mips64r6
- 			enable_mips_multilibs="yes"
- 			;;
- 		mipsisa64r2*-*-linux*)
--			default_mips_abi=n32
-+			default_mips_abi=64
- 			default_mips_arch=mips64r2
- 			enable_mips_multilibs="yes"
- 			;;
- 		mips64*-*-linux* | mipsisa64*-*-linux*)
--			default_mips_abi=n32
-+			default_mips_abi=64
- 			enable_mips_multilibs="yes"
- 			;;
- 	esac
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0022-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0022-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch
deleted file mode 100644
index 6208351..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0022-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch
+++ /dev/null
@@ -1,216 +0,0 @@
-From f6b41b62ea62a0f1447d9fc235f7fd2bbf3fe7c2 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:24:50 +0400
-Subject: [PATCH 22/46] Define GLIBC_DYNAMIC_LINKER and UCLIBC_DYNAMIC_LINKER
- relative to SYSTEMLIBS_DIR
-
-This patch defines GLIBC_DYNAMIC_LINKER and UCLIBC_DYNAMIC_LINKER
-relative to SYSTEMLIBS_DIR which can be set in generated headers
-This breaks the assumption of hardcoded multilib in gcc
-Change is only for the supported architectures in OE including
-SH, sparc, alpha for possible future support (if any)
-
-Removes the do_headerfix task in metadata
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Inappropriate [OE configuration]
----
- gcc/config/alpha/linux-elf.h |  4 ++--
- gcc/config/arm/linux-eabi.h  |  4 ++--
- gcc/config/arm/linux-elf.h   |  2 +-
- gcc/config/i386/linux.h      |  2 +-
- gcc/config/i386/linux64.h    |  6 +++---
- gcc/config/linux.h           |  8 ++++----
- gcc/config/mips/linux.h      | 12 ++++++------
- gcc/config/rs6000/linux64.h  | 10 +++++-----
- gcc/config/sh/linux.h        |  2 +-
- gcc/config/sparc/linux.h     |  2 +-
- gcc/config/sparc/linux64.h   |  4 ++--
- 11 files changed, 28 insertions(+), 28 deletions(-)
-
-diff --git a/gcc/config/alpha/linux-elf.h b/gcc/config/alpha/linux-elf.h
-index 2c70a2f..dd048db 100644
---- a/gcc/config/alpha/linux-elf.h
-+++ b/gcc/config/alpha/linux-elf.h
-@@ -23,8 +23,8 @@ along with GCC; see the file COPYING3.  If not see
- #define EXTRA_SPECS \
- { "elf_dynamic_linker", ELF_DYNAMIC_LINKER },
- 
--#define GLIBC_DYNAMIC_LINKER	"/lib/ld-linux.so.2"
--#define UCLIBC_DYNAMIC_LINKER "/lib/ld-uClibc.so.0"
-+#define GLIBC_DYNAMIC_LINKER	SYSTEMLIBS_DIR "ld-linux.so.2"
-+#define UCLIBC_DYNAMIC_LINKER  SYSTEMLIBS_DIR "ld-uClibc.so.0"
- #if DEFAULT_LIBC == LIBC_UCLIBC
- #define CHOOSE_DYNAMIC_LINKER(G, U) "%{mglibc:" G ";:" U "}"
- #elif DEFAULT_LIBC == LIBC_GLIBC
-diff --git a/gcc/config/arm/linux-eabi.h b/gcc/config/arm/linux-eabi.h
-index e9d65dc..cfdf3f0 100644
---- a/gcc/config/arm/linux-eabi.h
-+++ b/gcc/config/arm/linux-eabi.h
-@@ -68,8 +68,8 @@
-    GLIBC_DYNAMIC_LINKER_DEFAULT and TARGET_DEFAULT_FLOAT_ABI.  */
- 
- #undef  GLIBC_DYNAMIC_LINKER
--#define GLIBC_DYNAMIC_LINKER_SOFT_FLOAT "/lib/ld-linux.so.3"
--#define GLIBC_DYNAMIC_LINKER_HARD_FLOAT "/lib/ld-linux-armhf.so.3"
-+#define GLIBC_DYNAMIC_LINKER_SOFT_FLOAT SYSTEMLIBS_DIR "ld-linux.so.3"
-+#define GLIBC_DYNAMIC_LINKER_HARD_FLOAT SYSTEMLIBS_DIR "ld-linux-armhf.so.3"
- #define GLIBC_DYNAMIC_LINKER_DEFAULT GLIBC_DYNAMIC_LINKER_SOFT_FLOAT
- 
- #define GLIBC_DYNAMIC_LINKER \
-diff --git a/gcc/config/arm/linux-elf.h b/gcc/config/arm/linux-elf.h
-index 49ad954..a5ab559 100644
---- a/gcc/config/arm/linux-elf.h
-+++ b/gcc/config/arm/linux-elf.h
-@@ -62,7 +62,7 @@
- 
- #define LIBGCC_SPEC "%{mfloat-abi=soft*:-lfloat} -lgcc"
- 
--#define GLIBC_DYNAMIC_LINKER "/lib/ld-linux.so.2"
-+#define GLIBC_DYNAMIC_LINKER SYSTEMLIBS_DIR "ld-linux.so.2"
- 
- #define LINUX_TARGET_LINK_SPEC  "%{h*} \
-    %{static:-Bstatic} \
-diff --git a/gcc/config/i386/linux.h b/gcc/config/i386/linux.h
-index a100963..21ba2b2 100644
---- a/gcc/config/i386/linux.h
-+++ b/gcc/config/i386/linux.h
-@@ -20,4 +20,4 @@ along with GCC; see the file COPYING3.  If not see
- <http://www.gnu.org/licenses/>.  */
- 
- #define GNU_USER_LINK_EMULATION "elf_i386"
--#define GLIBC_DYNAMIC_LINKER "/lib/ld-linux.so.2"
-+#define GLIBC_DYNAMIC_LINKER SYSTEMLIBS_DIR "ld-linux.so.2"
-diff --git a/gcc/config/i386/linux64.h b/gcc/config/i386/linux64.h
-index a27d3be..6185cce 100644
---- a/gcc/config/i386/linux64.h
-+++ b/gcc/config/i386/linux64.h
-@@ -27,6 +27,6 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
- #define GNU_USER_LINK_EMULATION64 "elf_x86_64"
- #define GNU_USER_LINK_EMULATIONX32 "elf32_x86_64"
- 
--#define GLIBC_DYNAMIC_LINKER32 "/lib/ld-linux.so.2"
--#define GLIBC_DYNAMIC_LINKER64 "/lib64/ld-linux-x86-64.so.2"
--#define GLIBC_DYNAMIC_LINKERX32 "/libx32/ld-linux-x32.so.2"
-+#define GLIBC_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-linux.so.2"
-+#define GLIBC_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld-linux-x86-64.so.2"
-+#define GLIBC_DYNAMIC_LINKERX32 SYSTEMLIBS_DIR "ld-linux-x32.so.2"
-diff --git a/gcc/config/linux.h b/gcc/config/linux.h
-index 857389a..22b9be5 100644
---- a/gcc/config/linux.h
-+++ b/gcc/config/linux.h
-@@ -73,10 +73,10 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
-    GLIBC_DYNAMIC_LINKER must be defined for each target using them, or
-    GLIBC_DYNAMIC_LINKER32 and GLIBC_DYNAMIC_LINKER64 for targets
-    supporting both 32-bit and 64-bit compilation.  */
--#define UCLIBC_DYNAMIC_LINKER "/lib/ld-uClibc.so.0"
--#define UCLIBC_DYNAMIC_LINKER32 "/lib/ld-uClibc.so.0"
--#define UCLIBC_DYNAMIC_LINKER64 "/lib/ld64-uClibc.so.0"
--#define UCLIBC_DYNAMIC_LINKERX32 "/lib/ldx32-uClibc.so.0"
-+#define UCLIBC_DYNAMIC_LINKER SYSTEMLIBS_DIR "ld-uClibc.so.0"
-+#define UCLIBC_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-uClibc.so.0"
-+#define UCLIBC_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld64-uClibc.so.0"
-+#define UCLIBC_DYNAMIC_LINKERX32 SYSTEMLIBS_DIR "ldx32-uClibc.so.0"
- #define BIONIC_DYNAMIC_LINKER "/system/bin/linker"
- #define BIONIC_DYNAMIC_LINKER32 "/system/bin/linker"
- #define BIONIC_DYNAMIC_LINKER64 "/system/bin/linker64"
-diff --git a/gcc/config/mips/linux.h b/gcc/config/mips/linux.h
-index 91df261..c306afb 100644
---- a/gcc/config/mips/linux.h
-+++ b/gcc/config/mips/linux.h
-@@ -22,20 +22,20 @@ along with GCC; see the file COPYING3.  If not see
- #define GNU_USER_LINK_EMULATIONN32 "elf32%{EB:b}%{EL:l}tsmipn32"
- 
- #define GLIBC_DYNAMIC_LINKER32 \
--  "%{mnan=2008:/lib/ld-linux-mipsn8.so.1;:/lib/ld.so.1}"
-+  "%{mnan=2008:" SYSTEMLIBS_DIR "ld-linux-mipsn8.so.1;:" SYSTEMLIBS_DIR "ld.so.1}"
- #define GLIBC_DYNAMIC_LINKER64 \
--  "%{mnan=2008:/lib64/ld-linux-mipsn8.so.1;:/lib64/ld.so.1}"
-+  "%{mnan=2008:" SYSTEMLIBS_DIR "ld-linux-mipsn8.so.1;:" SYSTEMLIBS_DIR "ld.so.1}"
- #define GLIBC_DYNAMIC_LINKERN32 \
--  "%{mnan=2008:/lib32/ld-linux-mipsn8.so.1;:/lib32/ld.so.1}"
-+  "%{mnan=2008:" SYSTEMLIBS_DIR "ld-linux-mipsn8.so.1;:" SYSTEMLIBS_DIR "ld.so.1}"
- 
- #undef UCLIBC_DYNAMIC_LINKER32
- #define UCLIBC_DYNAMIC_LINKER32 \
--  "%{mnan=2008:/lib/ld-uClibc-mipsn8.so.0;:/lib/ld-uClibc.so.0}"
-+  "%{mnan=2008:" SYSTEMLIBS_DIR "ld-uClibc-mipsn8.so.0;:" SYSTEMLIBS_DIR "ld-uClibc.so.0}"
- #undef UCLIBC_DYNAMIC_LINKER64
- #define UCLIBC_DYNAMIC_LINKER64 \
--  "%{mnan=2008:/lib/ld64-uClibc-mipsn8.so.0;:/lib/ld64-uClibc.so.0}"
-+  "%{mnan=2008:" SYSTEMLIBS_DIR "ld64-uClibc-mipsn8.so.0;:" SYSTEMLIBS_DIR "ld64-uClibc.so.0}"
- #define UCLIBC_DYNAMIC_LINKERN32 \
--  "%{mnan=2008:/lib32/ld-uClibc-mipsn8.so.0;:/lib32/ld-uClibc.so.0}"
-+  "%{mnan=2008:" SYSTEMLIBS_DIR "ld-uClibc-mipsn8.so.0;:" SYSTEMLIBS_DIR "ld-uClibc.so.0}"
- 
- #define BIONIC_DYNAMIC_LINKERN32 "/system/bin/linker32"
- #define GNU_USER_DYNAMIC_LINKERN32 \
-diff --git a/gcc/config/rs6000/linux64.h b/gcc/config/rs6000/linux64.h
-index 0879e7e..31c4338 100644
---- a/gcc/config/rs6000/linux64.h
-+++ b/gcc/config/rs6000/linux64.h
-@@ -357,14 +357,14 @@ extern int dot_symbols;
- #undef	LINK_OS_DEFAULT_SPEC
- #define LINK_OS_DEFAULT_SPEC "%(link_os_linux)"
- 
--#define GLIBC_DYNAMIC_LINKER32 "/lib/ld.so.1"
-+#define GLIBC_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld.so.1"
- #ifdef LINUX64_DEFAULT_ABI_ELFv2
--#define GLIBC_DYNAMIC_LINKER64 "%{mabi=elfv1:/lib64/ld64.so.1;:/lib64/ld64.so.2}"
-+#define GLIBC_DYNAMIC_LINKER64 "%{mabi=elfv1:" SYSTEMLIBS_DIR "ld64.so.1;:" SYSTEMLIBS_DIR "ld64.so.2}"
- #else
--#define GLIBC_DYNAMIC_LINKER64 "%{mabi=elfv2:/lib64/ld64.so.2;:/lib64/ld64.so.1}"
-+#define GLIBC_DYNAMIC_LINKER64 "%{mabi=elfv2:" SYSTEMLIBS_DIR "ld64.so.2;:" SYSTEMLIBS_DIR "ld64.so.1}"
- #endif
--#define UCLIBC_DYNAMIC_LINKER32 "/lib/ld-uClibc.so.0"
--#define UCLIBC_DYNAMIC_LINKER64 "/lib/ld64-uClibc.so.0"
-+#define UCLIBC_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-uClibc.so.0"
-+#define UCLIBC_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld64-uClibc.so.0"
- #if DEFAULT_LIBC == LIBC_UCLIBC
- #define CHOOSE_DYNAMIC_LINKER(G, U) "%{mglibc:" G ";:" U "}"
- #elif DEFAULT_LIBC == LIBC_GLIBC
-diff --git a/gcc/config/sh/linux.h b/gcc/config/sh/linux.h
-index 0f5d614..168416c 100644
---- a/gcc/config/sh/linux.h
-+++ b/gcc/config/sh/linux.h
-@@ -43,7 +43,7 @@ along with GCC; see the file COPYING3.  If not see
- 
- #define TARGET_ASM_FILE_END file_end_indicate_exec_stack
- 
--#define GLIBC_DYNAMIC_LINKER "/lib/ld-linux.so.2"
-+#define GLIBC_DYNAMIC_LINKER SYSTEMLIBS_DIR "ld-linux.so.2"
- 
- #undef SUBTARGET_LINK_EMUL_SUFFIX
- #define SUBTARGET_LINK_EMUL_SUFFIX "_linux"
-diff --git a/gcc/config/sparc/linux.h b/gcc/config/sparc/linux.h
-index 56def4b..00b564c 100644
---- a/gcc/config/sparc/linux.h
-+++ b/gcc/config/sparc/linux.h
-@@ -83,7 +83,7 @@ extern const char *host_detect_local_cpu (int argc, const char **argv);
-    When the -shared link option is used a final link is not being
-    done.  */
- 
--#define GLIBC_DYNAMIC_LINKER "/lib/ld-linux.so.2"
-+#define GLIBC_DYNAMIC_LINKER SYSTEMLIBS_DIR "ld-linux.so.2"
- 
- #undef  LINK_SPEC
- #define LINK_SPEC "-m elf32_sparc %{shared:-shared} \
-diff --git a/gcc/config/sparc/linux64.h b/gcc/config/sparc/linux64.h
-index fa805fd..0eb4acd 100644
---- a/gcc/config/sparc/linux64.h
-+++ b/gcc/config/sparc/linux64.h
-@@ -84,8 +84,8 @@ along with GCC; see the file COPYING3.  If not see
-    When the -shared link option is used a final link is not being
-    done.  */
- 
--#define GLIBC_DYNAMIC_LINKER32 "/lib/ld-linux.so.2"
--#define GLIBC_DYNAMIC_LINKER64 "/lib64/ld-linux.so.2"
-+#define GLIBC_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-linux.so.2"
-+#define GLIBC_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld-linux.so.2"
- 
- #ifdef SPARC_BI_ARCH
- 
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0023-gcc-Fix-argument-list-too-long-error.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0023-gcc-Fix-argument-list-too-long-error.patch
deleted file mode 100644
index be91c9e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0023-gcc-Fix-argument-list-too-long-error.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-From 9701f596bbe5bf51bbf48661bc515b45d4b6f4d2 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:26:37 +0400
-Subject: [PATCH 23/46] gcc: Fix argument list too long error.
-
-An "Argument list too long" error occurs when the build directory
-path is longer than about 200 characters; it is caused by:
-
-headers=`echo $(PLUGIN_HEADERS) | tr ' ' '\012' | sort -u`
-
-PLUGIN_HEADERS is too long before sorting, so "echo" can't handle it.
-Using GNU make's $(sort list) function, which can handle the long list,
-fixes the problem; the list is short enough after sorting. The
-"tr ' ' '\012'" was only there to translate spaces to newlines, which
-$(sort list) doesn't need.
-
-Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- gcc/Makefile.in | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/gcc/Makefile.in b/gcc/Makefile.in
-index e1e63e8..7b49f88 100644
---- a/gcc/Makefile.in
-+++ b/gcc/Makefile.in
-@@ -3262,7 +3262,7 @@ install-plugin: installdirs lang.install-plugin s-header-vars install-gengtype
- # We keep the directory structure for files in config or c-family and .def
- # files. All other files are flattened to a single directory.
- 	$(mkinstalldirs) $(DESTDIR)$(plugin_includedir)
--	headers=`echo $(PLUGIN_HEADERS) $$(cd $(srcdir); echo *.h *.def) | tr ' ' '\012' | sort -u`; \
-+	headers="$(sort $(PLUGIN_HEADERS) $$(cd $(srcdir); echo *.h *.def))"; \
- 	srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'`; \
- 	for file in $$headers; do \
- 	  if [ -f $$file ] ; then \
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0024-Disable-sdt.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0024-Disable-sdt.patch
deleted file mode 100644
index b23ce97..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0024-Disable-sdt.patch
+++ /dev/null
@@ -1,113 +0,0 @@
-From 946e047614103e1f2982613c7aa5f76587500c0e Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:28:10 +0400
-Subject: [PATCH 24/46] Disable sdt.
-
-We don't list dtrace in DEPENDS so we shouldn't be depending on this header.
-It may or may not exist from previous builds though. To be deterministic, disable
-sdt.h usage always. This avoids build failures if the header is removed after configure
-but before libgcc is compiled for example.
-
-RP 2012/8/7
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Disable sdt for libstdc++-v3.
-
-Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
-
-Upstream-Status: Inappropriate [hack]
----
- gcc/configure             | 12 ++++++------
- gcc/configure.ac          | 18 +++++++++---------
- libstdc++-v3/configure    |  6 +++---
- libstdc++-v3/configure.ac |  2 +-
- 4 files changed, 19 insertions(+), 19 deletions(-)
-
-diff --git a/gcc/configure b/gcc/configure
-index 8ec3d40..eaa3b07 100755
---- a/gcc/configure
-+++ b/gcc/configure
-@@ -27857,12 +27857,12 @@ fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: checking sys/sdt.h in the target C library" >&5
- $as_echo_n "checking sys/sdt.h in the target C library... " >&6; }
- have_sys_sdt_h=no
--if test -f $target_header_dir/sys/sdt.h; then
--  have_sys_sdt_h=yes
--
--$as_echo "#define HAVE_SYS_SDT_H 1" >>confdefs.h
--
--fi
-+#if test -f $target_header_dir/sys/sdt.h; then
-+#  have_sys_sdt_h=yes
-+#
-+#$as_echo "#define HAVE_SYS_SDT_H 1" >>confdefs.h
-+#
-+#fi
- { $as_echo "$as_me:${as_lineno-$LINENO}: result: $have_sys_sdt_h" >&5
- $as_echo "$have_sys_sdt_h" >&6; }
- 
-diff --git a/gcc/configure.ac b/gcc/configure.ac
-index 643a47e..9f6cacb 100644
---- a/gcc/configure.ac
-+++ b/gcc/configure.ac
-@@ -5315,15 +5315,15 @@ if test x$gcc_cv_libc_provides_ssp = xyes; then
- fi
- 
- # Test for <sys/sdt.h> on the target.
--GCC_TARGET_TEMPLATE([HAVE_SYS_SDT_H])
--AC_MSG_CHECKING(sys/sdt.h in the target C library)
--have_sys_sdt_h=no
--if test -f $target_header_dir/sys/sdt.h; then
--  have_sys_sdt_h=yes
--  AC_DEFINE(HAVE_SYS_SDT_H, 1,
--            [Define if your target C library provides sys/sdt.h])
--fi
--AC_MSG_RESULT($have_sys_sdt_h)
-+#GCC_TARGET_TEMPLATE([HAVE_SYS_SDT_H])
-+#AC_MSG_CHECKING(sys/sdt.h in the target C library)
-+#have_sys_sdt_h=no
-+#if test -f $target_header_dir/sys/sdt.h; then
-+#  have_sys_sdt_h=yes
-+#  AC_DEFINE(HAVE_SYS_SDT_H, 1,
-+#            [Define if your target C library provides sys/sdt.h])
-+#fi
-+#AC_MSG_RESULT($have_sys_sdt_h)
- 
- # Check if TFmode long double should be used by default or not.
- # Some glibc targets used DFmode long double, but with glibc 2.4
-diff --git a/libstdc++-v3/configure b/libstdc++-v3/configure
-index 217012e..81240b9 100755
---- a/libstdc++-v3/configure
-+++ b/libstdc++-v3/configure
-@@ -20964,11 +20964,11 @@ ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
- ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
- ac_compiler_gnu=$ac_cv_c_compiler_gnu
- 
--  if test $glibcxx_cv_sys_sdt_h = yes; then
-+#  if test $glibcxx_cv_sys_sdt_h = yes; then
- 
--$as_echo "#define HAVE_SYS_SDT_H 1" >>confdefs.h
-+#$as_echo "#define HAVE_SYS_SDT_H 1" >>confdefs.h
- 
--  fi
-+#  fi
-   { $as_echo "$as_me:${as_lineno-$LINENO}: result: $glibcxx_cv_sys_sdt_h" >&5
- $as_echo "$glibcxx_cv_sys_sdt_h" >&6; }
- 
-diff --git a/libstdc++-v3/configure.ac b/libstdc++-v3/configure.ac
-index 580fb8b..5a41083 100644
---- a/libstdc++-v3/configure.ac
-+++ b/libstdc++-v3/configure.ac
-@@ -230,7 +230,7 @@ GLIBCXX_CHECK_SC_NPROCESSORS_ONLN
- GLIBCXX_CHECK_SC_NPROC_ONLN
- GLIBCXX_CHECK_PTHREADS_NUM_PROCESSORS_NP
- GLIBCXX_CHECK_SYSCTL_HW_NCPU
--GLIBCXX_CHECK_SDT_H
-+#GLIBCXX_CHECK_SDT_H
- 
- # Check for available headers.
- AC_CHECK_HEADERS([endian.h execinfo.h float.h fp.h ieeefp.h inttypes.h \
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0025-libtool.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0025-libtool.patch
deleted file mode 100644
index 8d5eb97..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0025-libtool.patch
+++ /dev/null
@@ -1,42 +0,0 @@
-From f68c5b78751660505b22b46dad99240db0df3456 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:29:11 +0400
-Subject: [PATCH 25/46] libtool
-
-libstdc++ from gcc-runtime gets created with -rpath=/usr/lib/../lib for qemux86-64
-when running on an x86_64 build host.
-
-This patch stops this spreading to libdir in the libstdc++.la file within libtool.
-Arguably, it shouldn't be passing this into libtool in the first place but
-for now this resolves the nastiest problems this causes.
-
-func_normal_abspath would resolve an empty path to `pwd` so we need
-to filter the zero case.
-
-RP 2012/8/24
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- ltmain.sh | 4 ++++
- 1 file changed, 4 insertions(+)
-
-diff --git a/ltmain.sh b/ltmain.sh
-index 9503ec8..0121fba 100644
---- a/ltmain.sh
-+++ b/ltmain.sh
-@@ -6359,6 +6359,10 @@ func_mode_link ()
- 	func_warning "ignoring multiple \`-rpath's for a libtool library"
- 
-       install_libdir="$1"
-+      if test -n "$install_libdir"; then
-+	func_normal_abspath "$install_libdir"
-+	install_libdir=$func_normal_abspath_result
-+      fi
- 
-       oldlibs=
-       if test -z "$rpath"; then
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0026-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0026-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch
deleted file mode 100644
index a22d95f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0026-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch
+++ /dev/null
@@ -1,43 +0,0 @@
-From 4c75e349aa7abd1e285720328b23690636bea242 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:30:32 +0400
-Subject: [PATCH 26/46] gcc: armv4: pass fix-v4bx to linker to support EABI.
-
-The LINK_SPEC for linux gets overwritten by linux-eabi.h which
-means the value of TARGET_FIX_V4BX_SPEC gets lost and as a result
-the option is not passed to the linker when choosing march=armv4.
-This patch redefines it in linux-eabi.h and reinserts it
-for eabi defaulting toolchains.
-
-We might want to send it upstream.
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- gcc/config/arm/linux-eabi.h | 6 +++++-
- 1 file changed, 5 insertions(+), 1 deletion(-)
-
-diff --git a/gcc/config/arm/linux-eabi.h b/gcc/config/arm/linux-eabi.h
-index cfdf3f0..048a062 100644
---- a/gcc/config/arm/linux-eabi.h
-+++ b/gcc/config/arm/linux-eabi.h
-@@ -77,10 +77,14 @@
-     %{mfloat-abi=soft*:" GLIBC_DYNAMIC_LINKER_SOFT_FLOAT "} \
-     %{!mfloat-abi=*:" GLIBC_DYNAMIC_LINKER_DEFAULT "}"
- 
-+/* For armv4 we pass --fix-v4bx to linker to support EABI */
-+#undef TARGET_FIX_V4BX_SPEC
-+#define TARGET_FIX_V4BX_SPEC "%{mcpu=arm8|mcpu=arm810|mcpu=strongarm*|march=armv4: --fix-v4bx}"
-+
- /* At this point, bpabi.h will have clobbered LINK_SPEC.  We want to
-    use the GNU/Linux version, not the generic BPABI version.  */
- #undef  LINK_SPEC
--#define LINK_SPEC EABI_LINK_SPEC					\
-+#define LINK_SPEC TARGET_FIX_V4BX_SPEC EABI_LINK_SPEC			\
-   LINUX_OR_ANDROID_LD (LINUX_TARGET_LINK_SPEC,				\
- 		       LINUX_TARGET_LINK_SPEC " " ANDROID_LINK_SPEC)
- 
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0027-Use-the-multilib-config-files-from-B-instead-of-usin.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0027-Use-the-multilib-config-files-from-B-instead-of-usin.patch
deleted file mode 100644
index 46815d1..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0027-Use-the-multilib-config-files-from-B-instead-of-usin.patch
+++ /dev/null
@@ -1,102 +0,0 @@
-From 8217dd7fb318f9abc6564f67b424c433d8398d61 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 09:33:04 +0400
-Subject: [PATCH 27/46] Use the multilib config files from ${B} instead of
- using the ones from ${S}
-
-Use the multilib config files from ${B} instead of using the ones from ${S}
-so that the source can be shared between gcc-cross-initial,
-gcc-cross-intermediate, gcc-cross, gcc-runtime, and also the sdk build.
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Signed-off-by: Constantin Musca <constantinx.musca@intel.com>
-
-Upstream-Status: Inappropriate [configuration]
----
- gcc/configure    | 22 ++++++++++++++++++----
- gcc/configure.ac | 22 ++++++++++++++++++----
- 2 files changed, 36 insertions(+), 8 deletions(-)
-
-diff --git a/gcc/configure b/gcc/configure
-index eaa3b07..9b60aad 100755
---- a/gcc/configure
-+++ b/gcc/configure
-@@ -11830,10 +11830,20 @@ done
- tmake_file_=
- for f in ${tmake_file}
- do
--	if test -f ${srcdir}/config/$f
--	then
--		tmake_file_="${tmake_file_} \$(srcdir)/config/$f"
--	fi
-+  case $f in
-+    */t-linux64 )
-+       if test -f ./config/$f
-+       then
-+         tmake_file_="${tmake_file_} ./config/$f"
-+       fi
-+       ;;
-+    * )
-+       if test -f ${srcdir}/config/$f
-+       then
-+         tmake_file_="${tmake_file_} \$(srcdir)/config/$f"
-+       fi
-+       ;;
-+  esac
- done
- tmake_file="${tmake_file_}"
- 
-@@ -11844,6 +11854,10 @@ tm_file_list="options.h"
- tm_include_list="options.h insn-constants.h"
- for f in $tm_file; do
-   case $f in
-+    */linux64.h )
-+       tm_file_list="${tm_file_list} ./config/$f"
-+       tm_include_list="${tm_include_list} ./config/$f"
-+       ;;
-     ./* )
-        f=`echo $f | sed 's/^..//'`
-        tm_file_list="${tm_file_list} $f"
-diff --git a/gcc/configure.ac b/gcc/configure.ac
-index 9f6cacb..e1e54f2 100644
---- a/gcc/configure.ac
-+++ b/gcc/configure.ac
-@@ -1812,10 +1812,20 @@ done
- tmake_file_=
- for f in ${tmake_file}
- do
--	if test -f ${srcdir}/config/$f
--	then
--		tmake_file_="${tmake_file_} \$(srcdir)/config/$f"
--	fi
-+  case $f in
-+    */t-linux64 )
-+       if test -f ./config/$f
-+       then
-+         tmake_file_="${tmake_file_} ./config/$f"
-+       fi
-+       ;;
-+    * )
-+       if test -f ${srcdir}/config/$f
-+       then
-+         tmake_file_="${tmake_file_} \$(srcdir)/config/$f"
-+       fi
-+       ;;
-+  esac
- done
- tmake_file="${tmake_file_}"
- 
-@@ -1826,6 +1836,10 @@ tm_file_list="options.h"
- tm_include_list="options.h insn-constants.h"
- for f in $tm_file; do
-   case $f in
-+    */linux64.h )
-+       tm_file_list="${tm_file_list} ./config/$f"
-+       tm_include_list="${tm_include_list} ./config/$f"
-+       ;;
-     ./* )
-        f=`echo $f | sed 's/^..//'`
-        tm_file_list="${tm_file_list} $f"
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0028-Avoid-using-libdir-from-.la-which-usually-points-to-.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0028-Avoid-using-libdir-from-.la-which-usually-points-to-.patch
deleted file mode 100644
index 60ddd9c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0028-Avoid-using-libdir-from-.la-which-usually-points-to-.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-From 08a0e45b308fc97d06f5c3ef77a257e7a6bd1a48 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 20 Feb 2015 09:39:38 +0000
-Subject: [PATCH 28/46] Avoid using libdir from .la which usually points to a
- host path
-
-Upstream-Status: Inappropriate [embedded specific]
-
-Signed-off-by: Jonathan Liu <net147@gmail.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- ltmain.sh | 3 +++
- 1 file changed, 3 insertions(+)
-
-diff --git a/ltmain.sh b/ltmain.sh
-index 0121fba..52bdbdb 100644
---- a/ltmain.sh
-+++ b/ltmain.sh
-@@ -5628,6 +5628,9 @@ func_mode_link ()
- 	    absdir="$abs_ladir"
- 	    libdir="$abs_ladir"
- 	  else
-+	    # Instead of using libdir from .la which usually points to a host path,
-+	    # use the path the .la is contained in.
-+	    libdir="$abs_ladir"
- 	    dir="$libdir"
- 	    absdir="$libdir"
- 	  fi
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0029-export-CPP.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0029-export-CPP.patch
deleted file mode 100644
index 62195aa..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0029-export-CPP.patch
+++ /dev/null
@@ -1,53 +0,0 @@
-From 95f494d1544857ad38949a7da37e2d7264475233 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 20 Feb 2015 09:40:59 +0000
-Subject: [PATCH 29/46] export CPP
-
-The OE environment sets and exports CPP as being the target gcc. When
-building gcc-cross-canadian for a mingw-targeted sdk, the following can be found
-in build.x86_64-pokysdk-mingw32.i586-poky-linux/build-x86_64-linux/libiberty/config.log:
-
-configure:3641: checking for _FILE_OFFSET_BITS value needed for large files
-configure:3666: gcc  -c -isystem/media/build1/poky/build/tmp/sysroots/x86_64-linux/usr/include -O2 -pipe  conftest.c >&5
-configure:3666: $? = 0
-configure:3698: result: no
-configure:3786: checking how to run the C preprocessor
-configure:3856: result: x86_64-pokysdk-mingw32-gcc -E --sysroot=/media/build1/poky/build/tmp/sysroots/x86_64-nativesdk-mingw32-pokysdk-mingw32
-configure:3876: x86_64-pokysdk-mingw32-gcc -E --sysroot=/media/build1/poky/build/tmp/sysroots/x86_64-nativesdk-mingw32-pokysdk-mingw32 conftest.c
-configure:3876: $? = 0
-
-Note this is a *build* target (in build-x86_64-linux) so it should be
-using the host "gcc", not x86_64-pokysdk-mingw32-gcc. Since the mingw32
-headers are very different, using the wrong cpp is a real problem. It is leaking
-into configure through the CPP variable. Ultimately this leads to build
-failures related to not being able to include a process.h file for pem-unix.c.
-
-The fix is to ensure we export a sane CPP value into the build
-environment when using build targets. We could define a CPP_FOR_BUILD value, which
-may be the approach that eventually needs to be upstreamed, but for now this fix
-is good enough to avoid the problem.
-
-RP 22/08/2013
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- Makefile.in | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/Makefile.in b/Makefile.in
-index 36b4008..a783e1e 100644
---- a/Makefile.in
-+++ b/Makefile.in
-@@ -149,6 +149,7 @@ BUILD_EXPORTS = \
- 	AR="$(AR_FOR_BUILD)"; export AR; \
- 	AS="$(AS_FOR_BUILD)"; export AS; \
- 	CC="$(CC_FOR_BUILD)"; export CC; \
-+	CPP="$(CC_FOR_BUILD) -E"; export CPP; \
- 	CFLAGS="$(CFLAGS_FOR_BUILD)"; export CFLAGS; \
- 	CONFIG_SHELL="$(SHELL)"; export CONFIG_SHELL; \
- 	CXX="$(CXX_FOR_BUILD)"; export CXX; \
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0030-Enable-SPE-AltiVec-generation-on-powepc-linux-target.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0030-Enable-SPE-AltiVec-generation-on-powepc-linux-target.patch
deleted file mode 100644
index 5705187..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0030-Enable-SPE-AltiVec-generation-on-powepc-linux-target.patch
+++ /dev/null
@@ -1,52 +0,0 @@
-From 4ed8c1fd9dc05f7a9db9298a55396c8f0ff549a7 Mon Sep 17 00:00:00 2001
-From: Alexandru-Cezar Sardan <alexandru.sardan@freescale.com>
-Date: Wed, 5 Feb 2014 16:52:31 +0200
-Subject: [PATCH 30/46] Enable SPE & AltiVec generation on powepc*linux target
-
-When GCC is configured with --target=powerpc-linux, the resulting compiler will
-not be able to generate code for SPE targets (e500v1/v2).
-GCC configured with --target=powerpc-linuxspe will not be able to
-generate AltiVec instructions (for e6500).
-This patch modifies the configuration files such that SPE or AltiVec code
-can be generated when gcc is configured with --target=powerpc-linux.
-The ABI and the specific instructions can be selected through the
-"-mabi=spe or -mabi=altivec" and the "-mspe or -maltivec" parameters.
-
-Upstream-Status: Inappropriate [configuration]
-
-Signed-off-by: Alexandru-Cezar Sardan <alexandru.sardan@freescale.com>
----
- gcc/config.gcc | 9 ++++++++-
- 1 file changed, 8 insertions(+), 1 deletion(-)
-
-Index: gcc-5.3.0/gcc/config.gcc
-===================================================================
---- gcc-5.3.0.orig/gcc/config.gcc
-+++ gcc-5.3.0/gcc/config.gcc
-@@ -2346,7 +2346,14 @@ powerpc-*-rtems*)
- 	tmake_file="${tmake_file} rs6000/t-fprules rs6000/t-rtems rs6000/t-ppccomm"
- 	;;
- powerpc*-*-linux*)
--	tm_file="${tm_file} dbxelf.h elfos.h gnu-user.h freebsd-spec.h rs6000/sysv4.h"
-+	case ${target} in
-+	    powerpc*-*-linux*spe* | powerpc*-*-linux*altivec*)
-+		tm_file="${tm_file} dbxelf.h elfos.h gnu-user.h freebsd-spec.h rs6000/sysv4.h"
-+		;;
-+	    *)
-+		tm_file="${tm_file} dbxelf.h elfos.h gnu-user.h freebsd-spec.h rs6000/sysv4.h rs6000/linuxaltivec.h rs6000/linuxspe.h rs6000/e500.h"
-+		;;
-+	esac
- 	extra_options="${extra_options} rs6000/sysv4.opt"
- 	tmake_file="${tmake_file} rs6000/t-fprules rs6000/t-ppccomm"
- 	extra_objs="$extra_objs rs6000-linux.o"
-Index: gcc-5.3.0/gcc/config/rs6000/linuxspe.h
-===================================================================
---- gcc-5.3.0.orig/gcc/config/rs6000/linuxspe.h
-+++ gcc-5.3.0/gcc/config/rs6000/linuxspe.h
-@@ -27,6 +27,3 @@
- #undef	TARGET_DEFAULT
- #define TARGET_DEFAULT MASK_STRICT_ALIGN
- #endif
--
--#undef  ASM_DEFAULT_SPEC
--#define	ASM_DEFAULT_SPEC "-mppc -mspe -me500"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0031-Disable-the-MULTILIB_OSDIRNAMES-and-other-multilib-o.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0031-Disable-the-MULTILIB_OSDIRNAMES-and-other-multilib-o.patch
deleted file mode 100644
index f2bc664..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0031-Disable-the-MULTILIB_OSDIRNAMES-and-other-multilib-o.patch
+++ /dev/null
@@ -1,42 +0,0 @@
-From bb6fea821483aa3259b34e101a39c993edd01411 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 20 Feb 2015 10:21:55 +0000
-Subject: [PATCH 31/46] Disable the MULTILIB_OSDIRNAMES and other multilib
- options.
-
-Hard coding the MULTILIB_OSDIRNAMES with ../lib64 is causing problems on
-systems where the libdir is NOT set to /lib64.  This is allowed by the
-ABI, as long as the dynamic loader is present in /lib.
-
-We simply want to use the default rules in gcc to find and configure the
-normal libdir.
-
-Upstream-Status: Inappropriate[OE-Specific]
-
-Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gcc/config/aarch64/t-aarch64-linux | 8 ++++----
- 1 file changed, 4 insertions(+), 4 deletions(-)
-
-diff --git a/gcc/config/aarch64/t-aarch64-linux b/gcc/config/aarch64/t-aarch64-linux
-index c296376..3bb59bf 100644
---- a/gcc/config/aarch64/t-aarch64-linux
-+++ b/gcc/config/aarch64/t-aarch64-linux
-@@ -21,8 +21,8 @@
- LIB1ASMSRC   = aarch64/lib1funcs.asm
- LIB1ASMFUNCS = _aarch64_sync_cache_range
- 
--AARCH_BE = $(if $(findstring TARGET_BIG_ENDIAN_DEFAULT=1, $(tm_defines)),_be)
--MULTILIB_OSDIRNAMES = mabi.lp64=../lib64$(call if_multiarch,:aarch64$(AARCH_BE)-linux-gnu)
--MULTIARCH_DIRNAME = $(call if_multiarch,aarch64$(AARCH_BE)-linux-gnu)
-+#AARCH_BE = $(if $(findstring TARGET_BIG_ENDIAN_DEFAULT=1, $(tm_defines)),_be)
-+#MULTILIB_OSDIRNAMES = mabi.lp64=../lib64$(call if_multiarch,:aarch64$(AARCH_BE)-linux-gnu)
-+#MULTIARCH_DIRNAME = $(call if_multiarch,aarch64$(AARCH_BE)-linux-gnu)
- 
--MULTILIB_OSDIRNAMES += mabi.ilp32=../libilp32
-+#MULTILIB_OSDIRNAMES += mabi.ilp32=../libilp32
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0032-Ensure-target-gcc-headers-can-be-included.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0032-Ensure-target-gcc-headers-can-be-included.patch
deleted file mode 100644
index 89503ff..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0032-Ensure-target-gcc-headers-can-be-included.patch
+++ /dev/null
@@ -1,98 +0,0 @@
-From bb26f90f5e0accc3539f62b6fec663682959dd36 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 20 Feb 2015 10:25:11 +0000
-Subject: [PATCH 32/46] Ensure target gcc headers can be included
-
-There are a few headers installed as part of the OpenEmbedded
-gcc-runtime target (omp.h, ssp/*.h). Being installed from a recipe
-built for the target architecture, these are within the target
-sysroot and not cross/nativesdk; thus they weren't able to be
-found by gcc with the existing search paths. Add support for
-picking up these headers under the sysroot supplied on the gcc
-command line in order to resolve this.
-
-Upstream-Status: Pending
-
-Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gcc/Makefile.in  | 2 ++
- gcc/cppdefault.c | 4 ++++
- gcc/defaults.h   | 9 +++++++++
- gcc/gcc.c        | 7 -------
- 4 files changed, 15 insertions(+), 7 deletions(-)
-
-diff --git a/gcc/Makefile.in b/gcc/Makefile.in
-index 7b49f88..cd5bc4a 100644
---- a/gcc/Makefile.in
-+++ b/gcc/Makefile.in
-@@ -582,6 +582,7 @@ libexecdir = @libexecdir@
- 
- # Directory in which the compiler finds libraries etc.
- libsubdir = $(libdir)/gcc/$(real_target_noncanonical)/$(version)$(accel_dir_suffix)
-+libsubdir_target = gcc/$(target_noncanonical)/$(version)
- # Directory in which the compiler finds executables
- libexecsubdir = $(libexecdir)/gcc/$(real_target_noncanonical)/$(version)$(accel_dir_suffix)
- # Directory in which all plugin resources are installed
-@@ -2610,6 +2611,7 @@ CFLAGS-intl.o += -DLOCALEDIR=\"$(localedir)\"
- 
- PREPROCESSOR_DEFINES = \
-   -DGCC_INCLUDE_DIR=\"$(libsubdir)/include\" \
-+  -DGCC_INCLUDE_SUBDIR_TARGET=\"$(libsubdir_target)/include\" \
-   -DFIXED_INCLUDE_DIR=\"$(libsubdir)/include-fixed\" \
-   -DGPLUSPLUS_INCLUDE_DIR=\"$(gcc_gxx_include_dir)\" \
-   -DGPLUSPLUS_INCLUDE_DIR_ADD_SYSROOT=$(gcc_gxx_include_dir_add_sysroot) \
-diff --git a/gcc/cppdefault.c b/gcc/cppdefault.c
-index 580c1ba..03a0287 100644
---- a/gcc/cppdefault.c
-+++ b/gcc/cppdefault.c
-@@ -59,6 +59,10 @@ const struct default_include cpp_include_defaults[]
-     /* This is the dir for gcc's private headers.  */
-     { GCC_INCLUDE_DIR, "GCC", 0, 0, 0, 0 },
- #endif
-+#ifdef GCC_INCLUDE_SUBDIR_TARGET
-+    /* This is the dir for gcc's private headers under the specified sysroot.  */
-+    { STANDARD_STARTFILE_PREFIX_2 GCC_INCLUDE_SUBDIR_TARGET, "GCC", 0, 0, 1, 0 },
-+#endif
- #ifdef LOCAL_INCLUDE_DIR
-     /* /usr/local/include comes before the fixincluded header files.  */
-     { LOCAL_INCLUDE_DIR, 0, 0, 1, 1, 2 },
-diff --git a/gcc/defaults.h b/gcc/defaults.h
-index 1d54798..983d45e 100644
---- a/gcc/defaults.h
-+++ b/gcc/defaults.h
-@@ -1361,4 +1361,13 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
- 
- #endif /* GCC_INSN_FLAGS_H  */
- 
-+/* Default prefixes to attach to command names.  */
-+
-+#ifndef STANDARD_STARTFILE_PREFIX_1
-+#define STANDARD_STARTFILE_PREFIX_1 "/lib/"
-+#endif
-+#ifndef STANDARD_STARTFILE_PREFIX_2
-+#define STANDARD_STARTFILE_PREFIX_2 "/usr/lib/"
-+#endif
-+
- #endif  /* ! GCC_DEFAULTS_H */
-diff --git a/gcc/gcc.c b/gcc/gcc.c
-index e6efae7..3ca27b9 100644
---- a/gcc/gcc.c
-+++ b/gcc/gcc.c
-@@ -1263,13 +1263,6 @@ static const char *gcc_libexec_prefix;
- 
- /* Default prefixes to attach to command names.  */
- 
--#ifndef STANDARD_STARTFILE_PREFIX_1
--#define STANDARD_STARTFILE_PREFIX_1 "/lib/"
--#endif
--#ifndef STANDARD_STARTFILE_PREFIX_2
--#define STANDARD_STARTFILE_PREFIX_2 "/usr/lib/"
--#endif
--
- #ifdef CROSS_DIRECTORY_STRUCTURE  /* Don't use these prefixes for a cross compiler.  */
- #undef MD_EXEC_PREFIX
- #undef MD_STARTFILE_PREFIX
--- 
-2.6.3
-
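The patch removed above adds a GCC_INCLUDE_SUBDIR_TARGET entry so gcc's private target headers can be found under the sysroot. The sketch below is a simplified illustration of that idea, not the real cppdefault table; the struct layout is reduced to two fields and the target triple and version strings are made up.

    /* Simplified sketch of a sysroot-relative include entry.  The real table
     * lives in gcc/cppdefault.c; only the essentials are kept here. */
    #include <stdio.h>

    struct include_dir {
        const char *path;   /* directory to search */
        int add_sysroot;    /* prepend the --sysroot prefix when searching */
    };

    #define STANDARD_STARTFILE_PREFIX_2 "/usr/lib/"
    /* Hypothetical triple/version, for illustration only: */
    #define GCC_INCLUDE_SUBDIR_TARGET "gcc/arm-poky-linux-gnueabi/5.4.0/include"

    static const struct include_dir defaults[] = {
        /* Adjacent string literals concatenate into one sysroot-relative path. */
        { STANDARD_STARTFILE_PREFIX_2 GCC_INCLUDE_SUBDIR_TARGET, 1 },
    };

    int main(void)
    {
        printf("%s (add_sysroot=%d)\n", defaults[0].path, defaults[0].add_sysroot);
        return 0;
    }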
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0033-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0033-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch
deleted file mode 100644
index 19d480f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0033-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch
+++ /dev/null
@@ -1,54 +0,0 @@
-From 629fcc7d0075c9b4261da6435e122429fb028ec0 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 20 Feb 2015 11:17:19 +0000
-Subject: [PATCH 33/46] gcc 4.8+ won't build with --disable-dependency-tracking
-
-since the *.Ppo files don't get created unless --enable-dependency-tracking is true.
-
-This patch ensures we only use those compiler options when dependency tracking is enabled.
-
-Upstream-Status: Submitted
-
-(Problem was already reported upstream, attached this patch there
-http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55930)
-
-RP
-2012/09/22
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- libatomic/Makefile.am | 3 ++-
- libatomic/Makefile.in | 3 ++-
- 2 files changed, 4 insertions(+), 2 deletions(-)
-
-diff --git a/libatomic/Makefile.am b/libatomic/Makefile.am
-index 3d8ab62..23dff7e 100644
---- a/libatomic/Makefile.am
-+++ b/libatomic/Makefile.am
-@@ -101,7 +101,8 @@ PAT_S		= $(word 3,$(PAT_SPLIT))
- IFUNC_DEF	= -DIFUNC_ALT=$(PAT_S)
- IFUNC_OPT	= $(word $(PAT_S),$(IFUNC_OPTIONS))
- 
--M_DEPS		= -MT $@ -MD -MP -MF $(DEPDIR)/$(@F).Ppo
-+@AMDEP_TRUE@M_DEPS		= -MT $@ -MD -MP -MF $(DEPDIR)/$(@F).Ppo
-+@AMDEP_FALSE@M_DEPS		=
- M_SIZE		= -DN=$(PAT_N)
- M_IFUNC		= $(if $(PAT_S),$(IFUNC_DEF) $(IFUNC_OPT))
- M_FILE		= $(PAT_BASE)_n.c
-diff --git a/libatomic/Makefile.in b/libatomic/Makefile.in
-index 9288652..3720256 100644
---- a/libatomic/Makefile.in
-+++ b/libatomic/Makefile.in
-@@ -330,7 +330,8 @@ PAT_N = $(word 2,$(PAT_SPLIT))
- PAT_S = $(word 3,$(PAT_SPLIT))
- IFUNC_DEF = -DIFUNC_ALT=$(PAT_S)
- IFUNC_OPT = $(word $(PAT_S),$(IFUNC_OPTIONS))
--M_DEPS = -MT $@ -MD -MP -MF $(DEPDIR)/$(@F).Ppo
-+@AMDEP_TRUE@M_DEPS = -MT $@ -MD -MP -MF $(DEPDIR)/$(@F).Ppo
-+@AMDEP_FALSE@M_DEPS =
- M_SIZE = -DN=$(PAT_N)
- M_IFUNC = $(if $(PAT_S),$(IFUNC_DEF) $(IFUNC_OPT))
- M_FILE = $(PAT_BASE)_n.c
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0034-Don-t-search-host-directory-during-relink-if-inst_pr.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0034-Don-t-search-host-directory-during-relink-if-inst_pr.patch
deleted file mode 100644
index a453fa6..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0034-Don-t-search-host-directory-during-relink-if-inst_pr.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-From a4efc9ca85734c283d4da09b628121e1c1f49c4e Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Tue, 3 Mar 2015 08:21:19 +0000
-Subject: [PATCH 34/46] Don't search host directory during "relink" if
- $inst_prefix is provided
-
-http://lists.gnu.org/archive/html/libtool-patches/2011-01/msg00026.html
-
-Upstream-Status: Submitted
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- ltmain.sh | 5 +++--
- 1 file changed, 3 insertions(+), 2 deletions(-)
-
-diff --git a/ltmain.sh b/ltmain.sh
-index 52bdbdb..82bcec3 100644
---- a/ltmain.sh
-+++ b/ltmain.sh
-@@ -6004,12 +6004,13 @@ func_mode_link ()
- 	      fi
- 	    else
- 	      # We cannot seem to hardcode it, guess we'll fake it.
-+	      # Default if $libdir is not relative to the prefix:
- 	      add_dir="-L$libdir"
--	      # Try looking first in the location we're being installed to.
-+
- 	      if test -n "$inst_prefix_dir"; then
- 		case $libdir in
- 		  [\\/]*)
--		    add_dir="$add_dir -L$inst_prefix_dir$libdir"
-+		    add_dir="-L$inst_prefix_dir$libdir"
- 		    ;;
- 		esac
- 	      fi
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0035-Dont-link-the-plugins-with-libgomp-explicitly.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0035-Dont-link-the-plugins-with-libgomp-explicitly.patch
deleted file mode 100644
index 6ed589b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0035-Dont-link-the-plugins-with-libgomp-explicitly.patch
+++ /dev/null
@@ -1,83 +0,0 @@
-From 5b0125a792842ae02df507bc55555661cb10f9a3 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sun, 8 Mar 2015 03:41:39 +0000
-Subject: [PATCH 35/46] Dont link the plugins with libgomp explicitly
-
-They are dlopened by libgomp anyway. This fixes
-the libtool relink issue, which causes problems during
-cross compilation.
-
-libtool: install: /usr/bin/install -c .libs/libgomp.lai
-/home/ubuntu/work/bleeding/build-qemux86-64mc/tmp/work/core2-64-rdk-linux/gcc-runtime/5.0-r0/image/usr/lib/../lib/libgomp.la
-libtool: install: error: cannot install `libgomp-plugin-host_nonshm.la'
-to a directory not ending in /usr/lib
-Makefile:517: recipe for target 'install-toolexeclibLTLIBRARIES' failed
-make[2]: *** [install-toolexeclibLTLIBRARIES] Error 1
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- libgomp/Makefile.in        | 7 +++----
- libgomp/plugin/Makefrag.am | 3 +--
- 2 files changed, 4 insertions(+), 6 deletions(-)
-
-diff --git a/libgomp/Makefile.in b/libgomp/Makefile.in
-index b61b108..71b2627 100644
---- a/libgomp/Makefile.in
-+++ b/libgomp/Makefile.in
-@@ -123,7 +123,7 @@ am__installdirs = "$(DESTDIR)$(toolexeclibdir)" "$(DESTDIR)$(infodir)" \
- 	"$(DESTDIR)$(fincludedir)" "$(DESTDIR)$(libsubincludedir)" \
- 	"$(DESTDIR)$(toolexeclibdir)"
- LTLIBRARIES = $(toolexeclib_LTLIBRARIES)
--libgomp_plugin_host_nonshm_la_DEPENDENCIES = libgomp.la
-+libgomp_plugin_host_nonshm_la_LIBADD =
- am_libgomp_plugin_host_nonshm_la_OBJECTS =  \
- 	libgomp_plugin_host_nonshm_la-plugin-host.lo
- libgomp_plugin_host_nonshm_la_OBJECTS =  \
-@@ -133,7 +133,7 @@ libgomp_plugin_host_nonshm_la_LINK = $(LIBTOOL) --tag=CC \
- 	--mode=link $(CCLD) $(AM_CFLAGS) $(CFLAGS) \
- 	$(libgomp_plugin_host_nonshm_la_LDFLAGS) $(LDFLAGS) -o $@
- am__DEPENDENCIES_1 =
--@PLUGIN_NVPTX_TRUE@libgomp_plugin_nvptx_la_DEPENDENCIES = libgomp.la \
-+@PLUGIN_NVPTX_TRUE@libgomp_plugin_nvptx_la_DEPENDENCIES =  \
- @PLUGIN_NVPTX_TRUE@	$(am__DEPENDENCIES_1)
- @PLUGIN_NVPTX_TRUE@am_libgomp_plugin_nvptx_la_OBJECTS =  \
- @PLUGIN_NVPTX_TRUE@	libgomp_plugin_nvptx_la-plugin-nvptx.lo
-@@ -407,7 +407,7 @@ libgomp_la_SOURCES = alloc.c barrier.c critical.c env.c error.c iter.c \
- @PLUGIN_NVPTX_TRUE@libgomp_plugin_nvptx_la_LDFLAGS =  \
- @PLUGIN_NVPTX_TRUE@	$(libgomp_plugin_nvptx_version_info) \
- @PLUGIN_NVPTX_TRUE@	$(lt_host_flags) $(PLUGIN_NVPTX_LDFLAGS)
--@PLUGIN_NVPTX_TRUE@libgomp_plugin_nvptx_la_LIBADD = libgomp.la $(PLUGIN_NVPTX_LIBS)
-+@PLUGIN_NVPTX_TRUE@libgomp_plugin_nvptx_la_LIBADD = $(PLUGIN_NVPTX_LIBS)
- @PLUGIN_NVPTX_TRUE@libgomp_plugin_nvptx_la_LIBTOOLFLAGS = --tag=disable-static
- libgomp_plugin_host_nonshm_version_info = -version-info $(libtool_VERSION)
- libgomp_plugin_host_nonshm_la_SOURCES = plugin/plugin-host.c
-@@ -415,7 +415,6 @@ libgomp_plugin_host_nonshm_la_CPPFLAGS = $(AM_CPPFLAGS) -DHOST_NONSHM_PLUGIN
- libgomp_plugin_host_nonshm_la_LDFLAGS = \
- 	$(libgomp_plugin_host_nonshm_version_info) $(lt_host_flags)
- 
--libgomp_plugin_host_nonshm_la_LIBADD = libgomp.la
- libgomp_plugin_host_nonshm_la_LIBTOOLFLAGS = --tag=disable-static
- nodist_noinst_HEADERS = libgomp_f.h
- nodist_libsubinclude_HEADERS = omp.h openacc.h
-diff --git a/libgomp/plugin/Makefrag.am b/libgomp/plugin/Makefrag.am
-index 167485f..d2c5428 100644
---- a/libgomp/plugin/Makefrag.am
-+++ b/libgomp/plugin/Makefrag.am
-@@ -35,7 +35,7 @@ libgomp_plugin_nvptx_la_CPPFLAGS = $(AM_CPPFLAGS) $(PLUGIN_NVPTX_CPPFLAGS)
- libgomp_plugin_nvptx_la_LDFLAGS = $(libgomp_plugin_nvptx_version_info) \
- 	$(lt_host_flags)
- libgomp_plugin_nvptx_la_LDFLAGS += $(PLUGIN_NVPTX_LDFLAGS)
--libgomp_plugin_nvptx_la_LIBADD = libgomp.la $(PLUGIN_NVPTX_LIBS)
-+libgomp_plugin_nvptx_la_LIBADD = $(PLUGIN_NVPTX_LIBS)
- libgomp_plugin_nvptx_la_LIBTOOLFLAGS = --tag=disable-static
- endif
- 
-@@ -45,5 +45,4 @@ libgomp_plugin_host_nonshm_la_SOURCES = plugin/plugin-host.c
- libgomp_plugin_host_nonshm_la_CPPFLAGS = $(AM_CPPFLAGS) -DHOST_NONSHM_PLUGIN
- libgomp_plugin_host_nonshm_la_LDFLAGS = \
- 	$(libgomp_plugin_host_nonshm_version_info) $(lt_host_flags)
--libgomp_plugin_host_nonshm_la_LIBADD = libgomp.la
- libgomp_plugin_host_nonshm_la_LIBTOOLFLAGS = --tag=disable-static
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0036-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0036-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch
deleted file mode 100644
index 41c0294..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0036-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-From b0b0688176a9482777e9b2e98408bfc4505d31af Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Tue, 28 Apr 2015 23:15:27 -0700
-Subject: [PATCH 36/46] Use SYSTEMLIBS_DIR replacement instead of hardcoding
- base_libdir
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gcc/config/aarch64/aarch64-linux.h | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/gcc/config/aarch64/aarch64-linux.h b/gcc/config/aarch64/aarch64-linux.h
-index 257acf0..abeb948 100644
---- a/gcc/config/aarch64/aarch64-linux.h
-+++ b/gcc/config/aarch64/aarch64-linux.h
-@@ -21,7 +21,7 @@
- #ifndef GCC_AARCH64_LINUX_H
- #define GCC_AARCH64_LINUX_H
- 
--#define GLIBC_DYNAMIC_LINKER "/lib/ld-linux-aarch64%{mbig-endian:_be}%{mabi=ilp32:_ilp32}.so.1"
-+#define GLIBC_DYNAMIC_LINKER  SYSTEMLIBS_DIR "ld-linux-aarch64%{mbig-endian:_be}%{mabi=ilp32:_ilp32}.so.1"
- 
- #undef  ASAN_CC1_SPEC
- #define ASAN_CC1_SPEC "%{%:sanitize(address):-funwind-tables}"
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0037-aarch64-Add-support-for-musl-ldso.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0037-aarch64-Add-support-for-musl-ldso.patch
deleted file mode 100644
index 30dbe74..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0037-aarch64-Add-support-for-musl-ldso.patch
+++ /dev/null
@@ -1,26 +0,0 @@
-From 420ca64a86d560a77ee596a9774d672b08ca8278 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Tue, 28 Apr 2015 23:18:39 -0700
-Subject: [PATCH 37/46] aarch64: Add support for musl ldso
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gcc/config/aarch64/aarch64-linux.h | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/gcc/config/aarch64/aarch64-linux.h b/gcc/config/aarch64/aarch64-linux.h
-index abeb948..f9e65fc 100644
---- a/gcc/config/aarch64/aarch64-linux.h
-+++ b/gcc/config/aarch64/aarch64-linux.h
-@@ -23,6 +23,8 @@
- 
- #define GLIBC_DYNAMIC_LINKER  SYSTEMLIBS_DIR "ld-linux-aarch64%{mbig-endian:_be}%{mabi=ilp32:_ilp32}.so.1"
- 
-+#define MUSL_DYNAMIC_LINKER  SYSTEMLIBS_DIR "ld-musl-aarch64.so.1"
-+
- #undef  ASAN_CC1_SPEC
- #define ASAN_CC1_SPEC "%{%:sanitize(address):-funwind-tables}"
- 
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0038-fix-g-sysroot.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0038-fix-g-sysroot.patch
deleted file mode 100644
index 9ba574a..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0038-fix-g-sysroot.patch
+++ /dev/null
@@ -1,56 +0,0 @@
-From 8df3e7007a22c9d6be5d5ae4b9b169c5c8431917 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 27 May 2015 17:06:06 -0700
-Subject: [PATCH 38/46] fix g++ sysroot
-
-Portions of
-
-http://www.mail-archive.com/gcc-patches@gcc.gnu.org/msg26013.html
-
-are not upstreamed yet. So let's keep the missing pieces.
-
-Upstream-Status: Pending
-
-Without this, compiling something simple like #include <limits> on target
-with "c++ test.cpp" fails, unable to find the header. strace shows it looking in
-usr/include/xxxx rather than /usr/include/xxxx
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gcc/configure    | 4 +++-
- gcc/configure.ac | 4 +++-
- 2 files changed, 6 insertions(+), 2 deletions(-)
-
-diff --git a/gcc/configure b/gcc/configure
-index 9b60aad..6df594c 100755
---- a/gcc/configure
-+++ b/gcc/configure
-@@ -3375,7 +3375,9 @@ gcc_gxx_include_dir_add_sysroot=0
- if test "${with_sysroot+set}" = set; then
-   gcc_gxx_without_sysroot=`expr "${gcc_gxx_include_dir}" : "${with_sysroot}"'\(.*\)'`
-   if test "${gcc_gxx_without_sysroot}"; then
--    gcc_gxx_include_dir="${gcc_gxx_without_sysroot}"
-+    if test x${with_sysroot} != x/; then
-+      gcc_gxx_include_dir="${gcc_gxx_without_sysroot}"
-+    fi
-     gcc_gxx_include_dir_add_sysroot=1
-   fi
- fi
-diff --git a/gcc/configure.ac b/gcc/configure.ac
-index e1e54f2..3bb2173 100644
---- a/gcc/configure.ac
-+++ b/gcc/configure.ac
-@@ -152,7 +152,9 @@ gcc_gxx_include_dir_add_sysroot=0
- if test "${with_sysroot+set}" = set; then
-   gcc_gxx_without_sysroot=`expr "${gcc_gxx_include_dir}" : "${with_sysroot}"'\(.*\)'`
-   if test "${gcc_gxx_without_sysroot}"; then
--    gcc_gxx_include_dir="${gcc_gxx_without_sysroot}"
-+    if test x${with_sysroot} != x/; then
-+      gcc_gxx_include_dir="${gcc_gxx_without_sysroot}"
-+    fi
-     gcc_gxx_include_dir_add_sysroot=1
-   fi
- fi
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0039-libcc1-fix-libcc1-s-install-path-and-rpath.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0039-libcc1-fix-libcc1-s-install-path-and-rpath.patch
deleted file mode 100644
index 2e0df96..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0039-libcc1-fix-libcc1-s-install-path-and-rpath.patch
+++ /dev/null
@@ -1,54 +0,0 @@
-From 1eede9e4a10d3532db826a6eeced695df3ad5b89 Mon Sep 17 00:00:00 2001
-From: Robert Yang <liezhi.yang@windriver.com>
-Date: Sun, 5 Jul 2015 20:25:18 -0700
-Subject: [PATCH 39/46] libcc1: fix libcc1's install path and rpath
-
-* Install libcc1.so and libcc1plugin.so into
-  $(libexecdir)/gcc/$(target_noncanonical)/$(gcc_version), as we
-  did for lto-plugin.
-* Fix bad RPATH issue:
-  gcc-5.2.0: package gcc-plugins contains bad RPATH /patht/to/tmp/sysroots/qemux86-64/usr/lib64/../lib64 in file
- /path/to/gcc/5.2.0-r0/packages-split/gcc-plugins/usr/lib64/gcc/x86_64-poky-linux/5.2.0/plugin/libcc1plugin.so.0.0.0
- [rpaths]
-
-Upstream-Status: Inappropriate [OE configuration]
-
-Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
----
- libcc1/Makefile.am | 4 ++--
- libcc1/Makefile.in | 4 ++--
- 2 files changed, 4 insertions(+), 4 deletions(-)
-
-diff --git a/libcc1/Makefile.am b/libcc1/Makefile.am
-index 7a274b3..db69bea 100644
---- a/libcc1/Makefile.am
-+++ b/libcc1/Makefile.am
-@@ -35,8 +35,8 @@ libiberty = $(if $(wildcard $(libiberty_noasan)),$(Wc)$(libiberty_noasan), \
- 	    $(Wc)$(libiberty_normal)))
- libiberty_dep = $(patsubst $(Wc)%,%,$(libiberty))
- 
--plugindir = $(libdir)/gcc/$(target_noncanonical)/$(gcc_version)/plugin
--cc1libdir = $(libdir)/$(libsuffix)
-+cc1libdir = $(libexecdir)/gcc/$(target_noncanonical)/$(gcc_version)
-+plugindir = $(cc1libdir)
- 
- if ENABLE_PLUGIN
- plugin_LTLIBRARIES = libcc1plugin.la
-diff --git a/libcc1/Makefile.in b/libcc1/Makefile.in
-index 1916134..c8995d2 100644
---- a/libcc1/Makefile.in
-+++ b/libcc1/Makefile.in
-@@ -262,8 +262,8 @@ libiberty = $(if $(wildcard $(libiberty_noasan)),$(Wc)$(libiberty_noasan), \
- 	    $(Wc)$(libiberty_normal)))
- 
- libiberty_dep = $(patsubst $(Wc)%,%,$(libiberty))
--plugindir = $(libdir)/gcc/$(target_noncanonical)/$(gcc_version)/plugin
--cc1libdir = $(libdir)/$(libsuffix)
-+cc1libdir = $(libexecdir)/gcc/$(target_noncanonical)/$(gcc_version)
-+plugindir = $(cc1libdir)
- @ENABLE_PLUGIN_TRUE@plugin_LTLIBRARIES = libcc1plugin.la
- @ENABLE_PLUGIN_TRUE@cc1lib_LTLIBRARIES = libcc1.la
- BUILT_SOURCES = compiler-name.h
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0040-handle-sysroot-support-for-nativesdk-gcc.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0040-handle-sysroot-support-for-nativesdk-gcc.patch
deleted file mode 100644
index 11e1310..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0040-handle-sysroot-support-for-nativesdk-gcc.patch
+++ /dev/null
@@ -1,213 +0,0 @@
-From b41674d357c4ba320b9860e0f313d2f5a332f27d Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 7 Dec 2015 23:39:54 +0000
-Subject: [PATCH 40/46] handle sysroot support for nativesdk-gcc
-
-Being able to build a nativesdk gcc is useful, particularly in cases
-where the host compiler may be of an incompatible version (or a 32
-bit compiler is needed).
-
-Sadly, building nativesdk-gcc is not straightforward. We install
-nativesdk-gcc into a relocatable location and this means that its
-library locations can change. "Normal" sysroot support doesn't help
-in this case since the values of paths like "libdir" change, not just
-base root directory of the system.
-
-In order to handle this we do two things:
-
-a) Add %r into spec file markup which can be used for injected paths
-   such as SYSTEMLIBS_DIR (see gcc_multilib_setup()).
-b) Add other paths which need relocation into a .gccrelocprefix section
-   which the relocation code will notice and adjust automatically.
-
-Upstream-Status: Inappropriate
-RP 2015/7/28
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gcc/cppdefault.c | 50 +++++++++++++++++++++++++++++++++++++-------------
- gcc/cppdefault.h |  3 ++-
- gcc/gcc.c        | 20 ++++++++++++++------
- 3 files changed, 53 insertions(+), 20 deletions(-)
-
-diff --git a/gcc/cppdefault.c b/gcc/cppdefault.c
-index 03a0287..f44c1d8 100644
---- a/gcc/cppdefault.c
-+++ b/gcc/cppdefault.c
-@@ -35,6 +35,30 @@
- # undef CROSS_INCLUDE_DIR
- #endif
- 
-+static char GPLUSPLUS_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = GPLUSPLUS_INCLUDE_DIR;
-+static char GCC_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = GCC_INCLUDE_DIR;
-+static char GPLUSPLUS_TOOL_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = GPLUSPLUS_TOOL_INCLUDE_DIR;
-+static char GPLUSPLUS_BACKWARD_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = GPLUSPLUS_BACKWARD_INCLUDE_DIR;
-+static char STANDARD_STARTFILE_PREFIX_2VAR[4096] __attribute__ ((section (".gccrelocprefix"))) = STANDARD_STARTFILE_PREFIX_2 GCC_INCLUDE_SUBDIR_TARGET;
-+#ifdef LOCAL_INCLUDE_DIR
-+static char LOCAL_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = LOCAL_INCLUDE_DIR;
-+#endif
-+#ifdef PREFIX_INCLUDE_DIR
-+static char PREFIX_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = PREFIX_INCLUDE_DIR;
-+#endif
-+#ifdef FIXED_INCLUDE_DIR
-+static char FIXED_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = FIXED_INCLUDE_DIR;
-+#endif
-+#ifdef CROSS_INCLUDE_DIR
-+static char CROSS_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = CROSS_INCLUDE_DIR;
-+#endif
-+#ifdef TOOL_INCLUDE_DIR
-+static char TOOL_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = TOOL_INCLUDE_DIR;
-+#endif
-+#ifdef NATIVE_SYSTEM_HEADER_DIR
-+static char NATIVE_SYSTEM_HEADER_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = NATIVE_SYSTEM_HEADER_DIR;
-+#endif
-+
- const struct default_include cpp_include_defaults[]
- #ifdef INCLUDE_DEFAULTS
- = INCLUDE_DEFAULTS;
-@@ -42,38 +66,38 @@ const struct default_include cpp_include_defaults[]
- = {
- #ifdef GPLUSPLUS_INCLUDE_DIR
-     /* Pick up GNU C++ generic include files.  */
--    { GPLUSPLUS_INCLUDE_DIR, "G++", 1, 1,
-+    { GPLUSPLUS_INCLUDE_DIRVAR, "G++", 1, 1,
-       GPLUSPLUS_INCLUDE_DIR_ADD_SYSROOT, 0 },
- #endif
- #ifdef GPLUSPLUS_TOOL_INCLUDE_DIR
-     /* Pick up GNU C++ target-dependent include files.  */
--    { GPLUSPLUS_TOOL_INCLUDE_DIR, "G++", 1, 1,
-+    { GPLUSPLUS_TOOL_INCLUDE_DIRVAR, "G++", 1, 1,
-       GPLUSPLUS_INCLUDE_DIR_ADD_SYSROOT, 1 },
- #endif
- #ifdef GPLUSPLUS_BACKWARD_INCLUDE_DIR
-     /* Pick up GNU C++ backward and deprecated include files.  */
--    { GPLUSPLUS_BACKWARD_INCLUDE_DIR, "G++", 1, 1,
-+    { GPLUSPLUS_BACKWARD_INCLUDE_DIRVAR, "G++", 1, 1,
-       GPLUSPLUS_INCLUDE_DIR_ADD_SYSROOT, 0 },
- #endif
- #ifdef GCC_INCLUDE_DIR
-     /* This is the dir for gcc's private headers.  */
--    { GCC_INCLUDE_DIR, "GCC", 0, 0, 0, 0 },
-+    { GCC_INCLUDE_DIRVAR, "GCC", 0, 0, 0, 0 },
- #endif
- #ifdef GCC_INCLUDE_SUBDIR_TARGET
-     /* This is the dir for gcc's private headers under the specified sysroot.  */
--    { STANDARD_STARTFILE_PREFIX_2 GCC_INCLUDE_SUBDIR_TARGET, "GCC", 0, 0, 1, 0 },
-+    { STANDARD_STARTFILE_PREFIX_2VAR, "GCC", 0, 0, 1, 0 },
- #endif
- #ifdef LOCAL_INCLUDE_DIR
-     /* /usr/local/include comes before the fixincluded header files.  */
--    { LOCAL_INCLUDE_DIR, 0, 0, 1, 1, 2 },
--    { LOCAL_INCLUDE_DIR, 0, 0, 1, 1, 0 },
-+    { LOCAL_INCLUDE_DIRVAR, 0, 0, 1, 1, 2 },
-+    { LOCAL_INCLUDE_DIRVAR, 0, 0, 1, 1, 0 },
- #endif
- #ifdef PREFIX_INCLUDE_DIR
--    { PREFIX_INCLUDE_DIR, 0, 0, 1, 0, 0 },
-+    { PREFIX_INCLUDE_DIRVAR, 0, 0, 1, 0, 0 },
- #endif
- #ifdef FIXED_INCLUDE_DIR
-     /* This is the dir for fixincludes.  */
--    { FIXED_INCLUDE_DIR, "GCC", 0, 0, 0,
-+    { FIXED_INCLUDE_DIRVAR, "GCC", 0, 0, 0,
-       /* A multilib suffix needs adding if different multilibs use
- 	 different headers.  */
- #ifdef SYSROOT_HEADERS_SUFFIX_SPEC
-@@ -85,16 +109,16 @@ const struct default_include cpp_include_defaults[]
- #endif
- #ifdef CROSS_INCLUDE_DIR
-     /* One place the target system's headers might be.  */
--    { CROSS_INCLUDE_DIR, "GCC", 0, 0, 0, 0 },
-+    { CROSS_INCLUDE_DIRVAR, "GCC", 0, 0, 0, 0 },
- #endif
- #ifdef TOOL_INCLUDE_DIR
-     /* Another place the target system's headers might be.  */
--    { TOOL_INCLUDE_DIR, "BINUTILS", 0, 1, 0, 0 },
-+    { TOOL_INCLUDE_DIRVAR, "BINUTILS", 0, 1, 0, 0 },
- #endif
- #ifdef NATIVE_SYSTEM_HEADER_DIR
-     /* /usr/include comes dead last.  */
--    { NATIVE_SYSTEM_HEADER_DIR, NATIVE_SYSTEM_HEADER_COMPONENT, 0, 0, 1, 2 },
--    { NATIVE_SYSTEM_HEADER_DIR, NATIVE_SYSTEM_HEADER_COMPONENT, 0, 0, 1, 0 },
-+    { NATIVE_SYSTEM_HEADER_DIRVAR, NATIVE_SYSTEM_HEADER_COMPONENT, 0, 0, 1, 2 },
-+    { NATIVE_SYSTEM_HEADER_DIRVAR, NATIVE_SYSTEM_HEADER_COMPONENT, 0, 0, 1, 0 },
- #endif
-     { 0, 0, 0, 0, 0, 0 }
-   };
-diff --git a/gcc/cppdefault.h b/gcc/cppdefault.h
-index c98644f..4198669 100644
---- a/gcc/cppdefault.h
-+++ b/gcc/cppdefault.h
-@@ -33,7 +33,8 @@
- 
- struct default_include
- {
--  const char *const fname;	/* The name of the directory.  */
-+  const char *fname;     /* The name of the directory.  */
-+
-   const char *const component;	/* The component containing the directory
- 				   (see update_path in prefix.c) */
-   const char cplusplus;		/* Only look here if we're compiling C++.  */
-diff --git a/gcc/gcc.c b/gcc/gcc.c
-index 3ca27b9..2b7756e 100644
---- a/gcc/gcc.c
-+++ b/gcc/gcc.c
-@@ -120,6 +120,8 @@ static const char *target_system_root = TARGET_SYSTEM_ROOT;
- #else
- static const char *target_system_root = 0;
- #endif
-+ 
-+static char target_relocatable_prefix[4096] __attribute__ ((section (".gccrelocprefix"))) = SYSTEMLIBS_DIR;
- 
- /* Nonzero means pass the updated target_system_root to the compiler.  */
- 
-@@ -390,6 +392,7 @@ or with constant text in a single argument.
-  %G     process LIBGCC_SPEC as a spec.
-  %R     Output the concatenation of target_system_root and
-         target_sysroot_suffix.
-+ %r     Output the base path target_relocatable_prefix
-  %S     process STARTFILE_SPEC as a spec.  A capital S is actually used here.
-  %E     process ENDFILE_SPEC as a spec.  A capital E is actually used here.
-  %C     process CPP_SPEC as a spec.
-@@ -1286,10 +1289,10 @@ static const char *gcc_libexec_prefix;
-    gcc_exec_prefix is set because, in that case, we know where the
-    compiler has been installed, and use paths relative to that
-    location instead.  */
--static const char *const standard_exec_prefix = STANDARD_EXEC_PREFIX;
--static const char *const standard_libexec_prefix = STANDARD_LIBEXEC_PREFIX;
--static const char *const standard_bindir_prefix = STANDARD_BINDIR_PREFIX;
--static const char *const standard_startfile_prefix = STANDARD_STARTFILE_PREFIX;
-+static char standard_exec_prefix[4096] __attribute__ ((section (".gccrelocprefix"))) = STANDARD_EXEC_PREFIX;
-+static char standard_libexec_prefix[4096] __attribute__ ((section (".gccrelocprefix"))) = STANDARD_LIBEXEC_PREFIX;
-+static char standard_bindir_prefix[4096] __attribute__ ((section (".gccrelocprefix"))) = STANDARD_BINDIR_PREFIX;
-+static char *const standard_startfile_prefix = STANDARD_STARTFILE_PREFIX;
- 
- /* For native compilers, these are well-known paths containing
-    components that may be provided by the system.  For cross
-@@ -1297,9 +1300,9 @@ static const char *const standard_startfile_prefix = STANDARD_STARTFILE_PREFIX;
- static const char *md_exec_prefix = MD_EXEC_PREFIX;
- static const char *md_startfile_prefix = MD_STARTFILE_PREFIX;
- static const char *md_startfile_prefix_1 = MD_STARTFILE_PREFIX_1;
--static const char *const standard_startfile_prefix_1
-+static char standard_startfile_prefix_1[4096] __attribute__ ((section (".gccrelocprefix")))
-   = STANDARD_STARTFILE_PREFIX_1;
--static const char *const standard_startfile_prefix_2
-+static char standard_startfile_prefix_2[4096] __attribute__ ((section (".gccrelocprefix")))
-   = STANDARD_STARTFILE_PREFIX_2;
- 
- /* A relative path to be used in finding the location of tools
-@@ -5523,6 +5526,11 @@ do_spec_1 (const char *spec, int inswitch, const char *soft_matched_part)
- 	      }
- 	    break;
- 
-+          case 'r':
-+              obstack_grow (&obstack, target_relocatable_prefix,
-+		      strlen (target_relocatable_prefix));
-+            break;
-+
- 	  case 'S':
- 	    value = do_spec_1 (startfile_spec, 0, NULL);
- 	    if (value != 0)
--- 
-2.6.3
-
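The relocation trick described in point b) of the patch removed above hinges on placing path strings in oversized, writable buffers inside a dedicated ELF section, so a post-install tool can locate ".gccrelocprefix" in the binary and rewrite the paths in place. A minimal sketch follows; the section name mirrors the patch, while the initial path value is purely illustrative.

    /* Sketch only: a 4096-byte buffer in the ".gccrelocprefix" section leaves
     * room for a longer path to be written in by an external relocation tool
     * after the relocatable SDK has been installed. */
    #include <stdio.h>

    static char gcc_include_dir[4096]
        __attribute__ ((section (".gccrelocprefix"))) = "/usr/lib/gcc/include";

    int main(void)
    {
        /* At run time the (possibly rewritten) buffer is used like any
         * other string. */
        printf("include dir: %s\n", gcc_include_dir);
        return 0;
    }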
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0041-Search-target-sysroot-gcc-version-specific-dirs-with.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0041-Search-target-sysroot-gcc-version-specific-dirs-with.patch
deleted file mode 100644
index 5a504a1..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0041-Search-target-sysroot-gcc-version-specific-dirs-with.patch
+++ /dev/null
@@ -1,102 +0,0 @@
-From 99cadb4d8415dd5a275d7d6410f20db33d0f8433 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 7 Dec 2015 23:41:45 +0000
-Subject: [PATCH 41/46] Search target sysroot gcc version specific dirs with
- multilib.
-
-We install the gcc libraries (such as crtbegin.o) into
-<sysroot><libdir>/<target-sys>/5.2.0/
-which is a default search path for GCC (aka multi_suffix in the
-code below). <target-sys> is 'machine' in gcc's terminology. We use
-these directories so that multiple gcc versions could in theory
-co-exist on target.
-
-We only want to build one gcc-cross-canadian per arch and have this work
-for all multilibs. <target-sys> can be handled by mapping the multilib
-<target-sys> to the one used by gcc-cross-canadian, e.g.
-mips64-polkmllib32-linux
-is symlinked to by mips64-poky-linux.
-
-The default gcc search path in the target sysroot for a "lib64" multilib
-is:
-
-<sysroot>/lib32/mips64-poky-linux/5.2.0/
-<sysroot>/lib32/../lib64/
-<sysroot>/usr/lib32/mips64-poky-linux/5.2.0/
-<sysroot>/usr/lib32/../lib64/
-<sysroot>/lib32/
-<sysroot>/usr/lib32/
-
-which means that the lib32 crtbegin.o will be found and the lib64 ones
-will not, which leads to compiler failures.
-
-This patch injects a multilib version of that path first so the lib64
-binaries can be found first. With this change the search path becomes:
-
-<sysroot>/lib32/../lib64/mips64-poky-linux/5.2.0/
-<sysroot>/lib32/mips64-poky-linux/5.2.0/
-<sysroot>/lib32/../lib64/
-<sysroot>/usr/lib32/../lib64/mips64-poky-linux/5.2.0/
-<sysroot>/usr/lib32/mips64-poky-linux/5.2.0/
-<sysroot>/usr/lib32/../lib64/
-<sysroot>/lib32/
-<sysroot>/usr/lib32/
-
-Upstream-Status: Pending
-RP 2015/7/31
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gcc/gcc.c | 29 ++++++++++++++++++++++++++++-
- 1 file changed, 28 insertions(+), 1 deletion(-)
-
-diff --git a/gcc/gcc.c b/gcc/gcc.c
-index 2b7756e..8f53aea 100644
---- a/gcc/gcc.c
-+++ b/gcc/gcc.c
-@@ -2305,7 +2305,7 @@ for_each_path (const struct path_prefix *paths,
-       if (path == NULL)
- 	{
- 	  len = paths->max_len + extra_space + 1;
--	  len += MAX (MAX (suffix_len, multi_os_dir_len), multiarch_len);
-+	  len += MAX ((suffix_len + multi_os_dir_len), multiarch_len);
- 	  path = XNEWVEC (char, len);
- 	}
- 
-@@ -2317,6 +2317,33 @@ for_each_path (const struct path_prefix *paths,
- 	  /* Look first in MACHINE/VERSION subdirectory.  */
- 	  if (!skip_multi_dir)
- 	    {
-+	      if (!(pl->os_multilib ? skip_multi_os_dir : skip_multi_dir))
-+	        {
-+	          const char *this_multi;
-+	          size_t this_multi_len;
-+
-+	          if (pl->os_multilib)
-+		    {
-+		      this_multi = multi_os_dir;
-+		      this_multi_len = multi_os_dir_len;
-+		    }
-+	          else
-+		    {
-+		      this_multi = multi_dir;
-+		      this_multi_len = multi_dir_len;
-+		    }
-+
-+	          /* Look in multilib MACHINE/VERSION subdirectory first */
-+	          if (this_multi_len)
-+	            {
-+		      memcpy (path + len, this_multi, this_multi_len + 1);
-+	              memcpy (path + len + this_multi_len, multi_suffix, suffix_len + 1);
-+	              ret = callback (path, callback_info);
-+	                if (ret)
-+		          break;
-+	            }
-+	        }
-+
- 	      memcpy (path + len, multi_suffix, suffix_len + 1);
- 	      ret = callback (path, callback_info);
- 	      if (ret)
--- 
-2.6.3
-
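The reordering described in the patch removed above can be pictured as plain string splicing: the multilib OS dir ("../lib64") is inserted ahead of the MACHINE/VERSION suffix so the lib64 startup files are found first. The small sketch below uses the example strings from the commit message; it illustrates only the path ordering, not the actual gcc.c implementation.

    /* Sketch of the search-path ordering only. */
    #include <stdio.h>

    int main(void)
    {
        const char *prefix       = "<sysroot>/lib32/";
        const char *multi_os_dir = "../lib64/";
        const char *multi_suffix = "mips64-poky-linux/5.2.0/";
        char path[256];

        /* New first candidate injected by the patch: */
        snprintf(path, sizeof path, "%s%s%s", prefix, multi_os_dir, multi_suffix);
        puts(path);   /* <sysroot>/lib32/../lib64/mips64-poky-linux/5.2.0/ */

        /* Previously the first (and, for a lib64 multilib, wrong) candidate: */
        snprintf(path, sizeof path, "%s%s", prefix, multi_suffix);
        puts(path);   /* <sysroot>/lib32/mips64-poky-linux/5.2.0/ */
        return 0;
    }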
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0042-Fix-various-_FOR_BUILD-and-related-variables.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0042-Fix-various-_FOR_BUILD-and-related-variables.patch
deleted file mode 100644
index 5af764b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0042-Fix-various-_FOR_BUILD-and-related-variables.patch
+++ /dev/null
@@ -1,137 +0,0 @@
-From 78e77206902349c9256f06c3dd179197a39af2e1 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 7 Dec 2015 23:42:45 +0000
-Subject: [PATCH 42/46] Fix various _FOR_BUILD and related variables
-
-When building something FOR_BUILD, you have to override CFLAGS with
-CFLAGS_FOR_BUILD. And if you use C++, you also have to override
-CXXFLAGS with CXXFLAGS_FOR_BUILD.
-Without this, when building for mingw, you end up trying to use
-the mingw headers for a host build.
-
-The same goes for other variables as well, such as CPPFLAGS,
-CPP, and GMPINC.
-
-Upstream-Status: Pending
-
-Signed-off-by: Peter Seebach <peter.seebach@windriver.com>
-Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- Makefile.in      | 6 ++++++
- Makefile.tpl     | 5 +++++
- gcc/Makefile.in  | 2 +-
- gcc/configure    | 2 +-
- gcc/configure.ac | 2 +-
- 5 files changed, 14 insertions(+), 3 deletions(-)
-
-diff --git a/Makefile.in b/Makefile.in
-index a783e1e..8b6d3d2 100644
---- a/Makefile.in
-+++ b/Makefile.in
-@@ -152,6 +152,7 @@ BUILD_EXPORTS = \
- 	CPP="$(CC_FOR_BUILD) -E"; export CPP; \
- 	CFLAGS="$(CFLAGS_FOR_BUILD)"; export CFLAGS; \
- 	CONFIG_SHELL="$(SHELL)"; export CONFIG_SHELL; \
-+	CPPFLAGS="$(CPPFLAGS_FOR_BUILD)"; export CPPFLAGS; \
- 	CXX="$(CXX_FOR_BUILD)"; export CXX; \
- 	CXXFLAGS="$(CXXFLAGS_FOR_BUILD)"; export CXXFLAGS; \
- 	GCJ="$(GCJ_FOR_BUILD)"; export GCJ; \
-@@ -170,6 +171,9 @@ BUILD_EXPORTS = \
- # built for the build system to override those in BASE_FLAGS_TO_PASS.
- EXTRA_BUILD_FLAGS = \
- 	CFLAGS="$(CFLAGS_FOR_BUILD)" \
-+	CXXFLAGS="$(CXXFLAGS_FOR_BUILD)" \
-+	CPP="$(CC_FOR_BUILD) -E" \
-+	CPPFLAGS="$(CPPFLAGS_FOR_BUILD)" \
- 	LDFLAGS="$(LDFLAGS_FOR_BUILD)"
- 
- # This is the list of directories to built for the host system.
-@@ -187,6 +191,7 @@ HOST_SUBDIR = @host_subdir@
- HOST_EXPORTS = \
- 	$(BASE_EXPORTS) \
- 	CC="$(CC)"; export CC; \
-+	CPP="$(CC) -E"; export CPP; \
- 	ADA_CFLAGS="$(ADA_CFLAGS)"; export ADA_CFLAGS; \
- 	CFLAGS="$(CFLAGS)"; export CFLAGS; \
- 	CONFIG_SHELL="$(SHELL)"; export CONFIG_SHELL; \
-@@ -710,6 +715,7 @@ BASE_FLAGS_TO_PASS = \
- 	"CC_FOR_BUILD=$(CC_FOR_BUILD)" \
- 	"CFLAGS_FOR_BUILD=$(CFLAGS_FOR_BUILD)" \
- 	"CXX_FOR_BUILD=$(CXX_FOR_BUILD)" \
-+	"CXXFLAGS_FOR_BUILD=$(CXXFLAGS_FOR_BUILD)" \
- 	"EXPECT=$(EXPECT)" \
- 	"FLEX=$(FLEX)" \
- 	"INSTALL=$(INSTALL)" \
-diff --git a/Makefile.tpl b/Makefile.tpl
-index 1ea1954..78a59c3 100644
---- a/Makefile.tpl
-+++ b/Makefile.tpl
-@@ -154,6 +154,7 @@ BUILD_EXPORTS = \
- 	CC="$(CC_FOR_BUILD)"; export CC; \
- 	CFLAGS="$(CFLAGS_FOR_BUILD)"; export CFLAGS; \
- 	CONFIG_SHELL="$(SHELL)"; export CONFIG_SHELL; \
-+	CPPFLAGS="$(CPPFLAGS_FOR_BUILD)"; export CPPFLAGS; \
- 	CXX="$(CXX_FOR_BUILD)"; export CXX; \
- 	CXXFLAGS="$(CXXFLAGS_FOR_BUILD)"; export CXXFLAGS; \
- 	GCJ="$(GCJ_FOR_BUILD)"; export GCJ; \
-@@ -172,6 +173,9 @@ BUILD_EXPORTS = \
- # built for the build system to override those in BASE_FLAGS_TO_PASS.
- EXTRA_BUILD_FLAGS = \
- 	CFLAGS="$(CFLAGS_FOR_BUILD)" \
-+	CXXFLAGS="$(CXXFLAGS_FOR_BUILD)" \
-+	CPP="$(CC_FOR_BUILD) -E" \
-+	CPPFLAGS="$(CPPFLAGS_FOR_BUILD)" \
- 	LDFLAGS="$(LDFLAGS_FOR_BUILD)"
- 
- # This is the list of directories to built for the host system.
-@@ -189,6 +193,7 @@ HOST_SUBDIR = @host_subdir@
- HOST_EXPORTS = \
- 	$(BASE_EXPORTS) \
- 	CC="$(CC)"; export CC; \
-+	CPP="$(CC) -E"; export CPP; \
- 	ADA_CFLAGS="$(ADA_CFLAGS)"; export ADA_CFLAGS; \
- 	CFLAGS="$(CFLAGS)"; export CFLAGS; \
- 	CONFIG_SHELL="$(SHELL)"; export CONFIG_SHELL; \
-diff --git a/gcc/Makefile.in b/gcc/Makefile.in
-index cd5bc4a..98ae4f4 100644
---- a/gcc/Makefile.in
-+++ b/gcc/Makefile.in
-@@ -762,7 +762,7 @@ BUILD_LINKERFLAGS = $(BUILD_CXXFLAGS)
- # Native linker and preprocessor flags.  For x-fragment overrides.
- BUILD_LDFLAGS=@BUILD_LDFLAGS@
- BUILD_CPPFLAGS= -I. -I$(@D) -I$(srcdir) -I$(srcdir)/$(@D) \
--		-I$(srcdir)/../include @INCINTL@ $(CPPINC) $(CPPFLAGS)
-+		-I$(srcdir)/../include @INCINTL@ $(CPPINC) $(CPPFLAGS_FOR_BUILD)
- 
- # Actual name to use when installing a native compiler.
- GCC_INSTALL_NAME := $(shell echo gcc|sed '$(program_transform_name)')
-diff --git a/gcc/configure b/gcc/configure
-index 6df594c..fcb05e7 100755
---- a/gcc/configure
-+++ b/gcc/configure
-@@ -11521,7 +11521,7 @@ else
- 	CC="${CC_FOR_BUILD}" CFLAGS="${CFLAGS_FOR_BUILD}" \
- 	CXX="${CXX_FOR_BUILD}" CXXFLAGS="${CXXFLAGS_FOR_BUILD}" \
- 	LD="${LD_FOR_BUILD}" LDFLAGS="${LDFLAGS_FOR_BUILD}" \
--	GMPINC="" CPPFLAGS="${CPPFLAGS} -DGENERATOR_FILE" \
-+	GMPINC="" CPPFLAGS="${CPPFLAGS_FOR_BUILD} -DGENERATOR_FILE" \
- 	${realsrcdir}/configure \
- 		--enable-languages=${enable_languages-all} \
- 		--target=$target_alias --host=$build_alias --build=$build_alias
-diff --git a/gcc/configure.ac b/gcc/configure.ac
-index 3bb2173..923bc9a 100644
---- a/gcc/configure.ac
-+++ b/gcc/configure.ac
-@@ -1633,7 +1633,7 @@ else
- 	CC="${CC_FOR_BUILD}" CFLAGS="${CFLAGS_FOR_BUILD}" \
- 	CXX="${CXX_FOR_BUILD}" CXXFLAGS="${CXXFLAGS_FOR_BUILD}" \
- 	LD="${LD_FOR_BUILD}" LDFLAGS="${LDFLAGS_FOR_BUILD}" \
--	GMPINC="" CPPFLAGS="${CPPFLAGS} -DGENERATOR_FILE" \
-+	GMPINC="" CPPFLAGS="${CPPFLAGS_FOR_BUILD} -DGENERATOR_FILE" \
- 	${realsrcdir}/configure \
- 		--enable-languages=${enable_languages-all} \
- 		--target=$target_alias --host=$build_alias --build=$build_alias
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0043-libstdc-Support-musl.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0043-libstdc-Support-musl.patch
deleted file mode 100644
index bad8402..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0043-libstdc-Support-musl.patch
+++ /dev/null
@@ -1,44 +0,0 @@
-From df430b943a2df6b72054c808d4b93338a82562de Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Tue, 8 Dec 2015 08:23:34 +0000
-Subject: [PATCH 43/46] libstdc++: Support musl
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- libstdc++-v3/configure.host | 10 ++++++++++
- 1 file changed, 10 insertions(+)
-
-diff --git a/libstdc++-v3/configure.host b/libstdc++-v3/configure.host
-index 640199c..1756444 100644
---- a/libstdc++-v3/configure.host
-+++ b/libstdc++-v3/configure.host
-@@ -274,14 +274,24 @@ case "${host_os}" in
-     os_include_dir="os/bsd/freebsd"
-     ;;
-   gnu* | linux* | kfreebsd*-gnu | knetbsd*-gnu)
-+    # check for musl by target
-+    case "${host_os}" in
-+      *-musl*)
-+        os_include_dir="os/generic"
-+        ;;
-+      *)
-     if [ "$uclibc" = "yes" ]; then
-       os_include_dir="os/uclibc"
-     elif [ "$bionic" = "yes" ]; then
-       os_include_dir="os/bionic"
-+    elif [ "$musl" = "yes" ]; then
-+      os_include_dir="os/generic"
-     else
-       os_include_dir="os/gnu-linux"
-     fi
-     ;;
-+    esac
-+    ;;
-   hpux*)
-     os_include_dir="os/hpux"
-     ;;
--- 
-2.6.3
-
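The configure.host change in the removed patch above boils down to a glob match on ${host_os}. As a hedged standalone illustration only (the sample host_os values are made up), the same test can be expressed with fnmatch(3):

#include <stdio.h>
#include <fnmatch.h>

int main(void)
{
    /* sample ${host_os} values; only the musl one should pick os/generic */
    const char *host_os[] = { "linux-gnu", "linux-musl", "linux-uclibceabi" };

    for (int i = 0; i < 3; i++) {
        int is_musl = fnmatch("*-musl*", host_os[i], 0) == 0;
        printf("%-16s -> %s\n", host_os[i],
               is_musl ? "os/generic" : "per-libc os_include_dir");
    }
    return 0;
}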
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0044-Adding-mmusl-as-a-musl-libc-specifier-and-the-necess.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0044-Adding-mmusl-as-a-musl-libc-specifier-and-the-necess.patch
deleted file mode 100644
index 26aa96c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0044-Adding-mmusl-as-a-musl-libc-specifier-and-the-necess.patch
+++ /dev/null
@@ -1,270 +0,0 @@
-From 0b54799d80fb859c7b142467e4d42c99db59df50 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Tue, 8 Dec 2015 08:30:35 +0000
-Subject: [PATCH 44/46] Adding -mmusl as a musl libc specifier, and the
- necessary hacks for it to know how to find musl's dynamic linker.
-
-Upstream-Status: Backport [partial]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gcc/config.gcc                |  10 ++++-
- gcc/config/linux.h            | 100 +++++++++++++++++++++++++++++++++++++-----
- gcc/config/linux.opt          |   4 ++
- gcc/config/rs6000/secureplt.h |   1 +
- gcc/config/rs6000/sysv4.h     |   5 +++
- gcc/ginclude/stddef.h         |   3 ++
- 6 files changed, 110 insertions(+), 13 deletions(-)
-
-Index: gcc-5.4.0/gcc/config.gcc
-===================================================================
---- gcc-5.4.0.orig/gcc/config.gcc
-+++ gcc-5.4.0/gcc/config.gcc
-@@ -575,7 +575,7 @@ case ${target} in
- esac
- 
- # Common C libraries.
--tm_defines="$tm_defines LIBC_GLIBC=1 LIBC_UCLIBC=2 LIBC_BIONIC=3"
-+tm_defines="$tm_defines LIBC_GLIBC=1 LIBC_UCLIBC=2 LIBC_BIONIC=3 LIBC_MUSL=4"
- 
- # 32-bit x86 processors supported by --with-arch=.  Each processor
- # MUST be separated by exactly one space.
-@@ -720,6 +720,9 @@ case ${target} in
-     *-*-*uclibc*)
-       tm_defines="$tm_defines DEFAULT_LIBC=LIBC_UCLIBC"
-       ;;
-+    *-*-*musl*)
-+      tm_defines="$tm_defines DEFAULT_LIBC=LIBC_MUSL"
-+      ;;
-     *)
-       tm_defines="$tm_defines DEFAULT_LIBC=LIBC_GLIBC"
-       ;;
-@@ -2420,6 +2423,11 @@ powerpc*-*-linux*)
- 	    powerpc*-*-linux*paired*)
- 		tm_file="${tm_file} rs6000/750cl.h" ;;
- 	esac
-+        case ${target} in
-+	    *-linux*-musl*)
-+		enable_secureplt=yes ;;
-+	esac
-+
- 	if test x${enable_secureplt} = xyes; then
- 		tm_file="rs6000/secureplt.h ${tm_file}"
- 	fi
-Index: gcc-5.4.0/gcc/config/linux.h
-===================================================================
---- gcc-5.4.0.orig/gcc/config/linux.h
-+++ gcc-5.4.0/gcc/config/linux.h
-@@ -32,10 +32,12 @@ see the files COPYING3 and COPYING.RUNTI
- #define OPTION_GLIBC  (DEFAULT_LIBC == LIBC_GLIBC)
- #define OPTION_UCLIBC (DEFAULT_LIBC == LIBC_UCLIBC)
- #define OPTION_BIONIC (DEFAULT_LIBC == LIBC_BIONIC)
-+#define OPTION_MUSL   (DEFAULT_LIBC == LIBC_MUSL)
- #else
- #define OPTION_GLIBC  (linux_libc == LIBC_GLIBC)
- #define OPTION_UCLIBC (linux_libc == LIBC_UCLIBC)
- #define OPTION_BIONIC (linux_libc == LIBC_BIONIC)
-+#define OPTION_MUSL   (linux_libc == LIBC_MUSL)
- #endif
- 
- #define GNU_USER_TARGET_OS_CPP_BUILTINS()			\
-@@ -53,18 +55,21 @@ see the files COPYING3 and COPYING.RUNTI
-    uClibc or Bionic is the default C library and whether
-    -muclibc or -mglibc or -mbionic has been passed to change the default.  */
- 
--#define CHOOSE_DYNAMIC_LINKER1(LIBC1, LIBC2, LIBC3, LD1, LD2, LD3)	\
--  "%{" LIBC2 ":" LD2 ";:%{" LIBC3 ":" LD3 ";:" LD1 "}}"
-+#define CHOOSE_DYNAMIC_LINKER1(LIBC1, LIBC2, LIBC3, LIBC4, LD1, LD2, LD3, LD4)	\
-+  "%{" LIBC2 ":" LD2 ";:%{" LIBC3 ":" LD3 ";:%{" LIBC4 ":" LD4 ";:" LD1 "}}}"
- 
- #if DEFAULT_LIBC == LIBC_GLIBC
--#define CHOOSE_DYNAMIC_LINKER(G, U, B) \
--  CHOOSE_DYNAMIC_LINKER1 ("mglibc", "muclibc", "mbionic", G, U, B)
-+#define CHOOSE_DYNAMIC_LINKER(G, U, B, M) \
-+  CHOOSE_DYNAMIC_LINKER1 ("mglibc", "muclibc", "mbionic", "mmusl", G, U, B, M)
- #elif DEFAULT_LIBC == LIBC_UCLIBC
--#define CHOOSE_DYNAMIC_LINKER(G, U, B) \
--  CHOOSE_DYNAMIC_LINKER1 ("muclibc", "mglibc", "mbionic", U, G, B)
-+#define CHOOSE_DYNAMIC_LINKER(G, U, B, M) \
-+  CHOOSE_DYNAMIC_LINKER1 ("muclibc", "mglibc", "mbionic", "mmusl", U, G, B, M)
- #elif DEFAULT_LIBC == LIBC_BIONIC
--#define CHOOSE_DYNAMIC_LINKER(G, U, B) \
--  CHOOSE_DYNAMIC_LINKER1 ("mbionic", "mglibc", "muclibc", B, G, U)
-+#define CHOOSE_DYNAMIC_LINKER(G, U, B, M) \
-+  CHOOSE_DYNAMIC_LINKER1 ("mbionic", "mglibc", "muclibc", "mmusl", B, G, U, M)
-+#elif DEFAULT_LIBC == LIBC_MUSL
-+#define CHOOSE_DYNAMIC_LINKER(G, U, B, M) \
-+  CHOOSE_DYNAMIC_LINKER1 ("mmusl", "mglibc", "muclibc", "mbionic", M, G, U, B)
- #else
- #error "Unsupported DEFAULT_LIBC"
- #endif /* DEFAULT_LIBC */
-@@ -84,16 +89,16 @@ see the files COPYING3 and COPYING.RUNTI
- 
- #define GNU_USER_DYNAMIC_LINKER						\
-   CHOOSE_DYNAMIC_LINKER (GLIBC_DYNAMIC_LINKER, UCLIBC_DYNAMIC_LINKER,	\
--			 BIONIC_DYNAMIC_LINKER)
-+			 BIONIC_DYNAMIC_LINKER, MUSL_DYNAMIC_LINKER)
- #define GNU_USER_DYNAMIC_LINKER32					\
-   CHOOSE_DYNAMIC_LINKER (GLIBC_DYNAMIC_LINKER32, UCLIBC_DYNAMIC_LINKER32, \
--			 BIONIC_DYNAMIC_LINKER32)
-+			 BIONIC_DYNAMIC_LINKER32, MUSL_DYNAMIC_LINKER32)
- #define GNU_USER_DYNAMIC_LINKER64					\
-   CHOOSE_DYNAMIC_LINKER (GLIBC_DYNAMIC_LINKER64, UCLIBC_DYNAMIC_LINKER64, \
--			 BIONIC_DYNAMIC_LINKER64)
-+			 BIONIC_DYNAMIC_LINKER64, MUSL_DYNAMIC_LINKER64)
- #define GNU_USER_DYNAMIC_LINKERX32					\
-   CHOOSE_DYNAMIC_LINKER (GLIBC_DYNAMIC_LINKERX32, UCLIBC_DYNAMIC_LINKERX32, \
--			 BIONIC_DYNAMIC_LINKERX32)
-+			 BIONIC_DYNAMIC_LINKERX32, MUSL_DYNAMIC_LINKERX32)
- 
- /* Whether we have Bionic libc runtime */
- #undef TARGET_HAS_BIONIC
-@@ -123,3 +128,74 @@ see the files COPYING3 and COPYING.RUNTI
- # define TARGET_LIBC_HAS_FUNCTION linux_libc_has_function
- 
- #endif
-+
-+/* musl avoids problematic includes by rearranging the include directories.
-+ * Unfortunately, this is mostly duplicated from cppdefault.c */
-+#if DEFAULT_LIBC == LIBC_MUSL
-+#define INCLUDE_DEFAULTS_MUSL_GPP			\
-+    { GPLUSPLUS_INCLUDE_DIR, "G++", 1, 1,		\
-+      GPLUSPLUS_INCLUDE_DIR_ADD_SYSROOT, 0 },		\
-+    { GPLUSPLUS_TOOL_INCLUDE_DIR, "G++", 1, 1,		\
-+      GPLUSPLUS_INCLUDE_DIR_ADD_SYSROOT, 1 },		\
-+    { GPLUSPLUS_BACKWARD_INCLUDE_DIR, "G++", 1, 1,	\
-+      GPLUSPLUS_INCLUDE_DIR_ADD_SYSROOT, 0 },
-+
-+#ifdef LOCAL_INCLUDE_DIR
-+#define INCLUDE_DEFAULTS_MUSL_LOCAL			\
-+    { LOCAL_INCLUDE_DIR, 0, 0, 1, 1, 2 },		\
-+    { LOCAL_INCLUDE_DIR, 0, 0, 1, 1, 0 },
-+#else
-+#define INCLUDE_DEFAULTS_MUSL_LOCAL
-+#endif
-+
-+#ifdef PREFIX_INCLUDE_DIR
-+#define INCLUDE_DEFAULTS_MUSL_PREFIX			\
-+    { PREFIX_INCLUDE_DIR, 0, 0, 1, 0, 0},
-+#else
-+#define INCLUDE_DEFAULTS_MUSL_PREFIX
-+#endif
-+
-+#ifdef CROSS_INCLUDE_DIR
-+#define INCLUDE_DEFAULTS_MUSL_CROSS			\
-+    { CROSS_INCLUDE_DIR, "GCC", 0, 0, 0, 0},
-+#else
-+#define INCLUDE_DEFAULTS_MUSL_CROSS
-+#endif
-+
-+#ifdef TOOL_INCLUDE_DIR
-+#define INCLUDE_DEFAULTS_MUSL_TOOL			\
-+    { TOOL_INCLUDE_DIR, "BINUTILS", 0, 1, 0, 0},
-+#else
-+#define INCLUDE_DEFAULTS_MUSL_TOOL
-+#endif
-+
-+#ifdef NATIVE_SYSTEM_HEADER_DIR
-+#define INCLUDE_DEFAULTS_MUSL_NATIVE			\
-+    { NATIVE_SYSTEM_HEADER_DIR, 0, 0, 0, 1, 2 },	\
-+    { NATIVE_SYSTEM_HEADER_DIR, 0, 0, 0, 1, 0 },
-+#else
-+#define INCLUDE_DEFAULTS_MUSL_NATIVE
-+#endif
-+
-+#if defined (CROSS_DIRECTORY_STRUCTURE) && !defined (TARGET_SYSTEM_ROOT)
-+# undef INCLUDE_DEFAULTS_MUSL_LOCAL
-+# define INCLUDE_DEFAULTS_MUSL_LOCAL
-+# undef INCLUDE_DEFAULTS_MUSL_NATIVE
-+# define INCLUDE_DEFAULTS_MUSL_NATIVE
-+#else
-+# undef INCLUDE_DEFAULTS_MUSL_CROSS
-+# define INCLUDE_DEFAULTS_MUSL_CROSS
-+#endif
-+
-+#undef INCLUDE_DEFAULTS
-+#define INCLUDE_DEFAULTS				\
-+  {							\
-+    INCLUDE_DEFAULTS_MUSL_GPP				\
-+    INCLUDE_DEFAULTS_MUSL_PREFIX			\
-+    INCLUDE_DEFAULTS_MUSL_CROSS				\
-+    INCLUDE_DEFAULTS_MUSL_TOOL				\
-+    INCLUDE_DEFAULTS_MUSL_NATIVE			\
-+    { GCC_INCLUDE_DIR, "GCC", 0, 1, 0, 0 },		\
-+    { 0, 0, 0, 0, 0, 0 }				\
-+  }
-+#endif
-Index: gcc-5.4.0/gcc/config/linux.opt
-===================================================================
---- gcc-5.4.0.orig/gcc/config/linux.opt
-+++ gcc-5.4.0/gcc/config/linux.opt
-@@ -28,5 +28,9 @@ Target Report RejectNegative Var(linux_l
- Use GNU C library
- 
- muclibc
--Target Report RejectNegative Var(linux_libc,LIBC_UCLIBC) Negative(mbionic)
-+Target Report RejectNegative Var(linux_libc,LIBC_UCLIBC) Negative(mmusl)
- Use uClibc C library
-+
-+mmusl
-+Target Report RejectNegative Var(linux_libc,LIBC_MUSL) Negative(mbionic)
-+Use musl C library
-Index: gcc-5.4.0/gcc/config/rs6000/secureplt.h
-===================================================================
---- gcc-5.4.0.orig/gcc/config/rs6000/secureplt.h
-+++ gcc-5.4.0/gcc/config/rs6000/secureplt.h
-@@ -18,3 +18,4 @@ along with GCC; see the file COPYING3.
- <http://www.gnu.org/licenses/>.  */
- 
- #define CC1_SECURE_PLT_DEFAULT_SPEC "-msecure-plt"
-+#define LINK_SECURE_PLT_DEFAULT_SPEC "--secure-plt"
-Index: gcc-5.4.0/gcc/config/rs6000/sysv4.h
-===================================================================
---- gcc-5.4.0.orig/gcc/config/rs6000/sysv4.h
-+++ gcc-5.4.0/gcc/config/rs6000/sysv4.h
-@@ -538,6 +538,10 @@ ENDIAN_SELECT(" -mbig", " -mlittle", DEF
- #define CC1_SECURE_PLT_DEFAULT_SPEC ""
- #endif
- 
-+#ifndef LINK_SECURE_PLT_DEFAULT_SPEC
-+#define LINK_SECURE_PLT_DEFAULT_SPEC ""
-+#endif
-+
- /* Pass -G xxx to the compiler.  */
- #undef CC1_SPEC
- #define	CC1_SPEC "%{G*} %(cc1_cpu)" \
-@@ -889,6 +893,7 @@ ncrtn.o%s"
-   { "link_os_openbsd",		LINK_OS_OPENBSD_SPEC },			\
-   { "link_os_default",		LINK_OS_DEFAULT_SPEC },			\
-   { "cc1_secure_plt_default",	CC1_SECURE_PLT_DEFAULT_SPEC },		\
-+  { "link_secure_plt_default",	LINK_SECURE_PLT_DEFAULT_SPEC },		\
-   { "cpp_os_ads",		CPP_OS_ADS_SPEC },			\
-   { "cpp_os_yellowknife",	CPP_OS_YELLOWKNIFE_SPEC },		\
-   { "cpp_os_mvme",		CPP_OS_MVME_SPEC },			\
-Index: gcc-5.4.0/gcc/ginclude/stddef.h
-===================================================================
---- gcc-5.4.0.orig/gcc/ginclude/stddef.h
-+++ gcc-5.4.0/gcc/ginclude/stddef.h
-@@ -184,6 +184,7 @@ typedef __PTRDIFF_TYPE__ ptrdiff_t;
- #ifndef _GCC_SIZE_T
- #ifndef _SIZET_
- #ifndef __size_t
-+#ifndef __DEFINED_size_t /* musl */
- #define __size_t__	/* BeOS */
- #define __SIZE_T__	/* Cray Unicos/Mk */
- #define _SIZE_T
-@@ -200,6 +201,7 @@ typedef __PTRDIFF_TYPE__ ptrdiff_t;
- #define ___int_size_t_h
- #define _GCC_SIZE_T
- #define _SIZET_
-+#define __DEFINED_size_t /* musl */
- #if (defined (__FreeBSD__) && (__FreeBSD__ >= 5)) \
-   || defined(__DragonFly__) \
-   || defined(__FreeBSD_kernel__)
-@@ -218,6 +220,7 @@ typedef __SIZE_TYPE__ size_t;
- typedef long ssize_t;
- #endif /* __BEOS__ */
- #endif /* !(defined (__GNUG__) && defined (size_t)) */
-+#endif /* __DEFINED_size_t */
- #endif /* __size_t */
- #endif /* _SIZET_ */
- #endif /* _GCC_SIZE_T */
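The nested %{...} conditionals that CHOOSE_DYNAMIC_LINKER1 builds in the removed patch above are hard to read in spec form. As a hedged sketch only (this is not GCC's spec machinery, and the aarch64 linker paths are merely examples), the DEFAULT_LIBC == LIBC_MUSL case behaves like the following fallback chain:

#include <stdio.h>

/* stand-ins for the -mglibc/-muclibc/-mbionic command-line options */
struct libc_opts { int mglibc, muclibc, mbionic; };

/* DEFAULT_LIBC == LIBC_MUSL: an explicit option wins, otherwise musl */
static const char *choose_dynamic_linker(struct libc_opts o)
{
    if (o.mglibc)  return "/lib/ld-linux-aarch64.so.1"; /* GLIBC_DYNAMIC_LINKER  */
    if (o.muclibc) return "/lib/ld-uClibc.so.0";        /* UCLIBC_DYNAMIC_LINKER */
    if (o.mbionic) return "/system/bin/linker64";       /* BIONIC_DYNAMIC_LINKER */
    return "/lib/ld-musl-aarch64.so.1";                 /* MUSL_DYNAMIC_LINKER   */
}

int main(void)
{
    struct libc_opts defaults    = { 0, 0, 0 };
    struct libc_opts force_glibc = { 1, 0, 0 };

    printf("%s\n", choose_dynamic_linker(defaults));    /* musl ldso */
    printf("%s\n", choose_dynamic_linker(force_glibc)); /* glibc ldso */
    return 0;
}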
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0045-Support-for-arm-linux-musl.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0045-Support-for-arm-linux-musl.patch
deleted file mode 100644
index 3c1115a..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0045-Support-for-arm-linux-musl.patch
+++ /dev/null
@@ -1,216 +0,0 @@
-From a6c649571d49c972e6d207577780ada7e9b6bad5 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Tue, 8 Dec 2015 08:31:52 +0000
-Subject: [PATCH 45/57] Support for arm-linux-musl.
-
-Fix musl ldso for all arches
-
-Upstream-Status: backport [partial]
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- gcc/config/aarch64/aarch64-linux.h |  1 +
- gcc/config/arm/linux-eabi.h        | 17 +++++++++++++++++
- gcc/config/i386/linux.h            |  1 +
- gcc/config/i386/linux64.h          |  5 +++++
- gcc/config/mips/linux.h            |  8 +++++++-
- gcc/config/rs6000/linux64.h        | 13 +++++++++----
- gcc/config/rs6000/sysv4.h          | 10 +++++++---
- libitm/config/arm/hwcap.cc         |  4 ++++
- libitm/config/linux/x86/tls.h      |  8 ++++++--
- 9 files changed, 57 insertions(+), 10 deletions(-)
-
-diff --git a/gcc/config/aarch64/aarch64-linux.h b/gcc/config/aarch64/aarch64-linux.h
-index f9e65fc..1b2d0c0 100644
---- a/gcc/config/aarch64/aarch64-linux.h
-+++ b/gcc/config/aarch64/aarch64-linux.h
-@@ -22,6 +22,7 @@
- #define GCC_AARCH64_LINUX_H
- 
- #define GLIBC_DYNAMIC_LINKER  SYSTEMLIBS_DIR "ld-linux-aarch64%{mbig-endian:_be}%{mabi=ilp32:_ilp32}.so.1"
-+#define MUSL_DYNAMIC_LINKER  SYSTEMLIBS_DIR "ld-musl-aarch64.so.1"
- 
- #define MUSL_DYNAMIC_LINKER  SYSTEMLIBS_DIR "ld-musl-aarch64.so.1"
- 
-diff --git a/gcc/config/arm/linux-eabi.h b/gcc/config/arm/linux-eabi.h
-index 048a062..adb9c44 100644
---- a/gcc/config/arm/linux-eabi.h
-+++ b/gcc/config/arm/linux-eabi.h
-@@ -81,6 +81,23 @@
- #undef TARGET_FIX_V4BX_SPEC
- #define TARGET_FIX_V4BX_SPEC "%{mcpu=arm8|mcpu=arm810|mcpu=strongarm*|march=armv4: --fix-v4bx}"
- 
-+/* For ARM musl currently supports four dynamic linkers:
-+   - ld-musl-arm.so.1 - for the EABI-derived soft-float ABI
-+   - ld-musl-armhf.so.1 - for the EABI-derived hard-float ABI
-+   - ld-musl-armeb.so.1 - for the EABI-derived soft-float ABI, EB
-+   - ld-musl-armebhf.so.1 - for the EABI-derived hard-float ABI, EB
-+   musl does not support the legacy OABI mode.
-+   All the dynamic linkers live in /lib.
-+   We default to soft-float, EL. */
-+#undef  MUSL_DYNAMIC_LINKER
-+#if TARGET_BIG_ENDIAN_DEFAULT
-+#define MUSL_DYNAMIC_LINKER_E "%{mlittle-endian:;:eb}"
-+#else
-+#define MUSL_DYNAMIC_LINKER_E "%{mbig-endian:eb}"
-+#endif
-+#define MUSL_DYNAMIC_LINKER \
-+  SYSTEMLIBS_DIR "ld-musl-arm" MUSL_DYNAMIC_LINKER_E "%{mfloat-abi=hard:hf}.so.1"
-+
- /* At this point, bpabi.h will have clobbered LINK_SPEC.  We want to
-    use the GNU/Linux version, not the generic BPABI version.  */
- #undef  LINK_SPEC
-diff --git a/gcc/config/i386/linux.h b/gcc/config/i386/linux.h
-index 21ba2b2..e2b81e5 100644
---- a/gcc/config/i386/linux.h
-+++ b/gcc/config/i386/linux.h
-@@ -21,3 +21,4 @@ along with GCC; see the file COPYING3.  If not see
- 
- #define GNU_USER_LINK_EMULATION "elf_i386"
- #define GLIBC_DYNAMIC_LINKER SYSTEMLIBS_DIR "ld-linux.so.2"
-+#define MUSL_DYNAMIC_LINKER SYSTEMLIBS_DIR "ld-musl-i386.so.1"
-diff --git a/gcc/config/i386/linux64.h b/gcc/config/i386/linux64.h
-index 6185cce..5a3a977 100644
---- a/gcc/config/i386/linux64.h
-+++ b/gcc/config/i386/linux64.h
-@@ -30,3 +30,8 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
- #define GLIBC_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-linux.so.2"
- #define GLIBC_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld-linux-x86-64.so.2"
- #define GLIBC_DYNAMIC_LINKERX32 SYSTEMLIBS_DIR "ld-linux-x32.so.2"
-+
-+#define MUSL_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-musl-i386.so.1"
-+#define MUSL_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld-musl-x86_64.so.1"
-+#define MUSL_DYNAMIC_LINKERX32 SYSTEMLIBS_DIR "ld-musl-x32.so.1"
-+
-diff --git a/gcc/config/mips/linux.h b/gcc/config/mips/linux.h
-index c306afb..b899388 100644
---- a/gcc/config/mips/linux.h
-+++ b/gcc/config/mips/linux.h
-@@ -21,6 +21,12 @@ along with GCC; see the file COPYING3.  If not see
- #define GNU_USER_LINK_EMULATION64 "elf64%{EB:b}%{EL:l}tsmip"
- #define GNU_USER_LINK_EMULATIONN32 "elf32%{EB:b}%{EL:l}tsmipn32"
- 
-+#undef MUSL_DYNAMIC_LINKER32
-+#define MUSL_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-musl-mips%{EL:el}%{msoft-float:-sf}.so.1"
-+#undef MUSL_DYNAMIC_LINKER64
-+#define MUSL_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld-musl-mips64%{EL:el}%{msoft-float:-sf}.so.1"
-+#define MUSL_DYNAMIC_LINKERN32 SYSTEMLIBS_DIR "ld-musl-mipsn32%{EL:el}%{msoft-float:-sf}.so.1"
-+
- #define GLIBC_DYNAMIC_LINKER32 \
-   "%{mnan=2008:" SYSTEMLIBS_DIR "ld-linux-mipsn8.so.1;:" SYSTEMLIBS_DIR "ld.so.1}"
- #define GLIBC_DYNAMIC_LINKER64 \
-@@ -40,4 +46,4 @@ along with GCC; see the file COPYING3.  If not see
- #define BIONIC_DYNAMIC_LINKERN32 "/system/bin/linker32"
- #define GNU_USER_DYNAMIC_LINKERN32 \
-   CHOOSE_DYNAMIC_LINKER (GLIBC_DYNAMIC_LINKERN32, UCLIBC_DYNAMIC_LINKERN32, \
--                         BIONIC_DYNAMIC_LINKERN32)
-+                         BIONIC_DYNAMIC_LINKERN32,  MUSL_DYNAMIC_LINKERN32)
-diff --git a/gcc/config/rs6000/linux64.h b/gcc/config/rs6000/linux64.h
-index 31c4338..679da4b 100644
---- a/gcc/config/rs6000/linux64.h
-+++ b/gcc/config/rs6000/linux64.h
-@@ -365,17 +365,22 @@ extern int dot_symbols;
- #endif
- #define UCLIBC_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-uClibc.so.0"
- #define UCLIBC_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld64-uClibc.so.0"
-+#define MUSL_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-musl-powerpc.so.1"
-+#define MUSL_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld-musl-powerpc64.so.1"
-+
- #if DEFAULT_LIBC == LIBC_UCLIBC
--#define CHOOSE_DYNAMIC_LINKER(G, U) "%{mglibc:" G ";:" U "}"
-+#define CHOOSE_DYNAMIC_LINKER(G, U, M) "%{mglibc:" G ";:%{mmusl:" M ";:" U "}}"
- #elif DEFAULT_LIBC == LIBC_GLIBC
--#define CHOOSE_DYNAMIC_LINKER(G, U) "%{muclibc:" U ";:" G "}"
-+#define CHOOSE_DYNAMIC_LINKER(G, U, M) "%{muclibc:" U ";:%{mmusl:" M ";:" G "}}"
-+#elif DEFAULT_LIBC == LIBC_MUSL
-+#define CHOOSE_DYNAMIC_LINKER(G, U, M) "%{mglibc:" G ";:%{muclibc:" U ";:" M "}}"
- #else
- #error "Unsupported DEFAULT_LIBC"
- #endif
- #define GNU_USER_DYNAMIC_LINKER32 \
--  CHOOSE_DYNAMIC_LINKER (GLIBC_DYNAMIC_LINKER32, UCLIBC_DYNAMIC_LINKER32)
-+  CHOOSE_DYNAMIC_LINKER (GLIBC_DYNAMIC_LINKER32, UCLIBC_DYNAMIC_LINKER32, MUSL_DYNAMIC_LINKER32)
- #define GNU_USER_DYNAMIC_LINKER64 \
--  CHOOSE_DYNAMIC_LINKER (GLIBC_DYNAMIC_LINKER64, UCLIBC_DYNAMIC_LINKER64)
-+  CHOOSE_DYNAMIC_LINKER (GLIBC_DYNAMIC_LINKER64, UCLIBC_DYNAMIC_LINKER64, MUSL_DYNAMIC_LINKER64)
- 
- #undef  DEFAULT_ASM_ENDIAN
- #if (TARGET_DEFAULT & MASK_LITTLE_ENDIAN)
-diff --git a/gcc/config/rs6000/sysv4.h b/gcc/config/rs6000/sysv4.h
-index 7cd07e0..8794fa5 100644
---- a/gcc/config/rs6000/sysv4.h
-+++ b/gcc/config/rs6000/sysv4.h
-@@ -763,15 +763,19 @@ ENDIAN_SELECT(" -mbig", " -mlittle", DEFAULT_ASM_ENDIAN)
- 
- #define GLIBC_DYNAMIC_LINKER "/lib/ld.so.1"
- #define UCLIBC_DYNAMIC_LINKER "/lib/ld-uClibc.so.0"
-+#define MUSL_DYNAMIC_LINKER "/lib/ld-musl-powerpc.so.1"
-+
- #if DEFAULT_LIBC == LIBC_UCLIBC
--#define CHOOSE_DYNAMIC_LINKER(G, U) "%{mglibc:" G ";:" U "}"
-+#define CHOOSE_DYNAMIC_LINKER(G, U, M) "%{mglibc:" G ";:%{mmusl:" M ";:" U "}}"
-+#elif DEFAULT_LIBC == LIBC_MUSL
-+#define CHOOSE_DYNAMIC_LINKER(G, U, M) "%{mglibc:" G ";:%{muclibc:" U ";:" M "}}"
- #elif !defined (DEFAULT_LIBC) || DEFAULT_LIBC == LIBC_GLIBC
--#define CHOOSE_DYNAMIC_LINKER(G, U) "%{muclibc:" U ";:" G "}"
-+#define CHOOSE_DYNAMIC_LINKER(G, U, M) "%{muclibc:" U ";:%{mmusl:" M ";:" G "}}"
- #else
- #error "Unsupported DEFAULT_LIBC"
- #endif
- #define GNU_USER_DYNAMIC_LINKER \
--  CHOOSE_DYNAMIC_LINKER (GLIBC_DYNAMIC_LINKER, UCLIBC_DYNAMIC_LINKER)
-+  CHOOSE_DYNAMIC_LINKER (GLIBC_DYNAMIC_LINKER, UCLIBC_DYNAMIC_LINKER, MUSL_DYNAMIC_LINKER)
- 
- #define LINK_OS_LINUX_SPEC "-m elf32ppclinux %{!shared: %{!static: \
-   %{rdynamic:-export-dynamic} \
-diff --git a/libitm/config/arm/hwcap.cc b/libitm/config/arm/hwcap.cc
-index a1c2cfd..dd4fad2 100644
---- a/libitm/config/arm/hwcap.cc
-+++ b/libitm/config/arm/hwcap.cc
-@@ -40,7 +40,11 @@ int GTM_hwcap HIDDEN = 0
- 
- #ifdef __linux__
- #include <unistd.h>
-+#ifdef __GLIBC__
- #include <sys/fcntl.h>
-+#else
-+#include <fcntl.h>
-+#endif
- #include <elf.h>
- 
- static void __attribute__((constructor))
-diff --git a/libitm/config/linux/x86/tls.h b/libitm/config/linux/x86/tls.h
-index e731ab7..54ad8b6 100644
---- a/libitm/config/linux/x86/tls.h
-+++ b/libitm/config/linux/x86/tls.h
-@@ -25,16 +25,19 @@
- #ifndef LIBITM_X86_TLS_H
- #define LIBITM_X86_TLS_H 1
- 
--#if defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 10)
-+#if defined(__GLIBC_PREREQ)
-+#if __GLIBC_PREREQ(2, 10)
- /* Use slots in the TCB head rather than __thread lookups.
-    GLIBC has reserved words 10 through 13 for TM.  */
- #define HAVE_ARCH_GTM_THREAD 1
- #define HAVE_ARCH_GTM_THREAD_DISP 1
- #endif
-+#endif
- 
- #include "config/generic/tls.h"
- 
--#if defined(__GLIBC_PREREQ) && __GLIBC_PREREQ(2, 10)
-+#if defined(__GLIBC_PREREQ)
-+#if __GLIBC_PREREQ(2, 10)
- namespace GTM HIDDEN {
- 
- #ifdef __x86_64__
-@@ -101,5 +104,6 @@ static inline void set_abi_disp(struct abi_dispatch *x)
- 
- } // namespace GTM
- #endif /* >= GLIBC 2.10 */
-+#endif
- 
- #endif // LIBITM_X86_TLS_H
--- 
-2.7.4
-
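A small, hedged aside on the four ARM dynamic-linker names listed in the linux-eabi.h comment of the removed patch above (standalone C, not GCC code): the name is composed from the endianness and float-ABI choices exactly as the spec string does, with soft-float little-endian as the default.

#include <stdio.h>

/* builds /lib/ld-musl-arm[eb][hf].so.1: "eb" is appended for big-endian,
   "hf" for the hard-float ABI, nothing for the soft-float EL default */
static void musl_arm_ldso(char *buf, size_t n, int big_endian, int hard_float)
{
    snprintf(buf, n, "/lib/ld-musl-arm%s%s.so.1",
             big_endian ? "eb" : "", hard_float ? "hf" : "");
}

int main(void)
{
    char buf[64];

    for (int eb = 0; eb <= 1; eb++)
        for (int hf = 0; hf <= 1; hf++) {
            musl_arm_ldso(buf, sizeof buf, eb, hf);
            printf("%s\n", buf);
        }
    return 0;
}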
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0046-Get-rid-of-ever-broken-fixincludes-on-musl.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0046-Get-rid-of-ever-broken-fixincludes-on-musl.patch
deleted file mode 100644
index ddb0fc4..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0046-Get-rid-of-ever-broken-fixincludes-on-musl.patch
+++ /dev/null
@@ -1,28 +0,0 @@
-From 047116e8c9cbb340264f4f28db3f21a68ba57ff3 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Tue, 8 Dec 2015 08:32:24 +0000
-Subject: [PATCH 46/46] Get rid of ever-broken fixincludes on musl.
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
- fixincludes/mkfixinc.sh | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/fixincludes/mkfixinc.sh b/fixincludes/mkfixinc.sh
-index 6653fedb..0d96c8c 100755
---- a/fixincludes/mkfixinc.sh
-+++ b/fixincludes/mkfixinc.sh
-@@ -19,7 +19,8 @@ case $machine in
-     powerpc-*-eabi*    | \
-     powerpc-*-rtems*   | \
-     powerpcle-*-eabisim* | \
--    powerpcle-*-eabi* )
-+    powerpcle-*-eabi* | \
-+    *-musl* )
- 	#  IF there is no include fixing,
- 	#  THEN create a no-op fixer and exit
- 	(echo "#! /bin/sh" ; echo "exit 0" ) > ${target}
--- 
-2.6.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0047-nios2-Define-MUSL_DYNAMIC_LINKER.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0047-nios2-Define-MUSL_DYNAMIC_LINKER.patch
deleted file mode 100644
index a1cfb9c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0047-nios2-Define-MUSL_DYNAMIC_LINKER.patch
+++ /dev/null
@@ -1,28 +0,0 @@
-From f5ca07132b9292d2045ca7204e9cbfde2e59d0bf Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Tue, 2 Feb 2016 10:26:10 -0800
-Subject: [PATCH 47/48] nios2: Define MUSL_DYNAMIC_LINKER
-
-Signed-off-by: Marek Vasut <marex@denx.de>
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
-Upstream-Status: Pending
-
- gcc/config/nios2/linux.h | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/gcc/config/nios2/linux.h b/gcc/config/nios2/linux.h
-index f43f655..5587ab3 100644
---- a/gcc/config/nios2/linux.h
-+++ b/gcc/config/nios2/linux.h
-@@ -30,6 +30,7 @@
- #define CPP_SPEC "%{posix:-D_POSIX_SOURCE} %{pthread:-D_REENTRANT}"
- 
- #define GLIBC_DYNAMIC_LINKER "/lib/ld-linux-nios2.so.1"
-+#define MUSL_DYNAMIC_LINKER  "/lib/ld-musl-nios2.so.1"
- 
- #undef LINK_SPEC
- #define LINK_SPEC LINK_SPEC_ENDIAN \
--- 
-2.7.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0048-ssp_nonshared.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0048-ssp_nonshared.patch
deleted file mode 100644
index 5ddd40a..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0048-ssp_nonshared.patch
+++ /dev/null
@@ -1,29 +0,0 @@
-From 3cb6013cf287ed9b1247ea37541e64b9810a121d Mon Sep 17 00:00:00 2001
-From: Szabolcs Nagy <nsz@port70.net>
-Date: Sat, 7 Nov 2015 14:58:40 +0000
-Subject: [PATCH 48/48] ssp_nonshared
-
----
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-
- gcc/gcc.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-diff --git a/gcc/gcc.c b/gcc/gcc.c
-index 8f53aea..3ddc658 100644
---- a/gcc/gcc.c
-+++ b/gcc/gcc.c
-@@ -732,7 +732,8 @@ proper position among the other output files.  */
- #ifndef LINK_SSP_SPEC
- #ifdef TARGET_LIBC_PROVIDES_SSP
- #define LINK_SSP_SPEC "%{fstack-protector|fstack-protector-all" \
--		       "|fstack-protector-strong|fstack-protector-explicit:}"
-+		       "|fstack-protector-strong|fstack-protector-explicit" \
-+		       ":-lssp_nonshared}"
- #else
- #define LINK_SSP_SPEC "%{fstack-protector|fstack-protector-all" \
- 		       "|fstack-protector-strong|fstack-protector-explicit" \
--- 
-2.7.0
-
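For context on when the amended LINK_SSP_SPEC above takes effect, here is a minimal, hedged example of ordinary user code (not part of the patch): compiled with -fstack-protector-strong, a function like this gets a stack canary, and on a toolchain where libc provides SSP the deleted spec change makes the driver also append -lssp_nonshared to the link line to supply the non-shared part of the stack-protector runtime.

#include <stdio.h>

/* a local character buffer plus string handling: the shape of function
   that -fstack-protector-strong instruments with a canary check */
static void greet(const char *name)
{
    char buf[32];

    snprintf(buf, sizeof buf, "hello, %s", name);
    puts(buf);
}

int main(void)
{
    greet("musl");
    return 0;
}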
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0049-Disable-the-weak-reference-logic-in-gthr.h-for-os-ge.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0049-Disable-the-weak-reference-logic-in-gthr.h-for-os-ge.patch
deleted file mode 100644
index 0ea5143..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0049-Disable-the-weak-reference-logic-in-gthr.h-for-os-ge.patch
+++ /dev/null
@@ -1,78 +0,0 @@
-From 553d8e3b9073ff3e0a9d2fac9b1823fb17ad247c Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Tue, 2 Feb 2016 21:00:18 -0800
-Subject: [PATCH 49/51] Disable the weak reference logic in gthr.h for
- os/generic
-
-It does not work unless workarounds are present in gthr-posix.h.
-
-origin of patch
-http://port70.net/~nsz/musl/gcc-5.3.0/0004-gthr.patch
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
-Upstream-Status: Pending
-
- libgfortran/acinclude.m4                    | 2 +-
- libgfortran/configure                       | 2 +-
- libstdc++-v3/config/os/generic/os_defines.h | 5 +++++
- libstdc++-v3/configure.host                 | 3 +++
- 4 files changed, 10 insertions(+), 2 deletions(-)
-
-diff --git a/libgfortran/acinclude.m4 b/libgfortran/acinclude.m4
-index ba890f9..30b8b1a6 100644
---- a/libgfortran/acinclude.m4
-+++ b/libgfortran/acinclude.m4
-@@ -100,7 +100,7 @@ void foo (void);
- 	      [Define to 1 if the target supports #pragma weak])
-   fi
-   case "$host" in
--    *-*-darwin* | *-*-hpux* | *-*-cygwin* | *-*-mingw* )
-+    *-*-darwin* | *-*-hpux* | *-*-cygwin* | *-*-mingw* | *-*-musl* )
-       AC_DEFINE(GTHREAD_USE_WEAK, 0,
- 		[Define to 0 if the target shouldn't use #pragma weak])
-       ;;
-diff --git a/libgfortran/configure b/libgfortran/configure
-index 5d47e65..cdf9695 100755
---- a/libgfortran/configure
-+++ b/libgfortran/configure
-@@ -26456,7 +26456,7 @@ $as_echo "#define SUPPORTS_WEAK 1" >>confdefs.h
- 
-   fi
-   case "$host" in
--    *-*-darwin* | *-*-hpux* | *-*-cygwin* | *-*-mingw* )
-+    *-*-darwin* | *-*-hpux* | *-*-cygwin* | *-*-mingw* | *-*-musl* )
- 
- $as_echo "#define GTHREAD_USE_WEAK 0" >>confdefs.h
- 
-diff --git a/libstdc++-v3/config/os/generic/os_defines.h b/libstdc++-v3/config/os/generic/os_defines.h
-index 45bf52a..103ec0e 100644
---- a/libstdc++-v3/config/os/generic/os_defines.h
-+++ b/libstdc++-v3/config/os/generic/os_defines.h
-@@ -33,4 +33,9 @@
- // System-specific #define, typedefs, corrections, etc, go here.  This
- // file will come before all others.
- 
-+// Disable the weak reference logic in gthr.h for os/generic because it
-+// is broken on every platform unless there is implementation specific
-+// workaround in gthr-posix.h and at link-time for static linking.
-+#define _GLIBCXX_GTHREAD_USE_WEAK 0
-+
- #endif
-diff --git a/libstdc++-v3/configure.host b/libstdc++-v3/configure.host
-index 1756444..2a87bb8 100644
---- a/libstdc++-v3/configure.host
-+++ b/libstdc++-v3/configure.host
-@@ -273,6 +273,9 @@ case "${host_os}" in
-   freebsd*)
-     os_include_dir="os/bsd/freebsd"
-     ;;
-+  linux-musl*)
-+    os_include_dir="os/generic"
-+    ;;
-   gnu* | linux* | kfreebsd*-gnu | knetbsd*-gnu)
-     # check for musl by target
-     case "${host_os}" in
--- 
-2.7.0
-
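The weak-reference logic that the removed patch disables follows a common ELF idiom. As a hedged standalone illustration (the general pattern only, not the actual gthr-posix.h code), a header can declare a symbol weak and test its address to decide whether the corresponding library was linked in; per the os_defines.h comment above, that address test is what misbehaves without the gthr-posix.h and static-link workarounds, hence forcing it off for os/generic.

#include <stdio.h>

/* weak reference to a symbol that may or may not be provided at link time;
   gthr.h applies the same trick (via #pragma weak) to pthread entry points */
extern void optional_feature(void) __attribute__((weak));

int main(void)
{
    if (optional_feature)   /* null if no object or library defined it */
        optional_feature();
    else
        puts("optional_feature not linked; taking the single-threaded path");
    return 0;
}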
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0050-powerpc-pass-secure-plt-to-the-linker.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0050-powerpc-pass-secure-plt-to-the-linker.patch
deleted file mode 100644
index b2f2bbd..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0050-powerpc-pass-secure-plt-to-the-linker.patch
+++ /dev/null
@@ -1,66 +0,0 @@
-From 4fa0cf03678f849917dcc3d149989b7fecdbe276 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Tue, 2 Feb 2016 21:10:00 -0800
-Subject: [PATCH 50/51] powerpc pass --secure-plt to the linker
-
-When secure-plt is enabled, the right options are not passed to the linker.
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
-Upstream-Status: Pending
- gcc/config/rs6000/linux64.h | 4 ++++
- gcc/config/rs6000/sysv4.h   | 2 ++
- 2 files changed, 6 insertions(+)
-
-diff --git a/gcc/config/rs6000/linux64.h b/gcc/config/rs6000/linux64.h
-index 679da4b..3ca7cd7 100644
---- a/gcc/config/rs6000/linux64.h
-+++ b/gcc/config/rs6000/linux64.h
-@@ -174,20 +174,24 @@ extern int dot_symbols;
- #undef	ASM_DEFAULT_SPEC
- #undef	ASM_SPEC
- #undef	LINK_OS_LINUX_SPEC
-+#undef	LINK_SECURE_PLT_SPEC
- 
- #ifndef	RS6000_BI_ARCH
- #define	ASM_DEFAULT_SPEC "-mppc64"
- #define	ASM_SPEC	 "%(asm_spec64) %(asm_spec_common)"
- #define	LINK_OS_LINUX_SPEC "%(link_os_linux_spec64)"
-+#define	LINK_SECURE_PLT_SPEC ""
- #else
- #if DEFAULT_ARCH64_P
- #define	ASM_DEFAULT_SPEC "-mppc%{!m32:64}"
- #define	ASM_SPEC	 "%{m32:%(asm_spec32)}%{!m32:%(asm_spec64)} %(asm_spec_common)"
- #define	LINK_OS_LINUX_SPEC "%{m32:%(link_os_linux_spec32)}%{!m32:%(link_os_linux_spec64)}"
-+#define	LINK_SECURE_PLT_SPEC "%{m32: " LINK_SECURE_PLT_DEFAULT_SPEC "}"
- #else
- #define	ASM_DEFAULT_SPEC "-mppc%{m64:64}"
- #define	ASM_SPEC	 "%{!m64:%(asm_spec32)}%{m64:%(asm_spec64)} %(asm_spec_common)"
- #define	LINK_OS_LINUX_SPEC "%{!m64:%(link_os_linux_spec32)}%{m64:%(link_os_linux_spec64)}"
-+#define	LINK_SECURE_PLT_SPEC "%{!m64: " LINK_SECURE_PLT_DEFAULT_SPEC "}"
- #endif
- #endif
- 
-diff --git a/gcc/config/rs6000/sysv4.h b/gcc/config/rs6000/sysv4.h
-index 8794fa5..0835551 100644
---- a/gcc/config/rs6000/sysv4.h
-+++ b/gcc/config/rs6000/sysv4.h
-@@ -571,6 +571,7 @@ ENDIAN_SELECT(" -mbig", " -mlittle", DEFAULT_ASM_ENDIAN)
-                : %(link_start_default)     }"
- 
- #define LINK_START_DEFAULT_SPEC ""
-+#define LINK_SECURE_PLT_SPEC LINK_SECURE_PLT_DEFAULT_SPEC
- 
- #undef	LINK_SPEC
- #define	LINK_SPEC "\
-@@ -578,6 +579,7 @@ ENDIAN_SELECT(" -mbig", " -mlittle", DEFAULT_ASM_ENDIAN)
- %{R*} \
- %(link_shlib) \
- %{!T*: %(link_start) } \
-+%{!static: %{!mbss-plt: %(link_secure_plt_default)}} \
- %(link_os)"
- 
- /* Shared libraries are not default.  */
--- 
-2.7.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0051-Ignore-fdebug-prefix-map-in-producer-string-by-Danie.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0051-Ignore-fdebug-prefix-map-in-producer-string-by-Danie.patch
deleted file mode 100644
index e8f79b5..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0051-Ignore-fdebug-prefix-map-in-producer-string-by-Danie.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-From 32593b38082ea65f4c82159254adf1e0dc2423be Mon Sep 17 00:00:00 2001
-From: bernds <bernds@138bc75d-0d04-0410-961f-82ee72b054a4>
-Date: Tue, 16 Feb 2016 03:15:15 -0500
-Subject: [PATCH] Ignore -fdebug-prefix-map in producer string (by Daniel Kahn
- Gillmor)
-
-* dwarf2out.c (gen_producer_string): Ignore -fdebug-prefix-map.
-
-git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@231835 138bc75d-0d04-0410-961f-82ee72b054a4
-
-Upstream-Status: Backport
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- gcc/dwarf2out.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/gcc/dwarf2out.c b/gcc/dwarf2out.c
-index 3614c10..526f114 100644
---- a/gcc/dwarf2out.c
-+++ b/gcc/dwarf2out.c
-@@ -19670,6 +19670,7 @@ gen_producer_string (void)
-       case OPT_fpreprocessed:
-       case OPT_fltrans_output_list_:
-       case OPT_fresolution_:
-+      case OPT_fdebug_prefix_map_:
- 	/* Ignore these.  */
- 	continue;
-       default:
--- 
-1.9.1
-
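The intent of the one-line change above is reproducible builds: an option whose argument embeds a build directory must not leak into the DWARF producer string. As a hedged sketch of the idea only (the helper name is invented; this is not gen_producer_string()):

#include <stdio.h>
#include <string.h>

/* mimic gen_producer_string() skipping OPT_fdebug_prefix_map_: drop options
   whose argument embeds a build path so two builds from different
   directories record the same producer string */
static int keep_in_producer(const char *opt)
{
    return strncmp(opt, "-fdebug-prefix-map=",
                   strlen("-fdebug-prefix-map=")) != 0;
}

int main(void)
{
    const char *opts[] = {
        "-O2", "-g", "-fdebug-prefix-map=/home/builder/src=/usr/src"
    };

    for (unsigned i = 0; i < sizeof opts / sizeof opts[0]; i++)
        if (keep_in_producer(opts[i]))
            printf("%s ", opts[i]);   /* prints "-O2 -g " */
    putchar('\n');
    return 0;
}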
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0052-nios2-use-ret-with-r31.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0052-nios2-use-ret-with-r31.patch
deleted file mode 100644
index f3cb47f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0052-nios2-use-ret-with-r31.patch
+++ /dev/null
@@ -1,103 +0,0 @@
-From 1d67120d95c2c6e0ed4f7357d1cc62887eaba463 Mon Sep 17 00:00:00 2001
-From: sandra <sandra@138bc75d-0d04-0410-961f-82ee72b054a4>
-Date: Tue, 12 May 2015 15:57:22 +0000
-Subject: [PATCH] 2015-05-12  Chung-Lin Tang  <cltang@codesourcery.com>
- Sandra Loosemore <sandra@codesourcery.com>
-
-	gcc/
-	* config/nios2/nios2.h (enum reg_class): Add IJMP_REGS enum
-	value.
-	(REG_CLASS_NAMES): Add "IJMP_REGS".
-	(REG_CLASS_CONTENTS): Add new entry for IJMP_REGS.
-	* config/nios2/nios2.md (indirect_jump,*tablejump): Adjust to
-	use new "c" register constraint.
-	* config/nios2/constraint.md (c): New register constraint
-	corresponding to IJMP_REGS.
-
-
-
-git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@223082 138bc75d-0d04-0410-961f-82ee72b054a4
-Signed-off-by: Marek Vasut <marex@denx.de>
-Upstream-Status: Backport [ git://gcc.gnu.org/git/gcc.git 1d67120d95c2c6e0ed4f7357d1cc62887eaba463 ]
----
- gcc/ChangeLog                   | 12 ++++++++++++
- gcc/config/nios2/constraints.md |  3 +++
- gcc/config/nios2/nios2.h        | 11 +++++++----
- gcc/config/nios2/nios2.md       |  4 ++--
- 4 files changed, 24 insertions(+), 6 deletions(-)
-
-diff --git a/gcc/config/nios2/constraints.md b/gcc/config/nios2/constraints.md
-index f4bd9f7..735f892 100644
---- a/gcc/config/nios2/constraints.md
-+++ b/gcc/config/nios2/constraints.md
-@@ -39,6 +39,9 @@
- 
- ;; Register constraints
- 
-+(define_register_constraint "c" "IJMP_REGS"
-+  "A register suitable for an indirect jump.")
-+
- (define_register_constraint "j" "SIB_REGS"
-   "A register suitable for an indirect sibcall.")
- 
-diff --git a/gcc/config/nios2/nios2.h b/gcc/config/nios2/nios2.h
-index 510ab5f..ac33978 100644
---- a/gcc/config/nios2/nios2.h
-+++ b/gcc/config/nios2/nios2.h
-@@ -173,6 +173,7 @@ enum reg_class
- {
-   NO_REGS,
-   SIB_REGS,
-+  IJMP_REGS,
-   GP_REGS,
-   ALL_REGS,
-   LIM_REG_CLASSES
-@@ -183,6 +184,7 @@ enum reg_class
- #define REG_CLASS_NAMES   \
-   {  "NO_REGS",		  \
-      "SIB_REGS",	  \
-+     "IJMP_REGS",	  \
-      "GP_REGS",           \
-      "ALL_REGS" }
- 
-@@ -190,10 +192,11 @@ enum reg_class
- 
- #define REG_CLASS_CONTENTS			\
-   {						\
--    /* NO_REGS  */ { 0, 0},			\
--    /* SIB_REGS */ { 0xfe0c, 0},		\
--    /* GP_REGS  */ {~0, 0},			\
--    /* ALL_REGS */ {~0,~0}			\
-+    /* NO_REGS    */ { 0, 0},			\
-+    /* SIB_REGS   */ { 0xfe0c, 0},		\
-+    /* IJMP_REGS  */ { 0x7fffffff, 0},		\
-+    /* GP_REGS    */ {~0, 0},			\
-+    /* ALL_REGS   */ {~0,~0}			\
-   }
- 
- 
-diff --git a/gcc/config/nios2/nios2.md b/gcc/config/nios2/nios2.md
-index 7b35d269..36ef101 100644
---- a/gcc/config/nios2/nios2.md
-+++ b/gcc/config/nios2/nios2.md
-@@ -697,7 +697,7 @@
- ; check or adjust for overflow.
- 
- (define_insn "indirect_jump"
--  [(set (pc) (match_operand:SI 0 "register_operand" "r"))]
-+  [(set (pc) (match_operand:SI 0 "register_operand" "c"))]
-   ""
-   "jmp\\t%0"
-   [(set_attr "type" "control")])
-@@ -811,7 +811,7 @@
- 
- (define_insn "*tablejump"
-   [(set (pc)
--        (match_operand:SI 0 "register_operand" "r"))
-+        (match_operand:SI 0 "register_operand" "c"))
-    (use (label_ref (match_operand 1 "" "")))]
-   ""
-   "jmp\\t%0"
--- 
-2.7.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0053-expr.c-PR-target-65358-Avoid-clobbering-partial-argu.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0053-expr.c-PR-target-65358-Avoid-clobbering-partial-argu.patch
deleted file mode 100644
index c18f40e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0053-expr.c-PR-target-65358-Avoid-clobbering-partial-argu.patch
+++ /dev/null
@@ -1,307 +0,0 @@
-From 536b8318974495cde2b42c3c2742748e2b271be0 Mon Sep 17 00:00:00 2001
-From: ktkachov <ktkachov@138bc75d-0d04-0410-961f-82ee72b054a4>
-Date: Wed, 27 May 2015 13:25:01 +0000
-Subject: [PATCH] PR target/65358 Avoid clobbering partial argument during
- sibcall
-
-	PR target/65358
-	* expr.c (memory_load_overlap): New function.
-	(emit_push_insn): When pushing partial args to the stack would
-	clobber the register part load the overlapping part into a pseudo
-	and put it into the hard reg after pushing.  Change return type
-	to bool.  Add bool argument.
-	* expr.h (emit_push_insn): Change return type to bool.
-	Add bool argument.
-	* calls.c (expand_call): Cancel sibcall optimization when encountering
-	partial argument on targets with ARGS_GROW_DOWNWARD and
-	!STACK_GROWS_DOWNWARD.
-	(emit_library_call_value_1): Update callsite of emit_push_insn.
-	(store_one_arg): Likewise.
-
-	PR target/65358
-	* gcc.dg/pr65358.c: New test.
-
-git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@223753 138bc75d-0d04-0410-961f-82ee72b054a4
-
-Upstream-Status: Backport from 6.0
-Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
----
- gcc/calls.c                    | 17 ++++++--
- gcc/expr.c                     | 90 +++++++++++++++++++++++++++++++++++++-----
- gcc/expr.h                     |  4 +-
- gcc/testsuite/gcc.dg/pr65358.c | 33 ++++++++++++++++
- 4 files changed, 129 insertions(+), 15 deletions(-)
- create mode 100644 gcc/testsuite/gcc.dg/pr65358.c
-
-diff --git a/gcc/calls.c b/gcc/calls.c
-index ee8ea5f..2334381 100644
---- a/gcc/calls.c
-+++ b/gcc/calls.c
-@@ -3236,6 +3236,14 @@ expand_call (tree exp, rtx target, int ignore)
- 	    {
- 	      rtx_insn *before_arg = get_last_insn ();
- 
-+	     /* On targets with weird calling conventions (e.g. PA) it's
-+		hard to ensure that all cases of argument overlap between
-+		stack and registers work.  Play it safe and bail out.  */
-+#if defined(ARGS_GROW_DOWNWARD) && !defined(STACK_GROWS_DOWNWARD)
-+		  sibcall_failure = 1;
-+		  break;
-+#endif
-+
- 	      if (store_one_arg (&args[i], argblock, flags,
- 				 adjusted_args_size.var != 0,
- 				 reg_parm_stack_space)
-@@ -4279,7 +4287,7 @@ emit_library_call_value_1 (int retval, rtx orgfun, rtx value,
- 			  partial, reg, 0, argblock,
- 			  GEN_INT (argvec[argnum].locate.offset.constant),
- 			  reg_parm_stack_space,
--			  ARGS_SIZE_RTX (argvec[argnum].locate.alignment_pad));
-+			  ARGS_SIZE_RTX (argvec[argnum].locate.alignment_pad), false);
- 
- 	  /* Now mark the segment we just used.  */
- 	  if (ACCUMULATE_OUTGOING_ARGS)
-@@ -4886,10 +4894,11 @@ store_one_arg (struct arg_data *arg, rtx argblock, int flags,
- 
-       /* This isn't already where we want it on the stack, so put it there.
- 	 This can either be done with push or copy insns.  */
--      emit_push_insn (arg->value, arg->mode, TREE_TYPE (pval), NULL_RTX,
-+      if (!emit_push_insn (arg->value, arg->mode, TREE_TYPE (pval), NULL_RTX,
- 		      parm_align, partial, reg, used - size, argblock,
- 		      ARGS_SIZE_RTX (arg->locate.offset), reg_parm_stack_space,
--		      ARGS_SIZE_RTX (arg->locate.alignment_pad));
-+		      ARGS_SIZE_RTX (arg->locate.alignment_pad), true))
-+	sibcall_failure = 1;
- 
-       /* Unless this is a partially-in-register argument, the argument is now
- 	 in the stack.  */
-@@ -5001,7 +5010,7 @@ store_one_arg (struct arg_data *arg, rtx argblock, int flags,
-       emit_push_insn (arg->value, arg->mode, TREE_TYPE (pval), size_rtx,
- 		      parm_align, partial, reg, excess, argblock,
- 		      ARGS_SIZE_RTX (arg->locate.offset), reg_parm_stack_space,
--		      ARGS_SIZE_RTX (arg->locate.alignment_pad));
-+		      ARGS_SIZE_RTX (arg->locate.alignment_pad), false);
- 
-       /* Unless this is a partially-in-register argument, the argument is now
- 	 in the stack.
-diff --git a/gcc/expr.c b/gcc/expr.c
-index 5c09550..24a6293 100644
---- a/gcc/expr.c
-+++ b/gcc/expr.c
-@@ -4121,12 +4121,35 @@ emit_single_push_insn (machine_mode mode, rtx x, tree type)
- }
- #endif
- 
-+/* If reading SIZE bytes from X will end up reading from
-+   Y return the number of bytes that overlap.  Return -1
-+   if there is no overlap or -2 if we can't determine
-+   (for example when X and Y have different base registers).  */
-+
-+static int
-+memory_load_overlap (rtx x, rtx y, HOST_WIDE_INT size)
-+{
-+  rtx tmp = plus_constant (Pmode, x, size);
-+  rtx sub = simplify_gen_binary (MINUS, Pmode, tmp, y);
-+
-+  if (!CONST_INT_P (sub))
-+    return -2;
-+
-+  HOST_WIDE_INT val = INTVAL (sub);
-+
-+  return IN_RANGE (val, 1, size) ? val : -1;
-+}
-+
- /* Generate code to push X onto the stack, assuming it has mode MODE and
-    type TYPE.
-    MODE is redundant except when X is a CONST_INT (since they don't
-    carry mode info).
-    SIZE is an rtx for the size of data to be copied (in bytes),
-    needed only if X is BLKmode.
-+   Return true if successful.  May return false if asked to push a
-+   partial argument during a sibcall optimization (as specified by
-+   SIBCALL_P) and the incoming and outgoing pointers cannot be shown
-+   to not overlap.
- 
-    ALIGN (in bits) is maximum alignment we can assume.
- 
-@@ -4152,11 +4175,11 @@ emit_single_push_insn (machine_mode mode, rtx x, tree type)
-    for arguments passed in registers.  If nonzero, it will be the number
-    of bytes required.  */
- 
--void
-+bool
- emit_push_insn (rtx x, machine_mode mode, tree type, rtx size,
- 		unsigned int align, int partial, rtx reg, int extra,
- 		rtx args_addr, rtx args_so_far, int reg_parm_stack_space,
--		rtx alignment_pad)
-+		rtx alignment_pad, bool sibcall_p)
- {
-   rtx xinner;
-   enum direction stack_direction
-@@ -4179,6 +4202,10 @@ emit_push_insn (rtx x, machine_mode mode, tree type, rtx size,
- 
-   xinner = x;
- 
-+  int nregs = partial / UNITS_PER_WORD;
-+  rtx *tmp_regs = NULL;
-+  int overlapping = 0;
-+
-   if (mode == BLKmode
-       || (STRICT_ALIGNMENT && align < GET_MODE_ALIGNMENT (mode)))
-     {
-@@ -4309,6 +4336,43 @@ emit_push_insn (rtx x, machine_mode mode, tree type, rtx size,
- 	     PARM_BOUNDARY.  Assume the caller isn't lying.  */
- 	  set_mem_align (target, align);
- 
-+	  /* If part should go in registers and pushing to that part would
-+	     overwrite some of the values that need to go into regs, load the
-+	     overlapping values into temporary pseudos to be moved into the hard
-+	     regs at the end after the stack pushing has completed.
-+	     We cannot load them directly into the hard regs here because
-+	     they can be clobbered by the block move expansions.
-+	     See PR 65358.  */
-+
-+	  if (partial > 0 && reg != 0 && mode == BLKmode
-+	      && GET_CODE (reg) != PARALLEL)
-+	    {
-+	      overlapping = memory_load_overlap (XEXP (x, 0), temp, partial);
-+	      if (overlapping > 0)
-+	        {
-+		  gcc_assert (overlapping % UNITS_PER_WORD == 0);
-+		  overlapping /= UNITS_PER_WORD;
-+
-+		  tmp_regs = XALLOCAVEC (rtx, overlapping);
-+
-+		  for (int i = 0; i < overlapping; i++)
-+		    tmp_regs[i] = gen_reg_rtx (word_mode);
-+
-+		  for (int i = 0; i < overlapping; i++)
-+		    emit_move_insn (tmp_regs[i],
-+				    operand_subword_force (target, i, mode));
-+	        }
-+	      else if (overlapping == -1)
-+		overlapping = 0;
-+	      /* Could not determine whether there is overlap.
-+	         Fail the sibcall.  */
-+	      else
-+		{
-+		  overlapping = 0;
-+		  if (sibcall_p)
-+		    return false;
-+		}
-+	    }
- 	  emit_block_move (target, xinner, size, BLOCK_OP_CALL_PARM);
- 	}
-     }
-@@ -4363,12 +4427,13 @@ emit_push_insn (rtx x, machine_mode mode, tree type, rtx size,
- 	 has a size a multiple of a word.  */
-       for (i = size - 1; i >= not_stack; i--)
- 	if (i >= not_stack + offset)
--	  emit_push_insn (operand_subword_force (x, i, mode),
-+	  if (!emit_push_insn (operand_subword_force (x, i, mode),
- 			  word_mode, NULL_TREE, NULL_RTX, align, 0, NULL_RTX,
- 			  0, args_addr,
- 			  GEN_INT (args_offset + ((i - not_stack + skip)
- 						  * UNITS_PER_WORD)),
--			  reg_parm_stack_space, alignment_pad);
-+			  reg_parm_stack_space, alignment_pad, sibcall_p))
-+	    return false;
-     }
-   else
-     {
-@@ -4411,9 +4476,8 @@ emit_push_insn (rtx x, machine_mode mode, tree type, rtx size,
- 	}
-     }
- 
--  /* If part should go in registers, copy that part
--     into the appropriate registers.  Do this now, at the end,
--     since mem-to-mem copies above may do function calls.  */
-+  /* Move the partial arguments into the registers and any overlapping
-+     values that we moved into the pseudos in tmp_regs.  */
-   if (partial > 0 && reg != 0)
-     {
-       /* Handle calls that pass values in multiple non-contiguous locations.
-@@ -4421,9 +4485,15 @@ emit_push_insn (rtx x, machine_mode mode, tree type, rtx size,
-       if (GET_CODE (reg) == PARALLEL)
- 	emit_group_load (reg, x, type, -1);
-       else
--	{
-+        {
- 	  gcc_assert (partial % UNITS_PER_WORD == 0);
--	  move_block_to_reg (REGNO (reg), x, partial / UNITS_PER_WORD, mode);
-+	  move_block_to_reg (REGNO (reg), x, nregs - overlapping, mode);
-+
-+	  for (int i = 0; i < overlapping; i++)
-+	    emit_move_insn (gen_rtx_REG (word_mode, REGNO (reg)
-+						    + nregs - overlapping + i),
-+			    tmp_regs[i]);
-+
- 	}
-     }
- 
-@@ -4432,6 +4502,8 @@ emit_push_insn (rtx x, machine_mode mode, tree type, rtx size,
- 
-   if (alignment_pad && args_addr == 0)
-     anti_adjust_stack (alignment_pad);
-+
-+  return true;
- }
- 
- /* Return X if X can be used as a subtarget in a sequence of arithmetic
-diff --git a/gcc/expr.h b/gcc/expr.h
-index 867852e..5fcc13f 100644
---- a/gcc/expr.h
-+++ b/gcc/expr.h
-@@ -218,8 +218,8 @@ extern rtx emit_move_resolve_push (machine_mode, rtx);
- extern rtx push_block (rtx, int, int);
- 
- /* Generate code to push something onto the stack, given its mode and type.  */
--extern void emit_push_insn (rtx, machine_mode, tree, rtx, unsigned int,
--			    int, rtx, int, rtx, rtx, int, rtx);
-+extern bool emit_push_insn (rtx, machine_mode, tree, rtx, unsigned int,
-+			    int, rtx, int, rtx, rtx, int, rtx, bool);
- 
- /* Expand an assignment that stores the value of FROM into TO.  */
- extern void expand_assignment (tree, tree, bool);
-diff --git a/gcc/testsuite/gcc.dg/pr65358.c b/gcc/testsuite/gcc.dg/pr65358.c
-new file mode 100644
-index 0000000..ba89fd4
---- /dev/null
-+++ b/gcc/testsuite/gcc.dg/pr65358.c
-@@ -0,0 +1,33 @@
-+/* { dg-do run } */
-+/* { dg-options "-O2" } */
-+
-+struct pack
-+{
-+  int fine;
-+  int victim;
-+  int killer;
-+};
-+
-+int __attribute__ ((__noinline__, __noclone__))
-+bar (int a, int b, struct pack p)
-+{
-+  if (a != 20 || b != 30)
-+    __builtin_abort ();
-+  if (p.fine != 40 || p.victim != 50 || p.killer != 60)
-+    __builtin_abort ();
-+  return 0;
-+}
-+
-+int __attribute__ ((__noinline__, __noclone__))
-+foo (int arg1, int arg2, int arg3, struct pack p)
-+{
-+  return bar (arg2, arg3, p);
-+}
-+
-+int main (void)
-+{
-+  struct pack p = { 40, 50, 60 };
-+
-+  (void) foo (10, 20, 30, p);
-+  return 0;
-+}
--- 
-2.7.0
-
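The helper at the heart of the fix above, memory_load_overlap(), is plain pointer arithmetic once the RTL wrappers are peeled away. A hedged restatement with ordinary integers (illustration only, not the GCC function):

#include <stdio.h>

/* reading `size` bytes starting at x overlaps a block that starts at y by
   (x + size) - y bytes; the result only counts as an overlap when it falls
   in [1, size], matching the IN_RANGE check in memory_load_overlap() */
static long overlap(long x, long y, long size)
{
    long val = (x + size) - y;

    return (val >= 1 && val <= size) ? val : -1;
}

int main(void)
{
    printf("%ld\n", overlap(100, 108, 16));  /* bytes 108..115 overlap: 8 */
    printf("%ld\n", overlap(100, 132, 16));  /* no overlap: -1 */
    return 0;
}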
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0054-support-ffile-prefix-map.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0054-support-ffile-prefix-map.patch
deleted file mode 100644
index da16879..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0054-support-ffile-prefix-map.patch
+++ /dev/null
@@ -1,284 +0,0 @@
-From ef7c2bda6b4c88f8007ed663b1108cd4651598c8 Mon Sep 17 00:00:00 2001
-From: Hongxu Jia <hongxu.jia@windriver.com>
-Date: Wed, 16 Mar 2016 02:27:43 -0400
-Subject: [PATCH] gcc/libcpp: support -ffile-prefix-map=<old>=<new>
-
-Similar -fdebug-prefix-map, add option -ffile-prefix-map to map one
-directory name (old) to another (new) in __FILE__, __BASE_FILE__ and
-__builtin_FILE ().
-
-https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70268
-
-Upstream-Status: Submitted [gcc-patches@gcc.gnu.org]
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- gcc/c-family/c-opts.c     |  6 ++++
- gcc/c-family/c.opt        |  4 +++
- gcc/dwarf2out.c           |  1 +
- gcc/gimplify.c            |  2 ++
- libcpp/Makefile.in        | 10 +++---
- libcpp/file-map.c         | 92 +++++++++++++++++++++++++++++++++++++++++++++++
- libcpp/include/file-map.h | 30 ++++++++++++++++
- libcpp/macro.c            |  2 ++
- 8 files changed, 142 insertions(+), 5 deletions(-)
- create mode 100644 libcpp/file-map.c
- create mode 100644 libcpp/include/file-map.h
-
-diff --git a/gcc/c-family/c-opts.c b/gcc/c-family/c-opts.c
-index 718a052..3f93c56 100644
---- a/gcc/c-family/c-opts.c
-+++ b/gcc/c-family/c-opts.c
-@@ -46,6 +46,7 @@ along with GCC; see the file COPYING3.  If not see
- #include "opts.h"
- #include "plugin.h"		/* For PLUGIN_INCLUDE_FILE event.  */
- #include "mkdeps.h"
-+#include "file-map.h"
- #include "c-target.h"
- #include "tm.h"			/* For BYTES_BIG_ENDIAN,
- 				   DOLLARS_IN_IDENTIFIERS,
-@@ -510,6 +511,11 @@ c_common_handle_option (size_t scode, const char *arg, int value,
-       cpp_opts->narrow_charset = arg;
-       break;
- 
-+    case OPT_ffile_prefix_map_:
-+      if (add_file_prefix_map (arg) < 0)
-+        error ("invalid argument %qs to -ffile-prefix-map", arg);
-+      break;
-+
-     case OPT_fwide_exec_charset_:
-       cpp_opts->wide_charset = arg;
-       break;
-diff --git a/gcc/c-family/c.opt b/gcc/c-family/c.opt
-index 453ec8e..30ad053 100644
---- a/gcc/c-family/c.opt
-+++ b/gcc/c-family/c.opt
-@@ -1117,6 +1117,10 @@ fexec-charset=
- C ObjC C++ ObjC++ Joined RejectNegative
- -fexec-charset=<cset>	Convert all strings and character constants to character set <cset>
- 
-+ffile-prefix-map=
-+C ObjC C++ ObjC++ Joined RejectNegative
-+-ffile-prefix-map=<old=new>	Map one directory name to another in __FILE__, __BASE_FILE__ and __builtin_FILE ()
-+
- fextended-identifiers
- C ObjC C++ ObjC++
- Permit universal character names (\\u and \\U) in identifiers
-diff --git a/gcc/dwarf2out.c b/gcc/dwarf2out.c
-index 526f114..438a475 100644
---- a/gcc/dwarf2out.c
-+++ b/gcc/dwarf2out.c
-@@ -19671,6 +19671,7 @@ gen_producer_string (void)
-       case OPT_fltrans_output_list_:
-       case OPT_fresolution_:
-       case OPT_fdebug_prefix_map_:
-+      case OPT_ffile_prefix_map_:
- 	/* Ignore these.  */
- 	continue;
-       default:
-diff --git a/gcc/gimplify.c b/gcc/gimplify.c
-index c85f83a..1ffe1e1 100644
---- a/gcc/gimplify.c
-+++ b/gcc/gimplify.c
-@@ -87,6 +87,7 @@ along with GCC; see the file COPYING3.  If not see
- #include "gimple-low.h"
- #include "cilk.h"
- #include "gomp-constants.h"
-+#include "file-map.h"
- 
- #include "langhooks-def.h"	/* FIXME: for lhd_set_decl_assembler_name */
- #include "tree-pass.h"		/* FIXME: only for PROP_gimple_any */
-@@ -2370,6 +2371,7 @@ gimplify_call_expr (tree *expr_p, gimple_seq *pre_p, bool want_value)
-       case BUILT_IN_FILE:
- 	{
- 	  const char *locfile = LOCATION_FILE (EXPR_LOCATION (*expr_p));
-+	  locfile = remap_file_filename (locfile);
- 	  *expr_p = build_string_literal (strlen (locfile) + 1, locfile);
- 	  return GS_OK;
- 	}
-diff --git a/libcpp/Makefile.in b/libcpp/Makefile.in
-index ad35563..c210ff9 100644
---- a/libcpp/Makefile.in
-+++ b/libcpp/Makefile.in
-@@ -84,12 +84,12 @@ DEPMODE = $(CXXDEPMODE)
- 
- 
- libcpp_a_OBJS = charset.o directives.o directives-only.o errors.o \
--	expr.o files.o identifiers.o init.o lex.o line-map.o macro.o \
--	mkdeps.o pch.o symtab.o traditional.o
-+	expr.o file-map.o files.o identifiers.o init.o lex.o line-map.o \
-+	macro.o mkdeps.o pch.o symtab.o traditional.o
- 
- libcpp_a_SOURCES = charset.c directives.c directives-only.c errors.c \
--	expr.c files.c identifiers.c init.c lex.c line-map.c macro.c \
--	mkdeps.c pch.c symtab.c traditional.c
-+	expr.c file-map.c files.c identifiers.c init.c lex.c line-map.c \
-+	macro.c mkdeps.c pch.c symtab.c traditional.c
- 
- all: libcpp.a $(USED_CATALOGS)
- 
-@@ -263,7 +263,7 @@ po/$(PACKAGE).pot: $(libcpp_a_SOURCES)
- 
- TAGS_SOURCES = $(libcpp_a_SOURCES) internal.h ucnid.h \
-     include/line-map.h include/symtab.h include/cpp-id-data.h \
--    include/cpplib.h include/mkdeps.h system.h
-+    include/cpplib.h include/mkdeps.h system.h include/file-map.h
- 
- TAGS: $(TAGS_SOURCES)
- 	cd $(srcdir) && etags $(TAGS_SOURCES)
-diff --git a/libcpp/file-map.c b/libcpp/file-map.c
-new file mode 100644
-index 0000000..04e851b
---- /dev/null
-+++ b/libcpp/file-map.c
-@@ -0,0 +1,92 @@
-+/* Map one directory name to another in __FILE__, __BASE_FILE__
-+   and __builtin_FILE ().
-+   Copyright (C) 2001-2016 Free Software Foundation, Inc.
-+
-+This program is free software; you can redistribute it and/or modify it
-+under the terms of the GNU General Public License as published by the
-+Free Software Foundation; either version 3, or (at your option) any
-+later version.
-+
-+This program is distributed in the hope that it will be useful,
-+but WITHOUT ANY WARRANTY; without even the implied warranty of
-+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+GNU General Public License for more details.
-+
-+You should have received a copy of the GNU General Public License
-+along with this program; see the file COPYING3.  If not see
-+<http://www.gnu.org/licenses/>.
-+
-+ In other words, you are welcome to use, share and improve this program.
-+ You are forbidden to forbid anyone else to use, share and improve
-+ what you give them.   Help stamp out software-hoarding!  */
-+
-+#include "config.h"
-+#include "system.h"
-+#include "file-map.h"
-+
-+/* Structure recording the mapping from source file and directory
-+   names at compile time to __FILE__ */
-+typedef struct file_prefix_map
-+{
-+  const char *old_prefix;
-+  const char *new_prefix;
-+  size_t old_len;
-+  size_t new_len;
-+  struct file_prefix_map *next;
-+} file_prefix_map;
-+
-+/* Linked list of such structures.  */
-+static file_prefix_map *file_prefix_maps;
-+
-+/* Record prefix mapping of __FILE__.  ARG is the argument to
-+   -ffile-prefix-map and must be of the form OLD=NEW.  */
-+int
-+add_file_prefix_map (const char *arg)
-+{
-+  file_prefix_map *map;
-+  const char *p;
-+
-+  p = strchr (arg, '=');
-+  if (!p)
-+  {
-+      fprintf(stderr, "invalid argument %qs to -ffile-prefix-map", arg);
-+      return -1;
-+  }
-+  map = XNEW (file_prefix_map);
-+  map->old_prefix = xstrndup (arg, p - arg);
-+  map->old_len = p - arg;
-+  p++;
-+  map->new_prefix = xstrdup (p);
-+  map->new_len = strlen (p);
-+  map->next = file_prefix_maps;
-+  file_prefix_maps = map;
-+
-+  return 0;
-+}
-+
-+/* Perform user-specified mapping of __FILE__ prefixes.  Return
-+   the new name corresponding to filename.  */
-+
-+const char *
-+remap_file_filename (const char *filename)
-+{
-+  file_prefix_map *map;
-+  char *s;
-+  const char *name;
-+  size_t name_len;
-+
-+  for (map = file_prefix_maps; map; map = map->next)
-+    if (filename_ncmp (filename, map->old_prefix, map->old_len) == 0)
-+      break;
-+  if (!map)
-+    return filename;
-+  name = filename + map->old_len;
-+  name_len = strlen (name) + 1;
-+  s = (char *) alloca (name_len + map->new_len);
-+  memcpy (s, map->new_prefix, map->new_len);
-+  memcpy (s + map->new_len, name, name_len);
-+
-+  return xstrdup (s);
-+}
-+
-+
-diff --git a/libcpp/include/file-map.h b/libcpp/include/file-map.h
-new file mode 100644
-index 0000000..e6f8cbf
---- /dev/null
-+++ b/libcpp/include/file-map.h
-@@ -0,0 +1,30 @@
-+/* Map one directory name to another in __FILE__, __BASE_FILE__
-+   and __builtin_FILE ().
-+   Copyright (C) 2001-2016 Free Software Foundation, Inc.
-+
-+This program is free software; you can redistribute it and/or modify it
-+under the terms of the GNU General Public License as published by the
-+Free Software Foundation; either version 3, or (at your option) any
-+later version.
-+
-+This program is distributed in the hope that it will be useful,
-+but WITHOUT ANY WARRANTY; without even the implied warranty of
-+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+GNU General Public License for more details.
-+
-+You should have received a copy of the GNU General Public License
-+along with this program; see the file COPYING3.  If not see
-+<http://www.gnu.org/licenses/>.
-+
-+ In other words, you are welcome to use, share and improve this program.
-+ You are forbidden to forbid anyone else to use, share and improve
-+ what you give them.   Help stamp out software-hoarding!  */
-+
-+#ifndef LIBCPP_FILE_MAP_H
-+#define LIBCPP_FILE_MAP_H
-+
-+const char * remap_file_filename (const char *filename);
-+
-+int add_file_prefix_map (const char *arg);
-+
-+#endif /* !LIBCPP_FILE_MAP_H  */
-diff --git a/libcpp/macro.c b/libcpp/macro.c
-index 1e0a0b5..c3d330c 100644
---- a/libcpp/macro.c
-+++ b/libcpp/macro.c
-@@ -26,6 +26,7 @@ along with this program; see the file COPYING3.  If not see
- #include "system.h"
- #include "cpplib.h"
- #include "internal.h"
-+#include "file-map.h"
- 
- typedef struct macro_arg macro_arg;
- /* This structure represents the tokens of a macro argument.  These
-@@ -297,6 +298,7 @@ _cpp_builtin_macro_text (cpp_reader *pfile, cpp_hashnode *node)
- 	    if (!name)
- 	      abort ();
- 	  }
-+	name = remap_file_filename (name);
- 	len = strlen (name);
- 	buf = _cpp_unaligned_alloc (pfile, len * 2 + 3);
- 	result = buf;
--- 
-1.9.1
-
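
A minimal sketch of what the -ffile-prefix-map option added by the patch above
does, using a hypothetical source tree under /home/user/project (the gcc 6.3
recipe keeps the equivalent support via
0042-gcc-libcpp-support-ffile-prefix-map-old-new.patch further down):

    /* /home/user/project/src/hello.c -- illustrative path, not from the patch */
    #include <stdio.h>

    int main(void)
    {
      /* Without the option, __FILE__ expands to the path exactly as it was
         passed to the compiler, leaking the build directory into the binary. */
      printf("built from %s\n", __FILE__);
      return 0;
    }

    /* Hypothetical invocation with the patch applied:
     *   gcc -ffile-prefix-map=/home/user/project=/usr/src/project \
     *       -c /home/user/project/src/hello.c
     * __FILE__, __BASE_FILE__ and __builtin_FILE () then yield
     * /usr/src/project/src/hello.c, independent of where the tree was
     * actually checked out. */
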
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0055-Reuse-fdebug-prefix-map-to-replace-ffile-prefix-map.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0055-Reuse-fdebug-prefix-map-to-replace-ffile-prefix-map.patch
deleted file mode 100644
index c7caed8..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0055-Reuse-fdebug-prefix-map-to-replace-ffile-prefix-map.patch
+++ /dev/null
@@ -1,43 +0,0 @@
-From 14b79641ff6b0008aef7fbf7aa300daec11d1e78 Mon Sep 17 00:00:00 2001
-From: Hongxu Jia <hongxu.jia@windriver.com>
-Date: Wed, 16 Mar 2016 05:39:59 -0400
-Subject: [PATCH] Reuse -fdebug-prefix-map to replace -ffile-prefix-map
-
-The oe-core build may use an external toolchain to compile,
-which may not support -ffile-prefix-map.
-
-Since we already use -fdebug-prefix-map to do the same thing,
-we can reuse it to replace -ffile-prefix-map.
-
-Upstream-Status: Inappropriate[oe-core specific]
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- gcc/opts-global.c | 4 ++++
- 1 file changed, 4 insertions(+)
-
-diff --git a/gcc/opts-global.c b/gcc/opts-global.c
-index b61bdcf..51bb177 100644
---- a/gcc/opts-global.c
-+++ b/gcc/opts-global.c
-@@ -50,6 +50,7 @@ along with GCC; see the file COPYING3.  If not see
- #include "rtl.h"
- #include "dbgcnt.h"
- #include "debug.h"
-+#include "file-map.h"
- #include "hash-map.h"
- #include "plugin-api.h"
- #include "ipa-ref.h"
-@@ -378,6 +379,9 @@ handle_common_deferred_options (void)
- 
- 	case OPT_fdebug_prefix_map_:
- 	  add_debug_prefix_map (opt->arg);
-+
-+	  /* Reuse -fdebug-prefix-map to replace -ffile-prefix-map */
-+	  add_file_prefix_map (opt->arg);
- 	  break;
- 
- 	case OPT_fdump_:
--- 
-1.9.1
-
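
The patch above simply made -fdebug-prefix-map feed the same map as
-ffile-prefix-map, so builds driven by an external toolchain only ever need the
older flag. A hedged sketch of that idea, with stub functions standing in for
gcc's real add_debug_prefix_map/add_file_prefix_map:

    #include <stdio.h>
    #include <string.h>

    /* Illustrative stand-ins for the gcc internals named in the patch. */
    static void add_debug_prefix_map(const char *arg) { printf("debug map: %s\n", arg); }
    static void add_file_prefix_map(const char *arg)  { printf("file  map: %s\n", arg); }

    static void handle_deferred_option(const char *opt)
    {
      const char *prefix = "-fdebug-prefix-map=";
      if (strncmp(opt, prefix, strlen(prefix)) == 0)
        {
          const char *arg = opt + strlen(prefix);
          add_debug_prefix_map(arg);
          /* Reuse -fdebug-prefix-map to replace -ffile-prefix-map.  */
          add_file_prefix_map(arg);
        }
    }

    int main(void)
    {
      handle_deferred_option("-fdebug-prefix-map=/home/user/build=/usr/src");
      return 0;
    }
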
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0056-Enable-libc-provide-ssp-and-gcc_cv_target_dl_iterate.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0056-Enable-libc-provide-ssp-and-gcc_cv_target_dl_iterate.patch
deleted file mode 100644
index 9791342..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0056-Enable-libc-provide-ssp-and-gcc_cv_target_dl_iterate.patch
+++ /dev/null
@@ -1,85 +0,0 @@
-From 5c7bd853c8703f65904083778712ef625c3f3814 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sun, 27 Mar 2016 20:31:50 -0700
-Subject: [PATCH 56/57] Enable libc provide ssp and
- gcc_cv_target_dl_iterate_phdr for musl
-
-        * configure.ac (gcc_cv_libc_provides_ssp): Add *-*-musl*.
-        (gcc_cv_target_dl_iterate_phdr): Add *-linux-musl*.
-        * configure: Regenerate.
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
-Upstream-Status: Backport
-
- gcc/configure    | 7 +++++++
- gcc/configure.ac | 7 +++++++
- 2 files changed, 14 insertions(+)
-
-diff --git a/gcc/configure b/gcc/configure
-index fcb05e7..81a449c 100755
---- a/gcc/configure
-+++ b/gcc/configure
-@@ -27814,6 +27814,9 @@ if test "${gcc_cv_libc_provides_ssp+set}" = set; then :
- else
-   gcc_cv_libc_provides_ssp=no
-     case "$target" in
-+       *-*-musl*)
-+        # All versions of musl provide stack protector
-+        gcc_cv_libc_provides_ssp=yes;;
-        *-*-linux* | *-*-kfreebsd*-gnu | *-*-knetbsd*-gnu)
-       # glibc 2.4 and later provides __stack_chk_fail and
-       # either __stack_chk_guard, or TLS access to stack guard canary.
-@@ -27846,6 +27849,7 @@ fi
- 	 # <http://gcc.gnu.org/ml/gcc/2008-10/msg00130.html>) and for now
- 	 # simply assert that glibc does provide this, which is true for all
- 	 # realistically usable GNU/Hurd configurations.
-+	 # All supported versions of musl provide it as well
- 	 gcc_cv_libc_provides_ssp=yes;;
-        *-*-darwin* | *-*-freebsd*)
- 	 ac_fn_c_check_func "$LINENO" "__stack_chk_fail" "ac_cv_func___stack_chk_fail"
-@@ -27942,6 +27946,9 @@ case "$target" in
-       gcc_cv_target_dl_iterate_phdr=no
-     fi
-     ;;
-+  *-linux-musl*)
-+    gcc_cv_target_dl_iterate_phdr=yes
-+    ;;
- esac
- 
- if test x$gcc_cv_target_dl_iterate_phdr = xyes; then
-diff --git a/gcc/configure.ac b/gcc/configure.ac
-index 923bc9a..b08e3dc 100644
---- a/gcc/configure.ac
-+++ b/gcc/configure.ac
-@@ -5291,6 +5291,9 @@ AC_CACHE_CHECK(__stack_chk_fail in target C library,
-       gcc_cv_libc_provides_ssp,
-       [gcc_cv_libc_provides_ssp=no
-     case "$target" in
-+       *-*-musl*)
-+        # All versions of musl provide stack protector
-+        gcc_cv_libc_provides_ssp=yes;;
-        *-*-linux* | *-*-kfreebsd*-gnu | *-*-knetbsd*-gnu)
-       # glibc 2.4 and later provides __stack_chk_fail and
-       # either __stack_chk_guard, or TLS access to stack guard canary.
-@@ -5317,6 +5320,7 @@ AC_CACHE_CHECK(__stack_chk_fail in target C library,
- 	 # <http://gcc.gnu.org/ml/gcc/2008-10/msg00130.html>) and for now
- 	 # simply assert that glibc does provide this, which is true for all
- 	 # realistically usable GNU/Hurd configurations.
-+	 # All supported versions of musl provide it as well
- 	 gcc_cv_libc_provides_ssp=yes;;
-        *-*-darwin* | *-*-freebsd*)
- 	 AC_CHECK_FUNC(__stack_chk_fail,[gcc_cv_libc_provides_ssp=yes],
-@@ -5390,6 +5394,9 @@ case "$target" in
-       gcc_cv_target_dl_iterate_phdr=no
-     fi
-     ;;
-+  *-linux-musl*)
-+    gcc_cv_target_dl_iterate_phdr=yes
-+    ;;
- esac
- GCC_TARGET_TEMPLATE([TARGET_DL_ITERATE_PHDR])
- if test x$gcc_cv_target_dl_iterate_phdr = xyes; then
--- 
-2.7.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0057-unwind-fix-for-musl.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0057-unwind-fix-for-musl.patch
deleted file mode 100644
index c193587..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0057-unwind-fix-for-musl.patch
+++ /dev/null
@@ -1,42 +0,0 @@
-From c5c33bf881a2aea355310dd90873ed39bc272b3c Mon Sep 17 00:00:00 2001
-From: ktkachov <ktkachov@138bc75d-0d04-0410-961f-82ee72b054a4>
-Date: Wed, 22 Apr 2015 14:20:01 +0000
-Subject: [PATCH 57/57] unwind fix for musl
-
-On behalf of szabolcs.nagy@arm.com
-
-2015-04-22  Gregor Richards  <gregor.richards@uwaterloo.ca>
-	    Szabolcs Nagy  <szabolcs.nagy@arm.com>
-
-	* unwind-dw2-fde-dip.c (USE_PT_GNU_EH_FRAME): Define it on
-	Linux if target provides dl_iterate_phdr.
-
-git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@222328 138bc75d-0d04-0410-961f-82ee72b054a4
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
-Upstream-Status: Backport
-
- libgcc/unwind-dw2-fde-dip.c | 6 ++++++
- 1 file changed, 6 insertions(+)
-
-diff --git a/libgcc/unwind-dw2-fde-dip.c b/libgcc/unwind-dw2-fde-dip.c
-index e1e566b..137dced 100644
---- a/libgcc/unwind-dw2-fde-dip.c
-+++ b/libgcc/unwind-dw2-fde-dip.c
-@@ -59,6 +59,12 @@
- 
- #if !defined(inhibit_libc) && defined(HAVE_LD_EH_FRAME_HDR) \
-     && defined(TARGET_DL_ITERATE_PHDR) \
-+    && defined(__linux__)
-+# define USE_PT_GNU_EH_FRAME
-+#endif
-+
-+#if !defined(inhibit_libc) && defined(HAVE_LD_EH_FRAME_HDR) \
-+    && defined(TARGET_DL_ITERATE_PHDR) \
-     && (defined(__DragonFly__) || defined(__FreeBSD__))
- # define ElfW __ElfN
- # define USE_PT_GNU_EH_FRAME
--- 
-2.7.4
-
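
Both musl patches above (the configure check and the unwind-dw2-fde-dip.c
change) hinge on the C library providing dl_iterate_phdr, which lets the
unwinder find each object's PT_GNU_EH_FRAME segment without any glibc-specific
hooks. A small standalone illustration of that API (not libgcc code):

    #define _GNU_SOURCE
    #include <link.h>
    #include <stdio.h>

    /* Print the PT_GNU_EH_FRAME segment of every loaded object. */
    static int show_eh_frame(struct dl_phdr_info *info, size_t size, void *data)
    {
      (void) size; (void) data;
      for (int i = 0; i < info->dlpi_phnum; i++)
        if (info->dlpi_phdr[i].p_type == PT_GNU_EH_FRAME)
          printf("%s: PT_GNU_EH_FRAME at %#lx\n",
                 info->dlpi_name[0] ? info->dlpi_name : "(main program)",
                 (unsigned long) (info->dlpi_addr + info->dlpi_phdr[i].p_vaddr));
      return 0; /* keep iterating */
    }

    int main(void)
    {
      dl_iterate_phdr(show_eh_frame, NULL);
      return 0;
    }
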
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0058-fdebug-prefix-map-support-to-remap-relative-path.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0058-fdebug-prefix-map-support-to-remap-relative-path.patch
deleted file mode 100644
index 0b91fdb..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0058-fdebug-prefix-map-support-to-remap-relative-path.patch
+++ /dev/null
@@ -1,51 +0,0 @@
-From 289ad2969a5966c603bf6928ce442db74c4cbb25 Mon Sep 17 00:00:00 2001
-From: Hongxu Jia <hongxu.jia@windriver.com>
-Date: Thu, 24 Mar 2016 11:23:14 -0400
-Subject: [PATCH] gcc/final.c: -fdebug-prefix-map support to remap sources with
- relative path
-
-PR other/70428
-* final.c (remap_debug_filename): Use lrealpath to translate
-relative path before remapping
-
-https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70428
-Upstream-Status: Submitted [gcc-patches@gcc.gnu.org]
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
-diff --git a/gcc/final.c b/gcc/final.c
-index 55cf509..c3594c2 100644
---- a/gcc/final.c
-+++ b/gcc/final.c
-@@ -1554,16 +1554,25 @@ remap_debug_filename (const char *filename)
-   const char *name;
-   size_t name_len;
- 
-+  /* Support remapping filenames given with a relative path.  */
-+  char *realpath = lrealpath (filename);
-+  if (realpath == NULL)
-+    return filename;
-+
-   for (map = debug_prefix_maps; map; map = map->next)
--    if (filename_ncmp (filename, map->old_prefix, map->old_len) == 0)
-+    if (filename_ncmp (realpath, map->old_prefix, map->old_len) == 0)
-       break;
-   if (!map)
--    return filename;
--  name = filename + map->old_len;
-+    {
-+      free (realpath);
-+      return filename;
-+    }
-+  name = realpath + map->old_len;
-   name_len = strlen (name) + 1;
-   s = (char *) alloca (name_len + map->new_len);
-   memcpy (s, map->new_prefix, map->new_len);
-   memcpy (s + map->new_len, name, name_len);
-+  free (realpath);
-   return ggc_strdup (s);
- }
- 
--- 
-2.7.4
-
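
The patch above canonicalises a possibly relative filename (via libiberty's
lrealpath) before applying the -fdebug-prefix-map prefixes. A hedged sketch of
that logic as a standalone program, with realpath(3) standing in for lrealpath
and a single hard-coded OLD=NEW pair:

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static const char *old_prefix = "/home/user/build"; /* hypothetical map */
    static const char *new_prefix = "/usr/src";

    static char *remap(const char *filename)
    {
      char resolved[PATH_MAX];
      /* Resolve e.g. "../src/foo.c" to an absolute path first. */
      if (!realpath(filename, resolved))
        return strdup(filename);

      size_t old_len = strlen(old_prefix);
      if (strncmp(resolved, old_prefix, old_len) != 0)
        return strdup(filename);

      char *out = malloc(strlen(new_prefix) + strlen(resolved + old_len) + 1);
      if (!out)
        return NULL;
      strcpy(out, new_prefix);
      strcat(out, resolved + old_len);
      return out;
    }

    int main(int argc, char **argv)
    {
      const char *in = argc > 1 ? argv[1] : ".";
      char *out = remap(in);
      printf("%s -> %s\n", in, out ? out : "(allocation failure)");
      free(out);
      return 0;
    }
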
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0059-libgcc-use-ldflags.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0059-libgcc-use-ldflags.patch
deleted file mode 100644
index 325b72a..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/0059-libgcc-use-ldflags.patch
+++ /dev/null
@@ -1,16 +0,0 @@
-Link libgcc using LDFLAGS, not just SHLIB_LDFLAGS
-
-Signed-off-by: Christopher Larson <chris_larson@mentor.com>
-Upstream-Status: Pending
-
---- gcc-5.3.0.orig/libgcc/config/t-slibgcc
-+++ gcc-5.3.0/libgcc/config/t-slibgcc
-@@ -32,7 +32,7 @@ SHLIB_INSTALL_SOLINK = $(LN_S) $(SHLIB_S
- 	$(DESTDIR)$(slibdir)$(SHLIB_SLIBDIR_QUAL)/$(SHLIB_SOLINK)
- 
- SHLIB_LINK = $(CC) $(LIBGCC2_CFLAGS) -shared -nodefaultlibs \
--	$(SHLIB_LDFLAGS) \
-+	$(LDFLAGS) $(SHLIB_LDFLAGS) \
- 	-o $(SHLIB_DIR)/$(SHLIB_SONAME).tmp @multilib_flags@ \
- 	$(SHLIB_OBJS) $(SHLIB_LC) && \
- 	rm -f $(SHLIB_DIR)/$(SHLIB_SOLINK) && \
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/CVE-2016-6131.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/CVE-2016-6131.patch
deleted file mode 100644
index 88524c3..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-5.4/CVE-2016-6131.patch
+++ /dev/null
@@ -1,251 +0,0 @@
-From b3f6b32165d3f437bd0ac6269c3c499b68ecf036 Mon Sep 17 00:00:00 2001
-From: law <law@138bc75d-0d04-0410-961f-82ee72b054a4>
-Date: Thu, 4 Aug 2016 16:53:18 +0000
-Subject: [PATCH] Fix for PR71696 in Libiberty Demangler
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-[BZ #71696] -- https://gcc.gnu.org/bugzilla/show_bug.cgi?id=71696
-
-2016-08-04  Marcel Böhme  <boehme.marcel@gmail.com>
-
-	PR c++/71696
-	* cplus-dem.c: Prevent infinite recursion when there is a cycle
-	in the referencing of remembered mangled types.
-	(work_stuff): New stack to keep track of the remembered mangled
-	types that are currently being processed.
-	(push_processed_type): New method to push currently processed
-	remembered type onto the stack.
-	(pop_processed_type): New method to pop currently processed
-	remembered type from the stack.
-	(work_stuff_copy_to_from): Copy values of new variables.
-	(delete_non_B_K_work_stuff): Free stack memory.
-	(demangle_args): Push/Pop currently processed remembered type.
-	(do_type): Do not demangle a cyclic reference and push/pop
-	referenced remembered type.
-
-cherry-picked from commit of
-git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@239143 138bc75d-0d04-0410-961f-82ee72b054a4
-
-Upstream-Status: Backport [master]
-CVE: CVE-2016-6131
-Signed-off-by: Yuanjie Huang <yuanjie.huang@windriver.com>
----
- libiberty/ChangeLog                   | 17 ++++++++
- libiberty/cplus-dem.c                 | 78 ++++++++++++++++++++++++++++++++---
- libiberty/testsuite/demangle-expected | 18 ++++++++
- 3 files changed, 108 insertions(+), 5 deletions(-)
-
-diff --git a/libiberty/ChangeLog b/libiberty/ChangeLog
-index 9859ad3..7939480 100644
---- a/libiberty/ChangeLog
-+++ b/libiberty/ChangeLog
-@@ -1,3 +1,20 @@
-+2016-08-04  Marcel Böhme  <boehme.marcel@gmail.com>
-+
-+	PR c++/71696
-+	* cplus-dem.c: Prevent infinite recursion when there is a cycle
-+	in the referencing of remembered mangled types.
-+	(work_stuff): New stack to keep track of the remembered mangled
-+	types that are currently being processed.
-+	(push_processed_type): New method to push currently processed
-+	remembered type onto the stack.
-+	(pop_processed_type): New method to pop currently processed
-+	remembered type from the stack.
-+	(work_stuff_copy_to_from): Copy values of new variables.
-+	(delete_non_B_K_work_stuff): Free stack memory.
-+	(demangle_args): Push/Pop currently processed remembered type.
-+	(do_type): Do not demangle a cyclic reference and push/pop
-+	referenced remembered type.
-+
- 2016-06-03  Release Manager
- 
- 	* GCC 5.4.0 released.
-diff --git a/libiberty/cplus-dem.c b/libiberty/cplus-dem.c
-index 7514e57..f21e630 100644
---- a/libiberty/cplus-dem.c
-+++ b/libiberty/cplus-dem.c
-@@ -144,6 +144,9 @@ struct work_stuff
-   string* previous_argument; /* The last function argument demangled.  */
-   int nrepeats;         /* The number of times to repeat the previous
- 			   argument.  */
-+  int *proctypevec;     /* Indices of currently processed remembered typevecs.  */
-+  int proctypevec_size;
-+  int nproctypes;
- };
- 
- #define PRINT_ANSI_QUALIFIERS (work -> options & DMGL_ANSI)
-@@ -435,6 +438,10 @@ iterate_demangle_function (struct work_stuff *,
- 
- static void remember_type (struct work_stuff *, const char *, int);
- 
-+static void push_processed_type (struct work_stuff *, int);
-+
-+static void pop_processed_type (struct work_stuff *);
-+
- static void remember_Btype (struct work_stuff *, const char *, int, int);
- 
- static int register_Btype (struct work_stuff *);
-@@ -1301,6 +1308,10 @@ work_stuff_copy_to_from (struct work_stuff *to, struct work_stuff *from)
-       memcpy (to->btypevec[i], from->btypevec[i], len);
-     }
- 
-+  if (from->proctypevec)
-+    to->proctypevec =
-+      XDUPVEC (int, from->proctypevec, from->proctypevec_size);
-+
-   if (from->ntmpl_args)
-     to->tmpl_argvec = XNEWVEC (char *, from->ntmpl_args);
- 
-@@ -1329,11 +1340,17 @@ delete_non_B_K_work_stuff (struct work_stuff *work)
-   /* Discard the remembered types, if any.  */
- 
-   forget_types (work);
--  if (work -> typevec != NULL)
-+  if (work->typevec != NULL)
-     {
--      free ((char *) work -> typevec);
--      work -> typevec = NULL;
--      work -> typevec_size = 0;
-+      free ((char *) work->typevec);
-+      work->typevec = NULL;
-+      work->typevec_size = 0;
-+    }
-+  if (work->proctypevec != NULL)
-+    {
-+      free (work->proctypevec);
-+      work->proctypevec = NULL;
-+      work->proctypevec_size = 0;
-     }
-   if (work->tmpl_argvec)
-     {
-@@ -3552,6 +3569,8 @@ static int
- do_type (struct work_stuff *work, const char **mangled, string *result)
- {
-   int n;
-+  int i;
-+  int is_proctypevec;
-   int done;
-   int success;
-   string decl;
-@@ -3564,6 +3583,7 @@ do_type (struct work_stuff *work, const char **mangled, string *result)
- 
-   done = 0;
-   success = 1;
-+  is_proctypevec = 0;
-   while (success && !done)
-     {
-       int member;
-@@ -3616,8 +3636,15 @@ do_type (struct work_stuff *work, const char **mangled, string *result)
- 	      success = 0;
- 	    }
- 	  else
-+	    for (i = 0; i < work->nproctypes; i++)
-+	      if (work -> proctypevec [i] == n)
-+	        success = 0;
-+
-+	  if (success)
- 	    {
--	      remembered_type = work -> typevec[n];
-+	      is_proctypevec = 1;
-+	      push_processed_type (work, n);
-+	      remembered_type = work->typevec[n];
- 	      mangled = &remembered_type;
- 	    }
- 	  break;
-@@ -3840,6 +3867,9 @@ do_type (struct work_stuff *work, const char **mangled, string *result)
-     string_delete (result);
-   string_delete (&decl);
- 
-+  if (is_proctypevec)
-+    pop_processed_type (work);
-+
-   if (success)
-     /* Assume an integral type, if we're not sure.  */
-     return (int) ((tk == tk_none) ? tk_integral : tk);
-@@ -4252,6 +4282,41 @@ do_arg (struct work_stuff *work, const char **mangled, string *result)
- }
- 
- static void
-+push_processed_type (struct work_stuff *work, int typevec_index)
-+{
-+  if (work->nproctypes >= work->proctypevec_size)
-+    {
-+      if (!work->proctypevec_size)
-+	{
-+	  work->proctypevec_size = 4;
-+	  work->proctypevec = XNEWVEC (int, work->proctypevec_size);
-+	}
-+      else
-+	{
-+	  if (work->proctypevec_size < 16)
-+	    /* Double when small.  */
-+	    work->proctypevec_size *= 2;
-+	  else
-+	    {
-+	      /* Grow slower when large.  */
-+	      if (work->proctypevec_size > (INT_MAX / 3) * 2)
-+                xmalloc_failed (INT_MAX);
-+              work->proctypevec_size = (work->proctypevec_size * 3 / 2);
-+	    }
-+          work->proctypevec
-+            = XRESIZEVEC (int, work->proctypevec, work->proctypevec_size);
-+	}
-+    }
-+    work->proctypevec [work->nproctypes++] = typevec_index;
-+}
-+
-+static void
-+pop_processed_type (struct work_stuff *work)
-+{
-+  work->nproctypes--;
-+}
-+
-+static void
- remember_type (struct work_stuff *work, const char *start, int len)
- {
-   char *tem;
-@@ -4515,10 +4580,13 @@ demangle_args (struct work_stuff *work, const char **mangled,
- 		{
- 		  string_append (declp, ", ");
- 		}
-+	      push_processed_type (work, t);
- 	      if (!do_arg (work, &tem, &arg))
- 		{
-+		  pop_processed_type (work);
- 		  return (0);
- 		}
-+	      pop_processed_type (work);
- 	      if (PRINT_ARG_TYPES)
- 		{
- 		  string_appends (declp, &arg);
-diff --git a/libiberty/testsuite/demangle-expected b/libiberty/testsuite/demangle-expected
-index 1d8b771..d690b23 100644
---- a/libiberty/testsuite/demangle-expected
-+++ b/libiberty/testsuite/demangle-expected
-@@ -4429,3 +4429,21 @@ __vt_90000000000cafebabe
- 
- _Z80800000000000000000000
- _Z80800000000000000000000
-+#
-+# Tests write access violation PR70926
-+
-+0__Ot2m02R5T0000500000
-+0__Ot2m02R5T0000500000
-+#
-+
-+0__GT50000000000_
-+0__GT50000000000_
-+#
-+
-+__t2m05B500000000000000000_
-+__t2m05B500000000000000000_
-+#
-+# Tests stack overflow PR71696
-+
-+__10%0__S4_0T0T0
-+%0<>::%0(%0<>)
--- 
-2.9.3
-
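
The CVE-2016-6131 fix above breaks demangler recursion cycles by keeping a
stack of the remembered types currently being expanded (push_processed_type /
pop_processed_type) and refusing to follow a back-reference that is already on
it. A toy sketch of that guard, not libiberty's actual work_stuff machinery:

    #include <stdio.h>
    #include <string.h>

    #define MAX_TYPES 8
    #define MAX_DEPTH 8

    static const char *typevec[MAX_TYPES]; /* remembered "types"          */
    static int proctypevec[MAX_DEPTH];     /* indices currently in flight */
    static int nproctypes;

    static int expand(int index, int depth)
    {
      /* Cycle check: is this index already being expanded? */
      for (int i = 0; i < nproctypes; i++)
        if (proctypevec[i] == index)
          {
            printf("%*scycle via T%d, refusing to recurse\n", depth * 2, "", index);
            return 0;
          }

      proctypevec[nproctypes++] = index; /* push_processed_type */
      printf("%*sexpanding T%d = %s\n", depth * 2, "", index, typevec[index]);

      /* A back-reference "Tn" inside the definition triggers recursion. */
      const char *p = strchr(typevec[index], 'T');
      if (p)
        expand(p[1] - '0', depth + 1);

      nproctypes--; /* pop_processed_type */
      return 1;
    }

    int main(void)
    {
      typevec[0] = "struct containing T1";
      typevec[1] = "pointer back to T0"; /* forms the cycle 0 -> 1 -> 0 */
      return expand(0, 0) ? 0 : 1;
    }
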
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3.inc
index 5c81a33..e569e02 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3.inc
@@ -32,12 +32,6 @@
 SRC_URI = "\
            ${BASEURI} \
            file://0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch \
-           file://0002-uclibc-conf.patch \
-           file://0003-gcc-uclibc-locale-ctype_touplow_t.patch \
-           file://0004-uclibc-locale.patch \
-           file://0005-uclibc-locale-no__x.patch \
-           file://0006-uclibc-locale-wchar_fix.patch \
-           file://0007-uclibc-locale-update.patch \
            file://0008-missing-execinfo_h.patch \
            file://0009-c99-snprintf.patch \
            file://0010-gcc-poison-system-directories.patch \
@@ -71,7 +65,7 @@
            file://0038-Search-target-sysroot-gcc-version-specific-dirs-with.patch \
            file://0039-Fix-various-_FOR_BUILD-and-related-variables.patch \
            file://0040-nios2-Define-MUSL_DYNAMIC_LINKER.patch \
-           file://0041-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch \
+           file://0041-ssp_nonshared.patch \
            file://0042-gcc-libcpp-support-ffile-prefix-map-old-new.patch \
            file://0043-Reuse-fdebug-prefix-map-to-replace-ffile-prefix-map.patch \
            file://0044-gcc-final.c-fdebug-prefix-map-support-to-remap-sourc.patch \
@@ -81,6 +75,7 @@
            file://0048-sync-gcc-stddef.h-with-musl.patch \
            file://0054_all_nopie-all-flags.patch \
            file://0055-unwind_h-glibc26.patch \
+           file://0056-LRA-PR70904-relax-the-restriction-on-subreg-reload-f.patch \
            ${BACKPORTS} \
 "
 BACKPORTS = "\
@@ -129,8 +124,6 @@
     gcc_cv_libc_provides_ssp=yes \
 "
 
-EXTRA_OECONF_append_libc-uclibc = " --disable-decimal-float "
-
 EXTRA_OECONF_PATHS = "\
     --with-gxx-include-dir=/not/exist{target_includedir}/c++/${BINV} \
     --with-sysroot=/not/exist \
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0002-uclibc-conf.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0002-uclibc-conf.patch
deleted file mode 100644
index 4d284ef..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0002-uclibc-conf.patch
+++ /dev/null
@@ -1,53 +0,0 @@
-From 4efc5a258c812875743647d756f75c93c4d514a5 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:38:25 +0400
-Subject: [PATCH 02/46] uclibc-conf
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- contrib/regression/objs-gcc.sh | 4 ++++
- libjava/classpath/ltconfig     | 4 ++--
- 2 files changed, 6 insertions(+), 2 deletions(-)
-
-diff --git a/contrib/regression/objs-gcc.sh b/contrib/regression/objs-gcc.sh
-index 60b0497..6dc7ead 100755
---- a/contrib/regression/objs-gcc.sh
-+++ b/contrib/regression/objs-gcc.sh
-@@ -106,6 +106,10 @@ if [ $H_REAL_TARGET = $H_REAL_HOST -a $H_REAL_TARGET = i686-pc-linux-gnu ]
-  then
-   make all-gdb all-dejagnu all-ld || exit 1
-   make install-gdb install-dejagnu install-ld || exit 1
-+elif [ $H_REAL_TARGET = $H_REAL_HOST -a $H_REAL_TARGET = i686-pc-linux-uclibc ]
-+ then
-+  make all-gdb all-dejagnu all-ld || exit 1
-+  make install-gdb install-dejagnu install-ld || exit 1
- elif [ $H_REAL_TARGET = $H_REAL_HOST ] ; then
-   make bootstrap || exit 1
-   make install || exit 1
-diff --git a/libjava/classpath/ltconfig b/libjava/classpath/ltconfig
-index d318957..df55950 100755
---- a/libjava/classpath/ltconfig
-+++ b/libjava/classpath/ltconfig
-@@ -603,7 +603,7 @@ host_os=`echo $host | sed 's/^\([^-]*\)-\([^-]*\)-\(.*\)$/\3/'`
- 
- # Transform linux* to *-*-linux-gnu*, to support old configure scripts.
- case $host_os in
--linux-gnu*) ;;
-+linux-gnu*|linux-uclibc*) ;;
- linux*) host=`echo $host | sed 's/^\(.*-.*-linux\)\(.*\)$/\1-gnu\2/'`
- esac
- 
-@@ -1247,7 +1247,7 @@ linux-gnuoldld* | linux-gnuaout* | linux-gnucoff*)
-   ;;
- 
- # This must be Linux ELF.
--linux-gnu*)
-+linux*)
-   version_type=linux
-   need_lib_prefix=no
-   need_version=no
--- 
-2.8.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0003-gcc-uclibc-locale-ctype_touplow_t.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0003-gcc-uclibc-locale-ctype_touplow_t.patch
deleted file mode 100644
index df07feb..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0003-gcc-uclibc-locale-ctype_touplow_t.patch
+++ /dev/null
@@ -1,87 +0,0 @@
-From ad5fd283fc7ef04f66c7fb003805364ea3bd34e9 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:40:12 +0400
-Subject: [PATCH 03/46] gcc-uclibc-locale-ctype_touplow_t
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- libstdc++-v3/config/locale/generic/c_locale.cc | 5 +++++
- libstdc++-v3/config/locale/generic/c_locale.h  | 9 +++++++++
- libstdc++-v3/config/os/gnu-linux/ctype_base.h  | 9 +++++++++
- 3 files changed, 23 insertions(+)
-
-diff --git a/libstdc++-v3/config/locale/generic/c_locale.cc b/libstdc++-v3/config/locale/generic/c_locale.cc
-index ef6ce8f..4740636 100644
---- a/libstdc++-v3/config/locale/generic/c_locale.cc
-+++ b/libstdc++-v3/config/locale/generic/c_locale.cc
-@@ -273,5 +273,10 @@ _GLIBCXX_END_NAMESPACE_VERSION
- #ifdef _GLIBCXX_LONG_DOUBLE_COMPAT
- #define _GLIBCXX_LDBL_COMPAT(dbl, ldbl) \
-   extern "C" void ldbl (void) __attribute__ ((alias (#dbl)))
-+#ifdef __UCLIBC__
-+// This is because __c_locale is of type __ctype_touplow_t*, which is short on uclibc; for glibc it is int*.
-+_GLIBCXX_LDBL_COMPAT(_ZSt14__convert_to_vIdEvPKcRT_RSt12_Ios_IostateRKPs, _ZSt14__convert_to_vIeEvPKcRT_RSt12_Ios_IostateRKPs);
-+#else
- _GLIBCXX_LDBL_COMPAT(_ZSt14__convert_to_vIdEvPKcRT_RSt12_Ios_IostateRKPi, _ZSt14__convert_to_vIeEvPKcRT_RSt12_Ios_IostateRKPi);
-+#endif
- #endif // _GLIBCXX_LONG_DOUBLE_COMPAT
-diff --git a/libstdc++-v3/config/locale/generic/c_locale.h b/libstdc++-v3/config/locale/generic/c_locale.h
-index 794471e..d65f955 100644
---- a/libstdc++-v3/config/locale/generic/c_locale.h
-+++ b/libstdc++-v3/config/locale/generic/c_locale.h
-@@ -40,13 +40,22 @@
- 
- #include <clocale>
- 
-+#ifdef __UCLIBC__
-+#include <features.h>
-+#include <ctype.h>
-+#endif
-+
- #define _GLIBCXX_NUM_CATEGORIES 0
- 
- namespace std _GLIBCXX_VISIBILITY(default)
- {
- _GLIBCXX_BEGIN_NAMESPACE_VERSION
- 
-+#ifdef __UCLIBC__
-+  typedef __ctype_touplow_t*	__c_locale;
-+#else
-   typedef int*			__c_locale;
-+#endif
- 
-   // Convert numeric value of type double and long double to string and
-   // return length of string.  If vsnprintf is available use it, otherwise
-diff --git a/libstdc++-v3/config/os/gnu-linux/ctype_base.h b/libstdc++-v3/config/os/gnu-linux/ctype_base.h
-index 591c793..55eb0e9 100644
---- a/libstdc++-v3/config/os/gnu-linux/ctype_base.h
-+++ b/libstdc++-v3/config/os/gnu-linux/ctype_base.h
-@@ -33,6 +33,11 @@
- 
- // Information as gleaned from /usr/include/ctype.h
- 
-+#ifdef __UCLIBC__
-+#include <features.h>
-+#include <ctype.h>
-+#endif
-+
- namespace std _GLIBCXX_VISIBILITY(default)
- {
- _GLIBCXX_BEGIN_NAMESPACE_VERSION
-@@ -41,7 +46,11 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
-   struct ctype_base
-   {
-     // Non-standard typedefs.
-+#ifdef __UCLIBC__
-+    typedef const __ctype_touplow_t*	__to_type;
-+#else
-     typedef const int* 		__to_type;
-+#endif
- 
-     // NB: Offsets into ctype<char>::_M_table force a particular size
-     // on the mask type. Because of this, we don't use an enum.
--- 
-2.8.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0004-uclibc-locale.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0004-uclibc-locale.patch
deleted file mode 100644
index ae2627c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0004-uclibc-locale.patch
+++ /dev/null
@@ -1,2862 +0,0 @@
-From 68bd083357e78678a9baac760beb2a31f00954a5 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:41:39 +0400
-Subject: [PATCH 04/46] uclibc-locale
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- libstdc++-v3/acinclude.m4                          |  37 ++
- .../config/locale/uclibc/c++locale_internal.h      |  63 ++
- libstdc++-v3/config/locale/uclibc/c_locale.cc      | 160 +++++
- libstdc++-v3/config/locale/uclibc/c_locale.h       | 117 ++++
- .../config/locale/uclibc/codecvt_members.cc        | 308 +++++++++
- .../config/locale/uclibc/collate_members.cc        |  80 +++
- libstdc++-v3/config/locale/uclibc/ctype_members.cc | 300 +++++++++
- .../config/locale/uclibc/messages_members.cc       | 100 +++
- .../config/locale/uclibc/messages_members.h        | 118 ++++
- .../config/locale/uclibc/monetary_members.cc       | 692 +++++++++++++++++++++
- .../config/locale/uclibc/numeric_members.cc        | 160 +++++
- libstdc++-v3/config/locale/uclibc/time_members.cc  | 406 ++++++++++++
- libstdc++-v3/config/locale/uclibc/time_members.h   |  68 ++
- libstdc++-v3/configure                             |  75 +++
- libstdc++-v3/include/c_compatibility/wchar.h       |   2 +
- libstdc++-v3/include/c_std/cwchar                  |   2 +
- 16 files changed, 2688 insertions(+)
- create mode 100644 libstdc++-v3/config/locale/uclibc/c++locale_internal.h
- create mode 100644 libstdc++-v3/config/locale/uclibc/c_locale.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/c_locale.h
- create mode 100644 libstdc++-v3/config/locale/uclibc/codecvt_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/collate_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/ctype_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/messages_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/messages_members.h
- create mode 100644 libstdc++-v3/config/locale/uclibc/monetary_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/numeric_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/time_members.cc
- create mode 100644 libstdc++-v3/config/locale/uclibc/time_members.h
-
-diff --git a/libstdc++-v3/acinclude.m4 b/libstdc++-v3/acinclude.m4
-index b0f88cb..a0ee36b 100644
---- a/libstdc++-v3/acinclude.m4
-+++ b/libstdc++-v3/acinclude.m4
-@@ -2358,6 +2358,9 @@ AC_DEFUN([GLIBCXX_ENABLE_CLOCALE], [
-   # Default to "generic".
-   if test $enable_clocale_flag = auto; then
-     case ${target_os} in
-+      *-uclibc*)
-+        enable_clocale_flag=uclibc
-+        ;;
-       linux* | gnu* | kfreebsd*-gnu | knetbsd*-gnu)
- 	enable_clocale_flag=gnu
- 	;;
-@@ -2542,6 +2545,40 @@ AC_DEFUN([GLIBCXX_ENABLE_CLOCALE], [
-       CTIME_CC=config/locale/generic/time_members.cc
-       CLOCALE_INTERNAL_H=config/locale/generic/c++locale_internal.h
-       ;;
-+    uclibc)
-+      AC_MSG_RESULT(uclibc)
-+
-+      # Declare intention to use gettext, and add support for specific
-+      # languages.
-+      # For some reason, ALL_LINGUAS has to be before AM-GNU-GETTEXT
-+      ALL_LINGUAS="de fr"
-+
-+      # Don't call AM-GNU-GETTEXT here. Instead, assume glibc.
-+      AC_CHECK_PROG(check_msgfmt, msgfmt, yes, no)
-+      if test x"$check_msgfmt" = x"yes" && test x"$enable_nls" = x"yes"; then
-+        USE_NLS=yes
-+      fi
-+      # Export the build objects.
-+      for ling in $ALL_LINGUAS; do \
-+        glibcxx_MOFILES="$glibcxx_MOFILES $ling.mo"; \
-+        glibcxx_POFILES="$glibcxx_POFILES $ling.po"; \
-+      done
-+      AC_SUBST(glibcxx_MOFILES)
-+      AC_SUBST(glibcxx_POFILES)
-+
-+      CLOCALE_H=config/locale/uclibc/c_locale.h
-+      CLOCALE_CC=config/locale/uclibc/c_locale.cc
-+      CCODECVT_CC=config/locale/uclibc/codecvt_members.cc
-+      CCOLLATE_CC=config/locale/uclibc/collate_members.cc
-+      CCTYPE_CC=config/locale/uclibc/ctype_members.cc
-+      CMESSAGES_H=config/locale/uclibc/messages_members.h
-+      CMESSAGES_CC=config/locale/uclibc/messages_members.cc
-+      CMONEY_CC=config/locale/uclibc/monetary_members.cc
-+      CNUMERIC_CC=config/locale/uclibc/numeric_members.cc
-+      CTIME_H=config/locale/uclibc/time_members.h
-+      CTIME_CC=config/locale/uclibc/time_members.cc
-+      CLOCALE_INTERNAL_H=config/locale/uclibc/c++locale_internal.h
-+      ;;
-   esac
- 
-   # This is where the testsuite looks for locale catalogs, using the
-diff --git a/libstdc++-v3/config/locale/uclibc/c++locale_internal.h b/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-new file mode 100644
-index 0000000..2ae3e4a
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-@@ -0,0 +1,63 @@
-+// Prototypes for GLIBC thread locale __-prefixed functions -*- C++ -*-
-+
-+// Copyright (C) 2002, 2004, 2005 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+// Written by Jakub Jelinek <jakub@redhat.com>
-+
-+#include <bits/c++config.h>
-+#include <clocale>
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning clean this up
-+#endif
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+
-+extern "C" __typeof(nl_langinfo_l) __nl_langinfo_l;
-+extern "C" __typeof(strcoll_l) __strcoll_l;
-+extern "C" __typeof(strftime_l) __strftime_l;
-+extern "C" __typeof(strtod_l) __strtod_l;
-+extern "C" __typeof(strtof_l) __strtof_l;
-+extern "C" __typeof(strtold_l) __strtold_l;
-+extern "C" __typeof(strxfrm_l) __strxfrm_l;
-+extern "C" __typeof(newlocale) __newlocale;
-+extern "C" __typeof(freelocale) __freelocale;
-+extern "C" __typeof(duplocale) __duplocale;
-+extern "C" __typeof(uselocale) __uselocale;
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+extern "C" __typeof(iswctype_l) __iswctype_l;
-+extern "C" __typeof(towlower_l) __towlower_l;
-+extern "C" __typeof(towupper_l) __towupper_l;
-+extern "C" __typeof(wcscoll_l) __wcscoll_l;
-+extern "C" __typeof(wcsftime_l) __wcsftime_l;
-+extern "C" __typeof(wcsxfrm_l) __wcsxfrm_l;
-+extern "C" __typeof(wctype_l) __wctype_l;
-+#endif
-+
-+#endif // GLIBC 2.3 and later
-diff --git a/libstdc++-v3/config/locale/uclibc/c_locale.cc b/libstdc++-v3/config/locale/uclibc/c_locale.cc
-new file mode 100644
-index 0000000..5081dc1
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/c_locale.cc
-@@ -0,0 +1,160 @@
-+// Wrapper for underlying C-language localization -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.8  Standard locale categories.
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#include <cerrno>  // For errno
-+#include <locale>
-+#include <stdexcept>
-+#include <langinfo.h>
-+#include <bits/c++locale_internal.h>
-+
-+#ifndef __UCLIBC_HAS_XLOCALE__
-+#define __strtol_l(S, E, B, L)      strtol((S), (E), (B))
-+#define __strtoul_l(S, E, B, L)     strtoul((S), (E), (B))
-+#define __strtoll_l(S, E, B, L)     strtoll((S), (E), (B))
-+#define __strtoull_l(S, E, B, L)    strtoull((S), (E), (B))
-+#define __strtof_l(S, E, L)         strtof((S), (E))
-+#define __strtod_l(S, E, L)         strtod((S), (E))
-+#define __strtold_l(S, E, L)        strtold((S), (E))
-+#warning should dummy __newlocale check for C|POSIX ?
-+#define __newlocale(a, b, c)        NULL
-+#define __freelocale(a)             ((void)0)
-+#define __duplocale(a)              __c_locale()
-+#endif
-+
-+namespace std
-+{
-+  template<>
-+    void
-+    __convert_to_v(const char* __s, float& __v, ios_base::iostate& __err,
-+		   const __c_locale& __cloc)
-+    {
-+      if (!(__err & ios_base::failbit))
-+	{
-+	  char* __sanity;
-+	  errno = 0;
-+	  float __f = __strtof_l(__s, &__sanity, __cloc);
-+          if (__sanity != __s && errno != ERANGE)
-+	    __v = __f;
-+	  else
-+	    __err |= ios_base::failbit;
-+	}
-+    }
-+
-+  template<>
-+    void
-+    __convert_to_v(const char* __s, double& __v, ios_base::iostate& __err,
-+		   const __c_locale& __cloc)
-+    {
-+      if (!(__err & ios_base::failbit))
-+	{
-+	  char* __sanity;
-+	  errno = 0;
-+	  double __d = __strtod_l(__s, &__sanity, __cloc);
-+          if (__sanity != __s && errno != ERANGE)
-+	    __v = __d;
-+	  else
-+	    __err |= ios_base::failbit;
-+	}
-+    }
-+
-+  template<>
-+    void
-+    __convert_to_v(const char* __s, long double& __v, ios_base::iostate& __err,
-+		   const __c_locale& __cloc)
-+    {
-+      if (!(__err & ios_base::failbit))
-+	{
-+	  char* __sanity;
-+	  errno = 0;
-+	  long double __ld = __strtold_l(__s, &__sanity, __cloc);
-+          if (__sanity != __s && errno != ERANGE)
-+	    __v = __ld;
-+	  else
-+	    __err |= ios_base::failbit;
-+	}
-+    }
-+
-+  void
-+  locale::facet::_S_create_c_locale(__c_locale& __cloc, const char* __s,
-+				    __c_locale __old)
-+  {
-+    __cloc = __newlocale(1 << LC_ALL, __s, __old);
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    if (!__cloc)
-+      {
-+	// This named locale is not supported by the underlying OS.
-+	__throw_runtime_error(__N("locale::facet::_S_create_c_locale "
-+			      "name not valid"));
-+      }
-+#endif
-+  }
-+
-+  void
-+  locale::facet::_S_destroy_c_locale(__c_locale& __cloc)
-+  {
-+    if (_S_get_c_locale() != __cloc)
-+      __freelocale(__cloc);
-+  }
-+
-+  __c_locale
-+  locale::facet::_S_clone_c_locale(__c_locale& __cloc)
-+  { return __duplocale(__cloc); }
-+} // namespace std
-+
-+namespace __gnu_cxx
-+{
-+  const char* const category_names[6 + _GLIBCXX_NUM_CATEGORIES] =
-+    {
-+      "LC_CTYPE",
-+      "LC_NUMERIC",
-+      "LC_TIME",
-+      "LC_COLLATE",
-+      "LC_MONETARY",
-+      "LC_MESSAGES",
-+#if _GLIBCXX_NUM_CATEGORIES != 0
-+      "LC_PAPER",
-+      "LC_NAME",
-+      "LC_ADDRESS",
-+      "LC_TELEPHONE",
-+      "LC_MEASUREMENT",
-+      "LC_IDENTIFICATION"
-+#endif
-+    };
-+}
-+
-+namespace std
-+{
-+  const char* const* const locale::_S_categories = __gnu_cxx::category_names;
-+}  // namespace std
-diff --git a/libstdc++-v3/config/locale/uclibc/c_locale.h b/libstdc++-v3/config/locale/uclibc/c_locale.h
-new file mode 100644
-index 0000000..da07c1f
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/c_locale.h
-@@ -0,0 +1,117 @@
-+// Wrapper for underlying C-language localization -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004, 2005 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.8  Standard locale categories.
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#ifndef _C_LOCALE_H
-+#define _C_LOCALE_H 1
-+
-+#pragma GCC system_header
-+
-+#include <cstring>              // get std::strlen
-+#include <cstdio>               // get std::snprintf or std::sprintf
-+#include <clocale>
-+#include <langinfo.h>		// For codecvt
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix this
-+#endif
-+#ifdef __UCLIBC_HAS_LOCALE__
-+#include <iconv.h>		// For codecvt using iconv, iconv_t
-+#endif
-+#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
-+#include <libintl.h> 		// For messages
-+#endif
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning what is _GLIBCXX_C_LOCALE_GNU for
-+#endif
-+#define _GLIBCXX_C_LOCALE_GNU 1
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix categories
-+#endif
-+// #define _GLIBCXX_NUM_CATEGORIES 6
-+#define _GLIBCXX_NUM_CATEGORIES 0
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+namespace __gnu_cxx
-+{
-+  extern "C" __typeof(uselocale) __uselocale;
-+}
-+#endif
-+
-+namespace std
-+{
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+  typedef __locale_t		__c_locale;
-+#else
-+  typedef int*			__c_locale;
-+#endif
-+
-+  // Convert numeric value of type _Tv to string and return length of
-+  // string.  If snprintf is available use it, otherwise fall back to
-+  // the unsafe sprintf which, in general, can be dangerous and should
-+  // be avoided.
-+  template<typename _Tv>
-+    int
-+    __convert_from_v(char* __out,
-+		     const int __size __attribute__ ((__unused__)),
-+		     const char* __fmt,
-+#ifdef __UCLIBC_HAS_XCLOCALE__
-+		     _Tv __v, const __c_locale& __cloc, int __prec)
-+    {
-+      __c_locale __old = __gnu_cxx::__uselocale(__cloc);
-+#else
-+		     _Tv __v, const __c_locale&, int __prec)
-+    {
-+# ifdef __UCLIBC_HAS_LOCALE__
-+      char* __old = std::setlocale(LC_ALL, NULL);
-+      char* __sav = new char[std::strlen(__old) + 1];
-+      std::strcpy(__sav, __old);
-+      std::setlocale(LC_ALL, "C");
-+# endif
-+#endif
-+
-+      const int __ret = std::snprintf(__out, __size, __fmt, __prec, __v);
-+
-+#ifdef __UCLIBC_HAS_XCLOCALE__
-+      __gnu_cxx::__uselocale(__old);
-+#elif defined __UCLIBC_HAS_LOCALE__
-+      std::setlocale(LC_ALL, __sav);
-+      delete [] __sav;
-+#endif
-+      return __ret;
-+    }
-+}
-+
-+#endif
-diff --git a/libstdc++-v3/config/locale/uclibc/codecvt_members.cc b/libstdc++-v3/config/locale/uclibc/codecvt_members.cc
-new file mode 100644
-index 0000000..64aa962
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/codecvt_members.cc
-@@ -0,0 +1,308 @@
-+// std::codecvt implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2002, 2003 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.1.5 - Template class codecvt
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#include <locale>
-+#include <cstdlib>  // For MB_CUR_MAX
-+#include <climits>  // For MB_LEN_MAX
-+#include <bits/c++locale_internal.h>
-+
-+namespace std
-+{
-+  // Specializations.
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  codecvt_base::result
-+  codecvt<wchar_t, char, mbstate_t>::
-+  do_out(state_type& __state, const intern_type* __from,
-+	 const intern_type* __from_end, const intern_type*& __from_next,
-+	 extern_type* __to, extern_type* __to_end,
-+	 extern_type*& __to_next) const
-+  {
-+    result __ret = ok;
-+    state_type __tmp_state(__state);
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_codecvt);
-+#endif
-+
-+    // wcsnrtombs is *very* fast but stops if encounters NUL characters:
-+    // in case we fall back to wcrtomb and then continue, in a loop.
-+    // NB: wcsnrtombs is a GNU extension
-+    for (__from_next = __from, __to_next = __to;
-+	 __from_next < __from_end && __to_next < __to_end
-+	 && __ret == ok;)
-+      {
-+	const intern_type* __from_chunk_end = wmemchr(__from_next, L'\0',
-+						      __from_end - __from_next);
-+	if (!__from_chunk_end)
-+	  __from_chunk_end = __from_end;
-+
-+	__from = __from_next;
-+	const size_t __conv = wcsnrtombs(__to_next, &__from_next,
-+					 __from_chunk_end - __from_next,
-+					 __to_end - __to_next, &__state);
-+	if (__conv == static_cast<size_t>(-1))
-+	  {
-+	    // In case of error, in order to stop at the exact place we
-+	    // have to start again from the beginning with a series of
-+	    // wcrtomb.
-+	    for (; __from < __from_next; ++__from)
-+	      __to_next += wcrtomb(__to_next, *__from, &__tmp_state);
-+	    __state = __tmp_state;
-+	    __ret = error;
-+	  }
-+	else if (__from_next && __from_next < __from_chunk_end)
-+	  {
-+	    __to_next += __conv;
-+	    __ret = partial;
-+	  }
-+	else
-+	  {
-+	    __from_next = __from_chunk_end;
-+	    __to_next += __conv;
-+	  }
-+
-+	if (__from_next < __from_end && __ret == ok)
-+	  {
-+	    extern_type __buf[MB_LEN_MAX];
-+	    __tmp_state = __state;
-+	    const size_t __conv = wcrtomb(__buf, *__from_next, &__tmp_state);
-+	    if (__conv > static_cast<size_t>(__to_end - __to_next))
-+	      __ret = partial;
-+	    else
-+	      {
-+		memcpy(__to_next, __buf, __conv);
-+		__state = __tmp_state;
-+		__to_next += __conv;
-+		++__from_next;
-+	      }
-+	  }
-+      }
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+
-+    return __ret;
-+  }
-+
-+  codecvt_base::result
-+  codecvt<wchar_t, char, mbstate_t>::
-+  do_in(state_type& __state, const extern_type* __from,
-+	const extern_type* __from_end, const extern_type*& __from_next,
-+	intern_type* __to, intern_type* __to_end,
-+	intern_type*& __to_next) const
-+  {
-+    result __ret = ok;
-+    state_type __tmp_state(__state);
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_codecvt);
-+#endif
-+
-+    // mbsnrtowcs is *very* fast but stops if encounters NUL characters:
-+    // in case we store a L'\0' and then continue, in a loop.
-+    // NB: mbsnrtowcs is a GNU extension
-+    for (__from_next = __from, __to_next = __to;
-+	 __from_next < __from_end && __to_next < __to_end
-+	 && __ret == ok;)
-+      {
-+	const extern_type* __from_chunk_end;
-+	__from_chunk_end = static_cast<const extern_type*>(memchr(__from_next, '\0',
-+								  __from_end
-+								  - __from_next));
-+	if (!__from_chunk_end)
-+	  __from_chunk_end = __from_end;
-+
-+	__from = __from_next;
-+	size_t __conv = mbsnrtowcs(__to_next, &__from_next,
-+				   __from_chunk_end - __from_next,
-+				   __to_end - __to_next, &__state);
-+	if (__conv == static_cast<size_t>(-1))
-+	  {
-+	    // In case of error, in order to stop at the exact place we
-+	    // have to start again from the beginning with a series of
-+	    // mbrtowc.
-+	    for (;; ++__to_next, __from += __conv)
-+	      {
-+		__conv = mbrtowc(__to_next, __from, __from_end - __from,
-+				 &__tmp_state);
-+		if (__conv == static_cast<size_t>(-1)
-+		    || __conv == static_cast<size_t>(-2))
-+		  break;
-+	      }
-+	    __from_next = __from;
-+	    __state = __tmp_state;
-+	    __ret = error;
-+	  }
-+	else if (__from_next && __from_next < __from_chunk_end)
-+	  {
-+	    // It is unclear what to return in this case (see DR 382).
-+	    __to_next += __conv;
-+	    __ret = partial;
-+	  }
-+	else
-+	  {
-+	    __from_next = __from_chunk_end;
-+	    __to_next += __conv;
-+	  }
-+
-+	if (__from_next < __from_end && __ret == ok)
-+	  {
-+	    if (__to_next < __to_end)
-+	      {
-+		// XXX Probably wrong for stateful encodings
-+		__tmp_state = __state;
-+		++__from_next;
-+		*__to_next++ = L'\0';
-+	      }
-+	    else
-+	      __ret = partial;
-+	  }
-+      }
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+
-+    return __ret;
-+  }
-+
-+  int
-+  codecvt<wchar_t, char, mbstate_t>::
-+  do_encoding() const throw()
-+  {
-+    // XXX This implementation assumes that the encoding is
-+    // stateless and is either single-byte or variable-width.
-+    int __ret = 0;
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_codecvt);
-+#endif
-+    if (MB_CUR_MAX == 1)
-+      __ret = 1;
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+    return __ret;
-+  }
-+
-+  int
-+  codecvt<wchar_t, char, mbstate_t>::
-+  do_max_length() const throw()
-+  {
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_codecvt);
-+#endif
-+    // XXX Probably wrong for stateful encodings.
-+    int __ret = MB_CUR_MAX;
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+    return __ret;
-+  }
-+
-+  int
-+  codecvt<wchar_t, char, mbstate_t>::
-+  do_length(state_type& __state, const extern_type* __from,
-+	    const extern_type* __end, size_t __max) const
-+  {
-+    int __ret = 0;
-+    state_type __tmp_state(__state);
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_codecvt);
-+#endif
-+
-+    // mbsnrtowcs is *very* fast but stops if it encounters NUL characters:
-+    // in that case we advance past it and then continue, in a loop.
-+    // NB: mbsnrtowcs is a GNU extension
-+
-+    // A dummy internal buffer is needed in order for mbsnrtowcs to consider
-+    // its fourth parameter (it would not with NULL as the first parameter).
-+    wchar_t* __to = static_cast<wchar_t*>(__builtin_alloca(sizeof(wchar_t)
-+							   * __max));
-+    while (__from < __end && __max)
-+      {
-+	const extern_type* __from_chunk_end;
-+	__from_chunk_end = static_cast<const extern_type*>(memchr(__from, '\0',
-+								  __end
-+								  - __from));
-+	if (!__from_chunk_end)
-+	  __from_chunk_end = __end;
-+
-+	const extern_type* __tmp_from = __from;
-+	size_t __conv = mbsnrtowcs(__to, &__from,
-+				   __from_chunk_end - __from,
-+				   __max, &__state);
-+	if (__conv == static_cast<size_t>(-1))
-+	  {
-+	    // In case of error, in order to stop at the exact place we
-+	    // have to start again from the beginning with a series of
-+	    // mbrtowc.
-+	    for (__from = __tmp_from;; __from += __conv)
-+	      {
-+		__conv = mbrtowc(NULL, __from, __end - __from,
-+				 &__tmp_state);
-+		if (__conv == static_cast<size_t>(-1)
-+		    || __conv == static_cast<size_t>(-2))
-+		  break;
-+	      }
-+	    __state = __tmp_state;
-+	    __ret += __from - __tmp_from;
-+	    break;
-+	  }
-+	if (!__from)
-+	  __from = __from_chunk_end;
-+
-+	__ret += __from - __tmp_from;
-+	__max -= __conv;
-+
-+	if (__from < __end && __max)
-+	  {
-+	    // XXX Probably wrong for stateful encodings
-+	    __tmp_state = __state;
-+	    ++__from;
-+	    ++__ret;
-+	    --__max;
-+	  }
-+      }
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+
-+    return __ret;
-+  }
-+#endif
-+}
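For reference, a simplified standalone sketch of the chunk-at-NUL strategy that the do_in() and do_length() implementations above rely on: mbsnrtowcs() stops at the first NUL byte, so the input is split with memchr() and converted one NUL-delimited chunk at a time, with the wide NUL stored by hand between chunks. The helper name and the sample input are illustrative only; the sketch assumes a POSIX.1-2008 C library that provides mbsnrtowcs() and it omits the mbrtowc() error-recovery path shown above.

#include <clocale>
#include <cstring>
#include <cwchar>
#include <cstdio>

// Convert [from, from_end), which may contain embedded NUL bytes, into at
// most to_cap wide characters.  Returns the number of wide characters
// written.  Hypothetical helper, modelled on do_in() above.
static std::size_t
convert_with_embedded_nuls(const char* from, const char* from_end,
                           wchar_t* to, std::size_t to_cap)
{
  std::mbstate_t state = std::mbstate_t();
  std::size_t written = 0;
  while (from < from_end && written < to_cap)
    {
      // Find the end of the current NUL-free chunk.
      const char* chunk_end =
        static_cast<const char*>(std::memchr(from, '\0', from_end - from));
      if (!chunk_end)
        chunk_end = from_end;

      std::size_t conv = mbsnrtowcs(to + written, &from,
                                    chunk_end - from,
                                    to_cap - written, &state);
      if (conv == static_cast<std::size_t>(-1))
        return written;              // conversion error: give up here
      written += conv;

      if (from < chunk_end)          // output full or incomplete sequence;
        break;                       // the real code reports "partial" here

      if (from < from_end && written < to_cap)
        {
          ++from;                    // 'from' points at the NUL: skip it ...
          to[written++] = L'\0';     // ... and store the wide NUL explicitly
        }
    }
  return written;
}

int main()
{
  std::setlocale(LC_ALL, "C");
  const char src[] = { 'a', 'b', '\0', 'c', 'd' };
  wchar_t dst[8];
  std::size_t n = convert_with_embedded_nuls(src, src + sizeof src, dst, 8);
  std::printf("converted %zu wide characters\n", n);   // expect 5
  return 0;
}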
-diff --git a/libstdc++-v3/config/locale/uclibc/collate_members.cc b/libstdc++-v3/config/locale/uclibc/collate_members.cc
-new file mode 100644
-index 0000000..c2664a7
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/collate_members.cc
-@@ -0,0 +1,80 @@
-+// std::collate implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.4.1.2  collate virtual functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#include <locale>
-+#include <bits/c++locale_internal.h>
-+
-+#ifndef __UCLIBC_HAS_XLOCALE__
-+#define __strcoll_l(S1, S2, L)      strcoll((S1), (S2))
-+#define __strxfrm_l(S1, S2, N, L)   strxfrm((S1), (S2), (N))
-+#define __wcscoll_l(S1, S2, L)      wcscoll((S1), (S2))
-+#define __wcsxfrm_l(S1, S2, N, L)   wcsxfrm((S1), (S2), (N))
-+#endif
-+
-+namespace std
-+{
-+  // These are basically extensions to char_traits, and perhaps should
-+  // be put there instead of here.
-+  template<>
-+    int
-+    collate<char>::_M_compare(const char* __one, const char* __two) const
-+    {
-+      int __cmp = __strcoll_l(__one, __two, _M_c_locale_collate);
-+      return (__cmp >> (8 * sizeof (int) - 2)) | (__cmp != 0);
-+    }
-+
-+  template<>
-+    size_t
-+    collate<char>::_M_transform(char* __to, const char* __from,
-+				size_t __n) const
-+    { return __strxfrm_l(__to, __from, __n, _M_c_locale_collate); }
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  template<>
-+    int
-+    collate<wchar_t>::_M_compare(const wchar_t* __one,
-+				 const wchar_t* __two) const
-+    {
-+      int __cmp = __wcscoll_l(__one, __two, _M_c_locale_collate);
-+      return (__cmp >> (8 * sizeof (int) - 2)) | (__cmp != 0);
-+    }
-+
-+  template<>
-+    size_t
-+    collate<wchar_t>::_M_transform(wchar_t* __to, const wchar_t* __from,
-+				   size_t __n) const
-+    { return __wcsxfrm_l(__to, __from, __n, _M_c_locale_collate); }
-+#endif
-+}
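A quick standalone check of the sign-normalisation expression used by both _M_compare() specializations above: strcoll()/wcscoll() may return any negative or positive value, and (__cmp >> (8 * sizeof (int) - 2)) | (__cmp != 0) folds that into exactly -1, 0 or 1, assuming the usual arithmetic right shift of negative ints that GCC provides. The helper name below is illustrative; strcmp() stands in for strcoll() since the two agree in the "C" locale.

#include <cassert>
#include <cstring>

// Fold an arbitrary comparison result into -1, 0 or 1, exactly as
// collate<>::_M_compare() does above.
static int normalise(int cmp)
{
  return (cmp >> (8 * sizeof(int) - 2)) | (cmp != 0);
}

int main()
{
  assert(normalise(std::strcmp("abc", "abd")) == -1);  // some negative value
  assert(normalise(std::strcmp("abc", "abc")) ==  0);
  assert(normalise(std::strcmp("abd", "abc")) ==  1);  // some positive value
  assert(normalise(-123456) == -1 && normalise(987) == 1);
  return 0;
}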
-diff --git a/libstdc++-v3/config/locale/uclibc/ctype_members.cc b/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-new file mode 100644
-index 0000000..7294e3a
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-@@ -0,0 +1,300 @@
-+// std::ctype implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.1.1.2  ctype virtual functions.
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#define _LIBC
-+#include <locale>
-+#undef _LIBC
-+#include <bits/c++locale_internal.h>
-+
-+#ifndef __UCLIBC_HAS_XLOCALE__
-+#define __wctype_l(S, L)           wctype((S))
-+#define __towupper_l(C, L)         towupper((C))
-+#define __towlower_l(C, L)         towlower((C))
-+#define __iswctype_l(C, M, L)      iswctype((C), (M))
-+#endif
-+
-+namespace std
-+{
-+  // NB: The other ctype<char> specializations are in src/locale.cc and
-+  // various /config/os/* files.
-+  template<>
-+    ctype_byname<char>::ctype_byname(const char* __s, size_t __refs)
-+    : ctype<char>(0, false, __refs)
-+    {
-+      if (std::strcmp(__s, "C") != 0 && std::strcmp(__s, "POSIX") != 0)
-+	{
-+	  this->_S_destroy_c_locale(this->_M_c_locale_ctype);
-+	  this->_S_create_c_locale(this->_M_c_locale_ctype, __s);
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	  this->_M_toupper = this->_M_c_locale_ctype->__ctype_toupper;
-+	  this->_M_tolower = this->_M_c_locale_ctype->__ctype_tolower;
-+	  this->_M_table = this->_M_c_locale_ctype->__ctype_b;
-+#endif
-+	}
-+    }
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  ctype<wchar_t>::__wmask_type
-+  ctype<wchar_t>::_M_convert_to_wmask(const mask __m) const
-+  {
-+    __wmask_type __ret;
-+    switch (__m)
-+      {
-+      case space:
-+	__ret = __wctype_l("space", _M_c_locale_ctype);
-+	break;
-+      case print:
-+	__ret = __wctype_l("print", _M_c_locale_ctype);
-+	break;
-+      case cntrl:
-+	__ret = __wctype_l("cntrl", _M_c_locale_ctype);
-+	break;
-+      case upper:
-+	__ret = __wctype_l("upper", _M_c_locale_ctype);
-+	break;
-+      case lower:
-+	__ret = __wctype_l("lower", _M_c_locale_ctype);
-+	break;
-+      case alpha:
-+	__ret = __wctype_l("alpha", _M_c_locale_ctype);
-+	break;
-+      case digit:
-+	__ret = __wctype_l("digit", _M_c_locale_ctype);
-+	break;
-+      case punct:
-+	__ret = __wctype_l("punct", _M_c_locale_ctype);
-+	break;
-+      case xdigit:
-+	__ret = __wctype_l("xdigit", _M_c_locale_ctype);
-+	break;
-+      case alnum:
-+	__ret = __wctype_l("alnum", _M_c_locale_ctype);
-+	break;
-+      case graph:
-+	__ret = __wctype_l("graph", _M_c_locale_ctype);
-+	break;
-+      default:
-+	__ret = __wmask_type();
-+      }
-+    return __ret;
-+  }
-+
-+  wchar_t
-+  ctype<wchar_t>::do_toupper(wchar_t __c) const
-+  { return __towupper_l(__c, _M_c_locale_ctype); }
-+
-+  const wchar_t*
-+  ctype<wchar_t>::do_toupper(wchar_t* __lo, const wchar_t* __hi) const
-+  {
-+    while (__lo < __hi)
-+      {
-+        *__lo = __towupper_l(*__lo, _M_c_locale_ctype);
-+        ++__lo;
-+      }
-+    return __hi;
-+  }
-+
-+  wchar_t
-+  ctype<wchar_t>::do_tolower(wchar_t __c) const
-+  { return __towlower_l(__c, _M_c_locale_ctype); }
-+
-+  const wchar_t*
-+  ctype<wchar_t>::do_tolower(wchar_t* __lo, const wchar_t* __hi) const
-+  {
-+    while (__lo < __hi)
-+      {
-+        *__lo = __towlower_l(*__lo, _M_c_locale_ctype);
-+        ++__lo;
-+      }
-+    return __hi;
-+  }
-+
-+  bool
-+  ctype<wchar_t>::
-+  do_is(mask __m, wchar_t __c) const
-+  {
-+    // Highest bitmask in ctype_base == 10, but there is an extra
-+    // one in the "C" library for blank.
-+    bool __ret = false;
-+    const size_t __bitmasksize = 11;
-+    for (size_t __bitcur = 0; __bitcur <= __bitmasksize; ++__bitcur)
-+      if (__m & _M_bit[__bitcur]
-+	  && __iswctype_l(__c, _M_wmask[__bitcur], _M_c_locale_ctype))
-+	{
-+	  __ret = true;
-+	  break;
-+	}
-+    return __ret;
-+  }
-+
-+  const wchar_t*
-+  ctype<wchar_t>::
-+  do_is(const wchar_t* __lo, const wchar_t* __hi, mask* __vec) const
-+  {
-+    for (; __lo < __hi; ++__vec, ++__lo)
-+      {
-+	// Highest bitmask in ctype_base == 10, but there is an extra
-+	// one in the "C" library for blank.
-+	const size_t __bitmasksize = 11;
-+	mask __m = 0;
-+	for (size_t __bitcur = 0; __bitcur <= __bitmasksize; ++__bitcur)
-+	  if (__iswctype_l(*__lo, _M_wmask[__bitcur], _M_c_locale_ctype))
-+	    __m |= _M_bit[__bitcur];
-+	*__vec = __m;
-+      }
-+    return __hi;
-+  }
-+
-+  const wchar_t*
-+  ctype<wchar_t>::
-+  do_scan_is(mask __m, const wchar_t* __lo, const wchar_t* __hi) const
-+  {
-+    while (__lo < __hi && !this->do_is(__m, *__lo))
-+      ++__lo;
-+    return __lo;
-+  }
-+
-+  const wchar_t*
-+  ctype<wchar_t>::
-+  do_scan_not(mask __m, const char_type* __lo, const char_type* __hi) const
-+  {
-+    while (__lo < __hi && this->do_is(__m, *__lo) != 0)
-+      ++__lo;
-+    return __lo;
-+  }
-+
-+  wchar_t
-+  ctype<wchar_t>::
-+  do_widen(char __c) const
-+  { return _M_widen[static_cast<unsigned char>(__c)]; }
-+
-+  const char*
-+  ctype<wchar_t>::
-+  do_widen(const char* __lo, const char* __hi, wchar_t* __dest) const
-+  {
-+    while (__lo < __hi)
-+      {
-+	*__dest = _M_widen[static_cast<unsigned char>(*__lo)];
-+	++__lo;
-+	++__dest;
-+      }
-+    return __hi;
-+  }
-+
-+  char
-+  ctype<wchar_t>::
-+  do_narrow(wchar_t __wc, char __dfault) const
-+  {
-+    if (__wc >= 0 && __wc < 128 && _M_narrow_ok)
-+      return _M_narrow[__wc];
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_ctype);
-+#endif
-+    const int __c = wctob(__wc);
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+    return (__c == EOF ? __dfault : static_cast<char>(__c));
-+  }
-+
-+  const wchar_t*
-+  ctype<wchar_t>::
-+  do_narrow(const wchar_t* __lo, const wchar_t* __hi, char __dfault,
-+	    char* __dest) const
-+  {
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_ctype);
-+#endif
-+    if (_M_narrow_ok)
-+      while (__lo < __hi)
-+	{
-+	  if (*__lo >= 0 && *__lo < 128)
-+	    *__dest = _M_narrow[*__lo];
-+	  else
-+	    {
-+	      const int __c = wctob(*__lo);
-+	      *__dest = (__c == EOF ? __dfault : static_cast<char>(__c));
-+	    }
-+	  ++__lo;
-+	  ++__dest;
-+	}
-+    else
-+      while (__lo < __hi)
-+	{
-+	  const int __c = wctob(*__lo);
-+	  *__dest = (__c == EOF ? __dfault : static_cast<char>(__c));
-+	  ++__lo;
-+	  ++__dest;
-+	}
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+    return __hi;
-+  }
-+
-+  void
-+  ctype<wchar_t>::_M_initialize_ctype()
-+  {
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __c_locale __old = __uselocale(_M_c_locale_ctype);
-+#endif
-+    wint_t __i;
-+    for (__i = 0; __i < 128; ++__i)
-+      {
-+	const int __c = wctob(__i);
-+	if (__c == EOF)
-+	  break;
-+	else
-+	  _M_narrow[__i] = static_cast<char>(__c);
-+      }
-+    if (__i == 128)
-+      _M_narrow_ok = true;
-+    else
-+      _M_narrow_ok = false;
-+    for (size_t __j = 0;
-+	 __j < sizeof(_M_widen) / sizeof(wint_t); ++__j)
-+      _M_widen[__j] = btowc(__j);
-+
-+    for (size_t __k = 0; __k <= 11; ++__k)
-+      {
-+	_M_bit[__k] = static_cast<mask>(_ISbit(__k));
-+	_M_wmask[__k] = _M_convert_to_wmask(_M_bit[__k]);
-+      }
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+    __uselocale(__old);
-+#endif
-+  }
-+#endif //  _GLIBCXX_USE_WCHAR_T
-+}
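The narrow/widen caching performed by _M_initialize_ctype() above boils down to probing wctob() for the first 128 code points and only trusting the fast _M_narrow table when every one of them maps to a single byte; do_narrow() otherwise falls back to wctob() per character. A standalone sketch of that probe, assuming an ASCII-compatible "C" locale:

#include <clocale>
#include <cstdio>
#include <cwchar>

int main()
{
  std::setlocale(LC_ALL, "C");

  char narrow[128];
  bool narrow_ok = true;
  for (wint_t i = 0; i < 128; ++i)
    {
      const int c = std::wctob(i);   // wide -> single byte, EOF if impossible
      if (c == EOF)
        {
          narrow_ok = false;         // some code point is not single-byte
          break;
        }
      narrow[i] = static_cast<char>(c);
    }

  wchar_t widen[128];
  for (int j = 0; j < 128; ++j)      // the reverse table used by do_widen()
    widen[j] = static_cast<wchar_t>(std::btowc(j));

  std::printf("narrow table usable: %s\n", narrow_ok ? "yes" : "no");
  if (narrow_ok)
    std::printf("narrow[L'A'] = %c, widen['A'] = %lc\n",
                narrow[L'A'], static_cast<wint_t>(widen['A']));
  return 0;
}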
-diff --git a/libstdc++-v3/config/locale/uclibc/messages_members.cc b/libstdc++-v3/config/locale/uclibc/messages_members.cc
-new file mode 100644
-index 0000000..13594d9
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/messages_members.cc
-@@ -0,0 +1,100 @@
-+// std::messages implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.7.1.2  messages virtual functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#include <locale>
-+#include <bits/c++locale_internal.h>
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix gettext stuff
-+#endif
-+#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
-+extern "C" char *__dcgettext(const char *domainname,
-+			     const char *msgid, int category);
-+#undef gettext
-+#define gettext(msgid) __dcgettext(NULL, msgid, LC_MESSAGES)
-+#else
-+#undef gettext
-+#define gettext(msgid) (msgid)
-+#endif
-+
-+namespace std
-+{
-+  // Specializations.
-+  template<>
-+    string
-+    messages<char>::do_get(catalog, int, int, const string& __dfault) const
-+    {
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+      __c_locale __old = __uselocale(_M_c_locale_messages);
-+      const char* __msg = const_cast<const char*>(gettext(__dfault.c_str()));
-+      __uselocale(__old);
-+      return string(__msg);
-+#elif defined __UCLIBC_HAS_LOCALE__
-+      char* __old = strdup(setlocale(LC_ALL, NULL));
-+      setlocale(LC_ALL, _M_name_messages);
-+      const char* __msg = gettext(__dfault.c_str());
-+      setlocale(LC_ALL, __old);
-+      free(__old);
-+      return string(__msg);
-+#else
-+      const char* __msg = gettext(__dfault.c_str());
-+      return string(__msg);
-+#endif
-+    }
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  template<>
-+    wstring
-+    messages<wchar_t>::do_get(catalog, int, int, const wstring& __dfault) const
-+    {
-+# ifdef __UCLIBC_HAS_XLOCALE__
-+      __c_locale __old = __uselocale(_M_c_locale_messages);
-+      char* __msg = gettext(_M_convert_to_char(__dfault));
-+      __uselocale(__old);
-+      return _M_convert_from_char(__msg);
-+# elif defined __UCLIBC_HAS_LOCALE__
-+      char* __old = strdup(setlocale(LC_ALL, NULL));
-+      setlocale(LC_ALL, _M_name_messages);
-+      char* __msg = gettext(_M_convert_to_char(__dfault));
-+      setlocale(LC_ALL, __old);
-+      free(__old);
-+      return _M_convert_from_char(__msg);
-+# else
-+      char* __msg = gettext(_M_convert_to_char(__dfault));
-+      return _M_convert_from_char(__msg);
-+# endif
-+    }
-+#endif
-+}
-diff --git a/libstdc++-v3/config/locale/uclibc/messages_members.h b/libstdc++-v3/config/locale/uclibc/messages_members.h
-new file mode 100644
-index 0000000..1424078
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/messages_members.h
-@@ -0,0 +1,118 @@
-+// std::messages implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.7.1.2  messages functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix prototypes for *textdomain funcs
-+#endif
-+#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
-+extern "C" char *__textdomain(const char *domainname);
-+extern "C" char *__bindtextdomain(const char *domainname,
-+				  const char *dirname);
-+#else
-+#undef __textdomain
-+#undef __bindtextdomain
-+#define __textdomain(D)           ((void)0)
-+#define __bindtextdomain(D,P)     ((void)0)
-+#endif
-+
-+  // Non-virtual member functions.
-+  template<typename _CharT>
-+     messages<_CharT>::messages(size_t __refs)
-+     : facet(__refs), _M_c_locale_messages(_S_get_c_locale()),
-+     _M_name_messages(_S_get_c_name())
-+     { }
-+
-+  template<typename _CharT>
-+     messages<_CharT>::messages(__c_locale __cloc, const char* __s,
-+				size_t __refs)
-+     : facet(__refs), _M_c_locale_messages(_S_clone_c_locale(__cloc)),
-+     _M_name_messages(__s)
-+     {
-+       char* __tmp = new char[std::strlen(__s) + 1];
-+       std::strcpy(__tmp, __s);
-+       _M_name_messages = __tmp;
-+     }
-+
-+  template<typename _CharT>
-+    typename messages<_CharT>::catalog
-+    messages<_CharT>::open(const basic_string<char>& __s, const locale& __loc,
-+			   const char* __dir) const
-+    {
-+      __bindtextdomain(__s.c_str(), __dir);
-+      return this->do_open(__s, __loc);
-+    }
-+
-+  // Virtual member functions.
-+  template<typename _CharT>
-+    messages<_CharT>::~messages()
-+    {
-+      if (_M_name_messages != _S_get_c_name())
-+	delete [] _M_name_messages;
-+      _S_destroy_c_locale(_M_c_locale_messages);
-+    }
-+
-+  template<typename _CharT>
-+    typename messages<_CharT>::catalog
-+    messages<_CharT>::do_open(const basic_string<char>& __s,
-+			      const locale&) const
-+    {
-+      // No error checking is done; assume the catalog exists and can
-+      // be used.
-+      __textdomain(__s.c_str());
-+      return 0;
-+    }
-+
-+  template<typename _CharT>
-+    void
-+    messages<_CharT>::do_close(catalog) const
-+    { }
-+
-+   // messages_byname
-+   template<typename _CharT>
-+     messages_byname<_CharT>::messages_byname(const char* __s, size_t __refs)
-+     : messages<_CharT>(__refs)
-+     {
-+       if (this->_M_name_messages != locale::facet::_S_get_c_name())
-+	 delete [] this->_M_name_messages;
-+       char* __tmp = new char[std::strlen(__s) + 1];
-+       std::strcpy(__tmp, __s);
-+       this->_M_name_messages = __tmp;
-+
-+       if (std::strcmp(__s, "C") != 0 && std::strcmp(__s, "POSIX") != 0)
-+	 {
-+	   this->_S_destroy_c_locale(this->_M_c_locale_messages);
-+	   this->_S_create_c_locale(this->_M_c_locale_messages, __s);
-+	 }
-+     }
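Taken together, the two messages files above mean that opening a catalog never fails and, when gettext finds nothing (or only the stub macros are compiled in), get() hands back the caller's default string. A minimal usage sketch; the catalog name "demo" is hypothetical:

#include <iostream>
#include <locale>
#include <string>

int main()
{
  std::locale loc("C");
  const std::messages<char>& msgs = std::use_facet<std::messages<char> >(loc);

  // Opening "demo" is expected to succeed trivially, since do_open() above
  // performs no error checking.
  std::messages_base::catalog cat = msgs.open("demo", loc);
  std::string s = msgs.get(cat, 0, 0, "fallback text");
  msgs.close(cat);

  std::cout << s << std::endl;   // prints "fallback text" without a catalog
  return 0;
}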
-diff --git a/libstdc++-v3/config/locale/uclibc/monetary_members.cc b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-new file mode 100644
-index 0000000..aa52731
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-@@ -0,0 +1,692 @@
-+// std::moneypunct implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.6.3.2  moneypunct virtual functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#define _LIBC
-+#include <locale>
-+#undef _LIBC
-+#include <bits/c++locale_internal.h>
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning optimize this for uclibc
-+#warning tailor for stub locale support
-+#endif
-+
-+#ifndef __UCLIBC_HAS_XLOCALE__
-+#define __nl_langinfo_l(N, L)         nl_langinfo((N))
-+#endif
-+
-+namespace std
-+{
-+  // Construct and return valid pattern consisting of some combination of:
-+  // space none symbol sign value
-+  money_base::pattern
-+  money_base::_S_construct_pattern(char __precedes, char __space, char __posn)
-+  {
-+    pattern __ret;
-+
-+    // This insanely complicated routine attempts to construct a valid
-+    // pattern for use with moneypunct. A couple of invariants:
-+
-+    // if (__precedes) symbol -> value
-+    // else value -> symbol
-+
-+    // if (__space) space
-+    // else none
-+
-+    // none == never first
-+    // space never first or last
-+
-+    // Any elegant implementations of this are welcome.
-+    switch (__posn)
-+      {
-+      case 0:
-+      case 1:
-+	// 1 The sign precedes the value and symbol.
-+	__ret.field[0] = sign;
-+	if (__space)
-+	  {
-+	    // Pattern starts with sign.
-+	    if (__precedes)
-+	      {
-+		__ret.field[1] = symbol;
-+		__ret.field[3] = value;
-+	      }
-+	    else
-+	      {
-+		__ret.field[1] = value;
-+		__ret.field[3] = symbol;
-+	      }
-+	    __ret.field[2] = space;
-+	  }
-+	else
-+	  {
-+	    // Pattern starts with sign and ends with none.
-+	    if (__precedes)
-+	      {
-+		__ret.field[1] = symbol;
-+		__ret.field[2] = value;
-+	      }
-+	    else
-+	      {
-+		__ret.field[1] = value;
-+		__ret.field[2] = symbol;
-+	      }
-+	    __ret.field[3] = none;
-+	  }
-+	break;
-+      case 2:
-+	// 2 The sign follows the value and symbol.
-+	if (__space)
-+	  {
-+	    // Pattern ends with sign.
-+	    if (__precedes)
-+	      {
-+		__ret.field[0] = symbol;
-+		__ret.field[2] = value;
-+	      }
-+	    else
-+	      {
-+		__ret.field[0] = value;
-+		__ret.field[2] = symbol;
-+	      }
-+	    __ret.field[1] = space;
-+	    __ret.field[3] = sign;
-+	  }
-+	else
-+	  {
-+	    // Pattern ends with sign then none.
-+	    if (__precedes)
-+	      {
-+		__ret.field[0] = symbol;
-+		__ret.field[1] = value;
-+	      }
-+	    else
-+	      {
-+		__ret.field[0] = value;
-+		__ret.field[1] = symbol;
-+	      }
-+	    __ret.field[2] = sign;
-+	    __ret.field[3] = none;
-+	  }
-+	break;
-+      case 3:
-+	// 3 The sign immediately precedes the symbol.
-+	if (__precedes)
-+	  {
-+	    __ret.field[0] = sign;
-+	    __ret.field[1] = symbol;
-+	    if (__space)
-+	      {
-+		__ret.field[2] = space;
-+		__ret.field[3] = value;
-+	      }
-+	    else
-+	      {
-+		__ret.field[2] = value;
-+		__ret.field[3] = none;
-+	      }
-+	  }
-+	else
-+	  {
-+	    __ret.field[0] = value;
-+	    if (__space)
-+	      {
-+		__ret.field[1] = space;
-+		__ret.field[2] = sign;
-+		__ret.field[3] = symbol;
-+	      }
-+	    else
-+	      {
-+		__ret.field[1] = sign;
-+		__ret.field[2] = symbol;
-+		__ret.field[3] = none;
-+	      }
-+	  }
-+	break;
-+      case 4:
-+	// 4 The sign immediately follows the symbol.
-+	if (__precedes)
-+	  {
-+	    __ret.field[0] = symbol;
-+	    __ret.field[1] = sign;
-+	    if (__space)
-+	      {
-+		__ret.field[2] = space;
-+		__ret.field[3] = value;
-+	      }
-+	    else
-+	      {
-+		__ret.field[2] = value;
-+		__ret.field[3] = none;
-+	      }
-+	  }
-+	else
-+	  {
-+	    __ret.field[0] = value;
-+	    if (__space)
-+	      {
-+		__ret.field[1] = space;
-+		__ret.field[2] = symbol;
-+		__ret.field[3] = sign;
-+	      }
-+	    else
-+	      {
-+		__ret.field[1] = symbol;
-+		__ret.field[2] = sign;
-+		__ret.field[3] = none;
-+	      }
-+	  }
-+	break;
-+      default:
-+	;
-+      }
-+    return __ret;
-+  }
-+
-+  template<>
-+    void
-+    moneypunct<char, true>::_M_initialize_moneypunct(__c_locale __cloc,
-+						     const char*)
-+    {
-+      if (!_M_data)
-+	_M_data = new __moneypunct_cache<char, true>;
-+
-+      if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_data->_M_decimal_point = '.';
-+	  _M_data->_M_thousands_sep = ',';
-+	  _M_data->_M_grouping = "";
-+	  _M_data->_M_grouping_size = 0;
-+	  _M_data->_M_curr_symbol = "";
-+	  _M_data->_M_curr_symbol_size = 0;
-+	  _M_data->_M_positive_sign = "";
-+	  _M_data->_M_positive_sign_size = 0;
-+	  _M_data->_M_negative_sign = "";
-+	  _M_data->_M_negative_sign_size = 0;
-+	  _M_data->_M_frac_digits = 0;
-+	  _M_data->_M_pos_format = money_base::_S_default_pattern;
-+	  _M_data->_M_neg_format = money_base::_S_default_pattern;
-+
-+	  for (size_t __i = 0; __i < money_base::_S_end; ++__i)
-+	    _M_data->_M_atoms[__i] = money_base::_S_atoms[__i];
-+	}
-+      else
-+	{
-+	  // Named locale.
-+	  _M_data->_M_decimal_point = *(__nl_langinfo_l(__MON_DECIMAL_POINT,
-+							__cloc));
-+	  _M_data->_M_thousands_sep = *(__nl_langinfo_l(__MON_THOUSANDS_SEP,
-+							__cloc));
-+	  _M_data->_M_grouping = __nl_langinfo_l(__MON_GROUPING, __cloc);
-+	  _M_data->_M_grouping_size = strlen(_M_data->_M_grouping);
-+	  _M_data->_M_positive_sign = __nl_langinfo_l(__POSITIVE_SIGN, __cloc);
-+	  _M_data->_M_positive_sign_size = strlen(_M_data->_M_positive_sign);
-+
-+	  char __nposn = *(__nl_langinfo_l(__INT_N_SIGN_POSN, __cloc));
-+	  if (!__nposn)
-+	    _M_data->_M_negative_sign = "()";
-+	  else
-+	    _M_data->_M_negative_sign = __nl_langinfo_l(__NEGATIVE_SIGN,
-+							__cloc);
-+	  _M_data->_M_negative_sign_size = strlen(_M_data->_M_negative_sign);
-+
-+	  // _Intl == true
-+	  _M_data->_M_curr_symbol = __nl_langinfo_l(__INT_CURR_SYMBOL, __cloc);
-+	  _M_data->_M_curr_symbol_size = strlen(_M_data->_M_curr_symbol);
-+	  _M_data->_M_frac_digits = *(__nl_langinfo_l(__INT_FRAC_DIGITS,
-+						      __cloc));
-+	  char __pprecedes = *(__nl_langinfo_l(__INT_P_CS_PRECEDES, __cloc));
-+	  char __pspace = *(__nl_langinfo_l(__INT_P_SEP_BY_SPACE, __cloc));
-+	  char __pposn = *(__nl_langinfo_l(__INT_P_SIGN_POSN, __cloc));
-+	  _M_data->_M_pos_format = _S_construct_pattern(__pprecedes, __pspace,
-+							__pposn);
-+	  char __nprecedes = *(__nl_langinfo_l(__INT_N_CS_PRECEDES, __cloc));
-+	  char __nspace = *(__nl_langinfo_l(__INT_N_SEP_BY_SPACE, __cloc));
-+	  _M_data->_M_neg_format = _S_construct_pattern(__nprecedes, __nspace,
-+							__nposn);
-+	}
-+    }
-+
-+  template<>
-+    void
-+    moneypunct<char, false>::_M_initialize_moneypunct(__c_locale __cloc,
-+						      const char*)
-+    {
-+      if (!_M_data)
-+	_M_data = new __moneypunct_cache<char, false>;
-+
-+      if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_data->_M_decimal_point = '.';
-+	  _M_data->_M_thousands_sep = ',';
-+	  _M_data->_M_grouping = "";
-+	  _M_data->_M_grouping_size = 0;
-+	  _M_data->_M_curr_symbol = "";
-+	  _M_data->_M_curr_symbol_size = 0;
-+	  _M_data->_M_positive_sign = "";
-+	  _M_data->_M_positive_sign_size = 0;
-+	  _M_data->_M_negative_sign = "";
-+	  _M_data->_M_negative_sign_size = 0;
-+	  _M_data->_M_frac_digits = 0;
-+	  _M_data->_M_pos_format = money_base::_S_default_pattern;
-+	  _M_data->_M_neg_format = money_base::_S_default_pattern;
-+
-+	  for (size_t __i = 0; __i < money_base::_S_end; ++__i)
-+	    _M_data->_M_atoms[__i] = money_base::_S_atoms[__i];
-+	}
-+      else
-+	{
-+	  // Named locale.
-+	  _M_data->_M_decimal_point = *(__nl_langinfo_l(__MON_DECIMAL_POINT,
-+							__cloc));
-+	  _M_data->_M_thousands_sep = *(__nl_langinfo_l(__MON_THOUSANDS_SEP,
-+							__cloc));
-+	  _M_data->_M_grouping = __nl_langinfo_l(__MON_GROUPING, __cloc);
-+	  _M_data->_M_grouping_size = strlen(_M_data->_M_grouping);
-+	  _M_data->_M_positive_sign = __nl_langinfo_l(__POSITIVE_SIGN, __cloc);
-+	  _M_data->_M_positive_sign_size = strlen(_M_data->_M_positive_sign);
-+
-+	  char __nposn = *(__nl_langinfo_l(__N_SIGN_POSN, __cloc));
-+	  if (!__nposn)
-+	    _M_data->_M_negative_sign = "()";
-+	  else
-+	    _M_data->_M_negative_sign = __nl_langinfo_l(__NEGATIVE_SIGN,
-+							__cloc);
-+	  _M_data->_M_negative_sign_size = strlen(_M_data->_M_negative_sign);
-+
-+	  // _Intl == false
-+	  _M_data->_M_curr_symbol = __nl_langinfo_l(__CURRENCY_SYMBOL, __cloc);
-+	  _M_data->_M_curr_symbol_size = strlen(_M_data->_M_curr_symbol);
-+	  _M_data->_M_frac_digits = *(__nl_langinfo_l(__FRAC_DIGITS, __cloc));
-+	  char __pprecedes = *(__nl_langinfo_l(__P_CS_PRECEDES, __cloc));
-+	  char __pspace = *(__nl_langinfo_l(__P_SEP_BY_SPACE, __cloc));
-+	  char __pposn = *(__nl_langinfo_l(__P_SIGN_POSN, __cloc));
-+	  _M_data->_M_pos_format = _S_construct_pattern(__pprecedes, __pspace,
-+							__pposn);
-+	  char __nprecedes = *(__nl_langinfo_l(__N_CS_PRECEDES, __cloc));
-+	  char __nspace = *(__nl_langinfo_l(__N_SEP_BY_SPACE, __cloc));
-+	  _M_data->_M_neg_format = _S_construct_pattern(__nprecedes, __nspace,
-+							__nposn);
-+	}
-+    }
-+
-+  template<>
-+    moneypunct<char, true>::~moneypunct()
-+    { delete _M_data; }
-+
-+  template<>
-+    moneypunct<char, false>::~moneypunct()
-+    { delete _M_data; }
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  template<>
-+    void
-+    moneypunct<wchar_t, true>::_M_initialize_moneypunct(__c_locale __cloc,
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+							const char*)
-+#else
-+							const char* __name)
-+#endif
-+    {
-+      if (!_M_data)
-+	_M_data = new __moneypunct_cache<wchar_t, true>;
-+
-+      if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_data->_M_decimal_point = L'.';
-+	  _M_data->_M_thousands_sep = L',';
-+	  _M_data->_M_grouping = "";
-+	  _M_data->_M_grouping_size = 0;
-+	  _M_data->_M_curr_symbol = L"";
-+	  _M_data->_M_curr_symbol_size = 0;
-+	  _M_data->_M_positive_sign = L"";
-+	  _M_data->_M_positive_sign_size = 0;
-+	  _M_data->_M_negative_sign = L"";
-+	  _M_data->_M_negative_sign_size = 0;
-+	  _M_data->_M_frac_digits = 0;
-+	  _M_data->_M_pos_format = money_base::_S_default_pattern;
-+	  _M_data->_M_neg_format = money_base::_S_default_pattern;
-+
-+	  // Use ctype::widen code without the facet...
-+	  for (size_t __i = 0; __i < money_base::_S_end; ++__i)
-+	    _M_data->_M_atoms[__i] =
-+	      static_cast<wchar_t>(money_base::_S_atoms[__i]);
-+	}
-+      else
-+	{
-+	  // Named locale.
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	  __c_locale __old = __uselocale(__cloc);
-+#else
-+	  // Switch to named locale so that mbsrtowcs will work.
-+	  char* __old = strdup(setlocale(LC_ALL, NULL));
-+	  setlocale(LC_ALL, __name);
-+#endif
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix this... should be monetary
-+#endif
-+#ifdef __UCLIBC__
-+# ifdef __UCLIBC_HAS_XLOCALE__
-+	  _M_data->_M_decimal_point = __cloc->decimal_point_wc;
-+	  _M_data->_M_thousands_sep = __cloc->thousands_sep_wc;
-+# else
-+	  _M_data->_M_decimal_point = __global_locale->decimal_point_wc;
-+	  _M_data->_M_thousands_sep = __global_locale->thousands_sep_wc;
-+# endif
-+#else
-+	  union { char *__s; wchar_t __w; } __u;
-+	  __u.__s = __nl_langinfo_l(_NL_MONETARY_DECIMAL_POINT_WC, __cloc);
-+	  _M_data->_M_decimal_point = __u.__w;
-+
-+	  __u.__s = __nl_langinfo_l(_NL_MONETARY_THOUSANDS_SEP_WC, __cloc);
-+	  _M_data->_M_thousands_sep = __u.__w;
-+#endif
-+	  _M_data->_M_grouping = __nl_langinfo_l(__MON_GROUPING, __cloc);
-+	  _M_data->_M_grouping_size = strlen(_M_data->_M_grouping);
-+
-+	  const char* __cpossign = __nl_langinfo_l(__POSITIVE_SIGN, __cloc);
-+	  const char* __cnegsign = __nl_langinfo_l(__NEGATIVE_SIGN, __cloc);
-+	  const char* __ccurr = __nl_langinfo_l(__INT_CURR_SYMBOL, __cloc);
-+
-+	  wchar_t* __wcs_ps = 0;
-+	  wchar_t* __wcs_ns = 0;
-+	  const char __nposn = *(__nl_langinfo_l(__INT_N_SIGN_POSN, __cloc));
-+	  try
-+	    {
-+	      mbstate_t __state;
-+	      size_t __len = strlen(__cpossign);
-+	      if (__len)
-+		{
-+		  ++__len;
-+		  memset(&__state, 0, sizeof(mbstate_t));
-+		  __wcs_ps = new wchar_t[__len];
-+		  mbsrtowcs(__wcs_ps, &__cpossign, __len, &__state);
-+		  _M_data->_M_positive_sign = __wcs_ps;
-+		}
-+	      else
-+		_M_data->_M_positive_sign = L"";
-+	      _M_data->_M_positive_sign_size = wcslen(_M_data->_M_positive_sign);
-+
-+	      __len = strlen(__cnegsign);
-+	      if (!__nposn)
-+		_M_data->_M_negative_sign = L"()";
-+	      else if (__len)
-+		{
-+		  ++__len;
-+		  memset(&__state, 0, sizeof(mbstate_t));
-+		  __wcs_ns = new wchar_t[__len];
-+		  mbsrtowcs(__wcs_ns, &__cnegsign, __len, &__state);
-+		  _M_data->_M_negative_sign = __wcs_ns;
-+		}
-+	      else
-+		_M_data->_M_negative_sign = L"";
-+	      _M_data->_M_negative_sign_size = wcslen(_M_data->_M_negative_sign);
-+
-+	      // _Intl == true.
-+	      __len = strlen(__ccurr);
-+	      if (__len)
-+		{
-+		  ++__len;
-+		  memset(&__state, 0, sizeof(mbstate_t));
-+		  wchar_t* __wcs = new wchar_t[__len];
-+		  mbsrtowcs(__wcs, &__ccurr, __len, &__state);
-+		  _M_data->_M_curr_symbol = __wcs;
-+		}
-+	      else
-+		_M_data->_M_curr_symbol = L"";
-+	      _M_data->_M_curr_symbol_size = wcslen(_M_data->_M_curr_symbol);
-+	    }
-+	  catch (...)
-+	    {
-+	      delete _M_data;
-+	      _M_data = 0;
-+	      delete [] __wcs_ps;
-+	      delete [] __wcs_ns;
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	      __uselocale(__old);
-+#else
-+	      setlocale(LC_ALL, __old);
-+	      free(__old);
-+#endif
-+	      __throw_exception_again;
-+	    }
-+
-+	  _M_data->_M_frac_digits = *(__nl_langinfo_l(__INT_FRAC_DIGITS,
-+						      __cloc));
-+	  char __pprecedes = *(__nl_langinfo_l(__INT_P_CS_PRECEDES, __cloc));
-+	  char __pspace = *(__nl_langinfo_l(__INT_P_SEP_BY_SPACE, __cloc));
-+	  char __pposn = *(__nl_langinfo_l(__INT_P_SIGN_POSN, __cloc));
-+	  _M_data->_M_pos_format = _S_construct_pattern(__pprecedes, __pspace,
-+							__pposn);
-+	  char __nprecedes = *(__nl_langinfo_l(__INT_N_CS_PRECEDES, __cloc));
-+	  char __nspace = *(__nl_langinfo_l(__INT_N_SEP_BY_SPACE, __cloc));
-+	  _M_data->_M_neg_format = _S_construct_pattern(__nprecedes, __nspace,
-+							__nposn);
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	  __uselocale(__old);
-+#else
-+	  setlocale(LC_ALL, __old);
-+	  free(__old);
-+#endif
-+	}
-+    }
-+
-+  template<>
-+  void
-+  moneypunct<wchar_t, false>::_M_initialize_moneypunct(__c_locale __cloc,
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+						       const char*)
-+#else
-+                                                       const char* __name)
-+#endif
-+  {
-+    if (!_M_data)
-+      _M_data = new __moneypunct_cache<wchar_t, false>;
-+
-+    if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_data->_M_decimal_point = L'.';
-+	  _M_data->_M_thousands_sep = L',';
-+	  _M_data->_M_grouping = "";
-+          _M_data->_M_grouping_size = 0;
-+	  _M_data->_M_curr_symbol = L"";
-+	  _M_data->_M_curr_symbol_size = 0;
-+	  _M_data->_M_positive_sign = L"";
-+	  _M_data->_M_positive_sign_size = 0;
-+	  _M_data->_M_negative_sign = L"";
-+	  _M_data->_M_negative_sign_size = 0;
-+	  _M_data->_M_frac_digits = 0;
-+	  _M_data->_M_pos_format = money_base::_S_default_pattern;
-+	  _M_data->_M_neg_format = money_base::_S_default_pattern;
-+
-+	  // Use ctype::widen code without the facet...
-+	  for (size_t __i = 0; __i < money_base::_S_end; ++__i)
-+	    _M_data->_M_atoms[__i] =
-+	      static_cast<wchar_t>(money_base::_S_atoms[__i]);
-+	}
-+      else
-+	{
-+	  // Named locale.
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	  __c_locale __old = __uselocale(__cloc);
-+#else
-+	  // Switch to named locale so that mbsrtowcs will work.
-+	  char* __old = strdup(setlocale(LC_ALL, NULL));
-+	  setlocale(LC_ALL, __name);
-+#endif
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix this... should be monetary
-+#endif
-+#ifdef __UCLIBC__
-+# ifdef __UCLIBC_HAS_XLOCALE__
-+	  _M_data->_M_decimal_point = __cloc->decimal_point_wc;
-+	  _M_data->_M_thousands_sep = __cloc->thousands_sep_wc;
-+# else
-+	  _M_data->_M_decimal_point = __global_locale->decimal_point_wc;
-+	  _M_data->_M_thousands_sep = __global_locale->thousands_sep_wc;
-+# endif
-+#else
-+          union { char *__s; wchar_t __w; } __u;
-+	  __u.__s = __nl_langinfo_l(_NL_MONETARY_DECIMAL_POINT_WC, __cloc);
-+	  _M_data->_M_decimal_point = __u.__w;
-+
-+	  __u.__s = __nl_langinfo_l(_NL_MONETARY_THOUSANDS_SEP_WC, __cloc);
-+	  _M_data->_M_thousands_sep = __u.__w;
-+#endif
-+	  _M_data->_M_grouping = __nl_langinfo_l(__MON_GROUPING, __cloc);
-+          _M_data->_M_grouping_size = strlen(_M_data->_M_grouping);
-+
-+	  const char* __cpossign = __nl_langinfo_l(__POSITIVE_SIGN, __cloc);
-+	  const char* __cnegsign = __nl_langinfo_l(__NEGATIVE_SIGN, __cloc);
-+	  const char* __ccurr = __nl_langinfo_l(__CURRENCY_SYMBOL, __cloc);
-+
-+	  wchar_t* __wcs_ps = 0;
-+	  wchar_t* __wcs_ns = 0;
-+	  const char __nposn = *(__nl_langinfo_l(__N_SIGN_POSN, __cloc));
-+	  try
-+            {
-+              mbstate_t __state;
-+              size_t __len;
-+              __len = strlen(__cpossign);
-+              if (__len)
-+                {
-+		  ++__len;
-+		  memset(&__state, 0, sizeof(mbstate_t));
-+		  __wcs_ps = new wchar_t[__len];
-+		  mbsrtowcs(__wcs_ps, &__cpossign, __len, &__state);
-+		  _M_data->_M_positive_sign = __wcs_ps;
-+		}
-+	      else
-+		_M_data->_M_positive_sign = L"";
-+              _M_data->_M_positive_sign_size = wcslen(_M_data->_M_positive_sign);
-+
-+	      __len = strlen(__cnegsign);
-+	      if (!__nposn)
-+		_M_data->_M_negative_sign = L"()";
-+	      else if (__len)
-+		{
-+		  ++__len;
-+		  memset(&__state, 0, sizeof(mbstate_t));
-+		  __wcs_ns = new wchar_t[__len];
-+		  mbsrtowcs(__wcs_ns, &__cnegsign, __len, &__state);
-+		  _M_data->_M_negative_sign = __wcs_ns;
-+		}
-+	      else
-+		_M_data->_M_negative_sign = L"";
-+              _M_data->_M_negative_sign_size = wcslen(_M_data->_M_negative_sign);
-+
-+	      // _Intl == false.
-+	      __len = strlen(__ccurr);
-+	      if (__len)
-+		{
-+		  ++__len;
-+		  memset(&__state, 0, sizeof(mbstate_t));
-+		  wchar_t* __wcs = new wchar_t[__len];
-+		  mbsrtowcs(__wcs, &__ccurr, __len, &__state);
-+		  _M_data->_M_curr_symbol = __wcs;
-+		}
-+	      else
-+		_M_data->_M_curr_symbol = L"";
-+              _M_data->_M_curr_symbol_size = wcslen(_M_data->_M_curr_symbol);
-+	    }
-+          catch (...)
-+	    {
-+	      delete _M_data;
-+              _M_data = 0;
-+	      delete [] __wcs_ps;
-+	      delete [] __wcs_ns;
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	      __uselocale(__old);
-+#else
-+	      setlocale(LC_ALL, __old);
-+	      free(__old);
-+#endif
-+              __throw_exception_again;
-+	    }
-+
-+	  _M_data->_M_frac_digits = *(__nl_langinfo_l(__FRAC_DIGITS, __cloc));
-+	  char __pprecedes = *(__nl_langinfo_l(__P_CS_PRECEDES, __cloc));
-+	  char __pspace = *(__nl_langinfo_l(__P_SEP_BY_SPACE, __cloc));
-+	  char __pposn = *(__nl_langinfo_l(__P_SIGN_POSN, __cloc));
-+	  _M_data->_M_pos_format = _S_construct_pattern(__pprecedes, __pspace,
-+	                                                __pposn);
-+	  char __nprecedes = *(__nl_langinfo_l(__N_CS_PRECEDES, __cloc));
-+	  char __nspace = *(__nl_langinfo_l(__N_SEP_BY_SPACE, __cloc));
-+	  _M_data->_M_neg_format = _S_construct_pattern(__nprecedes, __nspace,
-+	                                                __nposn);
-+
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+	  __uselocale(__old);
-+#else
-+	  setlocale(LC_ALL, __old);
-+	  free(__old);
-+#endif
-+	}
-+    }
-+
-+  template<>
-+    moneypunct<wchar_t, true>::~moneypunct()
-+    {
-+      if (_M_data->_M_positive_sign_size)
-+	delete [] _M_data->_M_positive_sign;
-+      if (_M_data->_M_negative_sign_size
-+          && wcscmp(_M_data->_M_negative_sign, L"()") != 0)
-+	delete [] _M_data->_M_negative_sign;
-+      if (_M_data->_M_curr_symbol_size)
-+	delete [] _M_data->_M_curr_symbol;
-+      delete _M_data;
-+    }
-+
-+  template<>
-+    moneypunct<wchar_t, false>::~moneypunct()
-+    {
-+      if (_M_data->_M_positive_sign_size)
-+	delete [] _M_data->_M_positive_sign;
-+      if (_M_data->_M_negative_sign_size
-+          && wcscmp(_M_data->_M_negative_sign, L"()") != 0)
-+	delete [] _M_data->_M_negative_sign;
-+      if (_M_data->_M_curr_symbol_size)
-+	delete [] _M_data->_M_curr_symbol;
-+      delete _M_data;
-+    }
-+#endif
-+}
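The net effect of _S_construct_pattern() above is a money_base::pattern whose four field bytes are drawn from {none, space, symbol, sign, value}, exposed to users through moneypunct::pos_format() and neg_format(). A small sketch that decodes such a pattern; in the "C" locale it should print the default ordering, symbol sign none value:

#include <iostream>
#include <locale>

// Name one money_base::part value for printing.
static const char* part_name(std::money_base::part p)
{
  switch (p)
    {
    case std::money_base::none:   return "none";
    case std::money_base::space:  return "space";
    case std::money_base::symbol: return "symbol";
    case std::money_base::sign:   return "sign";
    case std::money_base::value:  return "value";
    }
  return "?";
}

int main()
{
  std::locale loc("C");
  const std::moneypunct<char>& mp =
    std::use_facet<std::moneypunct<char> >(loc);

  std::money_base::pattern pos = mp.pos_format();
  for (int i = 0; i < 4; ++i)
    std::cout << part_name(static_cast<std::money_base::part>(pos.field[i]))
              << (i == 3 ? '\n' : ' ');
  return 0;
}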
-diff --git a/libstdc++-v3/config/locale/uclibc/numeric_members.cc b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-new file mode 100644
-index 0000000..883ec1a
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-@@ -0,0 +1,160 @@
-+// std::numpunct implementation details, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.3.1.2  numpunct virtual functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#define _LIBC
-+#include <locale>
-+#undef _LIBC
-+#include <bits/c++locale_internal.h>
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning tailor for stub locale support
-+#endif
-+#ifndef __UCLIBC_HAS_XLOCALE__
-+#define __nl_langinfo_l(N, L)         nl_langinfo((N))
-+#endif
-+
-+namespace std
-+{
-+  template<>
-+    void
-+    numpunct<char>::_M_initialize_numpunct(__c_locale __cloc)
-+    {
-+      if (!_M_data)
-+	_M_data = new __numpunct_cache<char>;
-+
-+      if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_data->_M_grouping = "";
-+	  _M_data->_M_grouping_size = 0;
-+	  _M_data->_M_use_grouping = false;
-+
-+	  _M_data->_M_decimal_point = '.';
-+	  _M_data->_M_thousands_sep = ',';
-+
-+	  for (size_t __i = 0; __i < __num_base::_S_oend; ++__i)
-+	    _M_data->_M_atoms_out[__i] = __num_base::_S_atoms_out[__i];
-+
-+	  for (size_t __j = 0; __j < __num_base::_S_iend; ++__j)
-+	    _M_data->_M_atoms_in[__j] = __num_base::_S_atoms_in[__j];
-+	}
-+      else
-+	{
-+	  // Named locale.
-+	  _M_data->_M_decimal_point = *(__nl_langinfo_l(DECIMAL_POINT,
-+							__cloc));
-+	  _M_data->_M_thousands_sep = *(__nl_langinfo_l(THOUSANDS_SEP,
-+							__cloc));
-+
-+	  // Check for NULL, which implies no grouping.
-+	  if (_M_data->_M_thousands_sep == '\0')
-+	    _M_data->_M_grouping = "";
-+	  else
-+	    _M_data->_M_grouping = __nl_langinfo_l(GROUPING, __cloc);
-+	  _M_data->_M_grouping_size = strlen(_M_data->_M_grouping);
-+	}
-+
-+      // NB: There is no way to extract this info from posix locales.
-+      // _M_truename = __nl_langinfo_l(YESSTR, __cloc);
-+      _M_data->_M_truename = "true";
-+      _M_data->_M_truename_size = 4;
-+      // _M_falsename = __nl_langinfo_l(NOSTR, __cloc);
-+      _M_data->_M_falsename = "false";
-+      _M_data->_M_falsename_size = 5;
-+    }
-+
-+  template<>
-+    numpunct<char>::~numpunct()
-+    { delete _M_data; }
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  template<>
-+    void
-+    numpunct<wchar_t>::_M_initialize_numpunct(__c_locale __cloc)
-+    {
-+      if (!_M_data)
-+	_M_data = new __numpunct_cache<wchar_t>;
-+
-+      if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_data->_M_grouping = "";
-+	  _M_data->_M_grouping_size = 0;
-+	  _M_data->_M_use_grouping = false;
-+
-+	  _M_data->_M_decimal_point = L'.';
-+	  _M_data->_M_thousands_sep = L',';
-+
-+	  // Use ctype::widen code without the facet...
-+	  for (size_t __i = 0; __i < __num_base::_S_oend; ++__i)
-+	    _M_data->_M_atoms_out[__i] =
-+	      static_cast<wchar_t>(__num_base::_S_atoms_out[__i]);
-+
-+	  for (size_t __j = 0; __j < __num_base::_S_iend; ++__j)
-+	    _M_data->_M_atoms_in[__j] =
-+	      static_cast<wchar_t>(__num_base::_S_atoms_in[__j]);
-+	}
-+      else
-+	{
-+	  // Named locale.
-+	  // NB: In the GNU model wchar_t is always 32 bit wide.
-+	  union { char *__s; wchar_t __w; } __u;
-+	  __u.__s = __nl_langinfo_l(_NL_NUMERIC_DECIMAL_POINT_WC, __cloc);
-+	  _M_data->_M_decimal_point = __u.__w;
-+
-+	  __u.__s = __nl_langinfo_l(_NL_NUMERIC_THOUSANDS_SEP_WC, __cloc);
-+	  _M_data->_M_thousands_sep = __u.__w;
-+
-+	  if (_M_data->_M_thousands_sep == L'\0')
-+	    _M_data->_M_grouping = "";
-+	  else
-+	    _M_data->_M_grouping = __nl_langinfo_l(GROUPING, __cloc);
-+	  _M_data->_M_grouping_size = strlen(_M_data->_M_grouping);
-+	}
-+
-+      // NB: There is no way to extract this info from posix locales.
-+      // _M_truename = __nl_langinfo_l(YESSTR, __cloc);
-+      _M_data->_M_truename = L"true";
-+      _M_data->_M_truename_size = 4;
-+      // _M_falsename = __nl_langinfo_l(NOSTR, __cloc);
-+      _M_data->_M_falsename = L"false";
-+      _M_data->_M_falsename_size = 5;
-+    }
-+
-+  template<>
-+    numpunct<wchar_t>::~numpunct()
-+    { delete _M_data; }
-+#endif
-+}
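A short usage sketch of the numpunct<char> facet initialised above; in the "C" locale the grouping string is empty, and the boolean names are the hard-coded "true"/"false" strings because, as the comment above notes, POSIX locale data has nothing comparable to extract them from:

#include <iostream>
#include <locale>

int main()
{
  std::locale loc("C");
  const std::numpunct<char>& np = std::use_facet<std::numpunct<char> >(loc);

  std::cout << "decimal point: " << np.decimal_point() << '\n'
            << "thousands sep: " << np.thousands_sep() << '\n'
            << "grouping size: " << np.grouping().size() << '\n'
            << "truename:      " << np.truename() << '\n'
            << "falsename:     " << np.falsename() << '\n';
  return 0;
}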
-diff --git a/libstdc++-v3/config/locale/uclibc/time_members.cc b/libstdc++-v3/config/locale/uclibc/time_members.cc
-new file mode 100644
-index 0000000..e0707d7
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/time_members.cc
-@@ -0,0 +1,406 @@
-+// std::time_get, std::time_put implementation, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.5.1.2 - time_get virtual functions
-+// ISO C++ 14882: 22.2.5.3.2 - time_put virtual functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+#include <locale>
-+#include <bits/c++locale_internal.h>
-+
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning tailor for stub locale support
-+#endif
-+#ifndef __UCLIBC_HAS_XLOCALE__
-+#define __nl_langinfo_l(N, L)         nl_langinfo((N))
-+#endif
-+
-+namespace std
-+{
-+  template<>
-+    void
-+    __timepunct<char>::
-+    _M_put(char* __s, size_t __maxlen, const char* __format,
-+	   const tm* __tm) const
-+    {
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+      const size_t __len = __strftime_l(__s, __maxlen, __format, __tm,
-+					_M_c_locale_timepunct);
-+#else
-+      char* __old = strdup(setlocale(LC_ALL, NULL));
-+      setlocale(LC_ALL, _M_name_timepunct);
-+      const size_t __len = strftime(__s, __maxlen, __format, __tm);
-+      setlocale(LC_ALL, __old);
-+      free(__old);
-+#endif
-+      // Make sure __s is null terminated.
-+      if (__len == 0)
-+	__s[0] = '\0';
-+    }
-+
-+  template<>
-+    void
-+    __timepunct<char>::_M_initialize_timepunct(__c_locale __cloc)
-+    {
-+      if (!_M_data)
-+	_M_data = new __timepunct_cache<char>;
-+
-+      if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_c_locale_timepunct = _S_get_c_locale();
-+
-+	  _M_data->_M_date_format = "%m/%d/%y";
-+	  _M_data->_M_date_era_format = "%m/%d/%y";
-+	  _M_data->_M_time_format = "%H:%M:%S";
-+	  _M_data->_M_time_era_format = "%H:%M:%S";
-+	  _M_data->_M_date_time_format = "";
-+	  _M_data->_M_date_time_era_format = "";
-+	  _M_data->_M_am = "AM";
-+	  _M_data->_M_pm = "PM";
-+	  _M_data->_M_am_pm_format = "";
-+
-+	  // Day names, starting with "C"'s Sunday.
-+	  _M_data->_M_day1 = "Sunday";
-+	  _M_data->_M_day2 = "Monday";
-+	  _M_data->_M_day3 = "Tuesday";
-+	  _M_data->_M_day4 = "Wednesday";
-+	  _M_data->_M_day5 = "Thursday";
-+	  _M_data->_M_day6 = "Friday";
-+	  _M_data->_M_day7 = "Saturday";
-+
-+	  // Abbreviated day names, starting with "C"'s Sun.
-+	  _M_data->_M_aday1 = "Sun";
-+	  _M_data->_M_aday2 = "Mon";
-+	  _M_data->_M_aday3 = "Tue";
-+	  _M_data->_M_aday4 = "Wed";
-+	  _M_data->_M_aday5 = "Thu";
-+	  _M_data->_M_aday6 = "Fri";
-+	  _M_data->_M_aday7 = "Sat";
-+
-+	  // Month names, starting with "C"'s January.
-+	  _M_data->_M_month01 = "January";
-+	  _M_data->_M_month02 = "February";
-+	  _M_data->_M_month03 = "March";
-+	  _M_data->_M_month04 = "April";
-+	  _M_data->_M_month05 = "May";
-+	  _M_data->_M_month06 = "June";
-+	  _M_data->_M_month07 = "July";
-+	  _M_data->_M_month08 = "August";
-+	  _M_data->_M_month09 = "September";
-+	  _M_data->_M_month10 = "October";
-+	  _M_data->_M_month11 = "November";
-+	  _M_data->_M_month12 = "December";
-+
-+	  // Abbreviated month names, starting with "C"'s Jan.
-+	  _M_data->_M_amonth01 = "Jan";
-+	  _M_data->_M_amonth02 = "Feb";
-+	  _M_data->_M_amonth03 = "Mar";
-+	  _M_data->_M_amonth04 = "Apr";
-+	  _M_data->_M_amonth05 = "May";
-+	  _M_data->_M_amonth06 = "Jun";
-+	  _M_data->_M_amonth07 = "Jul";
-+	  _M_data->_M_amonth08 = "Aug";
-+	  _M_data->_M_amonth09 = "Sep";
-+	  _M_data->_M_amonth10 = "Oct";
-+	  _M_data->_M_amonth11 = "Nov";
-+	  _M_data->_M_amonth12 = "Dec";
-+	}
-+      else
-+	{
-+	  _M_c_locale_timepunct = _S_clone_c_locale(__cloc);
-+
-+	  _M_data->_M_date_format = __nl_langinfo_l(D_FMT, __cloc);
-+	  _M_data->_M_date_era_format = __nl_langinfo_l(ERA_D_FMT, __cloc);
-+	  _M_data->_M_time_format = __nl_langinfo_l(T_FMT, __cloc);
-+	  _M_data->_M_time_era_format = __nl_langinfo_l(ERA_T_FMT, __cloc);
-+	  _M_data->_M_date_time_format = __nl_langinfo_l(D_T_FMT, __cloc);
-+	  _M_data->_M_date_time_era_format = __nl_langinfo_l(ERA_D_T_FMT,
-+							     __cloc);
-+	  _M_data->_M_am = __nl_langinfo_l(AM_STR, __cloc);
-+	  _M_data->_M_pm = __nl_langinfo_l(PM_STR, __cloc);
-+	  _M_data->_M_am_pm_format = __nl_langinfo_l(T_FMT_AMPM, __cloc);
-+
-+	  // Day names, starting with "C"'s Sunday.
-+	  _M_data->_M_day1 = __nl_langinfo_l(DAY_1, __cloc);
-+	  _M_data->_M_day2 = __nl_langinfo_l(DAY_2, __cloc);
-+	  _M_data->_M_day3 = __nl_langinfo_l(DAY_3, __cloc);
-+	  _M_data->_M_day4 = __nl_langinfo_l(DAY_4, __cloc);
-+	  _M_data->_M_day5 = __nl_langinfo_l(DAY_5, __cloc);
-+	  _M_data->_M_day6 = __nl_langinfo_l(DAY_6, __cloc);
-+	  _M_data->_M_day7 = __nl_langinfo_l(DAY_7, __cloc);
-+
-+	  // Abbreviated day names, starting with "C"'s Sun.
-+	  _M_data->_M_aday1 = __nl_langinfo_l(ABDAY_1, __cloc);
-+	  _M_data->_M_aday2 = __nl_langinfo_l(ABDAY_2, __cloc);
-+	  _M_data->_M_aday3 = __nl_langinfo_l(ABDAY_3, __cloc);
-+	  _M_data->_M_aday4 = __nl_langinfo_l(ABDAY_4, __cloc);
-+	  _M_data->_M_aday5 = __nl_langinfo_l(ABDAY_5, __cloc);
-+	  _M_data->_M_aday6 = __nl_langinfo_l(ABDAY_6, __cloc);
-+	  _M_data->_M_aday7 = __nl_langinfo_l(ABDAY_7, __cloc);
-+
-+	  // Month names, starting with "C"'s January.
-+	  _M_data->_M_month01 = __nl_langinfo_l(MON_1, __cloc);
-+	  _M_data->_M_month02 = __nl_langinfo_l(MON_2, __cloc);
-+	  _M_data->_M_month03 = __nl_langinfo_l(MON_3, __cloc);
-+	  _M_data->_M_month04 = __nl_langinfo_l(MON_4, __cloc);
-+	  _M_data->_M_month05 = __nl_langinfo_l(MON_5, __cloc);
-+	  _M_data->_M_month06 = __nl_langinfo_l(MON_6, __cloc);
-+	  _M_data->_M_month07 = __nl_langinfo_l(MON_7, __cloc);
-+	  _M_data->_M_month08 = __nl_langinfo_l(MON_8, __cloc);
-+	  _M_data->_M_month09 = __nl_langinfo_l(MON_9, __cloc);
-+	  _M_data->_M_month10 = __nl_langinfo_l(MON_10, __cloc);
-+	  _M_data->_M_month11 = __nl_langinfo_l(MON_11, __cloc);
-+	  _M_data->_M_month12 = __nl_langinfo_l(MON_12, __cloc);
-+
-+	  // Abbreviated month names, starting with "C"'s Jan.
-+	  _M_data->_M_amonth01 = __nl_langinfo_l(ABMON_1, __cloc);
-+	  _M_data->_M_amonth02 = __nl_langinfo_l(ABMON_2, __cloc);
-+	  _M_data->_M_amonth03 = __nl_langinfo_l(ABMON_3, __cloc);
-+	  _M_data->_M_amonth04 = __nl_langinfo_l(ABMON_4, __cloc);
-+	  _M_data->_M_amonth05 = __nl_langinfo_l(ABMON_5, __cloc);
-+	  _M_data->_M_amonth06 = __nl_langinfo_l(ABMON_6, __cloc);
-+	  _M_data->_M_amonth07 = __nl_langinfo_l(ABMON_7, __cloc);
-+	  _M_data->_M_amonth08 = __nl_langinfo_l(ABMON_8, __cloc);
-+	  _M_data->_M_amonth09 = __nl_langinfo_l(ABMON_9, __cloc);
-+	  _M_data->_M_amonth10 = __nl_langinfo_l(ABMON_10, __cloc);
-+	  _M_data->_M_amonth11 = __nl_langinfo_l(ABMON_11, __cloc);
-+	  _M_data->_M_amonth12 = __nl_langinfo_l(ABMON_12, __cloc);
-+	}
-+    }
-+
-+#ifdef _GLIBCXX_USE_WCHAR_T
-+  template<>
-+    void
-+    __timepunct<wchar_t>::
-+    _M_put(wchar_t* __s, size_t __maxlen, const wchar_t* __format,
-+	   const tm* __tm) const
-+    {
-+#ifdef __UCLIBC_HAS_XLOCALE__
-+      __wcsftime_l(__s, __maxlen, __format, __tm, _M_c_locale_timepunct);
-+      const size_t __len = __wcsftime_l(__s, __maxlen, __format, __tm,
-+					_M_c_locale_timepunct);
-+#else
-+      char* __old = strdup(setlocale(LC_ALL, NULL));
-+      setlocale(LC_ALL, _M_name_timepunct);
-+      const size_t __len = wcsftime(__s, __maxlen, __format, __tm);
-+      setlocale(LC_ALL, __old);
-+      free(__old);
-+#endif
-+      // Make sure __s is null terminated.
-+      if (__len == 0)
-+	__s[0] = L'\0';
-+    }
-+
-+  template<>
-+    void
-+    __timepunct<wchar_t>::_M_initialize_timepunct(__c_locale __cloc)
-+    {
-+      if (!_M_data)
-+	_M_data = new __timepunct_cache<wchar_t>;
-+
-+#warning wide time stuff
-+//       if (!__cloc)
-+	{
-+	  // "C" locale
-+	  _M_c_locale_timepunct = _S_get_c_locale();
-+
-+	  _M_data->_M_date_format = L"%m/%d/%y";
-+	  _M_data->_M_date_era_format = L"%m/%d/%y";
-+	  _M_data->_M_time_format = L"%H:%M:%S";
-+	  _M_data->_M_time_era_format = L"%H:%M:%S";
-+	  _M_data->_M_date_time_format = L"";
-+	  _M_data->_M_date_time_era_format = L"";
-+	  _M_data->_M_am = L"AM";
-+	  _M_data->_M_pm = L"PM";
-+	  _M_data->_M_am_pm_format = L"";
-+
-+	  // Day names, starting with "C"'s Sunday.
-+	  _M_data->_M_day1 = L"Sunday";
-+	  _M_data->_M_day2 = L"Monday";
-+	  _M_data->_M_day3 = L"Tuesday";
-+	  _M_data->_M_day4 = L"Wednesday";
-+	  _M_data->_M_day5 = L"Thursday";
-+	  _M_data->_M_day6 = L"Friday";
-+	  _M_data->_M_day7 = L"Saturday";
-+
-+	  // Abbreviated day names, starting with "C"'s Sun.
-+	  _M_data->_M_aday1 = L"Sun";
-+	  _M_data->_M_aday2 = L"Mon";
-+	  _M_data->_M_aday3 = L"Tue";
-+	  _M_data->_M_aday4 = L"Wed";
-+	  _M_data->_M_aday5 = L"Thu";
-+	  _M_data->_M_aday6 = L"Fri";
-+	  _M_data->_M_aday7 = L"Sat";
-+
-+	  // Month names, starting with "C"'s January.
-+	  _M_data->_M_month01 = L"January";
-+	  _M_data->_M_month02 = L"February";
-+	  _M_data->_M_month03 = L"March";
-+	  _M_data->_M_month04 = L"April";
-+	  _M_data->_M_month05 = L"May";
-+	  _M_data->_M_month06 = L"June";
-+	  _M_data->_M_month07 = L"July";
-+	  _M_data->_M_month08 = L"August";
-+	  _M_data->_M_month09 = L"September";
-+	  _M_data->_M_month10 = L"October";
-+	  _M_data->_M_month11 = L"November";
-+	  _M_data->_M_month12 = L"December";
-+
-+	  // Abbreviated month names, starting with "C"'s Jan.
-+	  _M_data->_M_amonth01 = L"Jan";
-+	  _M_data->_M_amonth02 = L"Feb";
-+	  _M_data->_M_amonth03 = L"Mar";
-+	  _M_data->_M_amonth04 = L"Apr";
-+	  _M_data->_M_amonth05 = L"May";
-+	  _M_data->_M_amonth06 = L"Jun";
-+	  _M_data->_M_amonth07 = L"Jul";
-+	  _M_data->_M_amonth08 = L"Aug";
-+	  _M_data->_M_amonth09 = L"Sep";
-+	  _M_data->_M_amonth10 = L"Oct";
-+	  _M_data->_M_amonth11 = L"Nov";
-+	  _M_data->_M_amonth12 = L"Dec";
-+	}
-+#if 0
-+      else
-+	{
-+	  _M_c_locale_timepunct = _S_clone_c_locale(__cloc);
-+
-+	  union { char *__s; wchar_t *__w; } __u;
-+
-+	  __u.__s = __nl_langinfo_l(_NL_WD_FMT, __cloc);
-+	  _M_data->_M_date_format = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WERA_D_FMT, __cloc);
-+	  _M_data->_M_date_era_format = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WT_FMT, __cloc);
-+	  _M_data->_M_time_format = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WERA_T_FMT, __cloc);
-+	  _M_data->_M_time_era_format = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WD_T_FMT, __cloc);
-+	  _M_data->_M_date_time_format = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WERA_D_T_FMT, __cloc);
-+	  _M_data->_M_date_time_era_format = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WAM_STR, __cloc);
-+	  _M_data->_M_am = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WPM_STR, __cloc);
-+	  _M_data->_M_pm = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WT_FMT_AMPM, __cloc);
-+	  _M_data->_M_am_pm_format = __u.__w;
-+
-+	  // Day names, starting with "C"'s Sunday.
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_1, __cloc);
-+	  _M_data->_M_day1 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_2, __cloc);
-+	  _M_data->_M_day2 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_3, __cloc);
-+	  _M_data->_M_day3 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_4, __cloc);
-+	  _M_data->_M_day4 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_5, __cloc);
-+	  _M_data->_M_day5 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_6, __cloc);
-+	  _M_data->_M_day6 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WDAY_7, __cloc);
-+	  _M_data->_M_day7 = __u.__w;
-+
-+	  // Abbreviated day names, starting with "C"'s Sun.
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_1, __cloc);
-+	  _M_data->_M_aday1 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_2, __cloc);
-+	  _M_data->_M_aday2 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_3, __cloc);
-+	  _M_data->_M_aday3 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_4, __cloc);
-+	  _M_data->_M_aday4 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_5, __cloc);
-+	  _M_data->_M_aday5 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_6, __cloc);
-+	  _M_data->_M_aday6 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABDAY_7, __cloc);
-+	  _M_data->_M_aday7 = __u.__w;
-+
-+	  // Month names, starting with "C"'s January.
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_1, __cloc);
-+	  _M_data->_M_month01 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_2, __cloc);
-+	  _M_data->_M_month02 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_3, __cloc);
-+	  _M_data->_M_month03 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_4, __cloc);
-+	  _M_data->_M_month04 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_5, __cloc);
-+	  _M_data->_M_month05 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_6, __cloc);
-+	  _M_data->_M_month06 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_7, __cloc);
-+	  _M_data->_M_month07 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_8, __cloc);
-+	  _M_data->_M_month08 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_9, __cloc);
-+	  _M_data->_M_month09 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_10, __cloc);
-+	  _M_data->_M_month10 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_11, __cloc);
-+	  _M_data->_M_month11 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WMON_12, __cloc);
-+	  _M_data->_M_month12 = __u.__w;
-+
-+	  // Abbreviated month names, starting with "C"'s Jan.
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_1, __cloc);
-+	  _M_data->_M_amonth01 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_2, __cloc);
-+	  _M_data->_M_amonth02 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_3, __cloc);
-+	  _M_data->_M_amonth03 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_4, __cloc);
-+	  _M_data->_M_amonth04 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_5, __cloc);
-+	  _M_data->_M_amonth05 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_6, __cloc);
-+	  _M_data->_M_amonth06 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_7, __cloc);
-+	  _M_data->_M_amonth07 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_8, __cloc);
-+	  _M_data->_M_amonth08 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_9, __cloc);
-+	  _M_data->_M_amonth09 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_10, __cloc);
-+	  _M_data->_M_amonth10 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_11, __cloc);
-+	  _M_data->_M_amonth11 = __u.__w;
-+	  __u.__s = __nl_langinfo_l(_NL_WABMON_12, __cloc);
-+	  _M_data->_M_amonth12 = __u.__w;
-+	}
-+#endif // 0
-+    }
-+#endif
-+}
-diff --git a/libstdc++-v3/config/locale/uclibc/time_members.h b/libstdc++-v3/config/locale/uclibc/time_members.h
-new file mode 100644
-index 0000000..ba8e858
---- /dev/null
-+++ b/libstdc++-v3/config/locale/uclibc/time_members.h
-@@ -0,0 +1,68 @@
-+// std::time_get, std::time_put implementation, GNU version -*- C++ -*-
-+
-+// Copyright (C) 2001, 2002, 2003, 2004 Free Software Foundation, Inc.
-+//
-+// This file is part of the GNU ISO C++ Library.  This library is free
-+// software; you can redistribute it and/or modify it under the
-+// terms of the GNU General Public License as published by the
-+// Free Software Foundation; either version 2, or (at your option)
-+// any later version.
-+
-+// This library is distributed in the hope that it will be useful,
-+// but WITHOUT ANY WARRANTY; without even the implied warranty of
-+// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-+// GNU General Public License for more details.
-+
-+// You should have received a copy of the GNU General Public License along
-+// with this library; see the file COPYING.  If not, write to the Free
-+// Software Foundation, 59 Temple Place - Suite 330, Boston, MA 02111-1307,
-+// USA.
-+
-+// As a special exception, you may use this file as part of a free software
-+// library without restriction.  Specifically, if other files instantiate
-+// templates or use macros or inline functions from this file, or you compile
-+// this file and link it with other files to produce an executable, this
-+// file does not by itself cause the resulting executable to be covered by
-+// the GNU General Public License.  This exception does not however
-+// invalidate any other reasons why the executable file might be covered by
-+// the GNU General Public License.
-+
-+//
-+// ISO C++ 14882: 22.2.5.1.2 - time_get functions
-+// ISO C++ 14882: 22.2.5.3.2 - time_put functions
-+//
-+
-+// Written by Benjamin Kosnik <bkoz@redhat.com>
-+
-+  template<typename _CharT>
-+    __timepunct<_CharT>::__timepunct(size_t __refs)
-+    : facet(__refs), _M_data(NULL), _M_c_locale_timepunct(NULL),
-+    _M_name_timepunct(_S_get_c_name())
-+    { _M_initialize_timepunct(); }
-+
-+  template<typename _CharT>
-+    __timepunct<_CharT>::__timepunct(__cache_type* __cache, size_t __refs)
-+    : facet(__refs), _M_data(__cache), _M_c_locale_timepunct(NULL),
-+    _M_name_timepunct(_S_get_c_name())
-+    { _M_initialize_timepunct(); }
-+
-+  template<typename _CharT>
-+    __timepunct<_CharT>::__timepunct(__c_locale __cloc, const char* __s,
-+				     size_t __refs)
-+    : facet(__refs), _M_data(NULL), _M_c_locale_timepunct(NULL),
-+    _M_name_timepunct(__s)
-+    {
-+      char* __tmp = new char[std::strlen(__s) + 1];
-+      std::strcpy(__tmp, __s);
-+      _M_name_timepunct = __tmp;
-+      _M_initialize_timepunct(__cloc);
-+    }
-+
-+  template<typename _CharT>
-+    __timepunct<_CharT>::~__timepunct()
-+    {
-+      if (_M_name_timepunct != _S_get_c_name())
-+	delete [] _M_name_timepunct;
-+      delete _M_data;
-+      _S_destroy_c_locale(_M_c_locale_timepunct);
-+    }
-diff --git a/libstdc++-v3/configure b/libstdc++-v3/configure
-index 41797a9..8a5481c 100755
---- a/libstdc++-v3/configure
-+++ b/libstdc++-v3/configure
-@@ -15830,6 +15830,9 @@ fi
-   # Default to "generic".
-   if test $enable_clocale_flag = auto; then
-     case ${target_os} in
-+      *-uclibc*)
-+        enable_clocale_flag=uclibc
-+        ;;
-       linux* | gnu* | kfreebsd*-gnu | knetbsd*-gnu)
- 	enable_clocale_flag=gnu
- 	;;
-@@ -16108,6 +16111,78 @@ $as_echo "newlib" >&6; }
-       CTIME_CC=config/locale/generic/time_members.cc
-       CLOCALE_INTERNAL_H=config/locale/generic/c++locale_internal.h
-       ;;
-+    uclibc)
-+      { $as_echo "$as_me:${as_lineno-$LINENO}: result: uclibc" >&5
-+$as_echo "uclibc" >&6; }
-+
-+      # Declare intention to use gettext, and add support for specific
-+      # languages.
-+      # For some reason, ALL_LINGUAS has to be before AM-GNU-GETTEXT
-+      ALL_LINGUAS="de fr"
-+
-+      # Don't call AM-GNU-GETTEXT here. Instead, assume glibc.
-+      # Extract the first word of "msgfmt", so it can be a program name with args.
-+set dummy msgfmt; ac_word=$2
-+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
-+$as_echo_n "checking for $ac_word... " >&6; }
-+if test "${ac_cv_prog_check_msgfmt+set}" = set; then :
-+  $as_echo_n "(cached) " >&6
-+else
-+  if test -n "$check_msgfmt"; then
-+  ac_cv_prog_check_msgfmt="$check_msgfmt" # Let the user override the test.
-+else
-+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
-+for as_dir in $PATH
-+do
-+  IFS=$as_save_IFS
-+  test -z "$as_dir" && as_dir=.
-+    for ac_exec_ext in '' $ac_executable_extensions; do
-+  if { test -f "$as_dir/$ac_word$ac_exec_ext" && $as_test_x "$as_dir/$ac_word$ac_exec_ext"; }; then
-+    ac_cv_prog_check_msgfmt="yes"
-+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
-+    break 2
-+  fi
-+done
-+  done
-+IFS=$as_save_IFS
-+
-+  test -z "$ac_cv_prog_check_msgfmt" && ac_cv_prog_check_msgfmt="no"
-+fi
-+fi
-+check_msgfmt=$ac_cv_prog_check_msgfmt
-+if test -n "$check_msgfmt"; then
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $check_msgfmt" >&5
-+$as_echo "$check_msgfmt" >&6; }
-+else
-+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
-+$as_echo "no" >&6; }
-+fi
-+
-+
-+      if test x"$check_msgfmt" = x"yes" && test x"$enable_nls" = x"yes"; then
-+        USE_NLS=yes
-+      fi
-+      # Export the build objects.
-+      for ling in $ALL_LINGUAS; do \
-+        glibcxx_MOFILES="$glibcxx_MOFILES $ling.mo"; \
-+        glibcxx_POFILES="$glibcxx_POFILES $ling.po"; \
-+      done
-+
-+
-+
-+      CLOCALE_H=config/locale/uclibc/c_locale.h
-+      CLOCALE_CC=config/locale/uclibc/c_locale.cc
-+      CCODECVT_CC=config/locale/uclibc/codecvt_members.cc
-+      CCOLLATE_CC=config/locale/uclibc/collate_members.cc
-+      CCTYPE_CC=config/locale/uclibc/ctype_members.cc
-+      CMESSAGES_H=config/locale/uclibc/messages_members.h
-+      CMESSAGES_CC=config/locale/uclibc/messages_members.cc
-+      CMONEY_CC=config/locale/uclibc/monetary_members.cc
-+      CNUMERIC_CC=config/locale/uclibc/numeric_members.cc
-+      CTIME_H=config/locale/uclibc/time_members.h
-+      CTIME_CC=config/locale/uclibc/time_members.cc
-+      CLOCALE_INTERNAL_H=config/locale/uclibc/c++locale_internal.h
-+      ;;
-   esac
- 
-   # This is where the testsuite looks for locale catalogs, using the
-diff --git a/libstdc++-v3/include/c_compatibility/wchar.h b/libstdc++-v3/include/c_compatibility/wchar.h
-index 55a0b52..7d8bb15 100644
---- a/libstdc++-v3/include/c_compatibility/wchar.h
-+++ b/libstdc++-v3/include/c_compatibility/wchar.h
-@@ -101,7 +101,9 @@ using std::wmemcmp;
- using std::wmemcpy;
- using std::wmemmove;
- using std::wmemset;
-+#if _GLIBCXX_HAVE_WCSFTIME
- using std::wcsftime;
-+#endif
- 
- #if _GLIBCXX_USE_C99_WCHAR
- using std::wcstold;
-diff --git a/libstdc++-v3/include/c_std/cwchar b/libstdc++-v3/include/c_std/cwchar
-index dc4cef02..256d126 100644
---- a/libstdc++-v3/include/c_std/cwchar
-+++ b/libstdc++-v3/include/c_std/cwchar
-@@ -175,7 +175,9 @@ _GLIBCXX_BEGIN_NAMESPACE_VERSION
-   using ::wcscoll;
-   using ::wcscpy;
-   using ::wcscspn;
-+#if _GLIBCXX_HAVE_WCSFTIME
-   using ::wcsftime;
-+#endif
-   using ::wcslen;
-   using ::wcsncat;
-   using ::wcsncmp;
--- 
-2.8.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0005-uclibc-locale-no__x.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0005-uclibc-locale-no__x.patch
deleted file mode 100644
index 3275016..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0005-uclibc-locale-no__x.patch
+++ /dev/null
@@ -1,257 +0,0 @@
-From c01c14e8e9be382ecd4121ee70f5003b4cb0f904 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:42:36 +0400
-Subject: [PATCH 05/46] uclibc-locale-no__x
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- .../config/locale/uclibc/c++locale_internal.h      | 45 ++++++++++++++++++++++
- libstdc++-v3/config/locale/uclibc/c_locale.cc      | 14 -------
- libstdc++-v3/config/locale/uclibc/c_locale.h       |  1 +
- .../config/locale/uclibc/collate_members.cc        |  7 ----
- libstdc++-v3/config/locale/uclibc/ctype_members.cc |  7 ----
- .../config/locale/uclibc/messages_members.cc       |  7 +---
- .../config/locale/uclibc/messages_members.h        | 18 ++++-----
- .../config/locale/uclibc/monetary_members.cc       |  4 --
- .../config/locale/uclibc/numeric_members.cc        |  3 --
- libstdc++-v3/config/locale/uclibc/time_members.cc  |  3 --
- 10 files changed, 55 insertions(+), 54 deletions(-)
-
-diff --git a/libstdc++-v3/config/locale/uclibc/c++locale_internal.h b/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-index 2ae3e4a..e74fddf 100644
---- a/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-+++ b/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-@@ -60,4 +60,49 @@ extern "C" __typeof(wcsxfrm_l) __wcsxfrm_l;
- extern "C" __typeof(wctype_l) __wctype_l;
- #endif
- 
-+# define __nl_langinfo_l nl_langinfo_l
-+# define __strcoll_l strcoll_l
-+# define __strftime_l strftime_l
-+# define __strtod_l strtod_l
-+# define __strtof_l strtof_l
-+# define __strtold_l strtold_l
-+# define __strxfrm_l strxfrm_l
-+# define __newlocale newlocale
-+# define __freelocale freelocale
-+# define __duplocale duplocale
-+# define __uselocale uselocale
-+
-+# ifdef _GLIBCXX_USE_WCHAR_T
-+#  define __iswctype_l iswctype_l
-+#  define __towlower_l towlower_l
-+#  define __towupper_l towupper_l
-+#  define __wcscoll_l wcscoll_l
-+#  define __wcsftime_l wcsftime_l
-+#  define __wcsxfrm_l wcsxfrm_l
-+#  define __wctype_l wctype_l
-+# endif
-+
-+#else
-+# define __nl_langinfo_l(N, L)       nl_langinfo((N))
-+# define __strcoll_l(S1, S2, L)      strcoll((S1), (S2))
-+# define __strtod_l(S, E, L)         strtod((S), (E))
-+# define __strtof_l(S, E, L)         strtof((S), (E))
-+# define __strtold_l(S, E, L)        strtold((S), (E))
-+# define __strxfrm_l(S1, S2, N, L)   strxfrm((S1), (S2), (N))
-+# warning should dummy __newlocale check for C|POSIX ?
-+# define __newlocale(a, b, c)        NULL
-+# define __freelocale(a)             ((void)0)
-+# define __duplocale(a)              __c_locale()
-+//# define __uselocale ?
-+//
-+# ifdef _GLIBCXX_USE_WCHAR_T
-+#  define __iswctype_l(C, M, L)       iswctype((C), (M))
-+#  define __towlower_l(C, L)          towlower((C))
-+#  define __towupper_l(C, L)          towupper((C))
-+#  define __wcscoll_l(S1, S2, L)      wcscoll((S1), (S2))
-+//#  define __wcsftime_l(S, M, F, T, L)  wcsftime((S), (M), (F), (T))
-+#  define __wcsxfrm_l(S1, S2, N, L)   wcsxfrm((S1), (S2), (N))
-+#  define __wctype_l(S, L)            wctype((S))
-+# endif
-+
- #endif // GLIBC 2.3 and later
-diff --git a/libstdc++-v3/config/locale/uclibc/c_locale.cc b/libstdc++-v3/config/locale/uclibc/c_locale.cc
-index 5081dc1..21430d0 100644
---- a/libstdc++-v3/config/locale/uclibc/c_locale.cc
-+++ b/libstdc++-v3/config/locale/uclibc/c_locale.cc
-@@ -39,20 +39,6 @@
- #include <langinfo.h>
- #include <bits/c++locale_internal.h>
- 
--#ifndef __UCLIBC_HAS_XLOCALE__
--#define __strtol_l(S, E, B, L)      strtol((S), (E), (B))
--#define __strtoul_l(S, E, B, L)     strtoul((S), (E), (B))
--#define __strtoll_l(S, E, B, L)     strtoll((S), (E), (B))
--#define __strtoull_l(S, E, B, L)    strtoull((S), (E), (B))
--#define __strtof_l(S, E, L)         strtof((S), (E))
--#define __strtod_l(S, E, L)         strtod((S), (E))
--#define __strtold_l(S, E, L)        strtold((S), (E))
--#warning should dummy __newlocale check for C|POSIX ?
--#define __newlocale(a, b, c)        NULL
--#define __freelocale(a)             ((void)0)
--#define __duplocale(a)              __c_locale()
--#endif
--
- namespace std
- {
-   template<>
-diff --git a/libstdc++-v3/config/locale/uclibc/c_locale.h b/libstdc++-v3/config/locale/uclibc/c_locale.h
-index da07c1f..4bca5f1 100644
---- a/libstdc++-v3/config/locale/uclibc/c_locale.h
-+++ b/libstdc++-v3/config/locale/uclibc/c_locale.h
-@@ -68,6 +68,7 @@ namespace __gnu_cxx
- {
-   extern "C" __typeof(uselocale) __uselocale;
- }
-+#define __uselocale uselocale
- #endif
- 
- namespace std
-diff --git a/libstdc++-v3/config/locale/uclibc/collate_members.cc b/libstdc++-v3/config/locale/uclibc/collate_members.cc
-index c2664a7..ec5c329 100644
---- a/libstdc++-v3/config/locale/uclibc/collate_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/collate_members.cc
-@@ -36,13 +36,6 @@
- #include <locale>
- #include <bits/c++locale_internal.h>
- 
--#ifndef __UCLIBC_HAS_XLOCALE__
--#define __strcoll_l(S1, S2, L)      strcoll((S1), (S2))
--#define __strxfrm_l(S1, S2, N, L)   strxfrm((S1), (S2), (N))
--#define __wcscoll_l(S1, S2, L)      wcscoll((S1), (S2))
--#define __wcsxfrm_l(S1, S2, N, L)   wcsxfrm((S1), (S2), (N))
--#endif
--
- namespace std
- {
-   // These are basically extensions to char_traits, and perhaps should
-diff --git a/libstdc++-v3/config/locale/uclibc/ctype_members.cc b/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-index 7294e3a..7b12861 100644
---- a/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-@@ -38,13 +38,6 @@
- #undef _LIBC
- #include <bits/c++locale_internal.h>
- 
--#ifndef __UCLIBC_HAS_XLOCALE__
--#define __wctype_l(S, L)           wctype((S))
--#define __towupper_l(C, L)         towupper((C))
--#define __towlower_l(C, L)         towlower((C))
--#define __iswctype_l(C, M, L)      iswctype((C), (M))
--#endif
--
- namespace std
- {
-   // NB: The other ctype<char> specializations are in src/locale.cc and
-diff --git a/libstdc++-v3/config/locale/uclibc/messages_members.cc b/libstdc++-v3/config/locale/uclibc/messages_members.cc
-index 13594d9..d7693b4 100644
---- a/libstdc++-v3/config/locale/uclibc/messages_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/messages_members.cc
-@@ -39,13 +39,10 @@
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning fix gettext stuff
- #endif
--#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
--extern "C" char *__dcgettext(const char *domainname,
--			     const char *msgid, int category);
- #undef gettext
--#define gettext(msgid) __dcgettext(NULL, msgid, LC_MESSAGES)
-+#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
-+#define gettext(msgid) dcgettext(NULL, msgid, LC_MESSAGES)
- #else
--#undef gettext
- #define gettext(msgid) (msgid)
- #endif
- 
-diff --git a/libstdc++-v3/config/locale/uclibc/messages_members.h b/libstdc++-v3/config/locale/uclibc/messages_members.h
-index 1424078..d89da33 100644
---- a/libstdc++-v3/config/locale/uclibc/messages_members.h
-+++ b/libstdc++-v3/config/locale/uclibc/messages_members.h
-@@ -36,15 +36,11 @@
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning fix prototypes for *textdomain funcs
- #endif
--#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
--extern "C" char *__textdomain(const char *domainname);
--extern "C" char *__bindtextdomain(const char *domainname,
--				  const char *dirname);
--#else
--#undef __textdomain
--#undef __bindtextdomain
--#define __textdomain(D)           ((void)0)
--#define __bindtextdomain(D,P)     ((void)0)
-+#ifndef __UCLIBC_HAS_GETTEXT_AWARENESS__
-+#undef textdomain
-+#undef bindtextdomain
-+#define textdomain(D)           ((void)0)
-+#define bindtextdomain(D,P)     ((void)0)
- #endif
- 
-   // Non-virtual member functions.
-@@ -70,7 +66,7 @@ extern "C" char *__bindtextdomain(const char *domainname,
-     messages<_CharT>::open(const basic_string<char>& __s, const locale& __loc,
- 			   const char* __dir) const
-     {
--      __bindtextdomain(__s.c_str(), __dir);
-+      bindtextdomain(__s.c_str(), __dir);
-       return this->do_open(__s, __loc);
-     }
- 
-@@ -90,7 +86,7 @@ extern "C" char *__bindtextdomain(const char *domainname,
-     {
-       // No error checking is done, assume the catalog exists and can
-       // be used.
--      __textdomain(__s.c_str());
-+      textdomain(__s.c_str());
-       return 0;
-     }
- 
-diff --git a/libstdc++-v3/config/locale/uclibc/monetary_members.cc b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-index aa52731..2e6f80a 100644
---- a/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-@@ -43,10 +43,6 @@
- #warning tailor for stub locale support
- #endif
- 
--#ifndef __UCLIBC_HAS_XLOCALE__
--#define __nl_langinfo_l(N, L)         nl_langinfo((N))
--#endif
--
- namespace std
- {
-   // Construct and return valid pattern consisting of some combination of:
-diff --git a/libstdc++-v3/config/locale/uclibc/numeric_members.cc b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-index 883ec1a..2c70642 100644
---- a/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-@@ -41,9 +41,6 @@
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning tailor for stub locale support
- #endif
--#ifndef __UCLIBC_HAS_XLOCALE__
--#define __nl_langinfo_l(N, L)         nl_langinfo((N))
--#endif
- 
- namespace std
- {
-diff --git a/libstdc++-v3/config/locale/uclibc/time_members.cc b/libstdc++-v3/config/locale/uclibc/time_members.cc
-index e0707d7..d848ed5 100644
---- a/libstdc++-v3/config/locale/uclibc/time_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/time_members.cc
-@@ -40,9 +40,6 @@
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning tailor for stub locale support
- #endif
--#ifndef __UCLIBC_HAS_XLOCALE__
--#define __nl_langinfo_l(N, L)         nl_langinfo((N))
--#endif
- 
- namespace std
- {
--- 
-2.8.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0006-uclibc-locale-wchar_fix.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0006-uclibc-locale-wchar_fix.patch
deleted file mode 100644
index e45a482..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0006-uclibc-locale-wchar_fix.patch
+++ /dev/null
@@ -1,68 +0,0 @@
-From e7a4760fb40008cae33e6fc7dc4cfef6c2fd5f93 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:45:57 +0400
-Subject: [PATCH 06/46] uclibc-locale-wchar_fix
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- libstdc++-v3/config/locale/uclibc/monetary_members.cc |  4 ++--
- libstdc++-v3/config/locale/uclibc/numeric_members.cc  | 13 +++++++++++++
- 2 files changed, 15 insertions(+), 2 deletions(-)
-
-diff --git a/libstdc++-v3/config/locale/uclibc/monetary_members.cc b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-index 2e6f80a..31ebb9f 100644
---- a/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-@@ -401,7 +401,7 @@ namespace std
- # ifdef __UCLIBC_HAS_XLOCALE__
- 	  _M_data->_M_decimal_point = __cloc->decimal_point_wc;
- 	  _M_data->_M_thousands_sep = __cloc->thousands_sep_wc;
--# else
-+# elif defined __UCLIBC_HAS_LOCALE__
- 	  _M_data->_M_decimal_point = __global_locale->decimal_point_wc;
- 	  _M_data->_M_thousands_sep = __global_locale->thousands_sep_wc;
- # endif
-@@ -556,7 +556,7 @@ namespace std
- # ifdef __UCLIBC_HAS_XLOCALE__
- 	  _M_data->_M_decimal_point = __cloc->decimal_point_wc;
- 	  _M_data->_M_thousands_sep = __cloc->thousands_sep_wc;
--# else
-+# elif defined __UCLIBC_HAS_LOCALE__
- 	  _M_data->_M_decimal_point = __global_locale->decimal_point_wc;
- 	  _M_data->_M_thousands_sep = __global_locale->thousands_sep_wc;
- # endif
-diff --git a/libstdc++-v3/config/locale/uclibc/numeric_members.cc b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-index 2c70642..d5c8961 100644
---- a/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-@@ -127,12 +127,25 @@ namespace std
- 	{
- 	  // Named locale.
- 	  // NB: In the GNU model wchar_t is always 32 bit wide.
-+#ifdef __UCLIBC_MJN3_ONLY__
-+#warning fix this... should be numeric
-+#endif
-+#ifdef __UCLIBC__
-+# ifdef __UCLIBC_HAS_XLOCALE__
-+	  _M_data->_M_decimal_point = __cloc->decimal_point_wc;
-+	  _M_data->_M_thousands_sep = __cloc->thousands_sep_wc;
-+# elif defined __UCLIBC_HAS_LOCALE__
-+	  _M_data->_M_decimal_point = __global_locale->decimal_point_wc;
-+	  _M_data->_M_thousands_sep = __global_locale->thousands_sep_wc;
-+# endif
-+#else
- 	  union { char *__s; wchar_t __w; } __u;
- 	  __u.__s = __nl_langinfo_l(_NL_NUMERIC_DECIMAL_POINT_WC, __cloc);
- 	  _M_data->_M_decimal_point = __u.__w;
- 
- 	  __u.__s = __nl_langinfo_l(_NL_NUMERIC_THOUSANDS_SEP_WC, __cloc);
- 	  _M_data->_M_thousands_sep = __u.__w;
-+#endif
- 
- 	  if (_M_data->_M_thousands_sep == L'\0')
- 	    _M_data->_M_grouping = "";
--- 
-2.8.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0007-uclibc-locale-update.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0007-uclibc-locale-update.patch
deleted file mode 100644
index b73e591..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0007-uclibc-locale-update.patch
+++ /dev/null
@@ -1,542 +0,0 @@
-From 8d53a38a3038104e6830ecea5e4beadce54457c1 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Fri, 29 Mar 2013 08:46:58 +0400
-Subject: [PATCH 07/46] uclibc-locale-update
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
----
- .../config/locale/uclibc/c++locale_internal.h      |  3 +
- libstdc++-v3/config/locale/uclibc/c_locale.cc      | 74 ++++++++++------------
- libstdc++-v3/config/locale/uclibc/c_locale.h       | 42 ++++++------
- libstdc++-v3/config/locale/uclibc/ctype_members.cc | 51 +++++++++++----
- .../config/locale/uclibc/messages_members.h        | 12 ++--
- .../config/locale/uclibc/monetary_members.cc       | 34 ++++++----
- .../config/locale/uclibc/numeric_members.cc        |  5 ++
- libstdc++-v3/config/locale/uclibc/time_members.cc  | 18 ++++--
- libstdc++-v3/config/locale/uclibc/time_members.h   | 17 +++--
- 9 files changed, 158 insertions(+), 98 deletions(-)
-
-diff --git a/libstdc++-v3/config/locale/uclibc/c++locale_internal.h b/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-index e74fddf..971a6b4 100644
---- a/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-+++ b/libstdc++-v3/config/locale/uclibc/c++locale_internal.h
-@@ -31,6 +31,9 @@
- 
- #include <bits/c++config.h>
- #include <clocale>
-+#include <cstdlib>
-+#include <cstring>
-+#include <cstddef>
- 
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning clean this up
-diff --git a/libstdc++-v3/config/locale/uclibc/c_locale.cc b/libstdc++-v3/config/locale/uclibc/c_locale.cc
-index 21430d0..1b9d8e1 100644
---- a/libstdc++-v3/config/locale/uclibc/c_locale.cc
-+++ b/libstdc++-v3/config/locale/uclibc/c_locale.cc
-@@ -39,23 +39,20 @@
- #include <langinfo.h>
- #include <bits/c++locale_internal.h>
- 
--namespace std
--{
-+_GLIBCXX_BEGIN_NAMESPACE(std)
-+
-   template<>
-     void
-     __convert_to_v(const char* __s, float& __v, ios_base::iostate& __err,
- 		   const __c_locale& __cloc)
-     {
--      if (!(__err & ios_base::failbit))
--	{
--	  char* __sanity;
--	  errno = 0;
--	  float __f = __strtof_l(__s, &__sanity, __cloc);
--          if (__sanity != __s && errno != ERANGE)
--	    __v = __f;
--	  else
--	    __err |= ios_base::failbit;
--	}
-+      char* __sanity;
-+      errno = 0;
-+      float __f = __strtof_l(__s, &__sanity, __cloc);
-+      if (__sanity != __s && errno != ERANGE)
-+	__v = __f;
-+      else
-+	__err |= ios_base::failbit;
-     }
- 
-   template<>
-@@ -63,16 +60,13 @@ namespace std
-     __convert_to_v(const char* __s, double& __v, ios_base::iostate& __err,
- 		   const __c_locale& __cloc)
-     {
--      if (!(__err & ios_base::failbit))
--	{
--	  char* __sanity;
--	  errno = 0;
--	  double __d = __strtod_l(__s, &__sanity, __cloc);
--          if (__sanity != __s && errno != ERANGE)
--	    __v = __d;
--	  else
--	    __err |= ios_base::failbit;
--	}
-+      char* __sanity;
-+      errno = 0;
-+      double __d = __strtod_l(__s, &__sanity, __cloc);
-+      if (__sanity != __s && errno != ERANGE)
-+	__v = __d;
-+      else
-+	__err |= ios_base::failbit;
-     }
- 
-   template<>
-@@ -80,16 +74,13 @@ namespace std
-     __convert_to_v(const char* __s, long double& __v, ios_base::iostate& __err,
- 		   const __c_locale& __cloc)
-     {
--      if (!(__err & ios_base::failbit))
--	{
--	  char* __sanity;
--	  errno = 0;
--	  long double __ld = __strtold_l(__s, &__sanity, __cloc);
--          if (__sanity != __s && errno != ERANGE)
--	    __v = __ld;
--	  else
--	    __err |= ios_base::failbit;
--	}
-+      char* __sanity;
-+      errno = 0;
-+      long double __ld = __strtold_l(__s, &__sanity, __cloc);
-+      if (__sanity != __s && errno != ERANGE)
-+	__v = __ld;
-+      else
-+	__err |= ios_base::failbit;
-     }
- 
-   void
-@@ -110,17 +101,18 @@ namespace std
-   void
-   locale::facet::_S_destroy_c_locale(__c_locale& __cloc)
-   {
--    if (_S_get_c_locale() != __cloc)
-+    if (__cloc && _S_get_c_locale() != __cloc)
-       __freelocale(__cloc);
-   }
- 
-   __c_locale
-   locale::facet::_S_clone_c_locale(__c_locale& __cloc)
-   { return __duplocale(__cloc); }
--} // namespace std
- 
--namespace __gnu_cxx
--{
-+_GLIBCXX_END_NAMESPACE
-+
-+_GLIBCXX_BEGIN_NAMESPACE(__gnu_cxx)
-+
-   const char* const category_names[6 + _GLIBCXX_NUM_CATEGORIES] =
-     {
-       "LC_CTYPE",
-@@ -138,9 +130,11 @@ namespace __gnu_cxx
-       "LC_IDENTIFICATION"
- #endif
-     };
--}
- 
--namespace std
--{
-+_GLIBCXX_END_NAMESPACE
-+
-+_GLIBCXX_BEGIN_NAMESPACE(std)
-+
-   const char* const* const locale::_S_categories = __gnu_cxx::category_names;
--}  // namespace std
-+
-+_GLIBCXX_END_NAMESPACE
-diff --git a/libstdc++-v3/config/locale/uclibc/c_locale.h b/libstdc++-v3/config/locale/uclibc/c_locale.h
-index 4bca5f1..64a6d46 100644
---- a/libstdc++-v3/config/locale/uclibc/c_locale.h
-+++ b/libstdc++-v3/config/locale/uclibc/c_locale.h
-@@ -39,21 +39,23 @@
- #pragma GCC system_header
- 
- #include <cstring>              // get std::strlen
--#include <cstdio>               // get std::snprintf or std::sprintf
-+#include <cstdio>               // get std::vsnprintf or std::vsprintf
- #include <clocale>
- #include <langinfo.h>		// For codecvt
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning fix this
- #endif
--#ifdef __UCLIBC_HAS_LOCALE__
-+#ifdef _GLIBCXX_USE_ICONV
- #include <iconv.h>		// For codecvt using iconv, iconv_t
- #endif
--#ifdef __UCLIBC_HAS_GETTEXT_AWARENESS__
--#include <libintl.h> 		// For messages
-+#ifdef HAVE_LIBINTL_H
-+#include <libintl.h>		// For messages
- #endif
-+#include <cstdarg>
- 
- #ifdef __UCLIBC_MJN3_ONLY__
- #warning what is _GLIBCXX_C_LOCALE_GNU for
-+// psm: used in os/gnu-linux/ctype_noninline.h
- #endif
- #define _GLIBCXX_C_LOCALE_GNU 1
- 
-@@ -78,23 +80,25 @@ namespace std
- #else
-   typedef int*			__c_locale;
- #endif
--
--  // Convert numeric value of type _Tv to string and return length of
--  // string.  If snprintf is available use it, otherwise fall back to
--  // the unsafe sprintf which, in general, can be dangerous and should
-+  // Convert numeric value of type double to string and return length of
-+  // string.  If vsnprintf is available use it, otherwise fall back to
-+  // the unsafe vsprintf which, in general, can be dangerous and should
-   // be avoided.
--  template<typename _Tv>
--    int
--    __convert_from_v(char* __out,
--		     const int __size __attribute__ ((__unused__)),
--		     const char* __fmt,
--#ifdef __UCLIBC_HAS_XCLOCALE__
--		     _Tv __v, const __c_locale& __cloc, int __prec)
-+    inline int
-+    __convert_from_v(const __c_locale&
-+#ifndef __UCLIBC_HAS_XCLOCALE__
-+	__cloc __attribute__ ((__unused__))
-+#endif
-+		     ,
-+		     char* __out,
-+		     const int __size,
-+		     const char* __fmt, ...)
-     {
-+      va_list __args;
-+#ifdef __UCLIBC_HAS_XCLOCALE__
-+
-       __c_locale __old = __gnu_cxx::__uselocale(__cloc);
- #else
--		     _Tv __v, const __c_locale&, int __prec)
--    {
- # ifdef __UCLIBC_HAS_LOCALE__
-       char* __old = std::setlocale(LC_ALL, NULL);
-       char* __sav = new char[std::strlen(__old) + 1];
-@@ -103,7 +107,9 @@ namespace std
- # endif
- #endif
- 
--      const int __ret = std::snprintf(__out, __size, __fmt, __prec, __v);
-+      va_start(__args, __fmt);
-+      const int __ret = std::vsnprintf(__out, __size, __fmt, __args);
-+      va_end(__args);
- 
- #ifdef __UCLIBC_HAS_XCLOCALE__
-       __gnu_cxx::__uselocale(__old);
-diff --git a/libstdc++-v3/config/locale/uclibc/ctype_members.cc b/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-index 7b12861..13e011d 100644
---- a/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/ctype_members.cc
-@@ -33,16 +33,20 @@
- 
- // Written by Benjamin Kosnik <bkoz@redhat.com>
- 
-+#include <features.h>
-+#ifdef __UCLIBC_HAS_LOCALE__
- #define _LIBC
- #include <locale>
- #undef _LIBC
-+#else
-+#include <locale>
-+#endif
- #include <bits/c++locale_internal.h>
- 
--namespace std
--{
-+_GLIBCXX_BEGIN_NAMESPACE(std)
-+
-   // NB: The other ctype<char> specializations are in src/locale.cc and
-   // various /config/os/* files.
--  template<>
-     ctype_byname<char>::ctype_byname(const char* __s, size_t __refs)
-     : ctype<char>(0, false, __refs)
-     {
-@@ -57,6 +61,8 @@ namespace std
- #endif
- 	}
-     }
-+    ctype_byname<char>::~ctype_byname()
-+    { }
- 
- #ifdef _GLIBCXX_USE_WCHAR_T
-   ctype<wchar_t>::__wmask_type
-@@ -138,17 +144,33 @@ namespace std
-   ctype<wchar_t>::
-   do_is(mask __m, wchar_t __c) const
-   {
--    // Highest bitmask in ctype_base == 10, but extra in "C"
--    // library for blank.
-+    // The case of __m == ctype_base::space is particularly important,
-+    // due to its use in many istream functions.  Therefore we deal with
-+    // it first, exploiting the knowledge that on GNU systems _M_bit[5]
-+    // is the mask corresponding to ctype_base::space.  NB: an encoding
-+    // change would not affect correctness!
-+
-     bool __ret = false;
--    const size_t __bitmasksize = 11;
--    for (size_t __bitcur = 0; __bitcur <= __bitmasksize; ++__bitcur)
--      if (__m & _M_bit[__bitcur]
--	  && __iswctype_l(__c, _M_wmask[__bitcur], _M_c_locale_ctype))
--	{
--	  __ret = true;
--	  break;
--	}
-+    if (__m == _M_bit[5])
-+      __ret = __iswctype_l(__c, _M_wmask[5], _M_c_locale_ctype);
-+    else
-+      {
-+	// Highest bitmask in ctype_base == 10, but extra in "C"
-+	// library for blank.
-+	const size_t __bitmasksize = 11;
-+	for (size_t __bitcur = 0; __bitcur <= __bitmasksize; ++__bitcur)
-+	  if (__m & _M_bit[__bitcur])
-+	    {
-+	      if (__iswctype_l(__c, _M_wmask[__bitcur], _M_c_locale_ctype))
-+		{
-+		  __ret = true;
-+		  break;
-+		}
-+	      else if (__m == _M_bit[__bitcur])
-+		break;
-+	    }
-+      }
-+
-     return __ret;
-   }
- 
-@@ -290,4 +312,5 @@ namespace std
- #endif
-   }
- #endif //  _GLIBCXX_USE_WCHAR_T
--}
-+
-+_GLIBCXX_END_NAMESPACE
-diff --git a/libstdc++-v3/config/locale/uclibc/messages_members.h b/libstdc++-v3/config/locale/uclibc/messages_members.h
-index d89da33..067657a 100644
---- a/libstdc++-v3/config/locale/uclibc/messages_members.h
-+++ b/libstdc++-v3/config/locale/uclibc/messages_members.h
-@@ -53,12 +53,16 @@
-   template<typename _CharT>
-      messages<_CharT>::messages(__c_locale __cloc, const char* __s,
- 				size_t __refs)
--     : facet(__refs), _M_c_locale_messages(_S_clone_c_locale(__cloc)),
--     _M_name_messages(__s)
-+     : facet(__refs), _M_c_locale_messages(NULL),
-+     _M_name_messages(NULL)
-      {
--       char* __tmp = new char[std::strlen(__s) + 1];
--       std::strcpy(__tmp, __s);
-+       const size_t __len = std::strlen(__s) + 1;
-+       char* __tmp = new char[__len];
-+       std::memcpy(__tmp, __s, __len);
-        _M_name_messages = __tmp;
-+
-+       // Last to avoid leaking memory if new throws.
-+       _M_c_locale_messages = _S_clone_c_locale(__cloc);
-      }
- 
-   template<typename _CharT>
-diff --git a/libstdc++-v3/config/locale/uclibc/monetary_members.cc b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-index 31ebb9f..7679b9c 100644
---- a/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/monetary_members.cc
-@@ -33,9 +33,14 @@
- 
- // Written by Benjamin Kosnik <bkoz@redhat.com>
- 
-+#include <features.h>
-+#ifdef __UCLIBC_HAS_LOCALE__
- #define _LIBC
- #include <locale>
- #undef _LIBC
-+#else
-+#include <locale>
-+#endif
- #include <bits/c++locale_internal.h>
- 
- #ifdef __UCLIBC_MJN3_ONLY__
-@@ -206,7 +211,7 @@ namespace std
- 	  }
- 	break;
-       default:
--	;
-+	__ret = pattern();
-       }
-     return __ret;
-   }
-@@ -390,7 +395,9 @@ namespace std
- 	  __c_locale __old = __uselocale(__cloc);
- #else
- 	  // Switch to named locale so that mbsrtowcs will work.
--	  char* __old = strdup(setlocale(LC_ALL, NULL));
-+  	  char* __old = setlocale(LC_ALL, NULL);
-+          const size_t __llen = strlen(__old) + 1;
-+          char* __sav = new char[__llen];
- 	  setlocale(LC_ALL, __name);
- #endif
- 
-@@ -477,8 +484,8 @@ namespace std
- #ifdef __UCLIBC_HAS_XLOCALE__
- 	      __uselocale(__old);
- #else
--	      setlocale(LC_ALL, __old);
--	      free(__old);
-+	      setlocale(LC_ALL, __sav);
-+	      delete [] __sav;
- #endif
- 	      __throw_exception_again;
- 	    }
-@@ -498,8 +505,8 @@ namespace std
- #ifdef __UCLIBC_HAS_XLOCALE__
- 	  __uselocale(__old);
- #else
--	  setlocale(LC_ALL, __old);
--	  free(__old);
-+	  setlocale(LC_ALL, __sav);
-+	  delete [] __sav;
- #endif
- 	}
-     }
-@@ -545,8 +552,11 @@ namespace std
- 	  __c_locale __old = __uselocale(__cloc);
- #else
- 	  // Switch to named locale so that mbsrtowcs will work.
--	  char* __old = strdup(setlocale(LC_ALL, NULL));
--	  setlocale(LC_ALL, __name);
-+          char* __old = setlocale(LC_ALL, NULL);
-+          const size_t __llen = strlen(__old) + 1;
-+          char* __sav = new char[__llen];
-+          memcpy(__sav, __old, __llen);
-+          setlocale(LC_ALL, __name);
- #endif
- 
- #ifdef __UCLIBC_MJN3_ONLY__
-@@ -633,8 +643,8 @@ namespace std
- #ifdef __UCLIBC_HAS_XLOCALE__
- 	      __uselocale(__old);
- #else
--	      setlocale(LC_ALL, __old);
--	      free(__old);
-+	      setlocale(LC_ALL, __sav);
-+	      delete [] __sav;
- #endif
-               __throw_exception_again;
- 	    }
-@@ -653,8 +663,8 @@ namespace std
- #ifdef __UCLIBC_HAS_XLOCALE__
- 	  __uselocale(__old);
- #else
--	  setlocale(LC_ALL, __old);
--	  free(__old);
-+	  setlocale(LC_ALL, __sav);
-+	  delete [] __sav;
- #endif
- 	}
-     }
-diff --git a/libstdc++-v3/config/locale/uclibc/numeric_members.cc b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-index d5c8961..8ae8969 100644
---- a/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/numeric_members.cc
-@@ -33,9 +33,14 @@
- 
- // Written by Benjamin Kosnik <bkoz@redhat.com>
- 
-+#include <features.h>
-+#ifdef __UCLIBC_HAS_LOCALE__
- #define _LIBC
- #include <locale>
- #undef _LIBC
-+#else
-+#include <locale>
-+#endif
- #include <bits/c++locale_internal.h>
- 
- #ifdef __UCLIBC_MJN3_ONLY__
-diff --git a/libstdc++-v3/config/locale/uclibc/time_members.cc b/libstdc++-v3/config/locale/uclibc/time_members.cc
-index d848ed5..f24d53e 100644
---- a/libstdc++-v3/config/locale/uclibc/time_members.cc
-+++ b/libstdc++-v3/config/locale/uclibc/time_members.cc
-@@ -53,11 +53,14 @@ namespace std
-       const size_t __len = __strftime_l(__s, __maxlen, __format, __tm,
- 					_M_c_locale_timepunct);
- #else
--      char* __old = strdup(setlocale(LC_ALL, NULL));
-+      char* __old = setlocale(LC_ALL, NULL);
-+      const size_t __llen = strlen(__old) + 1;
-+      char* __sav = new char[__llen];
-+      memcpy(__sav, __old, __llen);
-       setlocale(LC_ALL, _M_name_timepunct);
-       const size_t __len = strftime(__s, __maxlen, __format, __tm);
--      setlocale(LC_ALL, __old);
--      free(__old);
-+      setlocale(LC_ALL, __sav);
-+      delete [] __sav;
- #endif
-       // Make sure __s is null terminated.
-       if (__len == 0)
-@@ -207,11 +210,14 @@ namespace std
-       const size_t __len = __wcsftime_l(__s, __maxlen, __format, __tm,
- 					_M_c_locale_timepunct);
- #else
--      char* __old = strdup(setlocale(LC_ALL, NULL));
-+      char* __old = setlocale(LC_ALL, NULL);
-+      const size_t __llen = strlen(__old) + 1;
-+      char* __sav = new char[__llen];
-+      memcpy(__sav, __old, __llen);
-       setlocale(LC_ALL, _M_name_timepunct);
-       const size_t __len = wcsftime(__s, __maxlen, __format, __tm);
--      setlocale(LC_ALL, __old);
--      free(__old);
-+      setlocale(LC_ALL, __sav);
-+      delete [] __sav;
- #endif
-       // Make sure __s is null terminated.
-       if (__len == 0)
-diff --git a/libstdc++-v3/config/locale/uclibc/time_members.h b/libstdc++-v3/config/locale/uclibc/time_members.h
-index ba8e858..1665dde 100644
---- a/libstdc++-v3/config/locale/uclibc/time_members.h
-+++ b/libstdc++-v3/config/locale/uclibc/time_members.h
-@@ -50,12 +50,21 @@
-     __timepunct<_CharT>::__timepunct(__c_locale __cloc, const char* __s,
- 				     size_t __refs)
-     : facet(__refs), _M_data(NULL), _M_c_locale_timepunct(NULL),
--    _M_name_timepunct(__s)
-+    _M_name_timepunct(NULL)
-     {
--      char* __tmp = new char[std::strlen(__s) + 1];
--      std::strcpy(__tmp, __s);
-+      const size_t __len = std::strlen(__s) + 1;
-+      char* __tmp = new char[__len];
-+      std::memcpy(__tmp, __s, __len);
-       _M_name_timepunct = __tmp;
--      _M_initialize_timepunct(__cloc);
-+
-+      try
-+	{ _M_initialize_timepunct(__cloc); }
-+      catch(...)
-+	{
-+	  delete [] _M_name_timepunct;
-+	  __throw_exception_again;
-+	}
-+
-     }
- 
-   template<typename _CharT>
--- 
-2.8.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0041-ssp_nonshared.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0041-ssp_nonshared.patch
new file mode 100644
index 0000000..0744529
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0041-ssp_nonshared.patch
@@ -0,0 +1,28 @@
+From 551a5db7acb56e085a101f1c222d51b2c1b039a4 Mon Sep 17 00:00:00 2001
+From: Szabolcs Nagy <nsz@port70.net>
+Date: Sat, 7 Nov 2015 14:58:40 +0000
+Subject: [PATCH 41/46] ssp_nonshared
+
+---
+Upstream-Status: Inappropriate [OE Configuration]
+
+ gcc/gcc.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/gcc/gcc.c b/gcc/gcc.c
+index 2812819..9de96ee 100644
+--- a/gcc/gcc.c
++++ b/gcc/gcc.c
+@@ -863,7 +863,8 @@ proper position among the other output files.  */
+ #ifndef LINK_SSP_SPEC
+ #ifdef TARGET_LIBC_PROVIDES_SSP
+ #define LINK_SSP_SPEC "%{fstack-protector|fstack-protector-all" \
+-		       "|fstack-protector-strong|fstack-protector-explicit:}"
++		       "|fstack-protector-strong|fstack-protector-explicit" \
++		       ":-lssp_nonshared}"
+ #else
+ #define LINK_SSP_SPEC "%{fstack-protector|fstack-protector-all" \
+ 		       "|fstack-protector-strong|fstack-protector-explicit" \
+-- 
+2.8.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0047-libgcc_s-Use-alias-for-__cpu_indicator_init-instead-.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0047-libgcc_s-Use-alias-for-__cpu_indicator_init-instead-.patch
index ed6cd69..6b5da02 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0047-libgcc_s-Use-alias-for-__cpu_indicator_init-instead-.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0047-libgcc_s-Use-alias-for-__cpu_indicator_init-instead-.patch
@@ -31,7 +31,7 @@
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
 ---
-Upstream-Status: Rejected
+Upstream-Status: Denied
 
  gcc/config/i386/i386.c       | 4 ++--
  libgcc/config/i386/cpuinfo.c | 6 +++---
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0056-LRA-PR70904-relax-the-restriction-on-subreg-reload-f.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0056-LRA-PR70904-relax-the-restriction-on-subreg-reload-f.patch
new file mode 100644
index 0000000..231f147
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0056-LRA-PR70904-relax-the-restriction-on-subreg-reload-f.patch
@@ -0,0 +1,51 @@
+From a582b0a53d1dc8604a201348b99ca8de48784e7e Mon Sep 17 00:00:00 2001
+From: jiwang <jiwang@138bc75d-0d04-0410-961f-82ee72b054a4>
+Date: Thu, 12 May 2016 17:00:52 +0000
+Subject: [PATCH] [LRA] PR70904, relax the restriction on subreg reload for
+ wide mode
+
+2016-05-12  Jiong Wang  <jiong.wang@arm.com>
+
+gcc/
+  PR rtl-optimization/70904
+  * lra-constraint.c (process_addr_reg): Relax the restriction on
+  subreg reload for wide mode.
+
+git-svn-id: svn+ssh://gcc.gnu.org/svn/gcc/trunk@236181 138bc75d-0d04-0410-961f-82ee72b054a4
+---
+Upstream-Status: Backport
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+ gcc/lra-constraints.c | 16 +++++++++++++++-
+ 1 file changed, 15 insertions(+), 1 deletion(-)
+
+diff --git a/gcc/lra-constraints.c b/gcc/lra-constraints.c
+index f96fd458e23..73fb72a2ea5 100644
+--- a/gcc/lra-constraints.c
++++ b/gcc/lra-constraints.c
+@@ -1326,7 +1326,21 @@ process_addr_reg (rtx *loc, bool check_only_p, rtx_insn **before, rtx_insn **aft
+ 
+   subreg_p = GET_CODE (*loc) == SUBREG;
+   if (subreg_p)
+-    loc = &SUBREG_REG (*loc);
++    {
++      reg = SUBREG_REG (*loc);
++      mode = GET_MODE (reg);
++
++      /* For mode with size bigger than ptr_mode, there unlikely to be "mov"
++	 between two registers with different classes, but there normally will
++	 be "mov" which transfers element of vector register into the general
++	 register, and this normally will be a subreg which should be reloaded
++	 as a whole.  This is particularly likely to be triggered when
++	 -fno-split-wide-types specified.  */
++      if (in_class_p (reg, cl, &new_class)
++	  || GET_MODE_SIZE (mode) <= GET_MODE_SIZE (ptr_mode))
++       loc = &SUBREG_REG (*loc);
++    }
++
+   reg = *loc;
+   mode = GET_MODE (reg);
+   if (! REG_P (reg))
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3.inc
new file mode 100644
index 0000000..2073828
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3.inc
@@ -0,0 +1,128 @@
+require gcc-common.inc
+
+# Third digit in PV should be incremented after a minor release
+
+PV = "7.3.0"
+
+# BINV should be incremented to a revision after a minor gcc release
+
+BINV = "7.3.0"
+
+FILESEXTRAPATHS =. "${FILE_DIRNAME}/gcc-7.3:${FILE_DIRNAME}/gcc-7.3/backport:"
+
+DEPENDS =+ "mpfr gmp libmpc zlib"
+NATIVEDEPS = "mpfr-native gmp-native libmpc-native zlib-native"
+
+LICENSE = "GPL-3.0-with-GCC-exception & GPLv3"
+
+LIC_FILES_CHKSUM = "\
+    file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552 \
+    file://COPYING3;md5=d32239bcb673463ab874e80d47fae504 \
+    file://COPYING3.LIB;md5=6a6a8e020838b23406c81b19c1d46df6 \
+    file://COPYING.LIB;md5=2d5025d4aa3495befef8f17206a5b0a1 \
+    file://COPYING.RUNTIME;md5=fe60d87048567d4fe8c8a0ed2448bcc8 \
+"
+
+#RELEASE = "7-20170504"
+BASEURI ?= "${GNU_MIRROR}/gcc/gcc-${PV}/gcc-${PV}.tar.xz"
+#SRCREV = "f7cf798b73fd1a07098f9a490deec1e2a36e0bed"
+#BASEURI ?= "git://github.com/gcc-mirror/gcc;branch=gcc-6-branch;protocol=git"
+#BASEURI ?= "http://mirrors.concertpass.com/gcc/snapshots/${RELEASE}/gcc-${RELEASE}.tar.bz2"
+
+SRC_URI = "\
+           ${BASEURI} \
+           file://0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch \
+           file://0008-c99-snprintf.patch \
+           file://0009-gcc-poison-system-directories.patch \
+           file://0010-gcc-poison-dir-extend.patch \
+           file://0011-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch \
+           file://0012-64-bit-multilib-hack.patch \
+           file://0013-optional-libstdc.patch \
+           file://0014-gcc-disable-MASK_RELAX_PIC_CALLS-bit.patch \
+           file://0015-COLLECT_GCC_OPTIONS.patch \
+           file://0016-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch \
+           file://0017-fortran-cross-compile-hack.patch \
+           file://0018-cpp-honor-sysroot.patch \
+           file://0019-MIPS64-Default-to-N64-ABI.patch \
+           file://0020-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch \
+           file://0021-gcc-Fix-argument-list-too-long-error.patch \
+           file://0022-Disable-sdt.patch \
+           file://0023-libtool.patch \
+           file://0024-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch \
+           file://0025-Use-the-multilib-config-files-from-B-instead-of-usin.patch \
+           file://0026-Avoid-using-libdir-from-.la-which-usually-points-to-.patch \
+           file://0027-export-CPP.patch \
+           file://0028-Enable-SPE-AltiVec-generation-on-powepc-linux-target.patch \
+           file://0029-Disable-the-MULTILIB_OSDIRNAMES-and-other-multilib-o.patch \
+           file://0030-Ensure-target-gcc-headers-can-be-included.patch \
+           file://0031-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch \
+           file://0032-Don-t-search-host-directory-during-relink-if-inst_pr.patch \
+           file://0033-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch \
+           file://0034-aarch64-Add-support-for-musl-ldso.patch \
+           file://0035-libcc1-fix-libcc1-s-install-path-and-rpath.patch \
+           file://0036-handle-sysroot-support-for-nativesdk-gcc.patch \
+           file://0037-Search-target-sysroot-gcc-version-specific-dirs-with.patch \
+           file://0038-Fix-various-_FOR_BUILD-and-related-variables.patch \
+           file://0039-nios2-Define-MUSL_DYNAMIC_LINKER.patch \
+           file://0040-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch \
+           file://0041-gcc-libcpp-support-ffile-prefix-map-old-new.patch \
+           file://0042-Reuse-fdebug-prefix-map-to-replace-ffile-prefix-map.patch \
+           file://0043-gcc-final.c-fdebug-prefix-map-support-to-remap-sourc.patch \
+           file://0044-libgcc-Add-knob-to-use-ldbl-128-on-ppc.patch \
+           file://0045-Link-libgcc-using-LDFLAGS-not-just-SHLIB_LDFLAGS.patch \
+           file://0047-sync-gcc-stddef.h-with-musl.patch \
+           file://0048-gcc-Enable-static-PIE.patch \
+           file://fix-segmentation-fault-precompiled-hdr.patch \
+           file://0050-RISC-V-Handle-non-legitimate-address-in-riscv_legiti.patch \
+           ${BACKPORTS} \
+"
+BACKPORTS = "\
+"
+
+SRC_URI[md5sum] = "be2da21680f27624f3a87055c4ba5af2"
+SRC_URI[sha256sum] = "832ca6ae04636adbb430e865a1451adf6979ab44ca1c8374f61fba65645ce15c"
+
+S = "${TMPDIR}/work-shared/gcc-${PV}-${PR}/gcc-${PV}"
+#S = "${TMPDIR}/work-shared/gcc-${PV}-${PR}/git"
+B = "${WORKDIR}/gcc-${PV}/build.${HOST_SYS}.${TARGET_SYS}"
+
+# Language Overrides
+FORTRAN = ""
+JAVA = ""
+
+LTO = "--enable-lto"
+
+EXTRA_OECONF_BASE = "\
+    ${LTO} \
+    --enable-libssp \
+    --enable-libitm \
+    --disable-bootstrap \
+    --disable-libmudflap \
+    --with-system-zlib \
+    --with-linker-hash-style=${LINKER_HASH_STYLE} \
+    --enable-linker-build-id \
+    --with-ppl=no \
+    --with-cloog=no \
+    --enable-checking=release \
+    --enable-cheaders=c_global \
+    --without-isl \
+"
+
+EXTRA_OECONF_INITIAL = "\
+    --disable-libmudflap \
+    --disable-libgomp \
+    --disable-libitm \
+    --disable-libquadmath \
+    --with-system-zlib \
+    --disable-lto \
+    --disable-plugin \
+    --enable-decimal-float=no \
+    --without-isl \
+    gcc_cv_libc_provides_ssp=yes \
+"
+
+EXTRA_OECONF_PATHS = "\
+    --with-gxx-include-dir=/not/exist${target_includedir}/c++/${BINV} \
+    --with-sysroot=/not/exist \
+    --with-build-sysroot=${STAGING_DIR_TARGET} \
+"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch
new file mode 100644
index 0000000..1af1c74
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0001-gcc-4.3.1-ARCH_FLAGS_FOR_TARGET.patch
@@ -0,0 +1,42 @@
+From 2fcf1e23ef4b2a5c93526f12212aa892595261f6 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 08:37:11 +0400
+Subject: [PATCH 01/47] gcc-4.3.1: ARCH_FLAGS_FOR_TARGET
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Inappropriate [embedded specific]
+---
+ configure    | 2 +-
+ configure.ac | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/configure b/configure
+index 32a38633ad8..b4760952085 100755
+--- a/configure
++++ b/configure
+@@ -7472,7 +7472,7 @@ fi
+ # for target_alias and gcc doesn't manage it consistently.
+ target_configargs="--cache-file=./config.cache ${target_configargs}"
+ 
+-FLAGS_FOR_TARGET=
++FLAGS_FOR_TARGET="$ARCH_FLAGS_FOR_TARGET"
+ case " $target_configdirs " in
+  *" newlib "*)
+   case " $target_configargs " in
+diff --git a/configure.ac b/configure.ac
+index 12377499295..176ebb921ed 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -3065,7 +3065,7 @@ fi
+ # for target_alias and gcc doesn't manage it consistently.
+ target_configargs="--cache-file=./config.cache ${target_configargs}"
+ 
+-FLAGS_FOR_TARGET=
++FLAGS_FOR_TARGET="$ARCH_FLAGS_FOR_TARGET"
+ case " $target_configdirs " in
+  *" newlib "*)
+   case " $target_configargs " in
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0008-c99-snprintf.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0008-c99-snprintf.patch
new file mode 100644
index 0000000..ebd562b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0008-c99-snprintf.patch
@@ -0,0 +1,28 @@
+From 732f10eead61830a8aee1bf38cce892da25c35b1 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 08:49:03 +0400
+Subject: [PATCH 08/47] c99-snprintf
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
+---
+ libstdc++-v3/include/c_std/cstdio | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/libstdc++-v3/include/c_std/cstdio b/libstdc++-v3/include/c_std/cstdio
+index 43969892aa2..12148457909 100644
+--- a/libstdc++-v3/include/c_std/cstdio
++++ b/libstdc++-v3/include/c_std/cstdio
+@@ -144,7 +144,7 @@ namespace std
+   using ::vsprintf;
+ } // namespace std
+ 
+-#if _GLIBCXX_USE_C99_STDIO
++#if _GLIBCXX_USE_C99_STDIO || defined(__UCLIBC__)
+ 
+ #undef snprintf
+ #undef vfscanf
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0009-gcc-poison-system-directories.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0009-gcc-poison-system-directories.patch
new file mode 100644
index 0000000..fe13ed6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0009-gcc-poison-system-directories.patch
@@ -0,0 +1,192 @@
+From 4791a0a0f4595d0a18974f4e85a759a0789943db Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 08:59:00 +0400
+Subject: [PATCH 09/47] gcc: poison-system-directories
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Inappropriate [distribution: codesourcery]
+---
+ gcc/common.opt      |  4 ++++
+ gcc/config.in       |  6 ++++++
+ gcc/configure       | 16 ++++++++++++++++
+ gcc/configure.ac    | 10 ++++++++++
+ gcc/doc/invoke.texi |  9 +++++++++
+ gcc/gcc.c           |  2 ++
+ gcc/incpath.c       | 19 +++++++++++++++++++
+ 7 files changed, 66 insertions(+)
+
+diff --git a/gcc/common.opt b/gcc/common.opt
+index a5c3aeaa336..f02fe66367e 100644
+--- a/gcc/common.opt
++++ b/gcc/common.opt
+@@ -662,6 +662,10 @@ Wreturn-local-addr
+ Common Var(warn_return_local_addr) Init(1) Warning
+ Warn about returning a pointer/reference to a local or temporary variable.
+ 
++Wpoison-system-directories
++Common Var(flag_poison_system_directories) Init(1) Warning
++Warn for -I and -L options using system directories if cross compiling
++
+ Wshadow
+ Common Var(warn_shadow) Warning
+ Warn when one variable shadows another.  Same as -Wshadow=global.
+diff --git a/gcc/config.in b/gcc/config.in
+index bf2aa7b2e7d..b1203987e15 100644
+--- a/gcc/config.in
++++ b/gcc/config.in
+@@ -194,6 +194,12 @@
+ #endif
+ 
+ 
++/* Define to warn for use of native system header directories */
++#ifndef USED_FOR_TARGET
++#undef ENABLE_POISON_SYSTEM_DIRECTORIES
++#endif
++
++
+ /* Define if you want all operations on RTL (the basic data structure of the
+    optimizer and back end) to be checked for dynamic type safety at runtime.
+    This is quite expensive. */
+diff --git a/gcc/configure b/gcc/configure
+index c823ffe6290..4898f04fa6b 100755
+--- a/gcc/configure
++++ b/gcc/configure
+@@ -949,6 +949,7 @@ with_system_zlib
+ enable_maintainer_mode
+ enable_link_mutex
+ enable_version_specific_runtime_libs
++enable_poison_system_directories
+ enable_plugin
+ enable_host_shared
+ enable_libquadmath_support
+@@ -1691,6 +1692,8 @@ Optional Features:
+   --enable-version-specific-runtime-libs
+                           specify that runtime libraries should be installed
+                           in a compiler-specific directory
++  --enable-poison-system-directories
++                          warn for use of native system header directories
+   --enable-plugin         enable plugin support
+   --enable-host-shared    build host code as shared libraries
+   --disable-libquadmath-support
+@@ -29347,6 +29350,19 @@ if test "${enable_version_specific_runtime_libs+set}" = set; then :
+ fi
+ 
+ 
++# Check whether --enable-poison-system-directories was given.
++if test "${enable_poison_system_directories+set}" = set; then :
++  enableval=$enable_poison_system_directories;
++else
++  enable_poison_system_directories=no
++fi
++
++if test "x${enable_poison_system_directories}" = "xyes"; then
++
++$as_echo "#define ENABLE_POISON_SYSTEM_DIRECTORIES 1" >>confdefs.h
++
++fi
++
+ # Substitute configuration variables
+ 
+ 
+diff --git a/gcc/configure.ac b/gcc/configure.ac
+index acfe9797389..9dc1dc7fc96 100644
+--- a/gcc/configure.ac
++++ b/gcc/configure.ac
+@@ -6101,6 +6101,16 @@ AC_ARG_ENABLE(version-specific-runtime-libs,
+                 [specify that runtime libraries should be
+                  installed in a compiler-specific directory])])
+ 
++AC_ARG_ENABLE([poison-system-directories],
++             AS_HELP_STRING([--enable-poison-system-directories],
++                            [warn for use of native system header directories]),,
++             [enable_poison_system_directories=no])
++if test "x${enable_poison_system_directories}" = "xyes"; then
++  AC_DEFINE([ENABLE_POISON_SYSTEM_DIRECTORIES],
++           [1],
++           [Define to warn for use of native system header directories])
++fi
++
+ # Substitute configuration variables
+ AC_SUBST(subdirs)
+ AC_SUBST(srcdir)
+diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
+index 68a558e9992..060cd7169c6 100644
+--- a/gcc/doc/invoke.texi
++++ b/gcc/doc/invoke.texi
+@@ -298,6 +298,7 @@ Objective-C and Objective-C++ Dialects}.
+ -Wpacked  -Wpacked-bitfield-compat  -Wpadded @gol
+ -Wparentheses  -Wno-pedantic-ms-format @gol
+ -Wplacement-new  -Wplacement-new=@var{n} @gol
++-Wno-poison-system-directories @gol
+ -Wpointer-arith  -Wpointer-compare  -Wno-pointer-to-int-cast @gol
+ -Wno-pragmas  -Wredundant-decls  -Wrestrict  -Wno-return-local-addr @gol
+ -Wreturn-type  -Wsequence-point  -Wshadow  -Wno-shadow-ivar @gol
+@@ -5395,6 +5396,14 @@ made up of data only and thus requires no special treatment.  But, for
+ most targets, it is made up of code and thus requires the stack to be
+ made executable in order for the program to work properly.
+ 
++@item -Wno-poison-system-directories
++@opindex Wno-poison-system-directories
++Do not warn for @option{-I} or @option{-L} options using system
++directories such as @file{/usr/include} when cross compiling.  This
++option is intended for use in chroot environments when such
++directories contain the correct headers and libraries for the target
++system rather than the host.
++
+ @item -Wfloat-equal
+ @opindex Wfloat-equal
+ @opindex Wno-float-equal
+diff --git a/gcc/gcc.c b/gcc/gcc.c
+index c48178f..f63d53d 100644
+--- a/gcc/gcc.c
++++ b/gcc/gcc.c
+@@ -1029,6 +1029,8 @@ proper position among the other output files.  */
+    "%{fuse-ld=*:-fuse-ld=%*} " LINK_COMPRESS_DEBUG_SPEC \
+    "%X %{o*} %{e*} %{N} %{n} %{r}\
+     %{s} %{t} %{u*} %{z} %{Z} %{!nostdlib:%{!nostartfiles:%S}} \
++    %{Wno-poison-system-directories:--no-poison-system-directories} \
++    %{Werror=poison-system-directories:--error-poison-system-directories} \
+     %{static|no-pie:} %{L*} %(mfwrap) %(link_libgcc) " \
+     VTABLE_VERIFICATION_SPEC " " SANITIZER_EARLY_SPEC " %o " CHKP_SPEC " \
+     %{fopenacc|fopenmp|%:gt(%{ftree-parallelize-loops=*:%*} 1):\
+diff --git a/gcc/incpath.c b/gcc/incpath.c
+index 98fe5ec9ab3..f90e74dbd73 100644
+--- a/gcc/incpath.c
++++ b/gcc/incpath.c
+@@ -26,6 +26,7 @@
+ #include "intl.h"
+ #include "incpath.h"
+ #include "cppdefault.h"
++#include "diagnostic-core.h"
+ 
+ /* Microsoft Windows does not natively support inodes.
+    VMS has non-numeric inodes.  */
+@@ -382,6 +383,24 @@ merge_include_chains (const char *sysroot, cpp_reader *pfile, int verbose)
+ 	}
+       fprintf (stderr, _("End of search list.\n"));
+     }
++
++#ifdef ENABLE_POISON_SYSTEM_DIRECTORIES
++  if (flag_poison_system_directories)
++    {
++       struct cpp_dir *p;
++
++       for (p = heads[QUOTE]; p; p = p->next)
++         {
++          if ((!strncmp (p->name, "/usr/include", 12))
++              || (!strncmp (p->name, "/usr/local/include", 18))
++              || (!strncmp (p->name, "/usr/X11R6/include", 18)))
++            warning (OPT_Wpoison_system_directories,
++                     "include location \"%s\" is unsafe for "
++                     "cross-compilation",
++                     p->name);
++         }
++    }
++#endif
+ }
+ 
+ /* Use given -I paths for #include "..." but not #include <...>, and
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0010-gcc-poison-dir-extend.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0010-gcc-poison-dir-extend.patch
new file mode 100644
index 0000000..4e06aa2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0010-gcc-poison-dir-extend.patch
@@ -0,0 +1,39 @@
+From e74ef84ad609b3b6a5c37d207ffc3c6e70d1f025 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:00:34 +0400
+Subject: [PATCH 10/47] gcc-poison-dir-extend
+
+Add /sw/include and /opt/include based on the original
+zecke-no-host-includes.patch patch.  The original patch checked for
+/usr/include, /sw/include and /opt/include and then triggered a failure and
+aborted.
+
+Instead, we add the two missing items to the current scan.  If the user
+wants this to be a failure, they can add "-Werror=poison-system-directories".
+
+Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
+---
+ gcc/incpath.c | 4 +++-
+ 1 file changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/gcc/incpath.c b/gcc/incpath.c
+index f90e74dbd73..c583ee5061d 100644
+--- a/gcc/incpath.c
++++ b/gcc/incpath.c
+@@ -393,7 +393,9 @@ merge_include_chains (const char *sysroot, cpp_reader *pfile, int verbose)
+          {
+           if ((!strncmp (p->name, "/usr/include", 12))
+               || (!strncmp (p->name, "/usr/local/include", 18))
+-              || (!strncmp (p->name, "/usr/X11R6/include", 18)))
++              || (!strncmp (p->name, "/usr/X11R6/include", 18))
++              || (!strncmp (p->name, "/sw/include", 11))
++              || (!strncmp (p->name, "/opt/include", 12)))
+             warning (OPT_Wpoison_system_directories,
+                      "include location \"%s\" is unsafe for "
+                      "cross-compilation",
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0011-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0011-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch
new file mode 100644
index 0000000..b39ff1e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0011-gcc-4.3.3-SYSROOT_CFLAGS_FOR_TARGET.patch
@@ -0,0 +1,73 @@
+From a41d3a53a4e313c20802330d6b5c75358a4ed882 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:08:31 +0400
+Subject: [PATCH 11/47] gcc-4.3.3: SYSROOT_CFLAGS_FOR_TARGET
+
+Before committing, I noticed that PR/32161 was marked as a dup of PR/32009, but my previous patch did not fix it.
+
+This alternative patch is better because it lets you just use CFLAGS_FOR_TARGET to set the compilation flags for libgcc. Since bootstrapped target libraries are never compiled with the native compiler, it makes little sense to use different flags for stage1 and later stages. And it also makes little sense to use a different variable than CFLAGS_FOR_TARGET.
+
+Other changes I had to do include:
+
+- moving the creation of default CFLAGS_FOR_TARGET from Makefile.am to configure.ac, because otherwise the BOOT_CFLAGS are substituted into CFLAGS_FOR_TARGET (which is "-O2 -g $(CFLAGS)") via $(CFLAGS). It is also cleaner this way though.
+
+- passing the right CFLAGS to configure scripts as exported environment variables
+
+I also stopped passing LIBCFLAGS to configure scripts since they are unused in the whole src tree. And I updated the documentation as H-P reminded me to do.
+
+Bootstrapped/regtested i686-pc-linux-gnu, will commit to 4.4 shortly. Ok for 4.3?
+
+Signed-off-by: Paolo Bonzini  <bonzini@gnu.org>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
+---
+ configure | 32 ++++++++++++++++++++++++++++++++
+ 1 file changed, 32 insertions(+)
+
+diff --git a/configure b/configure
+index b4760952085..72a8ba94c4e 100755
+--- a/configure
++++ b/configure
+@@ -6736,6 +6736,38 @@ fi
+ 
+ 
+ 
++# During gcc bootstrap, if we use some random cc for stage1 then CFLAGS
++# might be empty or "-g".  We don't require a C++ compiler, so CXXFLAGS
++# might also be empty (or "-g", if a non-GCC C++ compiler is in the path).
++# We want to ensure that TARGET libraries (which we know are built with
++# gcc) are built with "-O2 -g", so include those options when setting
++# CFLAGS_FOR_TARGET and CXXFLAGS_FOR_TARGET.
++if test "x$CFLAGS_FOR_TARGET" = x; then
++  CFLAGS_FOR_TARGET=$CFLAGS
++  case " $CFLAGS " in
++    *" -O2 "*) ;;
++    *) CFLAGS_FOR_TARGET="-O2 $CFLAGS" ;;
++  esac
++  case " $CFLAGS " in
++    *" -g "* | *" -g3 "*) ;;
++    *) CFLAGS_FOR_TARGET="-g $CFLAGS" ;;
++  esac
++fi
++
++
++if test "x$CXXFLAGS_FOR_TARGET" = x; then
++  CXXFLAGS_FOR_TARGET=$CXXFLAGS
++  case " $CXXFLAGS " in
++    *" -O2 "*) ;;
++    *) CXXFLAGS_FOR_TARGET="-O2 $CXXFLAGS" ;;
++  esac
++  case " $CXXFLAGS " in
++    *" -g "* | *" -g3 "*) ;;
++    *) CXXFLAGS_FOR_TARGET="-g $CXXFLAGS" ;;
++  esac
++fi
++
++
+ # Handle --with-headers=XXX.  If the value is not "yes", the contents of
+ # the named directory are copied to $(tooldir)/sys-include.
+ if test x"${with_headers}" != x && test x"${with_headers}" != xno ; then
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0012-64-bit-multilib-hack.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0012-64-bit-multilib-hack.patch
new file mode 100644
index 0000000..f3b3912
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0012-64-bit-multilib-hack.patch
@@ -0,0 +1,85 @@
+From 3af9fbbd14e83242ac2acb54bbb4bb726845fd34 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:10:06 +0400
+Subject: [PATCH 12/47] 64-bit multilib hack.
+
+GCC has internal multilib handling code but it assumes a very specific rigid directory
+layout. The build system implementation of multilib layout is very generic and allows
+complete customisation of the library directories.
+
+This patch is a partial solution to allow any custom directories to be passed into gcc
+and handled correctly. It forces gcc to use the base_libdir (which is the current
+directory, "."). We need to do this for each multilib that is configured as we don't
+know which compiler options may be being passed into the compiler. Since we have a compiler
+per multilib at this point that isn't an issue.
+
+The one problem is the target compiler is only going to work for the default multilib at
+this point. Ideally we'd figure out which multilibs were being enabled with which paths
+and be able to patch these entries with a complete set of correct paths, but we don't
+have such code at this point. This is something the target gcc recipe should do
+and override these platform defaults in its build config.
+
+RP 15/8/11
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Signed-off-by: Elvis Dowson <elvis.dowson@gmail.com>
+
+Upstream-Status: Pending
+---
+ gcc/config/i386/t-linux64   |  6 ++----
+ gcc/config/mips/t-linux64   | 10 +++-------
+ gcc/config/rs6000/t-linux64 |  5 ++---
+ 3 files changed, 7 insertions(+), 14 deletions(-)
+
+diff --git a/gcc/config/i386/t-linux64 b/gcc/config/i386/t-linux64
+index e422c442dae..cc885e24457 100644
+--- a/gcc/config/i386/t-linux64
++++ b/gcc/config/i386/t-linux64
+@@ -32,7 +32,5 @@
+ #
+ comma=,
+ MULTILIB_OPTIONS    = $(subst $(comma),/,$(TM_MULTILIB_CONFIG))
+-MULTILIB_DIRNAMES   = $(patsubst m%, %, $(subst /, ,$(MULTILIB_OPTIONS)))
+-MULTILIB_OSDIRNAMES = m64=../lib64$(call if_multiarch,:x86_64-linux-gnu)
+-MULTILIB_OSDIRNAMES+= m32=$(if $(wildcard $(shell echo $(SYSTEM_HEADER_DIR))/../../usr/lib32),../lib32,../lib)$(call if_multiarch,:i386-linux-gnu)
+-MULTILIB_OSDIRNAMES+= mx32=../libx32$(call if_multiarch,:x86_64-linux-gnux32)
++MULTILIB_DIRNAMES = . .
++MULTILIB_OSDIRNAMES = ../$(shell basename $(base_libdir)) ../$(shell basename $(base_libdir))
+diff --git a/gcc/config/mips/t-linux64 b/gcc/config/mips/t-linux64
+index 100f9da5e14..601cdf08d05 100644
+--- a/gcc/config/mips/t-linux64
++++ b/gcc/config/mips/t-linux64
+@@ -17,10 +17,6 @@
+ # <http://www.gnu.org/licenses/>.
+ 
+ MULTILIB_OPTIONS = mabi=n32/mabi=32/mabi=64
+-MULTILIB_DIRNAMES = n32 32 64
+-MIPS_EL = $(if $(filter %el, $(firstword $(subst -, ,$(target)))),el)
+-MIPS_SOFT = $(if $(strip $(filter MASK_SOFT_FLOAT_ABI, $(target_cpu_default)) $(filter soft, $(with_float))),soft)
+-MULTILIB_OSDIRNAMES = \
+-	../lib32$(call if_multiarch,:mips64$(MIPS_EL)-linux-gnuabin32$(MIPS_SOFT)) \
+-	../lib$(call if_multiarch,:mips$(MIPS_EL)-linux-gnu$(MIPS_SOFT)) \
+-	../lib64$(call if_multiarch,:mips64$(MIPS_EL)-linux-gnuabi64$(MIPS_SOFT))
++MULTILIB_DIRNAMES = . . .
++MULTILIB_OSDIRNAMES = ../$(shell basename $(base_libdir)) ../$(shell basename $(base_libdir)) ../$(shell basename $(base_libdir))
++
+diff --git a/gcc/config/rs6000/t-linux64 b/gcc/config/rs6000/t-linux64
+index 2830ed0d861..d057facd2fd 100644
+--- a/gcc/config/rs6000/t-linux64
++++ b/gcc/config/rs6000/t-linux64
+@@ -26,10 +26,9 @@
+ # MULTILIB_OSDIRNAMES according to what is found on the target.
+ 
+ MULTILIB_OPTIONS    := m64/m32
+-MULTILIB_DIRNAMES   := 64 32
++MULTILIB_DIRNAMES   := . .
+ MULTILIB_EXTRA_OPTS := 
+-MULTILIB_OSDIRNAMES := m64=../lib64$(call if_multiarch,:powerpc64-linux-gnu)
+-MULTILIB_OSDIRNAMES += m32=$(if $(wildcard $(shell echo $(SYSTEM_HEADER_DIR))/../../usr/lib32),../lib32,../lib)$(call if_multiarch,:powerpc-linux-gnu)
++MULTILIB_OSDIRNAMES := ../$(shell basename $(base_libdir)) ../$(shell basename $(base_libdir))
+ 
+ rs6000-linux.o: $(srcdir)/config/rs6000/rs6000-linux.c
+ 	$(COMPILE) $<
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0013-optional-libstdc.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0013-optional-libstdc.patch
new file mode 100644
index 0000000..3439bf6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0013-optional-libstdc.patch
@@ -0,0 +1,125 @@
+From 26a58d05844274915d011edbf9330c6151687b22 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:12:56 +0400
+Subject: [PATCH 13/47] optional libstdc
+
+gcc-runtime builds libstdc++ separately from gcc-cross-*. Its configure tests using g++
+will not run correctly since by default the linker will try to link against libstdc++
+which shouldn't exist yet. We need an option to disable the -lstdc++
+option whilst leaving -lc, -lgcc and other automatic library dependencies added by the gcc
+driver. This patch adds such an option, which disables only -lstdc++.
+
+A "standard" gcc build uses xgcc and hence avoids this. We should ask upstream how to
+do this officially; the likely answer is "don't build libstdc++ separately".
+
+RP 29/6/10
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Inappropriate [embedded specific]
+---
+ gcc/c-family/c.opt  |  4 ++++
+ gcc/cp/g++spec.c    |  1 +
+ gcc/doc/invoke.texi | 32 +++++++++++++++++++++++++++++++-
+ gcc/gcc.c           |  1 +
+ 4 files changed, 37 insertions(+), 1 deletion(-)
+
+diff --git a/gcc/c-family/c.opt b/gcc/c-family/c.opt
+index 9ad2f6e1fcc..c4ef7796282 100644
+--- a/gcc/c-family/c.opt
++++ b/gcc/c-family/c.opt
+@@ -1848,6 +1848,10 @@ nostdinc++
+ C++ ObjC++
+ Do not search standard system include directories for C++.
+ 
++nostdlib++
++Driver
++Do not link standard C++ runtime library
++
+ o
+ C ObjC C++ ObjC++ Joined Separate
+ ; Documented in common.opt
+diff --git a/gcc/cp/g++spec.c b/gcc/cp/g++spec.c
+index ffcc87c79c9..28d8a9cf530 100644
+--- a/gcc/cp/g++spec.c
++++ b/gcc/cp/g++spec.c
+@@ -137,6 +137,7 @@ lang_specific_driver (struct cl_decoded_option **in_decoded_options,
+       switch (decoded_options[i].opt_index)
+ 	{
+ 	case OPT_nostdlib:
++	case OPT_nostdlib__:
+ 	case OPT_nodefaultlibs:
+ 	  library = -1;
+ 	  break;
+diff --git a/gcc/doc/invoke.texi b/gcc/doc/invoke.texi
+index 060cd7169c6..8e2adc25644 100644
+--- a/gcc/doc/invoke.texi
++++ b/gcc/doc/invoke.texi
+@@ -211,6 +211,9 @@ in the following sections.
+ -fno-weak  -nostdinc++ @gol
+ -fvisibility-inlines-hidden @gol
+ -fvisibility-ms-compat @gol
++-fvtable-verify=@r{[}std@r{|}preinit@r{|}none@r{]} @gol
++-fvtv-counts -fvtv-debug @gol
++-nostdlib++ @gol
+ -fext-numeric-literals @gol
+ -Wabi=@var{n}  -Wabi-tag  -Wconversion-null  -Wctor-dtor-privacy @gol
+ -Wdelete-non-virtual-dtor  -Wliteral-suffix  -Wmultiple-inheritance @gol
+@@ -496,7 +499,7 @@ Objective-C and Objective-C++ Dialects}.
+ -s  -static  -static-libgcc  -static-libstdc++ @gol
+ -static-libasan  -static-libtsan  -static-liblsan  -static-libubsan @gol
+ -static-libmpx  -static-libmpxwrappers @gol
+--shared  -shared-libgcc  -symbolic @gol
++-shared  -shared-libgcc  -symbolic -nostdlib++ @gol
+ -T @var{script}  -Wl,@var{option}  -Xlinker @var{option} @gol
+ -u @var{symbol}  -z @var{keyword}}
+ 
+@@ -11606,6 +11609,33 @@ library subroutines.
+ constructors are called; @pxref{Collect2,,@code{collect2}, gccint,
+ GNU Compiler Collection (GCC) Internals}.)
+ 
++@item -nostdlib++
++@opindex nostdlib++
++Do not use the standard system C++ runtime libraries when linking.
++Only the libraries you specify will be passed to the linker.
++
++@cindex @option{-lgcc}, use with @option{-nostdlib}
++@cindex @option{-nostdlib} and unresolved references
++@cindex unresolved references and @option{-nostdlib}
++@cindex @option{-lgcc}, use with @option{-nodefaultlibs}
++@cindex @option{-nodefaultlibs} and unresolved references
++@cindex unresolved references and @option{-nodefaultlibs}
++One of the standard libraries bypassed by @option{-nostdlib} and
++@option{-nodefaultlibs} is @file{libgcc.a}, a library of internal subroutines
++which GCC uses to overcome shortcomings of particular machines, or special
++needs for some languages.
++(@xref{Interface,,Interfacing to GCC Output,gccint,GNU Compiler
++Collection (GCC) Internals},
++for more discussion of @file{libgcc.a}.)
++In most cases, you need @file{libgcc.a} even when you want to avoid
++other standard libraries.  In other words, when you specify @option{-nostdlib}
++or @option{-nodefaultlibs} you should usually specify @option{-lgcc} as well.
++This ensures that you have no unresolved references to internal GCC
++library subroutines.
++(An example of such an internal subroutine is @code{__main}, used to ensure C++
++constructors are called; @pxref{Collect2,,@code{collect2}, gccint,
++GNU Compiler Collection (GCC) Internals}.)
++
+ @item -pie
+ @opindex pie
+ Produce a position independent executable on targets that support it.
+diff --git a/gcc/gcc.c b/gcc/gcc.c
+index 6315aa0dd16..a5fafbe5107 100644
+--- a/gcc/gcc.c
++++ b/gcc/gcc.c
+@@ -1046,6 +1046,7 @@ proper position among the other output files.  */
+     %(mflib) " STACK_SPLIT_SPEC "\
+     %{fprofile-arcs|fprofile-generate*|coverage:-lgcov} " SANITIZER_SPEC " \
+     %{!nostdlib:%{!nodefaultlibs:%(link_ssp) %(link_gcc_c_sequence)}}\
++    %{!nostdlib++:}\
+     %{!nostdlib:%{!nostartfiles:%E}} %{T*}  \n%(post_link) }}}}}}"
+ #endif
+ 
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0014-gcc-disable-MASK_RELAX_PIC_CALLS-bit.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0014-gcc-disable-MASK_RELAX_PIC_CALLS-bit.patch
new file mode 100644
index 0000000..f92b5fb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0014-gcc-disable-MASK_RELAX_PIC_CALLS-bit.patch
@@ -0,0 +1,59 @@
+From 716de5db6859fd1ea21078c94a41fac7a885b7e9 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:14:20 +0400
+Subject: [PATCH 14/47] gcc: disable MASK_RELAX_PIC_CALLS bit
+
+The new feature added after 4.3.3
+"http://www.pubbs.net/200909/gcc/94048-patch-add-support-for-rmipsjalr.html"
+will cause cc1plus to eat up all the system memory when building webkit-gtk.
+The function mips_get_pic_call_symbol keeps on recursively calling itself.
+Disable this feature to work around the bug.
+
+Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Inappropriate [configuration]
+---
+ gcc/configure    | 7 -------
+ gcc/configure.ac | 7 -------
+ 2 files changed, 14 deletions(-)
+
+diff --git a/gcc/configure b/gcc/configure
+index 4898f04fa6b..640e4643805 100755
+--- a/gcc/configure
++++ b/gcc/configure
+@@ -27303,13 +27303,6 @@ $as_echo_n "checking assembler and linker for explicit JALR relocation... " >&6;
+         rm -f conftest.*
+       fi
+     fi
+-    if test $gcc_cv_as_ld_jalr_reloc = yes; then
+-      if test x$target_cpu_default = x; then
+-        target_cpu_default=MASK_RELAX_PIC_CALLS
+-      else
+-        target_cpu_default="($target_cpu_default)|MASK_RELAX_PIC_CALLS"
+-      fi
+-    fi
+     { $as_echo "$as_me:${as_lineno-$LINENO}: result: $gcc_cv_as_ld_jalr_reloc" >&5
+ $as_echo "$gcc_cv_as_ld_jalr_reloc" >&6; }
+ 
+diff --git a/gcc/configure.ac b/gcc/configure.ac
+index 9dc1dc7fc96..9a2dae55ba2 100644
+--- a/gcc/configure.ac
++++ b/gcc/configure.ac
+@@ -4641,13 +4641,6 @@ x:
+         rm -f conftest.*
+       fi
+     fi
+-    if test $gcc_cv_as_ld_jalr_reloc = yes; then
+-      if test x$target_cpu_default = x; then
+-        target_cpu_default=MASK_RELAX_PIC_CALLS
+-      else
+-        target_cpu_default="($target_cpu_default)|MASK_RELAX_PIC_CALLS"
+-      fi
+-    fi
+     AC_MSG_RESULT($gcc_cv_as_ld_jalr_reloc)
+ 
+     AC_CACHE_CHECK([linker for .eh_frame personality relaxation],
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0015-COLLECT_GCC_OPTIONS.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0015-COLLECT_GCC_OPTIONS.patch
new file mode 100644
index 0000000..6e62945
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0015-COLLECT_GCC_OPTIONS.patch
@@ -0,0 +1,38 @@
+From 04a7a672301bb07caea6a7cad8378f63f1fe3200 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:16:28 +0400
+Subject: [PATCH 15/47] COLLECT_GCC_OPTIONS
+
+This patch adds --sysroot into COLLECT_GCC_OPTIONS which is used to
+invoke collect2.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
+---
+ gcc/gcc.c | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+diff --git a/gcc/gcc.c b/gcc/gcc.c
+index a5fafbe5107..05896e19926 100644
+--- a/gcc/gcc.c
++++ b/gcc/gcc.c
+@@ -4654,6 +4654,15 @@ set_collect_gcc_options (void)
+ 		sizeof ("COLLECT_GCC_OPTIONS=") - 1);
+ 
+   first_time = TRUE;
++#ifdef HAVE_LD_SYSROOT
++  if (target_system_root_changed && target_system_root)
++    {
++      obstack_grow (&collect_obstack, "'--sysroot=", sizeof("'--sysroot=")-1);
++      obstack_grow (&collect_obstack, target_system_root,strlen(target_system_root));
++      obstack_grow (&collect_obstack, "'", 1);
++      first_time = FALSE;
++    }
++#endif
+   for (i = 0; (int) i < n_switches; i++)
+     {
+       const char *const *args;
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0016-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0016-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch
new file mode 100644
index 0000000..1991251
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0016-Use-the-defaults.h-in-B-instead-of-S-and-t-oe-in-B.patch
@@ -0,0 +1,96 @@
+From 47071cbd4f13ff5a4974f71f359a04afcfb125da Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:17:25 +0400
+Subject: [PATCH 16/47] Use the defaults.h in ${B} instead of ${S}, and t-oe in
+ ${B}
+
+Use the defaults.h in ${B} instead of ${S}, and t-oe in ${B}, so that
+the source can be shared between gcc-cross-initial,
+gcc-cross-intermediate, gcc-cross, gcc-runtime, and also the sdk build.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
+
+While compiling gcc-crosssdk-initial-x86_64 on some hosts, there is
+occasionally a failure where the test for the existence of defaults.h doesn't
+work; the reason is tm_include_list='** defaults.h' rather than
+tm_include_list='** ./defaults.h'
+
+So we add the test condition for this situation.
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ gcc/Makefile.in  | 2 +-
+ gcc/configure    | 4 ++--
+ gcc/configure.ac | 4 ++--
+ gcc/mkconfig.sh  | 4 ++--
+ 4 files changed, 7 insertions(+), 7 deletions(-)
+
+diff --git a/gcc/Makefile.in b/gcc/Makefile.in
+index 2411671cea3..7b590c9bbd3 100644
+--- a/gcc/Makefile.in
++++ b/gcc/Makefile.in
+@@ -532,7 +532,7 @@ TARGET_SYSTEM_ROOT = @TARGET_SYSTEM_ROOT@
+ TARGET_SYSTEM_ROOT_DEFINE = @TARGET_SYSTEM_ROOT_DEFINE@
+ 
+ xmake_file=@xmake_file@
+-tmake_file=@tmake_file@
++tmake_file=@tmake_file@ ./t-oe
+ TM_ENDIAN_CONFIG=@TM_ENDIAN_CONFIG@
+ TM_MULTILIB_CONFIG=@TM_MULTILIB_CONFIG@
+ TM_MULTILIB_EXCEPTIONS_CONFIG=@TM_MULTILIB_EXCEPTIONS_CONFIG@
+diff --git a/gcc/configure b/gcc/configure
+index 640e4643805..b5ac1552541 100755
+--- a/gcc/configure
++++ b/gcc/configure
+@@ -12150,8 +12150,8 @@ for f in $tm_file; do
+        tm_include_list="${tm_include_list} $f"
+        ;;
+     defaults.h )
+-       tm_file_list="${tm_file_list} \$(srcdir)/$f"
+-       tm_include_list="${tm_include_list} $f"
++       tm_file_list="${tm_file_list} ./$f"
++       tm_include_list="${tm_include_list} ./$f"
+        ;;
+     * )
+        tm_file_list="${tm_file_list} \$(srcdir)/config/$f"
+diff --git a/gcc/configure.ac b/gcc/configure.ac
+index 9a2dae55ba2..cb1479d1ef4 100644
+--- a/gcc/configure.ac
++++ b/gcc/configure.ac
+@@ -1922,8 +1922,8 @@ for f in $tm_file; do
+        tm_include_list="${tm_include_list} $f"
+        ;;
+     defaults.h )
+-       tm_file_list="${tm_file_list} \$(srcdir)/$f"
+-       tm_include_list="${tm_include_list} $f"
++       tm_file_list="${tm_file_list} ./$f"
++       tm_include_list="${tm_include_list} ./$f"
+        ;;
+     * )
+        tm_file_list="${tm_file_list} \$(srcdir)/config/$f"
+diff --git a/gcc/mkconfig.sh b/gcc/mkconfig.sh
+index 9fc7b5ca734..04abecfe648 100644
+--- a/gcc/mkconfig.sh
++++ b/gcc/mkconfig.sh
+@@ -77,7 +77,7 @@ if [ -n "$HEADERS" ]; then
+     if [ $# -ge 1 ]; then
+ 	echo '#ifdef IN_GCC' >> ${output}T
+ 	for file in "$@"; do
+-	    if test x"$file" = x"defaults.h"; then
++	    if test x"$file" = x"./defaults.h" -o x"$file" = x"defaults.h"; then
+ 		postpone_defaults_h="yes"
+ 	    else
+ 		echo "# include \"$file\"" >> ${output}T
+@@ -109,7 +109,7 @@ esac
+ 
+ # If we postponed including defaults.h, add the #include now.
+ if test x"$postpone_defaults_h" = x"yes"; then
+-    echo "# include \"defaults.h\"" >> ${output}T
++    echo "# include \"./defaults.h\"" >> ${output}T
+ fi
+ 
+ # Add multiple inclusion protection guard, part two.
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0017-fortran-cross-compile-hack.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0017-fortran-cross-compile-hack.patch
new file mode 100644
index 0000000..e2830c5
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0017-fortran-cross-compile-hack.patch
@@ -0,0 +1,46 @@
+From 4fc35a2bb7666a7de35568eb5d47f0ce6acebe62 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:20:01 +0400
+Subject: [PATCH 17/47] fortran cross-compile hack.
+
+* Fortran would have searched for arm-angstrom-gnueabi-gfortran but would have
+used gfortran. For gcc_4.2.2.bb we want to use the gfortran compiler from our cross
+directory.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Inappropriate [embedded specific]
+---
+ libgfortran/configure    | 2 +-
+ libgfortran/configure.ac | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/libgfortran/configure b/libgfortran/configure
+index 81238fcb79c..7ded7abd456 100755
+--- a/libgfortran/configure
++++ b/libgfortran/configure
+@@ -12792,7 +12792,7 @@ esac
+ 
+ # We need gfortran to compile parts of the library
+ #AC_PROG_FC(gfortran)
+-FC="$GFORTRAN"
++#FC="$GFORTRAN"
+ ac_ext=${ac_fc_srcext-f}
+ ac_compile='$FC -c $FCFLAGS $ac_fcflags_srcext conftest.$ac_ext >&5'
+ ac_link='$FC -o conftest$ac_exeext $FCFLAGS $LDFLAGS $ac_fcflags_srcext conftest.$ac_ext $LIBS >&5'
+diff --git a/libgfortran/configure.ac b/libgfortran/configure.ac
+index 37b12d2998f..63a4166ef62 100644
+--- a/libgfortran/configure.ac
++++ b/libgfortran/configure.ac
+@@ -243,7 +243,7 @@ AC_SUBST(enable_static)
+ 
+ # We need gfortran to compile parts of the library
+ #AC_PROG_FC(gfortran)
+-FC="$GFORTRAN"
++#FC="$GFORTRAN"
+ AC_PROG_FC(gfortran)
+ 
+ # extra LD Flags which are required for targets
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0018-cpp-honor-sysroot.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0018-cpp-honor-sysroot.patch
new file mode 100644
index 0000000..5559074
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0018-cpp-honor-sysroot.patch
@@ -0,0 +1,54 @@
+From 1c8a332469ca4bfefb10df70720e0dc83ff9a756 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:22:00 +0400
+Subject: [PATCH 18/47] cpp: honor sysroot.
+
+Currently, if the gcc toolchain is relocated and installed from sstate, and you then try to compile
+preprocessed source (.i or .ii files), the compiler will try to access the builtin sysroot location
+rather than the --sysroot option specified on the commandline. If access to that directory is
+permission denied (unreadable), gcc will error.
+
+This happens when ccache is in use due to the fact it uses preprocessed source files.
+
+The fix below adds %I to the cpp-output spec macro so the default substitutions for -iprefix,
+-isystem, -isysroot happen and the correct sysroot is used.
+
+[YOCTO #2074]
+
+RP 2012/04/13
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
+---
+ gcc/cp/lang-specs.h | 2 +-
+ gcc/gcc.c           | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/gcc/cp/lang-specs.h b/gcc/cp/lang-specs.h
+index 6b383e1d86d..c7c7d6a56ec 100644
+--- a/gcc/cp/lang-specs.h
++++ b/gcc/cp/lang-specs.h
+@@ -64,5 +64,5 @@ along with GCC; see the file COPYING3.  If not see
+   {".ii", "@c++-cpp-output", 0, 0, 0},
+   {"@c++-cpp-output",
+    "%{!M:%{!MM:%{!E:\
+-    cc1plus -fpreprocessed %i %(cc1_options) %2\
++    cc1plus -fpreprocessed %i %I %(cc1_options) %2\
+     %{!fsyntax-only:%(invoke_as)}}}}", 0, 0, 0},
+diff --git a/gcc/gcc.c b/gcc/gcc.c
+index 05896e19926..c73d4023987 100644
+--- a/gcc/gcc.c
++++ b/gcc/gcc.c
+@@ -1351,7 +1351,7 @@ static const struct compiler default_compilers[] =
+ 					   %W{o*:--output-pch=%*}}%V}}}}}}}", 0, 0, 0},
+   {".i", "@cpp-output", 0, 0, 0},
+   {"@cpp-output",
+-   "%{!M:%{!MM:%{!E:cc1 -fpreprocessed %i %(cc1_options) %{!fsyntax-only:%(invoke_as)}}}}", 0, 0, 0},
++   "%{!M:%{!MM:%{!E:cc1 -fpreprocessed %i %I %(cc1_options) %{!fsyntax-only:%(invoke_as)}}}}", 0, 0, 0},
+   {".s", "@assembler", 0, 0, 0},
+   {"@assembler",
+    "%{!M:%{!MM:%{!E:%{!S:as %(asm_debug) %(asm_options) %i %A }}}}", 0, 0, 0},
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0019-MIPS64-Default-to-N64-ABI.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0019-MIPS64-Default-to-N64-ABI.patch
new file mode 100644
index 0000000..742a401
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0019-MIPS64-Default-to-N64-ABI.patch
@@ -0,0 +1,57 @@
+From 0a3b3cc45ea7ba83b46df7464b41c377e3966d88 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:23:08 +0400
+Subject: [PATCH 19/47] MIPS64: Default to N64 ABI
+
+MIPS64 defaults to the n32 ABI; this patch makes it
+default to the N64 ABI instead.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Inappropriate [OE config specific]
+---
+ gcc/config.gcc | 10 +++++-----
+ 1 file changed, 5 insertions(+), 5 deletions(-)
+
+diff --git a/gcc/config.gcc b/gcc/config.gcc
+index b8bb4d65825..5e45f4b5199 100644
+--- a/gcc/config.gcc
++++ b/gcc/config.gcc
+@@ -2084,29 +2084,29 @@ mips*-*-linux*)				# Linux MIPS, either endian.
+ 			default_mips_arch=mips32
+ 			;;
+ 		mips64el-st-linux-gnu)
+-			default_mips_abi=n32
++			default_mips_abi=64
+ 			tm_file="${tm_file} mips/st.h"
+ 			tmake_file="${tmake_file} mips/t-st"
+ 			enable_mips_multilibs="yes"
+ 			;;
+ 		mips64octeon*-*-linux*)
+-			default_mips_abi=n32
++			default_mips_abi=64
+ 			tm_defines="${tm_defines} MIPS_CPU_STRING_DEFAULT=\\\"octeon\\\""
+ 			target_cpu_default=MASK_SOFT_FLOAT_ABI
+ 			enable_mips_multilibs="yes"
+ 			;;
+ 		mipsisa64r6*-*-linux*)
+-			default_mips_abi=n32
++			default_mips_abi=64
+ 			default_mips_arch=mips64r6
+ 			enable_mips_multilibs="yes"
+ 			;;
+ 		mipsisa64r2*-*-linux*)
+-			default_mips_abi=n32
++			default_mips_abi=64
+ 			default_mips_arch=mips64r2
+ 			enable_mips_multilibs="yes"
+ 			;;
+ 		mips64*-*-linux* | mipsisa64*-*-linux*)
+-			default_mips_abi=n32
++			default_mips_abi=64
+ 			enable_mips_multilibs="yes"
+ 			;;
+ 	esac
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0020-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0020-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch
new file mode 100644
index 0000000..de7b4df
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0020-Define-GLIBC_DYNAMIC_LINKER-and-UCLIBC_DYNAMIC_LINKE.patch
@@ -0,0 +1,234 @@
+From d6c983b685ee03e9cf21189108d31ed9f760ff3f Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:24:50 +0400
+Subject: [PATCH 20/47] Define GLIBC_DYNAMIC_LINKER and UCLIBC_DYNAMIC_LINKER
+ relative to SYSTEMLIBS_DIR
+
+This patch defines GLIBC_DYNAMIC_LINKER and UCLIBC_DYNAMIC_LINKER
+relative to SYSTEMLIBS_DIR, which can be set in generated headers.
+This breaks the assumption of hardcoded multilib in gcc.
+The change is only for the supported architectures in OE, including
+SH, sparc and alpha for possible future support (if any).
+
+Removes the do_headerfix task in metadata
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Inappropriate [OE configuration]
+---
+ gcc/config/alpha/linux-elf.h |  4 ++--
+ gcc/config/arm/linux-eabi.h  |  4 ++--
+ gcc/config/arm/linux-elf.h   |  2 +-
+ gcc/config/i386/linux.h      |  2 +-
+ gcc/config/i386/linux64.h    |  6 +++---
+ gcc/config/linux.h           |  8 ++++----
+ gcc/config/mips/linux.h      | 12 ++++++------
+ gcc/config/rs6000/linux64.h  | 16 ++++++----------
+ gcc/config/sh/linux.h        |  2 +-
+ gcc/config/sparc/linux.h     |  2 +-
+ gcc/config/sparc/linux64.h   |  4 ++--
+ 11 files changed, 29 insertions(+), 33 deletions(-)
+
+diff --git a/gcc/config/alpha/linux-elf.h b/gcc/config/alpha/linux-elf.h
+index 2c39fbe601c..6d88e21abe2 100644
+--- a/gcc/config/alpha/linux-elf.h
++++ b/gcc/config/alpha/linux-elf.h
+@@ -23,8 +23,8 @@ along with GCC; see the file COPYING3.  If not see
+ #define EXTRA_SPECS \
+ { "elf_dynamic_linker", ELF_DYNAMIC_LINKER },
+ 
+-#define GLIBC_DYNAMIC_LINKER	"/lib/ld-linux.so.2"
+-#define UCLIBC_DYNAMIC_LINKER "/lib/ld-uClibc.so.0"
++#define GLIBC_DYNAMIC_LINKER	SYSTEMLIBS_DIR "ld-linux.so.2"
++#define UCLIBC_DYNAMIC_LINKER  SYSTEMLIBS_DIR "ld-uClibc.so.0"
+ #if DEFAULT_LIBC == LIBC_UCLIBC
+ #define CHOOSE_DYNAMIC_LINKER(G, U) "%{mglibc:" G ";:" U "}"
+ #elif DEFAULT_LIBC == LIBC_GLIBC
+diff --git a/gcc/config/arm/linux-eabi.h b/gcc/config/arm/linux-eabi.h
+index a08cfb34377..fbac9a9d994 100644
+--- a/gcc/config/arm/linux-eabi.h
++++ b/gcc/config/arm/linux-eabi.h
+@@ -62,8 +62,8 @@
+    GLIBC_DYNAMIC_LINKER_DEFAULT and TARGET_DEFAULT_FLOAT_ABI.  */
+ 
+ #undef  GLIBC_DYNAMIC_LINKER
+-#define GLIBC_DYNAMIC_LINKER_SOFT_FLOAT "/lib/ld-linux.so.3"
+-#define GLIBC_DYNAMIC_LINKER_HARD_FLOAT "/lib/ld-linux-armhf.so.3"
++#define GLIBC_DYNAMIC_LINKER_SOFT_FLOAT SYSTEMLIBS_DIR "ld-linux.so.3"
++#define GLIBC_DYNAMIC_LINKER_HARD_FLOAT SYSTEMLIBS_DIR "ld-linux-armhf.so.3"
+ #define GLIBC_DYNAMIC_LINKER_DEFAULT GLIBC_DYNAMIC_LINKER_SOFT_FLOAT
+ 
+ #define GLIBC_DYNAMIC_LINKER \
+diff --git a/gcc/config/arm/linux-elf.h b/gcc/config/arm/linux-elf.h
+index 3d62367ae68..e8a16191849 100644
+--- a/gcc/config/arm/linux-elf.h
++++ b/gcc/config/arm/linux-elf.h
+@@ -60,7 +60,7 @@
+ 
+ #define LIBGCC_SPEC "%{mfloat-abi=soft*:-lfloat} -lgcc"
+ 
+-#define GLIBC_DYNAMIC_LINKER "/lib/ld-linux.so.2"
++#define GLIBC_DYNAMIC_LINKER SYSTEMLIBS_DIR "ld-linux.so.2"
+ 
+ #define LINUX_TARGET_LINK_SPEC  "%{h*} \
+    %{static:-Bstatic} \
+diff --git a/gcc/config/i386/linux.h b/gcc/config/i386/linux.h
+index 59132124d6b..336d158629c 100644
+--- a/gcc/config/i386/linux.h
++++ b/gcc/config/i386/linux.h
+@@ -20,7 +20,7 @@ along with GCC; see the file COPYING3.  If not see
+ <http://www.gnu.org/licenses/>.  */
+ 
+ #define GNU_USER_LINK_EMULATION "elf_i386"
+-#define GLIBC_DYNAMIC_LINKER "/lib/ld-linux.so.2"
++#define GLIBC_DYNAMIC_LINKER SYSTEMLIBS_DIR "ld-linux.so.2"
+ 
+ #undef MUSL_DYNAMIC_LINKER
+ #define MUSL_DYNAMIC_LINKER "/lib/ld-musl-i386.so.1"
+diff --git a/gcc/config/i386/linux64.h b/gcc/config/i386/linux64.h
+index e65c404ff91..c34ded98481 100644
+--- a/gcc/config/i386/linux64.h
++++ b/gcc/config/i386/linux64.h
+@@ -27,9 +27,9 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+ #define GNU_USER_LINK_EMULATION64 "elf_x86_64"
+ #define GNU_USER_LINK_EMULATIONX32 "elf32_x86_64"
+ 
+-#define GLIBC_DYNAMIC_LINKER32 "/lib/ld-linux.so.2"
+-#define GLIBC_DYNAMIC_LINKER64 "/lib64/ld-linux-x86-64.so.2"
+-#define GLIBC_DYNAMIC_LINKERX32 "/libx32/ld-linux-x32.so.2"
++#define GLIBC_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-linux.so.2"
++#define GLIBC_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld-linux-x86-64.so.2"
++#define GLIBC_DYNAMIC_LINKERX32 SYSTEMLIBS_DIR "ld-linux-x32.so.2"
+ 
+ #undef MUSL_DYNAMIC_LINKER32
+ #define MUSL_DYNAMIC_LINKER32 "/lib/ld-musl-i386.so.1"
+diff --git a/gcc/config/linux.h b/gcc/config/linux.h
+index b3a9e85e77f..2e683d0c430 100644
+--- a/gcc/config/linux.h
++++ b/gcc/config/linux.h
+@@ -81,10 +81,10 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+    GLIBC_DYNAMIC_LINKER must be defined for each target using them, or
+    GLIBC_DYNAMIC_LINKER32 and GLIBC_DYNAMIC_LINKER64 for targets
+    supporting both 32-bit and 64-bit compilation.  */
+-#define UCLIBC_DYNAMIC_LINKER "/lib/ld-uClibc.so.0"
+-#define UCLIBC_DYNAMIC_LINKER32 "/lib/ld-uClibc.so.0"
+-#define UCLIBC_DYNAMIC_LINKER64 "/lib/ld64-uClibc.so.0"
+-#define UCLIBC_DYNAMIC_LINKERX32 "/lib/ldx32-uClibc.so.0"
++#define UCLIBC_DYNAMIC_LINKER SYSTEMLIBS_DIR "ld-uClibc.so.0"
++#define UCLIBC_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-uClibc.so.0"
++#define UCLIBC_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld64-uClibc.so.0"
++#define UCLIBC_DYNAMIC_LINKERX32 SYSTEMLIBS_DIR "ldx32-uClibc.so.0"
+ #define BIONIC_DYNAMIC_LINKER "/system/bin/linker"
+ #define BIONIC_DYNAMIC_LINKER32 "/system/bin/linker"
+ #define BIONIC_DYNAMIC_LINKER64 "/system/bin/linker64"
+diff --git a/gcc/config/mips/linux.h b/gcc/config/mips/linux.h
+index 44132b8e44d..80505ad9f48 100644
+--- a/gcc/config/mips/linux.h
++++ b/gcc/config/mips/linux.h
+@@ -22,20 +22,20 @@ along with GCC; see the file COPYING3.  If not see
+ #define GNU_USER_LINK_EMULATIONN32 "elf32%{EB:b}%{EL:l}tsmipn32"
+ 
+ #define GLIBC_DYNAMIC_LINKER32 \
+-  "%{mnan=2008:/lib/ld-linux-mipsn8.so.1;:/lib/ld.so.1}"
++  "%{mnan=2008:" SYSTEMLIBS_DIR "ld-linux-mipsn8.so.1;:" SYSTEMLIBS_DIR "ld.so.1}"
+ #define GLIBC_DYNAMIC_LINKER64 \
+-  "%{mnan=2008:/lib64/ld-linux-mipsn8.so.1;:/lib64/ld.so.1}"
++  "%{mnan=2008:" SYSTEMLIBS_DIR "ld-linux-mipsn8.so.1;:" SYSTEMLIBS_DIR "ld.so.1}"
+ #define GLIBC_DYNAMIC_LINKERN32 \
+-  "%{mnan=2008:/lib32/ld-linux-mipsn8.so.1;:/lib32/ld.so.1}"
++  "%{mnan=2008:" SYSTEMLIBS_DIR "ld-linux-mipsn8.so.1;:" SYSTEMLIBS_DIR "ld.so.1}"
+ 
+ #undef UCLIBC_DYNAMIC_LINKER32
+ #define UCLIBC_DYNAMIC_LINKER32 \
+-  "%{mnan=2008:/lib/ld-uClibc-mipsn8.so.0;:/lib/ld-uClibc.so.0}"
++  "%{mnan=2008:" SYSTEMLIBS_DIR "ld-uClibc-mipsn8.so.0;:" SYSTEMLIBS_DIR "ld-uClibc.so.0}"
+ #undef UCLIBC_DYNAMIC_LINKER64
+ #define UCLIBC_DYNAMIC_LINKER64 \
+-  "%{mnan=2008:/lib/ld64-uClibc-mipsn8.so.0;:/lib/ld64-uClibc.so.0}"
++  "%{mnan=2008:" SYSTEMLIBS_DIR "ld64-uClibc-mipsn8.so.0;:" SYSTEMLIBS_DIR "ld64-uClibc.so.0}"
+ #define UCLIBC_DYNAMIC_LINKERN32 \
+-  "%{mnan=2008:/lib32/ld-uClibc-mipsn8.so.0;:/lib32/ld-uClibc.so.0}"
++  "%{mnan=2008:" SYSTEMLIBS_DIR "ld-uClibc-mipsn8.so.0;:" SYSTEMLIBS_DIR "ld-uClibc.so.0}"
+ 
+ #undef MUSL_DYNAMIC_LINKER32
+ #define MUSL_DYNAMIC_LINKER32 \
+diff --git a/gcc/config/rs6000/linux64.h b/gcc/config/rs6000/linux64.h
+index 71e35b709ad..3b00ec0fcf0 100644
+--- a/gcc/config/rs6000/linux64.h
++++ b/gcc/config/rs6000/linux64.h
+@@ -412,16 +412,11 @@ extern int dot_symbols;
+ #undef	LINK_OS_DEFAULT_SPEC
+ #define LINK_OS_DEFAULT_SPEC "%(link_os_linux)"
+ 
+-#define GLIBC_DYNAMIC_LINKER32 "%(dynamic_linker_prefix)/lib/ld.so.1"
+-
++#define GLIBC_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld.so.1"
+ #ifdef LINUX64_DEFAULT_ABI_ELFv2
+-#define GLIBC_DYNAMIC_LINKER64 \
+-"%{mabi=elfv1:%(dynamic_linker_prefix)/lib64/ld64.so.1;" \
+-":%(dynamic_linker_prefix)/lib64/ld64.so.2}"
++#define GLIBC_DYNAMIC_LINKER64 "%{mabi=elfv1:" SYSTEMLIBS_DIR "ld64.so.1;:" SYSTEMLIBS_DIR "ld64.so.2}"
+ #else
+-#define GLIBC_DYNAMIC_LINKER64 \
+-"%{mabi=elfv2:%(dynamic_linker_prefix)/lib64/ld64.so.2;" \
+-":%(dynamic_linker_prefix)/lib64/ld64.so.1}"
++#define GLIBC_DYNAMIC_LINKER64 "%{mabi=elfv2:" SYSTEMLIBS_DIR "ld64.so.2;:" SYSTEMLIBS_DIR "ld64.so.1}"
+ #endif
+ 
+ #define MUSL_DYNAMIC_LINKER32 \
+@@ -429,8 +424,9 @@ extern int dot_symbols;
+ #define MUSL_DYNAMIC_LINKER64 \
+   "/lib/ld-musl-powerpc64" MUSL_DYNAMIC_LINKER_E "%{msoft-float:-sf}.so.1"
+ 
+-#define UCLIBC_DYNAMIC_LINKER32 "/lib/ld-uClibc.so.0"
+-#define UCLIBC_DYNAMIC_LINKER64 "/lib/ld64-uClibc.so.0"
++#define UCLIBC_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-uClibc.so.0"
++#define UCLIBC_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld64-uClibc.so.0"
++
+ #if DEFAULT_LIBC == LIBC_UCLIBC
+ #define CHOOSE_DYNAMIC_LINKER(G, U, M) \
+   "%{mglibc:" G ";:%{mmusl:" M ";:" U "}}"
+diff --git a/gcc/config/sh/linux.h b/gcc/config/sh/linux.h
+index c30083423f2..196b82725f8 100644
+--- a/gcc/config/sh/linux.h
++++ b/gcc/config/sh/linux.h
+@@ -64,7 +64,7 @@ along with GCC; see the file COPYING3.  If not see
+   "/lib/ld-musl-sh" MUSL_DYNAMIC_LINKER_E MUSL_DYNAMIC_LINKER_FP \
+   "%{mfdpic:-fdpic}.so.1"
+ 
+-#define GLIBC_DYNAMIC_LINKER "/lib/ld-linux.so.2"
++#define GLIBC_DYNAMIC_LINKER SYSTEMLIBS_DIR "ld-linux.so.2"
+ 
+ #undef SUBTARGET_LINK_EMUL_SUFFIX
+ #define SUBTARGET_LINK_EMUL_SUFFIX "%{mfdpic:_fd;:_linux}"
+diff --git a/gcc/config/sparc/linux.h b/gcc/config/sparc/linux.h
+index ce084656fca..bed6300cb2a 100644
+--- a/gcc/config/sparc/linux.h
++++ b/gcc/config/sparc/linux.h
+@@ -83,7 +83,7 @@ extern const char *host_detect_local_cpu (int argc, const char **argv);
+    When the -shared link option is used a final link is not being
+    done.  */
+ 
+-#define GLIBC_DYNAMIC_LINKER "/lib/ld-linux.so.2"
++#define GLIBC_DYNAMIC_LINKER SYSTEMLIBS_DIR "ld-linux.so.2"
+ 
+ #undef  LINK_SPEC
+ #define LINK_SPEC "-m elf32_sparc %{shared:-shared} \
+diff --git a/gcc/config/sparc/linux64.h b/gcc/config/sparc/linux64.h
+index 573ce8a9a4c..6749f6b5d9c 100644
+--- a/gcc/config/sparc/linux64.h
++++ b/gcc/config/sparc/linux64.h
+@@ -84,8 +84,8 @@ along with GCC; see the file COPYING3.  If not see
+    When the -shared link option is used a final link is not being
+    done.  */
+ 
+-#define GLIBC_DYNAMIC_LINKER32 "/lib/ld-linux.so.2"
+-#define GLIBC_DYNAMIC_LINKER64 "/lib64/ld-linux.so.2"
++#define GLIBC_DYNAMIC_LINKER32 SYSTEMLIBS_DIR "ld-linux.so.2"
++#define GLIBC_DYNAMIC_LINKER64 SYSTEMLIBS_DIR "ld-linux.so.2"
+ 
+ #ifdef SPARC_BI_ARCH
+ 
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0021-gcc-Fix-argument-list-too-long-error.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0021-gcc-Fix-argument-list-too-long-error.patch
new file mode 100644
index 0000000..4e56214
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0021-gcc-Fix-argument-list-too-long-error.patch
@@ -0,0 +1,40 @@
+From 80c24247fed52c1269791088090bc0fa85280983 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:26:37 +0400
+Subject: [PATCH 21/47] gcc: Fix argument list too long error.
+
+There would be an "Argument list too long" error when the
+build directory is longer than 200 characters; this is caused by:
+
+headers=`echo $(PLUGIN_HEADERS) | tr ' ' '\012' | sort -u`
+
+The PLUGIN_HEADERS list is too long before sorting, so the "echo" can't handle
+it; using the $(sort list) function of GNU make, which can handle the long list,
+fixes the problem, since the header list is short enough after sorting.
+The "tr ' ' '\012'" was used to translate spaces to "\n"; the
+$(sort list) doesn't need this.
+
+Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
+---
+ gcc/Makefile.in | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/gcc/Makefile.in b/gcc/Makefile.in
+index 7b590c9bbd3..23cca7f0d5a 100644
+--- a/gcc/Makefile.in
++++ b/gcc/Makefile.in
+@@ -3459,7 +3459,7 @@ install-plugin: installdirs lang.install-plugin s-header-vars install-gengtype
+ # We keep the directory structure for files in config or c-family and .def
+ # files. All other files are flattened to a single directory.
+ 	$(mkinstalldirs) $(DESTDIR)$(plugin_includedir)
+-	headers=`echo $(PLUGIN_HEADERS) $$(cd $(srcdir); echo *.h *.def) | tr ' ' '\012' | sort -u`; \
++	headers="$(sort $(PLUGIN_HEADERS) $$(cd $(srcdir); echo *.h *.def))"; \
+ 	srcdirstrip=`echo "$(srcdir)" | sed 's/[].[^$$\\*|]/\\\\&/g'`; \
+ 	for file in $$headers; do \
+ 	  if [ -f $$file ] ; then \
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0022-Disable-sdt.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0022-Disable-sdt.patch
new file mode 100644
index 0000000..871f195
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0022-Disable-sdt.patch
@@ -0,0 +1,113 @@
+From 3021fec485f44478a3d5fffb4adac13d831fcdc1 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:28:10 +0400
+Subject: [PATCH 22/47] Disable sdt.
+
+We don't list dtrace in DEPENDS so we shouldn't be depending on this header.
+It may or may not exist from previous builds though. To be deterministic, disable
+sdt.h usage always. This avoids build failures if the header is removed after configure
+but before libgcc is compiled for example.
+
+RP 2012/8/7
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Disable sdt for libstdc++-v3.
+
+Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
+
+Upstream-Status: Inappropriate [hack]
+---
+ gcc/configure             | 12 ++++++------
+ gcc/configure.ac          | 18 +++++++++---------
+ libstdc++-v3/configure    |  6 +++---
+ libstdc++-v3/configure.ac |  2 +-
+ 4 files changed, 19 insertions(+), 19 deletions(-)
+
+diff --git a/gcc/configure b/gcc/configure
+index b5ac1552541..08b2f63c7fa 100755
+--- a/gcc/configure
++++ b/gcc/configure
+@@ -28967,12 +28967,12 @@ fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: checking sys/sdt.h in the target C library" >&5
+ $as_echo_n "checking sys/sdt.h in the target C library... " >&6; }
+ have_sys_sdt_h=no
+-if test -f $target_header_dir/sys/sdt.h; then
+-  have_sys_sdt_h=yes
+-
+-$as_echo "#define HAVE_SYS_SDT_H 1" >>confdefs.h
+-
+-fi
++#if test -f $target_header_dir/sys/sdt.h; then
++#  have_sys_sdt_h=yes
++#
++#$as_echo "#define HAVE_SYS_SDT_H 1" >>confdefs.h
++#
++#fi
+ { $as_echo "$as_me:${as_lineno-$LINENO}: result: $have_sys_sdt_h" >&5
+ $as_echo "$have_sys_sdt_h" >&6; }
+ 
+diff --git a/gcc/configure.ac b/gcc/configure.ac
+index cb1479d1ef4..0581fe963dc 100644
+--- a/gcc/configure.ac
++++ b/gcc/configure.ac
+@@ -5754,15 +5754,15 @@ fi
+ AC_SUBST([enable_default_ssp])
+ 
+ # Test for <sys/sdt.h> on the target.
+-GCC_TARGET_TEMPLATE([HAVE_SYS_SDT_H])
+-AC_MSG_CHECKING(sys/sdt.h in the target C library)
+-have_sys_sdt_h=no
+-if test -f $target_header_dir/sys/sdt.h; then
+-  have_sys_sdt_h=yes
+-  AC_DEFINE(HAVE_SYS_SDT_H, 1,
+-            [Define if your target C library provides sys/sdt.h])
+-fi
+-AC_MSG_RESULT($have_sys_sdt_h)
++#GCC_TARGET_TEMPLATE([HAVE_SYS_SDT_H])
++#AC_MSG_CHECKING(sys/sdt.h in the target C library)
++#have_sys_sdt_h=no
++#if test -f $target_header_dir/sys/sdt.h; then
++#  have_sys_sdt_h=yes
++#  AC_DEFINE(HAVE_SYS_SDT_H, 1,
++#            [Define if your target C library provides sys/sdt.h])
++#fi
++#AC_MSG_RESULT($have_sys_sdt_h)
+ 
+ # Check if TFmode long double should be used by default or not.
+ # Some glibc targets used DFmode long double, but with glibc 2.4
+diff --git a/libstdc++-v3/configure b/libstdc++-v3/configure
+index fb7e126c0b0..a18057feb88 100755
+--- a/libstdc++-v3/configure
++++ b/libstdc++-v3/configure
+@@ -21856,11 +21856,11 @@ ac_compile='$CC -c $CFLAGS $CPPFLAGS conftest.$ac_ext >&5'
+ ac_link='$CC -o conftest$ac_exeext $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS >&5'
+ ac_compiler_gnu=$ac_cv_c_compiler_gnu
+ 
+-  if test $glibcxx_cv_sys_sdt_h = yes; then
++#  if test $glibcxx_cv_sys_sdt_h = yes; then
+ 
+-$as_echo "#define HAVE_SYS_SDT_H 1" >>confdefs.h
++#$as_echo "#define HAVE_SYS_SDT_H 1" >>confdefs.h
+ 
+-  fi
++#  fi
+   { $as_echo "$as_me:${as_lineno-$LINENO}: result: $glibcxx_cv_sys_sdt_h" >&5
+ $as_echo "$glibcxx_cv_sys_sdt_h" >&6; }
+ 
+diff --git a/libstdc++-v3/configure.ac b/libstdc++-v3/configure.ac
+index 8e973503be0..a46d25e740d 100644
+--- a/libstdc++-v3/configure.ac
++++ b/libstdc++-v3/configure.ac
+@@ -230,7 +230,7 @@ GLIBCXX_CHECK_SC_NPROCESSORS_ONLN
+ GLIBCXX_CHECK_SC_NPROC_ONLN
+ GLIBCXX_CHECK_PTHREADS_NUM_PROCESSORS_NP
+ GLIBCXX_CHECK_SYSCTL_HW_NCPU
+-GLIBCXX_CHECK_SDT_H
++#GLIBCXX_CHECK_SDT_H
+ 
+ # Check for available headers.
+ AC_CHECK_HEADERS([endian.h execinfo.h float.h fp.h ieeefp.h inttypes.h \
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0023-libtool.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0023-libtool.patch
new file mode 100644
index 0000000..27dfb1f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0023-libtool.patch
@@ -0,0 +1,42 @@
+From e79a4f8169e836c8deabca5a45884cfe11d07847 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:29:11 +0400
+Subject: [PATCH 23/47] libtool
+
+libstdc++ from gcc-runtime gets created with -rpath=/usr/lib/../lib for qemux86-64
+when running on an x86_64 build host.
+
+This patch stops this spreading to libdir in the libstdc++.la file within libtool.
+Arguably, it shouldn't be passing this into libtool in the first place but
+for now this resolves the nastiest problems this causes.
+
+func_normal_abspath would resolve an empty path to `pwd` so we need
+to filter the zero case.
+
+RP 2012/8/24
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
+---
+ ltmain.sh | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/ltmain.sh b/ltmain.sh
+index 9503ec85d70..0121fba707f 100644
+--- a/ltmain.sh
++++ b/ltmain.sh
+@@ -6359,6 +6359,10 @@ func_mode_link ()
+ 	func_warning "ignoring multiple \`-rpath's for a libtool library"
+ 
+       install_libdir="$1"
++      if test -n "$install_libdir"; then
++	func_normal_abspath "$install_libdir"
++	install_libdir=$func_normal_abspath_result
++      fi
+ 
+       oldlibs=
+       if test -z "$rpath"; then
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0024-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0024-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch
new file mode 100644
index 0000000..aa1e1bb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0024-gcc-armv4-pass-fix-v4bx-to-linker-to-support-EABI.patch
@@ -0,0 +1,43 @@
+From 74d8dc48cb185e304c60067b4d8b50447ec328ec Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:30:32 +0400
+Subject: [PATCH 24/47] gcc: armv4: pass fix-v4bx to linker to support EABI.
+
+The LINK_SPEC for linux gets overwritten by linux-eabi.h, which
+means the value of TARGET_FIX_V4BX_SPEC gets lost and, as a result,
+the option is not passed to the linker when choosing march=armv4.
+This patch redefines it in linux-eabi.h and reinserts it
+for EABI-defaulting toolchains.
+
+We might want to send it upstream.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
+---
+ gcc/config/arm/linux-eabi.h | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/gcc/config/arm/linux-eabi.h b/gcc/config/arm/linux-eabi.h
+index fbac9a9d994..5a51a8a7095 100644
+--- a/gcc/config/arm/linux-eabi.h
++++ b/gcc/config/arm/linux-eabi.h
+@@ -88,10 +88,14 @@
+ #define MUSL_DYNAMIC_LINKER \
+   "/lib/ld-musl-arm" MUSL_DYNAMIC_LINKER_E "%{mfloat-abi=hard:hf}.so.1"
+ 
++/* For armv4 we pass --fix-v4bx to linker to support EABI */
++#undef TARGET_FIX_V4BX_SPEC
++#define TARGET_FIX_V4BX_SPEC "%{mcpu=arm8|mcpu=arm810|mcpu=strongarm*|march=armv4: --fix-v4bx}"
++
+ /* At this point, bpabi.h will have clobbered LINK_SPEC.  We want to
+    use the GNU/Linux version, not the generic BPABI version.  */
+ #undef  LINK_SPEC
+-#define LINK_SPEC EABI_LINK_SPEC					\
++#define LINK_SPEC TARGET_FIX_V4BX_SPEC EABI_LINK_SPEC			\
+   LINUX_OR_ANDROID_LD (LINUX_TARGET_LINK_SPEC,				\
+ 		       LINUX_TARGET_LINK_SPEC " " ANDROID_LINK_SPEC)
+ 
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0025-Use-the-multilib-config-files-from-B-instead-of-usin.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0025-Use-the-multilib-config-files-from-B-instead-of-usin.patch
new file mode 100644
index 0000000..b234132
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0025-Use-the-multilib-config-files-from-B-instead-of-usin.patch
@@ -0,0 +1,102 @@
+From ac50dc3010a66220ad483c09efe270bb3f4c9424 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Mar 2013 09:33:04 +0400
+Subject: [PATCH 25/47] Use the multilib config files from ${B} instead of
+ using the ones from ${S}
+
+Use the multilib config files from ${B} instead of using the ones from ${S}
+so that the source can be shared between gcc-cross-initial,
+gcc-cross-intermediate, gcc-cross, gcc-runtime, and also the sdk build.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Signed-off-by: Constantin Musca <constantinx.musca@intel.com>
+
+Upstream-Status: Inappropriate [configuration]
+---
+ gcc/configure    | 22 ++++++++++++++++++----
+ gcc/configure.ac | 22 ++++++++++++++++++----
+ 2 files changed, 36 insertions(+), 8 deletions(-)
+
+diff --git a/gcc/configure b/gcc/configure
+index 08b2f63c7fa..6ba391ed068 100755
+--- a/gcc/configure
++++ b/gcc/configure
+@@ -12130,10 +12130,20 @@ done
+ tmake_file_=
+ for f in ${tmake_file}
+ do
+-	if test -f ${srcdir}/config/$f
+-	then
+-		tmake_file_="${tmake_file_} \$(srcdir)/config/$f"
+-	fi
++  case $f in
++    */t-linux64 )
++       if test -f ./config/$f
++       then
++         tmake_file_="${tmake_file_} ./config/$f"
++       fi
++       ;;
++    * )
++       if test -f ${srcdir}/config/$f
++       then
++         tmake_file_="${tmake_file_} \$(srcdir)/config/$f"
++       fi
++       ;;
++  esac
+ done
+ tmake_file="${tmake_file_}"
+ 
+@@ -12144,6 +12154,10 @@ tm_file_list="options.h"
+ tm_include_list="options.h insn-constants.h"
+ for f in $tm_file; do
+   case $f in
++    */linux64.h )
++       tm_file_list="${tm_file_list} ./config/$f"
++       tm_include_list="${tm_include_list} ./config/$f"
++       ;;
+     ./* )
+        f=`echo $f | sed 's/^..//'`
+        tm_file_list="${tm_file_list} $f"
+diff --git a/gcc/configure.ac b/gcc/configure.ac
+index 0581fe963dc..8551a412df3 100644
+--- a/gcc/configure.ac
++++ b/gcc/configure.ac
+@@ -1902,10 +1902,20 @@ done
+ tmake_file_=
+ for f in ${tmake_file}
+ do
+-	if test -f ${srcdir}/config/$f
+-	then
+-		tmake_file_="${tmake_file_} \$(srcdir)/config/$f"
+-	fi
++  case $f in
++    */t-linux64 )
++       if test -f ./config/$f
++       then
++         tmake_file_="${tmake_file_} ./config/$f"
++       fi
++       ;;
++    * )
++       if test -f ${srcdir}/config/$f
++       then
++         tmake_file_="${tmake_file_} \$(srcdir)/config/$f"
++       fi
++       ;;
++  esac
+ done
+ tmake_file="${tmake_file_}"
+ 
+@@ -1916,6 +1926,10 @@ tm_file_list="options.h"
+ tm_include_list="options.h insn-constants.h"
+ for f in $tm_file; do
+   case $f in
++    */linux64.h )
++       tm_file_list="${tm_file_list} ./config/$f"
++       tm_include_list="${tm_include_list} ./config/$f"
++       ;;
+     ./* )
+        f=`echo $f | sed 's/^..//'`
+        tm_file_list="${tm_file_list} $f"
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0026-Avoid-using-libdir-from-.la-which-usually-points-to-.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0026-Avoid-using-libdir-from-.la-which-usually-points-to-.patch
new file mode 100644
index 0000000..fe24713
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0026-Avoid-using-libdir-from-.la-which-usually-points-to-.patch
@@ -0,0 +1,31 @@
+From 9fab47d8662986ad887d9eddc39fcbe25e576383 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 20 Feb 2015 09:39:38 +0000
+Subject: [PATCH 26/47] Avoid using libdir from .la which usually points to a
+ host path
+
+Upstream-Status: Inappropriate [embedded specific]
+
+Signed-off-by: Jonathan Liu <net147@gmail.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ltmain.sh | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/ltmain.sh b/ltmain.sh
+index 0121fba707f..52bdbdb5f9c 100644
+--- a/ltmain.sh
++++ b/ltmain.sh
+@@ -5628,6 +5628,9 @@ func_mode_link ()
+ 	    absdir="$abs_ladir"
+ 	    libdir="$abs_ladir"
+ 	  else
++	    # Instead of using libdir from .la which usually points to a host path,
++	    # use the path the .la is contained in.
++	    libdir="$abs_ladir"
+ 	    dir="$libdir"
+ 	    absdir="$libdir"
+ 	  fi
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0027-export-CPP.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0027-export-CPP.patch
new file mode 100644
index 0000000..4f9e1f0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0027-export-CPP.patch
@@ -0,0 +1,53 @@
+From fa6a46fdf73de7eacd289c084bbde6643b23f73b Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 20 Feb 2015 09:40:59 +0000
+Subject: [PATCH 27/47] export CPP
+
+The OE environment sets and exports CPP as being the target gcc. When
+building gcc-cross-canadian for a mingw-targeted sdk, the following can be found
+in build.x86_64-pokysdk-mingw32.i586-poky-linux/build-x86_64-linux/libiberty/config.log:
+
+configure:3641: checking for _FILE_OFFSET_BITS value needed for large files
+configure:3666: gcc  -c -isystem/media/build1/poky/build/tmp/sysroots/x86_64-linux/usr/include -O2 -pipe  conftest.c >&5
+configure:3666: $? = 0
+configure:3698: result: no
+configure:3786: checking how to run the C preprocessor
+configure:3856: result: x86_64-pokysdk-mingw32-gcc -E --sysroot=/media/build1/poky/build/tmp/sysroots/x86_64-nativesdk-mingw32-pokysdk-mingw32
+configure:3876: x86_64-pokysdk-mingw32-gcc -E --sysroot=/media/build1/poky/build/tmp/sysroots/x86_64-nativesdk-mingw32-pokysdk-mingw32 conftest.c
+configure:3876: $? = 0
+
+Note this is a *build* target (in build-x86_64-linux) so it should be
+using the host "gcc", not x86_64-pokysdk-mingw32-gcc. Since the mingw32
+headers are very different, using the wrong cpp is a real problem. It is leaking
+into configure through the CPP variable. Ultimately this leads to build
+failures related to not being able to include a process.h file for pem-unix.c.
+
+The fix is to ensure we export a sane CPP value into the build
+environment when using build targets. We could define a CPP_FOR_BUILD value, which may be
+the version that needs to be upstreamed, but for now this fix is good enough to
+avoid the problem.
+
+RP 22/08/2013
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ Makefile.in | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/Makefile.in b/Makefile.in
+index b824e0a0ca1..e34e9555388 100644
+--- a/Makefile.in
++++ b/Makefile.in
+@@ -149,6 +149,7 @@ BUILD_EXPORTS = \
+ 	AR="$(AR_FOR_BUILD)"; export AR; \
+ 	AS="$(AS_FOR_BUILD)"; export AS; \
+ 	CC="$(CC_FOR_BUILD)"; export CC; \
++	CPP="$(CC_FOR_BUILD) -E"; export CPP; \
+ 	CFLAGS="$(CFLAGS_FOR_BUILD)"; export CFLAGS; \
+ 	CONFIG_SHELL="$(SHELL)"; export CONFIG_SHELL; \
+ 	CXX="$(CXX_FOR_BUILD)"; export CXX; \
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0028-Enable-SPE-AltiVec-generation-on-powepc-linux-target.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0028-Enable-SPE-AltiVec-generation-on-powepc-linux-target.patch
new file mode 100644
index 0000000..b903349
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0028-Enable-SPE-AltiVec-generation-on-powepc-linux-target.patch
@@ -0,0 +1,56 @@
+From 2c05b4072f982df8002d61327837e18a724e934f Mon Sep 17 00:00:00 2001
+From: Alexandru-Cezar Sardan <alexandru.sardan@freescale.com>
+Date: Wed, 5 Feb 2014 16:52:31 +0200
+Subject: [PATCH 28/47] Enable SPE & AltiVec generation on powepc*linux target
+
+When GCC is configured with --target=powerpc-linux, the resulting GCC will
+not be able to generate code for SPE targets (e500v1/v2).
+GCC configured with --target=powerpc-linuxspe will not be able to
+generate AltiVec instructions (for e6500).
+This patch modifies the configuration file so that SPE or AltiVec code
+can be generated when gcc is configured with --target=powerpc-linux.
+The ABI and specific instructions can be selected through the
+"-mabi=spe or -mabi=altivec" and the "-mspe or -maltivec" parameters.
+
+Upstream-Status: Inappropriate [configuration]
+
+Signed-off-by: Alexandru-Cezar Sardan <alexandru.sardan@freescale.com>
+---
+ gcc/config.gcc               | 9 ++++++++-
+ gcc/config/rs6000/linuxspe.h | 3 ---
+ 2 files changed, 8 insertions(+), 4 deletions(-)
+
+diff --git a/gcc/config.gcc b/gcc/config.gcc
+index 5e45f4b5199..9b381dfd9af 100644
+--- a/gcc/config.gcc
++++ b/gcc/config.gcc
+@@ -2415,7 +2415,14 @@ powerpc-*-rtems*)
+ 	tmake_file="${tmake_file} rs6000/t-fprules rs6000/t-rtems rs6000/t-ppccomm"
+ 	;;
+ powerpc*-*-linux*)
+-	tm_file="${tm_file} dbxelf.h elfos.h gnu-user.h freebsd-spec.h rs6000/sysv4.h"
++	case ${target} in
++	    powerpc*-*-linux*spe* | powerpc*-*-linux*altivec*)
++		tm_file="${tm_file} dbxelf.h elfos.h gnu-user.h freebsd-spec.h rs6000/sysv4.h"
++		;;
++	    *)
++		tm_file="${tm_file} dbxelf.h elfos.h gnu-user.h freebsd-spec.h rs6000/sysv4.h rs6000/linuxaltivec.h rs6000/linuxspe.h rs6000/e500.h"
++		;;
++	esac
+ 	extra_options="${extra_options} rs6000/sysv4.opt"
+ 	tmake_file="${tmake_file} rs6000/t-fprules rs6000/t-ppccomm"
+ 	extra_objs="$extra_objs rs6000-linux.o"
+diff --git a/gcc/config/rs6000/linuxspe.h b/gcc/config/rs6000/linuxspe.h
+index 92efabfe664..6d486451a7e 100644
+--- a/gcc/config/rs6000/linuxspe.h
++++ b/gcc/config/rs6000/linuxspe.h
+@@ -27,6 +27,3 @@
+ #undef	TARGET_DEFAULT
+ #define TARGET_DEFAULT MASK_STRICT_ALIGN
+ #endif
+-
+-#undef  ASM_DEFAULT_SPEC
+-#define	ASM_DEFAULT_SPEC "-mppc -mspe -me500"
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0029-Disable-the-MULTILIB_OSDIRNAMES-and-other-multilib-o.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0029-Disable-the-MULTILIB_OSDIRNAMES-and-other-multilib-o.patch
new file mode 100644
index 0000000..7306a28
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0029-Disable-the-MULTILIB_OSDIRNAMES-and-other-multilib-o.patch
@@ -0,0 +1,42 @@
+From ec0f843b86c0f76bc5ebb20fafbc4aae1be4db61 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 20 Feb 2015 10:21:55 +0000
+Subject: [PATCH 29/47] Disable the MULTILIB_OSDIRNAMES and other multilib
+ options.
+
+Hard coding the MULTILIB_OSDIRNAMES with ../lib64 is causing problems on
+systems where the libdir is NOT set to /lib64.  This is allowed by the
+ABI, as long as the dynamic loader is present
+in /lib.
+
+We simply want to use the default rules in gcc to find and configure the
+normal libdir.
+
+Upstream-Status: Inappropriate [OE-Specific]
+
+Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ gcc/config/aarch64/t-aarch64-linux | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/gcc/config/aarch64/t-aarch64-linux b/gcc/config/aarch64/t-aarch64-linux
+index ab064ab6f22..f4b1f98b216 100644
+--- a/gcc/config/aarch64/t-aarch64-linux
++++ b/gcc/config/aarch64/t-aarch64-linux
+@@ -21,8 +21,8 @@
+ LIB1ASMSRC   = aarch64/lib1funcs.asm
+ LIB1ASMFUNCS = _aarch64_sync_cache_range
+ 
+-AARCH_BE = $(if $(findstring TARGET_BIG_ENDIAN_DEFAULT=1, $(tm_defines)),_be)
+-MULTILIB_OSDIRNAMES = mabi.lp64=../lib64$(call if_multiarch,:aarch64$(AARCH_BE)-linux-gnu)
+-MULTIARCH_DIRNAME = $(call if_multiarch,aarch64$(AARCH_BE)-linux-gnu)
++#AARCH_BE = $(if $(findstring TARGET_BIG_ENDIAN_DEFAULT=1, $(tm_defines)),_be)
++#MULTILIB_OSDIRNAMES = mabi.lp64=../lib64$(call if_multiarch,:aarch64$(AARCH_BE)-linux-gnu)
++#MULTIARCH_DIRNAME = $(call if_multiarch,aarch64$(AARCH_BE)-linux-gnu)
+ 
+-MULTILIB_OSDIRNAMES += mabi.ilp32=../libilp32
++#MULTILIB_OSDIRNAMES += mabi.ilp32=../libilp32
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0030-Ensure-target-gcc-headers-can-be-included.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0030-Ensure-target-gcc-headers-can-be-included.patch
new file mode 100644
index 0000000..568ba95
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0030-Ensure-target-gcc-headers-can-be-included.patch
@@ -0,0 +1,98 @@
+From bf5836989e0ffc1c1df1369df06877e96c08df41 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 20 Feb 2015 10:25:11 +0000
+Subject: [PATCH 30/47] Ensure target gcc headers can be included
+
+There are a few headers installed as part of the OpenEmbedded
+gcc-runtime target (omp.h, ssp/*.h). Being installed from a recipe
+built for the target architecture, these are within the target
+sysroot and not cross/nativesdk; thus they weren't able to be
+found by gcc with the existing search paths. Add support for
+picking up these headers under the sysroot supplied on the gcc
+command line in order to resolve this.
+
+Upstream-Status: Pending
+
+Signed-off-by: Paul Eggleton <paul.eggleton@linux.intel.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ gcc/Makefile.in  | 2 ++
+ gcc/cppdefault.c | 4 ++++
+ gcc/defaults.h   | 9 +++++++++
+ gcc/gcc.c        | 7 -------
+ 4 files changed, 15 insertions(+), 7 deletions(-)
+
+diff --git a/gcc/Makefile.in b/gcc/Makefile.in
+index 23cca7f0d5a..95d21effad3 100644
+--- a/gcc/Makefile.in
++++ b/gcc/Makefile.in
+@@ -608,6 +608,7 @@ libexecdir = @libexecdir@
+ 
+ # Directory in which the compiler finds libraries etc.
+ libsubdir = $(libdir)/gcc/$(real_target_noncanonical)/$(version)$(accel_dir_suffix)
++libsubdir_target = gcc/$(target_noncanonical)/$(version)
+ # Directory in which the compiler finds executables
+ libexecsubdir = $(libexecdir)/gcc/$(real_target_noncanonical)/$(version)$(accel_dir_suffix)
+ # Directory in which all plugin resources are installed
+@@ -2791,6 +2792,7 @@ CFLAGS-intl.o += -DLOCALEDIR=\"$(localedir)\"
+ 
+ PREPROCESSOR_DEFINES = \
+   -DGCC_INCLUDE_DIR=\"$(libsubdir)/include\" \
++  -DGCC_INCLUDE_SUBDIR_TARGET=\"$(libsubdir_target)/include\" \
+   -DFIXED_INCLUDE_DIR=\"$(libsubdir)/include-fixed\" \
+   -DGPLUSPLUS_INCLUDE_DIR=\"$(gcc_gxx_include_dir)\" \
+   -DGPLUSPLUS_INCLUDE_DIR_ADD_SYSROOT=$(gcc_gxx_include_dir_add_sysroot) \
+diff --git a/gcc/cppdefault.c b/gcc/cppdefault.c
+index 10b96eca0a7..c8da0884872 100644
+--- a/gcc/cppdefault.c
++++ b/gcc/cppdefault.c
+@@ -59,6 +59,10 @@ const struct default_include cpp_include_defaults[]
+     /* This is the dir for gcc's private headers.  */
+     { GCC_INCLUDE_DIR, "GCC", 0, 0, 0, 0 },
+ #endif
++#ifdef GCC_INCLUDE_SUBDIR_TARGET
++    /* This is the dir for gcc's private headers under the specified sysroot.  */
++    { STANDARD_STARTFILE_PREFIX_2 GCC_INCLUDE_SUBDIR_TARGET, "GCC", 0, 0, 1, 0 },
++#endif
+ #ifdef LOCAL_INCLUDE_DIR
+     /* /usr/local/include comes before the fixincluded header files.  */
+     { LOCAL_INCLUDE_DIR, 0, 0, 1, 1, 2 },
+diff --git a/gcc/defaults.h b/gcc/defaults.h
+index 7ad92d920f8..39848cc9c0e 100644
+--- a/gcc/defaults.h
++++ b/gcc/defaults.h
+@@ -1475,4 +1475,13 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+ #define DWARF_GNAT_ENCODINGS_DEFAULT DWARF_GNAT_ENCODINGS_GDB
+ #endif
+ 
++/* Default prefixes to attach to command names.  */
++
++#ifndef STANDARD_STARTFILE_PREFIX_1
++#define STANDARD_STARTFILE_PREFIX_1 "/lib/"
++#endif
++#ifndef STANDARD_STARTFILE_PREFIX_2
++#define STANDARD_STARTFILE_PREFIX_2 "/usr/lib/"
++#endif
++
+ #endif  /* ! GCC_DEFAULTS_H */
+diff --git a/gcc/gcc.c b/gcc/gcc.c
+index c73d4023987..b27245dbf77 100644
+--- a/gcc/gcc.c
++++ b/gcc/gcc.c
+@@ -1472,13 +1472,6 @@ static const char *gcc_libexec_prefix;
+ 
+ /* Default prefixes to attach to command names.  */
+ 
+-#ifndef STANDARD_STARTFILE_PREFIX_1
+-#define STANDARD_STARTFILE_PREFIX_1 "/lib/"
+-#endif
+-#ifndef STANDARD_STARTFILE_PREFIX_2
+-#define STANDARD_STARTFILE_PREFIX_2 "/usr/lib/"
+-#endif
+-
+ #ifdef CROSS_DIRECTORY_STRUCTURE  /* Don't use these prefixes for a cross compiler.  */
+ #undef MD_EXEC_PREFIX
+ #undef MD_STARTFILE_PREFIX
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0031-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0031-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch
new file mode 100644
index 0000000..0184010
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0031-gcc-4.8-won-t-build-with-disable-dependency-tracking.patch
@@ -0,0 +1,54 @@
+From c7b4d957edda955fbe405fd5295846614529f517 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 20 Feb 2015 11:17:19 +0000
+Subject: [PATCH 31/47] gcc 4.8+ won't build with --disable-dependency-tracking
+
+since the *.Ppo files don't get created unless --enable-dependency-tracking is true.
+
+This patch ensures we only use those compiler options when it is enabled.
+
+Upstream-Status: Submitted
+
+(The problem was already reported upstream; this patch was attached there:
+http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55930)
+
+RP
+2012/09/22
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ libatomic/Makefile.am | 3 ++-
+ libatomic/Makefile.in | 3 ++-
+ 2 files changed, 4 insertions(+), 2 deletions(-)
+
+diff --git a/libatomic/Makefile.am b/libatomic/Makefile.am
+index d731406fdbd..2fafc72d2e7 100644
+--- a/libatomic/Makefile.am
++++ b/libatomic/Makefile.am
+@@ -101,7 +101,8 @@ PAT_S		= $(word 3,$(PAT_SPLIT))
+ IFUNC_DEF	= -DIFUNC_ALT=$(PAT_S)
+ IFUNC_OPT	= $(word $(PAT_S),$(IFUNC_OPTIONS))
+ 
+-M_DEPS		= -MT $@ -MD -MP -MF $(DEPDIR)/$(@F).Ppo
++@AMDEP_TRUE@M_DEPS		= -MT $@ -MD -MP -MF $(DEPDIR)/$(@F).Ppo
++@AMDEP_FALSE@M_DEPS		=
+ M_SIZE		= -DN=$(PAT_N)
+ M_IFUNC		= $(if $(PAT_S),$(IFUNC_DEF) $(IFUNC_OPT))
+ M_FILE		= $(PAT_BASE)_n.c
+diff --git a/libatomic/Makefile.in b/libatomic/Makefile.in
+index f6eeab312ea..3f06a894058 100644
+--- a/libatomic/Makefile.in
++++ b/libatomic/Makefile.in
+@@ -331,7 +331,8 @@ PAT_N = $(word 2,$(PAT_SPLIT))
+ PAT_S = $(word 3,$(PAT_SPLIT))
+ IFUNC_DEF = -DIFUNC_ALT=$(PAT_S)
+ IFUNC_OPT = $(word $(PAT_S),$(IFUNC_OPTIONS))
+-M_DEPS = -MT $@ -MD -MP -MF $(DEPDIR)/$(@F).Ppo
++@AMDEP_TRUE@M_DEPS = -MT $@ -MD -MP -MF $(DEPDIR)/$(@F).Ppo
++@AMDEP_FALSE@M_DEPS =
+ M_SIZE = -DN=$(PAT_N)
+ M_IFUNC = $(if $(PAT_S),$(IFUNC_DEF) $(IFUNC_OPT))
+ M_FILE = $(PAT_BASE)_n.c
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0032-Don-t-search-host-directory-during-relink-if-inst_pr.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0032-Don-t-search-host-directory-during-relink-if-inst_pr.patch
new file mode 100644
index 0000000..e8905f5
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0032-Don-t-search-host-directory-during-relink-if-inst_pr.patch
@@ -0,0 +1,38 @@
+From 3be6b766a5881b0b187c3c3c68250a9e4f7c0fa3 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 3 Mar 2015 08:21:19 +0000
+Subject: [PATCH 32/47] Don't search host directory during "relink" if
+ $inst_prefix is provided
+
+http://lists.gnu.org/archive/html/libtool-patches/2011-01/msg00026.html
+
+Upstream-Status: Submitted
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ltmain.sh | 5 +++--
+ 1 file changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/ltmain.sh b/ltmain.sh
+index 52bdbdb5f9c..82bcec39f05 100644
+--- a/ltmain.sh
++++ b/ltmain.sh
+@@ -6004,12 +6004,13 @@ func_mode_link ()
+ 	      fi
+ 	    else
+ 	      # We cannot seem to hardcode it, guess we'll fake it.
++	      # Default if $libdir is not relative to the prefix:
+ 	      add_dir="-L$libdir"
+-	      # Try looking first in the location we're being installed to.
++
+ 	      if test -n "$inst_prefix_dir"; then
+ 		case $libdir in
+ 		  [\\/]*)
+-		    add_dir="$add_dir -L$inst_prefix_dir$libdir"
++		    add_dir="-L$inst_prefix_dir$libdir"
+ 		    ;;
+ 		esac
+ 	      fi
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0033-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0033-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch
new file mode 100644
index 0000000..c0b8df3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0033-Use-SYSTEMLIBS_DIR-replacement-instead-of-hardcoding.patch
@@ -0,0 +1,29 @@
+From 6edcab9046b862cbb9b46892fc390ce69976539c Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 28 Apr 2015 23:15:27 -0700
+Subject: [PATCH 33/47] Use SYSTEMLIBS_DIR replacement instead of hardcoding
+ base_libdir
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Inappropriate [OE-Specific]
+
+ gcc/config/aarch64/aarch64-linux.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/gcc/config/aarch64/aarch64-linux.h b/gcc/config/aarch64/aarch64-linux.h
+index c45fc1d35d1..a7afe197266 100644
+--- a/gcc/config/aarch64/aarch64-linux.h
++++ b/gcc/config/aarch64/aarch64-linux.h
+@@ -21,7 +21,7 @@
+ #ifndef GCC_AARCH64_LINUX_H
+ #define GCC_AARCH64_LINUX_H
+ 
+-#define GLIBC_DYNAMIC_LINKER "/lib/ld-linux-aarch64%{mbig-endian:_be}%{mabi=ilp32:_ilp32}.so.1"
++#define GLIBC_DYNAMIC_LINKER  SYSTEMLIBS_DIR "ld-linux-aarch64%{mbig-endian:_be}%{mabi=ilp32:_ilp32}.so.1"
+ 
+ #undef MUSL_DYNAMIC_LINKER
+ #define MUSL_DYNAMIC_LINKER "/lib/ld-musl-aarch64%{mbig-endian:_be}%{mabi=ilp32:_ilp32}.so.1"
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0034-aarch64-Add-support-for-musl-ldso.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0034-aarch64-Add-support-for-musl-ldso.patch
new file mode 100644
index 0000000..7d866d9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0034-aarch64-Add-support-for-musl-ldso.patch
@@ -0,0 +1,28 @@
+From b140d6839cfba9cac892bc736d984540552d6a56 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 28 Apr 2015 23:18:39 -0700
+Subject: [PATCH 34/47] aarch64: Add support for musl ldso
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Inappropriate [OE-Specific]
+
+ gcc/config/aarch64/aarch64-linux.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/gcc/config/aarch64/aarch64-linux.h b/gcc/config/aarch64/aarch64-linux.h
+index a7afe197266..580c2c7ea15 100644
+--- a/gcc/config/aarch64/aarch64-linux.h
++++ b/gcc/config/aarch64/aarch64-linux.h
+@@ -24,7 +24,7 @@
+ #define GLIBC_DYNAMIC_LINKER  SYSTEMLIBS_DIR "ld-linux-aarch64%{mbig-endian:_be}%{mabi=ilp32:_ilp32}.so.1"
+ 
+ #undef MUSL_DYNAMIC_LINKER
+-#define MUSL_DYNAMIC_LINKER "/lib/ld-musl-aarch64%{mbig-endian:_be}%{mabi=ilp32:_ilp32}.so.1"
++#define MUSL_DYNAMIC_LINKER  SYSTEMLIBS_DIR "ld-musl-aarch64%{mbig-endian:_be}%{mabi=ilp32:_ilp32}.so.1"
+ 
+ #undef  ASAN_CC1_SPEC
+ #define ASAN_CC1_SPEC "%{%:sanitize(address):-funwind-tables}"
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0035-libcc1-fix-libcc1-s-install-path-and-rpath.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0035-libcc1-fix-libcc1-s-install-path-and-rpath.patch
new file mode 100644
index 0000000..e2c1956
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0035-libcc1-fix-libcc1-s-install-path-and-rpath.patch
@@ -0,0 +1,54 @@
+From 63617f2da153db10fa2fe938cce31bee01d47fe8 Mon Sep 17 00:00:00 2001
+From: Robert Yang <liezhi.yang@windriver.com>
+Date: Sun, 5 Jul 2015 20:25:18 -0700
+Subject: [PATCH 35/47] libcc1: fix libcc1's install path and rpath
+
+* Install libcc1.so and libcc1plugin.so into
+  $(libexecdir)/gcc/$(target_noncanonical)/$(gcc_version), as we
+  had done for lto-plugin.
+* Fix bad RPATH issue:
+  gcc-5.2.0: package gcc-plugins contains bad RPATH /patht/to/tmp/sysroots/qemux86-64/usr/lib64/../lib64 in file
+ /path/to/gcc/5.2.0-r0/packages-split/gcc-plugins/usr/lib64/gcc/x86_64-poky-linux/5.2.0/plugin/libcc1plugin.so.0.0.0
+ [rpaths]
+
+Upstream-Status: Inappropriate [OE configuration]
+
+Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
+---
+ libcc1/Makefile.am | 4 ++--
+ libcc1/Makefile.in | 4 ++--
+ 2 files changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/libcc1/Makefile.am b/libcc1/Makefile.am
+index 5e61a92a26b..e8b627f9cec 100644
+--- a/libcc1/Makefile.am
++++ b/libcc1/Makefile.am
+@@ -37,8 +37,8 @@ libiberty = $(if $(wildcard $(libiberty_noasan)),$(Wc)$(libiberty_noasan), \
+ 	    $(Wc)$(libiberty_normal)))
+ libiberty_dep = $(patsubst $(Wc)%,%,$(libiberty))
+ 
+-plugindir = $(libdir)/gcc/$(target_noncanonical)/$(gcc_version)/plugin
+-cc1libdir = $(libdir)/$(libsuffix)
++cc1libdir = $(libexecdir)/gcc/$(target_noncanonical)/$(gcc_version)
++plugindir = $(cc1libdir)
+ 
+ if ENABLE_PLUGIN
+ plugin_LTLIBRARIES = libcc1plugin.la libcp1plugin.la
+diff --git a/libcc1/Makefile.in b/libcc1/Makefile.in
+index 54babb02a49..e51d87ffdce 100644
+--- a/libcc1/Makefile.in
++++ b/libcc1/Makefile.in
+@@ -303,8 +303,8 @@ libiberty = $(if $(wildcard $(libiberty_noasan)),$(Wc)$(libiberty_noasan), \
+ 	    $(Wc)$(libiberty_normal)))
+ 
+ libiberty_dep = $(patsubst $(Wc)%,%,$(libiberty))
+-plugindir = $(libdir)/gcc/$(target_noncanonical)/$(gcc_version)/plugin
+-cc1libdir = $(libdir)/$(libsuffix)
++cc1libdir = $(libexecdir)/gcc/$(target_noncanonical)/$(gcc_version)
++plugindir = $(cc1libdir)
+ @ENABLE_PLUGIN_TRUE@plugin_LTLIBRARIES = libcc1plugin.la libcp1plugin.la
+ @ENABLE_PLUGIN_TRUE@cc1lib_LTLIBRARIES = libcc1.la
+ BUILT_SOURCES = c-compiler-name.h cp-compiler-name.h
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0036-handle-sysroot-support-for-nativesdk-gcc.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0036-handle-sysroot-support-for-nativesdk-gcc.patch
new file mode 100644
index 0000000..aa0b108
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0036-handle-sysroot-support-for-nativesdk-gcc.patch
@@ -0,0 +1,213 @@
+From ca14820ae834a62ef2b80b283e8f900714636272 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 7 Dec 2015 23:39:54 +0000
+Subject: [PATCH 36/47] handle sysroot support for nativesdk-gcc
+
+Being able to build a nativesdk gcc is useful, particularly in cases
+where the host compiler may be of an incompatible version (or a 32
+bit compiler is needed).
+
+Sadly, building nativesdk-gcc is not straightforward. We install
+nativesdk-gcc into a relocatable location and this means that its
+library locations can change. "Normal" sysroot support doesn't help
+in this case since the values of paths like "libdir" change, not just
+the base root directory of the system.
+
+In order to handle this we do two things:
+
+a) Add %r into spec file markup which can be used for injected paths
+   such as SYSTEMLIBS_DIR (see gcc_multilib_setup()).
+b) Add other paths which need relocation into a .gccrelocprefix section
+   which the relocation code will notice and adjust automatically.
+
+Upstream-Status: Inappropriate
+RP 2015/7/28
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ gcc/cppdefault.c | 50 +++++++++++++++++++++++++++++++++++++-------------
+ gcc/cppdefault.h |  3 ++-
+ gcc/gcc.c        | 20 ++++++++++++++------
+ 3 files changed, 53 insertions(+), 20 deletions(-)
+
+diff --git a/gcc/cppdefault.c b/gcc/cppdefault.c
+index c8da0884872..43dc597a0c3 100644
+--- a/gcc/cppdefault.c
++++ b/gcc/cppdefault.c
+@@ -35,6 +35,30 @@
+ # undef CROSS_INCLUDE_DIR
+ #endif
+ 
++static char GPLUSPLUS_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = GPLUSPLUS_INCLUDE_DIR;
++static char GCC_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = GCC_INCLUDE_DIR;
++static char GPLUSPLUS_TOOL_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = GPLUSPLUS_TOOL_INCLUDE_DIR;
++static char GPLUSPLUS_BACKWARD_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = GPLUSPLUS_BACKWARD_INCLUDE_DIR;
++static char STANDARD_STARTFILE_PREFIX_2VAR[4096] __attribute__ ((section (".gccrelocprefix"))) = STANDARD_STARTFILE_PREFIX_2 GCC_INCLUDE_SUBDIR_TARGET;
++#ifdef LOCAL_INCLUDE_DIR
++static char LOCAL_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = LOCAL_INCLUDE_DIR;
++#endif
++#ifdef PREFIX_INCLUDE_DIR
++static char PREFIX_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = PREFIX_INCLUDE_DIR;
++#endif
++#ifdef FIXED_INCLUDE_DIR
++static char FIXED_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = FIXED_INCLUDE_DIR;
++#endif
++#ifdef CROSS_INCLUDE_DIR
++static char CROSS_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = CROSS_INCLUDE_DIR;
++#endif
++#ifdef TOOL_INCLUDE_DIR
++static char TOOL_INCLUDE_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = TOOL_INCLUDE_DIR;
++#endif
++#ifdef NATIVE_SYSTEM_HEADER_DIR
++static char NATIVE_SYSTEM_HEADER_DIRVAR[4096] __attribute__ ((section (".gccrelocprefix"))) = NATIVE_SYSTEM_HEADER_DIR;
++#endif
++
+ const struct default_include cpp_include_defaults[]
+ #ifdef INCLUDE_DEFAULTS
+ = INCLUDE_DEFAULTS;
+@@ -42,38 +66,38 @@ const struct default_include cpp_include_defaults[]
+ = {
+ #ifdef GPLUSPLUS_INCLUDE_DIR
+     /* Pick up GNU C++ generic include files.  */
+-    { GPLUSPLUS_INCLUDE_DIR, "G++", 1, 1,
++    { GPLUSPLUS_INCLUDE_DIRVAR, "G++", 1, 1,
+       GPLUSPLUS_INCLUDE_DIR_ADD_SYSROOT, 0 },
+ #endif
+ #ifdef GPLUSPLUS_TOOL_INCLUDE_DIR
+     /* Pick up GNU C++ target-dependent include files.  */
+-    { GPLUSPLUS_TOOL_INCLUDE_DIR, "G++", 1, 1,
++    { GPLUSPLUS_TOOL_INCLUDE_DIRVAR, "G++", 1, 1,
+       GPLUSPLUS_INCLUDE_DIR_ADD_SYSROOT, 1 },
+ #endif
+ #ifdef GPLUSPLUS_BACKWARD_INCLUDE_DIR
+     /* Pick up GNU C++ backward and deprecated include files.  */
+-    { GPLUSPLUS_BACKWARD_INCLUDE_DIR, "G++", 1, 1,
++    { GPLUSPLUS_BACKWARD_INCLUDE_DIRVAR, "G++", 1, 1,
+       GPLUSPLUS_INCLUDE_DIR_ADD_SYSROOT, 0 },
+ #endif
+ #ifdef GCC_INCLUDE_DIR
+     /* This is the dir for gcc's private headers.  */
+-    { GCC_INCLUDE_DIR, "GCC", 0, 0, 0, 0 },
++    { GCC_INCLUDE_DIRVAR, "GCC", 0, 0, 0, 0 },
+ #endif
+ #ifdef GCC_INCLUDE_SUBDIR_TARGET
+     /* This is the dir for gcc's private headers under the specified sysroot.  */
+-    { STANDARD_STARTFILE_PREFIX_2 GCC_INCLUDE_SUBDIR_TARGET, "GCC", 0, 0, 1, 0 },
++    { STANDARD_STARTFILE_PREFIX_2VAR, "GCC", 0, 0, 1, 0 },
+ #endif
+ #ifdef LOCAL_INCLUDE_DIR
+     /* /usr/local/include comes before the fixincluded header files.  */
+-    { LOCAL_INCLUDE_DIR, 0, 0, 1, 1, 2 },
+-    { LOCAL_INCLUDE_DIR, 0, 0, 1, 1, 0 },
++    { LOCAL_INCLUDE_DIRVAR, 0, 0, 1, 1, 2 },
++    { LOCAL_INCLUDE_DIRVAR, 0, 0, 1, 1, 0 },
+ #endif
+ #ifdef PREFIX_INCLUDE_DIR
+-    { PREFIX_INCLUDE_DIR, 0, 0, 1, 0, 0 },
++    { PREFIX_INCLUDE_DIRVAR, 0, 0, 1, 0, 0 },
+ #endif
+ #ifdef FIXED_INCLUDE_DIR
+     /* This is the dir for fixincludes.  */
+-    { FIXED_INCLUDE_DIR, "GCC", 0, 0, 0,
++    { FIXED_INCLUDE_DIRVAR, "GCC", 0, 0, 0,
+       /* A multilib suffix needs adding if different multilibs use
+ 	 different headers.  */
+ #ifdef SYSROOT_HEADERS_SUFFIX_SPEC
+@@ -85,16 +109,16 @@ const struct default_include cpp_include_defaults[]
+ #endif
+ #ifdef CROSS_INCLUDE_DIR
+     /* One place the target system's headers might be.  */
+-    { CROSS_INCLUDE_DIR, "GCC", 0, 0, 0, 0 },
++    { CROSS_INCLUDE_DIRVAR, "GCC", 0, 0, 0, 0 },
+ #endif
+ #ifdef TOOL_INCLUDE_DIR
+     /* Another place the target system's headers might be.  */
+-    { TOOL_INCLUDE_DIR, "BINUTILS", 0, 1, 0, 0 },
++    { TOOL_INCLUDE_DIRVAR, "BINUTILS", 0, 1, 0, 0 },
+ #endif
+ #ifdef NATIVE_SYSTEM_HEADER_DIR
+     /* /usr/include comes dead last.  */
+-    { NATIVE_SYSTEM_HEADER_DIR, NATIVE_SYSTEM_HEADER_COMPONENT, 0, 0, 1, 2 },
+-    { NATIVE_SYSTEM_HEADER_DIR, NATIVE_SYSTEM_HEADER_COMPONENT, 0, 0, 1, 0 },
++    { NATIVE_SYSTEM_HEADER_DIRVAR, NATIVE_SYSTEM_HEADER_COMPONENT, 0, 0, 1, 2 },
++    { NATIVE_SYSTEM_HEADER_DIRVAR, NATIVE_SYSTEM_HEADER_COMPONENT, 0, 0, 1, 0 },
+ #endif
+     { 0, 0, 0, 0, 0, 0 }
+   };
+diff --git a/gcc/cppdefault.h b/gcc/cppdefault.h
+index 17bbb0eaef7..a937ec1d187 100644
+--- a/gcc/cppdefault.h
++++ b/gcc/cppdefault.h
+@@ -33,7 +33,8 @@
+ 
+ struct default_include
+ {
+-  const char *const fname;	/* The name of the directory.  */
++  const char *fname;     /* The name of the directory.  */
++
+   const char *const component;	/* The component containing the directory
+ 				   (see update_path in prefix.c) */
+   const char cplusplus;		/* Only look here if we're compiling C++.  */
+diff --git a/gcc/gcc.c b/gcc/gcc.c
+index b27245dbf77..e015c77f15f 100644
+--- a/gcc/gcc.c
++++ b/gcc/gcc.c
+@@ -247,6 +247,8 @@ FILE *report_times_to_file = NULL;
+ #endif
+ static const char *target_system_root = DEFAULT_TARGET_SYSTEM_ROOT;
+ 
++static char target_relocatable_prefix[4096] __attribute__ ((section (".gccrelocprefix"))) = SYSTEMLIBS_DIR;
++
+ /* Nonzero means pass the updated target_system_root to the compiler.  */
+ 
+ static int target_system_root_changed;
+@@ -518,6 +520,7 @@ or with constant text in a single argument.
+  %G     process LIBGCC_SPEC as a spec.
+  %R     Output the concatenation of target_system_root and
+         target_sysroot_suffix.
++ %r     Output the base path target_relocatable_prefix
+  %S     process STARTFILE_SPEC as a spec.  A capital S is actually used here.
+  %E     process ENDFILE_SPEC as a spec.  A capital E is actually used here.
+  %C     process CPP_SPEC as a spec.
+@@ -1495,10 +1498,10 @@ static const char *gcc_libexec_prefix;
+    gcc_exec_prefix is set because, in that case, we know where the
+    compiler has been installed, and use paths relative to that
+    location instead.  */
+-static const char *const standard_exec_prefix = STANDARD_EXEC_PREFIX;
+-static const char *const standard_libexec_prefix = STANDARD_LIBEXEC_PREFIX;
+-static const char *const standard_bindir_prefix = STANDARD_BINDIR_PREFIX;
+-static const char *const standard_startfile_prefix = STANDARD_STARTFILE_PREFIX;
++static char standard_exec_prefix[4096] __attribute__ ((section (".gccrelocprefix"))) = STANDARD_EXEC_PREFIX;
++static char standard_libexec_prefix[4096] __attribute__ ((section (".gccrelocprefix"))) = STANDARD_LIBEXEC_PREFIX;
++static char standard_bindir_prefix[4096] __attribute__ ((section (".gccrelocprefix"))) = STANDARD_BINDIR_PREFIX;
++static char *const standard_startfile_prefix = STANDARD_STARTFILE_PREFIX;
+ 
+ /* For native compilers, these are well-known paths containing
+    components that may be provided by the system.  For cross
+@@ -1506,9 +1509,9 @@ static const char *const standard_startfile_prefix = STANDARD_STARTFILE_PREFIX;
+ static const char *md_exec_prefix = MD_EXEC_PREFIX;
+ static const char *md_startfile_prefix = MD_STARTFILE_PREFIX;
+ static const char *md_startfile_prefix_1 = MD_STARTFILE_PREFIX_1;
+-static const char *const standard_startfile_prefix_1
++static char standard_startfile_prefix_1[4096] __attribute__ ((section (".gccrelocprefix")))
+   = STANDARD_STARTFILE_PREFIX_1;
+-static const char *const standard_startfile_prefix_2
++static char standard_startfile_prefix_2[4096] __attribute__ ((section (".gccrelocprefix")))
+   = STANDARD_STARTFILE_PREFIX_2;
+ 
+ /* A relative path to be used in finding the location of tools
+@@ -5826,6 +5829,11 @@ do_spec_1 (const char *spec, int inswitch, const char *soft_matched_part)
+ 	      }
+ 	    break;
+ 
++          case 'r':
++              obstack_grow (&obstack, target_relocatable_prefix,
++		      strlen (target_relocatable_prefix));
++            break;
++
+ 	  case 'S':
+ 	    value = do_spec_1 (startfile_spec, 0, NULL);
+ 	    if (value != 0)
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0037-Search-target-sysroot-gcc-version-specific-dirs-with.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0037-Search-target-sysroot-gcc-version-specific-dirs-with.patch
new file mode 100644
index 0000000..6c85a03
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0037-Search-target-sysroot-gcc-version-specific-dirs-with.patch
@@ -0,0 +1,102 @@
+From 16a326bcd126b395b29019072905bae7a5d47500 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 7 Dec 2015 23:41:45 +0000
+Subject: [PATCH 37/47] Search target sysroot gcc version specific dirs with
+ multilib.
+
+We install the gcc libraries (such as crtbegin.o) into
+<sysroot><libdir>/<target-sys>/5.2.0/
+which is a default search path for GCC (aka multi_suffix in the
+code below). <target-sys> is 'machine' in gcc's terminology. We use
+these directories so that multiple gcc versions could in theory
+co-exist on target.
+
+We only want to build one gcc-cross-canadian per arch and have this work
+for all multilibs. <target-sys> can be handled by mapping the multilib
+<target-sys> to the one used by gcc-cross-canadian, e.g.
+mips64-pokymllib32-linux
+is symlinked to by mips64-poky-linux.
+
+The default gcc search path in the target sysroot for a "lib64" mutlilib
+is:
+
+<sysroot>/lib32/mips64-poky-linux/5.2.0/
+<sysroot>/lib32/../lib64/
+<sysroot>/usr/lib32/mips64-poky-linux/5.2.0/
+<sysroot>/usr/lib32/../lib64/
+<sysroot>/lib32/
+<sysroot>/usr/lib32/
+
+which means that the lib32 crtbegin.o will be found and the lib64 ones
+will not, which leads to compiler failures.
+
+This patch injects a multilib version of that path first so the lib64
+binaries can be found first. With this change the search path becomes:
+
+<sysroot>/lib32/../lib64/mips64-poky-linux/5.2.0/
+<sysroot>/lib32/mips64-poky-linux/5.2.0/
+<sysroot>/lib32/../lib64/
+<sysroot>/usr/lib32/../lib64/mips64-poky-linux/5.2.0/
+<sysroot>/usr/lib32/mips64-poky-linux/5.2.0/
+<sysroot>/usr/lib32/../lib64/
+<sysroot>/lib32/
+<sysroot>/usr/lib32/
+
+Upstream-Status: Pending
+RP 2015/7/31
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ gcc/gcc.c | 29 ++++++++++++++++++++++++++++-
+ 1 file changed, 28 insertions(+), 1 deletion(-)
+
+diff --git a/gcc/gcc.c b/gcc/gcc.c
+index e015c77f15f..84af5d5a2e1 100644
+--- a/gcc/gcc.c
++++ b/gcc/gcc.c
+@@ -2533,7 +2533,7 @@ for_each_path (const struct path_prefix *paths,
+       if (path == NULL)
+ 	{
+ 	  len = paths->max_len + extra_space + 1;
+-	  len += MAX (MAX (suffix_len, multi_os_dir_len), multiarch_len);
++	  len += MAX ((suffix_len + multi_os_dir_len), multiarch_len);
+ 	  path = XNEWVEC (char, len);
+ 	}
+ 
+@@ -2545,6 +2545,33 @@ for_each_path (const struct path_prefix *paths,
+ 	  /* Look first in MACHINE/VERSION subdirectory.  */
+ 	  if (!skip_multi_dir)
+ 	    {
++	      if (!(pl->os_multilib ? skip_multi_os_dir : skip_multi_dir))
++	        {
++	          const char *this_multi;
++	          size_t this_multi_len;
++
++	          if (pl->os_multilib)
++		    {
++		      this_multi = multi_os_dir;
++		      this_multi_len = multi_os_dir_len;
++		    }
++	          else
++		    {
++		      this_multi = multi_dir;
++		      this_multi_len = multi_dir_len;
++		    }
++
++	          /* Look in multilib MACHINE/VERSION subdirectory first */
++	          if (this_multi_len)
++	            {
++		      memcpy (path + len, this_multi, this_multi_len + 1);
++	              memcpy (path + len + this_multi_len, multi_suffix, suffix_len + 1);
++	              ret = callback (path, callback_info);
++	                if (ret)
++		          break;
++	            }
++	        }
++
+ 	      memcpy (path + len, multi_suffix, suffix_len + 1);
+ 	      ret = callback (path, callback_info);
+ 	      if (ret)
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0038-Fix-various-_FOR_BUILD-and-related-variables.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0038-Fix-various-_FOR_BUILD-and-related-variables.patch
new file mode 100644
index 0000000..a226d10
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0038-Fix-various-_FOR_BUILD-and-related-variables.patch
@@ -0,0 +1,137 @@
+From 6e7f526e71a76aac3d49ba8f1742fe1b359c1060 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 7 Dec 2015 23:42:45 +0000
+Subject: [PATCH 38/47] Fix various _FOR_BUILD and related variables
+
+When doing a FOR_BUILD thing, you have to override CFLAGS with
+CFLAGS_FOR_BUILD. And if you use C++, you also have to override
+CXXFLAGS with CXXFLAGS_FOR_BUILD.
+Without this, when building for mingw, you end up trying to use
+the mingw headers for a host build.
+
+The same goes for other variables as well, such as CPPFLAGS,
+CPP, and GMPINC.
+
+Upstream-Status: Pending
+
+Signed-off-by: Peter Seebach <peter.seebach@windriver.com>
+Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ Makefile.in      | 6 ++++++
+ Makefile.tpl     | 5 +++++
+ gcc/Makefile.in  | 2 +-
+ gcc/configure    | 2 +-
+ gcc/configure.ac | 2 +-
+ 5 files changed, 14 insertions(+), 3 deletions(-)
+
+diff --git a/Makefile.in b/Makefile.in
+index e34e9555388..a03740f3f9d 100644
+--- a/Makefile.in
++++ b/Makefile.in
+@@ -152,6 +152,7 @@ BUILD_EXPORTS = \
+ 	CPP="$(CC_FOR_BUILD) -E"; export CPP; \
+ 	CFLAGS="$(CFLAGS_FOR_BUILD)"; export CFLAGS; \
+ 	CONFIG_SHELL="$(SHELL)"; export CONFIG_SHELL; \
++	CPPFLAGS="$(CPPFLAGS_FOR_BUILD)"; export CPPFLAGS; \
+ 	CXX="$(CXX_FOR_BUILD)"; export CXX; \
+ 	CXXFLAGS="$(CXXFLAGS_FOR_BUILD)"; export CXXFLAGS; \
+ 	GFORTRAN="$(GFORTRAN_FOR_BUILD)"; export GFORTRAN; \
+@@ -169,6 +170,9 @@ BUILD_EXPORTS = \
+ # built for the build system to override those in BASE_FLAGS_TO_PASS.
+ EXTRA_BUILD_FLAGS = \
+ 	CFLAGS="$(CFLAGS_FOR_BUILD)" \
++	CXXFLAGS="$(CXXFLAGS_FOR_BUILD)" \
++	CPP="$(CC_FOR_BUILD) -E" \
++	CPPFLAGS="$(CPPFLAGS_FOR_BUILD)" \
+ 	LDFLAGS="$(LDFLAGS_FOR_BUILD)"
+ 
+ # This is the list of directories to built for the host system.
+@@ -186,6 +190,7 @@ HOST_SUBDIR = @host_subdir@
+ HOST_EXPORTS = \
+ 	$(BASE_EXPORTS) \
+ 	CC="$(CC)"; export CC; \
++	CPP="$(CC) -E"; export CPP; \
+ 	ADA_CFLAGS="$(ADA_CFLAGS)"; export ADA_CFLAGS; \
+ 	CFLAGS="$(CFLAGS)"; export CFLAGS; \
+ 	CONFIG_SHELL="$(SHELL)"; export CONFIG_SHELL; \
+@@ -734,6 +739,7 @@ BASE_FLAGS_TO_PASS = \
+ 	"CC_FOR_BUILD=$(CC_FOR_BUILD)" \
+ 	"CFLAGS_FOR_BUILD=$(CFLAGS_FOR_BUILD)" \
+ 	"CXX_FOR_BUILD=$(CXX_FOR_BUILD)" \
++	"CXXFLAGS_FOR_BUILD=$(CXXFLAGS_FOR_BUILD)" \
+ 	"EXPECT=$(EXPECT)" \
+ 	"FLEX=$(FLEX)" \
+ 	"INSTALL=$(INSTALL)" \
+diff --git a/Makefile.tpl b/Makefile.tpl
+index d0fa07005be..953376c658d 100644
+--- a/Makefile.tpl
++++ b/Makefile.tpl
+@@ -154,6 +154,7 @@ BUILD_EXPORTS = \
+ 	CC="$(CC_FOR_BUILD)"; export CC; \
+ 	CFLAGS="$(CFLAGS_FOR_BUILD)"; export CFLAGS; \
+ 	CONFIG_SHELL="$(SHELL)"; export CONFIG_SHELL; \
++	CPPFLAGS="$(CPPFLAGS_FOR_BUILD)"; export CPPFLAGS; \
+ 	CXX="$(CXX_FOR_BUILD)"; export CXX; \
+ 	CXXFLAGS="$(CXXFLAGS_FOR_BUILD)"; export CXXFLAGS; \
+ 	GFORTRAN="$(GFORTRAN_FOR_BUILD)"; export GFORTRAN; \
+@@ -171,6 +172,9 @@ BUILD_EXPORTS = \
+ # built for the build system to override those in BASE_FLAGS_TO_PASS.
+ EXTRA_BUILD_FLAGS = \
+ 	CFLAGS="$(CFLAGS_FOR_BUILD)" \
++	CXXFLAGS="$(CXXFLAGS_FOR_BUILD)" \
++	CPP="$(CC_FOR_BUILD) -E" \
++	CPPFLAGS="$(CPPFLAGS_FOR_BUILD)" \
+ 	LDFLAGS="$(LDFLAGS_FOR_BUILD)"
+ 
+ # This is the list of directories to built for the host system.
+@@ -188,6 +192,7 @@ HOST_SUBDIR = @host_subdir@
+ HOST_EXPORTS = \
+ 	$(BASE_EXPORTS) \
+ 	CC="$(CC)"; export CC; \
++	CPP="$(CC) -E"; export CPP; \
+ 	ADA_CFLAGS="$(ADA_CFLAGS)"; export ADA_CFLAGS; \
+ 	CFLAGS="$(CFLAGS)"; export CFLAGS; \
+ 	CONFIG_SHELL="$(SHELL)"; export CONFIG_SHELL; \
+diff --git a/gcc/Makefile.in b/gcc/Makefile.in
+index 95d21effad3..dbe2bacde50 100644
+--- a/gcc/Makefile.in
++++ b/gcc/Makefile.in
+@@ -795,7 +795,7 @@ BUILD_LDFLAGS=@BUILD_LDFLAGS@
+ BUILD_NO_PIE_FLAG = @BUILD_NO_PIE_FLAG@
+ BUILD_LDFLAGS += $(BUILD_NO_PIE_FLAG)
+ BUILD_CPPFLAGS= -I. -I$(@D) -I$(srcdir) -I$(srcdir)/$(@D) \
+-		-I$(srcdir)/../include @INCINTL@ $(CPPINC) $(CPPFLAGS)
++		-I$(srcdir)/../include @INCINTL@ $(CPPINC) $(CPPFLAGS_FOR_BUILD)
+ 
+ # Actual name to use when installing a native compiler.
+ GCC_INSTALL_NAME := $(shell echo gcc|sed '$(program_transform_name)')
+diff --git a/gcc/configure b/gcc/configure
+index 6ba391ed068..72ca6e5c535 100755
+--- a/gcc/configure
++++ b/gcc/configure
+@@ -11789,7 +11789,7 @@ else
+ 	CC="${CC_FOR_BUILD}" CFLAGS="${CFLAGS_FOR_BUILD}" \
+ 	CXX="${CXX_FOR_BUILD}" CXXFLAGS="${CXXFLAGS_FOR_BUILD}" \
+ 	LD="${LD_FOR_BUILD}" LDFLAGS="${LDFLAGS_FOR_BUILD}" \
+-	GMPINC="" CPPFLAGS="${CPPFLAGS} -DGENERATOR_FILE" \
++	GMPINC="" CPPFLAGS="${CPPFLAGS_FOR_BUILD} -DGENERATOR_FILE" \
+ 	${realsrcdir}/configure \
+ 		--enable-languages=${enable_languages-all} \
+ 		--target=$target_alias --host=$build_alias --build=$build_alias
+diff --git a/gcc/configure.ac b/gcc/configure.ac
+index 8551a412df3..6eefb61dc2b 100644
+--- a/gcc/configure.ac
++++ b/gcc/configure.ac
+@@ -1708,7 +1708,7 @@ else
+ 	CC="${CC_FOR_BUILD}" CFLAGS="${CFLAGS_FOR_BUILD}" \
+ 	CXX="${CXX_FOR_BUILD}" CXXFLAGS="${CXXFLAGS_FOR_BUILD}" \
+ 	LD="${LD_FOR_BUILD}" LDFLAGS="${LDFLAGS_FOR_BUILD}" \
+-	GMPINC="" CPPFLAGS="${CPPFLAGS} -DGENERATOR_FILE" \
++	GMPINC="" CPPFLAGS="${CPPFLAGS_FOR_BUILD} -DGENERATOR_FILE" \
+ 	${realsrcdir}/configure \
+ 		--enable-languages=${enable_languages-all} \
+ 		--target=$target_alias --host=$build_alias --build=$build_alias
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0039-nios2-Define-MUSL_DYNAMIC_LINKER.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0039-nios2-Define-MUSL_DYNAMIC_LINKER.patch
new file mode 100644
index 0000000..a7aeccd
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0039-nios2-Define-MUSL_DYNAMIC_LINKER.patch
@@ -0,0 +1,28 @@
+From 6d03ddfb7a092942be6b58b1830f6986d012d5e3 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 2 Feb 2016 10:26:10 -0800
+Subject: [PATCH 39/47] nios2: Define MUSL_DYNAMIC_LINKER
+
+Signed-off-by: Marek Vasut <marex@denx.de>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Inappropriate [OE-Specific]
+
+ gcc/config/nios2/linux.h | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/gcc/config/nios2/linux.h b/gcc/config/nios2/linux.h
+index 5177fa641a4..1b8accda6ee 100644
+--- a/gcc/config/nios2/linux.h
++++ b/gcc/config/nios2/linux.h
+@@ -30,6 +30,7 @@
+ #define CPP_SPEC "%{posix:-D_POSIX_SOURCE} %{pthread:-D_REENTRANT}"
+ 
+ #define GLIBC_DYNAMIC_LINKER "/lib/ld-linux-nios2.so.1"
++#define MUSL_DYNAMIC_LINKER  "/lib/ld-musl-nios2.so.1"
+ 
+ #undef LINK_SPEC
+ #define LINK_SPEC LINK_SPEC_ENDIAN \
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0041-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0040-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-6.3/0041-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch
rename to import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0040-Add-ssp_nonshared-to-link-commandline-for-musl-targe.patch
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0041-gcc-libcpp-support-ffile-prefix-map-old-new.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0041-gcc-libcpp-support-ffile-prefix-map-old-new.patch
new file mode 100644
index 0000000..5260e36
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0041-gcc-libcpp-support-ffile-prefix-map-old-new.patch
@@ -0,0 +1,284 @@
+From 4eadc99bdd0974761bf48f0fd32994dd9a3ffcfe Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Wed, 16 Mar 2016 02:27:43 -0400
+Subject: [PATCH 41/47] gcc/libcpp: support -ffile-prefix-map=<old>=<new>
+
+Similar to -fdebug-prefix-map, add the option -ffile-prefix-map to map one
+directory name (old) to another (new) in __FILE__, __BASE_FILE__ and
+__builtin_FILE ().
+
+https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70268
+
+Upstream-Status: Submitted [gcc-patches@gcc.gnu.org]
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ gcc/c-family/c-opts.c     | 13 +++++++
+ gcc/c-family/c.opt        |  4 +++
+ gcc/dwarf2out.c           |  1 +
+ gcc/gimplify.c            |  2 ++
+ libcpp/Makefile.in        | 10 +++---
+ libcpp/file-map.c         | 92 +++++++++++++++++++++++++++++++++++++++++++++++
+ libcpp/include/file-map.h | 30 ++++++++++++++++
+ libcpp/macro.c            |  2 ++
+ 8 files changed, 149 insertions(+), 5 deletions(-)
+ create mode 100644 libcpp/file-map.c
+ create mode 100644 libcpp/include/file-map.h
+
+diff --git a/gcc/c-family/c-opts.c b/gcc/c-family/c-opts.c
+index ea0e01b101c..a741c75a78f 100644
+--- a/gcc/c-family/c-opts.c
++++ b/gcc/c-family/c-opts.c
+@@ -39,6 +39,14 @@ along with GCC; see the file COPYING3.  If not see
+ #include "opts.h"
+ #include "plugin.h"		/* For PLUGIN_INCLUDE_FILE event.  */
+ #include "mkdeps.h"
++#include "file-map.h"
++#include "c-target.h"
++#include "tm.h"			/* For BYTES_BIG_ENDIAN,
++				   DOLLARS_IN_IDENTIFIERS,
++				   STDC_0_IN_SYSTEM_HEADERS,
++				   TARGET_FLT_EVAL_METHOD_NON_DEFAULT and
++				   TARGET_OPTF.  */
++#include "tm_p.h"		/* For C_COMMON_OVERRIDE_OPTIONS.  */
+ #include "dumpfile.h"
+ 
+ #ifndef DOLLARS_IN_IDENTIFIERS
+@@ -517,6 +525,11 @@ c_common_handle_option (size_t scode, const char *arg, int value,
+       cpp_opts->narrow_charset = arg;
+       break;
+ 
++    case OPT_ffile_prefix_map_:
++      if (add_file_prefix_map (arg) < 0)
++        error ("invalid argument %qs to -ffile-prefix-map", arg);
++      break;
++
+     case OPT_fwide_exec_charset_:
+       cpp_opts->wide_charset = arg;
+       break;
+diff --git a/gcc/c-family/c.opt b/gcc/c-family/c.opt
+index c4ef7796282..73333dfd57c 100644
+--- a/gcc/c-family/c.opt
++++ b/gcc/c-family/c.opt
+@@ -1372,6 +1372,10 @@ fexec-charset=
+ C ObjC C++ ObjC++ Joined RejectNegative
+ -fexec-charset=<cset>	Convert all strings and character constants to character set <cset>.
+ 
++ffile-prefix-map=
++C ObjC C++ ObjC++ Joined RejectNegative
++-ffile-prefix-map=<old=new>	Map one directory name to another in __FILE__, __BASE_FILE__ and __builtin_FILE ()
++
+ fextended-identifiers
+ C ObjC C++ ObjC++
+ Permit universal character names (\\u and \\U) in identifiers.
+diff --git a/gcc/dwarf2out.c b/gcc/dwarf2out.c
+index 98c51576ec2..762f69ae88e 100644
+--- a/gcc/dwarf2out.c
++++ b/gcc/dwarf2out.c
+@@ -23421,6 +23421,7 @@ gen_producer_string (void)
+       case OPT_fltrans_output_list_:
+       case OPT_fresolution_:
+       case OPT_fdebug_prefix_map_:
++      case OPT_ffile_prefix_map_:
+ 	/* Ignore these.  */
+ 	continue;
+       default:
+diff --git a/gcc/gimplify.c b/gcc/gimplify.c
+index fd27eb1523f..5542b379f28 100644
+--- a/gcc/gimplify.c
++++ b/gcc/gimplify.c
+@@ -58,6 +58,8 @@ along with GCC; see the file COPYING3.  If not see
+ #include "gomp-constants.h"
+ #include "tree-dump.h"
+ #include "gimple-walk.h"
++#include "file-map.h"
++
+ #include "langhooks-def.h"	/* FIXME: for lhd_set_decl_assembler_name */
+ #include "builtins.h"
+ #include "asan.h"
+diff --git a/libcpp/Makefile.in b/libcpp/Makefile.in
+index 0bd3787c25e..d3b52956b52 100644
+--- a/libcpp/Makefile.in
++++ b/libcpp/Makefile.in
+@@ -84,12 +84,12 @@ DEPMODE = $(CXXDEPMODE)
+ 
+ 
+ libcpp_a_OBJS = charset.o directives.o directives-only.o errors.o \
+-	expr.o files.o identifiers.o init.o lex.o line-map.o macro.o \
+-	mkdeps.o pch.o symtab.o traditional.o
++	expr.o file-map.o files.o identifiers.o init.o lex.o line-map.o \
++	macro.o mkdeps.o pch.o symtab.o traditional.o
+ 
+ libcpp_a_SOURCES = charset.c directives.c directives-only.c errors.c \
+-	expr.c files.c identifiers.c init.c lex.c line-map.c macro.c \
+-	mkdeps.c pch.c symtab.c traditional.c
++	expr.c file-map.c files.c identifiers.c init.c lex.c line-map.c \
++	macro.c mkdeps.c pch.c symtab.c traditional.c
+ 
+ all: libcpp.a $(USED_CATALOGS)
+ 
+@@ -263,7 +263,7 @@ po/$(PACKAGE).pot: $(libcpp_a_SOURCES)
+ 
+ TAGS_SOURCES = $(libcpp_a_SOURCES) internal.h ucnid.h \
+     include/line-map.h include/symtab.h include/cpp-id-data.h \
+-    include/cpplib.h include/mkdeps.h system.h
++    include/cpplib.h include/mkdeps.h system.h include/file-map.h
+ 
+ TAGS: $(TAGS_SOURCES)
+ 	cd $(srcdir) && etags $(TAGS_SOURCES)
+diff --git a/libcpp/file-map.c b/libcpp/file-map.c
+new file mode 100644
+index 00000000000..18035ef6a72
+--- /dev/null
++++ b/libcpp/file-map.c
+@@ -0,0 +1,92 @@
++/* Map one directory name to another in __FILE__, __BASE_FILE__
++   and __builtin_FILE ().
++   Copyright (C) 2001-2016 Free Software Foundation, Inc.
++
++This program is free software; you can redistribute it and/or modify it
++under the terms of the GNU General Public License as published by the
++Free Software Foundation; either version 3, or (at your option) any
++later version.
++
++This program is distributed in the hope that it will be useful,
++but WITHOUT ANY WARRANTY; without even the implied warranty of
++MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++GNU General Public License for more details.
++
++You should have received a copy of the GNU General Public License
++along with this program; see the file COPYING3.  If not see
++<http://www.gnu.org/licenses/>.
++
++ In other words, you are welcome to use, share and improve this program.
++ You are forbidden to forbid anyone else to use, share and improve
++ what you give them.   Help stamp out software-hoarding!  */
++
++#include "config.h"
++#include "system.h"
++#include "file-map.h"
++
++/* Structure recording the mapping from source file and directory
++   names at compile time to __FILE__ */
++typedef struct file_prefix_map
++{
++  const char *old_prefix;
++  const char *new_prefix;
++  size_t old_len;
++  size_t new_len;
++  struct file_prefix_map *next;
++} file_prefix_map;
++
++/* Linked list of such structures.  */
++static file_prefix_map *file_prefix_maps;
++
++/* Record prefix mapping of __FILE__.  ARG is the argument to
++   -ffile-prefix-map and must be of the form OLD=NEW.  */
++int
++add_file_prefix_map (const char *arg)
++{
++  file_prefix_map *map;
++  const char *p;
++
++  p = strchr (arg, '=');
++  if (!p)
++  {
++      fprintf(stderr, "invalid argument %qs to -ffile-prefix-map", arg);
++      return -1;
++  }
++  map = XNEW (file_prefix_map);
++  map->old_prefix = xstrndup (arg, p - arg);
++  map->old_len = p - arg;
++  p++;
++  map->new_prefix = xstrdup (p);
++  map->new_len = strlen (p);
++  map->next = file_prefix_maps;
++  file_prefix_maps = map;
++
++  return 0;
++}
++
++/* Perform user-specified mapping of __FILE__ prefixes.  Return
++   the new name corresponding to filename.  */
++
++const char *
++remap_file_filename (const char *filename)
++{
++  file_prefix_map *map;
++  char *s;
++  const char *name;
++  size_t name_len;
++
++  for (map = file_prefix_maps; map; map = map->next)
++    if (filename_ncmp (filename, map->old_prefix, map->old_len) == 0)
++      break;
++  if (!map)
++    return filename;
++  name = filename + map->old_len;
++  name_len = strlen (name) + 1;
++  s = (char *) alloca (name_len + map->new_len);
++  memcpy (s, map->new_prefix, map->new_len);
++  memcpy (s + map->new_len, name, name_len);
++
++  return xstrdup (s);
++}
++
++
+diff --git a/libcpp/include/file-map.h b/libcpp/include/file-map.h
+new file mode 100644
+index 00000000000..87503152d27
+--- /dev/null
++++ b/libcpp/include/file-map.h
+@@ -0,0 +1,30 @@
++/* Map one directory name to another in __FILE__, __BASE_FILE__
++   and __builtin_FILE ().
++   Copyright (C) 2001-2016 Free Software Foundation, Inc.
++
++This program is free software; you can redistribute it and/or modify it
++under the terms of the GNU General Public License as published by the
++Free Software Foundation; either version 3, or (at your option) any
++later version.
++
++This program is distributed in the hope that it will be useful,
++but WITHOUT ANY WARRANTY; without even the implied warranty of
++MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
++GNU General Public License for more details.
++
++You should have received a copy of the GNU General Public License
++along with this program; see the file COPYING3.  If not see
++<http://www.gnu.org/licenses/>.
++
++ In other words, you are welcome to use, share and improve this program.
++ You are forbidden to forbid anyone else to use, share and improve
++ what you give them.   Help stamp out software-hoarding!  */
++
++#ifndef LIBCPP_FILE_MAP_H
++#define LIBCPP_FILE_MAP_H
++
++const char * remap_file_filename (const char *filename);
++
++int add_file_prefix_map (const char *arg);
++
++#endif /* !LIBCPP_FILE_MAP_H  */
+diff --git a/libcpp/macro.c b/libcpp/macro.c
+index de18c2210cf..2748c70d520 100644
+--- a/libcpp/macro.c
++++ b/libcpp/macro.c
+@@ -26,6 +26,7 @@ along with this program; see the file COPYING3.  If not see
+ #include "system.h"
+ #include "cpplib.h"
+ #include "internal.h"
++#include "file-map.h"
+ 
+ typedef struct macro_arg macro_arg;
+ /* This structure represents the tokens of a macro argument.  These
+@@ -301,6 +302,7 @@ _cpp_builtin_macro_text (cpp_reader *pfile, cpp_hashnode *node,
+ 	    if (!name)
+ 	      abort ();
+ 	  }
++	name = remap_file_filename (name);
+ 	len = strlen (name);
+ 	buf = _cpp_unaligned_alloc (pfile, len * 2 + 3);
+ 	result = buf;
+-- 
+2.12.2
+
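For readers following the remapping logic above, here is a minimal standalone sketch of what remap_file_filename() in libcpp/file-map.c effectively does for __FILE__. It is not part of the commit: the old/new prefixes and the file paths are hypothetical, and a single hard-coded mapping stands in for the -ffile-prefix-map=OLD=NEW list the real code builds.

/* Standalone sketch of the __FILE__ prefix remapping (assumptions:
   hard-coded prefixes, single mapping, demo-sized buffer).  */
#include <stdio.h>
#include <string.h>

static const char *old_prefix = "/build/tmp/work/src";   /* hypothetical */
static const char *new_prefix = "/usr/src/debug";        /* hypothetical */

static const char *
remap (const char *filename)
{
  static char buf[256];                       /* big enough for the demo */
  size_t old_len = strlen (old_prefix);

  if (strncmp (filename, old_prefix, old_len) != 0)
    return filename;                          /* no mapping applies */

  /* Replace the matched prefix and keep the rest of the path verbatim,
     mirroring what the patched libcpp code does.  */
  snprintf (buf, sizeof buf, "%s%s", new_prefix, filename + old_len);
  return buf;
}

int
main (void)
{
  puts (remap ("/build/tmp/work/src/foo/bar.c"));  /* /usr/src/debug/foo/bar.c */
  puts (remap ("/other/place/baz.c"));             /* unchanged */
  return 0;
}

The point of the mapping is reproducibility: a build-tree path baked into __FILE__ is rewritten to a stable, host-independent one.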
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0042-Reuse-fdebug-prefix-map-to-replace-ffile-prefix-map.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0042-Reuse-fdebug-prefix-map-to-replace-ffile-prefix-map.patch
new file mode 100644
index 0000000..5247167
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0042-Reuse-fdebug-prefix-map-to-replace-ffile-prefix-map.patch
@@ -0,0 +1,43 @@
+From ddddc7335539fb8a6d30beba21781762df159186 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Wed, 16 Mar 2016 05:39:59 -0400
+Subject: [PATCH 42/47] Reuse -fdebug-prefix-map to replace -ffile-prefix-map
+
+oe-core may be built with an external toolchain that does not
+support -ffile-prefix-map.
+
+Since -fdebug-prefix-map does the same thing, reuse it in place of
+-ffile-prefix-map.
+
+Upstream-Status: Inappropriate [oe-core specific]
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ gcc/opts-global.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/gcc/opts-global.c b/gcc/opts-global.c
+index 50bad77c347..32b1d286721 100644
+--- a/gcc/opts-global.c
++++ b/gcc/opts-global.c
+@@ -31,6 +31,7 @@ along with GCC; see the file COPYING3.  If not see
+ #include "langhooks.h"
+ #include "dbgcnt.h"
+ #include "debug.h"
++#include "file-map.h"
+ #include "output.h"
+ #include "plugin.h"
+ #include "toplev.h"
+@@ -357,6 +358,9 @@ handle_common_deferred_options (void)
+ 
+ 	case OPT_fdebug_prefix_map_:
+ 	  add_debug_prefix_map (opt->arg);
++
++	  /* Reuse -fdebug-prefix-map to replace -ffile-prefix-map */
++	  add_file_prefix_map (opt->arg);
+ 	  break;
+ 
+ 	case OPT_fdump_:
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0043-gcc-final.c-fdebug-prefix-map-support-to-remap-sourc.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0043-gcc-final.c-fdebug-prefix-map-support-to-remap-sourc.patch
new file mode 100644
index 0000000..74a5c86
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0043-gcc-final.c-fdebug-prefix-map-support-to-remap-sourc.patch
@@ -0,0 +1,54 @@
+From 5bc97be388485a5f8dd85db34372a1299bffd263 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Thu, 24 Mar 2016 11:23:14 -0400
+Subject: [PATCH 43/47] gcc/final.c: -fdebug-prefix-map support to remap
+ sources with relative path
+
+PR other/70428
+* final.c (remap_debug_filename): Use lrealpath to translate
+relative path before remapping
+
+https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70428
+Upstream-Status: Submitted [gcc-patches@gcc.gnu.org]
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ gcc/final.c | 15 ++++++++++++---
+ 1 file changed, 12 insertions(+), 3 deletions(-)
+
+diff --git a/gcc/final.c b/gcc/final.c
+index 820162b2d28..d74cb901abd 100644
+--- a/gcc/final.c
++++ b/gcc/final.c
+@@ -1559,16 +1559,25 @@ remap_debug_filename (const char *filename)
+   const char *name;
+   size_t name_len;
+ 
++  /* Support to remap filename with relative path  */
++  char *realpath = lrealpath (filename);
++  if (realpath == NULL)
++    return filename;
++
+   for (map = debug_prefix_maps; map; map = map->next)
+-    if (filename_ncmp (filename, map->old_prefix, map->old_len) == 0)
++    if (filename_ncmp (realpath, map->old_prefix, map->old_len) == 0)
+       break;
+   if (!map)
+-    return filename;
+-  name = filename + map->old_len;
++    {
++      free (realpath);
++      return filename;
++    }
++  name = realpath + map->old_len;
+   name_len = strlen (name) + 1;
+   s = (char *) alloca (name_len + map->new_len);
+   memcpy (s, map->new_prefix, map->new_len);
+   memcpy (s + map->new_len, name, name_len);
++  free (realpath);
+   return ggc_strdup (s);
+ }
+ 
+-- 
+2.12.2
+
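As a rough illustration of why the patch canonicalizes with lrealpath before matching, the sketch below (not part of the commit) uses plain POSIX realpath() in place of libiberty's lrealpath; the prefix and the relative path are made up.

/* Sketch: a relative source path cannot match an absolute old_prefix
   until it is resolved to an absolute path first.  */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int
main (void)
{
  const char *old_prefix = "/home/user/project";  /* hypothetical map */
  const char *filename = "../project/src/foo.c";  /* relative path */
  char resolved[PATH_MAX];

  if (realpath (filename, resolved) == NULL)
    {
      /* Keep the name untouched if it cannot be resolved, just as the
         patched remap_debug_filename() returns the original name.  */
      puts (filename);
      return 0;
    }

  if (strncmp (resolved, old_prefix, strlen (old_prefix)) == 0)
    printf ("match: %s would be remapped\n", resolved);
  else
    printf ("no match: %s kept as-is\n", resolved);
  return 0;
}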
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0044-libgcc-Add-knob-to-use-ldbl-128-on-ppc.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0044-libgcc-Add-knob-to-use-ldbl-128-on-ppc.patch
new file mode 100644
index 0000000..e39af9b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0044-libgcc-Add-knob-to-use-ldbl-128-on-ppc.patch
@@ -0,0 +1,124 @@
+From 847aec764540636ec654fd7a012e271afa8d4e0f Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 29 Apr 2016 20:03:28 +0000
+Subject: [PATCH 44/47] libgcc: Add knob to use ldbl-128 on ppc
+
+musl does not support a 128-bit long double, so we cannot assume
+that Linux as a whole supports ldbl-128. Instead, act on the
+configure option passed to gcc, defaulting to "no" on musl and
+"yes" otherwise when no option is given. Since the default
+behaviour is already to assume ldbl-128, this does not change
+the defaults.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Upstream-Status: Pending
+---
+ libgcc/Makefile.in           |  1 +
+ libgcc/config/rs6000/t-linux |  5 ++++-
+ libgcc/configure             | 18 ++++++++++++++++++
+ libgcc/configure.ac          | 12 ++++++++++++
+ 4 files changed, 35 insertions(+), 1 deletion(-)
+ mode change 100644 => 100755 libgcc/configure
+
+diff --git a/libgcc/Makefile.in b/libgcc/Makefile.in
+index a1a392de88d..2fe6889a342 100644
+--- a/libgcc/Makefile.in
++++ b/libgcc/Makefile.in
+@@ -48,6 +48,7 @@ unwind_header = @unwind_header@
+ md_unwind_header = @md_unwind_header@
+ sfp_machine_header = @sfp_machine_header@
+ thread_header = @thread_header@
++with_ldbl128 = @with_ldbl128@
+ 
+ host_noncanonical = @host_noncanonical@
+ real_host_noncanonical = @real_host_noncanonical@
+diff --git a/libgcc/config/rs6000/t-linux b/libgcc/config/rs6000/t-linux
+index 4f6d4c4a4d2..c50dd94a2da 100644
+--- a/libgcc/config/rs6000/t-linux
++++ b/libgcc/config/rs6000/t-linux
+@@ -1,3 +1,6 @@
+ SHLIB_MAPFILES += $(srcdir)/config/rs6000/libgcc-glibc.ver
+ 
+-HOST_LIBGCC2_CFLAGS += -mlong-double-128 -mno-minimal-toc
++ifeq ($(with_ldbl128),yes)
++HOST_LIBGCC2_CFLAGS += -mlong-double-128
++endif
++HOST_LIBGCC2_CFLAGS += -mno-minimal-toc
+diff --git a/libgcc/configure b/libgcc/configure
+old mode 100644
+new mode 100755
+index 45c459788c3..e2d19b144b8
+--- a/libgcc/configure
++++ b/libgcc/configure
+@@ -618,6 +618,7 @@ build_vendor
+ build_cpu
+ build
+ with_aix_soname
++with_ldbl128
+ enable_vtable_verify
+ enable_shared
+ libgcc_topdir
+@@ -667,6 +668,7 @@ with_cross_host
+ with_ld
+ enable_shared
+ enable_vtable_verify
++with_long_double_128
+ with_aix_soname
+ enable_version_specific_runtime_libs
+ with_slibdir
+@@ -1324,6 +1326,7 @@ Optional Packages:
+   --with-target-subdir=SUBDIR      Configuring in a subdirectory for target
+   --with-cross-host=HOST           Configuring with a cross compiler
+   --with-ld               arrange to use the specified ld (full pathname)
++  --with-long-double-128  use 128-bit long double by default
+   --with-aix-soname=aix|svr4|both
+                           shared library versioning (aka "SONAME") variant to
+                           provide on AIX
+@@ -2208,6 +2211,21 @@ fi
+ 
+ 
+ 
++# Check whether --with-long-double-128 was given.
++if test "${with_long_double_128+set}" = set; then :
++  withval=$with_long_double_128; with_ldbl128="$with_long_double_128"
++else
++  case "${host}" in
++ power*-*-musl*)
++   with_ldbl128="no";;
++ *) with_ldbl128="yes";;
++ esac
++
++fi
++
++
++
++
+ # Check whether --with-aix-soname was given.
+ if test "${with_aix_soname+set}" = set; then :
+   withval=$with_aix_soname; case "${host}:${enable_shared}" in
+diff --git a/libgcc/configure.ac b/libgcc/configure.ac
+index af151473709..dada52416da 100644
+--- a/libgcc/configure.ac
++++ b/libgcc/configure.ac
+@@ -77,6 +77,18 @@ AC_ARG_ENABLE(vtable-verify,
+ [enable_vtable_verify=no])
+ AC_SUBST(enable_vtable_verify)
+ 
++AC_ARG_WITH(long-double-128,
++[AS_HELP_STRING([--with-long-double-128],
++    [use 128-bit long double by default])],
++      with_ldbl128="$with_long_double_128",
++[case "${host}" in
++ power*-*-musl*)
++   with_ldbl128="no";;
++ *) with_ldbl128="yes";;
++ esac
++])
++AC_SUBST(with_ldbl128)
++
+ AC_ARG_WITH(aix-soname,
+ [AS_HELP_STRING([--with-aix-soname=aix|svr4|both],
+     [shared library versioning (aka "SONAME") variant to provide on AIX])],
+-- 
+2.12.2
+
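A quick, illustrative way to see the property this knob controls is to check the long double layout the target toolchain actually provides. The expected values in the comments are typical for glibc vs. musl on PowerPC and are stated here as assumptions, not something this commit verifies.

/* Report the long double format the compiler/libc combination uses.  */
#include <float.h>
#include <stdio.h>

int
main (void)
{
  printf ("sizeof (long double) = %zu bytes\n", sizeof (long double));
  printf ("LDBL_MANT_DIG        = %d\n", LDBL_MANT_DIG);
  /* glibc on ppc/ppc64 typically reports 16 bytes / 106 mantissa bits
     (IBM double-double), while musl reports 8 bytes / 53 -- which is
     why the configure default above differs per libc.  */
  return 0;
}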
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0045-Link-libgcc-using-LDFLAGS-not-just-SHLIB_LDFLAGS.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0045-Link-libgcc-using-LDFLAGS-not-just-SHLIB_LDFLAGS.patch
new file mode 100644
index 0000000..3aa038c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0045-Link-libgcc-using-LDFLAGS-not-just-SHLIB_LDFLAGS.patch
@@ -0,0 +1,29 @@
+From 92beb883ab57a23a35ba76c496bc1f4cabb1690e Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 4 May 2016 21:11:34 -0700
+Subject: [PATCH 45/47] Link libgcc using LDFLAGS, not just SHLIB_LDFLAGS
+
+Upstream-Status: Pending
+
+Signed-off-by: Christopher Larson <chris_larson@mentor.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ libgcc/config/t-slibgcc | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/libgcc/config/t-slibgcc b/libgcc/config/t-slibgcc
+index 76be7710960..a1ee2ee26ed 100644
+--- a/libgcc/config/t-slibgcc
++++ b/libgcc/config/t-slibgcc
+@@ -32,7 +32,7 @@ SHLIB_INSTALL_SOLINK = $(LN_S) $(SHLIB_SONAME) \
+ 	$(DESTDIR)$(slibdir)$(SHLIB_SLIBDIR_QUAL)/$(SHLIB_SOLINK)
+ 
+ SHLIB_LINK = $(CC) $(LIBGCC2_CFLAGS) -shared -nodefaultlibs \
+-	$(SHLIB_LDFLAGS) \
++	$(LDFLAGS) $(SHLIB_LDFLAGS) \
+ 	-o $(SHLIB_DIR)/$(SHLIB_SONAME).tmp @multilib_flags@ \
+ 	$(SHLIB_OBJS) $(SHLIB_LC) && \
+ 	rm -f $(SHLIB_DIR)/$(SHLIB_SOLINK) && \
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0047-sync-gcc-stddef.h-with-musl.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0047-sync-gcc-stddef.h-with-musl.patch
new file mode 100644
index 0000000..65d22f1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0047-sync-gcc-stddef.h-with-musl.patch
@@ -0,0 +1,91 @@
+From 9b951c8f6b0aaff7c16dc4db72b5e56ec73810bb Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 3 Feb 2017 12:56:00 -0800
+Subject: [PATCH 47/47] sync gcc stddef.h with musl
+
+musl defines ptrdiff_t, size_t and wchar_t itself,
+so don't define them here if musl is already defining them.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Inappropriate [OE-Specific]
+
+ gcc/ginclude/stddef.h | 9 +++++++++
+ 1 file changed, 9 insertions(+)
+
+diff --git a/gcc/ginclude/stddef.h b/gcc/ginclude/stddef.h
+index 872f451cac9..7e90938387c 100644
+--- a/gcc/ginclude/stddef.h
++++ b/gcc/ginclude/stddef.h
+@@ -134,6 +134,7 @@ _TYPE_wchar_t;
+ #ifndef ___int_ptrdiff_t_h
+ #ifndef _GCC_PTRDIFF_T
+ #ifndef _PTRDIFF_T_DECLARED /* DragonFly */
++#ifndef __DEFINED_ptrdiff_t /* musl */
+ #define _PTRDIFF_T
+ #define _T_PTRDIFF_
+ #define _T_PTRDIFF
+@@ -143,10 +144,12 @@ _TYPE_wchar_t;
+ #define ___int_ptrdiff_t_h
+ #define _GCC_PTRDIFF_T
+ #define _PTRDIFF_T_DECLARED
++#define __DEFINED_ptrdiff_t /* musl */
+ #ifndef __PTRDIFF_TYPE__
+ #define __PTRDIFF_TYPE__ long int
+ #endif
+ typedef __PTRDIFF_TYPE__ ptrdiff_t;
++#endif /* __DEFINED_ptrdiff_t */
+ #endif /* _PTRDIFF_T_DECLARED */
+ #endif /* _GCC_PTRDIFF_T */
+ #endif /* ___int_ptrdiff_t_h */
+@@ -184,6 +187,7 @@ typedef __PTRDIFF_TYPE__ ptrdiff_t;
+ #ifndef _GCC_SIZE_T
+ #ifndef _SIZET_
+ #ifndef __size_t
++#ifndef __DEFINED_size_t /* musl */
+ #define __size_t__	/* BeOS */
+ #define __SIZE_T__	/* Cray Unicos/Mk */
+ #define _SIZE_T
+@@ -200,6 +204,7 @@ typedef __PTRDIFF_TYPE__ ptrdiff_t;
+ #define ___int_size_t_h
+ #define _GCC_SIZE_T
+ #define _SIZET_
++#define __DEFINED_size_t /* musl */
+ #if (defined (__FreeBSD__) && (__FreeBSD__ >= 5)) \
+   || defined(__DragonFly__) \
+   || defined(__FreeBSD_kernel__)
+@@ -235,6 +240,7 @@ typedef long ssize_t;
+ #endif /* _SIZE_T */
+ #endif /* __SIZE_T__ */
+ #endif /* __size_t__ */
++#endif /* __DEFINED_size_t */
+ #undef	__need_size_t
+ #endif /* _STDDEF_H or __need_size_t.  */
+ 
+@@ -264,6 +270,7 @@ typedef long ssize_t;
+ #ifndef ___int_wchar_t_h
+ #ifndef __INT_WCHAR_T_H
+ #ifndef _GCC_WCHAR_T
++#ifndef __DEFINED_wchar_t /* musl */
+ #define __wchar_t__	/* BeOS */
+ #define __WCHAR_T__	/* Cray Unicos/Mk */
+ #define _WCHAR_T
+@@ -279,6 +286,7 @@ typedef long ssize_t;
+ #define __INT_WCHAR_T_H
+ #define _GCC_WCHAR_T
+ #define _WCHAR_T_DECLARED
++#define __DEFINED_wchar_t /* musl */
+ 
+ /* On BSD/386 1.1, at least, machine/ansi.h defines _BSD_WCHAR_T_
+    instead of _WCHAR_T_, and _BSD_RUNE_T_ (which, unlike the other
+@@ -344,6 +352,7 @@ typedef __WCHAR_TYPE__ wchar_t;
+ #endif
+ #endif /* __WCHAR_T__ */
+ #endif /* __wchar_t__ */
++#endif /* __DEFINED_wchar_t musl */
+ #undef	__need_wchar_t
+ #endif /* _STDDEF_H or __need_wchar_t.  */
+ 
+-- 
+2.12.2
+
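The guard idiom the patch relies on can be shown in a few lines. This compressed sketch is hypothetical (only the size_t case, compiled as C11/gnu11 where a redundant typedef of the same type is allowed) and only mirrors the shape of the real stddef.h guards.

/* If the libc (musl) has already typedef'd size_t, it sets
   __DEFINED_size_t and this fallback is skipped entirely.  */
#ifndef __DEFINED_size_t            /* set by musl's bits/alltypes.h */
typedef __SIZE_TYPE__ size_t;       /* __SIZE_TYPE__ is GCC's built-in macro */
#define __DEFINED_size_t
#endif

#include <stdio.h>                  /* also pulls in the libc's size_t */

int
main (void)
{
  size_t n = sizeof (long);
  printf ("%zu\n", n);
  return 0;
}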
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0048-gcc-Enable-static-PIE.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0048-gcc-Enable-static-PIE.patch
new file mode 100644
index 0000000..a96e913
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0048-gcc-Enable-static-PIE.patch
@@ -0,0 +1,46 @@
+From 44ef80688b56beea85c0070840dea1e2a4e34aed Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 13 Jun 2017 12:12:52 -0700
+Subject: [PATCH 49/49] gcc: Enable static PIE
+
+Static PIE support in GCC
+see
+https://gcc.gnu.org/ml/gcc/2015-06/msg00008.html
+
+startfiles before patch:
+ -static -> crt1.o crti.o crtbeginT.o
+ -static -PIE -> crt1.o crti.o crtbeginT.o
+ 
+after patch:
+ -static -> crt1.o crti.o crtbeginT.o
+ -static -PIE -> rcrt1.o crti.o crtbeginS.o
+ 
+Upstream-Status: Pending
+
+Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
+
+---
+ gcc/config/gnu-user.h | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/gcc/config/gnu-user.h b/gcc/config/gnu-user.h
+index de605b0..b035bbe 100644
+--- a/gcc/config/gnu-user.h
++++ b/gcc/config/gnu-user.h
+@@ -52,11 +52,11 @@ see the files COPYING3 and COPYING.RUNTIME respectively.  If not, see
+ #define GNU_USER_TARGET_STARTFILE_SPEC \
+   "%{shared:; \
+      pg|p|profile:gcrt1.o%s; \
+-     static:crt1.o%s; \
++     static: %{" PIE_SPEC ": rcrt1.o%s; :crt1.o%s}; \
+      " PIE_SPEC ":Scrt1.o%s; \
+      :crt1.o%s} \
+    crti.o%s \
+-   %{static:crtbeginT.o%s; \
++   %{static: %{" PIE_SPEC ": crtbeginS.o%s; :crtbeginT.o%s}; \
+      shared|" PIE_SPEC ":crtbeginS.o%s; \
+      :crtbegin.o%s} \
+    %{fvtable-verify=none:%s; \
+
+2.13.1
+
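A minimal way to exercise the new spec, assuming a gcc built with this patch and a C library that ships rcrt1.o (e.g. musl). The option spelling follows the before/after list in the patch above, and the expected startfiles and readelf output are expectations rather than results verified here; whether the link fully succeeds also depends on the libc and binutils in use.

/* Trivial program to exercise the patched startfile spec.
 *
 * Hypothetical invocations:
 *
 *   gcc -static            -o t-static t.c   # crt1.o  crti.o crtbeginT.o
 *   gcc -static -pie -fPIE -o t-pie    t.c   # rcrt1.o crti.o crtbeginS.o
 *
 * readelf -h t-pie should then report "Type: DYN", as expected for a
 * position-independent executable.  */
#include <stdio.h>

int
main (void)
{
  puts ("static PIE spec test");
  return 0;
}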
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0050-RISC-V-Handle-non-legitimate-address-in-riscv_legiti.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0050-RISC-V-Handle-non-legitimate-address-in-riscv_legiti.patch
new file mode 100644
index 0000000..5a14d04
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/0050-RISC-V-Handle-non-legitimate-address-in-riscv_legiti.patch
@@ -0,0 +1,51 @@
+From 16210e6270e200cd4892a90ecef608906be3a130 Mon Sep 17 00:00:00 2001
+From: Kito Cheng <kito.cheng@gmail.com>
+Date: Thu, 4 May 2017 02:11:13 +0800
+Subject: [PATCH] RISC-V: Handle non-legitimate address in
+ riscv_legitimize_move
+
+GCC may generate a non-legitimate address because we allow some
+load/store patterns with non-legitimate addresses in pic.md.
+
+  2017-05-12  Kito Cheng  <kito.cheng@gmail.com>
+
+      * config/riscv/riscv.c (riscv_legitimize_move): Handle
+      non-legitimate address.
+---
+Upstream-Status: Backport
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+ gcc/ChangeLog            |  5 +++++
+ gcc/config/riscv/riscv.c | 16 ++++++++++++++++
+ 2 files changed, 21 insertions(+)
+
+diff --git a/gcc/config/riscv/riscv.c b/gcc/config/riscv/riscv.c
+index f7fec4bfcf8..d519be1659a 100644
+--- a/gcc/config/riscv/riscv.c
++++ b/gcc/config/riscv/riscv.c
+@@ -1385,6 +1385,22 @@ riscv_legitimize_move (enum machine_mode mode, rtx dest, rtx src)
+       return true;
+     }
+ 
++  /* RISC-V GCC may generate a non-legitimate address because we provide
++     patterns to optimize access to PIC local symbols, which can make GCC
++     generate unrecognizable instructions during optimization.  */
++
++  if (MEM_P (dest) && !riscv_legitimate_address_p (mode, XEXP (dest, 0),
++						   reload_completed))
++    {
++      XEXP (dest, 0) = riscv_force_address (XEXP (dest, 0), mode);
++    }
++
++  if (MEM_P (src) && !riscv_legitimate_address_p (mode, XEXP (src, 0),
++						  reload_completed))
++    {
++      XEXP (src, 0) = riscv_force_address (XEXP (src, 0), mode);
++    }
++
+   return false;
+ }
+ 
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/fix-segmentation-fault-precompiled-hdr.patch b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/fix-segmentation-fault-precompiled-hdr.patch
new file mode 100644
index 0000000..c0adef6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-7.3/fix-segmentation-fault-precompiled-hdr.patch
@@ -0,0 +1,49 @@
+
+Prevent a segmentation fault which occurs when the wrong structure
+member is used to access the name of certain named operators, such as
+CPP_NOT, CPP_AND etc. "token->val.node.spelling" cannot be used in
+those cases, as it may not be initialized at all.
+
+
+[YOCTO #11738]
+
+Upstream-Status: Pending
+
+Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
+
+diff --git a/libcpp/lex.c b/libcpp/lex.c
+--- a/libcpp/lex.c
++++ b/libcpp/lex.c
+@@ -3229,11 +3229,27 @@
+     spell_ident:
+     case SPELL_IDENT:
+       if (forstring)
+-	{
+-	  memcpy (buffer, NODE_NAME (token->val.node.spelling),
+-		  NODE_LEN (token->val.node.spelling));
+-	  buffer += NODE_LEN (token->val.node.spelling);
+-	}
++        {
++          if (token->type == CPP_NAME)
++            {
++              memcpy (buffer, NODE_NAME (token->val.node.spelling),
++                    NODE_LEN (token->val.node.spelling));
++              buffer += NODE_LEN (token->val.node.spelling);
++              break;
++            }
++          /* NAMED_OP, cannot use node.spelling */
++          if (token->flags & NAMED_OP)
++            {
++              const char *str = cpp_named_operator2name (token->type);
++              if (str)
++                {
++                  size_t len = strlen(str);
++                  memcpy(buffer, str, len);
++                  buffer += len;
++                }
++              break;
++            }
++        }
+       else
+ 	buffer = _cpp_spell_ident_ucns (buffer, token->val.node.node);
+       break;
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-common.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-common.inc
index 857aa8f..aa3b53e 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-common.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-common.inc
@@ -43,7 +43,7 @@
     return ""
 
 def get_long_double_setting(bb, d):
-    if d.getVar('TRANSLATED_TARGET_ARCH') in [ 'powerpc', 'powerpc64' ] and d.getVar('TCLIBC') in [ 'uclibc', 'glibc' ]:
+    if d.getVar('TRANSLATED_TARGET_ARCH') in [ 'powerpc', 'powerpc64' ] and d.getVar('TCLIBC') in [ 'glibc' ]:
         return "--with-long-double-128"
     else:
         return "--without-long-double-128"
@@ -105,6 +105,7 @@
 BINV = "${PV}"
 #S = "${WORKDIR}/gcc-${PV}"
 S = "${TMPDIR}/work-shared/gcc-${PV}-${PR}/gcc-${PV}"
+
 B = "${WORKDIR}/gcc-${PV}/build.${HOST_SYS}.${TARGET_SYS}"
 
 target_includedir ?= "${includedir}"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-configure-common.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-configure-common.inc
index 00ef89e..e2ce234 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-configure-common.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-configure-common.inc
@@ -22,6 +22,8 @@
 GCCMULTILIB ?= "--disable-multilib"
 GCCTHREADS ?= "posix"
 
+GCCPIE ??= ""
+
 EXTRA_OECONF = "\
     ${@['--enable-clocale=generic', ''][d.getVar('USE_NLS') != 'no']} \
     --with-gnu-ld \
@@ -29,6 +31,7 @@
     --enable-languages=${LANGUAGES} \
     --enable-threads=${GCCTHREADS} \
     ${GCCMULTILIB} \
+    ${GCCPIE} \
     --enable-c99 \
     --enable-long-long \
     --enable-symvers=gnu \
@@ -53,9 +56,7 @@
 # in the config.log files (which might not get generated until do_compile
 # hence being missed by the insane do_configure check).
 
-# Build uclibc compilers without cxa_atexit support
 EXTRA_OECONF_append_linux = " --enable-__cxa_atexit"
-EXTRA_OECONF_append_libc-uclibc = " --enable-__cxa_atexit"
 
 EXTRA_OECONF_append_mips64 = " --with-abi=64 --with-arch-64=mips64 --with-tune-64=mips64"
 EXTRA_OECONF_append_mips64el = " --with-abi=64 --with-arch-64=mips64 --with-tune-64=mips64"
@@ -66,14 +67,6 @@
 EXTRA_OECONF_append_mipsisa64r6el = " --with-abi=64 --with-arch-64=mips64r6"
 EXTRA_OECONF_append_mipsisa64r6 = " --with-abi=64 --with-arch-64=mips64r6"
 
-# ARMv6+ adds atomic instructions that affect the ABI in libraries built
-# with TUNE_CCARGS in gcc-runtime.  Make the compiler default to a
-# compatible architecture.  armv6 and armv7a cover the minimum tune
-# features used in OE.
-EXTRA_OECONF_append_armv6 = " --with-arch=armv6"
-EXTRA_OECONF_append_armv7a = " --with-arch=armv7-a"
-EXTRA_OECONF_append_armv7ve = " --with-arch=armv7-a"
-
 EXTRA_OECONF_GCC_FLOAT ??= ""
 CPPFLAGS = ""
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-canadian.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-canadian.inc
index ec1d281..6d77620 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-canadian.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-canadian.inc
@@ -60,6 +60,7 @@
 
 do_compile () {
 	oe_runmake all-host configure-target-libgcc
+	(cd ${B}/${TARGET_SYS}/libgcc; oe_runmake enable-execute-stack.c unwind.h md-unwind-support.h sfp-machine.h gthr-default.h)
 }
 
 # Having anything auto depending on gcc-cross-sdk is a really bad idea...
@@ -80,7 +81,7 @@
     ${includedir}/c++/${BINV} \
     ${prefix}/${TARGET_SYS}/bin/* \
     ${prefix}/${TARGET_SYS}/lib/* \
-    ${prefix}/${TARGET_SYS}/usr/include/* \
+    ${prefix}/${TARGET_SYS}${target_includedir}/* \
 "
 INSANE_SKIP_${PN} += "dev-so"
 
@@ -96,7 +97,7 @@
 BINRELPATH = "${@os.path.relpath(d.expand("${bindir}"), d.expand("${libexecdir}/gcc/${TARGET_SYS}/${BINV}"))}"
 
 do_install () {
-	( cd ${B}/${TARGET_SYS}/libgcc; oe_runmake 'DESTDIR=${D}' install-unwind_h )
+	( cd ${B}/${TARGET_SYS}/libgcc; oe_runmake 'DESTDIR=${D}' install-unwind_h-forbuild install-unwind_h )
 	oe_runmake 'DESTDIR=${D}' install-host
 
 	# Cleanup some of the ${libdir}{,exec}/gcc stuff ...
@@ -153,7 +154,7 @@
 DEPENDS += "nativesdk-gmp nativesdk-mpfr nativesdk-libmpc ${ELFUTILS} nativesdk-zlib"
 RDEPENDS_${PN} += "nativesdk-mpfr nativesdk-libmpc ${ELFUTILS}"
 
-SYSTEMHEADERS = "/usr/include"
+SYSTEMHEADERS = "${target_includedir}/"
 SYSTEMLIBS = "${target_base_libdir}/"
 SYSTEMLIBS1 = "${target_libdir}/"
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-canadian_5.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-canadian_7.3.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-canadian_5.4.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-canadian_7.3.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-initial.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-initial.inc
index 9502c2b..892b1db 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-initial.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-initial.inc
@@ -35,10 +35,11 @@
 
 do_compile () {
     oe_runmake all-gcc configure-target-libgcc
+    (cd ${B}/${TARGET_SYS}/libgcc; oe_runmake enable-execute-stack.c unwind.h md-unwind-support.h sfp-machine.h gthr-default.h)
 }
 
 do_install () {
-	( cd ${B}/${TARGET_SYS}/libgcc; oe_runmake 'DESTDIR=${D}' install-unwind_h )
+	( cd ${B}/${TARGET_SYS}/libgcc; oe_runmake 'DESTDIR=${D}' install-unwind_h-forbuild install-unwind_h)
 	oe_runmake 'DESTDIR=${D}' install-gcc
 
 	# We don't really need this (here shares/ contains man/, info/, locale/).
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-initial_5.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-initial_7.3.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-initial_5.4.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross-initial_7.3.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross.inc
index 45985c3..1e184a6 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross.inc
@@ -57,6 +57,7 @@
 	export LDFLAGS_FOR_TARGET="${TARGET_LDFLAGS}"
 
 	oe_runmake all-host configure-target-libgcc
+	(cd ${B}/${TARGET_SYS}/libgcc; oe_runmake enable-execute-stack.c unwind.h md-unwind-support.h sfp-machine.h gthr-default.h)
 	# now generate script to drive testing
 	echo "#!/usr/bin/env sh" >${B}/${TARGET_PREFIX}testgcc
 	set >> ${B}/${TARGET_PREFIX}testgcc
@@ -155,22 +156,24 @@
 BINRELPATH = "${@os.path.relpath(d.expand("${STAGING_DIR_NATIVE}${prefix_native}/bin/${TARGET_SYS}"), d.expand("${libexecdir}/gcc/${TARGET_SYS}/${BINV}"))}"
 
 do_install () {
-	( cd ${B}/${TARGET_SYS}/libgcc; oe_runmake 'DESTDIR=${D}' install-unwind_h )
+	( cd ${B}/${TARGET_SYS}/libgcc; oe_runmake 'DESTDIR=${D}' install-unwind_h-forbuild install-unwind_h )
 	oe_runmake 'DESTDIR=${D}' install-host
 
 	install -d ${D}${target_base_libdir}
 	install -d ${D}${target_libdir}
-    
+
 	# Link gfortran to g77 to satisfy not-so-smart configure or hard coded g77
 	# gfortran is fully backwards compatible. This is a safe and practical solution. 
-	ln -sf ${STAGING_DIR_NATIVE}${prefix_native}/bin/${TARGET_PREFIX}gfortran ${STAGING_DIR_NATIVE}${prefix_native}/bin/${TARGET_PREFIX}g77 || true
+	if [ -n "${@d.getVar('FORTRAN')}" ]; then
+		ln -sf ${STAGING_DIR_NATIVE}${prefix_native}/bin/${TARGET_PREFIX}gfortran ${STAGING_DIR_NATIVE}${prefix_native}/bin/${TARGET_PREFIX}g77 || true
+		fortsymlinks="g77 gfortran"
+	fi
 
-	
 	# Insert symlinks into libexec so when tools without a prefix are searched for, the correct ones are
 	# found. These need to be relative paths so they work in different locations.
 	dest=${D}${libexecdir}/gcc/${TARGET_SYS}/${BINV}/
 	install -d $dest
-	for t in ar as ld nm objcopy objdump ranlib strip g77 gcc cpp gfortran; do
+	for t in ar as ld ld.bfd ld.gold nm objcopy objdump ranlib strip gcc cpp $fortsymlinks; do
 		ln -sf ${BINRELPATH}/${TARGET_PREFIX}$t $dest$t
 		ln -sf ${BINRELPATH}/${TARGET_PREFIX}$t ${dest}${TARGET_PREFIX}$t
 	done
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross_5.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross_7.3.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross_5.4.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-cross_7.3.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-crosssdk-initial_5.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-crosssdk-initial_7.3.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-crosssdk-initial_5.4.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-crosssdk-initial_7.3.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-crosssdk_5.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-crosssdk_7.3.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-crosssdk_5.4.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-crosssdk_7.3.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-runtime.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-runtime.inc
index 0dc405c..ee08529 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-runtime.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-runtime.inc
@@ -34,7 +34,6 @@
 
 do_configure () {
 	export CXX="${CXX} -nostdinc++ -nostdlib++"
-
 	for d in libgcc ${RUNTIMETARGET}; do
 		echo "Configuring $d"
 		rm -rf ${B}/${TARGET_SYS}/$d/
@@ -43,6 +42,9 @@
 		chmod a+x ${S}/$d/configure
 		relpath=${@os.path.relpath("${S}/$d", "${B}/${TARGET_SYS}/$d")}
 		$relpath/configure ${CONFIGUREOPTS} ${EXTRA_OECONF}
+		if [ "$d" = "libgcc" ]; then
+			(cd ${B}/${TARGET_SYS}/libgcc; oe_runmake enable-execute-stack.c unwind.h md-unwind-support.h sfp-machine.h gthr-default.h)
+		fi
 	done
 }
 EXTRACONFFUNCS += "extract_stashed_builddir"
@@ -99,8 +101,8 @@
 
 	if [ "${TCLIBC}" != "glibc" ]; then
 		case "${TARGET_OS}" in
-			"linux-musl" | "linux-uclibc" | "linux-*spe") extra_target_os="linux";;
-			"linux-musleabi" | "linux-uclibceabi") extra_target_os="linux-gnueabi";;
+			"linux-musl" | "linux-*spe") extra_target_os="linux";;
+			"linux-musleabi") extra_target_os="linux-gnueabi";;
 			*) extra_target_os="linux";;
 		esac
 		ln -s ${TARGET_SYS} ${D}${includedir}/c++/${BINV}/${TARGET_ARCH}${TARGET_VENDOR}-$extra_target_os
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-runtime_5.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-runtime_7.3.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-runtime_5.4.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-runtime_7.3.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-sanitizers.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-sanitizers.inc
index f97885b..3183b29 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-sanitizers.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-sanitizers.inc
@@ -97,6 +97,7 @@
 FILES_libtsan-dev += "\
     ${libdir}/libtsan.so \
     ${libdir}/libtsan.la \
+    ${libdir}/libtsan_*.o \
 "
 FILES_libtsan-staticdev += "${libdir}/libtsan.a"
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-sanitizers_5.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-sanitizers_7.3.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-sanitizers_5.4.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-sanitizers_7.3.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-source_5.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-source_7.3.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-source_5.4.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-source_7.3.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-target.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-target.inc
index eef4434..b6e31f5 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-target.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc-target.inc
@@ -9,6 +9,14 @@
 
 EXTRA_OECONF_append_linuxstdbase = " --enable-clocale=gnu"
 
+# ARMv6+ adds atomic instructions that affect the ABI in libraries built
+# with TUNE_CCARGS in gcc-runtime.  Make the compiler default to a
+# compatible architecture.  armv6 and armv7a cover the minimum tune
+# features used in OE.
+EXTRA_OECONF_append_armv6 = " --with-arch=armv6"
+EXTRA_OECONF_append_armv7a = " --with-arch=armv7-a"
+EXTRA_OECONF_append_armv7ve = " --with-arch=armv7-a"
+
 # libcc1 requres gcc_cv_objdump when cross build, but gcc_cv_objdump is
 # set in subdir gcc, so subdir libcc1 can't use it, export it here to
 # fix the problem.
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc_5.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc_5.4.bb
deleted file mode 100644
index 2c618df..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc_5.4.bb
+++ /dev/null
@@ -1,9 +0,0 @@
-require recipes-devtools/gcc/gcc-${PV}.inc
-require gcc-target.inc
-
-# Building with thumb enabled on armv4t fails with
-# | gcc-4.8.1-r0/gcc-4.8.1/gcc/cp/decl.c:7438:(.text.unlikely+0x2fa): relocation truncated to fit: R_ARM_THM_CALL against symbol `fancy_abort(char const*, int, char const*)' defined in .glue_7 section in linker stubs
-# | gcc-4.8.1-r0/gcc-4.8.1/gcc/cp/decl.c:7442:(.text.unlikely+0x318): additional relocation overflows omitted from the output
-ARM_INSTRUCTION_SET_armv4 = "arm"
-
-BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc_7.3.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc_7.3.bb
new file mode 100644
index 0000000..ab208e7
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/gcc_7.3.bb
@@ -0,0 +1,10 @@
+require recipes-devtools/gcc/gcc-${PV}.inc
+require gcc-target.inc
+
+# Building with thumb enabled on armv4t armv5t fails with
+# | gcc-4.8.1-r0/gcc-4.8.1/gcc/cp/decl.c:7438:(.text.unlikely+0x2fa): relocation truncated to fit: R_ARM_THM_CALL against symbol `fancy_abort(char const*, int, char const*)' defined in .glue_7 section in linker stubs
+# | gcc-4.8.1-r0/gcc-4.8.1/gcc/cp/decl.c:7442:(.text.unlikely+0x318): additional relocation overflows omitted from the output
+ARM_INSTRUCTION_SET_armv4 = "arm"
+ARM_INSTRUCTION_SET_armv5 = "arm"
+
+BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgcc-initial_5.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgcc-initial_7.3.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gcc/libgcc-initial_5.4.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gcc/libgcc-initial_7.3.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgcc.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgcc.inc
index 38d1643..1500fb5 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgcc.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgcc.inc
@@ -5,8 +5,8 @@
 do_install_append_class-target () {
 	if [ "${TCLIBC}" != "glibc" ]; then
 		case "${TARGET_OS}" in
-			"linux-musl" | "linux-uclibc" | "linux-*spe") extra_target_os="linux";;
-			"linux-musleabi" | "linux-uclibceabi") extra_target_os="linux-gnueabi";;
+			"linux-musl" | "linux-*spe") extra_target_os="linux";;
+			"linux-musleabi") extra_target_os="linux-gnueabi";;
 			*) extra_target_os="linux";;
 		esac
 		ln -s ${TARGET_SYS} ${D}${libdir}/${TARGET_ARCH}${TARGET_VENDOR}-$extra_target_os
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgcc_5.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgcc_7.3.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gcc/libgcc_5.4.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gcc/libgcc_7.3.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgfortran.inc b/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgfortran.inc
index 4846dec..5f5d4af 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgfortran.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgfortran.inc
@@ -37,7 +37,7 @@
 }
 
 INHIBIT_DEFAULT_DEPS = "1"
-DEPENDS = "gcc-runtime"
+DEPENDS = "gcc-runtime gcc-cross-${TARGET_ARCH}"
 
 BBCLASSEXTEND = "nativesdk"
 
@@ -54,6 +54,7 @@
     ${libdir}/libgfortran.la \
     ${libdir}/gcc/${TARGET_SYS}/${BINV}/libgfortranbegin.* \
     ${libdir}/gcc/${TARGET_SYS}/${BINV}/libcaf_single* \
+    ${libdir}/gcc/${TARGET_SYS}/${BINV}/finclude/ \
 "
 FILES_${PN}-staticdev = "${libdir}/libgfortran.a"
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgfortran_5.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/gcc/libgfortran_7.3.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gcc/libgfortran_5.4.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gcc/libgfortran_7.3.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-7.12.1.inc b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-7.12.1.inc
deleted file mode 100644
index 634756c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-7.12.1.inc
+++ /dev/null
@@ -1,22 +0,0 @@
-LICENSE = "GPLv2 & GPLv3 & LGPLv2 & LGPLv3"
-LIC_FILES_CHKSUM = "file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552 \
-		    file://COPYING3;md5=d32239bcb673463ab874e80d47fae504 \
-		    file://COPYING3.LIB;md5=6a6a8e020838b23406c81b19c1d46df6 \
-		    file://COPYING.LIB;md5=9f604d8a4f8e74f4f5140845a21b6674"
-
-SRC_URI = "http://ftp.gnu.org/gnu/gdb/gdb-${PV}.tar.xz \
-           file://0001-include-sys-types.h-for-mode_t.patch \
-           file://0002-make-man-install-relative-to-DESTDIR.patch \
-           file://0003-mips-linux-nat-Define-_ABIO32-if-not-defined.patch \
-           file://0004-ppc-ptrace-Define-pt_regs-uapi_pt_regs-on-GLIBC-syst.patch \
-           file://0005-Add-support-for-Renesas-SH-sh4-architecture.patch \
-           file://0006-Dont-disable-libreadline.a-when-using-disable-static.patch \
-           file://0007-use-asm-sgidefs.h.patch \
-           file://0008-Use-exorted-definitions-of-SIGRTMIN.patch \
-           file://0009-Change-order-of-CFLAGS.patch \
-           file://0010-resolve-restrict-keyword-conflict.patch \
-           file://package_devel_gdb_patches_120-sigprocmask-invalid-call.patch \
-"
-SRC_URI[md5sum] = "193453347ddced7acb6b1cd2ee8f2e4b"
-SRC_URI[sha256sum] = "4607680b973d3ec92c30ad029f1b7dbde3876869e6b3a117d8a7e90081113186"
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-8.0.inc b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-8.0.inc
new file mode 100644
index 0000000..fba32ce
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-8.0.inc
@@ -0,0 +1,22 @@
+LICENSE = "GPLv2 & GPLv3 & LGPLv2 & LGPLv3"
+LIC_FILES_CHKSUM = "file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552 \
+		    file://COPYING3;md5=d32239bcb673463ab874e80d47fae504 \
+		    file://COPYING3.LIB;md5=6a6a8e020838b23406c81b19c1d46df6 \
+		    file://COPYING.LIB;md5=9f604d8a4f8e74f4f5140845a21b6674"
+
+SRC_URI = "http://ftp.gnu.org/gnu/gdb/gdb-${PV}.tar.xz \
+           file://0001-include-sys-types.h-for-mode_t.patch \
+           file://0002-make-man-install-relative-to-DESTDIR.patch \
+           file://0003-mips-linux-nat-Define-_ABIO32-if-not-defined.patch \
+           file://0004-ppc-ptrace-Define-pt_regs-uapi_pt_regs-on-GLIBC-syst.patch \
+           file://0005-Add-support-for-Renesas-SH-sh4-architecture.patch \
+           file://0006-Dont-disable-libreadline.a-when-using-disable-static.patch \
+           file://0007-use-asm-sgidefs.h.patch \
+           file://0008-Use-exorted-definitions-of-SIGRTMIN.patch \
+           file://0009-Change-order-of-CFLAGS.patch \
+           file://0010-resolve-restrict-keyword-conflict.patch \
+           file://package_devel_gdb_patches_120-sigprocmask-invalid-call.patch \
+"
+SRC_URI[md5sum] = "c3d35cd949084be53b92cc1e03485f88"
+SRC_URI[sha256sum] = "f6a24ffe4917e67014ef9273eb8b547cb96a13e5ca74895b06d683b391f3f4ee"
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-common.inc b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-common.inc
index 239b375..9164a2b 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-common.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-common.inc
@@ -5,7 +5,6 @@
 
 LTTNGUST = "lttng-ust"
 LTTNGUST_aarch64 = ""
-LTTNGUST_libc-uclibc = ""
 LTTNGUST_mipsarch = ""
 LTTNGUST_sh4 = ""
 LTTNGUST_libc-musl = ""
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-cross-canadian_7.12.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-cross-canadian_8.0.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-cross-canadian_7.12.1.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-cross-canadian_8.0.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-cross_7.12.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-cross_8.0.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-cross_7.12.1.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb-cross_8.0.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0001-include-sys-types.h-for-mode_t.patch b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0001-include-sys-types.h-for-mode_t.patch
index fc6c92f..4f06d46 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0001-include-sys-types.h-for-mode_t.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0001-include-sys-types.h-for-mode_t.patch
@@ -1,4 +1,4 @@
-From 2c81e17216b4e471a1ce0bddb50f374b0722a2ce Mon Sep 17 00:00:00 2001
+From 91da0458b333249eb9c2f4c1f1e53fa4bc085cc9 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Tue, 19 Jan 2016 18:18:52 -0800
 Subject: [PATCH 01/10] include sys/types.h for mode_t
@@ -14,7 +14,7 @@
  1 file changed, 1 insertion(+)
 
 diff --git a/gdb/gdbserver/target.h b/gdb/gdbserver/target.h
-index 4c14c204bb..bdab18f7f7 100644
+index 3cc2bc4bab..e6b19b06b9 100644
 --- a/gdb/gdbserver/target.h
 +++ b/gdb/gdbserver/target.h
 @@ -28,6 +28,7 @@
@@ -26,5 +26,5 @@
  struct emit_ops;
  struct buffer;
 -- 
-2.11.0
+2.13.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0002-make-man-install-relative-to-DESTDIR.patch b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0002-make-man-install-relative-to-DESTDIR.patch
index 9a9201b..83c4dde 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0002-make-man-install-relative-to-DESTDIR.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0002-make-man-install-relative-to-DESTDIR.patch
@@ -1,4 +1,4 @@
-From f316d604b312bead78594f02e1355633eda9507b Mon Sep 17 00:00:00 2001
+From 9ce61f97b7758794f06894e934fbb256ff62163e Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Mon, 2 Mar 2015 02:27:55 +0000
 Subject: [PATCH 02/10] make man install relative to DESTDIR
@@ -11,7 +11,7 @@
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/sim/common/Makefile.in b/sim/common/Makefile.in
-index a05f50767a..8d0fa64ea8 100644
+index 3944956b5d..aa355e8347 100644
 --- a/sim/common/Makefile.in
 +++ b/sim/common/Makefile.in
 @@ -35,7 +35,7 @@ tooldir = $(libdir)/$(target_alias)
@@ -24,5 +24,5 @@
  includedir = @includedir@
  
 -- 
-2.11.0
+2.13.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0003-mips-linux-nat-Define-_ABIO32-if-not-defined.patch b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0003-mips-linux-nat-Define-_ABIO32-if-not-defined.patch
index 74c0006..6f7955b 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0003-mips-linux-nat-Define-_ABIO32-if-not-defined.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0003-mips-linux-nat-Define-_ABIO32-if-not-defined.patch
@@ -1,4 +1,4 @@
-From f2912b1d2e5c854a112176682903b696da33e003 Mon Sep 17 00:00:00 2001
+From ca0ef06b7320912df350e730e63f9bafdaa6ea70 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Wed, 23 Mar 2016 06:30:09 +0000
 Subject: [PATCH 03/10] mips-linux-nat: Define _ABIO32 if not defined
@@ -17,10 +17,10 @@
  1 file changed, 5 insertions(+)
 
 diff --git a/gdb/mips-linux-nat.c b/gdb/mips-linux-nat.c
-index 0f20f16814..722532bb6c 100644
+index 8041d84be7..f2df1b9907 100644
 --- a/gdb/mips-linux-nat.c
 +++ b/gdb/mips-linux-nat.c
-@@ -46,6 +46,11 @@
+@@ -47,6 +47,11 @@
  #define PTRACE_GET_THREAD_AREA 25
  #endif
  
@@ -33,5 +33,5 @@
     we'll clear this and use PTRACE_PEEKUSER instead.  */
  static int have_ptrace_regsets = 1;
 -- 
-2.11.0
+2.13.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0004-ppc-ptrace-Define-pt_regs-uapi_pt_regs-on-GLIBC-syst.patch b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0004-ppc-ptrace-Define-pt_regs-uapi_pt_regs-on-GLIBC-syst.patch
index 847f24f..357db25 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0004-ppc-ptrace-Define-pt_regs-uapi_pt_regs-on-GLIBC-syst.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0004-ppc-ptrace-Define-pt_regs-uapi_pt_regs-on-GLIBC-syst.patch
@@ -1,6 +1,6 @@
-From 7ef7b709885378279c424eab0510b93233400b24 Mon Sep 17 00:00:00 2001
+From 0f6d71118ca914002fcad78d2c8a518223d06bfb Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
-Date: Sat, 6 Aug 2016 17:32:50 -0700
+Date: Sat, 30 Apr 2016 18:32:14 -0700
 Subject: [PATCH 04/10] ppc/ptrace: Define pt_regs uapi_pt_regs on !GLIBC
  systems
 
@@ -13,7 +13,7 @@
  2 files changed, 12 insertions(+)
 
 diff --git a/gdb/gdbserver/linux-ppc-low.c b/gdb/gdbserver/linux-ppc-low.c
-index 1d013f185f..68098b3db9 100644
+index 33a9feb12c..1a9141faef 100644
 --- a/gdb/gdbserver/linux-ppc-low.c
 +++ b/gdb/gdbserver/linux-ppc-low.c
 @@ -21,7 +21,13 @@
@@ -31,7 +31,7 @@
  #include "nat/ppc-linux.h"
  #include "linux-ppc-tdesc.h"
 diff --git a/gdb/nat/ppc-linux.h b/gdb/nat/ppc-linux.h
-index 85fbcd84bb..cbec9c53b2 100644
+index 5837ea1767..7233929192 100644
 --- a/gdb/nat/ppc-linux.h
 +++ b/gdb/nat/ppc-linux.h
 @@ -18,7 +18,13 @@
@@ -49,5 +49,5 @@
  
  /* This sometimes isn't defined.  */
 -- 
-2.11.0
+2.13.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0005-Add-support-for-Renesas-SH-sh4-architecture.patch b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0005-Add-support-for-Renesas-SH-sh4-architecture.patch
index d0c15f6..cb1b7ab 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0005-Add-support-for-Renesas-SH-sh4-architecture.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0005-Add-support-for-Renesas-SH-sh4-architecture.patch
@@ -1,4 +1,4 @@
-From 6649e2cccfb11dec076abb02eae0afab95614829 Mon Sep 17 00:00:00 2001
+From 60ac68f601885ea6480229a5c8a89a0257da376c Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Mon, 2 Mar 2015 02:31:12 +0000
 Subject: [PATCH 05/10] Add support for Renesas SH (sh4) architecture.
@@ -13,7 +13,7 @@
 Upstream-Status: Pending
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
 ---
- gdb/Makefile.in                      |   1 +
+ gdb/Makefile.in                      |   2 +
  gdb/configure.host                   |   1 +
  gdb/sh-linux-tdep.c                  | 519 +++++++++++++++++++++++++++++++++++
  gdb/sh-tdep.c                        |  54 ++--
@@ -24,25 +24,26 @@
  gdb/testsuite/gdb.base/annota3.c     |   4 +
  gdb/testsuite/gdb.base/sigall.c      |   3 +
  gdb/testsuite/gdb.base/signals.c     |   4 +
- 11 files changed, 617 insertions(+), 29 deletions(-)
+ 11 files changed, 618 insertions(+), 29 deletions(-)
 
 diff --git a/gdb/Makefile.in b/gdb/Makefile.in
-index 7b2df86878..10f1266fe3 100644
+index 8be73ba423..e287ff6a2e 100644
 --- a/gdb/Makefile.in
 +++ b/gdb/Makefile.in
-@@ -1750,6 +1750,7 @@ ALLDEPFILES = \
- 	score-tdep.c \
- 	ser-go32.c ser-pipe.c ser-tcp.c ser-mingw.c \
- 	sh-tdep.c sh64-tdep.c shnbsd-tdep.c shnbsd-nat.c \
-+	sh-linux-tdep.c sh-linux-nat.c \
+@@ -2638,6 +2638,8 @@ ALLDEPFILES = \
+ 	sh-nbsd-tdep.c \
+ 	sh-tdep.c \
+ 	sh64-tdep.c \
++	sh-linux-tdep.c \
++	sh-linux-nat.c \
  	sol2-tdep.c \
- 	solib-svr4.c \
- 	sparc-linux-nat.c sparc-linux-tdep.c \
+ 	solib-aix.c \
+ 	solib-spu.c \
 diff --git a/gdb/configure.host b/gdb/configure.host
-index ef265ebe29..322a1e2c67 100644
+index d74fd04934..be12de1446 100644
 --- a/gdb/configure.host
 +++ b/gdb/configure.host
-@@ -149,6 +149,7 @@ powerpc*-*-linux*)	gdb_host=linux ;;
+@@ -150,6 +150,7 @@ powerpc*-*-linux*)	gdb_host=linux ;;
  
  s390*-*-linux*)		gdb_host=linux ;;
  
@@ -51,7 +52,7 @@
  			gdb_host=nbsd ;;
  sh*-*-openbsd*)		gdb_host=nbsd ;;
 diff --git a/gdb/sh-linux-tdep.c b/gdb/sh-linux-tdep.c
-index 2418d25010..ac8ea9e2a4 100644
+index c5c745d218..84e539aad3 100644
 --- a/gdb/sh-linux-tdep.c
 +++ b/gdb/sh-linux-tdep.c
 @@ -18,14 +18,37 @@
@@ -599,7 +600,7 @@
  
    /* GNU/Linux uses SVR4-style shared libraries.  */
 diff --git a/gdb/sh-tdep.c b/gdb/sh-tdep.c
-index 694f5f742d..8d54df7a1a 100644
+index 2c2b26847d..14f5281ed4 100644
 --- a/gdb/sh-tdep.c
 +++ b/gdb/sh-tdep.c
 @@ -21,6 +21,9 @@
@@ -620,7 +621,7 @@
  #include "doublest.h"
  #include "osabi.h"
  #include "reggroups.h"
-@@ -67,23 +71,6 @@ static const char *const sh_cc_enum[] = {
+@@ -68,23 +72,6 @@ static const char *const sh_cc_enum[] = {
  
  static const char *sh_active_calling_convention = sh_cc_gcc;
  
@@ -644,7 +645,7 @@
  static int
  sh_is_renesas_calling_convention (struct type *func_type)
  {
-@@ -1043,7 +1030,7 @@ sh_treat_as_flt_p (struct type *type)
+@@ -1052,7 +1039,7 @@ sh_treat_as_flt_p (struct type *type)
      return 0;
    /* Otherwise if the type of that member is float, the whole type is
       treated as float.  */
@@ -653,7 +654,7 @@
      return 1;
    /* Otherwise it's not treated as float.  */
    return 0;
-@@ -1093,7 +1080,7 @@ sh_push_dummy_call_fpu (struct gdbarch *gdbarch,
+@@ -1102,7 +1089,7 @@ sh_push_dummy_call_fpu (struct gdbarch *gdbarch,
       in four registers available.  Loop thru args from first to last.  */
    for (argnum = 0; argnum < nargs; argnum++)
      {
@@ -662,7 +663,7 @@
        len = TYPE_LENGTH (type);
        val = sh_justify_value_in_reg (gdbarch, args[argnum], len);
  
-@@ -1819,7 +1806,7 @@ sh_dwarf2_frame_init_reg (struct gdbarch *gdbarch, int regnum,
+@@ -1828,7 +1815,7 @@ sh_dwarf2_frame_init_reg (struct gdbarch *gdbarch, int regnum,
      reg->how = DWARF2_FRAME_REG_UNDEFINED;
  }
  
@@ -671,7 +672,7 @@
  sh_alloc_frame_cache (void)
  {
    struct sh_frame_cache *cache;
-@@ -1846,7 +1833,7 @@ sh_alloc_frame_cache (void)
+@@ -1855,7 +1842,7 @@ sh_alloc_frame_cache (void)
    return cache;
  }
  
@@ -680,7 +681,7 @@
  sh_frame_cache (struct frame_info *this_frame, void **this_cache)
  {
    struct gdbarch *gdbarch = get_frame_arch (this_frame);
-@@ -1913,9 +1900,9 @@ sh_frame_cache (struct frame_info *this_frame, void **this_cache)
+@@ -1922,9 +1909,9 @@ sh_frame_cache (struct frame_info *this_frame, void **this_cache)
    return cache;
  }
  
@@ -693,7 +694,7 @@
  {
    struct gdbarch *gdbarch = get_frame_arch (this_frame);
    struct sh_frame_cache *cache = sh_frame_cache (this_frame, this_cache);
-@@ -1929,7 +1916,7 @@ sh_frame_prev_register (struct frame_info *this_frame,
+@@ -1938,7 +1925,7 @@ sh_frame_prev_register (struct frame_info *this_frame,
       the current frame.  Frob regnum so that we pull the value from
       the correct place.  */
    if (regnum == gdbarch_pc_regnum (gdbarch))
@@ -702,7 +703,7 @@
  
    if (regnum < SH_NUM_REGS && cache->saved_regs[regnum] != -1)
      return frame_unwind_got_memory (this_frame, regnum,
-@@ -2238,8 +2225,8 @@ sh_return_in_first_hidden_param_p (struct gdbarch *gdbarch,
+@@ -2247,8 +2234,8 @@ sh_return_in_first_hidden_param_p (struct gdbarch *gdbarch,
  static struct gdbarch *
  sh_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
  {
@@ -712,7 +713,7 @@
  
    /* SH5 is handled entirely in sh64-tdep.c.  */
    if (info.bfd_arch_info->mach == bfd_mach_sh5)
-@@ -2255,6 +2242,18 @@ sh_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
+@@ -2264,6 +2251,18 @@ sh_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
    tdep = XCNEW (struct gdbarch_tdep);
    gdbarch = gdbarch_alloc (&info, tdep);
  
@@ -731,7 +732,7 @@
    set_gdbarch_short_bit (gdbarch, 2 * TARGET_CHAR_BIT);
    set_gdbarch_int_bit (gdbarch, 4 * TARGET_CHAR_BIT);
    set_gdbarch_long_bit (gdbarch, 4 * TARGET_CHAR_BIT);
-@@ -2405,10 +2404,11 @@ sh_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
+@@ -2418,10 +2417,11 @@ sh_gdbarch_init (struct gdbarch_info info, struct gdbarch_list *arches)
        break;
      }
  
@@ -745,7 +746,7 @@
    frame_unwind_append_unwinder (gdbarch, &sh_frame_unwind);
  
 diff --git a/gdb/sh-tdep.h b/gdb/sh-tdep.h
-index 666968f787..62c65b55ea 100644
+index d15ef050e0..c4642cefa4 100644
 --- a/gdb/sh-tdep.h
 +++ b/gdb/sh-tdep.h
 @@ -21,6 +21,12 @@
@@ -828,7 +829,7 @@
       where each general-purpose register is stored inside the associated
       core file section.  */
 diff --git a/gdb/testsuite/gdb.asm/asm-source.exp b/gdb/testsuite/gdb.asm/asm-source.exp
-index 6d9aef81bb..5b66b429d1 100644
+index e07e5543f2..f5e60e1002 100644
 --- a/gdb/testsuite/gdb.asm/asm-source.exp
 +++ b/gdb/testsuite/gdb.asm/asm-source.exp
 @@ -116,6 +116,11 @@ switch -glob -- [istarget] {
@@ -917,5 +918,5 @@
  static int count = 0;
  
 -- 
-2.11.0
+2.13.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0006-Dont-disable-libreadline.a-when-using-disable-static.patch b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0006-Dont-disable-libreadline.a-when-using-disable-static.patch
index 5ed8e81..8b13958 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0006-Dont-disable-libreadline.a-when-using-disable-static.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0006-Dont-disable-libreadline.a-when-using-disable-static.patch
@@ -1,4 +1,4 @@
-From 2a6e28ad5c0cad189a3697d96de031e4713052b8 Mon Sep 17 00:00:00 2001
+From 5c92ebd5e117e4cf118c984171e0703dfcfb8cd8 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Sat, 30 Apr 2016 15:25:03 -0700
 Subject: [PATCH 06/10] Dont disable libreadline.a when using --disable-static
@@ -19,10 +19,10 @@
  2 files changed, 3 insertions(+), 2 deletions(-)
 
 diff --git a/Makefile.def b/Makefile.def
-index ea8453e851..0fc66c694b 100644
+index 0d13f037d0..8bcd86e13f 100644
 --- a/Makefile.def
 +++ b/Makefile.def
-@@ -104,7 +104,8 @@ host_modules= { module= libiconv;
+@@ -105,7 +105,8 @@ host_modules= { module= libiconv;
  		missing= install-html;
  		missing= install-info; };
  host_modules= { module= m4; };
@@ -33,10 +33,10 @@
  host_modules= { module= sim; };
  host_modules= { module= texinfo; no_install= true; };
 diff --git a/Makefile.in b/Makefile.in
-index cb0136e8f8..55f9085c16 100644
+index 3acb83b8de..e348907128 100644
 --- a/Makefile.in
 +++ b/Makefile.in
-@@ -25385,7 +25385,7 @@ configure-readline:
+@@ -25470,7 +25470,7 @@ configure-readline:
  	  $$s/$$module_srcdir/configure \
  	  --srcdir=$${topdir}/$$module_srcdir \
  	  $(HOST_CONFIGARGS) --build=${build_alias} --host=${host_alias} \
@@ -46,5 +46,5 @@
  @endif readline
  
 -- 
-2.11.0
+2.13.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0007-use-asm-sgidefs.h.patch b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0007-use-asm-sgidefs.h.patch
index a42c9fd..33b4c30 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0007-use-asm-sgidefs.h.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0007-use-asm-sgidefs.h.patch
@@ -1,4 +1,4 @@
-From d7543b44255da4ae71447d4e4d63e0b6aa4ed909 Mon Sep 17 00:00:00 2001
+From 12a0b8d81e1fda6ba98abdce8d6f09f9555ebcf5 Mon Sep 17 00:00:00 2001
 From: Andre McCurdy <amccurdy@gmail.com>
 Date: Sat, 30 Apr 2016 15:29:06 -0700
 Subject: [PATCH 07/10] use <asm/sgidefs.h>
@@ -19,7 +19,7 @@
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/gdb/mips-linux-nat.c b/gdb/mips-linux-nat.c
-index 722532bb6c..51d8fc8f66 100644
+index f2df1b9907..d24664cb56 100644
 --- a/gdb/mips-linux-nat.c
 +++ b/gdb/mips-linux-nat.c
 @@ -31,7 +31,7 @@
@@ -30,7 +30,7 @@
 +#include <asm/sgidefs.h>
  #include "nat/gdb_ptrace.h"
  #include <asm/ptrace.h>
- 
+ #include "inf-ptrace.h"
 -- 
-2.11.0
+2.13.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0008-Use-exorted-definitions-of-SIGRTMIN.patch b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0008-Use-exorted-definitions-of-SIGRTMIN.patch
index ae9cb8c..4f64dea 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0008-Use-exorted-definitions-of-SIGRTMIN.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0008-Use-exorted-definitions-of-SIGRTMIN.patch
@@ -1,4 +1,4 @@
-From aacd77184da1328908da41c9fdb55ad881fa0e99 Mon Sep 17 00:00:00 2001
+From d3f240b38eed7cd08f6c50ea896572f1327b437a Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Sat, 30 Apr 2016 15:31:40 -0700
 Subject: [PATCH 08/10] Use exorted definitions of SIGRTMIN
@@ -20,10 +20,10 @@
  2 files changed, 6 insertions(+), 2 deletions(-)
 
 diff --git a/gdb/linux-nat.c b/gdb/linux-nat.c
-index 5d5efa0af4..e3420b49a0 100644
+index 8b29245c3d..f424ae9711 100644
 --- a/gdb/linux-nat.c
 +++ b/gdb/linux-nat.c
-@@ -5022,6 +5022,6 @@ lin_thread_get_thread_signals (sigset_t *set)
+@@ -5021,6 +5021,6 @@ lin_thread_get_thread_signals (sigset_t *set)
    /* NPTL reserves the first two RT signals, but does not provide any
       way for the debugger to query the signal numbers - fortunately
       they don't change.  */
@@ -33,12 +33,12 @@
 +  sigaddset (set, SIGRTMIN + 1);
  }
 diff --git a/gdb/nat/linux-nat.h b/gdb/nat/linux-nat.h
-index 2b485db141..d058afcde8 100644
+index 7dd18fefff..35137ab34f 100644
 --- a/gdb/nat/linux-nat.h
 +++ b/gdb/nat/linux-nat.h
-@@ -85,4 +85,8 @@ extern enum target_stop_reason lwp_stop_reason (struct lwp_info *lwp);
+@@ -90,4 +90,8 @@ extern void linux_stop_lwp (struct lwp_info *lwp);
  
- extern void linux_stop_lwp (struct lwp_info *lwp);
+ extern int lwp_is_stepping (struct lwp_info *lwp);
  
 +#ifndef W_STOPCODE
 +#define W_STOPCODE(sig) ((sig) << 8 | 0x7f)
@@ -46,5 +46,5 @@
 +
  #endif /* LINUX_NAT_H */
 -- 
-2.11.0
+2.13.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0009-Change-order-of-CFLAGS.patch b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0009-Change-order-of-CFLAGS.patch
index ed6e0ae..0c103ef 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0009-Change-order-of-CFLAGS.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0009-Change-order-of-CFLAGS.patch
@@ -1,4 +1,4 @@
-From 8c35d5d1825ed017cc58ea91011412e54c002eeb Mon Sep 17 00:00:00 2001
+From 3f54036b891054072b3e43ea8daaa57aa367b2e0 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Sat, 30 Apr 2016 15:35:39 -0700
 Subject: [PATCH 09/10] Change order of CFLAGS
@@ -9,26 +9,22 @@
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
 ---
- gdb/gdbserver/Makefile.in | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
+ gdb/gdbserver/Makefile.in | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/gdb/gdbserver/Makefile.in b/gdb/gdbserver/Makefile.in
-index f844ab8853..3f88db52e3 100644
+index 01dfdc0b89..f6aebef464 100644
 --- a/gdb/gdbserver/Makefile.in
 +++ b/gdb/gdbserver/Makefile.in
-@@ -138,10 +138,10 @@ CXXFLAGS = @CXXFLAGS@
- CPPFLAGS = @CPPFLAGS@
- 
- # INTERNAL_CFLAGS is the aggregate of all other *CFLAGS macros.
--INTERNAL_CFLAGS_BASE =  ${COMPILER_CFLAGS} ${GLOBAL_CFLAGS} \
-+INTERNAL_CFLAGS_BASE =  ${GLOBAL_CFLAGS} \
+@@ -140,7 +140,7 @@ CPPFLAGS = @CPPFLAGS@
+ INTERNAL_CFLAGS_BASE = ${CXXFLAGS} ${GLOBAL_CFLAGS} \
  	${PROFILE_CFLAGS} ${INCLUDE_CFLAGS} ${CPPFLAGS}
- INTERNAL_WARN_CFLAGS =  ${INTERNAL_CFLAGS_BASE} $(WARN_CFLAGS)
--INTERNAL_CFLAGS =  ${INTERNAL_WARN_CFLAGS} $(WERROR_CFLAGS) -DGDBSERVER
-+INTERNAL_CFLAGS =  ${INTERNAL_WARN_CFLAGS} $(WERROR_CFLAGS) ${COMPILER_CFLAGS} -DGDBSERVER
+ INTERNAL_WARN_CFLAGS = ${INTERNAL_CFLAGS_BASE} $(WARN_CFLAGS)
+-INTERNAL_CFLAGS = ${INTERNAL_WARN_CFLAGS} $(WERROR_CFLAGS) -DGDBSERVER
++INTERNAL_CFLAGS = ${INTERNAL_WARN_CFLAGS} $(WERROR_CFLAGS) ${COMPILER_CFLAGS} -DGDBSERVER
  
  # LDFLAGS is specifically reserved for setting from the command line
  # when running make.
 -- 
-2.11.0
+2.13.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0010-resolve-restrict-keyword-conflict.patch b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0010-resolve-restrict-keyword-conflict.patch
index 1938beb..c950710 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0010-resolve-restrict-keyword-conflict.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb/0010-resolve-restrict-keyword-conflict.patch
@@ -1,4 +1,4 @@
-From 7816d3497266e55c1c921d7cc1c8bf81c8ed0b4a Mon Sep 17 00:00:00 2001
+From 3ead0dd143521b0ba69c9e753bc4a236f9445ad9 Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
 Date: Tue, 10 May 2016 08:47:05 -0700
 Subject: [PATCH 10/10] resolve restrict keyword conflict
@@ -15,7 +15,7 @@
  1 file changed, 4 insertions(+), 4 deletions(-)
 
 diff --git a/gdb/gnulib/import/sys_time.in.h b/gdb/gnulib/import/sys_time.in.h
-index c556c5db23..2a6107fcf8 100644
+index d535a6a48b..7c34d5a1aa 100644
 --- a/gdb/gnulib/import/sys_time.in.h
 +++ b/gdb/gnulib/import/sys_time.in.h
 @@ -93,20 +93,20 @@ struct timeval
@@ -42,7 +42,7 @@
 +                       (struct timeval *__restrict, void *__restrict));
  # endif
  _GL_CXXALIASWARN (gettimeofday);
- #elif defined GNULIB_POSIXCHECK
+ # if defined __cplusplus && defined GNULIB_NAMESPACE
 -- 
-2.11.0
+2.13.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb_7.12.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb_8.0.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb_7.12.1.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/gdb/gdb_8.0.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/git/git.inc b/import-layers/yocto-poky/meta/recipes-devtools/git/git.inc
index 5c12ca8..9b4c128 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/git/git.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/git/git.inc
@@ -107,7 +107,6 @@
     ${libexecdir}/git-core/git-cvsserver \
     ${bindir}/git-cvsserver \
     ${libexecdir}/git-core/git-difftool \
-    ${libexecdir}/git-core/git-relink \
     ${libexecdir}/git-core/git-send-email \
     ${libexecdir}/git-core/git-svn \
     ${libexecdir}/git-core/git-instaweb \
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/git/git_2.11.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/git/git_2.11.1.bb
deleted file mode 100644
index f2f072c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/git/git_2.11.1.bb
+++ /dev/null
@@ -1,11 +0,0 @@
-require git.inc
-
-EXTRA_OECONF += "ac_cv_snprintf_returns_bogus=no \
-                 ac_cv_fread_reads_directories=${ac_cv_fread_reads_directories=yes} \
-                 "
-EXTRA_OEMAKE += "NO_GETTEXT=1"
-
-SRC_URI[tarball.md5sum] = "6a7a73db076bb0514b602720669d685c"
-SRC_URI[tarball.sha256sum] = "a1cdd7c820f92c44abb5003b36dc8cb7201ba38e8744802399f59c97285ca043"
-SRC_URI[manpages.md5sum] = "e4268a6b514ccdb624b6450ff55881a3"
-SRC_URI[manpages.sha256sum] = "ee567e7b0f95333816793714bb31c54e288cf8041f77a0092b85e62c9c2974f9"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/git/git_2.13.3.bb b/import-layers/yocto-poky/meta/recipes-devtools/git/git_2.13.3.bb
new file mode 100644
index 0000000..b3e3887
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/git/git_2.13.3.bb
@@ -0,0 +1,11 @@
+require git.inc
+
+EXTRA_OECONF += "ac_cv_snprintf_returns_bogus=no \
+                 ac_cv_fread_reads_directories=${ac_cv_fread_reads_directories=yes} \
+                 "
+EXTRA_OEMAKE += "NO_GETTEXT=1"
+
+SRC_URI[tarball.md5sum] = "d2dc550f6693ba7e5b16212b2714f59f"
+SRC_URI[tarball.sha256sum] = "1497001772f630d49809e981672edfe3e3ce1a1d18e905cd539c4d2f4dbcd75a"
+SRC_URI[manpages.md5sum] = "3037d11a4f4cdd19435871c267ca48b4"
+SRC_URI[manpages.sha256sum] = "f9b302eeb08ce08934e7afb42280ce9294411fbf5f7b6ac3fcc236e8031f10c5"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config/config-guess-uclibc.patch b/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config/config-guess-uclibc.patch
deleted file mode 100644
index 2094116..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config/config-guess-uclibc.patch
+++ /dev/null
@@ -1,170 +0,0 @@
-Upstream-Status: Pending
-
-Patch courtesy gentoo-portage/sys-devel/gnuconfig/files/automake-1.8.5-config-guess-uclibc.patch.
-
-updated to 20050516 by Marcin 'Hrw' Juszkiewicz (by hand)
-updated to 20080123 by Nitin A Kamble (by hand)
-updated to 20111001 by Saul Wold (by hand)
-updated to 20120818 by Marcin 'Hrw' Juszkiewicz (by hand)
-
-Signed-off-by: Saul Wold <sgw@linux.intel.com>
-Signed-off-by: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
-
----
- config.guess |   67 +++++++++++++++++++++++++++++++++++------------------------
- 1 file changed, 40 insertions(+), 27 deletions(-)
-
---- git.orig/config.guess
-+++ git/config.guess
-@@ -138,6 +138,19 @@ UNAME_RELEASE=`(uname -r) 2>/dev/null` |
- UNAME_SYSTEM=`(uname -s) 2>/dev/null`  || UNAME_SYSTEM=unknown
- UNAME_VERSION=`(uname -v) 2>/dev/null` || UNAME_VERSION=unknown
- 
-+# Detect uclibc systems.
-+
-+LIBC="gnu"
-+if [ -f /usr/include/bits/uClibc_config.h ]
-+then
-+	LIBC=uclibc
-+	if [ -n `grep "#define __UCLIBC_CONFIG_VERSION__" /usr/include/bits/uClibc_config.h` ]
-+	then
-+		UCLIBC_SUBVER=`sed -n "/#define __UCLIBC_CONFIG_VERSION__ /s///p" /usr/include/bits/uClibc_config.h`
-+		LIBC=$LIBC$UCLIBC_SUBVER
-+	fi
-+fi
-+
- # Note: order is significant - the case branches are not exclusive.
- 
- case "${UNAME_MACHINE}:${UNAME_SYSTEM}:${UNAME_RELEASE}:${UNAME_VERSION}" in
-@@ -886,15 +899,15 @@ EOF
- 	  EV68*) UNAME_MACHINE=alphaev68 ;;
- 	esac
- 	objdump --private-headers /bin/sh | grep -q ld.so.1
--	if test "$?" = 0 ; then LIBC="libc1" ; else LIBC="" ; fi
--	echo ${UNAME_MACHINE}-unknown-linux-gnu${LIBC}
-+	if test "$?" = 0 ; then LIBC="gnulibc1" ; else LIBC="" ; fi
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     arm*:Linux:*:*)
- 	eval $set_cc_for_build
- 	if echo __ARM_EABI__ | $CC_FOR_BUILD -E - 2>/dev/null \
- 	    | grep -q __ARM_EABI__
- 	then
--	    echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	    echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	else
- 	    if echo __ARM_PCS_VFP | $CC_FOR_BUILD -E - 2>/dev/null \
- 		| grep -q __ARM_PCS_VFP
-@@ -906,19 +919,19 @@ EOF
- 	fi
- 	exit ;;
-     avr32*:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     cris:Linux:*:*)
--	echo ${UNAME_MACHINE}-axis-linux-gnu
-+	echo ${UNAME_MACHINE}-axis-linux-${LIBC}
- 	exit ;;
-     crisv32:Linux:*:*)
--	echo ${UNAME_MACHINE}-axis-linux-gnu
-+	echo ${UNAME_MACHINE}-axis-linux-${LIBC}
- 	exit ;;
-     frv:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     hexagon:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     i*86:Linux:*:*)
- 	LIBC=gnu
-@@ -932,13 +945,13 @@ EOF
- 	echo "${UNAME_MACHINE}-pc-linux-${LIBC}"
- 	exit ;;
-     ia64:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     m32r*:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     m68*:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     mips:Linux:*:* | mips64:Linux:*:*)
- 	eval $set_cc_for_build
-@@ -957,54 +970,54 @@ EOF
- 	#endif
- EOF
- 	eval `$CC_FOR_BUILD -E $dummy.c 2>/dev/null | grep '^CPU'`
--	test x"${CPU}" != x && { echo "${CPU}-unknown-linux-gnu"; exit; }
-+	test x"${CPU}" != x && { echo "${CPU}-unknown-linux-${LIBC}"; exit; }
- 	;;
-     or32:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     padre:Linux:*:*)
--	echo sparc-unknown-linux-gnu
-+	echo sparc-unknown-linux-${LIBC}
- 	exit ;;
-     parisc64:Linux:*:* | hppa64:Linux:*:*)
--	echo hppa64-unknown-linux-gnu
-+	echo hppa64-unknown-linux-${LIBC}
- 	exit ;;
-     parisc:Linux:*:* | hppa:Linux:*:*)
- 	# Look for CPU level
- 	case `grep '^cpu[^a-z]*:' /proc/cpuinfo 2>/dev/null | cut -d' ' -f2` in
--	  PA7*) echo hppa1.1-unknown-linux-gnu ;;
--	  PA8*) echo hppa2.0-unknown-linux-gnu ;;
--	  *)    echo hppa-unknown-linux-gnu ;;
-+	  PA7*) echo hppa1.1-unknown-linux-${LIBC} ;;
-+	  PA8*) echo hppa2.0-unknown-linux-${LIBC} ;;
-+	  *)    echo hppa-unknown-linux-${LIBC} ;;
- 	esac
- 	exit ;;
-     ppc64:Linux:*:*)
--	echo powerpc64-unknown-linux-gnu
-+	echo powerpc64-unknown-linux-${LIBC}
- 	exit ;;
-     ppc:Linux:*:*)
--	echo powerpc-unknown-linux-gnu
-+	echo powerpc-unknown-linux-${LIBC}
- 	exit ;;
-     s390:Linux:*:* | s390x:Linux:*:*)
- 	echo ${UNAME_MACHINE}-ibm-linux
- 	exit ;;
-     sh64*:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     sh*:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     sparc:Linux:*:* | sparc64:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     tile*:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     vax:Linux:*:*)
--	echo ${UNAME_MACHINE}-dec-linux-gnu
-+	echo ${UNAME_MACHINE}-dec-linux-${LIBC}
- 	exit ;;
-     x86_64:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     xtensa*:Linux:*:*)
--	echo ${UNAME_MACHINE}-unknown-linux-gnu
-+	echo ${UNAME_MACHINE}-unknown-linux-${LIBC}
- 	exit ;;
-     i*86:DYNIX/ptx:4*:*)
- 	# ptx 4.0 does uname -s correctly, with DYNIX/ptx in there.
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config/uclibc.patch b/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config/uclibc.patch
deleted file mode 100644
index 75fe100..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config/uclibc.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-Upstream-Status: Pending
-
---- config.sub.orig	2004-05-14 19:38:36.000000000 -0500
-+++ config.sub	2004-05-14 19:39:17.000000000 -0500
-@@ -118,7 +118,7 @@
- # Here we must recognize all the valid KERNEL-OS combinations.
- maybe_os=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\2/'`
- case $maybe_os in
--  nto-qnx* | linux-gnu* | kfreebsd*-gnu* | netbsd*-gnu* | storm-chaos* | os2-emx* | rtmk-nova*)
-+  nto-qnx* | linux-gnu* | linux-uclibc* | freebsd*-gnu* | netbsd*-gnu* | storm-chaos* | os2-emx* | rtmk-nova*)
-     os=-$maybe_os
-     basic_machine=`echo $1 | sed 's/^\(.*\)-\([^-]*-[^-]*\)$/\1/'`
-     ;;
-@@ -1135,7 +1135,8 @@
- 	      | -udi* | -eabi* | -lites* | -ieee* | -go32* | -aux* \
- 	      | -chorusos* | -chorusrdb* \
- 	      | -cygwin* | -pe* | -psos* | -moss* | -proelf* | -rtems* \
--	      | -mingw32* | -linux-gnu* | -uxpv* | -beos* | -mpeix* | -udk* \
-+	      | -mingw32* | -linux-gnu* | -linux-uclibc* \
-+	      | -uxpv* | -beos* | -mpeix* | -udk* \
- 	      | -interix* | -uwin* | -mks* | -rhapsody* | -darwin* | -opened* \
- 	      | -openstep* | -oskit* | -conix* | -pw32* | -nonstopux* \
- 	      | -storm-chaos* | -tops10* | -tenex* | -tops20* | -its* \
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config_20120814.bb b/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config_20120814.bb
index 0d05e79..3d428b9 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config_20120814.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config_20120814.bb
@@ -10,7 +10,6 @@
 
 
 SRC_URI = "http://downloads.yoctoproject.org/releases/gnu-config/gnu-config-${PV}.tar.bz2 \
-	   file://config-guess-uclibc.patch \
 	   file://musl-support.patch \
            file://gnu-configize.in"
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config_git.bb
index f1c7788..4fded60 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/gnu-config/gnu-config_git.bb
@@ -13,8 +13,8 @@
 
 SRC_URI = "git://git.savannah.gnu.org/config.git \
            file://gnu-configize.in"
-
 S = "${WORKDIR}/git"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 CLEANBROKEN = "1"
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4.inc
deleted file mode 100644
index 2f500f3..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4.inc
+++ /dev/null
@@ -1,16 +0,0 @@
-require go-common.inc
-
-PV = "1.4.3"
-GO_BASEVERSION = "1.4"
-FILESEXTRAPATHS_prepend := "${FILE_DIRNAME}/go-${GO_BASEVERSION}:"
-
-SRC_URI += "\
-        file://016-armhf-elf-header.patch \
-        file://go-cross-backport-cmd-link-support-new-386-amd64-rel.patch \
-        file://syslog.patch \
-        file://0001-cmd-ld-set-alignment-for-the-.rel.plt-section-on-32-.patch \
-"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=591778525c869cdde0ab5a1bf283cd81"
-SRC_URI[md5sum] = "dfb604511115dd402a77a553a5923a04"
-SRC_URI[sha256sum] = "9947fc705b0b841b5938c48b22dc33e9647ec0752bae66e50278df4f23f64959"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4/0001-cmd-ld-set-alignment-for-the-.rel.plt-section-on-32-.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4/0001-cmd-ld-set-alignment-for-the-.rel.plt-section-on-32-.patch
deleted file mode 100644
index f2adc20..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4/0001-cmd-ld-set-alignment-for-the-.rel.plt-section-on-32-.patch
+++ /dev/null
@@ -1,33 +0,0 @@
-From 855145d5c03c4b4faf60736c38d7a299c682af4a Mon Sep 17 00:00:00 2001
-From: Shenghou Ma <minux@golang.org>
-Date: Sat, 7 Feb 2015 14:06:02 -0500
-Subject: [PATCH] cmd/ld: set alignment for the .rel.plt section on 32-bit
- architectures
-
-Fixes #9802.
-
-Change-Id: I22c52a37bdb23a14cc4615c9519431bb14ca81ca
-Reviewed-on: https://go-review.googlesource.com/4170
-Reviewed-by: Ian Lance Taylor <iant@golang.org>
----
-Upstream-Status: Backport
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
- src/cmd/ld/elf.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/src/cmd/ld/elf.c b/src/cmd/ld/elf.c
-index 12ced98..97ed4bd 100644
---- a/src/cmd/ld/elf.c
-+++ b/src/cmd/ld/elf.c
-@@ -1363,6 +1363,7 @@ asmbelf(vlong symo)
- 			sh->type = SHT_REL;
- 			sh->flags = SHF_ALLOC;
- 			sh->entsize = ELF32RELSIZE;
-+			sh->addralign = 4;
- 			sh->link = elfshname(".dynsym")->shnum;
- 			shsym(sh, linklookup(ctxt, ".rel.plt", 0));
- 
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4/016-armhf-elf-header.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4/016-armhf-elf-header.patch
deleted file mode 100644
index e6e414e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4/016-armhf-elf-header.patch
+++ /dev/null
@@ -1,24 +0,0 @@
-Description: Use correct ELF header for armhf binaries.
-Author: Adam Conrad <adconrad@ubuntu.com>
-Last-Update: 2013-07-08
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Index: go/src/cmd/ld/elf.c
-===================================================================
---- go.orig/src/cmd/ld/elf.c	2015-02-20 10:49:58.763451586 -0800
-+++ go/src/cmd/ld/elf.c	2015-02-20 10:49:27.895478521 -0800
-@@ -57,7 +57,11 @@
- 	case '5':
- 		// we use EABI on both linux/arm and freebsd/arm.
- 		if(HEADTYPE == Hlinux || HEADTYPE == Hfreebsd)
--			hdr.flags = 0x5000002; // has entry point, Version5 EABI
-+#ifdef __ARM_PCS_VFP
-+			hdr.flags = 0x5000402; // has entry point, Version5 EABI, hard-float ABI
-+#else
-+			hdr.flags = 0x5000202; // has entry point, Version5 EABI, soft-float ABI
-+#endif
- 		// fallthrough
- 	default:
- 		hdr.phoff = ELF32HDRSIZE;	/* Must be be ELF32HDRSIZE: first PHdr must follow ELF header */
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4/go-cross-backport-cmd-link-support-new-386-amd64-rel.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4/go-cross-backport-cmd-link-support-new-386-amd64-rel.patch
deleted file mode 100644
index 95ca9d3..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4/go-cross-backport-cmd-link-support-new-386-amd64-rel.patch
+++ /dev/null
@@ -1,225 +0,0 @@
-From d6eefad445831c161fca130f9bdf7b3848aac23c Mon Sep 17 00:00:00 2001
-From: Paul Gortmaker <paul.gortmaker@windriver.com>
-Date: Tue, 29 Mar 2016 21:14:33 -0400
-Subject: [PATCH] go-cross: backport "cmd/link: support new 386/amd64
- relocations"
-
-Newer binutils won't support building older go-1.4.3 as per:
-
-https://github.com/golang/go/issues/13114
-
-Upstream commit 914db9f060b1fd3eb1f74d48f3bd46a73d4ae9c7 (see subj)
-was identified as the fix and nominated for 1.4.4 but that release
-never happened.  The paths in 1.4.3 aren't the same as go1.6beta1~662
-where this commit appeared, but the NetBSD folks indicated what a
-1.4.3 backport would look like here: https://gnats.netbsd.org/50777
-
-This is based on that, but without the BSD wrapper infrastructure
-layer that makes things look like patches of patches.
-
-Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
-
-Upstream-Status: Backport [ Partial ]
-
-diff --git a/src/cmd/6l/asm.c b/src/cmd/6l/asm.c
-index 18b5aa311981..2e9d339aef87 100644
---- a/src/cmd/6l/asm.c
-+++ b/src/cmd/6l/asm.c
-@@ -118,6 +118,8 @@ adddynrel(LSym *s, Reloc *r)
- 		return;
- 	
- 	case 256 + R_X86_64_GOTPCREL:
-+	case 256 + R_X86_64_GOTPCRELX:
-+	case 256 + R_X86_64_REX_GOTPCRELX:
- 		if(targ->type != SDYNIMPORT) {
- 			// have symbol
- 			if(r->off >= 2 && s->p[r->off-2] == 0x8b) {
-diff --git a/src/cmd/8l/asm.c b/src/cmd/8l/asm.c
-index 98c04240374f..cff29488e8af 100644
---- a/src/cmd/8l/asm.c
-+++ b/src/cmd/8l/asm.c
-@@ -115,6 +115,7 @@ adddynrel(LSym *s, Reloc *r)
- 		return;		
- 	
- 	case 256 + R_386_GOT32:
-+	case 256 + R_386_GOT32X:
- 		if(targ->type != SDYNIMPORT) {
- 			// have symbol
- 			if(r->off >= 2 && s->p[r->off-2] == 0x8b) {
-diff --git a/src/cmd/ld/elf.h b/src/cmd/ld/elf.h
-index e84d996f2596..bbf2cfaa3cc0 100644
---- a/src/cmd/ld/elf.h
-+++ b/src/cmd/ld/elf.h
-@@ -478,32 +478,47 @@ typedef struct {
-  * Relocation types.
-  */
- 
--#define	R_X86_64_NONE	0	/* No relocation. */
--#define	R_X86_64_64	1	/* Add 64 bit symbol value. */
--#define	R_X86_64_PC32	2	/* PC-relative 32 bit signed sym value. */
--#define	R_X86_64_GOT32	3	/* PC-relative 32 bit GOT offset. */
--#define	R_X86_64_PLT32	4	/* PC-relative 32 bit PLT offset. */
--#define	R_X86_64_COPY	5	/* Copy data from shared object. */
--#define	R_X86_64_GLOB_DAT 6	/* Set GOT entry to data address. */
--#define	R_X86_64_JMP_SLOT 7	/* Set GOT entry to code address. */
--#define	R_X86_64_RELATIVE 8	/* Add load address of shared object. */
--#define	R_X86_64_GOTPCREL 9	/* Add 32 bit signed pcrel offset to GOT. */
--#define	R_X86_64_32	10	/* Add 32 bit zero extended symbol value */
--#define	R_X86_64_32S	11	/* Add 32 bit sign extended symbol value */
--#define	R_X86_64_16	12	/* Add 16 bit zero extended symbol value */
--#define	R_X86_64_PC16	13	/* Add 16 bit signed extended pc relative symbol value */
--#define	R_X86_64_8	14	/* Add 8 bit zero extended symbol value */
--#define	R_X86_64_PC8	15	/* Add 8 bit signed extended pc relative symbol value */
--#define	R_X86_64_DTPMOD64 16	/* ID of module containing symbol */
--#define	R_X86_64_DTPOFF64 17	/* Offset in TLS block */
--#define	R_X86_64_TPOFF64 18	/* Offset in static TLS block */
--#define	R_X86_64_TLSGD	19	/* PC relative offset to GD GOT entry */
--#define	R_X86_64_TLSLD	20	/* PC relative offset to LD GOT entry */
--#define	R_X86_64_DTPOFF32 21	/* Offset in TLS block */
--#define	R_X86_64_GOTTPOFF 22	/* PC relative offset to IE GOT entry */
--#define	R_X86_64_TPOFF32 23	/* Offset in static TLS block */
--
--#define	R_X86_64_COUNT	24	/* Count of defined relocation types. */
-+#define	R_X86_64_NONE           0
-+#define	R_X86_64_64             1
-+#define	R_X86_64_PC32           2
-+#define	R_X86_64_GOT32          3
-+#define	R_X86_64_PLT32          4
-+#define	R_X86_64_COPY           5
-+#define	R_X86_64_GLOB_DAT       6
-+#define	R_X86_64_JMP_SLOT       7
-+#define	R_X86_64_RELATIVE       8
-+#define	R_X86_64_GOTPCREL       9
-+#define	R_X86_64_32             10
-+#define	R_X86_64_32S            11
-+#define	R_X86_64_16             12
-+#define	R_X86_64_PC16           13
-+#define	R_X86_64_8              14
-+#define	R_X86_64_PC8            15
-+#define	R_X86_64_DTPMOD64       16
-+#define	R_X86_64_DTPOFF64       17
-+#define	R_X86_64_TPOFF64        18
-+#define	R_X86_64_TLSGD          19
-+#define	R_X86_64_TLSLD          20
-+#define	R_X86_64_DTPOFF32       21
-+#define	R_X86_64_GOTTPOFF       22
-+#define	R_X86_64_TPOFF32        23
-+#define	R_X86_64_PC64           24
-+#define	R_X86_64_GOTOFF64       25
-+#define	R_X86_64_GOTPC32        26
-+#define	R_X86_64_GOT64          27
-+#define	R_X86_64_GOTPCREL64     28
-+#define	R_X86_64_GOTPC64        29
-+#define	R_X86_64_GOTPLT64       30
-+#define	R_X86_64_PLTOFF64       31
-+#define	R_X86_64_SIZE32         32
-+#define	R_X86_64_SIZE64         33
-+#define	R_X86_64_GOTPC32_TLSDEC 34
-+#define	R_X86_64_TLSDESC_CALL   35
-+#define	R_X86_64_TLSDESC        36
-+#define	R_X86_64_IRELATIVE      37
-+#define	R_X86_64_PC32_BND       40
-+#define	R_X86_64_GOTPCRELX      41
-+#define	R_X86_64_REX_GOTPCRELX  42
- 
- 
- #define	R_ALPHA_NONE		0	/* No reloc */
-@@ -581,39 +596,42 @@ typedef struct {
- #define	R_ARM_COUNT		38	/* Count of defined relocation types. */
- 
- 
--#define	R_386_NONE	0	/* No relocation. */
--#define	R_386_32	1	/* Add symbol value. */
--#define	R_386_PC32	2	/* Add PC-relative symbol value. */
--#define	R_386_GOT32	3	/* Add PC-relative GOT offset. */
--#define	R_386_PLT32	4	/* Add PC-relative PLT offset. */
--#define	R_386_COPY	5	/* Copy data from shared object. */
--#define	R_386_GLOB_DAT	6	/* Set GOT entry to data address. */
--#define	R_386_JMP_SLOT	7	/* Set GOT entry to code address. */
--#define	R_386_RELATIVE	8	/* Add load address of shared object. */
--#define	R_386_GOTOFF	9	/* Add GOT-relative symbol address. */
--#define	R_386_GOTPC	10	/* Add PC-relative GOT table address. */
--#define	R_386_TLS_TPOFF	14	/* Negative offset in static TLS block */
--#define	R_386_TLS_IE	15	/* Absolute address of GOT for -ve static TLS */
--#define	R_386_TLS_GOTIE	16	/* GOT entry for negative static TLS block */
--#define	R_386_TLS_LE	17	/* Negative offset relative to static TLS */
--#define	R_386_TLS_GD	18	/* 32 bit offset to GOT (index,off) pair */
--#define	R_386_TLS_LDM	19	/* 32 bit offset to GOT (index,zero) pair */
--#define	R_386_TLS_GD_32	24	/* 32 bit offset to GOT (index,off) pair */
--#define	R_386_TLS_GD_PUSH 25	/* pushl instruction for Sun ABI GD sequence */
--#define	R_386_TLS_GD_CALL 26	/* call instruction for Sun ABI GD sequence */
--#define	R_386_TLS_GD_POP 27	/* popl instruction for Sun ABI GD sequence */
--#define	R_386_TLS_LDM_32 28	/* 32 bit offset to GOT (index,zero) pair */
--#define	R_386_TLS_LDM_PUSH 29	/* pushl instruction for Sun ABI LD sequence */
--#define	R_386_TLS_LDM_CALL 30	/* call instruction for Sun ABI LD sequence */
--#define	R_386_TLS_LDM_POP 31	/* popl instruction for Sun ABI LD sequence */
--#define	R_386_TLS_LDO_32 32	/* 32 bit offset from start of TLS block */
--#define	R_386_TLS_IE_32	33	/* 32 bit offset to GOT static TLS offset entry */
--#define	R_386_TLS_LE_32	34	/* 32 bit offset within static TLS block */
--#define	R_386_TLS_DTPMOD32 35	/* GOT entry containing TLS index */
--#define	R_386_TLS_DTPOFF32 36	/* GOT entry containing TLS offset */
--#define	R_386_TLS_TPOFF32 37	/* GOT entry of -ve static TLS offset */
--
--#define	R_386_COUNT	38	/* Count of defined relocation types. */
-+#define	R_386_NONE          0
-+#define	R_386_32            1
-+#define	R_386_PC32          2
-+#define	R_386_GOT32         3
-+#define	R_386_PLT32         4
-+#define	R_386_COPY          5
-+#define	R_386_GLOB_DAT      6
-+#define	R_386_JMP_SLOT      7
-+#define	R_386_RELATIVE      8
-+#define	R_386_GOTOFF        9
-+#define	R_386_GOTPC         10
-+#define	R_386_TLS_TPOFF     14
-+#define	R_386_TLS_IE        15
-+#define	R_386_TLS_GOTIE     16
-+#define	R_386_TLS_LE        17
-+#define	R_386_TLS_GD        18
-+#define	R_386_TLS_LDM       19
-+#define	R_386_TLS_GD_32     24
-+#define	R_386_TLS_GD_PUSH   25
-+#define	R_386_TLS_GD_CALL   26
-+#define	R_386_TLS_GD_POP    27
-+#define	R_386_TLS_LDM_32    28
-+#define	R_386_TLS_LDM_PUSH  29
-+#define	R_386_TLS_LDM_CALL  30
-+#define	R_386_TLS_LDM_POP   31
-+#define	R_386_TLS_LDO_32    32
-+#define	R_386_TLS_IE_32     33
-+#define	R_386_TLS_LE_32     34
-+#define	R_386_TLS_DTPMOD32  35
-+#define	R_386_TLS_DTPOFF32  36
-+#define	R_386_TLS_TPOFF32   37
-+#define	R_386_TLS_GOTDESC   39
-+#define	R_386_TLS_DESC_CALL 40
-+#define	R_386_TLS_DESC      41
-+#define	R_386_IRELATIVE     42
-+#define	R_386_GOT32X        43
- 
- #define	R_PPC_NONE		0	/* No relocation. */
- #define	R_PPC_ADDR32		1
-diff --git a/src/cmd/ld/ldelf.c b/src/cmd/ld/ldelf.c
-index dd5fa0d2a839..2e2fbd17377f 100644
---- a/src/cmd/ld/ldelf.c
-+++ b/src/cmd/ld/ldelf.c
-@@ -888,12 +888,15 @@ reltype(char *pn, int elftype, uchar *siz)
- 	case R('6', R_X86_64_PC32):
- 	case R('6', R_X86_64_PLT32):
- 	case R('6', R_X86_64_GOTPCREL):
-+	case R('6', R_X86_64_GOTPCRELX):
-+	case R('6', R_X86_64_REX_GOTPCRELX):
- 	case R('8', R_386_32):
- 	case R('8', R_386_PC32):
- 	case R('8', R_386_GOT32):
- 	case R('8', R_386_PLT32):
- 	case R('8', R_386_GOTOFF):
- 	case R('8', R_386_GOTPC):
-+	case R('8', R_386_GOT32X):
- 		*siz = 4;
- 		break;
- 	case R('6', R_X86_64_64):
--- 
-2.7.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4/syslog.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4/syslog.patch
deleted file mode 100644
index 29be06f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.4/syslog.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-Add timeouts to logger
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-
-diff -r -u go/src/log/syslog/syslog.go /home/achang/GOCOPY/go/src/log/syslog/syslog.go
---- go/src/log/syslog/syslog.go	2013-11-28 13:38:28.000000000 -0800
-+++ /home/achang/GOCOPY/go/src/log/syslog/syslog.go	2014-10-03 11:44:37.710403200 -0700
-@@ -33,6 +33,9 @@
- const severityMask = 0x07
- const facilityMask = 0xf8
- 
-+var writeTimeout = 1 * time.Second
-+var connectTimeout = 1 * time.Second
-+
- const (
- 	// Severity.
- 
-@@ -100,6 +103,7 @@
- type serverConn interface {
- 	writeString(p Priority, hostname, tag, s, nl string) error
- 	close() error
-+	setWriteDeadline(t time.Time) error
- }
- 
- type netConn struct {
-@@ -273,7 +277,11 @@
- 		nl = "\n"
- 	}
- 
--	err := w.conn.writeString(p, w.hostname, w.tag, msg, nl)
-+	err := w.conn.setWriteDeadline(time.Now().Add(writeTimeout))
-+	if err != nil {
-+		return 0, err
-+	}
-+	err = w.conn.writeString(p, w.hostname, w.tag, msg, nl)
- 	if err != nil {
- 		return 0, err
- 	}
-@@ -305,6 +313,10 @@
- 	return n.conn.Close()
- }
- 
-+func (n *netConn) setWriteDeadline(t time.Time) error {
-+	return n.conn.SetWriteDeadline(t)
-+}
-+
- // NewLogger creates a log.Logger whose output is written to
- // the system log service with the specified priority. The logFlag
- // argument is the flag set passed through to log.New to create
-diff -r -u go/src/log/syslog/syslog_unix.go /home/achang/GOCOPY/go/src/log/syslog/syslog_unix.go
---- go/src/log/syslog/syslog_unix.go	2013-11-28 13:38:28.000000000 -0800
-+++ /home/achang/GOCOPY/go/src/log/syslog/syslog_unix.go	2014-10-03 11:44:39.010403175 -0700
-@@ -19,7 +19,7 @@
- 	logPaths := []string{"/dev/log", "/var/run/syslog"}
- 	for _, network := range logTypes {
- 		for _, path := range logPaths {
--			conn, err := net.Dial(network, path)
-+			conn, err := net.DialTimeout(network, path, connectTimeout)
- 			if err != nil {
- 				continue
- 			} else {
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6.inc
deleted file mode 100644
index 769c1d8..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6.inc
+++ /dev/null
@@ -1,19 +0,0 @@
-require go-common.inc
-
-PV = "1.6.3"
-GO_BASEVERSION = "1.6"
-FILESEXTRAPATHS_prepend := "${FILE_DIRNAME}/go-${GO_BASEVERSION}:"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=591778525c869cdde0ab5a1bf283cd81"
-
-SRC_URI += "\
-       file://armhf-elf-header.patch \
-       file://syslog.patch \
-       file://fix-target-cc-for-build.patch \
-       file://fix-cc-handling.patch \
-       file://split-host-and-target-build.patch \
-       file://gotooldir.patch \
-"
-SRC_URI[md5sum] = "bf3fce6ccaadd310159c9e874220e2a2"
-SRC_URI[sha256sum] = "6326aeed5f86cf18f16d6dc831405614f855e2d416a91fd3fdc334f772345b00"
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/armhf-elf-header.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/armhf-elf-header.patch
deleted file mode 100644
index 1e3a16b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/armhf-elf-header.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-Encode arm EABI ( hard/soft ) calling convention in ELF header
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/cmd/link/internal/ld/elf.go
-===================================================================
---- go.orig/src/cmd/link/internal/ld/elf.go
-+++ go/src/cmd/link/internal/ld/elf.go
-@@ -827,7 +827,13 @@
- 	// 32-bit architectures
- 	case '5':
- 		// we use EABI on both linux/arm and freebsd/arm.
--		if HEADTYPE == obj.Hlinux || HEADTYPE == obj.Hfreebsd {
-+		if HEADTYPE == obj.Hlinux {
-+			if Ctxt.Goarm == 7 {
-+				ehdr.flags = 0x5000402 // has entry point, Version5 EABI, hard float
-+			} else {
-+				ehdr.flags = 0x5000202 // has entry point, Version5 EABI, soft float
-+			}
-+		} else if HEADTYPE == obj.Hfreebsd {
- 			// We set a value here that makes no indication of which
- 			// float ABI the object uses, because this is information
- 			// used by the dynamic linker to compare executables and
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/fix-cc-handling.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/fix-cc-handling.patch
deleted file mode 100644
index 983323a..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/fix-cc-handling.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-Accept CC with multiple words in its name
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/cmd/go/build.go
-===================================================================
---- go.orig/src/cmd/go/build.go	2015-07-29 14:48:40.323185807 -0700
-+++ go/src/cmd/go/build.go	2015-07-30 07:37:40.529818586 -0700
-@@ -2805,12 +2805,24 @@
- 	return b.ccompilerCmd("CC", defaultCC, objdir)
- }
- 
-+// gccCmd returns a gcc command line prefix
-+// defaultCC is defined in zdefaultcc.go, written by cmd/dist.
-+func (b *builder) gccCmdForReal() []string {
-+	return envList("CC", defaultCC)
-+}
-+
- // gxxCmd returns a g++ command line prefix
- // defaultCXX is defined in zdefaultcc.go, written by cmd/dist.
- func (b *builder) gxxCmd(objdir string) []string {
- 	return b.ccompilerCmd("CXX", defaultCXX, objdir)
- }
- 
-+// gxxCmd returns a g++ command line prefix
-+// defaultCXX is defined in zdefaultcc.go, written by cmd/dist.
-+func (b *builder) gxxCmdForReal() []string {
-+	return envList("CXX", defaultCXX)
-+}
-+
- // ccompilerCmd returns a command line prefix for the given environment
- // variable and using the default command when the variable is empty.
- func (b *builder) ccompilerCmd(envvar, defcmd, objdir string) []string {
-Index: go/src/cmd/go/env.go
-===================================================================
---- go.orig/src/cmd/go/env.go	2015-07-29 14:48:40.323185807 -0700
-+++ go/src/cmd/go/env.go	2015-07-30 07:40:54.461655721 -0700
-@@ -52,10 +52,9 @@
- 
- 	if goos != "plan9" {
- 		cmd := b.gccCmd(".")
--		env = append(env, envVar{"CC", cmd[0]})
-+		env = append(env, envVar{"CC", strings.Join(b.gccCmdForReal(), " ")})
- 		env = append(env, envVar{"GOGCCFLAGS", strings.Join(cmd[3:], " ")})
--		cmd = b.gxxCmd(".")
--		env = append(env, envVar{"CXX", cmd[0]})
-+		env = append(env, envVar{"CXX", strings.Join(b.gxxCmdForReal(), " ")})
- 	}
- 
- 	if buildContext.CgoEnabled {
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/fix-target-cc-for-build.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/fix-target-cc-for-build.patch
deleted file mode 100644
index 2f6156e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/fix-target-cc-for-build.patch
+++ /dev/null
@@ -1,17 +0,0 @@
-Put Quotes around CC_FOR_TARGET since it can be mutliple words e.g. in OE
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/make.bash
-===================================================================
---- go.orig/src/make.bash	2015-07-29 13:28:11.334031696 -0700
-+++ go/src/make.bash	2015-07-29 13:36:55.814465630 -0700
-@@ -158,7 +158,7 @@
- fi
- 
- echo "##### Building packages and commands for $GOOS/$GOARCH."
--CC=$CC_FOR_TARGET "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
-+CC="$CC_FOR_TARGET" "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
- echo
- 
- rm -f "$GOTOOLDIR"/go_bootstrap
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/gotooldir.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/gotooldir.patch
deleted file mode 100644
index 9467025..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/gotooldir.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-Define tooldir in relation to GOTOOLDIR env var
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/go/build/build.go
-===================================================================
---- go.orig/src/go/build/build.go
-+++ go/src/go/build/build.go
-@@ -1388,7 +1388,7 @@ func init() {
- }
- 
- // ToolDir is the directory containing build tools.
--var ToolDir = filepath.Join(runtime.GOROOT(), "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH)
-+var ToolDir = envOr("GOTOOLDIR", filepath.Join(runtime.GOROOT(), "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH))
- 
- // IsLocalImport reports whether the import path is
- // a local import path, like ".", "..", "./foo", or "../foo".
-Index: go/src/cmd/go/build.go
-===================================================================
---- go.orig/src/cmd/go/build.go
-+++ go/src/cmd/go/build.go
-@@ -1312,7 +1312,7 @@ func (b *builder) build(a *action) (err
- 		}
- 
- 		cgoExe := tool("cgo")
--		if a.cgo != nil && a.cgo.target != "" {
-+		if a.cgo != nil && a.cgo.target != "" && os.Getenv("GOTOOLDIR") == "" {
- 			cgoExe = a.cgo.target
- 		}
- 		outGo, outObj, err := b.cgo(a.p, cgoExe, obj, pcCFLAGS, pcLDFLAGS, cgofiles, gccfiles, cxxfiles, a.p.MFiles)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/split-host-and-target-build.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/split-host-and-target-build.patch
deleted file mode 100644
index afbae02..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/split-host-and-target-build.patch
+++ /dev/null
@@ -1,63 +0,0 @@
-Add new option --target-only to build target components
-Separates the host and target pieces of build
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/make.bash
-===================================================================
---- go.orig/src/make.bash
-+++ go/src/make.bash
-@@ -143,12 +143,23 @@ if [ "$1" = "--no-clean" ]; then
- 	buildall=""
- 	shift
- fi
--./cmd/dist/dist bootstrap $buildall $GO_DISTFLAGS -v # builds go_bootstrap
--# Delay move of dist tool to now, because bootstrap may clear tool directory.
--mv cmd/dist/dist "$GOTOOLDIR"/dist
--echo
- 
--if [ "$GOHOSTARCH" != "$GOARCH" -o "$GOHOSTOS" != "$GOOS" ]; then
-+do_host_build="yes"
-+do_target_build="yes"
-+if [ "$1" = "--target-only" ]; then
-+	do_host_build="no"
-+	shift
-+elif [ "$1" = "--host-only" ]; then
-+	do_target_build="no"
-+	shift
-+fi
-+
-+if [ "$do_host_build" = "yes" ]; then
-+	./cmd/dist/dist bootstrap $buildall $GO_DISTFLAGS -v # builds go_bootstrap
-+	# Delay move of dist tool to now, because bootstrap may clear tool directory.
-+	mv cmd/dist/dist "$GOTOOLDIR"/dist
-+	echo
-+
- 	echo "##### Building packages and commands for host, $GOHOSTOS/$GOHOSTARCH."
- 	# CC_FOR_TARGET is recorded as the default compiler for the go tool. When building for the host, however,
- 	# use the host compiler, CC, from `cmd/dist/dist env` instead.
-@@ -157,11 +168,20 @@ if [ "$GOHOSTARCH" != "$GOARCH" -o "$GOH
- 	echo
- fi
- 
--echo "##### Building packages and commands for $GOOS/$GOARCH."
--CC="$CC_FOR_TARGET" "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
--echo
-+if [ "$do_target_build" = "yes" ]; then
-+    GO_INSTALL="${GO_TARGET_INSTALL:-std cmd}"
-+    echo "##### Building packages and commands for $GOOS/$GOARCH."
-+    if [ "$GOHOSTOS" = "$GOOS" -a "$GOHOSTARCH" = "$GOARCH" -a "$do_host_build" = "yes" ]; then
-+	rm -rf ./host-tools
-+	mkdir ./host-tools
-+	mv "$GOTOOLDIR"/* ./host-tools
-+	GOTOOLDIR="$PWD/host-tools"
-+    fi
-+    GOTOOLDIR="$GOTOOLDIR" CC="$CC_FOR_TARGET" "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v ${GO_INSTALL}
-+    echo
- 
--rm -f "$GOTOOLDIR"/go_bootstrap
-+    rm -f "$GOTOOLDIR"/go_bootstrap
-+fi
- 
- if [ "$1" != "--no-banner" ]; then
- 	"$GOTOOLDIR"/dist banner
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/syslog.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/syslog.patch
deleted file mode 100644
index 29be06f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.6/syslog.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-Add timeouts to logger
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-
-diff -r -u go/src/log/syslog/syslog.go /home/achang/GOCOPY/go/src/log/syslog/syslog.go
---- go/src/log/syslog/syslog.go	2013-11-28 13:38:28.000000000 -0800
-+++ /home/achang/GOCOPY/go/src/log/syslog/syslog.go	2014-10-03 11:44:37.710403200 -0700
-@@ -33,6 +33,9 @@
- const severityMask = 0x07
- const facilityMask = 0xf8
- 
-+var writeTimeout = 1 * time.Second
-+var connectTimeout = 1 * time.Second
-+
- const (
- 	// Severity.
- 
-@@ -100,6 +103,7 @@
- type serverConn interface {
- 	writeString(p Priority, hostname, tag, s, nl string) error
- 	close() error
-+	setWriteDeadline(t time.Time) error
- }
- 
- type netConn struct {
-@@ -273,7 +277,11 @@
- 		nl = "\n"
- 	}
- 
--	err := w.conn.writeString(p, w.hostname, w.tag, msg, nl)
-+	err := w.conn.setWriteDeadline(time.Now().Add(writeTimeout))
-+	if err != nil {
-+		return 0, err
-+	}
-+	err = w.conn.writeString(p, w.hostname, w.tag, msg, nl)
- 	if err != nil {
- 		return 0, err
- 	}
-@@ -305,6 +313,10 @@
- 	return n.conn.Close()
- }
- 
-+func (n *netConn) setWriteDeadline(t time.Time) error {
-+	return n.conn.SetWriteDeadline(t)
-+}
-+
- // NewLogger creates a log.Logger whose output is written to
- // the system log service with the specified priority. The logFlag
- // argument is the flag set passed through to log.New to create
-diff -r -u go/src/log/syslog/syslog_unix.go /home/achang/GOCOPY/go/src/log/syslog/syslog_unix.go
---- go/src/log/syslog/syslog_unix.go	2013-11-28 13:38:28.000000000 -0800
-+++ /home/achang/GOCOPY/go/src/log/syslog/syslog_unix.go	2014-10-03 11:44:39.010403175 -0700
-@@ -19,7 +19,7 @@
- 	logPaths := []string{"/dev/log", "/var/run/syslog"}
- 	for _, network := range logTypes {
- 		for _, path := range logPaths {
--			conn, err := net.Dial(network, path)
-+			conn, err := net.DialTimeout(network, path, connectTimeout)
- 			if err != nil {
- 				continue
- 			} else {
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7.inc
deleted file mode 100644
index 5c3004e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7.inc
+++ /dev/null
@@ -1,19 +0,0 @@
-require go-common.inc
-
-PV = "1.7.4"
-GO_BASEVERSION = "1.7"
-FILESEXTRAPATHS_prepend := "${FILE_DIRNAME}/go-${GO_BASEVERSION}:"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=5d4950ecb7b26d2c5e4e7b4e0dd74707"
-
-SRC_URI += "\
-       file://armhf-elf-header.patch \
-       file://syslog.patch \
-       file://fix-target-cc-for-build.patch \
-       file://fix-cc-handling.patch \
-       file://split-host-and-target-build.patch \
-       file://gotooldir.patch \
-"
-SRC_URI[md5sum] = "49c1076428a5d3b5ad7ac65233fcca2f"
-SRC_URI[sha256sum] = "4c189111e9ba651a2bb3ee868aa881fab36b2f2da3409e80885ca758a6b614cc"
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/armhf-elf-header.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/armhf-elf-header.patch
deleted file mode 100644
index 1e3a16b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/armhf-elf-header.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-Encode arm EABI ( hard/soft ) calling convention in ELF header
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/cmd/link/internal/ld/elf.go
-===================================================================
---- go.orig/src/cmd/link/internal/ld/elf.go
-+++ go/src/cmd/link/internal/ld/elf.go
-@@ -827,7 +827,13 @@
- 	// 32-bit architectures
- 	case '5':
- 		// we use EABI on both linux/arm and freebsd/arm.
--		if HEADTYPE == obj.Hlinux || HEADTYPE == obj.Hfreebsd {
-+		if HEADTYPE == obj.Hlinux {
-+			if Ctxt.Goarm == 7 {
-+				ehdr.flags = 0x5000402 // has entry point, Version5 EABI, hard float
-+			} else {
-+				ehdr.flags = 0x5000202 // has entry point, Version5 EABI, soft float
-+			}
-+		} else if HEADTYPE == obj.Hfreebsd {
- 			// We set a value here that makes no indication of which
- 			// float ABI the object uses, because this is information
- 			// used by the dynamic linker to compare executables and
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/fix-cc-handling.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/fix-cc-handling.patch
deleted file mode 100644
index a67caf4..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/fix-cc-handling.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-Accept CC with multiple words in its name
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/cmd/go/build.go
-===================================================================
---- go.orig/src/cmd/go/build.go
-+++ go/src/cmd/go/build.go
-@@ -2991,12 +2991,24 @@ func (b *builder) gccCmd(objdir string)
- 	return b.ccompilerCmd("CC", defaultCC, objdir)
- }
- 
-+// gccCmd returns a gcc command line prefix
-+// defaultCC is defined in zdefaultcc.go, written by cmd/dist.
-+func (b *builder) gccCmdForReal() []string {
-+	return envList("CC", defaultCC)
-+}
-+
- // gxxCmd returns a g++ command line prefix
- // defaultCXX is defined in zdefaultcc.go, written by cmd/dist.
- func (b *builder) gxxCmd(objdir string) []string {
- 	return b.ccompilerCmd("CXX", defaultCXX, objdir)
- }
- 
-+// gxxCmd returns a g++ command line prefix
-+// defaultCXX is defined in zdefaultcc.go, written by cmd/dist.
-+func (b *builder) gxxCmdForReal() []string {
-+	return envList("CXX", defaultCXX)
-+}
-+
- // gfortranCmd returns a gfortran command line prefix.
- func (b *builder) gfortranCmd(objdir string) []string {
- 	return b.ccompilerCmd("FC", "gfortran", objdir)
-Index: go/src/cmd/go/env.go
-===================================================================
---- go.orig/src/cmd/go/env.go
-+++ go/src/cmd/go/env.go
-@@ -51,10 +51,9 @@ func mkEnv() []envVar {
- 
- 	if goos != "plan9" {
- 		cmd := b.gccCmd(".")
--		env = append(env, envVar{"CC", cmd[0]})
-+		env = append(env, envVar{"CC", strings.Join(b.gccCmdForReal(), " ")})
- 		env = append(env, envVar{"GOGCCFLAGS", strings.Join(cmd[3:], " ")})
--		cmd = b.gxxCmd(".")
--		env = append(env, envVar{"CXX", cmd[0]})
-+		env = append(env, envVar{"CXX", strings.Join(b.gxxCmdForReal(), " ")})
- 	}
- 
- 	if buildContext.CgoEnabled {
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/fix-target-cc-for-build.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/fix-target-cc-for-build.patch
deleted file mode 100644
index 2f6156e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/fix-target-cc-for-build.patch
+++ /dev/null
@@ -1,17 +0,0 @@
-Put Quotes around CC_FOR_TARGET since it can be mutliple words e.g. in OE
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/make.bash
-===================================================================
---- go.orig/src/make.bash	2015-07-29 13:28:11.334031696 -0700
-+++ go/src/make.bash	2015-07-29 13:36:55.814465630 -0700
-@@ -158,7 +158,7 @@
- fi
- 
- echo "##### Building packages and commands for $GOOS/$GOARCH."
--CC=$CC_FOR_TARGET "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
-+CC="$CC_FOR_TARGET" "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
- echo
- 
- rm -f "$GOTOOLDIR"/go_bootstrap
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/gotooldir.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/gotooldir.patch
deleted file mode 100644
index 9467025..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/gotooldir.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-Define tooldir in relation to GOTOOLDIR env var
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/go/build/build.go
-===================================================================
---- go.orig/src/go/build/build.go
-+++ go/src/go/build/build.go
-@@ -1388,7 +1388,7 @@ func init() {
- }
- 
- // ToolDir is the directory containing build tools.
--var ToolDir = filepath.Join(runtime.GOROOT(), "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH)
-+var ToolDir = envOr("GOTOOLDIR", filepath.Join(runtime.GOROOT(), "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH))
- 
- // IsLocalImport reports whether the import path is
- // a local import path, like ".", "..", "./foo", or "../foo".
-Index: go/src/cmd/go/build.go
-===================================================================
---- go.orig/src/cmd/go/build.go
-+++ go/src/cmd/go/build.go
-@@ -1312,7 +1312,7 @@ func (b *builder) build(a *action) (err
- 		}
- 
- 		cgoExe := tool("cgo")
--		if a.cgo != nil && a.cgo.target != "" {
-+		if a.cgo != nil && a.cgo.target != "" && os.Getenv("GOTOOLDIR") == "" {
- 			cgoExe = a.cgo.target
- 		}
- 		outGo, outObj, err := b.cgo(a.p, cgoExe, obj, pcCFLAGS, pcLDFLAGS, cgofiles, gccfiles, cxxfiles, a.p.MFiles)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/split-host-and-target-build.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/split-host-and-target-build.patch
deleted file mode 100644
index b0dd95b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/split-host-and-target-build.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-Add new option --target-only to build target components
-Separates the host and target pieces of build
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/make.bash
-===================================================================
---- go.orig/src/make.bash
-+++ go/src/make.bash
-@@ -154,13 +154,22 @@ if [ "$1" = "--no-clean" ]; then
- 	buildall=""
- 	shift
- fi
--./cmd/dist/dist bootstrap $buildall $GO_DISTFLAGS -v # builds go_bootstrap
-+do_host_build="yes"
-+do_target_build="yes"
-+if [ "$1" = "--target-only" ]; then
-+	do_host_build="no"
-+	shift
-+elif [ "$1" = "--host-only" ]; then
-+	do_target_build="no"
-+	shift
-+fi
- 
--# Delay move of dist tool to now, because bootstrap may clear tool directory.
--mv cmd/dist/dist "$GOTOOLDIR"/dist
--echo
-+if [ "$do_host_build" = "yes" ]; then
-+	./cmd/dist/dist bootstrap $buildall $GO_DISTFLAGS -v # builds go_bootstrap
-+	# Delay move of dist tool to now, because bootstrap may clear tool directory.
-+	mv cmd/dist/dist "$GOTOOLDIR"/dist
-+	echo
- 
--if [ "$GOHOSTARCH" != "$GOARCH" -o "$GOHOSTOS" != "$GOOS" ]; then
- 	echo "##### Building packages and commands for host, $GOHOSTOS/$GOHOSTARCH."
- 	# CC_FOR_TARGET is recorded as the default compiler for the go tool. When building for the host, however,
- 	# use the host compiler, CC, from `cmd/dist/dist env` instead.
-@@ -169,11 +178,20 @@ if [ "$GOHOSTARCH" != "$GOARCH" -o "$GOH
- 	echo
- fi
- 
--echo "##### Building packages and commands for $GOOS/$GOARCH."
--CC="$CC_FOR_TARGET" "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
--echo
-+if [ "$do_target_build" = "yes" ]; then
-+    GO_INSTALL="${GO_TARGET_INSTALL:-std cmd}"
-+    echo "##### Building packages and commands for $GOOS/$GOARCH."
-+    if [ "$GOHOSTOS" = "$GOOS" -a "$GOHOSTARCH" = "$GOARCH" -a "$do_host_build" = "yes" ]; then
-+	rm -rf ./host-tools
-+	mkdir ./host-tools
-+	mv "$GOTOOLDIR"/* ./host-tools
-+	GOTOOLDIR="$PWD/host-tools"
-+    fi
-+    GOTOOLDIR="$GOTOOLDIR" CC="$CC_FOR_TARGET" "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v ${GO_INSTALL}
-+    echo
- 
--rm -f "$GOTOOLDIR"/go_bootstrap
-+    rm -f "$GOTOOLDIR"/go_bootstrap
-+fi
- 
- if [ "$1" != "--no-banner" ]; then
- 	"$GOTOOLDIR"/dist banner
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/syslog.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/syslog.patch
deleted file mode 100644
index 29be06f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.7/syslog.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-Add timeouts to logger
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-
-diff -r -u go/src/log/syslog/syslog.go /home/achang/GOCOPY/go/src/log/syslog/syslog.go
---- go/src/log/syslog/syslog.go	2013-11-28 13:38:28.000000000 -0800
-+++ /home/achang/GOCOPY/go/src/log/syslog/syslog.go	2014-10-03 11:44:37.710403200 -0700
-@@ -33,6 +33,9 @@
- const severityMask = 0x07
- const facilityMask = 0xf8
- 
-+var writeTimeout = 1 * time.Second
-+var connectTimeout = 1 * time.Second
-+
- const (
- 	// Severity.
- 
-@@ -100,6 +103,7 @@
- type serverConn interface {
- 	writeString(p Priority, hostname, tag, s, nl string) error
- 	close() error
-+	setWriteDeadline(t time.Time) error
- }
- 
- type netConn struct {
-@@ -273,7 +277,11 @@
- 		nl = "\n"
- 	}
- 
--	err := w.conn.writeString(p, w.hostname, w.tag, msg, nl)
-+	err := w.conn.setWriteDeadline(time.Now().Add(writeTimeout))
-+	if err != nil {
-+		return 0, err
-+	}
-+	err = w.conn.writeString(p, w.hostname, w.tag, msg, nl)
- 	if err != nil {
- 		return 0, err
- 	}
-@@ -305,6 +313,10 @@
- 	return n.conn.Close()
- }
- 
-+func (n *netConn) setWriteDeadline(t time.Time) error {
-+	return n.conn.SetWriteDeadline(t)
-+}
-+
- // NewLogger creates a log.Logger whose output is written to
- // the system log service with the specified priority. The logFlag
- // argument is the flag set passed through to log.New to create
-diff -r -u go/src/log/syslog/syslog_unix.go /home/achang/GOCOPY/go/src/log/syslog/syslog_unix.go
---- go/src/log/syslog/syslog_unix.go	2013-11-28 13:38:28.000000000 -0800
-+++ /home/achang/GOCOPY/go/src/log/syslog/syslog_unix.go	2014-10-03 11:44:39.010403175 -0700
-@@ -19,7 +19,7 @@
- 	logPaths := []string{"/dev/log", "/var/run/syslog"}
- 	for _, network := range logTypes {
- 		for _, path := range logPaths {
--			conn, err := net.Dial(network, path)
-+			conn, err := net.DialTimeout(network, path, connectTimeout)
- 			if err != nil {
- 				continue
- 			} else {
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8.inc
deleted file mode 100644
index 5c376a2..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8.inc
+++ /dev/null
@@ -1,19 +0,0 @@
-require go-common.inc
-
-GOMINOR = ""
-GO_BASEVERSION = "1.8"
-PV .= "${GOMINOR}"
-FILESEXTRAPATHS_prepend := "${FILE_DIRNAME}/go-${GO_BASEVERSION}:"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=5d4950ecb7b26d2c5e4e7b4e0dd74707"
-
-SRC_URI += "\
-       file://armhf-elf-header.patch \
-       file://syslog.patch \
-       file://fix-target-cc-for-build.patch \
-       file://fix-cc-handling.patch \
-       file://split-host-and-target-build.patch \
-       file://gotooldir.patch \
-"
-SRC_URI[md5sum] = "7743960c968760437b6e39093cfe6f67"
-SRC_URI[sha256sum] = "406865f587b44be7092f206d73fc1de252600b79b3cacc587b74b5ef5c623596"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/armhf-elf-header.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/armhf-elf-header.patch
deleted file mode 100644
index 3508838..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/armhf-elf-header.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-Encode arm EABI ( hard/soft ) calling convention in ELF header
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/cmd/link/internal/ld/elf.go
-===================================================================
---- go.orig/src/cmd/link/internal/ld/elf.go
-+++ go/src/cmd/link/internal/ld/elf.go
-@@ -950,7 +950,13 @@ func Elfinit(ctxt *Link) {
- 	case sys.ARM, sys.MIPS:
- 		if SysArch.Family == sys.ARM {
- 			// we use EABI on linux/arm, freebsd/arm, netbsd/arm.
--			if Headtype == obj.Hlinux || Headtype == obj.Hfreebsd || Headtype == obj.Hnetbsd {
-+			if Headtype == obj.Hlinux {
-+				if obj.GOARM == 7 {
-+					ehdr.flags = 0x5000402 // has entry point, Version5 EABI, hard float
-+				} else {
-+					ehdr.flags = 0x5000202 // has entry point, Version5 EABI, soft float
-+				}
-+			} else if Headtype == obj.Hfreebsd || Headtype == obj.Hnetbsd {
- 				// We set a value here that makes no indication of which
- 				// float ABI the object uses, because this is information
- 				// used by the dynamic linker to compare executables and
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/fix-cc-handling.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/fix-cc-handling.patch
deleted file mode 100644
index dc9b811..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/fix-cc-handling.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-Accept CC with multiple words in its name
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/cmd/go/build.go
-===================================================================
---- go.orig/src/cmd/go/build.go
-+++ go/src/cmd/go/build.go
-@@ -3100,12 +3100,24 @@ func (b *builder) gccCmd(objdir string)
- 	return b.ccompilerCmd("CC", defaultCC, objdir)
- }
- 
-+// gccCmd returns a gcc command line prefix
-+// defaultCC is defined in zdefaultcc.go, written by cmd/dist.
-+func (b *builder) gccCmdForReal() []string {
-+	return envList("CC", defaultCC)
-+}
-+
- // gxxCmd returns a g++ command line prefix
- // defaultCXX is defined in zdefaultcc.go, written by cmd/dist.
- func (b *builder) gxxCmd(objdir string) []string {
- 	return b.ccompilerCmd("CXX", defaultCXX, objdir)
- }
- 
-+// gxxCmd returns a g++ command line prefix
-+// defaultCXX is defined in zdefaultcc.go, written by cmd/dist.
-+func (b *builder) gxxCmdForReal() []string {
-+	return envList("CXX", defaultCXX)
-+}
-+
- // gfortranCmd returns a gfortran command line prefix.
- func (b *builder) gfortranCmd(objdir string) []string {
- 	return b.ccompilerCmd("FC", "gfortran", objdir)
-Index: go/src/cmd/go/env.go
-===================================================================
---- go.orig/src/cmd/go/env.go
-+++ go/src/cmd/go/env.go
-@@ -63,10 +63,9 @@ func mkEnv() []envVar {
- 	}
- 
- 	cmd := b.gccCmd(".")
--	env = append(env, envVar{"CC", cmd[0]})
-+	env = append(env, envVar{"CC", strings.Join(b.gccCmdForReal(), " ")})
- 	env = append(env, envVar{"GOGCCFLAGS", strings.Join(cmd[3:], " ")})
--	cmd = b.gxxCmd(".")
--	env = append(env, envVar{"CXX", cmd[0]})
-+	env = append(env, envVar{"CXX", strings.Join(b.gxxCmdForReal(), " ")})
- 
- 	if buildContext.CgoEnabled {
- 		env = append(env, envVar{"CGO_ENABLED", "1"})
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/fix-target-cc-for-build.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/fix-target-cc-for-build.patch
deleted file mode 100644
index 2f6156e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/fix-target-cc-for-build.patch
+++ /dev/null
@@ -1,17 +0,0 @@
-Put Quotes around CC_FOR_TARGET since it can be mutliple words e.g. in OE
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/make.bash
-===================================================================
---- go.orig/src/make.bash	2015-07-29 13:28:11.334031696 -0700
-+++ go/src/make.bash	2015-07-29 13:36:55.814465630 -0700
-@@ -158,7 +158,7 @@
- fi
- 
- echo "##### Building packages and commands for $GOOS/$GOARCH."
--CC=$CC_FOR_TARGET "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
-+CC="$CC_FOR_TARGET" "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
- echo
- 
- rm -f "$GOTOOLDIR"/go_bootstrap
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/gotooldir.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/gotooldir.patch
deleted file mode 100644
index 9467025..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/gotooldir.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-Define tooldir in relation to GOTOOLDIR env var
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/go/build/build.go
-===================================================================
---- go.orig/src/go/build/build.go
-+++ go/src/go/build/build.go
-@@ -1388,7 +1388,7 @@ func init() {
- }
- 
- // ToolDir is the directory containing build tools.
--var ToolDir = filepath.Join(runtime.GOROOT(), "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH)
-+var ToolDir = envOr("GOTOOLDIR", filepath.Join(runtime.GOROOT(), "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH))
- 
- // IsLocalImport reports whether the import path is
- // a local import path, like ".", "..", "./foo", or "../foo".
-Index: go/src/cmd/go/build.go
-===================================================================
---- go.orig/src/cmd/go/build.go
-+++ go/src/cmd/go/build.go
-@@ -1312,7 +1312,7 @@ func (b *builder) build(a *action) (err
- 		}
- 
- 		cgoExe := tool("cgo")
--		if a.cgo != nil && a.cgo.target != "" {
-+		if a.cgo != nil && a.cgo.target != "" && os.Getenv("GOTOOLDIR") == "" {
- 			cgoExe = a.cgo.target
- 		}
- 		outGo, outObj, err := b.cgo(a.p, cgoExe, obj, pcCFLAGS, pcLDFLAGS, cgofiles, gccfiles, cxxfiles, a.p.MFiles)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/split-host-and-target-build.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/split-host-and-target-build.patch
deleted file mode 100644
index b0dd95b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/split-host-and-target-build.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-Add new option --target-only to build target components
-Separates the host and target pieces of build
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-Index: go/src/make.bash
-===================================================================
---- go.orig/src/make.bash
-+++ go/src/make.bash
-@@ -154,13 +154,22 @@ if [ "$1" = "--no-clean" ]; then
- 	buildall=""
- 	shift
- fi
--./cmd/dist/dist bootstrap $buildall $GO_DISTFLAGS -v # builds go_bootstrap
-+do_host_build="yes"
-+do_target_build="yes"
-+if [ "$1" = "--target-only" ]; then
-+	do_host_build="no"
-+	shift
-+elif [ "$1" = "--host-only" ]; then
-+	do_target_build="no"
-+	shift
-+fi
- 
--# Delay move of dist tool to now, because bootstrap may clear tool directory.
--mv cmd/dist/dist "$GOTOOLDIR"/dist
--echo
-+if [ "$do_host_build" = "yes" ]; then
-+	./cmd/dist/dist bootstrap $buildall $GO_DISTFLAGS -v # builds go_bootstrap
-+	# Delay move of dist tool to now, because bootstrap may clear tool directory.
-+	mv cmd/dist/dist "$GOTOOLDIR"/dist
-+	echo
- 
--if [ "$GOHOSTARCH" != "$GOARCH" -o "$GOHOSTOS" != "$GOOS" ]; then
- 	echo "##### Building packages and commands for host, $GOHOSTOS/$GOHOSTARCH."
- 	# CC_FOR_TARGET is recorded as the default compiler for the go tool. When building for the host, however,
- 	# use the host compiler, CC, from `cmd/dist/dist env` instead.
-@@ -169,11 +178,20 @@ if [ "$GOHOSTARCH" != "$GOARCH" -o "$GOH
- 	echo
- fi
- 
--echo "##### Building packages and commands for $GOOS/$GOARCH."
--CC="$CC_FOR_TARGET" "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
--echo
-+if [ "$do_target_build" = "yes" ]; then
-+    GO_INSTALL="${GO_TARGET_INSTALL:-std cmd}"
-+    echo "##### Building packages and commands for $GOOS/$GOARCH."
-+    if [ "$GOHOSTOS" = "$GOOS" -a "$GOHOSTARCH" = "$GOARCH" -a "$do_host_build" = "yes" ]; then
-+	rm -rf ./host-tools
-+	mkdir ./host-tools
-+	mv "$GOTOOLDIR"/* ./host-tools
-+	GOTOOLDIR="$PWD/host-tools"
-+    fi
-+    GOTOOLDIR="$GOTOOLDIR" CC="$CC_FOR_TARGET" "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v ${GO_INSTALL}
-+    echo
- 
--rm -f "$GOTOOLDIR"/go_bootstrap
-+    rm -f "$GOTOOLDIR"/go_bootstrap
-+fi
- 
- if [ "$1" != "--no-banner" ]; then
- 	"$GOTOOLDIR"/dist banner
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/syslog.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/syslog.patch
deleted file mode 100644
index 29be06f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.8/syslog.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-Add timeouts to logger
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-
-diff -r -u go/src/log/syslog/syslog.go /home/achang/GOCOPY/go/src/log/syslog/syslog.go
---- go/src/log/syslog/syslog.go	2013-11-28 13:38:28.000000000 -0800
-+++ /home/achang/GOCOPY/go/src/log/syslog/syslog.go	2014-10-03 11:44:37.710403200 -0700
-@@ -33,6 +33,9 @@
- const severityMask = 0x07
- const facilityMask = 0xf8
- 
-+var writeTimeout = 1 * time.Second
-+var connectTimeout = 1 * time.Second
-+
- const (
- 	// Severity.
- 
-@@ -100,6 +103,7 @@
- type serverConn interface {
- 	writeString(p Priority, hostname, tag, s, nl string) error
- 	close() error
-+	setWriteDeadline(t time.Time) error
- }
- 
- type netConn struct {
-@@ -273,7 +277,11 @@
- 		nl = "\n"
- 	}
- 
--	err := w.conn.writeString(p, w.hostname, w.tag, msg, nl)
-+	err := w.conn.setWriteDeadline(time.Now().Add(writeTimeout))
-+	if err != nil {
-+		return 0, err
-+	}
-+	err = w.conn.writeString(p, w.hostname, w.tag, msg, nl)
- 	if err != nil {
- 		return 0, err
- 	}
-@@ -305,6 +313,10 @@
- 	return n.conn.Close()
- }
- 
-+func (n *netConn) setWriteDeadline(t time.Time) error {
-+	return n.conn.SetWriteDeadline(t)
-+}
-+
- // NewLogger creates a log.Logger whose output is written to
- // the system log service with the specified priority. The logFlag
- // argument is the flag set passed through to log.New to create
-diff -r -u go/src/log/syslog/syslog_unix.go /home/achang/GOCOPY/go/src/log/syslog/syslog_unix.go
---- go/src/log/syslog/syslog_unix.go	2013-11-28 13:38:28.000000000 -0800
-+++ /home/achang/GOCOPY/go/src/log/syslog/syslog_unix.go	2014-10-03 11:44:39.010403175 -0700
-@@ -19,7 +19,7 @@
- 	logPaths := []string{"/dev/log", "/var/run/syslog"}
- 	for _, network := range logTypes {
- 		for _, path := range logPaths {
--			conn, err := net.Dial(network, path)
-+			conn, err := net.DialTimeout(network, path, connectTimeout)
- 			if err != nil {
- 				continue
- 			} else {
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9.inc
new file mode 100644
index 0000000..7f12241
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9.inc
@@ -0,0 +1,23 @@
+require go-common.inc
+
+GO_BASEVERSION = "1.9"
+FILESEXTRAPATHS_prepend := "${FILE_DIRNAME}/go-${GO_BASEVERSION}:"
+
+LIC_FILES_CHKSUM = "file://LICENSE;md5=5d4950ecb7b26d2c5e4e7b4e0dd74707"
+
+SRC_URI += "\
+        file://0001-make.bash-quote-CC_FOR_TARGET.patch \
+        file://0002-cmd-go-fix-CC-and-CXX-environment-variable-construct.patch \
+        file://0003-make.bash-better-separate-host-and-target-builds.patch \
+        file://0004-cmd-go-allow-GOTOOLDIR-to-be-overridden-in-the-envir.patch \
+        file://0005-cmd-go-make-GOROOT-precious-by-default.patch \
+        file://0006-make.bash-add-GOTOOLDIR_BOOTSTRAP-environment-variab.patch \
+        file://0007-ld-add-soname-to-shareable-objects.patch \
+        file://0008-make.bash-add-GOHOSTxx-indirection-for-cross-canadia.patch \
+        file://0009-cmd-go-buildmode-pie-forces-external-linking-mode-on.patch \
+        file://0010-make.bash-override-CC-when-building-dist-and-go_boot.patch \
+"
+SRC_URI_append_libc-musl = " file://set-external-linker.patch"
+
+SRC_URI[main.md5sum] = "da2d44ea384076efec43ee1f8b7d45d2"
+SRC_URI[main.sha256sum] = "a4ab229028ed167ba1986825751463605264e44868362ca8e7accc8be057e993"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0001-make.bash-quote-CC_FOR_TARGET.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0001-make.bash-quote-CC_FOR_TARGET.patch
new file mode 100644
index 0000000..7800975
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0001-make.bash-quote-CC_FOR_TARGET.patch
@@ -0,0 +1,32 @@
+From d24734ad44006791fd48fc45ea34fe608ff672fb Mon Sep 17 00:00:00 2001
+From: Matt Madison <matt@madison.systems>
+Date: Wed, 13 Sep 2017 08:04:23 -0700
+Subject: [PATCH 1/7] make.bash: quote CC_FOR_TARGET
+
+For OE cross-builds, $CC_FOR_TARGET has more than
+one word and needs to be quoted.
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Signed-off-by: Matt Madison <matt@madison.systems>
+---
+ src/make.bash | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/make.bash b/src/make.bash
+index 71e7531..dcf3256 100755
+--- a/src/make.bash
++++ b/src/make.bash
+@@ -175,7 +175,7 @@ echo "##### Building packages and commands for $GOOS/$GOARCH."
+ 
+ old_bin_files=$(cd $GOROOT/bin && echo *)
+ 
+-CC=$CC_FOR_TARGET "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
++CC="$CC_FOR_TARGET" "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
+ 
+ # Check that there are no new files in $GOROOT/bin other than go and gofmt
+ # and $GOOS_$GOARCH (a directory used when cross-compiling).
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0002-cmd-go-fix-CC-and-CXX-environment-variable-construct.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0002-cmd-go-fix-CC-and-CXX-environment-variable-construct.patch
new file mode 100644
index 0000000..a4e4226
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0002-cmd-go-fix-CC-and-CXX-environment-variable-construct.patch
@@ -0,0 +1,67 @@
+From a7170d32a13aead608abd18996f6dab2e2a631b5 Mon Sep 17 00:00:00 2001
+From: Matt Madison <matt@madison.systems>
+Date: Wed, 13 Sep 2017 08:06:37 -0700
+Subject: [PATCH 2/7] cmd/go: fix CC and CXX environment variable construction
+
+For OE cross-builds, CC and CXX have multiple words, and
+we need their complete definitions when setting up the
+environment during Go builds.
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Signed-off-by: Matt Madison <matt@madison.systems>
+---
+ src/cmd/go/internal/envcmd/env.go |  4 ++--
+ src/cmd/go/internal/work/build.go | 12 ++++++++++++
+ 2 files changed, 14 insertions(+), 2 deletions(-)
+
+diff --git a/src/cmd/go/internal/envcmd/env.go b/src/cmd/go/internal/envcmd/env.go
+index 43d4334..529d21d 100644
+--- a/src/cmd/go/internal/envcmd/env.go
++++ b/src/cmd/go/internal/envcmd/env.go
+@@ -74,10 +74,10 @@ func MkEnv() []cfg.EnvVar {
+ 	}
+ 
+ 	cmd := b.GccCmd(".")
+-	env = append(env, cfg.EnvVar{Name: "CC", Value: cmd[0]})
++	env = append(env, cfg.EnvVar{Name: "CC", Value: strings.Join(b.GccCmdForReal(), " ")})
+ 	env = append(env, cfg.EnvVar{Name: "GOGCCFLAGS", Value: strings.Join(cmd[3:], " ")})
+ 	cmd = b.GxxCmd(".")
+-	env = append(env, cfg.EnvVar{Name: "CXX", Value: cmd[0]})
++	env = append(env, cfg.EnvVar{Name: "CXX", Value: strings.Join(b.GxxCmdForReal(), " ")})
+ 
+ 	if cfg.BuildContext.CgoEnabled {
+ 		env = append(env, cfg.EnvVar{Name: "CGO_ENABLED", Value: "1"})
+diff --git a/src/cmd/go/internal/work/build.go b/src/cmd/go/internal/work/build.go
+index 7d667ff..85df0b3 100644
+--- a/src/cmd/go/internal/work/build.go
++++ b/src/cmd/go/internal/work/build.go
+@@ -3127,12 +3127,24 @@ func (b *Builder) GccCmd(objdir string) []string {
+ 	return b.ccompilerCmd("CC", cfg.DefaultCC, objdir)
+ }
+ 
++// gccCmd returns a gcc command line prefix
++// defaultCC is defined in zdefaultcc.go, written by cmd/dist.
++func (b *Builder) GccCmdForReal() []string {
++	return envList("CC", cfg.DefaultCC)
++}
++
+ // gxxCmd returns a g++ command line prefix
+ // defaultCXX is defined in zdefaultcc.go, written by cmd/dist.
+ func (b *Builder) GxxCmd(objdir string) []string {
+ 	return b.ccompilerCmd("CXX", cfg.DefaultCXX, objdir)
+ }
+ 
++// gxxCmd returns a g++ command line prefix
++// defaultCXX is defined in zdefaultcc.go, written by cmd/dist.
++func (b *Builder) GxxCmdForReal() []string {
++	return envList("CXX", cfg.DefaultCXX)
++}
++
+ // gfortranCmd returns a gfortran command line prefix.
+ func (b *Builder) gfortranCmd(objdir string) []string {
+ 	return b.ccompilerCmd("FC", "gfortran", objdir)
+-- 
+2.7.4
+
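The patch above makes go env report the complete multi-word CC and CXX command lines instead of only their first word. A minimal Go sketch of that difference, assuming envList-style word splitting as in cmd/go; the names below are illustrative, not the real cmd/go internals.

// Illustrative sketch: why taking only cmd[0] loses information when CC
// is a multi-word OE cross-compiler definition.
package main

import (
    "fmt"
    "os"
    "strings"
)

// envList mirrors the cmd/go helper the patch relies on: split the
// environment variable into words, falling back to a default.
func envList(key, def string) []string {
    v := os.Getenv(key)
    if v == "" {
        v = def
    }
    return strings.Fields(v)
}

func main() {
    // Hypothetical OE-style compiler definition.
    os.Setenv("CC", "arm-poky-linux-gnueabi-gcc -march=armv7-a --sysroot=/path/to/sysroot")

    words := envList("CC", "gcc")
    fmt.Println("first word only:", words[0])                  // what go env reported before the patch
    fmt.Println("full value:     ", strings.Join(words, " "))  // what the patch makes it report
}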
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0003-make.bash-better-separate-host-and-target-builds.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0003-make.bash-better-separate-host-and-target-builds.patch
new file mode 100644
index 0000000..ffd9f23
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0003-make.bash-better-separate-host-and-target-builds.patch
@@ -0,0 +1,92 @@
+From 31e88f06af7ab787d8fe0c1ca625193e1799e167 Mon Sep 17 00:00:00 2001
+From: Matt Madison <matt@madison.systems>
+Date: Wed, 13 Sep 2017 08:12:04 -0700
+Subject: [PATCH 3/7] make.bash: better separate host and target builds
+
+For OE cross-builds, the simple checks in make.bash are
+insufficient for distinguishing host and target build
+environments, so add some options for telling the
+script which parts are being built.
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Signed-off-by: Matt Madison <matt@madison.systems>
+---
+ src/make.bash | 51 ++++++++++++++++++++++++++++-----------------------
+ 1 file changed, 28 insertions(+), 23 deletions(-)
+
+diff --git a/src/make.bash b/src/make.bash
+index dcf3256..9553623 100755
+--- a/src/make.bash
++++ b/src/make.bash
+@@ -156,13 +156,22 @@ if [ "$1" = "--no-clean" ]; then
+ 	buildall=""
+ 	shift
+ fi
+-./cmd/dist/dist bootstrap $buildall $GO_DISTFLAGS -v # builds go_bootstrap
++do_host_build="yes"
++do_target_build="yes"
++if [ "$1" = "--target-only" ]; then
++	do_host_build="no"
++	shift
++elif [ "$1" = "--host-only" ]; then
++	do_target_build="no"
++	shift
++fi
+ 
+-# Delay move of dist tool to now, because bootstrap may clear tool directory.
+-mv cmd/dist/dist "$GOTOOLDIR"/dist
+-echo
++if [ "$do_host_build" = "yes" ]; then
++	./cmd/dist/dist bootstrap $buildall $GO_DISTFLAGS -v # builds go_bootstrap
++	# Delay move of dist tool to now, because bootstrap may clear tool directory.
++	mv cmd/dist/dist "$GOTOOLDIR"/dist
++	echo
+ 
+-if [ "$GOHOSTARCH" != "$GOARCH" -o "$GOHOSTOS" != "$GOOS" ]; then
+ 	echo "##### Building packages and commands for host, $GOHOSTOS/$GOHOSTARCH."
+ 	# CC_FOR_TARGET is recorded as the default compiler for the go tool. When building for the host, however,
+ 	# use the host compiler, CC, from `cmd/dist/dist env` instead.
+@@ -171,24 +180,20 @@ if [ "$GOHOSTARCH" != "$GOARCH" -o "$GOHOSTOS" != "$GOOS" ]; then
+ 	echo
+ fi
+ 
+-echo "##### Building packages and commands for $GOOS/$GOARCH."
+-
+-old_bin_files=$(cd $GOROOT/bin && echo *)
+-
+-CC="$CC_FOR_TARGET" "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
+-
+-# Check that there are no new files in $GOROOT/bin other than go and gofmt
+-# and $GOOS_$GOARCH (a directory used when cross-compiling).
+-(cd $GOROOT/bin && for f in *; do
+-	if ! expr " $old_bin_files go gofmt ${GOOS}_${GOARCH} " : ".* $f " >/dev/null 2>/dev/null; then
+-		echo 1>&2 "ERROR: unexpected new file in $GOROOT/bin: $f"
+-		exit 1
+-	fi
+-done)
+-
+-echo
+-
+-rm -f "$GOTOOLDIR"/go_bootstrap
++if [ "$do_target_build" = "yes" ]; then
++    GO_INSTALL="${GO_TARGET_INSTALL:-std cmd}"
++    echo "##### Building packages and commands for $GOOS/$GOARCH."
++    if [ "$GOHOSTOS" = "$GOOS" -a "$GOHOSTARCH" = "$GOARCH" -a "$do_host_build" = "yes" ]; then
++	rm -rf ./host-tools
++	mkdir ./host-tools
++	mv "$GOTOOLDIR"/* ./host-tools
++	GOTOOLDIR="$PWD/host-tools"
++    fi
++    GOTOOLDIR="$GOTOOLDIR" CC="$CC_FOR_TARGET" "$GOTOOLDIR"/go_bootstrap install $GO_FLAGS -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v ${GO_INSTALL}
++    echo
++
++    rm -f "$GOTOOLDIR"/go_bootstrap
++fi
+ 
+ if [ "$1" != "--no-banner" ]; then
+ 	"$GOTOOLDIR"/dist banner
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0004-cmd-go-allow-GOTOOLDIR-to-be-overridden-in-the-envir.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0004-cmd-go-allow-GOTOOLDIR-to-be-overridden-in-the-envir.patch
new file mode 100644
index 0000000..180b06a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0004-cmd-go-allow-GOTOOLDIR-to-be-overridden-in-the-envir.patch
@@ -0,0 +1,68 @@
+From 1369178b497b12088ec4c2794606cc9f14cc327c Mon Sep 17 00:00:00 2001
+From: Matt Madison <matt@madison.systems>
+Date: Wed, 13 Sep 2017 08:15:03 -0700
+Subject: [PATCH 4/7] cmd/go: allow GOTOOLDIR to be overridden in the
+ environment
+
+For OE cross-builds, host-side tools reside in the native
+GOROOT, not the target GOROOT.  Allow GOTOOLDIR to be set
+in the environment to allow that split, rather than always
+computing GOTOOLDIR relative to the GOROOT setting.
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Signed-off-by: Matt Madison <matt@madison.systems>
+---
+ src/cmd/go/internal/cfg/cfg.go    | 7 ++++++-
+ src/cmd/go/internal/work/build.go | 2 +-
+ src/go/build/build.go             | 2 +-
+ 3 files changed, 8 insertions(+), 3 deletions(-)
+
+diff --git a/src/cmd/go/internal/cfg/cfg.go b/src/cmd/go/internal/cfg/cfg.go
+index b3ad1ce..c1dc974 100644
+--- a/src/cmd/go/internal/cfg/cfg.go
++++ b/src/cmd/go/internal/cfg/cfg.go
+@@ -91,7 +91,12 @@ func init() {
+ 	// as the tool directory does not move based on environment variables.
+ 	// This matches the initialization of ToolDir in go/build,
+ 	// except for using GOROOT rather than runtime.GOROOT().
+-	build.ToolDir = filepath.Join(GOROOT, "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH)
++	s := os.Getenv("GOTOOLDIR")
++	if s == "" {
++		build.ToolDir = filepath.Join(GOROOT, "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH)
++	} else {
++		build.ToolDir = s
++	}
+ }
+ 
+ func findGOROOT() string {
+diff --git a/src/cmd/go/internal/work/build.go b/src/cmd/go/internal/work/build.go
+index 85df0b3..7b9a69e 100644
+--- a/src/cmd/go/internal/work/build.go
++++ b/src/cmd/go/internal/work/build.go
+@@ -1337,7 +1337,7 @@ func (b *Builder) build(a *Action) (err error) {
+ 		}
+ 
+ 		var cgoExe string
+-		if a.cgo != nil && a.cgo.Target != "" {
++		if a.cgo != nil && a.cgo.Target != "" && os.Getenv("GOTOOLDIR") == "" {
+ 			cgoExe = a.cgo.Target
+ 		} else {
+ 			cgoExe = base.Tool("cgo")
+diff --git a/src/go/build/build.go b/src/go/build/build.go
+index fd89871..e16145b 100644
+--- a/src/go/build/build.go
++++ b/src/go/build/build.go
+@@ -1588,7 +1588,7 @@ func init() {
+ }
+ 
+ // ToolDir is the directory containing build tools.
+-var ToolDir = filepath.Join(runtime.GOROOT(), "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH)
++var ToolDir = envOr("GOTOOLDIR", filepath.Join(runtime.GOROOT(), "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH))
+ 
+ // IsLocalImport reports whether the import path is
+ // a local import path, like ".", "..", "./foo", or "../foo".
+-- 
+2.7.4
+
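The patch above lets a GOTOOLDIR environment setting override the GOROOT-derived default so that host-side tools can be found in the native sysroot. A small sketch of that resolution order, assuming envOr behaves as in go/build; the override path below is hypothetical.

// Minimal sketch of the GOTOOLDIR resolution the patch introduces,
// assuming envOr(name, def) returns $name if set and def otherwise.
package main

import (
    "fmt"
    "os"
    "path/filepath"
    "runtime"
)

func envOr(name, def string) string {
    if v := os.Getenv(name); v != "" {
        return v
    }
    return def
}

func main() {
    // Default: build tools live under the (target) GOROOT.
    fmt.Println(envOr("GOTOOLDIR",
        filepath.Join(runtime.GOROOT(), "pkg/tool/"+runtime.GOOS+"_"+runtime.GOARCH)))

    // OE cross build: point at the native sysroot's tool directory instead
    // (hypothetical path, for illustration only).
    os.Setenv("GOTOOLDIR", "/usr/lib/x86_64-linux/go/pkg/tool/linux_amd64")
    fmt.Println(envOr("GOTOOLDIR", "unused-default"))
}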
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0005-cmd-go-make-GOROOT-precious-by-default.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0005-cmd-go-make-GOROOT-precious-by-default.patch
new file mode 100644
index 0000000..6e93bcb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0005-cmd-go-make-GOROOT-precious-by-default.patch
@@ -0,0 +1,41 @@
+From 44f961975dac6cf464a77b5f6dd0c47cc192c4fd Mon Sep 17 00:00:00 2001
+From: Matt Madison <matt@madison.systems>
+Date: Wed, 13 Sep 2017 08:19:52 -0700
+Subject: [PATCH 5/7] cmd/go: make GOROOT precious by default
+
+For OE builds, we never want packages that have
+already been installed into the build root to be
+modified, so prevent the go build tool from checking
+if they should be rebuilt.
+
+Also add an environment variable to override this
+behavior, just for building the Go runtime.
+
+Upstream-Status: Pending
+
+Signed-off-by: Matt Madison <matt@madison.systems>
+---
+ src/cmd/go/internal/load/pkg.go | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/src/cmd/go/internal/load/pkg.go b/src/cmd/go/internal/load/pkg.go
+index 60de666..2660d3f 100644
+--- a/src/cmd/go/internal/load/pkg.go
++++ b/src/cmd/go/internal/load/pkg.go
+@@ -1530,6 +1530,13 @@ func isStale(p *Package) (bool, string) {
+ 		return true, "build ID mismatch"
+ 	}
+ 
++	// For OE builds, make anything in GOROOT non-stale,
++	// to prevent a package build from overwriting the
++	// build root.
++	if p.Goroot && os.Getenv("GOROOT_OVERRIDE") != "1" {
++		return false, "GOROOT-resident packages do not get rebuilt"
++	}
++
+ 	// Package is stale if a dependency is.
+ 	for _, p1 := range p.Internal.Deps {
+ 		if p1.Stale {
+-- 
+2.7.4
+
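The patch above short-circuits the staleness check for anything living under GOROOT unless GOROOT_OVERRIDE=1 is set; the go-runtime recipe later in this change exports that variable while building the runtime itself. An illustrative sketch of the check, using a simplified Package type rather than the real cmd/go one.

// Illustrative sketch of the staleness short-circuit (the real check
// lives in cmd/go/internal/load; Package here is simplified).
package main

import (
    "fmt"
    "os"
)

type Package struct {
    Name   string
    Goroot bool // package is installed under GOROOT
}

func isStale(p *Package) (bool, string) {
    // GOROOT-resident packages are treated as up to date unless
    // GOROOT_OVERRIDE=1, so an image build never overwrites the
    // staged Go runtime.
    if p.Goroot && os.Getenv("GOROOT_OVERRIDE") != "1" {
        return false, "GOROOT-resident packages do not get rebuilt"
    }
    return true, "rebuild"
}

func main() {
    fmt.Println(isStale(&Package{Name: "fmt", Goroot: true}))
}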
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0006-make.bash-add-GOTOOLDIR_BOOTSTRAP-environment-variab.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0006-make.bash-add-GOTOOLDIR_BOOTSTRAP-environment-variab.patch
new file mode 100644
index 0000000..f0f5640
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0006-make.bash-add-GOTOOLDIR_BOOTSTRAP-environment-variab.patch
@@ -0,0 +1,36 @@
+From aae74d1045ca03306ba4159206ee3bac72bcdfbb Mon Sep 17 00:00:00 2001
+From: Matt Madison <matt@madison.systems>
+Date: Wed, 13 Sep 2017 08:23:23 -0700
+Subject: [PATCH 6/7] make.bash: add GOTOOLDIR_BOOTSTRAP environment variable
+
+For cross-canadian builds, we need to use the native
+GOTOOLDIR during the bootstrap phase, so provide a way
+to pass that setting into the build script.
+
+Upstream-Status: Pending
+
+Signed-off-by: Matt Madison <matt@madison.systems>
+---
+ src/make.bash | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/src/make.bash b/src/make.bash
+index 9553623..2e6fb05 100755
+--- a/src/make.bash
++++ b/src/make.bash
+@@ -172,10 +172,11 @@ if [ "$do_host_build" = "yes" ]; then
+ 	mv cmd/dist/dist "$GOTOOLDIR"/dist
+ 	echo
+ 
++	GOTOOLDIR_BOOTSTRAP="${GOTOOLDIR_BOOTSTRAP:-$GOTOOLDIR}"
+ 	echo "##### Building packages and commands for host, $GOHOSTOS/$GOHOSTARCH."
+ 	# CC_FOR_TARGET is recorded as the default compiler for the go tool. When building for the host, however,
+ 	# use the host compiler, CC, from `cmd/dist/dist env` instead.
+-	CC=$CC GOOS=$GOHOSTOS GOARCH=$GOHOSTARCH \
++	CC=$CC GOOS=$GOHOSTOS GOARCH=$GOHOSTARCH GOTOOLDIR="$GOTOOLDIR_BOOTSTRAP" \
+ 		"$GOTOOLDIR"/go_bootstrap install -gcflags "$GO_GCFLAGS" -ldflags "$GO_LDFLAGS" -v std cmd
+ 	echo
+ fi
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0007-ld-add-soname-to-shareable-objects.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0007-ld-add-soname-to-shareable-objects.patch
new file mode 100644
index 0000000..6459782
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0007-ld-add-soname-to-shareable-objects.patch
@@ -0,0 +1,46 @@
+From e957c3458d53e37bf416f51d2f8bf54c195e50f5 Mon Sep 17 00:00:00 2001
+From: Matt Madison <matt@madison.systems>
+Date: Wed, 13 Sep 2017 08:27:02 -0700
+Subject: [PATCH 7/7] ld: add soname to shareable objects
+
+Shared library handling in OE depends on the inclusion
+of an soname header, so update the go linker to add that
+header for both internal and external linking.
+
+Upstream-Status: Pending
+
+Signed-off-by: Matt Madison <matt@madison.systems>
+---
+ src/cmd/link/internal/ld/lib.go | 3 +++
+ 1 file changed, 3 insertions(+)
+
+diff --git a/src/cmd/link/internal/ld/lib.go b/src/cmd/link/internal/ld/lib.go
+index 0234105..0b9e2d0 100644
+--- a/src/cmd/link/internal/ld/lib.go
++++ b/src/cmd/link/internal/ld/lib.go
+@@ -1124,12 +1124,14 @@ func (l *Link) hostlink() {
+ 			// Pass -z nodelete to mark the shared library as
+ 			// non-closeable: a dlclose will do nothing.
+ 			argv = append(argv, "-shared", "-Wl,-z,nodelete")
++			argv = append(argv, fmt.Sprintf("-Wl,-soname,%s", filepath.Base(*flagOutfile)))
+ 		}
+ 	case BuildmodeShared:
+ 		if UseRelro() {
+ 			argv = append(argv, "-Wl,-z,relro")
+ 		}
+ 		argv = append(argv, "-shared")
++		argv = append(argv, fmt.Sprintf("-Wl,-soname,%s", filepath.Base(*flagOutfile)))
+ 	case BuildmodePlugin:
+ 		if Headtype == objabi.Hdarwin {
+ 			argv = append(argv, "-dynamiclib")
+@@ -1138,6 +1140,7 @@ func (l *Link) hostlink() {
+ 				argv = append(argv, "-Wl,-z,relro")
+ 			}
+ 			argv = append(argv, "-shared")
++			argv = append(argv, fmt.Sprintf("-Wl,-soname,%s", filepath.Base(*flagOutfile)))
+ 		}
+ 	}
+ 
+-- 
+2.7.4
+
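The patch above appends a -Wl,-soname argument derived from the output file name whenever the external linker produces a shared object, because OE's shared-library packaging expects an SONAME header. A small sketch of how that argument is formed, assuming the output path is what cmd/link holds in flagOutfile.

// Small sketch of the extra linker argument added for c-shared, shared
// and plugin build modes; the output path is just an example.
package main

import (
    "fmt"
    "path/filepath"
)

func sonameArg(outfile string) string {
    // The SONAME is simply the basename of the file being linked.
    return fmt.Sprintf("-Wl,-soname,%s", filepath.Base(outfile))
}

func main() {
    fmt.Println(sonameArg("/work/image/usr/lib/libstd.so")) // -Wl,-soname,libstd.so
}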
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0008-make.bash-add-GOHOSTxx-indirection-for-cross-canadia.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0008-make.bash-add-GOHOSTxx-indirection-for-cross-canadia.patch
new file mode 100644
index 0000000..0977c78
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0008-make.bash-add-GOHOSTxx-indirection-for-cross-canadia.patch
@@ -0,0 +1,33 @@
+From 03e6c339d4fb712fbb8c4ca6ef2fc7100dcdb3d7 Mon Sep 17 00:00:00 2001
+From: Matt Madison <matt@madison.systems>
+Date: Thu, 14 Sep 2017 05:38:10 -0700
+Subject: [PATCH 8/8] make.bash: add GOHOSTxx indirection for cross-canadian
+ builds
+
+Add environment variables for specifying the host OS/arch
+that we are building the compiler for, so it can differ from
+the build host OS/arch.
+
+Upstream-Status: Pending
+
+Signed-off-by: Matt Madison <matt@madison.systems>
+---
+ src/make.bash | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/src/make.bash b/src/make.bash
+index 2e6fb05..0bdadc6 100755
+--- a/src/make.bash
++++ b/src/make.bash
+@@ -173,6 +173,8 @@ if [ "$do_host_build" = "yes" ]; then
+ 	echo
+ 
+ 	GOTOOLDIR_BOOTSTRAP="${GOTOOLDIR_BOOTSTRAP:-$GOTOOLDIR}"
++	GOHOSTOS="${GOHOSTOS_CROSS:-$GOHOSTOS}"
++	GOHOSTARCH="${GOHOSTARCH_CROSS:-$GOHOSTARCH}"
+ 	echo "##### Building packages and commands for host, $GOHOSTOS/$GOHOSTARCH."
+ 	# CC_FOR_TARGET is recorded as the default compiler for the go tool. When building for the host, however,
+ 	# use the host compiler, CC, from `cmd/dist/dist env` instead.
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0009-cmd-go-buildmode-pie-forces-external-linking-mode-on.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0009-cmd-go-buildmode-pie-forces-external-linking-mode-on.patch
new file mode 100644
index 0000000..aa5fcfd
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0009-cmd-go-buildmode-pie-forces-external-linking-mode-on.patch
@@ -0,0 +1,47 @@
+From aae44527c8065d54f6acaf87c82cba1ac96fae59 Mon Sep 17 00:00:00 2001
+From: Ian Lance Taylor <iant@golang.org>
+Date: Fri, 18 Aug 2017 17:46:03 -0700
+Subject: [PATCH] cmd/go: -buildmode=pie forces external linking mode on all
+ systems
+
+The go tool assumed that -buildmode=pie implied internal linking on
+linux-amd64. However, that was changed by CL 36417 for issue #18968.
+
+Fixes #21452
+
+Change-Id: I8ed13aea52959cc5c53223f4c41ba35329445545
+Reviewed-on: https://go-review.googlesource.com/57231
+Run-TryBot: Ian Lance Taylor <iant@golang.org>
+TryBot-Result: Gobot Gobot <gobot@golang.org>
+Reviewed-by: Avelino <t@avelino.xxx>
+Reviewed-by: Rob Pike <r@golang.org>
+---
+Upstream-Status: Backport
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+ src/cmd/go/internal/load/pkg.go | 7 ++++---
+ 1 file changed, 4 insertions(+), 3 deletions(-)
+
+diff --git a/src/cmd/go/internal/load/pkg.go b/src/cmd/go/internal/load/pkg.go
+index 2660d3f..d40773b 100644
+--- a/src/cmd/go/internal/load/pkg.go
++++ b/src/cmd/go/internal/load/pkg.go
+@@ -954,11 +954,12 @@ func (p *Package) load(stk *ImportStack, bp *build.Package, err error) *Package
+ 
+ 	if cfg.BuildContext.CgoEnabled && p.Name == "main" && !p.Goroot {
+ 		// Currently build modes c-shared, pie (on systems that do not
+-		// support PIE with internal linking mode), plugin, and
+-		// -linkshared force external linking mode, as of course does
++		// support PIE with internal linking mode (currently all
++		// systems: issue #18968)), plugin, and -linkshared force
++		// external linking mode, as of course does
+ 		// -ldflags=-linkmode=external. External linking mode forces
+ 		// an import of runtime/cgo.
+-		pieCgo := cfg.BuildBuildmode == "pie" && (cfg.BuildContext.GOOS != "linux" || cfg.BuildContext.GOARCH != "amd64")
++		pieCgo := cfg.BuildBuildmode == "pie"
+ 		linkmodeExternal := false
+ 		for i, a := range cfg.BuildLdflags {
+ 			if a == "-linkmode=external" {
+-- 
+2.14.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0010-make.bash-override-CC-when-building-dist-and-go_boot.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0010-make.bash-override-CC-when-building-dist-and-go_boot.patch
new file mode 100644
index 0000000..83fd78c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/0010-make.bash-override-CC-when-building-dist-and-go_boot.patch
@@ -0,0 +1,43 @@
+From 21d83dd9499e5be30eea28dd7034d1ea2a01c838 Mon Sep 17 00:00:00 2001
+From: Matt Madison <matt@madison.systems>
+Date: Tue, 14 Nov 2017 07:38:42 -0800
+Subject: [PATCH 10/10] make.bash: override CC when building dist and
+ go_bootstrap
+
+For cross-canadian builds, dist and go_bootstrap
+run on the build host, so CC needs to point to the
+build host's C compiler.  Add a BUILD_CC environment
+variable for this, falling back to $CC if not present.
+
+Upstream-Status: Pending
+
+Signed-off-by: Matt Madison <matt@madison.systems>
+---
+ src/make.bash | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/src/make.bash b/src/make.bash
+index 0bdadc6..f199349 100755
+--- a/src/make.bash
++++ b/src/make.bash
+@@ -131,7 +131,7 @@ if [ "$GOROOT_BOOTSTRAP" = "$GOROOT" ]; then
+ 	exit 1
+ fi
+ rm -f cmd/dist/dist
+-GOROOT="$GOROOT_BOOTSTRAP" GOOS="" GOARCH="" "$GOROOT_BOOTSTRAP/bin/go" build -o cmd/dist/dist ./cmd/dist
++CC=${BUILD_CC:-${CC}} GOROOT="$GOROOT_BOOTSTRAP" GOOS="" GOARCH="" "$GOROOT_BOOTSTRAP/bin/go" build -o cmd/dist/dist ./cmd/dist
+ 
+ # -e doesn't propagate out of eval, so check success by hand.
+ eval $(./cmd/dist/dist env -p || echo FAIL=true)
+@@ -167,7 +167,7 @@ elif [ "$1" = "--host-only" ]; then
+ fi
+ 
+ if [ "$do_host_build" = "yes" ]; then
+-	./cmd/dist/dist bootstrap $buildall $GO_DISTFLAGS -v # builds go_bootstrap
++	CC=${BUILD_CC:-${CC}} ./cmd/dist/dist bootstrap $buildall $GO_DISTFLAGS -v # builds go_bootstrap
+ 	# Delay move of dist tool to now, because bootstrap may clear tool directory.
+ 	mv cmd/dist/dist "$GOTOOLDIR"/dist
+ 	echo
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/set-external-linker.patch b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/set-external-linker.patch
new file mode 100644
index 0000000..d6bd7fa
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-1.9/set-external-linker.patch
@@ -0,0 +1,111 @@
+Change the dynamic linker hardcoding to use musl when not using glibc
+this should be applied conditional to musl being the system C library
+
+Upstream-Status: Inappropriate [Real Fix should be portable across libcs]
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Index: go/src/cmd/link/internal/amd64/obj.go
+===================================================================
+--- go.orig/src/cmd/link/internal/amd64/obj.go
++++ go/src/cmd/link/internal/amd64/obj.go
+@@ -67,7 +67,7 @@ func Init() {
+ 	ld.Thearch.Append64 = ld.Append64l
+ 	ld.Thearch.TLSIEtoLE = tlsIEtoLE
+ 
+-	ld.Thearch.Linuxdynld = "/lib64/ld-linux-x86-64.so.2"
++	ld.Thearch.Linuxdynld = "/lib/ld-musl-x86_64.so.1"
+ 	ld.Thearch.Freebsddynld = "/libexec/ld-elf.so.1"
+ 	ld.Thearch.Openbsddynld = "/usr/libexec/ld.so"
+ 	ld.Thearch.Netbsddynld = "/libexec/ld.elf_so"
+Index: go/src/cmd/link/internal/arm/obj.go
+===================================================================
+--- go.orig/src/cmd/link/internal/arm/obj.go
++++ go/src/cmd/link/internal/arm/obj.go
+@@ -63,7 +63,7 @@ func Init() {
+ 	ld.Thearch.Append32 = ld.Append32l
+ 	ld.Thearch.Append64 = ld.Append64l
+ 
+-	ld.Thearch.Linuxdynld = "/lib/ld-linux.so.3" // 2 for OABI, 3 for EABI
++	ld.Thearch.Linuxdynld = "/lib/ld-musl-armhf.so.1"
+ 	ld.Thearch.Freebsddynld = "/usr/libexec/ld-elf.so.1"
+ 	ld.Thearch.Openbsddynld = "/usr/libexec/ld.so"
+ 	ld.Thearch.Netbsddynld = "/libexec/ld.elf_so"
+Index: go/src/cmd/link/internal/arm64/obj.go
+===================================================================
+--- go.orig/src/cmd/link/internal/arm64/obj.go
++++ go/src/cmd/link/internal/arm64/obj.go
+@@ -62,7 +62,7 @@ func Init() {
+ 	ld.Thearch.Append32 = ld.Append32l
+ 	ld.Thearch.Append64 = ld.Append64l
+ 
+-	ld.Thearch.Linuxdynld = "/lib/ld-linux-aarch64.so.1"
++	ld.Thearch.Linuxdynld = "/lib/ld-musl-aarch64.so.1"
+ 
+ 	ld.Thearch.Freebsddynld = "XXX"
+ 	ld.Thearch.Openbsddynld = "XXX"
+Index: go/src/cmd/link/internal/mips/obj.go
+===================================================================
+--- go.orig/src/cmd/link/internal/mips/obj.go
++++ go/src/cmd/link/internal/mips/obj.go
+@@ -77,7 +77,7 @@ func Init() {
+ 		ld.Thearch.Append64 = ld.Append64b
+ 	}
+ 
+-	ld.Thearch.Linuxdynld = "/lib/ld.so.1"
++	ld.Thearch.Linuxdynld = "/lib/ld-musl-mipsle.so.1"
+ 
+ 	ld.Thearch.Freebsddynld = "XXX"
+ 	ld.Thearch.Openbsddynld = "XXX"
+Index: go/src/cmd/link/internal/mips64/obj.go
+===================================================================
+--- go.orig/src/cmd/link/internal/mips64/obj.go
++++ go/src/cmd/link/internal/mips64/obj.go
+@@ -75,7 +75,7 @@ func Init() {
+ 		ld.Thearch.Append64 = ld.Append64b
+ 	}
+ 
+-	ld.Thearch.Linuxdynld = "/lib64/ld64.so.1"
++	ld.Thearch.Linuxdynld = "/lib64/ld-musl-mips64le.so.1"
+ 
+ 	ld.Thearch.Freebsddynld = "XXX"
+ 	ld.Thearch.Openbsddynld = "XXX"
+Index: go/src/cmd/link/internal/ppc64/obj.go
+===================================================================
+--- go.orig/src/cmd/link/internal/ppc64/obj.go
++++ go/src/cmd/link/internal/ppc64/obj.go
+@@ -77,7 +77,7 @@ func Init() {
+ 	}
+ 
+ 	// TODO(austin): ABI v1 uses /usr/lib/ld.so.1
+-	ld.Thearch.Linuxdynld = "/lib64/ld64.so.1"
++	ld.Thearch.Linuxdynld = "/lib/ld-musl-powerpc64le.so.1"
+ 
+ 	ld.Thearch.Freebsddynld = "XXX"
+ 	ld.Thearch.Openbsddynld = "XXX"
+Index: go/src/cmd/link/internal/s390x/obj.go
+===================================================================
+--- go.orig/src/cmd/link/internal/s390x/obj.go
++++ go/src/cmd/link/internal/s390x/obj.go
+@@ -62,7 +62,7 @@ func Init() {
+ 	ld.Thearch.Append32 = ld.Append32b
+ 	ld.Thearch.Append64 = ld.Append64b
+ 
+-	ld.Thearch.Linuxdynld = "/lib64/ld64.so.1"
++	ld.Thearch.Linuxdynld = "/lib/ld-musl-s390x.so.1"
+ 
+ 	// not relevant for s390x
+ 	ld.Thearch.Freebsddynld = "XXX"
+Index: go/src/cmd/link/internal/x86/obj.go
+===================================================================
+--- go.orig/src/cmd/link/internal/x86/obj.go
++++ go/src/cmd/link/internal/x86/obj.go
+@@ -63,7 +63,7 @@ func Init() {
+ 	ld.Thearch.Append32 = ld.Append32l
+ 	ld.Thearch.Append64 = ld.Append64l
+ 
+-	ld.Thearch.Linuxdynld = "/lib/ld-linux.so.2"
++	ld.Thearch.Linuxdynld = "/lib/ld-musl-i386.so.1"
+ 	ld.Thearch.Freebsddynld = "/usr/libexec/ld-elf.so.1"
+ 	ld.Thearch.Openbsddynld = "/usr/libexec/ld.so"
+ 	ld.Thearch.Netbsddynld = "/usr/libexec/ld.elf_so"
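The patch above hardcodes the musl dynamic loader path in each architecture's linker backend. For reference, the per-architecture paths it selects are gathered into one table below; the keys are GOARCH-style names and the map itself is purely illustrative, copied from the hunks above.

// Consolidated view of the loader paths the patch hardcodes when the
// system C library is musl.
package main

import "fmt"

var muslDynld = map[string]string{
    "386":    "/lib/ld-musl-i386.so.1",
    "amd64":  "/lib/ld-musl-x86_64.so.1",
    "arm":    "/lib/ld-musl-armhf.so.1",
    "arm64":  "/lib/ld-musl-aarch64.so.1",
    "mips":   "/lib/ld-musl-mipsle.so.1",
    "mips64": "/lib64/ld-musl-mips64le.so.1",
    "ppc64":  "/lib/ld-musl-powerpc64le.so.1",
    "s390x":  "/lib/ld-musl-s390x.so.1",
}

func main() {
    fmt.Println(muslDynld["arm64"]) // /lib/ld-musl-aarch64.so.1
}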
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-bootstrap-native_1.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go-bootstrap-native_1.4.bb
deleted file mode 100644
index 3d4141e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-bootstrap-native_1.4.bb
+++ /dev/null
@@ -1,3 +0,0 @@
-BOOTSTRAP = "1.4"
-require go-native.inc
-require go-${PV}.inc
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-common.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go-common.inc
index f74b8b7..9af6873 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-common.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-common.inc
@@ -14,9 +14,13 @@
 
 inherit goarch
 
-SRC_URI = "http://golang.org/dl/go${PV}.src.tar.gz"
+SRC_URI = "http://golang.org/dl/go${PV}.src.tar.gz;name=main"
 S = "${WORKDIR}/go"
 B = "${S}"
 
 INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
 SSTATE_SCAN_CMD = "true"
+
+do_compile_prepend() {
+	BUILD_CC=${BUILD_CC}
+}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross-canadian.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross-canadian.inc
new file mode 100644
index 0000000..8afda6b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross-canadian.inc
@@ -0,0 +1,62 @@
+inherit cross-canadian
+
+DEPENDS = "go-native virtual/${HOST_PREFIX}go-crosssdk virtual/nativesdk-${HOST_PREFIX}go-runtime \
+           virtual/${HOST_PREFIX}gcc-crosssdk virtual/nativesdk-${HOST_PREFIX}libc-for-gcc \
+           virtual/nativesdk-${HOST_PREFIX}compilerlibs"
+PN = "go-cross-canadian-${TRANSLATED_TARGET_ARCH}"
+
+export GOHOSTOS = "${BUILD_GOOS}"
+export GOHOSTARCH = "${BUILD_GOARCH}"
+export GOHOSTOS_CROSS = "${HOST_GOOS}"
+export GOHOSTARCH_CROSS = "${HOST_GOARCH}"
+export GOROOT_BOOTSTRAP = "${STAGING_LIBDIR_NATIVE}/go"
+export GOTOOLDIR_BOOTSTRAP = "${STAGING_LIBDIR_NATIVE}/${HOST_SYS}/go/pkg/tool/${BUILD_GOTUPLE}"
+export GOROOT_FINAL = "${libdir}/go"
+export CGO_ENABLED = "1"
+export CC_FOR_TARGET = "${TARGET_PREFIX}gcc"
+export CXX_FOR_TARGET = "${TARGET_PREFIX}g++"
+CC = "${HOST_PREFIX}gcc"
+export CGO_CFLAGS = "--sysroot=${STAGING_DIR_TARGET} ${HOST_CC_ARCH} ${CFLAGS}"
+export CGO_LDFLAGS = "--sysroot=${STAGING_DIR_TARGET} ${HOST_CC_ARCH} ${LDFLAGS}"
+export GO_LDFLAGS = '-linkmode external -extld ${HOST_PREFIX}gcc -extldflags "--sysroot=${STAGING_DIR_TARGET} ${HOST_CC_ARCH} ${LDFLAGS}"'
+
+do_configure[noexec] = "1"
+
+do_compile() {
+	export GOBIN="${B}/bin"
+	rm -rf ${GOBIN} ${B}/pkg
+	mkdir ${GOBIN}
+	cd src
+	./make.bash --host-only --no-banner
+	cd ${B}
+}
+
+
+make_wrapper() {
+    rm -f ${D}${bindir}/$2
+    cat <<END >${D}${bindir}/$2
+#!/bin/sh
+here=\`dirname \$0\`
+native_goroot=\`readlink -f \$here/../../lib/${TARGET_SYS}/go\`
+export GOARCH="${TARGET_GOARCH}"
+export GOOS="${TARGET_GOOS}"
+test -n "\$GOARM" || export GOARM="${TARGET_GOARM}"
+test -n "\$GO386" || export GO386="${TARGET_GO386}"
+export GOTOOLDIR="\$native_goroot/pkg/tool/${HOST_GOTUPLE}"
+test -n "\$GOROOT" || export GOROOT="\$OECORE_TARGET_SYSROOT/${target_libdir}/go"
+\$here/../../lib/${TARGET_SYS}/go/bin/$1 "\$@"
+END
+    chmod +x ${D}${bindir}/$2
+}
+
+do_install() {
+	install -d ${D}${libdir}/go/pkg/tool
+	cp --preserve=mode,timestamps -R ${B}/pkg/tool/${HOST_GOTUPLE} ${D}${libdir}/go/pkg/tool/
+	install -d ${D}${bindir} ${D}${libdir}/go/bin
+	for f in ${B}/${GO_BUILD_BINDIR}/*
+	do
+		base=`basename $f`
+		install -m755 $f ${D}${libdir}/go/bin
+		make_wrapper $base ${TARGET_PREFIX}$base
+	done
+}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross-canadian_1.9.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross-canadian_1.9.bb
new file mode 100644
index 0000000..7ac9449
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross-canadian_1.9.bb
@@ -0,0 +1,2 @@
+require go-cross-canadian.inc
+require go-${PV}.inc
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross.inc
index 93206a5..3ac7211 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross.inc
@@ -1,18 +1,62 @@
 inherit cross
 
-DEPENDS += "gcc-cross-${TARGET_ARCH}"
+PROVIDES = "virtual/${TARGET_PREFIX}go"
+DEPENDS += "go-native"
 
 PN = "go-cross-${TARGET_ARCH}"
 
-# Ignore how TARGET_ARCH is computed.
-TARGET_ARCH[vardepvalue] = "${TARGET_ARCH}"
+export GOHOSTOS = "${BUILD_GOOS}"
+export GOHOSTARCH = "${BUILD_GOARCH}"
+export GOOS = "${TARGET_GOOS}"
+export GOARCH = "${TARGET_GOARCH}"
+export GOARM = "${TARGET_GOARM}"
+export GO386 = "${TARGET_GO386}"
+export GOROOT_BOOTSTRAP = "${STAGING_LIBDIR_NATIVE}/go"
+export GOROOT_FINAL = "${libdir}/go"
+export CGO_ENABLED = "1"
+export CC_FOR_TARGET="${TARGET_PREFIX}gcc ${TARGET_CC_ARCH} --sysroot=${STAGING_DIR_TARGET}"
+export CXX_FOR_TARGET="${TARGET_PREFIX}g++ ${TARGET_CC_ARCH} --sysroot=${STAGING_DIR_TARGET}"
+CC = "${@d.getVar('BUILD_CC').strip()}"
 
-FILESEXTRAPATHS =. "${FILE_DIRNAME}/go-cross:"
+do_configure[noexec] = "1"
 
-GOROOT_FINAL = "${libdir}/go"
-export GOROOT_FINAL
+do_compile() {
+    export GOBIN="${B}/bin"
+    rm -rf ${GOBIN} ${B}/pkg
+    mkdir ${GOBIN}
+    cd src
+    ./make.bash --host-only
+    cd ${B}
+}
 
-# x32 ABI is not supported on go compiler so far
-COMPATIBLE_HOST_linux-gnux32 = "null"
-# ppc32 is not supported in go compilers
-COMPATIBLE_HOST_powerpc = "null"
+
+make_wrapper() {
+    rm -f ${D}${bindir}/$2
+    cat <<END >${D}${bindir}/$2
+#!/bin/bash
+here=\`dirname \$0\`
+export GOARCH="${TARGET_GOARCH}"
+export GOOS="${TARGET_GOOS}"
+export GOARM="\${GOARM:-${TARGET_GOARM}}"
+export GO386="\${GO386:-${TARGET_GO386}}"
+\$here/../../lib/${CROSS_TARGET_SYS_DIR}/go/bin/$1 "\$@"
+END
+    chmod +x ${D}${bindir}/$2
+}
+
+do_install() {
+    install -d ${D}${libdir}/go
+    cp --preserve=mode,timestamps -R ${B}/pkg ${D}${libdir}/go/
+    install -d ${D}${libdir}/go/src
+    (cd ${S}/src; for d in *; do \
+        [ ! -d $d ] || cp --preserve=mode,timestamps -R ${S}/src/$d ${D}${libdir}/go/src/; \
+    done)
+    rm -rf ${D}${libdir}/go/src/runtime/pprof/testdata
+    install -d ${D}${bindir} ${D}${libdir}/go/bin
+    for f in ${B}/bin/*
+    do
+        base=`basename $f`
+        install -m755 $f ${D}${libdir}/go/bin
+        make_wrapper $base ${TARGET_PREFIX}$base
+    done
+}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross_1.7.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross_1.7.bb
deleted file mode 100644
index 56ee084..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross_1.7.bb
+++ /dev/null
@@ -1,5 +0,0 @@
-require go-cross.inc
-require go_${PV}.bb
-
-# Go binaries are not understood by the strip tool.
-INHIBIT_SYSROOT_STRIP = "1"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross_1.8.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross_1.8.bb
deleted file mode 100644
index 56ee084..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross_1.8.bb
+++ /dev/null
@@ -1,5 +0,0 @@
-require go-cross.inc
-require go_${PV}.bb
-
-# Go binaries are not understood by the strip tool.
-INHIBIT_SYSROOT_STRIP = "1"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross_1.9.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross_1.9.bb
new file mode 100644
index 0000000..80b5a03
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-cross_1.9.bb
@@ -0,0 +1,2 @@
+require go-cross.inc
+require go-${PV}.inc
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-crosssdk.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go-crosssdk.inc
new file mode 100644
index 0000000..f67e4b9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-crosssdk.inc
@@ -0,0 +1,54 @@
+inherit crosssdk
+
+DEPENDS = "go-native virtual/${TARGET_PREFIX}gcc-crosssdk virtual/nativesdk-${TARGET_PREFIX}compilerlibs virtual/${TARGET_PREFIX}binutils-crosssdk"
+PN = "go-crosssdk-${TARGET_ARCH}"
+PROVIDES = "virtual/${TARGET_PREFIX}go-crosssdk"
+
+export GOHOSTOS = "${BUILD_GOOS}"
+export GOHOSTARCH = "${BUILD_GOARCH}"
+export GOOS = "${TARGET_GOOS}"
+export GOARCH = "${TARGET_GOARCH}"
+export GOROOT_BOOTSTRAP = "${STAGING_LIBDIR_NATIVE}/go"
+export GOROOT_FINAL = "${libdir}/go"
+export CGO_ENABLED = "1"
+export CC_FOR_TARGET="${TARGET_PREFIX}gcc ${TARGET_CC_ARCH} --sysroot=${STAGING_DIR_TARGET}${SDKPATHNATIVE}"
+export CXX_FOR_TARGET="${TARGET_PREFIX}g++ ${TARGET_CC_ARCH} --sysroot=${STAGING_DIR_TARGET}${SDKPATHNATIVE}"
+export GO_INSTALL = "cmd"
+CC = "${@d.getVar('BUILD_CC').strip()}"
+
+do_configure[noexec] = "1"
+
+do_compile() {
+	export GOBIN="${B}/bin"
+	rm -rf ${GOBIN} ${B}/pkg
+	mkdir ${GOBIN}
+	cd src
+	./make.bash --host-only
+	cd ${B}
+}
+
+make_wrapper() {
+    rm -f ${D}${bindir}/$2
+    cat <<END >${D}${bindir}/$2
+#!/bin/bash
+here=\`dirname \$0\`
+export GOARCH="${TARGET_GOARCH}"
+export GOOS="${TARGET_GOOS}"
+\$here/../../lib/${CROSS_TARGET_SYS_DIR}/go/bin/$1 "\$@"
+END
+    chmod +x ${D}${bindir}/$2
+}
+
+do_install() {
+	install -d ${D}${libdir}/go
+	install -d ${D}${libdir}/go/bin
+	install -d ${D}${libdir}/go/pkg/tool
+	install -d ${D}${bindir}
+	cp --preserve=mode,timestamps -R ${S}/pkg/tool/${BUILD_GOTUPLE} ${D}${libdir}/go/pkg/tool/
+	for f in ${B}/bin/*
+	do
+		base=`basename $f`
+		install -m755 $f ${D}${libdir}/go/bin
+		make_wrapper $base ${TARGET_PREFIX}$base
+	done
+}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-crosssdk_1.9.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go-crosssdk_1.9.bb
new file mode 100644
index 0000000..1857c8a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-crosssdk_1.9.bb
@@ -0,0 +1,2 @@
+require go-crosssdk.inc
+require go-${PV}.inc
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-dep_0.3.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go-dep_0.3.0.bb
new file mode 100644
index 0000000..abfeb48
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-dep_0.3.0.bb
@@ -0,0 +1,16 @@
+SUMMARY = "Dependency management tool for Golang"
+HOMEPAGE = "https://github.com/golang/dep"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://src/${GO_IMPORT}/LICENSE;md5=1bad315647751fab0007812f70d42c0d"
+
+GO_IMPORT = "github.com/golang/dep"
+SRC_URI = "git://${GO_IMPORT}"
+
+# Points to 0.3.0 tag
+SRCREV = "7a91b794bbfbf1f3b8b79823799316451127801b"
+
+inherit go
+
+GO_INSTALL = "${GO_IMPORT}/cmd/dep"
+
+RDEPENDS_${PN}-dev += "bash"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-native.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go-native.inc
index c21f8fd..95db1c2 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-native.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-native.inc
@@ -1,16 +1,28 @@
+# Use immediate assignment here to get the original (/usr/lib)
+# instead of the one rewritten by native.bbclass.
+nonstaging_libdir := "${libdir}"
+
 inherit native
 
-BOOTSTRAP ?= ""
+SRC_URI_append = " http://golang.org/dl/go1.4.3.src.tar.gz;name=bootstrap;subdir=go1.4"
+SRC_URI[bootstrap.md5sum] = "dfb604511115dd402a77a553a5923a04"
+SRC_URI[bootstrap.sha256sum] = "9947fc705b0b841b5938c48b22dc33e9647ec0752bae66e50278df4f23f64959"
+
 export GOOS = "${BUILD_GOOS}"
 export GOARCH = "${BUILD_GOARCH}"
-export GOROOT_FINAL = "${STAGING_LIBDIR_NATIVE}/go${BOOTSTRAP}"
-export GOROOT_BOOTSTRAP = "${STAGING_LIBDIR_NATIVE}/go1.4"
+CC = "${@d.getVar('BUILD_CC').strip()}"
+
 export CGO_ENABLED = "1"
 
-do_configure[noexec] = "1"
+do_configure() {
+    cd ${WORKDIR}/go1.4/go/src
+    CGO_ENABLED=0 GOROOT=${WORKDIR}/go1.4/go ./make.bash
+}
 
 do_compile() {
 	export GOBIN="${B}/bin"
+	export GOROOT_FINAL="${nonstaging_libdir}/go"
+	export GOROOT_BOOTSTRAP="${WORKDIR}/go1.4/go"
 	rm -rf ${GOBIN}
 	mkdir ${GOBIN}
 
@@ -18,7 +30,7 @@
 	mkdir -p ${WORKDIR}/build-tmp
 
 	cd src
-	CGO_ENABLED=0 ./make.bash --host-only
+	./make.bash --host-only
 }
 
 make_wrapper() {
@@ -26,31 +38,25 @@
 	cat <<END >${D}${bindir}/$2$3
 #!/bin/bash
 here=\`dirname \$0\`
-export GOROOT="${GOROOT:-\`readlink -f \$here/../lib/go$3\`}"
-\$here/../lib/go$3/bin/$1 "\$@"
+export GOROOT="${GOROOT:-\`readlink -f \$here/../lib/go\`}"
+\$here/../lib/go/bin/$1 "\$@"
 END
-	chmod +x ${D}${bindir}/$2$3
+	chmod +x ${D}${bindir}/$2
 }
 
 do_install() {
-	install -d ${D}${libdir}/go${BOOTSTRAP}
-	cp -a ${B}/pkg ${D}${libdir}/go${BOOTSTRAP}/
-	install -d ${D}${libdir}/go${BOOTSTRAP}/src
+	install -d ${D}${libdir}/go
+	cp --preserve=mode,timestamps -R ${B}/pkg ${D}${libdir}/go/
+	install -d ${D}${libdir}/go/src
 	(cd ${S}/src; for d in *; do \
-		[ -d $d ] && cp -a ${S}/src/$d ${D}${libdir}/go${BOOTSTRAP}/src/; \
+		[ -d $d ] && cp -a ${S}/src/$d ${D}${libdir}/go/src/; \
 	done)
-
-	install -d ${D}${bindir} ${D}${libdir}/go${BOOTSTRAP}/bin
+	rm -rf ${D}${libdir}/go/src/runtime/pprof/testdata
+	install -d ${D}${bindir} ${D}${libdir}/go/bin
 	for f in ${B}/bin/*
 	do
 		base=`basename $f`
-		install -m755 $f ${D}${libdir}/go${BOOTSTRAP}/bin
-		make_wrapper $base $base ${BOOTSTRAP}
+		install -m755 $f ${D}${libdir}/go/bin
+		make_wrapper $base $base
 	done
 }
-
-do_package[noexec] = "1"
-do_packagedata[noexec] = "1"
-do_package_write_ipk[noexec] = "1"
-do_package_write_deb[noexec] = "1"
-do_package_write_rpm[noexec] = "1"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-native_1.8.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go-native_1.8.bb
deleted file mode 100644
index 182fca2..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go-native_1.8.bb
+++ /dev/null
@@ -1,3 +0,0 @@
-require ${PN}.inc
-require go-${PV}.inc
-DEPENDS += "go-bootstrap-native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-native_1.9.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go-native_1.9.bb
new file mode 100644
index 0000000..bbf3c0d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-native_1.9.bb
@@ -0,0 +1,2 @@
+require ${PN}.inc
+require go-${PV}.inc
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-runtime.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go-runtime.inc
new file mode 100644
index 0000000..29ae86e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-runtime.inc
@@ -0,0 +1,98 @@
+DEPENDS = "virtual/${TARGET_PREFIX}go go-native"
+DEPENDS_class-nativesdk = "virtual/${TARGET_PREFIX}go-crosssdk"
+PROVIDES = "virtual/${TARGET_PREFIX}go-runtime"
+
+export GOHOSTOS = "${BUILD_GOOS}"
+export GOHOSTARCH = "${BUILD_GOARCH}"
+export GOOS = "${TARGET_GOOS}"
+export GOARCH = "${TARGET_GOARCH}"
+export GOARM = "${TARGET_GOARM}"
+export GO386 = "${TARGET_GO386}"
+export GOROOT_BOOTSTRAP = "${STAGING_LIBDIR_NATIVE}/go"
+export GOROOT_FINAL = "${libdir}/go"
+export GO_TARGET_INSTALL = "std"
+export CGO_ENABLED = "1"
+export CC_FOR_TARGET="${CC}"
+export CXX_FOR_TARGET="${CXX}"
+export GOROOT_OVERRIDE = "1"
+
+do_configure() {
+	:
+}
+
+do_configure_libc-musl() {
+	rm -f ${S}/src/runtime/race/*.syso
+}
+
+do_compile() {
+	export GOBIN="${B}/bin"
+	export CC="${@d.getVar('BUILD_CC').strip()}"
+	rm -rf ${GOBIN} ${B}/pkg
+	mkdir ${GOBIN}
+	cd src
+	./make.bash --host-only
+	cp ${B}/pkg/tool/${BUILD_GOTUPLE}/go_bootstrap ${B}
+	rm -rf ${B}/pkg/${TARGET_GOTUPLE}
+	./make.bash --target-only
+	if [ -n "${GO_DYNLINK}" ]; then
+		cp ${B}/go_bootstrap ${B}/pkg/tool/${BUILD_GOTUPLE}
+		GO_FLAGS="-buildmode=shared" GO_LDFLAGS="-extldflags \"${LDFLAGS}\"" ./make.bash --target-only
+	fi
+	cd ${B}
+}
+
+do_install() {
+	install -d ${D}${libdir}/go/src
+	cp --preserve=mode,timestamps -R ${B}/pkg ${D}${libdir}/go/
+	if [ "${BUILD_GOTUPLE}" != "${TARGET_GOTUPLE}" ]; then
+		rm -rf ${D}${libdir}/go/pkg/${BUILD_GOTUPLE}
+		rm -rf ${D}${libdir}/go/pkg/obj/${BUILD_GOTUPLE}
+	fi
+	rm -rf ${D}${libdir}/go/pkg/tool
+	rm -rf ${D}${libdir}/go/pkg/obj
+	rm -rf ${D}${libdir}/go/pkg/bootstrap
+	find src -mindepth 1 -maxdepth 1 -type d | while read srcdir; do
+		cp --preserve=mode,timestamps -R $srcdir ${D}${libdir}/go/src/
+	done
+	rm -f ${D}${libdir}/go/src/cmd/dist/dist
+}
+
+# Remove test binaries that cannot be relocated
+do_install_append_class-nativesdk() {
+	rm -rf ${D}${libdir}/go/src/runtime/pprof/testdata
+}
+
+# These testdata directories aren't needed for builds and contain binaries
+# that can cause errors in sysroot_strip(), so just remove them.
+sysroot_stage_all_append() {
+	find ${SYSROOT_DESTDIR}${libdir}/go/src -depth -type d -name 'testdata' -exec rm -rf {} \;
+}
+
+ALLOW_EMPTY_${PN} = "1"
+FILES_${PN} = "${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*${SOLIBSDEV}"
+FILES_${PN}-dev = "${libdir}/go/src ${libdir}/go/pkg/include \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*.shlibname \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*/*.shlibname \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*/*/*.shlibname \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*/*/*/*.shlibname \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*/*/*/*/*.shlibname \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*/*/*/*/*/*.shlibname \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*/*/*/*/*/*/*.shlibname \
+"
+FILES_${PN}-staticdev = "${libdir}/go/pkg/${TARGET_GOTUPLE} \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*.a \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*/*.a \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*/*/*.a \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*/*/*/*.a \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*/*/*/*/*.a \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*/*/*/*/*/*.a \
+                   ${libdir}/go/pkg/${TARGET_GOTUPLE}_dynlink/*/*/*/*/*/*/*.a \
+"
+# The testdata directories in the source tree include some binaries for various
+# architectures, scripts, and .a files
+INSANE_SKIP_${PN}-dev = "staticdev ldflags file-rdeps arch"
+
+INHIBIT_PACKAGE_STRIP = "1"
+INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
+
+BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-runtime_1.9.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go-runtime_1.9.bb
new file mode 100644
index 0000000..43b68b4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-runtime_1.9.bb
@@ -0,0 +1,2 @@
+require go-${PV}.inc
+require go-runtime.inc
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go-target.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go-target.inc
new file mode 100644
index 0000000..cac5d78
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go-target.inc
@@ -0,0 +1,61 @@
+inherit goarch
+DEPENDS = "virtual/${TARGET_PREFIX}go go-native"
+DEPENDS_class-nativesdk = "virtual/${TARGET_PREFIX}go-crosssdk go-native"
+
+export GOHOSTOS = "${BUILD_GOOS}"
+export GOHOSTARCH = "${BUILD_GOARCH}"
+export GOOS = "${TARGET_GOOS}"
+export GOARCH = "${TARGET_GOARCH}"
+export GOARM = "${TARGET_GOARM}"
+export GO386 = "${TARGET_GO386}"
+export GOROOT_BOOTSTRAP = "${STAGING_LIBDIR_NATIVE}/go"
+export GOROOT_FINAL = "${libdir}/go"
+export CGO_ENABLED = "1"
+export CC_FOR_TARGET = "${CC}"
+export CXX_FOR_TARGET = "${CXX}"
+export GO_TARGET_INSTALL = "cmd"
+export GO_FLAGS = "-a"
+GO_LDFLAGS = ""
+GO_LDFLAGS_class-nativesdk = "-linkmode external"
+export GO_LDFLAGS
+
+SECURITY_CFLAGS = "${SECURITY_NOPIE_CFLAGS}"
+SECURITY_LDFLAGS = ""
+
+do_configure[noexec] = "1"
+
+do_compile() {
+	export GOBIN="${B}/bin"
+	export CC="${@d.getVar('BUILD_CC').strip()}"
+	rm -rf ${GOBIN} ${B}/pkg
+	mkdir ${GOBIN}
+
+	export TMPDIR=${WORKDIR}/build-tmp
+	mkdir -p ${WORKDIR}/build-tmp
+
+	cd src
+	./make.bash
+	cd ${B}
+}
+
+do_install() {
+	install -d ${D}${libdir}/go/pkg/tool
+	cp --preserve=mode,timestamps -R ${B}/pkg/tool/${TARGET_GOTUPLE} ${D}${libdir}/go/pkg/tool/
+	install -d ${D}${libdir}/go/src
+	cp --preserve=mode,timestamps -R ${S}/src/cmd ${D}${libdir}/go/src/
+	install -d ${D}${libdir}/go/bin
+	install -d ${D}${bindir}
+	for f in ${B}/${GO_BUILD_BINDIR}/*; do
+		name=`basename $f`
+		install -m 0755 $f ${D}${libdir}/go/bin/
+		ln -sf ../${BASELIB}/go/bin/$name ${D}${bindir}/
+	done
+}
+
+PACKAGES = "${PN} ${PN}-dev"
+FILES_${PN} = "${libdir}/go/bin ${libdir}/go/pkg/tool/${TARGET_GOTUPLE} ${bindir}"
+FILES_${PN}-dev = "${libdir}/go"
+RDEPENDS_${PN}-dev = "perl bash"
+INSANE_SKIP_${PN} = "ldflags"
+
+BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go.inc b/import-layers/yocto-poky/meta/recipes-devtools/go/go.inc
deleted file mode 100644
index 25437dd..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go.inc
+++ /dev/null
@@ -1,86 +0,0 @@
-inherit goarch
-DEPENDS += "go-bootstrap-native"
-
-# libgcc is required for the target specific libraries to build
-# properly, but apparently not for go-cross and, more importantly,
-# also can't be used there because go-cross cannot depend on
-# the tune-specific libgcc. Otherwise go-cross also would have
-# to be tune-specific.
-DEPENDS += "${@ 'libgcc' if not oe.utils.inherits(d, 'cross') else ''}"
-
-# Prevent runstrip from running because you get errors when the host arch != target arch
-INHIBIT_PACKAGE_STRIP = "1"
-INHIBIT_SYSROOT_STRIP = "1"
-
-# x32 ABI is not supported on go compiler so far
-COMPATIBLE_HOST_linux-gnux32 = "null"
-# ppc32 is not supported in go compilers
-COMPATIBLE_HOST_powerpc = "null"
-
-export GOHOSTOS = "${BUILD_GOOS}"
-export GOHOSTARCH = "${BUILD_GOARCH}"
-export GOOS = "${TARGET_GOOS}"
-export GOARCH = "${TARGET_GOARCH}"
-export GOARM = "${TARGET_GOARM}"
-export GOROOT_BOOTSTRAP = "${STAGING_LIBDIR_NATIVE}/go1.4"
-export GOROOT_FINAL = "${libdir}/go"
-export CGO_ENABLED = "1"
-export CC_FOR_TARGET = "${CC}"
-export CXX_FOR_TARGET = "${CXX}"
-
-do_configure[noexec] = "1"
-
-do_compile_prepend_class-cross() {
-	export CGO_ENABLED=0
-}
-
-do_compile() {
-	export GOBIN="${B}/bin"
-	export CC="${@d.getVar('BUILD_CC', True).strip()}"
-	rm -rf ${GOBIN} ${B}/pkg
-	mkdir ${GOBIN}
-
-	export TMPDIR=${WORKDIR}/build-tmp
-	mkdir -p ${WORKDIR}/build-tmp
-
-	cd src
-	./make.bash --host-only
-	# Ensure cgo.a is built with the target toolchain
-	export GOBIN="${B}/target/bin"
-	rm -rf ${GOBIN}
-	mkdir -p ${GOBIN}
-	GO_FLAGS="-a" ./make.bash
-}
-
-do_install_class-target() {
-	install -d ${D}${libdir}/go
-	cp -a ${B}/pkg ${D}${libdir}/go/
-	install -d ${D}${libdir}/go/src
-	(cd ${S}/src; for d in *; do \
-		[ -d $d ] && cp -a ${S}/src/$d ${D}${libdir}/go/src/; \
-	done)
-	install -d ${D}${bindir}
-	if [ -d ${B}/bin/${GOOS}_${GOARCH} ]
-	then
-		install -m 0755 ${B}/bin/${GOOS}_${GOARCH}/* ${D}${bindir}
-	else
-		install -m 0755 ${B}/bin/* ${D}${bindir}
-	fi
-}
-
-do_install_class-cross() {
-	install -d ${D}${libdir}/go
-	cp -a ${B}/pkg ${D}${libdir}/go/
-	install -d ${D}${libdir}/go/src
-	(cd ${S}/src; for d in *; do \
-		[ -d $d ] && cp -a ${S}/src/$d ${D}${libdir}/go/src/; \
-	done)
-	install -d ${D}${bindir}
-	for f in ${B}/bin/go*
-	do
-		install -m755 $f ${D}${bindir}
-	done
-}
-do_package_qa[noexec] = "1"
-
-RDEPENDS_${PN} += "perl"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go_1.6.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go_1.6.bb
deleted file mode 100644
index 2f59033..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go_1.6.bb
+++ /dev/null
@@ -1,4 +0,0 @@
-require go.inc
-require go-${PV}.inc
-
-BBCLASSEXTEND = "cross"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go_1.7.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go_1.7.bb
deleted file mode 100644
index e7a6ab2..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go_1.7.bb
+++ /dev/null
@@ -1,2 +0,0 @@
-require go-${PV}.inc
-require go.inc
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go_1.8.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go_1.8.bb
deleted file mode 100644
index 091b131..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/go/go_1.8.bb
+++ /dev/null
@@ -1,3 +0,0 @@
-require go-${PV}.inc
-require go.inc
-TUNE_CCARGS_remove = "-march=mips32r2"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/go/go_1.9.bb b/import-layers/yocto-poky/meta/recipes-devtools/go/go_1.9.bb
new file mode 100644
index 0000000..c67e2cb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/go/go_1.9.bb
@@ -0,0 +1,2 @@
+require go-${PV}.inc
+require go-target.inc
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/arm_aarch64.patch b/import-layers/yocto-poky/meta/recipes-devtools/guile/files/arm_aarch64.patch
deleted file mode 100644
index f1788b6..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/arm_aarch64.patch
+++ /dev/null
@@ -1,19 +0,0 @@
-guile: add aarch64 recognition
-
-Assume little-endian.
-
-Upstream-Status: Pending
-
-Signed-off-by: joe.slater@windriver.com
-
---- a/module/system/base/target.scm
-+++ b/module/system/base/target.scm
-@@ -70,6 +70,8 @@
-             ((member cpu '("sparc" "sparc64" "powerpc" "powerpc64" "spu"
-                            "mips" "mips64"))
-              (endianness big))
-+            ((string-match "^aarch64" cpu)
-+             (endianness little))
-             ((string-match "^arm.*eb" cpu)
-              (endianness big))
-             ((string-match "^arm.*" cpu)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/arm_endianness.patch b/import-layers/yocto-poky/meta/recipes-devtools/guile/files/arm_endianness.patch
deleted file mode 100644
index ea4328b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/arm_endianness.patch
+++ /dev/null
@@ -1,23 +0,0 @@
-Support form ARM endianness
-
-Fixes Yocto bug# 2729
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
-
-Index: guile-2.0.5/module/system/base/target.scm
-===================================================================
---- guile-2.0.5.orig/module/system/base/target.scm	2012-01-24 03:06:06.000000000 -0800
-+++ guile-2.0.5/module/system/base/target.scm	2012-07-12 13:05:44.372364103 -0700
-@@ -70,7 +70,9 @@
-             ((member cpu '("sparc" "sparc64" "powerpc" "powerpc64" "spu"
-                            "mips" "mips64"))
-              (endianness big))
--            ((string-match "^arm.*el" cpu)
-+            ((string-match "^arm.*eb" cpu)
-+             (endianness big))
-+            ((string-match "^arm.*" cpu)
-              (endianness little))
-             (else
-              (error "unknown CPU endianness" cpu)))))
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/debian/0002-Mark-Unused-modules-are-removed-gc-test-as-unresolve.patch b/import-layers/yocto-poky/meta/recipes-devtools/guile/files/debian/0002-Mark-Unused-modules-are-removed-gc-test-as-unresolve.patch
deleted file mode 100644
index c7bf635..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/debian/0002-Mark-Unused-modules-are-removed-gc-test-as-unresolve.patch
+++ /dev/null
@@ -1,39 +0,0 @@
-Upstream-Status: Inappropriate [debian patch]
-
-Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
-
-From e52bfcdbaca5dce498678d8f512381e3e39a4066 Mon Sep 17 00:00:00 2001
-From: Rob Browning <rlb@defaultvalue.org>
-Date: Sun, 18 Mar 2012 11:40:55 -0500
-Subject: Mark "Unused modules are removed" gc test as unresolved.
-
-As per discussion with upstream, mark this test as unresolved since it
-may produce false negatives, depending on the behavior/timing of the
-garbage collector.
----
- test-suite/tests/gc.test |   11 ++++++-----
- 1 files changed, 6 insertions(+), 5 deletions(-)
-
-diff --git a/test-suite/tests/gc.test b/test-suite/tests/gc.test
-index a969752..8c8e13e 100644
---- a/test-suite/tests/gc.test
-+++ b/test-suite/tests/gc.test
-@@ -84,11 +84,13 @@
-       ;; one gc round. not sure why.
- 
-       (maybe-gc-flakiness
--       (= (let lp ((i 0))
--            (if (guard)
--                (lp (1+ i))
--                i))
--          total))))
-+       (or (= (let lp ((i 0))
-+                (if (guard)
-+                    (lp (1+ i))
-+                    i))
-+              total)
-+           (throw 'unresolved)))))
-+
- 
-   (pass-if "Lexical vars are collectable"
-     (let ((l (compile
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/debian/0003-Mark-mutex-with-owner-not-retained-threads-test-as-u.patch b/import-layers/yocto-poky/meta/recipes-devtools/guile/files/debian/0003-Mark-mutex-with-owner-not-retained-threads-test-as-u.patch
deleted file mode 100644
index d3faf3e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/debian/0003-Mark-mutex-with-owner-not-retained-threads-test-as-u.patch
+++ /dev/null
@@ -1,33 +0,0 @@
-Upstream-Status: Inappropriate [debian patch]
-
-Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
-
-From 848543091d55dddb54a85612155964506d712852 Mon Sep 17 00:00:00 2001
-From: Rob Browning <rlb@defaultvalue.org>
-Date: Sun, 18 Mar 2012 13:28:24 -0500
-Subject: Mark "mutex with owner not retained" threads test as unresolved.
-
-As per discussion with upstream, mark this test as unresolved since it
-may produce false negatives, depending on the behavior/timing of the
-garbage collector.
----
- test-suite/tests/threads.test |    6 ++++--
- 1 files changed, 4 insertions(+), 2 deletions(-)
-
-diff --git a/test-suite/tests/threads.test b/test-suite/tests/threads.test
-index 85a7c38..50899cb 100644
---- a/test-suite/tests/threads.test
-+++ b/test-suite/tests/threads.test
-@@ -414,8 +414,10 @@
- 
-             (gc) (gc)
-             (let ((m (g)))
--              (and (mutex? m)
--                   (eq? (mutex-owner m) (current-thread)))))))
-+              (or
-+               (and (mutex? m)
-+                    (eq? (mutex-owner m) (current-thread)))
-+               (throw 'unresolved))))))
- 
-       ;;
-       ;; mutex lock levels
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/guile_2.0.6_fix_sed_error.patch b/import-layers/yocto-poky/meta/recipes-devtools/guile/files/guile_2.0.6_fix_sed_error.patch
deleted file mode 100644
index 5597bb2..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/guile_2.0.6_fix_sed_error.patch
+++ /dev/null
@@ -1,24 +0,0 @@
-Upstream-Status: Pending
-
-This fixes sed issue when prefix has / in it, like /usr/local
-
-autoreconf error avoided:
-| sed: -e expression #1, char 9: unknown option to `s'
-| configure.ac:39: error: AC_INIT should be called with package and version arguments
-
-Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
-2012/05/01
-
-Index: guile-2.0.5/build-aux/git-version-gen
-===================================================================
---- guile-2.0.5.orig/build-aux/git-version-gen
-+++ guile-2.0.5/build-aux/git-version-gen
-@@ -187,7 +187,7 @@ else
-     v=UNKNOWN
- fi
- 
--v=`echo "$v" |sed "s/^$prefix//"`
-+v=`echo "$v" |sed "s#^$prefix##"`
- 
- # Test whether to append the "-dirty" suffix only if the version
- # string we're using came from git.  I.e., skip the test if it's "UNKNOWN"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/libguile-Makefile.am-hook.patch b/import-layers/yocto-poky/meta/recipes-devtools/guile/files/libguile-Makefile.am-hook.patch
deleted file mode 100644
index 290b9d4..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/libguile-Makefile.am-hook.patch
+++ /dev/null
@@ -1,42 +0,0 @@
-From 9c4e120a7a87db34d22a50883a5a525170b480d7 Mon Sep 17 00:00:00 2001
-From: Robert Yang <liezhi.yang@windriver.com>
-Date: Tue, 6 Jan 2015 23:10:51 -0800
-Subject: [PATCH] libguile/Makefile.am: install-data-hook -> install-exec-hook
-
-It may install such a file:
-/usr/lib64/libguile-2.0*-gdb.scm
-
-This is because when there is no file in the directory:
-for f in libguile-2.0*; do
-    [snip]
-done
-
-The f would be libguile-2.0* itself, use install-exec-hook will fix the
-problem since it depends on install-libLTLIBRARIES.
-
-Upstream-Status: Pending
-
-Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
----
- libguile/Makefile.am |    4 +---
- 1 file changed, 1 insertion(+), 3 deletions(-)
-
-diff --git a/libguile/Makefile.am b/libguile/Makefile.am
-index 5decd99..52645b7 100644
---- a/libguile/Makefile.am
-+++ b/libguile/Makefile.am
-@@ -446,10 +446,8 @@ EXTRA_libguile_@GUILE_EFFECTIVE_VERSION@_la_SOURCES = _scm.h		\
- ## delete guile-snarf.awk from the installation bindir, in case it's
- ## lingering there due to an earlier guile version not having been
- ## wiped out.
--install-exec-hook:
-+install-exec-hook: libguile-2.0-gdb.scm
- 	rm -f $(DESTDIR)$(bindir)/guile-snarf.awk
--
--install-data-hook: libguile-2.0-gdb.scm
- 	@$(MKDIR_P) $(DESTDIR)$(libdir)
- ## We want to install libguile-2.0-gdb.scm as SOMETHING-gdb.scm.
- ## SOMETHING is the full name of the final library.  We want to ignore
--- 
-1.7.9.5
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/opensuse/guile-64bit.patch b/import-layers/yocto-poky/meta/recipes-devtools/guile/files/opensuse/guile-64bit.patch
deleted file mode 100644
index da69b5f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/opensuse/guile-64bit.patch
+++ /dev/null
@@ -1,39 +0,0 @@
-Upstream-Status: Inappropriate [opensuse patch]
-
-Signed-off-by: Nitin A Kamble <nitin.a.kamble@intel.com>
-
-Index: guile-2.0.3/libguile/hash.c
-===================================================================
---- guile-2.0.3.orig/libguile/hash.c	2011-07-06 15:49:59.000000000 -0700
-+++ guile-2.0.3/libguile/hash.c	2012-01-13 21:49:43.332844884 -0800
-@@ -270,7 +270,7 @@ scm_hasher(SCM obj, unsigned long n, siz
- unsigned long
- scm_ihashq (SCM obj, unsigned long n)
- {
--  return (SCM_UNPACK (obj) >> 1) % n;
-+  return ((unsigned long) SCM_UNPACK (obj) >> 1) % n;
- }
- 
- 
-@@ -306,7 +306,7 @@ scm_ihashv (SCM obj, unsigned long n)
-   if (SCM_NUMP(obj))
-     return (unsigned long) scm_hasher(obj, n, 10);
-   else
--    return SCM_UNPACK (obj) % n;
-+    return (unsigned long) SCM_UNPACK (obj) % n;
- }
- 
- 
-Index: guile-2.0.3/libguile/struct.c
-===================================================================
---- guile-2.0.3.orig/libguile/struct.c	2011-07-06 15:50:00.000000000 -0700
-+++ guile-2.0.3/libguile/struct.c	2012-01-13 21:49:43.332844884 -0800
-@@ -942,7 +942,7 @@ scm_struct_ihashq (SCM obj, unsigned lon
- {
-   /* The length of the hash table should be a relative prime it's not
-      necessary to shift down the address.  */
--  return SCM_UNPACK (obj) % n;
-+  return (unsigned long) SCM_UNPACK (obj) % n;
- }
- 
- SCM_DEFINE (scm_struct_vtable_name, "struct-vtable-name", 1, 0, 0, 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/workaround-ice-ssa-corruption.patch b/import-layers/yocto-poky/meta/recipes-devtools/guile/files/workaround-ice-ssa-corruption.patch
deleted file mode 100644
index 6c34838..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/guile/files/workaround-ice-ssa-corruption.patch
+++ /dev/null
@@ -1,60 +0,0 @@
-libguile/vm-i-system.c: workaround ice ssa corruption while compiling with option -g -O
-
-While compiling with option -g -O, there was a ssa corruption:
-..
-Unable to coalesce ssa_names 48 and 3476 which are marked as MUST COALESCE.
-sp_48(ab) and  sp_3476(ab)
-guile-2.0.11/libguile/vm-engine.c: In function 'vm_debug_engine':
-guile-2.0.11/libguile/vm.c:673:19: internal compiler error: SSA corruption
- #define VM_NAME   vm_debug_engine
-                   ^
-guile-2.0.11/libguile/vm-engine.c:39:1: note: in expansion of macro 'VM_NAME'
- VM_NAME (SCM vm, SCM program, SCM *argv, int nargs)
- ^
-Please submit a full bug report,
-with preprocessed source if appropriate.
-See <http://gcc.gnu.org/bugs.html> for instructions.
-...
-
-Tweak libguile/vm-i-system.c to add boundary value check to workaround it.
-
-Upstream-Status: Pending
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- libguile/vm-i-system.c | 20 ++++++++++++++++----
- 1 file changed, 16 insertions(+), 4 deletions(-)
-
-diff --git a/libguile/vm-i-system.c b/libguile/vm-i-system.c
---- a/libguile/vm-i-system.c
-+++ b/libguile/vm-i-system.c
-@@ -625,10 +625,22 @@ VM_DEFINE_INSTRUCTION (47, bind_optionals_shuffle, "bind-optionals/shuffle", 6,
-   /* now shuffle up, from walk to ntotal */
-   {
-     scm_t_ptrdiff nshuf = sp - walk + 1, i;
--    sp = (fp - 1) + ntotal + nshuf;
--    CHECK_OVERFLOW ();
--    for (i = 0; i < nshuf; i++)
--      sp[-i] = walk[nshuf-i-1];
-+    /* check the value of nshuf to workaround ice ssa corruption */
-+    /* while compiling with -O -g */
-+    if (nshuf > 0)
-+    {
-+      sp = (fp - 1) + ntotal + nshuf;
-+      CHECK_OVERFLOW ();
-+      for (i = 0; i < nshuf; i++)
-+        sp[-i] = walk[nshuf-i-1];
-+    }
-+    else
-+    {
-+      sp = (fp - 1) + ntotal + nshuf;
-+      CHECK_OVERFLOW ();
-+      for (i = 0; i < nshuf; i++)
-+        sp[-i] = walk[nshuf-i-1];
-+    }
-   }
-   /* and fill optionals & keyword args with SCM_UNDEFINED */
-   while (walk <= (fp - 1) + ntotal)
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/guile/guile_2.0.14.bb b/import-layers/yocto-poky/meta/recipes-devtools/guile/guile_2.0.14.bb
deleted file mode 100644
index 7a01d0f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/guile/guile_2.0.14.bb
+++ /dev/null
@@ -1,125 +0,0 @@
-SUMMARY = "Guile is the GNU Ubiquitous Intelligent Language for Extensions"
-DESCRIPTION = "Guile is the GNU Ubiquitous Intelligent Language for Extensions,\
- the official extension language for the GNU operating system.\
- Guile is a library designed to help programmers create flexible applications.\
- Using Guile in an application allows the application's functionality to be\
- extended by users or other programmers with plug-ins, modules, or scripts.\
- Guile provides what might be described as 'practical software freedom,'\
- making it possible for users to customize an application to meet their\
- needs without digging into the application's internals."
-
-HOMEPAGE = "http://www.gnu.org/software/guile/"
-SECTION = "devel"
-LICENSE = "GPLv3"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
-
-SRC_URI = "${GNU_MIRROR}/guile/guile-${PV}.tar.xz \
-           file://debian/0002-Mark-Unused-modules-are-removed-gc-test-as-unresolve.patch \
-           file://debian/0003-Mark-mutex-with-owner-not-retained-threads-test-as-u.patch \
-           file://opensuse/guile-64bit.patch \
-           file://guile_2.0.6_fix_sed_error.patch \
-           file://arm_endianness.patch \
-           file://arm_aarch64.patch \
-           file://workaround-ice-ssa-corruption.patch \
-           file://libguile-Makefile.am-hook.patch \
-           "
-
-SRC_URI[md5sum] = "c64977c775effd19393364b3018fd8cd"
-SRC_URI[sha256sum] = "e8442566256e1be14e51fc18839cd799b966bc5b16c6a1d7a7c35155a8619d82"
-
-inherit autotools gettext pkgconfig texinfo
-BBCLASSEXTEND = "native"
-
-# Fix "Argument list too long" error when len(TMPDIR) = 410
-acpaths = "-I ./m4"
-
-DEPENDS = "libunistring bdwgc gmp libtool libffi ncurses readline"
-# add guile-native only to the target recipe's DEPENDS
-DEPENDS_append_class-target = " guile-native libatomic-ops"
-
-# The comment of the script guile-config said it has been deprecated but we should
-# at least add the required dependency to make it work since we still provide the script.
-RDEPENDS_${PN} = "pkgconfig"
-
-RDEPENDS_${PN}_append_libc-glibc_class-target = " glibc-gconv-iso8859-1"
-
-EXTRA_OECONF += "${@['--without-libltdl-prefix --without-libgmp-prefix --without-libreadline-prefix', ''][bb.data.inherits_class('native',d)]}"
-
-EXTRA_OECONF_append_class-target = " --with-libunistring-prefix=${STAGING_LIBDIR} \
-                                     --with-libgmp-prefix=${STAGING_LIBDIR} \
-                                     --with-libltdl-prefix=${STAGING_LIBDIR}"
-EXTRA_OECONF_append_libc-uclibc = " guile_cv_use_csqrt=no "
-
-CFLAGS_append_libc-musl = " -DHAVE_GC_SET_FINALIZER_NOTIFIER \
-	                    -DHAVE_GC_GET_HEAP_USAGE_SAFE \
-	                    -DHAVE_GC_GET_FREE_SPACE_DIVISOR \
-	                    -DHAVE_GC_SET_FINALIZE_ON_DEMAND \
-                           "
-
-do_configure_prepend() {
-	mkdir -p po
-}
-
-export GUILE_FOR_BUILD="${BUILD_SYS}-guile"
-
-do_install_append_class-native() {
-	install -m 0755  ${D}${bindir}/guile ${D}${bindir}/${HOST_SYS}-guile
-
-	create_wrapper ${D}/${bindir}/guile \
-		GUILE_LOAD_PATH=${STAGING_DATADIR_NATIVE}/guile/2.0 \
-		GUILE_LOAD_COMPILED_PATH=${STAGING_LIBDIR_NATIVE}/guile/2.0/ccache
-	create_wrapper ${D}${bindir}/${HOST_SYS}-guile \
-		GUILE_LOAD_PATH=${STAGING_DATADIR_NATIVE}/guile/2.0 \
-		GUILE_LOAD_COMPILED_PATH=${STAGING_LIBDIR_NATIVE}/guile/2.0/ccache
-}
-
-do_install_append_class-target() {
-	# cleanup buildpaths in scripts
-	sed -i -e 's:${STAGING_DIR_NATIVE}::' ${D}${bindir}/guile-config
-	sed -i -e 's:${STAGING_DIR_HOST}::' ${D}${bindir}/guile-snarf
-
-	sed -i -e 's:${STAGING_DIR_TARGET}::g' ${D}${libdir}/pkgconfig/guile-2.0.pc
-}
-
-do_install_append_libc-musl() {
-	rm -f ${D}${libdir}/charset.alias
-}
-
-SYSROOT_PREPROCESS_FUNCS = "guile_cross_config"
-
-guile_cross_config() {
-	# this is only for target recipe
-	if [ "${PN}" = "guile" ]
-	then
-	        # Create guile-config returning target values instead of native values
-	        install -d ${SYSROOT_DESTDIR}${STAGING_BINDIR_CROSS}
-        	printf '#!%s \\\n--no-auto-compile -e main -s\n!#\n(define %%guile-build-info %s(\n' $(which ${BUILD_SYS}-guile) "'" \
-			> ${B}/guile-config.cross
-	        sed -n -e 's:^[ \t]*{[ \t]*":  (:' \
-			-e 's:",[ \t]*": . ":' \
-			-e 's:" *}, *\\:"):' \
-			-e 's:^.*cachedir.*$::' \
-			-e '/^  (/p' \
-			< ${B}/libguile/libpath.h >> ${B}/guile-config.cross
-	        echo '))' >> ${B}/guile-config.cross
-	        cat ${B}/meta/guile-config >> ${B}/guile-config.cross
-	        install ${B}/guile-config.cross ${STAGING_BINDIR_CROSS}/guile-config
-	fi
-}
-
-# Guile needs the compiled files to be newer than the source, and it won't
-# auto-compile into the prefix even if it can write there, so touch them here as
-# sysroot is managed.
-SSTATEPOSTINSTFUNCS += "guile_sstate_postinst"
-GUILESSTATEDIR = "${COMPONENTS_DIR}/${TUNE_PKGARCH}/${PN}/${libdir}/guile/2.0/ccache"
-GUILESSTATEDIR_class-native = "${COMPONENTS_DIR}/${BUILD_ARCH}/${PN}/${libdir_native}/guile/2.0/ccache"
-guile_sstate_postinst() {
-	if [ "${BB_CURRENTTASK}" = "populate_sysroot" -o "${BB_CURRENTTASK}" = "populate_sysroot_setscene" ]
-	then
-                find ${GUILESSTATEDIR} -type f | xargs touch
-	fi
-}
-
-# http://errors.yoctoproject.org/Errors/Details/20491/
-ARM_INSTRUCTION_SET_armv4 = "arm"
-ARM_INSTRUCTION_SET_armv5 = "arm"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/i2c-tools/i2c-tools_3.1.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/i2c-tools/i2c-tools_3.1.2.bb
index 4d25542..c017252 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/i2c-tools/i2c-tools_3.1.2.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/i2c-tools/i2c-tools_3.1.2.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Set of i2c tools for linux"
+HOMEPAGE = "https://i2c.wiki.kernel.org/index.php/I2C_Tools"
 SECTION = "base"
 LICENSE = "GPLv2+"
 LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/intltool/intltool_0.51.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/intltool/intltool_0.51.0.bb
index 551bdf0..ecff2fa 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/intltool/intltool_0.51.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/intltool/intltool_0.51.0.bb
@@ -16,7 +16,7 @@
 
 DEPENDS = "libxml-parser-perl-native"
 RDEPENDS_${PN} = "gettext-dev libxml-parser-perl"
-DEPENDS_class-native = "libxml-parser-perl-native"
+DEPENDS_class-native = "libxml-parser-perl-native gettext-native"
 
 inherit autotools pkgconfig perlnative
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c/0001-Add-FALLTHRU-comment-to-handle-GCC7-warnings.patch b/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c/0001-Add-FALLTHRU-comment-to-handle-GCC7-warnings.patch
index df3b600..537be5e 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c/0001-Add-FALLTHRU-comment-to-handle-GCC7-warnings.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c/0001-Add-FALLTHRU-comment-to-handle-GCC7-warnings.patch
@@ -1,14 +1,11 @@
-From 9522ac8e5d5b20a472f3ffc356d388d36f7f582c Mon Sep 17 00:00:00 2001
+From 7b24f8bd95ad4f7d00c93ca2ad998c14a0266dbe Mon Sep 17 00:00:00 2001
 From: marxin <mliska@suse.cz>
 Date: Tue, 21 Mar 2017 08:42:11 +0100
 Subject: [PATCH] Add FALLTHRU comment to handle GCC7 warnings.
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
 
 ---
-Upstream-Status: Backport [https://github.com/json-c/json-c/commit/014924ba899f659917bb64392bbff7d3c803afc2]
-Signed-off-by: André Draszik <adraszik@tycoint.com>
+Upstream-Status: Backport
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
 
  json_object.c  |  1 +
  json_tokener.c |  1 +
@@ -28,10 +25,10 @@
      return 0;
    }
 diff --git a/json_tokener.c b/json_tokener.c
-index 7fa32ae..b32d657 100644
+index 9a76293..ae7b1ae 100644
 --- a/json_tokener.c
 +++ b/json_tokener.c
-@@ -306,6 +306,7 @@ struct json_object* json_tokener_parse_ex(struct json_tokener *tok,
+@@ -305,6 +305,7 @@ struct json_object* json_tokener_parse_ex(struct json_tokener *tok,
              tok->err = json_tokener_error_parse_unexpected;
              goto out;
          }
@@ -73,5 +70,5 @@
               break;
      case 0 : return c;
 -- 
-2.14.1
+2.12.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c/0001-Link-against-libm-when-needed.patch b/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c/0001-Link-against-libm-when-needed.patch
deleted file mode 100644
index bfe9d72..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c/0001-Link-against-libm-when-needed.patch
+++ /dev/null
@@ -1,53 +0,0 @@
-From 93582ad85ef48c18ac12f00a9a9e124989b1fcab Mon Sep 17 00:00:00 2001
-From: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-Date: Fri, 1 May 2015 12:52:18 +0200
-Subject: [PATCH] Link against libm when needed
-
-In certain C libraries (e.g uClibc), isnan() and related functions are
-implemented in libm, so json-c needs to link against it. This commit
-therefore adds an AC_TRY_LINK() test to check whether a program
-calling isnan() can be properly linked with no special flags. If not,
-we assume linking against libm is needed.
-
-The json-c.pc.in file is also adjusted so that in the case of static
-linking against json-c, -lm is also used.
-
-Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
----
-Upstream-Status: Backport
-
- configure.ac | 4 ++++
- json-c.pc.in | 3 ++-
- 2 files changed, 6 insertions(+), 1 deletion(-)
-
-diff --git a/configure.ac b/configure.ac
-index c50f81b..30e7174 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -50,6 +50,10 @@ AC_CHECK_DECLS([isinf], [], [], [[#include <math.h>]])
- AC_CHECK_DECLS([_isnan], [], [], [[#include <float.h>]])
- AC_CHECK_DECLS([_finite], [], [], [[#include <float.h>]])
- 
-+if test "$ac_cv_have_decl_isnan" = "yes" ; then
-+   AC_TRY_LINK([#include <math.h>], [float f = 0.0; return isnan(f)], [], [LIBS="$LIBS -lm"])
-+fi
-+
- #check if .section.gnu.warning accepts long strings (for __warn_references)
- AC_LANG_PUSH([C])
- 
-diff --git a/json-c.pc.in b/json-c.pc.in
-index 037739d..05bfbc8 100644
---- a/json-c.pc.in
-+++ b/json-c.pc.in
-@@ -6,6 +6,7 @@ includedir=@includedir@
- Name: json-c
- Description: JSON implementation in C
- Version: @VERSION@
--Requires: 
-+Requires:
-+Libs.private: @LIBS@
- Libs:  -L${libdir} -ljson-c
- Cflags: -I${includedir}/json-c
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c/0001-json_tokener-requires-INF-and-NAN.patch b/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c/0001-json_tokener-requires-INF-and-NAN.patch
deleted file mode 100644
index d29d911..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c/0001-json_tokener-requires-INF-and-NAN.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-From 9be71700eb580c815688584a64621a38867c3fdd Mon Sep 17 00:00:00 2001
-From: James Myatt <james.myatt@tessella.com>
-Date: Thu, 5 Feb 2015 15:57:14 +0000
-Subject: [PATCH] json_tokener requires INF and NAN
-
----
-Upstream-Status: Backport
-
- json_tokener.c | 3 ++-
- 1 file changed, 2 insertions(+), 1 deletion(-)
-
-Index: json-c-0.12/json_tokener.c
-===================================================================
---- json-c-0.12.orig/json_tokener.c
-+++ json-c-0.12/json_tokener.c
-@@ -16,6 +16,7 @@
- #include "config.h"
- 
- #include <math.h>
-+#include "math_compat.h"
- #include <stdio.h>
- #include <stdlib.h>
- #include <stddef.h>
-@@ -352,12 +353,10 @@ struct json_object* json_tokener_parse_e
- 
-     case json_tokener_state_inf: /* aka starts with 'i' */
-       {
--	int size;
--	int size_inf;
-+	size_t size_inf;
- 	int is_negative = 0;
- 
- 	printbuf_memappend_fast(tok->pb, &c, 1);
--	size = json_min(tok->st_pos+1, json_null_str_len);
- 	size_inf = json_min(tok->st_pos+1, json_inf_str_len);
- 	char *infbuf = tok->pb->buf;
- 	if (*infbuf == '-')
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c_0.12.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c_0.12.1.bb
new file mode 100644
index 0000000..401cf13
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c_0.12.1.bb
@@ -0,0 +1,32 @@
+SUMMARY = "C bindings for apps which will manipulate JSON data"
+DESCRIPTION = "JSON-C implements a reference counting object model that allows you to easily construct JSON objects in C."
+HOMEPAGE = "https://github.com/json-c/json-c/wiki"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=de54b60fbbc35123ba193fea8ee216f2"
+
+SRC_URI = "https://s3.amazonaws.com/json-c_releases/releases/${BP}.tar.gz \
+           file://0001-Add-FALLTHRU-comment-to-handle-GCC7-warnings.patch \
+           "
+SRC_URI[md5sum] = "55f7853f7d8cf664554ce3fa71bf1c7d"
+SRC_URI[sha256sum] = "2a136451a7932d80b7d197b10441e26e39428d67b1443ec43bbba824705e1123"
+
+UPSTREAM_CHECK_REGEX = "json-c-(?P<pver>\d+(\.\d+)+).tar"
+# json-c releases page is fetching the list of releases in some weird XML format
+# from https://s3.amazonaws.com/json-c_releases and processes it with javascript :-/
+#UPSTREAM_CHECK_URI = "https://s3.amazonaws.com/json-c_releases/releases/index.html"
+RECIPE_UPSTREAM_VERSION = "0.12.1"
+RECIPE_UPSTREAM_DATE = "Jun 07, 2016"
+CHECK_DATE = "Apr 19, 2017"
+
+RPROVIDES_${PN} = "libjson"
+
+inherit autotools
+
+EXTRA_OECONF = "--enable-rdrand"
+
+do_configure_prepend() {
+    # Clean up autoconf cruft that should not be in the tarball
+    rm -f ${S}/config.status
+}
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c_0.12.bb b/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c_0.12.bb
deleted file mode 100644
index 072c092..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/json-c/json-c_0.12.bb
+++ /dev/null
@@ -1,33 +0,0 @@
-SUMMARY = "C bindings for apps which will manipulate JSON data"
-DESCRIPTION = "JSON-C implements a reference counting object model that allows you to easily construct JSON objects in C."
-HOMEPAGE = "https://github.com/json-c/json-c/wiki"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=de54b60fbbc35123ba193fea8ee216f2"
-
-SRC_URI = "https://s3.amazonaws.com/json-c_releases/releases/${BP}.tar.gz \
-           file://0001-json_tokener-requires-INF-and-NAN.patch \
-           file://0001-Link-against-libm-when-needed.patch \
-           file://0001-Add-FALLTHRU-comment-to-handle-GCC7-warnings.patch \
-          "
-
-SRC_URI[md5sum] = "3ca4bbb881dfc4017e8021b5e0a8c491"
-SRC_URI[sha256sum] = "000c01b2b3f82dcb4261751eb71f1b084404fb7d6a282f06074d3c17078b9f3f"
-
-UPSTREAM_CHECK_REGEX = "json-c-(?P<pver>\d+(\.\d+)+).tar"
-# json-c releases page is fetching the list of releases in some weird XML format
-# from https://s3.amazonaws.com/json-c_releases and processes it with javascript :-/
-#UPSTREAM_CHECK_URI = "https://s3.amazonaws.com/json-c_releases/releases/index.html"
-RECIPE_UPSTREAM_VERSION = "0.12"
-RECIPE_UPSTREAM_DATE = "Apr 11, 2014"
-CHECK_DATE = "Dec 04, 2015"
-
-RPROVIDES_${PN} = "libjson"
-
-inherit autotools
-
-do_configure_prepend() {
-    # Clean up autoconf cruft that should not be in the tarball
-    rm -f ${S}/config.status
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/kconfig-frontends/kconfig-frontends_3.12.0.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/kconfig-frontends/kconfig-frontends_3.12.0.0.bb
deleted file mode 100644
index 4ca0e4d..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/kconfig-frontends/kconfig-frontends_3.12.0.0.bb
+++ /dev/null
@@ -1,39 +0,0 @@
-# Copyright (C) 2012 Khem Raj <raj.khem@gmail.com>
-# Released under the MIT license (see COPYING.MIT for the terms)
-
-SUMMARY = "Linux kernel style configuration framework for other projects"
-DESCRIPTION = "The kconfig-frontends project aims at centralising \
-the effort of keeping an up-to-date, out-of-tree, packaging of the \
-kconfig infrastructure, ready for use by third-party projects. \
-The kconfig-frontends package provides the kconfig parser, as well as all \
-the frontends"
-HOMEPAGE = "http://ymorin.is-a-geek.org/projects/kconfig-frontends"
-LICENSE = "GPL-2.0"
-LIC_FILES_CHKSUM = "file://COPYING;md5=9b8cf60ff39767ff04b671fca8302408"
-SECTION = "devel"
-DEPENDS += "ncurses flex bison gperf-native"
-RDEPENDS_${PN} += "python bash"
-SRC_URI = "git://ymorin.is-a-geek.org/kconfig-frontends"
-
-SRCREV = "75d35b172fc0f7b6620dd659af41f2ce04edc4e6"
-
-S = "${WORKDIR}/git"
-
-inherit autotools pkgconfig
-do_configure_prepend () {
-	mkdir -p ${S}/scripts/.autostuff/m4
-}
-
-do_install_append() {
-	ln -s kconfig-conf ${D}${bindir}/conf
-	ln -s kconfig-mconf ${D}${bindir}/mconf
-}
-
-EXTRA_OECONF += "--disable-gconf --disable-qconf"
-
-# Some packages have the version preceeding the .so instead properly
-# versioned .so.<version>, so we need to reorder and repackage.
-SOLIBS = "-${@d.getVar('PV')[:-2]}.so"
-FILES_SOLIBSDEV = "${libdir}/libkconfig-parser.so"
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/libcomps/libcomps/0002-Set-library-installation-path-correctly.patch b/import-layers/yocto-poky/meta/recipes-devtools/libcomps/libcomps/0002-Set-library-installation-path-correctly.patch
index ec1fdc4..dc3d976 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/libcomps/libcomps/0002-Set-library-installation-path-correctly.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/libcomps/libcomps/0002-Set-library-installation-path-correctly.patch
@@ -3,7 +3,7 @@
 Date: Fri, 30 Dec 2016 18:26:00 +0200
 Subject: [PATCH 2/2] Set library installation path correctly
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://github.com/rpm-software-management/libcomps/pull/32]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 ---
  libcomps/src/CMakeLists.txt | 2 +-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0001-FindGtkDoc.cmake-drop-the-requirement-for-GTKDOC_SCA.patch b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0001-FindGtkDoc.cmake-drop-the-requirement-for-GTKDOC_SCA.patch
index 73acda6..791a32e 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0001-FindGtkDoc.cmake-drop-the-requirement-for-GTKDOC_SCA.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0001-FindGtkDoc.cmake-drop-the-requirement-for-GTKDOC_SCA.patch
@@ -7,7 +7,7 @@
 For some reason cmake is not able to find it when building in openembedded,
 and it's bundled with the source code anyway.
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://github.com/rpm-software-management/libdnf/pull/312]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 ---
  cmake/modules/FindGtkDoc.cmake | 2 +-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0001-Get-parameters-for-both-libsolv-and-libsolvext-libdn.patch b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0001-Get-parameters-for-both-libsolv-and-libsolvext-libdn.patch
index 954add6..280edb7 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0001-Get-parameters-for-both-libsolv-and-libsolvext-libdn.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0001-Get-parameters-for-both-libsolv-and-libsolvext-libdn.patch
@@ -1,28 +1,29 @@
-From 5958b151a4dbb89114e90c971a34b74f873b7beb Mon Sep 17 00:00:00 2001
+From 3012a93745223751cc979e3770207a09a075bec6 Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
 Date: Tue, 7 Feb 2017 12:16:03 +0200
-Subject: [PATCH] Get parameters for both libsolv and libsolvext (libdnf is
+Subject: [PATCH 5/5] Get parameters for both libsolv and libsolvext (libdnf is
  using both)
 
-Upstream-Status: Pending [depends on whether https://github.com/openSUSE/libsolv/pull/177 is accepted]
+Upstream-Status: Submitted [https://github.com/rpm-software-management/libdnf/pull/312]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+
 ---
  CMakeLists.txt | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/CMakeLists.txt b/CMakeLists.txt
-index b531da1..e512da0 100644
+index 8b2ab9a..e2d33d7 100644
 --- a/CMakeLists.txt
 +++ b/CMakeLists.txt
-@@ -28,7 +28,7 @@ find_package (PkgConfig REQUIRED)
+@@ -29,7 +29,7 @@ find_package (PkgConfig REQUIRED)
  SET (CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/cmake/modules)
  PKG_CHECK_MODULES(GLIB gio-unix-2.0>=2.44.0 REQUIRED)
  FIND_LIBRARY (RPMDB_LIBRARY NAMES rpmdb)
 -PKG_CHECK_MODULES (LIBSOLV REQUIRED libsolv)
 +PKG_CHECK_MODULES (LIBSOLV REQUIRED libsolv libsolvext)
  set(LIBSOLV_LIBRARY ${LIBSOLV_LIBRARIES})
- pkg_check_modules (CHECK REQUIRED check)
- pkg_check_modules (REPO REQUIRED librepo)
+ if (ENABLE_RHSM_SUPPORT)
+     pkg_check_modules (RHSM REQUIRED librhsm)
 -- 
 2.11.0
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0002-Prefix-sysroot-path-to-introspection-tools-path.patch b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0002-Prefix-sysroot-path-to-introspection-tools-path.patch
index 3d772a5..7eecc3d 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0002-Prefix-sysroot-path-to-introspection-tools-path.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0002-Prefix-sysroot-path-to-introspection-tools-path.patch
@@ -3,7 +3,7 @@
 Date: Fri, 30 Dec 2016 18:17:19 +0200
 Subject: [PATCH 2/4] Prefix sysroot path to introspection tools path.
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://github.com/rpm-software-management/libdnf/pull/312]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 ---
  libdnf/CMakeLists.txt | 4 ++--
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0003-Set-the-library-installation-directory-correctly.patch b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0003-Set-the-library-installation-directory-correctly.patch
index d7e59d8..8126409 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0003-Set-the-library-installation-directory-correctly.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0003-Set-the-library-installation-directory-correctly.patch
@@ -3,7 +3,7 @@
 Date: Fri, 30 Dec 2016 18:20:01 +0200
 Subject: [PATCH 3/4] Set the library installation directory correctly.
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://github.com/rpm-software-management/libdnf/pull/312]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 ---
  CMakeLists.txt | 4 +++-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0004-Set-libsolv-variables-with-pkg-config-cmake-s-own-mo.patch b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0004-Set-libsolv-variables-with-pkg-config-cmake-s-own-mo.patch
index 931959b..1ea9310 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0004-Set-libsolv-variables-with-pkg-config-cmake-s-own-mo.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf/0004-Set-libsolv-variables-with-pkg-config-cmake-s-own-mo.patch
@@ -1,29 +1,30 @@
-From 6d2718b925453f9a6915001f489606eb8e4086c8 Mon Sep 17 00:00:00 2001
+From 55cbe6f40fe0836385e1a7241ec811cbe99e5840 Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
 Date: Fri, 30 Dec 2016 18:24:50 +0200
-Subject: [PATCH 4/4] Set libsolv variables with pkg-config (cmake's own module
+Subject: [PATCH 4/5] Set libsolv variables with pkg-config (cmake's own module
  doesn't work properly).
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://github.com/rpm-software-management/libdnf/pull/312]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+
 ---
  CMakeLists.txt | 3 ++-
  1 file changed, 2 insertions(+), 1 deletion(-)
 
 diff --git a/CMakeLists.txt b/CMakeLists.txt
-index 8edb627..b531da1 100644
+index a75df04..8b2ab9a 100644
 --- a/CMakeLists.txt
 +++ b/CMakeLists.txt
-@@ -28,7 +28,8 @@ find_package (PkgConfig REQUIRED)
+@@ -29,7 +29,8 @@ find_package (PkgConfig REQUIRED)
  SET (CMAKE_MODULE_PATH ${CMAKE_SOURCE_DIR}/cmake/modules)
  PKG_CHECK_MODULES(GLIB gio-unix-2.0>=2.44.0 REQUIRED)
  FIND_LIBRARY (RPMDB_LIBRARY NAMES rpmdb)
 -find_package (LibSolv 0.6.21 REQUIRED COMPONENTS ext)
 +PKG_CHECK_MODULES (LIBSOLV REQUIRED libsolv)
 +set(LIBSOLV_LIBRARY ${LIBSOLV_LIBRARIES})
- pkg_check_modules (CHECK REQUIRED check)
- pkg_check_modules (REPO REQUIRED librepo)
- FIND_PROGRAM (VALGRIND_PROGRAM NAMES valgrind PATH /usr/bin /usr/local/bin)
+ if (ENABLE_RHSM_SUPPORT)
+     pkg_check_modules (RHSM REQUIRED librhsm)
+     include_directories (${RHSM_INCLUDE_DIRS})
 -- 
 2.11.0
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf_0.9.3.bb b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf_0.9.3.bb
new file mode 100644
index 0000000..01d9346
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf_0.9.3.bb
@@ -0,0 +1,28 @@
+SUMMARY = "Library providing simplified C and Python API to libsolv"
+LICENSE = "LGPLv2.1"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+
+SRC_URI = "git://github.com/rpm-software-management/libdnf \
+           file://0001-FindGtkDoc.cmake-drop-the-requirement-for-GTKDOC_SCA.patch \
+           file://0002-Prefix-sysroot-path-to-introspection-tools-path.patch \
+           file://0003-Set-the-library-installation-directory-correctly.patch \
+           file://0004-Set-libsolv-variables-with-pkg-config-cmake-s-own-mo.patch \
+           file://0001-Get-parameters-for-both-libsolv-and-libsolvext-libdn.patch \
+           "
+
+SRCREV = "1b19950e82d88eec28d01b4e7c1da712c941201d"
+
+S = "${WORKDIR}/git"
+
+DEPENDS = "glib-2.0 libsolv libcheck librepo rpm gtk-doc"
+
+inherit gtk-doc gobject-introspection cmake pkgconfig distutils3-base
+
+EXTRA_OECMAKE = " -DPYTHON_INSTALL_DIR=${PYTHON_SITEPACKAGES_DIR} -DWITH_MAN=OFF -DPYTHON_DESIRED=3 \
+                  ${@bb.utils.contains('GI_DATA_ENABLED', 'True', '-DWITH_GIR=ON', '-DWITH_GIR=OFF', d)} \
+                "
+EXTRA_OECMAKE_append_class-native = " -DWITH_GIR=OFF"
+EXTRA_OECMAKE_append_class-nativesdk = " -DWITH_GIR=OFF"
+
+BBCLASSEXTEND = "native nativesdk"
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf_git.bb
deleted file mode 100644
index ef28611..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/libdnf/libdnf_git.bb
+++ /dev/null
@@ -1,29 +0,0 @@
-SUMMARY = "Library providing simplified C and Python API to libsolv"
-LICENSE = "LGPLv2.1"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
-
-SRC_URI = "git://github.com/rpm-software-management/libdnf \
-           file://0001-FindGtkDoc.cmake-drop-the-requirement-for-GTKDOC_SCA.patch \
-           file://0002-Prefix-sysroot-path-to-introspection-tools-path.patch \
-           file://0003-Set-the-library-installation-directory-correctly.patch \
-           file://0004-Set-libsolv-variables-with-pkg-config-cmake-s-own-mo.patch \
-           file://0001-Get-parameters-for-both-libsolv-and-libsolvext-libdn.patch \
-           "
-
-PV = "0.2.3+git${SRCPV}"
-SRCREV = "367545629cc01a8e622890d89bd13d062ce60d7b"
-
-S = "${WORKDIR}/git"
-
-DEPENDS = "glib-2.0 libsolv libcheck librepo rpm gtk-doc"
-
-inherit gtk-doc gobject-introspection cmake pkgconfig distutils3-base
-
-EXTRA_OECMAKE = " -DPYTHON_INSTALL_DIR=${PYTHON_SITEPACKAGES_DIR} -DWITH_MAN=OFF -DPYTHON_DESIRED=3 \
-                  ${@bb.utils.contains('GI_DATA_ENABLED', 'True', '-DWITH_GIR=ON', '-DWITH_GIR=OFF', d)} \
-                "
-EXTRA_OECMAKE_append_class-native = " -DWITH_GIR=OFF"
-EXTRA_OECMAKE_append_class-nativesdk = " -DWITH_GIR=OFF"
-
-BBCLASSEXTEND = "native nativesdk"
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0001-Correctly-set-the-library-installation-directory.patch b/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0001-Correctly-set-the-library-installation-directory.patch
index 01fea40..08a58f1 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0001-Correctly-set-the-library-installation-directory.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0001-Correctly-set-the-library-installation-directory.patch
@@ -3,7 +3,7 @@
 Date: Fri, 30 Dec 2016 18:04:35 +0200
 Subject: [PATCH 1/4] Correctly set the library installation directory
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://github.com/rpm-software-management/librepo/pull/110]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 ---
  librepo/CMakeLists.txt | 3 ++-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0003-tests-fix-a-race-when-deleting-temporary-directories.patch b/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0003-tests-fix-a-race-when-deleting-temporary-directories.patch
index 0d2fae4..89ca60e 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0003-tests-fix-a-race-when-deleting-temporary-directories.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0003-tests-fix-a-race-when-deleting-temporary-directories.patch
@@ -3,7 +3,7 @@
 Date: Fri, 30 Dec 2016 18:06:24 +0200
 Subject: [PATCH 3/4] tests: fix a race when deleting temporary directories
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [https://github.com/rpm-software-management/librepo/pull/110]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 ---
  tests/python/tests/test_yum_repo_downloading.py | 2 +-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0004-Set-gpgme-variables-with-pkg-config-not-with-cmake-m.patch b/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0004-Set-gpgme-variables-with-pkg-config-not-with-cmake-m.patch
index 6665b31..f7d7ab3 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0004-Set-gpgme-variables-with-pkg-config-not-with-cmake-m.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0004-Set-gpgme-variables-with-pkg-config-not-with-cmake-m.patch
@@ -4,7 +4,7 @@
 Subject: [PATCH 4/4] Set gpgme variables with pkg-config, not with cmake
  module (which doesn't work properly)
 
-Upstream-Status: Pending
+Upstream-Status: Inappropriate [gpgme upstream does not have pkg-config support and is not interested in it]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 ---
  CMakeLists.txt | 3 ++-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0005-Fix-typo-correct-LRO_SSLVERIFYHOST-with-CURLOPT_SSL_.patch b/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0005-Fix-typo-correct-LRO_SSLVERIFYHOST-with-CURLOPT_SSL_.patch
new file mode 100644
index 0000000..b0c7d1c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo/0005-Fix-typo-correct-LRO_SSLVERIFYHOST-with-CURLOPT_SSL_.patch
@@ -0,0 +1,40 @@
+From a4bbbccce6edc1a2d1bd475506e2975fd7696c88 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Thu, 8 Jun 2017 16:31:30 +0800
+Subject: [PATCH] Fix typo: correct LRO_SSLVERIFYHOST with
+ CURLOPT_SSL_VERIFYHOST
+
+In commit 51d32c6cd88ba0139c32793183fd6a236c1ef456
+---
+Author: Tomas Mlcoch <tmlcoch@redhat.com>
+Date:   Mon May 5 14:31:35 2014 +0200
+
+    Add LRO_SSLVERIFYPEER and LRO_SSLVERIFYHOST options (RhBug: 1093014)
+---
+
+It incorrectly set CURLOPT_SSL_VERIFYPEER for LRO_SSLVERIFYHOST.
+Use CURLOPT_SSL_VERIFYHOST instead to correct this.
+
+Upstream-Status: Submitted
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ librepo/handle.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/librepo/handle.c b/librepo/handle.c
+index ccea79b..ff39db4 100644
+--- a/librepo/handle.c
++++ b/librepo/handle.c
+@@ -629,7 +629,7 @@ lr_handle_setopt(LrHandle *handle,
+ 
+     case LRO_SSLVERIFYHOST:
+         handle->sslverifyhost = va_arg(arg, long) ? 2 : 0;
+-        c_rc = curl_easy_setopt(c_h, CURLOPT_SSL_VERIFYPEER, handle->sslverifyhost);
++        c_rc = curl_easy_setopt(c_h, CURLOPT_SSL_VERIFYHOST, handle->sslverifyhost);
+         break;
+ 
+     case LRO_SSLCLIENTCERT:
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo_git.bb
index 2f194f1..3238b14 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/librepo/librepo_git.bb
@@ -7,6 +7,7 @@
            file://0002-Do-not-try-to-obtain-PYTHON_INSTALL_DIR-by-running-p.patch \
            file://0003-tests-fix-a-race-when-deleting-temporary-directories.patch \
            file://0004-Set-gpgme-variables-with-pkg-config-not-with-cmake-m.patch \
+           file://0005-Fix-typo-correct-LRO_SSLVERIFYHOST-with-CURLOPT_SSL_.patch \
            "
 
 PV = "1.7.20+git${SRCPV}"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/libtool/libtool/nohardcodepaths.patch b/import-layers/yocto-poky/meta/recipes-devtools/libtool/libtool/nohardcodepaths.patch
index b2239fb..fcbce72 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/libtool/libtool/nohardcodepaths.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/libtool/libtool/nohardcodepaths.patch
@@ -4,6 +4,8 @@
 
 RP 2015/2/3
 
+Upstream-Status: Inappropriate
+
 Index: libtool-2.4.5/libtoolize.in
 ===================================================================
 --- libtool-2.4.5.orig/libtoolize.in
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/libtool/libtool_2.4.6.bb b/import-layers/yocto-poky/meta/recipes-devtools/libtool/libtool_2.4.6.bb
index 06abb05..b02620b 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/libtool/libtool_2.4.6.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/libtool/libtool_2.4.6.bb
@@ -15,6 +15,7 @@
 
 do_install_append () {
         sed -e 's@--sysroot=${STAGING_DIR_HOST}@@g' \
+            -e "s@${DEBUG_PREFIX_MAP}@@g" \
             -e 's@${STAGING_DIR_HOST}@@g' \
             -e 's@${STAGING_DIR_NATIVE}@@g' \
             -e 's@^\(sys_lib_search_path_spec="\).*@\1${libdir} ${base_libdir}"@' \
@@ -22,5 +23,6 @@
             -e 's@^\(compiler_lib_search_path="\).*@\1${libdir} ${base_libdir}"@' \
             -e 's@^\(predep_objects="\).*@\1"@' \
             -e 's@^\(postdep_objects="\).*@\1"@' \
+            -e "s@${HOSTTOOLS_DIR}/@@g" \
             -i ${D}${bindir}/libtool
 }
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/llvm/llvm/0001-llvm-TargetLibraryInfo-Undefine-libc-functions-if-th.patch b/import-layers/yocto-poky/meta/recipes-devtools/llvm/llvm/0001-llvm-TargetLibraryInfo-Undefine-libc-functions-if-th.patch
new file mode 100644
index 0000000..e251799
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/llvm/llvm/0001-llvm-TargetLibraryInfo-Undefine-libc-functions-if-th.patch
@@ -0,0 +1,93 @@
+From 28293e48cf1a52004c6a78de448718441f9e05f9 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 21 May 2016 00:33:20 +0000
+Subject: [PATCH 1/2] llvm: TargetLibraryInfo: Undefine libc functions if they
+ are macros
+
+musl defines some of these functions as macros rather than inline functions;
+if that is the case, make sure to undefine them.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ include/llvm/Analysis/TargetLibraryInfo.def | 21 +++++++++++++++++++++
+ 1 file changed, 21 insertions(+)
+
+diff --git a/include/llvm/Analysis/TargetLibraryInfo.def b/include/llvm/Analysis/TargetLibraryInfo.def
+index 9cbe917c146..aff8419cf54 100644
+--- a/include/llvm/Analysis/TargetLibraryInfo.def
++++ b/include/llvm/Analysis/TargetLibraryInfo.def
+@@ -656,6 +656,9 @@ TLI_DEFINE_STRING_INTERNAL("fmodl")
+ TLI_DEFINE_ENUM_INTERNAL(fopen)
+ TLI_DEFINE_STRING_INTERNAL("fopen")
+ /// FILE *fopen64(const char *filename, const char *opentype)
++#ifdef fopen64
++#undef fopen64
++#endif
+ TLI_DEFINE_ENUM_INTERNAL(fopen64)
+ TLI_DEFINE_STRING_INTERNAL("fopen64")
+ /// int fprintf(FILE *stream, const char *format, ...);
+@@ -691,6 +694,9 @@ TLI_DEFINE_STRING_INTERNAL("fseek")
+ /// int fseeko(FILE *stream, off_t offset, int whence);
+ TLI_DEFINE_ENUM_INTERNAL(fseeko)
+ TLI_DEFINE_STRING_INTERNAL("fseeko")
++#ifdef fseeko64
++#undef fseeko64
++#endif
+ /// int fseeko64(FILE *stream, off64_t offset, int whence)
+ TLI_DEFINE_ENUM_INTERNAL(fseeko64)
+ TLI_DEFINE_STRING_INTERNAL("fseeko64")
+@@ -701,6 +707,9 @@ TLI_DEFINE_STRING_INTERNAL("fsetpos")
+ TLI_DEFINE_ENUM_INTERNAL(fstat)
+ TLI_DEFINE_STRING_INTERNAL("fstat")
+ /// int fstat64(int filedes, struct stat64 *buf)
++#ifdef fstat64
++#undef fstat64
++#endif
+ TLI_DEFINE_ENUM_INTERNAL(fstat64)
+ TLI_DEFINE_STRING_INTERNAL("fstat64")
+ /// int fstatvfs(int fildes, struct statvfs *buf);
+@@ -716,6 +725,9 @@ TLI_DEFINE_STRING_INTERNAL("ftell")
+ TLI_DEFINE_ENUM_INTERNAL(ftello)
+ TLI_DEFINE_STRING_INTERNAL("ftello")
+ /// off64_t ftello64(FILE *stream)
++#ifdef ftello64
++#undef ftello64
++#endif
+ TLI_DEFINE_ENUM_INTERNAL(ftello64)
+ TLI_DEFINE_STRING_INTERNAL("ftello64")
+ /// int ftrylockfile(FILE *file);
+@@ -836,6 +848,9 @@ TLI_DEFINE_STRING_INTERNAL("logl")
+ TLI_DEFINE_ENUM_INTERNAL(lstat)
+ TLI_DEFINE_STRING_INTERNAL("lstat")
+ /// int lstat64(const char *path, struct stat64 *buf);
++#ifdef lstat64
++#undef lstat64
++#endif
+ TLI_DEFINE_ENUM_INTERNAL(lstat64)
+ TLI_DEFINE_STRING_INTERNAL("lstat64")
+ /// void *malloc(size_t size);
+@@ -1055,6 +1070,9 @@ TLI_DEFINE_STRING_INTERNAL("sscanf")
+ TLI_DEFINE_ENUM_INTERNAL(stat)
+ TLI_DEFINE_STRING_INTERNAL("stat")
+ /// int stat64(const char *path, struct stat64 *buf);
++#ifdef stat64
++#undef stat64
++#endif
+ TLI_DEFINE_ENUM_INTERNAL(stat64)
+ TLI_DEFINE_STRING_INTERNAL("stat64")
+ /// int statvfs(const char *path, struct statvfs *buf);
+@@ -1184,6 +1202,9 @@ TLI_DEFINE_STRING_INTERNAL("times")
+ TLI_DEFINE_ENUM_INTERNAL(tmpfile)
+ TLI_DEFINE_STRING_INTERNAL("tmpfile")
+ /// FILE *tmpfile64(void)
++#ifdef tmpfile64
++#undef tmpfile64
++#endif
+ TLI_DEFINE_ENUM_INTERNAL(tmpfile64)
+ TLI_DEFINE_STRING_INTERNAL("tmpfile64")
+ /// int toascii(int c);
+-- 
+2.13.1
+
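For reference, the macro aliases this patch guards against come straight from musl's headers, where the LFS64 names are plain #define aliases guarded by the _LARGEFILE64_SOURCE/_GNU_SOURCE feature macros. A quick, illustrative check against a musl target sysroot (the sysroot path below is an assumption, and the exact alias set depends on the musl version):

    # Look for the LFS64 macro aliases in a musl sysroot (path illustrative);
    # expect matches such as "#define fopen64 fopen" and "#define stat64 stat".
    grep 'fopen64\|fseeko64\|tmpfile64' recipe-sysroot/usr/include/stdio.h
    grep 'stat64' recipe-sysroot/usr/include/sys/stat.h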
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/llvm/llvm/0002-llvm-allow-env-override-of-exe-path.patch b/import-layers/yocto-poky/meta/recipes-devtools/llvm/llvm/0002-llvm-allow-env-override-of-exe-path.patch
new file mode 100644
index 0000000..832bd72
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/llvm/llvm/0002-llvm-allow-env-override-of-exe-path.patch
@@ -0,0 +1,39 @@
+From d776487bac17650704614248d19d1e6b35775001 Mon Sep 17 00:00:00 2001
+From: Martin Kelly <mkelly@xevo.com>
+Date: Fri, 19 May 2017 00:22:57 -0700
+Subject: [PATCH 2/2] llvm: allow env override of exe path
+
+When using a native llvm-config from inside a sysroot, we need llvm-config to
+return the libraries, include directories, etc. from inside the sysroot rather
+than from the native sysroot. Thus provide an env override for calling
+llvm-config from a target sysroot.
+
+Signed-off-by: Martin Kelly <mkelly@xevo.com>
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ tools/llvm-config/llvm-config.cpp | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/tools/llvm-config/llvm-config.cpp b/tools/llvm-config/llvm-config.cpp
+index 08b096afb05..d8d7742744e 100644
+--- a/tools/llvm-config/llvm-config.cpp
++++ b/tools/llvm-config/llvm-config.cpp
+@@ -225,6 +225,13 @@ Typical components:\n\
+ 
+ /// \brief Compute the path to the main executable.
+ std::string GetExecutablePath(const char *Argv0) {
++  // Hack for Yocto: we need to override the root path when we are using
++  // llvm-config from within a target sysroot.
++  const char *Sysroot = std::getenv("YOCTO_ALTERNATE_EXE_PATH");
++  if (Sysroot != nullptr) {
++    return Sysroot;
++  }
++
+   // This just needs to be some symbol in the binary; C++ doesn't
+   // allow taking the address of ::main however.
+   void *P = (void *)(intptr_t)GetExecutablePath;
+-- 
+2.13.1
+
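A consumer recipe that links against the target LLVM would typically export this variable so the native llvm-config reports paths from the target sysroot. A minimal sketch, assuming the install layout of the llvm_git.bb recipe added next (the recipe fragment and the literal llvm5.0 directory are illustrative, not taken from this commit):

    # Hypothetical fragment of a recipe that uses target LLVM:
    DEPENDS += "llvm llvm-native"
    # Point the native llvm-config at its copy inside the target sysroot so it
    # prints target include/library paths instead of native ones.
    export YOCTO_ALTERNATE_EXE_PATH = "${STAGING_LIBDIR}/llvm5.0/llvm-config"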
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/llvm/llvm_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/llvm/llvm_git.bb
new file mode 100644
index 0000000..f06fa49
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/llvm/llvm_git.bb
@@ -0,0 +1,185 @@
+# Copyright (C) 2017 Khem Raj <raj.khem@gmail.com>
+# Released under the MIT license (see COPYING.MIT for the terms)
+
+DESCRIPTION = "The LLVM Compiler Infrastructure"
+HOMEPAGE = "http://llvm.org"
+LICENSE = "NCSA"
+SECTION = "devel"
+
+LIC_FILES_CHKSUM = "file://LICENSE.TXT;md5=e825e017edc35cfd58e26116e5251771"
+
+DEPENDS = "libffi libxml2-native zlib ninja-native llvm-native"
+
+RDEPENDS_${PN}_append_class-target = " ncurses-terminfo"
+
+inherit perlnative pythonnative cmake pkgconfig
+
+PROVIDES += "llvm${PV}"
+
+LLVM_RELEASE = "${PV}"
+LLVM_DIR = "llvm${LLVM_RELEASE}"
+
+SRCREV = "9a5c333388cbb54a0ce3a67c4f539f5e590a089b"
+PV = "5.0"
+PATCH_VERSION = "0"
+SRC_URI = "git://github.com/llvm-mirror/llvm.git;branch=release_50;protocol=http \
+           file://0001-llvm-TargetLibraryInfo-Undefine-libc-functions-if-th.patch \
+           file://0002-llvm-allow-env-override-of-exe-path.patch \
+          "
+UPSTREAM_VERSION_UNKNOWN = "1"
+S = "${WORKDIR}/git"
+
+LLVM_INSTALL_DIR = "${WORKDIR}/llvm-install"
+def get_llvm_arch(bb, d, arch_var):
+    import re
+    a = d.getVar(arch_var)
+    if   re.match('(i.86|athlon|x86.64)$', a):         return 'X86'
+    elif re.match('arm$', a):                          return 'ARM'
+    elif re.match('armeb$', a):                        return 'ARM'
+    elif re.match('aarch64$', a):                      return 'AArch64'
+    elif re.match('aarch64_be$', a):                   return 'AArch64'
+    elif re.match('mips(isa|)(32|64|)(r6|)(el|)$', a): return 'Mips'
+    elif re.match('p(pc|owerpc)(|64)', a):             return 'PowerPC'
+    else:
+        raise bb.parse.SkipRecipe("Cannot map '%s' to a supported LLVM architecture" % a)
+
+def get_llvm_target_arch(bb, d):
+    return get_llvm_arch(bb, d, 'TARGET_ARCH')
+#
+# Default to build all OE-Core supported target arches (user overridable).
+#
+LLVM_TARGETS ?= "${@get_llvm_target_arch(bb, d)}"
+LLVM_TARGETS_prepend_x86 = "AMDGPU;"
+LLVM_TARGETS_prepend_x86-64 = "AMDGPU;"
+
+ARM_INSTRUCTION_SET_armv5 = "arm"
+ARM_INSTRUCTION_SET_armv4t = "arm"
+
+EXTRA_OECMAKE += "-DLLVM_ENABLE_ASSERTIONS=OFF \
+                  -DLLVM_ENABLE_EXPENSIVE_CHECKS=OFF \
+                  -DLLVM_ENABLE_PIC=ON \
+                  -DLLVM_BINDINGS_LIST='' \
+                  -DLLVM_LINK_LLVM_DYLIB=ON \
+                  -DLLVM_ENABLE_FFI=ON \
+                  -DFFI_INCLUDE_DIR=$(pkg-config --variable=includedir libffi) \
+                  -DLLVM_OPTIMIZED_TABLEGEN=ON \
+                  -DLLVM_TARGETS_TO_BUILD="${LLVM_TARGETS}" \
+                  -G Ninja"
+
+EXTRA_OECMAKE_append_class-target = "\
+                  -DCMAKE_CROSSCOMPILING:BOOL=ON \
+                  -DLLVM_TABLEGEN=${STAGING_BINDIR_NATIVE}/llvm-tblgen${PV} \
+                 "
+
+EXTRA_OECMAKE_append_class-nativesdk = "\
+                  -DCMAKE_CROSSCOMPILING:BOOL=ON \
+                  -DLLVM_TABLEGEN=${STAGING_BINDIR_NATIVE}/llvm-tblgen${PV} \
+                 "
+
+do_configure_prepend() {
+# Fix paths in llvm-config
+	sed -i "s|sys::path::parent_path(CurrentPath))\.str()|sys::path::parent_path(sys::path::parent_path(CurrentPath))).str()|g" ${S}/tools/llvm-config/llvm-config.cpp
+	sed -ri "s#/(bin|include|lib)(/?\")#/\1/${LLVM_DIR}\2#g" ${S}/tools/llvm-config/llvm-config.cpp
+	sed -ri "s#lib/${LLVM_DIR}#${baselib}/${LLVM_DIR}#g" ${S}/tools/llvm-config/llvm-config.cpp
+}
+
+do_compile() {
+	NINJA_STATUS="[%p] " ninja -v ${PARALLEL_MAKE}
+}
+
+do_compile_class-native() {
+	NINJA_STATUS="[%p] " ninja -v ${PARALLEL_MAKE} llvm-config llvm-tblgen
+}
+
+do_install() {
+	NINJA_STATUS="[%p] " DESTDIR=${LLVM_INSTALL_DIR} ninja -v install
+	install -D -m 0755 ${B}/bin/llvm-config ${D}${libdir}/${LLVM_DIR}/llvm-config
+
+	install -d ${D}${bindir}/${LLVM_DIR}
+	cp -r ${LLVM_INSTALL_DIR}${bindir}/* ${D}${bindir}/${LLVM_DIR}/
+
+	install -d ${D}${includedir}/${LLVM_DIR}
+	cp -r ${LLVM_INSTALL_DIR}${includedir}/* ${D}${includedir}/${LLVM_DIR}/
+
+	install -d ${D}${libdir}/${LLVM_DIR}
+
+	# The LLVM sources have "/lib" embedded and so we cannot completely rely on the ${libdir} variable
+	if [ -d ${LLVM_INSTALL_DIR}${libdir}/ ]; then
+		cp -r ${LLVM_INSTALL_DIR}${libdir}/* ${D}${libdir}/${LLVM_DIR}/
+	elif [ -d ${LLVM_INSTALL_DIR}${prefix}/lib ]; then
+		cp -r ${LLVM_INSTALL_DIR}${prefix}/lib/* ${D}${libdir}/${LLVM_DIR}/
+	elif [ -d ${LLVM_INSTALL_DIR}${prefix}/lib64 ]; then
+		cp -r ${LLVM_INSTALL_DIR}${prefix}/lib64/* ${D}${libdir}/${LLVM_DIR}/
+	fi
+
+	# Remove unnecessary cmake files
+	rm -rf ${D}${libdir}/${LLVM_DIR}/cmake
+
+	ln -s ${LLVM_DIR}/libLLVM-${PV}${SOLIBSDEV} ${D}${libdir}/libLLVM-${PV}${SOLIBSDEV}
+
+	# We have to delete libLLVM.so for multiple reasons...
+	rm -rf ${D}${libdir}/${LLVM_DIR}/libLLVM.so
+	rm -rf ${D}${libdir}/${LLVM_DIR}/libLTO.so
+}
+do_install_class-native() {
+	install -D -m 0755 ${B}/bin/llvm-tblgen ${D}${bindir}/llvm-tblgen${PV}
+	install -D -m 0755 ${B}/bin/llvm-config ${D}${bindir}/llvm-config${PV}
+	install -D -m 0755 ${B}/lib/libLLVM-${PV}.so ${D}${libdir}/libLLVM-${PV}.so
+}
+
+PACKAGES += "${PN}-bugpointpasses ${PN}-llvmhello"
+ALLOW_EMPTY_${PN} = "1"
+ALLOW_EMPTY_${PN}-staticdev = "1"
+FILES_${PN} = ""
+FILES_${PN}-staticdev = ""
+FILES_${PN}-dbg = " \
+    ${bindir}/${LLVM_DIR}/.debug \
+    ${libdir}/${LLVM_DIR}/.debug/BugpointPasses.so \
+    ${libdir}/${LLVM_DIR}/.debug/LLVMHello.so \
+    ${libdir}/${LLVM_DIR}/.debug/libLTO.so* \
+    ${libdir}/${LLVM_DIR}/.debug/llvm-config \
+    /usr/src/debug \
+"
+
+FILES_${PN}-dev = " \
+    ${bindir}/${LLVM_DIR} \
+    ${includedir}/${LLVM_DIR} \
+    ${libdir}/${LLVM_DIR}/llvm-config \
+"
+
+RRECOMMENDS_${PN}-dev += "${PN}-bugpointpasses ${PN}-llvmhello"
+
+FILES_${PN}-bugpointpasses = "\
+    ${libdir}/${LLVM_DIR}/BugpointPasses.so \
+"
+FILES_${PN} += "\
+    ${libdir}/${LLVM_DIR}/libLTO.so.* \
+"
+
+FILES_${PN}-llvmhello = "\
+    ${libdir}/${LLVM_DIR}/LLVMHello.so \
+"
+
+PACKAGES_DYNAMIC = "^libllvm${LLVM_RELEASE}-.*$"
+NOAUTOPACKAGEDEBUG = "1"
+
+INSANE_SKIP_${MLPREFIX}libllvm${LLVM_RELEASE}-llvm-${LLVM_RELEASE}.${PATCH_VERSION} += "dev-so"
+INSANE_SKIP_${MLPREFIX}libllvm${LLVM_RELEASE}-llvm-${LLVM_RELEASE} += "dev-so"
+INSANE_SKIP_${MLPREFIX}libllvm${LLVM_RELEASE}-llvm += "dev-so"
+
+python llvm_populate_packages() {
+    libdir = bb.data.expand('${libdir}', d)
+    libllvm_libdir = bb.data.expand('${libdir}/${LLVM_DIR}', d)
+    split_dbg_packages = do_split_packages(d, libllvm_libdir+'/.debug', '^lib(.*)\.so$', 'libllvm${LLVM_RELEASE}-%s-dbg', 'Split debug package for %s', allow_dirs=True)
+    split_packages = do_split_packages(d, libdir, '^lib(.*)\.so$', 'libllvm${LLVM_RELEASE}-%s', 'Split package for %s', allow_dirs=True, allow_links=True, recursive=True)
+    split_staticdev_packages = do_split_packages(d, libllvm_libdir, '^lib(.*)\.a$', 'libllvm${LLVM_RELEASE}-%s-staticdev', 'Split staticdev package for %s', allow_dirs=True)
+    if split_packages:
+        pn = d.getVar('PN')
+        d.appendVar('RDEPENDS_' + pn, ' '+' '.join(split_packages))
+        d.appendVar('RDEPENDS_' + pn + '-dbg', ' '+' '.join(split_dbg_packages))
+        d.appendVar('RDEPENDS_' + pn + '-staticdev', ' '+' '.join(split_staticdev_packages))
+}
+
+PACKAGESPLITFUNCS_prepend = "llvm_populate_packages "
+
+BBCLASSEXTEND = "native nativesdk"
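Since LLVM_TARGETS above is only a weak default (?=), the backend list can be trimmed or extended from configuration without touching the recipe. A small sketch for local.conf (backend choice illustrative):

    # local.conf: restrict the code generators compiled into libLLVM
    LLVM_TARGETS = "X86"
    # Note: on x86/x86-64 machines the AMDGPU prepend in the recipe still
    # applies on top of this value, giving "AMDGPU;X86".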
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/m4/m4.inc b/import-layers/yocto-poky/meta/recipes-devtools/m4/m4.inc
index 4a83929..2002594 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/m4/m4.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/m4/m4.inc
@@ -1,4 +1,5 @@
 SUMMARY = "Traditional Unix macro processor"
+HOMEPAGE = "https://www.gnu.org/software/m4/m4.html"
 DESCRIPTION = "GNU m4 is an implementation of the traditional Unix macro processor.  It is mostly SVR4 \
 compatible although it has some extensions (for example, handling more than 9 positional parameters to macros). \
 GNU M4 also has built-in functions for including files, running shell commands, doing arithmetic, etc."
@@ -6,5 +7,4 @@
 inherit autotools texinfo
 
 EXTRA_OEMAKE += "'infodir=${infodir}'"
-LDFLAGS_prepend_libc-uclibc = " -lrt "
 SRC_URI = "${GNU_MIRROR}/m4/m4-${PV}.tar.gz"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/mkelfimage/mkelfimage/convert.bin.c b/import-layers/yocto-poky/meta/recipes-devtools/mkelfimage/mkelfimage/convert.bin.c
new file mode 100644
index 0000000..cb6faa8
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/mkelfimage/mkelfimage/convert.bin.c
@@ -0,0 +1,348 @@
+0xfc, 0xfa, 0xa3, 0x9c, 0x02, 0x01, 0x00, 0x89, 0x1d, 0xa0, 0x02, 0x01, 0x00, 0x83, 0xfc, 0x00,
+0x74, 0x09, 0x8b, 0x44, 0x24, 0x04, 0xa3, 0xa4, 0x02, 0x01, 0x00, 0x8b, 0x25, 0x80, 0x14, 0x01,
+0x00, 0x6a, 0x00, 0x9d, 0x31, 0xc0, 0xbf, 0xb8, 0x15, 0x01, 0x00, 0xb9, 0x44, 0x58, 0x01, 0x00,
+0x29, 0xf9, 0xfc, 0xf3, 0xaa, 0xbe, 0xba, 0x02, 0x01, 0x00, 0xbf, 0x00, 0x10, 0x02, 0x00, 0xb9,
+0x70, 0x00, 0x00, 0x00, 0xf3, 0xa4, 0x0f, 0x01, 0x15, 0xb4, 0x02, 0x01, 0x00, 0x0f, 0x01, 0x1d,
+0xae, 0x02, 0x01, 0x00, 0xb8, 0x18, 0x00, 0x00, 0x00, 0x8e, 0xd8, 0x8e, 0xc0, 0x8e, 0xe0, 0x8e,
+0xe8, 0x8e, 0xd0, 0x68, 0x90, 0x14, 0x01, 0x00, 0xff, 0x35, 0xa4, 0x02, 0x01, 0x00, 0xff, 0x35,
+0xa0, 0x02, 0x01, 0x00, 0xff, 0x35, 0x9c, 0x02, 0x01, 0x00, 0xe8, 0xe3, 0x08, 0x00, 0x00, 0x89,
+0xc6, 0x83, 0xc4, 0x10, 0x6a, 0x00, 0x9d, 0x31, 0xdb, 0xa1, 0xac, 0x14, 0x01, 0x00, 0x83, 0xf8,
+0x01, 0x74, 0x13, 0x31, 0xc0, 0x31, 0xc9, 0x31, 0xd2, 0x31, 0xff, 0x31, 0xed, 0x6a, 0x10, 0xa1,
+0xa8, 0x14, 0x01, 0x00, 0x50, 0xcb, 0x89, 0xf0, 0xbe, 0x2a, 0x03, 0x01, 0x00, 0xbf, 0x00, 0x20,
+0x02, 0x00, 0xb9, 0x20, 0x00, 0x00, 0x00, 0xf3, 0xa4, 0x89, 0xc6, 0x0f, 0x01, 0x15, 0x2a, 0x03,
+0x01, 0x00, 0x31, 0xc0, 0x0f, 0xba, 0xe8, 0x05, 0x0f, 0x22, 0xe0, 0xbf, 0x00, 0x30, 0x02, 0x00,
+0x31, 0xc0, 0xb9, 0x00, 0x18, 0x00, 0x00, 0xf3, 0xab, 0xbf, 0x00, 0x30, 0x02, 0x00, 0x8d, 0x87,
+0x07, 0x10, 0x00, 0x00, 0x89, 0x07, 0xbf, 0x00, 0x40, 0x02, 0x00, 0x8d, 0x87, 0x07, 0x10, 0x00,
+0x00, 0xb9, 0x04, 0x00, 0x00, 0x00, 0x89, 0x07, 0x05, 0x00, 0x10, 0x00, 0x00, 0x83, 0xc7, 0x08,
+0x49, 0x75, 0xf3, 0xbf, 0x00, 0x50, 0x02, 0x00, 0xb8, 0x83, 0x01, 0x00, 0x00, 0xb9, 0x00, 0x08,
+0x00, 0x00, 0x89, 0x07, 0x05, 0x00, 0x00, 0x20, 0x00, 0x83, 0xc7, 0x08, 0x49, 0x75, 0xf3, 0xb8,
+0x00, 0x30, 0x02, 0x00, 0x0f, 0x22, 0xd8, 0xb9, 0x80, 0x00, 0x00, 0xc0, 0x0f, 0x32, 0x0f, 0xba,
+0xe8, 0x08, 0x0f, 0x30, 0x6a, 0x10, 0xa1, 0xa8, 0x14, 0x01, 0x00, 0x50, 0x31, 0xc0, 0x0f, 0xba,
+0xe8, 0x1f, 0x0f, 0xba, 0xe8, 0x00, 0x0f, 0x22, 0xc0, 0xcb, 0x55, 0x89, 0xe5, 0x53, 0x56, 0x57,
+0x8b, 0x7d, 0x08, 0x81, 0xef, 0x00, 0x00, 0x01, 0x00, 0x8b, 0x75, 0x0c, 0x56, 0xe8, 0xfc, 0x00,
+0x00, 0x00, 0x66, 0x31, 0xdb, 0x66, 0xb8, 0x20, 0xe8, 0x00, 0x00, 0x66, 0xba, 0x50, 0x41, 0x4d,
+0x53, 0x66, 0xb9, 0x14, 0x00, 0x00, 0x00, 0xcd, 0x15, 0x72, 0x18, 0x66, 0x3d, 0x50, 0x41, 0x4d,
+0x53, 0x75, 0x10, 0x66, 0x4e, 0x66, 0x85, 0xf6, 0x74, 0x09, 0x83, 0xc7, 0x14, 0x66, 0x83, 0xfb,
+0x00, 0x75, 0xd2, 0x66, 0xe8, 0x82, 0x00, 0x00, 0x00, 0x58, 0x29, 0xf0, 0x5f, 0x5e, 0x5b, 0x89,
+0xec, 0x5d, 0xc3, 0x53, 0x56, 0x57, 0xe8, 0xb3, 0x00, 0x00, 0x00, 0xf9, 0x31, 0xc9, 0x31, 0xd2,
+0xb8, 0x01, 0xe8, 0xcd, 0x15, 0x72, 0x28, 0x83, 0xf9, 0x00, 0x75, 0x09, 0x83, 0xfa, 0x00, 0x75,
+0x04, 0x89, 0xc1, 0x89, 0xda, 0x66, 0x81, 0xe2, 0xff, 0xff, 0x00, 0x00, 0x66, 0xc1, 0xe2, 0x06,
+0x66, 0x89, 0xd0, 0x66, 0x81, 0xe1, 0xff, 0xff, 0x00, 0x00, 0x66, 0x01, 0xc8, 0xeb, 0x03, 0x66,
+0x31, 0xc0, 0x66, 0xe8, 0x33, 0x00, 0x00, 0x00, 0x5f, 0x5e, 0x5b, 0xc3, 0x53, 0x56, 0x57, 0xe8,
+0x6a, 0x00, 0x00, 0x00, 0xb4, 0x88, 0xcd, 0x15, 0x66, 0x25, 0xff, 0xff, 0x00, 0x00, 0x66, 0xe8,
+0x17, 0x00, 0x00, 0x00, 0x5f, 0x5e, 0x5b, 0xc3, 0xe8, 0x51, 0x00, 0x00, 0x00, 0xcd, 0x12, 0x89,
+0xc1, 0x66, 0xe8, 0x04, 0x00, 0x00, 0x00, 0x66, 0x89, 0xc8, 0xc3, 0xfa, 0x2e, 0x67, 0x0f, 0x01,
+0x15, 0xb4, 0x02, 0x00, 0x00, 0x0f, 0x20, 0xc0, 0x66, 0x83, 0xc8, 0x01, 0x0f, 0x22, 0xc0, 0x66,
+0xea, 0x37, 0x02, 0x01, 0x00, 0x10, 0x00, 0xb8, 0x18, 0x00, 0x00, 0x00, 0x8e, 0xd8, 0x8e, 0xc0,
+0x8e, 0xd0, 0x81, 0xc4, 0x00, 0x00, 0x01, 0x00, 0x31, 0xc0, 0x8e, 0xe0, 0x8e, 0xe8, 0x58, 0x05,
+0x00, 0x00, 0x01, 0x00, 0x50, 0x2e, 0x0f, 0x01, 0x1d, 0xae, 0x02, 0x01, 0x00, 0xc3, 0x58, 0x2d,
+0x00, 0x00, 0x01, 0x00, 0x50, 0x81, 0xec, 0x00, 0x00, 0x01, 0x00, 0xea, 0x72, 0x02, 0x00, 0x00,
+0x08, 0x00, 0x0f, 0x20, 0xc0, 0x66, 0x83, 0xe0, 0xfe, 0x0f, 0x22, 0xc0, 0x66, 0xea, 0x84, 0x02,
+0x00, 0x00, 0x00, 0x10, 0x8c, 0xc8, 0x8e, 0xd8, 0x8e, 0xc0, 0x8e, 0xd0, 0x8e, 0xe0, 0x8e, 0xe8,
+0x2e, 0x67, 0x0f, 0x01, 0x1d, 0xa8, 0x02, 0x00, 0x00, 0xfb, 0x66, 0xc3, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x6f, 0x00, 0x00, 0x10, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0xff, 0xff, 0x00, 0x00, 0x01, 0x9b, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00, 0x00, 0x9a,
+0xcf, 0x00, 0xff, 0xff, 0x00, 0x00, 0x00, 0x92, 0xcf, 0x00, 0xff, 0xff, 0x00, 0x00, 0x01, 0x93,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00, 0x00, 0x9a,
+0xcf, 0x00, 0xff, 0xff, 0x00, 0x00, 0x00, 0x92, 0xcf, 0x00, 0x20, 0x00, 0x00, 0x20, 0x02, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff, 0x00, 0x00, 0x00, 0x9a,
+0xaf, 0x00, 0xff, 0xff, 0x00, 0x00, 0x00, 0x92, 0xcf, 0x00, 0x55, 0x89, 0xc1, 0x89, 0xe5, 0xba,
+0xfd, 0x03, 0x00, 0x00, 0xec, 0xa8, 0x20, 0x74, 0xfb, 0xba, 0xf8, 0x03, 0x00, 0x00, 0x88, 0xc8,
+0xee, 0xba, 0xfd, 0x03, 0x00, 0x00, 0xec, 0xa8, 0x40, 0x74, 0xfb, 0x5d, 0xc3, 0x55, 0x83, 0xf8,
+0x0a, 0x89, 0xe5, 0x53, 0x89, 0xc3, 0x75, 0x0a, 0xb8, 0x0d, 0x00, 0x00, 0x00, 0xe8, 0xc8, 0xff,
+0xff, 0xff, 0x89, 0xd8, 0x5b, 0x5d, 0xeb, 0xc2, 0x55, 0x89, 0xe5, 0x57, 0x56, 0x53, 0x83, 0xec,
+0x44, 0x8d, 0x5d, 0x0c, 0x8b, 0x7d, 0x08, 0x0f, 0xbe, 0x07, 0x84, 0xc0, 0x0f, 0x84, 0x68, 0x01,
+0x00, 0x00, 0x3c, 0x25, 0x74, 0x0d, 0xe8, 0xc2, 0xff, 0xff, 0xff, 0x89, 0x7d, 0xbc, 0xe9, 0x4e,
+0x01, 0x00, 0x00, 0x8d, 0x47, 0x01, 0x89, 0x45, 0xbc, 0x8a, 0x47, 0x01, 0x3c, 0x73, 0x75, 0x1a,
+0x89, 0xd8, 0x83, 0xc3, 0x04, 0x8b, 0x30, 0x0f, 0xbe, 0x06, 0x84, 0xc0, 0x0f, 0x84, 0x2f, 0x01,
+0x00, 0x00, 0xe8, 0x96, 0xff, 0xff, 0xff, 0x46, 0xeb, 0xed, 0x3c, 0x4c, 0x75, 0x0d, 0x8d, 0x47,
+0x02, 0xbe, 0x3c, 0x00, 0x00, 0x00, 0x89, 0x45, 0xbc, 0xeb, 0x38, 0x3c, 0x6c, 0x75, 0x0d, 0x8d,
+0x47, 0x02, 0xbe, 0x1c, 0x00, 0x00, 0x00, 0x89, 0x45, 0xbc, 0xeb, 0x27, 0xbe, 0x1c, 0x00, 0x00,
+0x00, 0x3c, 0x68, 0x75, 0x1e, 0x80, 0x7f, 0x02, 0x68, 0x74, 0x0d, 0x8d, 0x47, 0x02, 0xbe, 0x0c,
+0x00, 0x00, 0x00, 0x89, 0x45, 0xbc, 0xeb, 0x0b, 0x8d, 0x47, 0x03, 0xbe, 0x04, 0x00, 0x00, 0x00,
+0x89, 0x45, 0xbc, 0x8b, 0x45, 0xbc, 0x8a, 0x00, 0x88, 0xc2, 0x83, 0xca, 0x20, 0x80, 0xfa, 0x78,
+0x75, 0x5b, 0x8b, 0x13, 0x83, 0xfe, 0x1c, 0x7e, 0x0e, 0x8b, 0x4b, 0x04, 0x89, 0x55, 0xb0, 0x89,
+0x4d, 0xb4, 0x83, 0xc3, 0x08, 0xeb, 0x0b, 0x31, 0xc9, 0x89, 0x55, 0xb0, 0x89, 0x4d, 0xb4, 0x83,
+0xc3, 0x04, 0x83, 0xe0, 0x20, 0x8d, 0x7d, 0xc8, 0x88, 0x45, 0xbb, 0x89, 0xf1, 0x8b, 0x55, 0xb4,
+0x8b, 0x45, 0xb0, 0x0f, 0xad, 0xd0, 0xd3, 0xea, 0xf6, 0xc1, 0x20, 0x74, 0x02, 0x89, 0xd0, 0x83,
+0xe0, 0x0f, 0x8a, 0x55, 0xbb, 0x47, 0x0a, 0x90, 0x14, 0x10, 0x01, 0x00, 0x88, 0x57, 0xff, 0x83,
+0xe9, 0x04, 0x79, 0xd9, 0xc1, 0xee, 0x02, 0x8d, 0x74, 0x35, 0xc9, 0xeb, 0x62, 0x3c, 0x64, 0x75,
+0x4a, 0x8b, 0x03, 0x83, 0xfe, 0x1c, 0x7e, 0x05, 0x83, 0xc3, 0x08, 0xeb, 0x03, 0x83, 0xc3, 0x04,
+0x8d, 0x4d, 0xc8, 0x85, 0xc0, 0x79, 0x09, 0xc6, 0x45, 0xc8, 0x2d, 0x8d, 0x4d, 0xc9, 0xf7, 0xd8,
+0x89, 0xce, 0xbf, 0x0a, 0x00, 0x00, 0x00, 0x99, 0x46, 0xf7, 0xff, 0x83, 0xc2, 0x30, 0x85, 0xc0,
+0x88, 0x56, 0xff, 0x75, 0xf2, 0x89, 0xf0, 0x48, 0x39, 0xc1, 0x73, 0x23, 0x0f, 0xb6, 0x38, 0x8a,
+0x11, 0x41, 0x88, 0x10, 0x89, 0xfa, 0x88, 0x51, 0xff, 0xeb, 0xec, 0x3c, 0x63, 0x75, 0x0a, 0x8b,
+0x03, 0x83, 0xc3, 0x04, 0x88, 0x45, 0xc8, 0xeb, 0x03, 0x88, 0x45, 0xc8, 0x8d, 0x75, 0xc9, 0x8d,
+0x7d, 0xc8, 0x39, 0xf7, 0x73, 0x0b, 0x0f, 0xbe, 0x07, 0x47, 0xe8, 0x6e, 0xfe, 0xff, 0xff, 0xeb,
+0xf1, 0x8b, 0x7d, 0xbc, 0x47, 0xe9, 0x8d, 0xfe, 0xff, 0xff, 0x83, 0xc4, 0x44, 0x5b, 0x5e, 0x5f,
+0x5d, 0xc3, 0x55, 0x31, 0xc9, 0x89, 0xe5, 0x56, 0x53, 0x31, 0xdb, 0x83, 0xec, 0x10, 0x39, 0xd3,
+0x74, 0x23, 0x0f, 0xb6, 0x34, 0x18, 0xf6, 0xc3, 0x01, 0x74, 0x03, 0xc1, 0xe6, 0x08, 0x01, 0xf1,
+0x81, 0xf9, 0xff, 0xff, 0x00, 0x00, 0x76, 0x0a, 0x89, 0xce, 0xc1, 0xee, 0x10, 0x01, 0xf1, 0x0f,
+0xb7, 0xc9, 0x43, 0xeb, 0xd9, 0x88, 0x4d, 0xf6, 0xc1, 0xe9, 0x08, 0x88, 0x4d, 0xf7, 0x0f, 0xb7,
+0x45, 0xf6, 0x83, 0xc4, 0x10, 0x5b, 0x5e, 0x5d, 0xc3, 0x55, 0x89, 0xe5, 0x57, 0x56, 0x53, 0x89,
+0xc3, 0x51, 0x89, 0x55, 0xf0, 0x3b, 0x5d, 0xf0, 0x73, 0x5f, 0x81, 0x3b, 0x4c, 0x42, 0x49, 0x4f,
+0x74, 0x05, 0x83, 0xc3, 0x10, 0xeb, 0xee, 0x83, 0x7b, 0x04, 0x18, 0x75, 0xf5, 0xba, 0x18, 0x00,
+0x00, 0x00, 0x89, 0xd8, 0xe8, 0x89, 0xff, 0xff, 0xff, 0xf7, 0xd0, 0x66, 0x85, 0xc0, 0x75, 0xe2,
+0x8d, 0x73, 0x18, 0x8b, 0x53, 0x0c, 0x89, 0xf0, 0xe8, 0x75, 0xff, 0xff, 0xff, 0xf7, 0xd0, 0x0f,
+0xb7, 0xc0, 0x39, 0x43, 0x10, 0x75, 0xcb, 0x8b, 0x4b, 0x0c, 0x31, 0xc0, 0x01, 0xf1, 0x39, 0xf1,
+0x76, 0x10, 0x89, 0xcf, 0x8b, 0x56, 0x04, 0x29, 0xf7, 0x39, 0xfa, 0x77, 0x05, 0x40, 0x01, 0xd6,
+0xeb, 0xec, 0x39, 0x43, 0x14, 0x75, 0xab, 0xeb, 0x02, 0x31, 0xdb, 0x5a, 0x89, 0xd8, 0x5b, 0x5e,
+0x5f, 0x5d, 0xc3, 0x55, 0xba, 0x00, 0x10, 0x00, 0x00, 0x89, 0xe5, 0x57, 0x56, 0x53, 0x89, 0xc3,
+0x31, 0xc0, 0x51, 0xe8, 0x71, 0xff, 0xff, 0xff, 0x85, 0xc0, 0x75, 0x17, 0xba, 0x00, 0x00, 0x10,
+0x00, 0xb8, 0x00, 0x00, 0x0f, 0x00, 0xe8, 0x5e, 0xff, 0xff, 0xff, 0x85, 0xc0, 0x75, 0x04, 0x31,
+0xc0, 0xeb, 0x2f, 0x8b, 0x50, 0x04, 0x01, 0xc2, 0x83, 0x3a, 0x11, 0x75, 0x16, 0x8b, 0x72, 0x08,
+0x89, 0xf2, 0x89, 0xf0, 0x81, 0xc2, 0x00, 0x10, 0x00, 0x00, 0xe8, 0x3a, 0xff, 0xff, 0xff, 0x85,
+0xc0, 0x74, 0xdc, 0x89, 0x43, 0x28, 0xc7, 0x43, 0x24, 0x01, 0x00, 0x00, 0x00, 0xb8, 0x01, 0x00,
+0x00, 0x00, 0x5a, 0x5b, 0x5e, 0x5f, 0x5d, 0xc3, 0x55, 0x89, 0xe5, 0x57, 0x56, 0x53, 0x83, 0xec,
+0x14, 0x89, 0x55, 0xe8, 0x8b, 0x75, 0x0c, 0x8b, 0x5d, 0x08, 0x89, 0x75, 0xe4, 0x89, 0x4d, 0xec,
+0x0f, 0xb6, 0xb0, 0xe8, 0x01, 0x00, 0x00, 0x89, 0x5d, 0xe0, 0x8b, 0x7d, 0x10, 0x83, 0xfe, 0x1f,
+0x7f, 0x32, 0x89, 0xf3, 0x6b, 0xf6, 0x14, 0x01, 0xc6, 0x43, 0x89, 0x96, 0xd0, 0x02, 0x00, 0x00,
+0x89, 0x8e, 0xd4, 0x02, 0x00, 0x00, 0x8b, 0x55, 0xe0, 0x8b, 0x4d, 0xe4, 0x89, 0x96, 0xd8, 0x02,
+0x00, 0x00, 0x89, 0x8e, 0xdc, 0x02, 0x00, 0x00, 0x89, 0xbe, 0xe0, 0x02, 0x00, 0x00, 0x88, 0x98,
+0xe8, 0x01, 0x00, 0x00, 0x4f, 0x75, 0x65, 0x8b, 0x5d, 0xe0, 0x8b, 0x75, 0xe4, 0x03, 0x5d, 0xe8,
+0x13, 0x75, 0xec, 0x89, 0xd9, 0x89, 0xf3, 0xbe, 0xff, 0xff, 0x3f, 0x00, 0x81, 0xfb, 0xff, 0x03,
+0x00, 0x00, 0x77, 0x13, 0x0f, 0xac, 0xd9, 0x0a, 0x89, 0xce, 0x81, 0xf9, 0xff, 0xff, 0x3f, 0x00,
+0x76, 0x05, 0xbe, 0xff, 0xff, 0x3f, 0x00, 0x8b, 0xb8, 0xe0, 0x01, 0x00, 0x00, 0x8d, 0x97, 0x00,
+0x04, 0x00, 0x00, 0x39, 0xf2, 0x73, 0x25, 0x8d, 0x96, 0x00, 0xfc, 0xff, 0xff, 0x89, 0x90, 0xe0,
+0x01, 0x00, 0x00, 0x81, 0xfa, 0xff, 0xff, 0x00, 0x00, 0x77, 0x0b, 0x66, 0x81, 0xee, 0x00, 0x04,
+0x66, 0x89, 0x70, 0x02, 0xeb, 0x06, 0x66, 0xc7, 0x40, 0x02, 0x00, 0xfc, 0x83, 0xc4, 0x14, 0x5b,
+0x5e, 0x5f, 0x5d, 0xc3, 0x55, 0x89, 0xe5, 0x8b, 0x55, 0x08, 0xec, 0x5d, 0xc3, 0x55, 0x89, 0xe5,
+0x8b, 0x55, 0x08, 0x66, 0xed, 0x5d, 0xc3, 0x55, 0x89, 0xe5, 0x8b, 0x55, 0x08, 0xed, 0x5d, 0xc3,
+0x55, 0x89, 0xe5, 0x8b, 0x45, 0x08, 0x8b, 0x55, 0x0c, 0xee, 0x5d, 0xc3, 0x55, 0x89, 0xe5, 0x8b,
+0x45, 0x08, 0x8b, 0x55, 0x0c, 0x66, 0xef, 0x5d, 0xc3, 0x55, 0x89, 0xe5, 0x8b, 0x45, 0x08, 0x8b,
+0x55, 0x0c, 0xef, 0x5d, 0xc3, 0x55, 0x31, 0xc0, 0x89, 0xe5, 0x8b, 0x55, 0x08, 0x3b, 0x45, 0x0c,
+0x74, 0x09, 0x80, 0x3c, 0x02, 0x00, 0x74, 0x03, 0x40, 0xeb, 0xf2, 0x5d, 0xc3, 0x55, 0x31, 0xd2,
+0x89, 0xe5, 0x8b, 0x45, 0x08, 0x8b, 0x4d, 0x0c, 0x3b, 0x55, 0x10, 0x74, 0x06, 0x88, 0x0c, 0x10,
+0x42, 0xeb, 0xf5, 0x5d, 0xc3, 0x55, 0x31, 0xd2, 0x89, 0xe5, 0x53, 0x8b, 0x45, 0x08, 0x8b, 0x4d,
+0x0c, 0x3b, 0x55, 0x10, 0x74, 0x09, 0x8a, 0x1c, 0x11, 0x88, 0x1c, 0x10, 0x42, 0xeb, 0xf2, 0x5b,
+0x5d, 0xc3, 0x55, 0x31, 0xc9, 0x89, 0xe5, 0x3b, 0x4d, 0x10, 0x74, 0x17, 0x8b, 0x45, 0x08, 0x0f,
+0xb6, 0x10, 0x8b, 0x45, 0x0c, 0x0f, 0xb6, 0x00, 0x38, 0xc2, 0x74, 0x04, 0x29, 0xd0, 0xeb, 0x05,
+0x41, 0xeb, 0xe4, 0x31, 0xc0, 0x5d, 0xc3, 0x55, 0x89, 0xe5, 0x57, 0x56, 0x53, 0x8b, 0x45, 0x08,
+0x68, 0x00, 0x01, 0x00, 0x00, 0x8b, 0x4d, 0x10, 0x8d, 0x98, 0x00, 0x08, 0x00, 0x00, 0x53, 0xe8,
+0x71, 0xff, 0xff, 0xff, 0x01, 0xc3, 0x5e, 0x81, 0xf9, 0xfe, 0x00, 0x00, 0x00, 0x5f, 0x8d, 0x73,
+0x01, 0xc6, 0x03, 0x20, 0x7e, 0x05, 0xb9, 0xff, 0x00, 0x00, 0x00, 0x51, 0xff, 0x75, 0x0c, 0xe8,
+0x51, 0xff, 0xff, 0xff, 0x5a, 0x59, 0x50, 0xff, 0x75, 0x0c, 0x89, 0xc7, 0x56, 0xe8, 0x73, 0xff,
+0xff, 0xff, 0x83, 0xc4, 0x0c, 0xc6, 0x44, 0x3b, 0x01, 0x00, 0x8d, 0x65, 0xf4, 0x5b, 0x5e, 0x5f,
+0x5d, 0xc3, 0x55, 0x89, 0xe5, 0x8b, 0x45, 0x10, 0x8b, 0x55, 0x0c, 0x89, 0x45, 0x0c, 0x8b, 0x45,
+0x08, 0x89, 0x55, 0x10, 0x8b, 0x40, 0x10, 0x89, 0x45, 0x08, 0x5d, 0xeb, 0x8a, 0x55, 0x89, 0xe5,
+0x57, 0x56, 0x53, 0x8b, 0x5d, 0x08, 0x68, 0x00, 0x10, 0x00, 0x00, 0x6a, 0x00, 0xff, 0x73, 0x10,
+0xe8, 0x18, 0xff, 0xff, 0xff, 0x8b, 0x43, 0x10, 0xc7, 0x40, 0x0e, 0x19, 0x01, 0x10, 0x00, 0xc7,
+0x00, 0x00, 0x19, 0x00, 0x00, 0xc7, 0x40, 0x04, 0x00, 0x00, 0x00, 0x50, 0xc6, 0x80, 0xe8, 0x01,
+0x00, 0x00, 0x00, 0x8d, 0xb8, 0x00, 0x08, 0x00, 0x00, 0x66, 0xc7, 0x40, 0x0a, 0x00, 0x00, 0xc7,
+0x40, 0x20, 0x3f, 0xa3, 0x00, 0x08, 0x89, 0xb8, 0x28, 0x02, 0x00, 0x00, 0x8b, 0x43, 0x0c, 0x68,
+0xff, 0x00, 0x00, 0x00, 0x8d, 0x48, 0x28, 0x51, 0xe8, 0xb8, 0xfe, 0xff, 0xff, 0x5a, 0x5e, 0x89,
+0xc6, 0x50, 0x51, 0x57, 0xe8, 0xdc, 0xfe, 0xff, 0xff, 0x8b, 0x43, 0x10, 0x83, 0xc0, 0x40, 0xc6,
+0x84, 0x30, 0xc0, 0x07, 0x00, 0x00, 0x00, 0x6a, 0x40, 0x6a, 0x00, 0x50, 0xe8, 0xac, 0xfe, 0xff,
+0xff, 0x83, 0xc4, 0x24, 0x8b, 0x43, 0x10, 0x83, 0xe8, 0x80, 0x6a, 0x20, 0x6a, 0x00, 0x50, 0xe8,
+0x99, 0xfe, 0xff, 0xff, 0x8b, 0x53, 0x0c, 0x8b, 0x43, 0x10, 0x05, 0x02, 0x02, 0x00, 0x00, 0x8b,
+0x4a, 0x14, 0x66, 0x8b, 0x52, 0x16, 0x66, 0x89, 0x48, 0xf6, 0x66, 0x89, 0x50, 0xfa, 0x66, 0xc7,
+0x80, 0x9e, 0xfe, 0xff, 0xff, 0x00, 0x00, 0xc7, 0x40, 0xde, 0x00, 0x00, 0x00, 0x00, 0x66, 0xc7,
+0x80, 0x00, 0xfe, 0xff, 0xff, 0x00, 0x00, 0x66, 0xc7, 0x40, 0xf0, 0xff, 0xff, 0xc6, 0x40, 0xfd,
+0x00, 0x6a, 0x04, 0x68, 0x25, 0x10, 0x01, 0x00, 0x50, 0xe8, 0x67, 0xfe, 0xff, 0xff, 0x8b, 0x43,
+0x10, 0x66, 0xc7, 0x80, 0x10, 0x02, 0x00, 0x00, 0x50, 0x00, 0x83, 0xc4, 0x18, 0x8b, 0x4b, 0x0c,
+0x66, 0xc7, 0x80, 0x06, 0x02, 0x00, 0x00, 0x01, 0x02, 0xc7, 0x80, 0x18, 0x02, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0xc7, 0x80, 0x1c, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x8b, 0x51, 0x24,
+0x85, 0xd2, 0x74, 0x0f, 0x8b, 0x49, 0x20, 0x89, 0x90, 0x1c, 0x02, 0x00, 0x00, 0x89, 0x88, 0x18,
+0x02, 0x00, 0x00, 0xc7, 0x43, 0x2c, 0x01, 0x00, 0x00, 0x00, 0x8d, 0x65, 0xf4, 0x5b, 0x5e, 0x5f,
+0x5d, 0xc3, 0x55, 0x89, 0xe5, 0x57, 0x56, 0x53, 0x83, 0xec, 0x4c, 0xc7, 0x45, 0xc8, 0x00, 0x00,
+0x02, 0x00, 0x8b, 0x45, 0x08, 0x89, 0x45, 0xb8, 0x8b, 0x45, 0x0c, 0x89, 0x45, 0xbc, 0x8b, 0x45,
+0x10, 0x89, 0x45, 0xc0, 0x8b, 0x45, 0x14, 0x89, 0x45, 0xc4, 0x8d, 0x45, 0xb8, 0x50, 0xe8, 0x9a,
+0xfe, 0xff, 0xff, 0x8b, 0x45, 0xb8, 0x59, 0x3d, 0x02, 0xb0, 0xad, 0x2b, 0x75, 0x09, 0xc7, 0x45,
+0xcc, 0x01, 0x00, 0x00, 0x00, 0xeb, 0x21, 0x31, 0xf6, 0x3d, 0x07, 0xb0, 0x11, 0x0a, 0x75, 0x2c,
+0x8b, 0x45, 0xbc, 0x8b, 0x10, 0xe8, 0x58, 0xfb, 0xff, 0xff, 0xf7, 0xd0, 0x66, 0x85, 0xc0, 0x75,
+0x0e, 0xc7, 0x45, 0xd0, 0x01, 0x00, 0x00, 0x00, 0xbe, 0x01, 0x00, 0x00, 0x00, 0xeb, 0x0d, 0x50,
+0x68, 0x2a, 0x10, 0x01, 0x00, 0xe8, 0xae, 0xf9, 0xff, 0xff, 0x58, 0x5a, 0x31, 0xdb, 0x81, 0x7d,
+0xb8, 0x07, 0xb0, 0x1f, 0x0e, 0x75, 0x03, 0x8b, 0x5d, 0xbc, 0x8b, 0x45, 0xc0, 0x85, 0xc0, 0x74,
+0x0a, 0x81, 0x38, 0x07, 0xb0, 0x1f, 0x0e, 0x75, 0x02, 0x89, 0xc3, 0x85, 0xf6, 0x0f, 0x85, 0xce,
+0x00, 0x00, 0x00, 0x85, 0xdb, 0x0f, 0x84, 0xc6, 0x00, 0x00, 0x00, 0x66, 0x83, 0x7b, 0x08, 0x00,
+0x74, 0x0f, 0x8b, 0x53, 0x04, 0x89, 0xd8, 0xe8, 0xf6, 0xfa, 0xff, 0xff, 0xf7, 0xd0, 0x0f, 0xb7,
+0xf0, 0x8b, 0x4b, 0x04, 0x8d, 0x43, 0x0c, 0x01, 0xd9, 0x31, 0xff, 0x89, 0x4d, 0xb4, 0x39, 0x45,
+0xb4, 0x76, 0x1f, 0x8b, 0x08, 0x8b, 0x50, 0x04, 0x83, 0xc1, 0x03, 0x83, 0xc2, 0x03, 0x83, 0xe1,
+0xfc, 0x83, 0xe2, 0xfc, 0x8d, 0x54, 0x11, 0x0c, 0x01, 0xd0, 0x39, 0x45, 0xb4, 0x72, 0x03, 0x47,
+0xeb, 0xdc, 0x81, 0x3b, 0x07, 0xb0, 0x1f, 0x0e, 0x75, 0x1b, 0x85, 0xf6, 0x75, 0x17, 0x0f, 0xb7,
+0x43, 0x0a, 0x39, 0xf8, 0x75, 0x0f, 0xc7, 0x45, 0xd4, 0x01, 0x00, 0x00, 0x00, 0x89, 0x5d, 0xc0,
+0xe9, 0x95, 0x00, 0x00, 0x00, 0x50, 0x68, 0x4d, 0x10, 0x01, 0x00, 0xe8, 0x08, 0xf9, 0xff, 0xff,
+0x56, 0x68, 0x67, 0x10, 0x01, 0x00, 0xe8, 0xfd, 0xf8, 0xff, 0xff, 0x57, 0x68, 0x79, 0x10, 0x01,
+0x00, 0xe8, 0xf2, 0xf8, 0xff, 0xff, 0x53, 0x68, 0x8b, 0x10, 0x01, 0x00, 0xe8, 0xe7, 0xf8, 0xff,
+0xff, 0x83, 0xc4, 0x20, 0xff, 0x73, 0x04, 0x68, 0x9d, 0x10, 0x01, 0x00, 0xe8, 0xd7, 0xf8, 0xff,
+0xff, 0xff, 0x33, 0x68, 0xaf, 0x10, 0x01, 0x00, 0xe8, 0xcb, 0xf8, 0xff, 0xff, 0x0f, 0xb7, 0x43,
+0x0a, 0x50, 0x68, 0xc1, 0x10, 0x01, 0x00, 0xe8, 0xbc, 0xf8, 0xff, 0xff, 0x83, 0xc4, 0x18, 0xeb,
+0x04, 0x85, 0xf6, 0x75, 0x35, 0x51, 0x68, 0xd3, 0x10, 0x01, 0x00, 0xe8, 0xa8, 0xf8, 0xff, 0xff,
+0xff, 0x75, 0xb8, 0x68, 0xee, 0x10, 0x01, 0x00, 0xe8, 0x9b, 0xf8, 0xff, 0xff, 0xff, 0x75, 0xbc,
+0x68, 0xf7, 0x10, 0x01, 0x00, 0xe8, 0x8e, 0xf8, 0xff, 0xff, 0xff, 0x75, 0xc0, 0x68, 0x00, 0x11,
+0x01, 0x00, 0xe8, 0x81, 0xf8, 0xff, 0xff, 0x83, 0xc4, 0x20, 0xc7, 0x45, 0xd8, 0x00, 0x00, 0x00,
+0x00, 0xc7, 0x45, 0xdc, 0x00, 0x00, 0x00, 0x00, 0x83, 0x7d, 0xd4, 0x00, 0x0f, 0x84, 0xfc, 0x00,
+0x00, 0x00, 0x8b, 0x7d, 0xc0, 0x8d, 0x57, 0x0c, 0x03, 0x7f, 0x04, 0x39, 0xd7, 0x76, 0x51, 0x8b,
+0x32, 0x8b, 0x5a, 0x04, 0x89, 0x5d, 0xb4, 0x83, 0xc3, 0x03, 0x8d, 0x4e, 0x03, 0x83, 0xe3, 0xfc,
+0x83, 0xe1, 0xfc, 0x8d, 0x42, 0x0c, 0x01, 0xcb, 0x01, 0xc3, 0x39, 0xdf, 0x72, 0x32, 0x83, 0x7a,
+0x08, 0x01, 0x75, 0x28, 0x85, 0xf6, 0x75, 0x24, 0x89, 0x55, 0xb0, 0x89, 0x4d, 0xac, 0x52, 0x6a,
+0x00, 0x6a, 0x00, 0x50, 0xe8, 0x29, 0xfc, 0xff, 0xff, 0x83, 0xc4, 0x10, 0x89, 0xc6, 0x85, 0xc0,
+0x8b, 0x55, 0xb0, 0x8b, 0x4d, 0xac, 0x0f, 0x84, 0x1d, 0x04, 0x00, 0x00, 0x89, 0xda, 0xeb, 0xab,
+0x31, 0xd2, 0x31, 0xff, 0x31, 0xf6, 0x85, 0xd2, 0x0f, 0x95, 0xc3, 0xf7, 0xc6, 0x01, 0x00, 0x00,
+0x00, 0x75, 0x34, 0x84, 0xdb, 0x74, 0x30, 0x31, 0xf6, 0x83, 0x7a, 0x04, 0x0a, 0x75, 0x28, 0x89,
+0x55, 0xb4, 0x50, 0x6a, 0x0a, 0x68, 0x35, 0x11, 0x01, 0x00, 0x57, 0xe8, 0xe2, 0xfb, 0xff, 0xff,
+0x83, 0xc4, 0x10, 0x8b, 0x55, 0xb4, 0x85, 0xc0, 0x75, 0x0d, 0x8d, 0x45, 0xb8, 0xe8, 0x11, 0xfa,
+0xff, 0xff, 0x8b, 0x55, 0xb4, 0x89, 0xc6, 0x85, 0xf6, 0x75, 0x0f, 0x84, 0xdb, 0x74, 0x0b, 0x31,
+0xc0, 0x83, 0x7a, 0x04, 0x00, 0x0f, 0x94, 0xc0, 0x89, 0xc6, 0x85, 0xf6, 0x75, 0x25, 0x84, 0xdb,
+0x74, 0x21, 0x83, 0x7a, 0x04, 0x01, 0x75, 0x1b, 0x50, 0x6a, 0x01, 0x68, 0x66, 0x10, 0x01, 0x00,
+0x57, 0xe8, 0x9c, 0xfb, 0xff, 0xff, 0x83, 0xc4, 0x10, 0x85, 0xc0, 0x0f, 0x94, 0xc0, 0x0f, 0xb6,
+0xc0, 0x89, 0xc6, 0x85, 0xf6, 0x75, 0x13, 0x84, 0xdb, 0x74, 0x0f, 0x57, 0x68, 0x0a, 0x11, 0x01,
+0x00, 0xe8, 0x72, 0xf7, 0xff, 0xff, 0x5f, 0x58, 0xeb, 0x04, 0x85, 0xf6, 0x75, 0x13, 0x8d, 0x45,
+0xb8, 0xe8, 0xad, 0xf9, 0xff, 0xff, 0x85, 0xc0, 0x75, 0x07, 0xc7, 0x45, 0xd8, 0x01, 0x00, 0x00,
+0x00, 0x51, 0x68, 0x25, 0x11, 0x01, 0x00, 0xe8, 0x4c, 0xf7, 0xff, 0xff, 0x5b, 0x83, 0x7d, 0xdc,
+0x00, 0x5e, 0x74, 0x0d, 0x50, 0x68, 0x34, 0x11, 0x01, 0x00, 0xe8, 0x39, 0xf7, 0xff, 0xff, 0x58,
+0x5a, 0x83, 0x7d, 0xd8, 0x00, 0x74, 0x0d, 0x53, 0x68, 0x3f, 0x11, 0x01, 0x00, 0xe8, 0x26, 0xf7,
+0xff, 0xff, 0x5e, 0x5f, 0x50, 0x68, 0x65, 0x10, 0x01, 0x00, 0xe8, 0x19, 0xf7, 0xff, 0xff, 0x5a,
+0x83, 0x7d, 0xdc, 0x00, 0x59, 0x75, 0x31, 0x83, 0x7d, 0xd8, 0x00, 0x0f, 0x84, 0x16, 0x01, 0x00,
+0x00, 0x50, 0x50, 0x6a, 0x20, 0x68, 0xc4, 0x55, 0x01, 0x00, 0xe8, 0xbb, 0xf4, 0xff, 0xff, 0xbb,
+0xc0, 0x55, 0x01, 0x00, 0xa3, 0xc0, 0x55, 0x01, 0x00, 0x83, 0xc4, 0x10, 0xc7, 0x45, 0xb4, 0x00,
+0x00, 0x00, 0x00, 0xe9, 0xaa, 0x00, 0x00, 0x00, 0x8b, 0x45, 0xe0, 0x8d, 0x58, 0x18, 0x8b, 0x48,
+0x0c, 0x01, 0xd9, 0x89, 0x4d, 0xb4, 0x39, 0x5d, 0xb4, 0x76, 0xbc, 0x8b, 0x55, 0xb4, 0x8b, 0x43,
+0x04, 0x29, 0xda, 0x39, 0xd0, 0x77, 0xb0, 0x83, 0x3b, 0x01, 0x75, 0x4f, 0x83, 0xe8, 0x08, 0xb9,
+0x14, 0x00, 0x00, 0x00, 0x31, 0xd2, 0x8d, 0x73, 0x08, 0xf7, 0xf1, 0x89, 0x45, 0xb0, 0x31, 0xff,
+0x39, 0x7d, 0xb0, 0x7e, 0x2f, 0x83, 0xff, 0x1f, 0x7f, 0x2a, 0x31, 0xc0, 0x8b, 0x16, 0x83, 0x7e,
+0x10, 0x01, 0x0f, 0x95, 0xc0, 0x83, 0xec, 0x04, 0x40, 0x8b, 0x4e, 0x04, 0x47, 0x50, 0x8b, 0x45,
+0xc8, 0xff, 0x76, 0x0c, 0xff, 0x76, 0x08, 0x83, 0xc6, 0x14, 0xe8, 0x29, 0xf9, 0xff, 0xff, 0x83,
+0xc4, 0x10, 0xeb, 0xcc, 0xc7, 0x45, 0xe4, 0x00, 0x00, 0x00, 0x00, 0x03, 0x5b, 0x04, 0xeb, 0x96,
+0x8b, 0x43, 0x14, 0x8b, 0x53, 0x04, 0x8b, 0x4b, 0x08, 0x8b, 0x73, 0x0c, 0x8b, 0x7b, 0x10, 0x83,
+0xf8, 0x01, 0x75, 0x07, 0xc7, 0x45, 0xe4, 0x00, 0x00, 0x00, 0x00, 0x83, 0xec, 0x04, 0x83, 0xc3,
+0x14, 0x50, 0x8b, 0x45, 0xb0, 0x57, 0x56, 0xe8, 0xec, 0xf8, 0xff, 0xff, 0xff, 0x45, 0xb4, 0x83,
+0xc4, 0x10, 0x8b, 0x45, 0xb4, 0x3b, 0x05, 0xc0, 0x55, 0x01, 0x00, 0x8b, 0x45, 0xc8, 0x89, 0x45,
+0xb0, 0x7c, 0xbd, 0xe8, 0x3b, 0xf4, 0xff, 0xff, 0x8b, 0x7d, 0xb0, 0x89, 0x87, 0xe0, 0x01, 0x00,
+0x00, 0x8b, 0x5d, 0xc8, 0xe8, 0x73, 0xf4, 0xff, 0xff, 0x66, 0x89, 0x43, 0x02, 0x8b, 0x45, 0xc8,
+0x83, 0xb8, 0xe0, 0x01, 0x00, 0x00, 0x00, 0x75, 0x07, 0x66, 0x83, 0x78, 0x02, 0x00, 0x74, 0x07,
+0xc7, 0x45, 0xe4, 0x00, 0x00, 0x00, 0x00, 0x83, 0x7d, 0xcc, 0x00, 0x0f, 0x84, 0xb9, 0x00, 0x00,
+0x00, 0x8b, 0x75, 0xbc, 0x8b, 0x55, 0xc8, 0x83, 0x7d, 0xe4, 0x00, 0x8b, 0x1e, 0x74, 0x4d, 0xf6,
+0xc3, 0x01, 0x74, 0x48, 0x8b, 0x46, 0x08, 0x05, 0x00, 0x04, 0x00, 0x00, 0x3d, 0xff, 0xff, 0x3f,
+0x00, 0x76, 0x05, 0xb8, 0xff, 0xff, 0x3f, 0x00, 0x8b, 0xba, 0xe0, 0x01, 0x00, 0x00, 0x8d, 0x8f,
+0x00, 0x04, 0x00, 0x00, 0x39, 0xc1, 0x73, 0x24, 0x8d, 0x88, 0x00, 0xfc, 0xff, 0xff, 0x89, 0x8a,
+0xe0, 0x01, 0x00, 0x00, 0x81, 0xf9, 0xff, 0xff, 0x00, 0x00, 0x77, 0x0a, 0x66, 0x2d, 0x00, 0x04,
+0x66, 0x89, 0x42, 0x02, 0xeb, 0x06, 0x66, 0xc7, 0x42, 0x02, 0x00, 0xfc, 0x80, 0xe3, 0x04, 0x74,
+0x0f, 0x50, 0x6a, 0xff, 0xff, 0x76, 0x10, 0x52, 0xe8, 0xaa, 0xf9, 0xff, 0xff, 0x83, 0xc4, 0x10,
+0x83, 0x7d, 0xe4, 0x00, 0x0f, 0x84, 0xbd, 0x01, 0x00, 0x00, 0xf6, 0x06, 0x40, 0x0f, 0x84, 0xb4,
+0x01, 0x00, 0x00, 0x8b, 0x5e, 0x30, 0x8b, 0x46, 0x2c, 0x01, 0xd8, 0x8b, 0x7d, 0xc8, 0x89, 0x45,
+0xb4, 0x8b, 0x73, 0xfc, 0x39, 0x5d, 0xb4, 0x0f, 0x86, 0x9a, 0x01, 0x00, 0x00, 0x8b, 0x13, 0x8b,
+0x4b, 0x04, 0x50, 0x89, 0xf8, 0xff, 0x73, 0x10, 0xff, 0x73, 0x0c, 0xff, 0x73, 0x08, 0x01, 0xf3,
+0xe8, 0xe3, 0xf7, 0xff, 0xff, 0x83, 0xc4, 0x10, 0xeb, 0xda, 0x83, 0x7d, 0xd0, 0x00, 0x0f, 0x84,
+0xaa, 0x00, 0x00, 0x00, 0x8b, 0x75, 0xbc, 0x8b, 0x46, 0x0c, 0x85, 0xc0, 0x74, 0x10, 0x52, 0x50,
+0xff, 0x76, 0x08, 0xff, 0x75, 0xc8, 0xe8, 0x3c, 0xf9, 0xff, 0xff, 0x83, 0xc4, 0x10, 0x8b, 0x46,
+0x14, 0x8b, 0x5e, 0x10, 0x89, 0x45, 0xb4, 0x83, 0x7d, 0xb4, 0x00, 0x0f, 0x84, 0x46, 0x01, 0x00,
+0x00, 0x83, 0x3b, 0x01, 0x75, 0x6b, 0x83, 0x7d, 0xe4, 0x00, 0x74, 0x65, 0x8b, 0x45, 0xc8, 0xb9,
+0x14, 0x00, 0x00, 0x00, 0x89, 0x45, 0xb0, 0x8b, 0x43, 0x04, 0x83, 0xe8, 0x08, 0x31, 0xd2, 0xf7,
+0xf1, 0x89, 0x45, 0xac, 0x8d, 0x73, 0x08, 0x31, 0xff, 0x39, 0x7d, 0xac, 0x7e, 0x3c, 0x83, 0xff,
+0x1f, 0x7f, 0x37, 0x8b, 0x46, 0x10, 0x8d, 0x50, 0xff, 0xb8, 0x02, 0x00, 0x00, 0x00, 0x83, 0xfa,
+0x03, 0x77, 0x07, 0x0f, 0xb6, 0x82, 0x10, 0x10, 0x01, 0x00, 0x83, 0xec, 0x04, 0x8b, 0x16, 0x8b,
+0x4e, 0x04, 0x47, 0x50, 0x8b, 0x45, 0xb0, 0xff, 0x76, 0x0c, 0xff, 0x76, 0x08, 0x83, 0xc6, 0x14,
+0xe8, 0x43, 0xf7, 0xff, 0xff, 0x83, 0xc4, 0x10, 0xeb, 0xbf, 0xc7, 0x45, 0xe4, 0x00, 0x00, 0x00,
+0x00, 0x8b, 0x43, 0x04, 0x01, 0xc3, 0x29, 0x45, 0xb4, 0xe9, 0x79, 0xff, 0xff, 0xff, 0x83, 0x7d,
+0xd4, 0x00, 0x0f, 0x84, 0xbf, 0x00, 0x00, 0x00, 0x8b, 0x55, 0xc0, 0x8b, 0x7a, 0x04, 0x8d, 0x42,
+0x0c, 0x01, 0xd7, 0x39, 0xc7, 0x0f, 0x86, 0xac, 0x00, 0x00, 0x00, 0x8b, 0x10, 0x8d, 0x48, 0x0c,
+0x8d, 0x5a, 0x03, 0x83, 0xe3, 0xfc, 0x8d, 0x34, 0x19, 0x8b, 0x58, 0x04, 0x89, 0x75, 0xb4, 0x8d,
+0x73, 0x03, 0x83, 0xe6, 0xfc, 0x03, 0x75, 0xb4, 0x89, 0x75, 0xb0, 0x39, 0xf7, 0x0f, 0x82, 0x84,
+0x00, 0x00, 0x00, 0x8b, 0x35, 0x70, 0x14, 0x01, 0x00, 0x8b, 0x40, 0x08, 0x89, 0x75, 0xac, 0x39,
+0x05, 0x78, 0x14, 0x01, 0x00, 0x8b, 0x35, 0x74, 0x14, 0x01, 0x00, 0x75, 0x27, 0x3b, 0x55, 0xac,
+0x75, 0x22, 0x50, 0x52, 0x56, 0x51, 0xe8, 0x17, 0xf8, 0xff, 0xff, 0x83, 0xc4, 0x10, 0x85, 0xc0,
+0x75, 0x12, 0x52, 0x8d, 0x45, 0xb8, 0xff, 0x75, 0xb4, 0x53, 0x50, 0xff, 0x15, 0x7c, 0x14, 0x01,
+0x00, 0x83, 0xc4, 0x10, 0x8b, 0x45, 0xb0, 0xeb, 0x8a, 0x8d, 0x7c, 0x0a, 0x0c, 0x83, 0x7d, 0xb4,
+0x07, 0x0f, 0x85, 0xdf, 0xfb, 0xff, 0xff, 0x89, 0x55, 0xb4, 0x50, 0x6a, 0x07, 0x68, 0x40, 0x11,
+0x01, 0x00, 0x57, 0xe8, 0xda, 0xf7, 0xff, 0xff, 0x83, 0xc4, 0x10, 0x8b, 0x55, 0xb4, 0x85, 0xc0,
+0x0f, 0x85, 0xc0, 0xfb, 0xff, 0xff, 0xc7, 0x45, 0xd8, 0x01, 0x00, 0x00, 0x00, 0xbe, 0x01, 0x00,
+0x00, 0x00, 0xe9, 0xaf, 0xfb, 0xff, 0xff, 0xb0, 0x80, 0xe6, 0x70, 0x31, 0xc0, 0xe6, 0xf0, 0xe6,
+0xf1, 0xb0, 0x11, 0xe6, 0x20, 0xe6, 0xa0, 0xb0, 0x20, 0xe6, 0x21, 0xb0, 0x28, 0xe6, 0xa1, 0xb0,
+0x04, 0xe6, 0x21, 0xb0, 0x02, 0xe6, 0xa1, 0xb0, 0x01, 0xe6, 0x21, 0xe6, 0xa1, 0xb0, 0xff, 0xe6,
+0xa1, 0xb0, 0xfb, 0xe6, 0x21, 0x8b, 0x45, 0xc8, 0x8d, 0x65, 0xf4, 0x5b, 0x5e, 0x5f, 0x5d, 0xc3,
+0x01, 0x02, 0x03, 0x04, 0x30, 0x31, 0x32, 0x33, 0x34, 0x35, 0x36, 0x37, 0x38, 0x39, 0x41, 0x42,
+0x43, 0x44, 0x45, 0x46, 0x00, 0x48, 0x64, 0x72, 0x53, 0x00, 0x42, 0x61, 0x64, 0x20, 0x75, 0x6e,
+0x69, 0x66, 0x6f, 0x72, 0x6d, 0x20, 0x62, 0x6f, 0x6f, 0x74, 0x20, 0x68, 0x65, 0x61, 0x64, 0x65,
+0x72, 0x20, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x73, 0x75, 0x6d, 0x21, 0x0a, 0x00, 0x42, 0x61, 0x64,
+0x20, 0x45, 0x4c, 0x46, 0x20, 0x70, 0x61, 0x72, 0x61, 0x6d, 0x65, 0x74, 0x65, 0x72, 0x20, 0x74,
+0x61, 0x62, 0x6c, 0x65, 0x21, 0x0a, 0x00, 0x20, 0x20, 0x20, 0x63, 0x68, 0x65, 0x63, 0x6b, 0x73,
+0x75, 0x6d, 0x20, 0x3d, 0x20, 0x25, 0x78, 0x0a, 0x00, 0x20, 0x20, 0x20, 0x20, 0x20, 0x20, 0x63,
+0x6f, 0x75, 0x6e, 0x74, 0x20, 0x3d, 0x20, 0x25, 0x78, 0x0a, 0x00, 0x20, 0x20, 0x20, 0x20, 0x20,
+0x20, 0x20, 0x20, 0x68, 0x64, 0x72, 0x20, 0x3d, 0x20, 0x25, 0x78, 0x0a, 0x00, 0x20, 0x20, 0x20,
+0x20, 0x20, 0x62, 0x5f, 0x73, 0x69, 0x7a, 0x65, 0x20, 0x3d, 0x20, 0x25, 0x78, 0x0a, 0x00, 0x62,
+0x5f, 0x73, 0x69, 0x67, 0x6e, 0x61, 0x74, 0x75, 0x72, 0x65, 0x20, 0x3d, 0x20, 0x25, 0x78, 0x0a,
+0x00, 0x20, 0x20, 0x62, 0x5f, 0x72, 0x65, 0x63, 0x6f, 0x72, 0x64, 0x73, 0x20, 0x3d, 0x20, 0x25,
+0x78, 0x0a, 0x00, 0x55, 0x6e, 0x6b, 0x6e, 0x6f, 0x77, 0x6e, 0x20, 0x62, 0x6f, 0x6f, 0x74, 0x6c,
+0x6f, 0x61, 0x64, 0x65, 0x72, 0x20, 0x63, 0x6c, 0x61, 0x73, 0x73, 0x21, 0x0a, 0x00, 0x74, 0x79,
+0x70, 0x65, 0x3d, 0x25, 0x78, 0x0a, 0x00, 0x64, 0x61, 0x74, 0x61, 0x3d, 0x25, 0x78, 0x0a, 0x00,
+0x70, 0x61, 0x72, 0x61, 0x6d, 0x3d, 0x25, 0x78, 0x0a, 0x00, 0x55, 0x6e, 0x6b, 0x6e, 0x6f, 0x77,
+0x6e, 0x20, 0x66, 0x69, 0x72, 0x6d, 0x77, 0x61, 0x72, 0x65, 0x20, 0x74, 0x79, 0x70, 0x65, 0x3a,
+0x20, 0x25, 0x73, 0x0a, 0x00, 0x46, 0x69, 0x72, 0x6d, 0x77, 0x61, 0x72, 0x65, 0x20, 0x74, 0x79,
+0x70, 0x65, 0x3a, 0x00, 0x20, 0x4c, 0x69, 0x6e, 0x75, 0x78, 0x42, 0x49, 0x4f, 0x53, 0x00, 0x20,
+0x50, 0x43, 0x42, 0x49, 0x4f, 0x53, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x01, 0x7a, 0x52, 0x00, 0x01, 0x7c, 0x08, 0x01, 0x1b, 0x0c, 0x04, 0x04, 0x88, 0x01, 0x00, 0x00,
+0x1c, 0x00, 0x00, 0x00, 0x1c, 0x00, 0x00, 0x00, 0xe2, 0xf1, 0xff, 0xff, 0x23, 0x00, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x44, 0x0d, 0x05, 0x5d, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x00,
+0x20, 0x00, 0x00, 0x00, 0x3c, 0x00, 0x00, 0x00, 0xe5, 0xf1, 0xff, 0xff, 0x1b, 0x00, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x45, 0x0d, 0x05, 0x41, 0x83, 0x03, 0x51, 0xc3, 0x41, 0xc5,
+0x0c, 0x04, 0x04, 0x00, 0x2c, 0x00, 0x00, 0x00, 0x60, 0x00, 0x00, 0x00, 0xdc, 0xf1, 0xff, 0xff,
+0x8a, 0x01, 0x00, 0x00, 0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x42, 0x0d, 0x05, 0x46, 0x87, 0x03,
+0x86, 0x04, 0x83, 0x05, 0x03, 0x7d, 0x01, 0xc3, 0x41, 0xc6, 0x41, 0xc7, 0x41, 0xc5, 0x0c, 0x04,
+0x04, 0x00, 0x00, 0x00, 0x24, 0x00, 0x00, 0x00, 0x90, 0x00, 0x00, 0x00, 0x36, 0xf3, 0xff, 0xff,
+0x47, 0x00, 0x00, 0x00, 0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x44, 0x0d, 0x05, 0x42, 0x86, 0x03,
+0x83, 0x04, 0x7d, 0xc3, 0x41, 0xc6, 0x41, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x28, 0x00, 0x00, 0x00,
+0xb8, 0x00, 0x00, 0x00, 0x55, 0xf3, 0xff, 0xff, 0x7a, 0x00, 0x00, 0x00, 0x00, 0x41, 0x0e, 0x08,
+0x85, 0x02, 0x42, 0x0d, 0x05, 0x43, 0x87, 0x03, 0x86, 0x04, 0x83, 0x05, 0x02, 0x70, 0xc3, 0x41,
+0xc6, 0x41, 0xc7, 0x41, 0xc5, 0x0c, 0x04, 0x04, 0x28, 0x00, 0x00, 0x00, 0xe4, 0x00, 0x00, 0x00,
+0xa3, 0xf3, 0xff, 0xff, 0x65, 0x00, 0x00, 0x00, 0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x47, 0x0d,
+0x05, 0x43, 0x87, 0x03, 0x86, 0x04, 0x83, 0x05, 0x02, 0x56, 0xc3, 0x41, 0xc6, 0x41, 0xc7, 0x41,
+0xc5, 0x0c, 0x04, 0x04, 0x28, 0x00, 0x00, 0x00, 0x10, 0x01, 0x00, 0x00, 0xdc, 0xf3, 0xff, 0xff,
+0xcc, 0x00, 0x00, 0x00, 0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x42, 0x0d, 0x05, 0x46, 0x87, 0x03,
+0x86, 0x04, 0x83, 0x05, 0x02, 0xbf, 0xc3, 0x41, 0xc6, 0x41, 0xc7, 0x41, 0xc5, 0x0c, 0x04, 0x04,
+0x1c, 0x00, 0x00, 0x00, 0x3c, 0x01, 0x00, 0x00, 0x7c, 0xf4, 0xff, 0xff, 0x09, 0x00, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x42, 0x0d, 0x05, 0x45, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x00,
+0x1c, 0x00, 0x00, 0x00, 0x5c, 0x01, 0x00, 0x00, 0x65, 0xf4, 0xff, 0xff, 0x0a, 0x00, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x42, 0x0d, 0x05, 0x46, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x00,
+0x1c, 0x00, 0x00, 0x00, 0x7c, 0x01, 0x00, 0x00, 0x4f, 0xf4, 0xff, 0xff, 0x09, 0x00, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x42, 0x0d, 0x05, 0x45, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x00,
+0x1c, 0x00, 0x00, 0x00, 0x9c, 0x01, 0x00, 0x00, 0x38, 0xf4, 0xff, 0xff, 0x0c, 0x00, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x42, 0x0d, 0x05, 0x48, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x00,
+0x1c, 0x00, 0x00, 0x00, 0xbc, 0x01, 0x00, 0x00, 0x24, 0xf4, 0xff, 0xff, 0x0d, 0x00, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x42, 0x0d, 0x05, 0x49, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x00,
+0x1c, 0x00, 0x00, 0x00, 0xdc, 0x01, 0x00, 0x00, 0x11, 0xf4, 0xff, 0xff, 0x0c, 0x00, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x42, 0x0d, 0x05, 0x48, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x00,
+0x1c, 0x00, 0x00, 0x00, 0xfc, 0x01, 0x00, 0x00, 0xfd, 0xf3, 0xff, 0xff, 0x18, 0x00, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x44, 0x0d, 0x05, 0x52, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x00,
+0x1c, 0x00, 0x00, 0x00, 0x1c, 0x02, 0x00, 0x00, 0xf5, 0xf3, 0xff, 0xff, 0x18, 0x00, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x44, 0x0d, 0x05, 0x52, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x00,
+0x20, 0x00, 0x00, 0x00, 0x3c, 0x02, 0x00, 0x00, 0xed, 0xf3, 0xff, 0xff, 0x1d, 0x00, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x44, 0x0d, 0x05, 0x41, 0x83, 0x03, 0x55, 0xc3, 0x41, 0xc5,
+0x0c, 0x04, 0x04, 0x00, 0x1c, 0x00, 0x00, 0x00, 0x60, 0x02, 0x00, 0x00, 0xe6, 0xf3, 0xff, 0xff,
+0x25, 0x00, 0x00, 0x00, 0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x44, 0x0d, 0x05, 0x5f, 0xc5, 0x0c,
+0x04, 0x04, 0x00, 0x00, 0x28, 0x00, 0x00, 0x00, 0x80, 0x02, 0x00, 0x00, 0xeb, 0xf3, 0xff, 0xff,
+0x5b, 0x00, 0x00, 0x00, 0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x42, 0x0d, 0x05, 0x43, 0x87, 0x03,
+0x86, 0x04, 0x83, 0x05, 0x02, 0x51, 0xc3, 0x41, 0xc6, 0x41, 0xc7, 0x41, 0xc5, 0x0c, 0x04, 0x04,
+0x1c, 0x00, 0x00, 0x00, 0xac, 0x02, 0x00, 0x00, 0x1a, 0xf4, 0xff, 0xff, 0x1b, 0x00, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x42, 0x0d, 0x05, 0x56, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x00,
+0x2c, 0x00, 0x00, 0x00, 0xcc, 0x02, 0x00, 0x00, 0x15, 0xf4, 0xff, 0xff, 0x35, 0x01, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x42, 0x0d, 0x05, 0x43, 0x87, 0x03, 0x86, 0x04, 0x83, 0x05,
+0x03, 0x2b, 0x01, 0xc3, 0x41, 0xc6, 0x41, 0xc7, 0x41, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x00, 0x00,
+0x2c, 0x00, 0x00, 0x00, 0xfc, 0x02, 0x00, 0x00, 0x1a, 0xf5, 0xff, 0xff, 0xae, 0x06, 0x00, 0x00,
+0x00, 0x41, 0x0e, 0x08, 0x85, 0x02, 0x42, 0x0d, 0x05, 0x46, 0x87, 0x03, 0x86, 0x04, 0x83, 0x05,
+0x03, 0xa1, 0x06, 0xc3, 0x41, 0xc6, 0x41, 0xc7, 0x41, 0xc5, 0x0c, 0x04, 0x04, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x66, 0x10, 0x01, 0x00, 0x04, 0x00, 0x00, 0x00, 0x12, 0x08, 0x01, 0x00,
+0xc0, 0x55, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0xa5, 0xa5, 0xa5, 0xa5, 0x70, 0x00, 0x00, 0x00, 0x20, 0x00, 0x00, 0x00, 0x00, 0x60, 0x00, 0x00,
+0x8c, 0x42, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x42, 0x4f, 0x4f, 0x54, 0x5f, 0x49, 0x4d, 0x41,
+0x47, 0x45, 0x3d, 0x68, 0x65, 0x61, 0x64, 0x2e, 0x53, 0x20, 0x63, 0x6f, 0x6e, 0x73, 0x6f, 0x6c,
+0x65, 0x3d, 0x74, 0x74, 0x79, 0x53, 0x30, 0x20, 0x69, 0x70, 0x3d, 0x64, 0x68, 0x63, 0x70, 0x20,
+0x72, 0x6f, 0x6f, 0x74, 0x3d, 0x2f, 0x64, 0x65, 0x76, 0x2f, 0x6e, 0x66, 0x73, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/mkelfimage/mkelfimage_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/mkelfimage/mkelfimage_git.bb
index 2bcc8d7..330fa7c 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/mkelfimage/mkelfimage_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/mkelfimage/mkelfimage_git.bb
@@ -17,21 +17,28 @@
            "
 SRC_URI_append_class-native = " \
            file://fix-makefile-to-find-libz.patch   \
+           file://convert.bin.c \
 "
 
 CLEANBROKEN = "1"
 
 S = "${WORKDIR}/git/util/mkelfImage"
 
-CFLAGS += "-fno-stack-protector"
 CACHED_CONFIGUREVARS += "\
     HOST_CC='${BUILD_CC}' \
     HOST_CFLAGS='${BUILD_CFLAGS}' \
     HOST_CPPFLAGS='${BUILD_CPPFLAGS}' \
+    I386_CFLAGS='-fno-stack-protector' \
+    IA64_CFLAGS='-fno-stack-protector' \
 "
+EXTRA_OECONF_append_x86-64 = " --with-i386=${HOST_SYS}"
 
 inherit autotools-brokensep
 
+do_configure_prepend_class-native() {
+	cp ${WORKDIR}/convert.bin.c ${S}/linux-i386/
+}
+
 do_install_append() {
 	rmdir ${D}${datadir}/mkelfImage/elf32-i386
 	rmdir ${D}${datadir}/mkelfImage
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/mklibs/mklibs-native_0.1.43.bb b/import-layers/yocto-poky/meta/recipes-devtools/mklibs/mklibs-native_0.1.43.bb
index b9c6c6f..a161476 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/mklibs/mklibs-native_0.1.43.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/mklibs/mklibs-native_0.1.43.bb
@@ -4,7 +4,6 @@
 SECTION = "devel"
 LICENSE = "GPLv2+"
 LIC_FILES_CHKSUM = "file://debian/copyright;md5=98d31037b13d896e33890738ef01af64"
-DEPENDS = "python-native"
 
 SRC_URI = "http://snapshot.debian.org/archive/debian/20161123T152011Z/pool/main/m/mklibs/mklibs_${PV}.tar.xz \
 	file://ac_init_fix.patch\
@@ -20,6 +19,6 @@
 
 UPSTREAM_CHECK_URI = "${DEBIAN_MIRROR}/main/m/mklibs/"
 
-inherit autotools gettext native pythonnative
+inherit autotools gettext native
 
 S = "${WORKDIR}/mklibs"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/mmc/mmc-utils_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/mmc/mmc-utils_git.bb
index 7858819..50acdb1 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/mmc/mmc-utils_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/mmc/mmc-utils_git.bb
@@ -4,11 +4,12 @@
 LIC_FILES_CHKSUM = "file://mmc.c;beginline=1;endline=20;md5=fae32792e20f4d27ade1c5a762d16b7d"
 
 SRCBRANCH ?= "master"
-SRCREV = "2cb6695e8dec00d887bdd5309d1b57d836fcd214"
+SRCREV = "37c86e60c0442fef570b75cd81aeb1db4d0cbafd"
 
 PV = "0.1"
 
 SRC_URI = "git://git.kernel.org/pub/scm/linux/kernel/git/cjb/mmc-utils.git;branch=${SRCBRANCH}"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 S = "${WORKDIR}/git"
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/mtd/mtd-utils_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/mtd/mtd-utils_git.bb
index 4fbc54f..48ba2ee 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/mtd/mtd-utils_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/mtd/mtd-utils_git.bb
@@ -5,7 +5,7 @@
 LIC_FILES_CHKSUM = "file://COPYING;md5=0636e73ff0215e8d672dc4c32c317bb3 \
                     file://include/common.h;beginline=1;endline=17;md5=ba05b07912a44ea2bf81ce409380049c"
 
-inherit autotools pkgconfig
+inherit autotools pkgconfig update-alternatives
 
 DEPENDS = "zlib lzo e2fsprogs util-linux"
 
@@ -30,6 +30,11 @@
 
 EXTRA_OEMAKE = "'CC=${CC}' 'RANLIB=${RANLIB}' 'AR=${AR}' 'CFLAGS=${CFLAGS} ${@bb.utils.contains('PACKAGECONFIG', 'xattr', '', '-DWITHOUT_XATTR', d)} -I${S}/include' 'BUILDDIR=${S}'"
 
+ALTERNATIVE_${PN} = "flash_eraseall"
+ALTERNATIVE_LINK_NAME[flash_eraseall] = "${sbindir}/flash_eraseall"
+# Use higher priority than busybox's flash_eraseall (created when built with CONFIG_FLASH_ERASEALL)
+ALTERNATIVE_PRIORITY[flash_eraseall] = "100"
+
 do_install () {
 	oe_runmake install DESTDIR=${D} SBINDIR=${sbindir} MANDIR=${mandir} INCLUDEDIR=${includedir}
 }
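For context, update-alternatives.bbclass turns the three variables above into a postinst-time registration roughly like the call below, so a busybox-provided flash_eraseall (registered on the same link at a lower priority) loses out to this package. The renamed target path and the busybox priority shown are illustrative assumptions, not taken from this commit:

    # Approximate effect of the generated postinst (paths/priorities illustrative):
    update-alternatives --install /usr/sbin/flash_eraseall flash_eraseall \
        /usr/sbin/flash_eraseall.mtd-utils 100
    # busybox would register the same link at a lower priority, e.g. 50,
    # so the mtd-utils binary wins when both packages are installed.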
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/mtools/mtools/disable-hardcoded-configs.patch b/import-layers/yocto-poky/meta/recipes-devtools/mtools/mtools/disable-hardcoded-configs.patch
new file mode 100644
index 0000000..01455f1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/mtools/mtools/disable-hardcoded-configs.patch
@@ -0,0 +1,23 @@
+Disable reading of host config files.
+
+Upstream-Status: Inappropriate [native]
+
+Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
+
+--- mtools-4.0.18/config.c.orig	2017-06-13 12:27:38.644000000 +0300
++++ mtools-4.0.18/config.c	2017-06-13 12:28:47.576000000 +0300
+@@ -701,14 +701,6 @@
+ 	memcpy(devices, const_devices,
+ 	       nr_const_devices*sizeof(struct device));
+ 
+-    (void) ((parse(CONF_FILE,1) |
+-	     parse(LOCAL_CONF_FILE,1) |
+-	     parse(SYS_CONF_FILE,1)) ||
+-	    (parse(OLD_CONF_FILE,1) |
+-	     parse(OLD_LOCAL_CONF_FILE,1)));
+-    /* the old-name configuration files only get executed if none of the
+-     * new-name config files were used */
+-
+     homedir = get_homedir();
+     if ( homedir ){
+ 	strncpy(conf_file, homedir, MAXPATHLEN );
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/mtools/mtools_4.0.18.bb b/import-layers/yocto-poky/meta/recipes-devtools/mtools/mtools_4.0.18.bb
index b0efc9e..dcd32ed 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/mtools/mtools_4.0.18.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/mtools/mtools_4.0.18.bb
@@ -33,13 +33,12 @@
            file://0001-Continue-even-if-fs-size-is-not-divisible-by-sectors.patch \
            "
 
+SRC_URI_append_class-native = " file://disable-hardcoded-configs.patch"
 
 inherit autotools texinfo
 
 EXTRA_OECONF = "--without-x"
 
-LDFLAGS_append_libc-uclibc = " -liconv "
-
 BBCLASSEXTEND = "native nativesdk"
 
 PACKAGECONFIG ??= ""
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/nasm/nasm_2.12.02.bb b/import-layers/yocto-poky/meta/recipes-devtools/nasm/nasm_2.12.02.bb
deleted file mode 100644
index 3280b84..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/nasm/nasm_2.12.02.bb
+++ /dev/null
@@ -1,28 +0,0 @@
-SUMMARY = "General-purpose x86 assembler"
-SECTION = "devel"
-LICENSE = "BSD-2-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=90904486f8fbf1861cf42752e1a39efe"
-
-SRC_URI = "http://www.nasm.us/pub/nasm/releasebuilds/${PV}/nasm-${PV}.tar.bz2 "
-
-SRC_URI[md5sum] = "d15843c3fb7db39af80571ee27ec6fad"
-SRC_URI[sha256sum] = "00b0891c678c065446ca59bcee64719d0096d54d6886e6e472aeee2e170ae324"
-
-inherit autotools-brokensep
-
-do_configure_prepend () {
-	if [ -f ${S}/aclocal.m4 ] && [ ! -f ${S}/acinclude.m4 ]; then
-		mv ${S}/aclocal.m4 ${S}/acinclude.m4
-	fi
-}
-
-do_install() {
-	install -d ${D}${bindir}
-	install -d ${D}${mandir}/man1
-
-	oe_runmake 'INSTALLROOT=${D}' install
-}
-
-BBCLASSEXTEND = "native"
-
-DEPENDS = "groff-native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/nasm/nasm_2.13.01.bb b/import-layers/yocto-poky/meta/recipes-devtools/nasm/nasm_2.13.01.bb
new file mode 100644
index 0000000..bf18cd6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/nasm/nasm_2.13.01.bb
@@ -0,0 +1,28 @@
+SUMMARY = "General-purpose x86 assembler"
+SECTION = "devel"
+LICENSE = "BSD-2-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=90904486f8fbf1861cf42752e1a39efe"
+
+SRC_URI = "http://www.nasm.us/pub/nasm/releasebuilds/${PV}/nasm-${PV}.tar.bz2 "
+
+SRC_URI[md5sum] = "1f7d4662040d24351df7d6719ed4f97a"
+SRC_URI[sha256sum] = "08f97baf0a7f892128c6413cfa93b69dc5825fbbd06c70928aea028835d198fa"
+
+inherit autotools-brokensep
+
+do_configure_prepend () {
+	if [ -f ${S}/aclocal.m4 ] && [ ! -f ${S}/acinclude.m4 ]; then
+		mv ${S}/aclocal.m4 ${S}/acinclude.m4
+	fi
+}
+
+do_install() {
+	install -d ${D}${bindir}
+	install -d ${D}${mandir}/man1
+
+	oe_runmake 'INSTALLROOT=${D}' install
+}
+
+BBCLASSEXTEND = "native"
+
+DEPENDS = "groff-native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/ninja/ninja_1.7.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/ninja/ninja_1.7.2.bb
new file mode 100644
index 0000000..4d3b272
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/ninja/ninja_1.7.2.bb
@@ -0,0 +1,30 @@
+SUMMARY = "Ninja is a small build system with a focus on speed."
+HOMEPAGE = "http://martine.github.com/ninja/"
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://COPYING;md5=a81586a64ad4e476c791cda7e2f2c52e"
+
+DEPENDS = "re2c-native ninja-native"
+
+SRCREV = "717b7b4a31db6027207588c0fb89c3ead384747b"
+
+SRC_URI = "git://github.com/martine/ninja.git;branch=release"
+UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>.*)"
+
+S = "${WORKDIR}/git"
+
+do_configure[noexec] = "1"
+
+do_compile_class-native() {
+	./configure.py --bootstrap
+}
+
+do_compile() {
+	./configure.py
+	ninja
+}
+
+do_install() {
+	install -D -m 0755  ${S}/ninja ${D}${bindir}/ninja
+}
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/opkg-utils/opkg-utils/0001-Switch-all-scripts-to-use-Python-3.x.patch b/import-layers/yocto-poky/meta/recipes-devtools/opkg-utils/opkg-utils/0001-Switch-all-scripts-to-use-Python-3.x.patch
new file mode 100644
index 0000000..c36ae2f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/opkg-utils/opkg-utils/0001-Switch-all-scripts-to-use-Python-3.x.patch
@@ -0,0 +1,112 @@
+From d42b23f4fb5d6bd58e92e995fe5befc76efbae0c Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Thu, 27 Apr 2017 15:47:58 +0300
+Subject: [PATCH] Switch all scripts to use Python 3.x
+
+Upstream-Status: Pending
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+ makePackage          | 2 +-
+ opkg-compare-indexes | 2 +-
+ opkg-graph-deps      | 2 +-
+ opkg-list-fields     | 2 +-
+ opkg-make-index      | 2 +-
+ opkg-show-deps       | 2 +-
+ opkg-unbuild         | 2 +-
+ opkg-update-index    | 2 +-
+ opkg.py              | 2 +-
+ 9 files changed, 9 insertions(+), 9 deletions(-)
+
+diff --git a/makePackage b/makePackage
+index 4bdfc56..02124dd 100755
+--- a/makePackage
++++ b/makePackage
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python3
+ 
+ # The general algorithm this program follows goes like this:
+ #   Run tar to extract control from control.tar.gz from the package.
+diff --git a/opkg-compare-indexes b/opkg-compare-indexes
+index b60d20a..80c1263 100755
+--- a/opkg-compare-indexes
++++ b/opkg-compare-indexes
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ from __future__ import absolute_import
+ from __future__ import print_function
+ 
+diff --git a/opkg-graph-deps b/opkg-graph-deps
+index 6653fd5..f1e376a 100755
+--- a/opkg-graph-deps
++++ b/opkg-graph-deps
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ from __future__ import absolute_import
+ from __future__ import print_function
+ 
+diff --git a/opkg-list-fields b/opkg-list-fields
+index c14a90f..24f7955 100755
+--- a/opkg-list-fields
++++ b/opkg-list-fields
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ from __future__ import absolute_import
+ from __future__ import print_function
+ 
+diff --git a/opkg-make-index b/opkg-make-index
+index 3f757f6..2988f9f 100755
+--- a/opkg-make-index
++++ b/opkg-make-index
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ from __future__ import absolute_import
+ from __future__ import print_function
+ 
+diff --git a/opkg-show-deps b/opkg-show-deps
+index 153f21e..4e18b4f 100755
+--- a/opkg-show-deps
++++ b/opkg-show-deps
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ from __future__ import absolute_import
+ from __future__ import print_function
+ 
+diff --git a/opkg-unbuild b/opkg-unbuild
+index 4f36bec..57642c9 100755
+--- a/opkg-unbuild
++++ b/opkg-unbuild
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ from __future__ import absolute_import
+ from __future__ import print_function
+ 
+diff --git a/opkg-update-index b/opkg-update-index
+index 341c1c2..7bff8a1 100755
+--- a/opkg-update-index
++++ b/opkg-update-index
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ from __future__ import absolute_import
+ 
+ import sys, os
+diff --git a/opkg.py b/opkg.py
+index 2ecac8a..7e64de4 100644
+--- a/opkg.py
++++ b/opkg.py
+@@ -1,4 +1,4 @@
+-#!/usr/bin/env python
++#!/usr/bin/env python3
+ #   Copyright (C) 2001 Alexander S. Guy <a7r@andern.org>
+ #                      Andern Research Labs
+ #
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/opkg-utils/opkg-utils_0.3.5.bb b/import-layers/yocto-poky/meta/recipes-devtools/opkg-utils/opkg-utils_0.3.5.bb
new file mode 100644
index 0000000..646cc8f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/opkg-utils/opkg-utils_0.3.5.bb
@@ -0,0 +1,62 @@
+SUMMARY = "Additional utilities for the opkg package manager"
+SUMMARY_update-alternatives-opkg = "Utility for managing the alternatives system"
+SECTION = "base"
+HOMEPAGE = "http://git.yoctoproject.org/cgit/cgit.cgi/opkg-utils"
+LICENSE = "GPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
+                    file://opkg.py;beginline=2;endline=18;md5=63ce9e6bcc445181cd9e4baf4b4ccc35"
+PROVIDES += "${@bb.utils.contains('PACKAGECONFIG', 'update-alternatives', 'virtual/update-alternatives', '', d)}"
+
+SRC_URI = "http://git.yoctoproject.org/cgit/cgit.cgi/${BPN}/snapshot/${BPN}-${PV}.tar.gz \
+           file://0001-Switch-all-scripts-to-use-Python-3.x.patch \
+"
+SRC_URI_append_class-native = " file://tar_ignore_error.patch"
+
+SRC_URI[md5sum] = "a19e09c79bf1152aac62e8a120d679ff"
+SRC_URI[sha256sum] = "7f4b08912e26a3f4f6f423f3b4e7157a73b1f3a7483fc59b216d1a80b50b0c38"
+
+TARGET_CC_ARCH += "${LDFLAGS}"
+
+# For native builds we use the host Python
+PYTHONRDEPS = "python3 python3-shell python3-io python3-math python3-crypt python3-logging python3-fcntl python3-subprocess python3-pickle python3-compression python3-textutils python3-stringold"
+PYTHONRDEPS_class-native = ""
+
+PACKAGECONFIG = "python update-alternatives"
+PACKAGECONFIG[python] = ",,,${PYTHONRDEPS}"
+PACKAGECONFIG[update-alternatives] = ",,,"
+
+do_install() {
+	oe_runmake PREFIX=${prefix} DESTDIR=${D} install
+	if ! ${@bb.utils.contains('PACKAGECONFIG', 'update-alternatives', 'true', 'false', d)}; then
+		rm -f "${D}${bindir}/update-alternatives"
+	fi
+
+    if ! ${@bb.utils.contains('PACKAGECONFIG', 'python', 'true', 'false', d)}; then
+        grep -lZ "/usr/bin/env.*python" ${D}${bindir}/* | xargs -0 rm
+    fi
+}
+
+do_install_append_class-target() {
+	if [ -e "${D}${bindir}/update-alternatives" ]; then
+		sed -i ${D}${bindir}/update-alternatives -e 's,/usr/bin,${bindir},g; s,/usr/lib,${nonarch_libdir},g'
+	fi
+}
+
+# These are empty and will pull python3-dev into images where it wouldn't
+# have been otherwise, so don't generate them.
+PACKAGES_remove = "${PN}-dev ${PN}-staticdev"
+
+PACKAGES =+ "update-alternatives-opkg"
+FILES_update-alternatives-opkg = "${bindir}/update-alternatives"
+RPROVIDES_update-alternatives-opkg = "update-alternatives update-alternatives-cworth"
+RREPLACES_update-alternatives-opkg = "update-alternatives-cworth"
+RCONFLICTS_update-alternatives-opkg = "update-alternatives-cworth"
+
+pkg_postrm_update-alternatives-opkg() {
+	rm -rf $D${nonarch_libdir}/opkg/alternatives
+	rmdir $D${nonarch_libdir}/opkg || true
+}
+
+BBCLASSEXTEND = "native nativesdk"
+
+CLEANBROKEN = "1"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/opkg-utils/opkg-utils_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/opkg-utils/opkg-utils_git.bb
deleted file mode 100644
index 9deea0f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/opkg-utils/opkg-utils_git.bb
+++ /dev/null
@@ -1,54 +0,0 @@
-SUMMARY = "Additional utilities for the opkg package manager"
-SUMMARY_update-alternatives-opkg = "Utility for managing the alternatives system"
-SECTION = "base"
-HOMEPAGE = "http://git.yoctoproject.org/cgit/cgit.cgi/opkg-utils"
-LICENSE = "GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
-                    file://opkg.py;beginline=1;endline=18;md5=15917491ad6bf7acc666ca5f7cc1e083"
-PROVIDES += "${@bb.utils.contains('PACKAGECONFIG', 'update-alternatives', 'virtual/update-alternatives', '', d)}"
-
-SRCREV = "1a708fd73d10c2b7677dd4cc4e017746ebbb9166"
-PV = "0.3.4+git${SRCPV}"
-
-SRC_URI = "git://git.yoctoproject.org/opkg-utils \
-"
-SRC_URI_append_class-native = " file://tar_ignore_error.patch"
-
-S = "${WORKDIR}/git"
-
-TARGET_CC_ARCH += "${LDFLAGS}"
-
-PYTHONRDEPS = "python python-shell python-io python-math python-crypt python-logging python-fcntl python-subprocess python-pickle python-compression python-textutils python-stringold"
-PYTHONRDEPS_class-native = ""
-
-PACKAGECONFIG = "python update-alternatives"
-PACKAGECONFIG[python] = ",,,${PYTHONRDEPS}"
-PACKAGECONFIG[update-alternatives] = ",,,"
-
-do_install() {
-	oe_runmake PREFIX=${prefix} DESTDIR=${D} install
-	if ! ${@bb.utils.contains('PACKAGECONFIG', 'update-alternatives', 'true', 'false', d)}; then
-		rm -f "${D}${bindir}/update-alternatives"
-	fi
-}
-
-do_install_append_class-target() {
-	if [ -e "${D}${bindir}/update-alternatives" ]; then
-		sed -i ${D}${bindir}/update-alternatives -e 's,/usr/bin,${bindir},g; s,/usr/lib,${nonarch_libdir},g'
-	fi
-}
-
-PACKAGES =+ "update-alternatives-opkg"
-FILES_update-alternatives-opkg = "${bindir}/update-alternatives"
-RPROVIDES_update-alternatives-opkg = "update-alternatives update-alternatives-cworth"
-RREPLACES_update-alternatives-opkg = "update-alternatives-cworth"
-RCONFLICTS_update-alternatives-opkg = "update-alternatives-cworth"
-
-pkg_postrm_update-alternatives-opkg() {
-	rm -rf $D${nonarch_libdir}/opkg/alternatives
-	rmdir $D${nonarch_libdir}/opkg || true
-}
-
-BBCLASSEXTEND = "native nativesdk"
-
-CLEANBROKEN = "1"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg-arch-config_1.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg-arch-config_1.0.bb
index 682142a..0c2dbc9 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg-arch-config_1.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg-arch-config_1.0.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Architecture-dependent configuration for opkg"
+HOMEPAGE = "http://code.google.com/p/opkg/"
 LICENSE = "MIT"
 PACKAGE_ARCH = "${MACHINE_ARCH}"
 PR = "r1"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg/opkg-configure.service b/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg/opkg-configure.service
index 8e74026..432c3dd 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg/opkg-configure.service
+++ b/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg/opkg-configure.service
@@ -8,7 +8,7 @@
 Type=oneshot
 EnvironmentFile=-@SYSCONFDIR@/default/postinst
 ExecStart=-@BASE_BINDIR@/sh -c " if [ $POSTINST_LOGGING = '1' ]; then @BINDIR@/opkg configure > $LOGFILE 2>&1; else @BINDIR@/opkg configure; fi"
-ExecStartPost=@BASE_BINDIR@/systemctl disable opkg-configure.service
+ExecStartPost=@BASE_BINDIR@/systemctl --no-reload disable opkg-configure.service
 StandardOutput=syslog
 RemainAfterExit=No
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg/status-conffile.patch b/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg/status-conffile.patch
deleted file mode 100644
index 6fc405b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg/status-conffile.patch
+++ /dev/null
@@ -1,69 +0,0 @@
-Upstream-Status: Submitted
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-From 086d5083dfe0102368cb7c8ce89b0c06b64ca773 Mon Sep 17 00:00:00 2001
-From: Ross Burton <ross.burton@intel.com>
-Date: Tue, 10 Jan 2017 15:24:59 +0000
-Subject: [PATCH 1/2] opkg_cmd: only look at conffile status if we're going to
- output it
-
-The loop to compare the recorded conffile hash with their hash on disk is
-outputted at level INFO but the loop was executed at level NOTICE and higher.
-
-This means that if a conffile had been deleted the status operation would
-produce error messages for output it isn't displaying.
-
-Signed-off-by: Ross Burton <ross.burton@intel.com>
----
- libopkg/opkg_cmd.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/libopkg/opkg_cmd.c b/libopkg/opkg_cmd.c
-index ba57c6a..37416fd 100644
---- a/libopkg/opkg_cmd.c
-+++ b/libopkg/opkg_cmd.c
-@@ -638,7 +638,7 @@ static int opkg_info_status_cmd(int argc, char **argv, int installed_only)
- 
-         pkg_formatted_info(stdout, pkg);
- 
--        if (opkg_config->verbosity >= NOTICE) {
-+        if (opkg_config->verbosity >= INFO) {
-             conffile_list_elt_t *iter;
-             for (iter = nv_pair_list_first(&pkg->conffiles); iter;
-                  iter = nv_pair_list_next(&pkg->conffiles, iter)) {
--- 
-2.8.1
-
-From 225e30e0f9fa7cfeaa3f89e2713e5147ab371def Mon Sep 17 00:00:00 2001
-From: Ross Burton <ross.burton@intel.com>
-Date: Tue, 10 Jan 2017 15:28:47 +0000
-Subject: [PATCH 2/2] conffile: gracefully handle deleted conffiles in
- conffile_has_been_modified
-
-Handle conffiles that don't exist gracefully so that instead of showing an error
-message from file_md5sum_alloc() a notice that the file has been deleted is
-shown instead.
-
-Signed-off-by: Ross Burton <ross.burton@intel.com>
----
- libopkg/conffile.c | 5 +++++
- 1 file changed, 5 insertions(+)
-
-diff --git a/libopkg/conffile.c b/libopkg/conffile.c
-index b2f2469..7b4b87b 100644
---- a/libopkg/conffile.c
-+++ b/libopkg/conffile.c
-@@ -51,6 +51,11 @@ int conffile_has_been_modified(conffile_t * conffile)
-     }
- 
-     root_filename = root_filename_alloc(filename);
-+    if (!file_exists(root_filename)) {
-+        opkg_msg(INFO, "Conffile %s deleted\n", conffile->name);
-+        free(root_filename);
-+        return 1;
-+    }
- 
-     md5sum = file_md5sum_alloc(root_filename);
- 
--- 
-2.8.1
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg_0.3.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg_0.3.4.bb
deleted file mode 100644
index a21fde1..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg_0.3.4.bb
+++ /dev/null
@@ -1,76 +0,0 @@
-SUMMARY = "Open Package Manager"
-SUMMARY_libopkg = "Open Package Manager library"
-SECTION = "base"
-HOMEPAGE = "http://code.google.com/p/opkg/"
-BUGTRACKER = "http://code.google.com/p/opkg/issues/list"
-LICENSE = "GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
-                    file://src/opkg.c;beginline=2;endline=21;md5=90435a519c6ea69ef22e4a88bcc52fa0"
-
-DEPENDS = "libarchive"
-
-PE = "1"
-
-SRC_URI = "http://downloads.yoctoproject.org/releases/${BPN}/${BPN}-${PV}.tar.gz \
-           file://opkg-configure.service \
-           file://opkg.conf \
-           file://0001-opkg_conf-create-opkg.lock-in-run-instead-of-var-run.patch \
-           file://status-conffile.patch \
-"
-
-SRC_URI[md5sum] = "6c52a065499056a196e0b45a27e392de"
-SRC_URI[sha256sum] = "750b900b53b62a9b280b601a196f02da81091eda2f3478c509512aa5a1ec93be"
-
-inherit autotools pkgconfig systemd
-
-SYSTEMD_SERVICE_${PN} = "opkg-configure.service"
-
-target_localstatedir := "${localstatedir}"
-OPKGLIBDIR = "${target_localstatedir}/lib"
-
-PACKAGECONFIG ??= "libsolv"
-
-PACKAGECONFIG[gpg] = "--enable-gpg,--disable-gpg,gpgme libgpg-error,gnupg"
-PACKAGECONFIG[curl] = "--enable-curl,--disable-curl,curl"
-PACKAGECONFIG[ssl-curl] = "--enable-ssl-curl,--disable-ssl-curl,curl openssl"
-PACKAGECONFIG[openssl] = "--enable-openssl,--disable-openssl,openssl"
-PACKAGECONFIG[sha256] = "--enable-sha256,--disable-sha256"
-PACKAGECONFIG[pathfinder] = "--enable-pathfinder,--disable-pathfinder,pathfinder"
-PACKAGECONFIG[libsolv] = "--with-libsolv,--without-libsolv,libsolv"
-
-EXTRA_OECONF_class-native = "--localstatedir=/${@os.path.relpath('${localstatedir}', '${STAGING_DIR_NATIVE}')} --sysconfdir=/${@os.path.relpath('${sysconfdir}', '${STAGING_DIR_NATIVE}')}"
-
-do_install_append () {
-	install -d ${D}${sysconfdir}/opkg
-	install -m 0644 ${WORKDIR}/opkg.conf ${D}${sysconfdir}/opkg/opkg.conf
-	echo "option lists_dir ${OPKGLIBDIR}/opkg/lists" >>${D}${sysconfdir}/opkg/opkg.conf
-
-	# We need to create the lock directory
-	install -d ${D}${OPKGLIBDIR}/opkg
-
-	if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)};then
-		install -d ${D}${systemd_unitdir}/system
-		install -m 0644 ${WORKDIR}/opkg-configure.service ${D}${systemd_unitdir}/system/
-		sed -i -e 's,@BASE_BINDIR@,${base_bindir},g' \
-			-e 's,@SYSCONFDIR@,${sysconfdir},g' \
-			-e 's,@BINDIR@,${bindir},g' \
-			-e 's,@SYSTEMD_UNITDIR@,${systemd_unitdir},g' \
-			${D}${systemd_unitdir}/system/opkg-configure.service
-	fi
-}
-
-RDEPENDS_${PN} = "${VIRTUAL-RUNTIME_update-alternatives} opkg-arch-config libarchive"
-RDEPENDS_${PN}_class-native = ""
-RDEPENDS_${PN}_class-nativesdk = ""
-RREPLACES_${PN} = "opkg-nogpg opkg-collateral"
-RCONFLICTS_${PN} = "opkg-collateral"
-RPROVIDES_${PN} = "opkg-collateral"
-
-PACKAGES =+ "libopkg"
-
-FILES_libopkg = "${libdir}/*.so.* ${OPKGLIBDIR}/opkg/"
-FILES_${PN} += "${systemd_unitdir}/system/"
-
-BBCLASSEXTEND = "native nativesdk"
-
-CONFFILES_${PN} = "${sysconfdir}/opkg/opkg.conf"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg_0.3.5.bb b/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg_0.3.5.bb
new file mode 100644
index 0000000..3e511b6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/opkg/opkg_0.3.5.bb
@@ -0,0 +1,75 @@
+SUMMARY = "Open Package Manager"
+SUMMARY_libopkg = "Open Package Manager library"
+SECTION = "base"
+HOMEPAGE = "http://code.google.com/p/opkg/"
+BUGTRACKER = "http://code.google.com/p/opkg/issues/list"
+LICENSE = "GPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
+                    file://src/opkg.c;beginline=2;endline=21;md5=90435a519c6ea69ef22e4a88bcc52fa0"
+
+DEPENDS = "libarchive"
+
+PE = "1"
+
+SRC_URI = "http://downloads.yoctoproject.org/releases/${BPN}/${BPN}-${PV}.tar.gz \
+           file://opkg-configure.service \
+           file://opkg.conf \
+           file://0001-opkg_conf-create-opkg.lock-in-run-instead-of-var-run.patch \
+"
+
+SRC_URI[md5sum] = "d202d09ea0932943071b842626cab13c"
+SRC_URI[sha256sum] = "734bc21dea11262113fa86b928d09812618b3966f352350cf916a6ae0d343f32"
+
+inherit autotools pkgconfig systemd
+
+SYSTEMD_SERVICE_${PN} = "opkg-configure.service"
+
+target_localstatedir := "${localstatedir}"
+OPKGLIBDIR = "${target_localstatedir}/lib"
+
+PACKAGECONFIG ??= "libsolv"
+
+PACKAGECONFIG[gpg] = "--enable-gpg,--disable-gpg,gpgme libgpg-error,gnupg"
+PACKAGECONFIG[curl] = "--enable-curl,--disable-curl,curl"
+PACKAGECONFIG[ssl-curl] = "--enable-ssl-curl,--disable-ssl-curl,curl openssl"
+PACKAGECONFIG[openssl] = "--enable-openssl,--disable-openssl,openssl"
+PACKAGECONFIG[sha256] = "--enable-sha256,--disable-sha256"
+PACKAGECONFIG[pathfinder] = "--enable-pathfinder,--disable-pathfinder,pathfinder"
+PACKAGECONFIG[libsolv] = "--with-libsolv,--without-libsolv,libsolv"
+
+EXTRA_OECONF_class-native = "--localstatedir=/${@os.path.relpath('${localstatedir}', '${STAGING_DIR_NATIVE}')} --sysconfdir=/${@os.path.relpath('${sysconfdir}', '${STAGING_DIR_NATIVE}')}"
+
+do_install_append () {
+	install -d ${D}${sysconfdir}/opkg
+	install -m 0644 ${WORKDIR}/opkg.conf ${D}${sysconfdir}/opkg/opkg.conf
+	echo "option lists_dir ${OPKGLIBDIR}/opkg/lists" >>${D}${sysconfdir}/opkg/opkg.conf
+
+	# We need to create the lock directory
+	install -d ${D}${OPKGLIBDIR}/opkg
+
+	if ${@bb.utils.contains('DISTRO_FEATURES','systemd','true','false',d)};then
+		install -d ${D}${systemd_unitdir}/system
+		install -m 0644 ${WORKDIR}/opkg-configure.service ${D}${systemd_unitdir}/system/
+		sed -i -e 's,@BASE_BINDIR@,${base_bindir},g' \
+			-e 's,@SYSCONFDIR@,${sysconfdir},g' \
+			-e 's,@BINDIR@,${bindir},g' \
+			-e 's,@SYSTEMD_UNITDIR@,${systemd_unitdir},g' \
+			${D}${systemd_unitdir}/system/opkg-configure.service
+	fi
+}
+
+RDEPENDS_${PN} = "${VIRTUAL-RUNTIME_update-alternatives} opkg-arch-config libarchive"
+RDEPENDS_${PN}_class-native = ""
+RDEPENDS_${PN}_class-nativesdk = ""
+RREPLACES_${PN} = "opkg-nogpg opkg-collateral"
+RCONFLICTS_${PN} = "opkg-collateral"
+RPROVIDES_${PN} = "opkg-collateral"
+
+PACKAGES =+ "libopkg"
+
+FILES_libopkg = "${libdir}/*.so.* ${OPKGLIBDIR}/opkg/"
+FILES_${PN} += "${systemd_unitdir}/system/"
+
+BBCLASSEXTEND = "native nativesdk"
+
+CONFFILES_${PN} = "${sysconfdir}/opkg/opkg.conf"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/orc/orc_0.4.26.bb b/import-layers/yocto-poky/meta/recipes-devtools/orc/orc_0.4.26.bb
deleted file mode 100644
index e47342f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/orc/orc_0.4.26.bb
+++ /dev/null
@@ -1,27 +0,0 @@
-SUMMARY = "Optimised Inner Loop Runtime Compiler"
-HOMEPAGE = "http://gstreamer.freedesktop.org/modules/orc.html"
-LICENSE = "BSD-2-Clause & BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://COPYING;md5=1400bd9d09e8af56b9ec982b3d85797e"
-
-SRC_URI = "http://gstreamer.freedesktop.org/src/orc/orc-${PV}.tar.xz"
-
-SRC_URI[md5sum] = "8e9bef677bae289d3324d81c337a4507"
-SRC_URI[sha256sum] = "7d52fa80ef84988359c3434e1eea302d077a08987abdde6905678ebcad4fa649"
-
-inherit autotools pkgconfig gtk-doc
-
-BBCLASSEXTEND = "native nativesdk"
-
-PACKAGES =+ "orc-examples"
-PACKAGES_DYNAMIC += "^liborc-.*"
-FILES_orc-examples = "${libdir}/orc/*"
-FILES_${PN} = "${bindir}/*"
-
-python populate_packages_prepend () {
-    libdir = d.expand('${libdir}')
-    do_split_packages(d, libdir, '^lib(.*)\.so\.*', 'lib%s', 'ORC %s library', extra_depends='', allow_links=True)
-}
-
-do_compile_prepend_class-native () {
-    sed -i -e 's#/tmp#.#g' ${S}/orc/orccodemem.c
-}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/orc/orc_0.4.27.bb b/import-layers/yocto-poky/meta/recipes-devtools/orc/orc_0.4.27.bb
new file mode 100644
index 0000000..303f991
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/orc/orc_0.4.27.bb
@@ -0,0 +1,27 @@
+SUMMARY = "Optimised Inner Loop Runtime Compiler"
+HOMEPAGE = "http://gstreamer.freedesktop.org/modules/orc.html"
+LICENSE = "BSD-2-Clause & BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://COPYING;md5=1400bd9d09e8af56b9ec982b3d85797e"
+
+SRC_URI = "http://gstreamer.freedesktop.org/src/orc/orc-${PV}.tar.xz"
+
+SRC_URI[md5sum] = "5837dc20dacb5b668935bbded10cbb61"
+SRC_URI[sha256sum] = "51e53e58fc8158e5986a1f1a49a6d970c5b16493841cf7b9de2c2bde7ce36b93"
+
+inherit autotools pkgconfig gtk-doc
+
+BBCLASSEXTEND = "native nativesdk"
+
+PACKAGES =+ "orc-examples"
+PACKAGES_DYNAMIC += "^liborc-.*"
+FILES_orc-examples = "${libdir}/orc/*"
+FILES_${PN} = "${bindir}/*"
+
+python populate_packages_prepend () {
+    libdir = d.expand('${libdir}')
+    do_split_packages(d, libdir, '^lib(.*)\.so\.*', 'lib%s', 'ORC %s library', extra_depends='', allow_links=True)
+}
+
+do_compile_prepend_class-native () {
+    sed -i -e 's#/tmp#.#g' ${S}/orc/orccodemem.c
+}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/ossp-uuid/ossp-uuid/libtool-tag.patch b/import-layers/yocto-poky/meta/recipes-devtools/ossp-uuid/ossp-uuid/libtool-tag.patch
new file mode 100644
index 0000000..7f601af
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/ossp-uuid/ossp-uuid/libtool-tag.patch
@@ -0,0 +1,21 @@
+Respect LIBTOOLFLAGS
+
+This adds a knob that can be controlled from the environment to set
+generic options for libtool.
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Index: uuid-1.6.2/Makefile.in
+===================================================================
+--- uuid-1.6.2.orig/Makefile.in
++++ uuid-1.6.2/Makefile.in
+@@ -56,7 +56,7 @@ RM          = rm -f
+ CP          = cp
+ RMDIR       = rmdir
+ SHTOOL      = $(S)/shtool
+-LIBTOOL     = @LIBTOOL@
++LIBTOOL     = @LIBTOOL@ $(LIBTOOLFLAGS)
+ TRUE        = true
+ POD2MAN     = pod2man
+ PERL        = @PERL@
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/ossp-uuid/ossp-uuid_1.6.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/ossp-uuid/ossp-uuid_1.6.2.bb
index 85a1bcf..5d9ca79 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/ossp-uuid/ossp-uuid_1.6.2.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/ossp-uuid/ossp-uuid_1.6.2.bb
@@ -27,6 +27,7 @@
 	   file://uuid-nostrip.patch \
            file://install-pc.patch \
            file://ldflags.patch \
+           file://libtool-tag.patch \
 	  "
 SRC_URI[md5sum] = "5db0d43a9022a6ebbbc25337ae28942f"
 SRC_URI[sha256sum] = "11a615225baa5f8bb686824423f50e4427acd3f70d394765bdff32801f0fd5b0"
@@ -37,6 +38,7 @@
 
 EXTRA_OECONF = "--without-dce --without-cxx --without-perl --without-perl-compat --without-php --without-pgsql"
 EXTRA_OECONF = "--includedir=${includedir}/ossp"
+EXTRA_OEMAKE_class-target = "LIBTOOLFLAGS='--tag=CC'"
 
 do_configure_prepend() {
   # This package has a completely custom aclocal.m4, which should be acinclude.m4.
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/patchelf/patchelf/avoidholes.patch b/import-layers/yocto-poky/meta/recipes-devtools/patchelf/patchelf/avoidholes.patch
index a273688..0b45c39 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/patchelf/patchelf/avoidholes.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/patchelf/patchelf/avoidholes.patch
@@ -31,7 +31,7 @@
 RP
 2017/3/7
 
-Upstream-Status: Pending
+Upstream-Status: Accepted
 
 Index: patchelf-0.9/src/patchelf.cc
 ===================================================================
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/patchelf/patchelf/fix-adjusting-startPage.patch b/import-layers/yocto-poky/meta/recipes-devtools/patchelf/patchelf/fix-adjusting-startPage.patch
new file mode 100644
index 0000000..f64cbed
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/patchelf/patchelf/fix-adjusting-startPage.patch
@@ -0,0 +1,38 @@
+commit 1cc234fea5600190d872329aca60e2365cefc39e
+Author: Ed Bartosh <ed.bartosh@linux.intel.com>
+Date:   Fri Jul 21 12:33:53 2017 +0300
+
+fix adjusting startPage
+
+startPage is adjusted unconditionally for all executables.
+This results in incorrect addresses assigned to INTERP and LOAD
+program headers, which breaks the patched executable.
+
+Adjusting the startPage variable only when startOffset > startPage
+should fix this.
+
+This change is related to the issue NixOS#10
+
+Signed-off-by: Ed Bartosh <ed.bartosh@linux.intel.com>
+
+Github PR: https://github.com/NixOS/patchelf/pull/127
+
+Upstream-Status: Submitted
+
+diff --git a/src/patchelf.cc b/src/patchelf.cc
+index cbd36c0..e9d7ea5 100644
+--- a/src/patchelf.cc
++++ b/src/patchelf.cc
+@@ -720,10 +720,8 @@ void ElfFile<ElfFileParamNames>::rewriteSectionsLibrary()
+        since DYN executables tend to start at virtual address 0, so
+        rewriteSectionsExecutable() won't work because it doesn't have
+        any virtual address space to grow downwards into. */
+-    if (isExecutable) {
+-        if (startOffset >= startPage) {
+-            debug("shifting new PT_LOAD segment by %d bytes to work around a Linux kernel bug\n", startOffset - startPage);
+-        }
++    if (isExecutable && startOffset > startPage) {
++        debug("shifting new PT_LOAD segment by %d bytes to work around a Linux kernel bug\n", startOffset - startPage);
+         startPage = startOffset;
+     }
+ 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/patchelf/patchelf_0.9.bb b/import-layers/yocto-poky/meta/recipes-devtools/patchelf/patchelf_0.9.bb
index 01f0e62..d703039 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/patchelf/patchelf_0.9.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/patchelf/patchelf_0.9.bb
@@ -3,6 +3,7 @@
            file://handle-read-only-files.patch \
            file://Increase-maxSize-to-64MB.patch \
            file://avoidholes.patch \
+           file://fix-adjusting-startPage.patch \
 "
 
 LICENSE = "GPLv3"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pax-utils/pax-utils_1.2.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/pax-utils/pax-utils_1.2.2.bb
index 476fa6f..9635a5e 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/pax-utils/pax-utils_1.2.2.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pax-utils/pax-utils_1.2.2.bb
@@ -7,9 +7,7 @@
 LICENSE = "GPLv2+"
 LIC_FILES_CHKSUM = "file://COPYING;md5=eb723b61539feef013de476e68b5c50a"
 
-SRC_URI = "http://gentoo.osuosl.org/distfiles/pax-utils-${PV}.tar.xz \
-"
-
+SRC_URI = "https://dev.gentoo.org/~vapier/dist/pax-utils-${PV}.tar.xz"
 SRC_URI[md5sum] = "a580468318f0ff42edf4a8cd314cc942"
 SRC_URI[sha256sum] = "7f4a7f8db6b4743adde7582fa48992ad01776796fcde030683732f56221337d9"
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/liburi-perl_1.71.bb b/import-layers/yocto-poky/meta/recipes-devtools/perl/liburi-perl_1.71.bb
deleted file mode 100644
index 432803c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/liburi-perl_1.71.bb
+++ /dev/null
@@ -1,30 +0,0 @@
-SUMMARY = "Perl module to manipulate and access URI strings"
-DESCRIPTION = "This package contains the URI.pm module with friends. \
-The module implements the URI class. URI objects can be used to access \
-and manipulate the various components that make up these strings."
-
-HOMEPAGE = "http://search.cpan.org/dist/URI/"
-SECTION = "libs"
-LICENSE = "Artistic-1.0 | GPL-1.0+"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=c453e94fae672800f83bc1bd7a38b53f"
-
-DEPENDS += "perl"
-
-SRC_URI = "https://downloads.yoctoproject.org/mirror/sources/URI-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "247c3da29a794f72730e01aa5a715daf"
-SRC_URI[sha256sum] = "9c8eca0d7f39e74bbc14706293e653b699238eeb1a7690cc9c136fb8c2644115"
-
-S = "${WORKDIR}/URI-${PV}"
-
-EXTRA_CPANFLAGS = "EXPATLIBPATH=${STAGING_LIBDIR} EXPATINCPATH=${STAGING_INCDIR}"
-
-inherit cpan
-
-do_compile() {
-	export LIBC="$(find ${STAGING_DIR_TARGET}/${base_libdir}/ -name 'libc-*.so')"
-	cpan_do_compile
-}
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/liburi-perl_1.72.bb b/import-layers/yocto-poky/meta/recipes-devtools/perl/liburi-perl_1.72.bb
new file mode 100644
index 0000000..a399c10
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/liburi-perl_1.72.bb
@@ -0,0 +1,30 @@
+SUMMARY = "Perl module to manipulate and access URI strings"
+DESCRIPTION = "This package contains the URI.pm module with friends. \
+The module implements the URI class. URI objects can be used to access \
+and manipulate the various components that make up these strings."
+
+HOMEPAGE = "http://search.cpan.org/dist/URI/"
+SECTION = "libs"
+LICENSE = "Artistic-1.0 | GPL-1.0+"
+
+LIC_FILES_CHKSUM = "file://LICENSE;md5=c453e94fae672800f83bc1bd7a38b53f"
+
+DEPENDS += "perl"
+
+SRC_URI = "http://www.cpan.org/authors/id/E/ET/ETHER/URI-${PV}.tar.gz"
+
+SRC_URI[md5sum] = "cd56d81ed429efaa97e7f3ff08851b48"
+SRC_URI[sha256sum] = "35f14431d4b300de4be1163b0b5332de2d7fbda4f05ff1ed198a8e9330d40a32"
+
+S = "${WORKDIR}/URI-${PV}"
+
+EXTRA_CPANFLAGS = "EXPATLIBPATH=${STAGING_LIBDIR} EXPATINCPATH=${STAGING_INCDIR}"
+
+inherit cpan
+
+do_compile() {
+	export LIBC="$(find ${STAGING_DIR_TARGET}/${base_libdir}/ -name 'libc-*.so')"
+	cpan_do_compile
+}
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-parser-perl_2.44.bb b/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-parser-perl_2.44.bb
index d9bbf71..cc3a660 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-parser-perl_2.44.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-parser-perl_2.44.bb
@@ -1,4 +1,5 @@
 SUMMARY = "XML::Parser - A perl module for parsing XML documents"
+HOMEPAGE = "https://libexpat.github.io/"
 SECTION = "libs"
 LICENSE = "Artistic-1.0 | GPL-1.0+"
 LIC_FILES_CHKSUM = "file://README;beginline=2;endline=6;md5=c8767d7516229f07b26e42d1cf8b51f1"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-perl_0.08.bb b/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-perl_0.08.bb
index 2c01976..0478427 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-perl_0.08.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-perl_0.08.bb
@@ -2,6 +2,7 @@
 documents for working with XML in Perl.  libxml-perl software \
 works in combination with XML::Parser, PerlSAX, XML::DOM, \
 XML::Grove and others."
+HOMEPAGE = "http://search.cpan.org/dist/libxml-perl/"
 SUMMARY = "Collection of Perl modules for working with XML"
 SECTION = "libs"
 LICENSE = "Artistic-1.0 | GPL-1.0+"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-simple-perl_2.22.bb b/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-simple-perl_2.22.bb
deleted file mode 100644
index 2243bb2..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-simple-perl_2.22.bb
+++ /dev/null
@@ -1,24 +0,0 @@
-SUMMARY = "Perl module for reading and writing XML"
-DESCRIPTION = "The XML::Simple Perl module provides a simple API layer \
-on top of an underlying XML parsing module to maintain XML files \
-(especially configuration files). It is a blunt rewrite of XML::Simple \
-(by Grant McLean) to use the XML::LibXML parser for XML structures, \
-where the original uses plain Perl or SAX parsers."
-HOMEPAGE = "http://search.cpan.org/~markov/XML-LibXML-Simple-0.93/lib/XML/LibXML/Simple.pod"
-SECTION = "libs"
-LICENSE = "Artistic-1.0 | GPL-1.0+"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=fa1187fceda00eee10b62961407ea7be"
-DEPENDS += "libxml-parser-perl"
-
-SRC_URI = "http://www.cpan.org/modules/by-module/XML/XML-Simple-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "0914abddfce749453ed89b54029f2643"
-SRC_URI[sha256sum] = "b9450ef22ea9644ae5d6ada086dc4300fa105be050a2030ebd4efd28c198eb49"
-
-S = "${WORKDIR}/XML-Simple-${PV}"
-
-EXTRA_PERLFLAGS = "-I ${PERLHOSTLIB}"
-
-inherit cpan
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-simple-perl_2.24.bb b/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-simple-perl_2.24.bb
new file mode 100644
index 0000000..0cf2eeb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/libxml-simple-perl_2.24.bb
@@ -0,0 +1,24 @@
+SUMMARY = "Perl module for reading and writing XML"
+DESCRIPTION = "The XML::Simple Perl module provides a simple API layer \
+on top of an underlying XML parsing module to maintain XML files \
+(especially configuration files). It is a blunt rewrite of XML::Simple \
+(by Grant McLean) to use the XML::LibXML parser for XML structures, \
+where the original uses plain Perl or SAX parsers."
+HOMEPAGE = "http://search.cpan.org/~markov/XML-LibXML-Simple-0.93/lib/XML/LibXML/Simple.pod"
+SECTION = "libs"
+LICENSE = "Artistic-1.0 | GPL-1.0+"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=23477e18a0d04392cdf44ae70e49b495"
+DEPENDS += "libxml-parser-perl"
+
+SRC_URI = "http://www.cpan.org/modules/by-module/XML/XML-Simple-${PV}.tar.gz"
+
+SRC_URI[md5sum] = "1cd2e8e3421160c42277523d5b2f4dd2"
+SRC_URI[sha256sum] = "9a14819fd17c75fbb90adcec0446ceab356cab0ccaff870f2e1659205dc2424f"
+
+S = "${WORKDIR}/XML-Simple-${PV}"
+
+EXTRA_PERLFLAGS = "-I ${PERLHOSTLIB}"
+
+inherit cpan
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl-native_5.24.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl-native_5.24.1.bb
index e01d11f..6c56a7d 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl-native_5.24.1.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl-native_5.24.1.bb
@@ -32,6 +32,7 @@
 		-Dcc="${CC}" \
 		-Dcflags="${CFLAGS}" \
 		-Dldflags="${LDFLAGS}" \
+		-Dlddlflags="${LDFLAGS} -shared" \
 		-Dcf_by="Open Embedded" \
 		-Dprefix=${prefix} \
 		-Dvendorprefix=${prefix} \
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/0001-Configure-Remove-fstack-protector-strong-for-native-.patch b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/0001-Configure-Remove-fstack-protector-strong-for-native-.patch
index 7391ac5..14a05d2 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/0001-Configure-Remove-fstack-protector-strong-for-native-.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/0001-Configure-Remove-fstack-protector-strong-for-native-.patch
@@ -10,7 +10,7 @@
 
 [YOCTO #10338]
 
-Upstream-status: Inappropriate [configuration]
+Upstream-Status: Inappropriate [configuration]
 
 [1] http://errors.yoctoproject.org/Errors/Details/109589/
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/cpan-missing-site-dirs.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/cpan-missing-site-dirs.diff
index a63b968..c597701 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/cpan-missing-site-dirs.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/cpan-missing-site-dirs.diff
@@ -13,6 +13,7 @@
 
 Bug-Debian: http://bugs.debian.org/688842
 Patch-Name: debian/cpan-missing-site-dirs.diff
+Upstream-Status: Pending
 ---
  cpan/CPAN/lib/CPAN/FirstTime.pm | 31 +++++++++++++++++++++++++++----
  1 file changed, 27 insertions(+), 4 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/cpan_definstalldirs.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/cpan_definstalldirs.diff
index 6b52950..572f149 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/cpan_definstalldirs.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/cpan_definstalldirs.diff
@@ -9,6 +9,7 @@
 ordering, but not ours.
 
 Patch-Name: debian/cpan_definstalldirs.diff
+Upstream-Status: Pending
 ---
  cpan/CPAN/lib/CPAN/FirstTime.pm | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/db_file_ver.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/db_file_ver.diff
index 280bf11..0861650 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/db_file_ver.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/db_file_ver.diff
@@ -8,6 +8,7 @@
 Package dependencies ensure the correct library is linked at run-time.
 
 Patch-Name: debian/db_file_ver.diff
+Upstream-Status: Pending
 ---
  cpan/DB_File/version.c | 2 ++
  1 file changed, 2 insertions(+)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/deprecate-with-apt.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/deprecate-with-apt.diff
index 601ee4c..c2ac4a3 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/deprecate-with-apt.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/deprecate-with-apt.diff
@@ -15,6 +15,7 @@
 separate packages instead.
 
 Patch-Name: debian/deprecate-with-apt.diff
+Upstream-Status: Pending
 ---
  lib/deprecate.pm | 15 ++++++++++++++-
  1 file changed, 14 insertions(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/doc_info.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/doc_info.diff
index fbea2ee..4662ecd 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/doc_info.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/doc_info.diff
@@ -6,6 +6,7 @@
 Indicate that the user needs to install the perl-doc package.
 
 Patch-Name: debian/doc_info.diff
+Upstream-Status: Pending
 ---
  pod/perl.pod | 12 ++++++++++--
  1 file changed, 10 insertions(+), 2 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/enc2xs_inc.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/enc2xs_inc.diff
index e074b20..b3bd58c 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/enc2xs_inc.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/enc2xs_inc.diff
@@ -11,6 +11,7 @@
   issues with follow => 1 (see #603686 and [rt.cpan.org #64585])
 
 Patch-Name: debian/enc2xs_inc.diff
+Upstream-Status: Pending
 ---
  cpan/Encode/bin/enc2xs | 8 ++++----
  t/porting/customized.t | 3 +++
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/errno_ver.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/errno_ver.diff
index 3d09229..a965fbe 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/errno_ver.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/errno_ver.diff
@@ -11,7 +11,7 @@
 compatible, but built on a different machine.
 
 Patch-Name: debian/errno_ver.diff
-
+Upstream-Status: Pending
 ---
  ext/Errno/Errno_pm.PL | 5 -----
  1 file changed, 5 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/extutils_set_libperl_path.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/extutils_set_libperl_path.diff
index adb4bd9..e023038 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/extutils_set_libperl_path.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/extutils_set_libperl_path.diff
@@ -7,6 +7,7 @@
 CORE directory to match other static libraries.
 
 Patch-Name: debian/extutils_set_libperl_path.diff
+Upstream-Status: Pending
 ---
  cpan/ExtUtils-MakeMaker/lib/ExtUtils/MM_Unix.pm | 2 +-
  pp.c                                            | 2 +-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fakeroot.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fakeroot.diff
index ec461cf..bdf34d1 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fakeroot.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fakeroot.diff
@@ -11,6 +11,7 @@
 rule where the Makefile is created, but is for the clean/binary* targets.
 
 Patch-Name: debian/fakeroot.diff
+Upstream-Status: Pending
 ---
  Makefile.SH | 7 ++-----
  1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/find_html2text.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/find_html2text.diff
index d319e75..0827091 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/find_html2text.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/find_html2text.diff
@@ -16,6 +16,7 @@
 [Maintainer's note: html2text in Debian is not the same implementation
 as the html2text.pl which is expected, but should provide similar
 functionality].
+Upstream-Status: Pending
 ---
  cpan/CPAN/lib/CPAN/Distribution.pm | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/document_makemaker_ccflags.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/document_makemaker_ccflags.diff
index 61a9271..f3d9258 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/document_makemaker_ccflags.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/document_makemaker_ccflags.diff
@@ -10,6 +10,7 @@
 binary interface on some platforms.
 
 Patch-Name: fixes/document_makemaker_ccflags.diff
+Upstream-Status: Pending
 ---
  cpan/ExtUtils-MakeMaker/lib/ExtUtils/MakeMaker.pm | 4 ++++
  1 file changed, 4 insertions(+)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/memoize_storable_nstore.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/memoize_storable_nstore.diff
index 525f962..d9b36f6 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/memoize_storable_nstore.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/memoize_storable_nstore.diff
@@ -47,6 +47,7 @@
 Bug: https://rt.cpan.org/Public/Bug/Display.html?id=77790
 Forwarded: https://rt.cpan.org/Public/Bug/Display.html?id=77790
 Patch-Name: fixes/memoize_storable_nstore.diff
+Upstream-Status: Pending
 ---
  cpan/Memoize/Memoize/Storable.pm |  2 +-
  cpan/Memoize/t/tie_storable.t    | 24 ++++++++++++++++++++----
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/net_smtp_docs.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/net_smtp_docs.diff
index 3c31972..afcf7fb 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/net_smtp_docs.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/net_smtp_docs.diff
@@ -7,6 +7,7 @@
 Bug: http://rt.cpan.org/Public/Bug/Display.html?id=36038
 
 Patch-Name: fixes/net_smtp_docs.diff
+Upstream-Status: Pending
 ---
  cpan/libnet/lib/Net/SMTP.pm | 1 +
  1 file changed, 1 insertion(+)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/perl-Cnn.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/perl-Cnn.diff
index b5564fd..9bdf41b 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/perl-Cnn.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/perl-Cnn.diff
@@ -12,6 +12,7 @@
 Bug-Debian: https://bugs.debian.org/788636
 Origin: upstream, http://perl5.git.perl.org/perl.git/commit/89d84ff965
 Patch-Name: fixes/perl-Cnn.diff
+Upstream-Status: Pending
 ---
  t/run/switchC.t |  7 ++++++-
  util.c          | 17 ++++++++---------
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/pod_man_reproducible_date.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/pod_man_reproducible_date.diff
index 7c9ca86..d23573f 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/pod_man_reproducible_date.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/pod_man_reproducible_date.diff
@@ -13,6 +13,7 @@
 Bug-Debian: http://bugs.debian.org/759405
 Origin: upstream
 Patch-Name: fixes/pod_man_reproducible_date.diff
+Upstream-Status: Pending
 ---
  cpan/podlators/lib/Pod/Man.pm  | 69 +++++++++++++++++++++++++++++++-----------
  cpan/podlators/t/devise-date.t | 29 +++++++++++++-----
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-empty-date.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-empty-date.diff
index 7ebbf9c..9de29b8 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-empty-date.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-empty-date.diff
@@ -10,6 +10,7 @@
 Origin: upstream, http://git.eyrie.org/?p=perl/podlators.git;a=commitdiff;h=e0e9fcb53e8fc954b2b1955385eea18c27f869af
 Bug-Debian: https://bugs.debian.org/780259
 Patch-Name: fixes/podman-empty-date.diff
+Upstream-Status: Pending
 ---
  cpan/podlators/lib/Pod/Man.pm  | 2 +-
  cpan/podlators/t/devise-date.t | 6 +++++-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-pipe.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-pipe.diff
index 1a60361..d8858d8 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-pipe.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-pipe.diff
@@ -14,6 +14,7 @@
 Origin: upstream, http://git.eyrie.org/?p=perl/podlators.git;a=commitdiff;h=d98872e46c93861b7aba14949e1258712087dc55
 Bug-Debian: https://bugs.debian.org/777405
 Patch-Name: fixes/podman-pipe.diff
+Upstream-Status: Pending
 ---
  cpan/podlators/lib/Pod/Man.pm     | 15 +++++++++++++++
  cpan/podlators/scripts/pod2man.PL |  4 ++++
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-utc-docs.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-utc-docs.diff
index 0cdfeff..b6ae409 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-utc-docs.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-utc-docs.diff
@@ -12,6 +12,7 @@
 Origin: upstream, http://git.eyrie.org/?p=perl/podlators.git;a=commitdiff;h=52db93bf80e4a06f8497e4ebade0506b6ee0e70d
 Bug-Debian: https://bugs.debian.org/780259
 Patch-Name: fixes/podman-utc-docs.diff
+Upstream-Status: Pending
 ---
  cpan/podlators/lib/Pod/Man.pm     |  6 +++++-
  cpan/podlators/scripts/pod2man.PL | 11 ++++++-----
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-utc.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-utc.diff
index fbd7b9d..3fb7c20 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-utc.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/podman-utc.diff
@@ -14,6 +14,7 @@
 Origin: upstream, http://git.eyrie.org/?p=perl/podlators.git;a=commitdiff;h=913fbb2bd2ce071e20128629302ae2852554cad4
 Bug-Debian: https://bugs.debian.org/780259
 Patch-Name: fixes/podman-utc.diff
+Upstream-Status: Pending
 ---
  cpan/podlators/lib/Pod/Man.pm | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/respect_umask.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/respect_umask.diff
index d1b498b..c8663f5 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/respect_umask.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/fixes/respect_umask.diff
@@ -7,6 +7,7 @@
 site directories.
 
 Patch-Name: fixes/respect_umask.diff
+Upstream-Status: Pending
 ---
  cpan/ExtUtils-Install/lib/ExtUtils/Install.pm   | 18 +++++++++---------
  cpan/ExtUtils-MakeMaker/lib/ExtUtils/MM_Unix.pm | 18 +++++++++---------
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/instmodsh_doc.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/instmodsh_doc.diff
index a62c746..7e1fd69 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/instmodsh_doc.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/instmodsh_doc.diff
@@ -4,6 +4,7 @@
 Subject: Debian policy doesn't install .packlist files for core or vendor.
 
 Patch-Name: debian/instmodsh_doc.diff
+Upstream-Status: Pending
 ---
  cpan/ExtUtils-MakeMaker/bin/instmodsh | 4 +++-
  1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/ld_run_path.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/ld_run_path.diff
index d80f86c..ff0b287 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/ld_run_path.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/ld_run_path.diff
@@ -4,6 +4,7 @@
 Subject: Remove standard libs from LD_RUN_PATH as per Debian policy.
 
 Patch-Name: debian/ld_run_path.diff
+Upstream-Status: Pending
 ---
  cpan/ExtUtils-MakeMaker/lib/ExtUtils/Liblist/Kid.pm | 3 +++
  1 file changed, 3 insertions(+)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/libnet_config_path.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/libnet_config_path.diff
index 54ef964..d534742 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/libnet_config_path.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/libnet_config_path.diff
@@ -5,6 +5,7 @@
  writable.
 
 Patch-Name: debian/libnet_config_path.diff
+Upstream-Status: Pending
 ---
  cpan/libnet/lib/Net/Config.pm | 7 +++----
  1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/libperl_embed_doc.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/libperl_embed_doc.diff
index 76b8054..0cdc0d3 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/libperl_embed_doc.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/libperl_embed_doc.diff
@@ -6,6 +6,7 @@
 Bug-Debian: http://bugs.debian.org/186778
 
 Patch-Name: debian/libperl_embed_doc.diff
+Upstream-Status: Pending
 ---
  lib/ExtUtils/Embed.pm | 3 +++
  1 file changed, 3 insertions(+)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/locale-robustness.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/locale-robustness.diff
index fd471ed..7cf1242 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/locale-robustness.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/locale-robustness.diff
@@ -14,6 +14,7 @@
 Bug: https://rt.perl.org/Ticket/Display.html?id=124310
 Bug-Debian: https://bugs.debian.org/782068
 Patch-Name: debian/locale-robustness.diff
+Upstream-Status: Pending
 ---
  t/run/locale.t | 7 +++----
  1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/makemaker-pasthru.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/makemaker-pasthru.diff
index fa0f9da..5f07180 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/makemaker-pasthru.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/makemaker-pasthru.diff
@@ -11,6 +11,7 @@
 
 Bug-Debian: https://bugs.debian.org/758471
 Patch-Name: debian/makemaker-pasthru.diff
+Upstream-Status: Pending
 ---
  cpan/ExtUtils-MakeMaker/lib/ExtUtils/MM_Unix.pm | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/makemaker_customized.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/makemaker_customized.diff
index b1b4cb9..d870b60 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/makemaker_customized.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/makemaker_customized.diff
@@ -4,6 +4,7 @@
 Subject: Update t/porting/customized.dat for files patched in Debian
 
 Patch-Name: debian/makemaker_customized.diff
+Upstream-Status: Pending
 ---
  t/porting/customized.dat | 8 ++++----
  1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/mod_paths.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/mod_paths.diff
index ae15907..7e22484 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/mod_paths.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/mod_paths.diff
@@ -17,6 +17,7 @@
 version than is included in core.
 
 Patch-Name: debian/mod_paths.diff
+Upstream-Status: Pending
 ---
  perl.c | 58 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
  1 file changed, 58 insertions(+)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/no_packlist_perllocal.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/no_packlist_perllocal.diff
index b911fd2..7484bec 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/no_packlist_perllocal.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/no_packlist_perllocal.diff
@@ -4,6 +4,7 @@
 Subject: Don't install .packlist or perllocal.pod for perl or vendor
 
 Patch-Name: debian/no_packlist_perllocal.diff
+Upstream-Status: Pending
 ---
  cpan/ExtUtils-MakeMaker/lib/ExtUtils/MM_Unix.pm | 35 +++----------------------
  1 file changed, 3 insertions(+), 32 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/patchlevel.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/patchlevel.diff
index 8656b02..2d05ae5 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/patchlevel.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/patchlevel.diff
@@ -10,6 +10,7 @@
 from the debian/patches/ directory when building the package.
 
 Patch-Name: debian/patchlevel.diff
+Upstream-Status: Pending
 ---
  patchlevel.h | 3 +++
  1 file changed, 3 insertions(+)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/perl5db-x-terminal-emulator.patch b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/perl5db-x-terminal-emulator.patch
index 533952c..6f1625b 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/perl5db-x-terminal-emulator.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/perl5db-x-terminal-emulator.patch
@@ -10,6 +10,7 @@
 Forwarded: not-needed
 
 Patch-Name: debian/perl5db-x-terminal-emulator.patch
+Upstream-Status: Pending
 ---
  lib/perl5db.pl | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/perlivp.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/perlivp.diff
index 2c1eab9..5c7413b 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/perlivp.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/perlivp.diff
@@ -21,6 +21,7 @@
 Signed-off-by: Niko Tyni <ntyni@debian.org>
 
 Patch-Name: debian/perlivp.diff
+Upstream-Status: Pending
 ---
  utils/perlivp.PL | 1 +
  1 file changed, 1 insertion(+)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/pod2man-customized.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/pod2man-customized.diff
index 6270b87..4707562 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/pod2man-customized.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/pod2man-customized.diff
@@ -4,6 +4,7 @@
 Subject: Update porting/customized.dat for pod2man modifications
 
 Patch-Name: debian/pod2man-customized.diff
+Upstream-Status: Pending
 ---
  t/porting/customized.dat | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/prefix_changes.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/prefix_changes.diff
index c41efbe..b681c3e 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/prefix_changes.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/prefix_changes.diff
@@ -9,6 +9,7 @@
 modules).
 
 Patch-Name: debian/prefix_changes.diff
+Upstream-Status: Pending
 ---
  cpan/ExtUtils-MakeMaker/lib/ExtUtils/MM_Any.pm  | 12 ++++++------
  cpan/ExtUtils-MakeMaker/lib/ExtUtils/MM_Unix.pm |  3 +--
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/prune_libs.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/prune_libs.diff
index d153e0e..a2ed52a 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/prune_libs.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/prune_libs.diff
@@ -10,7 +10,7 @@
 and some of the original list may be present on buildds (see Bug#128355).
 
 Patch-Name: debian/prune_libs.diff
-
+Upstream-Status: Pending
 ---
  Configure | 5 ++---
  1 file changed, 2 insertions(+), 3 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/regen-skip.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/regen-skip.diff
index 8a3fc99..5d9a7c4 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/regen-skip.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/regen-skip.diff
@@ -8,6 +8,7 @@
 the regeneration check is broken because lib/.gitignore is missing.
 
 Patch-Name: debian/regen-skip.diff
+Upstream-Status: Pending
 ---
  regen/lib_cleanup.pl | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/skip-kfreebsd-crash.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/skip-kfreebsd-crash.diff
index ecfc0bc..3b37452 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/skip-kfreebsd-crash.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/skip-kfreebsd-crash.diff
@@ -12,6 +12,7 @@
 Skip the test until the culprit is found.
 
 Patch-Name: debian/skip-kfreebsd-crash.diff
+Upstream-Status: Pending
 ---
  t/op/threads.t | 4 ++++
  1 file changed, 4 insertions(+)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/skip-upstream-git-tests.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/skip-upstream-git-tests.diff
index 4c87104..279f4ab 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/skip-upstream-git-tests.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/skip-upstream-git-tests.diff
@@ -9,6 +9,7 @@
 Skip the tests altogether even if the .git directory exists.
 
 Patch-Name: debian/skip-upstream-git-tests.diff
+Upstream-Status: Pending
 ---
  t/test.pl | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/squelch-locale-warnings.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/squelch-locale-warnings.diff
index cb31457..4964e48 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/squelch-locale-warnings.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/squelch-locale-warnings.diff
@@ -14,6 +14,7 @@
 the warning will be triggered normally again at that point.
 
 Patch-Name: debian/squelch-locale-warnings.diff
+Upstream-Status: Pending
 ---
  locale.c           | 5 ++++-
  pod/perllocale.pod | 8 ++++++++
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/writable_site_dirs.diff b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/writable_site_dirs.diff
index 53adc2f..ab373b3 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/writable_site_dirs.diff
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/debian/writable_site_dirs.diff
@@ -6,6 +6,7 @@
 Policy requires group writable site directories
 
 Patch-Name: debian/writable_site_dirs.diff
+Upstream-Status: Pending
 ---
  cpan/ExtUtils-MakeMaker/lib/ExtUtils/MM_Unix.pm | 6 +++---
  1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/ext-ODBM_File-t-odbm.t-fix-the-path-of-dbmt_common.p.patch b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/ext-ODBM_File-t-odbm.t-fix-the-path-of-dbmt_common.p.patch
index 6b05b87..b85b50c 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/ext-ODBM_File-t-odbm.t-fix-the-path-of-dbmt_common.p.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl/ext-ODBM_File-t-odbm.t-fix-the-path-of-dbmt_common.p.patch
@@ -9,6 +9,8 @@
 Can't locate ../../t/lib/dbmt_common.pl in @INC
 
 Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
+
+Upstream-Status: Pending
 ---
  ext/ODBM_File/t/odbm.t |    2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl_5.24.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl_5.24.1.bb
index cf7a8e1..b55d222 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/perl/perl_5.24.1.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/perl/perl_5.24.1.bb
@@ -124,30 +124,8 @@
             cat $i >> config.sh-${TARGET_ARCH}-${TARGET_OS}
         done
 
-        # Fixups for uclibc
-        if [ "${TARGET_OS}" = "linux-uclibc" -o "${TARGET_OS}" = "linux-uclibceabi" ]; then
-                sed -i -e "s,\(d_crypt_r=\)'define',\1'undef',g" \
-                       -e "s,\(d_futimes=\)'define',\1'undef',g" \
-                       -e "s,\(d_finitel=\)'define',\1'undef',g" \
-                       -e "s,\(crypt_r_proto=\)'\w+',\1'0',g" \
-                       -e "s,\(d_getnetbyname_r=\)'define',\1'undef',g" \
-                       -e "s,\(getnetbyname_r_proto=\)'\w+',\1'0',g" \
-                       -e "s,\(d_getnetbyaddr_r=\)'define',\1'undef',g" \
-                       -e "s,\(getnetbyaddr_r_proto=\)'\w+',\1'0',g" \
-                       -e "s,\(d_getnetent_r=\)'define',\1'undef',g" \
-                       -e "s,\(getnetent_r_proto=\)'\w+',\1'0',g" \
-                       -e "s,\(d_sockatmark=\)'define',\1'undef',g" \
-                       -e "s,\(d_sockatmarkproto=\)'\w+',\1'0',g" \
-                       -e "s,\(d_eaccess=\)'define',\1'undef',g" \
-                       -e "s,\(d_stdio_ptr_lval=\)'define',\1'undef',g" \
-                       -e "s,\(d_stdio_ptr_lval_sets_cnt=\)'define',\1'undef',g" \
-                       -e "s,\(d_stdiobase=\)'define',\1'undef',g" \
-                       -e "s,\(d_stdstdio=\)'define',\1'undef',g" \
-                       -e "s,-fstack-protector,-fno-stack-protector,g" \
-                    config.sh-${TARGET_ARCH}-${TARGET_OS}
-        fi
         # Fixups for musl
-        if [ "${TARGET_OS}" = "linux-musl" -o "${TARGET_OS}" = "linux-musleabi" ]; then
+        if [ "${TARGET_OS}" = "linux-musl" -o "${TARGET_OS}" = "linux-musleabi" -o "${TARGET_OS}" = "linux-muslx32" ]; then
                 sed -i -e "s,\(d_libm_lib_version=\)'define',\1'undef',g" \
                        -e "s,\(d_stdio_ptr_lval=\)'define',\1'undef',g" \
                        -e "s,\(d_stdio_ptr_lval_sets_cnt=\)'define',\1'undef',g" \
@@ -193,7 +171,7 @@
 			;;
 	esac
         # These are strewn all over the source tree
-        for foo in `grep -I --exclude="*.patch" --exclude="*.diff" --exclude="*.pod" --exclude="README*" -m1 "/usr/include/.*\.h" ${S}/* -r -l` ${S}/utils/h2xs.PL ; do
+        for foo in `grep -I --exclude="*.patch" --exclude="*.diff" --exclude="*.pod" --exclude="README*" --exclude="Glossary" -m1 "/usr/include/.*\.h" ${S}/* -r -l` ${S}/utils/h2xs.PL ; do
             echo Fixing: $foo
             sed -e 's|\([ "^'\''I]\+\)/usr/include/|\1${STAGING_INCDIR}/|g' -i $foo
         done
@@ -245,6 +223,7 @@
 perl_package_preprocess () {
         # Fix up installed configuration
         sed -i -e "s,${D},,g" \
+               -e "s,${DEBUG_PREFIX_MAP},,g" \
                -e "s,--sysroot=${STAGING_DIR_HOST},,g" \
                -e "s,-isystem${STAGING_INCDIR} ,,g" \
                -e "s,${STAGING_LIBDIR},${libdir},g" \
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/0001-Minimal-tweaks-to-compile-with-Visual-C-2015.patch b/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/0001-Minimal-tweaks-to-compile-with-Visual-C-2015.patch
new file mode 100644
index 0000000..3805ad3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/0001-Minimal-tweaks-to-compile-with-Visual-C-2015.patch
@@ -0,0 +1,224 @@
+From 4d7b4d7c8e9966c593f472355607204c6c80fecb Mon Sep 17 00:00:00 2001
+From: Dan Kegel <dank@kegel.com>
+Date: Sun, 4 Jun 2017 19:19:55 -0700
+Subject: [PATCH] Minimal tweaks to compile with Visual C 2015
+
+Upstream-Status: Backport
+
+Signed-off-by: Maxin B. John <maxin.john@intel.com>
+---
+ getopt_long.c           |  2 ++
+ libpkgconf/bsdstubs.c   |  1 +
+ libpkgconf/libpkgconf.h |  2 +-
+ libpkgconf/path.c       | 10 +++++-----
+ libpkgconf/pkg.c        | 28 +++++++++++++++++++---------
+ libpkgconf/stdinc.h     |  9 +++++++--
+ 6 files changed, 35 insertions(+), 17 deletions(-)
+
+diff --git a/getopt_long.c b/getopt_long.c
+index afeb68d..5ce9bfd 100644
+--- a/getopt_long.c
++++ b/getopt_long.c
+@@ -62,7 +62,9 @@
+ #include <stdio.h>
+ #include <stdlib.h>
+ #include <string.h>
++#ifndef _WIN32
+ #include <unistd.h>
++#endif
+ 
+ #define PKGCONF_HACK_LOGICAL_OR_ALL_VALUES
+ 
+diff --git a/libpkgconf/bsdstubs.c b/libpkgconf/bsdstubs.c
+index 8f70ff3..2c000ac 100644
+--- a/libpkgconf/bsdstubs.c
++++ b/libpkgconf/bsdstubs.c
+@@ -17,6 +17,7 @@
+  * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  */
+ 
++#include <stdlib.h>
+ #include <sys/types.h>
+ #include <string.h>
+ 
+diff --git a/libpkgconf/libpkgconf.h b/libpkgconf/libpkgconf.h
+index 404bf0c..551d85d 100644
+--- a/libpkgconf/libpkgconf.h
++++ b/libpkgconf/libpkgconf.h
+@@ -310,7 +310,7 @@ void pkgconf_audit_log_dependency(pkgconf_client_t *client, const pkgconf_pkg_t
+ /* path.c */
+ void pkgconf_path_add(const char *text, pkgconf_list_t *dirlist, bool filter);
+ size_t pkgconf_path_split(const char *text, pkgconf_list_t *dirlist, bool filter);
+-size_t pkgconf_path_build_from_environ(const char *environ, const char *fallback, pkgconf_list_t *dirlist, bool filter);
++size_t pkgconf_path_build_from_environ(const char *envvarname, const char *fallback, pkgconf_list_t *dirlist, bool filter);
+ bool pkgconf_path_match_list(const char *path, const pkgconf_list_t *dirlist);
+ void pkgconf_path_free(pkgconf_list_t *dirlist);
+ bool pkgconf_path_relocate(char *buf, size_t buflen);
+diff --git a/libpkgconf/path.c b/libpkgconf/path.c
+index dddb3bf..59e003e 100644
+--- a/libpkgconf/path.c
++++ b/libpkgconf/path.c
+@@ -20,7 +20,7 @@
+ # include <sys/cygwin.h>
+ #endif
+ 
+-#ifdef HAVE_SYS_STAT_H
++#if defined(HAVE_SYS_STAT_H) && ! defined(_WIN32)
+ # include <sys/stat.h>
+ # define PKGCONF_CACHE_INODES
+ #endif
+@@ -156,12 +156,12 @@ pkgconf_path_split(const char *text, pkgconf_list_t *dirlist, bool filter)
+ /*
+  * !doc
+  *
+- * .. c:function:: size_t pkgconf_path_build_from_environ(const char *environ, const char *fallback, pkgconf_list_t *dirlist)
++ * .. c:function:: size_t pkgconf_path_build_from_environ(const char *envvarname, const char *fallback, pkgconf_list_t *dirlist)
+  *
+  *    Adds the paths specified in an environment variable to a path list.  If the environment variable is not set,
+  *    an optional default set of paths is added.
+  *
+- *    :param char* environ: The environment variable to look up.
++ *    :param char* envvarname: The environment variable to look up.
+  *    :param char* fallback: The fallback paths to use if the environment variable is not set.
+  *    :param pkgconf_list_t* dirlist: The path list to add the path nodes to.
+  *    :param bool filter: Whether to perform duplicate filtering.
+@@ -169,11 +169,11 @@ pkgconf_path_split(const char *text, pkgconf_list_t *dirlist, bool filter)
+  *    :rtype: size_t
+  */
+ size_t
+-pkgconf_path_build_from_environ(const char *environ, const char *fallback, pkgconf_list_t *dirlist, bool filter)
++pkgconf_path_build_from_environ(const char *envvarname, const char *fallback, pkgconf_list_t *dirlist, bool filter)
+ {
+ 	const char *data;
+ 
+-	data = getenv(environ);
++	data = getenv(envvarname);
+ 	if (data != NULL)
+ 		return pkgconf_path_split(data, dirlist, filter);
+ 
+diff --git a/libpkgconf/pkg.c b/libpkgconf/pkg.c
+index 7aebd61..5dacae3 100644
+--- a/libpkgconf/pkg.c
++++ b/libpkgconf/pkg.c
+@@ -30,6 +30,8 @@
+ #	define PKG_CONFIG_REG_KEY "Software\\pkgconfig\\PKG_CONFIG_PATH"
+ #	undef PKG_DEFAULT_PATH
+ #	define PKG_DEFAULT_PATH "../lib/pkgconfig;../share/pkgconfig"
++#define strncasecmp _strnicmp
++#define strcasecmp _stricmp
+ #endif
+ 
+ #define PKG_CONFIG_EXT		".pc"
+@@ -134,21 +136,21 @@ static int pkgconf_pkg_parser_keyword_pair_cmp(const void *key, const void *ptr)
+ static void
+ pkgconf_pkg_parser_tuple_func(const pkgconf_client_t *client, pkgconf_pkg_t *pkg, const ptrdiff_t offset, char *value)
+ {
+-	char **dest = ((void *) pkg + offset);
++	char **dest = (char **)((char *) pkg + offset);
+ 	*dest = pkgconf_tuple_parse(client, &pkg->vars, value);
+ }
+ 
+ static void
+ pkgconf_pkg_parser_fragment_func(const pkgconf_client_t *client, pkgconf_pkg_t *pkg, const ptrdiff_t offset, char *value)
+ {
+-	pkgconf_list_t *dest = ((void *) pkg + offset);
++	pkgconf_list_t *dest = (pkgconf_list_t *)((char *) pkg + offset);
+ 	pkgconf_fragment_parse(client, dest, &pkg->vars, value);
+ }
+ 
+ static void
+ pkgconf_pkg_parser_dependency_func(const pkgconf_client_t *client, pkgconf_pkg_t *pkg, const ptrdiff_t offset, char *value)
+ {
+-	pkgconf_list_t *dest = ((void *) pkg + offset);
++	pkgconf_list_t *dest = (pkgconf_list_t *)((char *) pkg + offset);
+ 	pkgconf_dependency_parse(client, pkg, dest, value);
+ }
+ 
+@@ -238,7 +240,7 @@ pkgconf_pkg_validate(const pkgconf_client_t *client, const pkgconf_pkg_t *pkg)
+ 
+ 	for (i = 0; i < PKGCONF_ARRAY_SIZE(pkgconf_pkg_validations); i++)
+ 	{
+-		char **p = ((void *) pkg + pkgconf_pkg_validations[i].offset);
++		char **p = (char **)((char *) pkg + pkgconf_pkg_validations[i].offset);
+ 
+ 		if (*p != NULL)
+ 			continue;
+@@ -587,7 +589,7 @@ pkgconf_scan_all(pkgconf_client_t *client, void *data, pkgconf_pkg_iteration_fun
+ 
+ #ifdef _WIN32
+ static pkgconf_pkg_t *
+-pkgconf_pkg_find_in_registry_key(const pkgconf_client_t *client, HKEY hkey, const char *name)
++pkgconf_pkg_find_in_registry_key(pkgconf_client_t *client, HKEY hkey, const char *name)
+ {
+ 	pkgconf_pkg_t *pkg = NULL;
+ 
+@@ -1048,8 +1050,12 @@ typedef struct {
+ 
+ static const pkgconf_pkg_provides_vermatch_rule_t pkgconf_pkg_provides_vermatch_rules[] = {
+ 	[PKGCONF_CMP_ANY] = {
+-		.rulecmp = {},
+-		.depcmp = {},
++		.rulecmp = {
++			[PKGCONF_CMP_ANY]			= pkgconf_pkg_comparator_none,
++                },
++		.depcmp = {
++			[PKGCONF_CMP_ANY]			= pkgconf_pkg_comparator_none,
++                },
+ 	},
+ 	[PKGCONF_CMP_LESS_THAN] = {
+ 		.rulecmp = {
+@@ -1121,7 +1127,9 @@ static const pkgconf_pkg_provides_vermatch_rule_t pkgconf_pkg_provides_vermatch_
+ 			[PKGCONF_CMP_EQUAL]			= pkgconf_pkg_comparator_eq,
+ 			[PKGCONF_CMP_NOT_EQUAL]			= pkgconf_pkg_comparator_ne
+ 		},
+-		.depcmp = {},
++		.depcmp = {
++			[PKGCONF_CMP_ANY]			= pkgconf_pkg_comparator_none,
++                },
+ 	},
+ 	[PKGCONF_CMP_NOT_EQUAL] = {
+ 		.rulecmp = {
+@@ -1133,7 +1141,9 @@ static const pkgconf_pkg_provides_vermatch_rule_t pkgconf_pkg_provides_vermatch_
+ 			[PKGCONF_CMP_EQUAL]			= pkgconf_pkg_comparator_ne,
+ 			[PKGCONF_CMP_NOT_EQUAL]			= pkgconf_pkg_comparator_eq
+ 		},
+-		.depcmp = {},
++		.depcmp = {
++			[PKGCONF_CMP_ANY]			= pkgconf_pkg_comparator_none,
++                },
+ 	},
+ };
+ 
+diff --git a/libpkgconf/stdinc.h b/libpkgconf/stdinc.h
+index 58cc6c7..ac7e53c 100644
+--- a/libpkgconf/stdinc.h
++++ b/libpkgconf/stdinc.h
+@@ -24,9 +24,7 @@
+ #include <stdbool.h>
+ #include <stdarg.h>
+ #include <string.h>
+-#include <dirent.h>
+ #include <sys/types.h>
+-#include <unistd.h>
+ #include <stdint.h>
+ 
+ #ifdef _WIN32
+@@ -34,8 +32,15 @@
+ # include <windows.h>
+ # include <malloc.h>
+ # define PATH_DEV_NULL	"nul"
++# ifndef ssize_t
++#  include <BaseTsd.h>
++#  define ssize_t SSIZE_T
++# endif
++# include "win-dirent.h"
+ #else
+ # define PATH_DEV_NULL	"/dev/null"
++# include <dirent.h>
++# include <unistd.h>
+ #endif
+ 
+ #endif
+-- 
+2.4.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/0001-stdinc.h-fix-build-with-mingw.patch b/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/0001-stdinc.h-fix-build-with-mingw.patch
new file mode 100644
index 0000000..49ebe31
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/0001-stdinc.h-fix-build-with-mingw.patch
@@ -0,0 +1,48 @@
+From ea28c5b34457cf7676181b284e22ea5f79a30d85 Mon Sep 17 00:00:00 2001
+From: "Maxin B. John" <maxin.john@intel.com>
+Date: Thu, 13 Jul 2017 14:47:31 +0300
+Subject: [PATCH] stdinc.h: fix build with mingw
+
+Fixes this build error with mingw:
+...
+| compilation terminated.
+| In file included from ../pkgconf-1.3.7/libpkgconf/libpkgconf.h:19:0,
+| from ../pkgconf-1.3.7/libpkgconf/audit.c:16:
+| ../pkgconf-1.3.7/libpkgconf/stdinc.h:36:12: fatal error: BaseTsd.h: No
+such file or directory
+| # include <BaseTsd.h>
+
+Upstream-Status: Submitted
+
+Signed-off-by: Maxin B. John <maxin.john@intel.com>
+---
+ libpkgconf/stdinc.h | 10 +++++++++-
+ 1 file changed, 9 insertions(+), 1 deletion(-)
+
+diff --git a/libpkgconf/stdinc.h b/libpkgconf/stdinc.h
+index ac7e53c..d8efcf5 100644
+--- a/libpkgconf/stdinc.h
++++ b/libpkgconf/stdinc.h
+@@ -33,10 +33,18 @@
+ # include <malloc.h>
+ # define PATH_DEV_NULL	"nul"
+ # ifndef ssize_t
++# ifndef __MINGW32__
+ #  include <BaseTsd.h>
++# else
++#  include <basetsd.h>
++# endif
+ #  define ssize_t SSIZE_T
+ # endif
+-# include "win-dirent.h"
++# ifndef __MINGW32__
++#  include "win-dirent.h"
++# else
++# include <dirent.h>
++# endif
+ #else
+ # define PATH_DEV_NULL	"/dev/null"
+ # include <dirent.h>
+-- 
+2.4.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/pkg-config-esdk.in b/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/pkg-config-esdk.in
new file mode 100644
index 0000000..4fc9b0a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/pkg-config-esdk.in
@@ -0,0 +1,24 @@
+#! /bin/sh
+
+# Original pkg-config-native action when called as pkg-config-native
+# No change here
+if [ "pkg-config-native" = "`basename $0`" ] ; then
+	PKG_CONFIG_PATH="@PATH_NATIVE@"
+	PKG_CONFIG_LIBDIR="@LIBDIR_NATIVE@"
+	unset PKG_CONFIG_SYSROOT_DIR
+else
+	# in this case check if we are in the esdk
+	if [ "$OE_SKIP_SDK_CHECK" = "1" ] ; then
+		parentpid=`ps -o ppid= -p $$`
+		parentpid_info=`ps -wo comm= -o args= -p $parentpid`
+
+		# check if we are being called from the kernel's make menuconfig
+		if ( echo $parentpid_info | grep -q check-lxdialog ) ; then
+			PKG_CONFIG_PATH="@PATH_NATIVE@"
+			PKG_CONFIG_LIBDIR="@LIBDIR_NATIVE@"
+			unset PKG_CONFIG_SYSROOT_DIR
+		fi
+	fi
+fi
+
+pkg-config.real "$@"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/pkg-config-native.in b/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/pkg-config-native.in
new file mode 100644
index 0000000..9ed30a0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/pkg-config-native.in
@@ -0,0 +1,6 @@
+#! /bin/sh
+
+PKG_CONFIG_PATH="@PATH_NATIVE@"
+unset PKG_CONFIG_SYSROOT_DIR
+
+pkg-config "$@"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/pkg-config-wrapper b/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/pkg-config-wrapper
new file mode 100755
index 0000000..695f349
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf/pkg-config-wrapper
@@ -0,0 +1,16 @@
+#!/bin/sh
+# pkgconf wrapper to deal with pkg-config/pkgconf compatibility issues
+#
+# Copyright (C) 2015 Christopher Larson <chris_larson@mentor.com>
+# License: MIT (see COPYING.MIT at the root of the repository for terms)
+
+for arg; do
+    case "$arg" in
+        --variable|--variable=*)
+            # pkg-config doesn't sysroot-prefix user variables
+            unset PKG_CONFIG_SYSROOT_DIR
+            ;;
+    esac
+done
+
+exec pkgconf "$@"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf_1.3.7.bb b/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf_1.3.7.bb
new file mode 100644
index 0000000..5da0dd1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pkgconf/pkgconf_1.3.7.bb
@@ -0,0 +1,73 @@
+SUMMARY = "pkgconf provides compiler and linker configuration for development frameworks."
+DESCRIPTION = "pkgconf is a program which helps to configure compiler and linker \
+flags for development frameworks. It is similar to pkg-config from \
+freedesktop.org, providing additional functionality while also maintaining \
+compatibility."
+HOMEPAGE = "http://pkgconf.org"
+BUGTRACKER = "https://github.com/pkgconf/pkgconf/issues"
+SECTION = "devel"
+PROVIDES += "pkgconfig"
+RPROVIDES_${PN} += "pkgconfig"
+DEFAULT_PREFERENCE = "-1"
+
+# The pkgconf license seems to be functionally equivalent to BSD-2-Clause or
+# ISC, but has different wording, so needs its own name.
+LICENSE = "pkgconf"
+LIC_FILES_CHKSUM = "file://COPYING;md5=548a9d1db10cc0a84810c313a0e9266f"
+
+SRC_URI = "\
+    https://distfiles.dereferenced.org/pkgconf/pkgconf-${PV}.tar.xz \
+    file://0001-Minimal-tweaks-to-compile-with-Visual-C-2015.patch \
+    file://0001-stdinc.h-fix-build-with-mingw.patch \
+    file://pkg-config-wrapper \
+    file://pkg-config-native.in \
+    file://pkg-config-esdk.in \
+"
+SRC_URI[md5sum] = "ac35c34d84eeb6a03d4d61b8555d6197"
+SRC_URI[sha256sum] = "1be7e40900c7467893c65f810211b1e68da3f8d5e70fddb883fc24839cad0339"
+
+inherit autotools update-alternatives
+
+EXTRA_OECONF += "--with-pkg-config-dir='${libdir}/pkgconfig:${datadir}/pkgconfig'"
+
+do_install_append () {
+    # Install a wrapper which deals, as much as possible, with pkgconf vs
+    # pkg-config compatibility issues.
+    install -m 0755 "${WORKDIR}/pkg-config-wrapper" "${D}${bindir}/pkg-config"
+}
+
+do_install_append_class-native () {
+    # Install a pkg-config-native wrapper that will use the native sysroot instead
+    # of the MACHINE sysroot, for using pkg-config when building native tools.
+    sed -e "s|@PATH_NATIVE@|${PKG_CONFIG_PATH}|" \
+        < ${WORKDIR}/pkg-config-native.in > ${B}/pkg-config-native
+    install -m755 ${B}/pkg-config-native ${D}${bindir}/pkg-config-native
+    sed -e "s|@PATH_NATIVE@|${PKG_CONFIG_PATH}|" \
+        -e "s|@LIBDIR_NATIVE@|${PKG_CONFIG_LIBDIR}|" \
+        < ${WORKDIR}/pkg-config-esdk.in > ${B}/pkg-config-esdk
+    install -m755 ${B}/pkg-config-esdk ${D}${bindir}/pkg-config-esdk
+}
+
+ALTERNATIVE_${PN} = "pkg-config"
+
+# When using the RPM generated automatic package dependencies, some packages
+# will end up requiring 'pkgconfig(pkg-config)'.  Allow this behavior by
+# specifying an appropriate provide.
+RPROVIDES_${PN} += "pkgconfig(pkg-config)"
+
+# Include pkg.m4 in the main package, leaving libpkgconf dev files in -dev
+FILES_${PN}-dev_remove = "${datadir}/aclocal"
+FILES_${PN} += "${datadir}/aclocal"
+
+BBCLASSEXTEND += "native nativesdk"
+
+pkgconf_sstate_fixup_esdk () {
+   if [ "${BB_CURRENTTASK}" = "populate_sysroot_setscene" -a "${WITHIN_EXT_SDK}" = "1" ] ; then
+       pkgconfdir="${SSTATE_INSTDIR}/recipe-sysroot-native/${bindir_native}"
+       mv $pkgconfdir/pkg-config $pkgconfdir/pkg-config.real
+       lnr $pkgconfdir/pkg-config-esdk $pkgconfdir/pkg-config
+       sed -i -e "s|^pkg-config|pkg-config.real|" $pkgconfdir/pkg-config-native
+   fi
+}
+
+SSTATEPOSTUNPACKFUNCS_append_class-native = " pkgconf_sstate_fixup_esdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pkgconfig/pkgconfig/0001-gdate-Move-warning-pragma-outside-of-function.patch b/import-layers/yocto-poky/meta/recipes-devtools/pkgconfig/pkgconfig/0001-gdate-Move-warning-pragma-outside-of-function.patch
deleted file mode 100644
index 14c8293..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/pkgconfig/pkgconfig/0001-gdate-Move-warning-pragma-outside-of-function.patch
+++ /dev/null
@@ -1,39 +0,0 @@
-From 946d36266d8a30f04fe34d3183bf4929141934d2 Mon Sep 17 00:00:00 2001
-From: coypu <coypu@sdf.org>
-Date: Wed, 2 Mar 2016 19:38:48 +0200
-Subject: [PATCH] gdate: Move warning pragma outside of function
-
-Commit 0817af40e8c74c721c30f6ef482b1f53d12044c7 breaks the build on
-older versions of GCC, which don't allow pragma inside functions.
-
-https://bugzilla.gnome.org/761550
----
-Upstream-Status: Backport
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
- glib/glib/gdate.c | 5 +++++
- 1 file changed, 5 insertions(+)
-
-diff --git a/glib/glib/gdate.c b/glib/glib/gdate.c
-index 1978cf7..20e6c4a 100644
---- a/glib/glib/gdate.c
-+++ b/glib/glib/gdate.c
-@@ -2439,6 +2439,9 @@ win32_strftime_helper (const GDate     *d,
-  *
-  * Returns: number of characters written to the buffer, or 0 the buffer was too small
-  */
-+#pragma GCC diagnostic push
-+#pragma GCC diagnostic ignored "-Wformat-nonliteral"
-+
- gsize     
- g_date_strftime (gchar       *s, 
-                  gsize        slen, 
-@@ -2549,3 +2552,5 @@ g_date_strftime (gchar       *s,
-   return retval;
- #endif
- }
-+
-+#pragma GCC diagnostic pop
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pkgconfig/pkgconfig/pkg-config-esdk.in b/import-layers/yocto-poky/meta/recipes-devtools/pkgconfig/pkgconfig/pkg-config-esdk.in
new file mode 100644
index 0000000..4fc9b0a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pkgconfig/pkgconfig/pkg-config-esdk.in
@@ -0,0 +1,24 @@
+#! /bin/sh
+
+# Original pkg-config-native action when called as pkg-config-native
+# No change here
+if [ "pkg-config-native" = "`basename $0`" ] ; then
+	PKG_CONFIG_PATH="@PATH_NATIVE@"
+	PKG_CONFIG_LIBDIR="@LIBDIR_NATIVE@"
+	unset PKG_CONFIG_SYSROOT_DIR
+else
+	# in this case check if we are in the esdk
+	if [ "$OE_SKIP_SDK_CHECK" = "1" ] ; then
+		parentpid=`ps -o ppid= -p $$`
+		parentpid_info=`ps -wo comm= -o args= -p $parentpid`
+
+		# check if we are being called from the kernel's make menuconfig
+		if ( echo $parentpid_info | grep -q check-lxdialog ) ; then
+			PKG_CONFIG_PATH="@PATH_NATIVE@"
+			PKG_CONFIG_LIBDIR="@LIBDIR_NATIVE@"
+			unset PKG_CONFIG_SYSROOT_DIR
+		fi
+	fi
+fi
+
+pkg-config.real "$@"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pkgconfig/pkgconfig_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/pkgconfig/pkgconfig_git.bb
index dc44992..52ef2a9 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/pkgconfig/pkgconfig_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pkgconfig/pkgconfig_git.bb
@@ -8,13 +8,13 @@
 LICENSE = "GPLv2+"
 LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
 
-SRCREV = "87152c05be88ca8be71a3a563f275b3686d32c28"
-PV = "0.29.1+git${SRCPV}"
+SRCREV = "edf8e6f0ea77ede073f07bff0d2ae1fc7a38103b"
+PV = "0.29.2+git${SRCPV}"
 
 SRC_URI = "git://anongit.freedesktop.org/pkg-config \
+           file://pkg-config-esdk.in \
            file://pkg-config-native.in \
            file://fix-glib-configure-libtool-usage.patch \
-           file://0001-gdate-Move-warning-pragma-outside-of-function.patch \
            file://0001-glib-gettext.m4-Update-AM_GLIB_GNU_GETTEXT-to-match-.patch \
            "
 
@@ -55,4 +55,19 @@
         -e "s|@LIBDIR_NATIVE@|${PKG_CONFIG_LIBDIR}|" \
         < ${WORKDIR}/pkg-config-native.in > ${B}/pkg-config-native
     install -m755 ${B}/pkg-config-native ${D}${bindir}/pkg-config-native
+    sed -e "s|@PATH_NATIVE@|${PKG_CONFIG_PATH}|" \
+        -e "s|@LIBDIR_NATIVE@|${PKG_CONFIG_LIBDIR}|" \
+        < ${WORKDIR}/pkg-config-esdk.in > ${B}/pkg-config-esdk
+    install -m755 ${B}/pkg-config-esdk ${D}${bindir}/pkg-config-esdk
 }
+
+pkgconfig_sstate_fixup_esdk () {
+	if [ "${BB_CURRENTTASK}" = "populate_sysroot_setscene" -a "${WITHIN_EXT_SDK}" = "1" ] ; then
+		pkgconfdir="${SSTATE_INSTDIR}/recipe-sysroot-native/${bindir_native}"
+		mv $pkgconfdir/pkg-config $pkgconfdir/pkg-config.real
+		lnr $pkgconfdir/pkg-config-esdk $pkgconfdir/pkg-config
+		sed -i -e "s|^pkg-config|pkg-config.real|" $pkgconfdir/pkg-config-native
+	fi
+}
+
+SSTATEPOSTUNPACKFUNCS_append_class-native = " pkgconfig_sstate_fixup_esdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/prelink/prelink_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/prelink/prelink_git.bb
index 4529dbf..570ef36 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/prelink/prelink_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/prelink/prelink_git.bb
@@ -3,6 +3,7 @@
 # Would need transfig-native for documentation if it wasn't disabled
 DEPENDS = "elfutils binutils"
 SUMMARY = "An ELF prelinking utility"
+HOMEPAGE = "http://git.yoctoproject.org/cgit.cgi/prelink-cross/about/"
 DESCRIPTION = "The prelink package contains a utility which modifies ELF shared libraries \
 and executables, so that far fewer relocations need to be resolved at \
 runtime and thus programs come up faster."
@@ -31,6 +32,8 @@
            file://prelink.cron.daily \
            file://prelink.default \
 	   file://macros.prelink"
+UPSTREAM_CHECK_GITTAGREGEX = "upstream has no usable tags"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 TARGET_OS_ORIG := "${TARGET_OS}"
 OVERRIDES_append = ":${TARGET_OS_ORIG}"
@@ -150,13 +153,15 @@
 	install -m 0644 ${WORKDIR}/macros.prelink ${D}${sysconfdir}/rpm/macros.prelink
 }
 
-# If we're using image-prelink, we want to skip this on the host side
-# but still do it if the package is installed on the target...
+# If we are doing a cross install, we want to avoid prelinking.
+# Prelinking during a cross install should be handled by the image-prelink
+# bbclass.  If the user desires this to run on the target at first boot
+# they will need to create a custom boot script.
 pkg_postinst_prelink() {
 #!/bin/sh
 
 if [ "x$D" != "x" ]; then
-  ${@bb.utils.contains('USER_CLASSES', 'image-prelink', 'exit 0', 'exit 1', d)}
+  exit 0
 fi
 
 prelink -a
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pseudo/files/0001-Use-epoll-API-on-Linux.patch b/import-layers/yocto-poky/meta/recipes-devtools/pseudo/files/0001-Use-epoll-API-on-Linux.patch
new file mode 100644
index 0000000..42557b1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pseudo/files/0001-Use-epoll-API-on-Linux.patch
@@ -0,0 +1,292 @@
+From 9e407e0be01695e7b927f5820ade87ee9602c248 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Fri, 15 Sep 2017 17:00:14 +0300
+Subject: [PATCH] Use epoll API on Linux
+
+Also a couple of other modifications due to epoll having
+a different approach to how the working set of fds is defined
+and used:
+1) open_client() returns an index into the array of clients
+2) close_client() has a protection against being called twice
+with the same client (which would mess up the active_clients
+counter)
+
+Upstream-Status: Submitted [Seebs CC'd by email]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+
+---
+ enums/exit_status.in |   3 +
+ pseudo_server.c      | 189 ++++++++++++++++++++++++++++++++++++++++++++++++++-
+ 2 files changed, 190 insertions(+), 2 deletions(-)
+
+diff --git a/enums/exit_status.in b/enums/exit_status.in
+index 6be44d3..88f94cd 100644
+--- a/enums/exit_status.in
++++ b/enums/exit_status.in
+@@ -18,3 +18,6 @@ listen_fd, "server loop had no valid listen fd"
+ pseudo_loaded, "server couldn't get out of pseudo environment"
+ pseudo_prefix, "couldn't get valid pseudo prefix"
+ pseudo_invocation, "invalid server command arguments"
++epoll_create, "epoll_create() failed"
++epoll_ctl, "epoll_ctl() failed"
++
+diff --git a/pseudo_server.c b/pseudo_server.c
+index ff16efd..14d34de 100644
+--- a/pseudo_server.c
++++ b/pseudo_server.c
+@@ -40,6 +40,12 @@
+ #include "pseudo_client.h"
+ #include "pseudo_db.h"
+ 
++// This has to come after pseudo includes, as that's where PSEUDO_PORT defines are
++#ifdef PSEUDO_PORT_LINUX
++#include <sys/epoll.h>
++#endif
++
++
+ static int listen_fd = -1;
+ 
+ typedef struct {
+@@ -59,6 +65,7 @@ static int active_clients = 0, highest_client = 0, max_clients = 0;
+ 
+ #define LOOP_DELAY 2
+ #define DEFAULT_PSEUDO_SERVER_TIMEOUT 30
++#define EPOLL_MAX_EVENTS 10
+ int pseudo_server_timeout = DEFAULT_PSEUDO_SERVER_TIMEOUT;
+ static int die_peacefully = 0;
+ static int die_forcefully = 0;
+@@ -80,6 +87,9 @@ quit_now(int signal) {
+ static int messages = 0, responses = 0;
+ static struct timeval message_time = { .tv_sec = 0 };
+ 
++#ifdef PSEUDO_PORT_LINUX
++static void pseudo_server_loop_epoll(void);
++#endif
+ static void pseudo_server_loop(void);
+ 
+ /* helper function to make a directory, just like mkdir -p.
+@@ -369,12 +379,16 @@ pseudo_server_start(int daemonize) {
+ 			kill(ppid, SIGUSR1);
+ 		}
+ 	}
++#ifdef PSEUDO_PORT_LINUX
++	pseudo_server_loop_epoll();
++#else
+ 	pseudo_server_loop();
++#endif
+ 	return 0;
+ }
+ 
+ /* mess with internal tables as needed */
+-static void
++static unsigned int
+ open_client(int fd) {
+ 	pseudo_client_t *new_clients;
+ 	int i;
+@@ -390,7 +404,7 @@ open_client(int fd) {
+ 			++active_clients;
+ 			if (i > highest_client)
+ 				highest_client = i;
+-			return;
++			return i;
+ 		}
+ 	}
+ 
+@@ -414,9 +428,11 @@ open_client(int fd) {
+ 
+ 		max_clients += 16;
+ 		++active_clients;
++		return max_clients - 16;
+ 	} else {
+ 		pseudo_diag("error allocating new client, fd %d\n", fd);
+ 		close(fd);
++		return 0;
+ 	}
+ }
+ 
+@@ -433,6 +449,10 @@ close_client(int client) {
+ 			client, highest_client);
+ 		return;
+ 	}
++	if (clients[client].fd == -1) {
++		pseudo_debug(PDBGF_SERVER, "client %d already closed\n", client);
++		return;
++	}
+ 	close(clients[client].fd);
+ 	clients[client].fd = -1;
+ 	free(clients[client].tag);
+@@ -566,6 +586,171 @@ serve_client(int i) {
+ 	}
+ }
+ 
++#ifdef PSEUDO_PORT_LINUX
++static void pseudo_server_loop_epoll(void)
++{
++	struct sockaddr_un client;
++	socklen_t len;
++        int i;
++        int rc;
++        int fd;
++	int timeout;
++	struct epoll_event ev, events[EPOLL_MAX_EVENTS];
++	int loop_timeout = pseudo_server_timeout;
++
++	clients = malloc(16 * sizeof(*clients));
++
++	clients[0].fd = listen_fd;
++	clients[0].pid = getpid();
++
++	for (i = 1; i < 16; ++i) {
++		clients[i].fd = -1;
++		clients[i].pid = 0;
++		clients[i].tag = NULL;
++		clients[i].program = NULL;
++	}
++
++	active_clients = 1;
++	max_clients = 16;
++	highest_client = 0;
++
++	pseudo_debug(PDBGF_SERVER, "server loop started.\n");
++	if (listen_fd < 0) {
++		pseudo_diag("got into loop with no valid listen fd.\n");
++		exit(PSEUDO_EXIT_LISTEN_FD);
++	}
++
++	timeout = LOOP_DELAY * 1000;
++
++	int epollfd = epoll_create1(0);
++	if (epollfd == -1) {
++		pseudo_diag("epoll_create1() failed.\n");
++		exit(PSEUDO_EXIT_EPOLL_CREATE);
++	}
++	ev.events = EPOLLIN;
++	ev.data.u64 = 0;
++	if (epoll_ctl(epollfd, EPOLL_CTL_ADD, clients[0].fd, &ev) == -1) {
++		pseudo_diag("epoll_ctl() failed with listening socket.\n");
++		exit(PSEUDO_EXIT_EPOLL_CTL);
++	}
++
++	pdb_log_msg(SEVERITY_INFO, NULL, NULL, NULL, "server started (pid %d)", getpid());
++
++        for (;;) {
++		rc = epoll_wait(epollfd, events, EPOLL_MAX_EVENTS, timeout);
++		if (rc == 0 || (rc == -1 && errno == EINTR)) {
++			/* If there's no clients, start timing out.  If there
++			 * are active clients, never time out.
++			 */
++			if (active_clients == 1) {
++				loop_timeout -= LOOP_DELAY;
++                                /* maybe flush database to disk */
++                                pdb_maybe_backup();
++				if (loop_timeout <= 0) {
++					pseudo_debug(PDBGF_SERVER, "no more clients, got bored.\n");
++					die_peacefully = 1;
++				} else {
++					/* display this if not exiting */
++					pseudo_debug(PDBGF_SERVER | PDBGF_BENCHMARK, "%d messages handled in %.4f seconds, %d responses\n",
++						messages,
++						(double) message_time.tv_sec +
++						(double) message_time.tv_usec / 1000000.0,
++                                                responses);
++				}
++			}
++		} else if (rc > 0) {
++			loop_timeout = pseudo_server_timeout;
++			for (i = 0; i < rc; ++i) {
++				if (clients[events[i].data.u64].fd == listen_fd) {
++					if (!die_forcefully) {
++						len = sizeof(client);
++						if ((fd = accept(listen_fd, (struct sockaddr *) &client, &len)) != -1) {
++						/* Don't allow clients to end up on fd 2, because glibc's
++						 * malloc debug uses that fd unconditionally.
++						 */
++							if (fd == 2) {
++								int newfd = fcntl(fd, F_DUPFD, 3);
++								close(fd);
++								fd = newfd;
++							}
++							pseudo_debug(PDBGF_SERVER, "new client fd %d\n", fd);
++		                                        /* A new client implicitly cancels any
++		                                         * previous shutdown request, or a
++		                                         * shutdown for lack of clients.
++		                                         */
++		                                        pseudo_server_timeout = DEFAULT_PSEUDO_SERVER_TIMEOUT;
++		                                        die_peacefully = 0;
++
++							ev.events = EPOLLIN;
++							ev.data.u64 = open_client(fd);
++							if (ev.data.u64 != 0 && epoll_ctl(epollfd, EPOLL_CTL_ADD, clients[ev.data.u64].fd, &ev) == -1) {
++								pseudo_diag("epoll_ctl() failed with accepted socket.\n");
++								exit(PSEUDO_EXIT_EPOLL_CTL);
++							}
++						} else if (errno == EMFILE) {
++							pseudo_debug(PDBGF_SERVER, "Hit max open files, dropping a client.\n");
++		                                        /* In theory there is a potential race here where if we close a client, 
++		                                           it may have sent us a fastop message which we don't act upon.
++		                                           If we don't close a filehandle we'll loop indefinitely thought. 
++		                                           Only close one per loop iteration in the interests of caution */
++				                        for (int j = 1; j <= highest_client; ++j) {
++				                                if (clients[j].fd != -1) {
++				                                        close_client(j);
++									break;
++								}
++							}
++						}
++					}
++				} else {
++					struct timeval tv1, tv2;
++                                        int rc;
++					gettimeofday(&tv1, NULL);
++					rc = serve_client(events[i].data.u64);
++					gettimeofday(&tv2, NULL);
++					++messages;
++                                        if (rc == 0)
++                                                ++responses;
++					message_time.tv_sec += (tv2.tv_sec - tv1.tv_sec);
++					message_time.tv_usec += (tv2.tv_usec - tv1.tv_usec);
++					if (message_time.tv_usec < 0) {
++						message_time.tv_usec += 1000000;
++						--message_time.tv_sec;
++					} else while (message_time.tv_usec > 1000000) {
++						message_time.tv_usec -= 1000000;
++						++message_time.tv_sec;
++					}
++				}
++				if (die_forcefully)
++					break;
++			}
++			pseudo_debug(PDBGF_SERVER, "server loop complete [%d clients left]\n", active_clients);
++		} else {
++			pseudo_diag("epoll_wait failed: %s\n", strerror(errno));
++			break;
++		}
++		if (die_peacefully || die_forcefully) {
++			pseudo_debug(PDBGF_SERVER, "quitting.\n");
++			pseudo_debug(PDBGF_SERVER | PDBGF_BENCHMARK, "server %d exiting: handled %d messages in %.4f seconds\n",
++				getpid(), messages,
++				(double) message_time.tv_sec +
++				(double) message_time.tv_usec / 1000000.0);
++			pdb_log_msg(SEVERITY_INFO, NULL, NULL, NULL, "server %d exiting: handled %d messages in %.4f seconds",
++				getpid(), messages,
++				(double) message_time.tv_sec +
++				(double) message_time.tv_usec / 1000000.0);
++			/* and at this point, we'll start refusing connections */
++			close(clients[0].fd);
++			/* This is a good place to insert a delay for
++			 * debugging race conditions during startup. */
++			/* usleep(300000); */
++			exit(0);
++		}
++	}
++
++}
++
++#endif
++
+ /* get clients, handle messages, shut down.
+  * This doesn't actually do any work, it just calls a ton of things which
+  * do work.
+-- 
+2.14.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pseudo/files/fastopreply.patch b/import-layers/yocto-poky/meta/recipes-devtools/pseudo/files/fastopreply.patch
new file mode 100644
index 0000000..904c2d0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pseudo/files/fastopreply.patch
@@ -0,0 +1,76 @@
+Ensure FASTOP messages get an ACK reply so that the client can be sure the server
+received them. This means if connections are terminated, data isn't lost.
+
+RP 2017/9/22
+
+Upstream-Status: Submitted
+
+Index: pseudo-1.8.2/pseudo_client.c
+===================================================================
+--- pseudo-1.8.2.orig/pseudo_client.c
++++ pseudo-1.8.2/pseudo_client.c
+@@ -1331,21 +1331,19 @@ pseudo_client_request(pseudo_msg_t *msg,
+ 		 * indicating a successful send.
+ 		 */
+ 		pseudo_debug(PDBGF_CLIENT | PDBGF_VERBOSE, "sent!\n");
+-		if (msg->type != PSEUDO_MSG_FASTOP) {
+-			response = pseudo_msg_receive(connect_fd);
+-			if (!response) {
+-				pseudo_debug(PDBGF_CLIENT, "expected response did not occur; retrying\n");
++		response = pseudo_msg_receive(connect_fd);
++		if (!response) {
++			pseudo_debug(PDBGF_CLIENT, "expected response did not occur; retrying\n");
++		} else {
++			if (response->type != PSEUDO_MSG_ACK) {
++				pseudo_debug(PDBGF_CLIENT, "got non-ack response %d\n", response->type);
++				return 0;
++			} else if (msg->type != PSEUDO_MSG_FASTOP) {
++				pseudo_debug(PDBGF_CLIENT | PDBGF_VERBOSE, "got response type %d\n", response->type);
++				return response;
+ 			} else {
+-				if (response->type != PSEUDO_MSG_ACK) {
+-					pseudo_debug(PDBGF_CLIENT, "got non-ack response %d\n", response->type);
+-					return 0;
+-				} else {
+-					pseudo_debug(PDBGF_CLIENT | PDBGF_VERBOSE, "got response type %d\n", response->type);
+-					return response;
+-				}
++				return 0;
+ 			}
+-		} else {
+-			return 0;
+ 		}
+ 	}
+ 	pseudo_diag("pseudo: server connection persistently failed, aborting.\n");
+Index: pseudo-1.8.2/pseudo_server.c
+===================================================================
+--- pseudo-1.8.2.orig/pseudo_server.c
++++ pseudo-1.8.2/pseudo_server.c
+@@ -463,6 +463,11 @@ close_client(int client) {
+ 			--highest_client;
+ }
+ 
++static pseudo_msg_t server_fastop_reply = { 
++        .type = PSEUDO_MSG_ACK,
++        .op = OP_NONE,
++};
++
+ /* Actually process a request.
+  */
+ static int
+@@ -515,8 +520,14 @@ serve_client(int i) {
+ 		 * pseudo_server_response.
+ 		 */
+ 		if (in->type != PSEUDO_MSG_SHUTDOWN) {
+-                        if (in->type == PSEUDO_MSG_FASTOP)
++                        if (in->type == PSEUDO_MSG_FASTOP) {
+                                 send_response = 0;
++                                /* For fastops we reply now to say we got the data */
++                                if ((rc = pseudo_msg_send(clients[i].fd, &server_fastop_reply, 0, NULL)) != 0) {
++                                            pseudo_debug(PDBGF_SERVER, "failed to send fastop ack to client %d [%d]: %d (%s)\n",
++                                                    i, (int) clients[i].pid, rc, strerror(errno));
++                                }
++                        }
+ 			/* most messages don't need these, but xattr may */
+ 			response_path = 0;
+ 			response_pathlen = -1;
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pseudo/files/toomanyfiles.patch b/import-layers/yocto-poky/meta/recipes-devtools/pseudo/files/toomanyfiles.patch
new file mode 100644
index 0000000..b085a45
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pseudo/files/toomanyfiles.patch
@@ -0,0 +1,59 @@
+Currently, if we hit the maximum number of open files, pseudo can deadlock: it is
+unable to accept new connections, yet also unable to move forward and unblock the
+other processes that are waiting.
+
+Rather than hang when this happens, close out inactive connections, allowing us
+to accept the new ones. The disconnected clients will simply reconnect. There is
+a small risk of data loss here, sadly, but it's better than hanging.
+
+RP
+2017/4/25
+
+Upstream-Status: Submitted [Peter is aware of the issue]
+
+Index: pseudo-1.8.2/pseudo_server.c
+===================================================================
+--- pseudo-1.8.2.orig/pseudo_server.c
++++ pseudo-1.8.2/pseudo_server.c
+@@ -581,6 +581,7 @@ pseudo_server_loop(void) {
+ 	int rc;
+ 	int fd;
+ 	int loop_timeout = pseudo_server_timeout;
++	int hitmaxfiles;
+ 
+ 	clients = malloc(16 * sizeof(*clients));
+ 
+@@ -597,6 +598,7 @@ pseudo_server_loop(void) {
+ 	active_clients = 1;
+ 	max_clients = 16;
+ 	highest_client = 0;
++	hitmaxfiles = 0;
+ 
+ 	pseudo_debug(PDBGF_SERVER, "server loop started.\n");
+ 	if (listen_fd < 0) {
+@@ -663,10 +665,15 @@ pseudo_server_loop(void) {
+ 						message_time.tv_usec -= 1000000;
+ 						++message_time.tv_sec;
+ 					}
++				} else if (hitmaxfiles) {
++					/* Only close one per loop iteration in the interests of caution */
++					close_client(i);
++					hitmaxfiles = 0;
+ 				}
+ 				if (die_forcefully)
+ 					break;
+ 			}
++			hitmaxfiles = 0;
+ 			if (!die_forcefully && 
+ 			    (FD_ISSET(clients[0].fd, &events) ||
+ 			     FD_ISSET(clients[0].fd, &reads))) {
+@@ -688,6 +698,9 @@ pseudo_server_loop(void) {
+                                          */
+                                         pseudo_server_timeout = DEFAULT_PSEUDO_SERVER_TIMEOUT;
+                                         die_peacefully = 0;
++				} else if (errno == EMFILE) {
++					hitmaxfiles = 1;
++					pseudo_debug(PDBGF_SERVER, "Hit max open files, dropping a client.\n");
+ 				}
+ 			}
+ 			pseudo_debug(PDBGF_SERVER, "server loop complete [%d clients left]\n", active_clients);
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/pseudo/pseudo_1.8.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/pseudo/pseudo_1.8.2.bb
index b427b9a..73ef572 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/pseudo/pseudo_1.8.2.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/pseudo/pseudo_1.8.2.bb
@@ -7,6 +7,9 @@
            file://moreretries.patch \
            file://efe0be279901006f939cd357ccee47b651c786da.patch \
            file://b6b68db896f9963558334aff7fca61adde4ec10f.patch \
+           file://fastopreply.patch \
+           file://toomanyfiles.patch \
+           file://0001-Use-epoll-API-on-Linux.patch \
            "
 
 SRC_URI[md5sum] = "7d41e72188fbea1f696c399c1a435675"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/files/0001-BUG-fix-infinite-loop-when-creating-np.pad-on-an-emp.patch b/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/files/0001-BUG-fix-infinite-loop-when-creating-np.pad-on-an-emp.patch
new file mode 100644
index 0000000..b9e5856
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/files/0001-BUG-fix-infinite-loop-when-creating-np.pad-on-an-emp.patch
@@ -0,0 +1,45 @@
+From 4170b98e0d5864ef4db1c5704a6e9428c3be9fb8 Mon Sep 17 00:00:00 2001
+From: Iryna Shcherbina <ishcherb@redhat.com>
+Date: Thu, 24 Aug 2017 18:01:43 +0200
+Subject: [PATCH] BUG: fix infinite loop when creating np.pad on an empty array
+
+Upstream-Status: Backport [https://github.com/numpy/numpy/pull/9599/commits/6f9ea0abbd305d53f9017debab3a3a591fe0e249]
+CVE: CVE-2017-12852
+Signed-off-by: Dengke Du <dengke.du@windriver.com>
+---
+ numpy/lib/arraypad.py            | 3 +++
+ numpy/lib/tests/test_arraypad.py | 4 ++++
+ 2 files changed, 7 insertions(+)
+
+diff --git a/numpy/lib/arraypad.py b/numpy/lib/arraypad.py
+index 2dad99c..294a689 100644
+--- a/numpy/lib/arraypad.py
++++ b/numpy/lib/arraypad.py
+@@ -1406,6 +1406,9 @@ def pad(array, pad_width, mode, **kwargs):
+             newmat = _append_min(newmat, pad_after, chunk_after, axis)
+ 
+     elif mode == 'reflect':
++        if narray.size == 0:
++            raise ValueError("There aren't any elements to reflect in `array`")
++
+         for axis, (pad_before, pad_after) in enumerate(pad_width):
+             # Recursive padding along any axis where `pad_amt` is too large
+             # for indexing tricks. We can only safely pad the original axis
+diff --git a/numpy/lib/tests/test_arraypad.py b/numpy/lib/tests/test_arraypad.py
+index 056aa45..0f71d32 100644
+--- a/numpy/lib/tests/test_arraypad.py
++++ b/numpy/lib/tests/test_arraypad.py
+@@ -1014,6 +1014,10 @@ class ValueError1(TestCase):
+         assert_raises(ValueError, pad, arr, ((-2, 3), (3, 2)),
+                       **kwargs)
+ 
++    def test_check_empty_array(self):
++        assert_raises(ValueError, pad, [], 4, mode='reflect')
++        assert_raises(ValueError, pad, np.ndarray(0), 4, mode='reflect')
++
+ 
+ class ValueError2(TestCase):
+     def test_check_negative_pad_amount(self):
+-- 
+2.8.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/files/0001-Don-t-search-usr-and-so-on-for-libraries-by-default-.patch b/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/files/0001-Don-t-search-usr-and-so-on-for-libraries-by-default-.patch
index 5b134ed..ffd6ced 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/files/0001-Don-t-search-usr-and-so-on-for-libraries-by-default-.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/files/0001-Don-t-search-usr-and-so-on-for-libraries-by-default-.patch
@@ -11,11 +11,11 @@
  numpy/distutils/system_info.py | 50 +++++-------------------------------------
  1 file changed, 6 insertions(+), 44 deletions(-)
 
-diff --git a/numpy/distutils/system_info.py b/numpy/distutils/system_info.py
-index 9dd48e2..80e197a 100644
---- a/numpy/distutils/system_info.py
-+++ b/numpy/distutils/system_info.py
-@@ -204,51 +204,13 @@ if sys.platform == 'win32':
+Index: numpy-1.13.1/numpy/distutils/system_info.py
+===================================================================
+--- numpy-1.13.1.orig/numpy/distutils/system_info.py
++++ numpy-1.13.1/numpy/distutils/system_info.py
+@@ -211,51 +211,13 @@ if sys.platform == 'win32':
      default_x11_lib_dirs = []
      default_x11_include_dirs = []
  else:
@@ -29,7 +29,10 @@
 -                            '/opt/local/include', '/sw/include',
 -                            '/usr/include/suitesparse']
 -    default_src_dirs = ['.', '/usr/local/src', '/opt/src', '/sw/src']
--
++    default_lib_dirs = libpaths(['/deadir/lib'], platform_bits)
++    default_include_dirs = ['/deaddir/include']
++    default_src_dirs = ['.', '/deaddir/src']
+ 
 -    default_x11_lib_dirs = libpaths(['/usr/X11R6/lib', '/usr/X11/lib',
 -                                     '/usr/lib'], platform_bits)
 -    default_x11_include_dirs = ['/usr/X11R6/include', '/usr/X11/include',
@@ -50,7 +53,7 @@
 -        # tests are run in debug mode Python 3.
 -        tmp = open(os.devnull, 'w')
 -        p = sp.Popen(["gcc", "-print-multiarch"], stdout=sp.PIPE,
--                stderr=tmp)
+-                     stderr=tmp)
 -    except (OSError, DistutilsError):
 -        # OSError if gcc is not installed, or SandboxViolation (DistutilsError
 -        # subclass) if an old setuptools bug is triggered (see gh-3160).
@@ -64,15 +67,8 @@
 -    finally:
 -        if tmp is not None:
 -            tmp.close()
-+    default_lib_dirs = libpaths(['/deadir/lib'], platform_bits)
-+    default_include_dirs = ['/deaddir/include']
-+    default_src_dirs = ['.', '/deaddir/src']
-+
 +    default_x11_lib_dirs = libpaths(['/deaddir/lib'], platform_bits)
 +    default_x11_include_dirs = ['/deaddir/include']
  
  if os.path.join(sys.prefix, 'lib') not in default_lib_dirs:
      default_lib_dirs.insert(0, os.path.join(sys.prefix, 'lib'))
--- 
-2.6.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/files/d70d37b7c4aa2af3fe879a0d858c54f2aa32a725.patch b/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/files/d70d37b7c4aa2af3fe879a0d858c54f2aa32a725.patch
deleted file mode 100644
index 08cb078..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/files/d70d37b7c4aa2af3fe879a0d858c54f2aa32a725.patch
+++ /dev/null
@@ -1,47 +0,0 @@
-From 154b2c19f392817a936aea0190e276f8228cb489 Mon Sep 17 00:00:00 2001
-From: "Erik M. Bray" <erik.bray@lri.fr>
-Date: Mon, 12 Dec 2016 13:07:16 +0100
-Subject: [PATCH] BUG: xlocale.h is not available in newlib--all the defines
- used here from xlocale.h are instead found in locale.h
-
-Added a feature check for xlocale.h, with fallback to locale.h if it is
-missing.
----
- numpy/core/setup_common.py          | 1 +
- numpy/core/src/multiarray/numpyos.c | 8 +++++++-
- 2 files changed, 8 insertions(+), 1 deletion(-)
-
-Upstream-Status: Backport
-RP 2017/9/6
-
-diff --git a/numpy/core/setup_common.py b/numpy/core/setup_common.py
-index ba7521e3043..a1729e65656 100644
---- a/numpy/core/setup_common.py
-+++ b/numpy/core/setup_common.py
-@@ -113,6 +113,7 @@ def check_api_version(apiversion, codegen_dir):
-                 "xmmintrin.h",  # SSE
-                 "emmintrin.h",  # SSE2
-                 "features.h",  # for glibc version linux
-+                "xlocale.h"  # see GH#8367
- ]
- 
- # optional gcc compiler builtins and their call arguments and optional a
-diff --git a/numpy/core/src/multiarray/numpyos.c b/numpy/core/src/multiarray/numpyos.c
-index 450ec40b6e0..84617ea78c3 100644
---- a/numpy/core/src/multiarray/numpyos.c
-+++ b/numpy/core/src/multiarray/numpyos.c
-@@ -15,7 +15,13 @@
- 
- #ifdef HAVE_STRTOLD_L
- #include <stdlib.h>
--#include <xlocale.h>
-+#ifdef HAVE_XLOCALE_H
-+    /*
-+     * the defines from xlocale.h are included in locale.h on some sytems;
-+     * see gh-8367
-+     */
-+    #include <xlocale.h>
-+#endif
- #endif
- 
- 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/python-numpy_1.11.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/python-numpy_1.11.2.bb
deleted file mode 100644
index 04c96d7..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/python-numpy_1.11.2.bb
+++ /dev/null
@@ -1,113 +0,0 @@
-SUMMARY = "A sophisticated Numeric Processing Package for Python"
-SECTION = "devel/python"
-LICENSE = "PSF"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=7e51a5677b22b865abbfb3dff6ffb2d0"
-
-SRCNAME = "numpy"
-
-SRC_URI = "https://files.pythonhosted.org/packages/source/n/${SRCNAME}/${SRCNAME}-${PV}.tar.gz \
-           file://0001-Don-t-search-usr-and-so-on-for-libraries-by-default-.patch \
-           file://remove-build-path-in-comments.patch \
-           file://fix_shebang_f2py.patch \
-           file://d70d37b7c4aa2af3fe879a0d858c54f2aa32a725.patch \
-           ${CONFIGFILESURI} "
-UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/numpy/files/"
-
-CONFIGFILESURI ?= ""
-
-CONFIGFILESURI_aarch64 = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_arm = " \
-    file://config.h \
-    file://numpyconfig.h \
-"
-CONFIGFILESURI_armeb = " \
-    file://config.h \
-    file://numpyconfig.h \
-"
-CONFIGFILESURI_mipsarcho32el = " \
-    file://config.h \
-    file://numpyconfig.h \
-"
-CONFIGFILESURI_x86 = " \
-    file://config.h \
-    file://numpyconfig.h \
-"
-CONFIGFILESURI_x86-64 = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_mipsarcho32eb = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_powerpc = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_powerpc64 = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_mipsarchn64eb = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_mipsarchn64el = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_mipsarchn32eb = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_mipsarchn32el = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-
-S = "${WORKDIR}/numpy-${PV}"
-
-inherit setuptools
-
-# Make the build fail and replace *config.h with proper one
-# This is a ugly, ugly hack - Koen
-do_compile_prepend_class-target() {
-    ${STAGING_BINDIR_NATIVE}/python-native/python setup.py build ${DISTUTILS_BUILD_ARGS} || \
-    true
-    cp ${WORKDIR}/*config.h ${S}/build/$(ls ${S}/build | grep src)/numpy/core/include/numpy/
-}
-
-FILES_${PN}-staticdev += "${PYTHON_SITEPACKAGES_DIR}/numpy/core/lib/*.a"
-
-SRC_URI[md5sum] = "03bd7927c314c43780271bf1ab795ebc"
-SRC_URI[sha256sum] = "04db2fbd64e2e7c68e740b14402b25af51418fc43a59d9e54172b38b906b0f69"
-
-# install what is needed for numpy.test()
-RDEPENDS_${PN} = "python-unittest \
-                  python-difflib \
-                  python-pprint \
-                  python-pickle \
-                  python-shell \
-                  python-nose \
-                  python-doctest \
-                  python-datetime \
-                  python-distutils \
-                  python-misc \
-                  python-mmap \
-                  python-netclient \
-                  python-numbers \
-                  python-pydoc \
-                  python-pkgutil \
-                  python-email \
-                  python-subprocess \
-                  python-compression \
-                  python-ctypes \
-                  python-threading \
-"
-
-RDEPENDS_${PN}_class-native = ""
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/python-numpy_1.13.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/python-numpy_1.13.1.bb
new file mode 100644
index 0000000..13e8f4f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/python-numpy_1.13.1.bb
@@ -0,0 +1,114 @@
+SUMMARY = "A sophisticated Numeric Processing Package for Python"
+SECTION = "devel/python"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=1002b09cd654fcaa2dcc87535acd9a96"
+
+SRCNAME = "numpy"
+
+SRC_URI = "https://github.com/${SRCNAME}/${SRCNAME}/releases/download/v${PV}/${SRCNAME}-${PV}.tar.gz \
+           file://0001-Don-t-search-usr-and-so-on-for-libraries-by-default-.patch \
+           file://remove-build-path-in-comments.patch \
+           file://fix_shebang_f2py.patch \
+           file://0001-BUG-fix-infinite-loop-when-creating-np.pad-on-an-emp.patch \
+           ${CONFIGFILESURI} "
+
+SRC_URI[md5sum] = "6d459e4a24f5035f720dda3c57716a92"
+SRC_URI[sha256sum] = "de020ec06f1e9ce1115a50161a38bf8d4c2525379900f9cb478cc613a1e7cd93"
+
+UPSTREAM_CHECK_URI = "https://github.com/numpy/numpy/releases"
+
+CONFIGFILESURI ?= ""
+
+CONFIGFILESURI_aarch64 = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_arm = " \
+    file://config.h \
+    file://numpyconfig.h \
+"
+CONFIGFILESURI_armeb = " \
+    file://config.h \
+    file://numpyconfig.h \
+"
+CONFIGFILESURI_mipsarcho32el = " \
+    file://config.h \
+    file://numpyconfig.h \
+"
+CONFIGFILESURI_x86 = " \
+    file://config.h \
+    file://numpyconfig.h \
+"
+CONFIGFILESURI_x86-64 = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_mipsarcho32eb = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_powerpc = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_powerpc64 = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_mipsarchn64eb = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_mipsarchn64el = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_mipsarchn32eb = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_mipsarchn32el = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+
+S = "${WORKDIR}/numpy-${PV}"
+
+inherit setuptools
+
+# Make the build fail and replace *config.h with the proper one
+# This is an ugly, ugly hack - Koen
+do_compile_prepend_class-target() {
+    ${STAGING_BINDIR_NATIVE}/python-native/python setup.py build ${DISTUTILS_BUILD_ARGS} || \
+    true
+    cp ${WORKDIR}/*config.h ${S}/build/$(ls ${S}/build | grep src)/numpy/core/include/numpy/
+}
+
+FILES_${PN}-staticdev += "${PYTHON_SITEPACKAGES_DIR}/numpy/core/lib/*.a"
+
+# install what is needed for numpy.test()
+RDEPENDS_${PN} = "python-unittest \
+                  python-difflib \
+                  python-pprint \
+                  python-pickle \
+                  python-shell \
+                  python-nose \
+                  python-doctest \
+                  python-datetime \
+                  python-distutils \
+                  python-misc \
+                  python-mmap \
+                  python-netclient \
+                  python-numbers \
+                  python-pydoc \
+                  python-pkgutil \
+                  python-email \
+                  python-subprocess \
+                  python-compression \
+                  python-ctypes \
+                  python-threading \
+"
+
+RDEPENDS_${PN}_class-native = ""
+
+BBCLASSEXTEND = "native nativesdk"
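
As the comment in the recipe notes, the RDEPENDS list is sized to what numpy.test() needs at runtime. A quick on-target smoke test might look like this (hedged sketch; assumes the image ships python-nose and the other modules listed above):

    import numpy

    print(numpy.__version__)   # expected to report 1.13.1 for this recipe
    numpy.test('fast')         # nose-based test run; this is what the RDEPENDS above serve
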
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/python3-numpy_1.11.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/python3-numpy_1.11.2.bb
deleted file mode 100644
index 8f9665f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/python3-numpy_1.11.2.bb
+++ /dev/null
@@ -1,114 +0,0 @@
-SUMMARY = "A sophisticated Numeric Processing Package for Python"
-SECTION = "devel/python"
-LICENSE = "PSF"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=7e51a5677b22b865abbfb3dff6ffb2d0"
-
-SRCNAME = "numpy"
-
-SRC_URI = "https://files.pythonhosted.org/packages/source/n/${SRCNAME}/${SRCNAME}-${PV}.tar.gz \
-           file://0001-Don-t-search-usr-and-so-on-for-libraries-by-default-.patch \
-           file://remove-build-path-in-comments.patch \
-           file://fix_shebang_f2py.patch \
-           file://d70d37b7c4aa2af3fe879a0d858c54f2aa32a725.patch \
-           ${CONFIGFILESURI} "
-UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/numpy/files/"
-
-CONFIGFILESURI ?= ""
-
-CONFIGFILESURI_aarch64 = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_arm = " \
-    file://config.h \
-    file://numpyconfig.h \
-"
-CONFIGFILESURI_armeb = " \
-    file://config.h \
-    file://numpyconfig.h \
-"
-CONFIGFILESURI_mipsarcho32el = " \
-    file://config.h \
-    file://numpyconfig.h \
-"
-CONFIGFILESURI_x86 = " \
-    file://config.h \
-    file://numpyconfig.h \
-"
-CONFIGFILESURI_x86-64 = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_mipsarcho32eb = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_powerpc = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_powerpc64 = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_mipsarchn64eb = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_mipsarchn64el = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_mipsarchn32eb = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-CONFIGFILESURI_mipsarchn32el = " \
-    file://config.h \
-    file://_numpyconfig.h \
-"
-
-S = "${WORKDIR}/numpy-${PV}"
-
-inherit setuptools3
-
-# Make the build fail and replace *config.h with proper one
-# This is a ugly, ugly hack - Koen
-do_compile_prepend_class-target() {
-    ${STAGING_BINDIR_NATIVE}/python3-native/python3 setup.py build ${DISTUTILS_BUILD_ARGS} || \
-    true
-    cp ${WORKDIR}/*config.h ${S}/build/$(ls ${S}/build | grep src)/numpy/core/include/numpy/
-}
-
-FILES_${PN}-staticdev += "${PYTHON_SITEPACKAGES_DIR}/numpy/core/lib/*.a"
-
-SRC_URI[md5sum] = "03bd7927c314c43780271bf1ab795ebc"
-SRC_URI[sha256sum] = "04db2fbd64e2e7c68e740b14402b25af51418fc43a59d9e54172b38b906b0f69"
-
-# install what is needed for numpy.test()
-RDEPENDS_${PN} = "python3-unittest \
-                  python3-difflib \
-                  python3-pprint \
-                  python3-pickle \
-                  python3-shell \
-                  python3-nose \
-                  python3-doctest \
-                  python3-datetime \
-                  python3-distutils \
-                  python3-misc \
-                  python3-mmap \
-                  python3-netclient \
-                  python3-numbers \
-                  python3-pydoc \
-                  python3-pkgutil \
-                  python3-email \
-                  python3-subprocess \
-                  python3-compression \
-                  python3-ctypes \
-                  python3-threading \
-                  python3-textutils \
-"
-
-RDEPENDS_${PN}_class-native = ""
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/python3-numpy_1.13.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/python3-numpy_1.13.1.bb
new file mode 100644
index 0000000..29874b8
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python-numpy/python3-numpy_1.13.1.bb
@@ -0,0 +1,114 @@
+SUMMARY = "A sophisticated Numeric Processing Package for Python"
+SECTION = "devel/python"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=1002b09cd654fcaa2dcc87535acd9a96"
+
+SRCNAME = "numpy"
+
+SRC_URI = "https://github.com/${SRCNAME}/${SRCNAME}/releases/download/v${PV}/${SRCNAME}-${PV}.tar.gz \
+           file://0001-Don-t-search-usr-and-so-on-for-libraries-by-default-.patch \
+           file://remove-build-path-in-comments.patch \
+           file://fix_shebang_f2py.patch \
+           file://0001-BUG-fix-infinite-loop-when-creating-np.pad-on-an-emp.patch \
+           ${CONFIGFILESURI} "
+SRC_URI[md5sum] = "6d459e4a24f5035f720dda3c57716a92"
+SRC_URI[sha256sum] = "de020ec06f1e9ce1115a50161a38bf8d4c2525379900f9cb478cc613a1e7cd93"
+
+UPSTREAM_CHECK_URI = "https://github.com/numpy/numpy/releases"
+
+CONFIGFILESURI ?= ""
+
+CONFIGFILESURI_aarch64 = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_arm = " \
+    file://config.h \
+    file://numpyconfig.h \
+"
+CONFIGFILESURI_armeb = " \
+    file://config.h \
+    file://numpyconfig.h \
+"
+CONFIGFILESURI_mipsarcho32el = " \
+    file://config.h \
+    file://numpyconfig.h \
+"
+CONFIGFILESURI_x86 = " \
+    file://config.h \
+    file://numpyconfig.h \
+"
+CONFIGFILESURI_x86-64 = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_mipsarcho32eb = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_powerpc = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_powerpc64 = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_mipsarchn64eb = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_mipsarchn64el = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_mipsarchn32eb = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+CONFIGFILESURI_mipsarchn32el = " \
+    file://config.h \
+    file://_numpyconfig.h \
+"
+
+S = "${WORKDIR}/numpy-${PV}"
+
+inherit setuptools3
+
+# Make the build fail and replace *config.h with the proper one
+# This is an ugly, ugly hack - Koen
+do_compile_prepend_class-target() {
+    ${STAGING_BINDIR_NATIVE}/python3-native/python3 setup.py build ${DISTUTILS_BUILD_ARGS} || \
+    true
+    cp ${WORKDIR}/*config.h ${S}/build/$(ls ${S}/build | grep src)/numpy/core/include/numpy/
+}
+
+FILES_${PN}-staticdev += "${PYTHON_SITEPACKAGES_DIR}/numpy/core/lib/*.a"
+
+# install what is needed for numpy.test()
+RDEPENDS_${PN} = "python3-unittest \
+                  python3-difflib \
+                  python3-pprint \
+                  python3-pickle \
+                  python3-shell \
+                  python3-nose \
+                  python3-doctest \
+                  python3-datetime \
+                  python3-distutils \
+                  python3-misc \
+                  python3-mmap \
+                  python3-netclient \
+                  python3-numbers \
+                  python3-pydoc \
+                  python3-pkgutil \
+                  python3-email \
+                  python3-subprocess \
+                  python3-compression \
+                  python3-ctypes \
+                  python3-threading \
+                  python3-textutils \
+"
+
+RDEPENDS_${PN}_class-native = ""
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-2.7-manifest.inc b/import-layers/yocto-poky/meta/recipes-devtools/python/python-2.7-manifest.inc
index 7ed254b..57d4834 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-2.7-manifest.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python-2.7-manifest.inc
@@ -281,7 +281,7 @@
 FILES_${PN}-zlib="${libdir}/python2.7/lib-dynload/zlib.so "
 
 SUMMARY_${PN}-modules="All Python modules"
-RDEPENDS_${PN}-modules="${PN}-2to3 ${PN}-argparse ${PN}-audio ${PN}-bsddb ${PN}-codecs ${PN}-compile ${PN}-compiler ${PN}-compression ${PN}-contextlib ${PN}-core ${PN}-crypt ${PN}-ctypes ${PN}-curses ${PN}-datetime ${PN}-db ${PN}-debugger ${PN}-difflib ${PN}-distutils ${PN}-doctest ${PN}-email ${PN}-fcntl ${PN}-gdbm ${PN}-hotshot ${PN}-html ${PN}-idle ${PN}-image ${PN}-importlib ${PN}-io ${PN}-json ${PN}-lang ${PN}-logging ${PN}-mailbox ${PN}-math ${PN}-mime ${PN}-mmap ${PN}-multiprocessing ${PN}-netclient ${PN}-netserver ${PN}-numbers ${PN}-pickle ${PN}-pkgutil ${PN}-plistlib ${PN}-pprint ${PN}-profile ${PN}-pydoc ${PN}-re ${PN}-readline ${PN}-resource ${PN}-robotparser ${PN}-shell ${PN}-smtpd ${PN}-sqlite3 ${PN}-sqlite3-tests ${PN}-stringold ${PN}-subprocess ${PN}-syslog ${PN}-terminal ${PN}-tests ${PN}-textutils ${PN}-threading ${PN}-tkinter ${PN}-unittest ${PN}-unixadmin ${PN}-xml ${PN}-xmlrpc ${PN}-zlib  "
+RDEPENDS_${PN}-modules="${PN}-2to3 ${PN}-argparse ${PN}-audio ${PN}-bsddb ${PN}-codecs ${PN}-compile ${PN}-compiler ${PN}-compression ${PN}-contextlib ${PN}-core ${PN}-crypt ${PN}-ctypes ${PN}-curses ${PN}-datetime ${PN}-db ${PN}-debugger ${PN}-difflib ${PN}-distutils ${PN}-doctest ${PN}-email ${PN}-fcntl ${PN}-gdbm ${PN}-hotshot ${PN}-html ${PN}-idle ${PN}-image ${PN}-importlib ${PN}-io ${PN}-json ${PN}-lang ${PN}-logging ${PN}-mailbox ${PN}-math ${PN}-mime ${PN}-mmap ${PN}-multiprocessing ${PN}-netclient ${PN}-netserver ${PN}-numbers ${PN}-pickle ${PN}-pkgutil ${PN}-plistlib ${PN}-pprint ${PN}-profile ${PN}-pydoc ${PN}-re ${PN}-readline ${PN}-resource ${PN}-robotparser ${PN}-shell ${PN}-smtpd ${PN}-sqlite3 ${PN}-sqlite3-tests ${PN}-stringold ${PN}-subprocess ${PN}-syslog ${PN}-terminal ${PN}-textutils ${PN}-threading ${PN}-tkinter ${PN}-unittest ${PN}-unixadmin ${PN}-xml ${PN}-xmlrpc ${PN}-zlib  "
 ALLOW_EMPTY_${PN}-modules = "1"
 
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-3.5-manifest.inc b/import-layers/yocto-poky/meta/recipes-devtools/python/python-3.5-manifest.inc
index 1e20f00..0260e87 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-3.5-manifest.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python-3.5-manifest.inc
@@ -1,6 +1,6 @@
 
 # WARNING: This file is AUTO GENERATED: Manual edits will be lost next time I regenerate the file.
-# Generator: '../../../scripts/contrib/python/generate-manifest-3.5.py' Version 20140131 (C) 2002-2010 Michael 'Mickey' Lauer <mlauer@vanille-media.de>
+# Generator: 'scripts/contrib/python/generate-manifest-3.5.py' Version 20140131 (C) 2002-2010 Michael 'Mickey' Lauer <mlauer@vanille-media.de>
 
  
 
@@ -34,7 +34,7 @@
 
 SUMMARY_${PN}-compression="Python high-level compression support"
 RDEPENDS_${PN}-compression="${PN}-core ${PN}-codecs ${PN}-importlib ${PN}-threading ${PN}-shell"
-FILES_${PN}-compression="${libdir}/python3.5/gzip.* ${libdir}/python3.5/__pycache__/gzip.* ${libdir}/python3.5/zipfile.* ${libdir}/python3.5/__pycache__/zipfile.* ${libdir}/python3.5/tarfile.* ${libdir}/python3.5/__pycache__/tarfile.* ${libdir}/python3.5/lib-dynload/bz2.*.so ${libdir}/python3.5/lib-dynload/__pycache__/bz2.*.so ${libdir}/python3.5/lib-dynload/zlib.*.so ${libdir}/python3.5/lib-dynload/__pycache__/zlib.*.so "
+FILES_${PN}-compression="${libdir}/python3.5/gzip.* ${libdir}/python3.5/__pycache__/gzip.* ${libdir}/python3.5/zipfile.* ${libdir}/python3.5/__pycache__/zipfile.* ${libdir}/python3.5/tarfile.* ${libdir}/python3.5/__pycache__/tarfile.* ${libdir}/python3.5/lib-dynload/bz2.*.so ${libdir}/python3.5/lib-dynload/__pycache__/bz2.*.so ${libdir}/python3.5/lib-dynload/zlib.*.so ${libdir}/python3.5/lib-dynload/__pycache__/zlib.*.so ${libdir}/python3.5/bz2.py ${libdir}/python3.5/__pycache__/bz2.py ${libdir}/python3.5/lzma.py ${libdir}/python3.5/__pycache__/lzma.py ${libdir}/python3.5/_compression.py ${libdir}/python3.5/__pycache__/_compression.py "
 
 SUMMARY_${PN}-core="Python interpreter and core modules"
 RDEPENDS_${PN}-core="${PN}-lang ${PN}-re ${PN}-reprlib ${PN}-codecs ${PN}-io ${PN}-math"
@@ -66,7 +66,7 @@
 
 SUMMARY_${PN}-dev="Python development package"
 RDEPENDS_${PN}-dev="${PN}-core"
-FILES_${PN}-dev="${includedir} ${libdir}/lib*${SOLIBSDEV} ${libdir}/*.la ${libdir}/*.a ${libdir}/*.o ${libdir}/pkgconfig ${base_libdir}/*.a ${base_libdir}/*.o ${datadir}/aclocal ${datadir}/pkgconfig ${libdir}/python3.5/config/Makefile ${libdir}/python3.5/config/Makefile/__pycache__ "
+FILES_${PN}-dev="${includedir} ${libdir}/lib*${SOLIBSDEV} ${libdir}/*.la ${libdir}/*.a ${libdir}/*.o ${libdir}/pkgconfig ${base_libdir}/*.a ${base_libdir}/*.o ${datadir}/aclocal ${datadir}/pkgconfig ${libdir}/python3.5/config*/Makefile ${libdir}/python3.5/config*/Makefile/__pycache__ "
 
 SUMMARY_${PN}-difflib="Python helpers for computing deltas between objects"
 RDEPENDS_${PN}-difflib="${PN}-lang ${PN}-re"
@@ -241,7 +241,7 @@
 FILES_${PN}-terminal="${libdir}/python3.5/pty.* ${libdir}/python3.5/__pycache__/pty.* ${libdir}/python3.5/tty.* ${libdir}/python3.5/__pycache__/tty.* "
 
 SUMMARY_${PN}-tests="Python tests"
-RDEPENDS_${PN}-tests="${PN}-core"
+RDEPENDS_${PN}-tests="${PN}-core ${PN}-compression"
 FILES_${PN}-tests="${libdir}/python3.5/test ${libdir}/python3.5/test/__pycache__ "
 
 SUMMARY_${PN}-textutils="Python option parsing, text wrapping and CSV support"
@@ -277,7 +277,7 @@
 FILES_${PN}-xmlrpc="${libdir}/python3.5/xmlrpclib.* ${libdir}/python3.5/__pycache__/xmlrpclib.* ${libdir}/python3.5/SimpleXMLRPCServer.* ${libdir}/python3.5/__pycache__/SimpleXMLRPCServer.* ${libdir}/python3.5/DocXMLRPCServer.* ${libdir}/python3.5/__pycache__/DocXMLRPCServer.* ${libdir}/python3.5/xmlrpc ${libdir}/python3.5/xmlrpc/__pycache__ "
 
 SUMMARY_${PN}-modules="All Python modules"
-RDEPENDS_${PN}-modules="${PN}-2to3 ${PN}-argparse ${PN}-asyncio ${PN}-audio ${PN}-codecs ${PN}-compile ${PN}-compression ${PN}-core ${PN}-crypt ${PN}-ctypes ${PN}-curses ${PN}-datetime ${PN}-db ${PN}-debugger ${PN}-difflib ${PN}-distutils ${PN}-doctest ${PN}-email ${PN}-enum ${PN}-fcntl ${PN}-gdbm ${PN}-html ${PN}-idle ${PN}-image ${PN}-importlib ${PN}-io ${PN}-json ${PN}-lang ${PN}-logging ${PN}-mailbox ${PN}-math ${PN}-mime ${PN}-mmap ${PN}-multiprocessing ${PN}-netclient ${PN}-netserver ${PN}-numbers ${PN}-pickle ${PN}-pkgutil ${PN}-pprint ${PN}-profile ${PN}-pydoc ${PN}-re ${PN}-readline ${PN}-reprlib ${PN}-resource ${PN}-selectors ${PN}-shell ${PN}-signal ${PN}-smtpd ${PN}-sqlite3 ${PN}-sqlite3-tests ${PN}-stringold ${PN}-subprocess ${PN}-syslog ${PN}-terminal ${PN}-tests ${PN}-textutils ${PN}-threading ${PN}-tkinter ${PN}-typing ${PN}-unittest ${PN}-unixadmin ${PN}-xml ${PN}-xmlrpc  "
+RDEPENDS_${PN}-modules="${PN}-2to3 ${PN}-argparse ${PN}-asyncio ${PN}-audio ${PN}-codecs ${PN}-compile ${PN}-compression ${PN}-core ${PN}-crypt ${PN}-ctypes ${PN}-curses ${PN}-datetime ${PN}-db ${PN}-debugger ${PN}-difflib ${PN}-distutils ${PN}-doctest ${PN}-email ${PN}-enum ${PN}-fcntl ${PN}-gdbm ${PN}-html ${PN}-idle ${PN}-image ${PN}-importlib ${PN}-io ${PN}-json ${PN}-lang ${PN}-logging ${PN}-mailbox ${PN}-math ${PN}-mime ${PN}-mmap ${PN}-multiprocessing ${PN}-netclient ${PN}-netserver ${PN}-numbers ${PN}-pickle ${PN}-pkgutil ${PN}-pprint ${PN}-profile ${PN}-pydoc ${PN}-re ${PN}-readline ${PN}-reprlib ${PN}-resource ${PN}-selectors ${PN}-shell ${PN}-signal ${PN}-smtpd ${PN}-sqlite3 ${PN}-sqlite3-tests ${PN}-stringold ${PN}-subprocess ${PN}-syslog ${PN}-terminal ${PN}-textutils ${PN}-threading ${PN}-tkinter ${PN}-typing ${PN}-unittest ${PN}-unixadmin ${PN}-xml ${PN}-xmlrpc  "
 ALLOW_EMPTY_${PN}-modules = "1"
 
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-async_0.6.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python-async_0.6.2.bb
deleted file mode 100644
index d855e42..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-async_0.6.2.bb
+++ /dev/null
@@ -1,5 +0,0 @@
-require python-async.inc
-
-inherit setuptools
-
-RDEPENDS_${PN} += "python-threading python-lang"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-git.inc b/import-layers/yocto-poky/meta/recipes-devtools/python/python-git.inc
index feddf27..777608c 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-git.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python-git.inc
@@ -10,8 +10,8 @@
 
 SRC_URI = "https://files.pythonhosted.org/packages/source/G/GitPython/GitPython-${PV}.tar.gz"
 
-SRC_URI[md5sum] = "77f8339e68dedb6d7c4e26371a588ed9"
-SRC_URI[sha256sum] = "e96f8e953cf9fee0a7599fc587667591328760b6341a0081ef311a942fc96204"
+SRC_URI[md5sum] = "df94212b19d270a625b67b4c84ac9a41"
+SRC_URI[sha256sum] = "5c00cbd256e2b1d039381d4f7d71fcb7ee5cc196ca10c101ff7191bd82ab5d9c"
 
 UPSTREAM_CHECK_URI = "https://pypi.python.org/pypi/GitPython/"
 UPSTREAM_CHECK_REGEX = "/GitPython/(?P<pver>(\d+[\.\-_]*)+)"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-git_2.1.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python-git_2.1.1.bb
deleted file mode 100644
index e49dbea..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-git_2.1.1.bb
+++ /dev/null
@@ -1,7 +0,0 @@
-require python-git.inc
-
-DEPENDS = "python-gitdb"
-
-inherit setuptools
-
-RDEPENDS_${PN} += "python-gitdb python-lang python-io python-shell python-math python-re python-subprocess python-stringold python-unixadmin"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-gitdb_0.6.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python-gitdb_0.6.4.bb
deleted file mode 100644
index 1777395..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-gitdb_0.6.4.bb
+++ /dev/null
@@ -1,7 +0,0 @@
-require python-gitdb.inc
-
-DEPENDS = "python-async python-smmap"
-
-inherit distutils
-
-RDEPENDS_${PN} += "python-smmap python-async python-mmap python-lang python-zlib python-io python-shell"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-mako.inc b/import-layers/yocto-poky/meta/recipes-devtools/python/python-mako.inc
index 10364db..1c83af6 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-mako.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python-mako.inc
@@ -1,12 +1,13 @@
 SUMMARY = "Templating library for Python"
+HOMEPAGE = "http://www.makotemplates.org/"
 SECTION = "devel/python"
 LICENSE = "MIT"
 LIC_FILES_CHKSUM = "file://LICENSE;md5=1bb21fa2d2f7a534c884b990430a6863"
 
 SRC_URI = "https://files.pythonhosted.org/packages/source/M/Mako/Mako-${PV}.tar.gz"
 
-SRC_URI[md5sum] = "a28e22a339080316b2acc352b9ee631c"
-SRC_URI[sha256sum] = "48559ebd872a8e77f92005884b3d88ffae552812cdf17db6768e5c3be5ebbe0d"
+SRC_URI[md5sum] = "5836cc997b1b773ef389bf6629c30e65"
+SRC_URI[sha256sum] = "4e02fde57bd4abb5ec400181e4c314f56ac3e49ba4fb8b0d50bba18cb27d25ae"
 
 UPSTREAM_CHECK_URI = "https://pypi.python.org/pypi/mako/"
 UPSTREAM_CHECK_REGEX = "/Mako/(?P<pver>(\d+[\.\-_]*)+)"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-mako_1.0.6.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python-mako_1.0.6.bb
deleted file mode 100644
index 230044e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-mako_1.0.6.bb
+++ /dev/null
@@ -1,17 +0,0 @@
-require python-mako.inc
-
-inherit setuptools
-
-RDEPENDS_${PN} = "python-threading \
-                  python-netclient \
-                  python-html \
-"
-RDEPENDS_${PN}_class-native = ""
-
-BBCLASSEXTEND = "native nativesdk"
-
-# The same utility is packaged in python3-mako, so it would conflict
-do_install_append() {
-    rm -f ${D}${bindir}/mako-render
-    rmdir ${D}${bindir}
-}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-native-2.7-manifest.inc b/import-layers/yocto-poky/meta/recipes-devtools/python/python-native-2.7-manifest.inc
index 581a37a..b05aae0 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-native-2.7-manifest.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python-native-2.7-manifest.inc
@@ -1,10 +1,9 @@
 
 # WARNING: This file is AUTO GENERATED: Manual edits will be lost next time I regenerate the file.
-# Generator: '../../../scripts/contrib/python/generate-manifest-2.7.py --native' Version 20110222.2 (C) 2002-2010 Michael 'Mickey' Lauer <mlauer@vanille-media.de>
+# Generator: '../scripts/contrib/python/generate-manifest-2.7.py --native' Version 20110222.2 (C) 2002-2010 Michael 'Mickey' Lauer <mlauer@vanille-media.de>
 
  
 
-RPROVIDES+="python-2to3-native python-argparse-native python-audio-native python-bsddb-native python-codecs-native python-compile-native python-compiler-native python-compression-native python-contextlib-native python-core-native python-crypt-native python-ctypes-native python-curses-native python-datetime-native python-db-native python-debugger-native python-dev-native python-difflib-native python-distutils-native python-distutils-staticdev-native python-doctest-native python-email-native python-fcntl-native python-gdbm-native python-hotshot-native python-html-native python-idle-native python-image-native python-importlib-native python-io-native python-json-native python-lang-native python-logging-native python-mailbox-native python-math-native python-mime-native python-mmap-native python-multiprocessing-native python-netclient-native python-netserver-native python-numbers-native python-pickle-native python-pkgutil-native python-plistlib-native python-pprint-native python-profile-native python-pydoc-native python-re-native python-readline-native python-resource-native python-robotparser-native python-shell-native python-smtpd-native python-sqlite3-native python-sqlite3-tests-native python-stringold-native python-subprocess-native python-syslog-native python-terminal-native python-tests-native python-textutils-native python-threading-native python-tkinter-native python-unittest-native python-unixadmin-native python-xml-native python-xmlrpc-native python-zlib-native "
-
+RPROVIDES += "python-modules-native python-2to3-native python-argparse-native python-audio-native python-bsddb-native python-codecs-native python-compile-native python-compiler-native python-compression-native python-contextlib-native python-core-native python-crypt-native python-ctypes-native python-curses-native python-datetime-native python-db-native python-debugger-native python-dev-native python-difflib-native python-distutils-native python-distutils-staticdev-native python-doctest-native python-email-native python-fcntl-native python-gdbm-native python-hotshot-native python-html-native python-idle-native python-image-native python-importlib-native python-io-native python-json-native python-lang-native python-logging-native python-mailbox-native python-math-native python-mime-native python-mmap-native python-multiprocessing-native python-netclient-native python-netserver-native python-numbers-native python-pickle-native python-pkgutil-native python-plistlib-native python-pprint-native python-profile-native python-pydoc-native python-re-native python-readline-native python-resource-native python-robotparser-native python-shell-native python-smtpd-native python-sqlite3-native python-sqlite3-tests-native python-stringold-native python-subprocess-native python-syslog-native python-terminal-native python-tests-native python-textutils-native python-threading-native python-tkinter-native python-unittest-native python-unixadmin-native python-xml-native python-xmlrpc-native python-zlib-native"
 
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-native-3.5-manifest.inc b/import-layers/yocto-poky/meta/recipes-devtools/python/python-native-3.5-manifest.inc
index 10be3e9..f1f732e 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-native-3.5-manifest.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python-native-3.5-manifest.inc
@@ -1,10 +1,10 @@
 
 # WARNING: This file is AUTO GENERATED: Manual edits will be lost next time I regenerate the file.
-# Generator: '../../../scripts/contrib/python/generate-manifest-3.5.py --native' Version 20140131 (C) 2002-2010 Michael 'Mickey' Lauer <mlauer@vanille-media.de>
+# Generator: '../scripts/contrib/python/generate-manifest-3.5.py --native' Version 20140131 (C) 2002-2010 Michael 'Mickey' Lauer <mlauer@vanille-media.de>
 
  
 
-RPROVIDES+="python3-2to3-native python3-argparse-native python3-asyncio-native python3-audio-native python3-codecs-native python3-compile-native python3-compression-native python3-core-native python3-crypt-native python3-ctypes-native python3-curses-native python3-datetime-native python3-db-native python3-debugger-native python3-dev-native python3-difflib-native python3-distutils-native python3-distutils-staticdev-native python3-doctest-native python3-email-native python3-enum-native python3-fcntl-native python3-gdbm-native python3-html-native python3-idle-native python3-image-native python3-importlib-native python3-io-native python3-json-native python3-lang-native python3-logging-native python3-mailbox-native python3-math-native python3-mime-native python3-mmap-native python3-multiprocessing-native python3-netclient-native python3-netserver-native python3-numbers-native python3-pickle-native python3-pkgutil-native python3-pprint-native python3-profile-native python3-pydoc-native python3-re-native python3-readline-native python3-reprlib-native python3-resource-native python3-selectors-native python3-shell-native python3-signal-native python3-smtpd-native python3-sqlite3-native python3-sqlite3-tests-native python3-stringold-native python3-subprocess-native python3-syslog-native python3-terminal-native python3-tests-native python3-textutils-native python3-threading-native python3-tkinter-native python3-typing-native python3-unittest-native python3-unixadmin-native python3-xml-native python3-xmlrpc-native "
+RPROVIDES += "python3-modules-native python3-2to3-native python3-argparse-native python3-asyncio-native python3-audio-native python3-codecs-native python3-compile-native python3-compression-native python3-core-native python3-crypt-native python3-ctypes-native python3-curses-native python3-datetime-native python3-db-native python3-debugger-native python3-dev-native python3-difflib-native python3-distutils-native python3-distutils-staticdev-native python3-doctest-native python3-email-native python3-enum-native python3-fcntl-native python3-gdbm-native python3-html-native python3-idle-native python3-image-native python3-importlib-native python3-io-native python3-json-native python3-lang-native python3-logging-native python3-mailbox-native python3-math-native python3-mime-native python3-mmap-native python3-multiprocessing-native python3-netclient-native python3-netserver-native python3-numbers-native python3-pickle-native python3-pkgutil-native python3-pprint-native python3-profile-native python3-pydoc-native python3-re-native python3-readline-native python3-reprlib-native python3-resource-native python3-selectors-native python3-shell-native python3-signal-native python3-smtpd-native python3-sqlite3-native python3-sqlite3-tests-native python3-stringold-native python3-subprocess-native python3-syslog-native python3-terminal-native python3-tests-native python3-textutils-native python3-threading-native python3-tkinter-native python3-typing-native python3-unittest-native python3-unixadmin-native python3-xml-native python3-xmlrpc-native"
 
 
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-nose_1.3.7.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python-nose_1.3.7.bb
index 3757f3a..9b3509c 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-nose_1.3.7.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python-nose_1.3.7.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Extends Python unittest to make testing easier"
+HOMEPAGE = "http://readthedocs.org/docs/nose/"
 DESCRIPTION = "nose extends the test loading and running features of unittest, \
 making it easier to write, find and run tests."
 SECTION = "devel/python"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-pexpect_4.2.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python-pexpect_4.2.1.bb
deleted file mode 100644
index 1321797..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-pexpect_4.2.1.bb
+++ /dev/null
@@ -1,28 +0,0 @@
-SUMMARY = "A Pure Python Expect like Module for Python"
-HOMEPAGE = "http://pexpect.readthedocs.org/"
-SECTION = "devel/python"
-LICENSE = "ISC"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=1c7a725251880af8c6a148181665385b"
-
-SRCNAME = "pexpect"
-
-SRC_URI = "https://files.pythonhosted.org/packages/source/p/${SRCNAME}/${SRCNAME}-${PV}.tar.gz"
-SRC_URI[md5sum] = "3694410001a99dff83f0b500a1ca1c95"
-SRC_URI[sha256sum] = "3d132465a75b57aa818341c6521392a06cc660feb3988d7f1074f39bd23c9a92"
-
-UPSTREAM_CHECK_URI = "https://pypi.python.org/pypi/pexpect"
-
-S = "${WORKDIR}/pexpect-${PV}"
-
-inherit setuptools
-
-RDEPENDS_${PN} = "\
-    python-core \
-    python-io \
-    python-terminal \
-    python-resource \
-    python-fcntl \
-    python-ptyprocess \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-ptyprocess_0.5.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python-ptyprocess_0.5.1.bb
deleted file mode 100644
index eed24ad..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-ptyprocess_0.5.1.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-SUMMARY = "Run a subprocess in a pseudo terminal"
-HOMEPAGE = "http://ptyprocess.readthedocs.io/en/latest/"
-SECTION = "devel/python"
-LICENSE = "ISC"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=cfdcd51fa7d5808da4e74346ee394490"
-
-SRCNAME = "ptyprocess"
-
-SRC_URI = "https://files.pythonhosted.org/packages/source/p/${SRCNAME}/${SRCNAME}-${PV}.tar.gz"
-SRC_URI[md5sum] = "94e537122914cc9ec9c1eadcd36e73a1"
-SRC_URI[sha256sum] = "0530ce63a9295bfae7bd06edc02b6aa935619f486f0f1dc0972f516265ee81a6"
-
-UPSTREAM_CHECK_URI = "https://pypi.python.org/pypi/ptyprocess"
-
-S = "${WORKDIR}/${SRCNAME}-${PV}"
-
-inherit setuptools
-
-RDEPENDS_${PN} = "\
-    python-core \
-"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-pycurl.inc b/import-layers/yocto-poky/meta/recipes-devtools/python/python-pycurl.inc
deleted file mode 100644
index d26318b..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-pycurl.inc
+++ /dev/null
@@ -1,31 +0,0 @@
-SUMMARY = "Python bindings for libcurl"
-HOMEPAGE = "http://pycurl.sourceforge.net/"
-SECTION = "devel/python"
-LICENSE = "LGPLv2.1+ | MIT"
-LIC_FILES_CHKSUM = "file://README.rst;beginline=166;endline=182;md5=a84a1caa65b89d4584b693d3680062fb \
-                    file://COPYING-LGPL;md5=3579a9fd0221d49a237aaa33492f988c \
-                    file://COPYING-MIT;md5=b7e434aeb228ed731c00bcf177e79b19"
-
-DEPENDS = "curl ${PYTHON_PN}"
-RDEPENDS_${PN} = "${PYTHON_PN}-core curl"
-SRCNAME = "pycurl"
-
-SRC_URI = "\
-  http://${SRCNAME}.sourceforge.net/download/${SRCNAME}-${PV}.tar.gz;name=archive \
-  file://no-static-link.patch \
-"
-
-SRC_URI[archive.md5sum] = "bca7bf47320082588db544ced2ba8717"
-SRC_URI[archive.sha256sum] = "8a1e0eb55573388275a1d6c2534ca4cfca5d7fa772b99b505c08fa149b27aed0"
-S = "${WORKDIR}/${SRCNAME}-${PV}"
-
-BBCLASSEXTEND = "native"
-
-# Ensure the docstrings are generated as make clean will remove them
-do_compile_prepend() {
-	${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py docstrings
-}
-
-do_install_append() {
-	rm -rf ${D}${datadir}/share
-}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-pycurl/no-static-link.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python-pycurl/no-static-link.patch
deleted file mode 100644
index 212779c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-pycurl/no-static-link.patch
+++ /dev/null
@@ -1,17 +0,0 @@
-Upstream-Status: Pending
-
-Signed-off-by: Laurentiu Palcu <laurentiu.palcu@intel.com>
-Signed-off-by: Maxin B. John <maxin.john@intel.com>
----
-diff -Naur pycurl-7.19.5.2-orig/setup.py pycurl-7.19.5.2/setup.py
---- pycurl-7.19.5.2-orig/setup.py	2015-11-02 15:42:24.000000000 +0200
-+++ pycurl-7.19.5.2/setup.py	2015-11-02 17:59:36.121527273 +0200
-@@ -154,7 +154,7 @@
-         optbuf = ''
-         sslhintbuf = ''
-         errtext = ''
--        for option in ["--libs", "--static-libs"]:
-+        for option in ["--libs"]:
-             p = subprocess.Popen((CURL_CONFIG, option),
-                 stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-             stdout, stderr = p.communicate()
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-pycurl_7.21.5.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python-pycurl_7.21.5.bb
deleted file mode 100644
index eb70cea..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-pycurl_7.21.5.bb
+++ /dev/null
@@ -1,3 +0,0 @@
-require python-pycurl.inc
-
-inherit distutils
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-setuptools.inc b/import-layers/yocto-poky/meta/recipes-devtools/python/python-setuptools.inc
index 40f47d4..ca521a9 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-setuptools.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python-setuptools.inc
@@ -3,14 +3,14 @@
 SECTION = "devel/python"
 LICENSE = "MIT"
 
-LIC_FILES_CHKSUM = "file://setup.py;beginline=146;endline=146;md5=3e8df024d6c1442d18e84acf8fbbc475"
+LIC_FILES_CHKSUM = "file://LICENSE;beginline=1;endline=19;md5=9a33897f1bca1160d7aad3835152e158"
 
 SRCNAME = "setuptools"
 
-SRC_URI = "https://files.pythonhosted.org/packages/source/s/${SRCNAME}/${SRCNAME}-${PV}.tar.gz"
+SRC_URI = "https://files.pythonhosted.org/packages/source/s/${SRCNAME}/${SRCNAME}-${PV}.zip"
 
-SRC_URI[md5sum] = "8b67868c3430e978833ebd0d1b766694"
-SRC_URI[sha256sum] = "8303fb24306385f09bf8b0e5a385c1548e42e8efc08558d64049bc0d55ea012d"
+SRC_URI[md5sum] = "b9e6c049617bac0f9e908a41ab4a29ac"
+SRC_URI[sha256sum] = "b0fe5d432d922df595e918577c51458d63f245115d141b309ac32ecfca329df5"
 
 UPSTREAM_CHECK_URI = "https://pypi.python.org/pypi/setuptools"
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-setuptools_32.1.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python-setuptools_32.1.1.bb
deleted file mode 100644
index 526474c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-setuptools_32.1.1.bb
+++ /dev/null
@@ -1,38 +0,0 @@
-require python-setuptools.inc
-
-PROVIDES = "python-distribute"
-
-DEPENDS += "python"
-DEPENDS_class-native += "python-native"
-
-inherit distutils
-
-DISTUTILS_INSTALL_ARGS += "--install-lib=${D}${PYTHON_SITEPACKAGES_DIR}"
-
-RDEPENDS_${PN} = "\
-  python-stringold \
-  python-email \
-  python-shell \
-  python-distutils \
-  python-compression \
-  python-pkgutil \
-  python-plistlib \
-  python-numbers \
-  python-html \
-  python-netserver \
-  python-ctypes \
-  python-subprocess \
-  python-unittest \
-  python-compile \
-"
-
-RDEPENDS_${PN}_class-native = "\
-  python-distutils \
-  python-compression \
-"
-
-RREPLACES_${PN} = "python-distribute"
-RPROVIDES_${PN} = "python-distribute"
-RCONFLICTS_${PN} = "python-distribute"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-setuptools_36.2.7.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python-setuptools_36.2.7.bb
new file mode 100644
index 0000000..0efacc1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python-setuptools_36.2.7.bb
@@ -0,0 +1,36 @@
+require python-setuptools.inc
+
+PROVIDES = "python-distribute"
+
+DEPENDS += "python"
+DEPENDS_class-native += "python-native"
+
+inherit setuptools
+
+RDEPENDS_${PN} = "\
+  python-stringold \
+  python-email \
+  python-shell \
+  python-distutils \
+  python-compression \
+  python-pkgutil \
+  python-plistlib \
+  python-numbers \
+  python-html \
+  python-netserver \
+  python-ctypes \
+  python-subprocess \
+  python-unittest \
+  python-compile \
+"
+
+RDEPENDS_${PN}_class-native = "\
+  python-distutils \
+  python-compression \
+"
+
+RREPLACES_${PN} = "python-distribute"
+RPROVIDES_${PN} = "python-distribute"
+RCONFLICTS_${PN} = "python-distribute"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-six_1.10.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python-six_1.10.0.bb
deleted file mode 100644
index 4350485..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-six_1.10.0.bb
+++ /dev/null
@@ -1,4 +0,0 @@
-inherit setuptools
-require python-six.inc
-
-RDEPENDS_${PN} += "python-io"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python-smmap_0.9.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python-smmap_0.9.0.bb
deleted file mode 100644
index c118dd8..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python-smmap_0.9.0.bb
+++ /dev/null
@@ -1,5 +0,0 @@
-require python-smmap.inc
-
-inherit setuptools
-
-RDEPENDS_${PN} += "python-codecs python-mmap python-lang"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python/multilib.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python/multilib.patch
index 50cc591..f5568d2 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python/multilib.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python/multilib.patch
@@ -1,6 +1,6 @@
 Rebased for python-2.7.9
 Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
-
+Upstream-Status: Pending
 Index: Python-2.7.13/configure.ac
 ===================================================================
 --- Python-2.7.13.orig/configure.ac
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python/pass-missing-libraries-to-Extension-for-mul.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python/pass-missing-libraries-to-Extension-for-mul.patch
new file mode 100644
index 0000000..44fcaac
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python/pass-missing-libraries-to-Extension-for-mul.patch
@@ -0,0 +1,82 @@
+From 71447f04979b267f8866573b67a4340b2719d799 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Fri, 4 Aug 2017 14:10:43 +0800
+Subject: [PATCH] setup.py: pass missing libraries to Extension for multiprocessing module
+
+In the following commit:
+...
+commit e711cafab13efc9c1fe6c5cd75826401445eb585
+Author: Benjamin Peterson <benjamin@python.org>
+Date:   Wed Jun 11 16:44:04 2008 +0000
+
+    Merged revisions 64104,64117 via svnmerge from
+    svn+ssh://pythondev@svn.python.org/python/trunk
+...
+(see diff in setup.py)
+It assigned libraries for multiprocessing module according
+the host_platform, but not pass it to Extension.
+
+In glibc, the following commit caused two definition of
+sem_getvalue are different.
+https://sourceware.org/git/?p=glibc.git;a=commit;h=042e1521c794a945edc43b5bfa7e69ad70420524
+(see diff in nptl/sem_getvalue.c for detail)
+`__new_sem_getvalue' is the latest sem_getvalue@@GLIBC_2.1
+and `__old_sem_getvalue' is to compat the old version
+sem_getvalue@GLIBC_2.0.
+
+To build python for embedded Linux systems:
+http://www.yoctoproject.org/docs/2.3.1/yocto-project-qs/yocto-project-qs.html
+If not explicitly link to library pthread (-lpthread), it will
+load glibc's sem_getvalue randomly at runtime.
+
+Such as build python on linux x86_64 host and run the python
+on linux x86_32 target. If not link library pthread, it caused
+multiprocessing bounded semaphore could not work correctly.
+...
+>>> import multiprocessing
+>>> pool_sema = multiprocessing.BoundedSemaphore(value=1)
+>>> pool_sema.acquire()
+True
+>>> pool_sema.release()
+Traceback (most recent call last):
+  File "<stdin>", line 1, in <module>
+ValueError: semaphore or lock released too many times
+...
+
+And the semaphore issue also caused multiprocessing.Queue().put() hung.
+
+Upstream-Status: Submitted [https://github.com/python/cpython/pull/2999]
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ setup.py | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/setup.py b/setup.py
+index 54054c2..9646bfc 100644
+--- a/setup.py
++++ b/setup.py
+@@ -1586,8 +1586,10 @@ class PyBuildExt(build_ext):
+         elif host_platform.startswith('netbsd'):
+             macros = dict()
+             libraries = []
+-
+-        else:                                   # Linux and other unices
++        elif host_platform.startswith(('linux')):
++            macros = dict()
++            libraries = ['pthread']
++        else:                                   # Other unices
+             macros = dict()
+             libraries = ['rt']
+ 
+@@ -1610,6 +1612,7 @@ class PyBuildExt(build_ext):
+         if sysconfig.get_config_var('WITH_THREAD'):
+             exts.append ( Extension('_multiprocessing', multiprocessing_srcs,
+                                     define_macros=macros.items(),
++                                    libraries=libraries,
+                                     include_dirs=["Modules/_multiprocessing"]))
+         else:
+             missing.append('_multiprocessing')
+-- 
+2.7.4
+
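
The interactive session quoted in the commit message is straightforward to replay as a check. A hedged reproduction sketch, assuming the interpreter under test was cross-built as described above:

    import multiprocessing

    pool_sema = multiprocessing.BoundedSemaphore(value=1)
    pool_sema.acquire()
    # A correctly linked _multiprocessing returns quietly here; an affected
    # build raises "ValueError: semaphore or lock released too many times".
    pool_sema.release()
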
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python/support_SOURCE_DATE_EPOCH_in_py_compile_2.7.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python/support_SOURCE_DATE_EPOCH_in_py_compile_2.7.patch
new file mode 100644
index 0000000..1265179
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python/support_SOURCE_DATE_EPOCH_in_py_compile_2.7.patch
@@ -0,0 +1,34 @@
+The compiled .pyc files contain a time stamp corresponding to the compile time.
+This prevents binary reproducibility. This patch makes binary reproducibility
+achievable by overriding the build time stamp with the value
+exported via SOURCE_DATE_EPOCH.
+
+Patch by Bernhard M. Wiedemann
+
+Upstream-Status: Backport
+
+Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
+
+Fri Feb 24 17:08:25 UTC 2017 - bwiedemann@suse.com
+
+- Add reproducible.patch to allow reproducible builds of various
+  python packages like python-amqp
+  Upstream: https://github.com/python/cpython/pull/296
+
+
+@@ -0,0 +1,15 @@
+Index: Python-2.7.13/Lib/py_compile.py
+===================================================================
+--- Python-2.7.13.orig/Lib/py_compile.py
++++ Python-2.7.13/Lib/py_compile.py
+@@ -108,6 +108,10 @@ def compile(file, cfile=None, dfile=None
+             timestamp = long(os.fstat(f.fileno()).st_mtime)
+         except AttributeError:
+             timestamp = long(os.stat(file).st_mtime)
++        sde = os.environ.get('SOURCE_DATE_EPOCH')
++        if sde and timestamp > int(sde):
++            timestamp = int(sde)
++            os.utime(file, (timestamp, timestamp))
+         codestring = f.read()
+     try:
+         codeobject = __builtin__.compile(codestring, dfile or file,'exec')
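
With the backported change, a fixed SOURCE_DATE_EPOCH caps the timestamp that py_compile writes into the .pyc header. A hedged sketch of exercising it (the module name and epoch value are arbitrary examples):

    import os
    import py_compile

    os.environ['SOURCE_DATE_EPOCH'] = '1487955000'   # example epoch; any fixed value works
    py_compile.compile('example.py')                 # assumes example.py exists alongside

    # On a patched python 2.7, the mtime embedded in example.pyc never exceeds
    # the epoch above, so repeated builds of unchanged sources are byte-identical.
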
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python/use_sysroot_ncurses_instead_of_host.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python/use_sysroot_ncurses_instead_of_host.patch
index 2c65786..fb4a3bc 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python/use_sysroot_ncurses_instead_of_host.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python/use_sysroot_ncurses_instead_of_host.patch
@@ -2,6 +2,7 @@
 if it is not found causes an error on configure,
 we should use ncursesw from sysroot instead
 
+Upstream-Status: Pending
 
 Signed-off-by: Alejandro Hernandez <alejandro.hernandez@linux.intel.com>
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-docutils_0.13.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-docutils_0.13.1.bb
deleted file mode 100644
index e36388c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-docutils_0.13.1.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-SUMMARY = "Text processing system for documentation"
-HOMEPAGE = "http://docutils.sourceforge.net"
-SECTION = "devel/python"
-LICENSE = "PSF & BSD-2-Clause & GPLv3"
-LIC_FILES_CHKSUM = "file://COPYING.txt;md5=7a4646907ab9083c826280b19e103106"
-
-DEPENDS = "python3"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/docutils/docutils-${PV}.tar.gz"
-SRC_URI[md5sum] = "ea4a893c633c788be9b8078b6b305d53"
-SRC_URI[sha256sum] = "718c0f5fb677be0f34b781e04241c4067cbd9327b66bdd8e763201130f5175be"
-
-S = "${WORKDIR}/docutils-${PV}"
-
-inherit distutils3
-
-BBCLASSEXTEND = "native"
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-docutils_0.14.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-docutils_0.14.bb
new file mode 100644
index 0000000..81a449d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-docutils_0.14.bb
@@ -0,0 +1,18 @@
+SUMMARY = "Text processing system for documentation"
+HOMEPAGE = "http://docutils.sourceforge.net"
+SECTION = "devel/python"
+LICENSE = "PSF & BSD-2-Clause & GPLv3"
+LIC_FILES_CHKSUM = "file://COPYING.txt;md5=35a23d42b615470583563132872c97d6"
+
+DEPENDS = "python3"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/docutils/docutils-${PV}.tar.gz"
+SRC_URI[md5sum] = "c53768d63db3873b7d452833553469de"
+SRC_URI[sha256sum] = "51e64ef2ebfb29cae1faa133b3710143496eca21c530f3f71424d77687764274"
+
+S = "${WORKDIR}/docutils-${PV}"
+
+inherit distutils3
+
+BBCLASSEXTEND = "native"
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-git_2.1.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-git_2.1.1.bb
deleted file mode 100644
index 7a2d452..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-git_2.1.1.bb
+++ /dev/null
@@ -1,7 +0,0 @@
-require python-git.inc
-
-DEPENDS = "python3-gitdb"
-
-inherit setuptools3
-
-RDEPENDS_${PN} += "python3-gitdb python3-lang python3-io python3-shell python3-math python3-re python3-subprocess python3-stringold python3-unixadmin python3-enum python3-logging python3-datetime python3-netclient python3-unittest python3-argparse"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-git_2.1.5.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-git_2.1.5.bb
new file mode 100644
index 0000000..4ac2a0e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-git_2.1.5.bb
@@ -0,0 +1,7 @@
+require python-git.inc
+
+DEPENDS = "python3-gitdb"
+
+inherit setuptools3
+
+RDEPENDS_${PN} += "python3-gitdb python3-lang python3-io python3-shell python3-math python3-re python3-subprocess python3-stringold python3-unixadmin python3-enum python3-logging python3-datetime python3-netclient python3-unittest python3-argparse git"
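
Note the trailing addition of git to RDEPENDS: GitPython drives the command-line client rather than reimplementing the object store, so the binary must be present on the target. A minimal hedged usage sketch (the repository path is an example):

    import git   # GitPython, packaged here as python3-git

    repo = git.Repo('/path/to/some/checkout')   # example path; an existing git work tree
    print(repo.head.commit.hexsha)
    print(repo.git.status())                    # commands like this shell out to the `git` binary
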
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-mako_1.0.6.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-mako_1.0.7.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-devtools/python/python3-mako_1.0.6.bb
rename to import-layers/yocto-poky/meta/recipes-devtools/python/python3-mako_1.0.7.bb
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-native_3.5.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-native_3.5.2.bb
deleted file mode 100644
index 782e2cd..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-native_3.5.2.bb
+++ /dev/null
@@ -1,83 +0,0 @@
-require recipes-devtools/python/python.inc
-
-PR = "${INC_PR}.0"
-PYTHON_MAJMIN = "3.5"
-DISTRO_SRC_URI ?= "file://sitecustomize.py"
-DISTRO_SRC_URI_linuxstdbase = ""
-SRC_URI = "http://www.python.org/ftp/python/${PV}/Python-${PV}.tar.xz \
-file://12-distutils-prefix-is-inside-staging-area.patch \
-file://python-config.patch \
-file://000-cross-compile.patch \
-file://030-fixup-include-dirs.patch \
-file://070-dont-clean-ipkg-install.patch \
-file://080-distutils-dont_adjust_files.patch \
-file://130-readline-setup.patch \
-file://150-fix-setupterm.patch \
-file://python-3.3-multilib.patch \
-file://03-fix-tkinter-detection.patch \
-file://avoid_warning_about_tkinter.patch \
-file://shutil-follow-symlink-fix.patch \
-file://0001-h2py-Fix-issue-13032-where-it-fails-with-UnicodeDeco.patch \
-file://sysroot-include-headers.patch \
-file://unixccompiler.patch \
-${DISTRO_SRC_URI} \
-file://sysconfig.py-add-_PYTHON_PROJECT_SRC.patch \
-file://setup.py-check-cross_compiling-when-get-FLAGS.patch \
-file://0001-Do-not-use-the-shell-version-of-python-config-that-w.patch \
-"
-
-SRC_URI[md5sum] = "8906efbacfcdc7c3c9198aeefafd159e"  
-SRC_URI[sha256sum] = "0010f56100b9b74259ebcd5d4b295a32324b58b517403a10d1a2aa7cb22bca40" 
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=6b60258130e4ed10d3101517eb5b9385"
-
-# exclude pre-releases for both python 2.x and 3.x
-UPSTREAM_CHECK_REGEX = "[Pp]ython-(?P<pver>\d+(\.\d+)+).tar"
-
-S = "${WORKDIR}/Python-${PV}"
-
-EXTRANATIVEPATH += "bzip2-native"
-DEPENDS = "openssl-native bzip2-replacement-native zlib-native readline-native sqlite3-native"
-
-inherit native
-
-require python-native-${PYTHON_MAJMIN}-manifest.inc
-
-# uninative may be used on pre glibc 2.25 systems which don't have getentropy
-EXTRA_OECONF_append = " --bindir=${bindir}/${PN} --without-ensurepip ac_cv_func_getentropy=no"
-
-EXTRA_OEMAKE = '\
-  LIBC="" \
-  STAGING_LIBDIR=${STAGING_LIBDIR_NATIVE} \
-  STAGING_INCDIR=${STAGING_INCDIR_NATIVE} \
-  LIB=${baselib} \
-  ARCH=${TARGET_ARCH} \
-'
-
-# No ctypes option for python 3
-PYTHONLSBOPTS = ""
-
-do_configure_append() {
-	autoreconf --verbose --install --force --exclude=autopoint ../Python-${PV}/Modules/_ctypes/libffi
-	sed -i -e 's,#define HAVE_GETRANDOM 1,/\* #undef HAVE_GETRANDOM \*/,' ${B}/pyconfig.h
-}
-
-do_install() {
-	install -d ${D}${libdir}/pkgconfig
-	oe_runmake 'DESTDIR=${D}' install
-	if [ -e ${WORKDIR}/sitecustomize.py ]; then
-		install -m 0644 ${WORKDIR}/sitecustomize.py ${D}/${libdir}/python${PYTHON_MAJMIN}
-	fi
-	install -d ${D}${bindir}/${PN}
-	install -m 0755 Parser/pgen ${D}${bindir}/${PN}
-
-	# Make sure we use /usr/bin/env python
-	for PYTHSCRIPT in `grep -rIl ${bindir}/${PN}/python ${D}${bindir}/${PN}`; do
-		sed -i -e '1s|^#!.*|#!/usr/bin/env python3|' $PYTHSCRIPT
-	done
-
-	# Tests are large and we don't need them in the native sysroot
-	rm ${D}${libdir}/python${PYTHON_MAJMIN}/test -rf
-}
-
-RPROVIDES += "python3-misc-native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-native_3.5.3.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-native_3.5.3.bb
new file mode 100644
index 0000000..8cd9c88
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-native_3.5.3.bb
@@ -0,0 +1,84 @@
+require recipes-devtools/python/python.inc
+
+PR = "${INC_PR}.0"
+PYTHON_MAJMIN = "3.5"
+DISTRO_SRC_URI ?= "file://sitecustomize.py"
+DISTRO_SRC_URI_linuxstdbase = ""
+SRC_URI = "http://www.python.org/ftp/python/${PV}/Python-${PV}.tar.xz \
+file://12-distutils-prefix-is-inside-staging-area.patch \
+file://python-config.patch \
+file://0001-cross-compile-support.patch \
+file://030-fixup-include-dirs.patch \
+file://070-dont-clean-ipkg-install.patch \
+file://080-distutils-dont_adjust_files.patch \
+file://130-readline-setup.patch \
+file://150-fix-setupterm.patch \
+file://python-3.3-multilib.patch \
+file://03-fix-tkinter-detection.patch \
+file://avoid_warning_about_tkinter.patch \
+file://shutil-follow-symlink-fix.patch \
+file://0001-h2py-Fix-issue-13032-where-it-fails-with-UnicodeDeco.patch \
+file://sysroot-include-headers.patch \
+file://unixccompiler.patch \
+${DISTRO_SRC_URI} \
+file://sysconfig.py-add-_PYTHON_PROJECT_SRC.patch \
+file://setup.py-check-cross_compiling-when-get-FLAGS.patch \
+file://0001-Do-not-use-the-shell-version-of-python-config-that-w.patch \
+file://support_SOURCE_DATE_EPOCH_in_py_compile.patch \
+"
+
+SRC_URI[md5sum] = "57d1f8bfbabf4f2500273fb0706e6f21"
+SRC_URI[sha256sum] = "eefe2ad6575855423ab630f5b51a8ef6e5556f774584c06beab4926f930ddbb0"
+
+LIC_FILES_CHKSUM = "file://LICENSE;md5=b680ed99aa60d350c65a65914494207e"
+
+# exclude pre-releases for both python 2.x and 3.x
+UPSTREAM_CHECK_REGEX = "[Pp]ython-(?P<pver>\d+(\.\d+)+).tar"
+
+S = "${WORKDIR}/Python-${PV}"
+
+EXTRANATIVEPATH += "bzip2-native"
+DEPENDS = "openssl-native bzip2-replacement-native zlib-native readline-native sqlite3-native"
+
+inherit native
+
+require python-native-${PYTHON_MAJMIN}-manifest.inc
+
+# uninative may be used on pre glibc 2.25 systems which don't have getentropy
+EXTRA_OECONF_append = " --bindir=${bindir}/${PN} --without-ensurepip ac_cv_func_getentropy=no"
+
+EXTRA_OEMAKE = '\
+  LIBC="" \
+  STAGING_LIBDIR=${STAGING_LIBDIR_NATIVE} \
+  STAGING_INCDIR=${STAGING_INCDIR_NATIVE} \
+  LIB=${baselib} \
+  ARCH=${TARGET_ARCH} \
+'
+
+# No ctypes option for python 3
+PYTHONLSBOPTS = ""
+
+do_configure_append() {
+	autoreconf --verbose --install --force --exclude=autopoint ../Python-${PV}/Modules/_ctypes/libffi
+	sed -i -e 's,#define HAVE_GETRANDOM 1,/\* #undef HAVE_GETRANDOM \*/,' ${B}/pyconfig.h
+}
+
+do_install() {
+	install -d ${D}${libdir}/pkgconfig
+	oe_runmake 'DESTDIR=${D}' install
+	if [ -e ${WORKDIR}/sitecustomize.py ]; then
+		install -m 0644 ${WORKDIR}/sitecustomize.py ${D}/${libdir}/python${PYTHON_MAJMIN}
+	fi
+	install -d ${D}${bindir}/${PN}
+	install -m 0755 Parser/pgen ${D}${bindir}/${PN}
+
+	# Make sure we use /usr/bin/env python
+	for PYTHSCRIPT in `grep -rIl ${bindir}/${PN}/python ${D}${bindir}/${PN}`; do
+		sed -i -e '1s|^#!.*|#!/usr/bin/env python3|' $PYTHSCRIPT
+	done
+
+	# Tests are large and we don't need them in the native sysroot
+	rm ${D}${libdir}/python${PYTHON_MAJMIN}/test -rf
+}
+
+RPROVIDES += "python3-misc-native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-nose_1.3.7.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-nose_1.3.7.bb
index 99bba44..1e2ff74 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-nose_1.3.7.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-nose_1.3.7.bb
@@ -17,6 +17,10 @@
 
 inherit setuptools3
 
+do_install_append() {
+    mv ${D}${bindir}/nosetests ${D}${bindir}/nosetests3
+}
+
 RDEPENDS_${PN}_class-target = "\
   python3-unittest \
   "
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pip_9.0.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pip_9.0.1.bb
index 4456b9b..9b907a2 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pip_9.0.1.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pip_9.0.1.bb
@@ -53,4 +53,4 @@
   python3-xmlrpc \
 "
 
-BBCLASSEXTEND = "native"
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pycairo_1.10.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pycairo_1.10.0.bb
index f9031b3..9258ba1 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pycairo_1.10.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pycairo_1.10.0.bb
@@ -19,6 +19,8 @@
 
 inherit distutils3 pkgconfig
 
+CFLAGS += "-fPIC"
+
 BBCLASSEXTEND = "native"
 
 do_configure() {
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pycurl_7.21.5.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pycurl_7.21.5.bb
deleted file mode 100644
index 5d11192..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pycurl_7.21.5.bb
+++ /dev/null
@@ -1,5 +0,0 @@
-FILESEXTRAPATHS_prepend := "${THISDIR}/python-pycurl:"
-
-require python-pycurl.inc
-
-inherit distutils3
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pygobject/0001-configure.ac-Don-t-use-gnome-common-macros.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pygobject/0001-configure.ac-Don-t-use-gnome-common-macros.patch
new file mode 100644
index 0000000..aaedb58
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pygobject/0001-configure.ac-Don-t-use-gnome-common-macros.patch
@@ -0,0 +1,33 @@
+From 206360744cedff20eae3c8fcfde9938fdae99592 Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Thu, 6 Jul 2017 11:47:21 +0300
+Subject: [PATCH] configure.ac: Don't use gnome-common macros
+
+remove GNOME_COMPILE_WARNINGS() call: it's from gnome-common which
+is deprecated.
+
+This patch can be removed when upgrading to 3.25.1: at that point
+pygobject needs autoconf-archive instead.
+
+Upstream-Status: Inappropriate [Already handled upstream]
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+---
+ configure.ac | 2 --
+ 1 file changed, 2 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index 1f15b3c..5cb170f 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -221,8 +221,6 @@ AC_ARG_WITH(common,
+     with_common=yes)
+ AM_CONDITIONAL(WITH_COMMON, test "$with_common" = "yes")
+ 
+-# compiler warnings, errors, required cflags, and code coverage support
+-GNOME_COMPILE_WARNINGS([maximum], [-Wno-error=missing-prototypes])
+ AC_MSG_CHECKING(for Gnome code coverage support)
+ m4_ifdef([GNOME_CODE_COVERAGE],
+          [AC_MSG_RESULT(yes)
+-- 
+2.1.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pygobject_3.22.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pygobject_3.22.0.bb
deleted file mode 100644
index 143048d..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pygobject_3.22.0.bb
+++ /dev/null
@@ -1,31 +0,0 @@
-SUMMARY = "Python GObject bindings"
-SECTION = "devel/python"
-LICENSE = "LGPLv2.1"
-LIC_FILES_CHKSUM = "file://COPYING;md5=a916467b91076e631dd8edb7424769c7"
-
-inherit autotools pkgconfig gnomebase distutils3-base gobject-introspection upstream-version-is-even
-
-DEPENDS += "gnome-common-native python3 glib-2.0"
-
-SRCNAME="pygobject"
-SRC_URI = " \
-    http://ftp.gnome.org/pub/GNOME/sources/${SRCNAME}/${@gnome_verdir("${PV}")}/${SRCNAME}-${PV}.tar.xz \
-    file://0001-configure.ac-add-sysroot-path-to-GI_DATADIR-don-t-se.patch \
-"
-
-SRC_URI[md5sum] = "ed4117ed5d554d25fd7718807fbf819f"
-SRC_URI[sha256sum] = "08b29cfb08efc80f7a8630a2734dec65a99c1b59f1e5771c671d2e4ed8a5cbe7"
-
-S = "${WORKDIR}/${SRCNAME}-${PV}"
-
-
-PACKAGECONFIG ??= "${@bb.utils.contains_any('DISTRO_FEATURES', [ 'directfb', 'wayland', 'x11' ], 'cairo', '', d)}"
-
-# python3-pycairo is checked on configuration -> DEPENDS
-# we don't link against python3-pycairo -> RDEPENDS
-PACKAGECONFIG[cairo] = "--enable-cairo,--disable-cairo,cairo python3-pycairo, python3-pycairo"
-
-RDEPENDS_${PN} += "python3-setuptools python3-importlib"
-
-BBCLASSEXTEND = "native"
-PACKAGECONFIG_class-native = ""
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pygobject_3.24.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pygobject_3.24.1.bb
new file mode 100644
index 0000000..9d10af2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pygobject_3.24.1.bb
@@ -0,0 +1,32 @@
+SUMMARY = "Python GObject bindings"
+SECTION = "devel/python"
+LICENSE = "LGPLv2.1"
+LIC_FILES_CHKSUM = "file://COPYING;md5=a916467b91076e631dd8edb7424769c7"
+
+inherit autotools pkgconfig gnomebase distutils3-base gobject-introspection upstream-version-is-even
+
+DEPENDS += "python3 glib-2.0"
+
+SRCNAME="pygobject"
+SRC_URI = " \
+    http://ftp.gnome.org/pub/GNOME/sources/${SRCNAME}/${@gnome_verdir("${PV}")}/${SRCNAME}-${PV}.tar.xz \
+    file://0001-configure.ac-add-sysroot-path-to-GI_DATADIR-don-t-se.patch \
+    file://0001-configure.ac-Don-t-use-gnome-common-macros.patch \
+"
+
+SRC_URI[md5sum] = "69a843311d0f0385dff376e11a2d83d2"
+SRC_URI[sha256sum] = "a628a95aa0909e13fb08230b1b98fc48adef10b220932f76d62f6821b3fdbffd"
+
+S = "${WORKDIR}/${SRCNAME}-${PV}"
+
+
+PACKAGECONFIG ??= "${@bb.utils.contains_any('DISTRO_FEATURES', [ 'directfb', 'wayland', 'x11' ], 'cairo', '', d)}"
+
+# python3-pycairo is checked on configuration -> DEPENDS
+# we don't link against python3-pycairo -> RDEPENDS
+PACKAGECONFIG[cairo] = "--enable-cairo,--disable-cairo,cairo python3-pycairo, python3-pycairo"
+
+RDEPENDS_${PN} += "python3-setuptools python3-importlib"
+
+BBCLASSEXTEND = "native"
+PACKAGECONFIG_class-native = ""
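
The PACKAGECONFIG default in the recipe above is computed by an inline-Python call to bb.utils.contains_any(), which yields "cairo" whenever any of the listed features appears in DISTRO_FEATURES. A minimal standalone sketch of that selection logic, using a plain dict in place of the BitBake datastore (the dict contents and feature values here are illustrative, not taken from any real build):

    # Standalone sketch of the selection logic; the real helper is
    # bb.utils.contains_any(variable, checkvalues, truevalue, falsevalue, d)
    # and reads the BitBake datastore d instead of a dict.
    def contains_any(variable, checkvalues, truevalue, falsevalue, data):
        present = set(data.get(variable, "").split())
        return truevalue if present & set(checkvalues) else falsevalue

    # Example: a distro with wayland enabled picks up the cairo PACKAGECONFIG.
    data = {"DISTRO_FEATURES": "systemd wayland ipv6"}
    print(contains_any("DISTRO_FEATURES",
                       ["directfb", "wayland", "x11"], "cairo", "", data))  # -> cairo
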
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pygpgme_0.3.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pygpgme_0.3.bb
deleted file mode 100644
index 495f677..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-pygpgme_0.3.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-SUMMARY = "A Python module for working with OpenPGP messages"
-HOMEPAGE = "https://launchpad.net/pygpgme"
-LICENSE = "LGPLv2.1"
-LIC_FILES_CHKSUM = "file://README;md5=2dc15a76acf01e126188c8de634ae4b3"
-
-SRC_URI = "https://launchpad.net/pygpgme/trunk/${PV}/+download/pygpgme-${PV}.tar.gz"
-SRC_URI[md5sum] = "d38355af73f0352cde3d410b25f34fd0"
-SRC_URI[sha256sum] = "5fd887c407015296a8fd3f4b867fe0fcca3179de97ccde90449853a3dfb802e1"
-
-S = "${WORKDIR}/pygpgme-${PV}"
-
-inherit distutils3
-
-DEPENDS = "gpgme python3"
-
-RDEPENDS_${PN} += "python3-core"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-setuptools_32.1.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-setuptools_32.1.1.bb
deleted file mode 100644
index 65af6f0..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-setuptools_32.1.1.bb
+++ /dev/null
@@ -1,37 +0,0 @@
-require python-setuptools.inc
-
-DEPENDS += "python3"
-DEPENDS_class-native += "python3-native"
-
-inherit distutils3
-
-DISTUTILS_INSTALL_ARGS += "--install-lib=${D}${PYTHON_SITEPACKAGES_DIR}"
-
-# The installer puts the wrong path in the setuptools.pth file.  Correct it.
-do_install_append() {
-    rm ${D}${PYTHON_SITEPACKAGES_DIR}/setuptools.pth
-    mv ${D}${bindir}/easy_install ${D}${bindir}/easy3_install
-    echo "./${SRCNAME}-${PV}-py${PYTHON_BASEVERSION}.egg" > ${D}${PYTHON_SITEPACKAGES_DIR}/setuptools.pth
-}
-
-RDEPENDS_${PN} = "\
-  python3-distutils \
-  python3-compression \
-"
-RDEPENDS_${PN}_class-target = "\
-  python3-ctypes \
-  python3-distutils \
-  python3-email \
-  python3-importlib \
-  python3-numbers \
-  python3-compression \
-  python3-shell \
-  python3-subprocess \
-  python3-textutils \
-  python3-pkgutil \
-  python3-threading \
-  python3-misc \
-  python3-unittest \
-  python3-xml \
-"
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3-setuptools_36.2.7.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-setuptools_36.2.7.bb
new file mode 100644
index 0000000..a7bca97
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3-setuptools_36.2.7.bb
@@ -0,0 +1,33 @@
+require python-setuptools.inc
+
+DEPENDS += "python3"
+DEPENDS_class-native += "python3-native"
+DEPENDS_class-nativesdk += "nativesdk-python3"
+
+inherit setuptools3
+
+do_install_append() {
+    mv ${D}${bindir}/easy_install ${D}${bindir}/easy3_install
+}
+
+RDEPENDS_${PN}_class-native = "\
+  python3-distutils \
+  python3-compression \
+"
+RDEPENDS_${PN} = "\
+  python3-ctypes \
+  python3-distutils \
+  python3-email \
+  python3-importlib \
+  python3-numbers \
+  python3-compression \
+  python3-shell \
+  python3-subprocess \
+  python3-textutils \
+  python3-pkgutil \
+  python3-threading \
+  python3-misc \
+  python3-unittest \
+  python3-xml \
+"
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/000-cross-compile.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/000-cross-compile.patch
deleted file mode 100644
index 2d82221..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/000-cross-compile.patch
+++ /dev/null
@@ -1,85 +0,0 @@
-We cross compile python. This patch uses tools from host/native
-python instead of in-tree tools
-
--Khem
-
-Upstream-Status: Inappropriate[Configuration Specific]
-
----
- Makefile.pre.in |   25 +++++++++++++------------
- 1 file changed, 13 insertions(+), 12 deletions(-)
-
-Index: Python-3.5.2/Makefile.pre.in
-===================================================================
---- Python-3.5.2.orig/Makefile.pre.in
-+++ Python-3.5.2/Makefile.pre.in
-@@ -220,6 +220,7 @@ LIBOBJS=	@LIBOBJS@
- 
- PYTHON=		python$(EXE)
- BUILDPYTHON=	python$(BUILDEXE)
-+HOSTPYTHON=	$(BUILDPYTHON)
- 
- cross_compiling=@cross_compiling@
- PYTHON_FOR_BUILD=@PYTHON_FOR_BUILD@
-@@ -279,6 +280,7 @@ LIBFFI_INCLUDEDIR=	@LIBFFI_INCLUDEDIR@
- ##########################################################################
- # Parser
- PGEN=		Parser/pgen$(EXE)
-+HOSTPGEN=	$(PGEN)$(EXE)
- 
- PSRCS=		\
- 		Parser/acceler.c \
-@@ -509,7 +511,7 @@ build_all_generate_profile:
- 
- run_profile_task:
- 	: # FIXME: can't run for a cross build
--	$(LLVM_PROF_FILE) $(RUNSHARED) ./$(BUILDPYTHON) $(PROFILE_TASK) || true
-+	$(LLVM_PROF_FILE) $(RUNSHARED) $(HOSTPYTHON) $(PROFILE_TASK) || true
- 
- build_all_merge_profile:
- 	$(LLVM_PROF_MERGER)
-@@ -792,7 +794,7 @@ $(GRAMMAR_H): $(GRAMMAR_INPUT) $(PGEN)
- 	@$(MKDIR_P) Include
- 	# Avoid copying the file onto itself for an in-tree build
- 	if test "$(cross_compiling)" != "yes"; then \
--		$(PGEN) $(GRAMMAR_INPUT) $(GRAMMAR_H) $(GRAMMAR_C); \
-+		$(HOSTPGEN) $(GRAMMAR_INPUT) $(GRAMMAR_H) $(GRAMMAR_C); \
- 	else \
- 		cp $(srcdir)/Include/graminit.h $(GRAMMAR_H).tmp; \
- 		mv $(GRAMMAR_H).tmp $(GRAMMAR_H); \
-@@ -990,7 +992,7 @@ $(LIBRARY_OBJS) $(MODOBJS) Programs/pyth
- ######################################################################
- 
- TESTOPTS=	$(EXTRATESTOPTS)
--TESTPYTHON=	$(RUNSHARED) ./$(BUILDPYTHON) $(TESTPYTHONOPTS)
-+TESTPYTHON=	$(RUNSHARED) $(HOSTPYTHON) $(TESTPYTHONOPTS)
- TESTRUNNER=	$(TESTPYTHON) $(srcdir)/Tools/scripts/run_tests.py
- TESTTIMEOUT=	3600
- 
-@@ -1481,7 +1483,7 @@ frameworkinstallstructure:	$(LDLIBRARY)
- 		fi; \
- 	done
- 	$(LN) -fsn include/python$(LDVERSION) $(DESTDIR)$(prefix)/Headers
--	sed 's/%VERSION%/'"`$(RUNSHARED) ./$(BUILDPYTHON) -c 'import platform; print(platform.python_version())'`"'/g' < $(RESSRCDIR)/Info.plist > $(DESTDIR)$(prefix)/Resources/Info.plist
-+	sed 's/%VERSION%/'"`$(RUNSHARED) $(HOSTPYTHON) -c 'import platform; print(platform.python_version())'`"'/g' < $(RESSRCDIR)/Info.plist > $(DESTDIR)$(prefix)/Resources/Info.plist
- 	$(LN) -fsn $(VERSION) $(DESTDIR)$(PYTHONFRAMEWORKINSTALLDIR)/Versions/Current
- 	$(LN) -fsn Versions/Current/$(PYTHONFRAMEWORK) $(DESTDIR)$(PYTHONFRAMEWORKINSTALLDIR)/$(PYTHONFRAMEWORK)
- 	$(LN) -fsn Versions/Current/Headers $(DESTDIR)$(PYTHONFRAMEWORKINSTALLDIR)/Headers
-@@ -1547,7 +1549,7 @@ config.status:	$(srcdir)/configure
- 
- # Run reindent on the library
- reindent:
--	./$(BUILDPYTHON) $(srcdir)/Tools/scripts/reindent.py -r $(srcdir)/Lib
-+	$(HOSTPYTHON) $(srcdir)/Tools/scripts/reindent.py -r $(srcdir)/Lib
- 
- # Rerun configure with the same options as it was run last time,
- # provided the config.status script exists
-@@ -1683,7 +1685,7 @@ funny:
- 
- # Perform some verification checks on any modified files.
- patchcheck: all
--	$(RUNSHARED) ./$(BUILDPYTHON) $(srcdir)/Tools/scripts/patchcheck.py
-+	$(RUNSHARED) ./$(HOSTPYTHON) $(srcdir)/Tools/scripts/patchcheck.py
- 
- # Dependencies
- 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/0001-Do-not-use-the-shell-version-of-python-config-that-w.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/0001-Do-not-use-the-shell-version-of-python-config-that-w.patch
index b7e0ac6..8ea3f03 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/0001-Do-not-use-the-shell-version-of-python-config-that-w.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/0001-Do-not-use-the-shell-version-of-python-config-that-w.patch
@@ -1,4 +1,4 @@
-From 045c99b5f1eb6e4e0d8ad1ef9f0ba6574f738150 Mon Sep 17 00:00:00 2001
+From 04df959365e2b54d7503edf0e5534ff094284f2d Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
 Date: Fri, 23 Oct 2015 12:25:09 +0300
 Subject: [PATCH] Do not use the shell version of python-config that was
@@ -14,13 +14,13 @@
  1 file changed, 3 insertions(+), 6 deletions(-)
 
 diff --git a/Makefile.pre.in b/Makefile.pre.in
-index d7fc9a0..47e60bc 100644
+index 236f005..5c4337f 100644
 --- a/Makefile.pre.in
 +++ b/Makefile.pre.in
-@@ -1270,12 +1270,9 @@ python-config: $(srcdir)/Misc/python-config.in Misc/python-config.sh
+@@ -1348,12 +1348,9 @@ python-config: $(srcdir)/Misc/python-config.in Misc/python-config.sh
  	sed -e "s,@EXENAME@,$(BINDIR)/python$(LDVERSION)$(EXE)," < $(srcdir)/Misc/python-config.in >python-config.py
  	# Replace makefile compat. variable references with shell script compat. ones; $(VAR) -> ${VAR}
- 	sed -e 's,\$$(\([A-Za-z0-9_]*\)),\$$\{\1\},g' < Misc/python-config.sh >python-config
+ 	LC_ALL=C sed -e 's,\$$(\([A-Za-z0-9_]*\)),\$$\{\1\},g' < Misc/python-config.sh >python-config
 -	# On Darwin, always use the python version of the script, the shell
 -	# version doesn't use the compiler customizations that are provided
 -	# in python (_osx_support.py).
@@ -34,5 +34,5 @@
  
  # Install the include files
 -- 
-2.1.4
+2.11.0
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/0001-Issue-21272-Use-_sysconfigdata.py-to-initialize-dist.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/0001-Issue-21272-Use-_sysconfigdata.py-to-initialize-dist.patch
new file mode 100644
index 0000000..d1c92e9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/0001-Issue-21272-Use-_sysconfigdata.py-to-initialize-dist.patch
@@ -0,0 +1,66 @@
+From bcddbf40c7f1b80336268cdddacc17369fb0ccea Mon Sep 17 00:00:00 2001
+From: Libin Dang <libin.dang@windriver.com>
+Date: Tue, 11 Apr 2017 14:12:15 +0800
+Subject: [PATCH] Issue #21272: Use _sysconfigdata.py to initialize
+ distutils.sysconfig
+
+Backport upstream commit
+https://github.com/python/cpython/commit/409482251b06fe75c4ee56e85ffbb4b23d934159
+
+Upstream-Status: Backport
+
+Signed-off-by: Li Zhou <li.zhou@windriver.com>
+---
+ Lib/distutils/sysconfig.py | 35 ++++-------------------------------
+ 1 file changed, 4 insertions(+), 31 deletions(-)
+
+diff --git a/Lib/distutils/sysconfig.py b/Lib/distutils/sysconfig.py
+index 6d5cfd0..9925d24 100644
+--- a/Lib/distutils/sysconfig.py
++++ b/Lib/distutils/sysconfig.py
+@@ -424,38 +424,11 @@ _config_vars = None
+ 
+ def _init_posix():
+     """Initialize the module as appropriate for POSIX systems."""
+-    g = {}
+-    # load the installed Makefile:
+-    try:
+-        filename = get_makefile_filename()
+-        parse_makefile(filename, g)
+-    except OSError as msg:
+-        my_msg = "invalid Python installation: unable to open %s" % filename
+-        if hasattr(msg, "strerror"):
+-            my_msg = my_msg + " (%s)" % msg.strerror
+-
+-        raise DistutilsPlatformError(my_msg)
+-
+-    # load the installed pyconfig.h:
+-    try:
+-        filename = get_config_h_filename()
+-        with open(filename) as file:
+-            parse_config_h(file, g)
+-    except OSError as msg:
+-        my_msg = "invalid Python installation: unable to open %s" % filename
+-        if hasattr(msg, "strerror"):
+-            my_msg = my_msg + " (%s)" % msg.strerror
+-
+-        raise DistutilsPlatformError(my_msg)
+-
+-    # On AIX, there are wrong paths to the linker scripts in the Makefile
+-    # -- these paths are relative to the Python source, but when installed
+-    # the scripts are in another directory.
+-    if python_build:
+-        g['LDSHARED'] = g['BLDSHARED']
+-
++    # _sysconfigdata is generated at build time, see the sysconfig module
++    from _sysconfigdata import build_time_vars
+     global _config_vars
+-    _config_vars = g
++    _config_vars = {}
++    _config_vars.update(build_time_vars)
+ 
+ 
+ def _init_nt():
+-- 
+1.8.3.1
+
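
The backport above replaces the Makefile/pyconfig.h parsing in distutils.sysconfig._init_posix() with a straight copy of the dictionary generated at build time. A rough approximation of the resulting behaviour through the public sysconfig API (an illustration only, not the patched code path itself):

    # After the patch, distutils.sysconfig takes its variables from the
    # build-time _sysconfigdata module rather than re-parsing Makefile and
    # pyconfig.h. The same data is visible through sysconfig on any build:
    import sysconfig

    build_time_vars = sysconfig.get_config_vars()   # populated from _sysconfigdata*
    print(build_time_vars.get("CC"), build_time_vars.get("LDSHARED"))
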
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/0001-cross-compile-support.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/0001-cross-compile-support.patch
new file mode 100644
index 0000000..118d75d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/0001-cross-compile-support.patch
@@ -0,0 +1,93 @@
+From 624c029abcc73c724020ccea9a2b4b5b5c00f2a6 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Fri, 31 Mar 2017 15:42:46 +0300
+Subject: [PATCH] cross-compile support
+
+We cross compile python. This patch uses tools from host/native
+python instead of in-tree tools
+
+-Khem
+
+Upstream-Status: Inappropriate [Configuration Specific]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+ Makefile.pre.in | 14 ++++++++------
+ 1 file changed, 8 insertions(+), 6 deletions(-)
+
+diff --git a/Makefile.pre.in b/Makefile.pre.in
+index a88b7d5..7cb8bb3 100644
+--- a/Makefile.pre.in
++++ b/Makefile.pre.in
+@@ -221,6 +221,7 @@ LIBOBJS=	@LIBOBJS@
+ 
+ PYTHON=		python$(EXE)
+ BUILDPYTHON=	python$(BUILDEXE)
++HOSTPYTHON=    $(BUILDPYTHON)
+ 
+ PYTHON_FOR_GEN=@PYTHON_FOR_GEN@
+ PYTHON_FOR_BUILD=@PYTHON_FOR_BUILD@
+@@ -280,6 +281,7 @@ LIBFFI_INCLUDEDIR=	@LIBFFI_INCLUDEDIR@
+ ##########################################################################
+ # Parser
+ PGEN=		Parser/pgen$(EXE)
++HOSTPGEN=	$(PGEN)$(EXE)
+ 
+ PSRCS=		\
+ 		Parser/acceler.c \
+@@ -510,7 +512,7 @@ build_all_generate_profile:
+ 
+ run_profile_task:
+ 	: # FIXME: can't run for a cross build
+-	$(LLVM_PROF_FILE) $(RUNSHARED) ./$(BUILDPYTHON) $(PROFILE_TASK) || true
++	$(LLVM_PROF_FILE) $(RUNSHARED) $(HOSTPYTHON) $(PROFILE_TASK) || true
+ 
+ build_all_merge_profile:
+ 	$(LLVM_PROF_MERGER)
+@@ -787,7 +789,7 @@ $(IO_OBJS): $(IO_H)
+ 
+ $(GRAMMAR_H): @GENERATED_COMMENT@ $(GRAMMAR_INPUT) $(PGEN)
+ 	@$(MKDIR_P) Include
+-	$(PGEN) $(GRAMMAR_INPUT) $(GRAMMAR_H) $(GRAMMAR_C)
++	$(HOSTPGEN) $(GRAMMAR_INPUT) $(GRAMMAR_H) $(GRAMMAR_C)
+ $(GRAMMAR_C): @GENERATED_COMMENT@ $(GRAMMAR_H)
+ 	touch $(GRAMMAR_C)
+ 
+@@ -976,7 +978,7 @@ $(LIBRARY_OBJS) $(MODOBJS) Programs/python.o: $(PYTHON_HEADERS)
+ ######################################################################
+ 
+ TESTOPTS=	$(EXTRATESTOPTS)
+-TESTPYTHON=	$(RUNSHARED) ./$(BUILDPYTHON) $(TESTPYTHONOPTS)
++TESTPYTHON=	$(RUNSHARED) $(HOSTPYTHON) $(TESTPYTHONOPTS)
+ TESTRUNNER=	$(TESTPYTHON) $(srcdir)/Tools/scripts/run_tests.py
+ TESTTIMEOUT=	3600
+ 
+@@ -1468,7 +1470,7 @@ frameworkinstallstructure:	$(LDLIBRARY)
+ 		fi; \
+ 	done
+ 	$(LN) -fsn include/python$(LDVERSION) $(DESTDIR)$(prefix)/Headers
+-	sed 's/%VERSION%/'"`$(RUNSHARED) ./$(BUILDPYTHON) -c 'import platform; print(platform.python_version())'`"'/g' < $(RESSRCDIR)/Info.plist > $(DESTDIR)$(prefix)/Resources/Info.plist
++	sed 's/%VERSION%/'"`$(RUNSHARED) $(HOSTPYTHON) -c 'import platform; print(platform.python_version())'`"'/g' < $(RESSRCDIR)/Info.plist > $(DESTDIR)$(prefix)/Resources/Info.plist
+ 	$(LN) -fsn $(VERSION) $(DESTDIR)$(PYTHONFRAMEWORKINSTALLDIR)/Versions/Current
+ 	$(LN) -fsn Versions/Current/$(PYTHONFRAMEWORK) $(DESTDIR)$(PYTHONFRAMEWORKINSTALLDIR)/$(PYTHONFRAMEWORK)
+ 	$(LN) -fsn Versions/Current/Headers $(DESTDIR)$(PYTHONFRAMEWORKINSTALLDIR)/Headers
+@@ -1534,7 +1536,7 @@ config.status:	$(srcdir)/configure
+ 
+ # Run reindent on the library
+ reindent:
+-	./$(BUILDPYTHON) $(srcdir)/Tools/scripts/reindent.py -r $(srcdir)/Lib
++	$(HOSTPYTHON) $(srcdir)/Tools/scripts/reindent.py -r $(srcdir)/Lib
+ 
+ # Rerun configure with the same options as it was run last time,
+ # provided the config.status script exists
+@@ -1674,7 +1676,7 @@ funny:
+ 
+ # Perform some verification checks on any modified files.
+ patchcheck: all
+-	$(RUNSHARED) ./$(BUILDPYTHON) $(srcdir)/Tools/scripts/patchcheck.py
++	$(RUNSHARED) ./$(HOSTPYTHON) $(srcdir)/Tools/scripts/patchcheck.py
+ 
+ # Dependencies
+ 
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/Fix-29519-weakref-spewing-exceptions-during-interp-f.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/Fix-29519-weakref-spewing-exceptions-during-interp-f.patch
new file mode 100644
index 0000000..7217c6e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/Fix-29519-weakref-spewing-exceptions-during-interp-f.patch
@@ -0,0 +1,56 @@
+From 62dcf34987b680e95873eb947b3f4d802199c667 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?=C5=81ukasz=20Langa?= <lukasz@langa.pl>
+Date: Fri, 10 Feb 2017 00:14:55 -0800
+Subject: [PATCH] Fix #29519: weakref spewing exceptions during interp
+ finalization
+
+commit 9cd7e17640a49635d1c1f8c2989578a8fc2c1de6
+from https://github.com/python/cpython
+
+Upstream-Status: Backport
+
+Signed-off-by: Lukasz Langa <lukasz@langa.pl>
+---
+ Lib/weakref.py | 4 ++--
+ Misc/NEWS      | 3 +++
+ 2 files changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/Lib/weakref.py b/Lib/weakref.py
+index aaebd0c..787e33a 100644
+--- a/Lib/weakref.py
++++ b/Lib/weakref.py
+@@ -106,7 +106,7 @@ class WeakValueDictionary(collections.MutableMapping):
+         self, *args = args
+         if len(args) > 1:
+             raise TypeError('expected at most 1 arguments, got %d' % len(args))
+-        def remove(wr, selfref=ref(self)):
++        def remove(wr, selfref=ref(self), _atomic_removal=_remove_dead_weakref):
+             self = selfref()
+             if self is not None:
+                 if self._iterating:
+@@ -114,7 +114,7 @@ class WeakValueDictionary(collections.MutableMapping):
+                 else:
+                     # Atomic removal is necessary since this function
+                     # can be called asynchronously by the GC
+-                    _remove_dead_weakref(d, wr.key)
++                    _atomic_removal(d, wr.key)
+         self._remove = remove
+         # A list of keys to be removed
+         self._pending_removals = []
+diff --git a/Misc/NEWS b/Misc/NEWS
+index 41cfdba..6d89f52 100644
+--- a/Misc/NEWS
++++ b/Misc/NEWS
+@@ -5719,6 +5719,9 @@ Core and Builtins
+ Library
+ -------
+ 
++- Issue #29519: Fix weakref spewing exceptions during interpreter shutdown
++  when used with a rare combination of multiprocessing and custom codecs.
++
+ - Issue #20154: Deadlock in asyncio.StreamReader.readexactly().
+ 
+ - Issue #16113: Remove sha3 module again.
+-- 
+2.7.4
+
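
The key idea in this fix is binding _remove_dead_weakref as the default value of the _atomic_removal parameter, so the callback no longer has to look it up in module globals that may already be cleared at interpreter shutdown. A toy sketch of that default-argument idiom (the names below are made up for illustration and do not appear in weakref.py):

    # Capturing a helper as a default argument binds it at function-definition
    # time, so the callback does not depend on module globals still existing
    # when it eventually runs.
    def _cleanup(resource):
        print("cleaning up", resource)

    def make_remover(resource, _do_cleanup=_cleanup):   # bound now, like _atomic_removal
        def remover():
            _do_cleanup(resource)
        return remover

    make_remover("tmpfile")()
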
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/pass-missing-libraries-to-Extension-for-mul.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/pass-missing-libraries-to-Extension-for-mul.patch
new file mode 100644
index 0000000..5c3af6b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/pass-missing-libraries-to-Extension-for-mul.patch
@@ -0,0 +1,82 @@
+From a784b70d47ba2104afbcfd805e2a66cdc2109ec5 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Fri, 4 Aug 2017 11:16:14 +0800
+Subject: [PATCH] setup.py: pass missing libraries to Extension for multiprocessing module
+
+In the following commit:
+...
+commit e711cafab13efc9c1fe6c5cd75826401445eb585
+Author: Benjamin Peterson <benjamin@python.org>
+Date:   Wed Jun 11 16:44:04 2008 +0000
+
+    Merged revisions 64104,64117 via svnmerge from
+    svn+ssh://pythondev@svn.python.org/python/trunk
+...
+(see diff in setup.py)
+It assigned libraries for the multiprocessing module according
+to host_platform, but did not pass them to Extension.
+
+In glibc, the following commit made the two definitions of
+sem_getvalue diverge.
+https://sourceware.org/git/?p=glibc.git;a=commit;h=042e1521c794a945edc43b5bfa7e69ad70420524
+(see diff in nptl/sem_getvalue.c for detail)
+`__new_sem_getvalue' is the current sem_getvalue@@GLIBC_2.1
+and `__old_sem_getvalue' is kept for compatibility with the old
+sem_getvalue@GLIBC_2.0.
+
+To build python for embedded Linux systems:
+http://www.yoctoproject.org/docs/2.3.1/yocto-project-qs/yocto-project-qs.html
+If it is not explicitly linked against libpthread (-lpthread), it
+will resolve glibc's sem_getvalue unpredictably at runtime.
+
+For example, build python on a linux x86_64 host and run it on a
+linux x86_32 target. Without linking against libpthread, the
+multiprocessing bounded semaphore did not work correctly.
+...
+>>> import multiprocessing
+>>> pool_sema = multiprocessing.BoundedSemaphore(value=1)
+>>> pool_sema.acquire()
+True
+>>> pool_sema.release()
+Traceback (most recent call last):
+  File "<stdin>", line 1, in <module>
+ValueError: semaphore or lock released too many times
+...
+
+And the semaphore issue also caused multiprocessing.Queue().put() hung.
+
+Upstream-Status: Submitted [https://github.com/python/cpython/pull/2999]
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ setup.py | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/setup.py b/setup.py
+index 4f0f522..d05707d 100644
+--- a/setup.py
++++ b/setup.py
+@@ -1606,8 +1606,10 @@ class PyBuildExt(build_ext):
+         elif host_platform.startswith('netbsd'):
+             macros = dict()
+             libraries = []
+-
+-        else:                                   # Linux and other unices
++        elif host_platform.startswith(('linux')):
++            macros = dict()
++            libraries = ['pthread']
++        else:                                   # Other unices
+             macros = dict()
+             libraries = ['rt']
+ 
+@@ -1626,6 +1628,7 @@ class PyBuildExt(build_ext):
+         if sysconfig.get_config_var('WITH_THREAD'):
+             exts.append ( Extension('_multiprocessing', multiprocessing_srcs,
+                                     define_macros=list(macros.items()),
++                                    libraries=libraries,
+                                     include_dirs=["Modules/_multiprocessing"]))
+         else:
+             missing.append('_multiprocessing')
+-- 
+2.7.4
+
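
In distutils terms, every name listed in an Extension's libraries argument becomes a -l<name> flag at link time, which is exactly what the hunk above adds for the _multiprocessing module. A hypothetical minimal setup.py fragment showing the mechanism (module and source names are placeholders, not taken from the patch; distutils.core is the era-appropriate API):

    from distutils.core import setup, Extension

    ext = Extension(
        "_demo",                      # placeholder module name
        sources=["demo.c"],           # placeholder source file
        libraries=["pthread"],        # emitted as -lpthread at link time
    )

    if __name__ == "__main__":
        setup(name="demo", version="0.1", ext_modules=[ext])
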
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/python3-fix-CVE-2016-1000110.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/python3-fix-CVE-2016-1000110.patch
deleted file mode 100644
index ab1b723..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/python3-fix-CVE-2016-1000110.patch
+++ /dev/null
@@ -1,148 +0,0 @@
-From aab3e8c432b90508ac14755128f5a687be2fdf43 Mon Sep 17 00:00:00 2001
-From: Mingli Yu <Mingli.Yu@windriver.com>
-Date: Thu, 22 Sep 2016 16:39:49 +0800
-Subject: [PATCH] python3: fix CVE-2016-1000110
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-Ignore the HTTP_PROXY variable when REQUEST_METHOD environment is set, which
-indicates that the script is in CGI mode.
-
-Issue #27568 Reported and patch contributed by Rémi Rampin. [#27568]
-
-Backport patch from https://hg.python.org/cpython/rev/a0ac52ed8f79
-
-Upstream-Status: Backport
-CVE: CVE-2016-1000110
-Signed-off-by: Mingli Yu <Mingli.Yu@windriver.com>
----
- Doc/howto/urllib2.rst          |  5 +++++
- Doc/library/urllib.request.rst | 17 ++++++++++++++++-
- Lib/test/test_urllib.py        | 14 +++++++++++++-
- Lib/urllib/request.py          |  6 ++++++
- Misc/NEWS                      |  4 ++++
- 5 files changed, 44 insertions(+), 2 deletions(-)
-
-diff --git a/Doc/howto/urllib2.rst b/Doc/howto/urllib2.rst
-index 24a4156..d2c7991 100644
---- a/Doc/howto/urllib2.rst
-+++ b/Doc/howto/urllib2.rst
-@@ -538,6 +538,11 @@ setting up a `Basic Authentication`_ handler: ::
-     through a proxy.  However, this can be enabled by extending urllib.request as
-     shown in the recipe [#]_.
- 
-+.. note::
-+
-+    ``HTTP_PROXY`` will be ignored if a variable ``REQUEST_METHOD`` is set; see
-+    the documentation on :func:`~urllib.request.getproxies`.
-+
- 
- Sockets and Layers
- ==================
-diff --git a/Doc/library/urllib.request.rst b/Doc/library/urllib.request.rst
-index 1338906..1291aeb 100644
---- a/Doc/library/urllib.request.rst
-+++ b/Doc/library/urllib.request.rst
-@@ -173,6 +173,16 @@ The :mod:`urllib.request` module defines the following functions:
-    If both lowercase and uppercase environment variables exist (and disagree),
-    lowercase is preferred.
- 
-+    .. note::
-+
-+       If the environment variable ``REQUEST_METHOD`` is set, which usually
-+       indicates your script is running in a CGI environment, the environment
-+       variable ``HTTP_PROXY`` (uppercase ``_PROXY``) will be ignored. This is
-+       because that variable can be injected by a client using the "Proxy:" HTTP
-+       header. If you need to use an HTTP proxy in a CGI environment, either use
-+       ``ProxyHandler`` explicitly, or make sure the variable name is in
-+       lowercase (or at least the ``_proxy`` suffix).
-+
- 
- The following classes are provided:
- 
-@@ -280,6 +290,11 @@ The following classes are provided:
-    list of hostname suffixes, optionally with ``:port`` appended, for example
-    ``cern.ch,ncsa.uiuc.edu,some.host:8080``.
- 
-+    .. note::
-+
-+       ``HTTP_PROXY`` will be ignored if a variable ``REQUEST_METHOD`` is set;
-+       see the documentation on :func:`~urllib.request.getproxies`.
-+
- 
- .. class:: HTTPPasswordMgr()
- 
-@@ -1138,7 +1153,7 @@ the returned bytes object to string once it determines or guesses
- the appropriate encoding.
- 
- The following W3C document, https://www.w3.org/International/O-charset\ , lists
--the various ways in which a (X)HTML or a XML document could have specified its
-+the various ways in which an (X)HTML or an XML document could have specified its
- encoding information.
- 
- As the python.org website uses *utf-8* encoding as specified in its meta tag, we
-diff --git a/Lib/test/test_urllib.py b/Lib/test/test_urllib.py
-index 5d05f8d..247598a 100644
---- a/Lib/test/test_urllib.py
-+++ b/Lib/test/test_urllib.py
-@@ -1,4 +1,4 @@
--"""Regresssion tests for what was in Python 2's "urllib" module"""
-+"""Regression tests for what was in Python 2's "urllib" module"""
- 
- import urllib.parse
- import urllib.request
-@@ -232,6 +232,18 @@ class ProxyTests(unittest.TestCase):
-         self.assertTrue(urllib.request.proxy_bypass_environment('anotherdomain.com:8888'))
-         self.assertTrue(urllib.request.proxy_bypass_environment('newdomain.com:1234'))
- 
-+    def test_proxy_cgi_ignore(self):
-+        try:
-+            self.env.set('HTTP_PROXY', 'http://somewhere:3128')
-+            proxies = urllib.request.getproxies_environment()
-+            self.assertEqual('http://somewhere:3128', proxies['http'])
-+            self.env.set('REQUEST_METHOD', 'GET')
-+            proxies = urllib.request.getproxies_environment()
-+            self.assertNotIn('http', proxies)
-+        finally:
-+            self.env.unset('REQUEST_METHOD')
-+            self.env.unset('HTTP_PROXY')
-+
-     def test_proxy_bypass_environment_host_match(self):
-         bypass = urllib.request.proxy_bypass_environment
-         self.env.set('NO_PROXY',
-diff --git a/Lib/urllib/request.py b/Lib/urllib/request.py
-index 1731fe3..3be327d 100644
---- a/Lib/urllib/request.py
-+++ b/Lib/urllib/request.py
-@@ -2412,6 +2412,12 @@ def getproxies_environment():
-         name = name.lower()
-         if value and name[-6:] == '_proxy':
-             proxies[name[:-6]] = value
-+    # CVE-2016-1000110 - If we are running as CGI script, forget HTTP_PROXY
-+    # (non-all-lowercase) as it may be set from the web server by a "Proxy:"
-+    # header from the client
-+    # If "proxy" is lowercase, it will still be used thanks to the next block
-+    if 'REQUEST_METHOD' in os.environ:
-+        proxies.pop('http', None)
-     for name, value in os.environ.items():
-         if name[-6:] == '_proxy':
-             name = name.lower()
-diff --git a/Misc/NEWS b/Misc/NEWS
-index 4ad2551..2fcc95b 100644
---- a/Misc/NEWS
-+++ b/Misc/NEWS
-@@ -329,6 +329,10 @@ Library
- - Issue #26644: Raise ValueError rather than SystemError when a negative
-   length is passed to SSLSocket.recv() or read().
- 
-+- Issue #27568: Prevent HTTPoxy attack (CVE-2016-1000110). Ignore the
-+  HTTP_PROXY variable when REQUEST_METHOD environment is set, which indicates
-+  that the script is in CGI mode.
-+
- - Issue #23804: Fix SSL recv(0) and read(0) methods to return zero bytes
-   instead of up to 1024.
- 
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/support_SOURCE_DATE_EPOCH_in_py_compile.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/support_SOURCE_DATE_EPOCH_in_py_compile.patch
new file mode 100644
index 0000000..32ecab9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/support_SOURCE_DATE_EPOCH_in_py_compile.patch
@@ -0,0 +1,97 @@
+The compiled .pyc files contain a time stamp corresponding to the compile time.
+This prevents binary reproducibility. This patch makes the build reproducible
+by overriding the build time stamp with the value
+exported via SOURCE_DATE_EPOCH.
+
+Upstream-Status: Backport
+
+Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
+
+
+From aeab488630fdb1b56a8d0b0c13fa88706b2afe9b Mon Sep 17 00:00:00 2001
+From: "Bernhard M. Wiedemann" <bwiedemann@suse.de>
+Date: Sat, 25 Feb 2017 06:42:28 +0100
+Subject: [PATCH] bpo-29708: support SOURCE_DATE_EPOCH env var in py_compile
+
+to allow for reproducible builds of python packages
+
+See https://reproducible-builds.org/ for why this is good
+and https://reproducible-builds.org/specs/source-date-epoch/
+for the definition of this variable.
+
+Background:
+In some distributions like openSUSE, binary rpms contain precompiled .pyc files.
+
+And packages like amqp or twisted dynamically generate .py files at build time
+so those have the current time and that timestamp gets embedded
+into the .pyc file header.
+When we then adapt file timestamps in rpms to be constant,
+the timestamp in the .pyc header will no more match
+the .py timestamp in the filesystem.
+The software will still work, but it will not use the .pyc file as it should.
+---
+ Doc/library/py_compile.rst  |  4 ++++
+ Lib/py_compile.py           |  4 ++++
+ Lib/test/test_py_compile.py | 19 +++++++++++++++++++
+ 3 files changed, 27 insertions(+)
+
+diff --git a/Doc/library/py_compile.rst b/Doc/library/py_compile.rst
+index 0af8fb1..841f3e8 100644
+--- a/Doc/library/py_compile.rst
++++ b/Doc/library/py_compile.rst
+@@ -53,6 +53,10 @@ byte-code cache files in the directory containing the source code.
+    :func:`compile` function.  The default of ``-1`` selects the optimization
+    level of the current interpreter.
+ 
++   If the SOURCE_DATE_EPOCH environment variable is set, the .py file mtime
++   and timestamp entry in .pyc file header, will be limited to this value.
++   See https://reproducible-builds.org/specs/source-date-epoch/ for more info.
++
+    .. versionchanged:: 3.2
+       Changed default value of *cfile* to be :PEP:`3147`-compliant.  Previous
+       default was *file* + ``'c'`` (``'o'`` if optimization was enabled).
+diff --git a/Lib/py_compile.py b/Lib/py_compile.py
+index 11c5b50..62dcdc7 100644
+--- a/Lib/py_compile.py
++++ b/Lib/py_compile.py
+@@ -137,6 +137,10 @@ def compile(file, cfile=None, dfile=None, doraise=False, optimize=-1):
+     except FileExistsError:
+         pass
+     source_stats = loader.path_stats(file)
++    sde = os.environ.get('SOURCE_DATE_EPOCH')
++    if sde and source_stats['mtime'] > int(sde):
++        source_stats['mtime'] = int(sde)
++        os.utime(file, (source_stats['mtime'], source_stats['mtime']))
+     bytecode = importlib._bootstrap_external._code_to_bytecode(
+             code, source_stats['mtime'], source_stats['size'])
+     mode = importlib._bootstrap_external._calc_mode(file)
+diff --git a/Lib/test/test_py_compile.py b/Lib/test/test_py_compile.py
+index 4a6caa5..3d09963 100644
+--- a/Lib/test/test_py_compile.py
++++ b/Lib/test/test_py_compile.py
+@@ -98,6 +98,25 @@ def test_bad_coding(self):
+         self.assertFalse(os.path.exists(
+             importlib.util.cache_from_source(bad_coding)))
+ 
++    def test_source_date_epoch(self):
++        testtime = 123456789
++        orig_sde = os.getenv("SOURCE_DATE_EPOCH")
++        os.environ["SOURCE_DATE_EPOCH"] = str(testtime)
++        py_compile.compile(self.source_path, self.pyc_path)
++        if orig_sde:
++            os.environ["SOURCE_DATE_EPOCH"] = orig_sde
++        else:
++            del os.environ["SOURCE_DATE_EPOCH"]
++        self.assertTrue(os.path.exists(self.pyc_path))
++        self.assertFalse(os.path.exists(self.cache_path))
++        statinfo = os.stat(self.source_path)
++        self.assertEqual(statinfo.st_mtime, testtime)
++        f = open(self.pyc_path, "rb")
++        f.read(4)
++        timebytes = f.read(4) # read timestamp from pyc header
++        f.close()
++        self.assertEqual(timebytes, (testtime).to_bytes(4, 'little'))
++
+     @unittest.skipIf(sys.flags.optimize > 0, 'test does not work with -O')
+     def test_double_dot_no_clobber(self):
+         # http://bugs.python.org/issue22966
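
The intended effect is that the timestamp py_compile embeds in the .pyc header never exceeds SOURCE_DATE_EPOCH. A small check of that behaviour, assuming a patched 3.5-series interpreter where the four bytes after the magic number are the source timestamp:

    import os, py_compile, struct, tempfile

    os.environ["SOURCE_DATE_EPOCH"] = "1000000000"

    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as src:
        src.write("x = 1\n")

    pyc = py_compile.compile(src.name, cfile=src.name + "c")
    with open(pyc, "rb") as f:
        f.read(4)                                  # magic number
        ts = struct.unpack("<I", f.read(4))[0]     # embedded timestamp (3.5 layout)

    print(ts <= int(os.environ["SOURCE_DATE_EPOCH"]))   # expect True on a patched build
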
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/upstream-random-fixes.patch b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/upstream-random-fixes.patch
index 0d9152c..9b40e8a 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3/upstream-random-fixes.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3/upstream-random-fixes.patch
@@ -1,21 +1,7 @@
-This patch updates random.c to match upstream python's code at revision 
-8125d9a8152b. This addresses various issues around problems with glibc 2.24 
-and 2.25 such that python would fail to start with:
-
-[rpurdie@centos7 ~]$ /tmp/t2/sysroots/x86_64-pokysdk-linux/usr/bin/python3
-Fatal Python error: getentropy() failed
-Aborted
-
-(taken from our buildtools-tarball also breaks eSDK)
-
-Upstream-Status: Backport
-
-# HG changeset patch
-# User Victor Stinner <victor.stinner@gmail.com>
-# Date 1483957133 -3600
-# Node ID 8125d9a8152b79e712cb09c7094b9129b9bcea86
-# Parent  337461574c90281630751b6095c4e1baf380cf7d
-Issue #29157: Prefer getrandom() over getentropy()
+From 035ba5da3e53e45c712b39fe1f6fb743e697c032 Mon Sep 17 00:00:00 2001
+From: Victor Stinner <victor.stinner@gmail.com>
+Date: Mon, 9 Jan 2017 11:18:53 +0100
+Subject: [PATCH] Issue #29157: Prefer getrandom() over getentropy()
 
 Copy and then adapt Python/random.c from default branch. Difference between 3.5
 and default branches:
@@ -26,12 +12,17 @@
 * Python 3.5 has no _PyOS_URandomNonblock() function: _PyOS_URandom()
   works in non-blocking mode on Python 3.5
 
-RP 2017/1/22
+Upstream-Status: Backport [https://github.com/python/cpython/commit/035ba5da3e53e45c712b39fe1f6fb743e697c032]
+Signed-off-by: Alexander Kanavin <alexander.kanavin@intel.com>
 
-Index: Python-3.5.2/Python/random.c
-===================================================================
---- Python-3.5.2.orig/Python/random.c
-+++ Python-3.5.2/Python/random.c
+---
+ Python/random.c | 494 +++++++++++++++++++++++++++++++++-----------------------
+ 1 file changed, 294 insertions(+), 200 deletions(-)
+
+diff --git a/Python/random.c b/Python/random.c
+index d203939..31f61d0 100644
+--- a/Python/random.c
++++ b/Python/random.c
 @@ -1,6 +1,9 @@
  #include "Python.h"
  #ifdef MS_WINDOWS
@@ -42,7 +33,7 @@
  #else
  #  include <fcntl.h>
  #  ifdef HAVE_SYS_STAT_H
-@@ -36,10 +39,9 @@ win32_urandom_init(int raise)
+@@ -37,10 +40,9 @@ win32_urandom_init(int raise)
      return 0;
  
  error:
@@ -55,7 +46,7 @@
      return -1;
  }
  
-@@ -52,8 +54,9 @@ win32_urandom(unsigned char *buffer, Py_
+@@ -53,8 +55,9 @@ win32_urandom(unsigned char *buffer, Py_ssize_t size, int raise)
  
      if (hCryptProv == 0)
      {
@@ -66,7 +57,7 @@
      }
  
      while (size > 0)
-@@ -62,11 +65,9 @@ win32_urandom(unsigned char *buffer, Py_
+@@ -63,11 +66,9 @@ win32_urandom(unsigned char *buffer, Py_ssize_t size, int raise)
          if (!CryptGenRandom(hCryptProv, (DWORD)chunk, buffer))
          {
              /* CryptGenRandom() failed */
@@ -80,7 +71,7 @@
              return -1;
          }
          buffer += chunk;
-@@ -75,55 +76,29 @@ win32_urandom(unsigned char *buffer, Py_
+@@ -76,58 +77,23 @@ win32_urandom(unsigned char *buffer, Py_ssize_t size, int raise)
      return 0;
  }
  
@@ -129,13 +120,19 @@
  #if defined(HAVE_GETRANDOM) || defined(HAVE_GETRANDOM_SYSCALL)
  #define PY_GETRANDOM 1
  
+-/* Call getrandom()
 +/* Call getrandom() to get random bytes:
 +
-+   - Return 1 on success
+    - Return 1 on success
+-   - Return 0 if getrandom() syscall is not available (failed with ENOSYS or
+-     EPERM) or if getrandom(GRND_NONBLOCK) failed with EAGAIN (system urandom
+-     not initialized yet) and raise=0.
 +   - Return 0 if getrandom() is not available (failed with ENOSYS or EPERM),
 +     or if getrandom(GRND_NONBLOCK) failed with EAGAIN (system urandom not
 +     initialized yet).
-+   - Raise an exception (if raise is non-zero) and return -1 on error:
+    - Raise an exception (if raise is non-zero) and return -1 on error:
+-     getrandom() failed with EINTR and the Python signal handler raised an
+-     exception, or getrandom() failed with a different error. */
 +     if getrandom() failed with EINTR, raise is non-zero and the Python signal
 +     handler raised an exception, or if getrandom() failed with a different
 +     error.
@@ -144,26 +141,16 @@
  static int
  py_getrandom(void *buffer, Py_ssize_t size, int raise)
  {
--    /* Is getrandom() supported by the running kernel?
--     * Need Linux kernel 3.17 or newer, or Solaris 11.3 or newer */
-+    /* Is getrandom() supported by the running kernel? Set to 0 if getrandom()
-+       failed with ENOSYS or EPERM. Need Linux kernel 3.17 or newer, or Solaris
-+       11.3 or newer */
-     static int getrandom_works = 1;
- 
-     /* getrandom() on Linux will block if called before the kernel has
-@@ -132,84 +107,165 @@ py_getrandom(void *buffer, Py_ssize_t si
+@@ -142,16 +108,19 @@ py_getrandom(void *buffer, Py_ssize_t size, int raise)
       * see https://bugs.python.org/issue26839. To avoid this, use the
       * GRND_NONBLOCK flag. */
      const int flags = GRND_NONBLOCK;
--    int n;
 +    char *dest;
-+    long n;
+     long n;
  
--    if (!getrandom_works)
-+    if (!getrandom_works) {
+     if (!getrandom_works) {
          return 0;
-+    }
+     }
  
 +    dest = buffer;
      while (0 < size) {
@@ -174,11 +161,8 @@
 +           requested. */
          n = Py_MIN(size, 1024);
  #else
--        n = size;
-+        n = Py_MIN(size, LONG_MAX);
- #endif
- 
-         errno = 0;
+         n = Py_MIN(size, LONG_MAX);
+@@ -161,34 +130,35 @@ py_getrandom(void *buffer, Py_ssize_t size, int raise)
  #ifdef HAVE_GETRANDOM
          if (raise) {
              Py_BEGIN_ALLOW_THREADS
@@ -209,56 +193,45 @@
  #endif
  
          if (n < 0) {
--            if (errno == ENOSYS) {
+-            /* ENOSYS: getrandom() syscall not supported by the kernel (but
+-             * maybe supported by the host which built Python). EPERM:
+-             * getrandom() syscall blocked by SECCOMP or something else. */
 +            /* ENOSYS: the syscall is not supported by the kernel.
 +               EPERM: the syscall is blocked by a security policy (ex: SECCOMP)
 +               or something else. */
-+            if (errno == ENOSYS || errno == EPERM) {
+             if (errno == ENOSYS || errno == EPERM) {
                  getrandom_works = 0;
                  return 0;
              }
 +
              if (errno == EAGAIN) {
--                /* If we failed with EAGAIN, the entropy pool was
--                 * uninitialized. In this case, we return failure to fall
--                 * back to reading from /dev/urandom.
--                 *
--                 * Note: In this case the data read will not be random so
--                 * should not be used for cryptographic purposes. Retaining
--                 * the existing semantics for practical purposes. */
-+                /* getrandom(GRND_NONBLOCK) fails with EAGAIN if the system
-+                   urandom is not initialiazed yet. In this case, fall back on
-+                   reading from /dev/urandom.
-+
-+                   Note: In this case the data read will not be random so
-+                   should not be used for cryptographic purposes. Retaining
-+                   the existing semantics for practical purposes. */
-                 getrandom_works = 0;
-                 return 0;
+                 /* getrandom(GRND_NONBLOCK) fails with EAGAIN if the system
+                    urandom is not initialiazed yet. In this case, fall back on
+@@ -202,169 +172,225 @@ py_getrandom(void *buffer, Py_ssize_t size, int raise)
              }
  
              if (errno == EINTR) {
 -                if (PyErr_CheckSignals()) {
--                    if (!raise)
+-                    if (!raise) {
 -                        Py_FatalError("getrandom() interrupted by a signal");
--                    return -1;
 +                if (raise) {
 +                    if (PyErr_CheckSignals()) {
 +                        return -1;
-+                    }
+                     }
+-                    return -1;
                  }
+ 
 -                /* retry getrandom() */
-+
 +                /* retry getrandom() if it was interrupted by a signal */
                  continue;
              }
  
--            if (raise)
-+            if (raise) {
+             if (raise) {
                  PyErr_SetFromErrno(PyExc_OSError);
--            else
+             }
+-            else {
 -                Py_FatalError("getrandom() failed");
-+            }
+-            }
              return -1;
          }
  
@@ -269,12 +242,19 @@
      return 1;
  }
 -#endif
-+
+ 
+-static struct {
+-    int fd;
+-    dev_t st_dev;
+-    ino_t st_ino;
+-} urandom_cache = { -1 };
 +#elif defined(HAVE_GETENTROPY)
 +#define PY_GETENTROPY 1
-+
+ 
 +/* Fill buffer with size pseudo-random bytes generated by getentropy():
-+
+ 
+-/* Read 'size' random bytes from py_getrandom(). Fall back on reading from
+-   /dev/urandom if getrandom() is not available.
 +   - Return 1 on success
 +   - Return 0 if getentropy() syscall is not available (failed with ENOSYS or
 +     EPERM).
@@ -282,25 +262,47 @@
 +     if getentropy() failed with EINTR, raise is non-zero and the Python signal
 +     handler raised an exception, or if getentropy() failed with a different
 +     error.
-+
+ 
+-   Call Py_FatalError() on error. */
+-static void
+-dev_urandom_noraise(unsigned char *buffer, Py_ssize_t size)
 +   getentropy() is retried if it failed with EINTR: interrupted by a signal. */
 +static int
 +py_getentropy(char *buffer, Py_ssize_t size, int raise)
-+{
+ {
+-    int fd;
+-    Py_ssize_t n;
 +    /* Is getentropy() supported by the running kernel? Set to 0 if
 +       getentropy() failed with ENOSYS or EPERM. */
 +    static int getentropy_works = 1;
-+
+ 
+-    assert (0 < size);
+-
+-#ifdef PY_GETRANDOM
+-    if (py_getrandom(buffer, size, 0) == 1) {
+-        return;
 +    if (!getentropy_works) {
 +        return 0;
-+    }
-+
+     }
+-    /* getrandom() failed with ENOSYS or EPERM,
+-       fall back on reading /dev/urandom */
+-#endif
+ 
+-    fd = _Py_open_noraise("/dev/urandom", O_RDONLY);
+-    if (fd < 0) {
+-        Py_FatalError("Failed to open /dev/urandom");
+-    }
 +    while (size > 0) {
 +        /* getentropy() is limited to returning up to 256 bytes. Call it
 +           multiple times if more bytes are requested. */
 +        Py_ssize_t len = Py_MIN(size, 256);
 +        int res;
-+
+ 
+-    while (0 < size)
+-    {
+-        do {
+-            n = read(fd, buffer, (size_t)size);
+-        } while (n < 0 && errno == EINTR);
 +        if (raise) {
 +            Py_BEGIN_ALLOW_THREADS
 +            res = getentropy(buffer, len);
@@ -309,7 +311,11 @@
 +        else {
 +            res = getentropy(buffer, len);
 +        }
-+
+ 
+-        if (n <= 0) {
+-            /* read() failed or returned 0 bytes */
+-            Py_FatalError("Failed to read bytes from /dev/urandom");
+-            break;
 +        if (res < 0) {
 +            /* ENOSYS: the syscall is not supported by the running kernel.
 +               EPERM: the syscall is blocked by a security policy (ex: SECCOMP)
@@ -334,71 +340,44 @@
 +                PyErr_SetFromErrno(PyExc_OSError);
 +            }
 +            return -1;
-+        }
+         }
+-        buffer += n;
+-        size -= n;
 +
 +        buffer += len;
 +        size -= len;
-+    }
+     }
+-    close(fd);
 +    return 1;
-+}
+ }
 +#endif /* defined(HAVE_GETENTROPY) && !defined(sun) */
+ 
+-/* Read 'size' random bytes from py_getrandom(). Fall back on reading from
+-   /dev/urandom if getrandom() is not available.
+ 
+-   Return 0 on success. Raise an exception and return -1 on error. */
++static struct {
++    int fd;
++    dev_t st_dev;
++    ino_t st_ino;
++} urandom_cache = { -1 };
 +
- 
- static struct {
-     int fd;
-@@ -217,127 +273,123 @@ static struct {
-     ino_t st_ino;
- } urandom_cache = { -1 };
- 
 +/* Read random bytes from the /dev/urandom device:
- 
--/* Read size bytes from /dev/urandom into buffer.
--   Call Py_FatalError() on error. */
--static void
--dev_urandom_noraise(unsigned char *buffer, Py_ssize_t size)
--{
--    int fd;
--    Py_ssize_t n;
++
 +   - Return 0 on success
 +   - Raise an exception (if raise is non-zero) and return -1 on error
- 
--    assert (0 < size);
++
 +   Possible causes of errors:
- 
--#ifdef PY_GETRANDOM
--    if (py_getrandom(buffer, size, 0) == 1)
--        return;
--    /* getrandom() is not supported by the running kernel, fall back
--     * on reading /dev/urandom */
--#endif
++
 +   - open() failed with ENOENT, ENXIO, ENODEV, EACCES: the /dev/urandom device
 +     was not found. For example, it was removed manually or not exposed in a
 +     chroot or container.
 +   - open() failed with a different error
 +   - fstat() failed
 +   - read() failed or returned 0
- 
--    fd = _Py_open_noraise("/dev/urandom", O_RDONLY);
--    if (fd < 0)
--        Py_FatalError("Failed to open /dev/urandom");
++
 +   read() is retried if it failed with EINTR: interrupted by a signal.
- 
--    while (0 < size)
--    {
--        do {
--            n = read(fd, buffer, (size_t)size);
--        } while (n < 0 && errno == EINTR);
--        if (n <= 0)
--        {
--            /* stop on error or if read(size) returned 0 */
--            Py_FatalError("Failed to read bytes from /dev/urandom");
--            break;
--        }
--        buffer += n;
--        size -= (Py_ssize_t)n;
--    }
--    close(fd);
--}
++
 +   The file descriptor of the device is kept open between calls to avoid using
 +   many file descriptors when run in parallel from multiple threads:
 +   see the issue #18756.
@@ -406,9 +385,7 @@
 +   st_dev and st_ino fields of the file descriptor (from fstat()) are cached to
 +   check if the file descriptor was replaced by a different file (which is
 +   likely a bug in the application): see the issue #21207.
- 
--/* Read size bytes from /dev/urandom into buffer.
--   Return 0 on success, raise an exception and return -1 on error. */
++
 +   If the file descriptor was closed or replaced, open a new file descriptor
 +   but don't close the old file descriptor: it probably points to something
 +   important for some third-party code. */
@@ -422,22 +399,24 @@
 -#ifdef PY_GETRANDOM
 -    int res;
 -#endif
- 
+-
 -    if (size <= 0)
 -        return 0;
-+    if (raise) {
-+        struct _Py_stat_struct st;
  
 -#ifdef PY_GETRANDOM
 -    res = py_getrandom(buffer, size, 1);
--    if (res < 0)
+-    if (res < 0) {
 -        return -1;
--    if (res == 1)
+-    }
+-    if (res == 1) {
 -        return 0;
--    /* getrandom() is not supported by the running kernel, fall back
--     * on reading /dev/urandom */
+-    }
+-    /* getrandom() failed with ENOSYS or EPERM,
+-       fall back on reading /dev/urandom */
 -#endif
--
++    if (raise) {
++        struct _Py_stat_struct st;
+ 
 -    if (urandom_cache.fd >= 0) {
 -        /* Does the fd point to the same thing as before? (issue #21207) */
 -        if (_Py_fstat_noraise(urandom_cache.fd, &st)
@@ -516,8 +495,9 @@
  
 -    do {
 -        n = _Py_read(fd, buffer, (size_t)size);
--        if (n == -1)
+-        if (n == -1) {
 -            return -1;
+-        }
 -        if (n == 0) {
 -            PyErr_Format(PyExc_RuntimeError,
 -                    "Failed to read %zi bytes from /dev/urandom",
@@ -566,7 +546,7 @@
      return 0;
  }
  
-@@ -349,8 +401,8 @@ dev_urandom_close(void)
+@@ -376,8 +402,8 @@ dev_urandom_close(void)
          urandom_cache.fd = -1;
      }
  }
@@ -576,7 +556,7 @@
  
  /* Fill buffer with pseudo-random bytes generated by a linear congruent
     generator (LCG):
-@@ -373,29 +425,98 @@ lcg_urandom(unsigned int x0, unsigned ch
+@@ -400,31 +426,100 @@ lcg_urandom(unsigned int x0, unsigned char *buffer, size_t size)
      }
  }
  
@@ -661,7 +641,7 @@
  #else
 -    return dev_urandom_python((char*)buffer, size);
 +    res = py_getentropy(buffer, size, raise);
- #endif
++#endif
 +    if (res < 0) {
 +        return -1;
 +    }
@@ -673,9 +653,9 @@
 +#endif
 +
 +    return dev_urandom(buffer, size, raise);
-+#endif
-+}
-+
+ #endif
+ }
+ 
 +/* Fill buffer with size pseudo-random bytes from the operating system random
 +   number generator (RNG). It is suitable for most cryptographic purposes
 +   except long living private keys for asymmetric encryption.
@@ -685,10 +665,12 @@
 +_PyOS_URandom(void *buffer, Py_ssize_t size)
 +{
 +    return pyurandom(buffer, size, 1);
- }
- 
++}
++
  void
-@@ -436,13 +557,14 @@ _PyRandom_Init(void)
+ _PyRandom_Init(void)
+ {
+@@ -463,13 +558,14 @@ _PyRandom_Init(void)
          }
      }
      else {
@@ -710,7 +692,7 @@
      }
  }
  
-@@ -454,8 +576,6 @@ _PyRandom_Fini(void)
+@@ -481,8 +577,6 @@ _PyRandom_Fini(void)
          CryptReleaseContext(hCryptProv, 0);
          hCryptProv = 0;
      }
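(Aside on the getentropy() handling changed above: the patched comments describe two constraints that shape py_getentropy() -- getentropy() returns at most 256 bytes per call, and a call interrupted by a signal is retried on EINTR. The following is only a minimal standalone sketch of that chunk-and-retry pattern, assuming a glibc 2.25+ getentropy() declared in <unistd.h>; the helper name fill_with_getentropy is hypothetical and this is not the CPython code shown in the patch.)

/* Minimal sketch of the chunk-and-retry pattern described above.
 * Assumes getentropy() from glibc >= 2.25 (<unistd.h>); the helper
 * name fill_with_getentropy is hypothetical, not CPython's. */
#include <errno.h>
#include <stddef.h>
#include <stdio.h>
#include <unistd.h>

static int fill_with_getentropy(unsigned char *buf, size_t size)
{
    while (size > 0) {
        /* getentropy() is limited to 256 bytes per call */
        size_t len = size < 256 ? size : 256;

        if (getentropy(buf, len) < 0) {
            if (errno == EINTR)
                continue;   /* interrupted by a signal: retry */
            return -1;      /* e.g. ENOSYS or EPERM: caller must fall back */
        }
        buf += len;
        size -= len;
    }
    return 0;
}

int main(void)
{
    unsigned char key[512];

    if (fill_with_getentropy(key, sizeof(key)) != 0) {
        perror("getentropy");
        return 1;
    }
    printf("filled %zu random bytes\n", sizeof(key));
    return 0;
}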
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3_3.5.2.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3_3.5.2.bb
deleted file mode 100644
index 2ff7c9e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python3_3.5.2.bb
+++ /dev/null
@@ -1,224 +0,0 @@
-require recipes-devtools/python/python.inc
-
-DEPENDS = "python3-native libffi bzip2 db gdbm openssl readline sqlite3 zlib virtual/libintl xz"
-PR = "${INC_PR}.0"
-PYTHON_MAJMIN = "3.5"
-PYTHON_BINABI= "${PYTHON_MAJMIN}m"
-DISTRO_SRC_URI ?= "file://sitecustomize.py"
-DISTRO_SRC_URI_linuxstdbase = ""
-SRC_URI = "http://www.python.org/ftp/python/${PV}/Python-${PV}.tar.xz \
-file://python-config.patch \
-file://000-cross-compile.patch \
-file://030-fixup-include-dirs.patch \
-file://070-dont-clean-ipkg-install.patch \
-file://080-distutils-dont_adjust_files.patch \
-file://130-readline-setup.patch \
-file://150-fix-setupterm.patch \
-file://0001-h2py-Fix-issue-13032-where-it-fails-with-UnicodeDeco.patch \
-file://tweak-MULTIARCH-for-powerpc-linux-gnuspe.patch \
-${DISTRO_SRC_URI} \
-"
-
-SRC_URI += "\
-            file://03-fix-tkinter-detection.patch \
-            file://avoid_warning_about_tkinter.patch \
-            file://cgi_py.patch \
-            file://host_include_contamination.patch \
-            file://python-3.3-multilib.patch \
-            file://shutil-follow-symlink-fix.patch \
-            file://sysroot-include-headers.patch \
-            file://unixccompiler.patch \
-            file://avoid-ncursesw-include-path.patch \
-            file://python3-use-CROSSPYTHONPATH-for-PYTHON_FOR_BUILD.patch \
-            file://python3-setup.py-no-host-headers-libs.patch \
-            file://sysconfig.py-add-_PYTHON_PROJECT_SRC.patch \
-            file://setup.py-check-cross_compiling-when-get-FLAGS.patch \
-            file://setup.py-find-libraries-in-staging-dirs.patch \
-            file://configure.ac-fix-LIBPL.patch \
-            file://python3-fix-CVE-2016-1000110.patch \
-            file://upstream-random-fixes.patch \
-           "
-SRC_URI[md5sum] = "8906efbacfcdc7c3c9198aeefafd159e"
-SRC_URI[sha256sum] = "0010f56100b9b74259ebcd5d4b295a32324b58b517403a10d1a2aa7cb22bca40"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=6b60258130e4ed10d3101517eb5b9385"
-
-# exclude pre-releases for both python 2.x and 3.x
-UPSTREAM_CHECK_REGEX = "[Pp]ython-(?P<pver>\d+(\.\d+)+).tar"
-
-S = "${WORKDIR}/Python-${PV}"
-
-inherit autotools multilib_header python3native pkgconfig
-
-CONFIGUREOPTS += " --with-system-ffi "
-
-CACHED_CONFIGUREVARS = "ac_cv_have_chflags=no \
-                ac_cv_have_lchflags=no \
-                ac_cv_have_long_long_format=yes \
-                ac_cv_buggy_getaddrinfo=no \
-                ac_cv_file__dev_ptmx=yes \
-                ac_cv_file__dev_ptc=no \
-"
-
-TARGET_CC_ARCH += "-DNDEBUG -fno-inline"
-SDK_CC_ARCH += "-DNDEBUG -fno-inline"
-EXTRA_OEMAKE += "CROSS_COMPILE=yes"
-EXTRA_OECONF += "CROSSPYTHONPATH=${STAGING_LIBDIR_NATIVE}/python${PYTHON_MAJMIN}/lib-dynload/ --without-ensurepip"
-
-export CROSS_COMPILE = "${TARGET_PREFIX}"
-export _PYTHON_PROJECT_BASE = "${B}"
-export _PYTHON_PROJECT_SRC = "${S}"
-export CCSHARED = "-fPIC"
-
-# Fix cross compilation of different modules
-export CROSSPYTHONPATH = "${STAGING_LIBDIR_NATIVE}/python${PYTHON_MAJMIN}/lib-dynload/:${B}/build/lib.linux-${TARGET_ARCH}-${PYTHON_MAJMIN}:${S}/Lib:${S}/Lib/plat-linux"
-
-# No ctypes option for python 3
-PYTHONLSBOPTS = ""
-
-do_configure_append() {
-	rm -f ${S}/Makefile.orig
-	autoreconf -Wcross --verbose --install --force --exclude=autopoint ../Python-${PV}/Modules/_ctypes/libffi
-}
-
-do_compile() {
-        # regenerate platform specific files, because they depend on system headers
-        cd ${S}/Lib/plat-linux*
-        include=${STAGING_INCDIR} ${STAGING_BINDIR_NATIVE}/python3-native/python3 \
-                ${S}/Tools/scripts/h2py.py -i '(u_long)' \
-                ${STAGING_INCDIR}/dlfcn.h \
-                ${STAGING_INCDIR}/linux/cdrom.h \
-                ${STAGING_INCDIR}/netinet/in.h \
-                ${STAGING_INCDIR}/sys/types.h
-        sed -e 's,${STAGING_DIR_HOST},,g' -i *.py
-        cd -
-
-
-	# remove any bogus LD_LIBRARY_PATH
-	sed -i -e s,RUNSHARED=.*,RUNSHARED=, Makefile
-
-	if [ ! -f Makefile.orig ]; then
-		install -m 0644 Makefile Makefile.orig
-	fi
-	sed -i -e 's,^CONFIGURE_LDFLAGS=.*,CONFIGURE_LDFLAGS=-L. -L${STAGING_LIBDIR},g' \
-		-e 's,libdir=${libdir},libdir=${STAGING_LIBDIR},g' \
-		-e 's,libexecdir=${libexecdir},libexecdir=${STAGING_DIR_HOST}${libexecdir},g' \
-		-e 's,^LIBDIR=.*,LIBDIR=${STAGING_LIBDIR},g' \
-		-e 's,includedir=${includedir},includedir=${STAGING_INCDIR},g' \
-		-e 's,^INCLUDEDIR=.*,INCLUDE=${STAGING_INCDIR},g' \
-		-e 's,^CONFINCLUDEDIR=.*,CONFINCLUDE=${STAGING_INCDIR},g' \
-		Makefile
-	# save copy of it now, because if we do it in do_install and 
-	# then call do_install twice we get Makefile.orig == Makefile.sysroot
-	install -m 0644 Makefile Makefile.sysroot
-
-	oe_runmake HOSTPGEN=${STAGING_BINDIR_NATIVE}/python3-native/pgen \
-		HOSTPYTHON=${STAGING_BINDIR_NATIVE}/python3-native/python3 \
-		STAGING_LIBDIR=${STAGING_LIBDIR} \
-		STAGING_BASELIBDIR=${STAGING_BASELIBDIR} \
-		STAGING_INCDIR=${STAGING_INCDIR} \
-		LIB=${baselib} \
-		ARCH=${TARGET_ARCH} \
-		OPT="${CFLAGS}" libpython3.so
-
-	oe_runmake HOSTPGEN=${STAGING_BINDIR_NATIVE}/python3-native/pgen \
-		HOSTPYTHON=${STAGING_BINDIR_NATIVE}/python3-native/python3 \
-		STAGING_LIBDIR=${STAGING_LIBDIR} \
-		STAGING_INCDIR=${STAGING_INCDIR} \
-		STAGING_BASELIBDIR=${STAGING_BASELIBDIR} \
-		LIB=${baselib} \
-		ARCH=${TARGET_ARCH} \
-		OPT="${CFLAGS}"
-}
-
-do_install() {
-	# make install needs the original Makefile, or otherwise the inclues would
-	# go to ${D}${STAGING...}/...
-	install -m 0644 Makefile.orig Makefile
-
-	install -d ${D}${libdir}/pkgconfig
-	install -d ${D}${libdir}/python${PYTHON_MAJMIN}/config
-
-	# rerun the build once again with original makefile this time
-	# run install in a separate step to avoid compile/install race
-	oe_runmake HOSTPGEN=${STAGING_BINDIR_NATIVE}/python3-native/pgen \
-		HOSTPYTHON=${STAGING_BINDIR_NATIVE}/python3-native/python3 \
-		STAGING_LIBDIR=${STAGING_LIBDIR} \
-		STAGING_INCDIR=${STAGING_INCDIR} \
-		STAGING_BASELIBDIR=${STAGING_BASELIBDIR} \
-		LIB=${baselib} \
-		ARCH=${TARGET_ARCH} \
-		DESTDIR=${D} LIBDIR=${libdir}
-	
-	oe_runmake HOSTPGEN=${STAGING_BINDIR_NATIVE}/python3-native/pgen \
-		HOSTPYTHON=${STAGING_BINDIR_NATIVE}/python3-native/python3 \
-		STAGING_LIBDIR=${STAGING_LIBDIR} \
-		STAGING_INCDIR=${STAGING_INCDIR} \
-		STAGING_BASELIBDIR=${STAGING_BASELIBDIR} \
-		LIB=${baselib} \
-		ARCH=${TARGET_ARCH} \
-		DESTDIR=${D} LIBDIR=${libdir} install
-
-	# avoid conflict with 2to3 from Python 2
-	rm -f ${D}/${bindir}/2to3
-
-	install -m 0644 Makefile.sysroot ${D}/${libdir}/python${PYTHON_MAJMIN}/config/Makefile
-	install -m 0644 Makefile.sysroot ${D}/${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_MAJMIN}${PYTHON_ABI}/Makefile
-
-	if [ -e ${WORKDIR}/sitecustomize.py ]; then
-		install -m 0644 ${WORKDIR}/sitecustomize.py ${D}/${libdir}/python${PYTHON_MAJMIN}
-	fi
-
-	oe_multilib_header python${PYTHON_BINABI}/pyconfig.h
-}
-
-do_install_append_class-nativesdk () {
-	create_wrapper ${D}${bindir}/python${PYTHON_MAJMIN} TERMINFO_DIRS='${sysconfdir}/terminfo:/etc/terminfo:/usr/share/terminfo:/usr/share/misc/terminfo:/lib/terminfo'
-}
-
-SSTATE_SCAN_FILES += "Makefile"
-PACKAGE_PREPROCESS_FUNCS += "py_package_preprocess"
-
-py_package_preprocess () {
-	# copy back the old Makefile to fix target package
-	install -m 0644 ${B}/Makefile.orig ${PKGD}/${libdir}/python${PYTHON_MAJMIN}/config/Makefile
-	install -m 0644 ${B}/Makefile.orig ${PKGD}/${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_MAJMIN}${PYTHON_ABI}/Makefile
-	# Remove references to buildmachine paths in target Makefile and _sysconfigdata
-	sed -i -e 's:--sysroot=${STAGING_DIR_TARGET}::g' -e s:'--with-libtool-sysroot=${STAGING_DIR_TARGET}'::g \
-		${PKGD}/${libdir}/python${PYTHON_MAJMIN}/config/Makefile \
-		${PKGD}/${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_MAJMIN}${PYTHON_ABI}/Makefile \
-		${PKGD}/${libdir}/python${PYTHON_MAJMIN}/_sysconfigdata.py
-}
-
-require python-${PYTHON_MAJMIN}-manifest.inc
-
-# manual dependency additions
-RPROVIDES_${PN}-core = "${PN}"
-RRECOMMENDS_${PN}-core = "${PN}-readline"
-RRECOMMENDS_${PN}-crypt = "openssl"
-RRECOMMENDS_${PN}-crypt_class-nativesdk = "nativesdk-openssl"
-
-FILES_${PN}-2to3 += "${bindir}/2to3-${PYTHON_MAJMIN}"
-FILES_${PN}-pydoc += "${bindir}/pydoc${PYTHON_MAJMIN} ${bindir}/pydoc3"
-FILES_${PN}-idle += "${bindir}/idle3 ${bindir}/idle${PYTHON_MAJMIN}"
-
-PACKAGES =+ "${PN}-pyvenv"
-FILES_${PN}-pyvenv += "${bindir}/pyvenv-${PYTHON_MAJMIN} ${bindir}/pyvenv"
-
-# package libpython3
-PACKAGES =+ "libpython3 libpython3-staticdev"
-FILES_libpython3 = "${libdir}/libpython*.so.*"
-FILES_libpython3-staticdev += "${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_BINABI}/libpython${PYTHON_BINABI}.a"
-INSANE_SKIP_${PN}-dev += "dev-elf"
-
-# catch all the rest (unsorted)
-PACKAGES += "${PN}-misc"
-RDEPENDS_${PN}-misc += "${PN}-core ${PN}-email ${PN}-codecs ${PN}-textutils ${PN}-argparse"
-RDEPENDS_${PN}-modules += "${PN}-misc"
-FILES_${PN}-misc = "${libdir}/python${PYTHON_MAJMIN}"
-
-# catch manpage
-PACKAGES += "${PN}-man"
-FILES_${PN}-man = "${datadir}/man"
-
-BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python3_3.5.3.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python3_3.5.3.bb
new file mode 100644
index 0000000..13df12f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python3_3.5.3.bb
@@ -0,0 +1,227 @@
+require recipes-devtools/python/python.inc
+
+DEPENDS = "python3-native libffi bzip2 db gdbm openssl readline sqlite3 zlib virtual/libintl xz"
+PR = "${INC_PR}.0"
+PYTHON_MAJMIN = "3.5"
+PYTHON_BINABI= "${PYTHON_MAJMIN}m"
+DISTRO_SRC_URI ?= "file://sitecustomize.py"
+DISTRO_SRC_URI_linuxstdbase = ""
+SRC_URI = "http://www.python.org/ftp/python/${PV}/Python-${PV}.tar.xz \
+file://python-config.patch \
+file://0001-cross-compile-support.patch \
+file://030-fixup-include-dirs.patch \
+file://070-dont-clean-ipkg-install.patch \
+file://080-distutils-dont_adjust_files.patch \
+file://130-readline-setup.patch \
+file://150-fix-setupterm.patch \
+file://0001-h2py-Fix-issue-13032-where-it-fails-with-UnicodeDeco.patch \
+file://tweak-MULTIARCH-for-powerpc-linux-gnuspe.patch \
+file://support_SOURCE_DATE_EPOCH_in_py_compile.patch \
+${DISTRO_SRC_URI} \
+"
+
+SRC_URI += "\
+            file://03-fix-tkinter-detection.patch \
+            file://avoid_warning_about_tkinter.patch \
+            file://cgi_py.patch \
+            file://host_include_contamination.patch \
+            file://python-3.3-multilib.patch \
+            file://shutil-follow-symlink-fix.patch \
+            file://sysroot-include-headers.patch \
+            file://unixccompiler.patch \
+            file://avoid-ncursesw-include-path.patch \
+            file://python3-use-CROSSPYTHONPATH-for-PYTHON_FOR_BUILD.patch \
+            file://python3-setup.py-no-host-headers-libs.patch \
+            file://sysconfig.py-add-_PYTHON_PROJECT_SRC.patch \
+            file://setup.py-check-cross_compiling-when-get-FLAGS.patch \
+            file://setup.py-find-libraries-in-staging-dirs.patch \
+            file://configure.ac-fix-LIBPL.patch \
+            file://upstream-random-fixes.patch \
+            file://0001-Issue-21272-Use-_sysconfigdata.py-to-initialize-dist.patch \
+            file://Fix-29519-weakref-spewing-exceptions-during-interp-f.patch \
+            file://pass-missing-libraries-to-Extension-for-mul.patch \
+           "
+SRC_URI[md5sum] = "57d1f8bfbabf4f2500273fb0706e6f21"
+SRC_URI[sha256sum] = "eefe2ad6575855423ab630f5b51a8ef6e5556f774584c06beab4926f930ddbb0"
+
+LIC_FILES_CHKSUM = "file://LICENSE;md5=b680ed99aa60d350c65a65914494207e"
+
+# exclude pre-releases for both python 2.x and 3.x
+UPSTREAM_CHECK_REGEX = "[Pp]ython-(?P<pver>\d+(\.\d+)+).tar"
+
+S = "${WORKDIR}/Python-${PV}"
+
+inherit autotools multilib_header python3native pkgconfig
+
+CONFIGUREOPTS += " --with-system-ffi "
+
+CACHED_CONFIGUREVARS = "ac_cv_have_chflags=no \
+                ac_cv_have_lchflags=no \
+                ac_cv_have_long_long_format=yes \
+                ac_cv_buggy_getaddrinfo=no \
+                ac_cv_file__dev_ptmx=yes \
+                ac_cv_file__dev_ptc=no \
+"
+
+TARGET_CC_ARCH += "-DNDEBUG -fno-inline"
+SDK_CC_ARCH += "-DNDEBUG -fno-inline"
+EXTRA_OEMAKE += "CROSS_COMPILE=yes"
+EXTRA_OECONF += "CROSSPYTHONPATH=${STAGING_LIBDIR_NATIVE}/python${PYTHON_MAJMIN}/lib-dynload/ --without-ensurepip"
+
+export CROSS_COMPILE = "${TARGET_PREFIX}"
+export _PYTHON_PROJECT_BASE = "${B}"
+export _PYTHON_PROJECT_SRC = "${S}"
+export CCSHARED = "-fPIC"
+
+# Fix cross compilation of different modules
+export CROSSPYTHONPATH = "${STAGING_LIBDIR_NATIVE}/python${PYTHON_MAJMIN}/lib-dynload/:${B}/build/lib.linux-${TARGET_ARCH}-${PYTHON_MAJMIN}:${S}/Lib:${S}/Lib/plat-linux"
+
+# No ctypes option for python 3
+PYTHONLSBOPTS = ""
+
+do_configure_append() {
+	rm -f ${S}/Makefile.orig
+	autoreconf -Wcross --verbose --install --force --exclude=autopoint ../Python-${PV}/Modules/_ctypes/libffi
+}
+
+do_compile() {
+        # regenerate platform specific files, because they depend on system headers
+        cd ${S}/Lib/plat-linux*
+        include=${STAGING_INCDIR} ${STAGING_BINDIR_NATIVE}/python3-native/python3 \
+                ${S}/Tools/scripts/h2py.py -i '(u_long)' \
+                ${STAGING_INCDIR}/dlfcn.h \
+                ${STAGING_INCDIR}/linux/cdrom.h \
+                ${STAGING_INCDIR}/netinet/in.h \
+                ${STAGING_INCDIR}/sys/types.h
+        sed -e 's,${STAGING_DIR_HOST},,g' -i *.py
+        cd -
+
+
+	# remove any bogus LD_LIBRARY_PATH
+	sed -i -e s,RUNSHARED=.*,RUNSHARED=, Makefile
+
+	if [ ! -f Makefile.orig ]; then
+		install -m 0644 Makefile Makefile.orig
+	fi
+	sed -i -e 's,^CONFIGURE_LDFLAGS=.*,CONFIGURE_LDFLAGS=-L. -L${STAGING_LIBDIR},g' \
+		-e 's,libdir=${libdir},libdir=${STAGING_LIBDIR},g' \
+		-e 's,libexecdir=${libexecdir},libexecdir=${STAGING_DIR_HOST}${libexecdir},g' \
+		-e 's,^LIBDIR=.*,LIBDIR=${STAGING_LIBDIR},g' \
+		-e 's,includedir=${includedir},includedir=${STAGING_INCDIR},g' \
+		-e 's,^INCLUDEDIR=.*,INCLUDE=${STAGING_INCDIR},g' \
+		-e 's,^CONFINCLUDEDIR=.*,CONFINCLUDE=${STAGING_INCDIR},g' \
+		Makefile
+	# save copy of it now, because if we do it in do_install and 
+	# then call do_install twice we get Makefile.orig == Makefile.sysroot
+	install -m 0644 Makefile Makefile.sysroot
+
+	oe_runmake HOSTPGEN=${STAGING_BINDIR_NATIVE}/python3-native/pgen \
+		HOSTPYTHON=${STAGING_BINDIR_NATIVE}/python3-native/python3 \
+		STAGING_LIBDIR=${STAGING_LIBDIR} \
+		STAGING_BASELIBDIR=${STAGING_BASELIBDIR} \
+		STAGING_INCDIR=${STAGING_INCDIR} \
+		LIB=${baselib} \
+		ARCH=${TARGET_ARCH} \
+		OPT="${CFLAGS}" libpython3.so
+
+	oe_runmake HOSTPGEN=${STAGING_BINDIR_NATIVE}/python3-native/pgen \
+		HOSTPYTHON=${STAGING_BINDIR_NATIVE}/python3-native/python3 \
+		STAGING_LIBDIR=${STAGING_LIBDIR} \
+		STAGING_INCDIR=${STAGING_INCDIR} \
+		STAGING_BASELIBDIR=${STAGING_BASELIBDIR} \
+		LIB=${baselib} \
+		ARCH=${TARGET_ARCH} \
+		OPT="${CFLAGS}"
+}
+
+do_install() {
+	# make install needs the original Makefile, otherwise the includes would
+	# go to ${D}${STAGING...}/...
+	install -m 0644 Makefile.orig Makefile
+
+	install -d ${D}${libdir}/pkgconfig
+	install -d ${D}${libdir}/python${PYTHON_MAJMIN}/config
+
+	# rerun the build once again with original makefile this time
+	# run install in a separate step to avoid compile/install race
+	oe_runmake HOSTPGEN=${STAGING_BINDIR_NATIVE}/python3-native/pgen \
+		HOSTPYTHON=${STAGING_BINDIR_NATIVE}/python3-native/python3 \
+		STAGING_LIBDIR=${STAGING_LIBDIR} \
+		STAGING_INCDIR=${STAGING_INCDIR} \
+		STAGING_BASELIBDIR=${STAGING_BASELIBDIR} \
+		LIB=${baselib} \
+		ARCH=${TARGET_ARCH} \
+		DESTDIR=${D} LIBDIR=${libdir}
+	
+	oe_runmake HOSTPGEN=${STAGING_BINDIR_NATIVE}/python3-native/pgen \
+		HOSTPYTHON=${STAGING_BINDIR_NATIVE}/python3-native/python3 \
+		STAGING_LIBDIR=${STAGING_LIBDIR} \
+		STAGING_INCDIR=${STAGING_INCDIR} \
+		STAGING_BASELIBDIR=${STAGING_BASELIBDIR} \
+		LIB=${baselib} \
+		ARCH=${TARGET_ARCH} \
+		DESTDIR=${D} LIBDIR=${libdir} install
+
+	# avoid conflict with 2to3 from Python 2
+	rm -f ${D}/${bindir}/2to3
+
+	install -m 0644 Makefile.sysroot ${D}/${libdir}/python${PYTHON_MAJMIN}/config/Makefile
+	install -m 0644 Makefile.sysroot ${D}/${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_MAJMIN}${PYTHON_ABI}/Makefile
+
+	if [ -e ${WORKDIR}/sitecustomize.py ]; then
+		install -m 0644 ${WORKDIR}/sitecustomize.py ${D}/${libdir}/python${PYTHON_MAJMIN}
+	fi
+
+	oe_multilib_header python${PYTHON_BINABI}/pyconfig.h
+}
+
+do_install_append_class-nativesdk () {
+	create_wrapper ${D}${bindir}/python${PYTHON_MAJMIN} TERMINFO_DIRS='${sysconfdir}/terminfo:/etc/terminfo:/usr/share/terminfo:/usr/share/misc/terminfo:/lib/terminfo'
+}
+
+SSTATE_SCAN_FILES += "Makefile"
+PACKAGE_PREPROCESS_FUNCS += "py_package_preprocess"
+
+py_package_preprocess () {
+	# copy back the old Makefile to fix target package
+	install -m 0644 ${B}/Makefile.orig ${PKGD}/${libdir}/python${PYTHON_MAJMIN}/config/Makefile
+	install -m 0644 ${B}/Makefile.orig ${PKGD}/${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_MAJMIN}${PYTHON_ABI}/Makefile
+	# Remove references to buildmachine paths in target Makefile and _sysconfigdata
+	sed -i -e 's:--sysroot=${STAGING_DIR_TARGET}::g' -e s:'--with-libtool-sysroot=${STAGING_DIR_TARGET}'::g \
+		${PKGD}/${libdir}/python${PYTHON_MAJMIN}/config/Makefile \
+		${PKGD}/${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_MAJMIN}${PYTHON_ABI}/Makefile \
+		${PKGD}/${libdir}/python${PYTHON_MAJMIN}/_sysconfigdata.py
+}
+
+require python-${PYTHON_MAJMIN}-manifest.inc
+
+# manual dependency additions
+RPROVIDES_${PN}-modules = "${PN}"
+RRECOMMENDS_${PN}-core = "${PN}-readline"
+RRECOMMENDS_${PN}-crypt = "openssl"
+RRECOMMENDS_${PN}-crypt_class-nativesdk = "nativesdk-openssl"
+
+FILES_${PN}-2to3 += "${bindir}/2to3-${PYTHON_MAJMIN}"
+FILES_${PN}-pydoc += "${bindir}/pydoc${PYTHON_MAJMIN} ${bindir}/pydoc3"
+FILES_${PN}-idle += "${bindir}/idle3 ${bindir}/idle${PYTHON_MAJMIN}"
+
+PACKAGES =+ "${PN}-pyvenv"
+FILES_${PN}-pyvenv += "${bindir}/pyvenv-${PYTHON_MAJMIN} ${bindir}/pyvenv"
+
+# package libpython3
+PACKAGES =+ "libpython3 libpython3-staticdev"
+FILES_libpython3 = "${libdir}/libpython*.so.*"
+FILES_libpython3-staticdev += "${libdir}/python${PYTHON_MAJMIN}/config-${PYTHON_BINABI}/libpython${PYTHON_BINABI}.a"
+INSANE_SKIP_${PN}-dev += "dev-elf"
+
+# catch all the rest (unsorted)
+PACKAGES += "${PN}-misc"
+RDEPENDS_${PN}-misc += "${PN}-core ${PN}-email ${PN}-codecs ${PN}-textutils ${PN}-argparse"
+RDEPENDS_${PN}-modules += "${PN}-misc"
+FILES_${PN}-misc = "${libdir}/python${PYTHON_MAJMIN}"
+
+# catch manpage
+PACKAGES += "${PN}-man"
+FILES_${PN}-man = "${datadir}/man"
+
+BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/python/python_2.7.13.bb b/import-layers/yocto-poky/meta/recipes-devtools/python/python_2.7.13.bb
index 4ef9952..754c029 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/python/python_2.7.13.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/python/python_2.7.13.bb
@@ -1,5 +1,5 @@
 require python.inc
-DEPENDS = "python-native libffi bzip2 db gdbm openssl readline sqlite3 zlib"
+DEPENDS = "python-native libffi bzip2 gdbm openssl readline sqlite3 zlib"
 PR = "${INC_PR}"
 
 DISTRO_SRC_URI ?= "file://sitecustomize.py"
@@ -27,6 +27,8 @@
   file://use_sysroot_ncurses_instead_of_host.patch \
   file://add-CROSSPYTHONPATH-for-PYTHON_FOR_BUILD.patch \
   file://Don-t-use-getentropy-on-Linux.patch \
+  file://pass-missing-libraries-to-Extension-for-mul.patch \
+  file://support_SOURCE_DATE_EPOCH_in_py_compile_2.7.patch \
 "
 
 S = "${WORKDIR}/Python-${PV}"
@@ -37,6 +39,9 @@
 
 EXTRA_OECONF += "ac_cv_file__dev_ptmx=yes ac_cv_file__dev_ptc=no"
 
+PACKAGECONFIG ??= "bdb"
+PACKAGECONFIG[bdb] = ",,db"
+
 do_configure_append() {
 	rm -f ${S}/Makefile.orig
         autoreconf -Wcross --verbose --install --force --exclude=autopoint ../Python-${PV}/Modules/_ctypes/libffi
@@ -116,6 +121,10 @@
 	fi
 
 	oe_multilib_header python${PYTHON_MAJMIN}/pyconfig.h
+
+    if [ -z "${@bb.utils.filter('PACKAGECONFIG', 'bdb', d)}" ]; then
+        rm -rf ${D}/${libdir}/python${PYTHON_MAJMIN}/bsddb
+    fi
 }
 
 do_install_append_class-nativesdk () {
@@ -152,7 +161,9 @@
 PACKAGES += "${PN}-misc"
 FILES_${PN}-misc = "${libdir}/python${PYTHON_MAJMIN}"
 RDEPENDS_${PN}-modules += "${PN}-misc"
-RDEPENDS_${PN}-ptest = "${PN}-modules"
+
+# ptest
+RDEPENDS_${PN}-ptest = "${PN}-modules ${PN}-tests"
 #inherit ptest after "require python-${PYTHON_MAJMIN}-manifest.inc" so PACKAGES doesn't get overwritten
 inherit ptest
 
@@ -162,10 +173,23 @@
 	sed -e s:LIBDIR/python/ptest:${PTEST_PATH}:g \
 	 -e s:LIBDIR:${libdir}:g \
 	 -i ${D}${PTEST_PATH}/run-ptest
+
+	# Remove build host references
+	sed -i \
+		-e 's:--with-libtool-sysroot=${STAGING_DIR_TARGET}'::g \
+	    -e 's:--sysroot=${STAGING_DIR_TARGET}::g' \
+	    -e 's|${DEBUG_PREFIX_MAP}||g' \
+	    -e 's:${HOSTTOOLS_DIR}/::g' \
+	    -e 's:${RECIPE_SYSROOT}::g' \
+	    -e 's:${BASE_WORKDIR}/${MULTIMACH_TARGET_SYS}::g' \
+	${D}/${PTEST_PATH}/Makefile
 }
 
 # catch manpage
 PACKAGES += "${PN}-man"
 FILES_${PN}-man = "${datadir}/man"
 
+# Nasty but if bdb isn't enabled the package won't be generated
+RDEPENDS_${PN}-modules_remove = "${@bb.utils.contains('PACKAGECONFIG', 'bdb', '', '${PN}-bsddb', d)}"
+
 BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu-helper-native_1.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu-helper-native_1.0.bb
index 27d5315..d86b155 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu-helper-native_1.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu-helper-native_1.0.bb
@@ -20,6 +20,5 @@
 	install tunctl ${D}${bindir}/
 }
 
-RM_WORK_EXCLUDE_ITEMS += "recipe-sysroot-native"
 DEPENDS += "qemu-native"
 addtask addto_recipe_sysroot after do_populate_sysroot before do_build
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu.inc b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu.inc
index 0e1411a..2a1d14b 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu.inc
@@ -26,7 +26,6 @@
     --disable-strip \
     --disable-werror \
     --target-list=${@get_qemu_target_list(d)} \
-    --with-system-pixman \
     --extra-cflags='${CFLAGS}' \
     "
 EXTRA_OECONF_append_class-native = " --python=${USRBINPATH}/python2.7"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/0004-Add-support-for-VM-suspend-resume-for-TPM-TIS-v2.9.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/0004-Add-support-for-VM-suspend-resume-for-TPM-TIS-v2.9.patch
new file mode 100644
index 0000000..f1dbaff
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/0004-Add-support-for-VM-suspend-resume-for-TPM-TIS-v2.9.patch
@@ -0,0 +1,719 @@
+From 5e9dd9063f514447ea4f54046793f4f01c297ed4 Mon Sep 17 00:00:00 2001
+From: Stefan Berger <stefanb@linux.vnet.ibm.com>
+Date: Sat, 31 Dec 2016 11:23:32 -0500
+Subject: [PATCH 4/4] Add support for VM suspend/resume for TPM TIS
+
+Extend the TPM TIS code to support suspend/resume. In case a command
+is being processed by the external TPM when suspending, wait for the command
+to complete to catch the result. In case the bottom half did not run,
+run the one function the bottom half is supposed to run. This then
+makes the resume operation work.
+
+The passthrough backend does not support suspend/resume operation
+and is therefore blocked from suspend/resume and migration.
+
+The CUSE TPM's supported capabilities are tested and if sufficient
+capabilities are implemented, suspend/resume, snapshotting and
+migration are supported by the CUSE TPM.
+
+Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
+
+Upstream-Status: Pending [https://lists.nongnu.org/archive/html/qemu-devel/2016-06/msg00252.html]
+Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
+---
+ hw/tpm/tpm_passthrough.c     | 130 +++++++++++++++++++++++--
+ hw/tpm/tpm_tis.c             | 137 +++++++++++++++++++++++++-
+ hw/tpm/tpm_tis.h             |   2 +
+ hw/tpm/tpm_util.c            | 223 +++++++++++++++++++++++++++++++++++++++++++
+ hw/tpm/tpm_util.h            |   7 ++
+ include/sysemu/tpm_backend.h |  12 +++
+ 6 files changed, 503 insertions(+), 8 deletions(-)
+
+diff --git a/hw/tpm/tpm_passthrough.c b/hw/tpm/tpm_passthrough.c
+index 44739ebad2..bc8072d0bc 100644
+--- a/hw/tpm/tpm_passthrough.c
++++ b/hw/tpm/tpm_passthrough.c
+@@ -34,6 +34,8 @@
+ #include "tpm_tis.h"
+ #include "tpm_util.h"
+ #include "tpm_ioctl.h"
++#include "migration/migration.h"
++#include "qapi/error.h"
+ 
+ #define DEBUG_TPM 0
+ 
+@@ -49,6 +51,7 @@
+ #define TYPE_TPM_CUSE "tpm-cuse"
+ 
+ static const TPMDriverOps tpm_passthrough_driver;
++static const VMStateDescription vmstate_tpm_cuse;
+ 
+ /* data structures */
+ typedef struct TPMPassthruThreadParams {
+@@ -79,6 +82,10 @@ struct TPMPassthruState {
+     QemuMutex state_lock;
+     QemuCond cmd_complete;  /* singnaled once tpm_busy is false */
+     bool tpm_busy;
++
++    Error *migration_blocker;
++
++    TPMBlobBuffers tpm_blobs;
+ };
+ 
+ typedef struct TPMPassthruState TPMPassthruState;
+@@ -306,6 +313,10 @@ static void tpm_passthrough_shutdown(TPMPassthruState *tpm_pt)
+                          strerror(errno));
+         }
+     }
++    if (tpm_pt->migration_blocker) {
++        migrate_del_blocker(tpm_pt->migration_blocker);
++        error_free(tpm_pt->migration_blocker);
++    }
+ }
+ 
+ /*
+@@ -360,12 +371,14 @@ static int tpm_passthrough_cuse_check_caps(TPMPassthruState *tpm_pt)
+ /*
+  * Initialize the external CUSE TPM
+  */
+-static int tpm_passthrough_cuse_init(TPMPassthruState *tpm_pt)
++static int tpm_passthrough_cuse_init(TPMPassthruState *tpm_pt,
++                                     bool is_resume)
+ {
+     int rc = 0;
+-    ptm_init init = {
+-        .u.req.init_flags = PTM_INIT_FLAG_DELETE_VOLATILE,
+-    };
++    ptm_init init;
++    if (is_resume) {
++        init.u.req.init_flags = PTM_INIT_FLAG_DELETE_VOLATILE;
++    }
+ 
+     if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
+         if (ioctl(tpm_pt->tpm_fd, PTM_INIT, &init) < 0) {
+@@ -394,7 +407,7 @@ static int tpm_passthrough_startup_tpm(TPMBackend *tb)
+                               tpm_passthrough_worker_thread,
+                               &tpm_pt->tpm_thread_params);
+ 
+-    tpm_passthrough_cuse_init(tpm_pt);
++    tpm_passthrough_cuse_init(tpm_pt, false);
+ 
+     return 0;
+ }
+@@ -466,6 +479,32 @@ static int tpm_passthrough_reset_tpm_established_flag(TPMBackend *tb,
+     return rc;
+ }
+ 
++static int tpm_cuse_get_state_blobs(TPMBackend *tb,
++                                    bool decrypted_blobs,
++                                    TPMBlobBuffers *tpm_blobs)
++{
++    TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
++
++    assert(TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt));
++
++    return tpm_util_cuse_get_state_blobs(tpm_pt->tpm_fd, decrypted_blobs,
++                                         tpm_blobs);
++}
++
++static int tpm_cuse_set_state_blobs(TPMBackend *tb,
++                                    TPMBlobBuffers *tpm_blobs)
++{
++    TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
++
++    assert(TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt));
++
++    if (tpm_util_cuse_set_state_blobs(tpm_pt->tpm_fd, tpm_blobs)) {
++        return 1;
++    }
++
++    return tpm_passthrough_cuse_init(tpm_pt, true);
++}
++
+ static bool tpm_passthrough_get_startup_error(TPMBackend *tb)
+ {
+     TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
+@@ -488,7 +527,7 @@ static void tpm_passthrough_deliver_request(TPMBackend *tb)
+ {
+     TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
+ 
+-    /* TPM considered busy once TPM Request scheduled for processing */
++    /* TPM considered busy once TPM request scheduled for processing */
+     qemu_mutex_lock(&tpm_pt->state_lock);
+     tpm_pt->tpm_busy = true;
+     qemu_mutex_unlock(&tpm_pt->state_lock);
+@@ -601,6 +640,25 @@ static int tpm_passthrough_open_sysfs_cancel(TPMBackend *tb)
+     return fd;
+ }
+ 
++static void tpm_passthrough_block_migration(TPMPassthruState *tpm_pt)
++{
++    ptm_cap caps;
++
++    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
++        caps = PTM_CAP_GET_STATEBLOB | PTM_CAP_SET_STATEBLOB |
++               PTM_CAP_STOP;
++        if (!TPM_CUSE_IMPLEMENTS_ALL(tpm_pt, caps)) {
++            error_setg(&tpm_pt->migration_blocker,
++                       "Migration disabled: CUSE TPM lacks necessary capabilities");
++            migrate_add_blocker(tpm_pt->migration_blocker);
++        }
++    } else {
++        error_setg(&tpm_pt->migration_blocker,
++                   "Migration disabled: Passthrough TPM does not support migration");
++        migrate_add_blocker(tpm_pt->migration_blocker);
++    }
++}
++
+ static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
+ {
+     TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
+@@ -642,7 +700,7 @@ static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
+             goto err_close_tpmdev;
+         }
+         /* init TPM for probing */
+-        if (tpm_passthrough_cuse_init(tpm_pt)) {
++        if (tpm_passthrough_cuse_init(tpm_pt, false)) {
+             goto err_close_tpmdev;
+         }
+     }
+@@ -659,6 +717,7 @@ static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
+         }
+     }
+ 
++    tpm_passthrough_block_migration(tpm_pt);
+ 
+     return 0;
+ 
+@@ -766,10 +825,13 @@ static void tpm_passthrough_inst_init(Object *obj)
+ 
+     qemu_mutex_init(&tpm_pt->state_lock);
+     qemu_cond_init(&tpm_pt->cmd_complete);
++
++    vmstate_register(NULL, -1, &vmstate_tpm_cuse, obj);
+ }
+ 
+ static void tpm_passthrough_inst_finalize(Object *obj)
+ {
++    vmstate_unregister(NULL, &vmstate_tpm_cuse, obj);
+ }
+ 
+ static void tpm_passthrough_class_init(ObjectClass *klass, void *data)
+@@ -802,6 +864,60 @@ static const char *tpm_passthrough_cuse_create_desc(void)
+     return "CUSE TPM backend driver";
+ }
+ 
++static void tpm_cuse_pre_save(void *opaque)
++{
++    TPMPassthruState *tpm_pt = opaque;
++    TPMBackend *tb = &tpm_pt->parent;
++
++     qemu_mutex_lock(&tpm_pt->state_lock);
++     /* wait for TPM to finish processing */
++     if (tpm_pt->tpm_busy) {
++        qemu_cond_wait(&tpm_pt->cmd_complete, &tpm_pt->state_lock);
++     }
++     qemu_mutex_unlock(&tpm_pt->state_lock);
++
++    /* get the decrypted state blobs from the TPM */
++    tpm_cuse_get_state_blobs(tb, TRUE, &tpm_pt->tpm_blobs);
++}
++
++static int tpm_cuse_post_load(void *opaque,
++                              int version_id __attribute__((unused)))
++{
++    TPMPassthruState *tpm_pt = opaque;
++    TPMBackend *tb = &tpm_pt->parent;
++
++    return tpm_cuse_set_state_blobs(tb, &tpm_pt->tpm_blobs);
++}
++
++static const VMStateDescription vmstate_tpm_cuse = {
++    .name = "cuse-tpm",
++    .version_id = 1,
++    .minimum_version_id = 0,
++    .minimum_version_id_old = 0,
++    .pre_save  = tpm_cuse_pre_save,
++    .post_load = tpm_cuse_post_load,
++    .fields = (VMStateField[]) {
++        VMSTATE_UINT32(tpm_blobs.permanent_flags, TPMPassthruState),
++        VMSTATE_UINT32(tpm_blobs.permanent.size, TPMPassthruState),
++        VMSTATE_VBUFFER_ALLOC_UINT32(tpm_blobs.permanent.buffer,
++                                     TPMPassthruState, 1, NULL,
++                                     tpm_blobs.permanent.size),
++
++        VMSTATE_UINT32(tpm_blobs.volatil_flags, TPMPassthruState),
++        VMSTATE_UINT32(tpm_blobs.volatil.size, TPMPassthruState),
++        VMSTATE_VBUFFER_ALLOC_UINT32(tpm_blobs.volatil.buffer,
++                                     TPMPassthruState, 1, NULL,
++                                     tpm_blobs.volatil.size),
++
++        VMSTATE_UINT32(tpm_blobs.savestate_flags, TPMPassthruState),
++        VMSTATE_UINT32(tpm_blobs.savestate.size, TPMPassthruState),
++        VMSTATE_VBUFFER_ALLOC_UINT32(tpm_blobs.savestate.buffer,
++                                     TPMPassthruState, 1, NULL,
++                                     tpm_blobs.savestate.size),
++        VMSTATE_END_OF_LIST()
++    }
++};
++
+ static const TPMDriverOps tpm_cuse_driver = {
+     .type                     = TPM_TYPE_CUSE_TPM,
+     .opts                     = tpm_passthrough_cmdline_opts,
+diff --git a/hw/tpm/tpm_tis.c b/hw/tpm/tpm_tis.c
+index 14d9e83ea2..9b660cf737 100644
+--- a/hw/tpm/tpm_tis.c
++++ b/hw/tpm/tpm_tis.c
+@@ -368,6 +368,8 @@ static void tpm_tis_receive_bh(void *opaque)
+     TPMTISEmuState *tis = &s->s.tis;
+     uint8_t locty = s->locty_number;
+ 
++    tis->bh_scheduled = false;
++
+     qemu_mutex_lock(&s->state_lock);
+ 
+     tpm_tis_sts_set(&tis->loc[locty],
+@@ -415,6 +417,8 @@ static void tpm_tis_receive_cb(TPMState *s, uint8_t locty,
+     qemu_mutex_unlock(&s->state_lock);
+ 
+     qemu_bh_schedule(tis->bh);
++
++    tis->bh_scheduled = true;
+ }
+ 
+ /*
+@@ -1030,9 +1034,140 @@ static void tpm_tis_reset(DeviceState *dev)
+     tpm_tis_do_startup_tpm(s);
+ }
+ 
++
++/* persistent state handling */
++
++static void tpm_tis_pre_save(void *opaque)
++{
++    TPMState *s = opaque;
++    TPMTISEmuState *tis = &s->s.tis;
++    uint8_t locty = tis->active_locty;
++
++    DPRINTF("tpm_tis: suspend: locty = %d : r_offset = %d, w_offset = %d\n",
++            locty, tis->loc[0].r_offset, tis->loc[0].w_offset);
++#ifdef DEBUG_TIS
++    tpm_tis_dump_state(opaque, 0);
++#endif
++
++    qemu_mutex_lock(&s->state_lock);
++
++    /* wait for outstanding request to complete */
++    if (TPM_TIS_IS_VALID_LOCTY(locty) &&
++        tis->loc[locty].state == TPM_TIS_STATE_EXECUTION) {
++        /*
++         * If we get here when the bh is scheduled but did not run,
++         * we won't get notified...
++         */
++        if (!tis->bh_scheduled) {
++            /* backend thread to notify us */
++            qemu_cond_wait(&s->cmd_complete, &s->state_lock);
++        }
++        if (tis->loc[locty].state == TPM_TIS_STATE_EXECUTION) {
++            /* bottom half did not run - run its function */
++            qemu_mutex_unlock(&s->state_lock);
++            tpm_tis_receive_bh(opaque);
++            qemu_mutex_lock(&s->state_lock);
++        }
++    }
++
++    qemu_mutex_unlock(&s->state_lock);
++
++    /* copy current active read or write buffer into the buffer
++       written to disk */
++    if (TPM_TIS_IS_VALID_LOCTY(locty)) {
++        switch (tis->loc[locty].state) {
++        case TPM_TIS_STATE_RECEPTION:
++            memcpy(tis->buf,
++                   tis->loc[locty].w_buffer.buffer,
++                   MIN(sizeof(tis->buf),
++                       tis->loc[locty].w_buffer.size));
++            tis->offset = tis->loc[locty].w_offset;
++        break;
++        case TPM_TIS_STATE_COMPLETION:
++            memcpy(tis->buf,
++                   tis->loc[locty].r_buffer.buffer,
++                   MIN(sizeof(tis->buf),
++                       tis->loc[locty].r_buffer.size));
++            tis->offset = tis->loc[locty].r_offset;
++        break;
++        default:
++            /* leak nothing */
++            memset(tis->buf, 0x0, sizeof(tis->buf));
++        break;
++        }
++    }
++}
++
++static int tpm_tis_post_load(void *opaque,
++                             int version_id __attribute__((unused)))
++{
++    TPMState *s = opaque;
++    TPMTISEmuState *tis = &s->s.tis;
++
++    uint8_t locty = tis->active_locty;
++
++    if (TPM_TIS_IS_VALID_LOCTY(locty)) {
++        switch (tis->loc[locty].state) {
++        case TPM_TIS_STATE_RECEPTION:
++            memcpy(tis->loc[locty].w_buffer.buffer,
++                   tis->buf,
++                   MIN(sizeof(tis->buf),
++                       tis->loc[locty].w_buffer.size));
++            tis->loc[locty].w_offset = tis->offset;
++        break;
++        case TPM_TIS_STATE_COMPLETION:
++            memcpy(tis->loc[locty].r_buffer.buffer,
++                   tis->buf,
++                   MIN(sizeof(tis->buf),
++                       tis->loc[locty].r_buffer.size));
++            tis->loc[locty].r_offset = tis->offset;
++        break;
++        default:
++        break;
++        }
++    }
++
++    DPRINTF("tpm_tis: resume : locty = %d : r_offset = %d, w_offset = %d\n",
++            locty, tis->loc[0].r_offset, tis->loc[0].w_offset);
++
++    return 0;
++}
++
++static const VMStateDescription vmstate_locty = {
++    .name = "loc",
++    .version_id = 1,
++    .minimum_version_id = 0,
++    .minimum_version_id_old = 0,
++    .fields      = (VMStateField[]) {
++        VMSTATE_UINT32(state, TPMLocality),
++        VMSTATE_UINT32(inte, TPMLocality),
++        VMSTATE_UINT32(ints, TPMLocality),
++        VMSTATE_UINT8(access, TPMLocality),
++        VMSTATE_UINT32(sts, TPMLocality),
++        VMSTATE_UINT32(iface_id, TPMLocality),
++        VMSTATE_END_OF_LIST(),
++    }
++};
++
+ static const VMStateDescription vmstate_tpm_tis = {
+     .name = "tpm",
+-    .unmigratable = 1,
++    .version_id = 1,
++    .minimum_version_id = 0,
++    .minimum_version_id_old = 0,
++    .pre_save  = tpm_tis_pre_save,
++    .post_load = tpm_tis_post_load,
++    .fields = (VMStateField[]) {
++        VMSTATE_UINT32(s.tis.offset, TPMState),
++        VMSTATE_BUFFER(s.tis.buf, TPMState),
++        VMSTATE_UINT8(s.tis.active_locty, TPMState),
++        VMSTATE_UINT8(s.tis.aborting_locty, TPMState),
++        VMSTATE_UINT8(s.tis.next_locty, TPMState),
++
++        VMSTATE_STRUCT_ARRAY(s.tis.loc, TPMState, TPM_TIS_NUM_LOCALITIES, 1,
++                             vmstate_locty, TPMLocality),
++
++        VMSTATE_END_OF_LIST()
++    }
+ };
+ 
+ static Property tpm_tis_properties[] = {
+diff --git a/hw/tpm/tpm_tis.h b/hw/tpm/tpm_tis.h
+index a1df41fa21..b7fc0ea1a9 100644
+--- a/hw/tpm/tpm_tis.h
++++ b/hw/tpm/tpm_tis.h
+@@ -54,6 +54,8 @@ typedef struct TPMLocality {
+ 
+ typedef struct TPMTISEmuState {
+     QEMUBH *bh;
++    bool bh_scheduled; /* bh scheduled but did not run yet */
++
+     uint32_t offset;
+     uint8_t buf[TPM_TIS_BUFFER_MAX];
+ 
+diff --git a/hw/tpm/tpm_util.c b/hw/tpm/tpm_util.c
+index 7b35429725..b6ff74d946 100644
+--- a/hw/tpm/tpm_util.c
++++ b/hw/tpm/tpm_util.c
+@@ -22,6 +22,17 @@
+ #include "qemu/osdep.h"
+ #include "tpm_util.h"
+ #include "tpm_int.h"
++#include "tpm_ioctl.h"
++#include "qemu/error-report.h"
++
++#define DEBUG_TPM 0
++
++#define DPRINTF(fmt, ...) do { \
++    if (DEBUG_TPM) { \
++        fprintf(stderr, fmt, ## __VA_ARGS__); \
++    } \
++} while (0)
++
+ 
+ /*
+  * A basic test of a TPM device. We expect a well formatted response header
+@@ -125,3 +136,215 @@ int tpm_util_test_tpmdev(int tpm_fd, TPMVersion *tpm_version)
+ 
+     return 1;
+ }
++
++static void tpm_sized_buffer_reset(TPMSizedBuffer *tsb)
++{
++    g_free(tsb->buffer);
++    tsb->buffer = NULL;
++    tsb->size = 0;
++}
++
++/*
++ * Transfer a TPM state blob from the TPM into a provided buffer.
++ *
++ * @fd: file descriptor to talk to the CUSE TPM
++ * @type: the type of blob to transfer
++ * @decrypted_blob: whether we request to receive decrypted blobs
++ * @tsb: the TPMSizeBuffer to fill with the blob
++ * @flags: the flags to return to the caller
++ */
++static int tpm_util_cuse_get_state_blob(int fd,
++                                        uint8_t type,
++                                        bool decrypted_blob,
++                                        TPMSizedBuffer *tsb,
++                                        uint32_t *flags)
++{
++    ptm_getstate pgs;
++    uint16_t offset = 0;
++    ptm_res res;
++    ssize_t n;
++    size_t to_read;
++
++    tpm_sized_buffer_reset(tsb);
++
++    pgs.u.req.state_flags = (decrypted_blob) ? PTM_STATE_FLAG_DECRYPTED : 0;
++    pgs.u.req.type = type;
++    pgs.u.req.offset = offset;
++
++    if (ioctl(fd, PTM_GET_STATEBLOB, &pgs) < 0) {
++        error_report("CUSE TPM PTM_GET_STATEBLOB ioctl failed: %s",
++                     strerror(errno));
++        goto err_exit;
++    }
++    res = pgs.u.resp.tpm_result;
++    if (res != 0 && (res & 0x800) == 0) {
++        error_report("Getting the stateblob (type %d) failed with a TPM "
++                     "error 0x%x", type, res);
++        goto err_exit;
++    }
++
++    *flags = pgs.u.resp.state_flags;
++
++    tsb->buffer = g_malloc(pgs.u.resp.totlength);
++    memcpy(tsb->buffer, pgs.u.resp.data, pgs.u.resp.length);
++    tsb->size = pgs.u.resp.length;
++
++    /* if there are bytes left to get use read() interface */
++    while (tsb->size < pgs.u.resp.totlength) {
++        to_read = pgs.u.resp.totlength - tsb->size;
++        if (unlikely(to_read > SSIZE_MAX)) {
++            to_read = SSIZE_MAX;
++        }
++
++        n = read(fd, &tsb->buffer[tsb->size], to_read);
++        if (n != to_read) {
++            error_report("Could not read stateblob (type %d) : %s",
++                         type, strerror(errno));
++            goto err_exit;
++        }
++        tsb->size += to_read;
++    }
++
++    DPRINTF("tpm_util: got state blob type %d, %d bytes, flags 0x%08x, "
++            "decrypted=%d\n", type, tsb->size, *flags, decrypted_blob);
++
++    return 0;
++
++err_exit:
++    return 1;
++}
++
++int tpm_util_cuse_get_state_blobs(int tpm_fd,
++                                  bool decrypted_blobs,
++                                  TPMBlobBuffers *tpm_blobs)
++{
++    if (tpm_util_cuse_get_state_blob(tpm_fd, PTM_BLOB_TYPE_PERMANENT,
++                                     decrypted_blobs,
++                                     &tpm_blobs->permanent,
++                                     &tpm_blobs->permanent_flags) ||
++       tpm_util_cuse_get_state_blob(tpm_fd, PTM_BLOB_TYPE_VOLATILE,
++                                     decrypted_blobs,
++                                     &tpm_blobs->volatil,
++                                     &tpm_blobs->volatil_flags) ||
++       tpm_util_cuse_get_state_blob(tpm_fd, PTM_BLOB_TYPE_SAVESTATE,
++                                     decrypted_blobs,
++                                     &tpm_blobs->savestate,
++                                     &tpm_blobs->savestate_flags)) {
++        goto err_exit;
++    }
++
++    return 0;
++
++ err_exit:
++    tpm_sized_buffer_reset(&tpm_blobs->volatil);
++    tpm_sized_buffer_reset(&tpm_blobs->permanent);
++    tpm_sized_buffer_reset(&tpm_blobs->savestate);
++
++    return 1;
++}
++
++static int tpm_util_cuse_do_set_stateblob_ioctl(int fd,
++                                                uint32_t flags,
++                                                uint32_t type,
++                                                uint32_t length)
++{
++    ptm_setstate pss;
++
++    pss.u.req.state_flags = flags;
++    pss.u.req.type = type;
++    pss.u.req.length = length;
++
++    if (ioctl(fd, PTM_SET_STATEBLOB, &pss) < 0) {
++        error_report("CUSE TPM PTM_SET_STATEBLOB ioctl failed: %s",
++                     strerror(errno));
++        return 1;
++    }
++
++    if (pss.u.resp.tpm_result != 0) {
++        error_report("Setting the stateblob (type %d) failed with a TPM "
++                     "error 0x%x", type, pss.u.resp.tpm_result);
++        return 1;
++    }
++
++    return 0;
++}
++
++
++/*
++ * Transfer a TPM state blob to the CUSE TPM.
++ *
++ * @fd: file descriptor to talk to the CUSE TPM
++ * @type: the type of TPM state blob to transfer
++ * @tsb: TPMSizeBuffer containing the TPM state blob
++ * @flags: Flags describing the (encryption) state of the TPM state blob
++ */
++static int tpm_util_cuse_set_state_blob(int fd,
++                                        uint32_t type,
++                                        TPMSizedBuffer *tsb,
++                                        uint32_t flags)
++{
++    uint32_t offset = 0;
++    ssize_t n;
++    size_t to_write;
++
++    /* initiate the transfer to the CUSE TPM */
++    if (tpm_util_cuse_do_set_stateblob_ioctl(fd, flags, type, 0)) {
++        return 1;
++    }
++
++    /* use the write() interface for transferring the state blob */
++    while (offset < tsb->size) {
++        to_write = tsb->size - offset;
++        if (unlikely(to_write > SSIZE_MAX)) {
++            to_write = SSIZE_MAX;
++        }
++
++        n = write(fd, &tsb->buffer[offset], to_write);
++        if (n != to_write) {
++            error_report("Writing the stateblob (type %d) failed: %s",
++                         type, strerror(errno));
++            goto err_exit;
++        }
++        offset += to_write;
++    }
++
++    /* indicate that the transfer is finished */
++    if (tpm_util_cuse_do_set_stateblob_ioctl(fd, flags, type, 0)) {
++        goto err_exit;
++    }
++
++    DPRINTF("tpm_util: set the state blob type %d, %d bytes, flags 0x%08x\n",
++            type, tsb->size, flags);
++
++    return 0;
++
++err_exit:
++    return 1;
++}
++
++int tpm_util_cuse_set_state_blobs(int tpm_fd,
++                                  TPMBlobBuffers *tpm_blobs)
++{
++    ptm_res res;
++
++    if (ioctl(tpm_fd, PTM_STOP, &res) < 0) {
++        error_report("tpm_passthrough: Could not stop "
++                     "the CUSE TPM: %s (%i)",
++                     strerror(errno), errno);
++        return 1;
++    }
++
++    if (tpm_util_cuse_set_state_blob(tpm_fd, PTM_BLOB_TYPE_PERMANENT,
++                                     &tpm_blobs->permanent,
++                                     tpm_blobs->permanent_flags) ||
++        tpm_util_cuse_set_state_blob(tpm_fd, PTM_BLOB_TYPE_VOLATILE,
++                                     &tpm_blobs->volatil,
++                                     tpm_blobs->volatil_flags) ||
++        tpm_util_cuse_set_state_blob(tpm_fd, PTM_BLOB_TYPE_SAVESTATE,
++                                     &tpm_blobs->savestate,
++                                     tpm_blobs->savestate_flags)) {
++        return 1;
++    }
++
++    return 0;
++}
+diff --git a/hw/tpm/tpm_util.h b/hw/tpm/tpm_util.h
+index df76245e6e..c24071d812 100644
+--- a/hw/tpm/tpm_util.h
++++ b/hw/tpm/tpm_util.h
+@@ -26,4 +26,11 @@
+ 
+ int tpm_util_test_tpmdev(int tpm_fd, TPMVersion *tpm_version);
+ 
++int tpm_util_cuse_get_state_blobs(int tpm_fd,
++                                  bool decrypted_blobs,
++                                  TPMBlobBuffers *tpm_blobs);
++
++int tpm_util_cuse_set_state_blobs(int tpm_fd,
++                                  TPMBlobBuffers *tpm_blobs);
++
+ #endif /* TPM_TPM_UTIL_H */
+diff --git a/include/sysemu/tpm_backend.h b/include/sysemu/tpm_backend.h
+index b58f52d39f..3403821b9d 100644
+--- a/include/sysemu/tpm_backend.h
++++ b/include/sysemu/tpm_backend.h
+@@ -62,6 +62,18 @@ typedef struct TPMSizedBuffer {
+     uint8_t  *buffer;
+ } TPMSizedBuffer;
+ 
++/* blobs from the TPM; part of VM state when migrating */
++typedef struct TPMBlobBuffers {
++    uint32_t permanent_flags;
++    TPMSizedBuffer permanent;
++
++    uint32_t volatil_flags;
++    TPMSizedBuffer volatil;
++
++    uint32_t savestate_flags;
++    TPMSizedBuffer savestate;
++} TPMBlobBuffers;
++
+ struct TPMDriverOps {
+     enum TpmType type;
+     const QemuOptDesc *opts;
+-- 
+2.11.0
+
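(Aside on the suspend/resume patch above: its pre_save hooks must not snapshot device or backend state while the worker thread is still processing a command, so they block on a condition variable until the busy flag clears. The sketch below is only an illustrative pthread analogue of that pattern -- QEMU itself uses qemu_mutex_lock()/qemu_cond_wait() as shown in the patch, and the names worker_busy, command_done and wait_for_idle are hypothetical.)

/* Illustrative pthread analogue of the pre_save synchronisation added by
 * the patch above; not QEMU code.  Build with -lpthread.  The names
 * worker_busy, command_done and wait_for_idle are hypothetical. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t state_lock   = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cmd_complete = PTHREAD_COND_INITIALIZER;
static bool worker_busy;

/* Worker thread: clear the busy flag and wake any thread waiting to save. */
static void command_done(void)
{
    pthread_mutex_lock(&state_lock);
    worker_busy = false;
    pthread_cond_signal(&cmd_complete);
    pthread_mutex_unlock(&state_lock);
}

/* pre_save path: wait until no command is outstanding before snapshotting. */
static void wait_for_idle(void)
{
    pthread_mutex_lock(&state_lock);
    while (worker_busy)
        pthread_cond_wait(&cmd_complete, &state_lock);
    pthread_mutex_unlock(&state_lock);
}

static void *worker(void *arg)
{
    (void)arg;
    sleep(1);            /* simulate an in-flight command */
    command_done();
    return NULL;
}

int main(void)
{
    pthread_t tid;

    worker_busy = true;                 /* a command is being processed */
    pthread_create(&tid, NULL, worker, NULL);
    wait_for_idle();                    /* what the pre_save hook does */
    puts("idle: safe to snapshot state");
    pthread_join(tid, NULL);
    return 0;
}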
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/0004-Add-support-for-VM-suspend-resume-for-TPM-TIS.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/0004-Add-support-for-VM-suspend-resume-for-TPM-TIS.patch
deleted file mode 100644
index b8a783d..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/0004-Add-support-for-VM-suspend-resume-for-TPM-TIS.patch
+++ /dev/null
@@ -1,719 +0,0 @@
-From 5e9dd9063f514447ea4f54046793f4f01c297ed4 Mon Sep 17 00:00:00 2001
-From: Stefan Berger <stefanb@linux.vnet.ibm.com>
-Date: Sat, 31 Dec 2016 11:23:32 -0500
-Subject: [PATCH 4/4] Add support for VM suspend/resume for TPM TIS
-
-Extend the TPM TIS code to support suspend/resume. In case a command
-is being processed by the external TPM when suspending, wait for the command
-to complete to catch the result. In case the bottom half did not run,
-run the one function the bottom half is supposed to run. This then
-makes the resume operation work.
-
-The passthrough backend does not support suspend/resume operation
-and is therefore blocked from suspend/resume and migration.
-
-The CUSE TPM's supported capabilities are tested and if sufficient
-capabilities are implemented, suspend/resume, snapshotting and
-migration are supported by the CUSE TPM.
-
-Signed-off-by: Stefan Berger <stefanb@linux.vnet.ibm.com>
-
-Upstream-Status: Pending [https://lists.nongnu.org/archive/html/qemu-devel/2016-06/msg00252.html]
-Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
----
- hw/tpm/tpm_passthrough.c     | 130 +++++++++++++++++++++++--
- hw/tpm/tpm_tis.c             | 137 +++++++++++++++++++++++++-
- hw/tpm/tpm_tis.h             |   2 +
- hw/tpm/tpm_util.c            | 223 +++++++++++++++++++++++++++++++++++++++++++
- hw/tpm/tpm_util.h            |   7 ++
- include/sysemu/tpm_backend.h |  12 +++
- 6 files changed, 503 insertions(+), 8 deletions(-)
-
-diff --git a/hw/tpm/tpm_passthrough.c b/hw/tpm/tpm_passthrough.c
-index 44739ebad2..bc8072d0bc 100644
---- a/hw/tpm/tpm_passthrough.c
-+++ b/hw/tpm/tpm_passthrough.c
-@@ -34,6 +34,8 @@
- #include "tpm_tis.h"
- #include "tpm_util.h"
- #include "tpm_ioctl.h"
-+#include "migration/migration.h"
-+#include "qapi/error.h"
- 
- #define DEBUG_TPM 0
- 
-@@ -49,6 +51,7 @@
- #define TYPE_TPM_CUSE "tpm-cuse"
- 
- static const TPMDriverOps tpm_passthrough_driver;
-+static const VMStateDescription vmstate_tpm_cuse;
- 
- /* data structures */
- typedef struct TPMPassthruThreadParams {
-@@ -79,6 +82,10 @@ struct TPMPassthruState {
-     QemuMutex state_lock;
-     QemuCond cmd_complete;  /* singnaled once tpm_busy is false */
-     bool tpm_busy;
-+
-+    Error *migration_blocker;
-+
-+    TPMBlobBuffers tpm_blobs;
- };
- 
- typedef struct TPMPassthruState TPMPassthruState;
-@@ -306,6 +313,10 @@ static void tpm_passthrough_shutdown(TPMPassthruState *tpm_pt)
-                          strerror(errno));
-         }
-     }
-+    if (tpm_pt->migration_blocker) {
-+        migrate_del_blocker(tpm_pt->migration_blocker);
-+        error_free(tpm_pt->migration_blocker);
-+    }
- }
- 
- /*
-@@ -360,12 +371,14 @@ static int tpm_passthrough_cuse_check_caps(TPMPassthruState *tpm_pt)
- /*
-  * Initialize the external CUSE TPM
-  */
--static int tpm_passthrough_cuse_init(TPMPassthruState *tpm_pt)
-+static int tpm_passthrough_cuse_init(TPMPassthruState *tpm_pt,
-+                                     bool is_resume)
- {
-     int rc = 0;
--    ptm_init init = {
--        .u.req.init_flags = PTM_INIT_FLAG_DELETE_VOLATILE,
--    };
-+    ptm_init init;
-+    if (is_resume) {
-+        init.u.req.init_flags = PTM_INIT_FLAG_DELETE_VOLATILE;
-+    }
- 
-     if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
-         if (ioctl(tpm_pt->tpm_fd, PTM_INIT, &init) < 0) {
-@@ -394,7 +407,7 @@ static int tpm_passthrough_startup_tpm(TPMBackend *tb)
-                               tpm_passthrough_worker_thread,
-                               &tpm_pt->tpm_thread_params);
- 
--    tpm_passthrough_cuse_init(tpm_pt);
-+    tpm_passthrough_cuse_init(tpm_pt, false);
- 
-     return 0;
- }
-@@ -466,6 +479,32 @@ static int tpm_passthrough_reset_tpm_established_flag(TPMBackend *tb,
-     return rc;
- }
- 
-+static int tpm_cuse_get_state_blobs(TPMBackend *tb,
-+                                    bool decrypted_blobs,
-+                                    TPMBlobBuffers *tpm_blobs)
-+{
-+    TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
-+
-+    assert(TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt));
-+
-+    return tpm_util_cuse_get_state_blobs(tpm_pt->tpm_fd, decrypted_blobs,
-+                                         tpm_blobs);
-+}
-+
-+static int tpm_cuse_set_state_blobs(TPMBackend *tb,
-+                                    TPMBlobBuffers *tpm_blobs)
-+{
-+    TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
-+
-+    assert(TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt));
-+
-+    if (tpm_util_cuse_set_state_blobs(tpm_pt->tpm_fd, tpm_blobs)) {
-+        return 1;
-+    }
-+
-+    return tpm_passthrough_cuse_init(tpm_pt, true);
-+}
-+
- static bool tpm_passthrough_get_startup_error(TPMBackend *tb)
- {
-     TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
-@@ -488,7 +527,7 @@ static void tpm_passthrough_deliver_request(TPMBackend *tb)
- {
-     TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
- 
--    /* TPM considered busy once TPM Request scheduled for processing */
-+    /* TPM considered busy once TPM request scheduled for processing */
-     qemu_mutex_lock(&tpm_pt->state_lock);
-     tpm_pt->tpm_busy = true;
-     qemu_mutex_unlock(&tpm_pt->state_lock);
-@@ -601,6 +640,25 @@ static int tpm_passthrough_open_sysfs_cancel(TPMBackend *tb)
-     return fd;
- }
- 
-+static void tpm_passthrough_block_migration(TPMPassthruState *tpm_pt)
-+{
-+    ptm_cap caps;
-+
-+    if (TPM_PASSTHROUGH_USES_CUSE_TPM(tpm_pt)) {
-+        caps = PTM_CAP_GET_STATEBLOB | PTM_CAP_SET_STATEBLOB |
-+               PTM_CAP_STOP;
-+        if (!TPM_CUSE_IMPLEMENTS_ALL(tpm_pt, caps)) {
-+            error_setg(&tpm_pt->migration_blocker,
-+                       "Migration disabled: CUSE TPM lacks necessary capabilities");
-+            migrate_add_blocker(tpm_pt->migration_blocker);
-+        }
-+    } else {
-+        error_setg(&tpm_pt->migration_blocker,
-+                   "Migration disabled: Passthrough TPM does not support migration");
-+        migrate_add_blocker(tpm_pt->migration_blocker);
-+    }
-+}
-+
- static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
- {
-     TPMPassthruState *tpm_pt = TPM_PASSTHROUGH(tb);
-@@ -642,7 +700,7 @@ static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
-             goto err_close_tpmdev;
-         }
-         /* init TPM for probing */
--        if (tpm_passthrough_cuse_init(tpm_pt)) {
-+        if (tpm_passthrough_cuse_init(tpm_pt, false)) {
-             goto err_close_tpmdev;
-         }
-     }
-@@ -659,6 +717,7 @@ static int tpm_passthrough_handle_device_opts(QemuOpts *opts, TPMBackend *tb)
-         }
-     }
- 
-+    tpm_passthrough_block_migration(tpm_pt);
- 
-     return 0;
- 
-@@ -766,10 +825,13 @@ static void tpm_passthrough_inst_init(Object *obj)
- 
-     qemu_mutex_init(&tpm_pt->state_lock);
-     qemu_cond_init(&tpm_pt->cmd_complete);
-+
-+    vmstate_register(NULL, -1, &vmstate_tpm_cuse, obj);
- }
- 
- static void tpm_passthrough_inst_finalize(Object *obj)
- {
-+    vmstate_unregister(NULL, &vmstate_tpm_cuse, obj);
- }
- 
- static void tpm_passthrough_class_init(ObjectClass *klass, void *data)
-@@ -802,6 +864,60 @@ static const char *tpm_passthrough_cuse_create_desc(void)
-     return "CUSE TPM backend driver";
- }
- 
-+static void tpm_cuse_pre_save(void *opaque)
-+{
-+    TPMPassthruState *tpm_pt = opaque;
-+    TPMBackend *tb = &tpm_pt->parent;
-+
-+     qemu_mutex_lock(&tpm_pt->state_lock);
-+     /* wait for TPM to finish processing */
-+     if (tpm_pt->tpm_busy) {
-+        qemu_cond_wait(&tpm_pt->cmd_complete, &tpm_pt->state_lock);
-+     }
-+     qemu_mutex_unlock(&tpm_pt->state_lock);
-+
-+    /* get the decrypted state blobs from the TPM */
-+    tpm_cuse_get_state_blobs(tb, TRUE, &tpm_pt->tpm_blobs);
-+}
-+
-+static int tpm_cuse_post_load(void *opaque,
-+                              int version_id __attribute__((unused)))
-+{
-+    TPMPassthruState *tpm_pt = opaque;
-+    TPMBackend *tb = &tpm_pt->parent;
-+
-+    return tpm_cuse_set_state_blobs(tb, &tpm_pt->tpm_blobs);
-+}
-+
-+static const VMStateDescription vmstate_tpm_cuse = {
-+    .name = "cuse-tpm",
-+    .version_id = 1,
-+    .minimum_version_id = 0,
-+    .minimum_version_id_old = 0,
-+    .pre_save  = tpm_cuse_pre_save,
-+    .post_load = tpm_cuse_post_load,
-+    .fields = (VMStateField[]) {
-+        VMSTATE_UINT32(tpm_blobs.permanent_flags, TPMPassthruState),
-+        VMSTATE_UINT32(tpm_blobs.permanent.size, TPMPassthruState),
-+        VMSTATE_VBUFFER_ALLOC_UINT32(tpm_blobs.permanent.buffer,
-+                                     TPMPassthruState, 1, NULL, 0,
-+                                     tpm_blobs.permanent.size),
-+
-+        VMSTATE_UINT32(tpm_blobs.volatil_flags, TPMPassthruState),
-+        VMSTATE_UINT32(tpm_blobs.volatil.size, TPMPassthruState),
-+        VMSTATE_VBUFFER_ALLOC_UINT32(tpm_blobs.volatil.buffer,
-+                                     TPMPassthruState, 1, NULL, 0,
-+                                     tpm_blobs.volatil.size),
-+
-+        VMSTATE_UINT32(tpm_blobs.savestate_flags, TPMPassthruState),
-+        VMSTATE_UINT32(tpm_blobs.savestate.size, TPMPassthruState),
-+        VMSTATE_VBUFFER_ALLOC_UINT32(tpm_blobs.savestate.buffer,
-+                                     TPMPassthruState, 1, NULL, 0,
-+                                     tpm_blobs.savestate.size),
-+        VMSTATE_END_OF_LIST()
-+    }
-+};
-+
- static const TPMDriverOps tpm_cuse_driver = {
-     .type                     = TPM_TYPE_CUSE_TPM,
-     .opts                     = tpm_passthrough_cmdline_opts,
-diff --git a/hw/tpm/tpm_tis.c b/hw/tpm/tpm_tis.c
-index 14d9e83ea2..9b660cf737 100644
---- a/hw/tpm/tpm_tis.c
-+++ b/hw/tpm/tpm_tis.c
-@@ -368,6 +368,8 @@ static void tpm_tis_receive_bh(void *opaque)
-     TPMTISEmuState *tis = &s->s.tis;
-     uint8_t locty = s->locty_number;
- 
-+    tis->bh_scheduled = false;
-+
-     qemu_mutex_lock(&s->state_lock);
- 
-     tpm_tis_sts_set(&tis->loc[locty],
-@@ -415,6 +417,8 @@ static void tpm_tis_receive_cb(TPMState *s, uint8_t locty,
-     qemu_mutex_unlock(&s->state_lock);
- 
-     qemu_bh_schedule(tis->bh);
-+
-+    tis->bh_scheduled = true;
- }
- 
- /*
-@@ -1030,9 +1034,140 @@ static void tpm_tis_reset(DeviceState *dev)
-     tpm_tis_do_startup_tpm(s);
- }
- 
-+
-+/* persistent state handling */
-+
-+static void tpm_tis_pre_save(void *opaque)
-+{
-+    TPMState *s = opaque;
-+    TPMTISEmuState *tis = &s->s.tis;
-+    uint8_t locty = tis->active_locty;
-+
-+    DPRINTF("tpm_tis: suspend: locty = %d : r_offset = %d, w_offset = %d\n",
-+            locty, tis->loc[0].r_offset, tis->loc[0].w_offset);
-+#ifdef DEBUG_TIS
-+    tpm_tis_dump_state(opaque, 0);
-+#endif
-+
-+    qemu_mutex_lock(&s->state_lock);
-+
-+    /* wait for outstanding request to complete */
-+    if (TPM_TIS_IS_VALID_LOCTY(locty) &&
-+        tis->loc[locty].state == TPM_TIS_STATE_EXECUTION) {
-+        /*
-+         * If we get here when the bh is scheduled but did not run,
-+         * we won't get notified...
-+         */
-+        if (!tis->bh_scheduled) {
-+            /* backend thread to notify us */
-+            qemu_cond_wait(&s->cmd_complete, &s->state_lock);
-+        }
-+        if (tis->loc[locty].state == TPM_TIS_STATE_EXECUTION) {
-+            /* bottom half did not run - run its function */
-+            qemu_mutex_unlock(&s->state_lock);
-+            tpm_tis_receive_bh(opaque);
-+            qemu_mutex_lock(&s->state_lock);
-+        }
-+    }
-+
-+    qemu_mutex_unlock(&s->state_lock);
-+
-+    /* copy current active read or write buffer into the buffer
-+       written to disk */
-+    if (TPM_TIS_IS_VALID_LOCTY(locty)) {
-+        switch (tis->loc[locty].state) {
-+        case TPM_TIS_STATE_RECEPTION:
-+            memcpy(tis->buf,
-+                   tis->loc[locty].w_buffer.buffer,
-+                   MIN(sizeof(tis->buf),
-+                       tis->loc[locty].w_buffer.size));
-+            tis->offset = tis->loc[locty].w_offset;
-+        break;
-+        case TPM_TIS_STATE_COMPLETION:
-+            memcpy(tis->buf,
-+                   tis->loc[locty].r_buffer.buffer,
-+                   MIN(sizeof(tis->buf),
-+                       tis->loc[locty].r_buffer.size));
-+            tis->offset = tis->loc[locty].r_offset;
-+        break;
-+        default:
-+            /* leak nothing */
-+            memset(tis->buf, 0x0, sizeof(tis->buf));
-+        break;
-+        }
-+    }
-+}
-+
-+static int tpm_tis_post_load(void *opaque,
-+                             int version_id __attribute__((unused)))
-+{
-+    TPMState *s = opaque;
-+    TPMTISEmuState *tis = &s->s.tis;
-+
-+    uint8_t locty = tis->active_locty;
-+
-+    if (TPM_TIS_IS_VALID_LOCTY(locty)) {
-+        switch (tis->loc[locty].state) {
-+        case TPM_TIS_STATE_RECEPTION:
-+            memcpy(tis->loc[locty].w_buffer.buffer,
-+                   tis->buf,
-+                   MIN(sizeof(tis->buf),
-+                       tis->loc[locty].w_buffer.size));
-+            tis->loc[locty].w_offset = tis->offset;
-+        break;
-+        case TPM_TIS_STATE_COMPLETION:
-+            memcpy(tis->loc[locty].r_buffer.buffer,
-+                   tis->buf,
-+                   MIN(sizeof(tis->buf),
-+                       tis->loc[locty].r_buffer.size));
-+            tis->loc[locty].r_offset = tis->offset;
-+        break;
-+        default:
-+        break;
-+        }
-+    }
-+
-+    DPRINTF("tpm_tis: resume : locty = %d : r_offset = %d, w_offset = %d\n",
-+            locty, tis->loc[0].r_offset, tis->loc[0].w_offset);
-+
-+    return 0;
-+}
-+
-+static const VMStateDescription vmstate_locty = {
-+    .name = "loc",
-+    .version_id = 1,
-+    .minimum_version_id = 0,
-+    .minimum_version_id_old = 0,
-+    .fields      = (VMStateField[]) {
-+        VMSTATE_UINT32(state, TPMLocality),
-+        VMSTATE_UINT32(inte, TPMLocality),
-+        VMSTATE_UINT32(ints, TPMLocality),
-+        VMSTATE_UINT8(access, TPMLocality),
-+        VMSTATE_UINT32(sts, TPMLocality),
-+        VMSTATE_UINT32(iface_id, TPMLocality),
-+        VMSTATE_END_OF_LIST(),
-+    }
-+};
-+
- static const VMStateDescription vmstate_tpm_tis = {
-     .name = "tpm",
--    .unmigratable = 1,
-+    .version_id = 1,
-+    .minimum_version_id = 0,
-+    .minimum_version_id_old = 0,
-+    .pre_save  = tpm_tis_pre_save,
-+    .post_load = tpm_tis_post_load,
-+    .fields = (VMStateField[]) {
-+        VMSTATE_UINT32(s.tis.offset, TPMState),
-+        VMSTATE_BUFFER(s.tis.buf, TPMState),
-+        VMSTATE_UINT8(s.tis.active_locty, TPMState),
-+        VMSTATE_UINT8(s.tis.aborting_locty, TPMState),
-+        VMSTATE_UINT8(s.tis.next_locty, TPMState),
-+
-+        VMSTATE_STRUCT_ARRAY(s.tis.loc, TPMState, TPM_TIS_NUM_LOCALITIES, 1,
-+                             vmstate_locty, TPMLocality),
-+
-+        VMSTATE_END_OF_LIST()
-+    }
- };
- 
- static Property tpm_tis_properties[] = {
-diff --git a/hw/tpm/tpm_tis.h b/hw/tpm/tpm_tis.h
-index a1df41fa21..b7fc0ea1a9 100644
---- a/hw/tpm/tpm_tis.h
-+++ b/hw/tpm/tpm_tis.h
-@@ -54,6 +54,8 @@ typedef struct TPMLocality {
- 
- typedef struct TPMTISEmuState {
-     QEMUBH *bh;
-+    bool bh_scheduled; /* bh scheduled but did not run yet */
-+
-     uint32_t offset;
-     uint8_t buf[TPM_TIS_BUFFER_MAX];
- 
-diff --git a/hw/tpm/tpm_util.c b/hw/tpm/tpm_util.c
-index 7b35429725..b6ff74d946 100644
---- a/hw/tpm/tpm_util.c
-+++ b/hw/tpm/tpm_util.c
-@@ -22,6 +22,17 @@
- #include "qemu/osdep.h"
- #include "tpm_util.h"
- #include "tpm_int.h"
-+#include "tpm_ioctl.h"
-+#include "qemu/error-report.h"
-+
-+#define DEBUG_TPM 0
-+
-+#define DPRINTF(fmt, ...) do { \
-+    if (DEBUG_TPM) { \
-+        fprintf(stderr, fmt, ## __VA_ARGS__); \
-+    } \
-+} while (0)
-+
- 
- /*
-  * A basic test of a TPM device. We expect a well formatted response header
-@@ -125,3 +136,215 @@ int tpm_util_test_tpmdev(int tpm_fd, TPMVersion *tpm_version)
- 
-     return 1;
- }
-+
-+static void tpm_sized_buffer_reset(TPMSizedBuffer *tsb)
-+{
-+    g_free(tsb->buffer);
-+    tsb->buffer = NULL;
-+    tsb->size = 0;
-+}
-+
-+/*
-+ * Transfer a TPM state blob from the TPM into a provided buffer.
-+ *
-+ * @fd: file descriptor to talk to the CUSE TPM
-+ * @type: the type of blob to transfer
-+ * @decrypted_blob: whether we request to receive decrypted blobs
-+ * @tsb: the TPMSizedBuffer to fill with the blob
-+ * @flags: the flags to return to the caller
-+ */
-+static int tpm_util_cuse_get_state_blob(int fd,
-+                                        uint8_t type,
-+                                        bool decrypted_blob,
-+                                        TPMSizedBuffer *tsb,
-+                                        uint32_t *flags)
-+{
-+    ptm_getstate pgs;
-+    uint16_t offset = 0;
-+    ptm_res res;
-+    ssize_t n;
-+    size_t to_read;
-+
-+    tpm_sized_buffer_reset(tsb);
-+
-+    pgs.u.req.state_flags = (decrypted_blob) ? PTM_STATE_FLAG_DECRYPTED : 0;
-+    pgs.u.req.type = type;
-+    pgs.u.req.offset = offset;
-+
-+    if (ioctl(fd, PTM_GET_STATEBLOB, &pgs) < 0) {
-+        error_report("CUSE TPM PTM_GET_STATEBLOB ioctl failed: %s",
-+                     strerror(errno));
-+        goto err_exit;
-+    }
-+    res = pgs.u.resp.tpm_result;
-+    if (res != 0 && (res & 0x800) == 0) {
-+        error_report("Getting the stateblob (type %d) failed with a TPM "
-+                     "error 0x%x", type, res);
-+        goto err_exit;
-+    }
-+
-+    *flags = pgs.u.resp.state_flags;
-+
-+    tsb->buffer = g_malloc(pgs.u.resp.totlength);
-+    memcpy(tsb->buffer, pgs.u.resp.data, pgs.u.resp.length);
-+    tsb->size = pgs.u.resp.length;
-+
-+    /* if there are bytes left to get use read() interface */
-+    while (tsb->size < pgs.u.resp.totlength) {
-+        to_read = pgs.u.resp.totlength - tsb->size;
-+        if (unlikely(to_read > SSIZE_MAX)) {
-+            to_read = SSIZE_MAX;
-+        }
-+
-+        n = read(fd, &tsb->buffer[tsb->size], to_read);
-+        if (n != to_read) {
-+            error_report("Could not read stateblob (type %d) : %s",
-+                         type, strerror(errno));
-+            goto err_exit;
-+        }
-+        tsb->size += to_read;
-+    }
-+
-+    DPRINTF("tpm_util: got state blob type %d, %d bytes, flags 0x%08x, "
-+            "decrypted=%d\n", type, tsb->size, *flags, decrypted_blob);
-+
-+    return 0;
-+
-+err_exit:
-+    return 1;
-+}
-+
-+int tpm_util_cuse_get_state_blobs(int tpm_fd,
-+                                  bool decrypted_blobs,
-+                                  TPMBlobBuffers *tpm_blobs)
-+{
-+    if (tpm_util_cuse_get_state_blob(tpm_fd, PTM_BLOB_TYPE_PERMANENT,
-+                                     decrypted_blobs,
-+                                     &tpm_blobs->permanent,
-+                                     &tpm_blobs->permanent_flags) ||
-+       tpm_util_cuse_get_state_blob(tpm_fd, PTM_BLOB_TYPE_VOLATILE,
-+                                     decrypted_blobs,
-+                                     &tpm_blobs->volatil,
-+                                     &tpm_blobs->volatil_flags) ||
-+       tpm_util_cuse_get_state_blob(tpm_fd, PTM_BLOB_TYPE_SAVESTATE,
-+                                     decrypted_blobs,
-+                                     &tpm_blobs->savestate,
-+                                     &tpm_blobs->savestate_flags)) {
-+        goto err_exit;
-+    }
-+
-+    return 0;
-+
-+ err_exit:
-+    tpm_sized_buffer_reset(&tpm_blobs->volatil);
-+    tpm_sized_buffer_reset(&tpm_blobs->permanent);
-+    tpm_sized_buffer_reset(&tpm_blobs->savestate);
-+
-+    return 1;
-+}
-+
-+static int tpm_util_cuse_do_set_stateblob_ioctl(int fd,
-+                                                uint32_t flags,
-+                                                uint32_t type,
-+                                                uint32_t length)
-+{
-+    ptm_setstate pss;
-+
-+    pss.u.req.state_flags = flags;
-+    pss.u.req.type = type;
-+    pss.u.req.length = length;
-+
-+    if (ioctl(fd, PTM_SET_STATEBLOB, &pss) < 0) {
-+        error_report("CUSE TPM PTM_SET_STATEBLOB ioctl failed: %s",
-+                     strerror(errno));
-+        return 1;
-+    }
-+
-+    if (pss.u.resp.tpm_result != 0) {
-+        error_report("Setting the stateblob (type %d) failed with a TPM "
-+                     "error 0x%x", type, pss.u.resp.tpm_result);
-+        return 1;
-+    }
-+
-+    return 0;
-+}
-+
-+
-+/*
-+ * Transfer a TPM state blob to the CUSE TPM.
-+ *
-+ * @fd: file descriptor to talk to the CUSE TPM
-+ * @type: the type of TPM state blob to transfer
-+ * @tsb: TPMSizedBuffer containing the TPM state blob
-+ * @flags: Flags describing the (encryption) state of the TPM state blob
-+ */
-+static int tpm_util_cuse_set_state_blob(int fd,
-+                                        uint32_t type,
-+                                        TPMSizedBuffer *tsb,
-+                                        uint32_t flags)
-+{
-+    uint32_t offset = 0;
-+    ssize_t n;
-+    size_t to_write;
-+
-+    /* initiate the transfer to the CUSE TPM */
-+    if (tpm_util_cuse_do_set_stateblob_ioctl(fd, flags, type, 0)) {
-+        return 1;
-+    }
-+
-+    /* use the write() interface for transferring the state blob */
-+    while (offset < tsb->size) {
-+        to_write = tsb->size - offset;
-+        if (unlikely(to_write > SSIZE_MAX)) {
-+            to_write = SSIZE_MAX;
-+        }
-+
-+        n = write(fd, &tsb->buffer[offset], to_write);
-+        if (n != to_write) {
-+            error_report("Writing the stateblob (type %d) failed: %s",
-+                         type, strerror(errno));
-+            goto err_exit;
-+        }
-+        offset += to_write;
-+    }
-+
-+    /* indicate that the transfer is finished */
-+    if (tpm_util_cuse_do_set_stateblob_ioctl(fd, flags, type, 0)) {
-+        goto err_exit;
-+    }
-+
-+    DPRINTF("tpm_util: set the state blob type %d, %d bytes, flags 0x%08x\n",
-+            type, tsb->size, flags);
-+
-+    return 0;
-+
-+err_exit:
-+    return 1;
-+}
-+
-+int tpm_util_cuse_set_state_blobs(int tpm_fd,
-+                                  TPMBlobBuffers *tpm_blobs)
-+{
-+    ptm_res res;
-+
-+    if (ioctl(tpm_fd, PTM_STOP, &res) < 0) {
-+        error_report("tpm_passthrough: Could not stop "
-+                     "the CUSE TPM: %s (%i)",
-+                     strerror(errno), errno);
-+        return 1;
-+    }
-+
-+    if (tpm_util_cuse_set_state_blob(tpm_fd, PTM_BLOB_TYPE_PERMANENT,
-+                                     &tpm_blobs->permanent,
-+                                     tpm_blobs->permanent_flags) ||
-+        tpm_util_cuse_set_state_blob(tpm_fd, PTM_BLOB_TYPE_VOLATILE,
-+                                     &tpm_blobs->volatil,
-+                                     tpm_blobs->volatil_flags) ||
-+        tpm_util_cuse_set_state_blob(tpm_fd, PTM_BLOB_TYPE_SAVESTATE,
-+                                     &tpm_blobs->savestate,
-+                                     tpm_blobs->savestate_flags)) {
-+        return 1;
-+    }
-+
-+    return 0;
-+}
-diff --git a/hw/tpm/tpm_util.h b/hw/tpm/tpm_util.h
-index df76245e6e..c24071d812 100644
---- a/hw/tpm/tpm_util.h
-+++ b/hw/tpm/tpm_util.h
-@@ -26,4 +26,11 @@
- 
- int tpm_util_test_tpmdev(int tpm_fd, TPMVersion *tpm_version);
- 
-+int tpm_util_cuse_get_state_blobs(int tpm_fd,
-+                                  bool decrypted_blobs,
-+                                  TPMBlobBuffers *tpm_blobs);
-+
-+int tpm_util_cuse_set_state_blobs(int tpm_fd,
-+                                  TPMBlobBuffers *tpm_blobs);
-+
- #endif /* TPM_TPM_UTIL_H */
-diff --git a/include/sysemu/tpm_backend.h b/include/sysemu/tpm_backend.h
-index b58f52d39f..3403821b9d 100644
---- a/include/sysemu/tpm_backend.h
-+++ b/include/sysemu/tpm_backend.h
-@@ -62,6 +62,18 @@ typedef struct TPMSizedBuffer {
-     uint8_t  *buffer;
- } TPMSizedBuffer;
- 
-+/* blobs from the TPM; part of VM state when migrating */
-+typedef struct TPMBlobBuffers {
-+    uint32_t permanent_flags;
-+    TPMSizedBuffer permanent;
-+
-+    uint32_t volatil_flags;
-+    TPMSizedBuffer volatil;
-+
-+    uint32_t savestate_flags;
-+    TPMSizedBuffer savestate;
-+} TPMBlobBuffers;
-+
- struct TPMDriverOps {
-     enum TpmType type;
-     const QemuOptDesc *opts;
--- 
-2.11.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/04b33e21866412689f18b7ad6daf0a54d8f959a7.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/04b33e21866412689f18b7ad6daf0a54d8f959a7.patch
deleted file mode 100644
index d947e8c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/04b33e21866412689f18b7ad6daf0a54d8f959a7.patch
+++ /dev/null
@@ -1,282 +0,0 @@
-From 04b33e21866412689f18b7ad6daf0a54d8f959a7 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Wed, 28 Jun 2017 13:44:52 -0700
-Subject: [PATCH] Replace 'struct ucontext' with 'ucontext_t' type
-
-glibc used to have:
-
-   typedef struct ucontext { ... } ucontext_t;
-
-glibc now has:
-
-   typedef struct ucontext_t { ... } ucontext_t;
-
-(See https://sourceware.org/bugzilla/show_bug.cgi?id=21457
- for detail and rationale for the glibc change)
-
-However, QEMU used "struct ucontext" in declarations. This is a
-private name and compatibility cannot be guaranteed. Switch to
-only using the standardized type name.
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Message-id: 20170628204452.41230-1-raj.khem@gmail.com
-Cc: Kamil Rytarowski <kamil@netbsd.org>
-Cc: Riku Voipio <riku.voipio@iki.fi>
-Cc: Laurent Vivier <laurent@vivier.eu>
-Cc: Paolo Bonzini <pbonzini@redhat.com>
-Reviewed-by: Eric Blake <eblake@redhat.com>
-[PMM: Rewrote commit message, based mostly on the one from
- Nathaniel McCallum]
-Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
-
-Upstream-Status: Backport
-RP 2017/9/6
----
- linux-user/host/aarch64/hostdep.h |  2 +-
- linux-user/host/arm/hostdep.h     |  2 +-
- linux-user/host/i386/hostdep.h    |  2 +-
- linux-user/host/ppc64/hostdep.h   |  2 +-
- linux-user/host/s390x/hostdep.h   |  2 +-
- linux-user/host/x86_64/hostdep.h  |  2 +-
- linux-user/signal.c               | 10 +++++-----
- tests/tcg/test-i386.c             |  4 ++--
- user-exec.c                       | 18 +++++++++---------
- 9 files changed, 22 insertions(+), 22 deletions(-)
-
-diff --git a/linux-user/host/aarch64/hostdep.h b/linux-user/host/aarch64/hostdep.h
-index 64f75ce..a8d41a2 100644
---- a/linux-user/host/aarch64/hostdep.h
-+++ b/linux-user/host/aarch64/hostdep.h
-@@ -24,7 +24,7 @@ extern char safe_syscall_end[];
- /* Adjust the signal context to rewind out of safe-syscall if we're in it */
- static inline void rewind_if_in_safe_syscall(void *puc)
- {
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
-     __u64 *pcreg = &uc->uc_mcontext.pc;
- 
-     if (*pcreg > (uintptr_t)safe_syscall_start
-diff --git a/linux-user/host/arm/hostdep.h b/linux-user/host/arm/hostdep.h
-index 5c1ae60..9276fe6 100644
---- a/linux-user/host/arm/hostdep.h
-+++ b/linux-user/host/arm/hostdep.h
-@@ -24,7 +24,7 @@ extern char safe_syscall_end[];
- /* Adjust the signal context to rewind out of safe-syscall if we're in it */
- static inline void rewind_if_in_safe_syscall(void *puc)
- {
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
-     unsigned long *pcreg = &uc->uc_mcontext.arm_pc;
- 
-     if (*pcreg > (uintptr_t)safe_syscall_start
-diff --git a/linux-user/host/i386/hostdep.h b/linux-user/host/i386/hostdep.h
-index d834bd8..073be74 100644
---- a/linux-user/host/i386/hostdep.h
-+++ b/linux-user/host/i386/hostdep.h
-@@ -24,7 +24,7 @@ extern char safe_syscall_end[];
- /* Adjust the signal context to rewind out of safe-syscall if we're in it */
- static inline void rewind_if_in_safe_syscall(void *puc)
- {
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
-     greg_t *pcreg = &uc->uc_mcontext.gregs[REG_EIP];
- 
-     if (*pcreg > (uintptr_t)safe_syscall_start
-diff --git a/linux-user/host/ppc64/hostdep.h b/linux-user/host/ppc64/hostdep.h
-index 0b0f5f7..98979ad 100644
---- a/linux-user/host/ppc64/hostdep.h
-+++ b/linux-user/host/ppc64/hostdep.h
-@@ -24,7 +24,7 @@ extern char safe_syscall_end[];
- /* Adjust the signal context to rewind out of safe-syscall if we're in it */
- static inline void rewind_if_in_safe_syscall(void *puc)
- {
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
-     unsigned long *pcreg = &uc->uc_mcontext.gp_regs[PT_NIP];
- 
-     if (*pcreg > (uintptr_t)safe_syscall_start
-diff --git a/linux-user/host/s390x/hostdep.h b/linux-user/host/s390x/hostdep.h
-index 6f9da9c..4f0171f 100644
---- a/linux-user/host/s390x/hostdep.h
-+++ b/linux-user/host/s390x/hostdep.h
-@@ -24,7 +24,7 @@ extern char safe_syscall_end[];
- /* Adjust the signal context to rewind out of safe-syscall if we're in it */
- static inline void rewind_if_in_safe_syscall(void *puc)
- {
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
-     unsigned long *pcreg = &uc->uc_mcontext.psw.addr;
- 
-     if (*pcreg > (uintptr_t)safe_syscall_start
-diff --git a/linux-user/host/x86_64/hostdep.h b/linux-user/host/x86_64/hostdep.h
-index 3b42596..a4fefb5 100644
---- a/linux-user/host/x86_64/hostdep.h
-+++ b/linux-user/host/x86_64/hostdep.h
-@@ -24,7 +24,7 @@ extern char safe_syscall_end[];
- /* Adjust the signal context to rewind out of safe-syscall if we're in it */
- static inline void rewind_if_in_safe_syscall(void *puc)
- {
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
-     greg_t *pcreg = &uc->uc_mcontext.gregs[REG_RIP];
- 
-     if (*pcreg > (uintptr_t)safe_syscall_start
-diff --git a/linux-user/signal.c b/linux-user/signal.c
-index d68bd26..cc0c3fc 100644
---- a/linux-user/signal.c
-+++ b/linux-user/signal.c
-@@ -3346,7 +3346,7 @@ static void setup_rt_frame(int sig, struct target_sigaction *ka,
-     *
-     *   a0 = signal number
-     *   a1 = pointer to siginfo_t
--    *   a2 = pointer to struct ucontext
-+    *   a2 = pointer to ucontext_t
-     *
-     * $25 and PC point to the signal handler, $29 points to the
-     * struct sigframe.
-@@ -3764,7 +3764,7 @@ struct target_signal_frame {
- 
- struct rt_signal_frame {
-     siginfo_t info;
--    struct ucontext uc;
-+    ucontext_t uc;
-     uint32_t tramp[2];
- };
- 
-@@ -3980,7 +3980,7 @@ struct rt_signal_frame {
-     siginfo_t *pinfo;
-     void *puc;
-     siginfo_t info;
--    struct ucontext uc;
-+    ucontext_t uc;
-     uint16_t retcode[4];      /* Trampoline code. */
- };
- 
-@@ -4515,7 +4515,7 @@ static void setup_rt_frame(int sig, struct target_sigaction *ka,
-         tswap_siginfo(&frame->info, info);
-     }
- 
--    /*err |= __clear_user(&frame->uc, offsetof(struct ucontext, uc_mcontext));*/
-+    /*err |= __clear_user(&frame->uc, offsetof(ucontext_t, uc_mcontext));*/
-     __put_user(0, &frame->uc.tuc_flags);
-     __put_user(0, &frame->uc.tuc_link);
-     __put_user(target_sigaltstack_used.ss_sp,
-@@ -5007,7 +5007,7 @@ enum {
- 
- struct target_ucontext {
-     target_ulong tuc_flags;
--    target_ulong tuc_link;    /* struct ucontext __user * */
-+    target_ulong tuc_link;    /* ucontext_t __user * */
-     struct target_sigaltstack tuc_stack;
- #if !defined(TARGET_PPC64)
-     int32_t tuc_pad[7];
-diff --git a/tests/tcg/test-i386.c b/tests/tcg/test-i386.c
-index 0f7b943..9599204 100644
---- a/tests/tcg/test-i386.c
-+++ b/tests/tcg/test-i386.c
-@@ -1720,7 +1720,7 @@ int tab[2];
- 
- void sig_handler(int sig, siginfo_t *info, void *puc)
- {
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
- 
-     printf("si_signo=%d si_errno=%d si_code=%d",
-            info->si_signo, info->si_errno, info->si_code);
-@@ -1912,7 +1912,7 @@ void test_exceptions(void)
- /* specific precise single step test */
- void sig_trap_handler(int sig, siginfo_t *info, void *puc)
- {
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
-     printf("EIP=" FMTLX "\n", (long)uc->uc_mcontext.gregs[REG_EIP]);
- }
- 
-diff --git a/user-exec.c b/user-exec.c
-index a8f95fa..2a975ea 100644
---- a/user-exec.c
-+++ b/user-exec.c
-@@ -167,7 +167,7 @@ int cpu_signal_handler(int host_signum, void *pinfo,
- #elif defined(__OpenBSD__)
-     struct sigcontext *uc = puc;
- #else
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
- #endif
-     unsigned long pc;
-     int trapno;
-@@ -222,7 +222,7 @@ int cpu_signal_handler(int host_signum, void *pinfo,
- #elif defined(__OpenBSD__)
-     struct sigcontext *uc = puc;
- #else
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
- #endif
- 
-     pc = PC_sig(uc);
-@@ -289,7 +289,7 @@ int cpu_signal_handler(int host_signum, void *pinfo,
- #if defined(__FreeBSD__) || defined(__FreeBSD_kernel__)
-     ucontext_t *uc = puc;
- #else
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
- #endif
-     unsigned long pc;
-     int is_write;
-@@ -316,7 +316,7 @@ int cpu_signal_handler(int host_signum, void *pinfo,
-                            void *puc)
- {
-     siginfo_t *info = pinfo;
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
-     uint32_t *pc = uc->uc_mcontext.sc_pc;
-     uint32_t insn = *pc;
-     int is_write = 0;
-@@ -414,7 +414,7 @@ int cpu_signal_handler(int host_signum, void *pinfo,
- #if defined(__NetBSD__)
-     ucontext_t *uc = puc;
- #else
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
- #endif
-     unsigned long pc;
-     int is_write;
-@@ -441,7 +441,7 @@ int cpu_signal_handler(int host_signum, void *pinfo,
- int cpu_signal_handler(int host_signum, void *pinfo, void *puc)
- {
-     siginfo_t *info = pinfo;
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
-     uintptr_t pc = uc->uc_mcontext.pc;
-     uint32_t insn = *(uint32_t *)pc;
-     bool is_write;
-@@ -474,7 +474,7 @@ int cpu_signal_handler(int host_signum, void *pinfo, void *puc)
- int cpu_signal_handler(int host_signum, void *pinfo, void *puc)
- {
-     siginfo_t *info = pinfo;
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
-     unsigned long ip;
-     int is_write = 0;
- 
-@@ -505,7 +505,7 @@ int cpu_signal_handler(int host_signum, void *pinfo,
-                        void *puc)
- {
-     siginfo_t *info = pinfo;
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
-     unsigned long pc;
-     uint16_t *pinsn;
-     int is_write = 0;
-@@ -558,7 +558,7 @@ int cpu_signal_handler(int host_signum, void *pinfo,
-                        void *puc)
- {
-     siginfo_t *info = pinfo;
--    struct ucontext *uc = puc;
-+    ucontext_t *uc = puc;
-     greg_t pc = uc->uc_mcontext.pc;
-     int is_write;
- 
--- 
-1.8.3.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2016-9908.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2016-9908.patch
deleted file mode 100644
index e0f7a1a..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2016-9908.patch
+++ /dev/null
@@ -1,44 +0,0 @@
-From 7139ccbc907441337b4b59cde2c5b5a54cb5b2cc Mon Sep 17 00:00:00 2001
-From: Sona Sarmadi <sona.sarmadi@enea.com>
-
-virtio-gpu: fix information leak in capset get dispatch
-
-In virgl_cmd_get_capset function, it uses g_malloc to allocate
-a response struct to the guest. As the 'resp' struct hasn't been fully
-initialized, it will leak the 'resp->padding' field to the guest.
-Use g_malloc0 to avoid this.
-
-Signed-off-by: Li Qiang <liqiang6-s@360.cn>
-Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
-Message-id: 58188cae.4a6ec20a.3d2d1.aff2@mx.google.com
-
-[Sona: backported from master to v2.8.0 and resolved conflict]
-
-Reference to upstream patch:
-http://git.qemu-project.org/?p=qemu.git;a=commit;h=85d9d044471f93c48c5c396f7e217b4ef12f69f8
-
-CVE: CVE-2016-9908
-Upstream-Status: Backport
-
-Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
-Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
----
- hw/display/virtio-gpu-3d.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/hw/display/virtio-gpu-3d.c b/hw/display/virtio-gpu-3d.c
-index 23f39de..d98b140 100644
---- a/hw/display/virtio-gpu-3d.c
-+++ b/hw/display/virtio-gpu-3d.c
-@@ -371,7 +371,7 @@ static void virgl_cmd_get_capset(VirtIOGPU *g,
- 
-     virgl_renderer_get_cap_set(gc.capset_id, &max_ver,
-                                &max_size);
--    resp = g_malloc(sizeof(*resp) + max_size);
-+    resp = g_malloc0(sizeof(*resp) + max_size);
- 
-     resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
-     virgl_renderer_fill_caps(gc.capset_id,
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2016-9912.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2016-9912.patch
deleted file mode 100644
index c009ffd..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2016-9912.patch
+++ /dev/null
@@ -1,45 +0,0 @@
-From b8e23926c568f2e963af39028b71c472e3023793 Mon Sep 17 00:00:00 2001
-From: Li Qiang <liq3ea@gmail.com>
-Date: Mon, 28 Nov 2016 21:29:25 -0500
-Subject: [PATCH] virtio-gpu: call cleanup mapping function in resource destroy
-
-If the guest destroys the resource before detaching backing, the 'iov'
-and 'addrs' fields in the resource are not freed, thus leading to a
-memory leak. This patch avoids this.
-
-CVE: CVE-2016-9912
-Upstream-Status: Backport
-
-Signed-off-by: Li Qiang <liq3ea@gmail.com>
-Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
-Message-id: 1480386565-10077-1-git-send-email-liq3ea@gmail.com
-Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
-Signed-off-by: Sona Sarmadi <sona.sarmadi@enea.com>
----
- hw/display/virtio-gpu.c | 3 +++
- 1 file changed, 3 insertions(+)
-
-diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
-index ed2b6d3..6a26258 100644
---- a/hw/display/virtio-gpu.c
-+++ b/hw/display/virtio-gpu.c
-@@ -28,6 +28,8 @@
- static struct virtio_gpu_simple_resource*
- virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
- 
-+static void virtio_gpu_cleanup_mapping(struct virtio_gpu_simple_resource *res);
-+
- #ifdef CONFIG_VIRGL
- #include <virglrenderer.h>
- #define VIRGL(_g, _virgl, _simple, ...)                     \
-@@ -364,6 +366,7 @@ static void virtio_gpu_resource_destroy(VirtIOGPU *g,
-                                         struct virtio_gpu_simple_resource *res)
- {
-     pixman_image_unref(res->image);
-+    virtio_gpu_cleanup_mapping(res);
-     QTAILQ_REMOVE(&g->reslist, res, next);
-     g->hostmem -= res->hostmem;
-     g_free(res);
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2017-13672.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2017-13672.patch
new file mode 100644
index 0000000..ce0b1ee
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2017-13672.patch
@@ -0,0 +1,504 @@
+From 3d90c6254863693a6b13d918d2b8682e08bbc681 Mon Sep 17 00:00:00 2001
+From: Gerd Hoffmann <kraxel@redhat.com>
+Date: Mon, 28 Aug 2017 14:29:06 +0200
+Subject: [PATCH] vga: stop passing pointers to vga_draw_line* functions
+
+Instead pass around the address (aka offset into vga memory).
+Add vga_read_* helper functions which apply vbe_size_mask to
+the address, to make sure the address stays within the valid
+range, similar to the cirrus blitter fixes (commits ffaf857778
+and 026aeffcb4).
+
+Impact:  DoS for privileged guest users.  qemu crashes with
+a segfault, when hitting the guard page after vga memory
+allocation, while reading vga memory for display updates.
+
+Fixes: CVE-2017-13672
+Cc: P J P <ppandit@redhat.com>
+Reported-by: David Buchanan <d@vidbuchanan.co.uk>
+Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
+Message-id: 20170828122906.18993-1-kraxel@redhat.com
+
+Upstream-Status: Backport
+[https://git.qemu.org/?p=qemu.git;a=commit;h=3d90c6254863693a6b13d918d2b8682e08bbc681]
+
+CVE: CVE-2017-13672
+
+Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
+---
+ hw/display/vga-helpers.h | 202 ++++++++++++++++++++++++++---------------------
+ hw/display/vga.c         |   5 +-
+ hw/display/vga_int.h     |   1 +
+ 3 files changed, 114 insertions(+), 94 deletions(-)
+
+diff --git a/hw/display/vga-helpers.h b/hw/display/vga-helpers.h
+index 94f6de2..5a752b3 100644
+--- a/hw/display/vga-helpers.h
++++ b/hw/display/vga-helpers.h
+@@ -95,20 +95,46 @@ static void vga_draw_glyph9(uint8_t *d, int linesize,
+     } while (--h);
+ }
+ 
++static inline uint8_t vga_read_byte(VGACommonState *vga, uint32_t addr)
++{
++    return vga->vram_ptr[addr & vga->vbe_size_mask];
++}
++
++static inline uint16_t vga_read_word_le(VGACommonState *vga, uint32_t addr)
++{
++    uint32_t offset = addr & vga->vbe_size_mask & ~1;
++    uint16_t *ptr = (uint16_t *)(vga->vram_ptr + offset);
++    return lduw_le_p(ptr);
++}
++
++static inline uint16_t vga_read_word_be(VGACommonState *vga, uint32_t addr)
++{
++    uint32_t offset = addr & vga->vbe_size_mask & ~1;
++    uint16_t *ptr = (uint16_t *)(vga->vram_ptr + offset);
++    return lduw_be_p(ptr);
++}
++
++static inline uint32_t vga_read_dword_le(VGACommonState *vga, uint32_t addr)
++{
++    uint32_t offset = addr & vga->vbe_size_mask & ~3;
++    uint32_t *ptr = (uint32_t *)(vga->vram_ptr + offset);
++    return ldl_le_p(ptr);
++}
++
+ /*
+  * 4 color mode
+  */
+-static void vga_draw_line2(VGACommonState *s1, uint8_t *d,
+-                           const uint8_t *s, int width)
++static void vga_draw_line2(VGACommonState *vga, uint8_t *d,
++                           uint32_t addr, int width)
+ {
+     uint32_t plane_mask, *palette, data, v;
+     int x;
+ 
+-    palette = s1->last_palette;
+-    plane_mask = mask16[s1->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
++    palette = vga->last_palette;
++    plane_mask = mask16[vga->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
+     width >>= 3;
+     for(x = 0; x < width; x++) {
+-        data = ((uint32_t *)s)[0];
++        data = vga_read_dword_le(vga, addr);
+         data &= plane_mask;
+         v = expand2[GET_PLANE(data, 0)];
+         v |= expand2[GET_PLANE(data, 2)] << 2;
+@@ -124,7 +150,7 @@ static void vga_draw_line2(VGACommonState *s1, uint8_t *d,
+         ((uint32_t *)d)[6] = palette[(v >> 4) & 0xf];
+         ((uint32_t *)d)[7] = palette[(v >> 0) & 0xf];
+         d += 32;
+-        s += 4;
++        addr += 4;
+     }
+ }
+ 
+@@ -134,17 +160,17 @@ static void vga_draw_line2(VGACommonState *s1, uint8_t *d,
+ /*
+  * 4 color mode, dup2 horizontal
+  */
+-static void vga_draw_line2d2(VGACommonState *s1, uint8_t *d,
+-                             const uint8_t *s, int width)
++static void vga_draw_line2d2(VGACommonState *vga, uint8_t *d,
++                             uint32_t addr, int width)
+ {
+     uint32_t plane_mask, *palette, data, v;
+     int x;
+ 
+-    palette = s1->last_palette;
+-    plane_mask = mask16[s1->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
++    palette = vga->last_palette;
++    plane_mask = mask16[vga->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
+     width >>= 3;
+     for(x = 0; x < width; x++) {
+-        data = ((uint32_t *)s)[0];
++        data = vga_read_dword_le(vga, addr);
+         data &= plane_mask;
+         v = expand2[GET_PLANE(data, 0)];
+         v |= expand2[GET_PLANE(data, 2)] << 2;
+@@ -160,24 +186,24 @@ static void vga_draw_line2d2(VGACommonState *s1, uint8_t *d,
+         PUT_PIXEL2(d, 6, palette[(v >> 4) & 0xf]);
+         PUT_PIXEL2(d, 7, palette[(v >> 0) & 0xf]);
+         d += 64;
+-        s += 4;
++        addr += 4;
+     }
+ }
+ 
+ /*
+  * 16 color mode
+  */
+-static void vga_draw_line4(VGACommonState *s1, uint8_t *d,
+-                           const uint8_t *s, int width)
++static void vga_draw_line4(VGACommonState *vga, uint8_t *d,
++                           uint32_t addr, int width)
+ {
+     uint32_t plane_mask, data, v, *palette;
+     int x;
+ 
+-    palette = s1->last_palette;
+-    plane_mask = mask16[s1->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
++    palette = vga->last_palette;
++    plane_mask = mask16[vga->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
+     width >>= 3;
+     for(x = 0; x < width; x++) {
+-        data = ((uint32_t *)s)[0];
++        data = vga_read_dword_le(vga, addr);
+         data &= plane_mask;
+         v = expand4[GET_PLANE(data, 0)];
+         v |= expand4[GET_PLANE(data, 1)] << 1;
+@@ -192,24 +218,24 @@ static void vga_draw_line4(VGACommonState *s1, uint8_t *d,
+         ((uint32_t *)d)[6] = palette[(v >> 4) & 0xf];
+         ((uint32_t *)d)[7] = palette[(v >> 0) & 0xf];
+         d += 32;
+-        s += 4;
++        addr += 4;
+     }
+ }
+ 
+ /*
+  * 16 color mode, dup2 horizontal
+  */
+-static void vga_draw_line4d2(VGACommonState *s1, uint8_t *d,
+-                             const uint8_t *s, int width)
++static void vga_draw_line4d2(VGACommonState *vga, uint8_t *d,
++                             uint32_t addr, int width)
+ {
+     uint32_t plane_mask, data, v, *palette;
+     int x;
+ 
+-    palette = s1->last_palette;
+-    plane_mask = mask16[s1->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
++    palette = vga->last_palette;
++    plane_mask = mask16[vga->ar[VGA_ATC_PLANE_ENABLE] & 0xf];
+     width >>= 3;
+     for(x = 0; x < width; x++) {
+-        data = ((uint32_t *)s)[0];
++        data = vga_read_dword_le(vga, addr);
+         data &= plane_mask;
+         v = expand4[GET_PLANE(data, 0)];
+         v |= expand4[GET_PLANE(data, 1)] << 1;
+@@ -224,7 +250,7 @@ static void vga_draw_line4d2(VGACommonState *s1, uint8_t *d,
+         PUT_PIXEL2(d, 6, palette[(v >> 4) & 0xf]);
+         PUT_PIXEL2(d, 7, palette[(v >> 0) & 0xf]);
+         d += 64;
+-        s += 4;
++        addr += 4;
+     }
+ }
+ 
+@@ -233,21 +259,21 @@ static void vga_draw_line4d2(VGACommonState *s1, uint8_t *d,
+  *
+  * XXX: add plane_mask support (never used in standard VGA modes)
+  */
+-static void vga_draw_line8d2(VGACommonState *s1, uint8_t *d,
+-                             const uint8_t *s, int width)
++static void vga_draw_line8d2(VGACommonState *vga, uint8_t *d,
++                             uint32_t addr, int width)
+ {
+     uint32_t *palette;
+     int x;
+ 
+-    palette = s1->last_palette;
++    palette = vga->last_palette;
+     width >>= 3;
+     for(x = 0; x < width; x++) {
+-        PUT_PIXEL2(d, 0, palette[s[0]]);
+-        PUT_PIXEL2(d, 1, palette[s[1]]);
+-        PUT_PIXEL2(d, 2, palette[s[2]]);
+-        PUT_PIXEL2(d, 3, palette[s[3]]);
++        PUT_PIXEL2(d, 0, palette[vga_read_byte(vga, addr + 0)]);
++        PUT_PIXEL2(d, 1, palette[vga_read_byte(vga, addr + 1)]);
++        PUT_PIXEL2(d, 2, palette[vga_read_byte(vga, addr + 2)]);
++        PUT_PIXEL2(d, 3, palette[vga_read_byte(vga, addr + 3)]);
+         d += 32;
+-        s += 4;
++        addr += 4;
+     }
+ }
+ 
+@@ -256,63 +282,63 @@ static void vga_draw_line8d2(VGACommonState *s1, uint8_t *d,
+  *
+  * XXX: add plane_mask support (never used in standard VGA modes)
+  */
+-static void vga_draw_line8(VGACommonState *s1, uint8_t *d,
+-                           const uint8_t *s, int width)
++static void vga_draw_line8(VGACommonState *vga, uint8_t *d,
++                           uint32_t addr, int width)
+ {
+     uint32_t *palette;
+     int x;
+ 
+-    palette = s1->last_palette;
++    palette = vga->last_palette;
+     width >>= 3;
+     for(x = 0; x < width; x++) {
+-        ((uint32_t *)d)[0] = palette[s[0]];
+-        ((uint32_t *)d)[1] = palette[s[1]];
+-        ((uint32_t *)d)[2] = palette[s[2]];
+-        ((uint32_t *)d)[3] = palette[s[3]];
+-        ((uint32_t *)d)[4] = palette[s[4]];
+-        ((uint32_t *)d)[5] = palette[s[5]];
+-        ((uint32_t *)d)[6] = palette[s[6]];
+-        ((uint32_t *)d)[7] = palette[s[7]];
++        ((uint32_t *)d)[0] = palette[vga_read_byte(vga, addr + 0)];
++        ((uint32_t *)d)[1] = palette[vga_read_byte(vga, addr + 1)];
++        ((uint32_t *)d)[2] = palette[vga_read_byte(vga, addr + 2)];
++        ((uint32_t *)d)[3] = palette[vga_read_byte(vga, addr + 3)];
++        ((uint32_t *)d)[4] = palette[vga_read_byte(vga, addr + 4)];
++        ((uint32_t *)d)[5] = palette[vga_read_byte(vga, addr + 5)];
++        ((uint32_t *)d)[6] = palette[vga_read_byte(vga, addr + 6)];
++        ((uint32_t *)d)[7] = palette[vga_read_byte(vga, addr + 7)];
+         d += 32;
+-        s += 8;
++        addr += 8;
+     }
+ }
+ 
+ /*
+  * 15 bit color
+  */
+-static void vga_draw_line15_le(VGACommonState *s1, uint8_t *d,
+-                               const uint8_t *s, int width)
++static void vga_draw_line15_le(VGACommonState *vga, uint8_t *d,
++                               uint32_t addr, int width)
+ {
+     int w;
+     uint32_t v, r, g, b;
+ 
+     w = width;
+     do {
+-        v = lduw_le_p((void *)s);
++        v = vga_read_word_le(vga, addr);
+         r = (v >> 7) & 0xf8;
+         g = (v >> 2) & 0xf8;
+         b = (v << 3) & 0xf8;
+         ((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
+-        s += 2;
++        addr += 2;
+         d += 4;
+     } while (--w != 0);
+ }
+ 
+-static void vga_draw_line15_be(VGACommonState *s1, uint8_t *d,
+-                               const uint8_t *s, int width)
++static void vga_draw_line15_be(VGACommonState *vga, uint8_t *d,
++                               uint32_t addr, int width)
+ {
+     int w;
+     uint32_t v, r, g, b;
+ 
+     w = width;
+     do {
+-        v = lduw_be_p((void *)s);
++        v = vga_read_word_be(vga, addr);
+         r = (v >> 7) & 0xf8;
+         g = (v >> 2) & 0xf8;
+         b = (v << 3) & 0xf8;
+         ((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
+-        s += 2;
++        addr += 2;
+         d += 4;
+     } while (--w != 0);
+ }
+@@ -320,38 +346,38 @@ static void vga_draw_line15_be(VGACommonState *s1, uint8_t *d,
+ /*
+  * 16 bit color
+  */
+-static void vga_draw_line16_le(VGACommonState *s1, uint8_t *d,
+-                               const uint8_t *s, int width)
++static void vga_draw_line16_le(VGACommonState *vga, uint8_t *d,
++                               uint32_t addr, int width)
+ {
+     int w;
+     uint32_t v, r, g, b;
+ 
+     w = width;
+     do {
+-        v = lduw_le_p((void *)s);
++        v = vga_read_word_le(vga, addr);
+         r = (v >> 8) & 0xf8;
+         g = (v >> 3) & 0xfc;
+         b = (v << 3) & 0xf8;
+         ((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
+-        s += 2;
++        addr += 2;
+         d += 4;
+     } while (--w != 0);
+ }
+ 
+-static void vga_draw_line16_be(VGACommonState *s1, uint8_t *d,
+-                               const uint8_t *s, int width)
++static void vga_draw_line16_be(VGACommonState *vga, uint8_t *d,
++                               uint32_t addr, int width)
+ {
+     int w;
+     uint32_t v, r, g, b;
+ 
+     w = width;
+     do {
+-        v = lduw_be_p((void *)s);
++        v = vga_read_word_be(vga, addr);
+         r = (v >> 8) & 0xf8;
+         g = (v >> 3) & 0xfc;
+         b = (v << 3) & 0xf8;
+         ((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
+-        s += 2;
++        addr += 2;
+         d += 4;
+     } while (--w != 0);
+ }
+@@ -359,36 +385,36 @@ static void vga_draw_line16_be(VGACommonState *s1, uint8_t *d,
+ /*
+  * 24 bit color
+  */
+-static void vga_draw_line24_le(VGACommonState *s1, uint8_t *d,
+-                               const uint8_t *s, int width)
++static void vga_draw_line24_le(VGACommonState *vga, uint8_t *d,
++                               uint32_t addr, int width)
+ {
+     int w;
+     uint32_t r, g, b;
+ 
+     w = width;
+     do {
+-        b = s[0];
+-        g = s[1];
+-        r = s[2];
++        b = vga_read_byte(vga, addr + 0);
++        g = vga_read_byte(vga, addr + 1);
++        r = vga_read_byte(vga, addr + 2);
+         ((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
+-        s += 3;
++        addr += 3;
+         d += 4;
+     } while (--w != 0);
+ }
+ 
+-static void vga_draw_line24_be(VGACommonState *s1, uint8_t *d,
+-                               const uint8_t *s, int width)
++static void vga_draw_line24_be(VGACommonState *vga, uint8_t *d,
++                               uint32_t addr, int width)
+ {
+     int w;
+     uint32_t r, g, b;
+ 
+     w = width;
+     do {
+-        r = s[0];
+-        g = s[1];
+-        b = s[2];
++        r = vga_read_byte(vga, addr + 0);
++        g = vga_read_byte(vga, addr + 1);
++        b = vga_read_byte(vga, addr + 2);
+         ((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
+-        s += 3;
++        addr += 3;
+         d += 4;
+     } while (--w != 0);
+ }
+@@ -396,44 +422,36 @@ static void vga_draw_line24_be(VGACommonState *s1, uint8_t *d,
+ /*
+  * 32 bit color
+  */
+-static void vga_draw_line32_le(VGACommonState *s1, uint8_t *d,
+-                               const uint8_t *s, int width)
++static void vga_draw_line32_le(VGACommonState *vga, uint8_t *d,
++                               uint32_t addr, int width)
+ {
+-#ifndef HOST_WORDS_BIGENDIAN
+-    memcpy(d, s, width * 4);
+-#else
+     int w;
+     uint32_t r, g, b;
+ 
+     w = width;
+     do {
+-        b = s[0];
+-        g = s[1];
+-        r = s[2];
++        b = vga_read_byte(vga, addr + 0);
++        g = vga_read_byte(vga, addr + 1);
++        r = vga_read_byte(vga, addr + 2);
+         ((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
+-        s += 4;
++        addr += 4;
+         d += 4;
+     } while (--w != 0);
+-#endif
+ }
+ 
+-static void vga_draw_line32_be(VGACommonState *s1, uint8_t *d,
+-                               const uint8_t *s, int width)
++static void vga_draw_line32_be(VGACommonState *vga, uint8_t *d,
++                               uint32_t addr, int width)
+ {
+-#ifdef HOST_WORDS_BIGENDIAN
+-    memcpy(d, s, width * 4);
+-#else
+     int w;
+     uint32_t r, g, b;
+ 
+     w = width;
+     do {
+-        r = s[1];
+-        g = s[2];
+-        b = s[3];
++        r = vga_read_byte(vga, addr + 1);
++        g = vga_read_byte(vga, addr + 2);
++        b = vga_read_byte(vga, addr + 3);
+         ((uint32_t *)d)[0] = rgb_to_pixel32(r, g, b);
+-        s += 4;
++        addr += 4;
+         d += 4;
+     } while (--w != 0);
+-#endif
+ }
+diff --git a/hw/display/vga.c b/hw/display/vga.c
+index ad7a465..6fc8c87 100644
+--- a/hw/display/vga.c
++++ b/hw/display/vga.c
+@@ -1005,7 +1005,7 @@ void vga_mem_writeb(VGACommonState *s, hwaddr addr, uint32_t val)
+ }
+ 
+ typedef void vga_draw_line_func(VGACommonState *s1, uint8_t *d,
+-                                const uint8_t *s, int width);
++                                uint32_t srcaddr, int width);
+ 
+ #include "vga-helpers.h"
+ 
+@@ -1666,7 +1666,7 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
+             if (y_start < 0)
+                 y_start = y;
+             if (!(is_buffer_shared(surface))) {
+-                vga_draw_line(s, d, s->vram_ptr + addr, width);
++                vga_draw_line(s, d, addr, width);
+                 if (s->cursor_draw_line)
+                     s->cursor_draw_line(s, d, y);
+             }
+@@ -2170,6 +2170,7 @@ void vga_common_init(VGACommonState *s, Object *obj, bool global_vmstate)
+     if (!s->vbe_size) {
+         s->vbe_size = s->vram_size;
+     }
++    s->vbe_size_mask = s->vbe_size - 1;
+ 
+     s->is_vbe_vmstate = 1;
+     memory_region_init_ram_nomigrate(&s->vram, obj, "vga.vram", s->vram_size,
+diff --git a/hw/display/vga_int.h b/hw/display/vga_int.h
+index dd6c958..ad34a1f 100644
+--- a/hw/display/vga_int.h
++++ b/hw/display/vga_int.h
+@@ -94,6 +94,7 @@ typedef struct VGACommonState {
+     uint32_t vram_size;
+     uint32_t vram_size_mb; /* property */
+     uint32_t vbe_size;
++    uint32_t vbe_size_mask;
+     uint32_t latch;
+     bool has_chain4_alias;
+     MemoryRegion chain4_alias;
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2017-13673.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2017-13673.patch
new file mode 100644
index 0000000..3d0695f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2017-13673.patch
@@ -0,0 +1,53 @@
+From e65294157d4b69393b3f819c99f4f647452b48e3 Mon Sep 17 00:00:00 2001
+From: Gerd Hoffmann <kraxel@redhat.com>
+Date: Mon, 28 Aug 2017 14:33:07 +0200
+Subject: [PATCH] vga: fix display update region calculation (split screen)
+
+vga display update mis-calculated the region for the dirty bitmap
+snapshot in case split screen mode is used.  This can trigger an
+assert in cpu_physical_memory_snapshot_get_dirty().
+
+Impact:  DoS for privileged guest users.
+
+Fixes: CVE-2017-13673
+Fixes: fec5e8c92becad223df9d972770522f64aafdb72
+Cc: P J P <ppandit@redhat.com>
+Reported-by: David Buchanan <d@vidbuchanan.co.uk>
+Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
+Message-id: 20170828123307.15392-1-kraxel@redhat.com
+
+Upstream-Status: Backport
+[https://git.qemu.org/?p=qemu.git;a=commit;h=e65294157d4b69393b3f819c99f4f647452b48e3]
+
+CVE: CVE-2017-13673
+
+Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
+---
+ hw/display/vga.c | 10 ++++++++--
+ 1 file changed, 8 insertions(+), 2 deletions(-)
+
+diff --git a/hw/display/vga.c b/hw/display/vga.c
+index 3433102..ad7a465 100644
+--- a/hw/display/vga.c
++++ b/hw/display/vga.c
+@@ -1628,9 +1628,15 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
+     y1 = 0;
+ 
+     if (!full_update) {
++        ram_addr_t region_start = addr1;
++        ram_addr_t region_end = addr1 + line_offset * height;
+         vga_sync_dirty_bitmap(s);
+-        snap = memory_region_snapshot_and_clear_dirty(&s->vram, addr1,
+-                                                      line_offset * height,
++        if (s->line_compare < height) {
++            /* split screen mode */
++            region_start = 0;
++        }
++        snap = memory_region_snapshot_and_clear_dirty(&s->vram, region_start,
++                                                      region_end - region_start,
+                                                       DIRTY_MEMORY_VGA);
+     }
+ 
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2017-13711.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2017-13711.patch
new file mode 100644
index 0000000..352f73f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2017-13711.patch
@@ -0,0 +1,87 @@
+From 1201d308519f1e915866d7583d5136d03cc1d384 Mon Sep 17 00:00:00 2001
+From: Samuel Thibault <samuel.thibault@ens-lyon.org>
+Date: Fri, 25 Aug 2017 01:35:53 +0200
+Subject: [PATCH] slirp: fix clearing ifq_so from pending packets
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+The if_fastq and if_batchq contain not only packets, but queues of packets
+for the same socket. When sofree frees a socket, it thus has to clear ifq_so
+from all the packets from the queues, not only the first.
+
+Signed-off-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
+Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
+Cc: qemu-stable@nongnu.org
+Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
+
+Upstream-Status: Backport
+[https://git.qemu.org/?p=qemu.git;a=commit;h=1201d308519f1e915866d7583d5136d03cc1d384]
+
+CVE: CVE-2017-13711
+
+Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
+---
+ slirp/socket.c | 39 +++++++++++++++++++++++----------------
+ 1 file changed, 23 insertions(+), 16 deletions(-)
+
+diff --git a/slirp/socket.c b/slirp/socket.c
+index ecec029..cb7b5b6 100644
+--- a/slirp/socket.c
++++ b/slirp/socket.c
+@@ -60,29 +60,36 @@ socreate(Slirp *slirp)
+ }
+ 
+ /*
++ * Remove references to so from the given message queue.
++ */
++static void
++soqfree(struct socket *so, struct quehead *qh)
++{
++    struct mbuf *ifq;
++
++    for (ifq = (struct mbuf *) qh->qh_link;
++             (struct quehead *) ifq != qh;
++             ifq = ifq->ifq_next) {
++        if (ifq->ifq_so == so) {
++            struct mbuf *ifm;
++            ifq->ifq_so = NULL;
++            for (ifm = ifq->ifs_next; ifm != ifq; ifm = ifm->ifs_next) {
++                ifm->ifq_so = NULL;
++            }
++        }
++    }
++}
++
++/*
+  * remque and free a socket, clobber cache
+  */
+ void
+ sofree(struct socket *so)
+ {
+   Slirp *slirp = so->slirp;
+-  struct mbuf *ifm;
+ 
+-  for (ifm = (struct mbuf *) slirp->if_fastq.qh_link;
+-       (struct quehead *) ifm != &slirp->if_fastq;
+-       ifm = ifm->ifq_next) {
+-    if (ifm->ifq_so == so) {
+-      ifm->ifq_so = NULL;
+-    }
+-  }
+-
+-  for (ifm = (struct mbuf *) slirp->if_batchq.qh_link;
+-       (struct quehead *) ifm != &slirp->if_batchq;
+-       ifm = ifm->ifq_next) {
+-    if (ifm->ifq_so == so) {
+-      ifm->ifq_so = NULL;
+-    }
+-  }
++  soqfree(so, &slirp->if_fastq);
++  soqfree(so, &slirp->if_batchq);
+ 
+   if (so->so_emu==EMU_RSH && so->extra) {
+ 	sofree(so->extra);
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2017-14167.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2017-14167.patch
new file mode 100644
index 0000000..969ad87
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/CVE-2017-14167.patch
@@ -0,0 +1,70 @@
+From ed4f86e8b6eff8e600c69adee68c7cd34dd2cccb Mon Sep 17 00:00:00 2001
+From: Prasad J Pandit <pjp@fedoraproject.org>
+Date: Thu, 7 Sep 2017 12:02:56 +0530
+Subject: [PATCH] multiboot: validate multiboot header address values
+
+While loading kernel via multiboot-v1 image, (flags & 0x00010000)
+indicates that multiboot header contains valid addresses to load
+the kernel image. These addresses are used to compute kernel
+size and kernel text offset in the OS image. Validate these
+address values to avoid an OOB access issue.
+
+This is CVE-2017-14167.
+
+Reported-by: Thomas Garnier <thgarnie@google.com>
+Signed-off-by: Prasad J Pandit <pjp@fedoraproject.org>
+Message-Id: <20170907063256.7418-1-ppandit@redhat.com>
+Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
+
+Upstream-Status: Backport
+[https://git.qemu.org/?p=qemu.git;a=commit;h=ed4f86e8b6eff8e600c69adee68c7cd34dd2cccb]
+
+CVE: CVE-2017-14167
+
+Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
+---
+ hw/i386/multiboot.c | 19 +++++++++++++++++++
+ 1 file changed, 19 insertions(+)
+
+diff --git a/hw/i386/multiboot.c b/hw/i386/multiboot.c
+index 6001f4c..c7b70c9 100644
+--- a/hw/i386/multiboot.c
++++ b/hw/i386/multiboot.c
+@@ -221,15 +221,34 @@ int load_multiboot(FWCfgState *fw_cfg,
+         uint32_t mh_header_addr = ldl_p(header+i+12);
+         uint32_t mh_load_end_addr = ldl_p(header+i+20);
+         uint32_t mh_bss_end_addr = ldl_p(header+i+24);
++
+         mh_load_addr = ldl_p(header+i+16);
++        if (mh_header_addr < mh_load_addr) {
++            fprintf(stderr, "invalid mh_load_addr address\n");
++            exit(1);
++        }
++
+         uint32_t mb_kernel_text_offset = i - (mh_header_addr - mh_load_addr);
+         uint32_t mb_load_size = 0;
+         mh_entry_addr = ldl_p(header+i+28);
+ 
+         if (mh_load_end_addr) {
++            if (mh_bss_end_addr < mh_load_addr) {
++                fprintf(stderr, "invalid mh_bss_end_addr address\n");
++                exit(1);
++            }
+             mb_kernel_size = mh_bss_end_addr - mh_load_addr;
++
++            if (mh_load_end_addr < mh_load_addr) {
++                fprintf(stderr, "invalid mh_load_end_addr address\n");
++                exit(1);
++            }
+             mb_load_size = mh_load_end_addr - mh_load_addr;
+         } else {
++            if (kernel_file_size < mb_kernel_text_offset) {
++                fprintf(stderr, "invalid kernel_file_size\n");
++                exit(1);
++            }
+             mb_kernel_size = kernel_file_size - mb_kernel_text_offset;
+             mb_load_size = mb_kernel_size;
+         }
+-- 
+2.7.4
+
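Side note on why the new checks matter: with 32-bit header fields, an inverted range makes the unsigned subtraction wrap around, so the computed kernel size becomes enormous and a later copy runs out of bounds. A minimal standalone sketch follows — the field names mirror the multiboot header fields parsed in the patch, but the numeric values are made up and this is not QEMU code.

/* overflow_sketch.c - illustrative only; build with: gcc overflow_sketch.c */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t mh_load_addr    = 0x00100000;  /* hypothetical load address */
    uint32_t mh_bss_end_addr = 0x000ff000;  /* bogus header: ends below the load address */

    /* Unchecked: the unsigned subtraction wraps around to roughly 4 GiB,
     * and that bogus size is what later code would be asked to handle. */
    uint32_t mb_kernel_size = mh_bss_end_addr - mh_load_addr;
    printf("unchecked mb_kernel_size = %u bytes\n", mb_kernel_size);

    /* Checked, as the patch does: reject inverted ranges up front. */
    if (mh_bss_end_addr < mh_load_addr) {
        fprintf(stderr, "invalid mh_bss_end_addr address\n");
        return 1;
    }
    return 0;
}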
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/add-ptest-in-makefile-v10.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/add-ptest-in-makefile-v10.patch
new file mode 100644
index 0000000..e963982
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/add-ptest-in-makefile-v10.patch
@@ -0,0 +1,28 @@
+From 4201a5791fc4798a45a9b9f881602d7bacb74ed1 Mon Sep 17 00:00:00 2001
+From: Juro Bystricky <juro.bystricky@intel.com>
+Date: Thu, 31 Aug 2017 11:06:56 -0700
+Subject: Add subpackage -ptest which runs all unit test cases for qemu.
+
+Upstream-Status: Pending
+
+Signed-off-by: Kai Kang <kai.kang@windriver.com>
+
+Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
+
+diff --git a/tests/Makefile.include b/tests/Makefile.include
+index f08b741..3d1b3e9 100644
+--- a/tests/Makefile.include
++++ b/tests/Makefile.include
+@@ -924,4 +924,12 @@ all: $(QEMU_IOTESTS_HELPERS-y)
+ -include $(wildcard tests/*.d)
+ -include $(wildcard tests/libqos/*.d)
+ 
++buildtest-TESTS: $(check-unit-y)
++
++runtest-TESTS:
++	for f in $(check-unit-y); do \
++		nf=$$(echo $$f | sed 's/tests\//\.\//g'); \
++		$$nf; \
++	done
++
+ endif
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/add-ptest-in-makefile.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/add-ptest-in-makefile.patch
deleted file mode 100644
index 2ce3478..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/add-ptest-in-makefile.patch
+++ /dev/null
@@ -1,28 +0,0 @@
-Upstream-Status: Pending
-
-Add subpackage -ptest which runs all unit test cases for qemu.
-
-Signed-off-by: Kai Kang <kai.kang@windriver.com>
----
- tests/Makefile.include | 8 ++++++++
- 1 file changed, 8 insertions(+)
-
-diff --git a/tests/Makefile.include b/tests/Makefile.include
-index 14be491..0fce37a 100644
---- a/tests/Makefile.include
-+++ b/tests/Makefile.include
-@@ -776,3 +776,11 @@ all: $(QEMU_IOTESTS_HELPERS-y)
- 
- -include $(wildcard tests/*.d)
- -include $(wildcard tests/libqos/*.d)
-+
-+buildtest-TESTS: $(check-unit-y)
-+
-+runtest-TESTS:
-+	for f in $(check-unit-y); do \
-+		nf=$$(echo $$f | sed 's/tests\//\.\//g'); \
-+		$$nf; \
-+	done
--- 
-2.9.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/apic-fixup-fallthrough-to-PIC.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/apic-fixup-fallthrough-to-PIC.patch
new file mode 100644
index 0000000..9bbbc6f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/apic-fixup-fallthrough-to-PIC.patch
@@ -0,0 +1,46 @@
+From bef93bb81588b5323a52d2e1886f2a77b64a976b Mon Sep 17 00:00:00 2001
+From: Mark Asselstine <mark.asselstine@windriver.com>
+Date: Tue, 26 Feb 2013 11:43:28 -0500
+Subject: [PATCH 03/18] apic: fixup fallthrough to PIC
+
+Commit 0e21e12bb311c4c1095d0269dc2ef81196ccb60a [Don't route PIC
+interrupts through the local APIC if the local APIC config says so.]
+missed a check to ensure the local APIC is enabled, since if the local
+APIC is disabled it doesn't matter what the local APIC config says.
+
+If this check isn't done and the guest has disabled the local APIC the
+guest will receive a general protection fault, similar to what is seen
+here:
+
+https://lists.gnu.org/archive/html/qemu-devel/2012-12/msg02304.html
+
+The GPF is caused by an attempt to service interrupt 0xffffffff. This
+comes about since cpu_get_pic_interrupt() calls apic_accept_pic_intr()
+(with the local APIC disabled apic_get_interrupt() returns -1).
+apic_accept_pic_intr() returns 0 and thus the interrupt number which
+is returned from cpu_get_pic_interrupt(), and which is attempted to be
+serviced, is -1.
+
+Signed-off-by: Mark Asselstine <mark.asselstine@windriver.com>
+Upstream-Status: Submitted [https://lists.gnu.org/archive/html/qemu-devel/2013-04/msg00878.html]
+Signed-off-by: He Zhe <zhe.he@windriver.com>
+---
+ hw/intc/apic.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/hw/intc/apic.c b/hw/intc/apic.c
+index 45887d99..c5ae4087 100644
+--- a/hw/intc/apic.c
++++ b/hw/intc/apic.c
+@@ -587,7 +587,7 @@ int apic_accept_pic_intr(DeviceState *dev)
+     APICCommonState *s = APIC_COMMON(dev);
+     uint32_t lvt0;
+ 
+-    if (!s)
++    if (!s || !(s->spurious_vec & APIC_SV_ENABLE))
+         return -1;
+ 
+     lvt0 = s->lvt[APIC_LVT_LINT0];
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/configure-fix-Darwin-target-detection.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/configure-fix-Darwin-target-detection.patch
deleted file mode 100644
index 59cdc1c..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/configure-fix-Darwin-target-detection.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-Upstream-Status: Pending
-Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
-
-From 9ac096d8eccf2d56ece646320c282c8369f8337c Mon Sep 17 00:00:00 2001
-From: Cristian Iorga <cristian.iorga@intel.com>
-Date: Tue, 29 Jul 2014 18:35:59 +0300
-Subject: [PATCH] configure: fix Darwin target detection
-
-fix Darwin target detection for qemu
-cross-compilation.
-
-Signed-off-by: Cristian Iorga <cristian.iorga@intel.com>
----
- configure | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/configure b/configure
-index 283c71c..1c66a11 100755
---- a/configure
-+++ b/configure
-@@ -444,6 +444,8 @@ elif check_define __sun__ ; then
-   targetos='SunOS'
- elif check_define __HAIKU__ ; then
-   targetos='Haiku'
-+elif check_define __APPLE__ ; then
-+  targetos='Darwin'
- else
-   targetos=`uname -s`
- fi
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/ppc_locking.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/ppc_locking.patch
new file mode 100644
index 0000000..6f72243
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/ppc_locking.patch
@@ -0,0 +1,105 @@
+I've tracked down what I think is a problem causing qemu-system-ppc
+to hang whilst booting images.
+
+I believe the decrementer timer stops receiving interrupts, so
+tasks in our images hang indefinitely because the timer has stopped.
+
+It can be summed up with this line of debug:
+
+ppc_set_irq: 0x55b4e0d562f0 n_IRQ 8 level 1 => pending 00000100req 00000004
+
+It should normally read:
+
+ppc_set_irq: 0x55b4e0d562f0 n_IRQ 8 level 1 => pending 00000100req 00000002
+
+The question is why CPU_INTERRUPT_EXITTB ends up being set when the
+lines above this log message clearly set CPU_INTERRUPT_HARD (via
+cpu_interrupt()).
+
+I note in cpu.h:
+
+    /* updates protected by BQL */
+    uint32_t interrupt_request;
+
+(for struct CPUState)
+
+The ppc code does "cs->interrupt_request |= CPU_INTERRUPT_EXITTB" in 5
+places, 3 in excp_helper.c and 2 in helper_regs.h. In all cases,
+g_assert(qemu_mutex_iothread_locked()) fails. If I do something like:
+
+if (!qemu_mutex_iothread_locked()) {
+    qemu_mutex_lock_iothread();
+    cpu_interrupt(cs, CPU_INTERRUPT_EXITTB);
+    qemu_mutex_unlock_iothread();
+} else {
+    cpu_interrupt(cs, CPU_INTERRUPT_EXITTB);
+}
+
+in these call sites then I can no longer lock qemu up with my test
+case.
+
+I suspect the _HARD setting gets overwritten, which stops the
+decrementer interrupts from being delivered.
+
+Upstream-Status: Submitted [Issue discussed on qemu mailing list 2017/11/20]
+RP 2017/11/20
+
+Index: qemu-2.10.1/target/ppc/excp_helper.c
+===================================================================
+--- qemu-2.10.1.orig/target/ppc/excp_helper.c
++++ qemu-2.10.1/target/ppc/excp_helper.c
+@@ -207,7 +207,9 @@ static inline void powerpc_excp(PowerPCC
+                         "Entering checkstop state\n");
+             }
+             cs->halted = 1;
+-            cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
++            qemu_mutex_lock_iothread();
++            cpu_interrupt(cs, CPU_INTERRUPT_EXITTB);
++            qemu_mutex_unlock_iothread();
+         }
+         if (env->msr_mask & MSR_HVB) {
+             /* ISA specifies HV, but can be delivered to guest with HV clear
+@@ -940,7 +942,9 @@ void helper_store_msr(CPUPPCState *env,
+ 
+     if (excp != 0) {
+         CPUState *cs = CPU(ppc_env_get_cpu(env));
+-        cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
++        qemu_mutex_lock_iothread();
++        cpu_interrupt(cs, CPU_INTERRUPT_EXITTB);
++        qemu_mutex_unlock_iothread();
+         raise_exception(env, excp);
+     }
+ }
+@@ -995,7 +999,9 @@ static inline void do_rfi(CPUPPCState *e
+     /* No need to raise an exception here,
+      * as rfi is always the last insn of a TB
+      */
+-    cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
++    qemu_mutex_lock_iothread();
++    cpu_interrupt(cs, CPU_INTERRUPT_EXITTB);
++    qemu_mutex_unlock_iothread();
+ 
+     /* Reset the reservation */
+     env->reserve_addr = -1;
+Index: qemu-2.10.1/target/ppc/helper_regs.h
+===================================================================
+--- qemu-2.10.1.orig/target/ppc/helper_regs.h
++++ qemu-2.10.1/target/ppc/helper_regs.h
+@@ -114,11 +114,15 @@ static inline int hreg_store_msr(CPUPPCS
+     }
+     if (((value >> MSR_IR) & 1) != msr_ir ||
+         ((value >> MSR_DR) & 1) != msr_dr) {
+-        cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
++        qemu_mutex_lock_iothread();
++        cpu_interrupt(cs, CPU_INTERRUPT_EXITTB);
++        qemu_mutex_unlock_iothread();
+     }
+     if ((env->mmu_model & POWERPC_MMU_BOOKE) &&
+         ((value >> MSR_GS) & 1) != msr_gs) {
+-        cs->interrupt_request |= CPU_INTERRUPT_EXITTB;
++        qemu_mutex_lock_iothread();
++        cpu_interrupt(cs, CPU_INTERRUPT_EXITTB);
++        qemu_mutex_unlock_iothread();
+     }
+     if (unlikely((env->flags & POWERPC_FLAG_TGPR) &&
+                  ((value ^ env->msr) & (1 << MSR_TGPR)))) {
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/qemu-2.5.0-cflags.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/qemu-2.5.0-cflags.patch
index 173394f..eb99d14 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/qemu-2.5.0-cflags.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/qemu-2.5.0-cflags.patch
@@ -1,3 +1,5 @@
+Upstream-Status: Pending
+
 --- a/configure
 +++ b/configure
 @@ -4468,10 +4468,6 @@ fi
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/target-ppc-fix-user-mode.patch b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/target-ppc-fix-user-mode.patch
deleted file mode 100644
index ba21e71..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu/target-ppc-fix-user-mode.patch
+++ /dev/null
@@ -1,48 +0,0 @@
-[Qemu-ppc] [PATCH 1/1] target-ppc, tcg: fix usermode segfault with pthread
-
-From: Sam Bobroff
-Subject: [Qemu-ppc] [PATCH 1/1] target-ppc, tcg: fix usermode segfault with pthread_create()
-Date: Mon, 30 Jan 2017 16:08:07 +1100
-Programs run under qemu-ppc64 on an x86_64 host currently segfault
-if they use pthread_create() due to the adjustment made to the NIP in
-commit bd6fefe71cec5a0c7d2be4ac96307f25db56abf9.
-
-This patch changes cpu_loop() to set the NIP back to the
-pre-incremented value before calling do_syscall(), which causes the
-correct address to be used for the new thread and corrects the fault.
-
-Signed-off-by: Sam Bobroff <address@hidden>
-
-Upstream-Status: Backport
-
----
-
-linux-user/main.c | 4 +++-
-1 file changed, 3 insertions(+), 1 deletion(-)
-
-diff --git a/linux-user/main.c b/linux-user/main.c
-index 30049581ef..b5dee01541 100644
---- a/linux-user/main.c
-+++ b/linux-user/main.c
-@@ -1712,18 +1712,20 @@ void cpu_loop(CPUPPCState *env)
-              * in syscalls.
-              */
-             env->crf[0] &= ~0x1;
-+            env->nip += 4;
-             ret = do_syscall(env, env->gpr[0], env->gpr[3], env->gpr[4],
-                              env->gpr[5], env->gpr[6], env->gpr[7],
-                              env->gpr[8], 0, 0);
-             if (ret == -TARGET_ERESTARTSYS) {
-+                env->nip -= 4;
-                 break;
-             }
-             if (ret == (target_ulong)(-TARGET_QEMU_ESIGRETURN)) {
-+                env->nip -= 4;
-                 /* Returning from a successful sigreturn syscall.
-                    Avoid corrupting register state.  */
-                 break;
-             }
--            env->nip += 4;
-             if (ret > (target_ulong)(-515)) {
-                 env->crf[0] |= 0x1;
-                 ret = -ret;
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu_2.10.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu_2.10.0.bb
new file mode 100644
index 0000000..a9b4939
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu_2.10.0.bb
@@ -0,0 +1,63 @@
+require qemu.inc
+
+inherit ptest
+
+RDEPENDS_${PN}-ptest = "bash make"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=441c28d2cf86e15a37fa47e15a72fbac \
+                    file://COPYING.LIB;endline=24;md5=c04def7ae38850e7d3ef548588159913"
+
+SRC_URI = "http://wiki.qemu-project.org/download/${BP}.tar.bz2 \
+           file://powerpc_rom.bin \
+           file://disable-grabs.patch \
+           file://exclude-some-arm-EABI-obsolete-syscalls.patch \
+           file://wacom.patch \
+           file://add-ptest-in-makefile-v10.patch \
+           file://run-ptest \
+           file://qemu-enlarge-env-entry-size.patch \
+           file://no-valgrind.patch \
+           file://pathlimit.patch \
+           file://qemu-2.5.0-cflags.patch \
+           file://glibc-2.25.patch \
+           file://0001-Provide-support-for-the-CUSE-TPM.patch \
+           file://0002-Introduce-condition-to-notify-waiters-of-completed-c.patch \
+           file://0003-Introduce-condition-in-TPM-backend-for-notification.patch \
+           file://0004-Add-support-for-VM-suspend-resume-for-TPM-TIS-v2.9.patch \
+           file://apic-fixup-fallthrough-to-PIC.patch \
+           file://CVE-2017-13711.patch \
+           file://CVE-2017-13673.patch \
+           file://CVE-2017-13672.patch \
+           file://CVE-2017-14167.patch \
+           file://ppc_locking.patch \
+           "
+UPSTREAM_CHECK_REGEX = "qemu-(?P<pver>\d+\..*)\.tar"
+
+
+SRC_URI_append_class-native = " \
+            file://fix-libcap-header-issue-on-some-distro.patch \
+            file://cpus.c-qemu_cpu_kick_thread_debugging.patch \
+            "
+
+SRC_URI[md5sum] = "ca73441de73a9b52c6c49c97190d2185"
+SRC_URI[sha256sum] = "7e9f39e1306e6dcc595494e91c1464d4b03f55ddd2053183e0e1b69f7f776d48"
+
+COMPATIBLE_HOST_mipsarchn32 = "null"
+COMPATIBLE_HOST_mipsarchn64 = "null"
+
+do_install_append() {
+    # Prevent QA warnings about installed ${localstatedir}/run
+    if [ -d ${D}${localstatedir}/run ]; then rmdir ${D}${localstatedir}/run; fi
+    install -Dm 0755 ${WORKDIR}/powerpc_rom.bin ${D}${datadir}/qemu
+}
+
+do_compile_ptest() {
+	make buildtest-TESTS
+}
+
+do_install_ptest() {
+	cp -rL ${B}/tests ${D}${PTEST_PATH}
+	find ${D}${PTEST_PATH}/tests -type f -name "*.[Sshcod]" | xargs -i rm -rf {}
+
+	cp ${S}/tests/Makefile.include ${D}${PTEST_PATH}/tests
+}
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu_2.8.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu_2.8.0.bb
deleted file mode 100644
index fa70009..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemu_2.8.0.bb
+++ /dev/null
@@ -1,65 +0,0 @@
-require qemu.inc
-
-inherit ptest
-
-RDEPENDS_${PN}-ptest = "bash make"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=441c28d2cf86e15a37fa47e15a72fbac \
-                    file://COPYING.LIB;endline=24;md5=c04def7ae38850e7d3ef548588159913"
-
-SRC_URI += " \
-            file://powerpc_rom.bin \
-            file://disable-grabs.patch \
-            file://exclude-some-arm-EABI-obsolete-syscalls.patch \
-            file://wacom.patch \
-            file://add-ptest-in-makefile.patch \
-            file://run-ptest \
-            file://configure-fix-Darwin-target-detection.patch \
-            file://qemu-enlarge-env-entry-size.patch \
-            file://no-valgrind.patch \
-            file://pathlimit.patch \
-            file://qemu-2.5.0-cflags.patch \
-            file://target-ppc-fix-user-mode.patch \
-            file://glibc-2.25.patch \
-            file://04b33e21866412689f18b7ad6daf0a54d8f959a7.patch \
-"
-
-SRC_URI += " \
-            file://0001-Provide-support-for-the-CUSE-TPM.patch \
-            file://0002-Introduce-condition-to-notify-waiters-of-completed-c.patch \
-            file://0003-Introduce-condition-in-TPM-backend-for-notification.patch \
-            file://0004-Add-support-for-VM-suspend-resume-for-TPM-TIS.patch \
-            file://CVE-2016-9908.patch \
-            file://CVE-2016-9912.patch \
-"
-
-SRC_URI_append_class-native = " \
-            file://fix-libcap-header-issue-on-some-distro.patch \
-            file://cpus.c-qemu_cpu_kick_thread_debugging.patch \
-            "
-
-SRC_URI =+ "http://wiki.qemu-project.org/download/${BP}.tar.bz2"
-
-SRC_URI[md5sum] = "17940dce063b6ce450a12e719a6c9c43"
-SRC_URI[sha256sum] = "dafd5d7f649907b6b617b822692f4c82e60cf29bc0fc58bc2036219b591e5e62"
-
-COMPATIBLE_HOST_mipsarchn32 = "null"
-COMPATIBLE_HOST_mipsarchn64 = "null"
-
-do_install_append() {
-    # Prevent QA warnings about installed ${localstatedir}/run
-    if [ -d ${D}${localstatedir}/run ]; then rmdir ${D}${localstatedir}/run; fi
-    install -Dm 0755 ${WORKDIR}/powerpc_rom.bin ${D}${datadir}/qemu
-}
-
-do_compile_ptest() {
-	make buildtest-TESTS
-}
-
-do_install_ptest() {
-	cp -rL ${B}/tests ${D}${PTEST_PATH}
-	find ${D}${PTEST_PATH}/tests -type f -name "*.[Sshcod]" | xargs -i rm -rf {}
-
-	cp ${S}/tests/Makefile.include ${D}${PTEST_PATH}/tests
-}
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemuwrapper-cross_1.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemuwrapper-cross_1.0.bb
index e40cdaf..c983fba 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemuwrapper-cross_1.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/qemu/qemuwrapper-cross_1.0.bb
@@ -1,4 +1,5 @@
 SUMMARY = "QEMU wrapper script"
+HOMEPAGE = "http://qemu.org"
 LICENSE = "MIT"
 
 S = "${WORKDIR}"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/remake/remake.inc b/import-layers/yocto-poky/meta/recipes-devtools/remake/remake.inc
deleted file mode 100644
index df889fc..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/remake/remake.inc
+++ /dev/null
@@ -1,16 +0,0 @@
-SUMMARY = "Makefile debugger which is gnumake compatible"
-DESCRIPTION = "remake is a patched and modernized version of GNU make \
-utility that adds improved error reporting, the ability to trace \
-execution in a comprehensible way, and a debugger."
-
-HOMEPAGE = "http://bashdb.sourceforge.net/remake/"
-SECTION = "devel"
-
-SRC_URI = "git://github.com/rocky/remake.git"
-
-inherit autotools gettext update-alternatives pkgconfig
-
-ALTERNATIVE_${PN} = "make"
-ALTERNATIVE_LINK_NAME[make] = "${bindir}/make"
-ALTERNATIVE_TARGET[make] = "${bindir}/remake"
-ALTERNATIVE_PRIORITY = "100"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/remake/remake/version-remake.texi.patch b/import-layers/yocto-poky/meta/recipes-devtools/remake/remake/version-remake.texi.patch
deleted file mode 100644
index fa6329e..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/remake/remake/version-remake.texi.patch
+++ /dev/null
@@ -1,16 +0,0 @@
-Upstream-Status: Pending
-
-version-remake.texi is not there but it is required by remake.texi,
-just add it for getting the 'make remake.info' works.
-
-==============================================================
-diff --git a/doc/version-remake.texi b/doc/version-remake.texi
-new file mode 100644
-index 0000000..2a3b72b
---- /dev/null
-+++ b/doc/version-remake.texi
-@@ -0,0 +1,4 @@
-+@set UPDATED 10 June 2012
-+@set UPDATED-MONTH June 2012
-+@set EDITION 3.82+dbg-0.9git
-+@set VERSION 3.82+dbg-0.9git
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/remake/remake_4.1+dbg-1.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/remake/remake_4.1+dbg-1.1.bb
deleted file mode 100644
index 8eab7e3..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/remake/remake_4.1+dbg-1.1.bb
+++ /dev/null
@@ -1,28 +0,0 @@
-LICENSE = "GPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504 \
-                    file://tests/COPYING;md5=d32239bcb673463ab874e80d47fae504 \
-                    file://glob/COPYING.LIB;md5=4a770b67e6be0f60da244beb2de0fce4"
-require remake.inc
-
-UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>(\d+(\.\d+)+)\+dbg.+)"
-SRC_URI += "file://version-remake.texi.patch \
-           "
-SRCREV = "cf54641d50a0165bb17622b3e9770f426ccbc561"
-S = "${WORKDIR}/git"
-
-DEPENDS += "readline guile"
-# Need to add "gettext-native" dependency to remake-native.
-# By default only "gettext-minimal-native" is added
-# when inherit gettext.
-DEPENDS_class-native += "gettext-native"
-PROVIDES += "virtual/make"
-
-do_configure_prepend() {
-    # remove the default LINGUAS since we are not going to generate languages
-    rm ${S}/po/LINGUAS
-    touch ${S}/po/LINGUAS
-    # create config.rpath which required by configure.ac
-    ( cd ${S}; autopoint || touch config.rpath )
-}
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0001-Do-not-hardcode-lib-rpm-as-the-installation-path-for.patch b/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0001-Do-not-hardcode-lib-rpm-as-the-installation-path-for.patch
index d99ddeb..1f61aca 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0001-Do-not-hardcode-lib-rpm-as-the-installation-path-for.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0001-Do-not-hardcode-lib-rpm-as-the-installation-path-for.patch
@@ -4,7 +4,7 @@
 Subject: [PATCH] Do not hardcode "lib/rpm" as the installation path for
  default configuration and macros.
 
-Upstream-Status: Inappropriate [oe-core specific]
+Upstream-Status: Submitted [https://github.com/rpm-software-management/rpm/pull/263]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 
 ---
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0001-Fix-build-with-musl-C-library.patch b/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0001-Fix-build-with-musl-C-library.patch
index 95c7013..edf9ec0 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0001-Fix-build-with-musl-C-library.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0001-Fix-build-with-musl-C-library.patch
@@ -3,7 +3,7 @@
 Date: Mon, 27 Feb 2017 14:43:21 +0200
 Subject: [PATCH] Fix build with musl C library.
 
-Upstream-Status: Pending
+Upstream-Status: Inappropriate [problem already solved in master branch]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0001-Split-binary-package-building-into-a-separate-functi.patch b/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0001-Split-binary-package-building-into-a-separate-functi.patch
new file mode 100644
index 0000000..6e44f0b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0001-Split-binary-package-building-into-a-separate-functi.patch
@@ -0,0 +1,84 @@
+From 721a660a507d6d062e7aecafad886c643970a5d5 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Thu, 25 May 2017 18:15:27 +0300
+Subject: [PATCH 1/4] Split binary package building into a separate function
+
+So that it can be run as a thread pool task.
+
+Upstream-Status: Submitted [https://github.com/rpm-software-management/rpm/pull/226]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+
+---
+ build/pack.c | 33 +++++++++++++++++++++------------
+ 1 file changed, 21 insertions(+), 12 deletions(-)
+
+diff --git a/build/pack.c b/build/pack.c
+index 518f4e92a..ccfd614cc 100644
+--- a/build/pack.c
++++ b/build/pack.c
+@@ -546,18 +546,13 @@ static rpmRC checkPackages(char *pkgcheck)
+     return RPMRC_OK;
+ }
+ 
+-rpmRC packageBinaries(rpmSpec spec, const char *cookie, int cheating)
++static rpmRC packageBinary(rpmSpec spec, Package pkg, const char *cookie, int cheating, char** filename)
+ {
+-    rpmRC rc;
+-    const char *errorString;
+-    Package pkg;
+-    char *pkglist = NULL;
+-
+-    for (pkg = spec->packages; pkg != NULL; pkg = pkg->next) {
+-	char *fn;
++	const char *errorString;
++	rpmRC rc = RPMRC_OK;
+ 
+ 	if (pkg->fileList == NULL)
+-	    continue;
++	    return rc;
+ 
+ 	if ((rc = processScriptFiles(spec, pkg)))
+ 	    return rc;
+@@ -587,7 +582,7 @@ rpmRC packageBinaries(rpmSpec spec, const char *cookie, int cheating)
+ 		     headerGetString(pkg->header, RPMTAG_NAME), errorString);
+ 		return RPMRC_FAIL;
+ 	    }
+-	    fn = rpmGetPath("%{_rpmdir}/", binRpm, NULL);
++	    *filename = rpmGetPath("%{_rpmdir}/", binRpm, NULL);
+ 	    if ((binDir = strchr(binRpm, '/')) != NULL) {
+ 		struct stat st;
+ 		char *dn;
+@@ -609,14 +604,28 @@ rpmRC packageBinaries(rpmSpec spec, const char *cookie, int cheating)
+ 	    free(binRpm);
+ 	}
+ 
+-	rc = writeRPM(pkg, NULL, fn, NULL);
++	rc = writeRPM(pkg, NULL, *filename, NULL);
+ 	if (rc == RPMRC_OK) {
+ 	    /* Do check each written package if enabled */
+-	    char *pkgcheck = rpmExpand("%{?_build_pkgcheck} ", fn, NULL);
++	    char *pkgcheck = rpmExpand("%{?_build_pkgcheck} ", *filename, NULL);
+ 	    if (pkgcheck[0] != ' ') {
+ 		rc = checkPackages(pkgcheck);
+ 	    }
+ 	    free(pkgcheck);
++	}
++	return rc;
++}
++
++rpmRC packageBinaries(rpmSpec spec, const char *cookie, int cheating)
++{
++    rpmRC rc;
++    Package pkg;
++    char *pkglist = NULL;
++
++    for (pkg = spec->packages; pkg != NULL; pkg = pkg->next) {
++	char *fn = NULL;
++	rc = packageBinary(spec, pkg, cookie, cheating, &fn);
++	if (rc == RPMRC_OK) {
+ 	    rstrcat(&pkglist, fn);
+ 	    rstrcat(&pkglist, " ");
+ 	}
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0002-Run-binary-package-creation-via-thread-pools.patch b/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0002-Run-binary-package-creation-via-thread-pools.patch
new file mode 100644
index 0000000..d10041c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0002-Run-binary-package-creation-via-thread-pools.patch
@@ -0,0 +1,127 @@
+From 513200cf76758de4668312c628d6362bdabfaf4b Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Thu, 25 May 2017 19:30:20 +0300
+Subject: [PATCH 1/3] Run binary package creation via thread pools.
+
+Upstream-Status: Submitted [https://github.com/rpm-software-management/rpm/pull/226]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+
+---
+ build/pack.c | 81 +++++++++++++++++++++++++++++++++++++++++++++++++-----------
+ configure.ac |  3 +++
+ 2 files changed, 70 insertions(+), 14 deletions(-)
+
+diff --git a/build/pack.c b/build/pack.c
+index ccfd614cc..ed5b9ab4e 100644
+--- a/build/pack.c
++++ b/build/pack.c
+@@ -616,25 +616,78 @@ static rpmRC packageBinary(rpmSpec spec, Package pkg, const char *cookie, int ch
+ 	return rc;
+ }
+ 
+-rpmRC packageBinaries(rpmSpec spec, const char *cookie, int cheating)
++struct binaryPackageTaskData
+ {
+-    rpmRC rc;
+     Package pkg;
++    char *filename;
++    rpmRC result;
++    struct binaryPackageTaskData *next;
++};
++
++static struct binaryPackageTaskData* runBinaryPackageTasks(rpmSpec spec, const char *cookie, int cheating)
++{
++    struct binaryPackageTaskData *tasks = NULL;
++    struct binaryPackageTaskData *task = NULL;
++    struct binaryPackageTaskData *prev = NULL;
++
++    for (Package pkg = spec->packages; pkg != NULL; pkg = pkg->next) {
++        task = rcalloc(1, sizeof(*task));
++        task->pkg = pkg;
++        if (pkg == spec->packages) {
++            // the first package needs to be processed ahead of others, as they copy
++            // changelog data from it, and so otherwise data races would happen
++            task->result = packageBinary(spec, pkg, cookie, cheating, &(task->filename));
++            rpmlog(RPMLOG_NOTICE, _("Finished binary package job, result %d, filename %s\n"), task->result, task->filename);
++            tasks = task;
++        }
++        if (prev != NULL) {
++            prev->next = task;
++        }
++        prev = task;
++    }
++
++    #pragma omp parallel
++    #pragma omp single
++    // re-declaring task variable is necessary, or older gcc versions will produce code that segfaults
++    for (struct binaryPackageTaskData *task = tasks; task != NULL; task = task->next) {
++        if (task != tasks)
++        #pragma omp task
++        {
++            task->result = packageBinary(spec, task->pkg, cookie, cheating, &(task->filename));
++            rpmlog(RPMLOG_NOTICE, _("Finished binary package job, result %d, filename %s\n"), task->result, task->filename);
++        }
++    }
++
++    return tasks;
++}
++
++static void freeBinaryPackageTasks(struct binaryPackageTaskData* tasks)
++{
++    while (tasks != NULL) {
++        struct binaryPackageTaskData* next = tasks->next;
++        rfree(tasks->filename);
++        rfree(tasks);
++        tasks = next;
++    }
++}
++
++rpmRC packageBinaries(rpmSpec spec, const char *cookie, int cheating)
++{
+     char *pkglist = NULL;
+ 
+-    for (pkg = spec->packages; pkg != NULL; pkg = pkg->next) {
+-	char *fn = NULL;
+-	rc = packageBinary(spec, pkg, cookie, cheating, &fn);
+-	if (rc == RPMRC_OK) {
+-	    rstrcat(&pkglist, fn);
+-	    rstrcat(&pkglist, " ");
+-	}
+-	free(fn);
+-	if (rc != RPMRC_OK) {
+-	    pkglist = _free(pkglist);
+-	    return rc;
+-	}
++    struct binaryPackageTaskData *tasks = runBinaryPackageTasks(spec, cookie, cheating);
++
++    for (struct binaryPackageTaskData *task = tasks; task != NULL; task = task->next) {
++        if (task->result == RPMRC_OK) {
++            rstrcat(&pkglist, task->filename);
++            rstrcat(&pkglist, " ");
++        } else {
++            _free(pkglist);
++            freeBinaryPackageTasks(tasks);
++            return RPMRC_FAIL;
++        }
+     }
++    freeBinaryPackageTasks(tasks);
+ 
+     /* Now check the package set if enabled */
+     if (pkglist != NULL) {
+diff --git a/configure.ac b/configure.ac
+index a506ec819..59fa0acaf 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -17,6 +17,9 @@ AC_DISABLE_STATIC
+ 
+ PKG_PROG_PKG_CONFIG
+ 
++AC_OPENMP
++RPMCFLAGS="$OPENMP_CFLAGS $RPMCFLAGS"
++
+ dnl Checks for programs.
+ AC_PROG_CXX
+ AC_PROG_AWK
+-- 
+2.11.0
+
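For reference, a compilable toy version of the single-producer/task-per-item OpenMP pattern the hunk above adds to packageBinaries(). It assumes gcc with -fopenmp; struct item and the squaring work stand in for rpm's Package list and packageBinary() and are purely illustrative.

/* omp_tasks.c - sketch of the pattern; build with: gcc -fopenmp omp_tasks.c */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

struct item { int id; int result; struct item *next; };

int main(void)
{
    /* Build a small linked list standing in for the package list. */
    struct item *head = NULL;
    for (int i = 5; i > 0; i--) {
        struct item *it = calloc(1, sizeof(*it));
        it->id = i;
        it->next = head;
        head = it;
    }

    /* One thread walks the list and spawns a task per node; the team
     * executes the tasks, and the implicit barrier at the end of the
     * parallel region waits for all of them. */
    #pragma omp parallel
    #pragma omp single
    for (struct item *it = head; it != NULL; it = it->next) {
        #pragma omp task firstprivate(it)
        {
            it->result = it->id * it->id;   /* stand-in for packageBinary() */
            printf("item %d done on thread %d\n", it->id, omp_get_thread_num());
        }
    }

    for (struct item *it = head; it != NULL; ) {
        struct item *next = it->next;
        free(it);
        it = next;
    }
    return 0;
}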
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0003-rpmstrpool.c-make-operations-over-string-pools-threa.patch b/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0003-rpmstrpool.c-make-operations-over-string-pools-threa.patch
new file mode 100644
index 0000000..c348ae5
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0003-rpmstrpool.c-make-operations-over-string-pools-threa.patch
@@ -0,0 +1,207 @@
+From c80892f17e44331206c8318d53b63bb6a99554d0 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Tue, 30 May 2017 13:58:30 +0300
+Subject: [PATCH 3/4] rpmstrpool.c: make operations over string pools
+ thread-safe
+
+Otherwise multithreaded rpm building explodes in various ways due
+to data races.
+
+Upstream-Status: Submitted [https://github.com/rpm-software-management/rpm/pull/226]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+
+---
+ rpmio/rpmstrpool.c | 56 +++++++++++++++++++++++++++++++++++++++++++++---------
+ 1 file changed, 47 insertions(+), 9 deletions(-)
+
+diff --git a/rpmio/rpmstrpool.c b/rpmio/rpmstrpool.c
+index 30a57eb10..58ba95a02 100644
+--- a/rpmio/rpmstrpool.c
++++ b/rpmio/rpmstrpool.c
+@@ -113,6 +113,8 @@ static poolHash poolHashCreate(int numBuckets)
+     return ht;
+ }
+ 
++static const char * rpmstrPoolStrNoLock(rpmstrPool pool, rpmsid sid);
++
+ static void poolHashResize(rpmstrPool pool, int numBuckets)
+ {
+     poolHash ht = pool->hash;
+@@ -120,7 +122,7 @@ static void poolHashResize(rpmstrPool pool, int numBuckets)
+ 
+     for (int i=0; i<ht->numBuckets; i++) {
+         if (!ht->buckets[i].keyid) continue;
+-        unsigned int keyHash = rstrhash(rpmstrPoolStr(pool, ht->buckets[i].keyid));
++        unsigned int keyHash = rstrhash(rpmstrPoolStrNoLock(pool, ht->buckets[i].keyid));
+         for (unsigned int j=0;;j++) {
+             unsigned int hash = hashbucket(keyHash, j) % numBuckets;
+             if (!buckets[hash].keyid) {
+@@ -149,7 +151,7 @@ static void poolHashAddHEntry(rpmstrPool pool, const char * key, unsigned int ke
+             ht->buckets[hash].keyid = keyid;
+             ht->keyCount++;
+             break;
+-        } else if (!strcmp(rpmstrPoolStr(pool, ht->buckets[hash].keyid), key)) {
++        } else if (!strcmp(rpmstrPoolStrNoLock(pool, ht->buckets[hash].keyid), key)) {
+             return;
+         }
+     }
+@@ -191,7 +193,7 @@ static void poolHashPrintStats(rpmstrPool pool)
+     int maxcollisions = 0;
+ 
+     for (i=0; i<ht->numBuckets; i++) {
+-        unsigned int keyHash = rstrhash(rpmstrPoolStr(pool, ht->buckets[i].keyid));
++        unsigned int keyHash = rstrhash(rpmstrPoolStrNoLock(pool, ht->buckets[i].keyid));
+         for (unsigned int j=0;;j++) {
+             unsigned int hash = hashbucket(keyHash, i) % ht->numBuckets;
+             if (hash==i) {
+@@ -221,7 +223,7 @@ static void rpmstrPoolRehash(rpmstrPool pool)
+ 
+     pool->hash = poolHashCreate(sizehint);
+     for (int i = 1; i <= pool->offs_size; i++)
+-	poolHashAddEntry(pool, rpmstrPoolStr(pool, i), i);
++	poolHashAddEntry(pool, rpmstrPoolStrNoLock(pool, i), i);
+ }
+ 
+ rpmstrPool rpmstrPoolCreate(void)
+@@ -245,6 +247,8 @@ rpmstrPool rpmstrPoolCreate(void)
+ 
+ rpmstrPool rpmstrPoolFree(rpmstrPool pool)
+ {
++    #pragma omp critical(rpmstrpool)
++    {
+     if (pool) {
+ 	if (pool->nrefs > 1) {
+ 	    pool->nrefs--;
+@@ -260,18 +264,24 @@ rpmstrPool rpmstrPoolFree(rpmstrPool pool)
+ 	    free(pool);
+ 	}
+     }
++    }
+     return NULL;
+ }
+ 
+ rpmstrPool rpmstrPoolLink(rpmstrPool pool)
+ {
++    #pragma omp critical(rpmstrpool)
++    {
+     if (pool)
+ 	pool->nrefs++;
++    }
+     return pool;
+ }
+ 
+ void rpmstrPoolFreeze(rpmstrPool pool, int keephash)
+ {
++    #pragma omp critical(rpmstrpool)
++    {
+     if (pool && !pool->frozen) {
+ 	if (!keephash) {
+ 	    pool->hash = poolHashFree(pool->hash);
+@@ -281,16 +291,20 @@ void rpmstrPoolFreeze(rpmstrPool pool, int keephash)
+ 			      pool->offs_alloced * sizeof(*pool->offs));
+ 	pool->frozen = 1;
+     }
++    }
+ }
+ 
+ void rpmstrPoolUnfreeze(rpmstrPool pool)
+ {
++    #pragma omp critical(rpmstrpool)
++    {
+     if (pool) {
+ 	if (pool->hash == NULL) {
+ 	    rpmstrPoolRehash(pool);
+ 	}
+ 	pool->frozen = 0;
+     }
++    }
+ }
+ 
+ static rpmsid rpmstrPoolPut(rpmstrPool pool, const char *s, size_t slen, unsigned int hash)
+@@ -350,7 +364,7 @@ static rpmsid rpmstrPoolGet(rpmstrPool pool, const char * key, size_t keylen,
+             return 0;
+         }
+ 
+-	s = rpmstrPoolStr(pool, ht->buckets[hash].keyid);
++	s = rpmstrPoolStrNoLock(pool, ht->buckets[hash].keyid);
+ 	/* pool string could be longer than keylen, require exact matche */
+ 	if (strncmp(s, key, keylen) == 0 && s[keylen] == '\0')
+ 	    return ht->buckets[hash].keyid;
+@@ -373,27 +387,31 @@ static inline rpmsid strn2id(rpmstrPool pool, const char *s, size_t slen,
+ rpmsid rpmstrPoolIdn(rpmstrPool pool, const char *s, size_t slen, int create)
+ {
+     rpmsid sid = 0;
+-
++    #pragma omp critical(rpmstrpool)
++    {
+     if (s != NULL) {
+ 	unsigned int hash = rstrnhash(s, slen);
+ 	sid = strn2id(pool, s, slen, hash, create);
+     }
++    }
+     return sid;
+ }
+ 
+ rpmsid rpmstrPoolId(rpmstrPool pool, const char *s, int create)
+ {
+     rpmsid sid = 0;
+-
++    #pragma omp critical(rpmstrpool)
++    {
+     if (s != NULL) {
+ 	size_t slen;
+ 	unsigned int hash = rstrlenhash(s, &slen);
+ 	sid = strn2id(pool, s, slen, hash, create);
+     }
++    }
+     return sid;
+ }
+ 
+-const char * rpmstrPoolStr(rpmstrPool pool, rpmsid sid)
++static const char * rpmstrPoolStrNoLock(rpmstrPool pool, rpmsid sid)
+ {
+     const char *s = NULL;
+     if (pool && sid > 0 && sid <= pool->offs_size)
+@@ -401,12 +419,25 @@ const char * rpmstrPoolStr(rpmstrPool pool, rpmsid sid)
+     return s;
+ }
+ 
++const char * rpmstrPoolStr(rpmstrPool pool, rpmsid sid)
++{
++    const char *s = NULL;
++    #pragma omp critical(rpmstrpool)
++    {
++    s = rpmstrPoolStrNoLock(pool, sid);
++    }
++    return s;
++}
++
+ size_t rpmstrPoolStrlen(rpmstrPool pool, rpmsid sid)
+ {
+     size_t slen = 0;
++    #pragma omp critical(rpmstrpool)
++    {
+     if (pool && sid > 0 && sid <= pool->offs_size) {
+ 	slen = strlen(pool->offs[sid]);
+     }
++    }
+     return slen;
+ }
+ 
+@@ -421,5 +452,12 @@ int rpmstrPoolStreq(rpmstrPool poolA, rpmsid sidA,
+ 
+ rpmsid rpmstrPoolNumStr(rpmstrPool pool)
+ {
+-    return (pool != NULL) ? pool->offs_size : 0;
++    rpmsid id = 0;
++    #pragma omp critical(rpmstrpool)
++    {
++    if (pool) {
++	id = pool->offs_size;
++    }
++    }
++    return id;
+ }
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0004-build-pack.c-remove-static-local-variables-from-buil.patch b/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0004-build-pack.c-remove-static-local-variables-from-buil.patch
new file mode 100644
index 0000000..64a5651
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/rpm/files/0004-build-pack.c-remove-static-local-variables-from-buil.patch
@@ -0,0 +1,337 @@
+From ec305795a302d226343e69031ff2024dfcde69c0 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Thu, 8 Jun 2017 17:08:09 +0300
+Subject: [PATCH 3/3] build/pack.c: remove static local variables from
+ buildHost() and getBuildTime()
+
+Their use is causing difficult to diagnose data races when building multiple
+packages in parallel, and is a bad idea in general, as it also makes it more
+difficult to reason about code.
+
+Upstream-Status: Submitted [https://github.com/rpm-software-management/rpm/pull/226]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+
+
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+ build/build.c             | 54 ++++++++++++++++++++++++++++--
+ build/pack.c              | 84 +++++++++--------------------------------------
+ build/rpmbuild_internal.h |  8 +++--
+ 3 files changed, 74 insertions(+), 72 deletions(-)
+
+diff --git a/build/build.c b/build/build.c
+index 5f99c8db7..09a1311c5 100644
+--- a/build/build.c
++++ b/build/build.c
+@@ -6,6 +6,8 @@
+ #include "system.h"
+ 
+ #include <errno.h>
++#include <netdb.h>
++#include <time.h>
+ #include <sys/wait.h>
+ 
+ #include <rpm/rpmlog.h>
+@@ -16,6 +18,50 @@
+ 
+ #include "debug.h"
+ 
++static rpm_time_t getBuildTime(void)
++{
++    rpm_time_t buildTime = 0;
++    char *srcdate;
++    time_t epoch;
++    char *endptr;
++
++    srcdate = getenv("SOURCE_DATE_EPOCH");
++    if (srcdate) {
++        errno = 0;
++        epoch = strtol(srcdate, &endptr, 10);
++        if (srcdate == endptr || *endptr || errno != 0)
++            rpmlog(RPMLOG_ERR, _("unable to parse SOURCE_DATE_EPOCH\n"));
++        else
++            buildTime = (int32_t) epoch;
++    } else
++        buildTime = (int32_t) time(NULL);
++
++    return buildTime;
++}
++
++static char * buildHost(void)
++{
++    char* hostname;
++    struct hostent *hbn;
++    char *bhMacro;
++
++    bhMacro = rpmExpand("%{?_buildhost}", NULL);
++    if (strcmp(bhMacro, "") != 0) {
++        rasprintf(&hostname, "%s", bhMacro);
++    } else {
++        hostname = rcalloc(1024, sizeof(*hostname));
++        (void) gethostname(hostname, 1024);
++        hbn = gethostbyname(hostname);
++        if (hbn)
++            strcpy(hostname, hbn->h_name);
++        else
++            rpmlog(RPMLOG_WARNING,
++                    _("Could not canonicalize hostname: %s\n"), hostname);
++    }
++    free(bhMacro);
++    return(hostname);
++}
++
+ /**
+  */
+ static rpmRC doRmSource(rpmSpec spec)
+@@ -203,6 +249,9 @@ static rpmRC buildSpec(BTA_t buildArgs, rpmSpec spec, int what)
+     rpmRC rc = RPMRC_OK;
+     int test = (what & RPMBUILD_NOBUILD);
+     char *cookie = buildArgs->cookie ? xstrdup(buildArgs->cookie) : NULL;
++    const char* host = buildHost();
++    rpm_time_t buildTime = getBuildTime();
++
+ 
+     if (rpmExpandNumeric("%{?source_date_epoch_from_changelog}") &&
+ 	getenv("SOURCE_DATE_EPOCH") == NULL) {
+@@ -271,11 +320,11 @@ static rpmRC buildSpec(BTA_t buildArgs, rpmSpec spec, int what)
+ 		goto exit;
+ 
+ 	if (((what & RPMBUILD_PACKAGESOURCE) && !test) &&
+-	    (rc = packageSources(spec, &cookie)))
++	    (rc = packageSources(spec, &cookie, buildTime, host)))
+ 		return rc;
+ 
+ 	if (((what & RPMBUILD_PACKAGEBINARY) && !test) &&
+-	    (rc = packageBinaries(spec, cookie, (didBuild == 0))))
++	    (rc = packageBinaries(spec, cookie, (didBuild == 0), buildTime, host)))
+ 		goto exit;
+ 	
+ 	if ((what & RPMBUILD_CLEAN) &&
+@@ -295,6 +344,7 @@ static rpmRC buildSpec(BTA_t buildArgs, rpmSpec spec, int what)
+ 	(void) unlink(spec->specFile);
+ 
+ exit:
++    free(host);
+     free(cookie);
+     spec->rootDir = NULL;
+     if (rc != RPMRC_OK && rpmlogGetNrecs() > 0) {
+diff --git a/build/pack.c b/build/pack.c
+index ed5b9ab4e..62427065a 100644
+--- a/build/pack.c
++++ b/build/pack.c
+@@ -6,8 +6,6 @@
+ #include "system.h"
+ 
+ #include <errno.h>
+-#include <netdb.h>
+-#include <time.h>
+ #include <sys/wait.h>
+ 
+ #include <rpm/rpmlib.h>			/* RPMSIGTAG*, rpmReadPackageFile */
+@@ -151,57 +149,6 @@ exit:
+     return rc;
+ }
+ 
+-static rpm_time_t * getBuildTime(void)
+-{
+-    static rpm_time_t buildTime[1];
+-    char *srcdate;
+-    time_t epoch;
+-    char *endptr;
+-
+-    if (buildTime[0] == 0) {
+-        srcdate = getenv("SOURCE_DATE_EPOCH");
+-        if (srcdate) {
+-            errno = 0;
+-            epoch = strtol(srcdate, &endptr, 10);
+-            if (srcdate == endptr || *endptr || errno != 0)
+-                rpmlog(RPMLOG_ERR, _("unable to parse SOURCE_DATE_EPOCH\n"));
+-            else
+-                buildTime[0] = (int32_t) epoch;
+-        } else
+-            buildTime[0] = (int32_t) time(NULL);
+-    }
+-
+-    return buildTime;
+-}
+-
+-static const char * buildHost(void)
+-{
+-    static char hostname[1024];
+-    static int oneshot = 0;
+-    struct hostent *hbn;
+-    char *bhMacro;
+-
+-    if (! oneshot) {
+-        bhMacro = rpmExpand("%{?_buildhost}", NULL);
+-        if (strcmp(bhMacro, "") != 0 && strlen(bhMacro) < 1024) {
+-            strcpy(hostname, bhMacro);
+-        } else {
+-            if (strcmp(bhMacro, "") != 0)
+-                rpmlog(RPMLOG_WARNING, _("The _buildhost macro is too long\n"));
+-            (void) gethostname(hostname, sizeof(hostname));
+-            hbn = gethostbyname(hostname);
+-            if (hbn)
+-                strcpy(hostname, hbn->h_name);
+-            else
+-                rpmlog(RPMLOG_WARNING,
+-                        _("Could not canonicalize hostname: %s\n"), hostname);
+-        }
+-        free(bhMacro);
+-        oneshot = 1;
+-    }
+-    return(hostname);
+-}
+-
+ static rpmRC processScriptFiles(rpmSpec spec, Package pkg)
+ {
+     struct TriggerFileEntry *p;
+@@ -308,7 +255,8 @@ static int haveRichDep(Package pkg)
+ }
+ 
+ static rpmRC writeRPM(Package pkg, unsigned char ** pkgidp,
+-		      const char *fileName, char **cookie)
++		      const char *fileName, char **cookie,
++		      rpm_time_t buildTime, const char* buildHost)
+ {
+     FD_t fd = NULL;
+     char * rpmio_flags = NULL;
+@@ -397,7 +345,7 @@ static rpmRC writeRPM(Package pkg, unsigned char ** pkgidp,
+ 
+     /* Create and add the cookie */
+     if (cookie) {
+-	rasprintf(cookie, "%s %d", buildHost(), (int) (*getBuildTime()));
++	rasprintf(cookie, "%s %d", buildHost, buildTime);
+ 	headerPutString(pkg->header, RPMTAG_COOKIE, *cookie);
+     }
+     
+@@ -546,7 +494,7 @@ static rpmRC checkPackages(char *pkgcheck)
+     return RPMRC_OK;
+ }
+ 
+-static rpmRC packageBinary(rpmSpec spec, Package pkg, const char *cookie, int cheating, char** filename)
++static rpmRC packageBinary(rpmSpec spec, Package pkg, const char *cookie, int cheating, char** filename, rpm_time_t buildTime, const char* buildHost)
+ {
+ 	const char *errorString;
+ 	rpmRC rc = RPMRC_OK;
+@@ -565,8 +513,8 @@ static rpmRC packageBinary(rpmSpec spec, Package pkg, const char *cookie, int ch
+ 	headerCopyTags(spec->packages->header, pkg->header, copyTags);
+ 	
+ 	headerPutString(pkg->header, RPMTAG_RPMVERSION, VERSION);
+-	headerPutString(pkg->header, RPMTAG_BUILDHOST, buildHost());
+-	headerPutUint32(pkg->header, RPMTAG_BUILDTIME, getBuildTime(), 1);
++	headerPutString(pkg->header, RPMTAG_BUILDHOST, buildHost);
++	headerPutUint32(pkg->header, RPMTAG_BUILDTIME, &buildTime, 1);
+ 
+ 	if (spec->sourcePkgId != NULL) {
+ 	    headerPutBin(pkg->header, RPMTAG_SOURCEPKGID, spec->sourcePkgId,16);
+@@ -604,7 +552,7 @@ static rpmRC packageBinary(rpmSpec spec, Package pkg, const char *cookie, int ch
+ 	    free(binRpm);
+ 	}
+ 
+-	rc = writeRPM(pkg, NULL, *filename, NULL);
++	rc = writeRPM(pkg, NULL, *filename, NULL, buildTime, buildHost);
+ 	if (rc == RPMRC_OK) {
+ 	    /* Do check each written package if enabled */
+ 	    char *pkgcheck = rpmExpand("%{?_build_pkgcheck} ", *filename, NULL);
+@@ -624,7 +572,7 @@ struct binaryPackageTaskData
+     struct binaryPackageTaskData *next;
+ };
+ 
+-static struct binaryPackageTaskData* runBinaryPackageTasks(rpmSpec spec, const char *cookie, int cheating)
++static struct binaryPackageTaskData* runBinaryPackageTasks(rpmSpec spec, const char *cookie, int cheating, rpm_time_t buildTime, char* buildHost)
+ {
+     struct binaryPackageTaskData *tasks = NULL;
+     struct binaryPackageTaskData *task = NULL;
+@@ -636,7 +584,7 @@ static struct binaryPackageTaskData* runBinaryPackageTasks(rpmSpec spec, const c
+         if (pkg == spec->packages) {
+             // the first package needs to be processed ahead of others, as they copy
+             // changelog data from it, and so otherwise data races would happen
+-            task->result = packageBinary(spec, pkg, cookie, cheating, &(task->filename));
++            task->result = packageBinary(spec, pkg, cookie, cheating, &(task->filename), buildTime, buildHost);
+             rpmlog(RPMLOG_NOTICE, _("Finished binary package job, result %d, filename %s\n"), task->result, task->filename);
+             tasks = task;
+         }
+@@ -653,7 +601,7 @@ static struct binaryPackageTaskData* runBinaryPackageTasks(rpmSpec spec, const c
+         if (task != tasks)
+         #pragma omp task
+         {
+-            task->result = packageBinary(spec, task->pkg, cookie, cheating, &(task->filename));
++            task->result = packageBinary(spec, task->pkg, cookie, cheating, &(task->filename), buildTime, buildHost);
+             rpmlog(RPMLOG_NOTICE, _("Finished binary package job, result %d, filename %s\n"), task->result, task->filename);
+         }
+     }
+@@ -671,11 +619,11 @@ static void freeBinaryPackageTasks(struct binaryPackageTaskData* tasks)
+     }
+ }
+ 
+-rpmRC packageBinaries(rpmSpec spec, const char *cookie, int cheating)
++rpmRC packageBinaries(rpmSpec spec, const char *cookie, int cheating, rpm_time_t buildTime, char* buildHost)
+ {
+     char *pkglist = NULL;
+ 
+-    struct binaryPackageTaskData *tasks = runBinaryPackageTasks(spec, cookie, cheating);
++    struct binaryPackageTaskData *tasks = runBinaryPackageTasks(spec, cookie, cheating, buildTime, buildHost);
+ 
+     for (struct binaryPackageTaskData *task = tasks; task != NULL; task = task->next) {
+         if (task->result == RPMRC_OK) {
+@@ -702,22 +650,22 @@ rpmRC packageBinaries(rpmSpec spec, const char *cookie, int cheating)
+     return RPMRC_OK;
+ }
+ 
+-rpmRC packageSources(rpmSpec spec, char **cookie)
++rpmRC packageSources(rpmSpec spec, char **cookie, rpm_time_t buildTime, char* buildHost)
+ {
+     Package sourcePkg = spec->sourcePackage;
+     rpmRC rc;
+ 
+     /* Add some cruft */
+     headerPutString(sourcePkg->header, RPMTAG_RPMVERSION, VERSION);
+-    headerPutString(sourcePkg->header, RPMTAG_BUILDHOST, buildHost());
+-    headerPutUint32(sourcePkg->header, RPMTAG_BUILDTIME, getBuildTime(), 1);
++    headerPutString(sourcePkg->header, RPMTAG_BUILDHOST, buildHost);
++    headerPutUint32(sourcePkg->header, RPMTAG_BUILDTIME, &buildTime, 1);
+ 
+     /* XXX this should be %_srpmdir */
+     {	char *fn = rpmGetPath("%{_srcrpmdir}/", spec->sourceRpmName,NULL);
+ 	char *pkgcheck = rpmExpand("%{?_build_pkgcheck_srpm} ", fn, NULL);
+ 
+ 	spec->sourcePkgId = NULL;
+-	rc = writeRPM(sourcePkg, &spec->sourcePkgId, fn, cookie);
++	rc = writeRPM(sourcePkg, &spec->sourcePkgId, fn, cookie, buildTime, buildHost);
+ 
+ 	/* Do check SRPM package if enabled */
+ 	if (rc == RPMRC_OK && pkgcheck[0] != ' ') {
+diff --git a/build/rpmbuild_internal.h b/build/rpmbuild_internal.h
+index 8351a6a34..797337ca7 100644
+--- a/build/rpmbuild_internal.h
++++ b/build/rpmbuild_internal.h
+@@ -408,19 +408,23 @@ rpmRC processSourceFiles(rpmSpec spec, rpmBuildPkgFlags pkgFlags);
+  * @param spec		spec file control structure
+  * @param cookie	build identifier "cookie" or NULL
+  * @param cheating	was build shortcircuited?
++ * @param buildTime	the build timestamp that goes into packages
++ * @param buildHost	the hostname where the build is happening
+  * @return		RPMRC_OK on success
+  */
+ RPM_GNUC_INTERNAL
+-rpmRC packageBinaries(rpmSpec spec, const char *cookie, int cheating);
++rpmRC packageBinaries(rpmSpec spec, const char *cookie, int cheating, rpm_time_t buildTime, char* buildHost);
+ 
+ /** \ingroup rpmbuild
+  * Generate source package.
+  * @param spec		spec file control structure
+  * @retval cookie	build identifier "cookie" or NULL
++ * @param buildTime	the build timestamp that goes into packages
++ * @param buildHost	the hostname where the build is happening
+  * @return		RPMRC_OK on success
+  */
+ RPM_GNUC_INTERNAL
+-rpmRC packageSources(rpmSpec spec, char **cookie);
++rpmRC packageSources(rpmSpec spec, char **cookie, rpm_time_t buildTime, char* buildHost);
+ 
+ RPM_GNUC_INTERNAL
+ int addLangTag(rpmSpec spec, Header h, rpmTagVal tag,
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/rpm/rpm_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/rpm/rpm_git.bb
index b0f5780..7866314 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/rpm/rpm_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/rpm/rpm_git.bb
@@ -38,8 +38,13 @@
            file://0011-Do-not-require-that-ELF-binaries-are-executable-to-b.patch \
            file://0012-Use-conditional-to-access-_docdir-in-macros.in.patch \
            file://0013-Add-a-new-option-alldeps-to-rpmdeps.patch \
+           file://0001-Split-binary-package-building-into-a-separate-functi.patch \
+           file://0002-Run-binary-package-creation-via-thread-pools.patch \
+           file://0003-rpmstrpool.c-make-operations-over-string-pools-threa.patch \
+           file://0004-build-pack.c-remove-static-local-variables-from-buil.patch \
            file://0001-perl-disable-auto-reqs.patch \
            "
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 PV = "4.13.90+git${SRCPV}"
 PE = "1"
@@ -47,8 +52,8 @@
 
 S = "${WORKDIR}/git"
 
-DEPENDS = "nss libarchive db file popt xz dbus elfutils python3"
-DEPENDS_append_class-native = " file-replacement-native"
+DEPENDS = "nss libarchive db file popt xz bzip2 dbus elfutils python3"
+DEPENDS_append_class-native = " file-replacement-native bzip2-replacement-native"
 
 inherit autotools gettext pkgconfig python3native
 export PYTHON_ABI
@@ -68,6 +73,9 @@
 
 BBCLASSEXTEND = "native nativesdk"
 
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[imaevm] = "--with-imaevm,,ima-evm-utils"
+
 # Direct rpm-native to read configuration from our sysroot, not the one it was compiled in
 # libmagic also has sysroot path contamination, so override it
 do_install_append_class-native() {
@@ -99,7 +107,8 @@
 }
 
 do_install_append () {
-	sed -i -e 's:${HOSTTOOLS_DIR}/::g' ${D}/${libdir}/rpm/macros
+	sed -i -e 's:${HOSTTOOLS_DIR}/::g' \
+	    ${D}/${libdir}/rpm/macros
 
 	sed -i -e 's|/usr/bin/python|${USRBINPATH}/env ${PYTHON_PN}|' \
 	    ${D}${libdir}/rpm/pythondistdeps.py
@@ -119,3 +128,11 @@
 RPROVIDES_${PN} += "rpm-build"
 
 RDEPENDS_${PN} = "bash perl python3-core"
+
+PACKAGE_PREPROCESS_FUNCS += "rpm_package_preprocess"
+
+# Do not specify a sysroot when compiling on a target.
+rpm_package_preprocess () {
+	sed -i -e 's:--sysroot[^ ]*::g' \
+	    ${PKGD}/${libdir}/rpm/macros
+}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby/CVE-2017-14064.patch b/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby/CVE-2017-14064.patch
deleted file mode 100644
index 700d1bc..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby/CVE-2017-14064.patch
+++ /dev/null
@@ -1,353 +0,0 @@
-From d86d283fcb35d1442a121b92030884523908a331 Mon Sep 17 00:00:00 2001
-From: nagachika <nagachika@b2dd03c8-39d4-4d8f-98ff-823fe69b080e>
-Date: Sat, 22 Apr 2017 07:29:01 +0000
-Subject: [PATCH] merge revision(s) 58323,58324:
-
-	Merge json-2.0.4.
-
-	  * https://github.com/flori/json/releases/tag/v2.0.4
-	  * https://github.com/flori/json/blob/09fabeb03e73ed88dc8ce8f19d76ac59e51dae20/CHANGES.md#2017-03-23-204
-	Use `assert_raise` instead of `assert_raises`.
-
-git-svn-id: svn+ssh://ci.ruby-lang.org/ruby/branches/ruby_2_4@58445 b2dd03c8-39d4-4d8f-98ff-823fe69b080e
-
-Upstream-Status: Backport
-CVE: CVE-2017-14064
-
-Signed-off-by: Armin Kuster <akuster@mvisa.com>
-
----
- ext/json/fbuffer/fbuffer.h       |   3 ---
- ext/json/generator/generator.c   |  12 +++++-----
- ext/json/generator/generator.h   |   1 -
- ext/json/json.gemspec            | Bin 5473 -> 5474 bytes
- ext/json/lib/json/version.rb     |   2 +-
- ext/json/parser/parser.c         |  48 +++++++++++++++++++++++----------------
- ext/json/parser/parser.rl        |  14 +++++++++---
- test/json/json_encoding_test.rb  |   2 ++
- test/json/json_generator_test.rb |   0
- version.h                        |   2 +-
- 10 files changed, 49 insertions(+), 35 deletions(-)
- mode change 100755 => 100644 test/json/json_generator_test.rb
-
-Index: ruby-2.4.0/ext/json/fbuffer/fbuffer.h
-===================================================================
---- ruby-2.4.0.orig/ext/json/fbuffer/fbuffer.h
-+++ ruby-2.4.0/ext/json/fbuffer/fbuffer.h
-@@ -12,9 +12,6 @@
- #define RFLOAT_VALUE(val) (RFLOAT(val)->value)
- #endif
- 
--#ifndef RARRAY_PTR
--#define RARRAY_PTR(ARRAY) RARRAY(ARRAY)->ptr
--#endif
- #ifndef RARRAY_LEN
- #define RARRAY_LEN(ARRAY) RARRAY(ARRAY)->len
- #endif
-Index: ruby-2.4.0/ext/json/generator/generator.c
-===================================================================
---- ruby-2.4.0.orig/ext/json/generator/generator.c
-+++ ruby-2.4.0/ext/json/generator/generator.c
-@@ -308,7 +308,7 @@ static char *fstrndup(const char *ptr, u
-   char *result;
-   if (len <= 0) return NULL;
-   result = ALLOC_N(char, len);
--  memccpy(result, ptr, 0, len);
-+  memcpy(result, ptr, len);
-   return result;
- }
- 
-@@ -1062,7 +1062,7 @@ static VALUE cState_indent_set(VALUE sel
-         }
-     } else {
-         if (state->indent) ruby_xfree(state->indent);
--        state->indent = strdup(RSTRING_PTR(indent));
-+        state->indent = fstrndup(RSTRING_PTR(indent), len);
-         state->indent_len = len;
-     }
-     return Qnil;
-@@ -1100,7 +1100,7 @@ static VALUE cState_space_set(VALUE self
-         }
-     } else {
-         if (state->space) ruby_xfree(state->space);
--        state->space = strdup(RSTRING_PTR(space));
-+        state->space = fstrndup(RSTRING_PTR(space), len);
-         state->space_len = len;
-     }
-     return Qnil;
-@@ -1136,7 +1136,7 @@ static VALUE cState_space_before_set(VAL
-         }
-     } else {
-         if (state->space_before) ruby_xfree(state->space_before);
--        state->space_before = strdup(RSTRING_PTR(space_before));
-+        state->space_before = fstrndup(RSTRING_PTR(space_before), len);
-         state->space_before_len = len;
-     }
-     return Qnil;
-@@ -1173,7 +1173,7 @@ static VALUE cState_object_nl_set(VALUE
-         }
-     } else {
-         if (state->object_nl) ruby_xfree(state->object_nl);
--        state->object_nl = strdup(RSTRING_PTR(object_nl));
-+        state->object_nl = fstrndup(RSTRING_PTR(object_nl), len);
-         state->object_nl_len = len;
-     }
-     return Qnil;
-@@ -1208,7 +1208,7 @@ static VALUE cState_array_nl_set(VALUE s
-         }
-     } else {
-         if (state->array_nl) ruby_xfree(state->array_nl);
--        state->array_nl = strdup(RSTRING_PTR(array_nl));
-+        state->array_nl = fstrndup(RSTRING_PTR(array_nl), len);
-         state->array_nl_len = len;
-     }
-     return Qnil;
-Index: ruby-2.4.0/ext/json/generator/generator.h
-===================================================================
---- ruby-2.4.0.orig/ext/json/generator/generator.h
-+++ ruby-2.4.0/ext/json/generator/generator.h
-@@ -1,7 +1,6 @@
- #ifndef _GENERATOR_H_
- #define _GENERATOR_H_
- 
--#include <string.h>
- #include <math.h>
- #include <ctype.h>
- 
-Index: ruby-2.4.0/ext/json/lib/json/version.rb
-===================================================================
---- ruby-2.4.0.orig/ext/json/lib/json/version.rb
-+++ ruby-2.4.0/ext/json/lib/json/version.rb
-@@ -1,7 +1,7 @@
- # frozen_string_literal: false
- module JSON
-   # JSON version
--  VERSION         = '2.0.2'
-+  VERSION         = '2.0.4'
-   VERSION_ARRAY   = VERSION.split(/\./).map { |x| x.to_i } # :nodoc:
-   VERSION_MAJOR   = VERSION_ARRAY[0] # :nodoc:
-   VERSION_MINOR   = VERSION_ARRAY[1] # :nodoc:
-Index: ruby-2.4.0/ext/json/parser/parser.c
-===================================================================
---- ruby-2.4.0.orig/ext/json/parser/parser.c
-+++ ruby-2.4.0/ext/json/parser/parser.c
-@@ -1435,13 +1435,21 @@ static VALUE json_string_unescape(VALUE
-                     break;
-                 case 'u':
-                     if (pe > stringEnd - 4) {
--                        return Qnil;
-+                      rb_enc_raise(
-+                        EXC_ENCODING eParserError,
-+                        "%u: incomplete unicode character escape sequence at '%s'", __LINE__, p
-+                      );
-                     } else {
-                         UTF32 ch = unescape_unicode((unsigned char *) ++pe);
-                         pe += 3;
-                         if (UNI_SUR_HIGH_START == (ch & 0xFC00)) {
-                             pe++;
--                            if (pe > stringEnd - 6) return Qnil;
-+                            if (pe > stringEnd - 6) {
-+                              rb_enc_raise(
-+                                EXC_ENCODING eParserError,
-+                                "%u: incomplete surrogate pair at '%s'", __LINE__, p
-+                                );
-+                            }
-                             if (pe[0] == '\\' && pe[1] == 'u') {
-                                 UTF32 sur = unescape_unicode((unsigned char *) pe + 2);
-                                 ch = (((ch & 0x3F) << 10) | ((((ch >> 6) & 0xF) + 1) << 16)
-@@ -1471,7 +1479,7 @@ static VALUE json_string_unescape(VALUE
- }
- 
- 
--#line 1475 "parser.c"
-+#line 1483 "parser.c"
- enum {JSON_string_start = 1};
- enum {JSON_string_first_final = 8};
- enum {JSON_string_error = 0};
-@@ -1479,7 +1487,7 @@ enum {JSON_string_error = 0};
- enum {JSON_string_en_main = 1};
- 
- 
--#line 504 "parser.rl"
-+#line 512 "parser.rl"
- 
- 
- static int
-@@ -1501,15 +1509,15 @@ static char *JSON_parse_string(JSON_Pars
- 
-     *result = rb_str_buf_new(0);
- 
--#line 1505 "parser.c"
-+#line 1513 "parser.c"
- 	{
- 	cs = JSON_string_start;
- 	}
- 
--#line 525 "parser.rl"
-+#line 533 "parser.rl"
-     json->memo = p;
- 
--#line 1513 "parser.c"
-+#line 1521 "parser.c"
- 	{
- 	if ( p == pe )
- 		goto _test_eof;
-@@ -1534,7 +1542,7 @@ case 2:
- 		goto st0;
- 	goto st2;
- tr2:
--#line 490 "parser.rl"
-+#line 498 "parser.rl"
- 	{
-         *result = json_string_unescape(*result, json->memo + 1, p);
-         if (NIL_P(*result)) {
-@@ -1545,14 +1553,14 @@ tr2:
-             {p = (( p + 1))-1;}
-         }
-     }
--#line 501 "parser.rl"
-+#line 509 "parser.rl"
- 	{ p--; {p++; cs = 8; goto _out;} }
- 	goto st8;
- st8:
- 	if ( ++p == pe )
- 		goto _test_eof8;
- case 8:
--#line 1556 "parser.c"
-+#line 1564 "parser.c"
- 	goto st0;
- st3:
- 	if ( ++p == pe )
-@@ -1628,7 +1636,7 @@ case 7:
- 	_out: {}
- 	}
- 
--#line 527 "parser.rl"
-+#line 535 "parser.rl"
- 
-     if (json->create_additions && RTEST(match_string = json->match_string)) {
-           VALUE klass;
-@@ -1675,7 +1683,7 @@ static VALUE convert_encoding(VALUE sour
-     }
-     FORCE_UTF8(source);
-   } else {
--    source = rb_str_conv_enc(source, NULL, rb_utf8_encoding());
-+    source = rb_str_conv_enc(source, rb_enc_get(source), rb_utf8_encoding());
-   }
- #endif
-     return source;
-@@ -1808,7 +1816,7 @@ static VALUE cParser_initialize(int argc
- }
- 
- 
--#line 1812 "parser.c"
-+#line 1820 "parser.c"
- enum {JSON_start = 1};
- enum {JSON_first_final = 10};
- enum {JSON_error = 0};
-@@ -1816,7 +1824,7 @@ enum {JSON_error = 0};
- enum {JSON_en_main = 1};
- 
- 
--#line 720 "parser.rl"
-+#line 728 "parser.rl"
- 
- 
- /*
-@@ -1833,16 +1841,16 @@ static VALUE cParser_parse(VALUE self)
-   GET_PARSER;
- 
- 
--#line 1837 "parser.c"
-+#line 1845 "parser.c"
- 	{
- 	cs = JSON_start;
- 	}
- 
--#line 736 "parser.rl"
-+#line 744 "parser.rl"
-   p = json->source;
-   pe = p + json->len;
- 
--#line 1846 "parser.c"
-+#line 1854 "parser.c"
- 	{
- 	if ( p == pe )
- 		goto _test_eof;
-@@ -1876,7 +1884,7 @@ st0:
- cs = 0;
- 	goto _out;
- tr2:
--#line 712 "parser.rl"
-+#line 720 "parser.rl"
- 	{
-         char *np = JSON_parse_value(json, p, pe, &result, 0);
-         if (np == NULL) { p--; {p++; cs = 10; goto _out;} } else {p = (( np))-1;}
-@@ -1886,7 +1894,7 @@ st10:
- 	if ( ++p == pe )
- 		goto _test_eof10;
- case 10:
--#line 1890 "parser.c"
-+#line 1898 "parser.c"
- 	switch( (*p) ) {
- 		case 13: goto st10;
- 		case 32: goto st10;
-@@ -1975,7 +1983,7 @@ case 9:
- 	_out: {}
- 	}
- 
--#line 739 "parser.rl"
-+#line 747 "parser.rl"
- 
-   if (cs >= JSON_first_final && p == pe) {
-     return result;
-Index: ruby-2.4.0/ext/json/parser/parser.rl
-===================================================================
---- ruby-2.4.0.orig/ext/json/parser/parser.rl
-+++ ruby-2.4.0/ext/json/parser/parser.rl
-@@ -446,13 +446,21 @@ static VALUE json_string_unescape(VALUE
-                     break;
-                 case 'u':
-                     if (pe > stringEnd - 4) {
--                        return Qnil;
-+                      rb_enc_raise(
-+                        EXC_ENCODING eParserError,
-+                        "%u: incomplete unicode character escape sequence at '%s'", __LINE__, p
-+                      );
-                     } else {
-                         UTF32 ch = unescape_unicode((unsigned char *) ++pe);
-                         pe += 3;
-                         if (UNI_SUR_HIGH_START == (ch & 0xFC00)) {
-                             pe++;
--                            if (pe > stringEnd - 6) return Qnil;
-+                            if (pe > stringEnd - 6) {
-+                              rb_enc_raise(
-+                                EXC_ENCODING eParserError,
-+                                "%u: incomplete surrogate pair at '%s'", __LINE__, p
-+                                );
-+                            }
-                             if (pe[0] == '\\' && pe[1] == 'u') {
-                                 UTF32 sur = unescape_unicode((unsigned char *) pe + 2);
-                                 ch = (((ch & 0x3F) << 10) | ((((ch >> 6) & 0xF) + 1) << 16)
-@@ -570,7 +578,7 @@ static VALUE convert_encoding(VALUE sour
-     }
-     FORCE_UTF8(source);
-   } else {
--    source = rb_str_conv_enc(source, NULL, rb_utf8_encoding());
-+    source = rb_str_conv_enc(source, rb_enc_get(source), rb_utf8_encoding());
-   }
- #endif
-     return source;
-Index: ruby-2.4.0/test/json/json_encoding_test.rb
-===================================================================
---- ruby-2.4.0.orig/test/json/json_encoding_test.rb
-+++ ruby-2.4.0/test/json/json_encoding_test.rb
-@@ -79,6 +79,8 @@ class JSONEncodingTest < Test::Unit::Tes
-     json = '["\ud840\udc01"]'
-     assert_equal json, generate(utf8, :ascii_only => true)
-     assert_equal utf8, parse(json)
-+    assert_raise(JSON::ParserError) { parse('"\u"') }
-+    assert_raise(JSON::ParserError) { parse('"\ud800"') }
-   end
- 
-   def test_chars
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby/extmk.patch b/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby/extmk.patch
index 8b68450..a5b2184 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby/extmk.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby/extmk.patch
@@ -1,3 +1,4 @@
+Upstream-Status: Pending
 diff -ru ruby-1.8.7-p248.orig/ext/extmk.rb ruby-1.8.7-p248/ext/extmk.rb
 --- ruby-1.8.7-p248.orig/ext/extmk.rb	2009-12-24 03:01:58.000000000 -0600
 +++ ruby-1.8.7-p248/ext/extmk.rb	2010-02-12 15:55:27.370061558 -0600
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby/ruby-CVE-2017-14064.patch b/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby/ruby-CVE-2017-14064.patch
new file mode 100644
index 0000000..88e693c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby/ruby-CVE-2017-14064.patch
@@ -0,0 +1,87 @@
+From 8f782fd8e181d9cfe9387ded43a5ca9692266b85 Mon Sep 17 00:00:00 2001
+From: Florian Frank <flori@ping.de>
+Date: Thu, 2 Mar 2017 12:12:33 +0100
+Subject: [PATCH] Fix arbitrary heap exposure problem
+
+Upstream-Status: Backport
+CVE: CVE-2017-14064
+
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
+---
+ ext/json/generator/generator.c | 12 ++++++------
+ ext/json/generator/generator.h |  1 -
+ 2 files changed, 6 insertions(+), 7 deletions(-)
+
+diff --git a/ext/json/generator/generator.c b/ext/json/generator/generator.c
+index ef85bb7..c88818c 100644
+--- a/ext/json/generator/generator.c
++++ b/ext/json/generator/generator.c
+@@ -308,7 +308,7 @@ static char *fstrndup(const char *ptr, unsigned long len) {
+   char *result;
+   if (len <= 0) return NULL;
+   result = ALLOC_N(char, len);
+-  memccpy(result, ptr, 0, len);
++  memcpy(result, ptr, len);
+   return result;
+ }
+ 
+@@ -1062,7 +1062,7 @@ static VALUE cState_indent_set(VALUE self, VALUE indent)
+         }
+     } else {
+         if (state->indent) ruby_xfree(state->indent);
+-        state->indent = strdup(RSTRING_PTR(indent));
++        state->indent = fstrndup(RSTRING_PTR(indent), len);
+         state->indent_len = len;
+     }
+     return Qnil;
+@@ -1100,7 +1100,7 @@ static VALUE cState_space_set(VALUE self, VALUE space)
+         }
+     } else {
+         if (state->space) ruby_xfree(state->space);
+-        state->space = strdup(RSTRING_PTR(space));
++        state->space = fstrndup(RSTRING_PTR(space), len);
+         state->space_len = len;
+     }
+     return Qnil;
+@@ -1136,7 +1136,7 @@ static VALUE cState_space_before_set(VALUE self, VALUE space_before)
+         }
+     } else {
+         if (state->space_before) ruby_xfree(state->space_before);
+-        state->space_before = strdup(RSTRING_PTR(space_before));
++        state->space_before = fstrndup(RSTRING_PTR(space_before), len);
+         state->space_before_len = len;
+     }
+     return Qnil;
+@@ -1173,7 +1173,7 @@ static VALUE cState_object_nl_set(VALUE self, VALUE object_nl)
+         }
+     } else {
+         if (state->object_nl) ruby_xfree(state->object_nl);
+-        state->object_nl = strdup(RSTRING_PTR(object_nl));
++        state->object_nl = fstrndup(RSTRING_PTR(object_nl), len);
+         state->object_nl_len = len;
+     }
+     return Qnil;
+@@ -1208,7 +1208,7 @@ static VALUE cState_array_nl_set(VALUE self, VALUE array_nl)
+         }
+     } else {
+         if (state->array_nl) ruby_xfree(state->array_nl);
+-        state->array_nl = strdup(RSTRING_PTR(array_nl));
++        state->array_nl = fstrndup(RSTRING_PTR(array_nl), len);
+         state->array_nl_len = len;
+     }
+     return Qnil;
+diff --git a/ext/json/generator/generator.h b/ext/json/generator/generator.h
+index 900b4d5..c367a62 100644
+--- a/ext/json/generator/generator.h
++++ b/ext/json/generator/generator.h
+@@ -1,7 +1,6 @@
+ #ifndef _GENERATOR_H_
+ #define _GENERATOR_H_
+ 
+-#include <string.h>
+ #include <math.h>
+ #include <ctype.h>
+ 
+-- 
+2.10.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby_2.4.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby_2.4.0.bb
deleted file mode 100644
index 8cc52d6..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby_2.4.0.bb
+++ /dev/null
@@ -1,53 +0,0 @@
-require ruby.inc
-
-SRC_URI += " \
-           file://ruby-CVE-2017-9224.patch \
-           file://ruby-CVE-2017-9226.patch \
-           file://ruby-CVE-2017-9227.patch \
-           file://ruby-CVE-2017-9228.patch \
-           file://ruby-CVE-2017-9229.patch \
-           file://CVE-2017-14064.patch \
-           "
-
-SRC_URI[md5sum] = "7e9485dcdb86ff52662728de2003e625"
-SRC_URI[sha256sum] = "152fd0bd15a90b4a18213448f485d4b53e9f7662e1508190aa5b702446b29e3d"
-
-# it's unknown to configure script, but then passed to extconf.rb
-# maybe it's not really needed as we're hardcoding the result with
-# 0001-socket-extconf-hardcode-wide-getaddr-info-test-outco.patch
-UNKNOWN_CONFIGURE_WHITELIST += "--enable-wide-getaddrinfo"
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG += "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)}"
-
-PACKAGECONFIG[valgrind] = "--with-valgrind=yes, --with-valgrind=no, valgrind"
-PACKAGECONFIG[gpm] = "--with-gmp=yes, --with-gmp=no, gmp"
-PACKAGECONFIG[ipv6] = ",--enable-wide-getaddrinfo,"
-
-EXTRA_AUTORECONF += "--exclude=aclocal"
-
-EXTRA_OECONF = "\
-    --disable-versioned-paths \
-    --disable-rpath \
-    --disable-dtrace \
-    --enable-shared \
-    --enable-load-relative \
-"
-
-do_install() {
-    oe_runmake 'DESTDIR=${D}' install
-}
-
-PACKAGES =+ "${PN}-ri-docs ${PN}-rdoc"
-
-SUMMARY_${PN}-ri-docs = "ri (Ruby Interactive) documentation for the Ruby standard library"
-RDEPENDS_${PN}-ri-docs = "${PN}"
-FILES_${PN}-ri-docs += "${datadir}/ri"
-
-SUMMARY_${PN}-rdoc = "RDoc documentation generator from Ruby source"
-RDEPENDS_${PN}-rdoc = "${PN}"
-FILES_${PN}-rdoc += "${libdir}/ruby/*/rdoc ${bindir}/rdoc"
-
-FILES_${PN} += "${datadir}/rubygems"
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby_2.4.1.bb b/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby_2.4.1.bb
new file mode 100644
index 0000000..7d27ac8
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/ruby/ruby_2.4.1.bb
@@ -0,0 +1,53 @@
+require ruby.inc
+
+SRC_URI += " \
+           file://ruby-CVE-2017-9224.patch \
+           file://ruby-CVE-2017-9226.patch \
+           file://ruby-CVE-2017-9227.patch \
+           file://ruby-CVE-2017-9228.patch \
+           file://ruby-CVE-2017-9229.patch \
+           file://ruby-CVE-2017-14064.patch \
+           "
+
+SRC_URI[md5sum] = "782bca562e474dd25956dd0017d92677"
+SRC_URI[sha256sum] = "a330e10d5cb5e53b3a0078326c5731888bb55e32c4abfeb27d9e7f8e5d000250"
+
+# This option is unknown to the configure script but is then passed to extconf.rb.
+# It may not really be needed, as we hardcode the result with
+# 0001-socket-extconf-hardcode-wide-getaddr-info-test-outco.patch
+UNKNOWN_CONFIGURE_WHITELIST += "--enable-wide-getaddrinfo"
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG += "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)}"
+
+PACKAGECONFIG[valgrind] = "--with-valgrind=yes, --with-valgrind=no, valgrind"
+PACKAGECONFIG[gpm] = "--with-gmp=yes, --with-gmp=no, gmp"
+PACKAGECONFIG[ipv6] = ",--enable-wide-getaddrinfo,"
+
+EXTRA_AUTORECONF += "--exclude=aclocal"
+
+EXTRA_OECONF = "\
+    --disable-versioned-paths \
+    --disable-rpath \
+    --disable-dtrace \
+    --enable-shared \
+    --enable-load-relative \
+"
+
+do_install() {
+    oe_runmake 'DESTDIR=${D}' install
+}
+
+PACKAGES =+ "${PN}-ri-docs ${PN}-rdoc"
+
+SUMMARY_${PN}-ri-docs = "ri (Ruby Interactive) documentation for the Ruby standard library"
+RDEPENDS_${PN}-ri-docs = "${PN}"
+FILES_${PN}-ri-docs += "${datadir}/ri"
+
+SUMMARY_${PN}-rdoc = "RDoc documentation generator from Ruby source"
+RDEPENDS_${PN}-rdoc = "${PN}"
+FILES_${PN}-rdoc += "${libdir}/ruby/*/rdoc ${bindir}/rdoc"
+
+FILES_${PN} += "${datadir}/rubygems"
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/run-postinsts/run-postinsts/run-postinsts.service b/import-layers/yocto-poky/meta/recipes-devtools/run-postinsts/run-postinsts/run-postinsts.service
index 85a0439..1b71a1f 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/run-postinsts/run-postinsts/run-postinsts.service
+++ b/import-layers/yocto-poky/meta/recipes-devtools/run-postinsts/run-postinsts/run-postinsts.service
@@ -8,7 +8,7 @@
 [Service]
 Type=oneshot
 ExecStart=#SBINDIR#/run-postinsts
-ExecStartPost=#BASE_BINDIR#/systemctl disable run-postinsts.service
+ExecStartPost=#BASE_BINDIR#/systemctl --no-reload disable run-postinsts.service
 RemainAfterExit=No
 TimeoutSec=0
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/squashfs-tools/squashfs-tools/squashfs-tools-4.3-sysmacros.patch b/import-layers/yocto-poky/meta/recipes-devtools/squashfs-tools/squashfs-tools/squashfs-tools-4.3-sysmacros.patch
new file mode 100644
index 0000000..39521a7
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/squashfs-tools/squashfs-tools/squashfs-tools-4.3-sysmacros.patch
@@ -0,0 +1,32 @@
+From https://gitweb.gentoo.org/repo/gentoo.git/tree/sys-fs/squashfs-tools/files/squashfs-tools-4.3-sysmacros.patch
+
+Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
+
+Upstream-Status: Pending
+
+
+sys/types.h might not always include sys/sysmacros.h for major/minor/makedev
+
+--- a/squashfs-tools/mksquashfs.c
++++ b/squashfs-tools/mksquashfs.c
+@@ -59,6 +59,7 @@
+ #else
+ #include <endian.h>
+ #include <sys/sysinfo.h>
++#include <sys/sysmacros.h>
+ #endif
+ 
+ #include "squashfs_fs.h"
+--- a/squashfs-tools/unsquashfs.c
++++ b/squashfs-tools/unsquashfs.c
+@@ -38,6 +38,10 @@
+ #include <limits.h>
+ #include <ctype.h>
+ 
++#ifdef linux
++#include <sys/sysmacros.h>
++#endif
++
+ struct cache *fragment_cache, *data_cache;
+ struct queue *to_reader, *to_inflate, *to_writer, *from_writer;
+ pthread_t *thread, *inflator_thread;
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/squashfs-tools/squashfs-tools_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/squashfs-tools/squashfs-tools_git.bb
index 33ed09a..0f99170 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/squashfs-tools/squashfs-tools_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/squashfs-tools/squashfs-tools_git.bb
@@ -13,8 +13,10 @@
 SRC_URI = "git://github.com/plougher/squashfs-tools.git;protocol=https \
            http://downloads.sourceforge.net/sevenzip/lzma465.tar.bz2;name=lzma \
            file://0001-mksquashfs.c-get-inline-functions-work-with-C99.patch;striplevel=2 \
+           file://squashfs-tools-4.3-sysmacros.patch;striplevel=2 \
            file://fix-compat.patch \
 "
+UPSTREAM_VERSION_UNKNOWN = "1"
 SRC_URI[lzma.md5sum] = "29d5ffd03a5a3e51aef6a74e9eafb759"
 SRC_URI[lzma.sha256sum] = "c935fd04dd8e0e8c688a3078f3675d699679a90be81c12686837e0880aa0fa1e"
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/strace/strace/0001-tests-sigaction-Check-for-mips-and-alpha-before-usin.patch b/import-layers/yocto-poky/meta/recipes-devtools/strace/strace/0001-tests-sigaction-Check-for-mips-and-alpha-before-usin.patch
new file mode 100644
index 0000000..52096b2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/strace/strace/0001-tests-sigaction-Check-for-mips-and-alpha-before-usin.patch
@@ -0,0 +1,37 @@
+From 9f3fd388ae7c46420bccba405468690ed46d669a Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 18 Sep 2017 22:51:32 -0700
+Subject: [PATCH] tests/sigaction: Check for mips and alpha before using
+ sa_restorer
+
+The local structure definition does not include the restorer member
+for mips and alpha, so we need to match that assumption here where
+it is being set.
+
+Fixes
+| ../../strace-4.18/tests/sigaction.c:177:36: error: 'struct_set_sa {aka struct set_sa}' has no member named 'restorer'
+|  # define SA_RESTORER_ARGS , new_act->restorer
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ tests/sigaction.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tests/sigaction.c b/tests/sigaction.c
+index 7b46944..f46cda7 100644
+--- a/tests/sigaction.c
++++ b/tests/sigaction.c
+@@ -170,7 +170,7 @@ main(void)
+ 	sigdelset(mask.libc, SIGHUP);
+ 
+ 	memcpy(new_act->mask, mask.old, sizeof(mask.old));
+-#ifdef SA_RESTORER
++#if defined(SA_RESTORER) && !defined(MIPS) && !defined(ALPHA)
+ 	new_act->flags = SA_RESTORER;
+ 	new_act->restorer = (unsigned long) 0xdeadfacecafef00dULL;
+ # define SA_RESTORER_FMT ", sa_flags=SA_RESTORER, sa_restorer=%#lx"
+-- 
+2.14.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/strace/strace/Makefile-ptest.patch b/import-layers/yocto-poky/meta/recipes-devtools/strace/strace/Makefile-ptest.patch
index 876c2d8..97bcc90 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/strace/strace/Makefile-ptest.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/strace/strace/Makefile-ptest.patch
@@ -1,19 +1,23 @@
-strace: Add ptest
+From 0574ae9926308dcbca78bd8cd0f0f143f19cbcb5 Mon Sep 17 00:00:00 2001
+From: Gabriel Barbu <gabriel.barbu@enea.com>
+Date: Thu, 25 Jul 2013 15:28:33 +0200
+Subject: [PATCH 4/8] strace: Add ptest
 
 Upstream-Status: Inappropriate
 
 Signed-off-by: Gabriel Barbu <gabriel.barbu@enea.com>
 Signed-off-by: Chong Lu <Chong.Lu@windriver.com>
+
 ---
  configure.ac      |  2 +-
  tests/Makefile.am | 18 ++++++++++++++++++
  2 files changed, 19 insertions(+), 1 deletion(-)
 
 diff --git a/configure.ac b/configure.ac
-index b2b03c6..464a9dc 100644
+index 61d6425..6387c24 100644
 --- a/configure.ac
 +++ b/configure.ac
-@@ -39,7 +39,7 @@ AC_COPYRIGHT([Copyright (C) 1999-2017 The strace developers.])
+@@ -41,7 +41,7 @@ AC_COPYRIGHT([Copyright (C) 1999-]copyright_year[ The strace developers.])
  AC_CONFIG_SRCDIR([strace.c])
  AC_CONFIG_AUX_DIR([.])
  AC_CONFIG_HEADERS([config.h])
@@ -23,11 +27,11 @@
  AM_MAINTAINER_MODE
  AC_CANONICAL_HOST
 diff --git a/tests/Makefile.am b/tests/Makefile.am
-index 311d3bb..72f9022 100644
+index 5aa7f89..a55a355 100644
 --- a/tests/Makefile.am
 +++ b/tests/Makefile.am
-@@ -960,3 +960,21 @@ $(objects): scno.h
- CLEANFILES = ksysent.h $(TESTS:=.tmp)
+@@ -379,3 +379,21 @@ clean-local-check:
+ CLEANFILES = ksysent.h
  
  include ../scno.am
 +
@@ -47,4 +51,7 @@
 +		install $(srcdir)/$$file $(DESTDIR)/$(TESTDIR); \
 +		sed -i -e 's/$${srcdir=.}/./g' $(DESTDIR)/$(TESTDIR)/$$file; \
 +	done
-+	for i in net net-fd scm_rights-fd sigaction; do sed -i -e 's/$$srcdir/./g' $(DESTDIR)/$(TESTDIR)/$$i.test; done
++	for i in net scm_rights-fd rt_sigaction; do sed -i -e 's/$$srcdir/./g' $(DESTDIR)/$(TESTDIR)/$$i.test; done
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/strace/strace/update-gawk-paths.patch b/import-layers/yocto-poky/meta/recipes-devtools/strace/strace/update-gawk-paths.patch
index 94ee53c..f6ffa8e 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/strace/strace/update-gawk-paths.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/strace/strace/update-gawk-paths.patch
@@ -12,20 +12,20 @@
 
 Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
 ---
- mpers.awk                | 2 +-
- tests-m32/caps.awk       | 2 +-
- tests-m32/match.awk      | 2 +-
- tests-m32/sigaction.awk  | 2 +-
- tests-mx32/caps.awk      | 2 +-
- tests-mx32/match.awk     | 2 +-
- tests-mx32/sigaction.awk | 2 +-
- tests/caps.awk           | 2 +-
- tests/match.awk          | 2 +-
- tests/sigaction.awk      | 2 +-
+ mpers.awk                   | 2 +-
+ tests-m32/caps.awk          | 2 +-
+ tests-m32/match.awk         | 2 +-
+ tests-m32/rt_sigaction.awk  | 2 +-
+ tests-mx32/caps.awk         | 2 +-
+ tests-mx32/match.awk        | 2 +-
+ tests-mx32/rt_sigaction.awk | 2 +-
+ tests/caps.awk              | 2 +-
+ tests/match.awk             | 2 +-
+ tests/rt_sigaction.awk      | 2 +-
  10 files changed, 10 insertions(+), 10 deletions(-)
 
 diff --git a/mpers.awk b/mpers.awk
-index 99248c5..ff10520 100644
+index fe54763..b5238a8 100644
 --- a/mpers.awk
 +++ b/mpers.awk
 @@ -1,4 +1,4 @@
@@ -35,7 +35,7 @@
  # Copyright (c) 2015 Elvira Khabirova <lineprinter0@gmail.com>
  # Copyright (c) 2015-2016 Dmitry V. Levin <ldv@altlinux.org>
 diff --git a/tests-m32/caps.awk b/tests-m32/caps.awk
-index 67003ac..a66f1f0 100644
+index c6e31ef..5efc6cc 100644
 --- a/tests-m32/caps.awk
 +++ b/tests-m32/caps.awk
 @@ -1,4 +1,4 @@
@@ -54,18 +54,18 @@
  #
  # Copyright (c) 2014-2015 Dmitry V. Levin <ldv@altlinux.org>
  # All rights reserved.
-diff --git a/tests-m32/sigaction.awk b/tests-m32/sigaction.awk
-index 5c6b6d0..3e14464 100644
---- a/tests-m32/sigaction.awk
-+++ b/tests-m32/sigaction.awk
+diff --git a/tests-m32/rt_sigaction.awk b/tests-m32/rt_sigaction.awk
+index 9c3a9ed..8414243 100644
+--- a/tests-m32/rt_sigaction.awk
++++ b/tests-m32/rt_sigaction.awk
 @@ -1,4 +1,4 @@
 -#!/bin/gawk
 +#!/usr/bin/gawk
  #
  # Copyright (c) 2014-2015 Dmitry V. Levin <ldv@altlinux.org>
- # All rights reserved.
+ # Copyright (c) 2016 Elvira Khabirova <lineprinter0@gmail.com>
 diff --git a/tests-mx32/caps.awk b/tests-mx32/caps.awk
-index 67003ac..a66f1f0 100644
+index c6e31ef..5efc6cc 100644
 --- a/tests-mx32/caps.awk
 +++ b/tests-mx32/caps.awk
 @@ -1,4 +1,4 @@
@@ -84,18 +84,18 @@
  #
  # Copyright (c) 2014-2015 Dmitry V. Levin <ldv@altlinux.org>
  # All rights reserved.
-diff --git a/tests-mx32/sigaction.awk b/tests-mx32/sigaction.awk
-index 5c6b6d0..3e14464 100644
---- a/tests-mx32/sigaction.awk
-+++ b/tests-mx32/sigaction.awk
+diff --git a/tests-mx32/rt_sigaction.awk b/tests-mx32/rt_sigaction.awk
+index 9c3a9ed..8414243 100644
+--- a/tests-mx32/rt_sigaction.awk
++++ b/tests-mx32/rt_sigaction.awk
 @@ -1,4 +1,4 @@
 -#!/bin/gawk
 +#!/usr/bin/gawk
  #
  # Copyright (c) 2014-2015 Dmitry V. Levin <ldv@altlinux.org>
- # All rights reserved.
+ # Copyright (c) 2016 Elvira Khabirova <lineprinter0@gmail.com>
 diff --git a/tests/caps.awk b/tests/caps.awk
-index 67003ac..a66f1f0 100644
+index c6e31ef..5efc6cc 100644
 --- a/tests/caps.awk
 +++ b/tests/caps.awk
 @@ -1,4 +1,4 @@
@@ -114,13 +114,13 @@
  #
  # Copyright (c) 2014-2015 Dmitry V. Levin <ldv@altlinux.org>
  # All rights reserved.
-diff --git a/tests/sigaction.awk b/tests/sigaction.awk
-index 5c6b6d0..3e14464 100644
---- a/tests/sigaction.awk
-+++ b/tests/sigaction.awk
+diff --git a/tests/rt_sigaction.awk b/tests/rt_sigaction.awk
+index 9c3a9ed..8414243 100644
+--- a/tests/rt_sigaction.awk
++++ b/tests/rt_sigaction.awk
 @@ -1,4 +1,4 @@
 -#!/bin/gawk
 +#!/usr/bin/gawk
  #
  # Copyright (c) 2014-2015 Dmitry V. Levin <ldv@altlinux.org>
- # All rights reserved.
+ # Copyright (c) 2016 Elvira Khabirova <lineprinter0@gmail.com>
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/strace/strace_4.16.bb b/import-layers/yocto-poky/meta/recipes-devtools/strace/strace_4.16.bb
deleted file mode 100644
index b6cd2ac..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/strace/strace_4.16.bb
+++ /dev/null
@@ -1,49 +0,0 @@
-SUMMARY = "System call tracing tool"
-HOMEPAGE = "http://strace.sourceforge.net"
-SECTION = "console/utils"
-LICENSE = "BSD"
-LIC_FILES_CHKSUM = "file://COPYING;md5=488acb3aaaf5d14a2e1a852d13668a70"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/strace/strace-${PV}.tar.xz \
-           file://disable-git-version-gen.patch \
-           file://more-robust-test-for-m32-mx32-compile-support.patch \
-           file://update-gawk-paths.patch \
-           file://Makefile-ptest.patch \
-           file://run-ptest \
-           file://0001-Fix-build-when-using-non-glibc-libc-implementation-o.patch \
-           file://mips-SIGEMT.patch \
-           file://0001-caps-abbrev.awk-fix-gawk-s-path.patch \
-           "
-
-SRC_URI[md5sum] = "2873366cac98770efcbed6e748d5ef23"
-SRC_URI[sha256sum] = "98487cb5178ec1259986cc9f6e2a844f50e5d1208c112cc22431a1e4d9adf0ef"
-
-inherit autotools ptest bluetooth
-
-RDEPENDS_${PN}-ptest += "make coreutils grep gawk sed"
-
-PACKAGECONFIG_class-target ??= "\
-    ${@bb.utils.contains('DISTRO_FEATURES', 'bluetooth', 'bluez', '', d)} \
-"
-
-PACKAGECONFIG[bluez] = "ac_cv_header_bluetooth_bluetooth_h=yes,ac_cv_header_bluetooth_bluetooth_h=no,${BLUEZ}"
-PACKAGECONFIG[libunwind] = "--with-libunwind,--without-libunwind,libunwind"
-
-TESTDIR = "tests"
-
-do_install_append() {
-	# We don't ship strace-graph here because it needs perl
-	rm ${D}${bindir}/strace-graph
-}
-
-do_compile_ptest() {
-	oe_runmake -C ${TESTDIR} buildtest-TESTS
-}
-
-do_install_ptest() {
-	oe_runmake -C ${TESTDIR} install-ptest BUILDDIR=${B} DESTDIR=${D}${PTEST_PATH} TESTDIR=${TESTDIR}
-	sed -i -e '/^src/s/strace.*[1-9]/ptest/' ${D}/${PTEST_PATH}/${TESTDIR}/Makefile
-}
-
-BBCLASSEXTEND = "native"
-TOOLCHAIN = "gcc"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/strace/strace_4.18.bb b/import-layers/yocto-poky/meta/recipes-devtools/strace/strace_4.18.bb
new file mode 100644
index 0000000..5b2891a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/strace/strace_4.18.bb
@@ -0,0 +1,59 @@
+SUMMARY = "System call tracing tool"
+HOMEPAGE = "http://strace.sourceforge.net"
+SECTION = "console/utils"
+LICENSE = "BSD"
+LIC_FILES_CHKSUM = "file://COPYING;md5=f132b4d2adfccc63da4139a609367711"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/strace/strace-${PV}.tar.xz \
+           file://disable-git-version-gen.patch \
+           file://more-robust-test-for-m32-mx32-compile-support.patch \
+           file://update-gawk-paths.patch \
+           file://Makefile-ptest.patch \
+           file://run-ptest \
+           file://0001-Fix-build-when-using-non-glibc-libc-implementation-o.patch \
+           file://mips-SIGEMT.patch \
+           file://0001-caps-abbrev.awk-fix-gawk-s-path.patch \
+           file://0001-tests-sigaction-Check-for-mips-and-alpha-before-usin.patch \
+           "
+
+SRC_URI[md5sum] = "3579b3266bb096cebaefbe2cdb1a3a78"
+SRC_URI[sha256sum] = "89ad887c1e6226bdbca8da31d589cadea4be0744b142eb47b768086c937fca08"
+
+inherit autotools ptest bluetooth
+
+RDEPENDS_${PN}-ptest += "make coreutils grep gawk sed"
+
+PACKAGECONFIG_class-target ??= "\
+    ${@bb.utils.contains('DISTRO_FEATURES', 'bluetooth', 'bluez', '', d)} \
+"
+
+PACKAGECONFIG[bluez] = "ac_cv_header_bluetooth_bluetooth_h=yes,ac_cv_header_bluetooth_bluetooth_h=no,${BLUEZ}"
+PACKAGECONFIG[libunwind] = "--with-libunwind,--without-libunwind,libunwind"
+
+TESTDIR = "tests"
+
+do_install_append() {
+	# We don't ship strace-graph here because it needs perl
+	rm ${D}${bindir}/strace-graph
+}
+
+do_compile_ptest() {
+	oe_runmake -C ${TESTDIR} buildtest-TESTS
+}
+
+do_install_ptest() {
+	oe_runmake -C ${TESTDIR} install-ptest BUILDDIR=${B} DESTDIR=${D}${PTEST_PATH} TESTDIR=${TESTDIR}
+	sed -i -e '/^src/s/strace.*[1-9]/ptest/' \
+	    -e 's,--sysroot=${STAGING_DIR_TARGET},,g' \
+	    -e 's|${DEBUG_PREFIX_MAP}||g' \
+	    -e 's:${HOSTTOOLS_DIR}/::g' \
+	    -e 's:${RECIPE_SYSROOT_NATIVE}::g' \
+	    -e 's:${RECIPE_SYSROOT}::g' \
+	    -e 's:${BASE_WORKDIR}/${MULTIMACH_TARGET_SYS}::g' \
+	    -e '/^DEB_CHANGELOGTIME/d' \
+	    -e '/^RPM_CHANGELOGTIME/d' \
+	${D}/${PTEST_PATH}/${TESTDIR}/Makefile
+}
+
+BBCLASSEXTEND = "native"
+TOOLCHAIN = "gcc"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/subversion/subversion/CVE-2017-9800.patch b/import-layers/yocto-poky/meta/recipes-devtools/subversion/subversion/CVE-2017-9800.patch
new file mode 100644
index 0000000..0599c2b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/subversion/subversion/CVE-2017-9800.patch
@@ -0,0 +1,136 @@
+------------------------------------------------------------------------
+r1804691 | danielsh | 2017-08-10 11:14:13 -0700 (Thu, 10 Aug 2017) | 18 lines
+
+Fix CVE-2017-9800.
+
+See: https://subversion.apache.org/security/CVE-2017-9800-advisory.txt
+
+* subversion/libsvn_ra_svn/client.c
+  (svn_ctype.h): Include.
+  (find_tunnel_agent): Pass a "--" end-of-options guard to ssh.
+    Expect the 'hostinfo' parameter to be URI-decoded.
+  (is_valid_hostinfo): New.
+  (ra_svn_open): Validate the hostname before using it.
+
+* subversion/libsvn_subr/config_file.c
+  (svn_config_ensure): Update the example configuration likewise.
+
+Patch by: philip
+Review by: danielsh
+           stsp
+           astieger (earlier version)
+
+Upstream-Status: Backport
+http://svn.apache.org/viewvc?view=revision&amp;sortby=rev&amp;revision=1804691
+
+CVE: CVE-2017-9800
+
+Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
+---
+Index: subversion/libsvn_subr/config_file.c
+===================================================================
+--- subversion/libsvn_subr/config_file.c	(revision 1804690)
++++ subversion/libsvn_subr/config_file.c	(revision 1804691)
+@@ -1448,12 +1448,12 @@
+         "### passed to the tunnel agent as <user>@<hostname>.)  If the"      NL
+         "### built-in ssh scheme were not predefined, it could be defined"   NL
+         "### as:"                                                            NL
+-        "# ssh = $SVN_SSH ssh -q"                                            NL
++        "# ssh = $SVN_SSH ssh -q --"                                         NL
+         "### If you wanted to define a new 'rsh' scheme, to be used with"    NL
+         "### 'svn+rsh:' URLs, you could do so as follows:"                   NL
+-        "# rsh = rsh"                                                        NL
++        "# rsh = rsh --"                                                     NL
+         "### Or, if you wanted to specify a full path and arguments:"        NL
+-        "# rsh = /path/to/rsh -l myusername"                                 NL
++        "# rsh = /path/to/rsh -l myusername --"                              NL
+         "### On Windows, if you are specifying a full path to a command,"    NL
+         "### use a forward slash (/) or a paired backslash (\\\\) as the"    NL
+         "### path separator.  A single backslash will be treated as an"      NL
+Index: subversion/libsvn_ra_svn/client.c
+===================================================================
+--- subversion/libsvn_ra_svn/client.c	(revision 1804690)
++++ subversion/libsvn_ra_svn/client.c	(revision 1804691)
+@@ -46,6 +46,7 @@
+ #include "svn_props.h"
+ #include "svn_mergeinfo.h"
+ #include "svn_version.h"
++#include "svn_ctype.h"
+ 
+ #include "svn_private_config.h"
+ 
+@@ -398,7 +399,7 @@
+        * versions have it too. If the user is using some other ssh
+        * implementation that doesn't accept it, they can override it
+        * in the [tunnels] section of the config. */
+-      val = "$SVN_SSH ssh -q";
++      val = "$SVN_SSH ssh -q --";
+     }
+ 
+   if (!val || !*val)
+@@ -443,7 +444,7 @@
+   for (n = 0; cmd_argv[n] != NULL; n++)
+     argv[n] = cmd_argv[n];
+ 
+-  argv[n++] = svn_path_uri_decode(hostinfo, pool);
++  argv[n++] = hostinfo;
+   argv[n++] = "svnserve";
+   argv[n++] = "-t";
+   argv[n] = NULL;
+@@ -811,7 +812,33 @@
+ }
+ 
+ 
++/* A simple whitelist to ensure the following are valid:
++ *   user@server
++ *   [::1]:22
++ *   server-name
++ *   server_name
++ *   127.0.0.1
++ * with an extra restriction that a leading '-' is invalid.
++ */
++static svn_boolean_t
++is_valid_hostinfo(const char *hostinfo)
++{
++  const char *p = hostinfo;
+ 
++  if (p[0] == '-')
++    return FALSE;
++
++  while (*p)
++    {
++      if (!svn_ctype_isalnum(*p) && !strchr(":.-_[]@", *p))
++        return FALSE;
++
++      ++p;
++    }
++
++  return TRUE;
++}
++
+ static svn_error_t *ra_svn_open(svn_ra_session_t *session,
+                                 const char **corrected_url,
+                                 const char *url,
+@@ -844,8 +871,18 @@
+           || (callbacks->check_tunnel_func && callbacks->open_tunnel_func
+               && !callbacks->check_tunnel_func(callbacks->tunnel_baton,
+                                                tunnel))))
+-    SVN_ERR(find_tunnel_agent(tunnel, uri.hostinfo, &tunnel_argv, config,
+-                              result_pool));
++    {
++      const char *decoded_hostinfo;
++
++      decoded_hostinfo = svn_path_uri_decode(uri.hostinfo, result_pool);
++
++      if (!is_valid_hostinfo(decoded_hostinfo))
++        return svn_error_createf(SVN_ERR_BAD_URL, NULL, _("Invalid host '%s'"),
++                                 uri.hostinfo);
++
++      SVN_ERR(find_tunnel_agent(tunnel, decoded_hostinfo, &tunnel_argv,
++                                config, result_pool));
++    }
+   else
+     tunnel_argv = NULL;
+ 
+
+------------------------------------------------------------------------
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/subversion/subversion_1.9.5.bb b/import-layers/yocto-poky/meta/recipes-devtools/subversion/subversion_1.9.5.bb
deleted file mode 100644
index 05fba67..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/subversion/subversion_1.9.5.bb
+++ /dev/null
@@ -1,55 +0,0 @@
-SUMMARY = "Subversion (svn) version control system client"
-SECTION = "console/network"
-DEPENDS = "apr-util serf sqlite3 file"
-DEPENDS_append_class-native = " file-replacement-native"
-RDEPENDS_${PN} = "serf"
-LICENSE = "Apache-2"
-HOMEPAGE = "http://subversion.tigris.org"
-
-BBCLASSEXTEND = "native"
-
-inherit gettext
-
-SRC_URI = "${APACHE_MIRROR}/${BPN}/${BPN}-${PV}.tar.bz2 \
-           file://disable_macos.patch \
-           file://serf.m4-Regex-modified-to-allow-D-in-paths.patch \
-           file://0001-Fix-libtool-name-in-configure.ac.patch \
-           file://serfmacro.patch \
-           "
-
-SRC_URI[md5sum] = "9fcbae352a5efe73d46a88c97c6bba14"
-SRC_URI[sha256sum] = "8a4fc68aff1d18dcb4dd9e460648d24d9e98657fbed496c582929c6b3ce555e5"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=af81ae49ba359e70626c05e9bf313709"
-
-PACKAGECONFIG[sasl] = "--with-sasl,--without-sasl,cyrus-sasl"
-PACKAGECONFIG[gnome-keyring] = "--with-gnome-keyring,--without-gnome-keyring,glib-2.0 gnome-keyring"
-
-EXTRA_OECONF = " \
-                --without-berkeley-db --without-apxs \
-                --without-swig --with-apr=${STAGING_BINDIR_CROSS} \
-                --with-apr-util=${STAGING_BINDIR_CROSS} \
-                --disable-keychain \
-                ac_cv_path_RUBY=none"
-
-inherit autotools
-
-export LDFLAGS += " -L${STAGING_LIBDIR} "
-CPPFLAGS += "-P"
-BUILD_CPPFLAGS += "-P"
-
-acpaths = "-I build/ -I build/ac-macros/"
-
-do_configure_prepend () {
-	rm -f ${S}/libtool
-	rm -f ${S}/build/libtool.m4 ${S}/build/ltmain.sh ${S}/build/ltoptions.m4 ${S}/build/ltsugar.m4 ${S}/build/ltversion.m4 ${S}/build/lt~obsolete.m4
-	rm -f ${S}/aclocal.m4
-	sed -i -e 's:with_sasl="/usr/local":with_sasl="${STAGING_DIR}":' ${S}/build/ac-macros/sasl.m4
-}
-
-#| x86_64-linux-libtool: install: warning: `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/work/x86_64-linux/subversion-native/1.8.9-r0/build/subversion/libsvn_ra_local/libsvn_ra_local-1.la' has not been installed in `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/sysroots/x86_64-linux/usr/lib'| x86_64-linux-libtool: install: warning: `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/work/x86_64-linux/subversion-native/1.8.9-r0/build/subversion/libsvn_repos/libsvn_repos-1.la' has not been installed in `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/sysroots/x86_64-linux/usr/lib'| /usr/bin/ld: cannot find -lsvn_delta-1| collect2: ld returned 1 exit status| x86_64-linux-libtool: install: warning: `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/work/x86_64-linux/subversion-native/1.8.9-r0/build/subversion/libsvn_ra_svn/libsvn_ra_svn-1.la' has not been installed in `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/sysroots/x86_64-linux/usr/lib'| x86_64-linux-libtool: install: warning: `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/work/x86_64-linux/subversion-native/1.8.9-r0/build/subversion/libsvn_ra_serf/libsvn_ra_serf-1.la' has not been installed in `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/sysroots/x86_64-linux/usr/lib'
-#| x86_64-linux-libtool: install: error: relink `libsvn_ra_serf-1.la' with the above command before installing it
-#| x86_64-linux-libtool: install: warning: `../../subversion/libsvn_repos/libsvn_repos-1.la' has not been installed in `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/sysroots/x86_64-linux/usr/lib'
-#| /home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/work/x86_64-linux/subversion-native/1.8.9-r0/subversion-1.8.9/build-outputs.mk:1090: recipe for target 'install-serf-lib' failed
-#| make: *** [install-serf-lib] Error 1
-PARALLEL_MAKEINST = ""
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/subversion/subversion_1.9.6.bb b/import-layers/yocto-poky/meta/recipes-devtools/subversion/subversion_1.9.6.bb
new file mode 100644
index 0000000..532edeb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/subversion/subversion_1.9.6.bb
@@ -0,0 +1,56 @@
+SUMMARY = "Subversion (svn) version control system client"
+SECTION = "console/network"
+DEPENDS = "apr-util serf sqlite3 file"
+DEPENDS_append_class-native = " file-replacement-native"
+RDEPENDS_${PN} = "serf"
+LICENSE = "Apache-2"
+HOMEPAGE = "http://subversion.tigris.org"
+
+BBCLASSEXTEND = "native"
+
+inherit gettext pkgconfig
+
+SRC_URI = "${APACHE_MIRROR}/${BPN}/${BPN}-${PV}.tar.bz2 \
+           file://disable_macos.patch \
+           file://serf.m4-Regex-modified-to-allow-D-in-paths.patch \
+           file://0001-Fix-libtool-name-in-configure.ac.patch \
+           file://serfmacro.patch \
+           file://CVE-2017-9800.patch;striplevel=0 \
+           "
+
+SRC_URI[md5sum] = "f27e00338d4a9f7f9aec9d4a3f8b418b"
+SRC_URI[sha256sum] = "dbcbc51fb634082f009121f2cb64350ce32146612787ffb0f7ced351aacaae19"
+
+LIC_FILES_CHKSUM = "file://LICENSE;md5=af81ae49ba359e70626c05e9bf313709"
+
+PACKAGECONFIG[sasl] = "--with-sasl,--without-sasl,cyrus-sasl"
+PACKAGECONFIG[gnome-keyring] = "--with-gnome-keyring,--without-gnome-keyring,glib-2.0 gnome-keyring"
+
+EXTRA_OECONF = " \
+                --without-berkeley-db --without-apxs \
+                --without-swig --with-apr=${STAGING_BINDIR_CROSS} \
+                --with-apr-util=${STAGING_BINDIR_CROSS} \
+                --disable-keychain \
+                ac_cv_path_RUBY=none"
+
+inherit autotools
+
+export LDFLAGS += " -L${STAGING_LIBDIR} "
+CPPFLAGS += "-P"
+BUILD_CPPFLAGS += "-P"
+
+acpaths = "-I build/ -I build/ac-macros/"
+
+do_configure_prepend () {
+	rm -f ${S}/libtool
+	rm -f ${S}/build/libtool.m4 ${S}/build/ltmain.sh ${S}/build/ltoptions.m4 ${S}/build/ltsugar.m4 ${S}/build/ltversion.m4 ${S}/build/lt~obsolete.m4
+	rm -f ${S}/aclocal.m4
+	sed -i -e 's:with_sasl="/usr/local":with_sasl="${STAGING_DIR}":' ${S}/build/ac-macros/sasl.m4
+}
+
+#| x86_64-linux-libtool: install: warning: `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/work/x86_64-linux/subversion-native/1.8.9-r0/build/subversion/libsvn_ra_local/libsvn_ra_local-1.la' has not been installed in `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/sysroots/x86_64-linux/usr/lib'| x86_64-linux-libtool: install: warning: `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/work/x86_64-linux/subversion-native/1.8.9-r0/build/subversion/libsvn_repos/libsvn_repos-1.la' has not been installed in `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/sysroots/x86_64-linux/usr/lib'| /usr/bin/ld: cannot find -lsvn_delta-1| collect2: ld returned 1 exit status| x86_64-linux-libtool: install: warning: `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/work/x86_64-linux/subversion-native/1.8.9-r0/build/subversion/libsvn_ra_svn/libsvn_ra_svn-1.la' has not been installed in `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/sysroots/x86_64-linux/usr/lib'| x86_64-linux-libtool: install: warning: `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/work/x86_64-linux/subversion-native/1.8.9-r0/build/subversion/libsvn_ra_serf/libsvn_ra_serf-1.la' has not been installed in `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/sysroots/x86_64-linux/usr/lib'
+#| x86_64-linux-libtool: install: error: relink `libsvn_ra_serf-1.la' with the above command before installing it
+#| x86_64-linux-libtool: install: warning: `../../subversion/libsvn_repos/libsvn_repos-1.la' has not been installed in `/home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/sysroots/x86_64-linux/usr/lib'
+#| /home/pokybuild/yocto-autobuilder/yocto-worker/nightly-qa-logrotate/build/build/tmp/work/x86_64-linux/subversion-native/1.8.9-r0/subversion-1.8.9/build-outputs.mk:1090: recipe for target 'install-serf-lib' failed
+#| make: *** [install-serf-lib] Error 1
+PARALLEL_MAKEINST = ""
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/syslinux/syslinux_6.03.bb b/import-layers/yocto-poky/meta/recipes-devtools/syslinux/syslinux_6.03.bb
index 69bce1f..f8b1094 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/syslinux/syslinux_6.03.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/syslinux/syslinux_6.03.bb
@@ -1,5 +1,5 @@
 SUMMARY = "Multi-purpose linux bootloader"
-HOMEPAGE = "http://syslinux.zytor.com/"
+HOMEPAGE = "http://www.syslinux.org/"
 LICENSE = "GPLv2+"
 LIC_FILES_CHKSUM = "file://COPYING;md5=0636e73ff0215e8d672dc4c32c317bb3 \
                     file://README;beginline=35;endline=41;md5=558f2c71cb1fb9ba511ccd4858e48e8a"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/systemd-bootchart/files/0001-parse-util-Don-t-use-xlocale.h.patch b/import-layers/yocto-poky/meta/recipes-devtools/systemd-bootchart/files/0001-parse-util-Don-t-use-xlocale.h.patch
new file mode 100644
index 0000000..5aa0463
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/systemd-bootchart/files/0001-parse-util-Don-t-use-xlocale.h.patch
@@ -0,0 +1,32 @@
+From d379126d56d0b6e935b2d97ca71579e6cc54d1bb Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Tue, 18 Jul 2017 13:37:27 +0300
+Subject: [PATCH] parse-util: Don't use xlocale.h
+
+glibc 2.26 no longer contains the non-standard xlocale.h
+(http://sourceware.org/glibc/wiki/Release/2.26#Removal_of_.27xlocale.h.27)
+
+This change shouldn't break anything as xlocale.h was a subset of
+locale.h.
+
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Upstream-Status: Submitted [https://github.com/systemd/systemd-bootchart/pull/35]
+---
+ src/parse-util.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/src/parse-util.c b/src/parse-util.c
+index 5635a68..1b2169c 100644
+--- a/src/parse-util.c
++++ b/src/parse-util.c
+@@ -21,7 +21,6 @@
+ #include <locale.h>
+ #include <stdlib.h>
+ #include <string.h>
+-#include <xlocale.h>
+ 
+ #include "macro.h"
+ #include "parse-util.h"
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/systemd-bootchart/systemd-bootchart_231.bb b/import-layers/yocto-poky/meta/recipes-devtools/systemd-bootchart/systemd-bootchart_231.bb
index 1d88036..4da000e 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/systemd-bootchart/systemd-bootchart_231.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/systemd-bootchart/systemd-bootchart_231.bb
@@ -2,7 +2,9 @@
 LIC_FILES_CHKSUM = "file://LICENSE.LGPL2.1;md5=4fbd65380cdd255951079008b364516c \
                     file://LICENSE.GPL2;md5=751419260aa954499f7abaabaa882bbe"
 
-SRC_URI = "git://github.com/systemd/systemd-bootchart.git;protocol=https"
+SRC_URI = "git://github.com/systemd/systemd-bootchart.git;protocol=https \
+           file://0001-parse-util-Don-t-use-xlocale.h.patch \
+"
 
 # Modify these as desired
 PV = "231+git${SRCPV}"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/tcf-agent/tcf-agent_git.bb b/import-layers/yocto-poky/meta/recipes-devtools/tcf-agent/tcf-agent_git.bb
index e5e41f1..9db26dc 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/tcf-agent/tcf-agent_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/tcf-agent/tcf-agent_git.bb
@@ -16,6 +16,7 @@
            file://tcf-agent.init \
            file://tcf-agent.service \
           "
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 DEPENDS = "util-linux openssl"
 RDEPENDS_${PN} = "bash"
@@ -30,16 +31,22 @@
 INITSCRIPT_PARAMS = "start 99 3 5 . stop 20 0 1 2 6 ."
 
 # mangling needed for make
-MAKE_ARCH = "`echo ${TARGET_ARCH} | sed s,i.86,i686,`"
+MAKE_ARCH = "`echo ${TARGET_ARCH} | sed s,i.86,i686, | sed s,aarch64,a64,`"
 MAKE_OS = "`echo ${TARGET_OS} | sed s,^linux.*,GNU/Linux,`"
 
 EXTRA_OEMAKE = "MACHINE=${MAKE_ARCH} OPSYS=${MAKE_OS} 'CC=${CC}' 'AR=${AR}'"
 
-# They don't build on ARM and we don't need them actually.
-CFLAGS += "-DSERVICE_RunControl=0 -DSERVICE_Breakpoints=0 \
+LCL_STOP_SERVICES = "-DSERVICE_RunControl=0 -DSERVICE_Breakpoints=0 \
     -DSERVICE_Memory=0 -DSERVICE_Registers=0 -DSERVICE_MemoryMap=0 \
-    -DSERVICE_StackTrace=0 -DSERVICE_Symbols=0 -DSERVICE_LineNumbers=0 \
-    -DSERVICE_Expressions=0"
+    -DSERVICE_StackTrace=0 -DSERVICE_Expressions=0"
+
+
+# These features don't compile for several cases.
+#
+CFLAGS_append_mips = " ${LCL_STOP_SERVICES}"
+CFLAGS_append_mips64 = " ${LCL_STOP_SERVICES}"
+CFLAGS_append_libc-musl = " ${LCL_STOP_SERVICES}"
+CFLAGS_append_powerpc64 = " ${LCL_STOP_SERVICES}"
 
 do_install() {
 	oe_runmake install INSTALLROOT=${D}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/tcltk/tcl_8.6.6.bb b/import-layers/yocto-poky/meta/recipes-devtools/tcltk/tcl_8.6.6.bb
deleted file mode 100644
index 40cd18f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/tcltk/tcl_8.6.6.bb
+++ /dev/null
@@ -1,97 +0,0 @@
-SUMMARY = "Tool Command Language"
-HOMEPAGE = "http://tcl.sourceforge.net"
-SECTION = "devel/tcltk"
-
-# http://www.tcl.tk/software/tcltk/license.html
-LICENSE = "tcl & BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://../license.terms;md5=058f6229798281bbcac4239c788cfa38 \
-    file://../compat/license.terms;md5=058f6229798281bbcac4239c788cfa38 \
-    file://../library/license.terms;md5=058f6229798281bbcac4239c788cfa38 \
-    file://../macosx/license.terms;md5=058f6229798281bbcac4239c788cfa38 \
-    file://../tests/license.terms;md5=058f6229798281bbcac4239c788cfa38 \
-    file://../win/license.terms;md5=058f6229798281bbcac4239c788cfa38 \
-"
-
-DEPENDS = "tcl-native zlib"
-
-BASE_SRC_URI = "${SOURCEFORGE_MIRROR}/tcl/${BPN}${PV}-src.tar.gz \
-                file://tcl-add-soname.patch"
-SRC_URI = "${BASE_SRC_URI} \
-           file://fix_non_native_build_issue.patch \
-           file://fix_issue_with_old_distro_glibc.patch \
-           file://no_packages.patch \
-           file://tcl-remove-hardcoded-install-path.patch \
-           file://alter-includedir.patch \
-           file://run-ptest \
-"
-SRC_URI[md5sum] = "5193aea8107839a79df8ac709552ecb7"
-SRC_URI[sha256sum] = "a265409781e4b3edcc4ef822533071b34c3dc6790b893963809b9fe221befe07"
-
-SRC_URI_class-native = "${BASE_SRC_URI}"
-
-S = "${WORKDIR}/${BPN}${PV}/unix"
-
-VER = "${PV}"
-
-inherit autotools ptest binconfig
-
-DEPENDS_class-native = "zlib-native"
-
-EXTRA_OECONF = "--enable-threads --disable-rpath --libdir=${libdir}"
-
-do_configure() {
-	cd ${S}
-	gnu-configize
-	cd ${B}
-	oe_runconf
-}
-
-do_compile_prepend() {
-	echo > ${S}/../compat/fixstrtod.c
-}
-
-do_install() {
-	autotools_do_install install-private-headers
-	ln -sf ./tclsh${VER} ${D}${bindir}/tclsh
-	ln -sf tclsh8.6 ${D}${bindir}/tclsh${VER}
-	sed -i "s;-L${B};-L${STAGING_LIBDIR};g" tclConfig.sh
-	sed -i "s;'${WORKDIR};'${STAGING_INCDIR};g" tclConfig.sh
-	install -d ${D}${bindir_crossscripts}
-	install -m 0755 tclConfig.sh ${D}${bindir_crossscripts}
-	install -m 0755 tclConfig.sh ${D}${libdir}
-	cd ..
-	for dir in compat generic unix; do
-		install -d ${D}${includedir}/${BPN}${VER}/$dir
-		install -m 0644 ${S}/../$dir/*.h ${D}${includedir}/${BPN}${VER}/$dir/
-	done
-}
-
-SYSROOT_DIRS += "${bindir_crossscripts}"
-
-PACKAGES =+ "tcl-lib"
-FILES_tcl-lib = "${libdir}/libtcl8.6.so.*"
-FILES_${PN} += "${libdir}/tcl${VER} ${libdir}/tcl8.6 ${libdir}/tcl8"
-FILES_${PN}-dev += "${libdir}/tclConfig.sh ${libdir}/tclooConfig.sh"
-
-# isn't getting picked up by shlibs code
-RDEPENDS_${PN} += "tcl-lib"
-RDEPENDS_${PN}_class-native = ""
-RDEPENDS_${PN}-ptest += "libgcc"
-
-BBCLASSEXTEND = "native nativesdk"
-
-do_compile_ptest() {
-	oe_runmake tcltest
-}
-
-do_install_ptest() {
-	cp ${B}/tcltest ${D}${PTEST_PATH}
-	cp -r ${S}/../library ${D}${PTEST_PATH}
-	cp -r ${S}/../tests ${D}${PTEST_PATH}
-}
-
-# Fix some paths that might be used by Tcl extensions
-BINCONFIG_GLOB = "*Config.sh"
-
-# Fix the path in sstate
-SSTATE_SCAN_FILES += "*Config.sh"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/tcltk/tcl_8.6.7.bb b/import-layers/yocto-poky/meta/recipes-devtools/tcltk/tcl_8.6.7.bb
new file mode 100644
index 0000000..dac73be
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/tcltk/tcl_8.6.7.bb
@@ -0,0 +1,101 @@
+SUMMARY = "Tool Command Language"
+HOMEPAGE = "http://tcl.sourceforge.net"
+SECTION = "devel/tcltk"
+
+# http://www.tcl.tk/software/tcltk/license.html
+LICENSE = "tcl & BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://../license.terms;md5=058f6229798281bbcac4239c788cfa38 \
+    file://../compat/license.terms;md5=058f6229798281bbcac4239c788cfa38 \
+    file://../library/license.terms;md5=058f6229798281bbcac4239c788cfa38 \
+    file://../macosx/license.terms;md5=058f6229798281bbcac4239c788cfa38 \
+    file://../tests/license.terms;md5=058f6229798281bbcac4239c788cfa38 \
+    file://../win/license.terms;md5=058f6229798281bbcac4239c788cfa38 \
+"
+
+DEPENDS = "tcl-native zlib"
+
+BASE_SRC_URI = "${SOURCEFORGE_MIRROR}/tcl/${BPN}${PV}-src.tar.gz \
+                file://tcl-add-soname.patch"
+SRC_URI = "${BASE_SRC_URI} \
+           file://fix_non_native_build_issue.patch \
+           file://fix_issue_with_old_distro_glibc.patch \
+           file://no_packages.patch \
+           file://tcl-remove-hardcoded-install-path.patch \
+           file://alter-includedir.patch \
+           file://run-ptest \
+"
+SRC_URI[md5sum] = "5673aaf45b5de5d8dd80bb3daaeb8838"
+SRC_URI[sha256sum] = "7c6b8f84e37332423cfe5bae503440d88450da8cc1243496249faa5268026ba5"
+
+SRC_URI_class-native = "${BASE_SRC_URI}"
+
+S = "${WORKDIR}/${BPN}${PV}/unix"
+
+VER = "${PV}"
+
+inherit autotools ptest binconfig
+
+EXTRA_OECONF = "--enable-threads --disable-rpath --libdir=${libdir}"
+
+do_compile_prepend() {
+	echo > ${S}/../compat/fixstrtod.c
+}
+
+do_install() {
+	autotools_do_install
+	oe_runmake 'DESTDIR=${D}' install-private-headers
+	ln -sf ./tclsh${VER} ${D}${bindir}/tclsh
+	ln -sf tclsh8.6 ${D}${bindir}/tclsh${VER}
+	sed -i "s;-L${B};-L${STAGING_LIBDIR};g" tclConfig.sh
+	sed -i "s;'${WORKDIR};'${STAGING_INCDIR};g" tclConfig.sh
+	install -d ${D}${bindir_crossscripts}
+	install -m 0755 tclConfig.sh ${D}${bindir_crossscripts}
+	install -m 0755 tclConfig.sh ${D}${libdir}
+	for dir in compat generic unix; do
+		install -d ${D}${includedir}/${BPN}${VER}/$dir
+		install -m 0644 ${S}/../$dir/*.h ${D}${includedir}/${BPN}${VER}/$dir/
+	done
+}
+
+SYSROOT_DIRS += "${bindir_crossscripts}"
+
+PACKAGES =+ "tcl-lib"
+FILES_tcl-lib = "${libdir}/libtcl8.6.so.*"
+FILES_${PN} += "${libdir}/tcl${VER} ${libdir}/tcl8.6 ${libdir}/tcl8"
+FILES_${PN}-dev += "${libdir}/tclConfig.sh ${libdir}/tclooConfig.sh"
+
+# isn't getting picked up by shlibs code
+RDEPENDS_${PN} += "tcl-lib"
+RDEPENDS_${PN}_class-native = ""
+RDEPENDS_${PN}-ptest += "libgcc"
+
+BBCLASSEXTEND = "native nativesdk"
+
+do_compile_ptest() {
+	oe_runmake tcltest
+}
+
+do_install_ptest() {
+	cp ${B}/tcltest ${D}${PTEST_PATH}
+	cp -r ${S}/../library ${D}${PTEST_PATH}
+	cp -r ${S}/../tests ${D}${PTEST_PATH}
+}
+
+# Fix some paths that might be used by Tcl extensions
+BINCONFIG_GLOB = "*Config.sh"
+
+# Fix the path in sstate
+SSTATE_SCAN_FILES += "*Config.sh"
+
+# Cleanup host path from ${libdir}/tclConfig.sh and remove the
+# ${bindir_crossscripts}/tclConfig.sh from target
+PACKAGE_PREPROCESS_FUNCS += "tcl_package_preprocess"
+tcl_package_preprocess() {
+	sed -i -e "s;${DEBUG_PREFIX_MAP};;g" \
+	       -e "s;-L${STAGING_LIBDIR};-L${libdir};g" \
+	       -e "s;${STAGING_INCDIR};${includedir};g" \
+	       -e "s;--sysroot=${RECIPE_SYSROOT};;g" \
+	       ${PKGD}${libdir}/tclConfig.sh
+
+	rm -f ${PKGD}${bindir_crossscripts}/tclConfig.sh
+}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/unfs3/unfs3_0.9.22.r497.bb b/import-layers/yocto-poky/meta/recipes-devtools/unfs3/unfs3_0.9.22.r497.bb
index 52aa17e..cebc866 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/unfs3/unfs3_0.9.22.r497.bb
+++ b/import-layers/yocto-poky/meta/recipes-devtools/unfs3/unfs3_0.9.22.r497.bb
@@ -17,7 +17,8 @@
 # Only subversion url left in OE-Core, use a mirror tarball instead since
 # this rarely changes.
 # svn://svn.code.sf.net/p/unfs3/code;module=trunk;rev=${MOD_PV};protocol=http
-SRC_URI = "http://downloads.yoctoproject.org/mirror/sources/trunk_svn.code.sf.net_.p.unfs3.code_497_.tar.gz \
+# rename the tarball in mirror to avoid clash with user local svn tarball
+SRC_URI = "http://downloads.yoctoproject.org/mirror/sources/unfs3-0.9.22.r497.tar.gz \
            file://unfs3_parallel_build.patch \
            file://alternate_rpc_ports.patch \
            file://fix_pid_race_parent_writes_child_pid.patch \
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/vala/vala.inc b/import-layers/yocto-poky/meta/recipes-devtools/vala/vala.inc
index b338ea0..1261c02 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/vala/vala.inc
+++ b/import-layers/yocto-poky/meta/recipes-devtools/vala/vala.inc
@@ -2,7 +2,12 @@
 DESCRIPTION = "Vala is a C#-like language dedicated to ease GObject programming. \
 Vala compiles to plain C and has no runtime environment nor penalities whatsoever."
 SECTION = "devel"
-DEPENDS = "bison-native flex-native libxslt-native glib-2.0"
+DEPENDS = "bison-native flex-native glib-2.0"
+
+# Appending libxslt-native to dependencies has an effect
+# of rebuilding the manual, which is very slow. Let's do this
+# only when api-documentation distro feature is enabled.
+DEPENDS_append_class-target = " ${@bb.utils.contains('DISTRO_FEATURES', 'api-documentation', 'libxslt-native', '', d)}"
 
 # vala-native contains a native version of vapigen, which we use instead of the target one
 DEPENDS_append_class-target = " vala-native"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/vala/vala_0.34.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/vala/vala_0.34.4.bb
deleted file mode 100644
index d1b49ae..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/vala/vala_0.34.4.bb
+++ /dev/null
@@ -1,8 +0,0 @@
-require ${BPN}.inc
-
-SRC_URI += " file://0001-git-version-gen-don-t-append-dirty-if-we-re-not-in-g.patch \
-             file://0001-vapigen.m4-use-PKG_CONFIG_SYSROOT_DIR.patch \
-"
-
-SRC_URI[md5sum] = "a856989d749fc5e472a3592b96f9ca48"
-SRC_URI[sha256sum] = "6b17bd339414563ebc51f64b0b837919ea7552d8a8ffa71cdc837d25c9696b83"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/vala/vala_0.36.4.bb b/import-layers/yocto-poky/meta/recipes-devtools/vala/vala_0.36.4.bb
new file mode 100644
index 0000000..51000d9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/vala/vala_0.36.4.bb
@@ -0,0 +1,8 @@
+require ${BPN}.inc
+
+SRC_URI += " file://0001-git-version-gen-don-t-append-dirty-if-we-re-not-in-g.patch \
+             file://0001-vapigen.m4-use-PKG_CONFIG_SYSROOT_DIR.patch \
+"
+
+SRC_URI[md5sum] = "3c19014093f1a3d995357253b463082c"
+SRC_URI[sha256sum] = "e9f23ce711c1a72ce664d10946fbc5953f01b0b7f2a3562e7a01e362d86de059"
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-fix-build-for-musl-targets.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-fix-build-for-musl-targets.patch
deleted file mode 100644
index dc6feff..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-fix-build-for-musl-targets.patch
+++ /dev/null
@@ -1,69 +0,0 @@
-From 1b1a024efd18d44294e184e792c1e039cab02bfe Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sun, 14 Feb 2016 09:14:12 +0000
-Subject: [PATCH] fix build for musl targets
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
-Upstream-Status: Pending
-
- configure.ac             | 2 --
- coregrind/vg_preloaded.c | 2 +-
- include/pub_tool_redir.h | 7 +++++--
- 3 files changed, 6 insertions(+), 5 deletions(-)
-
-diff --git a/configure.ac b/configure.ac
-index 9366dc7..679f514 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -1066,8 +1066,6 @@ case "${GLIBC_VERSION}" in
- 	;;
-      2.0|2.1|*)
- 	AC_MSG_RESULT([unsupported version ${GLIBC_VERSION}])
--	AC_MSG_ERROR([Valgrind requires glibc version 2.2 or later,])
--	AC_MSG_ERROR([Darwin libc, Bionic libc or Solaris libc])
- 	;;
- esac
- 
-diff --git a/coregrind/vg_preloaded.c b/coregrind/vg_preloaded.c
-index 2ea7a7a..e49c832 100644
---- a/coregrind/vg_preloaded.c
-+++ b/coregrind/vg_preloaded.c
-@@ -56,7 +56,7 @@
- void VG_NOTIFY_ON_LOAD(freeres)( void );
- void VG_NOTIFY_ON_LOAD(freeres)( void )
- {
--#  if !defined(__UCLIBC__) \
-+#  if !defined(__UCLIBC__) && defined(__GLIBC__) \
-       && !defined(VGPV_arm_linux_android) \
-       && !defined(VGPV_x86_linux_android) \
-       && !defined(VGPV_mips32_linux_android) \
-diff --git a/include/pub_tool_redir.h b/include/pub_tool_redir.h
-index bac00d7..fbb2ef2 100644
---- a/include/pub_tool_redir.h
-+++ b/include/pub_tool_redir.h
-@@ -242,8 +242,7 @@
- /* --- Soname of the standard C library. --- */
- 
- #if defined(VGO_linux) || defined(VGO_solaris)
--#  define  VG_Z_LIBC_SONAME  libcZdsoZa              // libc.so*
--
-+#  define  VG_Z_LIBC_SONAME  libcZdZa                // libc.*
- #elif defined(VGO_darwin) && (DARWIN_VERS <= DARWIN_10_6)
- #  define  VG_Z_LIBC_SONAME  libSystemZdZaZddylib    // libSystem.*.dylib
- 
-@@ -274,7 +273,11 @@
- /* --- Soname of the pthreads library. --- */
- 
- #if defined(VGO_linux)
-+# if defined(__GLIBC__) || defined(__UCLIBC__)
- #  define  VG_Z_LIBPTHREAD_SONAME  libpthreadZdsoZd0     // libpthread.so.0
-+# else
-+#  define  VG_Z_LIBPTHREAD_SONAME  libcZdZa              // libc.*
-+#endif
- #elif defined(VGO_darwin)
- #  define  VG_Z_LIBPTHREAD_SONAME  libSystemZdZaZddylib  // libSystem.*.dylib
- #elif defined(VGO_solaris)
--- 
-2.7.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-makefiles-Drop-setting-mcpu-to-cortex-a8-on-arm-arch.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-makefiles-Drop-setting-mcpu-to-cortex-a8-on-arm-arch.patch
new file mode 100644
index 0000000..9f1da7b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-makefiles-Drop-setting-mcpu-to-cortex-a8-on-arm-arch.patch
@@ -0,0 +1,108 @@
+From 715cf122388f3527afa5649cebf9f1522c240693 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Thu, 20 Apr 2017 10:11:16 -0700
+Subject: [PATCH] makefiles: Drop setting -mcpu to cortex-a8 on arm
+ architecture
+
+We can not assume that all arches armv7+ are cortex-a8 only
+it fails to build for rpi which is armv7ve based (cortex-a8) cpu
+implementation.
+Fixes
+| cc1: warning: switch -mcpu=cortex-a8 conflicts with -march=armv7ve switch
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ Makefile.all.am            |  6 +++---
+ helgrind/tests/Makefile.am |  6 +++---
+ none/tests/arm/Makefile.am | 18 +++++++++---------
+ 3 files changed, 15 insertions(+), 15 deletions(-)
+
+diff --git a/Makefile.all.am b/Makefile.all.am
+index 02059a3..c7c4700 100644
+--- a/Makefile.all.am
++++ b/Makefile.all.am
+@@ -197,11 +197,11 @@ AM_CCASFLAGS_PPC64LE_LINUX  = @FLAG_M64@ -g
+ 
+ AM_FLAG_M3264_ARM_LINUX   = @FLAG_M32@
+ AM_CFLAGS_ARM_LINUX       = @FLAG_M32@ \
+-			 	$(AM_CFLAGS_BASE) -marm -mcpu=cortex-a8
++			 	$(AM_CFLAGS_BASE) -marm
+ AM_CFLAGS_PSO_ARM_LINUX   = @FLAG_M32@ $(AM_CFLAGS_BASE) \
+-				-marm -mcpu=cortex-a8 $(AM_CFLAGS_PSO_BASE)
++				-marm $(AM_CFLAGS_PSO_BASE)
+ AM_CCASFLAGS_ARM_LINUX    = @FLAG_M32@ \
+-				-marm -mcpu=cortex-a8 -g
++				-marm -g
+ 
+ AM_FLAG_M3264_ARM64_LINUX = @FLAG_M64@
+ AM_CFLAGS_ARM64_LINUX     = @FLAG_M64@ $(AM_CFLAGS_BASE)
+diff --git a/helgrind/tests/Makefile.am b/helgrind/tests/Makefile.am
+index df82169..07eb66a 100644
+--- a/helgrind/tests/Makefile.am
++++ b/helgrind/tests/Makefile.am
+@@ -189,9 +189,9 @@ if ! VGCONF_PLATFORMS_INCLUDE_X86_DARWIN
+ endif
+ 
+ if VGCONF_PLATFORMS_INCLUDE_ARM_LINUX
+-annotate_hbefore_CFLAGS = $(AM_CFLAGS) -mcpu=cortex-a8
+-tc07_hbl1_CFLAGS        = $(AM_CFLAGS) -mcpu=cortex-a8
+-tc08_hbl2_CFLAGS        = $(AM_CFLAGS) -mcpu=cortex-a8
++annotate_hbefore_CFLAGS = $(AM_CFLAGS)
++tc07_hbl1_CFLAGS        = $(AM_CFLAGS)
++tc08_hbl2_CFLAGS        = $(AM_CFLAGS)
+ else
+ annotate_hbefore_CFLAGS = $(AM_CFLAGS)
+ tc07_hbl1_CFLAGS        = $(AM_CFLAGS)
+diff --git a/none/tests/arm/Makefile.am b/none/tests/arm/Makefile.am
+index 024eb6d..ccecb90 100644
+--- a/none/tests/arm/Makefile.am
++++ b/none/tests/arm/Makefile.am
+@@ -52,10 +52,10 @@ allexec_CFLAGS		= $(AM_CFLAGS) @FLAG_W_NO_NONNULL@
+ # need special helping w.r.t -mfpu and -mfloat-abi, though.
+ # Also force -O0 since -O takes hundreds of MB of memory 
+ # for v6intThumb.c.
+-v6intARM_CFLAGS   = $(AM_CFLAGS) -g -O0 -mcpu=cortex-a8 -marm
+-v6intThumb_CFLAGS = $(AM_CFLAGS) -g -O0 -mcpu=cortex-a8 -mthumb
++v6intARM_CFLAGS   = $(AM_CFLAGS) -g -O0 -marm
++v6intThumb_CFLAGS = $(AM_CFLAGS) -g -O0 -mthumb
+ 
+-v6media_CFLAGS    = $(AM_CFLAGS) -g -O0 -mcpu=cortex-a8 -mthumb
++v6media_CFLAGS    = $(AM_CFLAGS) -g -O0 -mthumb
+ 
+ v8crypto_a_CFLAGS = $(AM_CFLAGS) -g -O0 -mfpu=crypto-neon-fp-armv8 -marm
+ v8crypto_t_CFLAGS = $(AM_CFLAGS) -g -O0 -mfpu=crypto-neon-fp-armv8 -mthumb
+@@ -65,23 +65,23 @@ v8memory_a_CFLAGS = $(AM_CFLAGS) -g -O0 \
+ v8memory_t_CFLAGS = $(AM_CFLAGS) -g -O0 \
+ 			-march=armv8-a -mfpu=crypto-neon-fp-armv8 -mthumb
+ 
+-vfp_CFLAGS        = $(AM_CFLAGS) -g -O0 -mcpu=cortex-a8 \
++vfp_CFLAGS        = $(AM_CFLAGS) -g -O0 \
+ 			-mfpu=neon \
+ 			-mthumb
+ 
+ 
+-neon128_CFLAGS    = $(AM_CFLAGS) -g -O0 -mcpu=cortex-a8 \
++neon128_CFLAGS    = $(AM_CFLAGS) -g -O0 \
+ 			-mfpu=neon \
+ 			-mthumb
+ 
+-neon64_CFLAGS     = $(AM_CFLAGS) -g -O0 -mcpu=cortex-a8 \
++neon64_CFLAGS     = $(AM_CFLAGS) -g -O0 \
+ 			-mfpu=neon \
+ 			-mthumb
+ 
+ intdiv_CFLAGS	  = $(AM_CFLAGS) -g -march=armv7ve -mcpu=cortex-a15 -mthumb
+-ldrt_CFLAGS	  = $(AM_CFLAGS) -g -mcpu=cortex-a8 -mthumb
+-ldrt_arm_CFLAGS	  = $(AM_CFLAGS) -g -mcpu=cortex-a8 -marm
++ldrt_CFLAGS	  = $(AM_CFLAGS) -g -mthumb
++ldrt_arm_CFLAGS	  = $(AM_CFLAGS) -g -marm
+ 
+-vcvt_fixed_float_VFP_CFLAGS = $(AM_CFLAGS) -g -mcpu=cortex-a8 -mfpu=vfpv3
++vcvt_fixed_float_VFP_CFLAGS = $(AM_CFLAGS) -g -mfpu=vfpv3
+ 
+ vfpv4_fma_CFLAGS  = $(AM_CFLAGS) -g -O0 -march=armv7ve -mcpu=cortex-a15 -mfpu=vfpv4 -marm
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-memcheck-arm64-Define-__THROW-if-not-already-defined.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-memcheck-arm64-Define-__THROW-if-not-already-defined.patch
new file mode 100644
index 0000000..a48d7db
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-memcheck-arm64-Define-__THROW-if-not-already-defined.patch
@@ -0,0 +1,32 @@
+From 3409dc35c15bb14c8a525239806322648e079ab1 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 5 Jul 2017 17:12:43 -0700
+Subject: [PATCH 1/3] memcheck/arm64: Define __THROW if not already defined
+
+Helps compiling with musl where __THROW is not available
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Submitted
+
+ memcheck/tests/arm64-linux/scalar.h | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/memcheck/tests/arm64-linux/scalar.h b/memcheck/tests/arm64-linux/scalar.h
+index 9008816..8ef050f 100644
+--- a/memcheck/tests/arm64-linux/scalar.h
++++ b/memcheck/tests/arm64-linux/scalar.h
+@@ -12,6 +12,10 @@
+ #include <sys/types.h>
+ #include <sys/mman.h>
+ 
++#ifndef __THROW
++#define __THROW
++#endif
++
+ // Since we use vki_unistd.h, we can't include <unistd.h>.  So we have to
+ // declare this ourselves.
+ extern long int syscall (long int __sysno, ...) __THROW;
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-memcheck-tests-Use-ucontext_t-instead-of-struct-ucon.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-memcheck-tests-Use-ucontext_t-instead-of-struct-ucon.patch
new file mode 100644
index 0000000..bf16a1a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-memcheck-tests-Use-ucontext_t-instead-of-struct-ucon.patch
@@ -0,0 +1,30 @@
+From 629ac492b1d9bc709d17337eb9b1c28603eca250 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 28 Jun 2017 11:01:25 -0700
+Subject: [PATCH] memcheck/tests: Use ucontext_t instead of struct ucontext
+
+glibc 2.26 does not expose struct ucontext anymore
+
+Upstream-Status: Submitted [https://bugs.kde.org/show_bug.cgi?id=381769]
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ memcheck/tests/linux/stack_changes.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/memcheck/tests/linux/stack_changes.c b/memcheck/tests/linux/stack_changes.c
+index ffb49c6..acc4109 100644
+--- a/memcheck/tests/linux/stack_changes.c
++++ b/memcheck/tests/linux/stack_changes.c
+@@ -11,7 +11,7 @@
+ // checks that Valgrind notices their stack changes properly.
+ 
+ #ifdef __GLIBC__
+-typedef  struct ucontext  mycontext;
++typedef ucontext_t  mycontext;
+ 
+ mycontext ctx1, ctx2, oldc;
+ int count;
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-sigqueue-Rename-_sifields-to-__si_fields-on-musl.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-sigqueue-Rename-_sifields-to-__si_fields-on-musl.patch
new file mode 100644
index 0000000..2736615
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-sigqueue-Rename-_sifields-to-__si_fields-on-musl.patch
@@ -0,0 +1,31 @@
+From 64ad2744acfb4fa40b1c114633a053f87125a203 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 10 Jun 2017 00:46:39 -0700
+Subject: [PATCH 1/6] sigqueue: Rename _sifields to __si_fields on musl
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ memcheck/tests/linux/sigqueue.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/memcheck/tests/linux/sigqueue.c b/memcheck/tests/linux/sigqueue.c
+index d18bd72..acb7cba 100644
+--- a/memcheck/tests/linux/sigqueue.c
++++ b/memcheck/tests/linux/sigqueue.c
+@@ -8,6 +8,11 @@
+ #include <syscall.h>
+ #include <unistd.h>
+ 
++/* musl libc defines siginfo_t __si_fields instead of _sifields */
++#if defined(__linux__) && !defined(__GLIBC__)
++#define _sifields __si_fields
++#endif
++
+ int main(int argc, char **argv)
+ {
+   siginfo_t *si;
+-- 
+2.13.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-str_tester.c-Limit-rawmemchr-test-to-glibc.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-str_tester.c-Limit-rawmemchr-test-to-glibc.patch
new file mode 100644
index 0000000..185b8f9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0001-str_tester.c-Limit-rawmemchr-test-to-glibc.patch
@@ -0,0 +1,39 @@
+From de692e359801a1f0488c76267e4f904dd2efe754 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 10 Jun 2017 00:39:07 -0700
+Subject: [PATCH] str_tester.c: Limit rawmemchr() test to glibc
+
+rawmemchr() is a GNU extention therefore mark it so
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ memcheck/tests/str_tester.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/memcheck/tests/str_tester.c b/memcheck/tests/str_tester.c
+index 9f7790a..47e4b4a 100644
+--- a/memcheck/tests/str_tester.c
++++ b/memcheck/tests/str_tester.c
+@@ -504,7 +504,7 @@ test_strchrnul (void)
+ #endif
+ 
+ // DDD: better done by testing for the function.
+-#if !defined(__APPLE__) && !defined(__sun)
++#if !defined(__APPLE__) && !defined(__sun) && defined(__GLIBC__)
+ static void
+ test_rawmemchr (void)
+ {
+@@ -1442,7 +1442,7 @@ main (void)
+   test_strchrnul ();
+ # endif
+ 
+-# if !defined(__APPLE__) && !defined(__sun)
++# if !defined(__APPLE__) && !defined(__sun) && defined(__GLIBC__)
+   /* rawmemchr.  */
+   test_rawmemchr ();
+ # endif
+-- 
+2.13.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0002-context-APIs-are-not-available-on-musl.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0002-context-APIs-are-not-available-on-musl.patch
new file mode 100644
index 0000000..3f9f33b4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0002-context-APIs-are-not-available-on-musl.patch
@@ -0,0 +1,49 @@
+From 862b807076d57f2f58ed9d572ddac8bb402774a2 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 10 Jun 2017 01:01:10 -0700
+Subject: [PATCH 2/6] context APIs are not available on musl
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ memcheck/tests/linux/stack_changes.c | 7 ++++++-
+ 1 file changed, 6 insertions(+), 1 deletion(-)
+
+diff --git a/memcheck/tests/linux/stack_changes.c b/memcheck/tests/linux/stack_changes.c
+index a978fc2..ffb49c6 100644
+--- a/memcheck/tests/linux/stack_changes.c
++++ b/memcheck/tests/linux/stack_changes.c
+@@ -10,6 +10,7 @@
+ // This test is checking the libc context calls (setcontext, etc.) and
+ // checks that Valgrind notices their stack changes properly.
+ 
++#ifdef __GLIBC__
+ typedef  struct ucontext  mycontext;
+ 
+ mycontext ctx1, ctx2, oldc;
+@@ -51,9 +52,11 @@ int init_context(mycontext *uc)
+ 
+     return ret;
+ }
++#endif
+ 
+ int main(int argc, char **argv)
+ {
++#ifdef __GLIBC__
+     int c1 = init_context(&ctx1);
+     int c2 = init_context(&ctx2);
+ 
+@@ -66,6 +69,8 @@ int main(int argc, char **argv)
+     //free(ctx1.uc_stack.ss_sp);
+     VALGRIND_STACK_DEREGISTER(c2);
+     //free(ctx2.uc_stack.ss_sp);
+-
++#else
++    printf("libc context call APIs e.g. getcontext() are deprecated by posix\n");
++#endif
+     return 0;
+ }
+-- 
+2.13.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0002-memcheck-x86-Define-__THROW-if-not-defined.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0002-memcheck-x86-Define-__THROW-if-not-defined.patch
new file mode 100644
index 0000000..5433472
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0002-memcheck-x86-Define-__THROW-if-not-defined.patch
@@ -0,0 +1,32 @@
+From 67d199dbdcbb3feff5f8928f87725fc64c0307d7 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 5 Jul 2017 17:36:42 -0700
+Subject: [PATCH 2/3] memcheck/x86: Define __THROW if not defined
+
+musl does not have __THROW, therefore make it null
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Submitted
+
+ memcheck/tests/x86-linux/scalar.h | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/memcheck/tests/x86-linux/scalar.h b/memcheck/tests/x86-linux/scalar.h
+index ef28b03..52f742e 100644
+--- a/memcheck/tests/x86-linux/scalar.h
++++ b/memcheck/tests/x86-linux/scalar.h
+@@ -11,6 +11,10 @@
+ #include <sys/types.h>
+ #include <sys/mman.h>
+ 
++#ifndef __THROW
++#define __THROW
++#endif
++
+ // Since we use vki_unistd.h, we can't include <unistd.h>.  So we have to
+ // declare this ourselves.
+ extern long int syscall (long int __sysno, ...) __THROW;
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0003-correct-include-directive-path-for-config.h.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0003-correct-include-directive-path-for-config.h.patch
new file mode 100644
index 0000000..c2965c4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0003-correct-include-directive-path-for-config.h.patch
@@ -0,0 +1,45 @@
+From ecbdea7bd8b08205f1bc3f6b72d4b4a80f313fcb Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 10 Jun 2017 01:03:17 -0700
+Subject: [PATCH 3/6] correct include directive path for config.h
+
+when building out of source tree, it can not find
+the generated config.h otherwise
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ memcheck/tests/linux/syscalls-2007.c  | 2 +-
+ memcheck/tests/linux/syslog-syscall.c | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/memcheck/tests/linux/syscalls-2007.c b/memcheck/tests/linux/syscalls-2007.c
+index b61c6d5..cc3fd62 100644
+--- a/memcheck/tests/linux/syscalls-2007.c
++++ b/memcheck/tests/linux/syscalls-2007.c
+@@ -10,7 +10,7 @@
+ 
+ #define _GNU_SOURCE
+ 
+-#include "../../config.h"
++#include "config.h"
+ #include <fcntl.h>
+ #include <signal.h>
+ #include <stdint.h>
+diff --git a/memcheck/tests/linux/syslog-syscall.c b/memcheck/tests/linux/syslog-syscall.c
+index 1143722..21e758b 100644
+--- a/memcheck/tests/linux/syslog-syscall.c
++++ b/memcheck/tests/linux/syslog-syscall.c
+@@ -6,7 +6,7 @@
+  *    klogctl().
+  */
+ 
+-#include "../../config.h"
++#include "config.h"
+ #include <stdio.h>
+ #if defined(HAVE_SYS_KLOG_H)
+ #include <sys/klog.h>
+-- 
+2.13.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0003-tests-seg_override-Replace-__modify_ldt-with-syscall.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0003-tests-seg_override-Replace-__modify_ldt-with-syscall.patch
new file mode 100644
index 0000000..fa1344c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0003-tests-seg_override-Replace-__modify_ldt-with-syscall.patch
@@ -0,0 +1,68 @@
+From d103475875858ab8a2e6b53ce178bb2f63883d4c Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 5 Jul 2017 17:37:56 -0700
+Subject: [PATCH 3/3] tests/seg_override: Replace __modify_ldt() with syscall()
+
+__modify_ldt() is specific to glibc, replacing it with syscall()
+makes it more portable.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Submitted
+
+ none/tests/x86-linux/seg_override.c | 15 ++++++---------
+ 1 file changed, 6 insertions(+), 9 deletions(-)
+
+diff --git a/none/tests/x86-linux/seg_override.c b/none/tests/x86-linux/seg_override.c
+index b7619c9..c89874b 100644
+--- a/none/tests/x86-linux/seg_override.c
++++ b/none/tests/x86-linux/seg_override.c
+@@ -2,6 +2,8 @@
+ #include <stdio.h>
+ #include <errno.h>
+ #include <string.h>
++#include <unistd.h>
++#include <syscall.h>
+ 
+ /* Stuff from Wine. */
+ 
+@@ -52,14 +54,11 @@ inline static unsigned int wine_ldt_get_limit( const LDT_ENTRY *ent )
+ /* our copy of the ldt */
+ LDT_ENTRY ldt_copy[8192];
+ 
+-/* System call to set LDT entry.  */
+-//extern int __modify_ldt (int, struct modify_ldt_ldt_s *, size_t);
+-extern int __modify_ldt (int, void *, size_t);
+-
+ void print_ldt ( void )
+ {
+    int res;
+-   res = __modify_ldt( 0, ldt_copy, 8192*sizeof(LDT_ENTRY) );
++   /* System call to set LDT entry.  */
++   res = syscall(SYS_modify_ldt, 0, ldt_copy, 8192*sizeof(LDT_ENTRY) );
+    printf("got %d bytes\n", res );   
+    perror("error is");
+ }
+@@ -83,9 +82,6 @@ struct modify_ldt_ldt_s
+   unsigned int empty:25;
+ };
+ 
+-/* System call to set LDT entry.  */
+-//extern int __modify_ldt (int, struct modify_ldt_ldt_s *, size_t);
+-
+ void set_ldt1 ( void* base )
+ {
+   int stat;
+@@ -102,7 +98,8 @@ void set_ldt1 ( void* base )
+   ldt_entry.read_exec_only = 0;
+   ldt_entry.limit_in_pages = 0;
+   ldt_entry.seg_not_present = 0;
+-  stat = __modify_ldt (1, &ldt_entry, sizeof (ldt_entry));
++  /* System call to set LDT entry.  */
++  stat = syscall(SYS_modify_ldt, 1, &ldt_entry, sizeof (ldt_entry));
+   printf("stat = %d\n", stat);
+ }
+ 
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0004-Fix-out-of-tree-builds.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0004-Fix-out-of-tree-builds.patch
index ed313d6..39022d0 100644
--- a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0004-Fix-out-of-tree-builds.patch
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0004-Fix-out-of-tree-builds.patch
@@ -1,4 +1,4 @@
-From 38ae233b6893a4eec7f9ed6d8ad02392bca8eaed Mon Sep 17 00:00:00 2001
+From 739421e253e6eba3eb6438651822f80fa9c0502a Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
 Date: Tue, 15 Dec 2015 15:31:50 +0200
 Subject: [PATCH 1/2] Fix out of tree builds.
@@ -13,21 +13,21 @@
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 
 ---
- configure.ac | 64 ++++++++++++++++++++++++++++++------------------------------
- 1 file changed, 32 insertions(+), 32 deletions(-)
+ configure.ac | 68 ++++++++++++++++++++++++++++++------------------------------
+ 1 file changed, 34 insertions(+), 34 deletions(-)
 
 diff --git a/configure.ac b/configure.ac
-index 8ab7f9b..9366dc7 100644
+index 3874296fde0b..7a5ba2c8557e 100644
 --- a/configure.ac
 +++ b/configure.ac
-@@ -377,44 +377,44 @@ case "${host_os}" in
+@@ -373,50 +373,50 @@ case "${host_os}" in
  	     9.*)
  		  AC_MSG_RESULT([Darwin 9.x (${kernel}) / Mac OS X 10.5 Leopard])
  		  AC_DEFINE([DARWIN_VERS], DARWIN_10_5, [Darwin / Mac OS X version])
 -		  DEFAULT_SUPP="darwin9.supp ${DEFAULT_SUPP}"
 -		  DEFAULT_SUPP="darwin9-drd.supp ${DEFAULT_SUPP}"
-+		  DEFAULT_SUPP="$srcdir/darwin9.supp ${DEFAULT_SUPP}"
-+		  DEFAULT_SUPP="$srcdir/darwin9-drd.supp ${DEFAULT_SUPP}"
++ 		  DEFAULT_SUPP="$srcdir/darwin9.supp ${DEFAULT_SUPP}"
++ 		  DEFAULT_SUPP="$srcdir/darwin9-drd.supp ${DEFAULT_SUPP}"
  		  ;;
  	     10.*)
  		  AC_MSG_RESULT([Darwin 10.x (${kernel}) / Mac OS X 10.6 Snow Leopard])
@@ -77,9 +77,17 @@
 +		  DEFAULT_SUPP="$srcdir/darwin15.supp ${DEFAULT_SUPP}"
 +		  DEFAULT_SUPP="$srcdir/darwin10-drd.supp ${DEFAULT_SUPP}"
  		  ;;
+ 	     16.*)
+ 		  AC_MSG_RESULT([Darwin 16.x (${kernel}) / macOS 10.12 Sierra])
+ 		  AC_DEFINE([DARWIN_VERS], DARWIN_10_12, [Darwin / Mac OS X version])
+-		  DEFAULT_SUPP="darwin16.supp ${DEFAULT_SUPP}"
+-		  DEFAULT_SUPP="darwin10-drd.supp ${DEFAULT_SUPP}"
++		  DEFAULT_SUPP="$srcdir/darwin16.supp ${DEFAULT_SUPP}"
++		  DEFAULT_SUPP="$srcdir/darwin10-drd.supp ${DEFAULT_SUPP}"
+ 		  ;;
               *) 
  		  AC_MSG_RESULT([unsupported (${kernel})])
-@@ -426,13 +426,13 @@ case "${host_os}" in
+@@ -428,13 +428,13 @@ case "${host_os}" in
       solaris2.11*)
          AC_MSG_RESULT([ok (${host_os})])
          VGCONF_OS="solaris"
@@ -95,7 +103,7 @@
          ;;
  
       *) 
-@@ -1015,29 +1015,29 @@ AC_MSG_CHECKING([the glibc version])
+@@ -982,29 +982,29 @@ AC_MSG_CHECKING([the glibc version])
  case "${GLIBC_VERSION}" in
       2.2)
  	AC_MSG_RESULT(${GLIBC_VERSION} family)
@@ -135,7 +143,7 @@
  	;;
       2.*)
  	AC_MSG_RESULT(${GLIBC_VERSION} family)
-@@ -1046,8 +1046,8 @@ case "${GLIBC_VERSION}" in
+@@ -1013,8 +1013,8 @@ case "${GLIBC_VERSION}" in
  	AC_DEFINE([GLIBC_MANDATORY_INDEX_AND_STRLEN_REDIRECT], 1,
  		  [Define to 1 if index() and strlen() have been optimized heavily (x86 glibc >= 2.12)])
  	DEFAULT_SUPP="glibc-2.X.supp ${DEFAULT_SUPP}"
@@ -146,7 +154,7 @@
  	;;
       darwin)
  	AC_MSG_RESULT(Darwin)
-@@ -1057,7 +1057,7 @@ case "${GLIBC_VERSION}" in
+@@ -1024,7 +1024,7 @@ case "${GLIBC_VERSION}" in
       bionic)
  	AC_MSG_RESULT(Bionic)
  	AC_DEFINE([BIONIC_LIBC], 1, [Define to 1 if you're using Bionic])
@@ -155,7 +163,7 @@
  	;;
       solaris)
  	AC_MSG_RESULT(Solaris)
-@@ -1079,11 +1079,11 @@ if test "$VGCONF_OS" != "solaris"; then
+@@ -1051,11 +1051,11 @@ if test "$VGCONF_OS" != "solaris"; then
      # attempt to detect whether such libraries are installed on the
      # build machine (or even if any X facilities are present); just
      # add the suppressions antidisirregardless.
@@ -171,5 +179,5 @@
  
  
 -- 
-2.6.2
+2.13.2.3.g44cd85c14
 
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0004-pth_atfork1.c-Define-error-API-for-musl.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0004-pth_atfork1.c-Define-error-API-for-musl.patch
new file mode 100644
index 0000000..1cb7062
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0004-pth_atfork1.c-Define-error-API-for-musl.patch
@@ -0,0 +1,37 @@
+From fb77fef4f866dac7bcc6d1ae025da60564869f84 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 10 Jun 2017 01:06:11 -0700
+Subject: [PATCH 4/6] pth_atfork1.c: Define error() API for musl
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ none/tests/pth_atfork1.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/none/tests/pth_atfork1.c b/none/tests/pth_atfork1.c
+index 34201ef..b7f5f2d 100644
+--- a/none/tests/pth_atfork1.c
++++ b/none/tests/pth_atfork1.c
+@@ -18,7 +18,7 @@
+    Boston, MA 02111-1307, USA.  */
+ 
+ #include <errno.h>
+-#if !defined(__APPLE__) && !defined(__sun)
++#if !defined(__APPLE__) && !defined(__sun) && defined(__GLIBC__)
+ # include <error.h>
+ #endif
+ #include <stdlib.h>
+@@ -27,7 +27,7 @@
+ #include <sys/wait.h>
+ #include <stdio.h>
+ 
+-#if defined(__APPLE__) || defined(__sun)
++#if defined(__APPLE__) || defined(__sun) || (defined(__linux__) && !defined(__GLIBC__))
+ #include <string.h>  /* strerror */
+ static void error (int status, int errnum, char* msg)
+ {
+-- 
+2.13.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0005-tc20_verifywrap.c-Fake-__GLIBC_PREREQ-with-musl.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0005-tc20_verifywrap.c-Fake-__GLIBC_PREREQ-with-musl.patch
new file mode 100644
index 0000000..6176640
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0005-tc20_verifywrap.c-Fake-__GLIBC_PREREQ-with-musl.patch
@@ -0,0 +1,30 @@
+From b4b9f072c22f96844e02cb9d68f7ff2408680817 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 10 Jun 2017 01:07:59 -0700
+Subject: [PATCH 5/6] tc20_verifywrap.c: Fake __GLIBC_PREREQ with musl
+
+similar to sun
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ helgrind/tests/tc20_verifywrap.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/helgrind/tests/tc20_verifywrap.c b/helgrind/tests/tc20_verifywrap.c
+index c110000..a311a49 100644
+--- a/helgrind/tests/tc20_verifywrap.c
++++ b/helgrind/tests/tc20_verifywrap.c
+@@ -20,7 +20,7 @@
+ 
+ #if !defined(__APPLE__)
+ 
+-#if defined(__sun__)
++#if defined(__sun__) || (defined(__linux__) && !defined(__GLIBC__))
+ /* Fake __GLIBC_PREREQ on Solaris. Pretend glibc >= 2.4. */
+ # define __GLIBC_PREREQ
+ #else
+-- 
+2.13.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0006-pth_detached3.c-Dereference-pthread_t-before-adding-.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0006-pth_detached3.c-Dereference-pthread_t-before-adding-.patch
new file mode 100644
index 0000000..05886c7
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/0006-pth_detached3.c-Dereference-pthread_t-before-adding-.patch
@@ -0,0 +1,32 @@
+From a6547fc17c120dbd95b852f50b0c4bdee4fedb9a Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 10 Jun 2017 01:20:32 -0700
+Subject: [PATCH 6/6] pth_detached3.c: Dereference pthread_t before adding
+ offset to it
+
+Fixes
+error: invalid use of undefined type 'struct __pthread'
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ drd/tests/pth_detached3.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/drd/tests/pth_detached3.c b/drd/tests/pth_detached3.c
+index c02eef1..35d43a6 100644
+--- a/drd/tests/pth_detached3.c
++++ b/drd/tests/pth_detached3.c
+@@ -21,7 +21,7 @@ int main(int argc, char** argv)
+   pthread_detach(thread);
+ 
+   /* Invoke pthread_detach() with an invalid thread ID. */
+-  pthread_detach(thread + 8);
++  pthread_detach((pthread_t*)(&thread + 8));
+ 
+   fprintf(stderr, "Finished.\n");
+ 
+-- 
+2.13.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/link-gz-tests.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/link-gz-tests.patch
new file mode 100644
index 0000000..db32239
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/link-gz-tests.patch
@@ -0,0 +1,27 @@
+When checking if the compiler supports compressed debug sections we need to
+actually link instead of just compile.  Otherwise the compiler thinks that
+they are supported, but gold does not support -gz=zlib.
+
+Upstream-Status: Backport (r16459)
+Signed-off-by: Ross Burton <ross.burton@intel.com>
+
+--- a/configure.ac~	2017-07-11 11:53:16.000000000 +0100
++++ b/configure.ac	2017-07-11 18:16:13.674130483 +0100
+@@ -2119,7 +2119,7 @@
+ safe_CFLAGS=$CFLAGS
+ CFLAGS="-g -gz=zlib"
+ 
+-AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[ ]], [[
++AC_LINK_IFELSE([AC_LANG_PROGRAM([[ ]], [[
+   return 0;
+ ]])], [
+ ac_have_gz_zlib=yes
+@@ -2139,7 +2139,7 @@
+ safe_CFLAGS=$CFLAGS
+ CFLAGS="-g -gz=zlib-gnu"
+ 
+-AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[ ]], [[
++AC_LINK_IFELSE([AC_LANG_PROGRAM([[ ]], [[
+   return 0;
+ ]])], [
+ ac_have_gz_zlib_gnu=yes
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/ppc-headers.patch b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/ppc-headers.patch
new file mode 100644
index 0000000..51259db
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/ppc-headers.patch
@@ -0,0 +1,87 @@
+Backport a patch from upstream to fix test compilation for PPC where
+system headers don't get included.
+
+Upstream-Status: Backport
+Signed-off-by: Ross Burton <ross.burton@intel.com>
+
+r16450 | mjw | 2017-06-16 10:33:35 +0100 (Fri, 16 Jun 2017) | 7 lines
+
+ppc64 doesn't compile test_isa_2_06_partx.c without VSX support
+
+The #ifdef HAS_VSX guard is wrongly placed. It makes the standard
+include headers not be used. Causing a build failure. Fix by moving
+the #ifdef HAS_VSX after the standard includes.
+
+Index: none/tests/ppc32/test_isa_2_06_part3.c
+===================================================================
+--- a/none/tests/ppc32/test_isa_2_06_part3.c	(revision 16449)
++++ b/none/tests/ppc32/test_isa_2_06_part3.c	(revision 16450)
+@@ -20,17 +20,18 @@
+  The GNU General Public License is contained in the file COPYING.
+  */
+ 
+-#ifdef HAS_VSX
+-
+ #include <stdio.h>
+ #include <stdint.h>
+ #include <stdlib.h>
+ #include <string.h>
+ #include <malloc.h>
+-#include <altivec.h>
+ #include <math.h>
+ #include <unistd.h>    // getopt
+ 
++#ifdef HAS_VSX
++
++#include <altivec.h>
++
+ #ifndef __powerpc64__
+ typedef uint32_t HWord_t;
+ #else
+Index: none/tests/ppc32/test_isa_2_06_part1.c
+===================================================================
+--- a/none/tests/ppc32/test_isa_2_06_part1.c	(revision 16449)
++++ b/none/tests/ppc32/test_isa_2_06_part1.c	(revision 16450)
+@@ -20,13 +20,14 @@
+  The GNU General Public License is contained in the file COPYING.
+  */
+ 
+-#ifdef HAS_VSX
+-
+ #include <stdio.h>
+ #include <stdint.h>
+ #include <stdlib.h>
+ #include <string.h>
+ #include <malloc.h>
++
++#ifdef HAS_VSX
++
+ #include <altivec.h>
+ 
+ #ifndef __powerpc64__
+Index: none/tests/ppc32/test_isa_2_06_part2.c
+===================================================================
+--- a/none/tests/ppc32/test_isa_2_06_part2.c	(revision 16449)
++++ b/none/tests/ppc32/test_isa_2_06_part2.c	(revision 16450)
+@@ -20,17 +20,18 @@
+  The GNU General Public License is contained in the file COPYING.
+  */
+ 
+-#ifdef HAS_VSX
+-
+ #include <stdio.h>
+ #include <stdint.h>
+ #include <stdlib.h>
+ #include <string.h>
+ #include <malloc.h>
+-#include <altivec.h>
+ #include <math.h>
+ #include <unistd.h>    // getopt
+ 
++#ifdef HAS_VSX
++
++#include <altivec.h>
++
+ #ifndef __powerpc64__
+ typedef uint32_t HWord_t;
+ #else
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/run-ptest b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/run-ptest
index f9a72ec..447d33c 100755
--- a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/run-ptest
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind/run-ptest
@@ -2,11 +2,24 @@
 
 # run-ptest - 'ptest' test infrastructure shell script that
 #   wraps the valgrind regression script vg_regtest. 
-#   Must be run in the /usr/lib/valgrind/ptest directory. 
 #
 # Dave Lerner <dave.lerner@windriver.com>
 ###############################################################
 VALGRINDLIB=@libdir@/valgrind
-tests/vg_regtest --all \
+LOG="${VALGRINDLIB}/ptest/valgrind_ptest_$(date +%Y%m%d-%H%M%S).log"
+
+cd ${VALGRINDLIB}/ptest && ./tests/vg_regtest --all \
     --valgrind=/usr/bin/valgrind --valgrind-lib=$VALGRINDLIB \
-	--yocto-ptest
+    --yocto-ptest 2>&1|tee ${LOG}
+
+passed=`grep PASS: ${LOG}|wc -l`
+failed=`grep FAIL: ${LOG}|wc -l`
+skipped=`grep SKIP: ${LOG}|wc -l`
+all=$((passed + failed + skipped))
+
+( echo "=== Test Summary ==="
+  echo "TOTAL: ${all}"
+  echo "PASSED: ${passed}"
+  echo "FAILED: ${failed}"
+  echo "SKIPPED: ${skipped}"
+) | tee -a ${LOG}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind_3.12.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind_3.12.0.bb
deleted file mode 100644
index 4f8cd3f..0000000
--- a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind_3.12.0.bb
+++ /dev/null
@@ -1,127 +0,0 @@
-SUMMARY = "Valgrind memory debugger and instrumentation framework"
-HOMEPAGE = "http://valgrind.org/"
-BUGTRACKER = "http://valgrind.org/support/bug_reports.html"
-LICENSE = "GPLv2 & GPLv2+ & BSD"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://include/pub_tool_basics.h;beginline=1;endline=29;md5=ebb8e640ef633f940c425686c873f9fa \
-                    file://include/valgrind.h;beginline=1;endline=56;md5=4b5e24908e53016ea561c45f4234a327 \
-                    file://COPYING.DOCS;md5=24ea4c7092233849b4394699333b5c56"
-
-X11DEPENDS = "virtual/libx11"
-DEPENDS = "${@bb.utils.contains('DISTRO_FEATURES', 'x11', '${X11DEPENDS}', '', d)} \
-           ${@bb.utils.contains('DISTRO_FEATURES', 'ptest', 'boost', '', d)} \
-        "
-
-SRC_URI = "http://www.valgrind.org/downloads/valgrind-${PV}.tar.bz2 \
-           file://fixed-perl-path.patch \
-           file://Added-support-for-PPC-instructions-mfatbu-mfatbl.patch \
-           file://run-ptest \
-           file://0002-remove-rpath.patch \
-           file://0004-Fix-out-of-tree-builds.patch \
-           file://0005-Modify-vg_test-wrapper-to-support-PTEST-formats.patch \
-           file://0001-Remove-tests-that-fail-to-build-on-some-PPC32-config.patch \
-           file://use-appropriate-march-mcpu-mfpu-for-ARM-test-apps.patch \
-           file://avoid-neon-for-targets-which-don-t-support-it.patch \
-           file://valgrind-make-ld-XXX.so-strlen-intercept-optional.patch \
-"
-SRC_URI_append_libc-musl = "\
-           file://0001-fix-build-for-musl-targets.patch \
-"
-SRC_URI[md5sum] = "6eb03c0c10ea917013a7622e483d61bb"
-SRC_URI[sha256sum] = "67ca4395b2527247780f36148b084f5743a68ab0c850cb43e4a5b4b012cf76a1"
-
-COMPATIBLE_HOST = '(i.86|x86_64|arm|aarch64|mips|powerpc|powerpc64).*-linux'
-
-# valgrind supports armv7 and above
-COMPATIBLE_HOST_armv4 = 'null'
-COMPATIBLE_HOST_armv5 = 'null'
-COMPATIBLE_HOST_armv6 = 'null'
-
-# X32 isn't supported by valgrind at this time
-COMPATIBLE_HOST_linux-gnux32 = 'null'
-
-# Disable for some MIPS variants
-COMPATIBLE_HOST_mipsarchn32 = 'null'
-COMPATIBLE_HOST_mipsarchr6 = 'null'
-
-inherit autotools ptest
-
-EXTRA_OECONF = "--enable-tls --without-mpicc"
-EXTRA_OECONF += "${@['--enable-only32bit','--enable-only64bit'][d.getVar('SITEINFO_BITS') != '32']}"
-
-# valgrind checks host_cpu "armv7*)", so we need to over-ride the autotools.bbclass default --host option
-EXTRA_OECONF_append_arm = " --host=armv7${HOST_VENDOR}-${HOST_OS}"
-
-EXTRA_OEMAKE = "-w"
-
-CACHED_CONFIGUREVARS += "ac_cv_path_PERL='/usr/bin/env perl'"
-
-# valgrind likes to control its own optimisation flags. It generally defaults
-# to -O2 but uses -O0 for some specific test apps etc. Passing our own flags
-# (via CFLAGS) means we interfere with that. Only pass DEBUG_FLAGS to it
-# which fixes build path issue in DWARF.
-SELECTED_OPTIMIZATION = "${DEBUG_FLAGS}"
-
-CFLAGS_append_libc-uclibc = " -D__UCLIBC__ "
-
-do_install_append () {
-    install -m 644 ${B}/default.supp ${D}/${libdir}/valgrind/
-}
-
-RDEPENDS_${PN} += "perl"
-
-# valgrind needs debug information for ld.so at runtime in order to
-# redirect functions like strlen.
-RRECOMMENDS_${PN} += "${TCLIBC}-dbg"
-
-RDEPENDS_${PN}-ptest += " sed perl perl-module-file-glob"
-RDEPENDS_${PN}-ptest_append_libc-glibc = " glibc-utils"
-
-# One of the tests contains a bogus interpreter path on purpose.
-# Skip file dependency check
-SKIP_FILEDEPS_${PN}-ptest = '1'
-
-do_compile_ptest() {
-    oe_runmake check
-}
-
-do_install_ptest() {
-    chmod +x ${B}/tests/vg_regtest
-
-    # The test application binaries are not automatically installed.
-    # Grab them from the build directory.
-    #
-    # The regression tests require scripts and data files that are not
-    # copied to the build directory.  They must be copied from the
-    # source directory. 
-    saved_dir=$PWD
-    for parent_dir in ${S} ${B} ; do
-        cd $parent_dir
-
-        # exclude shell or the package won't install
-        rm -rf none/tests/shell* 2>/dev/null
-
-        subdirs="tests cachegrind/tests callgrind/tests drd/tests helgrind/tests massif/tests memcheck/tests none/tests"
-
-        # Get the vg test scripts, filters, and expected files
-        for dir in $subdirs ; do
-            find $dir | cpio -pvdu ${D}${PTEST_PATH}
-        done
-        cd $saved_dir
-    done
-
-    # clean out build artifacts before building the rpm
-    find ${D}${PTEST_PATH} \
-         \( -name "Makefile*" \
-        -o -name "*.o" \
-        -o -name "*.c" \
-        -o -name "*.S" \
-        -o -name "*.h" \) \
-        -exec rm {} \;
-
-    # needed by massif tests
-    cp ${B}/massif/ms_print ${D}${PTEST_PATH}/massif/ms_print
-
-    # handle multilib
-    sed -i s:@libdir@:${libdir}:g ${D}${PTEST_PATH}/run-ptest
-}
diff --git a/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind_3.13.0.bb b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind_3.13.0.bb
new file mode 100644
index 0000000..25b4126
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-devtools/valgrind/valgrind_3.13.0.bb
@@ -0,0 +1,153 @@
+SUMMARY = "Valgrind memory debugger and instrumentation framework"
+HOMEPAGE = "http://valgrind.org/"
+BUGTRACKER = "http://valgrind.org/support/bug_reports.html"
+LICENSE = "GPLv2 & GPLv2+ & BSD"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://include/pub_tool_basics.h;beginline=6;endline=29;md5=d4de0407239381463cf01dd276d7c22e \
+                    file://include/valgrind.h;beginline=1;endline=56;md5=ad3b317f3286b6b704575d9efe6ca5df \
+                    file://COPYING.DOCS;md5=24ea4c7092233849b4394699333b5c56"
+
+X11DEPENDS = "virtual/libx11"
+DEPENDS = "${@bb.utils.contains('DISTRO_FEATURES', 'x11', '${X11DEPENDS}', '', d)} \
+           ${@bb.utils.contains('DISTRO_FEATURES', 'ptest', 'boost', '', d)} \
+        "
+
+SRC_URI = "ftp://sourceware.org/pub/valgrind/valgrind-${PV}.tar.bz2 \
+           file://fixed-perl-path.patch \
+           file://Added-support-for-PPC-instructions-mfatbu-mfatbl.patch \
+           file://run-ptest \
+           file://0002-remove-rpath.patch \
+           file://0004-Fix-out-of-tree-builds.patch \
+           file://0005-Modify-vg_test-wrapper-to-support-PTEST-formats.patch \
+           file://0001-Remove-tests-that-fail-to-build-on-some-PPC32-config.patch \
+           file://use-appropriate-march-mcpu-mfpu-for-ARM-test-apps.patch \
+           file://avoid-neon-for-targets-which-don-t-support-it.patch \
+           file://valgrind-make-ld-XXX.so-strlen-intercept-optional.patch \
+           file://0001-makefiles-Drop-setting-mcpu-to-cortex-a8-on-arm-arch.patch \
+           file://0001-str_tester.c-Limit-rawmemchr-test-to-glibc.patch \
+           file://0001-sigqueue-Rename-_sifields-to-__si_fields-on-musl.patch \
+           file://0002-context-APIs-are-not-available-on-musl.patch \
+           file://0003-correct-include-directive-path-for-config.h.patch \
+           file://0004-pth_atfork1.c-Define-error-API-for-musl.patch \
+           file://0005-tc20_verifywrap.c-Fake-__GLIBC_PREREQ-with-musl.patch \
+           file://0006-pth_detached3.c-Dereference-pthread_t-before-adding-.patch \
+           file://0001-memcheck-tests-Use-ucontext_t-instead-of-struct-ucon.patch \
+           file://0001-memcheck-arm64-Define-__THROW-if-not-already-defined.patch \
+           file://0002-memcheck-x86-Define-__THROW-if-not-defined.patch \
+           file://0003-tests-seg_override-Replace-__modify_ldt-with-syscall.patch \
+           file://link-gz-tests.patch \
+           file://ppc-headers.patch \
+           "
+SRC_URI[md5sum] = "817dd08f1e8a66336b9ff206400a5369"
+SRC_URI[sha256sum] = "d76680ef03f00cd5e970bbdcd4e57fb1f6df7d2e2c071635ef2be74790190c3b"
+UPSTREAM_CHECK_REGEX = "valgrind-(?P<pver>\d+(\.\d+)+)\.tar"
+
+COMPATIBLE_HOST = '(i.86|x86_64|arm|aarch64|mips|powerpc|powerpc64).*-linux'
+
+# valgrind supports armv7 and above
+COMPATIBLE_HOST_armv4 = 'null'
+COMPATIBLE_HOST_armv5 = 'null'
+COMPATIBLE_HOST_armv6 = 'null'
+
+# X32 isn't supported by valgrind at this time
+COMPATIBLE_HOST_linux-gnux32 = 'null'
+COMPATIBLE_HOST_linux-muslx32 = 'null'
+
+# Disable for some MIPS variants
+COMPATIBLE_HOST_mipsarchn32 = 'null'
+COMPATIBLE_HOST_mipsarchr6 = 'null'
+
+inherit autotools ptest
+
+EXTRA_OECONF = "--enable-tls --without-mpicc"
+EXTRA_OECONF += "${@['--enable-only32bit','--enable-only64bit'][d.getVar('SITEINFO_BITS') != '32']}"
+
+# valgrind checks host_cpu "armv7*)", so we need to over-ride the autotools.bbclass default --host option
+EXTRA_OECONF_append_arm = " --host=armv7${HOST_VENDOR}-${HOST_OS}"
+TARGET_CC_ARCH_remove_arm = "${@get_mcpu(d)}"
+
+EXTRA_OEMAKE = "-w"
+
+CACHED_CONFIGUREVARS += "ac_cv_path_PERL='/usr/bin/env perl'"
+
+# valgrind likes to control its own optimisation flags. It generally defaults
+# to -O2 but uses -O0 for some specific test apps etc. Passing our own flags
+# (via CFLAGS) means we interfere with that. Only pass DEBUG_FLAGS to it
+# which fixes build path issue in DWARF.
+SELECTED_OPTIMIZATION = "${DEBUG_FLAGS}"
+
+def get_mcpu(d):
+    for arg in (d.getVar('TUNE_CCARGS') or '').split():
+        if arg.startswith('-mcpu='):
+            return arg
+        else:
+            continue
+    return ""
+
+do_configure_prepend () {
+    rm -rf ${S}/config.h
+}
+
+do_install_append () {
+    install -m 644 ${B}/default.supp ${D}/${libdir}/valgrind/
+}
+
+TUNE = "${@strip_mcpu(d)}"
+
+RDEPENDS_${PN} += "perl"
+
+# valgrind needs debug information for ld.so at runtime in order to
+# redirect functions like strlen.
+RRECOMMENDS_${PN} += "${TCLIBC}-dbg"
+
+RDEPENDS_${PN}-ptest += " sed perl perl-module-file-glob"
+RDEPENDS_${PN}-ptest_append_libc-glibc = " glibc-utils"
+
+# One of the tests contains a bogus interpreter path on purpose.
+# Skip file dependency check
+SKIP_FILEDEPS_${PN}-ptest = '1'
+
+do_compile_ptest() {
+    oe_runmake check
+}
+
+do_install_ptest() {
+    chmod +x ${B}/tests/vg_regtest
+
+    # The test application binaries are not automatically installed.
+    # Grab them from the build directory.
+    #
+    # The regression tests require scripts and data files that are not
+    # copied to the build directory.  They must be copied from the
+    # source directory. 
+    saved_dir=$PWD
+    for parent_dir in ${S} ${B} ; do
+        cd $parent_dir
+
+        # exclude shell or the package won't install
+        rm -rf none/tests/shell* 2>/dev/null
+
+        subdirs="tests cachegrind/tests callgrind/tests drd/tests helgrind/tests massif/tests memcheck/tests none/tests"
+
+        # Get the vg test scripts, filters, and expected files
+        for dir in $subdirs ; do
+            find $dir | cpio -pvdu ${D}${PTEST_PATH}
+        done
+        cd $saved_dir
+    done
+
+    # clean out build artifacts before building the rpm
+    find ${D}${PTEST_PATH} \
+         \( -name "Makefile*" \
+        -o -name "*.o" \
+        -o -name "*.c" \
+        -o -name "*.S" \
+        -o -name "*.h" \) \
+        -exec rm {} \;
+
+    # needed by massif tests
+    cp ${B}/massif/ms_print ${D}${PTEST_PATH}/massif/ms_print
+
+    # handle multilib
+    sed -i s:@libdir@:${libdir}:g ${D}${PTEST_PATH}/run-ptest
+}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/acpica/acpica_20150515.bb b/import-layers/yocto-poky/meta/recipes-extended/acpica/acpica_20150515.bb
deleted file mode 100644
index 1326ebd..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/acpica/acpica_20150515.bb
+++ /dev/null
@@ -1,49 +0,0 @@
-SUMMARY = "ACPICA tools for the development and debug of ACPI tables"
-DESCRIPTION = "The ACPI Component Architecture (ACPICA) project provides an \
-OS-independent reference implementation of the Advanced Configuration and \
-Power Interface Specification (ACPI). ACPICA code contains those portions of \
-ACPI meant to be directly integrated into the host OS as a kernel-resident \
-subsystem, and a small set of tools to assist in developing and debugging \
-ACPI tables."
-
-HOMEPAGE = "http://www.acpica.org/"
-SECTION = "console/tools"
-
-LICENSE = "BSD | GPLv2"
-LIC_FILES_CHKSUM = "file://generate/unix/readme.txt;md5=204407e197c1a01154a48f6c6280c3aa"
-
-COMPATIBLE_HOST = "(i.86|x86_64|arm|aarch64).*-linux"
-
-DEPENDS = "bison flex"
-
-SRC_URI = "https://acpica.org/sites/acpica/files/acpica-unix2-${PV}.tar.gz \
-    file://no-werror.patch \
-    file://rename-yy_scan_string-manually.patch \
-    file://manipulate-fds-instead-of-FILE.patch \
-    "
-SRC_URI[md5sum] = "2bc4a7ccc82de9df9fa964f784ecb29c"
-SRC_URI[sha256sum] = "61204ec56d71bc9bfa2ee2ade4c66f7e8541772ac72ef8ccc20b3f339cc96374"
-UPSTREAM_CHECK_URI = "https://acpica.org/downloads"
-
-S = "${WORKDIR}/acpica-unix2-${PV}"
-
-EXTRA_OEMAKE = "CC='${CC}' 'OPT_CFLAGS=-Wall'"
-
-do_install() {
-    install -D -p -m0755 generate/unix/bin*/iasl ${D}${bindir}/iasl
-    install -D -p -m0755 generate/unix/bin*/acpibin ${D}${bindir}/acpibin
-    install -D -p -m0755 generate/unix/bin*/acpiexec ${D}${bindir}/acpiexec
-    install -D -p -m0755 generate/unix/bin*/acpihelp ${D}${bindir}/acpihelp
-    install -D -p -m0755 generate/unix/bin*/acpinames ${D}${bindir}/acpinames
-    install -D -p -m0755 generate/unix/bin*/acpisrc ${D}${bindir}/acpisrc
-    install -D -p -m0755 generate/unix/bin*/acpixtract ${D}${bindir}/acpixtract
-}
-
-# iasl*.bb is a subset of this recipe, so RREPLACE it
-PROVIDES = "iasl"
-RPROVIDES_${PN} += "iasl"
-RREPLACES_${PN} += "iasl"
-RCONFLIGHTS_${PN} += "iasl"
-
-NATIVE_INSTALL_WORKS = "1"
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/acpica/acpica_20170303.bb b/import-layers/yocto-poky/meta/recipes-extended/acpica/acpica_20170303.bb
new file mode 100644
index 0000000..868505b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/acpica/acpica_20170303.bb
@@ -0,0 +1,55 @@
+SUMMARY = "ACPICA tools for the development and debug of ACPI tables"
+DESCRIPTION = "The ACPI Component Architecture (ACPICA) project provides an \
+OS-independent reference implementation of the Advanced Configuration and \
+Power Interface Specification (ACPI). ACPICA code contains those portions of \
+ACPI meant to be directly integrated into the host OS as a kernel-resident \
+subsystem, and a small set of tools to assist in developing and debugging \
+ACPI tables."
+
+HOMEPAGE = "http://www.acpica.org/"
+SECTION = "console/tools"
+
+LICENSE = "BSD | GPLv2"
+LIC_FILES_CHKSUM = "file://generate/unix/readme.txt;md5=204407e197c1a01154a48f6c6280c3aa"
+
+COMPATIBLE_HOST = "(i.86|x86_64|arm|aarch64).*-linux"
+
+DEPENDS = "bison flex"
+
+SRC_URI = "https://acpica.org/sites/acpica/files/acpica-unix2-${PV}.tar.gz \
+    file://no-werror.patch \
+    file://rename-yy_scan_string-manually.patch \
+    file://manipulate-fds-instead-of-FILE.patch;striplevel=2 \
+    file://0001-Linux-add-support-for-X32-ABI-compilation.patch \
+    "
+SRC_URI[md5sum] = "48ef4314fb4ffdd0c96f14dcf20544e1"
+SRC_URI[sha256sum] = "b2d81e84107ac9a02be86ea43cbea7afa8fd4b4150270bc88c2d4c9fea0b8aad"
+UPSTREAM_CHECK_URI = "https://acpica.org/downloads"
+
+S = "${WORKDIR}/acpica-unix2-${PV}"
+
+inherit update-alternatives
+
+ALTERNATIVE_PRIORITY = "100"
+ALTERNATIVE_${PN} = "acpixtract"
+
+EXTRA_OEMAKE = "CC='${CC}' 'OPT_CFLAGS=-Wall'"
+
+do_install() {
+    install -D -p -m0755 generate/unix/bin*/iasl ${D}${bindir}/iasl
+    install -D -p -m0755 generate/unix/bin*/acpibin ${D}${bindir}/acpibin
+    install -D -p -m0755 generate/unix/bin*/acpiexec ${D}${bindir}/acpiexec
+    install -D -p -m0755 generate/unix/bin*/acpihelp ${D}${bindir}/acpihelp
+    install -D -p -m0755 generate/unix/bin*/acpinames ${D}${bindir}/acpinames
+    install -D -p -m0755 generate/unix/bin*/acpisrc ${D}${bindir}/acpisrc
+    install -D -p -m0755 generate/unix/bin*/acpixtract ${D}${bindir}/acpixtract
+}
+
+# iasl*.bb is a subset of this recipe, so RREPLACE it
+PROVIDES = "iasl"
+RPROVIDES_${PN} += "iasl"
+RREPLACES_${PN} += "iasl"
+RCONFLICTS_${PN} += "iasl"
+
+NATIVE_INSTALL_WORKS = "1"
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/acpica/acpitests/aapits-linux.patch b/import-layers/yocto-poky/meta/recipes-extended/acpica/acpitests/aapits-linux.patch
deleted file mode 100644
index 7c5d6b0..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/acpica/acpitests/aapits-linux.patch
+++ /dev/null
@@ -1,336 +0,0 @@
-From: Al Stone <ahs3@ahs3.net>
-Date: Mon, 7 Apr 2014 19:09:37 +0000
-Subject: [PATCH 1/2] Fixup aapits build
-
-From http://git.linaro.org/people/al.stone/acpica-tools.git
-Upstream-status: Unknown
-
-diff -urN acpica-unix2-20130626/tests/aapits/atexec.c acpica-unix2-20130626-aapits/tests/aapits/atexec.c
---- acpica-unix2-20130626/tests/aapits/atexec.c	2013-01-17 12:48:28.000000000 -0700
-+++ acpica-unix2-20130626-aapits/tests/aapits/atexec.c	2013-07-25 13:44:23.023894441 -0600
-@@ -639,6 +639,7 @@
- }
- 
- 
-+#if ACPI_MACHINE_WIDTH == 32
- /*******************************************************************************
-  *
-  * FUNCTION:    AtBuildLocalRSDT
-@@ -757,8 +758,9 @@
-         LocalRSDT->Header.Checksum = (UINT8)~LocalRSDT->Header.Checksum;
-     }
- }
-+#endif
- 
- 
- /*******************************************************************************
-  *
-  * FUNCTION:    AtBuildLocalXSDT
-@@ -1424,7 +1426,7 @@
-         ACPI_WARNING ((AE_INFO,
-             "Request on [%4.4s] is beyond region limit Req-%X+%X, Base=%X, Len-%X\n",
-             (RegionObject->Region.Node)->Name.Ascii, (UINT32) Address,
--            ByteWidth, (UINT32) BufferAddress, Length));
-+            ByteWidth, (UINT32) BufferAddress, (UINT32) Length));
- 
-         return (AE_AML_REGION_LIMIT);
-     }
-@@ -1792,7 +1796,9 @@
-             Path, Obj.Integer.Value, Value);
- #else
-         printf ("API Error: Value of %s is 0x%llx instead of expected 0x%llx\n",
--            Path, Obj.Integer.Value, Value);
-+            Path,
-+	    (long long unsigned int) Obj.Integer.Value,
-+	    (long long unsigned int) Value);
- #endif
-         Status = AE_ERROR;
-     }
-@@ -1871,7 +1877,7 @@
-     {
-         TestErrors++;
-         printf ("Test Error: cannot allocate buffer of %d bytes\n",
--            Results.Length);
-+                (int) Results.Length);
-         return (AE_NO_MEMORY);
-     }
-     Results.Pointer = Object;
-@@ -1952,7 +1956,8 @@
-     {
-         printf ("AtCheckBuffer: unexpected length %d of Buffer vs"
-             " calculated %d bytes\n",
--            Results.Length, ACPI_ROUND_UP_TO_NATIVE_WORD(sizeof (ACPI_OBJECT) + Length));
-+            (int)Results.Length,
-+	    (int)(ACPI_ROUND_UP_TO_NATIVE_WORD(sizeof (ACPI_OBJECT) + Length)));
-     }
- 
-     /* Initialize the return buffer structure */
-@@ -1961,7 +1968,7 @@
-     {
-         TestErrors++;
-         printf ("Test Error: cannot allocate buffer of %d bytes\n",
--            Results.Length);
-+            (int) Results.Length);
-         return (AE_NO_MEMORY);
-     }
-     Results.Pointer = Object;
-diff -urN acpica-unix2-20130626/tests/aapits/atinit.c acpica-unix2-20130626-aapits/tests/aapits/atinit.c
---- acpica-unix2-20130626/tests/aapits/atinit.c	2013-01-17 12:48:28.000000000 -0700
-+++ acpica-unix2-20130626-aapits/tests/aapits/atinit.c	2013-07-25 13:20:19.706705960 -0600
-@@ -3024,7 +3024,7 @@
-             AapiErrors++;
-             printf ("API Error: AcpiGetSystemInfo() returned"
-                 " Length %d, expected %d\n",
--                OutBuffer.Length, sizeof (Info));
-+                (int) OutBuffer.Length, (int) sizeof (Info));
-             return (AE_ERROR);
-         }
- 
-@@ -3046,7 +3046,7 @@
-             AapiErrors++;
-             printf ("API Error: AcpiGetSystemInfo() returned"
-                 " Length %d, expected %d\n",
--                OutBuffer.Length, sizeof (Info));
-+                (int) OutBuffer.Length, (int) sizeof (Info));
-             return (AE_ERROR);
-         }
- 
-@@ -3066,7 +3066,7 @@
-             AapiErrors++;
-             printf ("API Error: AcpiGetSystemInfo() returned"
-                 " Length %d, expected %d\n",
--                OutBuffer.Length, sizeof (Info));
-+                (int) OutBuffer.Length, (int) sizeof (Info));
-             return (AE_ERROR);
-         }
-         else if (OutBuffer.Pointer != &Info)
-@@ -3149,7 +3149,7 @@
-             AapiErrors++;
-             printf ("API Error: AcpiGetSystemInfo() returned"
-                 " Length %d, expected %d\n",
--                OutBuffer.Length, sizeof (Info));
-+                (int) OutBuffer.Length, (int) sizeof (Info));
-             return (AE_ERROR);
-         }
-         else if (OutBuffer.Pointer != &Info)
-@@ -3214,7 +3214,7 @@
-             AapiErrors++;
-             printf ("API Error: AcpiGetSystemInfo() returned"
-                 " Length %d, expected %d\n",
--                OutBuffer.Length, sizeof (ACPI_SYSTEM_INFO));
-+                (int) OutBuffer.Length, (int) sizeof (ACPI_SYSTEM_INFO));
-             return (AE_ERROR);
-         }
-         else
-diff -urN acpica-unix2-20130626/tests/aapits/atmain.c acpica-unix2-20130626-aapits/tests/aapits/atmain.c
---- acpica-unix2-20130626/tests/aapits/atmain.c	2013-01-17 12:48:28.000000000 -0700
-+++ acpica-unix2-20130626-aapits/tests/aapits/atmain.c	2013-07-25 13:18:22.083323948 -0600
-@@ -315,7 +315,7 @@
-     {
-         printf ("ACPICA API TS err: test num %ld of test case %ld"
-             " is not implemented\n",
--            test_num, test_case);
-+            (long int) test_num, (long int) test_case);
-         return (AtRetNotImpl);
-     }
- 
-@@ -430,7 +432,7 @@
-     if (test_case < 1 || test_case > AT_TEST_CASE_NUM)
-     {
-         printf ("ACPICA API TS err: test case %ld is out of range 1 - %d\n",
--            test_case, AT_TEST_CASE_NUM);
-+            (long int) test_case, (int) AT_TEST_CASE_NUM);
-         return (AtRetBadParam);
-     }
- 
-@@ -438,7 +440,7 @@
-     if (test_num < 0 || test_num > AtTestCase[test_case].TestsNum)
-     {
-         printf ("ACPICA API TS err: test num %ld is out of range 0 - %d\n",
--            test_num, AtTestCase[test_case].TestsNum);
-+            (long int) test_num, AtTestCase[test_case].TestsNum);
-         return (AtRetBadParam);
-     }
-
-diff -urN acpica-unix2-20130626/tests/aapits/atnamespace.c acpica-unix2-20130626-aapits/tests/aapits/atnamespace.c
---- acpica-unix2-20130626/tests/aapits/atnamespace.c	2013-01-17 12:48:28.000000000 -0700
-+++ acpica-unix2-20130626-aapits/tests/aapits/atnamespace.c	2013-07-25 13:24:15.366466707 -0600
-@@ -2535,7 +2535,8 @@
- #else
-                 printf ("API Error: Address of %s (0x%llX) != (0x%llX)\n",
-                     PathNames[2 * i + 1],
--                    Info->Address, ExpectedInfo[i].Address);
-+                    (long long unsigned int) Info->Address,
-+		    (long long unsigned int) ExpectedInfo[i].Address);
- #endif
- #else
-                 printf ("API Error: Address of %s (0x%X) != (0x%X)\n",
-@@ -2908,7 +2909,8 @@
-         TestErrors++;
-         printf ("AtGetNextObjectTypeCommon: different numbers of entities"
-             "in TypesNames (%d) and LevelTypes0000 (%d)\n",
--            TypesCount, sizeof (LevelTypes0000) / sizeof (ACPI_OBJECT_TYPE));
-+            TypesCount,
-+	    (int) (sizeof (LevelTypes0000) / sizeof (ACPI_OBJECT_TYPE)));
-         return (AE_ERROR);
-     }
- 
-@@ -4192,7 +4194,9 @@
-             Pathname, Obj.Integer.Value, Value);
- #else
-         printf ("API Error: Value of %s is 0x%llx instead of expected 0x%llx\n",
--            Pathname, Obj.Integer.Value, Value);
-+            Pathname,
-+	    (long long unsigned int) Obj.Integer.Value,
-+	    (long long unsigned int) Value);
- #endif
-         Status = AE_ERROR;
-     }
-@@ -5199,7 +5203,7 @@
-             {
-                 AapiErrors++;
-                 printf ("API Error: AcpiOsAllocate(%d) returned NULL\n",
--                    OutName.Length);
-+                    (int) OutName.Length);
-                 return (AE_ERROR);
-             }
-         }
-diff -urN acpica-unix2-20130626/tests/aapits/atosxfctrl.c acpica-unix2-20130626-aapits/tests/aapits/atosxfctrl.c
---- acpica-unix2-20130626/tests/aapits/atosxfctrl.c	2013-01-17 12:48:28.000000000 -0700
-+++ acpica-unix2-20130626-aapits/tests/aapits/atosxfctrl.c	2013-07-25 13:30:00.375492751 -0600
-@@ -737,13 +737,15 @@
- #if ACPI_MACHINE_WIDTH == 64
- #ifdef    _MSC_VER
-         printf("OsxfCtrlFingReg: unexpected Width %d of Reg 0x%I64x\n",
-+            Width, Address);
- #else
-         printf("OsxfCtrlFingReg: unexpected Width %d of Reg 0x%llx\n",
-+            Width, (long long unsigned int) Address);
- #endif
- #else
-         printf("OsxfCtrlFingReg: unexpected Width %d of Reg 0x%x\n",
--#endif
-             Width, Address);
-+#endif
-         return (NULL);
-     }
- 
-@@ -764,15 +766,19 @@
- #ifdef    _MSC_VER
-                 printf("OsxfCtrlFingReg: intersection Regs (0x%I64x: 0x%x)"
-                     " and (0x%I64x: 0x%x)\n",
-+                    Reg->Address, Reg->Width, Address, Width);
- #else
-                 printf("OsxfCtrlFingReg: intersection Regs (0x%llx: 0x%x)"
-                     " and (0x%llx: 0x%x)\n",
-+                    (long long unsigned int) Reg->Address,
-+		    Reg->Width,
-+		    (long long unsigned int) Address, Width);
- #endif
- #else
-                 printf("OsxfCtrlFingReg: intersection Regs (0x%x: 0x%x)"
-                     " and (0x%x: 0x%x)\n",
--#endif
-                     Reg->Address, Reg->Width, Address, Width);
-+#endif
-                 return (NULL);
-             }
-         }
-@@ -786,13 +792,15 @@
- #if ACPI_MACHINE_WIDTH == 64
- #ifdef    _MSC_VER
-             printf("OsxfCtrlFingReg: no memory for Reg (0x%I64x: 0x%x)\n",
-+                Reg->Address, Reg->Width);
- #else
-             printf("OsxfCtrlFingReg: no memory for Reg (0x%llx: 0x%x)\n",
-+                (long long unsigned int) Reg->Address, Reg->Width);
- #endif
- #else
-             printf("OsxfCtrlFingReg: no memory for Reg (0x%x: 0x%x)\n",
--#endif
-                 Reg->Address, Reg->Width);
-+#endif
-             return (NULL);
-         }
-         Reg->Type = Type;
-@@ -932,14 +940,19 @@
- #if ACPI_MACHINE_WIDTH == 64
- #ifdef    _MSC_VER
-             printf("%.2u (%s Address 0x%I64x: Width %.2u) r/w counts: %u/%u\n",
-+                i, (Reg->Type == EMUL_REG_SYS)? "SYS": "IO",
-+                Reg->Address, Reg->Width, Reg->ReadCount, Reg->WriteCount);
- #else
-             printf("%.2u (%s Address 0x%llx: Width %.2u) r/w counts: %u/%u\n",
-+                i, (Reg->Type == EMUL_REG_SYS)? "SYS": "IO",
-+                (long long unsigned int) Reg->Address,
-+		Reg->Width, Reg->ReadCount, Reg->WriteCount);
- #endif
- #else
-             printf("%.2u (%s Address 0x%.4x: Width %.2u) r/w counts: %u/%u\n",
--#endif
-                 i, (Reg->Type == EMUL_REG_SYS)? "SYS": "IO",
-                 Reg->Address, Reg->Width, Reg->ReadCount, Reg->WriteCount);
-+#endif
-             Reg = Reg->Next;
-             i++;
-         }
-diff -urN acpica-unix2-20130626/tests/aapits/atresource.c acpica-unix2-20130626-aapits/tests/aapits/atresource.c
---- acpica-unix2-20130626/tests/aapits/atresource.c	2013-01-17 12:48:29.000000000 -0700
-+++ acpica-unix2-20130626-aapits/tests/aapits/atresource.c	2013-07-25 13:25:49.423565947 -0600
-@@ -174,7 +174,7 @@
-         AapiErrors++;
-         printf ("API Error: AcpiGetCurrentResources(%s) returned Length %d,"
-             " expected %d\n",
--            Pathname, OutBuffer.Length, RT0000_DEV0_CRS_LEN);
-+            Pathname, (int) OutBuffer.Length, RT0000_DEV0_CRS_LEN);
-         return (AE_ERROR);
-     }
- 
-@@ -490,7 +490,7 @@
-         AapiErrors++;
-         printf ("API Error: AcpiGetCurrentResources(%s) returned Length %d,"
-             " expected %d\n",
--            Pathname, OutBuffer.Length, RT0000_DEV0_CRS_LEN);
-+            Pathname, (int) OutBuffer.Length, RT0000_DEV0_CRS_LEN);
-         return (AE_ERROR);
-     }
- 
-@@ -689,7 +689,7 @@
-         AapiErrors++;
-         printf ("Api Error: Resource->Length (%d) != %d\n",
-             CurrentResource->Length,
--            ACPI_ROUND_UP_TO_NATIVE_WORD (ACPI_RS_SIZE (ACPI_RESOURCE_IRQ)));
-+            (int) (ACPI_ROUND_UP_TO_NATIVE_WORD (ACPI_RS_SIZE (ACPI_RESOURCE_IRQ))));
-     }
- 
-     if (CurrentResource->Data.Irq.Triggering != 0) /* Level-Triggered */
-@@ -981,7 +981,7 @@
-         AapiErrors++;
-         printf ("API Error: AcpiGetPossibleResources(%s) returned Length %d,"
-             " expected %d\n",
--            Pathname, OutBuffer.Length, RT0000_DEV0_CRS_LEN);
-+            Pathname, (int) OutBuffer.Length, RT0000_DEV0_CRS_LEN);
-         return (AE_ERROR);
-     }
- 
-@@ -1923,7 +1923,7 @@
-         AapiErrors++;
-         printf ("API Error: AcpiGetIrqRoutingTable(%s) returned Length %d,"
-             " expected %d\n",
--            Pathname, OutBuffer.Length, 0xA48);
-+            Pathname, (int) OutBuffer.Length, 0xA48);
-         return (AE_ERROR);
-     }
-
-diff -urN acpica-unix2-20130626/tests/aapits/Makefile acpica-unix2-20130626-aapits/tests/aapits/Makefile
---- acpica-unix2-20130626/tests/aapits/Makefile	2013-01-17 12:48:29.000000000 -0700
-+++ acpica-unix2-20130626-aapits/tests/aapits/Makefile	2013-07-25 15:17:09.309236422 -0600
-@@ -194,7 +194,7 @@
- CFLAGS+= -Wall -g -D_LINUX -DNDEBUG -D_CONSOLE -DACPI_APITS -DACPI_EXEC_APP -D_MULTI_THREADED -Wstrict-prototypes -I../../source/include
-
- 
--acpiexec : $(patsubst %.c,%.o, $(SRCS))
-+$(PROG) : $(patsubst %.c,%.o, $(SRCS))
- 	$(CC) $(LDFLAGS) $(patsubst %.c,%.o, $(SRCS)) -o $(PROG)
- 
- CLEANFILES= $(PROG)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/acpica/acpitests/aapits-makefile.patch b/import-layers/yocto-poky/meta/recipes-extended/acpica/acpitests/aapits-makefile.patch
deleted file mode 100644
index 4d9e997..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/acpica/acpitests/aapits-makefile.patch
+++ /dev/null
@@ -1,34 +0,0 @@
-From: Al Stone <ahs3@ahs3.net>
-Date: Mon, 7 Apr 2014 19:09:37 +0000
-Subject: [PATCH 1/2] Fixup aapits build
-
-From http://git.linaro.org/people/al.stone/acpica-tools.git
-Upstream-status: Unknown
-
-diff -urN acpica-unix2-20140325/tests/aapits/Makefile acpica-unix2-20140325/tests/aapits/Makefile
---- acpica-unix2-20140325/tests/aapits/Makefile	2014-04-05 14:23:14.683636794 -0600
-+++ acpica-unix2-20140325-aapits/tests/aapits/Makefile	2014-04-05 15:10:57.879184598 -0600
-@@ -16,6 +16,7 @@
- 	atosxfwrap.c \
- 	osunixxf.c \
- 	../../source/common/ahids.c \
-+	../../source/common/ahuuids.c \
- 	../../source/common/cmfsize.c \
- 	../../source/common/getopt.c \
- 	../../source/components/hardware/hwtimer.c \
-@@ -174,6 +175,7 @@
- 	../../source/components/utilities/utexcep.c \
- 	../../source/components/utilities/utfileio.c \
- 	../../source/components/utilities/utglobal.c \
-+	../../source/components/utilities/uthex.c \
- 	../../source/components/utilities/utids.c \
- 	../../source/components/utilities/utinit.c \
- 	../../source/components/utilities/utlock.c \
-@@ -189,6 +191,7 @@
- 	../../source/components/utilities/utstate.c \
- 	../../source/components/utilities/utstring.c \
- 	../../source/components/utilities/uttrack.c \
-+	../../source/components/utilities/utuuid.c \
- 	../../source/components/utilities/utxface.c \
- 	../../source/components/utilities/utxferror.c \
- 	../../source/components/utilities/utxfinit.c \
diff --git a/import-layers/yocto-poky/meta/recipes-extended/acpica/acpitests_20140828.bb b/import-layers/yocto-poky/meta/recipes-extended/acpica/acpitests_20140828.bb
deleted file mode 100644
index 45ac157..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/acpica/acpitests_20140828.bb
+++ /dev/null
@@ -1,36 +0,0 @@
-SUMMARY = "Test suite used to validate ACPICA"
-HOMEPAGE = "http://www.acpica.org/"
-
-LICENSE = "Intel"
-LIC_FILES_CHKSUM = "file://tests/aapits/atexec.c;beginline=1;endline=115;md5=e92bcdfcd01d117d1bda3e814bb2030a"
-
-DEPENDS = "bison flex"
-
-SRC_URI = "https://acpica.org/sites/acpica/files/acpitests-unix-${PV}.tar.gz;name=acpitests \
-           https://acpica.org/sites/acpica/files/acpica-unix2-${PV}.tar.gz;name=acpica \
-           file://aapits-linux.patch \
-           file://aapits-makefile.patch \
-"
-SRC_URI[acpitests.md5sum] = "db9d6fdaa8e3eb101d700ee5ba4938ed"
-SRC_URI[acpitests.sha256sum] = "e576c74bf1bf1c9f7348bf9419e05c8acfece7105abcdc052e66670c7af2cf00"
-SRC_URI[acpica.md5sum] = "6f05f0d10166a1b1ff6107f3d1cdf1e5"
-SRC_URI[acpica.sha256sum] = "01d8867656c5ba41dec307c4383ce676196ad4281ac2c9dec9f5be5fac6d888e"
-UPSTREAM_CHECK_URI = "https://acpica.org/downloads"
-
-S = "${WORKDIR}/acpitests-unix-${PV}"
-
-EXTRA_OEMAKE = "'CC=${CC}' 'OPT_CFLAGS=-Wall'"
-
-# The Makefiles expect a specific layout
-do_compile() {
-    cp -af ${WORKDIR}/acpica-unix2-${PV}/source ${S}
-    cd tests/aapits
-    oe_runmake
-}
-
-do_install() {
-    install -d ${D}${bindir}
-    install -m0755 tests/aapits/bin/aapits ${D}${bindir}
-}
-
-COMPATIBLE_HOST = "(i.86|x86_64|arm|aarch64).*-linux"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/acpica/files/0001-Linux-add-support-for-X32-ABI-compilation.patch b/import-layers/yocto-poky/meta/recipes-extended/acpica/files/0001-Linux-add-support-for-X32-ABI-compilation.patch
new file mode 100644
index 0000000..df74200
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/acpica/files/0001-Linux-add-support-for-X32-ABI-compilation.patch
@@ -0,0 +1,31 @@
+From d22241efc0708c9799f17a20eabb52a48d6d6ea1 Mon Sep 17 00:00:00 2001
+From: Anuj Mittal <anuj.mittal@intel.com>
+Date: Tue, 2 Jan 2018 12:35:32 +0800
+Subject: [PATCH] Linux: add support for X32 ABI compilation
+
+X32 follows ILP32 model. Check for ILP32 as well when checking for
+x86_64 to ensure the defines are correct for X32 ABI.
+
+Upstream-Status: Submitted [https://github.com/acpica/acpica/pull/348]
+
+Signed-off-by: Anuj Mittal <anuj.mittal@intel.com>
+---
+ source/include/platform/aclinux.h | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/source/include/platform/aclinux.h b/source/include/platform/aclinux.h
+index 75b1d82..6b8ff73 100644
+--- a/source/include/platform/aclinux.h
++++ b/source/include/platform/aclinux.h
+@@ -315,7 +315,7 @@
+ #define ACPI_FLUSH_CPU_CACHE()
+ #define ACPI_CAST_PTHREAD_T(Pthread) ((ACPI_THREAD_ID) (Pthread))
+ 
+-#if defined(__ia64__)    || defined(__x86_64__) ||\
++#if defined(__ia64__)    || (defined(__x86_64__) && !defined(__ILP32__)) ||\
+     defined(__aarch64__) || defined(__PPC64__) ||\
+     defined(__s390x__)
+ #define ACPI_MACHINE_WIDTH          64
+-- 
+2.7.4
+
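
The X32 patch above hinges on one preprocessor detail: under the x32 ABI, __x86_64__ is still defined even though pointers and longs are 32-bit, and __ILP32__ is the macro that signals this. A minimal standalone C sketch (not part of the patch; DEMO_MACHINE_WIDTH is an illustrative name) reproduces the same width test from the patched aclinux.h and shows how it behaves on the two ABIs:

    /* Standalone sketch (not from the patch): essentially the width test the
     * patched aclinux.h uses, showing why plain __x86_64__ is not enough on
     * the x32 ABI, where __ILP32__ is also defined. */
    #include <stdio.h>

    #if defined(__ia64__)    || (defined(__x86_64__) && !defined(__ILP32__)) || \
        defined(__aarch64__) || defined(__PPC64__)   || \
        defined(__s390x__)
    #define DEMO_MACHINE_WIDTH 64
    #else
    #define DEMO_MACHINE_WIDTH 32
    #endif

    int main(void)
    {
        /* gcc -m64: prints 64 / 8; gcc -mx32: prints 32 / 4 */
        printf("width=%d sizeof(void *)=%zu\n",
               DEMO_MACHINE_WIDTH, sizeof(void *));
        return 0;
    }
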
diff --git a/import-layers/yocto-poky/meta/recipes-extended/acpica/files/manipulate-fds-instead-of-FILE.patch b/import-layers/yocto-poky/meta/recipes-extended/acpica/files/manipulate-fds-instead-of-FILE.patch
index 6944bb7..5610ed9 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/acpica/files/manipulate-fds-instead-of-FILE.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/acpica/files/manipulate-fds-instead-of-FILE.patch
@@ -1,6 +1,6 @@
-From 33a57979738e5ab13950ec1c0e7298e41ef50929 Mon Sep 17 00:00:00 2001
-From: Patrick Ohly <patrick.ohly@intel.com>
-Date: Thu, 23 Feb 2017 18:10:47 +0100
+From 69171c22f3872ecb4c1ab27985e93ca44084595e Mon Sep 17 00:00:00 2001
+From: Fan Xin <fan.xin@jp.fujitsu.com>
+Date: Mon, 5 Jun 2017 13:26:38 +0900
 Subject: [PATCH] aslfiles.c: manipulate fds instead of FILE
 
 Copying what stdout/stderr point to is not portable and fails with
@@ -12,60 +12,61 @@
 Upstream-Status: Inappropriate [embedded specific]
 
 Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
----
- source/compiler/aslfiles.c | 20 +++++++++++---------
- 1 file changed, 11 insertions(+), 9 deletions(-)
 
-diff --git a/source/compiler/aslfiles.c b/source/compiler/aslfiles.c
-index 947e465..7a352b4 100644
---- a/source/compiler/aslfiles.c
-+++ b/source/compiler/aslfiles.c
-@@ -44,6 +44,11 @@
+Rebase on acpica 20170303
+
+Signed-off-by: Fan Xin <fan.xin@jp.fujitsu.com>
+---
+ acpica-unix2-20170303/source/compiler/aslfiles.c | 14 +++++++++++---
+ 1 file changed, 11 insertions(+), 3 deletions(-)
+
+diff --git a/acpica-unix2-20170303/source/compiler/aslfiles.c b/acpica-unix2-20170303/source/compiler/aslfiles.c
+index 809090c..97898b1 100644
+--- a/acpica-unix2-20170303/source/compiler/aslfiles.c
++++ b/acpica-unix2-20170303/source/compiler/aslfiles.c
+@@ -44,6 +44,10 @@
  #include "aslcompiler.h"
  #include "acapps.h"
- 
+ #include "dtcompiler.h"
 +#include <sys/types.h>
 +#include <sys/stat.h>
 +#include <fcntl.h>
 +#include <unistd.h>
-+
+ 
  #define _COMPONENT          ACPI_COMPILER
          ACPI_MODULE_NAME    ("aslfiles")
- 
-@@ -569,6 +574,8 @@ FlOpenMiscOutputFiles (
+@@ -607,6 +611,8 @@ FlOpenMiscOutputFiles (
  
      if (Gbl_DebugFlag)
      {
-+        int fd;
++	int fd;
 +
          Filename = FlGenerateFilename (FilenamePrefix, FILE_SUFFIX_DEBUG);
          if (!Filename)
          {
-@@ -582,20 +589,15 @@ FlOpenMiscOutputFiles (
-         /* TBD: hide this behind a FlReopenFile function */
+@@ -618,10 +624,10 @@ FlOpenMiscOutputFiles (
+         /* Open the debug file as STDERR, text mode */
  
          Gbl_Files[ASL_FILE_DEBUG_OUTPUT].Filename = Filename;
 -        Gbl_Files[ASL_FILE_DEBUG_OUTPUT].Handle =
 -            freopen (Filename, "w+t", stderr);
--
+ 
 -        if (!Gbl_Files[ASL_FILE_DEBUG_OUTPUT].Handle)
 +        fd = open(Filename, O_CREAT|O_TRUNC, S_IRUSR|S_IWUSR|S_IRGRP|S_IWGRP|S_IROTH|S_IWOTH);
 +        if (fd < 0 ||
 +            dup2(fd, fileno(stderr)))
          {
--            /*
--             * A problem with freopen is that on error,
--             * we no longer have stderr.
--             */
-             Gbl_DebugFlag = FALSE;
--            memcpy (stderr, stdout, sizeof (FILE));
-             FlFileError (ASL_FILE_DEBUG_OUTPUT, ASL_MSG_DEBUG_FILENAME);
-             AslAbort ();
+             /*
+              * A problem with freopen is that on error, we no longer
+@@ -635,6 +641,8 @@ FlOpenMiscOutputFiles (
+             exit (1);
          }
-+        Gbl_Files[ASL_FILE_DEBUG_OUTPUT].Handle = stderr;
  
++        Gbl_Files[ASL_FILE_DEBUG_OUTPUT].Handle = stderr;
++
          AslCompilerSignon (ASL_FILE_DEBUG_OUTPUT);
          AslCompilerFileHeader (ASL_FILE_DEBUG_OUTPUT);
+     }
 -- 
-2.1.4
+1.9.1
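
The rebased patch above keeps the original idea: rather than copying what stderr points to (which relies on glibc's FILE layout and breaks on musl), the debug file is opened with open() and spliced onto file descriptor 2 with dup2(), after which stdio's stderr keeps working unchanged. A self-contained sketch of that fd-level redirection, with conventional error handling rather than the patch's exact code (redirect_stderr_to and debug.log are illustrative names):

    /* Sketch of the fd-level redirection (illustrative, not the patch's exact
     * code): open the log file, then dup2() it over file descriptor 2 so that
     * stdio's stderr transparently writes to it.  No FILE structures are copied. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    static int redirect_stderr_to(const char *path)
    {
        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return -1;
        if (dup2(fd, STDERR_FILENO) < 0) {   /* fd 2 now refers to the file */
            close(fd);
            return -1;
        }
        close(fd);                           /* the duplicate on fd 2 stays open */
        return 0;
    }

    int main(void)
    {
        if (redirect_stderr_to("debug.log") == 0)
            fprintf(stderr, "this line lands in debug.log\n");
        return 0;
    }
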
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/acpica/files/no-werror.patch b/import-layers/yocto-poky/meta/recipes-extended/acpica/files/no-werror.patch
index 5d28f47..a6e7b54 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/acpica/files/no-werror.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/acpica/files/no-werror.patch
@@ -1,7 +1,7 @@
 Description: remove -Werror flag
 Forwarded: not-needed
 Author: Fathi Boudra <fathi.boudra@linaro.org>
-
+Upstream-Status: Pending
 ---
  generate/unix/iasl/Makefile |   12 ++++++------
  1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/asciidoc/asciidoc_8.6.9.bb b/import-layers/yocto-poky/meta/recipes-extended/asciidoc/asciidoc_8.6.9.bb
index 1cd1454..38164d5 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/asciidoc/asciidoc_8.6.9.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/asciidoc/asciidoc_8.6.9.bb
@@ -14,7 +14,7 @@
 
 UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/asciidoc/files/"
 
-inherit distutils autotools-brokensep
+inherit autotools-brokensep
 
 export DESTDIR = "${D}"
 DEPENDS_class-native = "docbook-xml-dtd4-native"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/at/at/0001-remove-glibc-assumption.patch b/import-layers/yocto-poky/meta/recipes-extended/at/at/0001-remove-glibc-assumption.patch
index 53ae28b..7fdecc7 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/at/at/0001-remove-glibc-assumption.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/at/at/0001-remove-glibc-assumption.patch
@@ -1,6 +1,6 @@
-From 7f811d9c4ebc9444e613e251c31d6bf537a24dc1 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Mon, 13 Apr 2015 16:35:30 -0700
+From f446686916e503dfb9fb928252d1b72a07573b29 Mon Sep 17 00:00:00 2001
+From: Dengke Du <dengke.du@windriver.com>
+Date: Tue, 18 Jul 2017 03:42:56 -0400
 Subject: [PATCH] remove glibc assumption
 
 glibc time.h header has an undocumented __isleap macro
@@ -9,9 +9,11 @@
 on any other libc, stop using it and just define the macro
 locally instead.
 
-Upstream-Status: Pending
+Upstream-Status: Submitted [ https://lists.debian.org/debian-accessibility/2017/07/msg00044.html ]
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Signed-off-by: Dengke Du <dengke.du@windriver.com>
 ---
  parsetime.y | 11 +++++++----
  1 file changed, 7 insertions(+), 4 deletions(-)
@@ -53,5 +55,5 @@
  			{
  			    yyerror("Error in day of month");
 -- 
-2.1.4
+2.8.1
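
The point of the patch above is that __isleap is a private glibc macro, so portable code defines the Gregorian leap-year test itself. A standalone sketch of that test (IS_LEAP is an illustrative name, not necessarily the macro the patch adds to parsetime.y):

    /* Standalone sketch of the leap-year rule defined locally instead of using
     * glibc's private __isleap from <time.h>: divisible by 4, except centuries
     * not divisible by 400. */
    #include <stdio.h>

    #define IS_LEAP(y) (((y) % 4 == 0 && (y) % 100 != 0) || (y) % 400 == 0)

    int main(void)
    {
        const int years[] = { 1900, 2000, 2016, 2017 };
        for (int i = 0; i < 4; i++)
            printf("%d -> %s\n", years[i], IS_LEAP(years[i]) ? "leap" : "common");
        return 0;
    }
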
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/at/at_3.1.20.bb b/import-layers/yocto-poky/meta/recipes-extended/at/at_3.1.20.bb
index 904899f..9b537ee 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/at/at_3.1.20.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/at/at_3.1.20.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Delayed job execution and batch processing"
+HOMEPAGE = "http://blog.calhariz.com/"
 DESCRIPTION = "At allows for commands to be run at a particular time.  Batch will execute commands when \
 the system load levels drop to a particular level."
 SECTION = "base"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/bash/bash.inc b/import-layers/yocto-poky/meta/recipes-extended/bash/bash.inc
index 3e9c662..f4e1f7a 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/bash/bash.inc
+++ b/import-layers/yocto-poky/meta/recipes-extended/bash/bash.inc
@@ -25,9 +25,14 @@
 RDEPENDS_${PN}_class-nativesdk = ""
 RDEPENDS_${PN}-ptest += "make"
 
+DEPENDS_append_libc-glibc = " glibc-locale"
+RDEPENDS_${PN}-ptest_append_libc-glibc = " locale-base-fr-fr locale-base-de-de"
+
 USERADD_PACKAGES = "${PN}-ptest"
 USERADD_PARAM_${PN}-ptest = "--create-home --user-group test"
 
+CACHED_CONFIGUREVARS += "headersdir=${includedir}/${PN}"
+
 do_configure_prepend () {
 	if [ ! -e ${S}/acinclude.m4 ]; then
 		cat ${S}/aclocal.m4 > ${S}/acinclude.m4
@@ -46,16 +51,32 @@
 	fi
 }
 do_install_append_class-target () {
-	# Clean host path in bashbug
+	# Clean buildhost references in bashbug
 	sed -i -e "s,--sysroot=${STAGING_DIR_TARGET},,g" \
-		-e "s,-I${WORKDIR}/\S* ,,g" ${D}${bindir}/bashbug
+		-e "s,-I${WORKDIR}/\S* ,,g" \
+		-e 's|${DEBUG_PREFIX_MAP}||g' \
+		${D}${bindir}/bashbug
+
+	# Clean buildhost references in bash.pc
+	sed -i -e "s,--sysroot=${STAGING_DIR_TARGET},,g" \
+	     ${D}${libdir}/pkgconfig/bash.pc
+
+	# Clean buildhost references in Makefile.inc
+	sed -i -e "s,--sysroot=${STAGING_DIR_TARGET},,g" \
+		-e 's|${DEBUG_PREFIX_MAP}||g' \
+		-e 's:${HOSTTOOLS_DIR}/::g' \
+		-e 's:${BASE_WORKDIR}/${MULTIMACH_TARGET_SYS}::g' \
+		${D}${libdir}/bash/Makefile.inc
 }
 
 do_install_ptest () {
 	make INSTALL_TEST_DIR=${D}${PTEST_PATH}/tests install-test
 	cp ${B}/Makefile ${D}${PTEST_PATH}
         sed -i -e 's/^Makefile/_Makefile/' -e "s,--sysroot=${STAGING_DIR_TARGET},,g" \
-	    -e "s,${S},,g" -e "s,${B},,g" -e "s,${STAGING_DIR_NATIVE},,g" ${D}${PTEST_PATH}/Makefile
+	    -e 's|${DEBUG_PREFIX_MAP}||g' \
+	    -e "s,${S},,g" -e "s,${B},,g" -e "s,${STAGING_DIR_NATIVE},,g" \
+	    -e 's:${HOSTTOOLS_DIR}/::g' \
+	     ${D}${PTEST_PATH}/Makefile
 }
 
 pkg_postinst_${PN} () {
@@ -69,3 +90,9 @@
 PACKAGES += "${PN}-bashbug"
 FILES_${PN} = "${bindir}/bash ${base_bindir}/bash.bash"
 FILES_${PN}-bashbug = "${bindir}/bashbug"
+
+PACKAGE_BEFORE_PN += "${PN}-loadable"
+RDEPENDS_${PN}-loadable += "${PN}"
+FILES_${PN}-loadable += "${libdir}/bash/*"
+
+RPROVIDES_${PN} += "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', '/bin/sh /bin/bash', '', d)}"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/bash/bash/CVE-2016-9401.patch b/import-layers/yocto-poky/meta/recipes-extended/bash/bash/CVE-2016-9401.patch
deleted file mode 100644
index 28c9277..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/bash/bash/CVE-2016-9401.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-From fa741771ed47b30547be63b5b5dbfb51977aca12 Mon Sep 17 00:00:00 2001
-From: Chet Ramey <chet.ramey@case.edu>
-Date: Fri, 20 Jan 2017 11:47:31 -0500
-Subject: [PATCH] Bash-4.4 patch 6
-
-Bug-Reference-URL:
-https://lists.gnu.org/archive/html/bug-bash/2016-11/msg00116.html
-
-Reference to upstream patch:
-https://ftp.gnu.org/pub/gnu/bash/bash-4.4-patches/bash44-006
-
-Bug-Description:
-Out-of-range negative offsets to popd can cause the shell to crash attempting
-to free an invalid memory block.
-
-Upstream-Status: Backport
-CVE: CVE-2016-9401
-Signed-off-by: Li Zhou <li.zhou@windriver.com>
----
- builtins/pushd.def | 7 ++++++-
- 1 file changed, 6 insertions(+), 1 deletion(-)
-
-diff --git a/builtins/pushd.def b/builtins/pushd.def
-index 9c6548f..8a13bae 100644
---- a/builtins/pushd.def
-+++ b/builtins/pushd.def
-@@ -359,7 +359,7 @@ popd_builtin (list)
- 	break;
-     }
- 
--  if (which > directory_list_offset || (directory_list_offset == 0 && which == 0))
-+  if (which > directory_list_offset || (which < -directory_list_offset) || (directory_list_offset == 0 && which == 0))
-     {
-       pushd_error (directory_list_offset, which_word ? which_word : "");
-       return (EXECUTION_FAILURE);
-@@ -381,6 +381,11 @@ popd_builtin (list)
- 	 remove that directory from the list and shift the remainder
- 	 of the list into place. */
-       i = (direction == '+') ? directory_list_offset - which : which;
-+      if (i < 0 || i > directory_list_offset)
-+	{
-+	  pushd_error (directory_list_offset, which_word ? which_word : "");
-+	  return (EXECUTION_FAILURE);
-+	}
-       free (pushd_directory_list[i]);
-       directory_list_offset--;
- 
--- 
-1.9.1
-
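
The removed backport is superseded by the official bash44-006 patch, which the new bash 4.4 recipe pulls in directly; the underlying fix is a bounds check on the user-supplied popd offset before it is turned into an index and the directory-stack entry is freed. A simplified, illustrative C sketch of that kind of check (not bash source; remove_entry and LIST_LEN are made-up names):

    /* Illustrative sketch (not bash source) of the kind of check the fix adds:
     * validate a signed, user-supplied offset against both ends of the list
     * before turning it into an index and freeing the entry. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define LIST_LEN 4
    static char *list[LIST_LEN];

    static int remove_entry(int which)        /* which may be negative */
    {
        if (which < -LIST_LEN || which >= LIST_LEN)
            return -1;                        /* out of range: refuse, do not free */
        int i = (which < 0) ? LIST_LEN + which : which;
        free(list[i]);                        /* now guaranteed 0 <= i < LIST_LEN */
        list[i] = NULL;
        return 0;
    }

    int main(void)
    {
        for (int i = 0; i < LIST_LEN; i++)
            list[i] = strdup("entry");
        printf("remove  2: %s\n", remove_entry(2)  == 0 ? "ok" : "rejected");
        printf("remove -9: %s\n", remove_entry(-9) == 0 ? "ok" : "rejected");
        for (int i = 0; i < LIST_LEN; i++)
            free(list[i]);                    /* free(NULL) is a no-op */
        return 0;
    }
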
diff --git a/import-layers/yocto-poky/meta/recipes-extended/bash/bash/fix-run-coproc-run-heredoc-run-execscript-run-test-f.patch b/import-layers/yocto-poky/meta/recipes-extended/bash/bash/fix-run-coproc-run-heredoc-run-execscript-run-test-f.patch
index 7f099ae..9ac2461 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/bash/bash/fix-run-coproc-run-heredoc-run-execscript-run-test-f.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/bash/bash/fix-run-coproc-run-heredoc-run-execscript-run-test-f.patch
@@ -1,15 +1,7 @@
-From 2c30dff8ea8b17ad5ba9881e35ad1eba9c515f13 Mon Sep 17 00:00:00 2001
+From d1cd4c31ea0ed7406a3ad4bdaa211f581063f655 Mon Sep 17 00:00:00 2001
 From: Hongxu Jia <hongxu.jia@windriver.com>
-Date: Thu, 26 Nov 2015 22:09:07 -0500
-Subject: [PATCH] fix run-coproc/run-heredoc/run-execscript/run-test/ failed
-
-FAIL: run-coproc
-update test case:tests/coproc.right, tests/coproc.tests
-git://git.sv.gnu.org/bash.git bash-4.4-testing
-
-FAIL: run-heredoc
-update test case: tests/heredoc.right tests/heredoc3.sub
-git://git.sv.gnu.org/bash.git bash-4.4-testing
+Date: Tue, 15 Aug 2017 10:21:21 +0800
+Subject: [PATCH 2/2] fix run-execscript/run-test/ failed
 
 FAIL: run-execscript:
 the test suite should not be run as root
@@ -21,149 +13,33 @@
 
 Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
 ---
- tests/coproc.right   |  5 +----
- tests/coproc.tests   | 30 +++++++++++++++++++++++++-----
- tests/heredoc.right  |  5 ++---
- tests/heredoc3.sub   |  3 ++-
- tests/run-execscript |  3 ++-
- tests/run-test       |  3 ++-
- 6 files changed, 34 insertions(+), 15 deletions(-)
+ tests/run-execscript | 3 ++-
+ tests/run-test       | 3 ++-
+ 2 files changed, 4 insertions(+), 2 deletions(-)
 
-diff --git a/tests/coproc.right b/tests/coproc.right
-index 6d9deaa..94b001c 100644
---- a/tests/coproc.right
-+++ b/tests/coproc.right
-@@ -1,11 +1,8 @@
--84575
- 63 60
- a b c
--84577
- 63 60
- flop
--./coproc.tests: line 22: 84577 Terminated              coproc REFLECT { cat -; }
--84579
-+coproc.tests: REFLECT: status 143
- 63 60
- FOO
- 63 60
-diff --git a/tests/coproc.tests b/tests/coproc.tests
-index 8be3563..d347eb7 100644
---- a/tests/coproc.tests
-+++ b/tests/coproc.tests
-@@ -1,6 +1,13 @@
-+: ${TMPDIR:=/tmp}
-+TMPOUT=${TMPDIR}/coproc-wait-$BASHPID
-+
- coproc { echo a b c; sleep 2; }
- 
--echo $COPROC_PID
-+case $COPROC_PID in
-+[0-9]*)	;;
-+*)	echo COPROC_PID not integer ;;
-+esac
-+
- echo ${COPROC[@]}
- 
- read LINE <&${COPROC[0]}
-@@ -10,7 +17,11 @@ wait $COPROC_PID
- 
- coproc REFLECT { cat - ; }
- 
--echo $REFLECT_PID
-+case $REFLECT_PID in
-+[0-9]*)	;;
-+*)	echo REFLECT_PID not integer ;;
-+esac
-+
- echo ${REFLECT[@]}
- 
- echo flop >&${REFLECT[1]}
-@@ -18,12 +29,21 @@ read LINE <&${REFLECT[0]}
- 
- echo $LINE
- 
--kill $REFLECT_PID
--wait $REFLECT_PID
-+{ sleep 1; kill $REFLECT_PID; } &
-+wait $REFLECT_PID >$TMPOUT 2>&1 || echo "coproc.tests: REFLECT: status $?"
-+grep 'Terminated.*coproc.*REFLECT' < $TMPOUT >/dev/null 2>&1 || {
-+	echo "coproc.tests: wait for REFLECT failed" >&2
-+}
-+rm -f $TMPOUT
-+exec 2>&1
- 
- coproc xcase -n -u
- 
--echo $COPROC_PID
-+case $COPROC_PID in
-+[0-9]*)	;;
-+*)	echo COPROC_PID not integer ;;
-+esac
-+
- echo ${COPROC[@]}
- 
- echo foo >&${COPROC[1]}
-diff --git a/tests/heredoc.right b/tests/heredoc.right
-index 6abaa1f..8df91c5 100644
---- a/tests/heredoc.right
-+++ b/tests/heredoc.right
-@@ -76,15 +76,14 @@ ENDEND
- end ENDEND
- hello
- end hello
--x star x
- end x*x
- helloEND
- end helloEND
- hello
- \END
- end hello<NL>\END
--./heredoc3.sub: line 74: warning: here-document at line 72 delimited by end-of-file (wanted `EOF')
--./heredoc3.sub: line 75: syntax error: unexpected end of file
-+./heredoc3.sub: line 75: warning: here-document at line 73 delimited by end-of-file (wanted `EOF')
-+./heredoc3.sub: line 76: syntax error: unexpected end of file
- comsub here-string
- ./heredoc.tests: line 105: warning: here-document at line 103 delimited by end-of-file (wanted `EOF')
- hi
-diff --git a/tests/heredoc3.sub b/tests/heredoc3.sub
-index 73a111e..9d3d846 100644
---- a/tests/heredoc3.sub
-+++ b/tests/heredoc3.sub
-@@ -49,9 +49,10 @@ hello
-     END    
- echo end hello
- 
--cat <<x*x & touch 'x*x'
-+cat <<x*x >/dev/null & touch 'x*x'
- x star x
- x*x
-+wait $!
- echo end 'x*x'
- rm 'x*x'
- 
 diff --git a/tests/run-execscript b/tests/run-execscript
-index f97ab21..0d00a1b 100644
+index de78644..38397c1 100644
 --- a/tests/run-execscript
 +++ b/tests/run-execscript
 @@ -5,5 +5,6 @@ echo "warning: \`/tmp/bash-notthere' not being found or \`/' being a directory"
  echo "warning: produce diff output, please do not consider this a test failure" >&2
  echo "warning: if diff output differing only in the location of the bash" >&2
  echo "warning: binary appears, please do not consider this a test failure" >&2
--${THIS_SH} ./execscript > /tmp/xx 2>&1
-+rm -f /tmp/xx
-+su -c "${THIS_SH} ./execscript > /tmp/xx 2>&1" test
- diff /tmp/xx exec.right && rm -f /tmp/xx
+-${THIS_SH} ./execscript > ${BASH_TSTOUT} 2>&1
++rm -f ${BASH_TSTOUT}
++su -c "${THIS_SH} ./execscript > ${BASH_TSTOUT} 2>&1" test
+ diff ${BASH_TSTOUT} exec.right && rm -f ${BASH_TSTOUT}
 diff --git a/tests/run-test b/tests/run-test
-index b2482c3..2e8f049 100644
+index d68791c..d6317d2 100644
 --- a/tests/run-test
 +++ b/tests/run-test
 @@ -1,4 +1,5 @@
  unset GROUPS UID 2>/dev/null
  
--${THIS_SH} ./test.tests >/tmp/xx 2>&1
-+rm -f /tmp/xx
-+su -c "${THIS_SH} ./test.tests >/tmp/xx 2>&1" test
- diff /tmp/xx test.right && rm -f /tmp/xx
+-${THIS_SH} ./test.tests >${BASH_TSTOUT} 2>&1
++rm -f ${BASH_TSTOUT}
++su -c "${THIS_SH} ./test.tests > ${BASH_TSTOUT} 2>&1" test
+ diff ${BASH_TSTOUT} test.right && rm -f ${BASH_TSTOUT}
 -- 
-1.9.1
+1.8.3.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/bash/bash/fix-run-intl.patch b/import-layers/yocto-poky/meta/recipes-extended/bash/bash/fix-run-intl.patch
deleted file mode 100644
index d4a3409..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/bash/bash/fix-run-intl.patch
+++ /dev/null
@@ -1,110 +0,0 @@
-From a00d3161fd7b6a698bdd2ed5f0ac5faac580ee2a Mon Sep 17 00:00:00 2001
-From: Dengke Du <dengke.du@windriver.com>
-Date: Wed, 3 Aug 2016 23:13:00 -0400
-Subject: [PATCH] fix run-intl failed
-
-1. Filter extra white space of intl.right
-
-   Due to the extra white space of intl.right, when the result of
-   sub-test unicode2.sub of intl.tests compared to it, the test
-   failed.
-
-   So we need to filter the extra white space of intl.right.
-
-   Import this patch for intl.right from bash devel branch:
-
-	http://git.savannah.gnu.org/cgit/bash.git/log/?h=devel
-
-   commit is:
-
-	85ec0778f9d778e1820fb8c0e3e996f2d1103b45
-
-2. Change intl.right correspond to the unicode3.sub's output
-
-   In sub-test unicode3.sub of intl.tests, the payload value is:
-
-	payload=$'\065\247\100\063\231\053\306\123\070\237\242\352\263'
-
-   It used quoted string expansion(escaped octal) to assign ASCII
-   characters to variables. So when the test run the following:
-
-	printf %q "$payload"
-
-   It produced:
-
-	$'5\247@3\231+\306S8\237\242\352\263'
-
-   When compared to the intl.right(contain the converted character), it failed.
-
-   Import parts of patch for intl.right from bash devel branch:
-
-	http://git.savannah.gnu.org/cgit/bash.git/log/?h=devel
-
-   commit is:
-
-	74b8cbb41398b4453d8ba04d0cdd1b25f9dcb9e3
-
-Upstream-Status: Backport
-
-Signed-off-by: Dengke Du <dengke.du@windriver.com>
----
- tests/intl.right | 30 +++++++++++++++---------------
- 1 file changed, 15 insertions(+), 15 deletions(-)
-
-diff --git a/tests/intl.right b/tests/intl.right
-index acf108a..1efdfbe 100644
---- a/tests/intl.right
-+++ b/tests/intl.right
-@@ -18,34 +18,34 @@ aéb
- 1.0000
- 1,0000
- Passed all 1378 Unicode tests
--0000000   303 277 012                                                    
-+0000000 303 277 012
- 0000003
--0000000   303 277 012                                                    
-+0000000 303 277 012
- 0000003
--0000000   303 277 012                                                    
-+0000000 303 277 012
- 0000003
--0000000   303 277 012                                                    
-+0000000 303 277 012
- 0000003
--0000000   357 277 277 012                                                
-+0000000 357 277 277 012
- 0000004
--0000000   357 277 277 012                                                
-+0000000 357 277 277 012
- 0000004
--0000000   012                                                            
-+0000000 012
- 0000001
--0000000   012                                                            
-+0000000 012
- 0000001
--0000000   012                                                            
-+0000000 012
- 0000001
--0000000   012                                                            
-+0000000 012
- 0000001
--0000000   303 277 012                                                    
-+0000000 303 277 012
- 0000003
--0000000   303 277 012                                                    
-+0000000 303 277 012
- 0000003
--0000000   303 277 012                                                    
-+0000000 303 277 012
- 0000003
--0000000   101 040 302 243 040 305 222 012                                
-+0000000 101 040 302 243 040 305 222 012
- 0000010
- ./unicode3.sub: line 2: 5§@3™+ÆS8Ÿ¢ê³: command not found
--5§@3™+ÆS8Ÿ¢ê³
-+$'5\247@3\231+\306S8\237\242\352\263'
- + : $'5\247@3\231+\306S8\237\242\352\263'
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/bash/bash/test-output.patch b/import-layers/yocto-poky/meta/recipes-extended/bash/bash/test-output.patch
index 2b09b7d..0ffcc24 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/bash/bash/test-output.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/bash/bash/test-output.patch
@@ -1,25 +1,42 @@
-Add FAIL/PASS output to test output.
+From 28eb06047ebd2deaa8c7cd2bf6655ef6a469dc14 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Tue, 15 Aug 2017 10:01:56 +0800
+Subject: [PATCH 1/2] Add FAIL/PASS output to test output.
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
 
 Signed-off-by: Björn Stenberg <bjst@enea.com>
 Upstream-Status: Pending
+
+Rebase to 4.4
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
 ---
-diff -uNr a/tests/run-all b/tests/run-all
---- a/tests/run-all	1999-10-08 17:07:46.000000000 +0200
-+++ b/tests/run-all	2012-10-27 21:04:18.663331887 +0200
-@@ -22,7 +22,15 @@
+ tests/run-all | 11 ++++++++++-
+ 1 file changed, 10 insertions(+), 1 deletion(-)
+
+diff --git a/tests/run-all b/tests/run-all
+index 2882fe0..e21d026 100644
+--- a/tests/run-all
++++ b/tests/run-all
+@@ -33,7 +33,16 @@ do
  	case $x in
  	$0|run-minimal|run-gprof)	;;
  	*.orig|*~) ;;
--	*)	echo $x ; sh $x ;;
-+    *)  echo $x
-+         output=`sh $x`
-+         if [ -n "$output" ]; then
-+             echo "$output"
-+             echo "FAIL: $x"
-+         else
-+             echo "PASS: $x"
-+         fi
-+         ;;
+-	*)	echo $x ; sh $x ; rm -f ${BASH_TSTOUT} ;;
++	*)	echo $x
++		output=`sh $x`
++		if [ -n "$output" ]; then
++			echo "$output"
++			echo "FAIL: $x"
++		else
++			echo "PASS: $x"
++		fi
++		rm -f ${BASH_TSTOUT}
++		;;
  	esac
  done
  
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/bash/bash_4.3.30.bb b/import-layers/yocto-poky/meta/recipes-extended/bash/bash_4.3.30.bb
deleted file mode 100644
index 2648faf..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/bash/bash_4.3.30.bb
+++ /dev/null
@@ -1,76 +0,0 @@
-require bash.inc
-
-# GPLv2+ (< 4.0), GPLv3+ (>= 4.0)
-LICENSE = "GPLv3+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
-
-SRC_URI = "${GNU_MIRROR}/bash/${BP}.tar.gz;name=tarball \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-031;apply=yes;striplevel=0;name=patch031 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-032;apply=yes;striplevel=0;name=patch032 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-033;apply=yes;striplevel=0;name=patch033 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-034;apply=yes;striplevel=0;name=patch034 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-035;apply=yes;striplevel=0;name=patch035 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-036;apply=yes;striplevel=0;name=patch036 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-037;apply=yes;striplevel=0;name=patch037 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-038;apply=yes;striplevel=0;name=patch038 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-039;apply=yes;striplevel=0;name=patch039 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-040;apply=yes;striplevel=0;name=patch040 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-041;apply=yes;striplevel=0;name=patch041 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-042;apply=yes;striplevel=0;name=patch042 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-043;apply=yes;striplevel=0;name=patch043 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-044;apply=yes;striplevel=0;name=patch044 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-045;apply=yes;striplevel=0;name=patch045 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-046;apply=yes;striplevel=0;name=patch046 \
-           ${GNU_MIRROR}/bash/bash-4.3-patches/bash43-047;apply=yes;striplevel=0;name=patch047 \
-           file://execute_cmd.patch;striplevel=0 \
-           file://mkbuiltins_have_stringize.patch \
-           file://build-tests.patch \
-           file://test-output.patch \
-           file://fix-run-coproc-run-heredoc-run-execscript-run-test-f.patch \
-           file://run-ptest \
-           file://fix-run-builtins.patch \
-           file://0001-help-fix-printf-format-security-warning.patch \
-           file://fix-run-intl.patch \
-           file://CVE-2016-9401.patch \
-           file://bash-memleak-bug-fix-for-builtin-command-read.patch \
-           "
-
-SRC_URI[tarball.md5sum] = "a27b3ee9be83bd3ba448c0ff52b28447"
-SRC_URI[tarball.sha256sum] = "317881019bbf2262fb814b7dd8e40632d13c3608d2f237800a8828fbb8a640dd"
-
-SRC_URI[patch031.md5sum] = "236df1ac1130a033ed0dbe2d2115f28f"
-SRC_URI[patch031.sha256sum] = "cd529f59dd0f2fdd49d619fe34691da6f0affedf87cc37cd460a9f3fe812a61d"
-SRC_URI[patch032.md5sum] = "2360f7e79cfb28526f80021025ea5909"
-SRC_URI[patch032.sha256sum] = "889357d29a6005b2c3308ca5b6286cb223b5e9c083219e5db3156282dd554f4a"
-SRC_URI[patch033.md5sum] = "b551c4ee7b8713759e4143499d0bbd48"
-SRC_URI[patch033.sha256sum] = "fb2a7787a13fbe027a7335aca6eb3c21cdbd813e9edc221274b6a9d8692eaa16"
-SRC_URI[patch034.md5sum] = "c9a56fbe0348e05a886dff97f2872b74"
-SRC_URI[patch034.sha256sum] = "f1694f04f110defe1330a851cc2768e7e57ddd2dfdb0e3e350ca0e3c214ff889"
-SRC_URI[patch035.md5sum] = "e564e8ab44ed1ca3a4e315a9f6cabdc9"
-SRC_URI[patch035.sha256sum] = "370d85e51780036f2386dc18c5efe996eba8e652fc1973f0f4f2ab55a993c1e3"
-SRC_URI[patch036.md5sum] = "b00ff66c41a7c0f06e191200981980b0"
-SRC_URI[patch036.sha256sum] = "ac5f82445b36efdb543dbfae64afed63f586d7574b833e9aa9cd5170bc5fd27c"
-SRC_URI[patch037.md5sum] = "be2a7b05f6ae560313f3c9d5f7127bda"
-SRC_URI[patch037.sha256sum] = "33f170dd7400ab3418d749c55c6391b1d161ef2de7aced1873451b3a3fca5813"
-SRC_URI[patch038.md5sum] = "61e0522830b24fbe8c0d1b010f132470"
-SRC_URI[patch038.sha256sum] = "adbeaa500ca7a82535f0e88d673661963f8a5fcdc7ad63445e68bf5b49786367"
-SRC_URI[patch039.md5sum] = "a4775487abe958536751c8ce53cdf6f9"
-SRC_URI[patch039.sha256sum] = "ab94dced2215541097691f60c3eb323cc28ef2549463e6a5334bbcc1e61e74ec"
-SRC_URI[patch040.md5sum] = "80d3587c58854e226055ef099ffeb535"
-SRC_URI[patch040.sha256sum] = "84bb396b9262992ca5424feab6ed3ec39f193ef5c76dfe4a62b551bd8dd9d76b"
-SRC_URI[patch041.md5sum] = "20bf63eef7cb441c0b1cc49ef3191d03"
-SRC_URI[patch041.sha256sum] = "4ec432966e4198524a7e0cd685fe222e96043769c9613e66742ac475db132c1a"
-SRC_URI[patch042.md5sum] = "70790646ae61e207c995e44931390e50"
-SRC_URI[patch042.sha256sum] = "ac219322db2791da87a496ee6e8e5544846494bdaaea2626270c2f73c1044919"
-SRC_URI[patch043.md5sum] = "855a46955cb251534e80b4732b748e37"
-SRC_URI[patch043.sha256sum] = "47a8a3c005b46e25821f4d8f5ccb04c1d653b1c829cb40568d553dc44f7a6180"
-SRC_URI[patch044.md5sum] = "29623d3282fcbb37e1158136509b5bb8"
-SRC_URI[patch044.sha256sum] = "9338820630bf67373b44d8ea68409f65162ea7a47b9b29ace06a0aed12567f99"
-SRC_URI[patch045.md5sum] = "4473244ca5abfd4b018ea26dc73e7412"
-SRC_URI[patch045.sha256sum] = "ba6ec3978e9eaa1eb3fabdaf3cc6fdf8c4606ac1c599faaeb4e2d69864150023"
-SRC_URI[patch046.md5sum] = "7e5fb09991c077076b86e0e057798913"
-SRC_URI[patch046.sha256sum] = "b3b456a6b690cd293353f17e22d92a202b3c8bce587ae5f2667c20c9ab6f688f"
-SRC_URI[patch047.md5sum] = "8483153bad1a6f52cadc3bd9a8df7835"
-SRC_URI[patch047.sha256sum] = "c69248de7e78ba6b92f118fe1ef47bc86479d5040fe0b1f908ace1c9e3c67c4a"
-
-BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/bash/bash_4.4.bb b/import-layers/yocto-poky/meta/recipes-extended/bash/bash_4.4.bb
new file mode 100644
index 0000000..e544d07
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/bash/bash_4.4.bb
@@ -0,0 +1,59 @@
+require bash.inc
+
+# GPLv2+ (< 4.0), GPLv3+ (>= 4.0)
+LICENSE = "GPLv3+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
+
+SRC_URI = "${GNU_MIRROR}/bash/${BP}.tar.gz;name=tarball \
+           ${GNU_MIRROR}/bash/bash-4.4-patches/bash44-001;apply=yes;striplevel=0;name=patch001 \
+           ${GNU_MIRROR}/bash/bash-4.4-patches/bash44-002;apply=yes;striplevel=0;name=patch002 \
+           ${GNU_MIRROR}/bash/bash-4.4-patches/bash44-003;apply=yes;striplevel=0;name=patch003 \
+           ${GNU_MIRROR}/bash/bash-4.4-patches/bash44-004;apply=yes;striplevel=0;name=patch004 \
+           ${GNU_MIRROR}/bash/bash-4.4-patches/bash44-005;apply=yes;striplevel=0;name=patch005 \
+           ${GNU_MIRROR}/bash/bash-4.4-patches/bash44-006;apply=yes;striplevel=0;name=patch006 \
+           ${GNU_MIRROR}/bash/bash-4.4-patches/bash44-007;apply=yes;striplevel=0;name=patch007 \
+           ${GNU_MIRROR}/bash/bash-4.4-patches/bash44-008;apply=yes;striplevel=0;name=patch008 \
+           ${GNU_MIRROR}/bash/bash-4.4-patches/bash44-009;apply=yes;striplevel=0;name=patch009 \
+           ${GNU_MIRROR}/bash/bash-4.4-patches/bash44-010;apply=yes;striplevel=0;name=patch010 \
+           ${GNU_MIRROR}/bash/bash-4.4-patches/bash44-011;apply=yes;striplevel=0;name=patch011 \
+           ${GNU_MIRROR}/bash/bash-4.4-patches/bash44-012;apply=yes;striplevel=0;name=patch012 \
+           file://execute_cmd.patch;striplevel=0 \
+           file://mkbuiltins_have_stringize.patch \
+           file://build-tests.patch \
+           file://test-output.patch \
+           file://fix-run-coproc-run-heredoc-run-execscript-run-test-f.patch \
+           file://run-ptest \
+           file://fix-run-builtins.patch \
+           file://0001-help-fix-printf-format-security-warning.patch \
+           file://bash-memleak-bug-fix-for-builtin-command-read.patch \
+           "
+
+SRC_URI[tarball.md5sum] = "148888a7c95ac23705559b6f477dfe25"
+SRC_URI[tarball.sha256sum] = "d86b3392c1202e8ff5a423b302e6284db7f8f435ea9f39b5b1b20fd3ac36dfcb"
+
+SRC_URI[patch001.md5sum] = "817d01a6c0af6f79308a8b7b649e53d8"
+SRC_URI[patch001.sha256sum] = "3e28d91531752df9a8cb167ad07cc542abaf944de9353fe8c6a535c9f1f17f0f"
+SRC_URI[patch002.md5sum] = "765e14cff12c7284009772e8e24f2fe0"
+SRC_URI[patch002.sha256sum] = "7020a0183e17a7233e665b979c78c184ea369cfaf3e8b4b11f5547ecb7c13c53"
+SRC_URI[patch003.md5sum] = "49e7da93bf07f510a2eb6bb43ac3e5a2"
+SRC_URI[patch003.sha256sum] = "51df5a9192fdefe0ddca4bdf290932f74be03ffd0503a3d112e4199905e718b2"
+SRC_URI[patch004.md5sum] = "4557d674ab5831a5fa98052ab19edaf4"
+SRC_URI[patch004.sha256sum] = "ad080a30a4ac6c1273373617f29628cc320a35c8cd06913894794293dc52c8b3"
+SRC_URI[patch005.md5sum] = "cce96dd77cdd1d293beec10848f6cbb5"
+SRC_URI[patch005.sha256sum] = "221e4b725b770ad0bb6924df3f8d04f89eeca4558f6e4c777dfa93e967090529"
+SRC_URI[patch006.md5sum] = "d3379f8d8abce5c6ee338f931ad008d5"
+SRC_URI[patch006.sha256sum] = "6a8e2e2a6180d0f1ce39dcd651622fb6d2fd05db7c459f64ae42d667f1e344b8"
+SRC_URI[patch007.md5sum] = "ec38c76ca439ca7f9c178e9baede84fc"
+SRC_URI[patch007.sha256sum] = "de1ccc07b7bfc9e25243ad854f3bbb5d3ebf9155b0477df16aaf00a7b0d5edaf"
+SRC_URI[patch008.md5sum] = "e0ba18c1e3b94f905da9b5bf9d38b58b"
+SRC_URI[patch008.sha256sum] = "86144700465933636d7b945e89b77df95d3620034725be161ca0ca5a42e239ba"
+SRC_URI[patch009.md5sum] = "e952d4f44e612048930c559d90eb99bb"
+SRC_URI[patch009.sha256sum] = "0b6bdd1a18a0d20e330cc3bc71e048864e4a13652e29dc0ebf3918bea729343c"
+SRC_URI[patch010.md5sum] = "57b5b35955d68f9a09dbef6b86d2c782"
+SRC_URI[patch010.sha256sum] = "8465c6f2c56afe559402265b39d9e94368954930f9aa7f3dfa6d36dd66868e06"
+SRC_URI[patch011.md5sum] = "cc896e1fa696b93ded568e557e2392d5"
+SRC_URI[patch011.sha256sum] = "dd56426ef7d7295e1107c0b3d06c192eb9298f4023c202ca2ba6266c613d170d"
+SRC_URI[patch012.md5sum] = "fa47fbfa56fb7e9e5367f19a9df5fc9e"
+SRC_URI[patch012.sha256sum] = "fac271d2bf6372c9903e3b353cb9eda044d7fe36b5aab52f21f3f21cd6a2063e"
+
+BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/byacc/byacc.inc b/import-layers/yocto-poky/meta/recipes-extended/byacc/byacc.inc
deleted file mode 100644
index adb0719..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/byacc/byacc.inc
+++ /dev/null
@@ -1,22 +0,0 @@
-SUMMARY = "Berkeley LALR Yacc parser generator"
-DESCRIPTION = "A parser generator utility that reads a grammar specification from a file and generates an LR(1) \
-parser for it.  The parsers consist of a set of LALR(1) parsing tables and a driver routine written in the C \
-programming language."
-SECTION = "devel"
-LICENSE = "PD"
-
-SRC_URI = "ftp://invisible-island.net/byacc/byacc-${PV}.tgz \
-           file://byacc-open.patch \
-           file://0001-byacc-do-not-reorder-CC-and-CFLAGS.patch"
-
-EXTRA_OECONF += "--program-transform-name='s,^,b,'"
-
-BBCLASSEXTEND = "native"
-
-inherit autotools
-
-do_configure() {
-	install -m 0755 ${STAGING_DATADIR_NATIVE}/gnu-config/config.guess ${S}
-	install -m 0755 ${STAGING_DATADIR_NATIVE}/gnu-config/config.sub ${S}
-	oe_runconf
-}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/byacc/byacc/0001-byacc-do-not-reorder-CC-and-CFLAGS.patch b/import-layers/yocto-poky/meta/recipes-extended/byacc/byacc/0001-byacc-do-not-reorder-CC-and-CFLAGS.patch
deleted file mode 100644
index 7cd2510..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/byacc/byacc/0001-byacc-do-not-reorder-CC-and-CFLAGS.patch
+++ /dev/null
@@ -1,161 +0,0 @@
-Subject: byacc: do not reorder $CC and $CFLAGS
-
-byacc tries to process $CC and decide which part should belong to CC and which
-part should below to CFLAGS and then do reordering. It doesn't make much sense
-for OE. And it doesn't do its work correctly. Some options are dropped.
-
-Delete all these stuff so that we could have all options we need.
-
-Upstream-Status: Inappropriate [OE Specific]
-
-Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
----
- aclocal.m4 |   1 -
- configure  | 119 -------------------------------------------------------------
- 2 files changed, 120 deletions(-)
-
-diff --git a/aclocal.m4 b/aclocal.m4
-index 917a848..62ef241 100644
---- a/aclocal.m4
-+++ b/aclocal.m4
-@@ -1021,7 +1021,6 @@ CF_GCC_VERSION
- CF_ACVERSION_CHECK(2.52,
- 	[AC_PROG_CC_STDC],
- 	[CF_ANSI_CC_REQD])
--CF_CC_ENV_FLAGS
- ])dnl
- dnl ---------------------------------------------------------------------------
- dnl CF_PROG_GROFF version: 2 updated: 2015/07/04 11:16:27
-diff --git a/configure b/configure
-index 9707e50..4f0497c 100755
---- a/configure
-+++ b/configure
-@@ -1946,125 +1946,6 @@ esac
- # This should have been defined by AC_PROG_CC
- : ${CC:=cc}
- 
--echo "$as_me:1949: checking \$CC variable" >&5
--echo $ECHO_N "checking \$CC variable... $ECHO_C" >&6
--case "$CC" in
--(*[\ \	]-*)
--	echo "$as_me:1953: result: broken" >&5
--echo "${ECHO_T}broken" >&6
--	{ echo "$as_me:1955: WARNING: your environment misuses the CC variable to hold CFLAGS/CPPFLAGS options" >&5
--echo "$as_me: WARNING: your environment misuses the CC variable to hold CFLAGS/CPPFLAGS options" >&2;}
--	# humor him...
--	cf_flags=`echo "$CC" | sed -e 's/^.*[ 	]\(-[^ 	]\)/\1/'`
--	CC=`echo "$CC " | sed -e 's/[ 	]-[^ 	].*$//' -e 's/[ 	]*$//'`
--	for cf_arg in $cf_flags
--	do
--		case "x$cf_arg" in
--		(x-[IUDfgOW]*)
--
--cf_fix_cppflags=no
--cf_new_cflags=
--cf_new_cppflags=
--cf_new_extra_cppflags=
--
--for cf_add_cflags in $cf_flags
--do
--case $cf_fix_cppflags in
--(no)
--	case $cf_add_cflags in
--	(-undef|-nostdinc*|-I*|-D*|-U*|-E|-P|-C)
--		case $cf_add_cflags in
--		(-D*)
--			cf_tst_cflags=`echo ${cf_add_cflags} |sed -e 's/^-D[^=]*='\''\"[^"]*//'`
--
--			test "x${cf_add_cflags}" != "x${cf_tst_cflags}" \
--				&& test -z "${cf_tst_cflags}" \
--				&& cf_fix_cppflags=yes
--
--			if test $cf_fix_cppflags = yes ; then
--				cf_new_extra_cppflags="$cf_new_extra_cppflags $cf_add_cflags"
--				continue
--			elif test "${cf_tst_cflags}" = "\"'" ; then
--				cf_new_extra_cppflags="$cf_new_extra_cppflags $cf_add_cflags"
--				continue
--			fi
--			;;
--		esac
--		case "$CPPFLAGS" in
--		(*$cf_add_cflags)
--			;;
--		(*)
--			case $cf_add_cflags in
--			(-D*)
--				cf_tst_cppflags=`echo "x$cf_add_cflags" | sed -e 's/^...//' -e 's/=.*//'`
--
--CPPFLAGS=`echo "$CPPFLAGS" | \
--	sed	-e 's/-[UD]'"$cf_tst_cppflags"'\(=[^ 	]*\)\?[ 	]/ /g' \
--		-e 's/-[UD]'"$cf_tst_cppflags"'\(=[^ 	]*\)\?$//g'`
--
--				;;
--			esac
--			cf_new_cppflags="$cf_new_cppflags $cf_add_cflags"
--			;;
--		esac
--		;;
--	(*)
--		cf_new_cflags="$cf_new_cflags $cf_add_cflags"
--		;;
--	esac
--	;;
--(yes)
--	cf_new_extra_cppflags="$cf_new_extra_cppflags $cf_add_cflags"
--
--	cf_tst_cflags=`echo ${cf_add_cflags} |sed -e 's/^[^"]*"'\''//'`
--
--	test "x${cf_add_cflags}" != "x${cf_tst_cflags}" \
--		&& test -z "${cf_tst_cflags}" \
--		&& cf_fix_cppflags=no
--	;;
--esac
--done
--
--if test -n "$cf_new_cflags" ; then
--
--	CFLAGS="$CFLAGS $cf_new_cflags"
--fi
--
--if test -n "$cf_new_cppflags" ; then
--
--	CPPFLAGS="$CPPFLAGS $cf_new_cppflags"
--fi
--
--if test -n "$cf_new_extra_cppflags" ; then
--
--	EXTRA_CPPFLAGS="$cf_new_extra_cppflags $EXTRA_CPPFLAGS"
--fi
--
--			;;
--		(*)
--			CC="$CC $cf_arg"
--			;;
--		esac
--	done
--	test -n "$verbose" && echo "	resulting CC: '$CC'" 1>&6
--
--echo "${as_me:-configure}:2051: testing resulting CC: '$CC' ..." 1>&5
--
--	test -n "$verbose" && echo "	resulting CFLAGS: '$CFLAGS'" 1>&6
--
--echo "${as_me:-configure}:2055: testing resulting CFLAGS: '$CFLAGS' ..." 1>&5
--
--	test -n "$verbose" && echo "	resulting CPPFLAGS: '$CPPFLAGS'" 1>&6
--
--echo "${as_me:-configure}:2059: testing resulting CPPFLAGS: '$CPPFLAGS' ..." 1>&5
--
--	;;
--(*)
--	echo "$as_me:2063: result: ok" >&5
--echo "${ECHO_T}ok" >&6
--	;;
--esac
--
- echo "$as_me:2068: checking whether ${MAKE-make} sets \${MAKE}" >&5
- echo $ECHO_N "checking whether ${MAKE-make} sets \${MAKE}... $ECHO_C" >&6
- set dummy ${MAKE-make}; ac_make=`echo "$2" | sed 'y,./+-,__p_,'`
--- 
-2.8.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/byacc/byacc/byacc-open.patch b/import-layers/yocto-poky/meta/recipes-extended/byacc/byacc/byacc-open.patch
deleted file mode 100644
index 0058311..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/byacc/byacc/byacc-open.patch
+++ /dev/null
@@ -1,25 +0,0 @@
-Ubuntu defaults to passing _FORTIFY_SOURCE=2 which breaks byacc as it doesn't
-pass enough arguments to open():
-
- inlined from 'open_tmpfile' at byacc-20150711/main.c:588:5:
- /usr/include/x86_64-linux-gnu/bits/fcntl2.h:50:24: error: call to '__open_missing_mode' declared with attribute error:
- open with O_CREAT in second argument needs 3 arguments
-
-Add a mode of 0666 to fix this.
-
-Upstream-Status: Pending
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-diff --git a/main.c b/main.c
-index 620ce3f..82071a4 100644
---- a/main.c
-+++ b/main.c
-@@ -526,7 +526,7 @@ my_mkstemp(char *temp)
-     }
-     if ((name = tempnam(dname, fname)) != 0)
-     {
--	fd = open(name, O_CREAT | O_EXCL | O_RDWR);
-+      fd = open(name, O_CREAT | O_EXCL | O_RDWR, 0666);
- 	strcpy(temp, name);
-     }
-     else
diff --git a/import-layers/yocto-poky/meta/recipes-extended/byacc/byacc_20161202.bb b/import-layers/yocto-poky/meta/recipes-extended/byacc/byacc_20161202.bb
deleted file mode 100644
index 755f8ab..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/byacc/byacc_20161202.bb
+++ /dev/null
@@ -1,12 +0,0 @@
-# Sigh. This is one of those places where everyone licenses it differently. Someone
-# even apply UCB to it (Free/Net/OpenBSD). The maintainer states that:
-# "I've found no reliable source which states that byacc must bear a UCB copyright."
-# Setting to PD as this is what the upstream has it as.
-
-LICENSE = "PD"
-LIC_FILES_CHKSUM = "file://package/debian/copyright;md5=74533d32ffd38bca4cbf1f1305f8bc60"
-require byacc.inc
-
-
-SRC_URI[md5sum] = "48ef38447f2cc864c70ef864b26cf817"
-SRC_URI[sha256sum] = "30dc58cfcdb708eea7ba022db29b41d2d392f20727491b956954366f2f2117f0"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/bzip2/bzip2-1.0.6/Makefile.am b/import-layers/yocto-poky/meta/recipes-extended/bzip2/bzip2-1.0.6/Makefile.am
index 05d389f..dcf6458 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/bzip2/bzip2-1.0.6/Makefile.am
+++ b/import-layers/yocto-poky/meta/recipes-extended/bzip2/bzip2-1.0.6/Makefile.am
@@ -48,7 +48,8 @@
 	else echo "FAIL: sample3 decompress"; fi
 
 install-ptest:
-	cp $(srcdir)/Makefile		$(DESTDIR)/
+	sed  -n '/^runtest:/,/^install-ptest:/{/^install-ptest:/!p}' \
+           $(srcdir)/Makefile.am      > $(DESTDIR)/Makefile
 	cp $(srcdir)/sample1.ref	$(DESTDIR)/
 	cp $(srcdir)/sample2.ref	$(DESTDIR)/
 	cp $(srcdir)/sample3.ref	$(DESTDIR)/
diff --git a/import-layers/yocto-poky/meta/recipes-extended/bzip2/bzip2_1.0.6.bb b/import-layers/yocto-poky/meta/recipes-extended/bzip2/bzip2_1.0.6.bb
index 0512a75..de668d6 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/bzip2/bzip2_1.0.6.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/bzip2/bzip2_1.0.6.bb
@@ -34,7 +34,6 @@
 EXTRA_OECONF_append_class-native = " --bindir=${STAGING_BINDIR_NATIVE}/${PN}"
 
 do_install_ptest () {
-	cp -f ${B}/Makefile ${D}${PTEST_PATH}/Makefile
 	sed -i -e "s|^Makefile:|_Makefile:|" ${D}${PTEST_PATH}/Makefile
 }
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/cups/cups.inc b/import-layers/yocto-poky/meta/recipes-extended/cups/cups.inc
index c3fa459..ac4d225 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/cups/cups.inc
+++ b/import-layers/yocto-poky/meta/recipes-extended/cups/cups.inc
@@ -1,4 +1,5 @@
 SUMMARY = "An Internet printing system for Unix"
+HOMEPAGE = "https://www.cups.org/"
 SECTION = "console/utils"
 LICENSE = "GPLv2 & LGPLv2"
 DEPENDS = "gnutls libpng jpeg dbus dbus-glib zlib libusb"
@@ -79,13 +80,6 @@
 	fi
 }
 
-python do_package_append() {
-    import subprocess
-    # Change permissions back the way they were, they probably had a reason...
-    workdir = d.getVar('WORKDIR')
-    subprocess.call('chmod 0511 %s/install/cups/var/run/cups/certs' % workdir, shell=True)
-}
-
 PACKAGES =+ "${PN}-lib ${PN}-libimage"
 
 RDEPENDS_${PN} += "${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', 'procps', '', d)}"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/cups/cups_2.2.2.bb b/import-layers/yocto-poky/meta/recipes-extended/cups/cups_2.2.2.bb
deleted file mode 100644
index 5174c30..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/cups/cups_2.2.2.bb
+++ /dev/null
@@ -1,6 +0,0 @@
-require cups.inc
-
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=f212b4338db0da8cb892e94bf2949460"
-
-SRC_URI[md5sum] = "036f6bda6202ae3e280ac00c710b5ca4"
-SRC_URI[sha256sum] = "f589bb7d5d1dc3aa0915d7cf2b808571ef2e1530cd1a6ebe76ae8f9f4994e4f6"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/cups/cups_2.2.4.bb b/import-layers/yocto-poky/meta/recipes-extended/cups/cups_2.2.4.bb
new file mode 100644
index 0000000..ed94b67
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/cups/cups_2.2.4.bb
@@ -0,0 +1,6 @@
+require cups.inc
+
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=f212b4338db0da8cb892e94bf2949460"
+
+SRC_URI[md5sum] = "d26e5a0a574a69fe1d01079b2931fc49"
+SRC_URI[sha256sum] = "596d4db72651c335469ae5f37b0da72ac9f97d73e30838d787065f559dea98cc"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils-3.5/0001-Unset-need_charset_alias-when-building-for-musl.patch b/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils-3.6/0001-Unset-need_charset_alias-when-building-for-musl.patch
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils-3.5/0001-Unset-need_charset_alias-when-building-for-musl.patch
rename to import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils-3.6/0001-Unset-need_charset_alias-when-building-for-musl.patch
diff --git a/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils-3.6/0001-explicitly-disable-replacing-getopt.patch b/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils-3.6/0001-explicitly-disable-replacing-getopt.patch
new file mode 100644
index 0000000..351f87c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils-3.6/0001-explicitly-disable-replacing-getopt.patch
@@ -0,0 +1,30 @@
+Subject: explicitly disable replacing getopt
+
+Explicitly disable replacing getopt to avoid compilation error like below.
+
+  xstrtol-error.c:84:26: error: invalid use of undefined type 'struct rpl_option'
+
+Upstream-Status: Inappropriate [workaround]
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ m4/getopt.m4 | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/m4/getopt.m4 b/m4/getopt.m4
+index 3ebc7b7..8934426 100644
+--- a/m4/getopt.m4
++++ b/m4/getopt.m4
+@@ -22,8 +22,8 @@ AC_DEFUN([gl_FUNC_GETOPT_POSIX],
+     fi
+   ])
+   if test $REPLACE_GETOPT = 1; then
+-    dnl Arrange for getopt.h to be created.
+-    gl_GETOPT_SUBSTITUTE_HEADER
++    dnl Explicitly disable replacing getopt
++    :
+   fi
+ ])
+ 
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils-3.5/run-ptest b/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils-3.6/run-ptest
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils-3.5/run-ptest
rename to import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils-3.6/run-ptest
diff --git a/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils.inc b/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils.inc
index 243341a..7c5be50 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils.inc
+++ b/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils.inc
@@ -1,4 +1,5 @@
 SUMMARY = "Diffutils contains tools used for finding differences between files"
+HOMEPAGE = "https://www.gnu.org/software/diffutils/diffutils.html"
 DESCRIPTION = "Diffutils contains the GNU diff, diff3, \
 sdiff, and cmp utilities. These programs are usually \
 used for creating patch files."
@@ -6,13 +7,6 @@
 
 inherit autotools texinfo update-alternatives gettext
 
-# diffutils assumes non-glibc compilation with uclibc and
-# this causes it to generate its own implementations of
-# standard functionality.  regex.c actually breaks compilation
-# because it uses __mempcpy, there are other things (TBD:
-# see diffutils.mk in buildroot)
-EXTRA_OECONF_libc-uclibc = "--without-included-regex"
-
 ALTERNATIVE_${PN} = "diff cmp"
 ALTERNATIVE_PRIORITY = "100"
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils_3.5.bb b/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils_3.5.bb
deleted file mode 100644
index 243584b..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils_3.5.bb
+++ /dev/null
@@ -1,40 +0,0 @@
-LICENSE = "GPLv3+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
-
-require diffutils.inc
-
-SRC_URI = "${GNU_MIRROR}/diffutils/diffutils-${PV}.tar.xz \
-           file://0001-Unset-need_charset_alias-when-building-for-musl.patch \
-           file://run-ptest \
-"
-
-EXTRA_OECONF += "--without-libsigsegv-prefix"
-
-# Fix "Argument list too long" error when len(TMPDIR) = 410
-acpaths = "-I ./m4"
-
-do_configure_prepend () {
-	# Need to remove gettext macros with weird mix of versions
-	for i in codeset.m4 gettext_gl.m4 intlmacosx.m4 inttypes-pri.m4 lib-ld_gl.m4 lib-prefix_gl.m4 po_gl.m4 ssize_t.m4 wchar_t.m4 wint_t.m4; do
-		rm -f ${S}/m4/$i
-	done
-}
-
-SRC_URI[md5sum] = "569354697ff1cfc9a9de3781361015fa"
-SRC_URI[sha256sum] = "dad398ccd5b9faca6b0ab219a036453f62a602a56203ac659b43e889bec35533"
-
-inherit ptest
-
-do_install_ptest() {
-	t=${D}${PTEST_PATH}
-	install -D ${S}/build-aux/test-driver $t/build-aux/test-driver
-	cp -r ${S}/tests $t/
-	install ${B}/tests/Makefile $t/tests/
-	sed -e 's|^Makefile:|_Makefile:|' \
-	    -e 's|bash|sh|' \
-	    -e 's|^top_srcdir = \(.*\)|top_srcdir = ..\/|' \
-	    -e 's|^srcdir = \(.*\)|srcdir = .|' \
-	    -e 's|"`$(built_programs)`"|diff|' \
-	    -e 's|gawk|awk|g' \
-	    -i $t/tests/Makefile
-}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils_3.6.bb b/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils_3.6.bb
new file mode 100644
index 0000000..deadd62
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/diffutils/diffutils_3.6.bb
@@ -0,0 +1,39 @@
+LICENSE = "GPLv3+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
+
+require diffutils.inc
+
+SRC_URI = "${GNU_MIRROR}/diffutils/diffutils-${PV}.tar.xz \
+           file://0001-Unset-need_charset_alias-when-building-for-musl.patch \
+           file://run-ptest \
+"
+SRC_URI_append_libc-glibc = " file://0001-explicitly-disable-replacing-getopt.patch"
+
+SRC_URI[md5sum] = "07cf286672ced26fba54cd0313bdc071"
+SRC_URI[sha256sum] = "d621e8bdd4b573918c8145f7ae61817d1be9deb4c8d2328a65cea8e11d783bd6"
+
+EXTRA_OECONF += "ac_cv_path_PR_PROGRAM=${bindir}/pr --without-libsigsegv-prefix"
+
+# Fix "Argument list too long" error when len(TMPDIR) = 410
+acpaths = "-I ./m4"
+
+inherit ptest
+
+do_install_ptest() {
+	t=${D}${PTEST_PATH}
+	install -D ${S}/build-aux/test-driver $t/build-aux/test-driver
+	cp -r ${S}/tests $t/
+	install ${B}/tests/Makefile $t/tests/
+	sed -e 's,--sysroot=${STAGING_DIR_TARGET},,g' \
+	    -e 's|${DEBUG_PREFIX_MAP}||g' \
+	    -e 's:${HOSTTOOLS_DIR}/::g' \
+	    -e 's:${RECIPE_SYSROOT_NATIVE}::g' \
+	    -e 's:${BASE_WORKDIR}/${MULTIMACH_TARGET_SYS}::g' \
+	    -e 's|^Makefile:|_Makefile:|' \
+	    -e 's|bash|sh|' \
+	    -e 's|^top_srcdir = \(.*\)|top_srcdir = ..\/|' \
+	    -e 's|^srcdir = \(.*\)|srcdir = .|' \
+	    -e 's|"`$(built_programs)`"|diff|' \
+	    -e 's|gawk|awk|g' \
+	    -i $t/tests/Makefile
+}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ed/ed_1.14.1.bb b/import-layers/yocto-poky/meta/recipes-extended/ed/ed_1.14.1.bb
deleted file mode 100644
index 7e6bde4..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ed/ed_1.14.1.bb
+++ /dev/null
@@ -1,35 +0,0 @@
-SUMMARY = "Line-oriented text editor"
-HOMEPAGE = "http://www.gnu.org/software/ed/"
-
-LICENSE = "GPLv3+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=0c7051aef9219dc7237f206c5c4179a7 \
-                    file://ed.h;endline=20;md5=4e36b7a40e137f42aee718165590d125 \
-                    file://main.c;endline=17;md5=c5b8f78f115df187af76868a2aead16a"
-
-SECTION = "base"
-
-# LSB states that ed should be in /bin/
-bindir = "${base_bindir}"
-
-# Upstream regularly removes previous releases from https://ftp.gnu.org/gnu/ed/
-SRC_URI = "http://downloads.yoctoproject.org/mirror/sources/${BP}.tar.lz"
-UPSTREAM_CHECK_URI = "${GNU_MIRROR}/ed/"
-
-SRC_URI[md5sum] = "7f4a54fa7f366479f03654b8af645fd0"
-SRC_URI[sha256sum] = "ffb97eb8f2a2b5a71a9b97e3872adce953aa1b8958e04c5b7bf11d556f32552a"
-
-EXTRA_OEMAKE = "-e MAKEFLAGS="
-
-inherit texinfo
-
-do_configure() {
-	${S}/configure
-}
-
-do_install() {
-	oe_runmake 'DESTDIR=${D}' install
-	# Info dir listing isn't interesting at this point so remove it if it exists.
-	if [ -e "${D}${infodir}/dir" ]; then
-		rm -f ${D}${infodir}/dir
-	fi
-}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ed/ed_1.14.2.bb b/import-layers/yocto-poky/meta/recipes-extended/ed/ed_1.14.2.bb
new file mode 100644
index 0000000..87d03b1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/ed/ed_1.14.2.bb
@@ -0,0 +1,35 @@
+SUMMARY = "Line-oriented text editor"
+HOMEPAGE = "http://www.gnu.org/software/ed/"
+
+LICENSE = "GPLv3+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=0c7051aef9219dc7237f206c5c4179a7 \
+                    file://ed.h;endline=20;md5=4e36b7a40e137f42aee718165590d125 \
+                    file://main.c;endline=17;md5=c5b8f78f115df187af76868a2aead16a"
+
+SECTION = "base"
+
+# LSB states that ed should be in /bin/
+bindir = "${base_bindir}"
+
+# Upstream regularly removes previous releases from https://ftp.gnu.org/gnu/ed/
+SRC_URI = "${GNU_MIRROR}/ed/${BP}.tar.lz"
+UPSTREAM_CHECK_URI = "${GNU_MIRROR}/ed/"
+
+SRC_URI[md5sum] = "273d04778b2a51f7c3cbfcd2001876bf"
+SRC_URI[sha256sum] = "f57962ba930d70d02fc71d6be5c5f2346b16992a455ab9c43be7061dec9810db"
+
+EXTRA_OEMAKE = "-e MAKEFLAGS="
+
+inherit texinfo
+
+do_configure() {
+	${S}/configure
+}
+
+do_install() {
+	oe_runmake 'DESTDIR=${D}' install
+	# Info dir listing isn't interesting at this point so remove it if it exists.
+	if [ -e "${D}${infodir}/dir" ]; then
+		rm -f ${D}${infodir}/dir
+	fi
+}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ethtool/ethtool_4.11.bb b/import-layers/yocto-poky/meta/recipes-extended/ethtool/ethtool_4.11.bb
new file mode 100644
index 0000000..befe9b9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/ethtool/ethtool_4.11.bb
@@ -0,0 +1,30 @@
+SUMMARY = "Display or change ethernet card settings"
+DESCRIPTION = "A small utility for examining and tuning the settings of your ethernet-based network interfaces."
+HOMEPAGE = "http://www.kernel.org/pub/software/network/ethtool/"
+SECTION = "console/network"
+LICENSE = "GPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://ethtool.c;beginline=4;endline=17;md5=c19b30548c582577fc6b443626fc1216"
+
+SRC_URI = "${KERNELORG_MIRROR}/software/network/ethtool/ethtool-${PV}.tar.gz \
+           file://run-ptest \
+           file://avoid_parallel_tests.patch \
+           "
+
+SRC_URI[md5sum] = "8f1072679888c9335e49b17efb798b4c"
+SRC_URI[sha256sum] = "af2fd9692f3159d3ab1e41e6f9b7d8db2a4693f1cb22348c88ba89f70f0e6503"
+
+inherit autotools ptest
+RDEPENDS_${PN}-ptest += "make"
+
+do_compile_ptest() {
+   oe_runmake buildtest-TESTS
+}
+
+do_install_ptest () {
+   cp ${B}/Makefile                 ${D}${PTEST_PATH}
+   install ${B}/test-cmdline        ${D}${PTEST_PATH}
+   install ${B}/test-features       ${D}${PTEST_PATH}
+   install ${B}/ethtool             ${D}${PTEST_PATH}/ethtool
+   sed -i 's/^Makefile/_Makefile/'  ${D}${PTEST_PATH}/Makefile
+}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ethtool/ethtool_4.8.bb b/import-layers/yocto-poky/meta/recipes-extended/ethtool/ethtool_4.8.bb
deleted file mode 100644
index afaf2e2..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ethtool/ethtool_4.8.bb
+++ /dev/null
@@ -1,30 +0,0 @@
-SUMMARY = "Display or change ethernet card settings"
-DESCRIPTION = "A small utility for examining and tuning the settings of your ethernet-based network interfaces."
-HOMEPAGE = "http://www.kernel.org/pub/software/network/ethtool/"
-SECTION = "console/network"
-LICENSE = "GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://ethtool.c;beginline=4;endline=17;md5=c19b30548c582577fc6b443626fc1216"
-
-SRC_URI = "https://downloads.yoctoproject.org/mirror/sources/ethtool-${PV}.tar.gz \
-           file://run-ptest \
-           file://avoid_parallel_tests.patch \
-           "
-
-SRC_URI[md5sum] = "28c4a4d85c33f573c49ff6d81ec094fd"
-SRC_URI[sha256sum] = "1bd82ebe3d41de1b7b0d8f4fb18a8e8466fba934c952bc5c5002836ffa8bb606"
-
-inherit autotools ptest
-RDEPENDS_${PN}-ptest += "make"
-
-do_compile_ptest() {
-   oe_runmake buildtest-TESTS
-}
-
-do_install_ptest () {
-   cp ${B}/Makefile                 ${D}${PTEST_PATH}
-   install ${B}/test-cmdline        ${D}${PTEST_PATH}
-   install ${B}/test-features       ${D}${PTEST_PATH}
-   install ${B}/ethtool             ${D}${PTEST_PATH}/ethtool
-   sed -i 's/^Makefile/_Makefile/'  ${D}${PTEST_PATH}/Makefile
-}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/findutils/findutils.inc b/import-layers/yocto-poky/meta/recipes-extended/findutils/findutils.inc
index bfedf87..ad36429 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/findutils/findutils.inc
+++ b/import-layers/yocto-poky/meta/recipes-extended/findutils/findutils.inc
@@ -13,11 +13,4 @@
 ALTERNATIVE_${PN} = "find xargs"
 ALTERNATIVE_PRIORITY = "100"
 
-# diffutils assumes non-glibc compilation with uclibc and
-# this causes it to generate its own implementations of
-# standard functionality.  regex.c actually breaks compilation
-# because it uses __mempcpy, there are other things (TBD:
-# see diffutils.mk in buildroot)
-EXTRA_OECONF_libc-uclibc = "--without-included-regex"
-
 BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/foomatic/foomatic-filters_4.0.17.bb b/import-layers/yocto-poky/meta/recipes-extended/foomatic/foomatic-filters_4.0.17.bb
index 3f439e7..742c9a5 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/foomatic/foomatic-filters_4.0.17.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/foomatic/foomatic-filters_4.0.17.bb
@@ -1,4 +1,5 @@
 SUMMARY = "OpenPrinting printer support - filters"
+HOMEPAGE = "https://wiki.linuxfoundation.org/openprinting/start"
 DESCRIPTION = "Foomatic is a printer database designed to make it easier to set up \
 common printers for use with UNIX-like operating systems.\
 It provides the "glue" between a print spooler (like CUPS or lpr) and \
diff --git a/import-layers/yocto-poky/meta/recipes-extended/gawk/gawk_4.1.4.bb b/import-layers/yocto-poky/meta/recipes-extended/gawk/gawk_4.1.4.bb
index dda38ef..995d37d 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/gawk/gawk_4.1.4.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/gawk/gawk_4.1.4.bb
@@ -41,9 +41,9 @@
 
 do_install_ptest() {
 	mkdir ${D}${PTEST_PATH}/test
-	for i in `grep -vE "@|^$|#|Gt-dummy" ${S}/test/Maketests |awk -F: '{print $1}'` Maketests; \
+	for i in `grep -vE "@|^$|#|Gt-dummy" ${S}/test/Maketests |awk -F: '{print $1}'` Maketests inclib.awk; \
 	  do cp ${S}/test/$i* ${D}${PTEST_PATH}/test; \
 	done
 }
 
-BBCLASSEXTEND = "nativesdk"
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-10219.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-10219.patch
deleted file mode 100644
index 574abe0..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-10219.patch
+++ /dev/null
@@ -1,49 +0,0 @@
-From 4bef1a1d32e29b68855616020dbff574b9cda08f Mon Sep 17 00:00:00 2001
-From: Robin Watts <Robin.Watts@artifex.com>
-Date: Thu, 29 Dec 2016 15:57:43 +0000
-Subject: [PATCH] Bug 697453: Avoid divide by 0 in scan conversion code.
-
-Arithmetic overflow due to extreme values in the scan conversion
-code can cause a division by 0.
-
-Avoid this with a simple extra check.
-
-  dx_old=cf814d81
-  endp->x_next=b0e859b9
-  alp->x_next=8069a73a
-
-leads to dx_den = 0
-
-Upstream-Status: Backport
-CVE: CVE-2016-10219
-
-Signed-off-by: Catalin Enache <catalin.enache@windriver.com>
----
- base/gxfill.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/base/gxfill.c b/base/gxfill.c
-index 99196c0..2f81bb0 100644
---- a/base/gxfill.c
-+++ b/base/gxfill.c
-@@ -1741,7 +1741,7 @@ intersect(active_line *endp, active_line *alp, fixed y, fixed y1, fixed *p_y_new
-     fixed dx_old = alp->x_current - endp->x_current;
-     fixed dx_den = dx_old + endp->x_next - alp->x_next;
- 
--    if (dx_den <= dx_old)
-+    if (dx_den <= dx_old || dx_den == 0)
-         return false; /* Intersection isn't possible. */
-     dy = y1 - y;
-     if_debug3('F', "[F]cross: dy=%g, dx_old=%g, dx_new=%g\n",
-@@ -1750,7 +1750,7 @@ intersect(active_line *endp, active_line *alp, fixed y, fixed y1, fixed *p_y_new
-     /* Do the computation in single precision */
-     /* if the values are small enough. */
-     y_new =
--        ((dy | dx_old) < 1L << (size_of(fixed) * 4 - 1) ?
-+        (((ufixed)(dy | dx_old)) < (1L << (size_of(fixed) * 4 - 1)) ?
-          dy * dx_old / dx_den :
-          (INCR_EXPR(mq_cross), fixed_mult_quo(dy, dx_old, dx_den)))
-         + y;
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-10220.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-10220.patch
deleted file mode 100644
index 5e1e8ba..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-10220.patch
+++ /dev/null
@@ -1,55 +0,0 @@
-From daf85701dab05f17e924a48a81edc9195b4a04e8 Mon Sep 17 00:00:00 2001
-From: Ken Sharp <ken.sharp@artifex.com>
-Date: Wed, 21 Dec 2016 16:54:14 +0000
-Subject: [PATCH] fix crash with bad data supplied to makeimagedevice
-
-Bug #697450 "Null pointer dereference in gx_device_finalize()"
-
-The problem here is that the code to finalise a device unconditionally
-frees the icc_struct member of the device structure. However this
-particular (weird) device is not setup as a normal device, probably
-because its very, very ancient. Its possible for the initialisation
-of the device to abort with an error before calling gs_make_mem_device()
-which is where the icc_struct member gets allocated (or set to NULL).
-
-If that happens, then the cleanup code tries to free the device, which
-calls finalize() which tries to free a garbage pointer.
-
-Setting the device memory to 0x00 after we allocate it means that the
-icc_struct member will be NULL< and our memory manager allows for that
-happily enough, which avoids the problem.
-
-Upstream-Status: Backport
-CVE: CVE-2016-10220
-
-Signed-off-by: Catalin Enache <catalin.enache@windriver.com>
----
- base/gsdevmem.c | 12 ++++++++++++
- 1 file changed, 12 insertions(+)
-
-diff --git a/base/gsdevmem.c b/base/gsdevmem.c
-index 97b9cf4..fe75bcc 100644
---- a/base/gsdevmem.c
-+++ b/base/gsdevmem.c
-@@ -225,6 +225,18 @@ gs_makewordimagedevice(gx_device ** pnew_dev, const gs_matrix * pmat,
- 
-     if (pnew == 0)
-         return_error(gs_error_VMerror);
-+
-+    /* Bug #697450 "Null pointer dereference in gx_device_finalize()"
-+     * If we have incorrect data passed to gs_initialise_wordimagedevice() then the
-+     * initialisation will fail, crucially it will fail *before* it calls
-+     * gs_make_mem_device() which initialises the device. This means that the
-+     * icc_struct member will be uninitialsed, but the device finalise method
-+     * will unconditionally free that memory. Since its a garbage pointer, bad things happen.
-+     * Apparently we do still need makeimagedevice to be available from
-+     * PostScript, so in here just zero the device memory, which means that
-+     * the finalise routine won't have a problem.
-+     */
-+    memset(pnew, 0x00, st_device_memory.ssize);
-     code = gs_initialize_wordimagedevice(pnew, pmat, width, height,
-                                          colors, num_colors, word_oriented,
-                                          page_device, mem);
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-7978.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-7978.patch
deleted file mode 100644
index 668f205..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-7978.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-From 6f749c0c44e7b9e09737b9f29edf29925a34f0cf Mon Sep 17 00:00:00 2001
-From: Chris Liddell <chris.liddell@artifex.com>
-Date: Wed, 5 Oct 2016 09:59:25 +0100
-Subject: [PATCH] Bug 697179: Reference count device icc profile
-
-when copying a device
-
-Upstream-Status: Backport
-CVE: CVE-2016-7978
-
-Signed-off-by: Catalin Enache <catalin.enache@windriver.com>
----
- base/gsdevice.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/base/gsdevice.c b/base/gsdevice.c
-index 778106f..aea986a 100644
---- a/base/gsdevice.c
-+++ b/base/gsdevice.c
-@@ -614,6 +614,7 @@ gx_device_init(gx_device * dev, const gx_device * proto, gs_memory_t * mem,
-     dev->memory = mem;
-     dev->retained = !internal;
-     rc_init(dev, mem, (internal ? 0 : 1));
-+    rc_increment(dev->icc_struct);
- }
- 
- void
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-7979.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-7979.patch
deleted file mode 100644
index 9e930d3..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-7979.patch
+++ /dev/null
@@ -1,48 +0,0 @@
-From 875a0095f37626a721c7ff57d606a0f95af03913 Mon Sep 17 00:00:00 2001
-From: Ken Sharp <ken.sharp@artifex.com>
-Date: Wed, 5 Oct 2016 10:10:58 +0100
-Subject: [PATCH] DSC parser - validate parameters
-
-Bug #697190 ".initialize_dsc_parser doesn't validate the parameter is a dict type before using it."
-
-Regardless of any security implications, its simply wrong for a PostScript
-operator not to validate its parameter(s).
-
-No differences expected.
-
-Upstream-Status: Backport
-CVE: CVE-2016-7979
-
-Signed-off-by: Catalin Enache <catalin.enache@windriver.com>
----
- psi/zdscpars.c | 13 +++++++++----
- 1 file changed, 9 insertions(+), 4 deletions(-)
-
-diff --git a/psi/zdscpars.c b/psi/zdscpars.c
-index c05e154..9b4b605 100644
---- a/psi/zdscpars.c
-+++ b/psi/zdscpars.c
-@@ -150,11 +150,16 @@ zinitialize_dsc_parser(i_ctx_t *i_ctx_p)
-     ref local_ref;
-     int code;
-     os_ptr const op = osp;
--    dict * const pdict = op->value.pdict;
--    gs_memory_t * const mem = (gs_memory_t *)dict_memory(pdict);
--    dsc_data_t * const data =
--        gs_alloc_struct(mem, dsc_data_t, &st_dsc_data_t, "DSC parser init");
-+    dict *pdict;
-+    gs_memory_t *mem;
-+    dsc_data_t *data;
- 
-+    check_read_type(*op, t_dictionary);
-+
-+    pdict = op->value.pdict;
-+    mem = (gs_memory_t *)dict_memory(pdict);
-+
-+    data = gs_alloc_struct(mem, dsc_data_t, &st_dsc_data_t, "DSC parser init");
-     if (!data)
-         return_error(gs_error_VMerror);
-     data->document_level = 0;
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-8602.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-8602.patch
deleted file mode 100644
index e58567c..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2016-8602.patch
+++ /dev/null
@@ -1,47 +0,0 @@
-From f5c7555c30393e64ec1f5ab0dfae5b55b3b3fc78 Mon Sep 17 00:00:00 2001
-From: Chris Liddell <chris.liddell@artifex.com>
-Date: Sat, 8 Oct 2016 16:10:27 +0100
-Subject: [PATCH] Bug 697203: check for sufficient params in .sethalftone5
-
-and param types
-
-Upstream-Status: Backport
-CVE: CVE-2016-8602
-
-Signed-off-by: Catalin Enache <catalin.enache@windriver.com>
----
- psi/zht2.c | 12 ++++++++++--
- 1 file changed, 10 insertions(+), 2 deletions(-)
-
-diff --git a/psi/zht2.c b/psi/zht2.c
-index fb4a264..dfa27a4 100644
---- a/psi/zht2.c
-+++ b/psi/zht2.c
-@@ -82,14 +82,22 @@ zsethalftone5(i_ctx_t *i_ctx_p)
-     gs_memory_t *mem;
-     uint edepth = ref_stack_count(&e_stack);
-     int npop = 2;
--    int dict_enum = dict_first(op);
-+    int dict_enum;
-     ref rvalue[2];
-     int cname, colorant_number;
-     byte * pname;
-     uint name_size;
-     int halftonetype, type = 0;
-     gs_gstate *pgs = igs;
--    int space_index = r_space_index(op - 1);
-+    int space_index;
-+
-+    if (ref_stack_count(&o_stack) < 2)
-+        return_error(gs_error_stackunderflow);
-+    check_type(*op, t_dictionary);
-+    check_type(*(op - 1), t_dictionary);
-+
-+    dict_enum = dict_first(op);
-+    space_index = r_space_index(op - 1);
- 
-     mem = (gs_memory_t *) idmemory->spaces_indexed[space_index];
- 
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2017-7975.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2017-7975.patch
index d0886c9..e406086 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2017-7975.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/CVE-2017-7975.patch
@@ -1,6 +1,7 @@
-From 5e57e483298dae8b8d4ec9aab37a526736ac2e97 Mon Sep 17 00:00:00 2001
-From: Shailesh Mistry <shailesh.mistry@hotmail.co.uk>
-Date: Wed, 26 Apr 2017 22:12:14 +0100
+From b39be1019b4acc1aa50c6026463c543332e95a31 Mon Sep 17 00:00:00 2001
+From: Catalin Enache <catalin.enache@windriver.com>
+Date: Mon, 8 May 2017 16:18:14 +0300
+
 Subject: [PATCH] Bug 697693: Prevent SEGV due to integer overflow.
 
 While building a Huffman table, the start and end points were susceptible
@@ -12,15 +13,17 @@
 CVE: CVE-2017-7975
 
 Signed-off-by: Catalin Enache <catalin.enache@windriver.com>
----
- jbig2dec/jbig2_huffman.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
 
-diff --git a/jbig2dec/jbig2_huffman.c b/jbig2dec/jbig2_huffman.c
-index 511e461..b4189a1 100644
+Contents of this patch were extracted from a larger patch which addressed
+two CVE's.  The context (location of {) was also modified to apply to
+ghostscript 9.21.
+
+Signed-off-by: Joe Slater <joe.slater@windriver.com>
+
+
 --- a/jbig2dec/jbig2_huffman.c
 +++ b/jbig2dec/jbig2_huffman.c
-@@ -421,8 +421,8 @@ jbig2_build_huffman_table(Jbig2Ctx *ctx, const Jbig2HuffmanParams *params)
+@@ -421,8 +421,8 @@ jbig2_build_huffman_table(Jbig2Ctx *ctx,
  
              if (PREFLEN == CURLEN) {
                  int RANGELEN = lines[CURTEMP].RANGELEN;
@@ -31,6 +34,4 @@
                  byte eflags = 0;
  
                  if (end_j > max_j) {
--- 
-2.10.2
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/avoid-host-contamination.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/avoid-host-contamination.patch
new file mode 100644
index 0000000..c4794e7
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/avoid-host-contamination.patch
@@ -0,0 +1,19 @@
+Remove hardcode path refer to host to avoid host contamination.
+
+Upstream-Status: Inappropriate [embedded specific]
+
+Signed-off-by: Kai Kang <kai.kang@windriver.com>
+---
+diff --git a/devices/devs.mak b/devices/devs.mak
+index 3070d2e..df663f0 100644
+--- a/devices/devs.mak
++++ b/devices/devs.mak
+@@ -546,7 +546,7 @@ $(DEVOBJ)gdevxalt.$(OBJ) : $(DEVSRC)gdevxalt.c $(GDEVX) $(math__h) $(memory__h)\
+ ### NON PORTABLE, ONLY UNIX WITH GCC SUPPORT
+ 
+ $(DEVOBJ)X11.so : $(x11alt_) $(x11_) $(DEVS_MAK) $(MAKEDIRS)
+-	$(CCLD) $(LDFLAGS) -shared -o $(DEVOBJ)X11.so $(x11alt_) $(x11_) -L/usr/X11R6/lib -lXt -lSM -lICE -lXext -lX11 $(XLIBDIRS)
++	$(CCLD) $(LDFLAGS) -shared -o $(DEVOBJ)X11.so $(x11alt_) $(x11_) -lXt -lSM -lICE -lXext -lX11 $(XLIBDIRS)
+ 
+ ###### --------------- Memory-buffered printer devices --------------- ######
+ 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.02-prevent_recompiling.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.02-prevent_recompiling.patch
deleted file mode 100644
index e709195..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.02-prevent_recompiling.patch
+++ /dev/null
@@ -1,99 +0,0 @@
-Just use commands provided by ghostscript-native, preventing recompile them when
-compile ghostscript.
-Way to enable cross compile.
-
-Upstream-Status: Pending
-
-Signed-off-by: Kang Kai <kai.kang@windriver.com>
-Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
-
-Rebase to 9.19
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- base/unix-aux.mak | 64 +++++++++++++++++++++++++++----------------------------
- 1 file changed, 32 insertions(+), 32 deletions(-)
-
-diff --git a/base/unix-aux.mak b/base/unix-aux.mak
-index 0110667..e2eb1a1 100644
---- a/base/unix-aux.mak
-+++ b/base/unix-aux.mak
-@@ -71,44 +71,44 @@ $(GLOBJ)gp_sysv.$(OBJ): $(GLSRC)gp_sysv.c $(stdio__h) $(time__h) $(AK)\
- 
- # -------------------------- Auxiliary programs --------------------------- #
- 
--$(ECHOGS_XE): $(GLSRC)echogs.c $(AK) $(stdpre_h) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(ECHOGS_XE) $(GLSRC)echogs.c $(AUXEXTRALIBS)
--
-+#$(ECHOGS_XE): $(GLSRC)echogs.c $(AK) $(stdpre_h) $(UNIX_AUX_MAK) $(MAKEDIRS)
-+#	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(ECHOGS_XE) $(GLSRC)echogs.c $(AUXEXTRALIBS)
-+#
- # On the RS/6000 (at least), compiling genarch.c with gcc with -O
- # produces a buggy executable.
--$(GENARCH_XE): $(GLSRC)genarch.c $(AK) $(GENARCH_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENARCH_XE) $(GLSRC)genarch.c $(AUXEXTRALIBS)
--
--$(GENCONF_XE): $(GLSRC)genconf.c $(AK) $(GENCONF_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENCONF_XE) $(GLSRC)genconf.c $(AUXEXTRALIBS)
--
--$(GENDEV_XE): $(GLSRC)gendev.c $(AK) $(GENDEV_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENDEV_XE) $(GLSRC)gendev.c $(AUXEXTRALIBS)
--
--$(GENHT_XE): $(GLSRC)genht.c $(AK) $(GENHT_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(GENHT_CFLAGS) $(O_)$(GENHT_XE) $(GLSRC)genht.c $(AUXEXTRALIBS)
--
-+#$(GENARCH_XE): $(GLSRC)genarch.c $(AK) $(GENARCH_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
-+#	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENARCH_XE) $(GLSRC)genarch.c $(AUXEXTRALIBS)
-+#
-+#$(GENCONF_XE): $(GLSRC)genconf.c $(AK) $(GENCONF_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
-+#	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENCONF_XE) $(GLSRC)genconf.c $(AUXEXTRALIBS)
-+#
-+#$(GENDEV_XE): $(GLSRC)gendev.c $(AK) $(GENDEV_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
-+#	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENDEV_XE) $(GLSRC)gendev.c $(AUXEXTRALIBS)
-+#
-+#$(GENHT_XE): $(GLSRC)genht.c $(AK) $(GENHT_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
-+#	$(CCAUX_) $(GENHT_CFLAGS) $(O_)$(GENHT_XE) $(GLSRC)genht.c $(AUXEXTRALIBS)
-+#
- # To get GS to use the system zlib, you remove/hide the gs/zlib directory
- # which means that the mkromfs build can't find the zlib source it needs.
- # So it's split into two targets, one using the zlib source directly.....
--MKROMFS_OBJS_0=$(MKROMFS_ZLIB_OBJS) $(AUX)gpmisc.$(OBJ) $(AUX)gp_getnv.$(OBJ) \
-- $(AUX)gscdefs.$(OBJ) $(AUX)gp_unix.$(OBJ) $(AUX)gp_unifs.$(OBJ) $(AUX)gp_unifn.$(OBJ) \
-- $(AUX)gp_stdia.$(OBJ) $(AUX)gsutil.$(OBJ) $(AUX)memento.$(OBJ)
--
--$(MKROMFS_XE)_0: $(GLSRC)mkromfs.c $(MKROMFS_COMMON_DEPS) $(MKROMFS_OBJS_0) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(GENOPT) $(CFLAGS) $(I_)$(GLSRCDIR)$(_I) $(I_)$(GLOBJ)$(_I) $(I_)$(ZSRCDIR)$(_I) $(GLSRC)mkromfs.c $(O_)$(MKROMFS_XE)_0 $(MKROMFS_OBJS_0) $(AUXEXTRALIBS)
--
-+#MKROMFS_OBJS_0=$(MKROMFS_ZLIB_OBJS) $(AUX)gpmisc.$(OBJ) $(AUX)gp_getnv.$(OBJ) \
-+# $(AUX)gscdefs.$(OBJ) $(AUX)gp_unix.$(OBJ) $(AUX)gp_unifs.$(OBJ) $(AUX)gp_unifn.$(OBJ) \
-+# $(AUX)gp_stdia.$(OBJ) $(AUX)gsutil.$(OBJ) $(AUX)memento.$(OBJ)
-+#
-+#$(MKROMFS_XE)_0: $(GLSRC)mkromfs.c $(MKROMFS_COMMON_DEPS) $(MKROMFS_OBJS_0) $(UNIX_AUX_MAK) $(MAKEDIRS)
-+#	$(CCAUX_) $(GENOPT) $(CFLAGS) $(I_)$(GLSRCDIR)$(_I) $(I_)$(GLOBJ)$(_I) $(I_)$(ZSRCDIR)$(_I) $(GLSRC)mkromfs.c $(O_)$(MKROMFS_XE)_0 $(MKROMFS_OBJS_0) $(AUXEXTRALIBS)
-+#
- # .... and one using the zlib library linked via the command line
--MKROMFS_OBJS_1=$(AUX)gscdefs.$(OBJ) \
-- $(AUX)gpmisc.$(OBJ) $(AUX)gp_getnv.$(OBJ) \
-- $(AUX)gp_unix.$(OBJ) $(AUX)gp_unifs.$(OBJ) $(AUX)gp_unifn.$(OBJ) \
-- $(AUX)gp_stdia.$(OBJ) $(AUX)gsutil.$(OBJ)
--
--$(MKROMFS_XE)_1: $(GLSRC)mkromfs.c $(MKROMFS_COMMON_DEPS) $(MKROMFS_OBJS_1) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CCAUX_) $(GENOPT) $(CFLAGS) $(I_)$(GLSRCDIR)$(_I) $(I_)$(GLOBJ)$(_I) $(I_)$(ZSRCDIR)$(_I) $(GLSRC)mkromfs.c $(O_)$(MKROMFS_XE)_1 $(MKROMFS_OBJS_1) $(AUXEXTRALIBS)
--
--$(MKROMFS_XE): $(MKROMFS_XE)_$(SHARE_ZLIB) $(UNIX_AUX_MAK) $(MAKEDIRS)
--	$(CP_) $(MKROMFS_XE)_$(SHARE_ZLIB) $(MKROMFS_XE)
-+#MKROMFS_OBJS_1=$(AUX)gscdefs.$(OBJ) \
-+# $(AUX)gpmisc.$(OBJ) $(AUX)gp_getnv.$(OBJ) \
-+# $(AUX)gp_unix.$(OBJ) $(AUX)gp_unifs.$(OBJ) $(AUX)gp_unifn.$(OBJ) \
-+# $(AUX)gp_stdia.$(OBJ) $(AUX)gsutil.$(OBJ)
-+#
-+#$(MKROMFS_XE)_1: $(GLSRC)mkromfs.c $(MKROMFS_COMMON_DEPS) $(MKROMFS_OBJS_1) $(UNIX_AUX_MAK) $(MAKEDIRS)
-+#	$(CCAUX_) $(GENOPT) $(CFLAGS) $(I_)$(GLSRCDIR)$(_I) $(I_)$(GLOBJ)$(_I) $(I_)$(ZSRCDIR)$(_I) $(GLSRC)mkromfs.c $(O_)$(MKROMFS_XE)_1 $(MKROMFS_OBJS_1) $(AUXEXTRALIBS)
-+#
-+#$(MKROMFS_XE): $(MKROMFS_XE)_$(SHARE_ZLIB) $(UNIX_AUX_MAK) $(MAKEDIRS)
-+#	$(CP_) $(MKROMFS_XE)_$(SHARE_ZLIB) $(MKROMFS_XE)
- 
- # Query the environment to construct gconfig_.h.
- # These are all defined conditionally (except the JasPER one), so that
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.15-parallel-make.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.15-parallel-make.patch
index 8fa2c40..3ec3956 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.15-parallel-make.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.15-parallel-make.patch
@@ -10,6 +10,7 @@
 2015 00:40:22 -0800  Subject: [PATCH] contrib.mak: fix for parallel build
 
 Signed-off-by: Huang Qiyu <huangqy.fnst@cn.fujitsu.com>
+Upstream-Status: Pending
 ---
  contrib/contrib.mak | 2 ++
  1 file changed, 2 insertions(+)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.21-native-fix-disable-system-libtiff.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.21-native-fix-disable-system-libtiff.patch
new file mode 100644
index 0000000..bff3e61
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.21-native-fix-disable-system-libtiff.patch
@@ -0,0 +1,37 @@
+ghostscript-native:fix disable-system-libtiff
+
+Modify configure to add the check to make sure
+ghostscrip could work while system-libtiff is
+disabled.
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+
+Updated to apply to ghostscript 9.21.
+
+Signed-off-by: Joe Slater <joe.slater@windriver.com>
+
+Upstream-Status: Pending
+
+
+
+--- a/configure.ac
++++ b/configure.ac
+@@ -1259,6 +1259,7 @@ case "x$with_system_libtiff" in
+ esac
+ 
+ if test x"$SHARE_LIBTIFF" = x"0" ; then
++    if test -e $LIBTIFFDIR/configure; then
+       echo "Running libtiff configure script..."
+       olddir=`pwd`
+       if ! test -d "$LIBTIFFCONFDIR" ; then
+@@ -1272,6 +1273,10 @@ if test x"$SHARE_LIBTIFF" = x"0" ; then
+       cd "$olddir"
+       echo
+       echo "Continuing with Ghostscript configuration..."
++    else
++      AC_MSG_NOTICE([Could not find local copy of libtiff.
++Disabling tiff output devices.])
++    fi
+ fi
+ 
+ AC_SUBST(SHARE_LIBTIFF)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.21-prevent_recompiling.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.21-prevent_recompiling.patch
new file mode 100644
index 0000000..f2c6d04
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-9.21-prevent_recompiling.patch
@@ -0,0 +1,96 @@
+Just use commands provided by ghostscript-native, preventing recompile them when
+compile ghostscript.
+Way to enable cross compile.
+
+Upstream-Status: Pending
+
+Signed-off-by: Kang Kai <kai.kang@windriver.com>
+Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
+
+Rebase to 9.19
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+
+Rebase to 9.21
+Signed-off-by: Joe Slater <joe.slater@windriver.com>
+
+--- a/base/unix-aux.mak
++++ b/base/unix-aux.mak
+@@ -66,45 +66,45 @@ $(GLOBJ)gp_sysv.$(OBJ): $(GLSRC)gp_sysv.
+ 
+ # -------------------------- Auxiliary programs --------------------------- #
+ 
+-$(ECHOGS_XE): $(GLSRC)echogs.c $(AK) $(stdpre_h) $(UNIX_AUX_MAK) $(MAKEDIRS)
+-	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(ECHOGS_XE) $(GLSRC)echogs.c $(AUXEXTRALIBS)
+-
++#$(ECHOGS_XE): $(GLSRC)echogs.c $(AK) $(stdpre_h) $(UNIX_AUX_MAK) $(MAKEDIRS)
++#	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(ECHOGS_XE) $(GLSRC)echogs.c $(AUXEXTRALIBS)
++#
+ # On the RS/6000 (at least), compiling genarch.c with gcc with -O
+ # produces a buggy executable.
+-$(GENARCH_XE): $(GLSRC)genarch.c $(AK) $(GENARCH_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
+-	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENARCH_XE) $(GLSRC)genarch.c $(AUXEXTRALIBS)
+-
+-$(GENCONF_XE): $(GLSRC)genconf.c $(AK) $(GENCONF_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
+-	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENCONF_XE) $(GLSRC)genconf.c $(AUXEXTRALIBS)
+-
+-$(GENDEV_XE): $(GLSRC)gendev.c $(AK) $(GENDEV_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
+-	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENDEV_XE) $(GLSRC)gendev.c $(AUXEXTRALIBS)
+-
+-$(GENHT_XE): $(GLSRC)genht.c $(AK) $(GENHT_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
+-	$(CCAUX_) $(GENHT_CFLAGS) $(O_)$(GENHT_XE) $(GLSRC)genht.c $(AUXEXTRALIBS)
+-
++#$(GENARCH_XE): $(GLSRC)genarch.c $(AK) $(GENARCH_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
++#	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENARCH_XE) $(GLSRC)genarch.c $(AUXEXTRALIBS)
++#
++#$(GENCONF_XE): $(GLSRC)genconf.c $(AK) $(GENCONF_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
++#	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENCONF_XE) $(GLSRC)genconf.c $(AUXEXTRALIBS)
++#
++#$(GENDEV_XE): $(GLSRC)gendev.c $(AK) $(GENDEV_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
++#	$(CCAUX_) $(I_)$(GLSRCDIR)$(_I) $(O_)$(GENDEV_XE) $(GLSRC)gendev.c $(AUXEXTRALIBS)
++#
++#$(GENHT_XE): $(GLSRC)genht.c $(AK) $(GENHT_DEPS) $(UNIX_AUX_MAK) $(MAKEDIRS)
++#	$(CCAUX_) $(GENHT_CFLAGS) $(O_)$(GENHT_XE) $(GLSRC)genht.c $(AUXEXTRALIBS)
++#
+ # To get GS to use the system zlib, you remove/hide the gs/zlib directory
+ # which means that the mkromfs build can't find the zlib source it needs.
+ # So it's split into two targets, one using the zlib source directly.....
+-MKROMFS_OBJS_0=$(MKROMFS_ZLIB_OBJS) $(AUX)gpmisc.$(OBJ) $(AUX)gp_getnv.$(OBJ) \
+- $(AUX)gscdefs.$(OBJ) $(AUX)gp_unix.$(OBJ) $(AUX)gp_unifs.$(OBJ) $(AUX)gp_unifn.$(OBJ) \
+- $(AUX)gp_stdia.$(OBJ) $(AUX)gsutil.$(OBJ) $(AUX)memento.$(OBJ)
+-
+-$(MKROMFS_XE)_0: $(GLSRC)mkromfs.c $(MKROMFS_COMMON_DEPS) $(MKROMFS_OBJS_0) $(UNIX_AUX_MAK) $(MAKEDIRS)
+-	$(CCAUX_) $(GENOPTAUX) $(I_)$(GLSRCDIR)$(_I) $(I_)$(GLOBJ)$(_I) $(I_)$(ZSRCDIR)$(_I) $(GLSRC)mkromfs.c $(O_)$(MKROMFS_XE)_0 $(MKROMFS_OBJS_0) $(AUXEXTRALIBS)
+-
++#MKROMFS_OBJS_0=$(MKROMFS_ZLIB_OBJS) $(AUX)gpmisc.$(OBJ) $(AUX)gp_getnv.$(OBJ) \
++# $(AUX)gscdefs.$(OBJ) $(AUX)gp_unix.$(OBJ) $(AUX)gp_unifs.$(OBJ) $(AUX)gp_unifn.$(OBJ) \
++# $(AUX)gp_stdia.$(OBJ) $(AUX)gsutil.$(OBJ) $(AUX)memento.$(OBJ)
++#
++#$(MKROMFS_XE)_0: $(GLSRC)mkromfs.c $(MKROMFS_COMMON_DEPS) $(MKROMFS_OBJS_0) $(UNIX_AUX_MAK) $(MAKEDIRS)
++#	$(CCAUX_) $(GENOPTAUX) $(I_)$(GLSRCDIR)$(_I) $(I_)$(GLOBJ)$(_I) $(I_)$(ZSRCDIR)$(_I) $(GLSRC)mkromfs.c $(O_)$(MKROMFS_XE)_0 $(MKROMFS_OBJS_0) $(AUXEXTRALIBS)
++#
+ # .... and one using the zlib library linked via the command line
+-MKROMFS_OBJS_1=$(AUX)gscdefs.$(OBJ) \
+- $(AUX)gpmisc.$(OBJ) $(AUX)gp_getnv.$(OBJ) \
+- $(AUX)gp_unix.$(OBJ) $(AUX)gp_unifs.$(OBJ) $(AUX)gp_unifn.$(OBJ) \
+- $(AUX)gp_stdia.$(OBJ) $(AUX)gsutil.$(OBJ)
+-
+-$(MKROMFS_XE)_1: $(GLSRC)mkromfs.c $(MKROMFS_COMMON_DEPS) $(MKROMFS_OBJS_1) $(UNIX_AUX_MAK) $(MAKEDIRS)
+-	$(CCAUX_) $(GENOPTAUX) $(I_)$(GLSRCDIR)$(_I) $(I_)$(GLOBJ)$(_I) $(I_)$(ZSRCDIR)$(_I) $(GLSRC)mkromfs.c $(O_)$(MKROMFS_XE)_1 $(MKROMFS_OBJS_1) $(AUXEXTRALIBS)
+-
+-$(MKROMFS_XE): $(MKROMFS_XE)_$(SHARE_ZLIB) $(UNIX_AUX_MAK) $(MAKEDIRS)
+-	$(CP_) $(MKROMFS_XE)_$(SHARE_ZLIB) $(MKROMFS_XE)
+-
++#MKROMFS_OBJS_1=$(AUX)gscdefs.$(OBJ) \
++# $(AUX)gpmisc.$(OBJ) $(AUX)gp_getnv.$(OBJ) \
++# $(AUX)gp_unix.$(OBJ) $(AUX)gp_unifs.$(OBJ) $(AUX)gp_unifn.$(OBJ) \
++# $(AUX)gp_stdia.$(OBJ) $(AUX)gsutil.$(OBJ)
++#
++#$(MKROMFS_XE)_1: $(GLSRC)mkromfs.c $(MKROMFS_COMMON_DEPS) $(MKROMFS_OBJS_1) $(UNIX_AUX_MAK) $(MAKEDIRS)
++#	$(CCAUX_) $(GENOPTAUX) $(I_)$(GLSRCDIR)$(_I) $(I_)$(GLOBJ)$(_I) $(I_)$(ZSRCDIR)$(_I) $(GLSRC)mkromfs.c $(O_)$(MKROMFS_XE)_1 $(MKROMFS_OBJS_1) $(AUXEXTRALIBS)
++#
++#$(MKROMFS_XE): $(MKROMFS_XE)_$(SHARE_ZLIB) $(UNIX_AUX_MAK) $(MAKEDIRS)
++#	$(CP_) $(MKROMFS_XE)_$(SHARE_ZLIB) $(MKROMFS_XE)
++#
+ # Query the environment to construct gconfig_.h.
+ # These are all defined conditionally (except the JasPER one), so that
+ # they can be overridden by settings from the configure script.
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-native-fix-disable-system-libtiff.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-native-fix-disable-system-libtiff.patch
deleted file mode 100644
index 9158117..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/ghostscript-native-fix-disable-system-libtiff.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-ghostscript-native:fix disable-system-libtiff
-
-Modify configure to add the check to make sure
-ghostscrip could work while system-libtiff is
-disabled.
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
-Upstream-Status: Pending
----
- configure.ac | 5 +++++
- 1 file changed, 5 insertions(+)
-
-diff --git a/configure.ac b/configure.ac
---- a/configure.ac
-+++ b/configure.ac
-@@ -1055,6 +1055,7 @@ Disabling tiff output devices.])
- esac
- 
- if test $SHARE_LIBTIFF -eq 0; then
-+    if test -e $LIBTIFFDIR/configure; then
-       echo
-       echo "Running libtiff configure script..."
-       olddir=`pwd`
-@@ -1069,6 +1070,10 @@ if test $SHARE_LIBTIFF -eq 0; then
-       cd "$olddir"
-       echo
-       echo "Continuing with Ghostscript configuration..."
-+    else
-+      AC_MSG_NOTICE([Could not find local copy of libtiff.
-+Disabling tiff output devices.])
-+    fi
- fi
- 
- AC_SUBST(SHARE_LIBTIFF)
--- 
-1.8.1.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/mkdir-p.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/mkdir-p.patch
new file mode 100644
index 0000000..5a7eab1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/mkdir-p.patch
@@ -0,0 +1,36 @@
+ghostscript: allow directories to be created more than once
+
+When doing parallel builds, we might try to create directories
+more than once.  This should not cause an error.
+
+Upstream-Status: Pending
+
+Signed-off-by: Joe Slater <joe.slater@windriver.com>
+
+
+--- a/base/unix-end.mak
++++ b/base/unix-end.mak
+@@ -17,15 +17,14 @@
+ UNIX_END_MAK=$(GLSRC)unix-end.mak $(TOP_MAKEFILES)
+ # Define the rule for building standard configurations.
+ directories: $(UNIX_END_MAK)
+-	@if test "$(BINDIR)"    != "" -a ! -d $(BINDIR);        then mkdir $(BINDIR);        fi
+-	@if test "$(GLGENDIR)"  != "" -a ! -d $(GLGENDIR);      then mkdir $(GLGENDIR);      fi
+-	@if test "$(GLOBJDIR)"  != "" -a ! -d $(GLOBJDIR);      then mkdir $(GLOBJDIR);      fi
+-	@if test "$(DEVGENDIR)" != "" -a ! -d $(DEVGENDIR);     then mkdir $(DEVGENDIR);     fi
+-	@if test "$(DEVOBJDIR)" != "" -a ! -d $(DEVOBJDIR);     then mkdir $(DEVOBJDIR);     fi
+-	@if test "$(AUXDIR)"    != "" -a ! -d $(AUXDIR);        then mkdir $(AUXDIR);        fi
+-	@if test "$(PSGENDIR)"  != "" -a ! -d $(PSGENDIR);      then mkdir $(PSGENDIR);      fi
+-	@if test "$(PSGENDIR)"  != "" -a ! -d $(PSGENDIR)/cups; then mkdir $(PSGENDIR)/cups; fi
+-	@if test "$(PSOBJDIR)"  != "" -a ! -d $(PSOBJDIR);      then mkdir $(PSOBJDIR);      fi
++	@if test "$(BINDIR)"    != "" -a ! -d $(BINDIR);        then mkdir -p $(BINDIR);        fi
++	@if test "$(GLGENDIR)"  != "" -a ! -d $(GLGENDIR);      then mkdir -p $(GLGENDIR);      fi
++	@if test "$(GLOBJDIR)"  != "" -a ! -d $(GLOBJDIR);      then mkdir -p $(GLOBJDIR);      fi
++	@if test "$(DEVGENDIR)" != "" -a ! -d $(DEVGENDIR);     then mkdir -p $(DEVGENDIR);     fi
++	@if test "$(DEVOBJDIR)" != "" -a ! -d $(DEVOBJDIR);     then mkdir -p $(DEVOBJDIR);     fi
++	@if test "$(AUXDIR)"    != "" -a ! -d $(AUXDIR);        then mkdir -p $(AUXDIR);        fi
++	@if test "$(PSGENDIR)"  != "" -a ! -d $(PSGENDIR)/cups; then mkdir -p $(PSGENDIR)/cups; fi
++	@if test "$(PSOBJDIR)"  != "" -a ! -d $(PSOBJDIR);      then mkdir -p $(PSOBJDIR);      fi
+ 
+ 
+ gs: .gssubtarget $(UNIX_END_MAK)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/png_mak.patch b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/png_mak.patch
deleted file mode 100644
index 8b84986..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript/png_mak.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-ghostscript: add dependency for pnglibconf.h
-
-When using parallel make jobs, we need to be sure that
-pnglibconf.h is created before we try to reference it,
-so add a rule to png.mak.
-
-Upstream-Status: Pending
-
-Signed-off-by: Joe Slater <jslater@windriver.com>
-
-Rebase to 9.19
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- base/png.mak | 2 ++
- 1 file changed, 2 insertions(+)
-
-diff --git a/base/png.mak b/base/png.mak
-index fe5c6e2..8abb53a 100644
---- a/base/png.mak
-+++ b/base/png.mak
-@@ -74,6 +74,8 @@ png.clean-not-config-clean :
- 
- pnglibconf_h=$(PNGGENDIR)$(D)pnglibconf.h
- 
-+$(MAKEDIRS) : $(pnglibconf_h)
-+
- png.config-clean :
- 	$(RM_) $(pnglibconf_h)
- 	$(RM_) $(PNGGEN)lpg*.dev
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript_9.20.bb b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript_9.20.bb
deleted file mode 100644
index e1d9700..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript_9.20.bb
+++ /dev/null
@@ -1,123 +0,0 @@
-SUMMARY = "The GPL Ghostscript PostScript/PDF interpreter"
-DESCRIPTION = "Ghostscript is used for PostScript/PDF preview and printing.  Usually as \
-a back-end to a program such as ghostview, it can display PostScript and PDF \
-documents in an X11 environment. \
-\
-Furthermore, it can render PostScript and PDF files as graphics to be printed \
-on non-PostScript printers. Supported printers include common \
-dot-matrix, inkjet and laser models. \
-"
-HOMEPAGE = "http://www.ghostscript.com"
-SECTION = "console/utils"
-
-LICENSE = "GPLv3"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=70dc2bac4d0ce4448da873cd86b123fc"
-
-DEPENDS = "ghostscript-native tiff jpeg fontconfig cups libpng"
-DEPENDS_class-native = "libpng-native"
-
-UPSTREAM_CHECK_URI = "https://github.com/ArtifexSoftware/ghostpdl-downloads/releases"
-UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)\.tar"
-
-SRC_URI_BASE = "https://github.com/ArtifexSoftware/ghostpdl-downloads/releases/download/gs920/${BPN}-${PV}.tar.gz \
-                file://ghostscript-9.15-parallel-make.patch \
-                file://ghostscript-9.16-Werror-return-type.patch \
-                file://png_mak.patch \
-                file://do-not-check-local-libpng-source.patch \
-"
-
-SRC_URI = "${SRC_URI_BASE} \
-           file://ghostscript-9.02-prevent_recompiling.patch \
-           file://ghostscript-9.02-genarch.patch \
-           file://objarch.h \
-           file://cups-no-gcrypt.patch \
-           file://CVE-2017-7207.patch \
-           file://CVE-2016-10219.patch \
-           file://CVE-2016-10220.patch \
-           file://CVE-2017-5951.patch \
-           file://CVE-2016-8602.patch \
-           file://CVE-2017-7975.patch \
-           file://CVE-2016-7977.patch \
-           file://CVE-2016-7978.patch \
-           file://CVE-2016-7979.patch \
-           file://CVE-2017-9216.patch \
-           file://CVE-2017-9611.patch \
-           file://CVE-2017-9612.patch \
-           file://CVE-2017-9739.patch \
-           file://CVE-2017-9726.patch \
-           file://CVE-2017-9727.patch \
-           file://CVE-2017-9835.patch \
-           file://CVE-2017-11714.patch \
-           "
-
-SRC_URI_class-native = "${SRC_URI_BASE} \
-                        file://ghostscript-native-fix-disable-system-libtiff.patch \
-                        file://base-genht.c-add-a-preprocessor-define-to-allow-fope.patch \
-                        "
-
-SRC_URI[md5sum] = "93c5987cd3ab341108be1ebbaadc24fe"
-SRC_URI[sha256sum] = "949b64b46ecf8906db54a94ecf83ab97534ebf946f770d3c3f283cb469cb6e14"
-
-EXTRA_OECONF = "--without-x --with-system-libtiff --without-jbig2dec \
-                --with-fontpath=${datadir}/fonts \
-                --without-libidn --with-cups-serverbin=${exec_prefix}/lib/cups \
-                --with-cups-datadir=${datadir}/cups \
-                "
-
-EXTRA_OECONF_append_mipsarcho32 = " --with-large_color_index=0"
-
-# Explicity disable libtiff, fontconfig,
-# freetype, cups for ghostscript-native
-EXTRA_OECONF_class-native = "--without-x --with-system-libtiff=no \
-                             --without-jbig2dec \
-                             --with-fontpath=${datadir}/fonts \
-                             --without-libidn --disable-fontconfig \
-                             --disable-freetype --disable-cups"
-
-# This has been fixed upstream but for now we need to subvert the check for time.h
-# http://bugs.ghostscript.com/show_bug.cgi?id=692443
-# http://bugs.ghostscript.com/show_bug.cgi?id=692426
-CFLAGS += "-DHAVE_SYS_TIME_H=1"
-BUILD_CFLAGS += "-DHAVE_SYS_TIME_H=1"
-
-inherit autotools
-
-do_configure_prepend () {
-	mkdir -p obj
-	mkdir -p soobj
-	if [ -e ${WORKDIR}/objarch.h ]; then
-		cp ${WORKDIR}/objarch.h obj/arch.h
-	fi
-}
-
-do_configure_append () {
-	# copy tools from the native ghostscript build
-	if [ "${PN}" != "ghostscript-native" ]; then
-		mkdir -p obj/aux soobj
-		for i in genarch genconf mkromfs echogs gendev genht; do
-			cp ${STAGING_BINDIR_NATIVE}/ghostscript-${PV}/$i obj/aux/$i
-		done
-	fi
-}
-
-do_install_append () {
-    mkdir -p ${D}${datadir}/ghostscript/${PV}/
-    cp -r ${S}/Resource ${D}${datadir}/ghostscript/${PV}/
-    cp -r ${S}/iccprofiles ${D}${datadir}/ghostscript/${PV}/
-}
-
-do_compile_class-native () {
-    mkdir -p obj
-    for i in genarch genconf mkromfs echogs gendev genht; do
-        oe_runmake obj/aux/$i
-    done
-}
-
-do_install_class-native () {
-    install -d ${D}${bindir}/ghostscript-${PV}
-    for i in genarch genconf mkromfs echogs gendev genht; do
-        install -m 755 obj/aux/$i ${D}${bindir}/ghostscript-${PV}/$i
-    done
-}
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript_9.21.bb b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript_9.21.bb
new file mode 100644
index 0000000..bf985c4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/ghostscript/ghostscript_9.21.bb
@@ -0,0 +1,137 @@
+SUMMARY = "The GPL Ghostscript PostScript/PDF interpreter"
+DESCRIPTION = "Ghostscript is used for PostScript/PDF preview and printing.  Usually as \
+a back-end to a program such as ghostview, it can display PostScript and PDF \
+documents in an X11 environment. \
+\
+Furthermore, it can render PostScript and PDF files as graphics to be printed \
+on non-PostScript printers. Supported printers include common \
+dot-matrix, inkjet and laser models. \
+"
+HOMEPAGE = "http://www.ghostscript.com"
+SECTION = "console/utils"
+
+LICENSE = "GPLv3"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=70dc2bac4d0ce4448da873cd86b123fc"
+
+DEPENDS = "ghostscript-native tiff jpeg fontconfig cups libpng"
+DEPENDS_class-native = "libpng-native"
+
+UPSTREAM_CHECK_URI = "https://github.com/ArtifexSoftware/ghostpdl-downloads/releases"
+UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)\.tar"
+
+SRC_URI_BASE = "https://github.com/ArtifexSoftware/ghostpdl-downloads/releases/download/gs921/${BPN}-${PV}.tar.gz \
+                file://ghostscript-9.15-parallel-make.patch \
+                file://ghostscript-9.16-Werror-return-type.patch \
+                file://do-not-check-local-libpng-source.patch \
+                file://avoid-host-contamination.patch \
+                file://mkdir-p.patch \
+"
+
+SRC_URI = "${SRC_URI_BASE} \
+           file://ghostscript-9.21-prevent_recompiling.patch \
+           file://ghostscript-9.02-genarch.patch \
+           file://objarch.h \
+           file://cups-no-gcrypt.patch \
+           file://CVE-2016-7977.patch \
+           file://CVE-2017-7207.patch \
+           file://CVE-2017-5951.patch \
+           file://CVE-2017-7975.patch \
+           file://CVE-2017-9216.patch \
+           file://CVE-2017-9611.patch \
+           file://CVE-2017-9612.patch \
+           file://CVE-2017-9739.patch \
+           file://CVE-2017-9726.patch \
+           file://CVE-2017-9727.patch \
+           file://CVE-2017-9835.patch \
+           file://CVE-2017-11714.patch \
+           "
+
+SRC_URI_class-native = "${SRC_URI_BASE} \
+                        file://ghostscript-9.21-native-fix-disable-system-libtiff.patch \
+                        file://base-genht.c-add-a-preprocessor-define-to-allow-fope.patch \
+                        "
+
+SRC_URI[md5sum] = "5f213281761d2750fcf27476c404d17f"
+SRC_URI[sha256sum] = "02bceadbc4dddeb6f2eec9c8b1623d945d355ca11b8b4df035332b217d58ce85"
+
+# Put something like
+#
+#   PACKAGECONFIG_append_pn-ghostscript = " x11"
+#
+# in local.conf to enable building with X11.  Be careful.  The order
+# of the overrides matters!
+#
+#PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'x11', '', d)}"
+PACKAGECONFIG_class-native = ""
+
+PACKAGECONFIG[x11] = "--with-x --x-includes=${STAGING_INCDIR} --x-libraries=${STAGING_LIBDIR}, \
+                      --without-x, virtual/libx11 libxext libxt gtk+3\
+                      "
+
+EXTRA_OECONF = "--with-system-libtiff --without-jbig2dec \
+                --with-fontpath=${datadir}/fonts \
+                --without-libidn --with-cups-serverbin=${exec_prefix}/lib/cups \
+                --with-cups-datadir=${datadir}/cups \
+                CUPSCONFIG="${STAGING_BINDIR_CROSS}/cups-config" \
+                "
+
+EXTRA_OECONF_append_mipsarcho32 = " --with-large_color_index=0"
+
+# Explicitly disable libtiff, fontconfig,
+# freetype, cups for ghostscript-native
+EXTRA_OECONF_class-native = "--without-x --with-system-libtiff=no \
+                             --without-jbig2dec \
+                             --with-fontpath=${datadir}/fonts \
+                             --without-libidn --disable-fontconfig \
+                             --disable-freetype --disable-cups"
+
+# This has been fixed upstream but for now we need to subvert the check for time.h
+# http://bugs.ghostscript.com/show_bug.cgi?id=692443
+# http://bugs.ghostscript.com/show_bug.cgi?id=692426
+CFLAGS += "-DHAVE_SYS_TIME_H=1"
+BUILD_CFLAGS += "-DHAVE_SYS_TIME_H=1"
+
+inherit autotools
+
+do_configure_prepend () {
+	mkdir -p obj
+	mkdir -p soobj
+	if [ -e ${WORKDIR}/objarch.h ]; then
+		cp ${WORKDIR}/objarch.h obj/arch.h
+	fi
+}
+
+do_configure_append () {
+	# copy tools from the native ghostscript build
+	if [ "${PN}" != "ghostscript-native" ]; then
+		mkdir -p obj/aux soobj
+		for i in genarch genconf mkromfs echogs gendev genht; do
+			cp ${STAGING_BINDIR_NATIVE}/ghostscript-${PV}/$i obj/aux/$i
+		done
+	fi
+}
+
+do_install_append () {
+    mkdir -p ${D}${datadir}/ghostscript/${PV}/
+    cp -r ${S}/Resource ${D}${datadir}/ghostscript/${PV}/
+    cp -r ${S}/iccprofiles ${D}${datadir}/ghostscript/${PV}/
+}
+
+do_compile_class-native () {
+    mkdir -p obj
+    for i in genarch genconf mkromfs echogs gendev genht; do
+        oe_runmake obj/aux/$i
+    done
+}
+
+do_install_class-native () {
+    install -d ${D}${bindir}/ghostscript-${PV}
+    for i in genarch genconf mkromfs echogs gendev genht; do
+        install -m 755 obj/aux/$i ${D}${bindir}/ghostscript-${PV}/$i
+    done
+}
+
+BBCLASSEXTEND = "native"
+
+# ghostscript does not support "arc"
+COMPATIBLE_HOST = "^(?!arc).*"
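The X11 support in the new recipe above is driven through PACKAGECONFIG rather than hard-coded configure switches, as the commented hint inside the recipe describes. A minimal local.conf sketch along those lines (the pn- override name is the standard mechanism the comment refers to, nothing new to this change):

    # local.conf: enable X11 support for ghostscript only
    PACKAGECONFIG_append_pn-ghostscript = " x11"

With that set, the x11 option expands to --with-x plus the staged X11 include/library paths and adds virtual/libx11, libxext, libxt and gtk+3 to the build dependencies, per the PACKAGECONFIG[x11] line in the recipe.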
diff --git a/import-layers/yocto-poky/meta/recipes-extended/go-examples/files/helloworld.go b/import-layers/yocto-poky/meta/recipes-extended/go-examples/files/helloworld.go
deleted file mode 100644
index 0253c40..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/go-examples/files/helloworld.go
+++ /dev/null
@@ -1,10 +0,0 @@
-// You can edit this code!
-// Click here and start typing.
-// taken from https://golang.org/
-package main
-
-import "fmt"
-
-func main() {
-	fmt.Println("Hello, 世界")
-}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/go-examples/go-examples.inc b/import-layers/yocto-poky/meta/recipes-extended/go-examples/go-examples.inc
deleted file mode 100644
index c63268116..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/go-examples/go-examples.inc
+++ /dev/null
@@ -1,10 +0,0 @@
-DESCRIPTION = "This is a simple example recipe that cross-compiles a Go program."
-SECTION = "examples"
-HOMEPAGE = "https://golang.org/"
-
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
-
-S = "${WORKDIR}"
-
-inherit go
diff --git a/import-layers/yocto-poky/meta/recipes-extended/go-examples/go-helloworld_0.1.bb b/import-layers/yocto-poky/meta/recipes-extended/go-examples/go-helloworld_0.1.bb
index 930c57d..222fc9d 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/go-examples/go-helloworld_0.1.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/go-examples/go-helloworld_0.1.bb
@@ -1,13 +1,19 @@
-require go-examples.inc
+DESCRIPTION = "This is a simple example recipe that cross-compiles a Go program."
+SECTION = "examples"
+HOMEPAGE = "https://golang.org/"
 
-SRC_URI += " \
-  file://helloworld.go \
-"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
 
-do_compile() {
-	go build helloworld.go
-}
+SRC_URI = "git://${GO_IMPORT}"
+SRCREV = "46695d81d1fae905a270fb7db8a4d11a334562fe"
 
-do_install() {
-	install -D -m 0755 ${S}/helloworld ${D}${bindir}/helloworld
+GO_IMPORT = "github.com/golang/example"
+GO_INSTALL = "${GO_IMPORT}/hello"
+
+inherit go
+
+# This is just to make it clear where this example binary comes from
+do_install_append() {
+    mv ${D}${bindir}/hello ${D}${bindir}/${BPN}
 }
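The reworked go-helloworld recipe above shows the general pattern for packaging a Go program with the go class: GO_IMPORT names the import path (and doubles as the git fetch location), GO_INSTALL selects the package to build beneath it, and the class supplies the build and install tasks. A sketch of the same pattern for a hypothetical tool — the import path, SRCREV and binary name below are placeholders, not values taken from this change:

    LICENSE = "MIT"
    LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"

    # placeholder import path; also used as the git:// fetch URL
    GO_IMPORT = "example.com/myorg/mytool"
    # package under the import path whose binary should be built
    GO_INSTALL = "${GO_IMPORT}/cmd/mytool"

    SRC_URI = "git://${GO_IMPORT}"
    SRCREV = "<commit hash to pin>"

    inherit go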
diff --git a/import-layers/yocto-poky/meta/recipes-extended/gperf/gperf_3.0.4.bb b/import-layers/yocto-poky/meta/recipes-extended/gperf/gperf_3.0.4.bb
deleted file mode 100644
index 64003fc..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/gperf/gperf_3.0.4.bb
+++ /dev/null
@@ -1,5 +0,0 @@
-require gperf.inc
-
-
-SRC_URI[md5sum] = "c1f1db32fb6598d6a93e6e88796a8632"
-SRC_URI[sha256sum] = "767112a204407e62dbc3106647cf839ed544f3cf5d0f0523aaa2508623aad63e"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/gperf/gperf_3.1.bb b/import-layers/yocto-poky/meta/recipes-extended/gperf/gperf_3.1.bb
new file mode 100644
index 0000000..942820b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/gperf/gperf_3.1.bb
@@ -0,0 +1,5 @@
+require gperf.inc
+
+
+SRC_URI[md5sum] = "9e251c0a618ad0824b51117d5d9db87e"
+SRC_URI[sha256sum] = "588546b945bba4b70b6a3a616e80b4ab466e3f33024a352fc2198112cdbb3ae2"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/grep/grep_3.0.bb b/import-layers/yocto-poky/meta/recipes-extended/grep/grep_3.0.bb
deleted file mode 100644
index b2940d5..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/grep/grep_3.0.bb
+++ /dev/null
@@ -1,46 +0,0 @@
-SUMMARY = "GNU grep utility"
-HOMEPAGE = "http://savannah.gnu.org/projects/grep/"
-BUGTRACKER = "http://savannah.gnu.org/bugs/?group=grep"
-SECTION = "console/utils"
-LICENSE = "GPLv3"
-LIC_FILES_CHKSUM = "file://COPYING;md5=8006d9c814277c1bfc4ca22af94b59ee"
-
-SRC_URI = "${GNU_MIRROR}/grep/grep-${PV}.tar.xz \
-           file://0001-Unset-need_charset_alias-when-building-for-musl.patch \
-          "
-
-SRC_URI[md5sum] = "fa07c1616adeb9c3262be5177d10ad4a"
-SRC_URI[sha256sum] = "e2c81db5056e3e8c5995f0bb5d0d0e1cad1f6f45c3b2fc77b6e81435aed48ab5"
-
-inherit autotools gettext texinfo pkgconfig
-
-EXTRA_OECONF = "--disable-perl-regexp"
-
-# Fix "Argument list too long" error when len(TMPDIR) = 410
-acpaths = "-I ./m4"
-
-do_configure_prepend () {
-	rm -f ${S}/m4/init.m4
-}
-
-do_install () {
-	autotools_do_install
-	if [ "${base_bindir}" != "${bindir}" ]; then
-		install -d ${D}${base_bindir}
-		mv ${D}${bindir}/grep ${D}${base_bindir}/grep
-		mv ${D}${bindir}/egrep ${D}${base_bindir}/egrep
-		mv ${D}${bindir}/fgrep ${D}${base_bindir}/fgrep
-		rmdir ${D}${bindir}/
-	fi
-}
-
-inherit update-alternatives
-
-ALTERNATIVE_PRIORITY = "100"
-
-ALTERNATIVE_${PN} = "grep egrep fgrep"
-ALTERNATIVE_LINK_NAME[grep] = "${base_bindir}/grep"
-ALTERNATIVE_LINK_NAME[egrep] = "${base_bindir}/egrep"
-ALTERNATIVE_LINK_NAME[fgrep] = "${base_bindir}/fgrep"
-
-export CONFIG_SHELL="/bin/sh"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/grep/grep_3.1.bb b/import-layers/yocto-poky/meta/recipes-extended/grep/grep_3.1.bb
new file mode 100644
index 0000000..05b6b93
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/grep/grep_3.1.bb
@@ -0,0 +1,46 @@
+SUMMARY = "GNU grep utility"
+HOMEPAGE = "http://savannah.gnu.org/projects/grep/"
+BUGTRACKER = "http://savannah.gnu.org/bugs/?group=grep"
+SECTION = "console/utils"
+LICENSE = "GPLv3"
+LIC_FILES_CHKSUM = "file://COPYING;md5=8006d9c814277c1bfc4ca22af94b59ee"
+
+SRC_URI = "${GNU_MIRROR}/grep/grep-${PV}.tar.xz \
+           file://0001-Unset-need_charset_alias-when-building-for-musl.patch \
+          "
+
+SRC_URI[md5sum] = "feca7b3e7c7f4aab2b42ecbfc513b070"
+SRC_URI[sha256sum] = "db625c7ab3bb3ee757b3926a5cfa8d9e1c3991ad24707a83dde8a5ef2bf7a07e"
+
+inherit autotools gettext texinfo pkgconfig
+
+EXTRA_OECONF = "--disable-perl-regexp"
+
+# Fix "Argument list too long" error when len(TMPDIR) = 410
+acpaths = "-I ./m4"
+
+do_configure_prepend () {
+	rm -f ${S}/m4/init.m4
+}
+
+do_install () {
+	autotools_do_install
+	if [ "${base_bindir}" != "${bindir}" ]; then
+		install -d ${D}${base_bindir}
+		mv ${D}${bindir}/grep ${D}${base_bindir}/grep
+		mv ${D}${bindir}/egrep ${D}${base_bindir}/egrep
+		mv ${D}${bindir}/fgrep ${D}${base_bindir}/fgrep
+		rmdir ${D}${bindir}/
+	fi
+}
+
+inherit update-alternatives
+
+ALTERNATIVE_PRIORITY = "100"
+
+ALTERNATIVE_${PN} = "grep egrep fgrep"
+ALTERNATIVE_LINK_NAME[grep] = "${base_bindir}/grep"
+ALTERNATIVE_LINK_NAME[egrep] = "${base_bindir}/egrep"
+ALTERNATIVE_LINK_NAME[fgrep] = "${base_bindir}/fgrep"
+
+export CONFIG_SHELL="/bin/sh"
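Apart from the new upstream checksums, the 3.1 recipe is functionally identical to the 3.0 one it replaces: grep, egrep and fgrep are moved into ${base_bindir} and registered through update-alternatives at priority 100, so they take the links over lower-priority providers such as the busybox applets. A quick sanity check on a running image (the exact link targets depend on the alternatives backend used by the image):

    ls -l /bin/grep /bin/egrep /bin/fgrep
    # each should be an alternatives-managed symlink, typically pointing at the
    # renamed GNU binary (e.g. grep.grep) rather than busybox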
diff --git a/import-layers/yocto-poky/meta/recipes-extended/gzip/files/0001-gzip-port-zdiff-zless-to-Busybox.patch b/import-layers/yocto-poky/meta/recipes-extended/gzip/files/0001-gzip-port-zdiff-zless-to-Busybox.patch
new file mode 100644
index 0000000..20d5a19
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/gzip/files/0001-gzip-port-zdiff-zless-to-Busybox.patch
@@ -0,0 +1,59 @@
+From 5f712621829ed81a758077431226a86df37fbc3b Mon Sep 17 00:00:00 2001
+From: Denys Zagorui <denys.zagorui@globallogic.com>
+Date: Thu, 8 Jun 2017 16:05:50 +0300
+Subject: [PATCH] gzip: port zdiff, zless to Busybox
+
+Problem reported by Denys Zagorui (Bug#26088).
+* tests/zdiff: Check that diff uses POSIX-format output.
+* zless.in (less_version): Don't exit merely because 'less -V'
+fails; instead, assume 'less' is compatible with an old version of
+the original 'less'.  Busybox 'less -V' fails, but apparently its
+'less' works anyway somehow.
+
+Signed-off-by: Denys Zagorui <denys.zagorui@globallogic.com>
+
+Upstream-Status: Accepted
+---
+ tests/zdiff | 4 +++-
+ zless.in    | 2 +-
+ 2 files changed, 4 insertions(+), 2 deletions(-)
+
+diff --git a/tests/zdiff b/tests/zdiff
+index 0bb7c7d..9cd4fd4 100755
+--- a/tests/zdiff
++++ b/tests/zdiff
+@@ -22,7 +22,6 @@
+ 
+ echo a > a || framework_failure_
+ echo b > b || framework_failure_
+-gzip a b || framework_failure_
+ 
+ cat <<EOF > exp
+ 1c1
+@@ -31,7 +30,10 @@ cat <<EOF > exp
+ > b
+ EOF
+ 
++diff a b | diff exp - || skip_ "diff output format is incompatible with POSIX"
++
+ fail=0
++gzip a b || fail=1
+ zdiff a.gz b.gz > out 2>&1
+ test $? = 1 || fail=1
+ 
+diff --git a/zless.in b/zless.in
+index e634af6..9759ae6 100644
+--- a/zless.in
++++ b/zless.in
+@@ -47,7 +47,7 @@ if test "${LESSMETACHARS+set}" != set; then
+   export LESSMETACHARS
+ fi
+ 
+-less_version=`less -V` || exit
++less_version=`less -V 2>/dev/null`
+ case $less_version in
+ less' '45[1-9]* | \
+ less' '4[6-9][0-9]* | \
+-- 
+1.9.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/gzip/files/run-ptest b/import-layers/yocto-poky/meta/recipes-extended/gzip/files/run-ptest
new file mode 100644
index 0000000..cf7c649
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/gzip/files/run-ptest
@@ -0,0 +1,6 @@
+#!/bin/sh
+
+cd src/tests
+
+make check
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/gzip/gzip_1.8.bb b/import-layers/yocto-poky/meta/recipes-extended/gzip/gzip_1.8.bb
index 11be846..d093207 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/gzip/gzip_1.8.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/gzip/gzip_1.8.bb
@@ -2,7 +2,9 @@
 
 LICENSE = "GPLv3+"
 
-SRC_URI = "${GNU_MIRROR}/gzip/${BP}.tar.gz"
+SRC_URI = "${GNU_MIRROR}/gzip/${BP}.tar.gz \
+            file://0001-gzip-port-zdiff-zless-to-Busybox.patch \
+            file://run-ptest"
 SRC_URI_append_class-target = " file://wrong-path-fix.patch"
 
 LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504 \
@@ -12,6 +14,26 @@
 
 BBCLASSEXTEND = "native"
 
+inherit ptest
+
+do_install_ptest() {
+	mkdir -p ${D}${PTEST_PATH}/src/build-aux
+	cp ${S}/build-aux/test-driver ${D}${PTEST_PATH}/src/build-aux/
+	mkdir -p ${D}${PTEST_PATH}/src/tests
+	cp -r ${S}/tests/* ${D}${PTEST_PATH}/src/tests
+	sed -e 's/^abs_srcdir = ..*/abs_srcdir = \.\./' \
+            -e 's/^top_srcdir = ..*/top_srcdir = \.\./' \
+            -e 's/^GREP = ..*/GREP = grep/'             \
+            -e 's/^AWK = ..*/AWK = awk/'                \
+            -e 's/^srcdir = ..*/srcdir = \./'           \
+            -e 's/^Makefile: ..*/Makefile: /'           \
+            -e 's,--sysroot=${STAGING_DIR_TARGET},,g'   \
+            -e 's|${DEBUG_PREFIX_MAP}||g' \
+            -e 's:${HOSTTOOLS_DIR}/::g'                 \
+            -e 's:${BASE_WORKDIR}/${MULTIMACH_TARGET_SYS}::g' \
+            ${B}/tests/Makefile > ${D}${PTEST_PATH}/src/tests/Makefile
+}
+
 SRC_URI[md5sum] = "732553152814b22dc35aa0267df5286c"
 SRC_URI[sha256sum] = "1ff7aedb3d66a0d73f442f6261e4b3860df6fd6c94025c2cb31a202c9c60fe0e"
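With the ptest class inherited above, do_install_ptest stages the test suite plus a rewritten tests/Makefile under ${PTEST_PATH}, and the new run-ptest script simply runs make check from there. One way to exercise it, using only the standard ptest hooks rather than anything introduced by this change:

    # local.conf: enable ptest and install all ptest packages into the image
    DISTRO_FEATURES_append = " ptest"
    EXTRA_IMAGE_FEATURES += "ptest-pkgs"

    # on the target (PTEST_PATH defaults to /usr/lib/<recipe>/ptest)
    cd /usr/lib/gzip/ptest && ./run-ptest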
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/hdparm/hdparm_9.51.bb b/import-layers/yocto-poky/meta/recipes-extended/hdparm/hdparm_9.51.bb
deleted file mode 100644
index fa00927..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/hdparm/hdparm_9.51.bb
+++ /dev/null
@@ -1,40 +0,0 @@
-SUMMARY = "Utility for viewing/manipulating IDE disk drive/driver parameters"
-DESCRIPTION = "hdparm is a Linux shell utility for viewing \
-and manipulating various IDE drive and driver parameters."
-SECTION = "console/utils"
-
-LICENSE = "BSD & GPLv2"
-LICENSE_${PN} = "BSD"
-LICENSE_${PN}-dbg = "BSD"
-LICENSE_wiper = "GPLv2"
-
-LIC_FILES_CHKSUM = "file://LICENSE.TXT;md5=910a8a42c962d238619c75fdb78bdb24 \
-                    file://debian/copyright;md5=a82d7ba3ade9e8ec902749db98c592f3 \
-                    file://wiper/GPLv2.txt;md5=fcb02dc552a041dee27e4b85c7396067 \
-                    file://wiper/wiper.sh;beginline=7;endline=31;md5=b7bc642addc152ea307505bf1a296f09"
-
-
-PACKAGES =+ "wiper"
-
-FILES_wiper = "${bindir}/wiper.sh"
-
-RDEPENDS_wiper = "bash gawk stat"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/hdparm/${BP}.tar.gz"
-
-SRC_URI[md5sum] = "8fe0a71db02f7ffc602d14a69f766cff"
-SRC_URI[sha256sum] = "1afad8891ecbe644c283f7d725157660ebf8bd5b4d9d67232afd45f83d2d5d91"
-
-EXTRA_OEMAKE = 'STRIP="echo" LDFLAGS="${LDFLAGS}"'
-
-inherit update-alternatives
-
-ALTERNATIVE_${PN} = "hdparm"
-ALTERNATIVE_LINK_NAME[hdparm] = "${base_sbindir}/hdparm"
-ALTERNATIVE_PRIORITY = "100"
-
-do_install () {
-	install -d ${D}/${base_sbindir} ${D}/${mandir}/man8 ${D}/${bindir}
-	oe_runmake 'DESTDIR=${D}' 'sbindir=${base_sbindir}' install
-	cp ${S}/wiper/wiper.sh ${D}/${bindir}
-}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/hdparm/hdparm_9.52.bb b/import-layers/yocto-poky/meta/recipes-extended/hdparm/hdparm_9.52.bb
new file mode 100644
index 0000000..49fdc94
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/hdparm/hdparm_9.52.bb
@@ -0,0 +1,41 @@
+SUMMARY = "Utility for viewing/manipulating IDE disk drive/driver parameters"
+HOMEPAGE = "http://sourceforge.net/projects/hdparm/"
+DESCRIPTION = "hdparm is a Linux shell utility for viewing \
+and manipulating various IDE drive and driver parameters."
+SECTION = "console/utils"
+
+LICENSE = "BSD & GPLv2"
+LICENSE_${PN} = "BSD"
+LICENSE_${PN}-dbg = "BSD"
+LICENSE_wiper = "GPLv2"
+
+LIC_FILES_CHKSUM = "file://LICENSE.TXT;md5=495d03e50dc6c89d6a30107ab0df5b03 \
+                    file://debian/copyright;md5=a82d7ba3ade9e8ec902749db98c592f3 \
+                    file://wiper/GPLv2.txt;md5=fcb02dc552a041dee27e4b85c7396067 \
+                    file://wiper/wiper.sh;beginline=7;endline=31;md5=b7bc642addc152ea307505bf1a296f09"
+
+
+PACKAGES =+ "wiper"
+
+FILES_wiper = "${bindir}/wiper.sh"
+
+RDEPENDS_wiper = "bash gawk stat"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/hdparm/${BP}.tar.gz"
+
+SRC_URI[md5sum] = "410539d0bf3cc247181594581edbfb53"
+SRC_URI[sha256sum] = "c3429cd423e271fa565bf584598fd751dd2e773bb7199a592b06b5a61cec4fb6"
+
+EXTRA_OEMAKE = 'STRIP="echo" LDFLAGS="${LDFLAGS}"'
+
+inherit update-alternatives
+
+ALTERNATIVE_${PN} = "hdparm"
+ALTERNATIVE_LINK_NAME[hdparm] = "${base_sbindir}/hdparm"
+ALTERNATIVE_PRIORITY = "100"
+
+do_install () {
+	install -d ${D}/${base_sbindir} ${D}/${mandir}/man8 ${D}/${bindir}
+	oe_runmake 'DESTDIR=${D}' 'sbindir=${base_sbindir}' install
+	cp ${S}/wiper/wiper.sh ${D}/${bindir}
+}
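The 9.52 recipe keeps the 9.51 packaging layout: wiper.sh is split out into its own wiper package with bash/gawk/stat runtime dependencies, so it only lands in images that explicitly request it. A sketch of pulling both in from local.conf or an image recipe (package names come straight from the PACKAGES/FILES lines above):

    IMAGE_INSTALL_append = " hdparm wiper"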
diff --git a/import-layers/yocto-poky/meta/recipes-extended/iputils/files/arping-fix-arping-hang-if-SIGALRM-is-blocked.patch b/import-layers/yocto-poky/meta/recipes-extended/iputils/files/arping-fix-arping-hang-if-SIGALRM-is-blocked.patch
new file mode 100644
index 0000000..7b56276
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/iputils/files/arping-fix-arping-hang-if-SIGALRM-is-blocked.patch
@@ -0,0 +1,44 @@
+arping: fix arping hang if SIGALRM is blocked
+
+Unblock SIGALRM so that the previously called alarm() can prevent
+recvfrom() from blocking forever in case the inherited procmask is
+blocking SIGALRM and no packet is received.
+
+Upstream-Status: Backport
+
+Reported-by: Rui Prior <rprior@dcc.fc.up.pt>
+RH-Bugzilla: #1085971
+Signed-off-by: Jan Synacek <jsynacek@redhat.com>
+Signed-off-by: Zhenbo Gao <zhenbo.gao@windriver.com>
+
+diff --git a/arping.c.orig b/arping.c
+index 35408c1..2098159 100644
+--- a/arping.c.orig
++++ b/arping.c
+@@ -1215,16 +1215,22 @@ main(int argc, char **argv)
+ 		socklen_t alen = sizeof(from);
+ 		int cc;
+ 
++		sigemptyset(&sset);
++		sigaddset(&sset, SIGALRM);
++		sigaddset(&sset, SIGINT);
++		/* Unblock SIGALRM so that the previously called alarm()
++		 * can prevent recvfrom from blocking forever in case the
++		 * inherited procmask is blocking SIGALRM and no packet
++		 * is received. */
++		sigprocmask(SIG_UNBLOCK, &sset, &osset);
++
+ 		if ((cc = recvfrom(s, packet, sizeof(packet), 0,
+ 				   (struct sockaddr *)&from, &alen)) < 0) {
+ 			perror("arping: recvfrom");
+ 			continue;
+ 		}
+ 
+-		sigemptyset(&sset);
+-		sigaddset(&sset, SIGALRM);
+-		sigaddset(&sset, SIGINT);
+-		sigprocmask(SIG_BLOCK, &sset, &osset);
++		sigprocmask(SIG_BLOCK, &sset, NULL);
+ 		recv_pack(packet, cc, (struct sockaddr_ll *)&from);
+ 		sigprocmask(SIG_SETMASK, &osset, NULL);
+ 	}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/iputils/iputils_s20151218.bb b/import-layers/yocto-poky/meta/recipes-extended/iputils/iputils_s20151218.bb
index 0d4dd1b..46de6fc 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/iputils/iputils_s20151218.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/iputils/iputils_s20151218.bb
@@ -20,6 +20,7 @@
            file://nsgmls-path-fix.patch \
            file://0001-Fix-header-inclusion-for-musl.patch \
            file://0001-Intialize-struct-elements-by-name.patch \
+           file://arping-fix-arping-hang-if-SIGALRM-is-blocked.patch \
           "
 
 SRC_URI[md5sum] = "8aaa7395f27dff9f57ae016d4bc753ce"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libarchive/files/0001-archive_write_disk_posix.c-make-_fsobj-functions-mor.patch b/import-layers/yocto-poky/meta/recipes-extended/libarchive/files/0001-archive_write_disk_posix.c-make-_fsobj-functions-mor.patch
deleted file mode 100644
index e911a7c..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/libarchive/files/0001-archive_write_disk_posix.c-make-_fsobj-functions-mor.patch
+++ /dev/null
@@ -1,245 +0,0 @@
-From 90881d24d3f6d5fb207e97df3b91bbea8598e84e Mon Sep 17 00:00:00 2001
-From: Martin Matuska <martin@matuska.org>
-Date: Tue, 29 Nov 2016 16:47:37 +0100
-Subject: [PATCH 1/2] archive_write_disk_posix.c: make *_fsobj functions more
- readable
-
-Upstream-Status: Backported
-
-Signed-off-by: Amarnath Valluri <amarnath.valluri@intel.com>
----
- libarchive/archive_write_disk_posix.c | 121 +++++++++++++++++-----------------
- 1 file changed, 61 insertions(+), 60 deletions(-)
-
-diff --git a/libarchive/archive_write_disk_posix.c b/libarchive/archive_write_disk_posix.c
-index 17c23b0..d786bc2 100644
---- a/libarchive/archive_write_disk_posix.c
-+++ b/libarchive/archive_write_disk_posix.c
-@@ -336,6 +336,8 @@ struct archive_write_disk {
- 
- #define HFS_BLOCKS(s)	((s) >> 12)
- 
-+static void	fsobj_error(int *, struct archive_string *, int, const char *,
-+		    const char *);
- static int	check_symlinks_fsobj(char *path, int *error_number, struct archive_string *error_string, int flags);
- static int	check_symlinks(struct archive_write_disk *);
- static int	create_filesystem_object(struct archive_write_disk *);
-@@ -2005,8 +2007,9 @@ restore_entry(struct archive_write_disk *a)
- 
- 	if (en) {
- 		/* Everything failed; give up here. */
--		archive_set_error(&a->archive, en, "Can't create '%s'",
--		    a->name);
-+		if ((&a->archive)->error == NULL)
-+			archive_set_error(&a->archive, en, "Can't create '%s'",
-+			    a->name);
- 		return (ARCHIVE_FAILED);
- 	}
- 
-@@ -2388,6 +2391,17 @@ current_fixup(struct archive_write_disk *a, const char *pathname)
- 	return (a->current_fixup);
- }
- 
-+/* Error helper for new *_fsobj functions */
-+static void
-+fsobj_error(int *a_eno, struct archive_string *a_estr,
-+    int err, const char *errstr, const char *path)
-+{
-+	if (a_eno)
-+		*a_eno = err;
-+	if (a_estr)
-+		archive_string_sprintf(a_estr, errstr, path);
-+}
-+
- /*
-  * TODO: Someday, integrate this with the deep dir support; they both
-  * scan the path and both can be optimized by comparing against other
-@@ -2400,7 +2414,7 @@ current_fixup(struct archive_write_disk *a, const char *pathname)
-  * ARCHIVE_OK if there are none, otherwise puts an error in errmsg.
-  */
- static int
--check_symlinks_fsobj(char *path, int *error_number, struct archive_string *error_string, int flags)
-+check_symlinks_fsobj(char *path, int *a_eno, struct archive_string *a_estr, int flags)
- {
- #if !defined(HAVE_LSTAT)
- 	/* Platform doesn't have lstat, so we can't look for symlinks. */
-@@ -2474,19 +2488,20 @@ check_symlinks_fsobj(char *path, int *error_number, struct archive_string *error
- 			if (errno == ENOENT) {
- 				break;
- 			} else {
--				/* Treat any other error as fatal - best to be paranoid here
--				 * Note: This effectively disables deep directory
--				 * support when security checks are enabled.
--				 * Otherwise, very long pathnames that trigger
--				 * an error here could evade the sandbox.
--				 * TODO: We could do better, but it would probably
--				 * require merging the symlink checks with the
--				 * deep-directory editing. */
--				if (error_number) *error_number = errno;
--				if (error_string)
--					archive_string_sprintf(error_string,
--							"Could not stat %s",
--							path);
-+				/*
-+				 * Treat any other error as fatal - best to be
-+				 * paranoid here.
-+				 * Note: This effectively disables deep
-+				 * directory support when security checks are
-+				 * enabled. Otherwise, very long pathnames that
-+				 * trigger an error here could evade the
-+				 * sandbox.
-+				 * TODO: We could do better, but it would
-+				 * probably require merging the symlink checks
-+				 * with the deep-directory editing.
-+				 */
-+				fsobj_error(a_eno, a_estr, errno,
-+				    "Could not stat %s", path);
- 				res = ARCHIVE_FAILED;
- 				break;
- 			}
-@@ -2494,11 +2509,8 @@ check_symlinks_fsobj(char *path, int *error_number, struct archive_string *error
- 			if (!last) {
- 				if (chdir(head) != 0) {
- 					tail[0] = c;
--					if (error_number) *error_number = errno;
--					if (error_string)
--						archive_string_sprintf(error_string,
--								"Could not chdir %s",
--								path);
-+					fsobj_error(a_eno, a_estr, errno,
-+					    "Could not chdir %s", path);
- 					res = (ARCHIVE_FATAL);
- 					break;
- 				}
-@@ -2514,11 +2526,9 @@ check_symlinks_fsobj(char *path, int *error_number, struct archive_string *error
- 				 */
- 				if (unlink(head)) {
- 					tail[0] = c;
--					if (error_number) *error_number = errno;
--					if (error_string)
--						archive_string_sprintf(error_string,
--								"Could not remove symlink %s",
--								path);
-+					fsobj_error(a_eno, a_estr, errno,
-+					    "Could not remove symlink %s",
-+					    path);
- 					res = ARCHIVE_FAILED;
- 					break;
- 				}
-@@ -2529,13 +2539,14 @@ check_symlinks_fsobj(char *path, int *error_number, struct archive_string *error
- 				 * symlink with another symlink.
- 				 */
- 				tail[0] = c;
--				/* FIXME:  not sure how important this is to restore
-+				/*
-+				 * FIXME:  not sure how important this is to
-+				 * restore
-+				 */
-+				/*
- 				if (!S_ISLNK(path)) {
--					if (error_number) *error_number = 0;
--					if (error_string)
--						archive_string_sprintf(error_string,
--								"Removing symlink %s",
--								path);
-+					fsobj_error(a_eno, a_estr, 0,
-+					    "Removing symlink %s", path);
- 				}
- 				*/
- 				/* Symlink gone.  No more problem! */
-@@ -2545,22 +2556,17 @@ check_symlinks_fsobj(char *path, int *error_number, struct archive_string *error
- 				/* User asked us to remove problems. */
- 				if (unlink(head) != 0) {
- 					tail[0] = c;
--					if (error_number) *error_number = 0;
--					if (error_string)
--						archive_string_sprintf(error_string,
--								"Cannot remove intervening symlink %s",
--								path);
-+					fsobj_error(a_eno, a_estr, 0,
-+					    "Cannot remove intervening "
-+					    "symlink %s", path);
- 					res = ARCHIVE_FAILED;
- 					break;
- 				}
- 				tail[0] = c;
- 			} else {
- 				tail[0] = c;
--				if (error_number) *error_number = 0;
--				if (error_string)
--					archive_string_sprintf(error_string,
--							"Cannot extract through symlink %s",
--							path);
-+				fsobj_error(a_eno, a_estr, 0,
-+				    "Cannot extract through symlink %s", path);
- 				res = ARCHIVE_FAILED;
- 				break;
- 			}
-@@ -2577,10 +2583,8 @@ check_symlinks_fsobj(char *path, int *error_number, struct archive_string *error
- 	if (restore_pwd >= 0) {
- 		r = fchdir(restore_pwd);
- 		if (r != 0) {
--			if(error_number) *error_number = errno;
--			if(error_string)
--				archive_string_sprintf(error_string,
--						"chdir() failure");
-+			fsobj_error(a_eno, a_estr, errno,
-+			    "chdir() failure", "");
- 		}
- 		close(restore_pwd);
- 		restore_pwd = -1;
-@@ -2688,17 +2692,16 @@ cleanup_pathname_win(struct archive_write_disk *a)
-  * is set) if the path is absolute.
-  */
- static int
--cleanup_pathname_fsobj(char *path, int *error_number, struct archive_string *error_string, int flags)
-+cleanup_pathname_fsobj(char *path, int *a_eno, struct archive_string *a_estr,
-+    int flags)
- {
- 	char *dest, *src;
- 	char separator = '\0';
- 
- 	dest = src = path;
- 	if (*src == '\0') {
--		if (error_number) *error_number = ARCHIVE_ERRNO_MISC;
--		if (error_string)
--		    archive_string_sprintf(error_string,
--			    "Invalid empty pathname");
-+		fsobj_error(a_eno, a_estr, ARCHIVE_ERRNO_MISC,
-+		    "Invalid empty ", "pathname");
- 		return (ARCHIVE_FAILED);
- 	}
- 
-@@ -2708,10 +2711,8 @@ cleanup_pathname_fsobj(char *path, int *error_number, struct archive_string *err
- 	/* Skip leading '/'. */
- 	if (*src == '/') {
- 		if (flags & ARCHIVE_EXTRACT_SECURE_NOABSOLUTEPATHS) {
--			if (error_number) *error_number = ARCHIVE_ERRNO_MISC;
--			if (error_string)
--			    archive_string_sprintf(error_string,
--				    "Path is absolute");
-+			fsobj_error(a_eno, a_estr, ARCHIVE_ERRNO_MISC,
-+			    "Path is ", "absolute");
- 			return (ARCHIVE_FAILED);
- 		}
- 
-@@ -2738,11 +2739,11 @@ cleanup_pathname_fsobj(char *path, int *error_number, struct archive_string *err
- 			} else if (src[1] == '.') {
- 				if (src[2] == '/' || src[2] == '\0') {
- 					/* Conditionally warn about '..' */
--					if (flags & ARCHIVE_EXTRACT_SECURE_NODOTDOT) {
--						if (error_number) *error_number = ARCHIVE_ERRNO_MISC;
--						if (error_string)
--						    archive_string_sprintf(error_string,
--							    "Path contains '..'");
-+					if (flags
-+					    & ARCHIVE_EXTRACT_SECURE_NODOTDOT) {
-+						fsobj_error(a_eno, a_estr,
-+						    ARCHIVE_ERRNO_MISC,
-+						    "Path contains ", "'..'");
- 						return (ARCHIVE_FAILED);
- 					}
- 				}
--- 
-2.7.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libarchive/files/0002-Fix-extracting-hardlinks-over-symlinks.patch b/import-layers/yocto-poky/meta/recipes-extended/libarchive/files/0002-Fix-extracting-hardlinks-over-symlinks.patch
deleted file mode 100644
index 3741863..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/libarchive/files/0002-Fix-extracting-hardlinks-over-symlinks.patch
+++ /dev/null
@@ -1,120 +0,0 @@
-From ece28103885a079a129a23c5001252a1648517af Mon Sep 17 00:00:00 2001
-From: Martin Matuska <martin@matuska.org>
-Date: Tue, 29 Nov 2016 16:55:41 +0100
-Subject: [PATCH 2/2] Fix extracting hardlinks over symlinks
-
-Closes #821
-
-Upstream-Status: Backported
-
-Signed-off-by: Amarnath Valluri <amarnath.valluri@intel.com>
----
- libarchive/archive_write_disk_posix.c | 43 +++++++++++++++++++++++++++++++++++
- tar/test/test_symlink_dir.c           | 18 ++++++++++++++-
- 2 files changed, 60 insertions(+), 1 deletion(-)
-
-diff --git a/libarchive/archive_write_disk_posix.c b/libarchive/archive_write_disk_posix.c
-index d786bc2..80b03cd 100644
---- a/libarchive/archive_write_disk_posix.c
-+++ b/libarchive/archive_write_disk_posix.c
-@@ -2563,6 +2563,49 @@ check_symlinks_fsobj(char *path, int *a_eno, struct archive_string *a_estr, int
- 					break;
- 				}
- 				tail[0] = c;
-+			} else if ((flags &
-+			    ARCHIVE_EXTRACT_SECURE_SYMLINKS) == 0) {
-+				/*
-+				 * We are not the last element and we want to
-+				 * follow symlinks if they are a directory.
-+				 * 
-+				 * This is needed to extract hardlinks over
-+				 * symlinks.
-+				 */
-+				r = stat(head, &st);
-+				if (r != 0) {
-+					tail[0] = c;
-+					if (errno == ENOENT) {
-+						break;
-+					} else {
-+						fsobj_error(a_eno, a_estr,
-+						    errno,
-+						    "Could not stat %s", path);
-+						res = (ARCHIVE_FAILED);
-+						break;
-+					}
-+				} else if (S_ISDIR(st.st_mode)) {
-+					if (chdir(head) != 0) {
-+						tail[0] = c;
-+						fsobj_error(a_eno, a_estr,
-+						    errno,
-+						    "Could not chdir %s", path);
-+						res = (ARCHIVE_FATAL);
-+						break;
-+					}
-+					/*
-+					 * Our view is now from inside
-+					 * this dir:
-+					 */
-+					head = tail + 1;
-+				} else {
-+					tail[0] = c;
-+					fsobj_error(a_eno, a_estr, 0,
-+					    "Cannot extract through "
-+					    "symlink %s", path);
-+					res = ARCHIVE_FAILED;
-+					break;
-+				}
- 			} else {
- 				tail[0] = c;
- 				fsobj_error(a_eno, a_estr, 0,
-diff --git a/tar/test/test_symlink_dir.c b/tar/test/test_symlink_dir.c
-index 25bd8b1..852e00b 100644
---- a/tar/test/test_symlink_dir.c
-+++ b/tar/test/test_symlink_dir.c
-@@ -47,11 +47,18 @@ DEFINE_TEST(test_symlink_dir)
- 	assertMakeDir("source/dir3", 0755);
- 	assertMakeDir("source/dir3/d3", 0755);
- 	assertMakeFile("source/dir3/f3", 0755, "abcde");
-+	assertMakeDir("source/dir4", 0755);
-+	assertMakeFile("source/dir4/file3", 0755, "abcdef");
-+	assertMakeHardlink("source/dir4/file4", "source/dir4/file3");
- 
- 	assertEqualInt(0,
- 	    systemf("%s -cf test.tar -C source dir dir2 dir3 file file2",
- 		testprog));
- 
-+	/* Second archive with hardlinks */
-+	assertEqualInt(0,
-+	    systemf("%s -cf test2.tar -C source dir4", testprog));
-+
- 	/*
- 	 * Extract with -x and without -P.
- 	 */
-@@ -118,9 +125,15 @@ DEFINE_TEST(test_symlink_dir)
- 		assertMakeSymlink("dest2/file2", "real_file2");
- 	assertEqualInt(0, systemf("%s -xPf test.tar -C dest2", testprog));
- 
--	/* dest2/dir symlink should be followed */
-+	/* "dir4" is a symlink to existing "real_dir" */
-+	if (canSymlink())
-+		assertMakeSymlink("dest2/dir4", "real_dir");
-+	assertEqualInt(0, systemf("%s -xPf test2.tar -C dest2", testprog));
-+
-+	/* dest2/dir and dest2/dir4 symlinks should be followed */
- 	if (canSymlink()) {
- 		assertIsSymlink("dest2/dir", "real_dir");
-+		assertIsSymlink("dest2/dir4", "real_dir");
- 		assertIsDir("dest2/real_dir", -1);
- 	}
- 
-@@ -141,4 +154,7 @@ DEFINE_TEST(test_symlink_dir)
- 	/* dest2/file2 symlink should be removed */
- 	failure("Symlink to non-existing file should be removed");
- 	assertIsReg("dest2/file2", -1);
-+
-+	/* dest2/dir4/file3 and dest2/dir4/file4 should be hard links */
-+	assertIsHardlink("dest2/dir4/file3", "dest2/dir4/file4");
- }
--- 
-2.7.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libarchive/files/non-recursive-extract-and-list.patch b/import-layers/yocto-poky/meta/recipes-extended/libarchive/files/non-recursive-extract-and-list.patch
deleted file mode 100644
index 61f8f3e..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/libarchive/files/non-recursive-extract-and-list.patch
+++ /dev/null
@@ -1,153 +0,0 @@
-From d6271709d2deb980804f92e75f9b5cb600dc42ed Mon Sep 17 00:00:00 2001
-From: Patrick Ohly <patrick.ohly@intel.com>
-Date: Mon, 24 Oct 2016 12:54:48 +0200
-Subject: [PATCH 1/2] non-recursive extract and list
-
-Sometimes it makes sense to extract or list a directory contained in
-an archive without also doing the same for the content of the
-directory, i.e. allowing -n (= --no-recursion) in combination with the
-x and t modes.
-
-bsdtar uses the match functionality in libarchive to track include
-matches. A new libarchive API call
-archive_match_include_directories_recursively() gets introduced to
-influence the matching behavior, with the default behavior as before.
-
-Non-recursive matching can be achieved by anchoring the path match at
-both start and end. Asking for a directory which itself isn't in the
-archive when in non-recursive mode is an error and handled by the
-existing mechanism for tracking unused inclusion entries.
-
-Upstream-Status: Submitted [https://github.com/libarchive/libarchive/pull/812]
-
-Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
-
----
- libarchive/archive.h       |  2 ++
- libarchive/archive_match.c | 30 +++++++++++++++++++++++++++++-
- tar/bsdtar.1               |  3 +--
- tar/bsdtar.c               | 12 ++++++++++--
- 4 files changed, 42 insertions(+), 5 deletions(-)
-
-diff --git a/libarchive/archive.h b/libarchive/archive.h
-index ff401e9..38d8746 100644
---- a/libarchive/archive.h
-+++ b/libarchive/archive.h
-@@ -1085,6 +1085,8 @@ __LA_DECL int	archive_match_excluded(struct archive *,
-  */
- __LA_DECL int	archive_match_path_excluded(struct archive *,
- 		    struct archive_entry *);
-+/* Control recursive inclusion of directory content when directory is included. Default on. */
-+__LA_DECL int	archive_match_include_directories_recursively(struct archive *, int _enabled);
- /* Add exclusion pathname pattern. */
- __LA_DECL int	archive_match_exclude_pattern(struct archive *, const char *);
- __LA_DECL int	archive_match_exclude_pattern_w(struct archive *,
-diff --git a/libarchive/archive_match.c b/libarchive/archive_match.c
-index 0719cbd..6d03a65 100644
---- a/libarchive/archive_match.c
-+++ b/libarchive/archive_match.c
-@@ -93,6 +93,9 @@ struct archive_match {
- 	/* exclusion/inclusion set flag. */
- 	int			 setflag;
- 
-+	/* Recursively include directory content? */
-+	int			 recursive_include;
-+
- 	/*
- 	 * Matching filename patterns.
- 	 */
-@@ -223,6 +226,7 @@ archive_match_new(void)
- 		return (NULL);
- 	a->archive.magic = ARCHIVE_MATCH_MAGIC;
- 	a->archive.state = ARCHIVE_STATE_NEW;
-+	a->recursive_include = 1;
- 	match_list_init(&(a->inclusions));
- 	match_list_init(&(a->exclusions));
- 	__archive_rb_tree_init(&(a->exclusion_tree), &rb_ops_mbs);
-@@ -471,6 +475,28 @@ archive_match_path_excluded(struct archive *_a,
- }
- 
- /*
-+ * When recursive inclusion of directory content is enabled,
-+ * an inclusion pattern that matches a directory will also
-+ * include everything beneath that directory. Enabled by default.
-+ *
-+ * For compatibility with GNU tar, exclusion patterns always
-+ * match if a subset of the full patch matches (i.e., they are
-+ * are not rooted at the beginning of the path) and thus there
-+ * is no corresponding non-recursive exclusion mode.
-+ */
-+int
-+archive_match_include_directories_recursively(struct archive *_a, int _enabled)
-+{
-+	struct archive_match *a;
-+
-+	archive_check_magic(_a, ARCHIVE_MATCH_MAGIC,
-+	    ARCHIVE_STATE_NEW, "archive_match_include_directories_recursively");
-+	a = (struct archive_match *)_a;
-+	a->recursive_include = _enabled;
-+	return (ARCHIVE_OK);
-+}
-+
-+/*
-  * Utilty functions to get statistic information for inclusion patterns.
-  */
- int
-@@ -781,7 +807,9 @@ static int
- match_path_inclusion(struct archive_match *a, struct match *m,
-     int mbs, const void *pn)
- {
--	int flag = PATHMATCH_NO_ANCHOR_END;
-+	int flag = a->recursive_include ?
-+		PATHMATCH_NO_ANCHOR_END : /* Prefix match is good enough. */
-+		0; /* Full match required. */
- 	int r;
- 
- 	if (mbs) {
-diff --git a/tar/bsdtar.1 b/tar/bsdtar.1
-index 9eadaaf..f5d6457 100644
---- a/tar/bsdtar.1
-+++ b/tar/bsdtar.1
-@@ -346,8 +346,7 @@ In extract or list modes, this option is ignored.
- Do not extract modification time.
- By default, the modification time is set to the time stored in the archive.
- .It Fl n , Fl Fl norecurse , Fl Fl no-recursion
--(c, r, u modes only)
--Do not recursively archive the contents of directories.
-+Do not recursively archive (c, r, u), extract (x) or list (t) the contents of directories.
- .It Fl Fl newer Ar date
- (c, r, u modes only)
- Only include files and directories newer than the specified date.
-diff --git a/tar/bsdtar.c b/tar/bsdtar.c
-index 93bf60a..001d5ed 100644
---- a/tar/bsdtar.c
-+++ b/tar/bsdtar.c
-@@ -738,8 +738,6 @@ main(int argc, char **argv)
- 			break;
- 		}
- 	}
--	if (bsdtar->option_no_subdirs)
--		only_mode(bsdtar, "-n", "cru");
- 	if (bsdtar->option_stdout)
- 		only_mode(bsdtar, "-O", "xt");
- 	if (bsdtar->option_unlink_first)
-@@ -788,6 +786,16 @@ main(int argc, char **argv)
- 		only_mode(bsdtar, buff, "cru");
- 	}
- 
-+	/*
-+	 * When creating an archive from a directory tree, the directory
-+	 * walking code will already avoid entering directories when
-+	 * recursive inclusion of directory content is disabled, therefore
-+	 * changing the matching behavior has no effect for creation modes.
-+	 * It is relevant for extraction or listing.
-+	 */
-+	archive_match_include_directories_recursively(bsdtar->matching,
-+						      !bsdtar->option_no_subdirs);
-+
- 	/* Filename "-" implies stdio. */
- 	if (strcmp(bsdtar->filename, "-") == 0)
- 		bsdtar->filename = NULL;
--- 
-2.1.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive/CVE-2017-14166.patch b/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive/CVE-2017-14166.patch
new file mode 100644
index 0000000..e85fec4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive/CVE-2017-14166.patch
@@ -0,0 +1,37 @@
+libarchive-3.3.2: Fix CVE-2017-14166
+
+[No upstream tracking] -- https://github.com/libarchive/libarchive/pull/935
+
+archive_read_support_format_xar: heap-based buffer overflow in xml_data
+
+Upstream-Status: Backport [https://github.com/libarchive/libarchive/commit/fa7438a0ff4033e4741c807394a9af6207940d71]
+CVE: CVE-2017-14166
+Bug: 935
+Signed-off-by: Andrej Valek <andrej.valek@siemens.com>
+
+diff --git a/libarchive/archive_read_support_format_xar.c b/libarchive/archive_read_support_format_xar.c
+index 7a22beb..93eeacc 100644
+--- a/libarchive/archive_read_support_format_xar.c
++++ b/libarchive/archive_read_support_format_xar.c
+@@ -1040,6 +1040,9 @@ atol10(const char *p, size_t char_cnt)
+ 	uint64_t l;
+ 	int digit;
+ 
++	if (char_cnt == 0)
++		return (0);
++
+ 	l = 0;
+ 	digit = *p - '0';
+ 	while (digit >= 0 && digit < 10  && char_cnt-- > 0) {
+@@ -1054,7 +1057,10 @@ atol8(const char *p, size_t char_cnt)
+ {
+ 	int64_t l;
+ 	int digit;
+-        
++
++	if (char_cnt == 0)
++		return (0);
++
+ 	l = 0;
+ 	while (char_cnt-- > 0) {
+ 		if (*p >= '0' && *p <= '7')
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive/CVE-2017-14502.patch b/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive/CVE-2017-14502.patch
new file mode 100644
index 0000000..72e1546
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive/CVE-2017-14502.patch
@@ -0,0 +1,37 @@
+From 5562545b5562f6d12a4ef991fae158bf4ccf92b6 Mon Sep 17 00:00:00 2001
+From: Joerg Sonnenberger <joerg@bec.de>
+Date: Sat, 9 Sep 2017 17:47:32 +0200
+Subject: [PATCH] Avoid a read off-by-one error for UTF16 names in RAR
+ archives.
+
+Reported-By: OSS-Fuzz issue 573
+
+CVE: CVE-2017-14502
+
+Upstream-Status: Backport
+
+Signed-off-by: Zhixiong Chi <zhixiong.chi@windriver.com>
+---
+ libarchive/archive_read_support_format_rar.c | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/libarchive/archive_read_support_format_rar.c b/libarchive/archive_read_support_format_rar.c
+index cbb14c3..751de69 100644
+--- a/libarchive/archive_read_support_format_rar.c
++++ b/libarchive/archive_read_support_format_rar.c
+@@ -1496,7 +1496,11 @@ read_header(struct archive_read *a, struct archive_entry *entry,
+         return (ARCHIVE_FATAL);
+       }
+       filename[filename_size++] = '\0';
+-      filename[filename_size++] = '\0';
++      /*
++       * Do not increment filename_size here as the computations below
++       * add the space for the terminating NUL explicitly.
++       */
++      filename[filename_size] = '\0';
+ 
+       /* Decoded unicode form is UTF-16BE, so we have to update a string
+        * conversion object for it. */
+-- 
+1.9.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive/bug929.patch b/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive/bug929.patch
new file mode 100644
index 0000000..2f3254c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive/bug929.patch
@@ -0,0 +1,38 @@
+libarchive-3.3.2: Fix bug929
+
+[No upstream tracking] -- https://github.com/libarchive/libarchive/pull/929
+
+archive_read_support_format_cpio: header_newc(): Avoid overflow when reading corrupt
+cpio archive
+
+A cpio "newc" archive with a namelength of "FFFFFFFF", if read on a
+system with a 32-bit size_t, would result in namelength + name_pad
+overflowing 32 bits and libarchive attempting to copy 2^32-1 bytes
+from a 2-byte buffer, with appropriately hilarious results.
+
+Check for this overflow and fail; there's no legitimate reason for a
+cpio archive to contain a file with a name over 4 billion characters
+in length.
+
+Upstream-Status: Backport [https://github.com/libarchive/libarchive/commit/bac4659e0b970990e7e3f3a3d239294e96311630]
+Bug: 929
+Signed-off-by: Andrej Valek <andrej.valek@siemens.com>
+
+diff --git a/libarchive/archive_read_support_format_cpio.c b/libarchive/archive_read_support_format_cpio.c
+index ad9f782..1faa64d 100644
+--- a/libarchive/archive_read_support_format_cpio.c
++++ b/libarchive/archive_read_support_format_cpio.c
+@@ -633,6 +633,13 @@ header_newc(struct archive_read *a, struct cpio *cpio,
+ 	/* Pad name to 2 more than a multiple of 4. */
+ 	*name_pad = (2 - *namelength) & 3;
+ 
++	/* Make sure that the padded name length fits into size_t. */
++	if ((size_t)(*namelength + *name_pad) < *namelength) {
++		archive_set_error(&a->archive, ARCHIVE_ERRNO_FILE_FORMAT,
++		    "cpio archive has invalid namelength");
++		return (ARCHIVE_FATAL);
++	}
++
+ 	/*
+ 	 * Note: entry_bytes_remaining is at least 64 bits and
+ 	 * therefore guaranteed to be big enough for a 33-bit file
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive/non-recursive-extract-and-list.patch b/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive/non-recursive-extract-and-list.patch
new file mode 100644
index 0000000..cd7be51
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive/non-recursive-extract-and-list.patch
@@ -0,0 +1,153 @@
+From 47f7566f6829c2b14e21bbbba699916de4998c72 Mon Sep 17 00:00:00 2001
+From: Patrick Ohly <patrick.ohly@intel.com>
+Date: Mon, 24 Oct 2016 12:54:48 +0200
+Subject: [PATCH 1/1] non-recursive extract and list
+
+Sometimes it makes sense to extract or list a directory contained in
+an archive without also doing the same for the content of the
+directory, i.e. allowing -n (= --no-recursion) in combination with the
+x and t modes.
+
+bsdtar uses the match functionality in libarchive to track include
+matches. A new libarchive API call
+archive_match_include_directories_recursively() gets introduced to
+influence the matching behavior, with the default behavior as before.
+
+Non-recursive matching can be achieved by anchoring the path match at
+both start and end. Asking for a directory which itself isn't in the
+archive when in non-recursive mode is an error and handled by the
+existing mechanism for tracking unused inclusion entries.
+
+Upstream-Status: Submitted [https://github.com/libarchive/libarchive/pull/812]
+
+Signed-off-by: Patrick Ohly <patrick.ohly@intel.com>
+
+---
+ libarchive/archive.h       |  2 ++
+ libarchive/archive_match.c | 30 +++++++++++++++++++++++++++++-
+ tar/bsdtar.1               |  3 +--
+ tar/bsdtar.c               | 12 ++++++++++--
+ 4 files changed, 42 insertions(+), 5 deletions(-)
+
+diff --git a/libarchive/archive.h b/libarchive/archive.h
+index 32710201..59fb4aa6 100644
+--- a/libarchive/archive.h
++++ b/libarchive/archive.h
+@@ -1093,6 +1093,8 @@ __LA_DECL int	archive_match_excluded(struct archive *,
+  */
+ __LA_DECL int	archive_match_path_excluded(struct archive *,
+ 		    struct archive_entry *);
++/* Control recursive inclusion of directory content when directory is included. Default on. */
++__LA_DECL int	archive_match_include_directories_recursively(struct archive *, int _enabled);
+ /* Add exclusion pathname pattern. */
+ __LA_DECL int	archive_match_exclude_pattern(struct archive *, const char *);
+ __LA_DECL int	archive_match_exclude_pattern_w(struct archive *,
+diff --git a/libarchive/archive_match.c b/libarchive/archive_match.c
+index be72066e..bb6a3407 100644
+--- a/libarchive/archive_match.c
++++ b/libarchive/archive_match.c
+@@ -93,6 +93,9 @@ struct archive_match {
+ 	/* exclusion/inclusion set flag. */
+ 	int			 setflag;
+ 
++	/* Recursively include directory content? */
++	int			 recursive_include;
++
+ 	/*
+ 	 * Matching filename patterns.
+ 	 */
+@@ -223,6 +226,7 @@ archive_match_new(void)
+ 		return (NULL);
+ 	a->archive.magic = ARCHIVE_MATCH_MAGIC;
+ 	a->archive.state = ARCHIVE_STATE_NEW;
++	a->recursive_include = 1;
+ 	match_list_init(&(a->inclusions));
+ 	match_list_init(&(a->exclusions));
+ 	__archive_rb_tree_init(&(a->exclusion_tree), &rb_ops_mbs);
+@@ -471,6 +475,28 @@ archive_match_path_excluded(struct archive *_a,
+ }
+ 
+ /*
++ * When recursive inclusion of directory content is enabled,
++ * an inclusion pattern that matches a directory will also
++ * include everything beneath that directory. Enabled by default.
++ *
++ * For compatibility with GNU tar, exclusion patterns always
++ * match if a subset of the full path matches (i.e., they are
++ * not rooted at the beginning of the path) and thus there
++ * is no corresponding non-recursive exclusion mode.
++ */
++int
++archive_match_include_directories_recursively(struct archive *_a, int _enabled)
++{
++	struct archive_match *a;
++
++	archive_check_magic(_a, ARCHIVE_MATCH_MAGIC,
++	    ARCHIVE_STATE_NEW, "archive_match_include_directories_recursively");
++	a = (struct archive_match *)_a;
++	a->recursive_include = _enabled;
++	return (ARCHIVE_OK);
++}
++
++/*
+  * Utility functions to get statistic information for inclusion patterns.
+  */
+ int
+@@ -781,7 +807,9 @@ static int
+ match_path_inclusion(struct archive_match *a, struct match *m,
+     int mbs, const void *pn)
+ {
+-	int flag = PATHMATCH_NO_ANCHOR_END;
++	int flag = a->recursive_include ?
++		PATHMATCH_NO_ANCHOR_END : /* Prefix match is good enough. */
++		0; /* Full match required. */
+ 	int r;
+ 
+ 	if (mbs) {
+diff --git a/tar/bsdtar.1 b/tar/bsdtar.1
+index 132e1145..1dd2a847 100644
+--- a/tar/bsdtar.1
++++ b/tar/bsdtar.1
+@@ -386,8 +386,7 @@ and the default behavior in c, r, and u modes or if
+ .Nm
+ is run in x mode as root.
+ .It Fl n , Fl Fl norecurse , Fl Fl no-recursion
+-(c, r, u modes only)
+-Do not recursively archive the contents of directories.
++Do not recursively archive (c, r, u), extract (x) or list (t) the contents of directories.
+ .It Fl Fl newer Ar date
+ (c, r, u modes only)
+ Only include files and directories newer than the specified date.
+diff --git a/tar/bsdtar.c b/tar/bsdtar.c
+index 11dedbf9..d014cc3e 100644
+--- a/tar/bsdtar.c
++++ b/tar/bsdtar.c
+@@ -794,8 +794,6 @@ main(int argc, char **argv)
+ 			break;
+ 		}
+ 	}
+-	if (bsdtar->flags & OPTFLAG_NO_SUBDIRS)
+-		only_mode(bsdtar, "-n", "cru");
+ 	if (bsdtar->flags & OPTFLAG_STDOUT)
+ 		only_mode(bsdtar, "-O", "xt");
+ 	if (bsdtar->flags & OPTFLAG_UNLINK_FIRST)
+@@ -845,6 +843,16 @@ main(int argc, char **argv)
+ 		only_mode(bsdtar, buff, "cru");
+ 	}
+ 
++	/*
++	 * When creating an archive from a directory tree, the directory
++	 * walking code will already avoid entering directories when
++	 * recursive inclusion of directory content is disabled, therefore
++	 * changing the matching behavior has no effect for creation modes.
++	 * It is relevant for extraction or listing.
++	 */
++	archive_match_include_directories_recursively(bsdtar->matching,
++						      !(bsdtar->flags & OPTFLAG_NO_SUBDIRS));
++
+ 	/* Filename "-" implies stdio. */
+ 	if (strcmp(bsdtar->filename, "-") == 0)
+ 		bsdtar->filename = NULL;
+-- 
+2.11.0
+
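The patch above extends bsdtar's -n/--no-recursion flag from the create/append modes to extraction (x) and listing (t) by anchoring include matches at both ends of the path. A usage sketch — the archive and member names are illustrative only, and per the patch description the directory entry itself must exist in the archive or it is reported as an unmatched inclusion:

    # list only the directory entry itself, not everything beneath it
    bsdtar -t -n -f image.tar some/dir

    # extract only that entry; drop -n to pull in the whole subtree
    bsdtar -x -n -f image.tar some/dir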
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive_3.2.2.bb b/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive_3.2.2.bb
deleted file mode 100644
index da959a2..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive_3.2.2.bb
+++ /dev/null
@@ -1,70 +0,0 @@
-SUMMARY = "Support for reading various archive formats"
-DESCRIPTION = "C library and command-line tools for reading and writing tar, cpio, zip, ISO, and other archive formats"
-HOMEPAGE = "http://www.libarchive.org/"
-SECTION = "devel"
-LICENSE = "BSD"
-LIC_FILES_CHKSUM = "file://COPYING;md5=ed99aca006bc346974bb745a35336425"
-
-DEPENDS = "e2fsprogs-native"
-
-PACKAGECONFIG ?= "zlib bz2"
-
-PACKAGECONFIG_append_class-target = "\
-	libxml2 \
-	${@bb.utils.filter('DISTRO_FEATURES', 'acl xattr', d)} \
-"
-
-DEPENDS_BZIP2 = "bzip2-replacement-native"
-DEPENDS_BZIP2_class-target = "bzip2"
-
-PACKAGECONFIG[acl] = "--enable-acl,--disable-acl,acl,"
-PACKAGECONFIG[xattr] = "--enable-xattr,--disable-xattr,attr,"
-PACKAGECONFIG[zlib] = "--with-zlib,--without-zlib,zlib,"
-PACKAGECONFIG[bz2] = "--with-bz2lib,--without-bz2lib,${DEPENDS_BZIP2},"
-PACKAGECONFIG[xz] = "--with-lzmadec --with-lzma,--without-lzmadec --without-lzma,xz,"
-PACKAGECONFIG[openssl] = "--with-openssl,--without-openssl,openssl,"
-PACKAGECONFIG[libxml2] = "--with-xml2,--without-xml2,libxml2,"
-PACKAGECONFIG[expat] = "--with-expat,--without-expat,expat,"
-PACKAGECONFIG[lzo] = "--with-lzo2,--without-lzo2,lzo,"
-PACKAGECONFIG[nettle] = "--with-nettle,--without-nettle,nettle,"
-PACKAGECONFIG[lz4] = "--with-lz4,--without-lz4,lz4,"
-
-EXTRA_OECONF += "--enable-largefile"
-
-SRC_URI = "http://libarchive.org/downloads/libarchive-${PV}.tar.gz \
-           file://non-recursive-extract-and-list.patch \
-	   file://0001-archive_write_disk_posix.c-make-_fsobj-functions-mor.patch \
-	   file://0002-Fix-extracting-hardlinks-over-symlinks.patch \
-           "
-
-SRC_URI[md5sum] = "1ec00b7dcaf969dd2a5712f85f23c764"
-SRC_URI[sha256sum] = "691c194ee132d1f0f7a42541f091db811bc2e56f7107e9121be2bc8c04f1060f"
-
-inherit autotools update-alternatives pkgconfig
-
-CPPFLAGS += "-I${WORKDIR}/extra-includes"
-
-do_configure[cleandirs] += "${WORKDIR}/extra-includes"
-do_configure_prepend() {
-	# We just need the headers for some type constants, so no need to
-	# build all of e2fsprogs for the target
-	cp -R ${STAGING_INCDIR_NATIVE}/ext2fs ${WORKDIR}/extra-includes/
-}
-
-ALTERNATIVE_PRIORITY = "80"
-
-PACKAGES =+ "bsdtar"
-FILES_bsdtar = "${bindir}/bsdtar"
-
-ALTERNATIVE_bsdtar = "tar"
-ALTERNATIVE_LINK_NAME[tar] = "${base_bindir}/tar"
-ALTERNATIVE_TARGET[tar] = "${bindir}/bsdtar"
-
-PACKAGES =+ "bsdcpio"
-FILES_bsdcpio = "${bindir}/bsdcpio"
-
-ALTERNATIVE_bsdcpio = "cpio"
-ALTERNATIVE_LINK_NAME[cpio] = "${base_bindir}/cpio"
-ALTERNATIVE_TARGET[cpio] = "${bindir}/bsdcpio"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive_3.3.2.bb b/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive_3.3.2.bb
new file mode 100644
index 0000000..5daca27
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libarchive/libarchive_3.3.2.bb
@@ -0,0 +1,71 @@
+SUMMARY = "Support for reading various archive formats"
+DESCRIPTION = "C library and command-line tools for reading and writing tar, cpio, zip, ISO, and other archive formats"
+HOMEPAGE = "http://www.libarchive.org/"
+SECTION = "devel"
+LICENSE = "BSD"
+LIC_FILES_CHKSUM = "file://COPYING;md5=ed99aca006bc346974bb745a35336425"
+
+DEPENDS = "e2fsprogs-native"
+
+PACKAGECONFIG ?= "zlib bz2 xz lzo"
+
+PACKAGECONFIG_append_class-target = "\
+	libxml2 \
+	${@bb.utils.filter('DISTRO_FEATURES', 'acl xattr', d)} \
+"
+
+DEPENDS_BZIP2 = "bzip2-replacement-native"
+DEPENDS_BZIP2_class-target = "bzip2"
+
+PACKAGECONFIG[acl] = "--enable-acl,--disable-acl,acl,"
+PACKAGECONFIG[xattr] = "--enable-xattr,--disable-xattr,attr,"
+PACKAGECONFIG[zlib] = "--with-zlib,--without-zlib,zlib,"
+PACKAGECONFIG[bz2] = "--with-bz2lib,--without-bz2lib,${DEPENDS_BZIP2},"
+PACKAGECONFIG[xz] = "--with-lzma,--without-lzma,xz,"
+PACKAGECONFIG[openssl] = "--with-openssl,--without-openssl,openssl,"
+PACKAGECONFIG[libxml2] = "--with-xml2,--without-xml2,libxml2,"
+PACKAGECONFIG[expat] = "--with-expat,--without-expat,expat,"
+PACKAGECONFIG[lzo] = "--with-lzo2,--without-lzo2,lzo,"
+PACKAGECONFIG[nettle] = "--with-nettle,--without-nettle,nettle,"
+PACKAGECONFIG[lz4] = "--with-lz4,--without-lz4,lz4,"
+
+EXTRA_OECONF += "--enable-largefile"
+
+SRC_URI = "http://libarchive.org/downloads/libarchive-${PV}.tar.gz \
+           file://bug929.patch \
+           file://CVE-2017-14166.patch \
+           file://CVE-2017-14502.patch \
+           file://non-recursive-extract-and-list.patch \
+          "
+
+SRC_URI[md5sum] = "4583bd6b2ebf7e0e8963d90879eb1b27"
+SRC_URI[sha256sum] = "ed2dbd6954792b2c054ccf8ec4b330a54b85904a80cef477a1c74643ddafa0ce"
+
+inherit autotools update-alternatives pkgconfig
+
+CPPFLAGS += "-I${WORKDIR}/extra-includes"
+
+do_configure[cleandirs] += "${WORKDIR}/extra-includes"
+do_configure_prepend() {
+	# We just need the headers for some type constants, so no need to
+	# build all of e2fsprogs for the target
+	cp -R ${STAGING_INCDIR_NATIVE}/ext2fs ${WORKDIR}/extra-includes/
+}
+
+ALTERNATIVE_PRIORITY = "80"
+
+PACKAGES =+ "bsdtar"
+FILES_bsdtar = "${bindir}/bsdtar"
+
+ALTERNATIVE_bsdtar = "tar"
+ALTERNATIVE_LINK_NAME[tar] = "${base_bindir}/tar"
+ALTERNATIVE_TARGET[tar] = "${bindir}/bsdtar"
+
+PACKAGES =+ "bsdcpio"
+FILES_bsdcpio = "${bindir}/bsdcpio"
+
+ALTERNATIVE_bsdcpio = "cpio"
+ALTERNATIVE_LINK_NAME[cpio] = "${base_bindir}/cpio"
+ALTERNATIVE_TARGET[cpio] = "${bindir}/bsdcpio"
+
+BBCLASSEXTEND = "native nativesdk"
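
The PACKAGECONFIG knobs declared in this recipe (zlib, bz2, xz and lzo by default, plus libxml2 and the acl/xattr DISTRO_FEATURES mapping for target builds) can be extended from configuration without editing the recipe itself. A minimal sketch using the rocko-era override syntax seen throughout this layer, assuming a build that also wants the optional lz4 support:

    # local.conf (or a libarchive bbappend): enable the lz4 knob, which the
    # recipe maps to --with-lz4 and a build dependency on lz4
    PACKAGECONFIG_append_pn-libarchive = " lz4"

The resulting value can be confirmed with "bitbake -e libarchive" once the change is in place.
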
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn/0001-idn-fix-printf-format-security-warnings.patch b/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn/0001-idn-fix-printf-format-security-warnings.patch
index 5adc7d9..2d5faab 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn/0001-idn-fix-printf-format-security-warnings.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn/0001-idn-fix-printf-format-security-warnings.patch
@@ -1,181 +1,694 @@
-From 82f98dcbc429bbe89a9837c533cbcbc02e77c790 Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?Andr=C3=A9=20Draszik?= <adraszik@tycoint.com>
-Date: Tue, 28 Jun 2016 12:43:31 +0100
-Subject: [PATCH] idn: fix printf() format security warnings
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
+From 7148adf34dae30345c2e4d9d437838a45ba6f6e8 Mon Sep 17 00:00:00 2001
+From: =?utf8?q?Tim=20R=C3=BChsen?= <tim.ruehsen@gmx.de>
+Date: Wed, 1 Feb 2017 11:06:39 +0100
+Subject: [PATCH] Fix -Wformat warnings
 
-| ../../libidn-1.32/src/idn.c: In function 'main':
-| ../../libidn-1.32/src/idn.c:172:7: error: format not a string literal and no format arguments [-Werror=format-security]
-|        error (0, 0, _("only one of -s, -e, -d, -a, -u or -n can be specified"));
-|        ^~~~~
-| ../../libidn-1.32/src/idn.c:187:5: error: format not a string literal and no format arguments [-Werror=format-security]
-|      fprintf (stderr, _("Type each input string on a line by itself, "
-|      ^~~~~~~
-| ../../libidn-1.32/src/idn.c:202:4: error: format not a string literal and no format arguments [-Werror=format-security]
-|     error (EXIT_FAILURE, errno, _("input error"));
-|     ^~~~~
-| ../../libidn-1.32/src/idn.c:220:8: error: format not a string literal and no format arguments [-Werror=format-security]
-|         _("could not convert from UTF-8 to UCS-4"));
-|         ^
-| ../../libidn-1.32/src/idn.c:245:8: error: format not a string literal and no format arguments [-Werror=format-security]
-|         _("could not convert from UTF-8 to UCS-4"));
-|         ^
-| ../../libidn-1.32/src/idn.c:281:6: error: format not a string literal and no format arguments [-Werror=format-security]
-|       _("could not convert from UTF-8 to UCS-4"));
-|       ^
-| ../../libidn-1.32/src/idn.c:340:6: error: format not a string literal and no format arguments [-Werror=format-security]
-|       _("could not convert from UCS-4 to UTF-8"));
-|       ^
-| ../../libidn-1.32/src/idn.c:364:6: error: format not a string literal and no format arguments [-Werror=format-security]
-|       _("could not convert from UCS-4 to UTF-8"));
-|       ^
-| ../../libidn-1.32/src/idn.c:442:8: error: format not a string literal and no format arguments [-Werror=format-security]
-|         _("could not convert from UCS-4 to UTF-8"));
-|         ^
-| ../../libidn-1.32/src/idn.c:498:6: error: format not a string literal and no format arguments [-Werror=format-security]
-|       _("could not convert from UTF-8 to UCS-4"));
-|       ^
-| ../../libidn-1.32/src/idn.c:527:5: error: format not a string literal and no format arguments [-Werror=format-security]
-|      _("could not convert from UTF-8 to UCS-4"));
-|      ^
-| ../../libidn-1.32/src/idn.c:540:6: error: format not a string literal and no format arguments [-Werror=format-security]
-|       error (EXIT_FAILURE, 0, _("could not do NFKC normalization"));
-|       ^~~~~
-| ../../libidn-1.32/src/idn.c:551:5: error: format not a string literal and no format arguments [-Werror=format-security]
-|      _("could not convert from UTF-8 to UCS-4"));
-|      ^
-
-Signed-off-by: André Draszik <adraszik@tycoint.com>
 ---
-Upstream-Status: Pending
+Upstream-Status: Backport
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
 
- src/idn.c | 27 ++++++++++++++-------------
- 1 file changed, 14 insertions(+), 13 deletions(-)
+ examples/example.c     |  6 +++---
+ examples/example3.c    |  4 ++--
+ examples/example4.c    |  4 ++--
+ examples/example5.c    |  2 +-
+ src/idn.c              |  2 +-
+ tests/tst_idna.c       | 25 +++++++++++++------------
+ tests/tst_idna2.c      |  8 ++++----
+ tests/tst_idna3.c      |  8 ++++----
+ tests/tst_nfkc.c       |  8 ++++----
+ tests/tst_pr29.c       | 12 ++++++------
+ tests/tst_punycode.c   | 13 +++++++------
+ tests/tst_strerror.c   | 20 ++++++++++----------
+ tests/tst_stringprep.c | 12 ++++++------
+ tests/tst_tld.c        | 20 ++++++++++----------
+ tests/utils.c          |  6 +++---
+ 15 files changed, 76 insertions(+), 74 deletions(-)
 
-diff --git a/src/idn.c b/src/idn.c
-index be1c7d1..68e4291 100644
---- a/src/idn.c
-+++ b/src/idn.c
-@@ -170,7 +170,7 @@ main (int argc, char *argv[])
-       (args_info.idna_to_unicode_given ? 1 : 0) +
-       (args_info.nfkc_given ? 1 : 0) != 1)
+diff --git a/examples/example.c b/examples/example.c
+index 6e91783..24f64e0 100644
+--- a/examples/example.c
++++ b/examples/example.c
+@@ -55,7 +55,7 @@ main (void)
+ 
+   printf ("Before locale2utf8 (length %ld): ", (long int) strlen (buf));
+   for (i = 0; i < strlen (buf); i++)
+-    printf ("%02x ", buf[i] & 0xFF);
++    printf ("%02x ", (unsigned) buf[i] & 0xFF);
+   printf ("\n");
+ 
+   p = stringprep_locale_to_utf8 (buf);
+@@ -69,7 +69,7 @@ main (void)
+ 
+   printf ("Before stringprep (length %ld): ", (long int) strlen (buf));
+   for (i = 0; i < strlen (buf); i++)
+-    printf ("%02x ", buf[i] & 0xFF);
++    printf ("%02x ", (unsigned) buf[i] & 0xFF);
+   printf ("\n");
+ 
+   rc = stringprep (buf, BUFSIZ, 0, stringprep_nameprep);
+@@ -79,7 +79,7 @@ main (void)
      {
--      error (0, 0, _("only one of -s, -e, -d, -a, -u or -n can be specified"));
-+      error (0, 0, "%s", _("only one of -s, -e, -d, -a, -u or -n can be specified"));
-       usage (EXIT_FAILURE);
+       printf ("After stringprep (length %ld): ", (long int) strlen (buf));
+       for (i = 0; i < strlen (buf); i++)
+-	printf ("%02x ", buf[i] & 0xFF);
++	printf ("%02x ", (unsigned) buf[i] & 0xFF);
+       printf ("\n");
      }
  
-@@ -185,7 +185,7 @@ main (int argc, char *argv[])
-   if (!args_info.quiet_given
-       && args_info.inputs_num == 0
-       && isatty (fileno (stdin)))
--    fprintf (stderr, _("Type each input string on a line by itself, "
-+    fprintf (stderr, "%s", _("Type each input string on a line by itself, "
- 		       "terminated by a newline character.\n"));
+diff --git a/examples/example3.c b/examples/example3.c
+index fc11c1c..ffb9042 100644
+--- a/examples/example3.c
++++ b/examples/example3.c
+@@ -56,7 +56,7 @@ main (void)
  
-   do
-@@ -197,7 +197,7 @@ main (int argc, char *argv[])
- 	  if (feof (stdin))
- 	    break;
+   printf ("Read string (length %ld): ", (long int) strlen (buf));
+   for (i = 0; i < strlen (buf); i++)
+-    printf ("%02x ", buf[i] & 0xFF);
++    printf ("%02x ", (unsigned) buf[i] & 0xFF);
+   printf ("\n");
  
--	  error (EXIT_FAILURE, errno, _("input error"));
-+	  error (EXIT_FAILURE, errno, "%s", _("input error"));
+   rc = idna_to_ascii_lz (buf, &p, 0);
+@@ -68,7 +68,7 @@ main (void)
+ 
+   printf ("ACE label (length %ld): '%s'\n", (long int) strlen (p), p);
+   for (i = 0; i < strlen (p); i++)
+-    printf ("%02x ", p[i] & 0xFF);
++    printf ("%02x ", (unsigned) p[i] & 0xFF);
+   printf ("\n");
+ 
+   free (p);
+diff --git a/examples/example4.c b/examples/example4.c
+index 1b319c9..a3315a1 100644
+--- a/examples/example4.c
++++ b/examples/example4.c
+@@ -56,7 +56,7 @@ main (void)
+ 
+   printf ("Read string (length %ld): ", (long int) strlen (buf));
+   for (i = 0; i < strlen (buf); i++)
+-    printf ("%02x ", buf[i] & 0xFF);
++    printf ("%02x ", (unsigned) buf[i] & 0xFF);
+   printf ("\n");
+ 
+   rc = idna_to_unicode_lzlz (buf, &p, 0);
+@@ -68,7 +68,7 @@ main (void)
+ 
+   printf ("ACE label (length %ld): '%s'\n", (long int) strlen (p), p);
+   for (i = 0; i < strlen (p); i++)
+-    printf ("%02x ", p[i] & 0xFF);
++    printf ("%02x ", (unsigned) p[i] & 0xFF);
+   printf ("\n");
+ 
+   free (p);
+diff --git a/examples/example5.c b/examples/example5.c
+index df55798..29d40b9 100644
+--- a/examples/example5.c
++++ b/examples/example5.c
+@@ -68,7 +68,7 @@ main (void)
+ 
+   printf ("Read string (length %ld): ", (long int) strlen (buf));
+   for (i = 0; i < strlen (buf); i++)
+-    printf ("%02x ", buf[i] & 0xFF);
++    printf ("%02x ", (unsigned) buf[i] & 0xFF);
+   printf ("\n");
+ 
+   p = stringprep_locale_to_utf8 (buf);
+diff --git a/src/idn.c b/src/idn.c
+index be1c7d1..13eb3c9 100644
+--- a/src/idn.c
++++ b/src/idn.c
+@@ -419,7 +419,7 @@ main (int argc, char *argv[])
+ 	      size_t i;
+ 	      for (i = 0; p[i]; i++)
+ 		fprintf (stderr, "output[%lu] = U+%04x\n",
+-			 (unsigned long) i, p[i]);
++			 (unsigned long) i, (unsigned) p[i]);
+ 	    }
+ 
+ 	  fprintf (stdout, "%s\n", p);
+diff --git a/tests/tst_idna.c b/tests/tst_idna.c
+index 415764e..4ac046f 100644
+--- a/tests/tst_idna.c
++++ b/tests/tst_idna.c
+@@ -220,13 +220,14 @@ doit (void)
+   char label[100];
+   uint32_t *ucs4label = NULL;
+   uint32_t tmp[100];
+-  size_t len, len2, i;
++  size_t len, len2;
+   int rc;
++  unsigned i;
+ 
+   for (i = 0; i < sizeof (idna) / sizeof (idna[0]); i++)
+     {
+       if (debug)
+-	printf ("IDNA entry %ld: %s\n", i, idna[i].name);
++	printf ("IDNA entry %u: %s\n", i, idna[i].name);
+ 
+       if (debug)
+ 	{
+@@ -237,7 +238,7 @@ doit (void)
+       rc = idna_to_ascii_4i (idna[i].in, idna[i].inlen, label, idna[i].flags);
+       if (rc != idna[i].toasciirc)
+ 	{
+-	  fail ("IDNA entry %ld failed: %d\n", i, rc);
++	  fail ("IDNA entry %u failed: %d\n", i, rc);
+ 	  if (debug)
+ 	    printf ("FATAL\n");
+ 	  continue;
+@@ -256,7 +257,7 @@ doit (void)
+ 	  if (strlen (idna[i].out) != strlen (label) ||
+ 	      strcasecmp (idna[i].out, label) != 0)
+ 	    {
+-	      fail ("IDNA entry %ld failed\n", i);
++	      fail ("IDNA entry %u failed\n", i);
+ 	      if (debug)
+ 		printf ("ERROR\n");
+ 	    }
+@@ -273,8 +274,8 @@ doit (void)
+ 
+       if (debug)
+ 	{
+-	  printf ("in: %s (%ld==%ld)\n", idna[i].out, strlen (idna[i].out),
+-		  len);
++	  printf ("in: %s (%d==%d)\n", idna[i].out, (int) strlen (idna[i].out),
++		  (int) len);
+ 	  ucs4print (ucs4label, len);
  	}
  
-       if (strlen (line) > 0)
-@@ -215,7 +215,7 @@ main (int argc, char *argv[])
- 	  if (!q)
+@@ -282,20 +283,20 @@ doit (void)
+       rc = idna_to_unicode_44i (ucs4label, len, tmp, &len2, idna[i].flags);
+       if (debug)
+ 	{
+-	  printf ("expected out (%ld):\n",
++	  printf ("expected out (%lu):\n",
+ 		  rc == IDNA_SUCCESS ? idna[i].inlen : len);
+ 	  if (rc == IDNA_SUCCESS)
+ 	    ucs4print (idna[i].in, idna[i].inlen);
+ 	  else
+ 	    ucs4print (ucs4label, len);
+ 
+-	  printf ("computed out (%ld):\n", len2);
++	  printf ("computed out (%d):\n", (int) len2);
+ 	  ucs4print (tmp, len2);
+ 	}
+ 
+       if (rc != idna[i].tounicoderc)
+ 	{
+-	  fail ("IDNA entry %ld failed: %d\n", i, rc);
++	  fail ("IDNA entry %u failed: %d\n", i, rc);
+ 	  if (debug)
+ 	    printf ("FATAL\n");
+ 	  continue;
+@@ -309,11 +310,11 @@ doit (void)
+ 	  if (debug)
  	    {
- 	      free (p);
--	      error (EXIT_FAILURE, 0,
-+	      error (EXIT_FAILURE, 0, "%s",
- 		     _("could not convert from UTF-8 to UCS-4"));
+ 	      if (rc == IDNA_SUCCESS)
+-		printf ("len=%ld len2=%ld\n", len2, idna[i].inlen);
++		printf ("len=%d len2=%d\n", (int) len2, (int) idna[i].inlen);
+ 	      else
+-		printf ("len=%ld len2=%ld\n", len, len2);
++		printf ("len=%d len2=%d\n", (int) len, (int) len2);
  	    }
+-	  fail ("IDNA entry %ld failed\n", i);
++	  fail ("IDNA entry %u failed\n", i);
+ 	  if (debug)
+ 	    printf ("ERROR\n");
+ 	}
+diff --git a/tests/tst_idna2.c b/tests/tst_idna2.c
+index 65b3a4d..38932ca 100644
+--- a/tests/tst_idna2.c
++++ b/tests/tst_idna2.c
+@@ -461,14 +461,14 @@ static const struct idna idna[] = {
+ void
+ doit (void)
+ {
+-  size_t i;
++  unsigned i;
+   char *out;
+   int rc;
  
-@@ -240,7 +240,7 @@ main (int argc, char *argv[])
- 	  if (!q)
+   for (i = 0; i < sizeof (idna) / sizeof (idna[0]); i++)
+     {
+       if (debug)
+-	printf ("IDNA2 entry %ld\n", i);
++	printf ("IDNA2 entry %u\n", i);
+ 
+       if (debug)
+ 	{
+@@ -487,7 +487,7 @@ doit (void)
+ 			     IDNA_USE_STD3_ASCII_RULES);
+       if (rc != IDNA_SUCCESS && strlen (idna[i].out) > 0)
+ 	{
+-	  fail ("IDNA2 entry %ld failed: %d\n", i, rc);
++	  fail ("IDNA2 entry %u failed: %d\n", i, rc);
+ 	  continue;
+ 	}
+ 
+@@ -504,7 +504,7 @@ doit (void)
+ 	  if (strlen (idna[i].out) != strlen (out) ||
+ 	      strcasecmp (idna[i].out, out) != 0)
  	    {
- 	      free (r);
--	      error (EXIT_FAILURE, 0,
-+	      error (EXIT_FAILURE, 0, "%s",
- 		     _("could not convert from UTF-8 to UCS-4"));
+-	      fail ("IDNA2 entry %ld failed\n", i);
++	      fail ("IDNA2 entry %u failed\n", i);
+ 	      if (debug)
+ 		printf ("ERROR\n");
  	    }
+diff --git a/tests/tst_idna3.c b/tests/tst_idna3.c
+index a189378..f65628c 100644
+--- a/tests/tst_idna3.c
++++ b/tests/tst_idna3.c
+@@ -59,13 +59,13 @@ doit (void)
+ {
+   int rc;
+   char *out = NULL;
+-  size_t i;
++  unsigned i;
  
-@@ -277,7 +277,7 @@ main (int argc, char *argv[])
- 	  q = stringprep_utf8_to_ucs4 (p, -1, &len);
- 	  free (p);
- 	  if (!q)
--	    error (EXIT_FAILURE, 0,
-+	    error (EXIT_FAILURE, 0, "%s",
- 		   _("could not convert from UTF-8 to UCS-4"));
+   for (i = 0; i < sizeof (idna) / sizeof (idna[0]); i++)
+     {
+       rc = idna_to_unicode_8z8z (idna[i].in, &out, 0);
+       if (rc != IDNA_SUCCESS)
+-	fail ("IDNA3[%ld] failed %d\n", i, rc);
++	fail ("IDNA3[%u] failed %d\n", i, rc);
  
- 	  if (args_info.debug_given)
-@@ -336,7 +336,7 @@ main (int argc, char *argv[])
- 	  r = stringprep_ucs4_to_utf8 (q, -1, NULL, NULL);
- 	  free (q);
- 	  if (!r)
--	    error (EXIT_FAILURE, 0,
-+	    error (EXIT_FAILURE, 0, "%s",
- 		   _("could not convert from UCS-4 to UTF-8"));
+       if (debug && rc == IDNA_SUCCESS)
+ 	{
+@@ -75,9 +75,9 @@ doit (void)
+ 	}
  
- 	  p = stringprep_utf8_to_locale (r);
-@@ -360,7 +360,7 @@ main (int argc, char *argv[])
- 	  q = stringprep_utf8_to_ucs4 (p, -1, NULL);
- 	  free (p);
- 	  if (!q)
--	    error (EXIT_FAILURE, 0,
-+	    error (EXIT_FAILURE, 0, "%s",
- 		   _("could not convert from UCS-4 to UTF-8"));
+       if (strcmp (out, idna[i].out) != 0)
+-	fail ("IDNA3[%ld] failed\n", i);
++	fail ("IDNA3[%u] failed\n", i);
+       else if (debug)
+-	printf ("IDNA3[%ld] success\n", i);
++	printf ("IDNA3[%u] success\n", i);
  
- 	  if (args_info.debug_given)
-@@ -438,7 +438,7 @@ main (int argc, char *argv[])
- 	  if (!q)
+       if (out)
+ 	idn_free (out);
+diff --git a/tests/tst_nfkc.c b/tests/tst_nfkc.c
+index d150fec..f5af9c6 100644
+--- a/tests/tst_nfkc.c
++++ b/tests/tst_nfkc.c
+@@ -68,18 +68,18 @@ void
+ doit (void)
+ {
+   char *out;
+-  size_t i;
++  unsigned i;
+ 
+   for (i = 0; i < sizeof (nfkc) / sizeof (nfkc[0]); i++)
+     {
+       if (debug)
+-	printf ("NFKC entry %ld\n", i);
++	printf ("NFKC entry %u\n", i);
+ 
+       out = stringprep_utf8_nfkc_normalize (nfkc[i].in,
+ 					    (ssize_t) strlen (nfkc[i].in));
+       if (out == NULL)
+ 	{
+-	  fail ("NFKC entry %ld failed fatally\n", i);
++	  fail ("NFKC entry %u failed fatally\n", i);
+ 	  continue;
+ 	}
+ 
+@@ -114,7 +114,7 @@ doit (void)
+       if (strlen (nfkc[i].out) != strlen (out) ||
+ 	  memcmp (nfkc[i].out, out, strlen (out)) != 0)
+ 	{
+-	  fail ("NFKC entry %ld failed\n", i);
++	  fail ("NFKC entry %u failed\n", i);
+ 	  if (debug)
+ 	    printf ("ERROR\n");
+ 	}
+diff --git a/tests/tst_pr29.c b/tests/tst_pr29.c
+index 3dc5466..11d0ede 100644
+--- a/tests/tst_pr29.c
++++ b/tests/tst_pr29.c
+@@ -91,7 +91,7 @@ static const struct tv tv[] = {
+ void
+ doit (void)
+ {
+-  size_t i;
++  unsigned i;
+   int rc;
+ 
+   for (i = 0; i < sizeof (tv) / sizeof (tv[0]); i++)
+@@ -100,7 +100,7 @@ doit (void)
+ 	{
+ 	  uint32_t *p, *q;
+ 
+-	  printf ("PR29 entry %ld: %s\n", i, tv[i].name);
++	  printf ("PR29 entry %u: %s\n", i, tv[i].name);
+ 
+ 	  printf ("in:\n");
+ 	  ucs4print (tv[i].in, tv[i].inlen);
+@@ -120,7 +120,7 @@ doit (void)
+       rc = pr29_4 (tv[i].in, tv[i].inlen);
+       if (rc != tv[i].rc)
+ 	{
+-	  fail ("PR29 entry %ld failed (expected %d): %d\n", i, tv[i].rc, rc);
++	  fail ("PR29 entry %u failed (expected %d): %d\n", i, tv[i].rc, rc);
+ 	  if (debug)
+ 	    printf ("FATAL\n");
+ 	  continue;
+@@ -129,7 +129,7 @@ doit (void)
+       rc = pr29_4z (tv[i].in);
+       if (rc != tv[i].rc)
+ 	{
+-	  fail ("PR29 entry %ld failed (expected %d): %d\n", i, tv[i].rc, rc);
++	  fail ("PR29 entry %u failed (expected %d): %d\n", i, tv[i].rc, rc);
+ 	  if (debug)
+ 	    printf ("FATAL\n");
+ 	  continue;
+@@ -142,7 +142,7 @@ doit (void)
+ 	p = stringprep_ucs4_to_utf8 (tv[i].in, (ssize_t) tv[i].inlen,
+ 				     &items_read, &items_written);
+ 	if (p == NULL)
+-	  fail ("FAIL: stringprep_ucs4_to_utf8(tv[%ld]) == NULL\n", i);
++	  fail ("FAIL: stringprep_ucs4_to_utf8(tv[%u]) == NULL\n", i);
+ 	if (debug)
+ 	  hexprint (p, strlen (p));
+ 
+@@ -150,7 +150,7 @@ doit (void)
+ 	free (p);
+ 	if (rc != tv[i].rc)
+ 	  {
+-	    fail ("PR29 entry %ld failed (expected %d): %d\n",
++	    fail ("PR29 entry %u failed (expected %d): %d\n",
+ 		  i, tv[i].rc, rc);
+ 	    if (debug)
+ 	      printf ("FATAL\n");
+diff --git a/tests/tst_punycode.c b/tests/tst_punycode.c
+index 493b8a2..997744a 100644
+--- a/tests/tst_punycode.c
++++ b/tests/tst_punycode.c
+@@ -173,7 +173,8 @@ doit (void)
+   char *p;
+   uint32_t *q;
+   int rc;
+-  size_t i, outlen;
++  size_t outlen;
++  unsigned i;
+ 
+   p = malloc (sizeof (*p) * BUFSIZ);
+   if (p == NULL)
+@@ -186,7 +187,7 @@ doit (void)
+   for (i = 0; i < sizeof (punycode) / sizeof (punycode[0]); i++)
+     {
+       if (debug)
+-	printf ("PUNYCODE entry %ld: %s\n", i, punycode[i].name);
++	printf ("PUNYCODE entry %u: %s\n", i, punycode[i].name);
+ 
+       if (debug)
+ 	{
+@@ -199,7 +200,7 @@ doit (void)
+ 			    NULL, &outlen, p);
+       if (rc != punycode[i].rc)
+ 	{
+-	  fail ("punycode_encode() entry %ld failed: %d\n", i, rc);
++	  fail ("punycode_encode() entry %u failed: %d\n", i, rc);
+ 	  if (debug)
+ 	    printf ("FATAL\n");
+ 	  continue;
+@@ -221,7 +222,7 @@ doit (void)
+ 	  if (strlen (punycode[i].out) != strlen (p) ||
+ 	      memcmp (punycode[i].out, p, strlen (p)) != 0)
  	    {
- 	      free (p);
--	      error (EXIT_FAILURE, 0,
-+	      error (EXIT_FAILURE, 0, "%s",
- 		     _("could not convert from UCS-4 to UTF-8"));
+-	      fail ("punycode() entry %ld failed\n", i);
++	      fail ("punycode() entry %u failed\n", i);
+ 	      if (debug)
+ 		printf ("ERROR\n");
  	    }
- 
-@@ -494,7 +494,7 @@ main (int argc, char *argv[])
- 	  r = stringprep_ucs4_to_utf8 (q, -1, NULL, NULL);
- 	  free (q);
- 	  if (!r)
--	    error (EXIT_FAILURE, 0,
-+	    error (EXIT_FAILURE, 0, "%s",
- 		   _("could not convert from UTF-8 to UCS-4"));
- 
- 	  p = stringprep_utf8_to_locale (r);
-@@ -523,7 +523,7 @@ main (int argc, char *argv[])
- 	      if (!q)
- 		{
- 		  free (p);
--		  error (EXIT_FAILURE, 0,
-+		  error (EXIT_FAILURE, 0, "%s",
- 			 _("could not convert from UTF-8 to UCS-4"));
- 		}
- 
-@@ -537,7 +537,8 @@ main (int argc, char *argv[])
- 	  r = stringprep_utf8_nfkc_normalize (p, -1);
- 	  free (p);
- 	  if (!r)
--	    error (EXIT_FAILURE, 0, _("could not do NFKC normalization"));
-+	    error (EXIT_FAILURE, 0, "%s",
-+		   _("could not do NFKC normalization"));
- 
- 	  if (args_info.debug_given)
+@@ -241,7 +242,7 @@ doit (void)
+ 			    &outlen, q, NULL);
+       if (rc != punycode[i].rc)
+ 	{
+-	  fail ("punycode() entry %ld failed: %d\n", i, rc);
++	  fail ("punycode() entry %u failed: %d\n", i, rc);
+ 	  if (debug)
+ 	    printf ("FATAL\n");
+ 	  continue;
+@@ -262,7 +263,7 @@ doit (void)
+ 	  if (punycode[i].inlen != outlen ||
+ 	      memcmp (punycode[i].in, q, outlen) != 0)
  	    {
-@@ -547,7 +548,7 @@ main (int argc, char *argv[])
- 	      if (!q)
- 		{
- 		  free (r);
--		  error (EXIT_FAILURE, 0,
-+		  error (EXIT_FAILURE, 0, "%s",
- 			 _("could not convert from UTF-8 to UCS-4"));
- 		}
+-	      fail ("punycode_decode() entry %ld failed\n", i);
++	      fail ("punycode_decode() entry %u failed\n", i);
+ 	      if (debug)
+ 		printf ("ERROR\n");
+ 	    }
+diff --git a/tests/tst_strerror.c b/tests/tst_strerror.c
+index 71fff59..730f5e4 100644
+--- a/tests/tst_strerror.c
++++ b/tests/tst_strerror.c
+@@ -110,7 +110,7 @@ doit (void)
+   /* Iterate through all error codes. */
  
+   {
+-    size_t i;
++    unsigned i;
+     const char *last_p = NULL;
+ 
+     for (i = 0;; i++)
+@@ -126,13 +126,13 @@ doit (void)
+ 	    break;
+ 	  }
+ 	if (debug)
+-	  printf ("idna %ld: %s\n", i, p);
++	  printf ("idna %u: %s\n", i, p);
+ 	last_p = p;
+       }
+   }
+ 
+   {
+-    size_t i;
++    unsigned i;
+     const char *last_p = NULL;
+ 
+     for (i = 0;; i++)
+@@ -141,13 +141,13 @@ doit (void)
+ 	if (p == last_p)
+ 	  break;
+ 	if (debug)
+-	  printf ("pr29 %ld: %s\n", i, p);
++	  printf ("pr29 %u: %s\n", i, p);
+ 	last_p = p;
+       }
+   }
+ 
+   {
+-    size_t i;
++    unsigned i;
+     const char *last_p = NULL;
+ 
+     for (i = 0;; i++)
+@@ -156,13 +156,13 @@ doit (void)
+ 	if (p == last_p)
+ 	  break;
+ 	if (debug)
+-	  printf ("punycode %ld: %s\n", i, p);
++	  printf ("punycode %u: %s\n", i, p);
+ 	last_p = p;
+       }
+   }
+ 
+   {
+-    size_t i;
++    unsigned i;
+     const char *last_p = NULL;
+ 
+     for (i = 0;; i++)
+@@ -183,13 +183,13 @@ doit (void)
+ 	    break;
+ 	  }
+ 	if (debug)
+-	  printf ("stringprep %ld: %s\n", i, p);
++	  printf ("stringprep %u: %s\n", i, p);
+ 	last_p = p;
+       }
+   }
+ 
+   {
+-    size_t i;
++    unsigned i;
+     const char *last_p = NULL;
+ 
+     for (i = 0;; i++)
+@@ -198,7 +198,7 @@ doit (void)
+ 	if (p == last_p)
+ 	  break;
+ 	if (debug)
+-	  printf ("tld %ld: %s\n", i, p);
++	  printf ("tld %u: %s\n", i, p);
+ 	last_p = p;
+       }
+   }
+diff --git a/tests/tst_stringprep.c b/tests/tst_stringprep.c
+index 149ce6f..7c9ab06 100644
+--- a/tests/tst_stringprep.c
++++ b/tests/tst_stringprep.c
+@@ -205,7 +205,7 @@ doit (void)
+ {
+   char *p;
+   int rc;
+-  size_t i;
++  unsigned i;
+ 
+   if (!stringprep_check_version (STRINGPREP_VERSION))
+     fail ("stringprep_check_version failed (header %s runtime %s)\n",
+@@ -224,7 +224,7 @@ doit (void)
+   for (i = 0; i < sizeof (strprep) / sizeof (strprep[0]); i++)
+     {
+       if (debug)
+-	printf ("STRINGPREP entry %ld\n", i);
++	printf ("STRINGPREP entry %u\n", i);
+ 
+       if (debug)
+ 	{
+@@ -247,12 +247,12 @@ doit (void)
+ 	  continue;
+ 	else if (l == NULL)
+ 	  {
+-	    fail ("bad UTF-8 in entry %ld\n", i);
++	    fail ("bad UTF-8 in entry %u\n", i);
+ 	    continue;
+ 	  }
+ 	else if (strcmp (strprep[i].in, x) != 0)
+ 	  {
+-	    fail ("bad UTF-8 in entry %ld\n", i);
++	    fail ("bad UTF-8 in entry %u\n", i);
+ 	    if (debug)
+ 	      {
+ 		puts ("expected:");
+@@ -274,7 +274,7 @@ doit (void)
+ 			       "Nameprep", strprep[i].flags);
+       if (rc != strprep[i].rc)
+ 	{
+-	  fail ("stringprep() entry %ld failed: %d\n", i, rc);
++	  fail ("stringprep() entry %u failed: %d\n", i, rc);
+ 	  if (debug)
+ 	    printf ("FATAL\n");
+ 	  if (rc == STRINGPREP_OK)
+@@ -302,7 +302,7 @@ doit (void)
+ 	  if (strlen (strprep[i].out) != strlen (p) ||
+ 	      memcmp (strprep[i].out, p, strlen (p)) != 0)
+ 	    {
+-	      fail ("stringprep() entry %ld failed\n", i);
++	      fail ("stringprep() entry %ld failed\n", (long) i);
+ 	      if (debug)
+ 		printf ("ERROR\n");
+ 	    }
+diff --git a/tests/tst_tld.c b/tests/tst_tld.c
+index 2f8e12e..d038c79 100644
+--- a/tests/tst_tld.c
++++ b/tests/tst_tld.c
+@@ -80,7 +80,7 @@ const Tld_table * my_tld_tables[] =
+ void
+ doit (void)
+ {
+-  size_t i;
++  unsigned i;
+   const Tld_table *tldtable;
+   char *out;
+   size_t errpos;
+@@ -206,7 +206,7 @@ doit (void)
+   for (i = 0; i < sizeof (tld) / sizeof (tld[0]); i++)
+     {
+       if (debug)
+-	printf ("TLD entry %ld: %s\n", i, tld[i].name);
++	printf ("TLD entry %u: %s\n", i, tld[i].name);
+ 
+       if (debug)
+ 	{
+@@ -217,7 +217,7 @@ doit (void)
+       tldtable = tld_default_table (tld[i].tld, NULL);
+       if (tldtable == NULL)
+ 	{
+-	  fail ("TLD entry %ld tld_get_table (%s)\n", i, tld[i].tld);
++	  fail ("TLD entry %u tld_get_table (%s)\n", i, tld[i].tld);
+ 	  if (debug)
+ 	    printf ("FATAL\n");
+ 	  continue;
+@@ -226,7 +226,7 @@ doit (void)
+       rc = tld_check_4t (tld[i].in, tld[i].inlen, &errpos, tldtable);
+       if (rc != tld[i].rc)
+ 	{
+-	  fail ("TLD entry %ld failed: %d\n", i, rc);
++	  fail ("TLD entry %u failed: %d\n", i, rc);
+ 	  if (debug)
+ 	    printf ("FATAL\n");
+ 	  continue;
+@@ -237,7 +237,7 @@ doit (void)
+ 
+       if (rc != tld[i].rc)
+ 	{
+-	  fail ("TLD entry %ld failed\n", i);
++	  fail ("TLD entry %u failed\n", i);
+ 	  if (debug)
+ 	    printf ("ERROR\n");
+ 	}
+@@ -245,12 +245,12 @@ doit (void)
+ 	{
+ 	  if (debug)
+ 	    printf ("returned errpos %ld expected errpos %ld\n",
+-		    errpos, tld[i].errpos);
++		    (long) errpos, (long) tld[i].errpos);
+ 
+ 	  if (tld[i].errpos != errpos)
+ 	    {
+-	      fail ("TLD entry %ld failed because errpos %ld != %ld\n", i,
+-		    tld[i].errpos, errpos);
++	      fail ("TLD entry %u failed because errpos %ld != %ld\n", i,
++		    (long) tld[i].errpos, (long) errpos);
+ 	      if (debug)
+ 		printf ("ERROR\n");
+ 	    }
+@@ -262,12 +262,12 @@ doit (void)
+ 	rc = tld_check_8z (tld[i].example, &errpos, NULL);
+ 	if (rc != tld[i].rc)
+ 	  {
+-	    fail ("TLD entry %ld failed\n", i);
++	    fail ("TLD entry %u failed\n", i);
+ 	    if (debug)
+ 	      printf ("ERROR\n");
+ 	  }
+ 	if (debug)
+-	  printf ("TLD entry %ld tld_check_8z (%s)\n", i, tld[i].example);
++	  printf ("TLD entry %u tld_check_8z (%s)\n", i, tld[i].example);
+       }
+     }
+ }
+diff --git a/tests/utils.c b/tests/utils.c
+index 717ee01..5577dc3 100644
+--- a/tests/utils.c
++++ b/tests/utils.c
+@@ -49,7 +49,7 @@ escapeprint (const char *str, size_t len)
+ {
+   size_t i;
+ 
+-  printf (" (length %ld bytes):\n\t", len);
++  printf (" (length %ld bytes):\n\t", (long) len);
+   for (i = 0; i < len; i++)
+     {
+       if (((str[i] & 0xFF) >= 'A' && (str[i] & 0xFF) <= 'Z') ||
+@@ -58,7 +58,7 @@ escapeprint (const char *str, size_t len)
+ 	  || (str[i] & 0xFF) == ' ' || (str[i] & 0xFF) == '.')
+ 	printf ("%c", (str[i] & 0xFF));
+       else
+-	printf ("\\x%02X", (str[i] & 0xFF));
++	printf ("\\x%02X", (unsigned) (str[i] & 0xFF));
+       if ((i + 1) % 16 == 0 && (i + 1) < len)
+ 	printf ("'\n\t'");
+     }
+@@ -73,7 +73,7 @@ hexprint (const char *str, size_t len)
+   printf ("\t;; ");
+   for (i = 0; i < len; i++)
+     {
+-      printf ("%02x ", (str[i] & 0xFF));
++      printf ("%02x ", (unsigned) (str[i] & 0xFF));
+       if ((i + 1) % 8 == 0)
+ 	printf (" ");
+       if ((i + 1) % 16 == 0 && i + 1 < len)
 -- 
-2.8.1
+1.9.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn/0001-idn-format-security-warnings.patch b/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn/0001-idn-format-security-warnings.patch
new file mode 100644
index 0000000..5adc7d9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn/0001-idn-format-security-warnings.patch
@@ -0,0 +1,181 @@
+From 82f98dcbc429bbe89a9837c533cbcbc02e77c790 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Andr=C3=A9=20Draszik?= <adraszik@tycoint.com>
+Date: Tue, 28 Jun 2016 12:43:31 +0100
+Subject: [PATCH] idn: fix printf() format security warnings
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+| ../../libidn-1.32/src/idn.c: In function 'main':
+| ../../libidn-1.32/src/idn.c:172:7: error: format not a string literal and no format arguments [-Werror=format-security]
+|        error (0, 0, _("only one of -s, -e, -d, -a, -u or -n can be specified"));
+|        ^~~~~
+| ../../libidn-1.32/src/idn.c:187:5: error: format not a string literal and no format arguments [-Werror=format-security]
+|      fprintf (stderr, _("Type each input string on a line by itself, "
+|      ^~~~~~~
+| ../../libidn-1.32/src/idn.c:202:4: error: format not a string literal and no format arguments [-Werror=format-security]
+|     error (EXIT_FAILURE, errno, _("input error"));
+|     ^~~~~
+| ../../libidn-1.32/src/idn.c:220:8: error: format not a string literal and no format arguments [-Werror=format-security]
+|         _("could not convert from UTF-8 to UCS-4"));
+|         ^
+| ../../libidn-1.32/src/idn.c:245:8: error: format not a string literal and no format arguments [-Werror=format-security]
+|         _("could not convert from UTF-8 to UCS-4"));
+|         ^
+| ../../libidn-1.32/src/idn.c:281:6: error: format not a string literal and no format arguments [-Werror=format-security]
+|       _("could not convert from UTF-8 to UCS-4"));
+|       ^
+| ../../libidn-1.32/src/idn.c:340:6: error: format not a string literal and no format arguments [-Werror=format-security]
+|       _("could not convert from UCS-4 to UTF-8"));
+|       ^
+| ../../libidn-1.32/src/idn.c:364:6: error: format not a string literal and no format arguments [-Werror=format-security]
+|       _("could not convert from UCS-4 to UTF-8"));
+|       ^
+| ../../libidn-1.32/src/idn.c:442:8: error: format not a string literal and no format arguments [-Werror=format-security]
+|         _("could not convert from UCS-4 to UTF-8"));
+|         ^
+| ../../libidn-1.32/src/idn.c:498:6: error: format not a string literal and no format arguments [-Werror=format-security]
+|       _("could not convert from UTF-8 to UCS-4"));
+|       ^
+| ../../libidn-1.32/src/idn.c:527:5: error: format not a string literal and no format arguments [-Werror=format-security]
+|      _("could not convert from UTF-8 to UCS-4"));
+|      ^
+| ../../libidn-1.32/src/idn.c:540:6: error: format not a string literal and no format arguments [-Werror=format-security]
+|       error (EXIT_FAILURE, 0, _("could not do NFKC normalization"));
+|       ^~~~~
+| ../../libidn-1.32/src/idn.c:551:5: error: format not a string literal and no format arguments [-Werror=format-security]
+|      _("could not convert from UTF-8 to UCS-4"));
+|      ^
+
+Signed-off-by: André Draszik <adraszik@tycoint.com>
+---
+Upstream-Status: Pending
+
+ src/idn.c | 27 ++++++++++++++-------------
+ 1 file changed, 14 insertions(+), 13 deletions(-)
+
+diff --git a/src/idn.c b/src/idn.c
+index be1c7d1..68e4291 100644
+--- a/src/idn.c
++++ b/src/idn.c
+@@ -170,7 +170,7 @@ main (int argc, char *argv[])
+       (args_info.idna_to_unicode_given ? 1 : 0) +
+       (args_info.nfkc_given ? 1 : 0) != 1)
+     {
+-      error (0, 0, _("only one of -s, -e, -d, -a, -u or -n can be specified"));
++      error (0, 0, "%s", _("only one of -s, -e, -d, -a, -u or -n can be specified"));
+       usage (EXIT_FAILURE);
+     }
+ 
+@@ -185,7 +185,7 @@ main (int argc, char *argv[])
+   if (!args_info.quiet_given
+       && args_info.inputs_num == 0
+       && isatty (fileno (stdin)))
+-    fprintf (stderr, _("Type each input string on a line by itself, "
++    fprintf (stderr, "%s", _("Type each input string on a line by itself, "
+ 		       "terminated by a newline character.\n"));
+ 
+   do
+@@ -197,7 +197,7 @@ main (int argc, char *argv[])
+ 	  if (feof (stdin))
+ 	    break;
+ 
+-	  error (EXIT_FAILURE, errno, _("input error"));
++	  error (EXIT_FAILURE, errno, "%s", _("input error"));
+ 	}
+ 
+       if (strlen (line) > 0)
+@@ -215,7 +215,7 @@ main (int argc, char *argv[])
+ 	  if (!q)
+ 	    {
+ 	      free (p);
+-	      error (EXIT_FAILURE, 0,
++	      error (EXIT_FAILURE, 0, "%s",
+ 		     _("could not convert from UTF-8 to UCS-4"));
+ 	    }
+ 
+@@ -240,7 +240,7 @@ main (int argc, char *argv[])
+ 	  if (!q)
+ 	    {
+ 	      free (r);
+-	      error (EXIT_FAILURE, 0,
++	      error (EXIT_FAILURE, 0, "%s",
+ 		     _("could not convert from UTF-8 to UCS-4"));
+ 	    }
+ 
+@@ -277,7 +277,7 @@ main (int argc, char *argv[])
+ 	  q = stringprep_utf8_to_ucs4 (p, -1, &len);
+ 	  free (p);
+ 	  if (!q)
+-	    error (EXIT_FAILURE, 0,
++	    error (EXIT_FAILURE, 0, "%s",
+ 		   _("could not convert from UTF-8 to UCS-4"));
+ 
+ 	  if (args_info.debug_given)
+@@ -336,7 +336,7 @@ main (int argc, char *argv[])
+ 	  r = stringprep_ucs4_to_utf8 (q, -1, NULL, NULL);
+ 	  free (q);
+ 	  if (!r)
+-	    error (EXIT_FAILURE, 0,
++	    error (EXIT_FAILURE, 0, "%s",
+ 		   _("could not convert from UCS-4 to UTF-8"));
+ 
+ 	  p = stringprep_utf8_to_locale (r);
+@@ -360,7 +360,7 @@ main (int argc, char *argv[])
+ 	  q = stringprep_utf8_to_ucs4 (p, -1, NULL);
+ 	  free (p);
+ 	  if (!q)
+-	    error (EXIT_FAILURE, 0,
++	    error (EXIT_FAILURE, 0, "%s",
+ 		   _("could not convert from UCS-4 to UTF-8"));
+ 
+ 	  if (args_info.debug_given)
+@@ -438,7 +438,7 @@ main (int argc, char *argv[])
+ 	  if (!q)
+ 	    {
+ 	      free (p);
+-	      error (EXIT_FAILURE, 0,
++	      error (EXIT_FAILURE, 0, "%s",
+ 		     _("could not convert from UCS-4 to UTF-8"));
+ 	    }
+ 
+@@ -494,7 +494,7 @@ main (int argc, char *argv[])
+ 	  r = stringprep_ucs4_to_utf8 (q, -1, NULL, NULL);
+ 	  free (q);
+ 	  if (!r)
+-	    error (EXIT_FAILURE, 0,
++	    error (EXIT_FAILURE, 0, "%s",
+ 		   _("could not convert from UTF-8 to UCS-4"));
+ 
+ 	  p = stringprep_utf8_to_locale (r);
+@@ -523,7 +523,7 @@ main (int argc, char *argv[])
+ 	      if (!q)
+ 		{
+ 		  free (p);
+-		  error (EXIT_FAILURE, 0,
++		  error (EXIT_FAILURE, 0, "%s",
+ 			 _("could not convert from UTF-8 to UCS-4"));
+ 		}
+ 
+@@ -537,7 +537,8 @@ main (int argc, char *argv[])
+ 	  r = stringprep_utf8_nfkc_normalize (p, -1);
+ 	  free (p);
+ 	  if (!r)
+-	    error (EXIT_FAILURE, 0, _("could not do NFKC normalization"));
++	    error (EXIT_FAILURE, 0, "%s",
++		   _("could not do NFKC normalization"));
+ 
+ 	  if (args_info.debug_given)
+ 	    {
+@@ -547,7 +548,7 @@ main (int argc, char *argv[])
+ 	      if (!q)
+ 		{
+ 		  free (r);
+-		  error (EXIT_FAILURE, 0,
++		  error (EXIT_FAILURE, 0, "%s",
+ 			 _("could not convert from UTF-8 to UCS-4"));
+ 		}
+ 
+-- 
+2.8.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn/gcc7-compatibility.patch b/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn/gcc7-compatibility.patch
new file mode 100644
index 0000000..546a6ea
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn/gcc7-compatibility.patch
@@ -0,0 +1,334 @@
+From 230930b3bc3e431b819eb45420cb42475d83ca93 Mon Sep 17 00:00:00 2001
+From: =?utf8?q?Tim=20R=C3=BChsen?= <tim.ruehsen@gmx.de>
+Date: Wed, 1 Feb 2017 10:44:36 +0100
+Subject: [PATCH] Update intprops.h for gcc-7 compatibility
+
+---
+Upstream-Status: Backport
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+ gl/intprops.h          | 65 ++++++++++++++++++++++++++++++--------------------
+ lib/gltests/intprops.h | 65 ++++++++++++++++++++++++++++++--------------------
+ 2 files changed, 78 insertions(+), 52 deletions(-)
+
+diff --git a/gl/intprops.h b/gl/intprops.h
+index e1fce5c..eb06b69 100644
+--- a/gl/intprops.h
++++ b/gl/intprops.h
+@@ -1,18 +1,18 @@
+ /* intprops.h -- properties of integer types
+ 
+-   Copyright (C) 2001-2016 Free Software Foundation, Inc.
++   Copyright (C) 2001-2017 Free Software Foundation, Inc.
+ 
+    This program is free software: you can redistribute it and/or modify it
+-   under the terms of the GNU General Public License as published
+-   by the Free Software Foundation; either version 3 of the License, or
++   under the terms of the GNU Lesser General Public License as published
++   by the Free Software Foundation; either version 2.1 of the License, or
+    (at your option) any later version.
+ 
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+-   GNU General Public License for more details.
++   GNU Lesser General Public License for more details.
+ 
+-   You should have received a copy of the GNU General Public License
++   You should have received a copy of the GNU Lesser General Public License
+    along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+ 
+ /* Written by Paul Eggert.  */
+@@ -47,12 +47,16 @@
+ 
+ /* Minimum and maximum values for integer types and expressions.  */
+ 
++/* The width in bits of the integer type or expression T.
++   Padding bits are not supported; this is checked at compile-time below.  */
++#define TYPE_WIDTH(t) (sizeof (t) * CHAR_BIT)
++
+ /* The maximum and minimum values for the integer type T.  */
+ #define TYPE_MINIMUM(t) ((t) ~ TYPE_MAXIMUM (t))
+ #define TYPE_MAXIMUM(t)                                                 \
+   ((t) (! TYPE_SIGNED (t)                                               \
+         ? (t) -1                                                        \
+-        : ((((t) 1 << (sizeof (t) * CHAR_BIT - 2)) - 1) * 2 + 1)))
++        : ((((t) 1 << (TYPE_WIDTH (t) - 2)) - 1) * 2 + 1)))
+ 
+ /* The maximum and minimum values for the type of the expression E,
+    after integer promotion.  E should not have side effects.  */
+@@ -65,7 +69,13 @@
+    ? _GL_SIGNED_INT_MAXIMUM (e)                                         \
+    : _GL_INT_NEGATE_CONVERT (e, 1))
+ #define _GL_SIGNED_INT_MAXIMUM(e)                                       \
+-  (((_GL_INT_CONVERT (e, 1) << (sizeof ((e) + 0) * CHAR_BIT - 2)) - 1) * 2 + 1)
++  (((_GL_INT_CONVERT (e, 1) << (TYPE_WIDTH ((e) + 0) - 2)) - 1) * 2 + 1)
++
++/* Work around OpenVMS incompatibility with C99.  */
++#if !defined LLONG_MAX && defined __INT64_MAX
++# define LLONG_MAX __INT64_MAX
++# define LLONG_MIN __INT64_MIN
++#endif
+ 
+ /* This include file assumes that signed types are two's complement without
+    padding bits; the above macros have undefined behavior otherwise.
+@@ -84,10 +94,15 @@ verify (TYPE_MAXIMUM (long int) == LONG_MAX);
+ verify (TYPE_MINIMUM (long long int) == LLONG_MIN);
+ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+ #endif
++/* Similarly, sanity-check one ISO/IEC TS 18661-1:2014 macro if defined.  */
++#ifdef UINT_WIDTH
++verify (TYPE_WIDTH (unsigned int) == UINT_WIDTH);
++#endif
+ 
+ /* Does the __typeof__ keyword work?  This could be done by
+    'configure', but for now it's easier to do it by hand.  */
+-#if (2 <= __GNUC__ || defined __IBM__TYPEOF__ \
++#if (2 <= __GNUC__ \
++     || (1210 <= __IBMC__ && defined __IBM__TYPEOF__) \
+      || (0x5110 <= __SUNPRO_C && !__STDC__))
+ # define _GL_HAVE___TYPEOF__ 1
+ #else
+@@ -116,8 +131,7 @@ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+    signed, this macro may overestimate the true bound by one byte when
+    applied to unsigned types of size 2, 4, 16, ... bytes.  */
+ #define INT_STRLEN_BOUND(t)                                     \
+-  (INT_BITS_STRLEN_BOUND (sizeof (t) * CHAR_BIT                 \
+-                          - _GL_SIGNED_TYPE_OR_EXPR (t))        \
++  (INT_BITS_STRLEN_BOUND (TYPE_WIDTH (t) - _GL_SIGNED_TYPE_OR_EXPR (t)) \
+    + _GL_SIGNED_TYPE_OR_EXPR (t))
+ 
+ /* Bound on buffer size needed to represent an integer type or expression T,
+@@ -222,20 +236,23 @@ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+    ? (a) < (min) >> (b)                                 \
+    : (max) >> (b) < (a))
+ 
+-/* True if __builtin_add_overflow (A, B, P) works when P is null.  */
+-#define _GL_HAS_BUILTIN_OVERFLOW_WITH_NULL (7 <= __GNUC__)
++/* True if __builtin_add_overflow (A, B, P) works when P is non-null.  */
++#define _GL_HAS_BUILTIN_OVERFLOW (5 <= __GNUC__)
++
++/* True if __builtin_add_overflow_p (A, B, C) works.  */
++#define _GL_HAS_BUILTIN_OVERFLOW_P (7 <= __GNUC__)
+ 
+ /* The _GL*_OVERFLOW macros have the same restrictions as the
+    *_RANGE_OVERFLOW macros, except that they do not assume that operands
+    (e.g., A and B) have the same type as MIN and MAX.  Instead, they assume
+    that the result (e.g., A + B) has that type.  */
+-#if _GL_HAS_BUILTIN_OVERFLOW_WITH_NULL
+-# define _GL_ADD_OVERFLOW(a, b, min, max)
+-   __builtin_add_overflow (a, b, (__typeof__ ((a) + (b)) *) 0)
+-# define _GL_SUBTRACT_OVERFLOW(a, b, min, max)
+-   __builtin_sub_overflow (a, b, (__typeof__ ((a) - (b)) *) 0)
+-# define _GL_MULTIPLY_OVERFLOW(a, b, min, max)
+-   __builtin_mul_overflow (a, b, (__typeof__ ((a) * (b)) *) 0)
++#if _GL_HAS_BUILTIN_OVERFLOW_P
++# define _GL_ADD_OVERFLOW(a, b, min, max)                               \
++   __builtin_add_overflow_p (a, b, (__typeof__ ((a) + (b))) 0)
++# define _GL_SUBTRACT_OVERFLOW(a, b, min, max)                          \
++   __builtin_sub_overflow_p (a, b, (__typeof__ ((a) - (b))) 0)
++# define _GL_MULTIPLY_OVERFLOW(a, b, min, max)                          \
++   __builtin_mul_overflow_p (a, b, (__typeof__ ((a) * (b))) 0)
+ #else
+ # define _GL_ADD_OVERFLOW(a, b, min, max)                                \
+    ((min) < 0 ? INT_ADD_RANGE_OVERFLOW (a, b, min, max)                  \
+@@ -315,7 +332,7 @@ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+   _GL_BINARY_OP_OVERFLOW (a, b, _GL_ADD_OVERFLOW)
+ #define INT_SUBTRACT_OVERFLOW(a, b) \
+   _GL_BINARY_OP_OVERFLOW (a, b, _GL_SUBTRACT_OVERFLOW)
+-#if _GL_HAS_BUILTIN_OVERFLOW_WITH_NULL
++#if _GL_HAS_BUILTIN_OVERFLOW_P
+ # define INT_NEGATE_OVERFLOW(a) INT_SUBTRACT_OVERFLOW (0, a)
+ #else
+ # define INT_NEGATE_OVERFLOW(a) \
+@@ -349,10 +366,6 @@ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+ #define INT_MULTIPLY_WRAPV(a, b, r) \
+   _GL_INT_OP_WRAPV (a, b, r, *, __builtin_mul_overflow, INT_MULTIPLY_OVERFLOW)
+ 
+-#ifndef __has_builtin
+-# define __has_builtin(x) 0
+-#endif
+-
+ /* Nonzero if this compiler has GCC bug 68193 or Clang bug 25390.  See:
+    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68193
+    https://llvm.org/bugs/show_bug.cgi?id=25390
+@@ -369,7 +382,7 @@ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+    the operation.  BUILTIN is the builtin operation, and OVERFLOW the
+    overflow predicate.  Return 1 if the result overflows.  See above
+    for restrictions.  */
+-#if 5 <= __GNUC__ || __has_builtin (__builtin_add_overflow)
++#if _GL_HAS_BUILTIN_OVERFLOW
+ # define _GL_INT_OP_WRAPV(a, b, r, op, builtin, overflow) builtin (a, b, r)
+ #elif 201112 <= __STDC_VERSION__ && !_GL__GENERIC_BOGUS
+ # define _GL_INT_OP_WRAPV(a, b, r, op, builtin, overflow) \
+@@ -412,7 +425,7 @@ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+ # else
+ #  define _GL_INT_OP_WRAPV_LONGISH(a, b, r, op, overflow) \
+     _GL_INT_OP_CALC (a, b, r, op, overflow, unsigned long int, \
+-                     long int, LONG_MIN, LONG_MAX))
++                     long int, LONG_MIN, LONG_MAX)
+ # endif
+ #endif
+ 
+diff --git a/lib/gltests/intprops.h b/lib/gltests/intprops.h
+index e1fce5c..eb06b69 100644
+--- a/lib/gltests/intprops.h
++++ b/lib/gltests/intprops.h
+@@ -1,18 +1,18 @@
+ /* intprops.h -- properties of integer types
+ 
+-   Copyright (C) 2001-2016 Free Software Foundation, Inc.
++   Copyright (C) 2001-2017 Free Software Foundation, Inc.
+ 
+    This program is free software: you can redistribute it and/or modify it
+-   under the terms of the GNU General Public License as published
+-   by the Free Software Foundation; either version 3 of the License, or
++   under the terms of the GNU Lesser General Public License as published
++   by the Free Software Foundation; either version 2.1 of the License, or
+    (at your option) any later version.
+ 
+    This program is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+-   GNU General Public License for more details.
++   GNU Lesser General Public License for more details.
+ 
+-   You should have received a copy of the GNU General Public License
++   You should have received a copy of the GNU Lesser General Public License
+    along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
+ 
+ /* Written by Paul Eggert.  */
+@@ -47,12 +47,16 @@
+ 
+ /* Minimum and maximum values for integer types and expressions.  */
+ 
++/* The width in bits of the integer type or expression T.
++   Padding bits are not supported; this is checked at compile-time below.  */
++#define TYPE_WIDTH(t) (sizeof (t) * CHAR_BIT)
++
+ /* The maximum and minimum values for the integer type T.  */
+ #define TYPE_MINIMUM(t) ((t) ~ TYPE_MAXIMUM (t))
+ #define TYPE_MAXIMUM(t)                                                 \
+   ((t) (! TYPE_SIGNED (t)                                               \
+         ? (t) -1                                                        \
+-        : ((((t) 1 << (sizeof (t) * CHAR_BIT - 2)) - 1) * 2 + 1)))
++        : ((((t) 1 << (TYPE_WIDTH (t) - 2)) - 1) * 2 + 1)))
+ 
+ /* The maximum and minimum values for the type of the expression E,
+    after integer promotion.  E should not have side effects.  */
+@@ -65,7 +69,13 @@
+    ? _GL_SIGNED_INT_MAXIMUM (e)                                         \
+    : _GL_INT_NEGATE_CONVERT (e, 1))
+ #define _GL_SIGNED_INT_MAXIMUM(e)                                       \
+-  (((_GL_INT_CONVERT (e, 1) << (sizeof ((e) + 0) * CHAR_BIT - 2)) - 1) * 2 + 1)
++  (((_GL_INT_CONVERT (e, 1) << (TYPE_WIDTH ((e) + 0) - 2)) - 1) * 2 + 1)
++
++/* Work around OpenVMS incompatibility with C99.  */
++#if !defined LLONG_MAX && defined __INT64_MAX
++# define LLONG_MAX __INT64_MAX
++# define LLONG_MIN __INT64_MIN
++#endif
+ 
+ /* This include file assumes that signed types are two's complement without
+    padding bits; the above macros have undefined behavior otherwise.
+@@ -84,10 +94,15 @@ verify (TYPE_MAXIMUM (long int) == LONG_MAX);
+ verify (TYPE_MINIMUM (long long int) == LLONG_MIN);
+ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+ #endif
++/* Similarly, sanity-check one ISO/IEC TS 18661-1:2014 macro if defined.  */
++#ifdef UINT_WIDTH
++verify (TYPE_WIDTH (unsigned int) == UINT_WIDTH);
++#endif
+ 
+ /* Does the __typeof__ keyword work?  This could be done by
+    'configure', but for now it's easier to do it by hand.  */
+-#if (2 <= __GNUC__ || defined __IBM__TYPEOF__ \
++#if (2 <= __GNUC__ \
++     || (1210 <= __IBMC__ && defined __IBM__TYPEOF__) \
+      || (0x5110 <= __SUNPRO_C && !__STDC__))
+ # define _GL_HAVE___TYPEOF__ 1
+ #else
+@@ -116,8 +131,7 @@ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+    signed, this macro may overestimate the true bound by one byte when
+    applied to unsigned types of size 2, 4, 16, ... bytes.  */
+ #define INT_STRLEN_BOUND(t)                                     \
+-  (INT_BITS_STRLEN_BOUND (sizeof (t) * CHAR_BIT                 \
+-                          - _GL_SIGNED_TYPE_OR_EXPR (t))        \
++  (INT_BITS_STRLEN_BOUND (TYPE_WIDTH (t) - _GL_SIGNED_TYPE_OR_EXPR (t)) \
+    + _GL_SIGNED_TYPE_OR_EXPR (t))
+ 
+ /* Bound on buffer size needed to represent an integer type or expression T,
+@@ -222,20 +236,23 @@ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+    ? (a) < (min) >> (b)                                 \
+    : (max) >> (b) < (a))
+ 
+-/* True if __builtin_add_overflow (A, B, P) works when P is null.  */
+-#define _GL_HAS_BUILTIN_OVERFLOW_WITH_NULL (7 <= __GNUC__)
++/* True if __builtin_add_overflow (A, B, P) works when P is non-null.  */
++#define _GL_HAS_BUILTIN_OVERFLOW (5 <= __GNUC__)
++
++/* True if __builtin_add_overflow_p (A, B, C) works.  */
++#define _GL_HAS_BUILTIN_OVERFLOW_P (7 <= __GNUC__)
+ 
+ /* The _GL*_OVERFLOW macros have the same restrictions as the
+    *_RANGE_OVERFLOW macros, except that they do not assume that operands
+    (e.g., A and B) have the same type as MIN and MAX.  Instead, they assume
+    that the result (e.g., A + B) has that type.  */
+-#if _GL_HAS_BUILTIN_OVERFLOW_WITH_NULL
+-# define _GL_ADD_OVERFLOW(a, b, min, max)
+-   __builtin_add_overflow (a, b, (__typeof__ ((a) + (b)) *) 0)
+-# define _GL_SUBTRACT_OVERFLOW(a, b, min, max)
+-   __builtin_sub_overflow (a, b, (__typeof__ ((a) - (b)) *) 0)
+-# define _GL_MULTIPLY_OVERFLOW(a, b, min, max)
+-   __builtin_mul_overflow (a, b, (__typeof__ ((a) * (b)) *) 0)
++#if _GL_HAS_BUILTIN_OVERFLOW_P
++# define _GL_ADD_OVERFLOW(a, b, min, max)                               \
++   __builtin_add_overflow_p (a, b, (__typeof__ ((a) + (b))) 0)
++# define _GL_SUBTRACT_OVERFLOW(a, b, min, max)                          \
++   __builtin_sub_overflow_p (a, b, (__typeof__ ((a) - (b))) 0)
++# define _GL_MULTIPLY_OVERFLOW(a, b, min, max)                          \
++   __builtin_mul_overflow_p (a, b, (__typeof__ ((a) * (b))) 0)
+ #else
+ # define _GL_ADD_OVERFLOW(a, b, min, max)                                \
+    ((min) < 0 ? INT_ADD_RANGE_OVERFLOW (a, b, min, max)                  \
+@@ -315,7 +332,7 @@ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+   _GL_BINARY_OP_OVERFLOW (a, b, _GL_ADD_OVERFLOW)
+ #define INT_SUBTRACT_OVERFLOW(a, b) \
+   _GL_BINARY_OP_OVERFLOW (a, b, _GL_SUBTRACT_OVERFLOW)
+-#if _GL_HAS_BUILTIN_OVERFLOW_WITH_NULL
++#if _GL_HAS_BUILTIN_OVERFLOW_P
+ # define INT_NEGATE_OVERFLOW(a) INT_SUBTRACT_OVERFLOW (0, a)
+ #else
+ # define INT_NEGATE_OVERFLOW(a) \
+@@ -349,10 +366,6 @@ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+ #define INT_MULTIPLY_WRAPV(a, b, r) \
+   _GL_INT_OP_WRAPV (a, b, r, *, __builtin_mul_overflow, INT_MULTIPLY_OVERFLOW)
+ 
+-#ifndef __has_builtin
+-# define __has_builtin(x) 0
+-#endif
+-
+ /* Nonzero if this compiler has GCC bug 68193 or Clang bug 25390.  See:
+    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68193
+    https://llvm.org/bugs/show_bug.cgi?id=25390
+@@ -369,7 +382,7 @@ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+    the operation.  BUILTIN is the builtin operation, and OVERFLOW the
+    overflow predicate.  Return 1 if the result overflows.  See above
+    for restrictions.  */
+-#if 5 <= __GNUC__ || __has_builtin (__builtin_add_overflow)
++#if _GL_HAS_BUILTIN_OVERFLOW
+ # define _GL_INT_OP_WRAPV(a, b, r, op, builtin, overflow) builtin (a, b, r)
+ #elif 201112 <= __STDC_VERSION__ && !_GL__GENERIC_BOGUS
+ # define _GL_INT_OP_WRAPV(a, b, r, op, builtin, overflow) \
+@@ -412,7 +425,7 @@ verify (TYPE_MAXIMUM (long long int) == LLONG_MAX);
+ # else
+ #  define _GL_INT_OP_WRAPV_LONGISH(a, b, r, op, overflow) \
+     _GL_INT_OP_CALC (a, b, r, op, overflow, unsigned long int, \
+-                     long int, LONG_MIN, LONG_MAX))
++                     long int, LONG_MIN, LONG_MAX)
+ # endif
+ #endif
+ 
+-- 
+1.9.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn_1.33.bb b/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn_1.33.bb
index d3d0f55..9e8bdba 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn_1.33.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/libidn/libidn_1.33.bb
@@ -19,6 +19,8 @@
            file://avoid_AM_PROG_MKDIR_P_warning_error_with_automake_1.12.patch \
            file://dont-depend-on-help2man.patch \
            file://0001-idn-fix-printf-format-security-warnings.patch \
+           file://gcc7-compatibility.patch \
+           file://0001-idn-format-security-warnings.patch \
 "
 
 SRC_URI[md5sum] = "a9aa7e003665de9c82bd3f9fc6ccf308"
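
The two new patches are queued directly in the poky recipe here; a downstream layer that needed to carry an additional fix on top of libidn 1.33 could do the same from a bbappend instead of modifying poky. A sketch, with the patch filename purely hypothetical:

    # libidn_1.33.bbappend in an external layer
    FILESEXTRAPATHS_prepend := "${THISDIR}/${PN}:"
    SRC_URI += "file://0002-hypothetical-extra-fix.patch"
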
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libmnl/libmnl_1.0.4.bb b/import-layers/yocto-poky/meta/recipes-extended/libmnl/libmnl_1.0.4.bb
new file mode 100644
index 0000000..b458799
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libmnl/libmnl_1.0.4.bb
@@ -0,0 +1,13 @@
+SUMMARY = "Minimalistic user-space Netlink utility library"
+DESCRIPTION = "Minimalistic user-space library oriented to Netlink developers, providing \
+    functions for common tasks in parsing, validating, and constructing both the Netlink header and TLVs."
+HOMEPAGE = "http://www.netfilter.org/projects/libmnl/index.html"
+SECTION = "libs"
+LICENSE = "LGPLv2.1+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+
+SRC_URI = "http://www.netfilter.org/projects/libmnl/files/libmnl-${PV}.tar.bz2;name=tar"
+SRC_URI[tar.md5sum] = "be9b4b5328c6da1bda565ac5dffadb2d"
+SRC_URI[tar.sha256sum] = "171f89699f286a5854b72b91d06e8f8e3683064c5901fb09d954a9ab6f551f81"
+
+inherit autotools pkgconfig
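
The ";name=tar" parameter on the SRC_URI entry is what scopes the checksum lines to SRC_URI[tar.md5sum] and SRC_URI[tar.sha256sum]. The same scheme extends to recipes that fetch more than one archive; an illustrative fragment in which the URLs and checksum values are placeholders only:

    SRC_URI = "http://example.invalid/first-${PV}.tar.bz2;name=first \
               http://example.invalid/second.tar.gz;name=second"
    SRC_URI[first.sha256sum] = "<sha256 of the first tarball>"
    SRC_URI[second.sha256sum] = "<sha256 of the second tarball>"
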
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libnsl/libnsl2/0001-include-sys-cdefs.h-explicitly.patch b/import-layers/yocto-poky/meta/recipes-extended/libnsl/libnsl2/0001-include-sys-cdefs.h-explicitly.patch
new file mode 100644
index 0000000..bd647ac
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libnsl/libnsl2/0001-include-sys-cdefs.h-explicitly.patch
@@ -0,0 +1,68 @@
+From 508a0ff690dfebc17c4f55a5f81824ed549bed66 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 18 Apr 2017 09:13:33 -0700
+Subject: [PATCH 1/2] include sys/cdefs.h explicitly
+
+glibc includes this header indirectly but not musl
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/rpcsvc/nis.h    | 1 +
+ src/rpcsvc/nislib.h | 1 +
+ src/rpcsvc/ypclnt.h | 1 +
+ src/rpcsvc/ypupd.h  | 1 +
+ 4 files changed, 4 insertions(+)
+
+diff --git a/src/rpcsvc/nis.h b/src/rpcsvc/nis.h
+index 933c4d9..88cbca0 100644
+--- a/src/rpcsvc/nis.h
++++ b/src/rpcsvc/nis.h
+@@ -32,6 +32,7 @@
+ #ifndef _RPCSVC_NIS_H
+ #define _RPCSVC_NIS_H 1
+ 
++#include <sys/cdefs.h>
+ #include <features.h>
+ #include <rpc/rpc.h>
+ #include <rpcsvc/nis_tags.h>
+diff --git a/src/rpcsvc/nislib.h b/src/rpcsvc/nislib.h
+index a59c19b..a53fab3 100644
+--- a/src/rpcsvc/nislib.h
++++ b/src/rpcsvc/nislib.h
+@@ -19,6 +19,7 @@
+ #ifndef	__RPCSVC_NISLIB_H__
+ #define	__RPCSVC_NISLIB_H__
+ 
++#include <sys/cdefs.h>
+ #include <features.h>
+ 
+ __BEGIN_DECLS
+diff --git a/src/rpcsvc/ypclnt.h b/src/rpcsvc/ypclnt.h
+index fe43fd4..a686b61 100644
+--- a/src/rpcsvc/ypclnt.h
++++ b/src/rpcsvc/ypclnt.h
+@@ -20,6 +20,7 @@
+ #ifndef	__RPCSVC_YPCLNT_H__
+ #define	__RPCSVC_YPCLNT_H__
+ 
++#include <sys/cdefs.h>
+ #include <features.h>
+ 
+ /* Some defines */
+diff --git a/src/rpcsvc/ypupd.h b/src/rpcsvc/ypupd.h
+index d07fd4d..2c57301 100644
+--- a/src/rpcsvc/ypupd.h
++++ b/src/rpcsvc/ypupd.h
+@@ -33,6 +33,7 @@
+ #ifndef __RPCSVC_YPUPD_H__
+ #define __RPCSVC_YPUPD_H__
+ 
++#include <sys/cdefs.h>
+ #include <features.h>
+ 
+ #include <rpc/rpc.h>
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libnsl/libnsl2/0001-nis_call.c-Include-stdint.h-for-uintptr_t-definition.patch b/import-layers/yocto-poky/meta/recipes-extended/libnsl/libnsl2/0001-nis_call.c-Include-stdint.h-for-uintptr_t-definition.patch
new file mode 100644
index 0000000..e9ae517
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libnsl/libnsl2/0001-nis_call.c-Include-stdint.h-for-uintptr_t-definition.patch
@@ -0,0 +1,27 @@
+From d71cbeb3b76e54778a4d5eec6d387cce653537ca Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 9 Jun 2017 09:49:35 -0700
+Subject: [PATCH] nis_call.c: Include stdint.h for uintptr_t definition
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/nisplus/nis_call.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/src/nisplus/nis_call.c b/src/nisplus/nis_call.c
+index 1a2b90c..1dc982d 100644
+--- a/src/nisplus/nis_call.c
++++ b/src/nisplus/nis_call.c
+@@ -23,6 +23,7 @@
+ #include <errno.h>
+ #include <fcntl.h>
+ #include <string.h>
++#include <stdint.h>
+ #include <libintl.h>
+ #include <rpc/rpc.h>
+ #include <rpc/auth.h>
+-- 
+2.13.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libnsl/libnsl2/0002-Define-glibc-specific-macros.patch b/import-layers/yocto-poky/meta/recipes-extended/libnsl/libnsl2/0002-Define-glibc-specific-macros.patch
new file mode 100644
index 0000000..75fda4b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libnsl/libnsl2/0002-Define-glibc-specific-macros.patch
@@ -0,0 +1,57 @@
+From 60282514ea01af004d7f9e66dd3929223b7d2e7b Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 18 Apr 2017 09:16:12 -0700
+Subject: [PATCH 2/2] Define glibc specific macros
+
+Check for and define
+rawmemchr, __asprintf, __mempcpy, __strtok_r,
+__always_inline and TEMP_FAILURE_RETRY
+
+if they are not already defined. This helps when compiling with musl.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+diff --git a/src/rpcsvc/nis.h b/src/rpcsvc/nis.h
+index 88cbca0..23fc20c 100644
+--- a/src/rpcsvc/nis.h
++++ b/src/rpcsvc/nis.h
+@@ -57,6 +57,34 @@ __BEGIN_DECLS
+  *                                              <kukuk@suse.de>
+  */
+ 
++#ifndef rawmemchr
++#define rawmemchr(s,c) memchr((s),(c),(size_t)-1)
++#endif
++
++#ifndef __asprintf
++#define __asprintf asprintf
++#endif
++
++#ifndef __mempcpy
++#define __mempcpy mempcpy
++#endif
++
++#ifndef __strtok_r
++#define __strtok_r strtok_r
++#endif
++
++#ifndef __always_inline
++#define __always_inline __attribute__((__always_inline__))
++#endif
++
++#ifndef TEMP_FAILURE_RETRY
++#define TEMP_FAILURE_RETRY(exp) ({ \
++typeof (exp) _rc; \
++ do { \
++  _rc = (exp); \
++ } while (_rc == -1 && errno == EINTR); \
++ _rc; })
++#endif
+ 
+ #ifndef __nis_object_h
+ #define __nis_object_h
+-- 
+2.12.2
+
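As a usage illustration (assuming GCC, since both typeof and statement expressions are GNU extensions), this is how the TEMP_FAILURE_RETRY fallback defined in the hunk above is typically exercised: a system call interrupted by a signal (EINTR) is simply retried.

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>

    #ifndef TEMP_FAILURE_RETRY          /* same fallback as the patch above */
    #define TEMP_FAILURE_RETRY(exp) ({ \
        typeof (exp) _rc; \
        do { \
            _rc = (exp); \
        } while (_rc == -1 && errno == EINTR); \
        _rc; })
    #endif

    int main(void)
    {
        char buf[16];
        /* A read() that fails with -1/EINTR is retried transparently. */
        ssize_t n = TEMP_FAILURE_RETRY(read(STDIN_FILENO, buf, sizeof(buf)));
        printf("read returned %zd\n", n);
        return 0;
    }
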
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libnsl/libnsl2_git.bb b/import-layers/yocto-poky/meta/recipes-extended/libnsl/libnsl2_git.bb
new file mode 100644
index 0000000..a539148
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libnsl/libnsl2_git.bb
@@ -0,0 +1,37 @@
+# Copyright (C) 2017 Khem Raj <raj.khem@gmail.com>
+# Released under the MIT license (see COPYING.MIT for the terms)
+
+SUMMARY = "Library containing NIS functions using TI-RPC (IPv6 enabled)"
+DESCRIPTION = "This library contains the public client interface for NIS(YP) and NIS+. \
+               It was part of glibc and is now a standalone package. It also supports IPv6."
+HOMEPAGE = "https://github.com/thkukuk/libnsl"
+LICENSE = "LGPL-2.1"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+SECTION = "libs"
+DEPENDS = "libtirpc"
+DEPENDS_append_libc-musl = " bsd-headers"
+
+PV = "1.0.5+git${SRCPV}"
+
+SRCREV = "dfa2f313524aff9243c4d8ce1bace73786478356"
+
+SRC_URI = "git://github.com/thkukuk/libnsl \
+           file://0001-include-sys-cdefs.h-explicitly.patch \
+           file://0002-Define-glibc-specific-macros.patch \
+           file://0001-nis_call.c-Include-stdint.h-for-uintptr_t-definition.patch \
+          "
+
+S = "${WORKDIR}/git"
+
+inherit autotools pkgconfig gettext
+
+EXTRA_OECONF += "--libdir=${libdir}/nsl --includedir=${includedir}/nsl"
+
+do_install_append() {
+	install -d ${D}${libdir}
+	mv ${D}${libdir}/nsl/pkgconfig ${D}${libdir}
+}
+
+FILES_${PN} += "${libdir}/nsl/*.so.*"
+FILES_${PN}-dev += "${includedir}/nsl ${libdir}/nsl/*.so"
+FILES_${PN}-staticdev += "${libdir}/nsl/*.a"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libsolv/libsolv/0001-repo_rpmdb.c-increase-MAX_HDR_CNT-and-MAX_HDR_DSIZE.patch b/import-layers/yocto-poky/meta/recipes-extended/libsolv/libsolv/0001-repo_rpmdb.c-increase-MAX_HDR_CNT-and-MAX_HDR_DSIZE.patch
new file mode 100644
index 0000000..4a4e5cb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libsolv/libsolv/0001-repo_rpmdb.c-increase-MAX_HDR_CNT-and-MAX_HDR_DSIZE.patch
@@ -0,0 +1,35 @@
+From 1c4c935cb73ac1ccb9693df1a51ba218a22e8ca8 Mon Sep 17 00:00:00 2001
+From: Ming Liu <liu.ming50@gmail.com>
+Date: Sat, 30 Sep 2017 11:15:16 +0800
+Subject: [PATCH] repo_rpmdb.c: increase MAX_HDR_CNT and MAX_HDR_DSIZE
+
+Upstream-Status: Submitted [https://github.com/openSUSE/libsolv/pull/230]
+
+We encountered 'corrupt rpm' issues when installing extremely large RPM
+packages such as the kernel-devsrc package from the Yocto Project.
+
+Testing shows the problem can be fixed by increasing MAX_HDR_CNT and
+MAX_HDR_DSIZE.
+
+Signed-off-by: Ming Liu <liu.ming50@gmail.com>
+---
+ ext/repo_rpmdb.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/ext/repo_rpmdb.c b/ext/repo_rpmdb.c
+index c7000a9..7000835 100644
+--- a/ext/repo_rpmdb.c
++++ b/ext/repo_rpmdb.c
+@@ -170,8 +170,8 @@
+ #define MAX_SIG_CNT		0x100000
+ #define MAX_SIG_DSIZE		0x100000
+ 
+-#define MAX_HDR_CNT		0x100000
+-#define MAX_HDR_DSIZE		0x2000000
++#define MAX_HDR_CNT		0x200000
++#define MAX_HDR_DSIZE		0x4000000
+ 
+ typedef struct rpmhead {
+   int cnt;
+-- 
+2.7.4
+
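A hedged sketch of the kind of sanity check these constants feed (the function and values below are illustrative, not libsolv's actual code): RPM headers whose index count or data segment exceed the caps are treated as corrupt, so packages with very large headers, such as kernel-devsrc, need the higher limits.

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_HDR_CNT   0x200000      /* raised limits from the patch above */
    #define MAX_HDR_DSIZE 0x4000000

    /* Reject an RPM header whose entry count or data size is implausible. */
    static int hdr_sane(uint32_t cnt, uint32_t dsize)
    {
        if (cnt == 0 || cnt > MAX_HDR_CNT)
            return 0;
        if (dsize > MAX_HDR_DSIZE)
            return 0;
        return 1;
    }

    int main(void)
    {
        /* Illustrative: 48 MiB of header data passes the new 64 MiB cap
         * but would have failed the old 32 MiB (0x2000000) cap. */
        printf("sane: %d\n", hdr_sane(0x1000, 0x3000000));
        return 0;
    }
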
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libsolv/libsolv_0.6.26.bb b/import-layers/yocto-poky/meta/recipes-extended/libsolv/libsolv_0.6.26.bb
deleted file mode 100644
index 42d63ae..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/libsolv/libsolv_0.6.26.bb
+++ /dev/null
@@ -1,31 +0,0 @@
-SUMMARY = "Library for solving packages and reading repositories"
-HOMEPAGE = "https://github.com/openSUSE/libsolv"
-BUGTRACKER = "https://github.com/openSUSE/libsolv/issues"
-SECTION = "devel"
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE.BSD;md5=62272bd11c97396d4aaf1c41bc11f7d8"
-
-DEPENDS = "expat zlib rpm"
-
-SRC_URI = "git://github.com/openSUSE/libsolv.git \
-           "
-SRC_URI_append_libc-musl = " file://0001-Add-fallback-fopencookie-implementation.patch \
-                             file://0002-Fixes-to-internal-fopencookie-implementation.patch \
-                           "
-
-SRCREV = "ba32f8286d3deec6faaabc79762a4760e9af0a07"
-UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>\d+(\.\d+)+)"
-
-S = "${WORKDIR}/git"
-
-inherit cmake
-
-EXTRA_OECMAKE = "-DLIB=${baselib} -DMULTI_SEMANTICS=ON -DENABLE_RPMMD=ON -DENABLE_RPMDB=ON"
-
-PACKAGES =+ "${PN}-tools ${PN}ext"
-
-FILES_${PN}-dev += "${datadir}/cmake/Modules/FindLibSolv.cmake"
-FILES_${PN}-tools = "${bindir}/*"
-FILES_${PN}ext = "${libdir}/${PN}ext.so.*"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libsolv/libsolv_0.6.28.bb b/import-layers/yocto-poky/meta/recipes-extended/libsolv/libsolv_0.6.28.bb
new file mode 100644
index 0000000..6c8f7fc
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libsolv/libsolv_0.6.28.bb
@@ -0,0 +1,32 @@
+SUMMARY = "Library for solving packages and reading repositories"
+HOMEPAGE = "https://github.com/openSUSE/libsolv"
+BUGTRACKER = "https://github.com/openSUSE/libsolv/issues"
+SECTION = "devel"
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE.BSD;md5=62272bd11c97396d4aaf1c41bc11f7d8"
+
+DEPENDS = "expat zlib rpm"
+
+SRC_URI = "git://github.com/openSUSE/libsolv.git \
+           file://0001-repo_rpmdb.c-increase-MAX_HDR_CNT-and-MAX_HDR_DSIZE.patch \
+          "
+SRC_URI_append_libc-musl = " file://0001-Add-fallback-fopencookie-implementation.patch \
+                             file://0002-Fixes-to-internal-fopencookie-implementation.patch \
+                           "
+
+SRCREV = "b8a9ddd88eb4e0ab351eb55a53186b5dc5ac0825"
+UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>\d+(\.\d+)+)"
+
+S = "${WORKDIR}/git"
+
+inherit cmake
+
+EXTRA_OECMAKE = "-DLIB=${baselib} -DMULTI_SEMANTICS=ON -DENABLE_RPMMD=ON -DENABLE_RPMDB=ON -DENABLE_COMPLEX_DEPS=ON"
+
+PACKAGES =+ "${PN}-tools ${PN}ext"
+
+FILES_${PN}-dev += "${datadir}/cmake/Modules/FindLibSolv.cmake"
+FILES_${PN}-tools = "${bindir}/*"
+FILES_${PN}ext = "${libdir}/${PN}ext.so.*"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/0001-Add-missing-rwlock_unlocks-in-xprt_register.patch b/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/0001-Add-missing-rwlock_unlocks-in-xprt_register.patch
deleted file mode 100644
index 50613ba..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/0001-Add-missing-rwlock_unlocks-in-xprt_register.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-Subject: [PATCH] Add missing rwlock_unlocks in xprt_register
-
-It looks like in b2c9430f46c4ac848957fb8adaac176a3f6ac03f when svc_run
-switched to poll, an early return was added, but the rwlock was not
-unlocked.
-
-I observed that rpcbind built against libtirpc-1.0.1 would handle only
-one request before hanging, and tracked it down to a missing
-rwlock_unlock here.
-
-Fixes: b2c9430f46c4 ('Use poll() instead of select() in svc_run()')
-
-Upstream-Status: Backport
-
-Signed-off-by: Michael Forney <mforney@mforney.org>
-Signed-off-by: Steve Dickson <steved@redhat.com>
-Signed-off-by: Maxin B. John <maxin.john@intel.com>
----
- src/svc.c | 7 ++++---
- 1 file changed, 4 insertions(+), 3 deletions(-)
-
-diff --git a/src/svc.c b/src/svc.c
-index 9c41445..b59467b 100644
---- a/src/svc.c
-+++ b/src/svc.c
-@@ -99,7 +99,7 @@ xprt_register (xprt)
-     {
-       __svc_xports = (SVCXPRT **) calloc (_rpc_dtablesize(), sizeof (SVCXPRT *));
-       if (__svc_xports == NULL)
--	return;
-+            goto unlock;
-     }
-   if (sock < _rpc_dtablesize())
-     {
-@@ -120,14 +120,14 @@ xprt_register (xprt)
-             svc_pollfd[i].fd = sock;
-             svc_pollfd[i].events = (POLLIN | POLLPRI |
-                                     POLLRDNORM | POLLRDBAND);
--            return;
-+            goto unlock;
-           }
- 
-       new_svc_pollfd = (struct pollfd *) realloc (svc_pollfd,
-                                                   sizeof (struct pollfd)
-                                                   * (svc_max_pollfd + 1));
-       if (new_svc_pollfd == NULL) /* Out of memory */
--        return;
-+        goto unlock;
-       svc_pollfd = new_svc_pollfd;
-       ++svc_max_pollfd;
- 
-@@ -135,6 +135,7 @@ xprt_register (xprt)
-       svc_pollfd[svc_max_pollfd - 1].events = (POLLIN | POLLPRI |
-                                                POLLRDNORM | POLLRDBAND);
-     }
-+unlock:
-   rwlock_unlock (&svc_fd_lock);
- }
- 
--- 
-2.5.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/0001-include-stdint.h-for-uintptr_t.patch b/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/0001-include-stdint.h-for-uintptr_t.patch
new file mode 100644
index 0000000..1fe9833
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/0001-include-stdint.h-for-uintptr_t.patch
@@ -0,0 +1,32 @@
+From b80d3b573c1dade2b29b22f8acc3b9e2c7ddefd7 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 20 May 2017 13:36:43 -0700
+Subject: [PATCH] include stdint.h for uintptr_t
+
+Fixes
+| ../../libtirpc-1.0.1/src/xdr_sizeof.c:93:13: error: 'uintptr_t' undeclared (first use in this function); did you mean '__intptr_t'?
+|   if (len < (uintptr_t)xdrs->x_base) {
+|              ^~~~~~~~~
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/xdr_sizeof.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/src/xdr_sizeof.c b/src/xdr_sizeof.c
+index d23fbd1..79d6707 100644
+--- a/src/xdr_sizeof.c
++++ b/src/xdr_sizeof.c
+@@ -39,6 +39,7 @@
+ #include <rpc/xdr.h>
+ #include <sys/types.h>
+ #include <stdlib.h>
++#include <stdint.h>
+ #include "un-namespace.h"
+ 
+ /* ARGSUSED */
+-- 
+2.13.0
+
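A minimal sketch of the underlying portability point: converting pointers to integers for comparison, as xdr_sizeof.c does with xdrs->x_base, needs the C99 uintptr_t type, which is only guaranteed to be visible after including <stdint.h>; glibc happens to expose it indirectly through other headers, musl does not.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        char base[32];
        char *p = base + 8;
        /* Pointer arithmetic as integers is well defined via uintptr_t. */
        uintptr_t off = (uintptr_t)p - (uintptr_t)base;
        printf("offset = %ju bytes\n", (uintmax_t)off);
        return 0;
    }
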
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/0001-replace-__bzero-with-memset-API.patch b/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/0001-replace-__bzero-with-memset-API.patch
new file mode 100644
index 0000000..d2b4da6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/0001-replace-__bzero-with-memset-API.patch
@@ -0,0 +1,30 @@
+From 20badc3e3608953fb5b36bb2e16fa51bd731aebc Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Tue, 18 Apr 2017 09:35:35 -0700
+Subject: [PATCH] replace __bzero() with memset() API
+
+memset() is available across all libc implementations.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ src/des_impl.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/des_impl.c b/src/des_impl.c
+index 9dbccaf..15bec2a 100644
+--- a/src/des_impl.c
++++ b/src/des_impl.c
+@@ -588,7 +588,7 @@ _des_crypt (char *buf, unsigned len, struct desparams *desp)
+     }
+   tin0 = tin1 = tout0 = tout1 = xor0 = xor1 = 0;
+   tbuf[0] = tbuf[1] = 0;
+-  __bzero (schedule, sizeof (schedule));
++  memset (schedule, 0, sizeof (schedule));
+ 
+   return (1);
+ }
+-- 
+2.12.2
+
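For completeness, a tiny sketch of the replacement in isolation: memset() is part of ISO C and available in every libc, whereas __bzero() is a glibc-internal symbol that musl does not provide.

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned char schedule[96];               /* stand-in for the DES key schedule */
        memset(schedule, 0, sizeof(schedule));    /* portable; __bzero() is glibc-only */
        printf("first byte after clearing: %u\n", schedule[0]);
        return 0;
    }
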
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/export_key_secretkey_is_set.patch b/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/export_key_secretkey_is_set.patch
new file mode 100644
index 0000000..a276ba2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/export_key_secretkey_is_set.patch
@@ -0,0 +1,24 @@
+Add key_secretkey_is_set to exported symbols map
+
+key_secret_is_set is a typo in the libtirpc symbol map.
+Patch taken from
+
+https://sourceforge.net/p/libtirpc/discussion/637321/thread/fd73d431/
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Index: libtirpc-1.0.1/src/libtirpc.map
+===================================================================
+--- libtirpc-1.0.1.orig/src/libtirpc.map
++++ libtirpc-1.0.1/src/libtirpc.map
+@@ -298,7 +298,7 @@ TIRPC_0.3.2 {
+     key_gendes;
+     key_get_conv;
+     key_setsecret;
+-    key_secret_is_set;
++    key_secretkey_is_set;
+     key_setnet;
+     netname2host;
+     netname2user;
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/libtirpc-0.2.1-fortify.patch b/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/libtirpc-0.2.1-fortify.patch
deleted file mode 100644
index 4a785d3..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/libtirpc-0.2.1-fortify.patch
+++ /dev/null
@@ -1,26 +0,0 @@
-Fix a possible overflow (reported by _FORTIFY_SOURCE=2)
-
-Ported from Gentoo
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Index: libtirpc-0.2.1/src/getrpcport.c
-===================================================================
---- libtirpc-0.2.1.orig/src/getrpcport.c
-+++ libtirpc-0.2.1/src/getrpcport.c
-@@ -54,11 +54,11 @@ getrpcport(host, prognum, versnum, proto
- 
- 	if ((hp = gethostbyname(host)) == NULL)
- 		return (0);
-+	if (hp->h_length != sizeof(addr.sin_addr.s_addr))
-+		return (0);
- 	memset(&addr, 0, sizeof(addr));
- 	addr.sin_family = AF_INET;
- 	addr.sin_port =  0;
--	if (hp->h_length > sizeof(addr))
--	  hp->h_length = sizeof(addr);
- 	memcpy(&addr.sin_addr.s_addr, hp->h_addr, (size_t)hp->h_length);
- 	/* Inconsistent interfaces need casts! :-( */
- 	return (pmap_getport(&addr, (u_long)prognum, (u_long)versnum, 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/remove-des-functionality.patch b/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/remove-des-functionality.patch
deleted file mode 100644
index 512e934..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc/remove-des-functionality.patch
+++ /dev/null
@@ -1,144 +0,0 @@
-uclibc and musl does not provide des functionality. Lets disable it.
-
-Upstream-Status: Inappropriate [uclibc and musl specific]
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-Signed-off-by: Maxin B. John <maxin.john@intel.com>
----
-diff -Naur libtirpc-1.0.1-orig/src/Makefile.am libtirpc-1.0.1/src/Makefile.am
---- libtirpc-1.0.1-orig/src/Makefile.am	2015-10-30 17:15:14.000000000 +0200
-+++ libtirpc-1.0.1/src/Makefile.am	2015-12-21 15:56:17.094702429 +0200
-@@ -22,9 +22,8 @@
-         pmap_prot.c pmap_prot2.c pmap_rmt.c rpc_prot.c rpc_commondata.c \
-         rpc_callmsg.c rpc_generic.c rpc_soc.c rpcb_clnt.c rpcb_prot.c \
-         rpcb_st_xdr.c svc.c svc_auth.c svc_dg.c svc_auth_unix.c svc_auth_none.c \
--	svc_auth_des.c \
-         svc_generic.c svc_raw.c svc_run.c svc_simple.c svc_vc.c getpeereid.c \
--        auth_time.c auth_des.c authdes_prot.c debug.c
-+        debug.c
- 
- ## XDR
- libtirpc_la_SOURCES += xdr.c xdr_rec.c xdr_array.c xdr_float.c xdr_mem.c xdr_reference.c xdr_stdio.c xdr_sizeof.c
-@@ -41,8 +40,8 @@
-     libtirpc_la_CFLAGS = -DHAVE_RPCSEC_GSS $(GSSAPI_CFLAGS)
- endif
- 
--libtirpc_la_SOURCES += key_call.c key_prot_xdr.c getpublickey.c
--libtirpc_la_SOURCES += netname.c netnamer.c rpcdname.c rtime.c
-+#libtirpc_la_SOURCES += key_call.c key_prot_xdr.c getpublickey.c
-+#libtirpc_la_SOURCES += netname.c netnamer.c rpcdname.c rtime.c
- 
- CLEANFILES	       = cscope.* *~
- DISTCLEANFILES	       = Makefile.in
-diff -Naur libtirpc-1.0.1-orig/src/rpc_soc.c libtirpc-1.0.1/src/rpc_soc.c
---- libtirpc-1.0.1-orig/src/rpc_soc.c	2015-10-30 17:15:14.000000000 +0200
-+++ libtirpc-1.0.1/src/rpc_soc.c	2015-12-21 15:56:17.095702416 +0200
-@@ -61,7 +61,6 @@
- #include <string.h>
- #include <unistd.h>
- #include <fcntl.h>
--#include <rpcsvc/nis.h>
- 
- #include "rpc_com.h"
- 
-@@ -522,86 +521,6 @@
- }
- 
- /*
-- * Create the client des authentication object. Obsoleted by
-- * authdes_seccreate().
-- */
--AUTH *
--authdes_create(servername, window, syncaddr, ckey)
--	char *servername;		/* network name of server */
--	u_int window;			/* time to live */
--	struct sockaddr *syncaddr;	/* optional hostaddr to sync with */
--	des_block *ckey;		/* optional conversation key to use */
--{
--	AUTH *nauth;
--	char hostname[NI_MAXHOST];
--
--	if (syncaddr) {
--		/*
--		 * Change addr to hostname, because that is the way
--		 * new interface takes it.
--		 */
--	        switch (syncaddr->sa_family) {
--		case AF_INET:
--		  if (getnameinfo(syncaddr, sizeof(struct sockaddr_in), hostname,
--				  sizeof hostname, NULL, 0, 0) != 0)
--		    goto fallback;
--		  break;
--		case AF_INET6:
--		  if (getnameinfo(syncaddr, sizeof(struct sockaddr_in6), hostname,
--				  sizeof hostname, NULL, 0, 0) != 0)
--		    goto fallback;
--		  break;
--		default:
--		  goto fallback;
--		}
--		nauth = authdes_seccreate(servername, window, hostname, ckey);
--		return (nauth);
--	}
--fallback:
--	return authdes_seccreate(servername, window, NULL, ckey);
--}
--
--/*
-- * Create the client des authentication object. Obsoleted by
-- * authdes_pk_seccreate().
-- */
--extern AUTH *authdes_pk_seccreate(const char *, netobj *, u_int, const char *,
--        const des_block *, nis_server *);
--
--AUTH *
--authdes_pk_create(servername, pkey, window, syncaddr, ckey)
--	char *servername;		/* network name of server */
--	netobj *pkey;			/* public key */
--	u_int window;			/* time to live */
--	struct sockaddr *syncaddr;	/* optional hostaddr to sync with */
--	des_block *ckey;		/* optional conversation key to use */
--{
--	AUTH *nauth;
--	char hostname[NI_MAXHOST];
--
--	if (syncaddr) {
--		/*
--		 * Change addr to hostname, because that is the way
--		 * new interface takes it.
--		 */
--	        switch (syncaddr->sa_family) {
--		case AF_INET:
--		  if (getnameinfo(syncaddr, sizeof(struct sockaddr_in), hostname,
--				  sizeof hostname, NULL, 0, 0) != 0)
--		    goto fallback;
--		  break;
--		default:
--		  goto fallback;
--		}
--		nauth = authdes_pk_seccreate(servername, pkey, window, hostname, ckey, NULL);
--		return (nauth);
--	}
--fallback:
--	return authdes_pk_seccreate(servername, pkey, window, NULL, ckey, NULL);
--}
--
--
--/*
-  * Create a client handle for a unix connection. Obsoleted by clnt_vc_create()
-  */
- CLIENT *
-diff -Naur libtirpc-1.0.1-orig/src/svc_auth.c libtirpc-1.0.1/src/svc_auth.c
---- libtirpc-1.0.1-orig/src/svc_auth.c	2015-10-30 17:15:14.000000000 +0200
-+++ libtirpc-1.0.1/src/svc_auth.c	2015-12-21 15:56:17.095702416 +0200
-@@ -114,9 +114,6 @@
- 	case AUTH_SHORT:
- 		dummy = _svcauth_short(rqst, msg);
- 		return (dummy);
--	case AUTH_DES:
--		dummy = _svcauth_des(rqst, msg);
--		return (dummy);
- #ifdef HAVE_RPCSEC_GSS
- 	case RPCSEC_GSS:
- 		dummy = _svcauth_gss(rqst, msg, no_dispatch);
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc_1.0.1.bb b/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc_1.0.1.bb
deleted file mode 100644
index e321d47..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc_1.0.1.bb
+++ /dev/null
@@ -1,40 +0,0 @@
-SUMMARY = "Transport-Independent RPC library"
-DESCRIPTION = "Libtirpc is a port of Suns Transport-Independent RPC library to Linux"
-SECTION = "libs/network"
-HOMEPAGE = "http://sourceforge.net/projects/libtirpc/"
-BUGTRACKER = "http://sourceforge.net/tracker/?group_id=183075&atid=903784"
-LICENSE = "BSD"
-LIC_FILES_CHKSUM = "file://COPYING;md5=f835cce8852481e4b2bbbdd23b5e47f3 \
-                    file://src/netname.c;beginline=1;endline=27;md5=f8a8cd2cb25ac5aa16767364fb0e3c24"
-
-PROVIDES = "virtual/librpc"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BP}.tar.bz2;name=libtirpc \
-           ${GENTOO_MIRROR}/${BPN}-glibc-nfs.tar.xz;name=glibc-nfs \
-           file://libtirpc-0.2.1-fortify.patch \
-           file://0001-Add-missing-rwlock_unlocks-in-xprt_register.patch \
-          "
-
-SRC_URI_append_libc-uclibc = " file://remove-des-functionality.patch \
-                             "
-
-SRC_URI_append_libc-musl = " file://remove-des-functionality.patch \
-                             file://Use-netbsd-queue.h.patch \
-                           "
-
-SRC_URI[libtirpc.md5sum] = "36ce1c0ff80863bb0839d54aa0b94014"
-SRC_URI[libtirpc.sha256sum] = "5156974f31be7ccbc8ab1de37c4739af6d9d42c87b1d5caf4835dda75fcbb89e"
-SRC_URI[glibc-nfs.md5sum] = "5ae500b9d0b6b72cb875bc04944b9445"
-SRC_URI[glibc-nfs.sha256sum] = "2677cfedf626f3f5a8f6e507aed5bb8f79a7453b589d684dbbc086e755170d83"
-
-inherit autotools pkgconfig
-
-EXTRA_OECONF = "--disable-gssapi"
-
-do_configure_prepend () {
-        cp -r ${S}/../tirpc ${S}
-}
-
-do_install_append() {
-        chown root:root ${D}${sysconfdir}/netconfig
-}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc_1.0.2.bb b/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc_1.0.2.bb
new file mode 100644
index 0000000..f9718c5
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/libtirpc/libtirpc_1.0.2.bb
@@ -0,0 +1,38 @@
+SUMMARY = "Transport-Independent RPC library"
+DESCRIPTION = "Libtirpc is a port of Sun's Transport-Independent RPC library to Linux"
+SECTION = "libs/network"
+HOMEPAGE = "http://sourceforge.net/projects/libtirpc/"
+BUGTRACKER = "http://sourceforge.net/tracker/?group_id=183075&atid=903784"
+LICENSE = "BSD"
+LIC_FILES_CHKSUM = "file://COPYING;md5=f835cce8852481e4b2bbbdd23b5e47f3 \
+                    file://src/netname.c;beginline=1;endline=27;md5=f8a8cd2cb25ac5aa16767364fb0e3c24"
+
+PROVIDES = "virtual/librpc"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BP}.tar.bz2;name=libtirpc \
+           ${GENTOO_MIRROR}/${BPN}-glibc-nfs.tar.xz;name=glibc-nfs \
+           file://export_key_secretkey_is_set.patch \
+           file://0001-replace-__bzero-with-memset-API.patch \
+           file://0001-include-stdint.h-for-uintptr_t.patch \
+           "
+
+SRC_URI_append_libc-musl = " \
+                             file://Use-netbsd-queue.h.patch \
+                           "
+
+SRC_URI[libtirpc.md5sum] = "d5a37f1dccec484f9cabe2b97e54e9a6"
+SRC_URI[libtirpc.sha256sum] = "723c5ce92706cbb601a8db09110df1b4b69391643158f20ff587e20e7c5f90f5"
+SRC_URI[glibc-nfs.md5sum] = "5ae500b9d0b6b72cb875bc04944b9445"
+SRC_URI[glibc-nfs.sha256sum] = "2677cfedf626f3f5a8f6e507aed5bb8f79a7453b589d684dbbc086e755170d83"
+
+inherit autotools pkgconfig
+
+EXTRA_OECONF = "--disable-gssapi"
+
+do_configure_prepend () {
+        cp -r ${WORKDIR}/tirpc ${S}
+}
+
+do_install_append() {
+        chown root:root ${D}${sysconfdir}/netconfig
+}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate/act-as-mv-when-rotate.patch b/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate/act-as-mv-when-rotate.patch
index 2e931a2..04cb588 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate/act-as-mv-when-rotate.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate/act-as-mv-when-rotate.patch
@@ -1,4 +1,4 @@
-From 68f29ab490cf987aa34b5f4caf1784b58a021308 Mon Sep 17 00:00:00 2001
+From 517cbff66c8bdbf455bc3b7c1a85a4f990d0f9a6 Mon Sep 17 00:00:00 2001
 From: Robert Yang <liezhi.yang@windriver.com>
 Date: Tue, 17 Feb 2015 21:08:07 -0800
 Subject: [PATCH] Act as the "mv" command when rotate log
@@ -10,14 +10,14 @@
 
 Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
 ---
- logrotate.c |   71 +++++++++++++++++++++++++++++++++++++++++++++++++----------
- 1 file changed, 59 insertions(+), 12 deletions(-)
+ logrotate.c | 72 ++++++++++++++++++++++++++++++++++++++++++++++++++-----------
+ 1 file changed, 60 insertions(+), 12 deletions(-)
 
 diff --git a/logrotate.c b/logrotate.c
-index d3deb6a..cf8bf2c 100644
+index 4ad58d4..ba05884 100644
 --- a/logrotate.c
 +++ b/logrotate.c
-@@ -1157,6 +1157,53 @@ int findNeedRotating(struct logInfo *log, int logNum, int force)
+@@ -1315,6 +1315,54 @@ static int findNeedRotating(struct logInfo *log, int logNum, int force)
      return 0;
  }
  
@@ -68,10 +68,11 @@
 +    return 1;
 +}
 +
- int prerotateSingleLog(struct logInfo *log, int logNum, struct logState *state,
- 		       struct logNames *rotNames)
++
+ static int prerotateSingleLog(struct logInfo *log, int logNum,
+ 			      struct logState *state, struct logNames *rotNames)
  {
-@@ -1523,15 +1570,15 @@ int prerotateSingleLog(struct logInfo *log, int logNum, struct logState *state,
+@@ -1674,15 +1722,15 @@ static int prerotateSingleLog(struct logInfo *log, int logNum,
  		}
  
  	    message(MESS_DEBUG,
@@ -90,7 +91,7 @@
  			    oldName, newName, strerror(errno));
  		    hasErrors = 1;
  		}
-@@ -1669,21 +1716,21 @@ int rotateSingleLog(struct logInfo *log, int logNum, struct logState *state,
+@@ -1767,21 +1815,21 @@ static int rotateSingleLog(struct logInfo *log, int logNum,
  				return 1;
  			}
  
@@ -115,19 +116,19 @@
 -				message(MESS_ERROR, "failed to rename %s to %s: %s\n",
 +			mvFile(log->files[logNum], rotNames->finalName, log, prev_acl)) {
 +				message(MESS_ERROR, "failed to move %s to %s: %s\n",
- 					log->files[logNum], tmpFilename,
+ 					log->files[logNum], rotNames->finalName,
  					strerror(errno));
  					hasErrors = 1;
-@@ -2063,7 +2110,7 @@ int rotateLogSet(struct logInfo *log, int force)
+@@ -2170,7 +2218,7 @@ static int rotateLogSet(struct logInfo *log, int force)
      return hasErrors;
  }
  
--static int writeState(char *stateFilename)
+-static int writeState(const char *stateFilename)
 +static int writeState(struct logInfo *log, char *stateFilename)
  {
  	struct logState *p;
  	FILE *f;
-@@ -2227,7 +2274,7 @@ static int writeState(char *stateFilename)
+@@ -2322,7 +2370,7 @@ static int writeState(const char *stateFilename)
  		fclose(f);
  
  	if (error == 0) {
@@ -136,7 +137,7 @@
  			unlink(tmpFilename);
  			error = 1;
  			message(MESS_ERROR, "error renaming temp state file %s\n",
-@@ -2525,7 +2572,7 @@ int main(int argc, const char **argv)
+@@ -2648,7 +2696,7 @@ int main(int argc, const char **argv)
  		rc |= rotateLogSet(log, force);
  
  	if (!debug)
@@ -145,3 +146,6 @@
  
  	return (rc != 0);
  }
+-- 
+1.8.3.1
+
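A simplified sketch of the idea behind the mvFile() helper this patch adds (the function below is illustrative and omits the ACL/SELinux handling of the real code): behave like mv by trying rename(2) first and falling back to copy-and-unlink when source and destination live on different filesystems (EXDEV).

    #include <errno.h>
    #include <stdio.h>

    static int mv_file(const char *src, const char *dst)
    {
        if (rename(src, dst) == 0)
            return 0;
        if (errno != EXDEV)                 /* only cross-device moves fall back */
            return -1;

        FILE *in = fopen(src, "rb");
        FILE *out = fopen(dst, "wb");
        if (!in || !out) {
            if (in) fclose(in);
            if (out) fclose(out);
            return -1;
        }
        char buf[8192];
        size_t n;
        while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
            if (fwrite(buf, 1, n, out) != n) {
                fclose(in);
                fclose(out);
                return -1;
            }
        fclose(in);
        if (fclose(out) != 0)
            return -1;
        return remove(src);                 /* complete the "move" */
    }

    int main(int argc, char **argv)
    {
        if (argc != 3) {
            fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
            return 2;
        }
        return mv_file(argv[1], argv[2]) == 0 ? 0 : 1;
    }
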
diff --git a/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate/update-the-manual.patch b/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate/update-the-manual.patch
index 50d037d..725567e 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate/update-the-manual.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate/update-the-manual.patch
@@ -1,4 +1,4 @@
-From e0b0fe30e9c49234994a20a86aacfaf80e690087 Mon Sep 17 00:00:00 2001
+From bf22e8805df69344f6f20cea390e829a22fa741b Mon Sep 17 00:00:00 2001
 From: Robert Yang <liezhi.yang@windriver.com>
 Date: Tue, 17 Feb 2015 21:14:37 -0800
 Subject: [PATCH] Update the manual
@@ -9,14 +9,14 @@
 
 Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
 ---
- logrotate.8 |   10 ++++------
+ logrotate.8.in | 10 ++++------
  1 file changed, 4 insertions(+), 6 deletions(-)
 
-diff --git a/logrotate.8 b/logrotate.8
-index e4e5f48..84407d0 100644
---- a/logrotate.8
-+++ b/logrotate.8
-@@ -405,12 +405,10 @@ Do not rotate the log if it is empty (this overrides the \fBifempty\fR option).
+diff --git a/logrotate.8.in b/logrotate.8.in
+index 951e406..581bf48 100644
+--- a/logrotate.8.in
++++ b/logrotate.8.in
+@@ -445,12 +445,10 @@ Do not rotate the log if it is empty (this overrides the \fBifempty\fR option).
  
  .TP
  \fBolddir \fIdirectory\fR
@@ -34,5 +34,5 @@
  
  .TP
 -- 
-1.7.9.5
+1.8.3.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate_3.12.3.bb b/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate_3.12.3.bb
new file mode 100644
index 0000000..620b208
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate_3.12.3.bb
@@ -0,0 +1,85 @@
+SUMMARY = "Rotates, compresses, removes and mails system log files"
+SECTION = "console/utils"
+HOMEPAGE = "https://github.com/logrotate/logrotate/issues"
+LICENSE = "GPLv2"
+
+# TODO: Document coreutils dependency. Why not RDEPENDS? Why not busybox?
+
+DEPENDS="coreutils popt"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
+
+# When updating logrotate to the latest upstream, SRC_URI should point to
+# a proper release tarball from https://github.com/logrotate/logrotate/releases
+# (as it does for ${PV}); the 3.9.1 recipe had to use a snapshot because no
+# such tarball was available for that version.
+
+S = "${WORKDIR}/${BPN}-${PV}"
+
+UPSTREAM_CHECK_URI = "https://github.com/${BPN}/${BPN}/releases"
+UPSTREAM_CHECK_REGEX = "logrotate-(?P<pver>\d+(\.\d+)+).tar"
+
+SRC_URI = "https://github.com/${BPN}/${BPN}/releases/download/${PV}/${BP}.tar.xz \
+            file://act-as-mv-when-rotate.patch \
+            file://update-the-manual.patch \
+            file://disable-check-different-filesystems.patch \
+            "
+
+SRC_URI[md5sum] = "a560c57fac87c45b2fc17406cdf79288"
+SRC_URI[sha256sum] = "2e6a401cac9024db2288297e3be1a8ab60e7401ba8e91225218aaf4a27e82a07"
+
+PACKAGECONFIG ?= "${@bb.utils.filter('DISTRO_FEATURES', 'acl selinux', d)}"
+
+PACKAGECONFIG[acl] = ",,acl"
+PACKAGECONFIG[selinux] = ",,libselinux"
+
+CONFFILES_${PN} += "${localstatedir}/lib/logrotate.status \
+		    ${sysconfdir}/logrotate.conf"
+
+# If RPM_OPT_FLAGS is unset, the Makefile adds -g itself rather than obeying our
+# optimization variables, so pass CFLAGS through RPM_OPT_FLAGS rather than EXTRA_CFLAGS.
+EXTRA_OEMAKE = "\
+    LFS= \
+    OS_NAME='${OS_NAME}' \
+    'CC=${CC}' \
+    'RPM_OPT_FLAGS=${CFLAGS}' \
+    'EXTRA_LDFLAGS=${LDFLAGS}' \
+    ${@bb.utils.contains('PACKAGECONFIG', 'acl', 'WITH_ACL=yes', '', d)} \
+    ${@bb.utils.contains('PACKAGECONFIG', 'selinux', 'WITH_SELINUX=yes', '', d)} \
+"
+
+# OS_NAME in the makefile defaults to `uname -s`. The behavior for
+# freebsd/netbsd is questionable, so leave it as Linux, which only sets
+# INSTALL=install and BASEDIR=/usr.
+OS_NAME = "Linux"
+
+inherit autotools systemd
+
+SYSTEMD_SERVICE_${PN} = "\
+    ${BPN}.service \
+    ${BPN}.timer \
+"
+
+LOGROTATE_SYSTEMD_TIMER_BASIS ?= "daily"
+LOGROTATE_SYSTEMD_TIMER_ACCURACY ?= "12h"
+
+do_install(){
+    oe_runmake install DESTDIR=${D} PREFIX=${D} MANDIR=${mandir}
+    mkdir -p ${D}${sysconfdir}/logrotate.d
+    mkdir -p ${D}${localstatedir}/lib
+    install -p -m 644 ${S}/examples/logrotate-default ${D}${sysconfdir}/logrotate.conf
+    touch ${D}${localstatedir}/lib/logrotate.status
+
+    if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
+        install -d ${D}${systemd_system_unitdir}
+        install -m 0644 ${S}/examples/logrotate.service ${D}${systemd_system_unitdir}/logrotate.service
+        install -m 0644 ${S}/examples/logrotate.timer ${D}${systemd_system_unitdir}/logrotate.timer
+        sed -i -e 's,OnCalendar=.*$,OnCalendar=${LOGROTATE_SYSTEMD_TIMER_BASIS},g' ${D}${systemd_system_unitdir}/logrotate.timer
+        sed -i -e 's,AccuracySec=.*$,AccuracySec=${LOGROTATE_SYSTEMD_TIMER_ACCURACY},g' ${D}${systemd_system_unitdir}/logrotate.timer
+    fi
+
+    if ${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', 'true', 'false', d)}; then
+        mkdir -p ${D}${sysconfdir}/cron.daily
+        install -p -m 0755 ${S}/examples/logrotate.cron ${D}${sysconfdir}/cron.daily/logrotate
+    fi
+}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate_3.9.1.bb b/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate_3.9.1.bb
deleted file mode 100644
index c938d9f..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/logrotate/logrotate_3.9.1.bb
+++ /dev/null
@@ -1,69 +0,0 @@
-SUMMARY = "Rotates, compresses, removes and mails system log files"
-SECTION = "console/utils"
-HOMEPAGE = "https://github.com/logrotate/logrotate/issues"
-LICENSE = "GPLv2"
-
-# TODO: logrotate 3.8.8 adds autotools/automake support, update recipe to use it.
-# TODO: Document coreutils dependency. Why not RDEPENDS? Why not busybox?
-
-DEPENDS="coreutils popt"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=18810669f13b87348459e611d31ab760"
-
-# When updating logrotate to latest upstream, SRC_URI should point to
-# a proper release tarball from https://github.com/logrotate/logrotate/releases
-# and we have to take the snapshot for now because there is no such
-# tarball available for 3.9.1.
-
-S = "${WORKDIR}/${BPN}-r3-9-1"
-
-UPSTREAM_CHECK_URI = "https://github.com/${BPN}/${BPN}/releases"
-
-SRC_URI = "https://github.com/${BPN}/${BPN}/archive/r3-9-1.tar.gz \
-           file://act-as-mv-when-rotate.patch \
-           file://update-the-manual.patch \
-           file://disable-check-different-filesystems.patch \
-            "
-
-SRC_URI[md5sum] = "8572b7c2cf9ade09a8a8e10098500fb3"
-SRC_URI[sha256sum] = "5bf8e478c428e7744fefa465118f8296e7e771c981fb6dffb7527856a0ea3617"
-
-PACKAGECONFIG ?= "${@bb.utils.filter('DISTRO_FEATURES', 'acl selinux', d)}"
-
-PACKAGECONFIG[acl] = ",,acl"
-PACKAGECONFIG[selinux] = ",,libselinux"
-
-CONFFILES_${PN} += "${localstatedir}/lib/logrotate.status \
-		    ${sysconfdir}/logrotate.conf"
-
-# If RPM_OPT_FLAGS is unset, it adds -g itself rather than obeying our
-# optimization variables, so use it rather than EXTRA_CFLAGS.
-EXTRA_OEMAKE = "\
-    LFS= \
-    OS_NAME='${OS_NAME}' \
-    'CC=${CC}' \
-    'RPM_OPT_FLAGS=${CFLAGS}' \
-    'EXTRA_LDFLAGS=${LDFLAGS}' \
-    ${@bb.utils.contains('PACKAGECONFIG', 'acl', 'WITH_ACL=yes', '', d)} \
-    ${@bb.utils.contains('PACKAGECONFIG', 'selinux', 'WITH_SELINUX=yes', '', d)} \
-"
-
-# OS_NAME in the makefile defaults to `uname -s`. The behavior for
-# freebsd/netbsd is questionable, so leave it as Linux, which only sets
-# INSTALL=install and BASEDIR=/usr.
-OS_NAME = "Linux"
-
-do_compile_prepend() {
-    # Make sure the recompile is OK
-    rm -f ${B}/.depend
-}
-
-do_install(){
-    oe_runmake install DESTDIR=${D} PREFIX=${D} MANDIR=${mandir}
-    mkdir -p ${D}${sysconfdir}/logrotate.d
-    mkdir -p ${D}${sysconfdir}/cron.daily
-    mkdir -p ${D}${localstatedir}/lib
-    install -p -m 644 examples/logrotate-default ${D}${sysconfdir}/logrotate.conf
-    install -p -m 755 examples/logrotate.cron ${D}${sysconfdir}/cron.daily/logrotate
-    touch ${D}${localstatedir}/lib/logrotate.status
-}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/lsb/lsb_4.1.bb b/import-layers/yocto-poky/meta/recipes-extended/lsb/lsb_4.1.bb
index cedf39e..0785610 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/lsb/lsb_4.1.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/lsb/lsb_4.1.bb
@@ -22,6 +22,7 @@
            file://lsb_pidofproc \
            file://lsb_start_daemon \
            "
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 SRC_URI[md5sum] = "30537ef5a01e0ca94b7b8eb6a36bb1e4"
 SRC_URI[sha256sum] = "99321288f8d62e7a1d485b7c6bdccf06766fb8ca603c6195806e4457fdf17172"
@@ -90,11 +91,13 @@
        install -m 0755 ${WORKDIR}/init-functions ${D}${nonarch_base_libdir}/lsb
 
        # create links for LSB test
-       if [ "${nonarch_base_libdir}" != "${nonarch_libdir}" ] ; then
-               install -d ${D}${nonarch_libdir}/lsb
+       if [ -e ${sbindir}/chkconfig ]; then
+               if [ "${nonarch_base_libdir}" != "${nonarch_libdir}" ] ; then
+                       install -d ${D}${nonarch_libdir}/lsb
+               fi
+               ln -sf ${sbindir}/chkconfig ${D}${nonarch_libdir}/lsb/install_initd
+               ln -sf ${sbindir}/chkconfig ${D}${nonarch_libdir}/lsb/remove_initd
        fi
-       ln -sf ${sbindir}/chkconfig ${D}${nonarch_libdir}/lsb/install_initd
-       ln -sf ${sbindir}/chkconfig ${D}${nonarch_libdir}/lsb/remove_initd
 
        if [ "${TARGET_ARCH}" = "x86_64" ]; then
                if [ "${base_libdir}" != "${base_prefix}/lib64" ]; then
diff --git a/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbinitscripts/functions.patch b/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbinitscripts/functions.patch
index a756d04..9c58d90 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbinitscripts/functions.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbinitscripts/functions.patch
@@ -1,26 +1,32 @@
-Upstream-Status: Inappropriate [configuration]
+From 57468c5f4e364bdad556604dca09046e1afca929 Mon Sep 17 00:00:00 2001
+From: Fan Xin <fan.xin@jp.fujitsu.com>
+Date: Mon, 5 Jun 2017 16:26:47 +0900
+Subject: [PATCH] Upstream-Status: Inappropriate [configuration]
 
 Signed-off-by: Xiaofeng Yan <xiaofeng.yan@windriver.com>
 Signed-off-by: Saul Wold <sgw@linux.intel.com>
 
-Index: initscripts-9.43/rc.d/init.d/functions
-===================================================================
---- initscripts-9.43.orig/rc.d/init.d/functions
-+++ initscripts-9.43/rc.d/init.d/functions
-@@ -13,6 +13,7 @@ umask 022
- PATH="/sbin:/usr/sbin:/bin:/usr/bin"
- export PATH
- 
-+
- if [ $PPID -ne 1 -a -z "$SYSTEMCTL_SKIP_REDIRECT" ] && \
- 		( /bin/mountpoint -q /cgroup/systemd || /bin/mountpoint -q /sys/fs/cgroup/systemd ) ; then
-         case "$0" in
-@@ -54,7 +55,7 @@ systemctl_redirect () {
+Rebase on 9.72
+
+Signed-off-by: Fan Xin <fan.xin@jp.fujitsu.com>
+Upstream-Status: Pending
+---
+ initscripts-9.72/rc.d/init.d/functions | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/initscripts-9.72/rc.d/init.d/functions b/initscripts-9.72/rc.d/init.d/functions
+index 0f627f1..a6aa092 100644
+--- a/initscripts-9.72/rc.d/init.d/functions
++++ b/initscripts-9.72/rc.d/init.d/functions
+@@ -59,7 +59,7 @@ systemctl_redirect () {
  [ -z "${COLUMNS:-}" ] && COLUMNS=80
  
  if [ -z "${CONSOLETYPE:-}" ]; then
--  if [ -c "/dev/stderr" -a -r "/dev/stderr" ]; then
-+  if [ -c "/dev/stderr" -a -r "/dev/stderr" -a -e /sbin/consoletype ]; then
-     CONSOLETYPE="$(/sbin/consoletype < /dev/stderr 2>/dev/null)"
-   else
-     CONSOLETYPE="serial"
+-    if [ -c "/dev/stderr" -a -r "/dev/stderr" ]; then
++    if [ -c "/dev/stderr" -a -r "/dev/stderr" -a -e /sbin/consoletype ]; then
+         CONSOLETYPE="$(/sbin/consoletype < /dev/stderr 2>/dev/null)"
+     else
+         CONSOLETYPE="serial"
+-- 
+1.9.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbinitscripts_9.68.bb b/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbinitscripts_9.68.bb
deleted file mode 100644
index 0c08fff..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbinitscripts_9.68.bb
+++ /dev/null
@@ -1,33 +0,0 @@
-SUMMARY = "SysV init scripts which are only used in an LSB image"
-SECTION = "base"
-LICENSE = "GPLv2"
-DEPENDS = "popt glib-2.0"
-
-RDEPENDS_${PN} += "util-linux"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=ebf4e8b49780ab187d51bd26aaa022c6"
-
-S="${WORKDIR}/initscripts-${PV}"
-SRC_URI = "http://pkgs.fedoraproject.org/repo/pkgs/initscripts/initscripts-${PV}.tar.bz2/6a51a5af38e01445f53989ed0727c3e1/initscripts-${PV}.tar.bz2 \
-           file://functions.patch \
-           file://0001-functions-avoid-exit-1-which-causes-init-scripts-to-.patch \
-          " 
-
-SRC_URI[md5sum] = "6a51a5af38e01445f53989ed0727c3e1"
-SRC_URI[sha256sum] = "2a1c6e9dbaa37a676518f4803b501e107c058bb14ef7a8db24c52b77fbcba531"
-
-inherit update-alternatives
-
-ALTERNATIVE_PRIORITY = "100"
-ALTERNATIVE_${PN} = "functions"
-ALTERNATIVE_LINK_NAME[functions] = "${sysconfdir}/init.d/functions"
-
-# Since we are only taking the patched version of functions, no need to
-# configure or compile anything so do not execute these
-do_configure[noexec] = "1" 
-do_compile[noexec] = "1" 
-
-do_install(){
-	install -d ${D}${sysconfdir}/init.d/
-	install -m 0644 ${S}/rc.d/init.d/functions ${D}${sysconfdir}/init.d/functions
-}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbinitscripts_9.72.bb b/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbinitscripts_9.72.bb
new file mode 100644
index 0000000..2d74a6f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbinitscripts_9.72.bb
@@ -0,0 +1,30 @@
+SUMMARY = "SysV init scripts which are only used in an LSB image"
+HOMEPAGE = "https://wiki.debian.org/LSBInitScripts"
+SECTION = "base"
+LICENSE = "GPLv2"
+DEPENDS = "popt glib-2.0"
+
+RPROVIDES_${PN} += "initd-functions"
+RDEPENDS_${PN} += "util-linux"
+RCONFLICTS_${PN} = "initscripts-functions"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=ebf4e8b49780ab187d51bd26aaa022c6"
+
+S="${WORKDIR}/initscripts-${PV}"
+SRC_URI = "http://pkgs.fedoraproject.org/repo/pkgs/initscripts/initscripts-${PV}.tar.gz/sha512/b6ed38f9576e9227c2ecf047e2d60e1e872f40d51d13861b0c91dddb282f10f7e6b79706a4d1435d7a57a14a0b73a1b71541cfe44c00e8e03ef96b08de19ec32/initscripts-${PV}.tar.gz \
+           file://functions.patch;striplevel=2 \
+           file://0001-functions-avoid-exit-1-which-causes-init-scripts-to-.patch \
+          " 
+
+SRC_URI[md5sum] = "d6c798f40dceb117e12126d94cb25a9a"
+SRC_URI[sha256sum] = "1793677bdd1f7ee4cb00878ce43346196374f848a4c8e4559e086040fc7487db"
+
+# Since we are only taking the patched version of functions, no need to
+# configure or compile anything so do not execute these
+do_configure[noexec] = "1" 
+do_compile[noexec] = "1" 
+
+do_install(){
+	install -d ${D}${sysconfdir}/init.d/
+	install -m 0644 ${S}/rc.d/init.d/functions ${D}${sysconfdir}/init.d/functions
+}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbtest_1.0.bb b/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbtest_1.0.bb
index ea12502..36f52fd 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbtest_1.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/lsb/lsbtest_1.0.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Automates Linux Standard Base (LSB) tests"
+HOMEPAGE = "https://wiki.debian.org/LSBInitScripts"
 SECTION = "console/utils"
 LICENSE = "GPLv2"
 PR = "r3"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/lsof/files/lsof-remove-host-information.patch b/import-layers/yocto-poky/meta/recipes-extended/lsof/files/lsof-remove-host-information.patch
new file mode 100644
index 0000000..b7d2323
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/lsof/files/lsof-remove-host-information.patch
@@ -0,0 +1,76 @@
+Remove host information from version.h
+
+Make lsof not embed build-host information in the generated version.h.
+
+Upstream-Status: Inappropriate [embedded specific]
+
+Signed-off-by: Li Wang <li.wang@windriver.com>
+---
+ dialects/linux/Makefile |   50 +++++++++--------------------------------------
+ 1 file changed, 9 insertions(+), 41 deletions(-)
+
+diff --git a/dialects/linux/Makefile b/dialects/linux/Makefile
+index 2bea108..792142b 100644
+--- a/dialects/linux/Makefile
++++ b/dialects/linux/Makefile
+@@ -76,48 +76,16 @@ version.h:	FRC
+ 	@echo Constructing version.h
+ 	@rm -f version.h
+ 	@echo '#define	LSOF_BLDCMT	"${LSOF_BLDCMT}"' > version.h;
+-	@echo '#define	LSOF_CC		"${CC}"' >> version.h
+-	@echo '#define	LSOF_CCV	"${CCV}"' >> version.h
+-	@echo '#define	LSOF_CCDATE	"'`date`'"' >> version.h
+-	@echo '#define	LSOF_CCFLAGS	"'`echo ${CFLAGS} | sed 's/\\\\(/\\(/g' | sed 's/\\\\)/\\)/g' | sed 's/"/\\\\"/g'`'"' >> version.h
++	@echo '#define	LSOF_CC		""' >> version.h
++	@echo '#define	LSOF_CCV	""' >> version.h
++	@echo '#define	LSOF_CCDATE	""' >> version.h
++	@echo '#define	LSOF_CCFLAGS	""' >> version.h
+ 	@echo '#define	LSOF_CINFO	"${CINFO}"' >> version.h
+-	@if [ "X${LSOF_HOST}" = "X" ]; then \
+-	  echo '#define	LSOF_HOST	"'`uname -n`'"' >> version.h; \
+-	else \
+-	  if [ "${LSOF_HOST}" = "none" ]; then \
+-	    echo '#define	LSOF_HOST	""' >> version.h; \
+-	  else \
+-	    echo '#define	LSOF_HOST	"${LSOF_HOST}"' >> version.h; \
+-	  fi \
+-	fi
+-	@echo '#define	LSOF_LDFLAGS	"${CFGL}"' >> version.h
+-	@if [ "X${LSOF_LOGNAME}" = "X" ]; then \
+-	  echo '#define	LSOF_LOGNAME	"${LOGNAME}"' >> version.h; \
+-	else \
+-	  if [ "${LSOF_LOGNAME}" = "none" ]; then \
+-	    echo '#define	LSOF_LOGNAME	""' >> version.h; \
+-	  else \
+-	    echo '#define	LSOF_LOGNAME	"${LSOF_LOGNAME}"' >> version.h; \
+-	  fi; \
+-	fi
+-	@if [ "X${LSOF_SYSINFO}" = "X" ]; then \
+-	    echo '#define	LSOF_SYSINFO	"'`uname -a`'"' >> version.h; \
+-	else \
+-	  if [ "${LSOF_SYSINFO}" = "none" ]; then \
+-	    echo '#define	LSOF_SYSINFO	""' >> version.h; \
+-	  else \
+-	    echo '#define	LSOF_SYSINFO	"${LSOF_SYSINFO}"' >> version.h; \
+-	  fi \
+-	fi
+-	@if [ "X${LSOF_USER}" = "X" ]; then \
+-	  echo '#define	LSOF_USER	"${USER}"' >> version.h; \
+-	else \
+-	  if [ "${LSOF_USER}" = "none" ]; then \
+-	    echo '#define	LSOF_USER	""' >> version.h; \
+-	  else \
+-	    echo '#define	LSOF_USER	"${LSOF_USER}"' >> version.h; \
+-	  fi \
+-	fi
++	@echo '#define	LSOF_HOST	""' >> version.h;
++	@echo '#define	LSOF_LDFLAGS	""' >> version.h
++	@echo '#define	LSOF_LOGNAME	""' >> version.h;
++	@echo '#define	LSOF_SYSINFO	""' >> version.h;
++	@echo '#define	LSOF_USER	""' >> version.h;
+ 	@sed '/VN/s/.ds VN \(.*\)/#define	LSOF_VERSION	"\1"/' < version >> version.h
+ 
+ FRC:
+-- 
+1.7.9.5
+
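A hedged illustration of the effect of the patch: the generated version.h now carries empty strings instead of build-host details, so two builds on different machines embed identical values. The comments note what the unpatched Makefile would have captured.

    #include <stdio.h>

    /* What version.h looks like after the patch (values were previously
     * filled in from the build machine). */
    #define LSOF_HOST    ""     /* was `uname -n` */
    #define LSOF_LOGNAME ""     /* was $LOGNAME */
    #define LSOF_USER    ""     /* was $USER */
    #define LSOF_SYSINFO ""     /* was `uname -a` */

    int main(void)
    {
        printf("host='%s' logname='%s' user='%s' sysinfo='%s'\n",
               LSOF_HOST, LSOF_LOGNAME, LSOF_USER, LSOF_SYSINFO);
        return 0;
    }
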
diff --git a/import-layers/yocto-poky/meta/recipes-extended/lsof/lsof_4.89.bb b/import-layers/yocto-poky/meta/recipes-extended/lsof/lsof_4.89.bb
index 29245b1..14546db 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/lsof/lsof_4.89.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/lsof/lsof_4.89.bb
@@ -11,7 +11,9 @@
 # https://people.freebsd.org/~abe/ ). http://www.mirrorservice.org seems to be
 # the most commonly used alternative.
 
-SRC_URI = "http://www.mirrorservice.org/sites/lsof.itap.purdue.edu/pub/tools/unix/lsof/lsof_${PV}.tar.bz2"
+SRC_URI = "http://www.mirrorservice.org/sites/lsof.itap.purdue.edu/pub/tools/unix/lsof/lsof_${PV}.tar.bz2 \
+           file://lsof-remove-host-information.patch \
+          "
 
 SRC_URI[md5sum] = "1b9cd34f3fb86856a125abbf2be3a386"
 SRC_URI[sha256sum] = "81ac2fc5fdc944793baf41a14002b6deb5a29096b387744e28f8c30a360a3718"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0002-Add-knob-to-control-whether-numa-support-should-be-c.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0002-Add-knob-to-control-whether-numa-support-should-be-c.patch
index 0684bee..9865020 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0002-Add-knob-to-control-whether-numa-support-should-be-c.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0002-Add-knob-to-control-whether-numa-support-should-be-c.patch
@@ -9,6 +9,7 @@
 
 Signed-off-by: Roy.Li <rongqing.li@windriver.com>
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Upstream-Status: Pending
 ---
  m4/ltp-numa.m4 | 10 +++++++++-
  1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0003-Add-knob-to-control-tirpc-support.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0003-Add-knob-to-control-tirpc-support.patch
index bf1176f..5cf1e05 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0003-Add-knob-to-control-tirpc-support.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0003-Add-knob-to-control-tirpc-support.patch
@@ -8,6 +8,7 @@
 
 Signed-off-by: Fathi Boudra <fathi.boudra@linaro.org>
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Upstream-Status: Pending
 ---
  configure.ac | 9 +++++++++
  1 file changed, 9 insertions(+)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0004-build-Add-option-to-select-libc-implementation.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0004-build-Add-option-to-select-libc-implementation.patch
index 2de9363..cf74463 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0004-build-Add-option-to-select-libc-implementation.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0004-build-Add-option-to-select-libc-implementation.patch
@@ -10,6 +10,7 @@
 Disable tests specifically not building _yet_ on musl based systems
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Upstream-Status: Pending
 ---
  Makefile                                    | 5 +++++
  testcases/kernel/Makefile                   | 5 ++++-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0005-kernel-controllers-Link-with-libfts-explicitly-on-mu.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0005-kernel-controllers-Link-with-libfts-explicitly-on-mu.patch
index 8dab1ed..b9390e2 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0005-kernel-controllers-Link-with-libfts-explicitly-on-mu.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0005-kernel-controllers-Link-with-libfts-explicitly-on-mu.patch
@@ -7,6 +7,7 @@
 external implementation for all fts APIs
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Upstream-Status: Pending
 ---
  testcases/kernel/controllers/Makefile.inc        | 3 +++
  testcases/kernel/controllers/cpuset/Makefile.inc | 3 +++
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0006-fix-PATH_MAX-undeclared-when-building-with-musl.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0006-fix-PATH_MAX-undeclared-when-building-with-musl.patch
deleted file mode 100644
index 8874b95..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0006-fix-PATH_MAX-undeclared-when-building-with-musl.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-From 32644bde4d33b677614534ec37030e57883b8e15 Mon Sep 17 00:00:00 2001
-From: Dengke Du <dengke.du@windriver.com>
-Date: Thu, 9 Feb 2017 16:41:12 +0800
-Subject: [PATCH 1/3] fix PATH_MAX undeclared when building with musl
-
-fix PATH_MAX undeclared when building with musl.
-
-Signed-off-by: Dengke Du <dengke.du@windriver.com>
-Upstream-Status: Pending
----
- include/tst_test.h | 3 +++
- 1 file changed, 3 insertions(+)
-
-diff --git a/include/tst_test.h b/include/tst_test.h
-index 7ff33b2..9779c0e 100644
---- a/include/tst_test.h
-+++ b/include/tst_test.h
-@@ -19,6 +19,9 @@
- #define TST_TEST_H__
- 
- #include <unistd.h>
-+#ifndef __GLIBC__
-+#include <limits.h>
-+#endif
- 
- #include "tst_common.h"
- #include "tst_res_flags.h"
--- 
-2.7.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0007-fix-__WORDSIZE-undeclared-when-building-with-musl.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0007-fix-__WORDSIZE-undeclared-when-building-with-musl.patch
index 8d0e739..2f4ca63 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0007-fix-__WORDSIZE-undeclared-when-building-with-musl.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0007-fix-__WORDSIZE-undeclared-when-building-with-musl.patch
@@ -1,30 +1,31 @@
-From a2639bd0f0d3f9f3049ee33e6710fed06225f54f Mon Sep 17 00:00:00 2001
+From d1a27570457fb6e1d6bafe81bfa0f3507b137e32 Mon Sep 17 00:00:00 2001
 From: Dengke Du <dengke.du@windriver.com>
 Date: Thu, 9 Feb 2017 18:20:58 +0800
-Subject: [PATCH 1/2] fix __WORDSIZE undeclared when building with musl
+Subject: [PATCH] fix __WORDSIZE undeclared when building with musl
 
 fix __WORDSIZE undeclared when building with musl.
 
+Upstream-Status: Submitted [https://github.com/linux-test-project/ltp/pull/177]
+
 Signed-off-by: Dengke Du <dengke.du@windriver.com>
-Upstream-Status: Pending
 ---
  include/old/test.h | 3 +++
  1 file changed, 3 insertions(+)
 
 diff --git a/include/old/test.h b/include/old/test.h
-index d492560..263e92e 100644
+index b36764d83..cc6f1b551 100644
 --- a/include/old/test.h
 +++ b/include/old/test.h
-@@ -58,6 +58,9 @@
- #include "tst_clone.h"
- #include "old_device.h"
- #include "old_tmpdir.h"
+@@ -44,6 +44,9 @@
+ #include <string.h>
+ #include <stdlib.h>
+ #include <stdint.h>
 +#ifndef __GLIBC__
 +#include <bits/reg.h>
 +#endif
  
- /*
-  * Ensure that NUMSIGS is defined.
+ #include "usctest.h"
+ 
 -- 
-2.7.4
+2.11.0
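A small sketch of the pattern this change relies on (assuming the musl layout where <bits/reg.h> supplies __WORDSIZE): the guarded include leaves glibc builds untouched while giving musl builds a definition.

    #include <stdio.h>
    #include <stdint.h>         /* on glibc this pulls in __WORDSIZE */
    #ifndef __GLIBC__
    # include <bits/reg.h>      /* on musl this header defines __WORDSIZE */
    #endif

    int main(void)
    {
        printf("__WORDSIZE = %d bits\n", (int)__WORDSIZE);
        return 0;
    }
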
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0008-Check-if-__GLIBC_PREREQ-is-defined-before-using-it.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0008-Check-if-__GLIBC_PREREQ-is-defined-before-using-it.patch
index 41f2623..e325ce4 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0008-Check-if-__GLIBC_PREREQ-is-defined-before-using-it.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0008-Check-if-__GLIBC_PREREQ-is-defined-before-using-it.patch
@@ -8,6 +8,7 @@
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
 
+Upstream-Status: Pending
 ---
  testcases/kernel/syscalls/accept4/accept4_01.c     |  9 ++++-
  testcases/kernel/syscalls/getcpu/getcpu01.c        | 40 +++++++++++++++++++++-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0009-fix-redefinition-of-struct-msgbuf-error-building-wit.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0009-fix-redefinition-of-struct-msgbuf-error-building-wit.patch
index 6886c55..dd7d283 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0009-fix-redefinition-of-struct-msgbuf-error-building-wit.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0009-fix-redefinition-of-struct-msgbuf-error-building-wit.patch
@@ -1,18 +1,19 @@
-From 2d7ea67e8eaa41cbd6816f7ab654254b40b530a1 Mon Sep 17 00:00:00 2001
+From bf5dd2932200e0199a38f3028d3bef2253f32e38 Mon Sep 17 00:00:00 2001
 From: Dengke Du <dengke.du@windriver.com>
 Date: Thu, 9 Feb 2017 17:17:37 +0800
 Subject: [PATCH] fix redefinition of 'struct msgbuf' error building with musl
 
 When building with musl the file "sys/msg.h" already contains 'struct msgbuf'
 
+Upstream-Status: Submitted [https://github.com/linux-test-project/ltp/pull/177]
+
 Signed-off-by: Dengke Du <dengke.du@windriver.com>
-Upstream-Status: Pending
 ---
  testcases/kernel/syscalls/ipc/msgrcv/msgrcv08.c | 4 +++-
  1 file changed, 3 insertions(+), 1 deletion(-)
 
 diff --git a/testcases/kernel/syscalls/ipc/msgrcv/msgrcv08.c b/testcases/kernel/syscalls/ipc/msgrcv/msgrcv08.c
-index a757c0d..e023114 100644
+index a757c0d18..e023114d2 100644
 --- a/testcases/kernel/syscalls/ipc/msgrcv/msgrcv08.c
 +++ b/testcases/kernel/syscalls/ipc/msgrcv/msgrcv08.c
 @@ -47,11 +47,13 @@ const char *TCID = "msgrcv08";
@@ -31,5 +32,5 @@
  static void msr(int msqid)
  {
 -- 
-2.7.4
+2.11.0
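A minimal sketch of the clash being avoided, assuming (as the patch states) that musl's <sys/msg.h> already declares struct msgbuf: the test keeps its own identically shaped type under a different name instead of redefining the libc one.

    #include <stdio.h>
    #include <sys/msg.h>

    /* Renamed buffer type; defining another `struct msgbuf` would collide
     * with the declaration shipped by musl's <sys/msg.h>. */
    struct msgbuf_local {
        long mtype;
        char mtext[64];
    };

    int main(void)
    {
        struct msgbuf_local m = { 1, "hello" };
        printf("type=%ld text=%s\n", m.mtype, m.mtext);
        return 0;
    }
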
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0010-replace-__BEGIN_DECLS-and-__END_DECLS.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0010-replace-__BEGIN_DECLS-and-__END_DECLS.patch
index 5b0c444..b9fce88 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0010-replace-__BEGIN_DECLS-and-__END_DECLS.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0010-replace-__BEGIN_DECLS-and-__END_DECLS.patch
@@ -10,6 +10,8 @@
 its not a generally available typedef
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
 ---
  testcases/kernel/syscalls/epoll2/include/epoll.h | 8 ++++++--
  utils/sctp/include/netinet/sctp.h                | 9 +++++++--
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0011-Rename-sigset-variable-to-sigset1.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0011-Rename-sigset-variable-to-sigset1.patch
index 4ac14dc..25f6ba7 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0011-Rename-sigset-variable-to-sigset1.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0011-Rename-sigset-variable-to-sigset1.patch
@@ -7,6 +7,8 @@
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
 Signed-off-by: Dengke Du <dengke.du@windriver.com>
+
+Upstream-Status: Pending
 ---
  testcases/kernel/mem/shmt/shmt04.c                    | 10 +++++-----
  testcases/kernel/mem/shmt/shmt06.c                    | 10 +++++-----
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0012-fix-faccessat01.c-build-fails-with-security-flags.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0012-fix-faccessat01.c-build-fails-with-security-flags.patch
deleted file mode 100644
index 2600bd6..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0012-fix-faccessat01.c-build-fails-with-security-flags.patch
+++ /dev/null
@@ -1,70 +0,0 @@
-From 32f563008f95975d26d1c4fcb266c72c56f934be Mon Sep 17 00:00:00 2001
-From: Dengke Du <dengke.du@windriver.com>
-Date: Wed, 22 Feb 2017 01:21:55 -0500
-Subject: [PATCH] fix faccessat01.c build fails with security flags
-
-When the distro is poky-lsb, fix the following error:
-
-| In file included from ../../../../include/old/test.h:47:0,
-|                  from faccessat01.c:44:
-| faccessat01.c: In function 'setup':
-| ../../../../include/old/old_safe_file_ops.h:55:27: error: format not a string literal and no format arguments [-Werror=format-security]
-|                    (path), (fmt), ## __VA_ARGS__)
-|                            ^
-| faccessat01.c:132:2: note: in expansion of macro 'SAFE_FILE_PRINTF'
-|   SAFE_FILE_PRINTF(cleanup, testfile, testfile);
-|   ^~~~~~~~~~~~~~~~
-| ../../../../include/old/old_safe_file_ops.h:55:27: error: format not a string literal and no format arguments [-Werror=format-security]
-|                    (path), (fmt), ## __VA_ARGS__)
-|                            ^
-| faccessat01.c:133:2: note: in expansion of macro 'SAFE_FILE_PRINTF'
-|   SAFE_FILE_PRINTF(cleanup, testfile2, testfile2);
-|   ^~~~~~~~~~~~~~~~
-
-This is because in macro "SAFE_FILE_PRINTF", its third argument should be a
-format arguments, but in file faccessat01.c, it passed the same argument to
-macro "SAFE_FILE_PRINTF", so it results in the fails. It should pass the format
-string to the third argument.
-
-The same for file fchmodat01.c.
-
-Signed-off-by: Dengke Du <dengke.du@windriver.com>
-Upstream-Status: Pending
----
- testcases/kernel/syscalls/faccessat/faccessat01.c | 4 ++--
- testcases/kernel/syscalls/fchmodat/fchmodat01.c   | 4 ++--
- 2 files changed, 4 insertions(+), 4 deletions(-)
-
-diff --git a/testcases/kernel/syscalls/faccessat/faccessat01.c b/testcases/kernel/syscalls/faccessat/faccessat01.c
-index 622dfd3..1ca90e9 100644
---- a/testcases/kernel/syscalls/faccessat/faccessat01.c
-+++ b/testcases/kernel/syscalls/faccessat/faccessat01.c
-@@ -129,8 +129,8 @@ void setup(void)
- 	fds[0] = SAFE_OPEN(cleanup, pathname, O_DIRECTORY);
- 	fds[1] = fds[4] = fds[0];
- 
--	SAFE_FILE_PRINTF(cleanup, testfile, testfile);
--	SAFE_FILE_PRINTF(cleanup, testfile2, testfile2);
-+	SAFE_FILE_PRINTF(cleanup, testfile, "faccessattestfile%d.txt");
-+	SAFE_FILE_PRINTF(cleanup, testfile2, "%s/faccessattestfile%d.txt");
- 
- 	fds[2] = SAFE_OPEN(cleanup, testfile3, O_CREAT | O_RDWR, 0600);
- 
-diff --git a/testcases/kernel/syscalls/fchmodat/fchmodat01.c b/testcases/kernel/syscalls/fchmodat/fchmodat01.c
-index 6bf66d8..89d072a 100644
---- a/testcases/kernel/syscalls/fchmodat/fchmodat01.c
-+++ b/testcases/kernel/syscalls/fchmodat/fchmodat01.c
-@@ -127,8 +127,8 @@ void setup(void)
- 	fds[0] = SAFE_OPEN(cleanup, pathname, O_DIRECTORY);
- 	fds[1] = fds[4] = fds[0];
- 
--	SAFE_FILE_PRINTF(cleanup, testfile, testfile);
--	SAFE_FILE_PRINTF(cleanup, testfile2, testfile2);
-+	SAFE_FILE_PRINTF(cleanup, testfile, "fchmodattest%d.txt");
-+	SAFE_FILE_PRINTF(cleanup, testfile2, "%s/fchmodattest%d.txt");
- 
- 	fds[2] = SAFE_OPEN(cleanup, testfile3, O_CREAT | O_RDWR, 0600);
- 	fds[3] = 100;
--- 
-2.8.1
-
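
Editor's note: the deleted 0012 patch dealt with -Werror=format-security rejecting SAFE_FILE_PRINTF calls that pass data where the format string belongs. A generic illustration with plain printf rather than the LTP macro, assuming the same warning flags (demo.c in the comment is just a placeholder name):

    /* gcc -Wall -Werror=format-security demo.c */
    #include <stdio.h>

    int main(void)
    {
        const char *msg = "text that might contain %s or %n";

        /* printf(msg);  -> "format not a string literal and no format
         *                  arguments"; rejected under -Werror=format-security */
        printf("%s\n", msg);            /* fix: route data through a literal format */
        return 0;
    }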
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0018-guard-mallocopt-with-__GLIBC__.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0018-guard-mallocopt-with-__GLIBC__.patch
index 5198fe9..a79763d 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0018-guard-mallocopt-with-__GLIBC__.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0018-guard-mallocopt-with-__GLIBC__.patch
@@ -6,6 +6,7 @@
 mallocopt is not available on non glibc implementations
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Upstream-Status: Pending
 ---
  utils/benchmark/ebizzy-0.3/ebizzy.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)
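
Editor's note: the 0018 patch guards an allocator-tuning call that only glibc provides. A minimal sketch of the same guard pattern, assuming the call in question is glibc's mallopt(3):

    /* Sketch: mallopt() and M_MMAP_MAX are glibc extensions from <malloc.h>,
     * so the tuning call is compiled in only for glibc builds. */
    #include <stdio.h>
    #include <stdlib.h>
    #ifdef __GLIBC__
    #include <malloc.h>
    #endif

    int main(void)
    {
    #ifdef __GLIBC__
        mallopt(M_MMAP_MAX, 0);         /* glibc-only allocator tuning */
    #endif
        void *p = malloc(64);

        printf("allocated %p\n", p);
        free(p);
        return 0;
    }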
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0020-getdents-define-getdents-getdents64-only-for-glibc.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0020-getdents-define-getdents-getdents64-only-for-glibc.patch
index 0e4e458..7060a64b 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0020-getdents-define-getdents-getdents64-only-for-glibc.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0020-getdents-define-getdents-getdents64-only-for-glibc.patch
@@ -7,6 +7,8 @@
 functions with same name, it errors out.
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
 ---
  testcases/kernel/syscalls/getdents/getdents.h | 6 ++++--
  1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0021-Define-_GNU_SOURCE-for-MREMAP_MAYMOVE-definition.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0021-Define-_GNU_SOURCE-for-MREMAP_MAYMOVE-definition.patch
index c8cbe58..3e79c9f 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0021-Define-_GNU_SOURCE-for-MREMAP_MAYMOVE-definition.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0021-Define-_GNU_SOURCE-for-MREMAP_MAYMOVE-definition.patch
@@ -10,6 +10,8 @@
 error: 'MREMAP_MAYMOVE' undeclared (first use in this function)
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
 ---
  testcases/kernel/syscalls/mremap/mremap01.c | 4 +++-
  testcases/kernel/syscalls/mremap/mremap02.c | 2 ++
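
Editor's note: the 0021 patch defines _GNU_SOURCE so MREMAP_MAYMOVE becomes visible. A standalone sketch of why the define has to come before the includes:

    /* Sketch: mremap() and MREMAP_MAYMOVE are Linux/GNU extensions, so
     * _GNU_SOURCE must be defined before any libc header is pulled in. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        size_t old_len = 4096, new_len = 8192;
        void *p = mmap(NULL, old_len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        if (p == MAP_FAILED)
            return 1;

        void *q = mremap(p, old_len, new_len, MREMAP_MAYMOVE);
        if (q == MAP_FAILED) {
            munmap(p, old_len);
            return 1;
        }
        printf("remapped %p -> %p (%zu bytes)\n", p, q, new_len);
        munmap(q, new_len);
        return 0;
    }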
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0023-ptrace-Use-int-instead-of-enum-__ptrace_request.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0023-ptrace-Use-int-instead-of-enum-__ptrace_request.patch
index 4680c03..529f4ed 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0023-ptrace-Use-int-instead-of-enum-__ptrace_request.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0023-ptrace-Use-int-instead-of-enum-__ptrace_request.patch
@@ -6,6 +6,8 @@
 __ptrace_request is only available with glibc
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
 ---
  testcases/kernel/syscalls/ptrace/ptrace03.c           | 4 ++++
  testcases/kernel/syscalls/ptrace/spawn_ptrace_child.h | 4 ++++
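
Editor's note: the 0023 patch switches from enum __ptrace_request, which only glibc declares, to plain int. A minimal sketch of the portable pattern (no tracing is actually performed here):

    /* Sketch: glibc types ptrace(2)'s request as enum __ptrace_request,
     * musl as int; keeping the request in a plain int compiles with both. */
    #include <stdio.h>
    #include <sys/ptrace.h>

    int main(void)
    {
        int request = PTRACE_TRACEME;   /* int instead of enum __ptrace_request */

        printf("PTRACE_TRACEME = %d\n", request);
        /* a real tracee would now call ptrace(request, 0, NULL, NULL) */
        return 0;
    }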
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0024-rt_sigaction-rt_sigprocmark-Define-_GNU_SOURCE.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0024-rt_sigaction-rt_sigprocmark-Define-_GNU_SOURCE.patch
index 03c67a5..03aa45d 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0024-rt_sigaction-rt_sigprocmark-Define-_GNU_SOURCE.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0024-rt_sigaction-rt_sigprocmark-Define-_GNU_SOURCE.patch
@@ -7,6 +7,8 @@
 error: 'SA_NOMASK' undeclared here (not in a function)
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
 ---
  testcases/kernel/syscalls/rt_sigaction/rt_sigaction01.c     | 1 +
  testcases/kernel/syscalls/rt_sigaction/rt_sigaction02.c     | 2 +-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0025-mc_gethost-include-sys-types.h.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0025-mc_gethost-include-sys-types.h.patch
index aad3cb0..afcba63 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0025-mc_gethost-include-sys-types.h.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0025-mc_gethost-include-sys-types.h.patch
@@ -7,6 +7,8 @@
 error: unknown type name 'u_char'
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
 ---
  testcases/network/multicast/mc_gethost/mc_gethost.c | 1 +
  1 file changed, 1 insertion(+)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0026-crash01-Define-_GNU_SOURCE.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0026-crash01-Define-_GNU_SOURCE.patch
index 58b8ed4..f65fad1 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0026-crash01-Define-_GNU_SOURCE.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0026-crash01-Define-_GNU_SOURCE.patch
@@ -7,6 +7,8 @@
 error: 'SA_NOMASK' undeclared (first use in this function)
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
 ---
  testcases/misc/crash/crash01.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0027-sysconf01-Use-_SC_2_C_VERSION-conditionally.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0027-sysconf01-Use-_SC_2_C_VERSION-conditionally.patch
index f75e33b..adf6f27 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0027-sysconf01-Use-_SC_2_C_VERSION-conditionally.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0027-sysconf01-Use-_SC_2_C_VERSION-conditionally.patch
@@ -7,6 +7,8 @@
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
 Signed-off-by: Dengke Du <dengke.du@windriver.com>
+
+Upstream-Status: Pending
 ---
  testcases/kernel/syscalls/sysconf/sysconf01.c | 2 ++
  1 file changed, 2 insertions(+)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0028-rt_sigaction.h-Use-sighandler_t-instead-of-__sighand.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0028-rt_sigaction.h-Use-sighandler_t-instead-of-__sighand.patch
index b26aa13..c730d46 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0028-rt_sigaction.h-Use-sighandler_t-instead-of-__sighand.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0028-rt_sigaction.h-Use-sighandler_t-instead-of-__sighand.patch
@@ -8,6 +8,8 @@
 sighandler_t makes it work on musl too
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
 ---
  include/lapi/rt_sigaction.h                      | 2 +-
  testcases/kernel/syscalls/rt_sigsuspend/Makefile | 3 +++
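
Editor's note: the 0028 patch uses the GNU typedef sighandler_t, which both glibc and musl expose under _GNU_SOURCE, instead of the glibc-internal __sighandler_t. A small sketch (on_int is an arbitrary example handler):

    /* Sketch: sighandler_t is available on glibc and musl when _GNU_SOURCE is
     * defined, unlike the internal __sighandler_t name. */
    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdio.h>

    static void on_int(int sig)
    {
        (void)sig;                      /* nothing to do in this example */
    }

    int main(void)
    {
        sighandler_t old = signal(SIGINT, on_int);

        printf("previous SIGINT handler %s the default\n",
               old == SIG_DFL ? "was" : "was not");
        return 0;
    }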
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0030-lib-Use-PTHREAD_MUTEX_RECURSIVE-in-place-of-PTHREAD_.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0030-lib-Use-PTHREAD_MUTEX_RECURSIVE-in-place-of-PTHREAD_.patch
index c5fac9b..efa6d06 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0030-lib-Use-PTHREAD_MUTEX_RECURSIVE-in-place-of-PTHREAD_.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0030-lib-Use-PTHREAD_MUTEX_RECURSIVE-in-place-of-PTHREAD_.patch
@@ -8,6 +8,8 @@
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
 Signed-off-by: Dengke Du <dengke.du@windriver.com>
+
+Upstream-Status: Pending
 ---
  lib/tst_res.c | 4 ++++
  1 file changed, 4 insertions(+)
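
Editor's note: the 0030 patch replaces PTHREAD_MUTEX_RECURSIVE_NP, a glibc-only name, with the POSIX PTHREAD_MUTEX_RECURSIVE. A standalone sketch of the portable initialization; build with -pthread:

    /* Sketch: PTHREAD_MUTEX_RECURSIVE is the POSIX spelling; the _NP alias
     * removed by the patch exists only on glibc. */
    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutex_t lock;

        pthread_mutexattr_init(&attr);
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&lock, &attr);

        pthread_mutex_lock(&lock);
        pthread_mutex_lock(&lock);      /* same thread may relock a recursive mutex */
        printf("recursive lock taken twice\n");
        pthread_mutex_unlock(&lock);
        pthread_mutex_unlock(&lock);

        pthread_mutex_destroy(&lock);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }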
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0034-periodic_output.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0034-periodic_output.patch
index 59caefe..c2ef899 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0034-periodic_output.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0034-periodic_output.patch
@@ -1,4 +1,7 @@
-Add periodic output for long time test.
+From 5a77e2bdc083f4f842a8ba7c2db1a7ac6e5f0664 Mon Sep 17 00:00:00 2001
+From: Dengke Du <dengke.du@windriver.com>
+Date: Wed, 31 May 2017 21:26:05 -0400
+Subject: [PATCH] Add periodic output for long time test.
 
 This is needed in context of having scripts running ltp tests and
 waiting with a timeout for the output of the tests.
@@ -6,20 +9,26 @@
 Signed-off-by: Tudor Florea <tudor.florea@enea.com>
 Upstream-Status: Pending
 
-diff -ruN a/testcases/kernel/controllers/memcg/stress/memcg_stress_test.sh b/testcases/kernel/controllers/memcg/stress/memcg_stress_test.sh
---- a/testcases/kernel/controllers/memcg/stress/memcg_stress_test.sh	2013-11-08 15:54:09.515049081 +0100
-+++ b/testcases/kernel/controllers/memcg/stress/memcg_stress_test.sh	2013-11-08 22:32:15.587370406 +0100
-@@ -37,7 +37,8 @@
+Signed-off-by: Dengke Du <dengke.du@windriver.com>
+---
+ .../kernel/controllers/memcg/stress/memcg_stress_test.sh      | 11 ++++++++---
+ 1 file changed, 8 insertions(+), 3 deletions(-)
+
+diff --git a/testcases/kernel/controllers/memcg/stress/memcg_stress_test.sh b/testcases/kernel/controllers/memcg/stress/memcg_stress_test.sh
+index af1a708..084e628 100755
+--- a/testcases/kernel/controllers/memcg/stress/memcg_stress_test.sh
++++ b/testcases/kernel/controllers/memcg/stress/memcg_stress_test.sh
+@@ -37,7 +37,8 @@ if [ "x$(grep -w memory /proc/cgroups | cut -f4)" != "x1" ]; then
          exit 0
  fi
  
--RUN_TIME=$(( 60 * 60 ))
+-RUN_TIME=$(( 15 * 60 ))
 +ONE_MINUTE=60
-+RUN_TIME=60
++RUN_TIME=15
  
  cleanup()
  {
-@@ -62,7 +63,7 @@
+@@ -62,7 +63,7 @@ do_mount()
  # $1 - Number of cgroups
  # $2 - Allocated how much memory in one process? in MB
  # $3 - The interval to touch memory in a process
@@ -28,16 +37,19 @@
  run_stress()
  {
  	do_mount;
-@@ -81,7 +82,11 @@
+@@ -81,7 +82,11 @@ run_stress()
  		eval /bin/kill -s SIGUSR1 \$pid$i 2> /dev/null
  	done
  
 -	sleep $4
 +	for i in $(seq 0 $(($4-1)))
 +	do
-+		eval echo "Started $i min ago. Still alive... " 
++		eval echo "Started $i min ago. Still alive... "
 +		sleep $ONE_MINUTE
 +	done
  
  	for i in $(seq 0 $(($1-1)))
  	do
+-- 
+2.8.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0037-ltp-fix-PAGE_SIZE-redefinition-and-O_CREAT-undeclear.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0037-ltp-fix-PAGE_SIZE-redefinition-and-O_CREAT-undeclear.patch
new file mode 100644
index 0000000..c8738ae
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0037-ltp-fix-PAGE_SIZE-redefinition-and-O_CREAT-undeclear.patch
@@ -0,0 +1,113 @@
+From a9d5595d2fa2ab252f1cabf63f4b65c3efbafeb9 Mon Sep 17 00:00:00 2001
+From: Dengke Du <dengke.du@windriver.com>
+Date: Thu, 10 Aug 2017 15:27:03 +0800
+Subject: [PATCH] ltp: fix PAGE_SIZE redefinition and O_CREAT undeclear when
+ build with musl
+
+error 1:
+
+|stack_clash.c:50:22: error: expected identifier or '(' before numeric constant
+| static unsigned long PAGE_SIZE;
+
+This is because the musl libc already contains a PAGE_SIZE definition in limits.h,
+we can check it here:
+
+    https://git.musl-libc.org/cgit/musl/tree/include/limits.h#n43
+
+error 2:
+
+|ck01.c:157:22: error: 'O_CREAT' undeclared (first use in this function); did you mean 'S_IREAD'?
+|   fd = open(filename, O_CREAT | O_TRUNC | O_RDWR, 0644);
+|                       ^~~~~~~
+|                       S_IREAD
+
+This is because the musl libc puts those macros in fcntl.h, so we should
+include that header.
+
+Upstream-Status: Submitted [ https://github.com/linux-test-project/ltp/pull/194 ]
+
+Signed-off-by: Dengke Du <dengke.du@windriver.com>
+---
+ testcases/cve/stack_clash.c               | 12 ++++++------
+ testcases/kernel/syscalls/flock/flock01.c |  1 +
+ testcases/kernel/syscalls/flock/flock02.c |  1 +
+ 3 files changed, 8 insertions(+), 6 deletions(-)
+
+diff --git a/testcases/cve/stack_clash.c b/testcases/cve/stack_clash.c
+index 2ef1a82..7c45991 100644
+--- a/testcases/cve/stack_clash.c
++++ b/testcases/cve/stack_clash.c
+@@ -47,7 +47,7 @@
+ #include "tst_test.h"
+ #include "tst_safe_stdio.h"
+ 
+-static unsigned long PAGE_SIZE;
++static unsigned long PAGE_SIZE_tst;
+ static unsigned long PAGE_MASK;
+ static unsigned long GAP_PAGES = 256;
+ static unsigned long THRESHOLD;
+@@ -66,7 +66,7 @@ void exhaust_stack_into_sigsegv(void)
+ 	exhaust_stack_into_sigsegv();
+ }
+ 
+-#define MAPPED_LEN PAGE_SIZE
++#define MAPPED_LEN PAGE_SIZE_tst
+ static unsigned long mapped_addr;
+ 
+ void segv_handler(int sig, siginfo_t *info, void *data LTP_ATTRIBUTE_UNUSED)
+@@ -150,7 +150,7 @@ void do_child(void)
+ 	stack_t signal_stack;
+ 	struct sigaction segv_sig = {.sa_sigaction = segv_handler, .sa_flags = SA_ONSTACK|SA_SIGINFO};
+ 	void *map;
+-	unsigned long gap = GAP_PAGES * PAGE_SIZE;
++	unsigned long gap = GAP_PAGES * PAGE_SIZE_tst;
+ 	struct rlimit rlimit;
+ 
+ 	rlimit.rlim_cur = rlimit.rlim_max = RLIM_INFINITY;
+@@ -200,8 +200,8 @@ void setup(void)
+ {
+ 	char buf[4096], *p;
+ 
+-	PAGE_SIZE = sysconf(_SC_PAGESIZE);
+-	PAGE_MASK = ~(PAGE_SIZE - 1);
++	PAGE_SIZE_tst = sysconf(_SC_PAGESIZE);
++	PAGE_MASK = ~(PAGE_SIZE_tst - 1);
+ 
+ 	buf[4095] = '\0';
+ 	SAFE_FILE_SCANF("/proc/cmdline", "%4095[^\n]", buf);
+@@ -214,7 +214,7 @@ void setup(void)
+ 		tst_res(TINFO, "stack_guard_gap = %ld", GAP_PAGES);
+ 	}
+ 
+-	THRESHOLD = (GAP_PAGES - 1) * PAGE_SIZE;
++	THRESHOLD = (GAP_PAGES - 1) * PAGE_SIZE_tst;
+ 
+ 	{
+ 		volatile int *a = alloca(128);
+diff --git a/testcases/kernel/syscalls/flock/flock01.c b/testcases/kernel/syscalls/flock/flock01.c
+index 3e17be4..06d89e3 100644
+--- a/testcases/kernel/syscalls/flock/flock01.c
++++ b/testcases/kernel/syscalls/flock/flock01.c
+@@ -69,6 +69,7 @@
+ #include <stdio.h>
+ #include <sys/wait.h>
+ #include <sys/file.h>
++#include <fcntl.h>
+ #include "test.h"
+ 
+ void setup(void);
+diff --git a/testcases/kernel/syscalls/flock/flock02.c b/testcases/kernel/syscalls/flock/flock02.c
+index 414df68..9ddf729 100644
+--- a/testcases/kernel/syscalls/flock/flock02.c
++++ b/testcases/kernel/syscalls/flock/flock02.c
+@@ -75,6 +75,7 @@
+ #include <sys/types.h>
+ #include <sys/file.h>
+ #include <sys/wait.h>
++#include <fcntl.h>
+ #include <errno.h>
+ #include <stdio.h>
+ #include "test.h"
+-- 
+2.7.4
+
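
Editor's note: the new 0037 patch combines two musl fixes: avoid a global named PAGE_SIZE (musl's limits.h already defines that macro) and include <fcntl.h> for the O_* flags. A minimal sketch of both, with /tmp/ltp-demo.txt as a made-up path:

    /* Sketch: query the page size at run time under a name that cannot clash
     * with musl's PAGE_SIZE macro, and include <fcntl.h> so O_CREAT and
     * friends are declared. */
    #include <fcntl.h>                  /* O_CREAT, O_TRUNC, O_RDWR */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long page_size = sysconf(_SC_PAGESIZE);
        int fd = open("/tmp/ltp-demo.txt", O_CREAT | O_TRUNC | O_RDWR, 0644);

        printf("page size: %ld, fd: %d\n", page_size, fd);
        if (fd >= 0)
            close(fd);
        return 0;
    }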
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0038-commands-gdb01-replace-stdin-with-dev-null.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0038-commands-gdb01-replace-stdin-with-dev-null.patch
new file mode 100644
index 0000000..f7c0a4b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0038-commands-gdb01-replace-stdin-with-dev-null.patch
@@ -0,0 +1,34 @@
+From 2f6ab8f694b26b7f2566624f6d1f23788d6ab8a0 Mon Sep 17 00:00:00 2001
+From: Jan Stancek <jstancek@redhat.com>
+Date: Mon, 11 Sep 2017 12:57:58 +0200
+Subject: [PATCH] commands/gdb01: replace stdin with /dev/null
+
+If this testcase runs as background process, gdb can receive
+SIGTTOU and then testcase gets stuck.
+
+Signed-off-by: Jan Stancek <jstancek@redhat.com>
+
+Upstream-Status: Backport
+[https://github.com/linux-test-project/ltp/commit/2f6ab8f694b26b7f2566624f6d1f23788d6ab8a0]
+
+Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
+---
+ testcases/commands/gdb/gdb01.sh | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/testcases/commands/gdb/gdb01.sh b/testcases/commands/gdb/gdb01.sh
+index 07ae36f..e3a5b51 100755
+--- a/testcases/commands/gdb/gdb01.sh
++++ b/testcases/commands/gdb/gdb01.sh
+@@ -29,7 +29,7 @@ TST_NEEDS_CMDS="gdb /bin/cat"
+ 
+ simple_test()
+ {
+-	gdb /bin/cat -ex "run /etc/passwd" -ex quit
++	gdb /bin/cat -ex "run /etc/passwd" -ex quit < /dev/null
+ 	RC=$?
+ 	if [ $RC -eq 0 ] ; then
+ 		tst_res TPASS "gdb attached to process and completed run"
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0039-fcntl-fix-the-time-def-to-use-time_t.patch b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0039-fcntl-fix-the-time-def-to-use-time_t.patch
deleted file mode 100644
index c0c1dad..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp/0039-fcntl-fix-the-time-def-to-use-time_t.patch
+++ /dev/null
@@ -1,29 +0,0 @@
-From 7bce3d223494803cb32897cabe66119076e53d89 Mon Sep 17 00:00:00 2001
-From: Dengke Du <dengke.du@windriver.com>
-Date: Wed, 8 Feb 2017 16:23:51 +0800
-Subject: [PATCH 5/5] fcntl: fix the time() def to use time_t
-
-This fixes the build on X32, where long is 32-bit rather than 64-bit.
-
-Signed-off-by: Christopher Larson <chris_larson@mentor.com>
-Signed-off-by: Dengke Du <dengke.du@windriver.com>
----
- testcases/kernel/syscalls/fcntl/fcntl14.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/testcases/kernel/syscalls/fcntl/fcntl14.c b/testcases/kernel/syscalls/fcntl/fcntl14.c
-index c61eb24..99e3867 100644
---- a/testcases/kernel/syscalls/fcntl/fcntl14.c
-+++ b/testcases/kernel/syscalls/fcntl/fcntl14.c
-@@ -775,7 +775,7 @@ void dochild(void)
- 
- void run_test(int file_flag, int file_mode, int seek, int start, int end)
- {
--	extern long time();
-+	extern time_t time();
- 
- 	fail = 0;
- 
--- 
-2.7.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp_20170116.bb b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp_20170116.bb
deleted file mode 100644
index 58af104..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp_20170116.bb
+++ /dev/null
@@ -1,114 +0,0 @@
-SUMMARY = "Linux Test Project"
-DESCRIPTION = "The Linux Test Project is a joint project with SGI, IBM, OSDL, and Bull with a goal to deliver test suites to the open source community that validate the reliability, robustness, and stability of Linux. The Linux Test Project is a collection of tools for testing the Linux kernel and related features."
-HOMEPAGE = "http://ltp.sourceforge.net"
-SECTION = "console/utils"
-LICENSE = "GPLv2 & GPLv2+ & LGPLv2+ & LGPLv2.1+ & BSD-2-Clause"
-LIC_FILES_CHKSUM = "\
-    file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-    file://testcases/kernel/controllers/freezer/COPYING;md5=0636e73ff0215e8d672dc4c32c317bb3 \
-    file://testcases/kernel/controllers/freezer/run_freezer.sh;beginline=5;endline=17;md5=86a61d2c042d59836ffb353a21456498 \
-    file://testcases/kernel/hotplug/memory_hotplug/COPYING;md5=e04a2e542b2b8629bf9cd2ba29b0fe41 \
-    file://testcases/kernel/hotplug/cpu_hotplug/COPYING;md5=e04a2e542b2b8629bf9cd2ba29b0fe41 \
-    file://testcases/open_posix_testsuite/COPYING;md5=216e43b72efbe4ed9017cc19c4c68b01 \
-    file://testcases/realtime/COPYING;md5=12f884d2ae1ff87c09e5b7ccc2c4ca7e \
-    file://tools/pounder21/COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
-    file://utils/benchmark/kernbench-0.42/COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
-    file://utils/ffsb-6.0-rc2/COPYING;md5=c46082167a314d785d012a244748d803 \
-"
-
-DEPENDS = "attr libaio libcap acl openssl zip-native"
-DEPENDS_append_libc-musl = " fts "
-EXTRA_OEMAKE_append_libc-musl = " LIBC=musl "
-CFLAGS_append_powerpc64 = " -D__SANE_USERSPACE_TYPES__"
-CFLAGS_append_mipsarchn64 = " -D__SANE_USERSPACE_TYPES__"
-SRCREV = "2c8457b0769fc026e4e1772f4c2a6da0be63a631"
-
-SRC_URI = "git://github.com/linux-test-project/ltp.git \
-           file://0001-add-_GNU_SOURCE-to-pec_listener.c.patch \
-           file://0002-Add-knob-to-control-whether-numa-support-should-be-c.patch \
-           file://0003-Add-knob-to-control-tirpc-support.patch \
-           file://0004-build-Add-option-to-select-libc-implementation.patch \
-           file://0005-kernel-controllers-Link-with-libfts-explicitly-on-mu.patch \
-           file://0006-fix-PATH_MAX-undeclared-when-building-with-musl.patch \
-           file://0007-fix-__WORDSIZE-undeclared-when-building-with-musl.patch \
-           file://0008-Check-if-__GLIBC_PREREQ-is-defined-before-using-it.patch \
-           file://0009-fix-redefinition-of-struct-msgbuf-error-building-wit.patch \
-           file://0010-replace-__BEGIN_DECLS-and-__END_DECLS.patch \
-           file://0011-Rename-sigset-variable-to-sigset1.patch \
-           file://0012-fix-faccessat01.c-build-fails-with-security-flags.patch \
-           file://0018-guard-mallocopt-with-__GLIBC__.patch \
-           file://0020-getdents-define-getdents-getdents64-only-for-glibc.patch \
-           file://0021-Define-_GNU_SOURCE-for-MREMAP_MAYMOVE-definition.patch \
-           file://0023-ptrace-Use-int-instead-of-enum-__ptrace_request.patch \
-           file://0024-rt_sigaction-rt_sigprocmark-Define-_GNU_SOURCE.patch \
-           file://0025-mc_gethost-include-sys-types.h.patch \
-           file://0026-crash01-Define-_GNU_SOURCE.patch \
-           file://0027-sysconf01-Use-_SC_2_C_VERSION-conditionally.patch \
-           file://0028-rt_sigaction.h-Use-sighandler_t-instead-of-__sighand.patch \
-           file://0030-lib-Use-PTHREAD_MUTEX_RECURSIVE-in-place-of-PTHREAD_.patch \
-           file://0033-shmat1-Cover-GNU-specific-code-under-__USE_GNU.patch \
-           file://0034-periodic_output.patch \
-           file://0035-fix-test_proc_kill-hang.patch \
-           file://0036-testcases-network-nfsv4-acl-acl1.c-Security-fix-on-s.patch \
-           file://0039-fcntl-fix-the-time-def-to-use-time_t.patch \
-           "
-
-S = "${WORKDIR}/git"
-
-inherit autotools-brokensep
-
-TARGET_CC_ARCH += "${LDFLAGS}"
-
-export prefix = "/opt/ltp"
-export exec_prefix = "/opt/ltp"
-
-PACKAGECONFIG[numa] = "--with-numa, --without-numa, numactl,"
-EXTRA_AUTORECONF += "-I ${S}/testcases/realtime/m4"
-EXTRA_OECONF = " --with-power-management-testsuite --with-realtime-testsuite "
-# ltp network/rpc test cases ftbfs when libtirpc is found
-EXTRA_OECONF += " --without-tirpc "
-
-# The makefiles make excessive use of make -C and several include testcases.mk
-# which triggers a build of the syscall header. To reproduce, build ltp,
-# then delete the header, then "make -j XX" and watch regen.sh run multiple
-# times. Its easier to generate this once here instead.
-do_compile_prepend () {
-	( make -C ${B}/testcases/kernel include/linux_syscall_numbers.h )
-}
-
-do_install(){
-    install -d ${D}/opt/ltp/
-    oe_runmake DESTDIR=${D} SKIP_IDCHECK=1 install
-
-    # fixup not deploy STPfailure_report.pl to avoid confusing about it fails to run
-    # as it lacks dependency on some perl moudle such as LWP::Simple
-    # And this script previously works as a tool for analyzing failures from LTP
-    # runs on the OSDL's Scaleable Test Platform (STP) and it mainly accesses
-    # http://khack.osdl.org to retrieve ltp test results run on
-    # OSDL's Scaleable Test Platform, but now http://khack.osdl.org unaccessible
-    rm -rf ${D}/opt/ltp/bin/STPfailure_report.pl
-
-    # In oe-core, we doesn't support ksh and csh now, so remove in.csh and in.ksh.
-    rm ${D}/opt/ltp/testcases/data/file01/in.csh
-    rm ${D}/opt/ltp/testcases/data/file01/in.ksh
-    # Copy POSIX test suite into ${D}/opt/ltp/testcases by manual
-    cp -r testcases/open_posix_testsuite ${D}/opt/ltp/testcases
-}
-
-RDEPENDS_${PN} = "perl e2fsprogs-mke2fs python-core libaio bash gawk expect ldd unzip gzip cpio cronie logrotate which at"
-
-FILES_${PN}-staticdev += "/opt/ltp/lib/libmem.a /opt/ltp/testcases/data/nm01/lib.a"
-
-FILES_${PN} += "/opt/ltp/* /opt/ltp/runtest/* /opt/ltp/scenario_groups/* /opt/ltp/testcases/bin/* /opt/ltp/testcases/bin/*/bin/* /opt/ltp/testscripts/* /opt/ltp/testcases/open_posix_testsuite/* /opt/ltp/testcases/open_posix_testsuite/conformance/* /opt/ltp/testcases/open_posix_testsuite/Documentation/* /opt/ltp/testcases/open_posix_testsuite/functional/* /opt/ltp/testcases/open_posix_testsuite/include/* /opt/ltp/testcases/open_posix_testsuite/scripts/* /opt/ltp/testcases/open_posix_testsuite/stress/* /opt/ltp/testcases/open_posix_testsuite/tools/*"
-
-# Avoid generated binaries stripping. Otherwise some of the ltp tests such as ldd01 & nm01 fails
-INHIBIT_PACKAGE_STRIP = "1"
-INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
-# However, test_arch_stripped is already stripped, so...
-INSANE_SKIP_${PN} += "already-stripped"
-
-# Avoid file dependency scans, as LTP checks for things that may or may not
-# exist on the running system.  For instance it has specific checks for
-# csh and ksh which are not typically part of OpenEmbedded systems (but
-# can be added via additional layers.)
-SKIP_FILEDEPS_${PN} = '1'
diff --git a/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp_20170516.bb b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp_20170516.bb
new file mode 100644
index 0000000..653cbfd
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/ltp/ltp_20170516.bb
@@ -0,0 +1,123 @@
+SUMMARY = "Linux Test Project"
+DESCRIPTION = "The Linux Test Project is a joint project with SGI, IBM, OSDL, and Bull with a goal to deliver test suites to the open source community that validate the reliability, robustness, and stability of Linux. The Linux Test Project is a collection of tools for testing the Linux kernel and related features."
+HOMEPAGE = "http://ltp.sourceforge.net"
+SECTION = "console/utils"
+LICENSE = "GPLv2 & GPLv2+ & LGPLv2+ & LGPLv2.1+ & BSD-2-Clause"
+LIC_FILES_CHKSUM = "\
+    file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+    file://testcases/kernel/controllers/freezer/COPYING;md5=0636e73ff0215e8d672dc4c32c317bb3 \
+    file://testcases/kernel/controllers/freezer/run_freezer.sh;beginline=5;endline=17;md5=86a61d2c042d59836ffb353a21456498 \
+    file://testcases/kernel/hotplug/memory_hotplug/COPYING;md5=e04a2e542b2b8629bf9cd2ba29b0fe41 \
+    file://testcases/kernel/hotplug/cpu_hotplug/COPYING;md5=e04a2e542b2b8629bf9cd2ba29b0fe41 \
+    file://testcases/open_posix_testsuite/COPYING;md5=48b1c5ec633e3e30ec2cf884ae699947 \
+    file://testcases/realtime/COPYING;md5=12f884d2ae1ff87c09e5b7ccc2c4ca7e \
+    file://tools/pounder21/COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
+    file://utils/benchmark/kernbench-0.42/COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
+    file://utils/ffsb-6.0-rc2/COPYING;md5=c46082167a314d785d012a244748d803 \
+"
+
+DEPENDS = "attr libaio libcap acl openssl zip-native"
+DEPENDS_append_libc-musl = " fts "
+EXTRA_OEMAKE_append_libc-musl = " LIBC=musl "
+CFLAGS_append_powerpc64 = " -D__SANE_USERSPACE_TYPES__"
+CFLAGS_append_mipsarchn64 = " -D__SANE_USERSPACE_TYPES__"
+SRCREV = "18916a2e6d8c997b7b29dcfa9550d5a15b22ed22"
+
+SRC_URI = "git://github.com/linux-test-project/ltp.git \
+           file://0001-add-_GNU_SOURCE-to-pec_listener.c.patch \
+           file://0002-Add-knob-to-control-whether-numa-support-should-be-c.patch \
+           file://0003-Add-knob-to-control-tirpc-support.patch \
+           file://0004-build-Add-option-to-select-libc-implementation.patch \
+           file://0005-kernel-controllers-Link-with-libfts-explicitly-on-mu.patch \
+           file://0007-fix-__WORDSIZE-undeclared-when-building-with-musl.patch \
+           file://0008-Check-if-__GLIBC_PREREQ-is-defined-before-using-it.patch \
+           file://0009-fix-redefinition-of-struct-msgbuf-error-building-wit.patch \
+           file://0010-replace-__BEGIN_DECLS-and-__END_DECLS.patch \
+           file://0011-Rename-sigset-variable-to-sigset1.patch \
+           file://0018-guard-mallocopt-with-__GLIBC__.patch \
+           file://0020-getdents-define-getdents-getdents64-only-for-glibc.patch \
+           file://0021-Define-_GNU_SOURCE-for-MREMAP_MAYMOVE-definition.patch \
+           file://0023-ptrace-Use-int-instead-of-enum-__ptrace_request.patch \
+           file://0024-rt_sigaction-rt_sigprocmark-Define-_GNU_SOURCE.patch \
+           file://0025-mc_gethost-include-sys-types.h.patch \
+           file://0026-crash01-Define-_GNU_SOURCE.patch \
+           file://0027-sysconf01-Use-_SC_2_C_VERSION-conditionally.patch \
+           file://0028-rt_sigaction.h-Use-sighandler_t-instead-of-__sighand.patch \
+           file://0030-lib-Use-PTHREAD_MUTEX_RECURSIVE-in-place-of-PTHREAD_.patch \
+           file://0033-shmat1-Cover-GNU-specific-code-under-__USE_GNU.patch \
+           file://0034-periodic_output.patch \
+           file://0035-fix-test_proc_kill-hang.patch \
+           file://0036-testcases-network-nfsv4-acl-acl1.c-Security-fix-on-s.patch \
+           file://0037-ltp-fix-PAGE_SIZE-redefinition-and-O_CREAT-undeclear.patch \
+           file://0038-commands-gdb01-replace-stdin-with-dev-null.patch \
+           "
+
+S = "${WORKDIR}/git"
+
+inherit autotools-brokensep
+
+TARGET_CC_ARCH += "${LDFLAGS}"
+
+export prefix = "/opt/ltp"
+export exec_prefix = "/opt/ltp"
+
+PACKAGECONFIG[numa] = "--with-numa, --without-numa, numactl,"
+EXTRA_AUTORECONF += "-I ${S}/testcases/realtime/m4"
+EXTRA_OECONF = " --with-power-management-testsuite --with-realtime-testsuite "
+# ltp network/rpc test cases fail to build from source when libtirpc is found
+EXTRA_OECONF += " --without-tirpc "
+
+do_install(){
+    install -d ${D}/opt/ltp/
+    oe_runmake DESTDIR=${D} SKIP_IDCHECK=1 install
+
+    # Do not deploy STPfailure_report.pl, to avoid confusion when it fails to run:
+    # it depends on Perl modules such as LWP::Simple that are not shipped here.
+    # The script was a tool for analyzing failures from LTP runs on OSDL's
+    # Scaleable Test Platform (STP); it mainly accesses http://khack.osdl.org to
+    # retrieve those results, but http://khack.osdl.org is no longer
+    # accessible.
+    rm -rf ${D}/opt/ltp/bin/STPfailure_report.pl
+
+    # Copy the POSIX test suite into ${D}/opt/ltp/testcases manually
+    cp -r testcases/open_posix_testsuite ${D}/opt/ltp/testcases
+}
+
+RDEPENDS_${PN} = "\
+    acl \
+    at \
+    attr \
+    bash \
+    cpio \
+    cronie \
+    curl \
+    e2fsprogs-mke2fs \
+    expect \
+    gawk \
+    gzip \
+    iproute2 \
+    ldd \
+    libaio \
+    logrotate \
+    perl \
+    python-core \
+    unzip \
+    util-linux \
+    which \
+"
+
+FILES_${PN}-staticdev += "/opt/ltp/lib/libmem.a /opt/ltp/testcases/data/nm01/lib.a"
+
+FILES_${PN} += "/opt/ltp/* /opt/ltp/runtest/* /opt/ltp/scenario_groups/* /opt/ltp/testcases/bin/* /opt/ltp/testcases/bin/*/bin/* /opt/ltp/testscripts/* /opt/ltp/testcases/open_posix_testsuite/* /opt/ltp/testcases/open_posix_testsuite/conformance/* /opt/ltp/testcases/open_posix_testsuite/Documentation/* /opt/ltp/testcases/open_posix_testsuite/functional/* /opt/ltp/testcases/open_posix_testsuite/include/* /opt/ltp/testcases/open_posix_testsuite/scripts/* /opt/ltp/testcases/open_posix_testsuite/stress/* /opt/ltp/testcases/open_posix_testsuite/tools/*"
+
+# Avoid stripping the generated binaries; otherwise some ltp tests such as ldd01 & nm01 fail
+INHIBIT_PACKAGE_STRIP = "1"
+INHIBIT_PACKAGE_DEBUG_SPLIT = "1"
+# However, test_arch_stripped is already stripped, so...
+INSANE_SKIP_${PN} += "already-stripped"
+
+# Avoid file dependency scans, as LTP checks for things that may or may not
+# exist on the running system.  For instance it has specific checks for
+# csh and ksh which are not typically part of OpenEmbedded systems (but
+# can be added via additional layers.)
+SKIP_FILEDEPS_${PN} = '1'
diff --git a/import-layers/yocto-poky/meta/recipes-extended/lzip/lzip_1.18.bb b/import-layers/yocto-poky/meta/recipes-extended/lzip/lzip_1.18.bb
deleted file mode 100644
index c1dc8ce..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/lzip/lzip_1.18.bb
+++ /dev/null
@@ -1,41 +0,0 @@
-SUMMARY = "Lossless data compressor based on the LZMA algorithm"
-HOMEPAGE = "http://lzip.nongnu.org/lzip.html"
-SECTION = "console/utils"
-LICENSE = "GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=76d6e300ffd8fb9d18bd9b136a9bba13 \
-                    file://decoder.cc;beginline=3;endline=16;md5=db09fe3f9573f94d0076f7f07959e6e1"
-
-SRC_URI = "${SAVANNAH_GNU_MIRROR}/lzip/lzip-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "3838567460ce4a10143de4bccc64fe1c"
-SRC_URI[sha256sum] = "47f9882a104ab05532f467a7b8f4ddbb898fa2f1e8d9d468556d6c2d04db14dd"
-
-CONFIGUREOPTS = "\
-    '--srcdir=${S}' \
-    '--prefix=${prefix}' \
-    '--exec-prefix=${exec_prefix}' \
-    '--bindir=${bindir}' \
-    '--datadir=${datadir}' \
-    '--infodir=${infodir}' \
-    '--sysconfdir=${sysconfdir}' \
-    'CXX=${CXX}' \
-    'CPPFLAGS=${CPPFLAGS}' \
-    'CXXFLAGS=${CXXFLAGS}' \
-    'LDFLAGS=${LDFLAGS}' \
-"
-EXTRA_OEMAKE = ""
-
-B = "${S}/obj"
-do_configure () {
-    ${S}/configure ${CONFIGUREOPTS}
-}
-
-do_install () {
-    oe_runmake 'DESTDIR=${D}' install
-    # Info dir listing isn't interesting at this point so remove it if it exists.
-    if [ -e "${D}${infodir}/dir" ]; then
-        rm -f ${D}${infodir}/dir
-    fi
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/lzip/lzip_1.19.bb b/import-layers/yocto-poky/meta/recipes-extended/lzip/lzip_1.19.bb
new file mode 100644
index 0000000..099b364
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/lzip/lzip_1.19.bb
@@ -0,0 +1,41 @@
+SUMMARY = "Lossless data compressor based on the LZMA algorithm"
+HOMEPAGE = "http://lzip.nongnu.org/lzip.html"
+SECTION = "console/utils"
+LICENSE = "GPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=76d6e300ffd8fb9d18bd9b136a9bba13 \
+                    file://decoder.cc;beginline=3;endline=16;md5=db09fe3f9573f94d0076f7f07959e6e1"
+
+SRC_URI = "${SAVANNAH_GNU_MIRROR}/lzip/lzip-${PV}.tar.gz"
+
+SRC_URI[md5sum] = "4dd8790d7528440d034fc713a8680bd3"
+SRC_URI[sha256sum] = "ffadc4f56be1bc0d3ae155ec4527bd003133bdc703a753b2cc683f610e646ba9"
+
+CONFIGUREOPTS = "\
+    '--srcdir=${S}' \
+    '--prefix=${prefix}' \
+    '--exec-prefix=${exec_prefix}' \
+    '--bindir=${bindir}' \
+    '--datadir=${datadir}' \
+    '--infodir=${infodir}' \
+    '--sysconfdir=${sysconfdir}' \
+    'CXX=${CXX}' \
+    'CPPFLAGS=${CPPFLAGS}' \
+    'CXXFLAGS=${CXXFLAGS}' \
+    'LDFLAGS=${LDFLAGS}' \
+"
+EXTRA_OEMAKE = ""
+
+B = "${S}/obj"
+do_configure () {
+    ${S}/configure ${CONFIGUREOPTS}
+}
+
+do_install () {
+    oe_runmake 'DESTDIR=${D}' install
+    # Info dir listing isn't interesting at this point so remove it if it exists.
+    if [ -e "${D}${infodir}/dir" ]; then
+        rm -f ${D}${infodir}/dir
+    fi
+}
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0001-Don-t-reuse-weak-symbol-optopt-to-fix-FTBFS-on-mips.patch b/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0001-Don-t-reuse-weak-symbol-optopt-to-fix-FTBFS-on-mips.patch
deleted file mode 100644
index 77da333..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0001-Don-t-reuse-weak-symbol-optopt-to-fix-FTBFS-on-mips.patch
+++ /dev/null
@@ -1,59 +0,0 @@
-From: Luk Claes <luk@debian.org>
-Date: Sat, 4 Jul 2009 10:54:53 +0200
-Subject: Don't reuse weak symbol optopt to fix FTBFS on mips*
-
-This patch is taken from 
-ftp://ftp.debian.org/debian/pool/main/h/heirloom-mailx/heirloom-mailx_12.5-5.debian.tar.xz
-
-Upstream-Status: Inappropriate [upstream is dead]
----
- getopt.c |   10 +++++-----
- 1 file changed, 5 insertions(+), 5 deletions(-)
-
-diff --git a/getopt.c b/getopt.c
-index 83ce628..82e983c 100644
---- a/getopt.c
-+++ b/getopt.c
-@@ -43,7 +43,7 @@ typedef	int	ssize_t;
- char	*optarg;
- int	optind = 1;
- int	opterr = 1;
--int	optopt;
-+int	optoptc;
- 
- static void
- error(const char *s, int c)
-@@ -69,7 +69,7 @@ error(const char *s, int c)
- 		*bp++ = *s++;
- 	while (*msg)
- 		*bp++ = *msg++;
--	*bp++ = optopt;
-+	*bp++ = optoptc;
- 	*bp++ = '\n';
- 	write(2, buf, bp - buf);
- 	ac_free(buf);
-@@ -101,13 +101,13 @@ getopt(int argc, char *const argv[], const char *optstring)
- 		}
- 		curp = &argv[optind][1];
- 	}
--	optopt = curp[0] & 0377;
-+	optoptc = curp[0] & 0377;
- 	while (optstring[0]) {
- 		if (optstring[0] == ':') {
- 			optstring++;
- 			continue;
- 		}
--		if ((optstring[0] & 0377) == optopt) {
-+		if ((optstring[0] & 0377) == optoptc) {
- 			if (optstring[1] == ':') {
- 				if (curp[1] != '\0') {
- 					optarg = (char *)&curp[1];
-@@ -127,7 +127,7 @@ getopt(int argc, char *const argv[], const char *optstring)
- 					optind++;
- 				optarg = 0;
- 			}
--			return optopt;
-+			return optoptc;
- 		}
- 		optstring++;
- 	}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0002-Patched-out-SSL2-support-since-it-is-no-longer-suppo.patch b/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0002-Patched-out-SSL2-support-since-it-is-no-longer-suppo.patch
deleted file mode 100644
index 6bad433..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0002-Patched-out-SSL2-support-since-it-is-no-longer-suppo.patch
+++ /dev/null
@@ -1,41 +0,0 @@
-From: Hilko Bengen <bengen@debian.org>
-Date: Wed, 27 Apr 2011 00:18:42 +0200
-Subject: Patched out SSL2 support since it is no longer supported by OpenSSL.
-
-This patch is taken from
-ftp://ftp.debian.org/debian/pool/main/h/heirloom-mailx/heirloom-mailx_12.5-5.debian.tar.xz
-
-Upstream-Status: Inappropriate [upstream is dead]
----
- mailx.1   |    2 +-
- openssl.c |    4 +---
- 2 files changed, 2 insertions(+), 4 deletions(-)
-
-diff --git a/mailx.1 b/mailx.1
-index 417ea04..a02e430 100644
---- a/mailx.1
-+++ b/mailx.1
-@@ -3575,7 +3575,7 @@ Only applicable if SSL/TLS support is built using OpenSSL.
- .TP
- .B ssl-method
- Selects a SSL/TLS protocol version;
--valid values are `ssl2', `ssl3', and `tls1'.
-+valid values are `ssl3', and `tls1'.
- If unset, the method is selected automatically,
- if possible.
- .TP
-diff --git a/openssl.c b/openssl.c
-index b4e33fc..44fe4e5 100644
---- a/openssl.c
-+++ b/openssl.c
-@@ -216,9 +216,7 @@ ssl_select_method(const char *uhp)
- 
- 	cp = ssl_method_string(uhp);
- 	if (cp != NULL) {
--		if (equal(cp, "ssl2"))
--			method = SSLv2_client_method();
--		else if (equal(cp, "ssl3"))
-+		if (equal(cp, "ssl3"))
- 			method = SSLv3_client_method();
- 		else if (equal(cp, "tls1"))
- 			method = TLSv1_client_method();
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0003-Fixed-Lintian-warning-warning-macro-N-not-defined.patch b/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0003-Fixed-Lintian-warning-warning-macro-N-not-defined.patch
deleted file mode 100644
index 13b73ae..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0003-Fixed-Lintian-warning-warning-macro-N-not-defined.patch
+++ /dev/null
@@ -1,25 +0,0 @@
-From: Hilko Bengen <bengen@debian.org>
-Date: Sat, 14 Apr 2012 20:22:43 +0200
-Subject: Fixed Lintian warning (warning: macro `N' not defined)
-
-This patch is taken from
-ftp://ftp.debian.org/debian/pool/main/h/heirloom-mailx/heirloom-mailx_12.5-5.debian.tar.xz
-
-Upstream-Status: Inappropriate [upstream is dead]
----
- mailx.1 |    2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/mailx.1 b/mailx.1
-index a02e430..b0723bd 100644
---- a/mailx.1
-+++ b/mailx.1
-@@ -3781,7 +3781,7 @@ you could examine the first message by giving the command:
- .sp
- .fi
- which might cause
--.N mailx
-+.I mailx
- to respond with, for example:
- .nf
- .sp
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0011-outof-Introduce-expandaddr-flag.patch b/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0011-outof-Introduce-expandaddr-flag.patch
deleted file mode 100644
index 13b955c..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0011-outof-Introduce-expandaddr-flag.patch
+++ /dev/null
@@ -1,71 +0,0 @@
-From 9984ae5cb0ea0d61df1612b06952a61323c083d9 Mon Sep 17 00:00:00 2001
-From: Florian Weimer <fweimer@redhat.com>
-Date: Mon, 17 Nov 2014 11:13:38 +0100
-Subject: [PATCH 1/4] outof: Introduce expandaddr flag
-
-Document that address expansion is disabled unless the expandaddr
-binary option is set.
-
-This has been assigned CVE-2014-7844 for BSD mailx, but it is not
-a vulnerability in Heirloom mailx because this feature was documented.
-
-This patch is taken from
-ftp://ftp.debian.org/debian/pool/main/h/heirloom-mailx/heirloom-mailx_12.5-5.debian.tar.xz
-
-Upstream-Status: Inappropriate [upstream is dead]
-CVE: CVE-2014-7844
----
- mailx.1 | 14 ++++++++++++++
- names.c |  3 +++
- 2 files changed, 17 insertions(+)
-
-diff --git a/mailx.1 b/mailx.1
-index 70a7859..22a171b 100644
---- a/mailx.1
-+++ b/mailx.1
-@@ -656,6 +656,14 @@ but any reply returned to the machine
- will have the system wide alias expanded
- as all mail goes through sendmail.
- .SS "Recipient address specifications"
-+If the
-+.I expandaddr
-+option is not set (the default), recipient addresses must be names of
-+local mailboxes or Internet mail addresses.
-+.PP
-+If the
-+.I expandaddr
-+option is set, the following rules apply:
- When an address is used to name a recipient
- (in any of To, Cc, or Bcc),
- names of local mail folders
-@@ -2391,6 +2399,12 @@ and exits immediately.
- If this option is set,
- \fImailx\fR starts even with an empty mailbox.
- .TP
-+.B expandaddr
-+Causes
-+.I mailx
-+to expand message recipient addresses, as explained in the section,
-+Recipient address specifications.
-+.TP
- .B flipr
- Exchanges the
- .I Respond
-diff --git a/names.c b/names.c
-index 66e976b..c69560f 100644
---- a/names.c
-+++ b/names.c
-@@ -268,6 +268,9 @@ outof(struct name *names, FILE *fo, struct header *hp)
- 	FILE *fout, *fin;
- 	int ispipe;
- 
-+	if (value("expandaddr") == NULL)
-+		return names;
-+
- 	top = names;
- 	np = names;
- 	time(&now);
--- 
-1.9.3
-
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0012-unpack-Disable-option-processing-for-email-addresses.patch b/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0012-unpack-Disable-option-processing-for-email-addresses.patch
deleted file mode 100644
index 8cdbfd8..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0012-unpack-Disable-option-processing-for-email-addresses.patch
+++ /dev/null
@@ -1,79 +0,0 @@
-From e34e2ac67b80497080ebecccec40c3b61456167d Mon Sep 17 00:00:00 2001
-From: Florian Weimer <fweimer@redhat.com>
-Date: Mon, 17 Nov 2014 11:14:06 +0100
-Subject: [PATCH 2/4] unpack: Disable option processing for email addresses
- when calling sendmail
-
-This patch is taken from
-ftp://ftp.debian.org/debian/pool/main/h/heirloom-mailx/heirloom-mailx_12.5-5.debian.tar.xz
-
-Upstream-Status: Inappropriate [upstream is dead]
----
- extern.h  | 2 +-
- names.c   | 8 ++++++--
- sendout.c | 2 +-
- 3 files changed, 8 insertions(+), 4 deletions(-)
-
-diff --git a/extern.h b/extern.h
-index 6b85ba0..8873fe8 100644
---- a/extern.h
-+++ b/extern.h
-@@ -396,7 +396,7 @@ struct name *outof(struct name *names, FILE *fo, struct header *hp);
- int is_fileaddr(char *name);
- struct name *usermap(struct name *names);
- struct name *cat(struct name *n1, struct name *n2);
--char **unpack(struct name *np);
-+char **unpack(struct name *smopts, struct name *np);
- struct name *elide(struct name *names);
- int count(struct name *np);
- struct name *delete_alternates(struct name *np);
-diff --git a/names.c b/names.c
-index c69560f..45bbaed 100644
---- a/names.c
-+++ b/names.c
-@@ -549,7 +549,7 @@ cat(struct name *n1, struct name *n2)
-  * Return an error if the name list won't fit.
-  */
- char **
--unpack(struct name *np)
-+unpack(struct name *smopts, struct name *np)
- {
- 	char **ap, **top;
- 	struct name *n;
-@@ -564,7 +564,7 @@ unpack(struct name *np)
- 	 * the terminating 0 pointer.  Additional spots may be needed
- 	 * to pass along -f to the host mailer.
- 	 */
--	extra = 2;
-+	extra = 3 + count(smopts);
- 	extra++;
- 	metoo = value("metoo") != NULL;
- 	if (metoo)
-@@ -581,6 +581,10 @@ unpack(struct name *np)
- 		*ap++ = "-m";
- 	if (verbose)
- 		*ap++ = "-v";
-+	for (; smopts != NULL; smopts = smopts->n_flink)
-+		if ((smopts->n_type & GDEL) == 0)
-+			*ap++ = smopts->n_name;
-+	*ap++ = "--";
- 	for (; n != NULL; n = n->n_flink)
- 		if ((n->n_type & GDEL) == 0)
- 			*ap++ = n->n_name;
-diff --git a/sendout.c b/sendout.c
-index 7b7f2eb..c52f15d 100644
---- a/sendout.c
-+++ b/sendout.c
-@@ -835,7 +835,7 @@ start_mta(struct name *to, struct name *mailargs, FILE *input,
- #endif	/* HAVE_SOCKETS */
- 
- 	if ((smtp = value("smtp")) == NULL) {
--		args = unpack(cat(mailargs, to));
-+		args = unpack(mailargs, to);
- 		if (debug || value("debug")) {
- 			printf(catgets(catd, CATSET, 181,
- 					"Sendmail arguments:"));
--- 
-1.9.3
-
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0013-fio.c-Unconditionally-require-wordexp-support.patch b/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0013-fio.c-Unconditionally-require-wordexp-support.patch
deleted file mode 100644
index 5558d86..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0013-fio.c-Unconditionally-require-wordexp-support.patch
+++ /dev/null
@@ -1,113 +0,0 @@
-From 2bae8ecf04ec2ba6bb9f0af5b80485dd0edb427d Mon Sep 17 00:00:00 2001
-From: Florian Weimer <fweimer@redhat.com>
-Date: Mon, 17 Nov 2014 12:48:25 +0100
-Subject: [PATCH 3/4] fio.c: Unconditionally require wordexp support
-
-This patch is taken from
-ftp://ftp.debian.org/debian/pool/main/h/heirloom-mailx/heirloom-mailx_12.5-5.debian.tar.xz
-
-Upstream-Status: Inappropriate [upstream is dead]
----
- fio.c | 67 +++++--------------------------------------------------------------
- 1 file changed, 5 insertions(+), 62 deletions(-)
-
-diff --git a/fio.c b/fio.c
-index 65e8f10..1529236 100644
---- a/fio.c
-+++ b/fio.c
-@@ -43,12 +43,15 @@ static char sccsid[] = "@(#)fio.c	2.76 (gritter) 9/16/09";
- #endif /* not lint */
- 
- #include "rcv.h"
-+
-+#ifndef HAVE_WORDEXP
-+#error wordexp support is required
-+#endif
-+
- #include <sys/stat.h>
- #include <sys/file.h>
- #include <sys/wait.h>
--#ifdef	HAVE_WORDEXP
- #include <wordexp.h>
--#endif	/* HAVE_WORDEXP */
- #include <unistd.h>
- 
- #if defined (USE_NSS)
-@@ -481,7 +484,6 @@ next:
- static char *
- globname(char *name)
- {
--#ifdef	HAVE_WORDEXP
- 	wordexp_t we;
- 	char *cp;
- 	sigset_t nset;
-@@ -527,65 +529,6 @@ globname(char *name)
- 	}
- 	wordfree(&we);
- 	return cp;
--#else	/* !HAVE_WORDEXP */
--	char xname[PATHSIZE];
--	char cmdbuf[PATHSIZE];		/* also used for file names */
--	int pid, l;
--	char *cp, *shell;
--	int pivec[2];
--	extern int wait_status;
--	struct stat sbuf;
--
--	if (pipe(pivec) < 0) {
--		perror("pipe");
--		return name;
--	}
--	snprintf(cmdbuf, sizeof cmdbuf, "echo %s", name);
--	if ((shell = value("SHELL")) == NULL)
--		shell = SHELL;
--	pid = start_command(shell, 0, -1, pivec[1], "-c", cmdbuf, NULL);
--	if (pid < 0) {
--		close(pivec[0]);
--		close(pivec[1]);
--		return NULL;
--	}
--	close(pivec[1]);
--again:
--	l = read(pivec[0], xname, sizeof xname);
--	if (l < 0) {
--		if (errno == EINTR)
--			goto again;
--		perror("read");
--		close(pivec[0]);
--		return NULL;
--	}
--	close(pivec[0]);
--	if (wait_child(pid) < 0 && WTERMSIG(wait_status) != SIGPIPE) {
--		fprintf(stderr, catgets(catd, CATSET, 81,
--				"\"%s\": Expansion failed.\n"), name);
--		return NULL;
--	}
--	if (l == 0) {
--		fprintf(stderr, catgets(catd, CATSET, 82,
--					"\"%s\": No match.\n"), name);
--		return NULL;
--	}
--	if (l == sizeof xname) {
--		fprintf(stderr, catgets(catd, CATSET, 83,
--				"\"%s\": Expansion buffer overflow.\n"), name);
--		return NULL;
--	}
--	xname[l] = 0;
--	for (cp = &xname[l-1]; *cp == '\n' && cp > xname; cp--)
--		;
--	cp[1] = '\0';
--	if (strchr(xname, ' ') && stat(xname, &sbuf) < 0) {
--		fprintf(stderr, catgets(catd, CATSET, 84,
--				"\"%s\": Ambiguous.\n"), name);
--		return NULL;
--	}
--	return savestr(xname);
--#endif	/* !HAVE_WORDEXP */
- }
- 
- /*
--- 
-1.9.3
-
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0014-globname-Invoke-wordexp-with-WRDE_NOCMD.patch b/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0014-globname-Invoke-wordexp-with-WRDE_NOCMD.patch
deleted file mode 100644
index ae14b8a..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0014-globname-Invoke-wordexp-with-WRDE_NOCMD.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-From 73fefa0c1ac70043ec84f2d8b8f9f683213f168d Mon Sep 17 00:00:00 2001
-From: Florian Weimer <fweimer@redhat.com>
-Date: Mon, 17 Nov 2014 13:11:32 +0100
-Subject: [PATCH 4/4] globname: Invoke wordexp with WRDE_NOCMD (CVE-2004-2771)
-
-This patch is taken from
-ftp://ftp.debian.org/debian/pool/main/h/heirloom-mailx/heirloom-mailx_12.5-5.debian.tar.xz
-
-Upstream-Status: Inappropriate [upstream is dead]
-CVE: CVE-2004-2771
----
- fio.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/fio.c b/fio.c
-index 1529236..774a204 100644
---- a/fio.c
-+++ b/fio.c
-@@ -497,7 +497,7 @@ globname(char *name)
- 	sigemptyset(&nset);
- 	sigaddset(&nset, SIGCHLD);
- 	sigprocmask(SIG_BLOCK, &nset, NULL);
--	i = wordexp(name, &we, 0);
-+	i = wordexp(name, &we, WRDE_NOCMD);
- 	sigprocmask(SIG_UNBLOCK, &nset, NULL);
- 	switch (i) {
- 	case 0:
--- 
-1.9.3
-
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0015-usr-sbin-sendmail.patch b/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0015-usr-sbin-sendmail.patch
deleted file mode 100644
index 2b59914..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/0015-usr-sbin-sendmail.patch
+++ /dev/null
@@ -1,38 +0,0 @@
-Description: Sendmail is at /usr/sbin/sendmail
- As per Debian Policy §11.6
-Author: Ryan Kavanagh <rak@debian.org>
-Origin: Debian
-Forwarded: no
----
-This patch header follows DEP-3: http://dep.debian.net/deps/dep3/
-Index: heirloom-mailx-12.5/Makefile
-===================================================================
-This patch is taken from
-ftp://ftp.debian.org/debian/pool/main/h/heirloom-mailx/heirloom-mailx_12.5-5.debian.tar.xz
-
-Upstream-Status: Inappropriate [upstream is dead]
-
---- heirloom-mailx-12.5.orig/Makefile	2011-04-26 17:23:22.000000000 -0400
-+++ heirloom-mailx-12.5/Makefile	2015-01-27 13:20:04.733542801 -0500
-@@ -13,7 +13,7 @@
- 
- MAILRC		= $(SYSCONFDIR)/nail.rc
- MAILSPOOL	= /var/mail
--SENDMAIL	= /usr/lib/sendmail
-+SENDMAIL	= /usr/sbin/sendmail
- 
- DESTDIR		=
- 
-Index: heirloom-mailx-12.5/mailx.1
-===================================================================
---- heirloom-mailx-12.5.orig/mailx.1	2015-01-27 13:18:49.000000000 -0500
-+++ heirloom-mailx-12.5/mailx.1	2015-01-27 13:20:32.382336867 -0500
-@@ -4922,7 +4922,7 @@
- which just acts as a proxy.
- .PP
- \fIMailx\fR immediately contacts the SMTP server (or
--.IR \%/usr/lib/sendmail )
-+.IR \%/usr/sbin/sendmail )
- even when operating in
- .I disconnected
- mode.
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/explicitly.disable.krb5.support.patch b/import-layers/yocto-poky/meta/recipes-extended/mailx/files/explicitly.disable.krb5.support.patch
deleted file mode 100644
index b74fd04..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/mailx/files/explicitly.disable.krb5.support.patch
+++ /dev/null
@@ -1,46 +0,0 @@
-krb5 support is autodetected from sysroot making builds undeterministic
-feel free to improve this to support explicitly enabling/disabling it
-
-Upstream-Status: Pending
-
-Signed-off-by: Martin Jansa <Martin.Jansa@gmail.com>
-
---- a/makeconfig	2013-07-21 15:06:11.177792334 +0200
-+++ b/makeconfig	2013-07-21 15:07:20.028793994 +0200
-@@ -424,36 +424,6 @@
- }
- !
- 
--<$tmp2.c link_check gssapi 'for GSSAPI in libgss' \
--		'#define USE_GSSAPI' '-lgss' ||
--	<$tmp2.c link_check gssapi 'for GSSAPI in libgssapi_krb5' \
--			'#define USE_GSSAPI' '-lgssapi_krb5' ||
--		link_check gssapi 'for GSSAPI in libgssapi_krb5, old-style' \
--				'#define USE_GSSAPI
--#define GSSAPI_OLD_STYLE' '-lgssapi_krb5' <<\! || \
--			link_check gssapi 'for GSSAPI in libgssapi' \
--				'#define USE_GSSAPI
--#define	GSSAPI_REG_INCLUDE' '-lgssapi' <<\%
--#include <gssapi/gssapi.h>
--#include <gssapi/gssapi_generic.h>
--
--int main(void)
--{
--	gss_import_name(0, 0, gss_nt_service_name, 0);
--	gss_init_sec_context(0,0,0,0,0,0,0,0,0,0,0,0,0);
--	return 0;
--}
--!
--#include <gssapi.h>
--
--int main(void)
--{
--	gss_import_name(0, 0, GSS_C_NT_HOSTBASED_SERVICE, 0);
--	gss_init_sec_context(0,0,0,0,0,0,0,0,0,0,0,0,0);
--	return 0;
--}
--%
--
- cat >$tmp2.c <<\!
- #include "config.h"
- #ifdef HAVE_NL_LANGINFO
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mailx/mailx_12.5-5.bb b/import-layers/yocto-poky/meta/recipes-extended/mailx/mailx_12.5-5.bb
deleted file mode 100644
index 9dd710a..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/mailx/mailx_12.5-5.bb
+++ /dev/null
@@ -1,53 +0,0 @@
-SUMMARY = "mailx is the traditional command-line-mode mail user agent"
-
-DESCRIPTION = "Mailx is derived from Berkeley Mail and is intended provide the \
-functionality of the POSIX mailx command with additional support \
-for MIME, IMAP, POP3, SMTP, and S/MIME."
-
-HOMEPAGE = "http://heirloom.sourceforge.net/mailx.html"
-SECTION = "console/network"
-LICENSE = "BSD & MPL-1"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4202a0a62910cf94f7af8a3436a2a2dd"
-
-DEPENDS = "openssl"
-
-SRC_URI = "http://snapshot.debian.org/archive/debian/20160728T043443Z/pool/main/h/heirloom-mailx/heirloom-mailx_12.5.orig.tar.gz;name=archive \
-           file://0001-Don-t-reuse-weak-symbol-optopt-to-fix-FTBFS-on-mips.patch \
-           file://0002-Patched-out-SSL2-support-since-it-is-no-longer-suppo.patch \
-           file://0003-Fixed-Lintian-warning-warning-macro-N-not-defined.patch \
-           file://0011-outof-Introduce-expandaddr-flag.patch \
-           file://0012-unpack-Disable-option-processing-for-email-addresses.patch \
-           file://0013-fio.c-Unconditionally-require-wordexp-support.patch \
-           file://0014-globname-Invoke-wordexp-with-WRDE_NOCMD.patch \
-           file://0015-usr-sbin-sendmail.patch \
-           file://explicitly.disable.krb5.support.patch \
-          "
-
-SRC_URI[archive.md5sum] = "29a6033ef1412824d02eb9d9213cb1f2"
-SRC_URI[archive.sha256sum] = "015ba4209135867f37a0245d22235a392b8bbed956913286b887c2e2a9a421ad"
-
-# for this package we're mostly interested in tracking debian patches,
-# and not in the upstream version where all development has effectively stopped
-UPSTREAM_CHECK_URI = "${DEBIAN_MIRROR}/main/h/heirloom-mailx/"
-UPSTREAM_CHECK_REGEX = "(?P<pver>((\d+\.*)+)-((\d+\.*)+))\.(diff|debian\.tar)\.(gz|xz)"
-
-S = "${WORKDIR}/heirloom-mailx-12.5"
-
-inherit autotools-brokensep
-
-CFLAGS_append = " -D_BSD_SOURCE -DDEBIAN -I${S}/EXT"
-
-# "STRIP=true" means that 'true' command will be used to 'strip' files which will achieve the effect of not stripping them
-# mailx's Makefile doesn't allow a more straightforward way to avoid stripping
-EXTRA_OEMAKE = "SENDMAIL=${sbindir}/sendmail IPv6=-DHAVE_IPv6_FUNCS PREFIX=/usr UCBINSTALL=/usr/bin/install STRIP=true"
-
-# The makeconfig can't run parallelly, otherwise the checking results
-# might be incorrect and lead to errors:
-# fio.c:56:17: fatal error: ssl.h: No such file or directory
-# #include <ssl.h>
-PARALLEL_MAKE = ""
-
-# Causes gcc to get stuck and eat all available memory in qemuarm builds
-# http://errors.yoctoproject.org/Errors/Details/20488/
-ARM_INSTRUCTION_SET_armv4 = "arm"
-ARM_INSTRUCTION_SET_armv5 = "arm"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/man-pages/man-pages_4.09.bb b/import-layers/yocto-poky/meta/recipes-extended/man-pages/man-pages_4.09.bb
deleted file mode 100644
index 55fc21e..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/man-pages/man-pages_4.09.bb
+++ /dev/null
@@ -1,31 +0,0 @@
-SUMMARY = "Linux man-pages"
-DESCRIPTION = "The Linux man-pages project documents the Linux kernel and C library interfaces that are employed by user programs"
-SECTION = "console/utils"
-HOMEPAGE = "http://www.kernel.org/pub/linux/docs/man-pages"
-LICENSE = "GPLv2+"
-
-LIC_FILES_CHKSUM = "file://README;md5=8f2a3d43057d458e5066714980567a60"
-SRC_URI = "${KERNELORG_MIRROR}/linux/docs/${BPN}/Archive/${BP}.tar.gz"
-
-SRC_URI[md5sum] = "9e3c7b12a5fecda9a717a4bcc0ae3a67"
-SRC_URI[sha256sum] = "5fac324cefce0fbfae0df6c06ef3f6d6ab5227b9aad2a94f8657a0e3901f9185"
-
-RDEPENDS_${PN} = "man"
-
-do_configure[noexec] = "1"
-do_compile[noexec] = "1"
-
-do_install() {
-        oe_runmake install DESTDIR=${D}
-}
-
-# Only deliveres man-pages so FILES_${PN} gets everything
-FILES_${PN}-doc = ""
-FILES_${PN} = "${mandir}/*"
-
-inherit update-alternatives
-
-ALTERNATIVE_PRIORITY = "100"
-ALTERNATIVE_${PN} = "passwd.5 getspnam.3"
-ALTERNATIVE_LINK_NAME[passwd.5] = "${mandir}/man5/passwd.5"
-ALTERNATIVE_LINK_NAME[getspnam.3] = "${mandir}/man3/getspnam.3"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/man-pages/man-pages_4.11.bb b/import-layers/yocto-poky/meta/recipes-extended/man-pages/man-pages_4.11.bb
new file mode 100644
index 0000000..a3077a9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/man-pages/man-pages_4.11.bb
@@ -0,0 +1,31 @@
+SUMMARY = "Linux man-pages"
+DESCRIPTION = "The Linux man-pages project documents the Linux kernel and C library interfaces that are employed by user programs"
+SECTION = "console/utils"
+HOMEPAGE = "http://www.kernel.org/pub/linux/docs/man-pages"
+LICENSE = "GPLv2+"
+
+LIC_FILES_CHKSUM = "file://README;md5=8f2a3d43057d458e5066714980567a60"
+SRC_URI = "${KERNELORG_MIRROR}/linux/docs/${BPN}/Archive/${BP}.tar.gz"
+
+SRC_URI[md5sum] = "408300ed09d1ad5938070158b21da1d1"
+SRC_URI[sha256sum] = "e6db91a24e68c7c765b7b8e60f1591ed1049bc2dc3143db779eae4838b89d195"
+
+RDEPENDS_${PN} = "man"
+
+do_configure[noexec] = "1"
+do_compile[noexec] = "1"
+
+do_install() {
+        oe_runmake install DESTDIR=${D}
+}
+
+# Only delivers man pages, so FILES_${PN} gets everything
+FILES_${PN}-doc = ""
+FILES_${PN} = "${mandir}/*"
+
+inherit update-alternatives
+
+ALTERNATIVE_PRIORITY = "100"
+ALTERNATIVE_${PN} = "passwd.5 getspnam.3"
+ALTERNATIVE_LINK_NAME[passwd.5] = "${mandir}/man5/passwd.5"
+ALTERNATIVE_LINK_NAME[getspnam.3] = "${mandir}/man3/getspnam.3"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mc/files/0002-Ticket-3697-tty_init-unify-curses-initialization.patch b/import-layers/yocto-poky/meta/recipes-extended/mc/files/0002-Ticket-3697-tty_init-unify-curses-initialization.patch
new file mode 100644
index 0000000..c54d4d0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/mc/files/0002-Ticket-3697-tty_init-unify-curses-initialization.patch
@@ -0,0 +1,66 @@
+From 4d46a108629beb66a293672db7b44f863b6598ba Mon Sep 17 00:00:00 2001
+From: Thomas Dickey <dickey@his.com>
+Date: Fri, 14 Apr 2017 14:06:13 +0300
+Subject: [PATCH] Ticket #3697: (tty_init): unify curses initialization
+
+...for various curses implementations.
+
+Signed-off-by: Andrew Borodin <aborodin@vmail.ru>
+
+Upstream-Status: Backport [https://github.com/MidnightCommander/mc.git]
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+
+---
+ lib/tty/tty-ncurses.c | 26 +++++++++-----------------
+ 1 file changed, 9 insertions(+), 17 deletions(-)
+
+diff --git a/lib/tty/tty-ncurses.c b/lib/tty/tty-ncurses.c
+index a7a11f3..8e69b39 100644
+--- a/lib/tty/tty-ncurses.c
++++ b/lib/tty/tty-ncurses.c
+@@ -179,6 +179,8 @@ mc_tty_normalize_lines_char (const char *ch)
+ void
+ tty_init (gboolean mouse_enable, gboolean is_xterm)
+ {
++    struct termios mode;
++
+     initscr ();
+ 
+ #ifdef HAVE_ESCDELAY
+@@ -194,25 +196,15 @@ tty_init (gboolean mouse_enable, gboolean is_xterm)
+     ESCDELAY = 200;
+ #endif /* HAVE_ESCDELAY */
+ 
+-#ifdef NCURSES_VERSION
++    tcgetattr (STDIN_FILENO, &mode);
+     /* use Ctrl-g to generate SIGINT */
+-    cur_term->Nttyb.c_cc[VINTR] = CTRL ('g');   /* ^g */
++    mode.c_cc[VINTR] = CTRL ('g');  /* ^g */
+     /* disable SIGQUIT to allow use Ctrl-\ key */
+-    cur_term->Nttyb.c_cc[VQUIT] = NULL_VALUE;
+-    tcsetattr (cur_term->Filedes, TCSANOW, &cur_term->Nttyb);
+-#else
+-    /* other curses implementation (bsd curses, ...) */
+-    {
+-        struct termios mode;
+-
+-        tcgetattr (STDIN_FILENO, &mode);
+-        /* use Ctrl-g to generate SIGINT */
+-        mode.c_cc[VINTR] = CTRL ('g');  /* ^g */
+-        /* disable SIGQUIT to allow use Ctrl-\ key */
+-        mode.c_cc[VQUIT] = NULL_VALUE;
+-        tcsetattr (STDIN_FILENO, TCSANOW, &mode);
+-    }
+-#endif /* NCURSES_VERSION */
++    mode.c_cc[VQUIT] = NULL_VALUE;
++    tcsetattr (STDIN_FILENO, TCSANOW, &mode);
++
++    /* curses remembers the "in-program" modes after this call */
++    def_prog_mode ();
+ 
+     tty_start_interrupt_key ();
+ 
+-- 
+2.7.4
+
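
The backported change above boils down to the portable termios sequence that the old non-ncurses branch already used: read the current attributes, remap VINTR, disable VQUIT, write the attributes back, then let curses record the result. A rough standalone sketch of that sequence, with plain constants standing in for mc's CTRL() macro and NULL_VALUE (the stand-ins are assumptions, not mc's definitions):

    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    /* Rough sketch of the termios sequence used above: adjust the control
     * characters on stdin so Ctrl-G raises SIGINT and the quit character is
     * disabled, independent of which curses library is in use.  'g' & 0x1f
     * stands in for mc's CTRL('g'); 0 stands in for its NULL_VALUE. */
    int main(void)
    {
        struct termios mode;

        if (tcgetattr(STDIN_FILENO, &mode) != 0) {
            perror("tcgetattr");
            return 1;
        }

        mode.c_cc[VINTR] = 'g' & 0x1f;  /* use Ctrl-G to generate SIGINT */
        mode.c_cc[VQUIT] = 0;           /* free Ctrl-\ for the application */

        if (tcsetattr(STDIN_FILENO, TCSANOW, &mode) != 0) {
            perror("tcsetattr");
            return 1;
        }

        /* In the patch, def_prog_mode() is then called so curses remembers
         * these settings as the "in-program" modes. */
        return 0;
    }
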
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mc/mc_4.8.18.bb b/import-layers/yocto-poky/meta/recipes-extended/mc/mc_4.8.18.bb
deleted file mode 100644
index 17f3f73..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/mc/mc_4.8.18.bb
+++ /dev/null
@@ -1,52 +0,0 @@
-SUMMARY = "Midnight Commander is an ncurses based file manager"
-HOMEPAGE = "http://www.midnight-commander.org/"
-LICENSE = "GPLv3"
-LIC_FILES_CHKSUM = "file://COPYING;md5=270bbafe360e73f9840bd7981621f9c2"
-SECTION = "console/utils"
-DEPENDS = "ncurses glib-2.0 util-linux"
-RDEPENDS_${PN} = "ncurses-terminfo"
-
-SRC_URI = "http://www.midnight-commander.org/downloads/${BPN}-${PV}.tar.bz2 \
-           file://0001-mc-replace-perl-w-with-use-warnings.patch \
-           "
-SRC_URI[md5sum] = "cc56f0c9abd63c4caa3636bba3a08bfb"
-SRC_URI[sha256sum] = "5b591e10dcbea95233434da40cdad4663d360229adf89826576c319667c103cb"
-
-inherit autotools gettext pkgconfig
-
-#
-# Both Samba (smb) and sftp require package delivered from meta-openembedded
-#
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[smb] = "--enable-vfs-smb,--disable-vfs-smb,samba,"
-PACKAGECONFIG[sftp] = "--enable-vfs-sftp,--disable-vfs-sftp,libssh2,"
-
-EXTRA_OECONF = "--with-screen=ncurses --without-gpm-mouse --without-x"
-
-CACHED_CONFIGUREVARS += "ac_cv_path_PERL='/usr/bin/env perl'"
-
-do_install_append () {
-	sed -i -e '1s,#!.*perl,#!${bindir}/env perl,' ${D}${libexecdir}/mc/extfs.d/*
-	sed -i -e '1s,#!.*python,#!${bindir}/env python,' ${D}${libexecdir}/mc/extfs.d/*
-}
-
-PACKAGES =+ "${BPN}-helpers-perl ${BPN}-helpers-python ${BPN}-helpers ${BPN}-fish"
-
-SUMMARY_${BPN}-helpers-perl = "Midnight Commander Perl-based helper scripts"
-FILES_${BPN}-helpers-perl = "${libexecdir}/mc/extfs.d/a+ ${libexecdir}/mc/extfs.d/apt+ \
-                             ${libexecdir}/mc/extfs.d/deb ${libexecdir}/mc/extfs.d/deba \
-                             ${libexecdir}/mc/extfs.d/debd ${libexecdir}/mc/extfs.d/dpkg+ \
-                             ${libexecdir}/mc/extfs.d/mailfs ${libexecdir}/mc/extfs.d/patchfs \ 
-                             ${libexecdir}/mc/extfs.d/rpms+ ${libexecdir}/mc/extfs.d/ulib \ 
-                             ${libexecdir}/mc/extfs.d/uzip"
-RDEPENDS_${BPN}-helpers-perl = "perl"
-
-SUMMARY_${BPN}-helpers-python = "Midnight Commander Python-based helper scripts"
-FILES_${BPN}-helpers-python = "${libexecdir}/mc/extfs.d/s3+ ${libexecdir}/mc/extfs.d/uc1541"
-RDEPENDS_${BPN}-helpers-python = "python"
-
-SUMMARY_${BPN}-helpers = "Midnight Commander shell helper scripts"
-FILES_${BPN}-helpers = "${libexecdir}/mc/extfs.d/* ${libexecdir}/mc/ext.d/*"
-
-SUMMARY_${BPN}-fish = "Midnight Commander Fish scripts"
-FILES_${BPN}-fish = "${libexecdir}/mc/fish"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mc/mc_4.8.19.bb b/import-layers/yocto-poky/meta/recipes-extended/mc/mc_4.8.19.bb
new file mode 100644
index 0000000..b3a156c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/mc/mc_4.8.19.bb
@@ -0,0 +1,50 @@
+SUMMARY = "Midnight Commander is an ncurses based file manager"
+HOMEPAGE = "http://www.midnight-commander.org/"
+LICENSE = "GPLv3"
+LIC_FILES_CHKSUM = "file://COPYING;md5=270bbafe360e73f9840bd7981621f9c2"
+SECTION = "console/utils"
+DEPENDS = "ncurses glib-2.0 util-linux"
+RDEPENDS_${PN} = "ncurses-terminfo"
+
+SRC_URI = "http://www.midnight-commander.org/downloads/${BPN}-${PV}.tar.bz2 \
+           file://0001-mc-replace-perl-w-with-use-warnings.patch \
+           file://0002-Ticket-3697-tty_init-unify-curses-initialization.patch \
+           "
+SRC_URI[md5sum] = "ef423f5b6f80a1a5a5fc53b8324cab70"
+SRC_URI[sha256sum] = "d0dddfae7149fac903f74ef55cfcb2a198e0f7004346c7bded43669d61ba436f"
+
+inherit autotools gettext pkgconfig
+
+#
+# Both Samba (smb) and sftp require packages delivered from meta-openembedded
+#
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[smb] = "--enable-vfs-smb,--disable-vfs-smb,samba,"
+PACKAGECONFIG[sftp] = "--enable-vfs-sftp,--disable-vfs-sftp,libssh2,"
+
+EXTRA_OECONF = "--with-screen=ncurses --without-gpm-mouse --without-x"
+
+CACHED_CONFIGUREVARS += "ac_cv_path_PERL='/usr/bin/env perl'"
+
+do_install_append () {
+	sed -i -e '1s,#!.*perl,#!${bindir}/env perl,' ${D}${libexecdir}/mc/extfs.d/*
+
+	rm ${D}${libexecdir}/mc/extfs.d/s3+ ${D}${libexecdir}/mc/extfs.d/uc1541
+}
+
+PACKAGES =+ "${BPN}-helpers-perl ${BPN}-helpers ${BPN}-fish"
+
+SUMMARY_${BPN}-helpers-perl = "Midnight Commander Perl-based helper scripts"
+FILES_${BPN}-helpers-perl = "${libexecdir}/mc/extfs.d/a+ ${libexecdir}/mc/extfs.d/apt+ \
+                             ${libexecdir}/mc/extfs.d/deb ${libexecdir}/mc/extfs.d/deba \
+                             ${libexecdir}/mc/extfs.d/debd ${libexecdir}/mc/extfs.d/dpkg+ \
+                             ${libexecdir}/mc/extfs.d/mailfs ${libexecdir}/mc/extfs.d/patchfs \ 
+                             ${libexecdir}/mc/extfs.d/rpms+ ${libexecdir}/mc/extfs.d/ulib \ 
+                             ${libexecdir}/mc/extfs.d/uzip"
+RDEPENDS_${BPN}-helpers-perl = "perl"
+
+SUMMARY_${BPN}-helpers = "Midnight Commander shell helper scripts"
+FILES_${BPN}-helpers = "${libexecdir}/mc/extfs.d/* ${libexecdir}/mc/ext.d/*"
+
+SUMMARY_${BPN}-fish = "Midnight Commander Fish scripts"
+FILES_${BPN}-fish = "${libexecdir}/mc/fish"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0001-Use-CC-to-check-for-implicit-fallthrough-warning-sup.patch b/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0001-Use-CC-to-check-for-implicit-fallthrough-warning-sup.patch
new file mode 100644
index 0000000..a4b7b8a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0001-Use-CC-to-check-for-implicit-fallthrough-warning-sup.patch
@@ -0,0 +1,31 @@
+From a129ee6d80f3b2cda0d827c35fa81a517cf6d505 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 13 Oct 2017 10:27:34 -0700
+Subject: [PATCH] Use CC to check for implicit-fallthrough warning support
+
+This warning is new in gcc7. In the cross-compile case it is
+possible that the build host gcc is version 7+ while the cross
+compiler used to build mdadm is older than version 7.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+ Makefile | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/Makefile b/Makefile
+index d207ee4..971f255 100644
+--- a/Makefile
++++ b/Makefile
+@@ -48,7 +48,7 @@ ifdef WARN_UNUSED
+ CWFLAGS += -Wp,-D_FORTIFY_SOURCE=2 -O3
+ endif
+ 
+-FALLTHROUGH := $(shell gcc -v --help 2>&1 | grep "implicit-fallthrough" | wc -l)
++FALLTHROUGH := $(shell ${CC} -v --help 2>&1 | grep "implicit-fallthrough" | wc -l)
+ ifneq "$(FALLTHROUGH)"  "0"
+ CWFLAGS += -Wimplicit-fallthrough=0
+ endif
+-- 
+2.14.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0001-mdadm-Add-Wimplicit-fallthrough-0-in-Makefile.patch b/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0001-mdadm-Add-Wimplicit-fallthrough-0-in-Makefile.patch
new file mode 100644
index 0000000..ce15170
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0001-mdadm-Add-Wimplicit-fallthrough-0-in-Makefile.patch
@@ -0,0 +1,37 @@
+From aa09af0fe2ec0737fa04ffd00957532684e257b9 Mon Sep 17 00:00:00 2001
+From: Xiao Ni <xni@redhat.com>
+Date: Fri, 17 Mar 2017 19:55:42 +0800
+Subject: [PATCH 1/5] mdadm: Add Wimplicit-fallthrough=0 in Makefile
+
+There are many errors like 'error: this statement may fall through'.
+But the logic is right. So add the flag Wimplicit-fallthrough=0
+to disable the error messages. The method I use is from
+https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html
+#index-Wimplicit-fallthrough-375
+
+Signed-off-by: Xiao Ni <xni@redhat.com>
+Signed-off-by: Jes Sorensen <Jes.Sorensen@gmail.com>
+---
+Upstream-Status: Backport
+ Makefile | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/Makefile b/Makefile
+index 0f307ec..e1a7058 100644
+--- a/Makefile
++++ b/Makefile
+@@ -48,6 +48,11 @@ ifdef WARN_UNUSED
+ CWFLAGS += -Wp,-D_FORTIFY_SOURCE=2 -O3
+ endif
+ 
++FALLTHROUGH := $(shell gcc -v --help 2>&1 | grep "implicit-fallthrough" | wc -l)
++ifneq "$(FALLTHROUGH)"  "0"
++CWFLAGS += -Wimplicit-fallthrough=0
++endif
++
+ ifdef DEBIAN
+ CPPFLAGS += -DDEBIAN
+ endif
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0002-mdadm-Specify-enough-length-when-write-to-buffer.patch b/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0002-mdadm-Specify-enough-length-when-write-to-buffer.patch
new file mode 100644
index 0000000..cbce053
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0002-mdadm-Specify-enough-length-when-write-to-buffer.patch
@@ -0,0 +1,75 @@
+From bb4df273041ba206008bdb0ada75ccd97c29f623 Mon Sep 17 00:00:00 2001
+From: Xiao Ni <xni@redhat.com>
+Date: Fri, 17 Mar 2017 19:55:43 +0800
+Subject: [PATCH 2/5] mdadm: Specify enough length when write to buffer
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+In Detail.c the buffer "path" in function Detail() is defined as
+path[200], but the maximum length of the content that may be written
+to it is 287, because the d_name field of struct dirent is 255 bytes.
+During the build gcc reports:
+error: ‘%s’ directive writing up to 255 bytes into a region of size 189
+[-Werror=format-overflow=]
+
+In function examine_super0() there is a buffer nb of length 5, but it
+has to hold a formatted int argument, whose maximum decimal length is
+10, so the buffer length should be 11.
+
+In the human_size() function the length of buf is 30. During the build
+gcc reports:
+output between 20 and 47 bytes into a destination of size 30.
+Change the length to 47.
+
+Signed-off-by: Xiao Ni <xni@redhat.com>
+Signed-off-by: Jes Sorensen <Jes.Sorensen@gmail.com>
+---
+Upstream-Status: Backport
+ Detail.c | 2 +-
+ super0.c | 2 +-
+ util.c   | 2 +-
+ 3 files changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/Detail.c b/Detail.c
+index 509b0d4..cb33794 100644
+--- a/Detail.c
++++ b/Detail.c
+@@ -575,7 +575,7 @@ This is pretty boring
+ 			printf("  Member Arrays :");
+ 
+ 			while (dir && (de = readdir(dir)) != NULL) {
+-				char path[200];
++				char path[287];
+ 				char vbuf[1024];
+ 				int nlen = strlen(sra->sys_name);
+ 				dev_t devid;
+diff --git a/super0.c b/super0.c
+index 938cfd9..f5b4507 100644
+--- a/super0.c
++++ b/super0.c
+@@ -231,7 +231,7 @@ static void examine_super0(struct supertype *st, char *homehost)
+ 	     d++) {
+ 		mdp_disk_t *dp;
+ 		char *dv;
+-		char nb[5];
++		char nb[11];
+ 		int wonly, failfast;
+ 		if (d>=0) dp = &sb->disks[d];
+ 		else dp = &sb->this_disk;
+diff --git a/util.c b/util.c
+index f100972..32bd909 100644
+--- a/util.c
++++ b/util.c
+@@ -811,7 +811,7 @@ unsigned long calc_csum(void *super, int bytes)
+ #ifndef MDASSEMBLE
+ char *human_size(long long bytes)
+ {
+-	static char buf[30];
++	static char buf[47];
+ 
+ 	/* We convert bytes to either centi-M{ega,ibi}bytes or
+ 	 * centi-G{igi,ibi}bytes, with appropriate rounding,
+-- 
+2.12.2
+
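
The 287 figure above follows from the 255-byte d_name limit of struct dirent plus the fixed part of the formatted path. As a hedged sketch, the same bound can be derived instead of hard-coded; the format string below is an invented stand-in, not the one Detail.c actually builds:

    #include <dirent.h>
    #include <stdio.h>
    #include <string.h>

    /* The format string here is a made-up example, not the one mdadm
     * uses.  On Linux, struct dirent's d_name holds at most 255 bytes
     * plus the terminating NUL, so the fixed format text plus 255 is a
     * safe upper bound for the whole formatted path. */
    #define SYSFS_FMT    "/sys/block/%s/md"
    #define PATH_BUF_LEN (sizeof(SYSFS_FMT) + 255)

    int main(void)
    {
        char path[PATH_BUF_LEN];
        struct dirent de;

        memset(&de, 0, sizeof(de));
        strcpy(de.d_name, "md127");   /* pretend readdir() returned this entry */

        int n = snprintf(path, sizeof(path), SYSFS_FMT, de.d_name);
        printf("%s (%d bytes formatted)\n", path, n);
        return 0;
    }
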
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0003-Replace-snprintf-with-strncpy-at-some-places-to-avoi.patch b/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0003-Replace-snprintf-with-strncpy-at-some-places-to-avoi.patch
new file mode 100644
index 0000000..dcec84f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0003-Replace-snprintf-with-strncpy-at-some-places-to-avoi.patch
@@ -0,0 +1,59 @@
+From bc87af1314325b00c6ac002a60a2b0f0caa81e34 Mon Sep 17 00:00:00 2001
+From: Xiao Ni <xni@redhat.com>
+Date: Sat, 18 Mar 2017 10:33:44 +0800
+Subject: [PATCH 3/5] Replace snprintf with strncpy at some places to avoid
+ truncation
+
+With gcc7 there are build errors like:
+directive output may be truncated writing up to 31 bytes into a region of size 24
+snprintf(str, MPB_SIG_LEN, %s, mpb->sig);
+
+These calls only need to copy one string into the target buffer, so
+strncpy can replace snprintf.
+
+For the line snprintf(str, MPB_SIG_LEN, %s, mpb->sig): because mpb->sig
+holds the version string right after the magic, it is better to use
+strncpy instead of snprintf there too.
+
+Signed-off-by: Xiao Ni <xni@redhat.com>
+Signed-off-by: Jes Sorensen <Jes.Sorensen@gmail.com>
+---
+Upstream-Status: Backport
+ super-intel.c | 9 ++++++---
+ 1 file changed, 6 insertions(+), 3 deletions(-)
+
+diff --git a/super-intel.c b/super-intel.c
+index 57c7e75..5499098 100644
+--- a/super-intel.c
++++ b/super-intel.c
+@@ -1811,7 +1811,8 @@ static void examine_super_imsm(struct supertype *st, char *homehost)
+ 	__u32 reserved = imsm_reserved_sectors(super, super->disks);
+ 	struct dl *dl;
+ 
+-	snprintf(str, MPB_SIG_LEN, "%s", mpb->sig);
++	strncpy(str, (char *)mpb->sig, MPB_SIG_LEN);
++	str[MPB_SIG_LEN-1] = '\0';
+ 	printf("          Magic : %s\n", str);
+ 	snprintf(str, strlen(MPB_VERSION_RAID0), "%s", get_imsm_version(mpb));
+ 	printf("        Version : %s\n", get_imsm_version(mpb));
+@@ -7142,14 +7143,16 @@ static int update_subarray_imsm(struct supertype *st, char *subarray,
+ 
+ 			u->type = update_rename_array;
+ 			u->dev_idx = vol;
+-			snprintf((char *) u->name, MAX_RAID_SERIAL_LEN, "%s", name);
++			strncpy((char *) u->name, name, MAX_RAID_SERIAL_LEN);
++			u->name[MAX_RAID_SERIAL_LEN-1] = '\0';
+ 			append_metadata_update(st, u, sizeof(*u));
+ 		} else {
+ 			struct imsm_dev *dev;
+ 			int i;
+ 
+ 			dev = get_imsm_dev(super, vol);
+-			snprintf((char *) dev->volume, MAX_RAID_SERIAL_LEN, "%s", name);
++			strncpy((char *) dev->volume, name, MAX_RAID_SERIAL_LEN);
++			dev->volume[MAX_RAID_SERIAL_LEN-1] = '\0';
+ 			for (i = 0; i < mpb->num_raid_devs; i++) {
+ 				dev = get_imsm_dev(super, i);
+ 				handle_missing(super, dev);
+-- 
+2.12.2
+
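
The pattern used in the hunks above — strncpy() into the fixed-width field followed by an explicit terminator — is the usual way to express an intentional bounded copy where snprintf("%s", ...) draws gcc 7's truncation warning. A small self-contained illustration, with invented field widths rather than mdadm's real MPB_SIG_LEN / MAX_RAID_SERIAL_LEN:

    #include <stdio.h>
    #include <string.h>

    /* Invented field widths; mdadm's real ones (MPB_SIG_LEN and friends)
     * differ. */
    #define SRC_LEN 48
    #define DST_LEN 24

    int main(void)
    {
        /* A fixed-width, on-disk style field that is not guaranteed to
         * be NUL-terminated. */
        char sig[SRC_LEN] = "example-signature-data 1.3.00";
        char str[DST_LEN];

        /* snprintf(str, DST_LEN, "%s", sig) makes gcc 7 warn that up to
         * SRC_LEN - 1 bytes may be written into a DST_LEN-byte region.
         * Copying at most DST_LEN bytes and terminating explicitly states
         * the intended truncation, which is what the patch does. */
        strncpy(str, sig, DST_LEN);
        str[DST_LEN - 1] = '\0';

        printf("Magic : %s\n", str);
        return 0;
    }
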
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0004-mdadm-Forced-type-conversion-to-avoid-truncation.patch b/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0004-mdadm-Forced-type-conversion-to-avoid-truncation.patch
new file mode 100644
index 0000000..94fde42
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0004-mdadm-Forced-type-conversion-to-avoid-truncation.patch
@@ -0,0 +1,33 @@
+From 5da889032e2d99751ed9fe60016146e9ae8114cd Mon Sep 17 00:00:00 2001
+From: Xiao Ni <xni@redhat.com>
+Date: Sat, 18 Mar 2017 10:33:45 +0800
+Subject: [PATCH 4/5] mdadm: Forced type conversion to avoid truncation
+
+Gcc reports that up to 19 bytes may be written to disk->serial, because
+the type of the argument i is int. But i is a failed-disk index, so it
+never actually needs 19 bytes. Add a type conversion to avoid this
+build error.
+
+Signed-off-by: Xiao Ni <xni@redhat.com>
+Signed-off-by: Jes Sorensen <Jes.Sorensen@gmail.com>
+---
+Upstream-Status: Backport
+ super-intel.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/super-intel.c b/super-intel.c
+index 5499098..4e466ff 100644
+--- a/super-intel.c
++++ b/super-intel.c
+@@ -5228,7 +5228,7 @@ static int init_super_imsm_volume(struct supertype *st, mdu_array_info_t *info,
+ 			disk->status = CONFIGURED_DISK | FAILED_DISK;
+ 			disk->scsi_id = __cpu_to_le32(~(__u32)0);
+ 			snprintf((char *) disk->serial, MAX_RAID_SERIAL_LEN,
+-				 "missing:%d", i);
++				 "missing:%d", (__u8)i);
+ 		}
+ 		find_missing(super);
+ 	} else {
+-- 
+2.12.2
+
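
The cast helps because gcc sizes the %d directive from the argument's type: a plain int may need up to 11 characters, while an 8-bit value needs at most 3. A minimal illustration with an invented buffer width, using uint8_t as a stand-in for the kernel's __u8:

    #include <stdio.h>
    #include <stdint.h>

    #define SERIAL_LEN 16   /* invented width, standing in for MAX_RAID_SERIAL_LEN */

    int main(void)
    {
        char serial[SERIAL_LEN];
        int i = 3;          /* disk index: logically small, but typed as int */

        /* With a plain int argument gcc assumes "%d" may expand to 11
         * characters, so "missing:%d" could need 20 bytes and a
         * format-truncation warning may be issued for this 16-byte
         * buffer.  Narrowing the argument tells the compiler the value
         * fits. */
        snprintf(serial, sizeof(serial), "missing:%d", (uint8_t)i);

        printf("%s\n", serial);
        return 0;
    }
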
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0005-Add-a-comment-to-indicate-valid-fallthrough.patch b/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0005-Add-a-comment-to-indicate-valid-fallthrough.patch
new file mode 100644
index 0000000..3d9d3b9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/mdadm/files/0005-Add-a-comment-to-indicate-valid-fallthrough.patch
@@ -0,0 +1,128 @@
+From 09014233bf10900f7bd8390b3b64ff82bca45222 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 19 Apr 2017 12:04:15 -0700
+Subject: [PATCH 5/5] Add a comment to indicate valid fallthrough
+
+gcc7 warns about code that falls through between switch cases. This
+patch adds marker comments to indicate each intentional fallthrough,
+which silences those gcc7 warnings.
+
+This works in both the cross and native compilation cases.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Submitted
+
+ Grow.c        | 4 ++++
+ bitmap.c      | 8 ++++++++
+ mdadm.c       | 2 ++
+ super-intel.c | 1 +
+ util.c        | 1 +
+ 5 files changed, 16 insertions(+)
+
+diff --git a/Grow.c b/Grow.c
+index 455c5f9..27c73b1 100755
+--- a/Grow.c
++++ b/Grow.c
+@@ -1257,6 +1257,7 @@ char *analyse_change(char *devname, struct mdinfo *info, struct reshape *re)
+ 		switch (info->new_level) {
+ 		case 4:
+ 			delta_parity = 1;
++			/* fallthrough */
+ 		case 0:
+ 			re->level = 4;
+ 			re->before.layout = 0;
+@@ -1284,10 +1285,12 @@ char *analyse_change(char *devname, struct mdinfo *info, struct reshape *re)
+ 
+ 	case 4:
+ 		info->array.layout = ALGORITHM_PARITY_N;
++		/* fallthrough */
+ 	case 5:
+ 		switch (info->new_level) {
+ 		case 0:
+ 			delta_parity = -1;
++			/* fallthrough */
+ 		case 4:
+ 			re->level = info->array.level;
+ 			re->before.data_disks = info->array.raid_disks - 1;
+@@ -1343,6 +1346,7 @@ char *analyse_change(char *devname, struct mdinfo *info, struct reshape *re)
+ 		case 4:
+ 		case 5:
+ 			delta_parity = -1;
++			/* fallthrough */
+ 		case 6:
+ 			re->level = 6;
+ 			re->before.data_disks = info->array.raid_disks - 2;
+diff --git a/bitmap.c b/bitmap.c
+index ccedfd3..a6ff091 100644
+--- a/bitmap.c
++++ b/bitmap.c
+@@ -82,13 +82,21 @@ static inline int count_dirty_bits_byte(char byte, int num_bits)
+ 
+ 	switch (num_bits) { /* fall through... */
+ 		case 8:	if (byte & 128) num++;
++		/* fallthrough */
+ 		case 7:	if (byte &  64) num++;
++		/* fallthrough */
+ 		case 6:	if (byte &  32) num++;
++		/* fallthrough */
+ 		case 5:	if (byte &  16) num++;
++		/* fallthrough */
+ 		case 4:	if (byte &   8) num++;
++		/* fallthrough */
+ 		case 3: if (byte &   4) num++;
++		/* fallthrough */
+ 		case 2:	if (byte &   2) num++;
++		/* fallthrough */
+ 		case 1:	if (byte &   1) num++;
++		/* fallthrough */
+ 		default: break;
+ 	}
+ 
+diff --git a/mdadm.c b/mdadm.c
+index c3a265b..2d06d3b 100644
+--- a/mdadm.c
++++ b/mdadm.c
+@@ -148,6 +148,7 @@ int main(int argc, char *argv[])
+ 			    mode == CREATE || mode == GROW ||
+ 			    mode == INCREMENTAL || mode == MANAGE)
+ 				break; /* b means bitmap */
++		/* fallthrough */
+ 		case Brief:
+ 			c.brief = 1;
+ 			continue;
+@@ -828,6 +829,7 @@ int main(int argc, char *argv[])
+ 
+ 		case O(INCREMENTAL,NoDegraded):
+ 			pr_err("--no-degraded is deprecated in Incremental mode\n");
++			/* fallthrough */
+ 		case O(ASSEMBLE,NoDegraded): /* --no-degraded */
+ 			c.runstop = -1; /* --stop isn't allowed for --assemble,
+ 					 * so we overload slightly */
+diff --git a/super-intel.c b/super-intel.c
+index 4e466ff..00a2925 100644
+--- a/super-intel.c
++++ b/super-intel.c
+@@ -3271,6 +3271,7 @@ static void getinfo_super_imsm_volume(struct supertype *st, struct mdinfo *info,
+ 						<< SECT_PER_MB_SHIFT;
+ 			}
+ 		}
++		/* fallthrough */
+ 		case MIGR_VERIFY:
+ 			/* we could emulate the checkpointing of
+ 			 * 'sync_action=check' migrations, but for now
+diff --git a/util.c b/util.c
+index 32bd909..f2a4d19 100644
+--- a/util.c
++++ b/util.c
+@@ -335,6 +335,7 @@ unsigned long long parse_size(char *size)
+ 		switch (*c) {
+ 		case 'K':
+ 			c++;
++		/* fallthrough */
+ 		default:
+ 			s *= 2;
+ 			break;
+-- 
+2.12.2
+
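
gcc 7's -Wimplicit-fallthrough accepts marker comments such as /* fallthrough */ placed directly before the next case label, which is exactly what the hunks above add. A minimal self-contained example of the idiom, loosely modeled on the bitmap.c cascade rather than copied from it:

    #include <stdio.h>

    /* Count how many of the low 'num_bits' bits of 'byte' are set, in
     * the same cascading switch style as mdadm's bitmap.c (shortened
     * here).  Every case deliberately falls through to the next; the
     * marker comments keep gcc 7's -Wimplicit-fallthrough (enabled by
     * -Wextra) quiet. */
    static int count_low_bits(unsigned char byte, int num_bits)
    {
        int num = 0;

        switch (num_bits) {
        case 4: if (byte & 8) num++;
            /* fallthrough */
        case 3: if (byte & 4) num++;
            /* fallthrough */
        case 2: if (byte & 2) num++;
            /* fallthrough */
        case 1: if (byte & 1) num++;
            /* fallthrough */
        default:
            break;
        }
        return num;
    }

    int main(void)
    {
        printf("%d\n", count_low_bits(0x0f, 3));    /* prints 3 */
        return 0;
    }
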
diff --git a/import-layers/yocto-poky/meta/recipes-extended/mdadm/mdadm_4.0.bb b/import-layers/yocto-poky/meta/recipes-extended/mdadm/mdadm_4.0.bb
index 62614f0..dc098f1 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/mdadm/mdadm_4.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/mdadm/mdadm_4.0.bb
@@ -16,6 +16,12 @@
            file://run-ptest \
            file://0001-mdadm.h-Undefine-dprintf-before-redefining.patch \
            file://0001-include-sys-sysmacros.h-for-major-minor-defintions.patch \
+           file://0001-mdadm-Add-Wimplicit-fallthrough-0-in-Makefile.patch \
+           file://0002-mdadm-Specify-enough-length-when-write-to-buffer.patch \
+           file://0003-Replace-snprintf-with-strncpy-at-some-places-to-avoi.patch \
+           file://0004-mdadm-Forced-type-conversion-to-avoid-truncation.patch \
+           file://0005-Add-a-comment-to-indicate-valid-fallthrough.patch \
+           file://0001-Use-CC-to-check-for-implicit-fallthrough-warning-sup.patch \
            "
 SRC_URI[md5sum] = "2cb4feffea9167ba71b5f346a0c0a40d"
 SRC_URI[sha256sum] = "1d6ae7f24ced3a0fa7b5613b32f4a589bb4881e3946a5a2c3724056254ada3a9"
@@ -50,7 +56,7 @@
 }
 
 do_install_ptest() {
-	cp -a ${S}/tests ${D}${PTEST_PATH}/tests
+	cp -R --no-dereference --preserve=mode,links -v ${S}/tests ${D}${PTEST_PATH}/tests
 	cp ${S}/test ${D}${PTEST_PATH}
 	sed -e 's!sleep 0.*!sleep 1!g; s!/var/tmp!/!g' -i ${D}${PTEST_PATH}/test
 	ln -s ${base_sbindir}/mdadm ${D}${PTEST_PATH}/mdadm
diff --git a/import-layers/yocto-poky/meta/recipes-extended/minicom/minicom_2.7.1.bb b/import-layers/yocto-poky/meta/recipes-extended/minicom/minicom_2.7.1.bb
new file mode 100644
index 0000000..1a31a87
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/minicom/minicom_2.7.1.bb
@@ -0,0 +1,29 @@
+SUMMARY = "Text-based modem control and terminal emulation program"
+HOMEPAGE = "http://alioth.debian.org/projects/minicom/"
+DESCRIPTION = "Minicom is a text-based modem control and terminal emulation program for Unix-like operating systems"
+SECTION = "console/network"
+DEPENDS = "ncurses virtual/libiconv"
+LICENSE = "GPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=420477abc567404debca0a2a1cb6b645 \
+                    file://src/minicom.h;beginline=1;endline=12;md5=a58838cb709f0db517f4e42730c49e81"
+
+SRC_URI = "https://alioth.debian.org/frs/download.php/latestfile/3/${BP}.tar.gz \
+           file://allow.to.disable.lockdev.patch \
+           file://0001-fix-minicom-h-v-return-value-is-not-0.patch \
+           file://0001-Fix-build-issus-surfaced-due-to-musl.patch \
+          "
+
+SRC_URI[md5sum] = "9021cb8c5445f6e6e74b2acc39962d62"
+SRC_URI[sha256sum] = "532f836b7a677eb0cb1dca8d70302b73729c3d30df26d58368d712e5cca041f1"
+
+UPSTREAM_CHECK_URI = "https://alioth.debian.org/frs/?group_id=30018"
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[lockdev] = "--enable-lockdev,--disable-lockdev,lockdev"
+
+inherit autotools gettext pkgconfig
+
+do_install() {
+	for d in doc extras man lib src; do make -C $d DESTDIR=${D} install; done
+}
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/minicom/minicom_2.7.bb b/import-layers/yocto-poky/meta/recipes-extended/minicom/minicom_2.7.bb
deleted file mode 100644
index 3118686..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/minicom/minicom_2.7.bb
+++ /dev/null
@@ -1,28 +0,0 @@
-SUMMARY = "Text-based modem control and terminal emulation program"
-DESCRIPTION = "Minicom is a text-based modem control and terminal emulation program for Unix-like operating systems"
-SECTION = "console/network"
-DEPENDS = "ncurses virtual/libiconv"
-LICENSE = "GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=420477abc567404debca0a2a1cb6b645 \
-                    file://src/minicom.h;beginline=1;endline=12;md5=a58838cb709f0db517f4e42730c49e81"
-
-SRC_URI = "https://alioth.debian.org/frs/download.php/latestfile/3/${BP}.tar.gz \
-           file://allow.to.disable.lockdev.patch \
-           file://0001-fix-minicom-h-v-return-value-is-not-0.patch \
-           file://0001-Fix-build-issus-surfaced-due-to-musl.patch \
-          "
-
-SRC_URI[md5sum] = "7044ca3e291268c33294f171d426dc2d"
-SRC_URI[sha256sum] = "9ac3a663b82f4f5df64114b4792b9926b536c85f59de0f2d2b321c7626a904f4"
-
-UPSTREAM_CHECK_URI = "https://alioth.debian.org/frs/?group_id=30018"
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[lockdev] = "--enable-lockdev,--disable-lockdev,lockdev"
-
-inherit autotools gettext pkgconfig
-
-do_install() {
-	for d in doc extras man lib src; do make -C $d DESTDIR=${D} install; done
-}
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/net-tools/net-tools_1.60-26.bb b/import-layers/yocto-poky/meta/recipes-extended/net-tools/net-tools_1.60-26.bb
index 45d7bf4..5657fd8 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/net-tools/net-tools_1.60-26.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/net-tools/net-tools_1.60-26.bb
@@ -37,9 +37,7 @@
 
 inherit gettext
 
-do_patch[depends] = "quilt-native:do_populate_sysroot"
-
-LDFLAGS_append_libc-uclibc = " -lintl "
+do_patch[depends] += "quilt-native:do_populate_sysroot"
 
 # The Makefile is lame, no parallel build
 PARALLEL_MAKE = ""
@@ -83,20 +81,12 @@
 
 do_compile() {
 	# net-tools use COPTS/LOPTS to allow adding custom options
-	export COPTS="$CFLAGS"
-	export LOPTS="$LDFLAGS"
-	unset CFLAGS
-	unset LDFLAGS
-
-	oe_runmake
+	oe_runmake COPTS="$CFLAGS" LOPTS="$LDFLAGS"
 }
 
 do_install() {
-	export COPTS="$CFLAGS"
-	export LOPTS="$LDFLAGS"
-	unset CFLAGS
-	unset LDFLAGS
-	oe_runmake 'BASEDIR=${D}' install
+	# We don't need COPTS or LOPTS, but let's be consistent.
+	oe_runmake COPTS="$CFLAGS" LOPTS="$LDFLAGS" 'BASEDIR=${D}' install
 
 	if [ "${base_bindir}" != "/bin" ]; then
 		mkdir -p ${D}/${base_bindir}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/newt/libnewt-python_0.52.19.bb b/import-layers/yocto-poky/meta/recipes-extended/newt/libnewt-python_0.52.20.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-extended/newt/libnewt-python_0.52.19.bb
rename to import-layers/yocto-poky/meta/recipes-extended/newt/libnewt-python_0.52.20.bb
diff --git a/import-layers/yocto-poky/meta/recipes-extended/newt/libnewt_0.52.19.bb b/import-layers/yocto-poky/meta/recipes-extended/newt/libnewt_0.52.19.bb
deleted file mode 100644
index de76ce2..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/newt/libnewt_0.52.19.bb
+++ /dev/null
@@ -1,54 +0,0 @@
-SUMMARY = "A library for text mode user interfaces"
-
-DESCRIPTION = "Newt is a programming library for color text mode, widget based user \
-interfaces.  Newt can be used to add stacked windows, entry widgets, \
-checkboxes, radio buttons, labels, plain text fields, scrollbars, \
-etc., to text mode user interfaces.  This package also contains the \
-shared library needed by programs built with newt, as well as a \
-/usr/bin/dialog replacement called whiptail.  Newt is based on the \
-slang library."
-
-HOMEPAGE = "https://releases.pagure.org/newt/"
-SECTION = "libs"
-
-LICENSE = "LGPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=5f30f0716dfdd0d91eb439ebec522ec2"
-
-# slang needs to be >= 2.2
-DEPENDS = "slang popt"
-
-SRC_URI = "https://releases.pagure.org/newt/newt-${PV}.tar.gz \
-           file://fix_SHAREDDIR.patch \
-           file://cross_ar.patch \
-           file://Makefile.in-Add-tinfo-library-to-the-linking-librari.patch \
-           file://pie-flags.patch \
-           file://0001-detect-gold-as-GNU-linker-too.patch \
-"
-
-SRC_URI[md5sum] = "e4aa0f7943edd39c52481a87f68f412a"
-SRC_URI[sha256sum] = "08c0db56c21996af6a7cbab99491b774c6c09cef91cd9b03903c84634bff2e80"
-
-S = "${WORKDIR}/newt-${PV}"
-
-EXTRA_OECONF = "--without-tcl --without-python"
-
-inherit autotools-brokensep
-
-CLEANBROKEN = "1"
-
-export CPPFLAGS
-
-PACKAGES_prepend = "whiptail "
-
-do_configure_prepend() {
-    sh autogen.sh
-}
-
-do_compile_prepend() {
-    # Make sure the recompile is OK
-    rm -f ${B}/.depend
-}
-
-FILES_whiptail = "${bindir}/whiptail"
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/newt/libnewt_0.52.20.bb b/import-layers/yocto-poky/meta/recipes-extended/newt/libnewt_0.52.20.bb
new file mode 100644
index 0000000..65ce70c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/newt/libnewt_0.52.20.bb
@@ -0,0 +1,54 @@
+SUMMARY = "A library for text mode user interfaces"
+
+DESCRIPTION = "Newt is a programming library for color text mode, widget based user \
+interfaces.  Newt can be used to add stacked windows, entry widgets, \
+checkboxes, radio buttons, labels, plain text fields, scrollbars, \
+etc., to text mode user interfaces.  This package also contains the \
+shared library needed by programs built with newt, as well as a \
+/usr/bin/dialog replacement called whiptail.  Newt is based on the \
+slang library."
+
+HOMEPAGE = "https://releases.pagure.org/newt/"
+SECTION = "libs"
+
+LICENSE = "LGPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=5f30f0716dfdd0d91eb439ebec522ec2"
+
+# slang needs to be >= 2.2
+DEPENDS = "slang popt"
+
+SRC_URI = "https://releases.pagure.org/newt/newt-${PV}.tar.gz \
+           file://fix_SHAREDDIR.patch \
+           file://cross_ar.patch \
+           file://Makefile.in-Add-tinfo-library-to-the-linking-librari.patch \
+           file://pie-flags.patch \
+           file://0001-detect-gold-as-GNU-linker-too.patch \
+"
+
+SRC_URI[md5sum] = "70b288f821234593a8e7920e435b259b"
+SRC_URI[sha256sum] = "8d66ba6beffc3f786d4ccfee9d2b43d93484680ef8db9397a4fb70b5adbb6dbc"
+
+S = "${WORKDIR}/newt-${PV}"
+
+EXTRA_OECONF = "--without-tcl --without-python"
+
+inherit autotools-brokensep
+
+CLEANBROKEN = "1"
+
+export CPPFLAGS
+
+PACKAGES_prepend = "whiptail "
+
+do_configure_prepend() {
+    sh autogen.sh
+}
+
+do_compile_prepend() {
+    # Make sure the recompile is OK
+    rm -f ${B}/.depend
+}
+
+FILES_whiptail = "${bindir}/whiptail"
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/packagegroups/packagegroup-core-full-cmdline.bb b/import-layers/yocto-poky/meta/recipes-extended/packagegroups/packagegroup-core-full-cmdline.bb
index d8975f2..fdede59 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/packagegroups/packagegroup-core-full-cmdline.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/packagegroups/packagegroup-core-full-cmdline.bb
@@ -86,7 +86,6 @@
     mc-fish \
     mc-helpers \
     mc-helpers-perl \
-    mc-helpers-python \
     mktemp \
     ncurses \
     net-tools \
@@ -110,7 +109,6 @@
     "
 
 RDEPENDS_packagegroup-core-full-cmdline-dev-utils = "\
-    byacc \
     diffutils \
     m4 \
     make \
diff --git a/import-layers/yocto-poky/meta/recipes-extended/packagegroups/packagegroup-core-lsb.bb b/import-layers/yocto-poky/meta/recipes-extended/packagegroups/packagegroup-core-lsb.bb
index a156bcb..5baaf35 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/packagegroups/packagegroup-core-lsb.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/packagegroups/packagegroup-core-lsb.bb
@@ -66,7 +66,6 @@
     mc-fish \
     mc-helpers \
     mc-helpers-perl \
-    mc-helpers-python \
     mdadm \
     minicom \
     neon \
@@ -154,7 +153,6 @@
     localedef \
     lsb \
     m4 \
-    mailx \
     make \
     man \
     man-pages \
@@ -178,7 +176,6 @@
     ncurses \
     zlib \
     nspr \
-    libpng12 \
     nss \
 "
 
@@ -203,31 +200,6 @@
     python-misc \
 "
 
-QT4PKGS = " \
-    libqtcore4 \
-    libqtgui4 \
-    libqtsql4 \
-    libqtsvg4 \
-    libqtxml4 \
-    libqtnetwork4 \
-    qt4-plugin-sqldriver-sqlite \
-    ${@bb.utils.contains("DISTRO_FEATURES", "opengl", "libqtopengl4", "", d)} \
-    "
-QT4PKGS_mips64 = ""
-QT4PKGS_mips64n32 = ""
-
-def get_libqt4(d):
-    if 'linuxstdbase' in d.getVar('DISTROOVERRIDES', False) or "":
-        if 'qt4' in d.getVar('BBFILE_COLLECTIONS', False) or "":
-            return d.getVar('QT4PKGS', False)
-
-        bb.warn('The meta-qt4 layer should be added, this layer provides Qt 4.x ' \
-                'libraries. Its intended use is for passing LSB tests as Qt4 is ' \
-                'a requirement for LSB.')
-    return ''
-# We don't want this to rebuild every time you change your layer config
-get_libqt4[vardepsexclude] += "BBFILE_COLLECTIONS"
-
 SUMMARY_packagegroup-core-lsb-desktop = "LSB Desktop"
 DESCRIPTION_packagegroup-core-lsb-desktop = "Packages required to support libraries \
     specified in the LSB Desktop specification"
@@ -248,7 +220,6 @@
     gtk+ \
     atk \
     libasound \
-    ${@get_libqt4(d)} \
 "
 
 RDEPENDS_packagegroup-core-lsb-runtime-add = "\
@@ -267,10 +238,4 @@
     glibc-localedata-posix \
     glibc-extra-nss \
     glibc-pcprofile \
-    libclass-isa-perl \
-    libenv-perl \
-    libdumpvalue-perl \
-    libfile-checktree-perl \
-    libi18n-collate-perl \
-    libpod-plainer-perl \
 "
diff --git a/import-layers/yocto-poky/meta/recipes-extended/pam/libpam/use-utmpx.patch b/import-layers/yocto-poky/meta/recipes-extended/pam/libpam/use-utmpx.patch
deleted file mode 100644
index dd04bbb..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/pam/libpam/use-utmpx.patch
+++ /dev/null
@@ -1,233 +0,0 @@
-utmp() may not be configured in and use posix compliant utmpx always
-UTMP is SVID legacy, UTMPX is mandated by POSIX
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Index: Linux-PAM-1.2.1/libpam/pam_modutil_getlogin.c
-===================================================================
---- Linux-PAM-1.2.1.orig/libpam/pam_modutil_getlogin.c
-+++ Linux-PAM-1.2.1/libpam/pam_modutil_getlogin.c
-@@ -10,8 +10,7 @@
- 
- #include <stdlib.h>
- #include <unistd.h>
--#include <utmp.h>
--
-+#include <utmpx.h>
- #define _PAMMODUTIL_GETLOGIN "_pammodutil_getlogin"
- 
- const char *
-@@ -22,7 +21,7 @@ pam_modutil_getlogin(pam_handle_t *pamh)
-     const void *void_curr_tty;
-     const char *curr_tty;
-     char *curr_user;
--    struct utmp *ut, line;
-+    struct utmpx *ut, line;
- 
-     status = pam_get_data(pamh, _PAMMODUTIL_GETLOGIN, &logname);
-     if (status == PAM_SUCCESS) {
-@@ -48,10 +47,10 @@ pam_modutil_getlogin(pam_handle_t *pamh)
-     }
-     logname = NULL;
- 
--    setutent();
-+    setutxent();
-     strncpy(line.ut_line, curr_tty, sizeof(line.ut_line));
- 
--    if ((ut = getutline(&line)) == NULL) {
-+    if ((ut = getutxline(&line)) == NULL) {
- 	goto clean_up_and_go_home;
-     }
- 
-@@ -74,7 +73,7 @@ pam_modutil_getlogin(pam_handle_t *pamh)
- 
- clean_up_and_go_home:
- 
--    endutent();
-+    endutxent();
- 
-     return logname;
- }
-Index: Linux-PAM-1.2.1/modules/pam_issue/pam_issue.c
-===================================================================
---- Linux-PAM-1.2.1.orig/modules/pam_issue/pam_issue.c
-+++ Linux-PAM-1.2.1/modules/pam_issue/pam_issue.c
-@@ -25,7 +25,7 @@
- #include <string.h>
- #include <unistd.h>
- #include <sys/utsname.h>
--#include <utmp.h>
-+#include <utmpx.h>
- #include <time.h>
- #include <syslog.h>
- 
-@@ -246,13 +246,13 @@ read_issue_quoted(pam_handle_t *pamh, FI
- 	      case 'U':
- 		{
- 		    unsigned int users = 0;
--		    struct utmp *ut;
--		    setutent();
--		    while ((ut = getutent())) {
-+		    struct utmpx *ut;
-+		    setutxent();
-+		    while ((ut = getutxent())) {
- 			if (ut->ut_type == USER_PROCESS)
- 			    ++users;
- 		    }
--		    endutent();
-+		    endutxent();
- 		    if (c == 'U')
- 			snprintf (buf, sizeof buf, "%u %s", users,
- 			          (users == 1) ? "user" : "users");
-Index: Linux-PAM-1.2.1/modules/pam_lastlog/pam_lastlog.c
-===================================================================
---- Linux-PAM-1.2.1.orig/modules/pam_lastlog/pam_lastlog.c
-+++ Linux-PAM-1.2.1/modules/pam_lastlog/pam_lastlog.c
-@@ -15,8 +15,9 @@
- #include <errno.h>
- #ifdef HAVE_UTMP_H
- # include <utmp.h>
--#else
--# include <lastlog.h>
-+#endif
-+#ifdef HAVE_UTMPX_H
-+# include <utmpx.h>
- #endif
- #include <pwd.h>
- #include <stdlib.h>
-@@ -27,6 +28,12 @@
- #include <syslog.h>
- #include <unistd.h>
- 
-+#ifndef HAVE_UTMP_H
-+#define UT_LINESIZE 32
-+#define UT_HOSTSIZE 32
-+#define UT_NAMESIZE 256
-+#endif
-+
- #if defined(hpux) || defined(sunos) || defined(solaris)
- # ifndef _PATH_LASTLOG
- #  define _PATH_LASTLOG "/usr/adm/lastlog"
-@@ -38,7 +45,7 @@
- #  define UT_LINESIZE 12
- # endif /* UT_LINESIZE */
- #endif
--#if defined(hpux)
-+#if defined(hpux) || !defined HAVE_UTMP_H
- struct lastlog {
-     time_t  ll_time;
-     char    ll_line[UT_LINESIZE];
-@@ -447,8 +454,8 @@ last_login_failed(pam_handle_t *pamh, in
- {
-     int retval;
-     int fd;
--    struct utmp ut;
--    struct utmp utuser;
-+    struct utmpx ut;
-+    struct utmpx utuser;
-     int failed = 0;
-     char the_time[256];
-     char *date = NULL;
-Index: Linux-PAM-1.2.1/modules/pam_limits/pam_limits.c
-===================================================================
---- Linux-PAM-1.2.1.orig/modules/pam_limits/pam_limits.c
-+++ Linux-PAM-1.2.1/modules/pam_limits/pam_limits.c
-@@ -33,7 +33,7 @@
- #include <sys/resource.h>
- #include <limits.h>
- #include <glob.h>
--#include <utmp.h>
-+#include <utmpx.h>
- #ifndef UT_USER  /* some systems have ut_name instead of ut_user */
- #define UT_USER ut_user
- #endif
-@@ -227,7 +227,7 @@ static int
- check_logins (pam_handle_t *pamh, const char *name, int limit, int ctrl,
-               struct pam_limit_s *pl)
- {
--    struct utmp *ut;
-+    struct utmpx *ut;
-     int count;
- 
-     if (ctrl & PAM_DEBUG_ARG) {
-@@ -242,7 +242,7 @@ check_logins (pam_handle_t *pamh, const
-         return LOGIN_ERR;
-     }
- 
--    setutent();
-+    setutxent();
- 
-     /* Because there is no definition about when an application
-        actually adds a utmp entry, some applications bizarrely do the
-@@ -260,7 +260,7 @@ check_logins (pam_handle_t *pamh, const
- 	count = 1;
-     }
- 
--    while((ut = getutent())) {
-+    while((ut = getutxent())) {
- #ifdef USER_PROCESS
-         if (ut->ut_type != USER_PROCESS) {
-             continue;
-@@ -296,7 +296,7 @@ check_logins (pam_handle_t *pamh, const
- 	    break;
- 	}
-     }
--    endutent();
-+    endutxent();
-     if (count > limit) {
- 	if (name) {
- 	    pam_syslog(pamh, LOG_WARNING,
-Index: Linux-PAM-1.2.1/modules/pam_timestamp/pam_timestamp.c
-===================================================================
---- Linux-PAM-1.2.1.orig/modules/pam_timestamp/pam_timestamp.c
-+++ Linux-PAM-1.2.1/modules/pam_timestamp/pam_timestamp.c
-@@ -56,7 +56,7 @@
- #include <time.h>
- #include <sys/time.h>
- #include <unistd.h>
--#include <utmp.h>
-+#include <utmpx.h>
- #include <syslog.h>
- #include <paths.h>
- #include "hmacsha1.h"
-@@ -197,15 +197,15 @@ timestamp_good(time_t then, time_t now,
- static int
- check_login_time(const char *ruser, time_t timestamp)
- {
--	struct utmp utbuf, *ut;
-+	struct utmpx utbuf, *ut;
- 	time_t oldest_login = 0;
- 
--	setutent();
-+	setutxent();
- 	while(
- #ifdef HAVE_GETUTENT_R
--	      !getutent_r(&utbuf, &ut)
-+	      !getutxent_r(&utbuf, &ut)
- #else
--	      (ut = getutent()) != NULL
-+	      (ut = getutxent()) != NULL
- #endif
- 	      ) {
- 		if (ut->ut_type != USER_PROCESS) {
-@@ -218,7 +218,7 @@ check_login_time(const char *ruser, time
- 			oldest_login = ut->ut_tv.tv_sec;
- 		}
- 	}
--	endutent();
-+	endutxent();
- 	if(oldest_login == 0 || timestamp < oldest_login) {
- 		return PAM_AUTH_ERR;
- 	}
-Index: Linux-PAM-1.2.1/modules/pam_unix/support.c
-===================================================================
---- Linux-PAM-1.2.1.orig/modules/pam_unix/support.c
-+++ Linux-PAM-1.2.1/modules/pam_unix/support.c
-@@ -13,7 +13,6 @@
- #include <pwd.h>
- #include <shadow.h>
- #include <limits.h>
--#include <utmp.h>
- #include <errno.h>
- #include <signal.h>
- #include <ctype.h>
diff --git a/import-layers/yocto-poky/meta/recipes-extended/pam/libpam_1.3.0.bb b/import-layers/yocto-poky/meta/recipes-extended/pam/libpam_1.3.0.bb
index df56d27..8f7753d 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/pam/libpam_1.3.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/pam/libpam_1.3.0.bb
@@ -28,8 +28,6 @@
 SRC_URI[md5sum] = "da4b2289b7cfb19583d54e9eaaef1c3a"
 SRC_URI[sha256sum] = "241aed1ef522f66ed672719ecf2205ec513fd0075ed80cda8e086a5b1a01d1bb"
 
-SRC_URI_append_libc-uclibc = " file://use-utmpx.patch"
-
 SRC_URI_append_libc-musl = " file://0001-Add-support-for-defining-missing-funcitonality.patch \
                              file://include_paths_header.patch \
                            "
diff --git a/import-layers/yocto-poky/meta/recipes-extended/parted/files/0001-Move-python-helper-scripts-used-only-in-tests-to-Pyt.patch b/import-layers/yocto-poky/meta/recipes-extended/parted/files/0001-Move-python-helper-scripts-used-only-in-tests-to-Pyt.patch
new file mode 100644
index 0000000..428b14e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/parted/files/0001-Move-python-helper-scripts-used-only-in-tests-to-Pyt.patch
@@ -0,0 +1,44 @@
+From 6e82af54714392dcdf74a8aedaae7de7d0af1080 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Thu, 27 Apr 2017 16:37:24 +0300
+Subject: [PATCH] Move python helper scripts (used only in tests) to Python 3
+
+Upstream-Status: Pending
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+ tests/gpt-header-move | 2 +-
+ tests/msdos-overlap   | 4 ++--
+ 2 files changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/tests/gpt-header-move b/tests/gpt-header-move
+index 05cdc65..3cbcb7e 100755
+--- a/tests/gpt-header-move
++++ b/tests/gpt-header-move
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python3
+ 
+ # open img file, subtract 33 from altlba address, and move the last 33 sectors
+ # back by 33 sectors
+diff --git a/tests/msdos-overlap b/tests/msdos-overlap
+index 5bddfb0..3de7d2e 100755
+--- a/tests/msdos-overlap
++++ b/tests/msdos-overlap
+@@ -1,4 +1,4 @@
+-#!/usr/bin/python
++#!/usr/bin/env python3
+ """
+     Write an overlapping partition to a msdos disk
+ 
+@@ -14,7 +14,7 @@ BAD_ENTRY = (0x72, 0xf5, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
+ OFFSET = 0x1b8
+ 
+ if len(sys.argv) < 2:
+-    print "%s: <image or device>"
++    print("%s: <image or device>")
+     sys.exit(1)
+ 
+ data = "".join(chr(c) for c in BAD_ENTRY)
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/parted/files/0001-libparted-Use-read-only-when-probing-devices-on-linu.patch b/import-layers/yocto-poky/meta/recipes-extended/parted/files/0001-libparted-Use-read-only-when-probing-devices-on-linu.patch
new file mode 100644
index 0000000..e522e1c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/parted/files/0001-libparted-Use-read-only-when-probing-devices-on-linu.patch
@@ -0,0 +1,224 @@
+From d6e15a60e84c1511523aa81272b7db7a6ec441d0 Mon Sep 17 00:00:00 2001
+From: Ovidiu Panait <ovidiu.panait@windriver.com>
+Date: Tue, 26 Sep 2017 08:04:58 +0000
+Subject: [PATCH] libparted: Use read only when probing devices on linux
+ (#1245144)
+
+When a device is opened for RW closing it can trigger other actions,
+like udev scanning it for partition changes. Use read only for the
+init_* methods and RW for actual changes to the device.
+
+This adds _device_open which takes mode flags as an argument and turns
+linux_open into a wrapper for it with RW_MODE.
+
+_device_open_ro is added to open the device with RD_MODE and increment
+the open_counter. This is used in the init_* functions.
+
+_device_close is a wrapper around linux_close that decrements the
+open_counter and is used in the init_* functions.
+
+All of these changes are self-contained with no external API changes.
+The only visible change in behavior is that when a new PedDevice is
+created the device is opened in RO_MODE instead of RW_MODE.
+
+Resolves: rhbz#1245144
+
+Upstream-Status: Backport
+
+Author: Brian C. Lane <bcl@redhat.com>
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
+---
+ libparted/arch/linux.c | 62 +++++++++++++++++++++++++++++++++++---------------
+ 1 file changed, 44 insertions(+), 18 deletions(-)
+
+diff --git a/libparted/arch/linux.c b/libparted/arch/linux.c
+index f612617..0a06a54 100644
+--- a/libparted/arch/linux.c
++++ b/libparted/arch/linux.c
+@@ -294,7 +294,9 @@ struct blkdev_ioctl_param {
+ static char* _device_get_part_path (PedDevice const *dev, int num);
+ static int _partition_is_mounted_by_path (const char* path);
+ static unsigned int _device_get_partition_range(PedDevice const* dev);
+-
++static int _device_open (PedDevice* dev, int flags);
++static int _device_open_ro (PedDevice* dev);
++static int _device_close (PedDevice* dev);
+ 
+ static int
+ _read_fd (int fd, char **buf)
+@@ -913,7 +915,7 @@ init_ide (PedDevice* dev)
+         if (!_device_stat (dev, &dev_stat))
+                 goto error;
+ 
+-        if (!ped_device_open (dev))
++        if (!_device_open_ro (dev))
+                 goto error;
+ 
+         if (ioctl (arch_specific->fd, HDIO_GET_IDENTITY, &hdi)) {
+@@ -982,11 +984,11 @@ init_ide (PedDevice* dev)
+         if (!_device_probe_geometry (dev))
+                 goto error_close_dev;
+ 
+-        ped_device_close (dev);
++        _device_close (dev);
+         return 1;
+ 
+ error_close_dev:
+-        ped_device_close (dev);
++        _device_close (dev);
+ error:
+         return 0;
+ }
+@@ -1119,7 +1121,7 @@ init_scsi (PedDevice* dev)
+         char* vendor;
+         char* product;
+ 
+-        if (!ped_device_open (dev))
++        if (!_device_open_ro (dev))
+                 goto error;
+ 
+         if (ioctl (arch_specific->fd, SCSI_IOCTL_GET_IDLUN, &idlun) < 0) {
+@@ -1133,7 +1135,7 @@ init_scsi (PedDevice* dev)
+                         goto error_close_dev;
+                 if (!_device_probe_geometry (dev))
+                         goto error_close_dev;
+-                ped_device_close (dev);
++                _device_close (dev);
+                 return 1;
+         }
+ 
+@@ -1155,11 +1157,11 @@ init_scsi (PedDevice* dev)
+         if (!_device_probe_geometry (dev))
+                 goto error_close_dev;
+ 
+-        ped_device_close (dev);
++        _device_close (dev);
+         return 1;
+ 
+ error_close_dev:
+-        ped_device_close (dev);
++        _device_close (dev);
+ error:
+         return 0;
+ }
+@@ -1171,7 +1173,7 @@ init_file (PedDevice* dev)
+ 
+         if (!_device_stat (dev, &dev_stat))
+                 goto error;
+-        if (!ped_device_open (dev))
++        if (!_device_open_ro (dev))
+                 goto error;
+ 
+         dev->sector_size = PED_SECTOR_SIZE_DEFAULT;
+@@ -1198,7 +1200,7 @@ init_file (PedDevice* dev)
+                 goto error_close_dev;
+         }
+ 
+-        ped_device_close (dev);
++        _device_close (dev);
+ 
+         dev->bios_geom.cylinders = dev->length / 4 / 32;
+         dev->bios_geom.heads = 4;
+@@ -1209,7 +1211,7 @@ init_file (PedDevice* dev)
+         return 1;
+ 
+ error_close_dev:
+-        ped_device_close (dev);
++        _device_close (dev);
+ error:
+         return 0;
+ }
+@@ -1225,7 +1227,7 @@ init_dasd (PedDevice* dev, const char* model_name)
+         if (!_device_stat (dev, &dev_stat))
+                 goto error;
+ 
+-        if (!ped_device_open (dev))
++        if (!_device_open_ro (dev))
+                 goto error;
+ 
+         LinuxSpecific* arch_specific = LINUX_SPECIFIC (dev);
+@@ -1265,11 +1267,11 @@ init_dasd (PedDevice* dev, const char* model_name)
+ 
+         dev->model = strdup (model_name);
+ 
+-        ped_device_close (dev);
++        _device_close (dev);
+         return 1;
+ 
+ error_close_dev:
+-        ped_device_close (dev);
++        _device_close (dev);
+ error:
+         return 0;
+ }
+@@ -1284,7 +1286,7 @@ init_generic (PedDevice* dev, const char* model_name)
+         if (!_device_stat (dev, &dev_stat))
+                 goto error;
+ 
+-        if (!ped_device_open (dev))
++        if (!_device_open_ro (dev))
+                 goto error;
+ 
+         ped_exception_fetch_all ();
+@@ -1332,11 +1334,11 @@ init_generic (PedDevice* dev, const char* model_name)
+ 
+         dev->model = strdup (model_name);
+ 
+-        ped_device_close (dev);
++        _device_close (dev);
+         return 1;
+ 
+ error_close_dev:
+-        ped_device_close (dev);
++        _device_close (dev);
+ error:
+         return 0;
+ }
+@@ -1623,12 +1625,27 @@ retry:
+ }
+ 
+ static int
++_device_open_ro (PedDevice* dev)
++{
++    int rc = _device_open (dev, RD_MODE);
++    if (rc)
++        dev->open_count++;
++    return rc;
++}
++
++static int
+ linux_open (PedDevice* dev)
+ {
++    return _device_open (dev, RW_MODE);
++}
++
++static int
++_device_open (PedDevice* dev, int flags)
++{
+         LinuxSpecific*  arch_specific = LINUX_SPECIFIC (dev);
+ 
+ retry:
+-        arch_specific->fd = open (dev->path, RW_MODE);
++        arch_specific->fd = open (dev->path, flags);
+ 
+         if (arch_specific->fd == -1) {
+                 char*   rw_error_msg = strerror (errno);
+@@ -1697,6 +1714,15 @@ linux_refresh_close (PedDevice* dev)
+         return 1;
+ }
+ 
++static int
++_device_close (PedDevice* dev)
++{
++    int rc = linux_close (dev);
++    if (dev->open_count > 0)
++        dev->open_count--;
++    return rc;
++}
++
+ #if SIZEOF_OFF_T < 8
+ 
+ #if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,20)
+-- 
+2.11.0
+
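
The shape of the change above is: open read-only while probing, open read-write only when committing changes, and keep an open counter in the read-only wrappers. A rough illustration of that scheme under invented names (this is not libparted's API):

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Probing opens the device node read-only so that closing it cannot
     * trigger partition rescans; real changes still go through a
     * read-write open.  The wrappers keep an open counter, as in the
     * patch. */
    struct device {
        const char *path;
        int fd;
        int open_count;
    };

    static int device_open(struct device *dev, int flags)
    {
        dev->fd = open(dev->path, flags);
        if (dev->fd < 0) {
            perror(dev->path);
            return 0;
        }
        return 1;
    }

    static int device_open_ro(struct device *dev)    /* probing */
    {
        int ok = device_open(dev, O_RDONLY);
        if (ok)
            dev->open_count++;
        return ok;
    }

    static int device_open_rw(struct device *dev)    /* committing changes */
    {
        return device_open(dev, O_RDWR);
    }

    static int device_close(struct device *dev)
    {
        int rc = close(dev->fd);
        if (dev->open_count > 0)
            dev->open_count--;
        return rc == 0;
    }

    int main(void)
    {
        struct device dev = { "/dev/null", -1, 0 };  /* stand-in device node */

        if (device_open_ro(&dev)) {      /* init_*-style probe: read-only */
            /* ... read sector size, geometry, partition table ... */
            device_close(&dev);
        }

        if (device_open_rw(&dev)) {      /* actual modification: read-write */
            /* ... write the new label ... */
            device_close(&dev);
        }
        return 0;
    }
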
diff --git a/import-layers/yocto-poky/meta/recipes-extended/parted/parted_3.2.bb b/import-layers/yocto-poky/meta/recipes-extended/parted/parted_3.2.bb
index 9ce2dfe..ab30108 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/parted/parted_3.2.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/parted/parted_3.2.bb
@@ -14,16 +14,17 @@
            file://0001-Include-fcntl.h-in-platform_defs.h.patch \
            file://0001-Unset-need_charset_alias-when-building-for-musl.patch \
            file://0002-libparted_fs_resize-link-against-libuuid-explicitly-.patch \
+           file://0001-Move-python-helper-scripts-used-only-in-tests-to-Pyt.patch \
 	   file://parted-3.2-sysmacros.patch \
            file://run-ptest \
            file://Makefile \
+           file://0001-libparted-Use-read-only-when-probing-devices-on-linu.patch \
 "
 
 SRC_URI[md5sum] = "0247b6a7b314f8edeb618159fa95f9cb"
 SRC_URI[sha256sum] = "858b589c22297cacdf437f3baff6f04b333087521ab274f7ab677cb8c6bb78e4"
 
 EXTRA_OECONF = "--disable-device-mapper"
-LDFLAGS_append_libc-uclibc = " -liconv "
 
 inherit autotools pkgconfig gettext texinfo ptest
 
@@ -46,4 +47,4 @@
 	sed -e 's| ../parted||' -i $t/tests/*.sh
 }
 
-RDEPENDS_${PN}-ptest = "bash coreutils perl util-linux-losetup python"
+RDEPENDS_${PN}-ptest = "bash coreutils perl util-linux-losetup python3"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/pax/pax/0001-Add-a-comment-for-fallthrough.patch b/import-layers/yocto-poky/meta/recipes-extended/pax/pax/0001-Add-a-comment-for-fallthrough.patch
new file mode 100644
index 0000000..b76f85a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/pax/pax/0001-Add-a-comment-for-fallthrough.patch
@@ -0,0 +1,38 @@
+From e67bb3debe582f0e77770b714bd012bb1082fc41 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 19 Apr 2017 11:32:00 -0700
+Subject: [PATCH] Add a comment for fallthrough
+
+Fixes warnings with gcc7 e.g.
+
+../../../../../../../workspace/sources/pax/src/options.c: In function 'tar_options':
+../../../../../../../workspace/sources/pax/src/options.c:725:7: error: this statement may fall through [-Werror=implicit-fallthrough=]
+    if (opt_add ("write_opt=nodir") < 0)
+       ^
+../../../../../../../workspace/sources/pax/src/options.c:730:2: note: here
+  case 'O':
+  ^~~~
+cc1: all warnings being treated as errors
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/options.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/src/options.c b/src/options.c
+index c663b4e..b80819a 100644
+--- a/src/options.c
++++ b/src/options.c
+@@ -724,6 +724,7 @@ tar_options (int argc, char **argv)
+ 	case 'o':
+ 	  if (opt_add ("write_opt=nodir") < 0)
+ 	    tar_usage ();
++	  /* fallthru */
+ 	case 'O':
+ 	  Oflag = 1;
+ 	  break;
+-- 
+2.12.2
+
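For context on the pax change: GCC 7's -Wimplicit-fallthrough warns when a case falls into the next one unless a recognised marker comment (or, on newer compilers, an attribute) sits immediately before the next label. A small self-contained illustration of the same pattern; the option letters mirror the hunk above, but the surrounding logic is made up:

    /* Intentional switch fall-through, marked so -Wimplicit-fallthrough
     * stays quiet: here 'o' is meant to imply everything 'O' does. */
    #include <stdio.h>

    static int handle_option(int opt)
    {
        int oflag = 0;

        switch (opt) {
        case 'o':
            printf("recording write_opt=nodir\n");
            /* FALLTHROUGH */
        case 'O':
            oflag = 1;
            break;
        default:
            break;
        }
        return oflag;
    }

    int main(void)
    {
        printf("o -> %d, O -> %d, x -> %d\n",
               handle_option('o'), handle_option('O'), handle_option('x'));
        return 0;
    }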
diff --git a/import-layers/yocto-poky/meta/recipes-extended/pax/pax_3.4.bb b/import-layers/yocto-poky/meta/recipes-extended/pax/pax_3.4.bb
index 9b4e17b..6df9a81 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/pax/pax_3.4.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/pax/pax_3.4.bb
@@ -15,10 +15,11 @@
 DEPENDS_append_libc-musl = " fts "
 
 SRC_URI = "http://pkgs.fedoraproject.org/repo/pkgs/${BPN}/${BP}.tar.bz2/fbd9023b590b45ac3ade95870702a0d6/${BP}.tar.bz2 \
-	file://fix_for_compile_with_gcc-4.6.0.patch \
-	file://pax-3.4_fix_for_x32.patch \
-        file://0001-include-sys-sysmacros.h-for-major-minor-definitions.patch \
-"
+           file://fix_for_compile_with_gcc-4.6.0.patch \
+           file://pax-3.4_fix_for_x32.patch \
+           file://0001-include-sys-sysmacros.h-for-major-minor-definitions.patch \
+           file://0001-Add-a-comment-for-fallthrough.patch \
+           "
 
 SRC_URI_append_libc-musl = " file://0001-Fix-build-with-musl.patch \
                              file://0001-use-strtoll-instead-of-strtoq.patch \
diff --git a/import-layers/yocto-poky/meta/recipes-extended/perl/libtimedate-perl_2.30.bb b/import-layers/yocto-poky/meta/recipes-extended/perl/libtimedate-perl_2.30.bb
index c8fcde4..427613c 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/perl/libtimedate-perl_2.30.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/perl/libtimedate-perl_2.30.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Perl modules useful for manipulating date and time information"
+HOMEPAGE = "https://metacpan.org/release/TimeDate"
 SECTION = "libs"
 # You can redistribute it and/or modify it under the same terms as Perl itself.
 LICENSE = "Artistic-1.0 | GPL-1.0+"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-namespacesupport-perl_1.11.bb b/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-namespacesupport-perl_1.11.bb
deleted file mode 100644
index 9a9e710..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-namespacesupport-perl_1.11.bb
+++ /dev/null
@@ -1,22 +0,0 @@
-SUMMARY = "Perl module for supporting simple generic namespaces"
-DESCRIPTION = "XML::NamespaceSupport offers a simple way to process namespace-based XML names. \
-                It also helps maintain a prefix-to-namespace URI map, and provides a number of \
-                basic checks. "
-
-SECTION = "libs"
-LICENSE = "Artistic-1.0 | GPL-1.0+"
-PR = "r3"
-
-LIC_FILES_CHKSUM = "file://META.yml;beginline=22;endline=22;md5=3b2b564dae8b9af9e8896e85c07dcbe5"
-
-SRC_URI = "http://search.cpan.org/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-${PV}.tar.gz"
-SRC_URI[md5sum] = "222cca76161cd956d724286d36b607da"
-SRC_URI[sha256sum] = "6d8151f0a3f102313d76b64bfd1c2d9ed46bfe63a16f038e7d860fda287b74ea"
-
-
-S = "${WORKDIR}/XML-NamespaceSupport-${PV}"
-
-inherit cpan
-
-BBCLASSEXTEND="native"
-
diff --git a/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-namespacesupport-perl_1.12.bb b/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-namespacesupport-perl_1.12.bb
new file mode 100644
index 0000000..3498a28
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-namespacesupport-perl_1.12.bb
@@ -0,0 +1,23 @@
+SUMMARY = "Perl module for supporting simple generic namespaces"
+HOMEPAGE = "http://veillard.com/XML/"
+DESCRIPTION = "XML::NamespaceSupport offers a simple way to process namespace-based XML names. \
+                It also helps maintain a prefix-to-namespace URI map, and provides a number of \
+                basic checks. "
+
+SECTION = "libs"
+LICENSE = "Artistic-1.0 | GPL-1.0+"
+PR = "r3"
+
+LIC_FILES_CHKSUM = "file://META.yml;beginline=22;endline=22;md5=9ca1a4a941496e7feedac72c4fb8b137"
+
+SRC_URI = "http://search.cpan.org/CPAN/authors/id/P/PE/PERIGRIN/XML-NamespaceSupport-${PV}.tar.gz"
+SRC_URI[md5sum] = "a8916c6d095bcf073e1108af02e78c97"
+SRC_URI[sha256sum] = "47e995859f8dd0413aa3f22d350c4a62da652e854267aa0586ae544ae2bae5ef"
+
+
+S = "${WORKDIR}/XML-NamespaceSupport-${PV}"
+
+inherit cpan
+
+BBCLASSEXTEND="native"
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-sax-base-perl_1.08.bb b/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-sax-base-perl_1.08.bb
deleted file mode 100644
index 14a3370..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-sax-base-perl_1.08.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-SUMMARY = "Base class SAX Drivers and Filters"
-DESCRIPTION = "This module has a very simple task - to be a base class for \
-PerlSAX drivers and filters. It's default behaviour is to pass \
-the input directly to the output unchanged. It can be useful to \
-use this module as a base class so you don't have to, for example, \
-implement the characters() callback."
-
-SECTION = "libs"
-LICENSE = "Artistic-1.0 | GPL-1.0+"
-RDEPENDS_${PN} += "perl-module-extutils-makemaker"
-
-LIC_FILES_CHKSUM = "file://dist.ini;endline=5;md5=8f9c9a55340aefaee6e9704c88466446"
-
-SRC_URI = "http://search.cpan.org/CPAN/authors/id/G/GR/GRANTM/XML-SAX-Base-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "38c8c3247dfd080712596118d70dbe32"
-SRC_URI[sha256sum] = "666270318b15f88b8427e585198abbc19bc2e6ccb36dc4c0a4f2d9807330219e"
-
-S = "${WORKDIR}/XML-SAX-Base-${PV}"
-
-inherit cpan
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-sax-base-perl_1.09.bb b/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-sax-base-perl_1.09.bb
new file mode 100644
index 0000000..cd3a580
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-sax-base-perl_1.09.bb
@@ -0,0 +1,24 @@
+SUMMARY = "Base class SAX Drivers and Filters"
+HOMEPAGE = "http://search.cpan.org/dist/XML-SAX-Base/"
+DESCRIPTION = "This module has a very simple task - to be a base class for \
+PerlSAX drivers and filters. Its default behaviour is to pass \
+the input directly to the output unchanged. It can be useful to \
+use this module as a base class so you don't have to, for example, \
+implement the characters() callback."
+
+SECTION = "libs"
+LICENSE = "Artistic-1.0 | GPL-1.0+"
+RDEPENDS_${PN} += "perl-module-extutils-makemaker"
+
+LIC_FILES_CHKSUM = "file://dist.ini;endline=5;md5=8f9c9a55340aefaee6e9704c88466446"
+
+SRC_URI = "http://search.cpan.org/CPAN/authors/id/G/GR/GRANTM/XML-SAX-Base-${PV}.tar.gz"
+
+SRC_URI[md5sum] = "ec347a14065dd7aec7d9fb181b2d7946"
+SRC_URI[sha256sum] = "66cb355ba4ef47c10ca738bd35999723644386ac853abbeb5132841f5e8a2ad0"
+
+S = "${WORKDIR}/XML-SAX-Base-${PV}"
+
+inherit cpan
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-sax-perl_0.99.bb b/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-sax-perl_0.99.bb
index 45d3966..ad31b9c 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-sax-perl_0.99.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/perl/libxml-sax-perl_0.99.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Perl module for using and building Perl SAX2 XML processors"
+HOMEPAGE = "http://search.cpan.org/dist/XML-SAX/"
 DESCRIPTION = "XML::SAX consists of several framework classes for using and \
 building Perl SAX2 XML parsers, filters, and drivers.  It is designed \ 
 around the need to be able to "plug in" different SAX parsers to an \
diff --git a/import-layers/yocto-poky/meta/recipes-extended/psmisc/psmisc.inc b/import-layers/yocto-poky/meta/recipes-extended/psmisc/psmisc.inc
index 22b2e85..66a784b 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/psmisc/psmisc.inc
+++ b/import-layers/yocto-poky/meta/recipes-extended/psmisc/psmisc.inc
@@ -1,4 +1,5 @@
 SUMMARY = "Utilities for managing processes on your system"
+HOMEPAGE = "http://psmisc.sf.net/"
 DESCRIPTION = "The psmisc package contains utilities for managing processes on your \
 system: pstree, killall and fuser.  The pstree command displays a tree \
 structure of all of the running processes on your system.  The killall \
diff --git a/import-layers/yocto-poky/meta/recipes-extended/rpcbind/rpcbind/musl-sunrpc.patch b/import-layers/yocto-poky/meta/recipes-extended/rpcbind/rpcbind/musl-sunrpc.patch
deleted file mode 100644
index 6fbc636..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/rpcbind/rpcbind/musl-sunrpc.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-The musl implementation of getaddrinfo and getservbyname does not
-aliases. As a workaround we use "sunprc" instead of "portmapper"
-
-ported from alpine linux
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-
-Index: rpcbind-0.2.2/src/rpcbind.c
-===================================================================
---- rpcbind-0.2.2.orig/src/rpcbind.c
-+++ rpcbind-0.2.2/src/rpcbind.c
-@@ -491,7 +491,7 @@ init_transport(struct netconfig *nconf)
- 			if ((aicode = getaddrinfo(hosts[nhostsbak],
- 			    servname, &hints, &res)) != 0) {
- 			  if ((aicode = getaddrinfo(hosts[nhostsbak],
--						    "portmapper", &hints, &res)) != 0) {
-+						    "sunrpc", &hints, &res)) != 0) {
- 				syslog(LOG_ERR,
- 				    "cannot get local address for %s: %s",
- 				    nconf->nc_netid, gai_strerror(aicode));
-@@ -564,7 +564,7 @@ init_transport(struct netconfig *nconf)
- 		if ((strcmp(nconf->nc_netid, "local") != 0) &&
- 		    (strcmp(nconf->nc_netid, "unix") != 0)) {
- 			if ((aicode = getaddrinfo(NULL, servname, &hints, &res))!= 0) {
--			  if ((aicode = getaddrinfo(NULL, "portmapper", &hints, &res))!= 0) {
-+			  if ((aicode = getaddrinfo(NULL, "sunrpc", &hints, &res))!= 0) {
- 			  printf("cannot get local address for %s: %s",  nconf->nc_netid, gai_strerror(aicode));
- 			  syslog(LOG_ERR,
- 				    "cannot get local address for %s: %s",
diff --git a/import-layers/yocto-poky/meta/recipes-extended/rpcbind/rpcbind/remove-sys-queue.patch b/import-layers/yocto-poky/meta/recipes-extended/rpcbind/rpcbind/remove-sys-queue.patch
deleted file mode 100644
index 84fc974..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/rpcbind/rpcbind/remove-sys-queue.patch
+++ /dev/null
@@ -1,22 +0,0 @@
-musl does not provide this header and here is reasoning
-http://wiki.musl-libc.org/wiki/FAQ#Q:_why_is_sys.2Fqueue.h_not_included_.3F
-
-So include it only when __GLIBC__ is defined which is true for uclibc and glibc
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Upstream-Status: Pending
-
-Index: rpcbind-0.2.2/src/util.c
-===================================================================
---- rpcbind-0.2.2.orig/src/util.c
-+++ rpcbind-0.2.2/src/util.c
-@@ -41,7 +41,9 @@
- 
- #include <sys/types.h>
- #include <sys/socket.h>
-+#ifdef __GLIBC__
- #include <sys/queue.h>
-+#endif
- #include <net/if.h>
- #include <netinet/in.h>
- #include <ifaddrs.h>
diff --git a/import-layers/yocto-poky/meta/recipes-extended/rpcbind/rpcbind_0.2.4.bb b/import-layers/yocto-poky/meta/recipes-extended/rpcbind/rpcbind_0.2.4.bb
index 1a12055..60e46ed 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/rpcbind/rpcbind_0.2.4.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/rpcbind/rpcbind_0.2.4.bb
@@ -12,19 +12,11 @@
 
 SRC_URI = "${SOURCEFORGE_MIRROR}/rpcbind/rpcbind-${PV}.tar.bz2 \
            file://init.d \
-           file://remove-sys-queue.patch \
-           ${UCLIBCPATCHES} \
-           ${MUSLPATCHES} \
            file://rpcbind.conf \
            file://rpcbind.socket \
            file://rpcbind.service \
            file://0001-rpcbind-pair-all-svc_getargs-calls-with-svc_freeargs.patch \
           "
-MUSLPATCHES_libc-musl = "file://musl-sunrpc.patch"
-
-UCLIBCPATCHES ?= ""
-MUSLPATCHES ?= ""
-
 SRC_URI[md5sum] = "cf10cd41ed8228fc54c316191c1f07fe"
 SRC_URI[sha256sum] = "074a9a530dc7c11e0d905aa59bcb0847c009313f02e98d3d798aa9568f414c66"
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/screen/screen/0001-configure.ac-fix-configure-failed-while-build-dir-ha.patch b/import-layers/yocto-poky/meta/recipes-extended/screen/screen/0001-configure.ac-fix-configure-failed-while-build-dir-ha.patch
new file mode 100644
index 0000000..e8db12c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/screen/screen/0001-configure.ac-fix-configure-failed-while-build-dir-ha.patch
@@ -0,0 +1,108 @@
+From 4b258c5a9078f8df60684ab7536ce3a8ff207e08 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Thu, 12 Oct 2017 10:03:57 +0000
+Subject: [PATCH] configure.ac: fix configure failed while build dir contains "yes"
+
+When the name of the build dir contains "yes", the AC_EGREP_CPP
+test always returns true.
+
+Build directories are rarely named with "yes;", so s/yes/yes;/g
+fixes the issue.
+
+Upstream-Status: Pending
+
+Signed-off-by: Jian Kang <jian.kang@windriver.com>
+---
+ configure.ac | 20 ++++++++++----------
+ 1 file changed, 10 insertions(+), 10 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index 12996cd..4765af6 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -128,7 +128,7 @@ fi
+ 
+ 
+ AC_CHECKING(for Ultrix)
+-AC_EGREP_CPP(yes,
++AC_EGREP_CPP(yes;,
+ [#if defined(ultrix) || defined(__ultrix)
+    yes;
+ #endif
+@@ -145,7 +145,7 @@ dnl ghazi@caip.rutgers.edu (Kaveh R. Ghazi):
+ dnl BBN butterfly is not POSIX, but a MACH BSD system.
+ dnl Do not define POSIX and TERMIO.
+ AC_CHECKING(for butterfly)
+-AC_EGREP_CPP(yes,
++AC_EGREP_CPP(yes;,
+ [#if defined(butterfly)
+   yes;
+ #endif
+@@ -156,7 +156,7 @@ if test -n "$ULTRIX"; then
+   test -z "$GCC" && CC="$CC -YBSD"
+ fi
+ AC_CHECKING(for POSIX.1)
+-AC_EGREP_CPP(yes,
++AC_EGREP_CPP(yes;,
+ [#include <sys/types.h>
+ #include <unistd.h>
+ main () {
+@@ -173,14 +173,14 @@ AC_TRY_COMPILE(
+ #include <fcntl.h>], [int x = SIGCHLD | FNDELAY;], , AC_DEFINE(SYSV))
+ 
+ AC_CHECKING(for sequent/ptx)
+-AC_EGREP_CPP(yes,
++AC_EGREP_CPP(yes;,
+ [#ifdef _SEQUENT_
+   yes;
+ #endif
+ ], LIBS="$LIBS -lsocket -linet";seqptx=1)
+ 
+ AC_CHECKING(SVR4)
+-AC_EGREP_CPP(yes,
++AC_EGREP_CPP(yes;,
+ [main () {
+ #if defined(SVR4) || defined(__SVR4)
+   yes;
+@@ -200,9 +200,9 @@ fi
+ AC_CHECK_HEADERS([stropts.h string.h strings.h])
+ 
+ AC_CHECKING(for Solaris 2.x)
+-AC_EGREP_CPP(yes,
++AC_EGREP_CPP(yes;,
+ [#if defined(SVR4) && defined(sun)
+-  yes
++  yes;
+ #endif
+ ], LIBS="$LIBS -lsocket -lnsl -lkstat")
+ 
+@@ -697,7 +697,7 @@ else
+ pdir='/dev'
+ fi
+ dnl SCO uses ptyp%d
+-AC_EGREP_CPP(yes,
++AC_EGREP_CPP(yes;,
+ [#ifdef M_UNIX
+    yes;
+ #endif
+@@ -880,7 +880,7 @@ fi
+ )
+ 
+ if test -z "$load" ; then
+-AC_EGREP_CPP(yes,
++AC_EGREP_CPP(yes;,
+ [#if defined(NeXT) || defined(apollo) || defined(linux)
+   yes;
+ #endif
+@@ -1112,7 +1112,7 @@ AC_CHECKING(syslog in libbsd.a)
+ AC_TRY_LINK(, [closelog();], AC_NOTE(- found.), [LIBS="$oldlibs"
+ AC_NOTE(- bad news: syslog missing.) AC_DEFINE(NOSYSLOG)])])
+ 
+-AC_EGREP_CPP(yes,
++AC_EGREP_CPP(yes;,
+ [#ifdef M_UNIX
+    yes;
+ #endif
+-- 
+2.13.3
+
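The reason the s/yes/yes;/ change helps is that AC_EGREP_CPP only preprocesses the test program and greps the cpp output, and that output quotes file paths in linemarker lines (for instance for the pre-included stdc-predef.h under a cross sysroot), so a build path containing "yes" can satisfy the plain grep even when the #if branch is false. A sketch of the kind of input and output involved; the macro name and path are illustrative:

    /* conftest.c: the kind of input AC_EGREP_CPP feeds to cpp.  It is never
     * compiled, only preprocessed and grepped, so the bare "yes;" is fine. */
    #if defined(M_UNIX)
       yes;
    #endif

    /* Preprocessing from a build directory whose path contains "yes":
     *
     *   $ cpp conftest.c
     *   # 1 "conftest.c"
     *   # 1 "/home/user/yes-build/sysroot/usr/include/stdc-predef.h" 1 3 4
     *   # 1 "conftest.c" 2
     *
     * grep 'yes'  can match the linemarker path, so the test wrongly succeeds;
     * grep 'yes;' only matches when the #if branch actually emitted "yes;". */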
diff --git a/import-layers/yocto-poky/meta/recipes-extended/screen/screen/fix-parallel-make.patch b/import-layers/yocto-poky/meta/recipes-extended/screen/screen/fix-parallel-make.patch
deleted file mode 100644
index e0caf5d..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/screen/screen/fix-parallel-make.patch
+++ /dev/null
@@ -1,19 +0,0 @@
-This fixes the parallel make install failure
-
-Upstream-Status: Pending
-
-Signed-off-by: Saul Wold <sgw@linux.intel.com>
-
-Index: screen-4.0.3/Makefile.in
-===================================================================
---- screen-4.0.3.orig/Makefile.in
-+++ screen-4.0.3/Makefile.in
-@@ -70,7 +70,7 @@ screen: $(OFILES)
- .c.o:
- 	$(CC) -c -I. -I$(srcdir) $(M_CFLAGS) $(DEFS) $(OPTIONS) $(CFLAGS) $<
- 
--install_bin: .version screen
-+install_bin: .version screen installdirs
- 	-if [ -f $(DESTDIR)$(bindir)/$(SCREEN) ] && [ ! -f $(DESTDIR)$(bindir)/$(SCREEN).old ]; \
- 		then mv $(DESTDIR)$(bindir)/$(SCREEN) $(DESTDIR)$(bindir)/$(SCREEN).old; fi
- 	$(INSTALL_PROGRAM) screen $(DESTDIR)$(bindir)/$(SCREEN)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/screen/screen_4.5.1.bb b/import-layers/yocto-poky/meta/recipes-extended/screen/screen_4.5.1.bb
deleted file mode 100644
index 32c1a5a..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/screen/screen_4.5.1.bb
+++ /dev/null
@@ -1,52 +0,0 @@
-SUMMARY = "Multiplexing terminal manager"
-DESCRIPTION = "Screen is a full-screen window manager \
-that multiplexes a physical terminal between several \
-processes, typically interactive shells."
-HOMEPAGE = "http://www.gnu.org/software/screen/"
-BUGTRACKER = "https://savannah.gnu.org/bugs/?func=additem&group=screen"
-
-SECTION = "console/utils"
-
-LICENSE = "GPLv3+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504 \
-                    file://screen.h;endline=26;md5=3971142989289a8198a544220703c2bf"
-
-DEPENDS = "ncurses \
-          ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'libpam', '', d)}"
-RDEPENDS_${PN} = "base-files"
-
-SRC_URI = "${GNU_MIRROR}/screen/screen-${PV}.tar.gz \
-           file://fix-parallel-make.patch \
-           ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'file://screen.pam', '', d)} \
-           file://Remove-redundant-compiler-sanity-checks.patch \
-           file://Provide-cross-compile-alternatives-for-AC_TRY_RUN.patch \
-           file://Skip-host-file-system-checks-when-cross-compiling.patch \
-           file://Avoid-mis-identifying-systems-as-SVR4.patch \
-           file://0002-comm.h-now-depends-on-term.h.patch \
-           file://0001-fix-for-multijob-build.patch \
-          "
-
-SRC_URI[md5sum] = "a8c5da2f42f8a18fa4dada2419d1549b"
-SRC_URI[sha256sum] = "97db2114dd963b016cd4ded34831955dcbe3251e5eee45ac2606e67e9f097b2d"
-
-inherit autotools texinfo
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[utempter] = "ac_cv_header_utempter_h=yes,ac_cv_header_utempter_h=no,libutempter,"
-
-EXTRA_OECONF = "--with-pty-mode=0620 --with-pty-group=5 \
-               ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '--enable-pam', '--disable-pam', d)}"
-
-do_install_append () {
-	if [ "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}" ]; then
-		install -D -m 644 ${WORKDIR}/screen.pam ${D}/${sysconfdir}/pam.d/screen
-	fi
-}
-
-pkg_postinst_${PN} () {
-	grep -q "^${bindir}/screen$" $D${sysconfdir}/shells || echo ${bindir}/screen >> $D${sysconfdir}/shells
-}
-
-pkg_postrm_${PN} () {
-	printf "$(grep -v "^${bindir}/screen$" $D${sysconfdir}/shells)\n" > $D${sysconfdir}/shells
-}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/screen/screen_4.6.1.bb b/import-layers/yocto-poky/meta/recipes-extended/screen/screen_4.6.1.bb
new file mode 100644
index 0000000..bcd83a2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/screen/screen_4.6.1.bb
@@ -0,0 +1,52 @@
+SUMMARY = "Multiplexing terminal manager"
+DESCRIPTION = "Screen is a full-screen window manager \
+that multiplexes a physical terminal between several \
+processes, typically interactive shells."
+HOMEPAGE = "http://www.gnu.org/software/screen/"
+BUGTRACKER = "https://savannah.gnu.org/bugs/?func=additem&group=screen"
+
+SECTION = "console/utils"
+
+LICENSE = "GPLv3+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504 \
+                    file://screen.h;endline=26;md5=3971142989289a8198a544220703c2bf"
+
+DEPENDS = "ncurses \
+          ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'libpam', '', d)}"
+RDEPENDS_${PN} = "base-files"
+
+SRC_URI = "${GNU_MIRROR}/screen/screen-${PV}.tar.gz \
+           ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'file://screen.pam', '', d)} \
+           file://Remove-redundant-compiler-sanity-checks.patch \
+           file://Provide-cross-compile-alternatives-for-AC_TRY_RUN.patch \
+           file://Skip-host-file-system-checks-when-cross-compiling.patch \
+           file://Avoid-mis-identifying-systems-as-SVR4.patch \
+           file://0002-comm.h-now-depends-on-term.h.patch \
+           file://0001-fix-for-multijob-build.patch \
+           file://0001-configure.ac-fix-configure-failed-while-build-dir-ha.patch \
+          "
+
+SRC_URI[md5sum] = "132c893aabfaf2020074790215c8cacd"
+SRC_URI[sha256sum] = "aba9af66cb626155d6abce4703f45cce0e30a5114a368bd6387c966cbbbb7c64"
+
+inherit autotools texinfo
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[utempter] = "ac_cv_header_utempter_h=yes,ac_cv_header_utempter_h=no,libutempter,"
+
+EXTRA_OECONF = "--with-pty-mode=0620 --with-pty-group=5 \
+               ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '--enable-pam', '--disable-pam', d)}"
+
+do_install_append () {
+	if [ "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}" ]; then
+		install -D -m 644 ${WORKDIR}/screen.pam ${D}/${sysconfdir}/pam.d/screen
+	fi
+}
+
+pkg_postinst_${PN} () {
+	grep -q "^${bindir}/screen$" $D${sysconfdir}/shells || echo ${bindir}/screen >> $D${sysconfdir}/shells
+}
+
+pkg_postrm_${PN} () {
+	printf "$(grep -v "^${bindir}/screen$" $D${sysconfdir}/shells)\n" > $D${sysconfdir}/shells
+}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/sed/sed_4.2.2.bb b/import-layers/yocto-poky/meta/recipes-extended/sed/sed_4.2.2.bb
index 5aa7d8a..e31bec2 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/sed/sed_4.2.2.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/sed/sed_4.2.2.bb
@@ -44,3 +44,4 @@
 	oe_runmake -C ${TESTDIR} install-ptest BUILDDIR=${B} DESTDIR=${D}${PTEST_PATH} TESTDIR=${TESTDIR}
 }
 
+RPROVIDES_${PN} += "${@bb.utils.contains('DISTRO_FEATURES', 'usrmerge', '/bin/sed', '', d)}"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/shadow/files/0001-shadow-CVE-2017-12424 b/import-layers/yocto-poky/meta/recipes-extended/shadow/files/0001-shadow-CVE-2017-12424
new file mode 100644
index 0000000..4d3e1e0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/shadow/files/0001-shadow-CVE-2017-12424
@@ -0,0 +1,46 @@
+From 954e3d2e7113e9ac06632aee3c69b8d818cc8952 Mon Sep 17 00:00:00 2001
+From: Tomas Mraz <tmraz@fedoraproject.org>
+Date: Fri, 31 Mar 2017 16:25:06 +0200
+Subject: [PATCH] Fix buffer overflow if NULL line is present in db.
+
+If ptr->line == NULL for an entry, the first cycle will exit,
+but the second one will happily write past entries buffer.
+We actually do not want to exit the first cycle prematurely
+on ptr->line == NULL.
+Signed-off-by: Tomas Mraz <tmraz@fedoraproject.org>
+
+CVE: CVE-2017-12424
+Upstream-Status: Backport
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ lib/commonio.c | 8 ++++----
+ 1 file changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/lib/commonio.c b/lib/commonio.c
+index b10da06..31edbaa 100644
+--- a/lib/commonio.c
++++ b/lib/commonio.c
+@@ -751,16 +751,16 @@ commonio_sort (struct commonio_db *db, int (*cmp) (const void *, const void *))
+ 	for (ptr = db->head;
+ 	        (NULL != ptr)
+ #if KEEP_NIS_AT_END
+-	     && (NULL != ptr->line)
+-	     && (   ('+' != ptr->line[0])
+-	         && ('-' != ptr->line[0]))
++	     && ((NULL == ptr->line)
++	         || (('+' != ptr->line[0])
++	             && ('-' != ptr->line[0])))
+ #endif
+ 	     ;
+ 	     ptr = ptr->next) {
+ 		n++;
+ 	}
+ #if KEEP_NIS_AT_END
+-	if ((NULL != ptr) && (NULL != ptr->line)) {
++	if (NULL != ptr) {
+ 		nis = ptr;
+ 	}
+ #endif
+-- 
+2.1.0
+
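The CVE-2017-12424 change makes the counting loop treat a NULL ->line as an ordinary entry instead of stopping there, so the count can no longer be smaller than what the following pass writes out. A small self-contained sketch of the corrected guard, using a simplified entry struct rather than shadow's real commonio types:

    /* Sketch of the corrected loop guard: a NULL ->line must not terminate
     * the counting pass, otherwise a later pass writes past the buffer
     * sized from that count. */
    #include <stddef.h>
    #include <stdio.h>

    struct entry {
        char *line;
        struct entry *next;
    };

    /* Count entries up to (but not including) the first NIS '+'/'-' entry.
     * Entries whose line is NULL are ordinary entries and must be counted. */
    static size_t count_sortable(const struct entry *head)
    {
        size_t n = 0;
        const struct entry *p;

        for (p = head;
             p != NULL
             && (p->line == NULL
                 || (p->line[0] != '+' && p->line[0] != '-'));
             p = p->next)
            n++;
        return n;
    }

    int main(void)
    {
        struct entry e3 = { "+nis-entry", NULL };
        struct entry e2 = { NULL, &e3 };          /* NULL line: still counted */
        struct entry e1 = { "root:x:0:0", &e2 };

        printf("sortable entries: %zu\n", count_sortable(&e1)); /* prints 2 */
        return 0;
    }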
diff --git a/import-layers/yocto-poky/meta/recipes-extended/shadow/shadow.inc b/import-layers/yocto-poky/meta/recipes-extended/shadow/shadow.inc
index 70ff68e..cc18964 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/shadow/shadow.inc
+++ b/import-layers/yocto-poky/meta/recipes-extended/shadow/shadow.inc
@@ -16,6 +16,7 @@
            file://0001-Do-not-read-login.defs-before-doing-chroot.patch \
            file://check_size_of_uid_t_and_gid_t_using_AC_CHECK_SIZEOF.patch \
            file://0001-useradd-copy-extended-attributes-of-home.patch \
+           file://0001-shadow-CVE-2017-12424 \
            ${@bb.utils.contains('PACKAGECONFIG', 'pam', '${PAM_SRC_URI}', '', d)} \
            "
 
@@ -59,7 +60,6 @@
 NSCDOPT = ""
 NSCDOPT_class-native = "--without-nscd"
 NSCDOPT_class-nativesdk = "--without-nscd"
-NSCDOPT_libc-uclibc = " --without-nscd"
 NSCDOPT_libc-glibc = "${@bb.utils.contains('DISTRO_FEATURES', 'libc-spawn', '--with-nscd', '--without-nscd', d)}"
           
 PAM_PLUGINS = "libpam-runtime \
diff --git a/import-layers/yocto-poky/meta/recipes-extended/slang/slang/run-ptest b/import-layers/yocto-poky/meta/recipes-extended/slang/slang/run-ptest
new file mode 100644
index 0000000..39f474a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/slang/slang/run-ptest
@@ -0,0 +1,3 @@
+#!/bin/sh
+
+make -C test runtests
diff --git a/import-layers/yocto-poky/meta/recipes-extended/slang/slang/terminfo_fixes.patch b/import-layers/yocto-poky/meta/recipes-extended/slang/slang/terminfo_fixes.patch
new file mode 100644
index 0000000..3e6d15a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/slang/slang/terminfo_fixes.patch
@@ -0,0 +1,148 @@
+Do not use the JD_TERMCAP macro since we cannot get the terminfo from
+ncurses pkg-config, but fix the macro to not reference host directories.
+Also add src/test/Makefile.in so that we can use -ltermcap if we want to.
+
+Upstream-Status: Pending
+
+Signed-off-by: Joe Slater <joe.slater@windriver.com>
+
+
+--- a/autoconf/aclocal.m4
++++ b/autoconf/aclocal.m4
+@@ -506,14 +506,10 @@ then
+ else
+    MISC_TERMINFO_DIRS=""
+ fi
+-JD_Terminfo_Dirs="$MISC_TERMINFO_DIRS \
+-                  /usr/lib/terminfo \
+-                  /usr/share/terminfo \
+-                  /usr/share/lib/terminfo \
+-		  /usr/local/lib/terminfo"
++
+ TERMCAP=-ltermcap
+ 
+-for terminfo_dir in $JD_Terminfo_Dirs
++for terminfo_dir in $MISC_TERMINFO_DIRS
+ do
+    if test -d $terminfo_dir
+    then
+--- a/autoconf/configure.ac
++++ b/autoconf/configure.ac
+@@ -249,7 +249,14 @@ AC_CHECK_SIZEOF(size_t)
+ JD_CHECK_LONG_LONG
+ JD_LARGE_FILE_SUPPORT
+ 
+-JD_TERMCAP
++dnl Do not use JD_TERMCAP, since we cannot get terminfo from ncurses*-config anymore.
++dnl Set TERMCAP=-ltermcap and AC_DEFINE(USE_TERMCAP,1,[Define to use termcap])
++dnl to use libtermcap.
++TERMCAP=""
++MISC_TERMINFO_DIRS=""
++AC_SUBST(TERMCAP)dnl
++AC_SUBST(MISC_TERMINFO_DIRS)dnl
++
+ JD_GCC_WARNINGS
+ 
+ JD_SET_OBJ_SRC_DIR(src)
+@@ -364,7 +371,7 @@ AC_CONFIG_HEADER(src/sysconf.h:src/confi
+ dnl AC_CONFIG_SUBDIRS(demo)
+ 
+ AC_OUTPUT(Makefile:autoconf/Makefile.in \
+-  src/Makefile slsh/Makefile modules/Makefile demo/Makefile \
++  src/Makefile src/test/Makefile slsh/Makefile modules/Makefile demo/Makefile \
+   slang.pc:autoconf/slangpc.in \
+ )
+ 
+--- /dev/null
++++ b/src/test/Makefile.in
+@@ -0,0 +1,90 @@
++# -*- make -*-
++TEST_SCRIPTS_SLC = argv syntax scircuit eqs sscanf loops arith array strops \
++  bstring pack stdio assoc selfload struct nspace path ifeval anytype arrmult \
++  time utf8 except bugs list regexp method deref naninf overflow sort \
++  longlong signal dollar req docfun debug qualif compare break multline \
++  stack misc posixio posdir proc math
++
++TEST_SCRIPTS_NO_SLC = autoload nspace2 prep
++
++TEST_SCRIPTS = $(TEST_SCRIPTS_SLC) $(TEST_SCRIPTS_NO_SLC)
++
++TEST_PGM = sltest
++MEMCHECK = valgrind --tool=memcheck --leak-check=yes --leak-resolution=med --num-callers=20
++RUN_TEST_PGM = ./$(TEST_PGM)
++SLANGINC = ..
++SLANGLIB = ../$(ARCH)objs
++OTHER_LIBS = -lm @TERMCAP@
++OTHER_CFLAGS =
++
++runtests: $(TEST_PGM) cleantmp
++	@tests=""; \
++	for test in $(TEST_SCRIPTS); \
++	do \
++	   tests="$$tests $$test.sl"; \
++	done; \
++	for test in $(TEST_SCRIPTS_SLC); \
++	do \
++	   tests="$$tests $$test.slc"; \
++	done; \
++	MAKERUNNING=1 ./runtests.sh $$tests
++#	@touch $(TEST_PGM).c
++
++update: $(TEST_PGM) cleantmp
++	@tests=""; \
++	for X in $(TEST_SCRIPTS); \
++	do \
++	  if [ ! -e lastrun/$$X.sl ] || [ $$X.sl -nt lastrun/$$X.sl ] ; \
++	  then \
++	   tests="$$tests $$X.sl"; \
++	  fi \
++	done; \
++	for X in $(TEST_SCRIPTS_SLC); \
++	do \
++	  if [ ! -e lastrun/$$X.slc ] || [ $$X.sl -nt lastrun/$$X.slc ] ; \
++	  then \
++	   tests="$$tests $$X.slc"; \
++	  fi \
++	done; \
++	if test -n "$$tests"; \
++	then \
++	  MAKERUNNING=1 ./runtests.sh $$tests; \
++	fi
++#	@touch $(TEST_PGM).c
++
++memcheck_runtests: $(TEST_PGM) cleantmp
++	@echo ""
++	@echo "Running tests:"
++	@echo ""
++	-@for X in $(TEST_SCRIPTS); \
++	do \
++	   $(MEMCHECK) --log-file=log.$${X} $(RUN_TEST_PGM) $$X.sl; \
++	   grep ERROR log.$${X}; grep 'lost: [^0]' log.$${X}; \
++	   $(MEMCHECK) --log-file=log.$${X}_u $(RUN_TEST_PGM) -utf8 $$X.sl; \
++	   grep ERROR log.$${X}_u; grep 'lost: [^0]' log.$${X}_u; \
++	done
++#	touch $(TEST_PGM).c
++
++memcheck_runtests_slc: $(TEST_PGM) cleantmp
++	@echo ""
++	@echo "Running tests:"
++	@echo ""
++	-@for X in $(TEST_SCRIPTS_SLC); \
++	do \
++	   $(MEMCHECK) --log-file=log.$${X}_c $(RUN_TEST_PGM) $$X.slc; \
++	   $(MEMCHECK) --log-file=log.$${X}_uc $(RUN_TEST_PGM) -utf8 $$X.slc; \
++	done
++#	touch $(TEST_PGM).c
++
++memcheck: memcheck_runtests memcheck_runtests_slc
++
++$(TEST_PGM): $(TEST_PGM).c assoc.c list.c $(SLANGLIB)/libslang.a
++	$(CC) $(CFLAGS) $(OTHER_CFLAGS) $(LDFLAGS) $(TEST_PGM).c -o $(TEST_PGM) -I$(SLANGINC) -L$(SLANGLIB) -lslang $(OTHER_LIBS)
++cleantmp:
++	-/bin/rm -rf tmpfile*.* tmpdir*.*
++clean: cleantmp
++	-/bin/rm -f *~ *.o *.log log.pid* *.slc log.* *.log-*
++distclean: clean
++	/bin/rm -f $(TEST_PGM) $(TEST_PGM).gcda $(TEST_PGM).gcno
++.PHONY: clean memcheck runtests memcheck_runtests_slc memcheck_runtests cleantmp
++
diff --git a/import-layers/yocto-poky/meta/recipes-extended/slang/slang/test-add-output-in-the-format-result-testname.patch b/import-layers/yocto-poky/meta/recipes-extended/slang/slang/test-add-output-in-the-format-result-testname.patch
new file mode 100644
index 0000000..27a9bb8
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/slang/slang/test-add-output-in-the-format-result-testname.patch
@@ -0,0 +1,30 @@
+From 38688ee2754415cf2a1935dafb8278861b7315e7 Mon Sep 17 00:00:00 2001
+From: Stefan Strogin <sstrogin@cisco.com>
+Date: Thu, 2 Mar 2017 00:26:31 +0200
+Subject: [PATCH] test: add output in the format "result: testname"
+
+Upstream-Status: Inappropriate [oe specific]
+
+Signed-off-by: Stefan Strogin <sstrogin@cisco.com>
+---
+ src/test/runtests.sh | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/src/test/runtests.sh b/src/test/runtests.sh
+index a3eaad0..64f0705 100755
+--- a/src/test/runtests.sh
++++ b/src/test/runtests.sh
+@@ -34,8 +34,10 @@ do
+     then
+ 	n_failed=`expr ${n_failed} + 1`
+ 	tests_failed="$tests_failed $testfile"
++	echo "FAIL: $testfile"
+     else
+         touch lastrun/$testfile
++	echo "PASS: $testfile"
+     fi
+ done
+ 
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/slang/slang_2.3.1a.bb b/import-layers/yocto-poky/meta/recipes-extended/slang/slang_2.3.1a.bb
index c71d804..0585c14 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/slang/slang_2.3.1a.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/slang/slang_2.3.1a.bb
@@ -17,6 +17,9 @@
 SRC_URI = "http://www.jedsoft.org/releases/${BPN}/${BP}.tar.bz2 \
            file://no-x.patch \
            file://dont-link-to-host.patch \
+           file://test-add-output-in-the-format-result-testname.patch \
+           file://terminfo_fixes.patch \
+           file://run-ptest \
           "
 
 SRC_URI[md5sum] = "c5235313042ed0e71ec708f7b85ec241"
@@ -25,7 +28,7 @@
 UPSTREAM_CHECK_URI = "http://www.jedsoft.org/releases/slang/"
 PREMIRRORS_append = "\n http://www.jedsoft.org/releases/slang/.* http://www.jedsoft.org/releases/slang/old/ \n"
 
-inherit autotools-brokensep
+inherit autotools-brokensep ptest
 CLEANBROKEN = "1"
 
 EXTRA_OECONF = "--without-onig"
@@ -49,6 +52,27 @@
     cd ${B}
 }
 
+do_compile_ptest() {
+	oe_runmake -C src static
+	oe_runmake -C src/test sltest
+}
+
+do_install_ptest() {
+	mkdir ${D}${PTEST_PATH}/test
+	for f in Makefile sltest runtests.sh *.sl *.inc; do
+		cp ${S}/src/test/$f ${D}${PTEST_PATH}/test/
+	done
+	sed -e 's/\ \$(TEST_PGM)\.c\ assoc\.c\ list\.c\ \$(SLANGLIB)\/libslang\.a//' \
+	    -e '/\$(CC).*(TEST_PGM)/d' \
+	    -i ${D}${PTEST_PATH}/test/Makefile
+
+	cp ${S}/slsh/lib/require.sl ${D}${PTEST_PATH}/test/
+	sed -i 's/\.\.\/\.\.\/slsh\/lib\/require\.sl/require\.sl/' ${D}${PTEST_PATH}/test/req.sl
+
+	cp ${S}/doc/text/slangfun.txt ${D}${PTEST_PATH}/test/
+	sed -i 's/\.\.\/\.\.\/doc\/text\/slangfun\.txt/slangfun\.txt/' ${D}${PTEST_PATH}/test/docfun.sl
+}
+
 FILES_${PN} += "${libdir}/${BPN}/v2/modules/ ${datadir}/slsh/"
 
 PARALLEL_MAKE = ""
diff --git a/import-layers/yocto-poky/meta/recipes-extended/stat/stat_3.3.bb b/import-layers/yocto-poky/meta/recipes-extended/stat/stat_3.3.bb
index 0697c73..8ac8e89 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/stat/stat_3.3.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/stat/stat_3.3.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Command line file status display utility"
+HOMEPAGE = "http://www.ibiblio.org/pub/Linux/utils/file/"
 DESCRIPTION = "Displays all information about a file that the stat() call provides and all information about a filesystem that statfs() provides."
 SECTION = "console/utils"
 LICENSE = "GPLv2"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/stress/files/texinfo.patch b/import-layers/yocto-poky/meta/recipes-extended/stress/files/texinfo.patch
index 5ac5951..f23a1f6 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/stress/files/texinfo.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/stress/files/texinfo.patch
@@ -1,3 +1,4 @@
+Upstream-Status: Pending
 --- a/doc/stress.texi
 +++ b/doc/stress.texi
 @@ -62,47 +62,47 @@
diff --git a/import-layers/yocto-poky/meta/recipes-extended/sudo/sudo.inc b/import-layers/yocto-poky/meta/recipes-extended/sudo/sudo.inc
index d42a04a..80ec0ae 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/sudo/sudo.inc
+++ b/import-layers/yocto-poky/meta/recipes-extended/sudo/sudo.inc
@@ -4,7 +4,7 @@
 BUGTRACKER = "http://www.sudo.ws/bugs/"
 SECTION = "admin"
 LICENSE = "ISC & BSD & Zlib"
-LIC_FILES_CHKSUM = "file://doc/LICENSE;md5=f600a47c2a2cdde5899e449607810ed1 \
+LIC_FILES_CHKSUM = "file://doc/LICENSE;md5=652fb4334c13b511597d7940ef8b3323 \
                     file://plugins/sudoers/redblack.c;beginline=1;endline=41;md5=cfe41112f96c19a074934d128f45c693 \
                     file://lib/util/reallocarray.c;beginline=3;endline=16;md5=85b0905b795d4d58bf2e00635649eec6 \
                     file://lib/util/fnmatch.c;beginline=3;endline=27;md5=67f83ee9bd456557397082f8f1be0efd \
@@ -27,6 +27,12 @@
 
 # mksigname/mksiglist are used on build host to generate source files
 do_compile_prepend () {
+	# Remove build host references from sudo_usage.h
+	sed -i  \
+	    -e 's,--with-libtool-sysroot=${STAGING_DIR_TARGET},,g' \
+	    -e 's,--build=${BUILD_SYS},,g' \
+	    -e 's,--host=${HOST_SYS},,g' \
+	    ${B}/src/sudo_usage.h
 	oe_runmake SSP_CFLAGS="" SSP_LDFLAGS="" CC="$BUILD_CC" CFLAGS="$BUILD_CFLAGS" CPPFLAGS="$BUILD_CPPFLAGS -I${S}/include -I${S} -I${B}"  -C lib/util mksigname mksiglist
 }
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/sudo/sudo_1.8.19p2.bb b/import-layers/yocto-poky/meta/recipes-extended/sudo/sudo_1.8.19p2.bb
deleted file mode 100644
index a9659c3..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/sudo/sudo_1.8.19p2.bb
+++ /dev/null
@@ -1,36 +0,0 @@
-require sudo.inc
-
-SRC_URI = "http://ftp.sudo.ws/sudo/dist/sudo-${PV}.tar.gz \
-           ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${PAM_SRC_URI}', '', d)} \
-           file://0001-Include-sys-types.h-for-id_t-definition.patch \
-           "
-
-PAM_SRC_URI = "file://sudo.pam"
-
-SRC_URI[md5sum] = "31a6090ed1d0946fa22cba19e86aafef"
-SRC_URI[sha256sum] = "237e18e67c2ad59ecacfa4b7707198b09fcf84914621585a9bc670dcc31a52e0"
-
-DEPENDS += " ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'libpam', '', d)}"
-RDEPENDS_${PN} += " ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'pam-plugin-limits pam-plugin-keyinit', '', d)}"
-
-EXTRA_OECONF += " \
-             ac_cv_type_rsize_t=no \
-             ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '--with-pam', '--without-pam', d)} \
-             ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', '--enable-tmpfiles.d=${libdir}/tmpfiles.d', '--disable-tmpfiles.d', d)} \
-             "
-
-do_install_append () {
-	if [ "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}" ]; then
-		install -D -m 644 ${WORKDIR}/sudo.pam ${D}/${sysconfdir}/pam.d/sudo
-	fi
-
-	chmod 4111 ${D}${bindir}/sudo
-	chmod 0440 ${D}${sysconfdir}/sudoers
-
-	# Explicitly remove the ${localstatedir}/run directory to avoid QA error
-	rmdir -p --ignore-fail-on-non-empty ${D}${localstatedir}/run/sudo
-}
-
-FILES_${PN} += "${libdir}/tmpfiles.d"
-FILES_${PN}-dev += "${libexecdir}/${BPN}/lib*${SOLIBSDEV} ${libexecdir}/${BPN}/*.la \
-                    ${libexecdir}/lib*${SOLIBSDEV} ${libexecdir}/*.la"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/sudo/sudo_1.8.20p2.bb b/import-layers/yocto-poky/meta/recipes-extended/sudo/sudo_1.8.20p2.bb
new file mode 100644
index 0000000..4f24b3c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/sudo/sudo_1.8.20p2.bb
@@ -0,0 +1,36 @@
+require sudo.inc
+
+SRC_URI = "http://ftp.sudo.ws/sudo/dist/sudo-${PV}.tar.gz \
+           ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '${PAM_SRC_URI}', '', d)} \
+           file://0001-Include-sys-types.h-for-id_t-definition.patch \
+           "
+
+PAM_SRC_URI = "file://sudo.pam"
+
+SRC_URI[md5sum] = "03da8e711caca6fd93e57751bfb74adc"
+SRC_URI[sha256sum] = "bd42ae1059e935f795c69ea97b3de09fe9410a58a74b5d5e6836eb5067a445d9"
+
+DEPENDS += " ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'libpam', '', d)}"
+RDEPENDS_${PN} += " ${@bb.utils.contains('DISTRO_FEATURES', 'pam', 'pam-plugin-limits pam-plugin-keyinit', '', d)}"
+
+EXTRA_OECONF += " \
+             ac_cv_type_rsize_t=no \
+             ${@bb.utils.contains('DISTRO_FEATURES', 'pam', '--with-pam', '--without-pam', d)} \
+             ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', '--enable-tmpfiles.d=${libdir}/tmpfiles.d', '--disable-tmpfiles.d', d)} \
+             "
+
+do_install_append () {
+	if [ "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}" ]; then
+		install -D -m 644 ${WORKDIR}/sudo.pam ${D}/${sysconfdir}/pam.d/sudo
+	fi
+
+	chmod 4111 ${D}${bindir}/sudo
+	chmod 0440 ${D}${sysconfdir}/sudoers
+
+	# Explicitly remove the ${localstatedir}/run directory to avoid QA error
+	rmdir -p --ignore-fail-on-non-empty ${D}${localstatedir}/run/sudo
+}
+
+FILES_${PN} += "${libdir}/tmpfiles.d"
+FILES_${PN}-dev += "${libexecdir}/${BPN}/lib*${SOLIBSDEV} ${libexecdir}/${BPN}/*.la \
+                    ${libexecdir}/lib*${SOLIBSDEV} ${libexecdir}/*.la"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/sysklogd/files/0001-fix-problems-that-causes-a-segmentation-fault-under-.patch b/import-layers/yocto-poky/meta/recipes-extended/sysklogd/files/0001-fix-problems-that-causes-a-segmentation-fault-under-.patch
new file mode 100644
index 0000000..56431af
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/sysklogd/files/0001-fix-problems-that-causes-a-segmentation-fault-under-.patch
@@ -0,0 +1,28 @@
+From cb72b3e172c238b4b5ae5935dc6be54f5034fcf1 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 30 Jun 2017 18:20:06 -0700
+Subject: [PATCH 1/2] fix problems that causes a segmentation fault under some
+ conditions
+
+Upstream-Status: Inappropriate [ no upstream ]
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ ksym_mod.c | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/ksym_mod.c b/ksym_mod.c
+index 6e26da1..a3daa7d 100644
+--- a/ksym_mod.c
++++ b/ksym_mod.c
+@@ -186,7 +186,6 @@ extern int InitMsyms()
+ 		else
+ 			Syslog(LOG_ERR, "Error loading kernel symbols " \
+ 			       "- %s\n", strerror(errno));
+-		fclose(ksyms);
+ 		return(0);
+ 	}
+ 
+-- 
+2.13.2
+
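The sysklogd hunk drops an fclose() in an error path; the likely failure is that this path runs when the symbol file was never successfully opened, and fclose(NULL) is undefined behaviour that crashes on common libcs. A short sketch of the safe pattern, with an illustrative file name:

    /* fclose(NULL) is undefined behaviour, so an error path must only
     * close a stream that was actually opened. */
    #include <stdio.h>

    static int load_symbols(const char *path)
    {
        FILE *fp = fopen(path, "r");

        if (fp == NULL) {
            fprintf(stderr, "cannot load kernel symbols from %s\n", path);
            /* no fclose(fp) here: fp is NULL */
            return 0;
        }

        /* ... parse the symbol table ... */

        fclose(fp);
        return 1;
    }

    int main(void)
    {
        return load_symbols("/does/not/exist") ? 0 : 1;
    }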
diff --git a/import-layers/yocto-poky/meta/recipes-extended/sysklogd/files/0002-Make-way-for-respecting-flags-from-environment.patch b/import-layers/yocto-poky/meta/recipes-extended/sysklogd/files/0002-Make-way-for-respecting-flags-from-environment.patch
new file mode 100644
index 0000000..ebbdef3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/sysklogd/files/0002-Make-way-for-respecting-flags-from-environment.patch
@@ -0,0 +1,35 @@
+From b22f244732cd0f475af2f82fc7eecec49f90623b Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 1 Jul 2017 00:01:50 -0700
+Subject: [PATCH 2/2] Make way for respecting flags from environment
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ Makefile | 4 +---
+ 1 file changed, 1 insertion(+), 3 deletions(-)
+
+diff --git a/Makefile b/Makefile
+index 5af1689..af699d2 100644
+--- a/Makefile
++++ b/Makefile
+@@ -17,14 +17,12 @@
+ #   along with this program; if not, write to the Free Software
+ #   Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ 
+-CC= gcc
+ #SKFLAGS= -g -DSYSV -Wall
+ #LDFLAGS= -g
+-SKFLAGS= $(RPM_OPT_FLAGS) -O3 -DSYSV -fomit-frame-pointer -Wall -fno-strength-reduce
++SKFLAGS = $(CFLAGS) $(CPPFLAGS) -DSYSV -Wall -fno-strength-reduce
+ # -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE
+ # -D_FILE_OFFSET_BITS=64 -D_LARGEFILE64_SOURCE
+ # $(shell getconf LFS_SKFLAGS)
+-LDFLAGS= -s
+ 
+ # Look where your install program is.
+ INSTALL = /usr/bin/install
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/sysklogd/sysklogd.inc b/import-layers/yocto-poky/meta/recipes-extended/sysklogd/sysklogd.inc
index 78b8d7a..1a537fa 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/sysklogd/sysklogd.inc
+++ b/import-layers/yocto-poky/meta/recipes-extended/sysklogd/sysklogd.inc
@@ -11,11 +11,13 @@
                     file://klogd.c;beginline=2;endline=19;md5=7e87ed0ae6142de079bce738c10c899d \
                    "
 
-inherit update-rc.d update-alternatives systemd
+inherit update-rc.d systemd
 
 SRC_URI = "http://www.infodrom.org/projects/sysklogd/download/sysklogd-${PV}.tar.gz \
            file://no-strip-install.patch \
            file://0001-Fix-build-with-musl.patch \
+           file://0001-fix-problems-that-causes-a-segmentation-fault-under-.patch \
+           file://0002-Make-way-for-respecting-flags-from-environment.patch \
            file://sysklogd \
            file://syslog.conf \
            file://syslogd.service \
@@ -30,11 +32,10 @@
 SYSTEMD_AUTO_ENABLE = "enable"
 
 INITSCRIPT_NAME = "syslog"
-CONFFILES_${PN} = "${sysconfdir}/syslog.conf.${BPN}"
+CONFFILES_${PN} = "${sysconfdir}/syslog.conf"
+RCONFLICTS_${PN}-syslog = "rsyslog busybox-syslog syslog-ng"
 
-EXTRA_OEMAKE = "-e MAKEFLAGS="
-
-CFLAGS_append = " -DSYSV"
+CFLAGS += "-DSYSV -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE"
 
 do_install () {
 	install -d ${D}${mandir}/man8 \
@@ -57,21 +58,6 @@
 
 FILES_${PN} += "${@bb.utils.contains('DISTRO_FEATURES','systemd','${exec_prefix}/lib/tmpfiles.d/sysklogd.conf', '', d)}"
 
-# sysklogd package has no internal systemd support, so we weigh busybox's
-# sysklogd utility over it in case of systemd
-ALTERNATIVE_PRIORITY = "${@bb.utils.contains('DISTRO_FEATURES','systemd','10','100',d)}"
-
-ALTERNATIVE_${PN} = "syslogd klogd syslog-conf \
-    ${@bb.utils.contains('DISTRO_FEATURES','sysvinit','syslog-init','',d)}"
-
-ALTERNATIVE_${PN}-doc = "syslogd.8"
-ALTERNATIVE_LINK_NAME[syslogd.8] = "${mandir}/man8/syslogd.8"
-
-ALTERNATIVE_LINK_NAME[syslogd] = "${base_sbindir}/syslogd"
-ALTERNATIVE_LINK_NAME[klogd] = "${base_sbindir}/klogd"
-ALTERNATIVE_LINK_NAME[syslog-init] = "${sysconfdir}/init.d/syslog"
-ALTERNATIVE_LINK_NAME[syslog-conf] = "${sysconfdir}/syslog.conf"
-
 pkg_prerm_${PN} () {
 	if test "x$D" = "x"; then
 	if test "$1" = "upgrade" -o "$1" = "remove"; then
diff --git a/import-layers/yocto-poky/meta/recipes-extended/sysstat/sysstat.inc b/import-layers/yocto-poky/meta/recipes-extended/sysstat/sysstat.inc
index bb5629d..0bc7e14 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/sysstat/sysstat.inc
+++ b/import-layers/yocto-poky/meta/recipes-extended/sysstat/sysstat.inc
@@ -17,8 +17,10 @@
 # autotools-brokensep as this package doesn't use automake
 inherit autotools-brokensep gettext systemd
 
-EXTRA_OECONF += "--disable-sensors"
-EXTRA_OEMAKE += 'LFLAGS=""'
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[lm-sensors] = "--enable-sensors,--disable-sensors,lmsensors,lmsensors-libsensors"
+
+EXTRA_OECONF += "--disable-stripping"
 
 SYSTEMD_PACKAGES = "${PN}"
 SYSTEMD_SERVICE_${PN} = "sysstat.service"
@@ -32,10 +34,16 @@
 	autotools_do_install
 
 	# don't install /var/log/sa when populating rootfs. Do it through volatile
-
 	rm -rf ${D}/var
-	install -d ${D}/etc/default/volatiles
-	install -m 0644 ${WORKDIR}/99_sysstat ${D}/etc/default/volatiles
+	if ${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', 'true', 'false', d)}; then
+	        install -d ${D}/etc/default/volatiles
+		install -m 0644 ${WORKDIR}/99_sysstat ${D}/etc/default/volatiles
+	fi
+	if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
+	        install -d ${D}${sysconfdir}/tmpfiles.d
+	        echo "d ${localstatedir}/log/sa - - - -" \
+		     > ${D}${sysconfdir}/tmpfiles.d/sysstat.conf
+	fi
 
 	install -d ${D}${systemd_unitdir}/system
 	install -m 0644 ${WORKDIR}/sysstat.service ${D}${systemd_unitdir}/system
@@ -55,4 +63,3 @@
 FILES_${PN} += "${libdir}/sa"
 
 TARGET_CC_ARCH += "${LDFLAGS}"
-LDFLAGS_append_libc-uclibc = " -lintl"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/sysstat/sysstat_11.5.4.bb b/import-layers/yocto-poky/meta/recipes-extended/sysstat/sysstat_11.5.4.bb
deleted file mode 100644
index 7ff363b2..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/sysstat/sysstat_11.5.4.bb
+++ /dev/null
@@ -1,8 +0,0 @@
-require sysstat.inc
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=a23a74b3f4caf9616230789d94217acb"
-
-SRC_URI += "file://0001-Include-needed-headers-explicitly.patch"
-
-SRC_URI[md5sum] = "f16ae8edd462f5199ee033f7c0e2c197"
-SRC_URI[sha256sum] = "1e1008656575e70486b456e79775e98d3b8732d7e2cb408559209bd0318e0807"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/sysstat/sysstat_11.5.7.bb b/import-layers/yocto-poky/meta/recipes-extended/sysstat/sysstat_11.5.7.bb
new file mode 100644
index 0000000..72af931
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/sysstat/sysstat_11.5.7.bb
@@ -0,0 +1,8 @@
+require sysstat.inc
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=a23a74b3f4caf9616230789d94217acb"
+
+SRC_URI += "file://0001-Include-needed-headers-explicitly.patch"
+
+SRC_URI[md5sum] = "8f4a5d0de29f1056153e25e7a9c518d2"
+SRC_URI[sha256sum] = "4a38efaa0ca85ee5484d046bd427012979264fef17f07fd7855860e592819482"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/tcp-wrappers/tcp-wrappers_7.6.bb b/import-layers/yocto-poky/meta/recipes-extended/tcp-wrappers/tcp-wrappers_7.6.bb
index 5fdbbce..1effef1 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/tcp-wrappers/tcp-wrappers_7.6.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/tcp-wrappers/tcp-wrappers_7.6.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Security tool that is a wrapper for TCP daemons"
+HOMEPAGE = "http://www.softpanorama.org/Net/Network_security/TCP_wrappers/"
 DESCRIPTION = "Tools for monitoring and filtering incoming requests for tcp \
                services."
 SECTION = "console/network"
@@ -73,7 +74,6 @@
                 'EXTRA_CFLAGS=${CFLAGS} -DSYS_ERRLIST_DEFINED -DHAVE_STRERROR -DHAVE_WEAKSYMS -D_REENTRANT -DINET6=1 -Dss_family=__ss_family -Dss_len=__ss_len'"
 
 EXTRA_OEMAKE_NETGROUP = "-DNETGROUP -DUSE_GETDOMAIN"
-EXTRA_OEMAKE_NETGROUP_libc-uclibc = "-DUSE_GETDOMAIN"
 EXTRA_OEMAKE_NETGROUP_libc-musl = "-DUSE_GETDOMAIN"
 
 EXTRA_OEMAKE_append_libc-musl = " 'LIBS='"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/texinfo-dummy-native/texinfo-dummy/template.py b/import-layers/yocto-poky/meta/recipes-extended/texinfo-dummy-native/texinfo-dummy/template.py
index 8b7033e..e369f74 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/texinfo-dummy-native/texinfo-dummy/template.py
+++ b/import-layers/yocto-poky/meta/recipes-extended/texinfo-dummy-native/texinfo-dummy/template.py
@@ -1,4 +1,4 @@
-#! /usr/bin/env python2.7
+#! /usr/bin/env python3
 
 # template.py (and other filenames)
 # By Max Eliaser (max.eliaser@intel.com)
@@ -71,7 +71,7 @@
        this_binary + " is not one of " + ', '.join (valid_binaries)
 
 if "--version" in sys.argv:
-    print version_str
+    print(version_str)
     sys.exit (0)
 
 # For debugging
diff --git a/import-layers/yocto-poky/meta/recipes-extended/texinfo/texinfo_6.3.bb b/import-layers/yocto-poky/meta/recipes-extended/texinfo/texinfo_6.3.bb
index d82731e..f58df92 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/texinfo/texinfo_6.3.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/texinfo/texinfo_6.3.bb
@@ -79,4 +79,10 @@
                    ${datadir}/${tex_texinfo} \
                    ${mandir}/man1 ${mandir}/man5"
 
+# Lie about providing the Locale::gettext_xs module. It is not actually built,
+# but the code will test for it and if not found use Locale::gettext_pp instead.
+# However, this causes a file dependency on perl(Locale::gettext_xs) to be
+# generated, which must be satisfied.
+RPROVIDES_${PN} += "perl(Locale::gettext_xs)"
+
 BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/tzcode/files/0001-Fix-Makefile-quoting-bug.patch b/import-layers/yocto-poky/meta/recipes-extended/tzcode/files/0001-Fix-Makefile-quoting-bug.patch
new file mode 100644
index 0000000..e49fa09
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/tzcode/files/0001-Fix-Makefile-quoting-bug.patch
@@ -0,0 +1,174 @@
+From b520d20b8122a783f99f088758b78d928f70ee34 Mon Sep 17 00:00:00 2001
+From: Paul Eggert <eggert@cs.ucla.edu>
+Date: Mon, 23 Oct 2017 11:42:45 -0700
+Subject: [PATCH] Fix Makefile quoting bug
+
+Problem with INSTALLARGS reported by Zefram in:
+https://mm.icann.org/pipermail/tz/2017-October/025360.html
+Fix similar problems too.
+* Makefile (ZIC_INSTALL, VALIDATE_ENV, CC, install)
+(INSTALL, version, INSTALLARGS, right_posix, posix_right)
+(check_public): Use apostrophes to prevent undesirable
+interpretation of names by the shell.  We still do not support
+directory names containing apostrophes or newlines, but this is
+good enough.
+
+Upstream-Status: Backport
+Signed-off-by: Armin Kuster <akuster@mvista.com>
+
+* NEWS: Mention this.
+---
+ Makefile | 64 ++++++++++++++++++++++++++++++++--------------------------------
+ NEWS     |  8 ++++++++
+ 2 files changed, 40 insertions(+), 32 deletions(-)
+
+diff --git a/Makefile b/Makefile
+index c92edc0..97649ca 100644
+--- a/Makefile
++++ b/Makefile
+@@ -313,7 +313,7 @@ ZFLAGS=
+ 
+ # How to use zic to install tz binary files.
+ 
+-ZIC_INSTALL=	$(ZIC) -d $(DESTDIR)$(TZDIR) $(LEAPSECONDS)
++ZIC_INSTALL=	$(ZIC) -d '$(DESTDIR)$(TZDIR)' $(LEAPSECONDS)
+ 
+ # The name of a Posix-compliant 'awk' on your system.
+ AWK=		awk
+@@ -341,8 +341,8 @@ SGML_CATALOG_FILES= \
+ VALIDATE = nsgmls
+ VALIDATE_FLAGS = -s -B -wall -wno-unused-param
+ VALIDATE_ENV = \
+-  SGML_CATALOG_FILES=$(SGML_CATALOG_FILES) \
+-  SGML_SEARCH_PATH=$(SGML_SEARCH_PATH) \
++  SGML_CATALOG_FILES='$(SGML_CATALOG_FILES)' \
++  SGML_SEARCH_PATH='$(SGML_SEARCH_PATH)' \
+   SP_CHARSET_FIXED=YES \
+   SP_ENCODING=UTF-8
+ 
+@@ -396,7 +396,7 @@ GZIPFLAGS=	-9n
+ #MAKE=		make
+ 
+ cc=		cc
+-CC=		$(cc) -DTZDIR=\"$(TZDIR)\"
++CC=		$(cc) -DTZDIR='"$(TZDIR)"'
+ 
+ AR=		ar
+ 
+@@ -473,29 +473,29 @@ all:		tzselect yearistype zic zdump libtz.a $(TABDATA)
+ ALL:		all date $(ENCHILADA)
+ 
+ install:	all $(DATA) $(REDO) $(MANS)
+-		mkdir -p $(DESTDIR)$(ETCDIR) $(DESTDIR)$(TZDIR) \
+-			$(DESTDIR)$(LIBDIR) \
+-			$(DESTDIR)$(MANDIR)/man3 $(DESTDIR)$(MANDIR)/man5 \
+-			$(DESTDIR)$(MANDIR)/man8
++		mkdir -p '$(DESTDIR)$(ETCDIR)' '$(DESTDIR)$(TZDIR)' \
++			'$(DESTDIR)$(LIBDIR)' \
++			'$(DESTDIR)$(MANDIR)/man3' '$(DESTDIR)$(MANDIR)/man5' \
++			'$(DESTDIR)$(MANDIR)/man8'
+ 		$(ZIC_INSTALL) -l $(LOCALTIME) -p $(POSIXRULES)
+-		cp -f $(TABDATA) $(DESTDIR)$(TZDIR)/.
+-		cp tzselect zic zdump $(DESTDIR)$(ETCDIR)/.
+-		cp libtz.a $(DESTDIR)$(LIBDIR)/.
+-		$(RANLIB) $(DESTDIR)$(LIBDIR)/libtz.a
+-		cp -f newctime.3 newtzset.3 $(DESTDIR)$(MANDIR)/man3/.
+-		cp -f tzfile.5 $(DESTDIR)$(MANDIR)/man5/.
+-		cp -f tzselect.8 zdump.8 zic.8 $(DESTDIR)$(MANDIR)/man8/.
++		cp -f $(TABDATA) '$(DESTDIR)$(TZDIR)/.'
++		cp tzselect zic zdump '$(DESTDIR)$(ETCDIR)/.'
++		cp libtz.a '$(DESTDIR)$(LIBDIR)/.'
++		$(RANLIB) '$(DESTDIR)$(LIBDIR)/libtz.a'
++		cp -f newctime.3 newtzset.3 '$(DESTDIR)$(MANDIR)/man3/.'
++		cp -f tzfile.5 '$(DESTDIR)$(MANDIR)/man5/.'
++		cp -f tzselect.8 zdump.8 zic.8 '$(DESTDIR)$(MANDIR)/man8/.'
+ 
+ INSTALL:	ALL install date.1
+-		mkdir -p $(DESTDIR)$(BINDIR) $(DESTDIR)$(MANDIR)/man1
+-		cp date $(DESTDIR)$(BINDIR)/.
+-		cp -f date.1 $(DESTDIR)$(MANDIR)/man1/.
++		mkdir -p '$(DESTDIR)$(BINDIR)' '$(DESTDIR)$(MANDIR)/man1'
++		cp date '$(DESTDIR)$(BINDIR)/.'
++		cp -f date.1 '$(DESTDIR)$(MANDIR)/man1/.'
+ 
+ version:	$(VERSION_DEPS)
+ 		{ (type git) >/dev/null 2>&1 && \
+ 		  V=`git describe --match '[0-9][0-9][0-9][0-9][a-z]*' \
+ 				--abbrev=7 --dirty` || \
+-		  V=$(VERSION); } && \
++		  V='$(VERSION)'; } && \
+ 		printf '%s\n' "$$V" >$@.out
+ 		mv $@.out $@
+ 
+@@ -529,12 +529,12 @@ leapseconds:	$(LEAP_DEPS)
+ # Arguments to pass to submakes of install_data.
+ # They can be overridden by later submake arguments.
+ INSTALLARGS = \
+- BACKWARD=$(BACKWARD) \
+- DESTDIR=$(DESTDIR) \
++ BACKWARD='$(BACKWARD)' \
++ DESTDIR='$(DESTDIR)' \
+  LEAPSECONDS='$(LEAPSECONDS)' \
+  PACKRATDATA='$(PACKRATDATA)' \
+- TZDIR=$(TZDIR) \
+- YEARISTYPE=$(YEARISTYPE) \
++ TZDIR='$(TZDIR)' \
++ YEARISTYPE='$(YEARISTYPE)' \
+  ZIC='$(ZIC)'
+ 
+ # 'make install_data' installs one set of tz binary files.
+@@ -558,16 +558,16 @@ right_only:
+ # You must replace all of $(TZDIR) to switch from not using leap seconds
+ # to using them, or vice versa.
+ right_posix:	right_only
+-		rm -fr $(DESTDIR)$(TZDIR)-leaps
+-		ln -s $(TZDIR_BASENAME) $(DESTDIR)$(TZDIR)-leaps || \
+-		  $(MAKE) $(INSTALLARGS) TZDIR=$(TZDIR)-leaps right_only
+-		$(MAKE) $(INSTALLARGS) TZDIR=$(TZDIR)-posix posix_only
++		rm -fr '$(DESTDIR)$(TZDIR)-leaps'
++		ln -s '$(TZDIR_BASENAME)' '$(DESTDIR)$(TZDIR)-leaps' || \
++		  $(MAKE) $(INSTALLARGS) TZDIR='$(TZDIR)-leaps' right_only
++		$(MAKE) $(INSTALLARGS) TZDIR='$(TZDIR)-posix' posix_only
+ 
+ posix_right:	posix_only
+-		rm -fr $(DESTDIR)$(TZDIR)-posix
+-		ln -s $(TZDIR_BASENAME) $(DESTDIR)$(TZDIR)-posix || \
+-		  $(MAKE) $(INSTALLARGS) TZDIR=$(TZDIR)-posix posix_only
+-		$(MAKE) $(INSTALLARGS) TZDIR=$(TZDIR)-leaps right_only
++		rm -fr '$(DESTDIR)$(TZDIR)-posix'
++		ln -s '$(TZDIR_BASENAME)' '$(DESTDIR)$(TZDIR)-posix' || \
++		  $(MAKE) $(INSTALLARGS) TZDIR='$(TZDIR)-posix' posix_only
++		$(MAKE) $(INSTALLARGS) TZDIR='$(TZDIR)-leaps' right_only
+ 
+ # This obsolescent rule is present for backwards compatibility with
+ # tz releases 2014g through 2015g.  It should go away eventually.
+@@ -764,7 +764,7 @@ set-timestamps.out: $(ENCHILADA)
+ 
+ check_public:
+ 		$(MAKE) maintainer-clean
+-		$(MAKE) "CFLAGS=$(GCC_DEBUG_FLAGS)" ALL
++		$(MAKE) CFLAGS='$(GCC_DEBUG_FLAGS)' ALL
+ 		mkdir -p public.dir
+ 		for i in $(TDATA) tzdata.zi; do \
+ 		  $(zic) -v -d public.dir $$i 2>&1 || exit; \
+diff --git a/NEWS b/NEWS
+index bd2bec2..75ab095 100644
+--- a/NEWS
++++ b/NEWS
+@@ -1,5 +1,13 @@
+ News for the tz database
+ 
++Unreleased, experimental changes
++
++  Changes to build procedure
++
++    The Makefile now quotes values like BACKWARD more carefully when
++    passing them to the shell.  (Problem reported by Zefram.)
++
++
+ Release 2017c - 2017-10-20 14:49:34 -0700
+ 
+   Briefly:
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/tzcode/files/0002-Port-zdump-to-C90-snprintf.patch b/import-layers/yocto-poky/meta/recipes-extended/tzcode/files/0002-Port-zdump-to-C90-snprintf.patch
new file mode 100644
index 0000000..87afe47
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/tzcode/files/0002-Port-zdump-to-C90-snprintf.patch
@@ -0,0 +1,115 @@
+From e231da4fb2beb17c60b4b1a5c276366d6a6e433f Mon Sep 17 00:00:00 2001
+From: Paul Eggert <eggert@cs.ucla.edu>
+Date: Mon, 23 Oct 2017 17:58:36 -0700
+Subject: [PATCH] Port zdump to C90 + snprintf
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Problem reported by Jon Skeet in:
+https://mm.icann.org/pipermail/tz/2017-October/025362.html
+* NEWS: Mention this.
+* zdump.c (my_snprintf): New macro or function.  If a macro, it is
+just snprintf.  If a function, it is the same as the old snprintf
+static function, with an ATTRIBUTE_FORMAT to pacify modern GCC.
+All uses of snprintf changed to use my_snprintf.  This way,
+installers don’t need to specify -DHAVE_SNPRINTF if they are using
+a pre-C99 compiler with a library that has snprintf.
+
+Upstream-Status: Backport
+Signed-off-by: Armin Kuster <akuster@mvista.com>
+
+---
+ NEWS    |  4 ++++
+ zdump.c | 29 ++++++++++++++++-------------
+ 2 files changed, 20 insertions(+), 13 deletions(-)
+
+diff --git a/NEWS b/NEWS
+index 75ab095..dea08b8 100644
+--- a/NEWS
++++ b/NEWS
+@@ -7,6 +7,10 @@ Unreleased, experimental changes
+     The Makefile now quotes values like BACKWARD more carefully when
+     passing them to the shell.  (Problem reported by Zefram.)
+ 
++    Builders no longer need to specify -DHAVE_SNPRINTF on platforms
++    that have snprintf and use pre-C99 compilers.  (Problem reported
++    by Jon Skeet.)
++
+ 
+ Release 2017c - 2017-10-20 14:49:34 -0700
+ 
+diff --git a/zdump.c b/zdump.c
+index 8e3bf3e..d4e6084 100644
+--- a/zdump.c
++++ b/zdump.c
+@@ -795,12 +795,14 @@ show(timezone_t tz, char *zone, time_t t, bool v)
+ 		abbrok(abbr(tmp), zone);
+ }
+ 
+-#if !HAVE_SNPRINTF
++#if HAVE_SNPRINTF
++# define my_snprintf snprintf
++#else
+ # include <stdarg.h>
+ 
+ /* A substitute for snprintf that is good enough for zdump.  */
+-static int
+-snprintf(char *s, size_t size, char const *format, ...)
++static int ATTRIBUTE_FORMAT((printf, 3, 4))
++my_snprintf(char *s, size_t size, char const *format, ...)
+ {
+   int n;
+   va_list args;
+@@ -839,10 +841,10 @@ format_local_time(char *buf, size_t size, struct tm const *tm)
+ {
+   int ss = tm->tm_sec, mm = tm->tm_min, hh = tm->tm_hour;
+   return (ss
+-	  ? snprintf(buf, size, "%02d:%02d:%02d", hh, mm, ss)
++	  ? my_snprintf(buf, size, "%02d:%02d:%02d", hh, mm, ss)
+ 	  : mm
+-	  ? snprintf(buf, size, "%02d:%02d", hh, mm)
+-	  : snprintf(buf, size, "%02d", hh));
++	  ? my_snprintf(buf, size, "%02d:%02d", hh, mm)
++	  : my_snprintf(buf, size, "%02d", hh));
+ }
+ 
+ /* Store into BUF, of size SIZE, a formatted UTC offset for the
+@@ -877,10 +879,10 @@ format_utc_offset(char *buf, size_t size, struct tm const *tm, time_t t)
+   mm = off / 60 % 60;
+   hh = off / 60 / 60;
+   return (ss || 100 <= hh
+-	  ? snprintf(buf, size, "%c%02ld%02d%02d", sign, hh, mm, ss)
++	  ? my_snprintf(buf, size, "%c%02ld%02d%02d", sign, hh, mm, ss)
+ 	  : mm
+-	  ? snprintf(buf, size, "%c%02ld%02d", sign, hh, mm)
+-	  : snprintf(buf, size, "%c%02ld", sign, hh));
++	  ? my_snprintf(buf, size, "%c%02ld%02d", sign, hh, mm)
++	  : my_snprintf(buf, size, "%c%02ld", sign, hh));
+ }
+ 
+ /* Store into BUF (of size SIZE) a quoted string representation of P.
+@@ -983,15 +985,16 @@ istrftime(char *buf, size_t size, char const *time_fmt,
+ 	    for (abp = ab; is_alpha(*abp); abp++)
+ 	      continue;
+ 	    len = (!*abp && *ab
+-		   ? snprintf(b, s, "%s", ab)
++		   ? my_snprintf(b, s, "%s", ab)
+ 		   : format_quoted_string(b, s, ab));
+ 	    if (s <= len)
+ 	      return false;
+ 	    b += len, s -= len;
+ 	  }
+-	  formatted_len = (tm->tm_isdst
+-			   ? snprintf(b, s, &"\t\t%d"[show_abbr], tm->tm_isdst)
+-			   : 0);
++	  formatted_len
++	    = (tm->tm_isdst
++	       ? my_snprintf(b, s, &"\t\t%d"[show_abbr], tm->tm_isdst)
++	       : 0);
+ 	}
+ 	break;
+       }
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-extended/tzcode/tzcode-native_2017b.bb b/import-layers/yocto-poky/meta/recipes-extended/tzcode/tzcode-native_2017b.bb
deleted file mode 100644
index 165d2c6..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/tzcode/tzcode-native_2017b.bb
+++ /dev/null
@@ -1,28 +0,0 @@
-# note that we allow for us to use data later than our code version
-#
-SUMMARY = "tzcode, timezone zoneinfo utils -- zic, zdump, tzselect"
-LICENSE = "PD & BSD & BSD-3-Clause"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=ef1a352b901ee7b75a75df8171d6aca7"
-
-SRC_URI =" http://www.iana.org/time-zones/repository/releases/tzcode${PV}.tar.gz;name=tzcode \
-           http://www.iana.org/time-zones/repository/releases/tzdata${PV}.tar.gz;name=tzdata"
-UPSTREAM_CHECK_URI = "http://www.iana.org/time-zones"
-
-SRC_URI[tzcode.md5sum] = "afaf15deb13759e8b543d86350385b16"
-SRC_URI[tzcode.sha256sum] = "4d1735bb54e22b8d7443d4d1f1a13d007ae11be79a35e51f8e8322fb8e292d40"
-SRC_URI[tzdata.md5sum] = "50dc0dc50c68644c1f70804f2e7a1625"
-SRC_URI[tzdata.sha256sum] = "f8242a522ea3496b0ce4ff4f2e75a049178da21001a08b8e666d8cbe07d18086"
-
-S = "${WORKDIR}"
-
-inherit native
-
-EXTRA_OEMAKE += "cc='${CC}'"
-
-do_install () {
-        install -d ${D}${bindir}/
-        install -m 755 zic ${D}${bindir}/
-        install -m 755 zdump ${D}${bindir}/
-        install -m 755 tzselect ${D}${bindir}/
-}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/tzcode/tzcode-native_2018c.bb b/import-layers/yocto-poky/meta/recipes-extended/tzcode/tzcode-native_2018c.bb
new file mode 100644
index 0000000..85e9b70
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/tzcode/tzcode-native_2018c.bb
@@ -0,0 +1,30 @@
+# note that we allow for us to use data later than our code version
+#
+SUMMARY = "tzcode, timezone zoneinfo utils -- zic, zdump, tzselect"
+LICENSE = "PD & BSD & BSD-3-Clause"
+
+LIC_FILES_CHKSUM = "file://LICENSE;md5=c679c9d6b02bc2757b3eaf8f53c43fba"
+
+SRC_URI =" http://www.iana.org/time-zones/repository/releases/tzcode${PV}.tar.gz;name=tzcode \
+           http://www.iana.org/time-zones/repository/releases/tzdata${PV}.tar.gz;name=tzdata \
+           "
+
+UPSTREAM_CHECK_URI = "http://www.iana.org/time-zones"
+
+SRC_URI[tzcode.md5sum] = "e6e0d4b2ce3fa6906f303157bed2612e"
+SRC_URI[tzcode.sha256sum] = "31fa7fc0f94a6ff2d6bc878c0a35e8ab8b5aa0e8b01445a1d4a8f14777d0e665"
+SRC_URI[tzdata.md5sum] = "c412b1531adef1be7a645ab734f86acc"
+SRC_URI[tzdata.sha256sum] = "2825c3e4b7ef520f24d393bcc02942f9762ffd3e7fc9b23850789ed8f22933f6"
+
+S = "${WORKDIR}"
+
+inherit native
+
+EXTRA_OEMAKE += "cc='${CC}'"
+
+do_install () {
+        install -d ${D}${bindir}/
+        install -m 755 zic ${D}${bindir}/
+        install -m 755 zdump ${D}${bindir}/
+        install -m 755 tzselect ${D}${bindir}/
+}
diff --git a/import-layers/yocto-poky/meta/recipes-extended/tzdata/tzdata_2017b.bb b/import-layers/yocto-poky/meta/recipes-extended/tzdata/tzdata_2017b.bb
deleted file mode 100644
index 55e8976..0000000
--- a/import-layers/yocto-poky/meta/recipes-extended/tzdata/tzdata_2017b.bb
+++ /dev/null
@@ -1,215 +0,0 @@
-SUMMARY = "Timezone data"
-HOMEPAGE = "http://www.iana.org/time-zones"
-SECTION = "base"
-LICENSE = "PD & BSD & BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=ef1a352b901ee7b75a75df8171d6aca7"
-
-DEPENDS = "tzcode-native"
-
-SRC_URI = "http://www.iana.org/time-zones/repository/releases/tzdata${PV}.tar.gz;name=tzdata"
-UPSTREAM_CHECK_URI = "http://www.iana.org/time-zones"
-
-SRC_URI[tzdata.md5sum] = "50dc0dc50c68644c1f70804f2e7a1625"
-SRC_URI[tzdata.sha256sum] = "f8242a522ea3496b0ce4ff4f2e75a049178da21001a08b8e666d8cbe07d18086"
-
-inherit allarch
-
-RCONFLICTS_${PN} = "timezones timezone-africa timezone-america timezone-antarctica \
-             timezone-arctic timezone-asia timezone-atlantic \
-             timezone-australia timezone-europe timezone-indian \
-             timezone-iso3166.tab timezone-pacific timezone-zone.tab"
-
-S = "${WORKDIR}"
-
-DEFAULT_TIMEZONE ?= "Universal"
-INSTALL_TIMEZONE_FILE ?= "1"
-
-TZONES= "africa antarctica asia australasia europe northamerica southamerica  \
-         factory etcetera backward systemv \
-        "
-# pacificnew 
-
-do_compile () {
-        for zone in ${TZONES}; do \
-            ${STAGING_BINDIR_NATIVE}/zic -d ${WORKDIR}${datadir}/zoneinfo -L /dev/null \
-                -y ${S}/yearistype.sh ${S}/${zone} ; \
-            ${STAGING_BINDIR_NATIVE}/zic -d ${WORKDIR}${datadir}/zoneinfo/posix -L /dev/null \
-                -y ${S}/yearistype.sh ${S}/${zone} ; \
-            ${STAGING_BINDIR_NATIVE}/zic -d ${WORKDIR}${datadir}/zoneinfo/right -L ${S}/leapseconds \
-                -y ${S}/yearistype.sh ${S}/${zone} ; \
-        done
-}
-
-do_install () {
-        install -d ${D}/$exec_prefix ${D}${datadir}/zoneinfo
-        cp -pPR ${S}/$exec_prefix ${D}/
-        # libc is removing zoneinfo files from package
-        cp -pP "${S}/zone.tab" ${D}${datadir}/zoneinfo
-        cp -pP "${S}/zone1970.tab" ${D}${datadir}/zoneinfo
-        cp -pP "${S}/iso3166.tab" ${D}${datadir}/zoneinfo
-
-        # Install default timezone
-        if [ -e ${D}${datadir}/zoneinfo/${DEFAULT_TIMEZONE} ]; then
-            install -d ${D}${sysconfdir}
-            if [ "${INSTALL_TIMEZONE_FILE}" = "1" ]; then
-                echo ${DEFAULT_TIMEZONE} > ${D}${sysconfdir}/timezone
-            fi
-            ln -s ${datadir}/zoneinfo/${DEFAULT_TIMEZONE} ${D}${sysconfdir}/localtime
-        else
-            bberror "DEFAULT_TIMEZONE is set to an invalid value."
-            exit 1
-        fi
-
-        chown -R root:root ${D}
-}
-
-pkg_postinst_${PN} () {
-	etc_lt="$D${sysconfdir}/localtime"
-	src="$D${sysconfdir}/timezone"
-
-	if [ -e ${src} ] ; then
-		tz=$(sed -e 's:#.*::' -e 's:[[:space:]]*::g' -e '/^$/d' "${src}")
-	fi
-	
-	if [ -z "${tz}" ] ; then
-		exit 0
-	fi
-	
-	if [ ! -e "$D${datadir}/zoneinfo/${tz}" ] ; then
-		echo "You have an invalid TIMEZONE setting in ${src}"
-		echo "Your ${etc_lt} has been reset to Universal; enjoy!"
-		tz="Universal"
-		echo "Updating ${etc_lt} with $D${datadir}/zoneinfo/${tz}"
-		if [ -L ${etc_lt} ] ; then
-			rm -f "${etc_lt}"
-		fi
-		ln -s "${datadir}/zoneinfo/${tz}" "${etc_lt}"
-	fi
-}
-
-# Packages primarily organized by directory with a major city
-# in most time zones in the base package
-
-PACKAGES = "tzdata tzdata-misc tzdata-posix tzdata-right tzdata-africa \
-    tzdata-americas tzdata-antarctica tzdata-arctic tzdata-asia \
-    tzdata-atlantic tzdata-australia tzdata-europe tzdata-pacific"
-
-FILES_tzdata-africa += "${datadir}/zoneinfo/Africa/*"
-RPROVIDES_tzdata-africa = "tzdata-africa"
-
-FILES_tzdata-americas += "${datadir}/zoneinfo/America/*  \
-                ${datadir}/zoneinfo/US/*                \
-                ${datadir}/zoneinfo/Brazil/*            \
-                ${datadir}/zoneinfo/Canada/*            \
-                ${datadir}/zoneinfo/Mexico/*            \
-                ${datadir}/zoneinfo/Chile/*"
-RPROVIDES_tzdata-americas = "tzdata-americas"
-
-FILES_tzdata-antarctica += "${datadir}/zoneinfo/Antarctica/*"
-RPROVIDES_tzdata-antarctica = "tzdata-antarctica"
-
-FILES_tzdata-arctic += "${datadir}/zoneinfo/Arctic/*"
-RPROVIDES_tzdata-arctic = "tzdata-arctic"
-
-FILES_tzdata-asia += "${datadir}/zoneinfo/Asia/*        \
-                ${datadir}/zoneinfo/Indian/*            \
-                ${datadir}/zoneinfo/Mideast/*"
-RPROVIDES_tzdata-asia = "tzdata-asia"
-
-FILES_tzdata-atlantic += "${datadir}/zoneinfo/Atlantic/*"
-RPROVIDES_tzdata-atlantic = "tzdata-atlantic"
-
-FILES_tzdata-australia += "${datadir}/zoneinfo/Australia/*"
-RPROVIDES_tzdata-australia = "tzdata-australia"
-
-FILES_tzdata-europe += "${datadir}/zoneinfo/Europe/*"
-RPROVIDES_tzdata-europe = "tzdata-europe"
-
-FILES_tzdata-pacific += "${datadir}/zoneinfo/Pacific/*"
-RPROVIDES_tzdata-pacific = "tzdata-pacific"
-
-FILES_tzdata-posix += "${datadir}/zoneinfo/posix/*"
-RPROVIDES_tzdata-posix = "tzdata-posix"
-
-FILES_tzdata-right += "${datadir}/zoneinfo/right/*"
-RPROVIDES_tzdata-right = "tzdata-right"
-
-
-FILES_tzdata-misc += "${datadir}/zoneinfo/Cuba           \
-                ${datadir}/zoneinfo/Egypt                \
-                ${datadir}/zoneinfo/Eire                 \
-                ${datadir}/zoneinfo/Factory              \
-                ${datadir}/zoneinfo/GB-Eire              \
-                ${datadir}/zoneinfo/Hongkong             \
-                ${datadir}/zoneinfo/Iceland              \
-                ${datadir}/zoneinfo/Iran                 \
-                ${datadir}/zoneinfo/Israel               \
-                ${datadir}/zoneinfo/Jamaica              \
-                ${datadir}/zoneinfo/Japan                \
-                ${datadir}/zoneinfo/Kwajalein            \
-                ${datadir}/zoneinfo/Libya                \
-                ${datadir}/zoneinfo/Navajo               \
-                ${datadir}/zoneinfo/Poland               \
-                ${datadir}/zoneinfo/Portugal             \
-                ${datadir}/zoneinfo/Singapore            \
-                ${datadir}/zoneinfo/Turkey"
-RPROVIDES_tzdata-misc = "tzdata-misc"
-
-
-FILES_${PN} += "${datadir}/zoneinfo/Pacific/Honolulu     \
-                ${datadir}/zoneinfo/America/Anchorage    \
-                ${datadir}/zoneinfo/America/Los_Angeles  \
-                ${datadir}/zoneinfo/America/Denver       \
-                ${datadir}/zoneinfo/America/Chicago      \
-                ${datadir}/zoneinfo/America/New_York     \
-                ${datadir}/zoneinfo/America/Caracas      \
-                ${datadir}/zoneinfo/America/Sao_Paulo    \
-                ${datadir}/zoneinfo/Europe/London        \
-                ${datadir}/zoneinfo/Europe/Paris         \
-                ${datadir}/zoneinfo/Africa/Cairo         \
-                ${datadir}/zoneinfo/Europe/Moscow        \
-                ${datadir}/zoneinfo/Asia/Dubai           \
-                ${datadir}/zoneinfo/Asia/Karachi         \
-                ${datadir}/zoneinfo/Asia/Dhaka           \
-                ${datadir}/zoneinfo/Asia/Bankok          \
-                ${datadir}/zoneinfo/Asia/Hong_Kong       \
-                ${datadir}/zoneinfo/Asia/Tokyo           \
-                ${datadir}/zoneinfo/Australia/Darwin     \
-                ${datadir}/zoneinfo/Australia/Adelaide   \
-                ${datadir}/zoneinfo/Australia/Brisbane   \
-                ${datadir}/zoneinfo/Australia/Sydney     \
-                ${datadir}/zoneinfo/Pacific/Noumea       \
-                ${datadir}/zoneinfo/CET                  \
-                ${datadir}/zoneinfo/CST6CDT              \
-                ${datadir}/zoneinfo/EET                  \
-                ${datadir}/zoneinfo/EST                  \
-                ${datadir}/zoneinfo/EST5EDT              \
-                ${datadir}/zoneinfo/GB                   \
-                ${datadir}/zoneinfo/GMT                  \
-                ${datadir}/zoneinfo/GMT+0                \
-                ${datadir}/zoneinfo/GMT-0                \
-                ${datadir}/zoneinfo/GMT0                 \
-                ${datadir}/zoneinfo/Greenwich            \
-                ${datadir}/zoneinfo/HST                  \
-                ${datadir}/zoneinfo/MET                  \
-                ${datadir}/zoneinfo/MST                  \
-                ${datadir}/zoneinfo/MST7MDT              \
-                ${datadir}/zoneinfo/NZ                   \
-                ${datadir}/zoneinfo/NZ-CHAT              \
-                ${datadir}/zoneinfo/PRC                  \
-                ${datadir}/zoneinfo/PST8PDT              \
-                ${datadir}/zoneinfo/ROC                  \
-                ${datadir}/zoneinfo/ROK                  \
-                ${datadir}/zoneinfo/UCT                  \
-                ${datadir}/zoneinfo/UTC                  \
-                ${datadir}/zoneinfo/Universal            \
-                ${datadir}/zoneinfo/W-SU                 \
-                ${datadir}/zoneinfo/WET                  \
-                ${datadir}/zoneinfo/Zulu                 \
-                ${datadir}/zoneinfo/zone.tab             \
-                ${datadir}/zoneinfo/zone1970.tab         \
-                ${datadir}/zoneinfo/iso3166.tab          \
-                ${datadir}/zoneinfo/Etc/*"
-
-CONFFILES_${PN} += "${@ "${sysconfdir}/timezone" if bb.utils.to_boolean(d.getVar('INSTALL_TIMEZONE_FILE')) else "" }"
-CONFFILES_${PN} += "${sysconfdir}/localtime"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/tzdata/tzdata_2018c.bb b/import-layers/yocto-poky/meta/recipes-extended/tzdata/tzdata_2018c.bb
new file mode 100644
index 0000000..ff5ec1c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-extended/tzdata/tzdata_2018c.bb
@@ -0,0 +1,215 @@
+SUMMARY = "Timezone data"
+HOMEPAGE = "http://www.iana.org/time-zones"
+SECTION = "base"
+LICENSE = "PD & BSD & BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=c679c9d6b02bc2757b3eaf8f53c43fba"
+
+DEPENDS = "tzcode-native"
+
+SRC_URI = "http://www.iana.org/time-zones/repository/releases/tzdata${PV}.tar.gz;name=tzdata"
+UPSTREAM_CHECK_URI = "http://www.iana.org/time-zones"
+
+SRC_URI[tzdata.md5sum] = "c412b1531adef1be7a645ab734f86acc"
+SRC_URI[tzdata.sha256sum] = "2825c3e4b7ef520f24d393bcc02942f9762ffd3e7fc9b23850789ed8f22933f6"
+
+inherit allarch
+
+RCONFLICTS_${PN} = "timezones timezone-africa timezone-america timezone-antarctica \
+             timezone-arctic timezone-asia timezone-atlantic \
+             timezone-australia timezone-europe timezone-indian \
+             timezone-iso3166.tab timezone-pacific timezone-zone.tab"
+
+S = "${WORKDIR}"
+
+DEFAULT_TIMEZONE ?= "Universal"
+INSTALL_TIMEZONE_FILE ?= "1"
+
+TZONES= "africa antarctica asia australasia europe northamerica southamerica  \
+         factory etcetera backward systemv \
+        "
+# pacificnew 
+
+do_compile () {
+        for zone in ${TZONES}; do \
+            ${STAGING_BINDIR_NATIVE}/zic -d ${WORKDIR}${datadir}/zoneinfo -L /dev/null \
+                -y ${S}/yearistype.sh ${S}/${zone} ; \
+            ${STAGING_BINDIR_NATIVE}/zic -d ${WORKDIR}${datadir}/zoneinfo/posix -L /dev/null \
+                -y ${S}/yearistype.sh ${S}/${zone} ; \
+            ${STAGING_BINDIR_NATIVE}/zic -d ${WORKDIR}${datadir}/zoneinfo/right -L ${S}/leapseconds \
+                -y ${S}/yearistype.sh ${S}/${zone} ; \
+        done
+}
+
+do_install () {
+        install -d ${D}/$exec_prefix ${D}${datadir}/zoneinfo
+        cp -pPR ${S}/$exec_prefix ${D}/
+        # libc is removing zoneinfo files from package
+        cp -pP "${S}/zone.tab" ${D}${datadir}/zoneinfo
+        cp -pP "${S}/zone1970.tab" ${D}${datadir}/zoneinfo
+        cp -pP "${S}/iso3166.tab" ${D}${datadir}/zoneinfo
+
+        # Install default timezone
+        if [ -e ${D}${datadir}/zoneinfo/${DEFAULT_TIMEZONE} ]; then
+            install -d ${D}${sysconfdir}
+            if [ "${INSTALL_TIMEZONE_FILE}" = "1" ]; then
+                echo ${DEFAULT_TIMEZONE} > ${D}${sysconfdir}/timezone
+            fi
+            ln -s ${datadir}/zoneinfo/${DEFAULT_TIMEZONE} ${D}${sysconfdir}/localtime
+        else
+            bberror "DEFAULT_TIMEZONE is set to an invalid value."
+            exit 1
+        fi
+
+        chown -R root:root ${D}
+}
+
+pkg_postinst_${PN} () {
+	etc_lt="$D${sysconfdir}/localtime"
+	src="$D${sysconfdir}/timezone"
+
+	if [ -e ${src} ] ; then
+		tz=$(sed -e 's:#.*::' -e 's:[[:space:]]*::g' -e '/^$/d' "${src}")
+	fi
+	
+	if [ -z "${tz}" ] ; then
+		exit 0
+	fi
+	
+	if [ ! -e "$D${datadir}/zoneinfo/${tz}" ] ; then
+		echo "You have an invalid TIMEZONE setting in ${src}"
+		echo "Your ${etc_lt} has been reset to Universal; enjoy!"
+		tz="Universal"
+		echo "Updating ${etc_lt} with $D${datadir}/zoneinfo/${tz}"
+		if [ -L ${etc_lt} ] ; then
+			rm -f "${etc_lt}"
+		fi
+		ln -s "${datadir}/zoneinfo/${tz}" "${etc_lt}"
+	fi
+}
+
+# Packages primarily organized by directory with a major city
+# in most time zones in the base package
+
+PACKAGES = "tzdata tzdata-misc tzdata-posix tzdata-right tzdata-africa \
+    tzdata-americas tzdata-antarctica tzdata-arctic tzdata-asia \
+    tzdata-atlantic tzdata-australia tzdata-europe tzdata-pacific"
+
+FILES_tzdata-africa += "${datadir}/zoneinfo/Africa/*"
+RPROVIDES_tzdata-africa = "tzdata-africa"
+
+FILES_tzdata-americas += "${datadir}/zoneinfo/America/*  \
+                ${datadir}/zoneinfo/US/*                \
+                ${datadir}/zoneinfo/Brazil/*            \
+                ${datadir}/zoneinfo/Canada/*            \
+                ${datadir}/zoneinfo/Mexico/*            \
+                ${datadir}/zoneinfo/Chile/*"
+RPROVIDES_tzdata-americas = "tzdata-americas"
+
+FILES_tzdata-antarctica += "${datadir}/zoneinfo/Antarctica/*"
+RPROVIDES_tzdata-antarctica = "tzdata-antarctica"
+
+FILES_tzdata-arctic += "${datadir}/zoneinfo/Arctic/*"
+RPROVIDES_tzdata-arctic = "tzdata-arctic"
+
+FILES_tzdata-asia += "${datadir}/zoneinfo/Asia/*        \
+                ${datadir}/zoneinfo/Indian/*            \
+                ${datadir}/zoneinfo/Mideast/*"
+RPROVIDES_tzdata-asia = "tzdata-asia"
+
+FILES_tzdata-atlantic += "${datadir}/zoneinfo/Atlantic/*"
+RPROVIDES_tzdata-atlantic = "tzdata-atlantic"
+
+FILES_tzdata-australia += "${datadir}/zoneinfo/Australia/*"
+RPROVIDES_tzdata-australia = "tzdata-australia"
+
+FILES_tzdata-europe += "${datadir}/zoneinfo/Europe/*"
+RPROVIDES_tzdata-europe = "tzdata-europe"
+
+FILES_tzdata-pacific += "${datadir}/zoneinfo/Pacific/*"
+RPROVIDES_tzdata-pacific = "tzdata-pacific"
+
+FILES_tzdata-posix += "${datadir}/zoneinfo/posix/*"
+RPROVIDES_tzdata-posix = "tzdata-posix"
+
+FILES_tzdata-right += "${datadir}/zoneinfo/right/*"
+RPROVIDES_tzdata-right = "tzdata-right"
+
+
+FILES_tzdata-misc += "${datadir}/zoneinfo/Cuba           \
+                ${datadir}/zoneinfo/Egypt                \
+                ${datadir}/zoneinfo/Eire                 \
+                ${datadir}/zoneinfo/Factory              \
+                ${datadir}/zoneinfo/GB-Eire              \
+                ${datadir}/zoneinfo/Hongkong             \
+                ${datadir}/zoneinfo/Iceland              \
+                ${datadir}/zoneinfo/Iran                 \
+                ${datadir}/zoneinfo/Israel               \
+                ${datadir}/zoneinfo/Jamaica              \
+                ${datadir}/zoneinfo/Japan                \
+                ${datadir}/zoneinfo/Kwajalein            \
+                ${datadir}/zoneinfo/Libya                \
+                ${datadir}/zoneinfo/Navajo               \
+                ${datadir}/zoneinfo/Poland               \
+                ${datadir}/zoneinfo/Portugal             \
+                ${datadir}/zoneinfo/Singapore            \
+                ${datadir}/zoneinfo/Turkey"
+RPROVIDES_tzdata-misc = "tzdata-misc"
+
+
+FILES_${PN} += "${datadir}/zoneinfo/Pacific/Honolulu     \
+                ${datadir}/zoneinfo/America/Anchorage    \
+                ${datadir}/zoneinfo/America/Los_Angeles  \
+                ${datadir}/zoneinfo/America/Denver       \
+                ${datadir}/zoneinfo/America/Chicago      \
+                ${datadir}/zoneinfo/America/New_York     \
+                ${datadir}/zoneinfo/America/Caracas      \
+                ${datadir}/zoneinfo/America/Sao_Paulo    \
+                ${datadir}/zoneinfo/Europe/London        \
+                ${datadir}/zoneinfo/Europe/Paris         \
+                ${datadir}/zoneinfo/Africa/Cairo         \
+                ${datadir}/zoneinfo/Europe/Moscow        \
+                ${datadir}/zoneinfo/Asia/Dubai           \
+                ${datadir}/zoneinfo/Asia/Karachi         \
+                ${datadir}/zoneinfo/Asia/Dhaka           \
+                ${datadir}/zoneinfo/Asia/Bankok          \
+                ${datadir}/zoneinfo/Asia/Hong_Kong       \
+                ${datadir}/zoneinfo/Asia/Tokyo           \
+                ${datadir}/zoneinfo/Australia/Darwin     \
+                ${datadir}/zoneinfo/Australia/Adelaide   \
+                ${datadir}/zoneinfo/Australia/Brisbane   \
+                ${datadir}/zoneinfo/Australia/Sydney     \
+                ${datadir}/zoneinfo/Pacific/Noumea       \
+                ${datadir}/zoneinfo/CET                  \
+                ${datadir}/zoneinfo/CST6CDT              \
+                ${datadir}/zoneinfo/EET                  \
+                ${datadir}/zoneinfo/EST                  \
+                ${datadir}/zoneinfo/EST5EDT              \
+                ${datadir}/zoneinfo/GB                   \
+                ${datadir}/zoneinfo/GMT                  \
+                ${datadir}/zoneinfo/GMT+0                \
+                ${datadir}/zoneinfo/GMT-0                \
+                ${datadir}/zoneinfo/GMT0                 \
+                ${datadir}/zoneinfo/Greenwich            \
+                ${datadir}/zoneinfo/HST                  \
+                ${datadir}/zoneinfo/MET                  \
+                ${datadir}/zoneinfo/MST                  \
+                ${datadir}/zoneinfo/MST7MDT              \
+                ${datadir}/zoneinfo/NZ                   \
+                ${datadir}/zoneinfo/NZ-CHAT              \
+                ${datadir}/zoneinfo/PRC                  \
+                ${datadir}/zoneinfo/PST8PDT              \
+                ${datadir}/zoneinfo/ROC                  \
+                ${datadir}/zoneinfo/ROK                  \
+                ${datadir}/zoneinfo/UCT                  \
+                ${datadir}/zoneinfo/UTC                  \
+                ${datadir}/zoneinfo/Universal            \
+                ${datadir}/zoneinfo/W-SU                 \
+                ${datadir}/zoneinfo/WET                  \
+                ${datadir}/zoneinfo/Zulu                 \
+                ${datadir}/zoneinfo/zone.tab             \
+                ${datadir}/zoneinfo/zone1970.tab         \
+                ${datadir}/zoneinfo/iso3166.tab          \
+                ${datadir}/zoneinfo/Etc/*"
+
+CONFFILES_${PN} += "${@ "${sysconfdir}/timezone" if bb.utils.to_boolean(d.getVar('INSTALL_TIMEZONE_FILE')) else "" }"
+CONFFILES_${PN} += "${sysconfdir}/localtime"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/unzip/unzip_6.0.bb b/import-layers/yocto-poky/meta/recipes-extended/unzip/unzip_6.0.bb
index d4ee487..105d048 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/unzip/unzip_6.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/unzip/unzip_6.0.bb
@@ -20,6 +20,7 @@
 	file://18-cve-2014-9913-unzip-buffer-overflow.patch \
 	file://19-cve-2016-9844-zipinfo-buffer-overflow.patch \
 "
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 SRC_URI[md5sum] = "62b490407489521db863b523a7f86375"
 SRC_URI[sha256sum] = "036d96991646d0449ed0aa952e4fbe21b476ce994abc276e49d30e686708bd37"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/watchdog/watchdog_5.15.bb b/import-layers/yocto-poky/meta/recipes-extended/watchdog/watchdog_5.15.bb
index 9659f27..3d0b72e 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/watchdog/watchdog_5.15.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/watchdog/watchdog_5.15.bb
@@ -21,8 +21,7 @@
 UPSTREAM_CHECK_URI = "http://sourceforge.net/projects/watchdog/files/watchdog/"
 UPSTREAM_CHECK_REGEX = "/watchdog/(?P<pver>(\d+[\.\-_]*)+)/"
 
-inherit autotools
-inherit update-rc.d
+inherit autotools update-rc.d systemd
 
 DEPENDS_append_libc-musl = " libtirpc "
 CFLAGS_append_libc-musl = " -I${STAGING_INCDIR}/tirpc "
@@ -37,12 +36,18 @@
 INITSCRIPT_NAME_${PN}-keepalive = "wd_keepalive"
 INITSCRIPT_PARAMS_${PN}-keepalive = "start 15 1 2 3 4 5 . stop 85 0 6 ."
 
-do_install_append() {
-	install -D ${S}/redhat/watchdog.init ${D}/${sysconfdir}/init.d/watchdog.sh
-    install -Dm 0755 ${WORKDIR}/wd_keepalive.init ${D}${sysconfdir}/init.d/wd_keepalive
+SYSTEMD_SERVICE_${PN} = "watchdog.service wd_keepalive.service"
 
-    # watchdog.conf is provided by the watchdog-config recipe
-    rm ${D}${sysconfdir}/watchdog.conf
+do_install_append() {
+	install -d ${D}${systemd_system_unitdir}
+	install -m 0644 ${S}/debian/watchdog.service ${D}${systemd_system_unitdir}
+	install -m 0644 ${S}/debian/wd_keepalive.service ${D}${systemd_system_unitdir}
+
+	install -D ${S}/redhat/watchdog.init ${D}/${sysconfdir}/init.d/watchdog.sh
+	install -Dm 0755 ${WORKDIR}/wd_keepalive.init ${D}${sysconfdir}/init.d/wd_keepalive
+
+	# watchdog.conf is provided by the watchdog-config recipe
+	rm ${D}${sysconfdir}/watchdog.conf
 }
 
 PACKAGES =+ "${PN}-keepalive"
diff --git a/import-layers/yocto-poky/meta/recipes-extended/xdg-utils/xdg-utils_1.1.1.bb b/import-layers/yocto-poky/meta/recipes-extended/xdg-utils/xdg-utils_1.1.1.bb
index 5007498..34f8d10 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/xdg-utils/xdg-utils_1.1.1.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/xdg-utils/xdg-utils_1.1.1.bb
@@ -1,5 +1,5 @@
 SUMMARY = "Basic desktop integration functions"
-
+HOMEPAGE = "https://www.freedesktop.org/wiki/Software/xdg-utils/"
 DESCRIPTION = "The xdg-utils package is a set of simple scripts that provide basic \
 desktop integration functions for any Free Desktop, such as Linux. \
 They are intended to provide a set of defacto standards. \
diff --git a/import-layers/yocto-poky/meta/recipes-extended/xinetd/xinetd/xinetd-CVE-2013-4342.patch b/import-layers/yocto-poky/meta/recipes-extended/xinetd/xinetd/xinetd-CVE-2013-4342.patch
index c44c5a1..852a43f 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/xinetd/xinetd/xinetd-CVE-2013-4342.patch
+++ b/import-layers/yocto-poky/meta/recipes-extended/xinetd/xinetd/xinetd-CVE-2013-4342.patch
@@ -11,6 +11,7 @@
 
 CVE: CVE-2013-4342
 Signed-off-by: Li Wang <li.wang@windriver.com>
+Upstream-Status: Backport
 ---
  xinetd/builtins.c |    2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/import-layers/yocto-poky/meta/recipes-extended/xinetd/xinetd_2.3.15.bb b/import-layers/yocto-poky/meta/recipes-extended/xinetd/xinetd_2.3.15.bb
index 6bfaabe..1beb545 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/xinetd/xinetd_2.3.15.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/xinetd/xinetd_2.3.15.bb
@@ -24,6 +24,7 @@
       file://0001-configure-Use-HAVE_SYS_RESOURCE_H-to-guard-sys-resou.patch \
       file://xinetd.service \
       "
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 SRCREV = "68bb9ab9e9f214ad8a2322f28ac1d6733e70bc24"
 
diff --git a/import-layers/yocto-poky/meta/recipes-extended/zip/zip_3.0.bb b/import-layers/yocto-poky/meta/recipes-extended/zip/zip_3.0.bb
index 087423a..de779e9 100644
--- a/import-layers/yocto-poky/meta/recipes-extended/zip/zip_3.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-extended/zip/zip_3.0.bb
@@ -11,6 +11,7 @@
 
 SRC_URI = "${SOURCEFORGE_MIRROR}/infozip/Zip%203.x%20%28latest%29/3.0/zip30.tar.gz \
            file://fix-security-format.patch"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 SRC_URI[md5sum] = "7b74551e63f8ee6aab6fbc86676c0d37"
 SRC_URI[sha256sum] = "f0e8bb1f9b7eb0b01285495a2699df3a4b766784c1765a8f1aeedf63c0806369"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/epiphany/epiphany_3.22.6.bb b/import-layers/yocto-poky/meta/recipes-gnome/epiphany/epiphany_3.22.6.bb
deleted file mode 100644
index 651fef1..0000000
--- a/import-layers/yocto-poky/meta/recipes-gnome/epiphany/epiphany_3.22.6.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-SUMMARY = "WebKit based web browser for GNOME"
-LICENSE = "GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
-
-DEPENDS = "libsoup-2.4 webkitgtk gtk+3 iso-codes avahi libnotify gcr \
-	   gsettings-desktop-schemas gnome-desktop3 libxml2-native \
-	   intltool-native glib-2.0 glib-2.0-native"
-
-inherit gnomebase gsettings distro_features_check upstream-version-is-even gettext
-REQUIRED_DISTRO_FEATURES = "x11"
-
-SRC_URI += "file://0001-yelp.m4-drop-the-check-for-itstool.patch"
-SRC_URI[archive.md5sum] = "e08762c6bb01c4d291b3d22c7adb1a65"
-SRC_URI[archive.sha256sum] = "de7ea87dc450702bde620033f9e2ce807859727d007396d86b09f2b82397fcc2"
-
-EXTRA_OECONF += " --with-distributor-name=${DISTRO}"
-
-do_configure_prepend() {
-    sed -i -e s:help::g ${S}/Makefile.am
-}
-
-FILES_${PN} += "${datadir}/appdata ${datadir}/dbus-1 ${datadir}/gnome-shell/search-providers"
-RDEPENDS_${PN} = "iso-codes adwaita-icon-theme"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/epiphany/epiphany_3.24.3.bb b/import-layers/yocto-poky/meta/recipes-gnome/epiphany/epiphany_3.24.3.bb
new file mode 100644
index 0000000..c507d23
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/epiphany/epiphany_3.24.3.bb
@@ -0,0 +1,25 @@
+SUMMARY = "WebKit based web browser for GNOME"
+LICENSE = "GPLv3+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504"
+
+DEPENDS = "libsoup-2.4 webkitgtk gtk+3 iso-codes avahi libnotify gcr \
+	   gsettings-desktop-schemas gnome-desktop3 libxml2-native \
+	   glib-2.0 glib-2.0-native json-glib"
+
+inherit gnomebase gsettings distro_features_check upstream-version-is-even gettext
+REQUIRED_DISTRO_FEATURES = "x11"
+
+SRC_URI += "file://0001-yelp.m4-drop-the-check-for-itstool.patch \
+            file://0001-bookmarks-Check-for-return-value-of-fread.patch \
+           "
+SRC_URI[archive.md5sum] = "c0221aec6a08935e6854eaa9de9451ef"
+SRC_URI[archive.sha256sum] = "fef51676310d9f37e18c9b2d778254232eb17cccd988c2d1ecf42c7b2963a154"
+
+EXTRA_OECONF += " --with-distributor-name=${DISTRO} --enable-debug=no"
+
+do_configure_prepend() {
+    sed -i -e s:help::g ${S}/Makefile.am
+}
+
+FILES_${PN} += "${datadir}/dbus-1 ${datadir}/gnome-shell/search-providers"
+RDEPENDS_${PN} = "iso-codes adwaita-icon-theme"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/epiphany/files/0001-bookmarks-Check-for-return-value-of-fread.patch b/import-layers/yocto-poky/meta/recipes-gnome/epiphany/files/0001-bookmarks-Check-for-return-value-of-fread.patch
new file mode 100644
index 0000000..ddcd394
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/epiphany/files/0001-bookmarks-Check-for-return-value-of-fread.patch
@@ -0,0 +1,32 @@
+From aa2176be32eed2578da82f34d31148f934c11c34 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 28 Jun 2017 17:03:45 -0700
+Subject: [PATCH] bookmarks: Check for return value of fread()
+
+Fixes below compiler error
+ignoring return value of 'fread', declared with attribute warn_unused_result
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ src/bookmarks/ephy-bookmark.c | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/src/bookmarks/ephy-bookmark.c b/src/bookmarks/ephy-bookmark.c
+index ff0239b..8633ce4 100644
+--- a/src/bookmarks/ephy-bookmark.c
++++ b/src/bookmarks/ephy-bookmark.c
+@@ -217,7 +217,8 @@ ephy_bookmark_init (EphyBookmark *self)
+   bytes = g_malloc (num_bytes);
+ 
+   fp = fopen ("/dev/urandom", "r");
+-  fread (bytes, sizeof (guint8), num_bytes, fp);
++  if (fread (bytes, sizeof (guint8), num_bytes, fp) != num_bytes)
++    g_warning("Unable to read data from /dev/urandom\n");
+ 
+   self->id = g_malloc0 (ID_LEN + 1);
+   for (gsize i = 0; i < num_bytes; i++) {
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gcr/gcr_3.20.0.bb b/import-layers/yocto-poky/meta/recipes-gnome/gcr/gcr_3.20.0.bb
index f31abce..4450e15 100644
--- a/import-layers/yocto-poky/meta/recipes-gnome/gcr/gcr_3.20.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gcr/gcr_3.20.0.bb
@@ -5,7 +5,8 @@
 LICENSE = "GPLv2"
 LIC_FILES_CHKSUM = "file://COPYING;md5=55ca817ccb7d5b5b66355690e9abc605"
 
-DEPENDS = "intltool-native gtk+3 p11-kit glib-2.0 libgcrypt"
+DEPENDS = "intltool-native gtk+3 p11-kit glib-2.0 libgcrypt \
+           ${@bb.utils.contains('GI_DATA_ENABLED', 'True', 'libxslt-native', '', d)}"
 
 inherit autotools gnomebase gtk-icon-cache gtk-doc distro_features_check upstream-version-is-even vala gobject-introspection
 # depends on gtk+3, but also x11 through gtk+-x11
@@ -23,16 +24,3 @@
 
 # http://errors.yoctoproject.org/Errors/Details/20229/
 ARM_INSTRUCTION_SET = "arm"
-
-# on x86-64 the introspection binary goes into 
-# an infinite loop under qemu during compilation, 
-# printing the following:
-# 
-# gcrypt-Message: select() error: Bad address
-#
-# gcrypt-Message: select() error: Bad address
-#
-# gcrypt-Message: select() error: Bad address
-#
-# This will be investigated later.
-EXTRA_OECONF_append_x86-64 = " --disable-introspection --disable-gtk-doc"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/0001-queryloaders-Make-output-more-reproducible.patch b/import-layers/yocto-poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/0001-queryloaders-Make-output-more-reproducible.patch
new file mode 100644
index 0000000..aa21419
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf/0001-queryloaders-Make-output-more-reproducible.patch
@@ -0,0 +1,56 @@
+From 1049fbd887e52f94afeb03fc7942c01c143ebdfc Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Fri, 9 Jun 2017 12:01:25 +0300
+Subject: [PATCH] queryloaders: Make output more reproducible
+
+Reproducible builds are good: Sort the output by module name so that
+same input always leads to same output.
+
+This should also make gdk-pixbuf-print-mime-types output and
+gdk-pixbuf-thumbnailer.thumbnailer reproducible.
+
+https://bugzilla.gnome.org/show_bug.cgi?id=783592
+
+Upstream-Status: Submitted
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+---
+ gdk-pixbuf/queryloaders.c | 12 ++++++++++--
+ 1 file changed, 10 insertions(+), 2 deletions(-)
+
+diff --git a/gdk-pixbuf/queryloaders.c b/gdk-pixbuf/queryloaders.c
+index 395674a..4ac9b28 100644
+--- a/gdk-pixbuf/queryloaders.c
++++ b/gdk-pixbuf/queryloaders.c
+@@ -346,6 +346,7 @@ int main (int argc, char **argv)
+ #ifdef USE_GMODULE
+                 const char *path;
+                 GDir *dir;
++                GList *l, *modules = NULL;
+ 
+                 path = g_getenv ("GDK_PIXBUF_MODULEDIR");
+ #ifdef G_OS_WIN32
+@@ -365,12 +366,19 @@ int main (int argc, char **argv)
+                                 gint len = strlen (dent);
+                                 if (len > SOEXT_LEN &&
+                                     strcmp (dent + len - SOEXT_LEN, SOEXT) == 0) {
++                                        modules = g_list_prepend (modules,
++                                                                  g_strdup (dent));
+-                                        if (!query_module (contents, path, dent))
+-                                                success = FALSE;
+                                 }
+                         }
+                         g_dir_close (dir);
+                 }
++
++                modules = g_list_sort (modules, (GCompareFunc)strcmp);
++                for (l = modules; l != NULL; l = l->next)
++                        if (!query_module (contents, path, l->data))
++                                success = FALSE;
++
++                g_list_free_full (modules, g_free);
+ #else
+                 g_string_append_printf (contents, "# dynamic loading of modules not supported\n");
+ #endif
+-- 
+2.1.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.36.5.bb b/import-layers/yocto-poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.36.5.bb
deleted file mode 100644
index 7da6d16..0000000
--- a/import-layers/yocto-poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.36.5.bb
+++ /dev/null
@@ -1,99 +0,0 @@
-SUMMARY = "Image loading library for GTK+"
-HOMEPAGE = "http://www.gtk.org/"
-BUGTRACKER = "https://bugzilla.gnome.org/"
-
-LICENSE = "LGPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=3bf50002aefd002f49e7bb854063f7e7 \
-                    file://gdk-pixbuf/gdk-pixbuf.h;endline=26;md5=72b39da7cbdde2e665329fef618e1d6b"
-
-SECTION = "libs"
-
-DEPENDS = "glib-2.0 gdk-pixbuf-native"
-
-MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
-
-SRC_URI = "${GNOME_MIRROR}/${BPN}/${MAJ_VER}/${BPN}-${PV}.tar.xz \
-           file://hardcoded_libtool.patch \
-           file://extending-libinstall-dependencies.patch \
-           file://run-ptest \
-           file://fatal-loader.patch \
-           file://0001-Work-around-thumbnailer-cross-compile-failure.patch \
-           "
-SRC_URI[md5sum] = "0173fd5c11a5d2030d09201090636477"
-SRC_URI[sha256sum] = "7ace06170291a1f21771552768bace072ecdea9bd4a02f7658939b9a314c40fc"
-
-inherit autotools pkgconfig gettext pixbufcache ptest-gnome upstream-version-is-even gobject-introspection gtk-doc lib_package
-
-LIBV = "2.10.0"
-
-GDK_PIXBUF_LOADERS ?= "png jpeg"
-
-PACKAGECONFIG ??= "${GDK_PIXBUF_LOADERS}"
-PACKAGECONFIG_linuxstdbase = "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)} ${GDK_PIXBUF_LOADERS}"
-PACKAGECONFIG_class-native = "${GDK_PIXBUF_LOADERS}"
-
-PACKAGECONFIG[png] = "--with-libpng,--without-libpng,libpng"
-PACKAGECONFIG[jpeg] = "--with-libjpeg,--without-libjpeg,jpeg"
-PACKAGECONFIG[tiff] = "--with-libtiff,--without-libtiff,tiff"
-PACKAGECONFIG[jpeg2000] = "--with-libjasper,--without-libjasper,jasper"
-
-# Use GIO to sniff image format instead of trying all loaders
-PACKAGECONFIG[gio-sniff] = "--enable-gio-sniffing,--disable-gio-sniffing,,shared-mime-info"
-PACKAGECONFIG[x11] = "--with-x11,--without-x11,virtual/libx11"
-
-PACKAGES =+ "${PN}-xlib"
-
-FILES_${PN}-xlib = "${libdir}/*pixbuf_xlib*${SOLIBS}"
-ALLOW_EMPTY_${PN}-xlib = "1"
-
-FILES_${PN} += "${libdir}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders"
-
-FILES_${PN}-bin += "${datadir}/thumbnailers/gdk-pixbuf-thumbnailer.thumbnailer"
-
-FILES_${PN}-dev += " \
-	${bindir}/gdk-pixbuf-csource \
-	${bindir}/gdk-pixbuf-pixdata \
-        ${bindir}/gdk-pixbuf-print-mime-types \
-	${includedir}/* \
-	${libdir}/gdk-pixbuf-2.0/${LIBV}/loaders/*.la \
-"
-
-PACKAGES_DYNAMIC += "^gdk-pixbuf-loader-.*"
-PACKAGES_DYNAMIC_class-native = ""
-
-python populate_packages_prepend () {
-    postinst_pixbufloader = d.getVar("postinst_pixbufloader")
-
-    loaders_root = d.expand('${libdir}/gdk-pixbuf-2.0/${LIBV}/loaders')
-
-    packages = ' '.join(do_split_packages(d, loaders_root, '^libpixbufloader-(.*)\.so$', 'gdk-pixbuf-loader-%s', 'GDK pixbuf loader for %s'))
-    d.setVar('PIXBUF_PACKAGES', packages)
-
-    # The test suite exercises all the loaders, so ensure they are all
-    # dependencies of the ptest package.
-    d.appendVar("RDEPENDS_gdk-pixbuf-ptest", " " + packages)
-}
-
-do_install_append() {
-	# Move gdk-pixbuf-query-loaders into libdir so it is always available
-	# in multilib builds.
-	mv ${D}/${bindir}/gdk-pixbuf-query-loaders ${D}/${libdir}/gdk-pixbuf-2.0/
-}
-
-do_install_append_class-native() {
-	find ${D}${libdir} -name "libpixbufloader-*.la" -exec rm \{\} \;
-
-	create_wrapper ${D}/${bindir}/gdk-pixbuf-csource \
-		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache
-
-	create_wrapper ${D}/${bindir}/gdk-pixbuf-pixdata \
-		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache
-
-	create_wrapper ${D}/${bindir}/gdk-pixbuf-print-mime-types \
-		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache
-
-	create_wrapper ${D}/${libdir}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders \
-		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache \
-		GDK_PIXBUF_MODULEDIR=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders
-}
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.36.8.bb b/import-layers/yocto-poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.36.8.bb
new file mode 100644
index 0000000..8c35904
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gdk-pixbuf/gdk-pixbuf_2.36.8.bb
@@ -0,0 +1,102 @@
+SUMMARY = "Image loading library for GTK+"
+HOMEPAGE = "http://www.gtk.org/"
+BUGTRACKER = "https://bugzilla.gnome.org/"
+
+LICENSE = "LGPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=3bf50002aefd002f49e7bb854063f7e7 \
+                    file://gdk-pixbuf/gdk-pixbuf.h;endline=26;md5=72b39da7cbdde2e665329fef618e1d6b"
+
+SECTION = "libs"
+
+DEPENDS = "glib-2.0 gdk-pixbuf-native shared-mime-info"
+
+MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
+
+SRC_URI = "${GNOME_MIRROR}/${BPN}/${MAJ_VER}/${BPN}-${PV}.tar.xz \
+           file://hardcoded_libtool.patch \
+           file://extending-libinstall-dependencies.patch \
+           file://run-ptest \
+           file://fatal-loader.patch \
+           file://0001-Work-around-thumbnailer-cross-compile-failure.patch \
+           file://0001-queryloaders-Make-output-more-reproducible.patch \
+           "
+
+SRC_URI[md5sum] = "e0aaa0061eb12667b32b27472230b962"
+SRC_URI[sha256sum] = "5d68e5283cdc0bf9bda99c3e6a1d52ad07a03364fa186b6c26cfc86fcd396a19"
+
+inherit autotools pkgconfig gettext pixbufcache ptest-gnome upstream-version-is-even gobject-introspection gtk-doc lib_package
+
+LIBV = "2.10.0"
+
+GDK_PIXBUF_LOADERS ?= "png jpeg"
+
+PACKAGECONFIG ??= "${GDK_PIXBUF_LOADERS}"
+PACKAGECONFIG_linuxstdbase = "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)} ${GDK_PIXBUF_LOADERS}"
+PACKAGECONFIG_class-native = "${GDK_PIXBUF_LOADERS}"
+
+PACKAGECONFIG[png] = "--with-libpng,--without-libpng,libpng"
+PACKAGECONFIG[jpeg] = "--with-libjpeg,--without-libjpeg,jpeg"
+PACKAGECONFIG[tiff] = "--with-libtiff,--without-libtiff,tiff"
+PACKAGECONFIG[jpeg2000] = "--with-libjasper,--without-libjasper,jasper"
+
+PACKAGECONFIG[x11] = "--with-x11,--without-x11,virtual/libx11"
+
+PACKAGES =+ "${PN}-xlib"
+
+# For GIO image type sniffing
+RDEPENDS_${PN} = "shared-mime-info"
+
+FILES_${PN}-xlib = "${libdir}/*pixbuf_xlib*${SOLIBS}"
+ALLOW_EMPTY_${PN}-xlib = "1"
+
+FILES_${PN} += "${libdir}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders"
+
+FILES_${PN}-bin += "${datadir}/thumbnailers/gdk-pixbuf-thumbnailer.thumbnailer"
+
+FILES_${PN}-dev += " \
+	${bindir}/gdk-pixbuf-csource \
+	${bindir}/gdk-pixbuf-pixdata \
+        ${bindir}/gdk-pixbuf-print-mime-types \
+	${includedir}/* \
+	${libdir}/gdk-pixbuf-2.0/${LIBV}/loaders/*.la \
+"
+
+PACKAGES_DYNAMIC += "^gdk-pixbuf-loader-.*"
+PACKAGES_DYNAMIC_class-native = ""
+
+python populate_packages_prepend () {
+    postinst_pixbufloader = d.getVar("postinst_pixbufloader")
+
+    loaders_root = d.expand('${libdir}/gdk-pixbuf-2.0/${LIBV}/loaders')
+
+    packages = ' '.join(do_split_packages(d, loaders_root, '^libpixbufloader-(.*)\.so$', 'gdk-pixbuf-loader-%s', 'GDK pixbuf loader for %s'))
+    d.setVar('PIXBUF_PACKAGES', packages)
+
+    # The test suite exercises all the loaders, so ensure they are all
+    # dependencies of the ptest package.
+    d.appendVar("RDEPENDS_%s-ptest" % d.getVar('PN'), " " + packages)
+}
+
+do_install_append() {
+	# Move gdk-pixbuf-query-loaders into libdir so it is always available
+	# in multilib builds.
+	mv ${D}/${bindir}/gdk-pixbuf-query-loaders ${D}/${libdir}/gdk-pixbuf-2.0/
+}
+
+do_install_append_class-native() {
+	find ${D}${libdir} -name "libpixbufloader-*.la" -exec rm \{\} \;
+
+	create_wrapper ${D}/${bindir}/gdk-pixbuf-csource \
+		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache
+
+	create_wrapper ${D}/${bindir}/gdk-pixbuf-pixdata \
+		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache
+
+	create_wrapper ${D}/${bindir}/gdk-pixbuf-print-mime-types \
+		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache
+
+	create_wrapper ${D}/${libdir}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders \
+		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders.cache \
+		GDK_PIXBUF_MODULEDIR=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/${LIBV}/loaders
+}
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gnome-desktop/gnome-desktop/0001-configure.ac-Remove-gnome-common-macro-calls.patch b/import-layers/yocto-poky/meta/recipes-gnome/gnome-desktop/gnome-desktop/0001-configure.ac-Remove-gnome-common-macro-calls.patch
new file mode 100644
index 0000000..e95393c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gnome-desktop/gnome-desktop/0001-configure.ac-Remove-gnome-common-macro-calls.patch
@@ -0,0 +1,33 @@
+From 834bc861921fe0361f2d6a5b5716fc97a9519478 Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Thu, 6 Jul 2017 13:13:45 +0300
+Subject: [PATCH] configure.ac: Remove gnome-common macro calls
+
+gnome-common is deprecated and these aren't doing much for us.
+
+Upstreamable fix would probably involve using autoconf-archive:
+Trying to avoid that dependency for now.
+
+Upstream-Status: Inappropriate
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+---
+ configure.ac | 3 ---
+ 1 file changed, 3 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index 7adcf0e..bb7659d 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -71,9 +71,6 @@ AC_SUBST(GNOME_DATE)
+ AC_SUBST(GNOME_DATE_COMMENT_START)
+ AC_SUBST(GNOME_DATE_COMMENT_END)
+ 
+-GNOME_COMPILE_WARNINGS([maximum])
+-GNOME_MAINTAINER_MODE_DEFINES
+-
+ AC_ARG_ENABLE(deprecation_flags,
+               [AC_HELP_STRING([--enable-deprecation-flags],
+                               [use *_DISABLE_DEPRECATED flags @<:@default=no@:>@])],,
+-- 
+2.1.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gnome-desktop/gnome-desktop3_3.22.2.bb b/import-layers/yocto-poky/meta/recipes-gnome/gnome-desktop/gnome-desktop3_3.22.2.bb
deleted file mode 100644
index e72c6ce..0000000
--- a/import-layers/yocto-poky/meta/recipes-gnome/gnome-desktop/gnome-desktop3_3.22.2.bb
+++ /dev/null
@@ -1,25 +0,0 @@
-SUMMARY = "GNOME library for reading .desktop files"
-SECTION = "x11/gnome"
-LICENSE = "GPLv2 & LGPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://COPYING.LIB;md5=5f30f0716dfdd0d91eb439ebec522ec2"
-
-BPN = "gnome-desktop"
-
-inherit gnome pkgconfig upstream-version-is-even gobject-introspection
-SRC_URI[archive.md5sum] = "3d7222d5305f3db022eca31d8108e02d"
-SRC_URI[archive.sha256sum] = "51d7ebf7a6c359be14c3dd7a022213e931484653815eb10b0131bef4c8979e1c"
-
-SRC_URI += "file://gnome-desktop-thumbnail-don-t-convert-time_t-to-long.patch"
-
-DEPENDS += "intltool-native gnome-common-native gsettings-desktop-schemas gconf virtual/libx11 gtk+3 glib-2.0 startup-notification xkeyboard-config iso-codes udev"
-
-inherit distro_features_check gtk-doc
-REQUIRED_DISTRO_FEATURES = "x11"
-
-EXTRA_OECONF = "--disable-desktop-docs"
-
-PACKAGES =+ "libgnome-desktop3"
-FILES_libgnome-desktop3 = "${libdir}/lib*${SOLIBS} ${datadir}/libgnome-desktop*/pnp.ids ${datadir}/gnome/*xml"
-
-RRECOMMENDS_libgnome-desktop3 += "gsettings-desktop-schemas"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gnome-desktop/gnome-desktop3_3.24.2.bb b/import-layers/yocto-poky/meta/recipes-gnome/gnome-desktop/gnome-desktop3_3.24.2.bb
new file mode 100644
index 0000000..5c1c213
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gnome-desktop/gnome-desktop3_3.24.2.bb
@@ -0,0 +1,27 @@
+SUMMARY = "GNOME library for reading .desktop files"
+SECTION = "x11/gnome"
+LICENSE = "GPLv2 & LGPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://COPYING.LIB;md5=5f30f0716dfdd0d91eb439ebec522ec2"
+
+BPN = "gnome-desktop"
+
+inherit gnome pkgconfig upstream-version-is-even gobject-introspection
+SRC_URI[archive.md5sum] = "af7c6a243df7a335a010bdc05b34ca93"
+SRC_URI[archive.sha256sum] = "8fa1de66a6a75963bffc79b01a60434c71237d44c51beca09c0f714a032d785e"
+
+SRC_URI += "file://gnome-desktop-thumbnail-don-t-convert-time_t-to-long.patch \
+            file://0001-configure.ac-Remove-gnome-common-macro-calls.patch \
+"
+
+DEPENDS += "intltool-native gsettings-desktop-schemas gconf virtual/libx11 gtk+3 glib-2.0 startup-notification xkeyboard-config iso-codes udev"
+
+inherit distro_features_check gtk-doc
+REQUIRED_DISTRO_FEATURES = "x11"
+
+EXTRA_OECONF = "--disable-desktop-docs"
+
+PACKAGES =+ "libgnome-desktop3"
+FILES_libgnome-desktop3 = "${libdir}/lib*${SOLIBS} ${datadir}/libgnome-desktop*/pnp.ids ${datadir}/gnome/*xml"
+
+RRECOMMENDS_libgnome-desktop3 += "gsettings-desktop-schemas"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gnome/adwaita-icon-theme/0001-Don-t-use-AC_CANONICAL_HOST.patch b/import-layers/yocto-poky/meta/recipes-gnome/gnome/adwaita-icon-theme/0001-Don-t-use-AC_CANONICAL_HOST.patch
new file mode 100644
index 0000000..e7ac97b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gnome/adwaita-icon-theme/0001-Don-t-use-AC_CANONICAL_HOST.patch
@@ -0,0 +1,29 @@
+From d2b9ad8a80bf9320fe35c9aee8f52e55ebd40e06 Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Tue, 30 May 2017 14:55:49 +0300
+Subject: [PATCH] Don't use AC_CANONICAL_HOST
+
+This won't work when building allarch (and is only used to find out if
+target is windows).
+
+Upstream-Status: Inappropriate [embedded specific]
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+---
+ configure.ac | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/configure.ac b/configure.ac
+index d855b7a..6908f59 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -3,7 +3,6 @@ AC_PREREQ(2.53)
+ 
+ AC_INIT([adwaita-icon-theme], [3.24.0],
+         [http://bugzilla.gnome.org/enter_bug.cgi?product=adwaita-icon-theme])
+-AC_CANONICAL_HOST
+ AC_CONFIG_MACRO_DIR([m4])
+ AC_CONFIG_SRCDIR([index.theme.in])
+ 
+-- 
+2.1.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gnome/adwaita-icon-theme/0001-Run-installation-commands-as-shell-jobs.patch b/import-layers/yocto-poky/meta/recipes-gnome/gnome/adwaita-icon-theme/0001-Run-installation-commands-as-shell-jobs.patch
new file mode 100644
index 0000000..6c38e23
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gnome/adwaita-icon-theme/0001-Run-installation-commands-as-shell-jobs.patch
@@ -0,0 +1,82 @@
+From 8dcd73b45a660dbdc560676835ba46f495334f14 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Tue, 13 Jun 2017 18:10:06 +0300
+Subject: [PATCH] Run installation commands as shell jobs
+
+This greatly speeds up installation time on multi-core systems.
+
+Upstream-Status: Pending
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+ src/fullcolor/Makefile.am | 3 ++-
+ src/spinner/Makefile.am   | 5 +++--
+ src/symbolic/Makefile.am  | 7 ++++---
+ 3 files changed, 9 insertions(+), 6 deletions(-)
+
+diff --git a/src/fullcolor/Makefile.am b/src/fullcolor/Makefile.am
+index 1c940a5..3998ee6 100644
+--- a/src/fullcolor/Makefile.am
++++ b/src/fullcolor/Makefile.am
+@@ -9,9 +9,10 @@ install-data-local:
+ 		for file in `cd $(top_srcdir)/$(SVGOUTDIR)/$$size && find . -name "*.png"`; do \
+ 			context="`dirname $$file`"; \
+ 			$(mkdir_p) $(DESTDIR)$(themedir)/$$size/$$context; \
+-			$(install_sh_DATA) $(top_srcdir)/$(SVGOUTDIR)/$$size/$$file $(DESTDIR)$(themedir)/$$size/$$file; \
++			$(install_sh_DATA) $(top_srcdir)/$(SVGOUTDIR)/$$size/$$file $(DESTDIR)$(themedir)/$$size/$$file & \
+ 		done; \
+ 	done;
++	wait
+ 
+ ## FIXME we should add a way to remove links generated by icon mapping
+ uninstall-local:
+diff --git a/src/spinner/Makefile.am b/src/spinner/Makefile.am
+index 86f4d7c..3fae8c1 100644
+--- a/src/spinner/Makefile.am
++++ b/src/spinner/Makefile.am
+@@ -24,13 +24,14 @@ install-data-local:
+ 	      for file in `cd $(top_srcdir)/$(SVGOUTDIR)/$$size; find . -name "*.png"`; do \
+ 		      context="`dirname $$file`"; \
+ 		      $(mkdir_p) $(DESTDIR)$(themedir)/$$size/$$context; \
+-		      $(install_sh_DATA) $(top_srcdir)/$(SVGOUTDIR)/$$size/$$file $(DESTDIR)$(themedir)/$$size/$$file; \
++		      $(install_sh_DATA) $(top_srcdir)/$(SVGOUTDIR)/$$size/$$file $(DESTDIR)$(themedir)/$$size/$$file & \
+ 	      done; \
+ 	for file in `cd $(top_srcdir)/$(SVGOUTDIR)/scalable-up-to-32; find . -name "*.svg"`; do \
+ 		context="`dirname $$file`"; \
+ 		$(mkdir_p) $(DESTDIR)$(themedir)/scalable-up-to-32/$$context; \
+-		$(install_sh_DATA) $(top_srcdir)/$(SVGOUTDIR)/scalable-up-to-32/$$file $(DESTDIR)$(themedir)/scalable-up-to-32/$$file; \
++		$(install_sh_DATA) $(top_srcdir)/$(SVGOUTDIR)/scalable-up-to-32/$$file $(DESTDIR)$(themedir)/scalable-up-to-32/$$file & \
+ 	done
++	wait
+ 
+ uninstall-local:
+ 	      for file in `cd $(top_srcdir)/$(SVGOUTDIR)/scalable-up-to-32; find . -name "*.svg"`; do \
+diff --git a/src/symbolic/Makefile.am b/src/symbolic/Makefile.am
+index 24aac9b..61ba071 100644
+--- a/src/symbolic/Makefile.am
++++ b/src/symbolic/Makefile.am
+@@ -25,18 +25,19 @@ install-data-local:
+ 		for file in `cd $(top_srcdir)/$(SVGOUTDIR)/$$size; find . -name "*.png"`; do \
+ 			context="`dirname $$file`"; \
+ 			$(mkdir_p) $(DESTDIR)$(themedir)/$$size/$$context; \
+-			$(install_sh_DATA) $(top_srcdir)/$(SVGOUTDIR)/$$size/$$file $(DESTDIR)$(themedir)/$$size/$$file; \
++			$(install_sh_DATA) $(top_srcdir)/$(SVGOUTDIR)/$$size/$$file $(DESTDIR)$(themedir)/$$size/$$file & \
+ 		done; \
+ 	done
+ 	for file in `cd $(top_srcdir)/$(SVGOUTDIR)/scalable; find . -name "*.svg"`; do \
+ 		context="`dirname $$file`"; \
+ 		$(mkdir_p) $(DESTDIR)$(themedir)/scalable/$$context; \
+-		$(install_sh_DATA) $(top_srcdir)/$(SVGOUTDIR)/scalable/$$file $(DESTDIR)$(themedir)/scalable/$$file; \
++		$(install_sh_DATA) $(top_srcdir)/$(SVGOUTDIR)/scalable/$$file $(DESTDIR)$(themedir)/scalable/$$file & \
+ 		for size in $(symbolic_encode_sizes); do \
+ 			$(mkdir_p) $(DESTDIR)$(themedir)/$$size/$$context; \
+-			$(GTK_ENCODE_SYMBOLIC_SVG) $(top_srcdir)/$(SVGOUTDIR)/scalable/$$file $$size -o $(DESTDIR)$(themedir)/$$size/$$context; \
++			$(GTK_ENCODE_SYMBOLIC_SVG) $(top_srcdir)/$(SVGOUTDIR)/scalable/$$file $$size -o $(DESTDIR)$(themedir)/$$size/$$context & \
+ 		done \
+ 	done
++	wait
+ 
+ uninstall-local:
+ 	for file in `cd $(top_srcdir)/$(SVGOUTDIR)/scalable; find . -name "*.svg"`; do \
+-- 
+2.11.0
+
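The patch above relies on a simple shell idiom: launch each install command as a background job and then block on wait so the make target does not finish before the copies do. A minimal stand-alone sketch of the same pattern (file names are hypothetical, illustrative only):

    mkdir -p dest
    for f in a.png b.png c.png; do
        install -m 0644 "src/$f" "dest/$f" &   # each install runs as a background job
    done
    wait                                       # return only once every background job has finished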
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gnome/adwaita-icon-theme_3.22.0.bb b/import-layers/yocto-poky/meta/recipes-gnome/gnome/adwaita-icon-theme_3.22.0.bb
deleted file mode 100644
index 1cbc1e6..0000000
--- a/import-layers/yocto-poky/meta/recipes-gnome/gnome/adwaita-icon-theme_3.22.0.bb
+++ /dev/null
@@ -1,42 +0,0 @@
-SUMMARY = "GTK+ icon theme"
-HOMEPAGE = "http://ftp.gnome.org/pub/GNOME/sources/adwaita-icon-theme/"
-BUGTRACKER = "https://bugzilla.gnome.org/"
-SECTION = "x11/gnome"
-
-LICENSE = "LGPL-3.0 | CC-BY-SA-3.0"
-LIC_FILES_CHKSUM = "file://COPYING;md5=c84cac88e46fc07647ea07e6c24eeb7c"
-
-inherit allarch autotools pkgconfig gettext gtk-icon-cache upstream-version-is-even
-
-
-MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
-SRC_URI = "${GNOME_MIRROR}/${BPN}/${MAJ_VER}/${BPN}-${PV}.tar.xz \
-          "
-
-SRC_URI[md5sum] = "cde51d7dfcbcfa3b8cdc3e5f0df8c799"
-SRC_URI[sha256sum] = "c18bf6e26087d9819a962c77288b291efab25d0419b73d909dd771716a45dcb7"
-
-do_install_append() {
-	# Build uses gtk-encode-symbolic-svg to create png versions:
-        # no need to store the svgs anymore.
-	rm -f ${D}${prefix}/share/icons/Adwaita/scalable/*/*-symbolic.svg \
-	      ${D}${prefix}/share/icons/Adwaita/scalable/*/*-symbolic-rtl.svg
-}
-
-PACKAGES = "${PN}-cursors ${PN}-symbolic-hires ${PN}-symbolic ${PN}-hires ${PN}"
-
-RREPLACES_${PN} = "gnome-icon-theme"
-RCONFLICTS_${PN} = "gnome-icon-theme"
-RPROVIDES_${PN} = "gnome-icon-theme"
-
-FILES_${PN}-cursors = "${prefix}/share/icons/Adwaita/cursors/"
-FILES_${PN}-symbolic-hires = "${prefix}/share/icons/Adwaita/96x96/*/*.symbolic.png \
-                              ${prefix}/share/icons/Adwaita/64x64/*/*.symbolic.png \
-                              ${prefix}/share/icons/Adwaita/48x48/*/*.symbolic.png \
-                              ${prefix}/share/icons/Adwaita/32x32/*/*.symbolic.png"
-FILES_${PN}-symbolic = "${prefix}/share/icons/Adwaita/16x16/*/*.symbolic.png \
-                        ${prefix}/share/icons/Adwaita/24x24/*/*.symbolic.png"
-FILES_${PN}-hires = "${prefix}/share/icons/Adwaita/256x256/ \
-                     ${prefix}/share/icons/Adwaita/512x512/"
-FILES_${PN} = "${prefix}/share/icons/Adwaita/ \
-               ${prefix}/share/pkgconfig/adwaita-icon-theme.pc"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gnome/adwaita-icon-theme_3.24.0.bb b/import-layers/yocto-poky/meta/recipes-gnome/gnome/adwaita-icon-theme_3.24.0.bb
new file mode 100644
index 0000000..d340536
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gnome/adwaita-icon-theme_3.24.0.bb
@@ -0,0 +1,44 @@
+SUMMARY = "GTK+ icon theme"
+HOMEPAGE = "http://ftp.gnome.org/pub/GNOME/sources/adwaita-icon-theme/"
+BUGTRACKER = "https://bugzilla.gnome.org/"
+SECTION = "x11/gnome"
+
+LICENSE = "LGPL-3.0 | CC-BY-SA-3.0"
+LIC_FILES_CHKSUM = "file://COPYING;md5=c84cac88e46fc07647ea07e6c24eeb7c"
+
+inherit allarch autotools pkgconfig gettext gtk-icon-cache upstream-version-is-even
+
+
+MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
+SRC_URI = "${GNOME_MIRROR}/${BPN}/${MAJ_VER}/${BPN}-${PV}.tar.xz \
+           file://0001-Don-t-use-AC_CANONICAL_HOST.patch \
+           file://0001-Run-installation-commands-as-shell-jobs.patch \
+           "
+
+SRC_URI[md5sum] = "3ccac0d600ffc936d2adfb80e9245bc5"
+SRC_URI[sha256sum] = "ccf79ff3bd340254737ce4d28b87f0ccee4b3358cd3cd5cd11dc7b42f41b272a"
+
+do_install_append() {
+	# Build uses gtk-encode-symbolic-svg to create png versions:
+        # no need to store the svgs anymore.
+	rm -f ${D}${prefix}/share/icons/Adwaita/scalable/*/*-symbolic.svg \
+	      ${D}${prefix}/share/icons/Adwaita/scalable/*/*-symbolic-rtl.svg
+}
+
+PACKAGES = "${PN}-cursors ${PN}-symbolic-hires ${PN}-symbolic ${PN}-hires ${PN}"
+
+RREPLACES_${PN} = "gnome-icon-theme"
+RCONFLICTS_${PN} = "gnome-icon-theme"
+RPROVIDES_${PN} = "gnome-icon-theme"
+
+FILES_${PN}-cursors = "${prefix}/share/icons/Adwaita/cursors/"
+FILES_${PN}-symbolic-hires = "${prefix}/share/icons/Adwaita/96x96/*/*.symbolic.png \
+                              ${prefix}/share/icons/Adwaita/64x64/*/*.symbolic.png \
+                              ${prefix}/share/icons/Adwaita/48x48/*/*.symbolic.png \
+                              ${prefix}/share/icons/Adwaita/32x32/*/*.symbolic.png"
+FILES_${PN}-symbolic = "${prefix}/share/icons/Adwaita/16x16/*/*.symbolic.png \
+                        ${prefix}/share/icons/Adwaita/24x24/*/*.symbolic.png"
+FILES_${PN}-hires = "${prefix}/share/icons/Adwaita/256x256/ \
+                     ${prefix}/share/icons/Adwaita/512x512/"
+FILES_${PN} = "${prefix}/share/icons/Adwaita/ \
+               ${prefix}/share/pkgconfig/adwaita-icon-theme.pc"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gnome/gconf_3.2.6.bb b/import-layers/yocto-poky/meta/recipes-gnome/gnome/gconf_3.2.6.bb
index 9e9f714..92fd12c 100644
--- a/import-layers/yocto-poky/meta/recipes-gnome/gnome/gconf_3.2.6.bb
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gnome/gconf_3.2.6.bb
@@ -1,5 +1,6 @@
 SUMMARY = "GNOME configuration system"
 SECTION = "x11/gnome"
+HOMEPAGE = "https://projects.gnome.org/gconf/"
 LICENSE = "LGPLv2+"
 LIC_FILES_CHKSUM = "file://COPYING;md5=55ca817ccb7d5b5b66355690e9abc605"
 
@@ -22,9 +23,8 @@
 
 # Disable PolicyKit by default
 PACKAGECONFIG ??= ""
-# We really don't want PolicyKit for native or uclibc
+# We really don't want PolicyKit for native
 PACKAGECONFIG_class-native = ""
-PACKAGECONFIG_libc-uclibc = ""
 
 PACKAGECONFIG[policykit] = "--enable-defaults-service,--disable-defaults-service,polkit"
 PACKAGECONFIG[debug] = "--enable-debug=yes, --enable-debug=minimum"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gnome/gnome-common_3.18.0.bb b/import-layers/yocto-poky/meta/recipes-gnome/gnome/gnome-common_3.18.0.bb
deleted file mode 100644
index 06f3bb3..0000000
--- a/import-layers/yocto-poky/meta/recipes-gnome/gnome/gnome-common_3.18.0.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-SUMMARY = "Common macros for building GNOME applications"
-HOMEPAGE = "http://www.gnome.org/"
-BUGTRACKER = "https://bugzilla.gnome.org/"
-
-LICENSE = "GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
-
-SECTION = "x11/gnome"
-inherit gnomebase allarch
-
-SRC_URI[archive.md5sum] = "933258d9c23e218eb6eec9cc1951b053"
-SRC_URI[archive.sha256sum] = "22569e370ae755e04527b76328befc4c73b62bfd4a572499fde116b8318af8cf"
-
-EXTRA_AUTORECONF = ""
-DEPENDS = ""
-
-FILES_${PN} += "${datadir}/aclocal"
-FILES_${PN}-dev = ""
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gnome/gnome-themes-standard_3.22.2.bb b/import-layers/yocto-poky/meta/recipes-gnome/gnome/gnome-themes-standard_3.22.2.bb
deleted file mode 100644
index 2b3cc97..0000000
--- a/import-layers/yocto-poky/meta/recipes-gnome/gnome/gnome-themes-standard_3.22.2.bb
+++ /dev/null
@@ -1,45 +0,0 @@
-SUMMARY = "GTK+2 standard themes"
-HOMEPAGE = "http://ftp.gnome.org/pub/GNOME/sources/gnome-themes-standard/"
-BUGTRACKER = "https://bugzilla.gnome.org/"
-SECTION = "x11/gnome"
-
-LICENSE = "LGPL-2.1"
-LIC_FILES_CHKSUM = "file://COPYING;md5=2d5025d4aa3495befef8f17206a5b0a1"
-
-inherit autotools pkgconfig gettext gtk-icon-cache upstream-version-is-even distro_features_check
-
-ANY_OF_DISTRO_FEATURES = "${GTK2DISTROFEATURES}"
-
-DEPENDS += "intltool-native gtk+"
-
-MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
-SRC_URI = "${GNOME_MIRROR}/${BPN}/${MAJ_VER}/${BPN}-${PV}.tar.xz \
-          "
-
-SRC_URI[md5sum] = "84624dbcecab7add32672abae030314d"
-SRC_URI[sha256sum] = "b34516cd59b873c187c1897c25bac3b9ce2d30a472f1fd7ae9d7105d93e17da5"
-
-EXTRA_OECONF = "--disable-gtk3-engine"
-
-do_install_append() {
-        # Only building Adwaita, remove highcontrast files
-        rm -rf ${D}${prefix}/share/themes/HighContrast \
-               ${D}${prefix}/share/icons
-}
-
-# There could be gnome-theme-highcontrast as well but that requires
-# gtk+3 and includes lots of icons (is also broken with B != S).
-PACKAGES += "gnome-theme-adwaita \
-             gnome-theme-adwaita-dbg \
-             gnome-theme-adwaita-dev \
-             gnome-theme-adwaita-dark \
-             "
-
-FILES_gnome-theme-adwaita = "${prefix}/share/themes/Adwaita \
-                              ${libdir}/gtk-2.0/2.10.0/engines/libadwaita.so"
-FILES_gnome-theme-adwaita-dev = "${libdir}/gtk-2.0/2.10.0/engines/libadwaita.la"
-FILES_gnome-theme-adwaita-dbg = "${libdir}/gtk-2.0/2.10.0/engines/.debug/libadwaita.so"
-
-FILES_gnome-theme-adwaita-dark = "${prefix}/share/themes/Adwaita-dark"
-RDEPENDS_gnome-theme-adwaita-dark = "gnome-theme-adwaita"
-
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gnome/gnome-themes-standard_3.22.3.bb b/import-layers/yocto-poky/meta/recipes-gnome/gnome/gnome-themes-standard_3.22.3.bb
new file mode 100644
index 0000000..55ee277
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gnome/gnome-themes-standard_3.22.3.bb
@@ -0,0 +1,46 @@
+SUMMARY = "GTK+2 standard themes"
+HOMEPAGE = "http://ftp.gnome.org/pub/GNOME/sources/gnome-themes-standard/"
+BUGTRACKER = "https://bugzilla.gnome.org/"
+SECTION = "x11/gnome"
+
+LICENSE = "LGPL-2.1"
+LIC_FILES_CHKSUM = "file://COPYING;md5=2d5025d4aa3495befef8f17206a5b0a1"
+
+inherit autotools pkgconfig gettext gtk-icon-cache upstream-version-is-even distro_features_check
+
+ANY_OF_DISTRO_FEATURES = "${GTK2DISTROFEATURES}"
+
+DEPENDS += "intltool-native gtk+"
+
+MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
+SRC_URI = "${GNOME_MIRROR}/${BPN}/${MAJ_VER}/${BPN}-${PV}.tar.xz \
+          "
+
+SRC_URI[md5sum] = "b51c362b157b6407303d44f93c31ee11"
+SRC_URI[sha256sum] = "61dc87c52261cfd5b94d65e8ffd923ddeb5d3944562f84942eeeb197ab8ab56a"
+
+EXTRA_OECONF = "--disable-gtk3-engine"
+
+do_install_append() {
+        # Only building Adwaita, remove highcontrast files
+        rm -rf ${D}${prefix}/share/themes/HighContrast \
+               ${D}${prefix}/share/icons
+
+	# The libtool archive file is unneeded with shared libs on modern Linux
+	rm -rf ${D}${libdir}/gtk-2.0/2.10.0/engines/libadwaita.la
+}
+
+# There could be gnome-theme-highcontrast as well but that requires
+# gtk+3 and includes lots of icons (is also broken with B != S).
+PACKAGES += "gnome-theme-adwaita \
+             gnome-theme-adwaita-dark \
+             "
+
+FILES_gnome-theme-adwaita = "${prefix}/share/themes/Adwaita \
+                              ${libdir}/gtk-2.0/2.10.0/engines/libadwaita.so"
+
+FILES_gnome-theme-adwaita-dark = "${prefix}/share/themes/Adwaita-dark"
+RDEPENDS_gnome-theme-adwaita-dark = "gnome-theme-adwaita"
+
+# gnome-themes-standard is empty and doesn't exist
+RDEPENDS_${PN}-dev = ""
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gobject-introspection/gobject-introspection_1.50.0.bb b/import-layers/yocto-poky/meta/recipes-gnome/gobject-introspection/gobject-introspection_1.50.0.bb
deleted file mode 100644
index 509fc5f..0000000
--- a/import-layers/yocto-poky/meta/recipes-gnome/gobject-introspection/gobject-introspection_1.50.0.bb
+++ /dev/null
@@ -1,178 +0,0 @@
-SUMMARY = "Middleware layer between GObject-using C libraries and language bindings"
-HOMEPAGE = "https://wiki.gnome.org/action/show/Projects/GObjectIntrospection"
-BUGTRACKER = "https://bugzilla.gnome.org/"
-SECTION = "libs"
-LICENSE = "LGPLv2+ & GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=90d577535a3898e1ae5dbf0ae3509a8c \
-                    file://tools/compiler.c;endline=20;md5=fc5007fc20022720e6c0b0cdde41fabd \
-                    file://giscanner/sourcescanner.c;endline=22;md5=194d6e0c1d00662f32d030ce44de8d39 \
-                    file://girepository/giregisteredtypeinfo.c;endline=21;md5=661847611ae6979465415f31a759ba27"
-
-SRC_URI = "${GNOME_MIRROR}/${BPN}/1.50/${BPN}-${PV}.tar.xz \
-           file://0001-Revert-an-incomplete-upstream-attempt-at-cross-compi.patch \
-           file://0002-configure.ac-add-host-gi-gi-cross-wrapper-gi-ldd-wra.patch \
-           file://0003-giscanner-add-use-binary-wrapper-option.patch \
-           file://0004-giscanner-add-a-use-ldd-wrapper-option.patch \
-           file://0005-Prefix-pkg-config-paths-with-PKG_CONFIG_SYSROOT_DIR-.patch \
-           "
-SRC_URI[md5sum] = "5af8d724f25d0c9cfbe6df41b77e5dc0"
-SRC_URI[sha256sum] = "1c6597c666f543c70ef3d7c893ab052968afae620efdc080c36657f4226337c5"
-
-inherit autotools pkgconfig gtk-doc python3native qemu gobject-introspection-data upstream-version-is-even
-BBCLASSEXTEND = "native"
-
-# needed for writing out the qemu wrapper script
-export STAGING_DIR_HOST
-export B
-
-DEPENDS_append = " libffi zlib glib-2.0 python3 flex-native bison-native"
-
-# target build needs qemu to run temporary introspection binaries created
-# on the fly by g-ir-scanner and a native version of itself to run
-# native versions of its own tools during build.
-# Also prelink-rtld is used to find out library dependencies of introspection binaries
-# (standard ldd doesn't work when cross-compiling).
-DEPENDS_class-target_append = " gobject-introspection-native qemu-native prelink-native"
-
-SSTATE_SCAN_FILES += "g-ir-scanner-qemuwrapper g-ir-scanner-wrapper g-ir-compiler-wrapper g-ir-scanner-lddwrapper Gio-2.0.gir postinst-ldsoconf-${PN}"
-
-do_configure_prepend_class-native() {
-        # Tweak the native python scripts so that they don't refer to the
-        # full path of native python binary (the solution is taken from glib-2.0 recipe)
-        # This removes the risk of exceeding Linux kernel's shebang line limit (128 bytes)
-        sed -i -e '1s,#!.*,#!${USRBINPATH}/env python3,' ${S}/tools/g-ir-tool-template.in
-}
-
-do_configure_prepend_class-target() {
-        # Write out a qemu wrapper that will be given to gi-scanner so that it
-        # can run target helper binaries through that.
-        qemu_binary="${@qemu_wrapper_cmdline(d, '$STAGING_DIR_HOST', ['\$GIR_EXTRA_LIBS_PATH','.libs','$STAGING_DIR_HOST/${libdir}','$STAGING_DIR_HOST/${base_libdir}'])}"
-        cat > ${B}/g-ir-scanner-qemuwrapper << EOF
-#!/bin/sh
-# Use a modules directory which doesn't exist so we don't load random things
-# which may then get deleted (or their dependencies) and potentially segfault
-export GIO_MODULE_DIR=${STAGING_LIBDIR}/gio/modules-dummy
-
-$qemu_binary "\$@"
-if [ \$? -ne 0 ]; then
-    echo "If the above error message is about missing .so libraries, then setting up GIR_EXTRA_LIBS_PATH in the recipe should help."
-    echo "(typically like this: GIR_EXTRA_LIBS_PATH=\"$""{B}/something/.libs\" )"
-    exit 1
-fi
-EOF
-        chmod +x ${B}/g-ir-scanner-qemuwrapper
-
-        # Write out a wrapper for g-ir-scanner itself, which will be used when building introspection files
-        # for glib-based packages. This wrapper calls the native version of the scanner, and tells it to use
-        # a qemu wrapper for running transient target binaries produced by the scanner, and an include directory 
-        # from the target sysroot.
-        cat > ${B}/g-ir-scanner-wrapper << EOF
-#!/bin/sh
-# This prevents g-ir-scanner from writing cache data to $HOME
-export GI_SCANNER_DISABLE_CACHE=1
-
-g-ir-scanner --use-binary-wrapper=${STAGING_BINDIR}/g-ir-scanner-qemuwrapper --use-ldd-wrapper=${STAGING_BINDIR}/g-ir-scanner-lddwrapper --add-include-path=${STAGING_DATADIR}/gir-1.0 "\$@"
-EOF
-        chmod +x ${B}/g-ir-scanner-wrapper
-
-        # Write out a wrapper for g-ir-compiler, which runs the target version of it through qemu.
-        # g-ir-compiler writes out the raw content of a C struct to disk, and therefore is architecture dependent.
-        cat > ${B}/g-ir-compiler-wrapper << EOF
-#!/bin/sh
-${STAGING_BINDIR}/g-ir-scanner-qemuwrapper ${STAGING_BINDIR}/g-ir-compiler "\$@"
-EOF
-        chmod +x ${B}/g-ir-compiler-wrapper
-
-        # Write out a wrapper to use instead of ldd, which does not work when a binary is built
-        # for a different architecture
-        cat > ${B}/g-ir-scanner-lddwrapper << EOF
-#!/bin/sh
-prelink-rtld --root=$STAGING_DIR_HOST "\$@"
-EOF
-        chmod +x ${B}/g-ir-scanner-lddwrapper
-
-        # Also tweak the target python scripts so that they don't refer to the
-        # native version of python binary (the solution is taken from glib-2.0 recipe)
-        sed -i -e '1s,#!.*,#!${USRBINPATH}/env python3,' ${S}/tools/g-ir-tool-template.in
-}
-
-# Configure target build to use native tools of itself and to use a qemu wrapper
-# and optionally to generate introspection data
-EXTRA_OECONF_class-target += "--enable-host-gi \
-                              --enable-gi-cross-wrapper=${B}/g-ir-scanner-qemuwrapper \
-                              --enable-gi-ldd-wrapper=${B}/g-ir-scanner-lddwrapper \
-                              ${@bb.utils.contains('GI_DATA_ENABLED', 'True', '--enable-introspection-data', '--disable-introspection-data', d)} \
-                             "
-
-PACKAGECONFIG ?= ""
-PACKAGECONFIG[doctool] = "--enable-doctool,--disable-doctool,python3-mako,"
-
-do_compile_prepend() {
-        # This prevents g-ir-scanner from writing cache data to $HOME
-        export GI_SCANNER_DISABLE_CACHE=1
-
-        # Needed to run g-ir unit tests, which won't be able to find the built libraries otherwise
-        export GIR_EXTRA_LIBS_PATH=$B/.libs
-}
-
-# Our wrappers need to be available system-wide, because they will be used 
-# to build introspection files for all other gobject-based packages
-do_install_append_class-target() {
-        install -d ${D}${bindir}/
-        install ${B}/g-ir-scanner-qemuwrapper ${D}${bindir}/
-        install ${B}/g-ir-scanner-wrapper ${D}${bindir}/
-        install ${B}/g-ir-compiler-wrapper ${D}${bindir}/
-        install ${B}/g-ir-scanner-lddwrapper ${D}${bindir}/
-}
-
-# .typelib files are needed at runtime and so they go to the main package
-FILES_${PN}_append = " ${libdir}/girepository-*/*.typelib"
-
-# .gir files go to dev package, as they're needed for developing (but not for running)
-# things that depends on introspection.
-FILES_${PN}-dev_append = " ${datadir}/gir-*/*.gir"
-
-# These are used by gobject-based packages
-# to generate transient introspection binaries
-FILES_${PN}-dev_append = " ${datadir}/gobject-introspection-1.0/gdump.c \
-               ${datadir}/gobject-introspection-1.0/Makefile.introspection"
-
-# These are used by dependent packages (e.g. pygobject) to build their
-# testsuites.
-FILES_${PN}-dev_append = " ${datadir}/gobject-introspection-1.0/tests/*.c \
-                   ${datadir}/gobject-introspection-1.0/tests/*.h"
-
-FILES_${PN}-dbg += "${libdir}/gobject-introspection/giscanner/.debug/"
-FILES_${PN}-staticdev += "${libdir}/gobject-introspection/giscanner/*.a"
-
-# we need target versions of introspection tools in sysroot so that they can be run via qemu
-# when building introspection files in other packages
-SYSROOT_DIRS_append_class-target = " ${bindir}"
-
-SYSROOT_PREPROCESS_FUNCS_append_class-target = " gi_binaries_sysroot_preprocess"
-gi_binaries_sysroot_preprocess() {
-        # Tweak the binary names in the introspection pkgconfig file, so that it
-        # picks up our wrappers which do the cross-compile and qemu magic.
-        sed -i \
-           -e "s|g_ir_scanner=.*|g_ir_scanner=${bindir}/g-ir-scanner-wrapper|" \
-           -e "s|g_ir_compiler=.*|g_ir_compiler=${bindir}/g-ir-compiler-wrapper|" \
-           ${SYSROOT_DESTDIR}${libdir}/pkgconfig/gobject-introspection-1.0.pc
-}
-
-# Need to ensure ld.so.conf exists so prelink-native works
-# both before we build and if we install from sstate
-do_configure[prefuncs] += "gobject_introspection_preconfigure"
-python gobject_introspection_preconfigure () {
-    oe.utils.write_ld_so_conf(d)
-}
-
-SYSROOT_PREPROCESS_FUNCS_append = " gi_ldsoconf_sysroot_preprocess"
-gi_ldsoconf_sysroot_preprocess () {
-	mkdir -p ${SYSROOT_DESTDIR}${bindir}
-	dest=${SYSROOT_DESTDIR}${bindir}/postinst-ldsoconf-${PN}
-	echo "#!/bin/sh" > $dest
-	echo "mkdir -p ${STAGING_DIR_TARGET}${sysconfdir}" >> $dest
-	echo "echo ${base_libdir} >> ${STAGING_DIR_TARGET}${sysconfdir}/ld.so.conf" >> $dest
-	echo "echo ${libdir} >> ${STAGING_DIR_TARGET}${sysconfdir}/ld.so.conf" >> $dest
-	chmod 755 $dest
-}
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gobject-introspection/gobject-introspection_1.52.1.bb b/import-layers/yocto-poky/meta/recipes-gnome/gobject-introspection/gobject-introspection_1.52.1.bb
new file mode 100644
index 0000000..3fe71a3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gobject-introspection/gobject-introspection_1.52.1.bb
@@ -0,0 +1,188 @@
+SUMMARY = "Middleware layer between GObject-using C libraries and language bindings"
+HOMEPAGE = "https://wiki.gnome.org/action/show/Projects/GObjectIntrospection"
+BUGTRACKER = "https://bugzilla.gnome.org/"
+SECTION = "libs"
+LICENSE = "LGPLv2+ & GPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=90d577535a3898e1ae5dbf0ae3509a8c \
+                    file://tools/compiler.c;endline=20;md5=fc5007fc20022720e6c0b0cdde41fabd \
+                    file://giscanner/sourcescanner.c;endline=22;md5=194d6e0c1d00662f32d030ce44de8d39 \
+                    file://girepository/giregisteredtypeinfo.c;endline=21;md5=661847611ae6979465415f31a759ba27"
+
+SRC_URI = "${GNOME_MIRROR}/${BPN}/1.52/${BPN}-${PV}.tar.xz \
+           file://0001-Revert-an-incomplete-upstream-attempt-at-cross-compi.patch \
+           file://0002-configure.ac-add-host-gi-gi-cross-wrapper-gi-ldd-wra.patch \
+           file://0003-giscanner-add-use-binary-wrapper-option.patch \
+           file://0004-giscanner-add-a-use-ldd-wrapper-option.patch \
+           file://0005-Prefix-pkg-config-paths-with-PKG_CONFIG_SYSROOT_DIR-.patch \
+           "
+SRC_URI[md5sum] = "34157073991f9eeb0ed953351b65eb61"
+SRC_URI[sha256sum] = "2ed0c38d52fe1aa6fc4def0c868fe481cb87b532fc694756b26d6cfab29faff4"
+
+inherit autotools pkgconfig gtk-doc python3native qemu gobject-introspection-data upstream-version-is-even
+BBCLASSEXTEND = "native"
+
+# needed for writing out the qemu wrapper script
+export STAGING_DIR_HOST
+export B
+
+DEPENDS_append = " libffi zlib glib-2.0 python3 flex-native bison-native"
+
+# target build needs qemu to run temporary introspection binaries created
+# on the fly by g-ir-scanner and a native version of itself to run
+# native versions of its own tools during build.
+# Also prelink-rtld is used to find out library dependencies of introspection binaries
+# (standard ldd doesn't work when cross-compiling).
+DEPENDS_class-target_append = " gobject-introspection-native qemu-native prelink-native"
+
+SSTATE_SCAN_FILES += "g-ir-scanner-qemuwrapper g-ir-scanner-wrapper g-ir-compiler-wrapper g-ir-scanner-lddwrapper Gio-2.0.gir postinst-ldsoconf-${PN}"
+
+do_configure_prepend_class-native() {
+        # Tweak the native python scripts so that they don't refer to the
+        # full path of native python binary (the solution is taken from glib-2.0 recipe)
+        # This removes the risk of exceeding Linux kernel's shebang line limit (128 bytes)
+        sed -i -e '1s,#!.*,#!${USRBINPATH}/env python3,' ${S}/tools/g-ir-tool-template.in
+}
+
+do_configure_prepend_class-target() {
+        # Write out a qemu wrapper that will be given to gi-scanner so that it
+        # can run target helper binaries through that.
+        qemu_binary="${@qemu_wrapper_cmdline(d, '$STAGING_DIR_HOST', ['\$GIR_EXTRA_LIBS_PATH','.libs','$STAGING_DIR_HOST/${libdir}','$STAGING_DIR_HOST/${base_libdir}'])}"
+        cat > ${B}/g-ir-scanner-qemuwrapper << EOF
+#!/bin/sh
+# Use a modules directory which doesn't exist so we don't load random things
+# which may then get deleted (or their dependencies) and potentially segfault
+export GIO_MODULE_DIR=${STAGING_LIBDIR}/gio/modules-dummy
+
+$qemu_binary "\$@"
+if [ \$? -ne 0 ]; then
+    echo "If the above error message is about missing .so libraries, then setting up GIR_EXTRA_LIBS_PATH in the recipe should help."
+    echo "(typically like this: GIR_EXTRA_LIBS_PATH=\"$""{B}/something/.libs\" )"
+    exit 1
+fi
+EOF
+        chmod +x ${B}/g-ir-scanner-qemuwrapper
+
+        # Write out a wrapper for g-ir-scanner itself, which will be used when building introspection files
+        # for glib-based packages. This wrapper calls the native version of the scanner, and tells it to use
+        # a qemu wrapper for running transient target binaries produced by the scanner, and an include directory 
+        # from the target sysroot.
+        cat > ${B}/g-ir-scanner-wrapper << EOF
+#!/bin/sh
+# This prevents g-ir-scanner from writing cache data to $HOME
+export GI_SCANNER_DISABLE_CACHE=1
+
+g-ir-scanner --use-binary-wrapper=${STAGING_BINDIR}/g-ir-scanner-qemuwrapper --use-ldd-wrapper=${STAGING_BINDIR}/g-ir-scanner-lddwrapper --add-include-path=${STAGING_DATADIR}/gir-1.0 "\$@"
+EOF
+        chmod +x ${B}/g-ir-scanner-wrapper
+
+        # Write out a wrapper for g-ir-compiler, which runs the target version of it through qemu.
+        # g-ir-compiler writes out the raw content of a C struct to disk, and therefore is architecture dependent.
+        cat > ${B}/g-ir-compiler-wrapper << EOF
+#!/bin/sh
+${STAGING_BINDIR}/g-ir-scanner-qemuwrapper ${STAGING_BINDIR}/g-ir-compiler "\$@"
+EOF
+        chmod +x ${B}/g-ir-compiler-wrapper
+
+        # Write out a wrapper to use instead of ldd, which does not work when a binary is built
+        # for a different architecture
+        cat > ${B}/g-ir-scanner-lddwrapper << EOF
+#!/bin/sh
+prelink-rtld --root=$STAGING_DIR_HOST "\$@"
+EOF
+        chmod +x ${B}/g-ir-scanner-lddwrapper
+
+        # Also tweak the target python scripts so that they don't refer to the
+        # native version of python binary (the solution is taken from glib-2.0 recipe)
+        sed -i -e '1s,#!.*,#!${USRBINPATH}/env python3,' ${S}/tools/g-ir-tool-template.in
+}
+
+# Configure target build to use native tools of itself and to use a qemu wrapper
+# and optionally to generate introspection data
+EXTRA_OECONF_class-target += "--enable-host-gi \
+                              --disable-static \
+                              --enable-gi-cross-wrapper=${B}/g-ir-scanner-qemuwrapper \
+                              --enable-gi-ldd-wrapper=${B}/g-ir-scanner-lddwrapper \
+                              ${@bb.utils.contains('GI_DATA_ENABLED', 'True', '--enable-introspection-data', '--disable-introspection-data', d)} \
+                             "
+
+PACKAGECONFIG ?= ""
+PACKAGECONFIG[doctool] = "--enable-doctool,--disable-doctool,python3-mako,"
+
+do_compile_prepend() {
+        # This prevents g-ir-scanner from writing cache data to $HOME
+        export GI_SCANNER_DISABLE_CACHE=1
+
+        # Needed to run g-ir unit tests, which won't be able to find the built libraries otherwise
+        export GIR_EXTRA_LIBS_PATH=$B/.libs
+}
+
+# Our wrappers need to be available system-wide, because they will be used 
+# to build introspection files for all other gobject-based packages
+do_install_append_class-target() {
+        install -d ${D}${bindir}/
+        install ${B}/g-ir-scanner-qemuwrapper ${D}${bindir}/
+        install ${B}/g-ir-scanner-wrapper ${D}${bindir}/
+        install ${B}/g-ir-compiler-wrapper ${D}${bindir}/
+        install ${B}/g-ir-scanner-lddwrapper ${D}${bindir}/
+}
+
+# .typelib files are needed at runtime and so they go to the main package
+FILES_${PN}_append = " ${libdir}/girepository-*/*.typelib"
+
+# .gir files go to dev package, as they're needed for developing (but not for running)
+# things that depend on introspection.
+FILES_${PN}-dev_append = " ${datadir}/gir-*/*.gir"
+
+# These are used by gobject-based packages
+# to generate transient introspection binaries
+FILES_${PN}-dev_append = " ${datadir}/gobject-introspection-1.0/gdump.c \
+               ${datadir}/gobject-introspection-1.0/Makefile.introspection"
+
+# These are used by dependent packages (e.g. pygobject) to build their
+# testsuites.
+FILES_${PN}-dev_append = " ${datadir}/gobject-introspection-1.0/tests/*.c \
+                   ${datadir}/gobject-introspection-1.0/tests/*.h"
+
+FILES_${PN}-dbg += "${libdir}/gobject-introspection/giscanner/.debug/"
+FILES_${PN}-staticdev += "${libdir}/gobject-introspection/giscanner/*.a"
+
+# we need target versions of introspection tools in sysroot so that they can be run via qemu
+# when building introspection files in other packages
+SYSROOT_DIRS_append_class-target = " ${bindir}"
+
+SYSROOT_PREPROCESS_FUNCS_append_class-target = " gi_binaries_sysroot_preprocess"
+gi_binaries_sysroot_preprocess() {
+        # Tweak the binary names in the introspection pkgconfig file, so that it
+        # picks up our wrappers which do the cross-compile and qemu magic.
+        sed -i \
+           -e "s|g_ir_scanner=.*|g_ir_scanner=${bindir}/g-ir-scanner-wrapper|" \
+           -e "s|g_ir_compiler=.*|g_ir_compiler=${bindir}/g-ir-compiler-wrapper|" \
+           ${SYSROOT_DESTDIR}${libdir}/pkgconfig/gobject-introspection-1.0.pc
+}
+
+# Need to ensure ld.so.conf exists so prelink-native works
+# both before we build and if we install from sstate
+do_configure[prefuncs] += "gobject_introspection_preconfigure"
+python gobject_introspection_preconfigure () {
+    oe.utils.write_ld_so_conf(d)
+}
+
+SYSROOT_PREPROCESS_FUNCS_append = " gi_ldsoconf_sysroot_preprocess"
+gi_ldsoconf_sysroot_preprocess () {
+	mkdir -p ${SYSROOT_DESTDIR}${bindir}
+	dest=${SYSROOT_DESTDIR}${bindir}/postinst-ldsoconf-${PN}
+	echo "#!/bin/sh" > $dest
+	echo "mkdir -p ${STAGING_DIR_TARGET}${sysconfdir}" >> $dest
+	echo "echo ${base_libdir} >> ${STAGING_DIR_TARGET}${sysconfdir}/ld.so.conf" >> $dest
+	echo "echo ${libdir} >> ${STAGING_DIR_TARGET}${sysconfdir}/ld.so.conf" >> $dest
+	chmod 755 $dest
+}
+
+# Remove wrapper files from the package, only used for cross-compiling
+PACKAGE_PREPROCESS_FUNCS += "gi_package_preprocess"
+gi_package_preprocess() {
+	rm -f ${PKGD}${bindir}/g-ir-scanner-qemuwrapper
+	rm -f ${PKGD}${bindir}/g-ir-scanner-wrapper
+	rm -f ${PKGD}${bindir}/g-ir-compiler-wrapper
+	rm -f ${PKGD}${bindir}/g-ir-scanner-lddwrapper
+}
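The g-ir-scanner-qemuwrapper written by the recipe above prints a hint about GIR_EXTRA_LIBS_PATH whenever a transient introspection binary cannot find its shared libraries. A minimal sketch of how a dependent recipe can act on that hint, mirroring what gtk+3.inc does further down in this change (the .libs path is hypothetical):

    do_compile_prepend() {
            export GIR_EXTRA_LIBS_PATH="${B}/src/.libs"
    }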
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3.inc b/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3.inc
index 27da844..0a357db 100644
--- a/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3.inc
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3.inc
@@ -10,7 +10,11 @@
 
 LICENSE = "LGPLv2 & LGPLv2+ & LGPLv2.1+"
 
-inherit autotools gettext pkgconfig gtk-doc update-alternatives gtk-immodules-cache gsettings distro_features_check upstream-version-is-even gobject-introspection
+inherit autotools gettext pkgconfig gtk-doc update-alternatives gtk-immodules-cache gsettings distro_features_check gobject-introspection
+
+# versions >= 3.90 are development versions, otherwise like upstream-version-is-even
+UPSTREAM_CHECK_REGEX = "[^\d\.](?P<pver>3\.([1-8]?[02468])+(\.\d+)+)\.tar"
+
 ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
 
 # This should be in autotools.bbclass, but until something elses uses it putting
@@ -34,7 +38,6 @@
                  WAYLAND_PROTOCOLS_SYSROOT_DIR=${RECIPE_SYSROOT} \
                  ${@bb.utils.contains("DISTRO_FEATURES", "x11", "", "--disable-gtk-doc", d)} \
                  "
-EXTRA_OECONF[vardepsexclude] = "MACHINE"
 
 do_compile_prepend() {
         export GIR_EXTRA_LIBS_PATH="${B}/gdk/.libs"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3/0003-Add-disable-opengl-configure-option.patch b/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3/0003-Add-disable-opengl-configure-option.patch
index eaf6aec..9cdee0e 100644
--- a/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3/0003-Add-disable-opengl-configure-option.patch
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3/0003-Add-disable-opengl-configure-option.patch
@@ -942,8 +942,8 @@
 +	gdkgears
 +endif
 +
- if USE_X11
- noinst_PROGRAMS += testerrors
+ if USE_WAYLAND
+ noinst_PROGRAMS += testforeign
  endif
 diff --git a/testsuite/gtk/objects-finalize.c b/testsuite/gtk/objects-finalize.c
 index 0b3a519..07b096f 100644
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3_3.22.17.bb b/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3_3.22.17.bb
new file mode 100644
index 0000000..66a5463
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3_3.22.17.bb
@@ -0,0 +1,19 @@
+require gtk+3.inc
+
+MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
+
+SRC_URI = "http://ftp.gnome.org/pub/gnome/sources/gtk+/${MAJ_VER}/gtk+-${PV}.tar.xz \
+           file://0001-Hardcoded-libtool.patch \
+           file://0002-Do-not-try-to-initialize-GL-without-libGL.patch \
+           file://0003-Add-disable-opengl-configure-option.patch \
+           file://0004-configure.ac-Fix-wayland-protocols-path.patch \
+          "
+SRC_URI[md5sum] = "29f85430cf7cfa8ca8d0703ba65dbe11"
+SRC_URI[sha256sum] = "a6c1fb8f229c626a3d9c0e1ce6ea138de7f64a5a6bc799d45fa286fe461c3437"
+
+S = "${WORKDIR}/gtk+-${PV}"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=5f30f0716dfdd0d91eb439ebec522ec2 \
+                    file://gtk/gtk.h;endline=25;md5=1d8dc0fccdbfa26287a271dce88af737 \
+                    file://gdk/gdk.h;endline=25;md5=c920ce39dc88c6f06d3e7c50e08086f2 \
+                    file://tests/testgtk.c;endline=25;md5=cb732daee1d82af7a2bf953cf3cf26f1"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3_3.22.8.bb b/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3_3.22.8.bb
deleted file mode 100644
index 1cc57f9..0000000
--- a/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk+3_3.22.8.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-require gtk+3.inc
-
-MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
-
-SRC_URI = "http://ftp.gnome.org/pub/gnome/sources/gtk+/${MAJ_VER}/gtk+-${PV}.tar.xz \
-           file://0001-Hardcoded-libtool.patch \
-           file://0002-Do-not-try-to-initialize-GL-without-libGL.patch \
-           file://0003-Add-disable-opengl-configure-option.patch \
-           file://0004-configure.ac-Fix-wayland-protocols-path.patch \
-          "
-SRC_URI[md5sum] = "b4fb39a829e4425b3fa05adbe7f52fef"
-SRC_URI[sha256sum] = "c7254a88df5c17e9609cee9e848c3d0104512707edad4c3b4f256b131f8d3af1"
-
-S = "${WORKDIR}/gtk+-${PV}"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=5f30f0716dfdd0d91eb439ebec522ec2 \
-                    file://gtk/gtk.h;endline=25;md5=1d8dc0fccdbfa26287a271dce88af737 \
-                    file://gdk/gdk.h;endline=25;md5=c920ce39dc88c6f06d3e7c50e08086f2 \
-                    file://tests/testgtk.c;endline=25;md5=cb732daee1d82af7a2bf953cf3cf26f1"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk-icon-utils-native_3.22.17.bb b/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk-icon-utils-native_3.22.17.bb
new file mode 100644
index 0000000..032d82d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk-icon-utils-native_3.22.17.bb
@@ -0,0 +1,63 @@
+SUMMARY = "Native icon utils for GTK+"
+DESCRIPTION = "gtk-update-icon-cache and gtk-encode-symbolic-svg built from GTK+ natively, for build time and on-host postinst script execution."
+SECTION = "libs"
+
+DEPENDS = "glib-2.0-native gdk-pixbuf-native librsvg-native"
+
+LICENSE = "LGPLv2 & LGPLv2+ & LGPLv2.1+"
+
+MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
+
+SRC_URI = "http://ftp.gnome.org/pub/gnome/sources/gtk+/${MAJ_VER}/gtk+-${PV}.tar.xz \
+          file://Remove-Gdk-dependency-from-gtk-encode-symbolic-svg.patch"
+SRC_URI[md5sum] = "29f85430cf7cfa8ca8d0703ba65dbe11"
+SRC_URI[sha256sum] = "a6c1fb8f229c626a3d9c0e1ce6ea138de7f64a5a6bc799d45fa286fe461c3437"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=5f30f0716dfdd0d91eb439ebec522ec2 \
+                    file://gtk/gtk.h;endline=25;md5=1d8dc0fccdbfa26287a271dce88af737 \
+                    file://gdk/gdk.h;endline=25;md5=c920ce39dc88c6f06d3e7c50e08086f2 \
+                    file://tests/testgtk.c;endline=25;md5=cb732daee1d82af7a2bf953cf3cf26f1"
+
+S = "${WORKDIR}/gtk+-${PV}"
+
+inherit pkgconfig native
+
+# versions >= 3.90 are development versions, otherwise like upstream-version-is-even
+UPSTREAM_CHECK_REGEX = "[^\d\.](?P<pver>3\.([1-8]?[02468])+(\.\d+)+)\.tar"
+
+PKG_CONFIG_FOR_BUILD = "${STAGING_BINDIR_NATIVE}/pkg-config-native"
+
+do_configure() {
+	# Quite ugly but defines enough to compile the tools.
+	if ! test -f gtk/config.h; then
+		echo "#define GETTEXT_PACKAGE \"gtk30\"" >> gtk/config.h
+		echo "#define HAVE_UNISTD_H 1" >> gtk/config.h
+		echo "#define HAVE_FTW_H 1" >> gtk/config.h
+	fi
+	if ! test -f gdk/config.h; then
+		touch gdk/config.h
+	fi
+}
+
+do_compile() {
+	${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS} \
+		${S}/gtk/updateiconcache.c \
+		$(${PKG_CONFIG_FOR_BUILD} --cflags --libs gdk-pixbuf-2.0) \
+		-o gtk-update-icon-cache
+
+	${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS} \
+		${S}/gtk/encodesymbolic.c \
+		$(${PKG_CONFIG_FOR_BUILD} --cflags --libs gio-2.0 gdk-pixbuf-2.0) \
+		-o gtk-encode-symbolic-svg
+}
+
+do_install() {
+	install -d ${D}${bindir}
+	install -m 0755 ${B}/gtk-update-icon-cache ${D}${bindir}
+	install -m 0755 ${B}/gtk-encode-symbolic-svg ${D}${bindir}
+
+	create_wrapper ${D}/${bindir}/gtk-update-icon-cache \
+		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/2.10.0/loaders.cache
+	create_wrapper ${D}/${bindir}/gtk-encode-symbolic-svg \
+		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/2.10.0/loaders.cache
+}
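The two tools built above run on the build host, not on the target. For illustration, typical invocations look like this, following the same argument order the adwaita-icon-theme Makefiles use for gtk-encode-symbolic-svg (all paths are hypothetical):

    gtk-encode-symbolic-svg some-icon-symbolic.svg 16x16 -o out/16x16
    gtk-update-icon-cache -q -t -f sysroot/usr/share/icons/Adwaita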
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk-icon-utils-native_3.22.8.bb b/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk-icon-utils-native_3.22.8.bb
deleted file mode 100644
index 7283418..0000000
--- a/import-layers/yocto-poky/meta/recipes-gnome/gtk+/gtk-icon-utils-native_3.22.8.bb
+++ /dev/null
@@ -1,60 +0,0 @@
-SUMMARY = "Native icon utils for GTK+"
-DESCRIPTION = "gtk-update-icon-cache and gtk-encode-symbolic-svg built from GTK+ natively, for build time and on-host postinst script execution."
-SECTION = "libs"
-
-DEPENDS = "glib-2.0-native gdk-pixbuf-native librsvg-native"
-
-LICENSE = "LGPLv2 & LGPLv2+ & LGPLv2.1+"
-
-MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
-
-SRC_URI = "http://ftp.gnome.org/pub/gnome/sources/gtk+/${MAJ_VER}/gtk+-${PV}.tar.xz \
-          file://Remove-Gdk-dependency-from-gtk-encode-symbolic-svg.patch"
-SRC_URI[md5sum] = "b4fb39a829e4425b3fa05adbe7f52fef"
-SRC_URI[sha256sum] = "c7254a88df5c17e9609cee9e848c3d0104512707edad4c3b4f256b131f8d3af1"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=5f30f0716dfdd0d91eb439ebec522ec2 \
-                    file://gtk/gtk.h;endline=25;md5=1d8dc0fccdbfa26287a271dce88af737 \
-                    file://gdk/gdk.h;endline=25;md5=c920ce39dc88c6f06d3e7c50e08086f2 \
-                    file://tests/testgtk.c;endline=25;md5=cb732daee1d82af7a2bf953cf3cf26f1"
-
-S = "${WORKDIR}/gtk+-${PV}"
-
-inherit pkgconfig native upstream-version-is-even
-
-PKG_CONFIG_FOR_BUILD = "${STAGING_BINDIR_NATIVE}/pkg-config-native"
-
-do_configure() {
-	# Quite ugly but defines enough to compile the tools.
-	if ! test -f gtk/config.h; then
-		echo "#define GETTEXT_PACKAGE \"gtk30\"" >> gtk/config.h
-		echo "#define HAVE_UNISTD_H 1" >> gtk/config.h
-		echo "#define HAVE_FTW_H 1" >> gtk/config.h
-	fi
-	if ! test -f gdk/config.h; then
-		touch gdk/config.h
-	fi
-}
-
-do_compile() {
-	${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS} \
-		${S}/gtk/updateiconcache.c \
-		$(${PKG_CONFIG_FOR_BUILD} --cflags --libs gdk-pixbuf-2.0) \
-		-o gtk-update-icon-cache
-
-	${BUILD_CC} ${BUILD_CFLAGS} ${BUILD_LDFLAGS} \
-		${S}/gtk/encodesymbolic.c \
-		$(${PKG_CONFIG_FOR_BUILD} --cflags --libs gio-2.0 gdk-pixbuf-2.0) \
-		-o gtk-encode-symbolic-svg
-}
-
-do_install() {
-	install -d ${D}${bindir}
-	install -m 0755 ${B}/gtk-update-icon-cache ${D}${bindir}
-	install -m 0755 ${B}/gtk-encode-symbolic-svg ${D}${bindir}
-
-	create_wrapper ${D}/${bindir}/gtk-update-icon-cache \
-		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/2.10.0/loaders.cache
-	create_wrapper ${D}/${bindir}/gtk-encode-symbolic-svg \
-		GDK_PIXBUF_MODULE_FILE=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/2.10.0/loaders.cache
-}
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gtk-doc/files/conditionaltests.patch b/import-layers/yocto-poky/meta/recipes-gnome/gtk-doc/files/conditionaltests.patch
new file mode 100644
index 0000000..0c180f2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gtk-doc/files/conditionaltests.patch
@@ -0,0 +1,34 @@
+Allow the tests to be explicitly disabled to avoid floating dependency
+issues. This is not really an issue with RSS but it is on previous releases.
+
+RP 2017/6/27
+Upstream-Status: Pending
+
+Index: gtk-doc-1.25/configure.ac
+===================================================================
+--- gtk-doc-1.25.orig/configure.ac
++++ gtk-doc-1.25/configure.ac
+@@ -161,6 +161,11 @@ if test "x$GCC" = "xyes"; then
+ 	fi
+ fi
+ 
++AC_ARG_ENABLE([tests],
++	AS_HELP_STRING([--enable-tests],
++	[enable tests (default=yes)]),,
++	[enable_tests="yes"])
++
+ dnl if glib is available we can enable the tests
+ PKG_CHECK_MODULES(TEST_DEPS, [glib-2.0 >= 2.6.0 gobject-2.0 >= 2.6.0],
+ 	[	glib_prefix="`$PKG_CONFIG --variable=prefix glib-2.0`"
+@@ -171,6 +176,11 @@ PKG_CHECK_MODULES(TEST_DEPS, [glib-2.0 >
+ 		build_tests="no"
+ 	]
+ )
++if test "x$enable_tests" != "xyes"; then
++	gtk_doc_use_libtool="no"
++	build_tests="no"
++fi
++
+ AM_CONDITIONAL(GTK_DOC_USE_LIBTOOL, test -n "$LIBTOOL" -a x$gtk_doc_use_libtool = xyes )
+ dnl this enable the rule in test/Makefile.am
+ AM_CONDITIONAL(BUILD_TESTS, test x$build_tests = xyes)
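The new --enable-tests switch is wired into the recipe change below as a PACKAGECONFIG, so builds that do want the gtk-doc self-tests (and the glib-2.0 dependency they bring in) can opt back in. A minimal sketch of doing that from a build's local.conf (illustrative only, not part of this change):

    echo 'PACKAGECONFIG_append_pn-gtk-doc = " tests"' >> conf/local.conf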
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/gtk-doc/gtk-doc_1.25.bb b/import-layers/yocto-poky/meta/recipes-gnome/gtk-doc/gtk-doc_1.25.bb
index 7de36ae9..e0eb994 100644
--- a/import-layers/yocto-poky/meta/recipes-gnome/gtk-doc/gtk-doc_1.25.bb
+++ b/import-layers/yocto-poky/meta/recipes-gnome/gtk-doc/gtk-doc_1.25.bb
@@ -16,6 +16,7 @@
 # hopefully no one minds because its scripts are not used for anything during build
 # and shouldn't be used on targets.
 PACKAGECONFIG[working-scripts] = "--with-highlight=source-highlight,--with-highlight=no,libxslt-native xmlto-native source-highlight-native perl-native"
+PACKAGECONFIG[tests] = "--enable-tests,--disable-tests,glib-2.0"
 
 # We cannot use host perl, because it may be too old for gtk-doc
 EXTRANATIVEPATH += "perl-native"
@@ -23,6 +24,7 @@
 SRC_URI += "file://0001-Do-not-hardocode-paths-to-perl-python-in-scripts.patch \
             file://0001-Do-not-error-out-if-xsltproc-is-not-found.patch \
             file://0001-Do-not-error-out-if-perl-is-not-found-or-its-version.patch \
+            file://conditionaltests.patch \
            "
 SRC_URI_append_class-native = " file://pkg-config-native.patch"
 
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/json-glib/json-glib_1.2.2.bb b/import-layers/yocto-poky/meta/recipes-gnome/json-glib/json-glib_1.2.2.bb
deleted file mode 100644
index 6869ba9..0000000
--- a/import-layers/yocto-poky/meta/recipes-gnome/json-glib/json-glib_1.2.2.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-SUMMARY = "JSON-GLib implements a full JSON parser using GLib and GObject"
-DESCRIPTION = "Use JSON-GLib it is possible to parse and generate valid JSON\
- data structures, using a DOM-like API. JSON-GLib also offers GObject \
-integration, providing the ability to serialize and deserialize GObject \
-instances to and from JSON data types."
-HOMEPAGE = "http://live.gnome.org/JsonGlib"
-
-LICENSE = "LGPLv2.1"
-LIC_FILES_CHKSUM = "file://COPYING;md5=7fbc338309ac38fefcd64b04bb903e34"
-
-DEPENDS = "glib-2.0"
-
-SRC_URI[archive.md5sum] = "c1daefb8d0fb59612af0c072c8aabb58"
-SRC_URI[archive.sha256sum] = "ea128ab52a824fcd06e5448fbb2bd8d9a13740d51c66d445828edba71321a621"
-
-inherit gnomebase gettext lib_package gobject-introspection gtk-doc manpages
-
-PACKAGECONFIG[manpages] = "--enable-man --with-xml-catalog=${STAGING_ETCDIR_NATIVE}/xml/catalog.xml, --disable-man, libxslt-native xmlto-native"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/json-glib/json-glib_1.2.8.bb b/import-layers/yocto-poky/meta/recipes-gnome/json-glib/json-glib_1.2.8.bb
new file mode 100644
index 0000000..2c5d381
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/json-glib/json-glib_1.2.8.bb
@@ -0,0 +1,20 @@
+SUMMARY = "JSON-GLib implements a full JSON parser using GLib and GObject"
+DESCRIPTION = "Use JSON-GLib it is possible to parse and generate valid JSON\
+ data structures, using a DOM-like API. JSON-GLib also offers GObject \
+integration, providing the ability to serialize and deserialize GObject \
+instances to and from JSON data types."
+HOMEPAGE = "http://live.gnome.org/JsonGlib"
+
+LICENSE = "LGPLv2.1"
+LIC_FILES_CHKSUM = "file://COPYING;md5=7fbc338309ac38fefcd64b04bb903e34"
+
+DEPENDS = "glib-2.0"
+
+SRC_URI[archive.md5sum] = "ff31e7d0594df44318e12facda3d086e"
+SRC_URI[archive.sha256sum] = "fd55a9037d39e7a10f0db64309f5f0265fa32ec962bf85066087b83a2807f40a"
+
+inherit gnomebase gettext lib_package gobject-introspection gtk-doc manpages
+
+PACKAGECONFIG[manpages] = "--enable-man --with-xml-catalog=${STAGING_ETCDIR_NATIVE}/xml/catalog.xml, --disable-man, libxslt-native xmlto-native"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/libgudev/libgudev_231.bb b/import-layers/yocto-poky/meta/recipes-gnome/libgudev/libgudev_231.bb
index ae0f1c7..ad67926 100644
--- a/import-layers/yocto-poky/meta/recipes-gnome/libgudev/libgudev_231.bb
+++ b/import-layers/yocto-poky/meta/recipes-gnome/libgudev/libgudev_231.bb
@@ -1,5 +1,5 @@
 SUMMARY = "GObject wrapper for libudev"
-
+HOMEPAGE = "https://wiki.gnome.org/Projects/libgudev"
 SRC_URI[archive.md5sum] = "916c10c51ec61131e244c3936bbb2e0c"
 SRC_URI[archive.sha256sum] = "3b1ef99d4a8984c35044103d8ddfc3cc52c80035c36abab2bcc5e3532e063f96"
 
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/libnotify/libnotify_0.7.7.bb b/import-layers/yocto-poky/meta/recipes-gnome/libnotify/libnotify_0.7.7.bb
index 2ab2f1f..6c299bc 100644
--- a/import-layers/yocto-poky/meta/recipes-gnome/libnotify/libnotify_0.7.7.bb
+++ b/import-layers/yocto-poky/meta/recipes-gnome/libnotify/libnotify_0.7.7.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Library for sending desktop notifications to a notification daemon"
+HOMEPAGE = "http://www.gnome.org"
 SECTION = "libs"
 LICENSE = "LGPLv2.1"
 LIC_FILES_CHKSUM = "file://COPYING;md5=7fbc338309ac38fefcd64b04bb903e34"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/librsvg/librsvg_2.40.16.bb b/import-layers/yocto-poky/meta/recipes-gnome/librsvg/librsvg_2.40.16.bb
deleted file mode 100644
index 49243c8..0000000
--- a/import-layers/yocto-poky/meta/recipes-gnome/librsvg/librsvg_2.40.16.bb
+++ /dev/null
@@ -1,43 +0,0 @@
-SUMMARY = "Library for rendering SVG files"
-HOMEPAGE = "http://ftp.gnome.org/pub/GNOME/sources/librsvg/"
-BUGTRACKER = "https://bugzilla.gnome.org/"
-
-LICENSE = "LGPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
-                    file://rsvg.h;beginline=3;endline=24;md5=20b4113c4909bbf0d67e006778302bc6"
-
-SECTION = "x11/utils"
-DEPENDS = "cairo gdk-pixbuf glib-2.0 libcroco libxml2 pango"
-BBCLASSEXTEND = "native"
-
-inherit autotools pkgconfig gnomebase gtk-doc pixbufcache upstream-version-is-even gobject-introspection
-
-SRC_URI += "file://gtk-option.patch"
-
-SRC_URI[archive.md5sum] = "f474fe37177a2bf8050787df2046095c"
-SRC_URI[archive.sha256sum] = "d48bcf6b03fa98f07df10332fb49d8c010786ddca6ab34cbba217684f533ff2e"
-
-CACHED_CONFIGUREVARS = "ac_cv_path_GDK_PIXBUF_QUERYLOADERS=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders"
-
-# The older ld (2.22) on the host (Centos 6.5) doesn't have the
-# -Bsymbolic-functions option, we can disable it for native.
-EXTRA_OECONF_append_class-native = " --enable-Bsymbolic=auto"
-
-PACKAGECONFIG ??= "gdkpixbuf"
-# The gdk-pixbuf loader
-PACKAGECONFIG[gdkpixbuf] = "--enable-pixbuf-loader,--disable-pixbuf-loader,gdk-pixbuf-native"
-# GTK+ test application (rsvg-view)
-PACKAGECONFIG[gtk] = "--with-gtk3,--without-gtk3,gtk+3"
-
-do_install_append() {
-	# Loadable modules don't need .a or .la on Linux
-	rm -f ${D}${libdir}/gdk-pixbuf-2.0/*/loaders/*.a ${D}${libdir}/gdk-pixbuf-2.0/*/loaders/*.la
-}
-
-PACKAGES =+ "librsvg-gtk rsvg"
-FILES_rsvg = "${bindir}/rsvg* \
-	      ${datadir}/pixmaps/svg-viewer.svg \
-	      ${datadir}/themes"
-FILES_librsvg-gtk = "${libdir}/gdk-pixbuf-2.0/*/*/*.so"
-
-PIXBUF_PACKAGES = "librsvg-gtk"
diff --git a/import-layers/yocto-poky/meta/recipes-gnome/librsvg/librsvg_2.40.18.bb b/import-layers/yocto-poky/meta/recipes-gnome/librsvg/librsvg_2.40.18.bb
new file mode 100644
index 0000000..21a0dc2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-gnome/librsvg/librsvg_2.40.18.bb
@@ -0,0 +1,44 @@
+SUMMARY = "Library for rendering SVG files"
+HOMEPAGE = "http://ftp.gnome.org/pub/GNOME/sources/librsvg/"
+BUGTRACKER = "https://bugzilla.gnome.org/"
+
+LICENSE = "LGPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
+                    file://rsvg.h;beginline=3;endline=24;md5=20b4113c4909bbf0d67e006778302bc6"
+
+SECTION = "x11/utils"
+DEPENDS = "cairo gdk-pixbuf glib-2.0 libcroco libxml2 pango"
+BBCLASSEXTEND = "native"
+
+inherit autotools pkgconfig gnomebase gtk-doc pixbufcache upstream-version-is-even gobject-introspection
+
+SRC_URI += "file://gtk-option.patch"
+
+SRC_URI[archive.md5sum] = "eaa5c8a8bbe2600ab5194c0d3b1b621b"
+SRC_URI[archive.sha256sum] = "bfc8c488c89c1e7212c478beb95c41b44701636125a3e6dab41187f1485b564c"
+
+CACHED_CONFIGUREVARS = "ac_cv_path_GDK_PIXBUF_QUERYLOADERS=${STAGING_LIBDIR_NATIVE}/gdk-pixbuf-2.0/gdk-pixbuf-query-loaders"
+
+# The older ld (2.22) on the host (Centos 6.5) doesn't have the
+# -Bsymbolic-functions option, we can disable it for native.
+EXTRA_OECONF_append_class-native = " --enable-Bsymbolic=auto"
+
+PACKAGECONFIG ??= "gdkpixbuf"
+# The gdk-pixbuf loader
+PACKAGECONFIG[gdkpixbuf] = "--enable-pixbuf-loader,--disable-pixbuf-loader,gdk-pixbuf-native"
+# GTK+ test application (rsvg-view)
+PACKAGECONFIG[gtk] = "--with-gtk3,--without-gtk3,gtk+3"
+
+do_install_append() {
+	# Loadable modules don't need .a or .la on Linux
+	rm -f ${D}${libdir}/gdk-pixbuf-2.0/*/loaders/*.a ${D}${libdir}/gdk-pixbuf-2.0/*/loaders/*.la
+}
+
+PACKAGES =+ "librsvg-gtk rsvg"
+FILES_rsvg = "${bindir}/rsvg* \
+	      ${datadir}/pixmaps/svg-viewer.svg \
+	      ${datadir}/themes"
+FILES_librsvg-gtk = "${libdir}/gdk-pixbuf-2.0/*/*/*.so \
+                     ${datadir}/thumbnailers/librsvg.thumbnailer"
+
+PIXBUF_PACKAGES = "librsvg-gtk"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo.inc b/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo.inc
index 8e1e2e1..20e0d2c 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo.inc
+++ b/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo.inc
@@ -30,6 +30,7 @@
 PACKAGECONFIG[valgrind] = "--enable-valgrind=yes,--disable-valgrind,valgrind"
 PACKAGECONFIG[egl] = "--enable-egl=yes,--disable-egl,virtual/egl"
 PACKAGECONFIG[glesv2] = "--enable-glesv2,--disable-glesv2,virtual/libgles2"
+PACKAGECONFIG[opengl] = "--enable-gl,--disable-gl,virtual/libgl"
 
 #check for TARGET_FPU=soft and inform configure of the result so it can disable some floating points 
 require cairo-fpu.inc
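With the new PACKAGECONFIG[opengl] knob, the GL backend stays off unless a distro or local configuration asks for it. A minimal sketch of enabling it from a build's local.conf (illustrative only; it pulls in virtual/libgl):

    echo 'PACKAGECONFIG_append_pn-cairo = " opengl"' >> conf/local.conf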
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo/0001-cairo-Fix-CVE-2017-9814.patch b/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo/0001-cairo-Fix-CVE-2017-9814.patch
new file mode 100644
index 0000000..7d02ab9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo/0001-cairo-Fix-CVE-2017-9814.patch
@@ -0,0 +1,45 @@
+From 042421e9e3d266ad0bb7805132041ef51ad3234d Mon Sep 17 00:00:00 2001
+From: Adrian Johnson <ajohnson@redneon.com>
+Date: Wed, 16 Aug 2017 22:52:35 -0400
+Subject: [PATCH] cairo: Fix CVE-2017-9814
+
+The bug happens because in some scenarios the variable size can
+have a value of 0 at line 1288, and malloc(0) does not return
+NULL as some people might expect:
+
+    https://stackoverflow.com/questions/1073157/zero-size-malloc
+
+malloc(0) returns the smallest chunk possible, so the return at
+line 1290 is not executed and execution continues with an invalid
+map.
+
+Since the size is 0, the variable map is not initialized correctly
+by load_truetype_table. So, when map is accessed later, stale
+values from a freed chunk are used, which could allow an attacker
+to control the variable map.
+
+This patch has not been merged upstream yet.
+
+Upstream-Status: Backport [https://bugs.freedesktop.org/show_bug.cgi?id=101547]
+CVE: CVE-2017-9814
+Signed-off-by: Dengke Du <dengke.du@windriver.com>
+---
+ src/cairo-truetype-subset.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/cairo-truetype-subset.c b/src/cairo-truetype-subset.c
+index e3449a0..f77d11c 100644
+--- a/src/cairo-truetype-subset.c
++++ b/src/cairo-truetype-subset.c
+@@ -1285,7 +1285,7 @@ _cairo_truetype_reverse_cmap (cairo_scaled_font_t *scaled_font,
+ 	return CAIRO_INT_STATUS_UNSUPPORTED;
+ 
+     size = be16_to_cpu (map->length);
+-    map = malloc (size);
++    map = _cairo_malloc (size);
+     if (unlikely (map == NULL))
+ 	return _cairo_error (CAIRO_STATUS_NO_MEMORY);
+ 
+-- 
+2.8.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo/cairo-get_bitmap_surface-bsc1036789-CVE-2017-7475.diff b/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo/cairo-get_bitmap_surface-bsc1036789-CVE-2017-7475.diff
new file mode 100644
index 0000000..7aaad2e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo/cairo-get_bitmap_surface-bsc1036789-CVE-2017-7475.diff
@@ -0,0 +1,22 @@
+Cairo: Fix Denial-of-Service Attack due to Logical Problem in Program
+
+https://bugs.freedesktop.org/show_bug.cgi?id=100763
+
+CVE: CVE-2017-7475
+Upstream-Status: Submitted
+
+Signed-off-by: Fan Xin <fan.xin@jp.fujitsu.com>
+
+Index: cairo-1.15.4/src/cairo-ft-font.c
+===================================================================
+--- cairo-1.15.4.orig/src/cairo-ft-font.c
++++ cairo-1.15.4/src/cairo-ft-font.c
+@@ -1149,7 +1149,7 @@ _get_bitmap_surface (FT_Bitmap		     *bi
+     width = bitmap->width;
+     height = bitmap->rows;
+ 
+-    if (width == 0 || height == 0) {
++    if (width == 0 || height == 0 || bitmap->buffer == NULL) {
+ 	*surface = (cairo_image_surface_t *)
+ 	    cairo_image_surface_create_for_data (NULL, format, 0, 0, 0);
+ 	return (*surface)->base.status;
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo_1.14.10.bb b/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo_1.14.10.bb
new file mode 100644
index 0000000..fcdddc6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo_1.14.10.bb
@@ -0,0 +1,46 @@
+require cairo.inc
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=e73e999e0c72b5ac9012424fa157ad77"
+
+SRC_URI = "http://cairographics.org/releases/cairo-${PV}.tar.xz \
+           file://cairo-get_bitmap_surface-bsc1036789-CVE-2017-7475.diff \ 
+           file://0001-cairo-Fix-CVE-2017-9814.patch \
+          "
+
+SRC_URI[md5sum] = "146f5f4d0b4439fc3792fd3452b7b12a"
+SRC_URI[sha256sum] = "7e87878658f2c9951a14fc64114d4958c0e65ac47530b8ac3078b2ce41b66a09"
+
+PACKAGES =+ "cairo-gobject cairo-script-interpreter cairo-perf-utils"
+
+SUMMARY_${PN} = "The Cairo 2D vector graphics library"
+DESCRIPTION_${PN} = "Cairo is a multi-platform library providing anti-aliased \
+vector-based rendering for multiple target backends. Paths consist \
+of line segments and cubic splines and can be rendered at any width \
+with various join and cap styles. All colors may be specified with \
+optional translucence (opacity/alpha) and combined using the \
+extended Porter/Duff compositing algebra as found in the X Render \
+Extension."
+
+SUMMARY_cairo-gobject = "The Cairo library GObject wrapper library"
+DESCRIPTION_cairo-gobject = "A GObject wrapper library for the Cairo API."
+
+SUMMARY_cairo-script-interpreter = "The Cairo library script interpreter"
+DESCRIPTION_cairo-script-interpreter = "The Cairo script interpreter implements \
+CairoScript.  CairoScript is used by tracing utilities to enable the ability \
+to replay rendering."
+
+DESCRIPTION_cairo-perf-utils = "The Cairo library performance utilities"
+
+FILES_${PN} = "${libdir}/libcairo.so.*"
+FILES_${PN}-dev += "${libdir}/cairo/*.so"
+FILES_${PN}-gobject = "${libdir}/libcairo-gobject.so.*"
+FILES_${PN}-script-interpreter = "${libdir}/libcairo-script-interpreter.so.*"
+FILES_${PN}-perf-utils = "${bindir}/cairo-trace ${libdir}/cairo/*.la ${libdir}/cairo/libcairo-trace.so.*"
+
+do_install_append () {
+	rm -rf ${D}${bindir}/cairo-sphinx
+	rm -rf ${D}${libdir}/cairo/cairo-fdr*
+	rm -rf ${D}${libdir}/cairo/cairo-sphinx*
+	rm -rf ${D}${libdir}/cairo/.debug/cairo-fdr*
+	rm -rf ${D}${libdir}/cairo/.debug/cairo-sphinx*
+}
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo_1.14.8.bb b/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo_1.14.8.bb
deleted file mode 100644
index 5a3c74f..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/cairo/cairo_1.14.8.bb
+++ /dev/null
@@ -1,43 +0,0 @@
-require cairo.inc
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=e73e999e0c72b5ac9012424fa157ad77"
-
-SRC_URI = "http://cairographics.org/releases/cairo-${PV}.tar.xz"
-
-SRC_URI[md5sum] = "4ef0db2eacb271c74f8a3fd87822aa98"
-SRC_URI[sha256sum] = "d1f2d98ae9a4111564f6de4e013d639cf77155baf2556582295a0f00a9bc5e20"
-
-PACKAGES =+ "cairo-gobject cairo-script-interpreter cairo-perf-utils"
-
-SUMMARY_${PN} = "The Cairo 2D vector graphics library"
-DESCRIPTION_${PN} = "Cairo is a multi-platform library providing anti-aliased \
-vector-based rendering for multiple target backends. Paths consist \
-of line segments and cubic splines and can be rendered at any width \
-with various join and cap styles. All colors may be specified with \
-optional translucence (opacity/alpha) and combined using the \
-extended Porter/Duff compositing algebra as found in the X Render \
-Extension."
-
-SUMMARY_cairo-gobject = "The Cairo library GObject wrapper library"
-DESCRIPTION_cairo-gobject = "A GObject wrapper library for the Cairo API."
-
-SUMMARY_cairo-script-interpreter = "The Cairo library script interpreter"
-DESCRIPTION_cairo-script-interpreter = "The Cairo script interpreter implements \
-CairoScript.  CairoScript is used by tracing utilities to enable the ability \
-to replay rendering."
-
-DESCRIPTION_cairo-perf-utils = "The Cairo library performance utilities"
-
-FILES_${PN} = "${libdir}/libcairo.so.*"
-FILES_${PN}-dev += "${libdir}/cairo/*.so"
-FILES_${PN}-gobject = "${libdir}/libcairo-gobject.so.*"
-FILES_${PN}-script-interpreter = "${libdir}/libcairo-script-interpreter.so.*"
-FILES_${PN}-perf-utils = "${bindir}/cairo-trace ${libdir}/cairo/*.la ${libdir}/cairo/libcairo-trace.so.*"
-
-do_install_append () {
-	rm -rf ${D}${bindir}/cairo-sphinx
-	rm -rf ${D}${libdir}/cairo/cairo-fdr*
-	rm -rf ${D}${libdir}/cairo/cairo-sphinx*
-	rm -rf ${D}${libdir}/cairo/.debug/cairo-fdr*
-	rm -rf ${D}${libdir}/cairo/.debug/cairo-sphinx*
-}
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-1.0_1.26.0.bb b/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-1.0_1.26.0.bb
deleted file mode 100644
index dfa1cfe..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-1.0_1.26.0.bb
+++ /dev/null
@@ -1,10 +0,0 @@
-require clutter-1.0.inc
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
-
-SRC_URI[archive.md5sum] = "b065e9ca53d1f6bc1ec26aeb27338bb7"
-SRC_URI[archive.sha256sum] = "67514e7824b3feb4723164084b36d6ce1ae41cb3a9897e9f1a56c8334993ce06"
-SRC_URI += "file://install-examples.patch \
-            file://run-installed-tests-with-tap-output.patch \
-            file://0001-Remove-clutter.types-as-it-is-build-configuration-sp.patch \
-            file://run-ptest"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-1.0_1.26.2.bb b/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-1.0_1.26.2.bb
new file mode 100644
index 0000000..48b0501
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-1.0_1.26.2.bb
@@ -0,0 +1,10 @@
+require clutter-1.0.inc
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+
+SRC_URI[archive.md5sum] = "a03482cbacf735eca8c996f210a21ee5"
+SRC_URI[archive.sha256sum] = "e7233314983055e9018f94f56882e29e7fc34d8d35de030789fdcd9b2d0e2e56"
+SRC_URI += "file://install-examples.patch \
+            file://run-installed-tests-with-tap-output.patch \
+            file://0001-Remove-clutter.types-as-it-is-build-configuration-sp.patch \
+            file://run-ptest"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-gst-3.0.inc b/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-gst-3.0.inc
index 4c87798..26ae91c 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-gst-3.0.inc
+++ b/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-gst-3.0.inc
@@ -15,3 +15,7 @@
 FILES_${PN}          += "${libdir}/gstreamer-1.0/lib*.so"
 FILES_${PN}-dev      += "${libdir}/gstreamer-1.0/*.la"
 FILES_${PN}-examples  = "${bindir}/video-player ${bindir}/video-sink"
+
+# Needs to be disabled due to a dependency on gstreamer-plugins introspection files
+EXTRA_OECONF_append_mips64 = " --disable-introspection "
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-gst-3.0_3.0.22.bb b/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-gst-3.0_3.0.22.bb
deleted file mode 100644
index 6177c91..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-gst-3.0_3.0.22.bb
+++ /dev/null
@@ -1,7 +0,0 @@
-require clutter-gst-3.0.inc
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c \
-                    file://clutter-gst/clutter-gst.h;beginline=1;endline=24;md5=95baacba194e814c110ea3bdf25ddbf4"
-
-SRC_URI[archive.md5sum] = "88eea2dd5fc4357b5b18f1dfeed56c62"
-SRC_URI[archive.sha256sum] = "f1fc57fb32ea7e3d9234b58db35eb9ef3028cf0b266d85235f959edc0fe3dfd4"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-gst-3.0_3.0.24.bb b/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-gst-3.0_3.0.24.bb
new file mode 100644
index 0000000..ca5e0ae
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/clutter/clutter-gst-3.0_3.0.24.bb
@@ -0,0 +1,7 @@
+require clutter-gst-3.0.inc
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c \
+                    file://clutter-gst/clutter-gst.h;beginline=1;endline=24;md5=95baacba194e814c110ea3bdf25ddbf4"
+
+SRC_URI[archive.md5sum] = "3e145e24bb3c340eeeddafd18efe547d"
+SRC_URI[archive.sha256sum] = "e9f1c87d8f4c47062e952fb8008704f8942cf2d6f290688f3f7d13e83578cc6c"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/drm/libdrm_2.4.75.bb b/import-layers/yocto-poky/meta/recipes-graphics/drm/libdrm_2.4.75.bb
deleted file mode 100644
index 56963d4..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/drm/libdrm_2.4.75.bb
+++ /dev/null
@@ -1,51 +0,0 @@
-SUMMARY = "Userspace interface to the kernel DRM services"
-DESCRIPTION = "The runtime library for accessing the kernel DRM services.  DRM \
-stands for \"Direct Rendering Manager\", which is the kernel portion of the \
-\"Direct Rendering Infrastructure\" (DRI).  DRI is required for many hardware \
-accelerated OpenGL drivers."
-HOMEPAGE = "http://dri.freedesktop.org"
-SECTION = "x11/base"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://xf86drm.c;beginline=9;endline=32;md5=c8a3b961af7667c530816761e949dc71"
-PROVIDES = "drm"
-DEPENDS = "libpthread-stubs libpciaccess"
-
-SRC_URI = "http://dri.freedesktop.org/libdrm/${BP}.tar.bz2 \
-           file://installtests.patch \
-           file://fix_O_CLOEXEC_undeclared.patch \
-           file://0001-configure.ac-Allow-explicit-enabling-of-cunit-tests.patch \
-          "
-
-SRC_URI[md5sum] = "57b0589122ec4b8d5dfb9e430a21f0b3"
-SRC_URI[sha256sum] = "2d5a500eef412cc287d12268eed79d571e262d4957a2ec9258073f305985054f"
-
-inherit autotools pkgconfig manpages
-
-EXTRA_OECONF += "--disable-cairo-tests \
-                 --without-cunit \
-                 --enable-omap-experimental-api \
-                 --enable-etnaviv-experimental-api \
-                 --enable-install-test-programs \
-                 --disable-valgrind \
-                "
-PACKAGECONFIG[manpages] = "--enable-manpages, --disable-manpages, libxslt-native xmlto-native"
-
-ALLOW_EMPTY_${PN}-drivers = "1"
-PACKAGES =+ "${PN}-tests ${PN}-drivers ${PN}-radeon ${PN}-nouveau ${PN}-omap \
-             ${PN}-intel ${PN}-exynos ${PN}-kms ${PN}-freedreno ${PN}-amdgpu \
-             ${PN}-etnaviv"
-
-RRECOMMENDS_${PN}-drivers = "${PN}-radeon ${PN}-nouveau ${PN}-omap ${PN}-intel \
-                             ${PN}-exynos ${PN}-freedreno ${PN}-amdgpu \
-                             ${PN}-etnaviv"
-
-FILES_${PN}-tests = "${bindir}/*"
-FILES_${PN}-radeon = "${libdir}/libdrm_radeon.so.*"
-FILES_${PN}-nouveau = "${libdir}/libdrm_nouveau.so.*"
-FILES_${PN}-omap = "${libdir}/libdrm_omap.so.*"
-FILES_${PN}-intel = "${libdir}/libdrm_intel.so.*"
-FILES_${PN}-exynos = "${libdir}/libdrm_exynos.so.*"
-FILES_${PN}-kms = "${libdir}/libkms*.so.*"
-FILES_${PN}-freedreno = "${libdir}/libdrm_freedreno.so.*"
-FILES_${PN}-amdgpu = "${libdir}/libdrm_amdgpu.so.*"
-FILES_${PN}-etnaviv = "${libdir}/libdrm_etnaviv.so.*"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/drm/libdrm_2.4.83.bb b/import-layers/yocto-poky/meta/recipes-graphics/drm/libdrm_2.4.83.bb
new file mode 100644
index 0000000..a5cb75c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/drm/libdrm_2.4.83.bb
@@ -0,0 +1,51 @@
+SUMMARY = "Userspace interface to the kernel DRM services"
+DESCRIPTION = "The runtime library for accessing the kernel DRM services.  DRM \
+stands for \"Direct Rendering Manager\", which is the kernel portion of the \
+\"Direct Rendering Infrastructure\" (DRI).  DRI is required for many hardware \
+accelerated OpenGL drivers."
+HOMEPAGE = "http://dri.freedesktop.org"
+SECTION = "x11/base"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://xf86drm.c;beginline=9;endline=32;md5=c8a3b961af7667c530816761e949dc71"
+PROVIDES = "drm"
+DEPENDS = "libpthread-stubs libpciaccess"
+
+SRC_URI = "http://dri.freedesktop.org/libdrm/${BP}.tar.bz2 \
+           file://installtests.patch \
+           file://fix_O_CLOEXEC_undeclared.patch \
+           file://0001-configure.ac-Allow-explicit-enabling-of-cunit-tests.patch \
+          "
+
+SRC_URI[md5sum] = "23800953ed7564988872e1e8c61fde31"
+SRC_URI[sha256sum] = "03a52669da60ead62548a35bc430aafb6c2d8dd21ec9dba3a90f96eff5fe36d6"
+
+inherit autotools pkgconfig manpages
+
+EXTRA_OECONF += "--disable-cairo-tests \
+                 --without-cunit \
+                 --enable-omap-experimental-api \
+                 --enable-etnaviv-experimental-api \
+                 --enable-install-test-programs \
+                 --disable-valgrind \
+                "
+PACKAGECONFIG[manpages] = "--enable-manpages, --disable-manpages, libxslt-native xmlto-native"
+
+ALLOW_EMPTY_${PN}-drivers = "1"
+PACKAGES =+ "${PN}-tests ${PN}-drivers ${PN}-radeon ${PN}-nouveau ${PN}-omap \
+             ${PN}-intel ${PN}-exynos ${PN}-kms ${PN}-freedreno ${PN}-amdgpu \
+             ${PN}-etnaviv"
+
+RRECOMMENDS_${PN}-drivers = "${PN}-radeon ${PN}-nouveau ${PN}-omap ${PN}-intel \
+                             ${PN}-exynos ${PN}-freedreno ${PN}-amdgpu \
+                             ${PN}-etnaviv"
+
+FILES_${PN}-tests = "${bindir}/*"
+FILES_${PN}-radeon = "${libdir}/libdrm_radeon.so.*"
+FILES_${PN}-nouveau = "${libdir}/libdrm_nouveau.so.*"
+FILES_${PN}-omap = "${libdir}/libdrm_omap.so.*"
+FILES_${PN}-intel = "${libdir}/libdrm_intel.so.*"
+FILES_${PN}-exynos = "${libdir}/libdrm_exynos.so.*"
+FILES_${PN}-kms = "${libdir}/libkms*.so.*"
+FILES_${PN}-freedreno = "${libdir}/libdrm_freedreno.so.*"
+FILES_${PN}-amdgpu = "${libdir}/libdrm_amdgpu.so.*"
+FILES_${PN}-etnaviv = "${libdir}/libdrm_etnaviv.so.*"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/fontconfig/fontconfig/0001-Avoid-conflicts-with-integer-width-macros-from-TS-18.patch b/import-layers/yocto-poky/meta/recipes-graphics/fontconfig/fontconfig/0001-Avoid-conflicts-with-integer-width-macros-from-TS-18.patch
deleted file mode 100644
index cad7170..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/fontconfig/fontconfig/0001-Avoid-conflicts-with-integer-width-macros-from-TS-18.patch
+++ /dev/null
@@ -1,72 +0,0 @@
-From 20cddc824c6501c2082cac41b162c34cd5fcc530 Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sun, 11 Dec 2016 14:32:00 -0800
-Subject: [PATCH] Avoid conflicts with integer width macros from TS
- 18661-1:2014
-
-glibc 2.25+ has now defined these macros in <limits.h>
-https://sourceware.org/git/?p=glibc.git;a=commit;h=5b17fd0da62bf923cb61d1bb7b08cf2e1f1f9c1a
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
-Upstream-Status: Submitted
-
- fontconfig/fontconfig.h | 2 +-
- src/fcobjs.h            | 2 +-
- src/fcobjshash.gperf    | 2 +-
- src/fcobjshash.h        | 2 +-
- 4 files changed, 4 insertions(+), 4 deletions(-)
-
-Index: fontconfig-2.12.1/fontconfig/fontconfig.h
-===================================================================
---- fontconfig-2.12.1.orig/fontconfig/fontconfig.h
-+++ fontconfig-2.12.1/fontconfig/fontconfig.h
-@@ -128,7 +128,8 @@ typedef int		FcBool;
- #define FC_USER_CACHE_FILE	    ".fonts.cache-" FC_CACHE_VERSION
- 
- /* Adjust outline rasterizer */
--#define FC_CHAR_WIDTH	    "charwidth"	/* Int */
-+#define FC_CHARWIDTH	    "charwidth"	/* Int */
-+#define FC_CHAR_WIDTH	    FC_CHARWIDTH
- #define FC_CHAR_HEIGHT	    "charheight"/* Int */
- #define FC_MATRIX	    "matrix"    /* FcMatrix */
- 
-Index: fontconfig-2.12.1/src/fcobjs.h
-===================================================================
---- fontconfig-2.12.1.orig/src/fcobjs.h
-+++ fontconfig-2.12.1/src/fcobjs.h
-@@ -51,7 +51,7 @@ FC_OBJECT (DPI,			FcTypeDouble,	NULL)
- FC_OBJECT (RGBA,		FcTypeInteger,	NULL)
- FC_OBJECT (SCALE,		FcTypeDouble,	NULL)
- FC_OBJECT (MINSPACE,		FcTypeBool,	NULL)
--FC_OBJECT (CHAR_WIDTH,		FcTypeInteger,	NULL)
-+FC_OBJECT (CHARWIDTH,		FcTypeInteger,	NULL)
- FC_OBJECT (CHAR_HEIGHT,		FcTypeInteger,	NULL)
- FC_OBJECT (MATRIX,		FcTypeMatrix,	NULL)
- FC_OBJECT (CHARSET,		FcTypeCharSet,	FcCompareCharSet)
-Index: fontconfig-2.12.1/src/fcobjshash.gperf
-===================================================================
---- fontconfig-2.12.1.orig/src/fcobjshash.gperf
-+++ fontconfig-2.12.1/src/fcobjshash.gperf
-@@ -44,7 +44,7 @@ int id;
- "rgba",FC_RGBA_OBJECT
- "scale",FC_SCALE_OBJECT
- "minspace",FC_MINSPACE_OBJECT
--"charwidth",FC_CHAR_WIDTH_OBJECT
-+"charwidth",FC_CHARWIDTH_OBJECT
- "charheight",FC_CHAR_HEIGHT_OBJECT
- "matrix",FC_MATRIX_OBJECT
- "charset",FC_CHARSET_OBJECT
-Index: fontconfig-2.12.1/src/fcobjshash.h
-===================================================================
---- fontconfig-2.12.1.orig/src/fcobjshash.h
-+++ fontconfig-2.12.1/src/fcobjshash.h
-@@ -284,7 +284,7 @@ FcObjectTypeLookup (register const char
-       {(int)(long)&((struct FcObjectTypeNamePool_t *)0)->FcObjectTypeNamePool_str43,FC_CHARSET_OBJECT},
-       {-1},
- #line 47 "fcobjshash.gperf"
--      {(int)(long)&((struct FcObjectTypeNamePool_t *)0)->FcObjectTypeNamePool_str45,FC_CHAR_WIDTH_OBJECT},
-+      {(int)(long)&((struct FcObjectTypeNamePool_t *)0)->FcObjectTypeNamePool_str45,FC_CHARWIDTH_OBJECT},
- #line 48 "fcobjshash.gperf"
-       {(int)(long)&((struct FcObjectTypeNamePool_t *)0)->FcObjectTypeNamePool_str46,FC_CHAR_HEIGHT_OBJECT},
- #line 55 "fcobjshash.gperf"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/fontconfig/fontconfig_2.12.1.bb b/import-layers/yocto-poky/meta/recipes-graphics/fontconfig/fontconfig_2.12.1.bb
deleted file mode 100644
index 95b066c..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/fontconfig/fontconfig_2.12.1.bb
+++ /dev/null
@@ -1,50 +0,0 @@
-SUMMARY = "Generic font configuration library"
-DESCRIPTION = "Fontconfig is a font configuration and customization library, which \
-does not depend on the X Window System. It is designed to locate \
-fonts within the system and select them according to requirements \
-specified by applications. \
-Fontconfig is not a rasterization library, nor does it impose a \
-particular rasterization library on the application. The X-specific \
-library 'Xft' uses fontconfig along with freetype to specify and \
-rasterize fonts."
-
-HOMEPAGE = "http://www.fontconfig.org"
-BUGTRACKER = "https://bugs.freedesktop.org/enter_bug.cgi?product=fontconfig"
-
-LICENSE = "MIT-style & MIT & PD"
-LIC_FILES_CHKSUM = "file://COPYING;md5=7a0449e9bc5370402a94c00204beca3d \
-                    file://src/fcfreetype.c;endline=45;md5=5d9513e3196a1fbfdfa94051c09dfc84 \
-                    file://src/fccache.c;beginline=1360;endline=1375;md5=0326cfeb4a7333dd4dd25fbbc4b9f27f"
-
-SECTION = "libs"
-
-DEPENDS = "expat freetype zlib"
-
-SRC_URI = "http://fontconfig.org/release/fontconfig-${PV}.tar.gz \
-           file://revert-static-pkgconfig.patch \
-           file://0001-Avoid-conflicts-with-integer-width-macros-from-TS-18.patch \
-           "
-SRC_URI[md5sum] = "ce55e525c37147eee14cc2de6cc09f6c"
-SRC_URI[sha256sum] = "a9f42d03949f948a3a4f762287dbc16e53a927c91a07ee64207ebd90a9e5e292"
-
-PACKAGES =+ "fontconfig-utils"
-FILES_${PN} =+ "${datadir}/xml/*"
-FILES_fontconfig-utils = "${bindir}/*"
-
-# Work around past breakage in debian.bbclass
-RPROVIDES_fontconfig-utils = "libfontconfig-utils"
-RREPLACES_fontconfig-utils = "libfontconfig-utils"
-RCONFLICTS_fontconfig-utils = "libfontconfig-utils"
-DEBIAN_NOAUTONAME_fontconfig-utils = "1"
-
-inherit autotools pkgconfig relative_symlinks
-
-FONTCONFIG_CACHE_DIR ?= "${localstatedir}/cache/fontconfig"
-
-# comma separated list of additional directories
-# /usr/share/fonts is already included by default (you can change it with --with-default-fonts)
-FONTCONFIG_FONT_DIRS ?= "no"
-
-EXTRA_OECONF = " --disable-docs --with-default-fonts=${datadir}/fonts --with-cache-dir=${FONTCONFIG_CACHE_DIR} --with-add-fonts=${FONTCONFIG_FONT_DIRS}"
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/fontconfig/fontconfig_2.12.4.bb b/import-layers/yocto-poky/meta/recipes-graphics/fontconfig/fontconfig_2.12.4.bb
new file mode 100644
index 0000000..a058b35
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/fontconfig/fontconfig_2.12.4.bb
@@ -0,0 +1,54 @@
+SUMMARY = "Generic font configuration library"
+DESCRIPTION = "Fontconfig is a font configuration and customization library, which \
+does not depend on the X Window System. It is designed to locate \
+fonts within the system and select them according to requirements \
+specified by applications. \
+Fontconfig is not a rasterization library, nor does it impose a \
+particular rasterization library on the application. The X-specific \
+library 'Xft' uses fontconfig along with freetype to specify and \
+rasterize fonts."
+
+HOMEPAGE = "http://www.fontconfig.org"
+BUGTRACKER = "https://bugs.freedesktop.org/enter_bug.cgi?product=fontconfig"
+
+LICENSE = "MIT-style & MIT & PD"
+LIC_FILES_CHKSUM = "file://COPYING;md5=7a0449e9bc5370402a94c00204beca3d \
+                    file://src/fcfreetype.c;endline=45;md5=5d9513e3196a1fbfdfa94051c09dfc84 \
+                    file://src/fccache.c;beginline=1367;endline=1382;md5=0326cfeb4a7333dd4dd25fbbc4b9f27f"
+
+SECTION = "libs"
+
+DEPENDS = "expat freetype zlib gperf-native"
+
+SRC_URI = "http://fontconfig.org/release/fontconfig-${PV}.tar.gz \
+           file://revert-static-pkgconfig.patch \
+           "
+SRC_URI[md5sum] = "4fb01fc3f41760c41c69e37cc784b658"
+SRC_URI[sha256sum] = "fd5a6a663f4c4a00e196523902626654dd0c4a78686cbc6e472f338e50fdf806"
+
+do_configure_prepend() {
+    # work around https://bugs.freedesktop.org/show_bug.cgi?id=101280
+    rm -f ${S}/src/fcobjshash.h ${S}/src/fcobjshash.gperf
+}
+
+PACKAGES =+ "fontconfig-utils"
+FILES_${PN} =+ "${datadir}/xml/*"
+FILES_fontconfig-utils = "${bindir}/*"
+
+# Work around past breakage in debian.bbclass
+RPROVIDES_fontconfig-utils = "libfontconfig-utils"
+RREPLACES_fontconfig-utils = "libfontconfig-utils"
+RCONFLICTS_fontconfig-utils = "libfontconfig-utils"
+DEBIAN_NOAUTONAME_fontconfig-utils = "1"
+
+inherit autotools pkgconfig relative_symlinks
+
+FONTCONFIG_CACHE_DIR ?= "${localstatedir}/cache/fontconfig"
+
+# comma separated list of additional directories
+# /usr/share/fonts is already included by default (you can change it with --with-default-fonts)
+FONTCONFIG_FONT_DIRS ?= "no"
+
+EXTRA_OECONF = " --disable-docs --with-default-fonts=${datadir}/fonts --with-cache-dir=${FONTCONFIG_CACHE_DIR} --with-add-fonts=${FONTCONFIG_FONT_DIRS}"
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/freetype/freetype_2.7.1.bb b/import-layers/yocto-poky/meta/recipes-graphics/freetype/freetype_2.7.1.bb
deleted file mode 100644
index 544f835..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/freetype/freetype_2.7.1.bb
+++ /dev/null
@@ -1,49 +0,0 @@
-SUMMARY = "Freetype font rendering library"
-DESCRIPTION = "FreeType is a software font engine that is designed to be small, efficient, \
-highly customizable, and portable while capable of producing high-quality output (glyph \
-images). It can be used in graphics libraries, display servers, font conversion tools, text \
-image generation tools, and many other products as well."
-HOMEPAGE = "http://www.freetype.org/"
-BUGTRACKER = "https://savannah.nongnu.org/bugs/?group=freetype"
-SECTION = "libs"
-
-LICENSE = "FreeType | GPLv2+"
-LIC_FILES_CHKSUM = "file://docs/LICENSE.TXT;md5=4af6221506f202774ef74f64932878a1 \
-                    file://docs/FTL.TXT;md5=13b25413274c9b3b09b63e4028216ff4 \
-                    file://docs/GPLv2.TXT;md5=8ef380476f642c20ebf40fecb0add2ec"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/freetype/freetype-${PV}.tar.bz2 \
-           file://use-right-libtool.patch"
-
-UPSTREAM_CHECK_URI = "http://sourceforge.net/projects/freetype/files/freetype2/"
-UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)"
-
-SRC_URI[md5sum] = "b3230110e0cab777e0df7631837ac36e"
-SRC_URI[sha256sum] = "3a3bb2c4e15ffb433f2032f50a5b5a92558206822e22bfe8cbe339af4aa82f88"
-
-inherit autotools pkgconfig binconfig-disabled multilib_header
-
-# Adapt autotools to work with the minimal autoconf usage in freetype
-AUTOTOOLS_SCRIPT_PATH = "${S}/builds/unix"
-CONFIGURE_SCRIPT = "${S}/configure"
-EXTRA_AUTORECONF += "--exclude=autoheader --exclude=automake"
-
-PACKAGECONFIG ??= "zlib"
-
-PACKAGECONFIG[bzip2] = "--with-bzip2,--without-bzip2,bzip2"
-# harfbuzz results in a circular dependency so enabling is non-trivial
-PACKAGECONFIG[harfbuzz] = "--with-harfbuzz,--without-harfbuzz,harfbuzz"
-PACKAGECONFIG[pixmap] = "--with-png,--without-png,libpng"
-PACKAGECONFIG[zlib] = "--with-zlib,--without-zlib,zlib"
-
-EXTRA_OECONF = "CC_BUILD='${BUILD_CC}'"
-
-TARGET_CPPFLAGS += "-D_FILE_OFFSET_BITS=64"
-
-do_install_append() {
-	oe_multilib_header freetype2/freetype/config/ftconfig.h
-}
-
-BINCONFIG = "${bindir}/freetype-config"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/freetype/freetype_2.8.bb b/import-layers/yocto-poky/meta/recipes-graphics/freetype/freetype_2.8.bb
new file mode 100644
index 0000000..8e88e84
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/freetype/freetype_2.8.bb
@@ -0,0 +1,49 @@
+SUMMARY = "Freetype font rendering library"
+DESCRIPTION = "FreeType is a software font engine that is designed to be small, efficient, \
+highly customizable, and portable while capable of producing high-quality output (glyph \
+images). It can be used in graphics libraries, display servers, font conversion tools, text \
+image generation tools, and many other products as well."
+HOMEPAGE = "http://www.freetype.org/"
+BUGTRACKER = "https://savannah.nongnu.org/bugs/?group=freetype"
+SECTION = "libs"
+
+LICENSE = "FreeType | GPLv2+"
+LIC_FILES_CHKSUM = "file://docs/LICENSE.TXT;md5=4af6221506f202774ef74f64932878a1 \
+                    file://docs/FTL.TXT;md5=13b25413274c9b3b09b63e4028216ff4 \
+                    file://docs/GPLv2.TXT;md5=8ef380476f642c20ebf40fecb0add2ec"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/freetype/freetype-${PV}.tar.bz2 \
+           file://use-right-libtool.patch"
+
+UPSTREAM_CHECK_URI = "http://sourceforge.net/projects/freetype/files/freetype2/"
+UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)"
+
+SRC_URI[md5sum] = "2413ac3eaf508ada019c63959ea81a92"
+SRC_URI[sha256sum] = "a3c603ed84c3c2495f9c9331fe6bba3bb0ee65e06ec331e0a0fb52158291b40b"
+
+inherit autotools pkgconfig binconfig-disabled multilib_header
+
+# Adapt autotools to work with the minimal autoconf usage in freetype
+AUTOTOOLS_SCRIPT_PATH = "${S}/builds/unix"
+CONFIGURE_SCRIPT = "${S}/configure"
+EXTRA_AUTORECONF += "--exclude=autoheader --exclude=automake"
+
+PACKAGECONFIG ??= "zlib"
+
+PACKAGECONFIG[bzip2] = "--with-bzip2,--without-bzip2,bzip2"
+# harfbuzz results in a circular dependency so enabling is non-trivial
+PACKAGECONFIG[harfbuzz] = "--with-harfbuzz,--without-harfbuzz,harfbuzz"
+PACKAGECONFIG[pixmap] = "--with-png,--without-png,libpng"
+PACKAGECONFIG[zlib] = "--with-zlib,--without-zlib,zlib"
+
+EXTRA_OECONF = "CC_BUILD='${BUILD_CC}'"
+
+TARGET_CPPFLAGS += "-D_FILE_OFFSET_BITS=64"
+
+do_install_append() {
+	oe_multilib_header freetype2/freetype/config/ftconfig.h
+}
+
+BINCONFIG = "${bindir}/freetype-config"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/fstests/fstests_git.bb b/import-layers/yocto-poky/meta/recipes-graphics/fstests/fstests_git.bb
index 95c33f4..9e09cd2 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/fstests/fstests_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/fstests/fstests_git.bb
@@ -8,6 +8,7 @@
 PV = "0.1+git${SRCPV}"
 
 SRC_URI = "git://git.yoctoproject.org/${BPN}"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 LIC_FILES_CHKSUM = "file://test-pango-gdk.c;endline=24;md5=1ee74ec851ecda57eb7ac6cc180f7655"
 
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/glew/glew_2.0.0.bb b/import-layers/yocto-poky/meta/recipes-graphics/glew/glew_2.0.0.bb
index 1c93ca0..f2ab756 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/glew/glew_2.0.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/glew/glew_2.0.0.bb
@@ -24,6 +24,7 @@
 PACKAGECONFIG[opengl] = "SYSTEM='linux',,virtual/libx11 virtual/libgl libglu libxext libxi libxmu"
 PACKAGECONFIG[egl-gles2] = "SYSTEM='linux-egl' GLEW_NO_GLU='-DGLEW_NO_GLU' LDFLAGS.GL='-lEGL -lGLESv2',,virtual/egl virtual/libgles2"
 
+CFLAGS += "-D_GNU_SOURCE"
 # Override SYSTEM (via PACKAGECONFIG_CONFARGS) to avoid calling config.guess,
 # we're cross-compiling. Pass our CFLAGS via POPT as that's the optimisation
 # variable and safely overwritten.
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/harfbuzz/harfbuzz_1.4.1.bb b/import-layers/yocto-poky/meta/recipes-graphics/harfbuzz/harfbuzz_1.4.1.bb
deleted file mode 100644
index fc4773e..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/harfbuzz/harfbuzz_1.4.1.bb
+++ /dev/null
@@ -1,39 +0,0 @@
-SUMMARY = "Text shaping library"
-DESCRIPTION = "HarfBuzz is an OpenType text shaping engine."
-HOMEPAGE = "http://www.freedesktop.org/wiki/Software/HarfBuzz"
-BUGTRACKER = "https://bugs.freedesktop.org/enter_bug.cgi?product=HarfBuzz"
-SECTION = "libs"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=e021dd6dda6ff1e6b1044002fc662b9b \
-                    file://src/hb-ucdn/COPYING;md5=994ba0f1295f15b4bda4999a5bbeddef \
-"
-
-DEPENDS = "glib-2.0 cairo fontconfig freetype"
-
-SRC_URI = "http://www.freedesktop.org/software/harfbuzz/release/${BP}.tar.bz2"
-
-SRC_URI[md5sum] = "7b3f445d0a58485a31c18c03ce9b4e3c"
-SRC_URI[sha256sum] = "85a27fab639a1d651737dcb6b69e4101e3fd09522fdfdcb793df810b5cb315bd"
-
-inherit autotools pkgconfig lib_package gtk-doc
-
-PACKAGECONFIG ??= "icu"
-PACKAGECONFIG[icu] = "--with-icu,--without-icu,icu"
-
-EXTRA_OECONF = " \
-    --with-cairo \
-    --with-fontconfig \
-    --with-freetype \
-    --with-glib \
-    --without-graphite2 \
-"
-
-PACKAGES =+ "${PN}-icu ${PN}-icu-dev"
-
-FILES_${PN}-icu = "${libdir}/libharfbuzz-icu.so.*"
-FILES_${PN}-icu-dev = "${libdir}/libharfbuzz-icu.la \
-                       ${libdir}/libharfbuzz-icu.so \
-                       ${libdir}/pkgconfig/harfbuzz-icu.pc \
-"
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/harfbuzz/harfbuzz_1.4.8.bb b/import-layers/yocto-poky/meta/recipes-graphics/harfbuzz/harfbuzz_1.4.8.bb
new file mode 100644
index 0000000..4f5e5a3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/harfbuzz/harfbuzz_1.4.8.bb
@@ -0,0 +1,39 @@
+SUMMARY = "Text shaping library"
+DESCRIPTION = "HarfBuzz is an OpenType text shaping engine."
+HOMEPAGE = "http://www.freedesktop.org/wiki/Software/HarfBuzz"
+BUGTRACKER = "https://bugs.freedesktop.org/enter_bug.cgi?product=HarfBuzz"
+SECTION = "libs"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=e021dd6dda6ff1e6b1044002fc662b9b \
+                    file://src/hb-ucdn/COPYING;md5=994ba0f1295f15b4bda4999a5bbeddef \
+"
+
+DEPENDS = "glib-2.0 cairo fontconfig freetype"
+
+SRC_URI = "http://www.freedesktop.org/software/harfbuzz/release/${BP}.tar.bz2"
+
+SRC_URI[md5sum] = "d1aa446e1e65717311c15d9ac0cf31ee"
+SRC_URI[sha256sum] = "ccec4930ff0bb2d0c40aee203075447954b64a8c2695202413cc5e428c907131"
+
+inherit autotools pkgconfig lib_package gtk-doc
+
+PACKAGECONFIG ??= "icu"
+PACKAGECONFIG[icu] = "--with-icu,--without-icu,icu"
+
+EXTRA_OECONF = " \
+    --with-cairo \
+    --with-fontconfig \
+    --with-freetype \
+    --with-glib \
+    --without-graphite2 \
+"
+
+PACKAGES =+ "${PN}-icu ${PN}-icu-dev"
+
+FILES_${PN}-icu = "${libdir}/libharfbuzz-icu.so.*"
+FILES_${PN}-icu-dev = "${libdir}/libharfbuzz-icu.la \
+                       ${libdir}/libharfbuzz-icu.so \
+                       ${libdir}/pkgconfig/harfbuzz-icu.pc \
+"
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/jpeg/libjpeg-turbo/fix-mips.patch b/import-layers/yocto-poky/meta/recipes-graphics/jpeg/libjpeg-turbo/fix-mips.patch
deleted file mode 100644
index 4d41237..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/jpeg/libjpeg-turbo/fix-mips.patch
+++ /dev/null
@@ -1,45 +0,0 @@
-Fix a regression that causes the MIPS code from building.
-
-Upstream-Status: Backport
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-From 7bfb22af123ac10798a9a4c9ec7b23e5065db35e Mon Sep 17 00:00:00 2001
-From: DRC <information@libjpeg-turbo.org>
-Date: Mon, 26 Sep 2016 17:59:14 -0500
-Subject: [PATCH] Fix broken MIPS build
-
-Regression introduced by 9055fb408dcb585ce9392d395e16630d51002152
-
-Fixes #104
----
- ChangeLog.md      | 3 +++
- simd/jsimd_mips.c | 2 ++
- 2 files changed, 5 insertions(+)
-
-diff --git a/ChangeLog.md b/ChangeLog.md
-index e2b9df3..71ddcaa 100644
---- a/ChangeLog.md
-+++ b/ChangeLog.md
-@@ -6,6 +6,9 @@
- 1. Fixed a regression introduced by 1.5.1[7] that prevented libjpeg-turbo from
- building with Android NDK platforms prior to android-21 (5.0).
- 
-+2. Fixed a regression introduced by 1.5.1[1] that prevented the MIPS DSPR2 SIMD
-+code in libjpeg-turbo from building.
-+
- 
- 1.5.1
- =====
-diff --git a/simd/jsimd_mips.c b/simd/jsimd_mips.c
-index 63b8115..02e90cd 100644
---- a/simd/jsimd_mips.c
-+++ b/simd/jsimd_mips.c
-@@ -63,6 +63,8 @@ parse_proc_cpuinfo(const char* search_string)
- LOCAL(void)
- init_simd (void)
- {
-+  char *env = NULL;
-+
-   if (simd_support != ~0U)
-     return;
- 
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/jpeg/libjpeg-turbo_1.5.1.bb b/import-layers/yocto-poky/meta/recipes-graphics/jpeg/libjpeg-turbo_1.5.1.bb
deleted file mode 100644
index de2eeaf..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/jpeg/libjpeg-turbo_1.5.1.bb
+++ /dev/null
@@ -1,51 +0,0 @@
-SUMMARY = "Hardware accelerated JPEG compression/decompression library"
-DESCRIPTION = "libjpeg-turbo is a derivative of libjpeg that uses SIMD instructions (MMX, SSE2, NEON) to accelerate baseline JPEG compression and decompression"
-HOMEPAGE = "http://libjpeg-turbo.org/"
-
-LICENSE = "BSD-3-Clause"
-LIC_FILES_CHKSUM = "file://cdjpeg.h;endline=13;md5=05bab7c7ad899d85bfba60da1a1271f2 \
-                    file://jpeglib.h;endline=16;md5=f67d70e547a2662c079781c72f877f72 \
-                    file://djpeg.c;endline=11;md5=b90b6d2b4119f9e5807cd273f525d2af \
-"
-DEPENDS_append_x86-64_class-target = " nasm-native"
-DEPENDS_append_x86_class-target    = " nasm-native"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BPN}-${PV}.tar.gz \
-           file://fix-mips.patch"
-SRC_URI[md5sum] = "55deb139b0cac3c8200b75d485fc13f3"
-SRC_URI[sha256sum] = "41429d3d253017433f66e3d472b8c7d998491d2f41caa7306b8d9a6f2a2c666c"
-UPSTREAM_CHECK_URI = "http://sourceforge.net/projects/libjpeg-turbo/files/"
-UPSTREAM_CHECK_REGEX = "/libjpeg-turbo/files/(?P<pver>(\d+[\.\-_]*)+)/"
-
-PE= "1"
-
-# Drop-in replacement for jpeg
-PROVIDES = "jpeg"
-RPROVIDES_${PN} += "jpeg"
-RREPLACES_${PN} += "jpeg"
-RCONFLICTS_${PN} += "jpeg"
-
-inherit autotools pkgconfig
-
-# Add nasm-native dependency consistently for all build arches is hard
-EXTRA_OECONF_append_class-native = " --without-simd"
-
-# Work around missing x32 ABI support
-EXTRA_OECONF_append_class-target = " ${@bb.utils.contains("TUNE_FEATURES", "mx32", "--without-simd", "", d)}"
-
-# Work around missing non-floating point ABI support in MIPS
-EXTRA_OECONF_append_class-target = " ${@bb.utils.contains("MIPSPKGSFX_FPU", "-nf", "--without-simd", "", d)}"
-
-# Provide a workaround if Altivec unit is not present in PPC
-EXTRA_OECONF_append_class-target_powerpc = " ${@bb.utils.contains("TUNE_FEATURES", "altivec", "", "--without-simd", d)}"
-EXTRA_OECONF_append_class-target_powerpc64 = " ${@bb.utils.contains("TUNE_FEATURES", "altivec", "", "--without-simd", d)}"
-
-PACKAGES =+ "jpeg-tools libturbojpeg"
-
-DESCRIPTION_jpeg-tools = "The jpeg-tools package includes client programs to access libjpeg functionality.  These tools allow for the compression, decompression, transformation and display of JPEG files and benchmarking of the libjpeg library."
-FILES_jpeg-tools = "${bindir}/*"
-
-DESCRIPTION_libturbojpeg = "A SIMD-accelerated JPEG codec which provides only TurboJPEG APIs"
-FILES_libturbojpeg = "${libdir}/libturbojpeg.so.*"
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/jpeg/libjpeg-turbo_1.5.2.bb b/import-layers/yocto-poky/meta/recipes-graphics/jpeg/libjpeg-turbo_1.5.2.bb
new file mode 100644
index 0000000..58646d3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/jpeg/libjpeg-turbo_1.5.2.bb
@@ -0,0 +1,51 @@
+SUMMARY = "Hardware accelerated JPEG compression/decompression library"
+DESCRIPTION = "libjpeg-turbo is a derivative of libjpeg that uses SIMD instructions (MMX, SSE2, NEON) to accelerate baseline JPEG compression and decompression"
+HOMEPAGE = "http://libjpeg-turbo.org/"
+
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://cdjpeg.h;endline=13;md5=05bab7c7ad899d85bfba60da1a1271f2 \
+                    file://jpeglib.h;endline=16;md5=f67d70e547a2662c079781c72f877f72 \
+                    file://djpeg.c;endline=11;md5=b90b6d2b4119f9e5807cd273f525d2af \
+"
+DEPENDS_append_x86-64_class-target = " nasm-native"
+DEPENDS_append_x86_class-target    = " nasm-native"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BPN}-${PV}.tar.gz"
+
+SRC_URI[md5sum] = "6b4923e297a7eaa255f08511017a8818"
+SRC_URI[sha256sum] = "9098943b270388727ae61de82adec73cf9f0dbb240b3bc8b172595ebf405b528"
+UPSTREAM_CHECK_URI = "http://sourceforge.net/projects/libjpeg-turbo/files/"
+UPSTREAM_CHECK_REGEX = "/libjpeg-turbo/files/(?P<pver>(\d+[\.\-_]*)+)/"
+
+PE= "1"
+
+# Drop-in replacement for jpeg
+PROVIDES = "jpeg"
+RPROVIDES_${PN} += "jpeg"
+RREPLACES_${PN} += "jpeg"
+RCONFLICTS_${PN} += "jpeg"
+
+inherit autotools pkgconfig
+
+# Adding the nasm-native dependency consistently for all build arches is hard
+EXTRA_OECONF_append_class-native = " --without-simd"
+
+# Work around missing x32 ABI support
+EXTRA_OECONF_append_class-target = " ${@bb.utils.contains("TUNE_FEATURES", "mx32", "--without-simd", "", d)}"
+
+# Work around missing non-floating point ABI support in MIPS
+EXTRA_OECONF_append_class-target = " ${@bb.utils.contains("MIPSPKGSFX_FPU", "-nf", "--without-simd", "", d)}"
+
+# Provide a workaround if Altivec unit is not present in PPC
+EXTRA_OECONF_append_class-target_powerpc = " ${@bb.utils.contains("TUNE_FEATURES", "altivec", "", "--without-simd", d)}"
+EXTRA_OECONF_append_class-target_powerpc64 = " ${@bb.utils.contains("TUNE_FEATURES", "altivec", "", "--without-simd", d)}"
+
+PACKAGES =+ "jpeg-tools libturbojpeg"
+
+DESCRIPTION_jpeg-tools = "The jpeg-tools package includes client programs to access libjpeg functionality.  These tools allow for the compression, decompression, transformation and display of JPEG files and benchmarking of the libjpeg library."
+FILES_jpeg-tools = "${bindir}/*"
+
+DESCRIPTION_libturbojpeg = "A SIMD-accelerated JPEG codec which provides only TurboJPEG APIs"
+FILES_libturbojpeg = "${libdir}/libturbojpeg.so.*"
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/kmscube/kmscube_git.bb b/import-layers/yocto-poky/meta/recipes-graphics/kmscube/kmscube_git.bb
new file mode 100644
index 0000000..cab68ff
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/kmscube/kmscube_git.bb
@@ -0,0 +1,17 @@
+DESCRIPTION = "Demo application to showcase 3D graphics using kms and gbm"
+HOMEPAGE = "https://cgit.freedesktop.org/mesa/kmscube/"
+LICENSE = "MIT"
+SECTION = "graphics"
+DEPENDS = "virtual/libgles2 virtual/egl libdrm gstreamer1.0 gstreamer1.0-plugins-base"
+
+LIC_FILES_CHKSUM = "file://kmscube.c;beginline=1;endline=23;md5=8b309d4ee67b7315ff7381270dd631fb"
+
+SRCREV = "0d8de4ce3a03f36af1817f9b0586d132ad2c5d2e"
+SRC_URI = "git://anongit.freedesktop.org/mesa/kmscube;branch=master;protocol=git"
+UPSTREAM_VERSION_UNKNOWN = "1"
+
+S = "${WORKDIR}/git"
+
+inherit autotools pkgconfig distro_features_check
+
+REQUIRED_DISTRO_FEATURES = "opengl"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/libepoxy/libepoxy/Add-fallback-definition-for-EGL-CAST.patch b/import-layers/yocto-poky/meta/recipes-graphics/libepoxy/libepoxy/Add-fallback-definition-for-EGL-CAST.patch
new file mode 100644
index 0000000..b929725
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/libepoxy/libepoxy/Add-fallback-definition-for-EGL-CAST.patch
@@ -0,0 +1,33 @@
+Add fallback definition for EGL_CAST
+
+The EGL API update from d11104f introduced a dependency on the
+EGL_CAST() macro, provided by an updated eglplatform.h. Given that we
+don't provide eglplatform.h, add a fallback definition in case we're
+building against Mesa 17.0.x or similar.
+
+https://bugs.gentoo.org/show_bug.cgi?id=623926
+
+Upstream-Status: Backport [https://github.com/anholt/libepoxy/commit/ebe3a53db1c0bb34e1ca963b95d1f222115f93f8]
+
+Signed-off-by: Tom Hochstein <tom.hochstein@nxp.com>
+
+Index: libepoxy-1.4.3/src/gen_dispatch.py
+===================================================================
+--- libepoxy-1.4.3.orig/src/gen_dispatch.py	2017-06-06 04:24:13.000000000 -0500
++++ libepoxy-1.4.3/src/gen_dispatch.py	2017-11-06 12:45:43.594966473 -0600
+@@ -491,6 +491,15 @@
+             self.outln('#include "epoxy/gl.h"')
+             if self.target == "egl":
+                 self.outln('#include "EGL/eglplatform.h"')
++                # Account for older eglplatform.h, which doesn't define
++                # the EGL_CAST macro.
++                self.outln('#ifndef EGL_CAST')
++                self.outln('#if defined(__cplusplus)')
++                self.outln('#define EGL_CAST(type, value) (static_cast<type>(value))')
++                self.outln('#else')
++                self.outln('#define EGL_CAST(type, value) ((type) (value))')
++                self.outln('#endif')
++                self.outln('#endif')
+         else:
+             # Add some ridiculous inttypes.h redefinitions that are
+             # from khrplatform.h and not included in the XML.  We
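Taken out of the generator, the fallback that the patched gen_dispatch.py
emits into the dispatch source compiles on its own as shown below; the
EXAMPLE_NO_SURFACE define is purely illustrative and not part of libepoxy
or the EGL headers.

    /* Fallback EGL_CAST for eglplatform.h headers that predate the macro,
     * matching the C vs. C++ split emitted by gen_dispatch.py above. */
    #ifndef EGL_CAST
    #if defined(__cplusplus)
    #define EGL_CAST(type, value) (static_cast<type>(value))
    #else
    #define EGL_CAST(type, value) ((type) (value))
    #endif
    #endif

    /* Illustrative use only: newer EGL headers define handle constants in
     * this style, casting 0 to an opaque handle type. */
    #define EXAMPLE_NO_SURFACE EGL_CAST(void *, 0)

Because the macro is only defined when EGL_CAST is absent, builds against
newer Mesa headers keep the upstream definition unchanged.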
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/libepoxy/libepoxy_1.4.0.bb b/import-layers/yocto-poky/meta/recipes-graphics/libepoxy/libepoxy_1.4.0.bb
deleted file mode 100644
index ee9f694..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/libepoxy/libepoxy_1.4.0.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-SUMMARY = "OpenGL function pointer management library"
-HOMEPAGE = "https://github.com/anholt/libepoxy/"
-SECTION = "libs"
-
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=58ef4c80d401e07bd9ee8b6b58cf464b"
-
-SRC_URI = "https://github.com/anholt/${BPN}/releases/download/v1.4/${BP}.tar.xz"
-SRC_URI[md5sum] = "d8d8cbf2beb64975d424fcc5167a2a38"
-SRC_URI[sha256sum] = "25a906b14a921bc2b488cfeaa21a00486fe92630e4a9dd346e4ecabeae52ab41"
-UPSTREAM_CHECK_URI = "https://github.com/anholt/libepoxy/releases"
-
-inherit autotools pkgconfig distro_features_check
-
-REQUIRED_DISTRO_FEATURES = "opengl"
-
-DEPENDS = "util-macros virtual/egl"
-
-PACKAGECONFIG[x11] = "--enable-glx, --disable-glx, virtual/libx11"
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/libepoxy/libepoxy_1.4.3.bb b/import-layers/yocto-poky/meta/recipes-graphics/libepoxy/libepoxy_1.4.3.bb
new file mode 100644
index 0000000..0172322
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/libepoxy/libepoxy_1.4.3.bb
@@ -0,0 +1,22 @@
+SUMMARY = "OpenGL function pointer management library"
+HOMEPAGE = "https://github.com/anholt/libepoxy/"
+SECTION = "libs"
+
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=58ef4c80d401e07bd9ee8b6b58cf464b"
+
+SRC_URI = "https://github.com/anholt/${BPN}/releases/download/${PV}/${BP}.tar.xz \
+           file://Add-fallback-definition-for-EGL-CAST.patch"
+SRC_URI[md5sum] = "af4c3ce0fb1143bdc4e43f85695a9bed"
+SRC_URI[sha256sum] = "0b808a06c9685a62fca34b680abb8bc7fb2fda074478e329b063c1f872b826f6"
+UPSTREAM_CHECK_URI = "https://github.com/anholt/libepoxy/releases"
+
+inherit autotools pkgconfig distro_features_check
+
+REQUIRED_DISTRO_FEATURES = "opengl"
+
+DEPENDS = "util-macros"
+
+PACKAGECONFIG[egl] = "--enable-egl, --disable-egl, virtual/egl"
+PACKAGECONFIG[x11] = "--enable-glx, --disable-glx, virtual/libx11"
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)} egl"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/libfakekey/libfakekey_git.bb b/import-layers/yocto-poky/meta/recipes-graphics/libfakekey/libfakekey_git.bb
index c60ddea..4b803db 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/libfakekey/libfakekey_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/libfakekey/libfakekey_git.bb
@@ -10,8 +10,8 @@
 DEPENDS = "libxtst"
 SECTION = "x11/wm"
 
-SRCREV = "e327ff049b8503af2dadffa84370a0860b9fb682"
-PV = "0.0+git${SRCPV}"
+SRCREV = "7ad885912efb2131e80914e964d5e635b0d07b40"
+PV = "0.3+git${SRCPV}"
 
 SRC_URI = "git://git.yoctoproject.org/${BPN}"
 
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/libsdl/libsdl-1.2.15/0001-build-Pass-tag-CC-explictly-when-using-libtool.patch b/import-layers/yocto-poky/meta/recipes-graphics/libsdl/libsdl-1.2.15/0001-build-Pass-tag-CC-explictly-when-using-libtool.patch
new file mode 100644
index 0000000..ec8c0fd
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/libsdl/libsdl-1.2.15/0001-build-Pass-tag-CC-explictly-when-using-libtool.patch
@@ -0,0 +1,73 @@
+From 44e4bb4cfb81024c8f5fd2e179e8a32c42756a2f Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 23 Jul 2017 16:52:43 -0700
+Subject: [PATCH] build: Pass --tag=CC explictly when using libtool
+
+Do not depend solely on libtool heuristics, which fail
+in the OE case when building with an external compiler
+and hardening flags
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ Makefile.in              | 4 ++--
+ build-scripts/makedep.sh | 8 ++++----
+ 2 files changed, 6 insertions(+), 6 deletions(-)
+
+diff --git a/Makefile.in b/Makefile.in
+index ab51035..743ce30 100644
+--- a/Makefile.in
++++ b/Makefile.in
+@@ -72,10 +72,10 @@ depend:
+ include $(depend)
+ 
+ $(objects)/$(TARGET): $(OBJECTS)
+-	$(LIBTOOL) --mode=link $(CC) -o $@ $^ $(LDFLAGS) $(EXTRA_LDFLAGS) $(LT_LDFLAGS)
++	$(LIBTOOL) --tag=CC --mode=link $(CC) -o $@ $^ $(LDFLAGS) $(EXTRA_LDFLAGS) $(LT_LDFLAGS)
+ 
+ $(objects)/$(SDLMAIN_TARGET): $(SDLMAIN_OBJECTS)
+-	$(LIBTOOL) --mode=link $(CC) -o $@ $^ $(LDFLAGS) $(EXTRA_LDFLAGS) $(LT_LDFLAGS) $(SDLMAIN_LDFLAGS)
++	$(LIBTOOL) --tag=CC --mode=link $(CC) -o $@ $^ $(LDFLAGS) $(EXTRA_LDFLAGS) $(LT_LDFLAGS) $(SDLMAIN_LDFLAGS)
+ 
+ 
+ install: all install-bin install-hdrs install-lib install-data install-man
+diff --git a/build-scripts/makedep.sh b/build-scripts/makedep.sh
+index 3b3863b..dba28f2 100755
+--- a/build-scripts/makedep.sh
++++ b/build-scripts/makedep.sh
+@@ -51,19 +51,19 @@ do  echo "Generating dependencies for $src"
+     case $ext in
+         c) cat >>${output}.new <<__EOF__
+ 
+-	\$(LIBTOOL) --mode=compile \$(CC) \$(CFLAGS) \$(EXTRA_CFLAGS) -c $src  -o \$@
++	\$(LIBTOOL) --tag=CC --mode=compile \$(CC) \$(CFLAGS) \$(EXTRA_CFLAGS) -c $src  -o \$@
+ 
+ __EOF__
+         ;;
+         cc) cat >>${output}.new <<__EOF__
+ 
+-	\$(LIBTOOL) --mode=compile \$(CC) \$(CFLAGS) \$(EXTRA_CFLAGS) -c $src  -o \$@
++	\$(LIBTOOL) --tag=CC --mode=compile \$(CC) \$(CFLAGS) \$(EXTRA_CFLAGS) -c $src  -o \$@
+ 
+ __EOF__
+         ;;
+         m) cat >>${output}.new <<__EOF__
+ 
+-	\$(LIBTOOL) --mode=compile \$(CC) \$(CFLAGS) \$(EXTRA_CFLAGS) -c $src  -o \$@
++	\$(LIBTOOL) --tag=CC --mode=compile \$(CC) \$(CFLAGS) \$(EXTRA_CFLAGS) -c $src  -o \$@
+ 
+ __EOF__
+         ;;
+@@ -75,7 +75,7 @@ __EOF__
+         ;;
+         S) cat >>${output}.new <<__EOF__
+ 
+-	\$(LIBTOOL)  --mode=compile \$(CC) \$(CFLAGS) \$(EXTRA_CFLAGS) -c $src  -o \$@
++	\$(LIBTOOL) --tag=CC --mode=compile \$(CC) \$(CFLAGS) \$(EXTRA_CFLAGS) -c $src  -o \$@
+ 
+ __EOF__
+         ;;
+-- 
+2.13.3
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/libsdl/libsdl_1.2.15.bb b/import-layers/yocto-poky/meta/recipes-graphics/libsdl/libsdl_1.2.15.bb
index c802a6f..3680ea9 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/libsdl/libsdl_1.2.15.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/libsdl/libsdl_1.2.15.bb
@@ -17,6 +17,7 @@
 SRC_URI = "http://www.libsdl.org/release/SDL-${PV}.tar.gz \
            file://libsdl-1.2.15-xdata32.patch \
            file://pkgconfig.patch \
+           file://0001-build-Pass-tag-CC-explictly-when-using-libtool.patch \
           "
 
 UPSTREAM_CHECK_REGEX = "SDL-(?P<pver>\d+(\.\d+)+)\.tar"
@@ -53,6 +54,10 @@
 PACKAGECONFIG[opengl] = "--enable-video-opengl, --disable-video-opengl, virtual/libgl libglu"
 PACKAGECONFIG[x11] = "--enable-video-x11 --disable-x11-shared, --disable-video-x11, virtual/libx11 libxext libxrandr libxrender"
 
+# The following two options should only be enabled with mingw support
+PACKAGECONFIG[stdio-redirect] = "--enable-stdio-redirect,--disable-stdio-redirect"
+PACKAGECONFIG[directx] = "--enable-directx,--disable-directx"
+
 EXTRA_AUTORECONF += "--include=acinclude --exclude=autoheader"
 
 do_configure_prepend() {
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/libva/files/0001-configure.ac-Use-wayland-scanner-in-PATH.patch b/import-layers/yocto-poky/meta/recipes-graphics/libva/files/0001-configure.ac-Use-wayland-scanner-in-PATH.patch
deleted file mode 100644
index a99c225..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/libva/files/0001-configure.ac-Use-wayland-scanner-in-PATH.patch
+++ /dev/null
@@ -1,37 +0,0 @@
-From 0af30602502035155929dd2a14482b82a9747cf8 Mon Sep 17 00:00:00 2001
-From: Jussi Kukkonen <jussi.kukkonen@intel.com>
-Date: Thu, 23 Feb 2017 15:23:15 +0200
-Subject: [PATCH] configure.ac: Use wayland-scanner in PATH
-
-pkg-config will give us the wrong wayland-scanner location.
-Use the one in path instead: it will be from native sysroot.
-
-This is a workaround and should be fixed upstream: preferably
-with the same fix as all the other wayland-scanner users
-(see YOCTO #11100).
-
-Upstream-Status: Inappropriate [workaround]
-Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
----
- configure.ac | 5 +----
- 1 file changed, 1 insertion(+), 4 deletions(-)
-
-diff --git a/configure.ac b/configure.ac
-index 64eddf2..5536f35 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -273,10 +273,7 @@ if test "$enable_wayland" = "yes"; then
-     PKG_CHECK_MODULES([WAYLAND], [wayland-client >= wayland_api_version],
-         [USE_WAYLAND="yes"], [:])
-     if test "$USE_WAYLAND" = "yes"; then
--
--        WAYLAND_PREFIX=`$PKG_CONFIG --variable=prefix wayland-client`
--        AC_PATH_PROG([WAYLAND_SCANNER], [wayland-scanner],,
--                     [${WAYLAND_PREFIX}/bin$PATH_SEPARATOR$PATH])
-+        AC_PATH_PROG([WAYLAND_SCANNER], [wayland-scanner])
- 
-         AC_DEFINE([HAVE_VA_WAYLAND], [1],
-                   [Defined to 1 if VA/Wayland API is built])
--- 
-2.1.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/libva/files/0001-wayland-Don-t-commit-and-ship-generated-files.patch b/import-layers/yocto-poky/meta/recipes-graphics/libva/files/0001-wayland-Don-t-commit-and-ship-generated-files.patch
deleted file mode 100644
index bd97e22..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/libva/files/0001-wayland-Don-t-commit-and-ship-generated-files.patch
+++ /dev/null
@@ -1,339 +0,0 @@
-From 5993a7710cc8fd9a392d5015be5109987a42b498 Mon Sep 17 00:00:00 2001
-From: Jussi Kukkonen <jussi.kukkonen@intel.com>
-Date: Thu, 23 Feb 2017 16:10:09 +0200
-Subject: [PATCH] wayland: Don't commit and ship generated files
-
-I believe shipping wayland-drm-client-protocol.h is wrong: The header
-should always be generated by the wayland-scanner that matches the
-runtime wayland version. Currently when someone clones the repo and
-builds, the shipped version is used.
-
-Upstream-Status: Submitted [https://github.com/01org/libva/pull/33]
-Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
----
- va/wayland/Makefile.am                   |   5 +-
- va/wayland/wayland-drm-client-protocol.h | 290 -------------------------------
- 2 files changed, 4 insertions(+), 291 deletions(-)
- delete mode 100644 va/wayland/wayland-drm-client-protocol.h
-
-diff --git a/va/wayland/Makefile.am b/va/wayland/Makefile.am
-index 4f8262c..4ab8d07 100644
---- a/va/wayland/Makefile.am
-+++ b/va/wayland/Makefile.am
-@@ -53,7 +53,7 @@ protocol_source_h = \
- noinst_LTLIBRARIES		= libva_wayland.la
- libva_waylandincludedir		= ${includedir}/va
- libva_waylandinclude_HEADERS	= $(source_h)
--libva_wayland_la_SOURCES	= $(source_c) $(protocol_source_h)
-+libva_wayland_la_SOURCES	= $(source_c)
- noinst_HEADERS			= $(source_h_priv)
- 
- # Wayland protocol
-@@ -65,5 +65,8 @@ EXTRA_DIST = \
- 	wayland-drm.xml         \
- 	$(NULL)
- 
-+BUILT_SOURCES = $(protocol_source_h)
-+CLEANFILES = $(BUILT_SOURCES)
-+
- # Extra clean files so that maintainer-clean removes *everything*
- MAINTAINERCLEANFILES = Makefile.in
-diff --git a/va/wayland/wayland-drm-client-protocol.h b/va/wayland/wayland-drm-client-protocol.h
-deleted file mode 100644
-index da267e8..0000000
---- a/va/wayland/wayland-drm-client-protocol.h
-+++ /dev/null
-@@ -1,290 +0,0 @@
--/* Generated by wayland-scanner 1.11.90 */
--
--#ifndef DRM_CLIENT_PROTOCOL_H
--#define DRM_CLIENT_PROTOCOL_H
--
--#include <stdint.h>
--#include <stddef.h>
--#include "wayland-client.h"
--
--#ifdef  __cplusplus
--extern "C" {
--#endif
--
--/**
-- * @page page_drm The drm protocol
-- * @section page_ifaces_drm Interfaces
-- * - @subpage page_iface_wl_drm - 
-- * @section page_copyright_drm Copyright
-- * <pre>
-- *
-- * Copyright © 2008-2011 Kristian Høgsberg
-- * Copyright © 2010-2011 Intel Corporation
-- *
-- * Permission to use, copy, modify, distribute, and sell this
-- * software and its documentation for any purpose is hereby granted
-- * without fee, provided that\n the above copyright notice appear in
-- * all copies and that both that copyright notice and this permission
-- * notice appear in supporting documentation, and that the name of
-- * the copyright holders not be used in advertising or publicity
-- * pertaining to distribution of the software without specific,
-- * written prior permission.  The copyright holders make no
-- * representations about the suitability of this software for any
-- * purpose.  It is provided "as is" without express or implied
-- * warranty.
-- *
-- * THE COPYRIGHT HOLDERS DISCLAIM ALL WARRANTIES WITH REGARD TO THIS
-- * SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
-- * FITNESS, IN NO EVENT SHALL THE COPYRIGHT HOLDERS BE LIABLE FOR ANY
-- * SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
-- * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN
-- * AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION,
-- * ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF
-- * THIS SOFTWARE.
-- * </pre>
-- */
--struct wl_buffer;
--struct wl_drm;
--
--/**
-- * @page page_iface_wl_drm wl_drm
-- * @section page_iface_wl_drm_api API
-- * See @ref iface_wl_drm.
-- */
--/**
-- * @defgroup iface_wl_drm The wl_drm interface
-- */
--extern const struct wl_interface wl_drm_interface;
--
--#ifndef WL_DRM_ERROR_ENUM
--#define WL_DRM_ERROR_ENUM
--enum wl_drm_error {
--	WL_DRM_ERROR_AUTHENTICATE_FAIL = 0,
--	WL_DRM_ERROR_INVALID_FORMAT = 1,
--	WL_DRM_ERROR_INVALID_NAME = 2,
--};
--#endif /* WL_DRM_ERROR_ENUM */
--
--#ifndef WL_DRM_FORMAT_ENUM
--#define WL_DRM_FORMAT_ENUM
--enum wl_drm_format {
--	WL_DRM_FORMAT_C8 = 0x20203843,
--	WL_DRM_FORMAT_RGB332 = 0x38424752,
--	WL_DRM_FORMAT_BGR233 = 0x38524742,
--	WL_DRM_FORMAT_XRGB4444 = 0x32315258,
--	WL_DRM_FORMAT_XBGR4444 = 0x32314258,
--	WL_DRM_FORMAT_RGBX4444 = 0x32315852,
--	WL_DRM_FORMAT_BGRX4444 = 0x32315842,
--	WL_DRM_FORMAT_ARGB4444 = 0x32315241,
--	WL_DRM_FORMAT_ABGR4444 = 0x32314241,
--	WL_DRM_FORMAT_RGBA4444 = 0x32314152,
--	WL_DRM_FORMAT_BGRA4444 = 0x32314142,
--	WL_DRM_FORMAT_XRGB1555 = 0x35315258,
--	WL_DRM_FORMAT_XBGR1555 = 0x35314258,
--	WL_DRM_FORMAT_RGBX5551 = 0x35315852,
--	WL_DRM_FORMAT_BGRX5551 = 0x35315842,
--	WL_DRM_FORMAT_ARGB1555 = 0x35315241,
--	WL_DRM_FORMAT_ABGR1555 = 0x35314241,
--	WL_DRM_FORMAT_RGBA5551 = 0x35314152,
--	WL_DRM_FORMAT_BGRA5551 = 0x35314142,
--	WL_DRM_FORMAT_RGB565 = 0x36314752,
--	WL_DRM_FORMAT_BGR565 = 0x36314742,
--	WL_DRM_FORMAT_RGB888 = 0x34324752,
--	WL_DRM_FORMAT_BGR888 = 0x34324742,
--	WL_DRM_FORMAT_XRGB8888 = 0x34325258,
--	WL_DRM_FORMAT_XBGR8888 = 0x34324258,
--	WL_DRM_FORMAT_RGBX8888 = 0x34325852,
--	WL_DRM_FORMAT_BGRX8888 = 0x34325842,
--	WL_DRM_FORMAT_ARGB8888 = 0x34325241,
--	WL_DRM_FORMAT_ABGR8888 = 0x34324241,
--	WL_DRM_FORMAT_RGBA8888 = 0x34324152,
--	WL_DRM_FORMAT_BGRA8888 = 0x34324142,
--	WL_DRM_FORMAT_XRGB2101010 = 0x30335258,
--	WL_DRM_FORMAT_XBGR2101010 = 0x30334258,
--	WL_DRM_FORMAT_RGBX1010102 = 0x30335852,
--	WL_DRM_FORMAT_BGRX1010102 = 0x30335842,
--	WL_DRM_FORMAT_ARGB2101010 = 0x30335241,
--	WL_DRM_FORMAT_ABGR2101010 = 0x30334241,
--	WL_DRM_FORMAT_RGBA1010102 = 0x30334152,
--	WL_DRM_FORMAT_BGRA1010102 = 0x30334142,
--	WL_DRM_FORMAT_YUYV = 0x56595559,
--	WL_DRM_FORMAT_YVYU = 0x55595659,
--	WL_DRM_FORMAT_UYVY = 0x59565955,
--	WL_DRM_FORMAT_VYUY = 0x59555956,
--	WL_DRM_FORMAT_AYUV = 0x56555941,
--	WL_DRM_FORMAT_NV12 = 0x3231564e,
--	WL_DRM_FORMAT_NV21 = 0x3132564e,
--	WL_DRM_FORMAT_NV16 = 0x3631564e,
--	WL_DRM_FORMAT_NV61 = 0x3136564e,
--	WL_DRM_FORMAT_YUV410 = 0x39565559,
--	WL_DRM_FORMAT_YVU410 = 0x39555659,
--	WL_DRM_FORMAT_YUV411 = 0x31315559,
--	WL_DRM_FORMAT_YVU411 = 0x31315659,
--	WL_DRM_FORMAT_YUV420 = 0x32315559,
--	WL_DRM_FORMAT_YVU420 = 0x32315659,
--	WL_DRM_FORMAT_YUV422 = 0x36315559,
--	WL_DRM_FORMAT_YVU422 = 0x36315659,
--	WL_DRM_FORMAT_YUV444 = 0x34325559,
--	WL_DRM_FORMAT_YVU444 = 0x34325659,
--};
--#endif /* WL_DRM_FORMAT_ENUM */
--
--#ifndef WL_DRM_CAPABILITY_ENUM
--#define WL_DRM_CAPABILITY_ENUM
--/**
-- * @ingroup iface_wl_drm
-- * wl_drm capability bitmask
-- *
-- * Bitmask of capabilities.
-- */
--enum wl_drm_capability {
--	/**
--	 * wl_drm prime available
--	 */
--	WL_DRM_CAPABILITY_PRIME = 1,
--};
--#endif /* WL_DRM_CAPABILITY_ENUM */
--
--/**
-- * @ingroup iface_wl_drm
-- * @struct wl_drm_listener
-- */
--struct wl_drm_listener {
--	/**
--	 */
--	void (*device)(void *data,
--		       struct wl_drm *wl_drm,
--		       const char *name);
--	/**
--	 */
--	void (*format)(void *data,
--		       struct wl_drm *wl_drm,
--		       uint32_t format);
--	/**
--	 */
--	void (*authenticated)(void *data,
--			      struct wl_drm *wl_drm);
--	/**
--	 */
--	void (*capabilities)(void *data,
--			     struct wl_drm *wl_drm,
--			     uint32_t value);
--};
--
--/**
-- * @ingroup wl_drm_iface
-- */
--static inline int
--wl_drm_add_listener(struct wl_drm *wl_drm,
--		    const struct wl_drm_listener *listener, void *data)
--{
--	return wl_proxy_add_listener((struct wl_proxy *) wl_drm,
--				     (void (**)(void)) listener, data);
--}
--
--#define WL_DRM_AUTHENTICATE 0
--#define WL_DRM_CREATE_BUFFER 1
--#define WL_DRM_CREATE_PLANAR_BUFFER 2
--#define WL_DRM_CREATE_PRIME_BUFFER 3
--
--/**
-- * @ingroup iface_wl_drm
-- */
--#define WL_DRM_AUTHENTICATE_SINCE_VERSION 1
--/**
-- * @ingroup iface_wl_drm
-- */
--#define WL_DRM_CREATE_BUFFER_SINCE_VERSION 1
--/**
-- * @ingroup iface_wl_drm
-- */
--#define WL_DRM_CREATE_PLANAR_BUFFER_SINCE_VERSION 1
--/**
-- * @ingroup iface_wl_drm
-- */
--#define WL_DRM_CREATE_PRIME_BUFFER_SINCE_VERSION 2
--
--/** @ingroup iface_wl_drm */
--static inline void
--wl_drm_set_user_data(struct wl_drm *wl_drm, void *user_data)
--{
--	wl_proxy_set_user_data((struct wl_proxy *) wl_drm, user_data);
--}
--
--/** @ingroup iface_wl_drm */
--static inline void *
--wl_drm_get_user_data(struct wl_drm *wl_drm)
--{
--	return wl_proxy_get_user_data((struct wl_proxy *) wl_drm);
--}
--
--static inline uint32_t
--wl_drm_get_version(struct wl_drm *wl_drm)
--{
--	return wl_proxy_get_version((struct wl_proxy *) wl_drm);
--}
--
--/** @ingroup iface_wl_drm */
--static inline void
--wl_drm_destroy(struct wl_drm *wl_drm)
--{
--	wl_proxy_destroy((struct wl_proxy *) wl_drm);
--}
--
--/**
-- * @ingroup iface_wl_drm
-- */
--static inline void
--wl_drm_authenticate(struct wl_drm *wl_drm, uint32_t id)
--{
--	wl_proxy_marshal((struct wl_proxy *) wl_drm,
--			 WL_DRM_AUTHENTICATE, id);
--}
--
--/**
-- * @ingroup iface_wl_drm
-- */
--static inline struct wl_buffer *
--wl_drm_create_buffer(struct wl_drm *wl_drm, uint32_t name, int32_t width, int32_t height, uint32_t stride, uint32_t format)
--{
--	struct wl_proxy *id;
--
--	id = wl_proxy_marshal_constructor((struct wl_proxy *) wl_drm,
--			 WL_DRM_CREATE_BUFFER, &wl_buffer_interface, NULL, name, width, height, stride, format);
--
--	return (struct wl_buffer *) id;
--}
--
--/**
-- * @ingroup iface_wl_drm
-- */
--static inline struct wl_buffer *
--wl_drm_create_planar_buffer(struct wl_drm *wl_drm, uint32_t name, int32_t width, int32_t height, uint32_t format, int32_t offset0, int32_t stride0, int32_t offset1, int32_t stride1, int32_t offset2, int32_t stride2)
--{
--	struct wl_proxy *id;
--
--	id = wl_proxy_marshal_constructor((struct wl_proxy *) wl_drm,
--			 WL_DRM_CREATE_PLANAR_BUFFER, &wl_buffer_interface, NULL, name, width, height, format, offset0, stride0, offset1, stride1, offset2, stride2);
--
--	return (struct wl_buffer *) id;
--}
--
--/**
-- * @ingroup iface_wl_drm
-- */
--static inline struct wl_buffer *
--wl_drm_create_prime_buffer(struct wl_drm *wl_drm, int32_t name, int32_t width, int32_t height, uint32_t format, int32_t offset0, int32_t stride0, int32_t offset1, int32_t stride1, int32_t offset2, int32_t stride2)
--{
--	struct wl_proxy *id;
--
--	id = wl_proxy_marshal_constructor((struct wl_proxy *) wl_drm,
--			 WL_DRM_CREATE_PRIME_BUFFER, &wl_buffer_interface, NULL, name, width, height, format, offset0, stride0, offset1, stride1, offset2, stride2);
--
--	return (struct wl_buffer *) id;
--}
--
--#ifdef  __cplusplus
--}
--#endif
--
--#endif
--- 
-2.1.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/libva/libva-utils_1.8.3.bb b/import-layers/yocto-poky/meta/recipes-graphics/libva/libva-utils_1.8.3.bb
new file mode 100644
index 0000000..c082c18
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/libva/libva-utils_1.8.3.bb
@@ -0,0 +1,33 @@
+SUMMARY = "libva-utils is a collection of utilities from libva project"
+
+DESCRIPTION = "libva-utils is a collection of utilities \
+and examples to exercise VA-API in accordance with the libva \
+project. VA-API is an open-source library and API specification, \
+which provides access to graphics hardware acceleration capabilities \
+for video processing. It consists of a main library and driver-specific \
+acceleration backends for each supported hardware vendor."
+
+HOMEPAGE = "https://01.org/linuxmedia/vaapi"
+BUGTRACKER = "https://github.com/01org/libva-utils/issues"
+
+SECTION = "x11"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b148fc8adf19dc9aec17cf9cd29a9a5e"
+
+SRC_URI = "git://github.com/01org/libva-utils.git;branch=v1.8-branch"
+SRCREV = "011c709b98b52db3b10aeb361dfea9da43930364"
+UPSTREAM_CHECK_URI = "https://github.com/01org/libva-utils/releases"
+UPSTREAM_CHECK_GITTAGREGEX = "(?P<pver>(\d+(\.\d+)+))"
+
+S = "${WORKDIR}/git"
+
+DEPENDS = "libva"
+
+inherit autotools pkgconfig distro_features_check
+
+# depends on libva which requires opengl
+REQUIRED_DISTRO_FEATURES = "opengl"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'wayland x11', d)}"
+PACKAGECONFIG[x11] = "--enable-x11,--disable-x11,virtual/libx11 libxext libxfixes"
+PACKAGECONFIG[wayland] = "--enable-wayland,--disable-wayland,wayland-native wayland"
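This recipe's PACKAGECONFIG lines follow the usual OE-Core pattern: the default value is the subset of "wayland x11" that actually appears in DISTRO_FEATURES, and each enabled option contributes its --enable argument and build dependencies while a disabled option contributes its --disable argument. A rough standalone Python sketch of that resolution (illustrative only; the helper name and simplified data model below are not BitBake's real implementation):

    def filter_features(distro_features, checkvalues):
        # Approximation of bb.utils.filter(): keep only the requested items
        # that are actually present in the space-separated variable value.
        present = set(distro_features.split())
        return " ".join(v for v in checkvalues.split() if v in present)

    # PACKAGECONFIG[flag] = "arg-if-enabled,arg-if-disabled,build-deps"
    flags = {
        "x11":     ("--enable-x11", "--disable-x11", "virtual/libx11 libxext libxfixes"),
        "wayland": ("--enable-wayland", "--disable-wayland", "wayland-native wayland"),
    }

    def resolve(distro_features):
        # Default PACKAGECONFIG: whatever part of "wayland x11" DISTRO_FEATURES provides.
        enabled = set(filter_features(distro_features, "wayland x11").split())
        conf_args, depends = [], []
        for name, (on, off, deps) in flags.items():
            conf_args.append(on if name in enabled else off)
            if name in enabled:
                depends.append(deps)
        return conf_args, depends

    print(resolve("opengl x11"))   # x11 enabled, wayland disabled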
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/libva/libva_1.7.3.bb b/import-layers/yocto-poky/meta/recipes-graphics/libva/libva_1.7.3.bb
deleted file mode 100644
index 6c0b90f..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/libva/libva_1.7.3.bb
+++ /dev/null
@@ -1,53 +0,0 @@
-SUMMARY = "Video Acceleration (VA) API for Linux"
-DESCRIPTION = "Video Acceleration API (VA API) is a library (libVA) \
-and API specification which enables and provides access to graphics \
-hardware (GPU) acceleration for video processing on Linux and UNIX \
-based operating systems. Accelerated processing includes video \
-decoding, video encoding, subpicture blending and rendering. The \
-specification was originally designed by Intel for its GMA (Graphics \
-Media Accelerator) series of GPU hardware, the API is however not \
-limited to GPUs or Intel specific hardware, as other hardware and \
-manufacturers can also freely use this API for hardware accelerated \
-video decoding."
-
-HOMEPAGE = "https://01.org/linuxmedia/vaapi"
-BUGTRACKER = "https://github.com/01org/libva/issues"
-
-SECTION = "x11"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=2e48940f94acb0af582e5ef03537800f"
-
-SRC_URI = "git://github.com/01org/libva.git;protocol=http;branch=v1.7-branch \
-           file://0001-configure.ac-Use-wayland-scanner-in-PATH.patch \
-           file://0001-wayland-Don-t-commit-and-ship-generated-files.patch"
-SRCREV = "dbf9f7e33349c3cee8d131e93a6a4f91255635cb"
-UPSTREAM_CHECK_GITTAGREGEX = "libva-(?P<pver>(\d+(\.\d+)+))"
-
-S = "${WORKDIR}/git"
-
-DEPENDS = "libdrm virtual/mesa virtual/libgles1 virtual/libgles2 virtual/egl"
-
-inherit autotools pkgconfig distro_features_check
-
-REQUIRED_DISTRO_FEATURES = "opengl"
-
-EXTRA_OECONF = "--disable-dummy-driver"
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'wayland x11', d)}"
-PACKAGECONFIG[x11] = "--enable-x11,--disable-x11,virtual/libx11 libxext libxfixes"
-PACKAGECONFIG[wayland] = "--enable-wayland,--disable-wayland,wayland-native wayland"
-
-PACKAGES =+ "${PN}-x11 ${PN}-tpi ${PN}-glx ${PN}-egl ${PN}-wayland"
-
-RDEPENDS_${PN}-tpi =+ "${PN}"
-RDEPENDS_${PN}-x11 =+ "${PN}"
-RDEPENDS_${PN}-glx =+ "${PN}-x11"
-RDEPENDS_${PN}-egl =+ "${PN}-x11"
-
-FILES_${PN}-dbg += "${libdir}/dri/.debug"
-
-FILES_${PN}-x11 =+ "${libdir}/libva-x11*${SOLIBS}"
-FILES_${PN}-tpi =+ "${libdir}/libva-tpi*${SOLIBS}"
-FILES_${PN}-glx =+ "${libdir}/libva-glx*${SOLIBS}"
-FILES_${PN}-egl =+ "${libdir}/libva-egl*${SOLIBS}"
-FILES_${PN}-wayland =+ "${libdir}/libva-wayland*${SOLIBS}"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/libva/libva_1.8.3.bb b/import-layers/yocto-poky/meta/recipes-graphics/libva/libva_1.8.3.bb
new file mode 100644
index 0000000..ceeda84
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/libva/libva_1.8.3.bb
@@ -0,0 +1,51 @@
+SUMMARY = "Video Acceleration (VA) API for Linux"
+DESCRIPTION = "Video Acceleration API (VA API) is a library (libVA) \
+and API specification which enables and provides access to graphics \
+hardware (GPU) acceleration for video processing on Linux and UNIX \
+based operating systems. Accelerated processing includes video \
+decoding, video encoding, subpicture blending and rendering. The \
+specification was originally designed by Intel for its GMA (Graphics \
+Media Accelerator) series of GPU hardware, the API is however not \
+limited to GPUs or Intel specific hardware, as other hardware and \
+manufacturers can also freely use this API for hardware accelerated \
+video decoding."
+
+HOMEPAGE = "https://01.org/linuxmedia/vaapi"
+BUGTRACKER = "https://github.com/01org/libva/issues"
+
+SECTION = "x11"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=2e48940f94acb0af582e5ef03537800f"
+
+SRC_URI = "git://github.com/01org/libva.git;protocol=http;branch=v1.8-branch"
+SRCREV = "457470987cc9df5976ce8c72ffd4bfbd9baaf0f9"
+UPSTREAM_CHECK_GITTAGREGEX = "^(?P<pver>(\d+(\.\d+)+))$"
+
+S = "${WORKDIR}/git"
+
+DEPENDS = "libdrm virtual/mesa virtual/libgles1 virtual/libgles2 virtual/egl"
+
+inherit autotools pkgconfig distro_features_check
+
+REQUIRED_DISTRO_FEATURES = "opengl"
+
+EXTRA_OECONF = "ac_cv_prog_WAYLAND_SCANNER=wayland-scanner"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'wayland x11', d)}"
+PACKAGECONFIG[x11] = "--enable-x11,--disable-x11,virtual/libx11 libxext libxfixes"
+PACKAGECONFIG[wayland] = "--enable-wayland,--disable-wayland,wayland-native wayland"
+
+PACKAGES =+ "${PN}-x11 ${PN}-tpi ${PN}-glx ${PN}-egl ${PN}-wayland"
+
+RDEPENDS_${PN}-tpi =+ "${PN}"
+RDEPENDS_${PN}-x11 =+ "${PN}"
+RDEPENDS_${PN}-glx =+ "${PN}-x11"
+RDEPENDS_${PN}-egl =+ "${PN}-x11"
+
+FILES_${PN}-dbg += "${libdir}/dri/.debug"
+
+FILES_${PN}-x11 =+ "${libdir}/libva-x11*${SOLIBS}"
+FILES_${PN}-tpi =+ "${libdir}/libva-tpi*${SOLIBS}"
+FILES_${PN}-glx =+ "${libdir}/libva-glx*${SOLIBS}"
+FILES_${PN}-egl =+ "${libdir}/libva-egl*${SOLIBS}"
+FILES_${PN}-wayland =+ "${libdir}/libva-wayland*${SOLIBS}"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-Makefile.vulkan.am-explictly-add-lib-expat-to-intel-.patch b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-Makefile.vulkan.am-explictly-add-lib-expat-to-intel-.patch
new file mode 100644
index 0000000..bd1e863
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-Makefile.vulkan.am-explictly-add-lib-expat-to-intel-.patch
@@ -0,0 +1,44 @@
+From 342311dbb190735b7b32ab20f81c1d8dbcfe717a Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Wed, 11 Oct 2017 15:40:42 +0800
+Subject: [PATCH] Makefile.vulkan.am: explictly add lib expat to intel
+ libvulkan's lib depends
+
+While built with "-fvisibility=default"
+...
+|i586-oe-linux-gcc ... -fvisibility=default ... -o common/.libs/common_libintel_common_la-gen_decoder.o
+...
+
+It triggered the failure
+...
+|i586-oe-linux-g++  ... common/.libs/libintel_common.a ... -o vulkan/.libs/libvulkan_intel.so
+|common/.libs/libintel_common.a(common_libintel_common_la-gen_decoder.o):
+|In function `start_element':
+|/usr/src/debug/mesa/2_17.1.7-r0/mesa-17.1.7/src/intel/common/gen_decoder.c:371:
+undefined reference to `XML_GetCurrentLineNumber'
+...
+
+explicitly add EXPAT_LIBS to intel's VULKAN_LIB_DEPS
+
+Upstream-Status: Submitted [mesa-dev@lists.freedesktop.org]
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ src/intel/Makefile.vulkan.am | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/src/intel/Makefile.vulkan.am b/src/intel/Makefile.vulkan.am
+index 271b0a5..8fbe2c8 100644
+--- a/src/intel/Makefile.vulkan.am
++++ b/src/intel/Makefile.vulkan.am
+@@ -144,6 +144,7 @@ VULKAN_LIB_DEPS = \
+ 	$(LIBDRM_LIBS) \
+ 	$(PTHREAD_LIBS) \
+ 	$(DLOPEN_LIBS) \
++	$(EXPAT_LIBS) \
+ 	-lm
+ 
+ if HAVE_PLATFORM_X11
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-Use-wayland-scanner-in-the-path.patch b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-Use-wayland-scanner-in-the-path.patch
index e49695b..eb6ff4f 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-Use-wayland-scanner-in-the-path.patch
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-Use-wayland-scanner-in-the-path.patch
@@ -1,4 +1,4 @@
-From 2f68fcaaf4964e7feeb383f5c26851965cda037c Mon Sep 17 00:00:00 2001
+From c908f0c13ac81a3a52140f129a13b2bc997ff4ee Mon Sep 17 00:00:00 2001
 From: Jussi Kukkonen <jussi.kukkonen@intel.com>
 Date: Tue, 15 Nov 2016 15:20:49 +0200
 Subject: [PATCH] Simplify wayland-scanner lookup
@@ -15,23 +15,23 @@
  1 file changed, 1 insertion(+), 6 deletions(-)
 
 diff --git a/configure.ac b/configure.ac
-index e56e35a..a92005a 100644
+index 2c7e636fac..d2b2350739 100644
 --- a/configure.ac
 +++ b/configure.ac
-@@ -2020,12 +2020,7 @@ if test "x$with_egl_platforms" != "x" -a "x$enable_egl" != xyes; then
-     AC_MSG_ERROR([cannot build egl state tracker without EGL library])
+@@ -2174,12 +2174,7 @@ if test "x$with_platforms" != xauto; then
+     with_egl_platforms=$with_platforms
  fi
  
 -PKG_CHECK_MODULES([WAYLAND_SCANNER], [wayland-scanner],
 -        WAYLAND_SCANNER=`$PKG_CONFIG --variable=wayland_scanner wayland-scanner`,
 -        WAYLAND_SCANNER='')
 -if test "x$WAYLAND_SCANNER" = x; then
--    AC_PATH_PROG([WAYLAND_SCANNER], [wayland-scanner])
+-    AC_PATH_PROG([WAYLAND_SCANNER], [wayland-scanner], [:])
 -fi
 +AC_PATH_PROG([WAYLAND_SCANNER], [wayland-scanner])
  
  # Do per-EGL platform setups and checks
  egl_platforms=`IFS=', '; echo $with_egl_platforms`
 -- 
-2.1.4
+2.13.0
 
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-ac-fix-build-after-LLVM-5.0-SVN-r300718.patch b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-ac-fix-build-after-LLVM-5.0-SVN-r300718.patch
new file mode 100644
index 0000000..b27a3bc
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-ac-fix-build-after-LLVM-5.0-SVN-r300718.patch
@@ -0,0 +1,40 @@
+From 9861437e58fdd0de01193a102608d34e5952953f Mon Sep 17 00:00:00 2001
+From: Christoph Haag <haagch+mesadev@frickel.club>
+Date: Thu, 20 Apr 2017 10:34:18 +0200
+Subject: [PATCH 1/2] ac: fix build after LLVM 5.0 SVN r300718
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+v2: previously getWithDereferenceableBytes() exists, but addAttr() doesn't take that type
+
+Signed-off-by: Christoph Haag <haagch+mesadev@frickel.club>
+Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com>
+Tested-and-reviewed-by: Mike Lothian <mike@fireburn.co.uk>
+---
+Upstream-Status: Backport
+
+ src/amd/common/ac_llvm_helper.cpp | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/src/amd/common/ac_llvm_helper.cpp b/src/amd/common/ac_llvm_helper.cpp
+index d9ea4b1..11fa809 100644
+--- a/src/amd/common/ac_llvm_helper.cpp
++++ b/src/amd/common/ac_llvm_helper.cpp
+@@ -44,9 +44,13 @@ typedef AttributeSet AttributeList;
+ void ac_add_attr_dereferenceable(LLVMValueRef val, uint64_t bytes)
+ {
+    llvm::Argument *A = llvm::unwrap<llvm::Argument>(val);
++#if HAVE_LLVM < 0x0500
+    llvm::AttrBuilder B;
+    B.addDereferenceableAttr(bytes);
+    A->addAttr(llvm::AttributeList::get(A->getContext(), A->getArgNo() + 1,  B));
++#else
++   A->addAttr(llvm::Attribute::getWithDereferenceableBytes(A->getContext(), bytes));
++#endif
+ }
+ 
+ bool ac_is_sgpr_param(LLVMValueRef arg)
+-- 
+2.13.3
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-configure.ac-Always-check-for-expat.patch b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-configure.ac-Always-check-for-expat.patch
new file mode 100644
index 0000000..4753c49
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-configure.ac-Always-check-for-expat.patch
@@ -0,0 +1,51 @@
+From 1f7d752193f02d15d5923cee992e8f46d4c6df1b Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Mon, 28 Aug 2017 13:51:49 +0300
+Subject: [PATCH] configure.ac: Always check for expat
+
+expat was not checked for if dri was not built, leading to a build
+failure in the vulkan driver: backport a fix (a combination of multiple
+commits that should end up in 17.3).
+
+Upstream-Status: Backport
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+---
+ configure.ac | 15 ++++++---------
+ 1 file changed, 6 insertions(+), 9 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index fd346c8aa2..662faecefa 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -1777,6 +1777,12 @@ if test "x$with_dri_drivers" = xno; then
+     with_dri_drivers=''
+ fi
+ 
++# Check for expat
++PKG_CHECK_MODULES([EXPAT], [expat])
++PKG_CHECK_MODULES([EXPAT], [expat],,
++    [PKG_CHECK_MODULES([EXPAT], [expat21])]
++)
++
+ dnl If $with_dri_drivers is yes, drivers will be added through
+ dnl platform checks. Set DEFINES and LIB_DEPS
+ if test "x$enable_dri" = xyes; then
+@@ -1810,15 +1816,6 @@ if test "x$enable_dri" = xyes; then
+         with_dri_drivers="i915 i965 nouveau r200 radeon swrast"
+     fi
+ 
+-    # Check for expat
+-    PKG_CHECK_MODULES([EXPAT], [expat], [],
+-        # expat version 2.0 and earlier do not provide expat.pc
+-        [AC_CHECK_HEADER([expat.h],[],
+-                         [AC_MSG_ERROR([Expat headers required for DRI not found])])
+-         AC_CHECK_LIB([expat],[XML_ParserCreate],[],
+-                     [AC_MSG_ERROR([Expat library required for DRI not found])])
+-         EXPAT_LIBS="-lexpat"])
+-
+     # put all the necessary libs together
+     DRI_LIB_DEPS="$DRI_LIB_DEPS $SELINUX_LIBS $LIBDRM_LIBS $EXPAT_LIBS -lm $PTHREAD_LIBS $DLOPEN_LIBS"
+ fi
+-- 
+2.14.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-winsys-svga-drm-Include-sys-types.h.patch b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-winsys-svga-drm-Include-sys-types.h.patch
new file mode 100644
index 0000000..549b867
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0001-winsys-svga-drm-Include-sys-types.h.patch
@@ -0,0 +1,34 @@
+From d8750776404b1031d762966d0f551d13d2fe05a7 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 16 Aug 2017 18:58:20 -0700
+Subject: [PATCH] winsys/svga/drm: Include sys/types.h
+
+vmw_screen.h uses dev_t, which is defined in sys/types.h, so
+this header is required to be included to get the dev_t
+definition. This issue happens on the musl C library; it is hidden
+on glibc since sys/types.h is included through other
+system headers.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+
+Upstream-Status: Submitted
+
+ src/gallium/winsys/svga/drm/vmw_screen.h | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/src/gallium/winsys/svga/drm/vmw_screen.h b/src/gallium/winsys/svga/drm/vmw_screen.h
+index 0ef8e84..2eda97e 100644
+--- a/src/gallium/winsys/svga/drm/vmw_screen.h
++++ b/src/gallium/winsys/svga/drm/vmw_screen.h
+@@ -41,6 +41,7 @@
+ #include "svga_winsys.h"
+ #include "pipebuffer/pb_buffer_fenced.h"
+ #include <os/os_thread.h>
++#include <sys/types.h>
+ 
+ #define VMW_GMR_POOL_SIZE (16*1024*1024)
+ #define VMW_QUERY_POOL_SIZE (8192)
+-- 
+2.14.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0002-gallivm-Fix-build-against-LLVM-SVN-r302589.patch b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0002-gallivm-Fix-build-against-LLVM-SVN-r302589.patch
new file mode 100644
index 0000000..ac8caec
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0002-gallivm-Fix-build-against-LLVM-SVN-r302589.patch
@@ -0,0 +1,49 @@
+From a02a0dfda2712d30ad62b8f0421ec7b8244ba2cb Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Michel=20D=C3=A4nzer?= <michel.daenzer@amd.com>
+Date: Wed, 10 May 2017 17:26:07 +0900
+Subject: [PATCH 2/2] gallivm: Fix build against LLVM SVN >= r302589
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+deregisterEHFrames doesn't take any parameters anymore.
+
+Reviewed-by: Vedran Miletić <vedran@miletic.net>
+Reviewed-by: Marek Olšák <marek.olsak@amd.com>
+---
+Upstream-Status: Backport
+
+ src/gallium/auxiliary/gallivm/lp_bld_misc.cpp | 12 +++++++++---
+ 1 file changed, 9 insertions(+), 3 deletions(-)
+
+diff --git a/src/gallium/auxiliary/gallivm/lp_bld_misc.cpp b/src/gallium/auxiliary/gallivm/lp_bld_misc.cpp
+index 2a388cb..0e4a531 100644
+--- a/src/gallium/auxiliary/gallivm/lp_bld_misc.cpp
++++ b/src/gallium/auxiliary/gallivm/lp_bld_misc.cpp
+@@ -342,14 +342,20 @@ class DelegatingJITMemoryManager : public BaseMemoryManager {
+       virtual void registerEHFrames(uint8_t *Addr, uint64_t LoadAddr, size_t Size) {
+          mgr()->registerEHFrames(Addr, LoadAddr, Size);
+       }
+-      virtual void deregisterEHFrames(uint8_t *Addr, uint64_t LoadAddr, size_t Size) {
+-         mgr()->deregisterEHFrames(Addr, LoadAddr, Size);
+-      }
+ #else
+       virtual void registerEHFrames(llvm::StringRef SectionData) {
+          mgr()->registerEHFrames(SectionData);
+       }
+ #endif
++#if HAVE_LLVM >= 0x0500
++      virtual void deregisterEHFrames() {
++         mgr()->deregisterEHFrames();
++      }
++#elif HAVE_LLVM >= 0x0304
++      virtual void deregisterEHFrames(uint8_t *Addr, uint64_t LoadAddr, size_t Size) {
++         mgr()->deregisterEHFrames(Addr, LoadAddr, Size);
++      }
++#endif
+       virtual void *getPointerToNamedFunction(const std::string &Name,
+                                               bool AbortOnFailure=true) {
+          return mgr()->getPointerToNamedFunction(Name, AbortOnFailure);
+-- 
+2.13.3
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0002-hardware-gloat.patch b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0002-hardware-gloat.patch
new file mode 100644
index 0000000..0e014dc
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/0002-hardware-gloat.patch
@@ -0,0 +1,51 @@
+From 00bcd599310dc7fce4fe336ffd85902429051a0c Mon Sep 17 00:00:00 2001
+From: Igor Gnatenko <i.gnatenko.brain@gmail.com>
+Date: Sun, 20 Mar 2016 13:27:04 +0100
+Subject: [PATCH 2/4] hardware gloat
+
+Upstream-Status: Inappropriate [not author]
+Signed-off-by: Igor Gnatenko <i.gnatenko.brain@gmail.com>
+---
+ src/gallium/drivers/llvmpipe/lp_screen.c | 7 +++++++
+ src/gallium/drivers/softpipe/sp_screen.c | 7 +++++++
+ 2 files changed, 14 insertions(+)
+
+diff --git a/src/gallium/drivers/llvmpipe/lp_screen.c b/src/gallium/drivers/llvmpipe/lp_screen.c
+index 4f61de8..3b0ec77 100644
+--- a/src/gallium/drivers/llvmpipe/lp_screen.c
++++ b/src/gallium/drivers/llvmpipe/lp_screen.c
+@@ -411,6 +411,13 @@ llvmpipe_is_format_supported( struct pipe_screen *_screen,
+    if (!format_desc)
+       return FALSE;
+ 
++   if ((bind & PIPE_BIND_RENDER_TARGET) &&
++       format != PIPE_FORMAT_R9G9B9E5_FLOAT &&
++       format != PIPE_FORMAT_R11G11B10_FLOAT &&
++       util_format_is_float(format)) {
++      return FALSE;
++   }
++
+    assert(target == PIPE_BUFFER ||
+           target == PIPE_TEXTURE_1D ||
+           target == PIPE_TEXTURE_1D_ARRAY ||
+diff --git a/src/gallium/drivers/softpipe/sp_screen.c b/src/gallium/drivers/softpipe/sp_screen.c
+index 031602b..c279120 100644
+--- a/src/gallium/drivers/softpipe/sp_screen.c
++++ b/src/gallium/drivers/softpipe/sp_screen.c
+@@ -358,6 +358,13 @@ softpipe_is_format_supported( struct pipe_screen *screen,
+    if (!format_desc)
+       return FALSE;
+ 
++   if ((bind & PIPE_BIND_RENDER_TARGET) &&
++       format != PIPE_FORMAT_R9G9B9E5_FLOAT &&
++       format != PIPE_FORMAT_R11G11B10_FLOAT &&
++       util_format_is_float(format)) {
++      return FALSE;
++   }
++
+    if (sample_count > 1)
+       return FALSE;
+ 
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/llvm-config-version.patch b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/llvm-config-version.patch
new file mode 100644
index 0000000..aa33a1e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/llvm-config-version.patch
@@ -0,0 +1,32 @@
+When building llvm from git or svn, it embeds the svn/git revision into the internal version string.
+
+$ /mnt/a/oe/build/tmp/work/corei7-64-bec-linux/mesa/2_17.1.5-r0/recipe-sysroot/usr/lib/llvm5.0/llvm-config-host --version
+5.0.0git-9a5c333388c
+
+We need to ignore everything after 5.0.0, which is what the cut command in this patch does.
+
+Upstream-Status: Pending
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Index: mesa-17.1.5/configure.ac
+===================================================================
+--- mesa-17.1.5.orig/configure.ac
++++ mesa-17.1.5/configure.ac
+@@ -967,7 +967,7 @@ strip_unwanted_llvm_flags() {
+ 
+ llvm_set_environment_variables() {
+     if test "x$LLVM_CONFIG" != xno; then
+-        LLVM_VERSION=`$LLVM_CONFIG --version | egrep -o '^[[0-9.]]+'`
++        LLVM_VERSION=`$LLVM_CONFIG --version | cut -c1-5`
+         LLVM_CPPFLAGS=`strip_unwanted_llvm_flags "$LLVM_CONFIG --cppflags"`
+         LLVM_INCLUDEDIR=`$LLVM_CONFIG --includedir`
+         LLVM_LIBDIR=`$LLVM_CONFIG --libdir`
+@@ -2560,7 +2560,7 @@ if test "x$enable_llvm" = xyes; then
+     dnl (See https://llvm.org/bugs/show_bug.cgi?id=6823)
+     if test "x$enable_llvm_shared_libs" = xyes; then
+         dnl We can't use $LLVM_VERSION because it has 'svn' stripped out,
+-        LLVM_SO_NAME=LLVM-`$LLVM_CONFIG --version`
++        LLVM_SO_NAME=LLVM-`$LLVM_CONFIG --version|cut -c1-5`
+         AS_IF([test -f "$LLVM_LIBDIR/lib$LLVM_SO_NAME.$IMP_LIB_EXT"], [llvm_have_one_so=yes])
+ 
+         if test "x$llvm_have_one_so" = xyes; then
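The patch above swaps the egrep for a fixed-width cut so that a development build's revision suffix is dropped and only the plain release number remains; as a quick illustration of the same transformation (a hypothetical Python equivalent, not part of the patch):

    # Equivalent of `llvm-config --version | cut -c1-5` for a git build of LLVM 5.0.0.
    version = "5.0.0git-9a5c333388c"
    print(version[:5])  # "5.0.0" -- assumes a five-character release string such as 5.0.0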
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/vulkan-mkdir.patch b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/vulkan-mkdir.patch
new file mode 100644
index 0000000..15ee5ee
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/files/vulkan-mkdir.patch
@@ -0,0 +1,37 @@
+Upstream-Status: Submitted
+Signed-off-by: Ross Burton <ross.burton@intel.com>
+
+From c78979fd95a1c4f732f7e6edf0f32c524e5955b8 Mon Sep 17 00:00:00 2001
+From: Ross Burton <ross.burton@intel.com>
+Date: Wed, 12 Jul 2017 17:10:07 +0100
+Subject: [PATCH] src/intel/Makefile.vulkan.am: create target directories when
+ required
+
+In out-of-tree builds src/intel/vulkan won't exist, so always create it before
+writing into it.
+
+Signed-off-by: Ross Burton <ross.burton@intel.com>
+---
+ src/intel/Makefile.vulkan.am | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/src/intel/Makefile.vulkan.am b/src/intel/Makefile.vulkan.am
+index 3857a5dc62..26e9cd410c 100644
+--- a/src/intel/Makefile.vulkan.am
++++ b/src/intel/Makefile.vulkan.am
+@@ -44,11 +44,13 @@ EXTRA_DIST += \
+ 	vulkan/TODO
+ 
+ vulkan/dev_icd.json : vulkan/dev_icd.json.in
++	$(MKDIR_GEN)
+ 	$(AM_V_GEN) $(SED) \
+ 		-e "s#@build_libdir@#${abs_top_builddir}/${LIB_DIR}#" \
+ 		< $(srcdir)/vulkan/dev_icd.json.in > $@
+ 
+ vulkan/intel_icd.@host_cpu@.json : vulkan/intel_icd.json.in
++	$(MKDIR_GEN)
+ 	$(AM_V_GEN) $(SED) \
+ 		-e "s#@install_libdir@#${libdir}#" \
+ 		< $(srcdir)/vulkan/intel_icd.json.in > $@
+-- 
+2.11.0
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/libglu_9.0.0.bb b/import-layers/yocto-poky/meta/recipes-graphics/mesa/libglu_9.0.0.bb
index 8b94613..eeb898f 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/mesa/libglu_9.0.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/libglu_9.0.0.bb
@@ -12,7 +12,7 @@
 PE = "2"
 PR = "0"
 
-SRC_URI = "ftp://ftp.freedesktop.org/pub/mesa/glu/glu-${PV}.tar.bz2"
+SRC_URI = "https://mesa.freedesktop.org/archive/glu/glu-${PV}.tar.bz2"
 
 SRC_URI[md5sum] = "be9249132ff49275461cf92039083030"
 SRC_URI[sha256sum] = "1f7ad0d379a722fcbd303aa5650c6d7d5544fde83196b42a73d1193568a4df12"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa-demos_8.3.0.bb b/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa-demos_8.3.0.bb
index e43b9ef..bae3b18 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa-demos_8.3.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa-demos_8.3.0.bb
@@ -9,7 +9,7 @@
 LIC_FILES_CHKSUM = "file://src/xdemos/glxgears.c;beginline=1;endline=20;md5=914225785450eff644a86c871d3ae00e \
                     file://src/xdemos/glxdemo.c;beginline=1;endline=8;md5=b01d5ab1aee94d35b7efaa2ef48e1a06"
 
-SRC_URI = "ftp://ftp.freedesktop.org/pub/mesa/demos/${PV}/${BPN}-${PV}.tar.bz2 \
+SRC_URI = "https://mesa.freedesktop.org/archive/demos/${PV}/${BPN}-${PV}.tar.bz2 \
            file://0001-mesa-demos-Add-missing-data-files.patch \
            file://0003-configure-Allow-to-disable-demos-which-require-GLEW-.patch \
            file://0004-Use-DEMOS_DATA_DIR-to-locate-data-files.patch \
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa-gl_17.0.2.bb b/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa-gl_17.0.2.bb
deleted file mode 100644
index e3604f3..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa-gl_17.0.2.bb
+++ /dev/null
@@ -1,13 +0,0 @@
-require mesa_${PV}.bb
-
-SUMMARY += " (OpenGL only, no EGL/GLES)"
-
-FILESEXTRAPATHS =. "${FILE_DIRNAME}/mesa:"
-
-PROVIDES = "virtual/libgl virtual/mesa"
-
-S = "${WORKDIR}/mesa-${PV}"
-
-PACKAGECONFIG ??= "dri ${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
-
-EXCLUDE_FROM_WORLD = "1"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa-gl_17.1.7.bb b/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa-gl_17.1.7.bb
new file mode 100644
index 0000000..73267eb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa-gl_17.1.7.bb
@@ -0,0 +1,9 @@
+require mesa_${PV}.bb
+
+SUMMARY += " (OpenGL only, no EGL/GLES)"
+
+PROVIDES = "virtual/libgl virtual/mesa"
+
+S = "${WORKDIR}/mesa-${PV}"
+
+PACKAGECONFIG ??= "opengl dri ${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa.inc b/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa.inc
index 25cbf63..4f31ed2 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa.inc
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa.inc
@@ -10,47 +10,64 @@
 BUGTRACKER = "https://bugs.freedesktop.org"
 SECTION = "x11"
 LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://docs/license.html;md5=899fbe7e42d494c7c8c159c7001693d5"
+LIC_FILES_CHKSUM = "file://docs/license.html;md5=725f991a1cc322aa7a0cd3a2016621c4"
 
 PE = "2"
 
-DEPENDS = "expat makedepend-native flex-native bison-native libxml2-native"
-
+DEPENDS = "expat makedepend-native flex-native bison-native libxml2-native zlib chrpath-replacement-native"
+EXTRANATIVEPATH += "chrpath-native"
 PROVIDES = "virtual/libgl virtual/libgles1 virtual/libgles2 virtual/egl virtual/mesa"
 
-inherit autotools pkgconfig pythonnative gettext distro_features_check
+inherit autotools pkgconfig gettext distro_features_check
 
-REQUIRED_DISTRO_FEATURES = "opengl"
+ANY_OF_DISTRO_FEATURES = "opengl vulkan"
 
-export LLVM_CONFIG = "${STAGING_BINDIR_CROSS}/llvm-config"
-EXTRA_OECONF = "--enable-shared-glapi"
+PLATFORMS ??= "${@bb.utils.filter('PACKAGECONFIG', 'x11 wayland', d)} \
+               ${@bb.utils.contains('PACKAGECONFIG', 'gbm', 'drm', '', d)}"
 
-PACKAGECONFIG ??= "gbm egl gles dri \
-		${@bb.utils.filter('DISTRO_FEATURES', 'wayland x11', d)} \
-		"
+export LLVM_CONFIG = "${STAGING_BINDIR_NATIVE}/llvm-config${MESA_LLVM_RELEASE}"
+export YOCTO_ALTERNATE_EXE_PATH = "${STAGING_LIBDIR}/llvm${MESA_LLVM_RELEASE}/llvm-config"
+EXTRA_OECONF = "--enable-shared-glapi \
+                --disable-opencl \
+                --with-llvm-prefix=${STAGING_LIBDIR}/llvm${MESA_LLVM_RELEASE} \
+                --with-platforms='${PLATFORMS}'"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'wayland vulkan', d)} \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'opengl', 'opengl egl gles gbm dri', '', d)} \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11 opengl', 'x11', '', d)} \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11 vulkan', 'dri3', '', d)} \
+                   "
+
+# "gbm" requires "dri", "opengl"
 PACKAGECONFIG[gbm] = "--enable-gbm,--disable-gbm"
 
 X11_DEPS = "xf86driproto glproto virtual/libx11 libxext libxxf86vm libxdamage libxfixes"
+# "x11" requires "opengl"
 PACKAGECONFIG[x11] = "--enable-glx-tls,--disable-glx,${X11_DEPS}"
 PACKAGECONFIG[xvmc] = "--enable-xvmc,--disable-xvmc,libxvmc"
-PACKAGECONFIG[wayland] = ",,wayland-native wayland"
+PACKAGECONFIG[wayland] = ",,wayland-native wayland libdrm"
 
 DRIDRIVERS = "swrast"
 DRIDRIVERS_append_x86 = ",radeon,r200,nouveau,i965,i915"
 DRIDRIVERS_append_x86-64 = ",radeon,r200,nouveau,i965,i915"
+# "dri" requires "opengl"
 PACKAGECONFIG[dri] = "--enable-dri --with-dri-drivers=${DRIDRIVERS}, --disable-dri, dri2proto libdrm"
 PACKAGECONFIG[dri3] = "--enable-dri3, --disable-dri3, dri3proto presentproto libxshmfence"
 
 # Vulkan drivers need dri3 enabled
 # radeon could be enabled as well but requires gallium-llvm with llvm >= 3.9
-PACKAGECONFIG[vulkan] = "--with-vulkan-drivers=intel, --without-vulkan-drivers"
+VULKAN_DRIVERS = ""
+VULKAN_DRIVERS_append_x86 = ",intel"
+VULKAN_DRIVERS_append_x86-64 = ",intel"
+PACKAGECONFIG[vulkan] = "--with-vulkan-drivers=${VULKAN_DRIVERS}, --without-vulkan-drivers"
 
+PACKAGECONFIG[opengl] = "--enable-opengl, --disable-opengl"
+
+# "gles" requires "opengl"
 PACKAGECONFIG[gles] = "--enable-gles1 --enable-gles2, --disable-gles1 --disable-gles2"
 
-EGL_PLATFORMS  = "drm"
-EGL_PLATFORMS .="${@bb.utils.contains('PACKAGECONFIG', 'x11', ',x11', '', d)}"
-EGL_PLATFORMS .="${@bb.utils.contains('PACKAGECONFIG', 'wayland', ',wayland', '', d)}"
-PACKAGECONFIG[egl] = "--enable-egl --with-egl-platforms=${EGL_PLATFORMS}, --disable-egl"
+# "egl" requires "dri", "opengl"
+PACKAGECONFIG[egl] = "--enable-egl, --disable-egl"
 
 PACKAGECONFIG[etnaviv] = ""
 PACKAGECONFIG[imx] = ""
@@ -65,9 +82,9 @@
 GALLIUMDRIVERS_append_x86 = "${@bb.utils.contains('PACKAGECONFIG', 'gallium-llvm', ',${GALLIUMDRIVERS_LLVM}', '', d)}"
 GALLIUMDRIVERS_append_x86-64 = "${@bb.utils.contains('PACKAGECONFIG', 'gallium-llvm', ',${GALLIUMDRIVERS_LLVM}', '', d)}"
 # keep --with-gallium-drivers separate, because when only one of gallium versions is enabled, other 2 were adding --without-gallium-drivers
-PACKAGECONFIG[gallium]      = "--with-gallium-drivers=${GALLIUMDRIVERS}, --without-gallium-drivers"
-MESA_LLVM_RELEASE ?= "3.3"
-PACKAGECONFIG[gallium-llvm] = "--enable-gallium-llvm --enable-llvm-shared-libs, --disable-gallium-llvm, llvm${MESA_LLVM_RELEASE} \
+PACKAGECONFIG[gallium]      = "--enable-texture-float --with-gallium-drivers=${GALLIUMDRIVERS}, --without-gallium-drivers"
+MESA_LLVM_RELEASE ?= "5.0"
+PACKAGECONFIG[gallium-llvm] = "--enable-llvm --enable-llvm-shared-libs, --disable-llvm, llvm${MESA_LLVM_RELEASE} llvm-native \
                                ${@'elfutils' if ${GALLIUMDRIVERS_LLVM33_ENABLED} else ''}"
 export WANT_LLVM_RELEASE = "${MESA_LLVM_RELEASE}"
 PACKAGECONFIG[xa]  = "--enable-xa, --disable-xa"
@@ -75,9 +92,13 @@
 OSMESA = "${@bb.utils.contains('PACKAGECONFIG', 'gallium', 'gallium-osmesa', 'osmesa', d)}"
 PACKAGECONFIG[osmesa] = "--enable-${OSMESA},--disable-${OSMESA}"
 
+PACKAGECONFIG[unwind] = "--enable-libunwind,--disable-libunwind,libunwind"
+
 # llvmpipe is slow if compiled with -fomit-frame-pointer (e.g. -O2)
 FULL_OPTIMIZATION_append = " -fno-omit-frame-pointer"
 
+CFLAGS_append_armv5 = " -DMISSING_64BIT_ATOMICS"
+
 # Multiple virtual/gl providers being built breaks staging
 EXCLUDE_FROM_WORLD = "1"
 
@@ -106,9 +127,10 @@
     rm -f ${D}${libdir}/egl/*.la
     rm -f ${D}${libdir}/gallium-pipe/*.la
     rm -f ${D}${libdir}/gbm/*.la
-    
+
     # it was packaged in libdricore9.1.3-1 and preventing upgrades when debian.bbclass was used 
     rm -f ${D}${sysconfdir}/drirc
+    chrpath --delete ${D}${libdir}/dri/*_dri.so || true
 }
 
 # For the packages that make up the OpenGL interfaces, inject variables so that
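In the reworked PACKAGECONFIG default for mesa above, bb.utils.contains with a multi-word test such as 'x11 opengl' only takes the true branch when every listed feature is present in DISTRO_FEATURES, which is how the x11 and dri3 options stay disabled on configurations that lack x11. A rough standalone Python approximation of that check (illustrative only, not BitBake's implementation):

    def contains(variable_value, checkvalues, truevalue, falsevalue):
        # Approximation of bb.utils.contains(): take the true branch only if
        # *all* space-separated checkvalues appear in the variable's value.
        present = set(variable_value.split())
        return truevalue if set(checkvalues.split()).issubset(present) else falsevalue

    distro_features = "opengl wayland vulkan"   # example feature set
    print(contains(distro_features, "opengl", "opengl egl gles gbm dri", ""))  # all enabled
    print(contains(distro_features, "x11 opengl", "x11", ""))                  # empty: no x11
    print(contains(distro_features, "x11 vulkan", "dri3", ""))                 # empty: no x11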
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa_17.0.2.bb b/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa_17.0.2.bb
deleted file mode 100644
index 2689e8f..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa_17.0.2.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-require ${BPN}.inc
-
-SRC_URI = "ftp://ftp.freedesktop.org/pub/mesa/mesa-${PV}.tar.xz \
-           file://replace_glibc_check_with_linux.patch \
-           file://disable-asm-on-non-gcc.patch \
-           file://0001-Use-wayland-scanner-in-the-path.patch \
-"
-
-SRC_URI[md5sum] = "8f808e92b893d412fbd6510e1d16f5c5"
-SRC_URI[sha256sum] = "f8f191f909e01e65de38d5bdea5fb057f21649a3aed20948be02348e77a689d4"
-
-#because we cannot rely on the fact that all apps will use pkgconfig,
-#make eglplatform.h independent of MESA_EGL_NO_X11_HEADER
-do_install_append() {
-    if ${@bb.utils.contains('PACKAGECONFIG', 'egl', 'true', 'false', d)}; then
-        sed -i -e 's/^#if defined(MESA_EGL_NO_X11_HEADERS)$/#if defined(MESA_EGL_NO_X11_HEADERS) || ${@bb.utils.contains('PACKAGECONFIG', 'x11', '0', '1', d)}/' ${D}${includedir}/EGL/eglplatform.h
-    fi
-}
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa_17.1.7.bb b/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa_17.1.7.bb
new file mode 100644
index 0000000..39cfce9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa_17.1.7.bb
@@ -0,0 +1,25 @@
+require ${BPN}.inc
+
+SRC_URI = "https://mesa.freedesktop.org/archive/mesa-${PV}.tar.xz \
+           file://replace_glibc_check_with_linux.patch \
+           file://disable-asm-on-non-gcc.patch \
+           file://0001-Use-wayland-scanner-in-the-path.patch \
+           file://0002-hardware-gloat.patch \
+           file://vulkan-mkdir.patch \
+           file://llvm-config-version.patch \
+           file://0001-ac-fix-build-after-LLVM-5.0-SVN-r300718.patch \
+           file://0002-gallivm-Fix-build-against-LLVM-SVN-r302589.patch \
+           file://0001-winsys-svga-drm-Include-sys-types.h.patch \
+           file://0001-configure.ac-Always-check-for-expat.patch \
+           file://0001-Makefile.vulkan.am-explictly-add-lib-expat-to-intel-.patch \
+           "
+SRC_URI[md5sum] = "e40bb428a263bd28cbf6478dae45b207"
+SRC_URI[sha256sum] = "69f472a874b1122404fa0bd13e2d6bf87eb3b9ad9c21d2f39872a96d83d9e5f5"
+
+#because we cannot rely on the fact that all apps will use pkgconfig,
+#make eglplatform.h independent of MESA_EGL_NO_X11_HEADER
+do_install_append() {
+    if ${@bb.utils.contains('PACKAGECONFIG', 'egl', 'true', 'false', d)}; then
+        sed -i -e 's/^#if defined(MESA_EGL_NO_X11_HEADERS)$/#if defined(MESA_EGL_NO_X11_HEADERS) || ${@bb.utils.contains('PACKAGECONFIG', 'x11', '0', '1', d)}/' ${D}${includedir}/EGL/eglplatform.h
+    fi
+}
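The inline expression inside do_install_append above is expanded at parse time, so the sed that actually runs differs depending on whether the x11 PACKAGECONFIG option is set: with x11 the condition gains a harmless "|| 0", while without x11 it gains "|| 1" so eglplatform.h never pulls in the X11 headers. A hypothetical Python rendering of the two resulting replacement strings (illustrative only):

    # What bb.utils.contains('PACKAGECONFIG', 'x11', '0', '1', d) contributes to the sed pattern.
    for x11_enabled in (True, False):
        guard = "0" if x11_enabled else "1"
        print("#if defined(MESA_EGL_NO_X11_HEADERS) || " + guard)
    # "|| 0" leaves the original condition in effect; "|| 1" makes it always true,
    # so builds without x11 get an eglplatform.h that skips the X11 headers.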
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa_git.bb b/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa_git.bb
deleted file mode 100644
index c034517..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/mesa/mesa_git.bb
+++ /dev/null
@@ -1,22 +0,0 @@
-require ${BPN}.inc
-
-DEFAULT_PREFERENCE = "-1"
-
-SRCREV = "ea0d1f575c214c09ba3df12644a960e86e031766"
-PV = "10.5.4+git${SRCPV}"
-
-SRC_URI = "git://anongit.freedesktop.org/git/mesa/mesa;branch=10.5"
-
-S = "${WORKDIR}/git"
-
-DEPENDS += "python-mako-native"
-
-inherit pythonnative
-
-#because we cannot rely on the fact that all apps will use pkgconfig,
-#make eglplatform.h independent of MESA_EGL_NO_X11_HEADER
-do_install_append() {
-    if ${@bb.utils.contains('PACKAGECONFIG', 'egl', 'true', 'false', d)}; then
-        sed -i -e 's/^#if defined(MESA_EGL_NO_X11_HEADERS)$/#if defined(MESA_EGL_NO_X11_HEADERS) || ${@bb.utils.contains('PACKAGECONFIG', 'x11', '0', '1', d)}/' ${D}${includedir}/EGL/eglplatform.h
-    fi
-}
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/pango/pango_1.40.3.bb b/import-layers/yocto-poky/meta/recipes-graphics/pango/pango_1.40.3.bb
deleted file mode 100644
index e259a82..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/pango/pango_1.40.3.bb
+++ /dev/null
@@ -1,52 +0,0 @@
-SUMMARY = "Framework for layout and rendering of internationalized text"
-DESCRIPTION = "Pango is a library for laying out and rendering of text, \
-with an emphasis on internationalization. Pango can be used anywhere \
-that text layout is needed, though most of the work on Pango so far has \
-been done in the context of the GTK+ widget toolkit. Pango forms the \
-core of text and font handling for GTK+-2.x."
-HOMEPAGE = "http://www.pango.org/"
-BUGTRACKER = "http://bugzilla.gnome.org"
-SECTION = "libs"
-LICENSE = "LGPLv2.0+"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=3bf50002aefd002f49e7bb854063f7e7"
-
-inherit gnomebase gtk-doc ptest-gnome upstream-version-is-even gobject-introspection
-
-SRC_URI += "file://run-ptest \
-            file://0001-Drop-introspection-macros-from-acinclude.m4.patch \
-            file://0001-Enforce-recreation-of-docs-pango.types-it-is-build-c.patch \
-"
-SRC_URI[archive.md5sum] = "17c26720f5a862a12f7e1745e2f1d966"
-SRC_URI[archive.sha256sum] = "abba8b5ce728520c3a0f1535eab19eac3c14aeef7faa5aded90017ceac2711d3"
-
-DEPENDS = "glib-2.0 glib-2.0-native fontconfig freetype virtual/libiconv cairo harfbuzz"
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
-PACKAGECONFIG[x11] = "--with-xft,--without-xft,virtual/libx11 libxft"
-
-EXTRA_AUTORECONF = ""
-
-EXTRA_OECONF = " \
-	        --disable-debug \
-	        "
-
-LEAD_SONAME = "libpango-1.0*"
-LIBV = "1.8.0"
-
-# This binary needs to be compiled for the host architecture.  This isn't pretty!
-do_compile_prepend_class-target () {
-	if ${@bb.utils.contains('DISTRO_FEATURES', 'ptest', 'true', 'false', d)}; then
-		make CC="${BUILD_CC}" CFLAGS="" LDFLAGS="${BUILD_LDFLAGS}" AM_CPPFLAGS="$(pkg-config-native --cflags glib-2.0)" gen_all_unicode_LDADD="$(pkg-config-native --libs glib-2.0)" -C ${B}/tests gen-all-unicode
-	fi
-}
-
-FILES_${PN} = "${bindir}/* ${libdir}/libpango*${SOLIBS}"
-FILES_${PN}-dev += "${libdir}/pango/${LIBV}/modules/*.la"
-
-RDEPENDS_${PN}-ptest += "liberation-fonts cantarell-fonts"
-
-RPROVIDES_${PN} += "pango-modules pango-module-indic-lang \
-                    pango-module-basic-fc pango-module-arabic-lang"
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/pango/pango_1.40.6.bb b/import-layers/yocto-poky/meta/recipes-graphics/pango/pango_1.40.6.bb
new file mode 100644
index 0000000..31c3d2a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/pango/pango_1.40.6.bb
@@ -0,0 +1,52 @@
+SUMMARY = "Framework for layout and rendering of internationalized text"
+DESCRIPTION = "Pango is a library for laying out and rendering of text, \
+with an emphasis on internationalization. Pango can be used anywhere \
+that text layout is needed, though most of the work on Pango so far has \
+been done in the context of the GTK+ widget toolkit. Pango forms the \
+core of text and font handling for GTK+-2.x."
+HOMEPAGE = "http://www.pango.org/"
+BUGTRACKER = "http://bugzilla.gnome.org"
+SECTION = "libs"
+LICENSE = "LGPLv2.0+"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=3bf50002aefd002f49e7bb854063f7e7"
+
+inherit gnomebase gtk-doc ptest-gnome upstream-version-is-even gobject-introspection
+
+SRC_URI += "file://run-ptest \
+            file://0001-Drop-introspection-macros-from-acinclude.m4.patch \
+            file://0001-Enforce-recreation-of-docs-pango.types-it-is-build-c.patch \
+"
+SRC_URI[archive.md5sum] = "507c6746fbf53fc9d48c577f1e265de3"
+SRC_URI[archive.sha256sum] = "ca152b7383a1e9f7fd74ae96023dc6770dc5043414793bfe768ff06b6759e573"
+
+DEPENDS = "glib-2.0 glib-2.0-native fontconfig freetype virtual/libiconv cairo harfbuzz"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
+PACKAGECONFIG[x11] = "--with-xft,--without-xft,virtual/libx11 libxft"
+
+EXTRA_AUTORECONF = ""
+
+EXTRA_OECONF = " \
+	        --disable-debug \
+	        "
+
+LEAD_SONAME = "libpango-1.0*"
+LIBV = "1.8.0"
+
+# This binary needs to be compiled for the host architecture.  This isn't pretty!
+do_compile_prepend_class-target () {
+	if ${@bb.utils.contains('DISTRO_FEATURES', 'ptest', 'true', 'false', d)}; then
+		make CC="${BUILD_CC}" CFLAGS="" LDFLAGS="${BUILD_LDFLAGS}" AM_CPPFLAGS="$(pkg-config-native --cflags glib-2.0)" gen_all_unicode_LDADD="$(pkg-config-native --libs glib-2.0)" -C ${B}/tests gen-all-unicode
+	fi
+}
+
+FILES_${PN} = "${bindir}/* ${libdir}/libpango*${SOLIBS}"
+FILES_${PN}-dev += "${libdir}/pango/${LIBV}/modules/*.la"
+
+RDEPENDS_${PN}-ptest += "liberation-fonts cantarell-fonts"
+
+RPROVIDES_${PN} += "pango-modules pango-module-indic-lang \
+                    pango-module-basic-fc pango-module-arabic-lang"
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit/0001-CMake-define-GBM_BO_MAP-only-when-symbol-is-found.patch b/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit/0001-CMake-define-GBM_BO_MAP-only-when-symbol-is-found.patch
deleted file mode 100644
index 8b63424..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit/0001-CMake-define-GBM_BO_MAP-only-when-symbol-is-found.patch
+++ /dev/null
@@ -1,51 +0,0 @@
-From 47697aee05a112422acf203982085e7b3e6c05b2 Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?Daniel=20D=C3=ADaz?= <daniel.diaz@linaro.org>
-Date: Thu, 4 May 2017 00:57:39 -0500
-Subject: [PATCH 1/4] CMake: define GBM_BO_MAP only when symbol is found
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-gbm_bo_map() and _unmap() have been added recently to Mesa,
-and this update may not have reached all implementations of
-GBM, such as the one provided by Mali r6, where said
-definitions can be found in the header file but not in the
-library itself. This leads to errors like the following when
-linking:
-  ../../../../lib/libpiglitutil_gl.so.0: undefined reference to `gbm_bo_unmap'
-  ../../../../lib/libpiglitutil_gl.so.0: undefined reference to `gbm_bo_map'
-  collect2: error: ld returned 1 exit status
-  make[2]: *** [bin/point-sprite] Error 1
-
-Instead of relying on the header file, actually try to link
-using that symbol to determine if PIGLIT_HAS_GBM_BO_MAP
-should be defined.
-
-Upstream-Status: Submitted [piglit@lists.freedesktop.org]
-
-Signed-off-by: Daniel Díaz <daniel.diaz@linaro.org>
-Reviewed-by: Jan Vesely <jan.vesely@rutgers.edu>
-Reviewed-by: Dylan Baker <dylan@pnwbakers.com>
----
- CMakeLists.txt | 5 +++--
- 1 file changed, 3 insertions(+), 2 deletions(-)
-
-diff --git a/CMakeLists.txt b/CMakeLists.txt
-index a4ff99e..cc26fa8 100644
---- a/CMakeLists.txt
-+++ b/CMakeLists.txt
-@@ -141,8 +141,9 @@ IF(${CMAKE_SYSTEM_NAME} MATCHES "Linux")
- 	if(GBM_FOUND)
- 		set(PIGLIT_HAS_GBM True)
- 		add_definitions(-DPIGLIT_HAS_GBM)
--		if (GBM_VERSION VERSION_EQUAL "12.1" OR GBM_VERSION VERSION_GREATER "12.1")
--			set(PIGLIT_HAS_GBM_BO_MAP True)
-+		set(CMAKE_REQUIRED_LIBRARIES ${CMAKE_REQUIRED_LIBRARIES} ${GBM_LIBRARIES})
-+		CHECK_FUNCTION_EXISTS(gbm_bo_map PIGLIT_HAS_GBM_BO_MAP)
-+		if (PIGLIT_HAS_GBM_BO_MAP)
- 			add_definitions(-DPIGLIT_HAS_GBM_BO_MAP)
- 		endif()
- 	endif(GBM_FOUND)
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit/0002-util-egl-Honour-Surfaceless-MESA-in-get_default_disp.patch b/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit/0002-util-egl-Honour-Surfaceless-MESA-in-get_default_disp.patch
deleted file mode 100644
index f3aa1ba..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit/0002-util-egl-Honour-Surfaceless-MESA-in-get_default_disp.patch
+++ /dev/null
@@ -1,54 +0,0 @@
-From a6608f218b5023cef36b3de5ec3c5f00b0211d1c Mon Sep 17 00:00:00 2001
-From: Daniel Diaz <daniel.diaz@linaro.org>
-Date: Wed, 17 May 2017 18:00:15 -0500
-Subject: [PATCH 2/4] util/egl: Honour Surfaceless MESA in get_default_display
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-The EGL_MESA_platform_surfaceless extension was introduced not too long
-ago. Add support for it our helper.
-
-Upstream-Status: Accepted, since git 7b74602.
-
-Signed-off-by: Daniel Díaz <daniel.diaz@linaro.org>
-Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
----
- tests/util/piglit-util-egl.c | 7 +++++++
- 1 file changed, 7 insertions(+)
-
-diff --git a/tests/util/piglit-util-egl.c b/tests/util/piglit-util-egl.c
-index 106c735..389fe12 100644
---- a/tests/util/piglit-util-egl.c
-+++ b/tests/util/piglit-util-egl.c
-@@ -85,6 +85,7 @@ piglit_egl_get_default_display(EGLenum platform)
- 	static bool has_x11 = false;
- 	static bool has_wayland = false;
- 	static bool has_gbm = false;
-+	static bool has_surfaceless_mesa = false;
- 
- 	static EGLDisplay (*peglGetPlatformDisplayEXT)(EGLenum platform, void *native_display, const EGLint *attrib_list);
- 
-@@ -99,6 +100,7 @@ piglit_egl_get_default_display(EGLenum platform)
- 		has_x11 = piglit_is_egl_extension_supported(EGL_NO_DISPLAY, "EGL_EXT_platform_x11");
- 		has_wayland = piglit_is_egl_extension_supported(EGL_NO_DISPLAY, "EGL_EXT_platform_wayland");
- 		has_gbm = piglit_is_egl_extension_supported(EGL_NO_DISPLAY, "EGL_EXT_platform_gbm");
-+		has_surfaceless_mesa = piglit_is_egl_extension_supported(EGL_NO_DISPLAY, "EGL_MESA_platform_surfaceless");
- 
- 		peglGetPlatformDisplayEXT = (void*) eglGetProcAddress("eglGetPlatformDisplayEXT");
- 	}
-@@ -123,6 +125,11 @@ piglit_egl_get_default_display(EGLenum platform)
- 			return EGL_NO_DISPLAY;
- 		}
- 		break;
-+	case EGL_PLATFORM_SURFACELESS_MESA:
-+		if (!has_surfaceless_mesa) {
-+			return EGL_NO_DISPLAY;
-+		}
-+		break;
- 	default:
- 		fprintf(stderr, "%s: unrecognized platform %#x\n", __func__, platform);
- 		return EGL_NO_DISPLAY;
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit/0003-egl_mesa_platform_surfaceless-Don-t-use-eglGetPlatfo.patch b/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit/0003-egl_mesa_platform_surfaceless-Don-t-use-eglGetPlatfo.patch
deleted file mode 100644
index 0f24dc1..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit/0003-egl_mesa_platform_surfaceless-Don-t-use-eglGetPlatfo.patch
+++ /dev/null
@@ -1,36 +0,0 @@
-From c0dc430b8f5deeacdb11cd188195e16f512af233 Mon Sep 17 00:00:00 2001
-From: Daniel Diaz <daniel.diaz@linaro.org>
-Date: Wed, 17 May 2017 18:00:16 -0500
-Subject: [PATCH 3/4] egl_mesa_platform_surfaceless: Don't use
- eglGetPlatformDisplay directly
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-The entry point is not guaranteed to exist, so use the
-piglit_egl_get_default_display() helper which does the correct thing.
-
-Upstream-Status: Accepted, since git 7b74602.
-
-Signed-off-by: Daniel Díaz <daniel.diaz@linaro.org>
-Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
----
- .../spec/egl_mesa_platform_surfaceless/egl_mesa_platform_surfaceless.c  | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/tests/egl/spec/egl_mesa_platform_surfaceless/egl_mesa_platform_surfaceless.c b/tests/egl/spec/egl_mesa_platform_surfaceless/egl_mesa_platform_surfaceless.c
-index 3bbd6aa..81a3919 100644
---- a/tests/egl/spec/egl_mesa_platform_surfaceless/egl_mesa_platform_surfaceless.c
-+++ b/tests/egl/spec/egl_mesa_platform_surfaceless/egl_mesa_platform_surfaceless.c
-@@ -31,7 +31,7 @@ test_setup(EGLDisplay *dpy)
- 
- 	piglit_require_egl_extension(EGL_NO_DISPLAY, "EGL_MESA_platform_surfaceless");
- 
--	*dpy = eglGetPlatformDisplay(EGL_PLATFORM_SURFACELESS_MESA, NULL, NULL);
-+	*dpy = piglit_egl_get_default_display(EGL_PLATFORM_SURFACELESS_MESA);
- 	if (*dpy == EGL_NO_DISPLAY) {
- 		printf("failed to get EGLDisplay\n");
- 		piglit_report_result(PIGLIT_SKIP);
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit/0004-egl_mesa_platform_surfaceless-Use-EXT-functions-for-.patch b/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit/0004-egl_mesa_platform_surfaceless-Use-EXT-functions-for-.patch
deleted file mode 100644
index 0952af5..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit/0004-egl_mesa_platform_surfaceless-Use-EXT-functions-for-.patch
+++ /dev/null
@@ -1,78 +0,0 @@
-From 57de1ff6758ec5ea4a52637f233e3e3150086255 Mon Sep 17 00:00:00 2001
-From: Daniel Diaz <daniel.diaz@linaro.org>
-Date: Wed, 17 May 2017 18:00:17 -0500
-Subject: [PATCH 4/4] egl_mesa_platform_surfaceless: Use EXT functions for
- surfaces
-MIME-Version: 1.0
-Content-Type: text/plain; charset=UTF-8
-Content-Transfer-Encoding: 8bit
-
-These EXT symbols are guaranteed to exist since they require
-EGL_EXT_platform_base.
-
-Upstream-Status: Accepted, since git 7b74602.
-
-Signed-off-by: Daniel Díaz <daniel.diaz@linaro.org>
-Reviewed-by: Emil Velikov <emil.velikov@collabora.com>
----
- .../egl_mesa_platform_surfaceless.c                | 23 ++++++++++++++++++++--
- 1 file changed, 21 insertions(+), 2 deletions(-)
-
-diff --git a/tests/egl/spec/egl_mesa_platform_surfaceless/egl_mesa_platform_surfaceless.c b/tests/egl/spec/egl_mesa_platform_surfaceless/egl_mesa_platform_surfaceless.c
-index 81a3919..264ed71 100644
---- a/tests/egl/spec/egl_mesa_platform_surfaceless/egl_mesa_platform_surfaceless.c
-+++ b/tests/egl/spec/egl_mesa_platform_surfaceless/egl_mesa_platform_surfaceless.c
-@@ -24,6 +24,24 @@
- #include "piglit-util.h"
- #include "piglit-util-egl.h"
- 
-+/* Extension function pointers.
-+ *
-+ * Use prefix 'pegl' (piglit egl) instead of 'egl' to avoid collisions with
-+ * prototypes in eglext.h. */
-+EGLSurface (*peglCreatePlatformPixmapSurfaceEXT)(EGLDisplay display, EGLConfig config,
-+	    NativePixmapType native_pixmap, const EGLint *attrib_list);
-+EGLSurface (*peglCreatePlatformWindowSurfaceEXT)(EGLDisplay display, EGLConfig config,
-+	    NativeWindowType native_window, const EGLint *attrib_list);
-+
-+static void
-+init_egl_extension_funcs(void)
-+{
-+	peglCreatePlatformPixmapSurfaceEXT = (void*)
-+		eglGetProcAddress("eglCreatePlatformPixmapSurfaceEXT");
-+	peglCreatePlatformWindowSurfaceEXT = (void*)
-+		eglGetProcAddress("eglCreatePlatformWindowSurfaceEXT");
-+}
-+
- static void
- test_setup(EGLDisplay *dpy)
- {
-@@ -72,7 +90,7 @@ test_create_window(void *test_data)
- 
- 	test_setup(&dpy);
- 
--	surf = eglCreatePlatformWindowSurface(dpy, EGL_NO_CONFIG_KHR,
-+	surf = peglCreatePlatformWindowSurfaceEXT(dpy, EGL_NO_CONFIG_KHR,
- 					      /*native_window*/ NULL,
- 					      /*attrib_list*/ NULL);
- 	if (surf) {
-@@ -103,7 +121,7 @@ test_create_pixmap(void *test_data)
- 
- 	test_setup(&dpy);
- 
--	surf = eglCreatePlatformPixmapSurface(dpy, EGL_NO_CONFIG_KHR,
-+	surf = peglCreatePlatformPixmapSurfaceEXT(dpy, EGL_NO_CONFIG_KHR,
- 					      /*native_window*/ NULL,
- 					      /*attrib_list*/ NULL);
- 	if (surf) {
-@@ -205,6 +223,7 @@ main(int argc, char **argv)
- 		piglit_report_result(PIGLIT_FAIL);
- 	}
- 
-+	init_egl_extension_funcs();
- 	result = piglit_run_selected_subtests(subtests, selected_names,
- 					      num_selected, result);
- 	piglit_report_result(result);
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit_git.bb b/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit_git.bb
index 2ea5779..d274a8f 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/piglit/piglit_git.bb
@@ -5,14 +5,11 @@
 SRC_URI = "git://anongit.freedesktop.org/piglit \
            file://0001-cmake-install-bash-completions-in-the-right-place.patch \
            file://0001-tests-Use-FE_UPWARD-only-if-its-defined-in-fenv.h.patch \
-           file://0001-CMake-define-GBM_BO_MAP-only-when-symbol-is-found.patch \
-           file://0002-util-egl-Honour-Surfaceless-MESA-in-get_default_disp.patch \
-           file://0003-egl_mesa_platform_surfaceless-Don-t-use-eglGetPlatfo.patch \
-           file://0004-egl_mesa_platform_surfaceless-Use-EXT-functions-for-.patch \
            "
+UPSTREAM_VERSION_UNKNOWN = "1"
 
-# From 2017-02-06
-SRCREV = "ca58eec0b965655c7eba592a634cbf4aadfbc675"
+# From 2017-07-03
+SRCREV = "c8f4fd9eeb298a2ef0855927f22634f794ef3eff"
 # (when PV goes above 1.0 remove the trailing r)
 PV = "1.0+gitr${SRCPV}"
 
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/ttf-fonts/liberation-fonts_1.04.bb b/import-layers/yocto-poky/meta/recipes-graphics/ttf-fonts/liberation-fonts_1.04.bb
deleted file mode 100644
index 74212e7..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/ttf-fonts/liberation-fonts_1.04.bb
+++ /dev/null
@@ -1,40 +0,0 @@
-SUMMARY = "Liberation(tm) Fonts"
-DESCRIPTION = "The Liberation(tm) Fonts is a font family originally \
-created by Ascender(c) which aims at metric compatibility with \
-Arial, Times New Roman, Courier New."
-HOMEPAGE = "https://releases.pagure.org/liberation-fonts/"
-BUGTRACKER = "https://bugzilla.redhat.com/"
-
-RECIPE_NO_UPDATE_REASON = "2.x depends on fontforge package, which is not yet provided in oe-core"
-
-SECTION = "x11/fonts"
-LICENSE = "GPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f"
-PR = "r4"
-PE = "1"
-
-inherit allarch fontcache
-
-FONT_PACKAGES = "${PN}"
-
-SRC_URI = "https://releases.pagure.org/liberation-fonts/liberation-fonts-${PV}.tar.gz \
-           file://30-liberation-aliases.conf"
-
-SRC_URI[md5sum] = "4846797ef0fc70b0cbaede2514677c58"
-SRC_URI[sha256sum] = "0e0d0957c85b758561a3d4aef4ebcd2c39959e5328429d96ae106249d83531a1"
-
-do_install () {
-	install -d ${D}${datadir}/fonts/ttf/
-	for i in *.ttf; do
-		install -m 0644 $i ${D}${prefix}/share/fonts/ttf/${i}
-	done
-
-	install -d ${D}${sysconfdir}/fonts/conf.d/
-	install -m 0644 ${WORKDIR}/30-liberation-aliases.conf ${D}${sysconfdir}/fonts/conf.d/
-
-	install -d ${D}${prefix}/share/doc/${BPN}/
-	install -m 0644 License.txt ${D}${datadir}/doc/${BPN}/
-}
-
-PACKAGES = "${PN}"
-FILES_${PN} += "${sysconfdir} ${datadir}"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/ttf-fonts/liberation-fonts_2.00.1.bb b/import-layers/yocto-poky/meta/recipes-graphics/ttf-fonts/liberation-fonts_2.00.1.bb
new file mode 100644
index 0000000..412da48
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/ttf-fonts/liberation-fonts_2.00.1.bb
@@ -0,0 +1,39 @@
+SUMMARY = "Liberation(tm) Fonts"
+DESCRIPTION = "The Liberation(tm) Fonts is a font family originally \
+created by Ascender(c) which aims at metric compatibility with \
+Arial, Times New Roman, Courier New."
+HOMEPAGE = "https://releases.pagure.org/liberation-fonts/"
+BUGTRACKER = "https://bugzilla.redhat.com/"
+
+SECTION = "x11/fonts"
+LICENSE = "OFL-1.1"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=f96db970a9a46c5369142b99f530366b"
+PE = "1"
+
+inherit allarch fontcache
+
+FONT_PACKAGES = "${PN}"
+
+SRC_URI = "https://releases.pagure.org/liberation-fonts/liberation-fonts-ttf-${PV}.tar.gz \
+           file://30-liberation-aliases.conf"
+
+S = "${WORKDIR}/liberation-fonts-ttf-${PV}"
+
+SRC_URI[md5sum] = "5c781723a0d9ed6188960defba8e91cf"
+SRC_URI[sha256sum] = "7890278a6cd17873c57d9cd785c2d230d9abdea837e96516019c5885dd271504"
+
+do_install () {
+	install -d ${D}${datadir}/fonts/ttf/
+	for i in *.ttf; do
+		install -m 0644 $i ${D}${prefix}/share/fonts/ttf/${i}
+	done
+
+	install -d ${D}${sysconfdir}/fonts/conf.d/
+	install -m 0644 ${WORKDIR}/30-liberation-aliases.conf ${D}${sysconfdir}/fonts/conf.d/
+
+	install -d ${D}${prefix}/share/doc/${BPN}/
+	install -m 0644 LICENSE ${D}${datadir}/doc/${BPN}/
+}
+
+PACKAGES = "${PN}"
+FILES_${PN} += "${sysconfdir} ${datadir}"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/vulkan/assimp_4.0.0.bb b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/assimp_4.0.0.bb
new file mode 100644
index 0000000..7a96a4f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/assimp_4.0.0.bb
@@ -0,0 +1,22 @@
+DESCRIPTION = "Open Asset Import Library is a portable Open Source library to import \
+               various well-known 3D model formats in a uniform manner."
+HOMEPAGE = "http://www.assimp.org/"
+SECTION = "devel"
+
+LICENSE = "BSD-3-Clause"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=2119edef0916b0bd511cb3c731076271"
+
+DEPENDS = "zlib"
+
+SRC_URI = "git://github.com/assimp/assimp.git"
+UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>(\d+(\.\d+)+))"
+
+SRCREV = "52c8d06f5d6498afd66df983da348a6b112f1314"
+
+S = "${WORKDIR}/git"
+
+inherit cmake
+
+EXTRA_OECMAKE = "-DASSIMP_BUILD_ASSIMP_TOOLS=OFF -DASSIMP_BUILD_TESTS=OFF -DASSIMP_LIB_INSTALL_DIR=${baselib}"
+
+FILES_${PN}-dev += "${libdir}/cmake/"
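UPSTREAM_CHECK_GITTAGREGEX above tells BitBake's upstream-version check how to derive a version from git tags: the named group pver in v(?P<pver>(\d+(\.\d+)+)) captures "4.0.0" from a tag such as "v4.0.0". A rough stand-alone illustration of the same match using POSIX extended regular expressions, which have no named groups, so the version lands in capture group 1 (this snippet is purely illustrative and not part of any recipe):

#include <regex.h>
#include <stdio.h>

int main(void)
{
    regex_t re;
    regmatch_t m[2];
    const char *tag = "v4.0.0";

    /* POSIX ERE equivalent of v(\d+(\.\d+)+): \d becomes [0-9]. */
    if (regcomp(&re, "^v([0-9]+(\\.[0-9]+)+)$", REG_EXTENDED) != 0)
        return 1;

    if (regexec(&re, tag, 2, m, 0) == 0)
        printf("pver = %.*s\n",
               (int)(m[1].rm_eo - m[1].rm_so), tag + m[1].rm_so);

    regfree(&re);
    return 0;
}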
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan-demos/0001-Don-t-build-demos-with-questionably-licensed-data.patch b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan-demos/0001-Don-t-build-demos-with-questionably-licensed-data.patch
new file mode 100644
index 0000000..d32c8f2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan-demos/0001-Don-t-build-demos-with-questionably-licensed-data.patch
@@ -0,0 +1,91 @@
+From 55770fb07c42fe410cf8d09f8f5976babc89b9ef Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Tue, 4 Jul 2017 17:13:45 +0300
+Subject: [PATCH] Don't build demos with questionably licensed data
+
+Some of the models don't have open source compatible licenses:
+don't build demos using those. Also don't build demos that need
+resources that are not included.
+
+ssao:
+scenerendering:
+	Sibenik model, no license found
+
+deferred:
+deferredmultisampling:
+deferredshadows:
+	armor model, CC-BY-3.0
+
+vulkanscene:
+imgui:
+shadowmapping:
+	vulkanscene model, no license found
+
+indirectdraw:
+	plant model, no license found
+
+hdr:
+pbribl:
+pbrtexture:
+	Require external Vulkan Asset Pack
+
+Upstream-Status: Inappropriate [configuration]
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+---
+ CMakeLists.txt | 13 -------------
+ 1 file changed, 13 deletions(-)
+
+diff --git a/CMakeLists.txt b/CMakeLists.txt
+index 4958fff..0f9d3e4 100644
+--- a/CMakeLists.txt
++++ b/CMakeLists.txt
+@@ -150,17 +150,11 @@ set(EXAMPLES
+ 	computeparticles
+ 	computeshader
+ 	debugmarker
+-	deferred
+-	deferredmultisampling
+-	deferredshadows
+ 	displacement
+ 	distancefieldfonts
+ 	dynamicuniformbuffer
+ 	gears
+ 	geometryshader
+-	hdr
+-	imgui
+-	indirectdraw	
+ 	instancing
+ 	mesh
+ 	multisampling
+@@ -170,20 +164,14 @@ set(EXAMPLES
+ 	parallaxmapping
+ 	particlefire
+ 	pbrbasic
+-	pbribl
+-	pbrtexture
+ 	pipelines
+ 	pushconstants
+ 	radialblur
+ 	raytracing
+-	scenerendering
+ 	screenshot
+-	shadowmapping
+-	shadowmappingomni
+ 	skeletalanimation
+ 	specializationconstants
+ 	sphericalenvmapping
+-	ssao
+ 	subpasses
+ 	terraintessellation
+ 	tessellation
+@@ -196,7 +184,6 @@ set(EXAMPLES
+ 	texturesparseresidency
+ 	triangle
+ 	viewportarray
+-	vulkanscene
+ )
+ 
+ buildExamples()
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan-demos/0001-Fix-build-on-x86.patch b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan-demos/0001-Fix-build-on-x86.patch
new file mode 100644
index 0000000..681b342
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan-demos/0001-Fix-build-on-x86.patch
@@ -0,0 +1,41 @@
+From b0495efb6c3ea3a530fcbaddac86da57ecce5a66 Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Mon, 10 Jul 2017 13:11:12 +0300
+Subject: [PATCH] Fix build on x86
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+| func_common.inl:193:51: error: wrong number of template arguments
+| (5, should be 6) struct compute_sign<T, P, vecType, false, Aligned>
+
+The fix is backported from the upstream glm project.
+
+Upstream-Status: Pending [https://github.com/SaschaWillems/Vulkan/issues/356]
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+---
+ external/glm/glm/detail/func_common.inl | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/external/glm/glm/detail/func_common.inl b/external/glm/glm/detail/func_common.inl
+index cafaed5..2dd94e1 100644
+--- a/external/glm/glm/detail/func_common.inl
++++ b/external/glm/glm/detail/func_common.inl
+@@ -190,12 +190,12 @@ namespace detail
+ 
+ #	if GLM_ARCH == GLM_ARCH_X86
+ 	template<length_t L, typename T, precision P, template<length_t, typename, precision> class vecType, bool Aligned>
+-	struct compute_sign<T, P, vecType, false, Aligned>
++	struct compute_sign<L, T, P, vecType, false, Aligned>
+ 	{
+ 		GLM_FUNC_QUALIFIER static vecType<L, T, P> call(vecType<L, T, P> const & x)
+ 		{
+ 			T const Shift(static_cast<T>(sizeof(T) * 8 - 1));
+-			vecType<L, T, P> const y(vecType<typename make_unsigned<T>::type, P>(-x) >> typename make_unsigned<T>::type(Shift));
++			vecType<L, T, P> const y(vecType<L, typename make_unsigned<T>::type, P>(-x) >> typename make_unsigned<T>::type(Shift));
+ 
+ 			return (x >> Shift) | y;
+ 		}
+-- 
+2.1.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan-demos/0001-Support-installing-demos-support-out-of-tree-builds.patch b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan-demos/0001-Support-installing-demos-support-out-of-tree-builds.patch
new file mode 100644
index 0000000..4addea3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan-demos/0001-Support-installing-demos-support-out-of-tree-builds.patch
@@ -0,0 +1,85 @@
+From edca667684764cfcc0460e448e834fadf623a887 Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Mon, 3 Jul 2017 14:49:18 +0300
+Subject: [PATCH] Support installing demos, support out-of-tree builds
+
+This is especially useful for cross-compile situation where testing
+happens on target.
+
+-DRESOURCE_INSTALL_DIR=<path> decides where data is installed (and
+where the binaries will load the data from): if it's left empty,
+then nothing will be installed and binaries will load the data from
+CMAKE_SOURCE_DIR.
+
+Binaries are now correctly built in CMAKE_BINARY_DIR.
+
+Upstream-Status: Submitted [https://github.com/SaschaWillems/Vulkan/pull/352]
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+---
+ CMakeLists.txt             | 15 ++++++++++++++-
+ base/vulkanexamplebase.cpp |  2 +-
+ 2 files changed, 15 insertions(+), 2 deletions(-)
+
+diff --git a/CMakeLists.txt b/CMakeLists.txt
+index b9886bc..4958fff 100644
+--- a/CMakeLists.txt
++++ b/CMakeLists.txt
+@@ -16,6 +16,8 @@ include_directories(base)
+ OPTION(USE_D2D_WSI "Build the project using Direct to Display swapchain" OFF)
+ OPTION(USE_WAYLAND_WSI "Build the project using Wayland swapchain" OFF)
+ 
++set(RESOURCE_INSTALL_DIR "" CACHE PATH "Path to install resources to (leave empty for running uninstalled)")
++
+ # Use FindVulkan module added with CMAKE 3.7
+ if (NOT CMAKE_VERSION VERSION_LESS 3.7.0)
+ 	message(STATUS "Using module to find Vulkan")
+@@ -108,6 +110,10 @@ function(buildExample EXAMPLE_NAME)
+ 		add_executable(${EXAMPLE_NAME} ${MAIN_CPP} ${SOURCE} ${SHADERS})
+ 		target_link_libraries(${EXAMPLE_NAME} ${Vulkan_LIBRARY} ${ASSIMP_LIBRARIES} ${WAYLAND_CLIENT_LIBRARIES} ${CMAKE_THREAD_LIBS_INIT})
+ 	endif(WIN32)
++
++	if(RESOURCE_INSTALL_DIR)
++		install(TARGETS ${EXAMPLE_NAME} DESTINATION ${CMAKE_INSTALL_BINDIR})
++	endif()
+ endfunction(buildExample)
+ 
+ # Build all examples
+@@ -117,6 +123,13 @@ function(buildExamples)
+ 	endforeach(EXAMPLE)
+ endfunction(buildExamples)
+ 
++if(RESOURCE_INSTALL_DIR)
++	add_definitions(-DVK_EXAMPLE_DATA_DIR=\"${RESOURCE_INSTALL_DIR}/\")
++	install(DIRECTORY data/ DESTINATION ${RESOURCE_INSTALL_DIR}/)
++else()
++	add_definitions(-DVK_EXAMPLE_DATA_DIR=\"${CMAKE_SOURCE_DIR}/data/\")
++endif()
++
+ # Compiler specific stuff
+ IF(MSVC)
+ 	SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} /EHsc")
+@@ -128,7 +141,7 @@ ELSE(WIN32)
+ 	link_libraries(${XCB_LIBRARIES} ${Vulkan_LIBRARY})
+ ENDIF(WIN32)
+ 
+-set(CMAKE_RUNTIME_OUTPUT_DIRECTORY "${CMAKE_SOURCE_DIR}/bin/")
++set(CMAKE_RUNTIME_OUTPUT_DIRECTORY "${CMAKE_BINARY_DIR}/bin/")
+ 
+ set(EXAMPLES 
+ 	bloom
+diff --git a/base/vulkanexamplebase.cpp b/base/vulkanexamplebase.cpp
+index 647368a..a0f28a5 100644
+--- a/base/vulkanexamplebase.cpp
++++ b/base/vulkanexamplebase.cpp
+@@ -84,7 +84,7 @@ const std::string VulkanExampleBase::getAssetPath()
+ #if defined(__ANDROID__)
+ 	return "";
+ #else
+-	return "./../data/";
++	return VK_EXAMPLE_DATA_DIR;
+ #endif
+ }
+ #endif
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan-demos_git.bb b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan-demos_git.bb
new file mode 100644
index 0000000..0b89435
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan-demos_git.bb
@@ -0,0 +1,41 @@
+DESCRIPTION = "Collection of Vulkan examples"
+LICENSE = "MIT"
+DEPENDS = "zlib"
+
+LIC_FILES_CHKSUM = "file://LICENSE.md;md5=dcf473723faabf17baa9b5f2207599d0 \
+                    file://triangle/triangle.cpp;endline=12;md5=bccd1bf9cadd9e10086cf7872157e4fa"
+
+SRC_URI = "git://github.com/SaschaWillems/Vulkan.git \
+           file://0001-Support-installing-demos-support-out-of-tree-builds.patch \
+           file://0001-Don-t-build-demos-with-questionably-licensed-data.patch \
+           file://0001-Fix-build-on-x86.patch \
+"
+UPSTREAM_VERSION_UNKNOWN = "1"
+SRCREV = "18df00c7b4677b0889486e16977857aa987947e2"
+UPSTREAM_CHECK_GITTAGREGEX = "These are not the releases you're looking for"
+S = "${WORKDIR}/git"
+
+REQUIRED_DISTRO_FEATURES = 'vulkan'
+
+inherit cmake distro_features_check
+DEPENDS = "vulkan assimp"
+
+do_install_append () {
+    # Remove assets that have uncertain licenses
+    rm ${D}${datadir}/vulkan-demos/models/armor/* \
+       ${D}${datadir}/vulkan-demos/models/sibenik/* \
+       ${D}${datadir}/vulkan-demos/models/vulkanscene* \
+       ${D}${datadir}/vulkan-demos/models/plants.dae \
+       ${D}${datadir}/vulkan-demos/textures/texturearray_plants*
+
+    mv ${D}${bindir}/screenshot ${D}${bindir}/vulkan-screenshot
+}
+
+EXTRA_OECMAKE = "-DRESOURCE_INSTALL_DIR=${datadir}/vulkan-demos"
+
+ANY_OF_DISTRO_FEATURES = "x11 wayland"
+
+# Can only pick one of [wayland,xcb]
+PACKAGECONFIG = "${@bb.utils.contains('DISTRO_FEATURES', 'wayland', 'wayland', 'xcb' ,d)}"
+PACKAGECONFIG[wayland] = "-DUSE_WAYLAND_WSI=ON, -DUSE_WAYLAND_WSI=OFF, wayland"
+PACKAGECONFIG[xcb] = ",,libxcb"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan/0001-Use-getenv-if-secure_getenv-does-not-exist.patch b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan/0001-Use-getenv-if-secure_getenv-does-not-exist.patch
deleted file mode 100644
index 694922c..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan/0001-Use-getenv-if-secure_getenv-does-not-exist.patch
+++ /dev/null
@@ -1,34 +0,0 @@
-From 20525add1df8e1fb13fef90ac068f982def8b958 Mon Sep 17 00:00:00 2001
-From: Jussi Kukkonen <jussi.kukkonen@intel.com>
-Date: Wed, 8 Mar 2017 13:23:58 +0200
-Subject: [PATCH] Use getenv() if secure_getenv() does not exist
-
-musl does not implement the secure version: default to getenv() in that
-case.
-
-https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers/issues/1538
-
-Upstream-Status: Pending
-Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
----
- loader/loader.c | 4 ++++
- 1 file changed, 4 insertions(+)
-
-diff --git a/loader/loader.c b/loader/loader.c
-index 24758f4..bff79c1 100644
---- a/loader/loader.c
-+++ b/loader/loader.c
-@@ -54,6 +54,10 @@
- #endif
- #endif
- 
-+#if !defined(__secure_getenv)
-+#define __secure_getenv getenv
-+#endif
-+
- struct loader_struct loader = {0};
- // TLS for instance for alloc/free callbacks
- THREAD_LOCAL_DECL struct loader_instance *tls_instance;
--- 
-2.1.4
-
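The deleted loader patch above papered over musl's missing secure_getenv() by defining __secure_getenv to getenv. A sketch of the more usual shape of that fallback, assuming a HAVE_SECURE_GETENV macro produced by a configure-time check (the macro and the loader_getenv/vk_layer_path names are illustrative, not the Vulkan loader's actual code):

#define _GNU_SOURCE          /* secure_getenv() is a GNU extension on glibc */
#include <stdlib.h>

#ifdef HAVE_SECURE_GETENV
#define loader_getenv(name) secure_getenv(name)
#else
#define loader_getenv(name) getenv(name)     /* musl, older libcs */
#endif

const char *vk_layer_path(void)
{
    /* NULL when unset, or when secure_getenv() refuses (setuid process). */
    return loader_getenv("VK_LAYER_PATH");
}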
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan_1.0.39.1.bb b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan_1.0.39.1.bb
deleted file mode 100644
index 45d1c49..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan_1.0.39.1.bb
+++ /dev/null
@@ -1,34 +0,0 @@
-SUMMARY = "3D graphics and compute API common loader"
-DESCRIPTION = "Vulkan is a new generation graphics and compute API \
-that provides efficient access to modern GPUs. These packages \
-provide only the common vendor-agnostic library loader, headers and \
-the vulkaninfo utility."
-HOMEPAGE = "https://www.khronos.org/vulkan/"
-BUGTRACKER = "https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers"
-SECTION = "libs"
-
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=99c647ca3d4f6a4b9d8628f757aad156 \
-                    file://loader/loader.c;endline=25;md5=a87cd5442291c23d1fce4eece4cfde9d"
-SRC_URI = "git://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers.git;branch=sdk-1.0.39 \
-           file://demos-Don-t-build-tri-or-cube.patch \
-           file://0001-Use-getenv-if-secure_getenv-does-not-exist.patch \
-"
-SRCREV = "9c21ed0fb275589c3af6118aec9ef4f1d1544dc1"
-
-S = "${WORKDIR}/git"
-
-
-inherit cmake python3native lib_package distro_features_check
-ANY_OF_DISTRO_FEATURES = "x11 wayland"
-
-EXTRA_OECMAKE = "-DBUILD_WSI_MIR_SUPPORT=OFF \
-                 -DBUILD_LAYERS=OFF \
-                 -DBUILD_TESTS=OFF"
-
-# must choose x11 or wayland or both
-PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'x11', '' ,d)} \
-                   ${@bb.utils.contains('DISTRO_FEATURES', 'wayland', 'wayland', '' ,d)}"
-PACKAGECONFIG[x11] = "-DBUILD_WSI_XLIB_SUPPORT=ON -DBUILD_WSI_XCB_SUPPORT=ON -DDEMOS_WSI_SELECTION=XCB, -DBUILD_WSI_XLIB_SUPPORT=OFF -DBUILD_WSI_XCB_SUPPORT=OFF -DDEMOS_WSI_SELECTION=WAYLAND, libxcb libx11 libxrandr"
-PACKAGECONFIG[wayland] = "-DBUILD_WSI_WAYLAND_SUPPORT=ON, -DBUILD_WSI_WAYLAND_SUPPORT=OFF, wayland"
-
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan_1.0.51.0.bb b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan_1.0.51.0.bb
new file mode 100644
index 0000000..9de39bc
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/vulkan/vulkan_1.0.51.0.bb
@@ -0,0 +1,36 @@
+SUMMARY = "3D graphics and compute API common loader"
+DESCRIPTION = "Vulkan is a new generation graphics and compute API \
+that provides efficient access to modern GPUs. These packages \
+provide only the common vendor-agnostic library loader, headers and \
+the vulkaninfo utility."
+HOMEPAGE = "https://www.khronos.org/vulkan/"
+BUGTRACKER = "https://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers"
+SECTION = "libs"
+
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE.txt;md5=99c647ca3d4f6a4b9d8628f757aad156 \
+                    file://loader/loader.c;endline=25;md5=a87cd5442291c23d1fce4eece4cfde9d"
+SRC_URI = "git://github.com/KhronosGroup/Vulkan-LoaderAndValidationLayers.git;branch=sdk-1.0.51 \
+           file://demos-Don-t-build-tri-or-cube.patch \
+"
+SRCREV = "8d021e4d5a9f91436f4462df1dafb222908e296d"
+UPSTREAM_CHECK_GITTAGREGEX = "sdk-(?P<pver>\d+(\.\d+)+)"
+
+S = "${WORKDIR}/git"
+
+REQUIRED_DISTRO_FEATURES = "vulkan"
+
+inherit cmake python3native lib_package distro_features_check
+ANY_OF_DISTRO_FEATURES = "x11 wayland"
+
+EXTRA_OECMAKE = "-DBUILD_WSI_MIR_SUPPORT=OFF \
+                 -DBUILD_LAYERS=OFF \
+                 -DBUILD_TESTS=OFF"
+
+# must choose x11 or wayland or both
+PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'x11', '' ,d)} \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'wayland', 'wayland', '' ,d)}"
+PACKAGECONFIG[x11] = "-DBUILD_WSI_XLIB_SUPPORT=ON -DBUILD_WSI_XCB_SUPPORT=ON -DDEMOS_WSI_SELECTION=XCB, -DBUILD_WSI_XLIB_SUPPORT=OFF -DBUILD_WSI_XCB_SUPPORT=OFF -DDEMOS_WSI_SELECTION=WAYLAND, libxcb libx11 libxrandr"
+PACKAGECONFIG[wayland] = "-DBUILD_WSI_WAYLAND_SUPPORT=ON, -DBUILD_WSI_WAYLAND_SUPPORT=OFF, wayland"
+
+RRECOMMENDS_${PN} = "mesa-vulkan-drivers"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/wayland/libinput/0001-tools-Fix-race-in-autotools-install.patch b/import-layers/yocto-poky/meta/recipes-graphics/wayland/libinput/0001-tools-Fix-race-in-autotools-install.patch
new file mode 100644
index 0000000..4667538
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/wayland/libinput/0001-tools-Fix-race-in-autotools-install.patch
@@ -0,0 +1,37 @@
+From 5e8864c5b7a2e258eea041b0ef66dac7fcab9b7f Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Wed, 9 Aug 2017 09:47:14 +0300
+Subject: [PATCH] tools: Fix race in (autotools) install
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+The exec/data distinction is made based on the install dir, so the compat
+scripts must be moved in the install-exec hook.
+
+This should fix this occasional failure:
+| install: cannot change permissions of
+| ‘/usr/bin/libinput-debug-events.compat’: No such file or directory
+
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Upstream-Status: Submitted
+---
+ tools/Makefile.am | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tools/Makefile.am b/tools/Makefile.am
+index 2c8660b..7ee8b90 100644
+--- a/tools/Makefile.am
++++ b/tools/Makefile.am
+@@ -63,7 +63,7 @@ endif
+ 
+ EXTRA_DIST = make-ptraccel-graphs.sh install-compat-scripts.sh $(bin_SCRIPTS)
+ 
+-install-data-hook:
++install-exec-hook:
+ 	(cd $(DESTDIR)$(bindir) && mv libinput-list-devices.compat libinput-list-devices)
+ 	(cd $(DESTDIR)$(bindir) && mv libinput-debug-events.compat libinput-debug-events)
+ 
+-- 
+2.13.3
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/wayland/libinput/touchpad-serial-synaptics-need-to-fake-new-touches-on-TRIPLETAP.patch b/import-layers/yocto-poky/meta/recipes-graphics/wayland/libinput/touchpad-serial-synaptics-need-to-fake-new-touches-on-TRIPLETAP.patch
deleted file mode 100644
index b52b496..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/wayland/libinput/touchpad-serial-synaptics-need-to-fake-new-touches-on-TRIPLETAP.patch
+++ /dev/null
@@ -1,72 +0,0 @@
-This is a workaround upstream suggests for use with kernel 4.1.
-
-Upstream-Status: Inappropriate [temporary work-around]
-Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
-
-
-
-From: Peter Hutterer <peter.hutterer@who-t.net>
-Date: Mon Aug 3 18:23:12 PDT 2015
-Subject: [PATCH v3 libinput] touchpad: serial synaptics need to fake new touches on TRIPLETAP
-
-On the 4.1 kernels synaptics pretends to have 3 slots (the serial fw only does
-2). This was added to avoid cursor jumps but has since been reverted for 4.2
-(kernel commit dbf3c37086, 4.1.3 is still buggy). In some cases a TRIPLETAP
-may be triggered without slot 2 ever activating.
-
-While there are still those kernels out there, work around this bug by opening
-a new touch point where none exists if the fake finger count exceeds the slot
-count.
-
-Reported-by: Jan Alexander Steffens <jan.steffens at gmail.com>
-Signed-off-by: Peter Hutterer <peter.hutterer at who-t.net>
-Tested-by: Jan Alexander Steffens <jan.steffens at gmail.com>
-Reviewed-by: Hans de Goede <hdegoede at redhat.com>
----
-Changes to v2:
-- split out the handling instead of having a tmp state variable, see Hans'
-  comments from v2
-
-Mainly sending this to the list again so I have a link to point people to.
-If you're on 4.1.x add this patch to your distribution package.
-
- src/evdev-mt-touchpad.c | 22 ++++++++++++++++------
- 1 file changed, 16 insertions(+), 6 deletions(-)
-
-diff --git a/src/evdev-mt-touchpad.c b/src/evdev-mt-touchpad.c
-index a683d9a..5ef03d5 100644
---- a/src/evdev-mt-touchpad.c
-+++ b/src/evdev-mt-touchpad.c
-@@ -369,13 +369,23 @@ tp_restore_synaptics_touches(struct tp_dispatch *tp,
- 	for (i = 0; i < tp->num_slots; i++) {
- 		struct tp_touch *t = tp_get_touch(tp, i);
- 
--		if (t->state != TOUCH_END)
-+		switch(t->state) {
-+		case TOUCH_HOVERING:
-+		case TOUCH_BEGIN:
-+		case TOUCH_UPDATE:
- 			continue;
--
--		/* new touch, move it through begin to update immediately */
--		tp_new_touch(tp, t, time);
--		tp_begin_touch(tp, t, time);
--		t->state = TOUCH_UPDATE;
-+		case TOUCH_NONE:
-+			/* new touch, move it through to begin immediately */
-+			tp_new_touch(tp, t, time);
-+			tp_begin_touch(tp, t, time);
-+			break;
-+		case TOUCH_END:
-+			/* touch just ended, we need to restore it to update */
-+			tp_new_touch(tp, t, time);
-+			tp_begin_touch(tp, t, time);
-+			t->state = TOUCH_UPDATE;
-+			break;
-+		}
- 	}
- }
- 
--- 
-2.4.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/wayland/libinput_1.6.1.bb b/import-layers/yocto-poky/meta/recipes-graphics/wayland/libinput_1.6.1.bb
deleted file mode 100644
index c8714f2..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/wayland/libinput_1.6.1.bb
+++ /dev/null
@@ -1,25 +0,0 @@
-SUMMARY = "Library to handle input devices in Wayland compositors"
-HOMEPAGE = "http://www.freedesktop.org/wiki/Software/libinput/"
-SECTION = "libs"
-
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=2184aef38ff137ed33ce9a63b9d1eb8f"
-
-DEPENDS = "libevdev udev mtdev"
-
-SRC_URI = "http://www.freedesktop.org/software/${BPN}/${BP}.tar.xz \
-           file://touchpad-serial-synaptics-need-to-fake-new-touches-on-TRIPLETAP.patch \
-"
-SRC_URI[md5sum] = "7e282344f8ed7ec5cf87ca9fc22674fb"
-SRC_URI[sha256sum] = "9d816f13eee63bcca0e9c3bb652c52ab55f39be4d1b90b54e4bfd1dc92ef55a8"
-
-inherit autotools pkgconfig
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[libunwind] = "--with-libunwind,--without-libunwind,libunwind"
-PACKAGECONFIG[libwacom] = "--enable-libwacom,--disable-libwacom,libwacom"
-PACKAGECONFIG[gui] = "--enable-event-gui,--disable-event-gui,cairo gtk+3"
-
-UDEVDIR = "`pkg-config --variable=udevdir udev`"
-
-EXTRA_OECONF += "--with-udev-dir=${UDEVDIR}"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/wayland/libinput_1.8.1.bb b/import-layers/yocto-poky/meta/recipes-graphics/wayland/libinput_1.8.1.bb
new file mode 100644
index 0000000..a21babe
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/wayland/libinput_1.8.1.bb
@@ -0,0 +1,31 @@
+SUMMARY = "Library to handle input devices in Wayland compositors"
+HOMEPAGE = "http://www.freedesktop.org/wiki/Software/libinput/"
+SECTION = "libs"
+
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=2184aef38ff137ed33ce9a63b9d1eb8f"
+
+DEPENDS = "libevdev udev mtdev"
+
+SRC_URI = "http://www.freedesktop.org/software/${BPN}/${BP}.tar.xz \
+           file://0001-tools-Fix-race-in-autotools-install.patch \
+"
+
+SRC_URI[md5sum] = "8247f0bb67052ffb272c50c3cb9c5998"
+SRC_URI[sha256sum] = "e3590a9037e561a5791c8bd3b34bfd30fad5cacd8cbefc0d75fafe3a41d07147"
+
+inherit autotools pkgconfig lib_package
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[libunwind] = "--with-libunwind,--without-libunwind,libunwind"
+PACKAGECONFIG[libwacom] = "--enable-libwacom,--disable-libwacom,libwacom"
+PACKAGECONFIG[gui] = "--enable-debug-gui,--disable-debug-gui,cairo gtk+3"
+
+UDEVDIR = "`pkg-config --variable=udevdir udev`"
+
+EXTRA_OECONF += "--with-udev-dir=${UDEVDIR} --disable-documentation --disable-tests"
+
+# package name changed in 1.8.1 upgrade: make sure package upgrades work
+RPROVIDES_${PN} = "libinput"
+RREPLACES_${PN} = "libinput"
+RCONFLICTS_${PN} = "libinput"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/wayland/wayland-protocols_1.10.bb b/import-layers/yocto-poky/meta/recipes-graphics/wayland/wayland-protocols_1.10.bb
new file mode 100644
index 0000000..4f9e9f3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/wayland/wayland-protocols_1.10.bb
@@ -0,0 +1,20 @@
+SUMMARY = "Collection of additional Wayland protocols"
+DESCRIPTION = "Wayland protocols that add functionality not \
+available in the Wayland core protocol. Such protocols either add \
+completely new functionality, or extend the functionality of some other \
+protocol either in Wayland core, or some other protocol in \
+wayland-protocols."
+HOMEPAGE = "http://wayland.freedesktop.org"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=c7b12b6702da38ca028ace54aae3d484 \
+                    file://stable/presentation-time/presentation-time.xml;endline=26;md5=4646cd7d9edc9fa55db941f2d3a7dc53"
+
+SRC_URI = "https://wayland.freedesktop.org/releases/${BPN}-${PV}.tar.xz \
+           "
+SRC_URI[md5sum] = "84a7846c2b6a6a3e265fc9be36453e60"
+SRC_URI[sha256sum] = "5719c51d7354864983171c5083e93a72ac99229e2b460c4bb10513de08839c0a"
+
+inherit allarch autotools pkgconfig
+
+PACKAGES = "${PN}"
+FILES_${PN} += "${datadir}/pkgconfig/wayland-protocols.pc"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/wayland/wayland-protocols_1.7.bb b/import-layers/yocto-poky/meta/recipes-graphics/wayland/wayland-protocols_1.7.bb
deleted file mode 100644
index 1d2f0db..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/wayland/wayland-protocols_1.7.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-SUMMARY = "Collection of additional Wayland protocols"
-DESCRIPTION = "Wayland protocols that add functionality not \
-available in the Wayland core protocol. Such protocols either add \
-completely new functionality, or extend the functionality of some other \
-protocol either in Wayland core, or some other protocol in \
-wayland-protocols."
-HOMEPAGE = "http://wayland.freedesktop.org"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=c7b12b6702da38ca028ace54aae3d484 \
-                    file://stable/presentation-time/presentation-time.xml;endline=26;md5=4646cd7d9edc9fa55db941f2d3a7dc53"
-
-SRC_URI = "https://wayland.freedesktop.org/releases/${BPN}-${PV}.tar.xz \
-           "
-SRC_URI[md5sum] = "9acfc9556f7cfedc44d97af60da66a5f"
-SRC_URI[sha256sum] = "635f2a937d318f1fecb97b54074ca211486e38af943868dd0fa82ea38d091c1f"
-
-inherit allarch autotools pkgconfig
-
-PACKAGES = "${PN}"
-FILES_${PN} += "${datadir}/pkgconfig/wayland-protocols.pc"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/wayland/weston/weston-gl-renderer-Set-pitch-correctly-for-subsampled-textures.patch b/import-layers/yocto-poky/meta/recipes-graphics/wayland/weston/weston-gl-renderer-Set-pitch-correctly-for-subsampled-textures.patch
deleted file mode 100644
index c188018..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/wayland/weston/weston-gl-renderer-Set-pitch-correctly-for-subsampled-textures.patch
+++ /dev/null
@@ -1,55 +0,0 @@
-Multi-plane sub-sampled textures have partial width/height, e.g.
-YUV420/I420 has a full-size Y plane, followed by a half-width/height U
-plane, and a half-width/height V plane.
-
-zwp_linux_dmabuf_v1 allows clients to pass an explicit pitch for each
-plane, but for wl_shm this must be inferred. gl-renderer was correctly
-accounting for the width and height when subsampling, but the pitch was
-being taken as the pitch for the first plane.
-
-This does not match the requirements for GStreamer's waylandsink, in
-particular, as well as other clients. Fix the SHM upload path to
-correctly set the pitch for each plane, according to subsampling.
-
-Tested with:
-  $ gst-launch-1.0 videotestsrc ! waylandsink
-
-Upstream-Status: Backport [https://patchwork.freedesktop.org/patch/180767/]
-
-Signed-off-by: Daniel Stone <daniels@collabora.com>
-Fixes: fdeefe42418 ("gl-renderer: add support of WL_SHM_FORMAT_YUV420")
-Reported-by: Fabien Lahoudere <fabien.lahoudere@collabora.co.uk>
-Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=103063
-
----
- libweston/gl-renderer.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/libweston/gl-renderer.c b/libweston/gl-renderer.c
-index 244ce309..40bf0bb6 100644
---- a/libweston/gl-renderer.c
-+++ b/libweston/gl-renderer.c
-@@ -1285,14 +1285,13 @@ gl_renderer_flush_damage(struct weston_surface *surface)
- 		goto done;
- 	}
- 
--	glPixelStorei(GL_UNPACK_ROW_LENGTH_EXT, gs->pitch);
--
- 	if (gs->needs_full_upload) {
- 		glPixelStorei(GL_UNPACK_SKIP_PIXELS_EXT, 0);
- 		glPixelStorei(GL_UNPACK_SKIP_ROWS_EXT, 0);
- 		wl_shm_buffer_begin_access(buffer->shm_buffer);
- 		for (j = 0; j < gs->num_textures; j++) {
- 			glBindTexture(GL_TEXTURE_2D, gs->textures[j]);
-+			glPixelStorei(GL_UNPACK_ROW_LENGTH_EXT, gs->pitch / gs->hsub[j]);
- 			glTexImage2D(GL_TEXTURE_2D, 0,
- 				     gs->gl_format[j],
- 				     gs->pitch / gs->hsub[j],
-@@ -1317,6 +1316,7 @@ gl_renderer_flush_damage(struct weston_surface *surface)
- 		glPixelStorei(GL_UNPACK_SKIP_ROWS_EXT, r.y1);
- 		for (j = 0; j < gs->num_textures; j++) {
- 			glBindTexture(GL_TEXTURE_2D, gs->textures[j]);
-+			glPixelStorei(GL_UNPACK_ROW_LENGTH_EXT, gs->pitch / gs->hsub[j]);
- 			glTexSubImage2D(GL_TEXTURE_2D, 0,
- 					r.x1 / gs->hsub[j],
- 					r.y1 / gs->vsub[j],
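The weston patch removed above sets GL_UNPACK_ROW_LENGTH_EXT per texture so that each plane of a sub-sampled wl_shm buffer is uploaded with its own pitch (Y at full size, U and V at half width/height for YUV420). A stand-alone sketch of that per-plane pitch arithmetic, assuming a desktop GL 3.x context with GL_R8/GL_RED single-channel textures; the function and array names are illustrative, not from weston:

#include <stdint.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* YUV420/I420 subsampling factors: Y, then U and V at half width/height. */
static const int hsub[3] = { 1, 2, 2 };
static const int vsub[3] = { 1, 2, 2 };

static void upload_yuv420(const GLuint tex[3], const uint8_t *const plane[3],
                          int width, int height, int pitch)
{
    for (int j = 0; j < 3; j++) {
        glBindTexture(GL_TEXTURE_2D, tex[j]);
        /* Row length (pixels per row) of *this* plane, not of the Y plane. */
        glPixelStorei(GL_UNPACK_ROW_LENGTH, pitch / hsub[j]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_R8,
                     width / hsub[j], height / vsub[j], 0,
                     GL_RED, GL_UNSIGNED_BYTE, plane[j]);
    }
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);  /* restore the default */
}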
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/wayland/weston_2.0.0.bb b/import-layers/yocto-poky/meta/recipes-graphics/wayland/weston_2.0.0.bb
index 063494c..54b07bd 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/wayland/weston_2.0.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/wayland/weston_2.0.0.bb
@@ -12,7 +12,6 @@
            file://0001-configure.ac-Fix-wayland-protocols-path.patch \
            file://xwayland.weston-start \
            file://0001-weston-launch-Provide-a-default-version-that-doesn-t.patch \
-           file://weston-gl-renderer-Set-pitch-correctly-for-subsampled-textures.patch \
 "
 SRC_URI[md5sum] = "15f38945942bf2a91fe2687145fb4c7d"
 SRC_URI[sha256sum] = "b4e446ac27f118196f1609dab89bb3cb3e81652d981414ad860e733b355365d8"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xcursor-transparent-theme/xcursor-transparent-theme_git.bb b/import-layers/yocto-poky/meta/recipes-graphics/xcursor-transparent-theme/xcursor-transparent-theme_git.bb
index b871d89..733ebce 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/xcursor-transparent-theme/xcursor-transparent-theme_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xcursor-transparent-theme/xcursor-transparent-theme_git.bb
@@ -11,6 +11,7 @@
 PV = "0.1.1+git${SRCPV}"
 
 SRC_URI = "git://git.yoctoproject.org/${BPN};branch=master"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 S = "${WORKDIR}/git"
 
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-app/x11perf_1.6.0.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-app/x11perf_1.6.0.bb
index 4e93558..a06aa26 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-app/x11perf_1.6.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-app/x11perf_1.6.0.bb
@@ -13,6 +13,10 @@
 
 PE = "1"
 
+do_install_append_class-target () {
+    sed -i -e 's:${HOSTTOOLS_DIR}/::g' ${D}${bindir}/x11perfcomp
+}
+
 FILES_${PN} += "${libdir}/X11/x11perfcomp/*"
 
 SRC_URI[md5sum] = "f0b24e4d8beb622a419e8431e1c03cd7"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-app/xkbcomp_1.3.1.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-app/xkbcomp_1.3.1.bb
deleted file mode 100644
index 1c98359..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-app/xkbcomp_1.3.1.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-require xorg-app-common.inc
-
-SUMMARY = "A program to compile XKB keyboard description"
-
-DESCRIPTION = "The xkbcomp keymap compiler converts a description of an \
-XKB keymap into one of several output formats. The most common use for \
-xkbcomp is to create a compiled keymap file (.xkm extension) which can \
-be read directly by XKB-capable X servers or utilities."
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=08436e4f4476964e2e2dd7e7e41e076a"
-
-PR = "${INC_PR}.0"
-
-DEPENDS += "libxkbfile"
-
-BBCLASSEXTEND = "native"
-
-SRC_URI[md5sum] = "a4d8353daf6cb0a9c47379b7413c42c6"
-SRC_URI[sha256sum] = "0304dc9e0d4ac10831a9ef5d5419722375ddbc3eac3ff4413094d57bc1f1923d"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-app/xkbcomp_1.4.0.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-app/xkbcomp_1.4.0.bb
new file mode 100644
index 0000000..c9dc327
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-app/xkbcomp_1.4.0.bb
@@ -0,0 +1,19 @@
+require xorg-app-common.inc
+
+SUMMARY = "A program to compile XKB keyboard description"
+
+DESCRIPTION = "The xkbcomp keymap compiler converts a description of an \
+XKB keymap into one of several output formats. The most common use for \
+xkbcomp is to create a compiled keymap file (.xkm extension) which can \
+be read directly by XKB-capable X servers or utilities."
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=08436e4f4476964e2e2dd7e7e41e076a"
+
+PR = "${INC_PR}.0"
+
+DEPENDS += "libxkbfile"
+
+BBCLASSEXTEND = "native"
+
+SRC_URI[md5sum] = "cc22b232bc78a303371983e1b48794ab"
+SRC_URI[sha256sum] = "bc69c8748c03c5ad9afdc8dff9db11994dd871b614c65f8940516da6bf61ce6b"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-input-libinput_0.24.0.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-input-libinput_0.24.0.bb
deleted file mode 100644
index 14b1271..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-input-libinput_0.24.0.bb
+++ /dev/null
@@ -1,11 +0,0 @@
-require xorg-driver-input.inc
-
-SUMMARY = "Generic input driver for the X.Org server based on libinput"
-LIC_FILES_CHKSUM = "file://COPYING;md5=5e6b20ea2ef94a998145f0ea3f788ee0"
-
-DEPENDS += "libinput"
-
-SRC_URI[md5sum] = "bd3fa118e4abadb8804dc6a099bb4ab3"
-SRC_URI[sha256sum] = "ddcb07350aed59b2996a92a1b4ff64d1c0b0c86a3f0ddca15b2b1c8c8bb13628"
-
-FILES_${PN} += "${datadir}/X11/xorg.conf.d"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-input-libinput_0.25.1.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-input-libinput_0.25.1.bb
new file mode 100644
index 0000000..7b3ea16
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-input-libinput_0.25.1.bb
@@ -0,0 +1,11 @@
+require xorg-driver-input.inc
+
+SUMMARY = "Generic input driver for the X.Org server based on libinput"
+LIC_FILES_CHKSUM = "file://COPYING;md5=5e6b20ea2ef94a998145f0ea3f788ee0"
+
+DEPENDS += "libinput"
+
+SRC_URI[md5sum] = "14003139614b25cc76c9a4cad059df89"
+SRC_URI[sha256sum] = "489f7d591c9ef08463d4966e61f7c6ea433f5fcbb9f5370fb621da639a84c7e0"
+
+FILES_${PN} += "${datadir}/X11/xorg.conf.d"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-video-intel_git.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-video-intel_git.bb
index f86de6f..138dfdd 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-video-intel_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-video-intel_git.bb
@@ -24,7 +24,7 @@
 
 DEPENDS += "virtual/libx11 drm libpciaccess pixman"
 
-PACKAGECONFIG ??= "xvmc sna udev ${@bb.utils.contains('DISTRO_FEATURES', 'opengl', 'dri dri1 dri2', '', d)}"
+PACKAGECONFIG ??= "xvmc uxa udev ${@bb.utils.contains('DISTRO_FEATURES', 'opengl', 'dri dri1 dri2', '', d)}"
 
 PACKAGECONFIG[dri] = "--enable-dri,--disable-dri"
 PACKAGECONFIG[dri1] = "--enable-dri1,--disable-dri1,xf86driproto"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-video-omapfb/0001-Prevents-omapfb-from-from-crashing-when-pixelclock-o.patch b/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-video-omapfb/0001-Prevents-omapfb-from-from-crashing-when-pixelclock-o.patch
index c4cf16e..ac19219 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-video-omapfb/0001-Prevents-omapfb-from-from-crashing-when-pixelclock-o.patch
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-driver/xf86-video-omapfb/0001-Prevents-omapfb-from-from-crashing-when-pixelclock-o.patch
@@ -6,6 +6,8 @@
 
 Due to a Linux design bug it is easy to get a pixelclock set to zero
 when changing displays at runtime.
+
+Upstream-Status: Pending
 ---
  src/omapfb-output.c | 9 +++++++--
  1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess/0001-Include-config.h-before-anything-else-in-.c.patch b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess/0001-Include-config.h-before-anything-else-in-.c.patch
deleted file mode 100644
index e92fc0d..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess/0001-Include-config.h-before-anything-else-in-.c.patch
+++ /dev/null
@@ -1,187 +0,0 @@
-From b14696a55796e739624bbda4f772427032efff2a Mon Sep 17 00:00:00 2001
-From: Julien Cristau <jcristau@debian.org>
-Date: Sun, 26 Apr 2015 15:20:57 +0200
-Subject: [PATCH 1/4] Include config.h before anything else in *.c
-
-Debian bug#749008 <https://bugs.debian.org/749008>
-
-Reported-by: Michael Tautschnig <mt@debian.org>
-Signed-off-by: Julien Cristau <jcristau@debian.org>
-Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
----
-Upstream-Status: Backport
-
- src/common_capability.c  | 3 +++
- src/common_init.c        | 3 +++
- src/common_interface.c   | 3 +++
- src/common_io.c          | 3 +++
- src/common_iterator.c    | 3 +++
- src/common_map.c         | 3 +++
- src/common_vgaarb_stub.c | 3 +++
- src/linux_devmem.c       | 5 +++--
- src/openbsd_pci.c        | 3 +++
- src/solx_devfs.c         | 3 +++
- src/x86_pci.c            | 4 +++-
- 11 files changed, 33 insertions(+), 3 deletions(-)
-
-diff --git a/src/common_capability.c b/src/common_capability.c
-index 488743d..15d395d 100644
---- a/src/common_capability.c
-+++ b/src/common_capability.c
-@@ -31,6 +31,9 @@
-  *
-  * \author Ian Romanick <idr@us.ibm.com>
-  */
-+#ifdef HAVE_CONFIG_H
-+#include "config.h"
-+#endif
- 
- #include <stdlib.h>
- #include <stdio.h>
-diff --git a/src/common_init.c b/src/common_init.c
-index b1c0c3e..f7b59bd 100644
---- a/src/common_init.c
-+++ b/src/common_init.c
-@@ -28,6 +28,9 @@
-  *
-  * \author Ian Romanick <idr@us.ibm.com>
-  */
-+#ifdef HAVE_CONFIG_H
-+#include "config.h"
-+#endif
- 
- #include <stdlib.h>
- #include <errno.h>
-diff --git a/src/common_interface.c b/src/common_interface.c
-index 59778cf..cb95e90 100644
---- a/src/common_interface.c
-+++ b/src/common_interface.c
-@@ -28,6 +28,9 @@
-  *
-  * \author Ian Romanick <idr@us.ibm.com>
-  */
-+#ifdef HAVE_CONFIG_H
-+#include "config.h"
-+#endif
- 
- #include <stdlib.h>
- #include <string.h>
-diff --git a/src/common_io.c b/src/common_io.c
-index f5c9e45..e9586ad 100644
---- a/src/common_io.c
-+++ b/src/common_io.c
-@@ -22,6 +22,9 @@
-  * Author:
-  *	Adam Jackson <ajax@redhat.com>
-  */
-+#ifdef HAVE_CONFIG_H
-+#include "config.h"
-+#endif
- 
- #include <stdlib.h>
- #include <string.h>
-diff --git a/src/common_iterator.c b/src/common_iterator.c
-index ccf656d..2beb180 100644
---- a/src/common_iterator.c
-+++ b/src/common_iterator.c
-@@ -28,6 +28,9 @@
-  *
-  * \author Ian Romanick <idr@us.ibm.com>
-  */
-+#ifdef HAVE_CONFIG_H
-+#include "config.h"
-+#endif
- 
- #include <stdlib.h>
- #include <string.h>
-diff --git a/src/common_map.c b/src/common_map.c
-index 8757151..f1854bb 100644
---- a/src/common_map.c
-+++ b/src/common_map.c
-@@ -21,6 +21,9 @@
-  * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
-  * DEALINGS IN THE SOFTWARE.
-  */
-+#ifdef HAVE_CONFIG_H
-+#include "config.h"
-+#endif
- 
- #include <sys/types.h>
- #include <sys/mman.h>
-diff --git a/src/common_vgaarb_stub.c b/src/common_vgaarb_stub.c
-index 9394273..c1708f6 100644
---- a/src/common_vgaarb_stub.c
-+++ b/src/common_vgaarb_stub.c
-@@ -23,6 +23,9 @@
-  * OTHER DEALINGS IN THE SOFTWARE.
-  *
-  */
-+#ifdef HAVE_CONFIG_H
-+#include "config.h"
-+#endif
- 
- #include <stdio.h>
- #include "pciaccess.h"
-diff --git a/src/linux_devmem.c b/src/linux_devmem.c
-index 10e3bde..0d0567c 100644
---- a/src/linux_devmem.c
-+++ b/src/linux_devmem.c
-@@ -32,8 +32,9 @@
-  *
-  * \author Ian Romanick <idr@us.ibm.com>
-  */
--
--#define _GNU_SOURCE
-+#ifdef HAVE_CONFIG_H
-+#include "config.h"
-+#endif
- 
- #include <stdlib.h>
- #include <string.h>
-diff --git a/src/openbsd_pci.c b/src/openbsd_pci.c
-index 4d1b5cd..b8ce318 100644
---- a/src/openbsd_pci.c
-+++ b/src/openbsd_pci.c
-@@ -13,6 +13,9 @@
-  * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
-  * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-  */
-+#ifdef HAVE_CONFIG_H
-+#include "config.h"
-+#endif
- 
- #include <sys/param.h>
- #include <sys/ioctl.h>
-diff --git a/src/solx_devfs.c b/src/solx_devfs.c
-index f572393..cf96467 100644
---- a/src/solx_devfs.c
-+++ b/src/solx_devfs.c
-@@ -25,6 +25,9 @@
- /*
-  * Solaris devfs interfaces
-  */
-+#ifdef HAVE_CONFIG_H
-+#include "config.h"
-+#endif
- 
- #include <stdlib.h>
- #include <strings.h>
-diff --git a/src/x86_pci.c b/src/x86_pci.c
-index 49c1cab..32daa04 100644
---- a/src/x86_pci.c
-+++ b/src/x86_pci.c
-@@ -18,8 +18,10 @@
-  * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
-  * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
-  */
-+#ifdef HAVE_CONFIG_H
-+#include "config.h"
-+#endif
- 
--#define _GNU_SOURCE
- #include <unistd.h>
- #include <stdio.h>
- #include <stdlib.h>
--- 
-2.1.4
-
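The deleted libpciaccess patch above (dropped together with the upgrade to 0.13.5 below) enforces a common autotools rule: config.h must be the first include in every .c file, so that feature-test macros it defines (such as _GNU_SOURCE) are seen by the system headers that follow. A trivial sketch of the ordering; HAVE_CONFIG_H is the conventional guard that autotools builds pass on the compiler command line:

#ifdef HAVE_CONFIG_H
#include "config.h"     /* may define _GNU_SOURCE and HAVE_* feature macros */
#endif

#include <stdio.h>      /* parsed with those feature macros already in effect */
#include <stdlib.h>

int main(void)
{
    printf("built with the config.h-first convention\n");
    return EXIT_SUCCESS;
}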
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess/0002-Fix-quoting-issue.patch b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess/0002-Fix-quoting-issue.patch
deleted file mode 100644
index 16d69a8..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess/0002-Fix-quoting-issue.patch
+++ /dev/null
@@ -1,34 +0,0 @@
-From 765e0a38cb8c40f8865af5cb356ffe6039ffb08f Mon Sep 17 00:00:00 2001
-From: Thomas Klausner <wiz@NetBSD.org>
-Date: Sun, 22 Mar 2015 21:38:23 +0100
-Subject: [PATCH 2/4] Fix quoting issue.
-
-m4 has '[]' as quoting characters, so if we want '[]' to
-end up in the configure script, we need to quote them again.
-
-Reported by Greg Troxel <gdt@ir.bbn.com>.
-
-Signed-off-by: Thomas Klausner <wiz@NetBSD.org>
-Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
----
-Upstream-Status: Backport
-
- configure.ac | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/configure.ac b/configure.ac
-index e67e9e1..888330b 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -76,7 +76,7 @@ case $host_os in
- 		;;
- 	*netbsd*)
- 		case $host in
--		*i[3-9]86*)
-+		*i[[3-9]]86*)
- 			PCIACCESS_LIBS="$PCIACCESS_LIBS -li386"
- 			;;
- 		*x86_64*|*amd64*)
--- 
-2.1.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess/0003-linux_sysfs.c-Include-limits.h-for-PATH_MAX.patch b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess/0003-linux_sysfs.c-Include-limits.h-for-PATH_MAX.patch
deleted file mode 100644
index f513c8e..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess/0003-linux_sysfs.c-Include-limits.h-for-PATH_MAX.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From 6bd2f7f92eae713663f4e13f6e2cb23526607b8c Mon Sep 17 00:00:00 2001
-From: Felix Janda <felix.janda@posteo.de>
-Date: Fri, 1 May 2015 16:36:50 +0200
-Subject: [PATCH 3/4] linux_sysfs.c: Include <limits.h> for PATH_MAX
-
-Fixes compilation with musl libc.
-
-Tested-by: Bernd Kuhls <bernd.kuhls@t-online.de>
-Signed-off-by: Felix Janda <felix.janda@posteo.de>
-Reviewed-by: Alan Coopersmith <alan.coopersmith@oracle.com>
-Signed-off-by: Alan Coopersmith <alan.coopersmith@oracle.com>
----
-Upstream-Status: Backport
-
- src/linux_sysfs.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/src/linux_sysfs.c b/src/linux_sysfs.c
-index 50d94cf..3f95e53 100644
---- a/src/linux_sysfs.c
-+++ b/src/linux_sysfs.c
-@@ -45,6 +45,7 @@
- #include <sys/types.h>
- #include <sys/stat.h>
- #include <fcntl.h>
-+#include <limits.h>
- #include <sys/mman.h>
- #include <dirent.h>
- #include <errno.h>
--- 
-2.1.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess_0.13.4.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess_0.13.4.bb
deleted file mode 100644
index ffa6a60..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess_0.13.4.bb
+++ /dev/null
@@ -1,21 +0,0 @@
-SUMMARY = "Generic PCI access library for X"
-
-DESCRIPTION = "libpciaccess provides functionality for X to access the \
-PCI bus and devices in a platform-independent way."
-
-require xorg-lib-common.inc
-
-SRC_URI += "\
-            file://0001-Include-config.h-before-anything-else-in-.c.patch \
-            file://0002-Fix-quoting-issue.patch \
-            file://0003-linux_sysfs.c-Include-limits.h-for-PATH_MAX.patch \
-            file://0004-Don-t-include-sys-io.h-on-arm.patch \
-"
-
-SRC_URI[md5sum] = "ace78aec799b1cf6dfaea55d3879ed9f"
-SRC_URI[sha256sum] = "07f864654561e4ac8629a0ef9c8f07fbc1f8592d1b6c418431593e9ba2cf2fcf"
-
-LICENSE = "MIT & MIT-style"
-LIC_FILES_CHKSUM = "file://COPYING;md5=277aada5222b9a22fbf3471ff3687068"
-
-REQUIRED_DISTRO_FEATURES = ""
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess_0.13.5.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess_0.13.5.bb
new file mode 100644
index 0000000..a3b682b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpciaccess_0.13.5.bb
@@ -0,0 +1,18 @@
+SUMMARY = "Generic PCI access library for X"
+
+DESCRIPTION = "libpciaccess provides functionality for X to access the \
+PCI bus and devices in a platform-independent way."
+
+require xorg-lib-common.inc
+
+SRC_URI += "\
+            file://0004-Don-t-include-sys-io.h-on-arm.patch \
+"
+
+SRC_URI[md5sum] = "d810ab17e24c1418dedf7207fb2841d4"
+SRC_URI[sha256sum] = "752c54e9b3c311b4347cb50aea8566fa48eab274346ea8a06f7f15de3240b999"
+
+LICENSE = "MIT & MIT-style"
+LIC_FILES_CHKSUM = "file://COPYING;md5=277aada5222b9a22fbf3471ff3687068"
+
+REQUIRED_DISTRO_FEATURES = ""
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpthread-stubs_0.3.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpthread-stubs_0.3.bb
deleted file mode 100644
index 5514c7f..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpthread-stubs_0.3.bb
+++ /dev/null
@@ -1,25 +0,0 @@
-SUMMARY = "Library that provides weak aliases for pthread functions"
-DESCRIPTION = "This library provides weak aliases for pthread functions \
-not provided in libc or otherwise available by default."
-HOMEPAGE = "http://xcb.freedesktop.org"
-BUGTRACKER = "http://bugs.freedesktop.org/buglist.cgi?product=XCB"
-SECTION = "x11/libs"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=6edc1fea03d959f0c2d743fe5ca746ad"
-
-#DEPENDS = "xcb-proto xproto libxau libxslt-native"
-# DEPENDS += "xsltproc-native gperf-native"
-
-ALLOW_EMPTY_${PN} = "1"
-
-SRC_URI = "http://xcb.freedesktop.org/dist/libpthread-stubs-${PV}.tar.bz2"
-
-SRC_URI[md5sum] = "e8fa31b42e13f87e8f5a7a2b731db7ee"
-SRC_URI[sha256sum] = "35b6d54e3cc6f3ba28061da81af64b9a92b7b757319098172488a660e3d87299"
-
-inherit autotools pkgconfig
-
-RDEPENDS_${PN}-dev = ""
-RRECOMMENDS_${PN}-dbg = "${PN}-dev (= ${EXTENDPKGV})"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpthread-stubs_0.4.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpthread-stubs_0.4.bb
new file mode 100644
index 0000000..a699841
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libpthread-stubs_0.4.bb
@@ -0,0 +1,25 @@
+SUMMARY = "Library that provides weak aliases for pthread functions"
+DESCRIPTION = "This library provides weak aliases for pthread functions \
+not provided in libc or otherwise available by default."
+HOMEPAGE = "http://xcb.freedesktop.org"
+BUGTRACKER = "http://bugs.freedesktop.org/buglist.cgi?product=XCB"
+SECTION = "x11/libs"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=6edc1fea03d959f0c2d743fe5ca746ad"
+
+#DEPENDS = "xcb-proto xproto libxau libxslt-native"
+# DEPENDS += "xsltproc-native gperf-native"
+
+ALLOW_EMPTY_${PN} = "1"
+
+SRC_URI = "http://xcb.freedesktop.org/dist/libpthread-stubs-${PV}.tar.bz2"
+
+SRC_URI[md5sum] = "48c1544854a94db0e51499cc3afd797f"
+SRC_URI[sha256sum] = "e4d05911a3165d3b18321cc067fdd2f023f06436e391c6a28dff618a78d2e733"
+
+inherit autotools pkgconfig
+
+RDEPENDS_${PN}-dev = ""
+RRECOMMENDS_${PN}-dbg = "${PN}-dev (= ${EXTENDPKGV})"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11-diet_1.6.4.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11-diet_1.6.4.bb
deleted file mode 100644
index 0c761d7..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11-diet_1.6.4.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-require libx11.inc
-
-DESCRIPTION += " Support for XCMS and XLOCALE is disabled in \
-this version."
-
-SRC_URI += "file://X18NCMSstubs.diff \
-            file://fix-disable-xlocale.diff \
-            file://fix-utf8-wrong-define.patch \
-           "
-
-RPROVIDES_${PN}-dev = "libx11-dev"
-RPROVIDES_${PN}-locale = "libx11-locale"
-
-SRC_URI[md5sum] = "6d54227082f3aa2c596f0b3a3fbb9175"
-SRC_URI[sha256sum] = "b7c748be3aa16ec2cbd81edc847e9b6ee03f88143ab270fb59f58a044d34e441"
-
-EXTRA_OECONF += "--disable-xlocale"
-
-PACKAGECONFIG ??= ""
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11-diet_1.6.5.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11-diet_1.6.5.bb
new file mode 100644
index 0000000..295f96a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11-diet_1.6.5.bb
@@ -0,0 +1,19 @@
+require libx11.inc
+
+DESCRIPTION += " Support for XCMS and XLOCALE is disabled in \
+this version."
+
+SRC_URI += "file://X18NCMSstubs.diff \
+            file://fix-disable-xlocale.diff \
+            file://fix-utf8-wrong-define.patch \
+           "
+
+RPROVIDES_${PN}-dev = "libx11-dev"
+RPROVIDES_${PN}-locale = "libx11-locale"
+
+SRC_URI[md5sum] = "0f618db70c4054ca67cee0cc156a4255"
+SRC_URI[sha256sum] = "4d3890db2ba225ba8c55ca63c6409c1ebb078a2806de59fb16342768ae63435d"
+
+EXTRA_OECONF += "--disable-xlocale"
+
+PACKAGECONFIG ??= ""
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11/Fix-hanging-issue-in-_XReply.patch b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11/Fix-hanging-issue-in-_XReply.patch
new file mode 100644
index 0000000..897882b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11/Fix-hanging-issue-in-_XReply.patch
@@ -0,0 +1,60 @@
+From 5235a7f3692a4c3c90dd4ac1be3c670388904bbe Mon Sep 17 00:00:00 2001
+From: Tatu Frisk <tatu.frisk@ge.com>
+Date: Tue, 14 Mar 2017 14:41:27 +0200
+Subject: [PATCH] Fix hanging issue in _XReply
+
+Assume event queue is empty if another thread is blocking waiting for event.
+
+If one thread was blocking waiting for an event and another thread sent a
+reply to the X server, both threads got blocked until an event was
+received.
+
+Upstream-Status: Submitted [https://patchwork.freedesktop.org/patch/171458/]
+
+This patch needs to be removed once the corresponding patch has been merged upstream.
+
+https://patchwork.freedesktop.org/patch/171458/
+
+Signed-off-by: Tatu Frisk <tatu.frisk@ge.com>
+Signed-off-by: Jose Alarcon <jose.alarcon@ge.com>
+---
+ src/xcb_io.c | 19 +++++++------------
+ 1 file changed, 7 insertions(+), 12 deletions(-)
+
+diff --git a/src/xcb_io.c b/src/xcb_io.c
+index 5987329..c64eb04 100644
+--- a/src/xcb_io.c
++++ b/src/xcb_io.c
+@@ -609,22 +609,17 @@ Status _XReply(Display *dpy, xReply *rep, int extra, Bool discard)
+ 		 * letting anyone else process this sequence number, we
+ 		 * need to process any events that should have come
+ 		 * earlier. */
+-
+ 		if(dpy->xcb->event_owner == XlibOwnsEventQueue)
+ 		{
+ 			xcb_generic_reply_t *event;
+-			/* If some thread is already waiting for events,
+-			 * it will get the first one. That thread must
+-			 * process that event before we can continue. */
+-			/* FIXME: That event might be after this reply,
+-			 * and might never even come--or there might be
+-			 * multiple threads trying to get events. */
+-			while(dpy->xcb->event_waiter)
+-			{ /* need braces around ConditionWait */
+-				ConditionWait(dpy, dpy->xcb->event_notify);
++
++			/* Assume event queue is empty if another thread is blocking
++			 * waiting for event. */
++			if(!dpy->xcb->event_waiter)
++			{
++				while((event = poll_for_response(dpy)))
++					handle_response(dpy, event, True);
+ 			}
+-			while((event = poll_for_event(dpy)))
+-				handle_response(dpy, event, True);
+ 		}
+ 
+ 		req->reply_waiter = 0;
+-- 
+2.10.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11_1.6.4.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11_1.6.4.bb
deleted file mode 100644
index caa95fb..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11_1.6.4.bb
+++ /dev/null
@@ -1,10 +0,0 @@
-require libx11.inc
-inherit gettext
-
-BBCLASSEXTEND = "native nativesdk"
-
-SRC_URI += "file://disable_tests.patch \
-           "
-
-SRC_URI[md5sum] = "6d54227082f3aa2c596f0b3a3fbb9175"
-SRC_URI[sha256sum] = "b7c748be3aa16ec2cbd81edc847e9b6ee03f88143ab270fb59f58a044d34e441"
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11_1.6.5.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11_1.6.5.bb
new file mode 100644
index 0000000..427bf28
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libx11_1.6.5.bb
@@ -0,0 +1,14 @@
+require libx11.inc
+inherit gettext
+
+BBCLASSEXTEND = "native nativesdk"
+
+SRC_URI += "file://disable_tests.patch \
+            file://Fix-hanging-issue-in-_XReply.patch \
+           "
+do_configure_append () {
+    sed -i -e "/X11_CFLAGS/d" ${B}/src/util/Makefile
+}
+
+SRC_URI[md5sum] = "0f618db70c4054ca67cee0cc156a4255"
+SRC_URI[sha256sum] = "4d3890db2ba225ba8c55ca63c6409c1ebb078a2806de59fb16342768ae63435d"
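The new do_configure_append strips every X11_CFLAGS reference out of src/util/Makefile, presumably so the helper tools built for the build host are not compiled with target X11 flags (the same class of cross-compilation problem the libXt patch later in this change addresses). A stand-alone Python sketch of what that sed invocation does (the relative path is illustrative; in the recipe the file lives under ${B}):

from pathlib import Path

# Stand-in for ${B}/src/util/Makefile; the path here is illustrative only.
makefile = Path("src/util/Makefile")

# Equivalent of: sed -i -e "/X11_CFLAGS/d" ${B}/src/util/Makefile
# i.e. delete every line that mentions X11_CFLAGS.
kept = [line for line in makefile.read_text().splitlines(keepends=True)
        if "X11_CFLAGS" not in line]
makefile.write_text("".join(kept))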
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libxcalibrate_git.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libxcalibrate_git.bb
index 455e869..7c7fa3d 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libxcalibrate_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libxcalibrate_git.bb
@@ -16,6 +16,7 @@
 
 SRC_URI = "git://anongit.freedesktop.org/git/xorg/lib/libXCalibrate \
            file://fix-xcb.patch"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 S = "${WORKDIR}/git"
 
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libxt/0001-libXt-util-don-t-link-makestrs-with-target-cflags.patch b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libxt/0001-libXt-util-don-t-link-makestrs-with-target-cflags.patch
new file mode 100644
index 0000000..1a691a3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libxt/0001-libXt-util-don-t-link-makestrs-with-target-cflags.patch
@@ -0,0 +1,33 @@
+From b0c0e6d90bd99a699701c9542640adb218f5d536 Mon Sep 17 00:00:00 2001
+From: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
+Date: Mon, 10 Jul 2017 16:51:13 +0300
+Subject: [PATCH] libXt: util: don't link makestrs with target cflags
+
+The line: AM_CFLAGS = $(XT_CFLAGS)
+in util/Makefile.am is wrong because it adds target cflags to the
+compilation of makestrs, which is built for the build machine, which
+leads to build failures when cross-compiling.
+
+Upstream-Status: Pending
+
+Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
+Signed-off-by: Maxin B. John <maxin.john@intel.com>
+---
+ util/Makefile.am | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/util/Makefile.am b/util/Makefile.am
+index 800b35b..f2dd1f9 100644
+--- a/util/Makefile.am
++++ b/util/Makefile.am
+@@ -11,7 +11,6 @@ EXTRA_DIST = \
+ 	StrDefs.ht \
+ 	string.list
+ 
+-AM_CFLAGS = $(XT_CFLAGS)
+ makestrs_SOURCES = makestrs.c
+ 
+ 
+-- 
+2.4.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libxt_1.1.5.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libxt_1.1.5.bb
index c1ed0bb..180d00d 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libxt_1.1.5.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/libxt_1.1.5.bb
@@ -23,7 +23,9 @@
 
 XORG_PN = "libXt"
 
-SRC_URI +=  "file://libxt_fix_for_x32.patch"
+SRC_URI +=  "file://libxt_fix_for_x32.patch \
+             file://0001-libXt-util-don-t-link-makestrs-with-target-cflags.patch \
+            "
 
 BBCLASSEXTEND = "native"
 
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/xkeyboard-config_2.20.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/xkeyboard-config_2.20.bb
deleted file mode 100644
index d00904d..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/xkeyboard-config_2.20.bb
+++ /dev/null
@@ -1,31 +0,0 @@
-SUMMARY = "Keyboard configuration database for X Window"
-
-DESCRIPTION = "The non-arch keyboard configuration database for X \
-Window.  The goal is to provide the consistent, well-structured, \
-frequently released open source of X keyboard configuration data for X \
-Window System implementations.  The project is targeted to XKB-based \
-systems."
-
-HOMEPAGE = "http://freedesktop.org/wiki/Software/XKeyboardConfig"
-BUGTRACKER = "https://bugs.freedesktop.org/enter_bug.cgi?product=xkeyboard-config"
-
-LICENSE = "MIT & MIT-style"
-LIC_FILES_CHKSUM = "file://COPYING;md5=0e7f21ca7db975c63467d2e7624a12f9"
-
-SRC_URI = "${XORG_MIRROR}/individual/data/xkeyboard-config/${BPN}-${PV}.tar.bz2"
-SRC_URI[md5sum] = "1f68886339116ae3877052204c9b9b88"
-SRC_URI[sha256sum] = "d1bfc72553c4e3ef1cd6f13eec0488cf940498b612ab8a0b362e7090c94bc134"
-
-SECTION = "x11/libs"
-DEPENDS = "intltool-native virtual/gettext util-macros libxslt-native"
-
-EXTRA_OECONF = "--with-xkb-rules-symlink=xorg --disable-runtime-deps"
-
-FILES_${PN} += "${datadir}/X11/xkb"
-
-inherit autotools pkgconfig gettext
-
-do_install_append () {
-    install -d ${D}${datadir}/X11/xkb/compiled
-    cd ${D}${datadir}/X11/xkb/rules && ln -sf base xorg
-}
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/xkeyboard-config_2.21.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/xkeyboard-config_2.21.bb
new file mode 100644
index 0000000..01a51ad
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-lib/xkeyboard-config_2.21.bb
@@ -0,0 +1,31 @@
+SUMMARY = "Keyboard configuration database for X Window"
+
+DESCRIPTION = "The non-arch keyboard configuration database for X \
+Window.  The goal is to provide the consistent, well-structured, \
+frequently released open source of X keyboard configuration data for X \
+Window System implementations.  The project is targeted to XKB-based \
+systems."
+
+HOMEPAGE = "http://freedesktop.org/wiki/Software/XKeyboardConfig"
+BUGTRACKER = "https://bugs.freedesktop.org/enter_bug.cgi?product=xkeyboard-config"
+
+LICENSE = "MIT & MIT-style"
+LIC_FILES_CHKSUM = "file://COPYING;md5=0e7f21ca7db975c63467d2e7624a12f9"
+
+SRC_URI = "${XORG_MIRROR}/individual/data/xkeyboard-config/${BPN}-${PV}.tar.bz2"
+SRC_URI[md5sum] = "af9498e8954907d0a47f0f7b3d21e1ef"
+SRC_URI[sha256sum] = "30c17049fae129fc14875656da9aa3099e3031d6ce0ee1d77aae190fd9edcec5"
+
+SECTION = "x11/libs"
+DEPENDS = "intltool-native util-macros libxslt-native"
+
+EXTRA_OECONF = "--with-xkb-rules-symlink=xorg --disable-runtime-deps"
+
+FILES_${PN} += "${datadir}/X11/xkb"
+
+inherit autotools pkgconfig gettext
+
+do_install_append () {
+    install -d ${D}${datadir}/X11/xkb/compiled
+    cd ${D}${datadir}/X11/xkb/rules && ln -sf base xorg
+}
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-proto/calibrateproto_git.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-proto/calibrateproto_git.bb
index b88d157..3a98926 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-proto/calibrateproto_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-proto/calibrateproto_git.bb
@@ -17,3 +17,5 @@
 SRC_URI = "git://anongit.freedesktop.org/git/xorg/proto/calibrateproto \
            file://fix.patch;apply=yes"
 S = "${WORKDIR}/git"
+UPSTREAM_VERSION_UNKNOWN = "1"
+
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-xserver/xserver-xorg.inc b/import-layers/yocto-poky/meta/recipes-graphics/xorg-xserver/xserver-xorg.inc
index 1650c79..863d80c 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-xserver/xserver-xorg.inc
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-xserver/xserver-xorg.inc
@@ -123,7 +123,7 @@
 OPENGL_PKGCONFIGS = "dri glx glamor dri3 xshmfence"
 PACKAGECONFIG ??= "dri2 udev ${XORG_CRYPTO} \
                    ${@bb.utils.contains('DISTRO_FEATURES', 'opengl', '${OPENGL_PKGCONFIGS}', '', d)} \
-                   ${@bb.utils.contains('DISTRO_FEATURES', 'wayland', 'xwayland', '', d)} \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'opengl wayland', 'xwayland', '', d)} \
                    ${@bb.utils.filter('DISTRO_FEATURES', 'systemd', d)} \
 "
 
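The PACKAGECONFIG change above tightens the xwayland condition: bb.utils.contains() returns the true-value only when every whitespace-separated item in its second argument is present in the variable, so xwayland now requires both the opengl and wayland DISTRO_FEATURES rather than wayland alone. A minimal plain-Python sketch of that semantics (not the actual BitBake implementation):

def contains_all(value, checkvalues):
    """Mirror the membership test behind bb.utils.contains():
    true only if every token in checkvalues occurs in value."""
    return set(checkvalues.split()).issubset(value.split())

distro_features = "systemd wayland"   # example feature set without opengl
pkgconfig_extra = "xwayland" if contains_all(distro_features, "opengl wayland") else ""
print(repr(pkgconfig_extra))          # '' -> xwayland stays disabled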
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-xserver/xserver-xorg_1.19.1.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-xserver/xserver-xorg_1.19.1.bb
deleted file mode 100644
index 0d6df32..0000000
--- a/import-layers/yocto-poky/meta/recipes-graphics/xorg-xserver/xserver-xorg_1.19.1.bb
+++ /dev/null
@@ -1,35 +0,0 @@
-require xserver-xorg.inc
-
-SRC_URI += "file://musl-arm-inb-outb.patch \
-            file://0001-configure.ac-Fix-check-for-CLOCK_MONOTONIC.patch \
-            file://0002-configure.ac-Fix-wayland-scanner-and-protocols-locat.patch \
-            file://0003-modesetting-Fix-16-bit-depth-bpp-mode.patch \
-            file://0003-Remove-check-for-useSIGIO-option.patch \
-            file://CVE-2017-10971-1.patch \
-            file://CVE-2017-10971-2.patch \
-            file://CVE-2017-10971-3.patch \
-            "
-SRC_URI[md5sum] = "caa8ee7b2950abbf734347d137529fb6"
-SRC_URI[sha256sum] = "79ae2cf39d3f6c4a91201d8dad549d1d774b3420073c5a70d390040aa965a7fb"
-
-# These extensions are now integrated into the server, so declare the migration
-# path for in-place upgrades.
-
-RREPLACES_${PN} =  "${PN}-extension-dri \
-                    ${PN}-extension-dri2 \
-                    ${PN}-extension-record \
-                    ${PN}-extension-extmod \
-                    ${PN}-extension-dbe \
-                   "
-RPROVIDES_${PN} =  "${PN}-extension-dri \
-                    ${PN}-extension-dri2 \
-                    ${PN}-extension-record \
-                    ${PN}-extension-extmod \
-                    ${PN}-extension-dbe \
-                   "
-RCONFLICTS_${PN} = "${PN}-extension-dri \
-                    ${PN}-extension-dri2 \
-                    ${PN}-extension-record \
-                    ${PN}-extension-extmod \
-                    ${PN}-extension-dbe \
-                   "
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xorg-xserver/xserver-xorg_1.19.3.bb b/import-layers/yocto-poky/meta/recipes-graphics/xorg-xserver/xserver-xorg_1.19.3.bb
new file mode 100644
index 0000000..65ef6c6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xorg-xserver/xserver-xorg_1.19.3.bb
@@ -0,0 +1,35 @@
+require xserver-xorg.inc
+
+SRC_URI += "file://musl-arm-inb-outb.patch \
+            file://0001-configure.ac-Fix-check-for-CLOCK_MONOTONIC.patch \
+            file://0002-configure.ac-Fix-wayland-scanner-and-protocols-locat.patch \
+            file://0003-modesetting-Fix-16-bit-depth-bpp-mode.patch \
+            file://0003-Remove-check-for-useSIGIO-option.patch \
+            file://CVE-2017-10971-1.patch \
+            file://CVE-2017-10971-2.patch \
+            file://CVE-2017-10971-3.patch \
+            "
+SRC_URI[md5sum] = "015d2fc4b9f2bfe7a626edb63a62c65e"
+SRC_URI[sha256sum] = "677a8166e03474719238dfe396ce673c4234735464d6dadf2959b600d20e5a98"
+
+# These extensions are now integrated into the server, so declare the migration
+# path for in-place upgrades.
+
+RREPLACES_${PN} =  "${PN}-extension-dri \
+                    ${PN}-extension-dri2 \
+                    ${PN}-extension-record \
+                    ${PN}-extension-extmod \
+                    ${PN}-extension-dbe \
+                   "
+RPROVIDES_${PN} =  "${PN}-extension-dri \
+                    ${PN}-extension-dri2 \
+                    ${PN}-extension-record \
+                    ${PN}-extension-extmod \
+                    ${PN}-extension-dbe \
+                   "
+RCONFLICTS_${PN} = "${PN}-extension-dri \
+                    ${PN}-extension-dri2 \
+                    ${PN}-extension-record \
+                    ${PN}-extension-extmod \
+                    ${PN}-extension-dbe \
+                   "
diff --git a/import-layers/yocto-poky/meta/recipes-graphics/xvideo-tests/xvideo-tests_git.bb b/import-layers/yocto-poky/meta/recipes-graphics/xvideo-tests/xvideo-tests_git.bb
index 51839dd..2667dd8 100644
--- a/import-layers/yocto-poky/meta/recipes-graphics/xvideo-tests/xvideo-tests_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-graphics/xvideo-tests/xvideo-tests_git.bb
@@ -8,6 +8,7 @@
 PV = "0.1+git${SRCPV}"
 
 SRC_URI = "git://git.yoctoproject.org/test-xvideo"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 S = "${WORKDIR}/git"
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/blktrace/blktrace_git.bb b/import-layers/yocto-poky/meta/recipes-kernel/blktrace/blktrace_git.bb
index 957cb70..770575f 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/blktrace/blktrace_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/blktrace/blktrace_git.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Generates traces of I/O traffic on block devices"
+HOMEPAGE = "http://brick.kernel.dk/snaps/"
 LICENSE = "GPLv2"
 LIC_FILES_CHKSUM = "file://COPYING;md5=393a5ca445f6965873eca0259a17f833"
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-linux_1.8.bb b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-linux_1.9.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-linux_1.8.bb
rename to import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-linux_1.9.bb
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-module_1.8.bb b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-module_1.9.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-module_1.8.bb
rename to import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-module_1.9.bb
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-tests_1.8.bb b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-tests_1.8.bb
deleted file mode 100644
index c400524..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-tests_1.8.bb
+++ /dev/null
@@ -1,21 +0,0 @@
-require cryptodev.inc
-
-SUMMARY = "A test suite for /dev/crypto device driver"
-
-DEPENDS += "openssl"
-
-SRC_URI += " \
-file://0001-Add-the-compile-and-install-rules-for-cryptodev-test.patch \
-"
-
-EXTRA_OEMAKE='KERNEL_DIR="${STAGING_EXECPREFIXDIR}" PREFIX="${D}"'
-
-do_compile() {
-	oe_runmake testprogs
-}
-
-do_install() {
-	oe_runmake install_tests
-}
-
-FILES_${PN} = "${bindir}/tests_cryptodev/*"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-tests_1.9.bb b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-tests_1.9.bb
new file mode 100644
index 0000000..9afb3de
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev-tests_1.9.bb
@@ -0,0 +1,21 @@
+require cryptodev.inc
+
+SUMMARY = "A test suite for /dev/crypto device driver"
+
+DEPENDS += "openssl10"
+
+SRC_URI += " \
+file://0001-Add-the-compile-and-install-rules-for-cryptodev-test.patch \
+"
+
+EXTRA_OEMAKE='KERNEL_DIR="${STAGING_EXECPREFIXDIR}" PREFIX="${D}"'
+
+do_compile() {
+	oe_runmake testprogs
+}
+
+do_install() {
+	oe_runmake install_tests
+}
+
+FILES_${PN} = "${bindir}/*"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev.inc b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev.inc
index 4ae0a2b..50366e7 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev.inc
+++ b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/cryptodev.inc
@@ -3,14 +3,10 @@
 LICENSE = "GPLv2"
 LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
 
-SRC_URI = "http://nwl.cc/pub/cryptodev-linux/cryptodev-linux-${PV}.tar.gz \
-           file://06d6b560c6e45dc317dae47c74706fa43f4a31d8.patch \
-           file://cb186f682679383e8b5806240927903730ce85d9.patch \
-           file://0001-Adjust-to-another-change-in-the-user-page-API.patch \
-           file://kernel-4-10-changes.patch"
+SRC_URI = "http://nwl.cc/pub/cryptodev-linux/cryptodev-linux-${PV}.tar.gz"
 
-SRC_URI[md5sum] = "02644cc4cd02301e0b503a332eb2f0b5"
-SRC_URI[sha256sum] = "67fabde9fb67b286a96c4f45b594b0eccd0f761b495705c18f2ae9461b831376"
+SRC_URI[md5sum] = "cb4e0ed9e5937716c7c8a7be84895b6d"
+SRC_URI[sha256sum] = "9f4c0b49b30e267d776f79455d09c70cc9c12c86eee400a0d0a0cd1d8e467950"
 
 S = "${WORKDIR}/cryptodev-linux-${PV}"
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/0001-Adjust-to-another-change-in-the-user-page-API.patch b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/0001-Adjust-to-another-change-in-the-user-page-API.patch
deleted file mode 100644
index fb75278..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/0001-Adjust-to-another-change-in-the-user-page-API.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-From f126e4837e6334d0464540995df7426fedf6b175 Mon Sep 17 00:00:00 2001
-From: Michael Weiser <michael.weiser@gmx.de>
-Date: Fri, 11 Nov 2016 18:09:32 +0100
-Subject: [PATCH] Adjust to another change in the user page API
-
-4.9.0 will replace the write and force flags of get_user_pages_remote()
-with a gup_flags parameter[1]. Distinguish the two APIs based on kernel
-version we're compiling for.
-
-[1] https://github.com/torvalds/linux/commit/9beae1ea89305a9667ceaab6d0bf46a045ad71e7
-
-Upstream-Status: Backport
-
-Signed-off-by: Daniel Schultz <d.schultz@phytec.de>
----
- zc.c | 8 +++++++-
- 1 file changed, 7 insertions(+), 1 deletion(-)
-
-diff --git a/zc.c b/zc.c
-index a97b49f..e766ee3 100644
---- a/zc.c
-+++ b/zc.c
-@@ -65,7 +65,13 @@ int __get_userbuf(uint8_t __user *addr, uint32_t len, int write,
- 	ret = get_user_pages(
- #endif
- 			task, mm,
--			(unsigned long)addr, pgcount, write, 0, pg, NULL);
-+			(unsigned long)addr, pgcount,
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 9, 0))
-+			write ? FOLL_WRITE : 0,
-+#else
-+			write, 0,
-+#endif
-+			pg, NULL);
- 	up_read(&mm->mmap_sem);
- 	if (ret != pgcount)
- 		return -EINVAL;
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/0001-Disable-installing-header-file-provided-by-another-p.patch b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/0001-Disable-installing-header-file-provided-by-another-p.patch
index a580fc6..885b582 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/0001-Disable-installing-header-file-provided-by-another-p.patch
+++ b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/0001-Disable-installing-header-file-provided-by-another-p.patch
@@ -7,22 +7,18 @@
 
 Upstream-Status: Inappropriate [ OE specific ]
 ---
- Makefile | 2 --
- 1 file changed, 2 deletions(-)
+ Makefile | 1 -
+ 1 file changed, 1 deletion(-)
 
 diff --git a/Makefile b/Makefile
-index d66ef26..8e97c6a 100644
+index 5a080e0..bf02396 100644
 --- a/Makefile
 +++ b/Makefile
-@@ -23,8 +23,6 @@ install: modules_install
+@@ -33,7 +33,6 @@ install: modules_install
  
  modules_install:
- 	make -C $(KERNEL_DIR) SUBDIRS=`pwd` modules_install
--	@echo "Installing cryptodev.h in $(PREFIX)/usr/include/crypto ..."
--	@install -D crypto/cryptodev.h $(PREFIX)/usr/include/crypto/cryptodev.h
+ 	$(MAKE) $(KERNEL_MAKE_OPTS) modules_install
+-	install -m 644 -D crypto/cryptodev.h $(DESTDIR)/$(includedir)/crypto/cryptodev.h
  
  clean:
- 	make -C $(KERNEL_DIR) SUBDIRS=`pwd` clean
--- 
-1.9.1
-
+ 	$(MAKE) $(KERNEL_MAKE_OPTS) clean
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/06d6b560c6e45dc317dae47c74706fa43f4a31d8.patch b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/06d6b560c6e45dc317dae47c74706fa43f4a31d8.patch
deleted file mode 100644
index cb556e1..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/06d6b560c6e45dc317dae47c74706fa43f4a31d8.patch
+++ /dev/null
@@ -1,54 +0,0 @@
-From f14b4706b0d04988e7e5bc8c4d2aefef9f029d9d Mon Sep 17 00:00:00 2001
-From: Michael Weiser <michael.weiser@gmx.de>
-Date: Fri, 5 Aug 2016 18:43:55 +0200
-Subject: [PATCH] Adjust to recent user page API changes
-
-4.6.0 basically renamed get_user_pages() to get_user_pages_remote() and
-introduced a new get_user_pages() that always works on the current
-task.[1] Distinguish the two APIs based on kernel version we're
-compiling for.
-
-Also, there seems to have been a massive cleansing of
-page_cache_release(page) in favour of put_page(page)[2] which was an
-alias for put_page(page)[3] since 2.6.0. Before that beginning with
-2.4.0 both page_cache_release(page) and put_page(page) have been aliases
-for __free_page(page). So using put_page() instead of
-page_cache_release(page) will produce identical code for anything after
-2.4.0.
-
-[1] https://lkml.org/lkml/2016/2/10/555
-[2] https://www.spinics.net/lists/linux-fsdevel/msg95923.html
-[3] https://www.spinics.net/lists/linux-fsdevel/msg95922.html
----
- zc.c | 9 +++++++--
- 1 file changed, 7 insertions(+), 2 deletions(-)
-
-Upstream-Status: Backport [from master for 4.8 kernels]
-
-Index: cryptodev-linux-1.8/zc.c
-===================================================================
---- cryptodev-linux-1.8.orig/zc.c
-+++ cryptodev-linux-1.8/zc.c
-@@ -59,7 +59,12 @@ int __get_userbuf(uint8_t __user *addr,
- 	}
- 
- 	down_read(&mm->mmap_sem);
--	ret = get_user_pages(task, mm,
-+#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 6, 0))
-+	ret = get_user_pages_remote(
-+#else
-+	ret = get_user_pages(
-+#endif
-+			task, mm,
- 			(unsigned long)addr, pgcount, write, 0, pg, NULL);
- 	up_read(&mm->mmap_sem);
- 	if (ret != pgcount)
-@@ -119,7 +124,7 @@ void release_user_pages(struct csession
- 		else
- 			ses->readonly_pages--;
- 
--		page_cache_release(ses->pages[i]);
-+		put_page(ses->pages[i]);
- 	}
- 	ses->used_pages = 0;
- }
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/cb186f682679383e8b5806240927903730ce85d9.patch b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/cb186f682679383e8b5806240927903730ce85d9.patch
deleted file mode 100644
index eb0eab6..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/cb186f682679383e8b5806240927903730ce85d9.patch
+++ /dev/null
@@ -1,279 +0,0 @@
-From cb186f682679383e8b5806240927903730ce85d9 Mon Sep 17 00:00:00 2001
-From: Michael Weiser <michael.weiser@gmx.de>
-Date: Fri, 5 Aug 2016 17:26:27 +0200
-Subject: [PATCH] Support skcipher in addition to ablkcipher API
-
-The ablkcipher API is being phased out[1]. The unified skcipher API
-seems to have made its entry with 4.3.[3, 4] By what can be seen from
-migration patches[1.ff.], it's a drop-in replacement.
-
-Also, deallocators such as crypto_free_skcipher() are NULL-safe now[2].
-
-Add a new header cipherapi.h to aid migration from ablkcipher to skcipher and
-retain support for old kernels. Make it decide which API to use and provide
-appropriate function calls and type definitions. Since the ablkcipher and
-skcipher APIs are so similar, those are mainly defines for corresponding
-pseudo-functions in namespace cryptodev_ derived directly from their API
-counterparts.
-
-Compiles and works (i.e. checks pass) with Debian testing 4.6.4 kernel
-as well as 4.8-rc2+ Linus git tree as of today. (Both require a fix for
-changed page access API[5].)
-
-[1] https://www.spinics.net/lists/linux-crypto/msg18133.html
-[2] https://www.spinics.net/lists/linux-crypto/msg18154.html, line 120
-[3] https://www.spinics.net/lists/linux-crypto/msg16373.html
-[4] https://www.spinics.net/lists/linux-crypto/msg16294.html
-[5] https://github.com/cryptodev-linux/cryptodev-linux/pull/14
----
- cipherapi.h | 60 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
- cryptlib.c  | 40 ++++++++++++++++++----------------------
- cryptlib.h  |  6 ++++--
- ioctl.c     |  4 ++--
- 4 files changed, 84 insertions(+), 26 deletions(-)
- create mode 100644 cipherapi.h
-
-Upstream-Status: Backport [from master for 4.8 kernels]
-
-Index: cryptodev-linux-1.8/cipherapi.h
-===================================================================
---- /dev/null
-+++ cryptodev-linux-1.8/cipherapi.h
-@@ -0,0 +1,60 @@
-+#ifndef CIPHERAPI_H
-+# define CIPHERAPI_H
-+
-+#include <linux/version.h>
-+
-+#if (LINUX_VERSION_CODE < KERNEL_VERSION(4, 8, 0))
-+# include <linux/crypto.h>
-+
-+typedef struct ablkcipher_alg cryptodev_blkcipher_alg_t;
-+typedef struct crypto_ablkcipher cryptodev_crypto_blkcipher_t;
-+typedef struct ablkcipher_request cryptodev_blkcipher_request_t;
-+
-+# define cryptodev_crypto_alloc_blkcipher crypto_alloc_ablkcipher
-+# define cryptodev_crypto_blkcipher_alg crypto_ablkcipher_alg
-+# define cryptodev_crypto_blkcipher_blocksize crypto_ablkcipher_blocksize
-+# define cryptodev_crypto_blkcipher_ivsize crypto_ablkcipher_ivsize
-+# define cryptodev_crypto_blkcipher_alignmask crypto_ablkcipher_alignmask
-+# define cryptodev_crypto_blkcipher_setkey crypto_ablkcipher_setkey
-+
-+static inline void cryptodev_crypto_free_blkcipher(cryptodev_crypto_blkcipher_t *c) {
-+	if (c)
-+		crypto_free_ablkcipher(c);
-+}
-+
-+# define cryptodev_blkcipher_request_alloc ablkcipher_request_alloc
-+# define cryptodev_blkcipher_request_set_callback ablkcipher_request_set_callback
-+
-+static inline void cryptodev_blkcipher_request_free(cryptodev_blkcipher_request_t *r) {
-+	if (r)
-+		ablkcipher_request_free(r);
-+}
-+
-+# define cryptodev_blkcipher_request_set_crypt ablkcipher_request_set_crypt
-+# define cryptodev_crypto_blkcipher_encrypt crypto_ablkcipher_encrypt
-+# define cryptodev_crypto_blkcipher_decrypt crypto_ablkcipher_decrypt
-+# define cryptodev_crypto_blkcipher_tfm crypto_ablkcipher_tfm
-+#else
-+#include <crypto/skcipher.h>
-+
-+typedef struct skcipher_alg cryptodev_blkcipher_alg_t;
-+typedef struct crypto_skcipher cryptodev_crypto_blkcipher_t;
-+typedef struct skcipher_request cryptodev_blkcipher_request_t;
-+
-+# define cryptodev_crypto_alloc_blkcipher crypto_alloc_skcipher
-+# define cryptodev_crypto_blkcipher_alg crypto_skcipher_alg
-+# define cryptodev_crypto_blkcipher_blocksize crypto_skcipher_blocksize
-+# define cryptodev_crypto_blkcipher_ivsize crypto_skcipher_ivsize
-+# define cryptodev_crypto_blkcipher_alignmask crypto_skcipher_alignmask
-+# define cryptodev_crypto_blkcipher_setkey crypto_skcipher_setkey
-+# define cryptodev_crypto_free_blkcipher crypto_free_skcipher
-+# define cryptodev_blkcipher_request_alloc skcipher_request_alloc
-+# define cryptodev_blkcipher_request_set_callback skcipher_request_set_callback
-+# define cryptodev_blkcipher_request_free skcipher_request_free
-+# define cryptodev_blkcipher_request_set_crypt skcipher_request_set_crypt
-+# define cryptodev_crypto_blkcipher_encrypt crypto_skcipher_encrypt
-+# define cryptodev_crypto_blkcipher_decrypt crypto_skcipher_decrypt
-+# define cryptodev_crypto_blkcipher_tfm crypto_skcipher_tfm
-+#endif
-+
-+#endif
-Index: cryptodev-linux-1.8/cryptlib.c
-===================================================================
---- cryptodev-linux-1.8.orig/cryptlib.c
-+++ cryptodev-linux-1.8/cryptlib.c
-@@ -23,7 +23,6 @@
-  * 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301, USA.
-  */
- 
--#include <linux/crypto.h>
- #include <linux/mm.h>
- #include <linux/highmem.h>
- #include <linux/ioctl.h>
-@@ -37,6 +36,7 @@
- #include <linux/rtnetlink.h>
- #include <crypto/authenc.h>
- #include "cryptodev_int.h"
-+#include "cipherapi.h"
- 
- 
- struct cryptodev_result {
-@@ -133,15 +133,15 @@ int cryptodev_cipher_init(struct cipher_
- 	int ret;
- 
- 	if (aead == 0) {
--		struct ablkcipher_alg *alg;
-+		cryptodev_blkcipher_alg_t *alg;
- 
--		out->async.s = crypto_alloc_ablkcipher(alg_name, 0, 0);
-+		out->async.s = cryptodev_crypto_alloc_blkcipher(alg_name, 0, 0);
- 		if (unlikely(IS_ERR(out->async.s))) {
- 			ddebug(1, "Failed to load cipher %s", alg_name);
- 				return -EINVAL;
- 		}
- 
--		alg = crypto_ablkcipher_alg(out->async.s);
-+		alg = cryptodev_crypto_blkcipher_alg(out->async.s);
- 		if (alg != NULL) {
- 			/* Was correct key length supplied? */
- 			if (alg->max_keysize > 0 &&
-@@ -154,11 +154,11 @@ int cryptodev_cipher_init(struct cipher_
- 			}
- 		}
- 
--		out->blocksize = crypto_ablkcipher_blocksize(out->async.s);
--		out->ivsize = crypto_ablkcipher_ivsize(out->async.s);
--		out->alignmask = crypto_ablkcipher_alignmask(out->async.s);
-+		out->blocksize = cryptodev_crypto_blkcipher_blocksize(out->async.s);
-+		out->ivsize = cryptodev_crypto_blkcipher_ivsize(out->async.s);
-+		out->alignmask = cryptodev_crypto_blkcipher_alignmask(out->async.s);
- 
--		ret = crypto_ablkcipher_setkey(out->async.s, keyp, keylen);
-+		ret = cryptodev_crypto_blkcipher_setkey(out->async.s, keyp, keylen);
- 	} else {
- 		out->async.as = crypto_alloc_aead(alg_name, 0, 0);
- 		if (unlikely(IS_ERR(out->async.as))) {
-@@ -191,14 +191,14 @@ int cryptodev_cipher_init(struct cipher_
- 	init_completion(&out->async.result->completion);
- 
- 	if (aead == 0) {
--		out->async.request = ablkcipher_request_alloc(out->async.s, GFP_KERNEL);
-+		out->async.request = cryptodev_blkcipher_request_alloc(out->async.s, GFP_KERNEL);
- 		if (unlikely(!out->async.request)) {
- 			derr(1, "error allocating async crypto request");
- 			ret = -ENOMEM;
- 			goto error;
- 		}
- 
--		ablkcipher_request_set_callback(out->async.request,
-+		cryptodev_blkcipher_request_set_callback(out->async.request,
- 					CRYPTO_TFM_REQ_MAY_BACKLOG,
- 					cryptodev_complete, out->async.result);
- 	} else {
-@@ -218,10 +218,8 @@ int cryptodev_cipher_init(struct cipher_
- 	return 0;
- error:
- 	if (aead == 0) {
--		if (out->async.request)
--			ablkcipher_request_free(out->async.request);
--		if (out->async.s)
--			crypto_free_ablkcipher(out->async.s);
-+		cryptodev_blkcipher_request_free(out->async.request);
-+		cryptodev_crypto_free_blkcipher(out->async.s);
- 	} else {
- 		if (out->async.arequest)
- 			aead_request_free(out->async.arequest);
-@@ -237,10 +235,8 @@ void cryptodev_cipher_deinit(struct ciph
- {
- 	if (cdata->init) {
- 		if (cdata->aead == 0) {
--			if (cdata->async.request)
--				ablkcipher_request_free(cdata->async.request);
--			if (cdata->async.s)
--				crypto_free_ablkcipher(cdata->async.s);
-+			cryptodev_blkcipher_request_free(cdata->async.request);
-+			cryptodev_crypto_free_blkcipher(cdata->async.s);
- 		} else {
- 			if (cdata->async.arequest)
- 				aead_request_free(cdata->async.arequest);
-@@ -289,10 +285,10 @@ ssize_t cryptodev_cipher_encrypt(struct
- 	reinit_completion(&cdata->async.result->completion);
- 
- 	if (cdata->aead == 0) {
--		ablkcipher_request_set_crypt(cdata->async.request,
-+		cryptodev_blkcipher_request_set_crypt(cdata->async.request,
- 			(struct scatterlist *)src, dst,
- 			len, cdata->async.iv);
--		ret = crypto_ablkcipher_encrypt(cdata->async.request);
-+		ret = cryptodev_crypto_blkcipher_encrypt(cdata->async.request);
- 	} else {
- 		aead_request_set_crypt(cdata->async.arequest,
- 			(struct scatterlist *)src, dst,
-@@ -311,10 +307,10 @@ ssize_t cryptodev_cipher_decrypt(struct
- 
- 	reinit_completion(&cdata->async.result->completion);
- 	if (cdata->aead == 0) {
--		ablkcipher_request_set_crypt(cdata->async.request,
-+		cryptodev_blkcipher_request_set_crypt(cdata->async.request,
- 			(struct scatterlist *)src, dst,
- 			len, cdata->async.iv);
--		ret = crypto_ablkcipher_decrypt(cdata->async.request);
-+		ret = cryptodev_crypto_blkcipher_decrypt(cdata->async.request);
- 	} else {
- 		aead_request_set_crypt(cdata->async.arequest,
- 			(struct scatterlist *)src, dst,
-Index: cryptodev-linux-1.8/cryptlib.h
-===================================================================
---- cryptodev-linux-1.8.orig/cryptlib.h
-+++ cryptodev-linux-1.8/cryptlib.h
-@@ -3,6 +3,8 @@
- 
- #include <linux/version.h>
- 
-+#include "cipherapi.h"
-+
- struct cipher_data {
- 	int init; /* 0 uninitialized */
- 	int blocksize;
-@@ -12,8 +14,8 @@ struct cipher_data {
- 	int alignmask;
- 	struct {
- 		/* block ciphers */
--		struct crypto_ablkcipher *s;
--		struct ablkcipher_request *request;
-+		cryptodev_crypto_blkcipher_t *s;
-+		cryptodev_blkcipher_request_t *request;
- 
- 		/* AEAD ciphers */
- 		struct crypto_aead *as;
-Index: cryptodev-linux-1.8/ioctl.c
-===================================================================
---- cryptodev-linux-1.8.orig/ioctl.c
-+++ cryptodev-linux-1.8/ioctl.c
-@@ -34,7 +34,6 @@
-  */
- 
- #include <crypto/hash.h>
--#include <linux/crypto.h>
- #include <linux/mm.h>
- #include <linux/highmem.h>
- #include <linux/ioctl.h>
-@@ -53,6 +52,7 @@
- #include "cryptodev_int.h"
- #include "zc.h"
- #include "version.h"
-+#include "cipherapi.h"
- 
- MODULE_AUTHOR("Nikos Mavrogiannopoulos <nmav@gnutls.org>");
- MODULE_DESCRIPTION("CryptoDev driver");
-@@ -765,7 +765,7 @@ static int get_session_info(struct fcryp
- 
- 	if (ses_ptr->cdata.init) {
- 		if (ses_ptr->cdata.aead == 0)
--			tfm = crypto_ablkcipher_tfm(ses_ptr->cdata.async.s);
-+			tfm = cryptodev_crypto_blkcipher_tfm(ses_ptr->cdata.async.s);
- 		else
- 			tfm = crypto_aead_tfm(ses_ptr->cdata.async.as);
- 		tfm_info_to_alg_info(&siop->cipher_info, tfm);
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/kernel-4-10-changes.patch b/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/kernel-4-10-changes.patch
deleted file mode 100644
index 93d608b..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/cryptodev/files/kernel-4-10-changes.patch
+++ /dev/null
@@ -1,57 +0,0 @@
-Upstream-Status: Backport
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-From 2b29be8ac41414ed19cb4f5d5626d9bd0d7b11a8 Mon Sep 17 00:00:00 2001
-From: Cristian Stoica <cristian.stoica@nxp.com>
-Date: Wed, 8 Feb 2017 12:11:04 +0200
-Subject: [PATCH] adjust to API changes in kernel >=4.10
-
-There are many changes related to get_user_pages and the code is rewritten
-for clarity.
-
-Signed-off-by: Cristian Stoica <cristian.stoica@nxp.com>
----
- zc.c | 28 +++++++++++++++++-----------
- 1 file changed, 17 insertions(+), 11 deletions(-)
-
-diff --git a/zc.c b/zc.c
-index e766ee3..2f4ea99 100644
---- a/zc.c
-+++ b/zc.c
-@@ -59,19 +59,25 @@ int __get_userbuf(uint8_t __user *addr, uint32_t len, int write,
- 	}
- 
- 	down_read(&mm->mmap_sem);
--#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 6, 0))
--	ret = get_user_pages_remote(
-+#if (LINUX_VERSION_CODE < KERNEL_VERSION(4, 6, 0))
-+	ret = get_user_pages(task, mm,
-+			(unsigned long)addr, pgcount, write, 0, pg, NULL);
- #else
--	ret = get_user_pages(
--#endif
--			task, mm,
--			(unsigned long)addr, pgcount,
--#if (LINUX_VERSION_CODE >= KERNEL_VERSION(4, 9, 0))
--			write ? FOLL_WRITE : 0,
--#else
--			write, 0,
--#endif
-+#  if (LINUX_VERSION_CODE < KERNEL_VERSION(4, 9, 0))
-+	ret = get_user_pages_remote(task, mm,
-+			(unsigned long)addr, pgcount, write, 0, pg, NULL);
-+#  else
-+#    if (LINUX_VERSION_CODE < KERNEL_VERSION(4, 10, 0))
-+	ret = get_user_pages_remote(task, mm,
-+			(unsigned long)addr, pgcount, write ? FOLL_WRITE : 0,
- 			pg, NULL);
-+#    else
-+	ret = get_user_pages_remote(task, mm,
-+			(unsigned long)addr, pgcount, write ? FOLL_WRITE : 0,
-+			pg, NULL, NULL);
-+#    endif
-+#  endif
-+#endif
- 	up_read(&mm->mmap_sem);
- 	if (ret != pgcount)
- 		return -EINVAL;
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/dtc/dtc.inc b/import-layers/yocto-poky/meta/recipes-kernel/dtc/dtc.inc
index 0c409b0..d759946 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/dtc/dtc.inc
+++ b/import-layers/yocto-poky/meta/recipes-kernel/dtc/dtc.inc
@@ -1,4 +1,5 @@
 SUMMARY = "Device Tree Compiler"
+HOMEPAGE = "https://devicetree.org/"
 DESCRIPTION = "The Device Tree Compiler is a tool used to manipulate the Open-Firmware-like device tree used by PowerPC kernels."
 SECTION = "bootloader"
 LICENSE = "GPLv2 | BSD"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/dtc/dtc_1.4.2.bb b/import-layers/yocto-poky/meta/recipes-kernel/dtc/dtc_1.4.2.bb
deleted file mode 100644
index cc72adc..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/dtc/dtc_1.4.2.bb
+++ /dev/null
@@ -1,10 +0,0 @@
-require dtc.inc
-
-LIC_FILES_CHKSUM = "file://GPL;md5=94d55d512a9ba36caa9b7df079bae19f \
-		    file://libfdt/libfdt.h;beginline=3;endline=52;md5=fb360963151f8ec2d6c06b055bcbb68c"
-
-SRCREV = "ec02b34c05be04f249ffaaca4b666f5246877dea"
-
-S = "${WORKDIR}/git"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/dtc/dtc_1.4.4.bb b/import-layers/yocto-poky/meta/recipes-kernel/dtc/dtc_1.4.4.bb
new file mode 100644
index 0000000..eadb7ba
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/dtc/dtc_1.4.4.bb
@@ -0,0 +1,10 @@
+require dtc.inc
+
+LIC_FILES_CHKSUM = "file://GPL;md5=94d55d512a9ba36caa9b7df079bae19f \
+		    file://libfdt/libfdt.h;beginline=3;endline=52;md5=fb360963151f8ec2d6c06b055bcbb68c"
+
+SRCREV = "558cd81bdd432769b59bff01240c44f82cfb1a9d"
+
+S = "${WORKDIR}/git"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/kern-tools/kern-tools-native_git.bb b/import-layers/yocto-poky/meta/recipes-kernel/kern-tools/kern-tools-native_git.bb
index 5e65469..3a3992a 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/kern-tools/kern-tools-native_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/kern-tools/kern-tools-native_git.bb
@@ -4,7 +4,7 @@
 
 DEPENDS = "git-native"
 
-SRCREV = "bd7447cd6274d764a129dcdc246cdbfd8c47b991"
+SRCREV = "b46b1c4f0973bf1eb09cf1191f5f4e69bcd0475d"
 PR = "r12"
 PV = "0.2+git${SRCPV}"
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools.inc b/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools.inc
index bdfe024..c689bec 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools.inc
+++ b/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools.inc
@@ -11,6 +11,7 @@
 SRC_URI = "${KERNELORG_MIRROR}/linux/utils/kernel/kexec/kexec-tools-${PV}.tar.gz \
            file://kdump \
            file://kdump.conf \
+           file://kdump.service \
 "
 
 PR = "r1"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools/0001-Disable-PIE-during-link.patch b/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools/0001-Disable-PIE-during-link.patch
new file mode 100644
index 0000000..3f2f85e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools/0001-Disable-PIE-during-link.patch
@@ -0,0 +1,31 @@
+From ea7be6d71b85880e8e8a2c8a4f49a696c5f31ae4 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 10 Jun 2017 11:18:49 -0700
+Subject: [PATCH] Disable PIE during link
+
+We have explicitly disabled PIE during compilation, so we
+just need to match that in the linker flags.
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ purgatory/Makefile | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/purgatory/Makefile b/purgatory/Makefile
+index 564bdb7..a08e41f 100644
+--- a/purgatory/Makefile
++++ b/purgatory/Makefile
+@@ -59,7 +59,7 @@ $(PURGATORY): CPPFLAGS=$($(ARCH)_PURGATORY_EXTRA_CFLAGS) \
+ 			-Iinclude \
+ 			-I$(shell $(CC) -print-file-name=include)
+ $(PURGATORY): LDFLAGS=$($(ARCH)_PURGATORY_EXTRA_CFLAGS)\
+-			-Wl,--no-undefined -nostartfiles -nostdlib \
++			-Wl,--no-undefined -no-pie -nostartfiles -nostdlib \
+ 			-nodefaultlibs -e purgatory_start -Wl,-r \
+ 			-Wl,-Map=$(PURGATORY_MAP)
+ 
+-- 
+2.13.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools/0001-arm64-Disable-PIC.patch b/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools/0001-arm64-Disable-PIC.patch
new file mode 100644
index 0000000..84e94d7
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools/0001-arm64-Disable-PIC.patch
@@ -0,0 +1,31 @@
+From 3bb73e5e5649b455e15d5ca3a7ad1a90c4960972 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sat, 10 Jun 2017 11:54:36 -0700
+Subject: [PATCH] arm64: Disable PIC
+
+Fix
+| cc1: sorry, unimplemented: code model 'large' with -fPIC
+| make: *** [Makefile:118: purgatory/arch/arm64/entry.o] Error 1
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ purgatory/arch/arm64/Makefile | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/purgatory/arch/arm64/Makefile b/purgatory/arch/arm64/Makefile
+index 636abea..80068ca 100644
+--- a/purgatory/arch/arm64/Makefile
++++ b/purgatory/arch/arm64/Makefile
+@@ -1,6 +1,7 @@
+ 
+ arm64_PURGATORY_EXTRA_CFLAGS = \
+ 	-mcmodel=large \
++	-fno-PIC \
+ 	-fno-stack-protector \
+ 	-fno-asynchronous-unwind-tables \
+ 	-Wundef \
+-- 
+2.13.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools/kdump.service b/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools/kdump.service
new file mode 100644
index 0000000..4e65a46
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools/kdump.service
@@ -0,0 +1,12 @@
+[Unit]
+Description=Reboot and dump vmcore via kexec
+DefaultDependencies=no
+
+[Service]
+Type=oneshot
+RemainAfterExit=yes
+ExecStart=@LIBEXECDIR@/kdump-helper start
+ExecStop=@LIBEXECDIR@/kdump-helper stop
+
+[Install]
+WantedBy=multi-user.target
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools_2.0.14.bb b/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools_2.0.14.bb
index 90d5985..0f6398f 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools_2.0.14.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/kexec/kexec-tools_2.0.14.bb
@@ -19,30 +19,47 @@
             file://0001-x86-x86_64-Fix-format-warning-with-die.patch \
             file://0002-ppc-Fix-format-warning-with-die.patch \
             file://kexec-x32.patch \
+            file://0001-Disable-PIE-during-link.patch \
+            file://0001-arm64-Disable-PIC.patch \
          "
 
 SRC_URI[md5sum] = "b2b2c5e6b29d467d6e99d587fb6b7cf5"
 SRC_URI[sha256sum] = "b3e69519d2acced256843b1e8f1ecfa00d9b54fa07449ed78f05b9193f239370"
 
+SECURITY_PIE_CFLAGS_remove = "-fPIE -pie"
+
 PACKAGES =+ "kexec kdump vmcore-dmesg"
 
 ALLOW_EMPTY_${PN} = "1"
 RRECOMMENDS_${PN} = "kexec kdump vmcore-dmesg"
 
 FILES_kexec = "${sbindir}/kexec"
-FILES_kdump = "${sbindir}/kdump ${sysconfdir}/init.d/kdump \
-               ${sysconfdir}/sysconfig/kdump.conf"
+FILES_kdump = "${sbindir}/kdump \
+               ${sysconfdir}/sysconfig/kdump.conf \
+               ${sysconfdir}/init.d/kdump \
+               ${libexecdir}/kdump-helper \
+               ${systemd_unitdir}/system/kdump.service \
+"
+
 FILES_vmcore-dmesg = "${sbindir}/vmcore-dmesg"
 
-inherit update-rc.d
+inherit update-rc.d systemd
 
 INITSCRIPT_PACKAGES = "kdump"
 INITSCRIPT_NAME_kdump = "kdump"
 INITSCRIPT_PARAMS_kdump = "start 56 2 3 4 5 . stop 56 0 1 6 ."
 
 do_install_append () {
-        install -d ${D}${sysconfdir}/init.d
-        install -m 0755 ${WORKDIR}/kdump ${D}${sysconfdir}/init.d/kdump
         install -d ${D}${sysconfdir}/sysconfig
         install -m 0644 ${WORKDIR}/kdump.conf ${D}${sysconfdir}/sysconfig
+
+        if ${@bb.utils.contains('DISTRO_FEATURES', 'sysvinit', 'true', 'false', d)}; then
+                install -D -m 0755 ${WORKDIR}/kdump ${D}${sysconfdir}/init.d/kdump
+        fi
+
+        if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
+                install -D -m 0755 ${WORKDIR}/kdump ${D}${libexecdir}/kdump-helper
+                install -D -m 0644 ${WORKDIR}/kdump.service ${D}${systemd_unitdir}/system/kdump.service
+                sed -i -e 's,@LIBEXECDIR@,${libexecdir},g' ${D}${systemd_unitdir}/system/kdump.service
+        fi
 }
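The reworked do_install_append above installs the kdump script as a systemd helper and rewrites the @LIBEXECDIR@ placeholder in kdump.service at packaging time. A small Python sketch of that substitution step (the libexecdir value is illustrative; the recipe expands ${libexecdir}):

from pathlib import Path

libexecdir = "/usr/libexec"        # stand-in for ${libexecdir}
unit = Path("kdump.service")       # stand-in for the staged unit file

# Equivalent of: sed -i -e 's,@LIBEXECDIR@,${libexecdir},g' kdump.service
unit.write_text(unit.read_text().replace("@LIBEXECDIR@", libexecdir))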
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/kmod/kmod.inc b/import-layers/yocto-poky/meta/recipes-kernel/kmod/kmod.inc
index ba80fc5..7fb10b5 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/kmod/kmod.inc
+++ b/import-layers/yocto-poky/meta/recipes-kernel/kmod/kmod.inc
@@ -14,15 +14,14 @@
                    "
 inherit autotools gtk-doc pkgconfig manpages
 
-SRCREV = "65a885df5f6f15222b44fd695c5eaca17e837a14"
+SRCREV = "ef4257b59c4307b8c627d89f3c7f1feedb32582f"
 # Lookout for PV bump too when SRCREV is changed
-PV = "23+git${SRCPV}"
+PV = "24+git${SRCPV}"
 
 SRC_URI = "git://git.kernel.org/pub/scm/utils/kernel/kmod/kmod.git \
            file://depmod-search.conf \
            file://avoid_parallel_tests.patch \
            file://fix-O_CLOEXEC.patch \
-           file://kcmdline_quotes.patch \
           "
 
 S = "${WORKDIR}/git"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/kmod/kmod/kcmdline_quotes.patch b/import-layers/yocto-poky/meta/recipes-kernel/kmod/kmod/kcmdline_quotes.patch
deleted file mode 100644
index 46bdec5..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/kmod/kmod/kcmdline_quotes.patch
+++ /dev/null
@@ -1,44 +0,0 @@
-From 4a6f92a10680e7e36807f5e2ae8e497e8d73a048 Mon Sep 17 00:00:00 2001
-From: James Minor <james.minor@ni.com>
-Date: Fri, 20 Jan 2017 17:15:50 -0600
-Subject: [PATCH] libkmod: Fix handling of quotes in kernel command line
-
-If a module parameter on the command line contains quotes, any
-spaces inside those quotes should be included as part of the
-parameter.
-
-Signed-off-by: James Minor <james.minor@ni.com>
-
-Upstream-Status: Accepted
----
- libkmod/libkmod-config.c | 7 +++++++
- 1 file changed, 7 insertions(+)
-
-diff --git a/libkmod/libkmod-config.c b/libkmod/libkmod-config.c
-index 57fbe37..ea40d19 100644
---- a/libkmod/libkmod-config.c
-+++ b/libkmod/libkmod-config.c
-@@ -497,6 +497,7 @@ static int kmod_config_parse_kcmdline(struct kmod_config *config)
- 	char buf[KCMD_LINE_SIZE];
- 	int fd, err;
- 	char *p, *modname,  *param = NULL, *value = NULL, is_module = 1;
-+	bool is_quoted = false;
- 
- 	fd = open("/proc/cmdline", O_RDONLY|O_CLOEXEC);
- 	if (fd < 0) {
-@@ -514,6 +515,12 @@ static int kmod_config_parse_kcmdline(struct kmod_config *config)
- 	}
- 
- 	for (p = buf, modname = buf; *p != '\0' && *p != '\n'; p++) {
-+		if (*p == '"') {
-+			is_quoted = !is_quoted;
-+			continue;
-+		}
-+		if (is_quoted)
-+			continue;
- 		switch (*p) {
- 		case ' ':
- 			*p = '\0';
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/latencytop/latencytop-0.5/latencytop-makefile.patch b/import-layers/yocto-poky/meta/recipes-kernel/latencytop/latencytop-0.5/latencytop-makefile.patch
index af19b37..7147fda 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/latencytop/latencytop-0.5/latencytop-makefile.patch
+++ b/import-layers/yocto-poky/meta/recipes-kernel/latencytop/latencytop-0.5/latencytop-makefile.patch
@@ -1,3 +1,7 @@
+
+Signed-off-by: Jack Mitchell <jack.mitchell@dbbroadcast.co.uk>
+Upstream-Status: Pending
+
 diff --git a/Makefile.orig b/Makefile
 index 16a2369..fa797a2 100644
 --- a/Makefile.orig
@@ -5,8 +9,8 @@
 @@ -1,10 +1,11 @@
 -# FIXME: Use autoconf ?
 -HAS_GTK_GUI = 1
-+# Upstream-Status: Inappropriate [configuration]
-+# Signed-off-by: Jack Mitchell <jack.mitchell@dbbroadcast.co.uk>
++#
++#
  
  DESTDIR =
  SBINDIR = /usr/sbin
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux-firmware/linux-firmware_git.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux-firmware/linux-firmware_git.bb
index ff60d30..9054b33 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux-firmware/linux-firmware_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux-firmware/linux-firmware_git.bb
@@ -184,6 +184,8 @@
 
 SRC_URI = "git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git"
 
+UPSTREAM_VERSION_UNKNOWN = "1"
+
 S = "${WORKDIR}/git"
 
 inherit allarch
@@ -216,7 +218,6 @@
 
 	# fixup wl12xx location, after 2.6.37 the kernel searches a different location for it
 	( cd ${D}${nonarch_base_libdir}/firmware ; ln -sf ti-connectivity/* . )
-
 }
 
 
@@ -247,7 +248,7 @@
              ${PN}-iwlwifi-6050-4 ${PN}-iwlwifi-6050-5 \
              ${PN}-iwlwifi-7260 \
              ${PN}-iwlwifi-7265 \
-             ${PN}-iwlwifi-7265d ${PN}-iwlwifi-8265 \
+             ${PN}-iwlwifi-7265d ${PN}-iwlwifi-8000c ${PN}-iwlwifi-8265 \
              ${PN}-iwlwifi-misc \
              ${PN}-ibt-license ${PN}-ibt ${PN}-ibt-misc \
              ${PN}-ibt-11-5 ${PN}-ibt-12-16 ${PN}-ibt-hw-37-7 ${PN}-ibt-hw-37-8 \
@@ -619,6 +620,7 @@
 FILES_${PN}-iwlwifi-7260   = "${nonarch_base_libdir}/firmware/iwlwifi-7260-*.ucode"
 FILES_${PN}-iwlwifi-7265   = "${nonarch_base_libdir}/firmware/iwlwifi-7265-*.ucode"
 FILES_${PN}-iwlwifi-7265d   = "${nonarch_base_libdir}/firmware/iwlwifi-7265D-*.ucode"
+FILES_${PN}-iwlwifi-8000c   = "${nonarch_base_libdir}/firmware/iwlwifi-8000C-*.ucode"
 FILES_${PN}-iwlwifi-8265   = "${nonarch_base_libdir}/firmware/iwlwifi-8265-*.ucode"
 FILES_${PN}-iwlwifi-misc   = "${nonarch_base_libdir}/firmware/iwlwifi-*.ucode"
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_4.10.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_4.10.bb
deleted file mode 100644
index 2926278..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_4.10.bb
+++ /dev/null
@@ -1,11 +0,0 @@
-require linux-libc-headers.inc
-
-SRC_URI_append_libc-musl = "\
-    file://0001-libc-compat.h-fix-some-issues-arising-from-in6.h.patch \
-    file://0002-libc-compat.h-prevent-redefinition-of-struct-ethhdr.patch \
-    file://0003-remove-inclusion-of-sysinfo.h-in-kernel.h.patch \
-    file://0001-libc-compat.h-musl-_does_-define-IFF_LOWER_UP-DORMAN.patch \
-   "
-
-SRC_URI[md5sum] = "b5e7f6b9b2fe1b6cc7bc56a3a0bfc090"
-SRC_URI[sha256sum] = "3c95d9f049bd085e5c346d2c77f063b8425f191460fcd3ae9fe7e94e0477dc4b"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_4.12.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_4.12.bb
new file mode 100644
index 0000000..f0d0abf
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux-libc-headers/linux-libc-headers_4.12.bb
@@ -0,0 +1,11 @@
+require linux-libc-headers.inc
+
+SRC_URI_append_libc-musl = "\
+    file://0001-libc-compat.h-fix-some-issues-arising-from-in6.h.patch \
+    file://0002-libc-compat.h-prevent-redefinition-of-struct-ethhdr.patch \
+    file://0003-remove-inclusion-of-sysinfo.h-in-kernel.h.patch \
+    file://0001-libc-compat.h-musl-_does_-define-IFF_LOWER_UP-DORMAN.patch \
+   "
+
+SRC_URI[md5sum] = "fc454157e2d024d401a60905d6481c6b"
+SRC_URI[sha256sum] = "a45c3becd4d08ce411c14628a949d08e2433d8cdeca92036c7013980e93858ab"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/kernel-devsrc.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/kernel-devsrc.bb
index 7004261..c1b5b77 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/kernel-devsrc.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/kernel-devsrc.bb
@@ -46,7 +46,7 @@
         cd ${B}
         find . -type d -name '.git*' -prune -o -path '.debug' -prune -o -type f -print0 | cpio --null -pdlu $kerneldir
         cd ${S}
-        find . -type d -name '.git*' -prune -o -type f -print0 | cpio --null -pdlu $kerneldir
+	find . -type d -name '.git*' -prune -o -type d -name '.kernel-meta' -prune -o -type f -print0 | cpio --null -pdlu $kerneldir
 
         # Explicitly set KBUILD_OUTPUT to ensure that the image directory is cleaned and not
         # The main build artifacts. We clean the directory to avoid QA errors on mismatched
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-dtb.inc b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-dtb.inc
index 0174c80..f191277 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-dtb.inc
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-dtb.inc
@@ -1,91 +1,3 @@
-# Support for device tree generation
-FILES_kernel-devicetree = "/${KERNEL_IMAGEDEST}/devicetree*"
-
-PACKAGE_WRITE_DEPS += "virtual/update-alternatives-native"
-
-python __anonymous () {
-    d.appendVar("PACKAGES", " kernel-devicetree")
-}
-
-normalize_dtb () {
-	DTB="$1"
-	if echo ${DTB} | grep -q '/dts/'; then
-		bbwarn "${DTB} contains the full path to the the dts file, but only the dtb name should be used."
-		DTB=`basename ${DTB} | sed 's,\.dts$,.dtb,g'`
-	fi
-	echo "${DTB}"
-}
-
-get_real_dtb_path_in_kernel () {
-	DTB="$1"
-	DTB_PATH="${B}/arch/${ARCH}/boot/dts/${DTB}"
-	if [ ! -e "${DTB_PATH}" ]; then
-		DTB_PATH="${B}/arch/${ARCH}/boot/${DTB}"
-	fi
-	echo "${DTB_PATH}"
-}
-
-do_compile_append() {
-	for DTB in ${KERNEL_DEVICETREE}; do
-		DTB=`normalize_dtb "${DTB}"`
-		oe_runmake ${DTB}
-	done
-}
-
-do_install_append() {
-	for DTB in ${KERNEL_DEVICETREE}; do
-		DTB=`normalize_dtb "${DTB}"`
-		DTB_EXT=${DTB##*.}
-		DTB_BASE_NAME=`basename ${DTB} ."${DTB_EXT}"`
-		for type in ${KERNEL_IMAGETYPE_FOR_MAKE}; do
-			symlink_name=${type}"-"${KERNEL_IMAGE_SYMLINK_NAME}
-			DTB_SYMLINK_NAME=`echo ${symlink_name} | sed "s/${MACHINE}/${DTB_BASE_NAME}/g"`
-			DTB_PATH=`get_real_dtb_path_in_kernel "${DTB}"`
-			install -m 0644 ${DTB_PATH} ${D}/${KERNEL_IMAGEDEST}/devicetree-${DTB_SYMLINK_NAME}.${DTB_EXT}
-		done
-	done
-}
-
-do_deploy_append() {
-	for DTB in ${KERNEL_DEVICETREE}; do
-		DTB=`normalize_dtb "${DTB}"`
-		DTB_EXT=${DTB##*.}
-		DTB_BASE_NAME=`basename ${DTB} ."${DTB_EXT}"`
-		for type in ${KERNEL_IMAGETYPE_FOR_MAKE}; do
-			base_name=${type}"-"${KERNEL_IMAGE_BASE_NAME}
-			symlink_name=${type}"-"${KERNEL_IMAGE_SYMLINK_NAME}
-			DTB_NAME=`echo ${base_name} | sed "s/${MACHINE}/${DTB_BASE_NAME}/g"`
-			DTB_SYMLINK_NAME=`echo ${symlink_name} | sed "s/${MACHINE}/${DTB_BASE_NAME}/g"`
-			DTB_PATH=`get_real_dtb_path_in_kernel "${DTB}"`
-			install -d ${DEPLOYDIR}
-			install -m 0644 ${DTB_PATH} ${DEPLOYDIR}/${DTB_NAME}.${DTB_EXT}
-			ln -sf ${DTB_NAME}.${DTB_EXT} ${DEPLOYDIR}/${DTB_SYMLINK_NAME}.${DTB_EXT}
-		done
-	done
-}
-
-pkg_postinst_kernel-devicetree () {
-	cd /${KERNEL_IMAGEDEST}
-	for DTB in ${KERNEL_DEVICETREE}; do
-		for type in ${KERNEL_IMAGETYPE_FOR_MAKE}; do
-			symlink_name=${type}"-"${KERNEL_IMAGE_SYMLINK_NAME}
-			DTB_EXT=${DTB##*.}
-			DTB_BASE_NAME=`basename ${DTB} ."${DTB_EXT}"`
-			DTB_SYMLINK_NAME=`echo ${symlink_name} | sed "s/${MACHINE}/${DTB_BASE_NAME}/g"`
-			update-alternatives --install /${KERNEL_IMAGEDEST}/${DTB_BASE_NAME}.${DTB_EXT} ${DTB_BASE_NAME}.${DTB_EXT} devicetree-${DTB_SYMLINK_NAME}.${DTB_EXT} ${KERNEL_PRIORITY} || true
-		done
-	done
-}
-
-pkg_postrm_kernel-devicetree () {
-	cd /${KERNEL_IMAGEDEST}
-	for DTB in ${KERNEL_DEVICETREE}; do
-		for type in ${KERNEL_IMAGETYPE_FOR_MAKE}; do
-			symlink_name=${type}"-"${KERNEL_IMAGE_SYMLINK_NAME}
-			DTB_EXT=${DTB##*.}
-			DTB_BASE_NAME=`basename ${DTB} ."${DTB_EXT}"`
-			DTB_SYMLINK_NAME=`echo ${symlink_name} | sed "s/${MACHINE}/${DTB_BASE_NAME}/g"`
-			update-alternatives --remove ${DTB_BASE_NAME}.${DTB_EXT} devicetree-${DTB_SYMLINK_NAME}.${DTB_EXT} ${KERNEL_PRIORITY} || true
-		done
-	done
+python() {
+    bb.warn("You are using linux-dtb.inc, which is deprecated. You can safely remove it, as device tree support is enabled automatically when KERNEL_DEVICETREE is set.")
 }
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-dummy.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-dummy.bb
index 994ac74..e1c7f76 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-dummy.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-dummy.bb
@@ -13,10 +13,14 @@
 PACKAGES_DYNAMIC += "^kernel-image-.*"
 PACKAGES_DYNAMIC += "^kernel-firmware-.*"
 
-PACKAGES += "kernel-modules"
+PACKAGES += "kernel-modules kernel-vmlinux"
 FILES_kernel-modules = ""
 ALLOW_EMPTY_kernel-modules = "1"
 DESCRIPTION_kernel-modules = "Kernel modules meta package"
+FILES_kernel-vmlinux = ""
+ALLOW_EMPTY_kernel-vmlinux = "1"
+DESCRIPTION_kernel-vmlinux = "Kernel vmlinux meta package"
+
 
 INHIBIT_DEFAULT_DEPS = "1"
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-dev.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-dev.bb
index bf67f81..5852b42 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-dev.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-dev.bb
@@ -28,7 +28,7 @@
 SRCREV_machine ?= '${@oe.utils.conditional("PREFERRED_PROVIDER_virtual/kernel", "linux-yocto-dev", "${AUTOREV}", "29594404d7fe73cd80eaa4ee8c43dcc53970c60e", d)}'
 SRCREV_meta ?= '${@oe.utils.conditional("PREFERRED_PROVIDER_virtual/kernel", "linux-yocto-dev", "${AUTOREV}", "29594404d7fe73cd80eaa4ee8c43dcc53970c60e", d)}'
 
-LINUX_VERSION ?= "4.11-rc+"
+LINUX_VERSION ?= "4.12-rc+"
 LINUX_VERSION_EXTENSION ?= "-yoctodev-${LINUX_KERNEL_TYPE}"
 PV = "${LINUX_VERSION}+git${SRCPV}"
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.1.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.1.bb
deleted file mode 100644
index c82468a..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.1.bb
+++ /dev/null
@@ -1,36 +0,0 @@
-KBRANCH ?= "standard/preempt-rt/base"
-
-require recipes-kernel/linux/linux-yocto.inc
-
-# Skip processing of this recipe if it is not explicitly specified as the
-# PREFERRED_PROVIDER for virtual/kernel. This avoids errors when trying
-# to build multiple virtual/kernel providers, e.g. as dependency of
-# core-image-rt-sdk, core-image-rt.
-python () {
-    if d.getVar("PREFERRED_PROVIDER_virtual/kernel") != "linux-yocto-rt":
-        raise bb.parse.SkipPackage("Set PREFERRED_PROVIDER_virtual/kernel to linux-yocto-rt to enable it")
-}
-
-SRCREV_machine ?= "8cc62ac3f26bd6dde68ad2d86026a252fe7add44"
-SRCREV_meta ?= "9f9c9a66ef3452343586adf150137967e955d71a"
-
-SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.1.git;branch=${KBRANCH};name=machine \
-           git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.1;destsuffix=${KMETA}"
-
-LINUX_VERSION ?= "4.1.43"
-
-PV = "${LINUX_VERSION}+git${SRCPV}"
-
-KMETA = "kernel-meta"
-KCONF_BSP_AUDIT_LEVEL = "2"
-
-LINUX_KERNEL_TYPE = "preempt-rt"
-
-COMPATIBLE_MACHINE = "(qemux86|qemux86-64|qemuarm|qemuppc|qemumips)"
-
-# Functionality flags
-KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc features/taskstats/taskstats.scc"
-KERNEL_FEATURES_append = " ${KERNEL_EXTRA_FEATURES}"
-KERNEL_FEATURES_append_qemuall=" cfg/virtio.scc"
-KERNEL_FEATURES_append_qemux86=" cfg/sound.scc cfg/paravirt_kvm.scc"
-KERNEL_FEATURES_append_qemux86-64=" cfg/sound.scc"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.10.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.10.bb
index f93d953..fd31bf9 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.10.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.10.bb
@@ -12,7 +12,7 @@
 }
 
 SRCREV_machine ?= "c1d8c4408b8aedd88eeb6ccc89ce834dd41b3f09"
-SRCREV_meta ?= "40ee48ac099c04f60d2c132031d9625a4e0c4c9e"
+SRCREV_meta ?= "1d9a8200184af22f8981fa24b0e82af49c7988dd"
 
 SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.10.git;branch=${KBRANCH};name=machine \
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.10;destsuffix=${KMETA}"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.12.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.12.bb
new file mode 100644
index 0000000..4eb0156
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.12.bb
@@ -0,0 +1,38 @@
+KBRANCH ?= "standard/preempt-rt/base"
+
+require recipes-kernel/linux/linux-yocto.inc
+
+# Skip processing of this recipe if it is not explicitly specified as the
+# PREFERRED_PROVIDER for virtual/kernel. This avoids errors when trying
+# to build multiple virtual/kernel providers, e.g. as dependency of
+# core-image-rt-sdk, core-image-rt.
+python () {
+    if d.getVar("PREFERRED_PROVIDER_virtual/kernel") != "linux-yocto-rt":
+        raise bb.parse.SkipPackage("Set PREFERRED_PROVIDER_virtual/kernel to linux-yocto-rt to enable it")
+}
+
+SRCREV_machine ?= "33aa1a4ea44399f12dfb26146ea06db5cd02ca69"
+SRCREV_meta ?= "19d815d5a34bfaad95d87cc097cef18b594daac8"
+
+SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.12.git;branch=${KBRANCH};name=machine \
+           git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.12;destsuffix=${KMETA}"
+
+LINUX_VERSION ?= "4.12.20"
+
+PV = "${LINUX_VERSION}+git${SRCPV}"
+
+KMETA = "kernel-meta"
+KCONF_BSP_AUDIT_LEVEL = "2"
+
+LINUX_KERNEL_TYPE = "preempt-rt"
+
+COMPATIBLE_MACHINE = "(qemux86|qemux86-64|qemuarm|qemuppc|qemumips)"
+
+KERNEL_DEVICETREE_qemuarm = "versatile-pb.dtb"
+
+# Functionality flags
+KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc features/taskstats/taskstats.scc"
+KERNEL_FEATURES_append = " ${KERNEL_EXTRA_FEATURES}"
+KERNEL_FEATURES_append_qemuall=" cfg/virtio.scc"
+KERNEL_FEATURES_append_qemux86=" cfg/sound.scc cfg/paravirt_kvm.scc"
+KERNEL_FEATURES_append_qemux86-64=" cfg/sound.scc"
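The anonymous python() block at the top of this new recipe skips it unless it is explicitly chosen, exactly as the SkipPackage message says. A local.conf sketch for actually building the 4.12 -rt kernel on one of the listed QEMU machines (the version pin is only there to disambiguate from the 4.9/4.10 -rt recipes):

    MACHINE = "qemux86"
    PREFERRED_PROVIDER_virtual/kernel = "linux-yocto-rt"
    PREFERRED_VERSION_linux-yocto-rt = "4.12%"
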
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.4.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.4.bb
index 25d8883..97538e2 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.4.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.4.bb
@@ -11,13 +11,13 @@
         raise bb.parse.SkipPackage("Set PREFERRED_PROVIDER_virtual/kernel to linux-yocto-rt to enable it")
 }
 
-SRCREV_machine ?= "1e691db7af642fff0222a1f1d1e9043c172699d5"
-SRCREV_meta ?= "804d2b3164ec25ed519fd695de9aa0908460c92e"
+SRCREV_machine ?= "d5efeeeb928a0111fc187fd1e8d03d2e4e35d4a0"
+SRCREV_meta ?= "b149d14ccae8349ab33e101f6af233a12f4b17ba"
 
 SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.4.git;branch=${KBRANCH};name=machine \
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.4;destsuffix=${KMETA}"
 
-LINUX_VERSION ?= "4.4.87"
+LINUX_VERSION ?= "4.4.113"
 
 PV = "${LINUX_VERSION}+git${SRCPV}"
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.9.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.9.bb
index 6734dc0..5c016ed 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.9.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-rt_4.9.bb
@@ -11,13 +11,13 @@
         raise bb.parse.SkipPackage("Set PREFERRED_PROVIDER_virtual/kernel to linux-yocto-rt to enable it")
 }
 
-SRCREV_machine ?= "0817a7b3a853d1bdd3b87a2654ed2ee62f624806"
-SRCREV_meta ?= "6acae6f7200af17b3c2be5ecab2cffdc59a02b35"
+SRCREV_machine ?= "90d1ffa36cbd36722638c97c1bb46a5874dbe28e"
+SRCREV_meta ?= "0774eacea2a7d3a150594533b8c80d0c0bfdfded"
 
 SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.9.git;branch=${KBRANCH};name=machine \
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.9;destsuffix=${KMETA}"
 
-LINUX_VERSION ?= "4.9.49"
+LINUX_VERSION ?= "4.9.82"
 
 PV = "${LINUX_VERSION}+git${SRCPV}"
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.1.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.1.bb
deleted file mode 100644
index 537efb1..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.1.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-KBRANCH ?= "standard/tiny/common-pc"
-LINUX_KERNEL_TYPE = "tiny"
-KCONFIG_MODE = "--allnoconfig"
-
-require recipes-kernel/linux/linux-yocto.inc
-
-LINUX_VERSION ?= "4.1.43"
-
-KMETA = "kernel-meta"
-KCONF_BSP_AUDIT_LEVEL = "2"
-
-SRCREV_machine ?= "782133d8166ac71ef1ffaba58b7cf81ec9e532a1"
-SRCREV_meta ?= "9f9c9a66ef3452343586adf150137967e955d71a"
-
-PV = "${LINUX_VERSION}+git${SRCPV}"
-
-SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.1.git;branch=${KBRANCH};name=machine \
-           git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.1;destsuffix=${KMETA}"
-
-COMPATIBLE_MACHINE = "(qemux86$)"
-
-# Functionality flags
-KERNEL_FEATURES = ""
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.10.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.10.bb
index 31396ab..b223497 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.10.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.10.bb
@@ -10,14 +10,14 @@
 KCONF_BSP_AUDIT_LEVEL = "2"
 
 SRCREV_machine ?= "c1d8c4408b8aedd88eeb6ccc89ce834dd41b3f09"
-SRCREV_meta ?= "40ee48ac099c04f60d2c132031d9625a4e0c4c9e"
+SRCREV_meta ?= "1d9a8200184af22f8981fa24b0e82af49c7988dd"
 
 PV = "${LINUX_VERSION}+git${SRCPV}"
 
 SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.10.git;branch=${KBRANCH};name=machine \
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.10;destsuffix=${KMETA}"
 
-COMPATIBLE_MACHINE = "(qemux86$)"
+COMPATIBLE_MACHINE = "qemux86|qemux86-64"
 
 # Functionality flags
 KERNEL_FEATURES = ""
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.12.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.12.bb
new file mode 100644
index 0000000..3ff2e1c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.12.bb
@@ -0,0 +1,25 @@
+KBRANCH ?= "standard/tiny/common-pc"
+LINUX_KERNEL_TYPE = "tiny"
+KCONFIG_MODE = "--allnoconfig"
+
+require recipes-kernel/linux/linux-yocto.inc
+
+LINUX_VERSION ?= "4.12.20"
+
+KMETA = "kernel-meta"
+KCONF_BSP_AUDIT_LEVEL = "2"
+
+SRCREV_machine ?= "40146055677a69730b2c36da1c8c1b4e9bae7bb0"
+SRCREV_meta ?= "19d815d5a34bfaad95d87cc097cef18b594daac8"
+
+PV = "${LINUX_VERSION}+git${SRCPV}"
+
+SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.12.git;branch=${KBRANCH};name=machine \
+           git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.12;destsuffix=${KMETA}"
+
+COMPATIBLE_MACHINE = "qemux86|qemux86-64"
+
+# Functionality flags
+KERNEL_FEATURES = ""
+
+KERNEL_DEVICETREE_qemuarm = "versatile-pb.dtb"
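COMPATIBLE_MACHINE changes here from an anchored regex to a plain alternation; the value is matched as a regular expression against the machine overrides, which is also how BSP layers opt their own boards in. A hedged bbappend sketch with a made-up machine name:

    # linux-yocto-tiny_4.12.bbappend in a BSP layer
    # The machine-specific override only takes effect when building for that
    # machine, so other machines keep the recipe's own whitelist.
    COMPATIBLE_MACHINE_acme-board = "acme-board"
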
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.4.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.4.bb
index 50b9a56..8a98189 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.4.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.4.bb
@@ -4,20 +4,20 @@
 
 require recipes-kernel/linux/linux-yocto.inc
 
-LINUX_VERSION ?= "4.4.87"
+LINUX_VERSION ?= "4.4.113"
 
 KMETA = "kernel-meta"
 KCONF_BSP_AUDIT_LEVEL = "2"
 
-SRCREV_machine ?= "b71c7b786aed26c0a1e4eca66f1d874ec017d699"
-SRCREV_meta ?= "804d2b3164ec25ed519fd695de9aa0908460c92e"
+SRCREV_machine ?= "4d31a8b7661509ff1044abcf9050750cc2478e20"
+SRCREV_meta ?= "b149d14ccae8349ab33e101f6af233a12f4b17ba"
 
 PV = "${LINUX_VERSION}+git${SRCPV}"
 
 SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.4.git;branch=${KBRANCH};name=machine \
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.4;destsuffix=${KMETA}"
 
-COMPATIBLE_MACHINE = "(qemux86$)"
+COMPATIBLE_MACHINE = "qemux86|qemux86-64"
 
 # Functionality flags
 KERNEL_FEATURES = ""
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.9.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.9.bb
index e0c5236..4d46802 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.9.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto-tiny_4.9.bb
@@ -4,20 +4,20 @@
 
 require recipes-kernel/linux/linux-yocto.inc
 
-LINUX_VERSION ?= "4.9.49"
+LINUX_VERSION ?= "4.9.82"
 
 KMETA = "kernel-meta"
 KCONF_BSP_AUDIT_LEVEL = "2"
 
-SRCREV_machine ?= "480ee599fb8df712c10dcf4b7aa6398b79f7d404"
-SRCREV_meta ?= "6acae6f7200af17b3c2be5ecab2cffdc59a02b35"
+SRCREV_machine ?= "eb3b2079ea43b451e06be443f8bc146736f9c4bc"
+SRCREV_meta ?= "0774eacea2a7d3a150594533b8c80d0c0bfdfded"
 
 PV = "${LINUX_VERSION}+git${SRCPV}"
 
 SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.9.git;branch=${KBRANCH};name=machine \
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.9;destsuffix=${KMETA}"
 
-COMPATIBLE_MACHINE = "(qemux86$)"
+COMPATIBLE_MACHINE = "qemux86|qemux86-64"
 
 # Functionality flags
 KERNEL_FEATURES = ""
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto.inc b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto.inc
index 637506a..9c1f61b 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto.inc
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto.inc
@@ -33,7 +33,8 @@
 # and it can be specific to the machine or shared
 # KMACHINE = "UNDEFINED"
 
-LINUX_KERNEL_TYPE ?= "standard"
+# The distro or local.conf should set this, but if nobody cares...
+LINUX_KERNEL_TYPE ??= "standard"
 
 # KMETA ?= ""
 KBRANCH ?= "master"
@@ -48,12 +49,11 @@
 KCONF_BSP_AUDIT_LEVEL ?= "0"
 KMETA_AUDIT ?= "yes"
 
-LINUX_VERSION_EXTENSION ?= "-yocto-${LINUX_KERNEL_TYPE}"
+LINUX_VERSION_EXTENSION ??= "-yocto-${LINUX_KERNEL_TYPE}"
 
 # Pick up shared functions
 inherit kernel
 inherit kernel-yocto
-require linux-dtb.inc
 
 B = "${WORKDIR}/linux-${PACKAGE_ARCH}-${LINUX_KERNEL_TYPE}-build"
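Besides dropping the now-redundant require of linux-dtb.inc (see the deprecation warning earlier in this change), this hunk switches two defaults from ?= to ??=. A ??= weak default is only applied at the very end of parsing if nothing else has set the variable, so a distro config or local.conf can still claim LINUX_KERNEL_TYPE or LINUX_VERSION_EXTENSION with an ordinary ?=. A self-contained illustration (values invented; the final value below would be "tiny"):

    LINUX_KERNEL_TYPE ??= "standard"      # weak default, deferred to end of parse
    LINUX_KERNEL_TYPE ?= "preempt-rt"     # fires, because nothing has set it yet
    LINUX_KERNEL_TYPE = "tiny"            # unconditional, wins over both
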
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.1.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.1.bb
deleted file mode 100644
index 316bc32..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.1.bb
+++ /dev/null
@@ -1,42 +0,0 @@
-KBRANCH ?= "standard/base"
-
-require recipes-kernel/linux/linux-yocto.inc
-
-# board specific branches
-KBRANCH_qemuarm  ?= "standard/arm-versatile-926ejs"
-KBRANCH_qemuarm64 ?= "standard/qemuarm64"
-KBRANCH_qemumips ?= "standard/mti-malta32"
-KBRANCH_qemuppc  ?= "standard/qemuppc"
-KBRANCH_qemux86  ?= "standard/base"
-KBRANCH_qemux86-64 ?= "standard/base"
-KBRANCH_qemumips64 ?= "standard/mti-malta64"
-
-SRCREV_machine_qemuarm ?= "642e9b95f97f8e00ef819bbfb2f281c1079bfbb1"
-SRCREV_machine_qemuarm64 ?= "782133d8166ac71ef1ffaba58b7cf81ec9e532a1"
-SRCREV_machine_qemumips ?= "54d4575d0df1805e127c5fa59201cd978849fb3f"
-SRCREV_machine_qemuppc ?= "58906f2737c644e67a7c9754b7e36630005a0011"
-SRCREV_machine_qemux86 ?= "782133d8166ac71ef1ffaba58b7cf81ec9e532a1"
-SRCREV_machine_qemux86-64 ?= "782133d8166ac71ef1ffaba58b7cf81ec9e532a1"
-SRCREV_machine_qemumips64 ?= "03f0c0293abfe5226c2fdd22328476854fa0127f"
-SRCREV_machine ?= "782133d8166ac71ef1ffaba58b7cf81ec9e532a1"
-SRCREV_meta ?= "9f9c9a66ef3452343586adf150137967e955d71a"
-
-SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.1.git;name=machine;branch=${KBRANCH}; \
-           git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.1;destsuffix=${KMETA}"
-
-LINUX_VERSION ?= "4.1.43"
-
-PV = "${LINUX_VERSION}+git${SRCPV}"
-
-KMETA = "kernel-meta"
-KCONF_BSP_AUDIT_LEVEL = "2"
-
-COMPATIBLE_MACHINE = "qemuarm|qemuarm64|qemux86|qemuppc|qemumips|qemumips64|qemux86-64"
-
-# Functionality flags
-KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc"
-KERNEL_FEATURES_append = " ${KERNEL_EXTRA_FEATURES}"
-KERNEL_FEATURES_append_qemuall=" cfg/virtio.scc"
-KERNEL_FEATURES_append_qemux86=" cfg/sound.scc cfg/paravirt_kvm.scc"
-KERNEL_FEATURES_append_qemux86-64=" cfg/sound.scc cfg/paravirt_kvm.scc"
-KERNEL_FEATURES_append = " ${@bb.utils.contains("TUNE_FEATURES", "mx32", " cfg/x32.scc", "" ,d)}"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.10.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.10.bb
index 7fcc753..1fe3cca 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.10.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.10.bb
@@ -19,7 +19,7 @@
 SRCREV_machine_qemux86-64 ?= "c1d8c4408b8aedd88eeb6ccc89ce834dd41b3f09"
 SRCREV_machine_qemumips64 ?= "8bb135e71037c46175bbcc7acf387309b2e17133"
 SRCREV_machine ?= "c1d8c4408b8aedd88eeb6ccc89ce834dd41b3f09"
-SRCREV_meta ?= "40ee48ac099c04f60d2c132031d9625a4e0c4c9e"
+SRCREV_meta ?= "1d9a8200184af22f8981fa24b0e82af49c7988dd"
 
 SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.10.git;name=machine;branch=${KBRANCH}; \
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.10;destsuffix=${KMETA}"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.12.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.12.bb
new file mode 100644
index 0000000..b10b82b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.12.bb
@@ -0,0 +1,44 @@
+KBRANCH ?= "standard/base"
+
+require recipes-kernel/linux/linux-yocto.inc
+
+# board specific branches
+KBRANCH_qemuarm  ?= "standard/arm-versatile-926ejs"
+KBRANCH_qemuarm64 ?= "standard/qemuarm64"
+KBRANCH_qemumips ?= "standard/mti-malta32"
+KBRANCH_qemuppc  ?= "standard/qemuppc"
+KBRANCH_qemux86  ?= "standard/base"
+KBRANCH_qemux86-64 ?= "standard/base"
+KBRANCH_qemumips64 ?= "standard/mti-malta64"
+
+SRCREV_machine_qemuarm ?= "6e5a99db3a495e023041a30d43262047795e645a"
+SRCREV_machine_qemuarm64 ?= "40146055677a69730b2c36da1c8c1b4e9bae7bb0"
+SRCREV_machine_qemumips ?= "43dc47f90007d54c7086cc03b28e946e30135f1c"
+SRCREV_machine_qemuppc ?= "40146055677a69730b2c36da1c8c1b4e9bae7bb0"
+SRCREV_machine_qemux86 ?= "40146055677a69730b2c36da1c8c1b4e9bae7bb0"
+SRCREV_machine_qemux86-64 ?= "40146055677a69730b2c36da1c8c1b4e9bae7bb0"
+SRCREV_machine_qemumips64 ?= "69f0c96d8e47b0dccfb374809d729f0042c77868"
+SRCREV_machine ?= "40146055677a69730b2c36da1c8c1b4e9bae7bb0"
+SRCREV_meta ?= "19d815d5a34bfaad95d87cc097cef18b594daac8"
+
+SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.12.git;name=machine;branch=${KBRANCH}; \
+           git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.12;destsuffix=${KMETA}"
+
+LINUX_VERSION ?= "4.12.20"
+
+PV = "${LINUX_VERSION}+git${SRCPV}"
+
+KMETA = "kernel-meta"
+KCONF_BSP_AUDIT_LEVEL = "2"
+
+KERNEL_DEVICETREE_qemuarm = "versatile-pb.dtb"
+
+COMPATIBLE_MACHINE = "qemuarm|qemuarm64|qemux86|qemuppc|qemumips|qemumips64|qemux86-64"
+
+# Functionality flags
+KERNEL_EXTRA_FEATURES ?= "features/netfilter/netfilter.scc"
+KERNEL_FEATURES_append = " ${KERNEL_EXTRA_FEATURES}"
+KERNEL_FEATURES_append_qemuall=" cfg/virtio.scc"
+KERNEL_FEATURES_append_qemux86=" cfg/sound.scc cfg/paravirt_kvm.scc"
+KERNEL_FEATURES_append_qemux86-64=" cfg/sound.scc cfg/paravirt_kvm.scc"
+KERNEL_FEATURES_append = " ${@bb.utils.contains("TUNE_FEATURES", "mx32", " cfg/x32.scc", "" ,d)}"
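The final append in the new recipe uses bb.utils.contains(variable, value, if_present, if_absent, datastore) so the x32 config fragment is only pulled in when the tune requests mx32. The same inline-Python idiom works for any space-separated list; a sketch, with the distro feature and fragment name chosen purely for illustration:

    # Pull in an extra kernel-cache fragment only when the distro enables bluetooth.
    KERNEL_FEATURES_append = " ${@bb.utils.contains('DISTRO_FEATURES', 'bluetooth', ' features/bluetooth/bluetooth.scc', '', d)}"
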
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.4.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.4.bb
index d300e69..97c16d5 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.4.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.4.bb
@@ -11,20 +11,20 @@
 KBRANCH_qemux86-64 ?= "standard/base"
 KBRANCH_qemumips64 ?= "standard/mti-malta64"
 
-SRCREV_machine_qemuarm ?= "840e2c02dde98ee9c584cacdd5bb0812e9dcd016"
-SRCREV_machine_qemuarm64 ?= "b71c7b786aed26c0a1e4eca66f1d874ec017d699"
-SRCREV_machine_qemumips ?= "27a547fe77b05ae63c3b973a029c3933ef62c164"
-SRCREV_machine_qemuppc ?= "b71c7b786aed26c0a1e4eca66f1d874ec017d699"
-SRCREV_machine_qemux86 ?= "b71c7b786aed26c0a1e4eca66f1d874ec017d699"
-SRCREV_machine_qemux86-64 ?= "b71c7b786aed26c0a1e4eca66f1d874ec017d699"
-SRCREV_machine_qemumips64 ?= "7e333c223b568704cc3303b2e922ff85a2a8f7ef"
-SRCREV_machine ?= "b71c7b786aed26c0a1e4eca66f1d874ec017d699"
-SRCREV_meta ?= "804d2b3164ec25ed519fd695de9aa0908460c92e"
+SRCREV_machine_qemuarm ?= "400c0f39b954cd8fffdf53e6ec97852b73fea7af"
+SRCREV_machine_qemuarm64 ?= "4d31a8b7661509ff1044abcf9050750cc2478e20"
+SRCREV_machine_qemumips ?= "fb03a9472367b6c177729ac631326aafd5d17c92"
+SRCREV_machine_qemuppc ?= "4d31a8b7661509ff1044abcf9050750cc2478e20"
+SRCREV_machine_qemux86 ?= "4d31a8b7661509ff1044abcf9050750cc2478e20"
+SRCREV_machine_qemux86-64 ?= "4d31a8b7661509ff1044abcf9050750cc2478e20"
+SRCREV_machine_qemumips64 ?= "26b8ba186a6d39728fc1510bd2264110c75842f5"
+SRCREV_machine ?= "4d31a8b7661509ff1044abcf9050750cc2478e20"
+SRCREV_meta ?= "b149d14ccae8349ab33e101f6af233a12f4b17ba"
 
 SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.4.git;name=machine;branch=${KBRANCH}; \
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.4;destsuffix=${KMETA}"
 
-LINUX_VERSION ?= "4.4.87"
+LINUX_VERSION ?= "4.4.113"
 
 PV = "${LINUX_VERSION}+git${SRCPV}"
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.9.bb b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.9.bb
index dbe40a3..a5a165f 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.9.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/linux/linux-yocto_4.9.bb
@@ -11,20 +11,20 @@
 KBRANCH_qemux86-64 ?= "standard/base"
 KBRANCH_qemumips64 ?= "standard/mti-malta64"
 
-SRCREV_machine_qemuarm ?= "8caa35a74753d45178720933f03d8d5150a8ff17"
-SRCREV_machine_qemuarm64 ?= "480ee599fb8df712c10dcf4b7aa6398b79f7d404"
-SRCREV_machine_qemumips ?= "fc2a3b9f932779fdf053675a5a73e8f9917507a5"
-SRCREV_machine_qemuppc ?= "480ee599fb8df712c10dcf4b7aa6398b79f7d404"
-SRCREV_machine_qemux86 ?= "480ee599fb8df712c10dcf4b7aa6398b79f7d404"
-SRCREV_machine_qemux86-64 ?= "480ee599fb8df712c10dcf4b7aa6398b79f7d404"
-SRCREV_machine_qemumips64 ?= "aee63978005c04ea853099764acaa08130e65554"
-SRCREV_machine ?= "480ee599fb8df712c10dcf4b7aa6398b79f7d404"
-SRCREV_meta ?= "6acae6f7200af17b3c2be5ecab2cffdc59a02b35"
+SRCREV_machine_qemuarm ?= "23369eb7e07c839fa73a8c1e85aba37a07bf14c1"
+SRCREV_machine_qemuarm64 ?= "eb3b2079ea43b451e06be443f8bc146736f9c4bc"
+SRCREV_machine_qemumips ?= "cab9e059447878f5383f91a05db12813f69cbfc1"
+SRCREV_machine_qemuppc ?= "eb3b2079ea43b451e06be443f8bc146736f9c4bc"
+SRCREV_machine_qemux86 ?= "eb3b2079ea43b451e06be443f8bc146736f9c4bc"
+SRCREV_machine_qemux86-64 ?= "eb3b2079ea43b451e06be443f8bc146736f9c4bc"
+SRCREV_machine_qemumips64 ?= "c2e5ef83b612d50f50fafeed9930dbea302fbe8c"
+SRCREV_machine ?= "eb3b2079ea43b451e06be443f8bc146736f9c4bc"
+SRCREV_meta ?= "0774eacea2a7d3a150594533b8c80d0c0bfdfded"
 
 SRC_URI = "git://git.yoctoproject.org/linux-yocto-4.9.git;name=machine;branch=${KBRANCH}; \
            git://git.yoctoproject.org/yocto-kernel-cache;type=kmeta;name=meta;branch=yocto-4.9;destsuffix=${KMETA}"
 
-LINUX_VERSION ?= "4.9.49"
+LINUX_VERSION ?= "4.9.82"
 
 PV = "${LINUX_VERSION}+git${SRCPV}"
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/lttng/babeltrace_1.5.2.bb b/import-layers/yocto-poky/meta/recipes-kernel/lttng/babeltrace_1.5.2.bb
deleted file mode 100644
index c7ea5c1..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/lttng/babeltrace_1.5.2.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-SUMMARY = "Babeltrace - Trace Format Babel Tower"
-DESCRIPTION = "Babeltrace provides trace read and write libraries in host side, as well as a trace converter, which used to convert LTTng 2.0 traces into human-readable log."
-HOMEPAGE = "http://www.efficios.com/babeltrace/"
-BUGTRACKER = "https://bugs.lttng.org/projects/babeltrace"
-
-LICENSE = "MIT & GPLv2"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=76ba15dd76a248e1dd526bca0e2125fa"
-
-DEPENDS = "glib-2.0 util-linux popt bison-native flex-native"
-
-inherit autotools pkgconfig
-
-SRC_URI = "http://www.efficios.com/files/babeltrace/babeltrace-${PV}.tar.bz2 \
-"
-
-EXTRA_OECONF = "--disable-debug-info"
-
-SRC_URI[md5sum] = "1176e7f69e128112d5f29fefec39c6ce"
-SRC_URI[sha256sum] = "696ee46e5750ab57a258663e73915d2901b7cd4fc53b06eb3f7a0d7b1012fa56"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/lttng/babeltrace_1.5.3.bb b/import-layers/yocto-poky/meta/recipes-kernel/lttng/babeltrace_1.5.3.bb
new file mode 100644
index 0000000..4d81da0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/lttng/babeltrace_1.5.3.bb
@@ -0,0 +1,19 @@
+SUMMARY = "Babeltrace - Trace Format Babel Tower"
+DESCRIPTION = "Babeltrace provides trace read and write libraries in host side, as well as a trace converter, which used to convert LTTng 2.0 traces into human-readable log."
+HOMEPAGE = "http://www.efficios.com/babeltrace/"
+BUGTRACKER = "https://bugs.lttng.org/projects/babeltrace"
+
+LICENSE = "MIT & GPLv2"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=76ba15dd76a248e1dd526bca0e2125fa"
+
+DEPENDS = "glib-2.0 util-linux popt bison-native flex-native"
+
+inherit autotools pkgconfig
+
+SRC_URI = "http://www.efficios.com/files/babeltrace/babeltrace-${PV}.tar.bz2 \
+"
+
+EXTRA_OECONF = "--disable-debug-info"
+
+SRC_URI[md5sum] = "0cec2745ac316649791c43f416d71ea1"
+SRC_URI[sha256sum] = "2249fee5beba657731f5d6a84c5296c6517f544bfbe7571bd1fd7af23726137c"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules/BUILD_RUNTIME_BUG_ON-vs-gcc7.patch b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules/BUILD_RUNTIME_BUG_ON-vs-gcc7.patch
new file mode 100644
index 0000000..7606360
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules/BUILD_RUNTIME_BUG_ON-vs-gcc7.patch
@@ -0,0 +1,43 @@
+From ab07574ef90fa510f293c37897d577066a88fe0d Mon Sep 17 00:00:00 2001
+From: Nathan Lynch <nathan_lynch@mentor.com>
+Date: Tue, 25 Apr 2017 16:26:57 -0500
+Subject: [PATCH] BUILD_RUNTIME_BUG_ON vs gcc7
+
+Avoid using LTTng's BUILD_RUNTIME_BUG_ON macro, as it appears to run
+into a similar problem as Linux experienced with ilog2.
+
+See:
+https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=474c90156c8dcc2fa815e6716cc9394d7930cb9c
+https://gcc.gnu.org/bugzilla/show_bug.cgi?id=72785
+
+Upstream-Status: Pending
+Signed-off-by: Nathan Lynch <nathan_lynch@mentor.com>
+---
+ lib/align.h | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/lib/align.h b/lib/align.h
+index 5b91ae87410b..5e134cd485fe 100644
+--- a/lib/align.h
++++ b/lib/align.h
+@@ -48,7 +48,7 @@
+  */
+ #define offset_align(align_drift, alignment)				       \
+ 	({								       \
+-		BUILD_RUNTIME_BUG_ON((alignment) == 0			       \
++		BUG_ON((alignment) == 0					       \
+ 				   || ((alignment) & ((alignment) - 1)));      \
+ 		(((alignment) - (align_drift)) & ((alignment) - 1));	       \
+ 	})
+@@ -63,7 +63,7 @@
+  */
+ #define offset_align_floor(align_drift, alignment)			       \
+ 	({								       \
+-		BUILD_RUNTIME_BUG_ON((alignment) == 0			       \
++		BUG_ON((alignment) == 0					       \
+ 				   || ((alignment) & ((alignment) - 1)));      \
+ 		(((align_drift) - (alignment)) & ((alignment) - 1));	       \
+ 	})
+-- 
+2.9.3
+
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules/Makefile-Do-not-fail-if-CONFIG_TRACEPOINTS-is-not-en.patch b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules/Makefile-Do-not-fail-if-CONFIG_TRACEPOINTS-is-not-en.patch
index 6a91f32..e411242 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules/Makefile-Do-not-fail-if-CONFIG_TRACEPOINTS-is-not-en.patch
+++ b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules/Makefile-Do-not-fail-if-CONFIG_TRACEPOINTS-is-not-en.patch
@@ -10,7 +10,7 @@
 This change makes the build do not fail when CONFIG_TRACEPOINTS is not
 available, allowing it to be kept being pulled by default.
 
-Upstream-Status: Inapropriate [embedded specific]
+Upstream-Status: Inappropriate [embedded specific]
 
 Signed-off-by: Otavio Salvador <otavio@ossystems.com.br>
 ---
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules_2.9.1.bb b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules_2.9.1.bb
deleted file mode 100644
index abff79d..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules_2.9.1.bb
+++ /dev/null
@@ -1,32 +0,0 @@
-SECTION = "devel"
-SUMMARY = "Linux Trace Toolkit KERNEL MODULE"
-DESCRIPTION = "The lttng-modules 2.0 package contains the kernel tracer modules"
-LICENSE = "LGPLv2.1 & GPLv2 & MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=362844633a08753bd96ab322a6c7f9f6 \
-                    file://gpl-2.0.txt;md5=751419260aa954499f7abaabaa882bbe \
-                    file://lgpl-2.1.txt;md5=243b725d71bb5df4a1e5920b344b86ad"
-
-inherit module
-
-COMPATIBLE_HOST = '(x86_64|i.86|powerpc|aarch64|mips|nios2|arm).*-linux'
-
-SRC_URI = "https://lttng.org/files/${BPN}/${BPN}-${PV}.tar.bz2 \
-           file://Makefile-Do-not-fail-if-CONFIG_TRACEPOINTS-is-not-en.patch"
-
-SRC_URI[md5sum] = "5a16bca52233cc2bdff572b1120a88f6"
-SRC_URI[sha256sum] = "62078fe3254ca65969db4b72e59042cd797dbf2740848f8f82384877ae460d54"
-
-export INSTALL_MOD_DIR="kernel/lttng-modules"
-
-EXTRA_OEMAKE += "KERNELDIR='${STAGING_KERNEL_DIR}'"
-
-do_install_append() {
-	# Delete empty directories to avoid QA failures if no modules were built
-	find ${D}/${nonarch_base_libdir} -depth -type d -empty -exec rmdir {} \;
-}
-
-python do_package_prepend() {
-    if not os.path.exists(os.path.join(d.getVar('D'), d.getVar('nonarch_base_libdir')[1:], 'modules')):
-        bb.warn("%s: no modules were created; this may be due to CONFIG_TRACEPOINTS not being enabled in your kernel." % d.getVar('PN'))
-}
-
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules_2.9.5.bb b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules_2.9.5.bb
new file mode 100644
index 0000000..61d9744
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-modules_2.9.5.bb
@@ -0,0 +1,34 @@
+SECTION = "devel"
+SUMMARY = "Linux Trace Toolkit KERNEL MODULE"
+DESCRIPTION = "The lttng-modules 2.0 package contains the kernel tracer modules"
+LICENSE = "LGPLv2.1 & GPLv2 & MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=362844633a08753bd96ab322a6c7f9f6 \
+                    file://gpl-2.0.txt;md5=751419260aa954499f7abaabaa882bbe \
+                    file://lgpl-2.1.txt;md5=243b725d71bb5df4a1e5920b344b86ad"
+
+inherit module
+
+COMPATIBLE_HOST = '(x86_64|i.86|powerpc|aarch64|mips|nios2|arm).*-linux'
+
+SRC_URI = "https://lttng.org/files/${BPN}/${BPN}-${PV}.tar.bz2 \
+           file://Makefile-Do-not-fail-if-CONFIG_TRACEPOINTS-is-not-en.patch \
+           file://BUILD_RUNTIME_BUG_ON-vs-gcc7.patch \
+"
+
+SRC_URI[md5sum] = "af97aaaf86133fd783bb9937dc3b4d59"
+SRC_URI[sha256sum] = "3f4e82ceb1c3c1875ec1fe89fba08294ed7feec2161339a5a71a066b27fc3e22"
+
+export INSTALL_MOD_DIR="kernel/lttng-modules"
+
+EXTRA_OEMAKE += "KERNELDIR='${STAGING_KERNEL_DIR}'"
+
+do_install_append() {
+	# Delete empty directories to avoid QA failures if no modules were built
+	find ${D}/${nonarch_base_libdir} -depth -type d -empty -exec rmdir {} \;
+}
+
+python do_package_prepend() {
+    if not os.path.exists(os.path.join(d.getVar('D'), d.getVar('nonarch_base_libdir')[1:], 'modules')):
+        bb.warn("%s: no modules were created; this may be due to CONFIG_TRACEPOINTS not being enabled in your kernel." % d.getVar('PN'))
+}
+
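The python do_package_prepend above only warns when no modules were produced; the underlying requirement is a kernel built with CONFIG_TRACEPOINTS, which is not user-selectable directly but is pulled in by the tracing infrastructure. A hedged sketch of a linux-yocto bbappend shipping a config fragment (file and option names are examples; the exact set needed depends on the kernel version):

    # linux-yocto_%.bbappend
    FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
    SRC_URI += "file://tracing.cfg"

    # files/tracing.cfg -- .cfg fragments are merged automatically by kernel-yocto:
    #   CONFIG_FTRACE=y
    #   CONFIG_FUNCTION_TRACER=y
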
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-tools_2.9.4.bb b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-tools_2.9.4.bb
deleted file mode 100644
index fbf3ae9..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-tools_2.9.4.bb
+++ /dev/null
@@ -1,122 +0,0 @@
-SECTION = "devel"
-SUMMARY = "Linux Trace Toolkit Control"
-DESCRIPTION = "The Linux trace toolkit is a suite of tools designed \
-to extract program execution details from the Linux operating system \
-and interpret them."
-
-LICENSE = "GPLv2 & LGPLv2.1"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=01d7fc4496aacf37d90df90b90b0cac1 \
-                    file://gpl-2.0.txt;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://lgpl-2.1.txt;md5=0f0d71500e6a57fd24d825f33242b9ca"
-
-DEPENDS = "liburcu popt libxml2 util-linux"
-RDEPENDS_${PN} = "libgcc"
-RDEPENDS_${PN}-ptest += "make perl bash gawk ${PN} babeltrace procps"
-# babelstats.pl wants getopt-long
-RDEPENDS_${PN}-ptest += "perl-module-getopt-long"
-
-PYTHON_OPTION = "am_cv_python_pyexecdir='${PYTHON_SITEPACKAGES_DIR}' \
-                 am_cv_python_pythondir='${PYTHON_SITEPACKAGES_DIR}' \
-                 PYTHON_INCLUDE='-I${STAGING_INCDIR}/python${PYTHON_BASEVERSION}${PYTHON_ABI}' \
-"
-PACKAGECONFIG ??= "lttng-ust"
-PACKAGECONFIG[python] = "--enable-python-bindings ${PYTHON_OPTION},,python3 swig-native"
-PACKAGECONFIG[lttng-ust] = "--with-lttng-ust, --without-lttng-ust, lttng-ust"
-PACKAGECONFIG[kmod] = "--enable-kmod, --disable-kmod, kmod"
-PACKAGECONFIG[manpages] = "--enable-man-pages, --disable-man-pages, asciidoc-native xmlto-native libxslt-native"
-PACKAGECONFIG_remove_libc-musl = "lttng-ust"
-
-SRC_URI = "https://lttng.org/files/lttng-tools/lttng-tools-${PV}.tar.bz2 \
-           file://x32.patch \
-           file://run-ptest \
-           "
-
-SRC_URI[md5sum] = "10f47abfa214b167580358388428770d"
-SRC_URI[sha256sum] = "3c72456bceef961fad1d0861f07478db16b7659efa69099ba36645da89ca2cc5"
-
-inherit autotools ptest pkgconfig useradd python3-dir manpages
-
-USERADD_PACKAGES = "${PN}"
-GROUPADD_PARAM_${PN} = "tracing"
-
-FILES_${PN} += "${libdir}/lttng/libexec/* ${datadir}/xml/lttng \
-                ${PYTHON_SITEPACKAGES_DIR}/*"
-FILES_${PN}-staticdev += "${PYTHON_SITEPACKAGES_DIR}/*.a"
-FILES_${PN}-dev += "${PYTHON_SITEPACKAGES_DIR}/*.la"
-
-# Since files are installed into ${libdir}/lttng/libexec we match 
-# the libexec insane test so skip it.
-# Python module needs to keep _lttng.so
-INSANE_SKIP_${PN} = "libexec dev-so"
-INSANE_SKIP_${PN}-dbg = "libexec"
-
-do_install_ptest () {
-    for f in Makefile tests/Makefile tests/utils/utils.sh ; do
-        install -D "${B}/$f" "${D}${PTEST_PATH}/$f"
-    done
-
-    for f in config/tap-driver.sh config/test-driver ; do
-        install -D "${S}/$f" "${D}${PTEST_PATH}/$f"
-    done
-
-    # Prevent 'make check' from recursing into non-test subdirectories.
-    sed -i -e 's!^SUBDIRS = .*!SUBDIRS = tests!' "${D}${PTEST_PATH}/Makefile"
-
-    # We don't need these
-    sed -i -e '/dist_noinst_SCRIPTS = /,/^$/d' "${D}${PTEST_PATH}/tests/Makefile"
-
-    # We shouldn't need to build anything in tests/utils
-    sed -i -e 's!am__append_1 = . utils!am__append_1 = . !' \
-        "${D}${PTEST_PATH}/tests/Makefile"
-
-    # Copy the tests directory tree and the executables and
-    # Makefiles found within.
-    for d in $(find "${B}/tests" -type d -not -name .libs -printf '%P ') ; do
-        install -d "${D}${PTEST_PATH}/tests/$d"
-        find "${B}/tests/$d" -maxdepth 1 -executable -type f \
-            -exec install -t "${D}${PTEST_PATH}/tests/$d" {} +
-        test -r "${B}/tests/$d/Makefile" && \
-            install -t "${D}${PTEST_PATH}/tests/$d" "${B}/tests/$d/Makefile"
-    done
-
-    # We shouldn't need to build anything in tests/regression/tools
-    sed -i -e 's!^SUBDIRS = tools !SUBDIRS = !' \
-        "${D}${PTEST_PATH}/tests/regression/Makefile"
-
-    # Prevent attempts to update Makefiles during test runs, and
-    # silence "Making check in $SUBDIR" messages.
-    find "${D}${PTEST_PATH}" -name Makefile -type f -exec \
-        sed -i -e '/Makefile:/,/^$/d' -e '/%: %.in/,/^$/d' \
-        -e '/echo "Making $$target in $$subdir"; \\/d' \
-        -e 's/^srcdir = \(.*\)/srcdir = ./' \
-        -e 's/^builddir = \(.*\)/builddir = ./' \
-        -e 's/^all-am:.*/all-am:/' \
-        {} +
-
-    # These objects trigger [rpaths] QA checks; the test harness
-    # skips the associated tests if they're missing, so delete
-    # them.
-    objs=""
-    objs="$objs regression/ust/ust-dl/libbar.so"
-    objs="$objs regression/ust/ust-dl/libfoo.so"
-    for obj in $objs ; do
-        rm -f "${D}${PTEST_PATH}/tests/${obj}"
-    done
-
-    find "${D}${PTEST_PATH}" -name Makefile -type f -exec \
-        touch -r "${B}/Makefile" {} +
-
-    # Substitute links to installed binaries.
-    for prog in lttng lttng-relayd lttng-sessiond lttng-consumerd ; do
-        exedir="${D}${PTEST_PATH}/src/bin/${prog}"
-        install -d "$exedir"
-        case "$prog" in
-            lttng-consumerd)
-                ln -s "${libdir}/lttng/libexec/$prog" "$exedir"
-                ;;
-            *)
-                ln -s "${bindir}/$prog" "$exedir"
-                ;;
-        esac
-    done
-}
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-tools_2.9.5.bb b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-tools_2.9.5.bb
new file mode 100644
index 0000000..6e92c22
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-tools_2.9.5.bb
@@ -0,0 +1,122 @@
+SECTION = "devel"
+SUMMARY = "Linux Trace Toolkit Control"
+DESCRIPTION = "The Linux trace toolkit is a suite of tools designed \
+to extract program execution details from the Linux operating system \
+and interpret them."
+
+LICENSE = "GPLv2 & LGPLv2.1"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=01d7fc4496aacf37d90df90b90b0cac1 \
+                    file://gpl-2.0.txt;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://lgpl-2.1.txt;md5=0f0d71500e6a57fd24d825f33242b9ca"
+
+DEPENDS = "liburcu popt libxml2 util-linux"
+RDEPENDS_${PN} = "libgcc"
+RDEPENDS_${PN}-ptest += "make perl bash gawk ${PN} babeltrace procps"
+# babelstats.pl wants getopt-long
+RDEPENDS_${PN}-ptest += "perl-module-getopt-long"
+
+PYTHON_OPTION = "am_cv_python_pyexecdir='${PYTHON_SITEPACKAGES_DIR}' \
+                 am_cv_python_pythondir='${PYTHON_SITEPACKAGES_DIR}' \
+                 PYTHON_INCLUDE='-I${STAGING_INCDIR}/python${PYTHON_BASEVERSION}${PYTHON_ABI}' \
+"
+PACKAGECONFIG ??= "lttng-ust"
+PACKAGECONFIG[python] = "--enable-python-bindings ${PYTHON_OPTION},,python3 swig-native"
+PACKAGECONFIG[lttng-ust] = "--with-lttng-ust, --without-lttng-ust, lttng-ust"
+PACKAGECONFIG[kmod] = "--enable-kmod, --disable-kmod, kmod"
+PACKAGECONFIG[manpages] = "--enable-man-pages, --disable-man-pages, asciidoc-native xmlto-native libxslt-native"
+PACKAGECONFIG_remove_libc-musl = "lttng-ust"
+
+SRC_URI = "https://lttng.org/files/lttng-tools/lttng-tools-${PV}.tar.bz2 \
+           file://x32.patch \
+           file://run-ptest \
+           "
+
+SRC_URI[md5sum] = "051224eb991aee07f8721ff1877d0b96"
+SRC_URI[sha256sum] = "77839eb6fc6c652125f08acfd9369701c2516eb05cc2084160e7efc7a3fb731c"
+
+inherit autotools ptest pkgconfig useradd python3-dir manpages
+
+USERADD_PACKAGES = "${PN}"
+GROUPADD_PARAM_${PN} = "tracing"
+
+FILES_${PN} += "${libdir}/lttng/libexec/* ${datadir}/xml/lttng \
+                ${PYTHON_SITEPACKAGES_DIR}/*"
+FILES_${PN}-staticdev += "${PYTHON_SITEPACKAGES_DIR}/*.a"
+FILES_${PN}-dev += "${PYTHON_SITEPACKAGES_DIR}/*.la"
+
+# Since files are installed into ${libdir}/lttng/libexec we match 
+# the libexec insane test so skip it.
+# Python module needs to keep _lttng.so
+INSANE_SKIP_${PN} = "libexec dev-so"
+INSANE_SKIP_${PN}-dbg = "libexec"
+
+do_install_ptest () {
+    for f in Makefile tests/Makefile tests/utils/utils.sh ; do
+        install -D "${B}/$f" "${D}${PTEST_PATH}/$f"
+    done
+
+    for f in config/tap-driver.sh config/test-driver ; do
+        install -D "${S}/$f" "${D}${PTEST_PATH}/$f"
+    done
+
+    # Prevent 'make check' from recursing into non-test subdirectories.
+    sed -i -e 's!^SUBDIRS = .*!SUBDIRS = tests!' "${D}${PTEST_PATH}/Makefile"
+
+    # We don't need these
+    sed -i -e '/dist_noinst_SCRIPTS = /,/^$/d' "${D}${PTEST_PATH}/tests/Makefile"
+
+    # We shouldn't need to build anything in tests/utils
+    sed -i -e 's!am__append_1 = . utils!am__append_1 = . !' \
+        "${D}${PTEST_PATH}/tests/Makefile"
+
+    # Copy the tests directory tree and the executables and
+    # Makefiles found within.
+    for d in $(find "${B}/tests" -type d -not -name .libs -printf '%P ') ; do
+        install -d "${D}${PTEST_PATH}/tests/$d"
+        find "${B}/tests/$d" -maxdepth 1 -executable -type f \
+            -exec install -t "${D}${PTEST_PATH}/tests/$d" {} +
+        test -r "${B}/tests/$d/Makefile" && \
+            install -t "${D}${PTEST_PATH}/tests/$d" "${B}/tests/$d/Makefile"
+    done
+
+    # We shouldn't need to build anything in tests/regression/tools
+    sed -i -e 's!^SUBDIRS = tools !SUBDIRS = !' \
+        "${D}${PTEST_PATH}/tests/regression/Makefile"
+
+    # Prevent attempts to update Makefiles during test runs, and
+    # silence "Making check in $SUBDIR" messages.
+    find "${D}${PTEST_PATH}" -name Makefile -type f -exec \
+        sed -i -e '/Makefile:/,/^$/d' -e '/%: %.in/,/^$/d' \
+        -e '/echo "Making $$target in $$subdir"; \\/d' \
+        -e 's/^srcdir = \(.*\)/srcdir = ./' \
+        -e 's/^builddir = \(.*\)/builddir = ./' \
+        -e 's/^all-am:.*/all-am:/' \
+        {} +
+
+    # These objects trigger [rpaths] QA checks; the test harness
+    # skips the associated tests if they're missing, so delete
+    # them.
+    objs=""
+    objs="$objs regression/ust/ust-dl/libbar.so"
+    objs="$objs regression/ust/ust-dl/libfoo.so"
+    for obj in $objs ; do
+        rm -f "${D}${PTEST_PATH}/tests/${obj}"
+    done
+
+    find "${D}${PTEST_PATH}" -name Makefile -type f -exec \
+        touch -r "${B}/Makefile" {} +
+
+    # Substitute links to installed binaries.
+    for prog in lttng lttng-relayd lttng-sessiond lttng-consumerd ; do
+        exedir="${D}${PTEST_PATH}/src/bin/${prog}"
+        install -d "$exedir"
+        case "$prog" in
+            lttng-consumerd)
+                ln -s "${libdir}/lttng/libexec/$prog" "$exedir"
+                ;;
+            *)
+                ln -s "${bindir}/$prog" "$exedir"
+                ;;
+        esac
+    done
+}
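Each PACKAGECONFIG[flag] line in this recipe follows the usual four-field form: configure argument when enabled, configure argument when disabled, build-time dependencies, and optional runtime dependencies. Flags can be toggled per build without touching the recipe; a local.conf sketch enabling the man pages and Python bindings declared above:

    PACKAGECONFIG_append_pn-lttng-tools = " manpages python"
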
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-ust_2.9.0.bb b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-ust_2.9.0.bb
deleted file mode 100644
index 288b5ee..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-ust_2.9.0.bb
+++ /dev/null
@@ -1,37 +0,0 @@
-SUMMARY = "Linux Trace Toolkit Userspace Tracer 2.x"
-DESCRIPTION = "The LTTng UST 2.x package contains the userspace tracer library to trace userspace codes."
-HOMEPAGE = "http://lttng.org/ust"
-BUGTRACKER = "https://bugs.lttng.org/projects/lttng-ust"
-
-LICENSE = "LGPLv2.1+ & MIT & GPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=c963eb366b781252b0bf0fdf1624d9e9 \
-                    file://snprintf/snprintf.c;endline=32;md5=d3d544959d8a3782b2e07451be0a903c \
-                    file://snprintf/various.h;endline=31;md5=89f2509b6b4682c4fc95255eec4abe44"
-
-inherit autotools lib_package manpages
-
-DEPENDS = "liburcu util-linux"
-RDEPENDS_${PN}-bin = "python3-core"
-
-# For backwards compatibility after rename
-RPROVIDES_${PN} = "lttng2-ust"
-RREPLACES_${PN} = "lttng2-ust"
-RCONFLICTS_${PN} = "lttng2-ust"
-
-PE = "2"
-
-SRC_URI = "https://lttng.org/files/lttng-ust/lttng-ust-${PV}.tar.bz2 \
-           file://lttng-ust-doc-examples-disable.patch \
-          "
-SRC_URI[md5sum] = "77f3378ba37a36801420bce87b702e9c"
-SRC_URI[sha256sum] = "4d541a863f42dfc685ca05024027a442c70d03594c154a43e62bc109b1ea5daf"
-
-CVE_PRODUCT = "ust"
-
-PACKAGECONFIG[manpages] = "--enable-man-pages, --disable-man-pages, asciidoc-native xmlto-native libxslt-native"
-
-do_install_append() {
-        # Patch python tools to use Python 3; they should be source compatible, but
-        # still refer to Python 2 in the shebang
-        sed -i -e '1s,#!.*python.*,#!${bindir}/python3,' ${D}${bindir}/lttng-gen-tp
-}
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-ust_2.9.1.bb b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-ust_2.9.1.bb
new file mode 100644
index 0000000..03da4ca
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/lttng/lttng-ust_2.9.1.bb
@@ -0,0 +1,37 @@
+SUMMARY = "Linux Trace Toolkit Userspace Tracer 2.x"
+DESCRIPTION = "The LTTng UST 2.x package contains the userspace tracer library to trace userspace codes."
+HOMEPAGE = "http://lttng.org/ust"
+BUGTRACKER = "https://bugs.lttng.org/projects/lttng-ust"
+
+LICENSE = "LGPLv2.1+ & MIT & GPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=c963eb366b781252b0bf0fdf1624d9e9 \
+                    file://snprintf/snprintf.c;endline=32;md5=d3d544959d8a3782b2e07451be0a903c \
+                    file://snprintf/various.h;endline=31;md5=89f2509b6b4682c4fc95255eec4abe44"
+
+inherit autotools lib_package manpages
+
+DEPENDS = "liburcu util-linux"
+RDEPENDS_${PN}-bin = "python3-core"
+
+# For backwards compatibility after rename
+RPROVIDES_${PN} = "lttng2-ust"
+RREPLACES_${PN} = "lttng2-ust"
+RCONFLICTS_${PN} = "lttng2-ust"
+
+PE = "2"
+
+SRC_URI = "https://lttng.org/files/lttng-ust/lttng-ust-${PV}.tar.bz2 \
+           file://lttng-ust-doc-examples-disable.patch \
+          "
+SRC_URI[md5sum] = "5a5636fc3d9aa370f65b25a802a79e6e"
+SRC_URI[sha256sum] = "b891d267cdbbbd11cf34751f66c21c4a7fdc0eec3c1b53be2c40dca073b7daa4"
+
+CVE_PRODUCT = "ust"
+
+PACKAGECONFIG[manpages] = "--enable-man-pages, --disable-man-pages, asciidoc-native xmlto-native libxslt-native"
+
+do_install_append() {
+        # Patch python tools to use Python 3; they should be source compatible, but
+        # still refer to Python 2 in the shebang
+        sed -i -e '1s,#!.*python.*,#!${bindir}/python3,' ${D}${bindir}/lttng-gen-tp
+}
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/oprofile/oprofile.inc b/import-layers/yocto-poky/meta/recipes-kernel/oprofile/oprofile.inc
index 96ef43d..4b01654 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/oprofile/oprofile.inc
+++ b/import-layers/yocto-poky/meta/recipes-kernel/oprofile/oprofile.inc
@@ -27,6 +27,8 @@
            file://0001-Add-rmb-definition-for-NIOS2-architecture.patch \
            file://0001-Fix-FTBFS-problem-with-GCC-6.patch \
 "
+UPSTREAM_CHECK_REGEX = "oprofile-(?P<pver>\d+(\.\d+)+)/"
+UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/oprofile/files/oprofile/"
 
 SRC_URI_append_libc-musl = " file://musl.patch"
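UPSTREAM_CHECK_URI and UPSTREAM_CHECK_REGEX feed the automated upstream-version comparison; the named group (?P<pver>...) is what gets captured as the candidate new version from the SourceForge file listing. A sketch of running that check in this release (class and task names are assumptions; verify against meta/classes/ in your checkout):

    # local.conf
    INHERIT += "distrodata"
    # then, per recipe:  bitbake -c checkpkg oprofile
    # results are written to a checkpkg CSV under the build log directory
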
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/perf/perf.bb b/import-layers/yocto-poky/meta/recipes-kernel/perf/perf.bb
index 1bad6f4..b79b973 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/perf/perf.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/perf/perf.bb
@@ -26,7 +26,7 @@
     virtual/${MLPREFIX}libc \
     ${MLPREFIX}elfutils \
     ${MLPREFIX}binutils \
-    bison-native flex-native xz \
+    bison flex xz \
     xmlto-native \
     asciidoc-native \
 "
@@ -35,10 +35,9 @@
 
 PROVIDES = "virtual/perf"
 
-inherit linux-kernel-base kernel-arch
+inherit linux-kernel-base kernel-arch pythonnative
 
 # needed for building the tools/perf Python bindings
-inherit ${@bb.utils.contains('PACKAGECONFIG', 'scripting', 'pythonnative', '', d)}
 inherit python-dir
 export PYTHON_SITEPACKAGES_DIR
 
@@ -48,8 +47,7 @@
 do_populate_lic[depends] += "virtual/kernel:do_patch"
 
 # needed for building the tools/perf Perl binding
-inherit ${@bb.utils.contains('PACKAGECONFIG', 'scripting', 'perlnative', '', d)}
-inherit cpan-base
+inherit perlnative cpan-base
 # Env var which tells perl if it should use host (no) or target (yes) settings
 export PERLCONFIGTARGET = "${@is_target(d)}"
 export PERL_INC = "${STAGING_LIBDIR}${PERL_OWN_DIR}/perl/${@get_perl_version(d)}/CORE"
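The Perl and Python support itself is still governed by the recipe's 'scripting' PACKAGECONFIG flag (the flag the two removed conditional inherits used to test); only the native tool dependencies become unconditional here. If the bindings are not wanted, the flag can still be dropped from local.conf, for example:

    PACKAGECONFIG_remove_pn-perf = "scripting"
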
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/powertop/powertop_2.8.bb b/import-layers/yocto-poky/meta/recipes-kernel/powertop/powertop_2.8.bb
index 1f2dfb8..4d7a3e7 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/powertop/powertop_2.8.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/powertop/powertop_2.8.bb
@@ -17,11 +17,6 @@
 
 inherit autotools gettext pkgconfig
 
-# we need to explicitly link with libintl in uClibc systems
-EXTRA_LDFLAGS ?= ""
-EXTRA_LDFLAGS_libc-uclibc = "-lintl"
-LDFLAGS += "${EXTRA_LDFLAGS}"
-
 # we do not want libncursesw if we can
 do_configure_prepend() {
     # configure.ac checks for delwin() in "ncursesw ncurses" so let's drop first one
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/sysprof/sysprof_3.22.3.bb b/import-layers/yocto-poky/meta/recipes-kernel/sysprof/sysprof_3.22.3.bb
deleted file mode 100644
index 2631063..0000000
--- a/import-layers/yocto-poky/meta/recipes-kernel/sysprof/sysprof_3.22.3.bb
+++ /dev/null
@@ -1,34 +0,0 @@
-SUMMARY = "System-wide Performance Profiler for Linux"
-LICENSE = "GPLv3+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504 \
-                    file://src/sp-application.c;endline=17;md5=40e55577ef122c88fe20052acda64875"
-
-inherit gnomebase gettext systemd upstream-version-is-even
-
-DEPENDS = "glib-2.0 libxml2-native glib-2.0-native"
-
-SRC_URI += " \
-           file://define-NT_GNU_BUILD_ID.patch \
-           file://0001-configure-Add-option-to-enable-disable-polkit.patch \
-           file://0001-Disable-check-for-polkit-for-UI.patch \
-           file://0001-Avoid-building-docs.patch \
-          "
-SRC_URI[archive.sha256sum] = "e6dca325b3014440f457a92db18ffe342a35888db3f0756694a99b9652796367"
-SRC_URI[archive.md5sum] = "9514065dc752105240e5567c13708af4"
-
-AUTOTOOLS_AUXDIR = "${S}/build-aux"
-
-EXTRA_OECONF = "--enable-compile-warnings"
-
-PACKAGECONFIG ?= "${@bb.utils.contains_any('DISTRO_FEATURES', '${GTK3DISTROFEATURES}', 'gtk', '', d)}"
-PACKAGECONFIG[gtk] = "--enable-gtk,--disable-gtk,gtk+3"
-PACKAGECONFIG[polkit] = "--enable-polkit,--disable-polkit,polkit dbus"
-
-SOLIBS = ".so"
-FILES_SOLIBSDEV = ""
-FILES_${PN} += "${datadir}/icons/"
-
-SYSTEMD_SERVICE_${PN} = "${@bb.utils.contains('PACKAGECONFIG', 'polkit', 'sysprof2.service', '', d)}"
-
-# We do not yet work for aarch64.
-COMPATIBLE_HOST = "^(?!aarch64).*"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/sysprof/sysprof_3.24.1.bb b/import-layers/yocto-poky/meta/recipes-kernel/sysprof/sysprof_3.24.1.bb
new file mode 100644
index 0000000..79a27be
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/sysprof/sysprof_3.24.1.bb
@@ -0,0 +1,34 @@
+SUMMARY = "System-wide Performance Profiler for Linux"
+HOMEPAGE = "http://www.sysprof.com"
+LICENSE = "GPLv3+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504 \
+                    file://src/sp-application.c;endline=17;md5=40e55577ef122c88fe20052acda64875"
+
+inherit gnomebase gettext systemd upstream-version-is-even
+
+DEPENDS = "glib-2.0 libxml2-native glib-2.0-native"
+
+SRC_URI += " \
+           file://define-NT_GNU_BUILD_ID.patch \
+           file://0001-configure-Add-option-to-enable-disable-polkit.patch \
+           file://0001-Disable-check-for-polkit-for-UI.patch \
+           file://0001-Avoid-building-docs.patch \
+          "
+SRC_URI[archive.md5sum] = "2b44ae1d8cd899417294a9c4509d7870"
+SRC_URI[archive.sha256sum] = "054eebe2afb6fe3c06ac8c46bc045c42f675d4fd64e6f16cbc602d5c7ce27bec"
+
+AUTOTOOLS_AUXDIR = "${S}/build-aux"
+
+EXTRA_OECONF = "--enable-compile-warnings"
+
+PACKAGECONFIG ?= "${@bb.utils.contains_any('DISTRO_FEATURES', '${GTK3DISTROFEATURES}', 'gtk', '', d)}"
+PACKAGECONFIG[gtk] = "--enable-gtk,--disable-gtk,gtk+3"
+PACKAGECONFIG[polkit] = "--enable-polkit,--disable-polkit,polkit dbus"
+
+SOLIBS = ".so"
+FILES_SOLIBSDEV = ""
+
+SYSTEMD_SERVICE_${PN} = "${@bb.utils.contains('PACKAGECONFIG', 'polkit', 'sysprof2.service', '', d)}"
+
+# We do not yet work for aarch64.
+COMPATIBLE_HOST = "^(?!aarch64).*"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap-native_git.bb b/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap-native_git.bb
index c3da77c..19cc1cf 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap-native_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap-native_git.bb
@@ -3,5 +3,4 @@
 
 inherit native
 
-RM_WORK_EXCLUDE_ITEMS += "recipe-sysroot-native"
 addtask addto_recipe_sysroot after do_populate_sysroot before do_build
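The removed line was a per-recipe addition to rm_work's item exclusion list (RM_WORK_EXCLUDE_ITEMS keeps named subdirectories of WORKDIR from being cleaned). For the related, coarser case of keeping a recipe's whole work directory while rm_work is active, the class also honours a recipe-level list; a local.conf sketch, with the recipe name purely as an example:

    INHERIT += "rm_work"
    RM_WORK_EXCLUDE += "systemtap-native"
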
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap/0001-staprun-stapbpf-don-t-support-installing-a-non-root.patch b/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap/0001-staprun-stapbpf-don-t-support-installing-a-non-root.patch
new file mode 100644
index 0000000..9f11648
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap/0001-staprun-stapbpf-don-t-support-installing-a-non-root.patch
@@ -0,0 +1,62 @@
+From 3e13a006fe3dff9489269274093bf868532036e2 Mon Sep 17 00:00:00 2001
+From: Saul Wold <sgw@linux.intel.com>
+Date: Tue, 5 Sep 2017 16:02:55 -0700
+Subject: [PATCH] staprun/stapbpf: don't support installing a non-root
+
+Since we are in a known environment and installing as root and
+expect to be running as root, don't create the group or chmod
+the binaries.
+
+Upstream-Status: Inappropriate [Embedded]
+Signed-off-by: Saul Wold <sgw@linux.intel.com>
+---
+ stapbpf/Makefile.am | 14 +++++++-------
+ staprun/Makefile.am | 12 ++++++------
+ 2 files changed, 13 insertions(+), 13 deletions(-)
+
+diff --git a/stapbpf/Makefile.am b/stapbpf/Makefile.am
+index 421b044ef..f7daeb2b2 100644
+--- a/stapbpf/Makefile.am
++++ b/stapbpf/Makefile.am
+@@ -39,11 +39,11 @@ git_version.stamp ../git_version.h:
+ 
+ # Why the "id -u" condition?  This way, an unprivileged user can run
+ # make install, and have "sudo stap ...." or "sudo stapbpf ...." work later.
+-install-exec-hook:
+-	if [ `id -u` -eq 0 ]; then \
+-		getent group stapusr >/dev/null || groupadd -g 156 -r stapusr 2>/dev/null || groupadd -r stapusr; \
+-		getent group stapusr >/dev/null \
+-		&& chgrp stapusr "$(DESTDIR)$(bindir)/stapbpf" \
+-		&& chmod 04110 "$(DESTDIR)$(bindir)/stapbpf"; \
+-	fi
++#install-exec-hook:
++#	if [ `id -u` -eq 0 ]; then \
++#		getent group stapusr >/dev/null || groupadd -g 156 -r stapusr 2>/dev/null || groupadd -r stapusr; \
++#		getent group stapusr >/dev/null \
++#		&& chgrp stapusr "$(DESTDIR)$(bindir)/stapbpf" \
++#		&& chmod 04110 "$(DESTDIR)$(bindir)/stapbpf"; \
++#	fi
+ endif
+diff --git a/staprun/Makefile.am b/staprun/Makefile.am
+index 4073aa01c..2925e34c3 100644
+--- a/staprun/Makefile.am
++++ b/staprun/Makefile.am
+@@ -72,9 +72,9 @@ git_version.stamp ../git_version.h:
+ 
+ # Why the "id -u" condition?  This way, an unprivileged user can run
+ # make install, and have "sudo stap ...." or "sudo staprun ...." work later.
+-install-exec-hook:
+-	if [ `id -u` -eq 0 ]; then \
+-		getent group stapusr >/dev/null || groupadd -g 156 -r stapusr 2>/dev/null || groupadd -r stapusr; \
+-		getent group stapusr >/dev/null && chgrp stapusr "$(DESTDIR)$(bindir)/staprun"; \
+-		chmod 04110 "$(DESTDIR)$(bindir)/staprun"; \
+-	fi
++#install-exec-hook:
++#	if [ `id -u` -eq 0 ]; then \
++#		getent group stapusr >/dev/null || groupadd -g 156 -r stapusr 2>/dev/null || groupadd -r stapusr; \
++#		getent group stapusr >/dev/null && chgrp stapusr "$(DESTDIR)$(bindir)/staprun"; \
++#		chmod 04110 "$(DESTDIR)$(bindir)/staprun"; \
++#	fi
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap_git.bb b/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap_git.bb
index b3fd973..475b207 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap_git.bb
@@ -1,4 +1,5 @@
 SUMMARY = "Script-directed dynamic tracing and performance analysis tool for Linux"
+HOMEPAGE = "https://sourceware.org/systemtap/"
 
 require systemtap_git.inc
 
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap_git.inc b/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap_git.inc
index a6aedd3..3dc688a 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap_git.inc
+++ b/import-layers/yocto-poky/meta/recipes-kernel/systemtap/systemtap_git.inc
@@ -1,6 +1,6 @@
 LICENSE = "GPLv2"
 LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
-SRCREV = "b8ea350dc13adb6190d9044a5b80110a4c441270"
+SRCREV = "45d0e7a09a15a21078d0ebf2db5175ed9e87014e"
 PV = "3.1"
 
 SRC_URI = "git://sourceware.org/git/systemtap.git \
@@ -12,7 +12,8 @@
            file://0001-Do-not-let-configure-write-a-python-location-into-th.patch \
            file://0001-Install-python-modules-to-correct-library-dir.patch \
            file://0001-buildrun-remove-quotes-around-I-include-line.patch \
-          "
+           file://0001-staprun-stapbpf-don-t-support-installing-a-non-root.patch \
+           "
 
 # systemtap doesn't support mips
 COMPATIBLE_HOST = '(x86_64|i.86|powerpc|arm|aarch64).*-linux'
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/kernelshark_git.bb b/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/kernelshark_git.bb
index 563182c..9a5e800 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/kernelshark_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/kernelshark_git.bb
@@ -15,7 +15,7 @@
 EXTRA_OEMAKE = "\
     'prefix=${prefix}' \
     'bindir_relative=${@oe.path.relative(prefix, bindir)}' \
-    'libdir=${@oe.path.relative(prefix, libdir)}' \
+    'libdir=${libdir}' \
     NO_PYTHON=1 \
     gui \
 "
@@ -28,5 +28,6 @@
     oe_runmake DESTDIR="${D}" install_gui
     rm ${D}${bindir}/trace-cmd
     rm -rf ${D}${libdir}/trace-cmd
+    rm -rf ${D}${sysconfdir}/bash_completion.d/trace-cmd.bash
     rmdir ${D}${libdir}
 }
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/trace-cmd.inc b/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/trace-cmd.inc
index 3ad06fa..002ee65 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/trace-cmd.inc
+++ b/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/trace-cmd.inc
@@ -1,5 +1,5 @@
-SRCREV = "9be5d74805830a291615f2f34a27c903f6a37b1e"
-PV = "2.6"
+SRCREV = "021710e1073fe203341b427cd1a4bac577ec899c"
+PV = "2.6.1"
 
 inherit pkgconfig
 
@@ -7,6 +7,7 @@
 
 SRC_URI = "git://git.kernel.org/pub/scm/linux/kernel/git/rostedt/trace-cmd.git;branch=trace-cmd-stable-v2.6 \
            file://blktrace-api-compatibility.patch \
+           file://0001-Include-limits.h-so-that-PATH_MAX-is-defined-an-issu.patch \
 "
 
 S = "${WORKDIR}/git"
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/trace-cmd/0001-Include-limits.h-so-that-PATH_MAX-is-defined-an-issu.patch b/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/trace-cmd/0001-Include-limits.h-so-that-PATH_MAX-is-defined-an-issu.patch
new file mode 100644
index 0000000..5763083
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/trace-cmd/0001-Include-limits.h-so-that-PATH_MAX-is-defined-an-issu.patch
@@ -0,0 +1,27 @@
+From 9488f92c1d0c7931c3e17950d1f9eea2aeb3e2bd Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Wed, 14 Jun 2017 15:56:18 +0300
+Subject: [PATCH] Include limits.h so that PATH_MAX is defined (an issue on
+ musl).
+
+Upstream-Status: Pending
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+ trace-listen.c | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/trace-listen.c b/trace-listen.c
+index 17ff9d8..838d6bc 100644
+--- a/trace-listen.c
++++ b/trace-listen.c
+@@ -31,6 +31,7 @@
+ #include <fcntl.h>
+ #include <signal.h>
+ #include <errno.h>
++#include <limits.h>
+ 
+ #include "trace-local.h"
+ #include "trace-msg.h"
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/trace-cmd_git.bb b/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/trace-cmd_git.bb
index dd9a8a0..27c7c19 100644
--- a/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/trace-cmd_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-kernel/trace-cmd/trace-cmd_git.bb
@@ -1,11 +1,12 @@
 SUMMARY = "User interface to Ftrace"
+HOMEPAGE = "http://git.kernel.org/"
 LICENSE = "GPLv2 & LGPLv2.1"
 
 require trace-cmd.inc
 
 LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe \
                     file://trace-cmd.c;beginline=6;endline=8;md5=2c22c965a649ddd7973d7913c5634a5e \
-                    file://COPYING.LIB;md5=bbb461211a33b134d42ed5ee802b37ff \
+                    file://COPYING.LIB;md5=edb195fe538e4552c1f6ca0fd7bf4f0a \
                     file://trace-input.c;beginline=5;endline=8;md5=3ec82f43bbe0cfb5951ff414ef4d44d0 \
 "
 
@@ -17,7 +18,7 @@
     'img_install=${datadir}/kernelshark/html/images' \
     \
     'bindir_relative=${@oe.path.relative(prefix, bindir)}' \
-    'libdir=${@oe.path.relative(prefix, libdir)}' \
+    'libdir=${libdir}' \
     \
     NO_PYTHON=1 \
 "
diff --git a/import-layers/yocto-poky/meta/recipes-lsb4/libpng/libpng12_1.2.57.bb b/import-layers/yocto-poky/meta/recipes-lsb4/libpng/libpng12_1.2.57.bb
deleted file mode 100644
index 9f74f5f..0000000
--- a/import-layers/yocto-poky/meta/recipes-lsb4/libpng/libpng12_1.2.57.bb
+++ /dev/null
@@ -1,36 +0,0 @@
-SUMMARY = "PNG image format decoding library"
-HOMEPAGE = "http://www.libpng.org/"
-SECTION = "libs"
-LICENSE = "Libpng"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=597b8a91994a3e27ae6aa79bf02677d9 \
-                    file://png.h;beginline=19;endline=109;md5=166406397718925b660f0033f7558ef7"
-DEPENDS = "zlib"
-
-PN = "libpng12"
-S = "${WORKDIR}/libpng-${PV}"
-
-SRC_URI = "${GENTOO_MIRROR}/libpng-${PV}.tar.xz"
-
-SRC_URI[md5sum] = "307052e5e8af97b82b17b64fb1b3677a"
-SRC_URI[sha256sum] = "0f4620e11fa283fedafb474427c8e96bf149511a1804bdc47350963ae5cf54d8"
-
-UPSTREAM_CHECK_URI = "http://sourceforge.net/projects/libpng/files/libpng12/"
-UPSTREAM_CHECK_REGEX = "/libpng12/(?P<pver>(\d+[\.\-_]*)+)/"
-
-BINCONFIG_GLOB = "${PN}-config"
-
-inherit autotools binconfig pkgconfig
-
-do_install_append() {
-	# The follow link files link to corresponding png12*.h and libpng12* files
-	# They conflict with higher verison, so drop them
-	rm ${D}/${includedir}/png.h
-	rm ${D}/${includedir}/pngconf.h
-
-	rm ${D}/${libdir}/libpng.la
-	rm ${D}/${libdir}/libpng.so
-	rm ${D}/${libdir}/libpng.a || true
-	rm ${D}/${libdir}/pkgconfig/libpng.pc
-
-	rm ${D}/${bindir}/libpng-config
-}
diff --git a/import-layers/yocto-poky/meta/recipes-lsb4/perl/libclass-isa-perl_0.36.bb b/import-layers/yocto-poky/meta/recipes-lsb4/perl/libclass-isa-perl_0.36.bb
deleted file mode 100644
index f93841d..0000000
--- a/import-layers/yocto-poky/meta/recipes-lsb4/perl/libclass-isa-perl_0.36.bb
+++ /dev/null
@@ -1,31 +0,0 @@
-SUMMARY = "Perl module for reporting the search path for a class's ISA tree"
-DESCRIPTION = "Suppose you have a class (like Food::Fish::Fishstick) that is derived, \
-via its @ISA, from one or more superclasses (as Food::Fish::Fishstick is from Food::Fish,\
-Life::Fungus, and Chemicals), and some of those superclasses may themselves each be\
-derived, via its @ISA, from one or more superclasses (as above).\
-\
-When, then, you call a method in that class ($fishstick->calories), Perl first searches\
-there for that method, but if it's not there, it goes searching in its superclasses, and\
-so on, in a depth-first (or maybe "height-first" is the word) search. In the above example,\
-it'd first look in Food::Fish, then Food, then Matter, then Life::Fungus, then Life, then\
-Chemicals.\
-\
-This library, Class::ISA, provides functions that return that list -- the list\
-(in order) of names of classes Perl would search to find a method, with no duplicates."
-
-HOMEPAGE = "http://search.cpan.org/dist/Class-ISA/"
-SECTION = "libs"
-LICENSE = "Artistic-1.0 | GPL-1.0+"
-
-LIC_FILES_CHKSUM = "file://README;beginline=107;endline=111;md5=6a5c6842a63cfe4dab1f66e2350e4d25"
-
-SRC_URI = "http://search.cpan.org/CPAN/authors/id/S/SM/SMUELLER/Class-ISA-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "3a2ad203c8dc87d6c9de16215d00af47"
-SRC_URI[sha256sum] = "8816f34e9a38e849a10df756030dccf9fe061a196c11ac3faafd7113c929b964"
-
-S = "${WORKDIR}/Class-ISA-${PV}"
-
-inherit cpan
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-lsb4/perl/libdumpvalue-perl_1.17.bb b/import-layers/yocto-poky/meta/recipes-lsb4/perl/libdumpvalue-perl_1.17.bb
deleted file mode 100644
index 9a74614..0000000
--- a/import-layers/yocto-poky/meta/recipes-lsb4/perl/libdumpvalue-perl_1.17.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-SUMMARY = "Perl module for provides screen dump of Perl data"
-
-HOMEPAGE = "http://search.cpan.org/~flora/Dumpvalue/"
-SECTION = "libs"
-LICENSE = "Artistic-1.0 | GPL-1.0+"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=f736bec5ada1fc5e39b2a8e7e06bbcbb"
-
-SRC_URI = "http://search.cpan.org/CPAN/authors/id/F/FL/FLORA/Dumpvalue-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "6ede9f693d4a9c4555541cb1a1cc2006"
-SRC_URI[sha256sum] = "9ea74606b545f769a787ec2ae229549a2ad0a8e3cd4b14eff2ce3841836b3bdb"
-
-S = "${WORKDIR}/Dumpvalue-${PV}"
-
-inherit cpan
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-lsb4/perl/libenv-perl_1.04.bb b/import-layers/yocto-poky/meta/recipes-lsb4/perl/libenv-perl_1.04.bb
deleted file mode 100644
index dd8e115..0000000
--- a/import-layers/yocto-poky/meta/recipes-lsb4/perl/libenv-perl_1.04.bb
+++ /dev/null
@@ -1,21 +0,0 @@
-SUMMARY = "Perl module that imports environment variables as scalars or arrays"
-DESCRIPTION = "Perl maintains environment variables in a special hash named %ENV. \
-For when this access method is inconvenient, the Perl module Env allows environment \
-variables to be treated as scalar or array variables."
-
-HOMEPAGE = "http://search.cpan.org/~flora/Env/"
-SECTION = "libs"
-LICENSE = "Artistic-1.0 | GPL-1.0+"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=76c1cbf18db56b3340d91cb947943bd3"
-
-SRC_URI = "http://search.cpan.org/CPAN/authors/id/F/FL/FLORA/Env-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "fdba5c0690e66972c96fee112cf5f25c"
-SRC_URI[sha256sum] = "d94a3d412df246afdc31a2199cbd8ae915167a3f4684f7b7014ce1200251ebb0"
-
-S = "${WORKDIR}/Env-${PV}"
-
-inherit cpan
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-lsb4/perl/libfile-checktree-perl_4.41.bb b/import-layers/yocto-poky/meta/recipes-lsb4/perl/libfile-checktree-perl_4.41.bb
deleted file mode 100644
index ce37c72..0000000
--- a/import-layers/yocto-poky/meta/recipes-lsb4/perl/libfile-checktree-perl_4.41.bb
+++ /dev/null
@@ -1,32 +0,0 @@
-SUMMARY = "Perl module that run many filetest checks on a tree"
-DESCRIPTION = "The validate() routine takes a single multiline string consisting \
-of directives, each containing a filename plus a file test to try on it. (The file \
-test may also be a "cd", causing subsequent relative filenames to be interpreted \
-relative to that directory.) After the file test you may put || die to make it a \
-fatal error if the file test fails. The default is || warn. The file test may \
-optionally have a "!' prepended to test for the opposite condition. If you do a \
-cd and then list some relative filenames, you may want to indent them slightly for \
-readability. If you supply your own die() or warn() message, you can use $file to \
-interpolate the filename. \
-\
-Filetests may be bunched: "-rwx" tests for all of -r, -w, and -x. Only the first failed \
-test of the bunch will produce a warning. \
-\
-The routine returns the number of warnings issued."
-
-HOMEPAGE = "http://search.cpan.org/~flora/File-CheckTree/"
-SECTION = "libs"
-LICENSE = "Artistic-1.0 | GPL-1.0+"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=c6fcacc5df80e037060300a7f4b93bf9"
-
-SRC_URI = "http://search.cpan.org/CPAN/authors/id/F/FL/FLORA/File-CheckTree-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "519c82aa7e5b7f752b4da14a6c8ad740"
-SRC_URI[sha256sum] = "fc99ab6bb5af4664832715974b5a19e328071dc9202ab72e5d5a594ebd46a729"
-
-S = "${WORKDIR}/File-CheckTree-${PV}"
-
-inherit cpan
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-lsb4/perl/libi18n-collate-perl_1.02.bb b/import-layers/yocto-poky/meta/recipes-lsb4/perl/libi18n-collate-perl_1.02.bb
deleted file mode 100644
index f1839e0..0000000
--- a/import-layers/yocto-poky/meta/recipes-lsb4/perl/libi18n-collate-perl_1.02.bb
+++ /dev/null
@@ -1,21 +0,0 @@
-SUMMARY = "Perl module that compare 8-bit scalar data according to the current locale"
-DESCRIPTION = "This module provides you with objects that will collate according to \
-your national character set, provided that the POSIX setlocale() function is supported \
-on your system."
-
-HOMEPAGE = "http://search.cpan.org/~flora/I18N-Collate/"
-SECTION = "libs"
-LICENSE = "Artistic-1.0 | GPL-1.0+"
-
-LIC_FILES_CHKSUM = "file://LICENSE;md5=ff6d629144a6ec1ea8c300f75760184f"
-
-SRC_URI = "http://search.cpan.org/CPAN/authors/id/F/FL/FLORA/I18N-Collate-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "72ddb6d1c59cfdf31aa3b04799b86af0"
-SRC_URI[sha256sum] = "9174506bc903eda89690394e3f45558ab7e013114227896d8569d6164648fe37"
-
-S = "${WORKDIR}/I18N-Collate-${PV}"
-
-inherit cpan
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-lsb4/perl/libpod-plainer-perl_1.04.bb b/import-layers/yocto-poky/meta/recipes-lsb4/perl/libpod-plainer-perl_1.04.bb
deleted file mode 100644
index a3e58f0..0000000
--- a/import-layers/yocto-poky/meta/recipes-lsb4/perl/libpod-plainer-perl_1.04.bb
+++ /dev/null
@@ -1,23 +0,0 @@
-SUMMARY = "Perl extension for converting Pod to old-style Pod"
-DESCRIPTION = "Pod::Plainer uses Pod::Parser which takes Pod with the (new) 'C<< .. >>' \
-constructs and returns the old(er) style with just 'C<>'; '<' and '>' are replaced by \
-'E<lt>' and 'E<gt>'. \
-\
-This can be used to pre-process Pod before using tools which do not recognise the new style Pods."
-
-HOMEPAGE = "http://search.cpan.org/dist/Pod-Plainer/"
-SECTION = "libs"
-LICENSE = "Artistic-1.0 | GPL-1.0+"
-
-LIC_FILES_CHKSUM = "file://README;beginline=27;md5=227cf83970fc61264845825d9d2bf6f8"
-
-SRC_URI = "http://search.cpan.org/CPAN/authors/id/R/RM/RMBARKER/Pod-Plainer-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "f502eacd1a40894b9dfea55fc2cd5e7d"
-SRC_URI[sha256sum] = "1bbfbf7d1d4871e5a83bab2137e22d089078206815190eb1d5c1260a3499456f"
-
-S = "${WORKDIR}/Pod-Plainer-${PV}"
-
-inherit cpan
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-lib/0001-ucm-parser-needs-limits.h.patch b/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-lib/0001-ucm-parser-needs-limits.h.patch
deleted file mode 100644
index 4edaf4d..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-lib/0001-ucm-parser-needs-limits.h.patch
+++ /dev/null
@@ -1,33 +0,0 @@
-From 005ac9d2fa000912c8fb8257020a0471b1c6176c Mon Sep 17 00:00:00 2001
-From: Gustavo Zacarias <gustavo@zacarias.com.ar>
-Date: Wed, 21 Dec 2016 19:46:34 -0300
-Subject: [PATCH] ucm: parser needs limits.h
-
-It's using PATH_MAX which is defined there, otherwise the build fails on
-musl libc.
-
-Signed-off-by: Gustavo Zacarias <gustavo@zacarias.com.ar>
-Signed-off-by: Takashi Iwai <tiwai@suse.de>
-
-Upstream-Status: Accepted [expected in 1.1.4]
-
-Signed-off-by: Tanu Kaskinen <tanuk@iki.fi>
----
- src/ucm/parser.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/src/ucm/parser.c b/src/ucm/parser.c
-index c98373a9..f520abc5 100644
---- a/src/ucm/parser.c
-+++ b/src/ucm/parser.c
-@@ -32,6 +32,7 @@
- 
- #include "ucm_local.h"
- #include <dirent.h>
-+#include <limits.h>
- 
- /** The name of the environment variable containing the UCM directory */
- #define ALSA_CONFIG_UCM_VAR "ALSA_CONFIG_UCM"
--- 
-2.11.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-lib_1.1.3.bb b/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-lib_1.1.3.bb
deleted file mode 100644
index 191a036..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-lib_1.1.3.bb
+++ /dev/null
@@ -1,46 +0,0 @@
-SUMMARY = "ALSA sound library"
-HOMEPAGE = "http://www.alsa-project.org"
-BUGTRACKER = "http://alsa-project.org/main/index.php/Bug_Tracking"
-SECTION = "libs/multimedia"
-LICENSE = "LGPLv2.1 & GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=7fbc338309ac38fefcd64b04bb903e34 \
-                    file://src/socket.c;beginline=1;endline=26;md5=11ff89a8a7a4a690a5c78effe8159545"
-
-BBCLASSEXTEND = "native nativesdk"
-
-SRC_URI = "ftp://ftp.alsa-project.org/pub/lib/${BP}.tar.bz2 \
-           file://Check-if-wordexp-function-is-supported.patch \
-           file://avoid-including-sys-poll.h-directly.patch \
-           file://0001-ucm-parser-needs-limits.h.patch \
-"
-SRC_URI[md5sum] = "eefe5992567ba00d6110a540657aaf5c"
-SRC_URI[sha256sum] = "71282502184c592c1a008e256c22ed0ba5728ca65e05273ceb480c70f515969c"
-
-inherit autotools pkgconfig
-
-require alsa-fpu.inc
-EXTRA_OECONF += "${@get_alsa_fpu_setting(bb, d)} "
-
-EXTRA_OECONF += "--disable-python"
-
-EXTRA_OECONF_append_libc-uclibc = " --with-versioned=no "
-
-PACKAGES =+ "alsa-server libasound alsa-conf alsa-doc"
-FILES_libasound = "${libdir}/libasound.so.*"
-FILES_alsa-server = "${bindir}/*"
-FILES_alsa-conf = "${datadir}/alsa/"
-
-RDEPENDS_libasound = "alsa-conf"
-
-# alsa-lib gets automatically added to alsa-lib-dev dependencies, but the
-# alsa-lib package doesn't exist. libasound is the real library package.
-RDEPENDS_${PN}-dev = "libasound"
-
-# upgrade path
-RPROVIDES_${PN}-dev = "alsa-dev"
-RREPLACES_${PN}-dev = "alsa-dev"
-RCONFLICTS_${PN}-dev = "alsa-dev"
-
-RPROVIDES_alsa-conf = "alsa-conf-base"
-RREPLACES_alsa-conf = "alsa-conf-base"
-RCONFLICTS_alsa-conf = "alsa-conf-base"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-lib_1.1.4.1.bb b/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-lib_1.1.4.1.bb
new file mode 100644
index 0000000..acdeae1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-lib_1.1.4.1.bb
@@ -0,0 +1,43 @@
+SUMMARY = "ALSA sound library"
+HOMEPAGE = "http://www.alsa-project.org"
+BUGTRACKER = "http://alsa-project.org/main/index.php/Bug_Tracking"
+SECTION = "libs/multimedia"
+LICENSE = "LGPLv2.1 & GPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=7fbc338309ac38fefcd64b04bb903e34 \
+                    file://src/socket.c;beginline=1;endline=26;md5=11ff89a8a7a4a690a5c78effe8159545"
+
+BBCLASSEXTEND = "native nativesdk"
+
+SRC_URI = "ftp://ftp.alsa-project.org/pub/lib/${BP}.tar.bz2 \
+           file://Check-if-wordexp-function-is-supported.patch \
+           file://avoid-including-sys-poll.h-directly.patch \
+"
+SRC_URI[md5sum] = "29fa3e69122d3cf3e8f0e01a0cb1d183"
+SRC_URI[sha256sum] = "91bb870c14d1c7c269213285eeed874fa3d28112077db061a3af8010d0885b76"
+
+inherit autotools pkgconfig
+
+require alsa-fpu.inc
+EXTRA_OECONF += "${@get_alsa_fpu_setting(bb, d)} "
+
+EXTRA_OECONF += "--disable-python"
+
+PACKAGES =+ "alsa-server libasound alsa-conf alsa-doc"
+FILES_libasound = "${libdir}/libasound.so.*"
+FILES_alsa-server = "${bindir}/*"
+FILES_alsa-conf = "${datadir}/alsa/"
+
+RDEPENDS_libasound = "alsa-conf"
+
+# alsa-lib gets automatically added to alsa-lib-dev dependencies, but the
+# alsa-lib package doesn't exist. libasound is the real library package.
+RDEPENDS_${PN}-dev = "libasound"
+
+# upgrade path
+RPROVIDES_${PN}-dev = "alsa-dev"
+RREPLACES_${PN}-dev = "alsa-dev"
+RCONFLICTS_${PN}-dev = "alsa-dev"
+
+RPROVIDES_alsa-conf = "alsa-conf-base"
+RREPLACES_alsa-conf = "alsa-conf-base"
+RCONFLICTS_alsa-conf = "alsa-conf-base"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-plugins_1.1.1.bb b/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-plugins_1.1.1.bb
deleted file mode 100644
index 16686a0..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-plugins_1.1.1.bb
+++ /dev/null
@@ -1,103 +0,0 @@
-SUMMARY = "ALSA Plugins"
-HOMEPAGE = "http://alsa-project.org"
-BUGTRACKER = "http://alsa-project.org/main/index.php/Bug_Tracking"
-SECTION = "multimedia"
-
-# The primary license of alsa-plugins is LGPLv2.1.
-#
-# m4/attributes.m4 is licensed under GPLv2+. m4/attributes.m4 is part of the
-# build system, and doesn't affect the licensing of the build result.
-#
-# The samplerate plugin source code is licensed under GPLv2+ to be consistent
-# with the libsamplerate license. However, if the licensee has a commercial
-# license for libsamplerate, the samplerate plugin may be used under the terms
-# of LGPLv2.1 like the rest of the plugins.
-LICENSE = "LGPLv2.1 & GPLv2+"
-LIC_FILES_CHKSUM = "\
-        file://COPYING;md5=7fbc338309ac38fefcd64b04bb903e34 \
-        file://COPYING.GPL;md5=94d55d512a9ba36caa9b7df079bae19f \
-        file://m4/attributes.m4;endline=33;md5=b25958da44c02231e3641f1bccef53eb \
-        file://rate/rate_samplerate.c;endline=35;md5=fd77bce85f4a338c0e8ab18430b69fae \
-"
-
-SRC_URI = "ftp://ftp.alsa-project.org/pub/plugins/${BP}.tar.bz2"
-SRC_URI[md5sum] = "69f9f3e2de3c97fc71d496e91e271fe5"
-SRC_URI[sha256sum] = "8ea4d1e082c36528a896a2581e5eb62d4dc2683238e353050d0d624e65f901f1"
-
-DEPENDS += "alsa-lib"
-
-inherit autotools pkgconfig
-
-PACKAGECONFIG ??= "\
-        samplerate \
-        speexdsp \
-        ${@bb.utils.filter('DISTRO_FEATURES', 'pulseaudio', d)} \
-"
-PACKAGECONFIG[avcodec] = "--enable-avcodec,--disable-avcodec,libav"
-PACKAGECONFIG[jack] = "--enable-jack,--disable-jack,jack"
-PACKAGECONFIG[maemo-plugin] = "--enable-maemo-plugin,--disable-maemo-plugin"
-PACKAGECONFIG[maemo-resource-manager] = "--enable-maemo-resource-manager,--disable-maemo-resource-manager,dbus"
-PACKAGECONFIG[pulseaudio] = "--enable-pulseaudio,--disable-pulseaudio,pulseaudio"
-PACKAGECONFIG[samplerate] = "--enable-samplerate,--disable-samplerate,libsamplerate0"
-PACKAGECONFIG[speexdsp] = "--with-speex=lib,--with-speex=no,speexdsp"
-
-PACKAGES += "${@bb.utils.contains('PACKAGECONFIG', 'pulseaudio', 'alsa-plugins-pulseaudio-conf', '', d)}"
-
-PACKAGES_DYNAMIC = "^libasound-module-.*"
-
-# The alsa-plugins package doesn't itself contain anything, it just depends on
-# all built plugins.
-ALLOW_EMPTY_${PN} = "1"
-
-do_install_append() {
-	rm ${D}${libdir}/alsa-lib/*.la
-
-	# We use the example as is, so just drop the .example suffix.
-	if [ "${@bb.utils.contains('PACKAGECONFIG', 'pulseaudio', 'yes', 'no', d)}" = "yes" ]; then
-		mv ${D}${datadir}/alsa/alsa.conf.d/99-pulseaudio-default.conf.example ${D}${datadir}/alsa/alsa.conf.d/99-pulseaudio-default.conf
-	fi
-}
-
-python populate_packages_prepend() {
-    plugindir = d.expand('${libdir}/alsa-lib/')
-    packages = " ".join(do_split_packages(d, plugindir, '^libasound_module_(.*)\.so$', 'libasound-module-%s', 'Alsa plugin for %s', extra_depends=''))
-    d.setVar("RDEPENDS_alsa-plugins", packages)
-}
-
-# The rate plugins create some symlinks. For example, the samplerate plugin
-# creates these links to the main plugin file:
-#
-#   libasound_module_rate_samplerate_best.so
-#   libasound_module_rate_samplerate_linear.so
-#   libasound_module_rate_samplerate_medium.so
-#   libasound_module_rate_samplerate_order.so
-#
-# The other rate plugins create similar links. We have to add the links to
-# FILES manually, because do_split_packages() skips the links (which is good,
-# because we wouldn't want do_split_packages() to create separate packages for
-# the symlinks).
-#
-# The symlinks cause QA errors, because usually it's a bug if a non
-# -dev/-dbg/-nativesdk package contains links to .so files, but in this case
-# the errors are false positives, so we disable the QA checks.
-FILES_${MLPREFIX}libasound-module-rate-lavcrate += "${libdir}/alsa-lib/*rate_lavcrate_*.so"
-FILES_${MLPREFIX}libasound-module-rate-samplerate += "${libdir}/alsa-lib/*rate_samplerate_*.so"
-FILES_${MLPREFIX}libasound-module-rate-speexrate += "${libdir}/alsa-lib/*rate_speexrate_*.so"
-INSANE_SKIP_${MLPREFIX}libasound-module-rate-lavcrate = "dev-so"
-INSANE_SKIP_${MLPREFIX}libasound-module-rate-samplerate = "dev-so"
-INSANE_SKIP_${MLPREFIX}libasound-module-rate-speexrate = "dev-so"
-
-# 50-pulseaudio.conf defines a device named "pulse" that applications can use
-# if they explicitly want to use the PulseAudio plugin.
-# 99-pulseaudio-default.conf configures the "default" device to use the
-# PulseAudio plugin.
-FILES_${PN}-pulseaudio-conf += "\
-        ${datadir}/alsa/alsa.conf.d/50-pulseaudio.conf \
-        ${datadir}/alsa/alsa.conf.d/99-pulseaudio-default.conf \
-"
-
-RDEPENDS_${PN}-pulseaudio-conf += "\
-        libasound-module-conf-pulse \
-        libasound-module-ctl-pulse \
-        libasound-module-pcm-pulse \
-"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-plugins_1.1.4.bb b/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-plugins_1.1.4.bb
new file mode 100644
index 0000000..b7f79b7
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-plugins_1.1.4.bb
@@ -0,0 +1,103 @@
+SUMMARY = "ALSA Plugins"
+HOMEPAGE = "http://alsa-project.org"
+BUGTRACKER = "http://alsa-project.org/main/index.php/Bug_Tracking"
+SECTION = "multimedia"
+
+# The primary license of alsa-plugins is LGPLv2.1.
+#
+# m4/attributes.m4 is licensed under GPLv2+. m4/attributes.m4 is part of the
+# build system, and doesn't affect the licensing of the build result.
+#
+# The samplerate plugin source code is licensed under GPLv2+ to be consistent
+# with the libsamplerate license. However, if the licensee has a commercial
+# license for libsamplerate, the samplerate plugin may be used under the terms
+# of LGPLv2.1 like the rest of the plugins.
+LICENSE = "LGPLv2.1 & GPLv2+"
+LIC_FILES_CHKSUM = "\
+        file://COPYING;md5=7fbc338309ac38fefcd64b04bb903e34 \
+        file://COPYING.GPL;md5=94d55d512a9ba36caa9b7df079bae19f \
+        file://m4/attributes.m4;endline=33;md5=b25958da44c02231e3641f1bccef53eb \
+        file://rate/rate_samplerate.c;endline=35;md5=fd77bce85f4a338c0e8ab18430b69fae \
+"
+
+SRC_URI = "ftp://ftp.alsa-project.org/pub/plugins/${BP}.tar.bz2"
+SRC_URI[md5sum] = "de51130a7444b79b2dd3c25e28420754"
+SRC_URI[sha256sum] = "530d1c3bdaeb058f2a03607a33b9e16ee5369bfd30a96bc09bd2c69b4ddd1a8a"
+
+DEPENDS += "alsa-lib"
+
+inherit autotools pkgconfig
+
+PACKAGECONFIG ??= "\
+        samplerate \
+        speexdsp \
+        ${@bb.utils.filter('DISTRO_FEATURES', 'pulseaudio', d)} \
+"
+PACKAGECONFIG[avcodec] = "--enable-avcodec,--disable-avcodec,libav"
+PACKAGECONFIG[jack] = "--enable-jack,--disable-jack,jack"
+PACKAGECONFIG[maemo-plugin] = "--enable-maemo-plugin,--disable-maemo-plugin"
+PACKAGECONFIG[maemo-resource-manager] = "--enable-maemo-resource-manager,--disable-maemo-resource-manager,dbus"
+PACKAGECONFIG[pulseaudio] = "--enable-pulseaudio,--disable-pulseaudio,pulseaudio"
+PACKAGECONFIG[samplerate] = "--enable-samplerate,--disable-samplerate,libsamplerate0"
+PACKAGECONFIG[speexdsp] = "--with-speex=lib,--with-speex=no,speexdsp"
+
+PACKAGES += "${@bb.utils.contains('PACKAGECONFIG', 'pulseaudio', 'alsa-plugins-pulseaudio-conf', '', d)}"
+
+PACKAGES_DYNAMIC = "^libasound-module-.*"
+
+# The alsa-plugins package doesn't itself contain anything, it just depends on
+# all built plugins.
+ALLOW_EMPTY_${PN} = "1"
+
+do_install_append() {
+	rm ${D}${libdir}/alsa-lib/*.la
+
+	# We use the example as is, so just drop the .example suffix.
+	if [ "${@bb.utils.contains('PACKAGECONFIG', 'pulseaudio', 'yes', 'no', d)}" = "yes" ]; then
+		mv ${D}${datadir}/alsa/alsa.conf.d/99-pulseaudio-default.conf.example ${D}${datadir}/alsa/alsa.conf.d/99-pulseaudio-default.conf
+	fi
+}
+
+python populate_packages_prepend() {
+    plugindir = d.expand('${libdir}/alsa-lib/')
+    packages = " ".join(do_split_packages(d, plugindir, '^libasound_module_(.*)\.so$', 'libasound-module-%s', 'Alsa plugin for %s', extra_depends=''))
+    d.setVar("RDEPENDS_alsa-plugins", packages)
+}
+
+# The rate plugins create some symlinks. For example, the samplerate plugin
+# creates these links to the main plugin file:
+#
+#   libasound_module_rate_samplerate_best.so
+#   libasound_module_rate_samplerate_linear.so
+#   libasound_module_rate_samplerate_medium.so
+#   libasound_module_rate_samplerate_order.so
+#
+# The other rate plugins create similar links. We have to add the links to
+# FILES manually, because do_split_packages() skips the links (which is good,
+# because we wouldn't want do_split_packages() to create separate packages for
+# the symlinks).
+#
+# The symlinks cause QA errors, because usually it's a bug if a non
+# -dev/-dbg/-nativesdk package contains links to .so files, but in this case
+# the errors are false positives, so we disable the QA checks.
+FILES_${MLPREFIX}libasound-module-rate-lavcrate += "${libdir}/alsa-lib/*rate_lavcrate_*.so"
+FILES_${MLPREFIX}libasound-module-rate-samplerate += "${libdir}/alsa-lib/*rate_samplerate_*.so"
+FILES_${MLPREFIX}libasound-module-rate-speexrate += "${libdir}/alsa-lib/*rate_speexrate_*.so"
+INSANE_SKIP_${MLPREFIX}libasound-module-rate-lavcrate = "dev-so"
+INSANE_SKIP_${MLPREFIX}libasound-module-rate-samplerate = "dev-so"
+INSANE_SKIP_${MLPREFIX}libasound-module-rate-speexrate = "dev-so"
+
+# 50-pulseaudio.conf defines a device named "pulse" that applications can use
+# if they explicitly want to use the PulseAudio plugin.
+# 99-pulseaudio-default.conf configures the "default" device to use the
+# PulseAudio plugin.
+FILES_${PN}-pulseaudio-conf += "\
+        ${datadir}/alsa/alsa.conf.d/50-pulseaudio.conf \
+        ${datadir}/alsa/alsa.conf.d/99-pulseaudio-default.conf \
+"
+
+RDEPENDS_${PN}-pulseaudio-conf += "\
+        libasound-module-conf-pulse \
+        libasound-module-ctl-pulse \
+        libasound-module-pcm-pulse \
+"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-utils-scripts_1.1.3.bb b/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-utils-scripts_1.1.4.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-utils-scripts_1.1.3.bb
rename to import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-utils-scripts_1.1.4.bb
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-utils_1.1.3.bb b/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-utils_1.1.3.bb
deleted file mode 100644
index f374a17..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-utils_1.1.3.bb
+++ /dev/null
@@ -1,115 +0,0 @@
-SUMMARY = "ALSA sound utilities"
-HOMEPAGE = "http://www.alsa-project.org"
-BUGTRACKER = "http://alsa-project.org/main/index.php/Bug_Tracking"
-SECTION = "console/utils"
-LICENSE = "GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552 \
-                    file://alsactl/utils.c;beginline=1;endline=20;md5=fe9526b055e246b5558809a5ae25c0b9"
-DEPENDS = "alsa-lib ncurses libsamplerate0"
-
-PACKAGECONFIG ??= "udev"
-
-# alsabat can be built also without fftw support (with reduced functionality).
-# It would be better to always enable alsabat, but provide an option for
-# enabling/disabling fftw. The configure script doesn't support that, however
-# (at least in any obvious way), so for now we only support alsabat with fftw
-# or no alsabat at all.
-PACKAGECONFIG[bat] = "--enable-bat,--disable-bat,fftwf"
-
-PACKAGECONFIG[udev] = "--with-udev-rules-dir=`pkg-config --variable=udevdir udev`/rules.d,,udev"
-PACKAGECONFIG[manpages] = "--enable-xmlto, --disable-xmlto, xmlto-native docbook-xml-dtd4-native docbook-xsl-stylesheets-native"
-
-SRC_URI = "ftp://ftp.alsa-project.org/pub/utils/alsa-utils-${PV}.tar.bz2 \
-           file://0001-alsactl-don-t-let-systemd-unit-restore-the-volume-wh.patch \
-          "
-
-SRC_URI[md5sum] = "2bf94d3e3410dcc74bb0dae10d46a979"
-SRC_URI[sha256sum] = "127217a54eea0f9a49700a2f239a2d4f5384aa094d68df04a8eb80132eb6167c"
-
-# On build machines with python-docutils (not python3-docutils !!) installed
-# rst2man (not rst2man.py) is detected and compile fails with
-# | make[1]: *** No rule to make target 'alsaucm.1', needed by 'all-am'.  Stop.
-# Avoid this by disabling expicitly
-EXTRA_OECONF = "--disable-rst2man"
-
-# lazy hack. needs proper fixing in gettext.m4, see
-# http://bugs.openembedded.org/show_bug.cgi?id=2348
-# please close bug and remove this comment when properly fixed
-#
-EXTRA_OECONF_append_libc-uclibc = " --disable-nls"
-
-inherit autotools gettext pkgconfig manpages
-
-# This are all packages that we need to make. Also, the now empty alsa-utils
-# ipk depends on them.
-
-ALSA_UTILS_PKGS = "\
-             ${@bb.utils.contains('PACKAGECONFIG', 'bat', 'alsa-utils-alsabat', '', d)} \
-             alsa-utils-alsamixer \
-             alsa-utils-alsatplg \
-             alsa-utils-midi \
-             alsa-utils-aplay \
-             alsa-utils-amixer \
-             alsa-utils-aconnect \
-             alsa-utils-iecset \
-             alsa-utils-speakertest \
-             alsa-utils-aseqnet \
-             alsa-utils-aseqdump \
-             alsa-utils-alsactl \
-             alsa-utils-alsaloop \
-             alsa-utils-alsaucm \
-            "
-
-PACKAGES += "${ALSA_UTILS_PKGS}"
-RDEPENDS_${PN} += "${ALSA_UTILS_PKGS}"
-
-FILES_${PN} = ""
-FILES_alsa-utils-alsabat     = "${bindir}/alsabat"
-FILES_alsa-utils-alsatplg    = "${bindir}/alsatplg"
-FILES_alsa-utils-aplay       = "${bindir}/aplay ${bindir}/arecord"
-FILES_alsa-utils-amixer      = "${bindir}/amixer"
-FILES_alsa-utils-alsamixer   = "${bindir}/alsamixer"
-FILES_alsa-utils-speakertest = "${bindir}/speaker-test ${datadir}/sounds/alsa/ ${datadir}/alsa/speaker-test/"
-FILES_alsa-utils-midi        = "${bindir}/aplaymidi ${bindir}/arecordmidi ${bindir}/amidi"
-FILES_alsa-utils-aconnect    = "${bindir}/aconnect"
-FILES_alsa-utils-aseqnet     = "${bindir}/aseqnet"
-FILES_alsa-utils-iecset      = "${bindir}/iecset"
-FILES_alsa-utils-alsactl     = "${sbindir}/alsactl */udev/rules.d */*/udev/rules.d ${systemd_unitdir} ${localstatedir}/lib/alsa ${datadir}/alsa/init/"
-FILES_alsa-utils-aseqdump    = "${bindir}/aseqdump"
-FILES_alsa-utils-alsaloop    = "${bindir}/alsaloop"
-FILES_alsa-utils-alsaucm     = "${bindir}/alsaucm"
-
-SUMMARY_alsa-utils-alsabat      = "Command-line sound tester for ALSA sound card driver"
-SUMMARY_alsa-utils-alsatplg     = "Converts topology text files into binary format for kernel"
-SUMMARY_alsa-utils-aplay        = "Play (and record) sound files using ALSA"
-SUMMARY_alsa-utils-amixer       = "Command-line control for ALSA mixer and settings"
-SUMMARY_alsa-utils-alsamixer    = "ncurses-based control for ALSA mixer and settings"
-SUMMARY_alsa-utils-speakertest  = "ALSA surround speaker test utility"
-SUMMARY_alsa-utils-midi         = "Miscellaneous MIDI utilities for ALSA"
-SUMMARY_alsa-utils-aconnect     = "ALSA sequencer connection manager"
-SUMMARY_alsa-utils-aseqnet      = "Network client/server for ALSA sequencer"
-SUMMARY_alsa-utils-iecset       = "ALSA utility for setting/showing IEC958 (S/PDIF) status bits"
-SUMMARY_alsa-utils-alsactl      = "Saves/restores ALSA-settings in /etc/asound.state"
-SUMMARY_alsa-utils-aseqdump     = "Shows the events received at an ALSA sequencer port"
-SUMMARY_alsa-utils-alsaloop     = "ALSA PCM loopback utility"
-SUMMARY_alsa-utils-alsaucm      = "ALSA Use Case Manager"
-
-RRECOMMENDS_alsa-utils-alsactl = "alsa-states"
-
-ALLOW_EMPTY_alsa-utils = "1"
-
-do_install() {
-	autotools_do_install
-
-	# We don't ship this here because it requires a dependency on bash.
-	# See alsa-utils-scripts_${PV}.bb
-	rm ${D}${sbindir}/alsaconf
-	rm ${D}${sbindir}/alsa-info.sh
-	rm -f ${D}${sbindir}/alsabat-test.sh
-
-	if ${@bb.utils.contains('PACKAGECONFIG', 'udev', 'false', 'true', d)}; then
-		# This is where alsa-utils will install its rules if we don't tell it anything else.
-		rm -rf ${D}${nonarch_base_libdir}/udev
-		rmdir --ignore-fail-on-non-empty ${D}${nonarch_base_libdir}
-	fi
-}
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-utils_1.1.4.bb b/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-utils_1.1.4.bb
new file mode 100644
index 0000000..c8f4b86
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/alsa/alsa-utils_1.1.4.bb
@@ -0,0 +1,109 @@
+SUMMARY = "ALSA sound utilities"
+HOMEPAGE = "http://www.alsa-project.org"
+BUGTRACKER = "http://alsa-project.org/main/index.php/Bug_Tracking"
+SECTION = "console/utils"
+LICENSE = "GPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552 \
+                    file://alsactl/utils.c;beginline=1;endline=20;md5=fe9526b055e246b5558809a5ae25c0b9"
+DEPENDS = "alsa-lib ncurses libsamplerate0"
+
+PACKAGECONFIG ??= "udev"
+
+# alsabat can also be built without fftw support (with reduced functionality).
+# It would be better to always enable alsabat, but provide an option for
+# enabling/disabling fftw. The configure script doesn't support that, however
+# (at least in any obvious way), so for now we only support alsabat with fftw
+# or no alsabat at all.
+PACKAGECONFIG[bat] = "--enable-bat,--disable-bat,fftwf"
+
+PACKAGECONFIG[udev] = "--with-udev-rules-dir=`pkg-config --variable=udevdir udev`/rules.d,,udev"
+PACKAGECONFIG[manpages] = "--enable-xmlto, --disable-xmlto, xmlto-native docbook-xml-dtd4-native docbook-xsl-stylesheets-native"
+
+SRC_URI = "ftp://ftp.alsa-project.org/pub/utils/alsa-utils-${PV}.tar.bz2 \
+           file://0001-alsactl-don-t-let-systemd-unit-restore-the-volume-wh.patch \
+          "
+
+SRC_URI[md5sum] = "01e3934ca5bd22a80c27289d1b0adcdc"
+SRC_URI[sha256sum] = "a7831044de92c5bf33bf3365a3f36e49397f4191e934df460ae1ca15138c9d9d"
+
+# On build machines with python-docutils (not python3-docutils !!) installed
+# rst2man (not rst2man.py) is detected and compile fails with
+# | make[1]: *** No rule to make target 'alsaucm.1', needed by 'all-am'.  Stop.
+# Avoid this by disabling it explicitly
+EXTRA_OECONF = "--disable-rst2man"
+
+inherit autotools gettext pkgconfig manpages
+
+# These are all packages that we need to make. Also, the now empty alsa-utils
+# ipk depends on them.
+
+ALSA_UTILS_PKGS = "\
+             ${@bb.utils.contains('PACKAGECONFIG', 'bat', 'alsa-utils-alsabat', '', d)} \
+             alsa-utils-alsamixer \
+             alsa-utils-alsatplg \
+             alsa-utils-midi \
+             alsa-utils-aplay \
+             alsa-utils-amixer \
+             alsa-utils-aconnect \
+             alsa-utils-iecset \
+             alsa-utils-speakertest \
+             alsa-utils-aseqnet \
+             alsa-utils-aseqdump \
+             alsa-utils-alsactl \
+             alsa-utils-alsaloop \
+             alsa-utils-alsaucm \
+            "
+
+PACKAGES += "${ALSA_UTILS_PKGS}"
+RDEPENDS_${PN} += "${ALSA_UTILS_PKGS}"
+
+FILES_${PN} = ""
+FILES_alsa-utils-alsabat     = "${bindir}/alsabat"
+FILES_alsa-utils-alsatplg    = "${bindir}/alsatplg"
+FILES_alsa-utils-aplay       = "${bindir}/aplay ${bindir}/arecord"
+FILES_alsa-utils-amixer      = "${bindir}/amixer"
+FILES_alsa-utils-alsamixer   = "${bindir}/alsamixer"
+FILES_alsa-utils-speakertest = "${bindir}/speaker-test ${datadir}/sounds/alsa/ ${datadir}/alsa/speaker-test/"
+FILES_alsa-utils-midi        = "${bindir}/aplaymidi ${bindir}/arecordmidi ${bindir}/amidi"
+FILES_alsa-utils-aconnect    = "${bindir}/aconnect"
+FILES_alsa-utils-aseqnet     = "${bindir}/aseqnet"
+FILES_alsa-utils-iecset      = "${bindir}/iecset"
+FILES_alsa-utils-alsactl     = "${sbindir}/alsactl */udev/rules.d */*/udev/rules.d ${systemd_unitdir} ${localstatedir}/lib/alsa ${datadir}/alsa/init/"
+FILES_alsa-utils-aseqdump    = "${bindir}/aseqdump"
+FILES_alsa-utils-alsaloop    = "${bindir}/alsaloop"
+FILES_alsa-utils-alsaucm     = "${bindir}/alsaucm"
+
+SUMMARY_alsa-utils-alsabat      = "Command-line sound tester for ALSA sound card driver"
+SUMMARY_alsa-utils-alsatplg     = "Converts topology text files into binary format for kernel"
+SUMMARY_alsa-utils-aplay        = "Play (and record) sound files using ALSA"
+SUMMARY_alsa-utils-amixer       = "Command-line control for ALSA mixer and settings"
+SUMMARY_alsa-utils-alsamixer    = "ncurses-based control for ALSA mixer and settings"
+SUMMARY_alsa-utils-speakertest  = "ALSA surround speaker test utility"
+SUMMARY_alsa-utils-midi         = "Miscellaneous MIDI utilities for ALSA"
+SUMMARY_alsa-utils-aconnect     = "ALSA sequencer connection manager"
+SUMMARY_alsa-utils-aseqnet      = "Network client/server for ALSA sequencer"
+SUMMARY_alsa-utils-iecset       = "ALSA utility for setting/showing IEC958 (S/PDIF) status bits"
+SUMMARY_alsa-utils-alsactl      = "Saves/restores ALSA-settings in /etc/asound.state"
+SUMMARY_alsa-utils-aseqdump     = "Shows the events received at an ALSA sequencer port"
+SUMMARY_alsa-utils-alsaloop     = "ALSA PCM loopback utility"
+SUMMARY_alsa-utils-alsaucm      = "ALSA Use Case Manager"
+
+RRECOMMENDS_alsa-utils-alsactl = "alsa-states"
+
+ALLOW_EMPTY_alsa-utils = "1"
+
+do_install() {
+	autotools_do_install
+
+	# We don't ship this here because it requires a dependency on bash.
+	# See alsa-utils-scripts_${PV}.bb
+	rm ${D}${sbindir}/alsaconf
+	rm ${D}${sbindir}/alsa-info.sh
+	rm -f ${D}${sbindir}/alsabat-test.sh
+
+	if ${@bb.utils.contains('PACKAGECONFIG', 'udev', 'false', 'true', d)}; then
+		# This is where alsa-utils will install its rules if we don't tell it anything else.
+		rm -rf ${D}${nonarch_base_libdir}/udev
+		rmdir --ignore-fail-on-non-empty ${D}${nonarch_base_libdir}
+	fi
+}
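The udev clean-up at the end of do_install relies on bb.utils.contains('PACKAGECONFIG', 'udev', 'false', 'true', d) expanding to a shell command: with udev enabled it becomes `false`, the `if` body is skipped and the rules stay installed; without udev it becomes `true` and the stray ${nonarch_base_libdir}/udev tree is removed. A rough Python sketch of that selection (not BitBake's real implementation, and the PACKAGECONFIG value is just an example):

    def contains(value, item, truevalue, falsevalue):
        # simplified stand-in for bb.utils.contains()
        return truevalue if item in value.split() else falsevalue

    packageconfig = "udev bat"                 # example feature set

    # 'false'/'true' are deliberately swapped in the recipe, so the shell
    # runs the removal branch only when udev support is *disabled*.
    print(contains(packageconfig, "udev", "false", "true"))   # -> false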
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/0001-build-fix-for-mips.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/0001-build-fix-for-mips.patch
new file mode 100644
index 0000000..3f8224a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/0001-build-fix-for-mips.patch
@@ -0,0 +1,44 @@
+From f34c567045bea5a7ded9bcfa8e785cfd24cc7dde Mon Sep 17 00:00:00 2001
+From: Shivraj Patil <shivraj.patil@imgtec.com>
+Date: Tue, 4 Apr 2017 18:56:01 +0530
+Subject: [PATCH] build fix for mips
+
+Signed-off-by: Shivraj Patil <shivraj.patil@imgtec.com>
+Signed-off-by: Ronald S. Bultje <rsbultje@gmail.com>
+---
+Upstream-Status: Backport
+
+ libavcodec/mips/hevcpred_init_mips.c | 3 ++-
+ libavcodec/mips/hevcpred_msa.c       | 2 +-
+ 2 files changed, 3 insertions(+), 2 deletions(-)
+
+diff --git a/libavcodec/mips/hevcpred_init_mips.c b/libavcodec/mips/hevcpred_init_mips.c
+index 331cfac115..e987698d66 100644
+--- a/libavcodec/mips/hevcpred_init_mips.c
++++ b/libavcodec/mips/hevcpred_init_mips.c
+@@ -18,7 +18,8 @@
+  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+  */
+ 
+-#include "libavcodec/hevc.h"
++#include "config.h"
++#include "libavutil/attributes.h"
+ #include "libavcodec/mips/hevcpred_mips.h"
+ 
+ #if HAVE_MSA
+diff --git a/libavcodec/mips/hevcpred_msa.c b/libavcodec/mips/hevcpred_msa.c
+index 6a3b2815fd..963c64c861 100644
+--- a/libavcodec/mips/hevcpred_msa.c
++++ b/libavcodec/mips/hevcpred_msa.c
+@@ -18,7 +18,7 @@
+  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+  */
+ 
+-#include "libavcodec/hevc.h"
++#include "libavcodec/hevcdec.h"
+ #include "libavutil/mips/generic_macros_msa.h"
+ #include "hevcpred_mips.h"
+ 
+-- 
+2.13.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14054.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14054.patch
new file mode 100644
index 0000000..e8baa188
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14054.patch
@@ -0,0 +1,39 @@
+From 124eb202e70678539544f6268efc98131f19fa49 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?=E5=AD=99=E6=B5=A9=20and=20=E5=BC=A0=E6=B4=AA=E4=BA=AE=28?=
+ =?UTF-8?q?=E6=9C=9B=E5=88=9D=29?= <tony.sh and wangchu.zhl@alibaba-inc.com>
+Date: Fri, 25 Aug 2017 01:15:28 +0200
+Subject: [PATCH] avformat/rmdec: Fix DoS due to lack of eof check
+
+Fixes: loop.ivr
+
+Found-by: Xiaohei and Wangchu from Alibaba Security Team
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2017-14054
+Upstream-Status: Backport
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ libavformat/rmdec.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/libavformat/rmdec.c b/libavformat/rmdec.c
+index 178eaea..d6d7d9c 100644
+--- a/libavformat/rmdec.c
++++ b/libavformat/rmdec.c
+@@ -1223,8 +1223,11 @@ static int ivr_read_header(AVFormatContext *s)
+             av_log(s, AV_LOG_DEBUG, "%s = '%s'\n", key, val);
+         } else if (type == 4) {
+             av_log(s, AV_LOG_DEBUG, "%s = '0x", key);
+-            for (j = 0; j < len; j++)
++            for (j = 0; j < len; j++) {
++                if (avio_feof(pb))
++                    return AVERROR_INVALIDDATA;
+                 av_log(s, AV_LOG_DEBUG, "%X", avio_r8(pb));
++            }
+             av_log(s, AV_LOG_DEBUG, "'\n");
+         } else if (len == 4 && type == 3 && !strncmp(key, "StreamCount", tlen)) {
+             nb_streams = value = avio_rb32(pb);
+-- 
+2.1.0
+
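This patch and the CVE-2017-14055/14056/14057/14059 backports below all apply the same pattern: a header-supplied count drives a read loop, so once the input hits end-of-file the loop has to bail out instead of consuming padding until the count is exhausted. A small Python sketch of the guarded loop, with plain file I/O standing in for FFmpeg's avio layer and the record format invented for illustration:

    import struct

    def read_offsets(f, count):
        # Read `count` little-endian 32-bit offsets, refusing truncated input.
        offsets = []
        for _ in range(count):                 # count comes from the file header
            chunk = f.read(4)
            if len(chunk) < 4:                 # EOF reached early: reject the file
                raise ValueError("truncated input")
            offsets.append(struct.unpack("<I", chunk)[0])
        return offsets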
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14055.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14055.patch
new file mode 100644
index 0000000..37d0d1a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14055.patch
@@ -0,0 +1,34 @@
+From 4f05e2e2dc1a89f38cd9f0960a6561083d714f1e Mon Sep 17 00:00:00 2001
+From: Michael Niedermayer <michael@niedermayer.cc>
+Date: Fri, 25 Aug 2017 01:15:30 +0200
+Subject: [PATCH] avformat/mvdec: Fix DoS due to lack of eof check
+
+Fixes: loop.mv
+
+Found-by: Xiaohei and Wangchu from Alibaba Security Team
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2017-14055
+Upstream-Status: Backport
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ libavformat/mvdec.c | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/libavformat/mvdec.c b/libavformat/mvdec.c
+index 0e12c8c..f7aa4cb 100644
+--- a/libavformat/mvdec.c
++++ b/libavformat/mvdec.c
+@@ -342,6 +342,8 @@ static int mv_read_header(AVFormatContext *avctx)
+             uint32_t pos   = avio_rb32(pb);
+             uint32_t asize = avio_rb32(pb);
+             uint32_t vsize = avio_rb32(pb);
++            if (avio_feof(pb))
++                return AVERROR_INVALIDDATA;
+             avio_skip(pb, 8);
+             av_add_index_entry(ast, pos, timestamp, asize, 0, AVINDEX_KEYFRAME);
+             av_add_index_entry(vst, pos + asize, i, vsize, 0, AVINDEX_KEYFRAME);
+-- 
+2.1.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14056.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14056.patch
new file mode 100644
index 0000000..088b357
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14056.patch
@@ -0,0 +1,51 @@
+From 96f24d1bee7fe7bac08e2b7c74db1a046c9dc0de Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?=E5=AD=99=E6=B5=A9=20and=20=E5=BC=A0=E6=B4=AA=E4=BA=AE=28?=
+ =?UTF-8?q?=E6=9C=9B=E5=88=9D=29?= <tony.sh and wangchu.zhl@alibaba-inc.com>
+Date: Fri, 25 Aug 2017 01:15:29 +0200
+Subject: [PATCH] avformat/rl2: Fix DoS due to lack of eof check
+
+Fixes: loop.rl2
+
+Found-by: Xiaohei and Wangchu from Alibaba Security Team
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2017-14056
+Upstream-Status: Backport
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ libavformat/rl2.c | 15 ++++++++++++---
+ 1 file changed, 12 insertions(+), 3 deletions(-)
+
+diff --git a/libavformat/rl2.c b/libavformat/rl2.c
+index 0bec8f1..eb1682d 100644
+--- a/libavformat/rl2.c
++++ b/libavformat/rl2.c
+@@ -170,12 +170,21 @@ static av_cold int rl2_read_header(AVFormatContext *s)
+     }
+ 
+     /** read offset and size tables */
+-    for(i=0; i < frame_count;i++)
++    for(i=0; i < frame_count;i++) {
++        if (avio_feof(pb))
++            return AVERROR_INVALIDDATA;
+         chunk_size[i] = avio_rl32(pb);
+-    for(i=0; i < frame_count;i++)
++    }
++    for(i=0; i < frame_count;i++) {
++        if (avio_feof(pb))
++            return AVERROR_INVALIDDATA;
+         chunk_offset[i] = avio_rl32(pb);
+-    for(i=0; i < frame_count;i++)
++    }
++    for(i=0; i < frame_count;i++) {
++        if (avio_feof(pb))
++            return AVERROR_INVALIDDATA;
+         audio_size[i] = avio_rl32(pb) & 0xFFFF;
++    }
+ 
+     /** build the sample index */
+     for(i=0;i<frame_count;i++){
+-- 
+2.1.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14057.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14057.patch
new file mode 100644
index 0000000..b301d23
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14057.patch
@@ -0,0 +1,44 @@
+From 7f9ec5593e04827249e7aeb466da06a98a0d7329 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?=E5=AD=99=E6=B5=A9=20and=20=E5=BC=A0=E6=B4=AA=E4=BA=AE=28?=
+ =?UTF-8?q?=E6=9C=9B=E5=88=9D=29?= <tony.sh and wangchu.zhl@alibaba-inc.com>
+Date: Fri, 25 Aug 2017 12:37:25 +0200
+Subject: [PATCH] avformat/asfdec: Fix DoS due to lack of eof check
+
+Fixes: loop.asf
+
+Found-by: Xiaohei and Wangchu from Alibaba Security Team
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2017-14057
+Upstream-Status: Backport
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ libavformat/asfdec_f.c | 6 ++++--
+ 1 file changed, 4 insertions(+), 2 deletions(-)
+
+diff --git a/libavformat/asfdec_f.c b/libavformat/asfdec_f.c
+index be09a92..f3acbae 100644
+--- a/libavformat/asfdec_f.c
++++ b/libavformat/asfdec_f.c
+@@ -749,13 +749,15 @@ static int asf_read_marker(AVFormatContext *s, int64_t size)
+     count = avio_rl32(pb);    // markers count
+     avio_rl16(pb);            // reserved 2 bytes
+     name_len = avio_rl16(pb); // name length
+-    for (i = 0; i < name_len; i++)
+-        avio_r8(pb); // skip the name
++    avio_skip(pb, name_len);
+ 
+     for (i = 0; i < count; i++) {
+         int64_t pres_time;
+         int name_len;
+ 
++        if (avio_feof(pb))
++            return AVERROR_INVALIDDATA;
++
+         avio_rl64(pb);             // offset, 8 bytes
+         pres_time = avio_rl64(pb); // presentation time
+         pres_time -= asf->hdr.preroll * 10000;
+-- 
+2.1.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14058.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14058.patch
new file mode 100644
index 0000000..95803ce
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14058.patch
@@ -0,0 +1,94 @@
+From 7ec414892ddcad88313848494b6fc5f437c9ca4a Mon Sep 17 00:00:00 2001
+From: Michael Niedermayer <michael@niedermayer.cc>
+Date: Sat, 26 Aug 2017 01:26:58 +0200
+Subject: [PATCH] avformat/hls: Fix DoS due to infinite loop
+
+Fixes: loop.m3u
+
+The default max iteration count of 1000 is arbitrary and ideas for a better solution are welcome
+
+Found-by: Xiaohei and Wangchu from Alibaba Security Team
+
+Previous version reviewed-by: Steven Liu <lingjiujianke@gmail.com>
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2017-14058
+Upstream-Status: Backport
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ doc/demuxers.texi | 18 ++++++++++++++++++
+ libavformat/hls.c |  7 +++++++
+ 2 files changed, 25 insertions(+)
+
+diff --git a/doc/demuxers.texi b/doc/demuxers.texi
+index 29a23d4..73dc0fe 100644
+--- a/doc/demuxers.texi
++++ b/doc/demuxers.texi
+@@ -300,6 +300,24 @@ used to end the output video at the length of the shortest input file,
+ which in this case is @file{input.mp4} as the GIF in this example loops
+ infinitely.
+ 
++@section hls
++
++HLS demuxer
++
++It accepts the following options:
++
++@table @option
++@item live_start_index
++segment index to start live streams at (negative values are from the end).
++
++@item allowed_extensions
++',' separated list of file extensions that hls is allowed to access.
++
++@item max_reload
++Maximum number of times a insufficient list is attempted to be reloaded.
++Default value is 1000.
++@end table
++
+ @section image2
+ 
+ Image file demuxer.
+diff --git a/libavformat/hls.c b/libavformat/hls.c
+index 01731bd..0995345 100644
+--- a/libavformat/hls.c
++++ b/libavformat/hls.c
+@@ -205,6 +205,7 @@ typedef struct HLSContext {
+     AVDictionary *avio_opts;
+     int strict_std_compliance;
+     char *allowed_extensions;
++    int max_reload;
+ } HLSContext;
+ 
+ static int read_chomp_line(AVIOContext *s, char *buf, int maxlen)
+@@ -1263,6 +1264,7 @@ static int read_data(void *opaque, uint8_t *buf, int buf_size)
+     HLSContext *c = v->parent->priv_data;
+     int ret, i;
+     int just_opened = 0;
++    int reload_count = 0;
+ 
+ restart:
+     if (!v->needed)
+@@ -1294,6 +1296,9 @@ restart:
+         reload_interval = default_reload_interval(v);
+ 
+ reload:
++        reload_count++;
++        if (reload_count > c->max_reload)
++            return AVERROR_EOF;
+         if (!v->finished &&
+             av_gettime_relative() - v->last_load_time >= reload_interval) {
+             if ((ret = parse_playlist(c, v->url, v, NULL)) < 0) {
+@@ -2150,6 +2155,8 @@ static const AVOption hls_options[] = {
+         OFFSET(allowed_extensions), AV_OPT_TYPE_STRING,
+         {.str = "3gp,aac,avi,flac,mkv,m3u8,m4a,m4s,m4v,mpg,mov,mp2,mp3,mp4,mpeg,mpegts,ogg,ogv,oga,ts,vob,wav"},
+         INT_MIN, INT_MAX, FLAGS},
++    {"max_reload", "Maximum number of times a insufficient list is attempted to be reloaded",
++        OFFSET(max_reload), AV_OPT_TYPE_INT, {.i64 = 1000}, 0, INT_MAX, FLAGS},
+     {NULL}
+ };
+ 
+-- 
+2.1.0
+
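Unlike the eof-check fixes, the hls problem is a playlist that never advances, so the patch adds a max_reload option (default 1000) and gives up with AVERROR_EOF once an insufficient playlist has been reloaded that many times. A short Python sketch of the same bounded-retry idea, with the reload callback purely hypothetical:

    MAX_RELOAD = 1000                          # same default the new option uses

    def fetch_segments(reload_playlist):
        reload_count = 0
        while True:
            segments = reload_playlist()       # hypothetical callback
            if segments:
                return segments
            reload_count += 1
            if reload_count > MAX_RELOAD:      # stop instead of looping forever
                raise EOFError("playlist never produced new segments")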
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14059.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14059.patch
new file mode 100644
index 0000000..34fde0b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14059.patch
@@ -0,0 +1,40 @@
+From 7e80b63ecd259d69d383623e75b318bf2bd491f6 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?=E5=AD=99=E6=B5=A9=20and=20=E5=BC=A0=E6=B4=AA=E4=BA=AE=28?=
+ =?UTF-8?q?=E6=9C=9B=E5=88=9D=29?= <tony.sh and wangchu.zhl@alibaba-inc.com>
+Date: Fri, 25 Aug 2017 01:15:27 +0200
+Subject: [PATCH] avformat/cinedec: Fix DoS due to lack of eof check
+
+Fixes: loop.cine
+
+Found-by: Xiaohei and Wangchu from Alibaba Security Team
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2017-14059
+Upstream-Status: Backport
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ libavformat/cinedec.c | 6 +++++-
+ 1 file changed, 5 insertions(+), 1 deletion(-)
+
+diff --git a/libavformat/cinedec.c b/libavformat/cinedec.c
+index 763b93b..de34fb9 100644
+--- a/libavformat/cinedec.c
++++ b/libavformat/cinedec.c
+@@ -267,8 +267,12 @@ static int cine_read_header(AVFormatContext *avctx)
+ 
+     /* parse image offsets */
+     avio_seek(pb, offImageOffsets, SEEK_SET);
+-    for (i = 0; i < st->duration; i++)
++    for (i = 0; i < st->duration; i++) {
++        if (avio_feof(pb))
++            return AVERROR_INVALIDDATA;
++
+         av_add_index_entry(st, avio_rl64(pb), i, 0, 0, AVINDEX_KEYFRAME);
++    }
+ 
+     return 0;
+ }
+-- 
+2.1.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14169.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14169.patch
new file mode 100644
index 0000000..e1284fa
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14169.patch
@@ -0,0 +1,39 @@
+From 9d00fb9d70ee8c0cc7002b89318c5be00f1bbdad Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?=E5=AD=99=E6=B5=A9=28=E6=99=93=E9=BB=91=29?=
+ <tony.sh@alibaba-inc.com>
+Date: Tue, 29 Aug 2017 23:59:21 +0200
+Subject: [PATCH] avformat/mxfdec: Fix Sign error in mxf_read_primer_pack()
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Fixes: 20170829B.mxf
+
+Co-Author: 张洪亮(望初)" <wangchu.zhl@alibaba-inc.com>
+Found-by: Xiaohei and Wangchu from Alibaba Security Team
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2017-14169
+Upstream-Status: Backport
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ libavformat/mxfdec.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/libavformat/mxfdec.c b/libavformat/mxfdec.c
+index 6adb77d..91731a7 100644
+--- a/libavformat/mxfdec.c
++++ b/libavformat/mxfdec.c
+@@ -500,7 +500,7 @@ static int mxf_read_primer_pack(void *arg, AVIOContext *pb, int tag, int size, U
+         avpriv_request_sample(pb, "Primer pack item length %d", item_len);
+         return AVERROR_PATCHWELCOME;
+     }
+-    if (item_num > 65536) {
++    if (item_num > 65536 || item_num < 0) {
+         av_log(mxf->fc, AV_LOG_ERROR, "item_num %d is too large\n", item_num);
+         return AVERROR_INVALIDDATA;
+     }
+-- 
+2.1.0
+
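The mxf_read_primer_pack() fix closes a sign hole: item_num comes off the wire via avio_rb32() but is held in a signed int, so a crafted count can wrap negative, pass the existing `> 65536` bound and later feed an allocation. A short Python illustration of how the same four bytes behave under unsigned versus signed interpretation (the byte values are just an example):

    import struct

    raw = b"\xff\xff\xff\xf0"                  # example bytes from a crafted file

    as_unsigned = struct.unpack(">I", raw)[0]  # 4294967280
    as_signed   = struct.unpack(">i", raw)[0]  # -16, once stored in a signed int

    print(as_unsigned > 65536)                 # True  -> old bound would catch it
    print(as_signed   > 65536)                 # False -> slips past; hence the new < 0 test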
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14170.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14170.patch
new file mode 100644
index 0000000..8860125
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14170.patch
@@ -0,0 +1,49 @@
+From 900f39692ca0337a98a7cf047e4e2611071810c2 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?=E5=AD=99=E6=B5=A9=28=E6=99=93=E9=BB=91=29?=
+ <tony.sh@alibaba-inc.com>
+Date: Tue, 29 Aug 2017 23:59:21 +0200
+Subject: [PATCH] avformat/mxfdec: Fix DoS issues in
+ mxf_read_index_entry_array()
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Fixes: 20170829A.mxf
+
+Co-Author: 张洪亮(望初)" <wangchu.zhl@alibaba-inc.com>
+Found-by: Xiaohei and Wangchu from Alibaba Security Team
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2017-14170
+Upstream-Status: Backport
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ libavformat/mxfdec.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/libavformat/mxfdec.c b/libavformat/mxfdec.c
+index f8d0f9e..6adb77d 100644
+--- a/libavformat/mxfdec.c
++++ b/libavformat/mxfdec.c
+@@ -899,6 +899,8 @@ static int mxf_read_index_entry_array(AVIOContext *pb, MXFIndexTableSegment *seg
+     segment->nb_index_entries = avio_rb32(pb);
+ 
+     length = avio_rb32(pb);
++    if(segment->nb_index_entries && length < 11)
++        return AVERROR_INVALIDDATA;
+ 
+     if (!(segment->temporal_offset_entries=av_calloc(segment->nb_index_entries, sizeof(*segment->temporal_offset_entries))) ||
+         !(segment->flag_entries          = av_calloc(segment->nb_index_entries, sizeof(*segment->flag_entries))) ||
+@@ -909,6 +911,8 @@ static int mxf_read_index_entry_array(AVIOContext *pb, MXFIndexTableSegment *seg
+     }
+ 
+     for (i = 0; i < segment->nb_index_entries; i++) {
++        if(avio_feof(pb))
++            return AVERROR_INVALIDDATA;
+         segment->temporal_offset_entries[i] = avio_r8(pb);
+         avio_r8(pb);                                        /* KeyFrameOffset */
+         segment->flag_entries[i] = avio_r8(pb);
+-- 
+2.1.0
+
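
The shape of this fix repeats in the three backports that follow (CVE-2017-14171, CVE-2017-14222, CVE-2017-14223): an entry count comes from the container, arrays are allocated for it, and the read loop has to stop as soon as the stream hits EOF, otherwise a tiny file that advertises a huge count keeps the demuxer spinning over end-of-stream reads. A minimal stdio-based illustration of the pattern, with plain FILE I/O standing in for AVIOContext and hypothetical names:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* nb_entries comes from the file, so it cannot be trusted; fgetc()/EOF
     * plays the role of the avio_feof() checks added in these backports. */
    static int read_index(FILE *f, uint32_t nb_entries, uint8_t **entries_out)
    {
        uint8_t *entries = calloc(nb_entries, sizeof(*entries));
        if (!entries)
            return -1;

        for (uint32_t i = 0; i < nb_entries; i++) {
            int c = fgetc(f);
            if (c == EOF) {            /* stop instead of looping past end of stream */
                free(entries);
                return -1;
            }
            entries[i] = (uint8_t)c;
        }
        *entries_out = entries;
        return 0;
    }

    int main(void)
    {
        FILE *f = tmpfile();           /* empty stream: EOF on the first read */
        uint8_t *entries = NULL;

        if (!f)
            return 1;
        /* A huge advertised count against an empty stream now fails fast. */
        printf("read_index: %d\n", read_index(f, 1u << 20, &entries));
        fclose(f);
        return 0;
    }

The CVE-2017-14222 variant below additionally resets the partially built table on the error path (item_count back to 0, items freed), so the stored count never disagrees with the freed array.
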
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14171.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14171.patch
new file mode 100644
index 0000000..e2ae204
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14171.patch
@@ -0,0 +1,44 @@
+From c24bcb553650b91e9eff15ef6e54ca73de2453b7 Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?=E5=AD=99=E6=B5=A9=28=E6=99=93=E9=BB=91=29?=
+ <tony.sh@alibaba-inc.com>
+Date: Tue, 29 Aug 2017 23:59:21 +0200
+Subject: [PATCH] avformat/nsvdec: Fix DoS due to lack of eof check in
+ nsvs_file_offset loop.
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+Fixes: 20170829.nsv
+
+Co-Author: 张洪亮(望初)" <wangchu.zhl@alibaba-inc.com>
+Found-by: Xiaohei and Wangchu from Alibaba Security Team
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2017-14171
+Upstream-Status: Backport
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ libavformat/nsvdec.c | 5 ++++-
+ 1 file changed, 4 insertions(+), 1 deletion(-)
+
+diff --git a/libavformat/nsvdec.c b/libavformat/nsvdec.c
+index c6ddb67..d8ce656 100644
+--- a/libavformat/nsvdec.c
++++ b/libavformat/nsvdec.c
+@@ -335,8 +335,11 @@ static int nsv_parse_NSVf_header(AVFormatContext *s)
+         if (!nsv->nsvs_file_offset)
+             return AVERROR(ENOMEM);
+ 
+-        for(i=0;i<table_entries_used;i++)
++        for(i=0;i<table_entries_used;i++) {
++            if (avio_feof(pb))
++                return AVERROR_INVALIDDATA;
+             nsv->nsvs_file_offset[i] = avio_rl32(pb) + size;
++        }
+ 
+         if(table_entries > table_entries_used &&
+            avio_rl32(pb) == MKTAG('T','O','C','2')) {
+-- 
+2.1.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14222.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14222.patch
new file mode 100644
index 0000000..ee02037
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14222.patch
@@ -0,0 +1,40 @@
+From 9cb4eb772839c5e1de2855d126bf74ff16d13382 Mon Sep 17 00:00:00 2001
+From: Michael Niedermayer <michael@niedermayer.cc>
+Date: Tue, 5 Sep 2017 00:16:29 +0200
+Subject: [PATCH] avformat/mov: Fix DoS in read_tfra()
+
+Fixes: Missing EOF check in loop
+No testcase
+
+Found-by: Xiaohei and Wangchu from Alibaba Security Team
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2017-14222
+Upstream-Status: Backport
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ libavformat/mov.c | 7 +++++++
+ 1 file changed, 7 insertions(+)
+
+diff --git a/libavformat/mov.c b/libavformat/mov.c
+index 994e9c6..2519707 100644
+--- a/libavformat/mov.c
++++ b/libavformat/mov.c
+@@ -6094,6 +6094,13 @@ static int read_tfra(MOVContext *mov, AVIOContext *f)
+     }
+     for (i = 0; i < index->item_count; i++) {
+         int64_t time, offset;
++
++        if (avio_feof(f)) {
++            index->item_count = 0;
++            av_freep(&index->items);
++            return AVERROR_INVALIDDATA;
++        }
++
+         if (version == 1) {
+             time   = avio_rb64(f);
+             offset = avio_rb64(f);
+-- 
+2.1.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14223.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14223.patch
new file mode 100644
index 0000000..d1fef6b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14223.patch
@@ -0,0 +1,38 @@
+From afc9c683ed9db01edb357bc8c19edad4282b3a97 Mon Sep 17 00:00:00 2001
+From: Michael Niedermayer <michael@niedermayer.cc>
+Date: Tue, 5 Sep 2017 00:16:29 +0200
+Subject: [PATCH] avformat/asfdec: Fix DoS in asf_build_simple_index()
+
+Fixes: Missing EOF check in loop
+No testcase
+
+Found-by: Xiaohei and Wangchu from Alibaba Security Team
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2017-14223
+Upstream-Status: Backport
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ libavformat/asfdec_f.c | 5 +++++
+ 1 file changed, 5 insertions(+)
+
+diff --git a/libavformat/asfdec_f.c b/libavformat/asfdec_f.c
+index f3acbae..cc648b9 100644
+--- a/libavformat/asfdec_f.c
++++ b/libavformat/asfdec_f.c
+@@ -1610,6 +1610,11 @@ static int asf_build_simple_index(AVFormatContext *s, int stream_index)
+             int64_t pos       = s->internal->data_offset + s->packet_size * (int64_t)pktnum;
+             int64_t index_pts = FFMAX(av_rescale(itime, i, 10000) - asf->hdr.preroll, 0);
+ 
++            if (avio_feof(s->pb)) {
++                ret = AVERROR_INVALIDDATA;
++                goto end;
++            }
++
+             if (pos != last_pos) {
+                 av_log(s, AV_LOG_DEBUG, "pktnum:%d, pktct:%d  pts: %"PRId64"\n",
+                        pktnum, pktct, index_pts);
+-- 
+2.1.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14225.patch b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14225.patch
new file mode 100644
index 0000000..ce6845e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg/CVE-2017-14225.patch
@@ -0,0 +1,49 @@
+Subject: [PATCH] ffprobe: Fix null pointer dereference with color primaries
+
+Found-by: AD-lab of venustech
+Signed-off-by: Michael Niedermayer <michael@niedermayer.cc>
+
+CVE: CVE-2017-14225
+Upstream-Status: Backport
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+---
+ ffprobe.c | 15 +++++++++++----
+ 1 file changed, 11 insertions(+), 4 deletions(-)
+
+diff --git a/ffprobe.c b/ffprobe.c
+index a219fc1..df22b30 100644
+--- a/ffprobe.c
++++ b/ffprobe.c
+@@ -1899,6 +1899,16 @@ static void print_pkt_side_data(WriterContext *w,
+     writer_print_section_footer(w);
+ }
+ 
++static void print_primaries(WriterContext *w, enum AVColorPrimaries color_primaries)
++{
++    const char *val = av_color_primaries_name(color_primaries);
++    if (!val || color_primaries == AVCOL_PRI_UNSPECIFIED) {
++	print_str_opt("color_primaries", "unknown");
++    } else {
++	print_str("color_primaries", val);
++    }
++}
++
+ static void clear_log(int need_lock)
+ {
+     int i;
+@@ -2420,10 +2430,7 @@ static int show_stream(WriterContext *w, AVFormatContext *fmt_ctx, int stream_id
+         else
+             print_str_opt("color_transfer", av_color_transfer_name(par->color_trc));
+ 
+-        if (par->color_primaries != AVCOL_PRI_UNSPECIFIED)
+-            print_str("color_primaries", av_color_primaries_name(par->color_primaries));
+-        else
+-            print_str_opt("color_primaries", av_color_primaries_name(par->color_primaries));
++        print_primaries(w, par->color_primaries);
+ 
+         if (par->chroma_location != AVCHROMA_LOC_UNSPECIFIED)
+             print_str("chroma_location", av_chroma_location_name(par->chroma_location));
+-- 
+2.1.0
+
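
The ffprobe change is a different failure mode: av_color_primaries_name() returns NULL for enum values it does not know, and the old code printed the result whenever the value was anything other than "unspecified", so an out-of-range value in a crafted stream dereferenced NULL. The backport funnels both the NULL and the "unspecified" cases through one helper; a small sketch of that guard with invented names, not the real ffprobe writer API:

    #include <stdio.h>

    /* Invented enum and lookup standing in for AVColorPrimaries and
     * av_color_primaries_name(); the real function returns NULL for
     * values outside its table, which is the case that used to crash. */
    enum { PRI_BT709 = 1, PRI_UNSPECIFIED = 2 };

    static const char *primaries_name(int p)
    {
        switch (p) {
        case PRI_BT709:       return "bt709";
        case PRI_UNSPECIFIED: return "unspecified";
        default:              return NULL;
        }
    }

    static void print_primaries(int p)
    {
        const char *val = primaries_name(p);

        if (!val || p == PRI_UNSPECIFIED)
            printf("color_primaries=unknown\n");  /* placeholder, like print_str_opt() */
        else
            printf("color_primaries=%s\n", val);
    }

    int main(void)
    {
        print_primaries(PRI_BT709);  /* known value */
        print_primaries(200);        /* out-of-range value: no NULL dereference */
        return 0;
    }
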
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg_3.2.4.bb b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg_3.2.4.bb
deleted file mode 100644
index 3216f8e..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg_3.2.4.bb
+++ /dev/null
@@ -1,147 +0,0 @@
-SUMMARY = "A complete, cross-platform solution to record, convert and stream audio and video."
-DESCRIPTION = "FFmpeg is the leading multimedia framework, able to decode, encode, transcode, \
-               mux, demux, stream, filter and play pretty much anything that humans and machines \
-               have created. It supports the most obscure ancient formats up to the cutting edge."
-HOMEPAGE = "https://www.ffmpeg.org/"
-SECTION = "libs"
-
-LICENSE = "BSD & GPLv2+ & LGPLv2.1+ & MIT"
-LICENSE_${PN} = "GPLv2+"
-LICENSE_libavcodec = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
-LICENSE_libavdevice = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
-LICENSE_libavfilter = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
-LICENSE_libavformat = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
-LICENSE_libavresample = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
-LICENSE_libavutil = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
-LICENSE_libpostproc = "GPLv2+"
-LICENSE_libswresample = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
-LICENSE_libswscale = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
-LICENSE_FLAGS = "commercial"
-
-LIC_FILES_CHKSUM = "file://COPYING.GPLv2;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://COPYING.GPLv3;md5=d32239bcb673463ab874e80d47fae504 \
-                    file://COPYING.LGPLv2.1;md5=bd7a443320af8c812e4c18d1b79df004 \
-                    file://COPYING.LGPLv3;md5=e6a600fd5e1d9cbde2d983680233ad02"
-
-SRC_URI = "https://www.ffmpeg.org/releases/${BP}.tar.xz \
-           file://mips64_cpu_detection.patch \
-          "
-SRC_URI[md5sum] = "39fd71024ac76ba35f04397021af5606"
-SRC_URI[sha256sum] = "6e38ff14f080c98b58cf5967573501b8cb586e3a173b591f3807d8f0660daf7a"
-
-# Build fails when thumb is enabled: https://bugzilla.yoctoproject.org/show_bug.cgi?id=7717
-ARM_INSTRUCTION_SET = "arm"
-
-# Should be API compatible with libav (which was a fork of ffmpeg)
-# libpostproc was previously packaged from a separate recipe
-PROVIDES = "libav libpostproc"
-
-DEPENDS = "alsa-lib zlib libogg yasm-native"
-
-inherit autotools pkgconfig
-
-PACKAGECONFIG ??= "avdevice avfilter avcodec avformat swresample swscale postproc \
-                   bzlib gpl lzma theora x264 \
-                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'x11 xv', '', d)}"
-
-# libraries to build in addition to avutil
-PACKAGECONFIG[avdevice] = "--enable-avdevice,--disable-avdevice"
-PACKAGECONFIG[avfilter] = "--enable-avfilter,--disable-avfilter"
-PACKAGECONFIG[avcodec] = "--enable-avcodec,--disable-avcodec"
-PACKAGECONFIG[avformat] = "--enable-avformat,--disable-avformat"
-PACKAGECONFIG[swresample] = "--enable-swresample,--disable-swresample"
-PACKAGECONFIG[swscale] = "--enable-swscale,--disable-swscale"
-PACKAGECONFIG[postproc] = "--enable-postproc,--disable-postproc"
-PACKAGECONFIG[avresample] = "--enable-avresample,--disable-avresample"
-
-# features to support
-PACKAGECONFIG[bzlib] = "--enable-bzlib,--disable-bzlib,bzip2"
-PACKAGECONFIG[gpl] = "--enable-gpl,--disable-gpl"
-PACKAGECONFIG[gsm] = "--enable-libgsm,--disable-libgsm,libgsm"
-PACKAGECONFIG[jack] = "--enable-indev=jack,--disable-indev=jack,jack"
-PACKAGECONFIG[libvorbis] = "--enable-libvorbis,--disable-libvorbis,libvorbis"
-PACKAGECONFIG[lzma] = "--enable-lzma,--disable-lzma,xz"
-PACKAGECONFIG[mp3lame] = "--enable-libmp3lame,--disable-libmp3lame,lame"
-PACKAGECONFIG[openssl] = "--enable-openssl,--disable-openssl,openssl"
-PACKAGECONFIG[schroedinger] = "--enable-libschroedinger,--disable-libschroedinger,schroedinger"
-PACKAGECONFIG[speex] = "--enable-libspeex,--disable-libspeex,speex"
-PACKAGECONFIG[theora] = "--enable-libtheora,--disable-libtheora,libtheora"
-PACKAGECONFIG[vaapi] = "--enable-vaapi,--disable-vaapi,libva"
-PACKAGECONFIG[vdpau] = "--enable-vdpau,--disable-vdpau,libvdpau"
-PACKAGECONFIG[vpx] = "--enable-libvpx,--disable-libvpx,libvpx"
-PACKAGECONFIG[x11] = "--enable-x11grab,--disable-x11grab,virtual/libx11 libxfixes libxext xproto virtual/libsdl"
-PACKAGECONFIG[x264] = "--enable-libx264,--disable-libx264,x264"
-PACKAGECONFIG[xv] = "--enable-outdev=xv,--disable-outdev=xv,libxv"
-
-# Check codecs that require --enable-nonfree
-USE_NONFREE = "${@bb.utils.contains_any('PACKAGECONFIG', [ 'openssl' ], 'yes', '', d)}"
-
-def cpu(d):
-    for arg in (d.getVar('TUNE_CCARGS') or '').split():
-        if arg.startswith('-mcpu='):
-            return arg[6:]
-    return 'generic'
-
-EXTRA_OECONF = " \
-    --disable-stripping \
-    --enable-pic \
-    --enable-shared \
-    --enable-pthreads \
-    ${@bb.utils.contains('USE_NONFREE', 'yes', '--enable-nonfree', '', d)} \
-    \
-    --cross-prefix=${TARGET_PREFIX} \
-    \
-    --ld="${CCLD}" \
-    --cc="${CC}" \
-    --cxx="${CXX}" \
-    --arch=${TARGET_ARCH} \
-    --target-os="linux" \
-    --enable-cross-compile \
-    --extra-cflags="${TARGET_CFLAGS} ${HOST_CC_ARCH}${TOOLCHAIN_OPTIONS}" \
-    --extra-ldflags="${TARGET_LDFLAGS}" \
-    --sysroot="${STAGING_DIR_TARGET}" \
-    --enable-hardcoded-tables \
-    ${EXTRA_FFCONF} \
-    --libdir=${libdir} \
-    --shlibdir=${libdir} \
-    --datadir=${datadir}/ffmpeg \
-    ${@bb.utils.contains('AVAILTUNES', 'mips32r2', '', '--disable-mipsdsp --disable-mipsdspr2', d)} \
-    --cpu=${@cpu(d)} \
-"
-
-EXTRA_OECONF_append_linux-gnux32 = " --disable-asm"
-
-do_configure() {
-    ${S}/configure ${EXTRA_OECONF}
-}
-
-PACKAGES =+ "libavcodec \
-             libavdevice \
-             libavfilter \
-             libavformat \
-             libavresample \
-             libavutil \
-             libpostproc \
-             libswresample \
-             libswscale"
-
-FILES_libavcodec = "${libdir}/libavcodec${SOLIBS}"
-FILES_libavdevice = "${libdir}/libavdevice${SOLIBS}"
-FILES_libavfilter = "${libdir}/libavfilter${SOLIBS}"
-FILES_libavformat = "${libdir}/libavformat${SOLIBS}"
-FILES_libavresample = "${libdir}/libavresample${SOLIBS}"
-FILES_libavutil = "${libdir}/libavutil${SOLIBS}"
-FILES_libpostproc = "${libdir}/libpostproc${SOLIBS}"
-FILES_libswresample = "${libdir}/libswresample${SOLIBS}"
-FILES_libswscale = "${libdir}/libswscale${SOLIBS}"
-
-# ffmpeg disables PIC on some platforms (e.g. x86-32)
-INSANE_SKIP_${MLPREFIX}libavcodec = "textrel"
-INSANE_SKIP_${MLPREFIX}libavdevice = "textrel"
-INSANE_SKIP_${MLPREFIX}libavfilter = "textrel"
-INSANE_SKIP_${MLPREFIX}libavformat = "textrel"
-INSANE_SKIP_${MLPREFIX}libavutil = "textrel"
-INSANE_SKIP_${MLPREFIX}libavresample = "textrel"
-INSANE_SKIP_${MLPREFIX}libswscale = "textrel"
-INSANE_SKIP_${MLPREFIX}libswresample = "textrel"
-INSANE_SKIP_${MLPREFIX}libpostproc = "textrel"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg_3.3.3.bb b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg_3.3.3.bb
new file mode 100644
index 0000000..c1ebecf
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/ffmpeg/ffmpeg_3.3.3.bb
@@ -0,0 +1,165 @@
+SUMMARY = "A complete, cross-platform solution to record, convert and stream audio and video."
+DESCRIPTION = "FFmpeg is the leading multimedia framework, able to decode, encode, transcode, \
+               mux, demux, stream, filter and play pretty much anything that humans and machines \
+               have created. It supports the most obscure ancient formats up to the cutting edge."
+HOMEPAGE = "https://www.ffmpeg.org/"
+SECTION = "libs"
+
+LICENSE = "BSD & GPLv2+ & LGPLv2.1+ & MIT"
+LICENSE_${PN} = "GPLv2+"
+LICENSE_libavcodec = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
+LICENSE_libavdevice = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
+LICENSE_libavfilter = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
+LICENSE_libavformat = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
+LICENSE_libavresample = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
+LICENSE_libavutil = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
+LICENSE_libpostproc = "GPLv2+"
+LICENSE_libswresample = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
+LICENSE_libswscale = "${@bb.utils.contains('PACKAGECONFIG', 'gpl', 'GPLv2+', 'LGPLv2.1+', d)}"
+LICENSE_FLAGS = "commercial"
+
+LIC_FILES_CHKSUM = "file://COPYING.GPLv2;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://COPYING.GPLv3;md5=d32239bcb673463ab874e80d47fae504 \
+                    file://COPYING.LGPLv2.1;md5=bd7a443320af8c812e4c18d1b79df004 \
+                    file://COPYING.LGPLv3;md5=e6a600fd5e1d9cbde2d983680233ad02"
+
+SRC_URI = "https://www.ffmpeg.org/releases/${BP}.tar.xz \
+           file://mips64_cpu_detection.patch \
+           file://0001-build-fix-for-mips.patch \
+           file://CVE-2017-14054.patch \
+           file://CVE-2017-14055.patch \
+           file://CVE-2017-14056.patch \
+           file://CVE-2017-14057.patch \
+           file://CVE-2017-14058.patch \
+           file://CVE-2017-14059.patch \
+           file://CVE-2017-14169.patch \
+           file://CVE-2017-14170.patch \
+           file://CVE-2017-14171.patch \
+           file://CVE-2017-14222.patch \
+           file://CVE-2017-14223.patch \
+           file://CVE-2017-14225.patch \
+          "
+SRC_URI[md5sum] = "743dc66ebe67180283b92d029f690d0f"
+SRC_URI[sha256sum] = "d2a9002cdc6b533b59728827186c044ad02ba64841f1b7cd6c21779875453a1e"
+
+# Build fails when thumb is enabled: https://bugzilla.yoctoproject.org/show_bug.cgi?id=7717
+ARM_INSTRUCTION_SET = "arm"
+
+# Should be API compatible with libav (which was a fork of ffmpeg)
+# libpostproc was previously packaged from a separate recipe
+PROVIDES = "libav libpostproc"
+
+DEPENDS = "alsa-lib zlib libogg yasm-native"
+
+inherit autotools pkgconfig
+
+PACKAGECONFIG ??= "avdevice avfilter avcodec avformat swresample swscale postproc \
+                   bzlib gpl lzma theora x264 \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'xv', '', d)}"
+
+# libraries to build in addition to avutil
+PACKAGECONFIG[avdevice] = "--enable-avdevice,--disable-avdevice"
+PACKAGECONFIG[avfilter] = "--enable-avfilter,--disable-avfilter"
+PACKAGECONFIG[avcodec] = "--enable-avcodec,--disable-avcodec"
+PACKAGECONFIG[avformat] = "--enable-avformat,--disable-avformat"
+PACKAGECONFIG[swresample] = "--enable-swresample,--disable-swresample"
+PACKAGECONFIG[swscale] = "--enable-swscale,--disable-swscale"
+PACKAGECONFIG[postproc] = "--enable-postproc,--disable-postproc"
+PACKAGECONFIG[avresample] = "--enable-avresample,--disable-avresample"
+
+# features to support
+PACKAGECONFIG[bzlib] = "--enable-bzlib,--disable-bzlib,bzip2"
+PACKAGECONFIG[gpl] = "--enable-gpl,--disable-gpl"
+PACKAGECONFIG[gsm] = "--enable-libgsm,--disable-libgsm,libgsm"
+PACKAGECONFIG[jack] = "--enable-indev=jack,--disable-indev=jack,jack"
+PACKAGECONFIG[libvorbis] = "--enable-libvorbis,--disable-libvorbis,libvorbis"
+PACKAGECONFIG[lzma] = "--enable-lzma,--disable-lzma,xz"
+PACKAGECONFIG[mp3lame] = "--enable-libmp3lame,--disable-libmp3lame,lame"
+PACKAGECONFIG[openssl] = "--enable-openssl,--disable-openssl,openssl"
+PACKAGECONFIG[schroedinger] = "--enable-libschroedinger,--disable-libschroedinger,schroedinger"
+PACKAGECONFIG[sdl2] = "--enable-sdl2,--disable-sdl2,virtual/libsdl2"
+PACKAGECONFIG[speex] = "--enable-libspeex,--disable-libspeex,speex"
+PACKAGECONFIG[theora] = "--enable-libtheora,--disable-libtheora,libtheora"
+PACKAGECONFIG[vaapi] = "--enable-vaapi,--disable-vaapi,libva"
+PACKAGECONFIG[vdpau] = "--enable-vdpau,--disable-vdpau,libvdpau"
+PACKAGECONFIG[vpx] = "--enable-libvpx,--disable-libvpx,libvpx"
+PACKAGECONFIG[x264] = "--enable-libx264,--disable-libx264,x264"
+PACKAGECONFIG[xv] = "--enable-outdev=xv,--disable-outdev=xv,libxv"
+
+# Check codecs that require --enable-nonfree
+USE_NONFREE = "${@bb.utils.contains_any('PACKAGECONFIG', [ 'openssl' ], 'yes', '', d)}"
+
+def cpu(d):
+    for arg in (d.getVar('TUNE_CCARGS') or '').split():
+        if arg.startswith('-mcpu='):
+            return arg[6:]
+    return 'generic'
+
+EXTRA_OECONF = " \
+    --disable-stripping \
+    --enable-pic \
+    --enable-shared \
+    --enable-pthreads \
+    --disable-libxcb \
+    --disable-libxcb-shm \
+    --disable-libxcb-xfixes \
+    --disable-libxcb-shape \
+    ${@bb.utils.contains('USE_NONFREE', 'yes', '--enable-nonfree', '', d)} \
+    \
+    --cross-prefix=${TARGET_PREFIX} \
+    \
+    --ld="${CCLD}" \
+    --cc="${CC}" \
+    --cxx="${CXX}" \
+    --arch=${TARGET_ARCH} \
+    --target-os="linux" \
+    --enable-cross-compile \
+    --extra-cflags="${TARGET_CFLAGS} ${HOST_CC_ARCH}${TOOLCHAIN_OPTIONS}" \
+    --extra-ldflags="${TARGET_LDFLAGS}" \
+    --sysroot="${STAGING_DIR_TARGET}" \
+    --enable-hardcoded-tables \
+    ${EXTRA_FFCONF} \
+    --libdir=${libdir} \
+    --shlibdir=${libdir} \
+    --datadir=${datadir}/ffmpeg \
+    ${@bb.utils.contains('AVAILTUNES', 'mips32r2', '', '--disable-mipsdsp --disable-mipsdspr2', d)} \
+    --cpu=${@cpu(d)} \
+    --pkg-config=pkg-config \
+"
+
+EXTRA_OECONF_append_linux-gnux32 = " --disable-asm"
+
+do_configure() {
+    ${S}/configure ${EXTRA_OECONF}
+}
+
+PACKAGES =+ "libavcodec \
+             libavdevice \
+             libavfilter \
+             libavformat \
+             libavresample \
+             libavutil \
+             libpostproc \
+             libswresample \
+             libswscale"
+
+FILES_libavcodec = "${libdir}/libavcodec${SOLIBS}"
+FILES_libavdevice = "${libdir}/libavdevice${SOLIBS}"
+FILES_libavfilter = "${libdir}/libavfilter${SOLIBS}"
+FILES_libavformat = "${libdir}/libavformat${SOLIBS}"
+FILES_libavresample = "${libdir}/libavresample${SOLIBS}"
+FILES_libavutil = "${libdir}/libavutil${SOLIBS}"
+FILES_libpostproc = "${libdir}/libpostproc${SOLIBS}"
+FILES_libswresample = "${libdir}/libswresample${SOLIBS}"
+FILES_libswscale = "${libdir}/libswscale${SOLIBS}"
+
+# ffmpeg disables PIC on some platforms (e.g. x86-32)
+INSANE_SKIP_${MLPREFIX}libavcodec = "textrel"
+INSANE_SKIP_${MLPREFIX}libavdevice = "textrel"
+INSANE_SKIP_${MLPREFIX}libavfilter = "textrel"
+INSANE_SKIP_${MLPREFIX}libavformat = "textrel"
+INSANE_SKIP_${MLPREFIX}libavutil = "textrel"
+INSANE_SKIP_${MLPREFIX}libavresample = "textrel"
+INSANE_SKIP_${MLPREFIX}libswscale = "textrel"
+INSANE_SKIP_${MLPREFIX}libswresample = "textrel"
+INSANE_SKIP_${MLPREFIX}libpostproc = "textrel"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player/0001-gtk-play-Disable-visualizations.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player/0001-gtk-play-Disable-visualizations.patch
deleted file mode 100644
index ea88120..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player/0001-gtk-play-Disable-visualizations.patch
+++ /dev/null
@@ -1,59 +0,0 @@
-From 6cf42c468e93b0aaa171961e059bc3e2fb915889 Mon Sep 17 00:00:00 2001
-From: Jussi Kukkonen <jussi.kukkonen@intel.com>
-Date: Fri, 28 Apr 2017 14:35:19 +0300
-Subject: [PATCH] gtk-play: Disable visualizations
-
-This is a workaround for [YOCTO #11410] (audio playback is broken in
-mediaplayer if vaapi is used). It disables visualizations and makes
-sure we clear the window (otherwise nothing does that and result is
-very ugly).
-
-This patch should be removed when 11410 is fixed.
-
-Upstream-Status: Inappropriate [bug workaround]
-Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
----
- gtk/gtk-play.c | 13 ++++++++++++-
- 1 file changed, 12 insertions(+), 1 deletion(-)
-
-diff --git a/gtk/gtk-play.c b/gtk/gtk-play.c
-index 8ae0fea..63b9bb0 100644
---- a/gtk/gtk-play.c
-+++ b/gtk/gtk-play.c
-@@ -1401,6 +1401,15 @@ get_child_position (GtkOverlay * overlay, GtkWidget * widget,
-   return TRUE;
- }
- 
-+/* Hack to make sure something gets drawn if visualizations are disabled */
-+static gboolean
-+draw_cb (GtkWidget *widget, cairo_t *cr, gpointer data)
-+{
-+  cairo_set_source_rgb (cr, 0, 0, 0);
-+  cairo_paint (cr);
-+  return FALSE;
-+}
-+
- static void
- create_ui (GtkPlay * play)
- {
-@@ -1431,6 +1440,8 @@ create_ui (GtkPlay * play)
-     play->video_area = gtk_drawing_area_new ();
-     g_signal_connect (play->video_area, "realize",
-         G_CALLBACK (video_area_realize_cb), play);
-+    g_signal_connect (play->video_area, "draw",
-+        G_CALLBACK (draw_cb), NULL);
-   }
-   gtk_widget_set_events (play->video_area, GDK_EXPOSURE_MASK
-       | GDK_LEAVE_NOTIFY_MASK
-@@ -1753,7 +1764,7 @@ gtk_play_constructor (GType type, guint n_construct_params,
- 
-   /* enable visualization (by default playbin uses goom) */
-   /* if visualization is enabled then use the first element */
--  gst_player_set_visualization_enabled (self->player, TRUE);
-+  gst_player_set_visualization_enabled (self->player, FALSE);
- 
-   g_signal_connect (G_OBJECT (self), "show", G_CALLBACK (show_cb), NULL);
- 
--- 
-2.1.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player/Add-error-signal-emission-for-missing-plugins.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player/Add-error-signal-emission-for-missing-plugins.patch
deleted file mode 100644
index 712d46d..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player/Add-error-signal-emission-for-missing-plugins.patch
+++ /dev/null
@@ -1,252 +0,0 @@
-From d64c7edb66f4a64ff49c4306cf77fd269b7079ab Mon Sep 17 00:00:00 2001
-From: Jussi Kukkonen <jussi.kukkonen@intel.com>
-Date: Mon, 16 Mar 2015 13:45:30 +0200
-Subject: [PATCH] Add error signal emission for missing plugins
-
-Add a missing plugins error signal to gst-player. Note that this error
-does not necessarily mean the playback has completely failed, but in
-practice the user experience will be bad (think, e.g. of a mp4 file
-where H.264 codec is missing: AAC playback still works...).
-
-Use the signal in gtk-play to show a infobar if plugins are missing.
-
-Submitted upstream at https://github.com/sdroege/gst-player/pull/11
-
-Upstream-Status: Submitted
-Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
----
- configure.ac               |  2 +-
- gtk/gtk-play.c             | 54 +++++++++++++++++++++++++++++++++++++++++++++-
- lib/gst/player/gstplayer.c | 22 +++++++++++++++++++
- lib/gst/player/gstplayer.h |  3 ++-
- 4 files changed, 78 insertions(+), 3 deletions(-)
-
-diff --git a/configure.ac b/configure.ac
-index 90ab74c..6cdb4eb 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -53,7 +53,7 @@ AC_SUBST(LT_AGE)
- PKG_PROG_PKG_CONFIG
- 
- PKG_CHECK_MODULES(GLIB, [glib-2.0 gobject-2.0])
--PKG_CHECK_MODULES(GSTREAMER, [gstreamer-1.0 >= 1.4 gstreamer-video-1.0 >= 1.4])
-+PKG_CHECK_MODULES(GSTREAMER, [gstreamer-1.0 >= 1.4 gstreamer-video-1.0 >= 1.4 gstreamer-pbutils-1.0])
- 
- GLIB_PREFIX="`$PKG_CONFIG --variable=prefix glib-2.0`"
- AC_SUBST(GLIB_PREFIX)
-diff --git a/gtk/gtk-play.c b/gtk/gtk-play.c
-index b92773b..e2b605a 100644
---- a/gtk/gtk-play.c
-+++ b/gtk/gtk-play.c
-@@ -30,6 +30,8 @@ typedef struct
-   GtkWidget *prev_button, *next_button;
-   GtkWidget *seekbar;
-   GtkWidget *video_area;
-+  GtkWidget *info_label;
-+  GtkWidget *info_bar;
-   GtkWidget *volume_button;
-   gulong seekbar_value_changed_signal_id;
-   gboolean playing;
-@@ -141,6 +143,13 @@ play_pause_clicked_cb (GtkButton * button, GtkPlay * play)
- }
- 
- static void
-+clear_missing_plugins (GtkPlay * play)
-+{
-+  gtk_label_set_text (GTK_LABEL (play->info_label), "");
-+  gtk_widget_hide (play->info_bar);
-+}
-+
-+static void
- skip_prev_clicked_cb (GtkButton * button, GtkPlay * play)
- {
-   GList *prev;
-@@ -155,6 +164,7 @@ skip_prev_clicked_cb (GtkButton * button, GtkPlay * play)
- 
-   gtk_widget_set_sensitive (play->next_button, TRUE);
-   gst_player_set_uri (play->player, prev->data);
-+  clear_missing_plugins (play);
-   gst_player_play (play->player);
-   set_title (play, prev->data);
-   gtk_widget_set_sensitive (play->prev_button, g_list_previous (prev) != NULL);
-@@ -175,6 +185,7 @@ skip_next_clicked_cb (GtkButton * button, GtkPlay * play)
- 
-   gtk_widget_set_sensitive (play->prev_button, TRUE);
-   gst_player_set_uri (play->player, next->data);
-+  clear_missing_plugins (play);
-   gst_player_play (play->player);
-   set_title (play, next->data);
-   gtk_widget_set_sensitive (play->next_button, g_list_next (next) != NULL);
-@@ -193,10 +204,16 @@ volume_changed_cb (GtkScaleButton * button, gdouble value, GtkPlay * play)
-   gst_player_set_volume (play->player, value);
- }
- 
-+void
-+info_bar_response_cb (GtkInfoBar * bar, gint response, GtkPlay * play)
-+{
-+  gtk_widget_hide (GTK_WIDGET (bar));
-+}
-+
- static void
- create_ui (GtkPlay * play)
- {
--  GtkWidget *controls, *main_hbox, *main_vbox;
-+  GtkWidget *controls, *main_hbox, *main_vbox, *info_bar, *content_area;
- 
-   play->window = gtk_window_new (GTK_WINDOW_TOPLEVEL);
-   g_signal_connect (G_OBJECT (play->window), "delete-event",
-@@ -208,6 +225,20 @@ create_ui (GtkPlay * play)
-   g_signal_connect (play->video_area, "realize",
-       G_CALLBACK (video_area_realize_cb), play);
- 
-+  play->info_bar = gtk_info_bar_new ();
-+  gtk_info_bar_set_message_type (GTK_INFO_BAR (play->info_bar),
-+      GTK_MESSAGE_WARNING);
-+  //gtk_info_bar_set_show_close_button (GTK_INFO_BAR (play->info_bar),
-+  //    TRUE);
-+  gtk_widget_set_no_show_all (play->info_bar, TRUE);
-+  g_signal_connect (play->info_bar, "response",
-+      G_CALLBACK (info_bar_response_cb), play);
-+
-+  content_area = gtk_info_bar_get_content_area (GTK_INFO_BAR (play->info_bar));
-+  play->info_label = gtk_label_new ("");
-+  gtk_container_add (GTK_CONTAINER (content_area), play->info_label);
-+  gtk_widget_show (play->info_label);
-+
-   /* Unified play/pause button */
-   play->play_pause_button =
-       gtk_button_new_from_icon_name ("media-playback-pause",
-@@ -258,6 +289,7 @@ create_ui (GtkPlay * play)
- 
-   main_vbox = gtk_box_new (GTK_ORIENTATION_VERTICAL, 0);
-   gtk_box_pack_start (GTK_BOX (main_vbox), main_hbox, TRUE, TRUE, 0);
-+  gtk_box_pack_start (GTK_BOX (main_vbox), play->info_bar, FALSE, FALSE, 0);
-   gtk_box_pack_start (GTK_BOX (main_vbox), controls, FALSE, FALSE, 0);
-   gtk_container_add (GTK_CONTAINER (play->window), main_vbox);
- 
-@@ -322,6 +354,7 @@ eos_cb (GstPlayer * unused, GtkPlay * play)
-       gtk_widget_set_sensitive (play->next_button, g_list_next (next) != NULL);
- 
-       gst_player_set_uri (play->player, next->data);
-+      clear_missing_plugins (play);
-       gst_player_play (play->player);
-       set_title (play, next->data);
-     } else {
-@@ -330,6 +363,24 @@ eos_cb (GstPlayer * unused, GtkPlay * play)
-   }
- }
- 
-+static void
-+error_cb (GstPlayer * player, GError * err, GtkPlay * play)
-+{
-+  char *message;
-+
-+  if (g_error_matches (err, gst_player_error_quark (),
-+      GST_PLAYER_ERROR_MISSING_PLUGIN)) {
-+    // add message to end of any existing message: there may be
-+    // multiple missing plugins.
-+    message = g_strdup_printf ("%s%s. ",
-+        gtk_label_get_text (GTK_LABEL (play->info_label)), err->message);
-+    gtk_label_set_text (GTK_LABEL (play->info_label), message);
-+    g_free (message);
-+
-+    gtk_widget_show (play->info_bar);
-+  }
-+}
-+
- int
- main (gint argc, gchar ** argv)
- {
-@@ -422,6 +473,7 @@ main (gint argc, gchar ** argv)
-   g_signal_connect (play.player, "video-dimensions-changed",
-       G_CALLBACK (video_dimensions_changed_cb), &play);
-   g_signal_connect (play.player, "end-of-stream", G_CALLBACK (eos_cb), &play);
-+  g_signal_connect (play.player, "error", G_CALLBACK (error_cb), &play);
- 
-   /* We have file(s) that need playing. */
-   set_title (&play, g_list_first (play.uris)->data);
-diff --git a/lib/gst/player/gstplayer.c b/lib/gst/player/gstplayer.c
-index bd682d9..78e7ba1 100644
---- a/lib/gst/player/gstplayer.c
-+++ b/lib/gst/player/gstplayer.c
-@@ -47,6 +47,7 @@
- 
- #include <gst/gst.h>
- #include <gst/video/video.h>
-+#include <gst/pbutils/missing-plugins.h>
- 
- GST_DEBUG_CATEGORY_STATIC (gst_player_debug);
- #define GST_CAT_DEFAULT gst_player_debug
-@@ -238,6 +239,7 @@ gst_player_class_init (GstPlayerClass * klass)
-       g_signal_new ("video-dimensions-changed", G_TYPE_FROM_CLASS (klass),
-       G_SIGNAL_RUN_LAST | G_SIGNAL_NO_RECURSE | G_SIGNAL_NO_HOOKS, 0, NULL,
-       NULL, NULL, G_TYPE_NONE, 2, G_TYPE_INT, G_TYPE_INT);
-+
- }
- 
- static void
-@@ -619,6 +621,21 @@ error_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
-   g_mutex_unlock (&self->priv->lock);
- }
- 
-+static void
-+element_cb (GstBus * bus, GstMessage * msg, gpointer user_data)
-+{
-+  GstPlayer *self = GST_PLAYER (user_data);
-+
-+  if (gst_is_missing_plugin_message (msg)) {
-+    gchar *desc;
-+
-+    desc = gst_missing_plugin_message_get_description (msg);
-+    emit_error (self, g_error_new (GST_PLAYER_ERROR,
-+        GST_PLAYER_ERROR_MISSING_PLUGIN, "Missing plugin '%s'", desc));
-+    g_free (desc);
-+  }
-+}
-+
- static gboolean
- eos_dispatch (gpointer user_data)
- {
-@@ -1059,6 +1076,8 @@ gst_player_main (gpointer data)
-       NULL, NULL);
-   g_source_attach (bus_source, self->priv->context);
- 
-+  g_signal_connect (G_OBJECT (bus), "message::element",
-+      G_CALLBACK (element_cb), self);
-   g_signal_connect (G_OBJECT (bus), "message::error", G_CALLBACK (error_cb),
-       self);
-   g_signal_connect (G_OBJECT (bus), "message::eos", G_CALLBACK (eos_cb), self);
-@@ -1560,6 +1579,7 @@ gst_player_error_get_type (void)
-   static gsize id = 0;
-   static const GEnumValue values[] = {
-     {C_ENUM (GST_PLAYER_ERROR_FAILED), "GST_PLAYER_ERROR_FAILED", "failed"},
-+    {C_ENUM (GST_PLAYER_ERROR_MISSING_PLUGIN), "GST_PLAYER_ERROR_MISSING_PLUGIN", "missing-plugin"},
-     {0, NULL, NULL}
-   };
- 
-@@ -1577,6 +1597,8 @@ gst_player_error_get_name (GstPlayerError error)
-   switch (error) {
-     case GST_PLAYER_ERROR_FAILED:
-       return "failed";
-+    case GST_PLAYER_ERROR_MISSING_PLUGIN:
-+      return "missing-plugin";
-   }
- 
-   g_assert_not_reached ();
-diff --git a/lib/gst/player/gstplayer.h b/lib/gst/player/gstplayer.h
-index c438513..35fb5bb 100644
---- a/lib/gst/player/gstplayer.h
-+++ b/lib/gst/player/gstplayer.h
-@@ -44,7 +44,8 @@ GType        gst_player_error_get_type                (void);
- #define      GST_TYPE_PLAYER_ERROR                    (gst_player_error_get_type ())
- 
- typedef enum {
--  GST_PLAYER_ERROR_FAILED = 0
-+  GST_PLAYER_ERROR_FAILED = 0,
-+  GST_PLAYER_ERROR_MISSING_PLUGIN
- } GstPlayerError;
- 
- const gchar *gst_player_error_get_name                (GstPlayerError error);
--- 
-2.1.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player/Fix-pause-play.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player/Fix-pause-play.patch
deleted file mode 100644
index 783c42a..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player/Fix-pause-play.patch
+++ /dev/null
@@ -1,107 +0,0 @@
-Fix pause/play
-
-The current player state is now notified via the state-changed signal,
-and in the GTK UI it was only used to keep track of the desired state.
-
-This is a backport of upstream commit 738479c7a0.
-
-Upstream-Status: Backport
-Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
-
----
- gtk/gtk-play.c             |  8 ++++++--
- lib/gst/player/gstplayer.c | 12 ------------
- lib/gst/player/gstplayer.h |  2 --
- 3 files changed, 6 insertions(+), 16 deletions(-)
-
-diff --git a/gtk/gtk-play.c b/gtk/gtk-play.c
-index 6e7a098..e2b605a 100644
---- a/gtk/gtk-play.c
-+++ b/gtk/gtk-play.c
-@@ -34,6 +34,7 @@ typedef struct
-   GtkWidget *info_bar;
-   GtkWidget *volume_button;
-   gulong seekbar_value_changed_signal_id;
-+  gboolean playing;
- } GtkPlay;
- 
- /* Compat stubs */
-@@ -118,12 +119,13 @@ play_pause_clicked_cb (GtkButton * button, GtkPlay * play)
- {
-   GtkWidget *image;
- 
--  if (gst_player_is_playing (play->player)) {
-+  if (play->playing) {
-     gst_player_pause (play->player);
-     image =
-         gtk_image_new_from_icon_name ("media-playback-start",
-         GTK_ICON_SIZE_BUTTON);
-     gtk_button_set_image (GTK_BUTTON (play->play_pause_button), image);
-+    play->playing = FALSE;
-   } else {
-     gchar *title;
- 
-@@ -136,6 +138,7 @@ play_pause_clicked_cb (GtkButton * button, GtkPlay * play)
-     title = gst_player_get_uri (play->player);
-     set_title (play, title);
-     g_free (title);
-+    play->playing = TRUE;
-   }
- }
- 
-@@ -335,7 +338,7 @@ video_dimensions_changed_cb (GstPlayer * unused, gint width, gint height,
- static void
- eos_cb (GstPlayer * unused, GtkPlay * play)
- {
--  if (gst_player_is_playing (play->player)) {
-+  if (play->playing) {
-     GList *next = NULL;
-     gchar *uri;
- 
-@@ -452,6 +455,7 @@ main (gint argc, gchar ** argv)
-   }
- 
-   play.player = gst_player_new ();
-+  play.playing = TRUE;
- 
-   g_object_set (play.player, "dispatch-to-main-context", TRUE, NULL);
- 
-diff --git a/lib/gst/player/gstplayer.c b/lib/gst/player/gstplayer.c
-index 069b284..78e7ba1 100644
---- a/lib/gst/player/gstplayer.c
-+++ b/lib/gst/player/gstplayer.c
-@@ -1422,18 +1422,6 @@ gst_player_set_uri (GstPlayer * self, const gchar * val)
-   g_object_set (self, "uri", val, NULL);
- }
- 
--gboolean
--gst_player_is_playing (GstPlayer * self)
--{
--  gboolean val;
--
--  g_return_val_if_fail (GST_IS_PLAYER (self), FALSE);
--
--  g_object_get (self, "is-playing", &val, NULL);
--
--  return val;
--}
--
- GstClockTime
- gst_player_get_position (GstPlayer * self)
- {
-diff --git a/lib/gst/player/gstplayer.h b/lib/gst/player/gstplayer.h
-index 6933dd7..35fb5bb 100644
---- a/lib/gst/player/gstplayer.h
-+++ b/lib/gst/player/gstplayer.h
-@@ -93,8 +93,6 @@ gchar *      gst_player_get_uri                       (GstPlayer    * player);
- void         gst_player_set_uri                       (GstPlayer    * player,
-                                                        const gchar  * uri);
- 
--gboolean     gst_player_is_playing                    (GstPlayer    * player);
--
- GstClockTime gst_player_get_position                  (GstPlayer    * player);
- GstClockTime gst_player_get_duration                  (GstPlayer    * player);
- 
--- 
-2.1.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player/filechooser.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player/filechooser.patch
deleted file mode 100644
index 7bf1b03..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player/filechooser.patch
+++ /dev/null
@@ -1,54 +0,0 @@
-Upstream-Status: Submitted
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-From 43d4b19ab611d844156e26c4840cc54ddb73ae03 Mon Sep 17 00:00:00 2001
-From: Ross Burton <ross.burton@intel.com>
-Date: Thu, 26 Feb 2015 17:17:05 +0000
-Subject: [PATCH] gtk-play: show a file chooser if no URIs were passed
-
----
- gtk/gtk-play.c |   28 ++++++++++++++++++++++++++--
- 1 file changed, 26 insertions(+), 2 deletions(-)
-
-diff --git a/gtk/gtk-play.c b/gtk/gtk-play.c
-index f015077..9766a72 100644
---- a/gtk/gtk-play.c
-+++ b/gtk/gtk-play.c
-@@ -319,8 +319,32 @@ main (gint argc, gchar ** argv)
-   // FIXME: Add support for playlists and stuff
-   /* Parse the list of the file names we have to play. */
-   if (!file_names) {
--    g_print ("Usage: %s FILE(s)|URI(s)\n", APP_NAME);
--    return 1;
-+    GtkWidget *chooser;
-+    int res;
-+
-+    chooser = gtk_file_chooser_dialog_new ("Select files to play", NULL,
-+                                          GTK_FILE_CHOOSER_ACTION_OPEN,
-+                                          "_Cancel", GTK_RESPONSE_CANCEL,
-+                                          "_Open", GTK_RESPONSE_ACCEPT,
-+                                          NULL);
-+    g_object_set (chooser,
-+                  "local-only", FALSE,
-+                  "select-multiple", TRUE,
-+                  NULL);
-+
-+    res = gtk_dialog_run (GTK_DIALOG (chooser));
-+    if (res == GTK_RESPONSE_ACCEPT) {
-+      GSList *l;
-+
-+      l = gtk_file_chooser_get_uris (GTK_FILE_CHOOSER (chooser));
-+      while (l) {
-+        play.uris = g_list_append (play.uris, l->data);
-+        l = g_slist_delete_link (l, l);
-+      }
-+    } else {
-+      return 0;
-+    }
-+    gtk_widget_destroy (chooser);
-   } else {
-     guint i;
- 
--- 
-1.7.10.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player_git.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player_git.bb
index cb12a46..4fe8fde 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gst-player_git.bb
@@ -5,14 +5,11 @@
 DEPENDS = "glib-2.0 gstreamer1.0 gstreamer1.0-plugins-base gstreamer1.0-plugins-bad gtk+3 glib-2.0-native"
 
 SRC_URI = "git://github.com/sdroege/gst-player.git \
-           file://filechooser.patch;apply=0 \
-           file://Fix-pause-play.patch;apply=0 \
-           file://Add-error-signal-emission-for-missing-plugins.patch;apply=0 \
-           file://0001-gtk-play-Disable-visualizations.patch \
            file://gst-player.desktop"
 
 SRCREV = "ee3c226c82767a089743e4e06058743e67f73cdb"
 PV = "0.0.1+git${SRCPV}"
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 S = "${WORKDIR}/git"
 
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer-vaapi/vaapivideobufferpool-create-allocator-if-needed.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer-vaapi/vaapivideobufferpool-create-allocator-if-needed.patch
deleted file mode 100644
index f666adc..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer-vaapi/vaapivideobufferpool-create-allocator-if-needed.patch
+++ /dev/null
@@ -1,61 +0,0 @@
-Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
-Upstream-Status: Backport [commit 59a731be6b in '1.10' branch]
-
-
-From 02a6002c3a80b3a5301c0943b1a1518bbdf439fc Mon Sep 17 00:00:00 2001
-From: =?UTF-8?q?V=C3=ADctor=20Manuel=20J=C3=A1quez=20Leal?=
- <vjaquez@igalia.com>
-Date: Fri, 21 Apr 2017 19:07:18 +0200
-Subject: [PATCH] vaapivideobufferpool: create allocator if needed
-
-Backport to branch 1.10
-
-Sometimes a video decoder could set different buffer pool
-configurations, because their frame size changes. In this case we
-did not reconfigure the allocator.
-
-This patch enables this use case, creating a new allocator inside
-the VAAPI buffer pool if the caps changed, if it is not dmabuf-based.
-If so, it is just reconfigured, since it doesn't have a surface pool.
-
-https://bugzilla.gnome.org/show_bug.cgi?id=781577
----
- gst/vaapi/gstvaapivideobufferpool.c | 21 +++++++++++++++++++++
- 1 file changed, 21 insertions(+)
-
-diff --git a/gst/vaapi/gstvaapivideobufferpool.c b/gst/vaapi/gstvaapivideobufferpool.c
-index a3b9223f..9a50614b 100644
---- a/gst/vaapi/gstvaapivideobufferpool.c
-+++ b/gst/vaapi/gstvaapivideobufferpool.c
-@@ -159,6 +159,27 @@ gst_vaapi_video_buffer_pool_set_config (GstBufferPool * pool,
-     gst_object_replace ((GstObject **) & priv->allocator, NULL);
-   priv->video_info = new_vip;
- 
-+  {
-+    guint surface_alloc_flags;
-+    gboolean vinfo_changed = FALSE;
-+
-+    if (allocator) {
-+      const GstVideoInfo *alloc_vinfo =
-+          gst_allocator_get_vaapi_video_info (allocator, &surface_alloc_flags);
-+      vinfo_changed = gst_video_info_changed (alloc_vinfo, &new_vip);
-+    }
-+
-+    if (vinfo_changed && allocator && priv->use_dmabuf_memory) {
-+      gst_allocator_set_vaapi_video_info (allocator, &new_vip,
-+          surface_alloc_flags);
-+    } else if (!priv->use_dmabuf_memory && (vinfo_changed || !allocator)) {
-+      /* let's destroy the other allocator and create a new one */
-+      allocator = gst_vaapi_video_allocator_new (priv->display, &new_vip,
-+          surface_alloc_flags);
-+      gst_buffer_pool_config_set_allocator (config, allocator, NULL);
-+    }
-+  }
-+
-   if (!gst_buffer_pool_config_has_option (config,
-           GST_BUFFER_POOL_OPTION_VAAPI_VIDEO_META))
-     goto error_no_vaapi_video_meta_option;
--- 
-2.11.0
-
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav/0001-configure-check-for-armv7ve-variant.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav/0001-configure-check-for-armv7ve-variant.patch
new file mode 100644
index 0000000..b80d073
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav/0001-configure-check-for-armv7ve-variant.patch
@@ -0,0 +1,35 @@
+From aac5902d3c9cb35c771e760d0e487622aa2e116a Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Thu, 20 Apr 2017 10:38:18 -0700
+Subject: [PATCH] configure: check for armv7ve variant
+
+OE passes -mcpu and -march via cmdline and if
+package tries to detect one of it own then it
+should be compatible otherwise, newer gcc7+ will
+error out
+
+Check for relevant preprocessor macro to determine
+armv7ve architecture
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ gst-libs/ext/libav/configure | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/gst-libs/ext/libav/configure b/gst-libs/ext/libav/configure
+index 4a5e477..727818e 100755
+--- a/gst-libs/ext/libav/configure
++++ b/gst-libs/ext/libav/configure
+@@ -4295,6 +4295,7 @@ elif enabled arm; then
+         elif check_arm_arch 6Z;       then echo armv6z
+         elif check_arm_arch 6ZK;      then echo armv6zk
+         elif check_arm_arch 6T2;      then echo armv6t2
++        elif check_arm_arch EXT_IDIV; then echo armv7ve
+         elif check_arm_arch 7;        then echo armv7
+         elif check_arm_arch 7A  7_A;  then echo armv7-a
+         elif check_arm_arch 7S;       then echo armv7-a
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav/0001-hevcpred_msa.c-Fix-build-by-Including-libavcodec-hev.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav/0001-hevcpred_msa.c-Fix-build-by-Including-libavcodec-hev.patch
new file mode 100644
index 0000000..afbfc84
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav/0001-hevcpred_msa.c-Fix-build-by-Including-libavcodec-hev.patch
@@ -0,0 +1,33 @@
+From b5226c096a0b7049874858e94a59d43e10ba3fd2 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Thu, 21 Sep 2017 10:22:56 -0700
+Subject: [PATCH] hevcpred_msa.c: Fix build by Including libavcodec/hevcdec.h
+
+src/libavcodec/mips/hevcpred_msa.c:1913:32: error: unknown type name 'HEVCContext'; did you mean 'HEVCPredContext'?
+ void ff_intra_pred_8_16x16_msa(HEVCContext *s, int x0, int y0, int c_idx)
+                                ^~~~~~~~~~~
+                                HEVCPredContext
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ gst-libs/ext/libav/libavcodec/mips/hevcpred_msa.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/gst-libs/ext/libav/libavcodec/mips/hevcpred_msa.c b/gst-libs/ext/libav/libavcodec/mips/hevcpred_msa.c
+index 6a3b281..963c64c 100644
+--- a/gst-libs/ext/libav/libavcodec/mips/hevcpred_msa.c
++++ b/gst-libs/ext/libav/libavcodec/mips/hevcpred_msa.c
+@@ -18,7 +18,7 @@
+  * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+  */
+ 
+-#include "libavcodec/hevc.h"
++#include "libavcodec/hevcdec.h"
+ #include "libavutil/mips/generic_macros_msa.h"
+ #include "hevcpred_mips.h"
+ 
+-- 
+2.14.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav_1.10.4.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav_1.10.4.bb
deleted file mode 100644
index 59d81db..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav_1.10.4.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-require gstreamer1.0-libav.inc
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://COPYING.LIB;md5=6762ed442b3822387a51c92d928ead0d \
-                    file://ext/libav/gstav.h;beginline=1;endline=18;md5=a752c35267d8276fd9ca3db6994fca9c \
-                    file://gst-libs/ext/libav/COPYING.GPLv2;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://gst-libs/ext/libav/COPYING.GPLv3;md5=d32239bcb673463ab874e80d47fae504 \
-                    file://gst-libs/ext/libav/COPYING.LGPLv2.1;md5=bd7a443320af8c812e4c18d1b79df004 \
-                    file://gst-libs/ext/libav/COPYING.LGPLv3;md5=e6a600fd5e1d9cbde2d983680233ad02"
-
-SRC_URI = " \
-    http://gstreamer.freedesktop.org/src/gst-libav/gst-libav-${PV}.tar.xz \
-    file://0001-Disable-yasm-for-libav-when-disable-yasm.patch \
-    file://workaround-to-build-gst-libav-for-i586-with-gcc.patch \
-    file://mips64_cpu_detection.patch \
-"
-SRC_URI[md5sum] = "e2bdd9fde6ca3ff7efffb93df121f4fd"
-SRC_URI[sha256sum] = "6ca0feca75e3d48315e07f20ec37cf6260ed1e9dde58df355febd5016246268b"
-
-S = "${WORKDIR}/gst-libav-${PV}"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav_1.12.2.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav_1.12.2.bb
new file mode 100644
index 0000000..3b5bbd1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav_1.12.2.bb
@@ -0,0 +1,21 @@
+require gstreamer1.0-libav.inc
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://COPYING.LIB;md5=6762ed442b3822387a51c92d928ead0d \
+                    file://ext/libav/gstav.h;beginline=1;endline=18;md5=a752c35267d8276fd9ca3db6994fca9c \
+                    file://gst-libs/ext/libav/COPYING.GPLv2;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://gst-libs/ext/libav/COPYING.GPLv3;md5=d32239bcb673463ab874e80d47fae504 \
+                    file://gst-libs/ext/libav/COPYING.LGPLv2.1;md5=bd7a443320af8c812e4c18d1b79df004 \
+                    file://gst-libs/ext/libav/COPYING.LGPLv3;md5=e6a600fd5e1d9cbde2d983680233ad02"
+
+SRC_URI = "http://gstreamer.freedesktop.org/src/gst-libav/gst-libav-${PV}.tar.xz \
+           file://0001-Disable-yasm-for-libav-when-disable-yasm.patch \
+           file://workaround-to-build-gst-libav-for-i586-with-gcc.patch \
+           file://mips64_cpu_detection.patch \
+           file://0001-configure-check-for-armv7ve-variant.patch \
+           file://0001-hevcpred_msa.c-Fix-build-by-Including-libavcodec-hev.patch \
+           "
+SRC_URI[md5sum] = "8788aecc032a287227b4bd239d1b998a"
+SRC_URI[sha256sum] = "5bb735b9bb218b652ae4071ea6f6be8eaae55e9d3233aec2f36b882a27542db3"
+
+S = "${WORKDIR}/gst-libav-${PV}"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-meta-base.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-meta-base.bb
index c542b13..016e176 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-meta-base.bb
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-meta-base.bb
@@ -37,7 +37,7 @@
     gstreamer1.0-plugins-base-videoscale \
     gstreamer1.0-plugins-base-videoconvert \
     gstreamer1.0-plugins-good-autodetect \
-    gstreamer1.0-plugins-good-souphttpsrc"
+    gstreamer1.0-plugins-good-soup"
 
 RRECOMMENDS_gstreamer1.0-meta-x11-base = "\
     gstreamer1.0-plugins-base-ximagesink \
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-omx.inc b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-omx.inc
index 05562b1..5d92351 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-omx.inc
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-omx.inc
@@ -29,10 +29,17 @@
         d.setVar("PACKAGE_ARCH", d.getVar("MACHINE_ARCH"))
 }
 
+delete_pkg_m4_file() {
+    # Delete m4 files which we provide patched versions of but will be ignored
+    # if these exist
+	rm -f "${S}/common/m4/pkg.m4"
+	rm -f "${S}/common/m4/gtk-doc.m4"
+}
+do_configure[prefuncs] += "delete_pkg_m4_file"
+
 set_omx_core_name() {
 	sed -i -e "s;^core-name=.*;core-name=${GSTREAMER_1_0_OMX_CORE_NAME};" "${D}${sysconfdir}/xdg/gstomx.conf"
 }
-
 do_install[postfuncs] += " set_omx_core_name "
 
 FILES_${PN} += "${libdir}/gstreamer-1.0/*.so"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-omx_1.10.4.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-omx_1.10.4.bb
deleted file mode 100644
index dfeefa5..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-omx_1.10.4.bb
+++ /dev/null
@@ -1,11 +0,0 @@
-include gstreamer1.0-omx.inc
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c \
-                    file://omx/gstomx.h;beginline=1;endline=21;md5=5c8e1fca32704488e76d2ba9ddfa935f"
-
-SRC_URI = "http://gstreamer.freedesktop.org/src/gst-omx/gst-omx-${PV}.tar.xz"
-
-SRC_URI[md5sum] = "cedb230f1c47d0cf4b575d70dff66ff2"
-SRC_URI[sha256sum] = "45072925cf262f0fd528fab78f0de52734e46a5a88aa802fae51c67c09c81aa2"
-
-S = "${WORKDIR}/gst-omx-${PV}"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-omx_1.12.2.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-omx_1.12.2.bb
new file mode 100644
index 0000000..a1d4576
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-omx_1.12.2.bb
@@ -0,0 +1,11 @@
+include gstreamer1.0-omx.inc
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c \
+                    file://omx/gstomx.h;beginline=1;endline=21;md5=5c8e1fca32704488e76d2ba9ddfa935f"
+
+SRC_URI = "http://gstreamer.freedesktop.org/src/gst-omx/gst-omx-${PV}.tar.xz"
+
+SRC_URI[md5sum] = "4a1404a20b72e4ab6e826500218ec308"
+SRC_URI[sha256sum] = "1b22398f45a027e977d2b5309625ec91cdcaf0da8751cbc7f596d639a45ba298"
+
+S = "${WORKDIR}/gst-omx-${PV}"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad.inc b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad.inc
index 0ccfc89..7be15d9 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad.inc
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad.inc
@@ -20,7 +20,7 @@
     ${GSTREAMER_ORC} \
     ${PACKAGECONFIG_GL} \
     ${@bb.utils.contains('DISTRO_FEATURES', 'bluetooth', 'bluez', '', d)} \
-    ${@bb.utils.filter('DISTRO_FEATURES', 'directfb', d)} \
+    ${@bb.utils.filter('DISTRO_FEATURES', 'directfb vulkan', d)} \
     ${@bb.utils.contains('DISTRO_FEATURES', 'wayland', 'wayland egl', '', d)} \
     bz2 curl dash dtls hls neon rsvg sbc smoothstreaming sndfile uvch264 webp \
 "
@@ -40,9 +40,7 @@
 PACKAGECONFIG[fluidsynth]      = "--enable-fluidsynth,--disable-fluidsynth,fluidsynth"
 PACKAGECONFIG[gles2]           = "--enable-gles2,--disable-gles2,virtual/libgles2"
 PACKAGECONFIG[gtk]             = "--enable-gtk3,--disable-gtk3,gtk+3"
-# ensure OpenSSL is used for HLS AES description instead of nettle
-# (OpenSSL is a shared dependency with dtls)
-PACKAGECONFIG[hls]             = "--enable-hls --with-hls-crypto=openssl,--disable-hls,openssl"
+PACKAGECONFIG[hls]             = "--enable-hls --with-hls-crypto=nettle,--disable-hls,nettle"
 PACKAGECONFIG[kms]             = "--enable-kms,--disable-kms,libdrm"
 PACKAGECONFIG[libmms]          = "--enable-libmms,--disable-libmms,libmms"
 PACKAGECONFIG[libssh2]         = "--enable-libssh2,--disable-libssh2,libssh2"
@@ -66,19 +64,15 @@
 PACKAGECONFIG[uvch264]         = "--enable-uvch264,--disable-uvch264,libusb1 libgudev"
 PACKAGECONFIG[voaacenc]        = "--enable-voaacenc,--disable-voaacenc,vo-aacenc"
 PACKAGECONFIG[voamrwbenc]      = "--enable-voamrwbenc,--disable-voamrwbenc,vo-amrwbenc"
-PACKAGECONFIG[wayland]         = "--enable-wayland,--disable-wayland,wayland-native wayland wayland-protocols"
+PACKAGECONFIG[vulkan]          = "--enable-vulkan,--disable-vulkan,vulkan"
+PACKAGECONFIG[wayland]         = "--enable-wayland,--disable-wayland,wayland-native wayland wayland-protocols libdrm"
 PACKAGECONFIG[webp]            = "--enable-webp,--disable-webp,libwebp"
 
-# these plugins have not been ported to 1.0 (yet):
-#   apexsink linsys nas timidity sdl xvid wininet
-#   sndio cdxaparse dccp faceoverlay hdvparse tta mve nuvdemux
-#   patchdetect sdi videomeasure
-
 # these plugins have no corresponding library in OE-core or meta-openembedded:
-#   openni2 winks direct3d directsound winscreencap acm apple_media
+#   openni2 winks direct3d directsound winscreencap acm apple_media iqa
 #   android_media avc bs2b chromaprint daala dts fdkaac gme gsm kate ladspa libde265
-#   lv2 mimic mpeg2enc mplex musepack nvenc ofa openh264 opensles pvr soundtouch spandsp
-#   spc teletextdec tinyalsa vdpau vulkan wasapi x265 zbar
+#   lv2 mpeg2enc mplex msdk musepack nvenc ofa openh264 opensles soundtouch spandsp
+#   spc teletextdec tinyalsa vdpau wasapi x265 zbar webrtcdsp
 
 # qt5 support is disabled, because it is not present in OE core, and requires more work than
 # just adding a packageconfig (it requires access to moc, uic, rcc, and qmake paths).
@@ -94,7 +88,6 @@
     --enable-vcd \
     --disable-acm \
     --disable-android_media \
-    --disable-apexsink \
     --disable-apple_media \
     --disable-avc \
     --disable-bs2b \
@@ -107,43 +100,34 @@
     --disable-fdk_aac \
     --disable-gme \
     --disable-gsm \
+    --disable-iqa \
     --disable-kate \
     --disable-ladspa \
     --disable-libde265 \
-    --disable-libvisual \
-    --disable-linsys \
     --disable-lv2 \
-    --disable-mimic \
     --disable-mpeg2enc \
     --disable-mplex \
+    --disable-msdk \
     --disable-musepack \
-    --disable-nas \
     --disable-nvenc \
     --disable-ofa \
     --disable-openexr \
     --disable-openh264 \
     --disable-openni2 \
     --disable-opensles \
-    --disable-pvr \
     --disable-qt \
-    --disable-sdl \
-    --disable-sdltest \
-    --disable-sndio \
     --disable-soundtouch \
     --disable-spandsp \
     --disable-spc \
     --disable-teletextdec \
-    --disable-timidity \
     --disable-tinyalsa \
     --disable-vdpau \
-    --disable-vulkan \
     --disable-wasapi \
+    --disable-webrtcdsp \
     --disable-wildmidi \
-    --disable-wininet \
     --disable-winks \
     --disable-winscreencap \
     --disable-x265 \
-    --disable-xvid \
     --disable-zbar \
     ${@bb.utils.contains("TUNE_FEATURES", "mx32", "--disable-yadif", "", d)} \
 "
@@ -157,3 +141,7 @@
 FILES_${PN}-freeverb += "${datadir}/gstreamer-${LIBV}/presets/GstFreeverb.prs"
 FILES_${PN}-opencv += "${datadir}/gst-plugins-bad/${LIBV}/opencv*"
 FILES_${PN}-voamrwbenc += "${datadir}/gstreamer-${LIBV}/presets/GstVoAmrwbEnc.prs"
+
+do_compile_prepend() {
+    export GIR_EXTRA_LIBS_PATH="${B}/gst-libs/gst/allocators/.libs"
+}
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-Makefile.am-don-t-hardcode-libtool-name-when-running.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-Makefile.am-don-t-hardcode-libtool-name-when-running.patch
index 43f1ee0..8d99dc6 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-Makefile.am-don-t-hardcode-libtool-name-when-running.patch
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-Makefile.am-don-t-hardcode-libtool-name-when-running.patch
@@ -1,4 +1,4 @@
-From cff6fbf555a072408c21da1e818209c9d3814dd3 Mon Sep 17 00:00:00 2001
+From 7592e793b3906355d76ca9a59f8fea2749ea2a4e Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
 Date: Tue, 27 Oct 2015 14:36:58 +0200
 Subject: [PATCH] Makefile.am: don't hardcode libtool name when running
@@ -7,17 +7,34 @@
 Upstream-Status: Pending [review on oe-core list]
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
 
----
- gst-libs/gst/gl/Makefile.am        | 2 +-
- gst-libs/gst/insertbin/Makefile.am | 2 +-
- gst-libs/gst/mpegts/Makefile.am    | 2 +-
- 3 files changed, 3 insertions(+), 3 deletions(-)
+%% original patch: 0001-Makefile.am-don-t-hardcode-libtool-name-when-running.patch
 
-Index: gst-plugins-bad-1.10.1/gst-libs/gst/gl/Makefile.am
-===================================================================
---- gst-plugins-bad-1.10.1.orig/gst-libs/gst/gl/Makefile.am
-+++ gst-plugins-bad-1.10.1/gst-libs/gst/gl/Makefile.am
-@@ -171,7 +171,7 @@ GstGL-@GST_API_VERSION@.gir: $(INTROSPEC
+Signed-off-by: Maxin B. John <maxin.john@intel.com>
+---
+ gst-libs/gst/allocators/Makefile.am | 2 +-
+ gst-libs/gst/gl/Makefile.am         | 2 +-
+ gst-libs/gst/insertbin/Makefile.am  | 2 +-
+ gst-libs/gst/mpegts/Makefile.am     | 2 +-
+ 4 files changed, 4 insertions(+), 4 deletions(-)
+
+diff --git a/gst-libs/gst/allocators/Makefile.am b/gst-libs/gst/allocators/Makefile.am
+index e50d077..623f092 100644
+--- a/gst-libs/gst/allocators/Makefile.am
++++ b/gst-libs/gst/allocators/Makefile.am
+@@ -37,7 +37,7 @@ GstBadAllocators-@GST_API_VERSION@.gir: $(INTROSPECTION_SCANNER) libgstbadalloca
+ 		--add-include-path=`PKG_CONFIG_PATH="$(GST_PKG_CONFIG_PATH)" $(PKG_CONFIG) --variable=girdir gstreamer-@GST_API_VERSION@` \
+ 		--library=libgstbadallocators-@GST_API_VERSION@.la \
+ 		--include=Gst-@GST_API_VERSION@ \
+-		--libtool="$(top_builddir)/libtool" \
++		--libtool="$(LIBTOOL)" \
+ 		--pkg gstreamer-@GST_API_VERSION@ \
+ 		--pkg-export gstreamer-badallocators-@GST_API_VERSION@ \
+ 		--output $@ \
+diff --git a/gst-libs/gst/gl/Makefile.am b/gst-libs/gst/gl/Makefile.am
+index 2ae4773..dfa7a7d 100644
+--- a/gst-libs/gst/gl/Makefile.am
++++ b/gst-libs/gst/gl/Makefile.am
+@@ -178,7 +178,7 @@ GstGL-@GST_API_VERSION@.gir: $(INTROSPECTION_SCANNER) libgstgl-@GST_API_VERSION@
  		--include=Gst-@GST_API_VERSION@ \
  		--include=GstBase-@GST_API_VERSION@ \
  		--include=GstVideo-@GST_API_VERSION@ \
@@ -26,11 +43,11 @@
  		--pkg gstreamer-@GST_API_VERSION@ \
  		--pkg gstreamer-base-@GST_API_VERSION@ \
  		--pkg gstreamer-video-@GST_API_VERSION@ \
-Index: gst-plugins-bad-1.10.1/gst-libs/gst/insertbin/Makefile.am
-===================================================================
---- gst-plugins-bad-1.10.1.orig/gst-libs/gst/insertbin/Makefile.am
-+++ gst-plugins-bad-1.10.1/gst-libs/gst/insertbin/Makefile.am
-@@ -45,7 +45,7 @@ GstInsertBin-@GST_API_VERSION@.gir: $(IN
+diff --git a/gst-libs/gst/insertbin/Makefile.am b/gst-libs/gst/insertbin/Makefile.am
+index 1f8ea30..4b98ef6 100644
+--- a/gst-libs/gst/insertbin/Makefile.am
++++ b/gst-libs/gst/insertbin/Makefile.am
+@@ -45,7 +45,7 @@ GstInsertBin-@GST_API_VERSION@.gir: $(INTROSPECTION_SCANNER) libgstinsertbin-@GS
  		--library=libgstinsertbin-@GST_API_VERSION@.la \
  		--include=Gst-@GST_API_VERSION@ \
  		--include=GstBase-@GST_API_VERSION@ \
@@ -39,11 +56,11 @@
  		--pkg gstreamer-@GST_API_VERSION@ \
  		--pkg gstreamer-base-@GST_API_VERSION@ \
  		--pkg-export gstreamer-insertbin-@GST_API_VERSION@ \
-Index: gst-plugins-bad-1.10.1/gst-libs/gst/mpegts/Makefile.am
-===================================================================
---- gst-plugins-bad-1.10.1.orig/gst-libs/gst/mpegts/Makefile.am
-+++ gst-plugins-bad-1.10.1/gst-libs/gst/mpegts/Makefile.am
-@@ -79,7 +79,7 @@ GstMpegts-@GST_API_VERSION@.gir: $(INTRO
+diff --git a/gst-libs/gst/mpegts/Makefile.am b/gst-libs/gst/mpegts/Makefile.am
+index aeea32e..929d9cc 100644
+--- a/gst-libs/gst/mpegts/Makefile.am
++++ b/gst-libs/gst/mpegts/Makefile.am
+@@ -79,7 +79,7 @@ GstMpegts-@GST_API_VERSION@.gir: $(INTROSPECTION_SCANNER) libgstmpegts-@GST_API_
  		--add-include-path=`PKG_CONFIG_PATH="$(GST_PKG_CONFIG_PATH)" $(PKG_CONFIG) --variable=girdir gstreamer-video-@GST_API_VERSION@` \
  		--library=libgstmpegts-@GST_API_VERSION@.la \
  		--include=Gst-@GST_API_VERSION@ \
@@ -52,3 +69,6 @@
  		--pkg gstreamer-@GST_API_VERSION@ \
  		--pkg gstreamer-video-@GST_API_VERSION@ \
  		--pkg-export gstreamer-mpegts-@GST_API_VERSION@ \
+-- 
+2.4.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-Prepend-PKG_CONFIG_SYSROOT_DIR-to-pkg-config-output.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-Prepend-PKG_CONFIG_SYSROOT_DIR-to-pkg-config-output.patch
index 86a4495..48d93ab 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-Prepend-PKG_CONFIG_SYSROOT_DIR-to-pkg-config-output.patch
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-Prepend-PKG_CONFIG_SYSROOT_DIR-to-pkg-config-output.patch
@@ -1,34 +1,38 @@
-From c271503d7e233428ac0323c51d6517113e26bef7 Mon Sep 17 00:00:00 2001
+From 7c8f68c5428380b930579dc9ef27c853264448fd Mon Sep 17 00:00:00 2001
 From: Khem Raj <raj.khem@gmail.com>
-Date: Thu, 1 Dec 2016 00:27:13 -0800
+Date: Mon, 15 May 2017 15:06:11 +0300
 Subject: [PATCH] Prepend PKG_CONFIG_SYSROOT_DIR to pkg-config output
 
 In cross environment we have to prepend the sysroot to the path found by
 pkgconfig since the path returned from pkgconfig does not have sysroot prefixed
 it ends up using the files from host system. If build host has wayland installed
-the build will succeed but if you dont have wayland-protocols installed on build host then
-it wont find the files on build host
+the build will succeed but if you dont have wayland-protocols installed on build
+host then it wont find the files on build host
 
-This should work ok with non sysrooted builds too since in those cases PKG_CONFIG_SYSROOT_DIR
-will be empty
+This should work ok with non sysrooted builds too since
+in those cases PKG_CONFIG_SYSROOT_DIR will be empty
 
 Upstream-Status: Pending
 
 Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Signed-off-by: Maxin B. John <maxin.john@intel.com>
 ---
  configure.ac | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
-Index: gst-plugins-bad-1.10.1/configure.ac
-===================================================================
---- gst-plugins-bad-1.10.1.orig/configure.ac
-+++ gst-plugins-bad-1.10.1/configure.ac
-@@ -2233,7 +2233,7 @@ AG_GST_CHECK_FEATURE(WAYLAND, [wayland s
-     PKG_CHECK_MODULES(WAYLAND_PROTOCOLS, wayland-protocols >= 1.4, [
-       if test "x$wayland_scanner" != "x"; then
-         HAVE_WAYLAND="yes"
--        AC_SUBST(WAYLAND_PROTOCOLS_DATADIR, `$PKG_CONFIG --variable=pkgdatadir wayland-protocols`)
-+        AC_SUBST(WAYLAND_PROTOCOLS_DATADIR, ${WAYLAND_PROTOCOLS_SYSROOT_DIR}`$PKG_CONFIG --variable=pkgdatadir wayland-protocols`)
-       else
-         AC_MSG_RESULT([wayland-scanner is required to build the wayland plugin])
-         HAVE_WAYLAND="no"
+diff --git a/configure.ac b/configure.ac
+index e307be6..83cdeb0 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -2272,7 +2272,7 @@ AG_GST_CHECK_FEATURE(WAYLAND, [wayland sink], wayland , [
+   PKG_CHECK_MODULES(WAYLAND, wayland-client >= 1.4.0 libdrm >= 2.4.55 wayland-protocols >= 1.4, [
+     if test "x$wayland_scanner" != "x"; then
+       HAVE_WAYLAND="yes"
+-      AC_SUBST(WAYLAND_PROTOCOLS_DATADIR, `$PKG_CONFIG --variable=pkgdatadir wayland-protocols`)
++      AC_SUBST(WAYLAND_PROTOCOLS_DATADIR, ${WAYLAND_PROTOCOLS_SYSROOT_DIR}`$PKG_CONFIG --variable=pkgdatadir wayland-protocols`)
+     else
+       AC_MSG_RESULT([wayland-scanner is required to build the wayland plugin])
+       HAVE_WAYLAND="no"
+-- 
+2.4.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-gstreamer-gl.pc.in-don-t-append-GL_CFLAGS-to-CFLAGS.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-gstreamer-gl.pc.in-don-t-append-GL_CFLAGS-to-CFLAGS.patch
index 9fc91d8..2235a57 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-gstreamer-gl.pc.in-don-t-append-GL_CFLAGS-to-CFLAGS.patch
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-gstreamer-gl.pc.in-don-t-append-GL_CFLAGS-to-CFLAGS.patch
@@ -1,24 +1,29 @@
-From a93ca63d01e7cd1e40b5be576992f77fac364bd5 Mon Sep 17 00:00:00 2001
+From 5622ca3b61603dc316a0f1fbede3f9aa353a5e48 Mon Sep 17 00:00:00 2001
 From: Alexander Kanavin <alex.kanavin@gmail.com>
-Date: Mon, 21 Mar 2016 18:21:17 +0200
+Date: Fri, 12 May 2017 16:47:12 +0300
 Subject: [PATCH] gstreamer-gl.pc.in: don't append GL_CFLAGS to CFLAGS
 
 Dependencies' include directories should not be added in this way;
 it causes problems when cross-compiling in sysroot environments.
 
 Upstream-Status: Pending
+
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+Signed-off-by: Maxin B. John <maxin.john@intel.com>
 ---
  pkgconfig/gstreamer-gl.pc.in | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)
 
-Index: gst-plugins-bad-1.10.1/pkgconfig/gstreamer-gl.pc.in
-===================================================================
---- gst-plugins-bad-1.10.1.orig/pkgconfig/gstreamer-gl.pc.in
-+++ gst-plugins-bad-1.10.1/pkgconfig/gstreamer-gl.pc.in
-@@ -10,4 +10,4 @@ Version: @VERSION@
+diff --git a/pkgconfig/gstreamer-gl.pc.in b/pkgconfig/gstreamer-gl.pc.in
+index 8e7a303..d167be1 100644
+--- a/pkgconfig/gstreamer-gl.pc.in
++++ b/pkgconfig/gstreamer-gl.pc.in
+@@ -13,4 +13,4 @@ Version: @VERSION@
  Requires: gstreamer-base-@GST_API_VERSION@ gstreamer-@GST_API_VERSION@
  
- Libs: -L${libdir} -lgstgl-@GST_API_VERSION@ @GL_LIBS@
+ Libs: -L${libdir} -lgstgl-@GST_API_VERSION@
 -Cflags: -I${includedir} -I${libdir}/gstreamer-@GST_API_VERSION@/include @GL_CFLAGS@
 +Cflags: -I${includedir} -I${libdir}/gstreamer-@GST_API_VERSION@/include
+-- 
+2.4.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-mssdemux-improved-live-playback-support.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-mssdemux-improved-live-playback-support.patch
deleted file mode 100644
index 4832c18..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-mssdemux-improved-live-playback-support.patch
+++ /dev/null
@@ -1,929 +0,0 @@
-From 73721ad4e9e2d32e1c8b6a3b4aaa98401530e58a Mon Sep 17 00:00:00 2001
-From: Philippe Normand <philn@igalia.com>
-Date: Tue, 29 Nov 2016 14:43:41 +0100
-Subject: [PATCH] mssdemux: improved live playback support
-
-When a MSS server hosts a live stream the fragments listed in the
-manifest usually don't have accurate timestamps and duration, except
-for the first fragment, which additionally stores timing information
-for the few upcoming fragments. In this scenario it is useless to
-periodically fetch and update the manifest and the fragments list can
-be incrementally built by parsing the first/current fragment.
-
-https://bugzilla.gnome.org/show_bug.cgi?id=755036
----
-Upstream-Status: Backport
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
- ext/smoothstreaming/Makefile.am               |   2 +
- ext/smoothstreaming/gstmssdemux.c             |  60 ++++++
- ext/smoothstreaming/gstmssfragmentparser.c    | 266 ++++++++++++++++++++++++++
- ext/smoothstreaming/gstmssfragmentparser.h    |  84 ++++++++
- ext/smoothstreaming/gstmssmanifest.c          | 158 ++++++++++++++-
- ext/smoothstreaming/gstmssmanifest.h          |   7 +
- gst-libs/gst/adaptivedemux/gstadaptivedemux.c |  27 ++-
- gst-libs/gst/adaptivedemux/gstadaptivedemux.h |  14 ++
- 8 files changed, 606 insertions(+), 12 deletions(-)
- create mode 100644 ext/smoothstreaming/gstmssfragmentparser.c
- create mode 100644 ext/smoothstreaming/gstmssfragmentparser.h
-
-diff --git a/ext/smoothstreaming/Makefile.am b/ext/smoothstreaming/Makefile.am
-index 4faf9df9f..a5e1ad6ae 100644
---- a/ext/smoothstreaming/Makefile.am
-+++ b/ext/smoothstreaming/Makefile.am
-@@ -13,8 +13,10 @@ libgstsmoothstreaming_la_LIBADD = \
- libgstsmoothstreaming_la_LDFLAGS = ${GST_PLUGIN_LDFLAGS}
- libgstsmoothstreaming_la_SOURCES = gstsmoothstreaming-plugin.c \
- 	gstmssdemux.c \
-+	gstmssfragmentparser.c \
- 	gstmssmanifest.c
- libgstsmoothstreaming_la_LIBTOOLFLAGS = --tag=disable-static
- 
- noinst_HEADERS = gstmssdemux.h \
-+	gstmssfragmentparser.h \
- 	gstmssmanifest.h
-diff --git a/ext/smoothstreaming/gstmssdemux.c b/ext/smoothstreaming/gstmssdemux.c
-index 12fb40497..120d9c22b 100644
---- a/ext/smoothstreaming/gstmssdemux.c
-+++ b/ext/smoothstreaming/gstmssdemux.c
-@@ -135,11 +135,18 @@ gst_mss_demux_stream_update_fragment_info (GstAdaptiveDemuxStream * stream);
- static gboolean gst_mss_demux_seek (GstAdaptiveDemux * demux, GstEvent * seek);
- static gint64
- gst_mss_demux_get_manifest_update_interval (GstAdaptiveDemux * demux);
-+static gint64
-+gst_mss_demux_stream_get_fragment_waiting_time (GstAdaptiveDemuxStream *
-+    stream);
- static GstFlowReturn
- gst_mss_demux_update_manifest_data (GstAdaptiveDemux * demux,
-     GstBuffer * buffer);
- static gboolean gst_mss_demux_get_live_seek_range (GstAdaptiveDemux * demux,
-     gint64 * start, gint64 * stop);
-+static GstFlowReturn gst_mss_demux_data_received (GstAdaptiveDemux * demux,
-+    GstAdaptiveDemuxStream * stream, GstBuffer * buffer);
-+static gboolean
-+gst_mss_demux_requires_periodical_playlist_update (GstAdaptiveDemux * demux);
- 
- static void
- gst_mss_demux_class_init (GstMssDemuxClass * klass)
-@@ -192,10 +199,15 @@ gst_mss_demux_class_init (GstMssDemuxClass * klass)
-       gst_mss_demux_stream_select_bitrate;
-   gstadaptivedemux_class->stream_update_fragment_info =
-       gst_mss_demux_stream_update_fragment_info;
-+  gstadaptivedemux_class->stream_get_fragment_waiting_time =
-+      gst_mss_demux_stream_get_fragment_waiting_time;
-   gstadaptivedemux_class->update_manifest_data =
-       gst_mss_demux_update_manifest_data;
-   gstadaptivedemux_class->get_live_seek_range =
-       gst_mss_demux_get_live_seek_range;
-+  gstadaptivedemux_class->data_received = gst_mss_demux_data_received;
-+  gstadaptivedemux_class->requires_periodical_playlist_update =
-+      gst_mss_demux_requires_periodical_playlist_update;
- 
-   GST_DEBUG_CATEGORY_INIT (mssdemux_debug, "mssdemux", 0, "mssdemux plugin");
- }
-@@ -650,6 +662,13 @@ gst_mss_demux_get_manifest_update_interval (GstAdaptiveDemux * demux)
-   return interval;
- }
- 
-+static gint64
-+gst_mss_demux_stream_get_fragment_waiting_time (GstAdaptiveDemuxStream * stream)
-+{
-+  /* Wait a second for live streams so we don't try premature fragments downloading */
-+  return GST_SECOND;
-+}
-+
- static GstFlowReturn
- gst_mss_demux_update_manifest_data (GstAdaptiveDemux * demux,
-     GstBuffer * buffer)
-@@ -670,3 +689,44 @@ gst_mss_demux_get_live_seek_range (GstAdaptiveDemux * demux, gint64 * start,
- 
-   return gst_mss_manifest_get_live_seek_range (mssdemux->manifest, start, stop);
- }
-+
-+static GstFlowReturn
-+gst_mss_demux_data_received (GstAdaptiveDemux * demux,
-+    GstAdaptiveDemuxStream * stream, GstBuffer * buffer)
-+{
-+  GstMssDemux *mssdemux = GST_MSS_DEMUX_CAST (demux);
-+  GstMssDemuxStream *mssstream = (GstMssDemuxStream *) stream;
-+  gsize available;
-+
-+  if (!gst_mss_manifest_is_live (mssdemux->manifest)) {
-+    return GST_ADAPTIVE_DEMUX_CLASS (parent_class)->data_received (demux,
-+        stream, buffer);
-+  }
-+
-+  if (gst_mss_stream_fragment_parsing_needed (mssstream->manifest_stream)) {
-+    gst_mss_manifest_live_adapter_push (mssstream->manifest_stream, buffer);
-+    available =
-+        gst_mss_manifest_live_adapter_available (mssstream->manifest_stream);
-+    // FIXME: try to reduce this minimal size.
-+    if (available < 4096) {
-+      return GST_FLOW_OK;
-+    } else {
-+      GST_LOG_OBJECT (stream->pad, "enough data, parsing fragment.");
-+      buffer =
-+          gst_mss_manifest_live_adapter_take_buffer (mssstream->manifest_stream,
-+          available);
-+      gst_mss_stream_parse_fragment (mssstream->manifest_stream, buffer);
-+    }
-+  }
-+
-+  return GST_ADAPTIVE_DEMUX_CLASS (parent_class)->data_received (demux, stream,
-+      buffer);
-+}
-+
-+static gboolean
-+gst_mss_demux_requires_periodical_playlist_update (GstAdaptiveDemux * demux)
-+{
-+  GstMssDemux *mssdemux = GST_MSS_DEMUX_CAST (demux);
-+
-+  return (!gst_mss_manifest_is_live (mssdemux->manifest));
-+}
-diff --git a/ext/smoothstreaming/gstmssfragmentparser.c b/ext/smoothstreaming/gstmssfragmentparser.c
-new file mode 100644
-index 000000000..b554d4f31
---- /dev/null
-+++ b/ext/smoothstreaming/gstmssfragmentparser.c
-@@ -0,0 +1,266 @@
-+/*
-+ * Microsoft Smooth-Streaming fragment parsing library
-+ *
-+ * gstmssfragmentparser.h
-+ *
-+ * Copyright (C) 2016 Igalia S.L
-+ * Copyright (C) 2016 Metrological
-+ *   Author: Philippe Normand <philn@igalia.com>
-+ *
-+ * This library is free software; you can redistribute it and/or
-+ * modify it under the terms of the GNU Library General Public
-+ * License as published by the Free Software Foundation; either
-+ * version 2.1 of the License, or (at your option) any later version.
-+ *
-+ * This library is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+ * Library General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU Library General Public
-+ * License along with this library (COPYING); if not, write to the
-+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
-+ * Boston, MA 02111-1307, USA.
-+ */
-+
-+#include "gstmssfragmentparser.h"
-+#include <gst/base/gstbytereader.h>
-+#include <string.h>
-+
-+GST_DEBUG_CATEGORY_EXTERN (mssdemux_debug);
-+#define GST_CAT_DEFAULT mssdemux_debug
-+
-+void
-+gst_mss_fragment_parser_init (GstMssFragmentParser * parser)
-+{
-+  parser->status = GST_MSS_FRAGMENT_HEADER_PARSER_INIT;
-+  parser->tfrf.entries_count = 0;
-+}
-+
-+void
-+gst_mss_fragment_parser_clear (GstMssFragmentParser * parser)
-+{
-+  parser->tfrf.entries_count = 0;
-+  if (parser->tfrf.entries) {
-+    g_free (parser->tfrf.entries);
-+    parser->tfrf.entries = 0;
-+  }
-+}
-+
-+static gboolean
-+_parse_tfrf_box (GstMssFragmentParser * parser, GstByteReader * reader)
-+{
-+  guint8 version;
-+  guint32 flags = 0;
-+  guint8 fragment_count = 0;
-+  guint8 index = 0;
-+
-+  if (!gst_byte_reader_get_uint8 (reader, &version)) {
-+    GST_ERROR ("Error getting box's version field");
-+    return FALSE;
-+  }
-+
-+  if (!gst_byte_reader_get_uint24_be (reader, &flags)) {
-+    GST_ERROR ("Error getting box's flags field");
-+    return FALSE;
-+  }
-+
-+  gst_byte_reader_get_uint8 (reader, &fragment_count);
-+  parser->tfrf.entries_count = fragment_count;
-+  parser->tfrf.entries =
-+      g_malloc (sizeof (GstTfrfBoxEntry) * parser->tfrf.entries_count);
-+  for (index = 0; index < fragment_count; index++) {
-+    guint64 absolute_time = 0;
-+    guint64 absolute_duration = 0;
-+    if (version & 0x01) {
-+      gst_byte_reader_get_uint64_be (reader, &absolute_time);
-+      gst_byte_reader_get_uint64_be (reader, &absolute_duration);
-+    } else {
-+      guint32 time = 0;
-+      guint32 duration = 0;
-+      gst_byte_reader_get_uint32_be (reader, &time);
-+      gst_byte_reader_get_uint32_be (reader, &duration);
-+      time = ~time;
-+      duration = ~duration;
-+      absolute_time = ~time;
-+      absolute_duration = ~duration;
-+    }
-+    parser->tfrf.entries[index].time = absolute_time;
-+    parser->tfrf.entries[index].duration = absolute_duration;
-+  }
-+
-+  GST_LOG ("tfrf box parsed");
-+  return TRUE;
-+}
-+
-+static gboolean
-+_parse_tfxd_box (GstMssFragmentParser * parser, GstByteReader * reader)
-+{
-+  guint8 version;
-+  guint32 flags = 0;
-+  guint64 absolute_time = 0;
-+  guint64 absolute_duration = 0;
-+
-+  if (!gst_byte_reader_get_uint8 (reader, &version)) {
-+    GST_ERROR ("Error getting box's version field");
-+    return FALSE;
-+  }
-+
-+  if (!gst_byte_reader_get_uint24_be (reader, &flags)) {
-+    GST_ERROR ("Error getting box's flags field");
-+    return FALSE;
-+  }
-+
-+  if (version & 0x01) {
-+    gst_byte_reader_get_uint64_be (reader, &absolute_time);
-+    gst_byte_reader_get_uint64_be (reader, &absolute_duration);
-+  } else {
-+    guint32 time = 0;
-+    guint32 duration = 0;
-+    gst_byte_reader_get_uint32_be (reader, &time);
-+    gst_byte_reader_get_uint32_be (reader, &duration);
-+    time = ~time;
-+    duration = ~duration;
-+    absolute_time = ~time;
-+    absolute_duration = ~duration;
-+  }
-+
-+  parser->tfxd.time = absolute_time;
-+  parser->tfxd.duration = absolute_duration;
-+  GST_LOG ("tfxd box parsed");
-+  return TRUE;
-+}
-+
-+gboolean
-+gst_mss_fragment_parser_add_buffer (GstMssFragmentParser * parser,
-+    GstBuffer * buffer)
-+{
-+  GstByteReader reader;
-+  GstMapInfo info;
-+  guint32 size;
-+  guint32 fourcc;
-+  const guint8 *uuid;
-+  gboolean error = FALSE;
-+  gboolean mdat_box_found = FALSE;
-+
-+  static const guint8 tfrf_uuid[] = {
-+    0xd4, 0x80, 0x7e, 0xf2, 0xca, 0x39, 0x46, 0x95,
-+    0x8e, 0x54, 0x26, 0xcb, 0x9e, 0x46, 0xa7, 0x9f
-+  };
-+
-+  static const guint8 tfxd_uuid[] = {
-+    0x6d, 0x1d, 0x9b, 0x05, 0x42, 0xd5, 0x44, 0xe6,
-+    0x80, 0xe2, 0x14, 0x1d, 0xaf, 0xf7, 0x57, 0xb2
-+  };
-+
-+  static const guint8 piff_uuid[] = {
-+    0xa2, 0x39, 0x4f, 0x52, 0x5a, 0x9b, 0x4f, 0x14,
-+    0xa2, 0x44, 0x6c, 0x42, 0x7c, 0x64, 0x8d, 0xf4
-+  };
-+
-+  if (!gst_buffer_map (buffer, &info, GST_MAP_READ)) {
-+    return FALSE;
-+  }
-+
-+  gst_byte_reader_init (&reader, info.data, info.size);
-+  GST_TRACE ("Total buffer size: %u", gst_byte_reader_get_size (&reader));
-+
-+  size = gst_byte_reader_get_uint32_be_unchecked (&reader);
-+  fourcc = gst_byte_reader_get_uint32_le_unchecked (&reader);
-+  if (fourcc == GST_MSS_FRAGMENT_FOURCC_MOOF) {
-+    GST_TRACE ("moof box found");
-+    size = gst_byte_reader_get_uint32_be_unchecked (&reader);
-+    fourcc = gst_byte_reader_get_uint32_le_unchecked (&reader);
-+    if (fourcc == GST_MSS_FRAGMENT_FOURCC_MFHD) {
-+      gst_byte_reader_skip_unchecked (&reader, size - 8);
-+
-+      size = gst_byte_reader_get_uint32_be_unchecked (&reader);
-+      fourcc = gst_byte_reader_get_uint32_le_unchecked (&reader);
-+      if (fourcc == GST_MSS_FRAGMENT_FOURCC_TRAF) {
-+        size = gst_byte_reader_get_uint32_be_unchecked (&reader);
-+        fourcc = gst_byte_reader_get_uint32_le_unchecked (&reader);
-+        if (fourcc == GST_MSS_FRAGMENT_FOURCC_TFHD) {
-+          gst_byte_reader_skip_unchecked (&reader, size - 8);
-+
-+          size = gst_byte_reader_get_uint32_be_unchecked (&reader);
-+          fourcc = gst_byte_reader_get_uint32_le_unchecked (&reader);
-+          if (fourcc == GST_MSS_FRAGMENT_FOURCC_TRUN) {
-+            GST_TRACE ("trun box found, size: %" G_GUINT32_FORMAT, size);
-+            if (!gst_byte_reader_skip (&reader, size - 8)) {
-+              GST_WARNING ("Failed to skip trun box, enough data?");
-+              error = TRUE;
-+              goto beach;
-+            }
-+          }
-+        }
-+      }
-+    }
-+  }
-+
-+  while (!mdat_box_found) {
-+    GST_TRACE ("remaining data: %u", gst_byte_reader_get_remaining (&reader));
-+    if (!gst_byte_reader_get_uint32_be (&reader, &size)) {
-+      GST_WARNING ("Failed to get box size, enough data?");
-+      error = TRUE;
-+      break;
-+    }
-+
-+    GST_TRACE ("box size: %" G_GUINT32_FORMAT, size);
-+    if (!gst_byte_reader_get_uint32_le (&reader, &fourcc)) {
-+      GST_WARNING ("Failed to get fourcc, enough data?");
-+      error = TRUE;
-+      break;
-+    }
-+
-+    if (fourcc == GST_MSS_FRAGMENT_FOURCC_MDAT) {
-+      GST_LOG ("mdat box found");
-+      mdat_box_found = TRUE;
-+      break;
-+    }
-+
-+    if (fourcc != GST_MSS_FRAGMENT_FOURCC_UUID) {
-+      GST_ERROR ("invalid UUID fourcc: %" GST_FOURCC_FORMAT,
-+          GST_FOURCC_ARGS (fourcc));
-+      error = TRUE;
-+      break;
-+    }
-+
-+    if (!gst_byte_reader_peek_data (&reader, 16, &uuid)) {
-+      GST_ERROR ("not enough data in UUID box");
-+      error = TRUE;
-+      break;
-+    }
-+
-+    if (memcmp (uuid, piff_uuid, 16) == 0) {
-+      gst_byte_reader_skip_unchecked (&reader, size - 8);
-+      GST_LOG ("piff box detected");
-+    }
-+
-+    if (memcmp (uuid, tfrf_uuid, 16) == 0) {
-+      gst_byte_reader_get_data (&reader, 16, &uuid);
-+      if (!_parse_tfrf_box (parser, &reader)) {
-+        GST_ERROR ("txrf box parsing error");
-+        error = TRUE;
-+        break;
-+      }
-+    }
-+
-+    if (memcmp (uuid, tfxd_uuid, 16) == 0) {
-+      gst_byte_reader_get_data (&reader, 16, &uuid);
-+      if (!_parse_tfxd_box (parser, &reader)) {
-+        GST_ERROR ("tfrf box parsing error");
-+        error = TRUE;
-+        break;
-+      }
-+    }
-+  }
-+
-+beach:
-+
-+  if (!error)
-+    parser->status = GST_MSS_FRAGMENT_HEADER_PARSER_FINISHED;
-+
-+  GST_LOG ("Fragment parsing successful: %s", error ? "no" : "yes");
-+  gst_buffer_unmap (buffer, &info);
-+  return !error;
-+}
-diff --git a/ext/smoothstreaming/gstmssfragmentparser.h b/ext/smoothstreaming/gstmssfragmentparser.h
-new file mode 100644
-index 000000000..cf4711865
---- /dev/null
-+++ b/ext/smoothstreaming/gstmssfragmentparser.h
-@@ -0,0 +1,84 @@
-+/*
-+ * Microsoft Smooth-Streaming fragment parsing library
-+ *
-+ * gstmssfragmentparser.h
-+ *
-+ * Copyright (C) 2016 Igalia S.L
-+ * Copyright (C) 2016 Metrological
-+ *   Author: Philippe Normand <philn@igalia.com>
-+ *
-+ * This library is free software; you can redistribute it and/or
-+ * modify it under the terms of the GNU Library General Public
-+ * License as published by the Free Software Foundation; either
-+ * version 2.1 of the License, or (at your option) any later version.
-+ *
-+ * This library is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+ * Library General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU Library General Public
-+ * License along with this library (COPYING); if not, write to the
-+ * Free Software Foundation, Inc., 59 Temple Place - Suite 330,
-+ * Boston, MA 02111-1307, USA.
-+ */
-+
-+#ifndef __GST_MSS_FRAGMENT_PARSER_H__
-+#define __GST_MSS_FRAGMENT_PARSER_H__
-+
-+#include <gst/gst.h>
-+
-+G_BEGIN_DECLS
-+
-+#define GST_MSS_FRAGMENT_FOURCC_MOOF GST_MAKE_FOURCC('m','o','o','f')
-+#define GST_MSS_FRAGMENT_FOURCC_MFHD GST_MAKE_FOURCC('m','f','h','d')
-+#define GST_MSS_FRAGMENT_FOURCC_TRAF GST_MAKE_FOURCC('t','r','a','f')
-+#define GST_MSS_FRAGMENT_FOURCC_TFHD GST_MAKE_FOURCC('t','f','h','d')
-+#define GST_MSS_FRAGMENT_FOURCC_TRUN GST_MAKE_FOURCC('t','r','u','n')
-+#define GST_MSS_FRAGMENT_FOURCC_UUID GST_MAKE_FOURCC('u','u','i','d')
-+#define GST_MSS_FRAGMENT_FOURCC_MDAT GST_MAKE_FOURCC('m','d','a','t')
-+
-+typedef struct _GstTfxdBox
-+{
-+  guint8 version;
-+  guint32 flags;
-+
-+  guint64 time;
-+  guint64 duration;
-+} GstTfxdBox;
-+
-+typedef struct _GstTfrfBoxEntry
-+{
-+  guint64 time;
-+  guint64 duration;
-+} GstTfrfBoxEntry;
-+
-+typedef struct _GstTfrfBox
-+{
-+  guint8 version;
-+  guint32 flags;
-+
-+  gint entries_count;
-+  GstTfrfBoxEntry *entries;
-+} GstTfrfBox;
-+
-+typedef enum _GstFragmentHeaderParserStatus
-+{
-+  GST_MSS_FRAGMENT_HEADER_PARSER_INIT,
-+  GST_MSS_FRAGMENT_HEADER_PARSER_FINISHED
-+} GstFragmentHeaderParserStatus;
-+
-+typedef struct _GstMssFragmentParser
-+{
-+  GstFragmentHeaderParserStatus status;
-+  GstTfxdBox tfxd;
-+  GstTfrfBox tfrf;
-+} GstMssFragmentParser;
-+
-+void gst_mss_fragment_parser_init (GstMssFragmentParser * parser);
-+void gst_mss_fragment_parser_clear (GstMssFragmentParser * parser);
-+gboolean gst_mss_fragment_parser_add_buffer (GstMssFragmentParser * parser, GstBuffer * buf);
-+
-+G_END_DECLS
-+
-+#endif /* __GST_MSS_FRAGMENT_PARSER_H__ */
-diff --git a/ext/smoothstreaming/gstmssmanifest.c b/ext/smoothstreaming/gstmssmanifest.c
-index 144bbb42d..e1031ba55 100644
---- a/ext/smoothstreaming/gstmssmanifest.c
-+++ b/ext/smoothstreaming/gstmssmanifest.c
-@@ -1,5 +1,7 @@
- /* GStreamer
-  * Copyright (C) 2012 Smart TV Alliance
-+ * Copyright (C) 2016 Igalia S.L
-+ * Copyright (C) 2016 Metrological
-  *  Author: Thiago Sousa Santos <thiago.sousa.santos@collabora.com>, Collabora Ltd.
-  *
-  * gstmssmanifest.c:
-@@ -31,6 +33,7 @@
- #include <gst/codecparsers/gsth264parser.h>
- 
- #include "gstmssmanifest.h"
-+#include "gstmssfragmentparser.h"
- 
- GST_DEBUG_CATEGORY_EXTERN (mssdemux_debug);
- #define GST_CAT_DEFAULT mssdemux_debug
-@@ -74,12 +77,17 @@ struct _GstMssStream
-   gboolean active;              /* if the stream is currently being used */
-   gint selectedQualityIndex;
- 
-+  gboolean has_live_fragments;
-+  GstAdapter *live_adapter;
-+
-   GList *fragments;
-   GList *qualities;
- 
-   gchar *url;
-   gchar *lang;
- 
-+  GstMssFragmentParser fragment_parser;
-+
-   guint fragment_repetition_index;
-   GList *current_fragment;
-   GList *current_quality;
-@@ -96,6 +104,7 @@ struct _GstMssManifest
- 
-   gboolean is_live;
-   gint64 dvr_window;
-+  guint64 look_ahead_fragment_count;
- 
-   GString *protection_system_id;
-   gchar *protection_data;
-@@ -235,7 +244,8 @@ compare_bitrate (GstMssStreamQuality * a, GstMssStreamQuality * b)
- }
- 
- static void
--_gst_mss_stream_init (GstMssStream * stream, xmlNodePtr node)
-+_gst_mss_stream_init (GstMssManifest * manifest, GstMssStream * stream,
-+    xmlNodePtr node)
- {
-   xmlNodePtr iter;
-   GstMssFragmentListBuilder builder;
-@@ -248,9 +258,21 @@ _gst_mss_stream_init (GstMssStream * stream, xmlNodePtr node)
-   stream->url = (gchar *) xmlGetProp (node, (xmlChar *) MSS_PROP_URL);
-   stream->lang = (gchar *) xmlGetProp (node, (xmlChar *) MSS_PROP_LANGUAGE);
- 
-+  /* for live playback each fragment usually has timing
-+   * information for the few next look-ahead fragments so the
-+   * playlist can be built incrementally from the first fragment
-+   * of the manifest.
-+   */
-+
-+  GST_DEBUG ("Live stream: %s, look-ahead fragments: %" G_GUINT64_FORMAT,
-+      manifest->is_live ? "yes" : "no", manifest->look_ahead_fragment_count);
-+  stream->has_live_fragments = manifest->is_live
-+      && manifest->look_ahead_fragment_count;
-+
-   for (iter = node->children; iter; iter = iter->next) {
-     if (node_has_type (iter, MSS_NODE_STREAM_FRAGMENT)) {
--      gst_mss_fragment_list_builder_add (&builder, iter);
-+      if (!stream->has_live_fragments || !builder.fragments)
-+        gst_mss_fragment_list_builder_add (&builder, iter);
-     } else if (node_has_type (iter, MSS_NODE_STREAM_QUALITY)) {
-       GstMssStreamQuality *quality = gst_mss_stream_quality_new (iter);
-       stream->qualities = g_list_prepend (stream->qualities, quality);
-@@ -259,17 +281,24 @@ _gst_mss_stream_init (GstMssStream * stream, xmlNodePtr node)
-     }
-   }
- 
--  stream->fragments = g_list_reverse (builder.fragments);
-+  if (stream->has_live_fragments) {
-+    stream->live_adapter = gst_adapter_new ();
-+  }
-+
-+  if (builder.fragments) {
-+    stream->fragments = g_list_reverse (builder.fragments);
-+    stream->current_fragment = stream->fragments;
-+  }
- 
-   /* order them from smaller to bigger based on bitrates */
-   stream->qualities =
-       g_list_sort (stream->qualities, (GCompareFunc) compare_bitrate);
--
--  stream->current_fragment = stream->fragments;
-   stream->current_quality = stream->qualities;
- 
-   stream->regex_bitrate = g_regex_new ("\\{[Bb]itrate\\}", 0, 0, NULL);
-   stream->regex_position = g_regex_new ("\\{start[ _]time\\}", 0, 0, NULL);
-+
-+  gst_mss_fragment_parser_init (&stream->fragment_parser);
- }
- 
- 
-@@ -315,6 +344,7 @@ gst_mss_manifest_new (GstBuffer * data)
-   xmlNodePtr nodeiter;
-   gchar *live_str;
-   GstMapInfo mapinfo;
-+  gchar *look_ahead_fragment_count_str;
- 
-   if (!gst_buffer_map (data, &mapinfo, GST_MAP_READ)) {
-     return NULL;
-@@ -335,6 +365,7 @@ gst_mss_manifest_new (GstBuffer * data)
-   /* the entire file is always available for non-live streams */
-   if (!manifest->is_live) {
-     manifest->dvr_window = 0;
-+    manifest->look_ahead_fragment_count = 0;
-   } else {
-     /* if 0, or non-existent, the length is infinite */
-     gchar *dvr_window_str = (gchar *) xmlGetProp (root,
-@@ -346,6 +377,17 @@ gst_mss_manifest_new (GstBuffer * data)
-         manifest->dvr_window = 0;
-       }
-     }
-+
-+    look_ahead_fragment_count_str =
-+        (gchar *) xmlGetProp (root, (xmlChar *) "LookAheadFragmentCount");
-+    if (look_ahead_fragment_count_str) {
-+      manifest->look_ahead_fragment_count =
-+          g_ascii_strtoull (look_ahead_fragment_count_str, NULL, 10);
-+      xmlFree (look_ahead_fragment_count_str);
-+      if (manifest->look_ahead_fragment_count <= 0) {
-+        manifest->look_ahead_fragment_count = 0;
-+      }
-+    }
-   }
- 
-   for (nodeiter = root->children; nodeiter; nodeiter = nodeiter->next) {
-@@ -354,7 +396,7 @@ gst_mss_manifest_new (GstBuffer * data)
-       GstMssStream *stream = g_new0 (GstMssStream, 1);
- 
-       manifest->streams = g_slist_append (manifest->streams, stream);
--      _gst_mss_stream_init (stream, nodeiter);
-+      _gst_mss_stream_init (manifest, stream, nodeiter);
-     }
- 
-     if (nodeiter->type == XML_ELEMENT_NODE
-@@ -371,6 +413,11 @@ gst_mss_manifest_new (GstBuffer * data)
- static void
- gst_mss_stream_free (GstMssStream * stream)
- {
-+  if (stream->live_adapter) {
-+    gst_adapter_clear (stream->live_adapter);
-+    g_object_unref (stream->live_adapter);
-+  }
-+
-   g_list_free_full (stream->fragments, g_free);
-   g_list_free_full (stream->qualities,
-       (GDestroyNotify) gst_mss_stream_quality_free);
-@@ -379,6 +426,7 @@ gst_mss_stream_free (GstMssStream * stream)
-   g_regex_unref (stream->regex_position);
-   g_regex_unref (stream->regex_bitrate);
-   g_free (stream);
-+  gst_mss_fragment_parser_clear (&stream->fragment_parser);
- }
- 
- void
-@@ -1079,6 +1127,9 @@ GstFlowReturn
- gst_mss_stream_advance_fragment (GstMssStream * stream)
- {
-   GstMssStreamFragment *fragment;
-+  const gchar *stream_type_name =
-+      gst_mss_stream_type_name (gst_mss_stream_get_type (stream));
-+
-   g_return_val_if_fail (stream->active, GST_FLOW_ERROR);
- 
-   if (stream->current_fragment == NULL)
-@@ -1086,14 +1137,20 @@ gst_mss_stream_advance_fragment (GstMssStream * stream)
- 
-   fragment = stream->current_fragment->data;
-   stream->fragment_repetition_index++;
--  if (stream->fragment_repetition_index < fragment->repetitions) {
--    return GST_FLOW_OK;
--  }
-+  if (stream->fragment_repetition_index < fragment->repetitions)
-+    goto beach;
- 
-   stream->fragment_repetition_index = 0;
-   stream->current_fragment = g_list_next (stream->current_fragment);
-+
-+  GST_DEBUG ("Advanced to fragment #%d on %s stream", fragment->number,
-+      stream_type_name);
-   if (stream->current_fragment == NULL)
-     return GST_FLOW_EOS;
-+
-+beach:
-+  gst_mss_fragment_parser_clear (&stream->fragment_parser);
-+  gst_mss_fragment_parser_init (&stream->fragment_parser);
-   return GST_FLOW_OK;
- }
- 
-@@ -1173,6 +1230,11 @@ gst_mss_stream_seek (GstMssStream * stream, gboolean forward,
-   GST_DEBUG ("Stream %s seeking to %" G_GUINT64_FORMAT, stream->url, time);
-   for (iter = stream->fragments; iter; iter = g_list_next (iter)) {
-     fragment = iter->data;
-+    if (stream->has_live_fragments) {
-+      if (fragment->time + fragment->repetitions * fragment->duration > time)
-+        stream->current_fragment = iter;
-+      break;
-+    }
-     if (fragment->time + fragment->repetitions * fragment->duration > time) {
-       stream->current_fragment = iter;
-       stream->fragment_repetition_index =
-@@ -1256,9 +1318,14 @@ static void
- gst_mss_stream_reload_fragments (GstMssStream * stream, xmlNodePtr streamIndex)
- {
-   xmlNodePtr iter;
--  guint64 current_gst_time = gst_mss_stream_get_fragment_gst_timestamp (stream);
-+  guint64 current_gst_time;
-   GstMssFragmentListBuilder builder;
- 
-+  if (stream->has_live_fragments)
-+    return;
-+
-+  current_gst_time = gst_mss_stream_get_fragment_gst_timestamp (stream);
-+
-   gst_mss_fragment_list_builder_init (&builder);
- 
-   GST_DEBUG ("Current position: %" GST_TIME_FORMAT,
-@@ -1514,3 +1581,74 @@ gst_mss_manifest_get_live_seek_range (GstMssManifest * manifest, gint64 * start,
- 
-   return ret;
- }
-+
-+void
-+gst_mss_manifest_live_adapter_push (GstMssStream * stream, GstBuffer * buffer)
-+{
-+  gst_adapter_push (stream->live_adapter, buffer);
-+}
-+
-+gsize
-+gst_mss_manifest_live_adapter_available (GstMssStream * stream)
-+{
-+  return gst_adapter_available (stream->live_adapter);
-+}
-+
-+GstBuffer *
-+gst_mss_manifest_live_adapter_take_buffer (GstMssStream * stream, gsize nbytes)
-+{
-+  return gst_adapter_take_buffer (stream->live_adapter, nbytes);
-+}
-+
-+gboolean
-+gst_mss_stream_fragment_parsing_needed (GstMssStream * stream)
-+{
-+  return stream->fragment_parser.status == GST_MSS_FRAGMENT_HEADER_PARSER_INIT;
-+}
-+
-+void
-+gst_mss_stream_parse_fragment (GstMssStream * stream, GstBuffer * buffer)
-+{
-+  GstMssStreamFragment *current_fragment = NULL;
-+  const gchar *stream_type_name;
-+  guint8 index;
-+
-+  if (!stream->has_live_fragments)
-+    return;
-+
-+  if (!gst_mss_fragment_parser_add_buffer (&stream->fragment_parser, buffer))
-+    return;
-+
-+  current_fragment = stream->current_fragment->data;
-+  current_fragment->time = stream->fragment_parser.tfxd.time;
-+  current_fragment->duration = stream->fragment_parser.tfxd.duration;
-+
-+  stream_type_name =
-+      gst_mss_stream_type_name (gst_mss_stream_get_type (stream));
-+
-+  for (index = 0; index < stream->fragment_parser.tfrf.entries_count; index++) {
-+    GList *l = g_list_last (stream->fragments);
-+    GstMssStreamFragment *last;
-+    GstMssStreamFragment *fragment;
-+
-+    if (l == NULL)
-+      break;
-+
-+    last = (GstMssStreamFragment *) l->data;
-+
-+    if (last->time == stream->fragment_parser.tfrf.entries[index].time)
-+      continue;
-+
-+    fragment = g_new (GstMssStreamFragment, 1);
-+    fragment->number = last->number + 1;
-+    fragment->repetitions = 1;
-+    fragment->time = stream->fragment_parser.tfrf.entries[index].time;
-+    fragment->duration = stream->fragment_parser.tfrf.entries[index].duration;
-+
-+    stream->fragments = g_list_append (stream->fragments, fragment);
-+    GST_LOG ("Adding fragment number: %u to %s stream, time: %" G_GUINT64_FORMAT
-+        ", duration: %" G_GUINT64_FORMAT ", repetitions: %u",
-+        fragment->number, stream_type_name,
-+        fragment->time, fragment->duration, fragment->repetitions);
-+  }
-+}
-diff --git a/ext/smoothstreaming/gstmssmanifest.h b/ext/smoothstreaming/gstmssmanifest.h
-index 6b7b1f971..03b066ae5 100644
---- a/ext/smoothstreaming/gstmssmanifest.h
-+++ b/ext/smoothstreaming/gstmssmanifest.h
-@@ -26,6 +26,7 @@
- #include <glib.h>
- #include <gio/gio.h>
- #include <gst/gst.h>
-+#include <gst/base/gstadapter.h>
- 
- G_BEGIN_DECLS
- 
-@@ -73,5 +74,11 @@ const gchar * gst_mss_stream_get_lang (GstMssStream * stream);
- 
- const gchar * gst_mss_stream_type_name (GstMssStreamType streamtype);
- 
-+void gst_mss_manifest_live_adapter_push(GstMssStream * stream, GstBuffer * buffer);
-+gsize gst_mss_manifest_live_adapter_available(GstMssStream * stream);
-+GstBuffer * gst_mss_manifest_live_adapter_take_buffer(GstMssStream * stream, gsize nbytes);
-+gboolean gst_mss_stream_fragment_parsing_needed(GstMssStream * stream);
-+void gst_mss_stream_parse_fragment(GstMssStream * stream, GstBuffer * buffer);
-+
- G_END_DECLS
- #endif /* __GST_MSS_MANIFEST_H__ */
-diff --git a/gst-libs/gst/adaptivedemux/gstadaptivedemux.c b/gst-libs/gst/adaptivedemux/gstadaptivedemux.c
-index 634e4f388..ddca726b6 100644
---- a/gst-libs/gst/adaptivedemux/gstadaptivedemux.c
-+++ b/gst-libs/gst/adaptivedemux/gstadaptivedemux.c
-@@ -291,6 +291,9 @@ gst_adaptive_demux_wait_until (GstClock * clock, GCond * cond, GMutex * mutex,
-     GstClockTime end_time);
- static gboolean gst_adaptive_demux_clock_callback (GstClock * clock,
-     GstClockTime time, GstClockID id, gpointer user_data);
-+static gboolean
-+gst_adaptive_demux_requires_periodical_playlist_update_default (GstAdaptiveDemux
-+    * demux);
- 
- /* we can't use G_DEFINE_ABSTRACT_TYPE because we need the klass in the _init
-  * method to get to the padtemplates */
-@@ -412,6 +415,9 @@ gst_adaptive_demux_class_init (GstAdaptiveDemuxClass * klass)
-   klass->data_received = gst_adaptive_demux_stream_data_received_default;
-   klass->finish_fragment = gst_adaptive_demux_stream_finish_fragment_default;
-   klass->update_manifest = gst_adaptive_demux_update_manifest_default;
-+  klass->requires_periodical_playlist_update =
-+      gst_adaptive_demux_requires_periodical_playlist_update_default;
-+
- }
- 
- static void
-@@ -686,7 +692,9 @@ gst_adaptive_demux_sink_event (GstPad * pad, GstObject * parent,
-             demux->priv->stop_updates_task = FALSE;
-             g_mutex_unlock (&demux->priv->updates_timed_lock);
-             /* Task to periodically update the manifest */
--            gst_task_start (demux->priv->updates_task);
-+            if (demux_class->requires_periodical_playlist_update (demux)) {
-+              gst_task_start (demux->priv->updates_task);
-+            }
-           }
-         } else {
-           /* no streams */
-@@ -2125,6 +2133,13 @@ gst_adaptive_demux_stream_data_received_default (GstAdaptiveDemux * demux,
-   return gst_adaptive_demux_stream_push_buffer (stream, buffer);
- }
- 
-+static gboolean
-+gst_adaptive_demux_requires_periodical_playlist_update_default (GstAdaptiveDemux
-+    * demux)
-+{
-+  return TRUE;
-+}
-+
- static GstFlowReturn
- _src_chain (GstPad * pad, GstObject * parent, GstBuffer * buffer)
- {
-@@ -3338,7 +3353,15 @@ gst_adaptive_demux_stream_download_loop (GstAdaptiveDemuxStream * stream)
-       GST_DEBUG_OBJECT (stream->pad, "EOS, checking to stop download loop");
-       /* we push the EOS after releasing the object lock */
-       if (gst_adaptive_demux_is_live (demux)) {
--        if (gst_adaptive_demux_stream_wait_manifest_update (demux, stream)) {
-+        GstAdaptiveDemuxClass *demux_class =
-+            GST_ADAPTIVE_DEMUX_GET_CLASS (demux);
-+
-+        /* this might be a fragment download error, refresh the manifest, just in case */
-+        if (!demux_class->requires_periodical_playlist_update (demux)) {
-+          ret = gst_adaptive_demux_update_manifest (demux);
-+          break;
-+        } else if (gst_adaptive_demux_stream_wait_manifest_update (demux,
-+                stream)) {
-           goto end;
-         }
-         gst_task_stop (stream->download_task);
-diff --git a/gst-libs/gst/adaptivedemux/gstadaptivedemux.h b/gst-libs/gst/adaptivedemux/gstadaptivedemux.h
-index 780f4d93f..9a1a1b7d1 100644
---- a/gst-libs/gst/adaptivedemux/gstadaptivedemux.h
-+++ b/gst-libs/gst/adaptivedemux/gstadaptivedemux.h
-@@ -459,6 +459,20 @@ struct _GstAdaptiveDemuxClass
-    * selected period.
-    */
-   GstClockTime (*get_period_start_time) (GstAdaptiveDemux *demux);
-+
-+  /**
-+   * requires_periodical_playlist_update:
-+   * @demux: #GstAdaptiveDemux
-+   *
-+   * Some adaptive streaming protocols allow the client to download
-+   * the playlist once and build up the fragment list based on the
-+   * current fragment metadata. For those protocols the demuxer
-+   * doesn't need to periodically refresh the playlist. This vfunc
-+   * is relevant only for live playback scenarios.
-+   *
-+   * Return: %TRUE if the playlist needs to be refreshed periodically by the demuxer.
-+   */
-+  gboolean (*requires_periodical_playlist_update) (GstAdaptiveDemux * demux);
- };
- 
- GType    gst_adaptive_demux_get_type (void);
--- 
-2.11.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-smoothstreaming-implement-adaptivedemux-s-get_live_s.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-smoothstreaming-implement-adaptivedemux-s-get_live_s.patch
deleted file mode 100644
index 76d29e1..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-smoothstreaming-implement-adaptivedemux-s-get_live_s.patch
+++ /dev/null
@@ -1,183 +0,0 @@
-From e9178fa082116d4bf733b184a8b6951112c17900 Mon Sep 17 00:00:00 2001
-From: Matthew Waters <matthew@centricular.com>
-Date: Thu, 10 Nov 2016 17:18:36 +1100
-Subject: [PATCH] smoothstreaming: implement adaptivedemux's
- get_live_seek_range()
-
-Allows seeking through the available fragments that are still available
-on the server as specified by the DVRWindowLength attribute in the
-manifest.
-
-https://bugzilla.gnome.org/show_bug.cgi?id=774178
----
-Upstream-Status: Backport
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
- ext/smoothstreaming/gstmssdemux.c    | 13 ++++++
- ext/smoothstreaming/gstmssmanifest.c | 84 ++++++++++++++++++++++++++++++++++++
- ext/smoothstreaming/gstmssmanifest.h |  1 +
- 3 files changed, 98 insertions(+)
-
-diff --git a/ext/smoothstreaming/gstmssdemux.c b/ext/smoothstreaming/gstmssdemux.c
-index 9d0aece2b..b66e19514 100644
---- a/ext/smoothstreaming/gstmssdemux.c
-+++ b/ext/smoothstreaming/gstmssdemux.c
-@@ -138,6 +138,8 @@ gst_mss_demux_get_manifest_update_interval (GstAdaptiveDemux * demux);
- static GstFlowReturn
- gst_mss_demux_update_manifest_data (GstAdaptiveDemux * demux,
-     GstBuffer * buffer);
-+static gboolean gst_mss_demux_get_live_seek_range (GstAdaptiveDemux * demux,
-+    gint64 * start, gint64 * stop);
- 
- static void
- gst_mss_demux_class_init (GstMssDemuxClass * klass)
-@@ -192,6 +194,8 @@ gst_mss_demux_class_init (GstMssDemuxClass * klass)
-       gst_mss_demux_stream_update_fragment_info;
-   gstadaptivedemux_class->update_manifest_data =
-       gst_mss_demux_update_manifest_data;
-+  gstadaptivedemux_class->get_live_seek_range =
-+      gst_mss_demux_get_live_seek_range;
- 
-   GST_DEBUG_CATEGORY_INIT (mssdemux_debug, "mssdemux", 0, "mssdemux plugin");
- }
-@@ -659,3 +663,12 @@ gst_mss_demux_update_manifest_data (GstAdaptiveDemux * demux,
-   gst_mss_manifest_reload_fragments (mssdemux->manifest, buffer);
-   return GST_FLOW_OK;
- }
-+
-+static gboolean
-+gst_mss_demux_get_live_seek_range (GstAdaptiveDemux * demux, gint64 * start,
-+    gint64 * stop)
-+{
-+  GstMssDemux *mssdemux = GST_MSS_DEMUX_CAST (demux);
-+
-+  return gst_mss_manifest_get_live_seek_range (mssdemux->manifest, start, stop);
-+}
-diff --git a/ext/smoothstreaming/gstmssmanifest.c b/ext/smoothstreaming/gstmssmanifest.c
-index 1b72e8de1..317b3cef9 100644
---- a/ext/smoothstreaming/gstmssmanifest.c
-+++ b/ext/smoothstreaming/gstmssmanifest.c
-@@ -42,6 +42,7 @@ GST_DEBUG_CATEGORY_EXTERN (mssdemux_debug);
- 
- #define MSS_PROP_BITRATE              "Bitrate"
- #define MSS_PROP_DURATION             "d"
-+#define MSS_PROP_DVR_WINDOW_LENGTH    "DVRWindowLength"
- #define MSS_PROP_LANGUAGE             "Language"
- #define MSS_PROP_NUMBER               "n"
- #define MSS_PROP_REPETITIONS          "r"
-@@ -94,6 +95,7 @@ struct _GstMssManifest
-   xmlNodePtr xmlrootnode;
- 
-   gboolean is_live;
-+  gint64 dvr_window;
- 
-   GString *protection_system_id;
-   gchar *protection_data;
-@@ -330,6 +332,22 @@ gst_mss_manifest_new (GstBuffer * data)
-     xmlFree (live_str);
-   }
- 
-+  /* the entire file is always available for non-live streams */
-+  if (!manifest->is_live) {
-+    manifest->dvr_window = 0;
-+  } else {
-+    /* if 0, or non-existent, the length is infinite */
-+    gchar *dvr_window_str = (gchar *) xmlGetProp (root,
-+        (xmlChar *) MSS_PROP_DVR_WINDOW_LENGTH);
-+    if (dvr_window_str) {
-+      manifest->dvr_window = g_ascii_strtoull (dvr_window_str, NULL, 10);
-+      xmlFree (dvr_window_str);
-+      if (manifest->dvr_window <= 0) {
-+        manifest->dvr_window = 0;
-+      }
-+    }
-+  }
-+
-   for (nodeiter = root->children; nodeiter; nodeiter = nodeiter->next) {
-     if (nodeiter->type == XML_ELEMENT_NODE
-         && (strcmp ((const char *) nodeiter->name, "StreamIndex") == 0)) {
-@@ -1406,3 +1424,69 @@ gst_mss_stream_get_lang (GstMssStream * stream)
- {
-   return stream->lang;
- }
-+
-+static GstClockTime
-+gst_mss_manifest_get_dvr_window_length_clock_time (GstMssManifest * manifest)
-+{
-+  gint64 timescale;
-+
-+  /* the entire file is always available for non-live streams */
-+  if (manifest->dvr_window == 0)
-+    return GST_CLOCK_TIME_NONE;
-+
-+  timescale = gst_mss_manifest_get_timescale (manifest);
-+  return (GstClockTime) gst_util_uint64_scale_round (manifest->dvr_window,
-+      GST_SECOND, timescale);
-+}
-+
-+static gboolean
-+gst_mss_stream_get_live_seek_range (GstMssStream * stream, gint64 * start,
-+    gint64 * stop)
-+{
-+  GList *l;
-+  GstMssStreamFragment *fragment;
-+  guint64 timescale = gst_mss_stream_get_timescale (stream);
-+
-+  g_return_val_if_fail (stream->active, FALSE);
-+
-+  /* XXX: assumes all the data in the stream is still available */
-+  l = g_list_first (stream->fragments);
-+  fragment = (GstMssStreamFragment *) l->data;
-+  *start = gst_util_uint64_scale_round (fragment->time, GST_SECOND, timescale);
-+
-+  l = g_list_last (stream->fragments);
-+  fragment = (GstMssStreamFragment *) l->data;
-+  *stop = gst_util_uint64_scale_round (fragment->time + fragment->duration *
-+      fragment->repetitions, GST_SECOND, timescale);
-+
-+  return TRUE;
-+}
-+
-+gboolean
-+gst_mss_manifest_get_live_seek_range (GstMssManifest * manifest, gint64 * start,
-+    gint64 * stop)
-+{
-+  GSList *iter;
-+  gboolean ret = FALSE;
-+
-+  for (iter = manifest->streams; iter; iter = g_slist_next (iter)) {
-+    GstMssStream *stream = iter->data;
-+
-+    if (stream->active) {
-+      /* FIXME: bound this correctly for multiple streams */
-+      if (!(ret = gst_mss_stream_get_live_seek_range (stream, start, stop)))
-+        break;
-+    }
-+  }
-+
-+  if (ret && gst_mss_manifest_is_live (manifest)) {
-+    GstClockTime dvr_window =
-+        gst_mss_manifest_get_dvr_window_length_clock_time (manifest);
-+
-+    if (GST_CLOCK_TIME_IS_VALID (dvr_window) && *stop - *start > dvr_window) {
-+      *start = *stop - dvr_window;
-+    }
-+  }
-+
-+  return ret;
-+}
-diff --git a/ext/smoothstreaming/gstmssmanifest.h b/ext/smoothstreaming/gstmssmanifest.h
-index af7419c23..6b7b1f971 100644
---- a/ext/smoothstreaming/gstmssmanifest.h
-+++ b/ext/smoothstreaming/gstmssmanifest.h
-@@ -54,6 +54,7 @@ void gst_mss_manifest_reload_fragments (GstMssManifest * manifest, GstBuffer * d
- GstClockTime gst_mss_manifest_get_min_fragment_duration (GstMssManifest * manifest);
- const gchar * gst_mss_manifest_get_protection_system_id (GstMssManifest * manifest);
- const gchar * gst_mss_manifest_get_protection_data (GstMssManifest * manifest);
-+gboolean gst_mss_manifest_get_live_seek_range (GstMssManifest * manifest, gint64 * start, gint64 * stop);
- 
- GstMssStreamType gst_mss_stream_get_type (GstMssStream *stream);
- GstCaps * gst_mss_stream_get_caps (GstMssStream * stream);
--- 
-2.11.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-smoothstreaming-use-the-duration-from-the-list-of-fr.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-smoothstreaming-use-the-duration-from-the-list-of-fr.patch
deleted file mode 100644
index 4e51040..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-smoothstreaming-use-the-duration-from-the-list-of-fr.patch
+++ /dev/null
@@ -1,62 +0,0 @@
-From 0fbee8f37427b88339194b22ba9aa210772a8613 Mon Sep 17 00:00:00 2001
-From: Matthew Waters <matthew@centricular.com>
-Date: Thu, 10 Nov 2016 17:20:27 +1100
-Subject: [PATCH] smoothstreaming: use the duration from the list of fragments
- if not present in the manifest
-
-Provides a more accurate duration for live streams that may be minutes
-or hours in front of the earliest fragment.
-
-https://bugzilla.gnome.org/show_bug.cgi?id=774178
----
-Upstream-Status: Backport
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
- ext/smoothstreaming/gstmssmanifest.c | 24 ++++++++++++++++++++++++
- 1 file changed, 24 insertions(+)
-
-diff --git a/ext/smoothstreaming/gstmssmanifest.c b/ext/smoothstreaming/gstmssmanifest.c
-index 317b3cef9..144bbb42d 100644
---- a/ext/smoothstreaming/gstmssmanifest.c
-+++ b/ext/smoothstreaming/gstmssmanifest.c
-@@ -888,6 +888,7 @@ gst_mss_manifest_get_duration (GstMssManifest * manifest)
-   gchar *duration;
-   guint64 dur = -1;
- 
-+  /* try the property */
-   duration =
-       (gchar *) xmlGetProp (manifest->xmlrootnode,
-       (xmlChar *) MSS_PROP_STREAM_DURATION);
-@@ -895,6 +896,29 @@ gst_mss_manifest_get_duration (GstMssManifest * manifest)
-     dur = g_ascii_strtoull (duration, NULL, 10);
-     xmlFree (duration);
-   }
-+  /* else use the fragment list */
-+  if (dur <= 0) {
-+    guint64 max_dur = 0;
-+    GSList *iter;
-+
-+    for (iter = manifest->streams; iter; iter = g_slist_next (iter)) {
-+      GstMssStream *stream = iter->data;
-+
-+      if (stream->active) {
-+        if (stream->fragments) {
-+          GList *l = g_list_last (stream->fragments);
-+          GstMssStreamFragment *fragment = (GstMssStreamFragment *) l->data;
-+          guint64 frag_dur =
-+              fragment->time + fragment->duration * fragment->repetitions;
-+          max_dur = MAX (frag_dur, max_dur);
-+        }
-+      }
-+    }
-+
-+    if (max_dur != 0)
-+      dur = max_dur;
-+  }
-+
-   return dur;
- }
- 
--- 
-2.11.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-vkdisplay-Use-ifdef-for-platform-specific-defines.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-vkdisplay-Use-ifdef-for-platform-specific-defines.patch
new file mode 100644
index 0000000..caaa62d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0001-vkdisplay-Use-ifdef-for-platform-specific-defines.patch
@@ -0,0 +1,37 @@
+From 1523ab462c1bf19055960ced255f4872b6cf9f5c Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Wed, 5 Jul 2017 11:00:42 +0300
+Subject: [PATCH 1/2] vkdisplay: Use ifdef for platform specific defines
+
+VK_KHR_*_SURFACE_EXTENSION_NAME are only available when corresponding
+WSI is enabled.
+
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Upstream-Status: Submitted [https://bugzilla.gnome.org/show_bug.cgi?id=784539]
+---
+ ext/vulkan/vkdisplay.c | 4 ++++
+ 1 file changed, 4 insertions(+)
+
+diff --git a/ext/vulkan/vkdisplay.c b/ext/vulkan/vkdisplay.c
+index 550134cd8..ade5d4583 100644
+--- a/ext/vulkan/vkdisplay.c
++++ b/ext/vulkan/vkdisplay.c
+@@ -448,11 +448,15 @@ gst_vulkan_display_type_to_extension_string (GstVulkanDisplayType type)
+   if (type == GST_VULKAN_DISPLAY_TYPE_NONE)
+     return NULL;
+ 
++#if GST_VULKAN_HAVE_WINDOW_XCB
+   if (type & GST_VULKAN_DISPLAY_TYPE_XCB)
+     return VK_KHR_XCB_SURFACE_EXTENSION_NAME;
++#endif
+ 
++#if GST_VULKAN_HAVE_WINDOW_WAYLAND
+   if (type & GST_VULKAN_DISPLAY_TYPE_WAYLAND)
+     return VK_KHR_WAYLAND_SURFACE_EXTENSION_NAME;
++#endif
+ 
+   return NULL;
+ }
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0002-vulkan-Use-the-generated-version-of-vkconfig.h.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0002-vulkan-Use-the-generated-version-of-vkconfig.h.patch
new file mode 100644
index 0000000..0df145d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/0002-vulkan-Use-the-generated-version-of-vkconfig.h.patch
@@ -0,0 +1,64 @@
+From c23e1dc22deb495561cffb877edb2746b740a1fa Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Wed, 5 Jul 2017 11:07:05 +0300
+Subject: [PATCH 2/2] vulkan: Use the generated version of vkconfig.h
+
+Build fails in ext/vulkan/xcb and ext/vulkan/wayland when:
+* building from tarball
+* building out-of-tree
+* Only one WSI integration (xcb or wayland) is enabled by configure.ac
+This is because vkconfig.h from source directory gets used instead
+of the generated one.
+
+Add the correct build directory to "-I". Use angle bracket
+include in vkapi.h so that it actually looks in the include search
+path instead of defaulting to the same (source tree) directory.
+
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Upstream-Status: Submitted [https://bugzilla.gnome.org/show_bug.cgi?id=784539]
+---
+ ext/vulkan/vkapi.h             | 2 +-
+ ext/vulkan/wayland/Makefile.am | 1 +
+ ext/vulkan/xcb/Makefile.am     | 1 +
+ 3 files changed, 3 insertions(+), 1 deletion(-)
+
+diff --git a/ext/vulkan/vkapi.h b/ext/vulkan/vkapi.h
+index e9c23aa92..a37c29d0f 100644
+--- a/ext/vulkan/vkapi.h
++++ b/ext/vulkan/vkapi.h
+@@ -23,7 +23,7 @@
+ 
+ #define VK_PROTOTYPES
+ 
+-#include "vkconfig.h"
++#include <vkconfig.h>
+ #include "vk_fwd.h"
+ #include "vkmacros.h"
+ 
+diff --git a/ext/vulkan/wayland/Makefile.am b/ext/vulkan/wayland/Makefile.am
+index f92d85e2c..10cfb70e6 100644
+--- a/ext/vulkan/wayland/Makefile.am
++++ b/ext/vulkan/wayland/Makefile.am
+@@ -14,6 +14,7 @@ noinst_HEADERS = \
+ 
+ libgstvulkan_wayland_la_CFLAGS = \
+ 	-I$(top_srcdir)/gst-libs \
++	-I$(top_builddir)/ext/vulkan \
+ 	-I$(top_srcdir)/ext/vulkan \
+ 	-I$(top_builddir)/gst-libs \
+ 	$(GST_PLUGINS_BASE_CFLAGS) \
+diff --git a/ext/vulkan/xcb/Makefile.am b/ext/vulkan/xcb/Makefile.am
+index 7debcff9e..b5103551b 100644
+--- a/ext/vulkan/xcb/Makefile.am
++++ b/ext/vulkan/xcb/Makefile.am
+@@ -14,6 +14,7 @@ noinst_HEADERS = \
+ 
+ libgstvulkan_xcb_la_CFLAGS = \
+ 	-I$(top_srcdir)/gst-libs \
++	-I$(top_builddir)/ext/vulkan \
+ 	-I$(top_srcdir)/ext/vulkan \
+ 	-I$(top_builddir)/gst-libs \
+ 	$(GST_PLUGINS_BASE_CFLAGS) \
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/link-with-libvchostif.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/link-with-libvchostif.patch
new file mode 100644
index 0000000..c382b17
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad/link-with-libvchostif.patch
@@ -0,0 +1,35 @@
+Add -lvchostif to link when using -lEGL on rpi
+
+This is required because libEGL from userland uses symbols
+from this library.
+
+lib/libEGL.so.1.0.0                                                                                                                                                                                                                              121: 00000000     0 FUNC    GLOBAL DEFAULT  UND vc_dispmanx_element_add
+  1552: 00000000     0 FUNC    GLOBAL DEFAULT  UND vc_dispmanx_element_add
+
+These symbols are provided by libvchostif as seen below
+
+lib/libvchostif.so
+   252: 0000b161   192 FUNC    GLOBAL DEFAULT    9 vc_dispmanx_element_add
+   809: 0000b161   192 FUNC    GLOBAL DEFAULT    9 vc_dispmanx_element_add
+
+Without this explicit link, plugins fail at runtime
+
+(gst-plugin-scanner:571): GStreamer-WARNING **: Failed to load plugin '/usr/lib/gstreamer-1.0/libgstomx.so': Error relocating /usr/lib/libgstgl-1.0.so.0: vc_dispmanx_element_add: symbol not found
+(gst-plugin-scanner:571): GStreamer-WARNING **: Failed to load plugin '/usr/lib/gstreamer-1.0/libgstopengl.so': Error relocating /usr/lib/libgstgl-1.0.so.0: vc_dispmanx_element_add: symbol not found
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+Upstream-Status: Pending
+Index: gst-plugins-bad-1.10.4/configure.ac
+===================================================================
+--- gst-plugins-bad-1.10.4.orig/configure.ac
++++ gst-plugins-bad-1.10.4/configure.ac
+@@ -785,7 +785,7 @@ case $host in
+                             HAVE_EGL=yes
+                             HAVE_GLES2=yes
+                             HAVE_EGL_RPI=yes
+-                            EGL_LIBS="-lbcm_host -lvcos -lvchiq_arm"
++                            EGL_LIBS="-lbcm_host -lvchostif -lvcos -lvchiq_arm"
+                             EGL_CFLAGS=""
+                             AC_DEFINE(USE_EGL_RPI, [1], [Use RPi platform])
+                           ])
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.10.4.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.10.4.bb
deleted file mode 100644
index 0bb4053..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.10.4.bb
+++ /dev/null
@@ -1,28 +0,0 @@
-require gstreamer1.0-plugins-bad.inc
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=73a5855a8119deb017f5f13cf327095d \
-                    file://COPYING.LIB;md5=21682e4e8fea52413fd26c60acb907e5 \
-                    file://gst/tta/crc32.h;beginline=12;endline=29;md5=27db269c575d1e5317fffca2d33b3b50 \
-                    file://gst/tta/filters.h;beginline=12;endline=29;md5=8a08270656f2f8ad7bb3655b83138e5a"
-
-SRC_URI = " \
-    http://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-${PV}.tar.xz \
-    file://configure-allow-to-disable-libssh2.patch \
-    file://fix-maybe-uninitialized-warnings-when-compiling-with-Os.patch \
-    file://avoid-including-sys-poll.h-directly.patch \
-    file://ensure-valid-sentinels-for-gst_structure_get-etc.patch \
-    file://0001-gstreamer-gl.pc.in-don-t-append-GL_CFLAGS-to-CFLAGS.patch \
-    file://0009-glimagesink-Downrank-to-marginal.patch \
-    file://0001-introspection.m4-prefix-pkgconfig-paths-with-PKG_CON.patch \
-    file://0001-Prepend-PKG_CONFIG_SYSROOT_DIR-to-pkg-config-output.patch \
-    file://0001-smoothstreaming-implement-adaptivedemux-s-get_live_s.patch \
-    file://0001-smoothstreaming-use-the-duration-from-the-list-of-fr.patch \
-    file://0001-mssdemux-improved-live-playback-support.patch \
-"
-SRC_URI[md5sum] = "2757103e57a096a1a05b3ab85b8381af"
-SRC_URI[sha256sum] = "23ddae506b3a223b94869a0d3eea3e9a12e847f94d2d0e0b97102ce13ecd6966"
-
-S = "${WORKDIR}/gst-plugins-bad-${PV}"
-
-EXTRA_OECONF += "WAYLAND_PROTOCOLS_SYSROOT_DIR=${RECIPE_SYSROOT}"
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.12.2.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.12.2.bb
new file mode 100644
index 0000000..8321da0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-bad_1.12.2.bb
@@ -0,0 +1,26 @@
+require gstreamer1.0-plugins-bad.inc
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=73a5855a8119deb017f5f13cf327095d \
+                    file://COPYING.LIB;md5=21682e4e8fea52413fd26c60acb907e5 "
+
+SRC_URI = " \
+    http://gstreamer.freedesktop.org/src/gst-plugins-bad/gst-plugins-bad-${PV}.tar.xz \
+    file://configure-allow-to-disable-libssh2.patch \
+    file://fix-maybe-uninitialized-warnings-when-compiling-with-Os.patch \
+    file://avoid-including-sys-poll.h-directly.patch \
+    file://ensure-valid-sentinels-for-gst_structure_get-etc.patch \
+    file://0001-gstreamer-gl.pc.in-don-t-append-GL_CFLAGS-to-CFLAGS.patch \
+    file://0009-glimagesink-Downrank-to-marginal.patch \
+    file://0001-introspection.m4-prefix-pkgconfig-paths-with-PKG_CON.patch \
+    file://0001-Prepend-PKG_CONFIG_SYSROOT_DIR-to-pkg-config-output.patch \
+    file://link-with-libvchostif.patch \
+    file://0001-vkdisplay-Use-ifdef-for-platform-specific-defines.patch \
+    file://0002-vulkan-Use-the-generated-version-of-vkconfig.h.patch \
+"
+SRC_URI[md5sum] = "5683f0ea91f9e1e0613b0f6f729980a7"
+SRC_URI[sha256sum] = "9c2c7edde4f59d74eb414e0701c55131f562e5c605a3ce9b091754f106c09e37"
+
+S = "${WORKDIR}/gst-plugins-bad-${PV}"
+
+EXTRA_OECONF += "WAYLAND_PROTOCOLS_SYSROOT_DIR=${RECIPE_SYSROOT}"
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base/encodebin-Need-more-buffers-in-output-queue-for-bett.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base/encodebin-Need-more-buffers-in-output-queue-for-bett.patch
deleted file mode 100644
index 5341e3c..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base/encodebin-Need-more-buffers-in-output-queue-for-bett.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-From 540e02c92c75e08b90326863dc787fa5cadf9da6 Mon Sep 17 00:00:00 2001
-From: Song Bing <b06498@freescale.com>
-Date: Fri, 13 Mar 2015 18:04:31 +0800
-Subject: [PATCH] encodebin: Need more buffers in output queue for better
- performance
-
-Need more buffers in output queue for better performance
-
-Upstream-Status: Submitted [https://bugzilla.gnome.org/show_bug.cgi?id=744191]
-
-Signed-off-by: Song Bing <b06498@freescale.com>
----
- gst/encoding/gstencodebin.c |    3 +--
- 1 file changed, 1 insertion(+), 2 deletions(-)
-
-diff --git a/gst/encoding/gstencodebin.c b/gst/encoding/gstencodebin.c
-index 6728e58..32daae4 100644
---- a/gst/encoding/gstencodebin.c
-+++ b/gst/encoding/gstencodebin.c
-@@ -1228,8 +1228,7 @@ _create_stream_group (GstEncodeBin * ebin, GstEncodingProfile * sprof,
-    * We only use a 1buffer long queue here, the actual queueing will be done
-    * in the input queue */
-   last = sgroup->outqueue = gst_element_factory_make ("queue", NULL);
--  g_object_set (sgroup->outqueue, "max-size-buffers", (guint32) 1,
--      "max-size-bytes", (guint32) 0, "max-size-time", (guint64) 0,
-+  g_object_set (sgroup->outqueue, "max-size-time", (guint64) 0,
-       "silent", TRUE, NULL);
- 
-   gst_bin_add (GST_BIN (ebin), sgroup->outqueue);
--- 
-1.7.9.5
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base_1.10.4.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base_1.10.4.bb
deleted file mode 100644
index 7c81670..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base_1.10.4.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-require gstreamer1.0-plugins-base.inc
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=c54ce9345727175ff66d17b67ff51f58 \
-                    file://COPYING.LIB;md5=6762ed442b3822387a51c92d928ead0d \
-                    file://common/coverage/coverage-report.pl;beginline=2;endline=17;md5=a4e1830fce078028c8f0974161272607"
-
-SRC_URI = " \
-    http://gstreamer.freedesktop.org/src/gst-plugins-base/gst-plugins-base-${PV}.tar.xz \
-    file://get-caps-from-src-pad-when-query-caps.patch \
-    file://0003-ssaparse-enhance-SSA-text-lines-parsing.patch \
-    file://0004-subparse-set-need_segment-after-sink-pad-received-GS.patch \
-    file://encodebin-Need-more-buffers-in-output-queue-for-bett.patch \
-    file://make-gio_unix_2_0-dependency-configurable.patch \
-    file://0001-introspection.m4-prefix-pkgconfig-paths-with-PKG_CON.patch \
-"
-SRC_URI[md5sum] = "f6b46f8fac01eb773d556e3efc369e86"
-SRC_URI[sha256sum] = "f6d245b6b3d4cb733f81ebb021074c525ece83db0c10e932794b339b8d935eb7"
-
-S = "${WORKDIR}/gst-plugins-base-${PV}"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base_1.12.2.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base_1.12.2.bb
new file mode 100644
index 0000000..5de0b8b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-base_1.12.2.bb
@@ -0,0 +1,18 @@
+require gstreamer1.0-plugins-base.inc
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=c54ce9345727175ff66d17b67ff51f58 \
+                    file://COPYING.LIB;md5=6762ed442b3822387a51c92d928ead0d \
+                    file://common/coverage/coverage-report.pl;beginline=2;endline=17;md5=a4e1830fce078028c8f0974161272607"
+
+SRC_URI = " \
+    http://gstreamer.freedesktop.org/src/gst-plugins-base/gst-plugins-base-${PV}.tar.xz \
+    file://get-caps-from-src-pad-when-query-caps.patch \
+    file://0003-ssaparse-enhance-SSA-text-lines-parsing.patch \
+    file://0004-subparse-set-need_segment-after-sink-pad-received-GS.patch \
+    file://make-gio_unix_2_0-dependency-configurable.patch \
+    file://0001-introspection.m4-prefix-pkgconfig-paths-with-PKG_CON.patch \
+"
+SRC_URI[md5sum] = "77f5379c4ca677616b415e3b3ff95578"
+SRC_URI[sha256sum] = "5067dce3afe197a9536fea0107c77213fab536dff4a213b07fc60378d5510675"
+
+S = "${WORKDIR}/gst-plugins-base-${PV}"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good/0001-v4l2-Fix-4K-colorimetry.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good/0001-v4l2-Fix-4K-colorimetry.patch
new file mode 100644
index 0000000..f78818a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good/0001-v4l2-Fix-4K-colorimetry.patch
@@ -0,0 +1,48 @@
+From 545646cccba243236e10362fe7325f89be57da1f Mon Sep 17 00:00:00 2001
+From: Nicolas Dufresne <nicolas.dufresne@collabora.com>
+Date: Tue, 18 Jul 2017 11:28:37 -0400
+Subject: [PATCH] v4l2: Fix 4K colorimetry
+
+Since 1.6, the transfer function for BT2020 has been changed from BT709
+to BT2020_12. It's the same function, but with more precision. As a side
+effect, the V4L2 colorspace didn't match the GStreamer colorspace. When
+GStreamer ended up making a guess, it would not match anything supported
+by V4L2 anymore. Fix this by using BT2020_12 for the BT2020 colorspace and
+in place of the BT709 transfer function whenever a 4K
+resolution is detected.
+
+Upstream-Status: Backport
+Signed-off-by: Nicolas Dechesne <nicolas.dechesne@linaro.org>
+
+---
+ sys/v4l2/gstv4l2object.c | 7 +++++--
+ 1 file changed, 5 insertions(+), 2 deletions(-)
+
+diff --git a/sys/v4l2/gstv4l2object.c b/sys/v4l2/gstv4l2object.c
+index 61244455f..aae2c55e7 100644
+--- a/sys/v4l2/gstv4l2object.c
++++ b/sys/v4l2/gstv4l2object.c
+@@ -1960,7 +1960,7 @@ gst_v4l2_object_get_colorspace (struct v4l2_format *fmt,
+     case V4L2_COLORSPACE_BT2020:
+       cinfo->range = GST_VIDEO_COLOR_RANGE_16_235;
+       cinfo->matrix = GST_VIDEO_COLOR_MATRIX_BT2020;
+-      cinfo->transfer = GST_VIDEO_TRANSFER_BT709;
++      cinfo->transfer = GST_VIDEO_TRANSFER_BT2020_12;
+       cinfo->primaries = GST_VIDEO_COLOR_PRIMARIES_BT2020;
+       break;
+     case V4L2_COLORSPACE_SMPTE240M:
+@@ -2062,7 +2062,10 @@ gst_v4l2_object_get_colorspace (struct v4l2_format *fmt,
+ 
+   switch (transfer) {
+     case V4L2_XFER_FUNC_709:
+-      cinfo->transfer = GST_VIDEO_TRANSFER_BT709;
++      if (fmt->fmt.pix.height > 2160)
++        cinfo->transfer = GST_VIDEO_TRANSFER_BT2020_12;
++      else
++        cinfo->transfer = GST_VIDEO_TRANSFER_BT709;
+       break;
+     case V4L2_XFER_FUNC_SRGB:
+       cinfo->transfer = GST_VIDEO_TRANSFER_SRGB;
+-- 
+2.14.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good/0001-v4l2object-Also-add-videometa-if-there-is-padding-to.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good/0001-v4l2object-Also-add-videometa-if-there-is-padding-to.patch
deleted file mode 100644
index 2a9a23e..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good/0001-v4l2object-Also-add-videometa-if-there-is-padding-to.patch
+++ /dev/null
@@ -1,35 +0,0 @@
-From 22be02612adc757f6a43cefc6ee65ecaef68f0d9 Mon Sep 17 00:00:00 2001
-From: Carlos Rafael Giani <dv@pseudoterminal.org>
-Date: Thu, 23 Mar 2017 22:13:05 +0100
-Subject: [PATCH] v4l2object: Also add videometa if there is padding to the
- right and bottom
-
-https://bugzilla.gnome.org/show_bug.cgi?id=780478
-
-Upstream-Status: Backport [1.10.5]
-
-Signed-off-by: Carlos Rafael Giani <dv@pseudoterminal.org>
----
- sys/v4l2/gstv4l2object.c | 5 +++--
- 1 file changed, 3 insertions(+), 2 deletions(-)
-
-diff --git a/sys/v4l2/gstv4l2object.c b/sys/v4l2/gstv4l2object.c
-index 91c8ff0..ed4654e 100644
---- a/sys/v4l2/gstv4l2object.c
-+++ b/sys/v4l2/gstv4l2object.c
-@@ -3070,9 +3070,10 @@ store_info:
-   GST_DEBUG_OBJECT (v4l2object->element, "Got sizeimage %" G_GSIZE_FORMAT,
-       info->size);
- 
--  /* to avoid copies we need video meta if top or left padding */
-+  /* to avoid copies we need video meta if there is padding */
-   v4l2object->need_video_meta =
--      ((align->padding_top + align->padding_left) != 0);
-+      ((align->padding_top + align->padding_left + align->padding_right +
-+          align->padding_bottom) != 0);
- 
-   /* ... or if stride is non "standard" */
-   if (!standard_stride)
--- 
-2.7.4
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good_1.10.4.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good_1.10.4.bb
deleted file mode 100644
index 57447bf..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good_1.10.4.bb
+++ /dev/null
@@ -1,18 +0,0 @@
-require gstreamer1.0-plugins-good.inc
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=a6f89e2100d9b6cdffcea4f398e37343 \
-                    file://common/coverage/coverage-report.pl;beginline=2;endline=17;md5=a4e1830fce078028c8f0974161272607 \
-                    file://gst/replaygain/rganalysis.c;beginline=1;endline=23;md5=b60ebefd5b2f5a8e0cab6bfee391a5fe"
-
-SRC_URI = " \
-    http://gstreamer.freedesktop.org/src/gst-plugins-good/gst-plugins-good-${PV}.tar.xz \
-    file://0001-gstrtpmp4gpay-set-dafault-value-for-MPEG4-without-co.patch \
-    file://avoid-including-sys-poll.h-directly.patch \
-    file://ensure-valid-sentinel-for-gst_structure_get.patch \
-    file://0001-introspection.m4-prefix-pkgconfig-paths-with-PKG_CON.patch \
-    file://0001-v4l2object-Also-add-videometa-if-there-is-padding-to.patch \
-"
-SRC_URI[md5sum] = "cc0cc13cdb07d4237600b6886b81f31d"
-SRC_URI[sha256sum] = "8a86c61434a8c44665365bd0b3557a040937d1f44bf69caee4e9ea816ce74d7e"
-
-S = "${WORKDIR}/gst-plugins-good-${PV}"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good_1.12.2.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good_1.12.2.bb
new file mode 100644
index 0000000..f9593c9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-good_1.12.2.bb
@@ -0,0 +1,21 @@
+require gstreamer1.0-plugins-good.inc
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=a6f89e2100d9b6cdffcea4f398e37343 \
+                    file://common/coverage/coverage-report.pl;beginline=2;endline=17;md5=a4e1830fce078028c8f0974161272607 \
+                    file://gst/replaygain/rganalysis.c;beginline=1;endline=23;md5=b60ebefd5b2f5a8e0cab6bfee391a5fe"
+
+SRC_URI = " \
+    http://gstreamer.freedesktop.org/src/gst-plugins-good/gst-plugins-good-${PV}.tar.xz \
+    file://0001-gstrtpmp4gpay-set-dafault-value-for-MPEG4-without-co.patch \
+    file://avoid-including-sys-poll.h-directly.patch \
+    file://ensure-valid-sentinel-for-gst_structure_get.patch \
+    file://0001-introspection.m4-prefix-pkgconfig-paths-with-PKG_CON.patch \
+    file://0001-v4l2-Fix-4K-colorimetry.patch \
+"
+SRC_URI[md5sum] = "20254217d9805484532e08ff1c3aa296"
+SRC_URI[sha256sum] = "5591ee7208ab30289a30658a82b76bf87169c927572d9b794f3a41ed48e1ee96"
+
+S = "${WORKDIR}/gst-plugins-good-${PV}"
+
+RPROVIDES_${PN}-pulseaudio += "${PN}-pulse"
+RPROVIDES_${PN}-soup += "${PN}-souphttpsrc"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-ugly.inc b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-ugly.inc
index 708ad7a..60aa968 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-ugly.inc
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-ugly.inc
@@ -18,7 +18,6 @@
 PACKAGECONFIG[cdio]     = "--enable-cdio,--disable-cdio,libcdio"
 PACKAGECONFIG[dvdread]  = "--enable-dvdread,--disable-dvdread,libdvdread"
 PACKAGECONFIG[lame]     = "--enable-lame,--disable-lame,lame"
-PACKAGECONFIG[mad]      = "--enable-mad,--disable-mad,libmad"
 PACKAGECONFIG[mpeg2dec] = "--enable-mpeg2dec,--disable-mpeg2dec,mpeg2dec"
 PACKAGECONFIG[mpg123]   = "--enable-mpg123,--disable-mpg123,mpg123"
 PACKAGECONFIG[x264]     = "--enable-x264,--disable-x264,x264"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-ugly_1.10.4.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-ugly_1.10.4.bb
deleted file mode 100644
index 92a2caa..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-ugly_1.10.4.bb
+++ /dev/null
@@ -1,13 +0,0 @@
-require gstreamer1.0-plugins-ugly.inc
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=a6f89e2100d9b6cdffcea4f398e37343 \
-                    file://tests/check/elements/xingmux.c;beginline=1;endline=21;md5=4c771b8af188724855cb99cadd390068"
-
-SRC_URI = " \
-    http://gstreamer.freedesktop.org/src/gst-plugins-ugly/gst-plugins-ugly-${PV}.tar.xz \
-    file://0001-introspection.m4-prefix-pkgconfig-paths-with-PKG_CON.patch \
-"
-SRC_URI[md5sum] = "c68d0509c9980b0b70a4b836ff73fff1"
-SRC_URI[sha256sum] = "6386c77ca8459cba431ed0b63da780c7062c7cc48055d222024d8eaf198ffa59"
-
-S = "${WORKDIR}/gst-plugins-ugly-${PV}"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-ugly_1.12.2.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-ugly_1.12.2.bb
new file mode 100644
index 0000000..6956c85
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins-ugly_1.12.2.bb
@@ -0,0 +1,13 @@
+require gstreamer1.0-plugins-ugly.inc
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=a6f89e2100d9b6cdffcea4f398e37343 \
+                    file://tests/check/elements/xingmux.c;beginline=1;endline=21;md5=4c771b8af188724855cb99cadd390068"
+
+SRC_URI = " \
+    http://gstreamer.freedesktop.org/src/gst-plugins-ugly/gst-plugins-ugly-${PV}.tar.xz \
+    file://0001-introspection.m4-prefix-pkgconfig-paths-with-PKG_CON.patch \
+"
+SRC_URI[md5sum] = "eb639021905a32cf3013ca5bac1b694d"
+SRC_URI[sha256sum] = "1cc3942bbf3ea87da3e35437d4e014e991b103db22a6174f62a98c89c3f5f466"
+
+S = "${WORKDIR}/gst-plugins-ugly-${PV}"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins.inc b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins.inc
index 3f6d4c3..c40d398 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins.inc
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-plugins.inc
@@ -49,3 +49,6 @@
 do_configure[prefuncs] += " delete_pkg_m4_file patch_gtk_doc_makefiles"
 
 PACKAGES_DYNAMIC = "^${PN}-.*"
+
+# qemu-mips64: error while loading shared libraries: .../recipe-sysroot/usr/lib/libgthread-2.0.so.0: ELF file data encoding not little-endian
+EXTRA_OECONF_append_mips64 = " --disable-introspection "
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-python.inc b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-python.inc
new file mode 100644
index 0000000..961b930
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-python.inc
@@ -0,0 +1,35 @@
+SUMMARY = "Python bindings for GStreamer 1.0"
+HOMEPAGE = "http://cgit.freedesktop.org/gstreamer/gst-python/"
+SECTION = "multimedia"
+LICENSE = "LGPLv2.1"
+
+DEPENDS = "gstreamer1.0 python3-pygobject"
+RDEPENDS_${PN} += "gstreamer1.0 python3-pygobject"
+
+PNREAL = "gst-python"
+
+SRC_URI = "http://gstreamer.freedesktop.org/src/${PNREAL}/${PNREAL}-${PV}.tar.xz"
+
+S = "${WORKDIR}/${PNREAL}-${PV}"
+
+inherit autotools pkgconfig distutils3-base upstream-version-is-even gobject-introspection
+
+do_install_append() {
+    # gstpythonplugin hardcodes the location of the libpython from the build
+    # workspace and then fails at runtime. We can override it using
+    # --with-libpython-dir=${libdir}, but it still fails because it looks for a
+    # symlinked library ending in .so instead of the actually library with
+    # LIBNAME.so.MAJOR.MINOR. Although we could patch the code to use the path
+    # we want, it will break again if the library version ever changes. We need
+    # to think about the best way of handling this and possibly consult
+    # upstream.
+    #
+    # Note that this particular find line is taken from the Debian packaging for
+    # gst-python1.0.
+    find "${D}" \
+        -name '*.pyc' -o \
+        -name '*.pyo' -o \
+        -name '*.la' -o \
+        -name 'libgstpythonplugin*' \
+        -delete
+}
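A side note on the do_install_append above: GNU find gives the implicit -a higher precedence than -o, so the -delete action binds only to the final -name 'libgstpythonplugin*' test; files matching '*.pyc', '*.pyo' and '*.la' are matched but not deleted. A minimal grouped variant that removes all four patterns would look like the sketch below (an illustration only, not part of this commit):

    do_install_append() {
        # Parenthesise the alternatives so -delete applies to every pattern
        find "${D}" \( \
            -name '*.pyc' -o \
            -name '*.pyo' -o \
            -name '*.la' -o \
            -name 'libgstpythonplugin*' \
        \) -delete
    }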
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-python_1.12.2.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-python_1.12.2.bb
new file mode 100644
index 0000000..b2dc05b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-python_1.12.2.bb
@@ -0,0 +1,7 @@
+require gstreamer1.0-python.inc
+
+SRC_URI = "http://gstreamer.freedesktop.org/src/${PNREAL}/${PNREAL}-${PV}.tar.xz"
+SRC_URI[md5sum] = "da5c9fa42290bc3006661c869e076ffe"
+SRC_URI[sha256sum] = "f4cc32ad46a653e1ae2f27ac2a16078b00075c9106b2784a1a8d1f31c5069e47"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=c34deae4e395ca07e725ab0076a5f740"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-rtsp-server.inc b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-rtsp-server.inc
index 7191f98..68173ce 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-rtsp-server.inc
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-rtsp-server.inc
@@ -36,3 +36,5 @@
 
 do_configure[prefuncs] += " delete_pkg_m4_file patch_gtk_doc_makefiles"
 
+# Needs to be disabled due to a dependency on gstreamer-plugins introspection files
+EXTRA_OECONF_append_mips64 = " --disable-introspection "
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-rtsp-server_1.10.4.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-rtsp-server_1.10.4.bb
deleted file mode 100644
index 6aa9a53..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-rtsp-server_1.10.4.bb
+++ /dev/null
@@ -1,6 +0,0 @@
-require gstreamer1.0-rtsp-server.inc
-
-SRC_URI[md5sum] = "ef587fa6393e09bc26f884510596e305"
-SRC_URI[sha256sum] = "2f6e12fd4e3568ee190dc24e57e4c3a878971c3a3fb6904a9674404fac256de6"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=6762ed442b3822387a51c92d928ead0d"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-rtsp-server_1.12.2.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-rtsp-server_1.12.2.bb
new file mode 100644
index 0000000..2426cca
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-rtsp-server_1.12.2.bb
@@ -0,0 +1,6 @@
+require gstreamer1.0-rtsp-server.inc
+
+SRC_URI[md5sum] = "022757cab183f5b970086e9101c60a98"
+SRC_URI[sha256sum] = "d8ba9264e8ae6e440293328e759e40456f161aa66077b3143dd07581136190b3"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=6762ed442b3822387a51c92d928ead0d"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-vaapi.inc b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-vaapi.inc
index ef0734b..abfcc65 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-vaapi.inc
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-vaapi.inc
@@ -22,12 +22,21 @@
 
 PACKAGES =+ "${PN}-tests"
 
+# OpenGL packageconfig factored out to make it easy for distros
+# and BSP layers to pick either glx, egl, or no GL. By default,
+# try detecting X11 first, and if found (with OpenGL), use GLX,
+# otherwise try to check if EGL can be used.
+PACKAGECONFIG_GL ?= "${@bb.utils.contains('DISTRO_FEATURES', 'x11 opengl', 'glx', \
+                        bb.utils.contains('DISTRO_FEATURES',     'opengl', 'egl', \
+                                                                       '', d), d)}"
+
 PACKAGECONFIG ??= "drm \
-                   ${@bb.utils.contains('DISTRO_FEATURES', 'opengl x11', 'glx', '', d)} \
+                   ${PACKAGECONFIG_GL} \
                    ${@bb.utils.filter('DISTRO_FEATURES', 'wayland x11', d)}"
 
 PACKAGECONFIG[drm] = "--enable-drm,--disable-drm,udev libdrm"
-PACKAGECONFIG[glx] = "--enable-glx,--disable-glx,virtual/mesa"
+PACKAGECONFIG[egl] = "--enable-egl,--disable-egl,virtual/egl"
+PACKAGECONFIG[glx] = "--enable-glx,--disable-glx,virtual/libgl"
 PACKAGECONFIG[wayland] = "--enable-wayland,--disable-wayland,wayland"
 PACKAGECONFIG[x11] = "--enable-x11,--disable-x11,virtual/libx11 libxrandr libxrender"
 
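Because PACKAGECONFIG_GL is assigned with ?=, a distro or BSP layer can pin the GL flavour without editing this include file. A minimal sketch of a hypothetical bbappend (the file name and layer are assumptions, not part of this commit) that forces EGL:

    # gstreamer1.0-vaapi_%.bbappend in a BSP layer
    PACKAGECONFIG_GL = "egl"

An equivalent override-style assignment in a distro or local configuration, qualified with _pn-gstreamer1.0-vaapi, would achieve the same result.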
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-vaapi_1.10.4.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-vaapi_1.10.4.bb
deleted file mode 100644
index 44c66de..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-vaapi_1.10.4.bb
+++ /dev/null
@@ -1,7 +0,0 @@
-require gstreamer1.0-vaapi.inc
-SRC_URI[md5sum] = "318af17f906637570b61dd7be9b5581d"
-SRC_URI[sha256sum] = "03e690621594d9f9495d86c7dac8b8590b3a150462770ed070dc76f66a70de75"
-
-SRC_URI += "file://vaapivideobufferpool-create-allocator-if-needed.patch"
-
-DEPENDS += "gstreamer1.0 gstreamer1.0-plugins-base gstreamer1.0-plugins-bad"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-vaapi_1.12.2.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-vaapi_1.12.2.bb
new file mode 100644
index 0000000..fbd9f86
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-vaapi_1.12.2.bb
@@ -0,0 +1,5 @@
+require gstreamer1.0-vaapi.inc
+SRC_URI[md5sum] = "bea015f33696a15ad9575fb71af862ed"
+SRC_URI[sha256sum] = "23c714e0474b3c7ae6ff8884aebf8503a1bc3ded335fa2d2b2ac31788466163a"
+
+DEPENDS += "gstreamer1.0 gstreamer1.0-plugins-base gstreamer1.0-plugins-bad"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0.inc b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0.inc
index 72d7ce6..3291934 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0.inc
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0.inc
@@ -25,10 +25,10 @@
 PACKAGECONFIG[valgrind] = "--enable-valgrind,--disable-valgrind,valgrind,"
 PACKAGECONFIG[gst-tracer-hooks] = "--enable-gst-tracer-hooks,--disable-gst-tracer-hooks,"
 PACKAGECONFIG[unwind] = "--with-unwind,--without-unwind,libunwind"
+PACKAGECONFIG[dw] = "--with-dw,--without-dw,elfutils"
 
 EXTRA_OECONF = " \
     --disable-dependency-tracking \
-    --disable-docbook \
     --disable-examples \
 "
 
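With the new dw PACKAGECONFIG knob alongside the existing unwind one, libdw source-line decoding and libunwind backtraces for the leaks tracer can be switched on from the build configuration instead of by editing the recipe. A minimal, hedged sketch for local.conf (assuming elfutils and libunwind are acceptable build-time dependencies):

    # local.conf: enable libdw and libunwind support in the core gstreamer recipe
    PACKAGECONFIG_append_pn-gstreamer1.0 = " dw unwind"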
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0/0001-configure-Add-switches-for-enabling-disabling-libdw-.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0/0001-configure-Add-switches-for-enabling-disabling-libdw-.patch
new file mode 100644
index 0000000..1132fd5
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0/0001-configure-Add-switches-for-enabling-disabling-libdw-.patch
@@ -0,0 +1,70 @@
+From a0cb41ba72913eda06049d266ec43ea8f52b5bee Mon Sep 17 00:00:00 2001
+From: Carlos Rafael Giani <dv@pseudoterminal.org>
+Date: Fri, 11 Aug 2017 21:21:36 +0200
+Subject: [PATCH] configure: Add switches for enabling/disabling libdw and
+ libunwind
+
+[Original patch modified to be applicable to 1.12.2]
+
+Upstream-Status: Submitted [https://bugzilla.gnome.org/show_bug.cgi?id=778193]
+
+Signed-off-by: Carlos Rafael Giani <dv@pseudoterminal.org>
+---
+ configure.ac | 38 ++++++++++++++++++++++++++++++++------
+ 1 file changed, 32 insertions(+), 6 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index b6b2923..32dd827 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -821,15 +821,41 @@ fi
+ AM_CONDITIONAL(HAVE_GTK, test "x$HAVE_GTK" = "xyes")
+ 
+ dnl libunwind is optionally used by the leaks tracer
+-PKG_CHECK_MODULES(UNWIND, libunwind, HAVE_UNWIND=yes, HAVE_UNWIND=no)
+-if test "x$HAVE_UNWIND" = "xyes"; then
+-  AC_DEFINE(HAVE_UNWIND, 1, [libunwind available])
++AC_ARG_WITH([unwind],[AS_HELP_STRING([--with-unwind=yes|no|auto],[use libunwind])],
++            [], [with_unwind=auto])
++if [ test "x${with_unwind}" != "xno" ]; then
++  PKG_CHECK_MODULES(UNWIND, [libunwind],
++      [
++        HAVE_UNWIND=yes
++        AC_DEFINE(HAVE_UNWIND, 1, [libunwind available])
++      ],
++      [
++        HAVE_UNWIND=no
++        if [ test "x${with_unwind}" = "xyes" ]; then
++          AC_MSG_ERROR([could not find libunwind])
++        fi
++      ])
++else
++  HAVE_UNWIND=no
+ fi
+ 
+ dnl libdw is optionally used to add source lines and numbers to backtraces
+-PKG_CHECK_MODULES(DW, libdw, HAVE_DW=yes, HAVE_DW=no)
+-if test "x$HAVE_DW" = "xyes"; then
+-  AC_DEFINE(HAVE_DW, 1, [libdw available])
++AC_ARG_WITH([dw],[AS_HELP_STRING([--with-dw=yes|no|auto],[use libdw])],
++            [], [with_dw=auto])
++if [ test "x${with_dw}" != "xno" ]; then
++  PKG_CHECK_MODULES(DW, [libdw],
++      [
++        HAVE_DW=yes
++        AC_DEFINE(HAVE_DW, 1, [libdw available])
++      ],
++      [
++        HAVE_DW=no
++        if [ test "x${with_dw}" = "xyes" ]; then
++          AC_MSG_ERROR([could not find libdw])
++        fi
++      ])
++else
++  HAVE_DW=no
+ fi
+ 
+ dnl Check for backtrace() from libc
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0/deterministic-unwind.patch b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0/deterministic-unwind.patch
deleted file mode 100644
index e39e6ca..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0/deterministic-unwind.patch
+++ /dev/null
@@ -1,24 +0,0 @@
-Make the detection of libunwind deterministic.
-
-Upstream-Status: Pending
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-diff --git a/configure.ac b/configure.ac
-index ac88fb2..182c19a 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -829,3 +828,0 @@ AM_CONDITIONAL(HAVE_GTK, test "x$HAVE_GTK" = "xyes")
--dnl libunwind is optionally used by the leaks tracer
--PKG_CHECK_MODULES(UNWIND, libunwind, HAVE_UNWIND=yes, HAVE_UNWIND=no)
--
-@@ -839,3 +836,7 @@ AC_CHECK_FUNC(backtrace, [
--if test "x$HAVE_UNWIND" = "xyes"; then
--  AC_DEFINE(HAVE_UNWIND, 1, [libunwind available])
--fi
-+dnl libunwind is optionally used by the leaks tracer
-+AC_ARG_WITH([unwind],[AS_HELP_STRING([--with-unwind],[use libunwind])],
-+            [], [with_unwind=yes])
-+AS_IF([test "$with_unwind" = yes],
-+      [PKG_CHECK_MODULES(UNWIND, libunwind)
-+       AC_DEFINE(HAVE_UNWIND, 1, [libunwind available])]
-+)
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0_1.10.4.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0_1.10.4.bb
deleted file mode 100644
index 2a67993..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0_1.10.4.bb
+++ /dev/null
@@ -1,13 +0,0 @@
-require gstreamer1.0.inc
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=6762ed442b3822387a51c92d928ead0d \
-                    file://gst/gst.h;beginline=1;endline=21;md5=e059138481205ee2c6fc1c079c016d0d"
-
-SRC_URI = " \
-    http://gstreamer.freedesktop.org/src/gstreamer/gstreamer-${PV}.tar.xz \
-    file://deterministic-unwind.patch \
-"
-SRC_URI[md5sum] = "7c91a97e4a2dc81eafd59d0a2f8b0d6e"
-SRC_URI[sha256sum] = "50c2f5af50a6cc6c0a3f3ed43bdd8b5e2bff00bacfb766d4be139ec06d8b5218"
-
-S = "${WORKDIR}/gstreamer-${PV}"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0_1.12.2.bb b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0_1.12.2.bb
new file mode 100644
index 0000000..8d41a59
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/gstreamer/gstreamer1.0_1.12.2.bb
@@ -0,0 +1,13 @@
+require gstreamer1.0.inc
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=6762ed442b3822387a51c92d928ead0d \
+                    file://gst/gst.h;beginline=1;endline=21;md5=e059138481205ee2c6fc1c079c016d0d"
+
+SRC_URI = " \
+    http://gstreamer.freedesktop.org/src/gstreamer/gstreamer-${PV}.tar.xz \
+    file://0001-configure-Add-switches-for-enabling-disabling-libdw-.patch \
+"
+SRC_URI[md5sum] = "4748860621607ffd96244fb79c86c238"
+SRC_URI[sha256sum] = "9fde3f39a2ea984f9e07ce09250285ce91f6e3619d186889f75b5154ecf994ba"
+
+S = "${WORKDIR}/gstreamer-${PV}"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/lame/lame/CVE-2017-13712.patch b/import-layers/yocto-poky/meta/recipes-multimedia/lame/lame/CVE-2017-13712.patch
new file mode 100644
index 0000000..f9ec766
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/lame/lame/CVE-2017-13712.patch
@@ -0,0 +1,309 @@
+Upstream-Status: Backport [http://lame.cvs.sourceforge.net/viewvc/lame/lame/libmp3lame/id3tag.c?r1=1.79&r2=1.80]
+
+Backport patch to fix CVE-2017-13712 for lame.
+
+Signed-off-by: Kai Kang <kai.kang@windriver.com>
+---
+--- a/libmp3lame/id3tag.c	2017/08/22 19:44:05	1.79
++++ b/libmp3lame/id3tag.c	2017/08/28 15:39:51	1.80
+@@ -194,7 +194,11 @@
+ }
+ #endif
+ 
+-
++static int
++is_lame_internal_flags_null(lame_t gfp)
++{
++    return (gfp && gfp->internal_flags) ? 0 : 1;
++}
+ 
+ static int
+ id3v2_add_ucs2_lng(lame_t gfp, uint32_t frame_id, unsigned short const *desc, unsigned short const *text);
+@@ -238,8 +242,7 @@
+ static void
+ id3v2AddAudioDuration(lame_t gfp, double ms)
+ {
+-    lame_internal_flags *gfc = gfp != 0 ? gfp->internal_flags : 0;
+-    SessionConfig_t const *const cfg = &gfc->cfg;
++    SessionConfig_t const *const cfg = &gfp->internal_flags->cfg; /* caller checked pointers */
+     char    buffer[1024];
+     double const max_ulong = MAX_U_32_NUM;
+     unsigned long playlength_ms;
+@@ -280,7 +283,12 @@
+ void
+ id3tag_init(lame_t gfp)
+ {
+-    lame_internal_flags *gfc = gfp->internal_flags;
++    lame_internal_flags *gfc = 0;
++
++    if (is_lame_internal_flags_null(gfp)) {
++        return;
++    }
++    gfc = gfp->internal_flags;
+     free_id3tag(gfc);
+     memset(&gfc->tag_spec, 0, sizeof gfc->tag_spec);
+     gfc->tag_spec.genre_id3v1 = GENRE_NUM_UNKNOWN;
+@@ -293,7 +301,12 @@
+ void
+ id3tag_add_v2(lame_t gfp)
+ {
+-    lame_internal_flags *gfc = gfp->internal_flags;
++    lame_internal_flags *gfc = 0;
++
++    if (is_lame_internal_flags_null(gfp)) {
++        return;
++    }
++    gfc = gfp->internal_flags;
+     gfc->tag_spec.flags &= ~V1_ONLY_FLAG;
+     gfc->tag_spec.flags |= ADD_V2_FLAG;
+ }
+@@ -301,7 +314,12 @@
+ void
+ id3tag_v1_only(lame_t gfp)
+ {
+-    lame_internal_flags *gfc = gfp->internal_flags;
++    lame_internal_flags *gfc = 0;
++
++    if (is_lame_internal_flags_null(gfp)) {
++        return;
++    }
++    gfc = gfp->internal_flags;
+     gfc->tag_spec.flags &= ~(ADD_V2_FLAG | V2_ONLY_FLAG);
+     gfc->tag_spec.flags |= V1_ONLY_FLAG;
+ }
+@@ -309,7 +327,12 @@
+ void
+ id3tag_v2_only(lame_t gfp)
+ {
+-    lame_internal_flags *gfc = gfp->internal_flags;
++    lame_internal_flags *gfc = 0;
++
++    if (is_lame_internal_flags_null(gfp)) {
++        return;
++    }
++    gfc = gfp->internal_flags;
+     gfc->tag_spec.flags &= ~V1_ONLY_FLAG;
+     gfc->tag_spec.flags |= V2_ONLY_FLAG;
+ }
+@@ -317,7 +340,12 @@
+ void
+ id3tag_space_v1(lame_t gfp)
+ {
+-    lame_internal_flags *gfc = gfp->internal_flags;
++    lame_internal_flags *gfc = 0;
++
++    if (is_lame_internal_flags_null(gfp)) {
++        return;
++    }
++    gfc = gfp->internal_flags;
+     gfc->tag_spec.flags &= ~V2_ONLY_FLAG;
+     gfc->tag_spec.flags |= SPACE_V1_FLAG;
+ }
+@@ -331,7 +359,12 @@
+ void
+ id3tag_set_pad(lame_t gfp, size_t n)
+ {
+-    lame_internal_flags *gfc = gfp->internal_flags;
++    lame_internal_flags *gfc = 0;
++
++    if (is_lame_internal_flags_null(gfp)) {
++        return;
++    }
++    gfc = gfp->internal_flags;
+     gfc->tag_spec.flags &= ~V1_ONLY_FLAG;
+     gfc->tag_spec.flags |= PAD_V2_FLAG;
+     gfc->tag_spec.flags |= ADD_V2_FLAG;
+@@ -583,22 +616,29 @@
+ int
+ id3tag_set_albumart(lame_t gfp, const char *image, size_t size)
+ {
+-    int     mimetype = 0;
+-    unsigned char const *data = (unsigned char const *) image;
+-    lame_internal_flags *gfc = gfp->internal_flags;
+-
+-    /* determine MIME type from the actual image data */
+-    if (2 < size && data[0] == 0xFF && data[1] == 0xD8) {
+-        mimetype = MIMETYPE_JPEG;
+-    }
+-    else if (4 < size && data[0] == 0x89 && strncmp((const char *) &data[1], "PNG", 3) == 0) {
+-        mimetype = MIMETYPE_PNG;
+-    }
+-    else if (4 < size && strncmp((const char *) data, "GIF8", 4) == 0) {
+-        mimetype = MIMETYPE_GIF;
++    int     mimetype = MIMETYPE_NONE;
++    lame_internal_flags *gfc = 0;
++
++    if (is_lame_internal_flags_null(gfp)) {
++        return 0;
+     }
+-    else {
+-        return -1;
++    gfc = gfp->internal_flags;
++
++    if (image != 0) {
++        unsigned char const *data = (unsigned char const *) image;
++        /* determine MIME type from the actual image data */
++        if (2 < size && data[0] == 0xFF && data[1] == 0xD8) {
++            mimetype = MIMETYPE_JPEG;
++        }
++        else if (4 < size && data[0] == 0x89 && strncmp((const char *) &data[1], "PNG", 3) == 0) {
++            mimetype = MIMETYPE_PNG;
++        }
++        else if (4 < size && strncmp((const char *) data, "GIF8", 4) == 0) {
++            mimetype = MIMETYPE_GIF;
++        }
++        else {
++            return -1;
++        }
+     }
+     if (gfc->tag_spec.albumart != 0) {
+         free(gfc->tag_spec.albumart);
+@@ -606,7 +646,7 @@
+         gfc->tag_spec.albumart_size = 0;
+         gfc->tag_spec.albumart_mimetype = MIMETYPE_NONE;
+     }
+-    if (size < 1) {
++    if (size < 1 || mimetype == MIMETYPE_NONE) {
+         return 0;
+     }
+     gfc->tag_spec.albumart = lame_calloc(unsigned char, size);
+@@ -959,6 +999,9 @@
+     if (frame_id == 0) {
+         return -1;
+     }
++    if (is_lame_internal_flags_null(gfp)) {
++        return 0;
++    }
+     if (text == 0) {
+         return 0;
+     }
+@@ -1008,6 +1051,9 @@
+     if (frame_id == 0) {
+         return -1;
+     }
++    if (is_lame_internal_flags_null(gfp)) {
++        return 0;
++    }
+     if (text == 0) {
+         return 0;
+     }
+@@ -1037,6 +1083,9 @@
+ int
+ id3tag_set_comment_latin1(lame_t gfp, char const *lang, char const *desc, char const *text)
+ {
++    if (is_lame_internal_flags_null(gfp)) {
++        return 0;
++    }
+     return id3v2_add_latin1(gfp, ID_COMMENT, lang, desc, text);
+ }
+ 
+@@ -1044,6 +1093,9 @@
+ int
+ id3tag_set_comment_utf16(lame_t gfp, char const *lang, unsigned short const *desc, unsigned short const *text)
+ {
++    if (is_lame_internal_flags_null(gfp)) {
++        return 0;
++    }
+     return id3v2_add_ucs2(gfp, ID_COMMENT, lang, desc, text);
+ }
+ 
+@@ -1054,6 +1106,9 @@
+ int
+ id3tag_set_comment_ucs2(lame_t gfp, char const *lang, unsigned short const *desc, unsigned short const *text)
+ {
++    if (is_lame_internal_flags_null(gfp)) {
++        return 0;
++    }
+     return id3tag_set_comment_utf16(gfp, lang, desc, text);
+ }
+ 
+@@ -1244,9 +1299,9 @@
+ int
+ id3tag_set_genre(lame_t gfp, const char *genre)
+ {
+-    lame_internal_flags *gfc = gfp->internal_flags;
++    lame_internal_flags *gfc = gfp != 0 ? gfp->internal_flags : 0;
+     int     ret = 0;
+-    if (genre && *genre) {
++    if (gfc && genre && *genre) {
+         int const num = lookupGenre(genre);
+         if (num == -1) return num;
+         gfc->tag_spec.flags |= CHANGED_FLAG;
+@@ -1539,6 +1594,9 @@
+ int
+ id3tag_set_fieldvalue(lame_t gfp, const char *fieldvalue)
+ {
++    if (is_lame_internal_flags_null(gfp)) {
++        return 0;
++    }
+     if (fieldvalue && *fieldvalue) {
+         if (strlen(fieldvalue) < 5 || fieldvalue[4] != '=') {
+             return -1;
+@@ -1551,6 +1609,9 @@
+ int
+ id3tag_set_fieldvalue_utf16(lame_t gfp, const unsigned short *fieldvalue)
+ {
++    if (is_lame_internal_flags_null(gfp)) {
++        return 0;
++    }
+     if (fieldvalue && *fieldvalue) {
+         size_t dx = hasUcs2ByteOrderMarker(fieldvalue[0]);
+         unsigned short const separator = fromLatin1Char(fieldvalue, '=');
+@@ -1581,20 +1642,21 @@
+ int
+ id3tag_set_fieldvalue_ucs2(lame_t gfp, const unsigned short *fieldvalue)
+ {
++    if (is_lame_internal_flags_null(gfp)) {
++        return 0;
++    }
+     return id3tag_set_fieldvalue_utf16(gfp, fieldvalue);
+ }
+ 
+ size_t
+ lame_get_id3v2_tag(lame_t gfp, unsigned char *buffer, size_t size)
+ {
+-    lame_internal_flags *gfc;
+-    if (gfp == 0) {
++    lame_internal_flags *gfc = 0;
++
++    if (is_lame_internal_flags_null(gfp)) {
+         return 0;
+     }
+     gfc = gfp->internal_flags;
+-    if (gfc == 0) {
+-        return 0;
+-    }
+     if (test_tag_spec_flags(gfc, V1_ONLY_FLAG)) {
+         return 0;
+     }
+@@ -1736,7 +1798,12 @@
+ int
+ id3tag_write_v2(lame_t gfp)
+ {
+-    lame_internal_flags *gfc = gfp->internal_flags;
++    lame_internal_flags *gfc = 0;
++
++    if (is_lame_internal_flags_null(gfp)) {
++        return 0;
++    }
++    gfc = gfp->internal_flags;
+ #if 0
+     debug_tag_spec_flags(gfc, "write v2");
+ #endif
+@@ -1837,10 +1904,15 @@
+ int
+ id3tag_write_v1(lame_t gfp)
+ {
+-    lame_internal_flags *const gfc = gfp->internal_flags;
++    lame_internal_flags* gfc = 0;
+     size_t  i, n, m;
+     unsigned char tag[128];
+ 
++    if (is_lame_internal_flags_null(gfp)) {
++        return 0;
++    }
++    gfc = gfp->internal_flags;
++
+     m = sizeof(tag);
+     n = lame_get_id3v1_tag(gfp, tag, m);
+     if (n > m) {
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/lame/lame_3.99.5.bb b/import-layers/yocto-poky/meta/recipes-multimedia/lame/lame_3.99.5.bb
index 0477611..e5321bb 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/lame/lame_3.99.5.bb
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/lame/lame_3.99.5.bb
@@ -14,7 +14,9 @@
 
 SRC_URI = "${SOURCEFORGE_MIRROR}/lame/lame-${PV}.tar.gz \
            file://no-gtk1.patch \
-           file://lame-3.99.5_fix_for_automake-1.12.x.patch "
+           file://lame-3.99.5_fix_for_automake-1.12.x.patch \
+           file://CVE-2017-13712.patch \
+           "
 
 SRC_URI[md5sum] = "84835b313d4a8b68f5349816d33e07ce"
 SRC_URI[sha256sum] = "24346b4158e4af3bd9f2e194bb23eb473c75fb7377011523353196b19b9a23ff"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libid3tag/libid3tag/0001-Fix-gperf-3.1-incompatibility.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libid3tag/libid3tag/0001-Fix-gperf-3.1-incompatibility.patch
new file mode 100644
index 0000000..54f49f6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/libid3tag/libid3tag/0001-Fix-gperf-3.1-incompatibility.patch
@@ -0,0 +1,40 @@
+From 91fcf66b9182c75cd2b96d88991d5a1c6307d4b4 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Wed, 2 Aug 2017 16:27:52 +0300
+Subject: [PATCH] Fix gperf 3.1 incompatibility.
+
+Upstream-Status: Pending
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+ compat.h    | 2 +-
+ frametype.h | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/compat.h b/compat.h
+index 8af71ec..b3d80d9 100644
+--- a/compat.h
++++ b/compat.h
+@@ -34,7 +34,7 @@ struct id3_compat {
+ };
+ 
+ struct id3_compat const *id3_compat_lookup(register char const *,
+-					   register unsigned int);
++					   register size_t);
+ 
+ int id3_compat_fixup(struct id3_tag *);
+ 
+diff --git a/frametype.h b/frametype.h
+index dd064b2..b5b7593 100644
+--- a/frametype.h
++++ b/frametype.h
+@@ -37,6 +37,6 @@ extern struct id3_frametype const id3_frametype_unknown;
+ extern struct id3_frametype const id3_frametype_obsolete;
+ 
+ struct id3_frametype const *id3_frametype_lookup(register char const *,
+-						 register unsigned int);
++						 register size_t);
+ 
+ # endif
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libid3tag/libid3tag_0.15.1b.bb b/import-layers/yocto-poky/meta/recipes-multimedia/libid3tag/libid3tag_0.15.1b.bb
index 05a8a47..f6139d6 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libid3tag/libid3tag_0.15.1b.bb
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/libid3tag/libid3tag_0.15.1b.bb
@@ -9,10 +9,13 @@
 DEPENDS = "zlib gperf-native"
 PR = "r7"
 
-SRC_URI = "ftp://ftp.mars.org/pub/mpeg/libid3tag-${PV}.tar.gz \
+SRC_URI = "${SOURCEFORGE_MIRROR}/mad/libid3tag-${PV}.tar.gz \
            file://addpkgconfig.patch \
            file://obsolete_automake_macros.patch \
-"
+           file://0001-Fix-gperf-3.1-incompatibility.patch \
+           "
+UPSTREAM_CHECK_URI = "https://sourceforge.net/projects/mad/files/libid3tag/"
+UPSTREAM_CHECK_REGEX = "/projects/mad/files/libid3tag/(?P<pver>.*)/$"
 
 SRC_URI[md5sum] = "e5808ad997ba32c498803822078748c3"
 SRC_URI[sha256sum] = "63da4f6e7997278f8a3fef4c6a372d342f705051d1eeb6a46a86b03610e26151"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libpng/libpng_1.6.28.bb b/import-layers/yocto-poky/meta/recipes-multimedia/libpng/libpng_1.6.28.bb
deleted file mode 100644
index 9cb2967..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libpng/libpng_1.6.28.bb
+++ /dev/null
@@ -1,25 +0,0 @@
-SUMMARY = "PNG image format decoding library"
-HOMEPAGE = "http://www.libpng.org/"
-SECTION = "libs"
-LICENSE = "Libpng"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=67d8837410863f9821bbd606536f0329 \
-                    file://png.h;endline=144;md5=abfa0497feb393b5842d3d82c1009520"
-DEPENDS = "zlib"
-
-SRC_URI = "${GENTOO_MIRROR}/libpng-${PV}.tar.xz \
-          "
-SRC_URI[md5sum] = "425354f86c392318d31aedca71019372"
-SRC_URI[sha256sum] = "d8d3ec9de6b5db740fefac702c37ffcf96ae46cb17c18c1544635a3852f78f7a"
-
-BINCONFIG = "${bindir}/libpng-config ${bindir}/libpng16-config"
-
-inherit autotools binconfig-disabled pkgconfig
-
-# Work around missing symbols
-EXTRA_OECONF_append_class-target = " ${@bb.utils.contains("TUNE_FEATURES", "neon", "--enable-arm-neon=on", "--enable-arm-neon=off" ,d)}"
-
-PACKAGES =+ "${PN}-tools"
-
-FILES_${PN}-tools = "${bindir}/png-fix-itxt ${bindir}/pngfix ${bindir}/pngcp"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libpng/libpng_1.6.31.bb b/import-layers/yocto-poky/meta/recipes-multimedia/libpng/libpng_1.6.31.bb
new file mode 100644
index 0000000..c96ea14
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/libpng/libpng_1.6.31.bb
@@ -0,0 +1,28 @@
+SUMMARY = "PNG image format decoding library"
+HOMEPAGE = "http://www.libpng.org/"
+SECTION = "libs"
+LICENSE = "Libpng"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=6f21e4e1f795810b5ea491b333bc6ea1 \
+                    file://png.h;endline=144;md5=af300c419a45c53a8d2daa0b7d167a66"
+DEPENDS = "zlib"
+
+LIBV = "16"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/project/${BPN}/${BPN}${LIBV}/${PV}/${BP}.tar.xz"
+SRC_URI[md5sum] = "1b34eab440263e32cfa39d19413fad54"
+SRC_URI[sha256sum] = "232a602de04916b2b5ce6f901829caf419519e6a16cc9cd7c1c91187d3ee8b41"
+
+MIRRORS += "${SOURCEFORGE_MIRROR}/project/${BPN}/${BPN}${LIBV}/${PV}/ ${SOURCEFORGE_MIRROR}/project/${BPN}/${BPN}${LIBV}/older-releases/${PV}/"
+
+BINCONFIG = "${bindir}/libpng-config ${bindir}/libpng16-config"
+
+inherit autotools binconfig-disabled pkgconfig
+
+# Work around missing symbols
+EXTRA_OECONF_append_class-target = " ${@bb.utils.contains("TUNE_FEATURES", "neon", "--enable-arm-neon=on", "--enable-arm-neon=off" ,d)}"
+
+PACKAGES =+ "${PN}-tools"
+
+FILES_${PN}-tools = "${bindir}/png-fix-itxt ${bindir}/pngfix ${bindir}/pngcp"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libsndfile/libsndfile1/CVE-2017-8362.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libsndfile/libsndfile1/CVE-2017-8362.patch
index 771ec92..9ee7e46 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libsndfile/libsndfile1/CVE-2017-8362.patch
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/libsndfile/libsndfile1/CVE-2017-8362.patch
@@ -14,20 +14,17 @@
 Upstream-Status: Backport [https://github.com/erikd/libsndfile/commit/ef1dbb2df1c0e741486646de40bd638a9c4cd808]
 
 Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
-
-removed hunk #2 as it was only cosmentic and caused a merge conflict.
-signef-off-by: Armin Kuster <akuster808@gmail.com>
 ---
  src/flac.c | 11 +++++++++--
  1 file changed, 9 insertions(+), 2 deletions(-)
 
-Index: libsndfile-1.0.27/src/flac.c
-===================================================================
---- libsndfile-1.0.27.orig/src/flac.c
-+++ libsndfile-1.0.27/src/flac.c
+diff --git a/src/flac.c b/src/flac.c
+index 5a4f8c2..e4f9aaa 100644
+--- a/src/flac.c
++++ b/src/flac.c
 @@ -169,6 +169,14 @@ flac_buffer_copy (SF_PRIVATE *psf)
- 	const FLAC__int32* const *buffer = pflac->wbuffer ;
- 	unsigned i = 0, j, offset ;
+ 	const int32_t* const *buffer = pflac->wbuffer ;
+ 	unsigned i = 0, j, offset, channels, len ;
  
 +	if (psf->sf.channels != (int) frame->header.channels)
 +	{	psf_log_printf (psf, "Error: FLAC frame changed from %d to %d channels\n"
@@ -40,7 +37,15 @@
  	/*
  	**	frame->header.blocksize is variable and we're using a constant blocksize
  	**	of FLAC__MAX_BLOCK_SIZE.
-@@ -410,7 +418,7 @@ sf_flac_meta_callback (const FLAC__Strea
+@@ -202,7 +210,6 @@ flac_buffer_copy (SF_PRIVATE *psf)
+ 		return 0 ;
+ 		} ;
+ 
+-
+ 	len = SF_MIN (pflac->len, frame->header.blocksize) ;
+ 
+ 	if (pflac->remain % channels != 0)
+@@ -436,7 +443,7 @@ sf_flac_meta_callback (const FLAC__StreamDecoder * UNUSED (decoder), const FLAC_
  	{	case FLAC__METADATA_TYPE_STREAMINFO :
  			if (psf->sf.channels > 0 && psf->sf.channels != (int) metadata->data.stream_info.channels)
  			{	psf_log_printf (psf, "Error: FLAC stream changed from %d to %d channels\n"
@@ -49,3 +54,6 @@
  									psf->sf.channels, metadata->data.stream_info.channels) ;
  				psf->error = SFE_FLAC_CHANNEL_COUNT_CHANGED ;
  				return ;
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libsndfile/libsndfile1_1.0.27.bb b/import-layers/yocto-poky/meta/recipes-multimedia/libsndfile/libsndfile1_1.0.27.bb
deleted file mode 100644
index c6df4e9..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libsndfile/libsndfile1_1.0.27.bb
+++ /dev/null
@@ -1,33 +0,0 @@
-SUMMARY = "Audio format Conversion library"
-HOMEPAGE = "http://www.mega-nerd.com/libsndfile"
-AUTHOR = "Erik de Castro Lopo"
-DEPENDS = "flac libogg libvorbis sqlite3"
-SECTION = "libs/multimedia"
-LICENSE = "LGPLv2.1"
-
-SRC_URI = "http://www.mega-nerd.com/libsndfile/files/libsndfile-${PV}.tar.gz \
-           file://CVE-2017-6892.patch \
-           file://CVE-2017-8361-8365.patch \
-           file://CVE-2017-8362.patch \
-           file://CVE-2017-8363.patch \
-          "
-
-SRC_URI[md5sum] = "fd1d97c6077f03b5d984d7956ffedb7a"
-SRC_URI[sha256sum] = "a391952f27f4a92ceb2b4c06493ac107896ed6c76be9a613a4731f076d30fac0"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=e77fe93202736b47c07035910f47974a"
-
-CVE_PRODUCT = "libsndfile"
-
-S = "${WORKDIR}/libsndfile-${PV}"
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'alsa', d)}"
-PACKAGECONFIG[alsa] = "--enable-alsa,--disable-alsa,alsa-lib"
-
-inherit autotools lib_package pkgconfig
-
-do_configure_prepend_arm() {
-	export ac_cv_sys_largefile_source=1
-	export ac_cv_sys_file_offset_bits=64
-}
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libsndfile/libsndfile1_1.0.28.bb b/import-layers/yocto-poky/meta/recipes-multimedia/libsndfile/libsndfile1_1.0.28.bb
new file mode 100644
index 0000000..281ac82
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/libsndfile/libsndfile1_1.0.28.bb
@@ -0,0 +1,33 @@
+SUMMARY = "Audio format Conversion library"
+HOMEPAGE = "http://www.mega-nerd.com/libsndfile"
+AUTHOR = "Erik de Castro Lopo"
+DEPENDS = "flac libogg libvorbis sqlite3"
+SECTION = "libs/multimedia"
+LICENSE = "LGPLv2.1"
+
+SRC_URI = "http://www.mega-nerd.com/libsndfile/files/libsndfile-${PV}.tar.gz \
+           file://CVE-2017-6892.patch \
+           file://CVE-2017-8361-8365.patch \
+           file://CVE-2017-8362.patch \
+           file://CVE-2017-8363.patch \
+          "
+
+SRC_URI[md5sum] = "646b5f98ce89ac60cdb060fcd398247c"
+SRC_URI[sha256sum] = "1ff33929f042fa333aed1e8923aa628c3ee9e1eb85512686c55092d1e5a9dfa9"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=e77fe93202736b47c07035910f47974a"
+
+CVE_PRODUCT = "libsndfile"
+
+S = "${WORKDIR}/libsndfile-${PV}"
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'alsa', d)}"
+PACKAGECONFIG[alsa] = "--enable-alsa,--disable-alsa,alsa-lib"
+
+inherit autotools lib_package pkgconfig
+
+do_configure_prepend_arm() {
+	export ac_cv_sys_largefile_source=1
+	export ac_cv_sys_file_offset_bits=64
+}
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10093.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10093.patch
deleted file mode 100644
index e09bb7f..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10093.patch
+++ /dev/null
@@ -1,47 +0,0 @@
-From 787c0ee906430b772f33ca50b97b8b5ca070faec Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Sat, 3 Dec 2016 16:40:01 +0000
-Subject: [PATCH] * tools/tiffcp.c: fix uint32 underflow/overflow that can
- cause heap-based buffer overflow. Reported by Agostino Sarubbo. Fixes
- http://bugzilla.maptools.org/show_bug.cgi?id=2610
-
-Upstream-Status: Backport
-CVE: CVE-2016-10093
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
-
----
- ChangeLog      | 7 +++++++
- tools/tiffcp.c | 6 +++---
- 2 files changed, 10 insertions(+), 3 deletions(-)
-
-Index: tiff-4.0.7/tools/tiffcp.c
-===================================================================
---- tiff-4.0.7.orig/tools/tiffcp.c
-+++ tiff-4.0.7/tools/tiffcp.c
-@@ -1163,7 +1163,7 @@ bad:
- 
- static void
- cpStripToTile(uint8* out, uint8* in,
--    uint32 rows, uint32 cols, int outskew, int inskew)
-+    uint32 rows, uint32 cols, int outskew, int64 inskew)
- {
- 	while (rows-- > 0) {
- 		uint32 j = cols;
-@@ -1320,7 +1320,7 @@ DECLAREreadFunc(readContigTilesIntoBuffe
- 	tdata_t tilebuf;
- 	uint32 imagew = TIFFScanlineSize(in);
- 	uint32 tilew  = TIFFTileRowSize(in);
--	int iskew = imagew - tilew;
-+	int64 iskew = (int64)imagew - (int64)tilew;
- 	uint8* bufp = (uint8*) buf;
- 	uint32 tw, tl;
- 	uint32 row;
-@@ -1348,7 +1348,7 @@ DECLAREreadFunc(readContigTilesIntoBuffe
- 				status = 0;
- 				goto done;
- 			}
--			if (colb + tilew > imagew) {
-+			if (colb > iskew) {
- 				uint32 width = imagew - colb;
- 				uint32 oskew = tilew - width;
- 				cpStripToTile(bufp + colb,
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10266.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10266.patch
deleted file mode 100644
index e1c90c6..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10266.patch
+++ /dev/null
@@ -1,60 +0,0 @@
-From 1855407c4e5a27ade006b26c2dec8a31745c356e Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Fri, 2 Dec 2016 21:56:56 +0000
-Subject: [PATCH] * libtiff/tif_read.c, libtiff/tiffiop.h: fix uint32 overflow
- in TIFFReadEncodedStrip() that caused an integer division by zero. Reported
- by Agostino Sarubbo. Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2596
-
-Upstream-Status: Backport
-
-CVE: CVE-2016-10266
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
----
- ChangeLog          | 7 +++++++
- libtiff/tif_read.c | 2 +-
- libtiff/tiffiop.h  | 4 ++++
- 3 files changed, 12 insertions(+), 1 deletion(-)
-
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog
-+++ tiff-4.0.7/ChangeLog
-@@ -1,3 +1,10 @@
-+2016-12-02 Even Rouault <even.rouault at spatialys.com>
-+
-+       * libtiff/tif_read.c, libtiff/tiffiop.h: fix uint32 overflow in
-+       TIFFReadEncodedStrip() that caused an integer division by zero.
-+       Reported by Agostino Sarubbo.
-+       Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2596
-+
- 2017-07-15  Even Rouault <even.rouault at spatialys.com>
- 
- 	* tools/tiff2pdf.c: prevent heap buffer overflow write in "Raw"
-Index: tiff-4.0.7/libtiff/tif_read.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_read.c
-+++ tiff-4.0.7/libtiff/tif_read.c
-@@ -346,7 +346,7 @@ TIFFReadEncodedStrip(TIFF* tif, uint32 s
- 	rowsperstrip=td->td_rowsperstrip;
- 	if (rowsperstrip>td->td_imagelength)
- 		rowsperstrip=td->td_imagelength;
--	stripsperplane=((td->td_imagelength+rowsperstrip-1)/rowsperstrip);
-+	stripsperplane= TIFFhowmany_32_maxuint_compat(td->td_imagelength, rowsperstrip);
- 	stripinplane=(strip%stripsperplane);
- 	plane=(uint16)(strip/stripsperplane);
- 	rows=td->td_imagelength-stripinplane*rowsperstrip;
-Index: tiff-4.0.7/libtiff/tiffiop.h
-===================================================================
---- tiff-4.0.7.orig/libtiff/tiffiop.h
-+++ tiff-4.0.7/libtiff/tiffiop.h
-@@ -250,6 +250,10 @@ struct tiff {
- #define TIFFhowmany_32(x, y) (((uint32)x < (0xffffffff - (uint32)(y-1))) ? \
- 			   ((((uint32)(x))+(((uint32)(y))-1))/((uint32)(y))) : \
- 			   0U)
-+/* Variant of TIFFhowmany_32() that doesn't return 0 if x close to MAXUINT. */
-+/* Caution: TIFFhowmany_32_maxuint_compat(x,y)*y might overflow */
-+#define TIFFhowmany_32_maxuint_compat(x, y) \
-+			   (((uint32)(x) / (uint32)(y)) + ((((uint32)(x) % (uint32)(y)) != 0) ? 1 : 0))
- #define TIFFhowmany8_32(x) (((x)&0x07)?((uint32)(x)>>3)+1:(uint32)(x)>>3)
- #define TIFFroundup_32(x, y) (TIFFhowmany_32(x,y)*(y))
- #define TIFFhowmany_64(x, y) ((((uint64)(x))+(((uint64)(y))-1))/((uint64)(y)))
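
The TIFFhowmany_32_maxuint_compat() macro in the patch being dropped above illustrates a generally useful pattern: ceiling division written as quotient plus a remainder test, which cannot wrap around the way (x + y - 1) / y does when x is close to UINT32_MAX. A standalone sketch of the same idea:

    #include <stdint.h>

    /* Sketch: overflow-safe ceiling division. The usual (x + y - 1) / y form
     * wraps when x is near UINT32_MAX; quotient-plus-remainder does not.
     * The caller must still guarantee y != 0. */
    uint32_t ceil_div_u32(uint32_t x, uint32_t y)
    {
        return x / y + ((x % y) != 0u);
    }
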
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10267.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10267.patch
deleted file mode 100644
index f4c5791..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10267.patch
+++ /dev/null
@@ -1,70 +0,0 @@
-From f8203c7ab1dbd7b5c59158576bec7da90191f42f Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Sat, 3 Dec 2016 11:15:18 +0000
-Subject: [PATCH] * libtiff/tif_ojpeg.c: make OJPEGDecode() early exit in case
- of failure in OJPEGPreDecode(). This will avoid a divide by zero, and
- potential other issues. Reported by Agostino Sarubbo. Fixes
- http://bugzilla.maptools.org/show_bug.cgi?id=2611
-
-Upstream-Status: Backport
-
-CVE: CVE-2016-10267
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
----
- ChangeLog           | 7 +++++++
- libtiff/tif_ojpeg.c | 8 ++++++++
- 2 files changed, 15 insertions(+)
-
-diff --git a/ChangeLog b/ChangeLog
-index 7339c1a..66fbcdc 100644
---- a/ChangeLog
-+++ b/ChangeLog
-@@ -1,3 +1,10 @@
-+2016-12-03 Even Rouault <even.rouault at spatialys.com>
-+ 
-+	* libtiff/tif_ojpeg.c: make OJPEGDecode() early exit in case of failure in
-+	OJPEGPreDecode(). This will avoid a divide by zero, and potential other issues.
-+	Reported by Agostino Sarubbo.
-+	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2611
-+
- 2016-12-02 Even Rouault <even.rouault at spatialys.com>
- 
- 	* libtiff/tif_read.c, libtiff/tiffiop.h: fix uint32 overflow in
-diff --git a/libtiff/tif_ojpeg.c b/libtiff/tif_ojpeg.c
-index 1ccc3f9..f19e8fd 100644
---- a/libtiff/tif_ojpeg.c
-+++ b/libtiff/tif_ojpeg.c
-@@ -244,6 +244,7 @@ typedef enum {
- 
- typedef struct {
- 	TIFF* tif;
-+        int decoder_ok;
- 	#ifndef LIBJPEG_ENCAP_EXTERNAL
- 	JMP_BUF exit_jmpbuf;
- 	#endif
-@@ -722,6 +723,7 @@ OJPEGPreDecode(TIFF* tif, uint16 s)
- 		}
- 		sp->write_curstrile++;
- 	}
-+	sp->decoder_ok = 1;
- 	return(1);
- }
- 
-@@ -784,8 +786,14 @@ OJPEGPreDecodeSkipScanlines(TIFF* tif)
- static int
- OJPEGDecode(TIFF* tif, uint8* buf, tmsize_t cc, uint16 s)
- {
-+        static const char module[]="OJPEGDecode";
- 	OJPEGState* sp=(OJPEGState*)tif->tif_data;
- 	(void)s;
-+        if( !sp->decoder_ok )
-+        {
-+            TIFFErrorExt(tif->tif_clientdata,module,"Cannot decode: decoder not correctly initialized");
-+            return 0;
-+        }
- 	if (sp->libjpeg_jpeg_query_style==0)
- 	{
- 		if (OJPEGDecodeRaw(tif,buf,cc)==0)
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10268.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10268.patch
deleted file mode 100644
index 03b982a..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10268.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-From 5397a417e61258c69209904e652a1f409ec3b9df Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Fri, 2 Dec 2016 22:13:32 +0000
-Subject: [PATCH] * tools/tiffcp.c: avoid uint32 underflow in cpDecodedStrips
- that can cause various issues, such as buffer overflows in the library.
- Reported by Agostino Sarubbo. Fixes
- http://bugzilla.maptools.org/show_bug.cgi?id=2598
-
-Upstream-Status: Backport
-CVE: CVE-2016-10268
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
-
----
- ChangeLog      | 7 +++++++
- tools/tiffcp.c | 2 +-
- 2 files changed, 8 insertions(+), 1 deletion(-)
-
-Index: tiff-4.0.7/tools/tiffcp.c
-===================================================================
---- tiff-4.0.7.orig/tools/tiffcp.c
-+++ tiff-4.0.7/tools/tiffcp.c
-@@ -985,7 +985,7 @@ DECLAREcpFunc(cpDecodedStrips)
- 		tstrip_t s, ns = TIFFNumberOfStrips(in);
- 		uint32 row = 0;
- 		_TIFFmemset(buf, 0, stripsize);
--		for (s = 0; s < ns; s++) {
-+		for (s = 0; s < ns && row < imagelength; s++) {
- 			tsize_t cc = (row + rowsperstrip > imagelength) ?
- 			    TIFFVStripSize(in, imagelength - row) : stripsize;
- 			if (TIFFReadEncodedStrip(in, s, buf, cc) < 0
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10269.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10269.patch
deleted file mode 100644
index d9f4a15..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10269.patch
+++ /dev/null
@@ -1,131 +0,0 @@
-From 10f72dd232849d0142a0688bcc9aa71025f120a3 Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Fri, 2 Dec 2016 23:05:51 +0000
-Subject: [PATCH 2/4] * libtiff/tif_pixarlog.c, libtiff/tif_luv.c: fix
- heap-based buffer overflow on generation of PixarLog / LUV compressed files,
- with ColorMap, TransferFunction attached and nasty plays with bitspersample.
- The fix for LUV has not been tested, but suffers from the same kind of issue
- of PixarLog. Reported by Agostino Sarubbo. Fixes
- http://bugzilla.maptools.org/show_bug.cgi?id=2604
-
-Upstream-Status: Backport
-
-CVE: CVE-2016-10269
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
----
- ChangeLog              | 10 ++++++++++
- libtiff/tif_luv.c      | 18 ++++++++++++++----
- libtiff/tif_pixarlog.c | 17 +++++++++++++++--
- 3 files changed, 39 insertions(+), 6 deletions(-)
-
-diff --git a/ChangeLog b/ChangeLog
-index 2e913b6..0a2c2a7 100644
---- a/ChangeLog
-+++ b/ChangeLog
-@@ -1,4 +1,14 @@
- 2016-12-03 Even Rouault <even.rouault at spatialys.com>
-+
-+	* libtiff/tif_pixarlog.c, libtiff/tif_luv.c: fix heap-based buffer
-+	overflow on generation of PixarLog / LUV compressed files, with
-+	ColorMap, TransferFunction attached and nasty plays with bitspersample.
-+	The fix for LUV has not been tested, but suffers from the same kind
-+	of issue of PixarLog.
-+	Reported by Agostino Sarubbo.
-+	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2604
-+
-+2016-12-03 Even Rouault <even.rouault at spatialys.com>
-  
- 	* libtiff/tif_ojpeg.c: make OJPEGDecode() early exit in case of failure in
- 	OJPEGPreDecode(). This will avoid a divide by zero, and potential other issues.
-diff --git a/libtiff/tif_luv.c b/libtiff/tif_luv.c
-index f68a9b1..e6783db 100644
---- a/libtiff/tif_luv.c
-+++ b/libtiff/tif_luv.c
-@@ -158,6 +158,7 @@
- typedef struct logLuvState LogLuvState;
- 
- struct logLuvState {
-+        int                     encoder_state;  /* 1 if encoder correctly initialized */
- 	int                     user_datafmt;   /* user data format */
- 	int                     encode_meth;    /* encoding method */
- 	int                     pixel_size;     /* bytes per pixel */
-@@ -1552,6 +1553,7 @@ LogLuvSetupEncode(TIFF* tif)
- 		    td->td_photometric, "must be either LogLUV or LogL");
- 		break;
- 	}
-+	sp->encoder_state = 1;
- 	return (1);
- notsupported:
- 	TIFFErrorExt(tif->tif_clientdata, module,
-@@ -1563,19 +1565,27 @@ notsupported:
- static void
- LogLuvClose(TIFF* tif)
- {
-+        LogLuvState* sp = (LogLuvState*) tif->tif_data;
- 	TIFFDirectory *td = &tif->tif_dir;
- 
-+	assert(sp != 0);
- 	/*
- 	 * For consistency, we always want to write out the same
- 	 * bitspersample and sampleformat for our TIFF file,
- 	 * regardless of the data format being used by the application.
- 	 * Since this routine is called after tags have been set but
- 	 * before they have been recorded in the file, we reset them here.
-+         * Note: this is really a nasty approach. See PixarLogClose
- 	 */
--	td->td_samplesperpixel =
--	    (td->td_photometric == PHOTOMETRIC_LOGL) ? 1 : 3;
--	td->td_bitspersample = 16;
--	td->td_sampleformat = SAMPLEFORMAT_INT;
-+        if( sp->encoder_state )
-+        {
-+            /* See PixarLogClose. Might avoid issues with tags whose size depends
-+             * on those below, but not completely sure this is enough. */
-+            td->td_samplesperpixel =
-+                (td->td_photometric == PHOTOMETRIC_LOGL) ? 1 : 3;
-+            td->td_bitspersample = 16;
-+            td->td_sampleformat = SAMPLEFORMAT_INT;
-+        }
- }
- 
- static void
-diff --git a/libtiff/tif_pixarlog.c b/libtiff/tif_pixarlog.c
-index d1246c3..aa99bc9 100644
---- a/libtiff/tif_pixarlog.c
-+++ b/libtiff/tif_pixarlog.c
-@@ -1233,8 +1233,10 @@ PixarLogPostEncode(TIFF* tif)
- static void
- PixarLogClose(TIFF* tif)
- {
-+        PixarLogState* sp = (PixarLogState*) tif->tif_data;
- 	TIFFDirectory *td = &tif->tif_dir;
- 
-+	assert(sp != 0);
- 	/* In a really sneaky (and really incorrect, and untruthful, and
- 	 * troublesome, and error-prone) maneuver that completely goes against
- 	 * the spirit of TIFF, and breaks TIFF, on close, we covertly
-@@ -1243,8 +1245,19 @@ PixarLogClose(TIFF* tif)
- 	 * readers that don't know about PixarLog, or how to set
- 	 * the PIXARLOGDATFMT pseudo-tag.
- 	 */
--	td->td_bitspersample = 8;
--	td->td_sampleformat = SAMPLEFORMAT_UINT;
-+
-+        if (sp->state&PLSTATE_INIT) {
-+            /* We test the state to avoid an issue such as in
-+             * http://bugzilla.maptools.org/show_bug.cgi?id=2604
-+             * What appends in that case is that the bitspersample is 1 and
-+             * a TransferFunction is set. The size of the TransferFunction
-+             * depends on 1<<bitspersample. So if we increase it, an access
-+             * out of the buffer will happen at directory flushing.
-+             * Another option would be to clear those targs. 
-+             */
-+            td->td_bitspersample = 8;
-+            td->td_sampleformat = SAMPLEFORMAT_UINT;
-+        }
- }
- 
- static void
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10270.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10270.patch
deleted file mode 100644
index 43ad6ed..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10270.patch
+++ /dev/null
@@ -1,134 +0,0 @@
-From 6e7042c61e300cf9971c645c79d05de974b24308 Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Sat, 3 Dec 2016 11:02:15 +0000
-Subject: [PATCH 3/4] * libtiff/tif_dirread.c: modify
- ChopUpSingleUncompressedStrip() to instanciate compute ntrips as
- TIFFhowmany_32(td->td_imagelength, rowsperstrip), instead of a logic based on
- the total size of data. Which is faulty is the total size of data is not
- sufficient to fill the whole image, and thus results in reading outside of
- the StripByCounts/StripOffsets arrays when using TIFFReadScanline(). Reported
- by Agostino Sarubbo. Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2608.
-
-* libtiff/tif_strip.c: revert the change in TIFFNumberOfStrips() done
-for http://bugzilla.maptools.org/show_bug.cgi?id=2587 / CVE-2016-9273 since
-the above change is a better fix that makes it unnecessary.
-
-Upstream-Status: Backport
-
-CVE: CVE-2016-10270
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
----
- ChangeLog             | 15 +++++++++++++++
- libtiff/tif_dirread.c | 22 ++++++++++------------
- libtiff/tif_strip.c   |  9 ---------
- 3 files changed, 25 insertions(+), 21 deletions(-)
-
-diff --git a/ChangeLog b/ChangeLog
-index 0a2c2a7..6e30e41 100644
---- a/ChangeLog
-+++ b/ChangeLog
-@@ -1,5 +1,20 @@
- 2016-12-03 Even Rouault <even.rouault at spatialys.com>
- 
-+	* libtiff/tif_dirread.c: modify ChopUpSingleUncompressedStrip() to
-+	instanciate compute ntrips as TIFFhowmany_32(td->td_imagelength, rowsperstrip),
-+	instead of a logic based on the total size of data. Which is faulty is
-+	the total size of data is not sufficient to fill the whole image, and thus
-+	results in reading outside of the StripByCounts/StripOffsets arrays when
-+	using TIFFReadScanline().
-+	Reported by Agostino Sarubbo.
-+	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2608.
-+
-+	* libtiff/tif_strip.c: revert the change in TIFFNumberOfStrips() done
-+	for http://bugzilla.maptools.org/show_bug.cgi?id=2587 / CVE-2016-9273 since
-+	the above change is a better fix that makes it unnecessary.
-+
-+2016-12-03 Even Rouault <even.rouault at spatialys.com>
-+
- 	* libtiff/tif_pixarlog.c, libtiff/tif_luv.c: fix heap-based buffer
- 	overflow on generation of PixarLog / LUV compressed files, with
- 	ColorMap, TransferFunction attached and nasty plays with bitspersample.
-diff --git a/libtiff/tif_dirread.c b/libtiff/tif_dirread.c
-index 3eec79c..570d0c3 100644
---- a/libtiff/tif_dirread.c
-+++ b/libtiff/tif_dirread.c
-@@ -5502,8 +5502,7 @@ ChopUpSingleUncompressedStrip(TIFF* tif)
- 	uint64 rowblockbytes;
- 	uint64 stripbytes;
- 	uint32 strip;
--	uint64 nstrips64;
--	uint32 nstrips32;
-+	uint32 nstrips;
- 	uint32 rowsperstrip;
- 	uint64* newcounts;
- 	uint64* newoffsets;
-@@ -5534,18 +5533,17 @@ ChopUpSingleUncompressedStrip(TIFF* tif)
- 	    return;
- 
- 	/*
--	 * never increase the number of strips in an image
-+	 * never increase the number of rows per strip
- 	 */
- 	if (rowsperstrip >= td->td_rowsperstrip)
- 		return;
--	nstrips64 = TIFFhowmany_64(bytecount, stripbytes);
--	if ((nstrips64==0)||(nstrips64>0xFFFFFFFF)) /* something is wonky, do nothing. */
--	    return;
--	nstrips32 = (uint32)nstrips64;
-+        nstrips = TIFFhowmany_32(td->td_imagelength, rowsperstrip);
-+        if( nstrips == 0 )
-+            return;
- 
--	newcounts = (uint64*) _TIFFCheckMalloc(tif, nstrips32, sizeof (uint64),
-+	newcounts = (uint64*) _TIFFCheckMalloc(tif, nstrips, sizeof (uint64),
- 				"for chopped \"StripByteCounts\" array");
--	newoffsets = (uint64*) _TIFFCheckMalloc(tif, nstrips32, sizeof (uint64),
-+	newoffsets = (uint64*) _TIFFCheckMalloc(tif, nstrips, sizeof (uint64),
- 				"for chopped \"StripOffsets\" array");
- 	if (newcounts == NULL || newoffsets == NULL) {
- 		/*
-@@ -5562,18 +5560,18 @@ ChopUpSingleUncompressedStrip(TIFF* tif)
- 	 * Fill the strip information arrays with new bytecounts and offsets
- 	 * that reflect the broken-up format.
- 	 */
--	for (strip = 0; strip < nstrips32; strip++) {
-+	for (strip = 0; strip < nstrips; strip++) {
- 		if (stripbytes > bytecount)
- 			stripbytes = bytecount;
- 		newcounts[strip] = stripbytes;
--		newoffsets[strip] = offset;
-+		newoffsets[strip] = stripbytes ? offset : 0;
- 		offset += stripbytes;
- 		bytecount -= stripbytes;
- 	}
- 	/*
- 	 * Replace old single strip info with multi-strip info.
- 	 */
--	td->td_stripsperimage = td->td_nstrips = nstrips32;
-+	td->td_stripsperimage = td->td_nstrips = nstrips;
- 	TIFFSetField(tif, TIFFTAG_ROWSPERSTRIP, rowsperstrip);
- 
- 	_TIFFfree(td->td_stripbytecount);
-diff --git a/libtiff/tif_strip.c b/libtiff/tif_strip.c
-index 4c46ecf..1676e47 100644
---- a/libtiff/tif_strip.c
-+++ b/libtiff/tif_strip.c
-@@ -63,15 +63,6 @@ TIFFNumberOfStrips(TIFF* tif)
- 	TIFFDirectory *td = &tif->tif_dir;
- 	uint32 nstrips;
- 
--    /* If the value was already computed and store in td_nstrips, then return it,
--       since ChopUpSingleUncompressedStrip might have altered and resized the
--       since the td_stripbytecount and td_stripoffset arrays to the new value
--       after the initial affectation of td_nstrips = TIFFNumberOfStrips() in
--       tif_dirread.c ~line 3612.
--       See http://bugzilla.maptools.org/show_bug.cgi?id=2587 */
--    if( td->td_nstrips )
--        return td->td_nstrips;
--
- 	nstrips = (td->td_rowsperstrip == (uint32) -1 ? 1 :
- 	     TIFFhowmany_32(td->td_imagelength, td->td_rowsperstrip));
- 	if (td->td_planarconfig == PLANARCONFIG_SEPARATE)
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10271.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10271.patch
deleted file mode 100644
index 4fe5bcd..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2016-10271.patch
+++ /dev/null
@@ -1,30 +0,0 @@
-From 9657bbe3cdce4aaa90e07d50c1c70ae52da0ba6a Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Sat, 3 Dec 2016 11:35:56 +0000
-Subject: [PATCH] * tools/tiffcrop.c: fix readContigStripsIntoBuffer() in -i
- (ignore) mode so that the output buffer is correctly incremented to avoid
- write outside bounds. Reported by Agostino Sarubbo. Fixes
- http://bugzilla.maptools.org/show_bug.cgi?id=2620
-
-Upstream-Status: Backport
-CVE: CVE-2016-10271
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
-
----
- ChangeLog        | 7 +++++++
- tools/tiffcrop.c | 2 +-
- 2 files changed, 8 insertions(+), 1 deletion(-)
-
-Index: tiff-4.0.7/tools/tiffcrop.c
-===================================================================
---- tiff-4.0.7.orig/tools/tiffcrop.c
-+++ tiff-4.0.7/tools/tiffcrop.c
-@@ -3698,7 +3698,7 @@ static int readContigStripsIntoBuffer (T
-                                   (unsigned long) strip, (unsigned long)rows);
-                         return 0;
-                 }
--                bufp += bytes_read;
-+                bufp += stripsize;
-         }
- 
-         return 1;
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-10688.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-10688.patch
index ed9c0f5..b0db969 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-10688.patch
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-10688.patch
@@ -18,10 +18,10 @@
  libtiff/tif_dirwrite.c | 20 ++++++++++++++++----
  2 files changed, 24 insertions(+), 4 deletions(-)
 
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog
-+++ tiff-4.0.7/ChangeLog
+diff --git a/ChangeLog b/ChangeLog
+index 0240f0b..42eaeb7 100644
+--- a/ChangeLog
++++ b/ChangeLog
 @@ -1,3 +1,11 @@
 +2017-06-30  Even Rouault <even.rouault at spatialys.com>
 +
@@ -34,11 +34,11 @@
  2017-06-26  Even Rouault <even.rouault at spatialys.com>
  
  	* libtiff/tif_jbig.c: fix memory leak in error code path of JBIGDecode()
-Index: tiff-4.0.7/libtiff/tif_dirwrite.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_dirwrite.c
-+++ tiff-4.0.7/libtiff/tif_dirwrite.c
-@@ -2047,7 +2047,10 @@ TIFFWriteDirectoryTagCheckedLong8(TIFF*
+diff --git a/libtiff/tif_dirwrite.c b/libtiff/tif_dirwrite.c
+index 2967da5..8d6686b 100644
+--- a/libtiff/tif_dirwrite.c
++++ b/libtiff/tif_dirwrite.c
+@@ -2111,7 +2111,10 @@ TIFFWriteDirectoryTagCheckedLong8(TIFF* tif, uint32* ndir, TIFFDirEntry* dir, ui
  {
  	uint64 m;
  	assert(sizeof(uint64)==8);
@@ -50,7 +50,7 @@
  	m=value;
  	if (tif->tif_flags&TIFF_SWAB)
  		TIFFSwabLong8(&m);
-@@ -2060,7 +2063,10 @@ TIFFWriteDirectoryTagCheckedLong8Array(T
+@@ -2124,7 +2127,10 @@ TIFFWriteDirectoryTagCheckedLong8Array(TIFF* tif, uint32* ndir, TIFFDirEntry* di
  {
  	assert(count<0x20000000);
  	assert(sizeof(uint64)==8);
@@ -62,7 +62,7 @@
  	if (tif->tif_flags&TIFF_SWAB)
  		TIFFSwabArrayOfLong8(value,count);
  	return(TIFFWriteDirectoryTagData(tif,ndir,dir,tag,TIFF_LONG8,count,count*8,value));
-@@ -2072,7 +2078,10 @@ TIFFWriteDirectoryTagCheckedSlong8(TIFF*
+@@ -2136,7 +2142,10 @@ TIFFWriteDirectoryTagCheckedSlong8(TIFF* tif, uint32* ndir, TIFFDirEntry* dir, u
  {
  	int64 m;
  	assert(sizeof(int64)==8);
@@ -74,7 +74,7 @@
  	m=value;
  	if (tif->tif_flags&TIFF_SWAB)
  		TIFFSwabLong8((uint64*)(&m));
-@@ -2085,7 +2094,10 @@ TIFFWriteDirectoryTagCheckedSlong8Array(
+@@ -2149,7 +2158,10 @@ TIFFWriteDirectoryTagCheckedSlong8Array(TIFF* tif, uint32* ndir, TIFFDirEntry* d
  {
  	assert(count<0x20000000);
  	assert(sizeof(int64)==8);
@@ -86,3 +86,6 @@
  	if (tif->tif_flags&TIFF_SWAB)
  		TIFFSwabArrayOfLong8((uint64*)value,count);
  	return(TIFFWriteDirectoryTagData(tif,ndir,dir,tag,TIFF_SLONG8,count,count*8,value));
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-13726.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-13726.patch
new file mode 100644
index 0000000..c60ffa6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-13726.patch
@@ -0,0 +1,54 @@
+From 5317ce215936ce611846557bb104b49d3b4c8345 Mon Sep 17 00:00:00 2001
+From: Even Rouault <even.rouault@spatialys.com>
+Date: Wed, 23 Aug 2017 13:21:41 +0000
+Subject: [PATCH] * libtiff/tif_dirwrite.c: replace assertion related to not
+ finding the SubIFD tag by runtime check. Fixes
+ http://bugzilla.maptools.org/show_bug.cgi?id=2727 Reported by team OWL337
+
+Upstream-Status: Backport
+[https://github.com/vadz/libtiff/commit/f91ca83a21a6a583050e5a5755ce1441b2bf1d7e]
+
+CVE: CVE-2017-13726
+
+Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
+---
+ ChangeLog              | 7 +++++++
+ libtiff/tif_dirwrite.c | 7 ++++++-
+ 2 files changed, 13 insertions(+), 1 deletion(-)
+
+diff --git a/ChangeLog b/ChangeLog
+index 6980da8..3e299d9 100644
+--- a/ChangeLog
++++ b/ChangeLog
+@@ -1,3 +1,10 @@
++2017-08-23  Even Rouault <even.rouault at spatialys.com>
++
++	* libtiff/tif_dirwrite.c: replace assertion related to not finding the
++	SubIFD tag by runtime check.
++	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2727
++	Reported by team OWL337
++
+ 2017-07-15  Even Rouault <even.rouault at spatialys.com>
+ 
+ 	* tools/tiff2pdf.c: prevent heap buffer overflow write in "Raw"
+diff --git a/libtiff/tif_dirwrite.c b/libtiff/tif_dirwrite.c
+index 8d6686b..14090ae 100644
+--- a/libtiff/tif_dirwrite.c
++++ b/libtiff/tif_dirwrite.c
+@@ -821,7 +821,12 @@ TIFFWriteDirectorySec(TIFF* tif, int isimage, int imagedone, uint64* pdiroff)
+ 			TIFFDirEntry* nb;
+ 			for (na=0, nb=dir; ; na++, nb++)
+ 			{
+-				assert(na<ndir);
++				if( na == ndir )
++                                {
++                                    TIFFErrorExt(tif->tif_clientdata,module,
++                                                 "Cannot find SubIFD tag");
++                                    goto bad;
++                                }
+ 				if (nb->tdir_tag==TIFFTAG_SUBIFD)
+ 					break;
+ 			}
+-- 
+2.7.4
+
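
The new CVE-2017-13726 patch (and CVE-2017-13727 below) follow one pattern: an assert() that attacker-controlled input can trip is replaced by a runtime check, so a hostile file fails cleanly even in release builds where NDEBUG compiles the assertion away. A hedged sketch of the same transformation, using a made-up tag search rather than libtiff's directory structures:

    #include <stdio.h>

    /* Sketch: search a directory for a tag and report failure instead of
     * asserting. Returns the index of the tag, or -1 if it is absent. */
    int find_tag(const unsigned short *tags, int ndir, unsigned short wanted)
    {
        for (int i = 0; i < ndir; i++)
            if (tags[i] == wanted)
                return i;
        /* previously assert(i < ndir) would have fired; now it is an error */
        fprintf(stderr, "Cannot find tag %u\n", (unsigned)wanted);
        return -1;
    }
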
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-13727.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-13727.patch
new file mode 100644
index 0000000..e228c2f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-13727.patch
@@ -0,0 +1,65 @@
+From a5e8245cc67646f7b448b4ca29258eaac418102c Mon Sep 17 00:00:00 2001
+From: Even Rouault <even.rouault@spatialys.com>
+Date: Wed, 23 Aug 2017 13:33:42 +0000
+Subject: [PATCH] * libtiff/tif_dirwrite.c: replace assertion to tag value not
+ fitting on uint32 when selecting the value of SubIFD tag by runtime check (in
+ TIFFWriteDirectoryTagSubifd()). Fixes
+ http://bugzilla.maptools.org/show_bug.cgi?id=2728 Reported by team OWL337
+
+SubIFD tag by runtime check (in TIFFWriteDirectorySec())
+
+Upstream-Status: Backport
+[https://github.com/vadz/libtiff/commit/b6af137bf9ef852f1a48a50a5afb88f9e9da01cc]
+
+CVE: CVE-2017-13727
+
+Signed-off-by: Yi Zhao <yi.zhao@windriver.com>
+---
+ ChangeLog              | 10 +++++++++-
+ libtiff/tif_dirwrite.c |  9 ++++++++-
+ 2 files changed, 17 insertions(+), 2 deletions(-)
+
+diff --git a/ChangeLog b/ChangeLog
+index 3e299d9..8f5efe9 100644
+--- a/ChangeLog
++++ b/ChangeLog
+@@ -1,7 +1,15 @@
+ 2017-08-23  Even Rouault <even.rouault at spatialys.com>
+ 
++	* libtiff/tif_dirwrite.c: replace assertion to tag value not fitting
++	on uint32 when selecting the value of SubIFD tag by runtime check
++	(in TIFFWriteDirectoryTagSubifd()).
++	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2728
++	Reported by team OWL337
++
++2017-08-23  Even Rouault <even.rouault at spatialys.com>
++
+ 	* libtiff/tif_dirwrite.c: replace assertion related to not finding the
+-	SubIFD tag by runtime check.
++	SubIFD tag by runtime check (in TIFFWriteDirectorySec())
+ 	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2727
+ 	Reported by team OWL337
+ 
+diff --git a/libtiff/tif_dirwrite.c b/libtiff/tif_dirwrite.c
+index 14090ae..f0a4baa 100644
+--- a/libtiff/tif_dirwrite.c
++++ b/libtiff/tif_dirwrite.c
+@@ -1949,7 +1949,14 @@ TIFFWriteDirectoryTagSubifd(TIFF* tif, uint32* ndir, TIFFDirEntry* dir)
+ 		for (p=0; p < tif->tif_dir.td_nsubifd; p++)
+ 		{
+                         assert(pa != 0);
+-			assert(*pa <= 0xFFFFFFFFUL);
++
++                        /* Could happen if an classicTIFF has a SubIFD of type LONG8 (which is illegal) */
++                        if( *pa > 0xFFFFFFFFUL)
++                        {
++                            TIFFErrorExt(tif->tif_clientdata,module,"Illegal value for SubIFD tag");
++                            _TIFFfree(o);
++                            return(0);
++                        }
+ 			*pb++=(uint32)(*pa++);
+ 		}
+ 		n=TIFFWriteDirectoryTagCheckedIfdArray(tif,ndir,dir,TIFFTAG_SUBIFD,tif->tif_dir.td_nsubifd,o);
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7592.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7592.patch
deleted file mode 100644
index 5b80445..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7592.patch
+++ /dev/null
@@ -1,40 +0,0 @@
-From 48780b4fcc425cddc4ef8ffdf536f96a0d1b313b Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Wed, 11 Jan 2017 16:38:26 +0000
-Subject: [PATCH] * libtiff/tif_getimage.c: add explicit uint32 cast in putagreytile to
-avoid UndefinedBehaviorSanitizer warning.
-Patch by Nicolas Pena.
-Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2658
-
-Upstream-Status: Backport
-CVE: CVE-2017-7592
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
-
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog	2017-04-24 14:25:10.143926025 +0530
-+++ tiff-4.0.7/ChangeLog	2017-04-24 15:20:21.291839230 +0530
-@@ -1,3 +1,10 @@
-+2017-01-11 Even Rouault <even.rouault at spatialys.com>
-+
-+	* libtiff/tif_getimage.c: add explicit uint32 cast in putagreytile to
-+	avoid UndefinedBehaviorSanitizer warning.
-+	Patch by Nicolas Pena.
-+	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2658
-+
- 2016-12-03 Even Rouault <even.rouault at spatialys.com>
- 
- 	* libtiff/tif_dirread.c: modify ChopUpSingleUncompressedStrip() to
-Index: tiff-4.0.7/libtiff/tif_getimage.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_getimage.c	2016-11-18 08:17:45.000000000 +0530
-+++ tiff-4.0.7/libtiff/tif_getimage.c	2017-04-24 15:17:46.671843283 +0530
-@@ -1305,7 +1305,7 @@
-     while (h-- > 0) {
- 	for (x = w; x-- > 0;)
-         {
--            *cp++ = BWmap[*pp][0] & (*(pp+1) << 24 | ~A1);
-+            *cp++ = BWmap[*pp][0] & ((uint32)*(pp+1) << 24 | ~A1);
-             pp += samplesperpixel;
-         }
- 	cp += toskew;
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7593.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7593.patch
deleted file mode 100644
index 380dfcb..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7593.patch
+++ /dev/null
@@ -1,98 +0,0 @@
-From d60332057b9575ada4f264489582b13e30137be1 Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Wed, 11 Jan 2017 19:02:49 +0000
-Subject: [PATCH] * libtiff/tiffiop.h, tif_unix.c, tif_win32.c, tif_vms.c: add
- _TIFFcalloc()
-
-* libtiff/tif_read.c: TIFFReadBufferSetup(): use _TIFFcalloc() to zero
-initialize tif_rawdata.
-Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2651
-
-Upstream-Status: Backport
-
-CVE: CVE-2017-7593
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog	2017-04-25 19:03:20.584155452 +0530
-+++ tiff-4.0.7/ChangeLog	2017-04-26 12:09:41.986866896 +0530
-@@ -44,6 +44,14 @@
- 
- 2017-01-11 Even Rouault <even.rouault at spatialys.com>
- 
-+	* libtiff/tiffiop.h, tif_unix.c, tif_win32.c : add _TIFFcalloc()
-+
-+	* libtiff/tif_read.c: TIFFReadBufferSetup(): use _TIFFcalloc() to zero
-+	initialize tif_rawdata.
-+	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2651
-+
-+2017-01-11 Even Rouault <even.rouault at spatialys.com>
-+
- 	* libtiff/tif_getimage.c: add explicit uint32 cast in putagreytile to
- 	avoid UndefinedBehaviorSanitizer warning.
- 	Patch by Nicolas Pena.
-Index: tiff-4.0.7/libtiff/tif_read.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_read.c	2017-04-25 19:03:20.584155452 +0530
-+++ tiff-4.0.7/libtiff/tif_read.c	2017-04-26 12:11:42.814863729 +0530
-@@ -986,7 +986,9 @@
- 				 "Invalid buffer size");
- 		    return (0);
- 		}
--		tif->tif_rawdata = (uint8*) _TIFFmalloc(tif->tif_rawdatasize);
-+                /* Initialize to zero to avoid uninitialized buffers in case of */
-+                /* short reads (http://bugzilla.maptools.org/show_bug.cgi?id=2651) */
-+                tif->tif_rawdata = (uint8*) _TIFFcalloc(1, tif->tif_rawdatasize);
- 		tif->tif_flags |= TIFF_MYBUFFER;
- 	}
- 	if (tif->tif_rawdata == NULL) {
-Index: tiff-4.0.7/libtiff/tif_unix.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_unix.c	2015-08-29 03:46:22.707817041 +0530
-+++ tiff-4.0.7/libtiff/tif_unix.c	2017-04-26 12:13:07.442861510 +0530
-@@ -316,6 +316,14 @@
- 	return (malloc((size_t) s));
- }
- 
-+void* _TIFFcalloc(tmsize_t nmemb, tmsize_t siz)
-+{
-+    if( nmemb == 0 || siz == 0 )
-+        return ((void *) NULL);
-+
-+    return calloc((size_t) nmemb, (size_t)siz);
-+}
-+
- void
- _TIFFfree(void* p)
- {
-Index: tiff-4.0.7/libtiff/tif_win32.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_win32.c	2015-08-29 03:46:22.730570169 +0530
-+++ tiff-4.0.7/libtiff/tif_win32.c	2017-04-26 12:14:12.918859794 +0530
-@@ -360,6 +360,14 @@
- 	return (malloc((size_t) s));
- }
- 
-+void* _TIFFcalloc(tmsize_t nmemb, tmsize_t siz)
-+{
-+    if( nmemb == 0 || siz == 0 )
-+        return ((void *) NULL);
-+
-+    return calloc((size_t) nmemb, (size_t)siz);
-+}
-+
- void
- _TIFFfree(void* p)
- {
-Index: tiff-4.0.7/libtiff/tiffio.h
-===================================================================
---- tiff-4.0.7.orig/libtiff/tiffio.h	2016-01-24 21:09:51.894442042 +0530
-+++ tiff-4.0.7/libtiff/tiffio.h	2017-04-26 12:15:55.034857117 +0530
-@@ -293,6 +293,7 @@
-  */
- 
- extern void* _TIFFmalloc(tmsize_t s);
-+extern void* _TIFFcalloc(tmsize_t nmemb, tmsize_t siz);
- extern void* _TIFFrealloc(void* p, tmsize_t s);
- extern void _TIFFmemset(void* p, int v, tmsize_t c);
- extern void _TIFFmemcpy(void* d, const void* s, tmsize_t c);
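
The deleted CVE-2017-7593 backport added a _TIFFcalloc() wrapper so the raw strip buffer is zero-filled and a short read cannot leak uninitialized heap contents into decoded pixels. The core of that wrapper, sketched on its own:

    #include <stdlib.h>

    /* Sketch: zero-filled allocation. calloc() both zero-initializes the
     * memory and checks the nmemb * size multiplication for overflow. */
    void *alloc_zeroed(size_t nmemb, size_t size)
    {
        if (nmemb == 0 || size == 0)
            return NULL;    /* mirror the patch: treat empty requests as failure */
        return calloc(nmemb, size);
    }
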
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7594-p1.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7594-p1.patch
deleted file mode 100644
index 5c7e460..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7594-p1.patch
+++ /dev/null
@@ -1,43 +0,0 @@
-rom 8283e4d1b7e53340684d12932880cbcbaf23a8c1 Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Thu, 12 Jan 2017 17:43:25 +0000
-
-* libtiff/tif_ojpeg.c: fix leak in OJPEGReadHeaderInfoSecTablesAcTable
-  when read fails.
-  Patch by Nicolas Pena.
-  Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2659
-
-Upstream-Status: Backport
-
-CVE: CVE-2017-7594 #patch1
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog	2017-04-24 16:13:15.000000000 +0530
-+++ tiff-4.0.7/ChangeLog	2017-04-24 16:50:26.465897646 +0530
-@@ -1,3 +1,10 @@
-+2017-01-12 Even Rouault <even.rouault at spatialys.com>
-+
-+	* libtiff/tif_ojpeg.c: fix leak in OJPEGReadHeaderInfoSecTablesAcTable
-+	when read fails.
-+	Patch by Nicolás Peña.
-+	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2659
-+
- 2017-01-11 Even Rouault <even.rouault at spatialys.com>
- 
- 	* libtiff/tif_getimage.c: add explicit uint32 cast in putagreytile to
-Index: tiff-4.0.7/libtiff/tif_ojpeg.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_ojpeg.c	2017-04-24 16:02:29.817973051 +0530
-+++ tiff-4.0.7/libtiff/tif_ojpeg.c	2017-04-24 16:52:27.349894477 +0530
-@@ -1918,7 +1918,10 @@
- 				rb[sizeof(uint32)+5+n]=o[n];
- 			p=(uint32)TIFFReadFile(tif,&(rb[sizeof(uint32)+21]),q);
- 			if (p!=q)
-+                        {
-+                                _TIFFfree(rb);
- 				return(0);
-+                        }
- 			sp->actable[m]=rb;
- 			sp->sos_tda[m]=(sp->sos_tda[m]|m);
- 		}
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7594-p2.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7594-p2.patch
deleted file mode 100644
index 82a19c6..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7594-p2.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-From 2ea32f7372b65c24b2816f11c04bf59b5090d05b Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Thu, 12 Jan 2017 19:23:20 +0000
-Subject: [PATCH 2/2] * libtiff/tif_ojpeg.c: fix leak in
- OJPEGReadHeaderInfoSecTablesQTable, OJPEGReadHeaderInfoSecTablesDcTable and
- OJPEGReadHeaderInfoSecTablesAcTable
-
-Upstream-status: Backport
-
-CVE: CVE-2017-7594 #patch2
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog	2017-04-24 16:50:26.465897646 +0530
-+++ tiff-4.0.7/ChangeLog	2017-04-24 16:56:20.685888360 +0530
-@@ -1,6 +1,7 @@
- 2017-01-12 Even Rouault <even.rouault at spatialys.com>
- 
--	* libtiff/tif_ojpeg.c: fix leak in OJPEGReadHeaderInfoSecTablesAcTable
-+	* libtiff/tif_ojpeg.c: fix leak in OJPEGReadHeaderInfoSecTablesQTable,
-+	OJPEGReadHeaderInfoSecTablesDcTable and OJPEGReadHeaderInfoSecTablesAcTable
- 	when read fails.
- 	Patch by Nicolás Peña.
- 	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2659
-Index: tiff-4.0.7/libtiff/tif_ojpeg.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_ojpeg.c	2017-04-24 16:52:27.349894477 +0530
-+++ tiff-4.0.7/libtiff/tif_ojpeg.c	2017-04-24 16:59:20.001883660 +0530
-@@ -1790,7 +1790,10 @@
- 			TIFFSeekFile(tif,sp->qtable_offset[m],SEEK_SET); 
- 			p=(uint32)TIFFReadFile(tif,&ob[sizeof(uint32)+5],64);
- 			if (p!=64)
-+                        {
-+                                _TIFFfree(ob);
- 				return(0);
-+                        }
- 			sp->qtable[m]=ob;
- 			sp->sof_tq[m]=m;
- 		}
-@@ -1854,7 +1857,10 @@
- 				rb[sizeof(uint32)+5+n]=o[n];
- 			p=(uint32)TIFFReadFile(tif,&(rb[sizeof(uint32)+21]),q);
- 			if (p!=q)
-+                        {
-+                                _TIFFfree(rb);
- 				return(0);
-+                        }
- 			sp->dctable[m]=rb;
- 			sp->sos_tda[m]=(m<<4);
- 		}
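
The two CVE-2017-7594 backports being removed fix the same leak in several table readers: when TIFFReadFile() comes back short, the half-filled table buffer has to be freed before the early return. A generic sketch of that shape, with hypothetical names:

    #include <stdlib.h>
    #include <string.h>

    /* Sketch: copy a fixed-size table out of an input buffer, releasing the
     * allocation on a short read instead of leaking it on every bad file. */
    int read_table(const unsigned char *src, size_t available, size_t want,
                   unsigned char **out)
    {
        unsigned char *buf = malloc(want);
        if (buf == NULL)
            return 0;
        if (available < want)
        {
            free(buf);      /* the fix: no early return without a free */
            return 0;
        }
        memcpy(buf, src, want);
        *out = buf;
        return 1;
    }
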
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7595.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7595.patch
deleted file mode 100644
index 851a37f..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7595.patch
+++ /dev/null
@@ -1,48 +0,0 @@
-commit 618d490090bfd10e613ac574ecff31a293904b44
-Author: erouault <erouault>
-Date:   Wed Jan 11 12:15:01 2017 +0000
-
-    * libtiff/tif_jpeg.c: avoid integer division by zero
-      in JPEGSetupEncode() when horizontal or vertical sampling is set to 0.
-      Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2653
-
-Upstream-Status: Backport
-
-CVE: CVE-2017-7595
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
-
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog	2017-04-24 17:31:40.013832807 +0530
-+++ tiff-4.0.7/ChangeLog	2017-04-24 18:03:34.769782616 +0530
-@@ -8,6 +8,12 @@
- 
- 2017-01-11 Even Rouault <even.rouault at spatialys.com>
- 
-+	* libtiff/tif_jpeg.c: avoid integer division by zero in
-+	JPEGSetupEncode() when horizontal or vertical sampling is set to 0.
-+	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2653
-+
-+2017-01-11 Even Rouault <even.rouault at spatialys.com>
-+
- 	* libtiff/tif_getimage.c: add explicit uint32 cast in putagreytile to
- 	avoid UndefinedBehaviorSanitizer warning.
- 	Patch by Nicolas Pena.
-Index: tiff-4.0.7/libtiff/tif_jpeg.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_jpeg.c	2016-01-24 21:09:51.781641625 +0530
-+++ tiff-4.0.7/libtiff/tif_jpeg.c	2017-04-24 18:05:59.777778815 +0530
-@@ -1626,6 +1626,13 @@
- 	case PHOTOMETRIC_YCBCR:
- 		sp->h_sampling = td->td_ycbcrsubsampling[0];
- 		sp->v_sampling = td->td_ycbcrsubsampling[1];
-+                if( sp->h_sampling == 0 || sp->v_sampling == 0 )
-+                {
-+                      TIFFErrorExt(tif->tif_clientdata, module,
-+                                   "Invalig horizontal/vertical sampling value");
-+                      return (0);
-+                }
-+
- 		/*
- 		 * A ReferenceBlackWhite field *must* be present since the
- 		 * default value is inappropriate for YCbCr.  Fill in the
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7596.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7596.patch
deleted file mode 100644
index 1945c3d..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7596.patch
+++ /dev/null
@@ -1,308 +0,0 @@
-From 3144e57770c1e4d26520d8abee750f8ac8b75490 Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Wed, 11 Jan 2017 16:09:02 +0000
-Subject: [PATCH] * libtiff/tif_dir.c, tif_dirread.c, tif_dirwrite.c: implement
- various clampings of double to other data types to avoid undefined behaviour
- if the output range isn't big enough to hold the input value. Fixes
- http://bugzilla.maptools.org/show_bug.cgi?id=2643
- http://bugzilla.maptools.org/show_bug.cgi?id=2642
- http://bugzilla.maptools.org/show_bug.cgi?id=2646
- http://bugzilla.maptools.org/show_bug.cgi?id=2647
-
-Upstream-Status: Backport
-
-CVE: CVE-2017-7596
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
-
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog	2017-04-25 15:53:40.294592812 +0530
-+++ tiff-4.0.7/ChangeLog	2017-04-25 16:02:03.238600641 +0530
-@@ -6,6 +6,16 @@
- 	Patch by Nicolás Peña.
- 	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2659
- 
-+2017-01-11 Even Rouault <even.rouault at spatialys.com>
-+
-+	* libtiff/tif_dir.c, tif_dirread.c, tif_dirwrite.c: implement various clampings
-+	of double to other data types to avoid undefined behaviour if the output range
-+	isn't big enough to hold the input value.
-+	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2643
-+	http://bugzilla.maptools.org/show_bug.cgi?id=2642
-+	http://bugzilla.maptools.org/show_bug.cgi?id=2646
-+	http://bugzilla.maptools.org/show_bug.cgi?id=2647
-+
- 2017-01-11 Even Rouault <even.rouault at spatialys.com>
- 
- 	* libtiff/tif_jpeg.c: avoid integer division by zero in
-Index: tiff-4.0.7/libtiff/tif_dir.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_dir.c	2016-10-30 04:33:18.856598072 +0530
-+++ tiff-4.0.7/libtiff/tif_dir.c	2017-04-25 16:02:03.238600641 +0530
-@@ -31,6 +31,7 @@
-  * (and also some miscellaneous stuff)
-  */
- #include "tiffiop.h"
-+#include <float.h>
- 
- /*
-  * These are used in the backwards compatibility code...
-@@ -154,6 +155,15 @@
- 	return (0);
- }
- 
-+static float TIFFClampDoubleToFloat( double val )
-+{
-+    if( val > FLT_MAX )
-+        return FLT_MAX;
-+    if( val < -FLT_MAX )
-+        return -FLT_MAX;
-+    return (float)val;
-+}
-+
- static int
- _TIFFVSetField(TIFF* tif, uint32 tag, va_list ap)
- {
-@@ -312,13 +322,13 @@
-         dblval = va_arg(ap, double);
-         if( dblval < 0 )
-             goto badvaluedouble;
--		td->td_xresolution = (float) dblval;
-+		td->td_xresolution = TIFFClampDoubleToFloat( dblval );
- 		break;
- 	case TIFFTAG_YRESOLUTION:
-         dblval = va_arg(ap, double);
-         if( dblval < 0 )
-             goto badvaluedouble;
--		td->td_yresolution = (float) dblval;
-+		td->td_yresolution = TIFFClampDoubleToFloat( dblval );
- 		break;
- 	case TIFFTAG_PLANARCONFIG:
- 		v = (uint16) va_arg(ap, uint16_vap);
-@@ -327,10 +337,10 @@
- 		td->td_planarconfig = (uint16) v;
- 		break;
- 	case TIFFTAG_XPOSITION:
--		td->td_xposition = (float) va_arg(ap, double);
-+		td->td_xposition = TIFFClampDoubleToFloat( va_arg(ap, double) );
- 		break;
- 	case TIFFTAG_YPOSITION:
--		td->td_yposition = (float) va_arg(ap, double);
-+		td->td_yposition = TIFFClampDoubleToFloat( va_arg(ap, double) );
- 		break;
- 	case TIFFTAG_RESOLUTIONUNIT:
- 		v = (uint16) va_arg(ap, uint16_vap);
-Index: tiff-4.0.7/libtiff/tif_dirread.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_dirread.c	2017-04-25 15:53:40.134592810 +0530
-+++ tiff-4.0.7/libtiff/tif_dirread.c	2017-04-25 16:02:03.242600641 +0530
-@@ -40,6 +40,7 @@
-  */
- 
- #include "tiffiop.h"
-+#include <float.h>
- 
- #define IGNORE 0          /* tag placeholder used below */
- #define FAILED_FII    ((uint32) -1)
-@@ -2406,7 +2407,14 @@
- 				ma=(double*)origdata;
- 				mb=data;
- 				for (n=0; n<count; n++)
--					*mb++=(float)(*ma++);
-+                                {
-+                                    double val = *ma++;
-+                                    if( val > FLT_MAX )
-+                                        val = FLT_MAX;
-+                                    else if( val < -FLT_MAX )
-+                                        val = -FLT_MAX;
-+                                    *mb++=(float)val;
-+                                }
- 			}
- 			break;
- 	}
-Index: tiff-4.0.7/libtiff/tif_dirwrite.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_dirwrite.c	2016-10-30 04:33:18.876854501 +0530
-+++ tiff-4.0.7/libtiff/tif_dirwrite.c	2017-04-25 16:07:48.670606018 +0530
-@@ -30,6 +30,7 @@
-  * Directory Write Support Routines.
-  */
- #include "tiffiop.h"
-+#include <float.h>
- 
- #ifdef HAVE_IEEEFP
- #define TIFFCvtNativeToIEEEFloat(tif, n, fp)
-@@ -939,6 +940,69 @@
- 	return(0);
- }
- 
-+static float TIFFClampDoubleToFloat( double val )
-+{
-+    if( val > FLT_MAX )
-+        return FLT_MAX;
-+    if( val < -FLT_MAX )
-+        return -FLT_MAX;
-+    return (float)val;
-+}
-+
-+static int8 TIFFClampDoubleToInt8( double val )
-+{
-+    if( val > 127 )
-+        return 127;
-+    if( val < -128 || val != val )
-+        return -128;
-+    return (int8)val;
-+}
-+
-+static int16 TIFFClampDoubleToInt16( double val )
-+{
-+    if( val > 32767 )
-+        return 32767;
-+    if( val < -32768 || val != val )
-+        return -32768;
-+    return (int16)val;
-+}
-+
-+static int32 TIFFClampDoubleToInt32( double val )
-+{
-+    if( val > 0x7FFFFFFF )
-+        return 0x7FFFFFFF;
-+    if( val < -0x7FFFFFFF-1 || val != val )
-+        return -0x7FFFFFFF-1;
-+    return (int32)val;
-+}
-+
-+static uint8 TIFFClampDoubleToUInt8( double val )
-+{
-+    if( val < 0 )
-+        return 0;
-+    if( val > 255 || val != val )
-+        return 255;
-+    return (uint8)val;
-+}
-+
-+static uint16 TIFFClampDoubleToUInt16( double val )
-+{
-+    if( val < 0 )
-+        return 0;
-+    if( val > 65535 || val != val )
-+        return 65535;
-+    return (uint16)val;
-+}
-+
-+static uint32 TIFFClampDoubleToUInt32( double val )
-+{
-+    if( val < 0 )
-+        return 0;
-+    if( val > 0xFFFFFFFFU || val != val )
-+        return 0xFFFFFFFFU;
-+    return (uint32)val;
-+}
-+
- static int
- TIFFWriteDirectoryTagSampleformatArray(TIFF* tif, uint32* ndir, TIFFDirEntry* dir, uint16 tag, uint32 count, double* value)
- {
-@@ -959,7 +1023,7 @@
- 			if (tif->tif_dir.td_bitspersample<=32)
- 			{
- 				for (i = 0; i < count; ++i)
--					((float*)conv)[i] = (float)value[i];
-+					((float*)conv)[i] = TIFFClampDoubleToFloat(value[i]);
- 				ok = TIFFWriteDirectoryTagFloatArray(tif,ndir,dir,tag,count,(float*)conv);
- 			}
- 			else
-@@ -971,19 +1035,19 @@
- 			if (tif->tif_dir.td_bitspersample<=8)
- 			{
- 				for (i = 0; i < count; ++i)
--					((int8*)conv)[i] = (int8)value[i];
-+					((int8*)conv)[i] = TIFFClampDoubleToInt8(value[i]);
- 				ok = TIFFWriteDirectoryTagSbyteArray(tif,ndir,dir,tag,count,(int8*)conv);
- 			}
- 			else if (tif->tif_dir.td_bitspersample<=16)
- 			{
- 				for (i = 0; i < count; ++i)
--					((int16*)conv)[i] = (int16)value[i];
-+					((int16*)conv)[i] = TIFFClampDoubleToInt16(value[i]);
- 				ok = TIFFWriteDirectoryTagSshortArray(tif,ndir,dir,tag,count,(int16*)conv);
- 			}
- 			else
- 			{
- 				for (i = 0; i < count; ++i)
--					((int32*)conv)[i] = (int32)value[i];
-+					((int32*)conv)[i] = TIFFClampDoubleToInt32(value[i]);
- 				ok = TIFFWriteDirectoryTagSlongArray(tif,ndir,dir,tag,count,(int32*)conv);
- 			}
- 			break;
-@@ -991,19 +1055,19 @@
- 			if (tif->tif_dir.td_bitspersample<=8)
- 			{
- 				for (i = 0; i < count; ++i)
--					((uint8*)conv)[i] = (uint8)value[i];
-+					((uint8*)conv)[i] = TIFFClampDoubleToUInt8(value[i]);
- 				ok = TIFFWriteDirectoryTagByteArray(tif,ndir,dir,tag,count,(uint8*)conv);
- 			}
- 			else if (tif->tif_dir.td_bitspersample<=16)
- 			{
- 				for (i = 0; i < count; ++i)
--					((uint16*)conv)[i] = (uint16)value[i];
-+					((uint16*)conv)[i] = TIFFClampDoubleToUInt16(value[i]);
- 				ok = TIFFWriteDirectoryTagShortArray(tif,ndir,dir,tag,count,(uint16*)conv);
- 			}
- 			else
- 			{
- 				for (i = 0; i < count; ++i)
--					((uint32*)conv)[i] = (uint32)value[i];
-+					((uint32*)conv)[i] = TIFFClampDoubleToUInt32(value[i]);
- 				ok = TIFFWriteDirectoryTagLongArray(tif,ndir,dir,tag,count,(uint32*)conv);
- 			}
- 			break;
-@@ -2094,15 +2158,25 @@
- static int
- TIFFWriteDirectoryTagCheckedRational(TIFF* tif, uint32* ndir, TIFFDirEntry* dir, uint16 tag, double value)
- {
-+        static const char module[] = "TIFFWriteDirectoryTagCheckedRational";
- 	uint32 m[2];
--	assert(value>=0.0);
- 	assert(sizeof(uint32)==4);
--	if (value<=0.0)
-+	if (value<0)
-+	{
-+            TIFFErrorExt(tif->tif_clientdata,module,"Negative value is illegal");
-+            return 0;
-+	}
-+        else if( value != value )
-+        {
-+            TIFFErrorExt(tif->tif_clientdata,module,"Not-a-number value is illegal");
-+            return 0;
-+        }
-+        else if (value==0.0)
- 	{
- 		m[0]=0;
- 		m[1]=1;
--	}
--	else if (value==(double)(uint32)value)
-+        }
-+        else if (value <= 0xFFFFFFFFU && value==(double)(uint32)value)
- 	{
- 		m[0]=(uint32)value;
- 		m[1]=1;
-@@ -2143,7 +2217,7 @@
- 	}
- 	for (na=value, nb=m, nc=0; nc<count; na++, nb+=2, nc++)
- 	{
--		if (*na<=0.0)
-+                if (*na<=0.0 || *na != *na)
- 		{
- 			nb[0]=0;
- 			nb[1]=1;
-@@ -2153,7 +2227,8 @@
- 			nb[0]=(uint32)(*na);
- 			nb[1]=1;
- 		}
--		else if (*na<1.0)
-+		else if (*na >= 0 && *na <= (float)0xFFFFFFFFU &&
-+                         *na==(float)(uint32)(*na))
- 		{
- 			nb[0]=(uint32)((double)(*na)*0xFFFFFFFF);
- 			nb[1]=0xFFFFFFFF;
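
The removed CVE-2017-7596 backport is the broadest of the group: casting an out-of-range double straight to a narrower type is undefined behaviour, so every conversion is routed through a clamping helper. Two of those helpers, reproduced as a standalone sketch (the val != val comparison is the NaN test):

    #include <float.h>
    #include <stdint.h>

    /* Sketch: pin a double to the representable range before narrowing it. */
    float clamp_double_to_float(double val)
    {
        if (val > FLT_MAX)  return FLT_MAX;
        if (val < -FLT_MAX) return -FLT_MAX;
        return (float)val;
    }

    uint16_t clamp_double_to_uint16(double val)
    {
        if (val < 0)                    return 0;
        if (val > 65535 || val != val)  return 65535;   /* NaN ends up here */
        return (uint16_t)val;
    }
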
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7598.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7598.patch
deleted file mode 100644
index 6d082bb..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7598.patch
+++ /dev/null
@@ -1,65 +0,0 @@
-From 3cfd62d77c2a7e147a05bd678524c345fa9c2bb8 Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Wed, 11 Jan 2017 13:28:01 +0000
-Subject: [PATCH] * libtiff/tif_dirread.c: avoid division by floating point 0
- in TIFFReadDirEntryCheckedRational() and TIFFReadDirEntryCheckedSrational(),
- and return 0 in that case (instead of infinity as before presumably)
- Apparently some sanitizers do not like those divisions by zero. Fixes
- http://bugzilla.maptools.org/show_bug.cgi?id=2644
-
-Upstream-Status: Backport
-
-CVE: CVE-2017-7598
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog	2017-04-25 16:14:59.858612730 +0530
-+++ tiff-4.0.7/ChangeLog	2017-04-25 18:11:36.048107127 +0530
-@@ -1,3 +1,4 @@
-+
- 2017-01-12 Even Rouault <even.rouault at spatialys.com>
- 
- 	* libtiff/tif_ojpeg.c: fix leak in OJPEGReadHeaderInfoSecTablesQTable,
-@@ -8,6 +9,14 @@
- 
- 2017-01-11 Even Rouault <even.rouault at spatialys.com>
- 
-+	* libtiff/tif_dirread.c: avoid division by floating point 0 in
-+	TIFFReadDirEntryCheckedRational() and TIFFReadDirEntryCheckedSrational(),
-+	and return 0 in that case (instead of infinity as before presumably)
-+	Apparently some sanitizers do not like those divisions by zero.
-+	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2644
-+
-+2017-01-11 Even Rouault <even.rouault at spatialys.com>
-+
- 	* libtiff/tif_dir.c, tif_dirread.c, tif_dirwrite.c: implement various clampings
- 	of double to other data types to avoid undefined behaviour if the output range
- 	isn't big enough to hold the input value.
-Index: tiff-4.0.7/libtiff/tif_dirread.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_dirread.c	2017-04-25 16:14:59.858612730 +0530
-+++ tiff-4.0.7/libtiff/tif_dirread.c	2017-04-25 18:16:21.836111576 +0530
-@@ -2880,7 +2880,10 @@
- 		m.l = direntry->tdir_offset.toff_long8;
- 	if (tif->tif_flags&TIFF_SWAB)
- 		TIFFSwabArrayOfLong(m.i,2);
--	if (m.i[0]==0)
-+        /* Not completely sure what we should do when m.i[1]==0, but some */
-+        /* sanitizers do not like division by 0.0: */
-+        /* http://bugzilla.maptools.org/show_bug.cgi?id=2644 */
-+	if (m.i[0]==0 || m.i[1]==0)
- 		*value=0.0;
- 	else
- 		*value=(double)m.i[0]/(double)m.i[1];
-@@ -2908,7 +2911,10 @@
- 		m.l=direntry->tdir_offset.toff_long8;
- 	if (tif->tif_flags&TIFF_SWAB)
- 		TIFFSwabArrayOfLong(m.i,2);
--	if ((int32)m.i[0]==0)
-+        /* Not completely sure what we should do when m.i[1]==0, but some */
-+        /* sanitizers do not like division by 0.0: */
-+        /* http://bugzilla.maptools.org/show_bug.cgi?id=2644 */
-+	if ((int32)m.i[0]==0 || m.i[1]==0)
- 		*value=0.0;
- 	else
- 		*value=(double)((int32)m.i[0])/(double)m.i[1];
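
The CVE-2017-7598 patch deleted above guards the RATIONAL/SRATIONAL read path against a zero denominator instead of letting the division produce infinity. The same guard in isolation, as a small standalone C sketch; rational_to_double() is a hypothetical name, not the libtiff entry point, and only the unsigned case is shown.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper, not the libtiff entry point. */
    static double rational_to_double(uint32_t numerator, uint32_t denominator)
    {
        /* A zero denominator would divide by 0.0; sanitizers flag that, so
         * the whole rational is treated as 0.0 instead, as in the patch. */
        if (numerator == 0 || denominator == 0)
            return 0.0;
        return (double)numerator / (double)denominator;
    }

    int main(void)
    {
        printf("%g\n", rational_to_double(72, 1));   /* 72 */
        printf("%g\n", rational_to_double(72, 0));   /* 0, not inf */
        return 0;
    }
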
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7601.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7601.patch
deleted file mode 100644
index 1b3c6c8..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7601.patch
+++ /dev/null
@@ -1,52 +0,0 @@
-From 0a76a8c765c7b8327c59646284fa78c3c27e5490 Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Wed, 11 Jan 2017 16:13:50 +0000
-Subject: [PATCH] * libtiff/tif_jpeg.c: validate BitsPerSample in
- JPEGSetupEncode() to avoid undefined behaviour caused by invalid shift
- exponent. Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2648
-
-Upstream-Status: Backport
-
-CVE: CVE-2017-7601
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog	2017-04-25 18:21:32.856116417 +0530
-+++ tiff-4.0.7/ChangeLog	2017-04-25 18:35:31.904129477 +0530
-@@ -1,4 +1,3 @@
--
- 2017-01-12 Even Rouault <even.rouault at spatialys.com>
- 
- 	* libtiff/tif_ojpeg.c: fix leak in OJPEGReadHeaderInfoSecTablesQTable,
-@@ -9,6 +8,12 @@
- 
- 2017-01-11 Even Rouault <even.rouault at spatialys.com>
- 
-+	* libtiff/tif_jpeg.c: validate BitsPerSample in JPEGSetupEncode() to avoid
-+	undefined behaviour caused by invalid shift exponent.
-+	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2648
-+
-+2017-01-11 Even Rouault <even.rouault at spatialys.com>
-+
- 	* libtiff/tif_dirread.c: avoid division by floating point 0 in
- 	TIFFReadDirEntryCheckedRational() and TIFFReadDirEntryCheckedSrational(),
- 	and return 0 in that case (instead of infinity as before presumably)
-Index: tiff-4.0.7/libtiff/tif_jpeg.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_jpeg.c	2017-04-25 18:21:32.744116415 +0530
-+++ tiff-4.0.7/libtiff/tif_jpeg.c	2017-04-25 18:38:02.200131817 +0530
-@@ -1632,7 +1632,13 @@
-                                    "Invalig horizontal/vertical sampling value");
-                       return (0);
-                 }
--
-+                if( td->td_bitspersample > 16 )
-+                {
-+                    TIFFErrorExt(tif->tif_clientdata, module,
-+                                 "BitsPerSample %d not allowed for JPEG",
-+                                 td->td_bitspersample);
-+                    return (0);
-+                }
- 		/*
- 		 * A ReferenceBlackWhite field *must* be present since the
- 		 * default value is inappropriate for YCbCr.  Fill in the
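
The CVE-2017-7601 patch deleted above exists because an unvalidated BitsPerSample ends up being used as a shift exponent, and shifting a 32-bit value by 32 or more bits is undefined behaviour in C. A small standalone sketch of the guard follows; max_sample_value() is a hypothetical helper, and the 16-bit cap simply matches the limit the patch enforces for the JPEG codec.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper: compute the largest sample value for a bit depth,
     * refusing depths that would make the shift exponent undefined or that
     * the JPEG codec cannot encode (the patch caps BitsPerSample at 16). */
    static int max_sample_value(uint16_t bitspersample, uint32_t *out)
    {
        if (bitspersample == 0 || bitspersample > 16)
            return 0;                        /* reject before shifting */
        *out = (1u << bitspersample) - 1u;   /* e.g. 8 -> 255, 16 -> 65535 */
        return 1;
    }

    int main(void)
    {
        uint32_t maxval;

        if (max_sample_value(12, &maxval))
            printf("12-bit max sample: %u\n", (unsigned)maxval);   /* 4095 */
        if (!max_sample_value(64, &maxval))
            printf("BitsPerSample 64 rejected\n");
        return 0;
    }
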
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7602.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7602.patch
deleted file mode 100644
index 9ed97e2..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-7602.patch
+++ /dev/null
@@ -1,69 +0,0 @@
-From 66e7bd59520996740e4df5495a830b42fae48bc4 Mon Sep 17 00:00:00 2001
-From: erouault <erouault>
-Date: Wed, 11 Jan 2017 16:33:34 +0000
-Subject: [PATCH] * libtiff/tif_read.c: avoid potential undefined behaviour on
- signed integer addition in TIFFReadRawStrip1() in isMapped() case. Fixes
- http://bugzilla.maptools.org/show_bug.cgi?id=2650
-
-Upstream-Status: Backport
-
-CVE: CVE-2017-7602
-Signed-off-by: Rajkumar Veer <rveer@mvista.com>
-
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog	2017-04-25 18:42:07.656135638 +0530
-+++ tiff-4.0.7/ChangeLog	2017-04-25 18:54:36.812147299 +0530
-@@ -8,6 +8,12 @@
- 
- 2017-01-11 Even Rouault <even.rouault at spatialys.com>
- 
-+	* libtiff/tif_read.c: avoid potential undefined behaviour on signed integer
-+	addition in TIFFReadRawStrip1() in isMapped() case.
-+	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2650
-+
-+2017-01-11 Even Rouault <even.rouault at spatialys.com>
-+
- 	* libtiff/tif_jpeg.c: validate BitsPerSample in JPEGSetupEncode() to avoid
- 	undefined behaviour caused by invalid shift exponent.
- 	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2648
-Index: tiff-4.0.7/libtiff/tif_read.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_read.c	2017-04-25 18:42:07.132135629 +0530
-+++ tiff-4.0.7/libtiff/tif_read.c	2017-04-25 18:58:25.272150855 +0530
-@@ -420,16 +420,26 @@
- 			return ((tmsize_t)(-1));
- 		}
- 	} else {
--		tmsize_t ma,mb;
-+		tmsize_t ma;
- 		tmsize_t n;
--		ma=(tmsize_t)td->td_stripoffset[strip];
--		mb=ma+size;
--		if ((td->td_stripoffset[strip] > (uint64)TIFF_TMSIZE_T_MAX)||(ma>tif->tif_size))
--			n=0;
--		else if ((mb<ma)||(mb<size)||(mb>tif->tif_size))
--			n=tif->tif_size-ma;
--		else
--			n=size;
-+                if ((td->td_stripoffset[strip] > (uint64)TIFF_TMSIZE_T_MAX)||
-+                     ((ma=(tmsize_t)td->td_stripoffset[strip])>tif->tif_size))
-+                {
-+                    n=0;
-+                }
-+                else if( ma > TIFF_TMSIZE_T_MAX - size )
-+                {
-+                    n=0;
-+                }
-+                else
-+                {
-+                    tmsize_t mb=ma+size;
-+                    if (mb>tif->tif_size)
-+                            n=tif->tif_size-ma;
-+                    else
-+                            n=size;
-+                }
-+
- 		if (n!=size) {
- #if defined(__WIN32__) && (defined(_MSC_VER) || defined(__MINGW32__))
- 			TIFFErrorExt(tif->tif_clientdata, module,
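
The CVE-2017-7602 patch deleted above rearranges a bounds check so that offset + size is only computed once it provably cannot overflow the signed tmsize_t. A standalone C sketch of that pattern; readable_bytes() is a hypothetical helper and int64_t stands in for tmsize_t.

    #include <stdint.h>
    #include <stdio.h>

    /* int64_t stands in for libtiff's signed tmsize_t; request is assumed >= 0. */
    static int64_t readable_bytes(uint64_t offset, int64_t request, int64_t file_size)
    {
        int64_t start, end;

        if (offset > (uint64_t)INT64_MAX)   /* offset does not fit the signed type */
            return 0;
        start = (int64_t)offset;
        if (start > file_size)              /* region starts past end of file */
            return 0;
        if (start > INT64_MAX - request)    /* start + request would overflow */
            return 0;
        end = start + request;              /* now provably safe to compute */
        return (end > file_size) ? file_size - start : request;
    }

    int main(void)
    {
        printf("%lld\n", (long long)readable_bytes(100, 50, 200));        /* 50 */
        printf("%lld\n", (long long)readable_bytes(100, INT64_MAX, 200)); /* 0  */
        return 0;
    }

Rewriting the comparison as start > MAX - request, instead of testing the already-computed sum, is what removes the undefined signed overflow the patch title describes.
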
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-9147.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-9147.patch
index 94f3390..3392285 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-9147.patch
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-9147.patch
@@ -32,38 +32,38 @@
  libtiff/tif_dirread.c |   4 ++
  4 files changed, 128 insertions(+)
 
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog
-+++ tiff-4.0.7/ChangeLog
+diff --git a/ChangeLog b/ChangeLog
+index ee8d9d0..5739292 100644
+--- a/ChangeLog
++++ b/ChangeLog
 @@ -1,3 +1,23 @@
 +2017-06-01  Even Rouault <even.rouault at spatialys.com>
 +
-+       * libtiff/tif_dirinfo.c, tif_dirread.c: add _TIFFCheckFieldIsValidForCodec(),
-+       and use it in TIFFReadDirectory() so as to ignore fields whose tag is a
-+       codec-specified tag but this codec is not enabled. This avoids TIFFGetField()
-+       to behave differently depending on whether the codec is enabled or not, and
-+       thus can avoid stack based buffer overflows in a number of TIFF utilities
-+       such as tiffsplit, tiffcmp, thumbnail, etc.
-+       Patch derived from 0063-Handle-properly-CODEC-specific-tags.patch
-+       (http://bugzilla.maptools.org/show_bug.cgi?id=2580) by Raphaël Hertzog.
-+       Fixes:
-+       http://bugzilla.maptools.org/show_bug.cgi?id=2580
-+       http://bugzilla.maptools.org/show_bug.cgi?id=2693
-+       http://bugzilla.maptools.org/show_bug.cgi?id=2625 (CVE-2016-10095)
-+       http://bugzilla.maptools.org/show_bug.cgi?id=2564 (CVE-2015-7554)
-+       http://bugzilla.maptools.org/show_bug.cgi?id=2561 (CVE-2016-5318)
-+       http://bugzilla.maptools.org/show_bug.cgi?id=2499 (CVE-2014-8128)
-+       http://bugzilla.maptools.org/show_bug.cgi?id=2441
-+       http://bugzilla.maptools.org/show_bug.cgi?id=2433
++	* libtiff/tif_dirinfo.c, tif_dirread.c: add _TIFFCheckFieldIsValidForCodec(),
++	and use it in TIFFReadDirectory() so as to ignore fields whose tag is a
++	codec-specified tag but this codec is not enabled. This avoids TIFFGetField()
++	to behave differently depending on whether the codec is enabled or not, and
++	thus can avoid stack based buffer overflows in a number of TIFF utilities
++	such as tiffsplit, tiffcmp, thumbnail, etc.
++	Patch derived from 0063-Handle-properly-CODEC-specific-tags.patch
++	(http://bugzilla.maptools.org/show_bug.cgi?id=2580) by Raphaël Hertzog.
++	Fixes:
++	http://bugzilla.maptools.org/show_bug.cgi?id=2580
++	http://bugzilla.maptools.org/show_bug.cgi?id=2693
++	http://bugzilla.maptools.org/show_bug.cgi?id=2625 (CVE-2016-10095)
++	http://bugzilla.maptools.org/show_bug.cgi?id=2564 (CVE-2015-7554)
++	http://bugzilla.maptools.org/show_bug.cgi?id=2561 (CVE-2016-5318)
++	http://bugzilla.maptools.org/show_bug.cgi?id=2499 (CVE-2014-8128)
++	http://bugzilla.maptools.org/show_bug.cgi?id=2441
++	http://bugzilla.maptools.org/show_bug.cgi?id=2433
 +
- 2017-01-11 Even Rouault <even.rouault at spatialys.com>
+ 2017-05-21  Bob Friesenhahn  <bfriesen@simple.dallas.tx.us>
  
- 	* tools/tiffcp.c: error out cleanly in cpContig2SeparateByRow and
-Index: tiff-4.0.7/libtiff/tif_dir.h
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_dir.h
-+++ tiff-4.0.7/libtiff/tif_dir.h
+ 	* configure.ac: libtiff 4.0.8 released.
+diff --git a/libtiff/tif_dir.h b/libtiff/tif_dir.h
+index e12b44b..5206be4 100644
+--- a/libtiff/tif_dir.h
++++ b/libtiff/tif_dir.h
 @@ -291,6 +291,7 @@ struct _TIFFField {
  extern int _TIFFMergeFields(TIFF*, const TIFFField[], uint32);
  extern const TIFFField* _TIFFFindOrRegisterField(TIFF *, uint32, TIFFDataType);
@@ -72,11 +72,11 @@
  
  #if defined(__cplusplus)
  }
-Index: tiff-4.0.7/libtiff/tif_dirinfo.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_dirinfo.c
-+++ tiff-4.0.7/libtiff/tif_dirinfo.c
-@@ -956,6 +956,109 @@ TIFFMergeFieldInfo(TIFF* tif, const TIFF
+diff --git a/libtiff/tif_dirinfo.c b/libtiff/tif_dirinfo.c
+index 0c8ef42..97c0df0 100644
+--- a/libtiff/tif_dirinfo.c
++++ b/libtiff/tif_dirinfo.c
+@@ -956,6 +956,109 @@ TIFFMergeFieldInfo(TIFF* tif, const TIFFFieldInfo info[], uint32 n)
  	return 0;
  }
  
@@ -186,11 +186,11 @@
  /* vim: set ts=8 sts=8 sw=8 noet: */
  
  /*
-Index: tiff-4.0.7/libtiff/tif_dirread.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_dirread.c
-+++ tiff-4.0.7/libtiff/tif_dirread.c
-@@ -3566,6 +3566,10 @@ TIFFReadDirectory(TIFF* tif)
+diff --git a/libtiff/tif_dirread.c b/libtiff/tif_dirread.c
+index 1d4f0b9..f1dc3d7 100644
+--- a/libtiff/tif_dirread.c
++++ b/libtiff/tif_dirread.c
+@@ -3580,6 +3580,10 @@ TIFFReadDirectory(TIFF* tif)
  							goto bad;
  						dp->tdir_tag=IGNORE;
  						break;
@@ -201,3 +201,6 @@
  				}
  			}
  		}
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-9936.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-9936.patch
index 204949b..fc99363 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-9936.patch
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/CVE-2017-9936.patch
@@ -18,10 +18,10 @@
  libtiff/tif_jbig.c | 1 +
  2 files changed, 7 insertions(+)
 
-Index: tiff-4.0.7/ChangeLog
-===================================================================
---- tiff-4.0.7.orig/ChangeLog
-+++ tiff-4.0.7/ChangeLog
+diff --git a/ChangeLog b/ChangeLog
+index 5739292..0240f0b 100644
+--- a/ChangeLog
++++ b/ChangeLog
 @@ -1,3 +1,9 @@
 +2017-06-26  Even Rouault <even.rouault at spatialys.com>
 +
@@ -31,12 +31,12 @@
 +
  2017-06-01  Even Rouault <even.rouault at spatialys.com>
  
-        * libtiff/tif_dirinfo.c, tif_dirread.c: add _TIFFCheckFieldIsValidForCodec(),
-Index: tiff-4.0.7/libtiff/tif_jbig.c
-===================================================================
---- tiff-4.0.7.orig/libtiff/tif_jbig.c
-+++ tiff-4.0.7/libtiff/tif_jbig.c
-@@ -94,6 +94,7 @@ static int JBIGDecode(TIFF* tif, uint8*
+ 	* libtiff/tif_dirinfo.c, tif_dirread.c: add _TIFFCheckFieldIsValidForCodec(),
+diff --git a/libtiff/tif_jbig.c b/libtiff/tif_jbig.c
+index 5f5f75e..c75f31d 100644
+--- a/libtiff/tif_jbig.c
++++ b/libtiff/tif_jbig.c
+@@ -94,6 +94,7 @@ static int JBIGDecode(TIFF* tif, uint8* buffer, tmsize_t size, uint16 s)
  			     jbg_strerror(decodeStatus)
  #endif
  			     );
@@ -44,3 +44,6 @@
  		return 0;
  	}
  
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/libtiff-CVE-2017-5225.patch b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/libtiff-CVE-2017-5225.patch
deleted file mode 100644
index 3263353..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/files/libtiff-CVE-2017-5225.patch
+++ /dev/null
@@ -1,92 +0,0 @@
-From a24df1e93833dfeaa69bf4d510518dc4684db64d Mon Sep 17 00:00:00 2001
-From: Li Zhou <li.zhou@windriver.com>
-Date: Wed, 25 Jan 2017 17:07:21 +0800
-Subject: [PATCH] libtiff: fix CVE-2017-5225
-
-tools/tiffcp.c: error out cleanly in cpContig2SeparateByRow
-and cpSeparate2ContigByRow if BitsPerSample != 8 to avoid heap based
-overflow. Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2656 and
-http://bugzilla.maptools.org/show_bug.cgi?id=2657
-
-Upstream-Status: Backport
-CVE: CVE-2017-5225
-Signed-off-by: Li Zhou <li.zhou@windriver.com>
----
- ChangeLog      |  7 +++++++
- tools/tiffcp.c | 24 ++++++++++++++++++++++--
- 2 files changed, 29 insertions(+), 2 deletions(-)
-
-diff --git a/ChangeLog b/ChangeLog
-index 9b9d397..7e82795 100644
---- a/ChangeLog
-+++ b/ChangeLog
-@@ -1,3 +1,10 @@
-+2017-01-11 Even Rouault <even.rouault at spatialys.com>
-+
-+	* tools/tiffcp.c: error out cleanly in cpContig2SeparateByRow and
-+	cpSeparate2ContigByRow if BitsPerSample != 8 to avoid heap based overflow.
-+	Fixes http://bugzilla.maptools.org/show_bug.cgi?id=2656 and
-+	http://bugzilla.maptools.org/show_bug.cgi?id=2657
-+
- 2016-11-19  Bob Friesenhahn  <bfriesen@simple.dallas.tx.us>
- 
- 	* libtiff 4.0.7 released.
-diff --git a/tools/tiffcp.c b/tools/tiffcp.c
-index 338a3d1..2e84577 100644
---- a/tools/tiffcp.c
-+++ b/tools/tiffcp.c
-@@ -592,7 +592,7 @@ static	copyFunc pickCopyFunc(TIFF*, TIFF*, uint16, uint16);
- static int
- tiffcp(TIFF* in, TIFF* out)
- {
--	uint16 bitspersample, samplesperpixel = 1;
-+	uint16 bitspersample = 1, samplesperpixel = 1;
- 	uint16 input_compression, input_photometric = PHOTOMETRIC_MINISBLACK;
- 	copyFunc cf;
- 	uint32 width, length;
-@@ -1068,6 +1068,16 @@ DECLAREcpFunc(cpContig2SeparateByRow)
- 	register uint32 n;
- 	uint32 row;
- 	tsample_t s;
-+        uint16 bps = 0;
-+
-+        (void) TIFFGetField(in, TIFFTAG_BITSPERSAMPLE, &bps);
-+        if( bps != 8 )
-+        {
-+            TIFFError(TIFFFileName(in),
-+                      "Error, can only handle BitsPerSample=8 in %s",
-+                      "cpContig2SeparateByRow");
-+            return 0;
-+        }
- 
- 	inbuf = _TIFFmalloc(scanlinesizein);
- 	outbuf = _TIFFmalloc(scanlinesizeout);
-@@ -1121,6 +1131,16 @@ DECLAREcpFunc(cpSeparate2ContigByRow)
- 	register uint32 n;
- 	uint32 row;
- 	tsample_t s;
-+        uint16 bps = 0;
-+
-+        (void) TIFFGetField(in, TIFFTAG_BITSPERSAMPLE, &bps);
-+        if( bps != 8 )
-+        {
-+            TIFFError(TIFFFileName(in),
-+                      "Error, can only handle BitsPerSample=8 in %s",
-+                      "cpSeparate2ContigByRow");
-+            return 0;
-+        }
- 
- 	inbuf = _TIFFmalloc(scanlinesizein);
- 	outbuf = _TIFFmalloc(scanlinesizeout);
-@@ -1763,7 +1783,7 @@ pickCopyFunc(TIFF* in, TIFF* out, uint16 bitspersample, uint16 samplesperpixel)
- 	uint32 w, l, tw, tl;
- 	int bychunk;
- 
--	(void) TIFFGetField(in, TIFFTAG_PLANARCONFIG, &shortv);
-+	(void) TIFFGetFieldDefaulted(in, TIFFTAG_PLANARCONFIG, &shortv);
- 	if (shortv != config && bitspersample != 8 && samplesperpixel > 1) {
- 		fprintf(stderr,
- 		    "%s: Cannot handle different planar configuration w/ bits/sample != 8\n",
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/tiff_4.0.7.bb b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/tiff_4.0.7.bb
deleted file mode 100644
index 866e852..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/tiff_4.0.7.bb
+++ /dev/null
@@ -1,69 +0,0 @@
-SUMMARY = "Provides support for the Tag Image File Format (TIFF)"
-LICENSE = "BSD-2-Clause"
-LIC_FILES_CHKSUM = "file://COPYRIGHT;md5=34da3db46fab7501992f9615d7e158cf"
-
-CVE_PRODUCT = "libtiff"
-
-SRC_URI = "http://download.osgeo.org/libtiff/tiff-${PV}.tar.gz \
-           file://libtool2.patch \
-           file://libtiff-CVE-2017-5225.patch \
-           file://CVE-2017-9147.patch \
-           file://CVE-2017-9936.patch \
-           file://CVE-2017-10688.patch \
-           file://CVE-2017-11335.patch \
-           file://CVE-2016-10271.patch \
-           file://CVE-2016-10093.patch \
-           file://CVE-2016-10268.patch \
-           file://CVE-2016-10266.patch \
-           file://CVE-2016-10267.patch \
-           file://CVE-2016-10269.patch \
-           file://CVE-2016-10270.patch \
-           file://CVE-2017-7592.patch \
-           file://CVE-2017-7594-p1.patch \
-           file://CVE-2017-7594-p2.patch \
-           file://CVE-2017-7595.patch \
-           file://CVE-2017-7596.patch \
-           file://CVE-2017-7598.patch \
-           file://CVE-2017-7601.patch \
-           file://CVE-2017-7602.patch \
-           file://CVE-2017-7593.patch \
-        "
-
-SRC_URI[md5sum] = "77ae928d2c6b7fb46a21c3a29325157b"
-SRC_URI[sha256sum] = "9f43a2cfb9589e5cecaa66e16bf87f814c945f22df7ba600d63aac4632c4f019"
-
-# exclude betas
-UPSTREAM_CHECK_REGEX = "tiff-(?P<pver>\d+(\.\d+)+).tar"
-
-inherit autotools
-
-CACHED_CONFIGUREVARS = "ax_cv_check_gl_libgl=no"
-
-PACKAGECONFIG ?= "cxx jpeg zlib lzma \
-                  strip-chopping extrasample-as-alpha check-ycbcr-subsampling"
-
-PACKAGECONFIG[cxx] = "--enable-cxx,--disable-cxx,,"
-PACKAGECONFIG[jpeg] = "--enable-jpeg,--disable-jpeg,jpeg,"
-PACKAGECONFIG[zlib] = "--enable-zlib,--disable-zlib,zlib,"
-PACKAGECONFIG[lzma] = "--enable-lzma,--disable-lzma,xz,"
-
-# Convert single-strip uncompressed images to multiple strips of specified
-# size (default: 8192) to reduce memory usage
-PACKAGECONFIG[strip-chopping] = "--enable-strip-chopping,--disable-strip-chopping,,"
-
-# Treat a fourth sample with no EXTRASAMPLE_ value as being ASSOCALPHA
-PACKAGECONFIG[extrasample-as-alpha] = "--enable-extrasample-as-alpha,--disable-extrasample-as-alpha,,"
-
-# Control picking up YCbCr subsample info. Disable to support files lacking
-# the tag
-PACKAGECONFIG[check-ycbcr-subsampling] = "--enable-check-ycbcr-subsampling,--disable-check-ycbcr-subsampling,,"
-
-# Support a mechanism allowing reading large strips (usually one strip files)
-# in chunks when using TIFFReadScanline. Experimental 4.0+ feature
-PACKAGECONFIG[chunky-strip-read] = "--enable-chunky-strip-read,--disable-chunky-strip-read,,"
-
-PACKAGES =+ "tiffxx tiff-utils"
-FILES_tiffxx = "${libdir}/libtiffxx.so.*"
-FILES_tiff-utils = "${bindir}/*"
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/tiff_4.0.8.bb b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/tiff_4.0.8.bb
new file mode 100644
index 0000000..cb91baa
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/libtiff/tiff_4.0.8.bb
@@ -0,0 +1,54 @@
+SUMMARY = "Provides support for the Tag Image File Format (TIFF)"
+LICENSE = "BSD-2-Clause"
+LIC_FILES_CHKSUM = "file://COPYRIGHT;md5=34da3db46fab7501992f9615d7e158cf"
+
+CVE_PRODUCT = "libtiff"
+
+SRC_URI = "http://download.osgeo.org/libtiff/tiff-${PV}.tar.gz \
+           file://libtool2.patch \
+           file://CVE-2017-9147.patch \
+           file://CVE-2017-9936.patch \
+           file://CVE-2017-10688.patch \
+           file://CVE-2017-11335.patch \
+           file://CVE-2017-13726.patch \
+           file://CVE-2017-13727.patch \
+          "
+
+SRC_URI[md5sum] = "2a7d1c1318416ddf36d5f6fa4600069b"
+SRC_URI[sha256sum] = "59d7a5a8ccd92059913f246877db95a2918e6c04fb9d43fd74e5c3390dac2910"
+
+# exclude betas
+UPSTREAM_CHECK_REGEX = "tiff-(?P<pver>\d+(\.\d+)+).tar"
+
+inherit autotools
+
+CACHED_CONFIGUREVARS = "ax_cv_check_gl_libgl=no"
+
+PACKAGECONFIG ?= "cxx jpeg zlib lzma \
+                  strip-chopping extrasample-as-alpha check-ycbcr-subsampling"
+
+PACKAGECONFIG[cxx] = "--enable-cxx,--disable-cxx,,"
+PACKAGECONFIG[jpeg] = "--enable-jpeg,--disable-jpeg,jpeg,"
+PACKAGECONFIG[zlib] = "--enable-zlib,--disable-zlib,zlib,"
+PACKAGECONFIG[lzma] = "--enable-lzma,--disable-lzma,xz,"
+
+# Convert single-strip uncompressed images to multiple strips of specified
+# size (default: 8192) to reduce memory usage
+PACKAGECONFIG[strip-chopping] = "--enable-strip-chopping,--disable-strip-chopping,,"
+
+# Treat a fourth sample with no EXTRASAMPLE_ value as being ASSOCALPHA
+PACKAGECONFIG[extrasample-as-alpha] = "--enable-extrasample-as-alpha,--disable-extrasample-as-alpha,,"
+
+# Control picking up YCbCr subsample info. Disable to support files lacking
+# the tag
+PACKAGECONFIG[check-ycbcr-subsampling] = "--enable-check-ycbcr-subsampling,--disable-check-ycbcr-subsampling,,"
+
+# Support a mechanism allowing reading large strips (usually one strip files)
+# in chunks when using TIFFReadScanline. Experimental 4.0+ feature
+PACKAGECONFIG[chunky-strip-read] = "--enable-chunky-strip-read,--disable-chunky-strip-read,,"
+
+PACKAGES =+ "tiffxx tiff-utils"
+FILES_tiffxx = "${libdir}/libtiffxx.so.*"
+FILES_tiff-utils = "${bindir}/*"
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/files/0001-check-for-available-arm-optimizations.patch b/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/files/0001-check-for-available-arm-optimizations.patch
new file mode 100644
index 0000000..5bf68b3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/files/0001-check-for-available-arm-optimizations.patch
@@ -0,0 +1,55 @@
+From cbcff58ed670c8edc0be1004384cbe0fd07d8d26 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 5 Jul 2017 18:49:21 -0700
+Subject: [PATCH 1/2] check for available arm optimizations
+
+Taken From
+http://sources.debian.net/src/mpeg2dec/0.5.1-7/debian/patches/65_arm-test-with-compiler.patch/
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ configure.ac | 12 ++++++++----
+ 1 file changed, 8 insertions(+), 4 deletions(-)
+
+diff --git a/configure.ac b/configure.ac
+index acdcb1e..2c0a721 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -59,7 +59,7 @@ elif test x"$GCC" = x"yes"; then
+     AC_TRY_CFLAGS([$TRY_CFLAGS $CFLAGS],[OPT_CFLAGS="$TRY_CFLAGS"])
+ 
+     dnl arch-specific flags
+-    arm_conditional=false
++    build_arm_opt=false
+     case "$host" in
+     i?86-* | k?-* | x86_64-* | amd64-*)
+ 	AC_DEFINE([ARCH_X86],,[x86 architecture])
+@@ -102,8 +102,12 @@ elif test x"$GCC" = x"yes"; then
+     alpha*)
+ 	AC_DEFINE([ARCH_ALPHA],,[alpha architecture]);;
+     arm*)
+-	arm_conditional=:
+-	AC_DEFINE([ARCH_ARM],,[ARM architecture]);;
++	AC_LANG(C)
++	AC_COMPILE_IFELSE(
++		[AC_LANG_SOURCE([[
++			void foo(void) { __asm__ volatile("pld [r1]"); }]])],
++		build_arm_opt=true; AC_DEFINE([ARCH_ARM],,[ARM architecture]),
++		build_arm_opt=false);;
+     esac
+ elif test x"$CC" = x"tendracc"; then
+     dnl TenDRA portability checking compiler
+@@ -123,7 +127,7 @@ else
+     esac
+ fi
+ 
+-AM_CONDITIONAL(ARCH_ARM, ${arm_conditional})
++AM_CONDITIONAL(ARCH_ARM, test x$build_arm_opt = xtrue)
+ 
+ dnl Checks for libtool - this must be done after we set cflags
+ AC_LIBTOOL_WIN32_DLL
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/files/0002-Set-visibility-of-global-symbols-used-in-ARM-specifi.patch b/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/files/0002-Set-visibility-of-global-symbols-used-in-ARM-specifi.patch
new file mode 100644
index 0000000..8301692
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/files/0002-Set-visibility-of-global-symbols-used-in-ARM-specifi.patch
@@ -0,0 +1,63 @@
+From f9d9dc92d75f8910e3cd5fdcbea72e505cdf3493 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Wed, 5 Jul 2017 19:03:36 -0700
+Subject: [PATCH 2/2] Set visibility of global symbols used in ARM specific
+ assembly file to internal
+
+Taken from
+http://sources.debian.net/src/mpeg2dec/0.5.1-7/debian/patches/60_arm-private-symbols.patch/
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ libmpeg2/motion_comp_arm_s.S | 12 ++++++++----
+ 1 file changed, 8 insertions(+), 4 deletions(-)
+
+diff --git a/libmpeg2/motion_comp_arm_s.S b/libmpeg2/motion_comp_arm_s.S
+index f6c3d7d..c921f7c 100644
+--- a/libmpeg2/motion_comp_arm_s.S
++++ b/libmpeg2/motion_comp_arm_s.S
+@@ -23,7 +23,8 @@
+ 
+ @ ----------------------------------------------------------------
+ 	.align
+-	.global MC_put_o_16_arm
++	.global   MC_put_o_16_arm
++	.internal MC_put_o_16_arm
+ MC_put_o_16_arm:
+ 	@@ void func(uint8_t * dest, const uint8_t * ref, int stride, int height)
+ 	pld [r1]
+@@ -83,7 +84,8 @@ MC_put_o_16_arm_align_jt:
+ 
+ @ ----------------------------------------------------------------
+ 	.align
+-	.global MC_put_o_8_arm
++	.global   MC_put_o_8_arm
++	.internal MC_put_o_8_arm
+ MC_put_o_8_arm:
+ 	@@ void func(uint8_t * dest, const uint8_t * ref, int stride, int height)
+ 	pld [r1]
+@@ -152,7 +154,8 @@ MC_put_o_8_arm_align_jt:
+ .endm
+ 
+ 	.align
+-	.global MC_put_x_16_arm
++	.global   MC_put_x_16_arm
++	.internal MC_put_x_16_arm
+ MC_put_x_16_arm:
+ 	@@ void func(uint8_t * dest, const uint8_t * ref, int stride, int height)
+ 	pld [r1]
+@@ -244,7 +247,8 @@ MC_put_x_16_arm_align_jt:
+ 
+ @ ----------------------------------------------------------------
+ 	.align
+-	.global MC_put_x_8_arm
++	.global   MC_put_x_8_arm
++	.internal MC_put_x_8_arm
+ MC_put_x_8_arm:
+ 	@@ void func(uint8_t * dest, const uint8_t * ref, int stride, int height)
+ 	pld [r1]
+-- 
+2.13.2
+
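
The patch above keeps the MC_put_* entry points global for the assembler but marks them with internal ELF visibility, so they are not exported from (or preempted in) the shared library. A rough C-level counterpart using GCC's visibility attribute; MC_put_example is a hypothetical illustration, not one of the real mpeg2dec symbols.

    #include <stdio.h>

    /* Hypothetical routine: copy a 16-byte-wide block of 'height' rows spaced
     * 'stride' bytes apart. "internal" visibility is the C-level counterpart
     * of the assembler's ".internal" directive: the symbol stays global
     * inside the shared object but is not exported and cannot be preempted. */
    __attribute__((visibility("internal")))
    void MC_put_example(unsigned char *dest, const unsigned char *ref,
                        int stride, int height)
    {
        while (height-- > 0) {
            for (int i = 0; i < 16; i++)
                dest[i] = ref[i];
            dest += stride;
            ref  += stride;
        }
    }

    int main(void)
    {
        unsigned char src[16 * 4], dst[16 * 4];

        for (int i = 0; i < 16 * 4; i++)
            src[i] = (unsigned char)i;
        MC_put_example(dst, src, 16, 4);
        printf("%d\n", dst[20]);   /* 20 */
        return 0;
    }
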
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/files/altivec_h_needed.patch b/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/files/altivec_h_needed.patch
new file mode 100644
index 0000000..5113ad4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/files/altivec_h_needed.patch
@@ -0,0 +1,43 @@
+Add new method to judge whether <altivec.h> is needed
+
+The original logic will use "typedef vector int t;" to judge
+whether <altivec.h> is needed. altivec.h contains the following
+statement:
+
+ #if !defined(__APPLE_ALTIVEC__)
+ #define vector __vector
+ #define pixel __pixel
+ #define bool 
+ #endif
+
+In gcc-4.3.3, __APPLE_ALTIVEC__ is not defined by compiler, neither
+as vector, pixel, and bool. In order to make "typedef vector int t;"
+pass the compilation, we need to include altivec.h.
+
+However in gcc-4.5.0, __APPLE_ALTIVEC__ is defined by compiler,
+so as vector, pixel, and bool. We could not judge whether
+altivec.h is needed by "typedef vector int t;".
+Here we include another statement "int tmp = __CR6_EQ;", in
+which __CR6_EQ is defined in altivec.h.
+
+Upstream-Status: Pending
+
+Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
+
+diff -ruN a/configure.in b/configure.in
+--- a/configure.ac	2010-09-14 20:55:42.399687663 +0800
++++ b/configure.ac	2010-09-14 20:56:43.403204648 +0800
+@@ -79,11 +79,11 @@
+ 		 CFLAGS="$OPT_CFLAGS $TRY_CFLAGS $CFLAGS"
+ 		 AC_MSG_CHECKING([if <altivec.h> is needed])
+ 		 AC_TRY_COMPILE([],
+-		    [typedef vector int t;
++		    [typedef vector int t; int tmp = __CR6_EQ;
+ 		     vec_ld(0, (unsigned char *)0);],
+ 		    [have_altivec=yes; AC_MSG_RESULT(no)],
+ 		    [AC_TRY_COMPILE([#include <altivec.h>],
+-			[typedef vector int t; vec_ld(0, (unsigned char *)0);],
++			[typedef vector int t; int tmp = __CR6_EQ; vec_ld(0, (unsigned char *)0);],
+ 			[AC_DEFINE([HAVE_ALTIVEC_H],,
+ 			    [Define to 1 if you have the <altivec.h> header.])
+ 			 have_altivec=yes; AC_MSG_RESULT(yes)],
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/mpeg2dec-0.4.1/altivec_h_needed.patch b/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/mpeg2dec-0.4.1/altivec_h_needed.patch
deleted file mode 100644
index 7dc5643..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/mpeg2dec-0.4.1/altivec_h_needed.patch
+++ /dev/null
@@ -1,43 +0,0 @@
-Add new method to judge whether <altivec.h> is needed
-
-The original logic will use "typedef vector int t;" to judge
-whether <altivec.h> is needed. altivec.h contains the following
-statement:
-
- #if !defined(__APPLE_ALTIVEC__)
- #define vector __vector
- #define pixel __pixel
- #define bool 
- #endif
-
-In gcc-4.3.3, __APPLE_ALTIVEC__ is not defined by compiler, neither
-as vector, pixel, and bool. In order to make "typedef vector int t;"
-pass the compilation, we need to include altivec.h.
-
-However in gcc-4.5.0, __APPLE_ALTIVEC__ is defined by compiler,
-so as vector, pixel, and bool. We could not judge whether
-altivec.h is needed by "typedef vector int t;".
-Here we include another statement "int tmp = __CR6_EQ;", in
-which __CR6_EQ is defined in altivec.h.
-
-Upstream-Status: Pending
-
-Signed-off-by: Dongxiao Xu <dongxiao.xu@intel.com>
-
-diff -ruN mpeg2dec-0.4.1-orig/configure.in mpeg2dec-0.4.1/configure.in
---- mpeg2dec-0.4.1-orig/configure.in	2010-09-14 20:55:42.399687663 +0800
-+++ mpeg2dec-0.4.1/configure.in	2010-09-14 20:56:43.403204648 +0800
-@@ -75,11 +75,11 @@
- 		 CFLAGS="$OPT_CFLAGS $TRY_CFLAGS $CFLAGS"
- 		 AC_MSG_CHECKING([if <altivec.h> is needed])
- 		 AC_TRY_COMPILE([],
--		    [typedef vector int t;
-+		    [typedef vector int t; int tmp = __CR6_EQ;
- 		     vec_ld(0, (unsigned char *)0);],
- 		    [have_altivec=yes; AC_MSG_RESULT(no)],
- 		    [AC_TRY_COMPILE([#include <altivec.h>],
--			[typedef vector int t; vec_ld(0, (unsigned char *)0);],
-+			[typedef vector int t; int tmp = __CR6_EQ; vec_ld(0, (unsigned char *)0);],
- 			[AC_DEFINE([HAVE_ALTIVEC_H],,
- 			    [Define to 1 if you have the <altivec.h> header.])
- 			 have_altivec=yes; AC_MSG_RESULT(yes)],
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/mpeg2dec_0.4.1.bb b/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/mpeg2dec_0.4.1.bb
deleted file mode 100644
index fe765da..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/mpeg2dec_0.4.1.bb
+++ /dev/null
@@ -1,40 +0,0 @@
-SUMMARY = "Library and test program for decoding MPEG-2 and MPEG-1 video streams"
-HOMEPAGE = "http://libmpeg2.sourceforge.net/"
-SECTION = "libs"
-LICENSE = "GPLv2+"
-LICENSE_FLAGS = "commercial"
-LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
-                    file://include/mpeg2.h;beginline=1;endline=22;md5=ead62602d4638329d3b5b86a55803154"
-
-PR = "r2"
-
-SRC_URI = "http://libmpeg2.sourceforge.net/files/mpeg2dec-${PV}.tar.gz \
-           file://altivec_h_needed.patch"
-
-SRC_URI[md5sum] = "7631b0a4bcfdd0d78c0bb0083080b0dc"
-SRC_URI[sha256sum] = "c74a76068f8ec36d4bb59a03bf1157be44118ca02252180e8b358b0b5e3edeee"
-
-UPSTREAM_CHECK_URI = "http://libmpeg2.sourceforge.net/downloads.html"
-
-inherit autotools pkgconfig
-
-EXTRA_OECONF = "--enable-shared --disable-sdl"
-
-PACKAGECONFIG ?= "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
-PACKAGECONFIG[x11] = "--with-x,--without-x,virtual/libx11 libxext libxv"
-
-PACKAGES = "mpeg2dec-dbg mpeg2dec mpeg2dec-doc libmpeg2 libmpeg2-dev libmpeg2convert libmpeg2convert-dev libmpeg2-staticdev libmpeg2convert-staticdev"
-
-FILES_${PN} = "${bindir}/*"
-FILES_libmpeg2 = "${libdir}/libmpeg2.so.*"
-FILES_libmpeg2convert = "${libdir}/libmpeg2convert.so.*"
-FILES_libmpeg2-dev = "${libdir}/libmpeg2.so \
-                      ${libdir}/libmpeg2.la \
-                      ${libdir}/pkgconfig/libmpeg2.pc \
-                      ${includedir}/mpeg2dec/mpeg2.h"
-FILES_libmpeg2-staticdev = "${libdir}/libmpeg2.a"
-FILES_libmpeg2convert-dev = "${libdir}/libmpeg2convert.so \
-                             ${libdir}/libmpeg2convert.la \
-                             ${libdir}/pkgconfig/libmpeg2convert.pc \
-                             ${includedir}/mpeg2dec/mpeg2convert.h"
-FILES_libmpeg2convert-staticdev = "${libdir}/libmpeg2convert.a"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/mpeg2dec_0.5.1.bb b/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/mpeg2dec_0.5.1.bb
new file mode 100644
index 0000000..7711c2d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/mpeg2dec/mpeg2dec_0.5.1.bb
@@ -0,0 +1,45 @@
+SUMMARY = "Library and test program for decoding MPEG-2 and MPEG-1 video streams"
+HOMEPAGE = "http://libmpeg2.sourceforge.net/"
+SECTION = "libs"
+LICENSE = "GPLv2+"
+LICENSE_FLAGS = "commercial"
+LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
+                    file://include/mpeg2.h;beginline=1;endline=22;md5=7766f4fcb58f0f8413c49a746f2ab89b"
+
+SRC_URI = "http://libmpeg2.sourceforge.net/files/libmpeg2-${PV}.tar.gz \
+           file://altivec_h_needed.patch \
+           file://0001-check-for-available-arm-optimizations.patch \
+           file://0002-Set-visibility-of-global-symbols-used-in-ARM-specifi.patch \
+           "
+
+S = "${WORKDIR}/libmpeg2-${PV}"
+
+SRC_URI[md5sum] = "0f92c7454e58379b4a5a378485bbd8ef"
+SRC_URI[sha256sum] = "dee22e893cb5fc2b2b6ebd60b88478ab8556cb3b93f9a0d7ce8f3b61851871d4"
+
+UPSTREAM_CHECK_URI = "http://libmpeg2.sourceforge.net/downloads.html"
+
+inherit autotools pkgconfig
+
+EXTRA_OECONF = "--enable-shared --disable-sdl"
+
+PACKAGECONFIG ?= "${@bb.utils.filter('DISTRO_FEATURES', 'x11', d)}"
+PACKAGECONFIG[x11] = "--with-x,--without-x,virtual/libx11 libxext libxv"
+
+PACKAGES = "mpeg2dec-dbg mpeg2dec mpeg2dec-doc libmpeg2 libmpeg2-dev libmpeg2convert libmpeg2convert-dev libmpeg2-staticdev libmpeg2convert-staticdev"
+
+FILES_${PN} = "${bindir}/*"
+FILES_libmpeg2 = "${libdir}/libmpeg2.so.*"
+FILES_libmpeg2convert = "${libdir}/libmpeg2convert.so.*"
+FILES_libmpeg2-dev = "${libdir}/libmpeg2.so \
+                      ${libdir}/libmpeg2.la \
+                      ${libdir}/libmpeg2arch.la \
+                      ${libdir}/pkgconfig/libmpeg2.pc \
+                      ${includedir}/mpeg2dec/mpeg2.h"
+FILES_libmpeg2-staticdev = "${libdir}/libmpeg2.a"
+FILES_libmpeg2convert-dev = "${libdir}/libmpeg2convert.so \
+                             ${libdir}/libmpeg2convert.la \
+                             ${libdir}/libmpeg2convertarch.la \
+                             ${libdir}/pkgconfig/libmpeg2convert.pc \
+                             ${includedir}/mpeg2dec/mpeg2convert.h"
+FILES_libmpeg2convert-staticdev = "${libdir}/libmpeg2convert.a"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/mpg123/mpg123_1.23.8.bb b/import-layers/yocto-poky/meta/recipes-multimedia/mpg123/mpg123_1.23.8.bb
deleted file mode 100644
index e0a7038..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/mpg123/mpg123_1.23.8.bb
+++ /dev/null
@@ -1,58 +0,0 @@
-SUMMARY = "Audio decoder for MPEG-1 Layer 1/2/3"
-DESCRIPTION = "The core of mpg123 is an MPEG-1 Layer 1/2/3 decoding library, which can be used by other programs. \
-mpg123 also comes with a command-line tool which can playback using ALSA, PulseAudio, OSS, and several other APIs, \
-and also can write the decoded audio to WAV."
-HOMEPAGE = "http://mpg123.de/"
-BUGTRACKER = "http://sourceforge.net/p/mpg123/bugs/"
-SECTION = "multimedia"
-
-LICENSE = "LGPLv2.1"
-LICENSE_FLAGS = "commercial"
-LIC_FILES_CHKSUM = "file://COPYING;md5=1e86753638d3cf2512528b99079bc4f3"
-
-SRC_URI = "https://www.mpg123.de/download/${BP}.tar.bz2"
-
-SRC_URI[md5sum] = "4dde045123a2ad1e385a0a82c0ef9268"
-SRC_URI[sha256sum] = "de2303c8ecb65593e39815c0a2f2f2d91f708c43b85a55fdd1934c82e677cf8e"
-
-inherit autotools pkgconfig
-
-# The options should be mutually exclusive for configuration script.
-# If both alsa and pulseaudio are specified (as in the default distro features)
-# pulseaudio takes precedence.
-PACKAGECONFIG_ALSA = "${@bb.utils.filter('DISTRO_FEATURES', 'alsa', d)}"
-PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'pulseaudio', 'pulseaudio', '${PACKAGECONFIG_ALSA}', d)}"
-
-PACKAGECONFIG[alsa] = "--with-default-audio=alsa,,alsa-lib"
-PACKAGECONFIG[esd] = ",,esound"
-PACKAGECONFIG[jack] = ",,jack"
-PACKAGECONFIG[openal] = ",,openal-soft"
-PACKAGECONFIG[portaudio] = ",,portaudio-v19"
-PACKAGECONFIG[pulseaudio] = "--with-default-audio=pulse,,pulseaudio"
-PACKAGECONFIG[sdl] = ",,libsdl"
-
-# Following are possible sound output modules:
-# alsa arts coreaudio dummy esd jack nas openal os2 oss portaudio pulse sdl sndio sun tinyalsa win32 win32_wasapi
-AUDIOMODS += "${@bb.utils.filter('PACKAGECONFIG', 'alsa esd jack openal portaudio sdl', d)}"
-AUDIOMODS += "${@bb.utils.contains('PACKAGECONFIG', 'pulseaudio', 'pulse', '', d)}"
-
-EXTRA_OECONF = " \
-    --enable-shared \
-    --with-audio='${AUDIOMODS}' \
-    --with-module-suffix=.so \
-    ${@bb.utils.contains('TUNE_FEATURES', 'neon', '--with-cpu=neon', '', d)} \
-    ${@bb.utils.contains('TUNE_FEATURES', 'altivec', '--with-cpu=altivec', '', d)} \
-"
-
-# The x86 assembler optimisations contains text relocations and there are no
-# upstream plans to fix them: http://sourceforge.net/p/mpg123/bugs/168/
-INSANE_SKIP_${PN}_append_x86 = " textrel"
-
-# Fails to build with thumb-1 (qemuarm)
-#| {standard input}: Assembler messages:
-#| {standard input}:47: Error: selected processor does not support Thumb mode `smull r5,r6,r7,r4'
-#| {standard input}:48: Error: shifts in CMP/MOV instructions are only supported in unified syntax -- `mov r5,r5,lsr#24'
-#...
-#| make[3]: *** [equalizer.lo] Error 1
-ARM_INSTRUCTION_SET_armv4 = "arm"
-ARM_INSTRUCTION_SET_armv5 = "arm"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/mpg123/mpg123_1.25.6.bb b/import-layers/yocto-poky/meta/recipes-multimedia/mpg123/mpg123_1.25.6.bb
new file mode 100644
index 0000000..cb86199
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/mpg123/mpg123_1.25.6.bb
@@ -0,0 +1,51 @@
+SUMMARY = "Audio decoder for MPEG-1 Layer 1/2/3"
+DESCRIPTION = "The core of mpg123 is an MPEG-1 Layer 1/2/3 decoding library, which can be used by other programs. \
+mpg123 also comes with a command-line tool which can playback using ALSA, PulseAudio, OSS, and several other APIs, \
+and also can write the decoded audio to WAV."
+HOMEPAGE = "http://mpg123.de/"
+BUGTRACKER = "http://sourceforge.net/p/mpg123/bugs/"
+SECTION = "multimedia"
+
+LICENSE = "LGPLv2.1"
+LICENSE_FLAGS = "commercial"
+LIC_FILES_CHKSUM = "file://COPYING;md5=1e86753638d3cf2512528b99079bc4f3"
+
+SRC_URI = "https://www.mpg123.de/download/${BP}.tar.bz2"
+SRC_URI[md5sum] = "43336bef78f67c2e66c4f6c288ca1eb3"
+SRC_URI[sha256sum] = "0f0458c9b87799bc2c9bf9455279cc4d305e245db43b51a39ef27afe025c5a8e"
+
+inherit autotools pkgconfig
+
+# The options should be mutually exclusive for configuration script.
+# If both alsa and pulseaudio are specified (as in the default distro features)
+# pulseaudio takes precedence.
+PACKAGECONFIG_ALSA = "${@bb.utils.filter('DISTRO_FEATURES', 'alsa', d)}"
+PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'pulseaudio', 'pulseaudio', '${PACKAGECONFIG_ALSA}', d)}"
+
+PACKAGECONFIG[alsa] = "--with-default-audio=alsa,,alsa-lib"
+PACKAGECONFIG[esd] = ",,esound"
+PACKAGECONFIG[jack] = ",,jack"
+PACKAGECONFIG[openal] = ",,openal-soft"
+PACKAGECONFIG[portaudio] = ",,portaudio-v19"
+PACKAGECONFIG[pulseaudio] = "--with-default-audio=pulse,,pulseaudio"
+PACKAGECONFIG[sdl] = ",,libsdl"
+
+# Following are possible sound output modules:
+# alsa arts coreaudio dummy esd jack nas openal os2 oss portaudio pulse sdl sndio sun tinyalsa win32 win32_wasapi
+AUDIOMODS += "${@bb.utils.filter('PACKAGECONFIG', 'alsa esd jack openal portaudio sdl', d)}"
+AUDIOMODS += "${@bb.utils.contains('PACKAGECONFIG', 'pulseaudio', 'pulse', '', d)}"
+
+EXTRA_OECONF = " \
+    --enable-shared \
+    --with-audio='${AUDIOMODS}' \
+    ${@bb.utils.contains('TUNE_FEATURES', 'neon', '--with-cpu=neon', '', d)} \
+    ${@bb.utils.contains('TUNE_FEATURES', 'altivec', '--with-cpu=altivec', '', d)} \
+"
+# Fails to build with thumb-1 (qemuarm)
+#| {standard input}: Assembler messages:
+#| {standard input}:47: Error: selected processor does not support Thumb mode `smull r5,r6,r7,r4'
+#| {standard input}:48: Error: shifts in CMP/MOV instructions are only supported in unified syntax -- `mov r5,r5,lsr#24'
+#...
+#| make[3]: *** [equalizer.lo] Error 1
+ARM_INSTRUCTION_SET_armv4 = "arm"
+ARM_INSTRUCTION_SET_armv5 = "arm"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/pulseaudio/pulseaudio/pulseaudio-discuss-iochannel-don-t-use-variable-length-array-in-union.patch b/import-layers/yocto-poky/meta/recipes-multimedia/pulseaudio/pulseaudio/pulseaudio-discuss-iochannel-don-t-use-variable-length-array-in-union.patch
new file mode 100644
index 0000000..11b56ab
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/pulseaudio/pulseaudio/pulseaudio-discuss-iochannel-don-t-use-variable-length-array-in-union.patch
@@ -0,0 +1,59 @@
+From patchwork Sat Feb  4 12:19:01 2017
+Content-Type: text/plain; charset="utf-8"
+MIME-Version: 1.0
+Content-Transfer-Encoding: 7bit
+Subject: [pulseaudio-discuss] iochannel: don't use variable length array in
+ union
+From: Tanu Kaskinen <tanuk@iki.fi>
+X-Patchwork-Id: 136885
+Message-Id: <20170204121901.17428-1-tanuk@iki.fi>
+To: pulseaudio-discuss@lists.freedesktop.org
+Date: Sat,  4 Feb 2017 14:19:01 +0200
+
+Clang didn't like the variable length array:
+
+pulsecore/iochannel.c:358:17: error: fields must have a constant size:
+'variable length array in structure' extension will never be supported
+        uint8_t data[CMSG_SPACE(sizeof(int) * nfd)];
+                ^
+
+Commit 451d1d6762 introduced the variable length array in order to have
+the correct value in msg_controllen. This patch reverts that commit and
+uses a different way to achieve the same goal.
+
+BugLink: https://bugs.freedesktop.org/show_bug.cgi?id=99458
+---
+Upstream-Status: Backport
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+ src/pulsecore/iochannel.c | 10 ++++++++--
+ 1 file changed, 8 insertions(+), 2 deletions(-)
+
+diff --git a/src/pulsecore/iochannel.c b/src/pulsecore/iochannel.c
+index 8ace297ff..897337522 100644
+--- a/src/pulsecore/iochannel.c
++++ b/src/pulsecore/iochannel.c
+@@ -355,7 +355,7 @@ ssize_t pa_iochannel_write_with_fds(pa_iochannel*io, const void*data, size_t l,
+     struct iovec iov;
+     union {
+         struct cmsghdr hdr;
+-        uint8_t data[CMSG_SPACE(sizeof(int) * nfd)];
++        uint8_t data[CMSG_SPACE(sizeof(int) * MAX_ANCIL_DATA_FDS)];
+     } cmsg;
+ 
+     pa_assert(io);
+@@ -382,7 +382,13 @@ ssize_t pa_iochannel_write_with_fds(pa_iochannel*io, const void*data, size_t l,
+     mh.msg_iov = &iov;
+     mh.msg_iovlen = 1;
+     mh.msg_control = &cmsg;
+-    mh.msg_controllen = sizeof(cmsg);
++
++    /* If we followed the example on the cmsg man page, we'd use
++     * sizeof(cmsg.data) here, but if nfd < MAX_ANCIL_DATA_FDS, then the data
++     * buffer is larger than needed, and the kernel doesn't like it if we set
++     * msg_controllen to a larger than necessary value. The commit message for
++     * commit 451d1d6762 contains a longer explanation. */
++    mh.msg_controllen = CMSG_SPACE(sizeof(int) * nfd);
+ 
+     if ((r = sendmsg(io->ofd, &mh, MSG_NOSIGNAL)) >= 0) {
+         io->writable = io->hungup = false;
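
The backported PulseAudio patch above replaces a variable-length array inside a union with a buffer sized for the maximum number of descriptors, then sets msg_controllen from the actual count. A standalone sketch of the same ancillary-data pattern follows; send_with_fds() and MAX_FDS are hypothetical stand-ins, not the PulseAudio API or its MAX_ANCIL_DATA_FDS constant.

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <unistd.h>

    #define MAX_FDS 8   /* hypothetical cap on descriptors per message */

    static ssize_t send_with_fds(int sock, const void *data, size_t len,
                                 const int *fds, unsigned nfd)
    {
        struct iovec iov;
        struct msghdr mh;
        struct cmsghdr *c;
        union {
            struct cmsghdr hdr;
            unsigned char buf[CMSG_SPACE(sizeof(int) * MAX_FDS)]; /* constant size */
        } cmsg;

        if (nfd == 0 || nfd > MAX_FDS)
            return -1;

        memset(&cmsg, 0, sizeof(cmsg));
        memset(&mh, 0, sizeof(mh));
        iov.iov_base = (void *)data;
        iov.iov_len = len;
        mh.msg_iov = &iov;
        mh.msg_iovlen = 1;
        mh.msg_control = &cmsg;
        /* Only as much control space as nfd descriptors actually need. */
        mh.msg_controllen = CMSG_SPACE(sizeof(int) * nfd);

        c = CMSG_FIRSTHDR(&mh);
        c->cmsg_level = SOL_SOCKET;
        c->cmsg_type = SCM_RIGHTS;
        c->cmsg_len = CMSG_LEN(sizeof(int) * nfd);
        memcpy(CMSG_DATA(c), fds, sizeof(int) * nfd);

        return sendmsg(sock, &mh, 0);
    }

    int main(void)
    {
        int pair[2], fds[1] = { STDOUT_FILENO };
        char byte = 'x';

        if (socketpair(AF_UNIX, SOCK_STREAM, 0, pair) < 0)
            return 1;
        if (send_with_fds(pair[0], &byte, 1, fds, 1) < 0)
            perror("send_with_fds");
        close(pair[0]);
        close(pair[1]);
        return 0;
    }

Deriving msg_controllen from nfd rather than from sizeof(cmsg) is the substance of the fix: the buffer may be larger than needed, but the kernel only ever sees a control length matching the descriptors actually attached, which is what the patch description says the kernel objects to.
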
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/pulseaudio/pulseaudio_10.0.bb b/import-layers/yocto-poky/meta/recipes-multimedia/pulseaudio/pulseaudio_10.0.bb
index f3a8573..9a34afa 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/pulseaudio/pulseaudio_10.0.bb
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/pulseaudio/pulseaudio_10.0.bb
@@ -3,6 +3,7 @@
 SRC_URI = "http://freedesktop.org/software/pulseaudio/releases/${BP}.tar.xz \
            file://0001-padsp-Make-it-compile-on-musl.patch \
            file://0001-client-conf-Add-allow-autospawn-for-root.patch \
+           file://pulseaudio-discuss-iochannel-don-t-use-variable-length-array-in-union.patch \
            file://volatiles.04_pulse \
 "
 SRC_URI[md5sum] = "4950d2799bf55ab91f6b7f990b7f0971"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/sbc/sbc_1.3.bb b/import-layers/yocto-poky/meta/recipes-multimedia/sbc/sbc_1.3.bb
index 2d7f31b..2bb895d 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/sbc/sbc_1.3.bb
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/sbc/sbc_1.3.bb
@@ -2,8 +2,12 @@
 DESCRIPTION = "Bluetooth low-complexity, subband codec (SBC) library."
 HOMEPAGE = "https://www.bluez.org"
 SECTION = "libs"
-LICENSE = "GPLv2"
+LICENSE = "GPLv2+ & LGPLv2.1+"
+LICENSE_${PN} = "LGPLv2.1+"
+LICENSE_${PN}-examples = "GPLv2+"
 LIC_FILES_CHKSUM = "file://COPYING;md5=12f884d2ae1ff87c09e5b7ccc2c4ca7e \
+                    file://COPYING.LIB;md5=fb504b67c50331fc78734fed90fb0e09 \
+                    file://src/sbcenc.c;beginline=1;endline=24;md5=08e7a70b127f4100ff2cd7d629147d8d \
                     file://sbc/sbc.h;beginline=1;endline=26;md5=0f57d0df22b0d40746bdd29805a4361b"
 
 DEPENDS = "libsndfile1"
@@ -14,3 +18,6 @@
 SRC_URI[sha256sum] = "e61022cf576f14190241e7071753fdacdce5d1dea89ffd704110fc50be689309"
 
 inherit autotools pkgconfig
+
+PACKAGES =+ "${PN}-examples"
+FILES_${PN}-examples += "${bindir}/*"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/speex/speex_1.2.0.bb b/import-layers/yocto-poky/meta/recipes-multimedia/speex/speex_1.2.0.bb
new file mode 100644
index 0000000..19636bb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/speex/speex_1.2.0.bb
@@ -0,0 +1,20 @@
+SUMMARY = "Speech Audio Codec"
+DESCRIPTION = "Speex is an Open Source/Free Software patent-free audio compression format designed for speech."
+HOMEPAGE = "http://www.speex.org"
+SECTION = "libs"
+LICENSE = "BSD"
+LIC_FILES_CHKSUM = "file://COPYING;md5=314649d8ba9dd7045dfb6683f298d0a8 \
+                    file://include/speex/speex.h;beginline=1;endline=34;md5=ef8c8ea4f7198d71cf3509c6ed05ea50"
+DEPENDS = "libogg speexdsp"
+
+SRC_URI = "http://downloads.xiph.org/releases/speex/speex-${PV}.tar.gz"
+UPSTREAM_CHECK_REGEX = "speex-(?P<pver>\d+(\.\d+)+)\.tar"
+
+SRC_URI[md5sum] = "8ab7bb2589110dfaf0ed7fa7757dc49c"
+SRC_URI[sha256sum] = "eaae8af0ac742dc7d542c9439ac72f1f385ce838392dc849cae4536af9210094"
+
+inherit autotools pkgconfig lib_package
+
+EXTRA_OECONF = "\
+        ${@bb.utils.contains('TARGET_FPU', 'soft', '--enable-fixed-point --disable-float-api --disable-vbr', '', d)} \
+"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/speex/speex_1.2rc2.bb b/import-layers/yocto-poky/meta/recipes-multimedia/speex/speex_1.2rc2.bb
deleted file mode 100644
index f7d23db..0000000
--- a/import-layers/yocto-poky/meta/recipes-multimedia/speex/speex_1.2rc2.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-SUMMARY = "Speech Audio Codec"
-DESCRIPTION = "Speex is an Open Source/Free Software patent-free audio compression format designed for speech."
-HOMEPAGE = "http://www.speex.org"
-SECTION = "libs"
-LICENSE = "BSD"
-LIC_FILES_CHKSUM = "file://COPYING;md5=314649d8ba9dd7045dfb6683f298d0a8 \
-                    file://include/speex/speex.h;beginline=1;endline=34;md5=ef8c8ea4f7198d71cf3509c6ed05ea50"
-DEPENDS = "libogg speexdsp"
-
-SRC_URI = "http://downloads.us.xiph.org/releases/speex/speex-${PV}.tar.gz"
-
-SRC_URI[md5sum] = "6ae7db3bab01e1d4b86bacfa8ca33e81"
-SRC_URI[sha256sum] = "caa27c7247ff15c8521c2ae0ea21987c9e9710a8f2d3448e8b79da9806bce891"
-
-inherit autotools pkgconfig lib_package
-
-EXTRA_OECONF = "\
-        ${@bb.utils.contains('TARGET_FPU', 'soft', '--enable-fixed-point --disable-float-api --disable-vbr', '', d)} \
-"
diff --git a/import-layers/yocto-poky/meta/recipes-multimedia/x264/x264_git.bb b/import-layers/yocto-poky/meta/recipes-multimedia/x264/x264_git.bb
index 69f52c2..c5476c7 100644
--- a/import-layers/yocto-poky/meta/recipes-multimedia/x264/x264_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-multimedia/x264/x264_git.bb
@@ -12,6 +12,7 @@
            file://don-t-default-to-cortex-a9-with-neon.patch \
            file://Fix-X32-build-by-disabling-asm.patch \
            "
+UPSTREAM_VERSION_UNKNOWN = "1"
 
 SRCREV = "2b741f81e51f92d053d87a49f59ff1026553a0f6"
 
diff --git a/import-layers/yocto-poky/meta/recipes-sato/images/core-image-sato-sdk-ptest.bb b/import-layers/yocto-poky/meta/recipes-sato/images/core-image-sato-sdk-ptest.bb
index 93e5a4e..531571e 100644
--- a/import-layers/yocto-poky/meta/recipes-sato/images/core-image-sato-sdk-ptest.bb
+++ b/import-layers/yocto-poky/meta/recipes-sato/images/core-image-sato-sdk-ptest.bb
@@ -4,3 +4,8 @@
 
 IMAGE_FEATURES += "ptest-pkgs"
 
+# This image is sufficiently large (~3GB) that it can't actually fit in a live
+# image (which has a 4GB limit), so nullify the overhead factor (1.3x out of the
+# box) and explicitly add just 500MB.
+IMAGE_OVERHEAD_FACTOR = "1.0"
+IMAGE_ROOTFS_EXTRA_SPACE = "524288"
diff --git a/import-layers/yocto-poky/meta/recipes-sato/images/core-image-sato.bb b/import-layers/yocto-poky/meta/recipes-sato/images/core-image-sato.bb
index 99dad11..b897950 100644
--- a/import-layers/yocto-poky/meta/recipes-sato/images/core-image-sato.bb
+++ b/import-layers/yocto-poky/meta/recipes-sato/images/core-image-sato.bb
@@ -8,7 +8,5 @@
 
 inherit core-image
 
-IMAGE_INSTALL += "packagegroup-core-x11-sato-games"
-
 TOOLCHAIN_HOST_TASK_append = " nativesdk-intltool nativesdk-glib-2.0"
 TOOLCHAIN_HOST_TASK_remove_task-populate-sdk-ext = " nativesdk-intltool nativesdk-glib-2.0"
diff --git a/import-layers/yocto-poky/meta/recipes-sato/matchbox-desktop/matchbox-desktop_2.1.bb b/import-layers/yocto-poky/meta/recipes-sato/matchbox-desktop/matchbox-desktop_2.1.bb
deleted file mode 100644
index c9a7b4b..0000000
--- a/import-layers/yocto-poky/meta/recipes-sato/matchbox-desktop/matchbox-desktop_2.1.bb
+++ /dev/null
@@ -1,33 +0,0 @@
-SUMMARY = "Matchbox Window Manager Desktop"
-HOMEPAGE = "http://matchbox-project.org/"
-BUGTRACKER = "http://bugzilla.yoctoproject.org/"
-
-LICENSE = "GPLv2+ & LGPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
-                    file://src/desktop.c;endline=20;md5=36c9bf295e6007f3423095f405af5a2d \
-                    file://src/main.c;endline=19;md5=2044244f97a195c25b7dc602ac7e9a00"
-
-DEPENDS = "gnome-common-native gtk+3 startup-notification dbus"
-SECTION = "x11/wm"
-
-# SRCREV tagged 2.1
-SRCREV = "c8473519a0f37488b8b3e839e275b000cdde0b80"
-SRC_URI = "git://git.yoctoproject.org/${BPN}-2 \
-           file://vfolders/* \
-           "
-
-EXTRA_OECONF = "--enable-startup-notification --with-dbus"
-
-S = "${WORKDIR}/git"
-
-inherit autotools pkgconfig distro_features_check
-
-# The startup-notification requires x11 in DISTRO_FEATURES
-REQUIRED_DISTRO_FEATURES = "x11"
-
-do_install_append() {
-    install -d ${D}${datadir}/matchbox/vfolders/
-    install -m 0644 ${WORKDIR}/vfolders/* ${D}${datadir}/matchbox/vfolders/
-}
-
-FILES_${PN} += "${datadir}/matchbox/vfolders/"
diff --git a/import-layers/yocto-poky/meta/recipes-sato/matchbox-desktop/matchbox-desktop_2.2.bb b/import-layers/yocto-poky/meta/recipes-sato/matchbox-desktop/matchbox-desktop_2.2.bb
new file mode 100644
index 0000000..b0cdfa2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-sato/matchbox-desktop/matchbox-desktop_2.2.bb
@@ -0,0 +1,33 @@
+SUMMARY = "Matchbox Window Manager Desktop"
+HOMEPAGE = "http://matchbox-project.org/"
+BUGTRACKER = "http://bugzilla.yoctoproject.org/"
+
+LICENSE = "GPLv2+ & LGPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
+                    file://src/desktop.c;endline=20;md5=36c9bf295e6007f3423095f405af5a2d \
+                    file://src/main.c;endline=19;md5=2044244f97a195c25b7dc602ac7e9a00"
+
+DEPENDS = "gtk+3 startup-notification dbus"
+SECTION = "x11/wm"
+
+# SRCREV tagged 2.2
+SRCREV = "6bc67d09da4147e5552fe30011a05a2c59d2f777"
+SRC_URI = "git://git.yoctoproject.org/${BPN}-2 \
+           file://vfolders/* \
+           "
+
+EXTRA_OECONF = "--enable-startup-notification --with-dbus"
+
+S = "${WORKDIR}/git"
+
+inherit autotools pkgconfig distro_features_check
+
+# The startup-notification requires x11 in DISTRO_FEATURES
+REQUIRED_DISTRO_FEATURES = "x11"
+
+do_install_append() {
+    install -d ${D}${datadir}/matchbox/vfolders/
+    install -m 0644 ${WORKDIR}/vfolders/* ${D}${datadir}/matchbox/vfolders/
+}
+
+FILES_${PN} += "${datadir}/matchbox/vfolders/"
diff --git a/import-layers/yocto-poky/meta/recipes-sato/packagegroups/packagegroup-core-x11-sato.bb b/import-layers/yocto-poky/meta/recipes-sato/packagegroups/packagegroup-core-x11-sato.bb
index 6fb1b67..97cced7 100644
--- a/import-layers/yocto-poky/meta/recipes-sato/packagegroups/packagegroup-core-x11-sato.bb
+++ b/import-layers/yocto-poky/meta/recipes-sato/packagegroups/packagegroup-core-x11-sato.bb
@@ -19,7 +19,6 @@
     "
 
 NETWORK_MANAGER ?= "connman-gnome"
-NETWORK_MANAGER_libc-uclibc = ""
 
 SUMMARY_${PN}-base = "Sato desktop - base packages"
 RDEPENDS_${PN}-base = "\
diff --git a/import-layers/yocto-poky/meta/recipes-sato/puzzles/files/0001-Use-Wno-error-format-overflow-if-the-compiler-suppor.patch b/import-layers/yocto-poky/meta/recipes-sato/puzzles/files/0001-Use-Wno-error-format-overflow-if-the-compiler-suppor.patch
new file mode 100644
index 0000000..d40a3b1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-sato/puzzles/files/0001-Use-Wno-error-format-overflow-if-the-compiler-suppor.patch
@@ -0,0 +1,32 @@
+From 337799e40350b3db2441cc98f65ec36a74dfb356 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Fri, 21 Apr 2017 12:18:08 -0700
+Subject: [PATCH] Use -Wno-error=format-overflow= if the compiler supports it
+
+we need this warning to be suppressed with gcc7+
+however older compilers dont support it so we need
+a way to disble it only if compiler supports it
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ configure.ac | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/configure.ac b/configure.ac
+index 3a38c95..bb9035e 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -42,7 +42,7 @@ fi
+ if test "x$GCC" = "xyes"; then
+   AC_MSG_CHECKING([for usable gcc warning flags])
+   gccwarningflags=
+-  for flag in -Wall -Werror -std=c89 -pedantic; do
++  for flag in -Wall -Werror -std=c89 -pedantic -Wno-error=format-overflow=; do
+     ac_save_CFLAGS="$CFLAGS"
+     ac_save_LIBS="$LIBS"
+     CFLAGS="$CFLAGS$gccwarningflags $flag $GTK_CFLAGS"
+-- 
+2.12.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-sato/puzzles/puzzles_git.bb b/import-layers/yocto-poky/meta/recipes-sato/puzzles/puzzles_git.bb
index a25c007..decd2a8 100644
--- a/import-layers/yocto-poky/meta/recipes-sato/puzzles/puzzles_git.bb
+++ b/import-layers/yocto-poky/meta/recipes-sato/puzzles/puzzles_git.bb
@@ -14,7 +14,9 @@
            file://0001-Use-labs-instead-of-abs.patch \
            file://0001-palisade-Fix-warnings-with-clang-on-arm.patch \
            file://0001-Clarify-conditions-to-avoid-compiler-errors.patch \
-"
+           file://0001-Use-Wno-error-format-overflow-if-the-compiler-suppor.patch \
+           "
+UPSTREAM_VERSION_UNKNOWN = "1"
 SRCREV = "8dfe5cec31e784e4ece2955ecc8cc35ee7e8fbb3"
 PE = "1"
 PV = "0.0+git${SRCPV}"
diff --git a/import-layers/yocto-poky/meta/recipes-sato/webkit/webkitgtk/0001-OptionsGTK.cmake-drop-the-hardcoded-introspection-gt.patch b/import-layers/yocto-poky/meta/recipes-sato/webkit/webkitgtk/0001-OptionsGTK.cmake-drop-the-hardcoded-introspection-gt.patch
index 93a69c0..0f6eeed 100644
--- a/import-layers/yocto-poky/meta/recipes-sato/webkit/webkitgtk/0001-OptionsGTK.cmake-drop-the-hardcoded-introspection-gt.patch
+++ b/import-layers/yocto-poky/meta/recipes-sato/webkit/webkitgtk/0001-OptionsGTK.cmake-drop-the-hardcoded-introspection-gt.patch
@@ -9,6 +9,8 @@
 through the use of qemu target emulation.
 
 Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+
+Upstream-Status: Pending
 ---
  Source/cmake/OptionsGTK.cmake | 6 ------
  1 file changed, 6 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-sato/webkit/webkitgtk_2.18.5.bb b/import-layers/yocto-poky/meta/recipes-sato/webkit/webkitgtk_2.18.5.bb
deleted file mode 100644
index a64aee2..0000000
--- a/import-layers/yocto-poky/meta/recipes-sato/webkit/webkitgtk_2.18.5.bb
+++ /dev/null
@@ -1,121 +0,0 @@
-SUMMARY = "WebKit web rendering engine for the GTK+ platform"
-HOMEPAGE = "http://www.webkitgtk.org/"
-BUGTRACKER = "http://bugs.webkit.org/"
-
-LICENSE = "BSD & LGPLv2+"
-LIC_FILES_CHKSUM = "file://Source/JavaScriptCore/COPYING.LIB;md5=d0c6d6397a5d84286dda758da57bd691 \
-                    file://Source/WebCore/LICENSE-APPLE;md5=4646f90082c40bcf298c285f8bab0b12 \
-		    file://Source/WebCore/LICENSE-LGPL-2;md5=36357ffde2b64ae177b2494445b79d21 \
-		    file://Source/WebCore/LICENSE-LGPL-2.1;md5=a778a33ef338abbaf8b8a7c36b6eec80 \
-		   "
-
-SRC_URI = "http://www.webkitgtk.org/releases/${BPN}-${PV}.tar.xz \
-           file://0001-FindGObjectIntrospection.cmake-prefix-variables-obta.patch \
-           file://0001-When-building-introspection-files-add-CMAKE_C_FLAGS-.patch \
-           file://0001-OptionsGTK.cmake-drop-the-hardcoded-introspection-gt.patch \
-           file://0001-Fix-racy-parallel-build-of-WebKit2-4.0.gir.patch \
-           file://0001-Tweak-gtkdoc-settings-so-that-gtkdoc-generation-work.patch \
-           file://x32_support.patch \
-           file://cross-compile.patch \
-           file://detect-atomics-during-configure.patch \
-           file://0001-WebKitMacros-Append-to-I-and-not-to-isystem.patch \
-           file://0001-Fix-build-with-musl.patch \
-           "
-
-SRC_URI[md5sum] = "af18c2cfa00cadfd0b4d8db21cab011d"
-SRC_URI[sha256sum] = "0c6d80cc7eb5d32f8063041fa11a1a6f17a29765c2f69c6bc862cd47c2d539b8"
-
-inherit cmake pkgconfig gobject-introspection perlnative distro_features_check upstream-version-is-even gtk-doc
-
-# depends on libxt
-REQUIRED_DISTRO_FEATURES = "x11"
-
-DEPENDS = "zlib libsoup-2.4 curl libxml2 cairo libxslt libxt libidn libgcrypt \
-           gtk+3 gstreamer1.0 gstreamer1.0-plugins-base flex-native gperf-native sqlite3 \
-	   pango icu bison-native gawk intltool-native libwebp \
-	   atk udev harfbuzz jpeg libpng pulseaudio librsvg libtheora libvorbis libxcomposite libxtst \
-	   ruby-native libnotify gstreamer1.0-plugins-bad \
-	   gettext-native glib-2.0 glib-2.0-native libtasn1 \
-          "
-
-PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'x11', 'wayland' ,d)} \
-                   ${@bb.utils.contains('DISTRO_FEATURES', 'opengl', 'webgl opengl', '' ,d)} \
-                   enchant \
-                   libsecret \
-                  "
-
-PACKAGECONFIG[wayland] = "-DENABLE_WAYLAND_TARGET=ON,-DENABLE_WAYLAND_TARGET=OFF,wayland"
-PACKAGECONFIG[x11] = "-DENABLE_X11_TARGET=ON,-DENABLE_X11_TARGET=OFF,virtual/libx11"
-PACKAGECONFIG[geoclue] = "-DENABLE_GEOLOCATION=ON,-DENABLE_GEOLOCATION=OFF,geoclue"
-PACKAGECONFIG[enchant] = "-DENABLE_SPELLCHECK=ON,-DENABLE_SPELLCHECK=OFF,enchant"
-PACKAGECONFIG[gtk2] = "-DENABLE_PLUGIN_PROCESS_GTK2=ON,-DENABLE_PLUGIN_PROCESS_GTK2=OFF,gtk+"
-PACKAGECONFIG[gles2] = "-DENABLE_GLES2=ON,-DENABLE_GLES2=OFF,virtual/libgles2"
-PACKAGECONFIG[webgl] = "-DENABLE_WEBGL=ON,-DENABLE_WEBGL=OFF,virtual/libgl"
-PACKAGECONFIG[opengl] = "-DENABLE_OPENGL=ON,-DENABLE_OPENGL=OFF,virtual/libgl"
-PACKAGECONFIG[libsecret] = "-DUSE_LIBSECRET=ON,-DUSE_LIBSECRET=OFF,libsecret"
-PACKAGECONFIG[libhyphen] = "-DUSE_LIBHYPHEN=ON,-DUSE_LIBHYPHEN=OFF,libhyphen"
-
-EXTRA_OECMAKE = " \
-		-DPORT=GTK \
-		-DCMAKE_BUILD_TYPE=Release \
-		${@bb.utils.contains('GI_DATA_ENABLED', 'True', '-DENABLE_INTROSPECTION=ON', '-DENABLE_INTROSPECTION=OFF', d)} \
-		${@bb.utils.contains('GTKDOC_ENABLED', 'True', '-DENABLE_GTKDOC=ON', '-DENABLE_GTKDOC=OFF', d)} \
-		-DENABLE_MINIBROWSER=ON \
-                -DPYTHON_EXECUTABLE=`which python` \
-		"
-
-# GL/GLES header clash: both define the same thing, differently, on 32 bit x86
-EXTRA_OECMAKE_append_x86 = " -DUSE_GSTREAMER_GL=OFF "
-EXTRA_OECMAKE_append_x86-x32 = " -DUSE_GSTREAMER_GL=OFF "
-
-# Javascript JIT is not supported on powerpc
-EXTRA_OECMAKE_append_powerpc = " -DENABLE_JIT=OFF "
-EXTRA_OECMAKE_append_powerpc64 = " -DENABLE_JIT=OFF "
-
-# ARM JIT code does not build on ARMv4/5/6 anymore
-EXTRA_OECMAKE_append_armv5 = " -DENABLE_JIT=OFF "
-EXTRA_OECMAKE_append_armv6 = " -DENABLE_JIT=OFF "
-EXTRA_OECMAKE_append_armv4 = " -DENABLE_JIT=OFF "
-
-# binutils 2.25.1 has a bug on aarch64:
-# https://sourceware.org/bugzilla/show_bug.cgi?id=18430
-EXTRA_OECMAKE_append_aarch64 = " -DUSE_LD_GOLD=OFF "
-EXTRA_OECMAKE_append_mipsarch = " -DUSE_LD_GOLD=OFF "
-EXTRA_OECMAKE_append_powerpc = " -DUSE_LD_GOLD=OFF "
-EXTRA_OECMAKE_append_toolchain-clang = " -DUSE_LD_GOLD=OFF "
-
-EXTRA_OECMAKE_append_aarch64 = " -DWTF_CPU_ARM64_CORTEXA53=ON"
-
-# JIT not supported on MIPS either
-EXTRA_OECMAKE_append_mipsarch = " -DENABLE_JIT=OFF "
-
-# JIT not supported on X32
-# An attempt was made to upstream JIT support for x32 in
-# https://bugs.webkit.org/show_bug.cgi?id=100450, but this was closed as
-# unresolved due to limited X32 adoption.
-EXTRA_OECMAKE_append_x86-x32 = " -DENABLE_JIT=OFF "
-
-SECURITY_CFLAGS_remove_aarch64 = "-fpie"
-SECURITY_CFLAGS_append_aarch64 = " -fPIE"
-
-FILES_${PN} += "${libdir}/webkit2gtk-4.0/injected-bundle/libwebkit2gtkinjectedbundle.so"
-
-RRECOMMENDS_${PN} += "ca-certificates shared-mime-info"
-
-# http://errors.yoctoproject.org/Errors/Details/20370/
-ARM_INSTRUCTION_SET_armv4 = "arm"
-ARM_INSTRUCTION_SET_armv5 = "arm"
-ARM_INSTRUCTION_SET_armv6 = "arm"
-
-# https://bugzilla.yoctoproject.org/show_bug.cgi?id=9474
-# https://bugs.webkit.org/show_bug.cgi?id=159880
-# JSC JIT can build on ARMv7 with -marm, but doesn't work on runtime.
-# Upstream only tests regularly the JSC JIT on ARMv7 with Thumb2 (-mthumb).
-ARM_INSTRUCTION_SET_armv7a = "thumb"
-ARM_INSTRUCTION_SET_armv7r = "thumb"
-ARM_INSTRUCTION_SET_armv7ve = "thumb"
-
-# qemu: uncaught target signal 11 (Segmentation fault) - core dumped
-# Segmentation fault
-GI_DATA_ENABLED_armv7a = "False"
-GI_DATA_ENABLED_armv7ve = "False"
diff --git a/import-layers/yocto-poky/meta/recipes-sato/webkit/webkitgtk_2.18.6.bb b/import-layers/yocto-poky/meta/recipes-sato/webkit/webkitgtk_2.18.6.bb
new file mode 100644
index 0000000..ff0ff8f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-sato/webkit/webkitgtk_2.18.6.bb
@@ -0,0 +1,121 @@
+SUMMARY = "WebKit web rendering engine for the GTK+ platform"
+HOMEPAGE = "http://www.webkitgtk.org/"
+BUGTRACKER = "http://bugs.webkit.org/"
+
+LICENSE = "BSD & LGPLv2+"
+LIC_FILES_CHKSUM = "file://Source/JavaScriptCore/COPYING.LIB;md5=d0c6d6397a5d84286dda758da57bd691 \
+                    file://Source/WebCore/LICENSE-APPLE;md5=4646f90082c40bcf298c285f8bab0b12 \
+		    file://Source/WebCore/LICENSE-LGPL-2;md5=36357ffde2b64ae177b2494445b79d21 \
+		    file://Source/WebCore/LICENSE-LGPL-2.1;md5=a778a33ef338abbaf8b8a7c36b6eec80 \
+		   "
+
+SRC_URI = "http://www.webkitgtk.org/releases/${BPN}-${PV}.tar.xz \
+           file://0001-FindGObjectIntrospection.cmake-prefix-variables-obta.patch \
+           file://0001-When-building-introspection-files-add-CMAKE_C_FLAGS-.patch \
+           file://0001-OptionsGTK.cmake-drop-the-hardcoded-introspection-gt.patch \
+           file://0001-Fix-racy-parallel-build-of-WebKit2-4.0.gir.patch \
+           file://0001-Tweak-gtkdoc-settings-so-that-gtkdoc-generation-work.patch \
+           file://x32_support.patch \
+           file://cross-compile.patch \
+           file://detect-atomics-during-configure.patch \
+           file://0001-WebKitMacros-Append-to-I-and-not-to-isystem.patch \
+           file://0001-Fix-build-with-musl.patch \
+           "
+
+SRC_URI[md5sum] = "c1a548595135ee75ad3bf2e18ac83112"
+SRC_URI[sha256sum] = "93912cc2f40f12e452be1ca4babdbdaac0ec4f828d441257a6b06c2963bbac3c"
+
+inherit cmake pkgconfig gobject-introspection perlnative distro_features_check upstream-version-is-even gtk-doc
+
+# depends on libxt
+REQUIRED_DISTRO_FEATURES = "x11"
+
+DEPENDS = "zlib libsoup-2.4 curl libxml2 cairo libxslt libxt libidn libgcrypt \
+           gtk+3 gstreamer1.0 gstreamer1.0-plugins-base flex-native gperf-native sqlite3 \
+	   pango icu bison-native gawk intltool-native libwebp \
+	   atk udev harfbuzz jpeg libpng pulseaudio librsvg libtheora libvorbis libxcomposite libxtst \
+	   ruby-native libnotify gstreamer1.0-plugins-bad \
+	   gettext-native glib-2.0 glib-2.0-native libtasn1 \
+          "
+
+PACKAGECONFIG ??= "${@bb.utils.contains('DISTRO_FEATURES', 'x11', 'x11', 'wayland' ,d)} \
+                   ${@bb.utils.contains('DISTRO_FEATURES', 'opengl', 'webgl opengl', '' ,d)} \
+                   enchant \
+                   libsecret \
+                  "
+
+PACKAGECONFIG[wayland] = "-DENABLE_WAYLAND_TARGET=ON,-DENABLE_WAYLAND_TARGET=OFF,wayland"
+PACKAGECONFIG[x11] = "-DENABLE_X11_TARGET=ON,-DENABLE_X11_TARGET=OFF,virtual/libx11"
+PACKAGECONFIG[geoclue] = "-DENABLE_GEOLOCATION=ON,-DENABLE_GEOLOCATION=OFF,geoclue"
+PACKAGECONFIG[enchant] = "-DENABLE_SPELLCHECK=ON,-DENABLE_SPELLCHECK=OFF,enchant"
+PACKAGECONFIG[gtk2] = "-DENABLE_PLUGIN_PROCESS_GTK2=ON,-DENABLE_PLUGIN_PROCESS_GTK2=OFF,gtk+"
+PACKAGECONFIG[gles2] = "-DENABLE_GLES2=ON,-DENABLE_GLES2=OFF,virtual/libgles2"
+PACKAGECONFIG[webgl] = "-DENABLE_WEBGL=ON,-DENABLE_WEBGL=OFF,virtual/libgl"
+PACKAGECONFIG[opengl] = "-DENABLE_OPENGL=ON,-DENABLE_OPENGL=OFF,virtual/libgl"
+PACKAGECONFIG[libsecret] = "-DUSE_LIBSECRET=ON,-DUSE_LIBSECRET=OFF,libsecret"
+PACKAGECONFIG[libhyphen] = "-DUSE_LIBHYPHEN=ON,-DUSE_LIBHYPHEN=OFF,libhyphen"
+
+EXTRA_OECMAKE = " \
+		-DPORT=GTK \
+		-DCMAKE_BUILD_TYPE=Release \
+		${@bb.utils.contains('GI_DATA_ENABLED', 'True', '-DENABLE_INTROSPECTION=ON', '-DENABLE_INTROSPECTION=OFF', d)} \
+		${@bb.utils.contains('GTKDOC_ENABLED', 'True', '-DENABLE_GTKDOC=ON', '-DENABLE_GTKDOC=OFF', d)} \
+		-DENABLE_MINIBROWSER=ON \
+                -DPYTHON_EXECUTABLE=`which python` \
+		"
+
+# GL/GLES header clash: both define the same thing, differently, on 32 bit x86
+EXTRA_OECMAKE_append_x86 = " -DUSE_GSTREAMER_GL=OFF "
+EXTRA_OECMAKE_append_x86-x32 = " -DUSE_GSTREAMER_GL=OFF "
+
+# Javascript JIT is not supported on powerpc
+EXTRA_OECMAKE_append_powerpc = " -DENABLE_JIT=OFF "
+EXTRA_OECMAKE_append_powerpc64 = " -DENABLE_JIT=OFF "
+
+# ARM JIT code does not build on ARMv4/5/6 anymore
+EXTRA_OECMAKE_append_armv5 = " -DENABLE_JIT=OFF "
+EXTRA_OECMAKE_append_armv6 = " -DENABLE_JIT=OFF "
+EXTRA_OECMAKE_append_armv4 = " -DENABLE_JIT=OFF "
+
+# binutils 2.25.1 has a bug on aarch64:
+# https://sourceware.org/bugzilla/show_bug.cgi?id=18430
+EXTRA_OECMAKE_append_aarch64 = " -DUSE_LD_GOLD=OFF "
+EXTRA_OECMAKE_append_mipsarch = " -DUSE_LD_GOLD=OFF "
+EXTRA_OECMAKE_append_powerpc = " -DUSE_LD_GOLD=OFF "
+EXTRA_OECMAKE_append_toolchain-clang = " -DUSE_LD_GOLD=OFF "
+
+EXTRA_OECMAKE_append_aarch64 = " -DWTF_CPU_ARM64_CORTEXA53=ON"
+
+# JIT not supported on MIPS either
+EXTRA_OECMAKE_append_mipsarch = " -DENABLE_JIT=OFF "
+
+# JIT not supported on X32
+# An attempt was made to upstream JIT support for x32 in
+# https://bugs.webkit.org/show_bug.cgi?id=100450, but this was closed as
+# unresolved due to limited X32 adoption.
+EXTRA_OECMAKE_append_x86-x32 = " -DENABLE_JIT=OFF "
+
+SECURITY_CFLAGS_remove_aarch64 = "-fpie"
+SECURITY_CFLAGS_append_aarch64 = " -fPIE"
+
+FILES_${PN} += "${libdir}/webkit2gtk-4.0/injected-bundle/libwebkit2gtkinjectedbundle.so"
+
+RRECOMMENDS_${PN} += "ca-certificates shared-mime-info"
+
+# http://errors.yoctoproject.org/Errors/Details/20370/
+ARM_INSTRUCTION_SET_armv4 = "arm"
+ARM_INSTRUCTION_SET_armv5 = "arm"
+ARM_INSTRUCTION_SET_armv6 = "arm"
+
+# https://bugzilla.yoctoproject.org/show_bug.cgi?id=9474
+# https://bugs.webkit.org/show_bug.cgi?id=159880
+# JSC JIT can build on ARMv7 with -marm, but doesn't work on runtime.
+# Upstream only tests regularly the JSC JIT on ARMv7 with Thumb2 (-mthumb).
+ARM_INSTRUCTION_SET_armv7a = "thumb"
+ARM_INSTRUCTION_SET_armv7r = "thumb"
+ARM_INSTRUCTION_SET_armv7ve = "thumb"
+
+# qemu: uncaught target signal 11 (Segmentation fault) - core dumped
+# Segmentation fault
+GI_DATA_ENABLED_armv7a = "False"
+GI_DATA_ENABLED_armv7ve = "False"
diff --git a/import-layers/yocto-poky/meta/recipes-support/apr/apr-util_1.5.4.bb b/import-layers/yocto-poky/meta/recipes-support/apr/apr-util_1.5.4.bb
deleted file mode 100644
index 2b8676f..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/apr/apr-util_1.5.4.bb
+++ /dev/null
@@ -1,95 +0,0 @@
-SUMMARY = "Apache Portable Runtime (APR) companion library"
-HOMEPAGE = "http://apr.apache.org/"
-SECTION = "libs"
-DEPENDS = "apr expat gdbm"
-
-BBCLASSEXTEND = "native nativesdk"
-
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=519e0a18e03f7c023070568c14b077bb \
-                    file://include/apu_version.h;endline=17;md5=806685a84e71f10c80144c48eb35df42"
-
-SRC_URI = "${APACHE_MIRROR}/apr/${BPN}-${PV}.tar.gz \
-           file://configfix.patch \
-           file://configure_fixes.patch \
-           file://run-ptest \
-"
-
-SRC_URI[md5sum] = "866825c04da827c6e5f53daff5569f42"
-SRC_URI[sha256sum] = "976a12a59bc286d634a21d7be0841cc74289ea9077aa1af46be19d1a6e844c19"
-
-EXTRA_OECONF = "--with-apr=${STAGING_BINDIR_CROSS}/apr-1-config \ 
-		--without-odbc \
-		--without-pgsql \
-		--with-dbm=gdbm \
-		--with-gdbm=${STAGING_DIR_HOST}${prefix} \
-		--without-sqlite2 \
-		--with-expat=${STAGING_DIR_HOST}${prefix}"
-
-
-inherit autotools lib_package binconfig
-
-PR = "r1"
-
-OE_BINCONFIG_EXTRA_MANGLE = " -e 's:location=source:location=installed:'"
-
-do_configure_append() {
-	if [ "${CLASSOVERRIDE}" = "class-target" ]; then
-		cp ${STAGING_DATADIR}/apr/apr_rules.mk ${B}/build/rules.mk
-	fi
-}
-do_configure_prepend_class-native() {
-	mkdir ${B}/build
-	cp ${STAGING_DATADIR_NATIVE}/apr/apr_rules.mk ${B}/build/rules.mk
-}
-do_configure_append_class-native() {
-	sed -i "s#LIBTOOL=\$(SHELL) \$(apr_builddir)#LIBTOOL=\$(SHELL) ${STAGING_BINDIR_NATIVE}#" ${B}/build/rules.mk
-	# sometimes there isn't SHELL
-	sed -i "s#LIBTOOL=\$(apr_builddir)#LIBTOOL=${STAGING_BINDIR_NATIVE}#" ${B}/build/rules.mk
-}
-
-do_configure_prepend_class-nativesdk() {
-	cp ${STAGING_DATADIR}/apr/apr_rules.mk ${S}/build/rules.mk
-}
-
-do_configure_append_class-nativesdk() {
-	sed -i "s#\(apr_builddir\)=.*#\1=${STAGING_DATADIR}/build-1#" ${B}/build/rules.mk
-	sed -i "s#\(apr_builders\)=.*#\1=${STAGING_DATADIR}/build-1#" ${B}/build/rules.mk
-	sed -i "s#\(top_builddir\)=.*#\1=${STAGING_DATADIR}/build-1#" ${B}/build/rules.mk
-	sed -i "s#\(LIBTOOL=\$(apr_builddir)\).*#\1/libtool#" ${B}/build/rules.mk
-}
-
-do_install_append_class-target() {
-	sed -i -e 's,${STAGING_DIR_HOST},,g' \
-	       -e 's,APU_SOURCE_DIR=.*,APR_SOURCE_DIR=,g' \
-	       -e 's,APU_BUILD_DIR=.*,APR_BUILD_DIR=,g' ${D}${bindir}/apu-1-config
-}
-
-PACKAGECONFIG ??= "crypto"
-PACKAGECONFIG[ldap] = "--with-ldap,--without-ldap,openldap"
-PACKAGECONFIG[crypto] = "--with-openssl=${STAGING_DIR_HOST}${prefix} --with-crypto,--without-crypto,openssl"
-PACKAGECONFIG[sqlite3] = "--with-sqlite3=${STAGING_DIR_HOST}${prefix},--without-sqlite3,sqlite3"
-
-#files ${libdir}/apr-util-1/*.so are not symlinks but loadable modules thus they are packaged in ${PN}
-FILES_${PN}     += "${libdir}/apr-util-1/apr*${SOLIBS} ${libdir}/apr-util-1/apr*${SOLIBSDEV}"
-FILES_${PN}-dev += "${libdir}/aprutil.exp ${libdir}/apr-util-1/*.la"
-FILES_${PN}-staticdev += "${libdir}/apr-util-1/*.a"
-
-INSANE_SKIP_${PN} += "dev-so"
-
-inherit ptest
-
-RDEPENDS_${PN}-ptest_append_libc-glibc = " glibc-gconv-iso8859-1 glibc-gconv-iso8859-2 glibc-gconv-utf-7"
-
-do_compile_ptest() {
-	cd ${B}/test
-	oe_runmake
-}
-
-do_install_ptest() {
-	t=${D}${PTEST_PATH}/test
-	mkdir $t
-	for i in testall data; do \
-	  cp -r ${B}/test/$i $t; \
-	done
-}
diff --git a/import-layers/yocto-poky/meta/recipes-support/apr/apr-util_1.6.0.bb b/import-layers/yocto-poky/meta/recipes-support/apr/apr-util_1.6.0.bb
new file mode 100644
index 0000000..748d196
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/apr/apr-util_1.6.0.bb
@@ -0,0 +1,95 @@
+SUMMARY = "Apache Portable Runtime (APR) companion library"
+HOMEPAGE = "http://apr.apache.org/"
+SECTION = "libs"
+DEPENDS = "apr expat gdbm"
+
+BBCLASSEXTEND = "native nativesdk"
+
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=158aa0b1efe0c12f23d4b007ddb9a5db \
+                    file://include/apu_version.h;endline=17;md5=806685a84e71f10c80144c48eb35df42"
+
+SRC_URI = "${APACHE_MIRROR}/apr/${BPN}-${PV}.tar.gz \
+           file://configfix.patch \
+           file://configure_fixes.patch \
+           file://run-ptest \
+"
+
+SRC_URI[md5sum] = "3b03dbff60728a4f4c33f5d929e8b35a"
+SRC_URI[sha256sum] = "483ef4d59e6ac9a36c7d3fd87ad7b9db7ad8ae29c06b9dd8ff22dda1cc416389"
+
+EXTRA_OECONF = "--with-apr=${STAGING_BINDIR_CROSS}/apr-1-config \ 
+		--without-odbc \
+		--without-pgsql \
+		--with-dbm=gdbm \
+		--with-gdbm=${STAGING_DIR_HOST}${prefix} \
+		--without-sqlite2 \
+		--with-expat=${STAGING_DIR_HOST}${prefix}"
+
+
+inherit autotools lib_package binconfig
+
+PR = "r1"
+
+OE_BINCONFIG_EXTRA_MANGLE = " -e 's:location=source:location=installed:'"
+
+do_configure_append() {
+	if [ "${CLASSOVERRIDE}" = "class-target" ]; then
+		cp ${STAGING_DATADIR}/apr/apr_rules.mk ${B}/build/rules.mk
+	fi
+}
+do_configure_prepend_class-native() {
+	mkdir ${B}/build
+	cp ${STAGING_DATADIR_NATIVE}/apr/apr_rules.mk ${B}/build/rules.mk
+}
+do_configure_append_class-native() {
+	sed -i "s#LIBTOOL=\$(SHELL) \$(apr_builddir)#LIBTOOL=\$(SHELL) ${STAGING_BINDIR_NATIVE}#" ${B}/build/rules.mk
+	# sometimes there isn't SHELL
+	sed -i "s#LIBTOOL=\$(apr_builddir)#LIBTOOL=${STAGING_BINDIR_NATIVE}#" ${B}/build/rules.mk
+}
+
+do_configure_prepend_class-nativesdk() {
+	cp ${STAGING_DATADIR}/apr/apr_rules.mk ${S}/build/rules.mk
+}
+
+do_configure_append_class-nativesdk() {
+	sed -i "s#\(apr_builddir\)=.*#\1=${STAGING_DATADIR}/build-1#" ${B}/build/rules.mk
+	sed -i "s#\(apr_builders\)=.*#\1=${STAGING_DATADIR}/build-1#" ${B}/build/rules.mk
+	sed -i "s#\(top_builddir\)=.*#\1=${STAGING_DATADIR}/build-1#" ${B}/build/rules.mk
+	sed -i "s#\(LIBTOOL=\$(apr_builddir)\).*#\1/libtool#" ${B}/build/rules.mk
+}
+
+do_install_append_class-target() {
+	sed -i -e 's,${STAGING_DIR_HOST},,g' \
+	       -e 's,APU_SOURCE_DIR=.*,APR_SOURCE_DIR=,g' \
+	       -e 's,APU_BUILD_DIR=.*,APR_BUILD_DIR=,g' ${D}${bindir}/apu-1-config
+}
+
+PACKAGECONFIG ??= "crypto"
+PACKAGECONFIG[ldap] = "--with-ldap,--without-ldap,openldap"
+PACKAGECONFIG[crypto] = "--with-openssl=${STAGING_DIR_HOST}${prefix} --with-crypto,--without-crypto,openssl"
+PACKAGECONFIG[sqlite3] = "--with-sqlite3=${STAGING_DIR_HOST}${prefix},--without-sqlite3,sqlite3"
+
+#files ${libdir}/apr-util-1/*.so are not symlinks but loadable modules thus they are packaged in ${PN}
+FILES_${PN}     += "${libdir}/apr-util-1/apr*${SOLIBS} ${libdir}/apr-util-1/apr*${SOLIBSDEV}"
+FILES_${PN}-dev += "${libdir}/aprutil.exp ${libdir}/apr-util-1/*.la"
+FILES_${PN}-staticdev += "${libdir}/apr-util-1/*.a"
+
+INSANE_SKIP_${PN} += "dev-so"
+
+inherit ptest
+
+RDEPENDS_${PN}-ptest_append_libc-glibc = " glibc-gconv-iso8859-1 glibc-gconv-iso8859-2 glibc-gconv-utf-7"
+
+do_compile_ptest() {
+	cd ${B}/test
+	oe_runmake
+}
+
+do_install_ptest() {
+	t=${D}${PTEST_PATH}/test
+	mkdir $t
+	for i in testall data; do \
+	  cp -r ${B}/test/$i $t; \
+	done
+}
diff --git a/import-layers/yocto-poky/meta/recipes-support/apr/apr/0001-apr-fix-off_t-size-doesn-t-match-in-glibc-when-cross.patch b/import-layers/yocto-poky/meta/recipes-support/apr/apr/0001-apr-fix-off_t-size-doesn-t-match-in-glibc-when-cross.patch
index 1237142..c5e92ac 100644
--- a/import-layers/yocto-poky/meta/recipes-support/apr/apr/0001-apr-fix-off_t-size-doesn-t-match-in-glibc-when-cross.patch
+++ b/import-layers/yocto-poky/meta/recipes-support/apr/apr/0001-apr-fix-off_t-size-doesn-t-match-in-glibc-when-cross.patch
@@ -27,6 +27,8 @@
 Change the above correspondingly.
 
 Signed-off-by: Dengke Du <dengke.du@windriver.com>
+
+Upstream-Status: Pending
 ---
  configure.in | 8 ++++----
  1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/import-layers/yocto-poky/meta/recipes-support/apr/apr_1.5.2.bb b/import-layers/yocto-poky/meta/recipes-support/apr/apr_1.5.2.bb
deleted file mode 100644
index 992b561..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/apr/apr_1.5.2.bb
+++ /dev/null
@@ -1,114 +0,0 @@
-SUMMARY = "Apache Portable Runtime (APR) library"
-HOMEPAGE = "http://apr.apache.org/"
-SECTION = "libs"
-DEPENDS = "util-linux"
-
-LICENSE = "Apache-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=4dfd4cd216828c8cae5de5a12f3844c8 \
-                    file://include/apr_lib.h;endline=17;md5=ee42fa7575dc40580a9e01c1b75fae96"
-
-BBCLASSEXTEND = "native nativesdk"
-
-SRC_URI = "${APACHE_MIRROR}/apr/${BPN}-${PV}.tar.bz2 \
-           file://configure_fixes.patch \
-           file://cleanup.patch \
-           file://configfix.patch \
-           file://run-ptest \
-           file://upgrade-and-fix-1.5.1.patch \
-           file://Fix-packet-discards-HTTP-redirect.patch \
-           file://configure.in-fix-LTFLAGS-to-make-it-work-with-ccache.patch \
-           file://0001-apr-fix-off_t-size-doesn-t-match-in-glibc-when-cross.patch \
-           file://0002-explicitly-link-libapr-against-phtread-to-make-gold-.patch \
-"
-
-SRC_URI[md5sum] = "4e9769f3349fe11fc0a5e1b224c236aa"
-SRC_URI[sha256sum] = "7d03ed29c22a7152be45b8e50431063736df9e1daa1ddf93f6a547ba7a28f67a"
-
-inherit autotools-brokensep lib_package binconfig multilib_header ptest
-
-OE_BINCONFIG_EXTRA_MANGLE = " -e 's:location=source:location=installed:'"
-
-# Added to fix some issues with cmake. Refer to https://github.com/bmwcarit/meta-ros/issues/68#issuecomment-19896928
-CACHED_CONFIGUREVARS += "apr_cv_mutex_recursive=yes"
-
-# Also suppress trying to use sctp.
-#
-CACHED_CONFIGUREVARS += "ac_cv_header_netinet_sctp_h=no ac_cv_header_netinet_sctp_uio_h=no"
-
-# Otherwise libtool fails to compile apr-utils
-# x86_64-linux-libtool: compile: unable to infer tagged configuration
-# x86_64-linux-libtool:   error: specify a tag with '--tag'
-CCACHE = ""
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)}"
-PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
-
-do_configure_prepend() {
-	# Avoid absolute paths for grep since it causes failures
-	# when using sstate between different hosts with different
-	# install paths for grep.
-	export GREP="grep"
-
-	cd ${S}
-	./buildconf
-}
-
-FILES_${PN}-dev += "${libdir}/apr.exp ${datadir}/build-1/*"
-RDEPENDS_${PN}-dev += "bash"
-
-#for some reason, build/libtool.m4 handled by buildconf still be overwritten
-#when autoconf, so handle it again.
-do_configure_append() {
-	sed -i -e 's/LIBTOOL=\(.*\)top_build/LIBTOOL=\1apr_build/' ${S}/build/libtool.m4
-	sed -i -e 's/LIBTOOL=\(.*\)top_build/LIBTOOL=\1apr_build/' ${S}/build/apr_rules.mk
-}
-
-do_install_append() {
-	oe_multilib_header apr.h
-	install -d ${D}${datadir}/apr
-}
-
-do_install_append_class-target() {
-	sed -i -e 's,${STAGING_DIR_HOST},,g' ${D}${datadir}/build-1/apr_rules.mk
-	sed -i -e 's,${STAGING_DIR_HOST},,g' \
-	       -e 's,APR_SOURCE_DIR=.*,APR_SOURCE_DIR=,g' \
-	       -e 's,APR_BUILD_DIR=.*,APR_BUILD_DIR=,g' ${D}${bindir}/apr-1-config
-}
-
-SSTATE_SCAN_FILES += "apr_rules.mk libtool"
-
-SYSROOT_PREPROCESS_FUNCS += "apr_sysroot_preprocess"
-
-apr_sysroot_preprocess () {
-	d=${SYSROOT_DESTDIR}${datadir}/apr
-	install -d $d/
-	cp ${S}/build/apr_rules.mk $d/
-	sed -i s,apr_builddir=.*,apr_builddir=,g $d/apr_rules.mk
-	sed -i s,apr_builders=.*,apr_builders=,g $d/apr_rules.mk
-	sed -i s,LIBTOOL=.*,LIBTOOL=${HOST_SYS}-libtool,g $d/apr_rules.mk
-	sed -i s,\$\(apr_builders\),${STAGING_DATADIR}/apr/,g $d/apr_rules.mk
-	cp ${S}/build/mkdir.sh $d/
-	cp ${S}/build/make_exports.awk $d/
-	cp ${S}/build/make_var_export.awk $d/
-	cp ${S}/${HOST_SYS}-libtool ${SYSROOT_DESTDIR}${datadir}/build-1/libtool
-}
-
-do_compile_ptest() {
-	cd ${S}/test
-	oe_runmake
-}
-
-do_install_ptest() {
-	t=${D}${PTEST_PATH}/test
-	mkdir -p $t/.libs
-	cp -r ${S}/test/data $t/
-	cp -r ${S}/test/.libs/*.so $t/.libs/
-	cp ${S}/test/proc_child $t/
-	cp ${S}/test/readchild $t/
-	cp ${S}/test/sockchild $t/
-	cp ${S}/test/sockperf $t/
-	cp ${S}/test/testall $t/
-	cp ${S}/test/tryread $t/
-}
-
-export CONFIG_SHELL="/bin/bash"
diff --git a/import-layers/yocto-poky/meta/recipes-support/apr/apr_1.6.2.bb b/import-layers/yocto-poky/meta/recipes-support/apr/apr_1.6.2.bb
new file mode 100644
index 0000000..e2eed53
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/apr/apr_1.6.2.bb
@@ -0,0 +1,114 @@
+SUMMARY = "Apache Portable Runtime (APR) library"
+HOMEPAGE = "http://apr.apache.org/"
+SECTION = "libs"
+DEPENDS = "util-linux"
+
+LICENSE = "Apache-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=4dfd4cd216828c8cae5de5a12f3844c8 \
+                    file://include/apr_lib.h;endline=17;md5=ee42fa7575dc40580a9e01c1b75fae96"
+
+BBCLASSEXTEND = "native nativesdk"
+
+SRC_URI = "${APACHE_MIRROR}/apr/${BPN}-${PV}.tar.bz2 \
+           file://configure_fixes.patch \
+           file://cleanup.patch \
+           file://configfix.patch \
+           file://run-ptest \
+           file://upgrade-and-fix-1.5.1.patch \
+           file://Fix-packet-discards-HTTP-redirect.patch \
+           file://configure.in-fix-LTFLAGS-to-make-it-work-with-ccache.patch \
+           file://0001-apr-fix-off_t-size-doesn-t-match-in-glibc-when-cross.patch \
+           file://0002-explicitly-link-libapr-against-phtread-to-make-gold-.patch \
+"
+
+SRC_URI[md5sum] = "e81a851967c79b5ce9bfbc909e4bf735"
+SRC_URI[sha256sum] = "09109cea377bab0028bba19a92b5b0e89603df9eab05c0f7dbd4dd83d48dcebd"
+
+inherit autotools-brokensep lib_package binconfig multilib_header ptest
+
+OE_BINCONFIG_EXTRA_MANGLE = " -e 's:location=source:location=installed:'"
+
+# Added to fix some issues with cmake. Refer to https://github.com/bmwcarit/meta-ros/issues/68#issuecomment-19896928
+CACHED_CONFIGUREVARS += "apr_cv_mutex_recursive=yes"
+
+# Also suppress trying to use sctp.
+#
+CACHED_CONFIGUREVARS += "ac_cv_header_netinet_sctp_h=no ac_cv_header_netinet_sctp_uio_h=no"
+
+# Otherwise libtool fails to compile apr-utils
+# x86_64-linux-libtool: compile: unable to infer tagged configuration
+# x86_64-linux-libtool:   error: specify a tag with '--tag'
+CCACHE = ""
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)}"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
+
+do_configure_prepend() {
+	# Avoid absolute paths for grep since it causes failures
+	# when using sstate between different hosts with different
+	# install paths for grep.
+	export GREP="grep"
+
+	cd ${S}
+	./buildconf
+}
+
+FILES_${PN}-dev += "${libdir}/apr.exp ${datadir}/build-1/*"
+RDEPENDS_${PN}-dev += "bash"
+
+#for some reason, build/libtool.m4 handled by buildconf still be overwritten
+#when autoconf, so handle it again.
+do_configure_append() {
+	sed -i -e 's/LIBTOOL=\(.*\)top_build/LIBTOOL=\1apr_build/' ${S}/build/libtool.m4
+	sed -i -e 's/LIBTOOL=\(.*\)top_build/LIBTOOL=\1apr_build/' ${S}/build/apr_rules.mk
+}
+
+do_install_append() {
+	oe_multilib_header apr.h
+	install -d ${D}${datadir}/apr
+}
+
+do_install_append_class-target() {
+	sed -i -e 's,${STAGING_DIR_HOST},,g' ${D}${datadir}/build-1/apr_rules.mk
+	sed -i -e 's,${STAGING_DIR_HOST},,g' \
+	       -e 's,APR_SOURCE_DIR=.*,APR_SOURCE_DIR=,g' \
+	       -e 's,APR_BUILD_DIR=.*,APR_BUILD_DIR=,g' ${D}${bindir}/apr-1-config
+}
+
+SSTATE_SCAN_FILES += "apr_rules.mk libtool"
+
+SYSROOT_PREPROCESS_FUNCS += "apr_sysroot_preprocess"
+
+apr_sysroot_preprocess () {
+	d=${SYSROOT_DESTDIR}${datadir}/apr
+	install -d $d/
+	cp ${S}/build/apr_rules.mk $d/
+	sed -i s,apr_builddir=.*,apr_builddir=,g $d/apr_rules.mk
+	sed -i s,apr_builders=.*,apr_builders=,g $d/apr_rules.mk
+	sed -i s,LIBTOOL=.*,LIBTOOL=${HOST_SYS}-libtool,g $d/apr_rules.mk
+	sed -i s,\$\(apr_builders\),${STAGING_DATADIR}/apr/,g $d/apr_rules.mk
+	cp ${S}/build/mkdir.sh $d/
+	cp ${S}/build/make_exports.awk $d/
+	cp ${S}/build/make_var_export.awk $d/
+	cp ${S}/${HOST_SYS}-libtool ${SYSROOT_DESTDIR}${datadir}/build-1/libtool
+}
+
+do_compile_ptest() {
+	cd ${S}/test
+	oe_runmake
+}
+
+do_install_ptest() {
+	t=${D}${PTEST_PATH}/test
+	mkdir -p $t/.libs
+	cp -r ${S}/test/data $t/
+	cp -r ${S}/test/.libs/*.so $t/.libs/
+	cp ${S}/test/proc_child $t/
+	cp ${S}/test/readchild $t/
+	cp ${S}/test/sockchild $t/
+	cp ${S}/test/sockperf $t/
+	cp ${S}/test/testall $t/
+	cp ${S}/test/tryread $t/
+}
+
+export CONFIG_SHELL="/bin/bash"
diff --git a/import-layers/yocto-poky/meta/recipes-support/argp-standalone/argp-standalone_1.3.bb b/import-layers/yocto-poky/meta/recipes-support/argp-standalone/argp-standalone_1.3.bb
index bd0cfdf..21bbcab 100644
--- a/import-layers/yocto-poky/meta/recipes-support/argp-standalone/argp-standalone_1.3.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/argp-standalone/argp-standalone_1.3.bb
@@ -19,6 +19,9 @@
 
 CFLAGS += "-fPIC -U__OPTIMIZE__"
 
+RDEPENDS_${PN}-dev = ""
+RDEPENDS_${PN}-staticdev = ""
+
 do_install() {
 	install -D -m 0644 ${B}/libargp.a ${D}${libdir}/libargp.a
 	install -D -m 0644 ${S}/argp.h ${D}${includedir}/argp.h
diff --git a/import-layers/yocto-poky/meta/recipes-support/aspell/aspell/gcc7.patch b/import-layers/yocto-poky/meta/recipes-support/aspell/aspell/gcc7.patch
new file mode 100644
index 0000000..6ffd077
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/aspell/aspell/gcc7.patch
@@ -0,0 +1,40 @@
+From 8089fa02122fed0a6394eba14bbedcb1d18e2384 Mon Sep 17 00:00:00 2001
+From: Kevin Atkinson <kevina@gnu.org>
+Date: Thu, 29 Dec 2016 00:50:31 -0500
+Subject: [PATCH] Compile Fixes for GCC 7.
+
+Closes #519.
+---
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+Upstream-Status: Backport
+
+ modules/filter/tex.cpp | 2 +-
+ prog/check_funs.cpp    | 2 +-
+ 2 files changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/modules/filter/tex.cpp b/modules/filter/tex.cpp
+index a979539..19ab63c 100644
+--- a/modules/filter/tex.cpp
++++ b/modules/filter/tex.cpp
+@@ -174,7 +174,7 @@ namespace {
+ 
+     if (c == '{') {
+ 
+-      if (top.in_what == Parm || top.in_what == Opt || top.do_check == '\0')
++      if (top.in_what == Parm || top.in_what == Opt || *top.do_check == '\0')
+ 	push_command(Parm);
+ 
+       top.in_what = Parm;
+diff --git a/prog/check_funs.cpp b/prog/check_funs.cpp
+index db54f3d..89ee09d 100644
+--- a/prog/check_funs.cpp
++++ b/prog/check_funs.cpp
+@@ -647,7 +647,7 @@ static void print_truncate(FILE * out, const char * word, int width) {
+     }
+   }
+   if (i == width-1) {
+-    if (word == '\0')
++    if (*word == '\0')
+       put(out,' ');
+     else if (word[len] == '\0')
+       put(out, word, len);
diff --git a/import-layers/yocto-poky/meta/recipes-support/aspell/aspell_0.60.6.1.bb b/import-layers/yocto-poky/meta/recipes-support/aspell/aspell_0.60.6.1.bb
index 5a23754..19a7155 100644
--- a/import-layers/yocto-poky/meta/recipes-support/aspell/aspell_0.60.6.1.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/aspell/aspell_0.60.6.1.bb
@@ -6,7 +6,9 @@
 
 PR = "r1"
 
-SRC_URI = "${GNU_MIRROR}/aspell/aspell-${PV}.tar.gz"
+SRC_URI = "${GNU_MIRROR}/aspell/aspell-${PV}.tar.gz \
+           file://gcc7.patch \
+          "
 SRC_URI[md5sum] = "e66a9c9af6a60dc46134fdacf6ce97d7"
 SRC_URI[sha256sum] = "f52583a83a63633701c5f71db3dc40aab87b7f76b29723aeb27941eff42df6e1"
 
diff --git a/import-layers/yocto-poky/meta/recipes-support/atk/at-spi2-atk_2.22.0.bb b/import-layers/yocto-poky/meta/recipes-support/atk/at-spi2-atk_2.22.0.bb
deleted file mode 100644
index 58edb6e..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/atk/at-spi2-atk_2.22.0.bb
+++ /dev/null
@@ -1,21 +0,0 @@
-SUMMARY = "AT-SPI 2 Toolkit Bridge"
-LICENSE = "LGPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=e9f288ba982d60518f375b5898283886"
-
-MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
-
-SRC_URI = "${GNOME_MIRROR}/${BPN}/${MAJ_VER}/${BPN}-${PV}.tar.xz"
-SRC_URI[md5sum] = "aa62aed21b8e03dc44ab81ae49d893ca"
-SRC_URI[sha256sum] = "e8bdedbeb873eb229eb08c88e11d07713ec25ae175251648ad1a9da6c21113c1"
-
-DEPENDS = "dbus glib-2.0 glib-2.0-native atk at-spi2-core"
-
-inherit autotools pkgconfig distro_features_check upstream-version-is-even
-
-# The at-spi2-core requires x11 in DISTRO_FEATURES
-REQUIRED_DISTRO_FEATURES = "x11"
-
-PACKAGES =+ "${PN}-gnome ${PN}-gtk2"
-
-FILES_${PN}-gnome = "${libdir}/gnome-settings-daemon-3.0/gtk-modules"
-FILES_${PN}-gtk2 = "${libdir}/gtk-2.0/modules/libatk-bridge.*"
diff --git a/import-layers/yocto-poky/meta/recipes-support/atk/at-spi2-atk_2.24.1.bb b/import-layers/yocto-poky/meta/recipes-support/atk/at-spi2-atk_2.24.1.bb
new file mode 100644
index 0000000..4a0e411
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/atk/at-spi2-atk_2.24.1.bb
@@ -0,0 +1,22 @@
+SUMMARY = "AT-SPI 2 Toolkit Bridge"
+HOMEPAGE = "https://wiki.linuxfoundation.org/accessibility/d-bus"
+LICENSE = "LGPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=e9f288ba982d60518f375b5898283886"
+
+MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
+
+SRC_URI = "${GNOME_MIRROR}/${BPN}/${MAJ_VER}/${BPN}-${PV}.tar.xz"
+SRC_URI[md5sum] = "79388fbc4dc7f27394556dd389aeb594"
+SRC_URI[sha256sum] = "60dc90ac4f74b8ffe96a9363c25208a443b381bacecfefea6de549f20ed6957d"
+
+DEPENDS = "dbus glib-2.0 glib-2.0-native atk at-spi2-core"
+
+inherit autotools pkgconfig distro_features_check upstream-version-is-even
+
+# The at-spi2-core requires x11 in DISTRO_FEATURES
+REQUIRED_DISTRO_FEATURES = "x11"
+
+PACKAGES =+ "${PN}-gnome ${PN}-gtk2"
+
+FILES_${PN}-gnome = "${libdir}/gnome-settings-daemon-3.0/gtk-modules"
+FILES_${PN}-gtk2 = "${libdir}/gtk-2.0/modules/libatk-bridge.*"
diff --git a/import-layers/yocto-poky/meta/recipes-support/atk/at-spi2-core_2.22.0.bb b/import-layers/yocto-poky/meta/recipes-support/atk/at-spi2-core_2.22.0.bb
deleted file mode 100644
index 272849e..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/atk/at-spi2-core_2.22.0.bb
+++ /dev/null
@@ -1,29 +0,0 @@
-SUMMARY = "Assistive Technology Service Provider Interface (dbus core)"
-LICENSE = "LGPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=e9f288ba982d60518f375b5898283886"
-
-MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
-
-SRC_URI = "${GNOME_MIRROR}/${BPN}/${MAJ_VER}/${BPN}-${PV}.tar.xz \
-           file://0001-nls.m4-Take-it-from-gettext-0.15.patch \
-           file://0001-build-Add-with-systemduserunitdir.patch \
-          "
-
-SRC_URI[md5sum] = "3da5fe62a653e49dad1c47f9a46fee56"
-SRC_URI[sha256sum] = "415ea3af21318308798e098be8b3a17b2f0cf2fe16cecde5ad840cf4e0f2c80a"
-
-DEPENDS = "dbus glib-2.0 virtual/libx11 libxi libxtst intltool-native"
-
-inherit autotools gtk-doc gettext systemd pkgconfig distro_features_check upstream-version-is-even gobject-introspection
-# depends on virtual/libx11
-REQUIRED_DISTRO_FEATURES = "x11"
-
-EXTRA_OECONF = "--disable-xevie \
-                --with-systemduserunitdir=${systemd_user_unitdir} \
-                --with-dbus-daemondir=${bindir}"
-
-FILES_${PN} += "${datadir}/dbus-1/services/*.service \
-                ${datadir}/dbus-1/accessibility-services/*.service \
-                ${datadir}/defaults/at-spi2 \
-                ${systemd_user_unitdir}/at-spi-dbus-bus.service \
-                "
diff --git a/import-layers/yocto-poky/meta/recipes-support/atk/at-spi2-core_2.24.1.bb b/import-layers/yocto-poky/meta/recipes-support/atk/at-spi2-core_2.24.1.bb
new file mode 100644
index 0000000..1687ae3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/atk/at-spi2-core_2.24.1.bb
@@ -0,0 +1,30 @@
+SUMMARY = "Assistive Technology Service Provider Interface (dbus core)"
+HOMEPAGE = "https://wiki.linuxfoundation.org/accessibility/d-bus"
+LICENSE = "LGPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=e9f288ba982d60518f375b5898283886"
+
+MAJ_VER = "${@oe.utils.trim_version("${PV}", 2)}"
+
+SRC_URI = "${GNOME_MIRROR}/${BPN}/${MAJ_VER}/${BPN}-${PV}.tar.xz \
+           file://0001-nls.m4-Take-it-from-gettext-0.15.patch \
+           file://0001-build-Add-with-systemduserunitdir.patch \
+          "
+
+SRC_URI[md5sum] = "61d0a471e693292934a73f288ebff35c"
+SRC_URI[sha256sum] = "1e90d064b937aacfe79a96232ac7e63d28d716e85bd9ff4333f865305a959b5b"
+
+DEPENDS = "dbus glib-2.0 virtual/libx11 libxi libxtst intltool-native"
+
+inherit autotools gtk-doc gettext systemd pkgconfig distro_features_check upstream-version-is-even gobject-introspection
+# depends on virtual/libx11
+REQUIRED_DISTRO_FEATURES = "x11"
+
+EXTRA_OECONF = "--disable-xevie \
+                --with-systemduserunitdir=${systemd_user_unitdir} \
+                --with-dbus-daemondir=${bindir}"
+
+FILES_${PN} += "${datadir}/dbus-1/services/*.service \
+                ${datadir}/dbus-1/accessibility-services/*.service \
+                ${datadir}/defaults/at-spi2 \
+                ${systemd_user_unitdir}/at-spi-dbus-bus.service \
+                "
diff --git a/import-layers/yocto-poky/meta/recipes-support/atk/atk_2.22.0.bb b/import-layers/yocto-poky/meta/recipes-support/atk/atk_2.22.0.bb
deleted file mode 100644
index bc80f95..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/atk/atk_2.22.0.bb
+++ /dev/null
@@ -1,21 +0,0 @@
-SUMMARY = "Accessibility toolkit for GNOME"
-HOMEPAGE = "http://live.gnome.org/GAP/"
-BUGTRACKER = "https://bugzilla.gnome.org/"
-SECTION = "x11/libs"
-
-LICENSE = "GPLv2+ & LGPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=3bf50002aefd002f49e7bb854063f7e7 \
-                    file://atk/atkutil.c;endline=18;md5=6fd31cd2fdc9b30f619ca8d819bc12d3 \
-                    file://atk/atk.h;endline=18;md5=fcd7710187e0eae485e356c30d1b0c3b"
-
-DEPENDS = "glib-2.0"
-
-inherit gnomebase gtk-doc gettext upstream-version-is-even gobject-introspection
-
-SRC_URI[archive.md5sum] = "c7f2adcf75e4058727174cde970e9129"
-SRC_URI[archive.sha256sum] = "d349f5ca4974c9c76a4963e5b254720523b0c78672cbc0e1a3475dbd9b3d44b6"
-
-BBCLASSEXTEND = "native"
-
-EXTRA_OECONF = "--disable-glibtest \
-               "
diff --git a/import-layers/yocto-poky/meta/recipes-support/atk/atk_2.24.0.bb b/import-layers/yocto-poky/meta/recipes-support/atk/atk_2.24.0.bb
new file mode 100644
index 0000000..d62319c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/atk/atk_2.24.0.bb
@@ -0,0 +1,21 @@
+SUMMARY = "Accessibility toolkit for GNOME"
+HOMEPAGE = "http://live.gnome.org/GAP/"
+BUGTRACKER = "https://bugzilla.gnome.org/"
+SECTION = "x11/libs"
+
+LICENSE = "GPLv2+ & LGPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=3bf50002aefd002f49e7bb854063f7e7 \
+                    file://atk/atkutil.c;endline=18;md5=6fd31cd2fdc9b30f619ca8d819bc12d3 \
+                    file://atk/atk.h;endline=18;md5=fcd7710187e0eae485e356c30d1b0c3b"
+
+DEPENDS = "glib-2.0"
+
+inherit gnomebase gtk-doc gettext upstream-version-is-even gobject-introspection
+
+SRC_URI[archive.md5sum] = "3747a80089dfa3c0bbcf21adfff9968b"
+SRC_URI[archive.sha256sum] = "bb2daa9a808c73a7a79d2983f333e0ba74be42fc51e3ba1faf2551a636487a49"
+
+BBCLASSEXTEND = "native"
+
+EXTRA_OECONF = "--disable-glibtest \
+               "
diff --git a/import-layers/yocto-poky/meta/recipes-support/attr/acl/test-fix-directory-permissions.patch b/import-layers/yocto-poky/meta/recipes-support/attr/acl/test-fix-directory-permissions.patch
index cd4510c..e64990a 100644
--- a/import-layers/yocto-poky/meta/recipes-support/attr/acl/test-fix-directory-permissions.patch
+++ b/import-layers/yocto-poky/meta/recipes-support/attr/acl/test-fix-directory-permissions.patch
@@ -1,6 +1,9 @@
+From 311589fedf196168382d8f0db303ab328bcf9d83 Mon Sep 17 00:00:00 2001
+From: Peter Seebach <peter.seebach@windriver.com>
+Date: Wed, 11 May 2016 15:16:06 -0500
+Subject: [PATCH] acl.inc, run-ptest: improve ptest functionality on limited
+
 commit c45bae84817a70fef6c2b661a07a492a0d23ae85
-Author: Peter Seebach <peter.seebach@windriver.com>
-Date:   Wed May 11 15:16:06 2016 -0500
 
     Fix permissions on temporary directory
 
@@ -10,6 +13,13 @@
 
     Signed-off-by: Peter Seebach <peter.seebach@windriver.com>
 
+Upstream-Status: Backport [ http://git.savannah.gnu.org/cgit/acl.git/commit/?id=c6772a958800de064482634f77c20a0faafc5af6 ]
+
+Signed-off-by: Dengke Du <dengke.du@windriver.com>
+---
+ test/root/permissions.test | 1 +
+ 1 file changed, 1 insertion(+)
+
 diff --git a/test/root/permissions.test b/test/root/permissions.test
 index 42615f5..098b52a 100644
 --- a/test/root/permissions.test
@@ -22,3 +32,6 @@
  	$ mkdir d
  	$ cd d
  	$ umask 027
+-- 
+2.8.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/attr/acl_2.2.52.bb b/import-layers/yocto-poky/meta/recipes-support/attr/acl_2.2.52.bb
index 306c7b5..8f3dc45 100644
--- a/import-layers/yocto-poky/meta/recipes-support/attr/acl_2.2.52.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/attr/acl_2.2.52.bb
@@ -43,6 +43,11 @@
 	tar -c --exclude=nfs test/ | ( cd ${D}${PTEST_PATH} && tar -xf - )
 	mkdir ${D}${PTEST_PATH}/include
 	cp ${S}/include/builddefs ${S}/include/buildmacros ${S}/include/buildrules ${D}${PTEST_PATH}/include/
+	# Remove any build host references
+	sed -e "s:--sysroot=${STAGING_DIR_TARGET}::g" \
+	    -e 's:${HOSTTOOLS_DIR}/::g' \
+	    -e 's:${RECIPE_SYSROOT_NATIVE}::g' \
+	    -i ${D}${PTEST_PATH}/include/builddefs
 }
 
 RDEPENDS_${PN}-ptest = "acl bash coreutils perl perl-module-filehandle perl-module-getopt-std perl-module-posix shadow"
diff --git a/import-layers/yocto-poky/meta/recipes-support/attr/attr.inc b/import-layers/yocto-poky/meta/recipes-support/attr/attr.inc
index e8b5d05..24ef5ad 100644
--- a/import-layers/yocto-poky/meta/recipes-support/attr/attr.inc
+++ b/import-layers/yocto-poky/meta/recipes-support/attr/attr.inc
@@ -32,6 +32,12 @@
 	  do cp ${S}/include/$i ${D}${PTEST_PATH}/include/; \
 	done
 	sed -e 's|; @echo|; echo|' -i ${D}${PTEST_PATH}/test/Makefile
+    
+	# Remove any build host references
+	sed -e "s:--sysroot=${STAGING_DIR_TARGET}::g" \
+	    -e 's:${HOSTTOOLS_DIR}/::g' \
+	    -e 's:${RECIPE_SYSROOT_NATIVE}::g' \
+	    -i ${D}${PTEST_PATH}/include/builddefs
 }
 
 RDEPENDS_${PN}-ptest = "attr coreutils perl-module-filehandle perl-module-getopt-std perl-module-posix"
diff --git a/import-layers/yocto-poky/meta/recipes-support/attr/attr/0001-Use-stdint-types-consistently.patch b/import-layers/yocto-poky/meta/recipes-support/attr/attr/0001-Use-stdint-types-consistently.patch
new file mode 100644
index 0000000..dcd6507
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/attr/attr/0001-Use-stdint-types-consistently.patch
@@ -0,0 +1,69 @@
+From 37a27b6fd09ecb37097b85e5db74e4f77b80fe0a Mon Sep 17 00:00:00 2001
+From: Felix Janda <felix.janda@posteo.de>
+Date: Tue, 12 Jan 2016 22:20:33 +0100
+Subject: [PATCH] Use stdint types consistently
+
+---
+Upstream-Status: Backport
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+
+ include/attributes.h | 6 ++++--
+ man/man3/attr_list.3 | 8 ++++----
+ 2 files changed, 8 insertions(+), 6 deletions(-)
+
+Index: attr-2.4.47/include/attributes.h
+===================================================================
+--- attr-2.4.47.orig/include/attributes.h
++++ attr-2.4.47/include/attributes.h
+@@ -22,6 +22,7 @@
+ extern "C" {
+ #endif
+ 
++#include <stdint.h>
+ /*
+  *	An almost-IRIX-compatible extended attributes API
+  *	(the IRIX attribute "list" operation is missing, added ATTR_SECURE).
+@@ -69,7 +70,7 @@ typedef struct attrlist {
+  * al_offset[i] entry points to.
+  */
+ typedef struct attrlist_ent {	/* data from attr_list() */
+-	u_int32_t	a_valuelen;	/* number bytes in value of attr */
++	uint32_t	a_valuelen;	/* number bytes in value of attr */
+ 	char		a_name[1];	/* attr name (NULL terminated) */
+ } attrlist_ent_t;
+ 
+@@ -90,7 +91,7 @@ typedef struct attrlist_ent {	/* data fr
+  * operation on a cursor is to bzero() it.
+  */
+ typedef struct attrlist_cursor {
+-	u_int32_t	opaque[4];	/* an opaque cookie */
++	uint32_t	opaque[4];	/* an opaque cookie */
+ } attrlist_cursor_t;
+ 
+ /*
+Index: attr-2.4.47/man/man3/attr_list.3
+===================================================================
+--- attr-2.4.47.orig/man/man3/attr_list.3
++++ attr-2.4.47/man/man3/attr_list.3
+@@ -72,9 +72,9 @@ The contents of an \f4attrlist_t\fP stru
+ .nf
+ .ft 4
+ .ta 9n 22n
+-__int32_t al_count; /\(** number of entries in attrlist \(**/
+-__int32_t al_more; /\(** T/F: more attrs (do syscall again) \(**/
+-__int32_t al_offset[1]; /\(** byte offsets of attrs [var-sized] \(**/
++int32_t al_count; /\(** number of entries in attrlist \(**/
++int32_t al_more; /\(** T/F: more attrs (do syscall again) \(**/
++int32_t al_offset[1]; /\(** byte offsets of attrs [var-sized] \(**/
+ .ft 1
+ .fi
+ .RE
+@@ -113,7 +113,7 @@ include the following members:
+ .nf
+ .ft 4
+ .ta 9n 22n
+-u_int32_t a_valuelen; /\(** number bytes in value of attr \(**/
++uint32_t a_valuelen; /\(** number bytes in value of attr \(**/
+ char a_name[]; /\(** attr name (NULL terminated) \(**/
+ .ft 1
+ .fi
diff --git a/import-layers/yocto-poky/meta/recipes-support/attr/attr_2.4.47.bb b/import-layers/yocto-poky/meta/recipes-support/attr/attr_2.4.47.bb
index 556c8e4..fc88bef 100644
--- a/import-layers/yocto-poky/meta/recipes-support/attr/attr_2.4.47.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/attr/attr_2.4.47.bb
@@ -4,6 +4,9 @@
 # future releases of attr, remove this when updating the recipe.
 SRC_URI += "file://attr-Missing-configure.ac.patch \
             file://dont-use-decl-macros.patch \
+            file://Remove-the-section-2-man-pages.patch \
+            file://Remove-the-attr.5-man-page-moved-to-man-pages.patch \
+            file://0001-Use-stdint-types-consistently.patch \
            "
 
 SRC_URI[md5sum] = "84f58dec00b60f2dc8fd1c9709291cc7"
diff --git a/import-layers/yocto-poky/meta/recipes-support/attr/ea-acl.inc b/import-layers/yocto-poky/meta/recipes-support/attr/ea-acl.inc
index c587b3c..e6f4c72 100644
--- a/import-layers/yocto-poky/meta/recipes-support/attr/ea-acl.inc
+++ b/import-layers/yocto-poky/meta/recipes-support/attr/ea-acl.inc
@@ -47,6 +47,3 @@
 FILES_lib${BPN} = "${base_libdir}/lib*${SOLIBS}"
 
 BBCLASSEXTEND = "native"
-# Only append ldflags for target recipe and if USE_NLS is enabled
-LDFLAGS_append_libc-uclibc_class-target = "${@['', ' -lintl '][(d.getVar('USE_NLS') == 'yes')]}"
-EXTRA_OECONF_append_libc-uclibc_class-target = "${@['', ' --disable-gettext '][(d.getVar('USE_NLS') == 'no')]}"
diff --git a/import-layers/yocto-poky/meta/recipes-support/attr/files/Remove-the-attr.5-man-page-moved-to-man-pages.patch b/import-layers/yocto-poky/meta/recipes-support/attr/files/Remove-the-attr.5-man-page-moved-to-man-pages.patch
new file mode 100644
index 0000000..d5ab83d
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/attr/files/Remove-the-attr.5-man-page-moved-to-man-pages.patch
@@ -0,0 +1,240 @@
+From 6047c8522b91235ad1e835f44f5e36472d9d49b2 Mon Sep 17 00:00:00 2001
+From: Andreas Gruenbacher <andreas.gruenbacher@gmail.com>
+Date: Wed, 22 Apr 2015 11:46:59 +0200
+Subject: [PATCH 2/2] Remove the attr.5 man page (moved to man-pages)
+
+Commit dce9b4448c7f2b22bd206cd068fb05cb2f3255b9 from
+https://git.savannah.nongnu.org/git/attr.git
+
+The attr.5 page is part of the extended attribute system call documentation,
+which has been moved into the man-pages package. Move the attr.5 page there
+as well.
+
+Upstream-Status: Backport
+
+[MA: updated to apply directly to v2.4.47]
+Signed-off-by: Mark Asselstine <mark.asselstine@windriver.com>
+---
+ man/Makefile      |   2 +-
+ man/man5/Makefile |  35 -------------
+ man/man5/attr.5   | 153 ------------------------------------------------------
+ 3 files changed, 1 insertion(+), 189 deletions(-)
+ delete mode 100644 man/man5/Makefile
+ delete mode 100644 man/man5/attr.5
+
+diff --git a/man/Makefile b/man/Makefile
+index 755daed..9301f09 100644
+--- a/man/Makefile
++++ b/man/Makefile
+@@ -19,7 +19,7 @@
+ TOPDIR = ..
+ include $(TOPDIR)/include/builddefs
+ 
+-SUBDIRS = man1 man3 man5
++SUBDIRS = man1 man3
+ 
+ default : $(SUBDIRS)
+ 
+diff --git a/man/man5/Makefile b/man/man5/Makefile
+deleted file mode 100644
+index 6b70d3d..0000000
+--- a/man/man5/Makefile
++++ /dev/null
+@@ -1,35 +0,0 @@
+-#
+-# Copyright (c) 2000, 2002 Silicon Graphics, Inc.  All Rights Reserved.
+-# Copyright (C) 2009  Andreas Gruenbacher <agruen@suse.de>
+-#
+-# This program is free software: you can redistribute it and/or modify it
+-# under the terms of the GNU General Public License as published by
+-# the Free Software Foundation, either version 2 of the License, or
+-# (at your option) any later version.
+-#
+-# This program is distributed in the hope that it will be useful,
+-# but WITHOUT ANY WARRANTY; without even the implied warranty of
+-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+-# GNU General Public License for more details.
+-#
+-# You should have received a copy of the GNU General Public License
+-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+-#
+-
+-TOPDIR = ../..
+-include $(TOPDIR)/include/builddefs
+-
+-MAN_SECTION	= 5
+-
+-MAN_PAGES	= $(shell echo *.$(MAN_SECTION))
+-MAN_DEST	= $(PKG_MAN_DIR)/man$(MAN_SECTION)
+-LSRCFILES	= $(MAN_PAGES)
+-
+-default : $(MAN_PAGES)
+-
+-include $(BUILDRULES)
+-
+-install : default
+-	$(INSTALL) -m 755 -d $(MAN_DEST)
+-	$(INSTALL_MAN)
+-install-dev install-lib:
+diff --git a/man/man5/attr.5 b/man/man5/attr.5
+deleted file mode 100644
+index a02757d..0000000
+--- a/man/man5/attr.5
++++ /dev/null
+@@ -1,153 +0,0 @@
+-.\" Extended attributes manual page
+-.\"
+-.\" Copyright (C) 2000, 2002, 2007  Andreas Gruenbacher <agruen@suse.de>
+-.\" Copyright (C) 2001, 2002, 2004, 2007 Silicon Graphics, Inc.
+-.\" All rights reserved.
+-.\"
+-.\" This is free documentation; you can redistribute it and/or
+-.\" modify it under the terms of the GNU General Public License as
+-.\" published by the Free Software Foundation; either version 2 of
+-.\" the License, or (at your option) any later version.
+-.\"
+-.\" The GNU General Public License's references to "object code"
+-.\" and "executables" are to be interpreted as the output of any
+-.\" document formatting or typesetting system, including
+-.\" intermediate and printed output.
+-.\"
+-.\" This manual is distributed in the hope that it will be useful,
+-.\" but WITHOUT ANY WARRANTY; without even the implied warranty of
+-.\" MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+-.\" GNU General Public License for more details.
+-.\"
+-.\" You should have received a copy of the GNU General Public
+-.\" License along with this manual.  If not, see
+-.\" <http://www.gnu.org/licenses/>.
+-.\"
+-.TH ATTR 5
+-.SH NAME
+-attr - Extended attributes
+-.SH DESCRIPTION
+-Extended attributes are name:value pairs associated permanently with
+-files and directories, similar to the environment strings associated
+-with a process.
+-An attribute may be defined or undefined.
+-If it is defined, its value may be empty or non-empty.
+-.PP
+-Extended attributes are extensions to the normal attributes which are
+-associated with all inodes in the system (i.e. the
+-.BR stat (2)
+-data).
+-They are often used to provide additional functionality
+-to a filesystem \- for example, additional security features such as
+-Access Control Lists (ACLs) may be implemented using extended attributes.
+-.PP
+-Users with search access to a file or directory may retrieve a list of
+-attribute names defined for that file or directory.
+-.PP
+-Extended attributes are accessed as atomic objects.
+-Reading retrieves the whole value of an attribute and stores it in a buffer.
+-Writing replaces any previous value with the new value.
+-.PP
+-Space consumed for extended attributes is counted towards the disk quotas
+-of the file owner and file group.
+-.PP
+-Currently, support for extended attributes is implemented on Linux by the
+-ext2, ext3, ext4, XFS, JFS and reiserfs filesystems.
+-.SH EXTENDED ATTRIBUTE NAMESPACES
+-Attribute names are zero-terminated strings.
+-The attribute name is always specified in the fully qualified
+-.IR namespace.attribute
+-form, eg.
+-.IR user.mime_type ,
+-.IR trusted.md5sum ,
+-.IR system.posix_acl_access ,
+-or
+-.IR security.selinux .
+-.PP
+-The namespace mechanism is used to define different classes of extended
+-attributes.
+-These different classes exist for several reasons, e.g. the permissions
+-and capabilities required for manipulating extended attributes of one
+-namespace may differ to another.
+-.PP
+-Currently the
+-.IR security ,
+-.IR system ,
+-.IR trusted ,
+-and
+-.IR user
+-extended attribute classes are defined as described below. Additional
+-classes may be added in the future.
+-.SS Extended security attributes
+-The security attribute namespace is used by kernel security modules,
+-such as Security Enhanced Linux.  
+-Read and write access permissions to security attributes depend on the
+-policy implemented for each security attribute by the security module.
+-When no security module is loaded, all processes have read access to
+-extended security attributes, and write access is limited to processes
+-that have the CAP_SYS_ADMIN capability.
+-.SS Extended system attributes
+-Extended system attributes are used by the kernel to store system
+-objects such as Access Control Lists and Capabilities.  Read and write
+-access permissions to system attributes depend on the policy implemented
+-for each system attribute implemented by filesystems in the kernel.
+-.SS Trusted extended attributes
+-Trusted extended attributes are visible and accessible only to processes that
+-have the CAP_SYS_ADMIN capability (the super user usually has this
+-capability).
+-Attributes in this class are used to implement mechanisms in user
+-space (i.e., outside the kernel) which keep information in extended attributes
+-to which ordinary processes should not have access.
+-.SS Extended user attributes
+-Extended user attributes may be assigned to files and directories for
+-storing arbitrary additional information such as the mime type,
+-character set or encoding of a file. The access permissions for user
+-attributes are defined by the file permission bits.
+-.PP
+-The file permission bits of regular files and directories are
+-interpreted differently from the file permission bits of special files
+-and symbolic links. For regular files and directories the file
+-permission bits define access to the file's contents, while for device special
+-files they define access to the device described by the special file.
+-The file permissions of symbolic links are not used in access
+-checks. These differences would allow users to consume filesystem resources in
+-a way not controllable by disk quotas for group or world writable special files and directories.
+-.PP
+-For this reason, extended user attributes are only allowed for regular files and directories, and access to extended user attributes is restricted to the
+-owner and to users with appropriate capabilities for directories with the
+-sticky bit set (see the
+-.BR chmod (1)
+-manual page for an explanation of Sticky Directories).
+-.SH FILESYSTEM DIFFERENCES
+-The kernel and the filesystem may place limits on the maximum number
+-and size of extended attributes that can be associated with a file.
+-Some file systems, such as ext2/3 and reiserfs, require the filesystem
+-to be mounted with the
+-.B user_xattr
+-mount option in order for extended user attributes to be used.
+-.PP
+-In the current ext2, ext3 and ext4 filesystem implementations, each
+-extended attribute must fit on a single filesystem block (1024, 2048
+-or 4096 bytes, depending on the block size specified when the
+-filesystem was created).
+-.PP
+-In the XFS and reiserfs filesystem implementations, there is no
+-practical limit on the number or size of extended attributes
+-associated with a file, and the algorithms used to store extended
+-attribute information on disk are scalable.
+-.PP
+-In the JFS filesystem implementation, names can be up to 255 bytes and
+-values up to 65,535 bytes.
+-.SH ADDITIONAL NOTES
+-Since the filesystems on which extended attributes are stored might also
+-be used on architectures with a different byte order and machine word
+-size, care should be taken to store attribute values in an architecture
+-independent format.
+-.SH AUTHORS
+-Andreas Gruenbacher,
+-.RI < a.gruenbacher@bestbits.at >
+-and the SGI XFS development team,
+-.RI < linux-xfs@oss.sgi.com >.
+-.SH SEE ALSO
+-getfattr(1),
+-setfattr(1).
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/attr/files/Remove-the-section-2-man-pages.patch b/import-layers/yocto-poky/meta/recipes-support/attr/files/Remove-the-section-2-man-pages.patch
new file mode 100644
index 0000000..044c5a0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/attr/files/Remove-the-section-2-man-pages.patch
@@ -0,0 +1,666 @@
+From b972600a26f3a930e53e2fce2625266a5d29813e Mon Sep 17 00:00:00 2001
+From: Andreas Gruenbacher <andreas.gruenbacher@gmail.com>
+Date: Tue, 14 Apr 2015 23:53:11 +0200
+Subject: [PATCH 1/2] Remove the section 2 man pages
+
+Commit 8d1263bca95722d66a6f8e83450f49d0956ea534 from upstream
+https://git.savannah.nongnu.org/git/attr.git/
+
+The section 2 man pages have long since been added to the man-pages package
+which documents all system calls; they were disabled in attr by default since
+January 2014.  Get rid of them here.
+
+Upstream-Status: Backport
+
+[MA: modified to apply directly to v2.4.47]
+Signed-off-by: Mark Asselstine <mark.asselstine@windriver.com>
+---
+ man/Makefile           |   2 +-
+ man/man2/Makefile      |  35 -----------
+ man/man2/getxattr.2    | 143 --------------------------------------------
+ man/man2/listxattr.2   | 158 -------------------------------------------------
+ man/man2/removexattr.2 | 111 ----------------------------------
+ man/man2/setxattr.2    | 143 --------------------------------------------
+ 6 files changed, 1 insertion(+), 591 deletions(-)
+ delete mode 100644 man/man2/Makefile
+ delete mode 100644 man/man2/getxattr.2
+ delete mode 100644 man/man2/listxattr.2
+ delete mode 100644 man/man2/removexattr.2
+ delete mode 100644 man/man2/setxattr.2
+
+diff --git a/man/Makefile b/man/Makefile
+index 9535426..755daed 100644
+--- a/man/Makefile
++++ b/man/Makefile
+@@ -19,7 +19,7 @@
+ TOPDIR = ..
+ include $(TOPDIR)/include/builddefs
+ 
+-SUBDIRS = man1 man2 man3 man5
++SUBDIRS = man1 man3 man5
+ 
+ default : $(SUBDIRS)
+ 
+diff --git a/man/man2/Makefile b/man/man2/Makefile
+deleted file mode 100644
+index d77309d..0000000
+--- a/man/man2/Makefile
++++ /dev/null
+@@ -1,35 +0,0 @@
+-#
+-# Copyright (c) 2000-2002 Silicon Graphics, Inc.  All Rights Reserved.
+-# Copyright (C) 2009  Andreas Gruenbacher <agruen@suse.de>
+-#
+-# This program is free software: you can redistribute it and/or modify it
+-# under the terms of the GNU General Public License as published by
+-# the Free Software Foundation, either version 2 of the License, or
+-# (at your option) any later version.
+-#
+-# This program is distributed in the hope that it will be useful,
+-# but WITHOUT ANY WARRANTY; without even the implied warranty of
+-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+-# GNU General Public License for more details.
+-#
+-# You should have received a copy of the GNU General Public License
+-# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+-#
+-
+-TOPDIR = ../..
+-include $(TOPDIR)/include/builddefs
+-
+-MAN_SECTION	= 2
+-
+-MAN_PAGES	= $(shell echo *.$(MAN_SECTION))
+-MAN_DEST	= $(PKG_MAN_DIR)/man$(MAN_SECTION)
+-LSRCFILES	= $(MAN_PAGES)
+-
+-default install : $(MAN_PAGES)
+-
+-include $(BUILDRULES)
+-
+-install-dev : default
+-	$(INSTALL) -m 755 -d $(MAN_DEST)
+-	$(INSTALL_MAN)
+-install-lib:
+diff --git a/man/man2/getxattr.2 b/man/man2/getxattr.2
+deleted file mode 100644
+index 405ad89..0000000
+--- a/man/man2/getxattr.2
++++ /dev/null
+@@ -1,143 +0,0 @@
+-.\"
+-.\" Extended attributes system calls manual pages
+-.\"
+-.\" (C) Andreas Gruenbacher, February 2001
+-.\" (C) Silicon Graphics Inc, September 2001
+-.\"
+-.\" This is free documentation; you can redistribute it and/or
+-.\" modify it under the terms of the GNU General Public License as
+-.\" published by the Free Software Foundation; either version 2 of
+-.\" the License, or (at your option) any later version.
+-.\"
+-.\" The GNU General Public License's references to "object code"
+-.\" and "executables" are to be interpreted as the output of any
+-.\" document formatting or typesetting system, including
+-.\" intermediate and printed output.
+-.\"
+-.\" This manual is distributed in the hope that it will be useful,
+-.\" but WITHOUT ANY WARRANTY; without even the implied warranty of
+-.\" MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+-.\" GNU General Public License for more details.
+-.\"
+-.\" You should have received a copy of the GNU General Public
+-.\" License along with this manual.  If not, see
+-.\" <http://www.gnu.org/licenses/>.
+-.\"
+-.TH GETXATTR 2 "Extended Attributes" "Dec 2001" "System calls"
+-.SH NAME
+-getxattr, lgetxattr, fgetxattr \- retrieve an extended attribute value
+-.SH SYNOPSIS
+-.fam C
+-.nf
+-.B #include <sys/types.h>
+-.B #include <attr/xattr.h>
+-.sp
+-.BI "ssize_t getxattr (const char\ *" path ", const char\ *" name ",
+-.BI "\t\t\t\t void\ *" value ", size_t " size );
+-.BI "ssize_t lgetxattr (const char\ *" path ", const char\ *" name ",
+-.BI "\t\t\t\t void\ *" value ", size_t " size );
+-.BI "ssize_t fgetxattr (int " filedes ", const char\ *" name ",
+-.BI "\t\t\t\t void\ *" value ", size_t " size );
+-.fi
+-.fam T
+-.SH DESCRIPTION
+-Extended attributes are
+-.IR name :\c
+-.I value
+-pairs associated with inodes (files, directories, symlinks, etc).
+-They are extensions to the normal attributes which are associated
+-with all inodes in the system (i.e. the
+-.BR stat (2)
+-data).
+-A complete overview of extended attributes concepts can be found in
+-.BR attr (5).
+-.PP
+-.B getxattr
+-retrieves the
+-.I value
+-of the extended attribute identified by
+-.I name
+-and associated with the given
+-.I path
+-in the filesystem.
+-The length of the attribute
+-.I value
+-is returned.
+-.PP
+-.B lgetxattr
+-is identical to 
+-.BR getxattr ,
+-except in the case of a symbolic link, where the link itself is
+-interrogated, not the file that it refers to.
+-.PP
+-.B fgetxattr
+-is identical to
+-.BR getxattr ,
+-only the open file pointed to by
+-.I filedes
+-(as returned by
+-.BR open (2))
+-is interrogated in place of
+-.IR path .
+-.PP
+-An extended attribute
+-.I name
+-is a simple NULL-terminated string.
+-The name includes a namespace prefix \- there may be several, disjoint
+-namespaces associated with an individual inode.
+-The value of an extended attribute is a chunk of arbitrary textual or
+-binary data of specified length.
+-.PP
+-An empty buffer of
+-.I size
+-zero can be passed into these calls to return the current size of the
+-named extended attribute, which can be used to estimate the size of a
+-buffer which is sufficiently large to hold the value associated with
+-the extended attribute.
+-.PP
+-The interface is designed to allow guessing of initial buffer
+-sizes, and to enlarge buffers when the return value indicates
+-that the buffer provided was too small.
+-.SH RETURN VALUE
+-On success, a positive number is returned indicating the size of the
+-extended attribute value.
+-On failure, \-1 is returned and
+-.I errno
+-is set appropriately.
+-.PP
+-If the named attribute does not exist, or the process has no access to
+-this attribute,
+-.I errno
+-is set to ENOATTR.
+-.PP
+-If the
+-.I size
+-of the
+-.I value
+-buffer is too small to hold the result,
+-.I errno
+-is set to ERANGE.
+-.PP
+-If extended attributes are not supported by the filesystem, or are disabled,
+-.I errno
+-is set to ENOTSUP.
+-.PP
+-The errors documented for the
+-.BR stat (2)
+-system call are also applicable here.
+-.SH AUTHORS
+-Andreas Gruenbacher,
+-.RI < a.gruenbacher@bestbits.at >
+-and the SGI XFS development team,
+-.RI < linux-xfs@oss.sgi.com >.
+-Please send any bug reports or comments to these addresses.
+-.SH SEE ALSO
+-.BR getfattr (1),
+-.BR setfattr (1),
+-.BR open (2),
+-.BR stat (2),
+-.BR setxattr (2),
+-.BR listxattr (2),
+-.BR removexattr (2),
+-and
+-.BR attr (5).
+diff --git a/man/man2/listxattr.2 b/man/man2/listxattr.2
+deleted file mode 100644
+index 8b4371c..0000000
+--- a/man/man2/listxattr.2
++++ /dev/null
+@@ -1,158 +0,0 @@
+-.\"
+-.\" Extended attributes system calls manual pages
+-.\"
+-.\" (C) Andreas Gruenbacher, February 2001
+-.\" (C) Silicon Graphics Inc, September 2001
+-.\"
+-.\" This is free documentation; you can redistribute it and/or
+-.\" modify it under the terms of the GNU General Public License as
+-.\" published by the Free Software Foundation; either version 2 of
+-.\" the License, or (at your option) any later version.
+-.\"
+-.\" The GNU General Public License's references to "object code"
+-.\" and "executables" are to be interpreted as the output of any
+-.\" document formatting or typesetting system, including
+-.\" intermediate and printed output.
+-.\"
+-.\" This manual is distributed in the hope that it will be useful,
+-.\" but WITHOUT ANY WARRANTY; without even the implied warranty of
+-.\" MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+-.\" GNU General Public License for more details.
+-.\"
+-.\" You should have received a copy of the GNU General Public
+-.\" License along with this manual.  If not, see
+-.\" <http://www.gnu.org/licenses/>.
+-.\"
+-.TH LISTXATTR 2 "Extended Attributes" "Dec 2001" "System calls"
+-.SH NAME
+-listxattr, llistxattr, flistxattr \- list extended attribute names
+-.SH SYNOPSIS
+-.fam C
+-.nf
+-.B #include <sys/types.h>
+-.B #include <attr/xattr.h>
+-.sp
+-.BI "ssize_t listxattr (const char\ *" path ",
+-.BI "\t\t\t\t char\ *" list ", size_t " size );
+-.BI "ssize_t llistxattr (const char\ *" path ",
+-.BI "\t\t\t\t char\ *" list ", size_t " size );
+-.BI "ssize_t flistxattr (int " filedes ",
+-.BI "\t\t\t\t char\ *" list ", size_t " size );
+-.fi
+-.fam T
+-.SH DESCRIPTION
+-Extended attributes are name:value
+-pairs associated with inodes (files, directories, symlinks, etc).
+-They are extensions to the normal attributes which are associated
+-with all inodes in the system (i.e. the
+-.BR stat (2)
+-data).
+-A complete overview of extended attributes concepts can be found in
+-.BR attr (5).
+-.PP
+-.B listxattr
+-retrieves the
+-.I list
+-of extended attribute names associated with the given
+-.I path
+-in the filesystem.
+-The list is the set of (NULL-terminated) names, one after the other.
+-Names of extended attributes to which the calling process does not
+-have access may be omitted from the list.
+-The length of the attribute name
+-.I list
+-is returned.
+-.PP
+-.B llistxattr
+-is identical to
+-.BR listxattr ,
+-except in the case of a symbolic link, where the list of names of
+-extended attributes associated with the link itself is retrieved,
+-not the file that it refers to.
+-.I list
+-is a caller-allocated buffer of size
+-.IR size .
+-.PP
+-.B flistxattr
+-is identical to
+-.BR listxattr ,
+-only the open file pointed to by
+-.I filedes
+-(as returned by
+-.BR open (2))
+-is interrogated in place of
+-.IR path .
+-.PP
+-A single extended attribute
+-.I name
+-is a simple NULL-terminated string.
+-The name includes a namespace prefix \- there may be several, disjoint
+-namespaces associated with an individual inode.
+-.PP
+-An empty buffer of
+-.I size
+-zero can be passed into these calls to return the current size of the
+-list of extended attribute names, which can be used to estimate the
+-size of a buffer which is sufficiently large to hold the list of names.
+-.SH EXAMPLES
+-The
+-.I list
+-of names is returned as an unordered array of NULL-terminated character
+-strings (attribute names are separated by NULL characters), like this:
+-.fam C
+-.RS
+-.nf
+-user.name1\\0system.name1\\0user.name2\\0
+-.fi
+-.RE
+-.fam T
+-.P
+-Filesystems like ext2, ext3 and XFS which implement POSIX ACLs using
+-extended attributes, might return a
+-.I list
+-like this:
+-.fam C
+-.RS
+-.nf
+-system.posix_acl_access\\0system.posix_acl_default\\0
+-.fi
+-.RE
+-.fam T
+-.SH RETURN VALUE
+-On success, a positive number is returned indicating the size of the
+-extended attribute name list.
+-On failure, \-1 is returned and
+-.I errno
+-is set appropriately.
+-.PP
+-If the
+-.I size
+-of the
+-.I list
+-buffer is too small to hold the result,
+-.I errno
+-is set to ERANGE.
+-.PP
+-If extended attributes are not supported by the filesystem, or are disabled,
+-.I errno
+-is set to ENOTSUP.
+-.PP
+-The errors documented for the
+-.BR stat (2)
+-system call are also applicable here.
+-.SH AUTHORS
+-Andreas Gruenbacher,
+-.RI < a.gruenbacher@bestbits.at >
+-and the SGI XFS development team,
+-.RI < linux-xfs@oss.sgi.com >.
+-Please send any bug reports or comments to these addresses.
+-.SH SEE ALSO
+-.BR getfattr (1),
+-.BR setfattr (1),
+-.BR open (2),
+-.BR stat (2),
+-.BR getxattr (2),
+-.BR setxattr (2),
+-.BR removexattr (2),
+-and
+-.BR attr (5).
+diff --git a/man/man2/removexattr.2 b/man/man2/removexattr.2
+deleted file mode 100644
+index 2c7d934..0000000
+--- a/man/man2/removexattr.2
++++ /dev/null
+@@ -1,111 +0,0 @@
+-.\"
+-.\" Extended attributes system calls manual pages
+-.\"
+-.\" (C) Andreas Gruenbacher, February 2001
+-.\" (C) Silicon Graphics Inc, September 2001
+-.\"
+-.\" This is free documentation; you can redistribute it and/or
+-.\" modify it under the terms of the GNU General Public License as
+-.\" published by the Free Software Foundation; either version 2 of
+-.\" the License, or (at your option) any later version.
+-.\"
+-.\" The GNU General Public License's references to "object code"
+-.\" and "executables" are to be interpreted as the output of any
+-.\" document formatting or typesetting system, including
+-.\" intermediate and printed output.
+-.\"
+-.\" This manual is distributed in the hope that it will be useful,
+-.\" but WITHOUT ANY WARRANTY; without even the implied warranty of
+-.\" MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+-.\" GNU General Public License for more details.
+-.\"
+-.\" You should have received a copy of the GNU General Public
+-.\" License along with this manual.  If not, see
+-.\" <http://www.gnu.org/licenses/>.
+-.\"
+-.TH REMOVEXATTR 2 "Extended Attributes" "Dec 2001" "System calls"
+-.SH NAME
+-removexattr, lremovexattr, fremovexattr \- remove an extended attribute
+-.SH SYNOPSIS
+-.fam C
+-.nf
+-.B #include <sys/types.h>
+-.B #include <attr/xattr.h>
+-.sp
+-.BI "int removexattr (const char\ *" path ", const char\ *" name );
+-.BI "int lremovexattr (const char\ *" path ", const char\ *" name );
+-.BI "int fremovexattr (int " filedes ", const char\ *" name );
+-.fi
+-.fam T
+-.SH DESCRIPTION
+-Extended attributes are
+-.IR name :\c
+-value pairs associated with inodes (files, directories, symlinks, etc).
+-They are extensions to the normal attributes which are associated
+-with all inodes in the system (i.e. the
+-.BR stat (2)
+-data).
+-A complete overview of extended attributes concepts can be found in
+-.BR attr (5).
+-.PP
+-.B removexattr
+-removes the extended attribute identified by
+-.I name
+-and associated with the given
+-.I path
+-in the filesystem.
+-.PP
+-.B lremovexattr
+-is identical to 
+-.BR removexattr ,
+-except in the case of a symbolic link, where the extended attribute is
+-removed from the link itself, not the file that it refers to.
+-.PP
+-.B fremovexattr
+-is identical to
+-.BR removexattr ,
+-only the extended attribute is removed from the open file pointed to by
+-.I filedes
+-(as returned by
+-.BR open (2))
+-in place of
+-.IR path .
+-.PP
+-An extended attribute name is a simple NULL-terminated string.
+-The
+-.I name
+-includes a namespace prefix \- there may be several, disjoint
+-namespaces associated with an individual inode.
+-.SH RETURN VALUE
+-On success, zero is returned.
+-On failure, \-1 is returned and
+-.I errno
+-is set appropriately.
+-.PP
+-If the named attribute does not exist,
+-.I errno
+-is set to ENOATTR.
+-.PP
+-If extended attributes are not supported by the filesystem, or are disabled,
+-.I errno
+-is set to ENOTSUP.
+-.PP
+-The errors documented for the
+-.BR stat (2)
+-system call are also applicable here.
+-.SH AUTHORS
+-Andreas Gruenbacher,
+-.RI < a.gruenbacher@bestbits.at >
+-and the SGI XFS development team,
+-.RI < linux-xfs@oss.sgi.com >.
+-Please send any bug reports or comments to these addresses.
+-.SH SEE ALSO
+-.BR getfattr (1),
+-.BR setfattr (1),
+-.BR open (2),
+-.BR stat (2),
+-.BR setxattr (2),
+-.BR getxattr (2),
+-.BR listxattr (2),
+-and
+-.BR attr (5).
+diff --git a/man/man2/setxattr.2 b/man/man2/setxattr.2
+deleted file mode 100644
+index b20dc9f..0000000
+--- a/man/man2/setxattr.2
++++ /dev/null
+@@ -1,143 +0,0 @@
+-.\"
+-.\" Extended attributes system calls manual pages
+-.\"
+-.\" (C) Andreas Gruenbacher, February 2001
+-.\" (C) Silicon Graphics Inc, September 2001
+-.\"
+-.\" This is free documentation; you can redistribute it and/or
+-.\" modify it under the terms of the GNU General Public License as
+-.\" published by the Free Software Foundation; either version 2 of
+-.\" the License, or (at your option) any later version.
+-.\"
+-.\" The GNU General Public License's references to "object code"
+-.\" and "executables" are to be interpreted as the output of any
+-.\" document formatting or typesetting system, including
+-.\" intermediate and printed output.
+-.\"
+-.\" This manual is distributed in the hope that it will be useful,
+-.\" but WITHOUT ANY WARRANTY; without even the implied warranty of
+-.\" MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+-.\" GNU General Public License for more details.
+-.\"
+-.\" You should have received a copy of the GNU General Public
+-.\" License along with this manual.  If not, see
+-.\" <http://www.gnu.org/licenses/>.
+-.\"
+-.TH SETXATTR 2 "Extended Attributes" "Dec 2001" "System calls"
+-.SH NAME
+-setxattr, lsetxattr, fsetxattr \- set an extended attribute value
+-.SH SYNOPSIS
+-.fam C
+-.nf
+-.B #include <sys/types.h>
+-.B #include <attr/xattr.h>
+-.sp
+-.BI "int setxattr (const char\ *" path ", const char\ *" name ",
+-.BI "\t\t\t const void\ *" value ", size_t " size ", int " flags );
+-.BI "int lsetxattr (const char\ *" path ", const char\ *" name ",
+-.BI "\t\t\t const void\ *" value ", size_t " size ", int " flags );
+-.BI "int fsetxattr (int " filedes ", const char\ *" name ",
+-.BI "\t\t\t const void\ *" value ", size_t " size ", int " flags );
+-.fi
+-.fam T
+-.SH DESCRIPTION
+-Extended attributes are
+-.IR name :\c
+-.I value
+-pairs associated with inodes (files, directories, symlinks, etc).
+-They are extensions to the normal attributes which are associated
+-with all inodes in the system (i.e. the
+-.BR stat (2)
+-data).
+-A complete overview of extended attributes concepts can be found in
+-.BR attr (5).
+-.PP
+-.B setxattr
+-sets the
+-.I value
+-of the extended attribute identified by
+-.I name
+-and associated with the given
+-.I path
+-in the filesystem.
+-The
+-.I size
+-of the
+-.I value
+-must be specified.
+-.PP
+-.B lsetxattr
+-is identical to 
+-.BR setxattr ,
+-except in the case of a symbolic link, where the extended attribute is
+-set on the link itself, not the file that it refers to.
+-.PP
+-.B fsetxattr
+-is identical to
+-.BR setxattr ,
+-only the extended attribute is set on the open file pointed to by
+-.I filedes
+-(as returned by
+-.BR open (2))
+-in place of
+-.IR path .
+-.PP
+-An extended attribute name is a simple NULL-terminated string.
+-The
+-.I name
+-includes a namespace prefix \- there may be several, disjoint
+-namespaces associated with an individual inode.
+-The
+-.I value
+-of an extended attribute is a chunk of arbitrary textual or
+-binary data of specified length.
+-.PP
+-The
+-.I flags
+-parameter can be used to refine the semantics of the operation.
+-XATTR_CREATE specifies a pure create, which fails if the named
+-attribute exists already.
+-XATTR_REPLACE specifies a pure replace operation, which fails if the
+-named attribute does not already exist.
+-By default (no flags), the extended attribute will be created if
+-need be, or will simply replace the value if the attribute exists.
+-.SH RETURN VALUE
+-On success, zero is returned.
+-On failure, \-1 is returned and
+-.I errno
+-is set appropriately.
+-.PP
+-If XATTR_CREATE is specified, and the attribute exists already,
+-.I errno
+-is set to EEXIST.
+-If XATTR_REPLACE is specified, and the attribute does not exist,
+-.I errno
+-is set to ENOATTR.
+-.PP
+-If there is insufficient space remaining to store the extended attribute,
+-.I errno
+-is set to either ENOSPC, or EDQUOT if quota enforcement was the cause.
+-.PP
+-If extended attributes are not supported by the filesystem, or are disabled,
+-.I errno
+-is set to ENOTSUP.
+-.PP
+-The errors documented for the
+-.BR stat (2)
+-system call are also applicable here.
+-.SH AUTHORS
+-Andreas Gruenbacher,
+-.RI < a.gruenbacher@bestbits.at >
+-and the SGI XFS development team,
+-.RI < linux-xfs@oss.sgi.com >.
+-Please send any bug reports or comments to these addresses.
+-.SH SEE ALSO
+-.BR getfattr (1),
+-.BR setfattr (1),
+-.BR open (2),
+-.BR stat (2),
+-.BR getxattr (2),
+-.BR listxattr (2),
+-.BR removexattr (2),
+-and
+-.BR attr (5).
+-- 
+2.7.4
+
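For reference alongside the removed section 2 pages, the following is a minimal sketch of the call sequence they document: probe the value size with getxattr, then set, list and remove an attribute. It is illustrative only; it assumes glibc's <sys/xattr.h>, which declares the same prototypes as the older <attr/xattr.h> header named in those pages, and the path and attribute name are placeholders.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/xattr.h>

int main(void)
{
    const char *path = "example.txt";        /* hypothetical file */
    const char *name = "user.mime_type";     /* hypothetical attribute */
    const char *val  = "text/plain";

    /* Create or replace the attribute (flags = 0, per setxattr(2)). */
    if (setxattr(path, name, val, strlen(val), 0) == -1) {
        perror("setxattr");                  /* e.g. ENOTSUP if xattrs are disabled */
        return 1;
    }

    /* Zero-size call returns the current value size; then fetch it. */
    ssize_t sz = getxattr(path, name, NULL, 0);
    if (sz >= 0) {
        char *buf = malloc(sz + 1);
        ssize_t got = getxattr(path, name, buf, sz);  /* ERANGE if buf were too small */
        if (got >= 0) {
            buf[got] = '\0';
            printf("%s = %s\n", name, buf);
        }
        free(buf);
    }

    /* List attribute names: a concatenation of NUL-terminated strings. */
    ssize_t len = listxattr(path, NULL, 0);
    if (len > 0) {
        char *list = malloc(len);
        len = listxattr(path, list, len);
        for (ssize_t off = 0; off < len; off += strlen(list + off) + 1)
            printf("name: %s\n", list + off);
        free(list);
    }

    removexattr(path, name);                 /* ENOATTR once it no longer exists */
    return 0;
}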
diff --git a/import-layers/yocto-poky/meta/recipes-support/bash-completion/bash-completion_2.5.bb b/import-layers/yocto-poky/meta/recipes-support/bash-completion/bash-completion_2.5.bb
deleted file mode 100644
index dd22857..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/bash-completion/bash-completion_2.5.bb
+++ /dev/null
@@ -1,42 +0,0 @@
-SUMMARY = "Programmable Completion for Bash 4"
-HOMEPAGE = "http://bash-completion.alioth.debian.org/"
-BUGTRACKER = "https://alioth.debian.org/projects/bash-completion/"
-
-LICENSE = "GPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
-
-SECTION = "console/utils"
-
-SRC_URI = "https://github.com/scop/bash-completion/releases/download/${PV}/${BPN}-${PV}.tar.xz"
-
-SRC_URI[md5sum] = "15300010bd4034de12c3fc4f171692e3"
-SRC_URI[sha256sum] = "b0b9540c65532825eca030f1241731383f89b2b65e80f3492c5dd2f0438c95cf"
-UPSTREAM_CHECK_REGEX = "bash-completion-(?P<pver>(?!2008).+)\.tar"
-UPSTREAM_CHECK_URI = "https://github.com/scop/bash-completion/releases"
-
-PARALLEL_MAKE = ""
-
-inherit autotools
-
-do_install_append() {
-	# compatdir
-	install -d ${D}${sysconfdir}/bash_completion.d/
-	echo '. ${datadir}/${BPN}/bash_completion' >${D}${sysconfdir}/bash_completion
-
-	# Delete files already provided by util-linux
-	local i
-	for i in mount umount; do
-		rm ${D}${datadir}/${BPN}/completions/$i
-	done
-}
-
-RDEPENDS_${PN} = "bash"
-
-# Some recipes are providing ${PN}-bash-completion packages
-PACKAGES =+ "${PN}-extra"
-FILES_${PN}-extra = "${datadir}/${BPN}/completions/ \
-    ${datadir}/${BPN}/helpers/"
-
-FILES_${PN}-dev += "${datadir}/cmake"
-
-BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/bash-completion/bash-completion_2.7.bb b/import-layers/yocto-poky/meta/recipes-support/bash-completion/bash-completion_2.7.bb
new file mode 100644
index 0000000..f519b3f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/bash-completion/bash-completion_2.7.bb
@@ -0,0 +1,42 @@
+SUMMARY = "Programmable Completion for Bash 4"
+HOMEPAGE = "http://bash-completion.alioth.debian.org/"
+BUGTRACKER = "https://alioth.debian.org/projects/bash-completion/"
+
+LICENSE = "GPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=751419260aa954499f7abaabaa882bbe"
+
+SECTION = "console/utils"
+
+SRC_URI = "https://github.com/scop/bash-completion/releases/download/${PV}/${BPN}-${PV}.tar.xz"
+
+SRC_URI[md5sum] = "28117492bdc9408438e6041683a423ce"
+SRC_URI[sha256sum] = "41ba892d3f427d4a686de32673f35401bc947a7801f684127120cdb13641441e"
+UPSTREAM_CHECK_REGEX = "bash-completion-(?P<pver>(?!2008).+)\.tar"
+UPSTREAM_CHECK_URI = "https://github.com/scop/bash-completion/releases"
+
+PARALLEL_MAKE = ""
+
+inherit autotools
+
+do_install_append() {
+	# compatdir
+	install -d ${D}${sysconfdir}/bash_completion.d/
+	echo '. ${datadir}/${BPN}/bash_completion' >${D}${sysconfdir}/bash_completion
+
+	# Delete files already provided by util-linux
+	local i
+	for i in mount umount rfkill; do
+		rm ${D}${datadir}/${BPN}/completions/$i
+	done
+}
+
+RDEPENDS_${PN} = "bash"
+
+# Some recipes are providing ${PN}-bash-completion packages
+PACKAGES =+ "${PN}-extra"
+FILES_${PN}-extra = "${datadir}/${BPN}/completions/ \
+    ${datadir}/${BPN}/helpers/"
+
+FILES_${PN}-dev += "${datadir}/cmake"
+
+BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/bdwgc/bdwgc/0001-configure.ac-add-check-for-NO_GETCONTEXT-definition.patch b/import-layers/yocto-poky/meta/recipes-support/bdwgc/bdwgc/0001-configure.ac-add-check-for-NO_GETCONTEXT-definition.patch
deleted file mode 100644
index 8ef774f..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/bdwgc/bdwgc/0001-configure.ac-add-check-for-NO_GETCONTEXT-definition.patch
+++ /dev/null
@@ -1,29 +0,0 @@
-configure.ac: add check for NO_GETCONTEXT definition
-
-Signed-off-by: Samuel Martin <s.martin49@gmail.com>
-[yann.morin.1998@free.fr: add a comment, change variable name, use
- AS_IF, remove debug traces, use AC_CHECK_FUNCS (as suggested by
- Thomas)]
-Signed-off-by: "Yann E. MORIN" <yann.morin.1998@free.fr>
-Cc: Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
-
----
-Upstream-Status: Pending
- configure.ac | 6 ++++++
- 1 file changed, 6 insertions(+)
-
---- bdwgc-7.2f.orig/configure.ac	2014-06-01 19:00:47.000000000 +0200
-+++ bdwgc-7.2f/configure.ac	2014-12-23 14:13:11.585716713 +0100
-@@ -365,6 +365,12 @@
-   AC_MSG_RESULT($ac_cv_fno_strict_aliasing)
- fi
- 
-+# Check for getcontext (uClibc can be configured without it, for example)
-+AC_CHECK_FUNCS([getcontext])
-+AS_IF([test "$ac_cv_func_getcontext" = "no"],
-+  [CFLAGS="$CFLAGS -DNO_GETCONTEXT"
-+   CPPFLAGS="$CPPFLAGS -DNO_GETCONTEXT"])
-+
- case "$host" in
- # While IRIX 6 has libdl for the O32 and N32 ABIs, it's missing for N64
- # and unnecessary everywhere.
diff --git a/import-layers/yocto-poky/meta/recipes-support/bdwgc/bdwgc/musl_header_fix.patch b/import-layers/yocto-poky/meta/recipes-support/bdwgc/bdwgc/musl_header_fix.patch
deleted file mode 100644
index 4a18496..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/bdwgc/bdwgc/musl_header_fix.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-Add missing header to avoid:
-
-| 1472659610.016355: ../git/pthread_stop_world.c: In function 'GC_brief_async_signal_safe_sleep':
-| 1472659610.0540252: ../git/pthread_stop_world.c:397:22: error: storage size of 'tv' isn't known
-| 1472659610.0540252:        struct timeval tv;
-| 1472659610.0540252:                       ^~
-| 1472659610.054099: ../git/pthread_stop_world.c:397:22: warning: unused variable 'tv' [-Wunused-variable]
-| 1472659610.054099:        struct timeval tv;
-| 1472659610.054099:                       ^~
-| 1472659610.054099: Makefile:1530: recipe for target 'pthread_stop_world.lo' failed
-
-in musl builds.
-
-Upstream-Status: Pending
-
-Index: git/pthread_stop_world.c
-===================================================================
---- git.orig/pthread_stop_world.c
-+++ git/pthread_stop_world.c
-@@ -45,6 +45,7 @@
- #include <semaphore.h>
- #include <errno.h>
- #include <unistd.h>
-+#include <sys/time.h>
- #include "atomic_ops.h"
- 
- /* It's safe to call original pthread_sigmask() here.   */
diff --git a/import-layers/yocto-poky/meta/recipes-support/bdwgc/bdwgc_7.6.0.bb b/import-layers/yocto-poky/meta/recipes-support/bdwgc/bdwgc_7.6.0.bb
deleted file mode 100644
index dcb68f0..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/bdwgc/bdwgc_7.6.0.bb
+++ /dev/null
@@ -1,42 +0,0 @@
-SUMMARY = "A garbage collector for C and C++"
-
-DESCRIPTION = "The Boehm-Demers-Weiser conservative garbage collector can be\
- used as a garbage collecting replacement for C malloc or C++ new. It allows\
- you to allocate memory basically as you normally would, without explicitly\
- deallocating memory that is no longer useful. The collector automatically\
- recycles memory when it determines that it can no longer be otherwise\
- accessed.\
-  The collector is also used by a number of programming language\
- implementations that either use C as intermediate code, want to facilitate\
- easier interoperation with C libraries, or just prefer the simple collector\
- interface.\
-  Alternatively, the garbage collector may be used as a leak detector for C\
- or C++ programs, though that is not its primary goal.\
-  Empirically, this collector works with most unmodified C programs, simply\
- by replacing malloc with GC_malloc calls, replacing realloc with GC_realloc\
- calls, and removing free calls."
-
-HOMEPAGE = "http://www.hboehm.info/gc/"
-SECTION = "devel"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://README.QUICK;md5=4f81f24ec69726c312487c2ac740e9e3"
-
-SRCREV = "8ac1d84a40eb7a431fec1b8097e3f24b48fb23fa"
-SRC_URI = "git://github.com/ivmai/bdwgc.git \
-           file://0001-configure.ac-add-check-for-NO_GETCONTEXT-definition.patch \
-           file://musl_header_fix.patch \
-          "
-
-FILES_${PN}-doc = "${datadir}"
-
-S = "${WORKDIR}/git"
-
-ARM_INSTRUCTION_SET = "arm"
-
-inherit autotools pkgconfig
-
-# by default use external libatomic-ops
-PACKAGECONFIG ??= "libatomic-ops"
-PACKAGECONFIG[libatomic-ops] = "--with-libatomic-ops=yes,--with-libatomic-ops=no,libatomic-ops"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/bmap-tools/bmap-tools_3.2.bb b/import-layers/yocto-poky/meta/recipes-support/bmap-tools/bmap-tools_3.2.bb
deleted file mode 100644
index e10f5fd..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/bmap-tools/bmap-tools_3.2.bb
+++ /dev/null
@@ -1,24 +0,0 @@
-SUMMARY = "Tools to generate block map (AKA bmap) and flash images using bmap"
-DESCRIPTION = "Bmap-tools - tools to generate block map (AKA bmap) and flash images using \
-bmap. Bmaptool is a generic tool for creating the block map (bmap) for a file, \
-and copying files using the block map. The idea is that large file containing \
-unused blocks, like raw system image files, can be copied or flashed a lot \
-faster with bmaptool than with traditional tools like "dd" or "cp"."
-HOMEPAGE = "http://git.infradead.org/users/dedekind/bmap-tools.git"
-SECTION = "console/utils"
-LICENSE = "GPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
-
-SRC_URI = "ftp://ftp.infradead.org/pub/${BPN}/${BPN}-${PV}.tgz"
-SRC_URI[md5sum] = "92cdad1cb4dfa0cca7176c8e22752616"
-SRC_URI[sha256sum] = "cc6c7f7dc0a37e2a32deb127308e24e6c4b80bfb54f3803c308efab02bf2d434"
-
-RDEPENDS_${PN} = "python-core python-compression"
-
-inherit setuptools
-
-BBCLASSEXTEND = "native"
-
-do_install_append_class-native() {
-    sed -i -e 's|^#!.*/usr/bin/env python|#! /usr/bin/env nativepython|' ${D}${bindir}/bmaptool
-}
diff --git a/import-layers/yocto-poky/meta/recipes-support/bmap-tools/bmap-tools_3.4.bb b/import-layers/yocto-poky/meta/recipes-support/bmap-tools/bmap-tools_3.4.bb
new file mode 100644
index 0000000..70b6e91
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/bmap-tools/bmap-tools_3.4.bb
@@ -0,0 +1,22 @@
+SUMMARY = "Tools to generate block map (AKA bmap) and flash images using bmap"
+DESCRIPTION = "Bmap-tools - tools to generate block map (AKA bmap) and flash images using \
+bmap. Bmaptool is a generic tool for creating the block map (bmap) for a file, \
+and copying files using the block map. The idea is that large file containing \
+unused blocks, like raw system image files, can be copied or flashed a lot \
+faster with bmaptool than with traditional tools like "dd" or "cp"."
+HOMEPAGE = "https://github.com/01org/bmap-tools"
+SECTION = "console/utils"
+LICENSE = "GPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
+
+SRC_URI = "git://github.com/intel/bmap-tools.git"
+SRCREV = "9dad724104df265442226972a1e310813f9ffcba"
+
+S = "${WORKDIR}/git"
+
+RDEPENDS_${PN} = "python3-core python3-compression python3-mmap python3-setuptools"
+
+inherit python3native
+inherit setuptools3
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-support/boost/bjam-native_1.63.0.bb b/import-layers/yocto-poky/meta/recipes-support/boost/bjam-native_1.64.0.bb
similarity index 100%
rename from import-layers/yocto-poky/meta/recipes-support/boost/bjam-native_1.63.0.bb
rename to import-layers/yocto-poky/meta/recipes-support/boost/bjam-native_1.64.0.bb
diff --git a/import-layers/yocto-poky/meta/recipes-support/boost/boost-1.63.0.inc b/import-layers/yocto-poky/meta/recipes-support/boost/boost-1.63.0.inc
deleted file mode 100644
index b540a89..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/boost/boost-1.63.0.inc
+++ /dev/null
@@ -1,24 +0,0 @@
-# The Boost web site provides free peer-reviewed portable
-# C++ source libraries. The emphasis is on libraries which
-# work well with the C++ Standard Library. The libraries are
-# intended to be widely useful, and are in regular use by
-# thousands of programmers across a broad spectrum of applications.
-HOMEPAGE = "http://www.boost.org/"
-LICENSE = "BSL-1.0 & MIT & Python-2.0"
-LIC_FILES_CHKSUM = "file://LICENSE_1_0.txt;md5=e4224ccaecb14d942c71d31bef20d78c"
-
-BOOST_VER = "${@"_".join(d.getVar("PV").split("."))}"
-BOOST_MAJ = "${@"_".join(d.getVar("PV").split(".")[0:2])}"
-BOOST_P = "boost_${BOOST_VER}"
-
-SRC_URI = "${SOURCEFORGE_MIRROR}/project/boost/boost/${PV}/${BOOST_P}.tar.bz2"
-
-SRC_URI[md5sum] = "1c837ecd990bb022d07e7aab32b09847"
-SRC_URI[sha256sum] = "beae2529f759f6b3bf3f4969a19c2e9d6f0c503edcb2de4a61d1428519fcb3b0"
-
-UPSTREAM_CHECK_URI = "http://www.boost.org/users/download/"
-UPSTREAM_CHECK_REGEX = "boostorg/release/(?P<pver>.*)/source/"
-
-PR = "r1"
-
-S = "${WORKDIR}/${BOOST_P}"
diff --git a/import-layers/yocto-poky/meta/recipes-support/boost/boost-1.64.0.inc b/import-layers/yocto-poky/meta/recipes-support/boost/boost-1.64.0.inc
new file mode 100644
index 0000000..dc7b1a9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/boost/boost-1.64.0.inc
@@ -0,0 +1,21 @@
+# The Boost web site provides free peer-reviewed portable
+# C++ source libraries. The emphasis is on libraries which
+# work well with the C++ Standard Library. The libraries are
+# intended to be widely useful, and are in regular use by
+# thousands of programmers across a broad spectrum of applications.
+HOMEPAGE = "http://www.boost.org/"
+LICENSE = "BSL-1.0 & MIT & Python-2.0"
+LIC_FILES_CHKSUM = "file://LICENSE_1_0.txt;md5=e4224ccaecb14d942c71d31bef20d78c"
+
+BOOST_VER = "${@"_".join(d.getVar("PV").split("."))}"
+BOOST_MAJ = "${@"_".join(d.getVar("PV").split(".")[0:2])}"
+BOOST_P = "boost_${BOOST_VER}"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/project/boost/boost/${PV}/${BOOST_P}.tar.bz2"
+SRC_URI[md5sum] = "93eecce2abed9d2442c9676914709349"
+SRC_URI[sha256sum] = "7bcc5caace97baa948931d712ea5f37038dbb1c5d89b43ad4def4ed7cb683332"
+
+UPSTREAM_CHECK_URI = "http://www.boost.org/users/download/"
+UPSTREAM_CHECK_REGEX = "boostorg/release/(?P<pver>.*)/source/"
+
+S = "${WORKDIR}/${BOOST_P}"
diff --git a/import-layers/yocto-poky/meta/recipes-support/boost/boost.inc b/import-layers/yocto-poky/meta/recipes-support/boost/boost.inc
index 4ff70e3..41fc90f 100644
--- a/import-layers/yocto-poky/meta/recipes-support/boost/boost.inc
+++ b/import-layers/yocto-poky/meta/recipes-support/boost/boost.inc
@@ -38,7 +38,7 @@
 BOOST_LIBS_remove_mips16e = "wave"
 
 # optional libraries
-PACKAGECONFIG ??= "locale"
+PACKAGECONFIG ??= "locale python"
 PACKAGECONFIG[locale] = ",,icu"
 PACKAGECONFIG[graph_parallel] = ",,,boost-mpi mpich"
 PACKAGECONFIG[mpi] = ",,mpich"
@@ -169,7 +169,7 @@
 BJAM_OPTS_append_class-native = ' -sNO_BZIP2=1'
 
 # Adjust the build for x32
-BJAM_OPTS_append_linux-gnux32 = " abi=x32 address-model=64"
+BJAM_OPTS_append_x86-x32 = " abi=x32 address-model=64"
 
 do_configure() {
 	cp -f ${S}/boost/config/platform/linux.hpp ${S}/boost/config/platform/linux-gnueabi.hpp
diff --git a/import-layers/yocto-poky/meta/recipes-support/boost/boost/0001-When-using-soft-float-on-ARM-we-should-not-expect-th.patch b/import-layers/yocto-poky/meta/recipes-support/boost/boost/0001-When-using-soft-float-on-ARM-we-should-not-expect-th.patch
deleted file mode 100644
index bafd5ce..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/boost/boost/0001-When-using-soft-float-on-ARM-we-should-not-expect-th.patch
+++ /dev/null
@@ -1,29 +0,0 @@
-From 9ab0914207e6d0e6b75ce12147c54b96478feb64 Mon Sep 17 00:00:00 2001
-From: Alexander Kanavin <alex.kanavin@gmail.com>
-Date: Tue, 21 Feb 2017 12:50:35 +0200
-Subject: [PATCH] When using soft-float, on ARM we should not expect the FE_*
- symbols
-
-Upstream-Status: Pending
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
----
- boost/test/execution_monitor.hpp | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/boost/test/execution_monitor.hpp b/boost/test/execution_monitor.hpp
-index f53348a..86252d7 100644
---- a/boost/test/execution_monitor.hpp
-+++ b/boost/test/execution_monitor.hpp
-@@ -498,7 +498,7 @@ enum masks {
- 
-     BOOST_FPE_ALL       = MCW_EM,
- 
--#elif defined(BOOST_NO_FENV_H) || defined(BOOST_CLANG) /* *** */
-+#elif defined(BOOST_NO_FENV_H) || defined(BOOST_CLANG) || defined(__ARM_PCS) /* *** */
-     BOOST_FPE_ALL       = BOOST_FPE_OFF,
- 
- #else /* *** */
--- 
-2.11.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/boost/boost/0001-correct-error-which-appeared-when-compiling-non-c-co.patch b/import-layers/yocto-poky/meta/recipes-support/boost/boost/0001-correct-error-which-appeared-when-compiling-non-c-co.patch
new file mode 100644
index 0000000..f96005e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/boost/boost/0001-correct-error-which-appeared-when-compiling-non-c-co.patch
@@ -0,0 +1,28 @@
+From 02fa5cee1b8d0321787113e2dc10b162c657f333 Mon Sep 17 00:00:00 2001
+From: Robert Ramey <ramey@rrsd.com>
+Date: Wed, 1 Feb 2017 16:43:59 -0800
+Subject: [PATCH] correct error which appeared when compiling non c++ compliant
+ code for arrays
+
+Upstream-Status: Backport [expected to be released in v1.65]
+
+---
+ boost/serialization/array.hpp | 2 ++
+ 1 file changed, 2 insertions(+)
+
+diff --git a/boost/serialization/array.hpp b/boost/serialization/array.hpp
+index 61708b3..612d1a6 100644
+--- a/boost/serialization/array.hpp
++++ b/boost/serialization/array.hpp
+@@ -23,6 +23,8 @@ namespace std{
+ } // namespace std
+ #endif
+ 
++#include <boost/serialization/array_wrapper.hpp>
++
+ #ifndef BOOST_NO_CXX11_HDR_ARRAY
+ 
+ #include <array>
+-- 
+2.9.3
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/boost/boost/0004-Use-atomic-by-default-when-BOOST_NO_CXX11_HDR_ATOMIC.patch b/import-layers/yocto-poky/meta/recipes-support/boost/boost/0004-Use-atomic-by-default-when-BOOST_NO_CXX11_HDR_ATOMIC.patch
deleted file mode 100644
index ab7826b..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/boost/boost/0004-Use-atomic-by-default-when-BOOST_NO_CXX11_HDR_ATOMIC.patch
+++ /dev/null
@@ -1,59 +0,0 @@
-From c234cc557f60729e071d6da59747c1a9289555c5 Mon Sep 17 00:00:00 2001
-From: Peter Dimov <pdimov@pdimov.com>
-Date: Sun, 28 Aug 2016 21:28:21 +0300
-Subject: [PATCH 4/4] Use <atomic> by default when BOOST_NO_CXX11_HDR_ATOMIC is
- not defined
-
----
-Upstream-Status: Backport [https://github.com/boostorg/smart_ptr/commit/20fedcff2ca3143503ec4e876d47745ab0ec7b0c]
-Signed-off-by: André Draszik <git@andred.net>
- boost/smart_ptr/detail/atomic_count.hpp    | 3 +++
- boost/smart_ptr/detail/sp_counted_base.hpp | 3 +++
- boost/smart_ptr/detail/spinlock.hpp        | 3 +++
- 3 files changed, 9 insertions(+)
-
-diff --git a/boost/smart_ptr/detail/atomic_count.hpp b/boost/smart_ptr/detail/atomic_count.hpp
-index 8aefd44..6e4f71a 100644
---- a/boost/smart_ptr/detail/atomic_count.hpp
-+++ b/boost/smart_ptr/detail/atomic_count.hpp
-@@ -73,6 +73,9 @@
- #elif defined( BOOST_DISABLE_THREADS ) && !defined( BOOST_SP_ENABLE_THREADS ) && !defined( BOOST_DISABLE_WIN32 )
- # include <boost/smart_ptr/detail/atomic_count_nt.hpp>
- 
-+#elif !defined( BOOST_NO_CXX11_HDR_ATOMIC )
-+# include <boost/smart_ptr/detail/atomic_count_std_atomic.hpp>
-+
- #elif defined( __GNUC__ ) && ( defined( __i386__ ) || defined( __x86_64__ ) ) && !defined( __PATHSCALE__ )
- # include <boost/smart_ptr/detail/atomic_count_gcc_x86.hpp>
- 
-diff --git a/boost/smart_ptr/detail/sp_counted_base.hpp b/boost/smart_ptr/detail/sp_counted_base.hpp
-index 0995ca8..83ede23 100644
---- a/boost/smart_ptr/detail/sp_counted_base.hpp
-+++ b/boost/smart_ptr/detail/sp_counted_base.hpp
-@@ -44,6 +44,9 @@
- #elif defined( BOOST_SP_HAS_CLANG_C11_ATOMICS )
- # include <boost/smart_ptr/detail/sp_counted_base_clang.hpp>
- 
-+#elif !defined( BOOST_NO_CXX11_HDR_ATOMIC )
-+# include <boost/smart_ptr/detail/sp_counted_base_std_atomic.hpp>
-+
- #elif defined( __SNC__ )
- # include <boost/smart_ptr/detail/sp_counted_base_snc_ps3.hpp>
- 
-diff --git a/boost/smart_ptr/detail/spinlock.hpp b/boost/smart_ptr/detail/spinlock.hpp
-index 19f93d7..0b618df 100644
---- a/boost/smart_ptr/detail/spinlock.hpp
-+++ b/boost/smart_ptr/detail/spinlock.hpp
-@@ -43,6 +43,9 @@
- #elif defined( BOOST_SP_USE_PTHREADS )
- #  include <boost/smart_ptr/detail/spinlock_pt.hpp>
- 
-+#elif !defined( BOOST_NO_CXX11_HDR_ATOMIC )
-+#  include <boost/smart_ptr/detail/spinlock_std_atomic.hpp>
-+
- #elif defined(__GNUC__) && defined( __arm__ ) && !defined( __thumb__ )
- #  include <boost/smart_ptr/detail/spinlock_gcc_arm.hpp>
- 
--- 
-2.9.3
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/boost/boost/py3.patch b/import-layers/yocto-poky/meta/recipes-support/boost/boost/py3.patch
deleted file mode 100644
index 2b1ff18..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/boost/boost/py3.patch
+++ /dev/null
@@ -1,269 +0,0 @@
-Backport fixes from upstream (as of boost super-module commit 0d52b9) to improve
-the building of the Boost Python module with Python 3.
-
-Upstream-Status: Backport
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-diff --git a/build/Jamfile b/build/Jamfile
-index 313fdab..f14dc11 100644
---- a/libs/python/build/Jamfile
-+++ b/libs/python/build/Jamfile
-@@ -6,6 +6,7 @@ import os ;
- import indirect ;
- import modules ;
- import feature ;
-+import property ;
- 
- import python ;
- 
-@@ -30,21 +31,8 @@ else
-         ;
- }
- 
--rule find-py3-version
--{
--    local versions = [ feature.values python ] ;
--    local py3ver ;
--    for local v in $(versions)
--    {
--        if $(v) >= 3.0
--        {
--            py3ver = $(v) ;
--        }
--    }
--    return $(py3ver) ;
--}
--
--py3-version = [ find-py3-version ] ;
-+py2-version = [ py-version 2 ] ;
-+py3-version = [ py-version 3 ] ;
- 
- project boost/python
-   : source-location ../src
-@@ -52,11 +40,17 @@ project boost/python
- 
- rule cond ( test ? : yes * : no * ) { if $(test) { return $(yes) ; } else { return $(no) ; } }
- rule unless ( test ? : yes * : no * ) { if ! $(test) { return $(yes) ; } else { return $(no) ; } }
-+local rule eq ( a : b ) { if $(a) = $(b) { return 1 ; } }
- 
--rule lib_boost_python ( is-py3 ? )
--{
-+lib_boost_python(2) = boost_python ;
-+lib_boost_python(3) = boost_python3 ;
- 
--    lib [ cond $(is-py3) : boost_python3 : boost_python ]
-+lib_boost_python($(py2-version)) = $(lib_boost_python(2)) ;
-+lib_boost_python($(py3-version)) = $(lib_boost_python(3)) ;
-+
-+rule lib_boost_python ( version )
-+{
-+    lib $(lib_boost_python($(version)))
-         : # sources
-         numeric.cpp
-         list.cpp
-@@ -112,11 +106,13 @@ rule lib_boost_python ( is-py3 ? )
-             <dependency>config-warning
- 
-             <python-debugging>on:<define>BOOST_DEBUG_PYTHON
--            [ cond $(is-py3) : <python>$(py3-version) ]
-+            <python>$(version)
- 
-             -<tag>@$(BOOST_JAMROOT_MODULE)%$(BOOST_JAMROOT_MODULE).tag
-             <tag>@$(BOOST_JAMROOT_MODULE)%$(BOOST_JAMROOT_MODULE).python-tag
- 
-+            <conditional>@python.require-py
-+
-         :   # default build
-             <link>shared
-         :   # usage requirements
-@@ -125,51 +121,68 @@ rule lib_boost_python ( is-py3 ? )
-         ;
- }
- 
--rule lib_boost_numpy ( is-py3 ? )
-+lib_boost_numpy(2) = boost_numpy ;
-+lib_boost_numpy(3) = boost_numpy3 ;
-+
-+lib_boost_numpy($(py2-version)) = $(lib_boost_python(2)) ;
-+lib_boost_numpy($(py3-version)) = $(lib_boost_python(3)) ;
-+
-+rule lib_boost_numpy ( version )
- {
-     numpy-include = [ python.numpy-include ] ;
--    lib [ cond $(is-py3) : boost_numpy3 : boost_numpy ]
-+    lib $(lib_boost_numpy($(version)))
-         : # sources
-         numpy/dtype.cpp
-         numpy/matrix.cpp
-         numpy/ndarray.cpp
--	numpy/numpy.cpp
--	numpy/scalars.cpp
--	numpy/ufunc.cpp
-+        numpy/numpy.cpp
-+        numpy/scalars.cpp
-+        numpy/ufunc.cpp
-         :   # requirements
-+            <link>static:<define>BOOST_NUMPY_STATIC_LIB 
-+            <define>BOOST_NUMPY_SOURCE
-             [ cond [ python.numpy ] : <library>/python//python_for_extensions ]
-             [ unless [ python.numpy ] : <build>no ]
--	    <include>$(numpy-include)
--	    <library>boost_python
-+            <include>$(numpy-include)
-+            <library>$(lib_boost_python($(version)))
-             <python-debugging>on:<define>BOOST_DEBUG_PYTHON
--            [ cond $(is-py3) : <python>$(py3-version) ]
-+            <python>$(version)
- 
-             -<tag>@$(BOOST_JAMROOT_MODULE)%$(BOOST_JAMROOT_MODULE).tag
-             <tag>@$(BOOST_JAMROOT_MODULE)%$(BOOST_JAMROOT_MODULE).python-tag
- 
-+            <conditional>@python.require-py
-+
-         :   # default build
-             <link>shared
-         :   # usage requirements
-+			<link>static:<define>BOOST_NUMPY_STATIC_LIB
-             <python-debugging>on:<define>BOOST_DEBUG_PYTHON
-         ;
- }
- 
--libraries = boost_python ;
--libraries3 = boost_python3 ;
--if [ python.numpy ]
--{
--    libraries += boost_numpy ;
--    libraries3 += boost_numpy3 ;
--}
--
--lib_boost_python ;
--lib_boost_numpy ;
-+libraries = ;
- 
--if $(py3-version)
-+for local N in 2 3
- {
--    lib_boost_python yes ;
--    lib_boost_numpy yes ;
--    libraries += $(libraries3) ;
-+    if $(py$(N)-version)
-+    {
-+        lib_boost_python $(py$(N)-version) ;
-+        libraries += $(lib_boost_python($(py$(N)-version))) ;
-+    }
-+    else
-+    {
-+        alias $(lib_boost_python($(N))) ;
-+    }
-+    if $(py$(N)-version) && [ python.numpy ]
-+    {
-+        lib_boost_numpy $(py$(N)-version) ;
-+        libraries += $(lib_boost_numpy($(py$(N)-version))) ;
-+    }
-+    else
-+    {
-+        alias $(lib_boost_numpy($(N))) ;
-+    }
- }
- 
- boost-install $(libraries) ;
-diff --git a/src/tools/python.jam b/src/tools/python.jam
-index cc13385..bf300b8 100644
---- a/tools/build/src/tools/python.jam
-+++ b/tools/build/src/tools/python.jam
-@@ -34,6 +34,7 @@ import path ;
- import feature ;
- import set ;
- import builtin ;
-+import property-set ;
- 
- 
- # Make this module a project.
-@@ -60,6 +61,10 @@ lib rt ;
- # installed in the development system's default paths.
- feature.feature pythonpath : : free optional path ;
- 
-+# The best configured version of Python 2 and 3.
-+py2-version = ;
-+py3-version = ;
-+
- # Initializes the Python toolset. Note that all parameters are optional.
- #
- # - version -- the version of Python to use. Should be in Major.Minor format,
-@@ -861,6 +866,11 @@ local rule configure ( version ? : cmd-or-prefix ? : includes * : libraries ? :
-         if ! $(version) in [ feature.values python ]
-         {
-             feature.extend python : $(version) ;
-+            py$(major-minor[1])-version ?= $(version) ;
-+            if $(py$(major-minor[1])-version) < $(version)
-+            {
-+                py$(major-minor[1])-version = $(version) ;
-+            }
-         }
-         target-requirements += <python>$(version:E=default) ;
-     }
-@@ -916,6 +926,9 @@ local rule configure ( version ? : cmd-or-prefix ? : includes * : libraries ? :
-         }
-     }
- 
-+    # In case we added duplicate requirements from what the user specified.
-+    target-requirements = [ sequence.unique $(target-requirements) ] ;
-+
-     # Global, but conditional, requirements to give access to the interpreter
-     # for general utilities, like other toolsets, that run Python scripts.
-     toolset.add-requirements
-@@ -934,19 +947,6 @@ local rule configure ( version ? : cmd-or-prefix ? : includes * : libraries ? :
-         toolset.add-requirements <target-os>$(target-os):<python>$(version:E=default) ;
-     }
- 
--    # We also set a default requirement that assigns the first python configured
--    # for a particular target OS as the default. This makes it so that we can
--    # select a python interpreter with only knowledge of the target OS. And hence
--    # can configure different Pythons based on the target OS only.
--    local toolset-requirements = [ toolset.requirements ] ;
--    local toolset-target-os-requirements
--        = [ property.evaluate-conditionals-in-context
--            [ $(toolset-requirements).raw ] : <target-os>$(target-os) ] ;
--    if ! <python> in $(toolset-target-os-requirements:G)
--    {
--        toolset.add-requirements <target-os>$(target-os):<python>$(version:E=default) ;
--    }
--
-     # Register the right suffix for extensions.
-     register-extension-suffix $(extension-suffix) : $(target-requirements) ;
- 
-@@ -1038,6 +1038,22 @@ local rule configure ( version ? : cmd-or-prefix ? : includes * : libraries ? :
-             : $(usage-requirements)
-             ;
-     }
-+    
-+}
-+
-+# Conditional rule specification that will prevent building of a target
-+# if there is no matching python configuration available with the given
-+# required properties.
-+rule require-py ( properties * )
-+{
-+    local py-ext-target = [ $(.project).find python_for_extensions ] ;
-+    local property-set = [ property-set.create $(properties) ] ;
-+    property-set = [ $(property-set).expand ] ;
-+    local py-ext-alternative = [ $(py-ext-target).select-alternatives $(property-set) ] ;
-+    if ! $(py-ext-alternative)
-+    {
-+        return <build>no ;
-+    }
- }
- 
- 
-@@ -1298,5 +1314,11 @@ rule numpy-test ( name : sources * : requirements * )
-         : $(name) ] ;
- }
- 
-+rule py-version ( n )
-+{
-+    return $(py$(n)-version) ;
-+}
-+
- IMPORT $(__name__) : bpl-test : : bpl-test ;
- IMPORT $(__name__) : numpy-test : : numpy-test ;
-+IMPORT $(__name__) : py-version : : py-version ;
diff --git a/import-layers/yocto-poky/meta/recipes-support/boost/boost_1.63.0.bb b/import-layers/yocto-poky/meta/recipes-support/boost/boost_1.63.0.bb
deleted file mode 100644
index 1107686..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/boost/boost_1.63.0.bb
+++ /dev/null
@@ -1,15 +0,0 @@
-require boost-${PV}.inc
-require boost.inc
-
-SRC_URI += "\
-    file://arm-intrinsics.patch \
-    file://0001-When-using-soft-float-on-ARM-we-should-not-expect-th.patch \
-    file://boost-CVE-2012-2677.patch \
-    file://0001-boost-asio-detail-socket_types.hpp-fix-poll.h-includ.patch \
-    file://0004-Use-atomic-by-default-when-BOOST_NO_CXX11_HDR_ATOMIC.patch \
-    file://boost-math-disable-pch-for-gcc.patch \
-    file://0001-Apply-boost-1.62.0-no-forced-flags.patch.patch \
-    file://0003-Don-t-set-up-arch-instruction-set-flags-we-do-that-o.patch \
-    file://0002-Don-t-set-up-m32-m64-we-do-that-ourselves.patch \
-    file://py3.patch \
-"
diff --git a/import-layers/yocto-poky/meta/recipes-support/boost/boost_1.64.0.bb b/import-layers/yocto-poky/meta/recipes-support/boost/boost_1.64.0.bb
new file mode 100644
index 0000000..d1c20e1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/boost/boost_1.64.0.bb
@@ -0,0 +1,13 @@
+require boost-${PV}.inc
+require boost.inc
+
+SRC_URI += "\
+    file://arm-intrinsics.patch \
+    file://boost-CVE-2012-2677.patch \
+    file://0001-boost-asio-detail-socket_types.hpp-fix-poll.h-includ.patch \
+    file://boost-math-disable-pch-for-gcc.patch \
+    file://0001-Apply-boost-1.62.0-no-forced-flags.patch.patch \
+    file://0003-Don-t-set-up-arch-instruction-set-flags-we-do-that-o.patch \
+    file://0002-Don-t-set-up-m32-m64-we-do-that-ourselves.patch \
+    file://0001-correct-error-which-appeared-when-compiling-non-c-co.patch \
+"
diff --git a/import-layers/yocto-poky/meta/recipes-support/ca-certificates/ca-certificates_20161130.bb b/import-layers/yocto-poky/meta/recipes-support/ca-certificates/ca-certificates_20161130.bb
deleted file mode 100644
index c282ace..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/ca-certificates/ca-certificates_20161130.bb
+++ /dev/null
@@ -1,84 +0,0 @@
-SUMMARY = "Common CA certificates"
-DESCRIPTION = "This package includes PEM files of CA certificates to allow \
-SSL-based applications to check for the authenticity of SSL connections. \
-This derived from Debian's CA Certificates."
-HOMEPAGE = "http://packages.debian.org/sid/ca-certificates"
-SECTION = "misc"
-LICENSE = "GPL-2.0+ & MPL-2.0"
-LIC_FILES_CHKSUM = "file://debian/copyright;md5=e7358b9541ccf3029e9705ed8de57968"
-
-# This is needed to ensure we can run the postinst at image creation time
-DEPENDS = ""
-DEPENDS_class-native = "openssl-native"
-DEPENDS_class-nativesdk = "openssl-native"
-# Need c_rehash from openssl and run-parts from debianutils
-PACKAGE_WRITE_DEPS += "openssl-native debianutils-native"
-
-SRCREV = "61b70a1007dc269d56881a0d480fc841daacc77c"
-
-SRC_URI = "git://anonscm.debian.org/collab-maint/ca-certificates.git \
-           file://0002-update-ca-certificates-use-SYSROOT.patch \
-           file://0001-update-ca-certificates-don-t-use-Debianisms-in-run-p.patch \
-           file://update-ca-certificates-support-Toybox.patch \
-           file://default-sysroot.patch \
-           file://sbindir.patch"
-
-S = "${WORKDIR}/git"
-
-inherit allarch
-
-EXTRA_OEMAKE = "\
-    'CERTSDIR=${datadir}/ca-certificates' \
-    'SBINDIR=${sbindir}' \
-"
-
-do_compile_prepend() {
-    oe_runmake clean
-}
-
-do_install () {
-    install -d ${D}${datadir}/ca-certificates \
-               ${D}${sysconfdir}/ssl/certs \
-               ${D}${sysconfdir}/ca-certificates/update.d
-    oe_runmake 'DESTDIR=${D}' install
-
-    install -d ${D}${mandir}/man8
-    install -m 0644 sbin/update-ca-certificates.8 ${D}${mandir}/man8/
-
-    install -d ${D}${sysconfdir}
-    {
-        echo "# Lines starting with # will be ignored"
-        echo "# Lines starting with ! will remove certificate on next update"
-        echo "#"
-        find ${D}${datadir}/ca-certificates -type f -name '*.crt' | \
-            sed 's,^${D}${datadir}/ca-certificates/,,'
-    } >${D}${sysconfdir}/ca-certificates.conf
-}
-
-do_install_append_class-target () {
-    sed -i -e 's,/etc/,${sysconfdir}/,' \
-           -e 's,/usr/share/,${datadir}/,' \
-           -e 's,/usr/local,${prefix}/local,' \
-        ${D}${sbindir}/update-ca-certificates \
-        ${D}${mandir}/man8/update-ca-certificates.8
-}
-
-pkg_postinst_${PN} () {
-    SYSROOT="$D" $D${sbindir}/update-ca-certificates
-}
-
-CONFFILES_${PN} += "${sysconfdir}/ca-certificates.conf"
-
-# Postinsts don't seem to be run for nativesdk packages when populating SDKs.
-CONFFILES_${PN}_append_class-nativesdk = " ${sysconfdir}/ssl/certs/ca-certificates.crt"
-do_install_append_class-nativesdk () {
-    SYSROOT="${D}${SDKPATHNATIVE}" ${D}${sbindir}/update-ca-certificates
-}
-
-do_install_append_class-native () {
-    SYSROOT="${D}${base_prefix}" ${D}${sbindir}/update-ca-certificates
-}
-
-RDEPENDS_${PN} += "openssl"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/ca-certificates/ca-certificates_20170717.bb b/import-layers/yocto-poky/meta/recipes-support/ca-certificates/ca-certificates_20170717.bb
new file mode 100644
index 0000000..7d59fa6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/ca-certificates/ca-certificates_20170717.bb
@@ -0,0 +1,85 @@
+SUMMARY = "Common CA certificates"
+DESCRIPTION = "This package includes PEM files of CA certificates to allow \
+SSL-based applications to check for the authenticity of SSL connections. \
+This derived from Debian's CA Certificates."
+HOMEPAGE = "http://packages.debian.org/sid/ca-certificates"
+SECTION = "misc"
+LICENSE = "GPL-2.0+ & MPL-2.0"
+LIC_FILES_CHKSUM = "file://debian/copyright;md5=e7358b9541ccf3029e9705ed8de57968"
+
+# This is needed to ensure we can run the postinst at image creation time
+DEPENDS = ""
+DEPENDS_class-native = "openssl-native"
+DEPENDS_class-nativesdk = "openssl-native"
+# Need c_rehash from openssl and run-parts from debianutils
+PACKAGE_WRITE_DEPS += "openssl-native debianutils-native"
+
+SRCREV = "34b8e19e541b8af4076616b2e170c7a70cdaded0"
+
+SRC_URI = "git://anonscm.debian.org/collab-maint/ca-certificates.git \
+           file://0002-update-ca-certificates-use-SYSROOT.patch \
+           file://0001-update-ca-certificates-don-t-use-Debianisms-in-run-p.patch \
+           file://update-ca-certificates-support-Toybox.patch \
+           file://default-sysroot.patch \
+           file://sbindir.patch"
+
+S = "${WORKDIR}/git"
+SYSROOT_DIRS_class-native += "/etc"
+
+inherit allarch
+
+EXTRA_OEMAKE = "\
+    'CERTSDIR=${datadir}/ca-certificates' \
+    'SBINDIR=${sbindir}' \
+"
+
+do_compile_prepend() {
+    oe_runmake clean
+}
+
+do_install () {
+    install -d ${D}${datadir}/ca-certificates \
+               ${D}${sysconfdir}/ssl/certs \
+               ${D}${sysconfdir}/ca-certificates/update.d
+    oe_runmake 'DESTDIR=${D}' install
+
+    install -d ${D}${mandir}/man8
+    install -m 0644 sbin/update-ca-certificates.8 ${D}${mandir}/man8/
+
+    install -d ${D}${sysconfdir}
+    {
+        echo "# Lines starting with # will be ignored"
+        echo "# Lines starting with ! will remove certificate on next update"
+        echo "#"
+        find ${D}${datadir}/ca-certificates -type f -name '*.crt' | \
+            sed 's,^${D}${datadir}/ca-certificates/,,'
+    } >${D}${sysconfdir}/ca-certificates.conf
+}
+
+do_install_append_class-target () {
+    sed -i -e 's,/etc/,${sysconfdir}/,' \
+           -e 's,/usr/share/,${datadir}/,' \
+           -e 's,/usr/local,${prefix}/local,' \
+        ${D}${sbindir}/update-ca-certificates \
+        ${D}${mandir}/man8/update-ca-certificates.8
+}
+
+pkg_postinst_${PN} () {
+    SYSROOT="$D" $D${sbindir}/update-ca-certificates
+}
+
+CONFFILES_${PN} += "${sysconfdir}/ca-certificates.conf"
+
+# Postinsts don't seem to be run for nativesdk packages when populating SDKs.
+CONFFILES_${PN}_append_class-nativesdk = " ${sysconfdir}/ssl/certs/ca-certificates.crt"
+do_install_append_class-nativesdk () {
+    SYSROOT="${D}${SDKPATHNATIVE}" ${D}${sbindir}/update-ca-certificates
+}
+
+do_install_append_class-native () {
+    SYSROOT="${D}${base_prefix}" ${D}${sbindir}/update-ca-certificates
+}
+
+RDEPENDS_${PN} += "openssl"
+
+BBCLASSEXTEND = "native nativesdk"
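
The recipe above writes ${sysconfdir}/ca-certificates.conf at install time and rebuilds the trust bundle from it in the postinst. As a rough illustration only (not the actual update-ca-certificates script, which also runs the update.d hooks and c_rehash), the conf file lists certificates relative to ${datadir}/ca-certificates and the bundle rebuild amounts to the sketch below; SYSROOT, CONF and BUNDLE are example paths, not values taken from the recipe:

    #!/bin/sh
    # Minimal sketch of the bundle rebuild, assuming an image rootfs under $SYSROOT.
    SYSROOT=/tmp/rootfs                                # example path
    CONF=$SYSROOT/etc/ca-certificates.conf             # written by do_install above
    BUNDLE=$SYSROOT/etc/ssl/certs/ca-certificates.crt  # what RRECOMMENDS users consume

    : > "$BUNDLE"
    # Skip comment lines ('#') and removal markers ('!'), as documented in the conf header.
    grep -v '^[#!]' "$CONF" | while read -r crt; do
        [ -n "$crt" ] || continue
        cat "$SYSROOT/usr/share/ca-certificates/$crt" >> "$BUNDLE"
    done
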
diff --git a/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000099.patch b/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000099.patch
new file mode 100644
index 0000000..96ff1b0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000099.patch
@@ -0,0 +1,41 @@
+From c9332fa5e84f24da300b42b1a931ade929d3e27d Mon Sep 17 00:00:00 2001
+From: Even Rouault <even.rouault@spatialys.com>
+Date: Tue, 1 Aug 2017 17:17:06 +0200
+Subject: [PATCH] file: output the correct buffer to the user
+
+Regression brought by 7c312f84ea930d8 (April 2017)
+
+CVE: CVE-2017-1000099
+
+Bug: https://curl.haxx.se/docs/adv_20170809C.html
+
+Credit to OSS-Fuzz for the discovery
+
+Upstream-Status: Backport
+https://github.com/curl/curl/commit/c9332fa5e84f24da300b42b1a931ade929d3e27d
+
+Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
+---
+ lib/file.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/lib/file.c b/lib/file.c
+index bd426eac2..666cbe75b 100644
+--- a/lib/file.c
++++ b/lib/file.c
+@@ -499,11 +499,11 @@ static CURLcode file_do(struct connectdata *conn, bool *done)
+              Curl_month[tm->tm_mon],
+              tm->tm_year + 1900,
+              tm->tm_hour,
+              tm->tm_min,
+              tm->tm_sec);
+-    result = Curl_client_write(conn, CLIENTWRITE_BOTH, buf, 0);
++    result = Curl_client_write(conn, CLIENTWRITE_BOTH, header, 0);
+     if(!result)
+       /* set the file size to make it available post transfer */
+       Curl_pgrsSetDownloadSize(data, expected_size);
+     return result;
+   }
+-- 
+2.13.3
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000100.patch b/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000100.patch
index 508df8c..f74f1dd 100644
--- a/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000100.patch
+++ b/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000100.patch
@@ -6,7 +6,7 @@
 ... and thereby avoid telling send() to send off more bytes than the
 size of the buffer!
 
-CVE-2017-1000100
+CVE: CVE-2017-1000100
 
 Bug: https://curl.haxx.se/docs/adv_20170809B.html
 Reported-by: Even Rouault
@@ -16,17 +16,15 @@
 Upstream-Status: Backport
 https://github.com/curl/curl/commit/358b2b131ad6c095696f20dcfa62b8305263f898
 
-CVE: CVE-2017-1000100
-Signed-off-by: Armin Kuster <akuster@mvista.com>
-
+Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
 ---
- lib/tftp.c | 7 ++++++-
+ lib/tftp.c |    7 ++++++-
  1 file changed, 6 insertions(+), 1 deletion(-)
 
-Index: curl-7.53.1/lib/tftp.c
-===================================================================
---- curl-7.53.1.orig/lib/tftp.c
-+++ curl-7.53.1/lib/tftp.c
+diff --git a/lib/tftp.c b/lib/tftp.c
+index 02bd842..f6f4bce 100644
+--- a/lib/tftp.c
++++ b/lib/tftp.c
 @@ -5,7 +5,7 @@
   *                            | (__| |_| |  _ <| |___
   *                             \___|\___/|_| \_\_____|
@@ -36,7 +34,7 @@
   *
   * This software is licensed as described in the file COPYING, which
   * you should have received as part of this distribution. The terms
-@@ -490,6 +490,11 @@ static CURLcode tftp_send_first(tftp_sta
+@@ -491,6 +491,11 @@ static CURLcode tftp_send_first(tftp_state_data_t *state, tftp_event_t event)
      if(result)
        return result;
  
@@ -48,3 +46,6 @@
      snprintf((char *)state->spacket.data+2,
               state->blksize,
               "%s%c%s%c", filename, '\0',  mode, '\0');
+-- 
+1.7.9.5
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000101.patch b/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000101.patch
index 9eef5e2..c300fff 100644
--- a/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000101.patch
+++ b/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000101.patch
@@ -6,15 +6,17 @@
 
 Added test 1289 to verify.
 
-CVE-2017-1000101
+CVE: CVE-2017-1000101
 
 Bug: https://curl.haxx.se/docs/adv_20170809A.html
 Reported-by: Brian Carpenter
 
 Upstream-Status: Backport
-CVE: CVE-2017-1000101
-Signed-off-by: Armin Kuster <akuster@mvista.com>
+https://github.com/curl/curl/commit/453e7a7a03a2cec749abd3878a48e728c515cca7
 
+Rebase the tests/data/Makefile.inc changes for curl 7.54.1.
+
+Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
 ---
  src/tool_urlglob.c      |  5 ++++-
  tests/data/Makefile.inc |  2 +-
@@ -22,11 +24,11 @@
  3 files changed, 40 insertions(+), 2 deletions(-)
  create mode 100644 tests/data/test1289
 
-Index: curl-7.53.1/src/tool_urlglob.c
-===================================================================
---- curl-7.53.1.orig/src/tool_urlglob.c
-+++ curl-7.53.1/src/tool_urlglob.c
-@@ -269,7 +269,10 @@ static CURLcode glob_range(URLGlob *glob
+diff --git a/src/tool_urlglob.c b/src/tool_urlglob.c
+index 6b1ece0..d56dcd9 100644
+--- a/src/tool_urlglob.c
++++ b/src/tool_urlglob.c
+@@ -273,7 +273,10 @@ static CURLcode glob_range(URLGlob *glob, char **patternp,
          }
          errno = 0;
          max_n = strtoul(pattern, &endp, 10);
@@ -38,22 +40,24 @@
            pattern = endp+1;
            errno = 0;
            step_n = strtoul(pattern, &endp, 10);
-Index: curl-7.53.1/tests/data/Makefile.inc
-===================================================================
---- curl-7.53.1.orig/tests/data/Makefile.inc
-+++ curl-7.53.1/tests/data/Makefile.inc
-@@ -131,6 +131,7 @@ test1244 test1245 test1246 test1247 test
- test1252 test1253 test1254 test1255 test1256 test1257 test1258 test1259 \
+diff --git a/tests/data/Makefile.inc b/tests/data/Makefile.inc
+index 155320a..7adbee6 100644
+--- a/tests/data/Makefile.inc
++++ b/tests/data/Makefile.inc
+@@ -132,7 +132,7 @@ test1252 test1253 test1254 test1255 test1256 test1257 test1258 test1259 \
+ test1260 test1261 test1262 \
  \
- test1280 test1281 test1282 test1283 test1284 test1285 test1286 \
-+test1289 \
+ test1280 test1281 test1282 test1283 test1284 test1285 test1286 test1287 \
+-test1288 \
++test1288 test1289 \
  \
  test1300 test1301 test1302 test1303 test1304 test1305 test1306 test1307 \
  test1308 test1309 test1310 test1311 test1312 test1313 test1314 test1315 \
-Index: curl-7.53.1/tests/data/test1289
-===================================================================
+diff --git a/tests/data/test1289 b/tests/data/test1289
+new file mode 100644
+index 0000000..d679cc0
 --- /dev/null
-+++ curl-7.53.1/tests/data/test1289
++++ b/tests/data/test1289
 @@ -0,0 +1,35 @@
 +<testcase>
 +<info>
@@ -90,3 +94,6 @@
 +</errorcode>
 +</verify>
 +</testcase>
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000254.patch b/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000254.patch
new file mode 100644
index 0000000..2b0798b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/curl/curl/CVE-2017-1000254.patch
@@ -0,0 +1,138 @@
+From 1b2eba6f9745c064f7283e0ada8f46df9d9d6e42 Mon Sep 17 00:00:00 2001
+From: Li Zhou <li.zhou@windriver.com>
+Date: Mon, 23 Oct 2017 00:26:50 -0700
+Subject: [PATCH] FTP: zero terminate the entry path even on bad input
+
+... a single double quote could leave the entry path buffer without a zero
+terminating byte. CVE-2017-1000254
+
+Test 1152 added to verify.
+
+Reported-by: Max Dymond
+Bug: https://curl.haxx.se/docs/adv_20171004.html
+
+Upstream-Status: Backport
+CVE: CVE-2017-1000254
+Signed-off-by: Li Zhou <li.zhou@windriver.com>
+---
+ lib/ftp.c               |  7 ++++--
+ tests/data/Makefile.inc |  2 ++
+ tests/data/test1152     | 61 +++++++++++++++++++++++++++++++++++++++++++++++++
+ 3 files changed, 68 insertions(+), 2 deletions(-)
+ create mode 100644 tests/data/test1152
+
+diff --git a/lib/ftp.c b/lib/ftp.c
+index 5edec37..493dbf9 100644
+--- a/lib/ftp.c
++++ b/lib/ftp.c
+@@ -2826,6 +2826,7 @@ static CURLcode ftp_statemach_act(struct connectdata *conn)
+         const size_t buf_size = data->set.buffer_size;
+         char *dir;
+         char *store;
++        bool entry_extracted = FALSE;
+ 
+         dir = malloc(nread + 1);
+         if(!dir)
+@@ -2857,7 +2858,7 @@ static CURLcode ftp_statemach_act(struct connectdata *conn)
+               }
+               else {
+                 /* end of path */
+-                *store = '\0'; /* zero terminate */
++                entry_extracted = TRUE;
+                 break; /* get out of this loop */
+               }
+             }
+@@ -2866,7 +2867,9 @@ static CURLcode ftp_statemach_act(struct connectdata *conn)
+             store++;
+             ptr++;
+           }
+-
++          *store = '\0'; /* zero terminate */
++        }
++        if(entry_extracted) {
+           /* If the path name does not look like an absolute path (i.e.: it
+              does not start with a '/'), we probably need some server-dependent
+              adjustments. For example, this is the case when connecting to
+diff --git a/tests/data/Makefile.inc b/tests/data/Makefile.inc
+index 7adbee6..5284654 100644
+--- a/tests/data/Makefile.inc
++++ b/tests/data/Makefile.inc
+@@ -121,6 +121,8 @@ test1120 test1121 test1122 test1123 test1124 test1125 test1126 test1127 \
+ test1128 test1129 test1130 test1131 test1132 test1133 test1134 test1135 \
+ test1136 test1137 test1138 test1139 test1140 test1141 test1142 test1143 \
+ test1144 test1145 test1146 \
++test1152 \
++\
+ test1200 test1201 test1202 test1203 test1204 test1205 test1206 test1207 \
+ test1208 test1209 test1210 test1211 test1212 test1213 test1214 test1215 \
+ test1216 test1217 test1218 test1219 \
+diff --git a/tests/data/test1152 b/tests/data/test1152
+new file mode 100644
+index 0000000..aa8c0a7
+--- /dev/null
++++ b/tests/data/test1152
+@@ -0,0 +1,61 @@
++<testcase>
++<info>
++<keywords>
++FTP
++PASV
++LIST
++</keywords>
++</info>
++#
++# Server-side
++<reply>
++<servercmd>
++REPLY PWD 257 "just one
++</servercmd>
++
++# When doing LIST, we get the default list output hard-coded in the test
++# FTP server
++<data mode="text">
++total 20
++drwxr-xr-x   8 98       98           512 Oct 22 13:06 .
++drwxr-xr-x   8 98       98           512 Oct 22 13:06 ..
++drwxr-xr-x   2 98       98           512 May  2  1996 curl-releases
++-r--r--r--   1 0        1             35 Jul 16  1996 README
++lrwxrwxrwx   1 0        1              7 Dec  9  1999 bin -> usr/bin
++dr-xr-xr-x   2 0        1            512 Oct  1  1997 dev
++drwxrwxrwx   2 98       98           512 May 29 16:04 download.html
++dr-xr-xr-x   2 0        1            512 Nov 30  1995 etc
++drwxrwxrwx   2 98       1            512 Oct 30 14:33 pub
++dr-xr-xr-x   5 0        1            512 Oct  1  1997 usr
++</data>
++</reply>
++
++#
++# Client-side
++<client>
++<server>
++ftp
++</server>
++ <name>
++FTP with uneven quote in PWD response
++ </name>
++ <command>
++ftp://%HOSTIP:%FTPPORT/test-1152/
++</command>
++</client>
++
++#
++# Verify data after the test has been "shot"
++<verify>
++<protocol>
++USER anonymous
++PASS ftp@example.com
++PWD
++CWD test-1152
++EPSV
++TYPE A
++LIST
++QUIT
++</protocol>
++</verify>
++</testcase>
+-- 
+2.11.0
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/curl/curl/reproducible-mkhelp.patch b/import-layers/yocto-poky/meta/recipes-support/curl/curl/reproducible-mkhelp.patch
new file mode 100644
index 0000000..268bbeb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/curl/curl/reproducible-mkhelp.patch
@@ -0,0 +1,32 @@
+From 1fe92fd3dd64c7228f6ff41e3fc16c4f2392471a Mon Sep 17 00:00:00 2001
+From: Juro Bystricky <juro.bystricky@intel.com>
+Date: Fri, 27 Oct 2017 08:28:25 -0700
+Subject: mkhelp.pl: support reproducible build
+
+Do not generate a line with the current date, such as:
+
+* Generation time: Tue Oct-24 18:01:41 2017
+
+This will improve reproducibility. The generated string is only
+part of a comment, so there should be no adverse consequences.
+
+Upstream-Status: Submitted [ https://github.com/curl/curl/pull/2026 ]
+
+Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
+
+diff --git a/src/mkhelp.pl b/src/mkhelp.pl
+index 270daa2..757f024 100755
+--- a/src/mkhelp.pl
++++ b/src/mkhelp.pl
+@@ -102,11 +102,9 @@ while(<READ>) {
+ }
+ close(READ);
+ 
+-$now = localtime;
+ print <<HEAD
+ /*
+  * NEVER EVER edit this manually, fix the mkhelp.pl script instead!
+- * Generation time: $now
+  */
+ #ifdef USE_MANUAL
+ #include "tool_hugehelp.h"
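
With the generation timestamp dropped, the source that mkhelp.pl emits no longer differs from build to build. A minimal check, assuming two independent build trees named build-a and build-b (example paths):

    # tool_hugehelp.c is the file mkhelp.pl generates; identical inputs should now
    # yield identical output regardless of when the build ran.
    diff build-a/src/tool_hugehelp.c build-b/src/tool_hugehelp.c && echo identical
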
diff --git a/import-layers/yocto-poky/meta/recipes-support/curl/curl_7.53.1.bb b/import-layers/yocto-poky/meta/recipes-support/curl/curl_7.53.1.bb
deleted file mode 100644
index c3e1f89..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/curl/curl_7.53.1.bb
+++ /dev/null
@@ -1,76 +0,0 @@
-SUMMARY = "Command line tool and library for client-side URL transfers"
-HOMEPAGE = "http://curl.haxx.se/"
-BUGTRACKER = "http://curl.haxx.se/mail/list.cgi?list=curl-tracker"
-SECTION = "console/network"
-LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://COPYING;beginline=8;md5=3a34942f4ae3fbf1a303160714e664ac"
-
-SRC_URI = "http://curl.haxx.se/download/curl-${PV}.tar.bz2 \
-           file://0001-replace-krb5-config-with-pkg-config.patch \
-"
-
-# curl likes to set -g0 in CFLAGS, so we stop it
-# from mucking around with debug options
-#
-SRC_URI += " file://configure_ac.patch \
-             file://CVE-2017-1000100.patch \
-             file://CVE-2017-1000101.patch \
-             "
-
-SRC_URI[md5sum] = "fb1f03a142236840c1a77c035fa4c542"
-SRC_URI[sha256sum] = "1c7207c06d75e9136a944a2e0528337ce76f15b9ec9ae4bb30d703b59bf530e8"
-
-CVE_PRODUCT = "libcurl"
-inherit autotools pkgconfig binconfig multilib_header
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} gnutls proxy threaded-resolver zlib"
-PACKAGECONFIG_class-native = "ipv6 proxy ssl threaded-resolver zlib"
-PACKAGECONFIG_class-nativesdk = "ipv6 proxy ssl threaded-resolver zlib"
-
-PACKAGECONFIG[dict] = "--enable-dict,--disable-dict,"
-PACKAGECONFIG[gnutls] = "--with-gnutls,--without-gnutls,gnutls"
-PACKAGECONFIG[gopher] = "--enable-gopher,--disable-gopher,"
-PACKAGECONFIG[imap] = "--enable-imap,--disable-imap,"
-PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
-PACKAGECONFIG[ldap] = "--enable-ldap,--disable-ldap,"
-PACKAGECONFIG[ldaps] = "--enable-ldaps,--disable-ldaps,"
-PACKAGECONFIG[libidn] = "--with-libidn,--without-libidn,libidn"
-PACKAGECONFIG[libssh2] = "--with-libssh2,--without-libssh2,libssh2"
-PACKAGECONFIG[pop3] = "--enable-pop3,--disable-pop3,"
-PACKAGECONFIG[proxy] = "--enable-proxy,--disable-proxy,"
-PACKAGECONFIG[rtmpdump] = "--with-librtmp,--without-librtmp,rtmpdump"
-PACKAGECONFIG[rtsp] = "--enable-rtsp,--disable-rtsp,"
-PACKAGECONFIG[smb] = "--enable-smb,--disable-smb,"
-PACKAGECONFIG[smtp] = "--enable-smtp,--disable-smtp,"
-PACKAGECONFIG[ssl] = "--with-ssl --with-random=/dev/urandom,--without-ssl,openssl"
-PACKAGECONFIG[telnet] = "--enable-telnet,--disable-telnet,"
-PACKAGECONFIG[tftp] = "--enable-tftp,--disable-tftp,"
-PACKAGECONFIG[threaded-resolver] = "--enable-threaded-resolver,--disable-threaded-resolver"
-PACKAGECONFIG[zlib] = "--with-zlib=${STAGING_LIBDIR}/../,--without-zlib,zlib"
-PACKAGECONFIG[krb5] = "--with-gssapi,--without-gssapi,krb5"
-
-EXTRA_OECONF = " \
-    --enable-crypto-auth \
-    --with-ca-bundle=${sysconfdir}/ssl/certs/ca-certificates.crt \
-    --without-libmetalink \
-    --without-libpsl \
-    --without-nghttp2 \
-"
-
-do_install_append() {
-	oe_multilib_header curl/curlbuild.h
-}
-
-do_install_append_class-target() {
-	# cleanup buildpaths from curl-config
-	sed -i -e 's,${STAGING_DIR_HOST},,g' ${D}${bindir}/curl-config
-}
-
-PACKAGES =+ "lib${BPN}"
-
-FILES_lib${BPN} = "${libdir}/lib*.so.*"
-RRECOMMENDS_lib${BPN} += "ca-certificates"
-
-FILES_${PN} += "${datadir}/zsh"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/curl/curl_7.54.1.bb b/import-layers/yocto-poky/meta/recipes-support/curl/curl_7.54.1.bb
new file mode 100644
index 0000000..58f0531
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/curl/curl_7.54.1.bb
@@ -0,0 +1,87 @@
+SUMMARY = "Command line tool and library for client-side URL transfers"
+HOMEPAGE = "http://curl.haxx.se/"
+BUGTRACKER = "http://curl.haxx.se/mail/list.cgi?list=curl-tracker"
+SECTION = "console/network"
+LICENSE = "MIT"
+LIC_FILES_CHKSUM = "file://COPYING;beginline=8;md5=3a34942f4ae3fbf1a303160714e664ac"
+
+SRC_URI = "http://curl.haxx.se/download/curl-${PV}.tar.bz2 \
+           file://0001-replace-krb5-config-with-pkg-config.patch \
+           file://CVE-2017-1000099.patch \
+           file://CVE-2017-1000100.patch \
+           file://CVE-2017-1000101.patch \
+           file://CVE-2017-1000254.patch \
+"
+
+SRC_URI_append_class-target = " \
+           file://reproducible-mkhelp.patch \
+"
+
+# curl likes to set -g0 in CFLAGS, so we stop it
+# from mucking around with debug options
+#
+SRC_URI += " file://configure_ac.patch"
+
+SRC_URI[md5sum] = "6b6eb722f512e7a24855ff084f54fe55"
+SRC_URI[sha256sum] = "fdfc4df2d001ee0c44ec071186e770046249263c491fcae48df0e1a3ca8f25a0"
+
+CVE_PRODUCT = "libcurl"
+inherit autotools pkgconfig binconfig multilib_header
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)} gnutls proxy threaded-resolver zlib"
+PACKAGECONFIG_class-native = "ipv6 proxy ssl threaded-resolver zlib"
+PACKAGECONFIG_class-nativesdk = "ipv6 proxy ssl threaded-resolver zlib"
+
+# 'ares' and 'threaded-resolver' are mutually exclusive
+PACKAGECONFIG[ares] = "--enable-ares,--disable-ares,c-ares"
+PACKAGECONFIG[dict] = "--enable-dict,--disable-dict,"
+PACKAGECONFIG[gnutls] = "--with-gnutls,--without-gnutls,gnutls"
+PACKAGECONFIG[gopher] = "--enable-gopher,--disable-gopher,"
+PACKAGECONFIG[imap] = "--enable-imap,--disable-imap,"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
+PACKAGECONFIG[ldap] = "--enable-ldap,--disable-ldap,"
+PACKAGECONFIG[ldaps] = "--enable-ldaps,--disable-ldaps,"
+PACKAGECONFIG[libidn] = "--with-libidn,--without-libidn,libidn"
+PACKAGECONFIG[libssh2] = "--with-libssh2,--without-libssh2,libssh2"
+PACKAGECONFIG[pop3] = "--enable-pop3,--disable-pop3,"
+PACKAGECONFIG[proxy] = "--enable-proxy,--disable-proxy,"
+PACKAGECONFIG[rtmpdump] = "--with-librtmp,--without-librtmp,rtmpdump"
+PACKAGECONFIG[rtsp] = "--enable-rtsp,--disable-rtsp,"
+PACKAGECONFIG[smb] = "--enable-smb,--disable-smb,"
+PACKAGECONFIG[smtp] = "--enable-smtp,--disable-smtp,"
+PACKAGECONFIG[ssl] = "--with-ssl --with-random=/dev/urandom,--without-ssl,openssl"
+PACKAGECONFIG[telnet] = "--enable-telnet,--disable-telnet,"
+PACKAGECONFIG[tftp] = "--enable-tftp,--disable-tftp,"
+PACKAGECONFIG[threaded-resolver] = "--enable-threaded-resolver,--disable-threaded-resolver"
+PACKAGECONFIG[zlib] = "--with-zlib=${STAGING_LIBDIR}/../,--without-zlib,zlib"
+PACKAGECONFIG[krb5] = "--with-gssapi,--without-gssapi,krb5"
+PACKAGECONFIG[nghttp2] = "--with-nghttp2,--without-nghttp2,nghttp2"
+
+EXTRA_OECONF = " \
+    --enable-crypto-auth \
+    --with-ca-bundle=${sysconfdir}/ssl/certs/ca-certificates.crt \
+    --without-libmetalink \
+    --without-libpsl \
+"
+
+do_install_append() {
+	oe_multilib_header curl/curlbuild.h
+}
+
+do_install_append_class-target() {
+	# cleanup buildpaths from curl-config
+	sed -i \
+	    -e 's,--sysroot=${STAGING_DIR_TARGET},,g' \
+	    -e 's,--with-libtool-sysroot=${STAGING_DIR_TARGET},,g' \
+	    -e 's|${DEBUG_PREFIX_MAP}||g' \
+	    ${D}${bindir}/curl-config
+}
+
+PACKAGES =+ "lib${BPN}"
+
+FILES_lib${BPN} = "${libdir}/lib*.so.*"
+RRECOMMENDS_lib${BPN} += "ca-certificates"
+
+FILES_${PN} += "${datadir}/zsh"
+
+BBCLASSEXTEND = "native nativesdk"
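
The do_install_append_class-target above strips host build paths out of curl-config so the packaged script carries no sysroot references. A small sketch of the effect, using a made-up STAGING_DIR_TARGET value rather than anything from a real build:

    # Example line as it might appear in an uncleaned curl-config:
    line='CFLAGS="--sysroot=/build/tmp/work/core2-64-poky-linux/curl/7.54.1-r0/recipe-sysroot -O2"'
    echo "$line" | sed -e 's,--sysroot=/build/tmp/work/core2-64-poky-linux/curl/7.54.1-r0/recipe-sysroot,,g'
    # prints: CFLAGS=" -O2"
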
diff --git a/import-layers/yocto-poky/meta/recipes-support/db/db/0001-configure-Add-explicit-tag-options-to-libtool-invoca.patch b/import-layers/yocto-poky/meta/recipes-support/db/db/0001-configure-Add-explicit-tag-options-to-libtool-invoca.patch
new file mode 100644
index 0000000..cb28db1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/db/db/0001-configure-Add-explicit-tag-options-to-libtool-invoca.patch
@@ -0,0 +1,42 @@
+From 32e5943a3c4637d39e4d65b544dcb99e280210e3 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Sun, 23 Jul 2017 10:54:26 -0700
+Subject: [PATCH] configure: Add explicit tag options to libtool invocation
+
+This helps cross compilation when tag inference via heuristics
+fails because the CC variable contains -fPIE -pie and libtool
+smartly removes it when building libraries
+
+Upstream-Status: Pending
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+ dist/configure.ac | 12 ++++++------
+ 1 file changed, 6 insertions(+), 6 deletions(-)
+
+diff --git a/dist/configure.ac b/dist/configure.ac
+index 689f3b8..9c14bdb 100644
+--- a/dist/configure.ac
++++ b/dist/configure.ac
+@@ -366,12 +366,12 @@ LIBTOOL="./libtool"
+ 
+ INSTALLER="\$(LIBTOOL) --mode=install cp -p"
+ 
+-MAKEFILE_CC="\$(LIBTOOL) --mode=compile ${MAKEFILE_CC}"
+-MAKEFILE_SOLINK="\$(LIBTOOL) --mode=link ${MAKEFILE_CCLINK} -avoid-version"
+-MAKEFILE_CCLINK="\$(LIBTOOL) --mode=link ${MAKEFILE_CCLINK}"
+-MAKEFILE_CXX="\$(LIBTOOL) --mode=compile ${MAKEFILE_CXX}"
+-MAKEFILE_XSOLINK="\$(LIBTOOL) --mode=link ${MAKEFILE_CXXLINK} -avoid-version"
+-MAKEFILE_CXXLINK="\$(LIBTOOL) --mode=link ${MAKEFILE_CXXLINK}"
++MAKEFILE_CC="\$(LIBTOOL) --tag=CC --mode=compile ${MAKEFILE_CC}"
++MAKEFILE_SOLINK="\$(LIBTOOL) --tag=CC --mode=link ${MAKEFILE_CCLINK} -avoid-version"
++MAKEFILE_CCLINK="\$(LIBTOOL) --tag=CC --mode=link ${MAKEFILE_CCLINK}"
++MAKEFILE_CXX="\$(LIBTOOL) --tag=CXX --mode=compile ${MAKEFILE_CXX}"
++MAKEFILE_XSOLINK="\$(LIBTOOL) --tag=CXX --mode=link ${MAKEFILE_CXXLINK} -avoid-version"
++MAKEFILE_CXXLINK="\$(LIBTOOL) --tag=CXX --mode=link ${MAKEFILE_CXXLINK}"
+ 
+ 
+ case "$host_os" in
+-- 
+2.13.3
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/db/db/sequence-type.patch b/import-layers/yocto-poky/meta/recipes-support/db/db/sequence-type.patch
new file mode 100644
index 0000000..a6fe3d6
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/db/db/sequence-type.patch
@@ -0,0 +1,59 @@
+configure wants to use host-specific types to get a 64-bit integer in db.h
+instead of using an alias such as int64_t.  This means that the header differs
+in multilib environments for no good reason, so replace the type with the alias
+in stdint.h.
+
+This then breaks the overly complicated type check but as we know that int64_t
+exists and works, we can just delete that.
+
+Upstream-Status: Pending
+Signed-off-by: Ross Burton <ross.burton@intel.com>
+
+--- a/dist/aclocal/sequence.m4~	2013-09-09 16:35:02.000000000 +0100
++++ b/dist/aclocal/sequence.m4	2017-11-01 13:21:45.472295971 +0000
+@@ -24 +24 @@
+-		db_cv_seq_type="long"
++		db_cv_seq_type="int64_t"
+@@ -31 +31 @@
+-		db_cv_seq_type="long long"
++		db_cv_seq_type="int64_t"
+@@ -41,38 +41 @@
+-	# Test to see if we can declare variables of the appropriate size
+-	# and format them.  If we're cross-compiling, all we get is a link
+-	# test, which won't test for the appropriate printf format strings.
+-	if test "$db_cv_build_sequence" = "yes"; then
+-		AC_TRY_RUN([
+-		main() {
+-			$db_cv_seq_type l;
+-			unsigned $db_cv_seq_type u;
+-			char buf@<:@100@:>@;
+-
+-			buf@<:@0@:>@ = 'a';
+-			l = 9223372036854775807LL;
+-			(void)snprintf(buf, sizeof(buf), $db_cv_seq_fmt, l);
+-			if (strcmp(buf, "9223372036854775807"))
+-				return (1);
+-			u = 18446744073709551615ULL;
+-			(void)snprintf(buf, sizeof(buf), $db_cv_seq_ufmt, u);
+-			if (strcmp(buf, "18446744073709551615"))
+-				return (1);
+-			return (0);
+-		}],, [db_cv_build_sequence="no"],
+-		AC_TRY_LINK(,[
+-			$db_cv_seq_type l;
+-			unsigned $db_cv_seq_type u;
+-			char buf@<:@100@:>@;
+-
+-			buf@<:@0@:>@ = 'a';
+-			l = 9223372036854775807LL;
+-			(void)snprintf(buf, sizeof(buf), $db_cv_seq_fmt, l);
+-			if (strcmp(buf, "9223372036854775807"))
+-				return (1);
+-			u = 18446744073709551615ULL;
+-			(void)snprintf(buf, sizeof(buf), $db_cv_seq_ufmt, u);
+-			if (strcmp(buf, "18446744073709551615"))
+-				return (1);
+-			return (0);
+-		],, [db_cv_build_sequence="no"]))
+-	fi
++	db_cv_build_sequence="yes"
diff --git a/import-layers/yocto-poky/meta/recipes-support/db/db_5.3.28.bb b/import-layers/yocto-poky/meta/recipes-support/db/db_5.3.28.bb
index 26065bb..fb4befb 100644
--- a/import-layers/yocto-poky/meta/recipes-support/db/db_5.3.28.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/db/db_5.3.28.bb
@@ -22,14 +22,20 @@
 SRC_URI += "file://arm-thumb-mutex_db5.patch \
             file://fix-parallel-build.patch \
             file://0001-atomic-Rename-local-__atomic_compare_exchange-to-avo.patch \
+            file://0001-configure-Add-explicit-tag-options-to-libtool-invoca.patch \
+            file://sequence-type.patch \
            "
+# We are not interested in official latest 6.x versions;
+# let's track what debian is using.
+UPSTREAM_CHECK_URI = "${DEBIAN_MIRROR}/main/d/db5.3/"
+UPSTREAM_CHECK_REGEX = "db5\.3_(?P<pver>.+)\.orig"
 
 SRC_URI[md5sum] = "b99454564d5b4479750567031d66fe24"
 SRC_URI[sha256sum] = "e0a992d740709892e81f9d93f06daf305cf73fb81b545afe72478043172c3628"
 
 LIC_FILES_CHKSUM = "file://LICENSE;md5=ed1158e31437f4f87cdd4ab2b8613955"
 
-inherit autotools multilib_header
+inherit autotools
 
 # Put virtual/db in any appropriate provider of a
 # relational database, use it as a dependency in
@@ -72,22 +78,32 @@
 MUTEX_arm = "${ARM_MUTEX}"
 MUTEX_armeb = "${ARM_MUTEX}"
 EXTRA_OECONF += "${MUTEX} STRIP=true"
-EXTRA_OEMAKE_append_class-target = " LIBTOOL=${STAGING_BINDIR_CROSS}/${HOST_SYS}-libtool"
+EXTRA_OEMAKE += "LIBTOOL='./${HOST_SYS}-libtool'"
 
+EXTRA_AUTORECONF += "--exclude=autoheader  -I ${S}/dist/aclocal -I${S}/dist/aclocal_java"
 AUTOTOOLS_SCRIPT_PATH = "${S}/dist"
 
 # Cancel the site stuff - it's set for db3 and destroys the
 # configure.
 CONFIG_SITE = ""
-do_configure() {
-    cd ${B}
-	gnu-configize --force ${AUTOTOOLS_SCRIPT_PATH}
-	oe_runconf
+
+oe_runconf_prepend() {
+	. ${S}/dist/RELEASE
+	# Edit version information we couldn't pre-compute.
+	sed -i -e "s/__EDIT_DB_VERSION_FAMILY__/$DB_VERSION_FAMILY/g" \
+	    -e "s/__EDIT_DB_VERSION_RELEASE__/$DB_VERSION_RELEASE/g" \
+	    -e "s/__EDIT_DB_VERSION_MAJOR__/$DB_VERSION_MAJOR/g" \
+	    -e "s/__EDIT_DB_VERSION_MINOR__/$DB_VERSION_MINOR/g" \
+	    -e "s/__EDIT_DB_VERSION_PATCH__/$DB_VERSION_PATCH/g" \
+	    -e "s/__EDIT_DB_VERSION_STRING__/$DB_VERSION_STRING/g" \
+	    -e "s/__EDIT_DB_VERSION_FULL_STRING__/$DB_VERSION_FULL_STRING/g" \
+	    -e "s/__EDIT_DB_VERSION_UNIQUE_NAME__/$DB_VERSION_UNIQUE_NAME/g" \
+	    -e "s/__EDIT_DB_VERSION__/$DB_VERSION/g" ${S}/dist/configure
 }
 
 do_compile_prepend() {
     # Stop libtool adding RPATHs
-    sed -i -e 's|hardcode_into_libs=yes|hardcode_into_libs=no|' ${B}/libtool
+    sed -i -e 's|hardcode_into_libs=yes|hardcode_into_libs=no|' ${B}/${HOST_SYS}-libtool
 }
 
 do_install_append() {
@@ -97,8 +113,6 @@
 	ln -s db51/db.h ${D}/${includedir}/db.h
 	ln -s db51/db_cxx.h ${D}/${includedir}/db_cxx.h
 
-        oe_multilib_header db51/db.h
-
 	# The docs end up in /usr/docs - not right.
 	if test -d "${D}/${prefix}/docs"
 	then
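
The oe_runconf_prepend above expands the __EDIT_DB_VERSION_*__ placeholders that remain in the regenerated configure, taking the values from dist/RELEASE. A minimal sketch of the substitution, with made-up values standing in for the RELEASE contents:

    # Values as dist/RELEASE would define them (examples only).
    DB_VERSION_MAJOR=5
    DB_VERSION_MINOR=3
    DB_VERSION_PATCH=28
    DB_VERSION="$DB_VERSION_MAJOR.$DB_VERSION_MINOR.$DB_VERSION_PATCH"

    # Stand-in for one placeholder line in the configure script:
    printf 'VERSION=__EDIT_DB_VERSION__\n' | sed -e "s/__EDIT_DB_VERSION__/$DB_VERSION/g"
    # prints: VERSION=5.3.28
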
diff --git a/import-layers/yocto-poky/meta/recipes-support/debianutils/debianutils_4.8.1.1.bb b/import-layers/yocto-poky/meta/recipes-support/debianutils/debianutils_4.8.1.1.bb
new file mode 100644
index 0000000..1857d4b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/debianutils/debianutils_4.8.1.1.bb
@@ -0,0 +1,55 @@
+SUMMARY = "Miscellaneous utilities specific to Debian"
+SECTION = "base"
+LICENSE = "GPLv2 & SMAIL_GPL"
+LIC_FILES_CHKSUM = "file://debian/copyright;md5=f01a5203d50512fc4830b4332b696a9f"
+
+SRC_URI = "http://snapshot.debian.org/archive/debian/20170402T211732Z/pool/main/d/${BPN}/${BPN}_${PV}.tar.xz"
+# the package is taken from snapshots.debian.org; that source is static and goes stale
+# so we check the latest upstream from a directory that does get updated
+UPSTREAM_CHECK_URI = "${DEBIAN_MIRROR}/main/d/${BPN}/"
+
+SRC_URI[md5sum] = "ee5fcecaab071bd0c93e8a0cee65d6c4"
+SRC_URI[sha256sum] = "06446cd4c0d309fd31a0682c5c2f07f7613fb867f769414b9cc51f155ad73172"
+
+inherit autotools update-alternatives
+
+do_configure_prepend() {
+    sed -i -e 's:tempfile.1 which.1:which.1:g' ${S}/Makefile.am
+}
+
+do_install_append() {
+    if [ "${base_bindir}" != "${bindir}" ]; then
+        # Debian places some utils into ${base_bindir} as does busybox
+        install -d ${D}${base_bindir}
+        for app in run-parts tempfile; do
+            mv ${D}${bindir}/$app ${D}${base_bindir}/$app
+        done
+    fi
+}
+
+# Note that we package the update-alternatives name.
+#
+PACKAGES =+ "${PN}-run-parts"
+FILES_${PN}-run-parts = "${base_bindir}/run-parts.debianutils"
+
+RDEPENDS_${PN} += "${PN}-run-parts"
+RDEPENDS_${PN}_class-native = ""
+
+ALTERNATIVE_PRIORITY="30"
+ALTERNATIVE_${PN} = "add-shell installkernel remove-shell savelog tempfile which"
+
+ALTERNATIVE_PRIORITY_${PN}-run-parts = "60"
+ALTERNATIVE_${PN}-run-parts = "run-parts"
+
+ALTERNATIVE_${PN}-doc = "which.1"
+ALTERNATIVE_LINK_NAME[which.1] = "${mandir}/man1/which.1"
+
+ALTERNATIVE_LINK_NAME[add-shell]="${sbindir}/add-shell"
+ALTERNATIVE_LINK_NAME[installkernel]="${sbindir}/installkernel"
+ALTERNATIVE_LINK_NAME[remove-shell]="${sbindir}/remove-shell"
+ALTERNATIVE_LINK_NAME[run-parts]="${base_bindir}/run-parts"
+ALTERNATIVE_LINK_NAME[savelog]="${bindir}/savelog"
+ALTERNATIVE_LINK_NAME[tempfile]="${base_bindir}/tempfile"
+ALTERNATIVE_LINK_NAME[which]="${bindir}/which"
+
+BBCLASSEXTEND = "native"
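
The update-alternatives class turns the ALTERNATIVE_* variables above into registrations at package-install time. Roughly, for the run-parts entry (priority 60, link in ${base_bindir}), the registration is equivalent to the command below; the exact invocation depends on which alternatives provider the image uses, and /bin is assumed here for ${base_bindir}:

    #                          link            name      path to this provider       priority
    update-alternatives --install /bin/run-parts run-parts /bin/run-parts.debianutils 60

    # With no higher-priority provider (e.g. busybox) installed, /bin/run-parts
    # then resolves to run-parts.debianutils.
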
diff --git a/import-layers/yocto-poky/meta/recipes-support/debianutils/debianutils_4.8.1.bb b/import-layers/yocto-poky/meta/recipes-support/debianutils/debianutils_4.8.1.bb
deleted file mode 100644
index 12c63ee..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/debianutils/debianutils_4.8.1.bb
+++ /dev/null
@@ -1,54 +0,0 @@
-SUMMARY = "Miscellaneous utilities specific to Debian"
-SECTION = "base"
-LICENSE = "GPLv2 & SMAIL_GPL"
-LIC_FILES_CHKSUM = "file://debian/copyright;md5=f01a5203d50512fc4830b4332b696a9f"
-
-SRC_URI = "http://snapshot.debian.org/archive/debian/20161118T033019Z/pool/main/d/${BPN}/${BPN}_${PV}.tar.xz"
-# the package is taken from snapshots.debian.org; that source is static and goes stale
-# so we check the latest upstream from a directory that does get updated
-UPSTREAM_CHECK_URI = "${DEBIAN_MIRROR}/main/d/${BPN}/"
-
-
-SRC_URI[md5sum] = "44083fc66164746880dffd30d62d054b"
-SRC_URI[sha256sum] = "2c395c0bdcfe89de30828b1d25cc5549ded5225a6d3625fbcb2cc0881ef5f026"
-
-inherit autotools update-alternatives
-
-do_configure_prepend() {
-    sed -i -e 's:tempfile.1 which.1:which.1:g' ${S}/Makefile.am
-}
-
-do_install_append() {
-    if [ "${base_bindir}" != "${bindir}" ]; then
-        # Debian places some utils into ${base_bindir} as does busybox
-        install -d ${D}${base_bindir}
-        for app in run-parts tempfile; do
-            mv ${D}${bindir}/$app ${D}${base_bindir}/$app
-        done
-    fi
-}
-
-# Note that we package the update-alternatives name.
-#
-PACKAGES =+ "${PN}-run-parts"
-FILES_${PN}-run-parts = "${base_bindir}/run-parts.debianutils"
-
-RDEPENDS_${PN} += "${PN}-run-parts"
-RDEPENDS_${PN}_class-native = ""
-
-ALTERNATIVE_PRIORITY="30"
-ALTERNATIVE_${PN} = "add-shell installkernel remove-shell savelog tempfile which"
-ALTERNATIVE_${PN}-run-parts = "run-parts"
-
-ALTERNATIVE_${PN}-doc = "which.1"
-ALTERNATIVE_LINK_NAME[which.1] = "${mandir}/man1/which.1"
-
-ALTERNATIVE_LINK_NAME[add-shell]="${sbindir}/add-shell"
-ALTERNATIVE_LINK_NAME[installkernel]="${sbindir}/installkernel"
-ALTERNATIVE_LINK_NAME[remove-shell]="${sbindir}/remove-shell"
-ALTERNATIVE_LINK_NAME[run-parts]="${base_bindir}/run-parts"
-ALTERNATIVE_LINK_NAME[savelog]="${bindir}/savelog"
-ALTERNATIVE_LINK_NAME[tempfile]="${base_bindir}/tempfile"
-ALTERNATIVE_LINK_NAME[which]="${bindir}/which"
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-support/gdbm/files/ptest.patch b/import-layers/yocto-poky/meta/recipes-support/gdbm/files/ptest.patch
index 65236fb..b9461be 100644
--- a/import-layers/yocto-poky/meta/recipes-support/gdbm/files/ptest.patch
+++ b/import-layers/yocto-poky/meta/recipes-support/gdbm/files/ptest.patch
@@ -1,30 +1,41 @@
-Add install-ptest rules.
+From 4e4b70a4a3dcf1fdbee9e68bed3b62f42b197a3a Mon Sep 17 00:00:00 2001
+From: Josep Puigdemont <josep.puigdemont@enea.com>
+Date: Sun, 4 May 2014 16:02:07 +0200
+Subject: [PATCH] Add install-ptest rules.
 
 Signed-off-by: Josep Puigdemont <josep.puigdemont@enea.com>
 Signed-off-by: Maxin B. John <maxin.john@enea.com>
 Upstream-Status: Pending
 
-diff -ur a/Makefile.am b/Makefile.am
---- a/Makefile.am	2011-08-16 10:13:10.000000000 +0200
-+++ b/Makefile.am	2013-04-12 18:02:16.473715873 +0200
-@@ -31,3 +31,8 @@
- 	d=`date '+%d/%m/%Y'`; \
- 	sed 's|/\*@DIST_DATE@\*/|"'"$$d"'"|' $(srcdir)/src/version.c > \
- 		$(distdir)/src/version.c
+---
+ Makefile.am       |  5 +++++
+ tests/Makefile.am | 12 +++++++++++-
+ 2 files changed, 16 insertions(+), 1 deletion(-)
+
+diff --git a/Makefile.am b/Makefile.am
+index 4cdc734..24b99f0 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -45,3 +45,8 @@ ChangeLog:
+                 awk -f $(top_srcdir)/git2chg.awk                            \
+                     -v append=$(top_srcdir)/$(prev_change_log) > ChangeLog; \
+         fi
 +
 +install-ptest:
 +	@for subdir in $(SUBDIRS); do \
 +		$(MAKE) -C $$subdir DESTDIR=$(DESTDIR)/$$subdir $@; \
 +	done
-diff -ur a/tests/Makefile.am b/tests/Makefile.am
---- a/tests/Makefile.am	2011-11-11 19:39:42.000000000 +0100
-+++ b/tests/Makefile.am	2013-04-12 18:30:57.066301037 +0200
-@@ -132,4 +132,14 @@
+diff --git a/tests/Makefile.am b/tests/Makefile.am
+index 3dbb580..22ffc44 100644
+--- a/tests/Makefile.am
++++ b/tests/Makefile.am
+@@ -130,4 +130,14 @@ dtfetch_LDADD = ../src/libgdbm.la ../compat/libgdbm_compat.la
  dtdel_LDADD = ../src/libgdbm.la ../compat/libgdbm_compat.la
  d_creat_ce_LDADD = ../src/libgdbm.la ../compat/libgdbm_compat.la
  
+-
 +buildtests: $(check_PROGRAMS) $(TESTSUITE)
- 
++
 +install-ptest: $(check_PROGRAMS) $(TESTSUITE)
 +	@$(INSTALL) -d $(DESTDIR)
 +	@for file in $^; do \
@@ -34,3 +45,6 @@
 +			$(INSTALL_PROGRAM) $$file $(DESTDIR) ; \
 +		fi \
 +	done
+-- 
+2.11.0
+
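
The install-ptest rules above let the gdbm recipe build the test programs without running them and then copy them into the ptest package. Outside of BitBake the same flow looks roughly like this (oe_runmake wraps plain make; the DESTDIR is an example path):

    make -C tests buildtests                            # compile check_PROGRAMS only
    make -C tests install-ptest DESTDIR=/tmp/gdbm-ptest
    ls /tmp/gdbm-ptest                                  # installed test binaries for on-target runs
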
diff --git a/import-layers/yocto-poky/meta/recipes-support/gdbm/gdbm_1.12.bb b/import-layers/yocto-poky/meta/recipes-support/gdbm/gdbm_1.12.bb
deleted file mode 100644
index c380073..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gdbm/gdbm_1.12.bb
+++ /dev/null
@@ -1,43 +0,0 @@
-SUMMARY = "Key/value database library with extensible hashing"
-HOMEPAGE = "http://www.gnu.org/software/gdbm/"
-SECTION = "libs"
-LICENSE = "GPLv3"
-LIC_FILES_CHKSUM = "file://COPYING;md5=241da1b9fe42e642cbb2c24d5e0c4d24"
-
-
-SRC_URI = "${GNU_MIRROR}/gdbm/gdbm-${PV}.tar.gz \
-           file://run-ptest \
-           file://ptest.patch \
-          "
-
-SRC_URI[md5sum] = "9ce96ff4c99e74295ea19040931c8fb9"
-SRC_URI[sha256sum] = "d97b2166ee867fd6ca5c022efee80702d6f30dd66af0e03ed092285c3af9bcea"
-
-inherit autotools gettext texinfo lib_package ptest
-
-# Needed for dbm python module
-EXTRA_OECONF = "-enable-libgdbm-compat"
-
-# Stop presence of dbm/nbdm on the host contaminating builds
-CACHED_CONFIGUREVARS += "ac_cv_lib_ndbm_main=no ac_cv_lib_dbm_main=no"
-
-BBCLASSEXTEND = "native nativesdk"
-
-do_install_append () {
-    # Create a symlink to ndbm.h and gdbm.h in include/gdbm to let other packages to find
-    # these headers
-    install -d ${D}${includedir}/gdbm
-    ln -sf ../ndbm.h ${D}/${includedir}/gdbm/ndbm.h
-    ln -sf ../gdbm.h ${D}/${includedir}/gdbm/gdbm.h
-}
-
-RDEPENDS_${PN}-ptest += "diffutils"
-
-do_compile_ptest() {
-    oe_runmake -C tests buildtests
-}
-
-PACKAGES =+ "${PN}-compat \
-            "
-FILES_${PN}-compat = "${libdir}/libgdbm_compat${SOLIBS} \
-                     "
diff --git a/import-layers/yocto-poky/meta/recipes-support/gdbm/gdbm_1.13.bb b/import-layers/yocto-poky/meta/recipes-support/gdbm/gdbm_1.13.bb
new file mode 100644
index 0000000..4bbe147
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gdbm/gdbm_1.13.bb
@@ -0,0 +1,43 @@
+SUMMARY = "Key/value database library with extensible hashing"
+HOMEPAGE = "http://www.gnu.org/software/gdbm/"
+SECTION = "libs"
+LICENSE = "GPLv3"
+LIC_FILES_CHKSUM = "file://COPYING;md5=241da1b9fe42e642cbb2c24d5e0c4d24"
+
+
+SRC_URI = "${GNU_MIRROR}/gdbm/gdbm-${PV}.tar.gz \
+           file://run-ptest \
+           file://ptest.patch \
+          "
+
+SRC_URI[md5sum] = "8929dcda2a8de3fd2367bdbf66769376"
+SRC_URI[sha256sum] = "9d252cbd7d793f7b12bcceaddda98d257c14f4d1890d851c386c37207000a253"
+
+inherit autotools gettext texinfo lib_package ptest
+
+# Needed for dbm python module
+EXTRA_OECONF = "-enable-libgdbm-compat"
+
+# Stop presence of dbm/nbdm on the host contaminating builds
+CACHED_CONFIGUREVARS += "ac_cv_lib_ndbm_main=no ac_cv_lib_dbm_main=no"
+
+BBCLASSEXTEND = "native nativesdk"
+
+do_install_append () {
+    # Create a symlink to ndbm.h and gdbm.h in include/gdbm to let other packages find
+    # these headers
+    install -d ${D}${includedir}/gdbm
+    ln -sf ../ndbm.h ${D}/${includedir}/gdbm/ndbm.h
+    ln -sf ../gdbm.h ${D}/${includedir}/gdbm/gdbm.h
+}
+
+RDEPENDS_${PN}-ptest += "diffutils"
+
+do_compile_ptest() {
+    oe_runmake -C tests buildtests
+}
+
+PACKAGES =+ "${PN}-compat \
+            "
+FILES_${PN}-compat = "${libdir}/libgdbm_compat${SOLIBS} \
+                     "
diff --git a/import-layers/yocto-poky/meta/recipes-support/gmp/gmp_6.1.2.bb b/import-layers/yocto-poky/meta/recipes-support/gmp/gmp_6.1.2.bb
index 5e65075..b008710 100644
--- a/import-layers/yocto-poky/meta/recipes-support/gmp/gmp_6.1.2.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/gmp/gmp_6.1.2.bb
@@ -25,8 +25,11 @@
 PACKAGES =+ "libgmpxx"
 FILES_libgmpxx = "${libdir}/libgmpxx${SOLIBS}"
 
-do_install_append_class-target() {
-        sed -i "s|--sysroot=${STAGING_DIR_HOST}||g" ${D}${includedir}/gmp.h
+do_install_prepend_class-target() {
+        sed -i \
+        -e "s|--sysroot=${STAGING_DIR_HOST}||g" \
+        -e "s|${DEBUG_PREFIX_MAP}||g" \
+         ${B}/gmp.h
 }
 
 SSTATE_SCAN_FILES += "gmp.h"
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/0001-Use-pkg-config-to-find-pth-instead-of-pth-config.patch b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/0001-Use-pkg-config-to-find-pth-instead-of-pth-config.patch
new file mode 100644
index 0000000..5c9c022
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/0001-Use-pkg-config-to-find-pth-instead-of-pth-config.patch
@@ -0,0 +1,105 @@
+From 59a3c76d4016ffc615f1c45184f4c6820061d69c Mon Sep 17 00:00:00 2001
+From: Richard Purdie <richard.purdie@linuxfoundation.org>
+Date: Wed, 16 Aug 2017 11:14:12 +0800
+Subject: [PATCH 1/4] Use pkg-config to find pth instead of pth-config.
+
+Upstream-Status: Denied
+[not submitted but they've been clear they don't want a pkg-config
+dependency]
+
+RP 2014/5/22
+
+Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
+
+Rebase to 2.1.23
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ m4/gnupg-pth.m4 | 53 ++++++++---------------------------------------------
+ 1 file changed, 8 insertions(+), 45 deletions(-)
+
+diff --git a/m4/gnupg-pth.m4 b/m4/gnupg-pth.m4
+index 6dc9e0e..5892531 100644
+--- a/m4/gnupg-pth.m4
++++ b/m4/gnupg-pth.m4
+@@ -17,33 +17,9 @@ dnl implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
+ # Taken and modified from the m4 macros which come with Pth.
+ AC_DEFUN([GNUPG_PTH_VERSION_CHECK],
+   [
+-    _pth_version=`$PTH_CONFIG --version | awk 'NR==1 {print [$]3}'`
+     _req_version="ifelse([$1],,1.2.0,$1)"
++    PKG_CHECK_MODULES(PTH, [pth >= $_req_version], [have_pth=yes], [have_pth=no])
+ 
+-    AC_MSG_CHECKING(for PTH - version >= $_req_version)
+-    for _var in _pth_version _req_version; do
+-        eval "_val=\"\$${_var}\""
+-        _major=`echo $_val | sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\([[ab.]]\)\([[0-9]]*\)/\1/'`
+-        _minor=`echo $_val | sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\([[ab.]]\)\([[0-9]]*\)/\2/'`
+-        _rtype=`echo $_val | sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\([[ab.]]\)\([[0-9]]*\)/\3/'`
+-        _micro=`echo $_val | sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\([[ab.]]\)\([[0-9]]*\)/\4/'`
+-        case $_rtype in
+-            "a" ) _rtype=0 ;;
+-            "b" ) _rtype=1 ;;
+-            "." ) _rtype=2 ;;
+-        esac
+-        _hex=`echo dummy | awk '{ printf("%d%02d%1d%02d", major, minor, rtype, micro); }' \
+-              "major=$_major" "minor=$_minor" "rtype=$_rtype" "micro=$_micro"`
+-        eval "${_var}_hex=\"\$_hex\""
+-    done
+-    have_pth=no
+-    if test ".$_pth_version_hex" != .; then
+-        if test ".$_req_version_hex" != .; then
+-            if test $_pth_version_hex -ge $_req_version_hex; then
+-                have_pth=yes
+-            fi
+-        fi
+-    fi
+     if test $have_pth = yes; then
+        AC_MSG_RESULT(yes)
+        AC_MSG_CHECKING([whether PTH installation is sane])
+@@ -51,9 +27,9 @@ AC_DEFUN([GNUPG_PTH_VERSION_CHECK],
+          _gnupg_pth_save_cflags=$CFLAGS
+          _gnupg_pth_save_ldflags=$LDFLAGS
+          _gnupg_pth_save_libs=$LIBS
+-         CFLAGS="$CFLAGS `$PTH_CONFIG --cflags`"
+-         LDFLAGS="$LDFLAGS `$PTH_CONFIG --ldflags`"
+-         LIBS="$LIBS `$PTH_CONFIG --libs --all`"
++         CFLAGS="$CFLAGS $PTH_CFLAGS"
++         LDFLAGS="$LDFLAGS $PTH_LDFLAGS"
++         LIBS="$LIBS $PTH_LIBS"
+          AC_LINK_IFELSE([AC_LANG_PROGRAM([#include <pth.h>
+                                          ],
+                                          [[ pth_init ();]])],
+@@ -80,26 +56,13 @@ AC_DEFUN([GNUPG_PTH_VERSION_CHECK],
+ # PTH_CLFAGS and PTH_LIBS are AS_SUBST.
+ #
+ AC_DEFUN([GNUPG_PATH_PTH],
+-[ AC_ARG_WITH(pth-prefix,
+-             AC_HELP_STRING([--with-pth-prefix=PFX],
+-                           [prefix where GNU Pth is installed (optional)]),
+-     pth_config_prefix="$withval", pth_config_prefix="")
+-  if test x$pth_config_prefix != x ; then
+-     PTH_CONFIG="$pth_config_prefix/bin/pth-config"
+-  fi
+-  AC_PATH_PROG(PTH_CONFIG, pth-config, no)
++[
+   tmp=ifelse([$1], ,1.3.7,$1)
+-  if test "$PTH_CONFIG" != "no"; then
+-    GNUPG_PTH_VERSION_CHECK($tmp)
+-    if test $have_pth = yes; then      
+-       PTH_CFLAGS=`$PTH_CONFIG --cflags`
+-       PTH_LIBS=`$PTH_CONFIG --ldflags`
+-       PTH_LIBS="$PTH_LIBS `$PTH_CONFIG --libs --all`"
+-       AC_DEFINE(HAVE_PTH, 1,
++  GNUPG_PTH_VERSION_CHECK($tmp)
++  if test $have_pth = yes; then
++      AC_DEFINE(HAVE_PTH, 1,
+                 [Defined if the GNU Pth is available])
+-    fi
+   fi
+   AC_SUBST(PTH_CFLAGS)
+   AC_SUBST(PTH_LIBS)
+ ])
+-
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/0002-use-pkgconfig-instead-of-npth-config.patch b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/0002-use-pkgconfig-instead-of-npth-config.patch
new file mode 100644
index 0000000..6d86e5c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/0002-use-pkgconfig-instead-of-npth-config.patch
@@ -0,0 +1,85 @@
+From 53c2aec2e13f4e2d09be7148869c862f07dfdd4d Mon Sep 17 00:00:00 2001
+From: Saul Wold <sgw@linux.intel.com>
+Date: Wed, 16 Aug 2017 11:16:30 +0800
+Subject: [PATCH 2/4] use pkgconfig instead of npth config
+
+Upstream-Status: Inappropriate [openembedded specific]
+
+Signed-off-by: Saul Wold <sgw@linux.intel.com>
+
+Rebase to 2.1.23
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ m4/npth.m4 | 34 ++++++++--------------------------
+ 1 file changed, 8 insertions(+), 26 deletions(-)
+
+diff --git a/m4/npth.m4 b/m4/npth.m4
+index 17c2644..15a931b 100644
+--- a/m4/npth.m4
++++ b/m4/npth.m4
+@@ -17,10 +17,10 @@ AC_DEFUN([_AM_PATH_NPTH_CONFIG],
+   if test "x$npth_config_prefix" != x ; then
+       NPTH_CONFIG="$npth_config_prefix/bin/npth-config"
+   fi
+-  AC_PATH_PROG(NPTH_CONFIG, npth-config, no)
++  AC_PATH_PROG(PKGCONFIG, pkg-config, no)
+ 
+-  if test "$NPTH_CONFIG" != "no" ; then
+-    npth_version=`$NPTH_CONFIG --version`
++  if test "$PKGCONFIG" != "no" ; then
++    npth_version=`$PKGCONFIG --modversion npth`
+   fi
+   npth_version_major=`echo $npth_version | \
+                sed 's/\([[0-9]]*\)\.\([[0-9]]*\).*/\1/'`
+@@ -45,7 +45,7 @@ AC_DEFUN([AM_PATH_NPTH],
+ 
+   AC_MSG_CHECKING(for NPTH - version >= $min_npth_version)
+   ok=no
+-  if test "$NPTH_CONFIG" != "no" ; then
++  if test "$PKGCONFIG" != "no" ; then
+     req_major=`echo $min_npth_version | \
+                sed 's/\([[0-9]]*\)\.\([[0-9]]*\)/\1/'`
+     req_minor=`echo $min_npth_version | \
+@@ -66,28 +66,9 @@ AC_DEFUN([AM_PATH_NPTH],
+   fi
+   if test $ok = yes; then
+     AC_MSG_RESULT([yes ($npth_version)])
+-  else
+-    AC_MSG_RESULT(no)
+-  fi
+-  if test $ok = yes; then
+-     # If we have a recent NPTH, we should also check that the
+-     # API is compatible.
+-     if test "$req_npth_api" -gt 0 ; then
+-        tmp=`$NPTH_CONFIG --api-version 2>/dev/null || echo 0`
+-        if test "$tmp" -gt 0 ; then
+-           AC_MSG_CHECKING([NPTH API version])
+-           if test "$req_npth_api" -eq "$tmp" ; then
+-             AC_MSG_RESULT([okay])
+-           else
+-             ok=no
+-             AC_MSG_RESULT([does not match. want=$req_npth_api got=$tmp])
+-           fi
+-        fi
+-     fi
+-  fi
+-  if test $ok = yes; then
+-    NPTH_CFLAGS=`$NPTH_CONFIG --cflags`
+-    NPTH_LIBS=`$NPTH_CONFIG --libs`
++    NPTH_CFLAGS=`$PKGCONFIG --cflags npth`
++    NPTH_LIBS=`$PKGCONFIG --libs npth`
++    AC_MSG_WARN([[GOT HERE - $NPTH_LIBS ]])
+     ifelse([$2], , :, [$2])
+     npth_config_host=`$NPTH_CONFIG --host 2>/dev/null || echo none`
+     if test x"$npth_config_host" != xnone ; then
+@@ -103,6 +84,7 @@ AC_DEFUN([AM_PATH_NPTH],
+       fi
+     fi
+   else
++    AC_MSG_RESULT(no)
+     NPTH_CFLAGS=""
+     NPTH_LIBS=""
+     ifelse([$3], , :, [$3])
+-- 
+1.8.3.1
+
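
Both patches above replace the project-specific pth-config/npth-config helpers with plain pkg-config queries, which is what lets the checks work against a cross sysroot. The equivalent command-line queries (version strings shown are examples):

    pkg-config --modversion npth     # e.g. 1.5
    pkg-config --cflags npth
    pkg-config --libs npth           # e.g. -lnpth

    # In a cross build the environment points pkg-config at the target sysroot,
    # roughly (paths are examples):
    # PKG_CONFIG_SYSROOT_DIR=/path/to/recipe-sysroot \
    # PKG_CONFIG_LIBDIR=/path/to/recipe-sysroot/usr/lib/pkgconfig pkg-config --libs npth
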
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/0003-dirmngr-uses-libgpg-error.patch b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/0003-dirmngr-uses-libgpg-error.patch
new file mode 100644
index 0000000..3e798ef
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/0003-dirmngr-uses-libgpg-error.patch
@@ -0,0 +1,32 @@
+From 9c3858ffda6246bf9e1e6aeeb920532a56b19408 Mon Sep 17 00:00:00 2001
+From: Saul Wold <sgw@linux.intel.com>
+Date: Wed, 16 Aug 2017 11:18:01 +0800
+Subject: [PATCH 3/4] dirmngr uses libgpg error
+
+Upstream-Status: Pending
+Signed-off-by: Saul Wold <sgw@linux.intel.com>
+
+Rebase to 2.1.23
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ dirmngr/Makefile.am | 3 ++-
+ 1 file changed, 2 insertions(+), 1 deletion(-)
+
+diff --git a/dirmngr/Makefile.am b/dirmngr/Makefile.am
+index b404165..d3f916e 100644
+--- a/dirmngr/Makefile.am
++++ b/dirmngr/Makefile.am
+@@ -82,7 +82,8 @@ endif
+ dirmngr_LDADD = $(libcommonpth) \
+         $(DNSLIBS) $(LIBASSUAN_LIBS) \
+ 	$(LIBGCRYPT_LIBS) $(KSBA_LIBS) $(NPTH_LIBS) \
+-	$(NTBTLS_LIBS) $(LIBGNUTLS_LIBS) $(LIBINTL) $(LIBICONV)
++	$(NTBTLS_LIBS) $(LIBGNUTLS_LIBS) $(LIBINTL) $(LIBICONV) \
++	$(GPG_ERROR_LIBS)
+ if USE_LDAP
+ dirmngr_LDADD += $(ldaplibs)
+ endif
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/0004-autogen.sh-fix-find-version-for-beta-checking.patch b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/0004-autogen.sh-fix-find-version-for-beta-checking.patch
new file mode 100644
index 0000000..dcd8582
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/0004-autogen.sh-fix-find-version-for-beta-checking.patch
@@ -0,0 +1,34 @@
+From 914ae4a3f7529fb069467bf0ded57dd24ee2e763 Mon Sep 17 00:00:00 2001
+From: Wenzong Fan <wenzong.fan@windriver.com>
+Date: Wed, 16 Aug 2017 11:23:22 +0800
+Subject: [PATCH 4/4] autogen.sh: fix find-version for beta checking
+
+find-version always assumes that gnupg is beta if autogen.sh is run
+out of git-repo. This doesn't work for users who just take the release
+tarball and re-run autoconf in their local build dir.
+
+Upstream-Status: Pending
+
+Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
+
+Rebase to 2.1.23
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ autogen.sh | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/autogen.sh b/autogen.sh
+index e5ba5bf..05e0e11 100755
+--- a/autogen.sh
++++ b/autogen.sh
+@@ -245,7 +245,6 @@ if [ "$myhost" = "find-version" ]; then
+       rvd=$((0x$(echo ${rev} | dd bs=1 count=4 2>/dev/null)))
+     else
+       ingit=no
+-      beta=yes
+       tmp="-unknown"
+       rev="0000000"
+       rvd="0"
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/autogen.sh-fix-find-version-for-beta-checking.patch b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/autogen.sh-fix-find-version-for-beta-checking.patch
deleted file mode 100644
index 4241bc3..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/autogen.sh-fix-find-version-for-beta-checking.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-From 717f994be2466d378e6611a4739c606db6d0dc46 Mon Sep 17 00:00:00 2001
-From: Wenzong Fan <wenzong.fan@windriver.com>
-Date: Sun, 25 Oct 2015 22:44:47 -0400
-Subject: [PATCH] autogen.sh: fix find-version for beta checking
-
-find-version always assumes that gnupg is beta if autogen.sh is run
-out of git-repo. This doesn't work for users whom just take release
-tarball and re-run autoconf in their local build dir.
-
-Upstream-Status: Pending
-
-Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
----
- autogen.sh | 1 -
- 1 file changed, 1 deletion(-)
-
-diff --git a/autogen.sh b/autogen.sh
-index 7effd56..d673432 100755
---- a/autogen.sh
-+++ b/autogen.sh
-@@ -228,7 +228,6 @@ if [ "$myhost" = "find-version" ]; then
-       rvd=$((0x$(echo ${rev} | head -c 4)))
-     else
-       ingit=no
--      beta=yes
-       tmp="-unknown"
-       rev="0000000"
-       rvd="0"
--- 
-1.9.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/dirmngr-uses-libgpg-error.patch b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/dirmngr-uses-libgpg-error.patch
deleted file mode 100644
index 7af1955..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/dirmngr-uses-libgpg-error.patch
+++ /dev/null
@@ -1,16 +0,0 @@
-Upstream-Status: Pending
-Signed-off-by: Saul Wold <sgw@linux.intel.com>
-Index: gnupg-2.1.0/dirmngr/Makefile.am
-===================================================================
---- gnupg-2.1.0.orig/dirmngr/Makefile.am
-+++ gnupg-2.1.0/dirmngr/Makefile.am
-@@ -78,7 +78,8 @@ endif
- dirmngr_LDADD = $(libcommontlsnpth) $(libcommonpth) \
-         $(DNSLIBS) $(LIBASSUAN_LIBS) \
- 	$(LIBGCRYPT_LIBS) $(KSBA_LIBS) $(NPTH_LIBS) \
--	$(NTBTLS_LIBS) $(LIBGNUTLS_LIBS) $(LIBINTL) $(LIBICONV)
-+	$(NTBTLS_LIBS) $(LIBGNUTLS_LIBS) $(LIBINTL) $(LIBICONV) \
-+	$(GPG_ERROR_LIBS)
- if USE_LDAP
- dirmngr_LDADD += $(ldaplibs)
- endif
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/pkgconfig.patch b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/pkgconfig.patch
deleted file mode 100644
index f958603..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/pkgconfig.patch
+++ /dev/null
@@ -1,90 +0,0 @@
-Use pkg-config to find pth instead of pth-config.
-
-Upstream-Status: Denied
-[not submitted but they've been clear they don't want a pkg-config dependency]
-
-RP 2014/5/22
-
-Index: gnupg-2.1.0/m4/gnupg-pth.m4
-===================================================================
---- gnupg-2.1.0.orig/m4/gnupg-pth.m4
-+++ gnupg-2.1.0/m4/gnupg-pth.m4
-@@ -17,33 +17,9 @@ dnl implied warranty of MERCHANTABILITY
- # Taken and modified from the m4 macros which come with Pth.
- AC_DEFUN([GNUPG_PTH_VERSION_CHECK],
-   [
--    _pth_version=`$PTH_CONFIG --version | awk 'NR==1 {print [$]3}'`
-     _req_version="ifelse([$1],,1.2.0,$1)"
-+    PKG_CHECK_MODULES(PTH, [pth >= $_req_version], [have_pth=yes], [have_pth=no])
- 
--    AC_MSG_CHECKING(for PTH - version >= $_req_version)
--    for _var in _pth_version _req_version; do
--        eval "_val=\"\$${_var}\""
--        _major=`echo $_val | sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\([[ab.]]\)\([[0-9]]*\)/\1/'`
--        _minor=`echo $_val | sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\([[ab.]]\)\([[0-9]]*\)/\2/'`
--        _rtype=`echo $_val | sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\([[ab.]]\)\([[0-9]]*\)/\3/'`
--        _micro=`echo $_val | sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\([[ab.]]\)\([[0-9]]*\)/\4/'`
--        case $_rtype in
--            "a" ) _rtype=0 ;;
--            "b" ) _rtype=1 ;;
--            "." ) _rtype=2 ;;
--        esac
--        _hex=`echo dummy | awk '{ printf("%d%02d%1d%02d", major, minor, rtype, micro); }' \
--              "major=$_major" "minor=$_minor" "rtype=$_rtype" "micro=$_micro"`
--        eval "${_var}_hex=\"\$_hex\""
--    done
--    have_pth=no
--    if test ".$_pth_version_hex" != .; then
--        if test ".$_req_version_hex" != .; then
--            if test $_pth_version_hex -ge $_req_version_hex; then
--                have_pth=yes
--            fi
--        fi
--    fi
-     if test $have_pth = yes; then
-        AC_MSG_RESULT(yes)
-        AC_MSG_CHECKING([whether PTH installation is sane])
-@@ -51,9 +27,9 @@ AC_DEFUN([GNUPG_PTH_VERSION_CHECK],
-          _gnupg_pth_save_cflags=$CFLAGS
-          _gnupg_pth_save_ldflags=$LDFLAGS
-          _gnupg_pth_save_libs=$LIBS
--         CFLAGS="$CFLAGS `$PTH_CONFIG --cflags`"
--         LDFLAGS="$LDFLAGS `$PTH_CONFIG --ldflags`"
--         LIBS="$LIBS `$PTH_CONFIG --libs --all`"
-+         CFLAGS="$CFLAGS $PTH_CFLAGS"
-+         LDFLAGS="$LDFLAGS $PTH_LDFLAGS"
-+         LIBS="$LIBS $PTH_LIBS"
-          AC_LINK_IFELSE([AC_LANG_PROGRAM([#include <pth.h>
-                                          ],
-                                          [[ pth_init ();]])],
-@@ -80,26 +56,13 @@ AC_DEFUN([GNUPG_PTH_VERSION_CHECK],
- # PTH_CLFAGS and PTH_LIBS are AS_SUBST.
- #
- AC_DEFUN([GNUPG_PATH_PTH],
--[ AC_ARG_WITH(pth-prefix,
--             AC_HELP_STRING([--with-pth-prefix=PFX],
--                           [prefix where GNU Pth is installed (optional)]),
--     pth_config_prefix="$withval", pth_config_prefix="")
--  if test x$pth_config_prefix != x ; then
--     PTH_CONFIG="$pth_config_prefix/bin/pth-config"
--  fi
--  AC_PATH_PROG(PTH_CONFIG, pth-config, no)
-+[
-   tmp=ifelse([$1], ,1.3.7,$1)
--  if test "$PTH_CONFIG" != "no"; then
--    GNUPG_PTH_VERSION_CHECK($tmp)
--    if test $have_pth = yes; then      
--       PTH_CFLAGS=`$PTH_CONFIG --cflags`
--       PTH_LIBS=`$PTH_CONFIG --ldflags`
--       PTH_LIBS="$PTH_LIBS `$PTH_CONFIG --libs --all`"
--       AC_DEFINE(HAVE_PTH, 1,
-+  GNUPG_PTH_VERSION_CHECK($tmp)
-+  if test $have_pth = yes; then
-+      AC_DEFINE(HAVE_PTH, 1,
-                 [Defined if the GNU Pth is available])
--    fi
-   fi
-   AC_SUBST(PTH_CFLAGS)
-   AC_SUBST(PTH_LIBS)
- ])
--
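
For orientation, the PKG_CHECK_MODULES(PTH, ...) call that this patch (carried
forward in renumbered form by the gnupg 2.2.0 recipe below) introduces boils
down to a couple of pkg-config queries at configure time. A rough shell
sketch, assuming a pth.pc file is visible on the pkg-config search path
(names and the version below are illustrative, not taken from this build):

    # Roughly what PKG_CHECK_MODULES(PTH, [pth >= 1.2.0]) amounts to.
    req_version="1.2.0"
    if pkg-config --exists "pth >= ${req_version}"; then
        have_pth=yes
        PTH_CFLAGS=$(pkg-config --cflags pth)
        PTH_LIBS=$(pkg-config --libs pth)
    else
        have_pth=no
    fi
    echo "have_pth=${have_pth} PTH_CFLAGS=${PTH_CFLAGS} PTH_LIBS=${PTH_LIBS}"
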
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/use-pkgconfig-instead-of-npth-config.patch b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/use-pkgconfig-instead-of-npth-config.patch
deleted file mode 100644
index c6dbf1b..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg/use-pkgconfig-instead-of-npth-config.patch
+++ /dev/null
@@ -1,72 +0,0 @@
-Upstream-Status: Inappropriate [openembedded specific]
-
-Signed-off-by: Saul Wold <sgw@linux.intel.com>
-
-
-Index: gnupg-2.1.0/m4/npth.m4
-===================================================================
---- gnupg-2.1.0.orig/m4/npth.m4
-+++ gnupg-2.1.0/m4/npth.m4
-@@ -17,10 +17,10 @@ AC_DEFUN([_AM_PATH_NPTH_CONFIG],
-   if test "x$npth_config_prefix" != x ; then
-       NPTH_CONFIG="$npth_config_prefix/bin/npth-config"
-   fi
--  AC_PATH_PROG(NPTH_CONFIG, npth-config, no)
-+  AC_PATH_PROG(PKGCONFIG, pkg-config, no)
- 
--  if test "$NPTH_CONFIG" != "no" ; then
--    npth_version=`$NPTH_CONFIG --version`
-+  if test "$PKGCONFIG" != "no" ; then
-+    npth_version=`$PKGCONFIG --modversion npth`
-   fi
-   npth_version_major=`echo $npth_version | \
-                sed 's/\([[0-9]]*\)\.\([[0-9]]*\).*/\1/'`
-@@ -45,7 +45,7 @@ AC_DEFUN([AM_PATH_NPTH],
- 
-   AC_MSG_CHECKING(for NPTH - version >= $min_npth_version)
-   ok=no
--  if test "$NPTH_CONFIG" != "no" ; then
-+  if test "$PKGCONFIG" != "no" ; then
-     req_major=`echo $min_npth_version | \
-                sed 's/\([[0-9]]*\)\.\([[0-9]]*\)/\1/'`
-     req_minor=`echo $min_npth_version | \
-@@ -66,28 +66,9 @@ AC_DEFUN([AM_PATH_NPTH],
-   fi
-   if test $ok = yes; then
-     AC_MSG_RESULT([yes ($npth_version)])
--  else
--    AC_MSG_RESULT(no)
--  fi
--  if test $ok = yes; then
--     # If we have a recent NPTH, we should also check that the
--     # API is compatible.
--     if test "$req_npth_api" -gt 0 ; then
--        tmp=`$NPTH_CONFIG --api-version 2>/dev/null || echo 0`
--        if test "$tmp" -gt 0 ; then
--           AC_MSG_CHECKING([NPTH API version])
--           if test "$req_npth_api" -eq "$tmp" ; then
--             AC_MSG_RESULT([okay])
--           else
--             ok=no
--             AC_MSG_RESULT([does not match. want=$req_npth_api got=$tmp])
--           fi
--        fi
--     fi
--  fi
--  if test $ok = yes; then
--    NPTH_CFLAGS=`$NPTH_CONFIG --cflags`
--    NPTH_LIBS=`$NPTH_CONFIG --libs`
-+    NPTH_CFLAGS=`$PKGCONFIG --cflags npth`
-+    NPTH_LIBS=`$PKGCONFIG --libs npth`
-+    AC_MSG_WARN([[GOT HERE - $NPTH_LIBS ]])
-     ifelse([$2], , :, [$2])
-     npth_config_host=`$NPTH_CONFIG --host 2>/dev/null || echo none`
-     if test x"$npth_config_host" != xnone ; then
-@@ -103,6 +84,7 @@ AC_DEFUN([AM_PATH_NPTH],
-       fi
-     fi
-   else
-+    AC_MSG_RESULT(no)
-     NPTH_CFLAGS=""
-     NPTH_LIBS=""
-     ifelse([$3], , :, [$3])
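
The renumbered 0002-use-pkgconfig-instead-of-npth-config.patch referenced by
the gnupg 2.2.0 recipe below follows the same pattern: each npth-config
invocation it removes maps directly onto a pkg-config query. As a shell
sketch, assuming an npth.pc is installed in the sysroot:

    npth_version=$(pkg-config --modversion npth)   # stands in for `npth-config --version`
    NPTH_CFLAGS=$(pkg-config --cflags npth)        # stands in for `npth-config --cflags`
    NPTH_LIBS=$(pkg-config --libs npth)            # stands in for `npth-config --libs`
    echo "npth ${npth_version}: ${NPTH_CFLAGS} ${NPTH_LIBS}"
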
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg_2.1.18.bb b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg_2.1.18.bb
deleted file mode 100644
index a0611aa..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg_2.1.18.bb
+++ /dev/null
@@ -1,46 +0,0 @@
-SUMMARY = "GNU Privacy Guard - encryption and signing tools (2.x)"
-HOMEPAGE = "http://www.gnupg.org/"
-LICENSE = "GPLv3 & LGPLv3"
-LIC_FILES_CHKSUM = "file://COPYING;md5=189af8afca6d6075ba6c9e0aa8077626 \
-                    file://COPYING.LIB;md5=a2b6bf2cb38ee52619e60f30a1fc7257"
-
-DEPENDS = "npth libassuan libksba zlib bzip2 readline libgcrypt"
-
-inherit autotools gettext texinfo pkgconfig
-
-UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
-SRC_URI = "${GNUPG_MIRROR}/${BPN}/${BPN}-${PV}.tar.bz2 \
-           file://pkgconfig.patch \
-           file://use-pkgconfig-instead-of-npth-config.patch \
-           file://dirmngr-uses-libgpg-error.patch \
-           file://autogen.sh-fix-find-version-for-beta-checking.patch \
-          "
-
-SRC_URI[md5sum] = "c67f908b0b35c7ebc62144f362757e1e"
-SRC_URI[sha256sum] = "d04c6fab7e5562ce4b915b22020e34d4c1a256847690cf149842264fc7cef994"
-
-EXTRA_OECONF = "--disable-ldap \
-		--disable-ccid-driver \
-		--with-zlib=${STAGING_LIBDIR}/.. \
-		--with-bzip2=${STAGING_LIBDIR}/.. \
-                --with-readline=${STAGING_LIBDIR}/.. \
-               "
-RRECOMMENDS_${PN} = "pinentry"
-
-do_configure_prepend () {
-	# Else these could be used in prefernce to those in aclocal-copy
-	rm -f ${S}/m4/gpg-error.m4
-	rm -f ${S}/m4/libassuan.m4
-	rm -f ${S}/m4/ksba.m4
-	rm -f ${S}/m4/libgcrypt.m4
-}
-
-do_install_append() {
-	ln -sf gpg2 ${D}${bindir}/gpg
-	ln -sf gpgv2 ${D}${bindir}/gpgv
-}
-
-RDEPENDS_${PN} = "gnutls"
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[sqlite3] = "--enable-sqlite, --disable-sqlite, sqlite3"
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg_2.2.0.bb b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg_2.2.0.bb
new file mode 100644
index 0000000..0176ddd
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gnupg/gnupg_2.2.0.bb
@@ -0,0 +1,46 @@
+SUMMARY = "GNU Privacy Guard - encryption and signing tools (2.x)"
+HOMEPAGE = "http://www.gnupg.org/"
+LICENSE = "GPLv3 & LGPLv3"
+LIC_FILES_CHKSUM = "file://COPYING;md5=189af8afca6d6075ba6c9e0aa8077626 \
+                    file://COPYING.LGPL3;md5=a2b6bf2cb38ee52619e60f30a1fc7257"
+
+DEPENDS = "npth libassuan libksba zlib bzip2 readline libgcrypt"
+
+inherit autotools gettext texinfo pkgconfig
+
+UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
+SRC_URI = "${GNUPG_MIRROR}/${BPN}/${BPN}-${PV}.tar.bz2 \
+           file://0001-Use-pkg-config-to-find-pth-instead-of-pth-config.patch \
+           file://0002-use-pkgconfig-instead-of-npth-config.patch \
+           file://0003-dirmngr-uses-libgpg-error.patch \
+           file://0004-autogen.sh-fix-find-version-for-beta-checking.patch \
+          "
+
+SRC_URI[md5sum] = "789f16949fae2d003d387f49e9da4b74"
+SRC_URI[sha256sum] = "d4514a0be0f7a1ff263193330019eb4b53c82f0f5e230af3c14df371271a45e6"
+
+EXTRA_OECONF = "--disable-ldap \
+		--disable-ccid-driver \
+		--with-zlib=${STAGING_LIBDIR}/.. \
+		--with-bzip2=${STAGING_LIBDIR}/.. \
+		--with-readline=${STAGING_LIBDIR}/.. \
+		--enable-gpg-is-gpg2 \
+               "
+RRECOMMENDS_${PN} = "pinentry"
+
+do_configure_prepend () {
+	# Else these could be used in preference to those in aclocal-copy
+	rm -f ${S}/m4/gpg-error.m4
+	rm -f ${S}/m4/libassuan.m4
+	rm -f ${S}/m4/ksba.m4
+	rm -f ${S}/m4/libgcrypt.m4
+}
+
+do_install_append() {
+	ln -sf gpg2 ${D}${bindir}/gpg
+	ln -sf gpgv2 ${D}${bindir}/gpgv
+}
+
+PACKAGECONFIG ??= "gnutls"
+PACKAGECONFIG[gnutls] = "--enable-gnutls, --disable-gnutls, gnutls"
+PACKAGECONFIG[sqlite3] = "--enable-sqlite, --disable-sqlite, sqlite3"
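
The PACKAGECONFIG defaults above enable gnutls (which also pulls gnutls into
DEPENDS via the third field) and leave sqlite3 off. As a sketch, not a
literal configure log, the flags they contribute on top of EXTRA_OECONF look
roughly like this; the --with-* staging paths are omitted because they
resolve to build-specific directories:

    ./configure \
        --disable-ldap \
        --disable-ccid-driver \
        --enable-gpg-is-gpg2 \
        --enable-gnutls \
        --disable-sqlite
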
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls.inc b/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls.inc
index 1ecad1f..29b5dd6 100644
--- a/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls.inc
+++ b/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls.inc
@@ -46,7 +46,6 @@
 "
 
 LDFLAGS_append_libc-musl = " -largp"
-LDFLAGS_append_libc-uclibc = " -luargp -pthread"
 
 do_configure_prepend() {
 	for dir in . lib; do
@@ -59,5 +58,3 @@
 FILES_${PN}-dev += "${bindir}/gnutls-cli-debug"
 FILES_${PN}-openssl = "${libdir}/libgnutls-openssl.so.*"
 FILES_${PN}-xx = "${libdir}/libgnutlsxx.so.*"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls/correct_rpl_gettimeofday_signature.patch b/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls/correct_rpl_gettimeofday_signature.patch
deleted file mode 100644
index 96b023a46..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls/correct_rpl_gettimeofday_signature.patch
+++ /dev/null
@@ -1,68 +0,0 @@
-From 81b0f04c14f673b99778d2e7d8e85461e0bf2018 Mon Sep 17 00:00:00 2001
-From: Valentin Popa <valentin.popa@intel.com>
-Date: Fri, 25 Apr 2014 13:58:55 +0300
-Subject: [PATCH 1/3] Correct rpl_gettimeofday signature
-
-Currently we fail on uclibc like below
-
-| In file included from /home/kraj/work/angstrom/sources/openembedded-core/build/tmp-uclibc/sysroots/qemuarm/usr/include/sys/procfs.h:32:0,
-|                  from /home/kraj/work/angstrom/sources/openembedded-core/build/tmp-uclibc/sysroots/qemuarm/usr/include/sys/ucontext.h:26,
-|                  from /home/kraj/work/angstrom/sources/openembedded-core/build/tmp-uclibc/sysroots/qemuarm/usr/include/signal.h:392,
-|                  from ../../gl/signal.h:52,
-|                  from ../../gl/sys/select.h:58,
-|                  from /home/kraj/work/angstrom/sources/openembedded-core/build/tmp-uclibc/sysroots/qemuarm/usr/include/sys/types.h:220,
-|                  from ../../gl/sys/types.h:28,
-|                  from ../../lib/includes/gnutls/gnutls.h:46,
-|                  from ex-cxx.cpp:3:
-| ../../gl/sys/time.h:396:66: error: conflicting declaration 'void* restrict'
-| ../../gl/sys/time.h:396:50: error: 'restrict' has a previous declaration as 'timeval* restrict'
-| make[4]: *** [ex-cxx.o] Error 1
-| make[4]: *** Waiting for unfinished jobs....
-
-GCC detects that we call 'restrict' as param name in function
-signatures and complains since both params are called 'restrict'
-therefore we use __restrict to denote the C99 keywork
-
-This only happens of uclibc since this code is not excercised with
-eglibc otherwise we will have same issue there too
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Upstream-Status: Pending
-
----
- gl/sys_time.in.h | 8 ++++----
- 1 file changed, 4 insertions(+), 4 deletions(-)
-
-diff --git a/gl/sys_time.in.h b/gl/sys_time.in.h
-index 5a8caf3..2dc5718 100644
---- a/gl/sys_time.in.h
-+++ b/gl/sys_time.in.h
-@@ -93,20 +93,20 @@ struct timeval
- #   define gettimeofday rpl_gettimeofday
- #  endif
- _GL_FUNCDECL_RPL (gettimeofday, int,
--                  (struct timeval *restrict, void *restrict)
-+                  (struct timeval *__restrict, void *__restrict)
-                   _GL_ARG_NONNULL ((1)));
- _GL_CXXALIAS_RPL (gettimeofday, int,
--                  (struct timeval *restrict, void *restrict));
-+                  (struct timeval *__restrict, void *__restrict));
- # else
- #  if !@HAVE_GETTIMEOFDAY@
- _GL_FUNCDECL_SYS (gettimeofday, int,
--                  (struct timeval *restrict, void *restrict)
-+                  (struct timeval *__restrict, void *__restrict)
-                   _GL_ARG_NONNULL ((1)));
- #  endif
- /* Need to cast, because on glibc systems, by default, the second argument is
-                                                   struct timezone *.  */
- _GL_CXXALIAS_SYS_CAST (gettimeofday, int,
--                       (struct timeval *restrict, void *restrict));
-+                       (struct timeval *__restrict, void *__restrict));
- # endif
- _GL_CXXALIASWARN (gettimeofday);
- # if defined __cplusplus && defined GNULIB_NAMESPACE
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls/use-pkg-config-to-locate-zlib.patch b/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls/use-pkg-config-to-locate-zlib.patch
index 0e1b7c8..ae141a5 100644
--- a/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls/use-pkg-config-to-locate-zlib.patch
+++ b/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls/use-pkg-config-to-locate-zlib.patch
@@ -1,7 +1,9 @@
-From cee80af1fe93f5b76765afeebfcc3b902768f5d6 Mon Sep 17 00:00:00 2001
-From: Andre McCurdy <armccurdy@gmail.com>
-Date: Tue, 26 May 2015 21:41:24 -0700
-Subject: [PATCH] use pkg-config to locate zlib
+From 18081068a97c00015aabc5fa321664951458ea0d Mon Sep 17 00:00:00 2001
+From: Fan Xin <fan.xin@jp.fujitsu.com>
+Date: Fri, 9 Jun 2017 15:20:31 +0900
+Subject: [PATCH] From cee80af1fe93f5b76765afeebfcc3b902768f5d6 Mon Sep 17
+ 00:00:00 2001 From: Andre McCurdy <armccurdy@gmail.com> Date: Tue, 26 May
+ 2015 21:41:24 -0700 Subject: [PATCH] use pkg-config to locate zlib
 
 AC_LIB_HAVE_LINKFLAGS can sometimes find host libs and is therefore not
 robust when cross-compiling. Remove it for zlib and use PKG_CHECK_MODULES
@@ -18,15 +20,19 @@
 Upstream-Status: Inappropriate [configuration]
 
 Signed-off-by: Andre McCurdy <armccurdy@gmail.com>
+
+Rebase on gnutls 3.5.13
+
+Signed-off-by: Fan Xin <fan.xin@jp.fujitsu.com>
 ---
- configure.ac | 24 ++++++++++--------------
- 1 file changed, 10 insertions(+), 14 deletions(-)
+ configure.ac | 25 +++++++++----------------
+ 1 file changed, 9 insertions(+), 16 deletions(-)
 
 diff --git a/configure.ac b/configure.ac
-index 1b561d5..0c787dc 100644
+index c65268e..f6a18aa 100644
 --- a/configure.ac
 +++ b/configure.ac
-@@ -508,25 +508,21 @@ AC_ARG_WITH(zlib, AS_HELP_STRING([--without-zlib],
+@@ -735,28 +735,21 @@ AC_ARG_WITH(zlib, AS_HELP_STRING([--without-zlib],
  AC_MSG_CHECKING([whether to include zlib compression support])
  if test x$ac_zlib != xno; then
   AC_MSG_RESULT(yes)
@@ -49,6 +55,7 @@
 -    else
 -      GNUTLS_REQUIRES_PRIVATE="$GNUTLS_REQUIRES_PRIVATE, zlib"
 -    fi
+-    LIBZ_PC=""
 +  PKG_CHECK_MODULES(ZLIB, zlib)
 +  HAVE_LIBZ=yes
 +  AC_DEFINE([HAVE_LIBZ], [1], [zlib is enabled])
@@ -57,11 +64,12 @@
 +  AC_SUBST(LTLIBZ)
 +  if test "x$GNUTLS_REQUIRES_PRIVATE" = x; then
 +    GNUTLS_REQUIRES_PRIVATE="Requires.private: zlib"
-+  else
+   else
+-    LIBZ_PC=$LIBZ
 +    GNUTLS_REQUIRES_PRIVATE="$GNUTLS_REQUIRES_PRIVATE, zlib"
    fi
  fi
- AC_SUBST(GNUTLS_REQUIRES_PRIVATE)
+ AC_SUBST(LIBZ_PC)
 -- 
 1.9.1
 
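
The cross-compile concern the patch description raises is exactly what
pkg-config is good at avoiding: pointed at the target sysroot, it cannot
report a host zlib. A minimal sketch of such an environment (the paths are
examples only, not taken from this build):

    export PKG_CONFIG_SYSROOT_DIR=/path/to/target-sysroot
    export PKG_CONFIG_LIBDIR=${PKG_CONFIG_SYSROOT_DIR}/usr/lib/pkgconfig
    pkg-config --exists zlib && echo "target zlib found"
    pkg-config --cflags --libs zlib    # flags come from the sysroot .pc file only
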
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls_3.5.13.bb b/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls_3.5.13.bb
new file mode 100644
index 0000000..35d7d09
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls_3.5.13.bb
@@ -0,0 +1,10 @@
+require gnutls.inc
+
+SRC_URI += "file://0001-configure.ac-fix-sed-command.patch \
+            file://use-pkg-config-to-locate-zlib.patch \
+            file://arm_eabi.patch \
+           "
+SRC_URI[md5sum] = "4fd41ad86572933c2379b4cc321a0959"
+SRC_URI[sha256sum] = "79f5480ad198dad5bc78e075f4a40c4a315a1b2072666919d2d05a08aec13096"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls_3.5.9.bb b/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls_3.5.9.bb
deleted file mode 100644
index 8f84dbb..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gnutls/gnutls_3.5.9.bb
+++ /dev/null
@@ -1,10 +0,0 @@
-require gnutls.inc
-
-SRC_URI += "file://correct_rpl_gettimeofday_signature.patch \
-            file://0001-configure.ac-fix-sed-command.patch \
-            file://use-pkg-config-to-locate-zlib.patch \
-            file://arm_eabi.patch \
-           "
-SRC_URI[md5sum] = "0ab25eb6a1509345dd085bc21a387951"
-SRC_URI[sha256sum] = "82b10f0c4ef18f4e64ad8cef5dbaf14be732f5095a41cf366b4ecb4050382951"
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnutls/libtasn1_4.10.bb b/import-layers/yocto-poky/meta/recipes-support/gnutls/libtasn1_4.10.bb
deleted file mode 100644
index dfc7f12..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gnutls/libtasn1_4.10.bb
+++ /dev/null
@@ -1,24 +0,0 @@
-SUMMARY = "Library for ASN.1 and DER manipulation"
-HOMEPAGE = "http://www.gnu.org/software/libtasn1/"
-
-LICENSE = "GPLv3+ & LGPLv2.1+"
-LICENSE_${PN}-bin = "GPLv3+"
-LICENSE_${PN} = "LGPLv2.1+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504 \
-                    file://COPYING.LIB;md5=4fbd65380cdd255951079008b364516c \
-                    file://README;endline=8;md5=c3803a3e8ca5ab5eb1e5912faa405351"
-
-SRC_URI = "${GNU_MIRROR}/libtasn1/libtasn1-${PV}.tar.gz \
-           file://dont-depend-on-help2man.patch \
-           file://0001-stdint.m4-reintroduce-GNULIB_OVERRIDES_WINT_T-check.patch \
-           file://CVE-2017-10790.patch \
-           "
-
-DEPENDS = "bison-native"
-
-SRC_URI[md5sum] = "f4faffdf63969d0e4e6df43b9679e8e5"
-SRC_URI[sha256sum] = "681a4d9a0d259f2125713f2e5766c5809f151b3a1392fd91390f780b4b8f5a02"
-
-inherit autotools texinfo binconfig lib_package gtk-doc
-
-BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-support/gnutls/libtasn1_4.12.bb b/import-layers/yocto-poky/meta/recipes-support/gnutls/libtasn1_4.12.bb
new file mode 100644
index 0000000..7a7571a
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gnutls/libtasn1_4.12.bb
@@ -0,0 +1,24 @@
+SUMMARY = "Library for ASN.1 and DER manipulation"
+HOMEPAGE = "http://www.gnu.org/software/libtasn1/"
+
+LICENSE = "GPLv3+ & LGPLv2.1+"
+LICENSE_${PN}-bin = "GPLv3+"
+LICENSE_${PN} = "LGPLv2.1+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=d32239bcb673463ab874e80d47fae504 \
+                    file://COPYING.LIB;md5=4fbd65380cdd255951079008b364516c \
+                    file://README;endline=8;md5=c3803a3e8ca5ab5eb1e5912faa405351"
+
+SRC_URI = "${GNU_MIRROR}/libtasn1/libtasn1-${PV}.tar.gz \
+           file://dont-depend-on-help2man.patch \
+           file://0001-stdint.m4-reintroduce-GNULIB_OVERRIDES_WINT_T-check.patch \
+           file://CVE-2017-10790.patch \
+           "
+
+DEPENDS = "bison-native"
+
+SRC_URI[md5sum] = "5c724bd1f73aaf4a311833e1cd297b21"
+SRC_URI[sha256sum] = "6753da2e621257f33f5b051cc114d417e5206a0818fe0b1ecfd6153f70934753"
+
+inherit autotools texinfo binconfig lib_package gtk-doc
+
+BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0001-Correctly-install-python-modules.patch b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0001-Correctly-install-python-modules.patch
deleted file mode 100644
index 42655fb..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0001-Correctly-install-python-modules.patch
+++ /dev/null
@@ -1,26 +0,0 @@
-From 4d714c097e497b63d2e8b22a834c671045e215e9 Mon Sep 17 00:00:00 2001
-From: Alexander Kanavin <alex.kanavin@gmail.com>
-Date: Thu, 9 Mar 2017 21:34:55 +0200
-Subject: [PATCH] Correctly install python modules
-
-Upstream-Status: Inappropriate [oe-core specific]
-Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
----
- lang/python/Makefile.am | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/lang/python/Makefile.am b/lang/python/Makefile.am
-index e32fd12..5ecf6fb 100644
---- a/lang/python/Makefile.am
-+++ b/lang/python/Makefile.am
-@@ -102,6 +102,7 @@ install-exec-local:
- 	for PYTHON in $(PYTHONS); do \
- 	  $$PYTHON setup.py install \
- 	  --prefix $(DESTDIR)$(prefix) \
-+          --install-lib=$(DESTDIR)${pythondir} \
- 	  --record files.txt \
- 	  --verbose ; \
- 	  cat files.txt >> install_files.txt ; \
--- 
-2.11.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0001-gpgme-config-skip-all-lib-or-usr-lib-directories-in-.patch b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0001-gpgme-config-skip-all-lib-or-usr-lib-directories-in-.patch
deleted file mode 100644
index 84d55b9..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0001-gpgme-config-skip-all-lib-or-usr-lib-directories-in-.patch
+++ /dev/null
@@ -1,31 +0,0 @@
-From 8c317f6186bd3a9a1c80b4d1e872b3db95934bb6 Mon Sep 17 00:00:00 2001
-From: Alexander Kanavin <alex.kanavin@gmail.com>
-Date: Thu, 13 Apr 2017 16:40:27 +0300
-Subject: [PATCH] gpgme-config: skip all /lib* or /usr/lib* directories in
- output
-
-The logic was not working in multilib setups which use other
-directory names than plain /lib or /usr/lib.
-
-Upstream-Status: Inappropriate [oe-core specific]
-Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
----
- src/gpgme-config.in | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/src/gpgme-config.in b/src/gpgme-config.in
-index a4d152e..8342865 100644
---- a/src/gpgme-config.in
-+++ b/src/gpgme-config.in
-@@ -154,7 +154,7 @@ while test $# -gt 0; do
-             for i in $libs $tmp_l $assuan_libs $gpg_error_libs $tmp_x; do
-               skip=no
-               case $i in
--                  -L/usr/lib|-L/lib)
-+                  -L/usr/lib*|-L/lib*)
-                       skip=yes
-                       ;;
-                   -L*|-l*)
--- 
-2.11.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0001-pkgconfig.patch b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0001-pkgconfig.patch
new file mode 100644
index 0000000..14a43ee
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0001-pkgconfig.patch
@@ -0,0 +1,303 @@
+From 8ae149035c97d27cd2c624704d1651806c53577e Mon Sep 17 00:00:00 2001
+From: Richard Purdie <richard.purdie@linuxfoundation.org>
+Date: Wed, 16 Aug 2017 02:00:08 -0400
+Subject: [PATCH 1/5] pkgconfig
+
+Update gpgme to use pkgconfig instead of -config files since it's
+simpler and less error prone when cross compiling.
+
+Upstream-Status: Denied [Upstream not interested in pkg-config support]
+RP 2015/4/17
+
+Signed-off-by: Richard Purdie <richard.purdie@linuxfoundation.org>
+
+Rebase to 1.9.0
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ configure.ac            |   1 +
+ src/Makefile.am         |   4 +-
+ src/gpgme-pthread.pc.in |  15 +++++++
+ src/gpgme.m4            | 114 ++++--------------------------------------------
+ src/gpgme.pc.in         |  15 +++++++
+ 5 files changed, 42 insertions(+), 107 deletions(-)
+ create mode 100644 src/gpgme-pthread.pc.in
+ create mode 100644 src/gpgme.pc.in
+
+diff --git a/configure.ac b/configure.ac
+index 0dac6ce..6a9e507 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -866,6 +866,7 @@ AC_CONFIG_FILES(Makefile src/Makefile
+                 src/versioninfo.rc
+                 src/gpgme.h)
+ AC_CONFIG_FILES(src/gpgme-config, chmod +x src/gpgme-config)
++AC_CONFIG_FILES(src/gpgme.pc src/gpgme-pthread.pc)
+ AC_CONFIG_FILES(lang/cpp/Makefile lang/cpp/src/Makefile)
+ AC_CONFIG_FILES(lang/cpp/src/GpgmeppConfig-w32.cmake.in)
+ AC_CONFIG_FILES(lang/cpp/src/GpgmeppConfig.cmake.in)
+diff --git a/src/Makefile.am b/src/Makefile.am
+index ce6f1d4..5f38fee 100644
+--- a/src/Makefile.am
++++ b/src/Makefile.am
+@@ -19,12 +19,14 @@
+ ## Process this file with automake to produce Makefile.in
+ 
+ EXTRA_DIST = gpgme-config.in gpgme.m4 libgpgme.vers ChangeLog-2011 \
+-	     gpgme.h.in versioninfo.rc.in gpgme.def
++	     gpgme.h.in versioninfo.rc.in gpgme.def gpgme.pc.in gpgme-pthread.pc.in
+ 
+ bin_SCRIPTS = gpgme-config
+ m4datadir = $(datadir)/aclocal
+ m4data_DATA = gpgme.m4
+ nodist_include_HEADERS = gpgme.h
++pkgconfigdir = $(libdir)/pkgconfig
++pkgconfig_DATA = gpgme.pc gpgme-pthread.pc
+ 
+ bin_PROGRAMS = gpgme-tool
+ 
+diff --git a/src/gpgme-pthread.pc.in b/src/gpgme-pthread.pc.in
+new file mode 100644
+index 0000000..074bbf6
+--- /dev/null
++++ b/src/gpgme-pthread.pc.in
+@@ -0,0 +1,15 @@
++prefix=@prefix@
++exec_prefix=@exec_prefix@
++libdir=@libdir@
++includedir=@includedir@
++
++# API info
++api_version=@GPGME_CONFIG_API_VERSION@
++host=@GPGME_CONFIG_HOST@
++
++Name: gpgme
++Description: GnuPG Made Easy (GPGME) is a C language library that allows adding support for cryptography to a program (deprecated)
++Version: @VERSION@
++Libs: -L${libdir} -lgpgme -lpthread
++Cflags: -I${includedir}
++Requires: libassuan gpg-error
+diff --git a/src/gpgme.m4 b/src/gpgme.m4
+index 6c2be44..d8a75cb 100644
+--- a/src/gpgme.m4
++++ b/src/gpgme.m4
+@@ -79,7 +79,7 @@ dnl config script does not match the host specification the script
+ dnl is added to the gpg_config_script_warn variable.
+ dnl
+ AC_DEFUN([AM_PATH_GPGME],
+-[ AC_REQUIRE([_AM_PATH_GPGME_CONFIG])dnl
++[
+   tmp=ifelse([$1], ,1:0.4.2,$1)
+   if echo "$tmp" | grep ':' >/dev/null 2>/dev/null ; then
+      req_gpgme_api=`echo "$tmp"     | sed 's/\(.*\):\(.*\)/\1/'`
+@@ -89,36 +89,12 @@ AC_DEFUN([AM_PATH_GPGME],
+      min_gpgme_version="$tmp"
+   fi
+ 
+-  AC_MSG_CHECKING(for GPGME - version >= $min_gpgme_version)
+-  ok=no
+-  if test "$GPGME_CONFIG" != "no" ; then
+-    req_major=`echo $min_gpgme_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\1/'`
+-    req_minor=`echo $min_gpgme_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\2/'`
+-    req_micro=`echo $min_gpgme_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\3/'`
+-    if test "$gpgme_version_major" -gt "$req_major"; then
+-        ok=yes
+-    else
+-        if test "$gpgme_version_major" -eq "$req_major"; then
+-            if test "$gpgme_version_minor" -gt "$req_minor"; then
+-               ok=yes
+-            else
+-               if test "$gpgme_version_minor" -eq "$req_minor"; then
+-                   if test "$gpgme_version_micro" -ge "$req_micro"; then
+-                     ok=yes
+-                   fi
+-               fi
+-            fi
+-        fi
+-    fi
+-  fi
++  PKG_CHECK_MODULES(GPGME, [gpgme >= $min_gpgme_version], [ok=yes], [ok=no])
+   if test $ok = yes; then
+      # If we have a recent GPGME, we should also check that the
+      # API is compatible.
+      if test "$req_gpgme_api" -gt 0 ; then
+-        tmp=`$GPGME_CONFIG --api-version 2>/dev/null || echo 0`
++        tmp=`$PKG_CONFIG --variable=api_version gpgme 2>/dev/null || echo 0`
+         if test "$tmp" -gt 0 ; then
+            if test "$req_gpgme_api" -ne "$tmp" ; then
+              ok=no
+@@ -127,19 +103,11 @@ AC_DEFUN([AM_PATH_GPGME],
+      fi
+   fi
+   if test $ok = yes; then
+-    GPGME_CFLAGS=`$GPGME_CONFIG --cflags`
+-    GPGME_LIBS=`$GPGME_CONFIG --libs`
+-    AC_MSG_RESULT(yes)
+     ifelse([$2], , :, [$2])
+     _AM_PATH_GPGME_CONFIG_HOST_CHECK
+   else
+-    GPGME_CFLAGS=""
+-    GPGME_LIBS=""
+-    AC_MSG_RESULT(no)
+     ifelse([$3], , :, [$3])
+   fi
+-  AC_SUBST(GPGME_CFLAGS)
+-  AC_SUBST(GPGME_LIBS)
+ ])
+ 
+ dnl AM_PATH_GPGME_PTHREAD([MINIMUM-VERSION,
+@@ -148,7 +116,7 @@ dnl Test for libgpgme and define GPGME_PTHREAD_CFLAGS
+ dnl  and GPGME_PTHREAD_LIBS.
+ dnl
+ AC_DEFUN([AM_PATH_GPGME_PTHREAD],
+-[ AC_REQUIRE([_AM_PATH_GPGME_CONFIG])dnl
++[
+   tmp=ifelse([$1], ,1:0.4.2,$1)
+   if echo "$tmp" | grep ':' >/dev/null 2>/dev/null ; then
+      req_gpgme_api=`echo "$tmp"     | sed 's/\(.*\):\(.*\)/\1/'`
+@@ -158,38 +126,12 @@ AC_DEFUN([AM_PATH_GPGME_PTHREAD],
+      min_gpgme_version="$tmp"
+   fi
+ 
+-  AC_MSG_CHECKING(for GPGME pthread - version >= $min_gpgme_version)
+-  ok=no
+-  if test "$GPGME_CONFIG" != "no" ; then
+-    if `$GPGME_CONFIG --thread=pthread 2> /dev/null` ; then
+-      req_major=`echo $min_gpgme_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\1/'`
+-      req_minor=`echo $min_gpgme_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\2/'`
+-      req_micro=`echo $min_gpgme_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\3/'`
+-      if test "$gpgme_version_major" -gt "$req_major"; then
+-        ok=yes
+-      else
+-        if test "$gpgme_version_major" -eq "$req_major"; then
+-          if test "$gpgme_version_minor" -gt "$req_minor"; then
+-            ok=yes
+-          else
+-            if test "$gpgme_version_minor" -eq "$req_minor"; then
+-              if test "$gpgme_version_micro" -ge "$req_micro"; then
+-                ok=yes
+-              fi
+-            fi
+-          fi
+-        fi
+-      fi
+-    fi
+-  fi
++  PKG_CHECK_MODULES(GPGME_PTHREAD, [gpgme-pthread >= $min_gpgme_version], [ok=yes], [ok=no])
+   if test $ok = yes; then
+      # If we have a recent GPGME, we should also check that the
+      # API is compatible.
+      if test "$req_gpgme_api" -gt 0 ; then
+-        tmp=`$GPGME_CONFIG --api-version 2>/dev/null || echo 0`
++        tmp=`$PKG_CONFIG --variable=api_version gpgme-pthread 2>/dev/null || echo 0`
+         if test "$tmp" -gt 0 ; then
+            if test "$req_gpgme_api" -ne "$tmp" ; then
+              ok=no
+@@ -198,19 +140,11 @@ AC_DEFUN([AM_PATH_GPGME_PTHREAD],
+      fi
+   fi
+   if test $ok = yes; then
+-    GPGME_PTHREAD_CFLAGS=`$GPGME_CONFIG --thread=pthread --cflags`
+-    GPGME_PTHREAD_LIBS=`$GPGME_CONFIG --thread=pthread --libs`
+-    AC_MSG_RESULT(yes)
+     ifelse([$2], , :, [$2])
+     _AM_PATH_GPGME_CONFIG_HOST_CHECK
+   else
+-    GPGME_PTHREAD_CFLAGS=""
+-    GPGME_PTHREAD_LIBS=""
+-    AC_MSG_RESULT(no)
+     ifelse([$3], , :, [$3])
+   fi
+-  AC_SUBST(GPGME_PTHREAD_CFLAGS)
+-  AC_SUBST(GPGME_PTHREAD_LIBS)
+ ])
+ 
+ 
+@@ -229,36 +163,12 @@ AC_DEFUN([AM_PATH_GPGME_GLIB],
+      min_gpgme_version="$tmp"
+   fi
+ 
+-  AC_MSG_CHECKING(for GPGME - version >= $min_gpgme_version)
+-  ok=no
+-  if test "$GPGME_CONFIG" != "no" ; then
+-    req_major=`echo $min_gpgme_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\1/'`
+-    req_minor=`echo $min_gpgme_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\2/'`
+-    req_micro=`echo $min_gpgme_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\3/'`
+-    if test "$gpgme_version_major" -gt "$req_major"; then
+-        ok=yes
+-    else
+-        if test "$gpgme_version_major" -eq "$req_major"; then
+-            if test "$gpgme_version_minor" -gt "$req_minor"; then
+-               ok=yes
+-            else
+-               if test "$gpgme_version_minor" -eq "$req_minor"; then
+-                   if test "$gpgme_version_micro" -ge "$req_micro"; then
+-                     ok=yes
+-                   fi
+-               fi
+-            fi
+-        fi
+-    fi
+-  fi
++  PKG_CHECK_MODULES(GPGME_GLIB, [gpgme >= $min_gpgme_version glib-2.0], [ok=yes], [ok=no])  
+   if test $ok = yes; then
+      # If we have a recent GPGME, we should also check that the
+      # API is compatible.
+      if test "$req_gpgme_api" -gt 0 ; then
+-        tmp=`$GPGME_CONFIG --api-version 2>/dev/null || echo 0`
++        tmp=`$PKG_CONFIG --variable=api_version gpgme 2>/dev/null || echo 0`
+         if test "$tmp" -gt 0 ; then
+            if test "$req_gpgme_api" -ne "$tmp" ; then
+              ok=no
+@@ -267,17 +177,9 @@ AC_DEFUN([AM_PATH_GPGME_GLIB],
+      fi
+   fi
+   if test $ok = yes; then
+-    GPGME_GLIB_CFLAGS=`$GPGME_CONFIG --glib --cflags`
+-    GPGME_GLIB_LIBS=`$GPGME_CONFIG --glib --libs`
+-    AC_MSG_RESULT(yes)
+     ifelse([$2], , :, [$2])
+     _AM_PATH_GPGME_CONFIG_HOST_CHECK
+   else
+-    GPGME_GLIB_CFLAGS=""
+-    GPGME_GLIB_LIBS=""
+-    AC_MSG_RESULT(no)
+     ifelse([$3], , :, [$3])
+   fi
+-  AC_SUBST(GPGME_GLIB_CFLAGS)
+-  AC_SUBST(GPGME_GLIB_LIBS)
+ ])
+diff --git a/src/gpgme.pc.in b/src/gpgme.pc.in
+new file mode 100644
+index 0000000..b69539f
+--- /dev/null
++++ b/src/gpgme.pc.in
+@@ -0,0 +1,15 @@
++prefix=@prefix@
++exec_prefix=@exec_prefix@
++libdir=@libdir@
++includedir=@includedir@
++
++# API info
++api_version=@GPGME_CONFIG_API_VERSION@
++host=@GPGME_CONFIG_HOST@
++
++Name: gpgme
++Description: GnuPG Made Easy (GPGME) is a C language library that allows adding support for cryptography to a program.
++Version: @VERSION@
++Libs: -L${libdir} -lgpgme
++Cflags: -I${includedir}
++Requires: libassuan gpg-error
+\ No newline at end of file
+-- 
+2.8.1
+
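
With the gpgme.pc and gpgme-pthread.pc files this patch generates, consumers
can query everything the gpgme-config script used to provide. Illustrative
pkg-config invocations against an installed gpgme.pc:

    pkg-config --modversion gpgme              # was: gpgme-config --version
    pkg-config --cflags --libs gpgme           # was: gpgme-config --cflags --libs
    pkg-config --variable=api_version gpgme    # was: gpgme-config --api-version
    pkg-config --variable=host gpgme           # was: gpgme-config --host
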
diff --git a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0002-gpgme-lang-python-gpg-error-config-should-not-be-use.patch b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0002-gpgme-lang-python-gpg-error-config-should-not-be-use.patch
new file mode 100644
index 0000000..f1f8c91
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0002-gpgme-lang-python-gpg-error-config-should-not-be-use.patch
@@ -0,0 +1,65 @@
+From fb165c9bd96aca8c9ee3e4509c9b6e35d238ad2e Mon Sep 17 00:00:00 2001
+From: Mark Hatle <mark.hatle@windriver.com>
+Date: Wed, 16 Aug 2017 02:02:47 -0400
+Subject: [PATCH 2/5] gpgme/lang/python: gpg-error-config should not be used.
+
+gpg-error-config was modified by OE to always return an error.  So we want
+to find an alternative way to retrieve whatever it is we need.  It turns
+out that the system is just trying to find the path to the gpg-error.h, which
+we can pull in from the STAGING_INC environment.
+
+Upstream-Status: Inappropriate [changes are specific to OE]
+
+Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
+
+Rebase to 1.9.0
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ lang/python/setup.py.in | 15 ++-------------
+ 1 file changed, 2 insertions(+), 13 deletions(-)
+
+diff --git a/lang/python/setup.py.in b/lang/python/setup.py.in
+index bf4efa3..7c34487 100755
+--- a/lang/python/setup.py.in
++++ b/lang/python/setup.py.in
+@@ -24,7 +24,6 @@ import glob
+ import subprocess
+ 
+ # Out-of-tree build of the gpg bindings.
+-gpg_error_config = ["gpg-error-config"]
+ gpgme_config_flags = ["--thread=pthread"]
+ gpgme_config = ["gpgme-config"] + gpgme_config_flags
+ gpgme_h = ""
+@@ -52,13 +51,6 @@ else:
+     devnull = open(os.devnull, "w")
+ 
+ try:
+-    subprocess.check_call(gpg_error_config + ['--version'],
+-                          stdout=devnull)
+-except:
+-    sys.exit("Could not find gpg-error-config.  " +
+-             "Please install the libgpg-error development package.")
+-
+-try:
+     subprocess.check_call(gpgme_config + ['--version'],
+                           stdout=devnull)
+ except:
+@@ -81,12 +73,9 @@ if not (major > 1 or (major == 1 and minor >= 7)):
+ if not gpgme_h:
+     gpgme_h = os.path.join(getconfig("prefix")[0], "include", "gpgme.h")
+ 
+-gpg_error_prefix = getconfig("prefix", config=gpg_error_config)[0]
+-gpg_error_h = os.path.join(gpg_error_prefix, "include", "gpg-error.h")
++gpg_error_h = os.path.join(os.getenv('STAGING_INCDIR'), "gpg-error.h")
+ if not os.path.exists(gpg_error_h):
+-    gpg_error_h = \
+-        glob.glob(os.path.join(gpg_error_prefix, "include",
+-                               "*", "gpg-error.h"))[0]
++    sys.exit("gpg_error_h not found: %s" % gpg_error_h)
+ 
+ print("Building python gpg module using {} and {}.".format(gpgme_h, gpg_error_h))
+ 
+-- 
+2.8.1
+
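
Since setup.py now takes the header location from the environment, the
assumption the patch relies on can be checked with a short shell test along
these lines (STAGING_INCDIR is exported by the OE build environment; the
snippet only verifies it, it does not set it):

    : "${STAGING_INCDIR:?STAGING_INCDIR must be set by the build environment}"
    if [ -f "${STAGING_INCDIR}/gpg-error.h" ]; then
        echo "found ${STAGING_INCDIR}/gpg-error.h"
    else
        echo "gpg-error.h not found under ${STAGING_INCDIR}" >&2
        exit 1
    fi
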
diff --git a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0003-Correctly-install-python-modules.patch b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0003-Correctly-install-python-modules.patch
new file mode 100644
index 0000000..d383311
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0003-Correctly-install-python-modules.patch
@@ -0,0 +1,30 @@
+From 62332eec752dd790f4dd071dfb0dbe86be377203 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Wed, 16 Aug 2017 02:05:34 -0400
+Subject: [PATCH 3/5] Correctly install python modules
+
+Upstream-Status: Inappropriate [oe-core specific]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+
+Rebase to 1.9.0
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ lang/python/Makefile.am | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/lang/python/Makefile.am b/lang/python/Makefile.am
+index d91ead9..67f7cdc 100644
+--- a/lang/python/Makefile.am
++++ b/lang/python/Makefile.am
+@@ -106,6 +106,7 @@ install-exec-local:
+ 	  cd python$${VERSION}-gpg ; \
+ 	  $$PYTHON setup.py install \
+ 	  --prefix $(DESTDIR)$(prefix) \
++	  --install-lib=$(DESTDIR)${pythondir} \
+ 	  --record files.txt \
+ 	  --verbose ; \
+ 	  cat files.txt >> ../install_files.txt ; \
+-- 
+2.8.1
+
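
The single added --install-lib argument means the per-interpreter install
step ends up invoking something along these lines (a sketch; PYTHON, VERSION,
prefix and pythondir are substituted by the build and shown here only as
placeholders):

    cd "python${VERSION}-gpg"
    "${PYTHON}" setup.py install \
        --prefix "${DESTDIR}${prefix}" \
        --install-lib "${DESTDIR}${pythondir}" \
        --record files.txt \
        --verbose
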
diff --git a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0004-python-import.patch b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0004-python-import.patch
new file mode 100644
index 0000000..9307103
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0004-python-import.patch
@@ -0,0 +1,34 @@
+From ccbf028eea8815d3b16d6c34b527253a6b108ec3 Mon Sep 17 00:00:00 2001
+From: Ross Burton <ross.burton@intel.com>
+Date: Wed, 16 Aug 2017 02:06:45 -0400
+Subject: [PATCH 4/5] python import
+
+Don't check for output on stderr to know if an import worked, host inputrc and
+sysroot readline can cause warnings on stderr.
+
+Upstream-Status: Backport (from autoconf-archive 883a2abd)
+Signed-off-by: Ross Burton <ross.burton@intel.com>
+
+Rebase to 1.9.0
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ m4/ax_python_devel.m4 | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/m4/ax_python_devel.m4 b/m4/ax_python_devel.m4
+index b990d5b..318b089 100644
+--- a/m4/ax_python_devel.m4
++++ b/m4/ax_python_devel.m4
+@@ -137,7 +137,7 @@ variable to configure. See ``configure --help'' for reference.
+ 	#
+ 	AC_MSG_CHECKING([for the distutils Python package])
+ 	ac_distutils_result=`$PYTHON -c "import distutils" 2>&1`
+-	if test -z "$ac_distutils_result"; then
++	if test $? -eq 0; then
+ 		AC_MSG_RESULT([yes])
+ 	else
+ 		AC_MSG_RESULT([no])
+-- 
+2.8.1
+
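
The distinction the patch draws is easy to reproduce in a shell: output on
stderr is not the same thing as a failed import. A small sketch, with
python3 standing in for whichever interpreter configure selected:

    out=$(python3 -c "import distutils" 2>&1)   # may emit harmless warnings
    status=$?
    [ -n "$out" ]     && echo "old test: output seen, would wrongly report 'no'"
    [ $status -eq 0 ] && echo "new test: exit status 0, the import actually worked"
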
diff --git a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0005-gpgme-config-skip-all-lib-or-usr-lib-directories-in-.patch b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0005-gpgme-config-skip-all-lib-or-usr-lib-directories-in-.patch
new file mode 100644
index 0000000..7a6cc7b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/0005-gpgme-config-skip-all-lib-or-usr-lib-directories-in-.patch
@@ -0,0 +1,31 @@
+From 064ae4441e2c11329748a18157988f9e953f9752 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Thu, 13 Apr 2017 16:40:27 +0300
+Subject: [PATCH 5/5] gpgme-config: skip all /lib* or /usr/lib* directories in
+ output
+
+The logic was not working in multilib setups which use other
+directory names than plain /lib or /usr/lib.
+
+Upstream-Status: Inappropriate [oe-core specific]
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+---
+ src/gpgme-config.in | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/src/gpgme-config.in b/src/gpgme-config.in
+index a4d152e..8342865 100644
+--- a/src/gpgme-config.in
++++ b/src/gpgme-config.in
+@@ -154,7 +154,7 @@ while test $# -gt 0; do
+             for i in $libs $tmp_l $assuan_libs $gpg_error_libs $tmp_x; do
+               skip=no
+               case $i in
+-                  -L/usr/lib|-L/lib)
++                  -L/usr/lib*|-L/lib*)
+                       skip=yes
+                       ;;
+                   -L*|-l*)
+-- 
+2.8.1
+
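
The widened glob is the whole fix: with -L/usr/lib*|-L/lib*, multilib
directories such as /usr/lib64 are filtered out along with the plain default
paths, while genuinely non-default search paths and -l flags survive. A small
shell demonstration of the case pattern (example arguments only):

    for i in -L/usr/lib -L/usr/lib64 -L/opt/vendor/lib -lgpg-error; do
        case $i in
            -L/usr/lib*|-L/lib*) echo "skip $i" ;;
            *)                   echo "keep $i" ;;
        esac
    done
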
diff --git a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/pkgconfig.patch b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/pkgconfig.patch
deleted file mode 100644
index 341cabf..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/pkgconfig.patch
+++ /dev/null
@@ -1,295 +0,0 @@
-Update gpgme to use pkgconfig instead of -config files since its
-simpler and less error prone when cross compiling.
-
-Upstream-Status: Denied [Upstream not interested in pkg-config support]
-RP 2015/4/17
-
-Rebase to 1.8.0
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- configure.ac            |   1 +
- src/Makefile.am         |   4 +-
- src/gpgme-pthread.pc.in |  15 +++++++
- src/gpgme.m4            | 114 ++++--------------------------------------------
- src/gpgme.pc.in         |  15 +++++++
- 5 files changed, 42 insertions(+), 107 deletions(-)
- create mode 100644 src/gpgme-pthread.pc.in
- create mode 100644 src/gpgme.pc.in
-
-diff --git a/configure.ac b/configure.ac
-index 0a67b48..e402dd3 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -875,6 +875,7 @@ AC_CONFIG_FILES(Makefile src/Makefile
-                 src/versioninfo.rc
-                 src/gpgme.h)
- AC_CONFIG_FILES(src/gpgme-config, chmod +x src/gpgme-config)
-+AC_CONFIG_FILES(src/gpgme.pc src/gpgme-pthread.pc)
- AC_CONFIG_FILES(lang/cpp/Makefile lang/cpp/src/Makefile)
- AC_CONFIG_FILES(lang/cpp/src/GpgmeppConfig-w32.cmake.in)
- AC_CONFIG_FILES(lang/cpp/src/GpgmeppConfig.cmake.in)
-diff --git a/src/Makefile.am b/src/Makefile.am
-index ce6f1d4..5f38fee 100644
---- a/src/Makefile.am
-+++ b/src/Makefile.am
-@@ -19,12 +19,14 @@
- ## Process this file with automake to produce Makefile.in
- 
- EXTRA_DIST = gpgme-config.in gpgme.m4 libgpgme.vers ChangeLog-2011 \
--	     gpgme.h.in versioninfo.rc.in gpgme.def
-+	     gpgme.h.in versioninfo.rc.in gpgme.def gpgme.pc.in gpgme-pthread.pc.in
- 
- bin_SCRIPTS = gpgme-config
- m4datadir = $(datadir)/aclocal
- m4data_DATA = gpgme.m4
- nodist_include_HEADERS = gpgme.h
-+pkgconfigdir = $(libdir)/pkgconfig
-+pkgconfig_DATA = gpgme.pc gpgme-pthread.pc
- 
- bin_PROGRAMS = gpgme-tool
- 
-diff --git a/src/gpgme-pthread.pc.in b/src/gpgme-pthread.pc.in
-new file mode 100644
-index 0000000..980a48e
---- /dev/null
-+++ b/src/gpgme-pthread.pc.in
-@@ -0,0 +1,15 @@
-+prefix=@prefix@
-+exec_prefix=@exec_prefix@
-+libdir=@libdir@
-+includedir=@includedir@
-+
-+# API info
-+api_version=@GPGME_CONFIG_API_VERSION@
-+host=@GPGME_CONFIG_HOST@
-+
-+Name: gpgme
-+Description: GnuPG Made Easy (GPGME) is a C language library that allows to addsupport for cryptography to a program (deprecated)
-+Version: @VERSION@
-+Libs: -L${libdir} -lgpgme -lpthread
-+Cflags: -I${includedir}
-+Requires: libassuan gpg-error
-diff --git a/src/gpgme.m4 b/src/gpgme.m4
-index 6c2be44..d8a75cb 100644
---- a/src/gpgme.m4
-+++ b/src/gpgme.m4
-@@ -79,7 +79,7 @@ dnl config script does not match the host specification the script
- dnl is added to the gpg_config_script_warn variable.
- dnl
- AC_DEFUN([AM_PATH_GPGME],
--[ AC_REQUIRE([_AM_PATH_GPGME_CONFIG])dnl
-+[
-   tmp=ifelse([$1], ,1:0.4.2,$1)
-   if echo "$tmp" | grep ':' >/dev/null 2>/dev/null ; then
-      req_gpgme_api=`echo "$tmp"     | sed 's/\(.*\):\(.*\)/\1/'`
-@@ -89,36 +89,12 @@ AC_DEFUN([AM_PATH_GPGME],
-      min_gpgme_version="$tmp"
-   fi
- 
--  AC_MSG_CHECKING(for GPGME - version >= $min_gpgme_version)
--  ok=no
--  if test "$GPGME_CONFIG" != "no" ; then
--    req_major=`echo $min_gpgme_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\1/'`
--    req_minor=`echo $min_gpgme_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\2/'`
--    req_micro=`echo $min_gpgme_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\3/'`
--    if test "$gpgme_version_major" -gt "$req_major"; then
--        ok=yes
--    else
--        if test "$gpgme_version_major" -eq "$req_major"; then
--            if test "$gpgme_version_minor" -gt "$req_minor"; then
--               ok=yes
--            else
--               if test "$gpgme_version_minor" -eq "$req_minor"; then
--                   if test "$gpgme_version_micro" -ge "$req_micro"; then
--                     ok=yes
--                   fi
--               fi
--            fi
--        fi
--    fi
--  fi
-+  PKG_CHECK_MODULES(GPGME, [gpgme >= $min_gpgme_version], [ok=yes], [ok=no])
-   if test $ok = yes; then
-      # If we have a recent GPGME, we should also check that the
-      # API is compatible.
-      if test "$req_gpgme_api" -gt 0 ; then
--        tmp=`$GPGME_CONFIG --api-version 2>/dev/null || echo 0`
-+        tmp=`$PKG_CONFIG --variable=api_version gpgme 2>/dev/null || echo 0`
-         if test "$tmp" -gt 0 ; then
-            if test "$req_gpgme_api" -ne "$tmp" ; then
-              ok=no
-@@ -127,19 +103,11 @@ AC_DEFUN([AM_PATH_GPGME],
-      fi
-   fi
-   if test $ok = yes; then
--    GPGME_CFLAGS=`$GPGME_CONFIG --cflags`
--    GPGME_LIBS=`$GPGME_CONFIG --libs`
--    AC_MSG_RESULT(yes)
-     ifelse([$2], , :, [$2])
-     _AM_PATH_GPGME_CONFIG_HOST_CHECK
-   else
--    GPGME_CFLAGS=""
--    GPGME_LIBS=""
--    AC_MSG_RESULT(no)
-     ifelse([$3], , :, [$3])
-   fi
--  AC_SUBST(GPGME_CFLAGS)
--  AC_SUBST(GPGME_LIBS)
- ])
- 
- dnl AM_PATH_GPGME_PTHREAD([MINIMUM-VERSION,
-@@ -148,7 +116,7 @@ dnl Test for libgpgme and define GPGME_PTHREAD_CFLAGS
- dnl  and GPGME_PTHREAD_LIBS.
- dnl
- AC_DEFUN([AM_PATH_GPGME_PTHREAD],
--[ AC_REQUIRE([_AM_PATH_GPGME_CONFIG])dnl
-+[
-   tmp=ifelse([$1], ,1:0.4.2,$1)
-   if echo "$tmp" | grep ':' >/dev/null 2>/dev/null ; then
-      req_gpgme_api=`echo "$tmp"     | sed 's/\(.*\):\(.*\)/\1/'`
-@@ -158,38 +126,12 @@ AC_DEFUN([AM_PATH_GPGME_PTHREAD],
-      min_gpgme_version="$tmp"
-   fi
- 
--  AC_MSG_CHECKING(for GPGME pthread - version >= $min_gpgme_version)
--  ok=no
--  if test "$GPGME_CONFIG" != "no" ; then
--    if `$GPGME_CONFIG --thread=pthread 2> /dev/null` ; then
--      req_major=`echo $min_gpgme_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\1/'`
--      req_minor=`echo $min_gpgme_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\2/'`
--      req_micro=`echo $min_gpgme_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\3/'`
--      if test "$gpgme_version_major" -gt "$req_major"; then
--        ok=yes
--      else
--        if test "$gpgme_version_major" -eq "$req_major"; then
--          if test "$gpgme_version_minor" -gt "$req_minor"; then
--            ok=yes
--          else
--            if test "$gpgme_version_minor" -eq "$req_minor"; then
--              if test "$gpgme_version_micro" -ge "$req_micro"; then
--                ok=yes
--              fi
--            fi
--          fi
--        fi
--      fi
--    fi
--  fi
-+  PKG_CHECK_MODULES(GPGME_PTHREAD, [gpgme-pthread >= $min_gpgme_version], [ok=yes], [ok=no])
-   if test $ok = yes; then
-      # If we have a recent GPGME, we should also check that the
-      # API is compatible.
-      if test "$req_gpgme_api" -gt 0 ; then
--        tmp=`$GPGME_CONFIG --api-version 2>/dev/null || echo 0`
-+        tmp=`$PKG_CONFIG --variable=api_version gpgme-pthread 2>/dev/null || echo 0`
-         if test "$tmp" -gt 0 ; then
-            if test "$req_gpgme_api" -ne "$tmp" ; then
-              ok=no
-@@ -198,19 +140,11 @@ AC_DEFUN([AM_PATH_GPGME_PTHREAD],
-      fi
-   fi
-   if test $ok = yes; then
--    GPGME_PTHREAD_CFLAGS=`$GPGME_CONFIG --thread=pthread --cflags`
--    GPGME_PTHREAD_LIBS=`$GPGME_CONFIG --thread=pthread --libs`
--    AC_MSG_RESULT(yes)
-     ifelse([$2], , :, [$2])
-     _AM_PATH_GPGME_CONFIG_HOST_CHECK
-   else
--    GPGME_PTHREAD_CFLAGS=""
--    GPGME_PTHREAD_LIBS=""
--    AC_MSG_RESULT(no)
-     ifelse([$3], , :, [$3])
-   fi
--  AC_SUBST(GPGME_PTHREAD_CFLAGS)
--  AC_SUBST(GPGME_PTHREAD_LIBS)
- ])
- 
- 
-@@ -229,36 +163,12 @@ AC_DEFUN([AM_PATH_GPGME_GLIB],
-      min_gpgme_version="$tmp"
-   fi
- 
--  AC_MSG_CHECKING(for GPGME - version >= $min_gpgme_version)
--  ok=no
--  if test "$GPGME_CONFIG" != "no" ; then
--    req_major=`echo $min_gpgme_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\1/'`
--    req_minor=`echo $min_gpgme_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\2/'`
--    req_micro=`echo $min_gpgme_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\3/'`
--    if test "$gpgme_version_major" -gt "$req_major"; then
--        ok=yes
--    else
--        if test "$gpgme_version_major" -eq "$req_major"; then
--            if test "$gpgme_version_minor" -gt "$req_minor"; then
--               ok=yes
--            else
--               if test "$gpgme_version_minor" -eq "$req_minor"; then
--                   if test "$gpgme_version_micro" -ge "$req_micro"; then
--                     ok=yes
--                   fi
--               fi
--            fi
--        fi
--    fi
--  fi
-+  PKG_CHECK_MODULES(GPGME_GLIB, [gpgme >= $min_gpgme_version glib-2.0], [ok=yes], [ok=no])  
-   if test $ok = yes; then
-      # If we have a recent GPGME, we should also check that the
-      # API is compatible.
-      if test "$req_gpgme_api" -gt 0 ; then
--        tmp=`$GPGME_CONFIG --api-version 2>/dev/null || echo 0`
-+        tmp=`$PKG_CONFIG --variable=api_version gpgme 2>/dev/null || echo 0`
-         if test "$tmp" -gt 0 ; then
-            if test "$req_gpgme_api" -ne "$tmp" ; then
-              ok=no
-@@ -267,17 +177,9 @@ AC_DEFUN([AM_PATH_GPGME_GLIB],
-      fi
-   fi
-   if test $ok = yes; then
--    GPGME_GLIB_CFLAGS=`$GPGME_CONFIG --glib --cflags`
--    GPGME_GLIB_LIBS=`$GPGME_CONFIG --glib --libs`
--    AC_MSG_RESULT(yes)
-     ifelse([$2], , :, [$2])
-     _AM_PATH_GPGME_CONFIG_HOST_CHECK
-   else
--    GPGME_GLIB_CFLAGS=""
--    GPGME_GLIB_LIBS=""
--    AC_MSG_RESULT(no)
-     ifelse([$3], , :, [$3])
-   fi
--  AC_SUBST(GPGME_GLIB_CFLAGS)
--  AC_SUBST(GPGME_GLIB_LIBS)
- ])
-diff --git a/src/gpgme.pc.in b/src/gpgme.pc.in
-new file mode 100644
-index 0000000..b69539f
---- /dev/null
-+++ b/src/gpgme.pc.in
-@@ -0,0 +1,15 @@
-+prefix=@prefix@
-+exec_prefix=@exec_prefix@
-+libdir=@libdir@
-+includedir=@includedir@
-+
-+# API info
-+api_version=@GPGME_CONFIG_API_VERSION@
-+host=@GPGME_CONFIG_HOST@
-+
-+Name: gpgme
-+Description: GnuPG Made Easy (GPGME) is a C language library that allows to addsupport for cryptography to a program.
-+Version: @VERSION@
-+Libs: -L${libdir} -lgpgme
-+Cflags: -I${includedir}
-+Requires: libassuan gpg-error
-\ No newline at end of file
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/python-import.patch b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/python-import.patch
deleted file mode 100644
index 61b77a1..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/python-import.patch
+++ /dev/null
@@ -1,19 +0,0 @@
-Don't check for output on stderr to know if an import worked, host inputrc and
-sysroot readline can cause warnings on stderr.
-
-Upstream-Status: Backport (from autoconf-archive 883a2abd)
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-diff --git a/m4/ax_python_devel.m4 b/m4/ax_python_devel.m4
-index b990d5b..318b089 100644
---- a/m4/ax_python_devel.m4
-+++ b/m4/ax_python_devel.m4
-@@ -137,7 +137,7 @@ variable to configure. See ``configure --help'' for reference.
- 	#
- 	AC_MSG_CHECKING([for the distutils Python package])
- 	ac_distutils_result=`$PYTHON -c "import distutils" 2>&1`
--	if test -z "$ac_distutils_result"; then
-+	if test $? -eq 0; then
- 		AC_MSG_RESULT([yes])
- 	else
- 		AC_MSG_RESULT([no])
diff --git a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/python-lang-config.patch b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/python-lang-config.patch
deleted file mode 100644
index 132e426..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme/python-lang-config.patch
+++ /dev/null
@@ -1,52 +0,0 @@
-gpgme/lang/python: gpg-error-config should not be used.
-
-gpg-error-config was modified by OE to always return an error.  So we want
-to find an alternative way to retrieve whatever it is we need.  It turns
-out that the system is just trying to find the path to the gpg-error.h, which
-we can pull in from the STAGING_INC environment.
-
-Upstream-Status: Inappropriate [changes are specific to OE]
-
-Signed-off-by: Mark Hatle <mark.hatle@windriver.com>
-
-Index: gpgme-1.8.0/lang/python/setup.py.in
-===================================================================
---- gpgme-1.8.0.orig/lang/python/setup.py.in
-+++ gpgme-1.8.0/lang/python/setup.py.in
-@@ -24,7 +24,6 @@ import glob
- import subprocess
- 
- # Out-of-tree build of the gpg bindings.
--gpg_error_config = ["gpg-error-config"]
- gpgme_config_flags = ["--thread=pthread"]
- gpgme_config = ["gpgme-config"] + gpgme_config_flags
- gpgme_h = ""
-@@ -52,13 +51,6 @@ else:
-     devnull = open(os.devnull, "w")
- 
- try:
--    subprocess.check_call(gpg_error_config + ['--version'],
--                          stdout=devnull)
--except:
--    sys.exit("Could not find gpg-error-config.  " +
--             "Please install the libgpg-error development package.")
--
--try:
-     subprocess.check_call(gpgme_config + ['--version'],
-                           stdout=devnull)
- except:
-@@ -81,12 +73,9 @@ if not (major > 1 or (major == 1 and min
- if not gpgme_h:
-     gpgme_h = os.path.join(getconfig("prefix")[0], "include", "gpgme.h")
- 
--gpg_error_prefix = getconfig("prefix", config=gpg_error_config)[0]
--gpg_error_h = os.path.join(gpg_error_prefix, "include", "gpg-error.h")
-+gpg_error_h = os.path.join(os.getenv('STAGING_INCDIR'), "gpg-error.h")
- if not os.path.exists(gpg_error_h):
--    gpg_error_h = \
--        glob.glob(os.path.join(gpg_error_prefix, "include",
--                               "*", "gpg-error.h"))[0]
-+    sys.exit("gpg_error_h not found: %s" % gpg_error_h)
- 
- print("Building python gpg module using {} and {}.".format(gpgme_h, gpg_error_h))
- 
diff --git a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme_1.8.0.bb b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme_1.8.0.bb
deleted file mode 100644
index 4ddf6ed..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme_1.8.0.bb
+++ /dev/null
@@ -1,76 +0,0 @@
-SUMMARY = "High-level GnuPG encryption/signing API"
-DESCRIPTION = "GnuPG Made Easy (GPGME) is a library designed to make access to GnuPG easier for applications. It provides a High-Level Crypto API for encryption, decryption, signing, signature verification and key management"
-HOMEPAGE = "http://www.gnupg.org/gpgme.html"
-BUGTRACKER = "https://bugs.g10code.com/gnupg/index"
-
-LICENSE = "GPLv2+ & LGPLv2.1+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
-                    file://COPYING.LESSER;md5=bbb461211a33b134d42ed5ee802b37ff \
-                    file://src/gpgme.h.in;endline=23;md5=0f7059665c4b7897f4f4d0cb93aa9f98 \
-                    file://src/engine.h;endline=22;md5=4b6d8ba313d9b564cc4d4cfb1640af9d"
-
-UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
-SRC_URI = "${GNUPG_MIRROR}/gpgme/${BP}.tar.bz2 \
-           file://pkgconfig.patch \
-           file://python-lang-config.patch \
-           file://0001-Correctly-install-python-modules.patch \
-           file://python-import.patch \
-           file://0001-gpgme-config-skip-all-lib-or-usr-lib-directories-in-.patch \
-          "
-
-SRC_URI[md5sum] = "722a4153904b9b5dc15485a22d29263b"
-SRC_URI[sha256sum] = "596097257c2ce22e747741f8ff3d7e24f6e26231fa198a41b2a072e62d1e5d33"
-
-DEPENDS = "libgpg-error libassuan"
-RDEPENDS_${PN}-cpp += "libstdc++"
-
-RDEPENDS_python2-gpg += "python-unixadmin"
-RDEPENDS_python3-gpg += "python3-unixadmin"
-
-BINCONFIG = "${bindir}/gpgme-config"
-
-# Note select python2 or python3, but you can't select both at the same time
-PACKAGECONFIG ??= "python3"
-PACKAGECONFIG[python2] = ",,python swig-native,"
-PACKAGECONFIG[python3] = ",,python3 swig-native,"
-
-# Default in configure.ac: "cl cpp python qt"
-# Supported: "cl cpp python python2 python3 qt"
-# python says 'search and find python2 or python3'
-
-# Building the C++ bindings for native requires a C++ compiler with C++11
-# support. Since these bindings are currently not needed, we can disable them.
-DEFAULT_LANGUAGES = ""
-DEFAULT_LANGUAGES_class-target = "cpp"
-LANGUAGES ?= "${DEFAULT_LANGUAGES}"
-LANGUAGES .= "${@bb.utils.contains('PACKAGECONFIG', 'python2', ' python2', '', d)}"
-LANGUAGES .= "${@bb.utils.contains('PACKAGECONFIG', 'python3', ' python3', '', d)}"
-
-PYTHON_INHERIT = "${@bb.utils.contains('PACKAGECONFIG', 'python2', 'pythonnative', '', d)}"
-PYTHON_INHERIT .= "${@bb.utils.contains('PACKAGECONFIG', 'python3', 'python3native', '', d)}"
-
-EXTRA_OECONF += '--enable-languages="${LANGUAGES}"'
-
-inherit autotools texinfo binconfig-disabled pkgconfig ${PYTHON_INHERIT}
-
-export PKG_CONFIG='pkg-config'
-
-BBCLASSEXTEND = "native nativesdk"
-
-PACKAGES =+ "${PN}-cpp"
-PACKAGES =. "${@bb.utils.contains('PACKAGECONFIG', 'python2', 'python2-gpg ', '', d)}"
-PACKAGES =. "${@bb.utils.contains('PACKAGECONFIG', 'python3', 'python3-gpg ', '', d)}"
-
-FILES_${PN}-cpp = "${libdir}/libgpgmepp.so.*"
-FILES_python2-gpg = "${PYTHON_SITEPACKAGES_DIR}/*"
-FILES_python3-gpg = "${PYTHON_SITEPACKAGES_DIR}/*"
-FILES_${PN}-dev += "${datadir}/common-lisp/source/gpgme/* \
-                    ${libdir}/cmake/* \
-"
-
-CFLAGS_append_libc-musl = " -D__error_t_defined "
-do_configure_prepend () {
-	# Else these could be used in preference to those in aclocal-copy
-	rm -f ${S}/m4/gpg-error.m4
-	rm -f ${S}/m4/libassuan.m4
-}
diff --git a/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme_1.9.0.bb b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme_1.9.0.bb
new file mode 100644
index 0000000..065c346
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/gpgme/gpgme_1.9.0.bb
@@ -0,0 +1,77 @@
+SUMMARY = "High-level GnuPG encryption/signing API"
+DESCRIPTION = "GnuPG Made Easy (GPGME) is a library designed to make access to GnuPG easier for applications. It provides a High-Level Crypto API for encryption, decryption, signing, signature verification and key management"
+HOMEPAGE = "http://www.gnupg.org/gpgme.html"
+BUGTRACKER = "https://bugs.g10code.com/gnupg/index"
+
+LICENSE = "GPLv2+ & LGPLv2.1+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
+                    file://COPYING.LESSER;md5=bbb461211a33b134d42ed5ee802b37ff \
+                    file://src/gpgme.h.in;endline=23;md5=9d157d08a69059344e6f82abd2d25781 \
+                    file://src/engine.h;endline=22;md5=4b6d8ba313d9b564cc4d4cfb1640af9d"
+
+UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
+SRC_URI = "${GNUPG_MIRROR}/gpgme/${BP}.tar.bz2 \
+           file://0001-pkgconfig.patch \
+           file://0002-gpgme-lang-python-gpg-error-config-should-not-be-use.patch \
+           file://0003-Correctly-install-python-modules.patch \
+           file://0004-python-import.patch \
+           file://0005-gpgme-config-skip-all-lib-or-usr-lib-directories-in-.patch \
+          "
+
+SRC_URI[md5sum] = "1e00bb8ef04d1d05d5a0f19e143854c3"
+SRC_URI[sha256sum] = "1b29fedb8bfad775e70eafac5b0590621683b2d9869db994568e6401f4034ceb"
+
+DEPENDS = "libgpg-error libassuan"
+RDEPENDS_${PN}-cpp += "libstdc++"
+
+RDEPENDS_python2-gpg += "python-unixadmin"
+RDEPENDS_python3-gpg += "python3-unixadmin"
+
+BINCONFIG = "${bindir}/gpgme-config"
+
+# Note: select python2 or python3; you can't select both at the same time
+PACKAGECONFIG ??= "python3"
+PACKAGECONFIG[python2] = ",,python swig-native,"
+PACKAGECONFIG[python3] = ",,python3 swig-native,"
+
+# Default in configure.ac: "cl cpp python qt"
+# Supported: "cl cpp python python2 python3 qt"
+# 'python' means 'search for and use either python2 or python3'
+
+# Building the C++ bindings for native requires a C++ compiler with C++11
+# support. Since these bindings are currently not needed, we can disable them.
+DEFAULT_LANGUAGES = ""
+DEFAULT_LANGUAGES_class-target = "cpp"
+LANGUAGES ?= "${DEFAULT_LANGUAGES}"
+LANGUAGES .= "${@bb.utils.contains('PACKAGECONFIG', 'python2', ' python2', '', d)}"
+LANGUAGES .= "${@bb.utils.contains('PACKAGECONFIG', 'python3', ' python3', '', d)}"
+
+PYTHON_INHERIT = "${@bb.utils.contains('PACKAGECONFIG', 'python2', 'pythonnative', '', d)}"
+PYTHON_INHERIT .= "${@bb.utils.contains('PACKAGECONFIG', 'python3', 'python3native', '', d)}"
+
+EXTRA_OECONF += '--enable-languages="${LANGUAGES}"'
+
+inherit autotools texinfo binconfig-disabled pkgconfig ${PYTHON_INHERIT}
+
+export PKG_CONFIG='pkg-config'
+
+BBCLASSEXTEND = "native nativesdk"
+
+PACKAGES =+ "${PN}-cpp"
+PACKAGES =. "${@bb.utils.contains('PACKAGECONFIG', 'python2', 'python2-gpg ', '', d)}"
+PACKAGES =. "${@bb.utils.contains('PACKAGECONFIG', 'python3', 'python3-gpg ', '', d)}"
+
+FILES_${PN}-cpp = "${libdir}/libgpgmepp.so.*"
+FILES_python2-gpg = "${PYTHON_SITEPACKAGES_DIR}/*"
+FILES_python3-gpg = "${PYTHON_SITEPACKAGES_DIR}/*"
+FILES_${PN}-dev += "${datadir}/common-lisp/source/gpgme/* \
+                    ${libdir}/cmake/* \
+"
+
+CFLAGS_append_libc-musl = " -D__error_t_defined "
+do_configure_prepend () {
+	# Else these could be used in preference to those in aclocal-copy
+	rm -f ${S}/m4/gpg-error.m4
+	rm -f ${S}/m4/libassuan.m4
+	rm -f ${S}/m4/python.m4
+}
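As an aside, a minimal sketch of how the python binding selection above would typically be flipped from a downstream layer; the bbappend file name and its existence are assumptions for illustration, while the flag names come from the PACKAGECONFIG entries in the recipe:

    # gpgme_%.bbappend -- hypothetical downstream tweak, not part of this tree
    # Build the python2 bindings instead of the python3 default; per the note
    # in the recipe, only one of the two flags can be enabled at a time.
    PACKAGECONFIG = "python2"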
diff --git a/import-layers/yocto-poky/meta/recipes-support/icu/icu.inc b/import-layers/yocto-poky/meta/recipes-support/icu/icu.inc
index fe48101..a1ef9ec 100644
--- a/import-layers/yocto-poky/meta/recipes-support/icu/icu.inc
+++ b/import-layers/yocto-poky/meta/recipes-support/icu/icu.inc
@@ -15,8 +15,6 @@
 SPDX_S = "${WORKDIR}/icu"
 STAGING_ICU_DIR_NATIVE = "${STAGING_DATADIR_NATIVE}/${BPN}/${PV}"
 
-CPPFLAGS_append_libc-uclibc = " -DU_TIMEZONE=0"
-
 BINCONFIG = "${bindir}/icu-config"
 
 inherit autotools pkgconfig binconfig
diff --git a/import-layers/yocto-poky/meta/recipes-support/icu/icu/CVE-2017-14952.patch b/import-layers/yocto-poky/meta/recipes-support/icu/icu/CVE-2017-14952.patch
new file mode 100644
index 0000000..f759efc
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/icu/icu/CVE-2017-14952.patch
@@ -0,0 +1,28 @@
+From fc83cd832725d3968011f118637b9f5d212e8717 Mon Sep 17 00:00:00 2001
+From: Ovidiu Panait <ovidiu.panait@windriver.com>
+Date: Fri, 10 Nov 2017 16:51:25 +0200
+Subject: [PATCH] Removed redundant UVector entry clean up function call.
+
+Upstream-Status: Backport
+CVE: CVE-2017-14952
+
+Signed-off-by: Ovidiu Panait <ovidiu.panait@windriver.com>
+---
+ i18n/zonemeta.cpp | 1 -
+ 1 file changed, 1 deletion(-)
+
+diff --git a/i18n/zonemeta.cpp b/i18n/zonemeta.cpp
+index 84a9657..e163b00 100644
+--- a/i18n/zonemeta.cpp
++++ b/i18n/zonemeta.cpp
+@@ -690,7 +690,6 @@ ZoneMeta::createMetazoneMappings(const UnicodeString &tzid) {
+                     mzMappings = new UVector(deleteOlsonToMetaMappingEntry, NULL, status);
+                     if (U_FAILURE(status)) {
+                         delete mzMappings;
+-                        deleteOlsonToMetaMappingEntry(entry);
+                         uprv_free(entry);
+                         break;
+                     }
+-- 
+2.10.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/icu/icu/icu-pkgdata-large-cmd.patch b/import-layers/yocto-poky/meta/recipes-support/icu/icu/icu-pkgdata-large-cmd.patch
index 6e40659..e758a62 100644
--- a/import-layers/yocto-poky/meta/recipes-support/icu/icu/icu-pkgdata-large-cmd.patch
+++ b/import-layers/yocto-poky/meta/recipes-support/icu/icu/icu-pkgdata-large-cmd.patch
@@ -8,14 +8,16 @@
 Upstream-Status: Pending
 
 Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
+Signed-off-by: Jackie Huang <jackie.huang@windriver.com>
 ---
- tools/pkgdata/pkgdata.cpp |    2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
+ tools/pkgdata/pkgdata.cpp | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
 
 diff --git a/tools/pkgdata/pkgdata.cpp b/tools/pkgdata/pkgdata.cpp
+index 60167dd..506dd32 100644
 --- a/tools/pkgdata/pkgdata.cpp
 +++ b/tools/pkgdata/pkgdata.cpp
-@@ -1019,7 +1019,7 @@ normal_symlink_mode:
+@@ -1084,7 +1084,7 @@ normal_symlink_mode:
  
  static int32_t pkg_installLibrary(const char *installDir, const char *targetDir, UBool noVersion) {
      int32_t result = 0;
@@ -24,6 +26,24 @@
  
      sprintf(cmd, "cd %s && %s %s %s%s%s",
              targetDir,
+@@ -1152,7 +1152,7 @@ static int32_t pkg_installLibrary(const char *installDir, const char *targetDir,
+ 
+ static int32_t pkg_installCommonMode(const char *installDir, const char *fileName) {
+     int32_t result = 0;
+-    char cmd[SMALL_BUFFER_MAX_SIZE] = "";
++    char cmd[LARGE_BUFFER_MAX_SIZE] = "";
+ 
+     if (!T_FileStream_file_exists(installDir)) {
+         UErrorCode status = U_ZERO_ERROR;
+@@ -1184,7 +1184,7 @@ static int32_t pkg_installCommonMode(const char *installDir, const char *fileNam
+ #endif
+ static int32_t pkg_installFileMode(const char *installDir, const char *srcDir, const char *fileListName) {
+     int32_t result = 0;
+-    char cmd[SMALL_BUFFER_MAX_SIZE] = "";
++    char cmd[LARGE_BUFFER_MAX_SIZE] = "";
+ 
+     if (!T_FileStream_file_exists(installDir)) {
+         UErrorCode status = U_ZERO_ERROR;
 -- 
-1.7.10.4
+1.9.1
 
diff --git a/import-layers/yocto-poky/meta/recipes-support/icu/icu_58.2.bb b/import-layers/yocto-poky/meta/recipes-support/icu/icu_58.2.bb
deleted file mode 100644
index 47684a6..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/icu/icu_58.2.bb
+++ /dev/null
@@ -1,29 +0,0 @@
-require icu.inc
-
-LIC_FILES_CHKSUM = "file://../LICENSE;md5=1b3b75c1777cd49ad5c6a24cd338cfc9"
-
-def icu_download_version(d):
-    pvsplit = d.getVar('PV').split('.')
-    return pvsplit[0] + "_" + pvsplit[1]
-
-ICU_PV = "${@icu_download_version(d)}"
-
-# http://errors.yoctoproject.org/Errors/Details/20486/
-ARM_INSTRUCTION_SET_armv4 = "arm"
-ARM_INSTRUCTION_SET_armv5 = "arm"
-
-BASE_SRC_URI = "http://download.icu-project.org/files/icu4c/${PV}/icu4c-${ICU_PV}-src.tgz"
-SRC_URI = "${BASE_SRC_URI} \
-           file://icu-pkgdata-large-cmd.patch \
-           file://fix-install-manx.patch \
-           file://0001-i18n-Drop-include-xlocale.h.patch \
-           "
-
-SRC_URI_append_class-target = "\
-           file://0001-Disable-LDFLAGSICUDT-for-Linux.patch \
-          "
-SRC_URI[md5sum] = "fac212b32b7ec7ab007a12dff1f3aea1"
-SRC_URI[sha256sum] = "2b0a4410153a9b20de0e20c7d8b66049a72aef244b53683d0d7521371683da0c"
-
-UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)/"
-UPSTREAM_CHECK_URI = "http://download.icu-project.org/files/icu4c/"
diff --git a/import-layers/yocto-poky/meta/recipes-support/icu/icu_59.1.bb b/import-layers/yocto-poky/meta/recipes-support/icu/icu_59.1.bb
new file mode 100644
index 0000000..9fb1be8
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/icu/icu_59.1.bb
@@ -0,0 +1,30 @@
+require icu.inc
+
+LIC_FILES_CHKSUM = "file://../LICENSE;md5=fe9e1f2c500466d8f18df2cd068e4b74"
+
+def icu_download_version(d):
+    pvsplit = d.getVar('PV').split('.')
+    return pvsplit[0] + "_" + pvsplit[1]
+
+ICU_PV = "${@icu_download_version(d)}"
+
+# http://errors.yoctoproject.org/Errors/Details/20486/
+ARM_INSTRUCTION_SET_armv4 = "arm"
+ARM_INSTRUCTION_SET_armv5 = "arm"
+
+BASE_SRC_URI = "http://download.icu-project.org/files/icu4c/${PV}/icu4c-${ICU_PV}-src.tgz"
+SRC_URI = "${BASE_SRC_URI} \
+           file://icu-pkgdata-large-cmd.patch \
+           file://fix-install-manx.patch \
+           file://0001-i18n-Drop-include-xlocale.h.patch \
+           file://CVE-2017-14952.patch \
+           "
+
+SRC_URI_append_class-target = "\
+           file://0001-Disable-LDFLAGSICUDT-for-Linux.patch \
+          "
+SRC_URI[md5sum] = "54923fa9fab5b2b83f235fb72523de37"
+SRC_URI[sha256sum] = "7132fdaf9379429d004005217f10e00b7d2319d0fea22bdfddef8991c45b75fe"
+
+UPSTREAM_CHECK_REGEX = "(?P<pver>\d+(\.\d+)+)/"
+UPSTREAM_CHECK_URI = "http://download.icu-project.org/files/icu4c/"
diff --git a/import-layers/yocto-poky/meta/recipes-support/iso-codes/iso-codes_3.74.bb b/import-layers/yocto-poky/meta/recipes-support/iso-codes/iso-codes_3.74.bb
deleted file mode 100644
index c436a1d..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/iso-codes/iso-codes_3.74.bb
+++ /dev/null
@@ -1,15 +0,0 @@
-SUMMARY = "ISO language, territory, currency, script codes and their translations"
-LICENSE = "LGPLv2.1"
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
-
-SRC_URI = "https://pkg-isocodes.alioth.debian.org/downloads/iso-codes-${PV}.tar.xz"
-SRC_URI[md5sum] = "d5448475d087756b78391b8c53c5b83a"
-SRC_URI[sha256sum] = "21f4f3cea8fe09f5b53784522303a0e1e7d083964ecaf1c75b1441d4d9ec6aee"
-
-# inherit gettext cannot be used, because it adds gettext-native to BASEDEPENDS which
-# are inhibited by allarch
-DEPENDS = "gettext-native"
-
-inherit allarch autotools
-
-FILES_${PN} += "${datadir}/xml/"
diff --git a/import-layers/yocto-poky/meta/recipes-support/iso-codes/iso-codes_3.75.bb b/import-layers/yocto-poky/meta/recipes-support/iso-codes/iso-codes_3.75.bb
new file mode 100644
index 0000000..4f3d53c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/iso-codes/iso-codes_3.75.bb
@@ -0,0 +1,15 @@
+SUMMARY = "ISO language, territory, currency, script codes and their translations"
+LICENSE = "LGPLv2.1"
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+
+SRC_URI = "https://pkg-isocodes.alioth.debian.org/downloads/iso-codes-${PV}.tar.xz"
+SRC_URI[md5sum] = "9ba173b69d4360003414f23837597a92"
+SRC_URI[sha256sum] = "7335e0301cd77cd4ee019bf5d3709aa79309d49dd66e85ba350caf67e00b00cd"
+
+# inherit gettext cannot be used, because it adds gettext-native to BASEDEPENDS which
+# are inhibited by allarch
+DEPENDS = "gettext-native"
+
+inherit allarch autotools
+
+FILES_${PN} += "${datadir}/xml/"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libatomic-ops/libatomic-ops_7.4.4.bb b/import-layers/yocto-poky/meta/recipes-support/libatomic-ops/libatomic-ops_7.4.4.bb
deleted file mode 100644
index 930c1b8..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libatomic-ops/libatomic-ops_7.4.4.bb
+++ /dev/null
@@ -1,32 +0,0 @@
-SUMMARY = "A library for atomic integer operations"
-HOMEPAGE = "https://github.com/ivmai/libatomic_ops/"
-SECTION = "optional"
-PROVIDES += "libatomics-ops"
-LICENSE = "GPLv2 & MIT"
-LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
-                    file://doc/LICENSING.txt;md5=e00dd5c8ac03a14c5ae5225a4525fa2d \
-		   "
-
-SRC_URI = "\
-	http://www.ivmaisoft.com/_bin/atomic_ops/libatomic_ops-${PV}.tar.gz \
-	file://0001-Add-initial-nios2-architecture-support.patch \
-	"
-
-SRC_URI[md5sum] = "426d804baae12c372967a6d183e25af2"
-SRC_URI[sha256sum] = "bf210a600dd1becbf7936dd2914cf5f5d3356046904848dcfd27d0c8b12b6f8f"
-
-S = "${WORKDIR}/libatomic_ops-${PV}"
-
-ALLOW_EMPTY_${PN} = "1"
-
-ARM_INSTRUCTION_SET = "arm"
-
-inherit autotools pkgconfig
-
-do_install_append() {
-	# those contain only docs, not necessary for now.
-	install -m 0755 -d ${D}${docdir}
-	mv ${D}${datadir}/libatomic_ops ${D}${docdir}/${BPN}
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libatomic-ops/libatomic-ops_7.6.0.bb b/import-layers/yocto-poky/meta/recipes-support/libatomic-ops/libatomic-ops_7.6.0.bb
new file mode 100644
index 0000000..4463d86
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libatomic-ops/libatomic-ops_7.6.0.bb
@@ -0,0 +1,20 @@
+SUMMARY = "A library for atomic integer operations"
+HOMEPAGE = "https://github.com/ivmai/libatomic_ops/"
+SECTION = "optional"
+PROVIDES += "libatomics-ops"
+LICENSE = "GPLv2 & MIT"
+LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
+                    file://doc/LICENSING.txt;md5=e00dd5c8ac03a14c5ae5225a4525fa2d \
+		   "
+PV .= "+git${SRCPV}"
+
+SRCREV = "73c60c5ef1ed370111549ee5aab6d4020ba70ed4"
+SRC_URI = "git://github.com/ivmai/libatomic_ops"
+
+S = "${WORKDIR}/git"
+
+ALLOW_EMPTY_${PN} = "1"
+
+inherit autotools pkgconfig
+
+BBCLASSEXTEND = "native nativesdk"
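A quick, hedged illustration of what the git-based versioning above resolves to at build time; the AUTOINC counter and exact rendering depend on the build history, so treat the string below as an example only:

    # Illustrative only: with PV .= "+git${SRCPV}" and the SRCREV above, the
    # package version ends up looking roughly like
    #   7.6.0+gitAUTOINC+73c60c5ef1-r0
    # i.e. the base PV plus an auto-increment counter and the short revision.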
diff --git a/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd/0002-Remove-funopen.patch b/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd/0002-Remove-funopen.patch
index 83ce7c8..60da15e 100644
--- a/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd/0002-Remove-funopen.patch
+++ b/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd/0002-Remove-funopen.patch
@@ -15,17 +15,17 @@
  3 files changed, 3 deletions(-)
 
 diff --git a/man/Makefile.am b/man/Makefile.am
-index e4d6e4a..c701d94 100644
+index 28192c0..a22787d 100644
 --- a/man/Makefile.am
 +++ b/man/Makefile.am
-@@ -29,7 +29,6 @@ dist_man_MANS = \
- 	flopen.3 \
- 	fmtcheck.3 \
- 	fparseln.3 \
+@@ -168,7 +168,6 @@ dist_man_MANS = \
+ 	fmtcheck.3bsd \
+ 	fparseln.3bsd \
+ 	fpurge.3bsd \
 -	funopen.3bsd \
- 	getbsize.3 \
- 	getmode.3 \
- 	getpeereid.3 \
+ 	getbsize.3bsd \
+ 	getmode.3bsd \
+ 	getpeereid.3bsd \
 diff --git a/src/Makefile.am b/src/Makefile.am
 index ad83dbf..13225a3 100644
 --- a/src/Makefile.am
@@ -39,7 +39,7 @@
  	getpeereid.c \
  	hash/md5.c \
 diff --git a/test/Makefile.am b/test/Makefile.am
-index a75c8ff..e3a1d41 100644
+index d86539a..b32ed2e 100644
 --- a/test/Makefile.am
 +++ b/test/Makefile.am
 @@ -36,7 +36,6 @@ check_PROGRAMS = \
@@ -50,6 +50,3 @@
  	fparseln \
  	fpurge \
  	md5 \
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd/0003-Fix-build-breaks-due-to-missing-a.out.h.patch b/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd/0003-Fix-build-breaks-due-to-missing-a.out.h.patch
deleted file mode 100644
index 176d940..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd/0003-Fix-build-breaks-due-to-missing-a.out.h.patch
+++ /dev/null
@@ -1,130 +0,0 @@
-From a1b93c25311834f2f411e9bfe2e616899ba2122d Mon Sep 17 00:00:00 2001
-From: Khem Raj <raj.khem@gmail.com>
-Date: Sun, 6 Nov 2016 10:23:55 -0800
-Subject: [PATCH 3/3] Fix build breaks due to missing a.out.h
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
----
-Upstream-Status: Pending
-
- include/bsd/nlist.h                     |  1 -
- include/bsd/nlist.h => src/local-aout.h | 47 ++++++++++++++++++++++-----------
- src/nlist.c                             |  9 +++++++
- 3 files changed, 41 insertions(+), 16 deletions(-)
- copy include/bsd/nlist.h => src/local-aout.h (63%)
-
-diff --git a/include/bsd/nlist.h b/include/bsd/nlist.h
-index 0389ab7..9c7e3d8 100644
---- a/include/bsd/nlist.h
-+++ b/include/bsd/nlist.h
-@@ -28,7 +28,6 @@
- #define LIBBSD_NLIST_H
- 
- #include <sys/cdefs.h>
--#include <a.out.h>
- 
- /* __BEGIN_DECLS */
- #ifdef __cplusplus
-diff --git a/include/bsd/nlist.h b/src/local-aout.h
-similarity index 63%
-copy from include/bsd/nlist.h
-copy to src/local-aout.h
-index 0389ab7..2adb93e 100644
---- a/include/bsd/nlist.h
-+++ b/src/local-aout.h
-@@ -1,5 +1,5 @@
- /*
-- * Copyright © 2009 Guillem Jover <guillem@hadrons.org>
-+ * Copyright © 2016 Khem Raj <raj.khem@gmail.com>
-  *
-  * Redistribution and use in source and binary forms, with or without
-  * modification, are permitted provided that the following conditions
-@@ -24,20 +24,37 @@
-  * ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-  */
- 
--#ifndef LIBBSD_NLIST_H
--#define LIBBSD_NLIST_H
-+#ifndef LIBBSD_LOCAL_AOUT_H
-+#define LIBBSD_LOCAL_AOUT_H
- 
--#include <sys/cdefs.h>
--#include <a.out.h>
-+#define N_UNDF  0
-+#define N_ABS   2
-+#define N_TEXT  4
-+#define N_DATA  6
-+#define N_BSS   8
-+#define N_FN    15
-+#define N_EXT   1
-+#define N_TYPE  036
-+#define N_STAB  0340
-+#define N_INDR  0xa
-+#define N_SETA  0x14    /* Absolute set element symbol.  */
-+#define N_SETT  0x16    /* Text set element symbol.  */
-+#define N_SETD  0x18    /* Data set element symbol.  */
-+#define N_SETB  0x1A    /* Bss set element symbol.  */
-+#define N_SETV  0x1C    /* Pointer to set vector in data area.  */
- 
--/* __BEGIN_DECLS */
--#ifdef __cplusplus
--extern "C" {
--#endif
--extern int nlist(const char *filename, struct nlist *list);
--#ifdef __cplusplus
--}
--#endif
--/* __END_DECLS */
-+struct nlist
-+{
-+  union
-+    {
-+      char *n_name;
-+      struct nlist *n_next;
-+      long n_strx;
-+    } n_un;
-+  unsigned char n_type;
-+  char n_other;
-+  short n_desc;
-+  unsigned long n_value;
-+};
- 
--#endif
-+#endif /* LIBBSD_LOCAL_AOUT_H */
-diff --git a/src/nlist.c b/src/nlist.c
-index 0cffe55..625d310 100644
---- a/src/nlist.c
-+++ b/src/nlist.c
-@@ -40,7 +40,11 @@ static char sccsid[] = "@(#)nlist.c	8.1 (Berkeley) 6/4/93";
- 
- #include <errno.h>
- #include <fcntl.h>
-+#ifdef __GLIBC__
- #include <a.out.h>
-+#else
-+#define __NO_A_OUT_SUPPORT
-+#endif
- #include <stdio.h>
- #include <string.h>
- #include <unistd.h>
-@@ -48,12 +52,17 @@ static char sccsid[] = "@(#)nlist.c	8.1 (Berkeley) 6/4/93";
- #if !defined(__NO_A_OUT_SUPPORT)
- #define _NLIST_DO_AOUT
- #endif
-+
- #define _NLIST_DO_ELF
- 
- #ifdef _NLIST_DO_ELF
- #include "local-elf.h"
- #endif
- 
-+#ifdef _NLIST_DO_ELF
-+#include "local-aout.h"
-+#endif
-+
- #define SIZE_T_MAX 0xffffffffU
- 
- #ifdef _NLIST_DO_AOUT
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd_0.8.3.bb b/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd_0.8.3.bb
deleted file mode 100644
index e85ee21..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd_0.8.3.bb
+++ /dev/null
@@ -1,51 +0,0 @@
-# Copyright (C) 2013 Khem Raj <raj.khem@gmail.com>
-# Released under the MIT license (see COPYING.MIT for the terms)
-
-SUMMARY = "Library of utility functions from BSD systems"
-DESCRIPTION = "This library provides useful functions commonly found on BSD systems, \
-               and lacking on others like GNU systems, thus making it easier to port \
-               projects with strong BSD origins, without needing to embed the same \
-               code over and over again on each project."
-
-HOMEPAGE = "http://libbsd.freedesktop.org/wiki/"
-# There seems to be more licenses used in the code, I don't think we want to list them all here, complete list:
-# OE @ ~/projects/libbsd $ grep ^License: COPYING  | sort
-# License: BSD-2-clause
-# License: BSD-2-clause
-# License: BSD-2-clause-NetBSD
-# License: BSD-2-clause-author
-# License: BSD-2-clause-verbatim
-# License: BSD-3-clause
-# License: BSD-3-clause
-# License: BSD-3-clause
-# License: BSD-3-clause-Peter-Wemm
-# License: BSD-3-clause-Regents
-# License: BSD-4-clause-Christopher-G-Demetriou
-# License: BSD-4-clause-Niels-Provos
-# License: BSD-5-clause-Peter-Wemm
-# License: Beerware
-# License: Expat
-# License: ISC
-# License: ISC-Original
-# License: public-domain
-# License: public-domain-Colin-Plumb
-LICENSE = "BSD-4-Clause & ISC & PD"
-LIC_FILES_CHKSUM = "file://COPYING;md5=0b9c89d861915b86655b96e5e32fa2aa"
-SECTION = "libs"
-
-SRC_URI = " \
-    http://libbsd.freedesktop.org/releases/${BPN}-${PV}.tar.xz \
-    file://0001-src-libbsd-overlay.pc.in-Set-Cflags-to-use-I-instead.patch \
-"
-SRC_URI_append_libc-musl  = " \
-    file://0001-Replace-__BEGIN_DECLS-and-__END_DECLS.patch \
-    file://0002-Remove-funopen.patch \
-    file://0003-Fix-build-breaks-due-to-missing-a.out.h.patch \
-"
-
-SRC_URI[md5sum] = "e935c1bb6cc98a4a43cb1da22795493a"
-SRC_URI[sha256sum] = "934b634f4dfd865b6482650b8f522c70ae65c463529de8be907b53c89c3a34a8"
-
-inherit autotools pkgconfig
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd_0.8.6.bb b/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd_0.8.6.bb
new file mode 100644
index 0000000..182543f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libbsd/libbsd_0.8.6.bb
@@ -0,0 +1,50 @@
+# Copyright (C) 2013 Khem Raj <raj.khem@gmail.com>
+# Released under the MIT license (see COPYING.MIT for the terms)
+
+SUMMARY = "Library of utility functions from BSD systems"
+DESCRIPTION = "This library provides useful functions commonly found on BSD systems, \
+               and lacking on others like GNU systems, thus making it easier to port \
+               projects with strong BSD origins, without needing to embed the same \
+               code over and over again on each project."
+
+HOMEPAGE = "http://libbsd.freedesktop.org/wiki/"
+# There seem to be more licenses used in the code; we don't want to list them all here. Complete list:
+# OE @ ~/projects/libbsd $ grep ^License: COPYING  | sort
+# License: BSD-2-clause
+# License: BSD-2-clause
+# License: BSD-2-clause-NetBSD
+# License: BSD-2-clause-author
+# License: BSD-2-clause-verbatim
+# License: BSD-3-clause
+# License: BSD-3-clause
+# License: BSD-3-clause
+# License: BSD-3-clause-Peter-Wemm
+# License: BSD-3-clause-Regents
+# License: BSD-4-clause-Christopher-G-Demetriou
+# License: BSD-4-clause-Niels-Provos
+# License: BSD-5-clause-Peter-Wemm
+# License: Beerware
+# License: Expat
+# License: ISC
+# License: ISC-Original
+# License: public-domain
+# License: public-domain-Colin-Plumb
+LICENSE = "BSD-4-Clause & ISC & PD"
+LIC_FILES_CHKSUM = "file://COPYING;md5=08fc4e66be4526715dab09c5fba5e9e8"
+SECTION = "libs"
+
+SRC_URI = " \
+    http://libbsd.freedesktop.org/releases/${BPN}-${PV}.tar.xz \
+    file://0001-src-libbsd-overlay.pc.in-Set-Cflags-to-use-I-instead.patch \
+"
+SRC_URI_append_libc-musl  = " \
+    file://0001-Replace-__BEGIN_DECLS-and-__END_DECLS.patch \
+    file://0002-Remove-funopen.patch \
+"
+
+SRC_URI[md5sum] = "4ab7bec639af17d0aacb50222b479110"
+SRC_URI[sha256sum] = "467fbf9df1f49af11f7f686691057c8c0a7613ae5a870577bef9155de39f9687"
+
+inherit autotools pkgconfig
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libcap/files/0001-Fix-build-with-gperf-3.1.patch b/import-layers/yocto-poky/meta/recipes-support/libcap/files/0001-Fix-build-with-gperf-3.1.patch
new file mode 100644
index 0000000..110ef90
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libcap/files/0001-Fix-build-with-gperf-3.1.patch
@@ -0,0 +1,41 @@
+From a05eba68c42222f02465d7ba376015926433c531 Mon Sep 17 00:00:00 2001
+From: Alexander Kanavin <alex.kanavin@gmail.com>
+Date: Wed, 26 Jul 2017 13:37:49 +0300
+Subject: [PATCH] Fix build with gperf 3.1
+
+The generated gperf file refers to size_t, which needs to be
+provided by including stddef.h. Also, adjust the makefile
+to match the declaration in the gperf file.
+
+Upstream-Status: Pending
+Signed-off-by: Alexander Kanavin <alex.kanavin@gmail.com>
+
+---
+ libcap/Makefile | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/libcap/Makefile b/libcap/Makefile
+index d189777..1a57206 100644
+--- a/libcap/Makefile
++++ b/libcap/Makefile
+@@ -22,7 +22,7 @@ all: $(MINLIBNAME) $(STALIBNAME) libcap.pc
+ 
+ ifeq ($(BUILD_GPERF),yes)
+ USE_GPERF_OUTPUT = $(GPERF_OUTPUT)
+-INCLUDE_GPERF_OUTPUT = -include $(GPERF_OUTPUT)
++INCLUDE_GPERF_OUTPUT = -include stddef.h -include $(GPERF_OUTPUT)
+ endif
+ 
+ libcap.pc: libcap.pc.in
+@@ -41,7 +41,7 @@ cap_names.h: _makenames
+ 	./_makenames > cap_names.h
+ 
+ $(GPERF_OUTPUT): cap_names.list.h
+-	perl -e 'print "struct __cap_token_s { const char *name; int index; };\n%{\nconst struct __cap_token_s *__cap_lookup_name(const char *, unsigned int);\n%}\n%%\n"; while ($$l = <>) { $$l =~ s/[\{\"]//g; $$l =~ s/\}.*// ; print $$l; }' < $< | gperf --ignore-case --language=ANSI-C --readonly --null-strings --global-table --hash-function-name=__cap_hash_name --lookup-function-name="__cap_lookup_name" -c -t -m20 $(INDENT) > $@
++	perl -e 'print "struct __cap_token_s { const char *name; int index; };\n%{\nconst struct __cap_token_s *__cap_lookup_name(const char *, register size_t);\n%}\n%%\n"; while ($$l = <>) { $$l =~ s/[\{\"]//g; $$l =~ s/\}.*// ; print $$l; }' < $< | gperf --ignore-case --language=ANSI-C --readonly --null-strings --global-table --hash-function-name=__cap_hash_name --lookup-function-name="__cap_lookup_name" -c -t -m20 $(INDENT) > $@
+ 
+ cap_names.list.h: Makefile $(KERNEL_HEADERS)/linux/capability.h
+ 	@echo "=> making $@ from $(KERNEL_HEADERS)/linux/capability.h"
+-- 
+2.13.2
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/libcap/libcap_2.25.bb b/import-layers/yocto-poky/meta/recipes-support/libcap/libcap_2.25.bb
index 0587e77..d619a2e 100644
--- a/import-layers/yocto-poky/meta/recipes-support/libcap/libcap_2.25.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/libcap/libcap_2.25.bb
@@ -9,7 +9,8 @@
 
 SRC_URI = "${KERNELORG_MIRROR}/linux/libs/security/linux-privs/${BPN}2/${BPN}-${PV}.tar.xz \
            file://0001-ensure-the-XATTR_NAME_CAPS-is-defined-when-it-is-use.patch \
-"
+           file://0001-Fix-build-with-gperf-3.1.patch \
+           "
 SRC_URI[md5sum] = "6666b839e5d46c2ad33fc8aa2ceb5f77"
 SRC_URI[sha256sum] = "693c8ac51e983ee678205571ef272439d83afe62dd8e424ea14ad9790bc35162"
 
@@ -22,9 +23,6 @@
 	# on what should be replaced with ?=
 	sed -e 's,:=,?=,g' -i Make.Rules
 	sed -e 's,^BUILD_CFLAGS ?= $(.*CFLAGS),BUILD_CFLAGS := $(BUILD_CFLAGS),' -i Make.Rules
-
-	# disable gperf detection
-	sed -e '/shell gperf/cifeq (,yes)' -i libcap/Makefile
 }
 
 PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'pam', d)}"
@@ -37,6 +35,7 @@
   lib=${@os.path.basename('${libdir}')} \
   RAISE_SETFCAP=no \
   DYNAMIC=yes \
+  BUILD_GPERF=yes \
 "
 
 EXTRA_OEMAKE_append_class-target = " SYSTEM_HEADERS=${STAGING_INCDIR}"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libcroco/libcroco_0.6.11.bb b/import-layers/yocto-poky/meta/recipes-support/libcroco/libcroco_0.6.11.bb
deleted file mode 100644
index 9df7923..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libcroco/libcroco_0.6.11.bb
+++ /dev/null
@@ -1,20 +0,0 @@
-SUMMARY = "Cascading Style Sheet (CSS) parsing and manipulation toolkit"
-HOMEPAGE = "http://www.gnome.org/"
-BUGTRACKER = "https://bugzilla.gnome.org/"
-
-LICENSE = "LGPLv2 & LGPLv2.1"
-LIC_FILES_CHKSUM = "file://COPYING;md5=55ca817ccb7d5b5b66355690e9abc605 \
-                    file://src/cr-rgb.c;endline=22;md5=31d5f0944d556c8589d04ea6055fcc66 \
-                    file://tests/cr-test-utils.c;endline=21;md5=2382c27934cae1d3792fcb17a6142c4e"
-
-SECTION = "x11/utils"
-DEPENDS = "glib-2.0 libxml2 zlib"
-BBCLASSEXTEND = "native"
-EXTRA_OECONF += "--enable-Bsymbolic=auto"
-
-BINCONFIG = "${bindir}/croco-0.6-config"
-
-inherit autotools pkgconfig gnomebase gtk-doc binconfig-disabled
-
-SRC_URI[archive.md5sum] = "dabc1911dfbfa85f8e6859ca47863168"
-SRC_URI[archive.sha256sum] = "132b528a948586b0dfa05d7e9e059901bca5a3be675b6071a90a90b81ae5a056"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libcroco/libcroco_0.6.12.bb b/import-layers/yocto-poky/meta/recipes-support/libcroco/libcroco_0.6.12.bb
new file mode 100644
index 0000000..b0af759
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libcroco/libcroco_0.6.12.bb
@@ -0,0 +1,20 @@
+SUMMARY = "Cascading Style Sheet (CSS) parsing and manipulation toolkit"
+HOMEPAGE = "http://www.gnome.org/"
+BUGTRACKER = "https://bugzilla.gnome.org/"
+
+LICENSE = "LGPLv2 & LGPLv2.1"
+LIC_FILES_CHKSUM = "file://COPYING;md5=55ca817ccb7d5b5b66355690e9abc605 \
+                    file://src/cr-rgb.c;endline=22;md5=31d5f0944d556c8589d04ea6055fcc66 \
+                    file://tests/cr-test-utils.c;endline=21;md5=2382c27934cae1d3792fcb17a6142c4e"
+
+SECTION = "x11/utils"
+DEPENDS = "glib-2.0 libxml2 zlib"
+BBCLASSEXTEND = "native"
+EXTRA_OECONF += "--enable-Bsymbolic=auto"
+
+BINCONFIG = "${bindir}/croco-0.6-config"
+
+inherit autotools pkgconfig gnomebase gtk-doc binconfig-disabled
+
+SRC_URI[archive.md5sum] = "bc0984fce078ba2ce29f9500c6b9ddce"
+SRC_URI[archive.sha256sum] = "ddc4b5546c9fb4280a5017e2707fbd4839034ed1aba5b7d4372212f34f84f860"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libevdev/libevdev_1.5.6.bb b/import-layers/yocto-poky/meta/recipes-support/libevdev/libevdev_1.5.6.bb
deleted file mode 100644
index 2f84554..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libevdev/libevdev_1.5.6.bb
+++ /dev/null
@@ -1,14 +0,0 @@
-SUMMARY = "Wrapper library for evdev devices"
-HOMEPAGE = "http://www.freedesktop.org/wiki/Software/libevdev/"
-SECTION = "libs"
-
-LICENSE = "MIT-X"
-LIC_FILES_CHKSUM = "file://COPYING;md5=75aae0d38feea6fda97ca381cb9132eb \
-                    file://libevdev/libevdev.h;endline=21;md5=7ff4f0b5113252c2f1a828e0bbad98d1"
-
-SRC_URI = "http://www.freedesktop.org/software/libevdev/${BP}.tar.xz"
-
-SRC_URI[md5sum] = "d4ce9f061f8f954bea7adba0cb768a53"
-SRC_URI[sha256sum] = "ecec7e9d66b1d3692f10b3b20aa97fb25e874a784c5552a7b1698091fef5a688"
-
-inherit autotools pkgconfig
diff --git a/import-layers/yocto-poky/meta/recipes-support/libevdev/libevdev_1.5.7.bb b/import-layers/yocto-poky/meta/recipes-support/libevdev/libevdev_1.5.7.bb
new file mode 100644
index 0000000..f740da2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libevdev/libevdev_1.5.7.bb
@@ -0,0 +1,14 @@
+SUMMARY = "Wrapper library for evdev devices"
+HOMEPAGE = "http://www.freedesktop.org/wiki/Software/libevdev/"
+SECTION = "libs"
+
+LICENSE = "MIT-X"
+LIC_FILES_CHKSUM = "file://COPYING;md5=75aae0d38feea6fda97ca381cb9132eb \
+                    file://libevdev/libevdev.h;endline=21;md5=7ff4f0b5113252c2f1a828e0bbad98d1"
+
+SRC_URI = "http://www.freedesktop.org/software/libevdev/${BP}.tar.xz"
+
+SRC_URI[md5sum] = "4f1cfaee8d75ea3fbbfeb99a98730952"
+SRC_URI[sha256sum] = "a1e59e37a2f0d397ffd7e83b73af0e638db83b8dd08902ef0f651a21cc1dd422"
+
+inherit autotools pkgconfig
diff --git a/import-layers/yocto-poky/meta/recipes-support/libevent/libevent/Makefile-missing-test-dir.patch b/import-layers/yocto-poky/meta/recipes-support/libevent/libevent/Makefile-missing-test-dir.patch
new file mode 100644
index 0000000..8880bd0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libevent/libevent/Makefile-missing-test-dir.patch
@@ -0,0 +1,27 @@
+Fix missing test directory creation.
+
+GCC used in OE-core has "dependency tracking" disabled and
+libevent has a problem with this.
+Because the Makefile.am/in files in the test/sample/include
+directories were removed, the output directories are not created in
+the configuration step. The compilation step then fails when
+trying to write to a non-existent directory.
+
+Upstream-Status: Inappropriate [Other]
+Workaround specific to our build system.
+
+Signed-off-by: Andrej Valek <andrej.valek@siemens.com>
+Signed-off-by: Pascal Bach <pascal.bach@siemens.com>
+
+diff --git a/libevent-2.1.8-stable/test/include.am b/libevent-2.1.8-stable/test/include.am
+index eea249f..d323dff 100644
+--- a/test/include.am
++++ b/test/include.am
+@@ -161,6 +161,7 @@ test_bench_httpclient_LDADD = $(LIBEVENT_GC_SECTIONS) libevent_core.la
+ test/regress.gen.c test/regress.gen.h: test/rpcgen-attempted
+ 
+ test/rpcgen-attempted: test/regress.rpc event_rpcgen.py test/rpcgen_wrapper.sh
++	@$(MKDIR_P) test
+ 	$(AM_V_GEN)date -u > $@
+ 	$(AM_V_at)if $(srcdir)/test/rpcgen_wrapper.sh $(srcdir)/test; then \
+ 	   true; \
diff --git a/import-layers/yocto-poky/meta/recipes-support/libevent/libevent_2.0.22.bb b/import-layers/yocto-poky/meta/recipes-support/libevent/libevent_2.0.22.bb
deleted file mode 100644
index df8a31c..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libevent/libevent_2.0.22.bb
+++ /dev/null
@@ -1,41 +0,0 @@
-SUMMARY = "An asynchronous event notification library"
-HOMEPAGE = "http://libevent.org/"
-BUGTRACKER = "http://sourceforge.net/tracker/?group_id=50884&atid=461322"
-SECTION = "libs"
-
-LICENSE = "BSD"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=45c5316ff684bcfe2f9f86d8b1279559"
-
-SRC_URI = " \
-    ${SOURCEFORGE_MIRROR}/levent/${BP}-stable.tar.gz \
-    file://run-ptest \
-"
-
-SRC_URI[md5sum] = "c4c56f986aa985677ca1db89630a2e11"
-SRC_URI[sha256sum] = "71c2c49f0adadacfdbe6332a372c38cf9c8b7895bb73dabeaa53cdcc1d4e1fa3"
-
-UPSTREAM_CHECK_URI = "http://libevent.org/"
-
-S = "${WORKDIR}/${BPN}-${PV}-stable"
-
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[openssl] = "--enable-openssl,--disable-openssl,openssl"
-
-inherit autotools
-
-# Needed for Debian packaging
-LEAD_SONAME = "libevent-2.0.so"
-
-inherit ptest
-
-DEPENDS = "zlib"
-
-BBCLASSEXTEND = "native nativesdk"
-
-do_install_ptest() {
-	install -d ${D}${PTEST_PATH}/test
-	for file in ${B}/test/.libs/regress ${B}/test/.libs/test*
-	do
-		install -m 0755 $file ${D}${PTEST_PATH}/test
-	done
-}
diff --git a/import-layers/yocto-poky/meta/recipes-support/libevent/libevent_2.1.8.bb b/import-layers/yocto-poky/meta/recipes-support/libevent/libevent_2.1.8.bb
new file mode 100644
index 0000000..1270d62
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libevent/libevent_2.1.8.bb
@@ -0,0 +1,42 @@
+SUMMARY = "An asynchronous event notification library"
+HOMEPAGE = "http://libevent.org/"
+BUGTRACKER = "https://github.com/libevent/libevent/issues"
+SECTION = "libs"
+
+LICENSE = "BSD & MIT"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=17f20574c0b154d12236d5fbe964f549"
+
+SRC_URI = " \
+    https://github.com/libevent/libevent/releases/download/release-${PV}-stable/${BP}-stable.tar.gz \
+    file://Makefile-missing-test-dir.patch \
+    file://run-ptest \
+"
+
+SRC_URI[md5sum] = "f3eeaed018542963b7d2416ef1135ecc"
+SRC_URI[sha256sum] = "965cc5a8bb46ce4199a47e9b2c9e1cae3b137e8356ffdad6d94d3b9069b71dc2"
+
+UPSTREAM_CHECK_URI = "http://libevent.org/"
+
+S = "${WORKDIR}/${BPN}-${PV}-stable"
+
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[openssl] = "--enable-openssl,--disable-openssl,openssl"
+
+inherit autotools
+
+# Needed for Debian packaging
+LEAD_SONAME = "libevent-2.1.so"
+
+inherit ptest
+
+DEPENDS = "zlib"
+
+BBCLASSEXTEND = "native nativesdk"
+
+do_install_ptest() {
+	install -d ${D}${PTEST_PATH}/test
+	for file in ${B}/test/.libs/regress ${B}/test/.libs/test*
+	do
+		install -m 0755 $file ${D}${PTEST_PATH}/test
+	done
+}
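For reference, a hedged example of how the optional OpenSSL support declared above could be switched on from a build's local.conf; the snippet is an assumption for illustration and uses the rocko-era underscore override syntax seen elsewhere in this tree:

    # local.conf -- illustrative, not part of this commit
    # Enable the PACKAGECONFIG[openssl] knob for libevent only.
    PACKAGECONFIG_append_pn-libevent = " openssl"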
diff --git a/import-layers/yocto-poky/meta/recipes-support/libffi/libffi/0001-libffi-Support-musl-x32-build.patch b/import-layers/yocto-poky/meta/recipes-support/libffi/libffi/0001-libffi-Support-musl-x32-build.patch
new file mode 100644
index 0000000..6b167c8
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libffi/libffi/0001-libffi-Support-musl-x32-build.patch
@@ -0,0 +1,30 @@
+From 69c3906c85c791716bf650aa36d9361d22acf3fb Mon Sep 17 00:00:00 2001
+From: sweeaun <swee.aun.khor@intel.com>
+Date: Thu, 6 Jul 2017 16:32:46 -0700
+Subject: [PATCH] libffi: Support musl x32 build
+
+Support libffi build with target musl-x32.
+
+Upstream-Status: Pending
+
+Signed-off-by: sweeaun <swee.aun.khor@intel.com>
+---
+ configure.ac | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/configure.ac b/configure.ac
+index a7bf5ee..8ebe99c 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -177,7 +177,7 @@ case "$host" in
+ 	TARGETDIR=x86
+ 	if test $ac_cv_sizeof_size_t = 4; then
+ 	  case "$host" in
+-	    *-gnux32)
++	    *-gnux32 | *-muslx32)
+ 	      TARGET=X86_64
+ 	      ;;
+ 	    *)
+-- 
+2.7.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/libffi/libffi_3.2.1.bb b/import-layers/yocto-poky/meta/recipes-support/libffi/libffi_3.2.1.bb
index 43eee8e..a0b1fcd 100644
--- a/import-layers/yocto-poky/meta/recipes-support/libffi/libffi_3.2.1.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/libffi/libffi_3.2.1.bb
@@ -1,4 +1,5 @@
 SUMMARY = "A portable foreign function interface library"
+HOMEPAGE = "http://sourceware.org/libffi/"
 DESCRIPTION = "The `libffi' library provides a portable, high level programming interface to various calling \
 conventions.  This allows a programmer to call any function specified by a call interface description at run \
 time. FFI stands for Foreign Function Interface.  A foreign function interface is the popular name for the \
@@ -13,13 +14,14 @@
            file://not-win32.patch \
 	   file://0001-mips-Use-compiler-internal-define-for-linux.patch \
            file://0001-mips-fix-MIPS-softfloat-build-issue.patch \
+           file://0001-libffi-Support-musl-x32-build.patch \
 	   "
 
 SRC_URI[md5sum] = "83b89587607e3eb65c70d361f13bab43"
 SRC_URI[sha256sum] = "d06ebb8e1d9a22d19e38d63fdb83954253f39bedc5d46232a05645685722ca37"
 
 EXTRA_OECONF += "--disable-builddir"
-
+EXTRA_OEMAKE_class-target = "LIBTOOLFLAGS='--tag=CC'"
 inherit autotools texinfo
 
 FILES_${PN}-dev += "${libdir}/libffi-${PV}"
@@ -29,3 +31,4 @@
 MIPS_INSTRUCTION_SET = "mips"
 
 BBCLASSEXTEND = "native nativesdk"
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/libfm/libfm_1.2.5.bb b/import-layers/yocto-poky/meta/recipes-support/libfm/libfm_1.2.5.bb
index aca59e1..1ddddfd 100644
--- a/import-layers/yocto-poky/meta/recipes-support/libfm/libfm_1.2.5.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/libfm/libfm_1.2.5.bb
@@ -32,6 +32,7 @@
     rm -f ${D}${includedir}/libfm-1.0/fm-xml-file.h
     rm -f ${D}${includedir}/libfm-1.0/fm-version.h
     rm -f ${D}${includedir}/libfm-1.0/fm-extra.h
+    rm -f ${D}${includedir}/libfm
     rm -f ${D}${libdir}/pkgconfig/libfm-extra.pc
     rm -f ${D}${libdir}/libfm-extra.so*
     rm -f ${D}${libdir}/libfm-extra.a
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0001-Add-and-use-pkg-config-for-libgcrypt-instead-of-conf.patch b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0001-Add-and-use-pkg-config-for-libgcrypt-instead-of-conf.patch
new file mode 100644
index 0000000..d41c3de
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0001-Add-and-use-pkg-config-for-libgcrypt-instead-of-conf.patch
@@ -0,0 +1,183 @@
+From 72b9e9040d58c15f0302bd8abda28179f04e1c5f Mon Sep 17 00:00:00 2001
+From: Richard Purdie <richard.purdie@linuxfoundation.org>
+Date: Wed, 16 Aug 2017 10:43:18 +0800
+Subject: [PATCH 1/4] Add and use pkg-config for libgcrypt instead of -config
+ scripts.
+
+Upstream-Status: Denied [upstream have indicated they don't want a
+pkg-config dependency]
+
+RP 2014/5/22
+
+Rebase to 1.8.0
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ configure.ac        |  1 +
+ src/libgcrypt.m4    | 71 +++--------------------------------------------------
+ src/libgcrypt.pc.in | 33 +++++++++++++++++++++++++
+ 3 files changed, 38 insertions(+), 67 deletions(-)
+ create mode 100644 src/libgcrypt.pc.in
+
+diff --git a/configure.ac b/configure.ac
+index bbe8104..3d2de73 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -2607,6 +2607,7 @@ random/Makefile
+ doc/Makefile
+ src/Makefile
+ src/gcrypt.h
++src/libgcrypt.pc
+ src/libgcrypt-config
+ src/versioninfo.rc
+ tests/Makefile
+diff --git a/src/libgcrypt.m4 b/src/libgcrypt.m4
+index c67cfec..4ea5f2c 100644
+--- a/src/libgcrypt.m4
++++ b/src/libgcrypt.m4
+@@ -29,30 +29,6 @@ dnl is added to the gpg_config_script_warn variable.
+ dnl
+ AC_DEFUN([AM_PATH_LIBGCRYPT],
+ [ AC_REQUIRE([AC_CANONICAL_HOST])
+-  AC_ARG_WITH(libgcrypt-prefix,
+-            AC_HELP_STRING([--with-libgcrypt-prefix=PFX],
+-                           [prefix where LIBGCRYPT is installed (optional)]),
+-     libgcrypt_config_prefix="$withval", libgcrypt_config_prefix="")
+-  if test x"${LIBGCRYPT_CONFIG}" = x ; then
+-     if test x"${libgcrypt_config_prefix}" != x ; then
+-        LIBGCRYPT_CONFIG="${libgcrypt_config_prefix}/bin/libgcrypt-config"
+-     else
+-       case "${SYSROOT}" in
+-         /*)
+-           if test -x "${SYSROOT}/bin/libgcrypt-config" ; then
+-             LIBGCRYPT_CONFIG="${SYSROOT}/bin/libgcrypt-config"
+-           fi
+-           ;;
+-         '')
+-           ;;
+-          *)
+-           AC_MSG_WARN([Ignoring \$SYSROOT as it is not an absolute path.])
+-           ;;
+-       esac
+-     fi
+-  fi
+-
+-  AC_PATH_PROG(LIBGCRYPT_CONFIG, libgcrypt-config, no)
+   tmp=ifelse([$1], ,1:1.2.0,$1)
+   if echo "$tmp" | grep ':' >/dev/null 2>/dev/null ; then
+      req_libgcrypt_api=`echo "$tmp"     | sed 's/\(.*\):\(.*\)/\1/'`
+@@ -62,48 +38,13 @@ AC_DEFUN([AM_PATH_LIBGCRYPT],
+      min_libgcrypt_version="$tmp"
+   fi
+ 
+-  AC_MSG_CHECKING(for LIBGCRYPT - version >= $min_libgcrypt_version)
+-  ok=no
+-  if test "$LIBGCRYPT_CONFIG" != "no" ; then
+-    req_major=`echo $min_libgcrypt_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\1/'`
+-    req_minor=`echo $min_libgcrypt_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\2/'`
+-    req_micro=`echo $min_libgcrypt_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\3/'`
+-    libgcrypt_config_version=`$LIBGCRYPT_CONFIG --version`
+-    major=`echo $libgcrypt_config_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\).*/\1/'`
+-    minor=`echo $libgcrypt_config_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\).*/\2/'`
+-    micro=`echo $libgcrypt_config_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\).*/\3/'`
+-    if test "$major" -gt "$req_major"; then
+-        ok=yes
+-    else
+-        if test "$major" -eq "$req_major"; then
+-            if test "$minor" -gt "$req_minor"; then
+-               ok=yes
+-            else
+-               if test "$minor" -eq "$req_minor"; then
+-                   if test "$micro" -ge "$req_micro"; then
+-                     ok=yes
+-                   fi
+-               fi
+-            fi
+-        fi
+-    fi
+-  fi
+-  if test $ok = yes; then
+-    AC_MSG_RESULT([yes ($libgcrypt_config_version)])
+-  else
+-    AC_MSG_RESULT(no)
+-  fi
++  PKG_CHECK_MODULES(LIBGCRYPT, [libgcrypt >= $min_libgcrypt_version], [ok=yes], [ok=no])
++
+   if test $ok = yes; then
+      # If we have a recent libgcrypt, we should also check that the
+      # API is compatible
+      if test "$req_libgcrypt_api" -gt 0 ; then
+-        tmp=`$LIBGCRYPT_CONFIG --api-version 2>/dev/null || echo 0`
++        tmp=`$PKG_CONFIG --variable=api_version libgcrypt`
+         if test "$tmp" -gt 0 ; then
+            AC_MSG_CHECKING([LIBGCRYPT API version])
+            if test "$req_libgcrypt_api" -eq "$tmp" ; then
+@@ -116,10 +57,8 @@ AC_DEFUN([AM_PATH_LIBGCRYPT],
+      fi
+   fi
+   if test $ok = yes; then
+-    LIBGCRYPT_CFLAGS=`$LIBGCRYPT_CONFIG --cflags`
+-    LIBGCRYPT_LIBS=`$LIBGCRYPT_CONFIG --libs`
+     ifelse([$2], , :, [$2])
+-    libgcrypt_config_host=`$LIBGCRYPT_CONFIG --host 2>/dev/null || echo none`
++    libgcrypt_config_host=`$PKG_CONFIG --variable=host libgcrypt`
+     if test x"$libgcrypt_config_host" != xnone ; then
+       if test x"$libgcrypt_config_host" != x"$host" ; then
+   AC_MSG_WARN([[
+@@ -134,8 +73,6 @@ AC_DEFUN([AM_PATH_LIBGCRYPT],
+       fi
+     fi
+   else
+-    LIBGCRYPT_CFLAGS=""
+-    LIBGCRYPT_LIBS=""
+     ifelse([$3], , :, [$3])
+   fi
+   AC_SUBST(LIBGCRYPT_CFLAGS)
+diff --git a/src/libgcrypt.pc.in b/src/libgcrypt.pc.in
+new file mode 100644
+index 0000000..2fc8f53
+--- /dev/null
++++ b/src/libgcrypt.pc.in
+@@ -0,0 +1,33 @@
++# Process this file with autoconf to produce a pkg-config metadata file.
++# Copyright (C) 2002, 2003, 2004, 2005, 2006 Free Software Foundation
++# Author: Simon Josefsson
++#
++# This file is free software; as a special exception the author gives
++# unlimited permission to copy and/or distribute it, with or without
++# modifications, as long as this notice is preserved.
++#
++# This file is distributed in the hope that it will be useful, but
++# WITHOUT ANY WARRANTY, to the extent permitted by law; without even the
++# implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
++
++prefix=@prefix@
++exec_prefix=@exec_prefix@
++libdir=@libdir@
++includedir=@includedir@
++
++# API info
++api_version=@LIBGCRYPT_CONFIG_API_VERSION@
++host=@LIBGCRYPT_CONFIG_HOST@
++
++# Misc information.
++symmetric_ciphers=@LIBGCRYPT_CIPHERS@
++asymmetric_ciphers=@LIBGCRYPT_PUBKEY_CIPHERS@
++digests=@LIBGCRYPT_DIGESTS@
++
++Name: libgcrypt
++Description: GNU crypto library
++URL: http://www.gnupg.org
++Version: @VERSION@
++Libs: -L${libdir} -lgcrypt
++Libs.private: -L${libdir} -lgpg-error
++Cflags: -I${includedir} 
+-- 
+1.8.3.1
+
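For context, a minimal sketch of what a consumer looks like once libgcrypt detection goes through pkg-config as in the patch above; the recipe name and contents are illustrative assumptions rather than anything in this tree:

    # gcrypt-consumer_1.0.bb -- hypothetical recipe, for illustration only
    DEPENDS = "libgcrypt"
    # Inheriting pkgconfig pulls in pkg-config-native, so the staged
    # libgcrypt.pc added by the patch is found in the target sysroot and the
    # patched AM_PATH_LIBGCRYPT macro no longer needs a libgcrypt-config script.
    inherit autotools pkgconfig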
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0001-ecc-Store-EdDSA-session-key-in-secure-memory.patch b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0001-ecc-Store-EdDSA-session-key-in-secure-memory.patch
deleted file mode 100644
index 0a4dfe6..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0001-ecc-Store-EdDSA-session-key-in-secure-memory.patch
+++ /dev/null
@@ -1,39 +0,0 @@
-CVE: CVE-2017-9526
-Upstream-Status: Backport
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-From 4a1768d683f6572ad86d833508c70e6b3dc1efdc Mon Sep 17 00:00:00 2001
-From: Jo Van Bulck <jo.vanbulck@cs.kuleuven.be>
-Date: Thu, 19 Jan 2017 17:00:15 +0100
-Subject: [PATCH] ecc: Store EdDSA session key in secure memory.
-
-* cipher/ecc-eddsa.c (_gcry_ecc_eddsa_sign): use mpi_snew to allocate
-session key.
---
-
-An attacker who learns the EdDSA session key from side-channel
-observation during the signing process, can easily revover the long-
-term secret key. Storing the session key in secure memory ensures that
-constant time point operations are used in the MPI library.
-
-Signed-off-by: Jo Van Bulck <jo.vanbulck@cs.kuleuven.be>
----
- cipher/ecc-eddsa.c | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/cipher/ecc-eddsa.c b/cipher/ecc-eddsa.c
-index f91f8489..813e030d 100644
---- a/cipher/ecc-eddsa.c
-+++ b/cipher/ecc-eddsa.c
-@@ -603,7 +603,7 @@ _gcry_ecc_eddsa_sign (gcry_mpi_t input, ECC_secret_key *skey,
-   a = mpi_snew (0);
-   x = mpi_new (0);
-   y = mpi_new (0);
--  r = mpi_new (0);
-+  r = mpi_snew (0);
-   ctx = _gcry_mpi_ec_p_internal_new (skey->E.model, skey->E.dialect, 0,
-                                      skey->E.p, skey->E.a, skey->E.b);
-   b = (ctx->nbits+7)/8;
--- 
-2.11.0
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0002-libgcrypt-fix-building-error-with-O2-in-sysroot-path.patch b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0002-libgcrypt-fix-building-error-with-O2-in-sysroot-path.patch
new file mode 100644
index 0000000..d7554f3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0002-libgcrypt-fix-building-error-with-O2-in-sysroot-path.patch
@@ -0,0 +1,41 @@
+From 97570ef271ea1fb7b5ca903eec88f68407b0ec76 Mon Sep 17 00:00:00 2001
+From: Chen Qi <Qi.Chen@windriver.com>
+Date: Wed, 16 Aug 2017 10:44:41 +0800
+Subject: [PATCH 2/4] libgcrypt: fix building error with '-O2' in sysroot path
+
+Upstream-Status: Pending
+
+Strings like '-O2' or '-Ofast' are replaced with '-O1' when
+compiling the cipher directory.
+If we are cross-compiling libgcrypt and the sysroot path contains such
+a string, we would get compile errors because the sysroot path
+itself gets modified.
+
+Fix this by adding blank spaces before and after the original
+matching pattern in the sed command.
+
+Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
+
+Rebase to 1.8.0
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ cipher/Makefile.am | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/cipher/Makefile.am b/cipher/Makefile.am
+index 95c4510..bd52ec7 100644
+--- a/cipher/Makefile.am
++++ b/cipher/Makefile.am
+@@ -116,7 +116,7 @@ gost-s-box: gost-s-box.c
+ 
+ 
+ if ENABLE_O_FLAG_MUNGING
+-o_flag_munging = sed -e 's/-O\([2-9s][2-9s]*\)/-O1/' -e 's/-Ofast/-O1/g'
++o_flag_munging = sed -e 's/ -O\([2-9s][2-9s]*\) / -O1 /' -e 's/ -Ofast / -O1 /g'
+ else
+ o_flag_munging = cat
+ endif
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0003-tests-bench-slope.c-workaround-ICE-failure-on-mips-w.patch b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0003-tests-bench-slope.c-workaround-ICE-failure-on-mips-w.patch
new file mode 100644
index 0000000..105df29
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0003-tests-bench-slope.c-workaround-ICE-failure-on-mips-w.patch
@@ -0,0 +1,79 @@
+From 7cc702c7b5a1ccc2b0091f3effa1391b6c3030fd Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Wed, 16 Aug 2017 10:46:28 +0800
+Subject: [PATCH 3/4] tests/bench-slope.c: workaround ICE failure on mips with
+ '-O -g'
+
+Hit an ICE and could reduce it to the following minimal example:
+
+1. Only an array size of 2 caused the issue:
+$ cat > mipgcc-test.c << END
+
+int main (int argc, char **argv)
+{
+        char *pStrArry[ARRAY_SIZE_MAX] = {"hello"};
+        int i = 0;
+
+        while(pStrArry[i] && i<ARRAY_SIZE_MAX)
+        {
+                printf("%s\n", pStrArry[i]);
+                i++;
+        }
+
+        return 0;
+}
+
+END
+
+2. Only -O1 and -g on mips caused the issue:
+$ mips-poky-linux-gcc -O1 -g -o mipgcc-test mipgcc-test.c
+mipgcc-test.c: In function 'main':
+mipgcc-test.c:18:1: internal compiler error: in dwarf2out_var_location,
+at dwarf2out.c:20810
+ }
+ ^
+Please submit a full bug report,
+with preprocessed source if appropriate.
+See <http://gcc.gnu.org/bugs.html> for instructions
+
+3. The quick workaround is to enlarge the array size to something
+larger than 2.
+
+4. Filed a bug with GNU, but it could not be reproduced in their
+environment.
+http://gcc.gnu.org/bugzilla/show_bug.cgi?id=60643
+
+Upstream-Status: Inappropriate [oe specific]
+
+Rebase to 1.8.0
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ tests/bench-slope.c | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/tests/bench-slope.c b/tests/bench-slope.c
+index 75e6e43..4e70842 100644
+--- a/tests/bench-slope.c
++++ b/tests/bench-slope.c
+@@ -1463,7 +1463,7 @@ static struct bench_ops hash_ops = {
+ };
+ 
+ 
+-static struct bench_hash_mode hash_modes[] = {
++static struct bench_hash_mode hash_modes[3] = {
+   {"", &hash_ops},
+   {0},
+ };
+@@ -1629,7 +1629,7 @@ static struct bench_ops mac_ops = {
+ };
+ 
+ 
+-static struct bench_mac_mode mac_modes[] = {
++static struct bench_mac_mode mac_modes[3] = {
+   {"", &mac_ops},
+   {0},
+ };
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0004-tests-Makefile.am-fix-undefined-reference-to-pthread.patch b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0004-tests-Makefile.am-fix-undefined-reference-to-pthread.patch
new file mode 100644
index 0000000..8622df3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0004-tests-Makefile.am-fix-undefined-reference-to-pthread.patch
@@ -0,0 +1,28 @@
+From e20dbdb0b8f0af840ef90b299c4e2277c52ddf87 Mon Sep 17 00:00:00 2001
+From: Hongxu Jia <hongxu.jia@windriver.com>
+Date: Sun, 12 Jun 2016 04:44:29 -0400
+Subject: [PATCH 4/4] tests/Makefile.am: fix undefined reference to
+ `pthread_create'
+
+Add missing '-lpthread' to CFLAGS
+
+Upstream-Status: Pending
+
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ tests/Makefile.am | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/tests/Makefile.am b/tests/Makefile.am
+index 1744ea7..04cf425 100644
+--- a/tests/Makefile.am
++++ b/tests/Makefile.am
+@@ -64,4 +64,4 @@ EXTRA_DIST = README rsa-16k.key cavs_tests.sh cavs_driver.pl \
+ 
+ LDADD = $(standard_ldadd) $(GPG_ERROR_LIBS)
+ t_lock_LDADD = $(standard_ldadd) $(GPG_ERROR_MT_LIBS)
+-t_lock_CFLAGS = $(GPG_ERROR_MT_CFLAGS)
++t_lock_CFLAGS = $(GPG_ERROR_MT_CFLAGS) -lpthread
+-- 
+1.8.3.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0005-ecc-Add-input-validation-for-X25519.patch b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0005-ecc-Add-input-validation-for-X25519.patch
new file mode 100644
index 0000000..66fdd74
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0005-ecc-Add-input-validation-for-X25519.patch
@@ -0,0 +1,158 @@
+From ef570e3d2773c12126e7d3fcdc4db9ef80a5e214 Mon Sep 17 00:00:00 2001
+From: NIIBE Yutaka <gniibe@fsij.org>
+Date: Fri, 25 Aug 2017 18:13:28 +0900
+Subject: [PATCH] ecc: Add input validation for X25519.
+
+* cipher/ecc.c (ecc_decrypt_raw): Add input validation.
+* mpi/ec.c (ec_p_init): Use scratch buffer for bad points.
+(_gcry_mpi_ec_bad_point): New.
+
+--
+
+Following is the paper describing the attack:
+
+    May the Fourth Be With You: A Microarchitectural Side Channel Attack
+    on Real-World Applications of Curve25519
+    by Daniel Genkin, Luke Valenta, and Yuval Yarom
+
+In the current implementation, we do output checking and it results in
+an error for those bad points.  However, when attacked, the computation
+is still done with a leak of the private key, even though it results in
+errors.  To mitigate the leak, we added input validation.
+
+Note that we only list bad points with MSB=0.  By X25519, MSB is
+always cleared.
+
+In the future, we should implement constant-time field computation.  Then
+this input validation could be removed, if performance is important
+and we are sure there is no leak.
+
+CVE-id: CVE-2017-0379
+Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>
+
+Upstream-Status: Backport
+CVE: CVE-2017-0379
+Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
+---
+ cipher/ecc.c | 17 +++++++++++++++--
+ mpi/ec.c     | 51 ++++++++++++++++++++++++++++++++++++++++++++++++---
+ src/mpi.h    |  1 +
+ 3 files changed, 64 insertions(+), 5 deletions(-)
+
+diff --git a/cipher/ecc.c b/cipher/ecc.c
+index e25bf09..4e3e5b1 100644
+--- a/cipher/ecc.c
++++ b/cipher/ecc.c
+@@ -1628,9 +1628,22 @@ ecc_decrypt_raw (gcry_sexp_t *r_plain, gcry_sexp_t s_data, gcry_sexp_t keyparms)
+   if (DBG_CIPHER)
+     log_printpnt ("ecc_decrypt    kG", &kG, NULL);
+ 
+-  if (!(flags & PUBKEY_FLAG_DJB_TWEAK)
++  if ((flags & PUBKEY_FLAG_DJB_TWEAK))
++    {
+       /* For X25519, by its definition, validation should not be done.  */
+-      && !_gcry_mpi_ec_curve_point (&kG, ec))
++      /* (Instead, we do output check.)
++       *
++       * However, to mitigate secret key leak from our implementation,
++       * we also do input validation here.  For constant-time
++       * implementation, we can remove this input validation.
++       */
++      if (_gcry_mpi_ec_bad_point (&kG, ec))
++        {
++          rc = GPG_ERR_INV_DATA;
++          goto leave;
++        }
++    }
++  else if (!_gcry_mpi_ec_curve_point (&kG, ec))
+     {
+       rc = GPG_ERR_INV_DATA;
+       goto leave;
+diff --git a/mpi/ec.c b/mpi/ec.c
+index a0f7357..4c16603 100644
+--- a/mpi/ec.c
++++ b/mpi/ec.c
+@@ -396,6 +396,29 @@ ec_get_two_inv_p (mpi_ec_t ec)
+ }
+ 
+ 
++static const char *curve25519_bad_points[] = {
++  "0x0000000000000000000000000000000000000000000000000000000000000000",
++  "0x0000000000000000000000000000000000000000000000000000000000000001",
++  "0x00b8495f16056286fdb1329ceb8d09da6ac49ff1fae35616aeb8413b7c7aebe0",
++  "0x57119fd0dd4e22d8868e1c58c45c44045bef839c55b1d0b1248c50a3bc959c5f",
++  "0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffec",
++  "0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed",
++  "0x7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffee",
++  NULL
++};
++
++static gcry_mpi_t
++scanval (const char *string)
++{
++  gpg_err_code_t rc;
++  gcry_mpi_t val;
++
++  rc = _gcry_mpi_scan (&val, GCRYMPI_FMT_HEX, string, 0, NULL);
++  if (rc)
++    log_fatal ("scanning ECC parameter failed: %s\n", gpg_strerror (rc));
++  return val;
++}
++
+ 
+ /* This function initialized a context for elliptic curve based on the
+    field GF(p).  P is the prime specifying this field, A is the first
+@@ -434,9 +457,17 @@ ec_p_init (mpi_ec_t ctx, enum gcry_mpi_ec_models model,
+ 
+   _gcry_mpi_ec_get_reset (ctx);
+ 
+-  /* Allocate scratch variables.  */
+-  for (i=0; i< DIM(ctx->t.scratch); i++)
+-    ctx->t.scratch[i] = mpi_alloc_like (ctx->p);
++  if (model == MPI_EC_MONTGOMERY)
++    {
++      for (i=0; i< DIM(ctx->t.scratch) && curve25519_bad_points[i]; i++)
++        ctx->t.scratch[i] = scanval (curve25519_bad_points[i]);
++    }
++  else
++    {
++      /* Allocate scratch variables.  */
++      for (i=0; i< DIM(ctx->t.scratch); i++)
++        ctx->t.scratch[i] = mpi_alloc_like (ctx->p);
++    }
+ 
+   /* Prepare for fast reduction.  */
+   /* FIXME: need a test for NIST values.  However it does not gain us
+@@ -1572,3 +1603,17 @@ _gcry_mpi_ec_curve_point (gcry_mpi_point_t point, mpi_ec_t ctx)
+ 
+   return res;
+ }
++
++
++int
++_gcry_mpi_ec_bad_point (gcry_mpi_point_t point, mpi_ec_t ctx)
++{
++  int i;
++  gcry_mpi_t x_bad;
++
++  for (i = 0; (x_bad = ctx->t.scratch[i]); i++)
++    if (!mpi_cmp (point->x, x_bad))
++      return 1;
++
++  return 0;
++}
+diff --git a/src/mpi.h b/src/mpi.h
+index b5385b5..aeba7f8 100644
+--- a/src/mpi.h
++++ b/src/mpi.h
+@@ -296,6 +296,7 @@ void _gcry_mpi_ec_mul_point (mpi_point_t result,
+                              gcry_mpi_t scalar, mpi_point_t point,
+                              mpi_ec_t ctx);
+ int  _gcry_mpi_ec_curve_point (gcry_mpi_point_t point, mpi_ec_t ctx);
++int _gcry_mpi_ec_bad_point (gcry_mpi_point_t point, mpi_ec_t ctx);
+ 
+ gcry_mpi_t _gcry_mpi_ec_ec2os (gcry_mpi_point_t point, mpi_ec_t ectx);
+ 
+-- 
+1.8.3.1
+
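The input validation added by the patch above amounts to comparing the
x-coordinate of the decoded point against the seven known low-order
("bad") points before any scalar multiplication happens.  A minimal
standalone C sketch of that check, assuming lower-case 64-digit hex
encodings rather than the MPIs libgcrypt actually compares:

    /* Hypothetical sketch, not the libgcrypt API: reject an X25519 input
     * u-coordinate that matches a known low-order point.  The table
     * mirrors curve25519_bad_points[] from the patch (0x prefix dropped). */
    #include <stdio.h>
    #include <string.h>

    static const char *bad_points[] = {
      "0000000000000000000000000000000000000000000000000000000000000000",
      "0000000000000000000000000000000000000000000000000000000000000001",
      "00b8495f16056286fdb1329ceb8d09da6ac49ff1fae35616aeb8413b7c7aebe0",
      "57119fd0dd4e22d8868e1c58c45c44045bef839c55b1d0b1248c50a3bc959c5f",
      "7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffec",
      "7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffed",
      "7fffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffee",
      NULL
    };

    /* Returns 1 if the input must be rejected (GPG_ERR_INV_DATA above). */
    static int x25519_input_is_bad (const char *u_hex)
    {
      int i;
      for (i = 0; bad_points[i]; i++)
        if (strcmp (u_hex, bad_points[i]) == 0)
          return 1;
      return 0;
    }

    int main (void)
    {
      /* u = 0 encodes a low-order point and must be rejected. */
      const char *u_zero =
        "0000000000000000000000000000000000000000000000000000000000000000";
      printf ("rejected: %d\n", x25519_input_is_bad (u_zero));
      return 0;
    }

The real code performs this comparison with mpi_cmp() against
ctx->t.scratch, which ec_p_init() pre-loads with the same values for
Montgomery curves.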
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0006-Fix-building-AArch32-CE-implementations-when-target-.patch b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0006-Fix-building-AArch32-CE-implementations-when-target-.patch
new file mode 100644
index 0000000..65b9fd4
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/0006-Fix-building-AArch32-CE-implementations-when-target-.patch
@@ -0,0 +1,96 @@
+From 4a7aa30ae9f3ce798dd886c2f2d4164c43027748 Mon Sep 17 00:00:00 2001
+From: Jussi Kivilinna <jussi.kivilinna@iki.fi>
+Date: Sat, 29 Jul 2017 14:34:23 +0300
+Subject: [PATCH] Fix building AArch32 CE implementations when target is ARMv6
+ arch
+
+* cipher/cipher-gcm-armv8-aarch32-ce.S: Select ARMv8 architecture.
+* cipher/rijndael-armv8-aarch32-ce.S: Ditto.
+* cipher/sha1-armv8-aarch32-ce.S: Ditto.
+* cipher/sha256-armv8-aarch32-ce.S: Ditto.
+* configure.ac (gcry_cv_gcc_inline_asm_aarch32_crypto): Ditto.
+--
+
+The Raspbian distribution defaults to the ARMv6 architecture, so the
+'rbit' instruction is not available with default compiler flags.  The
+patch adds explicit architecture selection for ARMv8 to enable 'rbit'
+usage in the ARMv8/AArch32-CE assembly implementations of SHA,
+GHASH and AES.
+
+Reported-by: Chris Horry <zerbey@gmail.com>
+Signed-off-by: Jussi Kivilinna <jussi.kivilinna@iki.fi>
+
+Signed-off-by: Paul Barker <pbarker@toganlabs.com>
+Upstream-Status: Backport
+
+---
+ cipher/cipher-gcm-armv8-aarch32-ce.S | 1 +
+ cipher/rijndael-armv8-aarch32-ce.S   | 1 +
+ cipher/sha1-armv8-aarch32-ce.S       | 1 +
+ cipher/sha256-armv8-aarch32-ce.S     | 1 +
+ configure.ac                         | 1 +
+ 5 files changed, 5 insertions(+)
+
+diff --git a/cipher/cipher-gcm-armv8-aarch32-ce.S b/cipher/cipher-gcm-armv8-aarch32-ce.S
+index b61a7871..1de66a16 100644
+--- a/cipher/cipher-gcm-armv8-aarch32-ce.S
++++ b/cipher/cipher-gcm-armv8-aarch32-ce.S
+@@ -24,6 +24,7 @@
+     defined(HAVE_GCC_INLINE_ASM_AARCH32_CRYPTO)
+ 
+ .syntax unified
++.arch armv8-a
+ .fpu crypto-neon-fp-armv8
+ .arm
+ 
+diff --git a/cipher/rijndael-armv8-aarch32-ce.S b/cipher/rijndael-armv8-aarch32-ce.S
+index f375f673..5c8fa3c0 100644
+--- a/cipher/rijndael-armv8-aarch32-ce.S
++++ b/cipher/rijndael-armv8-aarch32-ce.S
+@@ -24,6 +24,7 @@
+     defined(HAVE_GCC_INLINE_ASM_AARCH32_CRYPTO)
+ 
+ .syntax unified
++.arch armv8-a
+ .fpu crypto-neon-fp-armv8
+ .arm
+ 
+diff --git a/cipher/sha1-armv8-aarch32-ce.S b/cipher/sha1-armv8-aarch32-ce.S
+index b0bc5ffe..bf2b233b 100644
+--- a/cipher/sha1-armv8-aarch32-ce.S
++++ b/cipher/sha1-armv8-aarch32-ce.S
+@@ -24,6 +24,7 @@
+     defined(HAVE_GCC_INLINE_ASM_AARCH32_CRYPTO) && defined(USE_SHA1)
+ 
+ .syntax unified
++.arch armv8-a
+ .fpu crypto-neon-fp-armv8
+ .arm
+ 
+diff --git a/cipher/sha256-armv8-aarch32-ce.S b/cipher/sha256-armv8-aarch32-ce.S
+index 2041a237..2b17ab1b 100644
+--- a/cipher/sha256-armv8-aarch32-ce.S
++++ b/cipher/sha256-armv8-aarch32-ce.S
+@@ -24,6 +24,7 @@
+     defined(HAVE_GCC_INLINE_ASM_AARCH32_CRYPTO) && defined(USE_SHA256)
+ 
+ .syntax unified
++.arch armv8-a
+ .fpu crypto-neon-fp-armv8
+ .arm
+ 
+diff --git a/configure.ac b/configure.ac
+index 27faa7f4..66e7cd67 100644
+--- a/configure.ac
++++ b/configure.ac
+@@ -1619,6 +1619,7 @@ AC_CACHE_CHECK([whether GCC inline assembler supports AArch32 Crypto Extension i
+           AC_COMPILE_IFELSE([AC_LANG_SOURCE(
+           [[__asm__(
+                 ".syntax unified\n\t"
++                ".arch armv8-a\n\t"
+                 ".arm\n\t"
+                 ".fpu crypto-neon-fp-armv8\n\t"
+ 
+-- 
+2.11.0
+
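The fix above works because GNU as lets a single file raise the target
architecture: the added ".arch armv8-a" directive makes the assembler
accept instructions that plain ARMv6 lacks, such as 'rbit' and the
crypto extensions.  A hedged C sketch of the probe the patched
configure.ac builds (directives only; the full test presumably
exercises the crypto instructions as well):

    /* Top-level asm, as AC_LANG_SOURCE places it in the configure test.
     * Guarded so the file still compiles on non-ARM hosts; on an AArch32
     * toolchain it succeeds only if the directives are accepted. */
    #if defined(__arm__)
    __asm__(".syntax unified\n\t"
            ".arch armv8-a\n\t"
            ".arm\n\t"
            ".fpu crypto-neon-fp-armv8\n\t");
    #endif

    int main (void)
    {
      return 0;
    }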
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/CVE-2017-7526.patch b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/CVE-2017-7526.patch
deleted file mode 100644
index 384fa96..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/CVE-2017-7526.patch
+++ /dev/null
@@ -1,455 +0,0 @@
-Flush+reload side-channel attack on RSA secret keys dubbed "Sliding right
-into disaster".
-
-CVE: CVE-2017-7526
-Upstream-Status: Backport
-Signed-off-by: Ross Burton <ross.burton@intel.com>
-
-From 12ee400c39e0ebb5fb819c3926d459c278fc99fd Mon Sep 17 00:00:00 2001
-From: NIIBE Yutaka <gniibe@fsij.org>
-Date: Tue, 4 Apr 2017 17:38:05 +0900
-Subject: [PATCH 1/5] mpi: Simplify mpi_powm.
-
-* mpi/mpi-pow.c (_gcry_mpi_powm): Simplify the loop.
-
---
-
-This fix is not a solution for the problem reported (yet).  The
-problem is that the current algorithm of _gcry_mpi_powm depends on
-exponent and some information leaks is possible.
-
-Reported-by: Andreas Zankl <andreas.zankl@aisec.fraunhofer.de>
-Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>
-
-(backport from master commit:
-719468e53133d3bdf12156c5bfdea2bf15f9f6f1)
-
-Signed-off-by: Ross Burton <ross.burton@intel.com>
----
- mpi/mpi-pow.c | 105 +++++++++++++++++-----------------------------------------
- 1 file changed, 30 insertions(+), 75 deletions(-)
-
-diff --git a/mpi/mpi-pow.c b/mpi/mpi-pow.c
-index a780ebd1..7b3dc318 100644
---- a/mpi/mpi-pow.c
-+++ b/mpi/mpi-pow.c
-@@ -609,12 +609,8 @@ _gcry_mpi_powm (gcry_mpi_t res,
-       if (e == 0)
-         {
-           j += c;
--          i--;
--          if ( i < 0 )
--            {
--              c = 0;
--              break;
--            }
-+          if ( --i < 0 )
-+            break;
- 
-           e = ep[i];
-           c = BITS_PER_MPI_LIMB;
-@@ -629,38 +625,33 @@ _gcry_mpi_powm (gcry_mpi_t res,
-           c -= c0;
-           j += c0;
- 
-+          e0 = (e >> (BITS_PER_MPI_LIMB - W));
-           if (c >= W)
--            {
--              e0 = (e >> (BITS_PER_MPI_LIMB - W));
--              e = (e << W);
--              c -= W;
--            }
-+            c0 = 0;
-           else
-             {
--              i--;
--              if ( i < 0 )
-+              if ( --i < 0 )
-                 {
--                  e = (e >> (BITS_PER_MPI_LIMB - c));
--                  break;
-+                  e0 = (e >> (BITS_PER_MPI_LIMB - c));
-+                  j += c - W;
-+                  goto last_step;
-+                }
-+              else
-+                {
-+                  c0 = c;
-+                  e = ep[i];
-+                  c = BITS_PER_MPI_LIMB;
-+                  e0 |= (e >> (BITS_PER_MPI_LIMB - (W - c0)));
-                 }
--
--              c0 = c;
--              e0 = (e >> (BITS_PER_MPI_LIMB - W))
--                | (ep[i] >> (BITS_PER_MPI_LIMB - W + c0));
--              e = (ep[i] << (W - c0));
--              c = BITS_PER_MPI_LIMB - W + c0;
-             }
- 
-+          e = e << (W - c0);
-+          c -= (W - c0);
-+
-+        last_step:
-           count_trailing_zeros (c0, e0);
-           e0 = (e0 >> c0) >> 1;
- 
--          for (j += W - c0; j; j--)
--            {
--              mul_mod (xp, &xsize, rp, rsize, rp, rsize, mp, msize, &karactx);
--              tp = rp; rp = xp; xp = tp;
--              rsize = xsize;
--            }
--
-           /*
-            *  base_u <= precomp[e0]
-            *  base_u_size <= precomp_size[e0]
-@@ -677,25 +668,23 @@ _gcry_mpi_powm (gcry_mpi_t res,
-               u.d = precomp[k];
- 
-               mpi_set_cond (&w, &u, k == e0);
--              base_u_size |= (precomp_size[k] & ((mpi_size_t)0 - (k == e0)) );
-+              base_u_size |= ( precomp_size[k] & ((mpi_size_t)0 - (k == e0)) );
-             }
- 
--          mul_mod (xp, &xsize, rp, rsize, base_u, base_u_size,
--                   mp, msize, &karactx);
--          tp = rp; rp = xp; xp = tp;
--          rsize = xsize;
-+          for (j += W - c0; j >= 0; j--)
-+            {
-+              mul_mod (xp, &xsize, rp, rsize,
-+                       j == 0 ? base_u : rp, j == 0 ? base_u_size : rsize,
-+                       mp, msize, &karactx);
-+              tp = rp; rp = xp; xp = tp;
-+              rsize = xsize;
-+            }
- 
-           j = c0;
-+          if ( i < 0 )
-+            break;
-         }
- 
--    if (c != 0)
--      {
--        j += c;
--        count_trailing_zeros (c, e);
--        e = (e >> c);
--        j -= c;
--      }
--
-     while (j--)
-       {
-         mul_mod (xp, &xsize, rp, rsize, rp, rsize, mp, msize, &karactx);
-@@ -703,40 +692,6 @@ _gcry_mpi_powm (gcry_mpi_t res,
-         rsize = xsize;
-       }
- 
--    if (e != 0)
--      {
--        /*
--         * base_u <= precomp[(e>>1)]
--         * base_u_size <= precomp_size[(e>>1)]
--         */
--        base_u_size = 0;
--        for (k = 0; k < (1<< (W - 1)); k++)
--          {
--            struct gcry_mpi w, u;
--            w.alloced = w.nlimbs = precomp_size[k];
--            u.alloced = u.nlimbs = precomp_size[k];
--            w.sign = u.sign = 0;
--            w.flags = u.flags = 0;
--            w.d = base_u;
--            u.d = precomp[k];
--
--            mpi_set_cond (&w, &u, k == (e>>1));
--            base_u_size |= (precomp_size[k] & ((mpi_size_t)0 - (k == (e>>1))) );
--          }
--
--        mul_mod (xp, &xsize, rp, rsize, base_u, base_u_size,
--                 mp, msize, &karactx);
--        tp = rp; rp = xp; xp = tp;
--        rsize = xsize;
--
--        for (; c; c--)
--          {
--            mul_mod (xp, &xsize, rp, rsize, rp, rsize, mp, msize, &karactx);
--            tp = rp; rp = xp; xp = tp;
--            rsize = xsize;
--          }
--      }
--
-     /* We shifted MOD, the modulo reduction argument, left
-        MOD_SHIFT_CNT steps.  Adjust the result by reducing it with the
-        original MOD.
--- 
-2.11.0
-
-
-From a4b275c4d5378837e820fdc84f4ada876f9c8ccd Mon Sep 17 00:00:00 2001
-From: NIIBE Yutaka <gniibe@fsij.org>
-Date: Sat, 24 Jun 2017 20:46:20 +0900
-Subject: [PATCH 2/5] Same computation for square and multiply.
-
-* mpi/mpi-pow.c (_gcry_mpi_powm): Compare msize for max_u_size.  Move
-the assignment to base_u into the loop.  Copy content refered by RP to
-BASE_U except the last of the loop.
-
---
-
-Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>
-(backport from master commit:
-78130828e9a140a9de4dafadbc844dbb64cb709a)
-
-Signed-off-by: Ross Burton <ross.burton@intel.com>
----
- mpi/mpi-pow.c | 50 +++++++++++++++++++++++++++++---------------------
- 1 file changed, 29 insertions(+), 21 deletions(-)
-
-diff --git a/mpi/mpi-pow.c b/mpi/mpi-pow.c
-index 7b3dc318..3cba6903 100644
---- a/mpi/mpi-pow.c
-+++ b/mpi/mpi-pow.c
-@@ -573,6 +573,8 @@ _gcry_mpi_powm (gcry_mpi_t res,
-         MPN_COPY (precomp[i], rp, rsize);
-       }
- 
-+    if (msize > max_u_size)
-+      max_u_size = msize;
-     base_u = mpi_alloc_limb_space (max_u_size, esec);
-     MPN_ZERO (base_u, max_u_size);
- 
-@@ -619,6 +621,10 @@ _gcry_mpi_powm (gcry_mpi_t res,
-         {
-           int c0;
-           mpi_limb_t e0;
-+          struct gcry_mpi w, u;
-+          w.sign = u.sign = 0;
-+          w.flags = u.flags = 0;
-+          w.d = base_u;
- 
-           count_leading_zeros (c0, e);
-           e = (e << c0);
-@@ -652,29 +658,31 @@ _gcry_mpi_powm (gcry_mpi_t res,
-           count_trailing_zeros (c0, e0);
-           e0 = (e0 >> c0) >> 1;
- 
--          /*
--           *  base_u <= precomp[e0]
--           *  base_u_size <= precomp_size[e0]
--           */
--          base_u_size = 0;
--          for (k = 0; k < (1<< (W - 1)); k++)
--            {
--              struct gcry_mpi w, u;
--              w.alloced = w.nlimbs = precomp_size[k];
--              u.alloced = u.nlimbs = precomp_size[k];
--              w.sign = u.sign = 0;
--              w.flags = u.flags = 0;
--              w.d = base_u;
--              u.d = precomp[k];
--
--              mpi_set_cond (&w, &u, k == e0);
--              base_u_size |= ( precomp_size[k] & ((mpi_size_t)0 - (k == e0)) );
--            }
--
-           for (j += W - c0; j >= 0; j--)
-             {
--              mul_mod (xp, &xsize, rp, rsize,
--                       j == 0 ? base_u : rp, j == 0 ? base_u_size : rsize,
-+
-+              /*
-+               *  base_u <= precomp[e0]
-+               *  base_u_size <= precomp_size[e0]
-+               */
-+              base_u_size = 0;
-+              for (k = 0; k < (1<< (W - 1)); k++)
-+                {
-+                  w.alloced = w.nlimbs = precomp_size[k];
-+                  u.alloced = u.nlimbs = precomp_size[k];
-+                  u.d = precomp[k];
-+
-+                  mpi_set_cond (&w, &u, k == e0);
-+                  base_u_size |= ( precomp_size[k] & (0UL - (k == e0)) );
-+                }
-+
-+              w.alloced = w.nlimbs = rsize;
-+              u.alloced = u.nlimbs = rsize;
-+              u.d = rp;
-+              mpi_set_cond (&w, &u, j != 0);
-+              base_u_size ^= ((base_u_size ^ rsize)  & (0UL - (j != 0)));
-+
-+              mul_mod (xp, &xsize, rp, rsize, base_u, base_u_size,
-                        mp, msize, &karactx);
-               tp = rp; rp = xp; xp = tp;
-               rsize = xsize;
--- 
-2.11.0
-
-
-From 129c1960e55603ec3f6fd1cd9cd51b22e9a9a7ef Mon Sep 17 00:00:00 2001
-From: NIIBE Yutaka <gniibe@fsij.org>
-Date: Thu, 29 Jun 2017 11:48:44 +0900
-Subject: [PATCH 3/5] rsa: Add exponent blinding.
-
-* cipher/rsa.c (secret): Blind secret D with randomized nonce R for
-mpi_powm computation.
-
---
-
-Co-authored-by: Werner Koch <wk@gnupg.org>
-Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>
-
-The paper describing attack: https://eprint.iacr.org/2017/627
-
-Sliding right into disaster: Left-to-right sliding windows leak
-by Daniel J. Bernstein and Joachim Breitner and Daniel Genkin and
-Leon Groot Bruinderink and Nadia Heninger and Tanja Lange and
-Christine van Vredendaal and Yuval Yarom
-
-  It is well known that constant-time implementations of modular
-  exponentiation cannot use sliding windows. However, software
-  libraries such as Libgcrypt, used by GnuPG, continue to use sliding
-  windows. It is widely believed that, even if the complete pattern of
-  squarings and multiplications is observed through a side-channel
-  attack, the number of exponent bits leaked is not sufficient to
-  carry out a full key-recovery attack against RSA. Specifically,
-  4-bit sliding windows leak only 40% of the bits, and 5-bit sliding
-  windows leak only 33% of the bits.
-
-  In this paper we demonstrate a complete break of RSA-1024 as
-  implemented in Libgcrypt. Our attack makes essential use of the fact
-  that Libgcrypt uses the left-to-right method for computing the
-  sliding-window expansion. We show for the first time that the
-  direction of the encoding matters: the pattern of squarings and
-  multiplications in left-to-right sliding windows leaks significantly
-  more information about exponent bits than for right-to-left. We show
-  how to incorporate this additional information into the
-  Heninger-Shacham algorithm for partial key reconstruction, and use
-  it to obtain very efficient full key recovery for RSA-1024. We also
-  provide strong evidence that the same attack works for RSA-2048 with
-  only moderately more computation.
-
-Exponent blinding is a kind of workaround to add noise.  Signal (leak)
-is still there for non-constant-time implementation.
-
-(backported from master commit:
-8725c99ffa41778f382ca97233183bcd687bb0ce)
-
-Signed-off-by: Ross Burton <ross.burton@intel.com>
----
- cipher/rsa.c | 32 +++++++++++++++++++++++++-------
- 1 file changed, 25 insertions(+), 7 deletions(-)
-
-diff --git a/cipher/rsa.c b/cipher/rsa.c
-index b6c73741..25e29b5c 100644
---- a/cipher/rsa.c
-+++ b/cipher/rsa.c
-@@ -1021,15 +1021,33 @@ secret (gcry_mpi_t output, gcry_mpi_t input, RSA_secret_key *skey )
-       gcry_mpi_t m1 = mpi_alloc_secure( mpi_get_nlimbs(skey->n)+1 );
-       gcry_mpi_t m2 = mpi_alloc_secure( mpi_get_nlimbs(skey->n)+1 );
-       gcry_mpi_t h  = mpi_alloc_secure( mpi_get_nlimbs(skey->n)+1 );
--
--      /* m1 = c ^ (d mod (p-1)) mod p */
-+      gcry_mpi_t D_blind = mpi_alloc_secure ( mpi_get_nlimbs(skey->n) + 1 );
-+      gcry_mpi_t r;
-+      unsigned int r_nbits;
-+
-+      r_nbits = mpi_get_nbits (skey->p) / 4;
-+      if (r_nbits < 96)
-+        r_nbits = 96;
-+      r = mpi_alloc_secure ((r_nbits + BITS_PER_MPI_LIMB-1)/BITS_PER_MPI_LIMB);
-+
-+      /* d_blind = (d mod (p-1)) + (p-1) * r */
-+      /* m1 = c ^ d_blind mod p */
-+      _gcry_mpi_randomize (r, r_nbits, GCRY_WEAK_RANDOM);
-+      mpi_set_highbit (r, r_nbits - 1);
-       mpi_sub_ui( h, skey->p, 1  );
--      mpi_fdiv_r( h, skey->d, h );
--      mpi_powm( m1, input, h, skey->p );
--      /* m2 = c ^ (d mod (q-1)) mod q */
-+      mpi_mul ( D_blind, h, r );
-+      mpi_fdiv_r ( h, skey->d, h );
-+      mpi_add ( D_blind, D_blind, h );
-+      mpi_powm( m1, input, D_blind, skey->p );
-+      /* d_blind = (d mod (q-1)) + (q-1) * r */
-+      /* m2 = c ^ d_blind mod q */
-+      _gcry_mpi_randomize (r, r_nbits, GCRY_WEAK_RANDOM);
-+      mpi_set_highbit (r, r_nbits - 1);
-       mpi_sub_ui( h, skey->q, 1  );
--      mpi_fdiv_r( h, skey->d, h );
--      mpi_powm( m2, input, h, skey->q );
-+      mpi_mul ( D_blind, h, r );
-+      mpi_fdiv_r ( h, skey->d, h );
-+      mpi_add ( D_blind, D_blind, h );
-+      mpi_powm( m2, input, D_blind, skey->q );
-       /* h = u * ( m2 - m1 ) mod q */
-       mpi_sub( h, m2, m1 );
-       if ( mpi_has_sign ( h ) )
--- 
-2.11.0
-
-
-From 8e1a6289b7d11a8bf6c94affa06c9794e7216e26 Mon Sep 17 00:00:00 2001
-From: NIIBE Yutaka <gniibe@fsij.org>
-Date: Thu, 29 Jun 2017 12:36:27 +0900
-Subject: [PATCH 4/5] rsa: Fix exponent blinding.
-
-* cipher/rsa.c (secret): Free D_BLIND.
-
---
-
-Fixes-commit: a9f612def801c8145d551d995475e5d51a4c988c
-Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>
-Signed-off-by: Ross Burton <ross.burton@intel.com>
----
- cipher/rsa.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/cipher/rsa.c b/cipher/rsa.c
-index 25e29b5c..33f92ebd 100644
---- a/cipher/rsa.c
-+++ b/cipher/rsa.c
-@@ -1057,6 +1057,7 @@ secret (gcry_mpi_t output, gcry_mpi_t input, RSA_secret_key *skey )
-       mpi_mul ( h, h, skey->p );
-       mpi_add ( output, m1, h );
- 
-+      mpi_free ( D_blind );
-       mpi_free ( h );
-       mpi_free ( m1 );
-       mpi_free ( m2 );
--- 
-2.11.0
-
-
-From 4e5497752172edc444029af645f28cb88ce93906 Mon Sep 17 00:00:00 2001
-From: NIIBE Yutaka <gniibe@fsij.org>
-Date: Thu, 29 Jun 2017 12:40:19 +0900
-Subject: [PATCH 5/5] rsa: More fix.
-
-* cipher/rsa.c (secret): Free R.
-
---
-
-Fixes-commit: a9f612def801c8145d551d995475e5d51a4c988c
-Signed-off-by: NIIBE Yutaka <gniibe@fsij.org>
-Signed-off-by: Ross Burton <ross.burton@intel.com>
----
- cipher/rsa.c | 1 +
- 1 file changed, 1 insertion(+)
-
-diff --git a/cipher/rsa.c b/cipher/rsa.c
-index 33f92ebd..8d8d157b 100644
---- a/cipher/rsa.c
-+++ b/cipher/rsa.c
-@@ -1057,6 +1057,7 @@ secret (gcry_mpi_t output, gcry_mpi_t input, RSA_secret_key *skey )
-       mpi_mul ( h, h, skey->p );
-       mpi_add ( output, m1, h );
- 
-+      mpi_free ( r );
-       mpi_free ( D_blind );
-       mpi_free ( h );
-       mpi_free ( m1 );
--- 
-2.11.0
-
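The exponent blinding introduced by patch 3/5 of the backport removed
above rests on Fermat's little theorem: for prime p and c not divisible
by p, adding a random multiple of p-1 to the reduced exponent does not
change c^d mod p, but it randomizes the square-and-multiply pattern an
attacker can observe.  A toy numeric check of that identity with
made-up small values (plain integers, not the MPI code):

    /* Toy check of the blinding identity: for prime p and c not divisible
     * by p, c^((d mod (p-1)) + (p-1)*r) == c^d (mod p) for any r.
     * Small made-up values only; the real code works on secure MPIs. */
    #include <stdio.h>
    #include <stdint.h>

    static uint64_t powmod (uint64_t b, uint64_t e, uint64_t m)
    {
      uint64_t r = 1 % m;
      b %= m;
      while (e)
        {
          if (e & 1)
            r = (r * b) % m;
          b = (b * b) % m;
          e >>= 1;
        }
      return r;
    }

    int main (void)
    {
      uint64_t p = 1009, c = 123, d = 456, r = 77;  /* arbitrary toy values */
      uint64_t d_blind = (d % (p - 1)) + (p - 1) * r;

      printf ("c^d       mod p = %llu\n",
              (unsigned long long) powmod (c, d, p));
      printf ("c^d_blind mod p = %llu\n",
              (unsigned long long) powmod (c, d_blind, p));
      return 0;
    }

Both lines print the same value; the leak itself (the multiplication
pattern) remains in a non-constant-time implementation, which is why
the commit message calls the blinding a workaround rather than a fix.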
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/add-pkgconfig-support.patch b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/add-pkgconfig-support.patch
deleted file mode 100644
index 69589f5..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/add-pkgconfig-support.patch
+++ /dev/null
@@ -1,177 +0,0 @@
-Add and use pkg-config for libgcrypt instead of -config scripts.
-
-Upstream-Status: Denied [upstream have indicated they don't want a pkg-config dependency]
-
-RP 2014/5/22
-
-Rebase to 1.7.0
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- configure.ac        |  1 +
- src/libgcrypt.m4    | 71 +++--------------------------------------------------
- src/libgcrypt.pc.in | 33 +++++++++++++++++++++++++
- 3 files changed, 38 insertions(+), 67 deletions(-)
- create mode 100644 src/libgcrypt.pc.in
-
-diff --git a/configure.ac b/configure.ac
-index f683e21..566e1c8 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -2314,6 +2314,7 @@ random/Makefile
- doc/Makefile
- src/Makefile
- src/gcrypt.h
-+src/libgcrypt.pc
- src/libgcrypt-config
- src/versioninfo.rc
- tests/Makefile
-diff --git a/src/libgcrypt.m4 b/src/libgcrypt.m4
-index c67cfec..4ea5f2c 100644
---- a/src/libgcrypt.m4
-+++ b/src/libgcrypt.m4
-@@ -29,30 +29,6 @@ dnl is added to the gpg_config_script_warn variable.
- dnl
- AC_DEFUN([AM_PATH_LIBGCRYPT],
- [ AC_REQUIRE([AC_CANONICAL_HOST])
--  AC_ARG_WITH(libgcrypt-prefix,
--            AC_HELP_STRING([--with-libgcrypt-prefix=PFX],
--                           [prefix where LIBGCRYPT is installed (optional)]),
--     libgcrypt_config_prefix="$withval", libgcrypt_config_prefix="")
--  if test x"${LIBGCRYPT_CONFIG}" = x ; then
--     if test x"${libgcrypt_config_prefix}" != x ; then
--        LIBGCRYPT_CONFIG="${libgcrypt_config_prefix}/bin/libgcrypt-config"
--     else
--       case "${SYSROOT}" in
--         /*)
--           if test -x "${SYSROOT}/bin/libgcrypt-config" ; then
--             LIBGCRYPT_CONFIG="${SYSROOT}/bin/libgcrypt-config"
--           fi
--           ;;
--         '')
--           ;;
--          *)
--           AC_MSG_WARN([Ignoring \$SYSROOT as it is not an absolute path.])
--           ;;
--       esac
--     fi
--  fi
--
--  AC_PATH_PROG(LIBGCRYPT_CONFIG, libgcrypt-config, no)
-   tmp=ifelse([$1], ,1:1.2.0,$1)
-   if echo "$tmp" | grep ':' >/dev/null 2>/dev/null ; then
-      req_libgcrypt_api=`echo "$tmp"     | sed 's/\(.*\):\(.*\)/\1/'`
-@@ -62,48 +38,13 @@ AC_DEFUN([AM_PATH_LIBGCRYPT],
-      min_libgcrypt_version="$tmp"
-   fi
- 
--  AC_MSG_CHECKING(for LIBGCRYPT - version >= $min_libgcrypt_version)
--  ok=no
--  if test "$LIBGCRYPT_CONFIG" != "no" ; then
--    req_major=`echo $min_libgcrypt_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\1/'`
--    req_minor=`echo $min_libgcrypt_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\2/'`
--    req_micro=`echo $min_libgcrypt_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\3/'`
--    libgcrypt_config_version=`$LIBGCRYPT_CONFIG --version`
--    major=`echo $libgcrypt_config_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\).*/\1/'`
--    minor=`echo $libgcrypt_config_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\).*/\2/'`
--    micro=`echo $libgcrypt_config_version | \
--               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\).*/\3/'`
--    if test "$major" -gt "$req_major"; then
--        ok=yes
--    else
--        if test "$major" -eq "$req_major"; then
--            if test "$minor" -gt "$req_minor"; then
--               ok=yes
--            else
--               if test "$minor" -eq "$req_minor"; then
--                   if test "$micro" -ge "$req_micro"; then
--                     ok=yes
--                   fi
--               fi
--            fi
--        fi
--    fi
--  fi
--  if test $ok = yes; then
--    AC_MSG_RESULT([yes ($libgcrypt_config_version)])
--  else
--    AC_MSG_RESULT(no)
--  fi
-+  PKG_CHECK_MODULES(LIBGCRYPT, [libgcrypt >= $min_libgcrypt_version], [ok=yes], [ok=no])
-+
-   if test $ok = yes; then
-      # If we have a recent libgcrypt, we should also check that the
-      # API is compatible
-      if test "$req_libgcrypt_api" -gt 0 ; then
--        tmp=`$LIBGCRYPT_CONFIG --api-version 2>/dev/null || echo 0`
-+        tmp=`$PKG_CONFIG --variable=api_version libgcrypt`
-         if test "$tmp" -gt 0 ; then
-            AC_MSG_CHECKING([LIBGCRYPT API version])
-            if test "$req_libgcrypt_api" -eq "$tmp" ; then
-@@ -116,10 +57,8 @@ AC_DEFUN([AM_PATH_LIBGCRYPT],
-      fi
-   fi
-   if test $ok = yes; then
--    LIBGCRYPT_CFLAGS=`$LIBGCRYPT_CONFIG --cflags`
--    LIBGCRYPT_LIBS=`$LIBGCRYPT_CONFIG --libs`
-     ifelse([$2], , :, [$2])
--    libgcrypt_config_host=`$LIBGCRYPT_CONFIG --host 2>/dev/null || echo none`
-+    libgcrypt_config_host=`$PKG_CONFIG --variable=host libgcrypt`
-     if test x"$libgcrypt_config_host" != xnone ; then
-       if test x"$libgcrypt_config_host" != x"$host" ; then
-   AC_MSG_WARN([[
-@@ -134,8 +73,6 @@ AC_DEFUN([AM_PATH_LIBGCRYPT],
-       fi
-     fi
-   else
--    LIBGCRYPT_CFLAGS=""
--    LIBGCRYPT_LIBS=""
-     ifelse([$3], , :, [$3])
-   fi
-   AC_SUBST(LIBGCRYPT_CFLAGS)
-diff --git a/src/libgcrypt.pc.in b/src/libgcrypt.pc.in
-new file mode 100644
-index 0000000..2fc8f53
---- /dev/null
-+++ b/src/libgcrypt.pc.in
-@@ -0,0 +1,33 @@
-+# Process this file with autoconf to produce a pkg-config metadata file.
-+# Copyright (C) 2002, 2003, 2004, 2005, 2006 Free Software Foundation
-+# Author: Simon Josefsson
-+#
-+# This file is free software; as a special exception the author gives
-+# unlimited permission to copy and/or distribute it, with or without
-+# modifications, as long as this notice is preserved.
-+#
-+# This file is distributed in the hope that it will be useful, but
-+# WITHOUT ANY WARRANTY, to the extent permitted by law; without even the
-+# implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
-+
-+prefix=@prefix@
-+exec_prefix=@exec_prefix@
-+libdir=@libdir@
-+includedir=@includedir@
-+
-+# API info
-+api_version=@LIBGCRYPT_CONFIG_API_VERSION@
-+host=@LIBGCRYPT_CONFIG_HOST@
-+
-+# Misc information.
-+symmetric_ciphers=@LIBGCRYPT_CIPHERS@
-+asymmetric_ciphers=@LIBGCRYPT_PUBKEY_CIPHERS@
-+digests=@LIBGCRYPT_DIGESTS@
-+
-+Name: libgcrypt
-+Description: GNU crypto library
-+URL: http://www.gnupg.org
-+Version: @VERSION@
-+Libs: -L${libdir} -lgcrypt
-+Libs.private: -L${libdir} -lgpg-error
-+Cflags: -I${includedir} 
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/fix-ICE-failure-on-mips-with-option-O-and-g.patch b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/fix-ICE-failure-on-mips-with-option-O-and-g.patch
deleted file mode 100644
index 582e62f..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/fix-ICE-failure-on-mips-with-option-O-and-g.patch
+++ /dev/null
@@ -1,71 +0,0 @@
-tests/bench-slope.c: workaround ICE failure on mips with '-O -g'
-
-Hit a ICE and could reduce it to the following minimal example:
-
-1. Only the size of array assigned with 2 caused the issue:
-$ cat > mipgcc-test.c << END
-
-int main (int argc, char **argv)
-{
-        char *pStrArry[ARRAY_SIZE_MAX] = {"hello"};
-        int i = 0;
-
-        while(pStrArry[i] && i<ARRAY_SIZE_MAX)
-        {
-                printf("%s\n", pStrArry[i]);
-                i++;
-        }
-
-        return 0;
-}
-
-END
-
-2. Only -O1 and -g on mips caused the issue:
-$ mips-poky-linux-gcc -O1 -g -o mipgcc-test mipgcc-test.c
-mipgcc-test.c: In function 'main':
-mipgcc-test.c:18:1: internal compiler error: in dwarf2out_var_location, at dwarf2out.c:20810
- }
- ^
-Please submit a full bug report,
-with preprocessed source if appropriate.
-See <http://gcc.gnu.org/bugs.html> for instructions
-
-3. The quick workround is trying to enlarge the size of array with larger
-than 2.
-
-4. File a bug to GNU, but it could not be reproduced on there environment.
-http://gcc.gnu.org/bugzilla/show_bug.cgi?id=60643
-
-Upstream-Status: Inappropriate [oe specific]
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- tests/bench-slope.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
-
-diff --git a/tests/bench-slope.c b/tests/bench-slope.c
-index bd05064..28c2438 100644
---- a/tests/bench-slope.c
-+++ b/tests/bench-slope.c
-@@ -1197,7 +1197,7 @@ static struct bench_ops hash_ops = {
- };
- 
- 
--static struct bench_hash_mode hash_modes[] = {
-+static struct bench_hash_mode hash_modes[3] = {
-   {"", &hash_ops},
-   {0},
- };
-@@ -1349,7 +1349,7 @@ static struct bench_ops mac_ops = {
- };
- 
- 
--static struct bench_mac_mode mac_modes[] = {
-+static struct bench_mac_mode mac_modes[3] = {
-   {"", &mac_ops},
-   {0},
- };
--- 
-1.8.1.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/fix-undefined-reference-to-pthread.patch b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/fix-undefined-reference-to-pthread.patch
deleted file mode 100644
index e7de8ba..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/fix-undefined-reference-to-pthread.patch
+++ /dev/null
@@ -1,27 +0,0 @@
-From cc0e2b403d33892963513a3ba98e4ae5a05a4d3c Mon Sep 17 00:00:00 2001
-From: Hongxu Jia <hongxu.jia@windriver.com>
-Date: Sun, 12 Jun 2016 04:44:29 -0400
-Subject: [PATCH] tests/Makefile.am: fix undefined reference to `pthread_create'
-
-Add missing '-lpthread' to CFLAGS
-
-Upstream-Status: Pending
-
-Signed-off-by: Hongxu Jia <hongxu.jia@windriver.com>
----
- tests/Makefile.am | 2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/tests/Makefile.am b/tests/Makefile.am
-index d462f30..bef6dd7 100644
---- a/tests/Makefile.am
-+++ b/tests/Makefile.am
-@@ -62,4 +62,4 @@ EXTRA_DIST = README rsa-16k.key cavs_tests.sh cavs_driver.pl \
- 
- LDADD = $(standard_ldadd) $(GPG_ERROR_LIBS)
- t_lock_LDADD = $(standard_ldadd) $(GPG_ERROR_MT_LIBS)
--t_lock_CFLAGS = $(GPG_ERROR_MT_CFLAGS)
-+t_lock_CFLAGS = $(GPG_ERROR_MT_CFLAGS) -lpthread
--- 
-2.8.1
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/libgcrypt-fix-building-error-with-O2-in-sysroot-path.patch b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/libgcrypt-fix-building-error-with-O2-in-sysroot-path.patch
deleted file mode 100644
index a3e5403..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/files/libgcrypt-fix-building-error-with-O2-in-sysroot-path.patch
+++ /dev/null
@@ -1,32 +0,0 @@
-Upstream-Status: Pending
-
-libgcrypt: fix building error with '-O2' in sysroot path
-
-Characters like '-O2' or '-Ofast' will be replaced by '-O1' when compiling cipher.
-If we are cross compiling libgcrypt and sysroot contains such characters, we would
-get compile errors because the sysroot path has been modified.
-
-Fix this by adding blank spaces before and after the original matching pattern in the
-sed command.
-
-Signed-off-by: Chen Qi <Qi.Chen@windriver.com>
----
- cipher/Makefile.am |    2 +-
- 1 file changed, 1 insertion(+), 1 deletion(-)
-
-diff --git a/cipher/Makefile.am b/cipher/Makefile.am
-index 76cdc96..9a89792 100644
---- a/cipher/Makefile.am
-+++ b/cipher/Makefile.am
-@@ -69,7 +69,7 @@ rfc2268.c \
- camellia.c camellia.h camellia-glue.c
- 
- if ENABLE_O_FLAG_MUNGING
--o_flag_munging = sed -e 's/-O\([2-9s][2-9s]*\)/-O1/' -e 's/-Ofast/-O1/g'
-+o_flag_munging = sed -e 's/ -O\([2-9s][2-9s]*\) / -O1 /' -e 's/ -Ofast / -O1 /g'
- else
- o_flag_munging = cat
- endif
--- 
-1.7.9.5
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/libgcrypt.inc b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/libgcrypt.inc
deleted file mode 100644
index 3403579..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/libgcrypt.inc
+++ /dev/null
@@ -1,54 +0,0 @@
-SUMMARY = "General purpose cryptographic library based on the code from GnuPG"
-HOMEPAGE = "http://directory.fsf.org/project/libgcrypt/"
-BUGTRACKER = "https://bugs.g10code.com/gnupg/index"
-SECTION = "libs"
-
-# helper program gcryptrnd and getrandom are under GPL, rest LGPL
-LICENSE = "GPLv2+ & LGPLv2.1+ & GPLv3+"
-LICENSE_${PN} = "LGPLv2.1+"
-LICENSE_${PN}-dev = "GPLv2+ & LGPLv2.1+"
-LICENSE_dumpsexp-dev = "GPLv3+"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
-                    file://COPYING.LIB;md5=bbb461211a33b134d42ed5ee802b37ff"
-
-DEPENDS = "libgpg-error"
-
-UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
-SRC_URI = "${GNUPG_MIRROR}/libgcrypt/libgcrypt-${PV}.tar.gz \
-           file://add-pkgconfig-support.patch \
-           file://libgcrypt-fix-building-error-with-O2-in-sysroot-path.patch \
-           file://fix-ICE-failure-on-mips-with-option-O-and-g.patch \
-           file://fix-undefined-reference-to-pthread.patch \
-           file://0001-ecc-Store-EdDSA-session-key-in-secure-memory.patch \
-           file://CVE-2017-7526.patch \
-"
-
-BINCONFIG = "${bindir}/libgcrypt-config"
-
-inherit autotools texinfo binconfig-disabled pkgconfig
-
-EXTRA_OECONF = "--disable-asm"
-
-PACKAGECONFIG ??= "capabilities"
-PACKAGECONFIG[capabilities] = "--with-capabilities,--without-capabilities,libcap"
-
-do_configure_prepend () {
-	# Else this could be used in preference to the one in aclocal-copy
-	rm -f ${S}/m4/gpg-error.m4
-}
-
-# libgcrypt.pc is added locally and thus installed here
-do_install_append() {
-	install -d ${D}/${libdir}/pkgconfig
-	install -m 0644 ${B}/src/libgcrypt.pc ${D}/${libdir}/pkgconfig/
-}
-
-PACKAGES =+ "dumpsexp-dev"
-
-FILES_${PN}-dev += "${bindir}/hmac256"
-FILES_dumpsexp-dev += "${bindir}/dumpsexp"
-
-ARM_INSTRUCTION_SET = "arm"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/libgcrypt_1.7.6.bb b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/libgcrypt_1.7.6.bb
deleted file mode 100644
index da0a1fe..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/libgcrypt_1.7.6.bb
+++ /dev/null
@@ -1,4 +0,0 @@
-require libgcrypt.inc
-
-SRC_URI[md5sum] = "eac6d11999650e8a1493674c1bdbc7f8"
-SRC_URI[sha256sum] = "fc0aec7714d75d812b665bd510d66031b1b2ce8fa855cc2c02238c954ea36982"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgcrypt/libgcrypt_1.8.0.bb b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/libgcrypt_1.8.0.bb
new file mode 100644
index 0000000..02982f0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libgcrypt/libgcrypt_1.8.0.bb
@@ -0,0 +1,55 @@
+SUMMARY = "General purpose cryptographic library based on the code from GnuPG"
+HOMEPAGE = "http://directory.fsf.org/project/libgcrypt/"
+BUGTRACKER = "https://bugs.g10code.com/gnupg/index"
+SECTION = "libs"
+
+# helper program gcryptrnd and getrandom are under GPL, rest LGPL
+LICENSE = "GPLv2+ & LGPLv2.1+ & GPLv3+"
+LICENSE_${PN} = "LGPLv2.1+"
+LICENSE_${PN}-dev = "GPLv2+ & LGPLv2.1+"
+LICENSE_dumpsexp-dev = "GPLv3+"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=94d55d512a9ba36caa9b7df079bae19f \
+                    file://COPYING.LIB;md5=bbb461211a33b134d42ed5ee802b37ff"
+
+DEPENDS = "libgpg-error"
+
+UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
+SRC_URI = "${GNUPG_MIRROR}/libgcrypt/libgcrypt-${PV}.tar.gz \
+           file://0001-Add-and-use-pkg-config-for-libgcrypt-instead-of-conf.patch \
+           file://0003-tests-bench-slope.c-workaround-ICE-failure-on-mips-w.patch \
+           file://0002-libgcrypt-fix-building-error-with-O2-in-sysroot-path.patch \
+           file://0004-tests-Makefile.am-fix-undefined-reference-to-pthread.patch \
+           file://0005-ecc-Add-input-validation-for-X25519.patch \
+           file://0006-Fix-building-AArch32-CE-implementations-when-target-.patch \
+"
+SRC_URI[md5sum] = "110ce4352f9ea6f560bdc6c5644ae93c"
+SRC_URI[sha256sum] = "f6e470b7f2d3a703e8747f05a8c19d9e10e26ebf2d5f3d71ff75a40f504e12ee"
+
+BINCONFIG = "${bindir}/libgcrypt-config"
+
+inherit autotools texinfo binconfig-disabled pkgconfig
+
+EXTRA_OECONF = "--disable-asm"
+EXTRA_OEMAKE_class-target = "LIBTOOLFLAGS='--tag=CC'"
+
+PACKAGECONFIG ??= "capabilities"
+PACKAGECONFIG[capabilities] = "--with-capabilities,--without-capabilities,libcap"
+
+do_configure_prepend () {
+	# Else this could be used in preference to the one in aclocal-copy
+	rm -f ${S}/m4/gpg-error.m4
+}
+
+# libgcrypt.pc is added locally and thus installed here
+do_install_append() {
+	install -d ${D}/${libdir}/pkgconfig
+	install -m 0644 ${B}/src/libgcrypt.pc ${D}/${libdir}/pkgconfig/
+}
+
+PACKAGES =+ "dumpsexp-dev"
+
+FILES_${PN}-dev += "${bindir}/hmac256"
+FILES_dumpsexp-dev += "${bindir}/dumpsexp"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgpg-error/libgpg-error_1.26.bb b/import-layers/yocto-poky/meta/recipes-support/libgpg-error/libgpg-error_1.26.bb
deleted file mode 100644
index 090db1b..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libgpg-error/libgpg-error_1.26.bb
+++ /dev/null
@@ -1,62 +0,0 @@
-SUMMARY = "Small library that defines common error values for all GnuPG components"
-HOMEPAGE = "http://www.gnupg.org/related_software/libgpg-error/"
-BUGTRACKER = "https://bugs.g10code.com/gnupg/index"
-
-LICENSE = "GPLv2+ & LGPLv2.1+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552 \
-                    file://COPYING.LIB;md5=2d5025d4aa3495befef8f17206a5b0a1 \
-                    file://src/gpg-error.h.in;endline=23;md5=64af9846baaf852793fd3a5af393acbf \
-                    file://src/init.c;endline=20;md5=872b2389fe9bae7ffb80d2b91225afbc"
-
-
-SECTION = "libs"
-
-UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
-SRC_URI = "${GNUPG_MIRROR}/libgpg-error/libgpg-error-${PV}.tar.bz2 \
-           file://pkgconfig.patch \
-	  "
-
-SRC_URI[md5sum] = "97456709dbbcbb69354317ffe3e347cd"
-SRC_URI[sha256sum] = "4c4bcbc90116932e3acd37b37812d8653b1b189c1904985898e860af818aee69"
-PR = "r1"
-
-BINCONFIG = "${bindir}/gpg-error-config"
-
-inherit autotools binconfig-disabled pkgconfig gettext
-CPPFLAGS += "-P"
-do_compile_prepend() {
-	TARGET_FILE=linux-gnu
-	if [ ${TARGET_OS} = "mingw32" ]; then
-		# There are no arch specific syscfg files for mingw32
-		TARGET_FILE=
-	elif [ ${TARGET_OS} != "linux" ]; then
-		TARGET_FILE=${TARGET_OS}
-	fi
-
-	case ${TARGET_ARCH} in
-	  aarch64_be) TUPLE=aarch64-unknown-linux-gnu ;;
-	  arm)	      TUPLE=arm-unknown-linux-gnueabi ;;
-	  armeb)      TUPLE=arm-unknown-linux-gnueabi ;;
-	  i586|i686)  TUPLE=i686-pc-linux-gnu ;;
-	  mips64*)    TUPLE=mips64el-unknown-linux-gnuabi64 ;;
-	  mips*el)    TUPLE=mipsel-unknown-linux-gnu ;;
-	  mips*)      TUPLE=mips-unknown-linux-gnu ;;
-	  x86_64)     TUPLE=x86_64-pc-linux-gnu ;;
-	  *)          TUPLE=${TARGET_ARCH}-unknown-linux-gnu ;; 
-	esac
-
-	if [ -n "$TARGET_FILE" ]; then
-		cp ${S}/src/syscfg/lock-obj-pub.$TUPLE.h \
-			${S}/src/syscfg/lock-obj-pub.$TARGET_FILE.h
-	fi
-}
-
-do_install_append() {
-	# we don't have common lisp in OE
-	rm -rf "${D}${datadir}/common-lisp/"
-}
-
-FILES_${PN}-dev += "${bindir}/gpg-error"
-FILES_${PN}-doc += "${datadir}/libgpg-error/errorref.txt"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libgpg-error/libgpg-error_1.27.bb b/import-layers/yocto-poky/meta/recipes-support/libgpg-error/libgpg-error_1.27.bb
new file mode 100644
index 0000000..b2e2d50
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libgpg-error/libgpg-error_1.27.bb
@@ -0,0 +1,61 @@
+SUMMARY = "Small library that defines common error values for all GnuPG components"
+HOMEPAGE = "http://www.gnupg.org/related_software/libgpg-error/"
+BUGTRACKER = "https://bugs.g10code.com/gnupg/index"
+
+LICENSE = "GPLv2+ & LGPLv2.1+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=59530bdf33659b29e73d4adb9f9f6552 \
+                    file://COPYING.LIB;md5=2d5025d4aa3495befef8f17206a5b0a1 \
+                    file://src/gpg-error.h.in;endline=23;md5=beae1e44d8d5c265d194760276033a7c \
+                    file://src/init.c;endline=20;md5=872b2389fe9bae7ffb80d2b91225afbc"
+
+
+SECTION = "libs"
+
+UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
+SRC_URI = "${GNUPG_MIRROR}/libgpg-error/libgpg-error-${PV}.tar.bz2 \
+           file://pkgconfig.patch \
+	  "
+
+SRC_URI[md5sum] = "5217ef3e76a7275a2a3b569a12ddc989"
+SRC_URI[sha256sum] = "4f93aac6fecb7da2b92871bb9ee33032be6a87b174f54abf8ddf0911a22d29d2"
+
+BINCONFIG = "${bindir}/gpg-error-config"
+
+inherit autotools binconfig-disabled pkgconfig gettext
+CPPFLAGS += "-P"
+do_compile_prepend() {
+	TARGET_FILE=linux-gnu
+	if [ ${TARGET_OS} = "mingw32" ]; then
+		# There are no arch specific syscfg files for mingw32
+		TARGET_FILE=
+	elif [ ${TARGET_OS} != "linux" ]; then
+		TARGET_FILE=${TARGET_OS}
+	fi
+
+	case ${TARGET_ARCH} in
+	  aarch64_be) TUPLE=aarch64-unknown-linux-gnu ;;
+	  arm)	      TUPLE=arm-unknown-linux-gnueabi ;;
+	  armeb)      TUPLE=arm-unknown-linux-gnueabi ;;
+	  i586|i686)  TUPLE=i686-pc-linux-gnu ;;
+	  mips64*)    TUPLE=mips64el-unknown-linux-gnuabi64 ;;
+	  mips*el)    TUPLE=mipsel-unknown-linux-gnu ;;
+	  mips*)      TUPLE=mips-unknown-linux-gnu ;;
+	  x86_64)     TUPLE=x86_64-pc-linux-gnu ;;
+	  *)          TUPLE=${TARGET_ARCH}-unknown-linux-gnu ;; 
+	esac
+
+	if [ -n "$TARGET_FILE" ]; then
+		cp ${S}/src/syscfg/lock-obj-pub.$TUPLE.h \
+			${S}/src/syscfg/lock-obj-pub.$TARGET_FILE.h
+	fi
+}
+
+do_install_append() {
+	# we don't have common lisp in OE
+	rm -rf "${D}${datadir}/common-lisp/"
+}
+
+FILES_${PN}-dev += "${bindir}/gpg-error"
+FILES_${PN}-doc += "${datadir}/libgpg-error/errorref.txt"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libiconv/libiconv-1.14/add-relocatable-module.patch b/import-layers/yocto-poky/meta/recipes-support/libiconv/libiconv-1.14/add-relocatable-module.patch
deleted file mode 100644
index 6af377b..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libiconv/libiconv-1.14/add-relocatable-module.patch
+++ /dev/null
@@ -1,5008 +0,0 @@
-This patch is needed to solve issues like
-| iconv.o: In function `main':
-| /home/james/stuff/beagle/tmp-eglibc/work/armv7a-vfp-neon-oe-linux-gnueabi/libiconv-1.14-r0/libiconv-1.14/src/./iconv.c:861: undefined reference to `relocate'
-| ../srclib/libicrt.a(progreloc.o): In function `prepare_relocate':
-| /home/james/stuff/beagle/tmp-eglibc/work/armv7a-vfp-neon-oe-linux-gnueabi/libiconv-1.14-r0/libiconv-1.14/srclib/progreloc.c:297: undefined reference to `compute_curr_prefix'
-| /home/james/stuff/beagle/tmp-eglibc/work/armv7a-vfp-neon-oe-linux-gnueabi/libiconv-1.14-r0/libiconv-1.14/srclib/progreloc.c:302: undefined reference to `set_relocation_prefix'
-| collect2: ld returned 1 exit status
-| make[1]: *** [install] Error 1
-
-Upstream-Status: Inappropriate [OE config specific]
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-diff -Naurp libiconv-1.14.org//build-aux/arg-nonnull.h libiconv-1.14/build-aux/arg-nonnull.h
---- libiconv-1.14.org//build-aux/arg-nonnull.h	1969-12-31 16:00:00.000000000 -0800
-+++ libiconv-1.14/build-aux/arg-nonnull.h	2012-01-08 02:07:39.930484438 -0800
-@@ -0,0 +1,26 @@
-+/* A C macro for declaring that specific arguments must not be NULL.
-+   Copyright (C) 2009-2011 Free Software Foundation, Inc.
-+
-+   This program is free software: you can redistribute it and/or modify it
-+   under the terms of the GNU General Public License as published
-+   by the Free Software Foundation; either version 3 of the License, or
-+   (at your option) any later version.
-+
-+   This program is distributed in the hope that it will be useful,
-+   but WITHOUT ANY WARRANTY; without even the implied warranty of
-+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+   General Public License for more details.
-+
-+   You should have received a copy of the GNU General Public License
-+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
-+
-+/* _GL_ARG_NONNULL((n,...,m)) tells the compiler and static analyzer tools
-+   that the values passed as arguments n, ..., m must be non-NULL pointers.
-+   n = 1 stands for the first argument, n = 2 for the second argument etc.  */
-+#ifndef _GL_ARG_NONNULL
-+# if (__GNUC__ == 3 && __GNUC_MINOR__ >= 3) || __GNUC__ > 3
-+#  define _GL_ARG_NONNULL(params) __attribute__ ((__nonnull__ params))
-+# else
-+#  define _GL_ARG_NONNULL(params)
-+# endif
-+#endif
-diff -Naurp libiconv-1.14.org//build-aux/c++defs.h libiconv-1.14/build-aux/c++defs.h
---- libiconv-1.14.org//build-aux/c++defs.h	1969-12-31 16:00:00.000000000 -0800
-+++ libiconv-1.14/build-aux/c++defs.h	2012-01-08 02:07:39.942484438 -0800
-@@ -0,0 +1,271 @@
-+/* C++ compatible function declaration macros.
-+   Copyright (C) 2010-2011 Free Software Foundation, Inc.
-+
-+   This program is free software: you can redistribute it and/or modify it
-+   under the terms of the GNU General Public License as published
-+   by the Free Software Foundation; either version 3 of the License, or
-+   (at your option) any later version.
-+
-+   This program is distributed in the hope that it will be useful,
-+   but WITHOUT ANY WARRANTY; without even the implied warranty of
-+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+   General Public License for more details.
-+
-+   You should have received a copy of the GNU General Public License
-+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
-+
-+#ifndef _GL_CXXDEFS_H
-+#define _GL_CXXDEFS_H
-+
-+/* The three most frequent use cases of these macros are:
-+
-+   * For providing a substitute for a function that is missing on some
-+     platforms, but is declared and works fine on the platforms on which
-+     it exists:
-+
-+       #if @GNULIB_FOO@
-+       # if !@HAVE_FOO@
-+       _GL_FUNCDECL_SYS (foo, ...);
-+       # endif
-+       _GL_CXXALIAS_SYS (foo, ...);
-+       _GL_CXXALIASWARN (foo);
-+       #elif defined GNULIB_POSIXCHECK
-+       ...
-+       #endif
-+
-+   * For providing a replacement for a function that exists on all platforms,
-+     but is broken/insufficient and needs to be replaced on some platforms:
-+
-+       #if @GNULIB_FOO@
-+       # if @REPLACE_FOO@
-+       #  if !(defined __cplusplus && defined GNULIB_NAMESPACE)
-+       #   undef foo
-+       #   define foo rpl_foo
-+       #  endif
-+       _GL_FUNCDECL_RPL (foo, ...);
-+       _GL_CXXALIAS_RPL (foo, ...);
-+       # else
-+       _GL_CXXALIAS_SYS (foo, ...);
-+       # endif
-+       _GL_CXXALIASWARN (foo);
-+       #elif defined GNULIB_POSIXCHECK
-+       ...
-+       #endif
-+
-+   * For providing a replacement for a function that exists on some platforms
-+     but is broken/insufficient and needs to be replaced on some of them and
-+     is additionally either missing or undeclared on some other platforms:
-+
-+       #if @GNULIB_FOO@
-+       # if @REPLACE_FOO@
-+       #  if !(defined __cplusplus && defined GNULIB_NAMESPACE)
-+       #   undef foo
-+       #   define foo rpl_foo
-+       #  endif
-+       _GL_FUNCDECL_RPL (foo, ...);
-+       _GL_CXXALIAS_RPL (foo, ...);
-+       # else
-+       #  if !@HAVE_FOO@   or   if !@HAVE_DECL_FOO@
-+       _GL_FUNCDECL_SYS (foo, ...);
-+       #  endif
-+       _GL_CXXALIAS_SYS (foo, ...);
-+       # endif
-+       _GL_CXXALIASWARN (foo);
-+       #elif defined GNULIB_POSIXCHECK
-+       ...
-+       #endif
-+*/
-+
-+/* _GL_EXTERN_C declaration;
-+   performs the declaration with C linkage.  */
-+#if defined __cplusplus
-+# define _GL_EXTERN_C extern "C"
-+#else
-+# define _GL_EXTERN_C extern
-+#endif
-+
-+/* _GL_FUNCDECL_RPL (func, rettype, parameters_and_attributes);
-+   declares a replacement function, named rpl_func, with the given prototype,
-+   consisting of return type, parameters, and attributes.
-+   Example:
-+     _GL_FUNCDECL_RPL (open, int, (const char *filename, int flags, ...)
-+                                  _GL_ARG_NONNULL ((1)));
-+ */
-+#define _GL_FUNCDECL_RPL(func,rettype,parameters_and_attributes) \
-+  _GL_FUNCDECL_RPL_1 (rpl_##func, rettype, parameters_and_attributes)
-+#define _GL_FUNCDECL_RPL_1(rpl_func,rettype,parameters_and_attributes) \
-+  _GL_EXTERN_C rettype rpl_func parameters_and_attributes
-+
-+/* _GL_FUNCDECL_SYS (func, rettype, parameters_and_attributes);
-+   declares the system function, named func, with the given prototype,
-+   consisting of return type, parameters, and attributes.
-+   Example:
-+     _GL_FUNCDECL_SYS (open, int, (const char *filename, int flags, ...)
-+                                  _GL_ARG_NONNULL ((1)));
-+ */
-+#define _GL_FUNCDECL_SYS(func,rettype,parameters_and_attributes) \
-+  _GL_EXTERN_C rettype func parameters_and_attributes
-+
-+/* _GL_CXXALIAS_RPL (func, rettype, parameters);
-+   declares a C++ alias called GNULIB_NAMESPACE::func
-+   that redirects to rpl_func, if GNULIB_NAMESPACE is defined.
-+   Example:
-+     _GL_CXXALIAS_RPL (open, int, (const char *filename, int flags, ...));
-+ */
-+#define _GL_CXXALIAS_RPL(func,rettype,parameters) \
-+  _GL_CXXALIAS_RPL_1 (func, rpl_##func, rettype, parameters)
-+#if defined __cplusplus && defined GNULIB_NAMESPACE
-+# define _GL_CXXALIAS_RPL_1(func,rpl_func,rettype,parameters) \
-+    namespace GNULIB_NAMESPACE                                \
-+    {                                                         \
-+      rettype (*const func) parameters = ::rpl_func;          \
-+    }                                                         \
-+    _GL_EXTERN_C int _gl_cxxalias_dummy
-+#else
-+# define _GL_CXXALIAS_RPL_1(func,rpl_func,rettype,parameters) \
-+    _GL_EXTERN_C int _gl_cxxalias_dummy
-+#endif
-+
-+/* _GL_CXXALIAS_RPL_CAST_1 (func, rpl_func, rettype, parameters);
-+   is like  _GL_CXXALIAS_RPL_1 (func, rpl_func, rettype, parameters);
-+   except that the C function rpl_func may have a slightly different
-+   declaration.  A cast is used to silence the "invalid conversion" error
-+   that would otherwise occur.  */
-+#if defined __cplusplus && defined GNULIB_NAMESPACE
-+# define _GL_CXXALIAS_RPL_CAST_1(func,rpl_func,rettype,parameters) \
-+    namespace GNULIB_NAMESPACE                                     \
-+    {                                                              \
-+      rettype (*const func) parameters =                           \
-+        reinterpret_cast<rettype(*)parameters>(::rpl_func);        \
-+    }                                                              \
-+    _GL_EXTERN_C int _gl_cxxalias_dummy
-+#else
-+# define _GL_CXXALIAS_RPL_CAST_1(func,rpl_func,rettype,parameters) \
-+    _GL_EXTERN_C int _gl_cxxalias_dummy
-+#endif
-+
-+/* _GL_CXXALIAS_SYS (func, rettype, parameters);
-+   declares a C++ alias called GNULIB_NAMESPACE::func
-+   that redirects to the system provided function func, if GNULIB_NAMESPACE
-+   is defined.
-+   Example:
-+     _GL_CXXALIAS_SYS (open, int, (const char *filename, int flags, ...));
-+ */
-+#if defined __cplusplus && defined GNULIB_NAMESPACE
-+  /* If we were to write
-+       rettype (*const func) parameters = ::func;
-+     like above in _GL_CXXALIAS_RPL_1, the compiler could optimize calls
-+     better (remove an indirection through a 'static' pointer variable),
-+     but then the _GL_CXXALIASWARN macro below would cause a warning not only
-+     for uses of ::func but also for uses of GNULIB_NAMESPACE::func.  */
-+# define _GL_CXXALIAS_SYS(func,rettype,parameters) \
-+    namespace GNULIB_NAMESPACE                     \
-+    {                                              \
-+      static rettype (*func) parameters = ::func;  \
-+    }                                              \
-+    _GL_EXTERN_C int _gl_cxxalias_dummy
-+#else
-+# define _GL_CXXALIAS_SYS(func,rettype,parameters) \
-+    _GL_EXTERN_C int _gl_cxxalias_dummy
-+#endif
-+
-+/* _GL_CXXALIAS_SYS_CAST (func, rettype, parameters);
-+   is like  _GL_CXXALIAS_SYS (func, rettype, parameters);
-+   except that the C function func may have a slightly different declaration.
-+   A cast is used to silence the "invalid conversion" error that would
-+   otherwise occur.  */
-+#if defined __cplusplus && defined GNULIB_NAMESPACE
-+# define _GL_CXXALIAS_SYS_CAST(func,rettype,parameters) \
-+    namespace GNULIB_NAMESPACE                          \
-+    {                                                   \
-+      static rettype (*func) parameters =               \
-+        reinterpret_cast<rettype(*)parameters>(::func); \
-+    }                                                   \
-+    _GL_EXTERN_C int _gl_cxxalias_dummy
-+#else
-+# define _GL_CXXALIAS_SYS_CAST(func,rettype,parameters) \
-+    _GL_EXTERN_C int _gl_cxxalias_dummy
-+#endif
-+
-+/* _GL_CXXALIAS_SYS_CAST2 (func, rettype, parameters, rettype2, parameters2);
-+   is like  _GL_CXXALIAS_SYS (func, rettype, parameters);
-+   except that the C function is picked among a set of overloaded functions,
-+   namely the one with rettype2 and parameters2.  Two consecutive casts
-+   are used to silence the "cannot find a match" and "invalid conversion"
-+   errors that would otherwise occur.  */
-+#if defined __cplusplus && defined GNULIB_NAMESPACE
-+  /* The outer cast must be a reinterpret_cast.
-+     The inner cast: When the function is defined as a set of overloaded
-+     functions, it works as a static_cast<>, choosing the designated variant.
-+     When the function is defined as a single variant, it works as a
-+     reinterpret_cast<>. The parenthesized cast syntax works both ways.  */
-+# define _GL_CXXALIAS_SYS_CAST2(func,rettype,parameters,rettype2,parameters2) \
-+    namespace GNULIB_NAMESPACE                                                \
-+    {                                                                         \
-+      static rettype (*func) parameters =                                     \
-+        reinterpret_cast<rettype(*)parameters>(                               \
-+          (rettype2(*)parameters2)(::func));                                  \
-+    }                                                                         \
-+    _GL_EXTERN_C int _gl_cxxalias_dummy
-+#else
-+# define _GL_CXXALIAS_SYS_CAST2(func,rettype,parameters,rettype2,parameters2) \
-+    _GL_EXTERN_C int _gl_cxxalias_dummy
-+#endif
-+
-+/* _GL_CXXALIASWARN (func);
-+   causes a warning to be emitted when ::func is used but not when
-+   GNULIB_NAMESPACE::func is used.  func must be defined without overloaded
-+   variants.  */
-+#if defined __cplusplus && defined GNULIB_NAMESPACE
-+# define _GL_CXXALIASWARN(func) \
-+   _GL_CXXALIASWARN_1 (func, GNULIB_NAMESPACE)
-+# define _GL_CXXALIASWARN_1(func,namespace) \
-+   _GL_CXXALIASWARN_2 (func, namespace)
-+/* To work around GCC bug <http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43881>,
-+   we enable the warning only when not optimizing.  */
-+# if !__OPTIMIZE__
-+#  define _GL_CXXALIASWARN_2(func,namespace) \
-+    _GL_WARN_ON_USE (func, \
-+                     "The symbol ::" #func " refers to the system function. " \
-+                     "Use " #namespace "::" #func " instead.")
-+# elif __GNUC__ >= 3 && GNULIB_STRICT_CHECKING
-+#  define _GL_CXXALIASWARN_2(func,namespace) \
-+     extern __typeof__ (func) func
-+# else
-+#  define _GL_CXXALIASWARN_2(func,namespace) \
-+     _GL_EXTERN_C int _gl_cxxalias_dummy
-+# endif
-+#else
-+# define _GL_CXXALIASWARN(func) \
-+    _GL_EXTERN_C int _gl_cxxalias_dummy
-+#endif
-+
-+/* _GL_CXXALIASWARN1 (func, rettype, parameters_and_attributes);
-+   causes a warning to be emitted when the given overloaded variant of ::func
-+   is used but not when GNULIB_NAMESPACE::func is used.  */
-+#if defined __cplusplus && defined GNULIB_NAMESPACE
-+# define _GL_CXXALIASWARN1(func,rettype,parameters_and_attributes) \
-+   _GL_CXXALIASWARN1_1 (func, rettype, parameters_and_attributes, \
-+                        GNULIB_NAMESPACE)
-+# define _GL_CXXALIASWARN1_1(func,rettype,parameters_and_attributes,namespace) \
-+   _GL_CXXALIASWARN1_2 (func, rettype, parameters_and_attributes, namespace)
-+/* To work around GCC bug <http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43881>,
-+   we enable the warning only when not optimizing.  */
-+# if !__OPTIMIZE__
-+#  define _GL_CXXALIASWARN1_2(func,rettype,parameters_and_attributes,namespace) \
-+    _GL_WARN_ON_USE_CXX (func, rettype, parameters_and_attributes, \
-+                         "The symbol ::" #func " refers to the system function. " \
-+                         "Use " #namespace "::" #func " instead.")
-+# elif __GNUC__ >= 3 && GNULIB_STRICT_CHECKING
-+#  define _GL_CXXALIASWARN1_2(func,rettype,parameters_and_attributes,namespace) \
-+     extern __typeof__ (func) func
-+# else
-+#  define _GL_CXXALIASWARN1_2(func,rettype,parameters_and_attributes,namespace) \
-+     _GL_EXTERN_C int _gl_cxxalias_dummy
-+# endif
-+#else
-+# define _GL_CXXALIASWARN1(func,rettype,parameters_and_attributes) \
-+    _GL_EXTERN_C int _gl_cxxalias_dummy
-+#endif
-+
-+#endif /* _GL_CXXDEFS_H */
-diff -Naurp libiconv-1.14.org//build-aux/snippet/arg-nonnull.h libiconv-1.14/build-aux/snippet/arg-nonnull.h
---- libiconv-1.14.org//build-aux/snippet/arg-nonnull.h	2011-08-07 06:22:07.000000000 -0700
-+++ libiconv-1.14/build-aux/snippet/arg-nonnull.h	1969-12-31 16:00:00.000000000 -0800
-@@ -1,26 +0,0 @@
--/* A C macro for declaring that specific arguments must not be NULL.
--   Copyright (C) 2009-2011 Free Software Foundation, Inc.
--
--   This program is free software: you can redistribute it and/or modify it
--   under the terms of the GNU General Public License as published
--   by the Free Software Foundation; either version 3 of the License, or
--   (at your option) any later version.
--
--   This program is distributed in the hope that it will be useful,
--   but WITHOUT ANY WARRANTY; without even the implied warranty of
--   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
--   General Public License for more details.
--
--   You should have received a copy of the GNU General Public License
--   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
--
--/* _GL_ARG_NONNULL((n,...,m)) tells the compiler and static analyzer tools
--   that the values passed as arguments n, ..., m must be non-NULL pointers.
--   n = 1 stands for the first argument, n = 2 for the second argument etc.  */
--#ifndef _GL_ARG_NONNULL
--# if (__GNUC__ == 3 && __GNUC_MINOR__ >= 3) || __GNUC__ > 3
--#  define _GL_ARG_NONNULL(params) __attribute__ ((__nonnull__ params))
--# else
--#  define _GL_ARG_NONNULL(params)
--# endif
--#endif
-diff -Naurp libiconv-1.14.org//build-aux/snippet/c++defs.h libiconv-1.14/build-aux/snippet/c++defs.h
---- libiconv-1.14.org//build-aux/snippet/c++defs.h	2011-08-07 06:22:07.000000000 -0700
-+++ libiconv-1.14/build-aux/snippet/c++defs.h	1969-12-31 16:00:00.000000000 -0800
-@@ -1,271 +0,0 @@
--/* C++ compatible function declaration macros.
--   Copyright (C) 2010-2011 Free Software Foundation, Inc.
--
--   This program is free software: you can redistribute it and/or modify it
--   under the terms of the GNU General Public License as published
--   by the Free Software Foundation; either version 3 of the License, or
--   (at your option) any later version.
--
--   This program is distributed in the hope that it will be useful,
--   but WITHOUT ANY WARRANTY; without even the implied warranty of
--   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
--   General Public License for more details.
--
--   You should have received a copy of the GNU General Public License
--   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
--
--#ifndef _GL_CXXDEFS_H
--#define _GL_CXXDEFS_H
--
--/* The three most frequent use cases of these macros are:
--
--   * For providing a substitute for a function that is missing on some
--     platforms, but is declared and works fine on the platforms on which
--     it exists:
--
--       #if @GNULIB_FOO@
--       # if !@HAVE_FOO@
--       _GL_FUNCDECL_SYS (foo, ...);
--       # endif
--       _GL_CXXALIAS_SYS (foo, ...);
--       _GL_CXXALIASWARN (foo);
--       #elif defined GNULIB_POSIXCHECK
--       ...
--       #endif
--
--   * For providing a replacement for a function that exists on all platforms,
--     but is broken/insufficient and needs to be replaced on some platforms:
--
--       #if @GNULIB_FOO@
--       # if @REPLACE_FOO@
--       #  if !(defined __cplusplus && defined GNULIB_NAMESPACE)
--       #   undef foo
--       #   define foo rpl_foo
--       #  endif
--       _GL_FUNCDECL_RPL (foo, ...);
--       _GL_CXXALIAS_RPL (foo, ...);
--       # else
--       _GL_CXXALIAS_SYS (foo, ...);
--       # endif
--       _GL_CXXALIASWARN (foo);
--       #elif defined GNULIB_POSIXCHECK
--       ...
--       #endif
--
--   * For providing a replacement for a function that exists on some platforms
--     but is broken/insufficient and needs to be replaced on some of them and
--     is additionally either missing or undeclared on some other platforms:
--
--       #if @GNULIB_FOO@
--       # if @REPLACE_FOO@
--       #  if !(defined __cplusplus && defined GNULIB_NAMESPACE)
--       #   undef foo
--       #   define foo rpl_foo
--       #  endif
--       _GL_FUNCDECL_RPL (foo, ...);
--       _GL_CXXALIAS_RPL (foo, ...);
--       # else
--       #  if !@HAVE_FOO@   or   if !@HAVE_DECL_FOO@
--       _GL_FUNCDECL_SYS (foo, ...);
--       #  endif
--       _GL_CXXALIAS_SYS (foo, ...);
--       # endif
--       _GL_CXXALIASWARN (foo);
--       #elif defined GNULIB_POSIXCHECK
--       ...
--       #endif
--*/
--
--/* _GL_EXTERN_C declaration;
--   performs the declaration with C linkage.  */
--#if defined __cplusplus
--# define _GL_EXTERN_C extern "C"
--#else
--# define _GL_EXTERN_C extern
--#endif
--
--/* _GL_FUNCDECL_RPL (func, rettype, parameters_and_attributes);
--   declares a replacement function, named rpl_func, with the given prototype,
--   consisting of return type, parameters, and attributes.
--   Example:
--     _GL_FUNCDECL_RPL (open, int, (const char *filename, int flags, ...)
--                                  _GL_ARG_NONNULL ((1)));
-- */
--#define _GL_FUNCDECL_RPL(func,rettype,parameters_and_attributes) \
--  _GL_FUNCDECL_RPL_1 (rpl_##func, rettype, parameters_and_attributes)
--#define _GL_FUNCDECL_RPL_1(rpl_func,rettype,parameters_and_attributes) \
--  _GL_EXTERN_C rettype rpl_func parameters_and_attributes
--
--/* _GL_FUNCDECL_SYS (func, rettype, parameters_and_attributes);
--   declares the system function, named func, with the given prototype,
--   consisting of return type, parameters, and attributes.
--   Example:
--     _GL_FUNCDECL_SYS (open, int, (const char *filename, int flags, ...)
--                                  _GL_ARG_NONNULL ((1)));
-- */
--#define _GL_FUNCDECL_SYS(func,rettype,parameters_and_attributes) \
--  _GL_EXTERN_C rettype func parameters_and_attributes
--
--/* _GL_CXXALIAS_RPL (func, rettype, parameters);
--   declares a C++ alias called GNULIB_NAMESPACE::func
--   that redirects to rpl_func, if GNULIB_NAMESPACE is defined.
--   Example:
--     _GL_CXXALIAS_RPL (open, int, (const char *filename, int flags, ...));
-- */
--#define _GL_CXXALIAS_RPL(func,rettype,parameters) \
--  _GL_CXXALIAS_RPL_1 (func, rpl_##func, rettype, parameters)
--#if defined __cplusplus && defined GNULIB_NAMESPACE
--# define _GL_CXXALIAS_RPL_1(func,rpl_func,rettype,parameters) \
--    namespace GNULIB_NAMESPACE                                \
--    {                                                         \
--      rettype (*const func) parameters = ::rpl_func;          \
--    }                                                         \
--    _GL_EXTERN_C int _gl_cxxalias_dummy
--#else
--# define _GL_CXXALIAS_RPL_1(func,rpl_func,rettype,parameters) \
--    _GL_EXTERN_C int _gl_cxxalias_dummy
--#endif
--
--/* _GL_CXXALIAS_RPL_CAST_1 (func, rpl_func, rettype, parameters);
--   is like  _GL_CXXALIAS_RPL_1 (func, rpl_func, rettype, parameters);
--   except that the C function rpl_func may have a slightly different
--   declaration.  A cast is used to silence the "invalid conversion" error
--   that would otherwise occur.  */
--#if defined __cplusplus && defined GNULIB_NAMESPACE
--# define _GL_CXXALIAS_RPL_CAST_1(func,rpl_func,rettype,parameters) \
--    namespace GNULIB_NAMESPACE                                     \
--    {                                                              \
--      rettype (*const func) parameters =                           \
--        reinterpret_cast<rettype(*)parameters>(::rpl_func);        \
--    }                                                              \
--    _GL_EXTERN_C int _gl_cxxalias_dummy
--#else
--# define _GL_CXXALIAS_RPL_CAST_1(func,rpl_func,rettype,parameters) \
--    _GL_EXTERN_C int _gl_cxxalias_dummy
--#endif
--
--/* _GL_CXXALIAS_SYS (func, rettype, parameters);
--   declares a C++ alias called GNULIB_NAMESPACE::func
--   that redirects to the system provided function func, if GNULIB_NAMESPACE
--   is defined.
--   Example:
--     _GL_CXXALIAS_SYS (open, int, (const char *filename, int flags, ...));
-- */
--#if defined __cplusplus && defined GNULIB_NAMESPACE
--  /* If we were to write
--       rettype (*const func) parameters = ::func;
--     like above in _GL_CXXALIAS_RPL_1, the compiler could optimize calls
--     better (remove an indirection through a 'static' pointer variable),
--     but then the _GL_CXXALIASWARN macro below would cause a warning not only
--     for uses of ::func but also for uses of GNULIB_NAMESPACE::func.  */
--# define _GL_CXXALIAS_SYS(func,rettype,parameters) \
--    namespace GNULIB_NAMESPACE                     \
--    {                                              \
--      static rettype (*func) parameters = ::func;  \
--    }                                              \
--    _GL_EXTERN_C int _gl_cxxalias_dummy
--#else
--# define _GL_CXXALIAS_SYS(func,rettype,parameters) \
--    _GL_EXTERN_C int _gl_cxxalias_dummy
--#endif
--
--/* _GL_CXXALIAS_SYS_CAST (func, rettype, parameters);
--   is like  _GL_CXXALIAS_SYS (func, rettype, parameters);
--   except that the C function func may have a slightly different declaration.
--   A cast is used to silence the "invalid conversion" error that would
--   otherwise occur.  */
--#if defined __cplusplus && defined GNULIB_NAMESPACE
--# define _GL_CXXALIAS_SYS_CAST(func,rettype,parameters) \
--    namespace GNULIB_NAMESPACE                          \
--    {                                                   \
--      static rettype (*func) parameters =               \
--        reinterpret_cast<rettype(*)parameters>(::func); \
--    }                                                   \
--    _GL_EXTERN_C int _gl_cxxalias_dummy
--#else
--# define _GL_CXXALIAS_SYS_CAST(func,rettype,parameters) \
--    _GL_EXTERN_C int _gl_cxxalias_dummy
--#endif
--
--/* _GL_CXXALIAS_SYS_CAST2 (func, rettype, parameters, rettype2, parameters2);
--   is like  _GL_CXXALIAS_SYS (func, rettype, parameters);
--   except that the C function is picked among a set of overloaded functions,
--   namely the one with rettype2 and parameters2.  Two consecutive casts
--   are used to silence the "cannot find a match" and "invalid conversion"
--   errors that would otherwise occur.  */
--#if defined __cplusplus && defined GNULIB_NAMESPACE
--  /* The outer cast must be a reinterpret_cast.
--     The inner cast: When the function is defined as a set of overloaded
--     functions, it works as a static_cast<>, choosing the designated variant.
--     When the function is defined as a single variant, it works as a
--     reinterpret_cast<>. The parenthesized cast syntax works both ways.  */
--# define _GL_CXXALIAS_SYS_CAST2(func,rettype,parameters,rettype2,parameters2) \
--    namespace GNULIB_NAMESPACE                                                \
--    {                                                                         \
--      static rettype (*func) parameters =                                     \
--        reinterpret_cast<rettype(*)parameters>(                               \
--          (rettype2(*)parameters2)(::func));                                  \
--    }                                                                         \
--    _GL_EXTERN_C int _gl_cxxalias_dummy
--#else
--# define _GL_CXXALIAS_SYS_CAST2(func,rettype,parameters,rettype2,parameters2) \
--    _GL_EXTERN_C int _gl_cxxalias_dummy
--#endif
--
--/* _GL_CXXALIASWARN (func);
--   causes a warning to be emitted when ::func is used but not when
--   GNULIB_NAMESPACE::func is used.  func must be defined without overloaded
--   variants.  */
--#if defined __cplusplus && defined GNULIB_NAMESPACE
--# define _GL_CXXALIASWARN(func) \
--   _GL_CXXALIASWARN_1 (func, GNULIB_NAMESPACE)
--# define _GL_CXXALIASWARN_1(func,namespace) \
--   _GL_CXXALIASWARN_2 (func, namespace)
--/* To work around GCC bug <http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43881>,
--   we enable the warning only when not optimizing.  */
--# if !__OPTIMIZE__
--#  define _GL_CXXALIASWARN_2(func,namespace) \
--    _GL_WARN_ON_USE (func, \
--                     "The symbol ::" #func " refers to the system function. " \
--                     "Use " #namespace "::" #func " instead.")
--# elif __GNUC__ >= 3 && GNULIB_STRICT_CHECKING
--#  define _GL_CXXALIASWARN_2(func,namespace) \
--     extern __typeof__ (func) func
--# else
--#  define _GL_CXXALIASWARN_2(func,namespace) \
--     _GL_EXTERN_C int _gl_cxxalias_dummy
--# endif
--#else
--# define _GL_CXXALIASWARN(func) \
--    _GL_EXTERN_C int _gl_cxxalias_dummy
--#endif
--
--/* _GL_CXXALIASWARN1 (func, rettype, parameters_and_attributes);
--   causes a warning to be emitted when the given overloaded variant of ::func
--   is used but not when GNULIB_NAMESPACE::func is used.  */
--#if defined __cplusplus && defined GNULIB_NAMESPACE
--# define _GL_CXXALIASWARN1(func,rettype,parameters_and_attributes) \
--   _GL_CXXALIASWARN1_1 (func, rettype, parameters_and_attributes, \
--                        GNULIB_NAMESPACE)
--# define _GL_CXXALIASWARN1_1(func,rettype,parameters_and_attributes,namespace) \
--   _GL_CXXALIASWARN1_2 (func, rettype, parameters_and_attributes, namespace)
--/* To work around GCC bug <http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43881>,
--   we enable the warning only when not optimizing.  */
--# if !__OPTIMIZE__
--#  define _GL_CXXALIASWARN1_2(func,rettype,parameters_and_attributes,namespace) \
--    _GL_WARN_ON_USE_CXX (func, rettype, parameters_and_attributes, \
--                         "The symbol ::" #func " refers to the system function. " \
--                         "Use " #namespace "::" #func " instead.")
--# elif __GNUC__ >= 3 && GNULIB_STRICT_CHECKING
--#  define _GL_CXXALIASWARN1_2(func,rettype,parameters_and_attributes,namespace) \
--     extern __typeof__ (func) func
--# else
--#  define _GL_CXXALIASWARN1_2(func,rettype,parameters_and_attributes,namespace) \
--     _GL_EXTERN_C int _gl_cxxalias_dummy
--# endif
--#else
--# define _GL_CXXALIASWARN1(func,rettype,parameters_and_attributes) \
--    _GL_EXTERN_C int _gl_cxxalias_dummy
--#endif
--
--#endif /* _GL_CXXDEFS_H */
-diff -Naurp libiconv-1.14.org//build-aux/snippet/_Noreturn.h libiconv-1.14/build-aux/snippet/_Noreturn.h
---- libiconv-1.14.org//build-aux/snippet/_Noreturn.h	2011-08-07 06:22:07.000000000 -0700
-+++ libiconv-1.14/build-aux/snippet/_Noreturn.h	1969-12-31 16:00:00.000000000 -0800
-@@ -1,10 +0,0 @@
--#ifndef _Noreturn
--# if (3 <= __GNUC__ || (__GNUC__ == 2 && 8 <= __GNUC_MINOR__) \
--      || 0x5110 <= __SUNPRO_C)
--#  define _Noreturn __attribute__ ((__noreturn__))
--# elif 1200 <= _MSC_VER
--#  define _Noreturn __declspec (noreturn)
--# else
--#  define _Noreturn
--# endif
--#endif
-diff -Naurp libiconv-1.14.org//build-aux/snippet/warn-on-use.h libiconv-1.14/build-aux/snippet/warn-on-use.h
---- libiconv-1.14.org//build-aux/snippet/warn-on-use.h	2011-08-07 06:22:07.000000000 -0700
-+++ libiconv-1.14/build-aux/snippet/warn-on-use.h	1969-12-31 16:00:00.000000000 -0800
-@@ -1,109 +0,0 @@
--/* A C macro for emitting warnings if a function is used.
--   Copyright (C) 2010-2011 Free Software Foundation, Inc.
--
--   This program is free software: you can redistribute it and/or modify it
--   under the terms of the GNU General Public License as published
--   by the Free Software Foundation; either version 3 of the License, or
--   (at your option) any later version.
--
--   This program is distributed in the hope that it will be useful,
--   but WITHOUT ANY WARRANTY; without even the implied warranty of
--   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
--   General Public License for more details.
--
--   You should have received a copy of the GNU General Public License
--   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
--
--/* _GL_WARN_ON_USE (function, "literal string") issues a declaration
--   for FUNCTION which will then trigger a compiler warning containing
--   the text of "literal string" anywhere that function is called, if
--   supported by the compiler.  If the compiler does not support this
--   feature, the macro expands to an unused extern declaration.
--
--   This macro is useful for marking a function as a potential
--   portability trap, with the intent that "literal string" include
--   instructions on the replacement function that should be used
--   instead.  However, one of the reasons that a function is a
--   portability trap is if it has the wrong signature.  Declaring
--   FUNCTION with a different signature in C is a compilation error, so
--   this macro must use the same type as any existing declaration so
--   that programs that avoid the problematic FUNCTION do not fail to
--   compile merely because they included a header that poisoned the
--   function.  But this implies that _GL_WARN_ON_USE is only safe to
--   use if FUNCTION is known to already have a declaration.  Use of
--   this macro implies that there must not be any other macro hiding
--   the declaration of FUNCTION; but undefining FUNCTION first is part
--   of the poisoning process anyway (although for symbols that are
--   provided only via a macro, the result is a compilation error rather
--   than a warning containing "literal string").  Also note that in
--   C++, it is only safe to use if FUNCTION has no overloads.
--
--   For an example, it is possible to poison 'getline' by:
--   - adding a call to gl_WARN_ON_USE_PREPARE([[#include <stdio.h>]],
--     [getline]) in configure.ac, which potentially defines
--     HAVE_RAW_DECL_GETLINE
--   - adding this code to a header that wraps the system <stdio.h>:
--     #undef getline
--     #if HAVE_RAW_DECL_GETLINE
--     _GL_WARN_ON_USE (getline, "getline is required by POSIX 2008, but"
--       "not universally present; use the gnulib module getline");
--     #endif
--
--   It is not possible to directly poison global variables.  But it is
--   possible to write a wrapper accessor function, and poison that
--   (less common usage, like &environ, will cause a compilation error
--   rather than issue the nice warning, but the end result of informing
--   the developer about their portability problem is still achieved):
--   #if HAVE_RAW_DECL_ENVIRON
--   static inline char ***rpl_environ (void) { return &environ; }
--   _GL_WARN_ON_USE (rpl_environ, "environ is not always properly declared");
--   # undef environ
--   # define environ (*rpl_environ ())
--   #endif
--   */
--#ifndef _GL_WARN_ON_USE
--
--# if 4 < __GNUC__ || (__GNUC__ == 4 && 3 <= __GNUC_MINOR__)
--/* A compiler attribute is available in gcc versions 4.3.0 and later.  */
--#  define _GL_WARN_ON_USE(function, message) \
--extern __typeof__ (function) function __attribute__ ((__warning__ (message)))
--# elif __GNUC__ >= 3 && GNULIB_STRICT_CHECKING
--/* Verify the existence of the function.  */
--#  define _GL_WARN_ON_USE(function, message) \
--extern __typeof__ (function) function
--# else /* Unsupported.  */
--#  define _GL_WARN_ON_USE(function, message) \
--_GL_WARN_EXTERN_C int _gl_warn_on_use
--# endif
--#endif
--
--/* _GL_WARN_ON_USE_CXX (function, rettype, parameters_and_attributes, "string")
--   is like _GL_WARN_ON_USE (function, "string"), except that the function is
--   declared with the given prototype, consisting of return type, parameters,
--   and attributes.
--   This variant is useful for overloaded functions in C++. _GL_WARN_ON_USE does
--   not work in this case.  */
--#ifndef _GL_WARN_ON_USE_CXX
--# if 4 < __GNUC__ || (__GNUC__ == 4 && 3 <= __GNUC_MINOR__)
--#  define _GL_WARN_ON_USE_CXX(function,rettype,parameters_and_attributes,msg) \
--extern rettype function parameters_and_attributes \
--     __attribute__ ((__warning__ (msg)))
--# elif __GNUC__ >= 3 && GNULIB_STRICT_CHECKING
--/* Verify the existence of the function.  */
--#  define _GL_WARN_ON_USE_CXX(function,rettype,parameters_and_attributes,msg) \
--extern rettype function parameters_and_attributes
--# else /* Unsupported.  */
--#  define _GL_WARN_ON_USE_CXX(function,rettype,parameters_and_attributes,msg) \
--_GL_WARN_EXTERN_C int _gl_warn_on_use
--# endif
--#endif
--
--/* _GL_WARN_EXTERN_C declaration;
--   performs the declaration with C linkage.  */
--#ifndef _GL_WARN_EXTERN_C
--# if defined __cplusplus
--#  define _GL_WARN_EXTERN_C extern "C"
--# else
--#  define _GL_WARN_EXTERN_C extern
--# endif
--#endif
-diff -Naurp libiconv-1.14.org//build-aux/warn-on-use.h libiconv-1.14/build-aux/warn-on-use.h
---- libiconv-1.14.org//build-aux/warn-on-use.h	1969-12-31 16:00:00.000000000 -0800
-+++ libiconv-1.14/build-aux/warn-on-use.h	2012-01-08 02:07:39.950484439 -0800
-@@ -0,0 +1,109 @@
-+/* A C macro for emitting warnings if a function is used.
-+   Copyright (C) 2010-2011 Free Software Foundation, Inc.
-+
-+   This program is free software: you can redistribute it and/or modify it
-+   under the terms of the GNU General Public License as published
-+   by the Free Software Foundation; either version 3 of the License, or
-+   (at your option) any later version.
-+
-+   This program is distributed in the hope that it will be useful,
-+   but WITHOUT ANY WARRANTY; without even the implied warranty of
-+   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+   General Public License for more details.
-+
-+   You should have received a copy of the GNU General Public License
-+   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
-+
-+/* _GL_WARN_ON_USE (function, "literal string") issues a declaration
-+   for FUNCTION which will then trigger a compiler warning containing
-+   the text of "literal string" anywhere that function is called, if
-+   supported by the compiler.  If the compiler does not support this
-+   feature, the macro expands to an unused extern declaration.
-+
-+   This macro is useful for marking a function as a potential
-+   portability trap, with the intent that "literal string" include
-+   instructions on the replacement function that should be used
-+   instead.  However, one of the reasons that a function is a
-+   portability trap is if it has the wrong signature.  Declaring
-+   FUNCTION with a different signature in C is a compilation error, so
-+   this macro must use the same type as any existing declaration so
-+   that programs that avoid the problematic FUNCTION do not fail to
-+   compile merely because they included a header that poisoned the
-+   function.  But this implies that _GL_WARN_ON_USE is only safe to
-+   use if FUNCTION is known to already have a declaration.  Use of
-+   this macro implies that there must not be any other macro hiding
-+   the declaration of FUNCTION; but undefining FUNCTION first is part
-+   of the poisoning process anyway (although for symbols that are
-+   provided only via a macro, the result is a compilation error rather
-+   than a warning containing "literal string").  Also note that in
-+   C++, it is only safe to use if FUNCTION has no overloads.
-+
-+   For an example, it is possible to poison 'getline' by:
-+   - adding a call to gl_WARN_ON_USE_PREPARE([[#include <stdio.h>]],
-+     [getline]) in configure.ac, which potentially defines
-+     HAVE_RAW_DECL_GETLINE
-+   - adding this code to a header that wraps the system <stdio.h>:
-+     #undef getline
-+     #if HAVE_RAW_DECL_GETLINE
-+     _GL_WARN_ON_USE (getline, "getline is required by POSIX 2008, but"
-+       "not universally present; use the gnulib module getline");
-+     #endif
-+
-+   It is not possible to directly poison global variables.  But it is
-+   possible to write a wrapper accessor function, and poison that
-+   (less common usage, like &environ, will cause a compilation error
-+   rather than issue the nice warning, but the end result of informing
-+   the developer about their portability problem is still achieved):
-+   #if HAVE_RAW_DECL_ENVIRON
-+   static inline char ***rpl_environ (void) { return &environ; }
-+   _GL_WARN_ON_USE (rpl_environ, "environ is not always properly declared");
-+   # undef environ
-+   # define environ (*rpl_environ ())
-+   #endif
-+   */
-+#ifndef _GL_WARN_ON_USE
-+
-+# if 4 < __GNUC__ || (__GNUC__ == 4 && 3 <= __GNUC_MINOR__)
-+/* A compiler attribute is available in gcc versions 4.3.0 and later.  */
-+#  define _GL_WARN_ON_USE(function, message) \
-+extern __typeof__ (function) function __attribute__ ((__warning__ (message)))
-+# elif __GNUC__ >= 3 && GNULIB_STRICT_CHECKING
-+/* Verify the existence of the function.  */
-+#  define _GL_WARN_ON_USE(function, message) \
-+extern __typeof__ (function) function
-+# else /* Unsupported.  */
-+#  define _GL_WARN_ON_USE(function, message) \
-+_GL_WARN_EXTERN_C int _gl_warn_on_use
-+# endif
-+#endif
-+
-+/* _GL_WARN_ON_USE_CXX (function, rettype, parameters_and_attributes, "string")
-+   is like _GL_WARN_ON_USE (function, "string"), except that the function is
-+   declared with the given prototype, consisting of return type, parameters,
-+   and attributes.
-+   This variant is useful for overloaded functions in C++. _GL_WARN_ON_USE does
-+   not work in this case.  */
-+#ifndef _GL_WARN_ON_USE_CXX
-+# if 4 < __GNUC__ || (__GNUC__ == 4 && 3 <= __GNUC_MINOR__)
-+#  define _GL_WARN_ON_USE_CXX(function,rettype,parameters_and_attributes,msg) \
-+extern rettype function parameters_and_attributes \
-+     __attribute__ ((__warning__ (msg)))
-+# elif __GNUC__ >= 3 && GNULIB_STRICT_CHECKING
-+/* Verify the existence of the function.  */
-+#  define _GL_WARN_ON_USE_CXX(function,rettype,parameters_and_attributes,msg) \
-+extern rettype function parameters_and_attributes
-+# else /* Unsupported.  */
-+#  define _GL_WARN_ON_USE_CXX(function,rettype,parameters_and_attributes,msg) \
-+_GL_WARN_EXTERN_C int _gl_warn_on_use
-+# endif
-+#endif
-+
-+/* _GL_WARN_EXTERN_C declaration;
-+   performs the declaration with C linkage.  */
-+#ifndef _GL_WARN_EXTERN_C
-+# if defined __cplusplus
-+#  define _GL_WARN_EXTERN_C extern "C"
-+# else
-+#  define _GL_WARN_EXTERN_C extern
-+# endif
-+#endif
-diff -Naurp libiconv-1.14.org//srclib/allocator.h libiconv-1.14/srclib/allocator.h
---- libiconv-1.14.org//srclib/allocator.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/allocator.h	2012-01-08 02:07:40.050484444 -0800
-@@ -45,11 +45,10 @@ struct allocator
-   /* Call FREE to free memory, like 'free'.  */
-   void (*free) (void *);
- 
--  /* If nonnull, call DIE (SIZE) if MALLOC (SIZE) or REALLOC (...,
--     SIZE) fails.  DIE should not return.  SIZE should equal SIZE_MAX
--     if size_t overflow was detected while calculating sizes to be
--     passed to MALLOC or REALLOC.  */
--  void (*die) (size_t);
-+  /* If nonnull, call DIE if MALLOC or REALLOC fails.  DIE should not
-+     return.  DIE can be used by code that detects memory overflow
-+     while calculating sizes to be passed to MALLOC or REALLOC.  */
-+  void (*die) (void);
- };
- 
- /* An allocator using the stdlib functions and a null DIE function.  */
-diff -Naurp libiconv-1.14.org//srclib/canonicalize-lgpl.c libiconv-1.14/srclib/canonicalize-lgpl.c
---- libiconv-1.14.org//srclib/canonicalize-lgpl.c	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/canonicalize-lgpl.c	2012-01-08 02:07:40.094484446 -0800
-@@ -125,7 +125,7 @@ __realpath (const char *name, char *reso
- #else
-   path_max = pathconf (name, _PC_PATH_MAX);
-   if (path_max <= 0)
--    path_max = 8192;
-+    path_max = 1024;
- #endif
- 
-   if (resolved == NULL)
-diff -Naurp libiconv-1.14.org//srclib/careadlinkat.c libiconv-1.14/srclib/careadlinkat.c
---- libiconv-1.14.org//srclib/careadlinkat.c	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/careadlinkat.c	2012-01-08 02:07:40.102484445 -0800
-@@ -133,7 +133,6 @@ careadlinkat (int fd, char const *filena
-           if (buf == stack_buf)
-             {
-               char *b = (char *) alloc->allocate (link_size);
--              buf_size = link_size;
-               if (! b)
-                 break;
-               memcpy (b, buf, link_size);
-@@ -157,11 +156,6 @@ careadlinkat (int fd, char const *filena
-         buf_size *= 2;
-       else if (buf_size < buf_size_max)
-         buf_size = buf_size_max;
--      else if (buf_size_max < SIZE_MAX)
--        {
--          errno = ENAMETOOLONG;
--          return NULL;
--        }
-       else
-         break;
-       buf = (char *) alloc->allocate (buf_size);
-@@ -169,7 +163,7 @@ careadlinkat (int fd, char const *filena
-   while (buf);
- 
-   if (alloc->die)
--    alloc->die (buf_size);
-+    alloc->die ();
-   errno = ENOMEM;
-   return NULL;
- }
-diff -Naurp libiconv-1.14.org//srclib/errno.in.h libiconv-1.14/srclib/errno.in.h
---- libiconv-1.14.org//srclib/errno.in.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/errno.in.h	2012-01-08 02:07:40.122484446 -0800
-@@ -16,7 +16,7 @@
-    along with this program; if not, write to the Free Software Foundation,
-    Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.  */
- 
--#ifndef _@GUARD_PREFIX@_ERRNO_H
-+#ifndef _GL_ERRNO_H
- 
- #if __GNUC__ >= 3
- @PRAGMA_SYSTEM_HEADER@
-@@ -26,8 +26,8 @@
- /* The include_next requires a split double-inclusion guard.  */
- #@INCLUDE_NEXT@ @NEXT_ERRNO_H@
- 
--#ifndef _@GUARD_PREFIX@_ERRNO_H
--#define _@GUARD_PREFIX@_ERRNO_H
-+#ifndef _GL_ERRNO_H
-+#define _GL_ERRNO_H
- 
- 
- /* On native Windows platforms, many macros are not defined.  */
-@@ -147,16 +147,6 @@
- #  define GNULIB_defined_ENOTSUP 1
- # endif
- 
--# ifndef ENETRESET
--#  define ENETRESET 2011
--#  define GNULIB_defined_ENETRESET 1
--# endif
--
--# ifndef ECONNABORTED
--#  define ECONNABORTED 2012
--#  define GNULIB_defined_ECONNABORTED 1
--# endif
--
- # ifndef ESTALE
- #  define ESTALE    2009
- #  define GNULIB_defined_ESTALE 1
-@@ -173,5 +163,5 @@
- # endif
- 
- 
--#endif /* _@GUARD_PREFIX@_ERRNO_H */
--#endif /* _@GUARD_PREFIX@_ERRNO_H */
-+#endif /* _GL_ERRNO_H */
-+#endif /* _GL_ERRNO_H */
-diff -Naurp libiconv-1.14.org//srclib/error.c libiconv-1.14/srclib/error.c
---- libiconv-1.14.org//srclib/error.c	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/error.c	2012-01-08 02:07:40.134484448 -0800
-@@ -97,15 +97,11 @@ extern void __error_at_line (int status,
- /* The gnulib override of fcntl is not needed in this file.  */
- # undef fcntl
- 
--# if !HAVE_DECL_STRERROR_R
-+# if !HAVE_DECL_STRERROR_R && STRERROR_R_CHAR_P
- #  ifndef HAVE_DECL_STRERROR_R
- "this configure-time declaration test was not run"
- #  endif
--#  if STRERROR_R_CHAR_P
- char *strerror_r ();
--#  else
--int strerror_r ();
--#  endif
- # endif
- 
- /* The calling program should define program_name and set it to the
-diff -Naurp libiconv-1.14.org//srclib/fcntl.in.h libiconv-1.14/srclib/fcntl.in.h
---- libiconv-1.14.org//srclib/fcntl.in.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/fcntl.in.h	2012-01-08 02:07:40.154484449 -0800
-@@ -40,7 +40,7 @@
- #else
- /* Normal invocation convention.  */
- 
--#ifndef _@GUARD_PREFIX@_FCNTL_H
-+#ifndef _GL_FCNTL_H
- 
- #include <sys/types.h>
- /* On some systems other than glibc, <sys/stat.h> is a prerequisite of
-@@ -55,8 +55,8 @@
- /* The include_next requires a split double-inclusion guard.  */
- #@INCLUDE_NEXT@ @NEXT_FCNTL_H@
- 
--#ifndef _@GUARD_PREFIX@_FCNTL_H
--#define _@GUARD_PREFIX@_FCNTL_H
-+#ifndef _GL_FCNTL_H
-+#define _GL_FCNTL_H
- 
- #ifndef __GLIBC__ /* Avoid namespace pollution on glibc systems.  */
- # include <unistd.h>
-@@ -320,6 +320,6 @@ _GL_WARN_ON_USE (openat, "openat is not
- #endif
- 
- 
--#endif /* _@GUARD_PREFIX@_FCNTL_H */
--#endif /* _@GUARD_PREFIX@_FCNTL_H */
-+#endif /* _GL_FCNTL_H */
-+#endif /* _GL_FCNTL_H */
- #endif
-diff -Naurp libiconv-1.14.org//srclib/intprops.h libiconv-1.14/srclib/intprops.h
---- libiconv-1.14.org//srclib/intprops.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/intprops.h	2012-01-08 02:07:40.174484450 -0800
-@@ -22,13 +22,14 @@
- 
- #include <limits.h>
- 
--/* Return an integer value, converted to the same type as the integer
--   expression E after integer type promotion.  V is the unconverted value.  */
--#define _GL_INT_CONVERT(e, v) (0 * (e) + (v))
-+/* Return a integer value, converted to the same type as the integer
-+   expression E after integer type promotion.  V is the unconverted value.
-+   E should not have side effects.  */
-+#define _GL_INT_CONVERT(e, v) ((e) - (e) + (v))
- 
- /* Act like _GL_INT_CONVERT (E, -V) but work around a bug in IRIX 6.5 cc; see
-    <http://lists.gnu.org/archive/html/bug-gnulib/2011-05/msg00406.html>.  */
--#define _GL_INT_NEGATE_CONVERT(e, v) (0 * (e) - (v))
-+#define _GL_INT_NEGATE_CONVERT(e, v) ((e) - (e) - (v))
- 
- /* The extra casts in the following macros work around compiler bugs,
-    e.g., in Cray C 5.0.3.0.  */
-@@ -52,7 +53,7 @@
- #define TYPE_SIGNED(t) (! ((t) 0 < (t) -1))
- 
- /* Return 1 if the integer expression E, after integer promotion, has
--   a signed type.  */
-+   a signed type.  E should not have side effects.  */
- #define _GL_INT_SIGNED(e) (_GL_INT_NEGATE_CONVERT (e, 1) < 0)
- 
- 
-@@ -310,10 +311,13 @@
- /* Return 1 if the expression A <op> B would overflow,
-    where OP_RESULT_OVERFLOW (A, B, MIN, MAX) does the actual test,
-    assuming MIN and MAX are the minimum and maximum for the result type.
--   Arguments should be free of side effects.  */
-+
-+   This macro assumes that A | B is a valid integer if both A and B are,
-+   which is true of all known practical hosts.  If this is a problem
-+   for you, please let us know how to fix it for your host.  */
- #define _GL_BINARY_OP_OVERFLOW(a, b, op_result_overflow)        \
-   op_result_overflow (a, b,                                     \
--                      _GL_INT_MINIMUM (0 * (b) + (a)),          \
--                      _GL_INT_MAXIMUM (0 * (b) + (a)))
-+                      _GL_INT_MINIMUM ((a) | (b)),              \
-+                      _GL_INT_MAXIMUM ((a) | (b)))
- 
- #endif /* _GL_INTPROPS_H */
-diff -Naurp libiconv-1.14.org//srclib/Makefile.gnulib libiconv-1.14/srclib/Makefile.gnulib
---- libiconv-1.14.org//srclib/Makefile.gnulib	2012-01-08 02:05:18.754477606 -0800
-+++ libiconv-1.14/srclib/Makefile.gnulib	2012-01-08 02:07:43.138484592 -0800
-@@ -9,7 +9,7 @@
- # the same distribution terms as the rest of that program.
- #
- # Generated by gnulib-tool.
--# Reproduce by: gnulib-tool --import --dir=. --local-dir=gnulib-local --lib=libicrt --source-base=srclib --m4-base=srcm4 --doc-base=doc --tests-base=tests --aux-dir=build-aux --makefile-name=Makefile.gnulib --no-conditional-dependencies --no-libtool --macro-prefix=gl --no-vc-files binary-io error gettext gettext-h libiconv-misc mbstate memmove progname relocatable-prog safe-read sigpipe stdio stdlib strerror unistd uniwidth/width unlocked-io xalloc
-+# Reproduce by: gnulib-tool --import --dir=. --local-dir=gnulib-local --lib=libicrt --source-base=srclib --m4-base=srcm4 --doc-base=doc --tests-base=tests --aux-dir=build-aux --makefile-name=Makefile.gnulib --no-libtool --macro-prefix=gl --no-vc-files binary-io error gettext gettext-h libiconv-misc mbstate memmove progname relocatable relocatable-prog safe-read sigpipe stdio stdlib strerror unistd uniwidth/width unlocked-io xalloc
- 
- 
- MOSTLYCLEANFILES += core *.stackdump
-@@ -60,12 +60,60 @@ EXTRA_DIST += areadlink.h
- 
- ## end   gnulib module areadlink
- 
-+## begin gnulib module arg-nonnull
-+
-+# The BUILT_SOURCES created by this Makefile snippet are not used via #include
-+# statements but through direct file reference. Therefore this snippet must be
-+# present in all Makefile.am that need it. This is ensured by the applicability
-+# 'all' defined above.
-+
-+BUILT_SOURCES += arg-nonnull.h
-+# The arg-nonnull.h that gets inserted into generated .h files is the same as
-+# build-aux/arg-nonnull.h, except that it has the copyright header cut off.
-+arg-nonnull.h: $(top_srcdir)/build-aux/arg-nonnull.h
-+	$(AM_V_GEN)rm -f $@-t $@ && \
-+	sed -n -e '/GL_ARG_NONNULL/,$$p' \
-+	  < $(top_srcdir)/build-aux/arg-nonnull.h \
-+	  > $@-t && \
-+	mv $@-t $@
-+MOSTLYCLEANFILES += arg-nonnull.h arg-nonnull.h-t
-+
-+ARG_NONNULL_H=arg-nonnull.h
-+
-+EXTRA_DIST += $(top_srcdir)/build-aux/arg-nonnull.h
-+
-+## end   gnulib module arg-nonnull
-+
- ## begin gnulib module binary-io
- 
- libicrt_a_SOURCES += binary-io.h
- 
- ## end   gnulib module binary-io
- 
-+## begin gnulib module c++defs
-+
-+# The BUILT_SOURCES created by this Makefile snippet are not used via #include
-+# statements but through direct file reference. Therefore this snippet must be
-+# present in all Makefile.am that need it. This is ensured by the applicability
-+# 'all' defined above.
-+
-+BUILT_SOURCES += c++defs.h
-+# The c++defs.h that gets inserted into generated .h files is the same as
-+# build-aux/c++defs.h, except that it has the copyright header cut off.
-+c++defs.h: $(top_srcdir)/build-aux/c++defs.h
-+	$(AM_V_GEN)rm -f $@-t $@ && \
-+	sed -n -e '/_GL_CXXDEFS/,$$p' \
-+	  < $(top_srcdir)/build-aux/c++defs.h \
-+	  > $@-t && \
-+	mv $@-t $@
-+MOSTLYCLEANFILES += c++defs.h c++defs.h-t
-+
-+CXXDEFS_H=c++defs.h
-+
-+EXTRA_DIST += $(top_srcdir)/build-aux/c++defs.h
-+
-+## end   gnulib module c++defs
-+
- ## begin gnulib module canonicalize-lgpl
- 
- 
-@@ -100,8 +148,7 @@ if GL_GENERATE_ERRNO_H
- errno.h: errno.in.h $(top_builddir)/config.status
- 	$(AM_V_GEN)rm -f $@-t $@ && \
- 	{ echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */' && \
--	  sed -e 's|@''GUARD_PREFIX''@|GL|g' \
--	      -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
-+	  sed -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
- 	      -e 's|@''PRAGMA_SYSTEM_HEADER''@|@PRAGMA_SYSTEM_HEADER@|g' \
- 	      -e 's|@''PRAGMA_COLUMNS''@|@PRAGMA_COLUMNS@|g' \
- 	      -e 's|@''NEXT_ERRNO_H''@|$(NEXT_ERRNO_H)|g' \
-@@ -142,15 +189,14 @@ BUILT_SOURCES += fcntl.h
- fcntl.h: fcntl.in.h $(top_builddir)/config.status $(CXXDEFS_H) $(ARG_NONNULL_H) $(WARN_ON_USE_H)
- 	$(AM_V_GEN)rm -f $@-t $@ && \
- 	{ echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \
--	  sed -e 's|@''GUARD_PREFIX''@|GL|g' \
--	      -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
-+	  sed -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
- 	      -e 's|@''PRAGMA_SYSTEM_HEADER''@|@PRAGMA_SYSTEM_HEADER@|g' \
- 	      -e 's|@''PRAGMA_COLUMNS''@|@PRAGMA_COLUMNS@|g' \
- 	      -e 's|@''NEXT_FCNTL_H''@|$(NEXT_FCNTL_H)|g' \
--	      -e 's/@''GNULIB_FCNTL''@/$(GNULIB_FCNTL)/g' \
--	      -e 's/@''GNULIB_NONBLOCKING''@/$(GNULIB_NONBLOCKING)/g' \
--	      -e 's/@''GNULIB_OPEN''@/$(GNULIB_OPEN)/g' \
--	      -e 's/@''GNULIB_OPENAT''@/$(GNULIB_OPENAT)/g' \
-+	      -e 's|@''GNULIB_FCNTL''@|$(GNULIB_FCNTL)|g' \
-+	      -e 's|@''GNULIB_NONBLOCKING''@|$(GNULIB_NONBLOCKING)|g' \
-+	      -e 's|@''GNULIB_OPEN''@|$(GNULIB_OPEN)|g' \
-+	      -e 's|@''GNULIB_OPENAT''@|$(GNULIB_OPENAT)|g' \
- 	      -e 's|@''HAVE_FCNTL''@|$(HAVE_FCNTL)|g' \
- 	      -e 's|@''HAVE_OPENAT''@|$(HAVE_OPENAT)|g' \
- 	      -e 's|@''REPLACE_FCNTL''@|$(REPLACE_FCNTL)|g' \
-@@ -297,7 +343,7 @@ EXTRA_DIST += $(top_srcdir)/build-aux/co
- ## begin gnulib module relocatable-prog-wrapper
- 
- 
--EXTRA_DIST += allocator.c allocator.h areadlink.c areadlink.h c-ctype.c c-ctype.h canonicalize-lgpl.c careadlinkat.c careadlinkat.h malloca.c malloca.h progname.c progname.h progreloc.c readlink.c relocatable.c relocatable.h relocwrapper.c setenv.c
-+EXTRA_DIST += allocator.c allocator.h areadlink.c areadlink.h c-ctype.c c-ctype.h canonicalize-lgpl.c careadlinkat.c careadlinkat.h malloca.c malloca.h progname.c progname.h progreloc.c readlink.c relocatable.c relocatable.h relocwrapper.c setenv.c strerror.c
- 
- EXTRA_DIST += $(top_srcdir)/build-aux/install-reloc
- 
-@@ -305,9 +351,10 @@ EXTRA_DIST += $(top_srcdir)/build-aux/in
- 
- ## begin gnulib module safe-read
- 
--libicrt_a_SOURCES += safe-read.c
- 
--EXTRA_DIST += safe-read.h
-+EXTRA_DIST += safe-read.c safe-read.h
-+
-+EXTRA_libicrt_a_SOURCES += safe-read.c
- 
- ## end   gnulib module safe-read
- 
-@@ -320,24 +367,20 @@ BUILT_SOURCES += signal.h
- signal.h: signal.in.h $(top_builddir)/config.status $(CXXDEFS_H) $(ARG_NONNULL_H) $(WARN_ON_USE_H)
- 	$(AM_V_GEN)rm -f $@-t $@ && \
- 	{ echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */' && \
--	  sed -e 's|@''GUARD_PREFIX''@|GL|g' \
--	      -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
-+	  sed -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
- 	      -e 's|@''PRAGMA_SYSTEM_HEADER''@|@PRAGMA_SYSTEM_HEADER@|g' \
- 	      -e 's|@''PRAGMA_COLUMNS''@|@PRAGMA_COLUMNS@|g' \
- 	      -e 's|@''NEXT_SIGNAL_H''@|$(NEXT_SIGNAL_H)|g' \
--	      -e 's|@''GNULIB_PTHREAD_SIGMASK''@|$(GNULIB_PTHREAD_SIGMASK)|g' \
--	      -e 's/@''GNULIB_SIGNAL_H_SIGPIPE''@/$(GNULIB_SIGNAL_H_SIGPIPE)/g' \
--	      -e 's/@''GNULIB_SIGPROCMASK''@/$(GNULIB_SIGPROCMASK)/g' \
--	      -e 's/@''GNULIB_SIGACTION''@/$(GNULIB_SIGACTION)/g' \
-+	      -e 's|@''GNULIB_SIGNAL_H_SIGPIPE''@|$(GNULIB_SIGNAL_H_SIGPIPE)|g' \
-+	      -e 's|@''GNULIB_SIGPROCMASK''@|$(GNULIB_SIGPROCMASK)|g' \
-+	      -e 's|@''GNULIB_SIGACTION''@|$(GNULIB_SIGACTION)|g' \
- 	      -e 's|@''HAVE_POSIX_SIGNALBLOCKING''@|$(HAVE_POSIX_SIGNALBLOCKING)|g' \
--	      -e 's|@''HAVE_PTHREAD_SIGMASK''@|$(HAVE_PTHREAD_SIGMASK)|g' \
- 	      -e 's|@''HAVE_SIGSET_T''@|$(HAVE_SIGSET_T)|g' \
- 	      -e 's|@''HAVE_SIGINFO_T''@|$(HAVE_SIGINFO_T)|g' \
- 	      -e 's|@''HAVE_SIGACTION''@|$(HAVE_SIGACTION)|g' \
- 	      -e 's|@''HAVE_STRUCT_SIGACTION_SA_SIGACTION''@|$(HAVE_STRUCT_SIGACTION_SA_SIGACTION)|g' \
- 	      -e 's|@''HAVE_TYPE_VOLATILE_SIG_ATOMIC_T''@|$(HAVE_TYPE_VOLATILE_SIG_ATOMIC_T)|g' \
- 	      -e 's|@''HAVE_SIGHANDLER_T''@|$(HAVE_SIGHANDLER_T)|g' \
--	      -e 's|@''REPLACE_PTHREAD_SIGMASK''@|$(REPLACE_PTHREAD_SIGMASK)|g' \
- 	      -e '/definitions of _GL_FUNCDECL_RPL/r $(CXXDEFS_H)' \
- 	      -e '/definition of _GL_ARG_NONNULL/r $(ARG_NONNULL_H)' \
- 	      -e '/definition of _GL_WARN_ON_USE/r $(WARN_ON_USE_H)' \
-@@ -368,87 +411,6 @@ EXTRA_libicrt_a_SOURCES += sigprocmask.c
- 
- ## end   gnulib module sigprocmask
- 
--## begin gnulib module snippet/_Noreturn
--
--# Because this Makefile snippet defines a variable used by other
--# gnulib Makefile snippets, it must be present in all Makefile.am that
--# need it. This is ensured by the applicability 'all' defined above.
--
--_NORETURN_H=$(top_srcdir)/build-aux/snippet/_Noreturn.h
--
--EXTRA_DIST += $(top_srcdir)/build-aux/snippet/_Noreturn.h
--
--## end   gnulib module snippet/_Noreturn
--
--## begin gnulib module snippet/arg-nonnull
--
--# The BUILT_SOURCES created by this Makefile snippet are not used via #include
--# statements but through direct file reference. Therefore this snippet must be
--# present in all Makefile.am that need it. This is ensured by the applicability
--# 'all' defined above.
--
--BUILT_SOURCES += arg-nonnull.h
--# The arg-nonnull.h that gets inserted into generated .h files is the same as
--# build-aux/snippet/arg-nonnull.h, except that it has the copyright header cut
--# off.
--arg-nonnull.h: $(top_srcdir)/build-aux/snippet/arg-nonnull.h
--	$(AM_V_GEN)rm -f $@-t $@ && \
--	sed -n -e '/GL_ARG_NONNULL/,$$p' \
--	  < $(top_srcdir)/build-aux/snippet/arg-nonnull.h \
--	  > $@-t && \
--	mv $@-t $@
--MOSTLYCLEANFILES += arg-nonnull.h arg-nonnull.h-t
--
--ARG_NONNULL_H=arg-nonnull.h
--
--EXTRA_DIST += $(top_srcdir)/build-aux/snippet/arg-nonnull.h
--
--## end   gnulib module snippet/arg-nonnull
--
--## begin gnulib module snippet/c++defs
--
--# The BUILT_SOURCES created by this Makefile snippet are not used via #include
--# statements but through direct file reference. Therefore this snippet must be
--# present in all Makefile.am that need it. This is ensured by the applicability
--# 'all' defined above.
--
--BUILT_SOURCES += c++defs.h
--# The c++defs.h that gets inserted into generated .h files is the same as
--# build-aux/snippet/c++defs.h, except that it has the copyright header cut off.
--c++defs.h: $(top_srcdir)/build-aux/snippet/c++defs.h
--	$(AM_V_GEN)rm -f $@-t $@ && \
--	sed -n -e '/_GL_CXXDEFS/,$$p' \
--	  < $(top_srcdir)/build-aux/snippet/c++defs.h \
--	  > $@-t && \
--	mv $@-t $@
--MOSTLYCLEANFILES += c++defs.h c++defs.h-t
--
--CXXDEFS_H=c++defs.h
--
--EXTRA_DIST += $(top_srcdir)/build-aux/snippet/c++defs.h
--
--## end   gnulib module snippet/c++defs
--
--## begin gnulib module snippet/warn-on-use
--
--BUILT_SOURCES += warn-on-use.h
--# The warn-on-use.h that gets inserted into generated .h files is the same as
--# build-aux/snippet/warn-on-use.h, except that it has the copyright header cut
--# off.
--warn-on-use.h: $(top_srcdir)/build-aux/snippet/warn-on-use.h
--	$(AM_V_GEN)rm -f $@-t $@ && \
--	sed -n -e '/^.ifndef/,$$p' \
--	  < $(top_srcdir)/build-aux/snippet/warn-on-use.h \
--	  > $@-t && \
--	mv $@-t $@
--MOSTLYCLEANFILES += warn-on-use.h warn-on-use.h-t
--
--WARN_ON_USE_H=warn-on-use.h
--
--EXTRA_DIST += $(top_srcdir)/build-aux/snippet/warn-on-use.h
--
--## end   gnulib module snippet/warn-on-use
--
- ## begin gnulib module stat
- 
- 
-@@ -491,8 +453,7 @@ if GL_GENERATE_STDDEF_H
- stddef.h: stddef.in.h $(top_builddir)/config.status
- 	$(AM_V_GEN)rm -f $@-t $@ && \
- 	{ echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */' && \
--	  sed -e 's|@''GUARD_PREFIX''@|GL|g' \
--	      -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
-+	  sed -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
- 	      -e 's|@''PRAGMA_SYSTEM_HEADER''@|@PRAGMA_SYSTEM_HEADER@|g' \
- 	      -e 's|@''PRAGMA_COLUMNS''@|@PRAGMA_COLUMNS@|g' \
- 	      -e 's|@''NEXT_STDDEF_H''@|$(NEXT_STDDEF_H)|g' \
-@@ -521,8 +482,7 @@ if GL_GENERATE_STDINT_H
- stdint.h: stdint.in.h $(top_builddir)/config.status
- 	$(AM_V_GEN)rm -f $@-t $@ && \
- 	{ echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \
--	  sed -e 's|@''GUARD_PREFIX''@|GL|g' \
--	      -e 's/@''HAVE_STDINT_H''@/$(HAVE_STDINT_H)/g' \
-+	  sed -e 's/@''HAVE_STDINT_H''@/$(HAVE_STDINT_H)/g' \
- 	      -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
- 	      -e 's|@''PRAGMA_SYSTEM_HEADER''@|@PRAGMA_SYSTEM_HEADER@|g' \
- 	      -e 's|@''PRAGMA_COLUMNS''@|@PRAGMA_COLUMNS@|g' \
-@@ -570,63 +530,62 @@ BUILT_SOURCES += stdio.h
- stdio.h: stdio.in.h $(top_builddir)/config.status $(CXXDEFS_H) $(ARG_NONNULL_H) $(WARN_ON_USE_H)
- 	$(AM_V_GEN)rm -f $@-t $@ && \
- 	{ echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */' && \
--	  sed -e 's|@''GUARD_PREFIX''@|GL|g' \
--	      -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
-+	  sed -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
- 	      -e 's|@''PRAGMA_SYSTEM_HEADER''@|@PRAGMA_SYSTEM_HEADER@|g' \
- 	      -e 's|@''PRAGMA_COLUMNS''@|@PRAGMA_COLUMNS@|g' \
- 	      -e 's|@''NEXT_STDIO_H''@|$(NEXT_STDIO_H)|g' \
--	      -e 's/@''GNULIB_DPRINTF''@/$(GNULIB_DPRINTF)/g' \
--	      -e 's/@''GNULIB_FCLOSE''@/$(GNULIB_FCLOSE)/g' \
--	      -e 's/@''GNULIB_FFLUSH''@/$(GNULIB_FFLUSH)/g' \
--	      -e 's/@''GNULIB_FGETC''@/$(GNULIB_FGETC)/g' \
--	      -e 's/@''GNULIB_FGETS''@/$(GNULIB_FGETS)/g' \
--	      -e 's/@''GNULIB_FOPEN''@/$(GNULIB_FOPEN)/g' \
--	      -e 's/@''GNULIB_FPRINTF''@/$(GNULIB_FPRINTF)/g' \
--	      -e 's/@''GNULIB_FPRINTF_POSIX''@/$(GNULIB_FPRINTF_POSIX)/g' \
--	      -e 's/@''GNULIB_FPURGE''@/$(GNULIB_FPURGE)/g' \
--	      -e 's/@''GNULIB_FPUTC''@/$(GNULIB_FPUTC)/g' \
--	      -e 's/@''GNULIB_FPUTS''@/$(GNULIB_FPUTS)/g' \
--	      -e 's/@''GNULIB_FREAD''@/$(GNULIB_FREAD)/g' \
--	      -e 's/@''GNULIB_FREOPEN''@/$(GNULIB_FREOPEN)/g' \
--	      -e 's/@''GNULIB_FSCANF''@/$(GNULIB_FSCANF)/g' \
--	      -e 's/@''GNULIB_FSEEK''@/$(GNULIB_FSEEK)/g' \
--	      -e 's/@''GNULIB_FSEEKO''@/$(GNULIB_FSEEKO)/g' \
--	      -e 's/@''GNULIB_FTELL''@/$(GNULIB_FTELL)/g' \
--	      -e 's/@''GNULIB_FTELLO''@/$(GNULIB_FTELLO)/g' \
--	      -e 's/@''GNULIB_FWRITE''@/$(GNULIB_FWRITE)/g' \
--	      -e 's/@''GNULIB_GETC''@/$(GNULIB_GETC)/g' \
--	      -e 's/@''GNULIB_GETCHAR''@/$(GNULIB_GETCHAR)/g' \
--	      -e 's/@''GNULIB_GETDELIM''@/$(GNULIB_GETDELIM)/g' \
--	      -e 's/@''GNULIB_GETLINE''@/$(GNULIB_GETLINE)/g' \
--	      -e 's/@''GNULIB_GETS''@/$(GNULIB_GETS)/g' \
--	      -e 's/@''GNULIB_OBSTACK_PRINTF''@/$(GNULIB_OBSTACK_PRINTF)/g' \
--	      -e 's/@''GNULIB_OBSTACK_PRINTF_POSIX''@/$(GNULIB_OBSTACK_PRINTF_POSIX)/g' \
--	      -e 's/@''GNULIB_PERROR''@/$(GNULIB_PERROR)/g' \
--	      -e 's/@''GNULIB_POPEN''@/$(GNULIB_POPEN)/g' \
--	      -e 's/@''GNULIB_PRINTF''@/$(GNULIB_PRINTF)/g' \
--	      -e 's/@''GNULIB_PRINTF_POSIX''@/$(GNULIB_PRINTF_POSIX)/g' \
--	      -e 's/@''GNULIB_PUTC''@/$(GNULIB_PUTC)/g' \
--	      -e 's/@''GNULIB_PUTCHAR''@/$(GNULIB_PUTCHAR)/g' \
--	      -e 's/@''GNULIB_PUTS''@/$(GNULIB_PUTS)/g' \
--	      -e 's/@''GNULIB_REMOVE''@/$(GNULIB_REMOVE)/g' \
--	      -e 's/@''GNULIB_RENAME''@/$(GNULIB_RENAME)/g' \
--	      -e 's/@''GNULIB_RENAMEAT''@/$(GNULIB_RENAMEAT)/g' \
--	      -e 's/@''GNULIB_SCANF''@/$(GNULIB_SCANF)/g' \
--	      -e 's/@''GNULIB_SNPRINTF''@/$(GNULIB_SNPRINTF)/g' \
--	      -e 's/@''GNULIB_SPRINTF_POSIX''@/$(GNULIB_SPRINTF_POSIX)/g' \
--	      -e 's/@''GNULIB_STDIO_H_NONBLOCKING''@/$(GNULIB_STDIO_H_NONBLOCKING)/g' \
--	      -e 's/@''GNULIB_STDIO_H_SIGPIPE''@/$(GNULIB_STDIO_H_SIGPIPE)/g' \
--	      -e 's/@''GNULIB_TMPFILE''@/$(GNULIB_TMPFILE)/g' \
--	      -e 's/@''GNULIB_VASPRINTF''@/$(GNULIB_VASPRINTF)/g' \
--	      -e 's/@''GNULIB_VDPRINTF''@/$(GNULIB_VDPRINTF)/g' \
--	      -e 's/@''GNULIB_VFPRINTF''@/$(GNULIB_VFPRINTF)/g' \
--	      -e 's/@''GNULIB_VFPRINTF_POSIX''@/$(GNULIB_VFPRINTF_POSIX)/g' \
--	      -e 's/@''GNULIB_VFSCANF''@/$(GNULIB_VFSCANF)/g' \
--	      -e 's/@''GNULIB_VSCANF''@/$(GNULIB_VSCANF)/g' \
--	      -e 's/@''GNULIB_VPRINTF''@/$(GNULIB_VPRINTF)/g' \
--	      -e 's/@''GNULIB_VPRINTF_POSIX''@/$(GNULIB_VPRINTF_POSIX)/g' \
--	      -e 's/@''GNULIB_VSNPRINTF''@/$(GNULIB_VSNPRINTF)/g' \
--	      -e 's/@''GNULIB_VSPRINTF_POSIX''@/$(GNULIB_VSPRINTF_POSIX)/g' \
-+	      -e 's|@''GNULIB_DPRINTF''@|$(GNULIB_DPRINTF)|g' \
-+	      -e 's|@''GNULIB_FCLOSE''@|$(GNULIB_FCLOSE)|g' \
-+	      -e 's|@''GNULIB_FFLUSH''@|$(GNULIB_FFLUSH)|g' \
-+	      -e 's|@''GNULIB_FGETC''@|$(GNULIB_FGETC)|g' \
-+	      -e 's|@''GNULIB_FGETS''@|$(GNULIB_FGETS)|g' \
-+	      -e 's|@''GNULIB_FOPEN''@|$(GNULIB_FOPEN)|g' \
-+	      -e 's|@''GNULIB_FPRINTF''@|$(GNULIB_FPRINTF)|g' \
-+	      -e 's|@''GNULIB_FPRINTF_POSIX''@|$(GNULIB_FPRINTF_POSIX)|g' \
-+	      -e 's|@''GNULIB_FPURGE''@|$(GNULIB_FPURGE)|g' \
-+	      -e 's|@''GNULIB_FPUTC''@|$(GNULIB_FPUTC)|g' \
-+	      -e 's|@''GNULIB_FPUTS''@|$(GNULIB_FPUTS)|g' \
-+	      -e 's|@''GNULIB_FREAD''@|$(GNULIB_FREAD)|g' \
-+	      -e 's|@''GNULIB_FREOPEN''@|$(GNULIB_FREOPEN)|g' \
-+	      -e 's|@''GNULIB_FSCANF''@|$(GNULIB_FSCANF)|g' \
-+	      -e 's|@''GNULIB_FSEEK''@|$(GNULIB_FSEEK)|g' \
-+	      -e 's|@''GNULIB_FSEEKO''@|$(GNULIB_FSEEKO)|g' \
-+	      -e 's|@''GNULIB_FTELL''@|$(GNULIB_FTELL)|g' \
-+	      -e 's|@''GNULIB_FTELLO''@|$(GNULIB_FTELLO)|g' \
-+	      -e 's|@''GNULIB_FWRITE''@|$(GNULIB_FWRITE)|g' \
-+	      -e 's|@''GNULIB_GETC''@|$(GNULIB_GETC)|g' \
-+	      -e 's|@''GNULIB_GETCHAR''@|$(GNULIB_GETCHAR)|g' \
-+	      -e 's|@''GNULIB_GETDELIM''@|$(GNULIB_GETDELIM)|g' \
-+	      -e 's|@''GNULIB_GETLINE''@|$(GNULIB_GETLINE)|g' \
-+	      -e 's|@''GNULIB_GETS''@|$(GNULIB_GETS)|g' \
-+	      -e 's|@''GNULIB_OBSTACK_PRINTF''@|$(GNULIB_OBSTACK_PRINTF)|g' \
-+	      -e 's|@''GNULIB_OBSTACK_PRINTF_POSIX''@|$(GNULIB_OBSTACK_PRINTF_POSIX)|g' \
-+	      -e 's|@''GNULIB_PERROR''@|$(GNULIB_PERROR)|g' \
-+	      -e 's|@''GNULIB_POPEN''@|$(GNULIB_POPEN)|g' \
-+	      -e 's|@''GNULIB_PRINTF''@|$(GNULIB_PRINTF)|g' \
-+	      -e 's|@''GNULIB_PRINTF_POSIX''@|$(GNULIB_PRINTF_POSIX)|g' \
-+	      -e 's|@''GNULIB_PUTC''@|$(GNULIB_PUTC)|g' \
-+	      -e 's|@''GNULIB_PUTCHAR''@|$(GNULIB_PUTCHAR)|g' \
-+	      -e 's|@''GNULIB_PUTS''@|$(GNULIB_PUTS)|g' \
-+	      -e 's|@''GNULIB_REMOVE''@|$(GNULIB_REMOVE)|g' \
-+	      -e 's|@''GNULIB_RENAME''@|$(GNULIB_RENAME)|g' \
-+	      -e 's|@''GNULIB_RENAMEAT''@|$(GNULIB_RENAMEAT)|g' \
-+	      -e 's|@''GNULIB_SCANF''@|$(GNULIB_SCANF)|g' \
-+	      -e 's|@''GNULIB_SNPRINTF''@|$(GNULIB_SNPRINTF)|g' \
-+	      -e 's|@''GNULIB_SPRINTF_POSIX''@|$(GNULIB_SPRINTF_POSIX)|g' \
-+	      -e 's|@''GNULIB_STDIO_H_NONBLOCKING''@|$(GNULIB_STDIO_H_NONBLOCKING)|g' \
-+	      -e 's|@''GNULIB_STDIO_H_SIGPIPE''@|$(GNULIB_STDIO_H_SIGPIPE)|g' \
-+	      -e 's|@''GNULIB_TMPFILE''@|$(GNULIB_TMPFILE)|g' \
-+	      -e 's|@''GNULIB_VASPRINTF''@|$(GNULIB_VASPRINTF)|g' \
-+	      -e 's|@''GNULIB_VDPRINTF''@|$(GNULIB_VDPRINTF)|g' \
-+	      -e 's|@''GNULIB_VFPRINTF''@|$(GNULIB_VFPRINTF)|g' \
-+	      -e 's|@''GNULIB_VFPRINTF_POSIX''@|$(GNULIB_VFPRINTF_POSIX)|g' \
-+	      -e 's|@''GNULIB_VFSCANF''@|$(GNULIB_VFSCANF)|g' \
-+	      -e 's|@''GNULIB_VSCANF''@|$(GNULIB_VSCANF)|g' \
-+	      -e 's|@''GNULIB_VPRINTF''@|$(GNULIB_VPRINTF)|g' \
-+	      -e 's|@''GNULIB_VPRINTF_POSIX''@|$(GNULIB_VPRINTF_POSIX)|g' \
-+	      -e 's|@''GNULIB_VSNPRINTF''@|$(GNULIB_VSNPRINTF)|g' \
-+	      -e 's|@''GNULIB_VSPRINTF_POSIX''@|$(GNULIB_VSPRINTF_POSIX)|g' \
- 	      < $(srcdir)/stdio.in.h | \
- 	  sed -e 's|@''HAVE_DECL_FPURGE''@|$(HAVE_DECL_FPURGE)|g' \
- 	      -e 's|@''HAVE_DECL_FSEEKO''@|$(HAVE_DECL_FSEEKO)|g' \
-@@ -691,43 +650,41 @@ BUILT_SOURCES += stdlib.h
- 
- # We need the following in order to create <stdlib.h> when the system
- # doesn't have one that works with the given compiler.
--stdlib.h: stdlib.in.h $(top_builddir)/config.status $(CXXDEFS_H) \
--  $(_NORETURN_H) $(ARG_NONNULL_H) $(WARN_ON_USE_H)
-+stdlib.h: stdlib.in.h $(top_builddir)/config.status $(CXXDEFS_H) $(ARG_NONNULL_H) $(WARN_ON_USE_H)
- 	$(AM_V_GEN)rm -f $@-t $@ && \
- 	{ echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */' && \
--	  sed -e 's|@''GUARD_PREFIX''@|GL|g' \
--	      -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
-+	  sed -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
- 	      -e 's|@''PRAGMA_SYSTEM_HEADER''@|@PRAGMA_SYSTEM_HEADER@|g' \
- 	      -e 's|@''PRAGMA_COLUMNS''@|@PRAGMA_COLUMNS@|g' \
- 	      -e 's|@''NEXT_STDLIB_H''@|$(NEXT_STDLIB_H)|g' \
--	      -e 's/@''GNULIB__EXIT''@/$(GNULIB__EXIT)/g' \
--	      -e 's/@''GNULIB_ATOLL''@/$(GNULIB_ATOLL)/g' \
--	      -e 's/@''GNULIB_CALLOC_POSIX''@/$(GNULIB_CALLOC_POSIX)/g' \
--	      -e 's/@''GNULIB_CANONICALIZE_FILE_NAME''@/$(GNULIB_CANONICALIZE_FILE_NAME)/g' \
--	      -e 's/@''GNULIB_GETLOADAVG''@/$(GNULIB_GETLOADAVG)/g' \
--	      -e 's/@''GNULIB_GETSUBOPT''@/$(GNULIB_GETSUBOPT)/g' \
--	      -e 's/@''GNULIB_GRANTPT''@/$(GNULIB_GRANTPT)/g' \
--	      -e 's/@''GNULIB_MALLOC_POSIX''@/$(GNULIB_MALLOC_POSIX)/g' \
--	      -e 's/@''GNULIB_MBTOWC''@/$(GNULIB_MBTOWC)/g' \
--	      -e 's/@''GNULIB_MKDTEMP''@/$(GNULIB_MKDTEMP)/g' \
--	      -e 's/@''GNULIB_MKOSTEMP''@/$(GNULIB_MKOSTEMP)/g' \
--	      -e 's/@''GNULIB_MKOSTEMPS''@/$(GNULIB_MKOSTEMPS)/g' \
--	      -e 's/@''GNULIB_MKSTEMP''@/$(GNULIB_MKSTEMP)/g' \
--	      -e 's/@''GNULIB_MKSTEMPS''@/$(GNULIB_MKSTEMPS)/g' \
--	      -e 's/@''GNULIB_PTSNAME''@/$(GNULIB_PTSNAME)/g' \
--	      -e 's/@''GNULIB_PUTENV''@/$(GNULIB_PUTENV)/g' \
--	      -e 's/@''GNULIB_RANDOM_R''@/$(GNULIB_RANDOM_R)/g' \
--	      -e 's/@''GNULIB_REALLOC_POSIX''@/$(GNULIB_REALLOC_POSIX)/g' \
--	      -e 's/@''GNULIB_REALPATH''@/$(GNULIB_REALPATH)/g' \
--	      -e 's/@''GNULIB_RPMATCH''@/$(GNULIB_RPMATCH)/g' \
--	      -e 's/@''GNULIB_SETENV''@/$(GNULIB_SETENV)/g' \
--	      -e 's/@''GNULIB_STRTOD''@/$(GNULIB_STRTOD)/g' \
--	      -e 's/@''GNULIB_STRTOLL''@/$(GNULIB_STRTOLL)/g' \
--	      -e 's/@''GNULIB_STRTOULL''@/$(GNULIB_STRTOULL)/g' \
--	      -e 's/@''GNULIB_SYSTEM_POSIX''@/$(GNULIB_SYSTEM_POSIX)/g' \
--	      -e 's/@''GNULIB_UNLOCKPT''@/$(GNULIB_UNLOCKPT)/g' \
--	      -e 's/@''GNULIB_UNSETENV''@/$(GNULIB_UNSETENV)/g' \
--	      -e 's/@''GNULIB_WCTOMB''@/$(GNULIB_WCTOMB)/g' \
-+	      -e 's|@''GNULIB__EXIT''@|$(GNULIB__EXIT)|g' \
-+	      -e 's|@''GNULIB_ATOLL''@|$(GNULIB_ATOLL)|g' \
-+	      -e 's|@''GNULIB_CALLOC_POSIX''@|$(GNULIB_CALLOC_POSIX)|g' \
-+	      -e 's|@''GNULIB_CANONICALIZE_FILE_NAME''@|$(GNULIB_CANONICALIZE_FILE_NAME)|g' \
-+	      -e 's|@''GNULIB_GETLOADAVG''@|$(GNULIB_GETLOADAVG)|g' \
-+	      -e 's|@''GNULIB_GETSUBOPT''@|$(GNULIB_GETSUBOPT)|g' \
-+	      -e 's|@''GNULIB_GRANTPT''@|$(GNULIB_GRANTPT)|g' \
-+	      -e 's|@''GNULIB_MALLOC_POSIX''@|$(GNULIB_MALLOC_POSIX)|g' \
-+	      -e 's|@''GNULIB_MBTOWC''@|$(GNULIB_MBTOWC)|g' \
-+	      -e 's|@''GNULIB_MKDTEMP''@|$(GNULIB_MKDTEMP)|g' \
-+	      -e 's|@''GNULIB_MKOSTEMP''@|$(GNULIB_MKOSTEMP)|g' \
-+	      -e 's|@''GNULIB_MKOSTEMPS''@|$(GNULIB_MKOSTEMPS)|g' \
-+	      -e 's|@''GNULIB_MKSTEMP''@|$(GNULIB_MKSTEMP)|g' \
-+	      -e 's|@''GNULIB_MKSTEMPS''@|$(GNULIB_MKSTEMPS)|g' \
-+	      -e 's|@''GNULIB_PTSNAME''@|$(GNULIB_PTSNAME)|g' \
-+	      -e 's|@''GNULIB_PUTENV''@|$(GNULIB_PUTENV)|g' \
-+	      -e 's|@''GNULIB_RANDOM_R''@|$(GNULIB_RANDOM_R)|g' \
-+	      -e 's|@''GNULIB_REALLOC_POSIX''@|$(GNULIB_REALLOC_POSIX)|g' \
-+	      -e 's|@''GNULIB_REALPATH''@|$(GNULIB_REALPATH)|g' \
-+	      -e 's|@''GNULIB_RPMATCH''@|$(GNULIB_RPMATCH)|g' \
-+	      -e 's|@''GNULIB_SETENV''@|$(GNULIB_SETENV)|g' \
-+	      -e 's|@''GNULIB_STRTOD''@|$(GNULIB_STRTOD)|g' \
-+	      -e 's|@''GNULIB_STRTOLL''@|$(GNULIB_STRTOLL)|g' \
-+	      -e 's|@''GNULIB_STRTOULL''@|$(GNULIB_STRTOULL)|g' \
-+	      -e 's|@''GNULIB_SYSTEM_POSIX''@|$(GNULIB_SYSTEM_POSIX)|g' \
-+	      -e 's|@''GNULIB_UNLOCKPT''@|$(GNULIB_UNLOCKPT)|g' \
-+	      -e 's|@''GNULIB_UNSETENV''@|$(GNULIB_UNSETENV)|g' \
-+	      -e 's|@''GNULIB_WCTOMB''@|$(GNULIB_WCTOMB)|g' \
- 	      < $(srcdir)/stdlib.in.h | \
- 	  sed -e 's|@''HAVE__EXIT''@|$(HAVE__EXIT)|g' \
- 	      -e 's|@''HAVE_ATOLL''@|$(HAVE_ATOLL)|g' \
-@@ -766,7 +723,6 @@ stdlib.h: stdlib.in.h $(top_builddir)/co
- 	      -e 's|@''REPLACE_UNSETENV''@|$(REPLACE_UNSETENV)|g' \
- 	      -e 's|@''REPLACE_WCTOMB''@|$(REPLACE_WCTOMB)|g' \
- 	      -e '/definitions of _GL_FUNCDECL_RPL/r $(CXXDEFS_H)' \
--	      -e '/definition of _Noreturn/r $(_NORETURN_H)' \
- 	      -e '/definition of _GL_ARG_NONNULL/r $(ARG_NONNULL_H)' \
- 	      -e '/definition of _GL_WARN_ON_USE/r $(WARN_ON_USE_H)'; \
- 	} > $@-t && \
-@@ -793,15 +749,6 @@ EXTRA_libicrt_a_SOURCES += strerror.c
- 
- ## end   gnulib module strerror
- 
--## begin gnulib module strerror-override
--
--
--EXTRA_DIST += strerror-override.c strerror-override.h
--
--EXTRA_libicrt_a_SOURCES += strerror-override.c
--
--## end   gnulib module strerror-override
--
- ## begin gnulib module string
- 
- BUILT_SOURCES += string.h
-@@ -811,52 +758,47 @@ BUILT_SOURCES += string.h
- string.h: string.in.h $(top_builddir)/config.status $(CXXDEFS_H) $(ARG_NONNULL_H) $(WARN_ON_USE_H)
- 	$(AM_V_GEN)rm -f $@-t $@ && \
- 	{ echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */' && \
--	  sed -e 's|@''GUARD_PREFIX''@|GL|g' \
--	      -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
-+	  sed -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
- 	      -e 's|@''PRAGMA_SYSTEM_HEADER''@|@PRAGMA_SYSTEM_HEADER@|g' \
- 	      -e 's|@''PRAGMA_COLUMNS''@|@PRAGMA_COLUMNS@|g' \
- 	      -e 's|@''NEXT_STRING_H''@|$(NEXT_STRING_H)|g' \
--	      -e 's/@''GNULIB_FFSL''@/$(GNULIB_FFSL)/g' \
--	      -e 's/@''GNULIB_FFSLL''@/$(GNULIB_FFSLL)/g' \
--	      -e 's/@''GNULIB_MBSLEN''@/$(GNULIB_MBSLEN)/g' \
--	      -e 's/@''GNULIB_MBSNLEN''@/$(GNULIB_MBSNLEN)/g' \
--	      -e 's/@''GNULIB_MBSCHR''@/$(GNULIB_MBSCHR)/g' \
--	      -e 's/@''GNULIB_MBSRCHR''@/$(GNULIB_MBSRCHR)/g' \
--	      -e 's/@''GNULIB_MBSSTR''@/$(GNULIB_MBSSTR)/g' \
--	      -e 's/@''GNULIB_MBSCASECMP''@/$(GNULIB_MBSCASECMP)/g' \
--	      -e 's/@''GNULIB_MBSNCASECMP''@/$(GNULIB_MBSNCASECMP)/g' \
--	      -e 's/@''GNULIB_MBSPCASECMP''@/$(GNULIB_MBSPCASECMP)/g' \
--	      -e 's/@''GNULIB_MBSCASESTR''@/$(GNULIB_MBSCASESTR)/g' \
--	      -e 's/@''GNULIB_MBSCSPN''@/$(GNULIB_MBSCSPN)/g' \
--	      -e 's/@''GNULIB_MBSPBRK''@/$(GNULIB_MBSPBRK)/g' \
--	      -e 's/@''GNULIB_MBSSPN''@/$(GNULIB_MBSSPN)/g' \
--	      -e 's/@''GNULIB_MBSSEP''@/$(GNULIB_MBSSEP)/g' \
--	      -e 's/@''GNULIB_MBSTOK_R''@/$(GNULIB_MBSTOK_R)/g' \
--	      -e 's/@''GNULIB_MEMCHR''@/$(GNULIB_MEMCHR)/g' \
--	      -e 's/@''GNULIB_MEMMEM''@/$(GNULIB_MEMMEM)/g' \
--	      -e 's/@''GNULIB_MEMPCPY''@/$(GNULIB_MEMPCPY)/g' \
--	      -e 's/@''GNULIB_MEMRCHR''@/$(GNULIB_MEMRCHR)/g' \
--	      -e 's/@''GNULIB_RAWMEMCHR''@/$(GNULIB_RAWMEMCHR)/g' \
--	      -e 's/@''GNULIB_STPCPY''@/$(GNULIB_STPCPY)/g' \
--	      -e 's/@''GNULIB_STPNCPY''@/$(GNULIB_STPNCPY)/g' \
--	      -e 's/@''GNULIB_STRCHRNUL''@/$(GNULIB_STRCHRNUL)/g' \
--	      -e 's/@''GNULIB_STRDUP''@/$(GNULIB_STRDUP)/g' \
--	      -e 's/@''GNULIB_STRNCAT''@/$(GNULIB_STRNCAT)/g' \
--	      -e 's/@''GNULIB_STRNDUP''@/$(GNULIB_STRNDUP)/g' \
--	      -e 's/@''GNULIB_STRNLEN''@/$(GNULIB_STRNLEN)/g' \
--	      -e 's/@''GNULIB_STRPBRK''@/$(GNULIB_STRPBRK)/g' \
--	      -e 's/@''GNULIB_STRSEP''@/$(GNULIB_STRSEP)/g' \
--	      -e 's/@''GNULIB_STRSTR''@/$(GNULIB_STRSTR)/g' \
--	      -e 's/@''GNULIB_STRCASESTR''@/$(GNULIB_STRCASESTR)/g' \
--	      -e 's/@''GNULIB_STRTOK_R''@/$(GNULIB_STRTOK_R)/g' \
--	      -e 's/@''GNULIB_STRERROR''@/$(GNULIB_STRERROR)/g' \
--	      -e 's/@''GNULIB_STRERROR_R''@/$(GNULIB_STRERROR_R)/g' \
--	      -e 's/@''GNULIB_STRSIGNAL''@/$(GNULIB_STRSIGNAL)/g' \
--	      -e 's/@''GNULIB_STRVERSCMP''@/$(GNULIB_STRVERSCMP)/g' \
-+	      -e 's|@''GNULIB_MBSLEN''@|$(GNULIB_MBSLEN)|g' \
-+	      -e 's|@''GNULIB_MBSNLEN''@|$(GNULIB_MBSNLEN)|g' \
-+	      -e 's|@''GNULIB_MBSCHR''@|$(GNULIB_MBSCHR)|g' \
-+	      -e 's|@''GNULIB_MBSRCHR''@|$(GNULIB_MBSRCHR)|g' \
-+	      -e 's|@''GNULIB_MBSSTR''@|$(GNULIB_MBSSTR)|g' \
-+	      -e 's|@''GNULIB_MBSCASECMP''@|$(GNULIB_MBSCASECMP)|g' \
-+	      -e 's|@''GNULIB_MBSNCASECMP''@|$(GNULIB_MBSNCASECMP)|g' \
-+	      -e 's|@''GNULIB_MBSPCASECMP''@|$(GNULIB_MBSPCASECMP)|g' \
-+	      -e 's|@''GNULIB_MBSCASESTR''@|$(GNULIB_MBSCASESTR)|g' \
-+	      -e 's|@''GNULIB_MBSCSPN''@|$(GNULIB_MBSCSPN)|g' \
-+	      -e 's|@''GNULIB_MBSPBRK''@|$(GNULIB_MBSPBRK)|g' \
-+	      -e 's|@''GNULIB_MBSSPN''@|$(GNULIB_MBSSPN)|g' \
-+	      -e 's|@''GNULIB_MBSSEP''@|$(GNULIB_MBSSEP)|g' \
-+	      -e 's|@''GNULIB_MBSTOK_R''@|$(GNULIB_MBSTOK_R)|g' \
-+	      -e 's|@''GNULIB_MEMCHR''@|$(GNULIB_MEMCHR)|g' \
-+	      -e 's|@''GNULIB_MEMMEM''@|$(GNULIB_MEMMEM)|g' \
-+	      -e 's|@''GNULIB_MEMPCPY''@|$(GNULIB_MEMPCPY)|g' \
-+	      -e 's|@''GNULIB_MEMRCHR''@|$(GNULIB_MEMRCHR)|g' \
-+	      -e 's|@''GNULIB_RAWMEMCHR''@|$(GNULIB_RAWMEMCHR)|g' \
-+	      -e 's|@''GNULIB_STPCPY''@|$(GNULIB_STPCPY)|g' \
-+	      -e 's|@''GNULIB_STPNCPY''@|$(GNULIB_STPNCPY)|g' \
-+	      -e 's|@''GNULIB_STRCHRNUL''@|$(GNULIB_STRCHRNUL)|g' \
-+	      -e 's|@''GNULIB_STRDUP''@|$(GNULIB_STRDUP)|g' \
-+	      -e 's|@''GNULIB_STRNCAT''@|$(GNULIB_STRNCAT)|g' \
-+	      -e 's|@''GNULIB_STRNDUP''@|$(GNULIB_STRNDUP)|g' \
-+	      -e 's|@''GNULIB_STRNLEN''@|$(GNULIB_STRNLEN)|g' \
-+	      -e 's|@''GNULIB_STRPBRK''@|$(GNULIB_STRPBRK)|g' \
-+	      -e 's|@''GNULIB_STRSEP''@|$(GNULIB_STRSEP)|g' \
-+	      -e 's|@''GNULIB_STRSTR''@|$(GNULIB_STRSTR)|g' \
-+	      -e 's|@''GNULIB_STRCASESTR''@|$(GNULIB_STRCASESTR)|g' \
-+	      -e 's|@''GNULIB_STRTOK_R''@|$(GNULIB_STRTOK_R)|g' \
-+	      -e 's|@''GNULIB_STRERROR''@|$(GNULIB_STRERROR)|g' \
-+	      -e 's|@''GNULIB_STRERROR_R''@|$(GNULIB_STRERROR_R)|g' \
-+	      -e 's|@''GNULIB_STRSIGNAL''@|$(GNULIB_STRSIGNAL)|g' \
-+	      -e 's|@''GNULIB_STRVERSCMP''@|$(GNULIB_STRVERSCMP)|g' \
- 	      < $(srcdir)/string.in.h | \
--	  sed -e 's|@''HAVE_FFSL''@|$(HAVE_FFSL)|g' \
--	      -e 's|@''HAVE_FFSLL''@|$(HAVE_FFSLL)|g' \
--	      -e 's|@''HAVE_MBSLEN''@|$(HAVE_MBSLEN)|g' \
-+	  sed -e 's|@''HAVE_MBSLEN''@|$(HAVE_MBSLEN)|g' \
- 	      -e 's|@''HAVE_MEMCHR''@|$(HAVE_MEMCHR)|g' \
- 	      -e 's|@''HAVE_DECL_MEMMEM''@|$(HAVE_DECL_MEMMEM)|g' \
- 	      -e 's|@''HAVE_MEMPCPY''@|$(HAVE_MEMPCPY)|g' \
-@@ -912,23 +854,22 @@ sys/stat.h: sys_stat.in.h $(top_builddir
- 	$(AM_V_at)$(MKDIR_P) sys
- 	$(AM_V_GEN)rm -f $@-t $@ && \
- 	{ echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \
--	  sed -e 's|@''GUARD_PREFIX''@|GL|g' \
--	      -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
-+	  sed -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
- 	      -e 's|@''PRAGMA_SYSTEM_HEADER''@|@PRAGMA_SYSTEM_HEADER@|g' \
- 	      -e 's|@''PRAGMA_COLUMNS''@|@PRAGMA_COLUMNS@|g' \
- 	      -e 's|@''NEXT_SYS_STAT_H''@|$(NEXT_SYS_STAT_H)|g' \
--	      -e 's/@''GNULIB_FCHMODAT''@/$(GNULIB_FCHMODAT)/g' \
--	      -e 's/@''GNULIB_FSTATAT''@/$(GNULIB_FSTATAT)/g' \
--	      -e 's/@''GNULIB_FUTIMENS''@/$(GNULIB_FUTIMENS)/g' \
--	      -e 's/@''GNULIB_LCHMOD''@/$(GNULIB_LCHMOD)/g' \
--	      -e 's/@''GNULIB_LSTAT''@/$(GNULIB_LSTAT)/g' \
--	      -e 's/@''GNULIB_MKDIRAT''@/$(GNULIB_MKDIRAT)/g' \
--	      -e 's/@''GNULIB_MKFIFO''@/$(GNULIB_MKFIFO)/g' \
--	      -e 's/@''GNULIB_MKFIFOAT''@/$(GNULIB_MKFIFOAT)/g' \
--	      -e 's/@''GNULIB_MKNOD''@/$(GNULIB_MKNOD)/g' \
--	      -e 's/@''GNULIB_MKNODAT''@/$(GNULIB_MKNODAT)/g' \
--	      -e 's/@''GNULIB_STAT''@/$(GNULIB_STAT)/g' \
--	      -e 's/@''GNULIB_UTIMENSAT''@/$(GNULIB_UTIMENSAT)/g' \
-+	      -e 's|@''GNULIB_FCHMODAT''@|$(GNULIB_FCHMODAT)|g' \
-+	      -e 's|@''GNULIB_FSTATAT''@|$(GNULIB_FSTATAT)|g' \
-+	      -e 's|@''GNULIB_FUTIMENS''@|$(GNULIB_FUTIMENS)|g' \
-+	      -e 's|@''GNULIB_LCHMOD''@|$(GNULIB_LCHMOD)|g' \
-+	      -e 's|@''GNULIB_LSTAT''@|$(GNULIB_LSTAT)|g' \
-+	      -e 's|@''GNULIB_MKDIRAT''@|$(GNULIB_MKDIRAT)|g' \
-+	      -e 's|@''GNULIB_MKFIFO''@|$(GNULIB_MKFIFO)|g' \
-+	      -e 's|@''GNULIB_MKFIFOAT''@|$(GNULIB_MKFIFOAT)|g' \
-+	      -e 's|@''GNULIB_MKNOD''@|$(GNULIB_MKNOD)|g' \
-+	      -e 's|@''GNULIB_MKNODAT''@|$(GNULIB_MKNODAT)|g' \
-+	      -e 's|@''GNULIB_STAT''@|$(GNULIB_STAT)|g' \
-+	      -e 's|@''GNULIB_UTIMENSAT''@|$(GNULIB_UTIMENSAT)|g' \
- 	      -e 's|@''HAVE_FCHMODAT''@|$(HAVE_FCHMODAT)|g' \
- 	      -e 's|@''HAVE_FSTATAT''@|$(HAVE_FSTATAT)|g' \
- 	      -e 's|@''HAVE_FUTIMENS''@|$(HAVE_FUTIMENS)|g' \
-@@ -971,16 +912,15 @@ BUILT_SOURCES += time.h
- time.h: time.in.h $(top_builddir)/config.status $(CXXDEFS_H) $(ARG_NONNULL_H) $(WARN_ON_USE_H)
- 	$(AM_V_GEN)rm -f $@-t $@ && \
- 	{ echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */' && \
--	  sed -e 's|@''GUARD_PREFIX''@|GL|g' \
--	      -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
-+	  sed -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
- 	      -e 's|@''PRAGMA_SYSTEM_HEADER''@|@PRAGMA_SYSTEM_HEADER@|g' \
- 	      -e 's|@''PRAGMA_COLUMNS''@|@PRAGMA_COLUMNS@|g' \
- 	      -e 's|@''NEXT_TIME_H''@|$(NEXT_TIME_H)|g' \
--	      -e 's/@''GNULIB_MKTIME''@/$(GNULIB_MKTIME)/g' \
--	      -e 's/@''GNULIB_NANOSLEEP''@/$(GNULIB_NANOSLEEP)/g' \
--	      -e 's/@''GNULIB_STRPTIME''@/$(GNULIB_STRPTIME)/g' \
--	      -e 's/@''GNULIB_TIMEGM''@/$(GNULIB_TIMEGM)/g' \
--	      -e 's/@''GNULIB_TIME_R''@/$(GNULIB_TIME_R)/g' \
-+	      -e 's|@''GNULIB_MKTIME''@|$(GNULIB_MKTIME)|g' \
-+	      -e 's|@''GNULIB_NANOSLEEP''@|$(GNULIB_NANOSLEEP)|g' \
-+	      -e 's|@''GNULIB_STRPTIME''@|$(GNULIB_STRPTIME)|g' \
-+	      -e 's|@''GNULIB_TIMEGM''@|$(GNULIB_TIMEGM)|g' \
-+	      -e 's|@''GNULIB_TIME_R''@|$(GNULIB_TIME_R)|g' \
- 	      -e 's|@''HAVE_DECL_LOCALTIME_R''@|$(HAVE_DECL_LOCALTIME_R)|g' \
- 	      -e 's|@''HAVE_NANOSLEEP''@|$(HAVE_NANOSLEEP)|g' \
- 	      -e 's|@''HAVE_STRPTIME''@|$(HAVE_STRPTIME)|g' \
-@@ -1013,56 +953,55 @@ BUILT_SOURCES += unistd.h
- unistd.h: unistd.in.h $(top_builddir)/config.status $(CXXDEFS_H) $(ARG_NONNULL_H) $(WARN_ON_USE_H)
- 	$(AM_V_GEN)rm -f $@-t $@ && \
- 	{ echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \
--	  sed -e 's|@''GUARD_PREFIX''@|GL|g' \
--	      -e 's|@''HAVE_UNISTD_H''@|$(HAVE_UNISTD_H)|g' \
-+	  sed -e 's|@''HAVE_UNISTD_H''@|$(HAVE_UNISTD_H)|g' \
- 	      -e 's|@''INCLUDE_NEXT''@|$(INCLUDE_NEXT)|g' \
- 	      -e 's|@''PRAGMA_SYSTEM_HEADER''@|@PRAGMA_SYSTEM_HEADER@|g' \
- 	      -e 's|@''PRAGMA_COLUMNS''@|@PRAGMA_COLUMNS@|g' \
- 	      -e 's|@''NEXT_UNISTD_H''@|$(NEXT_UNISTD_H)|g' \
--	      -e 's/@''GNULIB_CHOWN''@/$(GNULIB_CHOWN)/g' \
--	      -e 's/@''GNULIB_CLOSE''@/$(GNULIB_CLOSE)/g' \
--	      -e 's/@''GNULIB_DUP2''@/$(GNULIB_DUP2)/g' \
--	      -e 's/@''GNULIB_DUP3''@/$(GNULIB_DUP3)/g' \
--	      -e 's/@''GNULIB_ENVIRON''@/$(GNULIB_ENVIRON)/g' \
--	      -e 's/@''GNULIB_EUIDACCESS''@/$(GNULIB_EUIDACCESS)/g' \
--	      -e 's/@''GNULIB_FACCESSAT''@/$(GNULIB_FACCESSAT)/g' \
--	      -e 's/@''GNULIB_FCHDIR''@/$(GNULIB_FCHDIR)/g' \
--	      -e 's/@''GNULIB_FCHOWNAT''@/$(GNULIB_FCHOWNAT)/g' \
--	      -e 's/@''GNULIB_FSYNC''@/$(GNULIB_FSYNC)/g' \
--	      -e 's/@''GNULIB_FTRUNCATE''@/$(GNULIB_FTRUNCATE)/g' \
--	      -e 's/@''GNULIB_GETCWD''@/$(GNULIB_GETCWD)/g' \
--	      -e 's/@''GNULIB_GETDOMAINNAME''@/$(GNULIB_GETDOMAINNAME)/g' \
--	      -e 's/@''GNULIB_GETDTABLESIZE''@/$(GNULIB_GETDTABLESIZE)/g' \
--	      -e 's/@''GNULIB_GETGROUPS''@/$(GNULIB_GETGROUPS)/g' \
--	      -e 's/@''GNULIB_GETHOSTNAME''@/$(GNULIB_GETHOSTNAME)/g' \
--	      -e 's/@''GNULIB_GETLOGIN''@/$(GNULIB_GETLOGIN)/g' \
--	      -e 's/@''GNULIB_GETLOGIN_R''@/$(GNULIB_GETLOGIN_R)/g' \
--	      -e 's/@''GNULIB_GETPAGESIZE''@/$(GNULIB_GETPAGESIZE)/g' \
--	      -e 's/@''GNULIB_GETUSERSHELL''@/$(GNULIB_GETUSERSHELL)/g' \
--	      -e 's/@''GNULIB_GROUP_MEMBER''@/$(GNULIB_GROUP_MEMBER)/g' \
--	      -e 's/@''GNULIB_LCHOWN''@/$(GNULIB_LCHOWN)/g' \
--	      -e 's/@''GNULIB_LINK''@/$(GNULIB_LINK)/g' \
--	      -e 's/@''GNULIB_LINKAT''@/$(GNULIB_LINKAT)/g' \
--	      -e 's/@''GNULIB_LSEEK''@/$(GNULIB_LSEEK)/g' \
--	      -e 's/@''GNULIB_PIPE''@/$(GNULIB_PIPE)/g' \
--	      -e 's/@''GNULIB_PIPE2''@/$(GNULIB_PIPE2)/g' \
--	      -e 's/@''GNULIB_PREAD''@/$(GNULIB_PREAD)/g' \
--	      -e 's/@''GNULIB_PWRITE''@/$(GNULIB_PWRITE)/g' \
--	      -e 's/@''GNULIB_READ''@/$(GNULIB_READ)/g' \
--	      -e 's/@''GNULIB_READLINK''@/$(GNULIB_READLINK)/g' \
--	      -e 's/@''GNULIB_READLINKAT''@/$(GNULIB_READLINKAT)/g' \
--	      -e 's/@''GNULIB_RMDIR''@/$(GNULIB_RMDIR)/g' \
--	      -e 's/@''GNULIB_SLEEP''@/$(GNULIB_SLEEP)/g' \
--	      -e 's/@''GNULIB_SYMLINK''@/$(GNULIB_SYMLINK)/g' \
--	      -e 's/@''GNULIB_SYMLINKAT''@/$(GNULIB_SYMLINKAT)/g' \
--	      -e 's/@''GNULIB_TTYNAME_R''@/$(GNULIB_TTYNAME_R)/g' \
--	      -e 's/@''GNULIB_UNISTD_H_GETOPT''@/$(GNULIB_UNISTD_H_GETOPT)/g' \
--	      -e 's/@''GNULIB_UNISTD_H_NONBLOCKING''@/$(GNULIB_UNISTD_H_NONBLOCKING)/g' \
--	      -e 's/@''GNULIB_UNISTD_H_SIGPIPE''@/$(GNULIB_UNISTD_H_SIGPIPE)/g' \
--	      -e 's/@''GNULIB_UNLINK''@/$(GNULIB_UNLINK)/g' \
--	      -e 's/@''GNULIB_UNLINKAT''@/$(GNULIB_UNLINKAT)/g' \
--	      -e 's/@''GNULIB_USLEEP''@/$(GNULIB_USLEEP)/g' \
--	      -e 's/@''GNULIB_WRITE''@/$(GNULIB_WRITE)/g' \
-+	      -e 's|@''GNULIB_CHOWN''@|$(GNULIB_CHOWN)|g' \
-+	      -e 's|@''GNULIB_CLOSE''@|$(GNULIB_CLOSE)|g' \
-+	      -e 's|@''GNULIB_DUP2''@|$(GNULIB_DUP2)|g' \
-+	      -e 's|@''GNULIB_DUP3''@|$(GNULIB_DUP3)|g' \
-+	      -e 's|@''GNULIB_ENVIRON''@|$(GNULIB_ENVIRON)|g' \
-+	      -e 's|@''GNULIB_EUIDACCESS''@|$(GNULIB_EUIDACCESS)|g' \
-+	      -e 's|@''GNULIB_FACCESSAT''@|$(GNULIB_FACCESSAT)|g' \
-+	      -e 's|@''GNULIB_FCHDIR''@|$(GNULIB_FCHDIR)|g' \
-+	      -e 's|@''GNULIB_FCHOWNAT''@|$(GNULIB_FCHOWNAT)|g' \
-+	      -e 's|@''GNULIB_FSYNC''@|$(GNULIB_FSYNC)|g' \
-+	      -e 's|@''GNULIB_FTRUNCATE''@|$(GNULIB_FTRUNCATE)|g' \
-+	      -e 's|@''GNULIB_GETCWD''@|$(GNULIB_GETCWD)|g' \
-+	      -e 's|@''GNULIB_GETDOMAINNAME''@|$(GNULIB_GETDOMAINNAME)|g' \
-+	      -e 's|@''GNULIB_GETDTABLESIZE''@|$(GNULIB_GETDTABLESIZE)|g' \
-+	      -e 's|@''GNULIB_GETGROUPS''@|$(GNULIB_GETGROUPS)|g' \
-+	      -e 's|@''GNULIB_GETHOSTNAME''@|$(GNULIB_GETHOSTNAME)|g' \
-+	      -e 's|@''GNULIB_GETLOGIN''@|$(GNULIB_GETLOGIN)|g' \
-+	      -e 's|@''GNULIB_GETLOGIN_R''@|$(GNULIB_GETLOGIN_R)|g' \
-+	      -e 's|@''GNULIB_GETPAGESIZE''@|$(GNULIB_GETPAGESIZE)|g' \
-+	      -e 's|@''GNULIB_GETUSERSHELL''@|$(GNULIB_GETUSERSHELL)|g' \
-+	      -e 's|@''GNULIB_GROUP_MEMBER''@|$(GNULIB_GROUP_MEMBER)|g' \
-+	      -e 's|@''GNULIB_LCHOWN''@|$(GNULIB_LCHOWN)|g' \
-+	      -e 's|@''GNULIB_LINK''@|$(GNULIB_LINK)|g' \
-+	      -e 's|@''GNULIB_LINKAT''@|$(GNULIB_LINKAT)|g' \
-+	      -e 's|@''GNULIB_LSEEK''@|$(GNULIB_LSEEK)|g' \
-+	      -e 's|@''GNULIB_PIPE''@|$(GNULIB_PIPE)|g' \
-+	      -e 's|@''GNULIB_PIPE2''@|$(GNULIB_PIPE2)|g' \
-+	      -e 's|@''GNULIB_PREAD''@|$(GNULIB_PREAD)|g' \
-+	      -e 's|@''GNULIB_PWRITE''@|$(GNULIB_PWRITE)|g' \
-+	      -e 's|@''GNULIB_READ''@|$(GNULIB_READ)|g' \
-+	      -e 's|@''GNULIB_READLINK''@|$(GNULIB_READLINK)|g' \
-+	      -e 's|@''GNULIB_READLINKAT''@|$(GNULIB_READLINKAT)|g' \
-+	      -e 's|@''GNULIB_RMDIR''@|$(GNULIB_RMDIR)|g' \
-+	      -e 's|@''GNULIB_SLEEP''@|$(GNULIB_SLEEP)|g' \
-+	      -e 's|@''GNULIB_SYMLINK''@|$(GNULIB_SYMLINK)|g' \
-+	      -e 's|@''GNULIB_SYMLINKAT''@|$(GNULIB_SYMLINKAT)|g' \
-+	      -e 's|@''GNULIB_TTYNAME_R''@|$(GNULIB_TTYNAME_R)|g' \
-+	      -e 's|@''GNULIB_UNISTD_H_GETOPT''@|$(GNULIB_UNISTD_H_GETOPT)|g' \
-+	      -e 's|@''GNULIB_UNISTD_H_NONBLOCKING''@|$(GNULIB_UNISTD_H_NONBLOCKING)|g' \
-+	      -e 's|@''GNULIB_UNISTD_H_SIGPIPE''@|$(GNULIB_UNISTD_H_SIGPIPE)|g' \
-+	      -e 's|@''GNULIB_UNLINK''@|$(GNULIB_UNLINK)|g' \
-+	      -e 's|@''GNULIB_UNLINKAT''@|$(GNULIB_UNLINKAT)|g' \
-+	      -e 's|@''GNULIB_USLEEP''@|$(GNULIB_USLEEP)|g' \
-+	      -e 's|@''GNULIB_WRITE''@|$(GNULIB_WRITE)|g' \
- 	      < $(srcdir)/unistd.in.h | \
- 	  sed -e 's|@''HAVE_CHOWN''@|$(HAVE_CHOWN)|g' \
- 	      -e 's|@''HAVE_DUP2''@|$(HAVE_DUP2)|g' \
-@@ -1198,6 +1137,25 @@ EXTRA_DIST += verify.h
- 
- ## end   gnulib module verify
- 
-+## begin gnulib module warn-on-use
-+
-+BUILT_SOURCES += warn-on-use.h
-+# The warn-on-use.h that gets inserted into generated .h files is the same as
-+# build-aux/warn-on-use.h, except that it has the copyright header cut off.
-+warn-on-use.h: $(top_srcdir)/build-aux/warn-on-use.h
-+	$(AM_V_GEN)rm -f $@-t $@ && \
-+	sed -n -e '/^.ifndef/,$$p' \
-+	  < $(top_srcdir)/build-aux/warn-on-use.h \
-+	  > $@-t && \
-+	mv $@-t $@
-+MOSTLYCLEANFILES += warn-on-use.h warn-on-use.h-t
-+
-+WARN_ON_USE_H=warn-on-use.h
-+
-+EXTRA_DIST += $(top_srcdir)/build-aux/warn-on-use.h
-+
-+## end   gnulib module warn-on-use
-+
- ## begin gnulib module xalloc
- 
- libicrt_a_SOURCES += xalloc.h xmalloc.c xstrdup.c
-diff -Naurp libiconv-1.14.org//srclib/pathmax.h libiconv-1.14/srclib/pathmax.h
---- libiconv-1.14.org//srclib/pathmax.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/pathmax.h	2012-01-08 02:07:40.218484451 -0800
-@@ -19,27 +19,6 @@
- #ifndef _PATHMAX_H
- # define _PATHMAX_H
- 
--/* POSIX:2008 defines PATH_MAX to be the maximum number of bytes in a filename,
--   including the terminating NUL byte.
--   <http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/limits.h.html>
--   PATH_MAX is not defined on systems which have no limit on filename length,
--   such as GNU/Hurd.
--
--   This file does *not* define PATH_MAX always.  Programs that use this file
--   can handle the GNU/Hurd case in several ways:
--     - Either with a package-wide handling, or with a per-file handling,
--     - Either through a
--         #ifdef PATH_MAX
--       or through a fallback like
--         #ifndef PATH_MAX
--         # define PATH_MAX 8192
--         #endif
--       or through a fallback like
--         #ifndef PATH_MAX
--         # define PATH_MAX pathconf ("/", _PC_PATH_MAX)
--         #endif
-- */
--
- # include <unistd.h>
- 
- # include <limits.h>
-@@ -48,6 +27,11 @@
- #  define _POSIX_PATH_MAX 256
- # endif
- 
-+# if !defined PATH_MAX && defined _PC_PATH_MAX && defined HAVE_PATHCONF
-+#  define PATH_MAX (pathconf ("/", _PC_PATH_MAX) < 1 ? 1024 \
-+                    : pathconf ("/", _PC_PATH_MAX))
-+# endif
-+
- /* Don't include sys/param.h if it already has been.  */
- # if defined HAVE_SYS_PARAM_H && !defined PATH_MAX && !defined MAXPATHLEN
- #  include <sys/param.h>
-@@ -57,13 +41,8 @@
- #  define PATH_MAX MAXPATHLEN
- # endif
- 
--# ifdef __hpux
--/* On HP-UX, PATH_MAX designates the maximum number of bytes in a filename,
--   *not* including the terminating NUL byte, and is set to 1023.
--   Additionally, when _XOPEN_SOURCE is defined to 500 or more, PATH_MAX is
--   not defined at all any more.  */
--#  undef PATH_MAX
--#  define PATH_MAX 1024
-+# ifndef PATH_MAX
-+#  define PATH_MAX _POSIX_PATH_MAX
- # endif
- 
- #endif /* _PATHMAX_H */
-diff -Naurp libiconv-1.14.org//srclib/relocwrapper.c libiconv-1.14/srclib/relocwrapper.c
---- libiconv-1.14.org//srclib/relocwrapper.c	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/relocwrapper.c	2012-01-08 02:07:40.282484455 -0800
-@@ -29,6 +29,7 @@
-     -> relocatable
-     -> setenv
-        -> malloca
-+    -> strerror
-     -> c-ctype
- 
-    Macros that need to be set while compiling this file:
-diff -Naurp libiconv-1.14.org//srclib/safe-read.h libiconv-1.14/srclib/safe-read.h
---- libiconv-1.14.org//srclib/safe-read.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/safe-read.h	2012-01-08 02:07:40.298484455 -0800
-@@ -14,19 +14,6 @@
-    You should have received a copy of the GNU General Public License
-    along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
- 
--/* Some system calls may be interrupted and fail with errno = EINTR in the
--   following situations:
--     - The process is stopped and restarted (signal SIGSTOP and SIGCONT, user
--       types Ctrl-Z) on some platforms: MacOS X.
--     - The process receives a signal for which a signal handler was installed
--       with sigaction() with an sa_flags field that does not contain
--       SA_RESTART.
--     - The process receives a signal for which a signal handler was installed
--       with signal() and for which no call to siginterrupt(sig,0) was done,
--       on some platforms: AIX, HP-UX, IRIX, OSF/1, Solaris.
--
--   This module provides a wrapper around read() that handles EINTR.  */
--
- #include <stddef.h>
- 
- #ifdef __cplusplus
-diff -Naurp libiconv-1.14.org//srclib/signal.in.h libiconv-1.14/srclib/signal.in.h
---- libiconv-1.14.org//srclib/signal.in.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/signal.in.h	2012-01-08 02:07:40.314484456 -0800
-@@ -20,49 +20,21 @@
- #endif
- @PRAGMA_COLUMNS@
- 
--#if defined __need_sig_atomic_t || defined __need_sigset_t || defined _GL_ALREADY_INCLUDING_SIGNAL_H || (defined _SIGNAL_H && !defined __SIZEOF_PTHREAD_MUTEX_T)
--/* Special invocation convention:
--   - Inside glibc header files.
--   - On glibc systems we have a sequence of nested includes
--     <signal.h> -> <ucontext.h> -> <signal.h>.
--     In this situation, the functions are not yet declared, therefore we cannot
--     provide the C++ aliases.
--   - On glibc systems with GCC 4.3 we have a sequence of nested includes
--     <csignal> -> </usr/include/signal.h> -> <sys/ucontext.h> -> <signal.h>.
--     In this situation, some of the functions are not yet declared, therefore
--     we cannot provide the C++ aliases.  */
-+#if defined __need_sig_atomic_t || defined __need_sigset_t
-+/* Special invocation convention inside glibc header files.  */
- 
- # @INCLUDE_NEXT@ @NEXT_SIGNAL_H@
- 
- #else
- /* Normal invocation convention.  */
- 
--#ifndef _@GUARD_PREFIX@_SIGNAL_H
--
--#define _GL_ALREADY_INCLUDING_SIGNAL_H
--
--/* Define pid_t, uid_t.
--   Also, mingw defines sigset_t not in <signal.h>, but in <sys/types.h>.
--   On Solaris 10, <signal.h> includes <sys/types.h>, which eventually includes
--   us; so include <sys/types.h> now, before the second inclusion guard.  */
--#include <sys/types.h>
-+#ifndef _GL_SIGNAL_H
- 
- /* The include_next requires a split double-inclusion guard.  */
- #@INCLUDE_NEXT@ @NEXT_SIGNAL_H@
- 
--#undef _GL_ALREADY_INCLUDING_SIGNAL_H
--
--#ifndef _@GUARD_PREFIX@_SIGNAL_H
--#define _@GUARD_PREFIX@_SIGNAL_H
--
--/* MacOS X 10.3, FreeBSD 6.4, OpenBSD 3.8, OSF/1 4.0, Solaris 2.6 declare
--   pthread_sigmask in <pthread.h>, not in <signal.h>.
--   But avoid namespace pollution on glibc systems.*/
--#if (@GNULIB_PTHREAD_SIGMASK@ || defined GNULIB_POSIXCHECK) \
--    && ((defined __APPLE__ && defined __MACH__) || defined __FreeBSD__ || defined __OpenBSD__ || defined __osf__ || defined __sun) \
--    && ! defined __GLIBC__
--# include <pthread.h>
--#endif
-+#ifndef _GL_SIGNAL_H
-+#define _GL_SIGNAL_H
- 
- /* The definitions of _GL_FUNCDECL_RPL etc. are copied here.  */
- 
-@@ -70,6 +42,10 @@
- 
- /* The definition of _GL_WARN_ON_USE is copied here.  */
- 
-+/* Define pid_t, uid_t.
-+   Also, mingw defines sigset_t not in <signal.h>, but in <sys/types.h>.  */
-+#include <sys/types.h>
-+
- /* On AIX, sig_atomic_t already includes volatile.  C99 requires that
-    'volatile sig_atomic_t' ignore the extra modifier, but C89 did not.
-    Hence, redefine this to a non-volatile type as needed.  */
-@@ -124,34 +100,6 @@ typedef void (*sighandler_t) (int);
- #endif
- 
- 
--#if @GNULIB_PTHREAD_SIGMASK@
--# if @REPLACE_PTHREAD_SIGMASK@
--#  if !(defined __cplusplus && defined GNULIB_NAMESPACE)
--#   undef pthread_sigmask
--#   define pthread_sigmask rpl_pthread_sigmask
--#  endif
--_GL_FUNCDECL_RPL (pthread_sigmask, int,
--                  (int how, const sigset_t *new_mask, sigset_t *old_mask));
--_GL_CXXALIAS_RPL (pthread_sigmask, int,
--                  (int how, const sigset_t *new_mask, sigset_t *old_mask));
--# else
--#  if !@HAVE_PTHREAD_SIGMASK@
--_GL_FUNCDECL_SYS (pthread_sigmask, int,
--                  (int how, const sigset_t *new_mask, sigset_t *old_mask));
--#  endif
--_GL_CXXALIAS_SYS (pthread_sigmask, int,
--                  (int how, const sigset_t *new_mask, sigset_t *old_mask));
--# endif
--_GL_CXXALIASWARN (pthread_sigmask);
--#elif defined GNULIB_POSIXCHECK
--# undef pthread_sigmask
--# if HAVE_RAW_DECL_PTHREAD_SIGMASK
--_GL_WARN_ON_USE (pthread_sigmask, "pthread_sigmask is not portable - "
--                 "use gnulib module pthread_sigmask for portability");
--# endif
--#endif
--
--
- #if @GNULIB_SIGPROCMASK@
- # if !@HAVE_POSIX_SIGNALBLOCKING@
- 
-@@ -423,6 +371,6 @@ _GL_WARN_ON_USE (sigaction, "sigaction i
- #endif
- 
- 
--#endif /* _@GUARD_PREFIX@_SIGNAL_H */
--#endif /* _@GUARD_PREFIX@_SIGNAL_H */
-+#endif /* _GL_SIGNAL_H */
-+#endif /* _GL_SIGNAL_H */
- #endif
-diff -Naurp libiconv-1.14.org//srclib/stat.c libiconv-1.14/srclib/stat.c
---- libiconv-1.14.org//srclib/stat.c	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/stat.c	2012-01-08 02:07:40.330484457 -0800
-@@ -38,7 +38,6 @@ orig_stat (const char *filename, struct
- #include <stdbool.h>
- #include <string.h>
- #include "dosname.h"
--#include "verify.h"
- 
- /* Store information about NAME into ST.  Work around bugs with
-    trailing slashes.  Mingw has other bugs (such as st_ino always
-@@ -64,12 +63,6 @@ rpl_stat (char const *name, struct stat
-     }
- #endif /* REPLACE_FUNC_STAT_FILE */
- #if REPLACE_FUNC_STAT_DIR
--  /* The only known systems where REPLACE_FUNC_STAT_DIR is needed also
--     have a constant PATH_MAX.  */
--# ifndef PATH_MAX
--#  error "Please port this replacement to your platform"
--# endif
--
-   if (result == -1 && errno == ENOENT)
-     {
-       /* Due to mingw's oddities, there are some directories (like
-@@ -84,7 +77,6 @@ rpl_stat (char const *name, struct stat
-       char fixed_name[PATH_MAX + 1] = {0};
-       size_t len = strlen (name);
-       bool check_dir = false;
--      verify (PATH_MAX <= 4096);
-       if (PATH_MAX <= len)
-         errno = ENAMETOOLONG;
-       else if (len)
-diff -Naurp libiconv-1.14.org//srclib/stddef.in.h libiconv-1.14/srclib/stddef.in.h
---- libiconv-1.14.org//srclib/stddef.in.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/stddef.in.h	2012-01-08 02:07:40.346484458 -0800
-@@ -38,9 +38,9 @@
-    remember if special invocation has ever been used to obtain wint_t,
-    in which case we need to clean up NULL yet again.  */
- 
--# if !(defined _@GUARD_PREFIX@_STDDEF_H && defined _GL_STDDEF_WINT_T)
-+# if !(defined _GL_STDDEF_H && defined _GL_STDDEF_WINT_T)
- #  ifdef __need_wint_t
--#   undef _@GUARD_PREFIX@_STDDEF_H
-+#   undef _GL_STDDEF_H
- #   define _GL_STDDEF_WINT_T
- #  endif
- #  @INCLUDE_NEXT@ @NEXT_STDDEF_H@
-@@ -49,14 +49,14 @@
- #else
- /* Normal invocation convention.  */
- 
--# ifndef _@GUARD_PREFIX@_STDDEF_H
-+# ifndef _GL_STDDEF_H
- 
- /* The include_next requires a split double-inclusion guard.  */
- 
- #  @INCLUDE_NEXT@ @NEXT_STDDEF_H@
- 
--#  ifndef _@GUARD_PREFIX@_STDDEF_H
--#   define _@GUARD_PREFIX@_STDDEF_H
-+#  ifndef _GL_STDDEF_H
-+#   define _GL_STDDEF_H
- 
- /* On NetBSD 5.0, the definition of NULL lacks proper parentheses.  */
- #if @REPLACE_NULL@
-@@ -82,6 +82,6 @@
- # define wchar_t int
- #endif
- 
--#  endif /* _@GUARD_PREFIX@_STDDEF_H */
--# endif /* _@GUARD_PREFIX@_STDDEF_H */
-+#  endif /* _GL_STDDEF_H */
-+# endif /* _GL_STDDEF_H */
- #endif /* __need_XXX */
-diff -Naurp libiconv-1.14.org//srclib/stdint.in.h libiconv-1.14/srclib/stdint.in.h
---- libiconv-1.14.org//srclib/stdint.in.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/stdint.in.h	2012-01-08 02:07:40.358484458 -0800
-@@ -21,7 +21,7 @@
-  * <http://www.opengroup.org/susv3xbd/stdint.h.html>
-  */
- 
--#ifndef _@GUARD_PREFIX@_STDINT_H
-+#ifndef _GL_STDINT_H
- 
- #if __GNUC__ >= 3
- @PRAGMA_SYSTEM_HEADER@
-@@ -52,13 +52,13 @@
-   /* Other systems may have an incomplete or buggy <stdint.h>.
-      Include it before <inttypes.h>, since any "#include <stdint.h>"
-      in <inttypes.h> would reinclude us, skipping our contents because
--     _@GUARD_PREFIX@_STDINT_H is defined.
-+     _GL_STDINT_H is defined.
-      The include_next requires a split double-inclusion guard.  */
- # @INCLUDE_NEXT@ @NEXT_STDINT_H@
- #endif
- 
--#if ! defined _@GUARD_PREFIX@_STDINT_H && ! defined _GL_JUST_INCLUDE_SYSTEM_STDINT_H
--#define _@GUARD_PREFIX@_STDINT_H
-+#if ! defined _GL_STDINT_H && ! defined _GL_JUST_INCLUDE_SYSTEM_STDINT_H
-+#define _GL_STDINT_H
- 
- /* <sys/types.h> defines some of the stdint.h types as well, on glibc,
-    IRIX 6.5, and OpenBSD 3.8 (via <machine/types.h>).
-@@ -270,36 +270,26 @@ typedef unsigned long int gl_uintptr_t;
- /* Note: These types are compiler dependent. It may be unwise to use them in
-    public header files. */
- 
--/* If the system defines INTMAX_MAX, assume that intmax_t works, and
--   similarly for UINTMAX_MAX and uintmax_t.  This avoids problems with
--   assuming one type where another is used by the system.  */
--
--#ifndef INTMAX_MAX
--# undef INTMAX_C
--# undef intmax_t
--# if @HAVE_LONG_LONG_INT@ && LONG_MAX >> 30 == 1
-+#undef intmax_t
-+#if @HAVE_LONG_LONG_INT@ && LONG_MAX >> 30 == 1
- typedef long long int gl_intmax_t;
--#  define intmax_t gl_intmax_t
--# elif defined GL_INT64_T
--#  define intmax_t int64_t
--# else
-+# define intmax_t gl_intmax_t
-+#elif defined GL_INT64_T
-+# define intmax_t int64_t
-+#else
- typedef long int gl_intmax_t;
--#  define intmax_t gl_intmax_t
--# endif
-+# define intmax_t gl_intmax_t
- #endif
- 
--#ifndef UINTMAX_MAX
--# undef UINTMAX_C
--# undef uintmax_t
--# if @HAVE_UNSIGNED_LONG_LONG_INT@ && ULONG_MAX >> 31 == 1
-+#undef uintmax_t
-+#if @HAVE_UNSIGNED_LONG_LONG_INT@ && ULONG_MAX >> 31 == 1
- typedef unsigned long long int gl_uintmax_t;
--#  define uintmax_t gl_uintmax_t
--# elif defined GL_UINT64_T
--#  define uintmax_t uint64_t
--# else
-+# define uintmax_t gl_uintmax_t
-+#elif defined GL_UINT64_T
-+# define uintmax_t uint64_t
-+#else
- typedef unsigned long int gl_uintmax_t;
--#  define uintmax_t gl_uintmax_t
--# endif
-+# define uintmax_t gl_uintmax_t
- #endif
- 
- /* Verify that intmax_t and uintmax_t have the same size.  Too much code
-@@ -441,23 +431,21 @@ typedef int _verify_intmax_size[sizeof (
- 
- /* 7.18.2.5. Limits of greatest-width integer types */
- 
--#ifndef INTMAX_MAX
--# undef INTMAX_MIN
--# ifdef INT64_MAX
--#  define INTMAX_MIN  INT64_MIN
--#  define INTMAX_MAX  INT64_MAX
--# else
--#  define INTMAX_MIN  INT32_MIN
--#  define INTMAX_MAX  INT32_MAX
--# endif
-+#undef INTMAX_MIN
-+#undef INTMAX_MAX
-+#ifdef INT64_MAX
-+# define INTMAX_MIN  INT64_MIN
-+# define INTMAX_MAX  INT64_MAX
-+#else
-+# define INTMAX_MIN  INT32_MIN
-+# define INTMAX_MAX  INT32_MAX
- #endif
- 
--#ifndef UINTMAX_MAX
--# ifdef UINT64_MAX
--#  define UINTMAX_MAX  UINT64_MAX
--# else
--#  define UINTMAX_MAX  UINT32_MAX
--# endif
-+#undef UINTMAX_MAX
-+#ifdef UINT64_MAX
-+# define UINTMAX_MAX  UINT64_MAX
-+#else
-+# define UINTMAX_MAX  UINT32_MAX
- #endif
- 
- /* 7.18.3. Limits of other integer types */
-@@ -580,27 +568,25 @@ typedef int _verify_intmax_size[sizeof (
- 
- /* 7.18.4.2. Macros for greatest-width integer constants */
- 
--#ifndef INTMAX_C
--# if @HAVE_LONG_LONG_INT@ && LONG_MAX >> 30 == 1
--#  define INTMAX_C(x)   x##LL
--# elif defined GL_INT64_T
--#  define INTMAX_C(x)   INT64_C(x)
--# else
--#  define INTMAX_C(x)   x##L
--# endif
-+#undef INTMAX_C
-+#if @HAVE_LONG_LONG_INT@ && LONG_MAX >> 30 == 1
-+# define INTMAX_C(x)   x##LL
-+#elif defined GL_INT64_T
-+# define INTMAX_C(x)   INT64_C(x)
-+#else
-+# define INTMAX_C(x)   x##L
- #endif
- 
--#ifndef UINTMAX_C
--# if @HAVE_UNSIGNED_LONG_LONG_INT@ && ULONG_MAX >> 31 == 1
--#  define UINTMAX_C(x)  x##ULL
--# elif defined GL_UINT64_T
--#  define UINTMAX_C(x)  UINT64_C(x)
--# else
--#  define UINTMAX_C(x)  x##UL
--# endif
-+#undef UINTMAX_C
-+#if @HAVE_UNSIGNED_LONG_LONG_INT@ && ULONG_MAX >> 31 == 1
-+# define UINTMAX_C(x)  x##ULL
-+#elif defined GL_UINT64_T
-+# define UINTMAX_C(x)  UINT64_C(x)
-+#else
-+# define UINTMAX_C(x)  x##UL
- #endif
- 
- #endif /* !defined __cplusplus || defined __STDC_CONSTANT_MACROS */
- 
--#endif /* _@GUARD_PREFIX@_STDINT_H */
--#endif /* !defined _@GUARD_PREFIX@_STDINT_H && !defined _GL_JUST_INCLUDE_SYSTEM_STDINT_H */
-+#endif /* _GL_STDINT_H */
-+#endif /* !defined _GL_STDINT_H && !defined _GL_JUST_INCLUDE_SYSTEM_STDINT_H */
-diff -Naurp libiconv-1.14.org//srclib/stdio.in.h libiconv-1.14/srclib/stdio.in.h
---- libiconv-1.14.org//srclib/stdio.in.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/stdio.in.h	2012-01-08 02:07:40.378484459 -0800
-@@ -35,7 +35,7 @@
- #else
- /* Normal invocation convention.  */
- 
--#ifndef _@GUARD_PREFIX@_STDIO_H
-+#ifndef _GL_STDIO_H
- 
- #define _GL_ALREADY_INCLUDING_STDIO_H
- 
-@@ -44,8 +44,8 @@
- 
- #undef _GL_ALREADY_INCLUDING_STDIO_H
- 
--#ifndef _@GUARD_PREFIX@_STDIO_H
--#define _@GUARD_PREFIX@_STDIO_H
-+#ifndef _GL_STDIO_H
-+#define _GL_STDIO_H
- 
- /* Get va_list.  Needed on many systems, including glibc 2.8.  */
- #include <stdarg.h>
-@@ -461,6 +461,25 @@ _GL_FUNCDECL_SYS (fseeko, int, (FILE *fp
- _GL_CXXALIAS_SYS (fseeko, int, (FILE *fp, off_t offset, int whence));
- # endif
- _GL_CXXALIASWARN (fseeko);
-+# if (@REPLACE_FSEEKO@ || !@HAVE_FSEEKO@) && !@GNULIB_FSEEK@
-+   /* Provide an fseek function that is consistent with fseeko.  */
-+   /* In order to avoid that fseek gets defined as a macro here, the
-+      developer can request the 'fseek' module.  */
-+#  if !GNULIB_defined_fseek_function
-+#   undef fseek
-+#   define fseek rpl_fseek
-+static inline int _GL_ARG_NONNULL ((1))
-+rpl_fseek (FILE *fp, long offset, int whence)
-+{
-+#   if @REPLACE_FSEEKO@
-+  return rpl_fseeko (fp, offset, whence);
-+#   else
-+  return fseeko (fp, offset, whence);
-+#   endif
-+}
-+#   define GNULIB_defined_fseek_function 1
-+#  endif
-+# endif
- #elif defined GNULIB_POSIXCHECK
- # define _GL_FSEEK_WARN /* Category 1, above.  */
- # undef fseek
-@@ -520,6 +539,25 @@ _GL_FUNCDECL_SYS (ftello, off_t, (FILE *
- _GL_CXXALIAS_SYS (ftello, off_t, (FILE *fp));
- # endif
- _GL_CXXALIASWARN (ftello);
-+# if (@REPLACE_FTELLO@ || !@HAVE_FTELLO@) && !@GNULIB_FTELL@
-+   /* Provide an ftell function that is consistent with ftello.  */
-+   /* In order to avoid that ftell gets defined as a macro here, the
-+      developer can request the 'ftell' module.  */
-+#  if !GNULIB_defined_ftell_function
-+#   undef ftell
-+#   define ftell rpl_ftell
-+static inline long _GL_ARG_NONNULL ((1))
-+rpl_ftell (FILE *f)
-+{
-+#   if @REPLACE_FTELLO@
-+  return rpl_ftello (f);
-+#   else
-+  return ftello (f);
-+#   endif
-+}
-+#   define GNULIB_defined_ftell_function 1
-+#  endif
-+# endif
- #elif defined GNULIB_POSIXCHECK
- # define _GL_FTELL_WARN /* Category 1, above.  */
- # undef ftell
-@@ -1307,6 +1345,6 @@ _GL_WARN_ON_USE (vsprintf, "vsprintf is
- #endif
- 
- 
--#endif /* _@GUARD_PREFIX@_STDIO_H */
--#endif /* _@GUARD_PREFIX@_STDIO_H */
-+#endif /* _GL_STDIO_H */
-+#endif /* _GL_STDIO_H */
- #endif
-diff -Naurp libiconv-1.14.org//srclib/stdlib.in.h libiconv-1.14/srclib/stdlib.in.h
---- libiconv-1.14.org//srclib/stdlib.in.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/stdlib.in.h	2012-01-08 02:07:40.390484459 -0800
-@@ -28,13 +28,13 @@
- #else
- /* Normal invocation convention.  */
- 
--#ifndef _@GUARD_PREFIX@_STDLIB_H
-+#ifndef _GL_STDLIB_H
- 
- /* The include_next requires a split double-inclusion guard.  */
- #@INCLUDE_NEXT@ @NEXT_STDLIB_H@
- 
--#ifndef _@GUARD_PREFIX@_STDLIB_H
--#define _@GUARD_PREFIX@_STDLIB_H
-+#ifndef _GL_STDLIB_H
-+#define _GL_STDLIB_H
- 
- /* NetBSD 5.0 mis-defines NULL.  */
- #include <stddef.h>
-@@ -89,7 +89,11 @@ struct random_data
- # include <unistd.h>
- #endif
- 
--/* The definition of _Noreturn is copied here.  */
-+#if 3 <= __GNUC__ || __GNUC__ == 2 && 8 <= __GNUC_MINOR__
-+# define _GL_ATTRIBUTE_NORETURN __attribute__ ((__noreturn__))
-+#else
-+# define _GL_ATTRIBUTE_NORETURN
-+#endif
- 
- /* The definitions of _GL_FUNCDECL_RPL etc. are copied here.  */
- 
-@@ -116,7 +120,7 @@ struct random_data
- /* Terminate the current process with the given return code, without running
-    the 'atexit' handlers.  */
- # if !@HAVE__EXIT@
--_GL_FUNCDECL_SYS (_Exit, _Noreturn void, (int status));
-+_GL_FUNCDECL_SYS (_Exit, void, (int status) _GL_ATTRIBUTE_NORETURN);
- # endif
- _GL_CXXALIAS_SYS (_Exit, void, (int status));
- _GL_CXXALIASWARN (_Exit);
-@@ -757,6 +761,6 @@ _GL_CXXALIASWARN (wctomb);
- #endif
- 
- 
--#endif /* _@GUARD_PREFIX@_STDLIB_H */
--#endif /* _@GUARD_PREFIX@_STDLIB_H */
-+#endif /* _GL_STDLIB_H */
-+#endif /* _GL_STDLIB_H */
- #endif
-diff -Naurp libiconv-1.14.org//srclib/strerror.c libiconv-1.14/srclib/strerror.c
---- libiconv-1.14.org//srclib/strerror.c	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/strerror.c	2012-01-08 02:07:40.406484461 -0800
-@@ -17,54 +17,340 @@
- 
- #include <config.h>
- 
--/* Specification.  */
- #include <string.h>
- 
--#include <errno.h>
--#include <stdio.h>
--#include <stdlib.h>
--#include <string.h>
-+#if REPLACE_STRERROR
-+
-+# include <errno.h>
-+# include <stdio.h>
-+
-+# if GNULIB_defined_ESOCK /* native Windows platforms */
-+#  if HAVE_WINSOCK2_H
-+#   include <winsock2.h>
-+#  endif
-+# endif
- 
--#include "intprops.h"
--#include "strerror-override.h"
--#include "verify.h"
-+# include "intprops.h"
- 
- /* Use the system functions, not the gnulib overrides in this file.  */
--#undef sprintf
-+# undef sprintf
-+
-+# undef strerror
-+# if ! HAVE_DECL_STRERROR
-+#  define strerror(n) NULL
-+# endif
- 
- char *
--strerror (int n)
--#undef strerror
-+rpl_strerror (int n)
- {
--  static char buf[STACKBUF_LEN];
--  size_t len;
-+  char const *msg = NULL;
-+  /* These error messages are taken from glibc/sysdeps/gnu/errlist.c.  */
-+  switch (n)
-+    {
-+# if GNULIB_defined_ETXTBSY
-+    case ETXTBSY:
-+      msg = "Text file busy";
-+      break;
-+# endif
-+
-+# if GNULIB_defined_ESOCK /* native Windows platforms */
-+    /* EWOULDBLOCK is the same as EAGAIN.  */
-+    case EINPROGRESS:
-+      msg = "Operation now in progress";
-+      break;
-+    case EALREADY:
-+      msg = "Operation already in progress";
-+      break;
-+    case ENOTSOCK:
-+      msg = "Socket operation on non-socket";
-+      break;
-+    case EDESTADDRREQ:
-+      msg = "Destination address required";
-+      break;
-+    case EMSGSIZE:
-+      msg = "Message too long";
-+      break;
-+    case EPROTOTYPE:
-+      msg = "Protocol wrong type for socket";
-+      break;
-+    case ENOPROTOOPT:
-+      msg = "Protocol not available";
-+      break;
-+    case EPROTONOSUPPORT:
-+      msg = "Protocol not supported";
-+      break;
-+    case ESOCKTNOSUPPORT:
-+      msg = "Socket type not supported";
-+      break;
-+    case EOPNOTSUPP:
-+      msg = "Operation not supported";
-+      break;
-+    case EPFNOSUPPORT:
-+      msg = "Protocol family not supported";
-+      break;
-+    case EAFNOSUPPORT:
-+      msg = "Address family not supported by protocol";
-+      break;
-+    case EADDRINUSE:
-+      msg = "Address already in use";
-+      break;
-+    case EADDRNOTAVAIL:
-+      msg = "Cannot assign requested address";
-+      break;
-+    case ENETDOWN:
-+      msg = "Network is down";
-+      break;
-+    case ENETUNREACH:
-+      msg = "Network is unreachable";
-+      break;
-+    case ENETRESET:
-+      msg = "Network dropped connection on reset";
-+      break;
-+    case ECONNABORTED:
-+      msg = "Software caused connection abort";
-+      break;
-+    case ECONNRESET:
-+      msg = "Connection reset by peer";
-+      break;
-+    case ENOBUFS:
-+      msg = "No buffer space available";
-+      break;
-+    case EISCONN:
-+      msg = "Transport endpoint is already connected";
-+      break;
-+    case ENOTCONN:
-+      msg = "Transport endpoint is not connected";
-+      break;
-+    case ESHUTDOWN:
-+      msg = "Cannot send after transport endpoint shutdown";
-+      break;
-+    case ETOOMANYREFS:
-+      msg = "Too many references: cannot splice";
-+      break;
-+    case ETIMEDOUT:
-+      msg = "Connection timed out";
-+      break;
-+    case ECONNREFUSED:
-+      msg = "Connection refused";
-+      break;
-+    case ELOOP:
-+      msg = "Too many levels of symbolic links";
-+      break;
-+    case EHOSTDOWN:
-+      msg = "Host is down";
-+      break;
-+    case EHOSTUNREACH:
-+      msg = "No route to host";
-+      break;
-+    case EPROCLIM:
-+      msg = "Too many processes";
-+      break;
-+    case EUSERS:
-+      msg = "Too many users";
-+      break;
-+    case EDQUOT:
-+      msg = "Disk quota exceeded";
-+      break;
-+    case ESTALE:
-+      msg = "Stale NFS file handle";
-+      break;
-+    case EREMOTE:
-+      msg = "Object is remote";
-+      break;
-+#  if HAVE_WINSOCK2_H
-+    /* WSA_INVALID_HANDLE maps to EBADF */
-+    /* WSA_NOT_ENOUGH_MEMORY maps to ENOMEM */
-+    /* WSA_INVALID_PARAMETER maps to EINVAL */
-+    case WSA_OPERATION_ABORTED:
-+      msg = "Overlapped operation aborted";
-+      break;
-+    case WSA_IO_INCOMPLETE:
-+      msg = "Overlapped I/O event object not in signaled state";
-+      break;
-+    case WSA_IO_PENDING:
-+      msg = "Overlapped operations will complete later";
-+      break;
-+    /* WSAEINTR maps to EINTR */
-+    /* WSAEBADF maps to EBADF */
-+    /* WSAEACCES maps to EACCES */
-+    /* WSAEFAULT maps to EFAULT */
-+    /* WSAEINVAL maps to EINVAL */
-+    /* WSAEMFILE maps to EMFILE */
-+    /* WSAEWOULDBLOCK maps to EWOULDBLOCK */
-+    /* WSAEINPROGRESS is EINPROGRESS */
-+    /* WSAEALREADY is EALREADY */
-+    /* WSAENOTSOCK is ENOTSOCK */
-+    /* WSAEDESTADDRREQ is EDESTADDRREQ */
-+    /* WSAEMSGSIZE is EMSGSIZE */
-+    /* WSAEPROTOTYPE is EPROTOTYPE */
-+    /* WSAENOPROTOOPT is ENOPROTOOPT */
-+    /* WSAEPROTONOSUPPORT is EPROTONOSUPPORT */
-+    /* WSAESOCKTNOSUPPORT is ESOCKTNOSUPPORT */
-+    /* WSAEOPNOTSUPP is EOPNOTSUPP */
-+    /* WSAEPFNOSUPPORT is EPFNOSUPPORT */
-+    /* WSAEAFNOSUPPORT is EAFNOSUPPORT */
-+    /* WSAEADDRINUSE is EADDRINUSE */
-+    /* WSAEADDRNOTAVAIL is EADDRNOTAVAIL */
-+    /* WSAENETDOWN is ENETDOWN */
-+    /* WSAENETUNREACH is ENETUNREACH */
-+    /* WSAENETRESET is ENETRESET */
-+    /* WSAECONNABORTED is ECONNABORTED */
-+    /* WSAECONNRESET is ECONNRESET */
-+    /* WSAENOBUFS is ENOBUFS */
-+    /* WSAEISCONN is EISCONN */
-+    /* WSAENOTCONN is ENOTCONN */
-+    /* WSAESHUTDOWN is ESHUTDOWN */
-+    /* WSAETOOMANYREFS is ETOOMANYREFS */
-+    /* WSAETIMEDOUT is ETIMEDOUT */
-+    /* WSAECONNREFUSED is ECONNREFUSED */
-+    /* WSAELOOP is ELOOP */
-+    /* WSAENAMETOOLONG maps to ENAMETOOLONG */
-+    /* WSAEHOSTDOWN is EHOSTDOWN */
-+    /* WSAEHOSTUNREACH is EHOSTUNREACH */
-+    /* WSAENOTEMPTY maps to ENOTEMPTY */
-+    /* WSAEPROCLIM is EPROCLIM */
-+    /* WSAEUSERS is EUSERS */
-+    /* WSAEDQUOT is EDQUOT */
-+    /* WSAESTALE is ESTALE */
-+    /* WSAEREMOTE is EREMOTE */
-+    case WSASYSNOTREADY:
-+      msg = "Network subsystem is unavailable";
-+      break;
-+    case WSAVERNOTSUPPORTED:
-+      msg = "Winsock.dll version out of range";
-+      break;
-+    case WSANOTINITIALISED:
-+      msg = "Successful WSAStartup not yet performed";
-+      break;
-+    case WSAEDISCON:
-+      msg = "Graceful shutdown in progress";
-+      break;
-+    case WSAENOMORE: case WSA_E_NO_MORE:
-+      msg = "No more results";
-+      break;
-+    case WSAECANCELLED: case WSA_E_CANCELLED:
-+      msg = "Call was canceled";
-+      break;
-+    case WSAEINVALIDPROCTABLE:
-+      msg = "Procedure call table is invalid";
-+      break;
-+    case WSAEINVALIDPROVIDER:
-+      msg = "Service provider is invalid";
-+      break;
-+    case WSAEPROVIDERFAILEDINIT:
-+      msg = "Service provider failed to initialize";
-+      break;
-+    case WSASYSCALLFAILURE:
-+      msg = "System call failure";
-+      break;
-+    case WSASERVICE_NOT_FOUND:
-+      msg = "Service not found";
-+      break;
-+    case WSATYPE_NOT_FOUND:
-+      msg = "Class type not found";
-+      break;
-+    case WSAEREFUSED:
-+      msg = "Database query was refused";
-+      break;
-+    case WSAHOST_NOT_FOUND:
-+      msg = "Host not found";
-+      break;
-+    case WSATRY_AGAIN:
-+      msg = "Nonauthoritative host not found";
-+      break;
-+    case WSANO_RECOVERY:
-+      msg = "Nonrecoverable error";
-+      break;
-+    case WSANO_DATA:
-+      msg = "Valid name, no data record of requested type";
-+      break;
-+    /* WSA_QOS_* omitted */
-+#  endif
-+# endif
-+
-+# if GNULIB_defined_ENOMSG
-+    case ENOMSG:
-+      msg = "No message of desired type";
-+      break;
-+# endif
-+
-+# if GNULIB_defined_EIDRM
-+    case EIDRM:
-+      msg = "Identifier removed";
-+      break;
-+# endif
-+
-+# if GNULIB_defined_ENOLINK
-+    case ENOLINK:
-+      msg = "Link has been severed";
-+      break;
-+# endif
-+
-+# if GNULIB_defined_EPROTO
-+    case EPROTO:
-+      msg = "Protocol error";
-+      break;
-+# endif
-+
-+# if GNULIB_defined_EMULTIHOP
-+    case EMULTIHOP:
-+      msg = "Multihop attempted";
-+      break;
-+# endif
-+
-+# if GNULIB_defined_EBADMSG
-+    case EBADMSG:
-+      msg = "Bad message";
-+      break;
-+# endif
-+
-+# if GNULIB_defined_EOVERFLOW
-+    case EOVERFLOW:
-+      msg = "Value too large for defined data type";
-+      break;
-+# endif
-+
-+# if GNULIB_defined_ENOTSUP
-+    case ENOTSUP:
-+      msg = "Not supported";
-+      break;
-+# endif
-+
-+# if GNULIB_defined_ESTALE
-+    case ESTALE:
-+      msg = "Stale NFS file handle";
-+      break;
-+# endif
-+
-+# if GNULIB_defined_EDQUOT
-+    case EDQUOT:
-+      msg = "Disk quota exceeded";
-+      break;
-+# endif
-+
-+# if GNULIB_defined_ECANCELED
-+    case ECANCELED:
-+      msg = "Operation canceled";
-+      break;
-+# endif
-+    }
- 
--  /* Cast away const, due to the historical signature of strerror;
--     callers should not be modifying the string.  */
--  const char *msg = strerror_override (n);
-   if (msg)
-     return (char *) msg;
- 
--  msg = strerror (n);
-+  {
-+    char *result = strerror (n);
- 
--  /* Our strerror_r implementation might use the system's strerror
--     buffer, so all other clients of strerror have to see the error
--     copied into a buffer that we manage.  This is not thread-safe,
--     even if the system strerror is, but portable programs shouldn't
--     be using strerror if they care about thread-safety.  */
--  if (!msg || !*msg)
--    {
--      static char const fmt[] = "Unknown error %d";
--      verify (sizeof buf >= sizeof (fmt) + INT_STRLEN_BOUND (n));
--      sprintf (buf, fmt, n);
--      errno = EINVAL;
--      return buf;
--    }
-+    if (result == NULL || result[0] == '\0')
-+      {
-+        static char const fmt[] = "Unknown error (%d)";
-+        static char msg_buf[sizeof fmt + INT_STRLEN_BOUND (n)];
-+        sprintf (msg_buf, fmt, n);
-+        return msg_buf;
-+      }
- 
--  /* Fix STACKBUF_LEN if this ever aborts.  */
--  len = strlen (msg);
--  if (sizeof buf <= len)
--    abort ();
--
--  return memcpy (buf, msg, len + 1);
-+    return result;
-+  }
- }
-+
-+#endif
-diff -Naurp libiconv-1.14.org//srclib/strerror-override.c libiconv-1.14/srclib/strerror-override.c
---- libiconv-1.14.org//srclib/strerror-override.c	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/strerror-override.c	1969-12-31 16:00:00.000000000 -0800
-@@ -1,279 +0,0 @@
--/* strerror-override.c --- POSIX compatible system error routine
--
--   Copyright (C) 2010-2011 Free Software Foundation, Inc.
--
--   This program is free software: you can redistribute it and/or modify
--   it under the terms of the GNU General Public License as published by
--   the Free Software Foundation; either version 3 of the License, or
--   (at your option) any later version.
--
--   This program is distributed in the hope that it will be useful,
--   but WITHOUT ANY WARRANTY; without even the implied warranty of
--   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
--   GNU General Public License for more details.
--
--   You should have received a copy of the GNU General Public License
--   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
--
--/* Written by Bruno Haible <bruno@clisp.org>, 2010.  */
--
--#include <config.h>
--
--#include "strerror-override.h"
--
--#include <errno.h>
--
--#if GNULIB_defined_ESOCK /* native Windows platforms */
--# if HAVE_WINSOCK2_H
--#  include <winsock2.h>
--# endif
--#endif
--
--/* If ERRNUM maps to an errno value defined by gnulib, return a string
--   describing the error.  Otherwise return NULL.  */
--const char *
--strerror_override (int errnum)
--{
--  /* These error messages are taken from glibc/sysdeps/gnu/errlist.c.  */
--  switch (errnum)
--    {
--#if REPLACE_STRERROR_0
--    case 0:
--      return "Success";
--#endif
--
--#if GNULIB_defined_ETXTBSY
--    case ETXTBSY:
--      return "Text file busy";
--#endif
--
--#if GNULIB_defined_ESOCK /* native Windows platforms */
--      /* EWOULDBLOCK is the same as EAGAIN.  */
--    case EINPROGRESS:
--      return "Operation now in progress";
--    case EALREADY:
--      return "Operation already in progress";
--    case ENOTSOCK:
--      return "Socket operation on non-socket";
--    case EDESTADDRREQ:
--      return "Destination address required";
--    case EMSGSIZE:
--      return "Message too long";
--    case EPROTOTYPE:
--      return "Protocol wrong type for socket";
--    case ENOPROTOOPT:
--      return "Protocol not available";
--    case EPROTONOSUPPORT:
--      return "Protocol not supported";
--    case ESOCKTNOSUPPORT:
--      return "Socket type not supported";
--    case EOPNOTSUPP:
--      return "Operation not supported";
--    case EPFNOSUPPORT:
--      return "Protocol family not supported";
--    case EAFNOSUPPORT:
--      return "Address family not supported by protocol";
--    case EADDRINUSE:
--      return "Address already in use";
--    case EADDRNOTAVAIL:
--      return "Cannot assign requested address";
--    case ENETDOWN:
--      return "Network is down";
--    case ENETUNREACH:
--      return "Network is unreachable";
--    case ENETRESET:
--      return "Network dropped connection on reset";
--    case ECONNABORTED:
--      return "Software caused connection abort";
--    case ECONNRESET:
--      return "Connection reset by peer";
--    case ENOBUFS:
--      return "No buffer space available";
--    case EISCONN:
--      return "Transport endpoint is already connected";
--    case ENOTCONN:
--      return "Transport endpoint is not connected";
--    case ESHUTDOWN:
--      return "Cannot send after transport endpoint shutdown";
--    case ETOOMANYREFS:
--      return "Too many references: cannot splice";
--    case ETIMEDOUT:
--      return "Connection timed out";
--    case ECONNREFUSED:
--      return "Connection refused";
--    case ELOOP:
--      return "Too many levels of symbolic links";
--    case EHOSTDOWN:
--      return "Host is down";
--    case EHOSTUNREACH:
--      return "No route to host";
--    case EPROCLIM:
--      return "Too many processes";
--    case EUSERS:
--      return "Too many users";
--    case EDQUOT:
--      return "Disk quota exceeded";
--    case ESTALE:
--      return "Stale NFS file handle";
--    case EREMOTE:
--      return "Object is remote";
--# if HAVE_WINSOCK2_H
--      /* WSA_INVALID_HANDLE maps to EBADF */
--      /* WSA_NOT_ENOUGH_MEMORY maps to ENOMEM */
--      /* WSA_INVALID_PARAMETER maps to EINVAL */
--    case WSA_OPERATION_ABORTED:
--      return "Overlapped operation aborted";
--    case WSA_IO_INCOMPLETE:
--      return "Overlapped I/O event object not in signaled state";
--    case WSA_IO_PENDING:
--      return "Overlapped operations will complete later";
--      /* WSAEINTR maps to EINTR */
--      /* WSAEBADF maps to EBADF */
--      /* WSAEACCES maps to EACCES */
--      /* WSAEFAULT maps to EFAULT */
--      /* WSAEINVAL maps to EINVAL */
--      /* WSAEMFILE maps to EMFILE */
--      /* WSAEWOULDBLOCK maps to EWOULDBLOCK */
--      /* WSAEINPROGRESS is EINPROGRESS */
--      /* WSAEALREADY is EALREADY */
--      /* WSAENOTSOCK is ENOTSOCK */
--      /* WSAEDESTADDRREQ is EDESTADDRREQ */
--      /* WSAEMSGSIZE is EMSGSIZE */
--      /* WSAEPROTOTYPE is EPROTOTYPE */
--      /* WSAENOPROTOOPT is ENOPROTOOPT */
--      /* WSAEPROTONOSUPPORT is EPROTONOSUPPORT */
--      /* WSAESOCKTNOSUPPORT is ESOCKTNOSUPPORT */
--      /* WSAEOPNOTSUPP is EOPNOTSUPP */
--      /* WSAEPFNOSUPPORT is EPFNOSUPPORT */
--      /* WSAEAFNOSUPPORT is EAFNOSUPPORT */
--      /* WSAEADDRINUSE is EADDRINUSE */
--      /* WSAEADDRNOTAVAIL is EADDRNOTAVAIL */
--      /* WSAENETDOWN is ENETDOWN */
--      /* WSAENETUNREACH is ENETUNREACH */
--      /* WSAENETRESET is ENETRESET */
--      /* WSAECONNABORTED is ECONNABORTED */
--      /* WSAECONNRESET is ECONNRESET */
--      /* WSAENOBUFS is ENOBUFS */
--      /* WSAEISCONN is EISCONN */
--      /* WSAENOTCONN is ENOTCONN */
--      /* WSAESHUTDOWN is ESHUTDOWN */
--      /* WSAETOOMANYREFS is ETOOMANYREFS */
--      /* WSAETIMEDOUT is ETIMEDOUT */
--      /* WSAECONNREFUSED is ECONNREFUSED */
--      /* WSAELOOP is ELOOP */
--      /* WSAENAMETOOLONG maps to ENAMETOOLONG */
--      /* WSAEHOSTDOWN is EHOSTDOWN */
--      /* WSAEHOSTUNREACH is EHOSTUNREACH */
--      /* WSAENOTEMPTY maps to ENOTEMPTY */
--      /* WSAEPROCLIM is EPROCLIM */
--      /* WSAEUSERS is EUSERS */
--      /* WSAEDQUOT is EDQUOT */
--      /* WSAESTALE is ESTALE */
--      /* WSAEREMOTE is EREMOTE */
--    case WSASYSNOTREADY:
--      return "Network subsystem is unavailable";
--    case WSAVERNOTSUPPORTED:
--      return "Winsock.dll version out of range";
--    case WSANOTINITIALISED:
--      return "Successful WSAStartup not yet performed";
--    case WSAEDISCON:
--      return "Graceful shutdown in progress";
--    case WSAENOMORE: case WSA_E_NO_MORE:
--      return "No more results";
--    case WSAECANCELLED: case WSA_E_CANCELLED:
--      return "Call was canceled";
--    case WSAEINVALIDPROCTABLE:
--      return "Procedure call table is invalid";
--    case WSAEINVALIDPROVIDER:
--      return "Service provider is invalid";
--    case WSAEPROVIDERFAILEDINIT:
--      return "Service provider failed to initialize";
--    case WSASYSCALLFAILURE:
--      return "System call failure";
--    case WSASERVICE_NOT_FOUND:
--      return "Service not found";
--    case WSATYPE_NOT_FOUND:
--      return "Class type not found";
--    case WSAEREFUSED:
--      return "Database query was refused";
--    case WSAHOST_NOT_FOUND:
--      return "Host not found";
--    case WSATRY_AGAIN:
--      return "Nonauthoritative host not found";
--    case WSANO_RECOVERY:
--      return "Nonrecoverable error";
--    case WSANO_DATA:
--      return "Valid name, no data record of requested type";
--      /* WSA_QOS_* omitted */
--# endif
--#endif
--
--#if GNULIB_defined_ENOMSG
--    case ENOMSG:
--      return "No message of desired type";
--#endif
--
--#if GNULIB_defined_EIDRM
--    case EIDRM:
--      return "Identifier removed";
--#endif
--
--#if GNULIB_defined_ENOLINK
--    case ENOLINK:
--      return "Link has been severed";
--#endif
--
--#if GNULIB_defined_EPROTO
--    case EPROTO:
--      return "Protocol error";
--#endif
--
--#if GNULIB_defined_EMULTIHOP
--    case EMULTIHOP:
--      return "Multihop attempted";
--#endif
--
--#if GNULIB_defined_EBADMSG
--    case EBADMSG:
--      return "Bad message";
--#endif
--
--#if GNULIB_defined_EOVERFLOW
--    case EOVERFLOW:
--      return "Value too large for defined data type";
--#endif
--
--#if GNULIB_defined_ENOTSUP
--    case ENOTSUP:
--      return "Not supported";
--#endif
--
--#if GNULIB_defined_ENETRESET
--    case ENETRESET:
--      return "Network dropped connection on reset";
--#endif
--
--#if GNULIB_defined_ECONNABORTED
--    case ECONNABORTED:
--      return "Software caused connection abort";
--#endif
--
--#if GNULIB_defined_ESTALE
--    case ESTALE:
--      return "Stale NFS file handle";
--#endif
--
--#if GNULIB_defined_EDQUOT
--    case EDQUOT:
--      return "Disk quota exceeded";
--#endif
--
--#if GNULIB_defined_ECANCELED
--    case ECANCELED:
--      return "Operation canceled";
--#endif
--
--    default:
--      return NULL;
--    }
--}
-diff -Naurp libiconv-1.14.org//srclib/strerror-override.h libiconv-1.14/srclib/strerror-override.h
---- libiconv-1.14.org//srclib/strerror-override.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/strerror-override.h	1969-12-31 16:00:00.000000000 -0800
-@@ -1,52 +0,0 @@
--/* strerror-override.h --- POSIX compatible system error routine
--
--   Copyright (C) 2010-2011 Free Software Foundation, Inc.
--
--   This program is free software: you can redistribute it and/or modify
--   it under the terms of the GNU General Public License as published by
--   the Free Software Foundation; either version 3 of the License, or
--   (at your option) any later version.
--
--   This program is distributed in the hope that it will be useful,
--   but WITHOUT ANY WARRANTY; without even the implied warranty of
--   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
--   GNU General Public License for more details.
--
--   You should have received a copy of the GNU General Public License
--   along with this program.  If not, see <http://www.gnu.org/licenses/>.  */
--
--#ifndef _GL_STRERROR_OVERRIDE_H
--# define _GL_STRERROR_OVERRIDE_H
--
--# include <errno.h>
--# include <stddef.h>
--
--/* Reasonable buffer size that should never trigger ERANGE; if this
--   proves too small, we intentionally abort(), to remind us to fix
--   this value.  */
--# define STACKBUF_LEN 256
--
--/* If ERRNUM maps to an errno value defined by gnulib, return a string
--   describing the error.  Otherwise return NULL.  */
--# if REPLACE_STRERROR_0 \
--     || GNULIB_defined_ETXTBSY \
--     || GNULIB_defined_ESOCK \
--     || GNULIB_defined_ENOMSG \
--     || GNULIB_defined_EIDRM \
--     || GNULIB_defined_ENOLINK \
--     || GNULIB_defined_EPROTO \
--     || GNULIB_defined_EMULTIHOP \
--     || GNULIB_defined_EBADMSG \
--     || GNULIB_defined_EOVERFLOW \
--     || GNULIB_defined_ENOTSUP \
--     || GNULIB_defined_ENETRESET \
--     || GNULIB_defined_ECONNABORTED \
--     || GNULIB_defined_ESTALE \
--     || GNULIB_defined_EDQUOT \
--     || GNULIB_defined_ECANCELED
--extern const char *strerror_override (int errnum);
--# else
--#  define strerror_override(ignored) NULL
--# endif
--
--#endif /* _GL_STRERROR_OVERRIDE_H */
-diff -Naurp libiconv-1.14.org//srclib/string.in.h libiconv-1.14/srclib/string.in.h
---- libiconv-1.14.org//srclib/string.in.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/string.in.h	2012-01-08 02:07:40.418484461 -0800
-@@ -16,7 +16,7 @@
-    along with this program; if not, write to the Free Software Foundation,
-    Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.  */
- 
--#ifndef _@GUARD_PREFIX@_STRING_H
-+#ifndef _GL_STRING_H
- 
- #if __GNUC__ >= 3
- @PRAGMA_SYSTEM_HEADER@
-@@ -26,8 +26,8 @@
- /* The include_next requires a split double-inclusion guard.  */
- #@INCLUDE_NEXT@ @NEXT_STRING_H@
- 
--#ifndef _@GUARD_PREFIX@_STRING_H
--#define _@GUARD_PREFIX@_STRING_H
-+#ifndef _GL_STRING_H
-+#define _GL_STRING_H
- 
- /* NetBSD 5.0 mis-defines NULL.  */
- #include <stddef.h>
-@@ -59,36 +59,6 @@
- /* The definition of _GL_WARN_ON_USE is copied here.  */
- 
- 
--/* Find the index of the least-significant set bit.  */
--#if @GNULIB_FFSL@
--# if !@HAVE_FFSL@
--_GL_FUNCDECL_SYS (ffsl, int, (long int i));
--# endif
--_GL_CXXALIAS_SYS (ffsl, int, (long int i));
--_GL_CXXALIASWARN (ffsl);
--#elif defined GNULIB_POSIXCHECK
--# undef ffsl
--# if HAVE_RAW_DECL_FFSL
--_GL_WARN_ON_USE (ffsl, "ffsl is not portable - use the ffsl module");
--# endif
--#endif
--
--
--/* Find the index of the least-significant set bit.  */
--#if @GNULIB_FFSLL@
--# if !@HAVE_FFSLL@
--_GL_FUNCDECL_SYS (ffsll, int, (long long int i));
--# endif
--_GL_CXXALIAS_SYS (ffsll, int, (long long int i));
--_GL_CXXALIASWARN (ffsll);
--#elif defined GNULIB_POSIXCHECK
--# undef ffsll
--# if HAVE_RAW_DECL_FFSLL
--_GL_WARN_ON_USE (ffsll, "ffsll is not portable - use the ffsll module");
--# endif
--#endif
--
--
- /* Return the first instance of C within N bytes of S, or NULL.  */
- #if @GNULIB_MEMCHR@
- # if @REPLACE_MEMCHR@
-@@ -1007,5 +977,5 @@ _GL_WARN_ON_USE (strverscmp, "strverscmp
- #endif
- 
- 
--#endif /* _@GUARD_PREFIX@_STRING_H */
--#endif /* _@GUARD_PREFIX@_STRING_H */
-+#endif /* _GL_STRING_H */
-+#endif /* _GL_STRING_H */
-diff -Naurp libiconv-1.14.org//srclib/sys_stat.in.h libiconv-1.14/srclib/sys_stat.in.h
---- libiconv-1.14.org//srclib/sys_stat.in.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/sys_stat.in.h	2012-01-08 02:07:40.430484461 -0800
-@@ -34,7 +34,7 @@
- #else
- /* Normal invocation convention.  */
- 
--#ifndef _@GUARD_PREFIX@_SYS_STAT_H
-+#ifndef _GL_SYS_STAT_H
- 
- /* Get nlink_t.  */
- #include <sys/types.h>
-@@ -45,8 +45,8 @@
- /* The include_next requires a split double-inclusion guard.  */
- #@INCLUDE_NEXT@ @NEXT_SYS_STAT_H@
- 
--#ifndef _@GUARD_PREFIX@_SYS_STAT_H
--#define _@GUARD_PREFIX@_SYS_STAT_H
-+#ifndef _GL_SYS_STAT_H
-+#define _GL_SYS_STAT_H
- 
- /* The definitions of _GL_FUNCDECL_RPL etc. are copied here.  */
- 
-@@ -653,6 +653,6 @@ _GL_WARN_ON_USE (utimensat, "utimensat i
- #endif
- 
- 
--#endif /* _@GUARD_PREFIX@_SYS_STAT_H */
--#endif /* _@GUARD_PREFIX@_SYS_STAT_H */
-+#endif /* _GL_SYS_STAT_H */
-+#endif /* _GL_SYS_STAT_H */
- #endif
-diff -Naurp libiconv-1.14.org//srclib/time.in.h libiconv-1.14/srclib/time.in.h
---- libiconv-1.14.org//srclib/time.in.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/time.in.h	2012-01-08 02:07:40.438484462 -0800
-@@ -28,13 +28,13 @@
-    without adding our own declarations.  */
- #if (defined __need_time_t || defined __need_clock_t \
-      || defined __need_timespec \
--     || defined _@GUARD_PREFIX@_TIME_H)
-+     || defined _GL_TIME_H)
- 
- # @INCLUDE_NEXT@ @NEXT_TIME_H@
- 
- #else
- 
--# define _@GUARD_PREFIX@_TIME_H
-+# define _GL_TIME_H
- 
- # @INCLUDE_NEXT@ @NEXT_TIME_H@
- 
-diff -Naurp libiconv-1.14.org//srclib/unistd.in.h libiconv-1.14/srclib/unistd.in.h
---- libiconv-1.14.org//srclib/unistd.in.h	2011-08-07 06:42:06.000000000 -0700
-+++ libiconv-1.14/srclib/unistd.in.h	2012-01-08 02:07:40.450484462 -0800
-@@ -36,7 +36,7 @@
- # define _GL_WINSOCK2_H_WITNESS
- 
- /* Normal invocation.  */
--#elif !defined _@GUARD_PREFIX@_UNISTD_H
-+#elif !defined _GL_UNISTD_H
- 
- /* The include_next requires a split double-inclusion guard.  */
- #if @HAVE_UNISTD_H@
-@@ -51,8 +51,8 @@
- # undef _GL_INCLUDING_WINSOCK2_H
- #endif
- 
--#if !defined _@GUARD_PREFIX@_UNISTD_H && !defined _GL_INCLUDING_WINSOCK2_H
--#define _@GUARD_PREFIX@_UNISTD_H
-+#if !defined _GL_UNISTD_H && !defined _GL_INCLUDING_WINSOCK2_H
-+#define _GL_UNISTD_H
- 
- /* NetBSD 5.0 mis-defines NULL.  Also get size_t.  */
- #include <stddef.h>
-@@ -117,77 +117,78 @@
- /* The definition of _GL_WARN_ON_USE is copied here.  */
- 
- 
--/* Hide some function declarations from <winsock2.h>.  */
--
--#if @GNULIB_GETHOSTNAME@ && @UNISTD_H_HAVE_WINSOCK2_H@
--# if !defined _@GUARD_PREFIX@_SYS_SOCKET_H
--#  if !(defined __cplusplus && defined GNULIB_NAMESPACE)
--#   undef socket
--#   define socket              socket_used_without_including_sys_socket_h
--#   undef connect
--#   define connect             connect_used_without_including_sys_socket_h
--#   undef accept
--#   define accept              accept_used_without_including_sys_socket_h
--#   undef bind
--#   define bind                bind_used_without_including_sys_socket_h
--#   undef getpeername
--#   define getpeername         getpeername_used_without_including_sys_socket_h
--#   undef getsockname
--#   define getsockname         getsockname_used_without_including_sys_socket_h
--#   undef getsockopt
--#   define getsockopt          getsockopt_used_without_including_sys_socket_h
--#   undef listen
--#   define listen              listen_used_without_including_sys_socket_h
--#   undef recv
--#   define recv                recv_used_without_including_sys_socket_h
--#   undef send
--#   define send                send_used_without_including_sys_socket_h
--#   undef recvfrom
--#   define recvfrom            recvfrom_used_without_including_sys_socket_h
--#   undef sendto
--#   define sendto              sendto_used_without_including_sys_socket_h
--#   undef setsockopt
--#   define setsockopt          setsockopt_used_without_including_sys_socket_h
--#   undef shutdown
--#   define shutdown            shutdown_used_without_including_sys_socket_h
--#  else
--    _GL_WARN_ON_USE (socket,
--                     "socket() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (connect,
--                     "connect() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (accept,
--                     "accept() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (bind,
--                     "bind() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (getpeername,
--                     "getpeername() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (getsockname,
--                     "getsockname() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (getsockopt,
--                     "getsockopt() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (listen,
--                     "listen() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (recv,
--                     "recv() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (send,
--                     "send() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (recvfrom,
--                     "recvfrom() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (sendto,
--                     "sendto() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (setsockopt,
--                     "setsockopt() used without including <sys/socket.h>");
--    _GL_WARN_ON_USE (shutdown,
--                     "shutdown() used without including <sys/socket.h>");
-+#if @GNULIB_GETHOSTNAME@
-+/* Get all possible declarations of gethostname().  */
-+# if @UNISTD_H_HAVE_WINSOCK2_H@
-+#  if !defined _GL_SYS_SOCKET_H
-+#   if !(defined __cplusplus && defined GNULIB_NAMESPACE)
-+#    undef socket
-+#    define socket              socket_used_without_including_sys_socket_h
-+#    undef connect
-+#    define connect             connect_used_without_including_sys_socket_h
-+#    undef accept
-+#    define accept              accept_used_without_including_sys_socket_h
-+#    undef bind
-+#    define bind                bind_used_without_including_sys_socket_h
-+#    undef getpeername
-+#    define getpeername         getpeername_used_without_including_sys_socket_h
-+#    undef getsockname
-+#    define getsockname         getsockname_used_without_including_sys_socket_h
-+#    undef getsockopt
-+#    define getsockopt          getsockopt_used_without_including_sys_socket_h
-+#    undef listen
-+#    define listen              listen_used_without_including_sys_socket_h
-+#    undef recv
-+#    define recv                recv_used_without_including_sys_socket_h
-+#    undef send
-+#    define send                send_used_without_including_sys_socket_h
-+#    undef recvfrom
-+#    define recvfrom            recvfrom_used_without_including_sys_socket_h
-+#    undef sendto
-+#    define sendto              sendto_used_without_including_sys_socket_h
-+#    undef setsockopt
-+#    define setsockopt          setsockopt_used_without_including_sys_socket_h
-+#    undef shutdown
-+#    define shutdown            shutdown_used_without_including_sys_socket_h
-+#   else
-+     _GL_WARN_ON_USE (socket,
-+                      "socket() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (connect,
-+                      "connect() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (accept,
-+                      "accept() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (bind,
-+                      "bind() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (getpeername,
-+                      "getpeername() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (getsockname,
-+                      "getsockname() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (getsockopt,
-+                      "getsockopt() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (listen,
-+                      "listen() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (recv,
-+                      "recv() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (send,
-+                      "send() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (recvfrom,
-+                      "recvfrom() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (sendto,
-+                      "sendto() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (setsockopt,
-+                      "setsockopt() used without including <sys/socket.h>");
-+     _GL_WARN_ON_USE (shutdown,
-+                      "shutdown() used without including <sys/socket.h>");
-+#   endif
- #  endif
--# endif
--# if !defined _@GUARD_PREFIX@_SYS_SELECT_H
--#  if !(defined __cplusplus && defined GNULIB_NAMESPACE)
--#   undef select
--#   define select              select_used_without_including_sys_select_h
--#  else
--    _GL_WARN_ON_USE (select,
--                     "select() used without including <sys/select.h>");
-+#  if !defined _GL_SYS_SELECT_H
-+#   if !(defined __cplusplus && defined GNULIB_NAMESPACE)
-+#    undef select
-+#    define select              select_used_without_including_sys_select_h
-+#   else
-+     _GL_WARN_ON_USE (select,
-+                      "select() used without including <sys/select.h>");
-+#   endif
- #  endif
- # endif
- #endif
-@@ -1061,7 +1062,6 @@ _GL_WARN_ON_USE (pipe2, "pipe2 is unport
-    specification <http://www.opengroup.org/susv3xsh/pread.html>.  */
- # if @REPLACE_PREAD@
- #  if !(defined __cplusplus && defined GNULIB_NAMESPACE)
--#   undef pread
- #   define pread rpl_pread
- #  endif
- _GL_FUNCDECL_RPL (pread, ssize_t,
-@@ -1096,7 +1096,6 @@ _GL_WARN_ON_USE (pread, "pread is unport
-    <http://www.opengroup.org/susv3xsh/pwrite.html>.  */
- # if @REPLACE_PWRITE@
- #  if !(defined __cplusplus && defined GNULIB_NAMESPACE)
--#   undef pwrite
- #   define pwrite rpl_pwrite
- #  endif
- _GL_FUNCDECL_RPL (pwrite, ssize_t,
-@@ -1417,5 +1416,5 @@ _GL_CXXALIASWARN (write);
- #endif
- 
- 
--#endif /* _@GUARD_PREFIX@_UNISTD_H */
--#endif /* _@GUARD_PREFIX@_UNISTD_H */
-+#endif /* _GL_UNISTD_H */
-+#endif /* _GL_UNISTD_H */
-diff -Naurp libiconv-1.14.org//srclib/verify.h libiconv-1.14/srclib/verify.h
---- libiconv-1.14.org//srclib/verify.h	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srclib/verify.h	2012-01-08 02:07:40.494484464 -0800
-@@ -164,13 +164,10 @@
-     (!!sizeof (_GL_VERIFY_TYPE (R, DIAGNOSTIC)))
- 
- # ifdef __cplusplus
--#  if !GNULIB_defined_struct__gl_verify_type
- template <int w>
-   struct _gl_verify_type {
-     unsigned int _gl_verify_error_if_negative: w;
-   };
--#   define GNULIB_defined_struct__gl_verify_type 1
--#  endif
- #  define _GL_VERIFY_TYPE(R, DIAGNOSTIC) \
-     _gl_verify_type<(R) ? 1 : -1>
- # elif defined _GL_HAVE__STATIC_ASSERT
-@@ -209,7 +206,7 @@ template <int w>
- #  endif
- # endif
- 
--/* @assert.h omit start@  */
-+# ifdef _GL_VERIFY_H
- 
- /* Each of these macros verifies that its argument R is nonzero.  To
-    be portable, R should be an integer constant expression.  Unlike
-@@ -221,23 +218,15 @@ template <int w>
-    contexts, e.g., the top level.  */
- 
- /* Verify requirement R at compile-time, as an integer constant expression.
--   Return 1.  This is equivalent to verify_expr (R, 1).
--
--   verify_true is obsolescent; please use verify_expr instead.  */
--
--# define verify_true(R) _GL_VERIFY_TRUE (R, "verify_true (" #R ")")
-+   Return 1.  */
- 
--/* Verify requirement R at compile-time.  Return the value of the
--   expression E.  */
--
--# define verify_expr(R, E) \
--    (_GL_VERIFY_TRUE (R, "verify_expr (" #R ", " #E ")") ? (E) : (E))
-+#  define verify_true(R) _GL_VERIFY_TRUE (R, "verify_true (" #R ")")
- 
- /* Verify requirement R at compile-time, as a declaration without a
-    trailing ';'.  */
- 
--# define verify(R) _GL_VERIFY (R, "verify (" #R ")")
-+#  define verify(R) _GL_VERIFY (R, "verify (" #R ")")
- 
--/* @assert.h omit end@  */
-+# endif
- 
- #endif
-diff -Naurp libiconv-1.14.org//srcm4/canonicalize.m4 libiconv-1.14/srcm4/canonicalize.m4
---- libiconv-1.14.org//srcm4/canonicalize.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/canonicalize.m4	2012-01-08 02:07:40.566484469 -0800
-@@ -1,4 +1,4 @@
--# canonicalize.m4 serial 23
-+# canonicalize.m4 serial 21
- 
- dnl Copyright (C) 2003-2007, 2009-2011 Free Software Foundation, Inc.
- 
-@@ -10,6 +10,8 @@ dnl with or without modifications, as lo
- # not provide or fix realpath.
- AC_DEFUN([gl_FUNC_CANONICALIZE_FILENAME_MODE],
- [
-+  AC_LIBOBJ([canonicalize])
-+
-   AC_REQUIRE([gl_USE_SYSTEM_EXTENSIONS])
-   AC_CHECK_FUNCS_ONCE([canonicalize_file_name])
-   AC_REQUIRE([gl_DOUBLE_SLASH_ROOT])
-@@ -28,14 +30,16 @@ AC_DEFUN([gl_CANONICALIZE_LGPL],
-   AC_REQUIRE([gl_CANONICALIZE_LGPL_SEPARATE])
-   if test $ac_cv_func_canonicalize_file_name = no; then
-     HAVE_CANONICALIZE_FILE_NAME=0
-+    AC_LIBOBJ([canonicalize-lgpl])
-     if test $ac_cv_func_realpath = no; then
-       HAVE_REALPATH=0
-     elif test "$gl_cv_func_realpath_works" != yes; then
-       REPLACE_REALPATH=1
-     fi
-   elif test "$gl_cv_func_realpath_works" != yes; then
--    REPLACE_CANONICALIZE_FILE_NAME=1
-+    AC_LIBOBJ([canonicalize-lgpl])
-     REPLACE_REALPATH=1
-+    REPLACE_CANONICALIZE_FILE_NAME=1
-   fi
- ])
- 
-diff -Naurp libiconv-1.14.org//srcm4/errno_h.m4 libiconv-1.14/srcm4/errno_h.m4
---- libiconv-1.14.org//srcm4/errno_h.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/errno_h.m4	2012-01-08 02:07:40.590484469 -0800
-@@ -1,4 +1,4 @@
--# errno_h.m4 serial 10
-+# errno_h.m4 serial 9
- dnl Copyright (C) 2004, 2006, 2008-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -34,12 +34,6 @@ booboo
- #if !defined ENOTSUP
- booboo
- #endif
--#if !defined ENETRESET
--booboo
--#endif
--#if !defined ECONNABORTED
--booboo
--#endif
- #if !defined ESTALE
- booboo
- #endif
-diff -Naurp libiconv-1.14.org//srcm4/error.m4 libiconv-1.14/srcm4/error.m4
---- libiconv-1.14.org//srcm4/error.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/error.m4	2012-01-08 02:07:40.598484470 -0800
-@@ -1,4 +1,4 @@
--#serial 14
-+#serial 13
- 
- # Copyright (C) 1996-1998, 2001-2004, 2009-2011 Free Software Foundation, Inc.
- #
-@@ -8,8 +8,16 @@
- 
- AC_DEFUN([gl_ERROR],
- [
--  dnl We don't use AC_FUNC_ERROR_AT_LINE any more, because it is no longer
--  dnl maintained in Autoconf and because it invokes AC_LIBOBJ.
-+  AC_FUNC_ERROR_AT_LINE
-+  dnl Note: AC_FUNC_ERROR_AT_LINE does AC_LIBSOURCES([error.h, error.c]).
-+  gl_PREREQ_ERROR
-+])
-+
-+# Redefine AC_FUNC_ERROR_AT_LINE, because it is no longer maintained in
-+# Autoconf.
-+AC_DEFUN([AC_FUNC_ERROR_AT_LINE],
-+[
-+  AC_LIBSOURCES([error.h, error.c])dnl
-   AC_CACHE_CHECK([for error_at_line], [ac_cv_lib_error_at_line],
-     [AC_LINK_IFELSE(
-        [AC_LANG_PROGRAM(
-@@ -17,6 +25,9 @@ AC_DEFUN([gl_ERROR],
-           [[error_at_line (0, 0, "", 0, "an error occurred");]])],
-        [ac_cv_lib_error_at_line=yes],
-        [ac_cv_lib_error_at_line=no])])
-+  if test $ac_cv_lib_error_at_line = no; then
-+    AC_LIBOBJ([error])
-+  fi
- ])
- 
- # Prerequisites of lib/error.c.
-diff -Naurp libiconv-1.14.org//srcm4/extensions.m4 libiconv-1.14/srcm4/extensions.m4
---- libiconv-1.14.org//srcm4/extensions.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/extensions.m4	2012-01-08 02:07:40.606484471 -0800
-@@ -1,4 +1,4 @@
--# serial 10  -*- Autoconf -*-
-+# serial 9  -*- Autoconf -*-
- # Enable extensions on systems that normally disable them.
- 
- # Copyright (C) 2003, 2006-2011 Free Software Foundation, Inc.
-@@ -67,10 +67,6 @@ AC_BEFORE([$0], [AC_RUN_IFELSE])dnl
- #ifndef _ALL_SOURCE
- # undef _ALL_SOURCE
- #endif
--/* Enable general extensions on MacOS X.  */
--#ifndef _DARWIN_C_SOURCE
--# undef _DARWIN_C_SOURCE
--#endif
- /* Enable GNU extensions on systems that have them.  */
- #ifndef _GNU_SOURCE
- # undef _GNU_SOURCE
-@@ -99,7 +95,6 @@ AC_BEFORE([$0], [AC_RUN_IFELSE])dnl
-   test $ac_cv_safe_to_define___extensions__ = yes &&
-     AC_DEFINE([__EXTENSIONS__])
-   AC_DEFINE([_ALL_SOURCE])
--  AC_DEFINE([_DARWIN_C_SOURCE])
-   AC_DEFINE([_GNU_SOURCE])
-   AC_DEFINE([_POSIX_PTHREAD_SEMANTICS])
-   AC_DEFINE([_TANDEM_SOURCE])
-diff -Naurp libiconv-1.14.org//srcm4/gnulib-cache.m4 libiconv-1.14/srcm4/gnulib-cache.m4
---- libiconv-1.14.org//srcm4/gnulib-cache.m4	2011-08-07 06:42:11.000000000 -0700
-+++ libiconv-1.14/srcm4/gnulib-cache.m4	2012-01-08 02:07:43.154484593 -0800
-@@ -15,7 +15,7 @@
- 
- 
- # Specification in the form of a command-line invocation:
--#   gnulib-tool --import --dir=. --local-dir=gnulib-local --lib=libicrt --source-base=srclib --m4-base=srcm4 --doc-base=doc --tests-base=tests --aux-dir=build-aux --makefile-name=Makefile.gnulib --no-conditional-dependencies --no-libtool --macro-prefix=gl --no-vc-files binary-io error gettext gettext-h libiconv-misc mbstate memmove progname relocatable-prog safe-read sigpipe stdio stdlib strerror unistd uniwidth/width unlocked-io xalloc
-+#   gnulib-tool --import --dir=. --local-dir=gnulib-local --lib=libicrt --source-base=srclib --m4-base=srcm4 --doc-base=doc --tests-base=tests --aux-dir=build-aux --makefile-name=Makefile.gnulib --no-libtool --macro-prefix=gl --no-vc-files binary-io error gettext gettext-h libiconv-misc mbstate memmove progname relocatable relocatable-prog safe-read sigpipe stdio stdlib strerror unistd uniwidth/width unlocked-io xalloc
- 
- # Specification in the form of a few gnulib-tool.m4 macro invocations:
- gl_LOCAL_DIR([gnulib-local])
-@@ -28,6 +28,7 @@ gl_MODULES([
-   mbstate
-   memmove
-   progname
-+  relocatable
-   relocatable-prog
-   safe-read
-   sigpipe
-@@ -49,5 +50,4 @@ gl_LIB([libicrt])
- gl_MAKEFILE_NAME([Makefile.gnulib])
- gl_MACRO_PREFIX([gl])
- gl_PO_DOMAIN([])
--gl_WITNESS_C_DOMAIN([])
- gl_VC_FILES([false])
-diff -Naurp libiconv-1.14.org//srcm4/gnulib-common.m4 libiconv-1.14/srcm4/gnulib-common.m4
---- libiconv-1.14.org//srcm4/gnulib-common.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/gnulib-common.m4	2012-01-08 02:07:40.634484471 -0800
-@@ -1,4 +1,4 @@
--# gnulib-common.m4 serial 29
-+# gnulib-common.m4 serial 24
- dnl Copyright (C) 2007-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -12,19 +12,6 @@ AC_DEFUN([gl_COMMON], [
-   AC_REQUIRE([gl_COMMON_BODY])
- ])
- AC_DEFUN([gl_COMMON_BODY], [
--  AH_VERBATIM([_Noreturn],
--[/* The _Noreturn keyword of draft C1X.  */
--#ifndef _Noreturn
--# if (3 <= __GNUC__ || (__GNUC__ == 2 && 8 <= __GNUC_MINOR__) \
--      || 0x5110 <= __SUNPRO_C)
--#  define _Noreturn __attribute__ ((__noreturn__))
--# elif 1200 <= _MSC_VER
--#  define _Noreturn __declspec (noreturn)
--# else
--#  define _Noreturn
--# endif
--#endif
--])
-   AH_VERBATIM([isoc99_inline],
- [/* Work around a bug in Apple GCC 4.0.1 build 5465: In C99 mode, it supports
-    the ISO C 99 semantics of 'extern inline' (unlike the GNU C semantics of
-@@ -47,20 +34,6 @@ AC_DEFUN([gl_COMMON_BODY], [
- /* The name _UNUSED_PARAMETER_ is an earlier spelling, although the name
-    is a misnomer outside of parameter lists.  */
- #define _UNUSED_PARAMETER_ _GL_UNUSED
--
--/* The __pure__ attribute was added in gcc 2.96.  */
--#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 96)
--# define _GL_ATTRIBUTE_PURE __attribute__ ((__pure__))
--#else
--# define _GL_ATTRIBUTE_PURE /* empty */
--#endif
--
--/* The __const__ attribute was added in gcc 2.95.  */
--#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 95)
--# define _GL_ATTRIBUTE_CONST __attribute__ ((__const__))
--#else
--# define _GL_ATTRIBUTE_CONST /* empty */
--#endif
- ])
-   dnl Preparation for running test programs:
-   dnl Tell glibc to write diagnostics from -D_FORTIFY_SOURCE=2 to stderr, not
-@@ -74,49 +47,16 @@ AC_DEFUN([gl_COMMON_BODY], [
- # expands to a C preprocessor expression that evaluates to 1 or 0, depending
- # whether a gnulib module that has been requested shall be considered present
- # or not.
--m4_define([gl_MODULE_INDICATOR_CONDITION], [1])
-+AC_DEFUN([gl_MODULE_INDICATOR_CONDITION], [1])
- 
- # gl_MODULE_INDICATOR_SET_VARIABLE([modulename])
- # sets the shell variable that indicates the presence of the given module to
- # a C preprocessor expression that will evaluate to 1.
- AC_DEFUN([gl_MODULE_INDICATOR_SET_VARIABLE],
- [
--  gl_MODULE_INDICATOR_SET_VARIABLE_AUX(
--    [GNULIB_[]m4_translit([[$1]],
--                          [abcdefghijklmnopqrstuvwxyz./-],
--                          [ABCDEFGHIJKLMNOPQRSTUVWXYZ___])],
--    [gl_MODULE_INDICATOR_CONDITION])
--])
--
--# gl_MODULE_INDICATOR_SET_VARIABLE_AUX([variable])
--# modifies the shell variable to include the gl_MODULE_INDICATOR_CONDITION.
--# The shell variable's value is a C preprocessor expression that evaluates
--# to 0 or 1.
--AC_DEFUN([gl_MODULE_INDICATOR_SET_VARIABLE_AUX],
--[
--  m4_if(m4_defn([gl_MODULE_INDICATOR_CONDITION]), [1],
--    [
--     dnl Simplify the expression VALUE || 1 to 1.
--     $1=1
--    ],
--    [gl_MODULE_INDICATOR_SET_VARIABLE_AUX_OR([$1],
--                                             [gl_MODULE_INDICATOR_CONDITION])])
--])
--
--# gl_MODULE_INDICATOR_SET_VARIABLE_AUX_OR([variable], [condition])
--# modifies the shell variable to include the given condition.  The shell
--# variable's value is a C preprocessor expression that evaluates to 0 or 1.
--AC_DEFUN([gl_MODULE_INDICATOR_SET_VARIABLE_AUX_OR],
--[
--  dnl Simplify the expression 1 || CONDITION to 1.
--  if test "$[]$1" != 1; then
--    dnl Simplify the expression 0 || CONDITION to CONDITION.
--    if test "$[]$1" = 0; then
--      $1=$2
--    else
--      $1="($[]$1 || $2)"
--    fi
--  fi
-+  GNULIB_[]m4_translit([[$1]],
-+    [abcdefghijklmnopqrstuvwxyz./-],
-+    [ABCDEFGHIJKLMNOPQRSTUVWXYZ___])=gl_MODULE_INDICATOR_CONDITION
- ])
- 
- # gl_MODULE_INDICATOR([modulename])
-@@ -211,35 +151,6 @@ m4_ifndef([AS_VAR_IF],
- [m4_define([AS_VAR_IF],
- [AS_IF([test x"AS_VAR_GET([$1])" = x""$2], [$3], [$4])])])
- 
--# gl_PROG_AR_RANLIB
--# Determines the values for AR, ARFLAGS, RANLIB that fit with the compiler.
--AC_DEFUN([gl_PROG_AR_RANLIB],
--[
--  dnl Minix 3 comes with two toolchains: The Amsterdam Compiler Kit compiler
--  dnl as "cc", and GCC as "gcc". They have different object file formats and
--  dnl library formats. In particular, the GNU binutils programs ar, ranlib
--  dnl produce libraries that work only with gcc, not with cc.
--  AC_REQUIRE([AC_PROG_CC])
--  AC_EGREP_CPP([Amsterdam],
--    [
--#ifdef __ACK__
--Amsterdam
--#endif
--    ],
--    [AR='cc -c.a'
--     ARFLAGS='-o'
--     RANLIB=':'
--    ],
--    [dnl Use the Automake-documented default values for AR and ARFLAGS.
--     AR='ar'
--     ARFLAGS='cru'
--     dnl Use the ranlib program if it is available.
--     AC_PROG_RANLIB
--    ])
--  AC_SUBST([AR])
--  AC_SUBST([ARFLAGS])
--])
--
- # AC_PROG_MKDIR_P
- # is a backport of autoconf-2.60's AC_PROG_MKDIR_P, with a fix
- # for interoperability with automake-1.9.6 from autoconf-2.62.
-diff -Naurp libiconv-1.14.org//srcm4/gnulib-comp.m4 libiconv-1.14/srcm4/gnulib-comp.m4
---- libiconv-1.14.org//srcm4/gnulib-comp.m4	2011-08-07 06:42:12.000000000 -0700
-+++ libiconv-1.14/srcm4/gnulib-comp.m4	2012-01-08 02:07:43.922484630 -0800
-@@ -25,12 +25,14 @@ AC_DEFUN([gl_EARLY],
-   m4_pattern_allow([^gl_ES$])dnl a valid locale name
-   m4_pattern_allow([^gl_LIBOBJS$])dnl a variable
-   m4_pattern_allow([^gl_LTLIBOBJS$])dnl a variable
--  AC_REQUIRE([gl_PROG_AR_RANLIB])
-+  AC_REQUIRE([AC_PROG_RANLIB])
-   AC_REQUIRE([AM_PROG_CC_C_O])
-   # Code from module alloca-opt:
-   # Code from module allocator:
-   # Code from module areadlink:
-+  # Code from module arg-nonnull:
-   # Code from module binary-io:
-+  # Code from module c++defs:
-   # Code from module canonicalize-lgpl:
-   # Code from module careadlinkat:
-   # Code from module dosname:
-@@ -46,7 +48,6 @@ AC_DEFUN([gl_EARLY],
-   # Code from module havelib:
-   # Code from module include_next:
-   # Code from module intprops:
--  # Code from module largefile:
-   # Code from module libiconv-misc:
-   # Code from module lstat:
-   # Code from module malloca:
-@@ -64,10 +65,6 @@ AC_DEFUN([gl_EARLY],
-   # Code from module signal:
-   # Code from module sigpipe:
-   # Code from module sigprocmask:
--  # Code from module snippet/_Noreturn:
--  # Code from module snippet/arg-nonnull:
--  # Code from module snippet/c++defs:
--  # Code from module snippet/warn-on-use:
-   # Code from module ssize_t:
-   # Code from module stat:
-   # Code from module stdbool:
-@@ -77,7 +74,6 @@ AC_DEFUN([gl_EARLY],
-   # Code from module stdlib:
-   # Code from module streq:
-   # Code from module strerror:
--  # Code from module strerror-override:
-   # Code from module string:
-   # Code from module sys_stat:
-   # Code from module time:
-@@ -87,6 +83,7 @@ AC_DEFUN([gl_EARLY],
-   # Code from module uniwidth/width:
-   # Code from module unlocked-io:
-   # Code from module verify:
-+  # Code from module warn-on-use:
-   # Code from module xalloc:
-   # Code from module xreadlink:
- ])
-@@ -109,9 +106,6 @@ AC_DEFUN([gl_INIT],
-   gl_source_base='srclib'
- gl_FUNC_ALLOCA
- gl_CANONICALIZE_LGPL
--if test $HAVE_CANONICALIZE_FILE_NAME = 0 || test $REPLACE_CANONICALIZE_FILE_NAME = 1; then
--  AC_LIBOBJ([canonicalize-lgpl])
--fi
- gl_MODULE_INDICATOR([canonicalize-lgpl])
- gl_STDLIB_MODULE_INDICATOR([canonicalize_file_name])
- gl_STDLIB_MODULE_INDICATOR([realpath])
-@@ -121,10 +115,6 @@ gl_ENVIRON
- gl_UNISTD_MODULE_INDICATOR([environ])
- gl_HEADER_ERRNO_H
- gl_ERROR
--if test $ac_cv_lib_error_at_line = no; then
--  AC_LIBOBJ([error])
--  gl_PREREQ_ERROR
--fi
- m4_ifdef([AM_XGETTEXT_OPTION],
-   [AM_][XGETTEXT_OPTION([--flag=error:3:c-format])
-    AM_][XGETTEXT_OPTION([--flag=error_at_line:5:c-format])])
-@@ -134,43 +124,26 @@ AM_GNU_GETTEXT_VERSION([0.18.1])
- AC_SUBST([LIBINTL])
- AC_SUBST([LTLIBINTL])
- gl_FUNC_LSTAT
--if test $REPLACE_LSTAT = 1; then
--  AC_LIBOBJ([lstat])
--  gl_PREREQ_LSTAT
--fi
- gl_SYS_STAT_MODULE_INDICATOR([lstat])
- gl_MALLOCA
- AC_TYPE_MBSTATE_T
- gl_FUNC_MEMMOVE
--if test $ac_cv_func_memmove = no; then
--  AC_LIBOBJ([memmove])
--  gl_PREREQ_MEMMOVE
--fi
- gl_MULTIARCH
- gl_PATHMAX
- AC_CHECK_DECLS([program_invocation_name], [], [], [#include <errno.h>])
- AC_CHECK_DECLS([program_invocation_short_name], [], [], [#include <errno.h>])
- gl_FUNC_READ
--if test $REPLACE_READ = 1; then
--  AC_LIBOBJ([read])
--fi
- gl_UNISTD_MODULE_INDICATOR([read])
- gl_FUNC_READLINK
--if test $HAVE_READLINK = 0 || test $REPLACE_READLINK = 1; then
--  AC_LIBOBJ([readlink])
--  gl_PREREQ_READLINK
--fi
- gl_UNISTD_MODULE_INDICATOR([readlink])
- gl_RELOCATABLE([$gl_source_base])
--if test $RELOCATABLE = yes; then
--  AC_LIBOBJ([progreloc])
--fi
- gl_FUNC_READLINK_SEPARATE
- gl_CANONICALIZE_LGPL_SEPARATE
- gl_MALLOCA
--gl_RELOCATABLE_LIBRARY
-+gl_RELOCATABLE_LIBRARY_SEPARATE
- gl_FUNC_SETENV_SEPARATE
--gl_PREREQ_SAFE_READ
-+gl_FUNC_STRERROR_SEPARATE
-+gl_SAFE_READ
- gl_SIGNAL_H
- gl_SIGNAL_SIGPIPE
- dnl Define the C macro GNULIB_SIGPIPE to 1.
-@@ -186,17 +159,9 @@ dnl Define the substituted variable GNUL
- AC_REQUIRE([gl_UNISTD_H_DEFAULTS])
- GNULIB_UNISTD_H_SIGPIPE=1
- gl_SIGNALBLOCKING
--if test $HAVE_POSIX_SIGNALBLOCKING = 0; then
--  AC_LIBOBJ([sigprocmask])
--  gl_PREREQ_SIGPROCMASK
--fi
- gl_SIGNAL_MODULE_INDICATOR([sigprocmask])
- gt_TYPE_SSIZE_T
- gl_FUNC_STAT
--if test $REPLACE_STAT = 1; then
--  AC_LIBOBJ([stat])
--  gl_PREREQ_STAT
--fi
- gl_SYS_STAT_MODULE_INDICATOR([stat])
- AM_STDBOOL_H
- gl_STDDEF_H
-@@ -204,17 +169,7 @@ gl_STDINT_H
- gl_STDIO_H
- gl_STDLIB_H
- gl_FUNC_STRERROR
--if test $REPLACE_STRERROR = 1; then
--  AC_LIBOBJ([strerror])
--fi
--gl_MODULE_INDICATOR([strerror])
- gl_STRING_MODULE_INDICATOR([strerror])
--AC_REQUIRE([gl_HEADER_ERRNO_H])
--AC_REQUIRE([gl_FUNC_STRERROR_0])
--if test -n "$ERRNO_H" || test $REPLACE_STRERROR_0 = 1; then
--  AC_LIBOBJ([strerror-override])
--  gl_PREREQ_SYS_H_WINSOCK2
--fi
- gl_HEADER_STRING_H
- gl_HEADER_SYS_STAT_H
- AC_PROG_MKDIR_P
-@@ -364,14 +319,13 @@ AC_DEFUN([gltests_LIBSOURCES], [
- # This macro records the list of files which have been installed by
- # gnulib-tool and may be removed by future gnulib-tool invocations.
- AC_DEFUN([gl_FILE_LIST], [
-+  build-aux/arg-nonnull.h
-+  build-aux/c++defs.h
-   build-aux/config.libpath
-   build-aux/config.rpath
-   build-aux/install-reloc
-   build-aux/reloc-ldflags
--  build-aux/snippet/_Noreturn.h
--  build-aux/snippet/arg-nonnull.h
--  build-aux/snippet/c++defs.h
--  build-aux/snippet/warn-on-use.h
-+  build-aux/warn-on-use.h
-   doc/relocatable.texi
-   lib/alloca.in.h
-   lib/allocator.c
-@@ -419,8 +373,6 @@ AC_DEFUN([gl_FILE_LIST], [
-   lib/stdio.in.h
-   lib/stdlib.in.h
-   lib/streq.h
--  lib/strerror-override.c
--  lib/strerror-override.h
-   lib/strerror.c
-   lib/string.in.h
-   lib/sys_stat.in.h
-@@ -463,7 +415,6 @@ AC_DEFUN([gl_FILE_LIST], [
-   m4/intmax.m4
-   m4/inttypes-pri.m4
-   m4/inttypes_h.m4
--  m4/largefile.m4
-   m4/lcmessage.m4
-   m4/lib-ld.m4
-   m4/lib-link.m4
-@@ -502,7 +453,6 @@ AC_DEFUN([gl_FILE_LIST], [
-   m4/stdlib_h.m4
-   m4/strerror.m4
-   m4/string_h.m4
--  m4/sys_socket_h.m4
-   m4/sys_stat_h.m4
-   m4/threadlib.m4
-   m4/time_h.m4
-diff -Naurp libiconv-1.14.org//srcm4/include_next.m4 libiconv-1.14/srcm4/include_next.m4
---- libiconv-1.14.org//srcm4/include_next.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/include_next.m4	2012-01-08 02:07:40.650484472 -0800
-@@ -1,4 +1,4 @@
--# include_next.m4 serial 20
-+# include_next.m4 serial 18
- dnl Copyright (C) 2006-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -175,13 +175,11 @@ AC_DEFUN([gl_NEXT_HEADERS_INTERNAL],
-     [AC_CHECK_HEADERS_ONCE([$1])
-     ])
- 
--dnl FIXME: gl_next_header and gl_header_exists must be used unquoted
--dnl until we can assume autoconf 2.64 or newer.
-   m4_foreach_w([gl_HEADER_NAME], [$1],
-     [AS_VAR_PUSHDEF([gl_next_header],
-                     [gl_cv_next_]m4_defn([gl_HEADER_NAME]))
-      if test $gl_cv_have_include_next = yes; then
--       AS_VAR_SET(gl_next_header, ['<'gl_HEADER_NAME'>'])
-+       AS_VAR_SET([gl_next_header], ['<'gl_HEADER_NAME'>'])
-      else
-        AC_CACHE_CHECK(
-          [absolute name of <]m4_defn([gl_HEADER_NAME])[>],
-@@ -210,7 +208,7 @@ dnl until we can assume autoconf 2.64 or
-                dnl eval is necessary to expand gl_absname_cpp.
-                dnl Ultrix and Pyramid sh refuse to redirect output of eval,
-                dnl so use subshell.
--               AS_VAR_SET(gl_next_header,
-+               AS_VAR_SET([gl_next_header],
-                  ['"'`(eval "$gl_absname_cpp conftest.$ac_ext") 2>&AS_MESSAGE_LOG_FD |
-                   sed -n '\#/]m4_defn([gl_HEADER_NAME])[#{
-                     s#.*"\(.*/]m4_defn([gl_HEADER_NAME])[\)".*#\1#
-@@ -220,20 +218,20 @@ dnl until we can assume autoconf 2.64 or
-                   }'`'"'])
-           m4_if([$2], [check],
-             [else
--               AS_VAR_SET(gl_next_header, ['<'gl_HEADER_NAME'>'])
-+               AS_VAR_SET([gl_next_header], ['<'gl_HEADER_NAME'>'])
-              fi
-             ])
-          ])
-      fi
-      AC_SUBST(
-        AS_TR_CPP([NEXT_]m4_defn([gl_HEADER_NAME])),
--       [AS_VAR_GET(gl_next_header)])
-+       [AS_VAR_GET([gl_next_header])])
-      if test $gl_cv_have_include_next = yes || test $gl_cv_have_include_next = buggy; then
-        # INCLUDE_NEXT_AS_FIRST_DIRECTIVE='include_next'
-        gl_next_as_first_directive='<'gl_HEADER_NAME'>'
-      else
-        # INCLUDE_NEXT_AS_FIRST_DIRECTIVE='include'
--       gl_next_as_first_directive=AS_VAR_GET(gl_next_header)
-+       gl_next_as_first_directive=AS_VAR_GET([gl_next_header])
-      fi
-      AC_SUBST(
-        AS_TR_CPP([NEXT_AS_FIRST_DIRECTIVE_]m4_defn([gl_HEADER_NAME])),
-diff -Naurp libiconv-1.14.org//srcm4/largefile.m4 libiconv-1.14/srcm4/largefile.m4
---- libiconv-1.14.org//srcm4/largefile.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/largefile.m4	1969-12-31 16:00:00.000000000 -0800
-@@ -1,104 +0,0 @@
--# Enable large files on systems where this is not the default.
--
--# Copyright 1992-1996, 1998-2011 Free Software Foundation, Inc.
--# This file is free software; the Free Software Foundation
--# gives unlimited permission to copy and/or distribute it,
--# with or without modifications, as long as this notice is preserved.
--
--# The following implementation works around a problem in autoconf <= 2.68;
--# AC_SYS_LARGEFILE does not configure for large inodes on Mac OS X 10.5.
--m4_version_prereq([2.69], [] ,[
--
--# _AC_SYS_LARGEFILE_TEST_INCLUDES
--# -------------------------------
--m4_define([_AC_SYS_LARGEFILE_TEST_INCLUDES],
--[@%:@include <sys/types.h>
-- /* Check that off_t can represent 2**63 - 1 correctly.
--    We can't simply define LARGE_OFF_T to be 9223372036854775807,
--    since some C++ compilers masquerading as C compilers
--    incorrectly reject 9223372036854775807.  */
--@%:@define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
--  int off_t_is_large[[(LARGE_OFF_T % 2147483629 == 721
--		       && LARGE_OFF_T % 2147483647 == 1)
--		      ? 1 : -1]];[]dnl
--])
--
--
--# _AC_SYS_LARGEFILE_MACRO_VALUE(C-MACRO, VALUE,
--#				CACHE-VAR,
--#				DESCRIPTION,
--#				PROLOGUE, [FUNCTION-BODY])
--# --------------------------------------------------------
--m4_define([_AC_SYS_LARGEFILE_MACRO_VALUE],
--[AC_CACHE_CHECK([for $1 value needed for large files], [$3],
--[while :; do
--  m4_ifval([$6], [AC_LINK_IFELSE], [AC_COMPILE_IFELSE])(
--    [AC_LANG_PROGRAM([$5], [$6])],
--    [$3=no; break])
--  m4_ifval([$6], [AC_LINK_IFELSE], [AC_COMPILE_IFELSE])(
--    [AC_LANG_PROGRAM([@%:@define $1 $2
--$5], [$6])],
--    [$3=$2; break])
--  $3=unknown
--  break
--done])
--case $$3 in #(
--  no | unknown) ;;
--  *) AC_DEFINE_UNQUOTED([$1], [$$3], [$4]);;
--esac
--rm -rf conftest*[]dnl
--])# _AC_SYS_LARGEFILE_MACRO_VALUE
--
--
--# AC_SYS_LARGEFILE
--# ----------------
--# By default, many hosts won't let programs access large files;
--# one must use special compiler options to get large-file access to work.
--# For more details about this brain damage please see:
--# http://www.unix-systems.org/version2/whatsnew/lfs20mar.html
--AC_DEFUN([AC_SYS_LARGEFILE],
--[AC_ARG_ENABLE(largefile,
--	       [  --disable-largefile     omit support for large files])
--if test "$enable_largefile" != no; then
--
--  AC_CACHE_CHECK([for special C compiler options needed for large files],
--    ac_cv_sys_largefile_CC,
--    [ac_cv_sys_largefile_CC=no
--     if test "$GCC" != yes; then
--       ac_save_CC=$CC
--       while :; do
--	 # IRIX 6.2 and later do not support large files by default,
--	 # so use the C compiler's -n32 option if that helps.
--	 AC_LANG_CONFTEST([AC_LANG_PROGRAM([_AC_SYS_LARGEFILE_TEST_INCLUDES])])
--	 AC_COMPILE_IFELSE([], [break])
--	 CC="$CC -n32"
--	 AC_COMPILE_IFELSE([], [ac_cv_sys_largefile_CC=' -n32'; break])
--	 break
--       done
--       CC=$ac_save_CC
--       rm -f conftest.$ac_ext
--    fi])
--  if test "$ac_cv_sys_largefile_CC" != no; then
--    CC=$CC$ac_cv_sys_largefile_CC
--  fi
--
--  _AC_SYS_LARGEFILE_MACRO_VALUE(_FILE_OFFSET_BITS, 64,
--    ac_cv_sys_file_offset_bits,
--    [Number of bits in a file offset, on hosts where this is settable.],
--    [_AC_SYS_LARGEFILE_TEST_INCLUDES])
--  if test $ac_cv_sys_file_offset_bits = unknown; then
--    _AC_SYS_LARGEFILE_MACRO_VALUE(_LARGE_FILES, 1,
--      ac_cv_sys_large_files,
--      [Define for large files, on AIX-style hosts.],
--      [_AC_SYS_LARGEFILE_TEST_INCLUDES])
--  fi
--
--  AH_VERBATIM([_DARWIN_USE_64_BIT_INODE],
--[/* Enable large inode numbers on Mac OS X.  */
--#ifndef _DARWIN_USE_64_BIT_INODE
--# define _DARWIN_USE_64_BIT_INODE 1
--#endif])
--fi
--])# AC_SYS_LARGEFILE
--
--])# m4_version_prereq 2.69
-diff -Naurp libiconv-1.14.org//srcm4/lstat.m4 libiconv-1.14/srcm4/lstat.m4
---- libiconv-1.14.org//srcm4/lstat.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/lstat.m4	2012-01-08 02:07:40.722484476 -0800
-@@ -1,4 +1,4 @@
--# serial 23
-+# serial 21
- 
- # Copyright (C) 1997-2001, 2003-2011 Free Software Foundation, Inc.
- #
-@@ -15,28 +15,24 @@ AC_DEFUN([gl_FUNC_LSTAT],
-   dnl "#define lstat stat", and lstat.c is a no-op.
-   AC_CHECK_FUNCS_ONCE([lstat])
-   if test $ac_cv_func_lstat = yes; then
--    AC_REQUIRE([gl_FUNC_LSTAT_FOLLOWS_SLASHED_SYMLINK])
--    if test $gl_cv_func_lstat_dereferences_slashed_symlink = no; then
-+    AC_REQUIRE([AC_FUNC_LSTAT_FOLLOWS_SLASHED_SYMLINK])
-+    if test $ac_cv_func_lstat_dereferences_slashed_symlink = no; then
-+      dnl Note: AC_FUNC_LSTAT_FOLLOWS_SLASHED_SYMLINK does AC_LIBOBJ([lstat]).
-       REPLACE_LSTAT=1
-     fi
-+    # Prerequisites of lib/lstat.c.
-+    AC_REQUIRE([AC_C_INLINE])
-   else
-     HAVE_LSTAT=0
-   fi
- ])
- 
--# Prerequisites of lib/lstat.c.
--AC_DEFUN([gl_PREREQ_LSTAT],
-+# Redefine AC_FUNC_LSTAT_FOLLOWS_SLASHED_SYMLINK, because it is no longer
-+# maintained in Autoconf.
-+AC_DEFUN([AC_FUNC_LSTAT_FOLLOWS_SLASHED_SYMLINK],
- [
--  AC_REQUIRE([AC_C_INLINE])
--  :
--])
--
--AC_DEFUN([gl_FUNC_LSTAT_FOLLOWS_SLASHED_SYMLINK],
--[
--  dnl We don't use AC_FUNC_LSTAT_FOLLOWS_SLASHED_SYMLINK any more, because it
--  dnl is no longer maintained in Autoconf and because it invokes AC_LIBOBJ.
-   AC_CACHE_CHECK([whether lstat correctly handles trailing slash],
--    [gl_cv_func_lstat_dereferences_slashed_symlink],
-+    [ac_cv_func_lstat_dereferences_slashed_symlink],
-     [rm -f conftest.sym conftest.file
-      echo >conftest.file
-      if test "$as_ln_s" = "ln -s" && ln -s conftest.file conftest.sym; then
-@@ -49,22 +45,25 @@ AC_DEFUN([gl_FUNC_LSTAT_FOLLOWS_SLASHED_
-                  have to compile and use the lstat wrapper.  */
-               return lstat ("conftest.sym/", &sbuf) == 0;
-             ]])],
--         [gl_cv_func_lstat_dereferences_slashed_symlink=yes],
--         [gl_cv_func_lstat_dereferences_slashed_symlink=no],
-+         [ac_cv_func_lstat_dereferences_slashed_symlink=yes],
-+         [ac_cv_func_lstat_dereferences_slashed_symlink=no],
-          [# When cross-compiling, be pessimistic so we will end up using the
-           # replacement version of lstat that checks for trailing slashes and
-           # calls lstat a second time when necessary.
--          gl_cv_func_lstat_dereferences_slashed_symlink=no
-+          ac_cv_func_lstat_dereferences_slashed_symlink=no
-          ])
-      else
-        # If the 'ln -s' command failed, then we probably don't even
-        # have an lstat function.
--       gl_cv_func_lstat_dereferences_slashed_symlink=no
-+       ac_cv_func_lstat_dereferences_slashed_symlink=no
-      fi
-      rm -f conftest.sym conftest.file
-     ])
--  test $gl_cv_func_lstat_dereferences_slashed_symlink = yes &&
-+  test $ac_cv_func_lstat_dereferences_slashed_symlink = yes &&
-     AC_DEFINE_UNQUOTED([LSTAT_FOLLOWS_SLASHED_SYMLINK], [1],
-       [Define to 1 if `lstat' dereferences a symlink specified
-        with a trailing slash.])
-+  if test "x$ac_cv_func_lstat_dereferences_slashed_symlink" = xno; then
-+    AC_LIBOBJ([lstat])
-+  fi
- ])
-diff -Naurp libiconv-1.14.org//srcm4/memmove.m4 libiconv-1.14/srcm4/memmove.m4
---- libiconv-1.14.org//srcm4/memmove.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/memmove.m4	2012-01-08 02:07:40.738484477 -0800
-@@ -1,4 +1,4 @@
--# memmove.m4 serial 4
-+# memmove.m4 serial 3
- dnl Copyright (C) 2002, 2009-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -6,7 +6,10 @@ dnl with or without modifications, as lo
- 
- AC_DEFUN([gl_FUNC_MEMMOVE],
- [
--  AC_CHECK_FUNCS([memmove])
-+  AC_REPLACE_FUNCS([memmove])
-+  if test $ac_cv_func_memmove = no; then
-+    gl_PREREQ_MEMMOVE
-+  fi
- ])
- 
- # Prerequisites of lib/memmove.c.
-diff -Naurp libiconv-1.14.org//srcm4/pathmax.m4 libiconv-1.14/srcm4/pathmax.m4
---- libiconv-1.14.org//srcm4/pathmax.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/pathmax.m4	2012-01-08 02:07:40.758484478 -0800
-@@ -1,4 +1,4 @@
--# pathmax.m4 serial 9
-+# pathmax.m4 serial 8
- dnl Copyright (C) 2002-2003, 2005-2006, 2009-2011 Free Software Foundation,
- dnl Inc.
- dnl This file is free software; the Free Software Foundation
-@@ -8,5 +8,6 @@ dnl with or without modifications, as lo
- AC_DEFUN([gl_PATHMAX],
- [
-   dnl Prerequisites of lib/pathmax.h.
-+  AC_CHECK_FUNCS_ONCE([pathconf])
-   AC_CHECK_HEADERS_ONCE([sys/param.h])
- ])
-diff -Naurp libiconv-1.14.org//srcm4/po.m4 libiconv-1.14/srcm4/po.m4
---- libiconv-1.14.org//srcm4/po.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/po.m4	2012-01-08 02:07:40.766484477 -0800
-@@ -1,4 +1,4 @@
--# po.m4 serial 17a
-+# po.m4 serial 17 (gettext-0.18)
- dnl Copyright (C) 1995-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -118,8 +118,7 @@ changequote([,])dnl
-         if test -f "$ac_given_srcdir/$ac_dir/POTFILES.in"; then
-           rm -f "$ac_dir/POTFILES"
-           test -n "$as_me" && echo "$as_me: creating $ac_dir/POTFILES" || echo "creating $ac_dir/POTFILES"
--          gt_tab=`printf '\t'`
--          cat "$ac_given_srcdir/$ac_dir/POTFILES.in" | sed -e "/^#/d" -e "/^[ ${gt_tab}]*\$/d" -e "s,.*,     $top_srcdir/& \\\\," | sed -e "\$s/\(.*\) \\\\/\1/" > "$ac_dir/POTFILES"
-+          cat "$ac_given_srcdir/$ac_dir/POTFILES.in" | sed -e "/^#/d" -e "/^[ 	]*\$/d" -e "s,.*,     $top_srcdir/& \\\\," | sed -e "\$s/\(.*\) \\\\/\1/" > "$ac_dir/POTFILES"
-           POMAKEFILEDEPS="POTFILES.in"
-           # ALL_LINGUAS, POFILES, UPDATEPOFILES, DUMMYPOFILES, GMOFILES depend
-           # on $ac_dir but don't depend on user-specified configuration
-@@ -255,7 +254,6 @@ EOT
-   fi
- 
-   # A sed script that extracts the value of VARIABLE from a Makefile.
--  tab=`printf '\t'`
-   sed_x_variable='
- # Test if the hold space is empty.
- x
-@@ -263,9 +261,9 @@ s/P/P/
- x
- ta
- # Yes it was empty. Look if we have the expected variable definition.
--/^['"${tab}"' ]*VARIABLE['"${tab}"' ]*=/{
-+/^[	 ]*VARIABLE[	 ]*=/{
-   # Seen the first line of the variable definition.
--  s/^['"${tab}"' ]*VARIABLE['"${tab}"' ]*=//
-+  s/^[	 ]*VARIABLE[	 ]*=//
-   ba
- }
- bd
-@@ -407,15 +405,14 @@ changequote([,])dnl
-   fi
- 
-   sed -e "s|@POTFILES_DEPS@|$POTFILES_DEPS|g" -e "s|@POFILES@|$POFILES|g" -e "s|@UPDATEPOFILES@|$UPDATEPOFILES|g" -e "s|@DUMMYPOFILES@|$DUMMYPOFILES|g" -e "s|@GMOFILES@|$GMOFILES|g" -e "s|@PROPERTIESFILES@|$PROPERTIESFILES|g" -e "s|@CLASSFILES@|$CLASSFILES|g" -e "s|@QMFILES@|$QMFILES|g" -e "s|@MSGFILES@|$MSGFILES|g" -e "s|@RESOURCESDLLFILES@|$RESOURCESDLLFILES|g" -e "s|@CATALOGS@|$CATALOGS|g" -e "s|@JAVACATALOGS@|$JAVACATALOGS|g" -e "s|@QTCATALOGS@|$QTCATALOGS|g" -e "s|@TCLCATALOGS@|$TCLCATALOGS|g" -e "s|@CSHARPCATALOGS@|$CSHARPCATALOGS|g" -e 's,^#distdir:,distdir:,' < "$ac_file" > "$ac_file.tmp"
--  tab=`printf '\t'`
-   if grep -l '@TCLCATALOGS@' "$ac_file" > /dev/null; then
-     # Add dependencies that cannot be formulated as a simple suffix rule.
-     for lang in $ALL_LINGUAS; do
-       frobbedlang=`echo $lang | sed -e 's/\..*$//' -e 'y/ABCDEFGHIJKLMNOPQRSTUVWXYZ/abcdefghijklmnopqrstuvwxyz/'`
-       cat >> "$ac_file.tmp" <<EOF
- $frobbedlang.msg: $lang.po
--${tab}@echo "\$(MSGFMT) -c --tcl -d \$(srcdir) -l $lang $srcdirpre$lang.po"; \
--${tab}\$(MSGFMT) -c --tcl -d "\$(srcdir)" -l $lang $srcdirpre$lang.po || { rm -f "\$(srcdir)/$frobbedlang.msg"; exit 1; }
-+	@echo "\$(MSGFMT) -c --tcl -d \$(srcdir) -l $lang $srcdirpre$lang.po"; \
-+	\$(MSGFMT) -c --tcl -d "\$(srcdir)" -l $lang $srcdirpre$lang.po || { rm -f "\$(srcdir)/$frobbedlang.msg"; exit 1; }
- EOF
-     done
-   fi
-@@ -425,8 +422,8 @@ EOF
-       frobbedlang=`echo $lang | sed -e 's/_/-/g' -e 's/^sr-CS/sr-SP/' -e 's/@latin$/-Latn/' -e 's/@cyrillic$/-Cyrl/' -e 's/^sr-SP$/sr-SP-Latn/' -e 's/^uz-UZ$/uz-UZ-Latn/'`
-       cat >> "$ac_file.tmp" <<EOF
- $frobbedlang/\$(DOMAIN).resources.dll: $lang.po
--${tab}@echo "\$(MSGFMT) -c --csharp -d \$(srcdir) -l $lang $srcdirpre$lang.po -r \$(DOMAIN)"; \
--${tab}\$(MSGFMT) -c --csharp -d "\$(srcdir)" -l $lang $srcdirpre$lang.po -r "\$(DOMAIN)" || { rm -f "\$(srcdir)/$frobbedlang.msg"; exit 1; }
-+	@echo "\$(MSGFMT) -c --csharp -d \$(srcdir) -l $lang $srcdirpre$lang.po -r \$(DOMAIN)"; \
-+	\$(MSGFMT) -c --csharp -d "\$(srcdir)" -l $lang $srcdirpre$lang.po -r "\$(DOMAIN)" || { rm -f "\$(srcdir)/$frobbedlang.msg"; exit 1; }
- EOF
-     done
-   fi
-diff -Naurp libiconv-1.14.org//srcm4/readlink.m4 libiconv-1.14/srcm4/readlink.m4
---- libiconv-1.14.org//srcm4/readlink.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/readlink.m4	2012-01-08 02:07:40.790484479 -0800
-@@ -1,4 +1,4 @@
--# readlink.m4 serial 11
-+# readlink.m4 serial 10
- dnl Copyright (C) 2003, 2007, 2009-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -10,6 +10,8 @@ AC_DEFUN([gl_FUNC_READLINK],
-   AC_CHECK_FUNCS_ONCE([readlink])
-   if test $ac_cv_func_readlink = no; then
-     HAVE_READLINK=0
-+    AC_LIBOBJ([readlink])
-+    gl_PREREQ_READLINK
-   else
-     AC_CACHE_CHECK([whether readlink signature is correct],
-       [gl_cv_decl_readlink_works],
-@@ -38,8 +40,10 @@ AC_DEFUN([gl_FUNC_READLINK],
-       AC_DEFINE([READLINK_TRAILING_SLASH_BUG], [1], [Define to 1 if readlink
-         fails to recognize a trailing slash.])
-       REPLACE_READLINK=1
-+      AC_LIBOBJ([readlink])
-     elif test "$gl_cv_decl_readlink_works" != yes; then
-       REPLACE_READLINK=1
-+      AC_LIBOBJ([readlink])
-     fi
-   fi
- ])
-diff -Naurp libiconv-1.14.org//srcm4/read.m4 libiconv-1.14/srcm4/read.m4
---- libiconv-1.14.org//srcm4/read.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/read.m4	2012-01-08 02:07:40.782484478 -0800
-@@ -1,4 +1,4 @@
--# read.m4 serial 2
-+# read.m4 serial 1
- dnl Copyright (C) 2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -14,6 +14,7 @@ AC_DEFUN([gl_FUNC_READ],
-     gl_NONBLOCKING_IO
-     if test $gl_cv_have_nonblocking != yes; then
-       REPLACE_READ=1
-+      AC_LIBOBJ([read])
-     fi
-   ])
- ])
-diff -Naurp libiconv-1.14.org//srcm4/relocatable-lib.m4 libiconv-1.14/srcm4/relocatable-lib.m4
---- libiconv-1.14.org//srcm4/relocatable-lib.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/relocatable-lib.m4	2012-01-08 02:07:40.798484480 -0800
-@@ -1,4 +1,4 @@
--# relocatable-lib.m4 serial 6
-+# relocatable-lib.m4 serial 5
- dnl Copyright (C) 2003, 2005-2007, 2009-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -10,6 +10,9 @@ dnl Support for relocatable libraries.
- AC_DEFUN([gl_RELOCATABLE_LIBRARY],
- [
-   AC_REQUIRE([gl_RELOCATABLE_LIBRARY_BODY])
-+  if test $RELOCATABLE = yes; then
-+    AC_LIBOBJ([relocatable])
-+  fi
- ])
- AC_DEFUN([gl_RELOCATABLE_LIBRARY_BODY],
- [
-@@ -29,6 +32,13 @@ AC_DEFUN([gl_RELOCATABLE_LIBRARY_BODY],
-   fi
- ])
- 
-+dnl Like gl_RELOCATABLE_LIBRARY, except prepare for separate compilation
-+dnl (no AC_LIBOBJ).
-+AC_DEFUN([gl_RELOCATABLE_LIBRARY_SEPARATE],
-+[
-+  AC_REQUIRE([gl_RELOCATABLE_LIBRARY_BODY])
-+])
-+
- dnl Support for relocatable packages for which it is a nop.
- AC_DEFUN([gl_RELOCATABLE_NOP],
- [
-diff -Naurp libiconv-1.14.org//srcm4/relocatable.m4 libiconv-1.14/srcm4/relocatable.m4
---- libiconv-1.14.org//srcm4/relocatable.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/relocatable.m4	2012-01-08 02:07:40.802484479 -0800
-@@ -1,4 +1,4 @@
--# relocatable.m4 serial 17
-+# relocatable.m4 serial 16
- dnl Copyright (C) 2003, 2005-2007, 2009-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -14,6 +14,9 @@ AC_DEFUN([gl_RELOCATABLE],
- [
-   AC_REQUIRE([gl_RELOCATABLE_BODY])
-   gl_RELOCATABLE_LIBRARY
-+  if test $RELOCATABLE = yes; then
-+    AC_LIBOBJ([progreloc])
-+  fi
-   : ${RELOCATABLE_CONFIG_H_DIR='$(top_builddir)'}
-   RELOCATABLE_SRC_DIR="\$(top_srcdir)/$gl_source_base"
-   RELOCATABLE_BUILD_DIR="\$(top_builddir)/$gl_source_base"
-diff -Naurp libiconv-1.14.org//srcm4/safe-read.m4 libiconv-1.14/srcm4/safe-read.m4
---- libiconv-1.14.org//srcm4/safe-read.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/safe-read.m4	2012-01-08 02:07:40.810484480 -0800
-@@ -1,10 +1,17 @@
--# safe-read.m4 serial 6
-+# safe-read.m4 serial 5
- dnl Copyright (C) 2002-2003, 2005-2006, 2009-2011 Free Software Foundation,
- dnl Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
- dnl with or without modifications, as long as this notice is preserved.
- 
-+AC_DEFUN([gl_SAFE_READ],
-+[
-+  AC_LIBOBJ([safe-read])
-+
-+  gl_PREREQ_SAFE_READ
-+])
-+
- # Prerequisites of lib/safe-read.c.
- AC_DEFUN([gl_PREREQ_SAFE_READ],
- [
-diff -Naurp libiconv-1.14.org//srcm4/setenv.m4 libiconv-1.14/srcm4/setenv.m4
---- libiconv-1.14.org//srcm4/setenv.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/setenv.m4	2012-01-08 02:07:40.818484481 -0800
-@@ -1,4 +1,4 @@
--# setenv.m4 serial 24
-+# setenv.m4 serial 22
- dnl Copyright (C) 2001-2004, 2006-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -38,6 +38,9 @@ AC_DEFUN([gl_FUNC_SETENV],
-       REPLACE_SETENV=1
-     fi
-   fi
-+  if test $HAVE_SETENV$REPLACE_SETENV != 10; then
-+    AC_LIBOBJ([setenv])
-+  fi
- ])
- 
- # Like gl_FUNC_SETENV, except prepare for separate compilation
-@@ -62,9 +65,9 @@ AC_DEFUN([gl_FUNC_UNSETENV],
-   fi
-   AC_CHECK_FUNCS([unsetenv])
-   if test $ac_cv_func_unsetenv = no; then
--    HAVE_UNSETENV=0
-+    AC_LIBOBJ([unsetenv])
-+    gl_PREREQ_UNSETENV
-   else
--    HAVE_UNSETENV=1
-     dnl Some BSDs return void, failing to do error checking.
-     AC_CACHE_CHECK([for unsetenv() return type], [gt_cv_func_unsetenv_ret],
-       [AC_COMPILE_IFELSE(
-@@ -90,6 +93,7 @@ int unsetenv();
-       AC_DEFINE([VOID_UNSETENV], [1], [Define to 1 if unsetenv returns void
-        instead of int.])
-       REPLACE_UNSETENV=1
-+      AC_LIBOBJ([unsetenv])
-     fi
- 
-     dnl Solaris 10 unsetenv does not remove all copies of a name.
-@@ -122,6 +126,7 @@ int unsetenv();
-       [gl_cv_func_unsetenv_works="guessing no"])])
-     if test "$gl_cv_func_unsetenv_works" != yes; then
-       REPLACE_UNSETENV=1
-+      AC_LIBOBJ([unsetenv])
-     fi
-   fi
- ])
-diff -Naurp libiconv-1.14.org//srcm4/signalblocking.m4 libiconv-1.14/srcm4/signalblocking.m4
---- libiconv-1.14.org//srcm4/signalblocking.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/signalblocking.m4	2012-01-08 02:07:40.834484482 -0800
-@@ -1,4 +1,4 @@
--# signalblocking.m4 serial 12
-+# signalblocking.m4 serial 10
- dnl Copyright (C) 2001-2002, 2006-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -12,14 +12,31 @@ dnl with or without modifications, as lo
- AC_DEFUN([gl_SIGNALBLOCKING],
- [
-   AC_REQUIRE([gl_SIGNAL_H_DEFAULTS])
--  AC_REQUIRE([gl_CHECK_TYPE_SIGSET_T])
--  if test $gl_cv_type_sigset_t = yes; then
-+  signals_not_posix=
-+  AC_EGREP_HEADER([sigset_t], [signal.h], , [signals_not_posix=1])
-+  if test -z "$signals_not_posix"; then
-     AC_CHECK_FUNC([sigprocmask], [gl_cv_func_sigprocmask=1])
-   fi
-   if test -z "$gl_cv_func_sigprocmask"; then
-     HAVE_POSIX_SIGNALBLOCKING=0
-+    AC_LIBOBJ([sigprocmask])
-+    gl_PREREQ_SIGPROCMASK
-   fi
- ])
- 
--# Prerequisites of lib/sigprocmask.c.
--AC_DEFUN([gl_PREREQ_SIGPROCMASK], [:])
-+# Prerequisites of the part of lib/signal.in.h and of lib/sigprocmask.c.
-+AC_DEFUN([gl_PREREQ_SIGPROCMASK],
-+[
-+  AC_REQUIRE([gl_SIGNAL_H_DEFAULTS])
-+  AC_CHECK_TYPES([sigset_t],
-+    [gl_cv_type_sigset_t=yes], [gl_cv_type_sigset_t=no],
-+    [#include <signal.h>
-+/* Mingw defines sigset_t not in <signal.h>, but in <sys/types.h>.  */
-+#include <sys/types.h>])
-+  if test $gl_cv_type_sigset_t != yes; then
-+    HAVE_SIGSET_T=0
-+  fi
-+  dnl HAVE_SIGSET_T is 1 if the system lacks the sigprocmask function but has
-+  dnl the sigset_t type.
-+  AC_SUBST([HAVE_SIGSET_T])
-+])
-diff -Naurp libiconv-1.14.org//srcm4/signal_h.m4 libiconv-1.14/srcm4/signal_h.m4
---- libiconv-1.14.org//srcm4/signal_h.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/signal_h.m4	2012-01-08 02:07:40.826484480 -0800
-@@ -1,4 +1,4 @@
--# signal_h.m4 serial 16
-+# signal_h.m4 serial 12
- dnl Copyright (C) 2007-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -7,7 +7,6 @@ dnl with or without modifications, as lo
- AC_DEFUN([gl_SIGNAL_H],
- [
-   AC_REQUIRE([gl_SIGNAL_H_DEFAULTS])
--  AC_REQUIRE([gl_CHECK_TYPE_SIGSET_T])
-   gl_NEXT_HEADERS([signal.h])
- 
- # AIX declares sig_atomic_t to already include volatile, and C89 compilers
-@@ -28,25 +27,10 @@ AC_DEFUN([gl_SIGNAL_H],
-   dnl Check for declarations of anything we want to poison if the
-   dnl corresponding gnulib module is not in use.
-   gl_WARN_ON_USE_PREPARE([[#include <signal.h>
--    ]], [pthread_sigmask sigaction
--    sigaddset sigdelset sigemptyset sigfillset sigismember
-+    ]], [sigaction sigaddset sigdelset sigemptyset sigfillset sigismember
-     sigpending sigprocmask])
- ])
- 
--AC_DEFUN([gl_CHECK_TYPE_SIGSET_T],
--[
--  AC_CHECK_TYPES([sigset_t],
--    [gl_cv_type_sigset_t=yes], [gl_cv_type_sigset_t=no],
--    [[
--      #include <signal.h>
--      /* Mingw defines sigset_t not in <signal.h>, but in <sys/types.h>.  */
--      #include <sys/types.h>
--    ]])
--  if test $gl_cv_type_sigset_t != yes; then
--    HAVE_SIGSET_T=0
--  fi
--])
--
- AC_DEFUN([gl_SIGNAL_MODULE_INDICATOR],
- [
-   dnl Use AC_REQUIRE here, so that the default settings are expanded once only.
-@@ -58,13 +42,11 @@ AC_DEFUN([gl_SIGNAL_MODULE_INDICATOR],
- 
- AC_DEFUN([gl_SIGNAL_H_DEFAULTS],
- [
--  GNULIB_PTHREAD_SIGMASK=0;    AC_SUBST([GNULIB_PTHREAD_SIGMASK])
-   GNULIB_SIGNAL_H_SIGPIPE=0;   AC_SUBST([GNULIB_SIGNAL_H_SIGPIPE])
-   GNULIB_SIGPROCMASK=0;        AC_SUBST([GNULIB_SIGPROCMASK])
-   GNULIB_SIGACTION=0;          AC_SUBST([GNULIB_SIGACTION])
-   dnl Assume proper GNU behavior unless another module says otherwise.
-   HAVE_POSIX_SIGNALBLOCKING=1; AC_SUBST([HAVE_POSIX_SIGNALBLOCKING])
--  HAVE_PTHREAD_SIGMASK=1;      AC_SUBST([HAVE_PTHREAD_SIGMASK])
-   HAVE_SIGSET_T=1;             AC_SUBST([HAVE_SIGSET_T])
-   HAVE_SIGINFO_T=1;            AC_SUBST([HAVE_SIGINFO_T])
-   HAVE_SIGACTION=1;            AC_SUBST([HAVE_SIGACTION])
-@@ -73,5 +55,4 @@ AC_DEFUN([gl_SIGNAL_H_DEFAULTS],
-   HAVE_TYPE_VOLATILE_SIG_ATOMIC_T=1;
-                                AC_SUBST([HAVE_TYPE_VOLATILE_SIG_ATOMIC_T])
-   HAVE_SIGHANDLER_T=1;         AC_SUBST([HAVE_SIGHANDLER_T])
--  REPLACE_PTHREAD_SIGMASK=0;   AC_SUBST([REPLACE_PTHREAD_SIGMASK])
- ])
-diff -Naurp libiconv-1.14.org//srcm4/stat.m4 libiconv-1.14/srcm4/stat.m4
---- libiconv-1.14.org//srcm4/stat.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/stat.m4	2012-01-08 02:07:40.854484483 -0800
-@@ -1,4 +1,4 @@
--# serial 8
-+# serial 7
- 
- # Copyright (C) 2009-2011 Free Software Foundation, Inc.
- #
-@@ -58,11 +58,9 @@ AC_DEFUN([gl_FUNC_STAT],
-       AC_DEFINE([REPLACE_FUNC_STAT_FILE], [1], [Define to 1 if stat needs
-         help when passed a file name with a trailing slash]);;
-   esac
--])
--
--# Prerequisites of lib/stat.c.
--AC_DEFUN([gl_PREREQ_STAT],
--[
--  AC_REQUIRE([AC_C_INLINE])
--  :
-+  if test $REPLACE_STAT = 1; then
-+    AC_LIBOBJ([stat])
-+    dnl Prerequisites of lib/stat.c.
-+    AC_REQUIRE([AC_C_INLINE])
-+  fi
- ])
-diff -Naurp libiconv-1.14.org//srcm4/strerror.m4 libiconv-1.14/srcm4/strerror.m4
---- libiconv-1.14.org//srcm4/strerror.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/strerror.m4	2012-01-08 02:07:40.890484484 -0800
-@@ -1,4 +1,4 @@
--# strerror.m4 serial 16
-+# strerror.m4 serial 9
- dnl Copyright (C) 2002, 2007-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -6,73 +6,63 @@ dnl with or without modifications, as lo
- 
- AC_DEFUN([gl_FUNC_STRERROR],
- [
-+  AC_REQUIRE([gl_FUNC_STRERROR_SEPARATE])
-+  if test $REPLACE_STRERROR = 1; then
-+    AC_LIBOBJ([strerror])
-+    AC_DEFINE_UNQUOTED([REPLACE_STRERROR], [$REPLACE_STRERROR],
-+      [Define this to 1 if strerror is broken.])
-+  fi
-+])
-+
-+# Like gl_FUNC_STRERROR, except prepare for separate compilation (no AC_LIBOBJ).
-+AC_DEFUN([gl_FUNC_STRERROR_SEPARATE],
-+[
-   AC_REQUIRE([gl_HEADER_STRING_H_DEFAULTS])
-   AC_REQUIRE([gl_HEADER_ERRNO_H])
--  AC_REQUIRE([gl_FUNC_STRERROR_0])
--  m4_ifdef([gl_FUNC_STRERROR_R_WORKS], [
--    AC_REQUIRE([gl_FUNC_STRERROR_R_WORKS])
--  ])
--  if test "$ERRNO_H:$REPLACE_STRERROR_0" = :0; then
-+  if test -z "$ERRNO_H"; then
-     AC_CACHE_CHECK([for working strerror function],
-      [gl_cv_func_working_strerror],
-      [AC_RUN_IFELSE(
-         [AC_LANG_PROGRAM(
-            [[#include <string.h>
-            ]],
--           [[if (!*strerror (-2)) return 1;]])],
-+           [[return !*strerror (-2);]])],
-         [gl_cv_func_working_strerror=yes],
-         [gl_cv_func_working_strerror=no],
--        [dnl Be pessimistic on cross-compiles for now.
--         gl_cv_func_working_strerror="guessing no"])
-+        [dnl Assume crossbuild works if it compiles.
-+         AC_COMPILE_IFELSE(
-+           [AC_LANG_PROGRAM(
-+              [[#include <string.h>
-+              ]],
-+              [[return !*strerror (-2);]])],
-+           [gl_cv_func_working_strerror=yes],
-+           [gl_cv_func_working_strerror=no])
-+      ])
-     ])
--    if test "$gl_cv_func_working_strerror" != yes; then
-+    if test $gl_cv_func_working_strerror = no; then
-       dnl The system's strerror() fails to return a string for out-of-range
-       dnl integers. Replace it.
-       REPLACE_STRERROR=1
-     fi
--    m4_ifdef([gl_FUNC_STRERROR_R_WORKS], [
--      dnl If the system's strerror_r or __xpg_strerror_r clobbers strerror's
--      dnl buffer, we must replace strerror.
--      case "$gl_cv_func_strerror_r_works" in
--        *no) REPLACE_STRERROR=1 ;;
--      esac
--    ])
-   else
-     dnl The system's strerror() cannot know about the new errno values we add
--    dnl to <errno.h>, or any fix for strerror(0). Replace it.
-+    dnl to <errno.h>. Replace it.
-     REPLACE_STRERROR=1
-   fi
-+  if test $REPLACE_STRERROR = 1; then
-+    gl_PREREQ_STRERROR
-+  fi
- ])
- 
--dnl Detect if strerror(0) passes (that is, does not set errno, and does not
--dnl return a string that matches strerror(-1)).
--AC_DEFUN([gl_FUNC_STRERROR_0],
--[
--  REPLACE_STRERROR_0=0
--  AC_CACHE_CHECK([whether strerror(0) succeeds],
--   [gl_cv_func_strerror_0_works],
--   [AC_RUN_IFELSE(
--      [AC_LANG_PROGRAM(
--         [[#include <string.h>
--           #include <errno.h>
--         ]],
--         [[int result = 0;
--           char *str;
--           errno = 0;
--           str = strerror (0);
--           if (!*str) result |= 1;
--           if (errno) result |= 2;
--           if (strstr (str, "nknown") || strstr (str, "ndefined"))
--             result |= 4;
--           return result;]])],
--      [gl_cv_func_strerror_0_works=yes],
--      [gl_cv_func_strerror_0_works=no],
--      [dnl Be pessimistic on cross-compiles for now.
--       gl_cv_func_strerror_0_works="guessing no"])
--  ])
--  if test "$gl_cv_func_strerror_0_works" != yes; then
--    REPLACE_STRERROR_0=1
--    AC_DEFINE([REPLACE_STRERROR_0], [1], [Define to 1 if strerror(0)
--      does not return a message implying success.])
-+# Prerequisites of lib/strerror.c.
-+AC_DEFUN([gl_PREREQ_STRERROR], [
-+  AC_CHECK_DECLS([strerror])
-+  AC_CHECK_HEADERS_ONCE([sys/socket.h])
-+  if test $ac_cv_header_sys_socket_h != yes; then
-+    dnl We cannot use AC_CHECK_HEADERS_ONCE here, because that would make
-+    dnl the check for those headers unconditional; yet cygwin reports
-+    dnl that the headers are present but cannot be compiled (since on
-+    dnl cygwin, all socket information should come from sys/socket.h).
-+    AC_CHECK_HEADERS([winsock2.h])
-   fi
- ])
-diff -Naurp libiconv-1.14.org//srcm4/string_h.m4 libiconv-1.14/srcm4/string_h.m4
---- libiconv-1.14.org//srcm4/string_h.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/string_h.m4	2012-01-08 02:07:40.894484485 -0800
-@@ -5,7 +5,7 @@
- # gives unlimited permission to copy and/or distribute it,
- # with or without modifications, as long as this notice is preserved.
- 
--# serial 21
-+# serial 20
- 
- # Written by Paul Eggert.
- 
-@@ -27,9 +27,9 @@ AC_DEFUN([gl_HEADER_STRING_H_BODY],
-   dnl guaranteed by C89.
-   gl_WARN_ON_USE_PREPARE([[#include <string.h>
-     ]],
--    [ffsl ffsll memmem mempcpy memrchr rawmemchr stpcpy stpncpy strchrnul
--     strdup strncat strndup strnlen strpbrk strsep strcasestr strtok_r
--     strerror_r strsignal strverscmp])
-+    [memmem mempcpy memrchr rawmemchr stpcpy stpncpy strchrnul strdup
-+     strncat strndup strnlen strpbrk strsep strcasestr strtok_r strerror_r
-+     strsignal strverscmp])
- ])
- 
- AC_DEFUN([gl_STRING_MODULE_INDICATOR],
-@@ -43,8 +43,6 @@ AC_DEFUN([gl_STRING_MODULE_INDICATOR],
- 
- AC_DEFUN([gl_HEADER_STRING_H_DEFAULTS],
- [
--  GNULIB_FFSL=0;        AC_SUBST([GNULIB_FFSL])
--  GNULIB_FFSLL=0;       AC_SUBST([GNULIB_FFSLL])
-   GNULIB_MEMCHR=0;      AC_SUBST([GNULIB_MEMCHR])
-   GNULIB_MEMMEM=0;      AC_SUBST([GNULIB_MEMMEM])
-   GNULIB_MEMPCPY=0;     AC_SUBST([GNULIB_MEMPCPY])
-@@ -82,8 +80,6 @@ AC_DEFUN([gl_HEADER_STRING_H_DEFAULTS],
-   GNULIB_STRVERSCMP=0;  AC_SUBST([GNULIB_STRVERSCMP])
-   HAVE_MBSLEN=0;        AC_SUBST([HAVE_MBSLEN])
-   dnl Assume proper GNU behavior unless another module says otherwise.
--  HAVE_FFSL=1;                  AC_SUBST([HAVE_FFSL])
--  HAVE_FFSLL=1;                 AC_SUBST([HAVE_FFSLL])
-   HAVE_MEMCHR=1;                AC_SUBST([HAVE_MEMCHR])
-   HAVE_DECL_MEMMEM=1;           AC_SUBST([HAVE_DECL_MEMMEM])
-   HAVE_MEMPCPY=1;               AC_SUBST([HAVE_MEMPCPY])
-diff -Naurp libiconv-1.14.org//srcm4/sys_socket_h.m4 libiconv-1.14/srcm4/sys_socket_h.m4
---- libiconv-1.14.org//srcm4/sys_socket_h.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/sys_socket_h.m4	1969-12-31 16:00:00.000000000 -0800
-@@ -1,177 +0,0 @@
--# sys_socket_h.m4 serial 22
--dnl Copyright (C) 2005-2011 Free Software Foundation, Inc.
--dnl This file is free software; the Free Software Foundation
--dnl gives unlimited permission to copy and/or distribute it,
--dnl with or without modifications, as long as this notice is preserved.
--
--dnl From Simon Josefsson.
--
--AC_DEFUN([gl_HEADER_SYS_SOCKET],
--[
--  AC_REQUIRE([gl_SYS_SOCKET_H_DEFAULTS])
--  AC_REQUIRE([AC_CANONICAL_HOST])
--  AC_REQUIRE([AC_C_INLINE])
--
--  dnl On OSF/1, the functions recv(), send(), recvfrom(), sendto() have
--  dnl old-style declarations (with return type 'int' instead of 'ssize_t')
--  dnl unless _POSIX_PII_SOCKET is defined.
--  case "$host_os" in
--    osf*)
--      AC_DEFINE([_POSIX_PII_SOCKET], [1],
--        [Define to 1 in order to get the POSIX compatible declarations
--         of socket functions.])
--      ;;
--  esac
--
--  AC_CACHE_CHECK([whether <sys/socket.h> is self-contained],
--    [gl_cv_header_sys_socket_h_selfcontained],
--    [
--      AC_COMPILE_IFELSE([AC_LANG_PROGRAM([[#include <sys/socket.h>]], [[]])],
--        [gl_cv_header_sys_socket_h_selfcontained=yes],
--        [gl_cv_header_sys_socket_h_selfcontained=no])
--    ])
--  if test $gl_cv_header_sys_socket_h_selfcontained = yes; then
--    dnl If the shutdown function exists, <sys/socket.h> should define
--    dnl SHUT_RD, SHUT_WR, SHUT_RDWR.
--    AC_CHECK_FUNCS([shutdown])
--    if test $ac_cv_func_shutdown = yes; then
--      AC_CACHE_CHECK([whether <sys/socket.h> defines the SHUT_* macros],
--        [gl_cv_header_sys_socket_h_shut],
--        [
--          AC_COMPILE_IFELSE(
--            [AC_LANG_PROGRAM([[#include <sys/socket.h>]],
--               [[int a[] = { SHUT_RD, SHUT_WR, SHUT_RDWR };]])],
--            [gl_cv_header_sys_socket_h_shut=yes],
--            [gl_cv_header_sys_socket_h_shut=no])
--        ])
--      if test $gl_cv_header_sys_socket_h_shut = no; then
--        SYS_SOCKET_H='sys/socket.h'
--      fi
--    fi
--  fi
--  # We need to check for ws2tcpip.h now.
--  gl_PREREQ_SYS_H_SOCKET
--  AC_CHECK_TYPES([struct sockaddr_storage, sa_family_t],,,[
--  /* sys/types.h is not needed according to POSIX, but the
--     sys/socket.h in i386-unknown-freebsd4.10 and
--     powerpc-apple-darwin5.5 required it. */
--#include <sys/types.h>
--#ifdef HAVE_SYS_SOCKET_H
--#include <sys/socket.h>
--#endif
--#ifdef HAVE_WS2TCPIP_H
--#include <ws2tcpip.h>
--#endif
--])
--  if test $ac_cv_type_struct_sockaddr_storage = no; then
--    HAVE_STRUCT_SOCKADDR_STORAGE=0
--  fi
--  if test $ac_cv_type_sa_family_t = no; then
--    HAVE_SA_FAMILY_T=0
--  fi
--  if test $ac_cv_type_struct_sockaddr_storage != no; then
--    AC_CHECK_MEMBERS([struct sockaddr_storage.ss_family],
--      [],
--      [HAVE_STRUCT_SOCKADDR_STORAGE_SS_FAMILY=0],
--      [#include <sys/types.h>
--       #ifdef HAVE_SYS_SOCKET_H
--       #include <sys/socket.h>
--       #endif
--       #ifdef HAVE_WS2TCPIP_H
--       #include <ws2tcpip.h>
--       #endif
--      ])
--  fi
--  if test $HAVE_STRUCT_SOCKADDR_STORAGE = 0 || test $HAVE_SA_FAMILY_T = 0 \
--     || test $HAVE_STRUCT_SOCKADDR_STORAGE_SS_FAMILY = 0; then
--    SYS_SOCKET_H='sys/socket.h'
--  fi
--  gl_PREREQ_SYS_H_WINSOCK2
--
--  dnl Check for declarations of anything we want to poison if the
--  dnl corresponding gnulib module is not in use.
--  gl_WARN_ON_USE_PREPARE([[
--/* Some systems require prerequisite headers.  */
--#include <sys/types.h>
--#include <sys/socket.h>
--    ]], [socket connect accept bind getpeername getsockname getsockopt
--    listen recv send recvfrom sendto setsockopt shutdown accept4])
--])
--
--AC_DEFUN([gl_PREREQ_SYS_H_SOCKET],
--[
--  dnl Check prerequisites of the <sys/socket.h> replacement.
--  AC_REQUIRE([gl_CHECK_SOCKET_HEADERS])
--  gl_CHECK_NEXT_HEADERS([sys/socket.h])
--  if test $ac_cv_header_sys_socket_h = yes; then
--    HAVE_SYS_SOCKET_H=1
--    HAVE_WS2TCPIP_H=0
--  else
--    HAVE_SYS_SOCKET_H=0
--    if test $ac_cv_header_ws2tcpip_h = yes; then
--      HAVE_WS2TCPIP_H=1
--    else
--      HAVE_WS2TCPIP_H=0
--    fi
--  fi
--  AC_SUBST([HAVE_SYS_SOCKET_H])
--  AC_SUBST([HAVE_WS2TCPIP_H])
--])
--
--# Common prerequisites of the <sys/socket.h> replacement and of the
--# <sys/select.h> replacement.
--# Sets and substitutes HAVE_WINSOCK2_H.
--AC_DEFUN([gl_PREREQ_SYS_H_WINSOCK2],
--[
--  m4_ifdef([gl_UNISTD_H_DEFAULTS], [AC_REQUIRE([gl_UNISTD_H_DEFAULTS])])
--  m4_ifdef([gl_SYS_IOCTL_H_DEFAULTS], [AC_REQUIRE([gl_SYS_IOCTL_H_DEFAULTS])])
--  AC_CHECK_HEADERS_ONCE([sys/socket.h])
--  if test $ac_cv_header_sys_socket_h != yes; then
--    dnl We cannot use AC_CHECK_HEADERS_ONCE here, because that would make
--    dnl the check for those headers unconditional; yet cygwin reports
--    dnl that the headers are present but cannot be compiled (since on
--    dnl cygwin, all socket information should come from sys/socket.h).
--    AC_CHECK_HEADERS([winsock2.h])
--  fi
--  if test "$ac_cv_header_winsock2_h" = yes; then
--    HAVE_WINSOCK2_H=1
--    UNISTD_H_HAVE_WINSOCK2_H=1
--    SYS_IOCTL_H_HAVE_WINSOCK2_H=1
--  else
--    HAVE_WINSOCK2_H=0
--  fi
--  AC_SUBST([HAVE_WINSOCK2_H])
--])
--
--AC_DEFUN([gl_SYS_SOCKET_MODULE_INDICATOR],
--[
--  dnl Use AC_REQUIRE here, so that the default settings are expanded once only.
--  AC_REQUIRE([gl_SYS_SOCKET_H_DEFAULTS])
--  gl_MODULE_INDICATOR_SET_VARIABLE([$1])
--  dnl Define it also as a C macro, for the benefit of the unit tests.
--  gl_MODULE_INDICATOR_FOR_TESTS([$1])
--])
--
--AC_DEFUN([gl_SYS_SOCKET_H_DEFAULTS],
--[
--  GNULIB_SOCKET=0;      AC_SUBST([GNULIB_SOCKET])
--  GNULIB_CONNECT=0;     AC_SUBST([GNULIB_CONNECT])
--  GNULIB_ACCEPT=0;      AC_SUBST([GNULIB_ACCEPT])
--  GNULIB_BIND=0;        AC_SUBST([GNULIB_BIND])
--  GNULIB_GETPEERNAME=0; AC_SUBST([GNULIB_GETPEERNAME])
--  GNULIB_GETSOCKNAME=0; AC_SUBST([GNULIB_GETSOCKNAME])
--  GNULIB_GETSOCKOPT=0;  AC_SUBST([GNULIB_GETSOCKOPT])
--  GNULIB_LISTEN=0;      AC_SUBST([GNULIB_LISTEN])
--  GNULIB_RECV=0;        AC_SUBST([GNULIB_RECV])
--  GNULIB_SEND=0;        AC_SUBST([GNULIB_SEND])
--  GNULIB_RECVFROM=0;    AC_SUBST([GNULIB_RECVFROM])
--  GNULIB_SENDTO=0;      AC_SUBST([GNULIB_SENDTO])
--  GNULIB_SETSOCKOPT=0;  AC_SUBST([GNULIB_SETSOCKOPT])
--  GNULIB_SHUTDOWN=0;    AC_SUBST([GNULIB_SHUTDOWN])
--  GNULIB_ACCEPT4=0;     AC_SUBST([GNULIB_ACCEPT4])
--  HAVE_STRUCT_SOCKADDR_STORAGE=1; AC_SUBST([HAVE_STRUCT_SOCKADDR_STORAGE])
--  HAVE_STRUCT_SOCKADDR_STORAGE_SS_FAMILY=1;
--                        AC_SUBST([HAVE_STRUCT_SOCKADDR_STORAGE_SS_FAMILY])
--  HAVE_SA_FAMILY_T=1;   AC_SUBST([HAVE_SA_FAMILY_T])
--  HAVE_ACCEPT4=1;       AC_SUBST([HAVE_ACCEPT4])
--])
-diff -Naurp libiconv-1.14.org//srcm4/warn-on-use.m4 libiconv-1.14/srcm4/warn-on-use.m4
---- libiconv-1.14.org//srcm4/warn-on-use.m4	2011-08-07 06:42:07.000000000 -0700
-+++ libiconv-1.14/srcm4/warn-on-use.m4	2012-01-08 02:07:40.934484487 -0800
-@@ -1,4 +1,4 @@
--# warn-on-use.m4 serial 4
-+# warn-on-use.m4 serial 2
- dnl Copyright (C) 2010-2011 Free Software Foundation, Inc.
- dnl This file is free software; the Free Software Foundation
- dnl gives unlimited permission to copy and/or distribute it,
-@@ -27,8 +27,6 @@ AC_DEFUN([gl_WARN_ON_USE_PREPARE],
-     [AH_TEMPLATE([HAVE_RAW_DECL_]AS_TR_CPP(m4_defn([gl_decl])),
-       [Define to 1 if ]m4_defn([gl_decl])[ is declared even after
-        undefining macros.])])dnl
--dnl FIXME: gl_Symbol must be used unquoted until we can assume
--dnl autoconf 2.64 or newer.
-   for gl_func in m4_flatten([$2]); do
-     AS_VAR_PUSHDEF([gl_Symbol], [gl_cv_have_raw_decl_$gl_func])dnl
-     AC_CACHE_CHECK([whether $gl_func is declared without a macro],
-@@ -37,8 +35,8 @@ dnl autoconf 2.64 or newer.
- [@%:@undef $gl_func
-   (void) $gl_func;])],
-         [AS_VAR_SET(gl_Symbol, [yes])], [AS_VAR_SET(gl_Symbol, [no])])])
--    AS_VAR_IF(gl_Symbol, [yes],
--      [AC_DEFINE_UNQUOTED(AS_TR_CPP([HAVE_RAW_DECL_$gl_func]), [1])
-+     AS_VAR_IF(gl_Symbol, [yes],
-+       [AC_DEFINE_UNQUOTED(AS_TR_CPP([HAVE_RAW_DECL_$gl_func]), [1])
-        dnl shortcut - if the raw declaration exists, then set a cache
-        dnl variable to allow skipping any later AC_CHECK_DECL efforts
-        eval ac_cv_have_decl_$gl_func=yes])
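
The deleted srcm4 patch above downgrades several gnulib macros (safe-read, setenv, signalblocking, strerror), all of which revolve around configure-time probes that decide whether a libc function must be replaced via AC_LIBOBJ. As a rough, illustrative sketch only (not taken from the patch), the runtime probe generated by gl_FUNC_STRERROR amounts to a tiny C program along these lines; its exit status tells configure whether strerror() copes with an out-of-range error number:

    /* Sketch of the gl_FUNC_STRERROR runtime probe shown above. The NULL
       check is an extra safety added for the example and is not in the
       original macro. Exit status 0 means strerror() is considered working. */
    #include <string.h>

    int main(void)
    {
        const char *msg = strerror(-2);   /* deliberately out-of-range value */
        return !(msg && *msg);            /* non-empty string => exit 0 */
    }

When the probe reports failure, REPLACE_STRERROR is set to 1 and the gnulib replacement is pulled in through AC_LIBOBJ([strerror]), as the hunk above shows.
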
diff --git a/import-layers/yocto-poky/meta/recipes-support/libiconv/libiconv-1.14/autoconf.patch b/import-layers/yocto-poky/meta/recipes-support/libiconv/libiconv-1.14/autoconf.patch
deleted file mode 100644
index 5d34ce7..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libiconv/libiconv-1.14/autoconf.patch
+++ /dev/null
@@ -1,50 +0,0 @@
-It adds the variables that are needed
-for autoconf 2.65 to reconfigure libiconv and delete the m4 macros
-directory. Its imported from OE.
-
-Upstream-Status: Pending
-
-Signed-off-by: Khem Raj <raj.khem@gmail.com>
-
-Index: libiconv-1.13.1/configure.ac
-===================================================================
---- libiconv-1.13.1.orig/configure.ac
-+++ libiconv-1.13.1/configure.ac
-@@ -23,7 +23,7 @@ AC_CONFIG_AUX_DIR([build-aux])
- AM_INIT_AUTOMAKE([libiconv], [1.13.1])
- AC_CONFIG_HEADERS([config.h lib/config.h])
- AC_PROG_MAKE_SET
--
-+AC_CONFIG_MACRO_DIR([m4])
- dnl           checks for basic programs
- 
- AC_PROG_CC
-Index: libiconv-1.13.1/libcharset/configure.ac
-===================================================================
---- libiconv-1.13.1.orig/libcharset/configure.ac
-+++ libiconv-1.13.1/libcharset/configure.ac
-@@ -16,17 +16,17 @@ dnl along with the GNU CHARSET Library;
- dnl write to the Free Software Foundation, Inc., 51 Franklin Street,
- dnl Fifth Floor, Boston, MA 02110-1301, USA.
- 
--AC_PREREQ([2.13])
-+AC_PREREQ(2.61)
-+AC_INIT([libcharset],[1.4] )
-+AC_CONFIG_SRCDIR([lib/localcharset.c])
- 
--PACKAGE=libcharset
--VERSION=1.4
--
--AC_INIT([lib/localcharset.c])
- AC_CONFIG_AUX_DIR([build-aux])
- AC_CONFIG_HEADER([config.h])
- AC_PROG_MAKE_SET
--AC_SUBST([PACKAGE])
--AC_SUBST([VERSION])
-+dnl AC_SUBST(PACKAGE)
-+dnl AC_SUBST(VERSION)
-+
-+AC_CONFIG_MACRO_DIR([m4])
- 
- dnl           checks for basic programs
- 
diff --git a/import-layers/yocto-poky/meta/recipes-support/libiconv/libiconv_1.14.bb b/import-layers/yocto-poky/meta/recipes-support/libiconv/libiconv_1.14.bb
deleted file mode 100644
index 9fd5114..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libiconv/libiconv_1.14.bb
+++ /dev/null
@@ -1,51 +0,0 @@
-SUMMARY = "Character encoding support library"
-DESCRIPTION = "GNU libiconv - libiconv is for you if your application needs to support \
-multiple character encodings, but that support lacks from your system."
-HOMEPAGE = "http://www.gnu.org/software/libiconv"
-SECTION = "libs"
-NOTES = "Needs to be stripped down to: ascii iso8859-1 eucjp iso-2022jp gb utf8"
-PROVIDES = "virtual/libiconv"
-PR = "r1"
-LICENSE = "LGPLv3"
-LIC_FILES_CHKSUM = "file://COPYING.LIB;md5=9f604d8a4f8e74f4f5140845a21b6674 \
-                    file://libcharset/COPYING.LIB;md5=9f604d8a4f8e74f4f5140845a21b6674"
-
-SRC_URI = "${GNU_MIRROR}/${BPN}/${BPN}-${PV}.tar.gz \
-           file://autoconf.patch \
-           file://add-relocatable-module.patch \
-          "
-
-SRC_URI[md5sum] = "e34509b1623cec449dfeb73d7ce9c6c6"
-SRC_URI[sha256sum] = "72b24ded17d687193c3366d0ebe7cde1e6b18f0df8c55438ac95be39e8a30613"
-
-S = "${WORKDIR}/libiconv-${PV}"
-
-inherit autotools pkgconfig gettext
-
-python __anonymous() {
-    if d.getVar("TARGET_OS") != "linux":
-        return
-    if d.getVar("TCLIBC") == "glibc":
-        raise bb.parse.SkipPackage("libiconv is provided for use with uClibc only - glibc already provides iconv")
-}
-
-EXTRA_OECONF += "--enable-shared --enable-static --enable-relocatable"
-
-LEAD_SONAME = "libiconv.so"
-
-do_configure_prepend () {
-	rm -f ${S}/m4/libtool.m4 ${S}/m4/ltoptions.m4 ${S}/m4/ltsugar.m4 ${S}/m4/ltversion.m4 ${S}/m4/lt~obsolete.m4 ${S}/libcharset/m4/libtool.m4 ${S}/libcharset/m4/ltoptions.m4 ${S}/libcharset/m4/ltsugar.m4 ${S}/libcharset/m4/ltversion.m4 ${S}/libcharset/m4/lt~obsolete.m4
-}
-
-do_configure_append () {
-        # forcibly remove RPATH from libtool
-        sed -i 's|^hardcode_libdir_flag_spec=.*|hardcode_libdir_flag_spec=""|g' *libtool
-        sed -i 's|^runpath_var=LD_RUN_PATH|runpath_var=_NO_RPATH_|g' *libtool
-}
-
-do_install_append () {
-	rm -rf ${D}${libdir}/preloadable_libiconv.so
-	rm -rf ${D}${libdir}/charset.alias
-}
-
-BBCLASSEXTEND = "nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre2/libpcre2-CVE-2017-7186.patch b/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre2/libpcre2-CVE-2017-7186.patch
new file mode 100644
index 0000000..bfa3bfe
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre2/libpcre2-CVE-2017-7186.patch
@@ -0,0 +1,96 @@
+libpcre2-10.23: Fix CVE-2017-7186
+
+A fuzz on libpcre1 through the pcretest utility revealed an invalid read in the
+library. For those interested in a detailed description of the bug, the
+upstream feedback follows:
+
+This was a genuine bug in the 32-bit library. Thanks for finding it. The crash
+was caused by trying to find a Unicode property for a code value greater than
+0x10ffff, the Unicode maximum, when running in non-UTF mode (where character
+values can be up to 0xffffffff).
+
+The complete ASan output:
+
+# pcretest -32 -d $FILE
+==14788==ERROR: AddressSanitizer: SEGV on unknown address 0x7f1bbffed4df (pc 0x7f1bbee3fe6b bp 0x7fff8b50d8c0 sp 0x7fff8b50d3a0 T0)
+==14788==The signal is caused by a READ memory access.
+    #0 0x7f1bbee3fe6a in match /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_exec.c:5473:18
+    #1 0x7f1bbee09226 in pcre32_exec /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_exec.c:6936:8
+    #2 0x527d6c in main /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcretest.c:5218:9
+    #3 0x7f1bbddd678f in __libc_start_main /tmp/portage/sys-libs/glibc-2.23-r3/work/glibc-2.23/csu/../csu/libc-start.c:289
+    #4 0x41b438 in _init (/usr/bin/pcretest+0x41b438)
+
+AddressSanitizer can not provide additional info.
+SUMMARY: AddressSanitizer: SEGV /tmp/portage/dev-libs/libpcre-8.40/work/pcre-8.40/pcre_exec.c:5473:18 in match
+==14788==ABORTING
+
+Upstream-Status: Backport [https://vcs.pcre.org/pcre2/code/trunk/src/pcre2_ucd.c?view=patch&r1=316&r2=670&sortby=date \
+                        https://vcs.pcre.org/pcre2/code/trunk/src/pcre2_internal.h?view=patch&r1=600&r2=670&sortby=date]
+CVE: CVE-2017-7186
+
+Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
+
+--- trunk/src/pcre2_ucd.c  2015/07/17 15:44:51 316
++++ trunk/src/pcre2_ucd.c  2017/02/24 18:25:32 670
+@@ -41,6 +41,20 @@
+ 
+ const char *PRIV(unicode_version) = "8.0.0";
+ 
++/* If the 32-bit library is run in non-32-bit mode, character values
++greater than 0x10ffff may be encountered. For these we set up a
++special record. */
++
++#if PCRE2_CODE_UNIT_WIDTH == 32
++const ucd_record PRIV(dummy_ucd_record)[] = {{
++  ucp_Common,    /* script */
++  ucp_Cn,        /* type unassigned */
++  ucp_gbOther,   /* grapheme break property */
++  0,             /* case set */
++  0,             /* other case */
++  }};
++#endif
++
+ /* When recompiling tables with a new Unicode version, please check the
+ types in this structure definition from pcre2_internal.h (the actual
+ field names will be different):
+--- trunk/src/pcre2_internal.h 2016/11/19 12:46:24 600
++++ trunk/src/pcre2_internal.h 2017/02/24 18:25:32 670
+@@ -1774,10 +1774,17 @@
+ /* UCD access macros */
+ 
+ #define UCD_BLOCK_SIZE 128
+-#define GET_UCD(ch) (PRIV(ucd_records) + \
++#define REAL_GET_UCD(ch) (PRIV(ucd_records) + \
+         PRIV(ucd_stage2)[PRIV(ucd_stage1)[(int)(ch) / UCD_BLOCK_SIZE] * \
+         UCD_BLOCK_SIZE + (int)(ch) % UCD_BLOCK_SIZE])
+ 
++#if PCRE2_CODE_UNIT_WIDTH == 32
++#define GET_UCD(ch) ((ch > MAX_UTF_CODE_POINT)? \
++  PRIV(dummy_ucd_record) : REAL_GET_UCD(ch))
++#else
++#define GET_UCD(ch) REAL_GET_UCD(ch)
++#endif
++
+ #define UCD_CHARTYPE(ch)    GET_UCD(ch)->chartype
+ #define UCD_SCRIPT(ch)      GET_UCD(ch)->script
+ #define UCD_CATEGORY(ch)    PRIV(ucp_gentype)[UCD_CHARTYPE(ch)]
+@@ -1834,6 +1841,9 @@
+ #define _pcre2_default_compile_context PCRE2_SUFFIX(_pcre2_default_compile_context_)
+ #define _pcre2_default_match_context   PCRE2_SUFFIX(_pcre2_default_match_context_)
+ #define _pcre2_default_tables          PCRE2_SUFFIX(_pcre2_default_tables_)
++#if PCRE2_CODE_UNIT_WIDTH == 32
++#define _pcre2_dummy_ucd_record        PCRE2_SUFFIX(_pcre2_dummy_ucd_record_)
++#endif
+ #define _pcre2_hspace_list             PCRE2_SUFFIX(_pcre2_hspace_list_)
+ #define _pcre2_vspace_list             PCRE2_SUFFIX(_pcre2_vspace_list_)
+ #define _pcre2_ucd_caseless_sets       PCRE2_SUFFIX(_pcre2_ucd_caseless_sets_)
+@@ -1858,6 +1868,9 @@
+ extern const uint32_t                  PRIV(vspace_list)[];
+ extern const uint32_t                  PRIV(ucd_caseless_sets)[];
+ extern const ucd_record                PRIV(ucd_records)[];
++#if PCRE2_CODE_UNIT_WIDTH == 32
++extern const ucd_record                PRIV(dummy_ucd_record)[];
++#endif
+ extern const uint8_t                   PRIV(ucd_stage1)[];
+ extern const uint16_t                  PRIV(ucd_stage2)[];
+ extern const uint32_t                  PRIV(ucp_gbtable)[];
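
The backported fix above boils down to one guard: in the 32-bit library running in non-UTF mode, a character value above the Unicode maximum must not be used to index the UCD stage tables, so GET_UCD redirects it to a dummy record. A self-contained C sketch of that pattern follows; the names (MAX_CODE_POINT, code_record, lookup and the stand-in tables) are invented for illustration and are not PCRE2's internals:

    /* Illustrative sketch of the CVE-2017-7186 guard: values above the
       Unicode maximum get a harmless dummy record instead of an
       out-of-bounds table lookup. Identifiers are invented for the
       example and are not PCRE2's own. */
    #include <stdint.h>
    #include <stdio.h>

    #define MAX_CODE_POINT 0x10ffffu                /* Unicode maximum */

    typedef struct { int script; int chartype; } code_record;

    static const code_record records[2] = { {1, 2}, {3, 4} }; /* stand-in tables */
    static const code_record dummy_record = { 0, 0 };         /* "unassigned" */

    static const code_record *lookup(uint32_t ch)
    {
        if (ch > MAX_CODE_POINT)        /* the guard the patch introduces */
            return &dummy_record;
        return &records[ch % 2];        /* placeholder for the staged lookup */
    }

    int main(void)
    {
        printf("%d\n", lookup(0xffffffffu)->chartype);  /* prints 0, no crash */
        return 0;
    }

In the real code the same effect is achieved without touching every call site: the old macro is renamed to REAL_GET_UCD and GET_UCD becomes the guarded wrapper only when PCRE2_CODE_UNIT_WIDTH is 32.
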
diff --git a/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre2/libpcre2-CVE-2017-8786.patch b/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre2/libpcre2-CVE-2017-8786.patch
new file mode 100644
index 0000000..eafafc1
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre2/libpcre2-CVE-2017-8786.patch
@@ -0,0 +1,93 @@
+libpcre2-10.23: Fix CVE-2017-8786
+
+The pcre2test.c in PCRE2 10.23 allows remote attackers to cause a denial of
+service (heap-based buffer overflow) or possibly have unspecified other impact
+via a crafted regular expression.
+
+Upstream-Status: Backport [https://vcs.pcre.org/pcre2/code/trunk/src/pcre2test.c?r1=692&r2=697&view=patch]
+CVE: CVE-2017-8786
+
+Signed-off-by: Robert Yang <liezhi.yang@windriver.com>
+
+--- trunk/src/pcre2test.c  2017/03/21 16:18:54 692
++++ trunk/src/pcre2test.c  2017/03/21 18:36:13 697
+@@ -1017,9 +1017,9 @@
+   if (test_mode == PCRE8_MODE) \
+     r = pcre2_get_error_message_8(a,G(b,8),G(G(b,8),_size)); \
+   else if (test_mode == PCRE16_MODE) \
+-    r = pcre2_get_error_message_16(a,G(b,16),G(G(b,16),_size)); \
++    r = pcre2_get_error_message_16(a,G(b,16),G(G(b,16),_size/2)); \
+   else \
+-    r = pcre2_get_error_message_32(a,G(b,32),G(G(b,32),_size))
++    r = pcre2_get_error_message_32(a,G(b,32),G(G(b,32),_size/4))
+ 
+ #define PCRE2_GET_OVECTOR_COUNT(a,b) \
+   if (test_mode == PCRE8_MODE) \
+@@ -1399,6 +1399,9 @@
+ 
+ /* ----- Common macros for two-mode cases ----- */
+ 
++#define BYTEONE (BITONE/8)
++#define BYTETWO (BITTWO/8)
++
+ #define CASTFLD(t,a,b) \
+   ((test_mode == G(G(PCRE,BITONE),_MODE))? (t)(G(a,BITONE)->b) : \
+     (t)(G(a,BITTWO)->b))
+@@ -1481,9 +1484,9 @@
+ 
+ #define PCRE2_GET_ERROR_MESSAGE(r,a,b) \
+   if (test_mode == G(G(PCRE,BITONE),_MODE)) \
+-    r = G(pcre2_get_error_message_,BITONE)(a,G(b,BITONE),G(G(b,BITONE),_size)); \
++    r = G(pcre2_get_error_message_,BITONE)(a,G(b,BITONE),G(G(b,BITONE),_size/BYTEONE)); \
+   else \
+-    r = G(pcre2_get_error_message_,BITTWO)(a,G(b,BITTWO),G(G(b,BITTWO),_size))
++    r = G(pcre2_get_error_message_,BITTWO)(a,G(b,BITTWO),G(G(b,BITTWO),_size/BYTETWO))
+ 
+ #define PCRE2_GET_OVECTOR_COUNT(a,b) \
+   if (test_mode == G(G(PCRE,BITONE),_MODE)) \
+@@ -1904,7 +1907,7 @@
+ #define PCRE2_DFA_MATCH(a,b,c,d,e,f,g,h,i,j) \
+   a = pcre2_dfa_match_16(G(b,16),(PCRE2_SPTR16)c,d,e,f,G(g,16),h,i,j)
+ #define PCRE2_GET_ERROR_MESSAGE(r,a,b) \
+-  r = pcre2_get_error_message_16(a,G(b,16),G(G(b,16),_size))
++  r = pcre2_get_error_message_16(a,G(b,16),G(G(b,16),_size/2))
+ #define PCRE2_GET_OVECTOR_COUNT(a,b) a = pcre2_get_ovector_count_16(G(b,16))
+ #define PCRE2_GET_STARTCHAR(a,b) a = pcre2_get_startchar_16(G(b,16))
+ #define PCRE2_JIT_COMPILE(r,a,b) r = pcre2_jit_compile_16(G(a,16),b)
+@@ -2000,7 +2003,7 @@
+ #define PCRE2_DFA_MATCH(a,b,c,d,e,f,g,h,i,j) \
+   a = pcre2_dfa_match_32(G(b,32),(PCRE2_SPTR32)c,d,e,f,G(g,32),h,i,j)
+ #define PCRE2_GET_ERROR_MESSAGE(r,a,b) \
+-  r = pcre2_get_error_message_32(a,G(b,32),G(G(b,32),_size))
++  r = pcre2_get_error_message_32(a,G(b,32),G(G(b,32),_size/4))
+ #define PCRE2_GET_OVECTOR_COUNT(a,b) a = pcre2_get_ovector_count_32(G(b,32))
+ #define PCRE2_GET_STARTCHAR(a,b) a = pcre2_get_startchar_32(G(b,32))
+ #define PCRE2_JIT_COMPILE(r,a,b) r = pcre2_jit_compile_32(G(a,32),b)
+@@ -2889,7 +2892,7 @@
+   {
+   if (pbuffer32 != NULL) free(pbuffer32);
+   pbuffer32_size = 4*len + 4;
+-  if (pbuffer32_size < 256) pbuffer32_size = 256;
++  if (pbuffer32_size < 512) pbuffer32_size = 512;
+   pbuffer32 = (uint32_t *)malloc(pbuffer32_size);
+   if (pbuffer32 == NULL)
+     {
+@@ -7600,7 +7603,8 @@
+   int errcode;
+   char *endptr;
+ 
+-/* Ensure the relevant non-8-bit buffer is available. */
++/* Ensure the relevant non-8-bit buffer is available. Ensure that it is at 
++least 128 code units, because it is used for retrieving error messages. */
+ 
+ #ifdef SUPPORT_PCRE2_16
+   if (test_mode == PCRE16_MODE)
+@@ -7620,7 +7624,7 @@
+ #ifdef SUPPORT_PCRE2_32
+   if (test_mode == PCRE32_MODE)
+     {
+-    pbuffer32_size = 256;
++    pbuffer32_size = 512;
+     pbuffer32 = (uint32_t *)malloc(pbuffer32_size);
+     if (pbuffer32 == NULL)
+       {
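
The size changes above all follow one pattern: pcre2test sizes its buffers in bytes, while pcre2_get_error_message() takes the buffer length in code units, so in 16- and 32-bit mode the byte count has to be divided by the code-unit width, and the buffer must hold at least 128 code units (hence the bump from 256 to 512 bytes for the 32-bit buffer). A small illustrative C sketch of that conversion, with made-up variable names:

    /* Sketch of the byte-count vs. code-unit-count distinction behind the
       CVE-2017-8786 fix. buffer_bytes and the units_* variables are
       illustrative, not pcre2test's own. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        size_t buffer_bytes = 512;                         /* allocation in bytes */
        size_t units_8  = buffer_bytes / sizeof(uint8_t);  /* 512 code units */
        size_t units_16 = buffer_bytes / sizeof(uint16_t); /* 256 code units */
        size_t units_32 = buffer_bytes / sizeof(uint32_t); /* 128 code units */

        /* Passing buffer_bytes where code units are expected overstates the
           32-bit buffer four-fold, which is the overflow the CVE describes. */
        printf("%zu %zu %zu\n", units_8, units_16, units_32);
        return 0;
    }
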
diff --git a/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre2_10.22.bb b/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre2_10.22.bb
deleted file mode 100644
index 0cf81c0..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre2_10.22.bb
+++ /dev/null
@@ -1,58 +0,0 @@
-DESCRIPTION = "There are two major versions of the PCRE library. The \
-newest version is PCRE2, which is a re-working of the original PCRE \
-library to provide an entirely new API. The original, very widely \
-deployed PCRE library's API and feature are stable, future releases \
- will be for bugfixes only. All new future features will be to PCRE2, \
-not the original PCRE 8.x series."
-SUMMARY = "Perl Compatible Regular Expressions version 2"
-HOMEPAGE = "http://www.pcre.org"
-SECTION = "devel"
-LICENSE = "BSD"
-LIC_FILES_CHKSUM = "file://LICENCE;md5=ab9633efd38d6f799398df2c248b5aec"
-
-SRC_URI = "https://ftp.pcre.org/pub/pcre/pcre2-${PV}.tar.bz2 \
-           file://pcre-cross.patch \
-"
-
-SRC_URI[md5sum] = "c0c02517938ee2b0d350d53edf450664"
-SRC_URI[sha256sum] = "b2b44619f4ac6c50ad74c2865fd56807571392496fae1c9ad7a70993d018f416"
-
-CVE_PRODUCT = "pcre2"
-
-S = "${WORKDIR}/pcre2-${PV}"
-
-PROVIDES += "pcre2"
-DEPENDS += "bzip2 zlib"
-
-BINCONFIG = "${bindir}/pcre2-config"
-
-inherit autotools binconfig-disabled
-
-EXTRA_OECONF = "\
-    --enable-newline-is-lf \
-    --enable-rebuild-chartables \
-    --with-link-size=2 \
-    --with-match-limit=10000000 \
-"
-
-# Set LINK_SIZE in BUILD_CFLAGS given that the autotools bbclass use it to
-# set CFLAGS_FOR_BUILD, required for the libpcre build.
-BUILD_CFLAGS =+ "-DLINK_SIZE=2 -I${B}/src"
-CFLAGS += "-D_REENTRANT"
-CXXFLAGS_append_powerpc = " -lstdc++"
-
-export CCLD_FOR_BUILD ="${BUILD_CCLD}"
-
-PACKAGES =+ "pcre2grep pcre2grep-doc pcre2test pcre2test-doc"
-
-SUMMARY_pcre2grep = "grep utility that uses perl 5 compatible regexes"
-SUMMARY_pcre2grep-doc = "grep utility that uses perl 5 compatible regexes - docs"
-SUMMARY_pcre2test = "program for testing Perl-comatible regular expressions"
-SUMMARY_pcre2test-doc = "program for testing Perl-comatible regular expressions - docs"
-
-FILES_pcre2grep = "${bindir}/pcre2grep"
-FILES_pcre2grep-doc = "${mandir}/man1/pcre2grep.1"
-FILES_pcre2test = "${bindir}/pcre2test"
-FILES_pcre2test-doc = "${mandir}/man1/pcre2test.1"
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre2_10.23.bb b/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre2_10.23.bb
new file mode 100644
index 0000000..ca2b028
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre2_10.23.bb
@@ -0,0 +1,60 @@
+DESCRIPTION = "There are two major versions of the PCRE library. The \
+newest version is PCRE2, which is a re-working of the original PCRE \
+library to provide an entirely new API. The original, very widely \
+deployed PCRE library's API and feature set are stable; future releases \
+ will be for bugfixes only. All new features will go into PCRE2, \
+not the original PCRE 8.x series."
+SUMMARY = "Perl Compatible Regular Expressions version 2"
+HOMEPAGE = "http://www.pcre.org"
+SECTION = "devel"
+LICENSE = "BSD"
+LIC_FILES_CHKSUM = "file://LICENCE;md5=3de34df49e1fe3c3b59a08dff214488b"
+
+SRC_URI = "https://ftp.pcre.org/pub/pcre/pcre2-${PV}.tar.bz2 \
+           file://pcre-cross.patch \
+           file://libpcre2-CVE-2017-8786.patch \
+           file://libpcre2-CVE-2017-7186.patch \
+"
+
+SRC_URI[md5sum] = "b2cd00ca7e24049040099b0a46bb3649"
+SRC_URI[sha256sum] = "dfc79b918771f02d33968bd34a749ad7487fa1014aeb787fad29dd392b78c56e"
+
+CVE_PRODUCT = "pcre2"
+
+S = "${WORKDIR}/pcre2-${PV}"
+
+PROVIDES += "pcre2"
+DEPENDS += "bzip2 zlib"
+
+BINCONFIG = "${bindir}/pcre2-config"
+
+inherit autotools binconfig-disabled
+
+EXTRA_OECONF = "\
+    --enable-newline-is-lf \
+    --enable-rebuild-chartables \
+    --with-link-size=2 \
+    --with-match-limit=10000000 \
+"
+
+# Set LINK_SIZE in BUILD_CFLAGS given that the autotools bbclass use it to
+# set CFLAGS_FOR_BUILD, required for the libpcre build.
+BUILD_CFLAGS =+ "-DLINK_SIZE=2 -I${B}/src"
+CFLAGS += "-D_REENTRANT"
+CXXFLAGS_append_powerpc = " -lstdc++"
+
+export CCLD_FOR_BUILD ="${BUILD_CCLD}"
+
+PACKAGES =+ "pcre2grep pcre2grep-doc pcre2test pcre2test-doc"
+
+SUMMARY_pcre2grep = "grep utility that uses perl 5 compatible regexes"
+SUMMARY_pcre2grep-doc = "grep utility that uses perl 5 compatible regexes - docs"
+SUMMARY_pcre2test = "program for testing Perl-compatible regular expressions"
+SUMMARY_pcre2test-doc = "program for testing Perl-compatible regular expressions - docs"
+
+FILES_pcre2grep = "${bindir}/pcre2grep"
+FILES_pcre2grep-doc = "${mandir}/man1/pcre2grep.1"
+FILES_pcre2test = "${bindir}/pcre2test"
+FILES_pcre2test-doc = "${mandir}/man1/pcre2test.1"
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre_8.40.bb b/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre_8.40.bb
deleted file mode 100644
index 8b6ab25..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre_8.40.bb
+++ /dev/null
@@ -1,83 +0,0 @@
-DESCRIPTION = "The PCRE library is a set of functions that implement regular \
-expression pattern matching using the same syntax and semantics as Perl 5. PCRE \
-has its own native API, as well as a set of wrapper functions that correspond \
-to the POSIX regular expression API."
-SUMMARY = "Perl Compatible Regular Expressions"
-HOMEPAGE = "http://www.pcre.org"
-SECTION = "devel"
-LICENSE = "BSD"
-LIC_FILES_CHKSUM = "file://LICENCE;md5=60da32d84d067f53e22071c4ecb4384d"
-SRC_URI = "ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-${PV}.tar.bz2 \
-           file://pcre-cross.patch \
-           file://fix-pcre-name-collision.patch \
-           file://run-ptest \
-           file://Makefile \
-"
-
-SRC_URI[md5sum] = "41a842bf7dcecd6634219336e2167d1d"
-SRC_URI[sha256sum] = "00e27a29ead4267e3de8111fcaa59b132d0533cdfdbdddf4b0604279acbcf4f4"
-
-CVE_PRODUCT = "pcre"
-
-S = "${WORKDIR}/pcre-${PV}"
-
-PROVIDES += "pcre"
-DEPENDS += "bzip2 zlib"
-
-PACKAGECONFIG ??= "pcre8 unicode-properties"
-
-PACKAGECONFIG[pcre8] = "--enable-pcre8,--disable-pcre8"
-PACKAGECONFIG[pcre16] = "--enable-pcre16,--disable-pcre16"
-PACKAGECONFIG[pcre32] = "--enable-pcre32,--disable-pcre32"
-PACKAGECONFIG[pcretest-readline] = "--enable-pcretest-libreadline,--disable-pcretest-libreadline,readline,"
-PACKAGECONFIG[unicode-properties] = "--enable-unicode-properties,--disable-unicode-properties"
-
-BINCONFIG = "${bindir}/pcre-config"
-
-inherit autotools binconfig-disabled ptest
-
-EXTRA_OECONF = "\
-    --enable-newline-is-lf \
-    --enable-rebuild-chartables \
-    --enable-utf \
-    --with-link-size=2 \
-    --with-match-limit=10000000 \
-"
-
-# Set LINK_SIZE in BUILD_CFLAGS given that the autotools bbclass use it to
-# set CFLAGS_FOR_BUILD, required for the libpcre build.
-BUILD_CFLAGS =+ "-DLINK_SIZE=2 -I${B}"
-CFLAGS += "-D_REENTRANT"
-CXXFLAGS_append_powerpc = " -lstdc++"
-
-export CCLD_FOR_BUILD ="${BUILD_CCLD}"
-
-PACKAGES =+ "libpcrecpp libpcreposix pcregrep pcregrep-doc pcretest pcretest-doc"
-
-SUMMARY_libpcrecpp = "${SUMMARY} - C++ wrapper functions"
-SUMMARY_libpcreposix = "${SUMMARY} - C wrapper functions based on the POSIX regex API"
-SUMMARY_pcregrep = "grep utility that uses perl 5 compatible regexes"
-SUMMARY_pcregrep-doc = "grep utility that uses perl 5 compatible regexes - docs"
-SUMMARY_pcretest = "program for testing Perl-comatible regular expressions"
-SUMMARY_pcretest-doc = "program for testing Perl-comatible regular expressions - docs"
-
-FILES_libpcrecpp = "${libdir}/libpcrecpp.so.*"
-FILES_libpcreposix = "${libdir}/libpcreposix.so.*"
-FILES_pcregrep = "${bindir}/pcregrep"
-FILES_pcregrep-doc = "${mandir}/man1/pcregrep.1"
-FILES_pcretest = "${bindir}/pcretest"
-FILES_pcretest-doc = "${mandir}/man1/pcretest.1"
-
-BBCLASSEXTEND = "native nativesdk"
-
-do_install_ptest() {
-	t=${D}${PTEST_PATH}
-	cp ${WORKDIR}/Makefile $t
-	cp -r ${S}/testdata $t
-	for i in pcre_stringpiece_unittest pcregrep pcretest; \
-	  do cp ${B}/.libs/$i $t; \
-	done
-	for i in RunTest RunGrepTest test-driver; \
-	  do cp ${S}/$i $t; \
-	done
-}
diff --git a/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre_8.41.bb b/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre_8.41.bb
new file mode 100644
index 0000000..0eaed18
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libpcre/libpcre_8.41.bb
@@ -0,0 +1,83 @@
+DESCRIPTION = "The PCRE library is a set of functions that implement regular \
+expression pattern matching using the same syntax and semantics as Perl 5. PCRE \
+has its own native API, as well as a set of wrapper functions that correspond \
+to the POSIX regular expression API."
+SUMMARY = "Perl Compatible Regular Expressions"
+HOMEPAGE = "http://www.pcre.org"
+SECTION = "devel"
+LICENSE = "BSD"
+LIC_FILES_CHKSUM = "file://LICENCE;md5=60da32d84d067f53e22071c4ecb4384d"
+SRC_URI = "https://ftp.pcre.org/pub/pcre/pcre-${PV}.tar.bz2 \
+           file://pcre-cross.patch \
+           file://fix-pcre-name-collision.patch \
+           file://run-ptest \
+           file://Makefile \
+"
+
+SRC_URI[md5sum] = "c160d22723b1670447341b08c58981c1"
+SRC_URI[sha256sum] = "e62c7eac5ae7c0e7286db61ff82912e1c0b7a0c13706616e94a7dd729321b530"
+
+CVE_PRODUCT = "pcre"
+
+S = "${WORKDIR}/pcre-${PV}"
+
+PROVIDES += "pcre"
+DEPENDS += "bzip2 zlib"
+
+PACKAGECONFIG ??= "pcre8 unicode-properties"
+
+PACKAGECONFIG[pcre8] = "--enable-pcre8,--disable-pcre8"
+PACKAGECONFIG[pcre16] = "--enable-pcre16,--disable-pcre16"
+PACKAGECONFIG[pcre32] = "--enable-pcre32,--disable-pcre32"
+PACKAGECONFIG[pcretest-readline] = "--enable-pcretest-libreadline,--disable-pcretest-libreadline,readline,"
+PACKAGECONFIG[unicode-properties] = "--enable-unicode-properties,--disable-unicode-properties"
+
+BINCONFIG = "${bindir}/pcre-config"
+
+inherit autotools binconfig-disabled ptest
+
+EXTRA_OECONF = "\
+    --enable-newline-is-lf \
+    --enable-rebuild-chartables \
+    --enable-utf \
+    --with-link-size=2 \
+    --with-match-limit=10000000 \
+"
+
+# Set LINK_SIZE in BUILD_CFLAGS given that the autotools bbclass use it to
+# set CFLAGS_FOR_BUILD, required for the libpcre build.
+BUILD_CFLAGS =+ "-DLINK_SIZE=2 -I${B}"
+CFLAGS += "-D_REENTRANT"
+CXXFLAGS_append_powerpc = " -lstdc++"
+
+export CCLD_FOR_BUILD ="${BUILD_CCLD}"
+
+PACKAGES =+ "libpcrecpp libpcreposix pcregrep pcregrep-doc pcretest pcretest-doc"
+
+SUMMARY_libpcrecpp = "${SUMMARY} - C++ wrapper functions"
+SUMMARY_libpcreposix = "${SUMMARY} - C wrapper functions based on the POSIX regex API"
+SUMMARY_pcregrep = "grep utility that uses perl 5 compatible regexes"
+SUMMARY_pcregrep-doc = "grep utility that uses perl 5 compatible regexes - docs"
+SUMMARY_pcretest = "program for testing Perl-compatible regular expressions"
+SUMMARY_pcretest-doc = "program for testing Perl-compatible regular expressions - docs"
+
+FILES_libpcrecpp = "${libdir}/libpcrecpp.so.*"
+FILES_libpcreposix = "${libdir}/libpcreposix.so.*"
+FILES_pcregrep = "${bindir}/pcregrep"
+FILES_pcregrep-doc = "${mandir}/man1/pcregrep.1"
+FILES_pcretest = "${bindir}/pcretest"
+FILES_pcretest-doc = "${mandir}/man1/pcretest.1"
+
+BBCLASSEXTEND = "native nativesdk"
+
+do_install_ptest() {
+	t=${D}${PTEST_PATH}
+	cp ${WORKDIR}/Makefile $t
+	cp -r ${S}/testdata $t
+	for i in pcre_stringpiece_unittest pcregrep pcretest; \
+	  do cp ${B}/.libs/$i $t; \
+	done
+	for i in RunTest RunGrepTest test-driver; \
+	  do cp ${S}/$i $t; \
+	done
+}
diff --git a/import-layers/yocto-poky/meta/recipes-support/libproxy/libproxy_0.4.14.bb b/import-layers/yocto-poky/meta/recipes-support/libproxy/libproxy_0.4.14.bb
index 62a6a13..4c39f52 100644
--- a/import-layers/yocto-poky/meta/recipes-support/libproxy/libproxy_0.4.14.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/libproxy/libproxy_0.4.14.bb
@@ -31,6 +31,7 @@
     -DLIB_INSTALL_DIR=${libdir} \
     -DLIBEXEC_INSTALL_DIR=${libexecdir} \
 "
+SECURITY_PIE_CFLAGS_remove = "-fPIE -pie"
 
 FILES_${PN} += "${libdir}/${BPN}/${PV}/modules"
 FILES_${PN}-dev += "${datadir}/cmake"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libsoup/libsoup-2.4_2.56.0.bb b/import-layers/yocto-poky/meta/recipes-support/libsoup/libsoup-2.4_2.56.0.bb
deleted file mode 100644
index a1f294e..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libsoup/libsoup-2.4_2.56.0.bb
+++ /dev/null
@@ -1,35 +0,0 @@
-SUMMARY = "An HTTP library implementation in C"
-HOMEPAGE = "https://wiki.gnome.org/Projects/libsoup"
-BUGTRACKER = "https://bugzilla.gnome.org/"
-SECTION = "x11/gnome/libs"
-LICENSE = "LGPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=5f30f0716dfdd0d91eb439ebec522ec2"
-
-DEPENDS = "glib-2.0 glib-2.0-native libxml2 sqlite3 intltool-native"
-
-SHRT_VER = "${@d.getVar('PV').split('.')[0]}.${@d.getVar('PV').split('.')[1]}"
-
-SRC_URI = "${GNOME_MIRROR}/libsoup/${SHRT_VER}/libsoup-${PV}.tar.xz"
-
-SRC_URI[md5sum] = "465083f74b7bb035959ddb0599313986"
-SRC_URI[sha256sum] = "d8216b71de8247bc6f274ec054c08547b2e04369c1f8add713e9350c8ef81fe5"
-
-S = "${WORKDIR}/libsoup-${PV}"
-
-inherit autotools gettext pkgconfig upstream-version-is-even gobject-introspection gtk-doc
-
-# libsoup-gnome is entirely deprecated and just stubs in 2.42 onwards. Disable by default.
-PACKAGECONFIG ??= ""
-PACKAGECONFIG[gnome] = "--with-gnome,--without-gnome"
-PACKAGECONFIG[gssapi] = "--with-gssapi,--without-gssapi,krb5"
-
-EXTRA_OECONF = "--disable-vala"
-
-# When built without gnome support, libsoup-2.4 will contain only one shared lib
-# and will therefore become subject to renaming by debian.bbclass. Prevent
-# renaming in order to keep the package name consistent regardless of whether
-# gnome support is enabled or disabled.
-DEBIAN_NOAUTONAME_${PN} = "1"
-
-# glib-networking is needed for SSL, proxies, etc.
-RRECOMMENDS_${PN} = "glib-networking"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libsoup/libsoup-2.4_2.58.2.bb b/import-layers/yocto-poky/meta/recipes-support/libsoup/libsoup-2.4_2.58.2.bb
new file mode 100644
index 0000000..c9f95e5
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libsoup/libsoup-2.4_2.58.2.bb
@@ -0,0 +1,35 @@
+SUMMARY = "An HTTP library implementation in C"
+HOMEPAGE = "https://wiki.gnome.org/Projects/libsoup"
+BUGTRACKER = "https://bugzilla.gnome.org/"
+SECTION = "x11/gnome/libs"
+LICENSE = "LGPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=5f30f0716dfdd0d91eb439ebec522ec2"
+
+DEPENDS = "glib-2.0 glib-2.0-native libxml2 sqlite3 intltool-native"
+
+SHRT_VER = "${@d.getVar('PV').split('.')[0]}.${@d.getVar('PV').split('.')[1]}"
+
+SRC_URI = "${GNOME_MIRROR}/libsoup/${SHRT_VER}/libsoup-${PV}.tar.xz"
+
+SRC_URI[md5sum] = "eb33adb459c2283efc5c7d09ccdbbcfc"
+SRC_URI[sha256sum] = "442300ca1b1bf8a3bbf2f788203287ff862542d4fc048f19a92a068a27d17b72"
+
+S = "${WORKDIR}/libsoup-${PV}"
+
+inherit autotools gettext pkgconfig upstream-version-is-even gobject-introspection gtk-doc
+
+# libsoup-gnome is entirely deprecated and just stubs in 2.42 onwards. Disable by default.
+PACKAGECONFIG ??= ""
+PACKAGECONFIG[gnome] = "--with-gnome,--without-gnome"
+PACKAGECONFIG[gssapi] = "--with-gssapi,--without-gssapi,krb5"
+
+EXTRA_OECONF = "--disable-vala"
+
+# When built without gnome support, libsoup-2.4 will contain only one shared lib
+# and will therefore become subject to renaming by debian.bbclass. Prevent
+# renaming in order to keep the package name consistent regardless of whether
+# gnome support is enabled or disabled.
+DEBIAN_NOAUTONAME_${PN} = "1"
+
+# glib-networking is needed for SSL, proxies, etc.
+RRECOMMENDS_${PN} = "glib-networking"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind.inc b/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind.inc
index c8ff836..ed32d19 100644
--- a/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind.inc
+++ b/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind.inc
@@ -2,23 +2,18 @@
 DESCRIPTION = "a portable and efficient C programming interface (API) to determine the call-chain of a program"
 HOMEPAGE = "http://www.nongnu.org/libunwind"
 LICENSE = "MIT"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=3fced11d6df719b47505837a51c16ae5"
+LIC_FILES_CHKSUM = "file://COPYING;beginline=3;md5=3fced11d6df719b47505837a51c16ae5"
 DEPENDS += "libatomic-ops"
 
 inherit autotools
 
 PACKAGECONFIG ??= ""
 PACKAGECONFIG[lzma] = "--enable-minidebuginfo,--disable-minidebuginfo,xz"
-PACKAGECONFIG[latexdocs] = "--enable-documentaiton, --disable-documentation, latex2man-native"
+PACKAGECONFIG[latexdocs] = "--enable-documentation, --disable-documentation, latex2man-native"
 
 EXTRA_OECONF_arm = "--enable-debug-frame"
 EXTRA_OECONF_aarch64 = "--enable-debug-frame"
 
-CFLAGS += "${ATOMICOPS}"
-ATOMICOPS_armv5 = "-DAO_USE_PTHREAD_DEFS=1"
-ATOMICOPS_armv4 = "-DAO_USE_PTHREAD_DEFS=1"
-ATOMICOPS ?= ""
-
 SECURITY_LDFLAGS_append_libc-musl = " -lssp_nonshared -lssp"
 
 BBCLASSEXTEND = "native"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind/fix-mips.patch b/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind/fix-mips.patch
new file mode 100644
index 0000000..0022237
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind/fix-mips.patch
@@ -0,0 +1,134 @@
+Upstream-Status: Backport
+Signed-off-by: Ross Burton <ross.burton@intel.com>
+
+From 5f354cb7b9c84dae006f0ebd8ad7a78d7e2aad0c Mon Sep 17 00:00:00 2001
+From: Dave Watson <davejwatson@fb.com>
+Date: Wed, 25 Jan 2017 16:18:02 -0800
+Subject: [PATCH] mips/tilegx: Add missing unwind_i.h header file
+
+reported-by: John Knight <John.Knight@belkin.com>
+---
+ src/Makefile.am | 4 ++--
+ 1 file changed, 2 insertions(+), 2 deletions(-)
+
+diff --git a/src/Makefile.am b/src/Makefile.am
+index 5d874755..7de4c425 100644
+--- a/src/Makefile.am
++++ b/src/Makefile.am
+@@ -280,7 +280,7 @@ libunwind_hppa_la_SOURCES_hppa = $(libunwind_la_SOURCES_hppa_common)	\
+ 	hppa/Gresume.c hppa/Gstep.c
+ 
+ # The list of files that go info libunwind and libunwind-mips:
+-noinst_HEADERS += mips/init.h mips/offsets.h
++noinst_HEADERS += mips/init.h mips/offsets.h mips/unwind_i.h
+ libunwind_la_SOURCES_mips_common = $(libunwind_la_SOURCES_common)	    \
+ 	mips/is_fpreg.c mips/regname.c
+ 
+@@ -299,7 +299,7 @@ libunwind_mips_la_SOURCES_mips = $(libunwind_la_SOURCES_mips_common)	    \
+ 	mips/Gis_signal_frame.c mips/Gregs.c mips/Gresume.c mips/Gstep.c
+ 
+ # The list of files that go info libunwind and libunwind-tilegx:
+-noinst_HEADERS += tilegx/init.h tilegx/offsets.h
++noinst_HEADERS += tilegx/init.h tilegx/offsets.h tilegx/unwind_i.h
+ libunwind_la_SOURCES_tilegx_common = $(libunwind_la_SOURCES_common)	    \
+ 	tilegx/is_fpreg.c tilegx/regname.c
+ 
+diff --git a/src/mips/unwind_i.h b/src/mips/unwind_i.h
+new file mode 100644
+index 0000000..3382dcf
+--- /dev/null
++++ b/src/mips/unwind_i.h
+@@ -0,0 +1,43 @@
++/* libunwind - a platform-independent unwind library
++   Copyright (C) 2008 CodeSourcery
++
++This file is part of libunwind.
++
++Permission is hereby granted, free of charge, to any person obtaining
++a copy of this software and associated documentation files (the
++"Software"), to deal in the Software without restriction, including
++without limitation the rights to use, copy, modify, merge, publish,
++distribute, sublicense, and/or sell copies of the Software, and to
++permit persons to whom the Software is furnished to do so, subject to
++the following conditions:
++
++The above copyright notice and this permission notice shall be
++included in all copies or substantial portions of the Software.
++
++THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
++EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
++MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
++NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
++LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
++OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
++WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.  */
++
++#ifndef unwind_i_h
++#define unwind_i_h
++
++#include <stdint.h>
++
++#include <libunwind-mips.h>
++
++#include "libunwind_i.h"
++
++#define mips_lock                       UNW_OBJ(lock)
++#define mips_local_resume               UNW_OBJ(local_resume)
++#define mips_local_addr_space_init      UNW_OBJ(local_addr_space_init)
++
++extern int mips_local_resume (unw_addr_space_t as, unw_cursor_t *cursor,
++                             void *arg);
++
++extern void mips_local_addr_space_init (void);
++
++#endif /* unwind_i_h */
+diff --git a/src/tilegx/unwind_i.h b/src/tilegx/unwind_i.h
+new file mode 100644
+index 0000000..aac7be3
+--- /dev/null
++++ b/src/tilegx/unwind_i.h
+@@ -0,0 +1,44 @@
++/* libunwind - a platform-independent unwind library
++   Copyright (C) 2008 CodeSourcery
++
++This file is part of libunwind.
++
++Permission is hereby granted, free of charge, to any person obtaining
++a copy of this software and associated documentation files (the
++"Software"), to deal in the Software without restriction, including
++without limitation the rights to use, copy, modify, merge, publish,
++distribute, sublicense, and/or sell copies of the Software, and to
++permit persons to whom the Software is furnished to do so, subject to
++the following conditions:
++
++The above copyright notice and this permission notice shall be
++included in all copies or substantial portions of the Software.
++
++THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
++EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
++MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
++NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
++LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
++OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
++WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.  */
++
++#ifndef unwind_i_h
++#define unwind_i_h
++
++#include <memory.h>
++#include <stdint.h>
++
++#include <libunwind-tilegx.h>
++
++#include "libunwind_i.h"
++
++#define tilegx_local_resume            UNW_OBJ(local_resume)
++#define tilegx_local_addr_space_init   UNW_OBJ(local_addr_space_init)
++
++extern int tilegx_local_resume (unw_addr_space_t as,
++                                unw_cursor_t *cursor,
++                                void *arg);
++
++extern void tilegx_local_addr_space_init (void);
++
++#endif /* unwind_i_h */
diff --git a/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind_1.2.bb b/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind_1.2.bb
new file mode 100644
index 0000000..c6312f2
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind_1.2.bb
@@ -0,0 +1,24 @@
+require libunwind.inc
+
+SRC_URI[md5sum] = "eefcb5d7f78fdc8f1ed172a26ea4202f"
+SRC_URI[sha256sum] = "1de38ffbdc88bd694d10081865871cd2bfbb02ad8ef9e1606aee18d65532b992"
+
+SRC_URI = "http://download.savannah.nongnu.org/releases/libunwind/libunwind-${PV}.tar.gz \
+           file://Add-AO_REQUIRE_CAS-to-fix-build-on-ARM-v6.patch \
+           file://0001-backtrace-Use-only-with-glibc-and-uclibc.patch \
+           file://0001-x86-Stub-out-x86_local_resume.patch \
+           file://0001-Fix-build-on-mips-musl.patch \
+           file://0001-add-knobs-to-disable-enable-tests.patch \
+           file://0001-ppc32-Consider-ucontext-mismatches-between-glibc-and.patch \
+           file://libunwind-1.1-x32.patch \
+           file://fix-mips.patch \
+           "
+
+SRC_URI_append_libc-musl = " file://musl-header-conflict.patch"
+EXTRA_OECONF_append_libc-musl = " --disable-documentation --disable-tests "
+
+# http://errors.yoctoproject.org/Errors/Details/20487/
+ARM_INSTRUCTION_SET_armv4 = "arm"
+ARM_INSTRUCTION_SET_armv5 = "arm"
+
+LDFLAGS += "-Wl,-z,relro,-z,now ${@bb.utils.contains('DISTRO_FEATURES', 'ld-is-gold', ' -fuse-ld=bfd ', '', d)}"
diff --git a/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind_git.bb b/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind_git.bb
deleted file mode 100644
index b637c5c..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/libunwind/libunwind_git.bb
+++ /dev/null
@@ -1,30 +0,0 @@
-require libunwind.inc
-
-PV = "1.1+git${SRCPV}"
-
-SRCREV = "bc8698fd7ed13a629a8ec3cb2a89bd74f9d8b5c0"
-
-SRC_URI = "git://git.sv.gnu.org/libunwind.git \
-           file://Add-AO_REQUIRE_CAS-to-fix-build-on-ARM-v6.patch \
-           file://0001-backtrace-Use-only-with-glibc-and-uclibc.patch \
-           file://0001-x86-Stub-out-x86_local_resume.patch \
-           file://0001-Fix-build-on-mips-musl.patch \
-           file://0001-add-knobs-to-disable-enable-tests.patch \
-           file://0001-ppc32-Consider-ucontext-mismatches-between-glibc-and.patch \
-           file://libunwind-1.1-x32.patch \
-           "
-
-SRC_URI_append_libc-musl = " file://musl-header-conflict.patch"
-EXTRA_OECONF_append_libc-musl = " --disable-documentation --disable-tests "
-
-# http://errors.yoctoproject.org/Errors/Details/20487/
-ARM_INSTRUCTION_SET_armv4 = "arm"
-ARM_INSTRUCTION_SET_armv5 = "arm"
-
-# see https://sourceware.org/bugzilla/show_bug.cgi?id=19987
-SECURITY_CFLAGS_remove_aarch64 = "-fpie"
-SECURITY_CFLAGS_append_aarch64 = " -fPIE"
-
-S = "${WORKDIR}/git"
-
-LDFLAGS += "-Wl,-z,relro,-z,now ${@bb.utils.contains('DISTRO_FEATURES', 'ld-is-gold', ' -fuse-ld=bfd ', '', d)}"
diff --git a/import-layers/yocto-poky/meta/recipes-support/liburcu/liburcu/0001-Support-for-NIOS2-architecture.patch b/import-layers/yocto-poky/meta/recipes-support/liburcu/liburcu/0001-Support-for-NIOS2-architecture.patch
deleted file mode 100644
index 6296238..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/liburcu/liburcu/0001-Support-for-NIOS2-architecture.patch
+++ /dev/null
@@ -1,145 +0,0 @@
-From f37c5b56376d9bb506da68bb11d0d7463e6e563b Mon Sep 17 00:00:00 2001
-From: Marek Vasut <marex@denx.de>
-Date: Tue, 9 Feb 2016 01:52:26 +0100
-Subject: [PATCH] Support for NIOS2 architecture
-
-Add support for the Altera NIOS2 CPU architecture. The atomic operations
-are handled by GCC. The memory barriers on this system are entirely
-trivial too, since the CPU does not support SMP at all.
-
-Signed-off-by: Marek Vasut <marex@denx.de>
-Upstream-Status: Backport [ http://git.lttng.org/?p=userspace-rcu.git;a=commit;h=859050b3088aa3f0cb59d7f51ce24b9a0f18faa5 ]
-
----
- LICENSE              |  1 +
- README.md            |  1 +
- configure.ac         |  1 +
- urcu/arch/nios2.h    | 40 ++++++++++++++++++++++++++++++++++++++++
- urcu/uatomic/nios2.h | 32 ++++++++++++++++++++++++++++++++
- 5 files changed, 75 insertions(+)
- create mode 100644 urcu/arch/nios2.h
- create mode 100644 urcu/uatomic/nios2.h
-
-diff --git a/LICENSE b/LICENSE
-index 3147094..a06fdcc 100644
---- a/LICENSE
-+++ b/LICENSE
-@@ -45,6 +45,7 @@ compiler.h
- arch/s390.h
- uatomic/alpha.h
- uatomic/mips.h
-+uatomic/nios2.h
- uatomic/s390.h
- system.h
- 
-diff --git a/README.md b/README.md
-index f6b290f..6fe9c1e 100644
---- a/README.md
-+++ b/README.md
-@@ -43,6 +43,7 @@ Currently, the following architectures are supported:
-   - S390, S390x
-   - ARM 32/64
-   - MIPS
-+  - NIOS2
-   - Alpha
-   - ia64
-   - Sparcv9 32/64
-diff --git a/configure.ac b/configure.ac
-index eebed56..8014e1d 100644
---- a/configure.ac
-+++ b/configure.ac
-@@ -136,6 +136,7 @@ AS_CASE([$host_cpu],
- 	[arm*], [ARCHTYPE="arm"],
- 	[aarch64*], [ARCHTYPE="aarch64"],
- 	[mips*], [ARCHTYPE="mips"],
-+	[nios2*], [ARCHTYPE="nios2"],
- 	[tile*], [ARCHTYPE="tile"],
- 	[hppa*], [ARCHTYPE="hppa"],
- 	[ARCHTYPE="unknown"]
-diff --git a/urcu/arch/nios2.h b/urcu/arch/nios2.h
-new file mode 100644
-index 0000000..b4f3e50
---- /dev/null
-+++ b/urcu/arch/nios2.h
-@@ -0,0 +1,40 @@
-+#ifndef _URCU_ARCH_NIOS2_H
-+#define _URCU_ARCH_NIOS2_H
-+
-+/*
-+ * arch_nios2.h: trivial definitions for the NIOS2 architecture.
-+ *
-+ * Copyright (c) 2016 Marek Vasut <marex@denx.de>
-+ *
-+ * This library is free software; you can redistribute it and/or
-+ * modify it under the terms of the GNU Lesser General Public
-+ * License as published by the Free Software Foundation; either
-+ * version 2.1 of the License, or (at your option) any later version.
-+ *
-+ * This library is distributed in the hope that it will be useful,
-+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
-+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
-+ * Lesser General Public License for more details.
-+ *
-+ * You should have received a copy of the GNU Lesser General Public
-+ * License along with this library; if not, write to the Free Software
-+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
-+ */
-+
-+#include <urcu/compiler.h>
-+#include <urcu/config.h>
-+#include <urcu/syscall-compat.h>
-+
-+#ifdef __cplusplus
-+extern "C" {
-+#endif
-+
-+#define cmm_mb()	cmm_barrier()
-+
-+#ifdef __cplusplus
-+}
-+#endif
-+
-+#include <urcu/arch/generic.h>
-+
-+#endif /* _URCU_ARCH_NIOS2_H */
-diff --git a/urcu/uatomic/nios2.h b/urcu/uatomic/nios2.h
-new file mode 100644
-index 0000000..5b3c303
---- /dev/null
-+++ b/urcu/uatomic/nios2.h
-@@ -0,0 +1,32 @@
-+#ifndef _URCU_UATOMIC_ARCH_NIOS2_H
-+#define _URCU_UATOMIC_ARCH_NIOS2_H
-+
-+/*
-+ * Atomic exchange operations for the NIOS2 architecture. Let GCC do it.
-+ *
-+ * Copyright (c) 2016 Marek Vasut <marex@denx.de>
-+ *
-+ * Permission is hereby granted, free of charge, to any person obtaining a copy
-+ * of this software and associated documentation files (the "Software"), to
-+ * deal in the Software without restriction, including without limitation the
-+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
-+ * sell copies of the Software, and to permit persons to whom the Software is
-+ * furnished to do so, subject to the following conditions:
-+ *
-+ * The above copyright notice and this permission notice shall be included in
-+ * all copies or substantial portions of the Software.
-+ *
-+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
-+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
-+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
-+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
-+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
-+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
-+ * IN THE SOFTWARE.
-+ */
-+
-+#include <urcu/compiler.h>
-+#include <urcu/system.h>
-+#include <urcu/uatomic/generic.h>
-+
-+#endif /* _URCU_UATOMIC_ARCH_NIOS2_H */
--- 
-2.10.2
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/liburcu/liburcu_0.10.0.bb b/import-layers/yocto-poky/meta/recipes-support/liburcu/liburcu_0.10.0.bb
new file mode 100644
index 0000000..4ecb20b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/liburcu/liburcu_0.10.0.bb
@@ -0,0 +1,17 @@
+SUMMARY = "Userspace RCU (read-copy-update) library"
+HOMEPAGE = "http://lttng.org/urcu"
+BUGTRACKER = "http://lttng.org/project/issues"
+
+LICENSE = "LGPLv2.1+ & MIT-style"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=e548d28737289d75a8f1e01ba2fd7825 \
+                    file://src/urcu.h;beginline=4;endline=32;md5=4de0d68d3a997643715036d2209ae1d9 \
+                    file://include/urcu/uatomic/x86.h;beginline=4;endline=21;md5=58e50bbd8a2f073bb5500e6554af0d0b"
+
+SRC_URI = "http://lttng.org/files/urcu/userspace-rcu-${PV}.tar.bz2 \
+           "
+
+SRC_URI[md5sum] = "69dab85b6929c378338b9504adc6aea7"
+SRC_URI[sha256sum] = "7cb58a7ba5151198087f025dc8d19d8918e9c6d56772f039696c111d9aad3190"
+
+S = "${WORKDIR}/userspace-rcu-${PV}"
+inherit autotools
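
Editor's note: the 0.10.0 recipe keeps the same license checksums but moves the checked headers to src/urcu.h and include/urcu/uatomic/x86.h, matching the restructured source tree. The beginline/endline/md5 parameters hash only that slice of each file so the build fails if the license header changes. A rough sketch of that idea (not BitBake's implementation; the path argument is illustrative):

    # Rough sketch of a LIC_FILES_CHKSUM-style line-range checksum.
    import hashlib

    def license_slice_md5(path, beginline=1, endline=None):
        with open(path, "rb") as f:
            lines = f.read().splitlines(keepends=True)
        end = endline if endline is not None else len(lines)
        return hashlib.md5(b"".join(lines[beginline - 1:end])).hexdigest()

    # e.g. the recipe pins lines 4-32 of src/urcu.h (relative to ${S}):
    # license_slice_md5("src/urcu.h", 4, 32)
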
diff --git a/import-layers/yocto-poky/meta/recipes-support/liburcu/liburcu_0.9.3.bb b/import-layers/yocto-poky/meta/recipes-support/liburcu/liburcu_0.9.3.bb
deleted file mode 100644
index 4486e0a..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/liburcu/liburcu_0.9.3.bb
+++ /dev/null
@@ -1,19 +0,0 @@
-SUMMARY = "Userspace RCU (read-copy-update) library"
-HOMEPAGE = "http://lttng.org/urcu"
-BUGTRACKER = "http://lttng.org/project/issues"
-
-LICENSE = "LGPLv2.1+ & MIT-style"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=e548d28737289d75a8f1e01ba2fd7825 \
-                    file://urcu.h;beginline=4;endline=32;md5=4de0d68d3a997643715036d2209ae1d9 \
-                    file://urcu/uatomic/x86.h;beginline=4;endline=21;md5=58e50bbd8a2f073bb5500e6554af0d0b"
-
-SRC_URI = "http://lttng.org/files/urcu/userspace-rcu-${PV}.tar.bz2 \
-           file://0001-Support-for-NIOS2-architecture.patch \
-           "
-
-SRC_URI[md5sum] = "920970e35a1a2066c8353eabfeab8730"
-SRC_URI[sha256sum] = "1bce32e6a6c967fef6d37adaadf33df19878d69673f9ef9d3f2470e0c6ed4006"
-
-S = "${WORKDIR}/userspace-rcu-${PV}"
-CFLAGS_append_libc-uclibc = " -D_GNU_SOURCE"
-inherit autotools
diff --git a/import-layers/yocto-poky/meta/recipes-support/libxslt/libxslt_1.1.29.bb b/import-layers/yocto-poky/meta/recipes-support/libxslt/libxslt_1.1.29.bb
index d27c706..5b11bc2 100644
--- a/import-layers/yocto-poky/meta/recipes-support/libxslt/libxslt_1.1.29.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/libxslt/libxslt_1.1.29.bb
@@ -8,7 +8,7 @@
 SECTION = "libs"
 DEPENDS = "libxml2"
 
-SRC_URI = "ftp://xmlsoft.org/libxslt/libxslt-${PV}.tar.gz \
+SRC_URI = "http://xmlsoft.org/sources/libxslt-${PV}.tar.gz \
            file://pkgconfig_fix.patch \
            file://0001-Use-pkg-config-to-find-gcrypt-and-libxml2.patch \
            file://0001-Link-libraries-with-libm.patch \
diff --git a/import-layers/yocto-poky/meta/recipes-support/lz4/files/0001-tests-Makefile-don-t-use-LIBDIR-as-variable.patch b/import-layers/yocto-poky/meta/recipes-support/lz4/files/0001-tests-Makefile-don-t-use-LIBDIR-as-variable.patch
new file mode 100644
index 0000000..00494e8
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/lz4/files/0001-tests-Makefile-don-t-use-LIBDIR-as-variable.patch
@@ -0,0 +1,82 @@
+From d4768d9e29b805096a86aa13c0d30ee8215af4df Mon Sep 17 00:00:00 2001
+From: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Date: Mon, 26 Jun 2017 12:07:09 +0300
+Subject: [PATCH] tests/Makefile: don't use LIBDIR as variable
+
+LIBDIR may be overridden with an environment variable; in that case, make
+clean breaks. Use another variable name.
+
+Signed-off-by: Jussi Kukkonen <jussi.kukkonen@intel.com>
+Upstream-Status: Backport
+---
+ tests/Makefile | 26 +++++++++++++-------------
+ 1 file changed, 13 insertions(+), 13 deletions(-)
+
+diff --git a/tests/Makefile b/tests/Makefile
+index 97fa782..51dd038 100644
+--- a/tests/Makefile
++++ b/tests/Makefile
+@@ -32,7 +32,7 @@ DESTDIR ?=
+ PREFIX  ?= /usr/local
+ BINDIR  := $(PREFIX)/bin
+ MANDIR  := $(PREFIX)/share/man/man1
+-LIBDIR  := ../lib
++LZ4DIR  := ../lib
+ PRGDIR  := ../programs
+ VOID    := /dev/null
+ TESTDIR := versionsTest
+@@ -43,7 +43,7 @@ CFLAGS  += -g -Wall -Wextra -Wundef -Wcast-qual -Wcast-align -Wshadow -Wswitch-e
+            -Wdeclaration-after-statement -Wstrict-prototypes \
+            -Wpointer-arith -Wstrict-aliasing=1
+ CFLAGS  += $(MOREFLAGS)
+-CPPFLAGS:= -I$(LIBDIR) -I$(PRGDIR) -DXXH_NAMESPACE=LZ4_
++CPPFLAGS:= -I$(LZ4DIR) -I$(PRGDIR) -DXXH_NAMESPACE=LZ4_
+ FLAGS    = $(CFLAGS) $(CPPFLAGS) $(LDFLAGS)
+ 
+ 
+@@ -79,31 +79,31 @@ lz4c32:   # create a 32-bits version for 32/64 interop tests
+ 	$(MAKE) -C $(PRGDIR) clean $@ CFLAGS="-m32 $(CFLAGS)"
+ 	cp $(LZ4) $(LZ4)c32
+ 
+-fullbench  : $(LIBDIR)/lz4.o $(LIBDIR)/lz4hc.o $(LIBDIR)/lz4frame.o $(LIBDIR)/xxhash.o fullbench.c
++fullbench  : $(LZ4DIR)/lz4.o $(LZ4DIR)/lz4hc.o $(LZ4DIR)/lz4frame.o $(LZ4DIR)/xxhash.o fullbench.c
+ 	$(CC) $(FLAGS) $^ -o $@$(EXT)
+ 
+-fullbench-lib: fullbench.c $(LIBDIR)/xxhash.c
+-	$(MAKE) -C $(LIBDIR) liblz4.a
+-	$(CC) $(FLAGS) $^ -o $@$(EXT) $(LIBDIR)/liblz4.a
++fullbench-lib: fullbench.c $(LZ4DIR)/xxhash.c
++	$(MAKE) -C $(LZ4DIR) liblz4.a
++	$(CC) $(FLAGS) $^ -o $@$(EXT) $(LZ4DIR)/liblz4.a
+ 
+-fullbench-dll: fullbench.c $(LIBDIR)/xxhash.c
+-	$(MAKE) -C $(LIBDIR) liblz4
+-	$(CC) $(FLAGS) $^ -o $@$(EXT) -DLZ4_DLL_IMPORT=1 $(LIBDIR)/dll/liblz4.dll
++fullbench-dll: fullbench.c $(LZ4DIR)/xxhash.c
++	$(MAKE) -C $(LZ4DIR) liblz4
++	$(CC) $(FLAGS) $^ -o $@$(EXT) -DLZ4_DLL_IMPORT=1 $(LZ4DIR)/dll/liblz4.dll
+ 
+-fuzzer  : $(LIBDIR)/lz4.o $(LIBDIR)/lz4hc.o $(LIBDIR)/xxhash.o fuzzer.c
++fuzzer  : $(LZ4DIR)/lz4.o $(LZ4DIR)/lz4hc.o $(LZ4DIR)/xxhash.o fuzzer.c
+ 	$(CC) $(FLAGS) $^ -o $@$(EXT)
+ 
+-frametest: $(LIBDIR)/lz4frame.o $(LIBDIR)/lz4.o $(LIBDIR)/lz4hc.o $(LIBDIR)/xxhash.o frametest.c
++frametest: $(LZ4DIR)/lz4frame.o $(LZ4DIR)/lz4.o $(LZ4DIR)/lz4hc.o $(LZ4DIR)/xxhash.o frametest.c
+ 	$(CC) $(FLAGS) $^ -o $@$(EXT)
+ 
+-fasttest: $(LIBDIR)/lz4.o fasttest.c
++fasttest: $(LZ4DIR)/lz4.o fasttest.c
+ 	$(CC) $(FLAGS) $^ -o $@$(EXT)
+ 
+ datagen : $(PRGDIR)/datagen.c datagencli.c
+ 	$(CC) $(FLAGS) -I$(PRGDIR) $^ -o $@$(EXT)
+ 
+ clean:
+-	@$(MAKE) -C $(LIBDIR) $@ > $(VOID)
++	@$(MAKE) -C $(LZ4DIR) $@ > $(VOID)
+ 	@$(MAKE) -C $(PRGDIR) $@ > $(VOID)
+ 	@$(RM) core *.o *.test tmp* \
+         fullbench-dll$(EXT) fullbench-lib$(EXT) \
+-- 
+2.1.4
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/lz4/files/run-ptest b/import-layers/yocto-poky/meta/recipes-support/lz4/files/run-ptest
new file mode 100644
index 0000000..d3bfc49
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/lz4/files/run-ptest
@@ -0,0 +1,43 @@
+#!/bin/sh
+cd testsuite
+
+echo -n "---- test-lz4 ----"
+make -C tests test-lz4  > /dev/null 2>&1
+
+if [ $? -eq 0 ]; then
+  echo "PASS"
+else
+  echo "FAIL"
+fi
+
+echo -n "---- test-fasttest ----"
+make -C tests test-fasttest  > /dev/null 2>&1
+if [ $? -eq 0 ]; then
+  echo "PASS"
+else
+  echo "FAIL"
+fi
+
+echo -n "---- test-frametest ----"
+make -C tests test-frametest > /dev/null 2>&1
+if [ $? -eq 0 ]; then
+  echo "PASS"
+else
+  echo "FAIL"
+fi
+
+echo -n "---- test-fullbench ----"
+make -C tests test-fullbench >  /dev/null 2>&1
+if [ $? -eq 0 ]; then
+  echo "PASS"
+else
+  echo "FAIL"
+fi
+
+echo -n "---- test-fuzzer ----"
+make -C tests test-fuzzer >  /dev/null 2>&1
+if [ $? -eq 0 ]; then
+  echo "PASS"
+else
+  echo "FAIL"
+fi
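
Editor's note: the run-ptest script above repeats one pattern per lz4 test target: run the make target, then print PASS or FAIL from its exit status, which is the line format the ptest runner consumes. A compact Python rendering of the same loop, shown only to make the convention explicit (the installed script remains the shell version above):

    # Illustrative rendering of the run-ptest pattern: each make target's
    # exit status is reduced to a "PASS"/"FAIL" line for ptest-runner.
    import subprocess

    TESTS = ["test-lz4", "test-fasttest", "test-frametest",
             "test-fullbench", "test-fuzzer"]

    for target in TESTS:
        result = subprocess.run(["make", "-C", "tests", target],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        status = "PASS" if result.returncode == 0 else "FAIL"
        print(f"---- {target} ----{status}")
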
diff --git a/import-layers/yocto-poky/meta/recipes-support/lz4/lz4.bb b/import-layers/yocto-poky/meta/recipes-support/lz4/lz4.bb
deleted file mode 100644
index 03c5a7a..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/lz4/lz4.bb
+++ /dev/null
@@ -1,21 +0,0 @@
-SUMMARY = "Extremely Fast Compression algorithm"
-DESCRIPTION = "LZ4 is a very fast lossless compression algorithm, providing compression speed at 400 MB/s per core, scalable with multi-cores CPU. It also features an extremely fast decoder, with speed in multiple GB/s per core, typically reaching RAM speed limits on multi-core systems."
-
-LICENSE = "BSD"
-LIC_FILES_CHKSUM = "file://lib/LICENSE;md5=0b0d063f37a4477b54af2459477dcafd"
-
-SRCREV = "d86dc916771c126afb797637dda9f6421c0cb998"
-
-PV = "131+git${SRCPV}"
-
-SRC_URI = "git://github.com/Cyan4973/lz4.git"
-
-S = "${WORKDIR}/git"
-
-EXTRA_OEMAKE = "PREFIX=${prefix} CC='${CC}' DESTDIR=${D} LIBDIR=${libdir} INCLUDEDIR=${includedir}"
-
-do_install() {
-	oe_runmake install
-}
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/lz4/lz4_1.7.4.bb b/import-layers/yocto-poky/meta/recipes-support/lz4/lz4_1.7.4.bb
new file mode 100644
index 0000000..86a1ab9
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/lz4/lz4_1.7.4.bb
@@ -0,0 +1,27 @@
+SUMMARY = "Extremely Fast Compression algorithm"
+DESCRIPTION = "LZ4 is a very fast lossless compression algorithm, providing compression speed at 400 MB/s per core, scalable with multi-cores CPU. It also features an extremely fast decoder, with speed in multiple GB/s per core, typically reaching RAM speed limits on multi-core systems."
+
+LICENSE = "BSD | BSD-2-Clause | GPL-2.0"
+LIC_FILES_CHKSUM = "file://lib/LICENSE;md5=ebc2ea4814a64de7708f1571904b32cc\
+                    file://programs/COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://LICENSE;md5=7f2857d58beff6d04137bf9b09e5ffb6"
+
+PE = "1"
+
+SRCREV = "7bb64ff2b69a9f8367de9ab483cdadf42b4c1b65"
+
+SRC_URI = "git://github.com/lz4/lz4.git \
+           file://0001-tests-Makefile-don-t-use-LIBDIR-as-variable.patch \
+           file://run-ptest \
+"
+UPSTREAM_CHECK_GITTAGREGEX = "v(?P<pver>.*)"
+
+S = "${WORKDIR}/git"
+
+EXTRA_OEMAKE = "PREFIX=${prefix} CC='${CC}' DESTDIR=${D} LIBDIR=${libdir} INCLUDEDIR=${includedir}"
+
+do_install() {
+	oe_runmake install
+}
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/lzo/lzo/0001-Add-pkgconfigdir-to-solve-the-undefine-error.patch b/import-layers/yocto-poky/meta/recipes-support/lzo/lzo/0001-Add-pkgconfigdir-to-solve-the-undefine-error.patch
new file mode 100644
index 0000000..5235a15
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/lzo/lzo/0001-Add-pkgconfigdir-to-solve-the-undefine-error.patch
@@ -0,0 +1,27 @@
+From e730bfd7c2d3a4b5f3605878599cb9b20d31b1fd Mon Sep 17 00:00:00 2001
+From: Fan Xin <fan.xin@jp.fujitsu.com>
+Date: Fri, 2 Jun 2017 11:52:25 +0900
+Subject: [PATCH] Add pkgconfigdir to solve the undefine error.
+
+Upstream-Status: Pending
+
+Signed-off-by: Fan Xin <fan.xin@jp.fujitsu.com>
+---
+ Makefile.am | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/Makefile.am b/Makefile.am
+index e4d383b..c75023d 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -14,6 +14,7 @@ AM_CPPFLAGS = -I$(top_srcdir)/include -I$(top_srcdir)
+ LDADD = src/liblzo2.la
+ lib_LTLIBRARIES =
+ noinst_PROGRAMS =
++pkgconfigdir = $(libdir)/pkgconfig
+ pkgconfig_DATA = lzo2.pc
+ 
+ 
+-- 
+1.9.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/lzo/lzo_2.09.bb b/import-layers/yocto-poky/meta/recipes-support/lzo/lzo_2.09.bb
deleted file mode 100644
index 2978617..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/lzo/lzo_2.09.bb
+++ /dev/null
@@ -1,35 +0,0 @@
-SUMMARY = "Lossless data compression library"
-HOMEPAGE = "http://www.oberhumer.com/opensource/lzo/"
-SECTION = "libs"
-LICENSE = "GPLv2+"
-LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
-                    file://src/lzo_init.c;beginline=5;endline=25;md5=355023835a9b9eeb70ab895395e951ff"
-
-SRC_URI = "http://www.oberhumer.com/opensource/lzo/download/lzo-${PV}.tar.gz \
-           file://0001-Use-memcpy-instead-of-reinventing-it.patch \
-           file://acinclude.m4 \
-           file://run-ptest \
-           "
-
-SRC_URI[md5sum] = "c7ffc9a103afe2d1bba0b015e7aa887f"
-SRC_URI[sha256sum] = "f294a7ced313063c057c504257f437c8335c41bfeed23531ee4e6a2b87bcb34c"
-
-inherit autotools ptest
-
-EXTRA_OECONF = "--enable-shared"
-
-do_configure_prepend () {
-	cp ${WORKDIR}/acinclude.m4 ${S}/
-}
-
-do_install_ptest() {
-	t=${D}${PTEST_PATH}
-	cp ${S}/util/check.sh $t
-	cp ${B}/minilzo/testmini $t
-	for i in tests/align tests/chksum lzotest/lzotest examples/simple
-		do cp ${B}/`dirname $i`/.libs/`basename $i` $t; \
-	done
-}
-
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/lzo/lzo_2.10.bb b/import-layers/yocto-poky/meta/recipes-support/lzo/lzo_2.10.bb
new file mode 100644
index 0000000..490d230
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/lzo/lzo_2.10.bb
@@ -0,0 +1,36 @@
+SUMMARY = "Lossless data compression library"
+HOMEPAGE = "http://www.oberhumer.com/opensource/lzo/"
+SECTION = "libs"
+LICENSE = "GPLv2+"
+LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263 \
+                    file://src/lzo_init.c;beginline=5;endline=25;md5=9ae697ca01829b0a383c5d2d163e0108"
+
+SRC_URI = "http://www.oberhumer.com/opensource/lzo/download/lzo-${PV}.tar.gz \
+           file://0001-Use-memcpy-instead-of-reinventing-it.patch \
+           file://0001-Add-pkgconfigdir-to-solve-the-undefine-error.patch \
+           file://acinclude.m4 \
+           file://run-ptest \
+           "
+
+SRC_URI[md5sum] = "39d3f3f9c55c87b1e5d6888e1420f4b5"
+SRC_URI[sha256sum] = "c0f892943208266f9b6543b3ae308fab6284c5c90e627931446fb49b4221a072"
+
+inherit autotools ptest
+
+EXTRA_OECONF = "--enable-shared"
+
+do_configure_prepend () {
+	cp ${WORKDIR}/acinclude.m4 ${S}/
+}
+
+do_install_ptest() {
+	t=${D}${PTEST_PATH}
+	cp ${S}/util/check.sh $t
+	cp ${B}/minilzo/testmini $t
+	for i in tests/align tests/chksum lzotest/lzotest examples/simple
+		do cp ${B}/`dirname $i`/.libs/`basename $i` $t; \
+	done
+}
+
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/nettle/nettle-3.3/dlopen-test.patch b/import-layers/yocto-poky/meta/recipes-support/nettle/nettle-3.3/dlopen-test.patch
new file mode 100644
index 0000000..c4f0b7e
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/nettle/nettle-3.3/dlopen-test.patch
@@ -0,0 +1,20 @@
+Replace the relative path of libnettle.so with an absolute path so the test
+program can find it.
+Relative paths are not suitable, as the folder structure for ptest
+is different from the one expected by the nettle testsuite.
+
+Upstream-Status: Inappropriate [embedded specific]
+
+Signed-off-by: Juro Bystricky <juro.bystricky@intel.com>
+
+--- a/testsuite/dlopen-test.c	2016-10-01 00:28:38.000000000 -0700
++++ b/testsuite/dlopen-test.c	2017-10-13 11:08:57.227572860 -0700
+@@ -9,7 +9,7 @@
+ main (int argc UNUSED, char **argv UNUSED)
+ {
+ #if HAVE_LIBDL
+-  void *handle = dlopen ("../libnettle.so", RTLD_NOW);
++  void *handle = dlopen ("/usr/lib/libnettle.so", RTLD_NOW);
+   int (*get_version)(void);
+   if (!handle)
+     {
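
Editor's note: the patch hard-codes /usr/lib/libnettle.so because on target the ptest layout no longer matches the build tree, so the relative "../libnettle.so" would not resolve. The relative-versus-absolute behaviour is easy to see with a quick ctypes experiment (illustration only; the actual test is the C program patched above, and the absolute path assumes the usual /usr/lib install location):

    # Quick illustration of why the patch switches dlopen() to an absolute
    # path: a relative name depends on the loader's search rules and the
    # current layout, while an absolute path always points at the
    # installed library.
    import ctypes

    for name in ["../libnettle.so", "/usr/lib/libnettle.so"]:
        try:
            ctypes.CDLL(name)
            print(name, "-> loaded")
        except OSError as exc:
            print(name, "->", exc)
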
diff --git a/import-layers/yocto-poky/meta/recipes-support/nettle/nettle_3.3.bb b/import-layers/yocto-poky/meta/recipes-support/nettle/nettle_3.3.bb
index b76babf..3951678 100644
--- a/import-layers/yocto-poky/meta/recipes-support/nettle/nettle_3.3.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/nettle/nettle_3.3.bb
@@ -11,5 +11,9 @@
             file://check-header-files-of-openssl-only-if-enable_.patch \
             "
 
+SRC_URI_append_class-target = "\
+            file://dlopen-test.patch \
+            "
+
 SRC_URI[md5sum] = "10f969f78a463704ae73529978148dbe"
 SRC_URI[sha256sum] = "46942627d5d0ca11720fec18d81fc38f7ef837ea4197c1f630e71ce0d470b11e"
diff --git a/import-layers/yocto-poky/meta/recipes-support/npth/npth_1.3.bb b/import-layers/yocto-poky/meta/recipes-support/npth/npth_1.3.bb
deleted file mode 100644
index d4fc064..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/npth/npth_1.3.bb
+++ /dev/null
@@ -1,22 +0,0 @@
-SUMMARY = "New GNU Portable Threads library"
-HOMEPAGE = "http://www.gnupg.org/software/pth/"
-SECTION = "libs"
-LICENSE = "LGPLv3+ & GPLv2+"
-LIC_FILES_CHKSUM = "\
-    file://COPYING;md5=751419260aa954499f7abaabaa882bbe\
-    file://COPYING.LESSER;md5=6a6a8e020838b23406c81b19c1d46df6\
-    "
-UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
-SRC_URI = "${GNUPG_MIRROR}/npth/npth-${PV}.tar.bz2 \
-           file://pkgconfig.patch \
-          "
-
-SRC_URI[md5sum] = "efe1524c53670b5755dc27893d2d68a0"
-SRC_URI[sha256sum] = "bca81940436aed0734eb8d0ff8b179e04cc8c087f5625204419f5f45d736a82a"
-
-BINCONFIG = "${bindir}/npth-config"
-
-inherit autotools binconfig-disabled
-
-FILES_${PN} = "${libdir}/libnpth.so.*"
-FILES_${PN}-dev += "${bindir}/npth-config"
diff --git a/import-layers/yocto-poky/meta/recipes-support/npth/npth_1.5.bb b/import-layers/yocto-poky/meta/recipes-support/npth/npth_1.5.bb
new file mode 100644
index 0000000..54de70c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/npth/npth_1.5.bb
@@ -0,0 +1,21 @@
+SUMMARY = "New GNU Portable Threads library"
+HOMEPAGE = "http://www.gnupg.org/software/pth/"
+SECTION = "libs"
+LICENSE = "LGPLv2+"
+LIC_FILES_CHKSUM = "\
+    file://COPYING.LIB;md5=2caced0b25dfefd4c601d92bd15116de\
+    "
+UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
+SRC_URI = "${GNUPG_MIRROR}/npth/npth-${PV}.tar.bz2 \
+           file://pkgconfig.patch \
+          "
+
+SRC_URI[md5sum] = "9ba2dc4302d2f32c66737c43ed191b1b"
+SRC_URI[sha256sum] = "294a690c1f537b92ed829d867bee537e46be93fbd60b16c04630fbbfcd9db3c2"
+
+BINCONFIG = "${bindir}/npth-config"
+
+inherit autotools binconfig-disabled
+
+FILES_${PN} = "${libdir}/libnpth.so.*"
+FILES_${PN}-dev += "${bindir}/npth-config"
diff --git a/import-layers/yocto-poky/meta/recipes-support/nspr/nspr/0001-md-Fix-build-with-musl.patch b/import-layers/yocto-poky/meta/recipes-support/nspr/nspr/0001-md-Fix-build-with-musl.patch
new file mode 100644
index 0000000..f3cd670
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/nspr/nspr/0001-md-Fix-build-with-musl.patch
@@ -0,0 +1,31 @@
+From 147f3c2acbd96d44025cec11800ded0282327764 Mon Sep 17 00:00:00 2001
+From: Khem Raj <raj.khem@gmail.com>
+Date: Mon, 18 Sep 2017 17:22:43 -0700
+Subject: [PATCH] md: Fix build with musl
+
+The MIPS-specific header <sgidefs.h> is not provided by musl; the Linux
+kernel headers provide <asm/sgidefs.h>, which has the same definitions.
+
+Signed-off-by: Khem Raj <raj.khem@gmail.com>
+---
+Upstream-Status: Pending
+
+ pr/include/md/_linux.cfg | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
+
+diff --git a/pr/include/md/_linux.cfg b/pr/include/md/_linux.cfg
+index 640b19c..31296a8 100644
+--- a/pr/include/md/_linux.cfg
++++ b/pr/include/md/_linux.cfg
+@@ -499,7 +499,7 @@
+ #elif defined(__mips__)
+ 
+ /* For _ABI64 */
+-#include <sgidefs.h>
++#include <asm/sgidefs.h>
+ 
+ #ifdef __MIPSEB__
+ #define IS_BIG_ENDIAN 1
+-- 
+2.14.1
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/nspr/nspr_4.13.1.bb b/import-layers/yocto-poky/meta/recipes-support/nspr/nspr_4.13.1.bb
deleted file mode 100644
index 63ebecf..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/nspr/nspr_4.13.1.bb
+++ /dev/null
@@ -1,190 +0,0 @@
-SUMMARY = "Netscape Portable Runtime Library"
-HOMEPAGE =  "http://www.mozilla.org/projects/nspr/"
-LICENSE = "GPL-2.0 | MPL-2.0 | LGPL-2.1"
-LIC_FILES_CHKSUM = "file://configure.in;beginline=3;endline=6;md5=90c2fdee38e45d6302abcfe475c8b5c5 \
-                    file://Makefile.in;beginline=4;endline=38;md5=beda1dbb98a515f557d3e58ef06bca99"
-SECTION = "libs/network"
-
-SRC_URI = "http://ftp.mozilla.org/pub/nspr/releases/v${PV}/src/nspr-${PV}.tar.gz \
-           file://remove-rpath-from-tests.patch \
-           file://fix-build-on-x86_64.patch \
-           file://remove-srcdir-from-configure-in.patch \
-           file://0002-Add-nios2-support.patch \
-           file://0001-include-stdint.h-for-SSIZE_MAX-and-SIZE_MAX-definiti.patch \
-           file://nspr.pc.in \
-"
-
-CACHED_CONFIGUREVARS_append_libc-musl = " CFLAGS='${CFLAGS} -D_PR_POLL_AVAILABLE \
-                                          -D_PR_HAVE_OFF64_T -D_PR_INET6 -D_PR_HAVE_INET_NTOP \
-                                          -D_PR_HAVE_GETHOSTBYNAME2 -D_PR_HAVE_GETADDRINFO \
-                                          -D_PR_INET6_PROBE -DNO_DLOPEN_NULL'"
-
-UPSTREAM_CHECK_URI = "http://ftp.mozilla.org/pub/nspr/releases/"
-UPSTREAM_CHECK_REGEX = "v(?P<pver>\d+(\.\d+)+)/"
-
-SRC_URI[md5sum] = "9c44298a6fc478b3c0a4e98f4f9981ed"
-SRC_URI[sha256sum] = "5e4c1751339a76e7c772c0c04747488d7f8c98980b434dc846977e43117833ab"
-
-CVE_PRODUCT = "netscape_portable_runtime"
-
-S = "${WORKDIR}/nspr-${PV}/nspr"
-
-RDEPENDS_${PN}-dev += "perl"
-TARGET_CC_ARCH += "${LDFLAGS}"
-
-TESTS = " \
-    accept \
-    acceptread \
-    acceptreademu \
-    affinity \
-    alarm \
-    anonfm \
-    atomic \
-    attach \
-    bigfile \
-    cleanup \
-    cltsrv  \
-    concur \
-    cvar \
-    cvar2 \
-    dlltest \
-    dtoa \
-    errcodes \
-    exit \
-    fdcach \
-    fileio \
-    foreign \
-    formattm \
-    fsync \
-    gethost \
-    getproto \
-    i2l \
-    initclk \
-    inrval \
-    instrumt \
-    intrio \
-    intrupt \
-    io_timeout \
-    ioconthr \
-    join \
-    joinkk \
-    joinku \
-    joinuk \
-    joinuu \
-    layer \
-    lazyinit \
-    libfilename \
-    lltest \
-    lock \
-    lockfile \
-    logfile \
-    logger \
-    many_cv \
-    multiwait \
-    nameshm1 \
-    nblayer \
-    nonblock \
-    ntioto \
-    ntoh \
-    op_2long \
-    op_excl \
-    op_filnf \
-    op_filok \
-    op_nofil \
-    parent \
-    parsetm \
-    peek \
-    perf \
-    pipeping \
-    pipeping2 \
-    pipeself \
-    poll_nm \
-    poll_to \
-    pollable \
-    prftest \
-    primblok \
-    provider \
-    prpollml \
-    ranfile \
-    randseed \
-    reinit \
-    rwlocktest \
-    sel_spd \
-    selct_er \
-    selct_nm \
-    selct_to \
-    selintr \
-    sema \
-    semaerr \
-    semaping \
-    sendzlf \
-    server_test \
-    servr_kk \
-    servr_uk \
-    servr_ku \
-    servr_uu \
-    short_thread \
-    sigpipe \
-    socket \
-    sockopt \
-    sockping \
-    sprintf \
-    stack \
-    stdio \
-    str2addr \
-    strod \
-    switch \
-    system \
-    testbit \
-    testfile \
-    threads \
-    timemac \
-    timetest \
-    tpd \
-    udpsrv \
-    vercheck \
-    version \
-    writev \
-    xnotify \
-    zerolen"
-
-inherit autotools
-
-PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)}"
-PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
-
-do_compile_prepend() {
-	oe_runmake CROSS_COMPILE=1 CFLAGS="-DXP_UNIX" LDFLAGS="" CC=gcc -C config export
-}
-
-do_compile_append() {
-	oe_runmake -C pr/tests
-}
-
-do_install_append() {
-    install -D ${WORKDIR}/nspr.pc.in ${D}${libdir}/pkgconfig/nspr.pc
-    sed -i  \
-    -e 's:NSPRVERSION:${PV}:g' \
-    -e 's:OEPREFIX:${prefix}:g' \
-    -e 's:OELIBDIR:${libdir}:g' \
-    -e 's:OEINCDIR:${includedir}:g' \
-    -e 's:OEEXECPREFIX:${exec_prefix}:g' \
-    ${D}${libdir}/pkgconfig/nspr.pc
-
-    mkdir -p ${D}${libdir}/nspr/tests
-    install -m 0755 ${S}/pr/tests/runtests.pl ${D}${libdir}/nspr/tests
-    install -m 0755 ${S}/pr/tests/runtests.sh ${D}${libdir}/nspr/tests
-    cd ${B}/pr/tests
-    install -m 0755 ${TESTS} ${D}${libdir}/nspr/tests
-
-    # delete compile-et.pl and perr.properties from ${bindir} because these are
-    # only used to generate prerr.c and prerr.h files from prerr.et at compile
-    # time
-    rm ${D}${bindir}/compile-et.pl ${D}${bindir}/prerr.properties
-}
-
-FILES_${PN} = "${libdir}/lib*.so"
-FILES_${PN}-dev = "${bindir}/* ${libdir}/nspr/tests/* ${libdir}/pkgconfig \
-                ${includedir}/* ${datadir}/aclocal/* "
-
-BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/nspr/nspr_4.16.bb b/import-layers/yocto-poky/meta/recipes-support/nspr/nspr_4.16.bb
new file mode 100644
index 0000000..78ef994
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/nspr/nspr_4.16.bb
@@ -0,0 +1,191 @@
+SUMMARY = "Netscape Portable Runtime Library"
+HOMEPAGE =  "http://www.mozilla.org/projects/nspr/"
+LICENSE = "GPL-2.0 | MPL-2.0 | LGPL-2.1"
+LIC_FILES_CHKSUM = "file://configure.in;beginline=3;endline=6;md5=90c2fdee38e45d6302abcfe475c8b5c5 \
+                    file://Makefile.in;beginline=4;endline=38;md5=beda1dbb98a515f557d3e58ef06bca99"
+SECTION = "libs/network"
+
+SRC_URI = "http://ftp.mozilla.org/pub/nspr/releases/v${PV}/src/nspr-${PV}.tar.gz \
+           file://remove-rpath-from-tests.patch \
+           file://fix-build-on-x86_64.patch \
+           file://remove-srcdir-from-configure-in.patch \
+           file://0002-Add-nios2-support.patch \
+           file://0001-include-stdint.h-for-SSIZE_MAX-and-SIZE_MAX-definiti.patch \
+           file://0001-md-Fix-build-with-musl.patch \
+           file://nspr.pc.in \
+"
+
+CACHED_CONFIGUREVARS_append_libc-musl = " CFLAGS='${CFLAGS} -D_PR_POLL_AVAILABLE \
+                                          -D_PR_HAVE_OFF64_T -D_PR_INET6 -D_PR_HAVE_INET_NTOP \
+                                          -D_PR_HAVE_GETHOSTBYNAME2 -D_PR_HAVE_GETADDRINFO \
+                                          -D_PR_INET6_PROBE -DNO_DLOPEN_NULL'"
+
+UPSTREAM_CHECK_URI = "http://ftp.mozilla.org/pub/nspr/releases/"
+UPSTREAM_CHECK_REGEX = "v(?P<pver>\d+(\.\d+)+)/"
+
+SRC_URI[md5sum] = "42fd8963a4b394f62d43ba604f03fab7"
+SRC_URI[sha256sum] = "9b3102d97665504aeee73363c11a21c062ad67a2522242368b7f019f96a53cd1"
+
+CVE_PRODUCT = "netscape_portable_runtime"
+
+S = "${WORKDIR}/nspr-${PV}/nspr"
+
+RDEPENDS_${PN}-dev += "perl"
+TARGET_CC_ARCH += "${LDFLAGS}"
+
+TESTS = " \
+    accept \
+    acceptread \
+    acceptreademu \
+    affinity \
+    alarm \
+    anonfm \
+    atomic \
+    attach \
+    bigfile \
+    cleanup \
+    cltsrv  \
+    concur \
+    cvar \
+    cvar2 \
+    dlltest \
+    dtoa \
+    errcodes \
+    exit \
+    fdcach \
+    fileio \
+    foreign \
+    formattm \
+    fsync \
+    gethost \
+    getproto \
+    i2l \
+    initclk \
+    inrval \
+    instrumt \
+    intrio \
+    intrupt \
+    io_timeout \
+    ioconthr \
+    join \
+    joinkk \
+    joinku \
+    joinuk \
+    joinuu \
+    layer \
+    lazyinit \
+    libfilename \
+    lltest \
+    lock \
+    lockfile \
+    logfile \
+    logger \
+    many_cv \
+    multiwait \
+    nameshm1 \
+    nblayer \
+    nonblock \
+    ntioto \
+    ntoh \
+    op_2long \
+    op_excl \
+    op_filnf \
+    op_filok \
+    op_nofil \
+    parent \
+    parsetm \
+    peek \
+    perf \
+    pipeping \
+    pipeping2 \
+    pipeself \
+    poll_nm \
+    poll_to \
+    pollable \
+    prftest \
+    primblok \
+    provider \
+    prpollml \
+    ranfile \
+    randseed \
+    reinit \
+    rwlocktest \
+    sel_spd \
+    selct_er \
+    selct_nm \
+    selct_to \
+    selintr \
+    sema \
+    semaerr \
+    semaping \
+    sendzlf \
+    server_test \
+    servr_kk \
+    servr_uk \
+    servr_ku \
+    servr_uu \
+    short_thread \
+    sigpipe \
+    socket \
+    sockopt \
+    sockping \
+    sprintf \
+    stack \
+    stdio \
+    str2addr \
+    strod \
+    switch \
+    system \
+    testbit \
+    testfile \
+    threads \
+    timemac \
+    timetest \
+    tpd \
+    udpsrv \
+    vercheck \
+    version \
+    writev \
+    xnotify \
+    zerolen"
+
+inherit autotools
+
+PACKAGECONFIG ??= "${@bb.utils.filter('DISTRO_FEATURES', 'ipv6', d)}"
+PACKAGECONFIG[ipv6] = "--enable-ipv6,--disable-ipv6,"
+
+do_compile_prepend() {
+	oe_runmake CROSS_COMPILE=1 CFLAGS="-DXP_UNIX ${BUILD_CFLAGS}" LDFLAGS="" CC="${BUILD_CC}" -C config export
+}
+
+do_compile_append() {
+	oe_runmake -C pr/tests
+}
+
+do_install_append() {
+    install -D ${WORKDIR}/nspr.pc.in ${D}${libdir}/pkgconfig/nspr.pc
+    sed -i  \
+    -e 's:NSPRVERSION:${PV}:g' \
+    -e 's:OEPREFIX:${prefix}:g' \
+    -e 's:OELIBDIR:${libdir}:g' \
+    -e 's:OEINCDIR:${includedir}:g' \
+    -e 's:OEEXECPREFIX:${exec_prefix}:g' \
+    ${D}${libdir}/pkgconfig/nspr.pc
+
+    mkdir -p ${D}${libdir}/nspr/tests
+    install -m 0755 ${S}/pr/tests/runtests.pl ${D}${libdir}/nspr/tests
+    install -m 0755 ${S}/pr/tests/runtests.sh ${D}${libdir}/nspr/tests
+    cd ${B}/pr/tests
+    install -m 0755 ${TESTS} ${D}${libdir}/nspr/tests
+
+    # delete compile-et.pl and perr.properties from ${bindir} because these are
+    # only used to generate prerr.c and prerr.h files from prerr.et at compile
+    # time
+    rm ${D}${bindir}/compile-et.pl ${D}${bindir}/prerr.properties
+}
+
+FILES_${PN} = "${libdir}/lib*.so"
+FILES_${PN}-dev = "${bindir}/* ${libdir}/nspr/tests/* ${libdir}/pkgconfig \
+                ${includedir}/* ${datadir}/aclocal/* "
+
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/nss/nss/Fix-compilation-for-X32.patch b/import-layers/yocto-poky/meta/recipes-support/nss/nss/Fix-compilation-for-X32.patch
index f7f2c21..80b8690 100644
--- a/import-layers/yocto-poky/meta/recipes-support/nss/nss/Fix-compilation-for-X32.patch
+++ b/import-layers/yocto-poky/meta/recipes-support/nss/nss/Fix-compilation-for-X32.patch
@@ -6,6 +6,8 @@
 X32 uses 32-bit pointers, not 64-bit.
 
 Signed-off-by: Christopher Larson <chris_larson@mentor.com>
+
+Upstream-Status: Pending
 ---
  nss/lib/freebl/poly1305-donna-x64-sse2-incremental-source.c | 4 ++++
  1 file changed, 4 insertions(+)
diff --git a/import-layers/yocto-poky/meta/recipes-support/nss/nss_3.28.1.bb b/import-layers/yocto-poky/meta/recipes-support/nss/nss_3.28.1.bb
deleted file mode 100644
index fed86fc..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/nss/nss_3.28.1.bb
+++ /dev/null
@@ -1,246 +0,0 @@
-SUMMARY = "Mozilla's SSL and TLS implementation"
-DESCRIPTION = "Network Security Services (NSS) is a set of libraries \
-designed to support cross-platform development of \
-security-enabled client and server applications. \
-Applications built with NSS can support SSL v2 and v3, \
-TLS, PKCS 5, PKCS 7, PKCS 11, PKCS 12, S/MIME, X.509 \
-v3 certificates, and other security standards."
-HOMEPAGE = "http://www.mozilla.org/projects/security/pki/nss/"
-SECTION = "libs"
-
-LICENSE = "MPL-2.0 | (MPL-2.0 & GPL-2.0+) | (MPL-2.0 & LGPL-2.1+)"
-
-LIC_FILES_CHKSUM = "file://nss/COPYING;md5=3b1e88e1b9c0b5a4b2881d46cce06a18 \
-                    file://nss/lib/freebl/mpi/doc/LICENSE;md5=491f158d09d948466afce85d6f1fe18f \
-                    file://nss/lib/freebl/mpi/doc/LICENSE-MPL;md5=5d425c8f3157dbf212db2ec53d9e5132"
-
-SRC_URI = "\
-    http://ftp.mozilla.org/pub/mozilla.org/security/nss/releases/NSS_3_28_1_RTM/src/${BP}.tar.gz \
-    file://0001-nss-fix-support-cross-compiling.patch \
-    file://nss-no-rpath-for-cross-compiling.patch \
-    file://nss-fix-incorrect-shebang-of-perl.patch \
-    file://nss-fix-nsinstall-build.patch \
-    file://disable-Wvarargs-with-clang.patch \
-    file://pqg.c-ULL_addend.patch \
-    file://Fix-compilation-for-X32.patch \
-    file://nss.pc.in \
-    file://signlibs.sh \
-"
-SRC_URI[md5sum] = "e98d48435cee5792f97ef7fc35a602c3"
-SRC_URI[sha256sum] = "58cc0c05c0ed9523e6d820bea74f513538f48c87aac931876e3d3775de1a82ad"
-
-UPSTREAM_CHECK_URI = "https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_Releases"
-UPSTREAM_CHECK_REGEX = "NSS_(?P<pver>.+)_release_notes"
-
-inherit siteinfo
-
-DEPENDS = "sqlite3 nspr zlib nss-native"
-DEPENDS_class-native = "sqlite3-native nspr-native zlib-native"
-RDEPENDS_${PN}-smime = "perl"
-
-TD = "${S}/tentative-dist"
-TDS = "${S}/tentative-dist-staging"
-
-TARGET_CC_ARCH += "${LDFLAGS}"
-
-do_configure_prepend_libc-musl () {
-    sed -i -e '/-DHAVE_SYS_CDEFS_H/d' ${S}/nss/lib/dbm/config/config.mk
-}
-
-do_compile_prepend_class-native() {
-    export NSPR_INCLUDE_DIR=${STAGING_INCDIR_NATIVE}
-    export NSPR_LIB_DIR=${STAGING_LIBDIR_NATIVE}
-    export NSS_ENABLE_WERROR=0
-}
-
-do_compile_prepend_class-nativesdk() {
-    export LDFLAGS=""
-}
-
-do_compile_prepend_class-native() {
-    # Need to set RPATH so that chrpath will do its job correctly
-    RPATH="-Wl,-rpath-link,${STAGING_LIBDIR_NATIVE} -Wl,-rpath-link,${STAGING_BASE_LIBDIR_NATIVE} -Wl,-rpath,${STAGING_LIBDIR_NATIVE} -Wl,-rpath,${STAGING_BASE_LIBDIR_NATIVE}"
-}
-
-do_compile() {
-    export CROSS_COMPILE=1
-    export NATIVE_CC="gcc"
-    export NATIVE_FLAGS="${HOST_CFLAGS}"
-    export BUILD_OPT=1
-
-    export FREEBL_NO_DEPEND=1
-    export FREEBL_LOWHASH=1
-
-    export LIBDIR=${libdir}
-    export MOZILLA_CLIENT=1
-    export NS_USE_GCC=1
-    export NSS_USE_SYSTEM_SQLITE=1
-    export NSS_ENABLE_ECC=1
-
-    export OS_RELEASE=3.4
-    export OS_TARGET=Linux
-    export OS_ARCH=Linux
-
-    if [ "${TARGET_ARCH}" = "powerpc" ]; then
-        OS_TEST=ppc
-    elif [ "${TARGET_ARCH}" = "powerpc64" ]; then
-        OS_TEST=ppc64
-    elif [ "${TARGET_ARCH}" = "mips" -o "${TARGET_ARCH}" = "mipsel" -o "${TARGET_ARCH}" = "mips64" -o "${TARGET_ARCH}" = "mips64el" ]; then
-        OS_TEST=mips
-    else
-        OS_TEST="${TARGET_ARCH}"
-    fi
-
-    if [ "${SITEINFO_BITS}" = "64" ]; then
-        export USE_64=1
-    elif [ "${TARGET_ARCH}" = "x86_64" -a "${SITEINFO_BITS}" = "32" ]; then
-        export USE_X32=1
-    fi
-
-    export NSS_DISABLE_GTESTS=1
-
-    # We can modify CC in the environment, but if we set it via an
-    # argument to make, nsinstall, a host program, will also build with it!
-    #
-    export CC="${CC} -g"
-    make -C ./nss CCC="${CXX} -g" \
-        OS_TEST=${OS_TEST} \
-        RPATH="${RPATH}"
-}
-do_compile[vardepsexclude] += "SITEINFO_BITS"
-
-
-do_install_prepend_class-nativesdk() {
-    export LDFLAGS=""
-}
-
-do_install() {
-    export CROSS_COMPILE=1
-    export NATIVE_CC="gcc"
-    export BUILD_OPT=1
-
-    export FREEBL_NO_DEPEND=1
-
-    export LIBDIR=${libdir}
-    export MOZILLA_CLIENT=1
-    export NS_USE_GCC=1
-    export NSS_USE_SYSTEM_SQLITE=1
-    export NSS_ENABLE_ECC=1
-
-    export OS_RELEASE=3.4
-    export OS_TARGET=Linux
-    export OS_ARCH=Linux
-
-    if [ "${TARGET_ARCH}" = "powerpc" ]; then
-        OS_TEST=ppc
-    elif [ "${TARGET_ARCH}" = "powerpc64" ]; then
-        OS_TEST=ppc64
-    elif [ "${TARGET_ARCH}" = "mips" -o "${TARGET_ARCH}" = "mipsel" -o "${TARGET_ARCH}" = "mips64" -o "${TARGET_ARCH}" = "mips64el" ]; then
-        OS_TEST=mips
-    else
-        OS_TEST="${TARGET_ARCH}"
-    fi
-    if [ "${SITEINFO_BITS}" = "64" ]; then
-        export USE_64=1
-    elif [ "${TARGET_ARCH}" = "x86_64" -a "${SITEINFO_BITS}" = "32" ]; then
-        export USE_X32=1
-    fi
-
-    export NSS_DISABLE_GTESTS=1
-
-    make -C ./nss \
-        CCC="${CXX}" \
-        OS_TEST=${OS_TEST} \
-        SOURCE_LIB_DIR="${TD}/${libdir}" \
-        SOURCE_BIN_DIR="${TD}/${bindir}" \
-        install
-
-    install -d ${D}/${libdir}/
-    for file in ${S}/dist/*.OBJ/lib/*.so; do
-        echo "Installing `basename $file`..."
-        cp $file  ${D}/${libdir}/
-    done
-
-    for shared_lib in ${TD}/${libdir}/*.so.*; do
-        if [ -f $shared_lib ]; then
-            cp $shared_lib ${D}/${libdir}
-            ln -sf $(basename $shared_lib) ${D}/${libdir}/$(basename $shared_lib .1oe)
-        fi
-    done
-    for shared_lib in ${TD}/${libdir}/*.so; do
-        if [ -f $shared_lib -a ! -e ${D}/${libdir}/$shared_lib ]; then
-            cp $shared_lib ${D}/${libdir}
-        fi
-    done
-
-    install -d ${D}/${includedir}/nss3
-    install -m 644 -t ${D}/${includedir}/nss3 dist/public/nss/*
-
-    install -d ${D}/${bindir}
-    for binary in ${TD}/${bindir}/*; do
-        install -m 755 -t ${D}/${bindir} $binary
-    done
-}
-do_install[vardepsexclude] += "SITEINFO_BITS"
-
-do_install_append() {
-    # Create empty .chk files for the NSS libraries at build time. They could
-    # be regenerated at target's boot time.
-    for file in libsoftokn3.chk libfreebl3.chk libnssdbm3.chk; do
-        touch ${D}/${libdir}/$file
-        chmod 755 ${D}/${libdir}/$file
-    done
-    install -D -m 755 ${WORKDIR}/signlibs.sh ${D}/${bindir}/signlibs.sh
-
-    install -d ${D}${libdir}/pkgconfig/
-    sed 's/%NSS_VERSION%/${PV}/' ${WORKDIR}/nss.pc.in | sed 's/%NSPR_VERSION%/4.9.2/' > ${D}${libdir}/pkgconfig/nss.pc
-    sed -i s:OEPREFIX:${prefix}:g ${D}${libdir}/pkgconfig/nss.pc
-    sed -i s:OEEXECPREFIX:${exec_prefix}:g ${D}${libdir}/pkgconfig/nss.pc
-    sed -i s:OELIBDIR:${libdir}:g ${D}${libdir}/pkgconfig/nss.pc
-    sed -i s:OEINCDIR:${includedir}/nss3:g ${D}${libdir}/pkgconfig/nss.pc
-}
-
-do_install_append_class-target() {
-    # Create a blank certificate
-    mkdir -p ${D}${sysconfdir}/pki/nssdb/
-    touch ./empty_password
-    certutil -N -d ${D}${sysconfdir}/pki/nssdb/ -f ./empty_password
-    chmod 644 ${D}${sysconfdir}/pki/nssdb/*.db
-    rm ./empty_password
-}
-
-PACKAGE_WRITE_DEPS += "nss-native"
-pkg_postinst_${PN} () {
-    if [ -n "$D" ]; then
-        for I in $D${libdir}/lib*.chk; do
-            DN=`dirname $I`
-            BN=`basename $I .chk`
-            FN=$DN/$BN.so
-            shlibsign -i $FN
-            if [ $? -ne 0 ]; then
-                exit 1
-            fi
-        done
-    else
-        signlibs.sh
-    fi
-}
-
-PACKAGES =+ "${PN}-smime"
-FILES_${PN}-smime = "\
-    ${bindir}/smime \
-"
-FILES_${PN} = "\
-    ${sysconfdir} \
-    ${bindir} \
-    ${libdir}/lib*.chk \
-    ${libdir}/lib*.so \
-    "
-FILES_${PN}-dev = "\
-    ${libdir}/nss \
-    ${libdir}/pkgconfig/* \
-    ${includedir}/* \
-    "
-
-BBCLASSEXTEND = "native nativesdk"
-
diff --git a/import-layers/yocto-poky/meta/recipes-support/nss/nss_3.31.1.bb b/import-layers/yocto-poky/meta/recipes-support/nss/nss_3.31.1.bb
new file mode 100644
index 0000000..2bb9bdb
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/nss/nss_3.31.1.bb
@@ -0,0 +1,253 @@
+SUMMARY = "Mozilla's SSL and TLS implementation"
+DESCRIPTION = "Network Security Services (NSS) is a set of libraries \
+designed to support cross-platform development of \
+security-enabled client and server applications. \
+Applications built with NSS can support SSL v2 and v3, \
+TLS, PKCS 5, PKCS 7, PKCS 11, PKCS 12, S/MIME, X.509 \
+v3 certificates, and other security standards."
+HOMEPAGE = "http://www.mozilla.org/projects/security/pki/nss/"
+SECTION = "libs"
+
+LICENSE = "MPL-2.0 | (MPL-2.0 & GPL-2.0+) | (MPL-2.0 & LGPL-2.1+)"
+
+LIC_FILES_CHKSUM = "file://nss/COPYING;md5=3b1e88e1b9c0b5a4b2881d46cce06a18 \
+                    file://nss/lib/freebl/mpi/doc/LICENSE;md5=491f158d09d948466afce85d6f1fe18f \
+                    file://nss/lib/freebl/mpi/doc/LICENSE-MPL;md5=5d425c8f3157dbf212db2ec53d9e5132"
+
+VERSION_DIR = "${@d.getVar('BP').upper().replace('-', '_').replace('.', '_') + '_RTM'}"
+
+SRC_URI = "http://ftp.mozilla.org/pub/mozilla.org/security/nss/releases/${VERSION_DIR}/src/${BP}.tar.gz \
+           file://nss.pc.in \
+           file://signlibs.sh \
+           file://0001-nss-fix-support-cross-compiling.patch \
+           file://nss-no-rpath-for-cross-compiling.patch \
+           file://nss-fix-incorrect-shebang-of-perl.patch \
+           file://nss-fix-nsinstall-build.patch \
+           file://disable-Wvarargs-with-clang.patch \
+           file://pqg.c-ULL_addend.patch \
+           file://Fix-compilation-for-X32.patch \
+           "
+
+SRC_URI[md5sum] = "ebb44f1394250d2cf6ec3c2e3d71fa20"
+SRC_URI[sha256sum] = "933439214dc03ee60e86d1419c19e1568998b0776dde987f41fa70ced6cd08dc"
+
+UPSTREAM_CHECK_URI = "https://developer.mozilla.org/en-US/docs/Mozilla/Projects/NSS/NSS_Releases"
+UPSTREAM_CHECK_REGEX = "NSS_(?P<pver>.+)_release_notes"
+
+inherit siteinfo
+
+DEPENDS = "sqlite3 nspr zlib nss-native"
+DEPENDS_class-native = "sqlite3-native nspr-native zlib-native"
+RDEPENDS_${PN}-smime = "perl"
+
+TD = "${S}/tentative-dist"
+TDS = "${S}/tentative-dist-staging"
+
+TARGET_CC_ARCH += "${LDFLAGS}"
+
+do_configure_prepend_libc-musl () {
+    sed -i -e '/-DHAVE_SYS_CDEFS_H/d' ${S}/nss/lib/dbm/config/config.mk
+}
+
+do_compile_prepend_class-native() {
+    export NSPR_INCLUDE_DIR=${STAGING_INCDIR_NATIVE}
+    export NSPR_LIB_DIR=${STAGING_LIBDIR_NATIVE}
+    export NSS_ENABLE_WERROR=0
+}
+
+do_compile_prepend_class-nativesdk() {
+    export LDFLAGS=""
+}
+
+do_compile_prepend_class-native() {
+    # Need to set RPATH so that chrpath will do its job correctly
+    RPATH="-Wl,-rpath-link,${STAGING_LIBDIR_NATIVE} -Wl,-rpath-link,${STAGING_BASE_LIBDIR_NATIVE} -Wl,-rpath,${STAGING_LIBDIR_NATIVE} -Wl,-rpath,${STAGING_BASE_LIBDIR_NATIVE}"
+}
+
+do_compile() {
+    export CROSS_COMPILE=1
+    export NATIVE_CC="${BUILD_CC}"
+    export NATIVE_FLAGS="${BUILD_CFLAGS}"
+    export BUILD_OPT=1
+
+    export FREEBL_NO_DEPEND=1
+    export FREEBL_LOWHASH=1
+
+    export LIBDIR=${libdir}
+    export MOZILLA_CLIENT=1
+    export NS_USE_GCC=1
+    export NSS_USE_SYSTEM_SQLITE=1
+    export NSS_ENABLE_ECC=1
+
+    export OS_RELEASE=3.4
+    export OS_TARGET=Linux
+    export OS_ARCH=Linux
+
+    if [ "${TARGET_ARCH}" = "powerpc" ]; then
+        OS_TEST=ppc
+    elif [ "${TARGET_ARCH}" = "powerpc64" ]; then
+        OS_TEST=ppc64
+    elif [ "${TARGET_ARCH}" = "mips" -o "${TARGET_ARCH}" = "mipsel" -o "${TARGET_ARCH}" = "mips64" -o "${TARGET_ARCH}" = "mips64el" ]; then
+        OS_TEST=mips
+    else
+        OS_TEST="${TARGET_ARCH}"
+    fi
+
+    if [ "${SITEINFO_BITS}" = "64" ]; then
+        export USE_64=1
+    elif [ "${TARGET_ARCH}" = "x86_64" -a "${SITEINFO_BITS}" = "32" ]; then
+        export USE_X32=1
+    fi
+
+    export NSS_DISABLE_GTESTS=1
+
+    # We can modify CC in the environment, but if we set it via an
+    # argument to make, nsinstall, a host program, will also build with it!
+    #
+    # nss pretty much does its own thing with CFLAGS, so we put them into CC.
+    # Optimization will get clobbered, but most of the stuff will survive.
+    # The motivation for this is to point to the correct place for debug
+    # source files and CFLAGS does that.  Nothing uses CCC.
+    #
+    export CC="${CC} ${CFLAGS}"
+    make -C ./nss CCC="${CXX} -g" \
+        OS_TEST=${OS_TEST} \
+        RPATH="${RPATH}"
+}
+do_compile[vardepsexclude] += "SITEINFO_BITS"
+
+
+do_install_prepend_class-nativesdk() {
+    export LDFLAGS=""
+}
+
+do_install() {
+    export CROSS_COMPILE=1
+    export NATIVE_CC="${BUILD_CC}"
+    export BUILD_OPT=1
+
+    export FREEBL_NO_DEPEND=1
+
+    export LIBDIR=${libdir}
+    export MOZILLA_CLIENT=1
+    export NS_USE_GCC=1
+    export NSS_USE_SYSTEM_SQLITE=1
+    export NSS_ENABLE_ECC=1
+
+    export OS_RELEASE=3.4
+    export OS_TARGET=Linux
+    export OS_ARCH=Linux
+
+    if [ "${TARGET_ARCH}" = "powerpc" ]; then
+        OS_TEST=ppc
+    elif [ "${TARGET_ARCH}" = "powerpc64" ]; then
+        OS_TEST=ppc64
+    elif [ "${TARGET_ARCH}" = "mips" -o "${TARGET_ARCH}" = "mipsel" -o "${TARGET_ARCH}" = "mips64" -o "${TARGET_ARCH}" = "mips64el" ]; then
+        OS_TEST=mips
+    else
+        OS_TEST="${TARGET_ARCH}"
+    fi
+    if [ "${SITEINFO_BITS}" = "64" ]; then
+        export USE_64=1
+    elif [ "${TARGET_ARCH}" = "x86_64" -a "${SITEINFO_BITS}" = "32" ]; then
+        export USE_X32=1
+    fi
+
+    export NSS_DISABLE_GTESTS=1
+
+    make -C ./nss \
+        CCC="${CXX}" \
+        OS_TEST=${OS_TEST} \
+        SOURCE_LIB_DIR="${TD}/${libdir}" \
+        SOURCE_BIN_DIR="${TD}/${bindir}" \
+        install
+
+    install -d ${D}/${libdir}/
+    for file in ${S}/dist/*.OBJ/lib/*.so; do
+        echo "Installing `basename $file`..."
+        cp $file  ${D}/${libdir}/
+    done
+
+    for shared_lib in ${TD}/${libdir}/*.so.*; do
+        if [ -f $shared_lib ]; then
+            cp $shared_lib ${D}/${libdir}
+            ln -sf $(basename $shared_lib) ${D}/${libdir}/$(basename $shared_lib .1oe)
+        fi
+    done
+    for shared_lib in ${TD}/${libdir}/*.so; do
+        if [ -f $shared_lib -a ! -e ${D}/${libdir}/$shared_lib ]; then
+            cp $shared_lib ${D}/${libdir}
+        fi
+    done
+
+    install -d ${D}/${includedir}/nss3
+    install -m 644 -t ${D}/${includedir}/nss3 dist/public/nss/*
+
+    install -d ${D}/${bindir}
+    for binary in ${TD}/${bindir}/*; do
+        install -m 755 -t ${D}/${bindir} $binary
+    done
+}
+do_install[vardepsexclude] += "SITEINFO_BITS"
+
+do_install_append() {
+    # Create empty .chk files for the NSS libraries at build time. They could
+    # be regenerated at target's boot time.
+    for file in libsoftokn3.chk libfreebl3.chk libnssdbm3.chk; do
+        touch ${D}/${libdir}/$file
+        chmod 755 ${D}/${libdir}/$file
+    done
+    install -D -m 755 ${WORKDIR}/signlibs.sh ${D}/${bindir}/signlibs.sh
+
+    install -d ${D}${libdir}/pkgconfig/
+    sed 's/%NSS_VERSION%/${PV}/' ${WORKDIR}/nss.pc.in | sed 's/%NSPR_VERSION%/4.9.2/' > ${D}${libdir}/pkgconfig/nss.pc
+    sed -i s:OEPREFIX:${prefix}:g ${D}${libdir}/pkgconfig/nss.pc
+    sed -i s:OEEXECPREFIX:${exec_prefix}:g ${D}${libdir}/pkgconfig/nss.pc
+    sed -i s:OELIBDIR:${libdir}:g ${D}${libdir}/pkgconfig/nss.pc
+    sed -i s:OEINCDIR:${includedir}/nss3:g ${D}${libdir}/pkgconfig/nss.pc
+}
+
+do_install_append_class-target() {
+    # Create a blank certificate
+    mkdir -p ${D}${sysconfdir}/pki/nssdb/
+    touch ./empty_password
+    certutil -N -d ${D}${sysconfdir}/pki/nssdb/ -f ./empty_password
+    chmod 644 ${D}${sysconfdir}/pki/nssdb/*.db
+    rm ./empty_password
+}
+
+PACKAGE_WRITE_DEPS += "nss-native"
+pkg_postinst_${PN} () {
+    if [ -n "$D" ]; then
+        for I in $D${libdir}/lib*.chk; do
+            DN=`dirname $I`
+            BN=`basename $I .chk`
+            FN=$DN/$BN.so
+            shlibsign -i $FN
+            if [ $? -ne 0 ]; then
+                exit 1
+            fi
+        done
+    else
+        signlibs.sh
+    fi
+}
+
+PACKAGES =+ "${PN}-smime"
+FILES_${PN}-smime = "\
+    ${bindir}/smime \
+"
+FILES_${PN} = "\
+    ${sysconfdir} \
+    ${bindir} \
+    ${libdir}/lib*.chk \
+    ${libdir}/lib*.so \
+    "
+FILES_${PN}-dev = "\
+    ${libdir}/nss \
+    ${libdir}/pkgconfig/* \
+    ${includedir}/* \
+    "
+
+BBCLASSEXTEND = "native nativesdk"
+
diff --git a/import-layers/yocto-poky/meta/recipes-support/pinentry/pinentry-1.0.0/gpg-error_pkconf.patch b/import-layers/yocto-poky/meta/recipes-support/pinentry/pinentry-1.0.0/gpg-error_pkconf.patch
new file mode 100644
index 0000000..431edb0
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/pinentry/pinentry-1.0.0/gpg-error_pkconf.patch
@@ -0,0 +1,100 @@
+Convert to pkg-config support to match changes done to
+the gpg-error recipe for gpg-error.pc generation.
+
+Upstream-Status: Inappropriate [OE specific]
+
+Signed-off-by: Armin Kuster <akuster@mvista.com>
+
+Index: pinentry-1.0.0/m4/gpg-error.m4
+===================================================================
+--- pinentry-1.0.0.orig/m4/gpg-error.m4
++++ pinentry-1.0.0/m4/gpg-error.m4
+@@ -25,74 +25,12 @@ dnl config script does not match the hos
+ dnl is added to the gpg_config_script_warn variable.
+ dnl
+ AC_DEFUN([AM_PATH_GPG_ERROR],
+-[ AC_REQUIRE([AC_CANONICAL_HOST])
+-  gpg_error_config_prefix=""
+-  dnl --with-libgpg-error-prefix=PFX is the preferred name for this option,
+-  dnl since that is consistent with how our three siblings use the directory/
+-  dnl package name in --with-$dir_name-prefix=PFX.
+-  AC_ARG_WITH(libgpg-error-prefix,
+-              AC_HELP_STRING([--with-libgpg-error-prefix=PFX],
+-                             [prefix where GPG Error is installed (optional)]),
+-              [gpg_error_config_prefix="$withval"])
+-
+-  dnl Accept --with-gpg-error-prefix and make it work the same as
+-  dnl --with-libgpg-error-prefix above, for backwards compatibility,
+-  dnl but do not document this old, inconsistently-named option.
+-  AC_ARG_WITH(gpg-error-prefix,,
+-              [gpg_error_config_prefix="$withval"])
+-
+-  if test x"${GPG_ERROR_CONFIG}" = x ; then
+-     if test x"${gpg_error_config_prefix}" != x ; then
+-        GPG_ERROR_CONFIG="${gpg_error_config_prefix}/bin/gpg-error-config"
+-     else
+-       case "${SYSROOT}" in
+-         /*)
+-           if test -x "${SYSROOT}/bin/gpg-error-config" ; then
+-             GPG_ERROR_CONFIG="${SYSROOT}/bin/gpg-error-config"
+-           fi
+-           ;;
+-         '')
+-           ;;
+-          *)
+-           AC_MSG_WARN([Ignoring \$SYSROOT as it is not an absolute path.])
+-           ;;
+-       esac
+-     fi
+-  fi
+-
+-  AC_PATH_PROG(GPG_ERROR_CONFIG, gpg-error-config, no)
++[
+   min_gpg_error_version=ifelse([$1], ,0.0,$1)
+-  AC_MSG_CHECKING(for GPG Error - version >= $min_gpg_error_version)
+-  ok=no
+-  if test "$GPG_ERROR_CONFIG" != "no" \
+-     && test -f "$GPG_ERROR_CONFIG" ; then
+-    req_major=`echo $min_gpg_error_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)/\1/'`
+-    req_minor=`echo $min_gpg_error_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)/\2/'`
+-    gpg_error_config_version=`$GPG_ERROR_CONFIG $gpg_error_config_args --version`
+-    major=`echo $gpg_error_config_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\).*/\1/'`
+-    minor=`echo $gpg_error_config_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\).*/\2/'`
+-    if test "$major" -gt "$req_major"; then
+-        ok=yes
+-    else
+-        if test "$major" -eq "$req_major"; then
+-            if test "$minor" -ge "$req_minor"; then
+-               ok=yes
+-            fi
+-        fi
+-    fi
+-  fi
++  PKG_CHECK_MODULES(GPG_ERROR, [gpg-error >= $min_gpg_error_version gpg-error], [ok=yes], [ok=no])
+   if test $ok = yes; then
+-    GPG_ERROR_CFLAGS=`$GPG_ERROR_CONFIG $gpg_error_config_args --cflags`
+-    GPG_ERROR_LIBS=`$GPG_ERROR_CONFIG $gpg_error_config_args --libs`
+-    GPG_ERROR_MT_CFLAGS=`$GPG_ERROR_CONFIG $gpg_error_config_args --mt --cflags 2>/dev/null`
+-    GPG_ERROR_MT_LIBS=`$GPG_ERROR_CONFIG $gpg_error_config_args --mt --libs 2>/dev/null`
+-    AC_MSG_RESULT([yes ($gpg_error_config_version)])
+     ifelse([$2], , :, [$2])
+-    gpg_error_config_host=`$GPG_ERROR_CONFIG $gpg_error_config_args --host 2>/dev/null || echo none`
++    gpg_error_config_host=`$PKG_CONFIG --host gpg-error 2>/dev/null || echo none`
+     if test x"$gpg_error_config_host" != xnone ; then
+       if test x"$gpg_error_config_host" != x"$host" ; then
+   AC_MSG_WARN([[
+@@ -107,10 +45,6 @@ AC_DEFUN([AM_PATH_GPG_ERROR],
+       fi
+     fi
+   else
+-    GPG_ERROR_CFLAGS=""
+-    GPG_ERROR_LIBS=""
+-    GPG_ERROR_MT_CFLAGS=""
+-    GPG_ERROR_MT_LIBS=""
+     AC_MSG_RESULT(no)
+     ifelse([$3], , :, [$3])
+   fi
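
The hunk above replaces the hand-rolled gpg-error-config probe with a single
PKG_CHECK_MODULES call, so version checking and flag discovery are delegated to
pkg-config. A minimal shell sketch of the equivalent queries (the version
string below is only an illustrative value):

    # roughly what PKG_CHECK_MODULES(GPG_ERROR, [gpg-error >= $min_gpg_error_version ...]) performs
    pkg-config --exists 'gpg-error >= 1.27' && echo "gpg-error is new enough"
    pkg-config --modversion gpg-error          # detected version
    pkg-config --cflags --libs gpg-error       # what GPG_ERROR_CFLAGS / GPG_ERROR_LIBS become
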
diff --git a/import-layers/yocto-poky/meta/recipes-support/pinentry/pinentry-1.0.0/libassuan_pkgconf.patch b/import-layers/yocto-poky/meta/recipes-support/pinentry/pinentry-1.0.0/libassuan_pkgconf.patch
new file mode 100644
index 0000000..11d564f
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/pinentry/pinentry-1.0.0/libassuan_pkgconf.patch
@@ -0,0 +1,153 @@
+Convert to pkg-config support to match changes done to
+the libassuan recipe for libassuan.pc generation.
+
+Upstream-Status: Inappropriate [OE specific]
+
+Signed-off-by: Armin Kuster <akuster@mvista.com>
+
+Index: pinentry-1.0.0/m4/libassuan.m4
+===================================================================
+--- pinentry-1.0.0.orig/m4/libassuan.m4
++++ pinentry-1.0.0/m4/libassuan.m4
+@@ -13,34 +13,8 @@ dnl
+ dnl Common code used for libassuan detection [internal]
+ dnl Returns ok set to yes or no.
+ dnl
+-AC_DEFUN([_AM_PATH_LIBASSUAN_COMMON],
+-[ AC_REQUIRE([AC_CANONICAL_HOST])
+-  AC_ARG_WITH(libassuan-prefix,
+-              AC_HELP_STRING([--with-libassuan-prefix=PFX],
+-                             [prefix where LIBASSUAN is installed (optional)]),
+-     libassuan_config_prefix="$withval", libassuan_config_prefix="")
+-  if test x$libassuan_config_prefix != x ; then
+-    libassuan_config_args="$libassuan_config_args --prefix=$libassuan_config_prefix"
+-    if test x${LIBASSUAN_CONFIG+set} != xset ; then
+-      LIBASSUAN_CONFIG=$libassuan_config_prefix/bin/libassuan-config
+-    fi
+-  else
+-    case "${SYSROOT}" in
+-       /*)
+-         if test -x "${SYSROOT}/bin/libassuan-config" ; then
+-           LIBASSUAN_CONFIG="${SYSROOT}/bin/libassuan-config"
+-         fi
+-         ;;
+-       '')
+-         ;;
+-        *)
+-         AC_MSG_WARN([Ignoring \$SYSROOT as it is not an absolute path.])
+-         ;;
+-     esac
+-  fi
+-
+-  AC_PATH_TOOL(LIBASSUAN_CONFIG, libassuan-config, no)
+-
++AC_DEFUN([AM_PATH_LIBASSUAN_COMMON],
++[
+   tmp=ifelse([$1], ,1:0.9.2,$1)
+   if echo "$tmp" | grep ':' >/dev/null 2>/dev/null ; then
+     req_libassuan_api=`echo "$tmp"     | sed 's/\(.*\):\(.*\)/\1/'`
+@@ -50,51 +24,11 @@ AC_DEFUN([_AM_PATH_LIBASSUAN_COMMON],
+     min_libassuan_version="$tmp"
+   fi
+ 
+-  AC_MSG_CHECKING(for LIBASSUAN - version >= $min_libassuan_version)
+-  ok=no
+-  if test "$LIBASSUAN_CONFIG" != "no" \
+-     && test -f "$LIBASSUAN_CONFIG" ; then
+-    req_major=`echo $min_libassuan_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\1/'`
+-    req_minor=`echo $min_libassuan_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\2/'`
+-    req_micro=`echo $min_libassuan_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\)/\3/'`
+-
+-    libassuan_config_version=`$LIBASSUAN_CONFIG --version`
+-    major=`echo $libassuan_config_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\).*/\1/'`
+-    minor=`echo $libassuan_config_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\).*/\2/'`
+-    micro=`echo $libassuan_config_version | \
+-               sed 's/\([[0-9]]*\)\.\([[0-9]]*\)\.\([[0-9]]*\).*/\3/'`
+-
+-    if test "$major" -gt "$req_major"; then
+-        ok=yes
+-    else
+-        if test "$major" -eq "$req_major"; then
+-            if test "$minor" -gt "$req_minor"; then
+-               ok=yes
+-            else
+-               if test "$minor" -eq "$req_minor"; then
+-                   if test "$micro" -ge "$req_micro"; then
+-                     ok=yes
+-                   fi
+-               fi
+-            fi
+-        fi
+-    fi
+-  fi
+-
+-  if test $ok = yes; then
+-    AC_MSG_RESULT([yes ($libassuan_config_version)])
+-  else
+-    AC_MSG_RESULT(no)
+-  fi
++  PKG_CHECK_MODULES(LIBASSUAN_COMMON, [libassuan >= $min_libassuan_version libassuan], [ok=yes], [ok=no])
+ 
+   if test $ok = yes; then
+     if test "$req_libassuan_api" -gt 0 ; then
+-      tmp=`$LIBASSUAN_CONFIG --api-version 2>/dev/null || echo 0`
++      tmp=`$PKG_CONFIG --variable=api_version libassuan 2>/dev/null || echo 0`
+       if test "$tmp" -gt 0 ; then
+         AC_MSG_CHECKING([LIBASSUAN API version])
+         if test "$req_libassuan_api" -eq "$tmp" ; then
+@@ -109,7 +43,7 @@ AC_DEFUN([_AM_PATH_LIBASSUAN_COMMON],
+ 
+   if test $ok = yes; then
+     if test x"$host" != x ; then
+-      libassuan_config_host=`$LIBASSUAN_CONFIG --host 2>/dev/null || echo none`
++      libassuan_config_host=`$PKG_CONFIG --host libassuan 2>/dev/null || echo 0`
+       if test x"$libassuan_config_host" != xnone ; then
+         if test x"$libassuan_config_host" != x"$host" ; then
+   AC_MSG_WARN([[
+@@ -132,7 +66,7 @@ dnl Test whether libassuan has at least
+ dnl used to test for features only available in newer versions.
+ dnl
+ AC_DEFUN([AM_CHECK_LIBASSUAN],
+-[ _AM_PATH_LIBASSUAN_COMMON($1)
++[ AM_PATH_LIBASSUAN_COMMON($1)
+   if test $ok = yes; then
+     ifelse([$2], , :, [$2])
+   else
+@@ -148,16 +82,10 @@ dnl                   [ACTION-IF-FOUND [
+ dnl Test for libassuan and define LIBASSUAN_CFLAGS and LIBASSUAN_LIBS
+ dnl
+ AC_DEFUN([AM_PATH_LIBASSUAN],
+-[ _AM_PATH_LIBASSUAN_COMMON($1)
++[ AM_PATH_LIBASSUAN_COMMON($1)
+   if test $ok = yes; then
+-    LIBASSUAN_CFLAGS=`$LIBASSUAN_CONFIG $libassuan_config_args --cflags`
+-    LIBASSUAN_LIBS=`$LIBASSUAN_CONFIG $libassuan_config_args --libs`
+     ifelse([$2], , :, [$2])
+   else
+-    LIBASSUAN_CFLAGS=""
+-    LIBASSUAN_LIBS=""
+     ifelse([$3], , :, [$3])
+   fi
+-  AC_SUBST(LIBASSUAN_CFLAGS)
+-  AC_SUBST(LIBASSUAN_LIBS)
+ ])
+Index: pinentry-1.0.0/configure.ac
+===================================================================
+--- pinentry-1.0.0.orig/configure.ac
++++ pinentry-1.0.0/configure.ac
+@@ -266,8 +266,8 @@ if test "$have_libassuan" = "yes"; then
+                      [version of the libassuan library])
+ fi
+ 
+-COMMON_CFLAGS="$LIBASSUAN_CFLAGS $COMMON_CFLAGS"
+-COMMON_LIBS="$LIBASSUAN_LIBS $COMMON_LIBS"
++COMMON_CFLAGS="$LIBASSUAN_COMMON_CFLAGS $COMMON_CFLAGS"
++COMMON_LIBS="$LIBASSUAN_COMMON_LIBS $COMMON_LIBS"
+ 
+ 
+ dnl Checks for libsecmem.
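
PKG_CHECK_MODULES(LIBASSUAN_COMMON, ...) names its output variables after its
first argument, which is why configure.ac above switches from LIBASSUAN_CFLAGS
and LIBASSUAN_LIBS to the LIBASSUAN_COMMON_* spelling. A short sketch of the
underlying pkg-config queries, assuming libassuan.pc exports an api_version
variable as the patched libassuan recipe arranges:

    pkg-config --cflags libassuan                  # -> LIBASSUAN_COMMON_CFLAGS
    pkg-config --libs libassuan                    # -> LIBASSUAN_COMMON_LIBS
    pkg-config --variable=api_version libassuan    # replaces 'libassuan-config --api-version'
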
diff --git a/import-layers/yocto-poky/meta/recipes-support/pinentry/pinentry_0.9.2.bb b/import-layers/yocto-poky/meta/recipes-support/pinentry/pinentry_0.9.2.bb
deleted file mode 100644
index d315a99..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/pinentry/pinentry_0.9.2.bb
+++ /dev/null
@@ -1,34 +0,0 @@
-SUMMARY = "Collection of simple PIN or passphrase entry dialogs"
-DESCRIPTION = "\
-	Pinentry is a collection of simple PIN or passphrase entry dialogs which \
-	utilize the Assuan protocol as described by the aegypten project; see \
-	http://www.gnupg.org/aegypten/ for details."
-
-HOMEPAGE = "http://www.gnupg.org/related_software/pinentry/index.en.html"
-LICENSE = "GPLv2"
-LIC_FILES_CHKSUM = "file://COPYING;md5=cbbd794e2a0a289b9dfcc9f513d1996e"
-
-inherit autotools
-
-DEPENDS = "gettext-native"
-
-UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
-SRC_URI = "${GNUPG_MIRROR}/${BPN}/${BPN}-${PV}.tar.bz2"
-
-SRC_URI[md5sum] = "f51d454f921111b5156a2291cbf70278"
-SRC_URI[sha256sum] = "fd8bc1592ceb22bb492b07cb29b1b140bb882c859e6503b974254c0a4b4134d1"
-
-EXTRA_OECONF = "--disable-rpath \
-		        --disable-dependency-tracking \
-               "
-
-PACKAGECONFIG ??= "ncurses libcap"
-
-PACKAGECONFIG[ncurses] = "--enable-ncurses  --with-ncurses-include-dir=${STAGING_INCDIR}, --disable-ncurses, ncurses"
-PACKAGECONFIG[libcap] = "--with-libcap, --without-libcap, libcap"
-PACKAGECONFIG[qt4] = "--enable-pinentry-qt4, --disable-pinentry-qt4, qt4-x11"
-PACKAGECONFIG[qt4_clipboard] = "--enable-pinentry-qt4-clipboard --enable-pinentry-qt4, --disable-pinentry-qt4-clipboard, qt4-x11"
-PACKAGECONFIG[gtk2] = "--enable-pinentry-gtk2, --disable-pinentry-gtk2, gtk+ glib-2.0"
-
-#To use libsecret, add meta-gnome
-PACKAGECONFIG[secret] = "--enable-libsecret, --disable-libsecret, libsecret"
diff --git a/import-layers/yocto-poky/meta/recipes-support/pinentry/pinentry_1.0.0.bb b/import-layers/yocto-poky/meta/recipes-support/pinentry/pinentry_1.0.0.bb
new file mode 100644
index 0000000..319acd3
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/pinentry/pinentry_1.0.0.bb
@@ -0,0 +1,36 @@
+SUMMARY = "Collection of simple PIN or passphrase entry dialogs"
+DESCRIPTION = "\
+	Pinentry is a collection of simple PIN or passphrase entry dialogs which \
+	utilize the Assuan protocol as described by the aegypten project; see \
+	http://www.gnupg.org/aegypten/ for details."
+
+HOMEPAGE = "http://www.gnupg.org/related_software/pinentry/index.en.html"
+LICENSE = "GPLv2"
+LIC_FILES_CHKSUM = "file://COPYING;md5=cbbd794e2a0a289b9dfcc9f513d1996e"
+
+inherit autotools pkgconfig
+
+DEPENDS = "gettext-native libassuan libgpg-error"
+
+UPSTREAM_CHECK_URI = "https://gnupg.org/download/index.html"
+SRC_URI = "${GNUPG_MIRROR}/${BPN}/${BPN}-${PV}.tar.bz2 \
+           file://libassuan_pkgconf.patch \
+           file://gpg-error_pkconf.patch \
+"
+
+SRC_URI[md5sum] = "4a3fad8b31f9b4c5526c8837495015dc"
+SRC_URI[sha256sum] = "1672c2edc1feb036075b187c0773787b2afd0544f55025c645a71b4c2f79275a"
+
+EXTRA_OECONF = "--disable-rpath --disable-dependency-tracking \
+                --disable-pinentry-qt5  \
+"
+
+PACKAGECONFIG ??= "ncurses libcap"
+
+PACKAGECONFIG[ncurses] = "--enable-ncurses  --with-ncurses-include-dir=${STAGING_INCDIR}, --disable-ncurses, ncurses"
+PACKAGECONFIG[libcap] = "--with-libcap, --without-libcap, libcap"
+PACKAGECONFIG[qt] = "--enable-pinentry-qt, --disable-pinentry-qt, qt4-x11"
+PACKAGECONFIG[gtk2] = "--enable-pinentry-gtk2, --disable-pinentry-gtk2, gtk+ glib-2.0"
+
+#To use libsecret, add meta-gnome
+PACKAGECONFIG[secret] = "--enable-libsecret, --disable-libsecret, libsecret"
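
Each PACKAGECONFIG[<flag>] line above maps a feature name to its configure
arguments and extra build dependencies, with "ncurses libcap" enabled by
default. A sketch of inspecting or extending that set from an initialised
build directory (recipe and feature names as used above; rocko-era override
syntax):

    # show the resolved value for pinentry
    bitbake -e pinentry | grep '^PACKAGECONFIG='
    # enable the gtk2 frontend from local.conf:
    #   PACKAGECONFIG_append_pn-pinentry = " gtk2"
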
diff --git a/import-layers/yocto-poky/meta/recipes-support/ptest-runner/ptest-runner_2.0.2.bb b/import-layers/yocto-poky/meta/recipes-support/ptest-runner/ptest-runner_2.0.2.bb
deleted file mode 100644
index a3e9a7e..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/ptest-runner/ptest-runner_2.0.2.bb
+++ /dev/null
@@ -1,26 +0,0 @@
-SUMMARY = "A C program to run all installed ptests"
-DESCRIPTION = "The ptest-runner2 package installs a ptest-runner \
-program which loops through all installed ptest test suites and \
-runs them in sequence."
-HOMEPAGE = "http://git.yoctoproject.org/cgit/cgit.cgi/ptest-runner2/about/"
-
-LICENSE = "GPLv2"
-LIC_FILES_CHKSUM = "file://LICENSE;md5=751419260aa954499f7abaabaa882bbe"
-
-SRCREV = "6d2872116cd9d6e1617a768f9d72bbf091489b2d"
-PV = "2.0.2+git${SRCPV}"
-
-SRC_URI = "git://git.yoctoproject.org/ptest-runner2"
-S = "${WORKDIR}/git"
-
-FILES_${PN} = "${bindir}/ptest-runner"
-
-EXTRA_OEMAKE = "-e MAKEFLAGS="
-
-do_compile () {
-	oe_runmake
-}
-
-do_install () {
-	install -D -m 0755 ${S}/ptest-runner ${D}${bindir}/ptest-runner
-}
diff --git a/import-layers/yocto-poky/meta/recipes-support/ptest-runner/ptest-runner_2.1.bb b/import-layers/yocto-poky/meta/recipes-support/ptest-runner/ptest-runner_2.1.bb
new file mode 100644
index 0000000..71c1dab
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/ptest-runner/ptest-runner_2.1.bb
@@ -0,0 +1,26 @@
+SUMMARY = "A C program to run all installed ptests"
+DESCRIPTION = "The ptest-runner2 package installs a ptest-runner \
+program which loops through all installed ptest test suites and \
+runs them in sequence."
+HOMEPAGE = "http://git.yoctoproject.org/cgit/cgit.cgi/ptest-runner2/about/"
+
+LICENSE = "GPLv2"
+LIC_FILES_CHKSUM = "file://LICENSE;md5=751419260aa954499f7abaabaa882bbe"
+
+SRCREV = "8a93832dad621535e90aa8e1fb74ae5ba743fc3e"
+PV = "2.1+git${SRCPV}"
+
+SRC_URI = "git://git.yoctoproject.org/ptest-runner2"
+S = "${WORKDIR}/git"
+
+FILES_${PN} = "${bindir}/ptest-runner"
+
+EXTRA_OEMAKE = "-e MAKEFLAGS="
+
+do_compile () {
+	oe_runmake
+}
+
+do_install () {
+	install -D -m 0755 ${S}/ptest-runner ${D}${bindir}/ptest-runner
+}
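
The recipe builds a single ptest-runner binary that iterates over every
installed ptest suite on the target. A usage sketch (the package name is only
an example, and it requires an image that actually ships ptest packages):

    ptest-runner            # run all installed ptest suites in sequence
    ptest-runner busybox    # run only the named suite(s)
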
diff --git a/import-layers/yocto-poky/meta/recipes-support/re2c/re2c/mkdir.patch b/import-layers/yocto-poky/meta/recipes-support/re2c/re2c/mkdir.patch
new file mode 100644
index 0000000..d59f01b
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/re2c/re2c/mkdir.patch
@@ -0,0 +1,36 @@
+Upstream-Status: Submitted (https://github.com/skvadrik/re2c/pull/191)
+Signed-off-by: Ross Burton <ross.burton@intel.com>
+
+From bccc10c60523f88c8f81413151cdcd612eb16198 Mon Sep 17 00:00:00 2001
+From: Ross Burton <ross.burton@intel.com>
+Date: Mon, 31 Jul 2017 15:43:41 +0100
+Subject: [PATCH] Makefile.am: create target directory before writing into it
+
+In some situations src/parse/ may not exist before a file is copied into the
+directory. Ensure that this doesn't happen by creating the directory first.
+---
+ re2c/Makefile.am | 1 +
+ 1 file changed, 1 insertion(+)
+
+diff --git a/re2c/Makefile.am b/re2c/Makefile.am
+index 3b3b2c5e..0707fc5a 100644
+--- a/Makefile.am
++++ b/Makefile.am
+@@ -187,6 +187,7 @@ CLEANFILES = \
+ 	$(DOC)
+ 
+ $(AUTOGEN_PARSER): $(CUSTOM_PARSER)
++	$(AM_V_at)$(MKDIR_P) $(dir $@)
+ 	$(AM_V_GEN) if test $(BISON) = "no"; \
+ 	then \
+ 		cp $(top_srcdir)/$(BOOTSTRAP_PARSER) $@ && \
+@@ -211,6 +212,7 @@ $(BOOTSTRAP_PARSER): $(CUSTOM_PARSER)
+ 		$(top_srcdir)/$(CUSTOM_PARSER);
+ 
+ .re.cc:
++	$(AM_V_at)$(MKDIR_P) $(dir $@)
+ 	$(AM_V_GEN) if test -x $(RE2C); \
+ 	then \
+ 		$(top_builddir)/$(RE2C) $(RE2CFLAGS) -o $@ $< && \
+-- 
+2.11.0
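
The fix simply prefixes the generation rules with $(MKDIR_P) so the output
directory exists before bison or re2c writes into it; in plain shell terms the
added step amounts to the following (paths are placeholders):

    mkdir -p "$(dirname "$target")" && cp "$bootstrap_parser" "$target"
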
diff --git a/import-layers/yocto-poky/meta/recipes-support/re2c/re2c_0.16.bb b/import-layers/yocto-poky/meta/recipes-support/re2c/re2c_0.16.bb
new file mode 100644
index 0000000..50dd7b7
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/re2c/re2c_0.16.bb
@@ -0,0 +1,15 @@
+SUMMARY = "Tool for writing very fast and very flexible scanners"
+HOMEPAGE = "http://re2c.sourceforge.net/"
+AUTHOR = "Marcus Börger <helly@users.sourceforge.net>"
+SECTION = "devel"
+LICENSE = "PD"
+LIC_FILES_CHKSUM = "file://README;beginline=146;md5=881056c9add17f8019ccd8c382ba963a"
+
+SRC_URI = "${SOURCEFORGE_MIRROR}/${BPN}/${BPN}-${PV}.tar.gz \
+           file://mkdir.patch"
+SRC_URI[md5sum] = "3bf508fabd52ed7334647d0ccb956e8d"
+SRC_URI[sha256sum] = "48c12564297641cceb5ff05aead57f28118db6277f31e2262437feba89069e84"
+
+BBCLASSEXTEND = "native"
+
+inherit autotools
diff --git a/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/default b/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/default
index 7aede9b..ab7cd93 100644
--- a/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/default
+++ b/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/default
@@ -1,3 +1,2 @@
 # Specify rng device
-#RNG_DEVICE=/dev/hwrng
-RNG_DEVICE=/dev/urandom
+RNG_DEVICE=/dev/hwrng
diff --git a/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/rng-tools-5-fix-textrels-on-PIC-x86.patch b/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/rng-tools-5-fix-textrels-on-PIC-x86.patch
new file mode 100644
index 0000000..93a5864
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/rng-tools-5-fix-textrels-on-PIC-x86.patch
@@ -0,0 +1,103 @@
+From: Francisco Blas Izquierdo Riera (klondike) <klondike@gentoo.org>
+Subject: [PATCH] Fix assembly textrels on rdrand_asm.S on PIC x86
+
+This patch updates the fixes in the assembly in rdrand_asm.S in
+sys-apps/rng-tools-5 so it won't generate textrels on PIC systems.
+The main fixes are in the use of leal in SETPTR for such systems; the rest is
+the usual PIC support changes.
+
+This should fix Gentoo bug #469962 and help fix #518210
+
+This patch is released under the GPLv2 or a higher version license as is the
+original file as long as the author and the tester are credited.
+
+Gentoo-bug-url: https://bugs.gentoo.org/show_bug.cgi?id=469962
+Gentoo-bug-url: https://bugs.gentoo.org/show_bug.cgi?id=518210
+Signed-off-by: Francisco Blas Izquierdo Riera (klondike) <klondike@gentoo.org>
+Reported-by: cilly <cilly@cilly.mine.nu>
+Reported-by: Manuel Rüger <mrueg@gentoo.org>
+Tested-by: Anthony Basile <blueness@gentoo.org>
+
+Upstream-Status: Pending
+
+Index: rng-tools-5/rdrand_asm.S
+===================================================================
+--- rng-tools-5.orig/rdrand_asm.S
++++ rng-tools-5/rdrand_asm.S
+@@ -2,6 +2,7 @@
+  * Copyright (c) 2011-2014, Intel Corporation
+  * Authors: Fenghua Yu <fenghua.yu@intel.com>,
+  *          H. Peter Anvin <hpa@linux.intel.com>
++ * PIC code by: Francisco Blas Izquierdo Riera (klondike) <klondike@gentoo.org>
+  *
+  * This program is free software; you can redistribute it and/or modify it
+  * under the terms and conditions of the GNU General Public License,
+@@ -174,7 +175,19 @@ ENTRY(x86_rdseed_or_rdrand_bytes)
+ 	jmp	4b
+ ENDPROC(x86_rdseed_or_rdrand_bytes)
+ 
++#if defined(__PIC__)
++#define INIT_PIC() \
++	pushl	%ebx ; \
++	call    __x86.get_pc_thunk.bx ; \
++	addl    $_GLOBAL_OFFSET_TABLE_, %ebx
++#define END_PIC() \
++	popl	%ebx
++#define SETPTR(var,ptr) leal (var)@GOTOFF(%ebx),ptr
++#else
++#define INIT_PIC()
++#define END_PIC()
+ #define SETPTR(var,ptr)	movl $(var),ptr
++#endif
+ #define PTR0	%eax
+ #define PTR1	%edx
+ #define PTR2	%ecx
+@@ -190,6 +203,7 @@ ENTRY(x86_aes_mangle)
+ 	movl	8(%ebp), %eax
+ 	movl	12(%ebp), %edx
+ 	push	%esi
++	INIT_PIC()
+ #endif
+ 	movl	$512, CTR3	/* Number of rounds */
+ 	
+@@ -280,6 +294,7 @@ offset = offset + 16
+ 	movdqa	%xmm7, (7*16)(PTR1)
+ 
+ #ifdef __i386__
++	END_PIC()
+ 	pop	%esi
+ 	pop	%ebp
+ #endif
+@@ -294,6 +309,7 @@ ENTRY(x86_aes_expand_key)
+ 	push	%ebp
+ 	mov	%esp, %ebp
+ 	movl	8(%ebp), %eax
++	INIT_PIC()
+ #endif
+ 
+ 	SETPTR(aes_round_keys, PTR1)
+@@ -323,6 +339,7 @@ ENTRY(x86_aes_expand_key)
+ 	call	1f
+ 
+ #ifdef __i386__
++	END_PIC()
+ 	pop	%ebp
+ #endif
+ 	ret
+@@ -343,6 +360,16 @@ ENTRY(x86_aes_expand_key)
+ 
+ ENDPROC(x86_aes_expand_key)
+ 
++#if defined(__i386__) && defined(__PIC__)
++	.section	.text.__x86.get_pc_thunk.bx,"axG",@progbits,__x86.get_pc_thunk.bx,comdat
++	.globl	__x86.get_pc_thunk.bx
++	.hidden	__x86.get_pc_thunk.bx
++	.type	__x86.get_pc_thunk.bx, @function
++__x86.get_pc_thunk.bx:
++	movl	(%esp), %ebx
++	ret
++#endif
++
+ 	.bss
+ 	.balign 64
+ aes_round_keys:
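
The GOT-relative SETPTR and the get_pc_thunk helper exist so the hand-written
assembly no longer needs relocations inside the text segment when built as PIC.
Whether any text relocations remain can be checked against the built binary,
for example:

    # example path; run against the object actually produced by the build
    readelf -d ./rngd | grep TEXTREL || echo "no text relocations"
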
diff --git a/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/rngd.service b/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/rngd.service
new file mode 100644
index 0000000..b94ad50
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/rngd.service
@@ -0,0 +1,9 @@
+[Unit]
+Description=Hardware RNG Entropy Gatherer Daemon
+
+[Service]
+ExecStart=@SBINDIR@/rngd -f -r /dev/urandom
+SuccessExitStatus=66
+
+[Install]
+WantedBy=multi-user.target
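
The new unit runs rngd in the foreground with /dev/urandom as its input device
and treats exit status 66 as success; the SYSTEMD_SERVICE_${PN} entry in the
rng-tools_5.bb hunk below registers it with systemd.bbclass so it is enabled by
default at image time. On a running systemd image the usual commands apply:

    systemctl status rngd.service
    systemctl enable rngd.service
    cat /proc/sys/kernel/random/entropy_avail   # rough check that entropy is being gathered
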
diff --git a/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/uclibc-libuargp-configure.patch b/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/uclibc-libuargp-configure.patch
deleted file mode 100644
index e691315..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/uclibc-libuargp-configure.patch
+++ /dev/null
@@ -1,63 +0,0 @@
-In case of uclibc, use libuargp
-
-If we use uclibc for system libraries, select libuargp
-
-Upstream-Status: Pending
-
-Signed-off-by: Maxin B. John <maxin.john@intel.com>
----
-diff -Naur rng-tools-5-orig/configure.ac rng-tools-5/configure.ac
---- rng-tools-5-orig/configure.ac	2016-02-24 18:11:24.023690235 +0200
-+++ rng-tools-5/configure.ac	2016-02-24 18:14:49.763118138 +0200
-@@ -39,6 +39,13 @@
- 	[with_libargp=check]
- )
- 
-+AC_ARG_ENABLE([uclibc],
-+    AS_HELP_STRING([--enable-uclibc], [Use uclibc for system libraries]),
-+        use_uclibc=yes, use_uclibc=no)
-+AM_CONDITIONAL(USE_UCLIBC, test "x$use_uclibc" = "xyes")
-+AS_IF([test "x$use_uclibc" = "xyes"], [AC_DEFINE(USE_UCLIBC)])
-+AH_TEMPLATE([USE_UCLIBC], [Defined if uclibc libraries are used.])
-+
- dnl Make sure anyone changing configure.ac/Makefile.am has a clue
- AM_MAINTAINER_MODE
- 
-@@ -101,7 +108,7 @@
- 			[need_libargp=no],
- 			[need_libargp=yes
- 			 if test "x$with_libargp" = "xno"; then
--				AC_MSG_FAILURE([libargp disabled and libc does not have argp])
-+				AC_MSG_WARN([libargp disabled and libc does not have argp])
- 			 fi]
- 		)
- 	],
-@@ -110,7 +117,7 @@
- 
- dnl Check for libargp
- AS_IF(
--	[test "x$need_libargp" = "xyes"],
-+	[test "x$need_libargp" = "xyes" -a "x$use_uclibc" = "xno"],
- 	[
- 		AC_CHECK_LIB(
- 			[argp],
-@@ -120,6 +127,19 @@
- 		)
- 	]
- )
-+
-+dnl Check for libuargp
-+AS_IF(
-+	[test "x$use_uclibc" = "xyes"],
-+	[
-+		AC_CHECK_LIB(
-+			[uargp],
-+			[argp_parse],
-+			[LIBS="$LIBS -luargp"],
-+			[AC_MSG_FAILURE([libuargp not found])]
-+		)
-+	]
-+)
- 
- dnl -----------------
- dnl Configure options
diff --git a/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/underquote.patch b/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/underquote.patch
index 1422571..afd08d5 100644
--- a/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/underquote.patch
+++ b/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools/underquote.patch
@@ -7,6 +7,8 @@
 RP
 2016/2/16
 
+Upstream-Status: Pending
+
 Index: rng-tools-5/configure.ac
 ===================================================================
 --- rng-tools-5.orig/configure.ac
diff --git a/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools_5.bb b/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools_5.bb
index 9329e8a..4a66bed 100644
--- a/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools_5.bb
+++ b/import-layers/yocto-poky/meta/recipes-support/rng-tools/rng-tools_5.bb
@@ -6,9 +6,11 @@
            file://0001-If-the-libc-is-lacking-argp-use-libargp.patch \
            file://0002-Add-argument-to-control-the-libargp-dependency.patch \
            file://underquote.patch \
-           file://uclibc-libuargp-configure.patch \
+           file://rng-tools-5-fix-textrels-on-PIC-x86.patch \
            file://init \
-           file://default"
+           file://default \
+           file://rngd.service \
+"
 
 SRC_URI[md5sum] = "6726cdc6fae1f5122463f24ae980dd68"
 SRC_URI[sha256sum] = "60a102b6603bbcce2da341470cad42eeaa9564a16b4490e7867026ca11a3078e"
@@ -20,13 +22,11 @@
         d.setVar("INHIBIT_UPDATERCD_BBCLASS", "1")
 }
 
-inherit autotools update-rc.d
+inherit autotools update-rc.d systemd
 
 PACKAGECONFIG = "libgcrypt"
 PACKAGECONFIG_libc-musl = "libargp"
-PACKAGECONFIG_libc-uclibc = "libuargp"
 PACKAGECONFIG[libargp] = "--with-libargp,--without-libargp,argp-standalone,"
-PACKAGECONFIG[libuargp] = "--enable-uclibc,,,"
 PACKAGECONFIG[libgcrypt] = "--with-libgcrypt,--without-libgcrypt,libgcrypt,"
 
 do_install_append() {
@@ -40,7 +40,15 @@
         install -d "${D}${sysconfdir}/default"
         install -m 0644 ${WORKDIR}/default ${D}${sysconfdir}/default/rng-tools
     fi
+
+    if ${@bb.utils.contains('DISTRO_FEATURES', 'systemd', 'true', 'false', d)}; then
+        install -d ${D}${systemd_unitdir}/system
+        install -m 644 ${WORKDIR}/rngd.service ${D}${systemd_unitdir}/system
+        sed -i -e 's,@SBINDIR@,${sbindir},g' ${D}${systemd_unitdir}/system/rngd.service
+    fi
 }
 
 INITSCRIPT_NAME = "rng-tools"
 INITSCRIPT_PARAMS = "start 30 2 3 4 5 . stop 30 0 6 1 ."
+
+SYSTEMD_SERVICE_${PN} = "rngd.service"
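
do_install_append() only installs the unit when the systemd DISTRO_FEATURE is
enabled, and rewrites the @SBINDIR@ placeholder with the target sbindir. What
that sed line produces, assuming the default sbindir of /usr/sbin:

    sed -e 's,@SBINDIR@,/usr/sbin,g' rngd.service | grep '^ExecStart='
    # ExecStart=/usr/sbin/rngd -f -r /dev/urandom
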
diff --git a/import-layers/yocto-poky/meta/recipes-support/shared-mime-info/shared-mime-info.inc b/import-layers/yocto-poky/meta/recipes-support/shared-mime-info/shared-mime-info.inc
index 6eedb6d..1f51225 100644
--- a/import-layers/yocto-poky/meta/recipes-support/shared-mime-info/shared-mime-info.inc
+++ b/import-layers/yocto-poky/meta/recipes-support/shared-mime-info/shared-mime-info.inc
@@ -6,7 +6,6 @@
 LIC_FILES_CHKSUM = "file://COPYING;md5=b234ee4d69f5fce4486a80fdaf4a4263"
 
 DEPENDS = "libxml2 intltool-native glib-2.0 shared-mime-info-native"
-DEPENDS_class-native = "libxml2-native intltool-native glib-2.0-native"
 
 SRC_URI = "http://freedesktop.org/~hadess/shared-mime-info-${PV}.tar.xz"
 
@@ -33,4 +32,4 @@
 	autotools_do_install
 }
 
-BBCLASSEXTEND = "native"
+BBCLASSEXTEND = "native nativesdk"
diff --git a/import-layers/yocto-poky/meta/recipes-support/sqlite/files/sqlite3-fix-CVE-2017-13685.patch b/import-layers/yocto-poky/meta/recipes-support/sqlite/files/sqlite3-fix-CVE-2017-13685.patch
new file mode 100644
index 0000000..aac428c
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/sqlite/files/sqlite3-fix-CVE-2017-13685.patch
@@ -0,0 +1,57 @@
+Fix CVE-2017-13685
+
+The dump_callback function in SQLite 3.20.0 allows remote attackers to
+cause a denial of service (EXC_BAD_ACCESS and application crash) via a
+crafted file.
+
+References:
+https://sqlite.org/src/info/02f0f4c54f2819b3
+http://www.mail-archive.com/sqlite-users%40mailinglists.sqlite.org/msg105314.html
+
+Upstream-Status: Backport [https://sqlite.org/src/info/cf0d3715caac9149]
+
+CVE: CVE-2017-13685
+
+Signed-off-by: Wenzong Fan <wenzong.fan@windriver.com>
+
+Index: src/shell.c
+==================================================================
+--- src/shell.c
++++ src/shell.c
+@@ -2657,10 +2657,11 @@
+   int *aiType      /* Column types */
+ ){
+   int i;
+   ShellState *p = (ShellState*)pArg;
+ 
++  if( azArg==0 ) return 0;
+   switch( p->cMode ){
+     case MODE_Line: {
+       int w = 5;
+       if( azArg==0 ) break;
+       for(i=0; i<nArg; i++){
+@@ -3007,10 +3008,11 @@
+ */
+ static int captureOutputCallback(void *pArg, int nArg, char **azArg, char **az){
+   ShellText *p = (ShellText*)pArg;
+   int i;
+   UNUSED_PARAMETER(az);
++  if( azArg==0 ) return 0;
+   if( p->n ) appendText(p, "|", 0);
+   for(i=0; i<nArg; i++){
+     if( i ) appendText(p, ",", 0);
+     if( azArg[i] ) appendText(p, azArg[i], 0);
+   }
+@@ -3888,11 +3890,11 @@
+   const char *zType;
+   const char *zSql;
+   ShellState *p = (ShellState *)pArg;
+ 
+   UNUSED_PARAMETER(azNotUsed);
+-  if( nArg!=3 ) return 1;
++  if( nArg!=3 || azArg==0 ) return 0;
+   zTable = azArg[0];
+   zType = azArg[1];
+   zSql = azArg[2];
+ 
+   if( strcmp(zTable, "sqlite_sequence")==0 ){
diff --git a/import-layers/yocto-poky/meta/recipes-support/sqlite/sqlite3_3.17.0.bb b/import-layers/yocto-poky/meta/recipes-support/sqlite/sqlite3_3.17.0.bb
deleted file mode 100644
index 593cb32..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/sqlite/sqlite3_3.17.0.bb
+++ /dev/null
@@ -1,10 +0,0 @@
-require sqlite3.inc
-
-LICENSE = "PD"
-LIC_FILES_CHKSUM = "file://sqlite3.h;endline=11;md5=65f0a57ca6928710b418c094b3570bb0"
-
-SRC_URI = "\
-  http://www.sqlite.org/2017/sqlite-autoconf-${SQLITE_PV}.tar.gz \
-  "
-SRC_URI[md5sum] = "450a95a7bde697c9fe4de9ae2fffdcca"
-SRC_URI[sha256sum] = "a4e485ad3a16e054765baf6371826b5000beed07e626510896069c0bf013874c"
diff --git a/import-layers/yocto-poky/meta/recipes-support/sqlite/sqlite3_3.20.0.bb b/import-layers/yocto-poky/meta/recipes-support/sqlite/sqlite3_3.20.0.bb
new file mode 100644
index 0000000..e508258
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/sqlite/sqlite3_3.20.0.bb
@@ -0,0 +1,11 @@
+require sqlite3.inc
+
+LICENSE = "PD"
+LIC_FILES_CHKSUM = "file://sqlite3.h;endline=11;md5=786d3dc581eff03f4fd9e4a77ed00c66"
+
+SRC_URI = "\
+  http://www.sqlite.org/2017/sqlite-autoconf-${SQLITE_PV}.tar.gz \
+  file://sqlite3-fix-CVE-2017-13685.patch \
+  "
+SRC_URI[md5sum] = "e262a28b73cc330e7e83520c8ce14e4d"
+SRC_URI[sha256sum] = "3814c6f629ff93968b2b37a70497cfe98b366bf587a2261a56a5f750af6ae6a0"
diff --git a/import-layers/yocto-poky/meta/recipes-support/vte/vte_0.46.1.bb b/import-layers/yocto-poky/meta/recipes-support/vte/vte_0.46.1.bb
deleted file mode 100644
index 0afe625..0000000
--- a/import-layers/yocto-poky/meta/recipes-support/vte/vte_0.46.1.bb
+++ /dev/null
@@ -1,43 +0,0 @@
-SUMMARY = "Virtual terminal emulator GTK+ widget library"
-BUGTRACKER = "https://bugzilla.gnome.org/buglist.cgi?product=vte"
-LICENSE = "LGPLv2.1+"
-DEPENDS = "glib-2.0 gtk+3 libpcre2 intltool-native libxml2-native"
-
-LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
-
-inherit gnomebase gtk-doc distro_features_check upstream-version-is-even gobject-introspection
-
-# vapigen.m4 is required when vala is not present (but the one from vala should be used normally)
-SRC_URI += "file://0001-Don-t-enable-stack-protection-by-default.patch \
-            ${@bb.utils.contains('PACKAGECONFIG', 'vala', '', 'file://0001-Add-m4-vapigen.m4.patch', d) } \
-            "
-SRC_URI[archive.md5sum] = "e8f4393b9f1ec2e2f3cdb3fd4f5a16de"
-SRC_URI[archive.sha256sum] = "8800cf8bc259704a12ad1853fb0eb43bfe3857af15242e6fb9f2c3fd95b3f5c6"
-
-ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
-
-# Instead of "inherit vala" we do the relevant settings here so we can
-# set DEPENDS based on PACKAGECONFIG.
-
-# Our patched version of Vala looks in STAGING_DATADIR for .vapi files
-export STAGING_DATADIR
-# Upstream Vala >= 0.11 looks in XDG_DATA_DIRS for .vapi files
-export XDG_DATA_DIRS = "${STAGING_DATADIR}"
-
-# Package additional files
-FILES_${PN}-dev += "${datadir}/vala/vapi/*"
-
-PACKAGECONFIG ??= "gnutls"
-PACKAGECONFIG[vala] = "--enable-vala,--disable-vala,vala-native vala"
-PACKAGECONFIG[gnutls] = "--with-gnutls,--without-gnutls,gnutls"
-
-CFLAGS += "-D_GNU_SOURCE"
-
-EXTRA_OECONF = "--disable-test-application"
-
-# libtool adds "-nostdlib" when g++ is used. This breaks PIE builds.
-# Use libtool-cross (which has a hack to prevent that) instead.
-EXTRA_OEMAKE_class-target = "LIBTOOL=${STAGING_BINDIR_CROSS}/${HOST_SYS}-libtool"
-
-PACKAGES =+ "libvte"
-FILES_libvte = "${libdir}/*.so.* ${libdir}/girepository-1.0/*"
diff --git a/import-layers/yocto-poky/meta/recipes-support/vte/vte_0.48.3.bb b/import-layers/yocto-poky/meta/recipes-support/vte/vte_0.48.3.bb
new file mode 100644
index 0000000..4720841
--- /dev/null
+++ b/import-layers/yocto-poky/meta/recipes-support/vte/vte_0.48.3.bb
@@ -0,0 +1,48 @@
+SUMMARY = "Virtual terminal emulator GTK+ widget library"
+BUGTRACKER = "https://bugzilla.gnome.org/buglist.cgi?product=vte"
+LICENSE = "LGPLv2.1+"
+DEPENDS = "glib-2.0 gtk+3 libpcre2 intltool-native libxml2-native gperf-native"
+
+LIC_FILES_CHKSUM = "file://COPYING;md5=4fbd65380cdd255951079008b364516c"
+
+inherit gnomebase gtk-doc distro_features_check upstream-version-is-even gobject-introspection
+
+# vapigen.m4 is required when vala is not present (but the one from vala should be used normally)
+SRC_URI += "file://0001-Don-t-enable-stack-protection-by-default.patch \
+            ${@bb.utils.contains('PACKAGECONFIG', 'vala', '', 'file://0001-Add-m4-vapigen.m4.patch', d) } \
+            "
+SRC_URI[archive.md5sum] = "b300675ac5f269aa6eb48fe89a0d726d"
+SRC_URI[archive.sha256sum] = "a3a9fb182740b392a45cd3f46fa61a985f68bb6b1817b52daec22034c46158c3"
+
+ANY_OF_DISTRO_FEATURES = "${GTK3DISTROFEATURES}"
+
+# Instead of "inherit vala" we do the relevant settings here so we can
+# set DEPENDS based on PACKAGECONFIG.
+
+# Our patched version of Vala looks in STAGING_DATADIR for .vapi files
+export STAGING_DATADIR
+# Upstream Vala >= 0.11 looks in XDG_DATA_DIRS for .vapi files
+export XDG_DATA_DIRS = "${STAGING_DATADIR}"
+
+# Help g-ir-scanner find the .so for linking
+do_compile_prepend() {
+        export GIR_EXTRA_LIBS_PATH="${B}/src/.libs"
+}
+
+# Package additional files
+FILES_${PN}-dev += "${datadir}/vala/vapi/*"
+
+PACKAGECONFIG ??= "gnutls"
+PACKAGECONFIG[vala] = "--enable-vala,--disable-vala,vala-native vala"
+PACKAGECONFIG[gnutls] = "--with-gnutls,--without-gnutls,gnutls"
+
+CFLAGS += "-D_GNU_SOURCE"
+
+EXTRA_OECONF = "--disable-test-application"
+
+# libtool adds "-nostdlib" when g++ is used. This breaks PIE builds.
+# Use libtool-cross (which has a hack to prevent that) instead.
+EXTRA_OEMAKE_class-target = "LIBTOOL=${STAGING_BINDIR_CROSS}/${HOST_SYS}-libtool"
+
+PACKAGES =+ "libvte"
+FILES_libvte = "${libdir}/*.so.* ${libdir}/girepository-1.0/*"
diff --git a/import-layers/yocto-poky/meta/site/arm-linux-uclibc b/import-layers/yocto-poky/meta/site/arm-linux-uclibc
deleted file mode 100644
index 6ae7c6e..0000000
--- a/import-layers/yocto-poky/meta/site/arm-linux-uclibc
+++ /dev/null
@@ -1,105 +0,0 @@
-ac_cv_func_setvbuf_reversed=no
-ac_cv_sizeof___int64=${ac_cv_sizeof___int64=0}
-ac_cv_sizeof_char=${ac_cv_sizeof_char=1}
-ac_cv_sizeof_int=${ac_cv_sizeof_int=4}
-ac_cv_sizeof_long=${ac_cv_sizeof_long=4}
-ac_cv_sizeof_long_int=${ac_cv_sizeof_long_int=4}
-ac_cv_sizeof_long_long=${ac_cv_sizeof_long_long=8}
-ac_cv_sizeof_short=${ac_cv_sizeof_short=2}
-ac_cv_sizeof_short_int=${ac_cv_sizeof_short_int=2}
-ac_cv_sizeof_size_t=${ac_cv_sizeof_size_t=4}
-ac_cv_sizeof_void_p=${ac_cv_sizeof_void_p=4}
-ac_cv_sizeof_long_double=${ac_cv_sizeof_long_double=8}
-
-ac_cv_uchar=${ac_cv_uchar=no}
-ac_cv_uint=${ac_cv_uint=yes}
-ac_cv_ulong=${ac_cv_ulong=yes}
-ac_cv_ushort=${ac_cv_ushort=yes}
-
-ac_cv_time_r_type=${ac_cv_time_r_type=POSIX}
-
-# samba
-samba_cv_BROKEN_NISPLUS_INCLUDE_FILES=${samba_cv_BROKEN_NISPLUS_INCLUDE_FILES=yes}
-samba_cv_BROKEN_REDHAT_7_SYSTEM_HEADERS=${samba_cv_BROKEN_REDHAT_7_SYSTEM_HEADERS=no}
-samba_cv_FTRUNCATE_NEEDS_ROOT=${samba_cv_FTRUNCATE_NEEDS_ROOT=no}
-samba_cv_HAVE_BROKEN_FCNTL64_LOCKS=${samba_cv_HAVE_BROKEN_FCNTL64_LOCKS=no}
-samba_cv_HAVE_BROKEN_GETGROUPS=${samba_cv_HAVE_BROKEN_GETGROUPS=no}
-samba_cv_HAVE_BROKEN_LINUX_SENDFILE=${samba_cv_HAVE_BROKEN_LINUX_SENDFILE=yes}
-samba_cv_HAVE_BROKEN_READDIR=${samba_cv_HAVE_BROKEN_READDIR=no}
-samba_cv_HAVE_C99_VSNPRINTF=${samba_cv_HAVE_C99_VSNPRINTF=yes}
-samba_cv_HAVE_DEV64_T=${samba_cv_HAVE_DEV64_T=no}
-samba_cv_HAVE_DEVICE_MAJOR_FN=${samba_cv_HAVE_DEVICE_MAJOR_FN=yes}
-samba_cv_HAVE_DEVICE_MINOR_FN=${samba_cv_HAVE_DEVICE_MINOR_FN=yes}
-samba_cv_HAVE_DQB_FSOFTLIMIT=${samba_cv_HAVE_DQB_FSOFTLIMIT=no}
-samba_cv_HAVE_EXPLICIT_LARGEFILE_SUPPORT=${samba_cv_HAVE_EXPLICIT_LARGEFILE_SUPPORT=yes}
-samba_cv_HAVE_FCNTL_LOCK=${samba_cv_HAVE_FCNTL_LOCK=yes}
-samba_cv_HAVE_FTRUNCATE_EXTEND=${samba_cv_HAVE_FTRUNCATE_EXTEND=yes}
-samba_cv_HAVE_FUNCTION_MACRO=${samba_cv_HAVE_FUNCTION_MACRO=yes}
-samba_cv_HAVE_GETTIMEOFDAY_TZ=${samba_cv_HAVE_GETTIMEOFDAY_TZ=yes}
-samba_cv_HAVE_INO64_T=${samba_cv_HAVE_INO64_T=no}
-samba_cv_HAVE_INT16_FROM_RPC_RPC_H=${samba_cv_HAVE_INT16_FROM_RPC_RPC_H=no}
-samba_cv_HAVE_INT32_FROM_RPC_RPC_H=${samba_cv_HAVE_INT32_FROM_RPC_RPC_H=no}
-samba_cv_HAVE_KERNEL_CHANGE_NOTIFY=${samba_cv_HAVE_KERNEL_CHANGE_NOTIFY=yes}
-samba_cv_HAVE_KERNEL_OPLOCKS_LINUX=${samba_cv_HAVE_KERNEL_OPLOCKS_LINUX=yes}
-samba_cv_HAVE_KERNEL_SHARE_MODES=${samba_cv_HAVE_KERNEL_SHARE_MODES=yes}
-samba_cv_HAVE_MMAP=${samba_cv_HAVE_MMAP=yes}
-samba_cv_HAVE_NATIVE_ICONV=${samba_cv_HAVE_NATIVE_ICONV=yes}
-samba_cv_HAVE_OFF64_T=${samba_cv_HAVE_OFF64_T=no}
-samba_cv_HAVE_QUOTACTL_4A=${samba_cv_HAVE_QUOTACTL_4A=yes}
-samba_cv_HAVE_ROOT=${samba_cv_HAVE_ROOT=no}
-samba_cv_HAVE_RPC_AUTH_ERROR_CONFLICT=${samba_cv_HAVE_RPC_AUTH_ERROR_CONFLICT=no}
-samba_cv_HAVE_SECURE_MKSTEMP=${samba_cv_HAVE_SECURE_MKSTEMP=yes}
-samba_cv_HAVE_SENDFILE=${samba_cv_HAVE_SENDFILE=no}
-samba_cv_HAVE_SENDFILE64=${samba_cv_HAVE_SENDFILE64=no}
-samba_cv_HAVE_SOCK_SIN_LEN=${samba_cv_HAVE_SOCK_SIN_LEN=no}
-samba_cv_HAVE_STAT_ST_BLKSIZE=${samba_cv_HAVE_STAT_ST_BLKSIZE=yes}
-samba_cv_HAVE_STAT_ST_BLOCKS=${samba_cv_HAVE_STAT_ST_BLOCKS=yes}
-samba_cv_HAVE_STRUCT_DIRENT64=${samba_cv_HAVE_STRUCT_DIRENT64=no}
-samba_cv_HAVE_STRUCT_FLOCK64=${samba_cv_HAVE_STRUCT_FLOCK64=yes}
-samba_cv_HAVE_STRUCT_IF_DQBLK=${samba_cv_HAVE_STRUCT_IF_DQBLK=no}
-samba_cv_HAVE_STRUCT_MEM_DQBLK=${samba_cv_HAVE_STRUCT_MEM_DQBLK=no}
-samba_cv_HAVE_TRUNCATED_SALT=${samba_cv_HAVE_TRUNCATED_SALT=no}
-samba_cv_HAVE_UINT16_FROM_RPC_RPC_H=${samba_cv_HAVE_UINT16_FROM_RPC_RPC_H=no}
-samba_cv_HAVE_UINT32_FROM_RPC_RPC_H=${samba_cv_HAVE_UINT32_FROM_RPC_RPC_H=no}
-samba_cv_HAVE_UNSIGNED_CHAR=${samba_cv_HAVE_UNSIGNED_CHAR=yes}
-samba_cv_HAVE_UTIMBUF=${samba_cv_HAVE_UTIMBUF=yes}
-samba_cv_HAVE_UT_UT_ADDR=${samba_cv_HAVE_UT_UT_ADDR=yes}
-samba_cv_HAVE_UT_UT_EXIT=${samba_cv_HAVE_UT_UT_EXIT=yes}
-samba_cv_HAVE_UT_UT_HOST=${samba_cv_HAVE_UT_UT_HOST=yes}
-samba_cv_HAVE_UT_UT_ID=${samba_cv_HAVE_UT_UT_ID=yes}
-samba_cv_HAVE_UT_UT_NAME=${samba_cv_HAVE_UT_UT_NAME=yes}
-samba_cv_HAVE_UT_UT_PID=${samba_cv_HAVE_UT_UT_PID=yes}
-samba_cv_HAVE_UT_UT_TIME=${samba_cv_HAVE_UT_UT_TIME=yes}
-samba_cv_HAVE_UT_UT_TV=${samba_cv_HAVE_UT_UT_TV=yes}
-samba_cv_HAVE_UT_UT_TYPE=${samba_cv_HAVE_UT_UT_TYPE=yes}
-samba_cv_HAVE_UT_UT_USER=${samba_cv_HAVE_UT_UT_USER=yes}
-samba_cv_HAVE_UX_UT_SYSLEN=${samba_cv_HAVE_UX_UT_SYSLEN=no}
-samba_cv_HAVE_VA_COPY=${samba_cv_HAVE_VA_COPY=yes}
-samba_cv_HAVE_WORKING_AF_LOCAL=${samba_cv_HAVE_WORKING_AF_LOCAL=no}
-samba_cv_HAVE_Werror=${samba_cv_HAVE_Werror=yes}
-samba_cv_PUTUTLINE_RETURNS_UTMP=${samba_cv_PUTUTLINE_RETURNS_UTMP=yes}
-samba_cv_QUOTA_WORKS=${samba_cv_QUOTA_WORKS=yes}
-samba_cv_REPLACE_GETPASS=${samba_cv_REPLACE_GETPASS=yes}
-samba_cv_REPLACE_INET_NTOA=${samba_cv_REPLACE_INET_NTOA=no}
-samba_cv_RUN_QUOTA_TESTS=${samba_cv_RUN_QUOTA_TESTS=auto}
-samba_cv_SEEKDIR_RETURNS_VOID=${samba_cv_SEEKDIR_RETURNS_VOID=yes}
-samba_cv_SIZEOF_INO_T=${samba_cv_SIZEOF_INO_T=yes}
-samba_cv_SIZEOF_OFF_T=${samba_cv_SIZEOF_OFF_T=yes}
-samba_cv_SYSCONF_SC_NGROUPS_MAX=${samba_cv_SYSCONF_SC_NGROUPS_MAX=yes}
-samba_cv_SYSQUOTA_FOUND=${samba_cv_SYSQUOTA_FOUND=yes}
-samba_cv_SYSQUOTA_WORKS=${samba_cv_SYSQUOTA_WORKS=yes}
-samba_cv_TRY_QUOTAS=${samba_cv_TRY_QUOTAS=no}
-samba_cv_TRY_SYS_QUOTAS=${samba_cv_TRY_SYS_QUOTAS=no}
-samba_cv_USE_SETRESUID=${samba_cv_USE_SETRESUID=yes}
-samba_cv_WITH_QUOTAS=${samba_cv_WITH_QUOTAS=auto}
-samba_cv_WITH_SYS_QUOTAS=${samba_cv_WITH_SYS_QUOTAS=auto}
-samba_cv_compiler_supports_ll=${samba_cv_compiler_supports_ll=yes}
-samba_cv_have_longlong=${samba_cv_have_longlong=yes}
-samba_cv_have_setresgid=${samba_cv_have_setresgid=yes}
-samba_cv_have_setresuid=${samba_cv_have_setresuid=yes}
-samba_cv_immediate_structures=${samba_cv_immediate_structures=yes}
-samba_cv_optimize_out_funcation_calls=${samba_cv_optimize_out_funcation_calls=yes}
-samba_cv_sig_atomic_t=${samba_cv_sig_atomic_t=yes}
-samba_cv_socklen_t=${samba_cv_socklen_t=yes}
-samba_cv_unixsocket=${samba_cv_unixsocket=yes}
-samba_cv_volatile=${samba_cv_volatile=yes}
diff --git a/import-layers/yocto-poky/meta/site/armeb-linux-uclibc b/import-layers/yocto-poky/meta/site/armeb-linux-uclibc
deleted file mode 100644
index 731e857..0000000
--- a/import-layers/yocto-poky/meta/site/armeb-linux-uclibc
+++ /dev/null
@@ -1,38 +0,0 @@
-ac_cv_func_setvbuf_reversed=no
-ac_cv_sizeof___int64=${ac_cv_sizeof___int64=0}
-ac_cv_sizeof_char=${ac_cv_sizeof_char=1}
-ac_cv_sizeof_int=${ac_cv_sizeof_int=4}
-ac_cv_sizeof_long=${ac_cv_sizeof_long=4}
-ac_cv_sizeof_long_int=${ac_cv_sizeof_long_int=4}
-ac_cv_sizeof_long_long=${ac_cv_sizeof_long_long=8}
-ac_cv_sizeof_short=${ac_cv_sizeof_short=2}
-ac_cv_sizeof_short_int=${ac_cv_sizeof_short_int=2}
-ac_cv_sizeof_size_t=${ac_cv_sizeof_size_t=4}
-ac_cv_sizeof_void_p=${ac_cv_sizeof_void_p=4}
-ac_cv_sizeof_long_double=${ac_cv_sizeof_long_double=8}
-ac_cv_sizeof_char_p=${ac_cv_sizeof_char_p=4}
-ac_cv_sizeof_unsigned=${ac_cv_sizeof_unsigned=4}
-ac_cv_sizeof_ptrdiff_t=${glib_cv_sizeof_ptrdiff_t=4}
-
-ac_cv_uchar=${ac_cv_uchar=no}
-ac_cv_uint=${ac_cv_uint=yes}
-ac_cv_ulong=${ac_cv_ulong=yes}
-ac_cv_ushort=${ac_cv_ushort=yes}
-
-ac_cv_time_r_type=${ac_cv_time_r_type=POSIX}
-
-# samba
-samba_cv_HAVE_GETTIMEOFDAY_TZ=${samba_cv_HAVE_GETTIMEOFDAY_TZ=yes}
-samba_cv_USE_SETEUID=${samba_cv_USE_SETEUID=yes}
-samba_cv_USE_SETRESUID=${samba_cv_USE_SETRESUID=yes}
-samba_cv_USE_SETREUID=${samba_cv_USE_SETREUID=yes}
-samba_cv_USE_SETUIDX=${samba_cv_USE_SETUIDX=yes}
-samba_cv_have_setresgid=${samba_cv_have_setresgid=yes}
-samba_cv_have_setresuid=${samba_cv_have_setresuid=yes}
-samba_cv_HAVE_SENDFILE=${samba_cv_HAVE_SENDFILE=yes}
-samba_cv_HAVE_SENDFILE64=${samba_cv_HAVE_SENDFILE64=yes}
-samba_cv_HAVE_SECURE_MKSTEMP=${samba_cv_HAVE_SECURE_MKSTEMP=yes}
-samba_cv_HAVE_MMAP=${samba_cv_HAVE_MMAP=yes}
-samba_cv_HAVE_KERNEL_OPLOCKS_LINUX=${samba_cv_HAVE_KERNEL_OPLOCKS_LINUX=yes}
-samba_cv_HAVE_KERNEL_CHANGE_NOTIFY=${samba_cv_HAVE_KERNEL_CHANGE_NOTIFY=yes}
-samba_cv_HAVE_KERNEL_SHARE_MODES=${samba_cv_HAVE_KERNEL_SHARE_MODES=yes}
diff --git a/import-layers/yocto-poky/meta/site/common-uclibc b/import-layers/yocto-poky/meta/site/common-uclibc
deleted file mode 100644
index 26fc103..0000000
--- a/import-layers/yocto-poky/meta/site/common-uclibc
+++ /dev/null
@@ -1,52 +0,0 @@
-# general
-ac_cv_have_decl_sys_siglist=${ac_cv_have_decl_sys_siglist=no}
-ac_cv_func_realloc_works=${ac_cv_func_realloc_works=yes}
-ac_cv_func_realloc_0_nonnull=${ac_cv_func_realloc_0_nonnull=yes}
-ac_cv_func_malloc_works=${ac_cv_func_malloc_works=yes}
-ac_cv_func_malloc_0_nonnull=${ac_cv_func_malloc_0_nonnull=yes}
-ac_cv_func_getpgrp_void=yes
-ac_cv_func_setpgrp_void=yes
-ac_cv_func_setgrent_void=yes
-ac_cv_func_getgrgid_r=${ac_cv_func_getgrgid_r=yes}
-ac_cv_func_getpwuid_r=${ac_cv_func_getpwuid_r=yes}
-ac_cv_func_posix_getpwuid_r=${ac_cv_func_posix_getpwuid_r=yes}
-ac_cv_func_posix_getgrgid_r=${ac_cv_func_posix_getgrgid_r=yes}
-ac_cv_func_getaddrinfo=${ac_cv_func_getaddrinfo=yes}
-
-# glib
-glib_cv_strlcpy=${glib_cv_strlcpy=no}
-ac_cv_func_printf_unix98=${ac_cv_func_printf_unix98=yes}
-ac_cv_func_snprintf_c99=${ac_cv_func_snprintf_c99=yes}
-ac_cv_func_vsnprintf_c99=${ac_cv_func_vsnprintf_c99=yes}
-glib_cv_compliant_posix_memalign=${glib_cv_compliant_posix_memalign=1}
-glib_cv_long_long_format=${glib_cv_long_long_format=ll}
-glib_cv_have_qsort_r=${glib_cv_have_qsort_r=no}
-
-#dbus-glib
-ac_cv_have_abstract_sockets=${ac_cv_have_abstract_sockets=yes}
-
-# bash
-bash_cv_under_sys_siglist=${bash_cv_under_sys_siglist=no}
-bash_cv_sys_siglist=${bash_cv_sys_siglist=no}
-
-# coreutils
-fu_cv_sys_stat_statfs2_bsize=${fu_cv_sys_stat_statfs2_bsize=yes}
-gl_cv_func_gettimeofday_clobber=${gl_cv_func_gettimeofday_clobber=no}
-gl_cv_func_tzset_clobber=${gl_cv_func_tzset_clobber=no}
-gl_cv_func_gettimeofday_posix_signature=${gl_cv_func_gettimeofday_posix_signature=yes}
-ac_cv_func_posix_spawn=${ac_cv_func_posix_spawn=yes}
-ac_cv_func_posix_spawn_works=${ac_cv_func_posix_spawn_works=yes}
-
-# va_copy and _va_copy
-ac_cv_va_copy=${ac_cv_va_copy=yes}
-ac_cv___va_copy=${ac_cv___va_copy=yes}
-ac_cv_func_va_copy=${ac_cv_func_va_copy=yes}
-ac_cv_func___va_copy=${ac_cv_func___va_copy=yes}
-
-# posix_getpwuid_r posix_getgrgid_r posix_getpwnam_r
-ac_cv_func_posix_getpwuid_r=${ac_cv_func_posix_getpwuid_r=yes}
-ac_cv_func_posix_getgrgid_r=${ac_cv_func_posix_getgrgid_r=yes}
-ac_cv_func_posix_getpwnam_r=${ac_cv_func_posix_getpwnam_r=yes}
-
-# Xorg
-xorg_cv_malloc0_returns_null=${xorg_cv_malloc0_returns_null=yes}
diff --git a/import-layers/yocto-poky/meta/site/ix86-common b/import-layers/yocto-poky/meta/site/ix86-common
index f4cf0b8..4fbf58c 100644
--- a/import-layers/yocto-poky/meta/site/ix86-common
+++ b/import-layers/yocto-poky/meta/site/ix86-common
@@ -19,7 +19,6 @@
 ac_cv_sizeof_float=${ac_cv_sizeof_float=4}
 ac_cv_sizeof_uid_t=${ac_cv_sizeof_uid_t=4}
 ac_cv_sizeof_gid_t=${ac_cv_sizeof_gid_t=4}
-ac_cv_sizeof_ino_t=${ac_cv_sizeof_ino_t=4}
 ac_cv_sizeof_dev_t=${ac_cv_sizeof_dev_t=8}
 ac_cv_func_lstat_dereferences_slashed_symlink=${ac_cv_func_lstat_dereferences_slashed_symlink=yes}
 ac_cv_func_lstat_empty_string_bug=${ac_cv_func_lstat_empty_string_bug=no}
diff --git a/import-layers/yocto-poky/meta/site/mips-linux-uclibc b/import-layers/yocto-poky/meta/site/mips-linux-uclibc
deleted file mode 100644
index b23363e..0000000
--- a/import-layers/yocto-poky/meta/site/mips-linux-uclibc
+++ /dev/null
@@ -1,80 +0,0 @@
-# general
-ac_cv_func_setvbuf_reversed=${ac_cv_func_setvbuf_reversed=no}
-
-# bash
-ac_cv_c_long_double=${ac_cv_c_long_double=no}
-bash_cv_func_sigsetjmp=${bash_cv_func_sigsetjmp=present}
-
-# openssh
-ac_cv_have_accrights_in_msghdr=${ac_cv_have_accrights_in_msghdr=no}
-ac_cv_have_broken_snprintf=${ac_cv_have_broken_snprintf=no}
-ac_cv_have_control_in_msghdr=${ac_cv_have_control_in_msghdr=yes}
-ac_cv_have_openpty_ctty_bug=${ac_cv_have_openpty_ctty_bug=no}
-ac_cv_have_space_d_name_in_struct_dirent=${ac_cv_have_space_d_name_in_struct_dirent=yes}
-
-# fget
-compat_cv_func_snprintf_works=${compat_cv_func_snprintf_works=yes}
-
-# glib
-glib_cv___va_copy=${glib_cv___va_copy=yes}
-glib_cv_has__inline=${glib_cv_has__inline=yes}
-glib_cv_has__inline__=${glib_cv_has__inline__=yes}
-glib_cv_hasinline=${glib_cv_hasinline=yes}
-glib_cv_long_long_format=${glib_cv_long_long_format=ll}
-glib_cv_rtldglobal_broken=${glib_cv_rtldglobal_broken=no}
-glib_cv_sane_realloc=${glib_cv_sane_realloc=yes}
-glib_cv_sizeof_gmutex=${glib_cv_sizeof_gmutex=24}
-glib_cv_sizeof_system_thread=${glib_cv_sizeof_system_thread=4}
-glib_cv_stack_grows=${glib_cv_stack_grows=no}
-glib_cv_uscore=${glib_cv_uscore=no}
-
-# glib-2.0
-glib_cv_stack_grows=${glib_cv_stack_grows=no}
-utils_cv_sys_open_max=${utils_cv_sys_open_max=1015}
-glib_cv_use_pid_surrogate=${glib_cv_use_pid_surrogate=yes}
-
-# libpcap
-ac_cv_linux_vers=${ac_cv_linux_vers=2}
-
-# startup-notification
-lf_cv_sane_realloc=${lf_cv_sane_realloc=yes}
-
-# libidl
-libIDL_cv_long_long_format=${libIDL_cv_long_long_format=ll}
-
-# ncftp
-wi_cv_struct_timeval_tv_sec=${wi_cv_struct_timeval_tv_sec=long}
-wi_cv_struct_timeval_tv_usec=${wi_cv_struct_timeval_tv_usec=long}
-wi_cv_unix_domain_sockets=${wi_cv_unix_domain_sockets=yes}
-
-# db
-db_cv_align_t=${db_cv_align_t='unsigned long long'}
-db_cv_alignp_t=${db_cv_alignp_t='unsigned long'}
-db_cv_fcntl_f_setfd=${db_cv_fcntl_f_setfd=yes}
-db_cv_sprintf_count=${db_cv_sprintf_count=yes}
-
-# rrdtool
-rd_cv_ieee_works=${rd_cv_ieee_works=yes}
-# ac_cv_path_PERL=${ac_cv_path_PERL=no}
-
-# gettext
-am_cv_func_working_getline=${am_cv_func_working_getline=yes}
-
-# samba
-samba_cv_HAVE_GETTIMEOFDAY_TZ=${samba_cv_HAVE_GETTIMEOFDAY_TZ=yes}
-
-# vim
-ac_cv_sizeof_int=${ac_cv_sizeof_int=4}
-
-# intercom
-ac_cv_func_fnmatch_works=${ac_cv_func_fnmatch_works=yes}
-
-# lmbench
-ac_cv_uint=${ac_cv_unit=yes}
-
-# D-BUS
-ac_cv_func_posix_getpwnam_r=${ac_cv_func_posix_getpwnam_r=yes}
-
-# evolution-data-server
-ac_cv_libiconv_utf8=${ac_cv_libiconv_utf8=yes}
-
diff --git a/import-layers/yocto-poky/meta/site/mips64-linux-uclibc b/import-layers/yocto-poky/meta/site/mips64-linux-uclibc
deleted file mode 100644
index cbba552..0000000
--- a/import-layers/yocto-poky/meta/site/mips64-linux-uclibc
+++ /dev/null
@@ -1,83 +0,0 @@
-# general
-ac_cv_func_setvbuf_reversed=${ac_cv_func_setvbuf_reversed=no}
-
-# bash
-ac_cv_c_long_double=${ac_cv_c_long_double=no}
-bash_cv_func_sigsetjmp=${bash_cv_func_sigsetjmp=present}
-
-# openssh
-ac_cv_have_accrights_in_msghdr=${ac_cv_have_accrights_in_msghdr=no}
-ac_cv_have_broken_snprintf=${ac_cv_have_broken_snprintf=no}
-ac_cv_have_control_in_msghdr=${ac_cv_have_control_in_msghdr=yes}
-ac_cv_have_openpty_ctty_bug=${ac_cv_have_openpty_ctty_bug=no}
-ac_cv_have_space_d_name_in_struct_dirent=${ac_cv_have_space_d_name_in_struct_dirent=yes}
-
-# fget
-compat_cv_func_snprintf_works=${compat_cv_func_snprintf_works=yes}
-
-# glib
-glib_cv___va_copy=${glib_cv___va_copy=yes}
-glib_cv_has__inline=${glib_cv_has__inline=yes}
-glib_cv_has__inline__=${glib_cv_has__inline__=yes}
-glib_cv_hasinline=${glib_cv_hasinline=yes}
-glib_cv_long_long_format=${glib_cv_long_long_format=ll}
-glib_cv_rtldglobal_broken=${glib_cv_rtldglobal_broken=no}
-glib_cv_sane_realloc=${glib_cv_sane_realloc=yes}
-glib_cv_sizeof_gmutex=${glib_cv_sizeof_gmutex=24}
-glib_cv_sizeof_system_thread=${glib_cv_sizeof_system_thread=4}
-glib_cv_stack_grows=${glib_cv_stack_grows=no}
-glib_cv_uscore=${glib_cv_uscore=no}
-
-# glib-2.0
-glib_cv_stack_grows=${glib_cv_stack_grows=no}
-utils_cv_sys_open_max=${utils_cv_sys_open_max=1015}
-glib_cv_use_pid_surrogate=${glib_cv_use_pid_surrogate=yes}
-ac_cv_alignof_guint32=4
-ac_cv_alignof_guint64=8
-ac_cv_alignof_unsigned_long=8
-
-# libpcap
-ac_cv_linux_vers=${ac_cv_linux_vers=2}
-
-# startup-notification
-lf_cv_sane_realloc=${lf_cv_sane_realloc=yes}
-
-# libidl
-libIDL_cv_long_long_format=${libIDL_cv_long_long_format=ll}
-
-# ncftp
-wi_cv_struct_timeval_tv_sec=${wi_cv_struct_timeval_tv_sec=long}
-wi_cv_struct_timeval_tv_usec=${wi_cv_struct_timeval_tv_usec=long}
-wi_cv_unix_domain_sockets=${wi_cv_unix_domain_sockets=yes}
-
-# db
-db_cv_align_t=${db_cv_align_t='unsigned long long'}
-db_cv_alignp_t=${db_cv_alignp_t='unsigned long'}
-db_cv_fcntl_f_setfd=${db_cv_fcntl_f_setfd=yes}
-db_cv_sprintf_count=${db_cv_sprintf_count=yes}
-
-# rrdtool
-rd_cv_ieee_works=${rd_cv_ieee_works=yes}
-# ac_cv_path_PERL=${ac_cv_path_PERL=no}
-
-# gettext
-am_cv_func_working_getline=${am_cv_func_working_getline=yes}
-
-# samba
-samba_cv_HAVE_GETTIMEOFDAY_TZ=${samba_cv_HAVE_GETTIMEOFDAY_TZ=yes}
-
-# vim
-ac_cv_sizeof_int=${ac_cv_sizeof_int=4}
-
-# intercom
-ac_cv_func_fnmatch_works=${ac_cv_func_fnmatch_works=yes}
-
-# lmbench
-ac_cv_uint=${ac_cv_unit=yes}
-
-# D-BUS
-ac_cv_func_posix_getpwnam_r=${ac_cv_func_posix_getpwnam_r=yes}
-
-# eds-dbus
-ac_cv_libiconv_utf8=${ac_cv_libiconv_utf8=yes}
-
diff --git a/import-layers/yocto-poky/meta/site/mips64el-linux-uclibc b/import-layers/yocto-poky/meta/site/mips64el-linux-uclibc
deleted file mode 100644
index 6a0499d..0000000
--- a/import-layers/yocto-poky/meta/site/mips64el-linux-uclibc
+++ /dev/null
@@ -1,108 +0,0 @@
-# general
-ac_cv_func_setvbuf_reversed=${ac_cv_func_setvbuf_reversed=no}
-
-# bash
-ac_cv_c_long_double=${ac_cv_c_long_double=no}
-bash_cv_func_sigsetjmp=${bash_cv_func_sigsetjmp=present}
-
-# openssh
-ac_cv_have_accrights_in_msghdr=${ac_cv_have_accrights_in_msghdr=no}
-ac_cv_have_broken_snprintf=${ac_cv_have_broken_snprintf=no}
-ac_cv_have_control_in_msghdr=${ac_cv_have_control_in_msghdr=yes}
-ac_cv_have_openpty_ctty_bug=${ac_cv_have_openpty_ctty_bug=no}
-ac_cv_have_space_d_name_in_struct_dirent=${ac_cv_have_space_d_name_in_struct_dirent=yes}
-
-# fget
-compat_cv_func_snprintf_works=${compat_cv_func_snprintf_works=yes}
-
-# glib
-glib_cv___va_copy=${glib_cv___va_copy=yes}
-glib_cv_has__inline=${glib_cv_has__inline=yes}
-glib_cv_has__inline__=${glib_cv_has__inline__=yes}
-glib_cv_hasinline=${glib_cv_hasinline=yes}
-glib_cv_long_long_format=${glib_cv_long_long_format=ll}
-glib_cv_rtldglobal_broken=${glib_cv_rtldglobal_broken=no}
-glib_cv_sane_realloc=${glib_cv_sane_realloc=yes}
-glib_cv_sizeof_gmutex=${glib_cv_sizeof_gmutex=24}
-glib_cv_sizeof_system_thread=${glib_cv_sizeof_system_thread=4}
-glib_cv_stack_grows=${glib_cv_stack_grows=no}
-glib_cv_uscore=${glib_cv_uscore=no}
-
-# glib-2.0
-glib_cv_stack_grows=${glib_cv_stack_grows=no}
-utils_cv_sys_open_max=${utils_cv_sys_open_max=1015}
-glib_cv_use_pid_surrogate=${glib_cv_use_pid_surrogate=yes}
-ac_cv_alignof_guint32=4
-ac_cv_alignof_guint64=8
-ac_cv_alignof_unsigned_long=8
-
-# libpcap
-ac_cv_linux_vers=${ac_cv_linux_vers=2}
-
-# startup-notification
-lf_cv_sane_realloc=${lf_cv_sane_realloc=yes}
-
-# libidl
-libIDL_cv_long_long_format=${libIDL_cv_long_long_format=ll}
-
-# ncftp
-wi_cv_struct_timeval_tv_sec=${wi_cv_struct_timeval_tv_sec=long}
-wi_cv_struct_timeval_tv_usec=${wi_cv_struct_timeval_tv_usec=long}
-wi_cv_unix_domain_sockets=${wi_cv_unix_domain_sockets=yes}
-
-# gettext
-am_cv_func_working_getline=${am_cv_func_working_getline=yes}
-
-# samba
-# from samba 3.0.14a on 5/29/2005
-ac_cv_func_memcmp_working=${ac_cv_func_memcmp_working=yes}
-ac_cv_have_asprintf_decl=${ac_cv_have_asprintf_decl=yes}
-ac_cv_have_setresgid_decl=${ac_cv_have_setresgid_decl=yes}
-ac_cv_have_setresuid_decl=${ac_cv_have_setresuid_decl=yes}
-ac_cv_have_vasprintf_decl=${ac_cv_have_vasprintf_decl=yes}
-fu_cv_sys_stat_statvfs64=${fu_cv_sys_stat_statvfs64=yes}
-samba_cv_FTRUNCATE_NEEDS_ROOT=${samba_cv_FTRUNCATE_NEEDS_ROOT=no}
-samba_cv_HAVE_BROKEN_FCNTL64_LOCKS=${samba_cv_HAVE_BROKEN_FCNTL64_LOCKS=no}
-samba_cv_HAVE_BROKEN_GETGROUPS=${samba_cv_HAVE_BROKEN_GETGROUPS=no}
-samba_cv_HAVE_BROKEN_READDIR=${samba_cv_HAVE_BROKEN_READDIR=no}
-samba_cv_HAVE_C99_VSNPRINTF=${samba_cv_HAVE_C99_VSNPRINTF=yes}
-samba_cv_HAVE_DEV64_T=${samba_cv_HAVE_DEV64_T=no}
-samba_cv_HAVE_DEVICE_MAJOR_FN=${samba_cv_HAVE_DEVICE_MAJOR_FN=yes}
-samba_cv_HAVE_DEVICE_MINOR_FN=${samba_cv_HAVE_DEVICE_MINOR_FN=yes}
-samba_cv_HAVE_EXPLICIT_LARGEFILE_SUPPORT=${samba_cv_HAVE_EXPLICIT_LARGEFILE_SUPPORT=yes}
-samba_cv_HAVE_FCNTL_LOCK=${samba_cv_HAVE_FCNTL_LOCK=yes}
-samba_cv_HAVE_FTRUNCATE_EXTEND=${samba_cv_HAVE_FTRUNCATE_EXTEND=yes}
-samba_cv_HAVE_GETTIMEOFDAY_TZ=${samba_cv_HAVE_GETTIMEOFDAY_TZ=yes}
-samba_cv_HAVE_INO64_T=${samba_cv_HAVE_INO64_T=no}
-samba_cv_HAVE_KERNEL_CHANGE_NOTIFY=${samba_cv_HAVE_KERNEL_CHANGE_NOTIFY=yes}
-samba_cv_HAVE_KERNEL_OPLOCKS_LINUX=${samba_cv_HAVE_KERNEL_OPLOCKS_LINUX=yes}
-samba_cv_HAVE_KERNEL_SHARE_MODES=${samba_cv_HAVE_KERNEL_SHARE_MODES=yes}
-samba_cv_HAVE_MAKEDEV=${samba_cv_HAVE_MAKEDEV=yes}
-samba_cv_HAVE_MMAP=${samba_cv_HAVE_MMAP=yes}
-samba_cv_HAVE_OFF64_T=${samba_cv_HAVE_OFF64_T=no}
-samba_cv_HAVE_QUOTACTL_4A=${samba_cv_HAVE_QUOTACTL_4A=yes}
-samba_cv_HAVE_ROOT=${samba_cv_HAVE_ROOT=no}
-samba_cv_HAVE_SECURE_MKSTEMP=${samba_cv_HAVE_SECURE_MKSTEMP=yes}
-samba_cv_HAVE_SENDFILE64=${samba_cv_HAVE_SENDFILE64=yes}
-samba_cv_HAVE_STRUCT_DIRENT64=${samba_cv_HAVE_STRUCT_DIRENT64=yes}
-samba_cv_HAVE_STRUCT_FLOCK64=${samba_cv_HAVE_STRUCT_FLOCK64=yes}
-samba_cv_HAVE_TRUNCATED_SALT=${samba_cv_HAVE_TRUNCATED_SALT=no}
-samba_cv_HAVE_UNSIGNED_CHAR=${samba_cv_HAVE_UNSIGNED_CHAR=no}
-samba_cv_HAVE_WORKING_AF_LOCAL=${samba_cv_HAVE_WORKING_AF_LOCAL=no}
-samba_cv_HAVE_Werror=${samba_cv_HAVE_Werror=yes}
-samba_cv_REALPATH_TAKES_NULL=${samba_cv_REALPATH_TAKES_NULL=no}
-samba_cv_REPLACE_INET_NTOA=${samba_cv_REPLACE_INET_NTOA=no}
-samba_cv_SIZEOF_INO_T=${samba_cv_SIZEOF_INO_T=yes}
-samba_cv_SIZEOF_OFF_T=${samba_cv_SIZEOF_OFF_T=yes}
-samba_cv_SYSCONF_SC_NGROUPS_MAX=${samba_cv_SYSCONF_SC_NGROUPS_MAX=yes}
-samba_cv_SYSCONF_SC_NPROC_ONLN=${samba_cv_SYSCONF_SC_NPROC_ONLN=no}
-samba_cv_SYSQUOTA_FOUND=${samba_cv_SYSQUOTA_FOUND=yes}
-samba_cv_SYSQUOTA_WORKS=${samba_cv_SYSQUOTA_WORKS=yes}
-samba_cv_USE_SETRESUID=${samba_cv_USE_SETRESUID=yes}
-samba_cv_have_longlong=${samba_cv_have_longlong=yes}
-samba_cv_have_setresgid=${samba_cv_have_setresgid=yes}
-samba_cv_have_setresuid=${samba_cv_have_setresuid=yes}
-samba_cv_sysquotas_file=${samba_cv_sysquotas_file=lib/sysquotas_4A.c}
-# This cached value needs a local patch to pick it up, upstream 3.0.14a
-# doesn't cache it.
-samba_cv_LINUX_LFS_SUPPORT=${samba_cv_LINUX_LFS_SUPPORT=yes}
diff --git a/import-layers/yocto-poky/meta/site/mipsel-linux-uclibc b/import-layers/yocto-poky/meta/site/mipsel-linux-uclibc
deleted file mode 100644
index f921cda..0000000
--- a/import-layers/yocto-poky/meta/site/mipsel-linux-uclibc
+++ /dev/null
@@ -1,100 +0,0 @@
-# general
-ac_cv_func_setvbuf_reversed=${ac_cv_func_setvbuf_reversed=no}
-
-# bash
-ac_cv_c_long_double=${ac_cv_c_long_double=no}
-bash_cv_func_sigsetjmp=${bash_cv_func_sigsetjmp=present}
-
-# openssh
-ac_cv_have_accrights_in_msghdr=${ac_cv_have_accrights_in_msghdr=no}
-ac_cv_have_broken_snprintf=${ac_cv_have_broken_snprintf=no}
-ac_cv_have_control_in_msghdr=${ac_cv_have_control_in_msghdr=yes}
-ac_cv_have_openpty_ctty_bug=${ac_cv_have_openpty_ctty_bug=no}
-ac_cv_have_space_d_name_in_struct_dirent=${ac_cv_have_space_d_name_in_struct_dirent=yes}
-
-# fget
-compat_cv_func_snprintf_works=${compat_cv_func_snprintf_works=yes}
-
-# glib
-glib_cv___va_copy=${glib_cv___va_copy=yes}
-glib_cv_has__inline=${glib_cv_has__inline=yes}
-glib_cv_has__inline__=${glib_cv_has__inline__=yes}
-glib_cv_hasinline=${glib_cv_hasinline=yes}
-glib_cv_long_long_format=${glib_cv_long_long_format=ll}
-glib_cv_rtldglobal_broken=${glib_cv_rtldglobal_broken=no}
-glib_cv_sane_realloc=${glib_cv_sane_realloc=yes}
-glib_cv_sizeof_gmutex=${glib_cv_sizeof_gmutex=24}
-glib_cv_sizeof_system_thread=${glib_cv_sizeof_system_thread=4}
-glib_cv_stack_grows=${glib_cv_stack_grows=no}
-glib_cv_uscore=${glib_cv_uscore=no}
-
-# libpcap
-ac_cv_linux_vers=${ac_cv_linux_vers=2}
-
-# startup-notification
-lf_cv_sane_realloc=${lf_cv_sane_realloc=yes}
-
-# libidl
-libIDL_cv_long_long_format=${libIDL_cv_long_long_format=ll}
-
-# ncftp
-wi_cv_struct_timeval_tv_sec=${wi_cv_struct_timeval_tv_sec=long}
-wi_cv_struct_timeval_tv_usec=${wi_cv_struct_timeval_tv_usec=long}
-wi_cv_unix_domain_sockets=${wi_cv_unix_domain_sockets=yes}
-
-# gettext
-am_cv_func_working_getline=${am_cv_func_working_getline=yes}
-
-# samba
-# from samba 3.0.14a on 5/29/2005
-ac_cv_func_memcmp_working=${ac_cv_func_memcmp_working=yes}
-ac_cv_have_asprintf_decl=${ac_cv_have_asprintf_decl=yes}
-ac_cv_have_setresgid_decl=${ac_cv_have_setresgid_decl=yes}
-ac_cv_have_setresuid_decl=${ac_cv_have_setresuid_decl=yes}
-ac_cv_have_vasprintf_decl=${ac_cv_have_vasprintf_decl=yes}
-fu_cv_sys_stat_statvfs64=${fu_cv_sys_stat_statvfs64=yes}
-samba_cv_FTRUNCATE_NEEDS_ROOT=${samba_cv_FTRUNCATE_NEEDS_ROOT=no}
-samba_cv_HAVE_BROKEN_FCNTL64_LOCKS=${samba_cv_HAVE_BROKEN_FCNTL64_LOCKS=no}
-samba_cv_HAVE_BROKEN_GETGROUPS=${samba_cv_HAVE_BROKEN_GETGROUPS=no}
-samba_cv_HAVE_BROKEN_READDIR=${samba_cv_HAVE_BROKEN_READDIR=no}
-samba_cv_HAVE_C99_VSNPRINTF=${samba_cv_HAVE_C99_VSNPRINTF=yes}
-samba_cv_HAVE_DEV64_T=${samba_cv_HAVE_DEV64_T=no}
-samba_cv_HAVE_DEVICE_MAJOR_FN=${samba_cv_HAVE_DEVICE_MAJOR_FN=yes}
-samba_cv_HAVE_DEVICE_MINOR_FN=${samba_cv_HAVE_DEVICE_MINOR_FN=yes}
-samba_cv_HAVE_EXPLICIT_LARGEFILE_SUPPORT=${samba_cv_HAVE_EXPLICIT_LARGEFILE_SUPPORT=yes}
-samba_cv_HAVE_FCNTL_LOCK=${samba_cv_HAVE_FCNTL_LOCK=yes}
-samba_cv_HAVE_FTRUNCATE_EXTEND=${samba_cv_HAVE_FTRUNCATE_EXTEND=yes}
-samba_cv_HAVE_GETTIMEOFDAY_TZ=${samba_cv_HAVE_GETTIMEOFDAY_TZ=yes}
-samba_cv_HAVE_INO64_T=${samba_cv_HAVE_INO64_T=no}
-samba_cv_HAVE_KERNEL_CHANGE_NOTIFY=${samba_cv_HAVE_KERNEL_CHANGE_NOTIFY=yes}
-samba_cv_HAVE_KERNEL_OPLOCKS_LINUX=${samba_cv_HAVE_KERNEL_OPLOCKS_LINUX=yes}
-samba_cv_HAVE_KERNEL_SHARE_MODES=${samba_cv_HAVE_KERNEL_SHARE_MODES=yes}
-samba_cv_HAVE_MAKEDEV=${samba_cv_HAVE_MAKEDEV=yes}
-samba_cv_HAVE_MMAP=${samba_cv_HAVE_MMAP=yes}
-samba_cv_HAVE_OFF64_T=${samba_cv_HAVE_OFF64_T=no}
-samba_cv_HAVE_QUOTACTL_4A=${samba_cv_HAVE_QUOTACTL_4A=yes}
-samba_cv_HAVE_ROOT=${samba_cv_HAVE_ROOT=no}
-samba_cv_HAVE_SECURE_MKSTEMP=${samba_cv_HAVE_SECURE_MKSTEMP=yes}
-samba_cv_HAVE_SENDFILE64=${samba_cv_HAVE_SENDFILE64=yes}
-samba_cv_HAVE_STRUCT_DIRENT64=${samba_cv_HAVE_STRUCT_DIRENT64=yes}
-samba_cv_HAVE_STRUCT_FLOCK64=${samba_cv_HAVE_STRUCT_FLOCK64=yes}
-samba_cv_HAVE_TRUNCATED_SALT=${samba_cv_HAVE_TRUNCATED_SALT=no}
-samba_cv_HAVE_UNSIGNED_CHAR=${samba_cv_HAVE_UNSIGNED_CHAR=no}
-samba_cv_HAVE_WORKING_AF_LOCAL=${samba_cv_HAVE_WORKING_AF_LOCAL=no}
-samba_cv_HAVE_Werror=${samba_cv_HAVE_Werror=yes}
-samba_cv_REALPATH_TAKES_NULL=${samba_cv_REALPATH_TAKES_NULL=no}
-samba_cv_REPLACE_INET_NTOA=${samba_cv_REPLACE_INET_NTOA=no}
-samba_cv_SIZEOF_INO_T=${samba_cv_SIZEOF_INO_T=yes}
-samba_cv_SIZEOF_OFF_T=${samba_cv_SIZEOF_OFF_T=yes}
-samba_cv_SYSCONF_SC_NGROUPS_MAX=${samba_cv_SYSCONF_SC_NGROUPS_MAX=yes}
-samba_cv_SYSCONF_SC_NPROC_ONLN=${samba_cv_SYSCONF_SC_NPROC_ONLN=no}
-samba_cv_SYSQUOTA_FOUND=${samba_cv_SYSQUOTA_FOUND=yes}
-samba_cv_SYSQUOTA_WORKS=${samba_cv_SYSQUOTA_WORKS=yes}
-samba_cv_USE_SETRESUID=${samba_cv_USE_SETRESUID=yes}
-samba_cv_have_longlong=${samba_cv_have_longlong=yes}
-samba_cv_have_setresgid=${samba_cv_have_setresgid=yes}
-samba_cv_have_setresuid=${samba_cv_have_setresuid=yes}
-samba_cv_sysquotas_file=${samba_cv_sysquotas_file=lib/sysquotas_4A.c}
-# This cached value needs a local patch to pick it up, upstream 3.0.14a
-# doesn't cache it.
-samba_cv_LINUX_LFS_SUPPORT=${samba_cv_LINUX_LFS_SUPPORT=yes}
diff --git a/import-layers/yocto-poky/meta/site/nios2-linux b/import-layers/yocto-poky/meta/site/nios2-linux
index 2f4e570..5bae748 100644
--- a/import-layers/yocto-poky/meta/site/nios2-linux
+++ b/import-layers/yocto-poky/meta/site/nios2-linux
@@ -33,7 +33,6 @@
 db_cv_path_strip=${db_cv_path_strip=/usr/bin/strip}
 db_cv_align_t=${db_cv_align_t='unsigned long long'}
 db_cv_alignp_t=${db_cv_alignp_t='unsigned long'}
-db_cv_mutex=${db_cv_mutex=ARM/gcc-assembly}
 db_cv_posixmutexes=${db_cv_posixmutexes=no}
 db_cv_uimutexes=${db_cv_uimutexes=no}
 
diff --git a/import-layers/yocto-poky/meta/site/x32-linux b/import-layers/yocto-poky/meta/site/x32-linux
index 308d6e2..4b70422 100644
--- a/import-layers/yocto-poky/meta/site/x32-linux
+++ b/import-layers/yocto-poky/meta/site/x32-linux
@@ -1,6 +1,5 @@
 # general
 ac_cv_sizeof_long_double=${ac_cv_sizeof_long_double=16}
-ac_cv_sizeof_ino_t=${ac_cv_sizeof_ino_t=8}
 ac_cv_sizeof_dev_t=${ac_cv_sizeof_dev_t=8}
 ac_cv_sys_file_offset_bits=${ac_cv_sys_file_offset_bits=64}
 ac_cv_alignof_double=8
diff --git a/import-layers/yocto-poky/meta/site/x86_64-linux b/import-layers/yocto-poky/meta/site/x86_64-linux
index ebdcf69..778e2c5 100644
--- a/import-layers/yocto-poky/meta/site/x86_64-linux
+++ b/import-layers/yocto-poky/meta/site/x86_64-linux
@@ -24,7 +24,6 @@
 ac_cv_sizeof_size_t=${ac_cv_sizeof_size_t=8}
 ac_cv_sizeof_uid_t=${ac_cv_sizeof_uid_t=4}
 ac_cv_sizeof_gid_t=${ac_cv_sizeof_gid_t=4}
-ac_cv_sizeof_ino_t=${ac_cv_sizeof_ino_t=8}
 ac_cv_sizeof_dev_t=${ac_cv_sizeof_dev_t=8}
 ac_cv_sizeof_void_p=${ac_cv_sizeof_void_p=8}
 ac_cv_strerror_r_SUSv3=${ac_cv_strerror_r_SUSv3=no}
diff --git a/import-layers/yocto-poky/meta/site/x86_64-linux-uclibc b/import-layers/yocto-poky/meta/site/x86_64-linux-uclibc
deleted file mode 100644
index 2d269f7..0000000
--- a/import-layers/yocto-poky/meta/site/x86_64-linux-uclibc
+++ /dev/null
@@ -1,91 +0,0 @@
-# general
-ac_cv_va_val_copy=${ac_cv_va_val_copy=no}
-ac_cv_func_lstat_dereferences_slashed_symlink=${ac_cv_func_lstat_dereferences_slashed_symlink=yes}
-ac_cv_func_lstat_empty_string_bug=${ac_cv_func_lstat_empty_string_bug=no}
-ac_cv_func_posix_getpwnam_r=${ac_cv_func_posix_getpwnam_r=yes}
-ac_cv_func_setvbuf_reversed=${ac_cv_func_setvbuf_reversed=no}
-ac_cv_func_stat_empty_string_bug=${ac_cv_func_stat_empty_string_bug=no}
-ac_cv_func_stat_ignores_trailing_slash=${ac_cv_func_stat_ignores_trailing_slash=no}
-ac_libnet_have_packet_socket=${ac_libnet_have_packet_socket=yes}
-ac_cv_linux_vers=${ac_cv_linux_vers=2}
-ac_cv_need_trio=${ac_cv_need_trio=no}
-ac_cv_sizeof_char=${ac_cv_sizeof_char=1}
-ac_cv_sizeof_int=${ac_cv_sizeof_int=4}
-ac_cv_sizeof___int64=${ac_cv_sizeof___int64=0}
-ac_cv_sizeof_long=${ac_cv_sizeof_long=8}
-ac_cv_sizeof_long_double=${ac_cv_sizeof_long_double=16}
-ac_cv_sizeof_long_int=${ac_cv_sizeof_long_int=8}
-ac_cv_sizeof_long_long=${ac_cv_sizeof_long_long=8}
-ac_cv_sizeof_short=${ac_cv_sizeof_short=2}
-ac_cv_sizeof_short_int=${ac_cv_sizeof_short_int=2}
-ac_cv_sizeof_size_t=${ac_cv_sizeof_size_t=8}
-ac_cv_sizeof_void_p=${ac_cv_sizeof_void_p=8}
-ac_cv_strerror_r_SUSv3=${ac_cv_strerror_r_SUSv3=no}
-db_cv_alignp_t=${db_cv_alignp_t='unsigned long long'}
-db_cv_align_t=${db_cv_align_t='unsigned long long'}
-db_cv_fcntl_f_setfd=${db_cv_fcntl_f_setfd=yes}
-db_cv_sprintf_count=${db_cv_sprintf_count=yes}
-glib_cv_hasinline=${glib_cv_hasinline=yes}
-glib_cv_has__inline=${glib_cv_has__inline=yes}
-glib_cv_has__inline__=${glib_cv_has__inline__=yes}
-glib_cv_long_long_format=${glib_cv_long_long_format=ll}
-glib_cv_rtldglobal_broken=${glib_cv_rtldglobal_broken=yes}
-glib_cv_sane_realloc=${glib_cv_sane_realloc=yes}
-glib_cv_sizeof_gmutex=${glib_cv_sizeof_gmutex=40}
-glib_cv_sizeof_intmax_t=${glib_cv_sizeof_intmax_t=8}
-glib_cv_sizeof_ptrdiff_t=${glib_cv_sizeof_ptrdiff_t=8}
-glib_cv_sizeof_size_t=${glib_cv_sizeof_size_t=8}
-glib_cv_sizeof_system_thread=${glib_cv_sizeof_system_thread=8}
-glib_cv_stack_grows=${glib_cv_stack_grows=no}
-glib_cv_sys_pthread_cond_timedwait_posix=${glib_cv_sys_pthread_cond_timedwait_posix=yes}
-glib_cv_sys_pthread_getspecific_posix=${glib_cv_sys_pthread_getspecific_posix=yes}
-glib_cv_sys_pthread_mutex_trylock_posix=${glib_cv_sys_pthread_mutex_trylock_posix=yes}
-glib_cv_uscore=${glib_cv_uscore=no}
-glib_cv_va_val_copy=${glib_cv_va_val_copy=no}
-nano_cv_func_regexec_segv_emptystr=${nano_cv_func_regexec_segv_emptystr=no}
-samba_cv_HAVE_VA_COPY=${samba_cv_HAVE_VA_COPY=yes}
-screen_cv_sys_bcopy_overlap=${screen_cv_sys_bcopy_overlap=no}
-screen_cv_sys_fifo_broken_impl=${screen_cv_sys_fifo_broken_impl=yes}
-screen_cv_sys_fifo_usable=${screen_cv_sys_fifo_usable=yes}
-screen_cv_sys_memcpy_overlap=${screen_cv_sys_memcpy_overlap=no}
-screen_cv_sys_memmove_overlap=${screen_cv_sys_memmove_overlap=no}
-screen_cv_sys_select_broken_retval=${screen_cv_sys_select_broken_retval=no}
-screen_cv_sys_sockets_nofs=${screen_cv_sys_sockets_nofs=no}
-screen_cv_sys_sockets_usable=${screen_cv_sys_sockets_usable=yes}
-screen_cv_sys_terminfo_used=${screen_cv_sys_terminfo_used=yes}
-utils_cv_sys_open_max=${utils_cv_sys_open_max=1015}
-
-# gettext
-am_cv_func_working_getline=${am_cv_func_working_getline=yes}
-
-# glib-2.0
-glib_cv_use_pid_surrogate=${glib_cv_use_pid_surrogate=yes}
-ac_cv_alignof_guint32=4
-ac_cv_alignof_guint64=8
-ac_cv_alignof_unsigned_long=8
-
-# libidl
-libIDL_cv_long_long_format=${libIDL_cv_long_long_format=ll}
-
-# ORBit2
-ac_cv_alignof_CORBA_boolean=1
-ac_cv_alignof_CORBA_char=1
-ac_cv_alignof_CORBA_double=4
-ac_cv_alignof_CORBA_float=4
-ac_cv_alignof_CORBA_long=4
-ac_cv_alignof_CORBA_long_double=8
-ac_cv_alignof_CORBA_long_long=8
-ac_cv_alignof_CORBA_octet=1
-ac_cv_alignof_CORBA_pointer=8
-ac_cv_alignof_CORBA_short=2
-ac_cv_alignof_CORBA_struct=1
-ac_cv_alignof_CORBA_wchar=2
-
-# startup-notification
-lf_cv_sane_realloc=${lf_cv_sane_realloc=yes}
-
-# lftp
-lftp_cv_va_val_copy=${lftp_cv_va_val_copy=no}
-
-# slrn
-slrn_cv_va_val_copy=${slrn_cv_va_val_copy=no}
diff --git a/import-layers/yocto-poky/oe-init-build-env b/import-layers/yocto-poky/oe-init-build-env
index 5fe68d1..e813230 100755
--- a/import-layers/yocto-poky/oe-init-build-env
+++ b/import-layers/yocto-poky/oe-init-build-env
@@ -57,11 +57,3 @@
 
 [ -z "$BUILDDIR" ] || cd "$BUILDDIR"
 
-# Shutdown any bitbake server if the BBSERVER variable is not set
-if [ -z "$BBSERVER" ] && [ -f bitbake.lock ]; then
-    grep ":" bitbake.lock > /dev/null && BBSERVER=$(cat bitbake.lock) bitbake --status-only
-    if [ $? = 0 ]; then
-        echo "Shutting down bitbake memory resident server with bitbake -m"
-        BBSERVER=$(cat bitbake.lock) bitbake -m
-    fi
-fi
diff --git a/import-layers/yocto-poky/oe-init-build-env-memres b/import-layers/yocto-poky/oe-init-build-env-memres
deleted file mode 100755
index 9e1425e..0000000
--- a/import-layers/yocto-poky/oe-init-build-env-memres
+++ /dev/null
@@ -1,90 +0,0 @@
-#!/bin/sh
-
-# OE Build Environment Setup Script
-#
-# Copyright (C) 2006-2011 Linux Foundation
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, write to the Free Software
-# Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
-
-#
-# Normally this is called as '. ./oe-init-build-env-memres <portnumber> <builddir>'
-#
-# This works in most shells (not dash), but not all of them pass the arguments
-# when being sourced.  To workaround the shell limitation use "set <portnumber>
-# <builddir>" prior to sourcing this script.
-#
-if [ -z "$1" ]; then
-    echo "No port specified, using dynamically selected port"
-    port=-1
-else
-    port=$1
-    shift
-fi
-
-if [ -n "$BASH_SOURCE" ]; then
-    THIS_SCRIPT=$BASH_SOURCE
-elif [ -n "$ZSH_NAME" ]; then
-    THIS_SCRIPT=$0
-else
-    THIS_SCRIPT="$(pwd)/oe-init-build-env"
-fi
-if [ -n "$BBSERVER" ]; then
-    unset BBSERVER
-fi
-
-if [ -z "$ZSH_NAME" ] && [ "$0" = "$THIS_SCRIPT" ]; then
-    echo "Error: This script needs to be sourced. Please run as '. $THIS_SCRIPT'"
-    exit 1
-fi
-
-if [ -z "$OEROOT" ]; then
-    OEROOT=$(dirname "$THIS_SCRIPT")
-    OEROOT=$(readlink -f "$OEROOT")
-fi
-unset THIS_SCRIPT
-
-export OEROOT
-. $OEROOT/scripts/oe-buildenv-internal &&
-    TEMPLATECONF="$TEMPLATECONF" $OEROOT/scripts/oe-setup-builddir || {
-    unset OEROOT
-    return 1
-}
-unset OEROOT
-
-[ -z "$BUILDDIR" ] || cd "$BUILDDIR"
-
-res=1
-if [ -e bitbake.lock ] && grep : bitbake.lock > /dev/null; then
-    BBSERVER=$(cat bitbake.lock) bitbake --status-only
-    res=$?
-fi
-
-if [ $res != 0 ]; then
-    bitbake --server-only -t xmlrpc -B localhost:$port
-fi
-
-if [ $port = -1 ]; then
-    export BBSERVER=localhost:-1
-    echo "Bitbake server started on demand as needed, use bitbake -m to shut it down"
-else
-    export BBSERVER=$(cat bitbake.lock)
-
-    if [ $res = 0 ]; then
-        echo "Using existing bitbake server at: $BBSERVER, use bitbake -m to shut it down"
-    else
-        echo "Bitbake server started at: $BBSERVER, use bitbake -m to shut it down"
-    fi
-fi
-unset port res
diff --git a/import-layers/yocto-poky/scripts/buildhistory-diff b/import-layers/yocto-poky/scripts/buildhistory-diff
index dd9745e..e79cb7a 100755
--- a/import-layers/yocto-poky/scripts/buildhistory-diff
+++ b/import-layers/yocto-poky/scripts/buildhistory-diff
@@ -7,7 +7,7 @@
 
 import sys
 import os
-import optparse
+import argparse
 from distutils.version import LooseVersion
 
 # Ensure PythonGit is installed (buildhistory_analysis needs it)
@@ -17,47 +17,70 @@
     print("Please install GitPython (python3-git) 0.3.4 or later in order to use this script")
     sys.exit(1)
 
+def get_args_parser():
+    description = "Reports significant differences in the buildhistory repository."
+
+    parser = argparse.ArgumentParser(description=description,
+                                     usage="""
+    %(prog)s [options] [from-revision [to-revision]]
+    (if not specified, from-revision defaults to build-minus-1, and to-revision defaults to HEAD)""")
+
+    parser.add_argument('-p', '--buildhistory-dir',
+                        action='store',
+                        dest='buildhistory_dir',
+                        default='buildhistory/',
+                        help="Specify path to buildhistory directory (defaults to buildhistory/ under cwd)")
+    parser.add_argument('-v', '--report-version',
+                        action='store_true',
+                        dest='report_ver',
+                        default=False,
+                        help="Report changes in PKGE/PKGV/PKGR even when the values are still the default (PE/PV/PR)")
+    parser.add_argument('-a', '--report-all',
+                        action='store_true',
+                        dest='report_all',
+                        default=False,
+                        help="Report all changes, not just the default significant ones")
+    parser.add_argument('-s', '--signatures',
+                        action='store_true',
+                        dest='sigs',
+                        default=False,
+                        help="Report list of signatures differing instead of output")
+    parser.add_argument('-S', '--signatures-with-diff',
+                        action='store_true',
+                        dest='sigsdiff',
+                        default=False,
+                        help="Report on actual signature differences instead of output (requires signature data to have been generated, either by running the actual tasks or using bitbake -S)")
+    parser.add_argument('-e', '--exclude-path',
+                        action='append',
+                        help="Exclude path from the output")
+    parser.add_argument('revisions',
+                        default = ['build-minus-1', 'HEAD'],
+                        nargs='*',
+                        help=argparse.SUPPRESS)
+    return parser
+
 def main():
-    parser = optparse.OptionParser(
-        description = "Reports significant differences in the buildhistory repository.",
-        usage = """
-    %prog [options] [from-revision [to-revision]]
-(if not specified, from-revision defaults to build-minus-1, and to-revision defaults to HEAD)""")
 
-    parser.add_option("-p", "--buildhistory-dir",
-            help = "Specify path to buildhistory directory (defaults to buildhistory/ under cwd)",
-            action="store", dest="buildhistory_dir", default='buildhistory/')
-    parser.add_option("-v", "--report-version",
-            help = "Report changes in PKGE/PKGV/PKGR even when the values are still the default (PE/PV/PR)",
-            action="store_true", dest="report_ver", default=False)
-    parser.add_option("-a", "--report-all",
-            help = "Report all changes, not just the default significant ones",
-            action="store_true", dest="report_all", default=False)
-    parser.add_option("-s", "--signatures",
-            help = "Report list of signatures differing instead of output",
-            action="store_true", dest="sigs", default=False)
-    parser.add_option("-S", "--signatures-with-diff",
-            help = "Report on actual signature differences instead of output (requires signature data to have been generated, either by running the actual tasks or using bitbake -S)",
-            action="store_true", dest="sigsdiff", default=False)
-
-    options, args = parser.parse_args(sys.argv)
-
-    if len(args) > 3:
-        sys.stderr.write('Invalid argument(s) specified: %s\n\n' % ' '.join(args[3:]))
-        parser.print_help()
-        sys.exit(1)
+    parser = get_args_parser()
+    args = parser.parse_args()
 
     if LooseVersion(git.__version__) < '0.3.1':
         sys.stderr.write("Version of GitPython is too old, please install GitPython (python-git) 0.3.1 or later in order to use this script\n")
         sys.exit(1)
 
-    if not os.path.exists(options.buildhistory_dir):
-        if options.buildhistory_dir == 'buildhistory/':
+    if len(args.revisions) > 2:
+        sys.stderr.write('Invalid argument(s) specified: %s\n\n' % ' '.join(args.revisions[2:]))
+        parser.print_help()
+
+        sys.exit(1)
+    if not os.path.exists(args.buildhistory_dir):
+        if args.buildhistory_dir == 'buildhistory/':
             cwd = os.getcwd()
             if os.path.basename(cwd) == 'buildhistory':
-                options.buildhistory_dir = cwd
-    if not os.path.exists(options.buildhistory_dir):
-        sys.stderr.write('Buildhistory directory "%s" does not exist\n\n' % options.buildhistory_dir)
+                args.buildhistory_dir = cwd
+
+    if not os.path.exists(args.buildhistory_dir):
+        sys.stderr.write('Buildhistory directory "%s" does not exist\n\n' % args.buildhistory_dir)
         parser.print_help()
         sys.exit(1)
 
@@ -71,30 +94,29 @@
     scriptpath.add_oe_lib_path()
     # Set path to bitbake lib dir so the buildhistory_analysis module can load bb.utils
     bitbakepath = scriptpath.add_bitbake_lib_path()
+
     if not bitbakepath:
         sys.stderr.write("Unable to find bitbake by searching parent directory of this script or PATH\n")
         sys.exit(1)
 
-    import oe.buildhistory_analysis
-
-    fromrev = 'build-minus-1'
-    torev = 'HEAD'
-    if len(args) > 1:
-        if len(args) == 2 and '..' in args[1]:
-            revs = args[1].split('..')
-            fromrev = revs[0]
-            if revs[1]:
-                torev = revs[1]
+    if len(args.revisions) == 1:
+        if '..' in args.revisions[0]:
+            fromrev, torev = args.revisions[0].split('..')
         else:
-            fromrev = args[1]
-    if len(args) > 2:
-        torev = args[2]
+            fromrev, torev = args.revisions[0], 'HEAD'
+    elif len(args.revisions) == 2:
+        fromrev, torev = args.revisions
+
+    from oe.buildhistory_analysis import process_changes
 
     import gitdb
+
     try:
-        changes = oe.buildhistory_analysis.process_changes(options.buildhistory_dir, fromrev, torev, options.report_all, options.report_ver, options.sigs, options.sigsdiff)
+        changes = process_changes(args.buildhistory_dir, fromrev, torev,
+                                  args.report_all, args.report_ver, args.sigs,
+                                  args.sigsdiff, args.exclude_path)
     except gitdb.exc.BadObject as e:
-        if len(args) == 1:
+        if not args.revisions:
             sys.stderr.write("Unable to find previous build revision in buildhistory repository\n\n")
             parser.print_help()
         else:
@@ -102,10 +124,11 @@
         sys.exit(1)
 
     for chg in changes:
-        print('%s' % chg)
+        out = str(chg)
+        if out:
+            print(out)
 
     sys.exit(0)
 
-
 if __name__ == "__main__":
     main()
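
The hunk above replaces optparse with argparse and folds the from/to revisions into a single positional argument. A minimal standalone sketch of how nargs='*' plus the two-element default resolves to a (fromrev, torev) pair (not part of the patch; the revision spec below is invented):

    import argparse

    # Same shape as the 'revisions' argument added above.
    parser = argparse.ArgumentParser()
    parser.add_argument('revisions', nargs='*',
                        default=['build-minus-1', 'HEAD'])
    args = parser.parse_args(['mybranch~3..mybranch'])

    if len(args.revisions) == 1 and '..' in args.revisions[0]:
        fromrev, torev = args.revisions[0].split('..')
    elif len(args.revisions) == 1:
        fromrev, torev = args.revisions[0], 'HEAD'
    else:
        fromrev, torev = args.revisions[:2]

    print(fromrev, torev)  # -> mybranch~3 mybranch

With no positional arguments the two-element default is used unchanged, which matches the old "build-minus-1 to HEAD" behaviour of the optparse version.
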
diff --git a/import-layers/yocto-poky/scripts/buildstats-diff b/import-layers/yocto-poky/scripts/buildstats-diff
index adeba44..a128dd3 100755
--- a/import-layers/yocto-poky/scripts/buildstats-diff
+++ b/import-layers/yocto-poky/scripts/buildstats-diff
@@ -15,15 +15,18 @@
 #
 import argparse
 import glob
-import json
 import logging
 import math
 import os
-import re
 import sys
-from collections import namedtuple
 from operator import attrgetter
 
+# Import oe libs
+scripts_path = os.path.dirname(os.path.realpath(__file__))
+sys.path.append(os.path.join(scripts_path, 'lib'))
+from buildstats import BuildStats, diff_buildstats, taskdiff_fields, BSVerDiff
+
+
 # Setup logging
 logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")
 log = logging.getLogger()
@@ -34,180 +37,16 @@
     pass
 
 
-taskdiff_fields = ('pkg', 'pkg_op', 'task', 'task_op', 'value1', 'value2',
-                   'absdiff', 'reldiff')
-TaskDiff = namedtuple('TaskDiff', ' '.join(taskdiff_fields))
-
-
-class BSTask(dict):
-    def __init__(self, *args, **kwargs):
-        self['start_time'] = None
-        self['elapsed_time'] = None
-        self['status'] = None
-        self['iostat'] = {}
-        self['rusage'] = {}
-        self['child_rusage'] = {}
-        super(BSTask, self).__init__(*args, **kwargs)
-
-    @property
-    def cputime(self):
-        """Sum of user and system time taken by the task"""
-        return self['rusage']['ru_stime'] + self['rusage']['ru_utime'] + \
-               self['child_rusage']['ru_stime'] + self['child_rusage']['ru_utime']
-
-    @property
-    def walltime(self):
-        """Elapsed wall clock time"""
-        return self['elapsed_time']
-
-    @property
-    def read_bytes(self):
-        """Bytes read from the block layer"""
-        return self['iostat']['read_bytes']
-
-    @property
-    def write_bytes(self):
-        """Bytes written to the block layer"""
-        return self['iostat']['write_bytes']
-
-    @property
-    def read_ops(self):
-        """Number of read operations on the block layer"""
-        return self['rusage']['ru_inblock'] + self['child_rusage']['ru_inblock']
-
-    @property
-    def write_ops(self):
-        """Number of write operations on the block layer"""
-        return self['rusage']['ru_oublock'] + self['child_rusage']['ru_oublock']
-
-
-def read_buildstats_file(buildstat_file):
-    """Convert buildstat text file into dict/json"""
-    bs_task = BSTask()
-    log.debug("Reading task buildstats from %s", buildstat_file)
-    with open(buildstat_file) as fobj:
-        for line in fobj.readlines():
-            key, val = line.split(':', 1)
-            val = val.strip()
-            if key == 'Started':
-                start_time = float(val)
-                bs_task['start_time'] = start_time
-            elif key == 'Ended':
-                end_time = float(val)
-            elif key.startswith('IO '):
-                split = key.split()
-                bs_task['iostat'][split[1]] = int(val)
-            elif key.find('rusage') >= 0:
-                split = key.split()
-                ru_key = split[-1]
-                if ru_key in ('ru_stime', 'ru_utime'):
-                    val = float(val)
-                else:
-                    val = int(val)
-                ru_type = 'rusage' if split[0] == 'rusage' else \
-                                                  'child_rusage'
-                bs_task[ru_type][ru_key] = val
-            elif key == 'Status':
-                bs_task['status'] = val
-    bs_task['elapsed_time'] = end_time - start_time
-    return bs_task
-
-
-def read_buildstats_dir(bs_dir):
-    """Read buildstats directory"""
-    def split_nevr(nevr):
-        """Split name and version information from recipe "nevr" string"""
-        n_e_v, revision = nevr.rsplit('-', 1)
-        match = re.match(r'^(?P<name>\S+)-((?P<epoch>[0-9]{1,5})_)?(?P<version>[0-9]\S*)$',
-                         n_e_v)
-        if not match:
-            # If we're not able to parse a version starting with a number, just
-            # take the part after last dash
-            match = re.match(r'^(?P<name>\S+)-((?P<epoch>[0-9]{1,5})_)?(?P<version>[^-]+)$',
-                             n_e_v)
-        name = match.group('name')
-        version = match.group('version')
-        epoch = match.group('epoch')
-        return name, epoch, version, revision
-
-    if not os.path.isfile(os.path.join(bs_dir, 'build_stats')):
-        raise ScriptError("{} does not look like a buildstats directory".format(bs_dir))
-
-    log.debug("Reading buildstats directory %s", bs_dir)
-
-    buildstats = {}
-    subdirs = os.listdir(bs_dir)
-    for dirname in subdirs:
-        recipe_dir = os.path.join(bs_dir, dirname)
-        if not os.path.isdir(recipe_dir):
-            continue
-        name, epoch, version, revision = split_nevr(dirname)
-        recipe_bs = {'nevr': dirname,
-                     'name': name,
-                     'epoch': epoch,
-                     'version': version,
-                     'revision': revision,
-                     'tasks': {}}
-        for task in os.listdir(recipe_dir):
-            recipe_bs['tasks'][task] = [read_buildstats_file(
-                    os.path.join(recipe_dir, task))]
-        if name in buildstats:
-            raise ScriptError("Cannot handle multiple versions of the same "
-                              "package ({})".format(name))
-        buildstats[name] = recipe_bs
-
-    return buildstats
-
-
-def bs_append(dst, src):
-    """Append data from another buildstats"""
-    if set(dst.keys()) != set(src.keys()):
-        raise ScriptError("Refusing to join buildstats, set of packages is "
-                          "different")
-    for pkg, data in dst.items():
-        if data['nevr'] != src[pkg]['nevr']:
-            raise ScriptError("Refusing to join buildstats, package version "
-                              "differs: {} vs. {}".format(data['nevr'], src[pkg]['nevr']))
-        if set(data['tasks'].keys()) != set(src[pkg]['tasks'].keys()):
-            raise ScriptError("Refusing to join buildstats, set of tasks "
-                              "in {} differ".format(pkg))
-        for taskname, taskdata in data['tasks'].items():
-            taskdata.extend(src[pkg]['tasks'][taskname])
-
-
-def read_buildstats_json(path):
-    """Read buildstats from JSON file"""
-    buildstats = {}
-    with open(path) as fobj:
-        bs_json = json.load(fobj)
-    for recipe_bs in bs_json:
-        if recipe_bs['name'] in buildstats:
-            raise ScriptError("Cannot handle multiple versions of the same "
-                              "package ({})".format(recipe_bs['name']))
-
-        if recipe_bs['epoch'] is None:
-            recipe_bs['nevr'] = "{}-{}-{}".format(recipe_bs['name'], recipe_bs['version'], recipe_bs['revision'])
-        else:
-            recipe_bs['nevr'] = "{}-{}_{}-{}".format(recipe_bs['name'], recipe_bs['epoch'], recipe_bs['version'], recipe_bs['revision'])
-
-        for task, data in recipe_bs['tasks'].copy().items():
-            recipe_bs['tasks'][task] = [BSTask(data)]
-
-        buildstats[recipe_bs['name']] = recipe_bs
-
-    return buildstats
-
-
 def read_buildstats(path, multi):
     """Read buildstats"""
     if not os.path.exists(path):
         raise ScriptError("No such file or directory: {}".format(path))
 
     if os.path.isfile(path):
-        return read_buildstats_json(path)
+        return BuildStats.from_file_json(path)
 
     if os.path.isfile(os.path.join(path, 'build_stats')):
-        return read_buildstats_dir(path)
+        return BuildStats.from_dir(path)
 
     # Handle a non-buildstat directory
     subpaths = sorted(glob.glob(path + '/*'))
@@ -222,88 +61,63 @@
     bs = None
     for subpath in subpaths:
         if os.path.isfile(subpath):
-            tmpbs = read_buildstats_json(subpath)
+            _bs = BuildStats.from_file_json(subpath)
         else:
-            tmpbs = read_buildstats_dir(subpath)
-        if not bs:
-            bs = tmpbs
+            _bs = BuildStats.from_dir(subpath)
+        if bs is None:
+            bs = _bs
         else:
-            log.debug("Joining buildstats")
-            bs_append(bs, tmpbs)
-
+            bs.aggregate(_bs)
     if not bs:
         raise ScriptError("No buildstats found under {}".format(path))
+
     return bs
 
 
 def print_ver_diff(bs1, bs2):
     """Print package version differences"""
-    pkgs1 = set(bs1.keys())
-    pkgs2 = set(bs2.keys())
-    new_pkgs = pkgs2 - pkgs1
-    deleted_pkgs = pkgs1 - pkgs2
 
-    echanged = []
-    vchanged = []
-    rchanged = []
-    unchanged = []
-    common_pkgs = pkgs2.intersection(pkgs1)
-    if common_pkgs:
-        for pkg in common_pkgs:
-            if bs1[pkg]['epoch'] != bs2[pkg]['epoch']:
-                echanged.append(pkg)
-            elif bs1[pkg]['version'] != bs2[pkg]['version']:
-                vchanged.append(pkg)
-            elif bs1[pkg]['revision'] != bs2[pkg]['revision']:
-                rchanged.append(pkg)
-            else:
-                unchanged.append(pkg)
+    diff = BSVerDiff(bs1, bs2)
 
-    maxlen = max([len(pkg) for pkg in pkgs1.union(pkgs2)])
+    maxlen = max([len(r) for r in set(bs1.keys()).union(set(bs2.keys()))])
     fmt_str = "  {:{maxlen}} ({})"
-#    if unchanged:
-#        print("\nUNCHANGED PACKAGES:")
-#        print("-------------------")
-#        maxlen = max([len(pkg) for pkg in unchanged])
-#        for pkg in sorted(unchanged):
-#            print(fmt_str.format(pkg, bs2[pkg]['nevr'], maxlen=maxlen))
 
-    if new_pkgs:
-        print("\nNEW PACKAGES:")
-        print("-------------")
-        for pkg in sorted(new_pkgs):
-            print(fmt_str.format(pkg, bs2[pkg]['nevr'], maxlen=maxlen))
+    if diff.new:
+        print("\nNEW RECIPES:")
+        print("------------")
+        for name, val in sorted(diff.new.items()):
+            print(fmt_str.format(name, val.nevr, maxlen=maxlen))
 
-    if deleted_pkgs:
-        print("\nDELETED PACKAGES:")
-        print("-----------------")
-        for pkg in sorted(deleted_pkgs):
-            print(fmt_str.format(pkg, bs1[pkg]['nevr'], maxlen=maxlen))
+    if diff.dropped:
+        print("\nDROPPED RECIPES:")
+        print("----------------")
+        for name, val in sorted(diff.dropped.items()):
+            print(fmt_str.format(name, val.nevr, maxlen=maxlen))
 
     fmt_str = "  {0:{maxlen}} {1:<20}    ({2})"
-    if rchanged:
+    if diff.rchanged:
         print("\nREVISION CHANGED:")
         print("-----------------")
-        for pkg in sorted(rchanged):
-            field1 = "{} -> {}".format(pkg, bs1[pkg]['revision'], bs2[pkg]['revision'])
-            field2 = "{} -> {}".format(bs1[pkg]['nevr'], bs2[pkg]['nevr'])
-            print(fmt_str.format(pkg, field1, field2, maxlen=maxlen))
+        for name, val in sorted(diff.rchanged.items()):
+            field1 = "{} -> {}".format(val.left.revision, val.right.revision)
+            field2 = "{} -> {}".format(val.left.nevr, val.right.nevr)
+            print(fmt_str.format(name, field1, field2, maxlen=maxlen))
 
-    if vchanged:
+    if diff.vchanged:
         print("\nVERSION CHANGED:")
         print("----------------")
-        for pkg in sorted(vchanged):
-            field1 = "{} -> {}".format(bs1[pkg]['version'], bs2[pkg]['version'])
-            field2 = "{} -> {}".format(bs1[pkg]['nevr'], bs2[pkg]['nevr'])
-            print(fmt_str.format(pkg, field1, field2, maxlen=maxlen))
+        for name, val in sorted(diff.vchanged.items()):
+            field1 = "{} -> {}".format(val.left.version, val.right.version)
+            field2 = "{} -> {}".format(val.left.nevr, val.right.nevr)
+            print(fmt_str.format(name, field1, field2, maxlen=maxlen))
 
-    if echanged:
+    if diff.echanged:
         print("\nEPOCH CHANGED:")
         print("--------------")
-        for pkg in sorted(echanged):
-            field1 = "{} -> {}".format(bs1[pkg]['epoch'], bs2[pkg]['epoch'])
-            field2 = "{} -> {}".format(bs1[pkg]['nevr'], bs2[pkg]['nevr'])
-            print(fmt_str.format(pkg, field1, field2, maxlen=maxlen))
+        for name, val in sorted(diff.echanged.items()):
+            field1 = "{} -> {}".format(val.left.epoch, val.right.epoch)
+            field2 = "{} -> {}".format(val.left.nevr, val.right.nevr)
+            print(fmt_str.format(name, field1, field2, maxlen=maxlen))
 
 
 def print_task_diff(bs1, bs2, val_type, min_val=0, min_absdiff=0, sort_by=('absdiff',)):
@@ -343,12 +157,10 @@
         """Get cumulative sum of all tasks"""
         total = 0.0
         for recipe_data in buildstats.values():
-            for bs_task in recipe_data['tasks'].values():
-                total += sum([getattr(b, val_type) for b in bs_task]) / len(bs_task)
+            for bs_task in recipe_data.tasks.values():
+                total += getattr(bs_task, val_type)
         return total
 
-    tasks_diff = []
-
     if min_val:
         print("Ignoring tasks less than {} ({})".format(
                 val_to_str(min_val, True), val_to_str(min_val)))
@@ -357,49 +169,7 @@
                 val_to_str(min_absdiff, True), val_to_str(min_absdiff)))
 
     # Prepare the data
-    pkgs = set(bs1.keys()).union(set(bs2.keys()))
-    for pkg in pkgs:
-        tasks1 = bs1[pkg]['tasks'] if pkg in bs1 else {}
-        tasks2 = bs2[pkg]['tasks'] if pkg in bs2 else {}
-        if not tasks1:
-            pkg_op = '+ '
-        elif not tasks2:
-            pkg_op = '- '
-        else:
-            pkg_op = '  '
-
-        for task in set(tasks1.keys()).union(set(tasks2.keys())):
-            task_op = '  '
-            if task in tasks1:
-                # Average over all values
-                val1 = [getattr(b, val_type) for b in bs1[pkg]['tasks'][task]]
-                val1 = sum(val1) / len(val1)
-            else:
-                task_op = '+ '
-                val1 = 0
-            if task in tasks2:
-                # Average over all values
-                val2 = [getattr(b, val_type) for b in bs2[pkg]['tasks'][task]]
-                val2 = sum(val2) / len(val2)
-            else:
-                val2 = 0
-                task_op = '- '
-
-            if val1 == 0:
-                reldiff = float('inf')
-            else:
-                reldiff = 100 * (val2 - val1) / val1
-
-            if max(val1, val2) < min_val:
-                log.debug("Filtering out %s:%s (%s)", pkg, task,
-                          val_to_str(max(val1, val2)))
-                continue
-            if abs(val2 - val1) < min_absdiff:
-                log.debug("Filtering out %s:%s (difference of %s)", pkg, task,
-                          val_to_str(val2-val1))
-                continue
-            tasks_diff.append(TaskDiff(pkg, pkg_op, task, task_op, val1, val2,
-                                       val2-val1, reldiff))
+    tasks_diff = diff_buildstats(bs1, bs2, val_type, min_val, min_absdiff)
 
     # Sort our list
     for field in reversed(sort_by):
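
The rewrite above moves all buildstats parsing into scripts/lib/buildstats and keeps only the reporting logic in this script. A rough sketch of how those library entry points are driven, using only calls visible in this patch (the buildstats directories and field names are taken from the surrounding code and are otherwise hypothetical):

    import os
    import sys

    sys.path.append(os.path.join('scripts', 'lib'))
    from buildstats import BuildStats, diff_buildstats, BSVerDiff

    # Two buildstats trees produced by separate builds (paths invented).
    bs1 = BuildStats.from_dir('buildstats/201709080023')
    bs2 = BuildStats.from_dir('buildstats/201709090023')

    # Recipe version changes, as consumed by print_ver_diff() above.
    print(sorted(BSVerDiff(bs1, bs2).vchanged))

    # Per-task deltas, as consumed by print_task_diff() above; the
    # pkg/task/absdiff names come from taskdiff_fields.
    for task_diff in diff_buildstats(bs1, bs2, 'cputime', 0, 0):
        print(task_diff.pkg, task_diff.task, task_diff.absdiff)
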
diff --git a/import-layers/yocto-poky/scripts/contrib/build-perf-test-wrapper.sh b/import-layers/yocto-poky/scripts/contrib/build-perf-test-wrapper.sh
index 3da3253..19bee1d 100755
--- a/import-layers/yocto-poky/scripts/contrib/build-perf-test-wrapper.sh
+++ b/import-layers/yocto-poky/scripts/contrib/build-perf-test-wrapper.sh
@@ -35,6 +35,7 @@
   -C GIT_REPO       commit results into Git
   -E EMAIL_ADDR     send email report
   -P GIT_REMOTE     push results to a remote Git repository
+  -R DEST           rsync reports to a remote destination
   -w WORK_DIR       work dir for this script
                     (default: GIT_TOP_DIR/build-perf-test)
   -x                create xml report (instead of json)
@@ -50,7 +51,7 @@
 commitish=""
 oe_build_perf_test_extra_opts=()
 oe_git_archive_extra_opts=()
-while getopts "ha:c:C:E:P:w:x" opt; do
+while getopts "ha:c:C:E:P:R:w:x" opt; do
     case $opt in
         h)  usage
             exit 0
@@ -65,6 +66,8 @@
             ;;
         P)  oe_git_archive_extra_opts+=("--push" "$OPTARG")
             ;;
+        R)  rsync_dst="$OPTARG"
+            ;;
         w)  base_dir=`realpath -s "$OPTARG"`
             ;;
         x)  oe_build_perf_test_extra_opts+=("--xml")
@@ -132,6 +135,11 @@
     git reset --hard $commit > /dev/null
 fi
 
+# Determine name of the current branch
+branch=`git symbolic-ref HEAD 2> /dev/null`
+# Strip refs/heads/
+branch=${branch:11}
+
 # Setup build environment
 if [ -z "$base_dir" ]; then
     base_dir="$git_topdir/build-perf-test"
@@ -187,13 +195,25 @@
         "${oe_git_archive_extra_opts[@]}" \
         "$results_dir"
 
+    # Generate test reports
+    sanitized_branch=`echo $branch | tr / _`
+    report_txt=`hostname`_${sanitized_branch}_${machine}.txt
+    report_html=`hostname`_${sanitized_branch}_${machine}.html
+    echo -e "\nGenerating test report"
+    oe-build-perf-report -r "$results_repo" > $report_txt
+    oe-build-perf-report -r "$results_repo" --html > $report_html
+
     # Send email report
     if [ -n "$email_to" ]; then
-        echo -e "\nEmailing test report"
+        echo "Emailing test report"
         os_name=`get_os_release_var PRETTY_NAME`
-        oe-build-perf-report -r "$results_repo" > report.txt
-        oe-build-perf-report -r "$results_repo" --html > report.html
-        "$script_dir"/oe-build-perf-report-email.py --to "$email_to" --subject "Build Perf Test Report for $os_name" --text report.txt --html report.html "${OE_BUILD_PERF_REPORT_EMAIL_EXTRA_ARGS[@]}"
+        "$script_dir"/oe-build-perf-report-email.py --to "$email_to" --subject "Build Perf Test Report for $os_name" --text $report_txt --html $report_html "${OE_BUILD_PERF_REPORT_EMAIL_EXTRA_ARGS[@]}"
+    fi
+
+    # Upload report files, unless we're on detached head
+    if [ -n "$rsync_dst" -a -n "$branch" ]; then
+        echo "Uploading test report"
+        rsync $report_txt $report_html $rsync_dst
     fi
 fi
 
diff --git a/import-layers/yocto-poky/scripts/contrib/oe-build-perf-report-email.py b/import-layers/yocto-poky/scripts/contrib/oe-build-perf-report-email.py
index 261ca51..913847b 100755
--- a/import-layers/yocto-poky/scripts/contrib/oe-build-perf-report-email.py
+++ b/import-layers/yocto-poky/scripts/contrib/oe-build-perf-report-email.py
@@ -25,6 +25,7 @@
 import subprocess
 import sys
 import tempfile
+from email.mime.image import MIMEImage
 from email.mime.multipart import MIMEMultipart
 from email.mime.text import MIMEText
 
@@ -71,6 +72,10 @@
                         help="Only print errors")
     parser.add_argument('--to', action='append',
                         help="Recipients of the email")
+    parser.add_argument('--cc', action='append',
+                        help="Carbon copy recipients of the email")
+    parser.add_argument('--bcc', action='append',
+                        help="Blind carbon copy recipients of the email")
     parser.add_argument('--subject', default="Yocto build perf test report",
                         help="Email subject")
     parser.add_argument('--outdir', '-o',
@@ -107,15 +112,6 @@
     subprocess.check_output(['optipng', outfile], stderr=subprocess.STDOUT)
 
 
-def encode_png(pngfile):
-    """Encode png into a <img> html element"""
-    with open(pngfile, 'rb') as f:
-        data = f.read()
-
-    b64_data = base64.b64encode(data)
-    return '<img src="data:image/png;base64,' + b64_data.decode('utf-8') + '">\n'
-
-
 def mangle_html_report(infile, outfile, pngs):
     """Mangle html file into a email compatible format"""
     paste = True
@@ -140,9 +136,7 @@
                     # Replace charts with <img> elements
                     match = re.match('<div id="(?P<id>\w+)"', stripped)
                     if match and match.group('id') in pngs:
-                        #f_out.write('<img src="{}">\n'.format(match.group('id') + '.png'))
-                        png_file = os.path.join(png_dir, match.group('id') + '.png')
-                        f_out.write(encode_png(png_file))
+                        f_out.write('<img src="cid:{}">\n'.format(match.group('id')))
                     else:
                         f_out.write(line)
 
@@ -166,7 +160,7 @@
                                 stderr=subprocess.STDOUT)
 
         pngs = []
-        attachments = []
+        images = []
         for fname in os.listdir(tmpdir):
             base, ext = os.path.splitext(fname)
             if ext == '.png':
@@ -174,7 +168,7 @@
                 decode_png(os.path.join(tmpdir, fname),
                            os.path.join(outdir, fname))
                 pngs.append(base)
-                attachments.append(fname)
+                images.append(fname)
             elif ext in ('.html', '.htm'):
                 report_file = fname
             else:
@@ -184,11 +178,13 @@
         log.debug("Mangling html report file %s", report_file)
         mangle_html_report(os.path.join(tmpdir, report_file),
                            os.path.join(outdir, report_file), pngs)
-        return report_file, attachments
+        return (os.path.join(outdir, report_file),
+                [os.path.join(outdir, i) for i in images])
     finally:
         shutil.rmtree(tmpdir)
 
-def send_email(text_fn, html_fn, subject, recipients):
+def send_email(text_fn, html_fn, image_fns, subject, recipients, copy=[],
+               blind_copy=[]):
     """Send email"""
     # Generate email message
     text_msg = html_msg = None
@@ -197,8 +193,16 @@
             text_msg = MIMEText("Yocto build performance test report.\n" +
                                 f.read(), 'plain')
     if html_fn:
+        html_msg = msg = MIMEMultipart('related')
         with open(html_fn) as f:
-            html_msg = MIMEText(f.read(), 'html')
+            html_msg.attach(MIMEText(f.read(), 'html'))
+        for img_fn in image_fns:
+            # Expect that content id is same as the filename
+            cid = os.path.splitext(os.path.basename(img_fn))[0]
+            with open(img_fn, 'rb') as f:
+                image_msg = MIMEImage(f.read())
+            image_msg['Content-ID'] = '<{}>'.format(cid)
+            html_msg.attach(image_msg)
 
     if text_msg and html_msg:
         msg = MIMEMultipart('alternative')
@@ -217,6 +221,10 @@
                            '{}@{}'.format(pw_data.pw_name, socket.getfqdn()))
     msg['From'] = "{} <{}>".format(full_name, email)
     msg['To'] = ', '.join(recipients)
+    if copy:
+        msg['Cc'] = ', '.join(copy)
+    if blind_copy:
+        msg['Bcc'] = ', '.join(blind_copy)
     msg['Subject'] = subject
 
     # Send email
@@ -243,14 +251,19 @@
 
     try:
         log.debug("Storing email parts in %s", outdir)
-        html_report = None
+        html_report = images = None
         if args.html:
-            scrape_html_report(args.html, outdir, args.phantomjs_args)
-            html_report = os.path.join(outdir, os.path.basename(args.html))
+            html_report, images = scrape_html_report(args.html, outdir,
+                                                     args.phantomjs_args)
 
         if args.to:
             log.info("Sending email to %s", ', '.join(args.to))
-            send_email(args.text, html_report, args.subject, args.to)
+            if args.cc:
+                log.info("Copying to %s", ', '.join(args.cc))
+            if args.bcc:
+                log.info("Blind copying to %s", ', '.join(args.bcc))
+            send_email(args.text, html_report, images, args.subject,
+                       args.to, args.cc, args.bcc)
     except subprocess.CalledProcessError as err:
         log.error("%s, with output:\n%s", str(err), err.output.decode())
         return 1
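
The change above switches the HTML report from base64 data URIs to proper inline attachments: the mangled HTML now refers to each chart as <img src="cid:NAME"> and send_email() attaches the PNG with a matching Content-ID. A minimal sketch of that multipart/related pattern (the chart name and file are made up):

    from email.mime.image import MIMEImage
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText

    msg = MIMEMultipart('related')
    msg.attach(MIMEText('<html><body><img src="cid:chart1"></body></html>',
                        'html'))
    with open('chart1.png', 'rb') as f:   # hypothetical chart file
        img = MIMEImage(f.read())
    img['Content-ID'] = '<chart1>'        # must match the cid: reference
    msg.attach(img)

Mail clients resolve the cid: reference against the Content-ID header, so the charts render inline without inflating the HTML body itself.
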
diff --git a/import-layers/yocto-poky/scripts/contrib/patchreview.py b/import-layers/yocto-poky/scripts/contrib/patchreview.py
new file mode 100755
index 0000000..4e3e73c
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/contrib/patchreview.py
@@ -0,0 +1,211 @@
+#! /usr/bin/env python3
+
+# TODO
+# - option to just list all broken files
+# - test suite
+# - validate signed-off-by
+
+
+class Result:
+    # Whether the patch has an Upstream-Status or not
+    missing_upstream_status = False
+    # If the Upstream-Status tag is malformed in some way (string for bad bit)
+    malformed_upstream_status = None
+    # If the Upstream-Status value is unknown (boolean)
+    unknown_upstream_status = False
+    # The upstream status value (Pending, etc)
+    upstream_status = None
+    # Whether the patch has a Signed-off-by or not
+    missing_sob = False
+    # Whether the Signed-off-by tag is malformed in some way
+    malformed_sob = False
+    # The Signed-off-by tag value
+    sob = None
+    # Whether a patch looks like a CVE but doesn't have a CVE tag
+    missing_cve = False
+
+def blame_patch(patch):
+    """
+    From a patch filename, return a list of "commit summary (author name <author
+    email>)" strings representing the history.
+    """
+    import subprocess
+    return subprocess.check_output(("git", "log",
+                                    "--follow", "--find-renames", "--diff-filter=A",
+                                    "--format=%s (%aN <%aE>)",
+                                    "--", patch)).decode("utf-8").splitlines()
+
+def patchreview(patches):
+    import re
+
+    # General pattern: start of line, optional whitespace, tag with optional
+    # hyphen or spaces, maybe a colon, some whitespace, then the value, all case
+    # insensitive.
+    sob_re = re.compile(r"^[\t ]*(Signed[-_ ]off[-_ ]by:?)[\t ]*(.+)", re.IGNORECASE | re.MULTILINE)
+    status_re = re.compile(r"^[\t ]*(Upstream[-_ ]Status:?)[\t ]*(\w*)", re.IGNORECASE | re.MULTILINE)
+    status_values = ("accepted", "pending", "inappropriate", "backport", "submitted", "denied")
+    cve_tag_re = re.compile(r"^[\t ]*(CVE:)[\t ]*(.*)", re.IGNORECASE | re.MULTILINE)
+    cve_re = re.compile(r"cve-[0-9]{4}-[0-9]{4,6}", re.IGNORECASE)
+
+    results = {}
+
+    for patch in patches:
+        result = Result()
+        results[patch] = result
+
+        content = open(patch, encoding='ascii', errors='ignore').read()
+
+        # Find the Signed-off-by tag
+        match = sob_re.search(content)
+        if match:
+            value = match.group(1)
+            if value != "Signed-off-by:":
+                result.malformed_sob = value
+            result.sob = match.group(2)
+        else:
+            result.missing_sob = True
+
+
+        # Find the Upstream-Status tag
+        match = status_re.search(content)
+        if match:
+            value = match.group(1)
+            if value != "Upstream-Status:":
+                result.malformed_upstream_status = value
+
+            value = match.group(2).lower()
+            # TODO: check case
+            if value not in status_values:
+                result.unknown_upstream_status = True
+            result.upstream_status = value
+        else:
+            result.missing_upstream_status = True
+
+        # Check that patches which look like CVEs have CVE tags
+        if cve_re.search(patch) or cve_re.search(content):
+            if not cve_tag_re.search(content):
+                result.missing_cve = True
+        # TODO: extract CVE list
+
+    return results
+
+
+def analyse(results, want_blame=False, verbose=True):
+    """
+    want_blame: display blame data for each malformed patch
+    verbose: display per-file results instead of just summary
+    """
+
+    # want_blame requires verbose, so disable blame if we're not verbose
+    if want_blame and not verbose:
+        want_blame = False
+
+    total_patches = 0
+    missing_sob = 0
+    malformed_sob = 0
+    missing_status = 0
+    malformed_status = 0
+    missing_cve = 0
+    pending_patches = 0
+
+    for patch in sorted(results):
+        r = results[patch]
+        total_patches += 1
+        need_blame = False
+
+        # Build statistics
+        if r.missing_sob:
+            missing_sob += 1
+        if r.malformed_sob:
+            malformed_sob += 1
+        if r.missing_upstream_status:
+            missing_status += 1
+        if r.malformed_upstream_status or r.unknown_upstream_status:
+            malformed_status += 1
+        if r.missing_cve:
+            missing_cve += 1
+        if r.upstream_status == "pending":
+            pending_patches += 1
+
+        # Output warnings
+        if r.missing_sob:
+            need_blame = True
+            if verbose:
+                print("Missing Signed-off-by tag (%s)" % patch)
+        # TODO: disable this for now as too much fails
+        if False and r.malformed_sob:
+            need_blame = True
+            if verbose:
+                print("Malformed Signed-off-by '%s' (%s)" % (r.malformed_sob, patch))
+        if r.missing_cve:
+            need_blame = True
+            if verbose:
+                print("Missing CVE tag (%s)" % patch)
+        if r.missing_upstream_status:
+            need_blame = True
+            if verbose:
+                print("Missing Upstream-Status tag (%s)" % patch)
+        if r.malformed_upstream_status:
+            need_blame = True
+            if verbose:
+                print("Malformed Upstream-Status '%s' (%s)" % (r.malformed_upstream_status, patch))
+        if r.unknown_upstream_status:
+            need_blame = True
+            if verbose:
+                print("Unknown Upstream-Status value '%s' (%s)" % (r.upstream_status, patch))
+
+        if want_blame and need_blame:
+            print("\n".join(blame_patch(patch)) + "\n")
+
+    def percent(num):
+        try:
+            return "%d (%d%%)" % (num, round(num * 100.0 / total_patches))
+        except ZeroDivisionError:
+            return "N/A"
+
+    if verbose:
+        print()
+
+    print("""Total patches found: %d
+Patches missing Signed-off-by: %s
+Patches with malformed Signed-off-by: %s
+Patches missing CVE: %s
+Patches missing Upstream-Status: %s
+Patches with malformed Upstream-Status: %s
+Patches in Pending state: %s""" % (total_patches,
+                                   percent(missing_sob),
+                                   percent(malformed_sob),
+                                   percent(missing_cve),
+                                   percent(missing_status),
+                                   percent(malformed_status),
+                                   percent(pending_patches)))
+
+
+
+def histogram(results):
+    from toolz import recipes, dicttoolz
+    import math
+    counts = recipes.countby(lambda r: r.upstream_status, results.values())
+    bars = dicttoolz.valmap(lambda v: "#" * int(math.ceil(float(v) / len(results) * 100)), counts)
+    for k in bars:
+        print("%-20s %s (%d)" % (k.capitalize() if k else "No status", bars[k], counts[k]))
+
+
+if __name__ == "__main__":
+    import argparse, subprocess, os
+
+    args = argparse.ArgumentParser(description="Patch Review Tool")
+    args.add_argument("-b", "--blame", action="store_true", help="show blame for malformed patches")
+    args.add_argument("-v", "--verbose", action="store_true", help="show per-patch results")
+    args.add_argument("-g", "--histogram", action="store_true", help="show patch histogram")
+    args.add_argument("directory", nargs="?", help="directory to scan")
+    args = args.parse_args()
+
+    if args.directory:
+        os.chdir(args.directory)
+    patches = subprocess.check_output(("git", "ls-files", "*.patch", "*.diff")).decode("utf-8").split()
+    results = patchreview(patches)
+    analyse(results, want_blame=args.blame, verbose=args.verbose)
+    if args.histogram:
+        print()
+        histogram(results)
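
patchreview.py classifies patches purely by regular expressions over the patch text. A small sketch of the two main tag matchers against an invented patch header (same patterns as above):

    import re

    status_re = re.compile(r"^[\t ]*(Upstream[-_ ]Status:?)[\t ]*(\w*)",
                           re.IGNORECASE | re.MULTILINE)
    sob_re = re.compile(r"^[\t ]*(Signed[-_ ]off[-_ ]by:?)[\t ]*(.+)",
                        re.IGNORECASE | re.MULTILINE)

    sample = ("Fix build with musl\n\n"
              "Upstream-Status: Backport\n"
              "Signed-off-by: A Developer <dev@example.com>\n")

    print(status_re.search(sample).group(2).lower())  # backport
    print(sob_re.search(sample).group(2))             # A Developer <dev@example.com>

A tag that matches but is not spelled exactly "Upstream-Status:" or "Signed-off-by:" is reported as malformed, and a status value outside the accepted list (accepted, pending, inappropriate, backport, submitted, denied) is reported as unknown.
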
diff --git a/import-layers/yocto-poky/scripts/contrib/patchtest.sh b/import-layers/yocto-poky/scripts/contrib/patchtest.sh
new file mode 100755
index 0000000..7fe5666
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/contrib/patchtest.sh
@@ -0,0 +1,118 @@
+#!/bin/bash
+# ex:ts=4:sw=4:sts=4:et
+# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
+#
+# patchtest: Run patchtest on commits starting at master
+#
+# Copyright (c) 2017, Intel Corporation.
+# All rights reserved.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
+#
+set -o errexit
+
+# Default values
+pokydir=''
+
+usage() {
+CMD=$(basename $0)
+cat <<EOM >&2
+Usage: $CMD [-h] [-p pokydir]
+  -p pokydir  Defaults to current directory
+EOM
+
+    exit 1
+}
+
+function clone() {
+    local REPOREMOTE=$1
+    local REPODIR=$2
+    if [ ! -d $REPODIR ]; then
+	git clone $REPOREMOTE $REPODIR --quiet
+    else
+	( cd $REPODIR; git pull --quiet )
+    fi
+}
+
+while getopts ":p:h" opt; do
+    case $opt in
+	p)
+	    pokydir=$OPTARG
+	    ;;
+	h)
+	    usage
+	    ;;
+	\?)
+	    echo "Invalid option: -$OPTARG" >&2
+	    usage
+	    ;;
+	:)
+	    echo "Option -$OPTARG requires an argument." >&2
+	    usage
+	    ;;
+    esac
+done
+shift $((OPTIND-1))
+
+CDIR="$PWD"
+
+# default pokydir to current directory if user did not specify one
+if [ -z "$pokydir" ]; then
+    pokydir="$CDIR"
+fi
+
+PTENV="$PWD/patchtest"
+PT="$PTENV/patchtest"
+PTOE="$PTENV/patchtest-oe"
+
+if ! which virtualenv > /dev/null; then
+    echo "Install virtualenv before proceeding"
+    exit 1;
+fi
+
+# activate the virtual env
+virtualenv $PTENV --quiet
+source $PTENV/bin/activate
+
+cd $PTENV
+
+# clone or pull
+clone git://git.yoctoproject.org/patchtest $PT
+clone git://git.yoctoproject.org/patchtest-oe $PTOE
+
+# install requirements
+pip install -r $PT/requirements.txt --quiet
+pip install -r $PTOE/requirements.txt --quiet
+
+PATH="$PT:$PT/scripts:$PATH"
+
+# loop through parent to HEAD and execute patchtest on each commit
+for commit in $(git rev-list master..HEAD --reverse)
+do
+    shortlog="$(git log "$commit^1..$commit" --pretty='%h: %aN: %cd: %s')"
+    log="$(git format-patch "$commit^1..$commit" --stdout | patchtest - -r $pokydir -s $PTOE/tests --base-commit $commit^1 --json 2>/dev/null | create-summary --fail --only-results)"
+    if [ -z "$log" ]; then
+	shortlog="$shortlog: OK"
+    else
+	shortlog="$shortlog: FAIL"
+    fi
+    echo "$shortlog"
+    echo "$log" | sed -n -e '/Issue/p' -e '/Suggested fix/p'
+    echo ""
+done
+
+deactivate
+
+cd $CDIR
diff --git a/import-layers/yocto-poky/scripts/contrib/python/generate-manifest-2.7.py b/import-layers/yocto-poky/scripts/contrib/python/generate-manifest-2.7.py
index 8c3655d..586b329 100755
--- a/import-layers/yocto-poky/scripts/contrib/python/generate-manifest-2.7.py
+++ b/import-layers/yocto-poky/scripts/contrib/python/generate-manifest-2.7.py
@@ -28,6 +28,7 @@
     def __init__( self, outfile, isNative ):
         """initialize"""
         self.packages = {}
+        self.excluded_pkgs = []
         self.targetPrefix = "${libdir}/python%s/" % VERSION[:3]
         self.isNative = isNative
         self.output = outfile
@@ -52,7 +53,7 @@
         self.out( """ """ )
         self.out( "" )
 
-    def addPackage( self, name, description, dependencies, filenames ):
+    def addPackage( self, name, description, dependencies, filenames, mod_exclude = False ):
         """add a package to the Makefile"""
         if type( filenames ) == type( "" ):
             filenames = filenames.split()
@@ -62,6 +63,8 @@
                 fullFilenames.append( "%s%s" % ( self.targetPrefix, filename ) )
             else:
                 fullFilenames.append( filename )
+        if mod_exclude:
+            self.excluded_pkgs.append( name )
         self.packages[name] = description, dependencies, fullFilenames
 
     def doBody( self ):
@@ -74,13 +77,11 @@
         #
 
         if self.isNative:
-            rprovideLine = 'RPROVIDES+="'
-            for name in sorted(self.packages):
-                rprovideLine += "%s-native " % name.replace( '${PN}', 'python' )
-            rprovideLine += '"'
+            pkglist = []
+            for name in ['${PN}-modules'] + sorted(self.packages):
+                pkglist.append('%s-native' % name.replace('${PN}', 'python'))
 
-            self.out( rprovideLine )
-            self.out( "" )
+            self.out('RPROVIDES += "%s"' % " ".join(pkglist))
             return
 
         #
@@ -149,7 +150,7 @@
         line = 'RDEPENDS_${PN}-modules="'
 
         for name, data in sorted(self.packages.items()):
-            if name not in ['${PN}-dev', '${PN}-distutils-staticdev']:
+            if name not in ['${PN}-dev', '${PN}-distutils-staticdev'] and name not in self.excluded_pkgs:
                 line += "%s " % name
 
         self.out( "%s \"" % line )
@@ -384,7 +385,7 @@
     "pty.* tty.*" )
 
     m.addPackage( "${PN}-tests", "Python tests", "${PN}-core ${PN}-modules",
-    "test" ) # package
+    "test", True ) # package
 
     m.addPackage( "${PN}-threading", "Python threading & synchronization support", "${PN}-core ${PN}-lang",
     "_threading_local.* dummy_thread.* dummy_threading.* mutex.* threading.* Queue.*" )
diff --git a/import-layers/yocto-poky/scripts/contrib/python/generate-manifest-3.5.py b/import-layers/yocto-poky/scripts/contrib/python/generate-manifest-3.5.py
index 075860c..6352f8f 100755
--- a/import-layers/yocto-poky/scripts/contrib/python/generate-manifest-3.5.py
+++ b/import-layers/yocto-poky/scripts/contrib/python/generate-manifest-3.5.py
@@ -31,6 +31,7 @@
     def __init__( self, outfile, isNative ):
         """initialize"""
         self.packages = {}
+        self.excluded_pkgs = []
         self.targetPrefix = "${libdir}/python%s/" % VERSION[:3]
         self.isNative = isNative
         self.output = outfile
@@ -55,7 +56,7 @@
         self.out( """ """ )
         self.out( "" )
 
-    def addPackage( self, name, description, dependencies, filenames ):
+    def addPackage( self, name, description, dependencies, filenames, mod_exclude = False ):
         """add a package to the Makefile"""
         if type( filenames ) == type( "" ):
             filenames = filenames.split()
@@ -67,6 +68,8 @@
                                                  self.pycachePath( filename ) ) )
             else:
                 fullFilenames.append( filename )
+        if mod_exclude:
+            self.excluded_pkgs.append( name )
         self.packages[name] = description, dependencies, fullFilenames
 
     def pycachePath( self, filename ):
@@ -87,13 +90,11 @@
         #
 
         if self.isNative:
-            rprovideLine = 'RPROVIDES+="'
-            for name in sorted(self.packages):
-                rprovideLine += "%s-native " % name.replace( '${PN}', 'python3' )
-            rprovideLine += '"'
+            pkglist = []
+            for name in ['${PN}-modules'] + sorted(self.packages):
+                pkglist.append('%s-native' % name.replace('${PN}', 'python3'))
 
-            self.out( rprovideLine )
-            self.out( "" )
+            self.out('RPROVIDES += "%s"' % " ".join(pkglist))
             return
 
         #
@@ -162,7 +163,7 @@
         line = 'RDEPENDS_${PN}-modules="'
 
         for name, data in sorted(self.packages.items()):
-            if name not in ['${PN}-dev', '${PN}-distutils-staticdev']:
+            if name not in ['${PN}-dev', '${PN}-distutils-staticdev'] and name not in self.excluded_pkgs:
                 line += "%s " % name
 
         self.out( "%s \"" % line )
@@ -224,7 +225,7 @@
     "${base_libdir}/*.o " +
     "${datadir}/aclocal " +
     "${datadir}/pkgconfig " +
-    "config/Makefile ")
+    "config*/Makefile ")
 
     m.addPackage( "${PN}-2to3", "Python automated Python 2 to 3 code translator", "${PN}-core",
     "lib2to3" ) # package
@@ -254,7 +255,7 @@
     "py_compile.* compileall.*" )
 
     m.addPackage( "${PN}-compression", "Python high-level compression support", "${PN}-core ${PN}-codecs ${PN}-importlib ${PN}-threading ${PN}-shell",
-    "gzip.* zipfile.* tarfile.* lib-dynload/bz2.*.so lib-dynload/zlib.*.so" )
+    "gzip.* zipfile.* tarfile.* lib-dynload/bz2.*.so lib-dynload/zlib.*.so bz2.py lzma.py _compression.py" )
 
     m.addPackage( "${PN}-crypt", "Python basic cryptographic and hashing support", "${PN}-core",
     "hashlib.* md5.* sha.* lib-dynload/crypt.*.so lib-dynload/_hashlib.*.so lib-dynload/_sha256.*.so lib-dynload/_sha512.*.so" )
@@ -402,8 +403,8 @@
     m.addPackage( "${PN}-terminal", "Python terminal controlling support", "${PN}-core ${PN}-io",
     "pty.* tty.*" )
 
-    m.addPackage( "${PN}-tests", "Python tests", "${PN}-core",
-    "test" ) # package
+    m.addPackage( "${PN}-tests", "Python tests", "${PN}-core ${PN}-compression",
+    "test", True ) # package
 
     m.addPackage( "${PN}-threading", "Python threading & synchronization support", "${PN}-core ${PN}-lang",
     "_threading_local.* dummy_thread.* dummy_threading.* mutex.* threading.* queue.*" )
diff --git a/import-layers/yocto-poky/scripts/create-pull-request b/import-layers/yocto-poky/scripts/create-pull-request
index e82858b..280880b 100755
--- a/import-layers/yocto-poky/scripts/create-pull-request
+++ b/import-layers/yocto-poky/scripts/create-pull-request
@@ -34,7 +34,7 @@
 usage() {
 CMD=$(basename $0)
 cat <<EOM
-Usage: $CMD [-h] [-o output_dir] [-m msg_body_file] [-s subject] [-r relative_to] [-i commit_id] [-d relative_dir] -u remote [-b branch]
+Usage: $CMD [-h] [-o output_dir] [-m msg_body_file] [-s subject] [-r relative_to] [-i commit_id] [-d relative_dir] -u remote [-b branch] [-- <format-patch options>]
   -b branch           Branch name in the specified remote (default: current branch)
   -l local branch     Local branch name (default: HEAD)
   -c                  Create an RFC (Request for Comment) patch series
@@ -57,6 +57,7 @@
    $CMD -u contrib -r master -i misc -b nitin/misc -o pull-misc
    $CMD -u contrib -p "RFC PATCH" -b nitin/experimental
    $CMD -u contrib -i misc -b nitin/misc -d ./bitbake
+   $CMD -u contrib -r origin/master -o /tmp/out.v3 -- -v3 --in-reply-to=20170511120134.XX7799@site.com
 EOM
 }
 
@@ -108,9 +109,16 @@
 	a)
 		CPR_CONTRIB_AUTO_PUSH="1"
 		;;
+	--)
+		shift
+		break
+		;;
 	esac
 done
 
+shift "$((OPTIND - 1))"
+extraopts="$@"
+
 if [ -z "$REMOTE" ]; then
 	echo "ERROR: Missing parameter -u or CPR_CONTRIB_REMOTE in env, no git remote!"
 	usage
@@ -201,7 +209,7 @@
 	ODIR=$(realpath $ODIR)
 	pdir=$(pwd)
 	cd $RELDIR
-	extraopts="--relative"
+	extraopts="$extraopts --relative"
 fi
 
 # Generate the patches and cover letter
@@ -218,7 +226,7 @@
 [ -n "$RELDIR" ] && cd $pdir
 
 # Customize the cover letter
-CL="$ODIR/0000-cover-letter.patch"
+CL="$(echo $ODIR/*0000-cover-letter.patch)"
 PM="$ODIR/pull-msg"
 GIT_VERSION=$(`git --version` | tr -d '[:alpha:][:space:].' | sed 's/\(...\).*/\1/')
 NEWER_GIT_VERSION=210
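create-pull-request now stops its own option parsing at '--' and forwards anything after it to git format-patch via extraopts (the new usage example passes -v3 and --in-reply-to), and the cover letter is located with a glob so reroll prefixes such as v3-0000-cover-letter.patch are still picked up. A hypothetical Python sketch of the '--' split, for illustration only:

    # Hypothetical sketch of splitting pass-through options at '--'.
    import sys

    args = sys.argv[1:]
    if "--" in args:
        idx = args.index("--")
        own_args, extraopts = args[:idx], args[idx + 1:]
    else:
        own_args, extraopts = args, []

    # extraopts is later appended to the format-patch invocation, roughly:
    #   git format-patch <revision range> -o <output dir> <extraopts...>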
diff --git a/import-layers/yocto-poky/scripts/devtool b/import-layers/yocto-poky/scripts/devtool
index c9ad9dd..5292f18 100755
--- a/import-layers/yocto-poky/scripts/devtool
+++ b/import-layers/yocto-poky/scripts/devtool
@@ -130,25 +130,6 @@
                                      'recipefile': recipefile}
                     logger.debug('Found recipe %s' % workspace[pn])
 
-def create_unlockedsigs():
-    """ This function will make unlocked-sigs.inc match the recipes in the
-    workspace. This runs on every run of devtool, but it lets us ensure
-    the unlocked items are in sync with the workspace. """
-
-    confdir = os.path.join(basepath, 'conf')
-    unlockedsigs = os.path.join(confdir, 'unlocked-sigs.inc')
-    bb.utils.mkdirhier(confdir)
-    with open(os.path.join(confdir, 'unlocked-sigs.inc'), 'w') as f:
-        f.write("# DO NOT MODIFY! YOUR CHANGES WILL BE LOST.\n" +
-                "# This layer was created by the OpenEmbedded devtool" +
-                " utility in order to\n" +
-                "# contain recipes that are unlocked.\n")
-
-        f.write('SIGGEN_UNLOCKED_RECIPES += "\\\n')
-        for pn in workspace:
-            f.write('    ' + pn)
-        f.write('"')
-
 def create_workspace(args, config, basepath, workspace):
     if args.layerpath:
         workspacedir = os.path.abspath(args.layerpath)
@@ -332,7 +313,6 @@
 
     if not getattr(args, 'no_workspace', False):
         read_workspace()
-        create_unlockedsigs()
 
     try:
         ret = args.func(args, config, basepath, workspace)
diff --git a/import-layers/yocto-poky/scripts/lib/argparse_oe.py b/import-layers/yocto-poky/scripts/lib/argparse_oe.py
index bf6eb17..9bdfc1c 100644
--- a/import-layers/yocto-poky/scripts/lib/argparse_oe.py
+++ b/import-layers/yocto-poky/scripts/lib/argparse_oe.py
@@ -167,3 +167,10 @@
             return '\n'.join(lines)
         else:
             return super(OeHelpFormatter, self)._format_action(action)
+
+def int_positive(value):
+    ivalue = int(value)
+    if ivalue <= 0:
+        raise argparse.ArgumentTypeError(
+                "%s is not a positive int value" % value)
+    return ivalue
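argparse_oe gains int_positive(), an argparse type callable that converts its argument to int and rejects values <= 0. A hypothetical usage sketch follows; the --threads option name is an example only, and it assumes scripts/lib is on the import path:

    # Hypothetical usage of int_positive as an argparse type validator.
    import argparse

    from argparse_oe import int_positive

    parser = argparse.ArgumentParser()
    parser.add_argument('--threads', type=int_positive, default=1,
                        help='number of worker threads (must be > 0)')

    parser.parse_args(['--threads', '4'])    # accepted
    # parser.parse_args(['--threads', '0'])  # argparse reports:
    #     argument --threads: 0 is not a positive int value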
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/help.py b/import-layers/yocto-poky/scripts/lib/bsp/help.py
index 4f0d772..85d446b 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/help.py
+++ b/import-layers/yocto-poky/scripts/lib/bsp/help.py
@@ -818,6 +818,10 @@
 
 yocto_layer_create_usage = """
 
+ WARNING: this plugin will be removed starting with 2.5 development in
+ favour of the 'bitbake-layers create-layer' script/plugin, which offers a
+ single script to manage layers.
+
  Create a new generic Yocto layer
 
  usage: yocto-layer create <layer-name> [layer_priority]
@@ -845,6 +849,10 @@
 
 yocto_layer_create_help = """
 
+WARNING: this plugin will be removed starting with 2.5 development in
+favour of the 'bitbake-layers create-layer' script/plugin, which offers a
+single script to manage layers.
+
 NAME
     yocto-layer create - Create a new generic Yocto layer
 
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/files/machine.scc b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/files/machine.scc
index 828400d..fb3866f 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/files/machine.scc
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/files/machine.scc
@@ -4,5 +4,3 @@
 
 include features/usb-net/usb-net.scc
 
-kconf hardware {{=machine}}-user-config.cfg
-include {{=machine}}-user-patches.scc
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/kernel-list.noinstall b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/kernel-list.noinstall
index 20f2059..917f0e2 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/kernel-list.noinstall
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/kernel-list.noinstall
@@ -1,5 +1,5 @@
 {{ if kernel_choice != "custom": }}
-{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.10) kernel? (y/n)" default:"y"}}
+{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.12) kernel? (y/n)" default:"y"}}
 
 {{ if kernel_choice != "custom" and use_default_kernel == "n": }}
-{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.10"}}
+{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.12"}}
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-dev.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-dev.bbappend
index c336007..22ed273 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-dev.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-dev.bbappend
@@ -19,7 +19,10 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
deleted file mode 100644
index d15a178..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
+++ /dev/null
@@ -1,35 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
index 5cc82e8..bae943e 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
@@ -20,7 +20,9 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
new file mode 100644
index 0000000..6f3e104
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
@@ -0,0 +1,37 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-patches.scc \
+            file://{{=machine}}-user-features.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.10"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
index 070bd87..62d1817 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
@@ -20,7 +20,9 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
deleted file mode 100644
index c391322..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
+++ /dev/null
@@ -1,35 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.1.bbappend
deleted file mode 100644
index 4e7d7cb..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.1.bbappend
+++ /dev/null
@@ -1,34 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.10.bbappend
index eaf4367..dfbecb5 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.10.bbappend
@@ -20,9 +20,12 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.12.bbappend
new file mode 100644
index 0000000..e874c9e
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.12.bbappend
@@ -0,0 +1,37 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.10"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.4.bbappend
index 56e8ad3..a809c76 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.4.bbappend
@@ -20,9 +20,12 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.8.bbappend
deleted file mode 100644
index 59752a9..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/arm/recipes-kernel/linux/linux-yocto_4.8.bbappend
+++ /dev/null
@@ -1,34 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/common/recipes-kernel/linux/linux-yocto-custom.bb b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/common/recipes-kernel/linux/linux-yocto-custom.bb
index fda955b..3ba4226 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/common/recipes-kernel/linux/linux-yocto-custom.bb
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/common/recipes-kernel/linux/linux-yocto-custom.bb
@@ -42,6 +42,7 @@
             file://{{=machine}}.cfg \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
+            file://{{=machine}}-user-features.scc \
            "
 
 {{ if kernel_choice == "custom" and custom_kernel_need_kbranch == "y" and custom_kernel_kbranch and custom_kernel_kbranch != "master": }}
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/common/recipes-kernel/linux/linux-yocto-custom/machine-user-features.scc b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/common/recipes-kernel/linux/linux-yocto-custom/machine-user-features.scc
new file mode 100644
index 0000000..582759e
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/common/recipes-kernel/linux/linux-yocto-custom/machine-user-features.scc
@@ -0,0 +1 @@
+# yocto-bsp-filename {{=machine}}-user-features.scc
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/common/recipes-kernel/linux/linux-yocto-custom/machine.scc b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/common/recipes-kernel/linux/linux-yocto-custom/machine.scc
index 0b6b413..64d3ed1 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/common/recipes-kernel/linux/linux-yocto-custom/machine.scc
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/common/recipes-kernel/linux/linux-yocto-custom/machine.scc
@@ -14,5 +14,3 @@
 # These are used by yocto-kernel to add config fragments and features.
 # Don't remove if you plan on using yocto-kernel with this BSP.
 
-kconf hardware {{=machine}}-user-config.cfg
-include {{=machine}}-user-patches.scc
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/files/machine.scc b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/files/machine.scc
index 3d32f11..3e4c54f 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/files/machine.scc
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/files/machine.scc
@@ -17,5 +17,3 @@
 include cfg/boot-live.scc
 include features/power/intel.scc
 
-kconf hardware {{=machine}}-user-config.cfg
-include {{=machine}}-user-patches.scc
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/kernel-list.noinstall b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/kernel-list.noinstall
index 20f2059..917f0e2 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/kernel-list.noinstall
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/kernel-list.noinstall
@@ -1,5 +1,5 @@
 {{ if kernel_choice != "custom": }}
-{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.10) kernel? (y/n)" default:"y"}}
+{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.12) kernel? (y/n)" default:"y"}}
 
 {{ if kernel_choice != "custom" and use_default_kernel == "n": }}
-{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.10"}}
+{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.12"}}
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-dev.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-dev.bbappend
index c336007..22ed273 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-dev.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-dev.bbappend
@@ -19,7 +19,10 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
deleted file mode 100644
index d15a178..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
+++ /dev/null
@@ -1,35 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
index 5cc82e8..bae943e 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
@@ -20,7 +20,9 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
new file mode 100644
index 0000000..6f3e104
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
@@ -0,0 +1,37 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-patches.scc \
+            file://{{=machine}}-user-features.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.10"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
index 070bd87..62d1817 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
@@ -20,7 +20,9 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
deleted file mode 100644
index c391322..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
+++ /dev/null
@@ -1,35 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.1.bbappend
deleted file mode 100644
index 5ed144b..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.1.bbappend
+++ /dev/null
@@ -1,34 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard:standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard:standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.10.bbappend
index 0205920..f8616ed 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.10.bbappend
@@ -20,9 +20,12 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.12.bbappend
new file mode 100644
index 0000000..20d57f6
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.12.bbappend
@@ -0,0 +1,37 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard:standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard:standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.10"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.4.bbappend
index ab644bd..0a9d475 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.4.bbappend
@@ -20,9 +20,12 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.8.bbappend
deleted file mode 100644
index a535aea..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/i386/recipes-kernel/linux/linux-yocto_4.8.bbappend
+++ /dev/null
@@ -1,34 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard:standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard:standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/files/machine.scc b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/files/machine.scc
index f39dc3e..792fdc9 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/files/machine.scc
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/files/machine.scc
@@ -4,5 +4,3 @@
 include cfg/usb-mass-storage.scc
 include cfg/fs/vfat.scc
 
-kconf hardware {{=machine}}-user-config.cfg
-include {{=machine}}-user-patches.scc
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/kernel-list.noinstall b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/kernel-list.noinstall
index 20f2059..917f0e2 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/kernel-list.noinstall
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/kernel-list.noinstall
@@ -1,5 +1,5 @@
 {{ if kernel_choice != "custom": }}
-{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.10) kernel? (y/n)" default:"y"}}
+{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.12) kernel? (y/n)" default:"y"}}
 
 {{ if kernel_choice != "custom" and use_default_kernel == "n": }}
-{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.10"}}
+{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.12"}}
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-dev.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-dev.bbappend
index c336007..22ed273 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-dev.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-dev.bbappend
@@ -19,7 +19,10 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
deleted file mode 100644
index d15a178..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
+++ /dev/null
@@ -1,35 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
index 5cc82e8..bae943e 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
@@ -20,7 +20,9 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
new file mode 100644
index 0000000..6f3e104
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
@@ -0,0 +1,37 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-patches.scc \
+            file://{{=machine}}-user-features.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.12"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
index 070bd87..62d1817 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
@@ -20,7 +20,9 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
deleted file mode 100644
index c391322..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
+++ /dev/null
@@ -1,35 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.1.bbappend
deleted file mode 100644
index 4e7d7cb..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.1.bbappend
+++ /dev/null
@@ -1,34 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.10.bbappend
index eaf4367..dfbecb5 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.10.bbappend
@@ -20,9 +20,12 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.12.bbappend
new file mode 100644
index 0000000..e874c9e
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.12.bbappend
@@ -0,0 +1,37 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.12"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.4.bbappend
index 56e8ad3..a809c76 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.4.bbappend
@@ -20,9 +20,12 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.8.bbappend
deleted file mode 100644
index 59752a9..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips/recipes-kernel/linux/linux-yocto_4.8.bbappend
+++ /dev/null
@@ -1,34 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/files/machine.scc b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/files/machine.scc
index f39dc3e..792fdc9 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/files/machine.scc
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/files/machine.scc
@@ -4,5 +4,3 @@
 include cfg/usb-mass-storage.scc
 include cfg/fs/vfat.scc
 
-kconf hardware {{=machine}}-user-config.cfg
-include {{=machine}}-user-patches.scc
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/kernel-list.noinstall b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/kernel-list.noinstall
index 20f2059..917f0e2 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/kernel-list.noinstall
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/kernel-list.noinstall
@@ -1,5 +1,5 @@
 {{ if kernel_choice != "custom": }}
-{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.10) kernel? (y/n)" default:"y"}}
+{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.12) kernel? (y/n)" default:"y"}}
 
 {{ if kernel_choice != "custom" and use_default_kernel == "n": }}
-{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.10"}}
+{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.12"}}
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-dev.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-dev.bbappend
index c336007..22ed273 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-dev.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-dev.bbappend
@@ -19,7 +19,10 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
deleted file mode 100644
index d15a178..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
+++ /dev/null
@@ -1,35 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
index 5cc82e8..bae943e 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
@@ -20,7 +20,9 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
new file mode 100644
index 0000000..6f3e104
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
@@ -0,0 +1,37 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-patches.scc \
+            file://{{=machine}}-user-features.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.12"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
index 070bd87..62d1817 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
@@ -20,7 +20,9 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
deleted file mode 100644
index c391322..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
+++ /dev/null
@@ -1,35 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.1.bbappend
deleted file mode 100644
index 802e5f4..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.1.bbappend
+++ /dev/null
@@ -1,34 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/edgerouter" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/edgerouter" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.10.bbappend
index 512b597..336a956 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.10.bbappend
@@ -20,9 +20,12 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.12.bbappend
new file mode 100644
index 0000000..5333c30
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.12.bbappend
@@ -0,0 +1,37 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/edgerouter" }}
+
+{{ if need_new_kbranch == "n": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/edgerouter" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.12"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.4.bbappend
index dbb0fd5..7d18566 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.4.bbappend
@@ -20,9 +20,12 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.8.bbappend
deleted file mode 100644
index c2eb40d..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/mips64/recipes-kernel/linux/linux-yocto_4.8.bbappend
+++ /dev/null
@@ -1,34 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/edgerouter" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/edgerouter" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/files/machine.scc b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/files/machine.scc
index 7aac8b0..89bb97e 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/files/machine.scc
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/files/machine.scc
@@ -6,5 +6,3 @@
 
 include cfg/dmaengine.scc
 
-kconf hardware {{=machine}}-user-config.cfg
-include {{=machine}}-user-patches.scc
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/kernel-list.noinstall b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/kernel-list.noinstall
index 20f2059..917f0e2 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/kernel-list.noinstall
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/kernel-list.noinstall
@@ -1,5 +1,5 @@
 {{ if kernel_choice != "custom": }}
-{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.10) kernel? (y/n)" default:"y"}}
+{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.12) kernel? (y/n)" default:"y"}}
 
 {{ if kernel_choice != "custom" and use_default_kernel == "n": }}
-{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.10"}}
+{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.12"}}
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-dev.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-dev.bbappend
index c336007..22ed273 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-dev.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-dev.bbappend
@@ -19,7 +19,10 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
deleted file mode 100644
index d15a178..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
+++ /dev/null
@@ -1,35 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
index 5cc82e8..bae943e 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
@@ -20,7 +20,9 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
new file mode 100644
index 0000000..6f3e104
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
@@ -0,0 +1,37 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-patches.scc \
+            file://{{=machine}}-user-features.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.12"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
index 070bd87..62d1817 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
@@ -20,7 +20,9 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
deleted file mode 100644
index c391322..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
+++ /dev/null
@@ -1,35 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.1.bbappend
deleted file mode 100644
index 4e7d7cb..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.1.bbappend
+++ /dev/null
@@ -1,34 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.10.bbappend
index eaf4367..dfbecb5 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.10.bbappend
@@ -20,9 +20,12 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.12.bbappend
new file mode 100644
index 0000000..e874c9e
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.12.bbappend
@@ -0,0 +1,37 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.12"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.4.bbappend
index 56e8ad3..a809c76 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.4.bbappend
@@ -20,9 +20,12 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.8.bbappend
deleted file mode 100644
index 59752a9..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/powerpc/recipes-kernel/linux/linux-yocto_4.8.bbappend
+++ /dev/null
@@ -1,34 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/files/machine.scc b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/files/machine.scc
index 8301e05..d25d0a0 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/files/machine.scc
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/files/machine.scc
@@ -1,5 +1,3 @@
 # yocto-bsp-filename {{=machine}}.scc
 kconf hardware {{=machine}}.cfg
 
-kconf hardware {{=machine}}-user-config.cfg
-include {{=machine}}-user-patches.scc
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/kernel-list.noinstall b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/kernel-list.noinstall
index 20f2059..917f0e2 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/kernel-list.noinstall
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/kernel-list.noinstall
@@ -1,5 +1,5 @@
 {{ if kernel_choice != "custom": }}
-{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.10) kernel? (y/n)" default:"y"}}
+{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.12) kernel? (y/n)" default:"y"}}
 
 {{ if kernel_choice != "custom" and use_default_kernel == "n": }}
-{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.10"}}
+{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.12"}}
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-dev.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-dev.bbappend
index 7e3ce5b..d7b9cef 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-dev.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-dev.bbappend
@@ -45,11 +45,15 @@
 {{ if need_new_kbranch == "n": }}
 KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
 
+{{ if qemuarch != "arm": }}
 {{ input type:"boolean" name:"smp" prio:"30" msg:"Would you like SMP support? (y/n)" default:"y"}}
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
deleted file mode 100644
index 81392ce..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
+++ /dev/null
@@ -1,64 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "arm": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"arm" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "arm": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"arm" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "powerpc": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"powerpc" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "powerpc": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"powerpc" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "i386": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"i386" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "i386": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"i386" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/common-pc" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "x86_64": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"x86_64" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "x86_64": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"x86_64" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "mips": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"mips" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "mips": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"mips" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "mips64": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"mips64" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "mips64": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"mips64" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
index 29ad17b..8c0fd15 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
@@ -45,11 +45,14 @@
 {{ if need_new_kbranch == "n": }}
 KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
 
+{{ if qemuarch != "arm": }}
 {{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
new file mode 100644
index 0000000..83eb216
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
@@ -0,0 +1,67 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y" and qemuarch == "arm": }}
+{{ input type:"choicelist" name:"new_kbranch" nameappend:"arm" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n" and qemuarch == "arm": }}
+{{ input type:"choicelist" name:"existing_kbranch" nameappend:"arm" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "y" and qemuarch == "powerpc": }}
+{{ input type:"choicelist" name:"new_kbranch" nameappend:"powerpc" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n" and qemuarch == "powerpc": }}
+{{ input type:"choicelist" name:"existing_kbranch" nameappend:"powerpc" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "y" and qemuarch == "i386": }}
+{{ input type:"choicelist" name:"new_kbranch" nameappend:"i386" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n" and qemuarch == "i386": }}
+{{ input type:"choicelist" name:"existing_kbranch" nameappend:"i386" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/common-pc" }}
+
+{{ if need_new_kbranch == "y" and qemuarch == "x86_64": }}
+{{ input type:"choicelist" name:"new_kbranch" nameappend:"x86_64" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n" and qemuarch == "x86_64": }}
+{{ input type:"choicelist" name:"existing_kbranch" nameappend:"x86_64" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "y" and qemuarch == "mips": }}
+{{ input type:"choicelist" name:"new_kbranch" nameappend:"mips" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n" and qemuarch == "mips": }}
+{{ input type:"choicelist" name:"existing_kbranch" nameappend:"mips" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "y" and qemuarch == "mips64": }}
+{{ input type:"choicelist" name:"new_kbranch" nameappend:"mips64" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n" and qemuarch == "mips64": }}
+{{ input type:"choicelist" name:"existing_kbranch" nameappend:"mips64" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ if qemuarch != "arm": }}
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-patches.scc \
+            file://{{=machine}}-user-features.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.10"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
index a73b1aa..22abc23 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
@@ -45,11 +45,14 @@
 {{ if need_new_kbranch == "n": }}
 KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
 
+{{ if qemuarch != "arm": }}
 {{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
deleted file mode 100644
index 7d40671..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
+++ /dev/null
@@ -1,64 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "arm": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"arm" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "arm": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"arm" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "powerpc": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"powerpc" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "powerpc": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"powerpc" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "i386": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"i386" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "i386": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"i386" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/common-pc" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "x86_64": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"x86_64" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "x86_64": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"x86_64" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "mips": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"mips" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "mips": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"mips" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "mips64": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"mips64" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "mips64": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"mips64" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.1.bbappend
deleted file mode 100644
index a9fd9ec..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.1.bbappend
+++ /dev/null
@@ -1,63 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "arm": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base your new BSP branch on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "arm": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose an existing machine branch to use for this BSP:" default:"standard/arm-versatile-926ejs" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "powerpc": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"powerpc" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "powerpc": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"powerpc" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/qemuppc" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "i386": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"i386" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "i386": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"i386" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "x86_64": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"x86_64" gen:"bsp.kernel.all_branches" branches_base:"standard"  prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "x86_64": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"x86_64" gen:"bsp.kernel.all_branches" branches_base:"standard"  prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "mips": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"mips" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/mti-malta32" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "mips64": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"mips64" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/mti-malta64" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "mips": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"mips" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "mips64": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"mips64" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Would you like SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.10.bbappend
index 5873da4..851d96c 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.10.bbappend
@@ -45,13 +45,17 @@
 {{ if need_new_kbranch == "n": }}
 KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
 
+{{ if qemuarch != "arm": }}
 {{ input type:"boolean" name:"smp" prio:"30" msg:"Would you like SMP support? (y/n)" default:"y"}}
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.12.bbappend
new file mode 100644
index 0000000..d7ce37e
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.12.bbappend
@@ -0,0 +1,67 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y" and qemuarch == "arm": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base your new BSP branch on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n" and qemuarch == "arm": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose an existing machine branch to use for this BSP:" default:"standard/arm-versatile-926ejs" }}
+
+{{ if need_new_kbranch == "y" and qemuarch == "powerpc": }}
+{{ input type:"choicelist" name:"new_kbranch" nameappend:"powerpc" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n" and qemuarch == "powerpc": }}
+{{ input type:"choicelist" name:"existing_kbranch" nameappend:"powerpc" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/qemuppc" }}
+
+{{ if need_new_kbranch == "y" and qemuarch == "i386": }}
+{{ input type:"choicelist" name:"new_kbranch" nameappend:"i386" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n" and qemuarch == "i386": }}
+{{ input type:"choicelist" name:"existing_kbranch" nameappend:"i386" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "y" and qemuarch == "x86_64": }}
+{{ input type:"choicelist" name:"new_kbranch" nameappend:"x86_64" gen:"bsp.kernel.all_branches" branches_base:"standard"  prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n" and qemuarch == "x86_64": }}
+{{ input type:"choicelist" name:"existing_kbranch" nameappend:"x86_64" gen:"bsp.kernel.all_branches" branches_base:"standard"  prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n" and qemuarch == "mips": }}
+{{ input type:"choicelist" name:"existing_kbranch" nameappend:"mips" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/mti-malta32" }}
+
+{{ if need_new_kbranch == "n" and qemuarch == "mips64": }}
+{{ input type:"choicelist" name:"existing_kbranch" nameappend:"mips64" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/mti-malta64" }}
+
+{{ if need_new_kbranch == "y" and qemuarch == "mips": }}
+{{ input type:"choicelist" name:"new_kbranch" nameappend:"mips" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "y" and qemuarch == "mips64": }}
+{{ input type:"choicelist" name:"new_kbranch" nameappend:"mips64" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ if qemuarch != "arm": }}
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Would you like SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.10"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.4.bbappend
index cdee773..71be913 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.4.bbappend
@@ -45,13 +45,17 @@
 {{ if need_new_kbranch == "n": }}
 KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
 
+{{ if qemuarch != "arm": }}
 {{ input type:"boolean" name:"smp" prio:"30" msg:"Would you like SMP support? (y/n)" default:"y"}}
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.8.bbappend
deleted file mode 100644
index 24c2880..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/qemu/recipes-kernel/linux/linux-yocto_4.8.bbappend
+++ /dev/null
@@ -1,63 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "arm": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base your new BSP branch on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "arm": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose an existing machine branch to use for this BSP:" default:"standard/arm-versatile-926ejs" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "powerpc": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"powerpc" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "powerpc": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"powerpc" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/qemuppc" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "i386": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"i386" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "i386": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"i386" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "x86_64": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"x86_64" gen:"bsp.kernel.all_branches" branches_base:"standard"  prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "x86_64": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"x86_64" gen:"bsp.kernel.all_branches" branches_base:"standard"  prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "mips": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"mips" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/mti-malta32" }}
-
-{{ if need_new_kbranch == "n" and qemuarch == "mips64": }}
-{{ input type:"choicelist" name:"existing_kbranch" nameappend:"mips64" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/mti-malta64" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "mips": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"mips" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "y" and qemuarch == "mips64": }}
-{{ input type:"choicelist" name:"new_kbranch" nameappend:"mips64" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Would you like SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/files/machine.scc b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/files/machine.scc
index 9b7c291..9d20d19 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/files/machine.scc
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/files/machine.scc
@@ -10,5 +10,3 @@
 include cfg/usb-mass-storage.scc
 include features/power/intel.scc
 
-kconf hardware {{=machine}}-user-config.cfg
-include {{=machine}}-user-patches.scc
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/kernel-list.noinstall b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/kernel-list.noinstall
index 20f2059..917f0e2 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/kernel-list.noinstall
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/kernel-list.noinstall
@@ -1,5 +1,5 @@
 {{ if kernel_choice != "custom": }}
-{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.10) kernel? (y/n)" default:"y"}}
+{{ input type:"boolean" name:"use_default_kernel" prio:"10" msg:"Would you like to use the default (4.12) kernel? (y/n)" default:"y"}}
 
 {{ if kernel_choice != "custom" and use_default_kernel == "n": }}
-{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.10"}}
+{{ input type:"choicelist" name:"kernel_choice" gen:"bsp.kernel.kernels" prio:"10" msg:"Please choose the kernel to use in this BSP:" default:"linux-yocto_4.12"}}
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-dev.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-dev.bbappend
index c336007..22ed273 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-dev.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-dev.bbappend
@@ -19,7 +19,10 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
deleted file mode 100644
index d15a178..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.1.bbappend
+++ /dev/null
@@ -1,35 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
index 5cc82e8..bae943e 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.10.bbappend
@@ -20,7 +20,9 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
new file mode 100644
index 0000000..6f3e104
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.12.bbappend
@@ -0,0 +1,37 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-patches.scc \
+            file://{{=machine}}-user-features.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.10"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
index 070bd87..62d1817 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.4.bbappend
@@ -20,7 +20,9 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-tiny.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-tiny.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-patches.scc \
             file://{{=machine}}-user-features.scc \
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
deleted file mode 100644
index c391322..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto-tiny_4.8.bbappend
+++ /dev/null
@@ -1,35 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto-tiny_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard/tiny" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/tiny/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-tiny.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-patches.scc \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto-tiny_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.1.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.1.bbappend
deleted file mode 100644
index 4e7d7cb..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.1.bbappend
+++ /dev/null
@@ -1,34 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.1": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.1"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.10.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.10.bbappend
index eaf4367..dfbecb5 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.10.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.10.bbappend
@@ -20,9 +20,12 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.12.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.12.bbappend
new file mode 100644
index 0000000..e874c9e
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.12.bbappend
@@ -0,0 +1,37 @@
+# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.12": }} this
+FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
+
+PR := "${PR}.1"
+
+COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
+
+{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
+
+{{ if need_new_kbranch == "y": }}
+{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n": }}
+{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
+
+{{ if need_new_kbranch == "n": }}
+KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
+
+{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
+{{ if smp == "y": }}
+KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
+
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
+            file://{{=machine}}-user-config.cfg \
+            file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
+           "
+
+# replace these SRCREVs with the real commit ids once you've had
+# the appropriate changes committed to the upstream linux-yocto repo
+SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
+#LINUX_VERSION = "4.10"
+#Remove the following line once AUTOREV is locked to a certain SRCREV
+KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.4.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.4.bbappend
index 56e8ad3..a809c76 100644
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.4.bbappend
+++ b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.4.bbappend
@@ -20,9 +20,12 @@
 {{ if smp == "y": }}
 KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
 
-SRC_URI += "file://{{=machine}}-standard.scc \
+SRC_URI += "file://{{=machine}}.scc \
+            file://{{=machine}}.cfg \
+            file://{{=machine}}-standard.scc \
             file://{{=machine}}-user-config.cfg \
             file://{{=machine}}-user-features.scc \
+            file://{{=machine}}-user-patches.scc \
            "
 
 # replace these SRCREVs with the real commit ids once you've had
diff --git a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.8.bbappend b/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.8.bbappend
deleted file mode 100644
index 59752a9..0000000
--- a/import-layers/yocto-poky/scripts/lib/bsp/substrate/target/arch/x86_64/recipes-kernel/linux/linux-yocto_4.8.bbappend
+++ /dev/null
@@ -1,34 +0,0 @@
-# yocto-bsp-filename {{ if kernel_choice == "linux-yocto_4.8": }} this
-FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
-
-PR := "${PR}.1"
-
-COMPATIBLE_MACHINE_{{=machine}} = "{{=machine}}"
-
-{{ input type:"boolean" name:"need_new_kbranch" prio:"20" msg:"Do you need a new machine branch for this BSP (the alternative is to re-use an existing branch)? [y/n]" default:"y" }}
-
-{{ if need_new_kbranch == "y": }}
-{{ input type:"choicelist" name:"new_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-{{ input type:"choicelist" name:"existing_kbranch" gen:"bsp.kernel.all_branches" branches_base:"standard" prio:"20" msg:"Please choose a machine branch to base this BSP on:" default:"standard/base" }}
-
-{{ if need_new_kbranch == "n": }}
-KBRANCH_{{=machine}}  = "{{=existing_kbranch}}"
-
-{{ input type:"boolean" name:"smp" prio:"30" msg:"Do you need SMP support? (y/n)" default:"y"}}
-{{ if smp == "y": }}
-KERNEL_FEATURES_append_{{=machine}} += " cfg/smp.scc"
-
-SRC_URI += "file://{{=machine}}-standard.scc \
-            file://{{=machine}}-user-config.cfg \
-            file://{{=machine}}-user-features.scc \
-           "
-
-# replace these SRCREVs with the real commit ids once you've had
-# the appropriate changes committed to the upstream linux-yocto repo
-SRCREV_machine_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-SRCREV_meta_pn-linux-yocto_{{=machine}} ?= "${AUTOREV}"
-#LINUX_VERSION = "4.8"
-#Remove the following line once AUTOREV is locked to a certain SRCREV
-KERNEL_VERSION_SANITY_SKIP = "1"
diff --git a/import-layers/yocto-poky/scripts/lib/build_perf/html/report.html b/import-layers/yocto-poky/scripts/lib/build_perf/html/report.html
index 165cbb8..291ad9d 100644
--- a/import-layers/yocto-poky/scripts/lib/build_perf/html/report.html
+++ b/import-layers/yocto-poky/scripts/lib/build_perf/html/report.html
@@ -53,9 +53,11 @@
   border-collapse: collapse;
 }
 .details th {
-  font-weight: normal;
   padding-right: 8px;
 }
+.details.plain th {
+  font-weight: normal;
+}
 .preformatted {
   font-family: monospace;
   white-space: pre-wrap;
@@ -118,29 +120,32 @@
       {% else %}
         {% set row_style = 'style="background-color: #ffffff"' %}
       {% endif %}
-      <tr {{ row_style }}><td>{{ test.name }}: {{ test.description }}</td>
       {% if test.status == 'SUCCESS' %}
         {% for measurement in test.measurements %}
-          {# add empty cell in place of the test name#}
-          {% if loop.index > 1 %}<td></td>{% endif %}
-          {% if measurement.absdiff > 0 %}
-            {% set result_style = "color: red" %}
-          {% elif measurement.absdiff == measurement.absdiff %}
-            {% set result_style = "color: green" %}
-          {% else %}
-            {% set result_style = "color: orange" %}
-          {%endif %}
-          <td>{{ measurement.description }}</td>
-          <td style="font-weight: bold">{{ measurement.value.mean }}</td>
-          <td style="{{ result_style }}">{{ measurement.absdiff_str }}</td>
-          <td style="{{ result_style }}">{{ measurement.reldiff }}</td>
-          </tr><tr {{ row_style }}>
+          <tr {{ row_style }}>
+            {% if loop.index == 1 %}
+              <td>{{ test.name }}: {{ test.description }}</td>
+            {% else %}
+              {# add empty cell in place of the test name #}
+              <td></td>
+            {% endif %}
+            {% if measurement.absdiff > 0 %}
+              {% set result_style = "color: red" %}
+            {% elif measurement.absdiff == measurement.absdiff %}
+              {% set result_style = "color: green" %}
+            {% else %}
+              {% set result_style = "color: orange" %}
+            {% endif %}
+            <td>{{ measurement.description }}</td>
+            <td style="font-weight: bold">{{ measurement.value.mean }}</td>
+            <td style="{{ result_style }}">{{ measurement.absdiff_str }}</td>
+            <td style="{{ result_style }}">{{ measurement.reldiff }}</td>
+          </tr>
         {% endfor %}
       {% else %}
         <td style="font-weight: bold; color: red;">{{test.status }}</td>
         <td></td> <td></td> <td></td> <td></td>
       {% endif %}
-      </tr>
     {% endfor %}
   </table>
 
@@ -165,6 +170,7 @@
             {{ measurement.absdiff_str }} ({{measurement.reldiff}})
             </span></span>
           </div>
+          {# Table for trendchart and the statistics #}
           <table style="width: 100%">
             <tr>
               <td style="width: 75%">
@@ -173,7 +179,7 @@
               </td>
               <td>
                 {# Measurement statistics #}
-                <table class="details">
+                <table class="details plain">
                   <tr>
                     <th>Test runs</th><td>{{ measurement.value.sample_cnt }}</td>
                   </tr><tr>
@@ -186,11 +192,85 @@
                     <th>Stdev</th><td>{{ measurement.value.stdev }}</td>
                   </tr><tr>
                     <th><div id="{{ test.name }}_{{ measurement.name }}_chart_png"></div></th>
+                    <td></td>
                   </tr>
                 </table>
               </td>
             </tr>
           </table>
+
+          {# Task and recipe summary from buildstats #}
+          {% if 'buildstats' in measurement %}
+            Task resource usage
+            <table class="details" style="width:100%">
+              <tr>
+                <th>Number of tasks</th>
+                <th>Top consumers of cputime</th>
+              </tr>
+              <tr>
+                <td style="vertical-align: top">{{ measurement.buildstats.tasks.count }} ({{ measurement.buildstats.tasks.change }})</td>
+                {# Table of most resource-hungry tasks #}
+                <td>
+                  <table class="details plain">
+                    {% for diff in measurement.buildstats.top_consumer|reverse %}
+                    <tr>
+                      <th>{{ diff.pkg }}.{{ diff.task }}</th>
+                      <td>{{ '%0.0f' % diff.value2 }} s</td>
+                    </tr>
+                    {% endfor %}
+                  </table>
+                </td>
+              </tr>
+              <tr>
+                <th>Biggest increase in cputime</th>
+                <th>Biggest decrease in cputime</th>
+              </tr>
+              <tr>
+                {# Table of the biggest increases in resource usage #}
+                <td>
+                  <table class="details plain">
+                    {% for diff in measurement.buildstats.top_increase|reverse %}
+                    <tr>
+                      <th>{{ diff.pkg }}.{{ diff.task }}</th>
+                      <td>{{ '%+0.0f' % diff.absdiff }} s</td>
+                    </tr>
+                    {% endfor %}
+                  </table>
+                </td>
+                {# Table of the biggest decreases in resource usage #}
+                <td>
+                  <table class="details plain">
+                    {% for diff in measurement.buildstats.top_decrease %}
+                    <tr>
+                      <th>{{ diff.pkg }}.{{ diff.task }}</th>
+                      <td>{{ '%+0.0f' % diff.absdiff }} s</td>
+                    </tr>
+                    {% endfor %}
+                  </table>
+                </td>
+              </tr>
+            </table>
+
+            {# Recipe version differences #}
+            {% if measurement.buildstats.ver_diff %}
+              <div style="margin-top: 16px">Recipe version changes</div>
+              <table class="details">
+                {% for head, recipes in measurement.buildstats.ver_diff.items() %}
+                  <tr>
+                    <th colspan="2">{{ head }}</th>
+                  </tr>
+                  {% for name, info in recipes|sort %}
+                    <tr>
+                      <td>{{ name }}</td>
+                      <td>{{ info }}</td>
+                    </tr>
+                  {% endfor %}
+                {% endfor %}
+              </table>
+            {% else %}
+              <div style="margin-top: 16px">No recipe version changes detected</div>
+            {% endif %}
+          {% endif %}
         </div>
       {% endfor %}
     {# Unsuccessful test #}
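
For illustration only (not part of the patch): the new buildstats block of the template above reads a nested structure from measurement.buildstats (tasks.count/change, the top_consumer/top_increase/top_decrease lists and ver_diff). A minimal Python sketch of that shape, with key names taken from the template and all values invented:

    # Illustrative only: key names come from the template above; every value
    # is a made-up placeholder.
    example_buildstats = {
        'tasks': {'count': 8201, 'change': '+12'},
        # rendered reversed, so the biggest consumer ends up on top
        'top_consumer': [
            {'pkg': 'gcc-cross-x86_64', 'task': 'do_compile', 'value2': 290.0},
            {'pkg': 'glibc', 'task': 'do_compile', 'value2': 312.0},
        ],
        'top_increase': [
            {'pkg': 'binutils', 'task': 'do_compile', 'absdiff': 42.0},
        ],
        'top_decrease': [
            {'pkg': 'qemu-native', 'task': 'do_compile', 'absdiff': -17.0},
        ],
        # heading -> list of (recipe, version info) pairs
        'ver_diff': {
            'changed in version': [('busybox', '1.24.1 -> 1.27.2')],
        },
    }
    print(example_buildstats['tasks']['count'])
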
diff --git a/import-layers/yocto-poky/scripts/lib/build_perf/report.py b/import-layers/yocto-poky/scripts/lib/build_perf/report.py
index eb00ccc..d99a367 100644
--- a/import-layers/yocto-poky/scripts/lib/build_perf/report.py
+++ b/import-layers/yocto-poky/scripts/lib/build_perf/report.py
@@ -11,12 +11,15 @@
 # more details.
 #
 """Handling of build perf test reports"""
-from collections import OrderedDict, Mapping
+from collections import OrderedDict, Mapping, namedtuple
 from datetime import datetime, timezone
 from numbers import Number
 from statistics import mean, stdev, variance
 
 
+AggregateTestData = namedtuple('AggregateTestData', ['metadata', 'results'])
+
+
 def isofmt_to_timestamp(string):
     """Convert timestamp string in ISO 8601 format into unix timestamp"""
     if '.' in string:
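
A tiny sketch (illustrative only) of how the new AggregateTestData pair might be used to keep a report's metadata and results together; the dictionary contents below are placeholders, not the actual report schema:

    from collections import namedtuple

    AggregateTestData = namedtuple('AggregateTestData', ['metadata', 'results'])

    # Hypothetical values; the real objects are parsed from the report files.
    data = AggregateTestData(metadata={'hostname': 'builder1'},
                             results={'tests': {}})
    print(data.metadata['hostname'], len(data.results['tests']))
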
diff --git a/import-layers/yocto-poky/scripts/lib/buildstats.py b/import-layers/yocto-poky/scripts/lib/buildstats.py
new file mode 100644
index 0000000..d9aadf3
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/buildstats.py
@@ -0,0 +1,349 @@
+#
+# Copyright (c) 2017, Intel Corporation.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms and conditions of the GNU General Public License,
+# version 2, as published by the Free Software Foundation.
+#
+# This program is distributed in the hope it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+"""Functionality for analyzing buildstats"""
+import json
+import logging
+import os
+import re
+from collections import namedtuple, OrderedDict
+from statistics import mean
+
+
+log = logging.getLogger()
+
+
+taskdiff_fields = ('pkg', 'pkg_op', 'task', 'task_op', 'value1', 'value2',
+                   'absdiff', 'reldiff')
+TaskDiff = namedtuple('TaskDiff', ' '.join(taskdiff_fields))
+
+
+class BSError(Exception):
+    """Error handling of buildstats"""
+    pass
+
+
+class BSTask(dict):
+    def __init__(self, *args, **kwargs):
+        self['start_time'] = None
+        self['elapsed_time'] = None
+        self['status'] = None
+        self['iostat'] = {}
+        self['rusage'] = {}
+        self['child_rusage'] = {}
+        super(BSTask, self).__init__(*args, **kwargs)
+
+    @property
+    def cputime(self):
+        """Sum of user and system time taken by the task"""
+        rusage = self['rusage']['ru_stime'] + self['rusage']['ru_utime']
+        if self['child_rusage']:
+            # Child rusage may have been optimized out
+            return rusage + self['child_rusage']['ru_stime'] + self['child_rusage']['ru_utime']
+        else:
+            return rusage
+
+    @property
+    def walltime(self):
+        """Elapsed wall clock time"""
+        return self['elapsed_time']
+
+    @property
+    def read_bytes(self):
+        """Bytes read from the block layer"""
+        return self['iostat']['read_bytes']
+
+    @property
+    def write_bytes(self):
+        """Bytes written to the block layer"""
+        return self['iostat']['write_bytes']
+
+    @property
+    def read_ops(self):
+        """Number of read operations on the block layer"""
+        if self['child_rusage']:
+            # Child rusage may have been optimized out
+            return self['rusage']['ru_inblock'] + self['child_rusage']['ru_inblock']
+        else:
+            return self['rusage']['ru_inblock']
+
+    @property
+    def write_ops(self):
+        """Number of write operations on the block layer"""
+        if self['child_rusage']:
+            # Child rusage may have been optimized out
+            return self['rusage']['ru_oublock'] + self['child_rusage']['ru_oublock']
+        else:
+            return self['rusage']['ru_oublock']
+
+    @classmethod
+    def from_file(cls, buildstat_file):
+        """Read buildstat text file"""
+        bs_task = cls()
+        log.debug("Reading task buildstats from %s", buildstat_file)
+        end_time = None
+        with open(buildstat_file) as fobj:
+            for line in fobj.readlines():
+                key, val = line.split(':', 1)
+                val = val.strip()
+                if key == 'Started':
+                    start_time = float(val)
+                    bs_task['start_time'] = start_time
+                elif key == 'Ended':
+                    end_time = float(val)
+                elif key.startswith('IO '):
+                    split = key.split()
+                    bs_task['iostat'][split[1]] = int(val)
+                elif key.find('rusage') >= 0:
+                    split = key.split()
+                    ru_key = split[-1]
+                    if ru_key in ('ru_stime', 'ru_utime'):
+                        val = float(val)
+                    else:
+                        val = int(val)
+                    ru_type = 'rusage' if split[0] == 'rusage' else \
+                                                      'child_rusage'
+                    bs_task[ru_type][ru_key] = val
+                elif key == 'Status':
+                    bs_task['status'] = val
+        if end_time is not None and start_time is not None:
+            bs_task['elapsed_time'] = end_time - start_time
+        else:
+            raise BSError("{} looks like a invalid buildstats file".format(buildstat_file))
+        return bs_task
+
+
+class BSTaskAggregate(object):
+    """Class representing multiple runs of the same task"""
+    properties = ('cputime', 'walltime', 'read_bytes', 'write_bytes',
+                  'read_ops', 'write_ops')
+
+    def __init__(self, tasks=None):
+        self._tasks = tasks or []
+        self._properties = {}
+
+    def __getattr__(self, name):
+        if name in self.properties:
+            if name not in self._properties:
+                # Calculate properties on demand only. We only provide the
+                # mean value so far.
+                self._properties[name] = mean([getattr(t, name) for t in self._tasks])
+            return self._properties[name]
+        else:
+            raise AttributeError("'BSTaskAggregate' has no attribute '{}'".format(name))
+
+    def append(self, task):
+        """Append new task"""
+        # Reset pre-calculated properties
+        assert isinstance(task, BSTask), "Type is '{}' instead of 'BSTask'".format(type(task))
+        self._properties = {}
+        self._tasks.append(task)
+
+
+class BSRecipe(object):
+    """Class representing buildstats of one recipe"""
+    def __init__(self, name, epoch, version, revision):
+        self.name = name
+        self.epoch = epoch
+        self.version = version
+        self.revision = revision
+        if epoch is None:
+            self.evr = "{}-{}".format(version, revision)
+        else:
+            self.evr = "{}_{}-{}".format(epoch, version, revision)
+        self.tasks = {}
+
+    def aggregate(self, bsrecipe):
+        """Aggregate data of another recipe buildstats"""
+        if self.nevr != bsrecipe.nevr:
+            raise ValueError("Refusing to aggregate buildstats, recipe version "
+                             "differs: {} vs. {}".format(self.nevr, bsrecipe.nevr))
+        if set(self.tasks.keys()) != set(bsrecipe.tasks.keys()):
+            raise ValueError("Refusing to aggregate buildstats, set of tasks "
+                             "in {} differ".format(self.name))
+
+        for taskname, taskdata in bsrecipe.tasks.items():
+            if not isinstance(self.tasks[taskname], BSTaskAggregate):
+                self.tasks[taskname] = BSTaskAggregate([self.tasks[taskname]])
+            self.tasks[taskname].append(taskdata)
+
+    @property
+    def nevr(self):
+        return self.name + '-' + self.evr
+
+
+class BuildStats(dict):
+    """Class representing buildstats of one build"""
+
+    @property
+    def num_tasks(self):
+        """Get number of tasks"""
+        num = 0
+        for recipe in self.values():
+            num += len(recipe.tasks)
+        return num
+
+    @classmethod
+    def from_json(cls, bs_json):
+        """Create new BuildStats object from JSON object"""
+        buildstats = cls()
+        for recipe in bs_json:
+            if recipe['name'] in buildstats:
+                raise BSError("Cannot handle multiple versions of the same "
+                              "package ({})".format(recipe['name']))
+            bsrecipe = BSRecipe(recipe['name'], recipe['epoch'],
+                                recipe['version'], recipe['revision'])
+            for task, data in recipe['tasks'].items():
+                bsrecipe.tasks[task] = BSTask(data)
+
+            buildstats[recipe['name']] = bsrecipe
+
+        return buildstats
+
+    @staticmethod
+    def from_file_json(path):
+        """Load buildstats from a JSON file"""
+        with open(path) as fobj:
+            bs_json = json.load(fobj)
+        return BuildStats.from_json(bs_json)
+
+
+    @staticmethod
+    def split_nevr(nevr):
+        """Split name and version information from recipe "nevr" string"""
+        n_e_v, revision = nevr.rsplit('-', 1)
+        match = re.match(r'^(?P<name>\S+)-((?P<epoch>[0-9]{1,5})_)?(?P<version>[0-9]\S*)$',
+                         n_e_v)
+        if not match:
+            # If we're not able to parse a version starting with a number, just
+            # take the part after the last dash
+            match = re.match(r'^(?P<name>\S+)-((?P<epoch>[0-9]{1,5})_)?(?P<version>[^-]+)$',
+                             n_e_v)
+        name = match.group('name')
+        version = match.group('version')
+        epoch = match.group('epoch')
+        return name, epoch, version, revision
+
+    @classmethod
+    def from_dir(cls, path):
+        """Load buildstats from a buildstats directory"""
+        if not os.path.isfile(os.path.join(path, 'build_stats')):
+            raise BSError("{} does not look like a buildstats directory".format(path))
+
+        log.debug("Reading buildstats directory %s", path)
+
+        buildstats = cls()
+        subdirs = os.listdir(path)
+        for dirname in subdirs:
+            recipe_dir = os.path.join(path, dirname)
+            if not os.path.isdir(recipe_dir):
+                continue
+            name, epoch, version, revision = cls.split_nevr(dirname)
+            bsrecipe = BSRecipe(name, epoch, version, revision)
+            for task in os.listdir(recipe_dir):
+                bsrecipe.tasks[task] = BSTask.from_file(
+                    os.path.join(recipe_dir, task))
+            if name in buildstats:
+                raise BSError("Cannot handle multiple versions of the same "
+                              "package ({})".format(name))
+            buildstats[name] = bsrecipe
+
+        return buildstats
+
+    def aggregate(self, buildstats):
+        """Aggregate other buildstats into this"""
+        if set(self.keys()) != set(buildstats.keys()):
+            raise ValueError("Refusing to aggregate buildstats, set of "
+                             "recipes is different")
+        for pkg, data in buildstats.items():
+            self[pkg].aggregate(data)
+
+
+def diff_buildstats(bs1, bs2, stat_attr, min_val=None, min_absdiff=None):
+    """Compare the tasks of two buildstats"""
+    tasks_diff = []
+    pkgs = set(bs1.keys()).union(set(bs2.keys()))
+    for pkg in pkgs:
+        tasks1 = bs1[pkg].tasks if pkg in bs1 else {}
+        tasks2 = bs2[pkg].tasks if pkg in bs2 else {}
+        if not tasks1:
+            pkg_op = '+'
+        elif not tasks2:
+            pkg_op = '-'
+        else:
+            pkg_op = ' '
+
+        for task in set(tasks1.keys()).union(set(tasks2.keys())):
+            task_op = ' '
+            if task in tasks1:
+                val1 = getattr(bs1[pkg].tasks[task], stat_attr)
+            else:
+                task_op = '+'
+                val1 = 0
+            if task in tasks2:
+                val2 = getattr(bs2[pkg].tasks[task], stat_attr)
+            else:
+                val2 = 0
+                task_op = '-'
+
+            if val1 == 0:
+                reldiff = float('inf')
+            else:
+                reldiff = 100 * (val2 - val1) / val1
+
+            if min_val and max(val1, val2) < min_val:
+                log.debug("Filtering out %s:%s (%s)", pkg, task,
+                          max(val1, val2))
+                continue
+            if min_absdiff and abs(val2 - val1) < min_absdiff:
+                log.debug("Filtering out %s:%s (difference of %s)", pkg, task,
+                          val2-val1)
+                continue
+            tasks_diff.append(TaskDiff(pkg, pkg_op, task, task_op, val1, val2,
+                                       val2-val1, reldiff))
+    return tasks_diff
+
+
+class BSVerDiff(object):
+    """Class representing recipe version differences between two buildstats"""
+    def __init__(self, bs1, bs2):
+        RecipeVerDiff = namedtuple('RecipeVerDiff', 'left right')
+
+        recipes1 = set(bs1.keys())
+        recipes2 = set(bs2.keys())
+
+        self.new = dict([(r, bs2[r]) for r in sorted(recipes2 - recipes1)])
+        self.dropped = dict([(r, bs1[r]) for r in sorted(recipes1 - recipes2)])
+        self.echanged = {}
+        self.vchanged = {}
+        self.rchanged = {}
+        self.unchanged = {}
+        self.empty_diff = False
+
+        common = recipes2.intersection(recipes1)
+        if common:
+            for recipe in common:
+                rdiff = RecipeVerDiff(bs1[recipe], bs2[recipe])
+                if bs1[recipe].epoch != bs2[recipe].epoch:
+                    self.echanged[recipe] = rdiff
+                elif bs1[recipe].version != bs2[recipe].version:
+                    self.vchanged[recipe] = rdiff
+                elif bs1[recipe].revision != bs2[recipe].revision:
+                    self.rchanged[recipe] = rdiff
+                else:
+                    self.unchanged[recipe] = rdiff
+
+        if len(recipes1) == len(recipes2) == len(self.unchanged):
+            self.empty_diff = True
+
+    def __bool__(self):
+        return not self.empty_diff
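
A minimal usage sketch for the module above, assuming two buildstats directories from separate builds are available and that scripts/lib is on sys.path (all paths below are placeholders):

    import sys
    sys.path.insert(0, 'import-layers/yocto-poky/scripts/lib')  # assumed checkout layout

    from buildstats import BuildStats, diff_buildstats, BSVerDiff

    bs1 = BuildStats.from_dir('buildstats/20171001')  # placeholder paths
    bs2 = BuildStats.from_dir('buildstats/20171002')

    # Per-task cputime changes, skipping tasks under 5 s and changes under 1 s.
    diffs = diff_buildstats(bs1, bs2, 'cputime', min_val=5, min_absdiff=1)
    for d in sorted(diffs, key=lambda d: abs(d.absdiff), reverse=True)[:10]:
        print('%s%s:%s%s %.1f -> %.1f s' % (d.pkg_op, d.pkg, d.task_op, d.task,
                                            d.value1, d.value2))

    # Recipe version differences between the two builds.
    vdiff = BSVerDiff(bs1, bs2)
    if vdiff:
        print('new recipes:', ', '.join(sorted(vdiff.new)))
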
diff --git a/import-layers/yocto-poky/scripts/lib/checklayer/__init__.py b/import-layers/yocto-poky/scripts/lib/checklayer/__init__.py
new file mode 100644
index 0000000..6395261
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/checklayer/__init__.py
@@ -0,0 +1,392 @@
+# Yocto Project layer check tool
+#
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+import os
+import re
+import subprocess
+from enum import Enum
+
+import bb.tinfoil
+
+class LayerType(Enum):
+    BSP = 0
+    DISTRO = 1
+    SOFTWARE = 2
+    ERROR_NO_LAYER_CONF = 98
+    ERROR_BSP_DISTRO = 99
+
+def _get_configurations(path):
+    configs = []
+
+    for f in os.listdir(path):
+        file_path = os.path.join(path, f)
+        if os.path.isfile(file_path) and f.endswith('.conf'):
+            configs.append(f[:-5]) # strip .conf
+    return configs
+
+def _get_layer_collections(layer_path, lconf=None, data=None):
+    import bb.parse
+    import bb.data
+
+    if lconf is None:
+        lconf = os.path.join(layer_path, 'conf', 'layer.conf')
+
+    if data is None:
+        ldata = bb.data.init()
+        bb.parse.init_parser(ldata)
+    else:
+        ldata = data.createCopy()
+
+    ldata.setVar('LAYERDIR', layer_path)
+    try:
+        ldata = bb.parse.handle(lconf, ldata, include=True)
+    except BaseException as exc:
+        raise LayerError(exc)
+    ldata.expandVarref('LAYERDIR')
+
+    collections = (ldata.getVar('BBFILE_COLLECTIONS') or '').split()
+    if not collections:
+        name = os.path.basename(layer_path)
+        collections = [name]
+
+    collections = {c: {} for c in collections}
+    for name in collections:
+        priority = ldata.getVar('BBFILE_PRIORITY_%s' % name)
+        pattern = ldata.getVar('BBFILE_PATTERN_%s' % name)
+        depends = ldata.getVar('LAYERDEPENDS_%s' % name)
+        collections[name]['priority'] = priority
+        collections[name]['pattern'] = pattern
+        collections[name]['depends'] = depends
+
+    return collections
+
+def _detect_layer(layer_path):
+    """
+        Scans the layer directory to detect whether the layer
+        is a BSP, Distro or Software layer.
+
+        Returns a dictionary with layer name, type and path.
+    """
+
+    layer = {}
+    layer_name = os.path.basename(layer_path)
+
+    layer['name'] = layer_name
+    layer['path'] = layer_path
+    layer['conf'] = {}
+
+    if not os.path.isfile(os.path.join(layer_path, 'conf', 'layer.conf')):
+        layer['type'] = LayerType.ERROR_NO_LAYER_CONF
+        return layer
+
+    machine_conf = os.path.join(layer_path, 'conf', 'machine')
+    distro_conf = os.path.join(layer_path, 'conf', 'distro')
+
+    is_bsp = False
+    is_distro = False
+
+    if os.path.isdir(machine_conf):
+        machines = _get_configurations(machine_conf)
+        if machines:
+            is_bsp = True
+
+    if os.path.isdir(distro_conf):
+        distros = _get_configurations(distro_conf)
+        if distros:
+            is_distro = True
+
+    if is_bsp and is_distro:
+        layer['type'] = LayerType.ERROR_BSP_DISTRO
+    elif is_bsp:
+        layer['type'] = LayerType.BSP
+        layer['conf']['machines'] = machines
+    elif is_distro:
+        layer['type'] = LayerType.DISTRO
+        layer['conf']['distros'] = distros
+    else:
+        layer['type'] = LayerType.SOFTWARE
+
+    layer['collections'] = _get_layer_collections(layer['path'])
+
+    return layer
+
+def detect_layers(layer_directories, no_auto):
+    layers = []
+
+    for directory in layer_directories:
+        directory = os.path.realpath(directory)
+        if directory[-1] == '/':
+            directory = directory[0:-1]
+
+        if no_auto:
+            conf_dir = os.path.join(directory, 'conf')
+            if os.path.isdir(conf_dir):
+                layer = _detect_layer(directory)
+                if layer:
+                    layers.append(layer)
+        else:
+            for root, dirs, files in os.walk(directory):
+                dir_name = os.path.basename(root)
+                conf_dir = os.path.join(root, 'conf')
+                if os.path.isdir(conf_dir):
+                    layer = _detect_layer(root)
+                    if layer:
+                        layers.append(layer)
+
+    return layers
+
+def _find_layer_depends(depend, layers):
+    for layer in layers:
+        for collection in layer['collections']:
+            if depend == collection:
+                return layer
+    return None
+
+def add_layer_dependencies(bblayersconf, layer, layers, logger):
+    def recurse_dependencies(depends, layer, layers, logger, ret = []):
+        logger.debug('Processing dependencies %s for layer %s.' % \
+                    (depends, layer['name']))
+
+        for depend in depends.split():
+            # core (oe-core) is supposed to be provided
+            if depend == 'core':
+                continue
+
+            layer_depend = _find_layer_depends(depend, layers)
+            if not layer_depend:
+                logger.error('Layer %s depends on %s, which was not found.' % \
+                        (layer['name'], depend))
+                ret = None
+                continue
+
+            # We keep processing even if ret is None; this allows us to report
+            # multiple errors at once
+            if ret is not None and layer_depend not in ret:
+                ret.append(layer_depend)
+
+            # Recursively process...
+            if 'collections' not in layer_depend:
+                continue
+
+            for collection in layer_depend['collections']:
+                collect_deps = layer_depend['collections'][collection]['depends']
+                if not collect_deps:
+                    continue
+                ret = recurse_dependencies(collect_deps, layer_depend, layers, logger, ret)
+
+        return ret
+
+    layer_depends = []
+    for collection in layer['collections']:
+        depends = layer['collections'][collection]['depends']
+        if not depends:
+            continue
+
+        layer_depends = recurse_dependencies(depends, layer, layers, logger, layer_depends)
+
+    # Note: [] (empty) is allowed, None is not!
+    if layer_depends is None:
+        return False
+    else:
+        # Don't add a layer that is already present.
+        added = set()
+        output = check_command('Getting existing layers failed.', 'bitbake-layers show-layers').decode('utf-8')
+        for layer, path, pri in re.findall(r'^(\S+) +([^\n]*?) +(\d+)$', output, re.MULTILINE):
+            added.add(path)
+
+        for layer_depend in layer_depends:
+            name = layer_depend['name']
+            path = layer_depend['path']
+            if path in added:
+                continue
+            else:
+                added.add(path)
+            logger.info('Adding layer dependency %s' % name)
+            with open(bblayersconf, 'a+') as f:
+                f.write("\nBBLAYERS += \"%s\"\n" % path)
+    return True
+
+def add_layer(bblayersconf, layer, layers, logger):
+    logger.info('Adding layer %s' % layer['name'])
+    with open(bblayersconf, 'a+') as f:
+        f.write("\nBBLAYERS += \"%s\"\n" % layer['path'])
+
+    return True
+
+def check_command(error_msg, cmd):
+    '''
+    Run a command under a shell, capture stdout and stderr in a single stream,
+    throw an error when command returns non-zero exit code. Returns the output.
+    '''
+
+    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+    output, _ = p.communicate()
+    if p.returncode:
+        msg = "%s\nCommand: %s\nOutput:\n%s" % (error_msg, cmd, output.decode('utf-8'))
+        raise RuntimeError(msg)
+    return output
+
+def get_signatures(builddir, failsafe=False, machine=None):
+    import re
+
+    # Some recipes, like meta-world-pkgdata, need to be excluded
+    # because a layer can add recipes to a world build and the
+    # signatures would then change
+    exclude_recipes = ('meta-world-pkgdata',)
+
+    sigs = {}
+    tune2tasks = {}
+
+    cmd = ''
+    if machine:
+        cmd += 'MACHINE=%s ' % machine
+    cmd += 'bitbake '
+    if failsafe:
+        cmd += '-k '
+    cmd += '-S none world'
+    sigs_file = os.path.join(builddir, 'locked-sigs.inc')
+    if os.path.exists(sigs_file):
+        os.unlink(sigs_file)
+    try:
+        check_command('Generating signatures failed. This might be due to some parse error and/or general layer incompatibilities.',
+                      cmd)
+    except RuntimeError as ex:
+        if failsafe and os.path.exists(sigs_file):
+            # Ignore the error here. Most likely some recipes active
+            # in a world build lack some dependencies. There is a
+            # separate test_machine_world_build which exposes the
+            # failure.
+            pass
+        else:
+            raise
+
+    sig_regex = re.compile("^(?P<task>.*:.*):(?P<hash>.*) .$")
+    tune_regex = re.compile(r"(^|\s)SIGGEN_LOCKEDSIGS_t-(?P<tune>\S*)\s*=\s*")
+    current_tune = None
+    with open(sigs_file, 'r') as f:
+        for line in f.readlines():
+            line = line.strip()
+            t = tune_regex.search(line)
+            if t:
+                current_tune = t.group('tune')
+            s = sig_regex.match(line)
+            if s:
+                exclude = False
+                for er in exclude_recipes:
+                    (recipe, task) = s.group('task').split(':')
+                    if er == recipe:
+                        exclude = True
+                        break
+                if exclude:
+                    continue
+
+                sigs[s.group('task')] = s.group('hash')
+                tune2tasks.setdefault(current_tune, []).append(s.group('task'))
+
+    if not sigs:
+        raise RuntimeError('Can\'t load signatures from %s' % sigs_file)
+
+    return (sigs, tune2tasks)
+
+def get_depgraph(targets=['world'], failsafe=False):
+    '''
+    Returns the dependency graph for the given target(s).
+    The dependency graph is taken directly from DepTreeEvent.
+    '''
+    depgraph = None
+    with bb.tinfoil.Tinfoil() as tinfoil:
+        tinfoil.prepare(config_only=False)
+        tinfoil.set_event_mask(['bb.event.NoProvider', 'bb.event.DepTreeGenerated', 'bb.command.CommandCompleted'])
+        if not tinfoil.run_command('generateDepTreeEvent', targets, 'do_build'):
+            raise RuntimeError('starting generateDepTreeEvent failed')
+        while True:
+            event = tinfoil.wait_event(timeout=1000)
+            if event:
+                if isinstance(event, bb.command.CommandFailed):
+                    raise RuntimeError('Generating dependency information failed: %s' % event.error)
+                elif isinstance(event, bb.command.CommandCompleted):
+                    break
+                elif isinstance(event, bb.event.NoProvider):
+                    if failsafe:
+                        # The event is informational, we will get information about the
+                        # remaining dependencies eventually and thus can ignore this
+                        # here like we do in get_signatures(), if desired.
+                        continue
+                    if event._reasons:
+                        raise RuntimeError('Nothing provides %s: %s' % (event._item, event._reasons))
+                    else:
+                        raise RuntimeError('Nothing provides %s.' % (event._item))
+                elif isinstance(event, bb.event.DepTreeGenerated):
+                    depgraph = event._depgraph
+
+    if depgraph is None:
+        raise RuntimeError('Could not retrieve the depgraph.')
+    return depgraph
+
+def compare_signatures(old_sigs, curr_sigs):
+    '''
+    Compares the result of two get_signatures() calls. Returns None if no
+    problems found, otherwise a string that can be used as additional
+    explanation in self.fail().
+    '''
+    # task -> (old signature, new signature)
+    sig_diff = {}
+    for task in old_sigs:
+        if task in curr_sigs and \
+           old_sigs[task] != curr_sigs[task]:
+            sig_diff[task] = (old_sigs[task], curr_sigs[task])
+
+    if not sig_diff:
+        return None
+
+    # Beware, depgraph uses task=<pn>.<taskname> whereas get_signatures()
+    # uses <pn>:<taskname>. Need to convert sometimes. The output follows
+    # the convention from get_signatures() because that seems closer to
+    # normal bitbake output.
+    def sig2graph(task):
+        pn, taskname = task.rsplit(':', 1)
+        return pn + '.' + taskname
+    def graph2sig(task):
+        pn, taskname = task.rsplit('.', 1)
+        return pn + ':' + taskname
+    depgraph = get_depgraph(failsafe=True)
+    depends = depgraph['tdepends']
+
+    # If a task A has a changed signature, but none of its
+    # dependencies, then we need to report it because it is
+    # the one which introduces a change. Any task depending on
+    # A (directly or indirectly) will also have a changed
+    # signature, but we don't need to report it. It might have
+    # its own changes, which will become apparent once the
+    # issues that we do report are fixed and the test gets run
+    # again.
+    sig_diff_filtered = []
+    for task, (old_sig, new_sig) in sig_diff.items():
+        deps_tainted = False
+        for dep in depends.get(sig2graph(task), ()):
+            if graph2sig(dep) in sig_diff:
+                deps_tainted = True
+                break
+        if not deps_tainted:
+            sig_diff_filtered.append((task, old_sig, new_sig))
+
+    msg = []
+    msg.append('%d signatures changed, initial differences (first hash before, second after):' %
+               len(sig_diff))
+    for diff in sorted(sig_diff_filtered):
+        recipe, taskname = diff[0].rsplit(':', 1)
+        cmd = 'bitbake-diffsigs --task %s %s --signature %s %s' % \
+              (recipe, taskname, diff[1], diff[2])
+        msg.append('   %s: %s -> %s' % diff)
+        msg.append('      %s' % cmd)
+        try:
+            output = check_command('Determining signature difference failed.',
+                                   cmd).decode('utf-8')
+        except RuntimeError as error:
+            output = str(error)
+        if output:
+            msg.extend(['      ' + line for line in output.splitlines()])
+            msg.append('')
+    return '\n'.join(msg)
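
A rough sketch (illustrative only) of how the helpers above fit together outside the test cases; it assumes the code runs from an initialized build directory with both bitbake/lib and scripts/lib importable, and the layer path and build directory name are placeholders:

    import sys
    sys.path.insert(0, 'import-layers/yocto-poky/scripts/lib')  # assumed layout;
                                                                # bitbake/lib must also be importable
    from checklayer import detect_layers, get_signatures, compare_signatures

    # Detect the layer type(s) under a directory (no_auto=True checks only that path).
    for layer in detect_layers(['/path/to/meta-example'], no_auto=True):  # placeholder path
        print(layer['name'], layer['type'])

    # Baseline signatures, then a second run after the layer under test has been
    # added to bblayers.conf, and a comparison of the two.
    base_sigs, _ = get_signatures('build', failsafe=True)   # 'build' dir is assumed
    curr_sigs, _ = get_signatures('build', failsafe=True)
    msg = compare_signatures(base_sigs, curr_sigs)
    print(msg or 'No signature changes detected.')
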
diff --git a/import-layers/yocto-poky/scripts/lib/checklayer/case.py b/import-layers/yocto-poky/scripts/lib/checklayer/case.py
new file mode 100644
index 0000000..9dd0041
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/checklayer/case.py
@@ -0,0 +1,7 @@
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+from oeqa.core.case import OETestCase
+
+class OECheckLayerTestCase(OETestCase):
+    pass
diff --git a/import-layers/yocto-poky/scripts/lib/compatlayer/cases/__init__.py b/import-layers/yocto-poky/scripts/lib/checklayer/cases/__init__.py
similarity index 100%
rename from import-layers/yocto-poky/scripts/lib/compatlayer/cases/__init__.py
rename to import-layers/yocto-poky/scripts/lib/checklayer/cases/__init__.py
diff --git a/import-layers/yocto-poky/scripts/lib/checklayer/cases/bsp.py b/import-layers/yocto-poky/scripts/lib/checklayer/cases/bsp.py
new file mode 100644
index 0000000..b6b611b
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/checklayer/cases/bsp.py
@@ -0,0 +1,204 @@
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+import unittest
+
+from checklayer import LayerType, get_signatures, check_command, get_depgraph
+from checklayer.case import OECheckLayerTestCase
+
+class BSPCheckLayer(OECheckLayerTestCase):
+    @classmethod
+    def setUpClass(self):
+        if self.tc.layer['type'] != LayerType.BSP:
+            raise unittest.SkipTest("BSPCheckLayer: Layer %s isn't BSP one." %\
+                self.tc.layer['name'])
+
+    def test_bsp_defines_machines(self):
+        self.assertTrue(self.tc.layer['conf']['machines'],
+                "Layer is BSP but doesn't define machines.")
+
+    def test_bsp_no_set_machine(self):
+        from oeqa.utils.commands import get_bb_var
+
+        machine = get_bb_var('MACHINE')
+        self.assertEqual(self.td['bbvars']['MACHINE'], machine,
+                msg="Layer %s modified machine %s -> %s" % \
+                    (self.tc.layer['name'], self.td['bbvars']['MACHINE'], machine))
+
+
+    def test_machine_world(self):
+        '''
+        "bitbake world" is expected to work regardless which machine is selected.
+        BSP layers sometimes break that by enabling a recipe for a certain machine
+        without checking whether that recipe actually can be built in the current
+        distro configuration (for example, OpenGL might not be enabled).
+
+        This test iterates over all machines. It would be nicer to instantiate
+        it once per machine. It merely checks for errors during parse
+        time. It does not actually attempt to build anything.
+        '''
+
+        if not self.td['machines']:
+            self.skipTest('No machines set with --machines.')
+        msg = []
+        for machine in self.td['machines']:
+            # In contrast to test_machine_signatures() below, errors are fatal here.
+            try:
+                get_signatures(self.td['builddir'], failsafe=False, machine=machine)
+            except RuntimeError as ex:
+                msg.append(str(ex))
+        if msg:
+            msg.insert(0, 'The following machines broke a world build:')
+            self.fail('\n'.join(msg))
+
+    def test_machine_signatures(self):
+        '''
+        Selecting a machine may only affect the signature of tasks that are specific
+        to that machine. In other words, when MACHINE=A and MACHINE=B share a recipe
+        foo and the output of foo, then both machine configurations must build foo
+        in exactly the same way. Otherwise it is not possible to use both machines
+        in the same distribution.
+
+        This criterion can only be tested by testing different machines in combination,
+        i.e. one main layer, potentially several additional BSP layers and an explicit
+        choice of machines:
+        yocto-check-layer --additional-layers .../meta-intel --machines intel-corei7-64 imx6slevk -- .../meta-freescale
+        '''
+
+        if not self.td['machines']:
+            self.skipTest('No machines set with --machines.')
+
+        # Collect signatures for all machines that we are testing
+        # and merge that into a hash:
+        # tune -> task -> signature -> list of machines with that combination
+        #
+        # It is an error if any tune/task pair has more than one signature,
+        # because that implies that the machines that caused those different
+        # signatures do not agree on how to execute the task.
+        tunes = {}
+        # Preserve ordering of machines as chosen by the user.
+        for machine in self.td['machines']:
+            curr_sigs, tune2tasks = get_signatures(self.td['builddir'], failsafe=True, machine=machine)
+            # Invert the tune -> [tasks] mapping.
+            tasks2tune = {}
+            for tune, tasks in tune2tasks.items():
+                for task in tasks:
+                    tasks2tune[task] = tune
+            for task, sighash in curr_sigs.items():
+                tunes.setdefault(tasks2tune[task], {}).setdefault(task, {}).setdefault(sighash, []).append(machine)
+
+        msg = []
+        pruned = 0
+        last_line_key = None
+        # do_fetch, do_unpack, ..., do_build
+        taskname_list = []
+        if tunes:
+            # The output below is most useful when we start with tasks that are at
+            # the bottom of the dependency chain, i.e. those that run first. If
+            # those tasks differ, the rest also does.
+            #
+            # To get an ordering of tasks, we do a topological sort of the entire
+            # depgraph for the base configuration, then on-the-fly flatten that list by stripping
+            # out the recipe names and removing duplicates. The base configuration
+            # is not necessarily representative, but should be close enough. Tasks
+            # that were not encountered get a default priority.
+            depgraph = get_depgraph()
+            depends = depgraph['tdepends']
+            WHITE = 1
+            GRAY = 2
+            BLACK = 3
+            color = {}
+            found = set()
+            def visit(task):
+                color[task] = GRAY
+                for dep in depends.get(task, ()):
+                    if color.setdefault(dep, WHITE) == WHITE:
+                        visit(dep)
+                color[task] = BLACK
+                pn, taskname = task.rsplit('.', 1)
+                if taskname not in found:
+                    taskname_list.append(taskname)
+                    found.add(taskname)
+            for task in depends.keys():
+                if color.setdefault(task, WHITE) == WHITE:
+                    visit(task)
+
+        taskname_order = dict([(task, index) for index, task in enumerate(taskname_list) ])
+        def task_key(task):
+            pn, taskname = task.rsplit(':', 1)
+            return (pn, taskname_order.get(taskname, len(taskname_list)), taskname)
+
+        for tune in sorted(tunes.keys()):
+            tasks = tunes[tune]
+            # As for test_signatures it would be nicer to sort tasks
+            # by dependencies here, but that is harder because we have
+            # to report on tasks from different machines, which might
+            # have different dependencies. We resort to pruning the
+            # output by reporting only one task per recipe if the set
+            # of machines matches.
+            #
+            # "bitbake-diffsigs -t -s" is intelligent enough to print
+            # diffs recursively, so often it does not matter that much
+            # if we don't pick the underlying difference
+            # here. However, sometimes recursion fails
+            # (https://bugzilla.yoctoproject.org/show_bug.cgi?id=6428).
+            #
+            # To mitigate that a bit, we use a hard-coded ordering of
+            # tasks that represents how they normally run and prefer
+            # to print the ones that run first.
+            for task in sorted(tasks.keys(), key=task_key):
+                signatures = tasks[task]
+                # do_build can be ignored: it is known to have
+                # different signatures in some cases, for example in
+                # the allarch ca-certificates due to RDEPENDS=openssl.
+                # That particular dependency is whitelisted via
+                # SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS, but still shows up
+                # in the sstate signature hash because filtering it
+                # out would be hard and running do_build multiple
+                # times doesn't really matter.
+                if len(signatures.keys()) > 1 and \
+                   not task.endswith(':do_build'):
+                    # Error!
+                    #
+                    # Sort signatures by machines, because the hex values don't mean anything.
+                    # => all-arch adwaita-icon-theme:do_build: 1234... (beaglebone, qemux86) != abcdf... (qemux86-64)
+                    #
+                    # Skip the line if it is covered already by the predecessor (same pn, same sets of machines).
+                    pn, taskname = task.rsplit(':', 1)
+                    next_line_key = (pn, sorted(signatures.values()))
+                    if next_line_key != last_line_key:
+                        line = '   %s %s: ' % (tune, task)
+                        line += ' != '.join(['%s (%s)' % (signature, ', '.join([m for m in signatures[signature]])) for
+                                             signature in sorted(signatures.keys(), key=lambda s: signatures[s])])
+                        last_line_key = next_line_key
+                        msg.append(line)
+                        # Randomly pick two mismatched signatures and remember how to invoke
+                        # bitbake-diffsigs for them.
+                        iterator = iter(signatures.items())
+                        a = next(iterator)
+                        b = next(iterator)
+                        diffsig_machines = '(%s) != (%s)' % (', '.join(a[1]), ', '.join(b[1]))
+                        diffsig_params = '-t %s %s -s %s %s' % (pn, taskname, a[0], b[0])
+                    else:
+                        pruned += 1
+
+        if msg:
+            msg.insert(0, 'The machines have conflicting signatures for some shared tasks:')
+            if pruned > 0:
+                msg.append('')
+                msg.append('%d tasks were not listed because some other task of the recipe already differed.' % pruned)
+                msg.append('It is likely that differences from different recipes also have the same root cause.')
+            msg.append('')
+            # Explain how to investigate...
+            msg.append('To investigate, run bitbake-diffsigs -t recipename taskname -s fromsig tosig.')
+            cmd = 'bitbake-diffsigs %s' % diffsig_params
+            msg.append('Example: %s in the last line' % diffsig_machines)
+            msg.append('Command: %s' % cmd)
+            # ... and actually do it automatically for that example, but without aborting
+            # when that fails.
+            try:
+                output = check_command('Comparing signatures failed.', cmd).decode('utf-8')
+            except RuntimeError as ex:
+                output = str(ex)
+            msg.extend(['   ' + line for line in output.splitlines()])
+            self.fail('\n'.join(msg))
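
A small standalone sketch (illustrative only) of the tune/task/signature bookkeeping used in test_machine_signatures above; the machines, tunes and hashes are invented:

    # tune -> task -> signature -> list of machines with that combination
    tunes = {}
    fake_runs = [
        ('qemux86',    'core2-32', 'zlib:do_configure',             'aaaa'),
        ('qemux86-64', 'core2-64', 'zlib:do_configure',             'aaaa'),
        ('qemux86',    'allarch',  'adwaita-icon-theme:do_package', 'cccc'),
        ('beaglebone', 'allarch',  'adwaita-icon-theme:do_package', 'dddd'),
    ]
    for machine, tune, task, sighash in fake_runs:
        tunes.setdefault(tune, {}).setdefault(task, {}).setdefault(sighash, []).append(machine)

    for tune, tasks in sorted(tunes.items()):
        for task, signatures in tasks.items():
            # More than one signature for the same tune/task pair means the
            # machines disagree on how to build that task (do_build is tolerated).
            if len(signatures) > 1 and not task.endswith(':do_build'):
                print('conflict:', tune, task, signatures)
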
diff --git a/import-layers/yocto-poky/scripts/lib/checklayer/cases/common.py b/import-layers/yocto-poky/scripts/lib/checklayer/cases/common.py
new file mode 100644
index 0000000..a13c108
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/checklayer/cases/common.py
@@ -0,0 +1,53 @@
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+import glob
+import os
+import unittest
+from checklayer import get_signatures, LayerType, check_command, get_depgraph, compare_signatures
+from checklayer.case import OECheckLayerTestCase
+
+class CommonCheckLayer(OECheckLayerTestCase):
+    def test_readme(self):
+        # The top-level README file may have a suffix (like README.rst or README.txt).
+        readme_files = glob.glob(os.path.join(self.tc.layer['path'], 'README*'))
+        self.assertTrue(len(readme_files) > 0,
+                        msg="Layer doesn't contains README file.")
+
+        # There might be more than one file matching the file pattern above
+        # (for example, README.rst and README-COPYING.rst). The first one in
+        # sort order is treated as the "main" one.
+        readme_file = sorted(readme_files)[0]
+        data = ''
+        with open(readme_file, 'r') as f:
+            data = f.read()
+        self.assertTrue(data,
+                msg="Layer contains a README file but it is empty.")
+
+    def test_parse(self):
+        check_command('Layer %s failed to parse.' % self.tc.layer['name'],
+                      'bitbake -p')
+
+    def test_show_environment(self):
+        check_command('Layer %s failed to show environment.' % self.tc.layer['name'],
+                      'bitbake -e')
+
+    def test_world(self):
+        '''
+        "bitbake world" is expected to work. test_signatures does not cover that
+        because it is more lenient and ignores recipes in a world build that
+        are not actually buildable, so here we fail when "bitbake -S none world"
+        fails.
+        '''
+        get_signatures(self.td['builddir'], failsafe=False)
+
+    def test_signatures(self):
+        if self.tc.layer['type'] == LayerType.SOFTWARE and \
+           not self.tc.test_software_layer_signatures:
+            raise unittest.SkipTest("Not testing for signature changes in a software layer %s." \
+                     % self.tc.layer['name'])
+
+        curr_sigs, _ = get_signatures(self.td['builddir'], failsafe=True)
+        msg = compare_signatures(self.td['sigs'], curr_sigs)
+        if msg is not None:
+            self.fail('Adding layer %s changed signatures.\n%s' % (self.tc.layer['name'], msg))
diff --git a/import-layers/yocto-poky/scripts/lib/checklayer/cases/distro.py b/import-layers/yocto-poky/scripts/lib/checklayer/cases/distro.py
new file mode 100644
index 0000000..df1b303
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/checklayer/cases/distro.py
@@ -0,0 +1,26 @@
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+import unittest
+
+from checklayer import LayerType
+from checklayer.case import OECheckLayerTestCase
+
+class DistroCheckLayer(OECheckLayerTestCase):
+    @classmethod
+    def setUpClass(self):
+        if self.tc.layer['type'] != LayerType.DISTRO:
+            raise unittest.SkipTest("DistroCheckLayer: Layer %s isn't Distro one." %\
+                self.tc.layer['name'])
+
+    def test_distro_defines_distros(self):
+        self.assertTrue(self.tc.layer['conf']['distros'],
+                "Layer is Distro but doesn't define distros.")
+
+    def test_distro_no_set_distros(self):
+        from oeqa.utils.commands import get_bb_var
+
+        distro = get_bb_var('DISTRO')
+        self.assertEqual(self.td['bbvars']['DISTRO'], distro,
+                msg="Layer %s modified distro %s -> %s" % \
+                    (self.tc.layer['name'], self.td['bbvars']['DISTRO'], distro))
diff --git a/import-layers/yocto-poky/scripts/lib/checklayer/context.py b/import-layers/yocto-poky/scripts/lib/checklayer/context.py
new file mode 100644
index 0000000..1bec2c4
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/checklayer/context.py
@@ -0,0 +1,15 @@
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+import os
+import sys
+import glob
+import re
+
+from oeqa.core.context import OETestContext
+
+class CheckLayerTestContext(OETestContext):
+    def __init__(self, td=None, logger=None, layer=None, test_software_layer_signatures=True):
+        super(CheckLayerTestContext, self).__init__(td, logger)
+        self.layer = layer
+        self.test_software_layer_signatures = test_software_layer_signatures
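
A minimal sketch (illustrative only) of constructing the context above by hand; it assumes oeqa and scripts/lib are importable, and the layer dictionary mirrors what _detect_layer() returns with placeholder values:

    import logging
    from checklayer.context import CheckLayerTestContext

    layer = {'name': 'meta-example', 'path': '/path/to/meta-example',   # placeholders
             'type': None, 'conf': {}, 'collections': {}}
    tc = CheckLayerTestContext(td={'builddir': 'build'},   # build dir name is assumed
                               logger=logging.getLogger(),
                               layer=layer,
                               test_software_layer_signatures=False)
    print(tc.layer['name'], tc.test_software_layer_signatures)
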
diff --git a/import-layers/yocto-poky/scripts/lib/compatlayer/__init__.py b/import-layers/yocto-poky/scripts/lib/compatlayer/__init__.py
deleted file mode 100644
index 7197e85..0000000
--- a/import-layers/yocto-poky/scripts/lib/compatlayer/__init__.py
+++ /dev/null
@@ -1,392 +0,0 @@
-# Yocto Project compatibility layer tool
-#
-# Copyright (C) 2017 Intel Corporation
-# Released under the MIT license (see COPYING.MIT)
-
-import os
-import re
-import subprocess
-from enum import Enum
-
-import bb.tinfoil
-
-class LayerType(Enum):
-    BSP = 0
-    DISTRO = 1
-    SOFTWARE = 2
-    ERROR_NO_LAYER_CONF = 98
-    ERROR_BSP_DISTRO = 99
-
-def _get_configurations(path):
-    configs = []
-
-    for f in os.listdir(path):
-        file_path = os.path.join(path, f)
-        if os.path.isfile(file_path) and f.endswith('.conf'):
-            configs.append(f[:-5]) # strip .conf
-    return configs
-
-def _get_layer_collections(layer_path, lconf=None, data=None):
-    import bb.parse
-    import bb.data
-
-    if lconf is None:
-        lconf = os.path.join(layer_path, 'conf', 'layer.conf')
-
-    if data is None:
-        ldata = bb.data.init()
-        bb.parse.init_parser(ldata)
-    else:
-        ldata = data.createCopy()
-
-    ldata.setVar('LAYERDIR', layer_path)
-    try:
-        ldata = bb.parse.handle(lconf, ldata, include=True)
-    except BaseException as exc:
-        raise LayerError(exc)
-    ldata.expandVarref('LAYERDIR')
-
-    collections = (ldata.getVar('BBFILE_COLLECTIONS', True) or '').split()
-    if not collections:
-        name = os.path.basename(layer_path)
-        collections = [name]
-
-    collections = {c: {} for c in collections}
-    for name in collections:
-        priority = ldata.getVar('BBFILE_PRIORITY_%s' % name, True)
-        pattern = ldata.getVar('BBFILE_PATTERN_%s' % name, True)
-        depends = ldata.getVar('LAYERDEPENDS_%s' % name, True)
-        collections[name]['priority'] = priority
-        collections[name]['pattern'] = pattern
-        collections[name]['depends'] = depends
-
-    return collections
-
-def _detect_layer(layer_path):
-    """
-        Scans layer directory to detect what type of layer
-        is BSP, Distro or Software.
-
-        Returns a dictionary with layer name, type and path.
-    """
-
-    layer = {}
-    layer_name = os.path.basename(layer_path)
-
-    layer['name'] = layer_name
-    layer['path'] = layer_path
-    layer['conf'] = {}
-
-    if not os.path.isfile(os.path.join(layer_path, 'conf', 'layer.conf')):
-        layer['type'] = LayerType.ERROR_NO_LAYER_CONF
-        return layer
-
-    machine_conf = os.path.join(layer_path, 'conf', 'machine')
-    distro_conf = os.path.join(layer_path, 'conf', 'distro')
-
-    is_bsp = False
-    is_distro = False
-
-    if os.path.isdir(machine_conf):
-        machines = _get_configurations(machine_conf)
-        if machines:
-            is_bsp = True
-
-    if os.path.isdir(distro_conf):
-        distros = _get_configurations(distro_conf)
-        if distros:
-            is_distro = True
-
-    if is_bsp and is_distro:
-        layer['type'] = LayerType.ERROR_BSP_DISTRO
-    elif is_bsp:
-        layer['type'] = LayerType.BSP
-        layer['conf']['machines'] = machines
-    elif is_distro:
-        layer['type'] = LayerType.DISTRO
-        layer['conf']['distros'] = distros
-    else:
-        layer['type'] = LayerType.SOFTWARE
-
-    layer['collections'] = _get_layer_collections(layer['path'])
-
-    return layer
-
-def detect_layers(layer_directories, no_auto):
-    layers = []
-
-    for directory in layer_directories:
-        directory = os.path.realpath(directory)
-        if directory[-1] == '/':
-            directory = directory[0:-1]
-
-        if no_auto:
-            conf_dir = os.path.join(directory, 'conf')
-            if os.path.isdir(conf_dir):
-                layer = _detect_layer(directory)
-                if layer:
-                    layers.append(layer)
-        else:
-            for root, dirs, files in os.walk(directory):
-                dir_name = os.path.basename(root)
-                conf_dir = os.path.join(root, 'conf')
-                if os.path.isdir(conf_dir):
-                    layer = _detect_layer(root)
-                    if layer:
-                        layers.append(layer)
-
-    return layers
-
-def _find_layer_depends(depend, layers):
-    for layer in layers:
-        for collection in layer['collections']:
-            if depend == collection:
-                return layer
-    return None
-
-def add_layer_dependencies(bblayersconf, layer, layers, logger):
-    def recurse_dependencies(depends, layer, layers, logger, ret = []):
-        logger.debug('Processing dependencies %s for layer %s.' % \
-                    (depends, layer['name']))
-
-        for depend in depends.split():
-            # core (oe-core) is suppose to be provided
-            if depend == 'core':
-                continue
-
-            layer_depend = _find_layer_depends(depend, layers)
-            if not layer_depend:
-                logger.error('Layer %s depends on %s and isn\'t found.' % \
-                        (layer['name'], depend))
-                ret = None
-                continue
-
-            # We keep processing, even if ret is None, this allows us to report
-            # multiple errors at once
-            if ret is not None and layer_depend not in ret:
-                ret.append(layer_depend)
-
-            # Recursively process...
-            if 'collections' not in layer_depend:
-                continue
-
-            for collection in layer_depend['collections']:
-                collect_deps = layer_depend['collections'][collection]['depends']
-                if not collect_deps:
-                    continue
-                ret = recurse_dependencies(collect_deps, layer_depend, layers, logger, ret)
-
-        return ret
-
-    layer_depends = []
-    for collection in layer['collections']:
-        depends = layer['collections'][collection]['depends']
-        if not depends:
-            continue
-
-        layer_depends = recurse_dependencies(depends, layer, layers, logger, layer_depends)
-
-    # Note: [] (empty) is allowed, None is not!
-    if layer_depends is None:
-        return False
-    else:
-        # Don't add a layer that is already present.
-        added = set()
-        output = check_command('Getting existing layers failed.', 'bitbake-layers show-layers').decode('utf-8')
-        for layer, path, pri in re.findall(r'^(\S+) +([^\n]*?) +(\d+)$', output, re.MULTILINE):
-            added.add(path)
-
-        for layer_depend in layer_depends:
-            name = layer_depend['name']
-            path = layer_depend['path']
-            if path in added:
-                continue
-            else:
-                added.add(path)
-            logger.info('Adding layer dependency %s' % name)
-            with open(bblayersconf, 'a+') as f:
-                f.write("\nBBLAYERS += \"%s\"\n" % path)
-    return True
-
-def add_layer(bblayersconf, layer, layers, logger):
-    logger.info('Adding layer %s' % layer['name'])
-    with open(bblayersconf, 'a+') as f:
-        f.write("\nBBLAYERS += \"%s\"\n" % layer['path'])
-
-    return True
-
-def check_command(error_msg, cmd):
-    '''
-    Run a command under a shell, capture stdout and stderr in a single stream,
-    throw an error when command returns non-zero exit code. Returns the output.
-    '''
-
-    p = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-    output, _ = p.communicate()
-    if p.returncode:
-        msg = "%s\nCommand: %s\nOutput:\n%s" % (error_msg, cmd, output.decode('utf-8'))
-        raise RuntimeError(msg)
-    return output
-
-def get_signatures(builddir, failsafe=False, machine=None):
-    import re
-
-    # Some recipes, like meta-world-pkgdata, need to be excluded because a
-    # layer can add recipes to a world build, which changes their signatures.
-    exclude_recipes = ('meta-world-pkgdata',)
-
-    sigs = {}
-    tune2tasks = {}
-
-    cmd = ''
-    if machine:
-        cmd += 'MACHINE=%s ' % machine
-    cmd += 'bitbake '
-    if failsafe:
-        cmd += '-k '
-    cmd += '-S none world'
-    sigs_file = os.path.join(builddir, 'locked-sigs.inc')
-    if os.path.exists(sigs_file):
-        os.unlink(sigs_file)
-    try:
-        check_command('Generating signatures failed. This might be due to some parse error and/or general layer incompatibilities.',
-                      cmd)
-    except RuntimeError as ex:
-        if failsafe and os.path.exists(sigs_file):
-            # Ignore the error here. Most likely some recipes active
-            # in a world build lack some dependencies. There is a
-            # separate test_machine_world_build which exposes the
-            # failure.
-            pass
-        else:
-            raise
-
-    sig_regex = re.compile("^(?P<task>.*:.*):(?P<hash>.*) .$")
-    tune_regex = re.compile("(^|\s)SIGGEN_LOCKEDSIGS_t-(?P<tune>\S*)\s*=\s*")
-    current_tune = None
-    with open(sigs_file, 'r') as f:
-        for line in f.readlines():
-            line = line.strip()
-            t = tune_regex.search(line)
-            if t:
-                current_tune = t.group('tune')
-            s = sig_regex.match(line)
-            if s:
-                exclude = False
-                for er in exclude_recipes:
-                    (recipe, task) = s.group('task').split(':')
-                    if er == recipe:
-                        exclude = True
-                        break
-                if exclude:
-                    continue
-
-                sigs[s.group('task')] = s.group('hash')
-                tune2tasks.setdefault(current_tune, []).append(s.group('task'))
-
-    if not sigs:
-        raise RuntimeError('Can\'t load signatures from %s' % sigs_file)
-
-    return (sigs, tune2tasks)
-
-def get_depgraph(targets=['world'], failsafe=False):
-    '''
-    Returns the dependency graph for the given target(s).
-    The dependency graph is taken directly from DepTreeEvent.
-    '''
-    depgraph = None
-    with bb.tinfoil.Tinfoil() as tinfoil:
-        tinfoil.prepare(config_only=False)
-        tinfoil.set_event_mask(['bb.event.NoProvider', 'bb.event.DepTreeGenerated', 'bb.command.CommandCompleted'])
-        if not tinfoil.run_command('generateDepTreeEvent', targets, 'do_build'):
-            raise RuntimeError('starting generateDepTreeEvent failed')
-        while True:
-            event = tinfoil.wait_event(timeout=1000)
-            if event:
-                if isinstance(event, bb.command.CommandFailed):
-                    raise RuntimeError('Generating dependency information failed: %s' % event.error)
-                elif isinstance(event, bb.command.CommandCompleted):
-                    break
-                elif isinstance(event, bb.event.NoProvider):
-                    if failsafe:
-                        # The event is informational, we will get information about the
-                        # remaining dependencies eventually and thus can ignore this
-                        # here like we do in get_signatures(), if desired.
-                        continue
-                    if event._reasons:
-                        raise RuntimeError('Nothing provides %s: %s' % (event._item, event._reasons))
-                    else:
-                        raise RuntimeError('Nothing provides %s.' % (event._item))
-                elif isinstance(event, bb.event.DepTreeGenerated):
-                    depgraph = event._depgraph
-
-    if depgraph is None:
-        raise RuntimeError('Could not retrieve the depgraph.')
-    return depgraph
-
-def compare_signatures(old_sigs, curr_sigs):
-    '''
-    Compares the result of two get_signatures() calls. Returns None if no
-    problems found, otherwise a string that can be used as additional
-    explanation in self.fail().
-    '''
-    # task -> (old signature, new signature)
-    sig_diff = {}
-    for task in old_sigs:
-        if task in curr_sigs and \
-           old_sigs[task] != curr_sigs[task]:
-            sig_diff[task] = (old_sigs[task], curr_sigs[task])
-
-    if not sig_diff:
-        return None
-
-    # Beware, depgraph uses task=<pn>.<taskname> whereas get_signatures()
-    # uses <pn>:<taskname>. Need to convert sometimes. The output follows
-    # the convention from get_signatures() because that seems closer to
-    # normal bitbake output.
-    def sig2graph(task):
-        pn, taskname = task.rsplit(':', 1)
-        return pn + '.' + taskname
-    def graph2sig(task):
-        pn, taskname = task.rsplit('.', 1)
-        return pn + ':' + taskname
-    depgraph = get_depgraph(failsafe=True)
-    depends = depgraph['tdepends']
-
-    # If a task A has a changed signature, but none of its
-    # dependencies do, then we need to report it because it is
-    # the one which introduces the change. Any task depending on
-    # A (directly or indirectly) will also have a changed
-    # signature, but we don't need to report it. It might have
-    # its own changes, which will become apparent once the
-    # issues that we do report are fixed and the test gets run
-    # again.
-    sig_diff_filtered = []
-    for task, (old_sig, new_sig) in sig_diff.items():
-        deps_tainted = False
-        for dep in depends.get(sig2graph(task), ()):
-            if graph2sig(dep) in sig_diff:
-                deps_tainted = True
-                break
-        if not deps_tainted:
-            sig_diff_filtered.append((task, old_sig, new_sig))
-
-    msg = []
-    msg.append('%d signatures changed, initial differences (first hash before, second after):' %
-               len(sig_diff))
-    for diff in sorted(sig_diff_filtered):
-        recipe, taskname = diff[0].rsplit(':', 1)
-        cmd = 'bitbake-diffsigs --task %s %s --signature %s %s' % \
-              (recipe, taskname, diff[1], diff[2])
-        msg.append('   %s: %s -> %s' % diff)
-        msg.append('      %s' % cmd)
-        try:
-            output = check_command('Determining signature difference failed.',
-                                   cmd).decode('utf-8')
-        except RuntimeError as error:
-            output = str(error)
-        if output:
-            msg.extend(['      ' + line for line in output.splitlines()])
-            msg.append('')
-    return '\n'.join(msg)
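
Editor's note: for orientation, the module removed above provided the plumbing behind the yocto-compat-layer script: detect_layers() classifies candidate layers, add_layer()/add_layer_dependencies() append them to bblayers.conf, and get_signatures()/compare_signatures() check that adding a layer does not change existing task signatures. The following is a minimal sketch of how those pieces fit together; the layer path, build directory and logger name are assumptions (not values from this change), and cleanup of bblayers.conf between iterations is left out.

    import logging
    from compatlayer import (detect_layers, add_layer, add_layer_dependencies,
                             get_signatures, compare_signatures)

    logger = logging.getLogger('compat-check')            # assumed logger name
    builddir = '/path/to/build'                           # assumed build directory
    bblayersconf = builddir + '/conf/bblayers.conf'

    layers = detect_layers(['/path/to/meta-example'], no_auto=False)   # assumed layer path
    base_sigs, _ = get_signatures(builddir)               # signatures before adding a layer
    for layer in layers:
        if not add_layer_dependencies(bblayersconf, layer, layers, logger):
            continue                                       # missing dependency already logged
        add_layer(bblayersconf, layer, layers, logger)
        curr_sigs, _ = get_signatures(builddir, failsafe=True)
        problem = compare_signatures(base_sigs, curr_sigs)
        if problem:
            logger.error('Layer %s changed signatures:\n%s', layer['name'], problem)
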
diff --git a/import-layers/yocto-poky/scripts/lib/compatlayer/case.py b/import-layers/yocto-poky/scripts/lib/compatlayer/case.py
deleted file mode 100644
index 54ce78a..0000000
--- a/import-layers/yocto-poky/scripts/lib/compatlayer/case.py
+++ /dev/null
@@ -1,7 +0,0 @@
-# Copyright (C) 2017 Intel Corporation
-# Released under the MIT license (see COPYING.MIT)
-
-from oeqa.core.case import OETestCase
-
-class OECompatLayerTestCase(OETestCase):
-    pass
diff --git a/import-layers/yocto-poky/scripts/lib/compatlayer/cases/bsp.py b/import-layers/yocto-poky/scripts/lib/compatlayer/cases/bsp.py
deleted file mode 100644
index 43efae4..0000000
--- a/import-layers/yocto-poky/scripts/lib/compatlayer/cases/bsp.py
+++ /dev/null
@@ -1,204 +0,0 @@
-# Copyright (C) 2017 Intel Corporation
-# Released under the MIT license (see COPYING.MIT)
-
-import unittest
-
-from compatlayer import LayerType, get_signatures, check_command, get_depgraph
-from compatlayer.case import OECompatLayerTestCase
-
-class BSPCompatLayer(OECompatLayerTestCase):
-    @classmethod
-    def setUpClass(self):
-        if self.tc.layer['type'] != LayerType.BSP:
-            raise unittest.SkipTest("BSPCompatLayer: Layer %s isn't BSP one." %\
-                self.tc.layer['name'])
-
-    def test_bsp_defines_machines(self):
-        self.assertTrue(self.tc.layer['conf']['machines'],
-                "Layer is a BSP layer but doesn't define any machines.")
-
-    def test_bsp_no_set_machine(self):
-        from oeqa.utils.commands import get_bb_var
-
-        machine = get_bb_var('MACHINE')
-        self.assertEqual(self.td['bbvars']['MACHINE'], machine,
-                msg="Layer %s modified machine %s -> %s" % \
-                    (self.tc.layer['name'], self.td['bbvars']['MACHINE'], machine))
-
-
-    def test_machine_world(self):
-        '''
-        "bitbake world" is expected to work regardless which machine is selected.
-        BSP layers sometimes break that by enabling a recipe for a certain machine
-        without checking whether that recipe actually can be built in the current
-        distro configuration (for example, OpenGL might not enabled).
-
-        This test iterates over all machines. It would be nicer to instantiate
-        it once per machine. It merely checks for errors during parse
-        time. It does not actually attempt to build anything.
-        '''
-
-        if not self.td['machines']:
-            self.skipTest('No machines set with --machines.')
-        msg = []
-        for machine in self.td['machines']:
-            # In contrast to test_machine_signatures() below, errors are fatal here.
-            try:
-                get_signatures(self.td['builddir'], failsafe=False, machine=machine)
-            except RuntimeError as ex:
-                msg.append(str(ex))
-        if msg:
-            msg.insert(0, 'The following machines broke a world build:')
-            self.fail('\n'.join(msg))
-
-    def test_machine_signatures(self):
-        '''
-        Selecting a machine may only affect the signature of tasks that are specific
-        to that machine. In other words, when MACHINE=A and MACHINE=B share a recipe
-        foo and the output of foo, then both machine configurations must build foo
-        in exactly the same way. Otherwise it is not possible to use both machines
-        in the same distribution.
-
-        This criterion can only be tested by testing different machines in combination,
-        i.e. one main layer, potentially several additional BSP layers and an explicit
-        choice of machines:
-        yocto-compat-layer --additional-layers .../meta-intel --machines intel-corei7-64 imx6slevk -- .../meta-freescale
-        '''
-
-        if not self.td['machines']:
-            self.skipTest('No machines set with --machines.')
-
-        # Collect signatures for all machines that we are testing
-        # and merge that into a hash:
-        # tune -> task -> signature -> list of machines with that combination
-        #
-        # It is an error if any tune/task pair has more than one signature,
-        # because that implies that the machines that caused those different
-        # signatures do not agree on how to execute the task.
-        tunes = {}
-        # Preserve ordering of machines as chosen by the user.
-        for machine in self.td['machines']:
-            curr_sigs, tune2tasks = get_signatures(self.td['builddir'], failsafe=True, machine=machine)
-            # Invert the tune -> [tasks] mapping.
-            tasks2tune = {}
-            for tune, tasks in tune2tasks.items():
-                for task in tasks:
-                    tasks2tune[task] = tune
-            for task, sighash in curr_sigs.items():
-                tunes.setdefault(tasks2tune[task], {}).setdefault(task, {}).setdefault(sighash, []).append(machine)
-
-        msg = []
-        pruned = 0
-        last_line_key = None
-        # do_fetch, do_unpack, ..., do_build
-        taskname_list = []
-        if tunes:
-            # The output below is most useful when we start with tasks that are at
-            # the bottom of the dependency chain, i.e. those that run first. If
-            # those tasks differ, the rest also does.
-            #
-            # To get an ordering of tasks, we do a topological sort of the entire
-            # depgraph for the base configuration, then flatten that list on the fly by
-            # stripping out the recipe names and removing duplicates. The base configuration
-            # is not necessarily representative, but should be close enough. Tasks
-            # that were not encountered get a default priority.
-            depgraph = get_depgraph()
-            depends = depgraph['tdepends']
-            WHITE = 1
-            GRAY = 2
-            BLACK = 3
-            color = {}
-            found = set()
-            def visit(task):
-                color[task] = GRAY
-                for dep in depends.get(task, ()):
-                    if color.setdefault(dep, WHITE) == WHITE:
-                        visit(dep)
-                color[task] = BLACK
-                pn, taskname = task.rsplit('.', 1)
-                if taskname not in found:
-                    taskname_list.append(taskname)
-                    found.add(taskname)
-            for task in depends.keys():
-                if color.setdefault(task, WHITE) == WHITE:
-                    visit(task)
-
-        taskname_order = dict([(task, index) for index, task in enumerate(taskname_list) ])
-        def task_key(task):
-            pn, taskname = task.rsplit(':', 1)
-            return (pn, taskname_order.get(taskname, len(taskname_list)), taskname)
-
-        for tune in sorted(tunes.keys()):
-            tasks = tunes[tune]
-            # As for test_signatures it would be nicer to sort tasks
-            # by dependencies here, but that is harder because we have
-            # to report on tasks from different machines, which might
-            # have different dependencies. We resort to pruning the
-            # output by reporting only one task per recipe if the set
-            # of machines matches.
-            #
-            # "bitbake-diffsigs -t -s" is intelligent enough to print
-            # diffs recursively, so often it does not matter that much
-            # if we don't pick the underlying difference
-            # here. However, sometimes recursion fails
-            # (https://bugzilla.yoctoproject.org/show_bug.cgi?id=6428).
-            #
-            # To mitigate that a bit, we use a hard-coded ordering of
-            # tasks that represents how they normally run and prefer
-            # to print the ones that run first.
-            for task in sorted(tasks.keys(), key=task_key):
-                signatures = tasks[task]
-                # do_build can be ignored: it is known to have
-                # different signatures in some cases, for example in
-                # the allarch ca-certificates due to RDEPENDS=openssl.
-                # That particular dependency is whitelisted via
-                # SIGGEN_EXCLUDE_SAFE_RECIPE_DEPS, but still shows up
-                # in the sstate signature hash because filtering it
-                # out would be hard and running do_build multiple
-                # times doesn't really matter.
-                if len(signatures.keys()) > 1 and \
-                   not task.endswith(':do_build'):
-                    # Error!
-                    #
-                    # Sort signatures by machines, because the hex values don't mean anything.
-                    # => all-arch adwaita-icon-theme:do_build: 1234... (beaglebone, qemux86) != abcdf... (qemux86-64)
-                    #
-                    # Skip the line if it is covered already by the predecessor (same pn, same sets of machines).
-                    pn, taskname = task.rsplit(':', 1)
-                    next_line_key = (pn, sorted(signatures.values()))
-                    if next_line_key != last_line_key:
-                        line = '   %s %s: ' % (tune, task)
-                        line += ' != '.join(['%s (%s)' % (signature, ', '.join([m for m in signatures[signature]])) for
-                                             signature in sorted(signatures.keys(), key=lambda s: signatures[s])])
-                        last_line_key = next_line_key
-                        msg.append(line)
-                        # Randomly pick two mismatched signatures and remember how to invoke
-                        # bitbake-diffsigs for them.
-                        iterator = iter(signatures.items())
-                        a = next(iterator)
-                        b = next(iterator)
-                        diffsig_machines = '(%s) != (%s)' % (', '.join(a[1]), ', '.join(b[1]))
-                        diffsig_params = '-t %s %s -s %s %s' % (pn, taskname, a[0], b[0])
-                    else:
-                        pruned += 1
-
-        if msg:
-            msg.insert(0, 'The machines have conflicting signatures for some shared tasks:')
-            if pruned > 0:
-                msg.append('')
-                msg.append('%d tasks were not listed because some other task of the recipe already differed.' % pruned)
-                msg.append('It is likely that differences from different recipes also have the same root cause.')
-            msg.append('')
-            # Explain how to investigate...
-            msg.append('To investigate, run bitbake-diffsigs -t recipename taskname -s fromsig tosig.')
-            cmd = 'bitbake-diffsigs %s' % diffsig_params
-            msg.append('Example: %s in the last line' % diffsig_machines)
-            msg.append('Command: %s' % cmd)
-            # ... and actually do it automatically for that example, but without aborting
-            # when that fails.
-            try:
-                output = check_command('Comparing signatures failed.', cmd).decode('utf-8')
-            except RuntimeError as ex:
-                output = str(ex)
-            msg.extend(['   ' + line for line in output.splitlines()])
-            self.fail('\n'.join(msg))
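
Editor's note: the conflict detection in test_machine_signatures() above hinges on the nested tune -> task -> signature -> machines mapping built with setdefault(). The stand-alone sketch below, using made-up machines, tunes and hash values (not taken from this change), shows how a disagreement between two machines that share a tune surfaces from that structure:

    # Made-up data: two machines share the core2-32 tune but disagree on one task.
    fake_results = [
        ('qemux86',    'core2-32', 'zlib:do_configure', 'aaaa'),
        ('genericx86', 'core2-32', 'zlib:do_configure', 'bbbb'),
        ('qemux86-64', 'x86-64',   'zlib:do_configure', 'cccc'),
    ]

    tunes = {}
    for machine, tune, task, sighash in fake_results:
        tunes.setdefault(tune, {}).setdefault(task, {}).setdefault(sighash, []).append(machine)

    for tune in sorted(tunes):
        for task, signatures in sorted(tunes[tune].items()):
            if len(signatures) > 1 and not task.endswith(':do_build'):
                # More than one signature for the same tune/task pair means the
                # machines do not agree on how to build that task.
                print('%s %s: %s' % (tune, task, ' != '.join(
                    '%s (%s)' % (sig, ', '.join(machines))
                    for sig, machines in sorted(signatures.items()))))

Running this prints a single conflict line for the core2-32 tune, which is exactly the kind of line the test collects into its failure message.
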
diff --git a/import-layers/yocto-poky/scripts/lib/compatlayer/cases/common.py b/import-layers/yocto-poky/scripts/lib/compatlayer/cases/common.py
deleted file mode 100644
index 55e8ba4..0000000
--- a/import-layers/yocto-poky/scripts/lib/compatlayer/cases/common.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# Copyright (C) 2017 Intel Corporation
-# Released under the MIT license (see COPYING.MIT)
-
-import glob
-import os
-import unittest
-from compatlayer import get_signatures, LayerType, check_command, get_depgraph, compare_signatures
-from compatlayer.case import OECompatLayerTestCase
-
-class CommonCompatLayer(OECompatLayerTestCase):
-    def test_readme(self):
-        # The top-level README file may have a suffix (like README.rst or README.txt).
-        readme_files = glob.glob(os.path.join(self.tc.layer['path'], 'README*'))
-        self.assertTrue(len(readme_files) > 0,
-                        msg="Layer doesn't contains README file.")
-
-        # There might be more than one file matching the file pattern above
-        # (for example, README.rst and README-COPYING.rst). The one sorting
-        # first is treated as the "main" one.
-        readme_file = sorted(readme_files)[0]
-        data = ''
-        with open(readme_file, 'r') as f:
-            data = f.read()
-        self.assertTrue(data,
-                msg="Layer contains a README file but it is empty.")
-
-    def test_parse(self):
-        check_command('Layer %s failed to parse.' % self.tc.layer['name'],
-                      'bitbake -p')
-
-    def test_show_environment(self):
-        check_command('Layer %s failed to show environment.' % self.tc.layer['name'],
-                      'bitbake -e')
-
-    def test_world(self):
-        '''
-        "bitbake world" is expected to work. test_signatures does not cover that
-        because it is more lenient and ignores recipes in a world build that
-        are not actually buildable, so here we fail when "bitbake -S none world"
-        fails.
-        '''
-        get_signatures(self.td['builddir'], failsafe=False)
-
-    def test_signatures(self):
-        if self.tc.layer['type'] == LayerType.SOFTWARE and \
-           not self.tc.test_software_layer_signatures:
-            raise unittest.SkipTest("Not testing for signature changes in a software layer %s." \
-                     % self.tc.layer['name'])
-
-        curr_sigs, _ = get_signatures(self.td['builddir'], failsafe=True)
-        msg = compare_signatures(self.td['sigs'], curr_sigs)
-        if msg is not None:
-            self.fail('Adding layer %s changed signatures.\n%s' % (self.tc.layer['name'], msg))
diff --git a/import-layers/yocto-poky/scripts/lib/compatlayer/cases/distro.py b/import-layers/yocto-poky/scripts/lib/compatlayer/cases/distro.py
deleted file mode 100644
index 523acc1..0000000
--- a/import-layers/yocto-poky/scripts/lib/compatlayer/cases/distro.py
+++ /dev/null
@@ -1,26 +0,0 @@
-# Copyright (C) 2017 Intel Corporation
-# Released under the MIT license (see COPYING.MIT)
-
-import unittest
-
-from compatlayer import LayerType
-from compatlayer.case import OECompatLayerTestCase
-
-class DistroCompatLayer(OECompatLayerTestCase):
-    @classmethod
-    def setUpClass(self):
-        if self.tc.layer['type'] != LayerType.DISTRO:
-            raise unittest.SkipTest("DistroCompatLayer: Layer %s isn't Distro one." %\
-                self.tc.layer['name'])
-
-    def test_distro_defines_distros(self):
-        self.assertTrue(self.tc.layer['conf']['distros'],
-                "Layer is a distro layer but doesn't define any distros.")
-
-    def test_distro_no_set_distros(self):
-        from oeqa.utils.commands import get_bb_var
-
-        distro = get_bb_var('DISTRO')
-        self.assertEqual(self.td['bbvars']['DISTRO'], distro,
-                msg="Layer %s modified distro %s -> %s" % \
-                    (self.tc.layer['name'], self.td['bbvars']['DISTRO'], distro))
diff --git a/import-layers/yocto-poky/scripts/lib/compatlayer/context.py b/import-layers/yocto-poky/scripts/lib/compatlayer/context.py
deleted file mode 100644
index 7811d4a..0000000
--- a/import-layers/yocto-poky/scripts/lib/compatlayer/context.py
+++ /dev/null
@@ -1,15 +0,0 @@
-# Copyright (C) 2017 Intel Corporation
-# Released under the MIT license (see COPYING.MIT)
-
-import os
-import sys
-import glob
-import re
-
-from oeqa.core.context import OETestContext
-
-class CompatLayerTestContext(OETestContext):
-    def __init__(self, td=None, logger=None, layer=None, test_software_layer_signatures=True):
-        super(CompatLayerTestContext, self).__init__(td, logger)
-        self.layer = layer
-        self.test_software_layer_signatures = test_software_layer_signatures
diff --git a/import-layers/yocto-poky/scripts/lib/devtool/__init__.py b/import-layers/yocto-poky/scripts/lib/devtool/__init__.py
index d646b0c..94e3d7d 100644
--- a/import-layers/yocto-poky/scripts/lib/devtool/__init__.py
+++ b/import-layers/yocto-poky/scripts/lib/devtool/__init__.py
@@ -115,8 +115,8 @@
         import bb.tinfoil
         tinfoil = bb.tinfoil.Tinfoil(tracking=tracking)
         try:
-            tinfoil.prepare(config_only)
             tinfoil.logger.setLevel(logger.getEffectiveLevel())
+            tinfoil.prepare(config_only)
         except bb.tinfoil.TinfoilUIException:
             tinfoil.shutdown()
             raise DevtoolError('Failed to start bitbake environment')
@@ -191,7 +191,7 @@
         logger.info('Using source tree as build directory since --same-dir specified')
     elif bb.data.inherits_class('autotools-brokensep', d):
         logger.info('Using source tree as build directory since recipe inherits autotools-brokensep')
-    elif d.getVar('B') == os.path.abspath(d.getVar('S')):
+    elif os.path.abspath(d.getVar('B')) == os.path.abspath(d.getVar('S')):
         logger.info('Using source tree as build directory since that would be the default for this recipe')
     else:
         b_is_s = False
@@ -261,34 +261,79 @@
                 targets.append('%s-%s' % (pn, variant))
     return targets
 
-def ensure_npm(config, basepath, fixed_setup=False, check_exists=True):
-    """
-    Ensure that npm is available and either build it or show a
-    reasonable error message
-    """
-    if check_exists:
-        tinfoil = setup_tinfoil(config_only=False, basepath=basepath)
-        try:
-            rd = tinfoil.parse_recipe('nodejs-native')
-            nativepath = rd.getVar('STAGING_BINDIR_NATIVE')
-        finally:
-            tinfoil.shutdown()
-        npmpath = os.path.join(nativepath, 'npm')
-        build_npm = not os.path.exists(npmpath)
-    else:
-        build_npm = True
+def replace_from_file(path, old, new):
+    """Replace strings on a file"""
 
-    if build_npm:
-        logger.info('Building nodejs-native')
+    def read_file(path):
+        data = None
+        with open(path) as f:
+            data = f.read()
+        return data
+
+    def write_file(path, data):
+        if data is None:
+            return
+        wdata = data.rstrip() + "\n"
+        with open(path, "w") as f:
+            f.write(wdata)
+
+    # In case old is None, return immediately
+    if old is None:
+        return
+    try:
+        rdata = read_file(path)
+    except IOError as e:
+        # if the file does not exist, just quit; otherwise raise the exception
+        if e.errno == errno.ENOENT:
+            return
+        else:
+            raise
+
+    old_contents = rdata.splitlines()
+    new_contents = []
+    for old_content in old_contents:
         try:
-            exec_build_env_command(config.init_path, basepath,
-                                'bitbake -q nodejs-native -c addto_recipe_sysroot', watch=True)
-        except bb.process.ExecutionError as e:
-            if "Nothing PROVIDES 'nodejs-native'" in e.stdout:
-                if fixed_setup:
-                    msg = 'nodejs-native is required for npm but is not available within this SDK'
-                else:
-                    msg = 'nodejs-native is required for npm but is not available - you will likely need to add a layer that provides nodejs'
-                raise DevtoolError(msg)
-            else:
-                raise
+            new_contents.append(old_content.replace(old, new))
+        except ValueError:
+            pass
+    write_file(path, "\n".join(new_contents))
+
+
+def update_unlockedsigs(basepath, workspace, fixed_setup, extra=None):
+    """ This function will make unlocked-sigs.inc match the recipes in the
+    workspace plus any extras we want unlocked. """
+
+    if not fixed_setup:
+        # Only need to write this out within the eSDK
+        return
+
+    if not extra:
+        extra = []
+
+    confdir = os.path.join(basepath, 'conf')
+    unlockedsigs = os.path.join(confdir, 'unlocked-sigs.inc')
+
+    # Get current unlocked list if any
+    values = {}
+    def get_unlockedsigs_varfunc(varname, origvalue, op, newlines):
+        values[varname] = origvalue
+        return origvalue, None, 0, True
+    if os.path.exists(unlockedsigs):
+        with open(unlockedsigs, 'r') as f:
+            bb.utils.edit_metadata(f, ['SIGGEN_UNLOCKED_RECIPES'], get_unlockedsigs_varfunc)
+    unlocked = sorted(values.get('SIGGEN_UNLOCKED_RECIPES', []))
+
+    # If the new list is different to the current list, write it out
+    newunlocked = sorted(list(workspace.keys()) + extra)
+    if unlocked != newunlocked:
+        bb.utils.mkdirhier(confdir)
+        with open(unlockedsigs, 'w') as f:
+            f.write("# DO NOT MODIFY! YOUR CHANGES WILL BE LOST.\n" +
+                    "# This layer was created by the OpenEmbedded devtool" +
+                    " utility in order to\n" +
+                    "# contain recipes that are unlocked.\n")
+
+            f.write('SIGGEN_UNLOCKED_RECIPES += "\\\n')
+            for pn in newunlocked:
+                f.write('    ' + pn)
+            f.write('"')
diff --git a/import-layers/yocto-poky/scripts/lib/devtool/deploy.py b/import-layers/yocto-poky/scripts/lib/devtool/deploy.py
index b3730ae..9cc4927 100644
--- a/import-layers/yocto-poky/scripts/lib/devtool/deploy.py
+++ b/import-layers/yocto-poky/scripts/lib/devtool/deploy.py
@@ -16,12 +16,16 @@
 # 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
 """Devtool plugin containing the deploy subcommands"""
 
-import os
-import subprocess
 import logging
-import tempfile
+import os
 import shutil
+import subprocess
+import tempfile
+
+import bb.utils
 import argparse_oe
+import oe.types
+
 from devtool import exec_fakeroot, setup_tinfoil, check_workspace_recipe, DevtoolError
 
 logger = logging.getLogger('devtool')
@@ -64,7 +68,7 @@
         lines.append('                rmdir $file > /dev/null 2>&1 || true')
         lines.append('            fi')
         lines.append('        else')
-        lines.append('            rm $file')
+        lines.append('            rm -f $file')
         lines.append('        fi')
     lines.append('    done')
     if not dryrun:
@@ -119,7 +123,11 @@
         # Put any preserved files back
         lines.append('if [ -d $preservedir ] ; then')
         lines.append('    cd $preservedir')
-        lines.append('    find . -type f -exec mv {} /{} \;')
+        # find from busybox might not have -exec, so we don't use that
+        lines.append('    find . -type f | while read file')
+        lines.append('    do')
+        lines.append('        mv $file /$file')
+        lines.append('    done')
         lines.append('    cd /')
         lines.append('    rm -rf $preservedir')
         lines.append('fi')
@@ -136,11 +144,12 @@
     return '\n'.join(lines)
 
 
+
 def deploy(args, config, basepath, workspace):
     """Entry point for the devtool 'deploy' subcommand"""
-    import re
     import math
     import oe.recipeutils
+    import oe.package
 
     check_workspace_recipe(workspace, args.recipename, checksrc=False)
 
@@ -166,6 +175,17 @@
                             'recipe? If so, the install step has not installed '
                             'any files.' % args.recipename)
 
+        if args.strip and not args.dry_run:
+            # Fakeroot copy to new destination
+            srcdir = recipe_outdir
+            recipe_outdir = os.path.join(rd.getVar('WORKDIR'), 'deploy-target-stripped')
+            if os.path.isdir(recipe_outdir):
+                bb.utils.remove(recipe_outdir, True)
+            exec_fakeroot(rd, "cp -af %s %s" % (os.path.join(srcdir, '.'), recipe_outdir), shell=True)
+            os.environ['PATH'] = ':'.join([os.environ['PATH'], rd.getVar('PATH') or ''])
+            oe.package.strip_execs(args.recipename, recipe_outdir, rd.getVar('STRIP'), rd.getVar('libdir'),
+                        rd.getVar('base_libdir'))
+
         filelist = []
         ftotalsize = 0
         for root, _, files in os.walk(recipe_outdir):
@@ -185,7 +205,6 @@
                 print('  %s' % item)
             return 0
 
-
         extraoptions = ''
         if args.no_host_check:
             extraoptions += '-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'
@@ -297,6 +316,7 @@
 
 def register_commands(subparsers, context):
     """Register devtool subcommands from the deploy plugin"""
+
     parser_deploy = subparsers.add_parser('deploy-target',
                                           help='Deploy recipe output files to live target machine',
                                           description='Deploys a recipe\'s build output (i.e. the output of the do_install task) to a live target machine over ssh. By default, any existing files will be preserved instead of being overwritten and will be restored if you run devtool undeploy-target. Note: this only deploys the recipe itself and not any runtime dependencies, so it is assumed that those have been installed on the target beforehand.',
@@ -309,6 +329,15 @@
     parser_deploy.add_argument('-p', '--no-preserve', help='Do not preserve existing files', action='store_true')
     parser_deploy.add_argument('--no-check-space', help='Do not check for available space before deploying', action='store_true')
     parser_deploy.add_argument('-P', '--port', default='22', help='Port to use for connection to the target')
+
+    strip_opts = parser_deploy.add_mutually_exclusive_group(required=False)
+    strip_opts.add_argument('-S', '--strip',
+                               help='Strip executables prior to deploying (default: %(default)s). '
+                                    'The default value of this option can be controlled by setting the strip option in the [Deploy] section to True or False.',
+                               default=oe.types.boolean(context.config.get('Deploy', 'strip', default='0')),
+                               action='store_true')
+    strip_opts.add_argument('--no-strip', help='Do not strip executables prior to deploy', dest='strip', action='store_false')
+
     parser_deploy.set_defaults(func=deploy)
 
     parser_undeploy = subparsers.add_parser('undeploy-target',
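
Editor's note: the --strip/--no-strip handling above follows the usual argparse pattern for a boolean option whose default is taken from configuration: both flags write to the same dest, and the configured value only seeds the default. A self-contained sketch using only the standard library (the hard-coded default stands in for the context.config.get('Deploy', 'strip', ...) lookup in the real code):

    import argparse

    parser = argparse.ArgumentParser()
    strip_opts = parser.add_mutually_exclusive_group(required=False)
    strip_opts.add_argument('-S', '--strip', dest='strip', action='store_true',
                            default=False)                # default would come from [Deploy] strip
    strip_opts.add_argument('--no-strip', dest='strip', action='store_false')

    print(parser.parse_args([]).strip)              # False - the configured default
    print(parser.parse_args(['--strip']).strip)     # True
    print(parser.parse_args(['--no-strip']).strip)  # False
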
diff --git a/import-layers/yocto-poky/scripts/lib/devtool/export.py b/import-layers/yocto-poky/scripts/lib/devtool/export.py
new file mode 100644
index 0000000..13ee258
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/devtool/export.py
@@ -0,0 +1,119 @@
+# Development tool - export command plugin
+#
+# Copyright (C) 2014-2017 Intel Corporation
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+"""Devtool export plugin"""
+
+import os
+import argparse
+import tarfile
+import logging
+import datetime
+import json
+
+logger = logging.getLogger('devtool')
+
+# output files
+default_arcname_prefix = "workspace-export"
+metadata = '.export_metadata'
+
+def export(args, config, basepath, workspace):
+    """Entry point for the devtool 'export' subcommand"""
+
+    def add_metadata(tar):
+        """Archive the workspace object"""
+        # finally store the workspace metadata
+        with open(metadata, 'w') as fd:
+            fd.write(json.dumps((config.workspace_path, workspace)))
+        tar.add(metadata)
+        os.unlink(metadata)
+
+    def add_recipe(tar, recipe, data):
+        """Archive recipe with proper arcname"""
+        # Create a map of name/arcnames
+        arcnames = []
+        for key, name in data.items():
+            if name:
+                if key == 'srctree':
+                    # all sources, no matter where they are located, go into the sources directory
+                    arcname = 'sources/%s' % recipe
+                else:
+                    arcname = name.replace(config.workspace_path, '')
+                arcnames.append((name, arcname))
+
+        for name, arcname in arcnames:
+            tar.add(name, arcname=arcname)
+
+
+    # Make sure the workspace is non-empty and that any recipes listed via include/exclude are in the workspace
+    if not workspace:
+        logger.info('Workspace contains no recipes, nothing to export')
+        return 0
+    else:
+        for param, recipes in {'include':args.include,'exclude':args.exclude}.items():
+            for recipe in recipes:
+                if recipe not in workspace:
+                    logger.error('Recipe (%s) given in the %s argument is not in the current workspace' % (recipe, param))
+                    return 1
+
+    name = args.file
+
+    default_name = "%s-%s.tar.gz" % (default_arcname_prefix, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
+    if not name:
+        name = default_name
+    else:
+        # if name is a directory, append the default name
+        if os.path.isdir(name):
+            name = os.path.join(name, default_name)
+
+    if os.path.exists(name) and not args.overwrite:
+        logger.error('Tar archive %s exists. Use --overwrite/-o to overwrite it' % name)
+        return 1
+
+    # if the whole workspace is excluded, quit
+    if not len(set(workspace.keys()).difference(set(args.exclude))):
+        logger.warn('All recipes in workspace excluded, nothing to export')
+        return 0
+
+    exported = []
+    with tarfile.open(name, 'w:gz') as tar:
+        if args.include:
+            for recipe in args.include:
+                add_recipe(tar, recipe, workspace[recipe])
+                exported.append(recipe)
+        else:
+            for recipe, data in workspace.items():
+                if recipe not in args.exclude:
+                    add_recipe(tar, recipe, data)
+                    exported.append(recipe)
+
+        add_metadata(tar)
+
+    logger.info('Tar archive created at %s with the following recipes: %s' % (name, ', '.join(exported)))
+    return 0
+
+def register_commands(subparsers, context):
+    """Register devtool export subcommands"""
+    parser = subparsers.add_parser('export',
+                                   help='Export workspace into a tar archive',
+                                   description='Export one or more recipes from the current workspace into a tar archive',
+                                   group='advanced')
+
+    parser.add_argument('--file', '-f', help='Output archive file name')
+    parser.add_argument('--overwrite', '-o', action="store_true", help='Overwrite previous export tar archive')
+    group = parser.add_mutually_exclusive_group()
+    group.add_argument('--include', '-i', nargs='+', default=[], help='Include only the listed recipes in the tar archive')
+    group.add_argument('--exclude', '-e', nargs='+', default=[], help='Exclude the listed recipes from the tar archive')
+    parser.set_defaults(func=export)
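
Editor's note: given the arcname handling in add_recipe() and add_metadata() above, an exported archive contains appends/..., recipes/... and sources/<recipe>/... entries plus a .export_metadata member holding the JSON pair (workspace_path, workspace). A short sketch of reading that metadata back, assuming an archive name in the default workspace-export-<timestamp>.tar.gz form:

    import json
    import tarfile

    # Illustrative archive name; any file produced by 'devtool export' would do.
    with tarfile.open('workspace-export-20170101000000.tar.gz') as tar:
        member = tar.getmember('.export_metadata')
        with tar.extractfile(member) as f:
            workspace_path, workspace = json.loads(f.read().decode('utf-8'))
    print(workspace_path)            # workspace the archive was exported from
    print(sorted(workspace))         # recipes contained in the export
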
diff --git a/import-layers/yocto-poky/scripts/lib/devtool/import.py b/import-layers/yocto-poky/scripts/lib/devtool/import.py
new file mode 100644
index 0000000..c13a180
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/devtool/import.py
@@ -0,0 +1,144 @@
+# Development tool - import command plugin
+#
+# Copyright (C) 2014-2017 Intel Corporation
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+"""Devtool import plugin"""
+
+import os
+import tarfile
+import logging
+import collections
+import json
+import fnmatch
+
+from devtool import standard, setup_tinfoil, replace_from_file, DevtoolError
+from devtool import export
+
+logger = logging.getLogger('devtool')
+
+def devimport(args, config, basepath, workspace):
+    """Entry point for the devtool 'import' subcommand"""
+
+    def get_pn(name):
+        """ Returns the filename of a workspace recipe/append"""
+        metadata = name.split('/')[-1]
+        fn, _ = os.path.splitext(metadata)
+        return fn
+
+    if not os.path.exists(args.file):
+        raise DevtoolError('Tar archive %s does not exist. Export your workspace using "devtool export"' % args.file)
+
+    with tarfile.open(args.file) as tar:
+        # Get exported metadata
+        export_workspace_path = export_workspace = None
+        try:
+            metadata = tar.getmember(export.metadata)
+        except KeyError as ke:
+            raise DevtoolError('The export metadata file created by "devtool export" was not found. "devtool import" can only be used to import tar archives created by "devtool export".')
+
+        tar.extract(metadata)
+        with open(metadata.name) as fdm:
+            export_workspace_path, export_workspace = json.load(fdm)
+        os.unlink(metadata.name)
+
+        members = tar.getmembers()
+
+        # Get appends and recipes from the exported archive; these are
+        # needed to find appends that have no corresponding recipe
+        append_fns, recipe_fns = set(), set()
+        for member in members:
+            if member.name.startswith('appends'):
+                append_fns.add(get_pn(member.name))
+            elif member.name.startswith('recipes'):
+                recipe_fns.add(get_pn(member.name))
+
+        # Setup tinfoil, get required data and shutdown
+        tinfoil = setup_tinfoil(config_only=False, basepath=basepath)
+        try:
+            current_fns = [os.path.basename(recipe[0]) for recipe in tinfoil.cooker.recipecaches[''].pkg_fn.items()]
+        finally:
+            tinfoil.shutdown()
+
+        # Find those appends that do not have recipes in current metadata
+        non_importables = []
+        for fn in append_fns - recipe_fns:
+            # Check against the current metadata (covering the layers listed in bblayers.conf)
+            for current_fn in current_fns:
+                if fnmatch.fnmatch(current_fn, '*' + fn.replace('%', '') + '*'):
+                    break
+            else:
+                non_importables.append(fn)
+                logger.warn('No recipe to append %s.bbappend, skipping' % fn)
+
+        # Extract
+        imported = []
+        for member in members:
+            if member.name == export.metadata:
+                continue
+
+            for nonimp in non_importables:
+                pn = nonimp.split('_')[0]
+                # do not extract data from non-importable recipes or metadata
+                if member.name.startswith('appends/%s' % nonimp) or \
+                        member.name.startswith('recipes/%s' % nonimp) or \
+                        member.name.startswith('sources/%s' % pn):
+                    break
+            else:
+                path = os.path.join(config.workspace_path, member.name)
+                if os.path.exists(path):
+                    # by default, existing files are not overwritten unless -o is given by the user
+                    if args.overwrite:
+                        try:
+                            tar.extract(member, path=config.workspace_path)
+                        except PermissionError as pe:
+                            logger.warn(pe)
+                    else:
+                        logger.warn('File already present. Use --overwrite/-o to overwrite it: %s' % member.name)
+                        continue
+                else:
+                    tar.extract(member, path=config.workspace_path)
+
+                # Update EXTERNALSRC and the devtool md5 file
+                if member.name.startswith('appends'):
+                    if export_workspace_path:
+                        # appends created by 'devtool modify' just need the workspace path updated
+                        replace_from_file(path, export_workspace_path, config.workspace_path)
+
+                        # appends created by 'devtool add' also need the exported source tree path replaced
+                        pn = get_pn(member.name).split('_')[0]
+                        exported_srctree = export_workspace[pn]['srctree']
+                        if exported_srctree:
+                            replace_from_file(path, exported_srctree, os.path.join(config.workspace_path, 'sources', pn))
+
+                    standard._add_md5(config, pn, path)
+                    imported.append(pn)
+
+    if imported:
+        logger.info('Imported recipes into workspace %s: %s' % (config.workspace_path, ', '.join(imported)))
+    else:
+        logger.warn('No recipes imported into the workspace')
+
+    return 0
+
+def register_commands(subparsers, context):
+    """Register devtool import subcommands"""
+    parser = subparsers.add_parser('import',
+                                   help='Import exported tar archive into workspace',
+                                   description='Import tar archive previously created by "devtool export" into workspace',
+                                   group='advanced')
+    parser.add_argument('file', metavar='FILE', help='Name of the tar archive to import')
+    parser.add_argument('--overwrite', '-o', action="store_true", help='Overwrite files when extracting')
+    parser.set_defaults(func=devimport)
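
Editor's note: the append-without-recipe check in devimport() above reduces to a wildcard match of each exported append's base name against the recipe file names known to the current metadata. In isolation, with made-up file names:

    import fnmatch

    current_fns = ['zlib_1.2.11.bb', 'busybox_1.27.2.bb']   # made-up recipe file names
    exported_append = 'example_%'                           # base name of example_%.bbappend
    pattern = '*' + exported_append.replace('%', '') + '*'
    if not any(fnmatch.fnmatch(fn, pattern) for fn in current_fns):
        print('No recipe to append %s.bbappend, skipping' % exported_append)

Appends that fail this check, along with their recipes and sources, are skipped during extraction rather than being imported into the workspace.
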
diff --git a/import-layers/yocto-poky/scripts/lib/devtool/sdk.py b/import-layers/yocto-poky/scripts/lib/devtool/sdk.py
index e8bf0ad..f46577c 100644
--- a/import-layers/yocto-poky/scripts/lib/devtool/sdk.py
+++ b/import-layers/yocto-poky/scripts/lib/devtool/sdk.py
@@ -155,7 +155,7 @@
         if os.path.exists(os.path.join(basepath, 'layers/.git')):
             out = subprocess.check_output("git status --porcelain", shell=True, cwd=layers_dir)
             if not out:
-                ret = subprocess.call("git fetch --all; git reset --hard", shell=True, cwd=layers_dir)
+                ret = subprocess.call("git fetch --all; git reset --hard @{u}", shell=True, cwd=layers_dir)
             else:
                 logger.error("Failed to update metadata as there have been changes made to it. Aborting.");
                 logger.error("Changed files:\n%s" % out);
diff --git a/import-layers/yocto-poky/scripts/lib/devtool/standard.py b/import-layers/yocto-poky/scripts/lib/devtool/standard.py
index 5ff1e23..beea0d4 100644
--- a/import-layers/yocto-poky/scripts/lib/devtool/standard.py
+++ b/import-layers/yocto-poky/scripts/lib/devtool/standard.py
@@ -1,6 +1,6 @@
 # Development tool - standard commands plugin
 #
-# Copyright (C) 2014-2016 Intel Corporation
+# Copyright (C) 2014-2017 Intel Corporation
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 2 as
@@ -30,7 +30,7 @@
 import glob
 import filecmp
 from collections import OrderedDict
-from devtool import exec_build_env_command, setup_tinfoil, check_workspace_recipe, use_external_build, setup_git_repo, recipe_to_append, get_bbclassextend_targets, ensure_npm, DevtoolError
+from devtool import exec_build_env_command, setup_tinfoil, check_workspace_recipe, use_external_build, setup_git_repo, recipe_to_append, get_bbclassextend_targets, update_unlockedsigs, DevtoolError
 from devtool import parse_recipe
 
 logger = logging.getLogger('devtool')
@@ -66,6 +66,12 @@
         elif os.path.isdir(args.recipename):
             logger.warn('Ambiguous argument "%s" - assuming you mean it to be the recipe name' % args.recipename)
 
+    if not args.fetchuri:
+        if args.srcrev:
+            raise DevtoolError('The -S/--srcrev option is only valid when fetching from an SCM repository')
+        if args.srcbranch:
+            raise DevtoolError('The -B/--srcbranch option is only valid when fetching from an SCM repository')
+
     if args.srctree and os.path.isfile(args.srctree):
         args.fetchuri = 'file://' + os.path.abspath(args.srctree)
         args.srctree = ''
@@ -128,9 +134,6 @@
         color = args.color
     extracmdopts = ''
     if args.fetchuri:
-        if args.fetchuri.startswith('npm://'):
-            ensure_npm(config, basepath, args.fixed_setup)
-
         source = args.fetchuri
         if srctree:
             extracmdopts += ' -x %s' % srctree
@@ -152,31 +155,24 @@
         extracmdopts += ' -a'
     if args.fetch_dev:
         extracmdopts += ' --fetch-dev'
+    if args.mirrors:
+        extracmdopts += ' --mirrors'
+    if args.srcrev:
+        extracmdopts += ' --srcrev %s' % args.srcrev
+    if args.srcbranch:
+        extracmdopts += ' --srcbranch %s' % args.srcbranch
+    if args.provides:
+        extracmdopts += ' --provides %s' % args.provides
 
     tempdir = tempfile.mkdtemp(prefix='devtool')
     try:
-        builtnpm = False
-        while True:
-            try:
-                stdout, _ = exec_build_env_command(config.init_path, basepath, 'recipetool --color=%s create --devtool -o %s \'%s\' %s' % (color, tempdir, source, extracmdopts), watch=True)
-            except bb.process.ExecutionError as e:
-                if e.exitcode == 14:
-                    if builtnpm:
-                        raise DevtoolError('Re-running recipetool still failed to find npm')
-                    # FIXME this is a horrible hack that is unfortunately
-                    # necessary due to the fact that we can't run bitbake from
-                    # inside recipetool since recipetool keeps tinfoil active
-                    # with references to it throughout the code, so we have
-                    # to exit out and come back here to do it.
-                    ensure_npm(config, basepath, args.fixed_setup, check_exists=False)
-                    logger.info('Re-running recipe creation process after building nodejs')
-                    builtnpm = True
-                    continue
-                elif e.exitcode == 15:
-                    raise DevtoolError('Could not auto-determine recipe name, please specify it on the command line')
-                else:
-                    raise DevtoolError('Command \'%s\' failed' % e.command)
-            break
+        try:
+            stdout, _ = exec_build_env_command(config.init_path, basepath, 'recipetool --color=%s create --devtool -o %s \'%s\' %s' % (color, tempdir, source, extracmdopts), watch=True)
+        except bb.process.ExecutionError as e:
+            if e.exitcode == 15:
+                raise DevtoolError('Could not auto-determine recipe name, please specify it on the command line')
+            else:
+                raise DevtoolError('Command \'%s\' failed' % e.command)
 
         recipes = glob.glob(os.path.join(tempdir, '*.bb'))
         if recipes:
@@ -282,6 +278,24 @@
                 f.write('    done\n')
                 f.write('}\n')
 
+        # Check if the new layer provides recipes whose priorities have been
+        # overridden by PREFERRED_PROVIDER.
+        recipe_name = rd.getVar('PN')
+        provides = rd.getVar('PROVIDES')
+        # Search every item defined in PROVIDES
+        for recipe_provided in provides.split():
+            preferred_provider = 'PREFERRED_PROVIDER_' + recipe_provided
+            current_pprovider = rd.getVar(preferred_provider)
+            if current_pprovider and current_pprovider != recipe_name:
+                if args.fixed_setup:
+                    # If we are inside the eSDK, add the new PREFERRED_PROVIDER to the workspace layer.conf
+                    layerconf_file = os.path.join(config.workspace_path, "conf", "layer.conf")
+                    with open(layerconf_file, 'a') as f:
+                        f.write('%s = "%s"\n' % (preferred_provider, recipe_name))
+                else:
+                    logger.warn('Set \'%s\' in order to use the recipe' % preferred_provider)
+                break
+
         _add_md5(config, recipename, appendfile)
 
         logger.info('Recipe %s has been automatically created; further editing may be required to make it fully functional' % recipefile)
@@ -383,7 +397,7 @@
     """Entry point for the devtool 'extract' subcommand"""
     import bb
 
-    tinfoil = _prep_extract_operation(config, basepath, args.recipename)
+    tinfoil = setup_tinfoil(basepath=basepath)
     if not tinfoil:
         # Error already shown
         return 1
@@ -393,7 +407,7 @@
             return 1
 
         srctree = os.path.abspath(args.srctree)
-        initial_rev = _extract_source(srctree, args.keep_temp, args.branch, False, rd, tinfoil)
+        initial_rev = _extract_source(srctree, args.keep_temp, args.branch, False, config, basepath, workspace, args.fixed_setup, rd, tinfoil)
         logger.info('Source tree extracted to %s' % srctree)
 
         if initial_rev:
@@ -407,7 +421,7 @@
     """Entry point for the devtool 'sync' subcommand"""
     import bb
 
-    tinfoil = _prep_extract_operation(config, basepath, args.recipename)
+    tinfoil = setup_tinfoil(basepath=basepath)
     if not tinfoil:
         # Error already shown
         return 1
@@ -417,7 +431,7 @@
             return 1
 
         srctree = os.path.abspath(args.srctree)
-        initial_rev = _extract_source(srctree, args.keep_temp, args.branch, True, rd, tinfoil)
+        initial_rev = _extract_source(srctree, args.keep_temp, args.branch, True, config, basepath, workspace, args.fixed_setup, rd, tinfoil)
         logger.info('Source tree %s synchronized' % srctree)
 
         if initial_rev:
@@ -428,31 +442,10 @@
         tinfoil.shutdown()
 
 
-def _prep_extract_operation(config, basepath, recipename, tinfoil=None):
-    """HACK: Ugly workaround for making sure that requirements are met when
-       trying to extract a package. Returns the tinfoil instance to be used."""
-    if not tinfoil:
-        tinfoil = setup_tinfoil(basepath=basepath)
-
-    rd = parse_recipe(config, tinfoil, recipename, True)
-    if not rd:
-        return None
-
-    if bb.data.inherits_class('kernel-yocto', rd):
-        tinfoil.shutdown()
-        try:
-            stdout, _ = exec_build_env_command(config.init_path, basepath,
-                                               'bitbake kern-tools-native')
-            tinfoil = setup_tinfoil(basepath=basepath)
-        except bb.process.ExecutionError as err:
-            raise DevtoolError("Failed to build kern-tools-native:\n%s" %
-                               err.stdout)
-    return tinfoil
-
-
-def _extract_source(srctree, keep_temp, devbranch, sync, d, tinfoil):
+def _extract_source(srctree, keep_temp, devbranch, sync, config, basepath, workspace, fixed_setup, d, tinfoil):
     """Extract sources of a recipe"""
     import oe.recipeutils
+    import oe.patch
 
     pn = d.getVar('PN')
 
@@ -480,6 +473,12 @@
         os.rmdir(srctree)
 
     initial_rev = None
+
+    appendexisted = False
+    recipefile = d.getVar('FILE')
+    appendfile = recipe_to_append(recipefile, config)
+    is_kernel_yocto = bb.data.inherits_class('kernel-yocto', d)
+
     # We need to redirect WORKDIR, STAMPS_DIR etc. under a temporary
     # directory so that:
     # (a) we pick up all files that get unpacked to the WORKDIR, and
@@ -498,137 +497,56 @@
     try:
         tinfoil.logger.setLevel(logging.WARNING)
 
-        crd = d.createCopy()
-        # Make a subdir so we guard against WORKDIR==S
-        workdir = os.path.join(tempdir, 'workdir')
-        crd.setVar('WORKDIR', workdir)
-        if not crd.getVar('S').startswith(workdir):
-            # Usually a shared workdir recipe (kernel, gcc)
-            # Try to set a reasonable default
-            if bb.data.inherits_class('kernel', d):
-                crd.setVar('S', '${WORKDIR}/source')
+        # FIXME this results in a cache reload under control of tinfoil, which is fine
+        # except we don't get the knotty progress bar
+
+        if os.path.exists(appendfile):
+            appendbackup = os.path.join(tempdir, os.path.basename(appendfile) + '.bak')
+            shutil.copyfile(appendfile, appendbackup)
+        else:
+            appendbackup = None
+            bb.utils.mkdirhier(os.path.dirname(appendfile))
+        logger.debug('writing append file %s' % appendfile)
+        with open(appendfile, 'a') as f:
+            f.write('###--- _extract_source\n')
+            f.write('DEVTOOL_TEMPDIR = "%s"\n' % tempdir)
+            f.write('DEVTOOL_DEVBRANCH = "%s"\n' % devbranch)
+            if not is_kernel_yocto:
+                f.write('PATCHTOOL = "git"\n')
+                f.write('PATCH_COMMIT_FUNCTIONS = "1"\n')
+            f.write('inherit devtool-source\n')
+            f.write('###--- _extract_source\n')
+
+        update_unlockedsigs(basepath, workspace, fixed_setup, [pn])
+
+        sstate_manifests = d.getVar('SSTATE_MANIFESTS')
+        bb.utils.mkdirhier(sstate_manifests)
+        preservestampfile = os.path.join(sstate_manifests, 'preserve-stamps')
+        with open(preservestampfile, 'w') as f:
+            f.write(d.getVar('STAMP'))
+        try:
+            if bb.data.inherits_class('kernel-yocto', d):
+                # We need to generate the kernel config
+                task = 'do_configure'
             else:
-                crd.setVar('S', '${WORKDIR}/%s' % os.path.basename(d.getVar('S')))
-        if bb.data.inherits_class('kernel', d):
-            # We don't want to move the source to STAGING_KERNEL_DIR here
-            crd.setVar('STAGING_KERNEL_DIR', '${S}')
+                task = 'do_patch'
 
-        is_kernel_yocto = bb.data.inherits_class('kernel-yocto', d)
-        if not is_kernel_yocto:
-            crd.setVar('PATCHTOOL', 'git')
-            crd.setVar('PATCH_COMMIT_FUNCTIONS', '1')
+            # Run the fetch + unpack tasks
+            res = tinfoil.build_targets(pn,
+                                        task,
+                                        handle_events=True)
+        finally:
+            if os.path.exists(preservestampfile):
+                os.remove(preservestampfile)
 
-        # Apply our changes to the datastore to the server's datastore
-        for key in crd.localkeys():
-            tinfoil.config_data.setVar('%s_pn-%s' % (key, pn), crd.getVar(key, False))
+        if not res:
+            raise DevtoolError('Extracting source for %s failed' % pn)
 
-        tinfoil.config_data.setVar('STAMPS_DIR', os.path.join(tempdir, 'stamps'))
-        tinfoil.config_data.setVar('T', os.path.join(tempdir, 'temp'))
-        tinfoil.config_data.setVar('BUILDCFG_FUNCS', '')
-        tinfoil.config_data.setVar('BUILDCFG_HEADER', '')
-        tinfoil.config_data.setVar('BB_HASH_IGNORE_MISMATCH', '1')
+        with open(os.path.join(tempdir, 'initial_rev'), 'r') as f:
+            initial_rev = f.read()
 
-        tinfoil.set_event_mask(['bb.event.BuildStarted',
-                                'bb.event.BuildCompleted',
-                                'logging.LogRecord',
-                                'bb.command.CommandCompleted',
-                                'bb.command.CommandFailed',
-                                'bb.build.TaskStarted',
-                                'bb.build.TaskSucceeded',
-                                'bb.build.TaskFailed',
-                                'bb.build.TaskFailedSilent'])
-
-        def runtask(target, task):
-            if tinfoil.build_file(target, task):
-                while True:
-                    event = tinfoil.wait_event(0.25)
-                    if event:
-                        if isinstance(event, bb.command.CommandCompleted):
-                            break
-                        elif isinstance(event, bb.command.CommandFailed):
-                            raise DevtoolError('Task do_%s failed: %s' % (task, event.error))
-                        elif isinstance(event, bb.build.TaskFailed):
-                            raise DevtoolError('Task do_%s failed' % task)
-                        elif isinstance(event, bb.build.TaskStarted):
-                            logger.info('Executing %s...' % event._task)
-                        elif isinstance(event, logging.LogRecord):
-                            if event.levelno <= logging.INFO:
-                                continue
-                            logger.handle(event)
-
-        # we need virtual:native:/path/to/recipe if it's a BBCLASSEXTEND
-        fn = tinfoil.get_recipe_file(pn)
-        runtask(fn, 'unpack')
-
-        if bb.data.inherits_class('kernel-yocto', d):
-            # Extra step for kernel to populate the source directory
-            runtask(fn, 'kernel_checkout')
-
-        srcsubdir = crd.getVar('S')
-
-        # Move local source files into separate subdir
-        recipe_patches = [os.path.basename(patch) for patch in
-                          oe.recipeutils.get_recipe_patches(crd)]
-        local_files = oe.recipeutils.get_recipe_local_files(crd)
-
-        # Ignore local files with subdir={BP}
-        srcabspath = os.path.abspath(srcsubdir)
-        local_files = [fname for fname in local_files if
-                       os.path.exists(os.path.join(workdir, fname)) and
-                       (srcabspath == workdir or not
-                       os.path.join(workdir, fname).startswith(srcabspath +
-                           os.sep))]
-        if local_files:
-            for fname in local_files:
-                _move_file(os.path.join(workdir, fname),
-                           os.path.join(tempdir, 'oe-local-files', fname))
-            with open(os.path.join(tempdir, 'oe-local-files', '.gitignore'),
-                      'w') as f:
-                f.write('# Ignore local files, by default. Remove this file '
-                        'if you want to commit the directory to Git\n*\n')
-
-        if srcsubdir == workdir:
-            # Find non-patch non-local sources that were "unpacked" to srctree
-            # directory
-            src_files = [fname for fname in _ls_tree(workdir) if
-                         os.path.basename(fname) not in recipe_patches]
-            # Force separate S so that patch files can be left out from srctree
-            srcsubdir = tempfile.mkdtemp(dir=workdir)
-            tinfoil.config_data.setVar('S_task-patch', srcsubdir)
-            # Move source files to S
-            for path in src_files:
-                _move_file(os.path.join(workdir, path),
-                           os.path.join(srcsubdir, path))
-        elif os.path.dirname(srcsubdir) != workdir:
-            # Handle if S is set to a subdirectory of the source
-            srcsubdir = os.path.join(workdir, os.path.relpath(srcsubdir, workdir).split(os.sep)[0])
-
-        scriptutils.git_convert_standalone_clone(srcsubdir)
-
-        # Make sure that srcsubdir exists
-        bb.utils.mkdirhier(srcsubdir)
-        if not os.path.exists(srcsubdir) or not os.listdir(srcsubdir):
-            logger.warning("no source unpacked to S, either the %s recipe "
-                           "doesn't use any source or the correct source "
-                           "directory could not be determined" % pn)
-
-        setup_git_repo(srcsubdir, crd.getVar('PV'), devbranch, d=d)
-
-        (stdout, _) = bb.process.run('git rev-parse HEAD', cwd=srcsubdir)
-        initial_rev = stdout.rstrip()
-
-        logger.info('Patching...')
-        runtask(fn, 'patch')
-
-        bb.process.run('git tag -f devtool-patched', cwd=srcsubdir)
-
-        kconfig = None
-        if bb.data.inherits_class('kernel-yocto', d):
-            # Store generate and store kernel config
-            logger.info('Generating kernel config')
-            runtask(fn, 'configure')
-            kconfig = os.path.join(crd.getVar('B'), '.config')
-
+        with open(os.path.join(tempdir, 'srcsubdir'), 'r') as f:
+            srcsubdir = f.read()
 
         tempdir_localdir = os.path.join(tempdir, 'oe-local-files')
         srctree_localdir = os.path.join(srctree, 'oe-local-files')
@@ -682,11 +600,15 @@
                 oe.patch.GitApplyTree.gitCommandUserOptions(useroptions, d=d)
                 bb.process.run('git %s commit -a -m "Committing local file symlinks\n\n%s"' % (' '.join(useroptions), oe.patch.GitApplyTree.ignore_commit_prefix), cwd=srctree)
 
-        if kconfig:
+        if is_kernel_yocto:
             logger.info('Copying kernel config to srctree')
-            shutil.copy2(kconfig, srctree)
+            shutil.copy2(os.path.join(tempdir, '.config'), srctree)
 
     finally:
+        if appendbackup:
+            shutil.copyfile(appendbackup, appendfile)
+        elif os.path.exists(appendfile):
+            os.remove(appendfile)
         if keep_temp:
             logger.info('Preserving temporary directory %s' % tempdir)
         else:
@@ -699,8 +621,11 @@
 
     def addfile(fn):
         md5 = bb.utils.md5_file(fn)
-        with open(os.path.join(config.workspace_path, '.devtool_md5'), 'a') as f:
-            f.write('%s|%s|%s\n' % (recipename, os.path.relpath(fn, config.workspace_path), md5))
+        with open(os.path.join(config.workspace_path, '.devtool_md5'), 'a+') as f:
+            md5_str = '%s|%s|%s\n' % (recipename, os.path.relpath(fn, config.workspace_path), md5)
+            f.seek(0, os.SEEK_SET)
+            if md5_str not in f.read():
+                f.write(md5_str)
 
     if os.path.isdir(filename):
         for root, _, files in os.walk(filename):
@@ -772,13 +697,6 @@
             raise DevtoolError("--no-extract specified and source path %s does "
                             "not exist or is not a directory" %
                             srctree)
-        if not args.no_extract:
-            tinfoil = _prep_extract_operation(config, basepath, pn, tinfoil)
-            if not tinfoil:
-                # Error already shown
-                return 1
-            # We need to re-parse because tinfoil may have been re-initialised
-            rd = parse_recipe(config, tinfoil, args.recipename, True)
 
         recipefile = rd.getVar('FILE')
         appendfile = recipe_to_append(recipefile, config, args.wildcard)
@@ -793,7 +711,7 @@
         initial_rev = None
         commits = []
         if not args.no_extract:
-            initial_rev = _extract_source(srctree, args.keep_temp, args.branch, False, rd, tinfoil)
+            initial_rev = _extract_source(srctree, args.keep_temp, args.branch, False, config, basepath, workspace, args.fixed_setup, rd, tinfoil)
             if not initial_rev:
                 return 1
             logger.info('Source tree extracted to %s' % srctree)
@@ -842,7 +760,10 @@
 
             if bb.data.inherits_class('kernel', rd):
                 f.write('SRCTREECOVEREDTASKS = "do_validate_branches do_kernel_checkout '
-                        'do_fetch do_unpack do_patch do_kernel_configme do_kernel_configcheck"\n')
+                        'do_fetch do_unpack do_kernel_configme do_kernel_configcheck"\n')
+                f.write('\ndo_patch() {\n'
+                        '    :\n'
+                        '}\n')
                 f.write('\ndo_configure_append() {\n'
                         '    cp ${B}/.config ${S}/.config.baseline\n'
                         '    ln -sfT ${B}/.config ${S}/.config.new\n'
@@ -852,6 +773,8 @@
                 for commit in commits:
                     f.write('# commit: %s\n' % commit)
 
+        update_unlockedsigs(basepath, workspace, args.fixed_setup, [pn])
+
         _add_md5(config, pn, appendfile)
 
         logger.info('Recipe %s now set up to build from %s' % (pn, srctree))
@@ -1375,12 +1298,13 @@
         if not no_remove:
             # Find list of existing patches in recipe file
             patches_dir = tempfile.mkdtemp(dir=tempdir)
-            old_srcrev = (rd.getVar('SRCREV', False) or '')
+            old_srcrev = rd.getVar('SRCREV') or ''
             upd_p, new_p, del_p = _export_patches(srctree, rd, old_srcrev,
                                                   patches_dir)
+            logger.debug('Patches: update %s, new %s, delete %s' % (dict(upd_p), dict(new_p), dict(del_p)))
 
             # Remove deleted local files and "overlapping" patches
-            remove_files = list(del_f.values()) + list(upd_p.values())
+            remove_files = list(del_f.values()) + list(upd_p.values()) + list(del_p.values())
             if remove_files:
                 removedentries = _remove_file_entries(srcuri, remove_files)[0]
                 update_srcuri = True
@@ -1612,7 +1536,7 @@
 def status(args, config, basepath, workspace):
     """Entry point for the devtool 'status' subcommand"""
     if workspace:
-        for recipe, value in workspace.items():
+        for recipe, value in sorted(workspace.items()):
             recipefile = value['recipefile']
             if recipefile:
                 recipestr = ' (%s)' % recipefile
@@ -1627,6 +1551,26 @@
 def _reset(recipes, no_clean, config, basepath, workspace):
     """Reset one or more recipes"""
 
+    def clean_preferred_provider(pn, layerconf_path):
+        """Remove PREFERRED_PROVIDER from layer.conf"""
+        import re
+        layerconf_file = os.path.join(layerconf_path, 'conf', 'layer.conf')
+        new_layerconf_file = os.path.join(layerconf_path, 'conf', '.layer.conf')
+        pprovider_found = False
+        with open(layerconf_file, 'r') as f:
+            lines = f.readlines()
+            with open(new_layerconf_file, 'a') as nf:
+                for line in lines:
+                    pprovider_exp = r'^PREFERRED_PROVIDER_.*? = "' + pn + r'"$'
+                    if not re.match(pprovider_exp, line):
+                        nf.write(line)
+                    else:
+                        pprovider_found = True
+        if pprovider_found:
+            shutil.move(new_layerconf_file, layerconf_file)
+        else:
+            os.remove(new_layerconf_file)
+
     if recipes and not no_clean:
         if len(recipes) == 1:
             logger.info('Cleaning sysroot for recipe %s...' % recipes[0])
@@ -1679,6 +1623,7 @@
                 # This is unlikely, but if it's empty we can just remove it
                 os.rmdir(srctree)
 
+        clean_preferred_provider(pn, config.workspace_path)
 
 def reset(args, config, basepath, workspace):
     """Entry point for the devtool 'reset' subcommand"""
@@ -1834,10 +1779,15 @@
     parser_add.add_argument('--fetch-dev', help='For npm, also fetch devDependencies', action="store_true")
     parser_add.add_argument('--version', '-V', help='Version to use within recipe (PV)')
     parser_add.add_argument('--no-git', '-g', help='If fetching source, do not set up source tree as a git repository', action="store_true")
-    parser_add.add_argument('--autorev', '-a', help='When fetching from a git repository, set SRCREV in the recipe to a floating revision instead of fixed', action="store_true")
+    group = parser_add.add_mutually_exclusive_group()
+    group.add_argument('--srcrev', '-S', help='Source revision to fetch if fetching from an SCM such as git (default latest)')
+    group.add_argument('--autorev', '-a', help='When fetching from a git repository, set SRCREV in the recipe to a floating revision instead of fixed', action="store_true")
+    parser_add.add_argument('--srcbranch', '-B', help='Branch in source repository if fetching from an SCM such as git (default master)')
     parser_add.add_argument('--binary', '-b', help='Treat the source tree as something that should be installed verbatim (no compilation, same directory structure). Useful with binary packages e.g. RPMs.', action='store_true')
     parser_add.add_argument('--also-native', help='Also add native variant (i.e. support building recipe for the build host as well as the target machine)', action='store_true')
     parser_add.add_argument('--src-subdir', help='Specify subdirectory within source tree to use', metavar='SUBDIR')
+    parser_add.add_argument('--mirrors', help='Enable PREMIRRORS and MIRRORS for source tree fetching (disabled by default).', action="store_true")
+    parser_add.add_argument('--provides', '-p', help='Specify an alias for the item provided by the recipe. E.g. virtual/libgl')
     parser_add.set_defaults(func=add, fixed_setup=context.fixed_setup)
 
     parser_modify = subparsers.add_parser('modify', help='Modify the source for an existing recipe',
@@ -1854,7 +1804,7 @@
     group.add_argument('--no-same-dir', help='Force build in a separate build directory', action="store_true")
     parser_modify.add_argument('--branch', '-b', default="devtool", help='Name for development branch to checkout (when not using -n/--no-extract) (default "%(default)s")')
     parser_modify.add_argument('--keep-temp', help='Keep temporary directory (for debugging)', action="store_true")
-    parser_modify.set_defaults(func=modify)
+    parser_modify.set_defaults(func=modify, fixed_setup=context.fixed_setup)
 
     parser_extract = subparsers.add_parser('extract', help='Extract the source for an existing recipe',
                                        description='Extracts the source for an existing recipe',
@@ -1863,7 +1813,7 @@
     parser_extract.add_argument('srctree', help='Path to where to extract the source tree')
     parser_extract.add_argument('--branch', '-b', default="devtool", help='Name for development branch to checkout (default "%(default)s")')
     parser_extract.add_argument('--keep-temp', action="store_true", help='Keep temporary directory (for debugging)')
-    parser_extract.set_defaults(func=extract, no_workspace=True)
+    parser_extract.set_defaults(func=extract, fixed_setup=context.fixed_setup)
 
     parser_sync = subparsers.add_parser('sync', help='Synchronize the source tree for an existing recipe',
                                        description='Synchronize the previously extracted source tree for an existing recipe',
@@ -1873,7 +1823,7 @@
     parser_sync.add_argument('srctree', help='Path to the source tree')
     parser_sync.add_argument('--branch', '-b', default="devtool", help='Name for development branch to checkout')
     parser_sync.add_argument('--keep-temp', action="store_true", help='Keep temporary directory (for debugging)')
-    parser_sync.set_defaults(func=sync)
+    parser_sync.set_defaults(func=sync, fixed_setup=context.fixed_setup)
 
     parser_rename = subparsers.add_parser('rename', help='Rename a recipe file in the workspace',
                                        description='Renames the recipe file for a recipe in the workspace, changing the name or version part or both, ensuring that all references within the workspace are updated at the same time. Only works when the recipe file itself is in the workspace, e.g. after devtool add. Particularly useful when devtool add did not automatically determine the correct name.',
diff --git a/import-layers/yocto-poky/scripts/lib/devtool/upgrade.py b/import-layers/yocto-poky/scripts/lib/devtool/upgrade.py
index 05fb9e5..f1b3ff0 100644
--- a/import-layers/yocto-poky/scripts/lib/devtool/upgrade.py
+++ b/import-layers/yocto-poky/scripts/lib/devtool/upgrade.py
@@ -1,6 +1,6 @@
 # Development tool - upgrade command plugin
 #
-# Copyright (C) 2014-2015 Intel Corporation
+# Copyright (C) 2014-2017 Intel Corporation
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 2 as
@@ -33,7 +33,7 @@
 
 import oe.recipeutils
 from devtool import standard
-from devtool import exec_build_env_command, setup_tinfoil, DevtoolError, parse_recipe, use_external_build
+from devtool import exec_build_env_command, setup_tinfoil, DevtoolError, parse_recipe, use_external_build, update_unlockedsigs
 
 logger = logging.getLogger('devtool')
 
@@ -180,7 +180,7 @@
             srcuri = rev_re.sub('', srcuri)
     return srcuri, srcrev
 
-def _extract_new_source(newpv, srctree, no_patch, srcrev, branch, keep_temp, tinfoil, rd):
+def _extract_new_source(newpv, srctree, no_patch, srcrev, srcbranch, branch, keep_temp, tinfoil, rd):
     """Extract sources of a recipe with a new version"""
 
     def __run(cmd):
@@ -202,15 +202,38 @@
         __run('git tag -f devtool-base-new')
         md5 = None
         sha256 = None
+        if not srcbranch:
+            check_branch, check_branch_err = __run('git branch -r --contains %s' % srcrev)
+            get_branch = [x.strip() for x in check_branch.splitlines()]
+            # Remove HEAD reference point and drop remote prefix
+            get_branch = [x.split('/', 1)[1] for x in get_branch if not x.startswith('origin/HEAD')]
+            if 'master' in get_branch:
+                # If it is master, we do not need to append 'branch=master' as this is the default.
+                # Even when get_branch contains multiple entries, if 'master' is one of them
+                # we should default to taking from 'master'
+                srcbranch = ''
+            elif len(get_branch) == 1:
+                # If 'master' isn't in get_branch and get_branch contains only one entry, then store that entry in 'srcbranch'
+                srcbranch = get_branch[0]
+            else:
+                # If get_branch contains more than one entry, then display an error and exit.
+                mbrch = '\n  ' + '\n  '.join(get_branch)
+                raise DevtoolError('Revision %s was found on multiple branches: %s\nPlease provide the correct branch in the devtool command with "--srcbranch" or "-B" option.' % (srcrev, mbrch))
     else:
         __run('git checkout devtool-base -b devtool-%s' % newpv)
 
         tmpdir = tempfile.mkdtemp(prefix='devtool')
         try:
-            md5, sha256 = scriptutils.fetch_uri(tinfoil.config_data, uri, tmpdir, rev)
-        except bb.fetch2.FetchError as e:
+            checksums, ftmpdir = scriptutils.fetch_url(tinfoil, uri, rev, tmpdir, logger, preserve_tmp=keep_temp)
+        except scriptutils.FetchUrlFailure as e:
             raise DevtoolError(e)
 
+        if ftmpdir and keep_temp:
+            logger.info('Fetch temp directory is %s' % ftmpdir)
+
+        md5 = checksums['md5sum']
+        sha256 = checksums['sha256sum']
+
         tmpsrctree = _get_srctree(tmpdir)
         srctree = os.path.abspath(srctree)
 
@@ -269,7 +292,7 @@
         else:
             shutil.rmtree(tmpsrctree)
 
-    return (rev, md5, sha256)
+    return (rev, md5, sha256, srcbranch)
 
 def _create_new_recipe(newpv, md5, sha256, srcrev, srcbranch, workspace, tinfoil, rd):
     """Creates the new recipe under workspace"""
@@ -277,7 +300,10 @@
     bpn = rd.getVar('BPN')
     path = os.path.join(workspace, 'recipes', bpn)
     bb.utils.mkdirhier(path)
-    copied, _ = oe.recipeutils.copy_recipe_files(rd, path)
+    copied, _ = oe.recipeutils.copy_recipe_files(rd, path, all_variants=True)
+    if not copied:
+        raise DevtoolError('Internal error - no files were copied for recipe %s' % bpn)
+    logger.debug('Copied %s to %s' % (copied, path))
 
     oldpv = rd.getVar('PV')
     if not newpv:
@@ -329,6 +355,29 @@
 
     return fullpath, copied
 
+
+def _check_git_config():
+    def getconfig(name):
+        try:
+            value = bb.process.run('git config --global %s' % name)[0].strip()
+        except bb.process.ExecutionError as e:
+            if e.exitcode == 1:
+                value = None
+            else:
+                raise
+        return value
+
+    username = getconfig('user.name')
+    useremail = getconfig('user.email')
+    configerr = []
+    if not username:
+        configerr.append('Please set your name using:\n  git config --global user.name')
+    if not useremail:
+        configerr.append('Please set your email using:\n  git config --global user.email')
+    if configerr:
+        raise DevtoolError('Your git configuration is incomplete which will prevent rebases from working:\n' + '\n'.join(configerr))
+
+
 def upgrade(args, config, basepath, workspace):
     """Entry point for the devtool 'upgrade' subcommand"""
 
@@ -339,6 +388,8 @@
     if args.srcbranch and not args.srcrev:
         raise DevtoolError("If you specify --srcbranch/-B then you must use --srcrev/-S to specify the revision")
 
+    _check_git_config()
+
     tinfoil = setup_tinfoil(basepath=basepath, tracking=True)
     try:
         rd = parse_recipe(config, tinfoil, args.recipename, True)
@@ -367,11 +418,11 @@
 
         rf = None
         try:
-            rev1 = standard._extract_source(srctree, False, 'devtool-orig', False, rd, tinfoil)
-            rev2, md5, sha256 = _extract_new_source(args.version, srctree, args.no_patch,
-                                                    args.srcrev, args.branch, args.keep_temp,
+            rev1 = standard._extract_source(srctree, False, 'devtool-orig', False, config, basepath, workspace, args.fixed_setup, rd, tinfoil)
+            rev2, md5, sha256, srcbranch = _extract_new_source(args.version, srctree, args.no_patch,
+                                                    args.srcrev, args.srcbranch, args.branch, args.keep_temp,
                                                     tinfoil, rd)
-            rf, copied = _create_new_recipe(args.version, md5, sha256, args.srcrev, args.srcbranch, config.workspace_path, tinfoil, rd)
+            rf, copied = _create_new_recipe(args.version, md5, sha256, args.srcrev, srcbranch, config.workspace_path, tinfoil, rd)
         except bb.process.CmdError as e:
             _upgrade_error(e, rf, srctree)
         except DevtoolError as e:
@@ -381,6 +432,9 @@
         af = _write_append(rf, srctree, args.same_dir, args.no_same_dir, rev2,
                         copied, config.workspace_path, rd)
         standard._add_md5(config, pn, af)
+
+        update_unlockedsigs(basepath, workspace, args.fixed_setup, [pn])
+
         logger.info('Upgraded source extracted to %s' % srctree)
         logger.info('New recipe is %s' % rf)
     finally:
@@ -406,4 +460,4 @@
     group.add_argument('--same-dir', '-s', help='Build in same directory as source', action="store_true")
     group.add_argument('--no-same-dir', help='Force build in a separate build directory', action="store_true")
     parser_upgrade.add_argument('--keep-temp', action="store_true", help='Keep temporary directory (for debugging)')
-    parser_upgrade.set_defaults(func=upgrade)
+    parser_upgrade.set_defaults(func=upgrade, fixed_setup=context.fixed_setup)
diff --git a/import-layers/yocto-poky/scripts/lib/devtool/utilcmds.py b/import-layers/yocto-poky/scripts/lib/devtool/utilcmds.py
index 0437e64..b745116 100644
--- a/import-layers/yocto-poky/scripts/lib/devtool/utilcmds.py
+++ b/import-layers/yocto-poky/scripts/lib/devtool/utilcmds.py
@@ -30,15 +30,13 @@
 
 logger = logging.getLogger('devtool')
 
-
-def edit_recipe(args, config, basepath, workspace):
-    """Entry point for the devtool 'edit-recipe' subcommand"""
+def _find_recipe_path(args, config, basepath, workspace):
     if args.any_recipe:
         tinfoil = setup_tinfoil(config_only=False, basepath=basepath)
         try:
             rd = parse_recipe(config, tinfoil, args.recipename, True)
             if not rd:
-                return 1
+                raise DevtoolError("Failed to find specified recipe")
             recipefile = rd.getVar('FILE')
         finally:
             tinfoil.shutdown()
@@ -48,8 +46,19 @@
         if not recipefile:
             raise DevtoolError("Recipe file for %s is not under the workspace" %
                                args.recipename)
+    return recipefile
 
-    return scriptutils.run_editor(recipefile)
+
+def find_recipe(args, config, basepath, workspace):
+    """Entry point for the devtool 'find-recipe' subcommand"""
+    recipefile = _find_recipe_path(args, config, basepath, workspace)
+    print(recipefile)
+    return 0
+
+
+def edit_recipe(args, config, basepath, workspace):
+    """Entry point for the devtool 'edit-recipe' subcommand"""
+    return scriptutils.run_editor(_find_recipe_path(args, config, basepath, workspace), logger)
 
 
 def configure_help(args, config, basepath, workspace):
@@ -220,6 +229,14 @@
     parser_edit_recipe.add_argument('--any-recipe', '-a', action="store_true", help='Edit any recipe, not just where the recipe file itself is in the workspace')
     parser_edit_recipe.set_defaults(func=edit_recipe)
 
+    # Find-recipe
+    parser_find_recipe = subparsers.add_parser('find-recipe', help='Find a recipe file in your workspace',
+                                         description='By default, this will find a recipe file in your workspace; you can override this with the -a/--any-recipe option.',
+                                         group='working')
+    parser_find_recipe.add_argument('recipename', help='Recipe to find')
+    parser_find_recipe.add_argument('--any-recipe', '-a', action="store_true", help='Find any recipe, not just where the recipe file itself is in the workspace')
+    parser_find_recipe.set_defaults(func=find_recipe)
+
     # NOTE: Needed to override the usage string here since the default
     # gets the order wrong - recipename must come before --arg
     parser_configure_help = subparsers.add_parser('configure-help', help='Get help on configure script options',
diff --git a/import-layers/yocto-poky/scripts/lib/recipetool/create.py b/import-layers/yocto-poky/scripts/lib/recipetool/create.py
index 4de52fc..5bf939e 100644
--- a/import-layers/yocto-poky/scripts/lib/recipetool/create.py
+++ b/import-layers/yocto-poky/scripts/lib/recipetool/create.py
@@ -1,6 +1,6 @@
 # Recipe creation tool - create command plugin
 #
-# Copyright (C) 2014-2016 Intel Corporation
+# Copyright (C) 2014-2017 Intel Corporation
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 2 as
@@ -156,10 +156,12 @@
                         RecipeHandler.recipebinmap[prog] = pn
 
     @staticmethod
-    def checkfiles(path, speclist, recursive=False):
+    def checkfiles(path, speclist, recursive=False, excludedirs=None):
         results = []
         if recursive:
-            for root, _, files in os.walk(path):
+            for root, dirs, files in os.walk(path, topdown=True):
+                if excludedirs:
+                    dirs[:] = [d for d in dirs if d not in excludedirs]
                 for fn in files:
                     for spec in speclist:
                         if fnmatch.fnmatch(fn, spec):
@@ -339,9 +341,14 @@
                 pn = res.group(1).strip().replace('_', '-')
                 pv = res.group(2).strip().replace('_', '.')
 
-        if not pn and not pv and parseres.scheme not in ['git', 'gitsm', 'svn', 'hg']:
-            srcfile = os.path.basename(parseres.path.rstrip('/'))
-            pn, pv = determine_from_filename(srcfile)
+        if not pn and not pv:
+            if parseres.scheme not in ['git', 'gitsm', 'svn', 'hg']:
+                srcfile = os.path.basename(parseres.path.rstrip('/'))
+                pn, pv = determine_from_filename(srcfile)
+            elif parseres.scheme in ['git', 'gitsm']:
+                pn = os.path.basename(parseres.path.rstrip('/')).lower().replace('_', '-')
+                if pn.endswith('.git'):
+                    pn = pn[:-4]
 
     logger.debug('Determined from source URL: name = "%s", version = "%s"' % (pn, pv))
     return (pn, pv)
@@ -412,11 +419,15 @@
         pkgarch = "${MACHINE_ARCH}"
 
     extravalues = {}
-    checksums = (None, None)
+    checksums = {}
     tempsrc = ''
     source = args.source
     srcsubdir = ''
     srcrev = '${AUTOREV}'
+    srcbranch = ''
+    scheme = ''
+    storeTagName = ''
+    pv_srcpv = False
 
     if os.path.isfile(source):
         source = 'file://%s' % os.path.abspath(source)
@@ -432,24 +443,65 @@
         rev_re = re.compile(';rev=([^;]+)')
         res = rev_re.search(srcuri)
         if res:
+            if args.srcrev:
+                logger.error('rev= parameter and -S/--srcrev option cannot both be specified - use one or the other')
+                sys.exit(1)
+            if args.autorev:
+                logger.error('rev= parameter and -a/--autorev option cannot both be specified - use one or the other')
+                sys.exit(1)
             srcrev = res.group(1)
             srcuri = rev_re.sub('', srcuri)
-        tempsrc = tempfile.mkdtemp(prefix='recipetool-')
-        srctree = tempsrc
-        d = bb.data.createCopy(tinfoil.config_data)
-        if fetchuri.startswith('npm://'):
-            # Check if npm is available
-            npm_bindir = check_npm(tinfoil, args.devtool)
-            d.prependVar('PATH', '%s:' % npm_bindir)
-        logger.info('Fetching %s...' % srcuri)
-        try:
-            checksums = scriptutils.fetch_uri(d, fetchuri, srctree, srcrev)
-        except bb.fetch2.BBFetchException as e:
-            logger.error(str(e).rstrip())
+        elif args.srcrev:
+            srcrev = args.srcrev
+
+        # Check whether the user provided any branch info in fetchuri.
+        # If so, we skip the branch checking process below to honor the user's input.
+        scheme, network, path, user, passwd, params = bb.fetch2.decodeurl(fetchuri)
+        srcbranch = params.get('branch')
+        if args.srcbranch:
+            if srcbranch:
+                logger.error('branch= parameter and -B/--srcbranch option cannot both be specified - use one or the other')
+                sys.exit(1)
+            srcbranch = args.srcbranch
+        nobranch = params.get('nobranch')
+        if nobranch and srcbranch:
+            logger.error('nobranch= cannot be used if you specify a branch')
             sys.exit(1)
+        tag = params.get('tag')
+        if not srcbranch and not nobranch and srcrev != '${AUTOREV}':
+            # Append nobranch=1 in the following conditions:
+            # 1. User did not set 'branch=' in srcuri, and
+            # 2. User did not set 'nobranch=1' in srcuri, and
+            # 3. Source revision is not '${AUTOREV}'
+            params['nobranch'] = '1'
+        if tag:
+            # Keep a copy of the tag, append nobranch=1, then remove the tag from the URL.
+            # The bitbake fetcher is unable to fetch when ${AUTOREV} and a tag are set at the same time.
+            storeTagName = params['tag']
+            params['nobranch'] = '1'
+            del params['tag']
+        if scheme == 'npm':
+            params['noverify'] = '1'
+        fetchuri = bb.fetch2.encodeurl((scheme, network, path, user, passwd, params))
+
+        tmpparent = tinfoil.config_data.getVar('BASE_WORKDIR')
+        bb.utils.mkdirhier(tmpparent)
+        tempsrc = tempfile.mkdtemp(prefix='recipetool-', dir=tmpparent)
+        srctree = os.path.join(tempsrc, 'source')
+
+        try:
+            checksums, ftmpdir = scriptutils.fetch_url(tinfoil, fetchuri, srcrev, srctree, logger, preserve_tmp=args.keep_temp)
+        except scriptutils.FetchUrlFailure as e:
+            logger.error(str(e))
+            sys.exit(1)
+
+        if ftmpdir and args.keep_temp:
+            logger.info('Fetch temp directory is %s' % ftmpdir)
+
         dirlist = os.listdir(srctree)
-        if 'git.indirectionsymlink' in dirlist:
-            dirlist.remove('git.indirectionsymlink')
+        filterout = ['git.indirectionsymlink']
+        dirlist = [x for x in dirlist if x not in filterout]
+        logger.debug('Directory listing (excluding filtered out):\n  %s' % '\n  '.join(dirlist))
         if len(dirlist) == 1:
             singleitem = os.path.join(srctree, dirlist[0])
             if os.path.isdir(singleitem):
@@ -457,30 +509,74 @@
                 srcsubdir = dirlist[0]
                 srctree = os.path.join(srctree, srcsubdir)
             else:
-                with open(singleitem, 'r', errors='surrogateescape') as f:
-                    if '<html' in f.read(100).lower():
-                        logger.error('Fetching "%s" returned a single HTML page - check the URL is correct and functional' % fetchuri)
-                        sys.exit(1)
+                check_single_file(dirlist[0], fetchuri)
+        elif len(dirlist) == 0:
+            if '/' in fetchuri:
+                fn = os.path.join(tinfoil.config_data.getVar('DL_DIR'), fetchuri.split('/')[-1])
+                if os.path.isfile(fn):
+                    check_single_file(fn, fetchuri)
+            # If we've got to here then there's no source so we might as well give up
+            logger.error('URL %s resulted in an empty source tree' % fetchuri)
+            sys.exit(1)
+
+        # We need this checking mechanism to ensure that the recipe created by recipetool and devtool
+        # can be parsed and built by bitbake.
+        # If no branch name was provided, determine the branch containing the provided SRCREV.
+        if not srcbranch and not nobranch and srcrev and (srcrev != '${AUTOREV}') and scheme in ['git', 'gitsm']:
+            try:
+                cmd = 'git branch -r --contains'
+                check_branch, check_branch_err = bb.process.run('%s %s' % (cmd, srcrev), cwd=srctree)
+            except bb.process.ExecutionError as err:
+                logger.error(str(err))
+                sys.exit(1)
+            get_branch = [x.strip() for x in check_branch.splitlines()]
+            # Remove HEAD reference point and drop remote prefix
+            get_branch = [x.split('/', 1)[1] for x in get_branch if not x.startswith('origin/HEAD')]
+            if 'master' in get_branch:
+                # If it is master, we do not need to append 'branch=master' as this is the default.
+                # Even when get_branch contains multiple entries, if 'master' is one of them
+                # we should default to taking from 'master'
+                srcbranch = ''
+            elif len(get_branch) == 1:
+                # If 'master' isn't in get_branch and get_branch contains only one entry, then store that entry in 'srcbranch'
+                srcbranch = get_branch[0]
+            else:
+                # If get_branch contains more than one entry, then display an error and exit.
+                mbrch = '\n  ' + '\n  '.join(get_branch)
+                logger.error('Revision %s was found on multiple branches: %s\nPlease provide the correct branch with -B/--srcbranch' % (srcrev, mbrch))
+                sys.exit(1)
+
+        # Since we might have a value in srcbranch, we need to
+        # reconstruct the srcuri to include 'branch' in params.
+        scheme, network, path, user, passwd, params = bb.fetch2.decodeurl(srcuri)
+        if srcbranch:
+            params['branch'] = srcbranch
+
+        if storeTagName and scheme in ['git', 'gitsm']:
+            # Resolve srcrev from the tag and check the validity of the tag
+            cmd = ('git rev-parse --verify %s' % (storeTagName))
+            try:
+                check_tag, check_tag_err = bb.process.run('%s' % cmd, cwd=srctree)
+                srcrev = check_tag.split()[0]
+            except bb.process.ExecutionError as err:
+                logger.error(str(err))
+                logger.error("Possibly a wrong tag name was provided")
+                sys.exit(1)
+            # Drop the tag from srcuri as it would conflict with SRCREV during recipe parsing.
+            del params['tag']
+        srcuri = bb.fetch2.encodeurl((scheme, network, path, user, passwd, params))
+
         if os.path.exists(os.path.join(srctree, '.gitmodules')) and srcuri.startswith('git://'):
             srcuri = 'gitsm://' + srcuri[6:]
             logger.info('Fetching submodules...')
             bb.process.run('git submodule update --init --recursive', cwd=srctree)
 
         if is_package(fetchuri):
-            tmpfdir = tempfile.mkdtemp(prefix='recipetool-')
-            try:
-                pkgfile = None
+            localdata = bb.data.createCopy(tinfoil.config_data)
+            pkgfile = bb.fetch2.localpath(fetchuri, localdata)
+            if pkgfile:
+                tmpfdir = tempfile.mkdtemp(prefix='recipetool-')
                 try:
-                    fileuri = fetchuri + ';unpack=0'
-                    scriptutils.fetch_uri(tinfoil.config_data, fileuri, tmpfdir, srcrev)
-                    for root, _, files in os.walk(tmpfdir):
-                        for f in files:
-                            pkgfile = os.path.join(root, f)
-                            break
-                except bb.fetch2.BBFetchException as e:
-                    logger.warn('Second fetch to get metadata failed: %s' % str(e).rstrip())
-
-                if pkgfile:
                     if pkgfile.endswith(('.deb', '.ipk')):
                         stdout, _ = bb.process.run('ar x %s' % pkgfile, cwd=tmpfdir)
                         stdout, _ = bb.process.run('tar xf control.tar.gz', cwd=tmpfdir)
@@ -490,8 +586,8 @@
                         stdout, _ = bb.process.run('rpm -qp --xml %s > pkginfo.xml' % pkgfile, cwd=tmpfdir)
                         values = convert_rpm_xml(os.path.join(tmpfdir, 'pkginfo.xml'))
                         extravalues.update(values)
-            finally:
-                shutil.rmtree(tmpfdir)
+                finally:
+                    shutil.rmtree(tmpfdir)
     else:
         # Assume we're pointing to an existing source tree
         if args.extract_to:
@@ -519,9 +615,9 @@
 
     if args.src_subdir:
         srcsubdir = os.path.join(srcsubdir, args.src_subdir)
-        srctree_use = os.path.join(srctree, args.src_subdir)
+        srctree_use = os.path.abspath(os.path.join(srctree, args.src_subdir))
     else:
-        srctree_use = srctree
+        srctree_use = os.path.abspath(srctree)
 
     if args.outfile and os.path.isdir(args.outfile):
         outfile = None
@@ -543,9 +639,10 @@
     # We need a blank line here so that patch_recipe_lines can rewind before the LICENSE comments
     lines_before.append('')
 
-    handled = []
-    licvalues = handle_license_vars(srctree_use, lines_before, handled, extravalues, tinfoil.config_data)
+    # We'll come back and replace this later in handle_license_vars()
+    lines_before.append('##LICENSE_PLACEHOLDER##')
 
+    handled = []
     classes = []
 
     # FIXME This is kind of a hack, we probably ought to be using bitbake to do this
@@ -581,30 +678,31 @@
     else:
         realpv = None
 
-    if srcuri and not realpv or not pn:
-        name_pn, name_pv = determine_from_url(srcuri)
-        if name_pn and not pn:
-            pn = name_pn
-        if name_pv and not realpv:
-            realpv = name_pv
-
     if not srcuri:
         lines_before.append('# No information for SRC_URI yet (only an external source tree was specified)')
     lines_before.append('SRC_URI = "%s"' % srcuri)
-    (md5value, sha256value) = checksums
-    if md5value:
-        lines_before.append('SRC_URI[md5sum] = "%s"' % md5value)
-    if sha256value:
-        lines_before.append('SRC_URI[sha256sum] = "%s"' % sha256value)
+    for key, value in sorted(checksums.items()):
+        lines_before.append('SRC_URI[%s] = "%s"' % (key, value))
     if srcuri and supports_srcrev(srcuri):
         lines_before.append('')
         lines_before.append('# Modify these as desired')
-        lines_before.append('PV = "%s+git${SRCPV}"' % (realpv or '1.0'))
+        # Note: we have code to replace realpv further down if it gets set to some other value
+        scheme, _, _, _, _, _ = bb.fetch2.decodeurl(srcuri)
+        if scheme in ['git', 'gitsm']:
+            srcpvprefix = 'git'
+        elif scheme == 'svn':
+            srcpvprefix = 'svnr'
+        else:
+            srcpvprefix = scheme
+        lines_before.append('PV = "%s+%s${SRCPV}"' % (realpv or '1.0', srcpvprefix))
+        pv_srcpv = True
         if not args.autorev and srcrev == '${AUTOREV}':
             if os.path.exists(os.path.join(srctree, '.git')):
                 (stdout, _) = bb.process.run('git rev-parse HEAD', cwd=srctree)
             srcrev = stdout.rstrip()
         lines_before.append('SRCREV = "%s"' % srcrev)
+    if args.provides:
+        lines_before.append('PROVIDES = "%s"' % args.provides)
     lines_before.append('')
 
     if srcsubdir and not args.binary:
@@ -677,6 +775,15 @@
             if '_' in pn:
                 pn = pn.replace('_', '-')
 
+    if srcuri and not realpv or not pn:
+        name_pn, name_pv = determine_from_url(srcuri)
+        if name_pn and not pn:
+            pn = name_pn
+        if name_pv and not realpv:
+            realpv = name_pv
+
+    licvalues = handle_license_vars(srctree_use, lines_before, handled, extravalues, tinfoil.config_data)
+
     if not outfile:
         if not pn:
             log_error_cond('Unable to determine short program name from source tree - please specify name with -N/--name or output file name with -o/--outfile', args.devtool)
@@ -726,10 +833,11 @@
                 skipblank = True
                 continue
         elif line.startswith('SRC_URI = '):
-            if realpv:
+            if realpv and not pv_srcpv:
                 line = line.replace(realpv, '${PV}')
         elif line.startswith('PV = '):
             if realpv:
+                # Replace the first part of the PV value
                 line = re.sub('"[^+]*\+', '"%s+' % realpv, line)
         lines_before.append(line)
 
@@ -768,9 +876,6 @@
     outlines.extend(lines_after)
 
     if extravalues:
-        if 'LICENSE' in extravalues and not licvalues:
-            # Don't blow away 'CLOSED' value that comments say we set
-            del extravalues['LICENSE']
         _, outlines = oe.recipeutils.patch_recipe_lines(outlines, extravalues, trailing_newline=False)
 
     if args.extract_to:
@@ -807,54 +912,101 @@
 
     return 0
 
+def check_single_file(fn, fetchuri):
+    """Determine if a single downloaded file is something we can't handle"""
+    with open(fn, 'r', errors='surrogateescape') as f:
+        if '<html' in f.read(100).lower():
+            logger.error('Fetching "%s" returned a single HTML page - check the URL is correct and functional' % fetchuri)
+            sys.exit(1)
+
+def split_value(value):
+    if isinstance(value, str):
+        return value.split()
+    else:
+        return value
+
 def handle_license_vars(srctree, lines_before, handled, extravalues, d):
+    lichandled = [x for x in handled if x[0] == 'license']
+    if lichandled:
+        # Someone else has already handled the license vars, just return their value
+        return lichandled[0][1]
+
     licvalues = guess_license(srctree, d)
+    licenses = []
     lic_files_chksum = []
     lic_unknown = []
+    lines = []
     if licvalues:
-        licenses = []
         for licvalue in licvalues:
             if not licvalue[0] in licenses:
                 licenses.append(licvalue[0])
             lic_files_chksum.append('file://%s;md5=%s' % (licvalue[1], licvalue[2]))
             if licvalue[0] == 'Unknown':
                 lic_unknown.append(licvalue[1])
-        lines_before.append('# WARNING: the following LICENSE and LIC_FILES_CHKSUM values are best guesses - it is')
-        lines_before.append('# your responsibility to verify that the values are complete and correct.')
-        if len(licvalues) > 1:
-            lines_before.append('#')
-            lines_before.append('# NOTE: multiple licenses have been detected; they have been separated with &')
-            lines_before.append('# in the LICENSE value for now since it is a reasonable assumption that all')
-            lines_before.append('# of the licenses apply. If instead there is a choice between the multiple')
-            lines_before.append('# licenses then you should change the value to separate the licenses with |')
-            lines_before.append('# instead of &. If there is any doubt, check the accompanying documentation')
-            lines_before.append('# to determine which situation is applicable.')
         if lic_unknown:
-            lines_before.append('#')
-            lines_before.append('# The following license files were not able to be identified and are')
-            lines_before.append('# represented as "Unknown" below, you will need to check them yourself:')
+            lines.append('#')
+            lines.append('# The following license files were not able to be identified and are')
+            lines.append('# represented as "Unknown" below, you will need to check them yourself:')
             for licfile in lic_unknown:
-                lines_before.append('#   %s' % licfile)
-            lines_before.append('#')
-    else:
-        lines_before.append('# Unable to find any files that looked like license statements. Check the accompanying')
-        lines_before.append('# documentation and source headers and set LICENSE and LIC_FILES_CHKSUM accordingly.')
-        lines_before.append('#')
-        lines_before.append('# NOTE: LICENSE is being set to "CLOSED" to allow you to at least start building - if')
-        lines_before.append('# this is not accurate with respect to the licensing of the software being built (it')
-        lines_before.append('# will not be in most cases) you must specify the correct value before using this')
-        lines_before.append('# recipe for anything other than initial testing/development!')
-        licenses = ['CLOSED']
-    pkg_license = extravalues.pop('LICENSE', None)
-    if pkg_license:
+                lines.append('#   %s' % licfile)
+
+    extra_license = split_value(extravalues.pop('LICENSE', []))
+    if '&' in extra_license:
+        extra_license.remove('&')
+    if extra_license:
         if licenses == ['Unknown']:
-            lines_before.append('# NOTE: The following LICENSE value was determined from the original package metadata')
-            licenses = [pkg_license]
+            licenses = extra_license
         else:
-            lines_before.append('# NOTE: Original package metadata indicates license is: %s' % pkg_license)
-    lines_before.append('LICENSE = "%s"' % ' & '.join(licenses))
-    lines_before.append('LIC_FILES_CHKSUM = "%s"' % ' \\\n                    '.join(lic_files_chksum))
-    lines_before.append('')
+            for item in extra_license:
+                if item not in licenses:
+                    licenses.append(item)
+    extra_lic_files_chksum = split_value(extravalues.pop('LIC_FILES_CHKSUM', []))
+    for item in extra_lic_files_chksum:
+        if item not in lic_files_chksum:
+            lic_files_chksum.append(item)
+
+    if lic_files_chksum:
+        # We are going to set the vars, so prepend the standard disclaimer
+        lines.insert(0, '# WARNING: the following LICENSE and LIC_FILES_CHKSUM values are best guesses - it is')
+        lines.insert(1, '# your responsibility to verify that the values are complete and correct.')
+    else:
+        # Without LIC_FILES_CHKSUM we set LICENSE = "CLOSED" to allow the
+        # user to get started easily
+        lines.append('# Unable to find any files that looked like license statements. Check the accompanying')
+        lines.append('# documentation and source headers and set LICENSE and LIC_FILES_CHKSUM accordingly.')
+        lines.append('#')
+        lines.append('# NOTE: LICENSE is being set to "CLOSED" to allow you to at least start building - if')
+        lines.append('# this is not accurate with respect to the licensing of the software being built (it')
+        lines.append('# will not be in most cases) you must specify the correct value before using this')
+        lines.append('# recipe for anything other than initial testing/development!')
+        licenses = ['CLOSED']
+
+    if extra_license and sorted(licenses) != sorted(extra_license):
+        lines.append('# NOTE: Original package / source metadata indicates license is: %s' % ' & '.join(extra_license))
+
+    if len(licenses) > 1:
+        lines.append('#')
+        lines.append('# NOTE: multiple licenses have been detected; they have been separated with &')
+        lines.append('# in the LICENSE value for now since it is a reasonable assumption that all')
+        lines.append('# of the licenses apply. If instead there is a choice between the multiple')
+        lines.append('# licenses then you should change the value to separate the licenses with |')
+        lines.append('# instead of &. If there is any doubt, check the accompanying documentation')
+        lines.append('# to determine which situation is applicable.')
+
+    lines.append('LICENSE = "%s"' % ' & '.join(licenses))
+    lines.append('LIC_FILES_CHKSUM = "%s"' % ' \\\n                    '.join(lic_files_chksum))
+    lines.append('')
+
+    # Replace the placeholder so we get the values in the right place in the recipe file
+    try:
+        pos = lines_before.index('##LICENSE_PLACEHOLDER##')
+    except ValueError:
+        pos = -1
+    if pos == -1:
+        lines_before.extend(lines)
+    else:
+        lines_before[pos:pos+1] = lines
+
     handled.append(('license', licvalues))
     return licvalues
 
@@ -951,6 +1103,10 @@
     crunched_md5sums['1daebd9491d1e8426900b4fa5a422814'] = 'LGPLv2.1'
     # https://github.com/FFmpeg/FFmpeg/blob/master/COPYING.LGPLv3
     crunched_md5sums['2ebfb3bb49b9a48a075cc1425e7f4129'] = 'LGPLv3'
+    # https://raw.githubusercontent.com/eclipse/mosquitto/v1.4.14/epl-v10
+    crunched_md5sums['efe2cb9a35826992b9df68224e3c2628'] = 'EPL-1.0'
+    # https://raw.githubusercontent.com/eclipse/mosquitto/v1.4.14/edl-v10
+    crunched_md5sums['0a9c78c0a398d1bbce4a166757d60387'] = 'EDL-1.0'
     lictext = []
     with open(licfile, 'r', errors='surrogateescape') as f:
         for line in f:
@@ -983,7 +1139,7 @@
     md5sums = get_license_md5sums(d)
 
     licenses = []
-    licspecs = ['*LICEN[CS]E*', 'COPYING*', '*[Ll]icense*', 'LEGAL*', '[Ll]egal*', '*GPL*', 'README.lic*', 'COPYRIGHT*', '[Cc]opyright*']
+    licspecs = ['*LICEN[CS]E*', 'COPYING*', '*[Ll]icense*', 'LEGAL*', '[Ll]egal*', '*GPL*', 'README.lic*', 'COPYRIGHT*', '[Cc]opyright*', 'e[dp]l-v10']
     licfiles = []
     for root, dirs, files in os.walk(srctree):
         for fn in files:
@@ -1142,28 +1298,13 @@
     return values
 
 
-def check_npm(tinfoil, debugonly=False):
-    try:
-        rd = tinfoil.parse_recipe('nodejs-native')
-    except bb.providers.NoProvider:
-        # We still conditionally show the message and exit with the special
-        # return code, otherwise we can't show the proper message for eSDK
-        # users
-        log_error_cond('nodejs-native is required for npm but is not available - you will likely need to add a layer that provides nodejs', debugonly)
-        sys.exit(14)
-    bindir = rd.getVar('STAGING_BINDIR_NATIVE')
-    npmpath = os.path.join(bindir, 'npm')
-    if not os.path.exists(npmpath):
-        log_error_cond('npm required to process specified source, but npm is not available - you need to run bitbake -c addto_recipe_sysroot nodejs-native first', debugonly)
-        sys.exit(14)
-    return bindir
-
 def register_commands(subparsers):
     parser_create = subparsers.add_parser('create',
                                           help='Create a new recipe',
                                           description='Creates a new recipe from a source tree')
     parser_create.add_argument('source', help='Path or URL to source')
     parser_create.add_argument('-o', '--outfile', help='Specify filename for recipe to create')
+    parser_create.add_argument('-p', '--provides', help='Specify an alias for the item provided by the recipe')
     parser_create.add_argument('-m', '--machine', help='Make recipe machine-specific as opposed to architecture-specific', action='store_true')
     parser_create.add_argument('-x', '--extract-to', metavar='EXTRACTPATH', help='Assuming source is a URL, fetch it and extract it to the directory specified as %(metavar)s')
     parser_create.add_argument('-N', '--name', help='Name to use within recipe (PN)')
@@ -1171,12 +1312,13 @@
     parser_create.add_argument('-b', '--binary', help='Treat the source tree as something that should be installed verbatim (no compilation, same directory structure)', action='store_true')
     parser_create.add_argument('--also-native', help='Also add native variant (i.e. support building recipe for the build host as well as the target machine)', action='store_true')
     parser_create.add_argument('--src-subdir', help='Specify subdirectory within source tree to use', metavar='SUBDIR')
-    parser_create.add_argument('-a', '--autorev', help='When fetching from a git repository, set SRCREV in the recipe to a floating revision instead of fixed', action="store_true")
+    group = parser_create.add_mutually_exclusive_group()
+    group.add_argument('-a', '--autorev', help='When fetching from a git repository, set SRCREV in the recipe to a floating revision instead of fixed', action="store_true")
+    group.add_argument('-S', '--srcrev', help='Source revision to fetch if fetching from an SCM such as git (default latest)')
+    parser_create.add_argument('-B', '--srcbranch', help='Branch in source repository if fetching from an SCM such as git (default master)')
     parser_create.add_argument('--keep-temp', action="store_true", help='Keep temporary directory (for debugging)')
     parser_create.add_argument('--fetch-dev', action="store_true", help='For npm, also fetch devDependencies')
     parser_create.add_argument('--devtool', action="store_true", help=argparse.SUPPRESS)
-    # FIXME I really hate having to set parserecipes for this, but given we may need
-    # to call into npm (and we don't know in advance if we will or not) and in order
-    # to do so we need to know npm's recipe sysroot path, there's not much alternative
-    parser_create.set_defaults(func=create_recipe, parserecipes=True)
+    parser_create.add_argument('--mirrors', action="store_true", help='Enable PREMIRRORS and MIRRORS for source tree fetching (disabled by default).')
+    parser_create.set_defaults(func=create_recipe)
 
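
The -a/--autorev and -S/--srcrev options added above are placed in a mutually exclusive argparse group, so asking for a floating revision and a fixed one at the same time is rejected at parse time. A minimal standalone sketch of that behaviour (option names taken from the diff, everything else illustrative):

    import argparse

    parser = argparse.ArgumentParser(prog='recipetool create')
    group = parser.add_mutually_exclusive_group()
    group.add_argument('-a', '--autorev', action='store_true')
    group.add_argument('-S', '--srcrev')
    parser.add_argument('-B', '--srcbranch')

    print(parser.parse_args(['-S', 'abc123', '-B', 'mybranch']))
    # Passing both -a and -S makes argparse exit with:
    #   error: argument -S/--srcrev: not allowed with argument -a/--autorev
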
diff --git a/import-layers/yocto-poky/scripts/lib/recipetool/create_buildsys.py b/import-layers/yocto-poky/scripts/lib/recipetool/create_buildsys.py
index e914e53..4743c74 100644
--- a/import-layers/yocto-poky/scripts/lib/recipetool/create_buildsys.py
+++ b/import-layers/yocto-poky/scripts/lib/recipetool/create_buildsys.py
@@ -863,6 +863,10 @@
                             break
                     if len(foundvalues) == len(valuemap):
                         break
+        # Drop values containing unexpanded RPM macros
+        for k in list(foundvalues.keys()):
+            if '%' in foundvalues[k]:
+                del foundvalues[k]
         if 'PV' in foundvalues:
             if not validate_pv(foundvalues['PV']):
                 del foundvalues['PV']
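
The loop added above discards any value that still contains an unexpanded RPM macro such as %{version}, since such strings are useless as PV or other recipe values. A tiny self-contained illustration (the dictionary contents are made up):

    foundvalues = {'PN': 'foo', 'PV': '%{version}', 'SUMMARY': 'A foo tool'}

    # Drop values containing unexpanded RPM macros (same filter as above)
    for k in list(foundvalues.keys()):
        if '%' in foundvalues[k]:
            del foundvalues[k]

    print(foundvalues)   # {'PN': 'foo', 'SUMMARY': 'A foo tool'}
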
diff --git a/import-layers/yocto-poky/scripts/lib/recipetool/create_buildsys_python.py b/import-layers/yocto-poky/scripts/lib/recipetool/create_buildsys_python.py
index ec5449b..5bd2aa3 100644
--- a/import-layers/yocto-poky/scripts/lib/recipetool/create_buildsys_python.py
+++ b/import-layers/yocto-poky/scripts/lib/recipetool/create_buildsys_python.py
@@ -356,6 +356,8 @@
         # Naive mapping of setup() arguments to PKG-INFO field names
         for d in [info, non_literals]:
             for key, value in list(d.items()):
+                if key is None:
+                    continue
                 new_key = _map(key)
                 if new_key != key:
                     del d[key]
diff --git a/import-layers/yocto-poky/scripts/lib/recipetool/create_kmod.py b/import-layers/yocto-poky/scripts/lib/recipetool/create_kmod.py
index 7cf188d..4569b53 100644
--- a/import-layers/yocto-poky/scripts/lib/recipetool/create_kmod.py
+++ b/import-layers/yocto-poky/scripts/lib/recipetool/create_kmod.py
@@ -40,7 +40,7 @@
 
         makefiles = []
 
-        files = RecipeHandler.checkfiles(srctree, ['*.c', '*.h'], recursive=True)
+        files = RecipeHandler.checkfiles(srctree, ['*.c', '*.h'], recursive=True, excludedirs=['contrib', 'test', 'examples'])
         if files:
             for cfile in files:
                 # Look in same dir or parent for Makefile
diff --git a/import-layers/yocto-poky/scripts/lib/recipetool/create_npm.py b/import-layers/yocto-poky/scripts/lib/recipetool/create_npm.py
index cb8f338..ae53972 100644
--- a/import-layers/yocto-poky/scripts/lib/recipetool/create_npm.py
+++ b/import-layers/yocto-poky/scripts/lib/recipetool/create_npm.py
@@ -21,7 +21,7 @@
 import tempfile
 import shutil
 import json
-from recipetool.create import RecipeHandler, split_pkg_licenses, handle_license_vars, check_npm
+from recipetool.create import RecipeHandler, split_pkg_licenses, handle_license_vars
 
 logger = logging.getLogger('recipetool')
 
@@ -36,6 +36,27 @@
 class NpmRecipeHandler(RecipeHandler):
     lockdownpath = None
 
+    def _ensure_npm(self, fixed_setup=False):
+        if not tinfoil.recipes_parsed:
+            tinfoil.parse_recipes()
+        try:
+            rd = tinfoil.parse_recipe('nodejs-native')
+        except bb.providers.NoProvider:
+            if fixed_setup:
+                msg = 'nodejs-native is required for npm but is not available within this SDK'
+            else:
+                msg = 'nodejs-native is required for npm but is not available - you will likely need to add a layer that provides nodejs'
+            logger.error(msg)
+            return None
+        bindir = rd.getVar('STAGING_BINDIR_NATIVE')
+        npmpath = os.path.join(bindir, 'npm')
+        if not os.path.exists(npmpath):
+            tinfoil.build_targets('nodejs-native', 'addto_recipe_sysroot')
+            if not os.path.exists(npmpath):
+                logger.error('npm required to process specified source, but nodejs-native did not seem to populate it')
+                return None
+        return bindir
+
     def _handle_license(self, data):
         '''
         Handle the license value from an npm package.json file
@@ -109,7 +130,6 @@
             if varname == 'SRC_URI':
                 if not origvalue.startswith('npm://'):
                     src_uri = origvalue.split()
-                    changed = False
                     deplist = {}
                     for dep, depver in optdeps.items():
                         depdata = self.get_npm_data(dep, depver, d)
@@ -123,14 +143,15 @@
                         depdata = self.get_npm_data(dep, depver, d)
                         deplist[dep] = depdata
 
+                    extra_urls = []
                     for dep, depdata in deplist.items():
                         version = depdata.get('version', None)
                         if version:
                             url = 'npm://registry.npmjs.org;name=%s;version=%s;subdir=node_modules/%s' % (dep, version, dep)
-                            scriptutils.fetch_uri(d, url, srctree)
-                            src_uri.append(url)
-                            changed = True
-                    if changed:
+                            extra_urls.append(url)
+                    if extra_urls:
+                        scriptutils.fetch_url(tinfoil, ' '.join(extra_urls), None, srctree, logger)
+                        src_uri.extend(extra_urls)
                         return src_uri, None, -1, True
             return origvalue, None, 0, True
         updated, newlines = bb.utils.edit_metadata(lines_before, ['SRC_URI'], varfunc)
@@ -143,40 +164,9 @@
                 lines_before.append(line)
         return updated
 
-    def _replace_license_vars(self, srctree, lines_before, handled, extravalues, d):
-        for item in handled:
-            if isinstance(item, tuple):
-                if item[0] == 'license':
-                    del item
-                    break
-
-        calledvars = []
-        def varfunc(varname, origvalue, op, newlines):
-            if varname in ['LICENSE', 'LIC_FILES_CHKSUM']:
-                for i, e in enumerate(reversed(newlines)):
-                    if not e.startswith('#'):
-                        stop = i
-                        while stop > 0:
-                            newlines.pop()
-                            stop -= 1
-                        break
-                calledvars.append(varname)
-                if len(calledvars) > 1:
-                    # The second time around, put the new license text in
-                    insertpos = len(newlines)
-                    handle_license_vars(srctree, newlines, handled, extravalues, d)
-                return None, None, 0, True
-            return origvalue, None, 0, True
-        updated, newlines = bb.utils.edit_metadata(lines_before, ['LICENSE', 'LIC_FILES_CHKSUM'], varfunc)
-        if updated:
-            del lines_before[:]
-            lines_before.extend(newlines)
-        else:
-            raise Exception('Did not find license variables')
-
     def process(self, srctree, classes, lines_before, lines_after, handled, extravalues):
         import bb.utils
-        import oe
+        import oe.package
         from collections import OrderedDict
 
         if 'buildsystem' in handled:
@@ -189,7 +179,9 @@
         files = RecipeHandler.checkfiles(srctree, ['package.json'])
         if files:
             d = bb.data.createCopy(tinfoil.config_data)
-            npm_bindir = check_npm(tinfoil, self._devtool)
+            npm_bindir = self._ensure_npm()
+            if not npm_bindir:
+                sys.exit(14)
             d.prependVar('PATH', '%s:' % npm_bindir)
 
             data = read_package_json(files[0])
@@ -205,10 +197,7 @@
 
                 fetchdev = extravalues['fetchdev'] or None
                 deps, optdeps, devdeps = self.get_npm_package_dependencies(data, fetchdev)
-                updated = self._handle_dependencies(d, deps, optdeps, devdeps, lines_before, srctree)
-                if updated:
-                    # We need to redo the license stuff
-                    self._replace_license_vars(srctree, lines_before, handled, extravalues, d)
+                self._handle_dependencies(d, deps, optdeps, devdeps, lines_before, srctree)
 
                 # Shrinkwrap
                 localfilesdir = tempfile.mkdtemp(prefix='recipetool-npm')
@@ -219,11 +208,14 @@
 
                 # Split each npm module out to its own package
                 npmpackages = oe.package.npm_split_package_dirs(srctree)
+                licvalues = None
                 for item in handled:
                     if isinstance(item, tuple):
                         if item[0] == 'license':
                             licvalues = item[1]
                             break
+                if not licvalues:
+                    licvalues = handle_license_vars(srctree, lines_before, handled, extravalues, d)
                 if licvalues:
                     # Augment the license list with information we have in the packages
                     licenses = {}
@@ -244,13 +236,7 @@
                     all_licenses = list(set([item.replace('_', ' ') for pkglicense in pkglicenses.values() for item in pkglicense]))
                     if '&' in all_licenses:
                         all_licenses.remove('&')
-                    # Go back and update the LICENSE value since we have a bit more
-                    # information than when that was written out (and we know all apply
-                    # vs. there being a choice, so we can join them with &)
-                    for i, line in enumerate(lines_before):
-                        if line.startswith('LICENSE = '):
-                            lines_before[i] = 'LICENSE = "%s"' % ' & '.join(all_licenses)
-                            break
+                    extravalues['LICENSE'] = ' & '.join(all_licenses)
 
                 # Need to move S setting after inherit npm
                 for i, line in enumerate(lines_before):
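
With the change above, the aggregated npm license ends up in extravalues['LICENSE'] rather than being patched into an already-written LICENSE line. A hedged illustration of how the per-package license lists are flattened and joined (package names and licenses below are invented):

    pkglicenses = {'foo': ['MIT'],
                   'foo-bar': ['ISC', '&'],
                   'foo-baz': ['BSD_3_Clause']}

    all_licenses = list(set([item.replace('_', ' ')
                             for pkglicense in pkglicenses.values()
                             for item in pkglicense]))
    if '&' in all_licenses:
        all_licenses.remove('&')

    print(' & '.join(all_licenses))   # e.g. MIT & ISC & BSD 3 Clause (set order varies)
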
diff --git a/import-layers/yocto-poky/scripts/lib/recipetool/newappend.py b/import-layers/yocto-poky/scripts/lib/recipetool/newappend.py
index 0b63759..decce83 100644
--- a/import-layers/yocto-poky/scripts/lib/recipetool/newappend.py
+++ b/import-layers/yocto-poky/scripts/lib/recipetool/newappend.py
@@ -74,7 +74,7 @@
             return 1
 
     if args.edit:
-        return scriptutils.run_editor([append_path, recipe_path])
+        return scriptutils.run_editor([append_path, recipe_path], logger)
     else:
         print(append_path)
 
diff --git a/import-layers/yocto-poky/scripts/lib/scriptutils.py b/import-layers/yocto-poky/scripts/lib/scriptutils.py
index 92b601c..85b1c94 100644
--- a/import-layers/yocto-poky/scripts/lib/scriptutils.py
+++ b/import-layers/yocto-poky/scripts/lib/scriptutils.py
@@ -23,6 +23,8 @@
 import subprocess
 import tempfile
 import shutil
+import random
+import string
 
 def logger_create(name, stream=None):
     logger = logging.getLogger(name)
@@ -78,52 +80,139 @@
             bb.process.run('git repack -a', cwd=repodir)
             os.remove(alternatesfile)
 
-def fetch_uri(d, uri, destdir, srcrev=None):
-    """Fetch a URI to a local directory"""
+def _get_temp_recipe_dir(d):
+    # This is a little bit hacky but we need to find a place where we can put
+    # the recipe so that bitbake can find it. We're going to delete it at the
+    # end so it doesn't really matter where we put it.
+    bbfiles = d.getVar('BBFILES').split()
+    fetchrecipedir = None
+    for pth in bbfiles:
+        if pth.endswith('.bb'):
+            pthdir = os.path.dirname(pth)
+            if os.access(os.path.dirname(os.path.dirname(pthdir)), os.W_OK):
+                fetchrecipedir = pthdir.replace('*', 'recipetool')
+                if pthdir.endswith('workspace/recipes/*'):
+                    # Prefer the workspace
+                    break
+    return fetchrecipedir
+
+class FetchUrlFailure(Exception):
+    def __init__(self, url):
+        self.url = url
+    def __str__(self):
+        return "Failed to fetch URL %s" % self.url
+
+def fetch_url(tinfoil, srcuri, srcrev, destdir, logger, preserve_tmp=False, mirrors=False):
+    """
+    Fetch the specified URL using the normal do_fetch and do_unpack tasks, so
+    that any dependencies needed to support the fetch operation are taken
+    care of.
+    """
+
     import bb
-    tmpparent = d.getVar('BASE_WORKDIR')
+
+    checksums = {}
+    fetchrecipepn = None
+
+    # We need to put our temp directory under ${BASE_WORKDIR} otherwise
+    # we may have problems with the recipe-specific sysroot population
+    tmpparent = tinfoil.config_data.getVar('BASE_WORKDIR')
     bb.utils.mkdirhier(tmpparent)
-    tmpworkdir = tempfile.mkdtemp(dir=tmpparent)
+    tmpdir = tempfile.mkdtemp(prefix='recipetool-', dir=tmpparent)
     try:
-        bb.utils.mkdirhier(destdir)
-        localdata = bb.data.createCopy(d)
+        tmpworkdir = os.path.join(tmpdir, 'work')
+        logger.debug('fetch_url: temp dir is %s' % tmpdir)
 
-        # Set some values to allow extend_recipe_sysroot to work here we're we are not running from a task
-        localdata.setVar('WORKDIR', tmpworkdir)
-        localdata.setVar('BB_RUNTASK', 'do_fetch')
-        localdata.setVar('PN', 'dummy')
-        localdata.setVar('BB_LIMITEDDEPS', '1')
-        bb.build.exec_func("extend_recipe_sysroot", localdata)
-
-        # Set some values for the benefit of the fetcher code
-        localdata.setVar('BB_STRICT_CHECKSUM', '')
-        localdata.setVar('SRCREV', srcrev)
-        ret = (None, None)
-        olddir = os.getcwd()
+        fetchrecipedir = _get_temp_recipe_dir(tinfoil.config_data)
+        if not fetchrecipedir:
+            logger.error('Searched BBFILES but unable to find a writeable place to put temporary recipe')
+            sys.exit(1)
+        fetchrecipe = None
+        bb.utils.mkdirhier(fetchrecipedir)
         try:
-            fetcher = bb.fetch2.Fetch([uri], localdata)
-            for u in fetcher.ud:
-                ud = fetcher.ud[u]
-                ud.ignore_checksums = True
-            fetcher.download()
-            for u in fetcher.ud:
-                ud = fetcher.ud[u]
-                if ud.localpath.rstrip(os.sep) == localdata.getVar('DL_DIR').rstrip(os.sep):
-                    raise Exception('Local path is download directory - please check that the URI "%s" is correct' % uri)
-            fetcher.unpack(destdir)
-            for u in fetcher.ud:
-                ud = fetcher.ud[u]
-                if ud.method.recommends_checksum(ud):
-                    md5value = bb.utils.md5_file(ud.localpath)
-                    sha256value = bb.utils.sha256_file(ud.localpath)
-                    ret = (md5value, sha256value)
-        finally:
-            os.chdir(olddir)
-    finally:
-        shutil.rmtree(tmpworkdir)
-    return ret
+            # Generate a dummy recipe so we can follow more or less normal paths
+            # for do_fetch and do_unpack
+            # tempfile functions are avoided here because they can produce underscores,
+            # which aren't allowed in recipe file names except to separate the version
+            rndstring = ''.join(random.choice(string.ascii_lowercase + string.digits) for _ in range(8))
+            fetchrecipe = os.path.join(fetchrecipedir, 'tmp-recipetool-%s.bb' % rndstring)
+            fetchrecipepn = os.path.splitext(os.path.basename(fetchrecipe))[0]
+            logger.debug('Generating initial recipe %s for fetching' % fetchrecipe)
+            with open(fetchrecipe, 'w') as f:
+                # We don't want to have to specify LIC_FILES_CHKSUM
+                f.write('LICENSE = "CLOSED"\n')
+                # We don't need the cross-compiler
+                f.write('INHIBIT_DEFAULT_DEPS = "1"\n')
+                # We don't have the checksums yet so we can't require them
+                f.write('BB_STRICT_CHECKSUM = "ignore"\n')
+                f.write('SRC_URI = "%s"\n' % srcuri)
+                f.write('SRCREV = "%s"\n' % srcrev)
+                f.write('WORKDIR = "%s"\n' % tmpworkdir)
+                # Set S out of the way so it doesn't get created under the workdir
+                f.write('S = "%s"\n' % os.path.join(tmpdir, 'emptysrc'))
+                if not mirrors:
+                    # We do not need PREMIRRORS since we are almost certainly
+                    # fetching new source rather than something that has already
+                    # been fetched. Hence, we disable them by default.
+                    # However, we provide an option for users to enable it.
+                    f.write('PREMIRRORS = ""\n')
+                    f.write('MIRRORS = ""\n')
 
-def run_editor(fn):
+            logger.info('Fetching %s...' % srcuri)
+
+            # FIXME this is too noisy at the moment
+
+            # Parse recipes so our new recipe gets picked up
+            tinfoil.parse_recipes()
+
+            def eventhandler(event):
+                if isinstance(event, bb.fetch2.MissingChecksumEvent):
+                    checksums.update(event.checksums)
+                    return True
+                return False
+
+            # Run the fetch + unpack tasks
+            res = tinfoil.build_targets(fetchrecipepn,
+                                        'do_unpack',
+                                        handle_events=True,
+                                        extra_events=['bb.fetch2.MissingChecksumEvent'],
+                                        event_callback=eventhandler)
+            if not res:
+                raise FetchUrlFailure(srcuri)
+
+            # Remove unneeded directories
+            rd = tinfoil.parse_recipe(fetchrecipepn)
+            if rd:
+                pathvars = ['T', 'RECIPE_SYSROOT', 'RECIPE_SYSROOT_NATIVE']
+                for pathvar in pathvars:
+                    path = rd.getVar(pathvar)
+                    shutil.rmtree(path)
+        finally:
+            if fetchrecipe:
+                try:
+                    os.remove(fetchrecipe)
+                except FileNotFoundError:
+                    pass
+            try:
+                os.rmdir(fetchrecipedir)
+            except OSError as e:
+                import errno
+                if e.errno != errno.ENOTEMPTY:
+                    raise
+
+        bb.utils.mkdirhier(destdir)
+        for fn in os.listdir(tmpworkdir):
+            shutil.move(os.path.join(tmpworkdir, fn), destdir)
+
+    finally:
+        if not preserve_tmp:
+            shutil.rmtree(tmpdir)
+            tmpdir = None
+
+    return checksums, tmpdir
+
+
+def run_editor(fn, logger=None):
     if isinstance(fn, str):
         params = '"%s"' % fn
     else:
@@ -134,8 +223,8 @@
     editor = os.getenv('VISUAL', os.getenv('EDITOR', 'vi'))
     try:
         return subprocess.check_call('%s %s' % (editor, params), shell=True)
-    except OSError as exc:
-        logger.error("Execution of editor '%s' failed: %s", editor, exc)
+    except subprocess.CalledProcessError as exc:
+        logger.error("Execution of '%s' failed: %s" % (editor, exc))
         return 1
 
 def is_src_url(param):
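
For context, a minimal sketch of how the new fetch_url() above might be driven from a standalone script. It assumes poky's scripts/lib directory is on sys.path and follows the usual bb.tinfoil setup pattern; the URL and destination directory are hypothetical:

    import logging
    import bb.tinfoil
    import scriptutils

    logger = logging.getLogger('example')

    tinfoil = bb.tinfoil.Tinfoil()
    try:
        tinfoil.prepare(config_only=False)   # full parse so build_targets() works
        # fetch + unpack into ./extracted; the temp dir is removed (preserve_tmp=False)
        checksums, tmpdir = scriptutils.fetch_url(
            tinfoil,
            'https://example.com/foo-1.0.tar.gz',   # hypothetical source URL
            None,                                   # srcrev only matters for SCM URLs
            './extracted',
            logger)
        print(checksums)   # md5/sha256 collected via MissingChecksumEvent, if any
    finally:
        tinfoil.shutdown()
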
diff --git a/import-layers/yocto-poky/scripts/lib/wic/canned-wks/common.wks.inc b/import-layers/yocto-poky/scripts/lib/wic/canned-wks/common.wks.inc
index 5cf2fd1..89880b4 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/canned-wks/common.wks.inc
+++ b/import-layers/yocto-poky/scripts/lib/wic/canned-wks/common.wks.inc
@@ -1,3 +1,3 @@
 # This file is included into 3 canned wks files from this directory
 part /boot --source bootimg-pcbios --ondisk sda --label boot --active --align 1024
-part / --source rootfs --ondisk sda --fstype=ext4 --label platform --align 1024
+part / --source rootfs --use-uuid --fstype=ext4 --label platform --align 1024
diff --git a/import-layers/yocto-poky/scripts/lib/wic/canned-wks/directdisk-bootloader-config.cfg b/import-layers/yocto-poky/scripts/lib/wic/canned-wks/directdisk-bootloader-config.cfg
index d5a07d2..c58e74a 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/canned-wks/directdisk-bootloader-config.cfg
+++ b/import-layers/yocto-poky/scripts/lib/wic/canned-wks/directdisk-bootloader-config.cfg
@@ -12,16 +12,16 @@
 
 LABEL Graphics console boot
 KERNEL /vmlinuz
-APPEND label=boot root=/dev/sda2 rootwait
+APPEND label=boot rootwait
 
 LABEL Serial console boot
 KERNEL /vmlinuz
-APPEND label=boot root=/dev/sda2 rootwait console=ttyS0,115200
+APPEND label=boot rootwait console=ttyS0,115200
 
 LABEL Graphics console install
 KERNEL /vmlinuz
-APPEND label=install root=/dev/sda2 rootwait
+APPEND label=install rootwait
 
 LABEL Serial console install
 KERNEL /vmlinuz
-APPEND label=install root=/dev/sda2 rootwait console=ttyS0,115200
+APPEND label=install rootwait console=ttyS0,115200
diff --git a/import-layers/yocto-poky/scripts/lib/wic/canned-wks/qemux86-directdisk.wks b/import-layers/yocto-poky/scripts/lib/wic/canned-wks/qemux86-directdisk.wks
index db30bbc..1f8466a 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/canned-wks/qemux86-directdisk.wks
+++ b/import-layers/yocto-poky/scripts/lib/wic/canned-wks/qemux86-directdisk.wks
@@ -4,5 +4,5 @@
 
 include common.wks.inc
 
-bootloader  --timeout=0  --append="vga=0 uvesafb.mode_option=640x480-32 root=/dev/sda2 rw mem=256M ip=192.168.7.2::192.168.7.1:255.255.255.0 oprofile.timer=1 rootfstype=ext4 "
+bootloader  --timeout=0  --append="vga=0 uvesafb.mode_option=640x480-32 rw mem=256M ip=192.168.7.2::192.168.7.1:255.255.255.0 oprofile.timer=1 rootfstype=ext4 "
 
diff --git a/import-layers/yocto-poky/scripts/lib/wic/canned-wks/systemd-bootdisk.wks b/import-layers/yocto-poky/scripts/lib/wic/canned-wks/systemd-bootdisk.wks
index 4bd9d6a..95d7b97 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/canned-wks/systemd-bootdisk.wks
+++ b/import-layers/yocto-poky/scripts/lib/wic/canned-wks/systemd-bootdisk.wks
@@ -2,10 +2,10 @@
 # long-description: Creates a partitioned EFI disk image that the user
 # can directly dd to boot media. The selected bootloader is systemd-boot.
 
-part /boot --source bootimg-efi --sourceparams="loader=systemd-boot" --ondisk sda --label msdos --active --align 1024
+part /boot --source bootimg-efi --sourceparams="loader=systemd-boot" --ondisk sda --label msdos --active --align 1024 --use-uuid
 
 part / --source rootfs --ondisk sda --fstype=ext4 --label platform --align 1024 --use-uuid
 
-part swap --ondisk sda --size 44 --label swap1 --fstype=swap
+part swap --ondisk sda --size 44 --label swap1 --fstype=swap --use-uuid
 
 bootloader --ptable gpt --timeout=5 --append="rootwait rootfstype=ext4 console=ttyS0,115200 console=tty0"
diff --git a/import-layers/yocto-poky/scripts/lib/wic/engine.py b/import-layers/yocto-poky/scripts/lib/wic/engine.py
index f59821f..edcfab3 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/engine.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/engine.py
@@ -30,10 +30,17 @@
 
 import logging
 import os
+import tempfile
+import json
+import subprocess
+
+from collections import namedtuple, OrderedDict
+from distutils.spawn import find_executable
 
 from wic import WicError
+from wic.filemap import sparse_copy
 from wic.pluginbase import PluginMgr
-from wic.utils.misc import get_bitbake_var
+from wic.misc import get_bitbake_var, exec_cmd
 
 logger = logging.getLogger('wic')
 
@@ -201,17 +208,18 @@
     """
     Print the list of images or source plugins.
     """
-    if len(args) < 1:
+    if args.list_type is None:
         return False
 
-    if args == ["images"]:
+    if args.list_type == "images":
+
         list_canned_images(scripts_path)
         return True
-    elif args == ["source-plugins"]:
+    elif args.list_type == "source-plugins":
         list_source_plugins()
         return True
-    elif len(args) == 2 and args[1] == "help":
-        wks_file = args[0]
+    elif len(args.help_for) == 1 and args.help_for[0] == 'help':
+        wks_file = args.list_type
         fullpath = find_canned_image(scripts_path, wks_file)
         if not fullpath:
             raise WicError("No image named %s found, exiting. "
@@ -224,6 +232,306 @@
 
     return False
 
+
+class Disk:
+    def __init__(self, imagepath, native_sysroot, fstypes=('fat', 'ext')):
+        self.imagepath = imagepath
+        self.native_sysroot = native_sysroot
+        self.fstypes = fstypes
+        self._partitions = None
+        self._partimages = {}
+        self._lsector_size = None
+        self._psector_size = None
+        self._ptable_format = None
+
+        # find parted
+        self.paths = "/bin:/usr/bin:/usr/sbin:/sbin/"
+        if native_sysroot:
+            for path in self.paths.split(':'):
+                self.paths = "%s%s:%s" % (native_sysroot, path, self.paths)
+
+        self.parted = find_executable("parted", self.paths)
+        if not self.parted:
+            raise WicError("Can't find executable parted")
+
+        self.partitions = self.get_partitions()
+
+    def __del__(self):
+        for path in self._partimages.values():
+            os.unlink(path)
+
+    def get_partitions(self):
+        if self._partitions is None:
+            self._partitions = OrderedDict()
+            out = exec_cmd("%s -sm %s unit B print" % (self.parted, self.imagepath))
+            parttype = namedtuple("Part", "pnum start end size fstype")
+            splitted = out.splitlines()
+            lsector_size, psector_size, self._ptable_format = splitted[1].split(":")[3:6]
+            self._lsector_size = int(lsector_size)
+            self._psector_size = int(psector_size)
+            for line in splitted[2:]:
+                pnum, start, end, size, fstype = line.split(':')[:5]
+                partition = parttype(int(pnum), int(start[:-1]), int(end[:-1]),
+                                     int(size[:-1]), fstype)
+                self._partitions[pnum] = partition
+
+        return self._partitions
+
+    def __getattr__(self, name):
+        """Get path to the executable in a lazy way."""
+        if name in ("mdir", "mcopy", "mdel", "mdeltree", "sfdisk", "e2fsck",
+                    "resize2fs", "mkswap", "mkdosfs", "debugfs"):
+            aname = "_%s" % name
+            if aname not in self.__dict__:
+                setattr(self, aname, find_executable(name, self.paths))
+                if aname not in self.__dict__:
+                    raise WicError("Can't find executable {}".format(name))
+            return self.__dict__[aname]
+        return self.__dict__[name]
+
+    def _get_part_image(self, pnum):
+        if pnum not in self.partitions:
+            raise WicError("Partition %s is not in the image")
+        part = self.partitions[pnum]
+        # check if fstype is supported
+        for fstype in self.fstypes:
+            if part.fstype.startswith(fstype):
+                break
+        else:
+            raise WicError("Not supported fstype: {}".format(part.fstype))
+        if pnum not in self._partimages:
+            tmpf = tempfile.NamedTemporaryFile(prefix="wic-part")
+            dst_fname = tmpf.name
+            tmpf.close()
+            sparse_copy(self.imagepath, dst_fname, skip=part.start, length=part.size)
+            self._partimages[pnum] = dst_fname
+
+        return self._partimages[pnum]
+
+    def _put_part_image(self, pnum):
+        """Put partition image into partitioned image."""
+        sparse_copy(self._partimages[pnum], self.imagepath,
+                    seek=self.partitions[pnum].start)
+
+    def dir(self, pnum, path):
+        if self.partitions[pnum].fstype.startswith('ext'):
+            return exec_cmd("{} {} -R 'ls -l {}'".format(self.debugfs,
+                                                         self._get_part_image(pnum),
+                                                         path), as_shell=True)
+        else: # fat
+            return exec_cmd("{} -i {} ::{}".format(self.mdir,
+                                                   self._get_part_image(pnum),
+                                                   path))
+
+    def copy(self, src, pnum, path):
+        """Copy partition image into wic image."""
+        if self.partitions[pnum].fstype.startswith('ext'):
+            cmd = "echo -e 'cd {}\nwrite {} {}' | {} -w {}".\
+                      format(path, src, os.path.basename(src),
+                             self.debugfs, self._get_part_image(pnum))
+        else: # fat
+            cmd = "{} -i {} -snop {} ::{}".format(self.mcopy,
+                                                  self._get_part_image(pnum),
+                                                  src, path)
+        exec_cmd(cmd, as_shell=True)
+        self._put_part_image(pnum)
+
+    def remove(self, pnum, path):
+        """Remove files/dirs from the partition."""
+        partimg = self._get_part_image(pnum)
+        if self.partitions[pnum].fstype.startswith('ext'):
+            exec_cmd("{} {} -wR 'rm {}'".format(self.debugfs,
+                                                self._get_part_image(pnum),
+                                                path), as_shell=True)
+        else: # fat
+            cmd = "{} -i {} ::{}".format(self.mdel, partimg, path)
+            try:
+                exec_cmd(cmd)
+            except WicError as err:
+                if "not found" in str(err) or "non empty" in str(err):
+                    # mdel outputs 'File ... not found' or 'directory ... non empty'
+                    # try to use mdeltree as path could be a directory
+                    cmd = "{} -i {} ::{}".format(self.mdeltree,
+                                                 partimg, path)
+                    exec_cmd(cmd)
+                else:
+                    raise err
+        self._put_part_image(pnum)
+
+    def write(self, target, expand):
+        """Write disk image to the media or file."""
+        def write_sfdisk_script(outf, parts):
+            for key, val in parts['partitiontable'].items():
+                if key in ("partitions", "device", "firstlba", "lastlba"):
+                    continue
+                if key == "id":
+                    key = "label-id"
+                outf.write("{}: {}\n".format(key, val))
+            outf.write("\n")
+            for part in parts['partitiontable']['partitions']:
+                line = ''
+                for name in ('attrs', 'name', 'size', 'type', 'uuid'):
+                    if name == 'size' and part['type'] == 'f':
+                        # don't write size for extended partition
+                        continue
+                    val = part.get(name)
+                    if val:
+                        line += '{}={}, '.format(name, val)
+                if line:
+                    line = line[:-2] # strip ', '
+                if part.get('bootable'):
+                    line += ' ,bootable'
+                outf.write("{}\n".format(line))
+            outf.flush()
+
+        def read_ptable(path):
+            out = exec_cmd("{} -dJ {}".format(self.sfdisk, path))
+            return json.loads(out)
+
+        def write_ptable(parts, target):
+            with tempfile.NamedTemporaryFile(prefix="wic-sfdisk-", mode='w') as outf:
+                write_sfdisk_script(outf, parts)
+                cmd = "{} --no-reread {} < {} 2>/dev/null".format(self.sfdisk, target, outf.name)
+                try:
+                    subprocess.check_output(cmd, shell=True)
+                except subprocess.CalledProcessError as err:
+                    raise WicError("Can't run '{}' command: {}".format(cmd, err))
+
+        if expand is None:
+            sparse_copy(self.imagepath, target)
+        else:
+            # copy first sectors that may contain bootloader
+            sparse_copy(self.imagepath, target, length=2048 * self._lsector_size)
+
+            # copy source partition table to the target
+            parts = read_ptable(self.imagepath)
+            write_ptable(parts, target)
+
+            # get size of unpartitioned space
+            free = None
+            for line in exec_cmd("{} -F {}".format(self.sfdisk, target)).splitlines():
+                if line.startswith("Unpartitioned space ") and line.endswith("sectors"):
+                    free = int(line.split()[-2])
+            if free is None:
+                raise WicError("Can't get size of unpartitioned space")
+
+            # calculate expanded partitions sizes
+            sizes = {}
+            for num, part in enumerate(parts['partitiontable']['partitions'], 1):
+                if num in expand:
+                    if expand[num] != 0: # don't resize partition if size is set to 0
+                        sectors = expand[num] // self._lsector_size
+                        free -= sectors - part['size']
+                        part['size'] = sectors
+                        sizes[num] = sectors
+                elif part['type'] != 'f':
+                    sizes[num] = -1
+
+            for num, part in enumerate(parts['partitiontable']['partitions'], 1):
+                if sizes.get(num) == -1:
+                    part['size'] += free // len(sizes)
+
+            # write resized partition table to the target
+            write_ptable(parts, target)
+
+            # read resized partition table
+            parts = read_ptable(target)
+
+            # copy partitions content
+            for num, part in enumerate(parts['partitiontable']['partitions'], 1):
+                pnum = str(num)
+                fstype = self.partitions[pnum].fstype
+
+                # copy unchanged partition
+                if part['size'] == self.partitions[pnum].size // self._lsector_size:
+                    logger.info("copying unchanged partition {}".format(pnum))
+                    sparse_copy(self._get_part_image(pnum), target, seek=part['start'] * self._lsector_size)
+                    continue
+
+                # resize or re-create partitions
+                if fstype.startswith('ext') or fstype.startswith('fat') or \
+                   fstype.startswith('linux-swap'):
+
+                    partfname = None
+                    with tempfile.NamedTemporaryFile(prefix="wic-part{}-".format(pnum)) as partf:
+                        partfname = partf.name
+
+                    if fstype.startswith('ext'):
+                        logger.info("resizing ext partition {}".format(pnum))
+                        partimg = self._get_part_image(pnum)
+                        sparse_copy(partimg, partfname)
+                        exec_cmd("{} -pf {}".format(self.e2fsck, partfname))
+                        exec_cmd("{} {} {}s".format(\
+                                 self.resize2fs, partfname, part['size']))
+                    elif fstype.startswith('fat'):
+                        logger.info("copying content of the fat partition {}".format(pnum))
+                        with tempfile.TemporaryDirectory(prefix='wic-fatdir-') as tmpdir:
+                            # copy content to the temporary directory
+                            cmd = "{} -snompi {} :: {}".format(self.mcopy,
+                                                               self._get_part_image(pnum),
+                                                               tmpdir)
+                            exec_cmd(cmd)
+                            # create new msdos partition
+                            label = part.get("name")
+                            label_str = "-n {}".format(label) if label else ''
+
+                            cmd = "{} {} -C {} {}".format(self.mkdosfs, label_str, partfname,
+                                                          part['size'])
+                            exec_cmd(cmd)
+                            # copy content from the temporary directory to the new partition
+                            cmd = "{} -snompi {} {}/* ::".format(self.mcopy, partfname, tmpdir)
+                            exec_cmd(cmd, as_shell=True)
+                    elif fstype.startswith('linux-swap'):
+                        logger.info("creating swap partition {}".format(pnum))
+                        label = part.get("name")
+                        label_str = "-L {}".format(label) if label else ''
+                        uuid = part.get("uuid")
+                        uuid_str = "-U {}".format(uuid) if uuid else ''
+                        with open(partfname, 'w') as sparse:
+                            os.ftruncate(sparse.fileno(), part['size'] * self._lsector_size)
+                        exec_cmd("{} {} {} {}".format(self.mkswap, label_str, uuid_str, partfname))
+                    sparse_copy(partfname, target, seek=part['start'] * self._lsector_size)
+                    os.unlink(partfname)
+                elif part['type'] != 'f':
+                    logger.warn("skipping partition {}: unsupported fstype {}".format(pnum, fstype))
+
+def wic_ls(args, native_sysroot):
+    """List contents of partitioned image or vfat partition."""
+    disk = Disk(args.path.image, native_sysroot)
+    if not args.path.part:
+        if disk.partitions:
+            print('Num     Start        End          Size      Fstype')
+            for part in disk.partitions.values():
+                print("{:2d}  {:12d} {:12d} {:12d}  {}".format(\
+                          part.pnum, part.start, part.end,
+                          part.size, part.fstype))
+    else:
+        path = args.path.path or '/'
+        print(disk.dir(args.path.part, path))
+
+def wic_cp(args, native_sysroot):
+    """
+    Copy local file or directory to the vfat partition of
+    partitioned image.
+    """
+    disk = Disk(args.dest.image, native_sysroot)
+    disk.copy(args.src, args.dest.part, args.dest.path)
+
+def wic_rm(args, native_sysroot):
+    """
+    Remove files or directories from the vfat partition of
+    partitioned image.
+    """
+    disk = Disk(args.path.image, native_sysroot)
+    disk.remove(args.path.part, args.path.path)
+
+def wic_write(args, native_sysroot):
+    """
+    Write image to a target device.
+    """
+    disk = Disk(args.image, native_sysroot, ('fat', 'ext', 'swap'))
+    disk.write(args.target, args.expand)
+
 def find_canned(scripts_path, file_name):
     """
     Find a file either by its path or by name in the canned files dir.
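
To make the new Disk helper above more concrete, a short hedged sketch of inspecting an image interactively. It assumes scripts/lib is on sys.path and, with native_sysroot=None, that parted/mtools/debugfs are available on the host PATH; the image path is hypothetical:

    from wic.engine import Disk

    disk = Disk('core-image-minimal-qemux86-64.wic', None)

    # partition table, keyed by partition number as a string
    for part in disk.partitions.values():
        print(part.pnum, part.start, part.end, part.size, part.fstype)

    # directory listing of the first partition (vfat -> mdir, ext* -> debugfs)
    print(disk.dir('1', '/'))
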
diff --git a/import-layers/yocto-poky/scripts/lib/wic/filemap.py b/import-layers/yocto-poky/scripts/lib/wic/filemap.py
index 1f1aacc..77e32b9 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/filemap.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/filemap.py
@@ -34,13 +34,9 @@
     Returns block size for file object 'file_obj'. Errors are indicated by the
     'IOError' exception.
     """
-
-    from fcntl import ioctl
-    import struct
-
     # Get the block size of the host file-system for the image file by calling
     # the FIGETBSZ ioctl (number 2).
-    binary_data = ioctl(file_obj, 2, struct.pack('I', 0))
+    binary_data = fcntl.ioctl(file_obj, 2, struct.pack('I', 0))
     return struct.unpack('I', binary_data)[0]
 
 class ErrorNotSupp(Exception):
@@ -228,7 +224,7 @@
         try:
             tmp_obj = tempfile.TemporaryFile("w+", dir=directory)
         except IOError as err:
-            raise ErrorNotSupp("cannot create a temporary in \"%s\": %s"
+            raise ErrorNotSupp("cannot create a temporary in \"%s\": %s" \
                               % (directory, err))
 
         try:
@@ -530,8 +526,18 @@
     except ErrorNotSupp:
         return FilemapSeek(image, log)
 
-def sparse_copy(src_fname, dst_fname, offset=0, skip=0, api=None):
-    """Efficiently copy sparse file to or into another file."""
+def sparse_copy(src_fname, dst_fname, skip=0, seek=0,
+                length=0, api=None):
+    """
+    Efficiently copy sparse file to or into another file.
+
+    src_fname: path to source file
+    dst_fname: path to destination file
+    skip: skip N bytes at the start of src
+    seek: seek N bytes from the start of dst
+    length: read N bytes from src and write them to dst
+    api: FilemapFiemap or FilemapSeek object
+    """
     if not api:
         api = filemap
     fmap = api(src_fname)
@@ -539,17 +545,32 @@
         dst_file = open(dst_fname, 'r+b')
     except IOError:
         dst_file = open(dst_fname, 'wb')
-        dst_file.truncate(os.path.getsize(src_fname))
+        if length:
+            dst_size = length + seek
+        else:
+            dst_size = os.path.getsize(src_fname) + seek - skip
+        dst_file.truncate(dst_size)
 
+    written = 0
     for first, last in fmap.get_mapped_ranges(0, fmap.blocks_cnt):
         start = first * fmap.block_size
         end = (last + 1) * fmap.block_size
 
+        if skip >= end:
+            continue
+
         if start < skip < end:
-            fmap._f_image.seek(skip, os.SEEK_SET)
-        else:
-            fmap._f_image.seek(start, os.SEEK_SET)
-        dst_file.seek(offset + start, os.SEEK_SET)
+            start = skip
+
+        fmap._f_image.seek(start, os.SEEK_SET)
+
+        written += start - skip - written
+        if length and written >= length:
+            dst_file.seek(seek + length, os.SEEK_SET)
+            dst_file.close()
+            return
+
+        dst_file.seek(seek + start - skip, os.SEEK_SET)
 
         chunk_size = 1024 * 1024
         to_read = end - start
@@ -558,7 +579,14 @@
         while read < to_read:
             if read + chunk_size > to_read:
                 chunk_size = to_read - read
-            chunk = fmap._f_image.read(chunk_size)
+            size = chunk_size
+            if length and written + size > length:
+                size = length - written
+            chunk = fmap._f_image.read(size)
             dst_file.write(chunk)
-            read += chunk_size
+            read += size
+            written += size
+            if written == length:
+                dst_file.close()
+                return
     dst_file.close()
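
The reworked sparse_copy() above gains skip/seek/length parameters, which is what lets the engine code pull a single partition out of an image and write it back. A minimal sketch assuming the signature shown in this diff (offsets and file names are made up):

    from wic.filemap import sparse_copy

    start = 1048576     # partition start offset in bytes
    size = 23390208     # partition size in bytes

    # extract bytes [start, start + size) of the image into a standalone file
    sparse_copy('image.wic', 'part1.img', skip=start, length=size)

    # write the (possibly modified) partition image back at the same offset
    sparse_copy('part1.img', 'image.wic', seek=start)
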
diff --git a/import-layers/yocto-poky/scripts/lib/wic/help.py b/import-layers/yocto-poky/scripts/lib/wic/help.py
index d6e027d..2ac45e0 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/help.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/help.py
@@ -56,7 +56,7 @@
     """
     Subcommand help dispatcher.
     """
-    if len(args) == 1 or not display_help(args[1], subcommands):
+    if args.help_topic == None or not display_help(args.help_topic, subcommands):
         print(usage_str)
 
 
@@ -82,19 +82,20 @@
     Dispatch to subcommand handler borrowed from combo-layer.
     Should use argparse, but has to work in 2.6.
     """
-    if not args:
+    if not args.command:
         logger.error("No subcommand specified, exiting")
         parser.print_help()
         return 1
-    elif args[0] == "help":
+    elif args.command == "help":
         wic_help(args, main_command_usage, subcommands)
-    elif args[0] not in subcommands:
-        logger.error("Unsupported subcommand %s, exiting\n", args[0])
+    elif args.command not in subcommands:
+        logger.error("Unsupported subcommand %s, exiting\n", args.command)
         parser.print_help()
         return 1
     else:
-        usage = subcommands.get(args[0], subcommand_error)[1]
-        subcommands.get(args[0], subcommand_error)[0](args[1:], usage)
+        subcmd = subcommands.get(args.command, subcommand_error)
+        usage = subcmd[1]
+        subcmd[0](args, usage)
 
 
 ##
@@ -130,10 +131,10 @@
  Create a new OpenEmbedded image
 
  usage: wic create <wks file or image name> [-o <DIRNAME> | --outdir <DIRNAME>]
-            [-i <JSON PROPERTY FILE> | --infile <JSON PROPERTY_FILE>]
             [-e | --image-name] [-s, --skip-build-check] [-D, --debug]
             [-r, --rootfs-dir] [-b, --bootimg-dir]
             [-k, --kernel-dir] [-n, --native-sysroot] [-f, --build-rootfs]
+            [-c, --compress-with] [-m, --bmap]
 
  This command creates an OpenEmbedded image based on the 'OE kickstart
  commands' found in the <wks file>.
@@ -154,7 +155,7 @@
         [-e | --image-name] [-s, --skip-build-check] [-D, --debug]
         [-r, --rootfs-dir] [-b, --bootimg-dir]
         [-k, --kernel-dir] [-n, --native-sysroot] [-f, --build-rootfs]
-        [-c, --compress-with] [-m, --bmap]
+        [-c, --compress-with] [-m, --bmap] [--no-fstab-update]
 
 DESCRIPTION
     This command creates an OpenEmbedded image based on the 'OE
@@ -226,6 +227,11 @@
 
     The -m option is used to produce .bmap file for the image. This file
     can be used to flash image using bmaptool utility.
+
+    The --no-fstab-update option is used to keep the fstab file unchanged. When
+    this option is used, the final fstab file is the same as the one in the
+    rootfs: wic does not update it, e.g. by adding a new mount point. The user
+    can control the fstab file content through the base-files recipe.
 """
 
 wic_list_usage = """
@@ -283,6 +289,230 @@
     details.
 """
 
+wic_ls_usage = """
+
+ List content of a partitioned image
+
+ usage: wic ls <image>[:<partition>[<path>]] [--native-sysroot <path>]
+
+ This command outputs either a list of image partitions or the directory
+ contents of vfat and ext* partitions.
+
+ See 'wic help ls' for more detailed instructions.
+
+"""
+
+wic_ls_help = """
+
+NAME
+    wic ls - List contents of partitioned image or partition
+
+SYNOPSIS
+    wic ls <image>
+    wic ls <image>:<vfat or ext* partition>
+    wic ls <image>:<vfat or ext* partition><path>
+    wic ls <image>:<vfat or ext* partition><path> --native-sysroot <path>
+
+DESCRIPTION
+    This command lists either partitions of the image or directory contents
+    of vfat or ext* partitions.
+
+    In the first form, it lists the partitions of the image.
+    For example:
+        $ wic ls tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64.wic
+        Num     Start        End          Size      Fstype
+        1        1048576     24438783     23390208  fat16
+        2       25165824     50315263     25149440  ext4
+
+    The second and third forms list the directory contents of the partition:
+        $ wic ls tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64.wic:1
+        Volume in drive : is boot
+         Volume Serial Number is 2DF2-5F02
+        Directory for ::/
+
+        efi          <DIR>     2017-05-11  10:54
+        startup  nsh        26 2017-05-11  10:54
+        vmlinuz        6922288 2017-05-11  10:54
+                3 files           6 922 314 bytes
+                                 15 818 752 bytes free
+
+
+        $ wic ls tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64.wic:1/EFI/boot/
+        Volume in drive : is boot
+         Volume Serial Number is 2DF2-5F02
+        Directory for ::/EFI/boot
+
+        .            <DIR>     2017-05-11  10:54
+        ..           <DIR>     2017-05-11  10:54
+        grub     cfg       679 2017-05-11  10:54
+        bootx64  efi    571392 2017-05-11  10:54
+                4 files             572 071 bytes
+                                 15 818 752 bytes free
+
+    The -n option is used to specify the path to the native sysroot
+    containing the tools (parted and mtools) to use.
+
+"""
+
+wic_cp_usage = """
+
+ Copy files and directories to the vfat or ext* partition
+
+ usage: wic cp <src> <image>:<partition>[<path>] [--native-sysroot <path>]
+
+ This command copies local files or directories to the vfat or ext* partitions
+ of a partitioned image.
+
+ See 'wic help cp' for more detailed instructions.
+
+"""
+
+wic_cp_help = """
+
+NAME
+    wic cp - copy files and directories to the vfat or ext* partitions
+
+SYNOPSIS
+    wic cp <src> <image>:<partition>
+    wic cp <src> <image>:<partition><path>
+    wic cp <src> <image>:<partition><path> --native-sysroot <path>
+
+DESCRIPTION
+    This command copies files and directories to the vfat or ext* partition of
+    the partitioned image.
+
+    The first form copies a file or directory to the root directory of
+    the partition:
+        $ wic cp test.wks tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64.wic:1
+        $ wic ls tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64.wic:1
+        Volume in drive : is boot
+         Volume Serial Number is DB4C-FD4C
+        Directory for ::/
+
+        efi          <DIR>     2017-05-24  18:15
+        loader       <DIR>     2017-05-24  18:15
+        startup  nsh        26 2017-05-24  18:15
+        vmlinuz        6926384 2017-05-24  18:15
+        test     wks       628 2017-05-24  21:22
+                5 files           6 927 038 bytes
+                                 15 677 440 bytes free
+
+    The second form of the command copies a file or directory to the specified directory
+    on the partition:
+       $ wic cp test tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64.wic:1/efi/
+       $ wic ls tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64.wic:1/efi/
+       Volume in drive : is boot
+        Volume Serial Number is DB4C-FD4C
+       Directory for ::/efi
+
+       .            <DIR>     2017-05-24  18:15
+       ..           <DIR>     2017-05-24  18:15
+       boot         <DIR>     2017-05-24  18:15
+       test         <DIR>     2017-05-24  21:27
+               4 files                   0 bytes
+                                15 675 392 bytes free
+
+    The -n option is used to specify the path to the native sysroot
+    containing the tools (parted and mtools) to use.
+"""
+
+wic_rm_usage = """
+
+ Remove files or directories from the vfat or ext* partitions
+
+ usage: wic rm <image>:<partition><path> [--native-sysroot <path>]
+
+ This command removes files or directories from the vfat or ext* partitions of
+ the partitioned image.
+
+ See 'wic help rm' for more detailed instructions.
+
+"""
+
+wic_rm_help = """
+
+NAME
+    wic rm - remove files or directories from the vfat or ext* partitions
+
+SYNOPSIS
+    wic rm <image>:<partition><path>
+    wic rm <image>:<partition><path> --native-sysroot <path>
+
+DESCRIPTION
+    This command removes files or directories from the vfat or ext* partition of the
+    partitioned image:
+
+        $ wic ls ./tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64.wic:1
+        Volume in drive : is boot
+         Volume Serial Number is 11D0-DE21
+        Directory for ::/
+
+        libcom32 c32    186500 2017-06-02  15:15
+        libutil  c32     24148 2017-06-02  15:15
+        syslinux cfg       209 2017-06-02  15:15
+        vesamenu c32     27104 2017-06-02  15:15
+        vmlinuz        6926384 2017-06-02  15:15
+                5 files           7 164 345 bytes
+                                 16 582 656 bytes free
+
+        $ wic rm ./tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64.wic:1/libutil.c32
+
+        $ wic ls ./tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64.wic:1
+        Volume in drive : is boot
+         Volume Serial Number is 11D0-DE21
+        Directory for ::/
+
+        libcom32 c32    186500 2017-06-02  15:15
+        syslinux cfg       209 2017-06-02  15:15
+        vesamenu c32     27104 2017-06-02  15:15
+        vmlinuz        6926384 2017-06-02  15:15
+                4 files           7 140 197 bytes
+                                 16 607 232 bytes free
+
+    The -n option is used to specify the path to the native sysroot
+    containing the tools (parted and mtools) to use.
+"""
+
+wic_write_usage = """
+
+ Write image to a device
+
+ usage: wic write <image> <target device> [--expand [rules]] [--native-sysroot <path>]
+
+ This command writes a partitioned image to a target device (USB stick, SD card, etc.).
+
+ See 'wic help write' for more detailed instructions.
+
+"""
+
+wic_write_help = """
+
+NAME
+    wic write - write an image to a device
+
+SYNOPSIS
+    wic write <image> <target>
+    wic write <image> <target> --expand auto
+    wic write <image> <target> --expand 1:100M-2:300M
+    wic write <image> <target> --native-sysroot <path>
+
+DESCRIPTION
+    This command writes an image to a target device (USB stick, SD card, etc.).
+
+        $ wic write ./tmp/deploy/images/qemux86-64/core-image-minimal-qemux86-64.wic /dev/sdb
+
+    The --expand option is used to resize image partitions.
+    --expand auto expands partitions to occupy all free space available on the target device.
+    It's also possible to specify expansion rules in a format
+    <partition>:<size>[-<partition>:<size>...] for one or more partitions.
+    Specifying size 0 will keep the partition unmodified.
+    Note: Resizing the boot partition can result in a non-bootable image for non-EFI images.
+    It is recommended to use size 0 for the boot partition to keep the image bootable.
+
+    The --native-sysroot option is used to specify the path to the native sysroot
+    containing the tools (parted, resize2fs) to use.
+"""
+
 wic_plugins_help = """
 
 NAME
@@ -740,6 +970,8 @@
                             This option cannot be used with --fixed-size
                             option.
 
+         --part-name: This option is specific to wic. It specifies a name for GPT partitions.
+
          --part-type: This option is specific to wic. It specifies partition
                       type GUID for GPT partitions.
                       List of partition type GUIDS can be found here:
@@ -758,6 +990,12 @@
                       for the hardware that requires non-default partition system ids. The parameter
                       is a one byte long hex number, either with the 0x prefix or without it.
 
+         --mkfs-extraopts: This option specifies extra options to pass to the mkfs utility.
+                           Note that wic uses default options for some filesystems, for example
+                           '-S 512' for mkfs.fat or '-F -i 8192' for mkfs.ext. Those defaults will
+                           not take effect when --mkfs-extraopts is used; take this into account
+                           when using --mkfs-extraopts.
+
     * bootloader
 
       This command allows the user to specify various bootloader
@@ -795,3 +1033,11 @@
       .wks files.
 
 """
+
+wic_help_help = """
+NAME
+    wic help - display a help topic
+
+DESCRIPTION
+    Specify a help topic to display it. Topics are shown above.
+"""
diff --git a/import-layers/yocto-poky/scripts/lib/wic/ksparser.py b/import-layers/yocto-poky/scripts/lib/wic/ksparser.py
index d026caa..7850e81 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/ksparser.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/ksparser.py
@@ -114,7 +114,7 @@
     return arg
 
 class KickStart():
-    """"Kickstart parser implementation."""
+    """Kickstart parser implementation."""
 
     DEFAULT_EXTRA_SPACE = 10*1024
     DEFAULT_OVERHEAD_FACTOR = 1.3
@@ -139,10 +139,12 @@
         part.add_argument('--fstype', default='vfat',
                           choices=('ext2', 'ext3', 'ext4', 'btrfs',
                                    'squashfs', 'vfat', 'msdos', 'swap'))
+        part.add_argument('--mkfs-extraopts', default='')
         part.add_argument('--label')
         part.add_argument('--no-table', action='store_true')
         part.add_argument('--ondisk', '--ondrive', dest='disk', default='sda')
         part.add_argument("--overhead-factor", type=overheadtype)
+        part.add_argument('--part-name')
         part.add_argument('--part-type')
         part.add_argument('--rootfs-dir')
 
diff --git a/import-layers/yocto-poky/scripts/lib/wic/misc.py b/import-layers/yocto-poky/scripts/lib/wic/misc.py
new file mode 100644
index 0000000..ee888b4
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/lib/wic/misc.py
@@ -0,0 +1,263 @@
+# ex:ts=4:sw=4:sts=4:et
+# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
+#
+# Copyright (c) 2013, Intel Corporation.
+# All rights reserved.
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License version 2 as
+# published by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, write to the Free Software Foundation, Inc.,
+# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
+#
+# DESCRIPTION
+# This module provides a place to collect various wic-related utils
+# for the OpenEmbedded Image Tools.
+#
+# AUTHORS
+# Tom Zanussi <tom.zanussi (at] linux.intel.com>
+#
+"""Miscellaneous functions."""
+
+import logging
+import os
+import re
+import subprocess
+
+from collections import defaultdict
+from distutils import spawn
+
+from wic import WicError
+
+logger = logging.getLogger('wic')
+
+# executable -> recipe pairs for exec_native_cmd
+NATIVE_RECIPES = {"bmaptool": "bmap-tools",
+                  "grub-mkimage": "grub-efi",
+                  "isohybrid": "syslinux",
+                  "mcopy": "mtools",
+                  "mdel" : "mtools",
+                  "mdeltree" : "mtools",
+                  "mdir" : "mtools",
+                  "mkdosfs": "dosfstools",
+                  "mkisofs": "cdrtools",
+                  "mkfs.btrfs": "btrfs-tools",
+                  "mkfs.ext2": "e2fsprogs",
+                  "mkfs.ext3": "e2fsprogs",
+                  "mkfs.ext4": "e2fsprogs",
+                  "mkfs.vfat": "dosfstools",
+                  "mksquashfs": "squashfs-tools",
+                  "mkswap": "util-linux",
+                  "mmd": "mtools",
+                  "parted": "parted",
+                  "sfdisk": "util-linux",
+                  "sgdisk": "gptfdisk",
+                  "syslinux": "syslinux"
+                 }
+
+def runtool(cmdln_or_args):
+    """ wrapper for most of the subprocess calls
+    input:
+        cmdln_or_args: can be both args and cmdln str (shell=True)
+    return:
+        rc, output
+    """
+    if isinstance(cmdln_or_args, list):
+        cmd = cmdln_or_args[0]
+        shell = False
+    else:
+        import shlex
+        cmd = shlex.split(cmdln_or_args)[0]
+        shell = True
+
+    sout = subprocess.PIPE
+    serr = subprocess.STDOUT
+
+    try:
+        process = subprocess.Popen(cmdln_or_args, stdout=sout,
+                                   stderr=serr, shell=shell)
+        sout, serr = process.communicate()
+        # combine stdout and stderr, filter None out and decode
+        out = ''.join([out.decode('utf-8') for out in [sout, serr] if out])
+    except OSError as err:
+        if err.errno == 2:
+            # [Errno 2] No such file or directory
+            raise WicError('Cannot run command: %s, lost dependency?' % cmd)
+        else:
+            raise # relay
+
+    return process.returncode, out
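
A standalone sketch of the list-versus-string dispatch used by runtool above: an argument list runs without a shell, a plain command string runs with shell=True so wildcards work (simplified, without wic's error handling):

    import subprocess

    def run(cmd_or_args):
        shell = not isinstance(cmd_or_args, list)
        proc = subprocess.Popen(cmd_or_args, stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT, shell=shell)
        out, _ = proc.communicate()
        return proc.returncode, out.decode('utf-8')

    print(run(['echo', 'argument list']))   # shell=False
    print(run('echo command string'))       # shell=True
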
+
+def _exec_cmd(cmd_and_args, as_shell=False):
+    """
+    Execute command, catching stderr, stdout
+
+    Need to execute as_shell if the command uses wildcards
+    """
+    logger.debug("_exec_cmd: %s", cmd_and_args)
+    args = cmd_and_args.split()
+    logger.debug(args)
+
+    if as_shell:
+        ret, out = runtool(cmd_and_args)
+    else:
+        ret, out = runtool(args)
+    out = out.strip()
+    if ret != 0:
+        raise WicError("_exec_cmd: %s returned '%s' instead of 0\noutput: %s" % \
+                       (cmd_and_args, ret, out))
+
+    logger.debug("_exec_cmd: output for %s (rc = %d): %s",
+                 cmd_and_args, ret, out)
+
+    return ret, out
+
+
+def exec_cmd(cmd_and_args, as_shell=False):
+    """
+    Execute command, return output
+    """
+    return _exec_cmd(cmd_and_args, as_shell)[1]
+
+
+def exec_native_cmd(cmd_and_args, native_sysroot, pseudo=""):
+    """
+    Execute native command, catching stderr, stdout
+
+    Need to execute as_shell if the command uses wildcards
+
+    Always need to execute native commands as_shell
+    """
+    # The last element (-1) is used because there may be "export" commands prefixed to the actual command.
+    args = cmd_and_args.split(';')[-1].split()
+    logger.debug(args)
+
+    if pseudo:
+        cmd_and_args = pseudo + cmd_and_args
+
+    native_paths = "%s/sbin:%s/usr/sbin:%s/usr/bin" % \
+                   (native_sysroot, native_sysroot, native_sysroot)
+
+    native_cmd_and_args = "export PATH=%s:$PATH;%s" % \
+                   (native_paths, cmd_and_args)
+    logger.debug("exec_native_cmd: %s", native_cmd_and_args)
+
+    # If the command isn't in the native sysroot say we failed.
+    if spawn.find_executable(args[0], native_paths):
+        ret, out = _exec_cmd(native_cmd_and_args, True)
+    else:
+        ret = 127
+        out = "can't find native executable %s in %s" % (args[0], native_paths)
+
+    prog = args[0]
+    # shell command-not-found
+    if ret == 127 \
+       or (pseudo and ret == 1 and out == "Can't find '%s' in $PATH." % prog):
+        msg = "A native program %s required to build the image "\
+              "was not found (see details above).\n\n" % prog
+        recipe = NATIVE_RECIPES.get(prog)
+        if recipe:
+            msg += "Please make sure wic-tools have %s-native in its DEPENDS, "\
+                   "build it with 'bitbake wic-tools' and try again.\n" % recipe
+        else:
+            msg += "Wic failed to find a recipe to build native %s. Please "\
+                   "file a bug against wic.\n" % prog
+        raise WicError(msg)
+
+    return ret, out
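
The error path above maps a missing native executable to the recipe that provides it. A reduced sketch of that lookup (subset of the table, message wording abridged, missing_tool_hint is an illustrative name):

    NATIVE_RECIPES = {"mkfs.ext4": "e2fsprogs", "parted": "parted"}

    def missing_tool_hint(prog):
        recipe = NATIVE_RECIPES.get(prog)
        if recipe:
            return "add %s-native to the DEPENDS of wic-tools and rebuild it" % recipe
        return "no recipe known for %s; file a bug against wic" % prog

    print(missing_tool_hint("mkfs.ext4"))
    print(missing_tool_hint("some-unknown-tool"))
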
+
+BOOTDD_EXTRA_SPACE = 16384
+
+class BitbakeVars(defaultdict):
+    """
+    Container for Bitbake variables.
+    """
+    def __init__(self):
+        defaultdict.__init__(self, dict)
+
+        # default_image and vars_dir attributes should be set from outside
+        self.default_image = None
+        self.vars_dir = None
+
+    def _parse_line(self, line, image, matcher=re.compile(r"^([a-zA-Z0-9\-_+./~]+)=(.*)")):
+        """
+        Parse one line from bitbake -e output or from .env file.
+        Put result key-value pair into the storage.
+        """
+        if "=" not in line:
+            return
+        match = matcher.match(line)
+        if not match:
+            return
+        key, val = match.groups()
+        self[image][key] = val.strip('"')
+
+    def get_var(self, var, image=None, cache=True):
+        """
+        Get bitbake variable from 'bitbake -e' output or from .env file.
+        This is a lazy method, i.e. it runs bitbake or parses the file only
+        when a variable is requested. It also caches results.
+        """
+        if not image:
+            image = self.default_image
+
+        if image not in self:
+            if image and self.vars_dir:
+                fname = os.path.join(self.vars_dir, image + '.env')
+                if os.path.isfile(fname):
+                    # parse .env file
+                    with open(fname) as varsfile:
+                        for line in varsfile:
+                            self._parse_line(line, image)
+                else:
+                    print("Couldn't get bitbake variable from %s." % fname)
+                    print("File %s doesn't exist." % fname)
+                    return
+            else:
+                # Get bitbake -e output
+                cmd = "bitbake -e"
+                if image:
+                    cmd += " %s" % image
+
+                log_level = logger.getEffectiveLevel()
+                logger.setLevel(logging.INFO)
+                ret, lines = _exec_cmd(cmd)
+                logger.setLevel(log_level)
+
+                if ret:
+                    logger.error("Couldn't get '%s' output.", cmd)
+                    logger.error("Bitbake failed with error:\n%s\n", lines)
+                    return
+
+                # Parse bitbake -e output
+                for line in lines.split('\n'):
+                    self._parse_line(line, image)
+
+            # Make first image a default set of variables
+            if cache:
+                images = [key for key in self if key]
+                if len(images) == 1:
+                    self[None] = self[image]
+
+        result = self[image].get(var)
+        if not cache:
+            self.pop(image, None)
+
+        return result
+
+# Create BB_VARS singleton
+BB_VARS = BitbakeVars()
+
+def get_bitbake_var(var, image=None, cache=True):
+    """
+    Provide old get_bitbake_var API by wrapping
+    get_var method of BB_VARS singleton.
+    """
+    return BB_VARS.get_var(var, image, cache)
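
A minimal sketch of the line parsing done by BitbakeVars._parse_line above, applied to a couple of hypothetical 'bitbake -e' style lines:

    import re

    matcher = re.compile(r"^([a-zA-Z0-9\-_+./~]+)=(.*)")
    store = {}
    for line in ('IMAGE_ROOTFS="/work/image/rootfs"',
                 '# lines that do not look like assignments are skipped',
                 'DEPLOY_DIR_IMAGE="/work/deploy/images"'):
        match = matcher.match(line)
        if match:
            key, val = match.groups()
            store[key] = val.strip('"')

    print(store)   # {'IMAGE_ROOTFS': '/work/image/rootfs', 'DEPLOY_DIR_IMAGE': '/work/deploy/images'}
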
diff --git a/import-layers/yocto-poky/scripts/lib/wic/partition.py b/import-layers/yocto-poky/scripts/lib/wic/partition.py
index 939e667..84fe85d 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/partition.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/partition.py
@@ -26,10 +26,9 @@
 
 import logging
 import os
-import tempfile
 
 from wic import WicError
-from wic.utils.misc import exec_cmd, exec_native_cmd, get_bitbake_var
+from wic.misc import exec_cmd, exec_native_cmd, get_bitbake_var
 from wic.pluginbase import PluginMgr
 
 logger = logging.getLogger('wic')
@@ -47,10 +46,12 @@
         self.fsopts = args.fsopts
         self.fstype = args.fstype
         self.label = args.label
+        self.mkfs_extraopts = args.mkfs_extraopts
         self.mountpoint = args.mountpoint
         self.no_table = args.no_table
         self.num = None
         self.overhead_factor = args.overhead_factor
+        self.part_name = args.part_name
         self.part_type = args.part_type
         self.rootfs_dir = args.rootfs_dir
         self.size = args.size
@@ -205,7 +206,7 @@
         """
         p_prefix = os.environ.get("PSEUDO_PREFIX", "%s/usr" % native_sysroot)
         p_localstatedir = os.environ.get("PSEUDO_LOCALSTATEDIR",
-                                         "%s/../pseudo" % rootfs_dir)
+                                         "%s/../pseudo" %  get_bitbake_var("IMAGE_ROOTFS"))
         p_passwd = os.environ.get("PSEUDO_PASSWD", rootfs_dir)
         p_nosymlinkexp = os.environ.get("PSEUDO_NOSYMLINKEXP", "1")
         pseudo = "export PSEUDO_PREFIX=%s;" % p_prefix
@@ -257,14 +258,14 @@
         with open(rootfs, 'w') as sparse:
             os.ftruncate(sparse.fileno(), rootfs_size * 1024)
 
-        extra_imagecmd = "-i 8192"
+        extraopts = self.mkfs_extraopts or "-F -i 8192"
 
         label_str = ""
         if self.label:
             label_str = "-L %s" % self.label
 
-        mkfs_cmd = "mkfs.%s -F %s %s %s -d %s" % \
-            (self.fstype, extra_imagecmd, rootfs, label_str, rootfs_dir)
+        mkfs_cmd = "mkfs.%s %s %s %s -d %s" % \
+            (self.fstype, extraopts, rootfs, label_str, rootfs_dir)
         exec_native_cmd(mkfs_cmd, native_sysroot, pseudo=pseudo)
 
         mkfs_cmd = "fsck.%s -pvfD %s" % (self.fstype, rootfs)
@@ -290,8 +291,9 @@
         if self.label:
             label_str = "-L %s" % self.label
 
-        mkfs_cmd = "mkfs.%s -b %d -r %s %s %s" % \
-            (self.fstype, rootfs_size * 1024, rootfs_dir, label_str, rootfs)
+        mkfs_cmd = "mkfs.%s -b %d -r %s %s %s %s" % \
+            (self.fstype, rootfs_size * 1024, rootfs_dir, label_str,
+             self.mkfs_extraopts, rootfs)
         exec_native_cmd(mkfs_cmd, native_sysroot, pseudo=pseudo)
 
     def prepare_rootfs_msdos(self, rootfs, oe_builddir, rootfs_dir,
@@ -313,8 +315,10 @@
         if self.fstype == 'msdos':
             size_str = "-F 16" # FAT 16
 
-        dosfs_cmd = "mkdosfs %s -S 512 %s -C %s %d" % (label_str, size_str,
-                                                       rootfs, rootfs_size)
+        extraopts = self.mkfs_extraopts or '-S 512'
+
+        dosfs_cmd = "mkdosfs %s %s %s -C %s %d" % \
+                    (label_str, size_str, extraopts, rootfs, rootfs_size)
         exec_native_cmd(dosfs_cmd, native_sysroot)
 
         mcopy_cmd = "mcopy -i %s -s %s/* ::/" % (rootfs, rootfs_dir)
@@ -330,8 +334,9 @@
         """
         Prepare content for a squashfs rootfs partition.
         """
-        squashfs_cmd = "mksquashfs %s %s -noappend" % \
-                       (rootfs_dir, rootfs)
+        extraopts = self.mkfs_extraopts or '-noappend'
+        squashfs_cmd = "mksquashfs %s %s %s" % \
+                       (rootfs_dir, rootfs, extraopts)
         exec_native_cmd(squashfs_cmd, native_sysroot, pseudo=pseudo)
 
     def prepare_empty_partition_ext(self, rootfs, oe_builddir,
@@ -343,14 +348,14 @@
         with open(rootfs, 'w') as sparse:
             os.ftruncate(sparse.fileno(), size * 1024)
 
-        extra_imagecmd = "-i 8192"
+        extraopts = self.mkfs_extraopts or "-i 8192"
 
         label_str = ""
         if self.label:
             label_str = "-L %s" % self.label
 
         mkfs_cmd = "mkfs.%s -F %s %s %s" % \
-            (self.fstype, extra_imagecmd, label_str, rootfs)
+            (self.fstype, extraopts, label_str, rootfs)
         exec_native_cmd(mkfs_cmd, native_sysroot)
 
     def prepare_empty_partition_btrfs(self, rootfs, oe_builddir,
@@ -366,8 +371,9 @@
         if self.label:
             label_str = "-L %s" % self.label
 
-        mkfs_cmd = "mkfs.%s -b %d %s %s" % \
-            (self.fstype, self.size * 1024, label_str, rootfs)
+        mkfs_cmd = "mkfs.%s -b %d %s %s %s" % \
+                   (self.fstype, self.size * 1024, label_str,
+                    self.mkfs_extraopts, rootfs)
         exec_native_cmd(mkfs_cmd, native_sysroot)
 
     def prepare_empty_partition_msdos(self, rootfs, oe_builddir,
@@ -385,8 +391,11 @@
         if self.fstype == 'msdos':
             size_str = "-F 16" # FAT 16
 
-        dosfs_cmd = "mkdosfs %s -S 512 %s -C %s %d" % (label_str, size_str,
-                                                       rootfs, blocks)
+        extraopts = self.mkfs_extraopts or '-S 512'
+
+        dosfs_cmd = "mkdosfs %s %s %s -C %s %d" % \
+                    (label_str, extraopts, size_str, rootfs, blocks)
+
         exec_native_cmd(dosfs_cmd, native_sysroot)
 
         chmod_cmd = "chmod 644 %s" % rootfs
diff --git a/import-layers/yocto-poky/scripts/lib/wic/pluginbase.py b/import-layers/yocto-poky/scripts/lib/wic/pluginbase.py
index fb3d179..c009820 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/pluginbase.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/pluginbase.py
@@ -24,7 +24,7 @@
 from importlib.machinery import SourceFileLoader
 
 from wic import WicError
-from wic.utils.misc import get_bitbake_var
+from wic.misc import get_bitbake_var
 
 PLUGIN_TYPES = ["imager", "source"]
 
@@ -137,4 +137,3 @@
         'prepares' the partition to be incorporated into the image.
         """
         logger.debug("SourcePlugin: do_prepare_partition: part: %s", part)
-
diff --git a/import-layers/yocto-poky/scripts/lib/wic/plugins/imager/direct.py b/import-layers/yocto-poky/scripts/lib/wic/plugins/imager/direct.py
index f2e6127..da1c061 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/plugins/imager/direct.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/plugins/imager/direct.py
@@ -26,17 +26,20 @@
 
 import logging
 import os
+import random
 import shutil
 import tempfile
 import uuid
 
 from time import strftime
 
+from oe.path import copyhardlinktree
+
 from wic import WicError
 from wic.filemap import sparse_copy
 from wic.ksparser import KickStart, KickStartError
 from wic.pluginbase import PluginMgr, ImagerPlugin
-from wic.utils.misc import get_bitbake_var, exec_cmd, exec_native_cmd
+from wic.misc import get_bitbake_var, exec_cmd, exec_native_cmd
 
 logger = logging.getLogger('wic')
 
@@ -68,6 +71,7 @@
         self.outdir = options.outdir
         self.compressor = options.compressor
         self.bmap = options.bmap
+        self.no_fstab_update = options.no_fstab_update
 
         self.name = "%s-%s" % (os.path.splitext(os.path.basename(wks_file))[0],
                                strftime("%Y%m%d%H%M"))
@@ -115,24 +119,33 @@
             fstab_lines = fstab.readlines()
 
         if self._update_fstab(fstab_lines, self.parts):
-            shutil.copyfile(fstab_path, fstab_path + ".orig")
+            # Copy the rootfs dir to the workdir to update fstab there,
+            # as the original rootfs may be used by other tasks and must not be modified
+            new_rootfs = os.path.realpath(os.path.join(self.workdir, "rootfs_copy"))
+            copyhardlinktree(image_rootfs, new_rootfs)
+            fstab_path = os.path.join(new_rootfs, 'etc/fstab')
+
+            os.unlink(fstab_path)
 
             with open(fstab_path, "w") as fstab:
                 fstab.writelines(fstab_lines)
 
-            return fstab_path
+            return new_rootfs
 
     def _update_fstab(self, fstab_lines, parts):
         """Assume partition order same as in wks"""
         updated = False
         for part in parts:
             if not part.realnum or not part.mountpoint \
-               or part.mountpoint in ("/", "/boot"):
+               or part.mountpoint == "/":
                 continue
 
-            # mmc device partitions are named mmcblk0p1, mmcblk0p2..
-            prefix = 'p' if  part.disk.startswith('mmcblk') else ''
-            device_name = "/dev/%s%s%d" % (part.disk, prefix, part.realnum)
+            if part.use_uuid:
+                device_name = "PARTUUID=%s" % part.uuid
+            else:
+                # mmc device partitions are named mmcblk0p1, mmcblk0p2..
+                prefix = 'p' if  part.disk.startswith('mmcblk') else ''
+                device_name = "/dev/%s%s%d" % (part.disk, prefix, part.realnum)
 
             opts = part.fsopts if part.fsopts else "defaults"
             line = "\t".join([device_name, part.mountpoint, part.fstype,
@@ -156,7 +169,13 @@
         filesystems from the artifacts directly and combine them into
         a partitioned image.
         """
-        fstab_path = self._write_fstab(self.rootfs_dir.get("ROOTFS_DIR"))
+        if self.no_fstab_update:
+            new_rootfs = None
+        else:
+            new_rootfs = self._write_fstab(self.rootfs_dir.get("ROOTFS_DIR"))
+        if new_rootfs:
+            # rootfs was copied to update fstab
+            self.rootfs_dir['ROOTFS_DIR'] = new_rootfs
 
         for part in self.parts:
             # get rootfs size from bitbake variable if it's not set in .ks file
@@ -173,10 +192,6 @@
                         part.size = int(round(float(rsize_bb)))
 
         self._image.prepare(self)
-
-        if fstab_path:
-            shutil.move(fstab_path + ".orig", fstab_path)
-
         self._image.layout_partitions()
         self._image.create()
 
@@ -205,8 +220,10 @@
         # Generate .bmap
         if self.bmap:
             logger.debug("Generating bmap file for %s", disk_name)
-            exec_native_cmd("bmaptool create %s -o %s.bmap" % (full_path, full_path),
-                            self.native_sysroot)
+            python = os.path.join(self.native_sysroot, 'usr/bin/python3-native/python3')
+            bmaptool = os.path.join(self.native_sysroot, 'usr/bin/bmaptool')
+            exec_native_cmd("%s %s create %s -o %s.bmap" % \
+                            (python, bmaptool, full_path, full_path), self.native_sysroot)
         # Compress the image
         if self.compressor:
             logger.debug("Compressing disk %s with %s", disk_name, self.compressor)
@@ -296,7 +313,7 @@
                           # all partitions (in bytes)
         self.ptable_format = ptable_format  # Partition table format
         # Disk system identifier
-        self.identifier = int.from_bytes(os.urandom(4), 'little')
+        self.identifier = random.SystemRandom().randint(1, 0xffffffff)
 
         self.partitions = partitions
         self.partimages = []
@@ -312,7 +329,7 @@
                 part.realnum = 0
             else:
                 realnum += 1
-                if self.ptable_format == 'msdos' and realnum > 3:
+                if self.ptable_format == 'msdos' and realnum > 3 and len(partitions) > 4:
                     part.realnum = realnum + 1
                     continue
                 part.realnum = realnum
@@ -352,6 +369,10 @@
         for num in range(len(self.partitions)):
             part = self.partitions[num]
 
+            if self.ptable_format == 'msdos' and part.part_name:
+                raise WicError("setting custom partition name is not " \
+                               "implemented for msdos partitions")
+
             if self.ptable_format == 'msdos' and part.part_type:
                 # The --part-type can also be implemented for MBR partitions,
                 # in which case it would map to the 1-byte "partition type"
@@ -505,6 +526,13 @@
             self._create_partition(self.path, part.type,
                                    parted_fs_type, part.start, part.size_sec)
 
+            if part.part_name:
+                logger.debug("partition %d: set name to %s",
+                             part.num, part.part_name)
+                exec_native_cmd("sgdisk --change-name=%d:%s %s" % \
+                                         (part.num, part.part_name,
+                                          self.path), self.native_sysroot)
+
             if part.part_type:
                 logger.debug("partition %d: set type UID to %s",
                              part.num, part.part_type)
@@ -550,7 +578,7 @@
             source = part.source_file
             if source:
                 # install source_file contents into a partition
-                sparse_copy(source, self.path, part.start * self.sector_size)
+                sparse_copy(source, self.path, seek=part.start * self.sector_size)
 
                 logger.debug("Installed %s in partition %d, sectors %d-%d, "
                              "size %d sectors", source, part.num, part.start,
diff --git a/import-layers/yocto-poky/scripts/lib/wic/plugins/source/bootimg-efi.py b/import-layers/yocto-poky/scripts/lib/wic/plugins/source/bootimg-efi.py
index 9879cb9..4c4f36a 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/plugins/source/bootimg-efi.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/plugins/source/bootimg-efi.py
@@ -31,8 +31,8 @@
 from wic import WicError
 from wic.engine import get_custom_config
 from wic.pluginbase import SourcePlugin
-from wic.utils.misc import (exec_cmd, exec_native_cmd, get_bitbake_var,
-                            BOOTDD_EXTRA_SPACE)
+from wic.misc import (exec_cmd, exec_native_cmd,
+                      get_bitbake_var, BOOTDD_EXTRA_SPACE)
 
 logger = logging.getLogger('wic')
 
diff --git a/import-layers/yocto-poky/scripts/lib/wic/plugins/source/bootimg-partition.py b/import-layers/yocto-poky/scripts/lib/wic/plugins/source/bootimg-partition.py
index 13fddbd..67e5498 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/plugins/source/bootimg-partition.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/plugins/source/bootimg-partition.py
@@ -31,7 +31,7 @@
 
 from wic import WicError
 from wic.pluginbase import SourcePlugin
-from wic.utils.misc import exec_cmd, get_bitbake_var
+from wic.misc import exec_cmd, get_bitbake_var
 
 logger = logging.getLogger('wic')
 
@@ -54,7 +54,7 @@
         - sets up a vfat partition
         - copies all files listed in IMAGE_BOOT_FILES variable
         """
-        hdddir = "%s/boot" % cr_workdir
+        hdddir = "%s/boot.%d" % (cr_workdir, part.lineno)
         install_cmd = "install -d %s" % hdddir
         exec_cmd(install_cmd)
 
@@ -65,10 +65,19 @@
 
         logger.debug('Kernel dir: %s', bootimg_dir)
 
-        boot_files = get_bitbake_var("IMAGE_BOOT_FILES")
+        boot_files = None
+        for (fmt, id) in (("_uuid-%s", part.uuid), ("_label-%s", part.label), (None, None)):
+            if fmt:
+                var = fmt % id
+            else:
+                var = ""
 
-        if not boot_files:
-            raise WicError('No boot files defined, IMAGE_BOOT_FILES unset')
+            boot_files = get_bitbake_var("IMAGE_BOOT_FILES" + var)
+            if boot_files is not None:
+                break
+
+        if boot_files is None:
+            raise WicError('No boot files defined, IMAGE_BOOT_FILES unset for entry #%d' % part.lineno)
 
         logger.debug('Boot files: %s', boot_files)
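
The lookup added above lets IMAGE_BOOT_FILES be overridden per partition, keyed first by uuid, then by label, before falling back to the plain variable. A simplified sketch of that order (get_var stands in for get_bitbake_var; pick_boot_files is an illustrative name):

    def pick_boot_files(get_var, uuid=None, label=None):
        for suffix in ("_uuid-%s" % uuid, "_label-%s" % label, ""):
            value = get_var("IMAGE_BOOT_FILES" + suffix)
            if value is not None:
                return value
        return None

    layer_vars = {"IMAGE_BOOT_FILES_label-boot": "u-boot.bin zImage"}
    print(pick_boot_files(layer_vars.get, label="boot"))   # u-boot.bin zImage
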
 
diff --git a/import-layers/yocto-poky/scripts/lib/wic/plugins/source/bootimg-pcbios.py b/import-layers/yocto-poky/scripts/lib/wic/plugins/source/bootimg-pcbios.py
index 5890c12..56da468 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/plugins/source/bootimg-pcbios.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/plugins/source/bootimg-pcbios.py
@@ -29,10 +29,9 @@
 
 from wic import WicError
 from wic.engine import get_custom_config
-from wic.utils import runner
 from wic.pluginbase import SourcePlugin
-from wic.utils.misc import (exec_cmd, exec_native_cmd,
-                            get_bitbake_var, BOOTDD_EXTRA_SPACE)
+from wic.misc import (exec_cmd, exec_native_cmd,
+                      get_bitbake_var, BOOTDD_EXTRA_SPACE)
 
 logger = logging.getLogger('wic')
 
@@ -46,10 +45,9 @@
     @classmethod
     def _get_bootimg_dir(cls, bootimg_dir, dirname):
         """
-        Check if dirname exists in default bootimg_dir or
-        in wic-tools STAGING_DIR.
+        Check if dirname exists in the default bootimg_dir or in STAGING_DATADIR.
         """
-        for result in (bootimg_dir, get_bitbake_var("STAGING_DATADIR", "wic-tools")):
+        for result in (bootimg_dir, get_bitbake_var("STAGING_DATADIR")):
             if os.path.exists("%s/%s" % (result, dirname)):
                 return result
 
@@ -186,7 +184,7 @@
                      extra_blocks, part.mountpoint, blocks)
 
         # dosfs image, created by mkdosfs
-        bootimg = "%s/boot.img" % cr_workdir
+        bootimg = "%s/boot%s.img" % (cr_workdir, part.lineno)
 
         dosfs_cmd = "mkdosfs -n boot -S 512 -C %s %d" % (bootimg, blocks)
         exec_native_cmd(dosfs_cmd, native_sysroot)
diff --git a/import-layers/yocto-poky/scripts/lib/wic/plugins/source/isoimage-isohybrid.py b/import-layers/yocto-poky/scripts/lib/wic/plugins/source/isoimage-isohybrid.py
index 1ceba62..d6bd3bf 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/plugins/source/isoimage-isohybrid.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/plugins/source/isoimage-isohybrid.py
@@ -29,7 +29,7 @@
 from wic import WicError
 from wic.engine import get_custom_config
 from wic.pluginbase import SourcePlugin
-from wic.utils.misc import exec_cmd, exec_native_cmd, get_bitbake_var
+from wic.misc import exec_cmd, exec_native_cmd, get_bitbake_var
 
 logger = logging.getLogger('wic')
 
@@ -95,7 +95,7 @@
             cfg.write(syslinux_conf)
 
     @classmethod
-    def do_configure_grubefi(cls, part, creator, cr_workdir):
+    def do_configure_grubefi(cls, part, creator, target_dir):
         """
         Create loader-specific (grub-efi) config
         """
@@ -109,7 +109,7 @@
                 raise WicError("configfile is specified "
                                "but failed to get it from %s", configfile)
         else:
-            splash = os.path.join(cr_workdir, "EFI/boot/splash.jpg")
+            splash = os.path.join(target_dir, "splash.jpg")
             if os.path.exists(splash):
                 splashline = "menu background splash.jpg"
             else:
@@ -137,9 +137,10 @@
             if splashline:
                 grubefi_conf += "%s\n" % splashline
 
-        logger.debug("Writing grubefi config %s/EFI/BOOT/grub.cfg", cr_workdir)
+        cfg_path = os.path.join(target_dir, "grub.cfg")
+        logger.debug("Writing grubefi config %s", cfg_path)
 
-        with open("%s/EFI/BOOT/grub.cfg" % cr_workdir, "w") as cfg:
+        with open(cfg_path, "w") as cfg:
             cfg.write(grubefi_conf)
 
     @staticmethod
@@ -162,13 +163,14 @@
             if not image_type:
                 raise WicError("Couldn't find INITRAMFS_FSTYPES, exiting.")
 
-            target_arch = get_bitbake_var("TRANSLATED_TARGET_ARCH")
-            if not target_arch:
-                raise WicError("Couldn't find TRANSLATED_TARGET_ARCH, exiting.")
+            machine = os.path.basename(initrd_dir)
 
-            initrd = glob.glob('%s/%s*%s.%s' % (initrd_dir, image_name, target_arch, image_type))[0]
+            pattern = '%s/%s*%s.%s' % (initrd_dir, image_name, machine, image_type)
+            files = glob.glob(pattern)
+            if files:
+                initrd = files[0]
 
-        if not os.path.exists(initrd):
+        if not initrd or not os.path.exists(initrd):
             # Create initrd from rootfs directory
             initrd = "%s/initrd.cpio.gz" % cr_workdir
             initrd_dir = "%s/INITRD" % cr_workdir
@@ -206,8 +208,8 @@
         """
         isodir = "%s/ISO/" % cr_workdir
 
-        if os.path.exists(cr_workdir):
-            shutil.rmtree(cr_workdir)
+        if os.path.exists(isodir):
+            shutil.rmtree(isodir)
 
         install_cmd = "install -d %s " % isodir
         exec_cmd(install_cmd)
@@ -312,20 +314,13 @@
 
         #Create bootloader for efi boot
         try:
+            target_dir = "%s/EFI/BOOT" % isodir
+            if os.path.exists(target_dir):
+                shutil.rmtree(target_dir)
+
+            os.makedirs(target_dir)
+
             if source_params['loader'] == 'grub-efi':
-                # Builds grub.cfg if ISODIR didn't exist or
-                # didn't contains grub.cfg
-                bootimg_dir = img_iso_dir
-                if not os.path.exists("%s/EFI/BOOT" % bootimg_dir):
-                    bootimg_dir = "%s/bootimg" % cr_workdir
-                    if os.path.exists(bootimg_dir):
-                        shutil.rmtree(bootimg_dir)
-                    install_cmd = "install -d %s/EFI/BOOT" % bootimg_dir
-                    exec_cmd(install_cmd)
-
-                if not os.path.isfile("%s/EFI/BOOT/boot.cfg" % bootimg_dir):
-                    cls.do_configure_grubefi(part, creator, bootimg_dir)
-
                 # Builds bootx64.efi/bootia32.efi if ISODIR didn't exist or
                 # didn't contains it
                 target_arch = get_bitbake_var("TARGET_SYS")
@@ -333,37 +328,23 @@
                     raise WicError("Coludn't find target architecture")
 
                 if re.match("x86_64", target_arch):
-                    grub_target = 'x86_64-efi'
-                    grub_image = "bootx64.efi"
+                    grub_image = "grub-efi-bootx64.efi"
                 elif re.match('i.86', target_arch):
-                    grub_target = 'i386-efi'
-                    grub_image = "bootia32.efi"
+                    grub_image = "grub-efi-bootia32.efi"
                 else:
                     raise WicError("grub-efi is incompatible with target %s" %
                                    target_arch)
 
-                if not os.path.isfile("%s/EFI/BOOT/%s" \
-                                % (bootimg_dir, grub_image)):
-                    grub_path = get_bitbake_var("STAGING_LIBDIR", "wic-tools")
-                    if not grub_path:
-                        raise WicError("Couldn't find STAGING_LIBDIR, exiting.")
+                grub_target = os.path.join(target_dir, grub_image)
+                if not os.path.isfile(grub_target):
+                    grub_src = os.path.join(deploy_dir, grub_image)
+                    if not os.path.exists(grub_src):
+                        raise WicError("Grub loader %s is not found in %s. "
+                                       "Please build grub-efi first" % (grub_image, deploy_dir))
+                    shutil.copy(grub_src, grub_target)
 
-                    grub_core = "%s/grub/%s" % (grub_path, grub_target)
-                    if not os.path.exists(grub_core):
-                        raise WicError("Please build grub-efi first")
-
-                    grub_cmd = "grub-mkimage -p '/EFI/BOOT' "
-                    grub_cmd += "-d %s "  % grub_core
-                    grub_cmd += "-O %s -o %s/EFI/BOOT/%s " \
-                                % (grub_target, bootimg_dir, grub_image)
-                    grub_cmd += "part_gpt part_msdos ntfs ntfscomp fat ext2 "
-                    grub_cmd += "normal chain boot configfile linux multiboot "
-                    grub_cmd += "search efi_gop efi_uga font gfxterm gfxmenu "
-                    grub_cmd += "terminal minicmd test iorw loadenv echo help "
-                    grub_cmd += "reboot serial terminfo iso9660 loopback tar "
-                    grub_cmd += "memdisk ls search_fs_uuid udf btrfs xfs lvm "
-                    grub_cmd += "reiserfs ata "
-                    exec_native_cmd(grub_cmd, native_sysroot)
+                if not os.path.isfile(os.path.join(target_dir, "boot.cfg")):
+                    cls.do_configure_grubefi(part, creator, target_dir)
 
             else:
                 raise WicError("unrecognized bootimg-efi loader: %s" %
@@ -371,15 +352,6 @@
         except KeyError:
             raise WicError("bootimg-efi requires a loader, none specified")
 
-        if os.path.exists("%s/EFI/BOOT" % isodir):
-            shutil.rmtree("%s/EFI/BOOT" % isodir)
-
-        shutil.copytree(bootimg_dir+"/EFI/BOOT", isodir+"/EFI/BOOT")
-
-        # If exists, remove cr_workdir/bootimg temporary folder
-        if os.path.exists("%s/bootimg" % cr_workdir):
-            shutil.rmtree("%s/bootimg" % cr_workdir)
-
         # Create efi.img that contains bootloader files for EFI booting
         # if ISODIR didn't exist or didn't contains it
         if os.path.isfile("%s/efi.img" % img_iso_dir):
@@ -413,7 +385,7 @@
             exec_cmd(chmod_cmd)
 
         # Prepare files for legacy boot
-        syslinux_dir = get_bitbake_var("STAGING_DATADIR", "wic-tools")
+        syslinux_dir = get_bitbake_var("STAGING_DATADIR")
         if not syslinux_dir:
             raise WicError("Couldn't find STAGING_DATADIR, exiting.")
 
diff --git a/import-layers/yocto-poky/scripts/lib/wic/plugins/source/rawcopy.py b/import-layers/yocto-poky/scripts/lib/wic/plugins/source/rawcopy.py
index e1c4f5e..e86398a 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/plugins/source/rawcopy.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/plugins/source/rawcopy.py
@@ -20,7 +20,7 @@
 
 from wic import WicError
 from wic.pluginbase import SourcePlugin
-from wic.utils.misc import exec_cmd, get_bitbake_var
+from wic.misc import exec_cmd, get_bitbake_var
 from wic.filemap import sparse_copy
 
 logger = logging.getLogger('wic')
@@ -32,6 +32,25 @@
 
     name = 'rawcopy'
 
+    @staticmethod
+    def do_image_label(fstype, dst, label):
+        if fstype.startswith('ext'):
+            cmd = 'tune2fs -L %s %s' % (label, dst)
+        elif fstype in ('msdos', 'vfat'):
+            cmd = 'dosfslabel %s %s' % (dst, label)
+        elif fstype == 'btrfs':
+            cmd = 'btrfs filesystem label %s %s' % (dst, label)
+        elif fstype == 'swap':
+            cmd = 'mkswap -L %s %s' % (label, dst)
+        elif fstype == 'squashfs':
+            raise WicError("It's not possible to update a squashfs "
+                           "filesystem label '%s'" % (label))
+        else:
+            raise WicError("Cannot update filesystem label: "
+                           "Unknown fstype: '%s'" % (fstype))
+
+        exec_cmd(cmd)
+
     @classmethod
     def do_prepare_partition(cls, part, source_params, cr, cr_workdir,
                              oe_builddir, bootimg_dir, kernel_dir,
@@ -66,4 +85,7 @@
         if filesize > part.size:
             part.size = filesize
 
+        if part.label:
+            RawCopyPlugin.do_image_label(part.fstype, dst, part.label)
+
         part.source_file = dst
diff --git a/import-layers/yocto-poky/scripts/lib/wic/plugins/source/rootfs.py b/import-layers/yocto-poky/scripts/lib/wic/plugins/source/rootfs.py
index f2e2ca8..aec720f 100644
--- a/import-layers/yocto-poky/scripts/lib/wic/plugins/source/rootfs.py
+++ b/import-layers/yocto-poky/scripts/lib/wic/plugins/source/rootfs.py
@@ -28,12 +28,13 @@
 import logging
 import os
 import shutil
+import sys
 
 from oe.path import copyhardlinktree
 
 from wic import WicError
 from wic.pluginbase import SourcePlugin
-from wic.utils.misc import get_bitbake_var, exec_cmd
+from wic.misc import get_bitbake_var
 
 logger = logging.getLogger('wic')
 
@@ -47,7 +48,7 @@
     @staticmethod
     def __get_rootfs_dir(rootfs_dir):
         if os.path.isdir(rootfs_dir):
-            return rootfs_dir
+            return os.path.realpath(rootfs_dir)
 
         image_rootfs_dir = get_bitbake_var("IMAGE_ROOTFS", rootfs_dir)
         if not os.path.isdir(image_rootfs_dir):
@@ -55,7 +56,7 @@
                            "named %s has been found at %s, exiting." %
                            (rootfs_dir, image_rootfs_dir))
 
-        return image_rootfs_dir
+        return os.path.realpath(image_rootfs_dir)
 
     @classmethod
     def do_prepare_partition(cls, part, source_params, cr, cr_workdir,
@@ -80,25 +81,25 @@
                 raise WicError("Couldn't find --rootfs-dir=%s connection or "
                                "it is not a valid path, exiting" % part.rootfs_dir)
 
-        real_rootfs_dir = cls.__get_rootfs_dir(rootfs_dir)
+        part.rootfs_dir = cls.__get_rootfs_dir(rootfs_dir)
 
+        new_rootfs = None
         # Handle excluded paths.
         if part.exclude_path is not None:
             # We need a new rootfs directory we can delete files from. Copy to
             # workdir.
-            new_rootfs = os.path.realpath(os.path.join(cr_workdir, "rootfs"))
+            new_rootfs = os.path.realpath(os.path.join(cr_workdir, "rootfs%d" % part.lineno))
 
             if os.path.lexists(new_rootfs):
                 shutil.rmtree(os.path.join(new_rootfs))
 
-            copyhardlinktree(real_rootfs_dir, new_rootfs)
-
-            real_rootfs_dir = new_rootfs
+            copyhardlinktree(part.rootfs_dir, new_rootfs)
 
             for orig_path in part.exclude_path:
                 path = orig_path
                 if os.path.isabs(path):
-                    msger.error("Must be relative: --exclude-path=%s" % orig_path)
+                    logger.error("Must be relative: --exclude-path=%s" % orig_path)
+                    sys.exit(1)
 
                 full_path = os.path.realpath(os.path.join(new_rootfs, path))
 
@@ -106,7 +107,8 @@
                 # because doing so could be quite disastrous (we will delete the
                 # directory).
                 if not full_path.startswith(new_rootfs):
-                    msger.error("'%s' points to a path outside the rootfs" % orig_path)
+                    logger.error("'%s' points to a path outside the rootfs" % orig_path)
+                    sys.exit(1)
 
                 if path.endswith(os.sep):
                     # Delete content only.
@@ -120,6 +122,5 @@
                     # Delete whole directory.
                     shutil.rmtree(full_path)
 
-        part.rootfs_dir = real_rootfs_dir
         part.prepare_rootfs(cr_workdir, oe_builddir,
-                            real_rootfs_dir, native_sysroot)
+                            new_rootfs or part.rootfs_dir, native_sysroot)
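
The --exclude-path handling above now reports errors through the logger and exits; the safety check itself is small enough to show as a standalone sketch (checked_exclude_path and the paths are illustrative):

    import os

    def checked_exclude_path(new_rootfs, path):
        if os.path.isabs(path):
            raise ValueError("Must be relative: --exclude-path=%s" % path)
        # Resolve the path inside the copied rootfs and refuse anything that
        # escapes it before deleting content.
        full_path = os.path.realpath(os.path.join(new_rootfs, path))
        if not full_path.startswith(new_rootfs):
            raise ValueError("'%s' points to a path outside the rootfs" % path)
        return full_path

    print(checked_exclude_path("/tmp/work/rootfs1", "var/cache/"))
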
diff --git a/import-layers/yocto-poky/scripts/lib/wic/utils/__init__.py b/import-layers/yocto-poky/scripts/lib/wic/utils/__init__.py
deleted file mode 100644
index e69de29..0000000
--- a/import-layers/yocto-poky/scripts/lib/wic/utils/__init__.py
+++ /dev/null
diff --git a/import-layers/yocto-poky/scripts/lib/wic/utils/misc.py b/import-layers/yocto-poky/scripts/lib/wic/utils/misc.py
deleted file mode 100644
index 37e0ad6..0000000
--- a/import-layers/yocto-poky/scripts/lib/wic/utils/misc.py
+++ /dev/null
@@ -1,230 +0,0 @@
-# ex:ts=4:sw=4:sts=4:et
-# -*- tab-width: 4; c-basic-offset: 4; indent-tabs-mode: nil -*-
-#
-# Copyright (c) 2013, Intel Corporation.
-# All rights reserved.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License version 2 as
-# published by the Free Software Foundation.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License along
-# with this program; if not, write to the Free Software Foundation, Inc.,
-# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
-#
-# DESCRIPTION
-# This module provides a place to collect various wic-related utils
-# for the OpenEmbedded Image Tools.
-#
-# AUTHORS
-# Tom Zanussi <tom.zanussi (at] linux.intel.com>
-#
-"""Miscellaneous functions."""
-
-import logging
-import os
-import re
-
-from collections import defaultdict
-from distutils import spawn
-
-from wic import WicError
-from wic.utils import runner
-
-logger = logging.getLogger('wic')
-
-# executable -> recipe pairs for exec_native_cmd
-NATIVE_RECIPES = {"bmaptool": "bmap-tools",
-                  "grub-mkimage": "grub-efi",
-                  "isohybrid": "syslinux",
-                  "mcopy": "mtools",
-                  "mkdosfs": "dosfstools",
-                  "mkisofs": "cdrtools",
-                  "mkfs.btrfs": "btrfs-tools",
-                  "mkfs.ext2": "e2fsprogs",
-                  "mkfs.ext3": "e2fsprogs",
-                  "mkfs.ext4": "e2fsprogs",
-                  "mkfs.vfat": "dosfstools",
-                  "mksquashfs": "squashfs-tools",
-                  "mkswap": "util-linux",
-                  "mmd": "syslinux",
-                  "parted": "parted",
-                  "sfdisk": "util-linux",
-                  "sgdisk": "gptfdisk",
-                  "syslinux": "syslinux"
-                 }
-
-def _exec_cmd(cmd_and_args, as_shell=False):
-    """
-    Execute command, catching stderr, stdout
-
-    Need to execute as_shell if the command uses wildcards
-    """
-    logger.debug("_exec_cmd: %s", cmd_and_args)
-    args = cmd_and_args.split()
-    logger.debug(args)
-
-    if as_shell:
-        ret, out = runner.runtool(cmd_and_args)
-    else:
-        ret, out = runner.runtool(args)
-    out = out.strip()
-    if ret != 0:
-        raise WicError("_exec_cmd: %s returned '%s' instead of 0\noutput: %s" % \
-                       (cmd_and_args, ret, out))
-
-    logger.debug("_exec_cmd: output for %s (rc = %d): %s",
-                 cmd_and_args, ret, out)
-
-    return ret, out
-
-
-def exec_cmd(cmd_and_args, as_shell=False):
-    """
-    Execute command, return output
-    """
-    return _exec_cmd(cmd_and_args, as_shell)[1]
-
-
-def exec_native_cmd(cmd_and_args, native_sysroot, pseudo=""):
-    """
-    Execute native command, catching stderr, stdout
-
-    Need to execute as_shell if the command uses wildcards
-
-    Always need to execute native commands as_shell
-    """
-    # The reason -1 is used is because there may be "export" commands.
-    args = cmd_and_args.split(';')[-1].split()
-    logger.debug(args)
-
-    if pseudo:
-        cmd_and_args = pseudo + cmd_and_args
-
-    wtools_sysroot = get_bitbake_var("RECIPE_SYSROOT_NATIVE", "wic-tools")
-
-    native_paths = \
-            "%s/sbin:%s/usr/sbin:%s/usr/bin:%s/sbin:%s/usr/sbin:%s/usr/bin" % \
-            (wtools_sysroot, wtools_sysroot, wtools_sysroot,
-             native_sysroot, native_sysroot, native_sysroot)
-    native_cmd_and_args = "export PATH=%s:$PATH;%s" % \
-                           (native_paths, cmd_and_args)
-    logger.debug("exec_native_cmd: %s", native_cmd_and_args)
-
-    # If the command isn't in the native sysroot say we failed.
-    if spawn.find_executable(args[0], native_paths):
-        ret, out = _exec_cmd(native_cmd_and_args, True)
-    else:
-        ret = 127
-        out = "can't find native executable %s in %s" % (args[0], native_paths)
-
-    prog = args[0]
-    # shell command-not-found
-    if ret == 127 \
-       or (pseudo and ret == 1 and out == "Can't find '%s' in $PATH." % prog):
-        msg = "A native program %s required to build the image "\
-              "was not found (see details above).\n\n" % prog
-        recipe = NATIVE_RECIPES.get(prog)
-        if recipe:
-            msg += "Please make sure wic-tools have %s-native in its DEPENDS, bake it with 'bitbake wic-tools' "\
-                   "and try again.\n" % recipe
-        else:
-            msg += "Wic failed to find a recipe to build native %s. Please "\
-                   "file a bug against wic.\n" % prog
-        raise WicError(msg)
-
-    return ret, out
-
-BOOTDD_EXTRA_SPACE = 16384
-
-class BitbakeVars(defaultdict):
-    """
-    Container for Bitbake variables.
-    """
-    def __init__(self):
-        defaultdict.__init__(self, dict)
-
-        # default_image and vars_dir attributes should be set from outside
-        self.default_image = None
-        self.vars_dir = None
-
-    def _parse_line(self, line, image, matcher=re.compile(r"^([a-zA-Z0-9\-_+./~]+)=(.*)")):
-        """
-        Parse one line from bitbake -e output or from .env file.
-        Put result key-value pair into the storage.
-        """
-        if "=" not in line:
-            return
-        match = matcher.match(line)
-        if not match:
-            return
-        key, val = match.groups()
-        self[image][key] = val.strip('"')
-
-    def get_var(self, var, image=None, cache=True):
-        """
-        Get bitbake variable from 'bitbake -e' output or from .env file.
-        This is a lazy method, i.e. it runs bitbake or parses file only when
-        only when variable is requested. It also caches results.
-        """
-        if not image:
-            image = self.default_image
-
-        if image not in self:
-            if image and self.vars_dir:
-                fname = os.path.join(self.vars_dir, image + '.env')
-                if os.path.isfile(fname):
-                    # parse .env file
-                    with open(fname) as varsfile:
-                        for line in varsfile:
-                            self._parse_line(line, image)
-                else:
-                    print("Couldn't get bitbake variable from %s." % fname)
-                    print("File %s doesn't exist." % fname)
-                    return
-            else:
-                # Get bitbake -e output
-                cmd = "bitbake -e"
-                if image:
-                    cmd += " %s" % image
-
-                log_level = logger.getEffectiveLevel()
-                logger.setLevel(logging.INFO)
-                ret, lines = _exec_cmd(cmd)
-                logger.setLevel(log_level)
-
-                if ret:
-                    logger.error("Couldn't get '%s' output.", cmd)
-                    logger.error("Bitbake failed with error:\n%s\n", lines)
-                    return
-
-                # Parse bitbake -e output
-                for line in lines.split('\n'):
-                    self._parse_line(line, image)
-
-            # Make first image a default set of variables
-            if cache:
-                images = [key for key in self if key]
-                if len(images) == 1:
-                    self[None] = self[image]
-
-        result = self[image].get(var)
-        if not cache:
-            self.pop(image, None)
-
-        return result
-
-# Create BB_VARS singleton
-BB_VARS = BitbakeVars()
-
-def get_bitbake_var(var, image=None, cache=True):
-    """
-    Provide old get_bitbake_var API by wrapping
-    get_var method of BB_VARS singleton.
-    """
-    return BB_VARS.get_var(var, image, cache)
diff --git a/import-layers/yocto-poky/scripts/lib/wic/utils/runner.py b/import-layers/yocto-poky/scripts/lib/wic/utils/runner.py
deleted file mode 100644
index 4aa00fb..0000000
--- a/import-layers/yocto-poky/scripts/lib/wic/utils/runner.py
+++ /dev/null
@@ -1,52 +0,0 @@
-#!/usr/bin/env python -tt
-#
-# Copyright (c) 2011 Intel, Inc.
-#
-# This program is free software; you can redistribute it and/or modify it
-# under the terms of the GNU General Public License as published by the Free
-# Software Foundation; version 2 of the License
-#
-# This program is distributed in the hope that it will be useful, but
-# WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
-# or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
-# for more details.
-#
-# You should have received a copy of the GNU General Public License along
-# with this program; if not, write to the Free Software Foundation, Inc., 59
-# Temple Place - Suite 330, Boston, MA 02111-1307, USA.
-import subprocess
-
-from wic import WicError
-
-def runtool(cmdln_or_args):
-    """ wrapper for most of the subprocess calls
-    input:
-        cmdln_or_args: can be both args and cmdln str (shell=True)
-    return:
-        rc, output
-    """
-    if isinstance(cmdln_or_args, list):
-        cmd = cmdln_or_args[0]
-        shell = False
-    else:
-        import shlex
-        cmd = shlex.split(cmdln_or_args)[0]
-        shell = True
-
-    sout = subprocess.PIPE
-    serr = subprocess.STDOUT
-
-    try:
-        process = subprocess.Popen(cmdln_or_args, stdout=sout,
-                                   stderr=serr, shell=shell)
-        sout, serr = process.communicate()
-        # combine stdout and stderr, filter None out and decode
-        out = ''.join([out.decode('utf-8') for out in [sout, serr] if out])
-    except OSError as err:
-        if err.errno == 2:
-            # [Errno 2] No such file or directory
-            raise WicError('Cannot run command: %s, lost dependency?' % cmd)
-        else:
-            raise # relay
-
-    return process.returncode, out
diff --git a/import-layers/yocto-poky/scripts/oe-build-perf-report b/import-layers/yocto-poky/scripts/oe-build-perf-report
index 6f0b84f..ac88f0f 100755
--- a/import-layers/yocto-poky/scripts/oe-build-perf-report
+++ b/import-layers/yocto-poky/scripts/oe-build-perf-report
@@ -29,12 +29,14 @@
 import scriptpath
 from build_perf import print_table
 from build_perf.report import (metadata_xml_to_json, results_xml_to_json,
-                               aggregate_data, aggregate_metadata, measurement_stats)
+                               aggregate_data, aggregate_metadata, measurement_stats,
+                               AggregateTestData)
 from build_perf import html
+from buildstats import BuildStats, diff_buildstats, BSVerDiff
 
 scriptpath.add_oe_lib_path()
 
-from oeqa.utils.git import GitRepo
+from oeqa.utils.git import GitRepo, GitError
 
 
 # Setup logging
@@ -82,29 +84,52 @@
     # Return field names and a sorted list of revs
     return undef_fields, sorted(revs)
 
-def list_test_revs(repo, tag_name, **kwargs):
+def list_test_revs(repo, tag_name, verbosity, **kwargs):
     """Get list of all tested revisions"""
-    fields, revs = get_test_runs(repo, tag_name, **kwargs)
+    valid_kwargs = dict([(k, v) for k, v in kwargs.items() if v is not None])
+
+    fields, revs = get_test_runs(repo, tag_name, **valid_kwargs)
     ignore_fields = ['tag_number']
+    if verbosity < 2:
+        extra_fields = ['COMMITS', 'TEST RUNS']
+        ignore_fields.extend(['commit_number', 'commit'])
+    else:
+        extra_fields = ['TEST RUNS']
+
     print_fields = [i for i, f in enumerate(fields) if f not in ignore_fields]
 
     # Sort revs
-    rows = [[fields[i].upper() for i in print_fields] + ['TEST RUNS']]
-    prev = [''] * len(revs)
+    rows = [[fields[i].upper() for i in print_fields] + extra_fields]
+
+    prev = [''] * len(print_fields)
+    prev_commit = None
+    commit_cnt = 0
+    commit_field = fields.index('commit')
     for rev in revs:
         # Only use fields that we want to print
-        rev = [rev[i] for i in print_fields]
+        cols = [rev[i] for i in print_fields]
 
-        if rev != prev:
-            new_row = [''] * len(print_fields) + [1]
+
+        if cols != prev:
+            commit_cnt = 1
+            test_run_cnt = 1
+            new_row = [''] * (len(print_fields) + len(extra_fields))
+
             for i in print_fields:
-                if rev[i] != prev[i]:
+                if cols[i] != prev[i]:
                     break
-            new_row[i:-1] = rev[i:]
+            new_row[i:-len(extra_fields)] = cols[i:]
             rows.append(new_row)
         else:
-            rows[-1][-1] += 1
-        prev = rev
+            if rev[commit_field] != prev_commit:
+                commit_cnt += 1
+            test_run_cnt += 1
+
+        if verbosity < 2:
+            new_row[-2] = commit_cnt
+        new_row[-1] = test_run_cnt
+        prev = cols
+        prev_commit = rev[commit_field]
 
     print_table(rows)
 
@@ -309,20 +334,50 @@
     print()
 
 
-def print_html_report(data, id_comp):
+class BSSummary(object):
+    def __init__(self, bs1, bs2):
+        self.tasks = {'count': bs2.num_tasks,
+                      'change': '{:+d}'.format(bs2.num_tasks - bs1.num_tasks)}
+        self.top_consumer = None
+        self.top_decrease = None
+        self.top_increase = None
+        self.ver_diff = OrderedDict()
+
+        tasks_diff = diff_buildstats(bs1, bs2, 'cputime')
+
+        # Get top consumers of resources
+        tasks_diff = sorted(tasks_diff, key=attrgetter('value2'))
+        self.top_consumer = tasks_diff[-5:]
+
+        # Get biggest increase and decrease in resource usage
+        tasks_diff = sorted(tasks_diff, key=attrgetter('absdiff'))
+        self.top_decrease = tasks_diff[0:5]
+        self.top_increase = tasks_diff[-5:]
+
+        # Compare recipe versions and prepare data for display
+        ver_diff = BSVerDiff(bs1, bs2)
+        if ver_diff:
+            if ver_diff.new:
+                self.ver_diff['New recipes'] = [(n, r.evr) for n, r in ver_diff.new.items()]
+            if ver_diff.dropped:
+                self.ver_diff['Dropped recipes'] = [(n, r.evr) for n, r in ver_diff.dropped.items()]
+            if ver_diff.echanged:
+                self.ver_diff['Epoch changed'] = [(n, "{} &rarr; {}".format(r.left.evr, r.right.evr)) for n, r in ver_diff.echanged.items()]
+            if ver_diff.vchanged:
+                self.ver_diff['Version changed'] = [(n, "{} &rarr; {}".format(r.left.version, r.right.version)) for n, r in ver_diff.vchanged.items()]
+            if ver_diff.rchanged:
+                self.ver_diff['Revision changed'] = [(n, "{} &rarr; {}".format(r.left.evr, r.right.evr)) for n, r in ver_diff.rchanged.items()]
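
BSSummary above picks the heaviest cputime consumers and the largest increases and decreases from the per-task diff. A reduced sketch of that selection; the TaskDiff tuple and its values are illustrative, only the value2/absdiff attribute names come from the diff_buildstats entries used above:

    from collections import namedtuple

    TaskDiff = namedtuple('TaskDiff', 'name value1 value2 absdiff')
    diffs = [TaskDiff('do_compile', 100.0, 140.0, 40.0),
             TaskDiff('do_configure', 30.0, 10.0, -20.0),
             TaskDiff('do_rootfs', 200.0, 205.0, 5.0)]

    top_consumers = sorted(diffs, key=lambda d: d.value2)[-5:]   # heaviest after the change
    by_change = sorted(diffs, key=lambda d: d.absdiff)
    top_decrease, top_increase = by_change[:5], by_change[-5:]
    print([d.name for d in top_consumers], [d.name for d in top_increase])
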
+
+
+def print_html_report(data, id_comp, buildstats):
     """Print report in html format"""
     # Handle metadata
-    metadata = {'branch': {'title': 'Branch', 'value': 'master'},
-                'hostname': {'title': 'Hostname', 'value': 'foobar'},
-                'commit': {'title': 'Commit', 'value': '1234'}
-               }
-    metadata = metadata_diff(data[id_comp][0], data[-1][0])
-
+    metadata = metadata_diff(data[id_comp].metadata, data[-1].metadata)
 
     # Generate list of tests
     tests = []
-    for test in data[-1][1]['tests'].keys():
-        test_r = data[-1][1]['tests'][test]
+    for test in data[-1].results['tests'].keys():
+        test_r = data[-1].results['tests'][test]
         new_test = {'name': test_r['name'],
                     'description': test_r['description'],
                     'status': test_r['status'],
@@ -368,6 +423,16 @@
             new_meas['value'] = samples[-1]
             new_meas['value_type'] = samples[-1]['val_cls']
 
+            # Compare buildstats
+            bs_key = test + '.' + meas
+            rev = metadata['commit_num']['value']
+            comp_rev = metadata['commit_num']['value_old']
+            if (rev in buildstats and bs_key in buildstats[rev] and
+                    comp_rev in buildstats and bs_key in buildstats[comp_rev]):
+                new_meas['buildstats'] = BSSummary(buildstats[comp_rev][bs_key],
+                                                   buildstats[rev][bs_key])
+
+
             new_test['measurements'].append(new_meas)
         tests.append(new_test)
 
@@ -376,7 +441,61 @@
                             'max': get_data_item(data[-1][0], 'layers.meta.commit_count')}
                  }
 
-    print(html.template.render(metadata=metadata, test_data=tests, chart_opts=chart_opts))
+    print(html.template.render(title="Build Perf Test Report",
+                               metadata=metadata, test_data=tests,
+                               chart_opts=chart_opts))
+
+
+def get_buildstats(repo, notes_ref, revs, outdir=None):
+    """Get the buildstats from git notes"""
+    full_ref = 'refs/notes/' + notes_ref
+    if not repo.rev_parse(full_ref):
+        log.error("No buildstats found, please try running "
+                  "'git fetch origin %s:%s' to fetch them from the remote",
+                  full_ref, full_ref)
+        return
+
+    missing = False
+    buildstats = {}
+    log.info("Parsing buildstats from 'refs/notes/%s'", notes_ref)
+    for rev in revs:
+        buildstats[rev.commit_number] = {}
+        log.debug('Dumping buildstats for %s (%s)', rev.commit_number,
+                  rev.commit)
+        for tag in rev.tags:
+            log.debug('    %s', tag)
+            try:
+                bs_all = json.loads(repo.run_cmd(['notes', '--ref', notes_ref,
+                                                  'show', tag + '^0']))
+            except GitError:
+                log.warning("Buildstats not found for %s", tag)
+                bs_all = {}
+                missing = True
+
+            for measurement, bs in bs_all.items():
+                # Write out onto disk
+                if outdir:
+                    tag_base, run_id = tag.rsplit('/', 1)
+                    tag_base = tag_base.replace('/', '_')
+                    bs_dir = os.path.join(outdir, measurement, tag_base)
+                    if not os.path.exists(bs_dir):
+                        os.makedirs(bs_dir)
+                    with open(os.path.join(bs_dir, run_id + '.json'), 'w') as f:
+                        json.dump(bs, f, indent=2)
+
+                # Read buildstats into a dict
+                _bs = BuildStats.from_json(bs)
+                if measurement not in buildstats[rev.commit_number]:
+                    buildstats[rev.commit_number][measurement] = _bs
+                else:
+                    buildstats[rev.commit_number][measurement].aggregate(_bs)
+
+    if missing:
+        log.info("Buildstats were missing for some test runs, please "
+                 "run 'git fetch origin %s:%s' and try again",
+                 full_ref, full_ref)
+
+    return buildstats
 
 
 def auto_args(repo, args):
@@ -411,7 +530,7 @@
                         help="Verbose logging")
     parser.add_argument('--repo', '-r', required=True,
                         help="Results repository (local git clone)")
-    parser.add_argument('--list', '-l', action='store_true',
+    parser.add_argument('--list', '-l', action='count',
                         help="List available test runs")
     parser.add_argument('--html', action='store_true',
                         help="Generate report in html format")
@@ -434,6 +553,8 @@
     group.add_argument('--commit-number2',
                        help="Revision number to compare with, redundant if "
                             "--commit2 is specified")
+    parser.add_argument('--dump-buildstats', nargs='?', const='.',
+                        help="Dump buildstats of the tests")
 
     return parser.parse_args(argv)
 
@@ -447,7 +568,7 @@
     repo = GitRepo(args.repo)
 
     if args.list:
-        list_test_revs(repo, args.tag_name)
+        list_test_revs(repo, args.tag_name, args.list, hostname=args.hostname)
         return 0
 
     # Determine hostname which to use
@@ -501,7 +622,7 @@
     xml = is_xml_format(repo, revs[index_r].tags[-1])
 
     if args.html:
-        index_0 = max(0, index_r - args.history_length)
+        index_0 = max(0, min(index_l, index_r - args.history_length))
         rev_range = range(index_0, index_r + 1)
     else:
         # We do not need range of commits for text report (no graphs)
@@ -515,18 +636,27 @@
 
     data = []
     for raw_m, raw_d in raw_data:
-        data.append((aggregate_metadata(raw_m), aggregate_data(raw_d)))
+        data.append(AggregateTestData(aggregate_metadata(raw_m),
+                                      aggregate_data(raw_d)))
 
     # Re-map list indexes to the new table starting from index 0
     index_r = index_r - index_0
     index_l = index_l - index_0
 
+    # Read buildstats only when needed
+    buildstats = None
+    if args.dump_buildstats or args.html:
+        outdir = 'oe-build-perf-buildstats' if args.dump_buildstats else None
+        notes_ref = 'buildstats/{}/{}/{}'.format(args.hostname, args.branch,
+                                                 args.machine)
+        buildstats = get_buildstats(repo, notes_ref, [rev_l, rev_r], outdir)
+
     # Print report
     if not args.html:
-        print_diff_report(data[index_l][0], data[index_l][1],
-                          data[index_r][0], data[index_r][1])
+        print_diff_report(data[index_l].metadata, data[index_l].results,
+                          data[index_r].metadata, data[index_r].results)
     else:
-        print_html_report(data, index_l)
+        print_html_report(data, index_l, buildstats)
 
     return 0
 
diff --git a/import-layers/yocto-poky/scripts/oe-buildenv-internal b/import-layers/yocto-poky/scripts/oe-buildenv-internal
index c890552..77f98a3 100755
--- a/import-layers/yocto-poky/scripts/oe-buildenv-internal
+++ b/import-layers/yocto-poky/scripts/oe-buildenv-internal
@@ -24,8 +24,7 @@
     echo 'Usage: . $OEROOT/scripts/oe-buildenv-internal &&'
     echo ''
     echo 'OpenEmbedded oe-buildenv-internal - an internal script that is'
-    echo 'used in oe-init-build-env and oe-init-build-env-memres to'
-    echo 'initialize oe build environment'
+    echo 'used in oe-init-build-env to initialize oe build environment'
     echo ''
     exit 2
 fi
@@ -106,6 +105,9 @@
 
 BITBAKEDIR=$(readlink -f "$BITBAKEDIR")
 BUILDDIR=$(readlink -f "$BUILDDIR")
+BBPATH=$BUILDDIR
+
+export BBPATH
 
 if [ ! -d "$BITBAKEDIR" ]; then
     echo >&2 "Error: The bitbake directory ($BITBAKEDIR) does not exist!  Please ensure a copy of bitbake exists at this location or specify an alternative path on the command line"
diff --git a/import-layers/yocto-poky/scripts/oe-find-native-sysroot b/import-layers/yocto-poky/scripts/oe-find-native-sysroot
index 235a67c..350ea21 100755
--- a/import-layers/yocto-poky/scripts/oe-find-native-sysroot
+++ b/import-layers/yocto-poky/scripts/oe-find-native-sysroot
@@ -50,7 +50,7 @@
 set_oe_native_sysroot(){
     echo "Running bitbake -e $1"
     BITBAKE_E="`bitbake -e $1`"
-    OECORE_NATIVE_SYSROOT=`echo "$BITBAKE_E" | grep ^STAGING_DIR_NATIVE | cut -d '"' -f2`
+    OECORE_NATIVE_SYSROOT=`echo "$BITBAKE_E" | grep ^STAGING_DIR_NATIVE= | cut -d '"' -f2`
 
     if [ "x$OECORE_NATIVE_SYSROOT" = "x" ]; then
         # This indicates that there was an error running bitbake -e that
diff --git a/import-layers/yocto-poky/scripts/oe-pkgdata-util b/import-layers/yocto-poky/scripts/oe-pkgdata-util
index 677effe..c6fba56 100755
--- a/import-layers/yocto-poky/scripts/oe-pkgdata-util
+++ b/import-layers/yocto-poky/scripts/oe-pkgdata-util
@@ -40,9 +40,8 @@
     import bb.tinfoil
     import logging
     tinfoil = bb.tinfoil.Tinfoil()
-    tinfoil.prepare(True)
-
     tinfoil.logger.setLevel(logging.WARNING)
+    tinfoil.prepare(True)
     return tinfoil
 
 
@@ -198,6 +197,10 @@
                 # PKGSIZE is now in bytes, but we want it in KB
                 pkgsize = (int(value) + 1024 // 2) // 1024
                 value = "%d" % pkgsize
+            if args.unescape:
+                import codecs
+                # escape_decode() unescapes backslash encodings in byte streams
+                value = codecs.escape_decode(bytes(value, "utf-8"))[0].decode("utf-8")
             if args.prefix_name:
                 print('%s %s' % (pkg_name, value))
             else:
@@ -553,6 +556,7 @@
     parser_read_value.add_argument('pkg', nargs='*', help='Runtime package name to look up')
     parser_read_value.add_argument('-f', '--file', help='Read package names from the specified file (one per line, first field only)')
     parser_read_value.add_argument('-n', '--prefix-name', help='Prefix output with package name', action='store_true')
+    parser_read_value.add_argument('-u', '--unescape', help='Expand escapes such as \\n', action='store_true')
     parser_read_value.set_defaults(func=read_value)
 
     parser_glob = subparsers.add_parser('glob',
diff --git a/import-layers/yocto-poky/scripts/oe-publish-sdk b/import-layers/yocto-poky/scripts/oe-publish-sdk
index 9f7963c..ee33acf 100755
--- a/import-layers/yocto-poky/scripts/oe-publish-sdk
+++ b/import-layers/yocto-poky/scripts/oe-publish-sdk
@@ -114,9 +114,9 @@
 
     # Setting up the git repo
     if not is_remote:
-        cmd = 'set -e; mkdir -p %s/layers; cd %s/layers; if [ ! -e .git ]; then git init .; mv .git/hooks/post-update.sample .git/hooks/post-update; echo "*.pyc\n*.pyo\npyshtables.py" > .gitignore; fi; git add -A .; git config user.email "oe@oe.oe" && git config user.name "OE" && git commit -q -m "init repo" || true; git update-server-info' % (destination, destination)
+        cmd = 'set -e; mkdir -p %s/layers; cd %s/layers; if [ ! -e .git ]; then git init .; cp .git/hooks/post-update.sample .git/hooks/post-commit; echo "*.pyc\n*.pyo\npyshtables.py" > .gitignore; fi; git add -A .; git config user.email "oe@oe.oe" && git config user.name "OE" && git commit -q -m "init repo" || true' % (destination, destination)
     else:
-        cmd = "ssh %s 'set -e; mkdir -p %s/layers; cd %s/layers; if [ ! -e .git ]; then git init .; mv .git/hooks/post-update.sample .git/hooks/post-update; echo '*.pyc\n*.pyo\npyshtables.py' > .gitignore; fi; git add -A .; git config user.email 'oe@oe.oe' && git config user.name 'OE' && git commit -q -m \"init repo\" || true; git update-server-info'" % (host, destdir, destdir)
+        cmd = "ssh %s 'set -e; mkdir -p %s/layers; cd %s/layers; if [ ! -e .git ]; then git init .; cp .git/hooks/post-update.sample .git/hooks/post-commit; echo '*.pyc\n*.pyo\npyshtables.py' > .gitignore; fi; git add -A .; git config user.email 'oe@oe.oe' && git config user.name 'OE' && git commit -q -m \"init repo\" || true'" % (host, destdir, destdir)
     ret = subprocess.call(cmd, shell=True)
     if ret == 0:
         logger.info('SDK published successfully')
diff --git a/import-layers/yocto-poky/scripts/oe-selftest b/import-layers/yocto-poky/scripts/oe-selftest
index 52366b1..1bf860a 100755
--- a/import-layers/yocto-poky/scripts/oe-selftest
+++ b/import-layers/yocto-poky/scripts/oe-selftest
@@ -1,6 +1,6 @@
 #!/usr/bin/env python3
 
-# Copyright (c) 2013 Intel Corporation
+# Copyright (c) 2013-2017 Intel Corporation
 #
 # This program is free software; you can redistribute it and/or modify
 # it under the terms of the GNU General Public License version 2 as
@@ -25,790 +25,51 @@
 # E.g: "oe-selftest -r bblayers.BitbakeLayers" will run just the BitbakeLayers class from meta/lib/oeqa/selftest/bblayers.py
 
 
+
 import os
 import sys
-import unittest
-import logging
 import argparse
-import subprocess
-import time as t
-import re
-import fnmatch
-import collections
-import imp
+import logging
 
-sys.path.insert(0, os.path.dirname(os.path.realpath(__file__)) + '/lib')
-import scriptpath
-scriptpath.add_bitbake_lib_path()
-scriptpath.add_oe_lib_path()
+scripts_path = os.path.dirname(os.path.realpath(__file__))
+lib_path = scripts_path + '/lib'
+sys.path = sys.path + [lib_path]
 import argparse_oe
+import scriptutils
+import scriptpath
+scriptpath.add_oe_lib_path()
+scriptpath.add_bitbake_lib_path()
 
-import oeqa.selftest
-import oeqa.utils.ftools as ftools
-from oeqa.utils.commands import runCmd, get_bb_var, get_test_layer
-from oeqa.utils.metadata import metadata_from_bb, write_metadata_file
-from oeqa.selftest.base import oeSelfTest, get_available_machines
+from oeqa.utils import load_test_components
+from oeqa.core.exception import OEQAPreRun
 
-try:
-    import xmlrunner
-    from xmlrunner.result import _XMLTestResult as TestResult
-    from xmlrunner import XMLTestRunner as _TestRunner
-except ImportError:
-    # use the base runner instead
-    from unittest import TextTestResult as TestResult
-    from unittest import TextTestRunner as _TestRunner
-
-log_prefix = "oe-selftest-" + t.strftime("%Y%m%d-%H%M%S")
-
-def logger_create():
-    log_file = log_prefix + ".log"
-    if os.path.lexists("oe-selftest.log"):
-        os.remove("oe-selftest.log")
-    os.symlink(log_file, "oe-selftest.log")
-
-    log = logging.getLogger("selftest")
-    log.setLevel(logging.DEBUG)
-
-    fh = logging.FileHandler(filename=log_file, mode='w')
-    fh.setLevel(logging.DEBUG)
-
-    ch = logging.StreamHandler(sys.stdout)
-    ch.setLevel(logging.INFO)
-
-    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
-    fh.setFormatter(formatter)
-    ch.setFormatter(formatter)
-
-    log.addHandler(fh)
-    log.addHandler(ch)
-
-    return log
-
-log = logger_create()
-
-def get_args_parser():
-    description = "Script that runs unit tests against bitbake and other Yocto related tools. The goal is to validate tools functionality and metadata integrity. Refer to https://wiki.yoctoproject.org/wiki/Oe-selftest for more information."
-    parser = argparse_oe.ArgumentParser(description=description)
-    group = parser.add_mutually_exclusive_group(required=True)
-    group.add_argument('-r', '--run-tests', required=False, action='store', nargs='*', dest="run_tests", default=None, help='Select what tests to run (modules, classes or test methods). Format should be: <module>.<class>.<test_method>')
-    group.add_argument('-a', '--run-all-tests', required=False, action="store_true", dest="run_all_tests", default=False, help='Run all (unhidden) tests')
-    group.add_argument('-m', '--list-modules', required=False, action="store_true", dest="list_modules", default=False, help='List all available test modules.')
-    group.add_argument('--list-classes', required=False, action="store_true", dest="list_allclasses", default=False, help='List all available test classes.')
-    parser.add_argument('--coverage', action="store_true", help="Run code coverage when testing")
-    parser.add_argument('--coverage-source', dest="coverage_source", nargs="+", help="Specifiy the directories to take coverage from")
-    parser.add_argument('--coverage-include', dest="coverage_include", nargs="+", help="Specify extra patterns to include into the coverage measurement")
-    parser.add_argument('--coverage-omit', dest="coverage_omit", nargs="+", help="Specify with extra patterns to exclude from the coverage measurement")
-    group.add_argument('--run-tests-by', required=False, dest='run_tests_by', default=False, nargs='*',
-                       help='run-tests-by <name|class|module|id|tag> <list of tests|classes|modules|ids|tags>')
-    group.add_argument('--list-tests-by', required=False, dest='list_tests_by', default=False, nargs='*',
-                       help='list-tests-by <name|class|module|id|tag> <list of tests|classes|modules|ids|tags>')
-    group.add_argument('-l', '--list-tests', required=False,  action="store_true", dest="list_tests", default=False,
-                       help='List all available tests.')
-    group.add_argument('--list-tags', required=False, dest='list_tags', default=False, action="store_true",
-                       help='List all tags that have been set to test cases.')
-    parser.add_argument('--machine', required=False, dest='machine', choices=['random', 'all'], default=None,
-                        help='Run tests on different machines (random/all).')
-    parser.add_argument('--repository', required=False, dest='repository', default='', action='store',
-                        help='Submit test results to a repository')
-    return parser
-
-builddir = None
-
-
-def preflight_check():
-
-    global builddir
-
-    log.info("Checking that everything is in order before running the tests")
-
-    if not os.environ.get("BUILDDIR"):
-        log.error("BUILDDIR isn't set. Did you forget to source your build environment setup script?")
-        return False
-
-    builddir = os.environ.get("BUILDDIR")
-    if os.getcwd() != builddir:
-        log.info("Changing cwd to %s" % builddir)
-        os.chdir(builddir)
-
-    if not "meta-selftest" in get_bb_var("BBLAYERS"):
-        log.warn("meta-selftest layer not found in BBLAYERS, adding it")
-        meta_selftestdir = os.path.join(
-                get_bb_var("BBLAYERS_FETCH_DIR"),
-                'meta-selftest')
-        if os.path.isdir(meta_selftestdir):
-            runCmd("bitbake-layers add-layer %s" %meta_selftestdir)
-        else:
-            log.error("could not locate meta-selftest in:\n%s"
-                    %meta_selftestdir)
-            return False
-
-    if "buildhistory.bbclass" in get_bb_var("BBINCLUDED"):
-        log.error("You have buildhistory enabled already and this isn't recommended for selftest, please disable it first.")
-        return False
-
-    if get_bb_var("PRSERV_HOST"):
-        log.error("Please unset PRSERV_HOST in order to run oe-selftest")
-        return False
-
-    if get_bb_var("SANITY_TESTED_DISTROS"):
-        log.error("Please unset SANITY_TESTED_DISTROS in order to run oe-selftest")
-        return False
-
-    log.info("Running bitbake -p")
-    runCmd("bitbake -p")
-
-    return True
-
-def add_include():
-    global builddir
-    if "#include added by oe-selftest.py" \
-        not in ftools.read_file(os.path.join(builddir, "conf/local.conf")):
-            log.info("Adding: \"include selftest.inc\" in local.conf")
-            ftools.append_file(os.path.join(builddir, "conf/local.conf"), \
-                    "\n#include added by oe-selftest.py\ninclude machine.inc\ninclude selftest.inc")
-
-    if "#include added by oe-selftest.py" \
-        not in ftools.read_file(os.path.join(builddir, "conf/bblayers.conf")):
-            log.info("Adding: \"include bblayers.inc\" in bblayers.conf")
-            ftools.append_file(os.path.join(builddir, "conf/bblayers.conf"), \
-                    "\n#include added by oe-selftest.py\ninclude bblayers.inc")
-
-def remove_include():
-    global builddir
-    if builddir is None:
-        return
-    if "#include added by oe-selftest.py" \
-        in ftools.read_file(os.path.join(builddir, "conf/local.conf")):
-            log.info("Removing the include from local.conf")
-            ftools.remove_from_file(os.path.join(builddir, "conf/local.conf"), \
-                    "\n#include added by oe-selftest.py\ninclude machine.inc\ninclude selftest.inc")
-
-    if "#include added by oe-selftest.py" \
-        in ftools.read_file(os.path.join(builddir, "conf/bblayers.conf")):
-            log.info("Removing the include from bblayers.conf")
-            ftools.remove_from_file(os.path.join(builddir, "conf/bblayers.conf"), \
-                    "\n#include added by oe-selftest.py\ninclude bblayers.inc")
-
-def remove_inc_files():
-    global builddir
-    if builddir is None:
-        return
-    try:
-        os.remove(os.path.join(builddir, "conf/selftest.inc"))
-        for root, _, files in os.walk(get_test_layer()):
-            for f in files:
-                if f == 'test_recipe.inc':
-                    os.remove(os.path.join(root, f))
-    except OSError as e:
-        pass
-
-    for incl_file in ['conf/bblayers.inc', 'conf/machine.inc']:
-        try:
-            os.remove(os.path.join(builddir, incl_file))
-        except:
-            pass
-
-
-def get_tests_modules(include_hidden=False):
-    modules_list = list()
-    for modules_path in oeqa.selftest.__path__:
-        for (p, d, f) in os.walk(modules_path):
-            files = sorted([f for f in os.listdir(p) if f.endswith('.py') and not (f.startswith('_') and not include_hidden) and not f.startswith('__') and f != 'base.py'])
-            for f in files:
-                submodules = p.split("selftest")[-1]
-                module = ""
-                if submodules:
-                    module = 'oeqa.selftest' + submodules.replace("/",".") + "." + f.split('.py')[0]
-                else:
-                    module = 'oeqa.selftest.' + f.split('.py')[0]
-                if module not in modules_list:
-                    modules_list.append(module)
-    return modules_list
-
-
-def get_tests(exclusive_modules=[], include_hidden=False):
-    test_modules = list()
-    for x in exclusive_modules:
-        test_modules.append('oeqa.selftest.' + x)
-    if not test_modules:
-        inc_hidden = include_hidden
-        test_modules = get_tests_modules(inc_hidden)
-
-    return test_modules
-
-
-class Tc:
-    def __init__(self, tcname, tcclass, tcmodule, tcid=None, tctag=None):
-        self.tcname = tcname
-        self.tcclass = tcclass
-        self.tcmodule = tcmodule
-        self.tcid = tcid
-        # A test case can have multiple tags (as tuples) otherwise str will suffice
-        self.tctag = tctag
-        self.fullpath = '.'.join(['oeqa', 'selftest', tcmodule, tcclass, tcname])
-
-
-def get_tests_from_module(tmod):
-    tlist = []
-    prefix = 'oeqa.selftest.'
-
-    try:
-        import importlib
-        modlib = importlib.import_module(tmod)
-        for mod in list(vars(modlib).values()):
-            if isinstance(mod, type(oeSelfTest)) and issubclass(mod, oeSelfTest) and mod is not oeSelfTest:
-                for test in dir(mod):
-                    if test.startswith('test_') and hasattr(vars(mod)[test], '__call__'):
-                        # Get test case id and feature tag
-                        # NOTE: if testcase decorator or feature tag not set will throw error
-                        try:
-                            tid = vars(mod)[test].test_case
-                        except:
-                            print('DEBUG: tc id missing for ' + str(test))
-                            tid = None
-                        try:
-                            ttag = vars(mod)[test].tag__feature
-                        except:
-                            # print('DEBUG: feature tag missing for ' + str(test))
-                            ttag = None
-
-                        # NOTE: for some reason lstrip() doesn't work for mod.__module__
-                        tlist.append(Tc(test, mod.__name__, mod.__module__.replace(prefix, ''), tid, ttag))
-    except:
-        pass
-
-    return tlist
-
-
-def get_all_tests():
-    # Get all the test modules (except the hidden ones)
-    testlist = []
-    tests_modules = get_tests_modules()
-    # Get all the tests from modules
-    for tmod in sorted(tests_modules):
-        testlist += get_tests_from_module(tmod)
-    return testlist
-
-
-def get_testsuite_by(criteria, keyword):
-    # Get a testsuite based on 'keyword'
-    # criteria: name, class, module, id, tag
-    # keyword: a list of tests, classes, modules, ids, tags
-
-    ts = []
-    all_tests = get_all_tests()
-
-    def get_matches(values):
-        # Get an item and return the ones that match with keyword(s)
-        # values: the list of items (names, modules, classes...)
-        result = []
-        remaining = values[:]
-        for key in keyword:
-            found = False
-            if key in remaining:
-                # Regular matching of exact item
-                result.append(key)
-                remaining.remove(key)
-                found = True
-            else:
-                # Wildcard matching
-                pattern = re.compile(fnmatch.translate(r"%s" % key))
-                added = [x for x in remaining if pattern.match(x)]
-                if added:
-                    result.extend(added)
-                    remaining = [x for x in remaining if x not in added]
-                    found = True
-            if not found:
-                log.error("Failed to find test: %s" % key)
-
-        return result
-
-    if criteria == 'name':
-        names = get_matches([ tc.tcname for tc in all_tests ])
-        ts = [ tc for tc in all_tests if tc.tcname in names ]
-
-    elif criteria == 'class':
-        classes = get_matches([ tc.tcclass for tc in all_tests ])
-        ts = [ tc for tc in all_tests if tc.tcclass in classes ]
-
-    elif criteria == 'module':
-        modules = get_matches([ tc.tcmodule for tc in all_tests ])
-        ts = [ tc for tc in all_tests if tc.tcmodule in modules ]
-
-    elif criteria == 'id':
-        ids = get_matches([ str(tc.tcid) for tc in all_tests ])
-        ts = [ tc for tc in all_tests if str(tc.tcid) in ids ]
-
-    elif criteria == 'tag':
-        values = set()
-        for tc in all_tests:
-            # tc can have multiple tags (as tuple) otherwise str will suffice
-            if isinstance(tc.tctag, tuple):
-                values |= { str(tag) for tag in tc.tctag }
-            else:
-                values.add(str(tc.tctag))
-
-        tags = get_matches(list(values))
-
-        for tc in all_tests:
-            for tag in tags:
-                if isinstance(tc.tctag, tuple) and tag in tc.tctag:
-                    ts.append(tc)
-                elif tag == tc.tctag:
-                    ts.append(tc)
-
-        # Remove duplicates from the list
-        ts = list(set(ts))
-
-    return ts
-
-
-def list_testsuite_by(criteria, keyword):
-    # Get a testsuite based on 'keyword'
-    # criteria: name, class, module, id, tag
-    # keyword: a list of tests, classes, modules, ids, tags
-    def tc_key(t):
-        if t[0] is None:
-            return  (0,) + t[1:]
-        return t
-    # tcid may be None if no ID was assigned, in which case sorted() will throw
-    # a TypeError as Python 3 does not allow comparison (<,<=,>=,>) of
-    # heterogeneous types, handle this by using a custom key generator
-    ts = sorted([ (tc.tcid, tc.tctag, tc.tcname, tc.tcclass, tc.tcmodule) \
-                  for tc in get_testsuite_by(criteria, keyword) ], key=tc_key)
-    print('_' * 150)
-    for t in ts:
-        if isinstance(t[1], (tuple, list)):
-            print('%-4s\t%-20s\t%-60s\t%-25s\t%-20s' % (t[0], ', '.join(t[1]), t[2], t[3], t[4]))
-        else:
-            print('%-4s\t%-20s\t%-60s\t%-25s\t%-20s' % t)
-    print('_' * 150)
-    print('Filtering by:\t %s' % criteria)
-    print('Looking for:\t %s' % ', '.join(str(x) for x in keyword))
-    print('Total found:\t %s' % len(ts))
-
-
-def list_tests():
-    # List all available oe-selftest tests
-
-    ts = get_all_tests()
-
-    print('%-4s\t%-10s\t%-50s' % ('id', 'tag', 'test'))
-    print('_' * 80)
-    for t in ts:
-        if isinstance(t.tctag, (tuple, list)):
-            print('%-4s\t%-10s\t%-50s' % (t.tcid, ', '.join(t.tctag), '.'.join([t.tcmodule, t.tcclass, t.tcname])))
-        else:
-            print('%-4s\t%-10s\t%-50s' % (t.tcid, t.tctag, '.'.join([t.tcmodule, t.tcclass, t.tcname])))
-    print('_' * 80)
-    print('Total found:\t %s' % len(ts))
-
-def list_tags():
-    # Get all tags set to test cases
-    # This is useful when setting tags to test cases
-    # The list of tags should be kept as minimal as possible
-    tags = set()
-    all_tests = get_all_tests()
-
-    for tc in all_tests:
-        if isinstance(tc.tctag, (tuple, list)):
-            tags.update(set(tc.tctag))
-        else:
-            tags.add(tc.tctag)
-
-    print('Tags:\t%s' % ', '.join(str(x) for x in tags))
-
-def coverage_setup(coverage_source, coverage_include, coverage_omit):
-    """ Set up the coverage measurement for the testcases to be run """
-    import datetime
-    import subprocess
-    global builddir
-    pokydir = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
-    curcommit= subprocess.check_output(["git", "--git-dir", os.path.join(pokydir, ".git"), "rev-parse", "HEAD"]).decode('utf-8')
-    coveragerc = "%s/.coveragerc" % builddir
-    data_file = "%s/.coverage." % builddir
-    data_file += datetime.datetime.now().strftime('%Y%m%dT%H%M%S')
-    if os.path.isfile(data_file):
-        os.remove(data_file)
-    with open(coveragerc, 'w') as cps:
-        cps.write("# Generated with command '%s'\n" % " ".join(sys.argv))
-        cps.write("# HEAD commit %s\n" % curcommit.strip())
-        cps.write("[run]\n")
-        cps.write("data_file = %s\n" % data_file)
-        cps.write("branch = True\n")
-        # Measure just BBLAYERS, scripts and bitbake folders
-        cps.write("source = \n")
-        if coverage_source:
-            for directory in coverage_source:
-                if not os.path.isdir(directory):
-                    log.warn("Directory %s is not valid.", directory)
-                cps.write("    %s\n" % directory)
-        else:
-            for layer in get_bb_var('BBLAYERS').split():
-                cps.write("    %s\n" % layer)
-            cps.write("    %s\n" % os.path.dirname(os.path.realpath(__file__)))
-            cps.write("    %s\n" % os.path.join(os.path.dirname(os.path.dirname(os.path.realpath(__file__))),'bitbake'))
-
-        if coverage_include:
-            cps.write("include = \n")
-            for pattern in coverage_include:
-                cps.write("    %s\n" % pattern)
-        if coverage_omit:
-            cps.write("omit = \n")
-            for pattern in coverage_omit:
-                cps.write("    %s\n" % pattern)
-
-        return coveragerc
-
-def coverage_report():
-    """ Loads the coverage data gathered and reports it back """
-    try:
-        # Coverage4 uses coverage.Coverage
-        from coverage import Coverage
-    except:
-        # Coverage under version 4 uses coverage.coverage
-        from coverage import coverage as Coverage
-
-    import io as StringIO
-    from coverage.misc import CoverageException
-
-    cov_output = StringIO.StringIO()
-    # Creating the coverage data with the setting from the configuration file
-    cov = Coverage(config_file = os.environ.get('COVERAGE_PROCESS_START'))
-    try:
-        # Load data from the data file specified in the configuration
-        cov.load()
-        # Store report data in a StringIO variable
-        cov.report(file = cov_output, show_missing=False)
-        log.info("\n%s" % cov_output.getvalue())
-    except CoverageException as e:
-        # Show problems with the reporting. Since Coverage4 not finding  any data to report raises an exception
-        log.warn("%s" % str(e))
-    finally:
-        cov_output.close()
-
+logger = scriptutils.logger_create('oe-selftest', stream=sys.stdout)
 
 def main():
-    parser = get_args_parser()
-    args = parser.parse_args()
+    description = "Script that runs unit tests against bitbake and other Yocto related tools. The goal is to validate tools functionality and metadata integrity. Refer to https://wiki.yoctoproject.org/wiki/Oe-selftest for more information."
+    parser = argparse_oe.ArgumentParser(description=description)
 
-    # Add <layer>/lib to sys.path, so layers can add selftests
-    log.info("Running bitbake -e to get BBPATH")
-    bbpath = get_bb_var('BBPATH').split(':')
-    layer_libdirs = [p for p in (os.path.join(l, 'lib') for l in bbpath) if os.path.exists(p)]
-    sys.path.extend(layer_libdirs)
-    imp.reload(oeqa.selftest)
+    comp_name, comp = load_test_components(logger, 'oe-selftest').popitem()
+    comp.register_commands(logger, parser)
 
-    # act like bitbake and enforce en_US.UTF-8 locale
-    os.environ["LC_ALL"] = "en_US.UTF-8"
+    try:
+        args = parser.parse_args()
+        results = args.func(logger, args)
+        ret = 0 if results.wasSuccessful() else 1
+    except SystemExit as err:
+        if err.code != 0:
+            raise err
+        ret = err.code
+    except OEQAPreRun as pr:
+        ret = 1
 
-    if args.run_tests_by and len(args.run_tests_by) >= 2:
-        valid_options = ['name', 'class', 'module', 'id', 'tag']
-        if args.run_tests_by[0] not in valid_options:
-            print('--run-tests-by %s not a valid option. Choose one of <name|class|module|id|tag>.' % args.run_tests_by[0])
-            return 1
-        else:
-            criteria = args.run_tests_by[0]
-            keyword = args.run_tests_by[1:]
-            ts = sorted([ tc.fullpath for tc in get_testsuite_by(criteria, keyword) ])
-        if not ts:
-            return 1
+    return ret
 
-    if args.list_tests_by and len(args.list_tests_by) >= 2:
-        valid_options = ['name', 'class', 'module', 'id', 'tag']
-        if args.list_tests_by[0] not in valid_options:
-            print('--list-tests-by %s not a valid option. Choose one of <name|class|module|id|tag>.' % args.list_tests_by[0])
-            return 1
-        else:
-            criteria = args.list_tests_by[0]
-            keyword = args.list_tests_by[1:]
-            list_testsuite_by(criteria, keyword)
-
-    if args.list_tests:
-        list_tests()
-
-    if args.list_tags:
-        list_tags()
-
-    if args.list_allclasses:
-        args.list_modules = True
-
-    if args.list_modules:
-        log.info('Listing all available test modules:')
-        testslist = get_tests(include_hidden=True)
-        for test in testslist:
-            module = test.split('oeqa.selftest.')[-1]
-            info = ''
-            if module.startswith('_'):
-                info = ' (hidden)'
-            print(module + info)
-            if args.list_allclasses:
-                try:
-                    import importlib
-                    modlib = importlib.import_module(test)
-                    for v in vars(modlib):
-                        t = vars(modlib)[v]
-                        if isinstance(t, type(oeSelfTest)) and issubclass(t, oeSelfTest) and t!=oeSelfTest:
-                            print(" --", v)
-                            for method in dir(t):
-                                if method.startswith("test_") and isinstance(vars(t)[method], collections.Callable):
-                                    print(" --  --", method)
-
-                except (AttributeError, ImportError) as e:
-                    print(e)
-                    pass
-
-    if args.run_tests or args.run_all_tests or args.run_tests_by:
-        if not preflight_check():
-            return 1
-
-        if args.run_tests_by:
-            testslist = ts
-        else:
-            testslist = get_tests(exclusive_modules=(args.run_tests or []), include_hidden=False)
-
-        suite = unittest.TestSuite()
-        loader = unittest.TestLoader()
-        loader.sortTestMethodsUsing = None
-        runner = TestRunner(verbosity=2,
-                resultclass=buildResultClass(args))
-        # we need to do this here, otherwise just loading the tests
-        # will take 2 minutes (bitbake -e calls)
-        oeSelfTest.testlayer_path = get_test_layer()
-        for test in testslist:
-            log.info("Loading tests from: %s" % test)
-            try:
-                suite.addTests(loader.loadTestsFromName(test))
-            except AttributeError as e:
-                log.error("Failed to import %s" % test)
-                log.error(e)
-                return 1
-        add_include()
-
-        if args.machine:
-            # Custom machine sets only weak default values (??=) for MACHINE in machine.inc
-            # This let test cases that require a specific MACHINE to be able to override it, using (?= or =)
-            log.info('Custom machine mode enabled. MACHINE set to %s' % args.machine)
-            if args.machine == 'random':
-                os.environ['CUSTOMMACHINE'] = 'random'
-                result = runner.run(suite)
-            else:  # all
-                machines = get_available_machines()
-                for m in machines:
-                    log.info('Run tests with custom MACHINE set to: %s' % m)
-                    os.environ['CUSTOMMACHINE'] = m
-                    result = runner.run(suite)
-        else:
-            result = runner.run(suite)
-
-        log.info("Finished")
-
-        if args.repository:
-            import git
-            # Commit tests results to repository
-            metadata = metadata_from_bb()
-            git_dir = os.path.join(os.getcwd(), 'selftest')
-            if not os.path.isdir(git_dir):
-                os.mkdir(git_dir)
-
-            log.debug('Checking for git repository in %s' % git_dir)
-            try:
-                repo = git.Repo(git_dir)
-            except git.exc.InvalidGitRepositoryError:
-                log.debug("Couldn't find git repository %s; "
-                         "cloning from %s" % (git_dir, args.repository))
-                repo = git.Repo.clone_from(args.repository, git_dir)
-
-            r_branches = repo.git.branch(r=True)
-            r_branches = set(r_branches.replace('origin/', '').split())
-            l_branches = {str(branch) for branch in repo.branches}
-            branch = '%s/%s/%s' % (metadata['hostname'],
-                                   metadata['layers']['meta'].get('branch', '(nogit)'),
-                                   metadata['config']['MACHINE'])
-
-            if branch in l_branches:
-                log.debug('Found branch in local repository, checking out')
-                repo.git.checkout(branch)
-            elif branch in r_branches:
-                log.debug('Found branch in remote repository, checking'
-                          ' out and pulling')
-                repo.git.checkout(branch)
-                repo.git.pull()
-            else:
-                log.debug('New branch %s' % branch)
-                repo.git.checkout('master')
-                repo.git.checkout(b=branch)
-
-            cleanResultsDir(repo)
-            xml_dir = os.path.join(os.getcwd(), log_prefix)
-            copyResultFiles(xml_dir, git_dir, repo)
-            metadata_file = os.path.join(git_dir, 'metadata.xml')
-            write_metadata_file(metadata_file, metadata)
-            repo.index.add([metadata_file])
-            repo.index.write()
-
-            # Get information for commit message
-            layer_info = ''
-            for layer, values in metadata['layers'].items():
-                layer_info = '%s%-17s = %s:%s\n' % (layer_info, layer,
-                              values.get('branch', '(nogit)'), values.get('commit', '0'*40))
-            msg = 'Selftest for build %s of %s for machine %s on %s\n\n%s' % (
-                   log_prefix[12:], metadata['distro']['pretty_name'],
-                   metadata['config']['MACHINE'], metadata['hostname'], layer_info)
-
-            log.debug('Commiting results to local repository')
-            repo.index.commit(msg)
-            if not repo.is_dirty():
-                try:
-                    if branch in r_branches:
-                        log.debug('Pushing changes to remote repository')
-                        repo.git.push()
-                    else:
-                        log.debug('Pushing changes to remote repository '
-                                  'creating new branch')
-                        repo.git.push('-u', 'origin', branch)
-                except GitCommandError:
-                    log.error('Falied to push to remote repository')
-                    return 1
-            else:
-                log.error('Local repository is dirty, not pushing commits')
-
-        if result.wasSuccessful():
-            return 0
-        else:
-            return 1
-
-def buildResultClass(args):
-    """Build a Result Class to use in the testcase execution"""
-    import site
-
-    class StampedResult(TestResult):
-        """
-        Custom TestResult that prints the time when a test starts.  As oe-selftest
-        can take a long time (ie a few hours) to run, timestamps help us understand
-        what tests are taking a long time to execute.
-        If coverage is required, this class executes the coverage setup and reporting.
-        """
-        def startTest(self, test):
-            import time
-            self.stream.write(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime()) + " - ")
-            super(StampedResult, self).startTest(test)
-
-        def startTestRun(self):
-            """ Setup coverage before running any testcase """
-
-            # variable holding the coverage configuration file allowing subprocess to be measured
-            self.coveragepth = None
-
-            # indicates the system if coverage is currently installed
-            self.coverage_installed = True
-
-            if args.coverage or args.coverage_source or args.coverage_include or args.coverage_omit:
-                try:
-                    # check if user can do coverage
-                    import coverage
-                except:
-                    log.warn("python coverage is not installed. More info on https://pypi.python.org/pypi/coverage")
-                    self.coverage_installed = False
-
-                if self.coverage_installed:
-                    log.info("Coverage is enabled")
-
-                    major_version = int(coverage.version.__version__[0])
-                    if major_version < 4:
-                        log.error("python coverage %s installed. Require version 4 or greater." % coverage.version.__version__)
-                        self.stop()
-                    # In case the user has not set the variable COVERAGE_PROCESS_START,
-                    # create a default one and export it. The COVERAGE_PROCESS_START
-                    # value indicates where the coverage configuration file resides
-                    # More info on https://pypi.python.org/pypi/coverage
-                    if not os.environ.get('COVERAGE_PROCESS_START'):
-                        os.environ['COVERAGE_PROCESS_START'] = coverage_setup(args.coverage_source, args.coverage_include, args.coverage_omit)
-
-                    # Use default site.USER_SITE and write corresponding config file
-                    site.ENABLE_USER_SITE = True
-                    if not os.path.exists(site.USER_SITE):
-                        os.makedirs(site.USER_SITE)
-                    self.coveragepth = os.path.join(site.USER_SITE, "coverage.pth")
-                    with open(self.coveragepth, 'w') as cps:
-                        cps.write('import sys,site; sys.path.extend(site.getsitepackages()); import coverage; coverage.process_startup();')
-
-        def stopTestRun(self):
-            """ Report coverage data after the testcases are run """
-
-            if args.coverage or args.coverage_source or args.coverage_include or args.coverage_omit:
-                if self.coverage_installed:
-                    with open(os.environ['COVERAGE_PROCESS_START']) as ccf:
-                        log.info("Coverage configuration file (%s)" % os.environ.get('COVERAGE_PROCESS_START'))
-                        log.info("===========================")
-                        log.info("\n%s" % "".join(ccf.readlines()))
-
-                    log.info("Coverage Report")
-                    log.info("===============")
-                    try:
-                        coverage_report()
-                    finally:
-                        # remove the pth file
-                        try:
-                            os.remove(self.coveragepth)
-                        except OSError:
-                            log.warn("Expected temporal file from coverage is missing, ignoring removal.")
-
-    return StampedResult
-
-def cleanResultsDir(repo):
-    """ Remove result files from directory """
-
-    xml_files = []
-    directory = repo.working_tree_dir
-    for f in os.listdir(directory):
-        path = os.path.join(directory, f)
-        if os.path.isfile(path) and path.endswith('.xml'):
-            xml_files.append(f)
-    repo.index.remove(xml_files, working_tree=True)
-
-def copyResultFiles(src, dst, repo):
-    """ Copy result files from src to dst removing the time stamp. """
-
-    import shutil
-
-    re_time = re.compile("-[0-9]+")
-    file_list = []
-
-    for root, subdirs, files in os.walk(src):
-        tmp_dir = root.replace(src, '').lstrip('/')
-        for s in subdirs:
-            os.mkdir(os.path.join(dst, tmp_dir, s))
-        for f in files:
-            file_name = os.path.join(dst, tmp_dir, re_time.sub("", f))
-            shutil.copy2(os.path.join(root, f), file_name)
-            file_list.append(file_name)
-    repo.index.add(file_list)
-
-class TestRunner(_TestRunner):
-    """Test runner class aware of exporting tests."""
-    def __init__(self, *args, **kwargs):
-        try:
-            exportdir = os.path.join(os.getcwd(), log_prefix)
-            kwargsx = dict(**kwargs)
-            # argument specific to XMLTestRunner, if adding a new runner then
-            # also add logic to use other runner's args.
-            kwargsx['output'] = exportdir
-            kwargsx['descriptions'] = False
-            # done for the case where telling the runner where to export
-            super(TestRunner, self).__init__(*args, **kwargsx)
-        except TypeError:
-            log.info("test runner init'ed like unittest")
-            super(TestRunner, self).__init__(*args, **kwargs)
-
-if __name__ == "__main__":
+if __name__ == '__main__':
     try:
         ret = main()
     except Exception:
         ret = 1
         import traceback
         traceback.print_exc()
-    finally:
-        remove_include()
-        remove_inc_files()
     sys.exit(ret)
diff --git a/import-layers/yocto-poky/scripts/oe-setup-builddir b/import-layers/yocto-poky/scripts/oe-setup-builddir
index ef49551..55d73ca 100755
--- a/import-layers/yocto-poky/scripts/oe-setup-builddir
+++ b/import-layers/yocto-poky/scripts/oe-setup-builddir
@@ -133,13 +133,6 @@
 #    unset SHOWYPDOC
 fi
 
-cat <<EOM
-
-### Shell environment set up for builds. ###
-
-You can now run 'bitbake <target>'
-
-EOM
 if [ -z "$OECORENOTESCONF" ]; then
     OECORENOTESCONF="$OEROOT/meta/conf/conf-notes.txt"
 fi
diff --git a/import-layers/yocto-poky/scripts/oe-test b/import-layers/yocto-poky/scripts/oe-test
index a1d282d..34d9012 100755
--- a/import-layers/yocto-poky/scripts/oe-test
+++ b/import-layers/yocto-poky/scripts/oe-test
@@ -8,7 +8,6 @@
 import os
 import sys
 import argparse
-import importlib
 import logging
 
 scripts_path = os.path.dirname(os.path.realpath(__file__))
@@ -25,35 +24,10 @@
 except ImportError:
     pass
 
-from oeqa.core.context import OETestContextExecutor
+from oeqa.utils import load_test_components
+from oeqa.core.exception import OEQAPreRun
 
-logger = scriptutils.logger_create('oe-test')
-
-def _load_test_components(logger):
-    components = {}
-
-    for path in sys.path:
-        base_dir = os.path.join(path, 'oeqa')
-        if os.path.exists(base_dir) and os.path.isdir(base_dir):
-            for file in os.listdir(base_dir):
-                comp_name = file
-                comp_context = os.path.join(base_dir, file, 'context.py')
-                if os.path.exists(comp_context):
-                    comp_plugin = importlib.import_module('oeqa.%s.%s' % \
-                            (comp_name, 'context'))
-                    try:
-                        if not issubclass(comp_plugin._executor_class,
-                                OETestContextExecutor):
-                            raise TypeError("Component %s in %s, _executor_class "\
-                                "isn't derived from OETestContextExecutor."\
-                                % (comp_name, comp_context))
-
-                        components[comp_name] = comp_plugin._executor_class()
-                    except AttributeError:
-                        raise AttributeError("Component %s in %s don't have "\
-                                "_executor_class defined." % (comp_name, comp_context))
-
-    return components
+logger = scriptutils.logger_create('oe-test', stream=sys.stdout)
 
 def main():
     parser = argparse_oe.ArgumentParser(description="OpenEmbedded test tool",
@@ -73,7 +47,7 @@
     elif global_args.quiet:
         logger.setLevel(logging.ERROR)
 
-    components = _load_test_components(logger)
+    components = load_test_components(logger, 'oe-test')
 
     subparsers = parser.add_subparsers(dest="subparser_name", title='subcommands', metavar='<subcommand>')
     subparsers.add_subparser_group('components', 'Test components')
@@ -92,6 +66,8 @@
         ret = err.code
     except argparse_oe.ArgumentUsageError as ae:
         parser.error_subcommand(ae.message, ae.subcommand)
+    except OEQAPreRun as pr:
+        ret = 1
 
     return ret
 
diff --git a/import-layers/yocto-poky/scripts/recipetool b/import-layers/yocto-poky/scripts/recipetool
index 3765ec7..3a3c9b7 100755
--- a/import-layers/yocto-poky/scripts/recipetool
+++ b/import-layers/yocto-poky/scripts/recipetool
@@ -36,8 +36,8 @@
     import bb.tinfoil
     import logging
     tinfoil = bb.tinfoil.Tinfoil(tracking=True)
-    tinfoil.prepare(not parserecipes)
     tinfoil.logger.setLevel(logger.getEffectiveLevel())
+    tinfoil.prepare(not parserecipes)
     return tinfoil
 
 def main():
diff --git a/import-layers/yocto-poky/scripts/runqemu b/import-layers/yocto-poky/scripts/runqemu
index 9b6d330..0ed1eec 100755
--- a/import-layers/yocto-poky/scripts/runqemu
+++ b/import-layers/yocto-poky/scripts/runqemu
@@ -28,14 +28,18 @@
 import glob
 import configparser
 
-class OEPathError(Exception):
+class RunQemuError(Exception):
+    """Custom exception to raise on known errors."""
+    pass
+
+class OEPathError(RunQemuError):
     """Custom Exception to give better guidance on missing binaries"""
     def __init__(self, message):
-        self.message = "In order for this script to dynamically infer paths\n \
+        super().__init__("In order for this script to dynamically infer paths\n \
 to kernels or filesystem images, you either need bitbake in your PATH\n \
 or to source oe-init-build-env before running this script.\n\n \
 Dynamic path inference can be avoided by passing a *.qemuboot.conf to\n \
-runqemu, i.e. `runqemu /path/to/my-image-name.qemuboot.conf`\n\n %s" % message
+runqemu, i.e. `runqemu /path/to/my-image-name.qemuboot.conf`\n\n %s" % message)
 
 
 def create_logger():
@@ -44,7 +48,7 @@
 
     # create console handler and set level to debug
     ch = logging.StreamHandler()
-    ch.setLevel(logging.INFO)
+    ch.setLevel(logging.DEBUG)
 
     # create formatter
     formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
@@ -81,6 +85,8 @@
   qemuparams=<xyz> - specify custom parameters to QEMU
   bootparams=<xyz> - specify custom kernel parameters during boot
   help, -h, --help: print this text
+  -d, --debug: Enable debug output
+  -q, --quiet: Hide most output except error messages
 
 Examples:
   runqemu
@@ -90,25 +96,25 @@
   runqemu qemux86-64 core-image-sato ext4
   runqemu qemux86-64 wic-image-minimal wic
   runqemu path/to/bzImage-qemux86.bin path/to/nfsrootdir/ serial
-  runqemu qemux86 iso/hddimg/vmdk/qcow2/vdi/ramfs/cpio.gz...
+  runqemu qemux86 iso/hddimg/wic.vmdk/wic.qcow2/wic.vdi/ramfs/cpio.gz...
   runqemu qemux86 qemuparams="-m 256"
   runqemu qemux86 bootparams="psplash=false"
-  runqemu path/to/<image>-<machine>.vmdk
   runqemu path/to/<image>-<machine>.wic
+  runqemu path/to/<image>-<machine>.wic.vmdk
 """)
 
 def check_tun():
     """Check /dev/net/tun"""
     dev_tun = '/dev/net/tun'
     if not os.path.exists(dev_tun):
-        raise Exception("TUN control device %s is unavailable; you may need to enable TUN (e.g. sudo modprobe tun)" % dev_tun)
+        raise RunQemuError("TUN control device %s is unavailable; you may need to enable TUN (e.g. sudo modprobe tun)" % dev_tun)
 
     if not os.access(dev_tun, os.W_OK):
-        raise Exception("TUN control device %s is not writable, please fix (e.g. sudo chmod 666 %s)" % (dev_tun, dev_tun))
+        raise RunQemuError("TUN control device %s is not writable, please fix (e.g. sudo chmod 666 %s)" % (dev_tun, dev_tun))
 
 def check_libgl(qemu_bin):
     cmd = 'ldd %s' % qemu_bin
-    logger.info('Running %s...' % cmd)
+    logger.debug('Running %s...' % cmd)
     need_gl = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE).stdout.read().decode('utf-8')
     if re.search('libGLU', need_gl):
         # We can't run without a libGL.so
@@ -137,7 +143,7 @@
             logger.error("You need libGL.so and libGLU.so to exist in your library path to run the QEMU emulator.")
             logger.error("Ubuntu package names are: libgl1-mesa-dev and libglu1-mesa-dev.")
             logger.error("Fedora package names are: mesa-libGL-devel mesa-libGLU-devel.")
-            raise Exception('%s requires libGLU, but not found' % qemu_bin)
+            raise RunQemuError('%s requires libGLU, but not found' % qemu_bin)
 
 def get_first_file(cmds):
     """Return first file found in wildcard cmds"""
@@ -212,8 +218,10 @@
         self.lock_descriptor = ''
         self.bitbake_e = ''
         self.snapshot = False
-        self.fstypes = ('ext2', 'ext3', 'ext4', 'jffs2', 'nfs', 'btrfs', 'cpio.gz', 'cpio', 'ramfs')
-        self.vmtypes = ('hddimg', 'hdddirect', 'wic', 'vmdk', 'qcow2', 'vdi', 'iso')
+        self.fstypes = ('ext2', 'ext3', 'ext4', 'jffs2', 'nfs', 'btrfs',
+                        'cpio.gz', 'cpio', 'ramfs', 'tar.bz2', 'tar.gz')
+        self.vmtypes = ('hddimg', 'hdddirect', 'wic', 'wic.vmdk',
+                        'wic.qcow2', 'wic.vdi', 'iso')
         self.network_device = "-device e1000,netdev=net0,mac=@MAC@"
         # Use different mac section for tap and slirp to avoid
         # conflicts, e.g., when one is running with tap, the other is
@@ -224,13 +232,17 @@
         self.mac_tap = "52:54:00:12:34:"
         self.mac_slirp = "52:54:00:12:35:"
 
-    def acquire_lock(self):
-        logger.info("Acquiring lockfile %s..." % self.lock)
+    def acquire_lock(self, error=True):
+        logger.debug("Acquiring lockfile %s..." % self.lock)
         try:
             self.lock_descriptor = open(self.lock, 'w')
             fcntl.flock(self.lock_descriptor, fcntl.LOCK_EX|fcntl.LOCK_NB)
         except Exception as e:
-            logger.info("Acquiring lockfile %s failed: %s" % (self.lock, e))
+            msg = "Acquiring lockfile %s failed: %s" % (self.lock, e)
+            if error:
+                logger.error(msg)
+            else:
+                logger.info(msg)
             if self.lock_descriptor:
                 self.lock_descriptor.close()
             return False
@@ -255,10 +267,10 @@
     def is_deploy_dir_image(self, p):
         if os.path.isdir(p):
             if not re.search('.qemuboot.conf$', '\n'.join(os.listdir(p)), re.M):
-                logger.info("Can't find required *.qemuboot.conf in %s" % p)
+                logger.debug("Can't find required *.qemuboot.conf in %s" % p)
                 return False
             if not any(map(lambda name: '-image-' in name, os.listdir(p))):
-                logger.info("Can't find *-image-* in %s" % p)
+                logger.debug("Can't find *-image-* in %s" % p)
                 return False
             return True
         else:
@@ -271,15 +283,17 @@
         if not self.fstype or self.fstype == fst:
             if fst == 'ramfs':
                 fst = 'cpio.gz'
+            if fst in ('tar.bz2', 'tar.gz'):
+                fst = 'nfs'
             self.fstype = fst
         else:
-            raise Exception("Conflicting: FSTYPE %s and %s" % (self.fstype, fst))
+            raise RunQemuError("Conflicting: FSTYPE %s and %s" % (self.fstype, fst))
 
     def set_machine_deploy_dir(self, machine, deploy_dir_image):
         """Set MACHINE and DEPLOY_DIR_IMAGE"""
-        logger.info('MACHINE: %s' % machine)
+        logger.debug('MACHINE: %s' % machine)
         self.set("MACHINE", machine)
-        logger.info('DEPLOY_DIR_IMAGE: %s' % deploy_dir_image)
+        logger.debug('DEPLOY_DIR_IMAGE: %s' % deploy_dir_image)
         self.set("DEPLOY_DIR_IMAGE", deploy_dir_image)
 
     def check_arg_nfs(self, p):
@@ -329,30 +343,30 @@
                 else:
                     logger.warn("%s doesn't exist" % qb)
             else:
-                raise Exception("Can't find FSTYPE from: %s" % p)
+                raise RunQemuError("Can't find FSTYPE from: %s" % p)
 
         elif os.path.isdir(p) or re.search(':', p) and re.search('/', p):
             if self.is_deploy_dir_image(p):
-                logger.info('DEPLOY_DIR_IMAGE: %s' % p)
+                logger.debug('DEPLOY_DIR_IMAGE: %s' % p)
                 self.set("DEPLOY_DIR_IMAGE", p)
             else:
-                logger.info("Assuming %s is an nfs rootfs" % p)
+                logger.debug("Assuming %s is an nfs rootfs" % p)
                 self.check_arg_nfs(p)
         elif os.path.basename(p).startswith('ovmf'):
             self.ovmf_bios.append(p)
         else:
-            raise Exception("Unknown path arg %s" % p)
+            raise RunQemuError("Unknown path arg %s" % p)
 
     def check_arg_machine(self, arg):
         """Check whether it is a machine"""
         if self.get('MACHINE') == arg:
             return
         elif self.get('MACHINE') and self.get('MACHINE') != arg:
-            raise Exception("Maybe conflicted MACHINE: %s vs %s" % (self.get('MACHINE'), arg))
+            raise RunQemuError("Maybe conflicted MACHINE: %s vs %s" % (self.get('MACHINE'), arg))
         elif re.search('/', arg):
-            raise Exception("Unknown arg: %s" % arg)
+            raise RunQemuError("Unknown arg: %s" % arg)
 
-        logger.info('Assuming MACHINE = %s' % arg)
+        logger.debug('Assuming MACHINE = %s' % arg)
 
         # if we're running under testimage, or similarly as a child
         # of an existing bitbake invocation, we can't invoke bitbake
@@ -381,7 +395,7 @@
         if s:
             deploy_dir_image = s.group(1)
         else:
-            raise Exception("bitbake -e %s" % self.bitbake_e)
+            raise RunQemuError("bitbake -e %s" % self.bitbake_e)
         if self.is_deploy_dir_image(deploy_dir_image):
             self.set_machine_deploy_dir(arg, deploy_dir_image)
         else:
@@ -389,6 +403,16 @@
             self.set("MACHINE", arg)
 
     def check_args(self):
+        for debug in ("-d", "--debug"):
+            if debug in sys.argv:
+                logger.setLevel(logging.DEBUG)
+                sys.argv.remove(debug)
+
+        for quiet in ("-q", "--quiet"):
+            if quiet in sys.argv:
+                logger.setLevel(logging.ERROR)
+                sys.argv.remove(quiet)
+
         unknown_arg = ""
         for arg in sys.argv[1:]:
             if arg in self.fstypes + self.vmtypes:
@@ -435,7 +459,9 @@
                 if (not unknown_arg) or unknown_arg == arg:
                     unknown_arg = arg
                 else:
-                    raise Exception("Can't handle two unknown args: %s %s" % (unknown_arg, arg))
+                    raise RunQemuError("Can't handle two unknown args: %s %s\n"
+                                       "Try 'runqemu help' for usage information"
+                                       % (unknown_arg, arg))
         # Check to make sure it is a valid machine
         if unknown_arg:
             if self.get('MACHINE') == unknown_arg:
@@ -448,7 +474,7 @@
 
             self.check_arg_machine(unknown_arg)
 
-        if not self.get('DEPLOY_DIR_IMAGE'):
+        if not (self.get('DEPLOY_DIR_IMAGE') or self.qbconfload):
             self.load_bitbake_env()
             s = re.search('^DEPLOY_DIR_IMAGE="(.*)"', self.bitbake_e, re.M)
             if s:
@@ -461,7 +487,7 @@
             return
 
         if not self.get('QB_CPU_KVM'):
-            raise Exception("QB_CPU_KVM is NULL, this board doesn't support kvm")
+            raise RunQemuError("QB_CPU_KVM is NULL, this board doesn't support kvm")
 
         self.qemu_opt_script += ' %s %s' % (self.get('QB_MACHINE'), self.get('QB_CPU_KVM'))
         yocto_kvm_wiki = "https://wiki.yoctoproject.org/wiki/How_to_enable_KVM_for_Poky_qemu"
@@ -473,12 +499,12 @@
         if not kvm_cap:
             logger.error("You are trying to enable KVM on a cpu without VT support.")
             logger.error("Remove kvm from the command-line, or refer:")
-            raise Exception(yocto_kvm_wiki)
+            raise RunQemuError(yocto_kvm_wiki)
 
         if not os.path.exists(dev_kvm):
             logger.error("Missing KVM device. Have you inserted kvm modules?")
             logger.error("For further help see:")
-            raise Exception(yocto_kvm_wiki)
+            raise RunQemuError(yocto_kvm_wiki)
 
         if os.access(dev_kvm, os.W_OK|os.R_OK):
             self.qemu_opt_script += ' -enable-kvm'
@@ -490,18 +516,18 @@
         else:
             logger.error("You have no read or write permission on /dev/kvm.")
             logger.error("Please change the ownership of this file as described at:")
-            raise Exception(yocto_kvm_wiki)
+            raise RunQemuError(yocto_kvm_wiki)
 
         if self.vhost_enabled:
             if not os.path.exists(dev_vhost):
                 logger.error("Missing virtio net device. Have you inserted vhost-net module?")
                 logger.error("For further help see:")
-                raise Exception(yocto_paravirt_kvm_wiki)
+                raise RunQemuError(yocto_paravirt_kvm_wiki)
 
         if not os.access(dev_kvm, os.W_OK|os.R_OK):
                 logger.error("You have no read or write permission on /dev/vhost-net.")
                 logger.error("Please change the ownership of this file as described at:")
-                raise Exception(yocto_kvm_wiki)
+                raise RunQemuError(yocto_kvm_wiki)
 
     def check_fstype(self):
         """Check and setup FSTYPE"""
@@ -510,7 +536,7 @@
             if fstype:
                 self.fstype = fstype
             else:
-                raise Exception("FSTYPE is NULL!")
+                raise RunQemuError("FSTYPE is NULL!")
 
     def check_rootfs(self):
         """Check and set rootfs"""
@@ -522,7 +548,7 @@
             if not self.rootfs:
                 self.rootfs = self.get('ROOTFS')
             elif self.get('ROOTFS') != self.rootfs:
-                raise Exception("Maybe conflicted ROOTFS: %s vs %s" % (self.get('ROOTFS'), self.rootfs))
+                raise RunQemuError("Maybe conflicted ROOTFS: %s vs %s" % (self.get('ROOTFS'), self.rootfs))
 
         if self.fstype == 'nfs':
             return
@@ -538,10 +564,10 @@
             cmds = (cmd_name, cmd_link)
             self.rootfs = get_first_file(cmds)
             if not self.rootfs:
-                raise Exception("Failed to find rootfs: %s or %s" % cmds)
+                raise RunQemuError("Failed to find rootfs: %s or %s" % cmds)
 
         if not os.path.exists(self.rootfs):
-            raise Exception("Can't find rootfs: %s" % self.rootfs)
+            raise RunQemuError("Can't find rootfs: %s" % self.rootfs)
 
     def check_ovmf(self):
         """Check and set full path for OVMF firmware and variable file(s)."""
@@ -555,7 +581,7 @@
                     self.ovmf_bios[index] = path
                     break
             else:
-                raise Exception("Can't find OVMF firmware: %s" % ovmf)
+                raise RunQemuError("Can't find OVMF firmware: %s" % ovmf)
 
     def check_kernel(self):
         """Check and set kernel, dtb"""
@@ -578,10 +604,10 @@
             cmds = (kernel_match_name, kernel_match_link, kernel_startswith)
             self.kernel = get_first_file(cmds)
             if not self.kernel:
-                raise Exception('KERNEL not found: %s, %s or %s' % cmds)
+                raise RunQemuError('KERNEL not found: %s, %s or %s' % cmds)
 
         if not os.path.exists(self.kernel):
-            raise Exception("KERNEL %s not found" % self.kernel)
+            raise RunQemuError("KERNEL %s not found" % self.kernel)
 
         dtb = self.get('QB_DTB')
         if dtb:
@@ -591,7 +617,7 @@
             cmds = (cmd_match, cmd_startswith, cmd_wild)
             self.dtb = get_first_file(cmds)
             if not os.path.exists(self.dtb):
-                raise Exception('DTB not found: %s, %s or %s' % cmds)
+                raise RunQemuError('DTB not found: %s, %s or %s' % cmds)
 
     def check_biosdir(self):
         """Check custombiosdir"""
@@ -607,11 +633,11 @@
                 break
 
         if biosdir:
-            logger.info("Assuming biosdir is: %s" % biosdir)
+            logger.debug("Assuming biosdir is: %s" % biosdir)
             self.qemu_opt_script += ' -L %s' % biosdir
         else:
             logger.error("Custom BIOS directory not found. Tried: %s, %s, and %s" % (self.custombiosdir, biosdir_native, biosdir_host))
-            raise Exception("Invalid custombiosdir: %s" % self.custombiosdir)
+            raise RunQemuError("Invalid custombiosdir: %s" % self.custombiosdir)
 
     def check_mem(self):
         s = re.search('-m +([0-9]+)', self.qemu_opt_script)
@@ -639,7 +665,7 @@
         # Check audio
         if self.audio_enabled:
             if not self.get('QB_AUDIO_DRV'):
-                raise Exception("QB_AUDIO_DRV is NULL, this board doesn't support audio")
+                raise RunQemuError("QB_AUDIO_DRV is NULL, this board doesn't support audio")
             if not self.get('QB_AUDIO_OPT'):
                 logger.warn('QB_AUDIO_OPT is NULL, you may need define it to make audio work')
             else:
@@ -662,7 +688,7 @@
             if self.get('DEPLOY_DIR_IMAGE'):
                 deploy_dir_image = self.get('DEPLOY_DIR_IMAGE')
             else:
-                logger.info("Can't find qemuboot conf file, DEPLOY_DIR_IMAGE is NULL!")
+                logger.warn("Can't find qemuboot conf file, DEPLOY_DIR_IMAGE is NULL!")
                 return
 
             if self.rootfs and not os.path.exists(self.rootfs):
@@ -674,8 +700,11 @@
                         self.rootfs, machine)
             else:
                 cmd = 'ls -t %s/*.qemuboot.conf' %  deploy_dir_image
-                logger.info('Running %s...' % cmd)
-                qbs = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE).stdout.read().decode('utf-8')
+                logger.debug('Running %s...' % cmd)
+                try:
+                    qbs = subprocess.check_output(cmd, shell=True).decode('utf-8')
+                except subprocess.CalledProcessError as err:
+                    raise RunQemuError(err)
                 if qbs:
                     for qb in qbs.split():
                         # Don't use initramfs when other choices unless fstype is ramfs
@@ -694,14 +723,18 @@
             return
 
         if not os.path.exists(self.qemuboot):
-            raise Exception("Failed to find %s (wrong image name or BSP does not support running under qemu?)." % self.qemuboot)
+            raise RunQemuError("Failed to find %s (wrong image name or BSP does not support running under qemu?)." % self.qemuboot)
 
-        logger.info('CONFFILE: %s' % self.qemuboot)
+        logger.debug('CONFFILE: %s' % self.qemuboot)
 
         cf = configparser.ConfigParser()
         cf.read(self.qemuboot)
         for k, v in cf.items('config_bsp'):
             k_upper = k.upper()
+            if v.startswith("../"):
+                v = os.path.abspath(os.path.dirname(self.qemuboot) + "/" + v)
+            elif v == ".":
+                v = os.path.dirname(self.qemuboot)
             self.set(k_upper, v)
 
     def validate_paths(self):
@@ -789,16 +822,12 @@
             all_instances.sort(key=int)
             self.nfs_instance = int(all_instances.pop()) + 1
 
-        mountd_rpcport = 21111 + self.nfs_instance
-        nfsd_rpcport = 11111 + self.nfs_instance
         nfsd_port = 3049 + 2 * self.nfs_instance
         mountd_port = 3048 + 2 * self.nfs_instance
 
         # Export vars for runqemu-export-rootfs
         export_dict = {
             'NFS_INSTANCE': self.nfs_instance,
-            'MOUNTD_RPCPORT': mountd_rpcport,
-            'NFSD_RPCPORT': nfsd_rpcport,
             'NFSD_PORT': nfsd_port,
             'MOUNTD_PORT': mountd_port,
         }
@@ -806,7 +835,7 @@
             # Use '%s' since they are integers
             os.putenv(k, '%s' % v)
 
-        self.unfs_opts="nfsvers=3,port=%s,mountprog=%s,nfsprog=%s,udp,mountport=%s" % (nfsd_port, mountd_rpcport, nfsd_rpcport, mountd_port)
+        self.unfs_opts="nfsvers=3,port=%s,udp,mountport=%s" % (nfsd_port, mountd_port)
 
         # Extract .tar.bz2 or .tar.bz if no nfs dir
         if not (self.rootfs and os.path.isdir(self.rootfs)):
@@ -824,12 +853,12 @@
                 elif os.path.exists(src2):
                     src = src2
                 if not src:
-                    raise Exception("No NFS_DIR is set, and can't find %s or %s to extract" % (src1, src2))
+                    raise RunQemuError("No NFS_DIR is set, and can't find %s or %s to extract" % (src1, src2))
                 logger.info('NFS_DIR not found, extracting %s to %s' % (src, dest))
                 cmd = 'runqemu-extract-sdk %s %s' % (src, dest)
                 logger.info('Running %s...' % cmd)
                 if subprocess.call(cmd, shell=True) != 0:
-                    raise Exception('Failed to run %s' % cmd)
+                    raise RunQemuError('Failed to run %s' % cmd)
                 self.clean_nfs_dir = True
                 self.rootfs = dest
 
@@ -837,7 +866,7 @@
         cmd = 'runqemu-export-rootfs start %s' % self.rootfs
         logger.info('Running %s...' % cmd)
         if subprocess.call(cmd, shell=True) != 0:
-            raise Exception('Failed to run %s' % cmd)
+            raise RunQemuError('Failed to run %s' % cmd)
 
         self.nfs_running = True
 
@@ -849,7 +878,7 @@
         self.kernel_cmdline_script += ' ip=dhcp'
         # Port mapping
         hostfwd = ",hostfwd=tcp::2222-:22,hostfwd=tcp::2323-:23"
-        qb_slirp_opt_default = "-netdev user,id=net0%s" % hostfwd
+        qb_slirp_opt_default = "-netdev user,id=net0%s,tftp=%s" % (hostfwd, self.get('DEPLOY_DIR_IMAGE'))
         qb_slirp_opt = self.get('QB_SLIRP_OPT') or qb_slirp_opt_default
         # Figure out the port
         ports = re.findall('hostfwd=[^-]*:([0-9]+)-[^,-]*', qb_slirp_opt)
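
As a quick illustration of the default slirp option assembled above, the sketch below rebuilds the same string with the new tftp= root pointing at DEPLOY_DIR_IMAGE and extracts the forwarded host ports; the deploy directory is a placeholder, not a value taken from this change.

    import re

    deploy_dir_image = "/build/tmp/deploy/images/qemux86"   # placeholder path
    hostfwd = ",hostfwd=tcp::2222-:22,hostfwd=tcp::2323-:23"
    qb_slirp_opt_default = "-netdev user,id=net0%s,tftp=%s" % (hostfwd, deploy_dir_image)
    ports = re.findall('hostfwd=[^-]*:([0-9]+)-[^,-]*', qb_slirp_opt_default)
    print(qb_slirp_opt_default)
    # -netdev user,id=net0,hostfwd=tcp::2222-:22,hostfwd=tcp::2323-:23,tftp=/build/tmp/deploy/images/qemux86
    print(ports)   # ['2222', '2323']
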
@@ -888,6 +917,9 @@
         lockdir = "/tmp/qemu-tap-locks"
 
         if not (self.qemuifup and self.qemuifdown and ip):
+            logger.error("runqemu-ifup: %s" % self.qemuifup)
+            logger.error("runqemu-ifdown: %s" % self.qemuifdown)
+            logger.error("ip: %s" % ip)
             raise OEPathError("runqemu-ifup, runqemu-ifdown or ip not found")
 
         if not os.path.exists(lockdir):
@@ -895,14 +927,15 @@
             # running at the same time.
             try:
                 os.mkdir(lockdir)
+                os.chmod(lockdir, 0o777)
             except FileExistsError:
                 pass
 
         cmd = '%s link' % ip
-        logger.info('Running %s...' % cmd)
+        logger.debug('Running %s...' % cmd)
         ip_link = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE).stdout.read().decode('utf-8')
         # Matches line like: 6: tap0: <foo>
-        possibles = re.findall('^[1-9]+: +(tap[0-9]+): <.*', ip_link, re.M)
+        possibles = re.findall('^[0-9]+: +(tap[0-9]+): <.*', ip_link, re.M)
         tap = ""
         for p in possibles:
             lockfile = os.path.join(lockdir, p)
@@ -910,7 +943,7 @@
                 logger.info('Found %s.skip, skipping %s' % (lockfile, p))
                 continue
             self.lock = lockfile + '.lock'
-            if self.acquire_lock():
+            if self.acquire_lock(error=False):
                 tap = p
                 logger.info("Using preconfigured tap device %s" % tap)
                 logger.info("If this is not intended, touch %s.skip to make runqemu skip %s." %(lockfile, tap))
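
The interface-index pattern above is widened from [1-9]+ to [0-9]+ so tap devices on interfaces whose index contains a zero (for example 10) are not silently skipped. A minimal sketch with invented 'ip link' output:

    import re

    ip_link = ("2: eth0: <BROADCAST,MULTICAST,UP> mtu 1500\n"
               "9: tap0: <BROADCAST,MULTICAST> mtu 1500\n"
               "10: tap1: <BROADCAST,MULTICAST> mtu 1500\n")

    old = re.findall('^[1-9]+: +(tap[0-9]+): <.*', ip_link, re.M)
    new = re.findall('^[0-9]+: +(tap[0-9]+): <.*', ip_link, re.M)
    print(old)   # ['tap0']          - tap1 is missed because its index contains a 0
    print(new)   # ['tap0', 'tap1']  - any interface index matches
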
@@ -920,7 +953,7 @@
             if os.path.exists(nosudo_flag):
                 logger.error("Error: There are no available tap devices to use for networking,")
                 logger.error("and I see %s exists, so I am not going to try creating" % nosudo_flag)
-                raise Exception("a new one with sudo.")
+                raise RunQemuError("a new one with sudo.")
 
             gid = os.getgid()
             uid = os.getuid()
@@ -931,7 +964,7 @@
             self.lock = lockfile + '.lock'
             self.acquire_lock()
             self.cleantap = True
-            logger.info('Created tap: %s' % tap)
+            logger.debug('Created tap: %s' % tap)
 
         if not tap:
             logger.error("Failed to setup tap device. Run runqemu-gen-tapdevs to manually create.")
@@ -960,8 +993,8 @@
     def setup_network(self):
         if self.get('QB_NET') == 'none':
             return
-        cmd = "stty -g"
-        self.saved_stty = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE).stdout.read().decode('utf-8')
+        if sys.stdin.isatty():
+            self.saved_stty = subprocess.check_output("stty -g", shell=True).decode('utf-8')
         self.network_device = self.get('QB_NETWORK_DEVICE') or self.network_device
         if self.slirp_enabled:
             self.setup_slirp()
@@ -971,6 +1004,8 @@
     def setup_rootfs(self):
         if self.get('QB_ROOTFS') == 'none':
             return
+        if 'wic.' in self.fstype:
+            self.fstype = self.fstype[4:]
         rootfs_format = self.fstype if self.fstype in ('vmdk', 'qcow2', 'vdi') else 'raw'
 
         qb_rootfs_opt = self.get('QB_ROOTFS_OPT')
@@ -986,7 +1021,7 @@
             vm_drive = ''
             if self.fstype in self.vmtypes:
                 if self.fstype == 'iso':
-                    vm_drive = '-cdrom %s' % self.rootfs
+                    vm_drive = '-drive file=%s,if=virtio,media=cdrom' % self.rootfs
                 elif self.get('QB_DRIVE_TYPE'):
                     drive_type = self.get('QB_DRIVE_TYPE')
                     if drive_type.startswith("/dev/sd"):
@@ -995,7 +1030,7 @@
                                        % (self.rootfs, rootfs_format)
                     elif drive_type.startswith("/dev/hd"):
                         logger.info('Using ide drive')
-                        vm_drive = "%s,format=%s" % (self.rootfs, rootfs_format)
+                        vm_drive = "-drive file=%s,format=%s" % (self.rootfs, rootfs_format)
                     else:
                         # virtio might have been selected explicitly (just use it), or
                         # is used as fallback (then warn about that).
@@ -1011,7 +1046,7 @@
 
         if self.fstype == 'nfs':
             self.rootfs_options = ''
-            k_root = '/dev/nfs nfsroot=%s:%s,%s' % (self.nfs_server, self.rootfs, self.unfs_opts)
+            k_root = '/dev/nfs nfsroot=%s:%s,%s' % (self.nfs_server, os.path.abspath(self.rootfs), self.unfs_opts)
             self.kernel_cmdline = 'root=%s rw highres=off' % k_root
 
         if self.fstype == 'none':
@@ -1062,7 +1097,7 @@
         if not qemu_system:
             qemu_system = self.guess_qb_system()
         if not qemu_system:
-            raise Exception("Failed to boot, QB_SYSTEM_NAME is NULL!")
+            raise RunQemuError("Failed to boot, QB_SYSTEM_NAME is NULL!")
 
         qemu_bin = '%s/%s' % (self.bindir_native, qemu_system)
 
@@ -1101,9 +1136,9 @@
             self.qemu_opt += " -snapshot"
 
         if self.serialstdio:
-            logger.info("Interrupt character is '^]'")
-            cmd = "stty intr ^]"
-            subprocess.call(cmd, shell=True)
+            if sys.stdin.isatty():
+                subprocess.check_call("stty intr ^]", shell=True)
+                logger.info("Interrupt character is '^]'")
 
             first_serial = ""
             if not re.search("-nographic", self.qemu_opt):
@@ -1142,15 +1177,16 @@
         else:
             kernel_opts = ""
         cmd = "%s %s" % (self.qemu_opt, kernel_opts)
-        logger.info('Running %s' % cmd)
-        if subprocess.call(cmd, shell=True) != 0:
-            raise Exception('Failed to run %s' % cmd)
+        logger.info('Running %s\n' % cmd)
+        process = subprocess.Popen(cmd, shell=True, stderr=subprocess.PIPE)
+        if process.wait():
+            logger.error("Failed to run qemu: %s", process.stderr.read().decode())
 
     def cleanup(self):
         if self.cleantap:
             cmd = 'sudo %s %s %s' % (self.qemuifdown, self.tap, self.bindir_native)
-            logger.info('Running %s' % cmd)
-            subprocess.call(cmd, shell=True)
+            logger.debug('Running %s' % cmd)
+            subprocess.check_call(cmd, shell=True)
         if self.lock_descriptor:
             logger.info("Releasing lockfile for tap device '%s'" % self.tap)
             self.release_lock()
@@ -1158,12 +1194,12 @@
         if self.nfs_running:
             logger.info("Shutting down the userspace NFS server...")
             cmd = "runqemu-export-rootfs stop %s" % self.rootfs
-            logger.info('Running %s' % cmd)
-            subprocess.call(cmd, shell=True)
+            logger.debug('Running %s' % cmd)
+            subprocess.check_call(cmd, shell=True)
 
         if self.saved_stty:
             cmd = "stty %s" % self.saved_stty
-            subprocess.call(cmd, shell=True)
+            subprocess.check_call(cmd, shell=True)
 
         if self.clean_nfs_dir:
             logger.info('Removing %s' % self.rootfs)
@@ -1193,6 +1229,10 @@
             self.bitbake_e = ''
             logger.warn("Couldn't run 'bitbake -e' to gather environment information:\n%s" % err.output.decode('utf-8'))
 
+    def validate_combos(self):
+        if (self.fstype in self.vmtypes) and self.kernel:
+            raise RunQemuError("%s doesn't need kernel %s!" % (self.fstype, self.kernel))
+
     @property
     def bindir_native(self):
         result = self.get('STAGING_BINDIR_NATIVE')
@@ -1210,42 +1250,37 @@
             if os.path.exists(result):
                 self.set('STAGING_BINDIR_NATIVE', result)
                 return result
-            raise Exception("Native sysroot directory %s doesn't exist" % result)
+            raise RunQemuError("Native sysroot directory %s doesn't exist" % result)
         else:
-            raise Exception("Can't find STAGING_BINDIR_NATIVE in '%s' output" % cmd)
+            raise RunQemuError("Can't find STAGING_BINDIR_NATIVE in '%s' output" % cmd)
 
 
 def main():
     if "help" in sys.argv or '-h' in sys.argv or '--help' in sys.argv:
         print_usage()
         return 0
-    config = BaseConfig()
     try:
+        config = BaseConfig()
         config.check_args()
-    except Exception as esc:
-        logger.error(esc)
-        logger.error("Try 'runqemu help' on how to use it")
-        return 1
-    config.read_qemuboot()
-    config.check_and_set()
-    config.print_config()
-    try:
+        config.read_qemuboot()
+        config.check_and_set()
+        # Check whether the combination of options is valid or not
+        config.validate_combos()
+        config.print_config()
         config.setup_network()
         config.setup_rootfs()
         config.setup_final()
         config.start_qemu()
-    finally:
-        config.cleanup()
-    return 0
-
-if __name__ == "__main__":
-    try:
-        ret = main()
-    except OEPathError as err:
-        ret = 1
-        logger.error(err.message)
-    except Exception as esc:
-        ret = 1
+    except RunQemuError as err:
+        logger.error(err)
+        return 1
+    except Exception as err:
         import traceback
         traceback.print_exc()
-    sys.exit(ret)
+        return 1
+    finally:
+        print("Cleanup")
+        config.cleanup()
+
+if __name__ == "__main__":
+    sys.exit(main())
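
read_qemuboot() above now resolves "../" and "." values found in a *.qemuboot.conf file relative to the directory holding that file. A minimal, self-contained sketch of the rule follows; the conf path and values are invented purely for illustration.

    import os

    qemuboot = "/build/tmp/deploy/images/qemux86/core-image-minimal-qemux86.qemuboot.conf"

    def resolve(value):
        # "../foo" is taken relative to the directory holding the conf file,
        # "." means that directory itself, anything else is left untouched.
        if value.startswith("../"):
            return os.path.abspath(os.path.dirname(qemuboot) + "/" + value)
        elif value == ".":
            return os.path.dirname(qemuboot)
        return value

    print(resolve("../bzImage"))   # /build/tmp/deploy/images/bzImage
    print(resolve("."))            # /build/tmp/deploy/images/qemux86
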
diff --git a/import-layers/yocto-poky/scripts/runqemu-export-rootfs b/import-layers/yocto-poky/scripts/runqemu-export-rootfs
index c7992d8..70cdcdb 100755
--- a/import-layers/yocto-poky/scripts/runqemu-export-rootfs
+++ b/import-layers/yocto-poky/scripts/runqemu-export-rootfs
@@ -77,10 +77,6 @@
 	exit 1	
 fi
 
-# rpc.mountd RPC port
-MOUNTD_RPCPORT=${MOUNTD_RPCPORT:=$[ 21111 + $NFS_INSTANCE ]}
-# rpc.nfsd RPC port
-NFSD_RPCPORT=${NFSD_RPCPORT:=$[ 11111 + $NFS_INSTANCE ]}
 # NFS server port number
 NFSD_PORT=${NFSD_PORT:=$[ 3049 + 2 * $NFS_INSTANCE ]}
 # mountd port number
@@ -88,7 +84,7 @@
 
 ## For debugging you would additionally add
 ## --debug all
-UNFSD_OPTS="-p -N -i $NFSPID -e $EXPORTS -x $NFSD_RPCPORT -n $NFSD_PORT -y $MOUNTD_RPCPORT -m $MOUNTD_PORT"
+UNFSD_OPTS="-p -N -i $NFSPID -e $EXPORTS -n $NFSD_PORT -m $MOUNTD_PORT"
 
 # See how we were called.
 case "$1" in
@@ -130,7 +126,7 @@
 	fi
 	echo " "
 	echo "On your target please remember to add the following options for NFS"
-	echo "nfsroot=IP_ADDRESS:$NFS_EXPORT_DIR,nfsvers=3,port=$NFSD_PORT,mountprog=$MOUNTD_RPCPORT,nfsprog=$NFSD_RPCPORT,udp,mountport=$MOUNTD_PORT"
+	echo "nfsroot=IP_ADDRESS:$NFS_EXPORT_DIR,nfsvers=3,port=$NFSD_PORT,udp,mountport=$MOUNTD_PORT"
 	;;
   stop)
 	if [ -f "$NFSPID" ]; then
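
With the separate mountd/nfsd RPC ports dropped here and in the runqemu change above, only the NFS and mountd ports derived from the instance number remain. A minimal sketch of the port math, using instance 0 purely as an example:

    nfs_instance = 0                       # example instance number
    nfsd_port = 3049 + 2 * nfs_instance
    mountd_port = 3048 + 2 * nfs_instance
    unfs_opts = "nfsvers=3,port=%s,udp,mountport=%s" % (nfsd_port, mountd_port)
    print(unfs_opts)   # nfsvers=3,port=3049,udp,mountport=3048
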
diff --git a/import-layers/yocto-poky/scripts/runqemu.README b/import-layers/yocto-poky/scripts/runqemu.README
index 5908d83..da9abd7 100644
--- a/import-layers/yocto-poky/scripts/runqemu.README
+++ b/import-layers/yocto-poky/scripts/runqemu.README
@@ -35,7 +35,7 @@
    run as non root. The runqemu-gen-tapdevs script can also be used by
    root to prepopulate the appropriate network devices.
  - You can access the host computer at 192.168.7.1 within the image.
- - Your qemu system will be accessible as 192.16.7.2.
+ - Your qemu system will be accessible as 192.168.7.2.
  - The script extracts the root filesystem specified under pseudo and sets up a userspace
    NFS server to share the image over by default meaning the filesystem can be accessed by
    both the host and guest systems.
diff --git a/import-layers/yocto-poky/scripts/sstate-diff-machines.sh b/import-layers/yocto-poky/scripts/sstate-diff-machines.sh
index 056aa0a..27c6a33 100755
--- a/import-layers/yocto-poky/scripts/sstate-diff-machines.sh
+++ b/import-layers/yocto-poky/scripts/sstate-diff-machines.sh
@@ -118,7 +118,7 @@
     cp -ra ${tmpdir}/stamps/* ${OUTPUT}/${M}
     find ${OUTPUT}/${M} -name \*sigdata\* | sed "s#${OUTPUT}/${M}/##g" | sort > ${OUTPUT}/${M}/list
     M_UNDERSCORE=`echo ${M} | sed 's/-/_/g'`
-    sed "s/${M_UNDERSCORE}/MACHINE/g; s/${M}/MACHINE/g" ${OUTPUT}/${M}/list | sort > ${OUTPUT}/${M}/list.M
+    sed "s/^${M_UNDERSCORE}-/MACHINE/g" ${OUTPUT}/${M}/list | sort > ${OUTPUT}/${M}/list.M
     find ${tmpdir}/stamps/ -name \*sigdata\* | xargs rm -f
   else
     printf "ERROR: no sigdata files were generated for MACHINE $M in ${tmpdir}/stamps\n";
diff --git a/import-layers/yocto-poky/scripts/sstate-sysroot-cruft.sh b/import-layers/yocto-poky/scripts/sstate-sysroot-cruft.sh
index b6166aa..d9917f5 100755
--- a/import-layers/yocto-poky/scripts/sstate-sysroot-cruft.sh
+++ b/import-layers/yocto-poky/scripts/sstate-sysroot-cruft.sh
@@ -105,7 +105,9 @@
 
 # generated by php
 WHITELIST="${WHITELIST} \
+  .*/usr/lib/php5/php/.channels \
   .*/usr/lib/php5/php/.channels/.* \
+  .*/usr/lib/php5/php/.registry \
   .*/usr/lib/php5/php/.registry/.* \
   .*/usr/lib/php5/php/.depdb \
   .*/usr/lib/php5/php/.depdblock \
diff --git a/import-layers/yocto-poky/scripts/test-reexec b/import-layers/yocto-poky/scripts/test-reexec
index 9eaa96e..30e792c 100755
--- a/import-layers/yocto-poky/scripts/test-reexec
+++ b/import-layers/yocto-poky/scripts/test-reexec
@@ -38,9 +38,9 @@
 function clearsstate {
 	target=$1
 
-	sstate_dir=`bitbake $target -e | grep "^SSTATE_DIR" | cut -d "\"" -f 2`
-	sstate_pkgspec=`bitbake $target -e | grep "^SSTATE_PKGSPEC" | cut -d "\"" -f 2`
-	sstasks=`bitbake $target -e | grep "^SSTATETASKS" | cut -d "\"" -f 2`
+	sstate_dir=`bitbake $target -e | grep "^SSTATE_DIR=" | cut -d "\"" -f 2`
+	sstate_pkgspec=`bitbake $target -e | grep "^SSTATE_PKGSPEC=" | cut -d "\"" -f 2`
+	sstasks=`bitbake $target -e | grep "^SSTATETASKS=" | cut -d "\"" -f 2`
 
 	for sstask in $sstasks
 	do
diff --git a/import-layers/yocto-poky/scripts/wic b/import-layers/yocto-poky/scripts/wic
index a5f2dbf..097084a 100755
--- a/import-layers/yocto-poky/scripts/wic
+++ b/import-layers/yocto-poky/scripts/wic
@@ -33,8 +33,10 @@
 # Python Standard Library modules
 import os
 import sys
-import optparse
+import argparse
 import logging
+
+from collections import namedtuple
 from distutils import spawn
 
 # External modules
@@ -54,7 +56,7 @@
     bitbake_main = None
 
 from wic import WicError
-from wic.utils.misc import get_bitbake_var, BB_VARS
+from wic.misc import get_bitbake_var, BB_VARS
 from wic import engine
 from wic import help as hlp
 
@@ -85,66 +87,30 @@
         rootfs_dir += '='.join([key, val])
     return rootfs_dir.strip()
 
-def callback_rootfs_dir(option, opt, value, parser):
-    """
-    Build a dict using --rootfs_dir connection=dir
-    """
-    if not type(parser.values.rootfs_dir) is dict:
-        parser.values.rootfs_dir = dict()
 
-    if '=' in value:
-        (key, rootfs_dir) = value.split('=')
-    else:
-        key = 'ROOTFS_DIR'
-        rootfs_dir = value
+class RootfsArgAction(argparse.Action):
+    def __init__(self, **kwargs):
+        super().__init__(**kwargs)
 
-    parser.values.rootfs_dir[key] = rootfs_dir
+    def __call__(self, parser, namespace, value, option_string=None):
+        if not "rootfs_dir" in vars(namespace) or \
+           not type(namespace.__dict__['rootfs_dir']) is dict:
+            namespace.__dict__['rootfs_dir'] = {}
 
-def wic_create_subcommand(args, usage_str):
+        if '=' in value:
+            (key, rootfs_dir) = value.split('=')
+        else:
+            key = 'ROOTFS_DIR'
+            rootfs_dir = value
+
+        namespace.__dict__['rootfs_dir'][key] = rootfs_dir
+
+
+def wic_create_subcommand(options, usage_str):
     """
     Command-line handling for image creation.  The real work is done
     by image.engine.wic_create()
     """
-    parser = optparse.OptionParser(usage=usage_str)
-
-    parser.add_option("-o", "--outdir", dest="outdir", default='.',
-                      help="name of directory to create image in")
-    parser.add_option("-e", "--image-name", dest="image_name",
-                      help="name of the image to use the artifacts from "
-                           "e.g. core-image-sato")
-    parser.add_option("-r", "--rootfs-dir", dest="rootfs_dir", type="string",
-                      action="callback", callback=callback_rootfs_dir,
-                      help="path to the /rootfs dir to use as the "
-                           ".wks rootfs source")
-    parser.add_option("-b", "--bootimg-dir", dest="bootimg_dir",
-                      help="path to the dir containing the boot artifacts "
-                           "(e.g. /EFI or /syslinux dirs) to use as the "
-                           ".wks bootimg source")
-    parser.add_option("-k", "--kernel-dir", dest="kernel_dir",
-                      help="path to the dir containing the kernel to use "
-                           "in the .wks bootimg")
-    parser.add_option("-n", "--native-sysroot", dest="native_sysroot",
-                      help="path to the native sysroot containing the tools "
-                           "to use to build the image")
-    parser.add_option("-s", "--skip-build-check", dest="build_check",
-                      action="store_false", default=True, help="skip the build check")
-    parser.add_option("-f", "--build-rootfs", action="store_true", help="build rootfs")
-    parser.add_option("-c", "--compress-with", choices=("gzip", "bzip2", "xz"),
-                      dest='compressor',
-                      help="compress image with specified compressor")
-    parser.add_option("-m", "--bmap", action="store_true", help="generate .bmap")
-    parser.add_option("-v", "--vars", dest='vars_dir',
-                      help="directory with <image>.env files that store "
-                           "bitbake variables")
-    parser.add_option("-D", "--debug", dest="debug", action="store_true",
-                      default=False, help="output debug information")
-
-    (options, args) = parser.parse_args(args)
-
-    if len(args) != 1:
-        parser.print_help()
-        raise WicError("Wrong number of arguments, exiting")
-
     if options.build_rootfs and not bitbake_main:
         raise WicError("Can't build rootfs as bitbake is not in the $PATH")
 
@@ -188,32 +154,34 @@
         rootfs_dir = get_bitbake_var("IMAGE_ROOTFS", options.image_name)
         kernel_dir = get_bitbake_var("DEPLOY_DIR_IMAGE", options.image_name)
         bootimg_dir = get_bitbake_var("STAGING_DATADIR", options.image_name)
-        native_sysroot = get_bitbake_var("RECIPE_SYSROOT_NATIVE",
-                                         options.image_name) #, cache=False)
+
+        native_sysroot = options.native_sysroot
+        if options.vars_dir and not native_sysroot:
+            native_sysroot = get_bitbake_var("RECIPE_SYSROOT_NATIVE", options.image_name)
     else:
         if options.build_rootfs:
             raise WicError("Image name is not specified, exiting. "
                            "(Use -e/--image-name to specify it)")
         native_sysroot = options.native_sysroot
 
-    if not native_sysroot or not os.path.isdir(native_sysroot):
+    if not options.vars_dir and (not native_sysroot or not os.path.isdir(native_sysroot)):
         logger.info("Building wic-tools...\n")
         if bitbake_main(BitBakeConfigParameters("bitbake wic-tools".split()),
                         cookerdata.CookerConfiguration()):
             raise WicError("bitbake wic-tools failed")
         native_sysroot = get_bitbake_var("RECIPE_SYSROOT_NATIVE", "wic-tools")
-        if not native_sysroot:
-            raise WicError("Unable to find the location of the native "
-                           "tools sysroot to use")
 
-    wks_file = args[0]
+    if not native_sysroot:
+        raise WicError("Unable to find the location of the native tools sysroot")
+
+    wks_file = options.wks_file
 
     if not wks_file.endswith(".wks"):
         wks_file = engine.find_canned_image(scripts_path, wks_file)
         if not wks_file:
             raise WicError("No image named %s found, exiting.  (Use 'wic list images' "
                            "to list available images, or specify a fully-qualified OE "
-                           "kickstart (.wks) filename)" % args[0])
+                           "kickstart (.wks) filename)" % options.wks_file)
 
     if not options.image_name:
         rootfs_dir = ''
@@ -264,59 +232,290 @@
     Command-line handling for listing available images.
     The real work is done by image.engine.wic_list()
     """
-    parser = optparse.OptionParser(usage=usage_str)
-    args = parser.parse_args(args)[1]
-
     if not engine.wic_list(args, scripts_path):
-        parser.print_help()
         raise WicError("Bad list arguments, exiting")
 
 
-def wic_help_topic_subcommand(args, usage_str):
+def wic_ls_subcommand(args, usage_str):
     """
-    Command-line handling for help-only 'subcommands'.  This is
-    essentially a dummy command that doesn nothing but allow users to
-    use the existing subcommand infrastructure to display help on a
-    particular topic not attached to any particular subcommand.
+    Command-line handling for listing the contents of images.
+    The real work is done by engine.wic_ls()
+    """
+    engine.wic_ls(args, args.native_sysroot)
+
+def wic_cp_subcommand(args, usage_str):
+    """
+    Command-line handling for copying files/dirs to images.
+    The real work is done by engine.wic_cp()
+    """
+    engine.wic_cp(args, args.native_sysroot)
+
+def wic_rm_subcommand(args, usage_str):
+    """
+    Command-line handling for removing files/dirs from images.
+    The real work is done by engine.wic_rm()
+    """
+    engine.wic_rm(args, args.native_sysroot)
+
+def wic_write_subcommand(args, usage_str):
+    """
+    Command-line handling for writing images.
+    The real work is done by engine.wic_write()
+    """
+    engine.wic_write(args, args.native_sysroot)
+
+def wic_help_subcommand(args, usage_str):
+    """
+    Command-line handling for help subcommand to keep the current
+    structure of the function definitions.
     """
     pass
 
 
+def wic_help_topic_subcommand(usage_str, help_str):
+    """
+    Display function for help 'sub-subcommands'.
+    """
+    print(help_str)
+    return
+
+
 wic_help_topic_usage = """
 """
 
-subcommands = {
-    "create":    [wic_create_subcommand,
-                  hlp.wic_create_usage,
-                  hlp.wic_create_help],
-    "list":      [wic_list_subcommand,
-                  hlp.wic_list_usage,
-                  hlp.wic_list_help],
+helptopics = {
     "plugins":   [wic_help_topic_subcommand,
                   wic_help_topic_usage,
-                  hlp.get_wic_plugins_help],
+                  hlp.wic_plugins_help],
     "overview":  [wic_help_topic_subcommand,
                   wic_help_topic_usage,
                   hlp.wic_overview_help],
     "kickstart": [wic_help_topic_subcommand,
                   wic_help_topic_usage,
                   hlp.wic_kickstart_help],
+    "create":    [wic_help_topic_subcommand,
+                  wic_help_topic_usage,
+                  hlp.wic_create_help],
+    "ls":        [wic_help_topic_subcommand,
+                  wic_help_topic_usage,
+                  hlp.wic_ls_help],
+    "cp":        [wic_help_topic_subcommand,
+                  wic_help_topic_usage,
+                  hlp.wic_cp_help],
+    "rm":        [wic_help_topic_subcommand,
+                  wic_help_topic_usage,
+                  hlp.wic_rm_help],
+    "write":     [wic_help_topic_subcommand,
+                  wic_help_topic_usage,
+                  hlp.wic_write_help],
+    "list":      [wic_help_topic_subcommand,
+                  wic_help_topic_usage,
+                  hlp.wic_list_help]
 }
 
 
+def wic_init_parser_create(subparser):
+    subparser.add_argument("wks_file")
+
+    subparser.add_argument("-o", "--outdir", dest="outdir", default='.',
+                      help="name of directory to create image in")
+    subparser.add_argument("-e", "--image-name", dest="image_name",
+                      help="name of the image to use the artifacts from "
+                           "e.g. core-image-sato")
+    subparser.add_argument("-r", "--rootfs-dir", action=RootfsArgAction,
+                      help="path to the /rootfs dir to use as the "
+                           ".wks rootfs source")
+    subparser.add_argument("-b", "--bootimg-dir", dest="bootimg_dir",
+                      help="path to the dir containing the boot artifacts "
+                           "(e.g. /EFI or /syslinux dirs) to use as the "
+                           ".wks bootimg source")
+    subparser.add_argument("-k", "--kernel-dir", dest="kernel_dir",
+                      help="path to the dir containing the kernel to use "
+                           "in the .wks bootimg")
+    subparser.add_argument("-n", "--native-sysroot", dest="native_sysroot",
+                      help="path to the native sysroot containing the tools "
+                           "to use to build the image")
+    subparser.add_argument("-s", "--skip-build-check", dest="build_check",
+                      action="store_false", default=True, help="skip the build check")
+    subparser.add_argument("-f", "--build-rootfs", action="store_true", help="build rootfs")
+    subparser.add_argument("-c", "--compress-with", choices=("gzip", "bzip2", "xz"),
+                      dest='compressor',
+                      help="compress image with specified compressor")
+    subparser.add_argument("-m", "--bmap", action="store_true", help="generate .bmap")
+    subparser.add_argument("--no-fstab-update", action="store_true",
+                      help="Do not change fstab file.")
+    subparser.add_argument("-v", "--vars", dest='vars_dir',
+                      help="directory with <image>.env files that store "
+                           "bitbake variables")
+    subparser.add_argument("-D", "--debug", dest="debug", action="store_true",
+                      default=False, help="output debug information")
+    return
+
+
+def wic_init_parser_list(subparser):
+    subparser.add_argument("list_type",
+                        help="can be 'images' or 'source-plugins' "
+                             "to obtain a list, or a valid .wks image "
+                             "file (see 'help_for' below)")
+    subparser.add_argument("help_for", default=[], nargs='*',
+                        help="If 'list_type' is a valid .wks image file "
+                             "this value can be 'help' to show the help information "
+                             "defined inside the .wks file")
+    return
+
+def imgtype(arg):
+    """
+    Custom type for ArgumentParser
+    Converts path spec to named tuple: (image, partition, path)
+    """
+    image = arg
+    part = path = None
+    if ':' in image:
+        image, part = image.split(':')
+        if '/' in part:
+            part, path = part.split('/', 1)
+        if not path:
+            path = '/'
+
+    if not os.path.isfile(image):
+        err = "%s is not a regular file or symlink" % image
+        raise argparse.ArgumentTypeError(err)
+
+    return namedtuple('ImgType', 'image part path')(image, part, path)
+
+def wic_init_parser_ls(subparser):
+    subparser.add_argument("path", type=imgtype,
+                        help="image spec: <image>[:<vfat partition>[<path>]]")
+    subparser.add_argument("-n", "--native-sysroot",
+                        help="path to the native sysroot containing the tools")
+
+def imgpathtype(arg):
+    img = imgtype(arg)
+    if img.part is None:
+        raise argparse.ArgumentTypeError("partition number is not specified")
+    return img
+
+def wic_init_parser_cp(subparser):
+    subparser.add_argument("src",
+                        help="source spec")
+    subparser.add_argument("dest", type=imgpathtype,
+                        help="image spec: <image>:<vfat partition>[<path>]")
+    subparser.add_argument("-n", "--native-sysroot",
+                        help="path to the native sysroot containing the tools")
+
+def wic_init_parser_rm(subparser):
+    subparser.add_argument("path", type=imgpathtype,
+                        help="path: <image>:<vfat partition><path>")
+    subparser.add_argument("-n", "--native-sysroot",
+                        help="path to the native sysroot containing the tools")
+
+def expandtype(rules):
+    """
+    Custom type for ArgumentParser
+    Converts expand rules to the dictionary {<partition>: size}
+    """
+    if rules == 'auto':
+        return {}
+    result = {}
+    for rule in rules.split('-'):
+        try:
+            part, size = rule.split(':')
+        except ValueError:
+            raise argparse.ArgumentTypeError("Incorrect rule format: %s" % rule)
+
+        if not part.isdigit():
+            raise argparse.ArgumentTypeError("Rule '%s': partition number must be integer" % rule)
+
+        # validate size
+        multiplier = 1
+        for suffix, mult in [('K', 1024), ('M', 1024 * 1024), ('G', 1024 * 1024 * 1024)]:
+            if size.upper().endswith(suffix):
+                multiplier = mult
+                size = size[:-1]
+                break
+        if not size.isdigit():
+            raise argparse.ArgumentTypeError("Rule '%s': size must be integer" % rule)
+
+        result[int(part)] = int(size) * multiplier
+
+    return result
+
+def wic_init_parser_write(subparser):
+    subparser.add_argument("image",
+                        help="path to the wic image")
+    subparser.add_argument("target",
+                        help="target file or device")
+    subparser.add_argument("-e", "--expand", type=expandtype,
+                        help="expand rules: auto or <partition>:<size>[,<partition>:<size>]")
+    subparser.add_argument("-n", "--native-sysroot",
+                        help="path to the native sysroot containing the tools")
+
+def wic_init_parser_help(subparser):
+    helpparsers = subparser.add_subparsers(dest='help_topic', help=hlp.wic_usage)
+    for helptopic in helptopics:
+        helpparsers.add_parser(helptopic, help=helptopics[helptopic][2])
+    return
+
+
+subcommands = {
+    "create":    [wic_create_subcommand,
+                  hlp.wic_create_usage,
+                  hlp.wic_create_help,
+                  wic_init_parser_create],
+    "list":      [wic_list_subcommand,
+                  hlp.wic_list_usage,
+                  hlp.wic_list_help,
+                  wic_init_parser_list],
+    "ls":        [wic_ls_subcommand,
+                  hlp.wic_ls_usage,
+                  hlp.wic_ls_help,
+                  wic_init_parser_ls],
+    "cp":        [wic_cp_subcommand,
+                  hlp.wic_cp_usage,
+                  hlp.wic_cp_help,
+                  wic_init_parser_cp],
+    "rm":        [wic_rm_subcommand,
+                  hlp.wic_rm_usage,
+                  hlp.wic_rm_help,
+                  wic_init_parser_rm],
+    "write":     [wic_write_subcommand,
+                  hlp.wic_write_usage,
+                  hlp.wic_write_help,
+                  wic_init_parser_write],
+    "help":      [wic_help_subcommand,
+                  wic_help_topic_usage,
+                  hlp.wic_help_help,
+                  wic_init_parser_help]
+}
+
+
+def init_parser(parser):
+    parser.add_argument("--version", action="version",
+        version="%(prog)s {version}".format(version=__version__))
+    subparsers = parser.add_subparsers(dest='command', help=hlp.wic_usage)
+    for subcmd in subcommands:
+        subparser = subparsers.add_parser(subcmd, help=subcommands[subcmd][2])
+        subcommands[subcmd][3](subparser)
+
+
 def main(argv):
-    parser = optparse.OptionParser(version="wic version %s" % __version__,
-                                   usage=hlp.wic_usage)
+    parser = argparse.ArgumentParser(
+        description="wic version %s" % __version__)
 
-    parser.disable_interspersed_args()
+    init_parser(parser)
 
-    args = parser.parse_args(argv)[1]
+    args = parser.parse_args(argv)
 
-    if len(args):
-        if args[0] == "help":
-            if len(args) == 1:
+    if "command" in vars(args):
+        if args.command == "help":
+            if args.help_topic is None:
                 parser.print_help()
-                raise WicError("help command requires parameter")
+                print()
+                print("Please specify a help topic")
+            elif args.help_topic in helptopics:
+                hlpt = helptopics[args.help_topic]
+                hlpt[0](hlpt[1], hlpt[2])
+            return 0
 
     return hlp.invoke_subcommand(args, parser, hlp.wic_help_usage, subcommands)
 
diff --git a/import-layers/yocto-poky/scripts/yocto-check-layer b/import-layers/yocto-poky/scripts/yocto-check-layer
new file mode 100755
index 0000000..5a4fd75
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/yocto-check-layer
@@ -0,0 +1,208 @@
+#!/usr/bin/env python3
+
+# Yocto Project layer checking tool
+#
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+import os
+import sys
+import argparse
+import logging
+import time
+import signal
+import shutil
+import collections
+
+scripts_path = os.path.dirname(os.path.realpath(__file__))
+lib_path = scripts_path + '/lib'
+sys.path = sys.path + [lib_path]
+import scriptutils
+import scriptpath
+scriptpath.add_oe_lib_path()
+scriptpath.add_bitbake_lib_path()
+
+from checklayer import LayerType, detect_layers, add_layer, add_layer_dependencies, get_signatures
+from oeqa.utils.commands import get_bb_vars
+
+PROGNAME = 'yocto-check-layer'
+CASES_PATHS = [os.path.join(os.path.abspath(os.path.dirname(__file__)),
+                'lib', 'checklayer', 'cases')]
+logger = scriptutils.logger_create(PROGNAME, stream=sys.stdout)
+
+def test_layer(td, layer, test_software_layer_signatures):
+    from checklayer.context import CheckLayerTestContext
+    logger.info("Starting to analyze: %s" % layer['name'])
+    logger.info("----------------------------------------------------------------------")
+
+    tc = CheckLayerTestContext(td=td, logger=logger, layer=layer, test_software_layer_signatures=test_software_layer_signatures)
+    tc.loadTests(CASES_PATHS)
+    return tc.runTests()
+
+def main():
+    parser = argparse.ArgumentParser(
+            description="Yocto Project layer checking tool",
+            add_help=False)
+    parser.add_argument('layers', metavar='LAYER_DIR', nargs='+',
+            help='Layer to check')
+    parser.add_argument('-o', '--output-log',
+            help='File to output log (optional)', action='store')
+    parser.add_argument('--dependency', nargs="+",
+            help='Layers to process for dependencies', action='store')
+    parser.add_argument('--machines', nargs="+",
+            help='List of MACHINEs to be used during testing', action='store')
+    parser.add_argument('--additional-layers', nargs="+",
+            help='List of additional layers to add during testing', action='store')
+    group = parser.add_mutually_exclusive_group()
+    group.add_argument('--with-software-layer-signature-check', action='store_true', dest='test_software_layer_signatures',
+                       default=True,
+                       help='check that software layers do not change signatures (on by default)')
+    group.add_argument('--without-software-layer-signature-check', action='store_false', dest='test_software_layer_signatures',
+                       help='disable signature checking for software layers')
+    parser.add_argument('-n', '--no-auto', help='Disable auto layer discovery',
+            action='store_true')
+    parser.add_argument('-d', '--debug', help='Enable debug output',
+            action='store_true')
+    parser.add_argument('-q', '--quiet', help='Print only errors',
+            action='store_true')
+
+    parser.add_argument('-h', '--help', action='help',
+            default=argparse.SUPPRESS,
+            help='show this help message and exit')
+
+    args = parser.parse_args()
+
+    if args.output_log:
+        fh = logging.FileHandler(args.output_log)
+        fh.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
+        logger.addHandler(fh)
+    if args.debug:
+        logger.setLevel(logging.DEBUG)
+    elif args.quiet:
+        logger.setLevel(logging.ERROR)
+
+    if not 'BUILDDIR' in os.environ:
+        logger.error("You must source the environment before running this script.")
+        logger.error("$ source oe-init-build-env")
+        return 1
+    builddir = os.environ['BUILDDIR']
+    bblayersconf = os.path.join(builddir, 'conf', 'bblayers.conf')
+
+    layers = detect_layers(args.layers, args.no_auto)
+    if not layers:
+        logger.error("Failed to detect layers")
+        return 1
+    if args.additional_layers:
+        additional_layers = detect_layers(args.additional_layers, args.no_auto)
+    else:
+        additional_layers = []
+    if args.dependency:
+        dep_layers = detect_layers(args.dependency, args.no_auto)
+        dep_layers = dep_layers + layers
+    else:
+        dep_layers = layers
+
+    logger.info("Detected layers:")
+    for layer in layers:
+        if layer['type'] == LayerType.ERROR_BSP_DISTRO:
+            logger.error("%s: Can't be DISTRO and BSP type at the same time."\
+                     " The conf/distro and conf/machine folders were found."\
+                     % layer['name'])
+            layers.remove(layer)
+        elif layer['type'] == LayerType.ERROR_NO_LAYER_CONF:
+            logger.error("%s: Doesn't have a conf/layer.conf file."\
+                     % layer['name'])
+            layers.remove(layer)
+        else:
+            logger.info("%s: %s, %s" % (layer['name'], layer['type'],
+                layer['path']))
+    if not layers:
+        return 1
+
+    shutil.copyfile(bblayersconf, bblayersconf + '.backup')
+    def cleanup_bblayers(signum, frame):
+        shutil.copyfile(bblayersconf + '.backup', bblayersconf)
+        os.unlink(bblayersconf + '.backup')
+    signal.signal(signal.SIGTERM, cleanup_bblayers)
+    signal.signal(signal.SIGINT, cleanup_bblayers)
+
+    td = {}
+    results = collections.OrderedDict()
+    results_status = collections.OrderedDict()
+
+    layers_tested = 0
+    for layer in layers:
+        if layer['type'] == LayerType.ERROR_NO_LAYER_CONF or \
+                layer['type'] == LayerType.ERROR_BSP_DISTRO:
+            continue
+
+        logger.info('')
+        logger.info("Setting up for %s(%s), %s" % (layer['name'], layer['type'],
+            layer['path']))
+
+        shutil.copyfile(bblayersconf + '.backup', bblayersconf)
+
+        missing_dependencies = not add_layer_dependencies(bblayersconf, layer, dep_layers, logger)
+        if not missing_dependencies:
+            for additional_layer in additional_layers:
+                if not add_layer_dependencies(bblayersconf, additional_layer, dep_layers, logger):
+                    missing_dependencies = True
+                    break
+        if missing_dependencies:
+            logger.info('Skipping %s due to missing dependencies.' % layer['name'])
+            results[layer['name']] = None
+            results_status[layer['name']] = 'SKIPPED (Missing dependencies)'
+            layers_tested = layers_tested + 1
+            continue
+
+        if any(map(lambda additional_layer: not add_layer(bblayersconf, additional_layer, dep_layers, logger),
+                   additional_layers)):
+            logger.info('Skipping %s due to missing additional layers.' % layer['name'])
+            results[layer['name']] = None
+            results_status[layer['name']] = 'SKIPPED (Missing additional layers)'
+            layers_tested = layers_tested + 1
+            continue
+
+        logger.info('Getting initial bitbake variables ...')
+        td['bbvars'] = get_bb_vars()
+        logger.info('Getting initial signatures ...')
+        td['builddir'] = builddir
+        td['sigs'], td['tunetasks'] = get_signatures(td['builddir'])
+        td['machines'] = args.machines
+
+        if not add_layer(bblayersconf, layer, dep_layers, logger):
+            logger.info('Skipping %s: unable to add the layer.' % layer['name'])
+            results[layer['name']] = None
+            results_status[layer['name']] = 'SKIPPED (Unknown)'
+            layers_tested = layers_tested + 1
+            continue
+
+        result = test_layer(td, layer, args.test_software_layer_signatures)
+        results[layer['name']] = result
+        results_status[layer['name']] = 'PASS' if results[layer['name']].wasSuccessful() else 'FAIL'
+        layers_tested = layers_tested + 1
+
+    ret = 0
+    if layers_tested:
+        logger.info('')
+        logger.info('Summary of results:')
+        logger.info('')
+        for layer_name in results_status:
+            logger.info('%s ... %s' % (layer_name, results_status[layer_name]))
+            if not results[layer_name] or not results[layer_name].wasSuccessful():
+                ret = 2 # ret = 1 used for initialization errors
+
+    cleanup_bblayers(None, None)
+
+    return ret
+
+if __name__ == '__main__':
+    try:
+        ret =  main()
+    except Exception:
+        ret = 1
+        import traceback
+        traceback.print_exc()
+    sys.exit(ret)
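
yocto-check-layer protects the build's bblayers.conf by backing it up and restoring it on SIGTERM/SIGINT as well as at the end of the run. A minimal, self-contained sketch of that pattern follows; the conf file is created in a temporary directory purely so the sketch runs anywhere, and the per-layer work is elided.

    import os
    import shutil
    import signal
    import tempfile

    # Stand-in for $BUILDDIR/conf/bblayers.conf so the sketch is runnable anywhere.
    workdir = tempfile.mkdtemp()
    bblayersconf = os.path.join(workdir, 'bblayers.conf')
    with open(bblayersconf, 'w') as conf:
        conf.write('BBLAYERS = "/work/poky/meta"\n')

    shutil.copyfile(bblayersconf, bblayersconf + '.backup')

    def cleanup_bblayers(signum, frame):
        # Restore the pristine configuration and drop the backup; guarded so the
        # signal handler and the final call cannot trip over each other.
        backup = bblayersconf + '.backup'
        if os.path.exists(backup):
            shutil.copyfile(backup, bblayersconf)
            os.unlink(backup)

    signal.signal(signal.SIGTERM, cleanup_bblayers)
    signal.signal(signal.SIGINT, cleanup_bblayers)

    # ... append each layer under test to bblayers.conf and run its checks here ...

    cleanup_bblayers(None, None)
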
diff --git a/import-layers/yocto-poky/scripts/yocto-check-layer-wrapper b/import-layers/yocto-poky/scripts/yocto-check-layer-wrapper
new file mode 100755
index 0000000..bbf6ee1
--- /dev/null
+++ b/import-layers/yocto-poky/scripts/yocto-check-layer-wrapper
@@ -0,0 +1,43 @@
+#!/usr/bin/env bash
+
+# Yocto Project layer check tool wrapper
+#
+# Creates a temporary build directory to run the yocto-check-layer
+# script to avoid a contaminated environment.
+#
+# Copyright (C) 2017 Intel Corporation
+# Released under the MIT license (see COPYING.MIT)
+
+if [ -z "$BUILDDIR" ]; then
+	echo "Please source oe-init-build-env before running this script."
+	exit 2
+fi
+
+# since we are using a temp directory, use the realpath for output
+# log option
+output_log=''
+while getopts o: name
+do
+	case $name in
+	o) output_log=$(realpath "$OPTARG")
+	esac
+done
+shift $(($OPTIND - 1))
+
+# generate a temp directory to run check layer script
+base_dir=$(realpath $BUILDDIR/../)
+cd $base_dir
+
+build_dir=$(mktemp -p $base_dir -d -t build-XXXX)
+
+source oe-init-build-env $build_dir
+if [[ $output_log != '' ]]; then
+	yocto-check-layer -o "$output_log" "$@"
+else
+	yocto-check-layer "$@"
+fi
+retcode=$?
+
+rm -rf $build_dir
+
+exit $retcode
diff --git a/import-layers/yocto-poky/scripts/yocto-compat-layer-wrapper b/import-layers/yocto-poky/scripts/yocto-compat-layer-wrapper
deleted file mode 100755
index db4b687..0000000
--- a/import-layers/yocto-poky/scripts/yocto-compat-layer-wrapper
+++ /dev/null
@@ -1,27 +0,0 @@
-#!/usr/bin/env bash
-
-# Yocto Project compatibility layer tool wrapper
-#
-# Creates a temprary build directory to run Yocto Project Compatible
-# script to avoid a contaminated environment.
-#
-# Copyright (C) 2017 Intel Corporation
-# Released under the MIT license (see COPYING.MIT)
-
-if [ -z "$BUILDDIR" ]; then
-	echo "Please source oe-init-build-env before run this script."
-	exit 2
-fi
-
-base_dir=$(realpath $BUILDDIR/../)
-cd $base_dir
-
-build_dir=$(mktemp -p $base_dir -d -t build-XXXX)
-
-source oe-init-build-env $build_dir
-yocto-compat-layer.py "$@"
-retcode=$?
-
-rm -rf $build_dir
-
-exit $retcode
diff --git a/import-layers/yocto-poky/scripts/yocto-compat-layer.py b/import-layers/yocto-poky/scripts/yocto-compat-layer.py
deleted file mode 100755
index ba64b4d..0000000
--- a/import-layers/yocto-poky/scripts/yocto-compat-layer.py
+++ /dev/null
@@ -1,205 +0,0 @@
-#!/usr/bin/env python3
-
-# Yocto Project compatibility layer tool
-#
-# Copyright (C) 2017 Intel Corporation
-# Released under the MIT license (see COPYING.MIT)
-
-import os
-import sys
-import argparse
-import logging
-import time
-import signal
-import shutil
-import collections
-
-scripts_path = os.path.dirname(os.path.realpath(__file__))
-lib_path = scripts_path + '/lib'
-sys.path = sys.path + [lib_path]
-import scriptutils
-import scriptpath
-scriptpath.add_oe_lib_path()
-scriptpath.add_bitbake_lib_path()
-
-from compatlayer import LayerType, detect_layers, add_layer, add_layer_dependencies, get_signatures
-from oeqa.utils.commands import get_bb_vars
-
-PROGNAME = 'yocto-compat-layer'
-CASES_PATHS = [os.path.join(os.path.abspath(os.path.dirname(__file__)),
-                'lib', 'compatlayer', 'cases')]
-logger = scriptutils.logger_create(PROGNAME, stream=sys.stdout)
-
-def test_layer_compatibility(td, layer, test_software_layer_signatures):
-    from compatlayer.context import CompatLayerTestContext
-    logger.info("Starting to analyze: %s" % layer['name'])
-    logger.info("----------------------------------------------------------------------")
-
-    tc = CompatLayerTestContext(td=td, logger=logger, layer=layer, test_software_layer_signatures=test_software_layer_signatures)
-    tc.loadTests(CASES_PATHS)
-    return tc.runTests()
-
-def main():
-    parser = argparse.ArgumentParser(
-            description="Yocto Project compatibility layer tool",
-            add_help=False)
-    parser.add_argument('layers', metavar='LAYER_DIR', nargs='+',
-            help='Layer to test compatibility with Yocto Project')
-    parser.add_argument('-o', '--output-log',
-            help='File to output log (optional)', action='store')
-    parser.add_argument('--dependency', nargs="+",
-            help='Layers to process for dependencies', action='store')
-    parser.add_argument('--machines', nargs="+",
-            help='List of MACHINEs to be used during testing', action='store')
-    parser.add_argument('--additional-layers', nargs="+",
-            help='List of additional layers to add during testing', action='store')
-    group = parser.add_mutually_exclusive_group()
-    group.add_argument('--with-software-layer-signature-check', action='store_true', dest='test_software_layer_signatures',
-                       default=True,
-                       help='check that software layers do not change signatures (on by default)')
-    group.add_argument('--without-software-layer-signature-check', action='store_false', dest='test_software_layer_signatures',
-                       help='disable signature checking for software layers')
-    parser.add_argument('-n', '--no-auto', help='Disable auto layer discovery',
-            action='store_true')
-    parser.add_argument('-d', '--debug', help='Enable debug output',
-            action='store_true')
-    parser.add_argument('-q', '--quiet', help='Print only errors',
-            action='store_true')
-
-    parser.add_argument('-h', '--help', action='help',
-            default=argparse.SUPPRESS,
-            help='show this help message and exit')
-
-    args = parser.parse_args()
-
-    if args.output_log:
-        fh = logging.FileHandler(args.output_log)
-        fh.setFormatter(logging.Formatter("%(levelname)s: %(message)s"))
-        logger.addHandler(fh)
-    if args.debug:
-        logger.setLevel(logging.DEBUG)
-    elif args.quiet:
-        logger.setLevel(logging.ERROR)
-
-    if not 'BUILDDIR' in os.environ:
-        logger.error("You must source the environment before run this script.")
-        logger.error("$ source oe-init-build-env")
-        return 1
-    builddir = os.environ['BUILDDIR']
-    bblayersconf = os.path.join(builddir, 'conf', 'bblayers.conf')
-
-    layers = detect_layers(args.layers, args.no_auto)
-    if not layers:
-        logger.error("Fail to detect layers")
-        return 1
-    if args.additional_layers:
-        additional_layers = detect_layers(args.additional_layers, args.no_auto)
-    else:
-        additional_layers = []
-    if args.dependency:
-        dep_layers = detect_layers(args.dependency, args.no_auto)
-        dep_layers = dep_layers + layers
-    else:
-        dep_layers = layers
-
-    logger.info("Detected layers:")
-    for layer in layers:
-        if layer['type'] == LayerType.ERROR_BSP_DISTRO:
-            logger.error("%s: Can't be DISTRO and BSP type at the same time."\
-                     " The conf/distro and conf/machine folders was found."\
-                     % layer['name'])
-            layers.remove(layer)
-        elif layer['type'] == LayerType.ERROR_NO_LAYER_CONF:
-            logger.error("%s: Don't have conf/layer.conf file."\
-                     % layer['name'])
-            layers.remove(layer)
-        else:
-            logger.info("%s: %s, %s" % (layer['name'], layer['type'],
-                layer['path']))
-    if not layers:
-        return 1
-
-    shutil.copyfile(bblayersconf, bblayersconf + '.backup')
-    def cleanup_bblayers(signum, frame):
-        shutil.copyfile(bblayersconf + '.backup', bblayersconf)
-        os.unlink(bblayersconf + '.backup')
-    signal.signal(signal.SIGTERM, cleanup_bblayers)
-    signal.signal(signal.SIGINT, cleanup_bblayers)
-
-    td = {}
-    results = collections.OrderedDict()
-    results_status = collections.OrderedDict()
-
-    layers_tested = 0
-    for layer in layers:
-        if layer['type'] == LayerType.ERROR_NO_LAYER_CONF or \
-                layer['type'] == LayerType.ERROR_BSP_DISTRO:
-            continue
-
-        logger.info('')
-        logger.info("Setting up for %s(%s), %s" % (layer['name'], layer['type'],
-            layer['path']))
-
-        shutil.copyfile(bblayersconf + '.backup', bblayersconf)
-
-        missing_dependencies = not add_layer_dependencies(bblayersconf, layer, dep_layers, logger)
-        if not missing_dependencies:
-            for additional_layer in additional_layers:
-                if not add_layer_dependencies(bblayersconf, additional_layer, dep_layers, logger):
-                    missing_dependencies = True
-                    break
-        if not add_layer_dependencies(bblayersconf, layer, dep_layers, logger) or \
-           any(map(lambda additional_layer: not add_layer_dependencies(bblayersconf, additional_layer, dep_layers, logger),
-                   additional_layers)):
-            logger.info('Skipping %s due to missing dependencies.' % layer['name'])
-            results[layer['name']] = None
-            results_status[layer['name']] = 'SKIPPED (Missing dependencies)'
-            layers_tested = layers_tested + 1
-            continue
-
-        if any(map(lambda additional_layer: not add_layer(bblayersconf, additional_layer, dep_layers, logger),
-                   additional_layers)):
-            logger.info('Skipping %s due to missing additional layers.' % layer['name'])
-            results[layer['name']] = None
-            results_status[layer['name']] = 'SKIPPED (Missing additional layers)'
-            layers_tested = layers_tested + 1
-            continue
-
-        logger.info('Getting initial bitbake variables ...')
-        td['bbvars'] = get_bb_vars()
-        logger.info('Getting initial signatures ...')
-        td['builddir'] = builddir
-        td['sigs'], td['tunetasks'] = get_signatures(td['builddir'])
-        td['machines'] = args.machines
-
-        if not add_layer(bblayersconf, layer, dep_layers, logger):
-            logger.info('Skipping %s ???.' % layer['name'])
-            results[layer['name']] = None
-            results_status[layer['name']] = 'SKIPPED (Unknown)'
-            layers_tested = layers_tested + 1
-            continue
-
-        result = test_layer_compatibility(td, layer, args.test_software_layer_signatures)
-        results[layer['name']] = result
-        results_status[layer['name']] = 'PASS' if results[layer['name']].wasSuccessful() else 'FAIL'
-        layers_tested = layers_tested + 1
-
-    if layers_tested:
-        logger.info('')
-        logger.info('Summary of results:')
-        logger.info('')
-        for layer_name in results_status:
-            logger.info('%s ... %s' % (layer_name, results_status[layer_name]))
-
-    cleanup_bblayers(None, None)
-
-    return 0
-
-if __name__ == '__main__':
-    try:
-        ret =  main()
-    except Exception:
-        ret = 1
-        import traceback
-        traceback.print_exc()
-    sys.exit(ret)